I am trying to detect as many circles as possible in my images using the following code:
maxRadius = int(1.2*(width/16)/2)
minRadius = int(0.9*(width/16)/2)
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
circles = cv.HoughCircles(image=gray,
                          method=cv.HOUGH_GRADIENT,
                          dp=1.2,
                          minDist=2*minRadius,
                          param1=70,
                          param2=0.9,
                          minRadius=minRadius,
                          maxRadius=maxRadius
                          )
Although it does work for some of the images, there are a few exceptions where it doesn't.
Below we can see that for two different images that represent the same kind of experiment, my algorithm yields very different results.
How can I fix this? Should I apply some sort of filter on the images first to enhance the contrast?
EDIT: added original image:
This solution may or may not work on other images, but it does work on the one you posted. You might want to search for that "sweet spot" for the adaptiveThreshold and HoughCircles parameters so that it works with other images as well.
import numpy as np
import cv2
import matplotlib.pyplot as plt
rgb = cv2.imread('/path/to/your/image/cells_0001.jpeg')
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
imh, imw = gray.shape
th = cv2.adaptiveThreshold(gray,255, cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY_INV,11,2)
maxRadius = int(1.2*(imw/16)/2)
minRadius = int(0.9*(imw/16)/2)
circles = cv2.HoughCircles(image=th,
                           method=cv2.HOUGH_GRADIENT,
                           dp=1.2,
                           minDist=2*minRadius,
                           param1=70,
                           param2=25,
                           minRadius=minRadius,
                           maxRadius=maxRadius
                           )
out_img = rgb.copy()
if circles is not None:  # HoughCircles returns None when nothing is found
    for (x, y, r) in circles[0]:
        # draw the circle in the output image (cast to int for cv2.circle)
        cv2.circle(out_img, (int(x), int(y)), int(r), (0, 255, 0), 1)
plt.imshow(out_img)
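To hunt for that sweet spot systematically, a quick sketch like the one below (my addition, reusing th, minRadius and maxRadius from above) prints how many circles each param2 value yields; a range where the count stays stable is usually a good starting point.
# Hypothetical parameter sweep: count detections for a few param2 values.
for p2 in range(15, 45, 5):
    found = cv2.HoughCircles(image=th, method=cv2.HOUGH_GRADIENT, dp=1.2,
                             minDist=2*minRadius, param1=70, param2=p2,
                             minRadius=minRadius, maxRadius=maxRadius)
    n = 0 if found is None else found.shape[1]
    print(f'param2={p2}: {n} circles')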
I have downloaded a number of images (about 1000) from a website, but each has a black-and-white ruler running along one or two edges, and some have catalogue number tickets. I need these elements removed, the ruler at the very least.
Example images of coins:
The images all have the ruler in slightly different places, so I can't just perform the same crop on them.
So I tried to remove the black and replace it with white using this code:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
im = Image.open('image-0.jpg')
im = im.convert('RGBA')
data = np.array(im) # "data" is a height x width x 4 numpy array
red, green, blue, alpha = data.T # Temporarily unpack the bands for readability
# Replace black with white
black_areas = (red < 150) & (blue < 150) & (green < 150)
data[..., :-1][black_areas.T] = (255, 255, 255) # Transpose back needed
im2 = Image.fromarray(data)
im2.show()
but it pretty much just removed half the coin as well:
I was having a read of some posts on OpenCV, but thought I'd first check whether there was a simpler way I'd missed.
So I have taken a look at your problem and found a solution for the two images you provided. I hope it works for your other images as well, but it is always hard to tell, as results can differ from image to image. This solution uses OpenCV for preprocessing and contour detection to get the 2nd and 3rd largest elements in your picture (the largest is the bounding box around the edges), which should be your coins. Then I create a box around those two items and add some padding before cropping to size.
So we start off with preprocessing:
import numpy as np
import cv2
img = cv2.imread(r'<PATH TO YOUR IMAGE>')
img = cv2.resize(img, None, fx=3, fy=3)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(imgray, (5, 5), 0)
ret, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
Still rather basic: we make the image bigger so it is easier to detect contours, then turn it into grayscale, blur it, and apply Otsu thresholding so that every grey value becomes either white or black. This gives us the following image:
We now do contour detection, compute the area enclosed by each contour, and sort the contours by area, biggest first. Then we drop the biggest one, as it is the box around the whole image, and take the 2nd and 3rd biggest. Finally, we get the x, y, w, h values we are interested in.
contours, hierarchy = cv2.findContours(
    thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
areas = []
for cnt in contours:
    area = cv2.contourArea(cnt)
    areas.append((area, cnt))
areas.sort(key=lambda x: x[0], reverse=True)
areas.pop(0)
x, y, w, h = cv2.boundingRect(areas[0][1])
x2, y2, w2, h2 = cv2.boundingRect(areas[1][1])
If we draw a rectangle around those contours:
Now we take those coordinates and create a box around both of them. This might need some minor adjusting, as I just quickly took the bigger width of the two instead of the corresponding one for the right coin, but since I added extra padding it should be fine in most cases. And finally we crop to size:
pad = 15
img = img[(min(y, y2) - pad) : (max(y, y2) + max(h, h2) + pad),
(min(x, x2) - pad) : (max(x, x2) + max(w, w2) + pad)]
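One caveat (my note, not part of the original answer): if a coin sits close to the border, min(...) - pad can go negative, and a negative slice index wraps around to the other end of the image. A clipped variant of the same crop, used in place of the slice above, avoids that:
# Clamp the padded box to the image bounds before slicing.
ih, iw = img.shape[:2]
top = max(0, min(y, y2) - pad)
bottom = min(ih, max(y, y2) + max(h, h2) + pad)
left = max(0, min(x, x2) - pad)
right = min(iw, max(x, x2) + max(w, w2) + pad)
img = img[top:bottom, left:right]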
I hope this helps you understand how you could achieve what you want. I tried it on both your images and it worked well for them. It might need some adjustments, and depending on how your other images look, the simple approach of taking the two biggest objects (apart from the image bounding box) might have to be turned into something more sophisticated that detects the circular shapes, or something along those lines. Alternatively, you could try to detect the rulers and crop from their position inwards, as sketched below. You will have to decide after you have tried this on more example images from your dataset.
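For that last alternative, here is a very rough sketch (entirely my assumption about how the rulers look, not tested on your data): measure how dark a thin strip along each border is, and crop away strips that appear to contain the black ruler markings. The strip width and the two thresholds would need tuning:
def crop_dark_edges(gray, strip=40, dark_frac=0.2, dark_val=80):
    # Drop a border strip when its share of near-black pixels suggests a ruler.
    h, w = gray.shape
    top = strip if (gray[:strip, :] < dark_val).mean() > dark_frac else 0
    bottom = h - strip if (gray[h-strip:, :] < dark_val).mean() > dark_frac else h
    left = strip if (gray[:, :strip] < dark_val).mean() > dark_frac else 0
    right = w - strip if (gray[:, w-strip:] < dark_val).mean() > dark_frac else w
    return gray[top:bottom, left:right]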
If you're looking for a robust solution, you should try something like Max Kaha's response, since it'll give you finer control over the result.
Since the rulers tend to be left with just a little bit of text after your "black to white" filter, a quick solution is to use erosion followed by a dilation to create a mask for your images, and then apply the mask to the original image.
Pillow offers that with the ImageFilter class. Here's your code with a few modifications that'll achieve that:
from PIL import Image, ImageFilter
import numpy as np
import matplotlib.pyplot as plt
WHITE = 255, 255, 255
input_image = Image.open('image.png')
input_image = input_image.convert('RGBA')
input_data = np.array(input_image) # "input_data" is a height x width x 4 numpy array
red, green, blue, alpha = input_data.T # Temporarily unpack the bands for readability
# Replace black with white
thresh = 30
black_areas = (red < thresh) & (blue < thresh) & (green < thresh)
input_data[..., :-1][black_areas.T] = WHITE # Transpose back needed
erosion_factor = 5
# dilation is bigger to avoid cropping the objects of interest
dilation_factor = 11
# the objects are dark on a light background, so MaxFilter acts as erosion
# of the dark regions and MinFilter as their dilation
erosion_filter = ImageFilter.MaxFilter(erosion_factor)
dilation_filter = ImageFilter.MinFilter(dilation_factor)
eroded = Image.fromarray(input_data).filter(erosion_filter)
dilated = eroded.filter(dilation_filter)
mask_threshold = 220
# the mask is black on regions to be hidden
mask = dilated.convert('L').point(lambda x: 255 if x < mask_threshold else 0)
# create base image
output_image = Image.new('RGBA', input_image.size, WHITE)
# paste only the desired regions
output_image.paste(input_image, mask=mask)
output_image.show()
You should also play around with the black-to-white threshold and the erosion/dilation factors to try to find the best fit for most of your images.
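To make that experimentation quicker, here is a hedged sketch (my rewrite of the pipeline above into a function, same logic) that turns the tunables into arguments:
def make_masked_image(input_image, thresh=30, erosion_factor=5,
                      dilation_factor=11, mask_threshold=220):
    # Same steps as above: black-to-white fill, erode/dilate, mask, paste.
    data = np.array(input_image.convert('RGBA'))
    red, green, blue, alpha = data.T
    black = (red < thresh) & (green < thresh) & (blue < thresh)
    data[..., :-1][black.T] = WHITE
    dilated = (Image.fromarray(data)
               .filter(ImageFilter.MaxFilter(erosion_factor))
               .filter(ImageFilter.MinFilter(dilation_factor)))
    mask = dilated.convert('L').point(lambda v: 255 if v < mask_threshold else 0)
    output = Image.new('RGBA', input_image.size, WHITE)
    output.paste(input_image, mask=mask)
    return output
Calling it with a few different factor combinations and comparing the outputs should narrow down a good fit quickly.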
Hi StackOverflow team,
I have an image and I want to remove many portions/parts from the image. I tried to use the below code taken from Cropping Concave polygon from Image using Opencv python
Assume I have this image. I also have multiple polygons (rectangular shapes or any other form of polygon) for the image, obtained via the labelme annotation tool. I want to remove those shapes from the image, or simply change their pixels to white.
In other words, the labelme tool gives you a dictionary file, where the dictionary has a key holding the points of each portion/polygon/shape.
The polygon points can then be easily extracted from the dictionary file. After the points are extracted, we can name them (e.g. a, b, c, ..., h), and each one is in this multidimensional format: "[[1526, 319], [1526, 376], [1593, 379], [1591, 324]]"
Here I thought of whitening each region, but whitening a multidimensional array seems to be unreliable.
import numpy as np
import cv2
import json
with open('ann1.json') as f:
    data = json.load(f)
#%%
a = data['shapes'][0]['points']; b = data['shapes'][1]['points']; c = data['shapes'][2]['points'];
#%%
img = cv2.imread("lena.jpg")
pts = np.array(a) # Points
#%%
## (1) Crop the bounding rect
rect = cv2.boundingRect(pts)
x,y,w,h = rect
croped = img[y:y+h, x:x+w].copy()
## (2) make mask
pts = pts - pts.min(axis=0)
mask = np.zeros(croped.shape[:2], np.uint8)
cv2.drawContours(mask, [pts], -1, (255, 255, 255), -1, cv2.LINE_AA)
## (3) do bit-op
dst = cv2.bitwise_and(croped, croped, mask=mask)
## (4) add the white background
bg = np.ones_like(croped, np.uint8)*255
cv2.bitwise_not(bg,bg, mask=mask)
dst2 = bg+ dst
#cv2.imwrite("croped.png", croped)
#cv2.imwrite("mask.png", mask)
#cv2.imwrite("dst.png", dst)
cv2.imwrite("dst2.png", dst2)
Using Lena, I get this output.
But I need to go further and whiten other points/polygons, for example, the eyes.
As you can see, my code can only use the points of one polygon. I tried appending the points of two other polygons, in my case the two eyes, and got this.
By appending, I mean I added the multidimensional points (e.g. pts = np.array(a+b+c)).
In short: given an image, is there a short way to remove multiple polygons from it (while keeping the dimensions of the image) using OpenCV and Python?
JSON file:
https://drive.google.com/file/d/1UyOYUVMHpu2vBBEdR99bwrRX5xIfdOCa/view?usp=sharing
You'll need to use a loop to go through all the points in the JSON file. I've edited your code to reflect this.
import cv2
import json
import matplotlib.pyplot as plt
import numpy as np
img_path =r"/path/to/lena.png"
json_path = r"/path/to/lena.json"
with open(json_path) as f:
    data = json.load(f)

img = cv2.imread(img_path)

for idx in np.arange(len(data['shapes'])):
    if idx == 0: #can remove this
        continue #can remove this
    a = data['shapes'][idx]['points']
    pts = np.array(a, dtype=np.int32)  # Points; boundingRect needs integers
    ## (1) Crop the bounding rect
    rect = cv2.boundingRect(pts)
    print(rect)
    x, y, w, h = rect
    img[y:y+h, x:x+w] = (255, 255, 255)

plt.imshow(img)
plt.show()
Output:
I skipped the first shape, since it didn't visualize the results nicely. I took your lead and used rectangles instead of polygons. If you need polygons, you'll need to use something like cv2.drawContours(), cv2.polylines() or cv2.fillPoly(), as recommended in the SO answer you linked, to achieve it.
I would like to share the solution I ended up with, which is a slightly modified version of @Shawn Mathew's answer.
Input image:
Code:
import json

import cv2
import matplotlib.pyplot as plt
import numpy as np

with open('lena.json') as f:
    json_file = json.load(f)

img = cv2.imread("folder/lena.jpg")

for polygon in np.arange(len(json_file['shapes'])):
    # fillPoly expects integer point coordinates
    pts = np.array(json_file['shapes'][polygon]['points'], dtype=np.int32)
    # If your polygons are rectangular, you can fill the areas you want
    # removed with white by uncommenting the below two lines
    # x, y, w, h = cv2.boundingRect(pts)
    # cv2.rectangle(img, (x, y), (x+w, y+h), (255, 255, 255), -1)
    # If your polygons are shapes other than rectangles, you can just use
    cv2.fillPoly(img, pts=[pts], color=(255, 255, 255))

plt.imshow(img)
plt.show()
The colors look different because Matplotlib expects RGB channel order while OpenCV stores images as BGR. If you want to preserve the colors, save the image with cv2.imwrite, or convert it before displaying.
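Both options in one short sketch (the output filename is just an example):
# Display with correct colors: convert BGR (OpenCV) to RGB (Matplotlib)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
# Or save directly; cv2.imwrite expects the BGR array it came from
cv2.imwrite('lena_whitened.png', img)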
I have an image that I want to process. I'm using OpenCV and skimage. My goal is to find the distribution of the red dots around the barycenter of all the dots. I proceed as follows: first I select the color, then I binarize the image that I obtain. Eventually, I would count the red pixels that lie on rings of a certain width around the barycenter, to get an average distribution as a function of radius, assuming cylindrical symmetry.
My issue is that I have no idea how to find the position of the barycenter.
I would also like to know if there is a short way to count the red pixels in the rings.
Here is my code:
import cv2
import matplotlib.pyplot as plt
from skimage import io, filters, measure, color, external
First, I load the image:
sph = cv2.imread('image_sper.jpg')
sph = cv2.cvtColor(sph, cv2.COLOR_BGR2RGB)
plt.imshow(sph)
plt.show()
I want to select the red color. Following https://realpython.com/python-opencv-color-spaces/, I convert the image to HSV and use a mask.
hsv_sph = cv2.cvtColor(sph, cv2.COLOR_RGB2HSV)
light_red = (1, 100, 100)
dark_red = (18, 255, 255)
mask = cv2.inRange(hsv_sph, light_red, dark_red)
result = cv2.bitwise_and(sph, sph, mask=mask)
And here is the result :
plt.imshow(result)
plt.show()
Now I'm binarizing the image, since it'll be easier to process it afterwards.
red_image = result[:,:,1]
red_th = filters.threshold_otsu(red_image)
red_mask = red_image > red_th
io.imshow(red_mask)
And here we are :
What I would like some help with now is finding the barycenter of the white pixels.
Thx
Edit: The binarization gives the image boolean False/True pixel values. I don't know how to transform them into 0/1 pixels. If False were 0 and True 1, code to find the barycenter would be:
np.shape(red_mask)   # (321, 316)
bari = 0
barj = 0
N = 0
for i in range(321):
    for j in range(316):
        bari = bari + red_mask[i, j] * i
        barj = barj + red_mask[i, j] * j
        N = N + red_mask[i, j]
bari = bari / N
barj = barj / N
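As a side note (my addition): NumPy booleans already behave as 0 and 1 in arithmetic, so the loop above works as written, and the whole computation collapses to a vectorized version:
# Barycenter of all True pixels: mean of their (row, col) coordinates.
coords = np.argwhere(red_mask)
bari, barj = coords.mean(axis=0)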
Another question that should probably have been asked at http://answers.opencv.org/questions/ instead. But, let's go!
The process that I have implemented uses mostly structural analysis (https://docs.opencv.org/3.3.1/d3/dc0/group__imgproc__shape.html#ga17ed9f5d79ae97bd4c7cf18403e1689a)
First I load your image and threshold it:
import cv2
import matplotlib.pyplot as plt
import numpy as np
from skimage import io, filters, measure, color, external
sph = cv2.imread('points.png')
ret,thresh = cv2.threshold(sph,200,255,cv2.THRESH_BINARY)
Then I apply a morphological opening to reduce noise and convert the result to grayscale:
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
opening = cv2.cvtColor(opening, cv2.COLOR_BGR2GRAY);
opening = cv2.convertScaleAbs(opening)
Then used "cv::findContours (InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point())" to find all blobs.
After that, I just calculate the center of each region and compute a weighted average based on the contour areas. This way, I got the centroid of the points (X: 143.4202820443726, Y: 154.56471750651224).
# Note: OpenCV 3.x returns (image, contours, hierarchy); OpenCV 4.x returns
# only (contours, hierarchy), so drop im2 there.
im2, contours, hierarchy = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

areas = []
centersX = []
centersY = []
for cnt in contours:
    areas.append(cv2.contourArea(cnt))
    M = cv2.moments(cnt)
    centersX.append(int(M["m10"] / M["m00"]))
    centersY.append(int(M["m01"] / M["m00"]))

full_areas = np.sum(areas)

acc_X = 0
acc_Y = 0
for i in range(len(areas)):
    acc_X += centersX[i] * (areas[i] / full_areas)
    acc_Y += centersY[i] * (areas[i] / full_areas)
print(acc_X, acc_Y)

cv2.circle(sph, (int(acc_X), int(acc_Y)), 5, (255, 0, 0), -1)
plt.imshow(sph)
plt.show()
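The question also asked for a short way to count the red pixels in rings around the barycenter. Building on the variables above (opening, acc_X, acc_Y), a hedged sketch is to histogram the distance of every foreground pixel from the centroid; the ring width of 5 px is an assumption to tune:
# Count foreground pixels per concentric ring around the centroid.
ys, xs = np.nonzero(opening)
dist = np.hypot(xs - acc_X, ys - acc_Y)
ring_width = 5  # assumed ring width in pixels
counts, edges = np.histogram(dist, bins=np.arange(0, dist.max() + ring_width, ring_width))
# divide counts by the ring areas (~ 2*pi*r*ring_width) for a radial density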
In the original picture, I would like to detect circular regions (the glands). I managed to find the outlines of the regions, but because of the many smaller objects (nuclei), I cannot go any further.
My original idea was to remove the small objects using the cv2.connectedComponentsWithStats function. But unfortunately, as shown in the picture, the gland regions also contain small objects, and they are not connected properly. The function also throws out the small segments that outline the glands, leaving parts out of the contours.
Can someone help me to find a solution to this problem?
Thank you very much in advance
Original picture
The approximate contour of the glands (with a lot of small objects in it)
After cv2.connectedComponentsWithStats
OpenCV
I think you can solve your task by using the Hough transform. Something like this could work for you (you have to adjust the parameters according to your needs):
import sys
import cv2 as cv
import numpy as np
def main(argv):
    filename = argv[0]
    src = cv.imread(filename, cv.IMREAD_COLOR)
    if src is None:
        print('Error opening image!')
        print('Usage: hough_circle.py [image_name]\n')
        return -1

    gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
    gray = cv.medianBlur(gray, 5)

    rows = gray.shape[0]
    circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 32,
                              param1=100, param2=30,
                              minRadius=20, maxRadius=200)

    if circles is not None:
        circles = np.uint16(np.around(circles))
        for i in circles[0, :]:
            center = (i[0], i[1])
            # circle center
            cv.circle(src, center, 1, (0, 100, 100), 3)
            # circle outline
            radius = i[2]
            cv.circle(src, center, radius, (255, 0, 255), 2)

    cv.imshow("detected circles", src)
    cv.waitKey(0)
    return 0


if __name__ == "__main__":
    main(sys.argv[1:])
Some additional preprocessing might be required to get rid of the noise; e.g. Morphological Transformations, and performing edge detection right before the transform, might be helpful as well, as in the sketch below.
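As a concrete, hedged example of such preprocessing (kernel size and Canny thresholds are guesses to tune on your images), one could suppress the small nuclei with a morphological closing and hand an edge map to the circle detection instead of the raw grayscale image:
# Close the small dark nuclei so they produce fewer spurious edges,
# then compute an edge map to feed into cv.HoughCircles.
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (9, 9))
closed = cv.morphologyEx(gray, cv.MORPH_CLOSE, kernel)
edges = cv.Canny(closed, 50, 150)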
Neural Networks
Another option would be to use a neural network for image segmentation. A quite successful one is Mask RCNN. There is already a working python implementation on GitHub: Mask RCNN - Nucleus.
I am new to OpenCV and have a few questions. I need to detect a bottle or a can based on its shape. For this I am using a Raspberry Pi board and a Pi camera. The background is always black and does not change. I have tried many possible solutions to this problem but could not get satisfactory results. The things I have tried include edge detection, morphological transformations, matchShapes(), and matchTemplate(). Please let me know if I can do this task efficiently and with maximum accuracy.
A sample image:
I came up with an approach that may help! If you know more about the can, e.g. its width-to-height ratio, the approach can be made more robust by adjusting the rectangle size!
Approach
Convert the image to HSV color space. Increase V by a factor of 2 to make things more visible.
Find Sobel derivatives in the x and y directions. Compute the magnitude with equal weight for both directions.
Threshold the image using Otsu's method.
Apply closing to the image.
Apply the Canny edge detector.
Find lines with the Hough line transform.
Find the bounding rectangle of the line image.
Superimpose it onto the image. (Finally done :P)
Code
import cv2
import matplotlib.pyplot as plt
import numpy as np

image = cv2.imread('image3.jpg', cv2.IMREAD_COLOR)
if image is None:
    print('Cannot read/find the image.')
    exit(-1)
original = np.copy(image)

hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
H, S, V = hsv_image[:,:,0], hsv_image[:,:,1], hsv_image[:,:,2]
# note: plain NumPy multiplication wraps around on uint8;
# cv2.multiply(V, 2) would saturate at 255 instead
V = V * 2
hsv_image = cv2.merge([H, S, V])
image = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2RGB)
image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# plt.figure(), plt.imshow(image)

Dx = cv2.Sobel(image, cv2.CV_8UC1, 1, 0)
Dy = cv2.Sobel(image, cv2.CV_8UC1, 0, 1)
M = cv2.addWeighted(Dx, 1, Dy, 1, 0)
# plt.subplot(1,3,1), plt.imshow(Dx, 'gray'), plt.title('Dx')
# plt.subplot(1,3,2), plt.imshow(Dy, 'gray'), plt.title('Dy')
# plt.subplot(1,3,3), plt.imshow(M, 'gray'), plt.title('Magnitude')

ret, binary = cv2.threshold(M, 10, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# plt.figure(), plt.imshow(binary, 'gray')

binary = binary.astype(np.uint8)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE,
                          cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (20, 20)))
edges = cv2.Canny(binary, 50, 100)
# plt.figure(), plt.imshow(edges, 'gray')

lines = cv2.HoughLinesP(edges, 1, np.pi/180, 50, minLineLength=20, maxLineGap=10)
output = np.zeros_like(M, dtype=np.uint8)
for line in lines:
    x1, y1, x2, y2 = line[0]  # OpenCV 3+ returns lines with shape (N, 1, 4)
    cv2.line(output, (x1, y1), (x2, y2), (100, 200, 50), thickness=2)
# plt.figure(), plt.imshow(output, 'gray')

# np.where yields (row, col) pairs, so the rect below is in (y, x, h, w) order
points = np.array([np.transpose(np.where(output != 0))], dtype=np.float32)
rect = cv2.boundingRect(points)
cv2.rectangle(original, (rect[1], rect[0]),
              (rect[1] + rect[3], rect[0] + rect[2]), (255, 255, 255), thickness=2)

original = cv2.cvtColor(original, cv2.COLOR_BGR2RGB)
plt.figure(), plt.imshow(original, 'gray')
plt.show()
NOTE: you can uncomment the lines that show the result of each step! I commented them out for the sake of readability.
Result
NOTE: If you know the aspect ratio of your can, you can fit the rectangle better, as sketched below!
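As an illustration of that note, a hypothetical post-processing step (not part of the code above; the function and known_ratio are my inventions) could force the detected rectangle to a known width-to-height ratio while keeping the detected height and horizontal center:
def fix_aspect(rect, known_ratio):
    # rect is (y, x, h, w), matching the (row, col) point order used above;
    # known_ratio is the known width/height of the can.
    y, x, h, w = rect
    cx = x + w / 2.0         # keep the horizontal center fixed
    new_w = h * known_ratio  # trust the detected height, recompute the width
    return y, int(cx - new_w / 2), h, int(new_w)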
I hope that will help. Good Luck :)