Detect multiple circles in an image - python

I am trying to count the water pipes in this picture. For this, I am trying to use OpenCV and Python-based detection. The results I am getting are a little confusing to me because the spread of circles is way too large and inaccurate.
The code
import numpy as np
import argparse
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#detect circles in the image
#circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, param1=40,minRadius=10,maxRadius=35)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 8.5,70,minRadius=0,maxRadius=70)
#print(len(circles[0][0]))
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # count = count+1
    # print(count)
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
# show the output image
# cv2.imshow("output", np.hstack([output]))
cv2.imwrite('output.jpg', np.hstack([output]), [cv2.IMWRITE_JPEG_QUALITY, 70])
cv2.waitKey(0)
After I run this, I do see a lot of circles detected; however, the results are completely haywire.
My question is: how do I improve this detection? Which parameters of the HoughCircles method specifically need to be tuned to achieve greater accuracy? Or should I take the approach of annotating hundreds of similar images with bounding boxes and then training a full-blown CNN such as YOLO to perform the detection?
Taking the approach mentioned in answer number 2 of Measuring the diameter pictures of holes in metal parts, photographed with telecentric, monochrome camera with opencv, I got this output. It comes close to producing a count but misses a lot of the actual pipes during the brightness transformation of the image.

The most important parameters for your HoughCircles call are:
param1: because you are using cv2.HOUGH_GRADIENT, param1 is the higher threshold for the edge detection algorithm and param1 / 2 is the lower threshold.
param2: it represents the accumulator threshold, so the lower the value, the more circles will be returned.
minRadius and maxRadius: the blue circles in the example have a diameter of roughly 20 pixels, so using 70 pixels for maxRadius is the reason why so many circles are being returned by the algorithm.
minDist: the minimum distance between the centers of two circles.
The parameterization defined below:
circles = cv2.HoughCircles(gray,
                           cv2.HOUGH_GRADIENT,
                           minDist=6,
                           dp=1.1,
                           param1=150,
                           param2=15,
                           minRadius=6,
                           maxRadius=10)
returns:

You could do an adaptive threshold as preprocessing. This basically looks for areas that are relatively brighter than the neighboring pixels; your global threshold loses some of the pipes, while this keeps them a little better.
import cv2
import matplotlib.pyplot as plt
import numpy as np
img = cv2.imread('a2MTm.jpg')
blur_hor = cv2.filter2D(img[:, :, 0], cv2.CV_32F, kernel=np.ones((11, 1), np.float32) / 11.0, borderType=cv2.BORDER_CONSTANT)
blur_vert = cv2.filter2D(img[:, :, 0], cv2.CV_32F, kernel=np.ones((1, 11), np.float32) / 11.0, borderType=cv2.BORDER_CONSTANT)
mask = ((img[:, :, 0] > blur_hor * 1.2) | (img[:, :, 0] > blur_vert * 1.2)).astype(np.uint8) * 255
plt.imshow(mask)
You can then carry on with the same post processing steps.
Here are some example processing steps:
circles = cv2.HoughCircles(mask,
                           cv2.HOUGH_GRADIENT,
                           minDist=8,
                           dp=1,
                           param1=150,
                           param2=12,
                           minRadius=4,
                           maxRadius=10)
output = img.copy()
for (x, y, r) in circles[0, :, :]:
    cv2.circle(output, (int(x), int(y)), int(r), (0, 255, 0), 4)
You can adjust the parameters to get what you would like; read about the parameters here.

Instead of using cv2.HoughCircles, another approach would be to use contour filtering. We can threshold the image and then filter using aspect ratio, contour area, and radius of the blob. Here's the result:
Count: 344
Code
import cv2
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,27,3)
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
count = 0
for c in cnts:
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    ratio = w / h
    ((x, y), r) = cv2.minEnclosingCircle(c)
    if ratio > .85 and ratio < 1.20 and area > 50 and area < 120 and r < 7:
        cv2.circle(image, (int(x), int(y)), int(r), (36, 255, 12), -1)
        count += 1
print('Count: {}'.format(count))
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()

Related

Particle detection with Python OpenCV

I'm looking for a proper solution for how to count particles and measure their sizes in this image:
In the end I have to obtain lists of the particles' coordinates and areas. After some searching on the internet, I realized there are 3 approaches for particle detection:
blobs
Contours
connectedComponentsWithStats
Looking at different projects, I assembled some code that mixes them.
import pylab
import cv2
import numpy as np
Gaussian blurring and thresholding
original_image = cv2.imread(img_path)
img = original_image
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.GaussianBlur(img, (5, 5), 0)
img = cv2.blur(img, (5, 5))
img = cv2.medianBlur(img, 5)
img = cv2.bilateralFilter(img, 6, 50, 50)
max_value = 255
adaptive_method = cv2.ADAPTIVE_THRESH_GAUSSIAN_C
threshold_type = cv2.THRESH_BINARY
block_size = 11
img_thresholded = cv2.adaptiveThreshold(img, max_value, adaptive_method, threshold_type, block_size, -3)
filter small objects
min_size = 4
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(img, connectivity=8)
sizes = stats[1:, -1]
nb_components = nb_components - 1
# for every component in the image, you keep it only if it's above min_size
for i in range(0, nb_components):
    if sizes[i] < min_size:
        img[output == i + 1] = 0
Generation of contours for filling holes and measurements; pos_list and size_list are what we were looking for:
contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
pos_list = []
size_list = []
for i in range(len(contours)):
    area = cv2.contourArea(contours[i])
    size_list.append(area)
    (x, y), radius = cv2.minEnclosingCircle(contours[i])
    pos_list.append((int(x), int(y)))
For a self-check, we can plot these coordinates over the original image:
pts = np.array(pos_list)
pylab.figure(0)
pylab.imshow(original_image)
pylab.scatter(pts[:, 0], pts[:, 1], marker="x", color="green", s=5, linewidths=1)
pylab.show()
We might get something like the following:
And... I'm not really satisfied with the results. Some clearly visible particles are not included; on the other hand, some dubious fluctuations of intensity have been counted. I'm playing now with different filter settings, but the feeling is that it's wrong.
If someone knows how to improve my solution, please share.
Since the particles are in white and the background in black, we can use Kmeans Color Quantization to segment the image into two groups with clusters=2. This will allow us to easily distinguish between particles and the background. Since the particles may be very tiny, we should try to avoid blurring, dilating, or any morphological operations which may alter the particle contours. Here's an approach:
Kmeans color quantization. We perform Kmeans with two clusters, grayscale, then Otsu's threshold to obtain a binary image.
Filter out super tiny noise. Next we find contours, remove tiny specks of noise using contour area filtering, and collect each particle's (x, y) coordinate and its area. We remove tiny particles on the binary mask by "filling in" these contours to effectively erase them.
Apply mask onto original image. Now we bitwise-and the filtered mask onto the original image to highlight the particle clusters.
Kmeans with clusters=2
Result
Number of particles: 204
Average particle size: 30.537
Code
import cv2
import numpy as np
import pylab
# Kmeans
def kmeans_color_quantization(image, clusters=8, rounds=1):
    h, w = image.shape[:2]
    samples = np.zeros([h * w, 3], dtype=np.float32)
    count = 0
    for x in range(h):
        for y in range(w):
            samples[count] = image[x][y]
            count += 1

    compactness, labels, centers = cv2.kmeans(samples,
                                              clusters,
                                              None,
                                              (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10000, 0.0001),
                                              rounds,
                                              cv2.KMEANS_RANDOM_CENTERS)
    centers = np.uint8(centers)
    res = centers[labels.flatten()]
    return res.reshape((image.shape))
# Load image
image = cv2.imread('1.png')
original = image.copy()
# Perform kmeans color segmentation, grayscale, Otsu's threshold
kmeans = kmeans_color_quantization(image, clusters=2)
gray = cv2.cvtColor(kmeans, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find contours, remove tiny specs using contour area filtering, gather points
points_list = []
size_list = []
cnts, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
AREA_THRESHOLD = 2
for c in cnts:
    area = cv2.contourArea(c)
    if area < AREA_THRESHOLD:
        cv2.drawContours(thresh, [c], -1, 0, -1)
    else:
        (x, y), radius = cv2.minEnclosingCircle(c)
        points_list.append((int(x), int(y)))
        size_list.append(area)
# Apply mask onto original image
result = cv2.bitwise_and(original, original, mask=thresh)
result[thresh==255] = (36,255,12)
# Overlay on original
original[thresh==255] = (36,255,12)
print("Number of particles: {}".format(len(points_list)))
print("Average particle size: {:.3f}".format(sum(size_list)/len(size_list)))
# Display
cv2.imshow('kmeans', kmeans)
cv2.imshow('original', original)
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()

Use OpenCV to identify hollow and filled circles

I'm using OpenCV HoughCircles to identify all the circles (both hollow and filled). Following is my code:
import numpy as np
import cv2
img = cv2.imread('images/32x32.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bilateral = cv2.bilateralFilter(gray,10,50,50)
minDist = 30
param1 = 30
param2 = 50
minRadius = 5
maxRadius = 100
circles = cv2.HoughCircles(bilateral, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 0, 255), 2)
# Show result for testing:
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Test input image 1:
Test output image 1:
As you can see, I'm able to identify most of the circles except for a few. What am I missing here? I've tried varying the parameters, but this is the best I could get.
Also, if I use even more compact circles, the script does not identify any circles whatsoever.
An alternative idea is to use the findContours method and check whether each contour is a circle using approxPolyDP, as below.
import cv2
img = cv2.imread('32x32.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
inputImageCopy = img.copy()
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Use a list to store the center and radius of the target circles:
detectedCircles = []
# Look for the outer contours:
for i, c in enumerate(contours):
    # Approximate the contour to a circle:
    (x, y), radius = cv2.minEnclosingCircle(c)
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) > 5:  # check if the contour is a circle
        # Compute the center and radius:
        center = (int(x), int(y))
        radius = int(radius)
        # Draw the circles:
        cv2.circle(inputImageCopy, center, radius, (0, 0, 255), 2)
        # Store the center and radius:
        detectedCircles.append([center, radius])
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
cv2.destroyAllWindows()
I solved your problem using the same code as yours; nothing else needed to be modified. I only changed the param2 value from 50 to 30. That's all.
#!/usr/bin/python39
#OpenCV 4.5.5 Raspberry Pi 3/B/4B-w/4/8GB RAM, Bullseye,v11.
#Date: 19th April, 2022
import numpy as np
import cv2
img = cv2.imread('fill_circles.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bilateral = cv2.bilateralFilter(gray,10,50,50)
minDist = 30
param1 = 30
param2 = 30
minRadius = 5
maxRadius = 100
circles = cv2.HoughCircles(bilateral, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 0, 255), 2)
cv2.imwrite('lego.png', img)
# Show result for testing:
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
If you can always get (or create from a real image) such a "clean" image, the bounding box of the grid (the 2-dimensional array of circles) region can be easily obtained.
Therefore, you know the rectangular area of interest (and the angle of rotation of the grid, if rotation is possible).
If you examine the pixels along the axial directions of that rectangle, you can easily find out how many circles are lined up and the diameter of the circles, because lines in which all pixels are black are the gaps between adjacent rows (or columns).
(Sum up the pixel values along the direction of each axis; whether the sum is 0 or not tells you whether that line passes over a grid cell.)
If necessary, you may also check that the shape of the contour in each grid cell is really circular.
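As a rough illustration of this projection idea (a sketch only: the file name is a placeholder, and it assumes a clean binary image of white circles on black, already cropped to the grid's bounding box and not rotated):
import cv2
import numpy as np

# Placeholder: a clean, already-cropped grid image, white circles on black.
img = cv2.imread('grid.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Project the binary image onto each axis: a zero sum means that line of
# pixels lies entirely in a gap between rows (or columns) of circles.
col_profile = binary.sum(axis=0)   # one value per image column
row_profile = binary.sum(axis=1)   # one value per image row

def count_runs(profile):
    """Count contiguous runs of non-zero values in a 1-D projection."""
    nonzero = profile > 0
    # A run starts wherever a non-zero value follows a zero (or at the start).
    return int(np.count_nonzero(nonzero[1:] & ~nonzero[:-1])) + int(nonzero[0])

cols = count_runs(col_profile)   # circles per row
rows = count_runs(row_profile)   # circles per column
print('grid: {} x {} -> {} circles'.format(rows, cols, rows * cols))
The width of each non-zero run also gives an estimate of the circle diameter, if that is needed.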

Detect overlapping noisy circles in image

I am trying to recognize two areas in the following image with Python OpenCV: the area inside the inner circle, and the area between the outer and inner circles (the border).
I tried different approaches like:
Detecting circles images using opencv hough circles
Find and draw contours using opencv python
Those do not fit very well.
Is this even possible with classical image processing, or do I need some neural network approach?
Edit: Detecting circles images using opencv hough circles
# import the necessary packages
import numpy as np
import argparse
import cv2
from PIL import Image
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required = True, help = "Path to the image")
args = vars(ap.parse_args())
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(args["image"])
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 500)
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    # show the output image
    img = Image.fromarray(image)
    if img.height > 1500:
        imS = cv2.resize(np.hstack([image, output]), (round((img.width * 2) / 3), round(img.height / 3)))
    else:
        imS = np.hstack([image, output])
    # Resize image
    cv2.imshow("gray", gray)
    cv2.imshow("output", imS)
    cv2.waitKey(0)
else:
    print("No circle detected")
Test image:
General mistake: when using HoughCircles(), the parameters should be chosen appropriately. I see that you are only using the first 4 parameters in your code. You can check here to get a good idea about those parameters.
From experience: while using HoughCircles, I noticed that if the centers of 2 circles are the same or very close to each other, HoughCircles can't detect both of them, even if you assign a small value to the minDist parameter. In your case, the centers of the circles are the same as well.
My suggestion: I will attach the appropriate parameters with the code for both circles. I couldn't find the 2 circles with one parameter list because of the problem I explained above. My suggestion is to apply these two parameter sets to the same image, one after the other, and combine the detected circles to get the result.
For the outer circle, the result and the code with the parameters included:
Result:
# import the necessary packages
import numpy as np
import argparse
import cv2
from PIL import Image
# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread('image.jpg')
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray,15)
rows = gray.shape[0]
# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, rows / 8,
                           param1=100, param2=30,
                           minRadius=200, maxRadius=260)
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    # show the output image
    img = Image.fromarray(image)
    if img.height > 1500:
        imS = cv2.resize(np.hstack([image, output]), (round((img.width * 2) / 3), round(img.height / 3)))
    else:
        imS = np.hstack([image, output])
    # Resize image
    cv2.imshow("gray", gray)
    cv2.imshow("output", imS)
    cv2.waitKey(0)
else:
    print("No circle detected")
For the inner circle, the parameters:
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, rows / 8,
                           param1=100, param2=30,
                           minRadius=100, maxRadius=200)
Result:
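A minimal sketch of the "run it twice and merge" suggestion above, reusing only the two parameter sets already given (the file name is the same placeholder used earlier):
import cv2
import numpy as np

image = cv2.imread('image.jpg')
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 15)
rows = gray.shape[0]

all_circles = []
# Run HoughCircles once per radius range (outer ring, then inner ring)
# and merge whatever each pass finds.
for min_r, max_r in [(200, 260), (100, 200)]:
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, rows / 8,
                               param1=100, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    if circles is not None:
        all_circles.extend(np.round(circles[0, :]).astype("int"))

for (x, y, r) in all_circles:
    cv2.circle(output, (x, y), r, (0, 255, 0), 4)

cv2.imshow("output", output)
cv2.waitKey(0)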

How to optimize circle detection with Python OpenCV?

I have looked at several pages regarding optimizing circle detection using opencv in python. All seem to be specific to the individual circumstances of a given picture. What are some starting points for each of the parameters for cv2.HoughCircles? Since I am not sure what recommended values are, I have attempted looping over ranges but this is not producing any promising results. Why can't I detect any of the circles in this image?
import cv2
import numpy as np
image = cv2.imread('IMG_stack.png')
output = image.copy()
height, width = image.shape[:2]
maxWidth = int(width/10)
minWidth = int(width/20)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 20,param1=50,param2=50,minRadius=minWidth,maxRadius=maxWidth)
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circlesRound = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circlesRound:
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
    cv2.imwrite(filename='test.circleDraw.png', img=output)
    cv2.imwrite(filename='test.circleDrawGray.png', img=gray)
else:
    print('No circles found')
This should be straightforward circle detection, but none of the circles detected are even close.
The main parameters that you should pay attention to are minDist, minRadius and maxRadius.
Analyzing the radius first: you have an image that is 12 circles wide and 8 circles tall, which gives you a diameter of roughly width/12 for each circle, or a radius of (width/12)/2. The constraints that you used allowed the algorithm to detect circles way bigger or smaller than necessary; therefore, you should use a parameterization that is a better fit for your image. In this case, I have used an interval [0.9 * radius, 1.1 * radius].
As there is no overlapping, you could say that the distance between two circles is at least the diameter, so minDist could be set to something like 2*minRadius.
This implementation is basically the same as yours, just updating those 3 parameters:
%matplotlib inline
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread('data/balls.jpg')
output = image.copy()
height, width = image.shape[:2]
maxRadius = int(1.1*(width/12)/2)
minRadius = int(0.9*(width/12)/2)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(image=gray,
                           method=cv2.HOUGH_GRADIENT,
                           dp=1.2,
                           minDist=2 * minRadius,
                           param1=50,
                           param2=50,
                           minRadius=minRadius,
                           maxRadius=maxRadius)
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circlesRound = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circlesRound:
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
    plt.imshow(output)
else:
    print('No circles found')
The result is:
Normally, circle detection can be done using traditional image processing methods such as thresholding + contour detection, Hough circles, or contour fitting, but since your circles are overlapping/touching, watershed segmentation may be better. Here's a good resource.
import cv2
import numpy as np
from skimage.feature import peak_local_max
from skimage.morphology import watershed
from scipy import ndimage
# Load in image, convert to gray scale, and Otsu's threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Remove small noise by filtering using contour area
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    if cv2.contourArea(c) < 1000:
        cv2.drawContours(thresh, [c], 0, (0,0,0), -1)
cv2.imshow('thresh', thresh)
# Compute Euclidean distance from every binary pixel
# to the nearest zero pixel then find peaks
distance_map = ndimage.distance_transform_edt(thresh)
local_max = peak_local_max(distance_map, indices=False, min_distance=20, labels=thresh)
# Perform connected component analysis then apply Watershed
markers = ndimage.label(local_max, structure=np.ones((3, 3)))[0]
labels = watershed(-distance_map, markers, mask=thresh)
# Iterate through unique labels
for label in np.unique(labels):
    if label == 0:
        continue
    # Create a mask
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    # Find contours and determine contour area
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    c = max(cnts, key=cv2.contourArea)
    cv2.drawContours(image, [c], -1, (36,255,12), -1)
cv2.imshow('image', image)
cv2.waitKey()

Detecting Overlapping Circles in OpenCV

I'm using the OpenCV library for Python to detect the circles in an image. As a test case, I'm using the following image:
bottom of can:
I've written the following code, which should display the image before detection, then display the image with the detected circles added:
import cv2
import numpy as np
image = cv2.imread('can.png')
image_rgb = image.copy()
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
grayscaled_image = cv2.cvtColor(image_copy, cv2.COLOR_GRAY2BGR)
cv2.imshow("confirm", grayscaled_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
circles = cv2.HoughCircles(image_copy, cv2.HOUGH_GRADIENT, 1.3, 20, param1=60, param2=33, minRadius=10,maxRadius=28)
if circles is not None:
    print("FOUND CIRCLES")
    circles = np.round(circles[0, :]).astype("int")
    print(circles)
    for (x, y, r) in circles:
        cv2.circle(image, (x, y), r, (255, 0, 0), 4)
        cv2.rectangle(image, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
cv2.imshow("Test", image + image_rgb)
cv2.waitKey(0)
cv2.destroyAllWindows()
I get this: resultant image
I feel that my problem lies in the usage of the HoughCircles() function. Its usage is:
cv2.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])
where minDist is a value greater than 0 that requires detected circles to be a certain distance from one another. With this requirement, it would be impossible for me to properly detect all of the circles on the bottom of the can, as the center of each circle is in the same place. Would contours be a solution? How can I convert contours to circles so that I may use the coordinates of their center points? What should I do to best detect the circle objects for each ring in the bottom of the can?
Not all, but a majority of the circles can be detected by adaptively thresholding the image, finding the contours, and then fitting a minimum enclosing circle to contours having an area greater than a threshold.
import cv2
import numpy as np
block_size,constant_c ,min_cnt_area = 9,1,400
img = cv2.imread('viMmP.png')
img_gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(img_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,block_size,constant_c)
thresh_copy = thresh.copy()
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt) > min_cnt_area:
        (x, y), radius = cv2.minEnclosingCircle(cnt)
        center = (int(x), int(y))
        radius = int(radius)
        cv2.circle(img, center, radius, (255, 0, 0), 1)
cv2.imshow("Thresholded Image",thresh_copy)
cv2.imshow("Image with circles",img)
cv2.waitKey(0)
Now this script yields the result:
But there are certain trade-offs; for example, if block_size and constant_c are changed to 11 and 2 respectively, then the script yields:
You should try applying erosion with a kernel of a proper shape to separate the overlapping circles in the thresholded image.
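As a rough sketch of that erosion suggestion (the kernel shape, size, and iteration count are guesses you would tune for your image; 'thresholded.png' is a placeholder standing in for the adaptively thresholded image produced by the script above):
import cv2

# Placeholder: reuse the adaptively thresholded binary image from the script above.
thresh = cv2.imread('thresholded.png', cv2.IMREAD_GRAYSCALE)

# A small elliptical structuring element shrinks touching blobs apart while
# roughly preserving their circular shape; 3x3 with 2 iterations is a starting guess.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
eroded = cv2.erode(thresh, kernel, iterations=2)

cv2.imshow('eroded', eroded)
cv2.waitKey(0)
The same contour + minEnclosingCircle step can then be run on eroded instead of thresh.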
You may look at the following links to understand more about adaptive thresholding and contours:
Thresholding examples: http://docs.opencv.org/3.1.0/d7/d4d/tutorial_py_thresholding.html
Thresholding reference: http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html
Contour Examples:
http://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
