When I try to detect circles in a coins image:
In the first program I use MATLAB and everything works fine.
Now I try to detect the same circles in the coins image, but using OpenCV, and I get a bad result.
I think I am doing something wrong, but I don't know what.
% This example uses the Hough transform for circles in MATLAB
% First step: read the image and do pre-processing
img = imread('coins2.png'); % Coins image is grayscale
figure, imshow(img), title('Original Coins Image');
edges = edge(img, 'Canny', [0.01, 0.5]);
figure, imshow(edges), title('Edge Image');
% Second step: find all circles with radius in [Rmin, Rmax]
Rmin = 10;
Rmax = 40;
[center, radii, metric] = imfindcircles(edges, [Rmin, Rmax]);
% Retain the N strongest circles according to the metric values
N = 24; % hyperparameter: how many circles to keep (alternatively, display all circles returned by the voting)
centerStrong = center(1:N,:);
radiusStrong = radii(1:N);
metricStrong = metric(1:N);
% Draw the N strongest circle perimeters over the original image.
figure, imshow(img), hold on, viscircles(centerStrong, radiusStrong,'EdgeColor','b'),
title('Circle Segment'), hold off;
Python example:
import cv2

img = cv2.imread('coins2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert the image to grayscale
edges = cv2.Canny(image=gray, threshold1=200, threshold2=100, apertureSize=3)
cv2.imshow('Original Image', img), cv2.imshow('Edge Image', edges)

circles = cv2.HoughCircles(image=edges, method=cv2.HOUGH_GRADIENT, dp=20, minDist=1,
                           param1=30, param2=50, minRadius=10, maxRadius=50)
for circle in circles[0, :]:
    a, b = int(circle[0]), int(circle[1])
    radius = int(circle[2])
    cv2.circle(img=img, center=(a, b), radius=radius, color=(255, 0, 0), thickness=2)

cv2.imshow('Circle Segment', img), cv2.waitKey(0), cv2.destroyAllWindows()
Thanks to everyone for the answers, they were very helpful.
Attached is the code that gave me almost the same performance as the MATLAB version.
import cv2

img = cv2.imread('coins2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert the image to grayscale
thres = 230
edges = cv2.Canny(image=gray, threshold1=thres, threshold2=thres/2,
                  apertureSize=3, L2gradient=True)
cv2.imshow('Original Image', img), cv2.imshow('Edge Image', edges)

circles = cv2.HoughCircles(image=gray, method=cv2.HOUGH_GRADIENT, dp=0.06,
                           minDist=14, param1=thres, param2=30, minRadius=10,
                           maxRadius=40)
for circle in circles[0, :]:
    a, b = int(circle[0]), int(circle[1])
    radius = int(circle[2])
    cv2.circle(img=img, center=(a, b), radius=radius, color=(255, 0, 0),
               thickness=2)

cv2.imshow('Circle Segment', img), cv2.waitKey(0), cv2.destroyAllWindows()
Related
I'm using OpenCV HoughCircles to identify all the circles (both hollow and filled). The following is my code:
import numpy as np
import cv2

img = cv2.imread('images/32x32.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bilateral = cv2.bilateralFilter(gray, 10, 50, 50)

minDist = 30
param1 = 30
param2 = 50
minRadius = 5
maxRadius = 100

circles = cv2.HoughCircles(bilateral, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 0, 255), 2)

# Show result for testing:
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Test input image 1:
Test output image 1:
As you can see, I'm able to identify most of the circles except for a few. What am I missing here? I've tried varying the parameters, but this is the best I could get.
Also, if I use even more closely packed circles, the script does not identify any circles whatsoever.
An alternative idea is to use the find-contours method and check whether each contour is a circle using approxPolyDP, as below.
import cv2

img = cv2.imread('32x32.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
inputImageCopy = img.copy()

# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Use a list to store the center and radius of the target circles:
detectedCircles = []

# Look for the outer contours:
for i, c in enumerate(contours):
    # Approximate the contour to a circle:
    (x, y), radius = cv2.minEnclosingCircle(c)
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)

    if len(approx) > 5:  # check if the contour is a circle
        # Compute the center and radius:
        center = (int(x), int(y))
        radius = int(radius)

        # Draw the circles:
        cv2.circle(inputImageCopy, center, radius, (0, 0, 255), 2)

        # Store the center and radius:
        detectedCircles.append([center, radius])

cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
cv2.destroyAllWindows()
I solved your problem using the same code as yours; nothing else needed to be modified. I changed param2 from 50 to 30. That's all.
#!/usr/bin/python39
#OpenCV 4.5.5 Raspberry Pi 3/B/4B-w/4/8GB RAM, Bullseye,v11.
#Date: 19th April, 2022
import numpy as np
import cv2

img = cv2.imread('fill_circles.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
bilateral = cv2.bilateralFilter(gray, 10, 50, 50)

minDist = 30
param1 = 30
param2 = 30
minRadius = 5
maxRadius = 100

circles = cv2.HoughCircles(bilateral, cv2.HOUGH_GRADIENT, 1, minDist, param1=param1, param2=param2, minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 0, 255), 2)

cv2.imwrite('lego.png', img)

# Show result for testing:
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
If you can always get (or create from the real image) such a "clean" image, the bounding box of the grid (the 2-dimensional array of circles) region can be obtained easily.
Therefore, you can know the rectangular area of interest (and the angle of rotation of the grid, if rotation is possible).
If you examine the pixels along the axial directions of the rectangle, you can easily find out how many circles are lined up and what the circle diameter is, because lines on which all pixels are black are the gaps between adjacent rows (or columns).
(Sum up the pixel values along the direction of each axis. Whether the sum is 0 or not tells you whether that line passes over a grid cell or not; a sketch of this idea follows below.)
If necessary, you may also check that the shape of the contour in each grid cell is really circular.
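A minimal sketch of that projection idea, assuming a clean binary image (white circles on black) already cropped to the grid's bounding box; the file name grid.png and the count_runs helper are hypothetical:

import cv2
import numpy as np

# Load the clean binary grid image (hypothetical file name)
mask = cv2.imread('grid.png', cv2.IMREAD_GRAYSCALE)
mask = (mask > 0).astype(np.uint8)

def count_runs(profile):
    # Count runs of non-zero sums; each run corresponds to one row/column of circles
    nonzero = profile > 0
    return int(nonzero[0]) + int(np.sum(nonzero[1:] & ~nonzero[:-1]))

col_profile = mask.sum(axis=0)   # sum down each column
row_profile = mask.sum(axis=1)   # sum across each row

n_cols = count_runs(col_profile)   # circles per row
n_rows = count_runs(row_profile)   # circles per column
diameter = mask.shape[1] / max(n_cols, 1)   # rough diameter: grid width / column count
print(n_rows, n_cols, diameter)

The zero entries of each profile mark the gaps between adjacent rows and columns, which is exactly the observation above.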
Is it possible to get the coordinates of the incomplete circle? I am using OpenCV and Python, so I can find most of the circles.
But I have no clue how I can detect the incomplete circle in the picture.
I am looking for a simple way to solve it.
import sys
import cv2 as cv
import numpy as np

## [load]
default_file = 'captcha2.png'
# Loads an image
src = cv.imread(cv.samples.findFile(default_file), cv.IMREAD_COLOR)

## [convert_to_gray]
# Convert it to gray
gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
## [convert_to_gray]

## [reduce_noise]
# Reduce the noise to avoid false circle detection
gray = cv.medianBlur(gray, 3)
## [reduce_noise]

## [houghcircles]
#rows = gray.shape[0]
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, 5,
                          param1=1, param2=35,
                          minRadius=1, maxRadius=30)
## [houghcircles]

## [draw]
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        center = (i[0], i[1])
        # circle center
        cv.circle(src, center, 1, (0, 100, 100), 3)
        # circle outline
        radius = i[2]
        cv.circle(src, center, radius, (255, 0, 255), 3)
## [draw]

## [display]
cv.imshow("detected circles", src)
cv.waitKey(0)
## [display]
Hi - here is another picture. I want the x and y coordinates of the incomplete circle, light blue on the lower left.
Here is the original picture:
You need to remove the colorful background of your image and display only circles.
One approach is:
Get the binary mask of the input image
Apply Hough Circle to detect the circles
Binary mask:
Using the binary mask, we will detect the circles:
Code:
# Load the libraries
import cv2
import numpy as np

# Load the image
img = cv2.imread("r5lcN.png")

# Copy the input image
out = img.copy()

# Convert to the HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Get binary mask
msk = cv2.inRange(hsv, np.array([0, 0, 130]), np.array([179, 255, 255]))

# Detect circles in the image
crc = cv2.HoughCircles(msk, cv2.HOUGH_GRADIENT, 1, 10, param1=50, param2=25, minRadius=0, maxRadius=0)

# Ensure circles were found
if crc is not None:
    # Convert the coordinates and radius of the circles to integers
    crc = np.round(crc[0, :]).astype("int")

    # For each (x, y) coordinates and radius of the circles
    for (x, y, r) in crc:
        # Draw the circle
        cv2.circle(out, (x, y), r, (0, 255, 0), 4)

        # Print coordinates
        print("x:{}, y:{}".format(x, y))

# Display
cv2.imshow("out", np.hstack([img, out]))
cv2.waitKey(0)
Output:
x:178, y:60
x:128, y:22
x:248, y:20
x:378, y:52
x:280, y:60
x:294, y:46
x:250, y:44
x:150, y:62
Explanation
We have three options for finding a good threshold:
Simple threshold result:
Adaptive threshold result:
Binary mask result:
As we can see, the third option gives a suitable result. Of course, you could get the desired result with the other options, but it might take a long time to find suitable parameters. We then applied Hough circles, played with the parameter values, and got the desired result.
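For reference, here is a rough sketch of how the three options could be produced side by side for comparison; the simple-threshold value of 130 and the adaptive-threshold parameters (11, 2) are illustrative guesses, not tuned values:

import cv2
import numpy as np

img = cv2.imread("r5lcN.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Option 1: simple (global) threshold
_, simple = cv2.threshold(gray, 130, 255, cv2.THRESH_BINARY)

# Option 2: adaptive threshold
adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 11, 2)

# Option 3: binary mask in HSV (the option used in the main code above)
msk = cv2.inRange(hsv, np.array([0, 0, 130]), np.array([179, 255, 255]))

# Compare the three results side by side
cv2.imshow("simple | adaptive | mask", np.hstack([simple, adaptive, msk]))
cv2.waitKey(0)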
Update
For the second uploaded image, you can detect the semi-circle by reducing param1 and param2 of the Hough circle call.
crc = cv2.HoughCircles(msk, cv2.HOUGH_GRADIENT, 1, 10, param1=10, param2=15, minRadius=0, maxRadius=0)
Replacing the above line in the main code will result in:
Console result
x:238, y:38
x:56, y:30
x:44, y:62
x:208, y:26
I am trying to detect the outer boundary of the circular object in the images below:
I tried OpenCV's Hough Circle, but the code is not working for every image. I also tried to adjust parameters such as minRadius and maxRadius in Hough Circle, but it's not working on every image.
The aim is to detect the object from the image and crop it.
Expected output:
Source code:
import imutils
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = cv2.imread("path to the image i have provided")
r = 600.0 / image.shape[1]
dim = (600, int(image.shape[0] * r))
resized = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
cv2.imwrite("path to where we want to save downscaled image", resized)

image = cv2.imread('path of downscaled image')
image1 = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image2 = cv2.GaussianBlur(image1, (5, 5), 0)
edged = cv2.Canny(image2, 30, 150)

img = cv2.medianBlur(image2, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(edged, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=200, maxRadius=280)
circles = np.uint16(np.around(circles))
max_circle = max(circles[0, :], key=lambda x: x[2])
# print(max_circle)

# Create mask
height, width = image1.shape
mask = np.zeros((height, width), np.uint8)
for i in [max_circle]:
    cv2.circle(mask, (i[0], i[1]), i[2], (255, 255, 255), thickness=-1)

masked_data = cv2.bitwise_and(image, image, mask=mask)
_, thresh = cv2.threshold(mask, 1, 255, cv2.THRESH_BINARY)

# Find contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
x, y, w, h = cv2.boundingRect(contours[0])

# Crop masked_data
crop = masked_data[y:y+h, x:x+w]

# Code to close windows
cv2.imshow('OG', image)
cv2.imshow('Cropped ROI', crop)
cv2.imwrite("path to save roi image", crop)
cv2.waitKey(0)
cv2.destroyAllWindows()
Second Answer: an approach based on color segmentation.
While I was editing the question to improve its readability and was inserting and resizing all the images from the link you shared to make it easier for everyone to visualize what you are trying to do, it occurred to me that this problem might be a better candidate for an approach based on segmentation by color:
This simpler (but clever) approach assumes that the reel appears pretty much in the same location and has more or less the same dimensions every time:
To discover the approximate color of the reel in the image, define a list of Regions of Interest (ROIs) to sample pixels from and determine the min and max color of that area in the HSV color space. The location and size of each ROI are values derived from the size of the image. In the images below, you can see the ROIs drawn as blue-ish rectangles:
Once the min and max HSV colors have been found, a threshold operation with cv2.inRange() can be executed to segment the reel:
Then, iterate through all the contours in the binary image and assume that the largest one represents the reel. Use this contour and draw it on a separate mask to be able to extract the pixels from the original image:
At this stage, it is also possible to compute a bounding box for the contour and extract its precise location to be able to perform a crop operation later and completely isolate the reel in the image:
This approach works for EVERY image shared on the question.
Source code:
import cv2
import numpy as np
import sys

# initialize global H, S, V values
min_global_h = 179
min_global_s = 255
min_global_v = 255

max_global_h = 0
max_global_s = 0
max_global_v = 0

# load input image from the cmd-line
filename = sys.argv[1]
img = cv2.imread(sys.argv[1])
if (img is None):
    print('!!! Failed imread')
    sys.exit(-1)

# create an auxiliary image for debugging purposes
dbg_img = img.copy()

# initialize a list of Regions of Interest that need to be scanned to identify good HSV values to threshold by color
w = img.shape[1]
h = img.shape[0]
roi_w = int(w * 0.10)
roi_h = int(h * 0.10)
roi_list = []
roi_list.append( (int(w*0.25), int(h*0.15), roi_w, roi_h) )
roi_list.append( (int(w*0.25), int(h*0.60), roi_w, roi_h) )

# convert image to HSV color space
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# iterate through the ROIs to determine the min/max HSV color of the reel
for rect in roi_list:
    x, y, w, h = rect
    x2 = x + w
    y2 = y + h
    print('ROI rect=', rect)

    cropped_hsv_img = hsv_img[y:y+h, x:x+w]
    h, s, v = cv2.split(cropped_hsv_img)

    min_h = np.min(h)
    min_s = np.min(s)
    min_v = np.min(v)

    if (min_h < min_global_h):
        min_global_h = min_h
    if (min_s < min_global_s):
        min_global_s = min_s
    if (min_v < min_global_v):
        min_global_v = min_v

    max_h = np.max(h)
    max_s = np.max(s)
    max_v = np.max(v)

    if (max_h > max_global_h):
        max_global_h = max_h
    if (max_s > max_global_s):
        max_global_s = max_s
    if (max_v > max_global_v):
        max_global_v = max_v

    # debug: draw ROI in original image
    cv2.rectangle(dbg_img, (x, y), (x2, y2), (255, 165, 0), 4) # blue-ish (BGR)

cv2.imshow('ROIs', cv2.resize(dbg_img, dsize=(0, 0), fx=0.5, fy=0.5))
#cv2.waitKey(0)
cv2.imwrite(filename[:-4] + '_rois.png', dbg_img)

# define min/max color for threshold, using the values aggregated over the ROIs
low_hsv = np.array([min_global_h, min_global_s, min_global_v])
max_hsv = np.array([max_global_h, max_global_s, max_global_v])
#print('low_hsv=', low_hsv)
#print('max_hsv=', max_hsv)

# threshold image by color
img_bin = cv2.inRange(hsv_img, low_hsv, max_hsv)
cv2.imshow('binary', cv2.resize(img_bin, dsize=(0, 0), fx=0.5, fy=0.5))
cv2.imwrite(filename[:-4] + '_binary.png', img_bin)
#cv2.imshow('img_bin', cv2.resize(img_bin, dsize=(0, 0), fx=0.5, fy=0.5))
#cv2.waitKey(0)

# create a mask to store the contour of the reel (hopefully)
mask = np.zeros((img_bin.shape[0], img_bin.shape[1]), np.uint8)
crop_x, crop_y, crop_w, crop_h = (0, 0, 0, 0)

# iterate through all the contours in the binary image:
# assume that the first contour with an area larger than 100k belongs to the reel
contours, hierarchy = cv2.findContours(img_bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    area = cv2.contourArea(contours[contourIdx])
    print('contourIdx=', contourIdx, 'area=', area)

    # draw potential reel blob on the mask (in white)
    if (area > 100000):
        crop_x, crop_y, crop_w, crop_h = cv2.boundingRect(cnt)
        centers, radius = cv2.minEnclosingCircle(cnt)
        cv2.circle(mask, (int(centers[0]), int(centers[1])), int(radius), (255), -1) # fill with white
        break

cv2.imshow('mask', cv2.resize(mask, dsize=(0, 0), fx=0.5, fy=0.5))
cv2.imwrite(filename[:-4] + '_mask.png', mask)

# copy just the reel area into its own image
reel_img = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('reel_img', cv2.resize(reel_img, dsize=(0, 0), fx=0.5, fy=0.5))
cv2.imwrite(filename[:-4] + '_reel.png', reel_img)

# crop the reel to a smaller image
if (crop_w != 0 and crop_h != 0):
    cropped_reel_img = reel_img[crop_y:crop_y+crop_h, crop_x:crop_x+crop_w]
    cv2.imshow('cropped_reel_img', cv2.resize(cropped_reel_img, dsize=(0, 0), fx=0.5, fy=0.5))

    output_filename = filename[:-4] + '_crop.png'
    cv2.imwrite(output_filename, cropped_reel_img)

cv2.waitKey(0)
First answer: an approach based on pre-processing the image and executing an adaptiveThreshold operation.
There might be other ways of solving this problem that are not based on Hough Circles. Here is the result of an approach that is not:
Preprocess the image! Decreasing the size of the image and executing a blur helps with segmentation:
The segmentation method uses cv2.adaptiveThreshold() to create a binary image that preserves the most important objects: the center of the reel and the external edge of the reel. This is an important step since we are only interested in what exists between these two objects. However, life is not perfect and neither is this segmentation. The shadow of the reel on the table became part of the detected binary objects. Also, the outer edge is not fully connected, as you can see in the resulting image on the right (look at the top left of the circumference):
To join broken segments, a morphological operation can be executed:
Finally, the entire reel area can be exposed by iterating through the contours of the image above and discarding those whose area is larger than what is expected for a reel. The resulting binary image (on the left) can then be used as a mask to identify the reel location on the original image:
Keep in mind that I'm not trying to find a universal solution for your problem. I'm merely showing that there might be other solutions that don't depend on Hough Circles.
Also, this code might need some adjustments to work on a larger number of cases.
Source code:
import cv2
import numpy as np
import sys

img = cv2.imread("test_images/reel.jpg")
if (img is None):
    print('!!! Failed imread')
    sys.exit(-1)

# create output image
output_img = img.copy()

# 1. Preprocess the image: downscale to speed up processing and execute a blur
SCALE_FACTOR = 0.5
smaller_img = cv2.resize(img, dsize=(0, 0), fx=SCALE_FACTOR, fy=SCALE_FACTOR)
blur_img = cv2.medianBlur(smaller_img, 9)
cv2.imwrite('reel1_blur_img.png', blur_img)

# 2. Segment the image to identify the 2 most important contours: the center of the reel and the outer edge
gray_img = cv2.cvtColor(blur_img, cv2.COLOR_BGR2GRAY)
img_bin = cv2.adaptiveThreshold(gray_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 19, 4)
cv2.imwrite('reel2_img_bin.png', img_bin)

green_mask = np.zeros((img_bin.shape[0], img_bin.shape[1]), np.uint8)
#green_mask = cv2.cvtColor(img_bin, cv2.COLOR_GRAY2RGB) # debug

contours, hierarchy = cv2.findContours(img_bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    area = cv2.contourArea(contours[contourIdx])
    #print('contourIdx=', contourIdx, 'w=', w, 'h=', h, 'area=', area)

    # filter out tiny segments
    if (area < 5000):
        #cv2.fillPoly(green_mask, pts=[cnt], color=(0, 0, 255)) # red
        continue

    # draw green contour (filled)
    #cv2.fillPoly(green_mask, pts=[cnt], color=(0, 255, 0)) # green
    cv2.fillPoly(green_mask, pts=[cnt], color=(255)) # white

    # debug:
    #cv2.imshow('green_mask', green_mask)
    #cv2.waitKey(0)

cv2.imshow('green_mask', green_mask)
cv2.imwrite('reel2_green_mask.png', green_mask)

# 3. Fix mask: join segments nearby
kernel = np.ones((3, 3), np.uint8)
img_dilation = cv2.dilate(green_mask, kernel, iterations=1)
green_mask = cv2.erode(img_dilation, kernel, iterations=1)
cv2.imshow('fixed green_mask', green_mask)
cv2.imwrite('reel3_img.png', green_mask)

# 4. Extract the reel area from the green mask
reel_mask = np.zeros((green_mask.shape[0], green_mask.shape[1]), np.uint8)
#reel_mask = cv2.cvtColor(green_mask, cv2.COLOR_GRAY2RGB) # debug

contours, hierarchy = cv2.findContours(green_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    area = cv2.contourArea(contours[contourIdx])
    print('contourIdx=', contourIdx, 'w=', w, 'h=', h, 'area=', area)

    # filter out larger segments
    if (area > 110000):
        #cv2.fillPoly(reel_mask, pts=[cnt], color=(0, 0, 255)) # red
        continue

    # draw green contour (filled)
    #cv2.fillPoly(reel_mask, pts=[cnt], color=(0, 255, 0)) # green
    cv2.fillPoly(reel_mask, pts=[cnt], color=(255)) # white

    # debug:
    #cv2.imshow('reel_mask', reel_mask)
    #cv2.waitKey(0)

cv2.imshow('reel_mask', reel_mask)
cv2.imwrite('reel4_reel_mask.png', reel_mask)

# 5. Draw the reel area on the original image
contours, hierarchy = cv2.findContours(reel_mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    centers, radius = cv2.minEnclosingCircle(cnt)

    # rescale these values back to the original image size
    centers_orig = (centers[0] // SCALE_FACTOR, centers[1] // SCALE_FACTOR)
    radius_orig = radius // SCALE_FACTOR

    print('centers=', centers_orig, 'radius=', radius_orig)
    cv2.circle(output_img, (int(centers_orig[0]), int(centers_orig[1])), int(radius_orig), (128, 0, 255), 5) # magenta

cv2.imshow('output_img', output_img)
cv2.imwrite('reel5_output.png', output_img)

# display just the pixels from the original image
larger_reel_mask = cv2.resize(reel_mask, (int(img.shape[1]), int(img.shape[0])))
output_reel_img = cv2.bitwise_and(img, img, mask=larger_reel_mask)
cv2.imshow('output_reel_img', output_reel_img)
cv2.imwrite('reel5_output_reel.png', output_reel_img)

cv2.waitKey(0)
At this point, it's possible to use larger_reel_mask, compute a minimal enclosing circle, and draw it over this mask to make it a little bit more round, which allows us to retrieve the area of the reel more accurately:
But the 4 lines of code that achieve this improvement I leave as an exercise for the reader (a possible sketch follows).
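One possible sketch of those lines, continuing the script above and reusing its img and larger_reel_mask variables (this is my guess at the step, not the author's exact code; rounder_reel_img is a hypothetical name):

# Replace the mask with its filled minimal enclosing circle to make it rounder
cnts, _ = cv2.findContours(larger_reel_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
(cx, cy), radius = cv2.minEnclosingCircle(max(cnts, key=cv2.contourArea))
cv2.circle(larger_reel_mask, (int(cx), int(cy)), int(radius), (255), -1)
rounder_reel_img = cv2.bitwise_and(img, img, mask=larger_reel_mask)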
I have an image from which I want to extract a region of interest, which is a circle,
and then crop it.
I have done some preprocessing and I am sharing the result and code.
Please let me know how I can achieve this.
import imutils
import cv2
import numpy as np
from matplotlib import pyplot as plt  # needed for the plotting calls below

# edge detection
image = cv2.imread('newimage.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = cv2.GaussianBlur(image, (5, 5), 0)
canny = cv2.Canny(image, 30, 150)

# circle detection
img = cv2.medianBlur(image, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(canny, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)

plt.title('Circle Detected')
plt.xticks([])
plt.yticks([])
plt.imshow(cimg, cmap='gray')
plt.show()
In your example you should get the circle with the biggest radius, which is stored in i[2]:
max_circle = max(circles[0,:], key=lambda x:x[2])
print(max_circle)
i = max_circle
#outer circle
cv2.circle(cimg,(i[0],i[1]),i[2],(0,255,0),2)
#center of the circle
cv2.circle(cimg,(i[0],i[1]),2,(0,0,255),3)
Check this out; the main tips:
play with the blurring params;
play with the HoughCircles params: minRadius, maxRadius and minDist
(minimal distance between circles)
import cv2
import numpy as np

img = cv2.imread('newimage.jpg', cv2.IMREAD_COLOR)

# Convert to grayscale and blur
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray_blurred = cv2.bilateralFilter(gray, 11, 30, 30)

# tune circles size
detected_circles = cv2.HoughCircles(gray_blurred,
                                    cv2.HOUGH_GRADIENT, 1,
                                    param1=50,
                                    param2=30,
                                    minDist=100,
                                    minRadius=50,
                                    maxRadius=70)

if detected_circles is not None:
    # Convert the circle parameters a, b and r to integers.
    detected_circles = np.uint16(np.around(detected_circles))

    for pt in detected_circles[0, :]:
        a, b, r = pt[0], pt[1], pt[2]

        # Draw the circumference of the circle.
        cv2.circle(img, (a, b), r, (0, 255, 0), 2)

        # Draw a small circle (of radius 1) to show the center.
        cv2.circle(img, (a, b), 1, (0, 0, 255), 3)

cv2.imshow("Detected circles", img)
cv2.waitKey(0)
small circle values:
minRadius=50
maxRadius=60
big circle values:
minRadius=100
maxRadius=200
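If you need both sizes in a single run, one rough option (a sketch only, reusing the same newimage.jpg, blur, and parameter values listed above as starting points) is to call HoughCircles twice with the two radius ranges and merge the results:

import cv2
import numpy as np

img = cv2.imread('newimage.jpg', cv2.IMREAD_COLOR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray_blurred = cv2.bilateralFilter(gray, 11, 30, 30)

all_circles = []
for min_r, max_r in [(50, 60), (100, 200)]:   # small circles, big circle
    found = cv2.HoughCircles(gray_blurred, cv2.HOUGH_GRADIENT, 1,
                             minDist=100, param1=50, param2=30,
                             minRadius=min_r, maxRadius=max_r)
    if found is not None:
        all_circles.extend(np.uint16(np.around(found))[0, :])

for a, b, r in all_circles:
    cv2.circle(img, (int(a), int(b)), int(r), (0, 255, 0), 2)

cv2.imshow("Detected circles", img)
cv2.waitKey(0)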
I'm using the OpenCV library for Python to detect the circles in an image. As a test case, I'm using the following image:
bottom of can:
I've written the following code, which should display the image before detection, then display the image with the detected circles added:
import cv2
import numpy as np

image = cv2.imread('can.png')
image_rgb = image.copy()
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
grayscaled_image = cv2.cvtColor(image_copy, cv2.COLOR_GRAY2BGR)

cv2.imshow("confirm", grayscaled_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

circles = cv2.HoughCircles(image_copy, cv2.HOUGH_GRADIENT, 1.3, 20, param1=60, param2=33, minRadius=10, maxRadius=28)
if circles is not None:
    print("FOUND CIRCLES")
    circles = np.round(circles[0, :]).astype("int")
    print(circles)
    for (x, y, r) in circles:
        cv2.circle(image, (x, y), r, (255, 0, 0), 4)
        cv2.rectangle(image, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

cv2.imshow("Test", image + image_rgb)
cv2.waitKey(0)
cv2.destroyAllWindows()
I get this: resultant image
I feel that my problem lies in the usage of the HoughCircles() function. Its usage is:
cv2.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])
where minDist is a value greater than 0 that requires detected circles to be a certain distance from one another. With this requirement, it would be impossible for me to properly detect all of the circles on the bottom of the can, as the center of each circle is in the same place. Would contours be a solution? How can I convert contours to circles so that I may use the coordinates of their center points? What should I do to best detect the circle objects for each ring in the bottom of the can?
Not all, but a majority of the circles can be detected by adaptively thresholding the image, finding the contours, and then fitting a minimum enclosing circle to the contours having an area greater than a threshold.
import cv2
import numpy as np

block_size, constant_c, min_cnt_area = 9, 1, 400

img = cv2.imread('viMmP.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(img_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, block_size, constant_c)
thresh_copy = thresh.copy()

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt) > min_cnt_area:
        (x, y), radius = cv2.minEnclosingCircle(cnt)
        center = (int(x), int(y))
        radius = int(radius)
        cv2.circle(img, center, radius, (255, 0, 0), 1)

cv2.imshow("Thresholded Image", thresh_copy)
cv2.imshow("Image with circles", img)
cv2.waitKey(0)
Now this script yields this result:
But there are certain trade-offs: for example, if block_size and constant_c are changed to 11 and 2 respectively, then the script yields:
You should try applying erosion with a kernel of a proper shape to separate the overlapping circles in the thresholded image.
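A rough sketch of that erosion suggestion, continuing from the script above (the elliptical 3x3 kernel is only a guess to experiment with, not a tuned value):

# Erode the thresholded image to separate touching blobs, then find contours again
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
eroded = cv2.erode(thresh_copy, kernel, iterations=1)
contours, hierarchy = cv2.findContours(eroded, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)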
You may look at the following links to understand more about adaptive thresholding and contours:
Thresholding examples: http://docs.opencv.org/3.1.0/d7/d4d/tutorial_py_thresholding.html
Thresholding reference: http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html
Contour Examples:
http://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html