Watershed segmentation on spherical reflecting objects - python

I am trying to do some image segmentation with watershed to detect all the balls in an image. I followed a PyImageSearch tutorial, but I am getting very poor results. My guess is that the reflection is the problem; still, the image is pretty clean and the instances look quite separable.
Am I using the correct approach here? Did I miss something?
I tested Cellpose and I get almost perfect results. It's not the same approach, of course, and I was hoping to get something with "classical" computer-vision techniques.
Below is the code I have, the original image, and the current result. I have tried changing the parameters, but I am not sure what I am doing here. I also looked at inRange, but I am afraid the balls are never of the same color.
original image: https://i.stack.imgur.com/7595R.jpg
import numpy as np
from scipy import ndimage
import cv2
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
import imutils
from matplotlib import pyplot as plt
img = cv2.imread('balls.jpg')
gray = 255 - cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # invert so the balls become the bright foreground
plt.imshow(gray)
# Things I tried...
# gray = cv2.dilate(gray,kernel,iterations = 1)
# hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# h, s, v = cv2.split(hsv)
# with THRESH_OTSU set, the given threshold value (250) is ignored and computed automatically
thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
# compute the exact Euclidean distance from every binary
# pixel to the nearest zero pixel, then find peaks in this
# distance map
D = ndimage.distance_transform_edt(thresh)
# note: `indices=False` was removed in scikit-image 0.20; on newer versions,
# build the boolean mask from the returned peak coordinates instead
localMax = peak_local_max(D, indices=False, min_distance=30, labels=thresh)
# perform a connected component analysis on the local peaks,
# using 8-connectivity, then apply the Watershed algorithm
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
# draw on mask
for label in np.unique(labels):
    # if the label is zero -> 'background'
    if label == 0:
        continue
    mask = np.zeros(gray.shape, dtype="uint8")
    mask[labels == label] = 255
    # detect contours in the mask and grab the largest one
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)
    # draw a circle enclosing the object
    ((x, y), r) = cv2.minEnclosingCircle(c)
    cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)
    cv2.putText(img, "#{}".format(label), (int(x) - 10, int(y)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
plt.imshow(img)
The labels: https://i.stack.imgur.com/M6hZb.png

matchTemplate "solution" with opencv/samples/mouse_and_match.py
use whatever you like to find the peaks.
Yes, with this approach you have to pick a template manually.
To fix that, there could be approaches exploiting self-similarity (auto-correlation). Exercise for the reader.
You can't pick whole balls as templates because their sizes vary, so that's already a big downside of template matching. Also, a rectangle around a circle contains a significant number of non-object pixels, which drives down the correlation score wherever that part varies.
Picking the reflection works (off of a medium ball) because the reflection shows an environment with nice strong contrast.
Notice the one small ball near the top, slightly to the right? That's not doing so well, for a bunch of reasons.
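For what it's worth, here is a minimal sketch of that workflow; the template coordinates and the 0.7 score threshold are assumptions to replace with your own picks (e.g. via mouse_and_match.py):
import cv2
import numpy as np
img = cv2.imread('balls.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# hypothetical template: a hand-picked patch around one ball's reflection
template = gray[140:170, 200:230]
th, tw = template.shape
# normalized cross-correlation; scores near 1.0 indicate a strong match
res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
# naive peak picking: threshold the score map (use whatever peak finder you like)
ys, xs = np.where(res >= 0.7)
for x, y in zip(xs, ys):
    cv2.rectangle(img, (int(x), int(y)), (int(x) + tw, int(y) + th), (0, 255, 0), 1)
Expect overlapping boxes around each peak; non-maximum suppression or a minimum-distance filter would clean that up.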

Related

How to find rectangles in a fully transparent object?

I have an input image of a fully transparent object:
I need to detect the 42 rectangles in this image. This is an example of the output image I need (I marked 6 rectangles for better understanding):
The problem is that the rectangles look really different. I have to use this input image.
How can I achieve this?
Edit 1: Here is the input image as a PNG:
If you calculate the variance down the rows and across the columns, using:
import cv2
import numpy as np
im = cv2.imread('YOURIMAGE', cv2.IMREAD_GRAYSCALE)
# Calculate horizontal and vertical variance
h = np.var(im, axis=1)
v = np.var(im, axis=0)
You can plot them and hopefully locate the peaks of variance which should be your objects:
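Continuing from the h and v computed above, here is a rough sketch of locating those peaks; scipy.signal.find_peaks and the prominence value are my additions, not part of the original answer:
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
# peaks in the row/column variance profiles should line up with the rectangles;
# prominence=100 is a placeholder to tune for your image
row_peaks, _ = find_peaks(h, prominence=100)
col_peaks, _ = find_peaks(v, prominence=100)
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(h)
ax1.plot(row_peaks, h[row_peaks], 'rx')
ax1.set_title('variance across rows')
ax2.plot(v)
ax2.plot(col_peaks, v[col_peaks], 'rx')
ax2.set_title('variance across columns')
plt.tight_layout()
plt.show()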
Mark Setchell's idea is out-of-the-box. Here is a more traditional approach.
Approach:
The image contains boxes whose intensity fades away in the lower rows. Global equalization would fail here, since it takes the intensity changes of the entire image into account; I opted for a local equalization approach instead. In OpenCV this is available as CLAHE (Contrast Limited Adaptive Histogram Equalization).
Using CLAHE:
Equalization is applied on individual regions of the image whose size can be predefined.
To avoid over-amplification, contrast limiting is applied (hence the name).
Let's see how to use it in our problem:
Code:
import cv2

# read image and store green channel
img = cv2.imread('input.png')  # path assumed; the original post didn't show the read
green_channel = img[:,:,1]
# grid-size for CLAHE
ts = 8
# initialize CLAHE function with parameters
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(ts, ts))
# apply the function
cl = clahe.apply(green_channel)
Notice in the image above that the boxes in the lower regions appear slightly darker, as expected. This will help us later on.
# apply Otsu threshold
r,th_cl = cv2.threshold(cl, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# dilation performed using vertical kernels to connect disjoined boxes
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3))
dilate = cv2.dilate(th_cl, vertical_kernel, iterations=1)
# find contours and draw bounding boxes
contours, hierarchy = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
img2 = img.copy()
for c in contours:
    area = cv2.contourArea(c)
    if area > 100:
        x, y, w, h = cv2.boundingRect(c)
        img2 = cv2.rectangle(img2, (x, y), (x + w, y + h), (0,255,255), 1)
(The top-rightmost box isn't covered properly. You would need to tweak the various parameters to get an accurate result.)
Other pre-processing approaches you can try (sketched after this list):
Global equalization
Contrast stretching
Normalization
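For completeness, a small sketch of those alternatives; the file name is a placeholder and none of this comes from the original answer:
import cv2
gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
# global equalization: one histogram for the whole image (contrast with CLAHE above)
equalized = cv2.equalizeHist(gray)
# contrast stretching / normalization: linearly map the image's min..max onto 0..255
stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)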

Low contrast stops HoughCircles from detection(?)

I am trying to build a script capable of counting how many Euros (for now just coins) are in a picture. To accomplish this I am thinking of first locating the coins and then comparing their relative sizes to determine the value of each one, as I've seen done elsewhere. My difficulty lies in the first step: the pre-processing of the image.
Note that this problem arises only when the contrast between the background and certain coins is very low.
I've tried various pre-processing methods combined with different detection methods such as connectedComponentsWithStats(), findContours() and SimpleBlobDetector, but the most successful combination I've achieved is:
import numpy as np
import cv2
import os
path = 'GenericImages/TP2/'
path_coins_highlighted = 'GenericImages/Highlights'
path_gaussian_blurs = 'GenericImages/Gaussian_Blurs'
dirs = os.listdir(path)
i = 0
for file in dirs:
    path2img = os.path.join(path, file)
    img = cv2.imread(path2img)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # clahe = cv2.createCLAHE(clipLimit=40, tileGridSize=(8, 8))
    # equalized = clahe.apply(gray)
    gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
    # gray_blur = cv2.bilateralFilter(gray, 9, 65, 9)
    circles = cv2.HoughCircles(gray_blur, cv2.HOUGH_GRADIENT, 1, 15, param1=50, param2=30, minRadius=0, maxRadius=0)
    circles = np.uint16(np.around(circles))
    for x in circles[0, :]:
        cv2.circle(img, (x[0], x[1]), x[2], (0, 255, 0), 2)
        cv2.circle(img, (x[0], x[1]), 2, (0, 0, 255), 3)
    cv2.imshow('Gray', gray)
    cv2.imshow('Gaussian Blur', gray_blur)
    path_save_gaussian_blur = os.path.join(path_gaussian_blurs, str(i) + '_gaussian_blur.jpg')
    cv2.imwrite(path_save_gaussian_blur, gray_blur)
    # cv2.imshow('equalized', equalized)
    cv2.imshow('Highlights', img)
    path_save_highlights = os.path.join(path_coins_highlighted, str(i) + '_highlight.jpg')
    cv2.imwrite(path_save_highlights, img)
    i += 1
    cv2.waitKey(0)
The problem lies in the consistency of the detection. I believe that when it fails, it is because there is so little contrast between the background and some coins that HoughCircles cannot detect them. The sets of images below show the cases in which the algorithm fails.
SET 0:
SET1:
I've tried tweaking with equalization and a bilateral filter with different parameters in order to remove noise but keep the transition zones (contours of the coin) but I haven't found significant improvements.
I would appreciate some direction or ideas of what I should be looking for to solve this issue.
The lighting is non-uniform and your images are small and heavily compressed. These are the two factors that hinder a good detection. It might be difficult to control lighting but at least make sure you use lossless image formats (such as png) to avoid compression artifacts.
Anyway, your non-uniform lighting makes this a good case for a lighting-normalization method called Gain Division. The idea is that you build a model of the background and then weight each input pixel by that model. The output gain should be relatively constant across most of the image. This is very useful, because if we eliminate the non-uniform lighting we can create a foreground mask for the coins and then simply approximate circles to the coins' contours.
Let's give it a try:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "FHlbm.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Get local maximum:
kernelSize = 30
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
localMax = cv2.morphologyEx(inputImage, cv2.MORPH_CLOSE, maxKernel, None, None, 1, cv2.BORDER_REFLECT101)
# Perform gain division
gainDivision = np.where(localMax == 0, 0, (inputImage / localMax))
# Clip the values to [0,255]
gainDivision = np.clip((255 * gainDivision), 0, 255)
# Convert the mat type from float to uint8:
gainDivision = gainDivision.astype("uint8")
cv2.imshow("Gain Division", gainDivision)
cv2.waitKey(0)
Which yields:
This is the result of applying gain division to the first image. Note that now the background is almost uniform. This is excellent, because we can apply a simple auto threshold to create a binary mask containing just the foreground objects, like this:
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(gainDivision, cv2.COLOR_BGR2GRAY)
# Get binary image via Otsu:
_, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
This is the binary image:
Now, we have a problem here. The compression artifacts make this mask noisy. We could apply a little bit of morphology to improve the binary blobs, but your image is really small, so I have skipped this step. If you have access to larger, lossless images, you might want to include a cleaning step.
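If you do get larger, lossless images, a minimal version of that cleaning step might look like this (continuing from binaryImage above; the kernel size is a guess to tune):
# opening removes small noise specks; closing fills small holes inside the blobs
cleanKernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
binaryImage = cv2.morphologyEx(binaryImage, cv2.MORPH_OPEN, cleanKernel)
binaryImage = cv2.morphologyEx(binaryImage, cv2.MORPH_CLOSE, cleanKernel)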
For now I'll simply try to compute the Minimum Enclosing Circle of each blob larger than a threshold, and I should get a detection a little bit more robust than Hough's. Let's see:
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contoursPoly = [None] * len(contours)
# Store the circles here:
detectedCircles = []
# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    # Get blob area:
    blobArea = cv2.contourArea(c)
    print(blobArea)
    # Set min area:
    minArea = 100
    # Process only big blobs:
    if blobArea > minArea:
        # Approximate the contour to a circle:
        (x, y), radius = cv2.minEnclosingCircle(c)
        # Compute the center and radius:
        center = (int(x), int(y))
        radius = int(radius)
        # Draw the circles:
        cv2.circle(inputImageCopy, center, radius, (0, 0, 255), 1)
        cv2.line(inputImageCopy, center, center, (0, 255, 0), 2)
        # Store the center and radius:
        detectedCircles.append([center, radius])
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
Let's see the results drawn onto a deep copy of the original image:
Not bad. All the circles' data (center and radius) is stored in the detectedCircles list. We can print the info like this:
# Check out the detected circles:
for i in range(len(detectedCircles)):
    center, r = detectedCircles[i]
    print("Circle #: "+str(i)+" x: "+str(center[0])+" y: "+str(center[1])+" r: "+str(r))

how to fill the hollow lines opencv

I have an image like this:
after I applied some processings e.g. cv2.Canny(), it looks like this now:
As you can see, the black lines have become hollow.
I have tried erosion and dilation, but if I apply them many times, the two entrances get closed (i.e. they become a connected line or a closed contour).
How could I make those lines solid like in the image below while keeping the two entrances unaffected?
Update 1
I have tested the following answers with a few photos, but the code seems customized to handle only this one particular picture. Due to the restrictions of SOF, I cannot upload photos larger than 2 MB, so I uploaded them to my Microsoft OneDrive folder for your convenience to test:
https://1drv.ms/u/s!Asflam6BEzhjgbIhgkL4rt1NLSjsZg?e=OXXKBK
Update 2
I accepted #fmw42's post as the answer, as it is the most detailed one. It doesn't directly answer my question, but it points out the correct way to process a maze, which is my ultimate goal. I like his approach to answering questions: he first tells you what each step should do so that you have a clear idea of how to carry out the task, then provides a full code example from beginning to end. Very helpful.
Due to the limitation of SOF, I can only accept one answer. If multiple answers were allowed, I would also accept Shamshirsaz.Navid's answer. It not only points in the correct direction to solve the issue, but the explanation with visualization really works well for me! I guess it works equally well for everyone trying to understand why each line of code is needed. He also follows up on my questions in the comments, which makes SOF a bit interactive :)
The Threshold track bar in Ann Zen's answer is also a very useful tip for quickly finding an optimal value.
Here is one way to process the maze and rectify it in Python/OpenCV.
Read the input
Convert to gray
Threshold
Use morphology close to remove the thinnest (extraneous) black lines
Invert the threshold
Get the external contours
Keep only those contours that are larger than 1/4 of both the width and height of the input
Draw those contours as white lines on black background
Get the convex hull from the white contour lines image
Draw the convex hull as white lines on black background
Use GoodFeaturesToTrack to get the 4 corners from the white hull lines image
Sort the 4 corners by angle relative to the centroid so that they are ordered clockwise: top-left, top-right, bottom-right, bottom-left
Set these points as the array of conjugate control points for the input
Use 1/2 the dimensions of the input to define the array of conjugate control points for the output
Compute the perspective transformation matrix
Warp the input image using the perspective matrix
Save the results
Input:
import cv2
import numpy as np
import math
# load image
img = cv2.imread('maze.jpg')
hh, ww = img.shape[:2]
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# use morphology to remove the thin lines
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,1))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# invert so that lines are white so that we can get contours for them
thresh_inv = 255 - thresh
# get external contours
contours = cv2.findContours(thresh_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
# keep contours whose bounding boxes are greater than 1/4 in each dimension
# draw them as white on black background
contour = np.zeros((hh,ww), dtype=np.uint8)
for cntr in contours:
    x,y,w,h = cv2.boundingRect(cntr)
    if w > ww/4 and h > hh/4:
        cv2.drawContours(contour, [cntr], 0, 255, 1)
# get convex hull from contour image white pixels
points = np.column_stack(np.where(contour.transpose() > 0))
hull_pts = cv2.convexHull(points)
# draw hull on copy of input and on black background
hull = img.copy()
cv2.drawContours(hull, [hull_pts], 0, (0,255,0), 2)
hull2 = np.zeros((hh,ww), dtype=np.uint8)
cv2.drawContours(hull2, [hull_pts], 0, 255, 2)
# get 4 corners from white hull points on black background
num = 4
quality = 0.001
mindist = max(ww,hh) // 4
corners = cv2.goodFeaturesToTrack(hull2, num, quality, mindist)
corners = np.int0(corners)
for corner in corners:
    px,py = corner.ravel()
    cv2.circle(hull, (px,py), 5, (0,0,255), -1)
# get angles to each corner relative to centroid and store with x,y values in list
# angles are clockwise between -180 and +180 with zero along positive X axis (to right)
corner_info = []
center = np.mean(corners, axis=0)
centx = center.ravel()[0]
centy = center.ravel()[1]
for corner in corners:
    px,py = corner.ravel()
    dx = px - centx
    dy = py - centy
    angle = (180/math.pi) * math.atan2(dy,dx)
    corner_info.append([px,py,angle])
# function to define sort key as element 2 (i.e. angle)
def takeThird(elem):
    return elem[2]
# sort corner_info on angle so result will be TL, TR, BR, BL order
corner_info.sort(key=takeThird)
# make conjugate control points
# get input points from corners
corner_list = []
for x, y, angle in corner_info:
    corner_list.append([x,y])
print(corner_list)
# define input points from (sorted) corner_list
input = np.float32(corner_list)
# define output points from dimensions of image, say half of input image
width = ww // 2
height = hh // 2
output = np.float32([[0,0], [width-1,0], [width-1,height-1], [0,height-1]])
# compute perspective matrix
matrix = cv2.getPerspectiveTransform(input,output)
# do perspective transformation setting area outside input to black
result = cv2.warpPerspective(img, matrix, (width,height), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=(0,0,0))
# save output
cv2.imwrite('maze_thresh.jpg', thresh)
cv2.imwrite('maze_contour.jpg', contour)
cv2.imwrite('maze_hull.jpg', hull)
cv2.imwrite('maze_rectified.jpg', result)
# Display various images to see the steps
cv2.imshow('thresh', thresh)
cv2.imshow('contour', contour)
cv2.imshow('hull', hull)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded Image after morphology:
Filtered Contours on black background:
Convex hull and 4 corners on input image:
Result from perspective warp:
You can try a simple threshold to detect the lines of the maze, as they are conveniently black:
import cv2
img = cv2.imread("maze.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
cv2.imshow("Image", thresh)
cv2.waitKey(0)
Output:
You can adjust the threshold yourself with trackbars:
import cv2
cv2.namedWindow("threshold")
cv2.createTrackbar("", "threshold", 0, 255, id)  # `id` serves as a do-nothing callback
img = cv2.imread("maze.jpg")
while True:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    t = cv2.getTrackbarPos("", "threshold")
    _, thresh = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    cv2.imshow("Image", thresh)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # If you press the q key
        break
Canny is an edge detector: it detects the lines along which color changes. A line in your input image has two such transitions, one on each side. Therefore you see two parallel edges for each line in the image. This answer of mine explains the difference between edges and lines.
So, you shouldn't be using an edge detector to detect lines in an image.
If a simple threshold doesn't properly binarize this image, try using a local threshold ("adaptive threshold" in OpenCV). Another thing that works well for images like these is applying a top hat filter (for this image, it would be a closing(img) - img), where the structuring element is adjusted to the width of the lines you want to find. This will result in an image that is easy to threshold and will preserve all lines thinner than the structuring element.
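A rough sketch of both suggestions; the block size, offset, and kernel width are placeholder values to adjust, not taken from this answer:
import cv2
img = cv2.imread("maze.jpg", cv2.IMREAD_GRAYSCALE)
# local ("adaptive") threshold: each pixel is compared to its neighborhood mean
adaptive = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 51, 10)
# top hat for dark lines on a bright background: closing(img) - img;
# make the structuring element a bit wider than the lines you want to keep
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
tophat = cv2.subtract(closed, img)  # lines thinner than the kernel stand out
_, line_mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)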
Check this:
import cv2
import numpy as np
im = cv2.imread("test2.jpg", 1)
# convert to gray
mask = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# convert to black and white
mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
# remove thin lines and text, then remake the main lines
mask = cv2.dilate(mask, np.ones((5, 5), 'uint8'))
mask = cv2.erode(mask, np.ones((4, 4), 'uint8'))
# smooth lines
mask = cv2.medianBlur(mask, 3)
# write output mask
cv2.imwrite("mask2.jpg", mask)
From here on, everything else is possible: you can delete extra blobs, extract lines from the original image according to the mask, and things like that.
Median:
The median changes little for this project and can safely be removed, but I prefer it because it rounds the ends of the lines a bit (you have to zoom in a lot to see the pixels). The technique is usually used to remove salt-and-pepper noise.
Erode Kernel:
For the kernel, the larger the numbers, the thicker the lines. This is not always good, because it causes the path lines to stick to the arrow, and later it becomes difficult to separate the paths from the arrow.
Update:
It does not matter if part of the maze is cleared. The important thing is that from this mask you can draw a rectangle around the shape and create a new mask for this image.
Draw a white rectangle around these paths in a new mask, then completely whiten the inside with floodFill or any other technique. Now you have a new mask that can take the whole shape out of the original image, and in the next step you can correct the perspective.
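A minimal sketch of that idea, continuing from the mask and im computed above; the rectangle thickness and the center seed point are assumptions:
# bounding box of the (black) maze lines in the mask
ys, xs = np.where(mask == 0)
x0, y0 = int(xs.min()), int(ys.min())
x1, y1 = int(xs.max()), int(ys.max())
# new mask: a white rectangle drawn around the maze...
region = np.zeros_like(mask)
cv2.rectangle(region, (x0, y0), (x1, y1), 255, 2)
# ...then flood-fill from the center to whiten everything inside it
ff_mask = np.zeros((region.shape[0] + 2, region.shape[1] + 2), np.uint8)
cv2.floodFill(region, ff_mask, ((x0 + x1) // 2, (y0 + y1) // 2), 255)
# take the whole shape out of the original image
cutout = cv2.bitwise_and(im, im, mask=region)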

How can I remove internal noises of gear completely in this image

gear
I want a generalized method so that any type of noise inside the gear can be removed. I am using OpenCV with Python.
I have already tried lots of filters and noise-removal methods, but I am not getting proper output. Here is my code:
import cv2
import numpy as np
import imutils
from imutils import perspective
from scipy.spatial import distance as dist
img1 = cv2.imread("5cam.png")
img = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
rows, cols = img.shape
dst = cv2.fastNlMeansDenoising(img, None, h=15, templateWindowSize=7, searchWindowSize=21)  # dst must be None or an array, not a number
gaussian_blurred_images = cv2.GaussianBlur(dst, (9, 9), 0)
_, thresh = cv2.threshold(gaussian_blurred_images, 200, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((7, 7), np.uint8)
dilation = cv2.dilate(thresh, kernel)
canny = cv2.Canny(dilation, 200, 255)
contours = cv2.findContours(canny, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_NONE)[0]
areas1 = []
for ctr in contours:
    areas = cv2.contourArea(ctr)
    areas1.append(areas)
amax = max(areas1)
max_contour = [contours[areas1.index(amax)]]
cv2.drawContours(img1, max_contour, -1, (0, 255, 255), 2)
cv2.imshow("g", dst)
cv2.imshow("thresh", thresh)
cv2.imshow("c", canny)
cv2.imshow("img", img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
First, you'll need to improve the lighting and the scene.
It needs to be more diffuse and not straight-on, to prevent reflections on the gear. Place lights to the side, all around. Don't use the camera's built-in flash. Use/build a "softbox", which is a white sheet of paper or fabric that diffuses the light before it hits the object (either translucent or used like a "mirror").
It needs to be more uniform. Your picture shows a "vignette", darkening near the picture's edges. The previous step will probably fix that.
Be careful about smudges and dirt on the background.
Move the camera further away and use zoom if possible. That will improve the overall sharpness of the picture (more depth of focus) and reduce lens distortion (if you care about that).
Then you'll need a different approach. I would suggest trying segmentation based on hue and saturation (select the uniformly blue background).
Use cv.cvtColor to transform the image into the HSV color space.
Then use numpy indexing/masking (or cv.inRange) to select a small range of hues (somewhere around green-blue, which is probably a hue of around 180 degrees, or 90 in cvtColor's CV_8U hue values) and saturations (medium to full). For example: mask = ((hsv_img >= (90, 170, 0)) & (hsv_img <= (100, 255, 255))).all(axis=2)
That approach, on the unimproved lighting, gets me this far. On better lighting it should work even better.
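A short sketch of that suggestion using cv2.inRange; the bounds come from the example above and will need tuning for your lighting:
import cv2
img = cv2.imread("5cam.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# select the blue background: hue ~90-100 on OpenCV's 8-bit scale, medium-to-full saturation
background = cv2.inRange(hsv, (90, 170, 0), (100, 255, 255))
# the gear is everything that is not background
gear_mask = cv2.bitwise_not(background)
result = cv2.bitwise_and(img, img, mask=gear_mask)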

OpenCV: Fit ellipse with most points on contour (instead of least squares)

I have a binarized image, which I've already used open/close morphology operations on (this is as clean as I can get it, trust me on this) that looks like so:
As you can see, there is an obvious ellipse with some distortion on the top. NOTE: I do not have prior info as to the size of the circle, and this has to run very quickly (HoughCircles is too slow, I've found). I'm trying to figure out how to fit an ellipse to it, such that it maximizes the number of points on the fitted ellipse that correspond to edges on the shape. That is, I want a result like this:
However, I can't seem to find a way in OpenCV to do this. Using the common tools of fitEllipse (blue line) and minAreaRect (green line), I get these results:
Which obviously do not represent the actual ellipse I'm trying to detect. Any thoughts as to how I could accomplish this? Happy to see examples in Python or C++.
Given the shown example image, I was very skeptical of the following statement:
which I've already used open/close morphology operations on (this is as clean as I can get it, trust me on this)
And, after reading your comment,
For precision, I need it to be fit within about 2 pixels accuracy
I was pretty sure there might be a good approximation using morphological operations.
Please have a look at the following code:
import cv2
# Load image (as BGR for later drawing the circle)
image = cv2.imread('images/hvFJF.jpg', cv2.IMREAD_COLOR)
# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Get rid of possible JPG artifacts (when do people learn to use PNG?...)
_, gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
# Downsize image (by factor 4) to speed up morphological operations
gray = cv2.resize(gray, dsize=(0, 0), fx=0.25, fy=0.25)
# Morphological Closing: Get rid of the hole
gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
# Morphological opening: Get rid of the stuff at the top of the circle
gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (121, 121)))
# Resize image to original size
gray = cv2.resize(gray, dsize=(image.shape[1], image.shape[0]))
# Find contours (only most external)
cnts, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Draw found contour(s) in input image
image = cv2.drawContours(image, cnts, -1, (0, 0, 255), 2)
cv2.imwrite('images/intermediate.png', gray)
cv2.imwrite('images/result.png', image)
The intermediate image looks like this:
And, the final result looks like this:
Since your image is quite large, I think no harm is done by downsizing it. The subsequent morphological operations are (heavily) sped up, which might be of interest for your setting.
According to your statement:
NOTE: I do not have prior info as to the size of the circle[...]
You can mostly find an appropriate approximation for the above kernel sizes from your inputs. Since there is only one example image given, we can't know the variability on that issue.
Hope that helps!
Hough-Circle is perfect for this. If you know the diameter you can get a better solution. If you only know a range, this might fit best:
EDIT: The reason this works better than the fitted ellipse is: if you are looking for a circle, you should use a circle as the model. The wiki article explains this beautiful idea.
By the way, you could have done this with opening and closing as well. (Given you know exactly how big your circle is.)
import skimage.io
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, color
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.util import img_as_ubyte
from skimage.transform import hough_circle, hough_circle_peaks
image = skimage.io.imread("hvFJF.jpg")
# Load picture and detect edges
edges = canny(image, sigma=3, low_threshold=10, high_threshold=50)
# Detect two radii
hough_radii = np.arange(250, 300, 10)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent 3 circles
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=3)
# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius)
    image[circy, circx] = (220, 20, 20)
ax.imshow(image, cmap=plt.cm.gray)
plt.show()
