How to detect circle defects? - Python

Is there any way to tell if a circle has such defects? Roundness does not work. Or is there a way to eliminate them?
import cv2
from math import pi

# cnts is a list of contours from cv2.findContours
perimeter = cv2.arcLength(cnts[0], True)
area = cv2.contourArea(cnts[0])
roundness = 4 * pi * area / (perimeter * perimeter)
print("Roundness:", roundness)

The "roundness" measure is sensitive to a precise estimate of the perimeter. What cv2.arcLength() does is add the lengths of each of the polygon edges, which severely overestimates the length of outlines. I think this is the main reason that this measure hasn't worked for you. With a better perimeter estimator you would get useful results.
An alternative measure that might be more useful is "circularity", defined as the coefficient of variation of the radius. In short, you compute the distance of each polygon vertex (i.e. outline point) to the centroid, then determine the coefficient of variation of these distances (== std / mean).
I wrote a quick Python script to compute this starting from an OpenCV contour:
import cv2
import numpy as np

# read in OP's example image, making sure we ignore the red arrow
img = cv2.imread('jGssp.png')[:, :, 1]
_, img = cv2.threshold(img, 127, 255, 0)

# get the contour of the shape
contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = contours[0][:, 0, :]

# add the first point as the last, to close it
contour = np.concatenate((contour, contour[0, None, :]))

# compute centroid
def cross_product(v1, v2):
    """2D cross product."""
    return v1[0] * v2[1] - v1[1] * v2[0]

area_sum = 0.0
xsum = 0.0
ysum = 0.0
for ii in range(1, contour.shape[0]):
    v = cross_product(contour[ii - 1, :], contour[ii, :])
    area_sum += v
    xsum += (contour[ii - 1, 0] + contour[ii, 0]) * v
    ysum += (contour[ii - 1, 1] + contour[ii, 1]) * v
centroid = np.array([xsum, ysum]) / (3 * area_sum)

# Compute coefficient of variation of distances to centroid (== circularity)
d = np.sqrt(np.sum((contour - centroid) ** 2, axis=1))
circularity = np.std(d) / np.mean(d)

This makes me think of a similar problem that I had. You could compute the signature of the shape. The signature can be defined as, for each pixel of the border of the shape, the distance between this pixel and the center of the shape.
For a perfect circle, the distance from the border to the center should be constant (in an ideal continuous world). When defects are visible on the edge of the circle (either dents or excesses), the ideal constant line changes to a wiggly curve, with huge variation on the defects.
It's fairly easy to detect those variations with an FFT, for example, which allows you to quantify how significant the defects are.
You can extend this solution to any given shape. If your ideal shape is a square, you can compute the signature, which will give you some kind of sinusoidal curve. Defects will appear in the same way on the curve and would be detectable with the same logic as for a circle.
I can't give you a code example, as the project was for a company, but the idea is still there.
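Since the original code can't be shared, here is a minimal sketch of the idea, assuming a binary image img containing a single shape and a modern OpenCV (the defect-energy measure at the end is one possible way to quantify significance, not the company project's exact metric):
import cv2
import numpy as np

# get the border pixels of the shape
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnt = contours[0]

# center of the shape from image moments
m = cv2.moments(cnt)
cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']

# the signature: one distance per border pixel
pts = cnt[:, 0, :].astype(np.float64)
signature = np.sqrt((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2)

# FFT of the signature; for a perfect circle almost all energy sits in the
# DC component (the constant radius), so the relative energy of the other
# frequencies quantifies the defect significance
spectrum = np.abs(np.fft.rfft(signature))
defect_energy = np.sum(spectrum[1:] ** 2) / (spectrum[0] ** 2)
print("Relative defect energy:", defect_energy)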

Here is one way to do that in Python/OpenCV.
Read the input
Threshold on white (to remove the red arrow)
Apply Hough Circle
Draw the circle on the thresholded image for comparison
Draw a white filled circle on a black background from the circle parameters
Get the difference between the thresholded image and the drawn circle image
Apply morphology open to remove the ring from the irregular boundary of the original circle
Count the number of white pixels in the previous image as the amount of defect
Input:
import cv2
import numpy as np
# Read image
img = cv2.imread('circle_defect.png')
hh, ww = img.shape[:2]
# threshold on white to remove red arrow
lower = (255,255,255)
upper = (255,255,255)
thresh = cv2.inRange(img, lower, upper)
# get Hough circles
min_dist = int(ww/5)
circles = cv2.HoughCircles(thresh, cv2.HOUGH_GRADIENT, 1, minDist=min_dist, param1=150, param2=15, minRadius=0, maxRadius=0)
print(circles)
# draw circles on input thresh (without red arrow)
circle_img = thresh.copy()
circle_img = cv2.merge([circle_img,circle_img,circle_img])
for circle in circles[0]:
    # draw the circle on the output image
    (x, y, r) = circle
    x = int(x)
    y = int(y)
    r = int(r)
    cv2.circle(circle_img, (x, y), r, (0, 0, 255), 1)
# draw filled circle on black background
circle_filled = np.zeros_like(thresh)
cv2.circle(circle_filled, (x,y), r, 255, -1)
# get difference between the thresh image and the circle_filled image
diff = cv2.absdiff(thresh, circle_filled)
# apply morphology to remove ring
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
result = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel)
# count non-zero pixels
defect_count = np.count_nonzero(result)
print("defect count:", defect_count)
# save results
cv2.imwrite('circle_defect_thresh.jpg', thresh)
cv2.imwrite('circle_defect_circle.jpg', circle_img)
cv2.imwrite('circle_defect_circle_diff.jpg', diff)
cv2.imwrite('circle_defect_detected.png', result)
# show images
cv2.imshow('thresh', thresh)
cv2.imshow('circle_filled', circle_filled)
cv2.imshow('diff', diff)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Input without Red Arrow:
Red Circle Drawn on Input:
Circle from HoughCircle:
Difference:
Difference Cleaned Up:
Textual Result:
defect count: 500

Related

OpenCV Hough Circles Transform not detecting cricket ball

This is the picture I have, and I want to detect the red ball:
However, I simply cannot get the code to work. I've tried experimenting with different param1 and param2 values, larger dp values, and even rescaling the image.
Any help on this (or even an alternate method for detecting the ball) would be much appreciated.
CODE:
import cv2 as cv
import numpy as np

frame = cv.imread("cricket_ball.png")

# Convert frame to grayscale
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

# cv.HoughCircles returns a 3-element floating-point vector (x,y,radius) for each circle detected
circles = cv.HoughCircles(gray,cv.HOUGH_GRADIENT,1,minDist=100, minRadius=2.5,maxRadius=10) # Cricket balls on video are approximately 10 pixels in diameter.
print(circles)

# Ensure at least one circle was found
if circles is not None:
    # Convert (x,y,radius) to integers
    circles = np.uint8(np.around(circles))
    for i in circles[0,:]:
        cv.circle(frame, (i[0],i[1]), i[2], (0,255,0), 20) # Produce circle outline

cv.imshow("Ball", frame)
cv.waitKey(0)
Here's my attempt. The idea is to find the ball, assuming it is (one of) the most saturated objects in the scene. This should cover all bright objects, independent of their color.
I don't use Hough circles because they are a little difficult to parametrize and often don't scale well to other images. Instead, I just detect blobs on a binary image and calculate blob circularity, assuming the thing I'm looking for is close to a circle (its circularity should be close to 1.0).
This is the code:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "fv8w3.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Convert the image to the HSV color space:
hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV)
# Set the HSV values:
lowRange = np.array([0, 120, 0])
uppRange = np.array([179, 255, 255])
# Create the HSV mask
binaryMask = cv2.inRange(hsvImage, lowRange, uppRange)
Let's check out what kind of HSV mask we get looking only for high Saturation values:
It's all right, the object of interest is there, but the mask is noisy. Let's try some morphology to define those blobs a little better:
# Apply a dilation to clean up the mask:
kernel = np.ones((3, 3), np.uint8)
binaryMask = cv2.morphologyEx(binaryMask, cv2.MORPH_DILATE, kernel, iterations=1)
This is the filtered image:
Now, let me detect contours and compute contour properties to filter the noise. I'll store the blobs of interest in a list called detectedCircles:
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(binaryMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Store the circles here:
detectedCircles = []

# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    # Get blob area:
    blobArea = cv2.contourArea(c)
    print(blobArea)

    # Get blob perimeter:
    blobPerimeter = cv2.arcLength(c, True)
    print(blobPerimeter)

    # Compute circularity:
    blobCircularity = (4 * 3.1416 * blobArea) / (blobPerimeter ** 2)
    print(blobCircularity)

    # Set min circularity:
    minCircularity = 0.8

    # Set min area:
    minArea = 35

    # Approximate the contour to a circle:
    (x, y), radius = cv2.minEnclosingCircle(c)

    # Compute the center and radius:
    center = (int(x), int(y))
    radius = int(radius)

    # Set red color (unfiltered blob):
    color = (0, 0, 255)

    # Process only big blobs:
    if blobCircularity > minCircularity and blobArea > minArea:
        # Set blue color (filtered blob):
        color = (255, 0, 0)
        # Store the center and radius:
        detectedCircles.append([center, radius])

    # Draw the circles:
    cv2.circle(inputImageCopy, center, radius, color, 2)

cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
I've set a circularity and minimum area test to filter the noisy blobs. All the relevant blobs are stored in the detectedCircles list as fitted circles. Let's see the result:
Looks good. The blob of interest is enclosed by a blue circle and the noise with red ones. Now, let's try another color for the ball. I created a version of the image with a blue ball instead of a red one; this is the result:

How do I find the largest empty space in such images?

I would like to find the empty spaces (black regions) in images similar to the one I've posted below, where I have randomly sized blocks scattered in it.
By empty spaces, I refer to such possible open fields (I have no particular lower bound on the area, but I would like to extract the top 3-4 largest ones present in the image). There is also no restriction on the geometric shape they can take, but these empty spaces must not contain any of the blue blocks.
What is the best way to go about this?
What I've done till now:
My original image actually looks like this. I identified all the points, grouped them based on a certain distance threshold and applied a convex hull around them. I'm unsure how to proceed further. Any help would be greatly appreciated. Thank you!
Here is one way in Python/OpenCV using the distance transform to find the largest Euclidean distance between the Xs.
Input:
import cv2
import numpy as np
import skimage.exposure
# read image
img = cv2.imread('xxx.png')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold to binary and invert so background is white and xxx are black
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
thresh = 255 - thresh
# add black border around threshold image to avoid corner being largest distance
thresh2 = cv2.copyMakeBorder(thresh, 1,1,1,1, cv2.BORDER_CONSTANT, (0))
h, w = thresh2.shape
# apply distance transform
distimg = thresh2.copy()
distimg = cv2.distanceTransform(distimg, cv2.DIST_L2, 5)
# remove excess border
distimg = distimg[1:h-1, 1:w-1]
# get max value and location in distance image
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(distimg)
# scale distance image for viewing
distimg = skimage.exposure.rescale_intensity(distimg, in_range='image', out_range=(0,255))
distimg = distimg.astype(np.uint8)
# draw circle on input
result = img.copy()
centx = max_loc[0]
centy = max_loc[1]
radius = int(max_val)
cv2.circle(result, (centx, centy), radius, (0,0,255), 1)
print('center x,y:', max_loc, 'radius:', max_val)
# save image
cv2.imwrite('xxx_distance.png',distimg)
cv2.imwrite('xxx_radius.png',result)
# show the images
cv2.imshow("thresh", thresh)
cv2.imshow("thresh2", thresh2)
cv2.imshow("distance", distimg)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Distance Transform Image:
Region of Largest Distance to Xs:
Textual Information:
center x,y: (179, 352) radius: 92.5286865234375

python + cv2 - determine radius of bright spot in image

I already have code that can detect the brightest point in an image (just gaussian blurring + finding the brightest pixel). I am working with photographs of sunsets, and right now can very easily get results like this:
My issue is that the radius of the circle is tied to how much Gaussian blur I use - I would like the radius to reflect the size of the sun in the photo (I have a dataset of ~500 sunset photos I am trying to process).
Here is an image with no circle:
I don't even know where to start on this; my traditional computer vision knowledge is lacking. If I don't get an answer, I might try something like calculating the distance from the center of the circle to the nearest edge (using Canny edge detection). If there is a better way, please let me know. Thank you for reading.
Here is one way to get a representative circle in Python/OpenCV. It finds the minimum enclosing circle.
Read the input
Crop out the white on the right side
Convert to gray
Apply median filtering
Do Canny edge detection
Get the coordinates of all the white pixels (canny edges)
Compute minimum enclosing circle to get center and radius
Draw a circle with that center and radius on a copy of the input
Save the result
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('sunset.jpg')
hh, ww = img.shape[:2]
# shave off white region on right side
img = img[0:hh, 0:ww-2]
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# median filter
median = cv2.medianBlur(gray, 3)
# do canny edge detection
canny = cv2.Canny(median, 100, 200)
# get canny points
# numpy points are (y,x)
points = np.argwhere(canny>0)
# get min enclosing circle
center, radius = cv2.minEnclosingCircle(points)
print('center:', center, 'radius:', radius)
# draw circle on copy of input
result = img.copy()
x = int(center[1])
y = int(center[0])
rad = int(radius)
cv2.circle(result, (x,y), rad, (255,255,255), 1)
# write results
cv2.imwrite("sunset_canny.jpg", canny)
cv2.imwrite("sunset_circle.jpg", result)
# show results
cv2.imshow("median", median)
cv2.imshow("canny", canny)
cv2.imshow("result", result)
cv2.waitKey(0)
Canny Edges:
Resulting Circle:
center: (265.5, 504.5) radius: 137.57373046875
Alternate
Fit ellipse to Canny points and then get the average of the two ellipse radii for the radius of the circle. Note a slight change in the Canny arguments to get only the top part of the sunset.
import cv2
import numpy as np
# read image
img = cv2.imread('sunset.jpg')
hh, ww = img.shape[:2]
# shave off white region on right side
img = img[0:hh, 0:ww-2]
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# median filter
median = cv2.medianBlur(gray, 3)
# do canny edge detection
canny = cv2.Canny(median, 100, 250)
# transpose canny image to compensate for following numpy points as y,x
canny_t = cv2.transpose(canny)
# get canny points
# numpy points are (y,x)
points = np.argwhere(canny_t>0)
# fit ellipse and get ellipse center, minor and major diameters and angle in degree
ellipse = cv2.fitEllipse(points)
(x,y), (d1,d2), angle = ellipse
print('center: (', x,y, ')', 'diameters: (', d1, d2, ')')
# draw ellipse
result = img.copy()
cv2.ellipse(result, (int(x),int(y)), (int(d1/2),int(d2/2)), angle, 0, 360, (0,0,0), 1)
# draw circle on copy of input of radius = half average of diameters = (d1+d2)/4
rad = int((d1+d2)/4)
xc = int(x)
yc = int(y)
print('center: (', xc,yc, ')', 'radius:', rad)
cv2.circle(result, (xc,yc), rad, (0,255,0), 1)
# write results
cv2.imwrite("sunset_canny_ellipse.jpg", canny)
cv2.imwrite("sunset_ellipse_circle.jpg", result)
# show results
cv2.imshow("median", median)
cv2.imshow("canny", canny)
cv2.imshow("result", result)
cv2.waitKey(0)
Canny Edge Image:
Ellipse and Circle drawn on Input:
Use Canny edge first. Then try either Hough circle or Hough ellipse on the edge image. These are brute force methods, so they will be slow, but they are resistant to non-circular or non-elliptical contours. You can easily filter results such that the detected result has a center near the brightest point. Also, knowing the estimated size of the sun will help with computation speed.
You can also look into using cv2.findContours and cv2.approxPolyDP to extract continuous contours from your images. You could filter by perimeter length and shape and then run a least squares fit, or Hough fit.
EDIT
It may be worth trying an intensity filter before the Canny edge detection. I suspect it will clean up the edge image considerably.
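As a rough sketch of that idea (the threshold value of 200 is an assumption to tune per dataset; the file name matches the code above):
import cv2
import numpy as np

img = cv2.imread('sunset.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# keep only bright pixels so that dim structure contributes no edges
bright = cv2.threshold(gray, 200, 255, cv2.THRESH_TOZERO)[1]

# Canny on the intensity-filtered image
canny = cv2.Canny(cv2.medianBlur(bright, 3), 100, 200)

# fit a circle to the remaining edge points (argwhere gives (y,x) order)
points = np.argwhere(canny > 0).astype(np.int32)
if len(points) > 0:
    center, radius = cv2.minEnclosingCircle(points)
    print('center (y,x):', center, 'radius:', radius)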

How can I improve Watershed segmentation of heterogenous structures in Python?

I'm following a simple approach to segment cells (microscopy images) using the Watershed algorithm in Python. I'm happy with the result 90% of the time, but I have two main problems: (1) the markers/contours are really "spiky" and (2) the algorithm sometimes fails when two cells are too close to each other (i.e. they are segmented together). Can you give some tips on how to improve it?
Here's the code I'm using and an output image showing my 2 issues.
import cv2
import numpy as np
import skimage as sk
import skimage.morphology
import skimage.segmentation
from skimage import img_as_ubyte
from scipy.ndimage import binary_fill_holes

# Adjustable parameters for a future function
img_file = NP_file
sigma = 9    # size of the median filter kernel; has to be an odd number
alpha = 0.2  # scaling factor for the distance transform
clear_border = False
remove_small_objects = True

# read image and convert to grayscale
# (enhanceContrast is the author's own helper function, not shown)
im = cv2.imread(NP_file, 1)
im = enhanceContrast(im)
im_gray = cv2.cvtColor(im.copy(), cv2.COLOR_BGR2GRAY)

# basic median filter
im_blur = cv2.medianBlur(im_gray, ksize=sigma)

# threshold image
th, im_seg = cv2.threshold(im_blur, im_blur.mean(), 255, cv2.THRESH_BINARY)

# fill holes in the segmented image
im_filled = binary_fill_holes(im_seg)

# discard cells touching the border
if clear_border == True:
    im_filled = skimage.segmentation.clear_border(im_filled)

# filter small particles
if remove_small_objects == True:
    im_filled = sk.morphology.remove_small_objects(im_filled, min_size=5000)

# apply distance transform:
# labels each pixel of the image with the distance to the nearest obstacle pixel,
# in this case a boundary pixel of the binary image
dist_transform = cv2.distanceTransform(img_as_ubyte(im_filled), cv2.DIST_L2, 3)

# get sure foreground area: region near the center of each object
fg_val, sure_fg = cv2.threshold(dist_transform, alpha * dist_transform.max(), 255, 0)

# get sure background area: region far away from the objects
sure_bg = cv2.dilate(img_as_ubyte(im_filled), np.ones((3, 3), np.uint8), iterations=6)

# the remaining regions (borders) are those where we don't know
# whether they are foreground or background
borders = cv2.subtract(sure_bg, np.uint8(sure_fg))

# use connected-components labelling:
# scans the image and groups pixels into components based on connectivity,
# labelling the background 0 and other objects with integers starting from 1
n_markers, markers1 = cv2.connectedComponents(np.uint8(sure_fg))

# filter small particles again (because of segmentation artifacts)
if remove_small_objects == True:
    markers1 = sk.morphology.remove_small_objects(markers1, min_size=1000)

# make sure the background is 1 and not 0, and that borders are marked as 0
markers2 = markers1 + 1
markers2[borders == 255] = 0

# implement the watershed algorithm: connects markers with the original image;
# the label image will be modified and markers in the border areas change to -1
im_out = im.copy()
markers3 = cv2.watershed(im_out, markers2)

# generate an extra image with color labels, only for visualization
# (color the boundary pixels, which equal -1 after the watershed)
im_out[markers3 == -1] = [0, 255, 255]
In case you want to try to reproduce my results, you can find my .tif file here:
https://drive.google.com/file/d/13KfyUVyHodtEOP_yKAnfFCAhgyoY0BQL/view?usp=sharing
Thanks!
In my experience, the best approach is to apply the watershed algorithm "only when needed". It is computationally intensive and not needed for the majority of the cells in your image.
This is the code I have used with your image:
import cv2
import numpy as np

# 'img' is the input BGR image
# Threshold your image
# This example worked very well with a threshold value of 1
tv, thresh = cv2.threshold(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 1, 255, cv2.THRESH_BINARY)

# Minimize the holes in the cells to facilitate finding contours
for i in range(5):
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, np.ones((3,3)))
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, np.ones((3,3)))

# Find contours and keep the ones big enough to be a cell
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = [c for c in contours if cv2.contourArea(c) > 400]

output = np.zeros_like(thresh)
cv2.drawContours(output, contours, -1, 255, -1)
for i, contour in enumerate(contours):
    x, y, w, h = cv2.boundingRect(contour)
    cv2.putText(output, f"{i}", (x, y), cv2.FONT_HERSHEY_PLAIN, 1, 255, 2)
The output of this code is this image:
As you can see, only a pair of cells (contour #7) needs splitting using the watershed algorithm.
Running the watershed algorithm on that cell is very fast (smaller image to work with) and this is the result:
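That per-cell step isn't shown above; a minimal sketch of it might look like this (the contour index 7 and the 0.5 distance-transform threshold are assumptions):
import cv2
import numpy as np

# crop the one blob that needs splitting; 'img' and 'contours' come from the snippet above
x, y, w, h = cv2.boundingRect(contours[7])
roi = img[y:y+h, x:x+w].copy()

# binary mask of just this blob, shifted into the crop's coordinates
mask = np.zeros((h, w), np.uint8)
shifted = (contours[7] - [x, y]).astype(np.int32)
cv2.drawContours(mask, [shifted], -1, 255, -1)

# sure foreground from the distance transform, sure background from dilation
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 3)
sure_fg = np.uint8(dist > 0.5 * dist.max()) * 255
sure_bg = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# markers: background = 1, blobs = 2..n, unknown = 0
n, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0

# watershed splits the lump; ridge pixels become -1
markers = cv2.watershed(roi, markers)
roi[markers == -1] = (0, 0, 255)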
EDIT
Some of the cell morphology calculations that can be used to assess whether the watershed algorithm should be run on an object in the image:
import math

# area
area = cv2.contourArea(contour)

# perimeter, with a minimum value of 0.01 to avoid division by zero in other calculations
perimeter = max(0.01, cv2.arcLength(contour, True))

# circularity
circularity = (4 * math.pi * area) / (perimeter ** 2)

# check whether the cell is convex (not smoothly elliptical)
hull = cv2.convexHull(contour)
convexity = cv2.arcLength(hull, True) / perimeter
approx = cv2.approxPolyDP(contour, 0.1 * perimeter, True)
convex = cv2.isContourConvex(approx)
You will need to find the thresholds for each of the measurements in your project. In my project, cells were elliptic, and a blob with a large area that was convex usually meant there were two or more cells lumped together.

Advanced square detection (with connected region)

If the squares have connected regions in the image, how can I detect them?
I have tested the method mentioned in
OpenCV C++/Obj-C: Advanced square detection
It did not work well.
Any good ideas ?
import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    d1, d2 = (p0-p1).astype('float'), (p2-p1).astype('float')
    return abs( np.dot(d1, d2) / np.sqrt( np.dot(d1, d1)*np.dot(d2, d2) ) )

def find_squares(img):
    squares = []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # cv2.imshow("gray", gray)
    gaussian = cv2.GaussianBlur(gray, (5, 5), 0)
    temp, bin = cv2.threshold(gaussian, 80, 255, cv2.THRESH_BINARY)
    # cv2.imshow("bin", bin)
    contours, hierarchy = cv2.findContours(bin, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours( gray, contours, -1, (0, 255, 0), 3 )
    #cv2.imshow('contours', gray)
    for cnt in contours:
        cnt_len = cv2.arcLength(cnt, True)
        cnt = cv2.approxPolyDP(cnt, 0.02*cnt_len, True)
        if len(cnt) == 4 and cv2.contourArea(cnt) > 1000 and cv2.isContourConvex(cnt):
            cnt = cnt.reshape(-1, 2)
            max_cos = np.max([angle_cos( cnt[i], cnt[(i+1) % 4], cnt[(i+2) % 4] ) for i in xrange(4)])
            if max_cos < 0.1:
                squares.append(cnt)
    return squares

if __name__ == '__main__':
    img = cv2.imread('123.bmp')
    #cv2.imshow("origin", img)
    squares = find_squares(img)
    print "Found %d squares" % len(squares)
    cv2.drawContours( img, squares, -1, (0, 255, 0), 3 )
    cv2.imshow('squares', img)
    cv2.waitKey()
I used the method from the OpenCV squares example, but the result is not good.
Applying a Watershed Transform based on the Distance Transform will separate the objects:
Handling objects at the border is always problematic, and they are often discarded, so the pink rectangle at top left that was not separated is not a problem at all.
Given a binary image, we can apply the Distance Transform (DT) and from it obtain markers for the Watershed. Ideally there would be a ready function for finding regional minima/maxima, but since it isn't there, we can make a decent guess on how we can threshold DT. Based on the markers we can segment using Watershed, and the problem is solved. Now you can worry about distinguishing components that are rectangles from those that are not.
import sys
import cv2
import numpy
import random
from scipy.ndimage import label

def segment_on_dt(img):
    dt = cv2.distanceTransform(img, 2, 3) # L2 norm, 3x3 mask
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    dt = cv2.threshold(dt, 100, 255, cv2.THRESH_BINARY)[1]
    lbl, ncc = label(dt)

    lbl[img == 0] = lbl.max() + 1
    lbl = lbl.astype(numpy.int32)
    cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), lbl)
    lbl[lbl == -1] = 0
    return lbl

img = cv2.cvtColor(cv2.imread(sys.argv[1]), cv2.COLOR_BGR2GRAY)
img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)[1]
img = 255 - img # White: objects; Black: background

ws_result = segment_on_dt(img)
# Colorize
height, width = ws_result.shape
ws_color = numpy.zeros((height, width, 3), dtype=numpy.uint8)
lbl, ncc = label(ws_result)
for l in xrange(1, ncc + 1):
    a, b = numpy.nonzero(lbl == l)
    if img[a[0], b[0]] == 0: # Do not color background.
        continue
    rgb = [random.randint(0, 255) for _ in xrange(3)]
    ws_color[lbl == l] = tuple(rgb)

cv2.imwrite(sys.argv[2], ws_color)
From the above image you can consider fitting ellipses in each component to determine rectangles. Then you can use some measurement to define whether the component is a rectangle or not. This approach has a greater chance to work for rectangles that are fully visible, and will likely produce bad results for partially visible ones. The following image shows the result of such an approach, considering a component to be a rectangle if the rectangle from the fitted ellipse is within 10% of the component's area.
# Fit ellipse to determine the rectangles.
wsbin = numpy.zeros((height, width), dtype=numpy.uint8)
wsbin[cv2.cvtColor(ws_color, cv2.COLOR_BGR2GRAY) != 0] = 255

ws_bincolor = cv2.cvtColor(255 - wsbin, cv2.COLOR_GRAY2BGR)
lbl, ncc = label(wsbin)
for l in xrange(1, ncc + 1):
    yx = numpy.dstack(numpy.nonzero(lbl == l)).astype(numpy.int64)
    xy = numpy.roll(numpy.swapaxes(yx, 0, 1), 1, 2)
    if len(xy) < 100: # Too small.
        continue

    ellipse = cv2.fitEllipse(xy)
    center, axes, angle = ellipse
    rect_area = axes[0] * axes[1]
    if 0.9 < rect_area / float(len(xy)) < 1.1:
        rect = numpy.round(numpy.float64(
            cv2.cv.BoxPoints(ellipse))).astype(numpy.int64)
        color = [random.randint(60, 255) for _ in xrange(3)]
        cv2.drawContours(ws_bincolor, [rect], 0, color, 2)

cv2.imwrite(sys.argv[3], ws_bincolor)
Solution 1:
Dilate your image to delete connected components.
Find contours of the detected components. Eliminate contours which are not rectangles by introducing some measure (e.g. the ratio of perimeter to area).
This solution will not detect rectangles connected to the borders.
Solution 2:
Dilate to delete connected components.
Find contours.
Approximate contours to reduce their points (a rectangle contour should have 4 points).
Check whether the angles between the contour lines are 90 degrees.
Eliminate contours whose angles are not close to 90 degrees.
This should solve the problem with rectangles connected to the borders; a rough sketch follows below.
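A minimal sketch of Solution 2, assuming dark squares on a light background in '123.bmp' and a modern OpenCV (the erosion count and the 0.1 angle-cosine tolerance are assumptions to tune):
import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    # cosine of the angle at vertex p1; near 0 means near 90 degrees
    d1, d2 = (p0 - p1).astype(float), (p2 - p1).astype(float)
    return abs(np.dot(d1, d2) / np.sqrt(np.dot(d1, d1) * np.dot(d2, d2)))

img = cv2.imread('123.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
binary = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)[1]

# erode the (white) squares a little so touching ones separate,
# which is the same as dilating the background
binary = cv2.erode(binary, None, iterations=4)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = []
for cnt in contours:
    # approximate the contour; a rectangle should reduce to 4 points
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4 and cv2.isContourConvex(approx):
        pts = approx.reshape(-1, 2)
        # keep the contour only if all corner angles are close to 90 degrees
        if max(angle_cos(pts[i], pts[(i + 1) % 4], pts[(i + 2) % 4])
               for i in range(4)) < 0.1:
            rects.append(approx)

cv2.drawContours(img, rects, -1, (0, 255, 0), 2)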
You have three problems:
The rectangles are not very strict rectangles (the edges are often somewhat curved)
There are a lot of them.
They are often connected.
It seems that all your rects are essentially the same size(?), and do not greatly overlap, but the pre-processing has connected them.
For this situation the approach I would try is:
Dilate your image a few times (as also suggested by @krzych) - this will remove the connections, but result in slightly smaller rects.
Use scipy to label and find_objects - you now know the position and slice for every remaining blob in the image.
Use minAreaRect to find the center, orientation, width and height of each rectangle.
You can use step 3 to test whether a blob is a valid rectangle or not, by its area, dimension ratio, or proximity to the edge.
This is quite a nice approach, as we assume each blob is a rectangle, so minAreaRect will find the parameters for our minimum enclosing rectangle. Further, we could test each blob using something like Hu moments if absolutely necessary.
Here is what I was suggesting in action, boundary collision matches shown in red.
Code:
import numpy as np
import cv2
from cv2 import cv
import scipy
from scipy import ndimage

im_col = cv2.imread('jdjAf.jpg')
im = cv2.imread('jdjAf.jpg', cv2.CV_LOAD_IMAGE_GRAYSCALE)

im = np.where(im > 100, 0, 255).astype(np.uint8)
im = cv2.erode(im, None, iterations=8)
im_label, num = ndimage.label(im)
for label in xrange(1, num + 1):
    points = np.array(np.where(im_label == label)[::-1]).T.reshape(-1, 1, 2).copy()
    rect = cv2.minAreaRect(points)
    lines = np.array(cv2.cv.BoxPoints(rect)).astype(np.int)
    if any([np.any(lines[:,0]<=0), np.any(lines[:,0]>=im.shape[1]-1),
            np.any(lines[:,1]<=0), np.any(lines[:,1]>=im.shape[0]-1)]):
        cv2.drawContours(im_col, [lines], 0, (0,0,255), 1)
    else:
        cv2.drawContours(im_col, [lines], 0, (255,0,0), 1)

cv2.imshow('im', im_col)
cv2.imwrite('rects.png', im_col)
cv2.waitKey()
I think the watershed and distanceTransform approach demonstrated by @mmgp is clearly superior for segmenting the image, but this simple approach can be effective depending upon your needs.
