Shape detection with OpenCV/Python

I'm trying to teach my test automation framework to detect a selected item in an app using OpenCV (the framework grabs frames/screenshots from the device under test). Selected items are always a certain size and always have a blue border, which helps, but they contain different thumbnail images. See the example image provided.
I have done a lot of Googling and reading on the topic and I'm close to getting it to work, except for one scenario: image C in the example image, where there is a play symbol on the selected item.
My theory is that OpenCV gets confused in this case because the play symbol is basically a circle with a triangle in it, and I'm asking it to find a rectangular shape.
I found this to be very helpful: https://www.learnopencv.com/blob-detection-using-opencv-python-c/
My code looks like this:
import cv2
import numpy as np
img = "testimg.png"
values = {"min threshold": {"large": 10, "small": 1},
          "max threshold": {"large": 200, "small": 800},
          "min area": {"large": 75000, "small": 100},
          "max area": {"large": 80000, "small": 1000},
          "min circularity": {"large": 0.7, "small": 0.60},
          "max circularity": {"large": 0.82, "small": 63},
          "min convexity": {"large": 0.87, "small": 0.87},
          "min inertia ratio": {"large": 0.01, "small": 0.01}}
size = "large"
# Read image
im = cv2.imread(img, cv2.IMREAD_GRAYSCALE)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = values["min threshold"][size]
params.maxThreshold = values["max threshold"][size]
# Filter by Area.
params.filterByArea = True
params.minArea = values["min area"][size]
params.maxArea = values["max area"][size]
# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = values["min circularity"][size]
params.maxCircularity = values["max circularity"][size]
# Filter by Convexity
params.filterByConvexity = False
params.minConvexity = values["min convexity"][size]
# Filter by Inertia
params.filterByInertia = False
params.minInertiaRatio = values["min inertia ratio"][size]
# Create a detector with the parameters
detector = cv2.SimpleBlobDetector_create(params)  # cv2.SimpleBlobDetector(params) on OpenCV 2.4
# Detect blobs.
keypoints = detector.detect(im)
for k in keypoints:
    print(k.pt)
    print(k.size)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures
# the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0, 0, 255),
                                      cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show blobs
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
How do I get OpenCV to only look at the outer shape defined by the blue border and ignore the inner shapes (the play symbol and, of course, the thumbnail image)? I'm sure it must be doable somehow.

There are many different techniques that will do the job. I am not really sure how the blob detector works, so I took another approach. I am also not entirely sure what you need, but you can modify this solution for your needs.
import cv2
import numpy as np
from matplotlib.pyplot import figure
import matplotlib.pyplot as plt
img_name = "CbclA.png" #Image you have provided
min_color = 150 #Color you are interested in (from green channel)
max_color = 170
min_size = 4000 #Size of border you are interested in (number of pixels)
max_size = 30000
img_rgb = cv2.imread(img_name)
img = img_rgb[:,:,1] #Extract green channel
img_filtered = np.bitwise_and(img>min_color, img < max_color) #Get only colors of your border
nlabels, labels, stats, centroids = cv2.connectedComponentsWithStats(img_filtered.astype(np.uint8))
good_area_index = np.where(np.logical_and(stats[:,4] > min_size,stats[:,4] < max_size)) #Filter only areas we are interested in
for area in stats[good_area_index]: #Draw it
    cv2.rectangle(img_rgb, (area[0], area[1]), (area[0] + area[2], area[1] + area[3]), (0, 0, 255), 2)
cv2.imwrite('result.png',img_rgb)
Take a look at the documentation of connectedComponentsWithStats.
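For reference, here is a minimal, self-contained illustration of the stats layout (my own example, not part of the answer above): each row of stats is [x, y, width, height, area], and row 0 is always the background.
import cv2
import numpy as np

binary = np.zeros((10, 10), np.uint8)
binary[2:5, 3:8] = 1  # one 5x3-pixel component
nlabels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
x, y, w, h, area = stats[1]  # component 1; row 0 is the background
print(x, y, w, h, area)      # -> 3 2 5 3 15
assert area == stats[1, cv2.CC_STAT_AREA]  # the named constants work too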
Note: I am using Python 3
Edit: result image added

If I got it right, you want a rectangle bounding the blue box with curved edges. If this is the case, it's very easy.
Apply this -
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 75, 200) # You'll have to tune these
# Find contours
# OpenCV 3.x returns (image, contours, hierarchy); on 2.4/4.x use: contours, _ = cv2.findContours(...)
(_, contours, _) = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0]
# This should return only one contour in 'contours' in your case
This should do it, but if you still get a contour (the bounding box) with curved edges, apply this -
rect = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
# Play with the second parameter, appropriate range would be from 1% to 5%
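As a possible follow-up (my addition, reusing image and rect from the snippets above), you can draw the 4-point approximation and take an axis-aligned bounding box from it:
# Assumes image and rect from the snippets above
cv2.drawContours(image, [rect], -1, (0, 0, 255), 2)  # outline of the selection
x, y, w, h = cv2.boundingRect(rect)                  # axis-aligned bounding box
print("selected item at", (x, y), "size", (w, h))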

I toyed around with this a bit more after reading your suggestions and found that blob detection is not the way to go. Using color recognition to find the contours, as suggested above, solved the issue. Thanks again!
My solution looks like this:
import cv2
import numpy

frame = cv2.imread("image.png")
color = ((200, 145, 0), (255, 200, 50))
lower_color = numpy.array(color[0], dtype="uint8")
upper_color = numpy.array(color[1], dtype="uint8")
# Look for the color in the frame and identify contours
color = cv2.GaussianBlur(cv2.inRange(frame, lower_color, upper_color), (3, 3), 0)
contours, _ = cv2.findContours(color.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    for c in contours:
        # cv2.boxPoints is cv2.cv.BoxPoints on OpenCV 2.4
        rectangle = numpy.int32(cv2.boxPoints(cv2.minAreaRect(c)))
        # Draw a rectangular frame around the detected object
        cv2.drawContours(frame, [rectangle], -1, (0, 0, 255), 4)
cv2.imshow("frame", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()


Simple Image Segmentation OpenCV-Watershed

I'm still pretty new to the image segmentation / OpenCV scene and hope you can help me out.
Currently, I'm trying to calculate the percentage of the two liquids within this photo.
It should be something like this (as an example).
I thought OpenCV's watershed could help me, but I'm unable to get it right. I'm trying to set the markers manually, but I get the following error: (-215:Assertion failed) src.type() == CV_8UC3 && dst.type() == CV_32SC1 in function 'cv::watershed'
(probably I got my markers all wrong)
If anyone can help me (maybe there is a better way to do this), I would greatly appreciate it.
This is the code I use:
import cv2
import numpy as np
img = cv2.imread('image001.jpg')
# convert the image to grayscale and blur it slightly
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)
# read image
#img = cv2.imread('jeep.jpg')
hh, ww = img.shape[:2]
hh2 = hh // 2
ww2 = ww // 2
# define circles
radius = hh2
yc = hh2
xc = ww2
# draw filled circle in white on black background as mask
mask = np.zeros_like(gray)
mask = cv2.circle(mask, (xc,yc), radius, (255,255,255), -1)
# apply mask to image
result = cv2.bitwise_and(gray, mask)
cv2.imshow("Output", result)
ret, thresh = cv2.threshold(result,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imshow("ret 1", thresh)
markers = cv2.circle(thresh, (xc,50), 5, 1, -1)
markers = cv2.circle(thresh, (xc,yc+50), 5, 2, -1)
markers = cv2.circle(thresh, (15,15), 5, 3, -1)
cv2.imshow("marker 1", markers)
markers = cv2.watershed(img, markers)
img[markers == -1] = [0,0,255]
cv2.imshow("watershed", markers)
cv2.waitKey(0)
First of all, you get the exception because OpenCV's watershed() function expects the markers array to be made of 32-bit integers. Converting it back and forth removes the error:
markers = markers.astype(np.int32)
markers = cv2.watershed(img, markers)
markers = markers.astype(np.uint8)
However, if you execute your code now, you will see that the result isn't very good: the watershed algorithm merges the liquid areas with unwanted regions. To make your code work like this, you would have to provide one marker for every feature in your image, which would be very impractical.
Let's first extract the region of the image which interests us, i.e. the circle containing the two liquids. You already tried a masking operation; I improved it in the code below by detecting the circle automatically with OpenCV's HoughCircles() function. The watershed algorithm then needs an image with one marker per region, so we fill the marker image with zeros, place one marker in each liquid area and one marker for the background. Putting all of this together, we obtain the code and result below.
Considering the quality of your image (reflections, compression artefacts, etc.), I personally think the result is quite good, and I am not sure another method would give you better results. On the other hand, if you can get better-quality images, segmentation methods based on colour space may be more appropriate (as pointed out by Christoph Rackwitz).
import cv2
import numpy as np
img = cv2.imread('image001.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Detect circle
circles = cv2.HoughCircles(img_gray, cv2.HOUGH_GRADIENT, 1.3, 100)
# only one circle is detected
circle = circles[0,0,:]
center_x, center_y, radius = circles[0,0,0], circles[0,0,1], circles[0,0,2]
img_circle = img.copy()
cv2.circle(img_circle, (center_x, center_y), int(radius), (0, 255, 0), 3)
cv2.imshow("circle", img_circle)
# build mask for this circle
mask = np.zeros(img_gray.shape, np.uint8)
cv2.circle(mask, (center_x, center_y), int(radius), 255, -1)
img_masked = img.copy()
img_masked[mask == 0] = 0
cv2.imshow("image-masked", img_masked)
# create markers
markers = np.zeros(img_gray.shape, np.int32)
markers[10, 10] = 1 # background marker
markers[int(center_y - radius*0.9), int(center_x)] = 100 # top liquid
markers[int(center_y + radius*0.9), int(center_x)] = 200 # bottom liquid
# do watershed
markers = cv2.watershed(img_masked, markers)
cv2.imshow("watershed", markers.astype(np.uint8))
cv2.waitKey(0)
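Since the original goal was the percentage of the two liquids, a short follow-up sketch (my addition, reusing the marker labels 100 and 200 from the code above) could count the labelled pixels:
# Estimate each liquid's share from the watershed labels,
# ignoring the background label (1) and boundary label (-1).
top = np.count_nonzero(markers == 100)
bottom = np.count_nonzero(markers == 200)
total = float(top + bottom)
print("top liquid: %.1f%%, bottom liquid: %.1f%%" % (100 * top / total, 100 * bottom / total))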

OpenCV python overlapping particles size and number

I have greyscale images which show particles on a surface. I'd like to write a program which finds the particles, draws a circle around each one, and counts the circles and the pixels inside the circles.
One of the main problems is that the particles overlap. The next problem is that the contrast of the images changes from one image to the next.
Here is my first trial:
import matplotlib.pyplot as plt
import cv2 as cv
import imutils
import numpy as np
from PIL import Image
import os.path
fileref="test.png"
original = cv.imread(fileref)
img = original
cv.imwrite( os.path.join("inverse_"+fileref[:-4]+".png"), ~img );
img = cv.medianBlur(img,5)
img_grey = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
ret,th1 = cv.threshold(img_grey,130,255,cv.THRESH_BINARY)
th2 = cv.adaptiveThreshold(img_grey,255,cv.ADAPTIVE_THRESH_MEAN_C,\
                           cv.THRESH_BINARY,11,2)
th3 = cv.adaptiveThreshold(img_grey,255,cv.ADAPTIVE_THRESH_GAUSSIAN_C,\
                           cv.THRESH_BINARY,11,2)
titles = ['Original Image', 'Global Thresholding (v = 127)',
          'Adaptive Mean Thresholding', 'Adaptive Gaussian Thresholding']
images = [img, th1]
cv.imwrite( os.path.join("threshhold_"+fileref[:-4]+".jpg"), th1 );
cv.imwrite( os.path.join("adaptivthreshhold-m_"+fileref[:-4]+".jpg"), th2 );
cv.imwrite( os.path.join("adaptivthreshhold-g_"+fileref[:-4]+".jpg"), th3 );
imghsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
imghsv[:,:,2] = [[max(pixel - 25, 0) if pixel < 190 else min(pixel + 25, 255) for pixel in row] for row in imghsv[:,:,2]]
cv.imshow('contrast', cv.cvtColor(imghsv, cv.COLOR_HSV2BGR))
# Setup SimpleBlobDetector parameters.
params = cv.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 0
params.maxThreshold = 150
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.08 # 0.08
# Set edge gradient
params.thresholdStep = 0.5
# Filter by Area.
params.filterByArea = True
params.minArea = 300
# Set up the detector with default parameters.
detector = cv.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(original)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv.drawKeypoints(original, keypoints, np.array([]), (0, 0, 255),
                                     cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
print(len(keypoints))
# Show keypoints
display=cv.resize(im_with_keypoints,None,fx=0.5,fy=0.5)
cv.imshow("Keypoints", display)
cv.waitKey(0)
cv.imwrite( os.path.join("keypoints_"+fileref[:-4]+".jpg"), im_with_keypoints );
It circles most particles, but the parameters need to be changed for each image to get better results, the circles can't overlap, and I don't know how to count the circles or the pixels inside the circles.
Any help or hints which point me in the right direction are much appreciated.
I added a couple sample pics
This is an alternative approach and may not necessarily give better results than what you already have. You can try plugging in different values for the parameters and see if it gives you acceptable results.
import numpy as np
import cv2
import matplotlib.pyplot as plt
rgb = cv2.imread('/your/image/path/blobs_0002.jpeg')
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
imh, imw = gray.shape
# th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 21, 2)
th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 15, 15)
contours, hier = cv2.findContours(th.copy(),cv2.RETR_CCOMP,cv2.CHAIN_APPROX_SIMPLE)
out_img = rgb.copy()
for i in range(len(contours)):
    if hier[0][i][3] != -1:
        continue
    x, y, w, h = cv2.boundingRect(contours[i])
    ar = min(w, h) / max(w, h)
    area = cv2.contourArea(contours[i])
    extent = area / (w * h)
    if 20 < w*h < 1000 and ar > 0.5 and extent > 0.4:
        cv2.circle(out_img, (int(x+w/2), int(y+h/2)), int(max(w,h)/2), (255, 0, 0), 1)
plt.imshow(out_img)
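To also get the counts the question asks for, here is a hedged sketch (my addition): if each accepted circle in the loop above is additionally appended to a list, say accepted, as (cx, cy, r), the number of circles and the pixels inside them follow from a filled mask:
def count_circles(accepted, shape):
    # accepted: list of (cx, cy, r) tuples collected in the loop above (hypothetical)
    mask = np.zeros(shape, np.uint8)
    for cx, cy, r in accepted:
        cv2.circle(mask, (cx, cy), r, 255, -1)  # thickness -1 = filled
    return len(accepted), cv2.countNonZero(mask)

# n_circles, n_pixels = count_circles(accepted, gray.shape)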
For larger coalesced blobs you might try running Hough circles to see if partial contours fit the test. Just a thought. I'll also acknowledge that the images you are dealing with make it challenging to come up with a clean solution.

How do I ignore specific shapes in image processing, OpenCV, Python

I am working on cell segmentation and tracking, and I have a set of microscopy images. There are some circular noise artefacts caused by the lamella; when I use my algorithm, they can cause parts of cells to be lost. I want to tell my program, "Hey, look, those circular things are just noise. Ignore them and work on the real cell membrane." The other problem is micro-noise: there are some points with high or low contrast. I want to tell my program, "Hey, ignore a point if its 10x10-pixel neighbourhood has the same contrast as the background."
Work platform: Python 3.7.2, OpenCV 3.4.5
I hope I've clearly described my problem. I am sharing one of those images.
The 4 circles on the left are point noise.
The 2 circles on the right are lamella noise.
import numpy
import cv2 as cv
from matplotlib import pyplot as plt
img = cv.imread('test001.tif')
gg = img.copy()
img_gray = cv.cvtColor(gg, cv.COLOR_BGR2GRAY)
clache = cv.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
img_gray = clache.apply(img_gray)
_, img_bin = cv.threshold(img_gray, 50, 255,
                          cv.THRESH_OTSU)
img_bin = cv.morphologyEx(img_bin, cv.MORPH_OPEN,
                          numpy.ones((10, 9), dtype=int))
img_bin = cv.morphologyEx(img_bin, cv.MORPH_DILATE,
                          numpy.ones((5, 5), dtype=int), iterations=1)
def segment(im1, img):
    #morphological transformations
    border = cv.dilate(img, None, iterations=10)
    border = border - cv.erode(border, None, iterations=1)
    #invert the image so black becomes white, and vice versa
    img = -img
    #applies distance transform and shows visualization
    dt = cv.distanceTransform(img, 2, 3)
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    #reapply contrast to strengthen boundaries
    clache = cv.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
    dt = clache.apply(dt)
    #rethreshold the image
    _, dt = cv.threshold(dt, 127, 255, cv.THRESH_BINARY)
    ret, markers = cv.connectedComponents(dt)
    markers = markers + 1
    # Complete the markers
    markers[border == 255] = 255
    markers = markers.astype(numpy.int32)
    #apply watershed
    cv.watershed(im1, markers)
    markers[markers == -1] = 0
    markers = markers.astype(numpy.uint8)
    #return the image as one list, and the labels as another.
    return dt, markers
dt, result = segment(img, img_bin)
cv.imshow('img',img)
cv.imshow('dt',dt)
cv.imshow('img_bin',img_bin)
cv.imshow('res',result)
The one below is serving as a guinea pig.
import numpy
import cv2 as cv
from matplotlib import pyplot as plt
img = cv.imread('test001.tif')
gg = img.copy()
img_gray = cv.cvtColor(gg, cv.COLOR_BGR2GRAY)
clache = cv.createCLAHE(clipLimit=2.0, tileGridSize=(20,20))
img_gray = clache.apply(img_gray)
cv.imshow('1img',img)
cv.imshow('2gray',img_gray)
#Threshold
_, img_bin = cv.threshold(img_gray, 127, 255, cv.THRESH_BINARY+cv.THRESH_OTSU)
cv.imshow('3threshold',img_bin)
#MorpClose
img_bin = cv.morphologyEx(img_bin, cv.MORPH_CLOSE,numpy.ones((5,5), dtype=int))
cv.imshow('4morp_close',img_bin)
#MorpErosion
img_bin = cv.erode(img_bin,numpy.ones((3,3),dtype=int),iterations = 1)
cv.imshow('5erosion',img_bin)
#MorpOpen
img_bin = cv.morphologyEx(img_bin, cv.MORPH_OPEN, numpy.ones((2, 2), dtype=int))
#cv.imshow('6morp_open',img_bin)
#MorpDilate
img_bin = cv.morphologyEx(img_bin, cv.MORPH_DILATE,numpy.ones((1, 1), dtype=int), iterations= 1)
#cv.imshow('7morp_dilate',img_bin)
#MorpBlackHat
img_bin = cv.morphologyEx(img_bin, cv.MORPH_BLACKHAT,numpy.ones((4,4),dtype=int))
#cv.imshow('8morpTophat',img_bin)
For those small dots you can try eroding and dilating:
You need to convert the image to grayscale and then process it. I'd create a mask from the eroded and dilated parts to remove those dots, and then use that mask on the original image to delete the dots without compromising the resolution of your initial image. A sketch of this idea follows below.
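A minimal sketch of that idea (my own, assuming the dots survive thresholding as small blobs; the file name is just a placeholder): a morphological opening, i.e. erosion followed by dilation, removes structures smaller than the kernel, and whatever it removed can be treated as the dot mask:
import cv2
import numpy as np

img = cv2.imread('test001.tif', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)  # erode, then dilate
dots = cv2.subtract(bw, opened)                        # what the opening removed
cleaned = cv2.inpaint(img, dots, 3, cv2.INPAINT_TELEA) # fill dots from their surroundings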
For the big blurry blobs, maybe add some noise to the image and compare with the original?
If most of those blobs are circular, you can try cv2.HoughCircles, described here; it does something like this:
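(A sketch of the call, reconstructed by me since the original example is missing here; the parameter values are guesses and will need tuning for your microscopy images.)
import cv2
import numpy as np

gray = cv2.imread('test001.tif', cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)  # HoughCircles is sensitive to noise
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=30, minRadius=5, maxRadius=40)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        cv2.circle(gray, (cx, cy), r, 255, 2)  # mark (or mask out) each detected circle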
Of course you can tune those parameters to fit what you want and just ignore those parts of the image. Try that, and also the noise idea; that might help reduce the false positives.
Best of luck!

OpenCV unable to get ROI of highest-intensity part

I have tried this code.
import sys
import numpy as np
sys.path.append('/usr/local/lib/python2.7/site-packages')
import cv2
from cv2.cv import *

img = cv2.imread("test2.jpg", cv2.IMREAD_COLOR)
gimg = cv2.imread("test2.jpg", cv2.IMREAD_GRAYSCALE)
b, g, r = cv2.split(img)
ret, thresh1 = cv2.threshold(gimg, 127, 255, cv2.THRESH_BINARY)
numrows = len(thresh1)
numcols = len(thresh1[0])
thresh = 170
im_bw = cv2.threshold(gimg, thresh, 255, cv2.THRESH_BINARY)[1]
trig = 0
trigmax = 0
xmax = 0
ymax = 0
for x in range(0, numrows):
    for y in range(0, numcols):
        if im_bw[x][y] == 1:
            trig = trig + 1
        if trig > 5:
            xmax = x
            ymax = y
            break
print x, y, numrows, numcols, trig
roi = gimg[xmax:xmax+200, ymax-500:ymax]
cv2.imshow("roi", roi)
WaitKey(0)
Here is test2.jpg. What I am trying to do is concentrate on the high-intensity part of the image (i.e. the circle with high intensity in the image), but my code does not seem to do so.
Can anyone help?
I found the answer to my question here.
Here is my code:
# import the necessary packages
import numpy as np
import cv2
# load the image and convert it to grayscale
image = cv2.imread('test2.jpg')
orig = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# apply a Gaussian blur to the image then find the brightest
# region
gray = cv2.GaussianBlur(gray, (41, 41), 0)
(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(gray)
image = orig.copy()
roi = image[maxLoc[1]-250:maxLoc[1]+250, maxLoc[0]-250:maxLoc[0]+250]  # the original ',2:2' channel slice was empty, so it is dropped here
cv2.imshow("Robust", roi)
cv2.waitKey(0)
test2.jpg
ROI
Try checking whether a pixel is not zero - it turns out those pixels have a value of 255 after thresholding, as it is a grayscale image after all.
The threshold also seems to be wrong, but I don't really know what you want to see (display it with imshow - it isn't just the circle). And your code matches the number '3' in the bottom-left corner, so the ROI matrix indices are invalid in your example.
EDIT:
After playing around with the image, I ended up using a different approach: I used the SimpleBlobDetector and did an erosion on the image beforehand, so the region you're interested in remains connected. For the blob detector the program inverts the image first. (You may want to read a SimpleBlobDetector tutorial as I did; parts of the code are based on that page - many thanks to the author!)
The following code displays the procedure step by step:
import cv2
import numpy as np
# Read image
gimg = cv2.imread("test2.jpg", cv2.IMREAD_GRAYSCALE)
# Invert the image
im_inv = 255 - gimg
cv2.imshow("Step 1 - inverted image", im_inv)
cv2.waitKey(0)
# display at a threshold level of 50
thresh = 45
im_bw = cv2.threshold(im_inv, thresh, 255, cv2.THRESH_BINARY)[1]
cv2.imshow("Step 2 - bw threshold", im_bw)
cv2.waitKey(0)
# erosion
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(10,10))
im_bw = cv2.erode(im_bw, kernel, iterations = 1)
cv2.imshow('Step 3 - erosion connects disconnected parts', im_bw)
cv2.waitKey(0)
# Set up the detector with default parameters.
params = cv2.SimpleBlobDetector_Params()
params.filterByInertia = False
params.filterByConvexity = False
params.filterByCircularity = False
params.filterByColor = False
params.minThreshold = 0
params.maxThreshold = 50
params.filterByArea = True
params.minArea = 1000 # you may check with 10 --> finds number '3' also
params.maxArea = 100000 #im_bw.shape[0] * im_bw.shape[1] # max limit: image size
# Create a detector with the parameters
ver = (cv2.__version__).split('.')
if int(ver[0]) < 3:
    detector = cv2.SimpleBlobDetector(params)
else:
    detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im_bw)
print "Found", len(keypoints), "blobs:"
for kpt in keypoints:
    print "(%.1f, %.1f) diameter: %.1f" % (kpt.pt[0], kpt.pt[1], kpt.size)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the
# circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(gimg, keypoints, np.array([]), (0,0,255),
cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
This algorithm finds the coordinate (454, 377) as the center of the blob, but if you reduce the minArea to e.g. 10 then it will find the number 3 in the bottom corner as well.

Advanced square detection (with connected region)

If the squares have connected regions in the image, how can I detect them?
I have tested the method mentioned in
OpenCV C++/Obj-C: Advanced square detection
It did not work well.
Any good ideas ?
import cv2
import numpy as np

def angle_cos(p0, p1, p2):
    d1, d2 = (p0-p1).astype('float'), (p2-p1).astype('float')
    return abs( np.dot(d1, d2) / np.sqrt( np.dot(d1, d1)*np.dot(d2, d2) ) )

def find_squares(img):
    squares = []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # cv2.imshow("gray", gray)
    gaussian = cv2.GaussianBlur(gray, (5, 5), 0)
    temp, bin = cv2.threshold(gaussian, 80, 255, cv2.THRESH_BINARY)
    # cv2.imshow("bin", bin)
    contours, hierarchy = cv2.findContours(bin, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(gray, contours, -1, (0, 255, 0), 3)
    # cv2.imshow('contours', gray)
    for cnt in contours:
        cnt_len = cv2.arcLength(cnt, True)
        cnt = cv2.approxPolyDP(cnt, 0.02*cnt_len, True)
        if len(cnt) == 4 and cv2.contourArea(cnt) > 1000 and cv2.isContourConvex(cnt):
            cnt = cnt.reshape(-1, 2)
            max_cos = np.max([angle_cos(cnt[i], cnt[(i+1) % 4], cnt[(i+2) % 4]) for i in xrange(4)])
            if max_cos < 0.1:
                squares.append(cnt)
    return squares

if __name__ == '__main__':
    img = cv2.imread('123.bmp')
    # cv2.imshow("origin", img)
    squares = find_squares(img)
    print "Found %d squares" % len(squares)
    cv2.drawContours(img, squares, -1, (0, 255, 0), 3)
    cv2.imshow('squares', img)
    cv2.waitKey()
I used some methods from the OpenCV examples, but the result is not good.
Applying a Watershed Transform based on the Distance Transform will separate the objects:
Handling objects at the border is always problematic, and they are often discarded, so the pink rectangle at the top left not being separated is not a problem at all.
Given a binary image, we can apply the Distance Transform (DT) and from it obtain markers for the Watershed. Ideally there would be a ready function for finding regional minima/maxima, but since there isn't one, we can make a decent guess about how to threshold the DT. Based on the markers we can segment using Watershed, and the problem is solved. Now you can worry about distinguishing components that are rectangles from those that are not.
import sys
import cv2
import numpy
import random
from scipy.ndimage import label

def segment_on_dt(img):
    dt = cv2.distanceTransform(img, 2, 3) # L2 norm, 3x3 mask
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    dt = cv2.threshold(dt, 100, 255, cv2.THRESH_BINARY)[1]
    lbl, ncc = label(dt)
    lbl[img == 0] = lbl.max() + 1
    lbl = lbl.astype(numpy.int32)
    cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), lbl)
    lbl[lbl == -1] = 0
    return lbl

img = cv2.cvtColor(cv2.imread(sys.argv[1]), cv2.COLOR_BGR2GRAY)
img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)[1]
img = 255 - img # White: objects; Black: background
ws_result = segment_on_dt(img)
# Colorize
height, width = ws_result.shape
ws_color = numpy.zeros((height, width, 3), dtype=numpy.uint8)
lbl, ncc = label(ws_result)
for l in xrange(1, ncc + 1):
    a, b = numpy.nonzero(lbl == l)
    if img[a[0], b[0]] == 0: # Do not color background.
        continue
    rgb = [random.randint(0, 255) for _ in xrange(3)]
    ws_color[lbl == l] = tuple(rgb)
cv2.imwrite(sys.argv[2], ws_color)
From the above image you can consider fitting an ellipse to each component to determine rectangles. Then you can use some measurement to decide whether the component is a rectangle or not. This approach has a greater chance of working for rectangles that are fully visible, and will likely produce bad results for partially visible ones. The following image shows the result of such an approach, considering a component to be a rectangle if the rectangle from the fitted ellipse is within 10% of the component's area.
# Fit ellipses to determine the rectangles.
wsbin = numpy.zeros((height, width), dtype=numpy.uint8)
wsbin[cv2.cvtColor(ws_color, cv2.COLOR_BGR2GRAY) != 0] = 255
ws_bincolor = cv2.cvtColor(255 - wsbin, cv2.COLOR_GRAY2BGR)
lbl, ncc = label(wsbin)
for l in xrange(1, ncc + 1):
    yx = numpy.dstack(numpy.nonzero(lbl == l)).astype(numpy.int64)
    xy = numpy.roll(numpy.swapaxes(yx, 0, 1), 1, 2)
    if len(xy) < 100: # Too small.
        continue
    ellipse = cv2.fitEllipse(xy)
    center, axes, angle = ellipse
    rect_area = axes[0] * axes[1]
    if 0.9 < rect_area / float(len(xy)) < 1.1:
        rect = numpy.round(numpy.float64(
            cv2.cv.BoxPoints(ellipse))).astype(numpy.int64)
        color = [random.randint(60, 255) for _ in xrange(3)]
        cv2.drawContours(ws_bincolor, [rect], 0, color, 2)
cv2.imwrite(sys.argv[3], ws_bincolor)
Solution 1:
Dilate your image to delete connected components.
Find the contours of the detected components. Eliminate contours which are not rectangles by introducing some measure (e.g. a perimeter/area ratio; a sketch of one such measure follows below).
This solution will not detect rectangles connected to the borders.
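A sketch of such a measure (my own illustration, not from the answer above): for a perfect square of side s, perimeter^2 / area = 16, so contours scoring far above roughly 16-22 are likely elongated or ragged rather than rectangular.
import cv2

def compactness(cnt):
    area = cv2.contourArea(cnt)
    if area == 0:
        return float('inf')  # degenerate contour
    perimeter = cv2.arcLength(cnt, True)
    return perimeter * perimeter / area  # 16 for a square, ~18 for a 2:1 rectangle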
Solution 2:
Dilate to delete connected components.
Find contours.
Approximate the contours to reduce their points (a rectangle contour should have 4 points).
Check whether the angles between the contour lines are 90 degrees.
Eliminate contours which don't have 90-degree corners (a sketch of this check follows below).
This should solve the problem with rectangles connected to the borders.
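A sketch of the angle check (my own illustration, mirroring the angle_cos test from the question): approximate the contour to 4 points and require every corner cosine to be near zero, i.e. every corner near 90 degrees.
import cv2
import numpy as np

def is_right_angled(cnt, tol_cos=0.1):
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) != 4:
        return False
    pts = approx.reshape(4, 2).astype(np.float64)
    for i in range(4):
        d1 = pts[i - 1] - pts[i]        # edge into corner i
        d2 = pts[(i + 1) % 4] - pts[i]  # edge out of corner i
        cos = abs(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))
        if cos > tol_cos:               # cos(90 deg) = 0
            return False
    return True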
You have three problems:
The rectangles are not very strict rectangles (the edges are often somewhat curved)
There are a lot of them.
They are often connected.
It seems that all your rects are essentially the same size(?) and do not greatly overlap, but the pre-processing has connected them.
For this situation the approach I would try is:
dilate your image a few times (as also suggested by @krzych) - this will remove the connections, but result in slightly smaller rects.
Use scipy to label and find_objects - You now know the position and slice for every remaining blob in the image.
Use minAreaRect to find the center, orientation, width and height of each rectangle.
You can use step 3 to test whether the blob is a valid rectangle or not, by its area, dimension ratio or proximity to the edge.
This is quite a nice approach, as we assume each blob is a rectangle, so minAreaRect will find the parameters of our minimum enclosing rectangle. Further, we could test each blob using something like Hu moments if absolutely necessary.
Here is what I was suggesting in action, boundary collision matches shown in red.
Code:
import numpy as np
import cv2
from cv2 import cv
import scipy
from scipy import ndimage

im_col = cv2.imread('jdjAf.jpg')
im = cv2.imread('jdjAf.jpg', cv2.CV_LOAD_IMAGE_GRAYSCALE)
im = np.where(im > 100, 0, 255).astype(np.uint8)
im = cv2.erode(im, None, iterations=8)
im_label, num = ndimage.label(im)
for label in xrange(1, num+1):
    points = np.array(np.where(im_label == label)[::-1]).T.reshape(-1, 1, 2).copy()
    rect = cv2.minAreaRect(points)
    lines = np.array(cv2.cv.BoxPoints(rect)).astype(np.int)
    if any([np.any(lines[:,0] <= 0), np.any(lines[:,0] >= im.shape[1]-1),
            np.any(lines[:,1] <= 0), np.any(lines[:,1] >= im.shape[0]-1)]):
        cv2.drawContours(im_col, [lines], 0, (0, 0, 255), 1)
    else:
        cv2.drawContours(im_col, [lines], 0, (255, 0, 0), 1)
cv2.imshow('im', im_col)
cv2.imwrite('rects.png', im_col)
cv2.waitKey()
I think the watershed and distanceTransform approach demonstrated by @mmgp is clearly superior for segmenting the image, but this simple approach can be effective depending upon your needs.
