OpenCV, divide object into parts - python

I have the following image:
Is there a function in OpenCV (preferably Python) that can tell how the objects in this picture can be divided into parts? For example, the first object consists of two segments (or two lines), and the third of three (or four).
If OpenCV doesn't have such a thing, it would be great to learn of such an algorithm/function from anywhere else.

This problem can be solved by skeletonizing the image and then using HoughLinesP.
scikit-image has a good skeletonization method.
It is straightforward to find the 14 line segments, as shown below.
Finally, you will need to go through the lines and find which sets intersect, to see which segments belong together.
#!/usr/bin/python
from skimage import morphology
import cv2
import math
import numpy as np
im = cv2.imread("objects.png")
dst = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
dst = (dst < 128).astype(np.uint8)  # invert: dark objects become 1, background 0
dst = morphology.skeletonize(dst).astype(np.uint8)
objs = 255 * dst
#cv2.HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]])
rho = 1
theta = math.pi / 180
threshold = 1
minLineLength = 3
maxLineGap = 5
lines = cv2.HoughLinesP(dst, rho, theta, threshold, None, minLineLength, maxLineGap)
lineColor = (0, 255, 0)  # green (BGR)
for line in lines.reshape(-1, 4):  # works for both the (1, N, 4) and (N, 1, 4) layouts
    x1, y1, x2, y2 = line
    cv2.line(im, (x1, y1), (x2, y2), lineColor, 1, 8)
#
# Now you need to go through lines and find those that intersect
# You will notice that some lines have small gaps where they should
# join to a perpendicular line. Before find intersections you would
# need to make each line longer (just by adjusting the numbers in lines)
# to get around this problem.
#
cv2.imshow('Objects', objs)
cv2.imshow('Lines', im)
cv2.imwrite('lines.png', im)
cv2.waitKey()
cv2.destroyAllWindows()
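As a rough sketch of that last grouping step (my own addition, not part of the code above): extend each detected segment a little so that small endpoint gaps become proper crossings, test all pairs for intersection, and union intersecting pairs into connected groups. The extension amount and the helper names are assumptions to tune; it assumes lines is the (non-empty) HoughLinesP result from above.
def extend(seg, amount=4.0):
    # Push both endpoints outward along the segment direction.
    x1, y1, x2, y2 = map(float, seg)
    d = np.array([x2 - x1, y2 - y1])
    n = np.linalg.norm(d)
    if n == 0:
        return (x1, y1, x2, y2)
    d = d / n * amount
    return (x1 - d[0], y1 - d[1], x2 + d[0], y2 + d[1])

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(s, t):
    # Proper-crossing test via orientation signs.
    p1, p2, p3, p4 = (s[0], s[1]), (s[2], s[3]), (t[0], t[1]), (t[2], t[3])
    return (cross(p3, p4, p1) * cross(p3, p4, p2) < 0 and
            cross(p1, p2, p3) * cross(p1, p2, p4) < 0)

segs = [extend(l) for l in lines.reshape(-1, 4)]
parent = list(range(len(segs)))  # union-find: intersecting segments share a root

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i in range(len(segs)):
    for j in range(i + 1, len(segs)):
        if segments_intersect(segs[i], segs[j]):
            parent[find(i)] = find(j)

print("number of objects:", len({find(i) for i in range(len(segs))}))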

Related

How to connect broken lines that cannot be connected by erosion and dilation?

I have an image like this that has multiple stoppers, and some of the lines are broken. To connect the broken lines, I used a morphological closing:
import cv2
import numpy as np
img = cv2.imread('sample.png', cv2.IMREAD_GRAYSCALE)
morph = cv2.morphologyEx(img, cv2.MORPH_CLOSE, np.ones((10, 10), np.uint8))
But this didn't connect my broken lines. How can I connect the lines without affecting the other lines?
(Image: the original input)
A line break is a gap between two small lines in the center of the image; only the discontinuous part does not have rounded ends.
(Image: the result of the applied morphological operation)
You can use createFastLineDetector to detect each line segment.
Calculate the slope of the current line and of each neighboring line.
If the slopes of the current and neighboring lines are the same, draw a line connecting them.
Initializing Line Detector
We will be using the ximgproc module (shipped in the opencv-contrib-python package) for detecting lines.
import cv2
img = cv2.imread("lines.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
d = cv2.ximgproc.createFastLineDetector()
lines = d.detect(gray)
The lines variable contains entries like [[14.82, 78.90, 90.89, 120.78]], where x1 = 14.82, y1 = 78.90, x2 = 90.89, and y2 = 120.78.
Calculating Slope
The slope of a line is calculated with the formula: m = (y2 - y1) / (x2 - x1)
For a given line object, get the coordinates and return the slope.
def calculate_slope(line_object):
    x_point1 = line_object[0]
    y_point1 = line_object[1]
    x_point2 = line_object[2]
    y_point2 = line_object[3]
    # Note: a perfectly vertical line (x_point1 == x_point2) would divide by zero here.
    m = abs((y_point2 - y_point1) / (x_point2 - x_point1))
    m = float("{:.2f}".format(m))
    return m
Comparing Slopes
Check whether the two lines are the same: if all four points are equal, they are the same line.
for current_line in lines:
    current_slope = calculate_slope(current_line[0])
    for neighbor_line in lines:
        current_x1 = int(current_line[0][0])
        current_y1 = int(current_line[0][1])
        current_x2 = int(current_line[0][2])
        current_y2 = int(current_line[0][3])
        compare_lines = current_line == neighbor_line[0]
        equal_arrays = compare_lines.all()
If the lines are not equal, calculate the neighboring line's slope.
        if not equal_arrays:
            neighbor_slope = calculate_slope(neighbor_line[0])
If the slopes are equal, draw lines: from the neighbor to the current line, and from the current line to the neighbor.
            if abs(current_slope - neighbor_slope) < 1e-3:
                neighbor_x1 = int(neighbor_line[0][0])
                neighbor_y1 = int(neighbor_line[0][1])
                neighbor_x2 = int(neighbor_line[0][2])
                neighbor_y2 = int(neighbor_line[0][3])
                cv2.line(img,
                         pt1=(neighbor_x1, neighbor_y1),
                         pt2=(current_x2, current_y2),
                         color=(255, 255, 255),
                         thickness=3)
                cv2.line(img,
                         pt1=(current_x1, current_y1),
                         pt2=(neighbor_x2, neighbor_y2),
                         color=(255, 255, 255),
                         thickness=3)
Result
Possible question: But why couldn't you connect the following parts?
Answer
Well, the slopes of the red dotted lines are not equal, so I couldn't connect them.
Possible question: Why didn't you use the dilate and erode methods, as shown here?
Answer
I tried, but the result was not satisfactory.

Merge MSER detected objects (OpenCV, Python)

I am working on this image as source:
Applying the following code...
import cv2
import numpy as np
mser = cv2.MSER_create()
img = cv2.imread('C:\\Users\\Link\\Desktop\\test2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
vis = img.copy()
regions, _ = mser.detectRegions(gray)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions]
cv2.polylines(vis, hulls, 1, (0, 255, 0))
mask = np.zeros((img.shape[0], img.shape[1], 1), dtype=np.uint8)
for contour in hulls:
    cv2.drawContours(mask, [contour], -1, (255, 255, 255), -1)
text_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('img', vis)
cv2.waitKey(0)
cv2.imshow('img', mask)
cv2.waitKey(0)
cv2.imshow('img', text_only)
cv2.waitKey(0)
cv2.imwrite('C:\\Users\\Link\\Desktop\\test_o\\1.png', text_only)
...I am obtaining this as a result (mask):
The question is this:
How can I merge the number 5 in the number series (157661546) into a single object, given that it is split into pieces in the mask image?
Thanks
Have a look here, it seems like the exact answer.
Here instead is my version of the above code, fine-tuned for text extraction (with masking too).
Below is the original code from the previous article, "ported" to Python 3 and OpenCV 3, with MSER and bounding boxes added. The main difference from my version is how the grouping distance is defined: mine is text-oriented, while the one below uses a free geometrical distance.
import sys
import cv2
import numpy as np
def find_if_close(cnt1, cnt2):
    row1, row2 = cnt1.shape[0], cnt2.shape[0]
    for i in range(row1):
        for j in range(row2):
            dist = np.linalg.norm(cnt1[i] - cnt2[j])
            if abs(dist) < 25:  # <-- threshold
                return True
            elif i == row1 - 1 and j == row2 - 1:
                return False
img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
cv2.imshow('input', img)
ret,thresh = cv2.threshold(gray,127,255,0)
mser=False
if mser:
mser = cv2.MSER_create()
regions = mser.detectRegions(thresh)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions[0]]
contours = hulls
else:
thresh = cv2.bitwise_not(thresh) # wants black bg
im2,contours,hier = cv2.findContours(thresh,cv2.RETR_EXTERNAL,2)
cv2.drawContours(img, contours, -1, (0,0,255), 1)
cv2.imshow('base contours', img)
LENGTH = len(contours)
status = np.zeros((LENGTH,1))
print("Elements:", len(contours))
for i, cnt1 in enumerate(contours):
    x = i
    if i != LENGTH - 1:
        for j, cnt2 in enumerate(contours[i + 1:]):
            x = x + 1
            dist = find_if_close(cnt1, cnt2)
            if dist == True:
                val = min(status[i], status[x])
                status[x] = status[i] = val
            else:
                if status[x] == status[i]:
                    status[x] = i + 1
unified = []
maximum = int(status.max())+1
for i in range(maximum):
    pos = np.where(status == i)[0]
    if pos.size != 0:
        cont = np.vstack([contours[i] for i in pos])
        hull = cv2.convexHull(cont)
        unified.append(hull)
cv2.drawContours(img,contours,-1,(0,0,255),1)
cv2.drawContours(img,unified,-1,(0,255,0),2)
#cv2.drawContours(thresh,unified,-1,255,-1)
for c in unified:
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imshow('result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Sample output (the yellow blob is below the binary threshold conversion so it's ignored). Red: original contours, green: unified ones, blue: bounding boxes.
Probably there is no need to use MSER, as a simple findContours may work fine.
------------------------
Starting from here is my old answer, from before I found the above code. I'm leaving it anyway, as it describes a couple of different approaches that may be easier or more appropriate for some scenarios.
A quick and dirty trick is to add a small Gaussian blur and a high threshold before the MSER (or some dilate/erode if you prefer fancy things). In practice you just make the text bolder so that it fills small gaps. Obviously you can later discard this version and crop from the original one.
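A minimal sketch of that trick, assuming a grayscale image gray and an MSER detector mser as in the code above (the kernel size and the threshold value are placeholders to tune, not values from my experiments):
blur = cv2.GaussianBlur(gray, (3, 3), 0)                    # small blur bridges tiny gaps
_, bold = cv2.threshold(blur, 180, 255, cv2.THRESH_BINARY)  # high threshold makes the dark text bolder
regions = mser.detectRegions(bold)                          # then proceed as before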
Otherwise, if your text is arranged in lines, you may try to detect the average line center (make a histogram of Y coordinates and find the peaks, for example). Then, for each line, look for fragments with a close average X. This is quite fragile if the text is noisy/complex.
If you do not need to split each letter, getting the bounding box for the whole word may be easier: just split into groups based on a maximum horizontal distance between fragments (using the leftmost/rightmost points of each contour). Then use the leftmost and rightmost boxes within each group to find the whole bounding box, as in the sketch below. For multiline text, first group by the centroids' Y coordinates.
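Here is a rough sketch of that word-grouping idea (the max_gap value and the input format, a list of cv2.boundingRect tuples, are my assumptions):
def group_words(boxes, max_gap=15):
    # Sort fragments left to right (tuples sort by x first).
    if not boxes:
        return []
    boxes = sorted(boxes)
    groups, current = [], [boxes[0]]
    for b in boxes[1:]:
        prev = current[-1]
        if b[0] - (prev[0] + prev[2]) > max_gap:  # horizontal gap too large: start a new word
            groups.append(current)
            current = []
        current.append(b)
    groups.append(current)
    # Union each group's boxes into one word-level bounding box.
    words = []
    for g in groups:
        x1 = min(b[0] for b in g)
        y1 = min(b[1] for b in g)
        x2 = max(b[0] + b[2] for b in g)
        y2 = max(b[1] + b[3] for b in g)
        words.append((x1, y1, x2 - x1, y2 - y1))
    return words
For multiline text you would first split the boxes by line (see the histogram below) and call this once per line.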
Implementation notes:
OpenCV allows you to create histograms, but you can probably get away with something like this (it worked for me on a similar task):
def histogram(vals, th=4, bins=400):
    hist = np.zeros(bins)
    for y_center in vals:
        bucket = int(round(y_center / 2.))  # <-- change this "2."
        hist[bucket - 1] += 1
    print("hist: ", hist)
    hist = np.where(hist > th, hist, 0)
    return hist
Here my histogram is just an array with 400 buckets (my image was 800 px high, so each bucket catches two pixels; that is where the "2." comes from). vals holds the Y coordinates of the centroids of each fragment (you may want to ignore very small elements when you build this list). The th threshold is there just to remove some noise. You should get something like this:
0,0,0,5,22,0,0,0,0,43,7,0,0,0
This list describes, moving top to bottom, how many fragments are at each location.
Now I ran another pass to merge the peaks into a single value (just scan the array, sum while it is non-zero, and reset the count on the first zero), getting something like this ({y: count}):
{9:27, 20:50}
Now I know I have two text rows, at y=9 and y=20. Now, or before, you assign each fragment to one line (again with an 8 px threshold in my case). Then you can process each line on its own, finding "words". BTW, I have your identical problem with broken letters, which is why I came here looking for MSER :). Notice that if you find the whole bounding box for the word, this problem only matters for the first/last letters: the other broken letters simply fall inside the word box anyway.
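A rough sketch of that merging pass (the function and variable names are mine; only the scan-and-sum logic comes from the description above):
def merge_peaks(hist):
    # Scan the histogram: sum while the bins are non-zero, and on each zero
    # emit one {run_center: total_count} entry.
    rows = {}
    start, total = None, 0
    for i, v in enumerate(hist):
        if v > 0:
            if start is None:
                start = i
            total += v
        elif start is not None:
            rows[(start + i - 1) // 2] = total
            start, total = None, 0
    if start is not None:
        rows[(start + len(hist) - 1) // 2] = total
    return rows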
Here is a reference for the erode/dilate approach, but the Gaussian blur/threshold worked for me.
UPDATE: I've noticed that there is something wrong in this line:
regions = mser.detectRegions(thresh)
I pass in the already-thresholded image (!?). This is not relevant for the aggregation part, but keep in mind that the MSER part is not being used as expected.

Road Lane Detection program failing to detect lane properly

I am trying to develop a program that can detect lanes on the road. I have experimented with both Hough Line Transform and Probabilistic Hough Line Transform. However none of these are getting the results that I want.
Original Image:
Hough Line Transform
Probabilistic Hough Line Transform
It seems that with the Hough Line Transform I can at least detect the entire lane, but unfortunately the lines go on indefinitely (until they run off the picture), to the point where they intersect with each other, which does not make for a good graphical lane-detection marker.
I also tried the Probabilistic Hough Line Transform; the green lines used for lane detection do not run off to infinity like the others, but they fail to mark and detect the entire lane.
I am trying to replicate the results shown here (by writing it in Python):
http://www.transistor.io/revisiting-lane-detection-using-opencv.html
What can I do to fix this problem?
Code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
from PIL import Image
import imutils
def invert_img(img):
    img = (255 - img)
    return img

def canny(imgray):
    imgray = cv2.GaussianBlur(imgray, (5, 5), 200)
    canny_low = 5
    canny_high = 150
    thresh = cv2.Canny(imgray, canny_low, canny_high)
    return thresh
def filtering(imgray):
    thresh = canny(imgray)
    minLineLength = 1
    maxLineGap = 1
    lines = cv2.HoughLines(thresh, 1, np.pi / 180, 0)
    #lines = cv2.HoughLinesP(thresh, 2, np.pi / 180, 100, minLineLength, maxLineGap)
    print(lines.shape)
    # Code for HoughLinesP
    '''
    for i in range(0, lines.shape[0]):
        for x1, y1, x2, y2 in lines[i]:
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    '''
    # Code for HoughLines
    for i in range(0, 5):
        for rho, theta in lines[i]:
            a = np.cos(theta)
            b = np.sin(theta)
            x0 = a * rho
            y0 = b * rho
            x1 = int(x0 + 1000 * (-b))
            y1 = int(y0 + 1000 * (a))
            x2 = int(x0 - 1000 * (-b))
            y2 = int(y0 - 1000 * (a))
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
    return thresh
img = cv2.imread('images/road_0.bmp')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = imutils.resize(img, height = 500)
imgray = imutils.resize(imgray, height = 500)
thresh = filtering(imgray)
cv2.imshow('original', img)
cv2.imshow('result', thresh)
cv2.waitKey(0)
Cool topic! First of all, why did you add the Gaussian blur? Your source article doesn't mention it at all. If I remove it, I instantly get lots of extra crazy lines, which I can tone down with canny_low and canny_high. About the best I could find was low=100 and high=180.
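For example, something like this (just my tuning values, not from the article):
def canny(imgray):
    # no Gaussian blur; tone down the extra lines with the thresholds instead
    return cv2.Canny(imgray, 100, 180)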
Second, you did quite a good job translating the article to Python. However, I think you left out a crucial detail. The author writes:
// Canny algorithm
Mat contours;
Canny(image,contours,50,350);
Mat contoursInv;
threshold(contours,contoursInv,128,255,THRESH_BINARY_INV);
You implement the Canny function (cv2.Canny()), but you don't call the threshold function. According to the documentation I found, this function "applies a fixed-level threshold to each array element." I experimented with your code and came up with the following.
#thresh = canny(imgray) # original
edges = canny(imgray) # docs refer to return value as "edges"
retval, dst = cv2.threshold(edges, 128, 255,cv2.THRESH_BINARY_INV)
Two values are returned; retval isn't particularly important for us right now. dst is the destination 2D array of image data after thresholding. You would then update your calls to cv2.HoughLines and cv2.HoughLinesP, replacing "thresh" with "dst". When I did this I got a lot more interesting behavior, though I was not able to find the right tuning values to make the lines work well.
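That is, keeping your original parameters and only swapping the input image (a sketch matching the code above):
lines = cv2.HoughLines(dst, 1, np.pi / 180, 0)
#lines = cv2.HoughLinesP(dst, 2, np.pi / 180, 100, minLineLength, maxLineGap)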
So, hopefully that gives you some pointers. Try my tips, and also read the article once or twice more to double check that you have the same program flow as the author. This seems like a fun project, have fun!

Python cv2 HoughLines grid line detection

I have a simple grid in an image, and I am trying to determine the grid size, e.g. 6x6, 12x12, etc., using Python and cv2.
I am testing it with the above 3x3 grid; I was planning to determine the grid size by counting how many vertical/horizontal lines there are in the image:
import cv2
import numpy as np
im = cv2.imread('photo2.JPG')
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
imgSplit = cv2.split(im)
flag,b = cv2.threshold(imgSplit[2],0,255,cv2.THRESH_OTSU)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
cv2.erode(b,element)
edges = cv2.Canny(b,150,200,3,5)
while True:
    img = im.copy()
    lines = cv2.HoughLinesP(edges, 1, np.pi / 2, 2, minLineLength=620, maxLineGap=100)[0]
    for x1, y1, x2, y2 in lines:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 1)
    cv2.imshow('houghlines', img)
    k = cv2.waitKey(0)  # missing in the original snippet; needed for the ESC check below
    if k == 27:
        break
cv2.destroyAllWindows()
My code detects the lines, as can be seen below, however there are multiple lines detected for each line in my image:
(there are two 1px green lines drawn for every line in the image)
I cannot simply divide the number of lines by two, because (depending on the grid size) sometimes only one line will be drawn.
How can I more accurately detect and draw a single line for every line in the original image?
I have tweaked the threshold settings and reduced the image to black and white, yet I still get multiple lines. I assume this is because of the Canny edge detection?
I ended up iterating through the lines and removing any line that was within 10 px of another:
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 275, minLineLength=600, maxLineGap=100)[0].tolist()
for x1, y1, x2, y2 in lines:
    for index, (x3, y3, x4, y4) in enumerate(lines):
        if y1 == y2 and y3 == y4:  # horizontal lines
            diff = abs(y1 - y3)
        elif x1 == x2 and x3 == x4:  # vertical lines
            diff = abs(x1 - x3)
        else:
            diff = 0
        if diff < 10 and diff != 0:
            del lines[index]
# An NxN grid has N+1 horizontal and N+1 vertical lines, i.e. 2N+2 in total.
gridsize = (len(lines) - 2) / 2
You can dilate the image with
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (2, 2))
dilated = cv2.dilate(edges, kernel, iterations=5)
then apply cv2.HoughLinesP
Doesn't the Hough function have a parameter that does exactly this, maxLineGap? So if your lines were 2 px thick, you'd set that parameter to 3? Does it not work?

Copy a part of an image in opencv and python

I'm trying to split an image into several sub-images with opencv by identifying templates of the original image and then copy the regions where I matched those templates. I'm a TOTAL newbie to opencv! I've identified the sub-images using:
result = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)
After some cleanup I get a list of tuples called points, over which I iterate to show the rectangles. tw and th are the template width and height, respectively.
for pt in points:
    re = cv2.rectangle(img, pt, (pt[0] + tw, pt[1] + th), 0, 2)
    print('%s, %s' % (str(pt[0]), str(pt[1])))
    count += 1
What I would like to accomplish is to save the octagons (https://dl.dropbox.com/u/239592/region01.png) into separate files.
How can I do this? I've read something about contours but I'm not sure how to use it. Ideally I would like to contour the octagon.
Thanks a lot for your help!
If template matching is working for you, stick to it. For instance, I considered the following template:
Then we can pre-process the input to make it binary and discard small components. After this step, the template matching is performed. Then it is a matter of filtering the matches by discarding close ones (I've used a dummy method for that, so if there are too many matches you could see it taking some time). After we decide which points are far apart (and thus identify different hexagons), we can make minor adjustments to them in the following manner:
Sort by y-coordinate;
If two adjacent items start at a y-coordinate that is too close, then set them both to the same y-coord.
Now you can sort this point list in an appropriate order such that the crops are done in raster order. The cropping part is easily achieved using slicing provided by numpy.
import sys
import cv2
import numpy
outbasename = 'hexagon_%02d.png'
img = cv2.imread(sys.argv[1])
template = cv2.cvtColor(cv2.imread(sys.argv[2]), cv2.COLOR_BGR2GRAY)
theight, twidth = template.shape[:2]
# Binarize the input based on the saturation and value.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
saturation = hsv[:,:,1]
value = hsv[:,:,2]
value[saturation > 35] = 255
value = cv2.threshold(value, 0, 255, cv2.THRESH_OTSU)[1]
# Pad the image.
value = cv2.copyMakeBorder(255 - value, 3, 3, 3, 3, cv2.BORDER_CONSTANT, value=0)
# Discard small components.
img_clean = numpy.zeros(value.shape, dtype=numpy.uint8)
contours, _ = cv2.findContours(value, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(contours):
    area = cv2.contourArea(c)
    if area > 500:
        cv2.drawContours(img_clean, contours, i, 255, 2)
def closest_pt(a, pt):
    if not len(a):
        return (float('inf'), float('inf'))
    d = a - pt
    return a[numpy.argmin((d * d).sum(1))]
match = cv2.matchTemplate(img_clean, template, cv2.TM_CCORR_NORMED)
# Filter matches.
threshold = 0.8
dist_threshold = twidth / 1.5
loc = numpy.where(match > threshold)
ptlist = numpy.zeros((len(loc[0]), 2), dtype=int)
count = 0
print "%d matches" % len(loc[0])
for pt in zip(*loc[::-1]):
    cpt = closest_pt(ptlist[:count], pt)
    dist = ((cpt[0] - pt[0]) ** 2 + (cpt[1] - pt[1]) ** 2) ** 0.5
    if dist > dist_threshold:
        ptlist[count] = pt
        count += 1
# Adjust points (could do for the x coords too).
ptlist = ptlist[:count]
view = ptlist.ravel().view([('x', int), ('y', int)])
view.sort(order=['y', 'x'])
for i in range(1, ptlist.shape[0]):
    prev, curr = ptlist[i - 1], ptlist[i]
    if abs(curr[1] - prev[1]) < 5:
        y = min(curr[1], prev[1])
        curr[1], prev[1] = y, y
# Crop in raster order.
view.sort(order=['y', 'x'])
for i, pt in enumerate(ptlist, start=1):
    cv2.imwrite(outbasename % i,
                img[pt[1]-2:pt[1]+theight-2, pt[0]-2:pt[0]+twidth-2])
    print('Wrote %s' % (outbasename % i))
If you want only the contours of the hexagons, then crop on img_clean instead of img (but then it is pointless to sort the hexagons in raster order).
Here is a representation of the different regions that would be cut for your two examples without modifying the code above:
Sorry, I didn't understand from your question how you relate matchTemplate and contours.
Anyway, below is a small technique using contours. It works on the assumption that your other images are like the one you provided. I am not sure whether it works for all of them, but I think it will help you get started. Try it yourself and make the necessary adjustments and modifications.
What I did:
1 - I needed the edges of the octagons, so I thresholded the image using Otsu and applied dilation and erosion (or use any method you like that works well for all your images; beware of the edges at the left side of the image).
2 - Then I found contours (more about contours: http://goo.gl/r0ID0).
3 - For each contour, I found its convex hull, then its area (A) and perimeter (P).
4 - For a perfect octagon, P*P/A is approximately 13.25: a regular octagon with side s has P = 8s and A = 2(1+sqrt(2))s^2, so P^2/A = 64/(2(1+sqrt(2))) ≈ 13.25. I used that here to select the octagons, crop them, and save them.
5 - You can see that cropping also removes some edges of the octagon. If you want to keep them, adjust the cropping dimensions.
Code :
import cv2
import numpy as np
img = cv2.imread('region01.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
thresh = cv2.dilate(thresh,None,iterations = 2)
thresh = cv2.erode(thresh,None)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
number = 0
for cnt in contours:
    hull = cv2.convexHull(cnt)
    area = cv2.contourArea(hull)
    P = cv2.arcLength(hull, True)
    if (area != 0) and (13 <= P**2 / area <= 14):
        #cv2.drawContours(img, [hull], 0, 255, 3)
        x, y, w, h = cv2.boundingRect(hull)
        number = number + 1
        roi = img[y:y+h, x:x+w]
        cv2.imshow(str(number), roi)
        cv2.imwrite("1" + str(number) + ".jpg", roi)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Those 6 octagons will be stored as separate files.
Hope it helps!!!
