Python cv2 HoughLines grid line detection

I have a simple grid in an image, and I am trying to determine the grid size (e.g. 6x6, 12x12, etc.) using Python and cv2.
I am testing it with the above 3x3 grid, and I was planning to determine the grid size by counting how many vertical/horizontal lines there are, detecting them in the image:
import cv2
import numpy as np

im = cv2.imread('photo2.JPG')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
imgSplit = cv2.split(im)
flag, b = cv2.threshold(imgSplit[2], 0, 255, cv2.THRESH_OTSU)

element = cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
b = cv2.erode(b, element)

edges = cv2.Canny(b, 150, 200, 3, 5)

while True:
    img = im.copy()
    lines = cv2.HoughLinesP(edges, 1, np.pi/2, 2, minLineLength=620, maxLineGap=100)[0]
    for x1, y1, x2, y2 in lines:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 1)
    cv2.imshow('houghlines', img)
    k = cv2.waitKey(0)
    if k == 27:
        break
cv2.destroyAllWindows()
My code detects the lines, as can be seen below; however, multiple lines are detected for each line in my image:
(two 1 px green lines are drawn for every line in the image)
I cannot simply divide the number of lines by two, because (depending on the grid size) sometimes only one line will be drawn.
How can I more accurately detect and draw a single line for every line detected in the original image?
I have tweaked the threshold settings and reduced the image to black and white, yet I still get multiple lines. I assume this is caused by the Canny edge detection?

I ended up iterating through the lines and removing lines that were within 10px of one another:
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 275, minLineLength=600, maxLineGap=100)[0].tolist()

for x1, y1, x2, y2 in lines:
    for index, (x3, y3, x4, y4) in enumerate(lines):
        if y1 == y2 and y3 == y4:    # horizontal lines
            diff = abs(y1 - y3)
        elif x1 == x2 and x3 == x4:  # vertical lines
            diff = abs(x1 - x3)
        else:
            diff = 0
        if diff < 10 and diff != 0:
            del lines[index]

gridsize = (len(lines) - 2) / 2
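A more robust alternative (not part of the original post) is to cluster the line positions instead of deleting entries from the list while iterating over it. Below is a small sketch of that idea, assuming `segments` is a list of (x1, y1, x2, y2) tuples obtained from HoughLinesP, and reusing the 10 px tolerance from above:

def count_grid_lines(segments, tol=10):
    """Cluster Hough segments by position and count distinct grid lines."""
    horiz = sorted(y1 for x1, y1, x2, y2 in segments if y1 == y2)
    vert = sorted(x1 for x1, y1, x2, y2 in segments if x1 == x2)

    def cluster(values):
        groups = 0
        last = None
        for v in values:
            if last is None or v - last > tol:
                groups += 1  # far enough from the previous cluster: a new grid line
            last = v
        return groups

    return cluster(horiz), cluster(vert)

# Usage sketch: if all 4 horizontal and 4 vertical lines of a 3x3 grid are detected,
# n_h, n_v = count_grid_lines(segments) gives 4, 4, so the grid is (n_h - 1) x (n_v - 1).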

You can dilate the edge image with
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (2, 2))
dilated = cv2.dilate(edges, kernel, iterations=5)
and then apply cv2.HoughLinesP.
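A minimal sketch of that suggestion could look like this (the Canny and Hough parameters are simply the ones from the question and would need tuning for your image):

import cv2
import numpy as np

edges = cv2.Canny(cv2.imread('photo2.JPG', cv2.IMREAD_GRAYSCALE), 150, 200)

# Thicken the edges so the two Canny responses per grid line merge into one.
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (2, 2))
dilated = cv2.dilate(edges, kernel, iterations=5)

# Run the probabilistic Hough transform on the dilated edge map.
lines = cv2.HoughLinesP(dilated, 1, np.pi / 2, 2, minLineLength=620, maxLineGap=100)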

Doesn't the Hough function have a parameter that does exactly this, maxLineGap? So if your lines were 2 px thick, you would set that parameter to 3? Does it not work?

Related

How to connect broken lines that cannot be connected by erosion and dilation?

I have an image like this that has multiple stoppers, and some of the lines are broken. To connect these broken lines, I used a morphological operation like this:
import cv2
import numpy as np
img = cv2.imread('sample.png', cv2.IMREAD_GRAYSCALE)
morph = cv2.morphologyEx(img, cv2.MORPH_CLOSE, np.ones((10, 10), np.uint8))
But this didn't connect my broken lines. How can I connect the lines without affecting the other lines?
[image: input with the broken line]
The break is between two small lines in the center of the image; only the discontinuous part does not have rounded ends.
[image: result after the applied morphological operation]
You can use createFastLineDetector for detecting each line.
Calculate the slope of the current and neighboring lines.
If the slopes of the current and neighboring lines are the same, draw a line connecting them.
Initializing Line Detector
We will be using the ximgproc module for detecting lines.
import cv2
img = cv2.imread("lines.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
d = cv2.ximgproc.createFastLineDetector()
lines = d.detect(gray)
The lines variable contains values like [[14.82, 78.90, 90.89, 120.78]], where x1=14.82, y1=78.90, x2=90.89 and y2=120.78 respectively.
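As a quick sanity check, the detected segments can be drawn with the detector's own helper before doing any slope comparison:

# Draw every detected segment onto the input image for inspection.
preview = d.drawSegments(img, lines)
cv2.imshow("detected segments", preview)
cv2.waitKey(0)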
Calculating Slope
The slope of a line is calculated with the formula: m = (y2 - y1) / (x2 - x1)
For a given line object, get the coordinates and return the slope.
def calculate_slope(line_object):
    x_point1 = line_object[0]
    y_point1 = line_object[1]
    x_point2 = line_object[2]
    y_point2 = line_object[3]
    m = abs((y_point2 - y_point1) / (x_point2 - x_point1))
    m = float("{:.2f}".format(m))
    return m
Comparing Slopes
Check the equality of the lines: if the points are equal, they are the same line.
for current_line in lines:
    current_slope = calculate_slope(current_line[0])
    for neighbor_line in lines:
        current_x1 = int(current_line[0][0])
        current_y1 = int(current_line[0][1])
        current_x2 = int(current_line[0][2])
        current_y2 = int(current_line[0][3])
        compare_lines = current_line == neighbor_line[0]
        equal_arrays = compare_lines.all()
If the lines are not equal, calculate the neighbor's line slope.
        if not equal_arrays:
            neighbor_slope = calculate_slope(neighbor_line[0])
If the slopes are equal, draw the lines: from neighbor to current and from current to neighbor.
            if abs(current_slope - neighbor_slope) < 1e-3:
                neighbor_x1 = int(neighbor_line[0][0])
                neighbor_y1 = int(neighbor_line[0][1])
                neighbor_x2 = int(neighbor_line[0][2])
                neighbor_y2 = int(neighbor_line[0][3])
                cv2.line(img,
                         pt1=(neighbor_x1, neighbor_y1),
                         pt2=(current_x2, current_y2),
                         color=(255, 255, 255),
                         thickness=3)
                cv2.line(img,
                         pt1=(current_x1, current_y1),
                         pt2=(neighbor_x2, neighbor_y2),
                         color=(255, 255, 255),
                         thickness=3)
Result
Possible question: But why couldn't you connect the following parts?
Answer:
Well, the slopes of the red dotted lines are not equal, so I couldn't connect them.
Possible question: Why didn't you use the dilate and erode methods, as shown here?
Answer:
I tried, but the result was not satisfactory.

How to measure the angle between 2 lines in a same image using python opencv?

I have detected a lane boundary line (which is not straight) using the Hough transform and then extracted that line separately. I then blended it with another image that has a straight line. Now I need to calculate the angle between those two lines, but I do not know their coordinates. I tried code that gives the coordinates of vertical lines, but it cannot specifically identify those coordinates. Is there a way to measure the angle between those lines? Here are my coordinate-calculation code and the blended image with the two lines:
import cv2 as cv
import numpy as np
src = cv.imread("blended2.png", cv.IMREAD_COLOR)
if len(src.shape) != 2:
    gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
else:
    gray = src
gray = cv.bitwise_not(gray)
bw = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY, 15, -2)
horizontal = np.copy(bw)
vertical = np.copy(bw)
cols = horizontal.shape[1]
horizontal_size = int(cols / 30)
horizontalStructure = cv.getStructuringElement(cv.MORPH_RECT, (horizontal_size, 1))
horizontal = cv.erode(horizontal, horizontalStructure)
horizontal = cv.dilate(horizontal, horizontalStructure)
cv.imwrite("img_horizontal8.png", horizontal)
h_transpose = np.transpose(np.nonzero(horizontal))
print("h_transpose")
print(h_transpose[:100])
rows = vertical.shape[0]
verticalsize = int(rows / 30)
verticalStructure = cv.getStructuringElement(cv.MORPH_RECT, (1, verticalsize))
vertical = cv.erode(vertical, verticalStructure)
vertical = cv.dilate(vertical, verticalStructure)
cv.imwrite("img_vertical8.png", vertical)
v_transpose = np.transpose(np.nonzero(vertical))
print("v_transpose")
print(v_transpose[:100])
img = src.copy()
# edges = cv.Canny(vertical,50,150,apertureSize = 3)
minLineLength = 100
maxLineGap = 200
lines = cv.HoughLinesP(vertical, 1, np.pi/180, 100, minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv.imshow('houghlinesP_vert', img)
cv.waitKey(0)
One approach is to use the Hough transform to detect the lines and obtain the angle of each line. The angle between the two lines can then be found by taking the difference between the two detected angles.
We begin by computing an arithmetic mean over the color channels using np.mean, which essentially thresholds the image and results in this.
image = cv2.imread('2.png')
# Compute arithmetic mean
image = np.mean(image, axis=2)
Now we perform skimage.transform.hough_line to detect lines
# Perform Hough Transformation to detect lines
hspace, angles, distances = hough_line(image)
# Find angle
angle=[]
for _, a , distances in zip(*hough_line_peaks(hspace, angles, distances)):
angle.append(a)
Next we obtain the angle for each line and find the difference to obtain our result
# Obtain angle for each line
angles = [a*180/np.pi for a in angle]
# Compute difference between the two lines
angle_difference = np.max(angles) - np.min(angles)
print(angle_difference)
16.08938547486033
Full code
from skimage.transform import (hough_line, hough_line_peaks)
import numpy as np
import cv2
image = cv2.imread('2.png')
# Compute arithmetic mean
image = np.mean(image, axis=2)
# Perform Hough Transformation to detect lines
hspace, angles, distances = hough_line(image)
# Find angle
angle=[]
for _, a , distances in zip(*hough_line_peaks(hspace, angles, distances)):
angle.append(a)
# Obtain angle for each line
angles = [a*180/np.pi for a in angle]
# Compute difference between the two lines
angle_difference = np.max(angles) - np.min(angles)
print(angle_difference)

Python: How to OCR characters crossed by a horizontal line

I have a batch of images which I would like to scan. Some of them have got a horizontal line crossing the characters that have to be scanned, which would look like this:
I have made a program that is able to remove the horizontal line:
import cv2
import numpy as np
img = cv2.imread('image.jpg',0)
# Applies threshold and inverts the image colors
(thresh, im_bw) = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
im_wb = (255-im_bw)
# Line parameters
minLineLength = 100
maxLineGap = 10
color = 255
size = 2
# Subtracts the black line
lines = cv2.HoughLinesP(im_wb,1,np.pi/180,minLineLength,maxLineGap)[0]
for x1, y1, x2, y2 in lines:
    cv2.line(img, (x1, y1), (x2, y2), color, size)
cv2.imshow('clean', img)
This returns the image below:
So, do you have any idea how to OCR these characters that have the white line crossing them? Would you take a different approach than the one stated?
Please ask any questions you have if something is not clear. Thank you.
Following @Rethunk's advice, I did the following:
# Line parameters
minLineLength = 100
maxLineGap = 10
color = 255
size = 1
# Subtracts the black line
lines = cv2.HoughLinesP(im_wb,1,np.pi/180,minLineLength,maxLineGap)[0]

# Makes a list of the y's located at position x0 and x1
y0_list = []
y1_list = []
for x0, y0, x1, y1 in lines:
    if x0 == 0:
        y0_list.append(y0)
    if x1 == im_wb.shape[1]:
        y1_list.append(y1)

# Calculates line thickness and its half
thick = max(len(y0_list), len(y1_list))
hthick = int(thick / 2)

# Initial and ending points of the full line
x0, x1 = 0, im_wb.shape[1]
y0, y1 = sum(y0_list) / len(y0_list), sum(y1_list) / len(y1_list)

# Iterates over all x's and draws a vertical line with the desired thickness
# when the point is surrounded by white pixels
for x in range(x1):
    y = int(x * (y1 - y0) / x1 + y0)
    if im_wb[y + hthick + 1, x] == 0 and im_wb[y - hthick - 1, x] == 0:
        cv2.line(img, (x, y - hthick), (x, y + hthick), color, size)

cv2.imshow('clean', img)
So, as the HoughLinesP function returns the initial and final points of the horizontal lines, I made a list of the y coordinates of the points that lie at the beginning and end of the image. From those I can recover the full line equation (so it is valid even if the line is inclined) and iterate over all of its points. For each point, if it is surrounded by white pixels, I remove it. The outcome is the following:
If you have any better ideas, please tell!
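For the OCR step itself, once the line is removed and the strokes repaired, a minimal sketch with pytesseract could look like this (pytesseract and a Tesseract installation are assumed, and 'cleaned.png' stands for the repaired image):

import cv2
import pytesseract

# 'cleaned.png' is assumed to be the image after line removal / stroke repair.
cleaned = cv2.imread('cleaned.png', cv2.IMREAD_GRAYSCALE)

# Work on an inverted binary image (white characters on black) so that a
# morphological closing can bridge small gaps left by the removed line.
_, binary = cv2.threshold(cleaned, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Tesseract prefers dark text on a light background, so invert back.
# --psm 7 treats the input as a single line of text; adjust as needed.
text = pytesseract.image_to_string(255 - binary, config='--psm 7')
print(text)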

Detect the green lines in this image and calculate their lengths

Sample Images
The image can be noisier at times, when more objects intervene from the background. Right now I am using various techniques based on the RGB colour space to detect the lines, but they fail when the colour changes due to intervening obstacles from the background. I am using opencv and python.
I have read that HSV is better for colour detection and have tried it, but haven't been successful yet.
I am not able to find a generic solution to this problem. Any hints or clues in this direction would be of great help.
STILL IN PROGRESS
First of all, an RGB image consists of 3 grayscale channels. Since you need the green color, you will deal with only one channel: the green one. To isolate it you can split the image with b, g, r = cv2.split(img). If you display the green channel you will get an output like this:
After that you should threshold the image using your desired way. I prefer Otsu's thresholding in this case. The output after thresholding is:
It's obvious that the thresholded image is extremely noisy. So performing erosion will reduce the noise a little bit. The noise-reduced image will be similar to the following:
I tried using closing instead of dilation, but closing preserves some unwanted noise. So I separately performed erosion followed by dilation. After dilation the output is:
Note: you can do the morphological operations your own way, for example using opening instead of what I did; the results are somewhat subjective from one person to another.
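Here is a rough sketch of the steps described so far (green-channel split, Otsu thresholding, erosion, then dilation); the kernel size, iteration counts and file name are placeholders to adapt:

import cv2

img = cv2.imread('i.png')  # input image (file name is a placeholder)

# Work on the green channel only.
b, g, r = cv2.split(img)

# Otsu's thresholding to binarize the channel.
_, thresh = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Erode to suppress small noise, then dilate to restore the line thickness.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
cleaned = cv2.erode(thresh, kernel, iterations=1)
cleaned = cv2.dilate(cleaned, kernel, iterations=2)

cv2.imshow('cleaned green channel', cleaned)
cv2.waitKey(0)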
Now you can try one of these two methods:
1. Blob Detection.
2. HoughLine Transform.
TODO
Try out these two methods and choose the best.
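Since the question also asks for the line lengths, here is a hedged sketch of the HoughLinesP route applied to the cleaned binary mask, measuring each detected segment with the Euclidean distance (the Hough parameters are illustrative, and `cleaned` is assumed to be the mask produced by the steps above):

import cv2
import numpy as np

# 'cleaned' is assumed to be the binary mask produced by the preprocessing sketch.
segments = cv2.HoughLinesP(cleaned, 1, np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=10)

if segments is not None:
    for x1, y1, x2, y2 in segments.reshape(-1, 4):
        length = np.hypot(x2 - x1, y2 - y1)  # segment length in pixels
        print(f"segment ({x1},{y1})-({x2},{y2}): {length:.1f} px")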
You should use the fact that you know you are trying to detect lines by using the line Hough transform.
http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html
When the obstacle also looks like a line, use the fact that you know approximately what the orientation of the green lines is.
If you don't know the orientation of the lines, use the fact that there are several green lines with the same orientation and only one line that is the obstacle.
Here is code for what I meant:
import cv2
import numpy as np
# Params
minLineCount = 300 # min number of points along a line with a specific orientation
minArea = 100
# Read img
img = cv2.imread('i.png')
greenChannel = img[:,:,1]
# Do noise reduction
iFilter = cv2.bilateralFilter(greenChannel,5,5,5)
# Threshold data
#ret,iThresh = cv2.threshold(iFilter,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
iThresh = (greenChannel > 4).astype(np.uint8)*255
# Remove small areas
se1 = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
iThreshRemove = cv2.morphologyEx(iThresh, cv2.MORPH_OPEN, se1)
# Find edges
iEdge = cv2.Canny(iThreshRemove,50,100)
# Hough line transform
lines = cv2.HoughLines(iEdge, 1, 3.14/180,75)
# Find the theta with the most lines
thetaCounter = dict()
for line in lines:
    theta = line[0, 1]
    if theta in thetaCounter:
        thetaCounter[theta] += 1
    else:
        thetaCounter[theta] = 1

maxThetaCount = 0
maxTheta = 0
for theta in thetaCounter:
    if thetaCounter[theta] > maxThetaCount:
        maxThetaCount = thetaCounter[theta]
        maxTheta = theta

# Find the rhos that correspond to the max theta
rhoValues = []
for line in lines:
    rho = line[0, 0]
    theta = line[0, 1]
    if theta == maxTheta:
        rhoValues.append(rho)

# Go over all the lines with the specific orientation and count the number of pixels on that line;
# if the number is bigger than minLineCount, draw the pixels into lineImage
lineImage = np.zeros_like(iThresh, np.uint8)
for rho in range(int(min(rhoValues)), int(max(rhoValues)), 1):
    a = np.cos(maxTheta)
    b = np.sin(maxTheta)
    x0 = round(a * rho)
    y0 = round(b * rho)
    lineCount = 0
    pixelList = []
    for jump in range(-1000, 1000, 1):
        x1 = int(x0 + jump * (-b))
        y1 = int(y0 + jump * (a))
        if x1 < 0 or y1 < 0 or x1 >= lineImage.shape[1] or y1 >= lineImage.shape[0]:
            continue
        if iThreshRemove[y1, x1] == int(255):
            pixelList.append((y1, x1))
            lineCount += 1
    if lineCount > minLineCount:
        for y, x in pixelList:
            lineImage[y, x] = int(255)
# Remove small areas
## Opencv 2.4
im2, contours, hierarchy = cv2.findContours(lineImage,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_NONE )
finalImage = np.zeros_like(lineImage)
finalShapes = []
for contour in contours:
    if contour.size > minArea:
        finalShapes.append(contour)
cv2.fillPoly(finalImage, finalShapes, 255)
## Opencv 3.0
# output = cv2.connectedComponentsWithStats(lineImage, 8, cv2.CV_32S)
#
# finalImage = np.zeros_like(output[1])
# finalImage = output[1]
# stat = output[2]
# for label in range(output[0]):
# if label == 0:
# continue
# cc = stat[label,:]
# if cc[cv2.CC_STAT_AREA] < minArea:
# finalImage[finalImage == label] = 0
# else:
# finalImage[finalImage == label] = 255
# Show image
#cv2.imwrite('finalImage2.jpg',finalImage)
cv2.imshow('a', finalImage.astype(np.uint8))
cv2.waitKey(0)
and the result for the images:

OpenCV, divide object into parts

I have the following image:
Is there a function in OpenCV (preferably Python) that can tell that the objects in this picture can be divided into parts? For example, the first object consists of two segments (or two lines), the third one of three (or four).
If OpenCV doesn't have such a thing, it'd be great to know about such an algorithm/function anywhere.
This problem can be solved by skeletonizing the image and then using HoughLinesP.
Scikit-image has a good skeletonization method.
It is straightforward to find the 14 line segments, as shown below.
Finally, you will need to go through the lines and find which sets intersect, to see which belong together (a rough sketch of that step is given after the code).
#!/usr/bin/python
from skimage import morphology
import cv2
import math
import numpy as np
im = cv2.imread("objects.png")
dst = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
dst = 1 - dst // 255  # integer division keeps the mask binary (0/1) in Python 3
dst = morphology.skeletonize(dst).astype(np.uint8)
objs = 255 * dst
#cv2.HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]])
rho = 1
theta = math.pi / 180
threshold = 1
minLineLength = 3
maxLineGap = 5
lines = np.ndarray([1, 1, 4, 4])
lines = cv2.HoughLinesP(dst, rho, theta, threshold, lines, minLineLength, maxLineGap)
lineColor = (0, 255, 0)  # green
for line in lines[0]:
    # print(line)
    cv2.line(im, (line[0], line[1]), (line[2], line[3]), lineColor, 1, 8)
#
# Now you need to go through lines and find those that intersect
# You will notice that some lines have small gaps where they should
# join to a perpendicular line. Before find intersections you would
# need to make each line longer (just by adjusting the numbers in lines)
# to get around this problem.
#
cv2.imshow('Objects', objs)
cv2.imshow('Lines', im)
cv2.imwrite('lines.png', im)
cv2.waitKey()
cv2.destroyAllWindows()
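The intersection-grouping step mentioned above is left open in the answer. The following sketch shows one way it could be done, using a small hypothetical helper (`segments_intersect`) rather than any OpenCV function, and without the line-extension trick mentioned in the code comments:

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def _cross(a, b):
    # scalar (z-component) cross product of 2-D vectors
    return a[0] * b[1] - a[1] * b[0]

def segments_intersect(s1, s2, eps=1e-9):
    """Return True if segments s1 and s2, each given as (x1, y1, x2, y2), intersect."""
    p, q = (s1[0], s1[1]), (s2[0], s2[1])
    r, s = _sub((s1[2], s1[3]), p), _sub((s2[2], s2[3]), q)
    denom = _cross(r, s)
    if abs(denom) < eps:
        return False  # parallel or collinear: treat as non-intersecting
    t = _cross(_sub(q, p), s) / denom
    u = _cross(_sub(q, p), r) / denom
    return -eps <= t <= 1 + eps and -eps <= u <= 1 + eps

def group_segments(segments):
    """Greedily merge segments into groups of connected (mutually intersecting) segments."""
    groups = []
    for seg in segments:
        hits = [g for g in groups if any(segments_intersect(seg, other) for other in g)]
        merged = [seg] + [s for g in hits for s in g]
        groups = [g for g in groups if g not in hits] + [merged]
    return groups

# Usage sketch: segments = [tuple(l) for l in lines[0]]
# print(len(group_segments(segments)), "objects")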
