Detect dotted (broken) lines only in an image using OpenCV - python

I am trying to learn techniques on image feature detection.
I have managed to detect a horizontal (unbroken/continuous) line; however, I am having trouble detecting all the dotted/broken lines in an image.
Here is my test image; as you can see, there are dotted lines and some text/boxes etc.
So far I have used the following code, which detected only one dotted line.
import cv2
import numpy as np

img = cv2.imread('test.jpg')
img = functions.image_resize(img, 1000, 1000)  # function from a script to resize image to fit my screen
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgEdges = cv2.Canny(imgGray, 100, 250)
imgLines = cv2.HoughLinesP(imgEdges, 2, np.pi/100, 60, minLineLength=10, maxLineGap=100)
for x1, y1, x2, y2 in imgLines[0]:
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow('Final Image with dotted Lines detected', img)
My output image is below. As you can see, I only managed to detect the last dotted line. I have played around with the parameters rho, theta, and minLineLength/maxLineGap, but no luck.
Any advice is greatly appreciated :)

This solution worked for me:
import cv2
import numpy as np

img = cv2.imread('test.jpg')
kernel1 = np.ones((3, 5), np.uint8)
kernel2 = np.ones((9, 9), np.uint8)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBW = cv2.threshold(imgGray, 230, 255, cv2.THRESH_BINARY_INV)[1]
img1 = cv2.erode(imgBW, kernel1, iterations=1)
img2 = cv2.dilate(img1, kernel2, iterations=3)
img3 = cv2.bitwise_and(imgBW, img2)
img3 = cv2.bitwise_not(img3)
img4 = cv2.bitwise_and(imgBW, imgBW, mask=img3)
imgLines = cv2.HoughLinesP(img4, 15, np.pi/180, 10, minLineLength=440, maxLineGap=15)
for i in range(len(imgLines)):
    for x1, y1, x2, y2 in imgLines[i]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow('Final Image with dotted Lines detected', img)

If you have an idea of the dot size, you can use the black-hat transform to filter out the dotted lines. Black-hat is the difference between the closing of the image and the image itself. Then you can try the Hough line transform.
So, try:
Convert BGR to gray.
Apply black-hat using morphologyEx: this will leave only the black dots in the resulting image.
Invert the result and try the Hough line transform.
Here, you will have to experiment with the kernel size to filter only the dots. If that proves not to be very robust, another approach would be to use a blob detector: invert the image and apply the OpenCV blob detector, or find contours, and filter the blobs/contours by area. Letters and other structures will have a larger area than the dots, so you can remove any structures that are larger than the dots. Then apply the Hough line transform.
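A minimal sketch of the black-hat route, assuming the test image from the question; the kernel size and the Hough parameters are placeholders you would need to tune to your dot size. Note that the sketch thresholds the black-hat output rather than inverting it, since HoughLinesP votes on non-zero pixels:

import cv2
import numpy as np

img = cv2.imread('test.jpg')  # test image from the question (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Black-hat = closing(gray) - gray: dark structures smaller than the kernel
# (the dots) become bright, larger dark structures are suppressed.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))  # tune to the dot size
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# Keep only the strong responses (the dots) as a binary image.
dots = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Hough transform on the dot image; a large maxLineGap bridges the gaps between dots.
lines = cv2.HoughLinesP(dots, 1, np.pi / 180, 50, minLineLength=100, maxLineGap=30)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow('dotted lines via black-hat', img)
cv2.waitKey(0)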

That happens because you draw just one line (imgLines[0]). Change the drawing loop to:
for i, line in enumerate(imgLines):
    for x1, y1, x2, y2 in line:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        print(i, x1, y1, x2, y2)

Related

Denoise noisy straight lines / make noisy lines solid Python

I am attempting to denoise / make solid lines in a very noisy image of a floor plan in Python, with no success. The methods I have used are:
masking
blurring
and HoughLinesP
I have even tried a combination of the first two. Here is the sample input image I am trying to turn into solid straight lines:
Using the HoughLinesP method, this is the best result I could achieve: the lines are solid but overlap like crazy wherever there is text (this cannot easily be fixed by changing my minLineLength/maxLineGap variables):
I have tried: masking, Gaussian blur, and HoughLinesP.
HoughLinesP code:
import cv2
import numpy as np
import os
from tkinter import Tk  # from tkinter import Tk for Python 3.x
from tkinter.filedialog import askopenfilename

Tk().withdraw()  # we don't want a full GUI, so keep the root window from appearing
filename = askopenfilename()  # show an "Open" dialog box and return the path to the selected file
print(filename)
filename3, file_extension = os.path.splitext(filename)

# Read input
img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
# Initialize output
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
# Median blurring to get rid of the noise; invert image
img = 255 - cv2.medianBlur(img, 3)

# Detect and draw lines
lines = cv2.HoughLinesP(img, 1, np.pi/180, 10, minLineLength=40, maxLineGap=30)
for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imshow('out', out)
cv2.imwrite(filename3 + ' ' + '69' + '.png', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
There are a few different things you could try, but to start I would recommend the following:
First, threshold the image to identify only the parts that constitute the floor plan
Next, dilate the image to connect any broken segments
Finally, erode the image to prevent your lines from being too thick
You'll have to mess around with the parameters to get it right, but I think this is your best bet to solve this problem without it getting too complicated.
If you want, you can also try the Sobel operators before thresholding to better identify horizontal and vertical lines.
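A rough sketch of that threshold-dilate-erode pipeline; the filename, kernel size, and iteration counts are placeholders to tune against your floor plan:

import cv2
import numpy as np

img = cv2.imread('floorplan.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# 1. Threshold so only the dark floor-plan strokes survive (white on black).
bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# 2. Dilate to connect broken segments.
kernel = np.ones((5, 5), np.uint8)
bw = cv2.dilate(bw, kernel, iterations=2)

# 3. Erode to bring the thickened lines back down.
bw = cv2.erode(bw, kernel, iterations=2)

cv2.imshow('cleaned', bw)
cv2.waitKey(0)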

opencv detecting hallway edges and straight lines in realtime/still images

I have a picture that looks like this.
I do know that in SimpleCV, you can use:
img = Image('hallway.jpg')
img.show()
img.edges.show()
lines = img.findLines()
lines = lines.filter(lines.length() > 50)
lines.show()
I am wondering if anyone knows of any library/documentation, or can point me in any direction, that is able to detect the edges of corners, doors, etc. in real time or in still images with OpenCV?
OpenCV-Python has implementations of the Hough line transform which could help. While the algorithm is heavy, there is a probabilistic version of it that works in real time. You can even adjust parameters to make it faster at the cost of accuracy.
import cv2
import numpy as np

img = cv2.imread('hallway.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

minLineLength = 100
maxLineGap = 10
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
for x1, y1, x2, y2 in lines[:, 0]:
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow("preview", img)
cv2.waitKey(0)
Note that you might have to adjust the Canny thresholds and other parameters according to your requirements.
An alternative is to use contours. This might help: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html
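If you go the contour route, a minimal sketch might look like this (assuming OpenCV 4, where findContours returns two values; the length threshold is an arbitrary placeholder):

import cv2

img = cv2.imread('hallway.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Find contours on the edge map and keep only reasonably long ones.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
long_contours = [c for c in contours if cv2.arcLength(c, False) > 100]
cv2.drawContours(img, long_contours, -1, (0, 255, 0), 2)

cv2.imshow('contours', img)
cv2.waitKey(0)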

How can I prevent HoughLines from detecting certain lines multiple times

So I'm working on this piece of code to extract data from some graphs in images. These images are all scanned from a book. Since we're talking about 100+ images here, I would like to automate the process of course. My first step was to make sure that all images are aligned. Because the pages of the book were scanned by hand, the scans are all slightly shifted or rotated relative to each other. Luckily there are some dotted lines on the images, which can be used as a reference point to align them. Afterwards I can then divide the image into smaller subimages, by slicing the image on these dotted lines. In that way, all subimages will be equal for all scanned images.
So, first step of course is to detect these dotted lines. My strategy can be described in 4 steps:
turn the dotted lines into solid lines, using Morphological Transformation
detect all edges, using Canny Edge Detection
identify the lines, using HoughLines
draw these lines on a mask for further usage
Now there are several problems which may occur. Sometimes HoughLines will detect a wrong line (such as the fold of the next page in the book), but this could potentially be fixed by cropping the image a little on the right side (better solutions are always welcome). The second (and biggest) problem is that HoughLines sometimes tends to identify a single line as multiple lines. I think this has something to do with Canny Edge Detection being too rough or vague about the edges, so that HoughLines actually sees multiple lines. Is there a way I could "smooth" the output from Canny so that HoughLines detects each line exactly once?
In the case of this specific image, the vertical dotted lines in the middle didn't get identified, whereas the fold of the next page in the book did. Furthermore the vertical dotted lines got identified as multiple lines. (left source image, middle edges detected, right lines detected)
import cv2
import numpy as np

# load image
img_large = cv2.imread("image.png")
# resize for ease of use
img_ori = cv2.resize(img_large, None, fx=0.2, fy=0.2, interpolation=cv2.INTER_CUBIC)
# create grayscale
img = cv2.cvtColor(img_ori, cv2.COLOR_BGR2GRAY)
# create mask for image size
mask = np.zeros((img.shape[:2]), dtype=np.uint8)
# morphological open removes the small bright gaps between the dark dots, merging them into solid lines
kernel = np.ones((8, 8))
res = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
# detect edges for houghlines
edges = cv2.Canny(res, 50, 50)
# detect lines
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
# draw detected lines
for line in lines:
    rho, theta = line[0]
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a * rho
    y0 = b * rho
    x1 = int(x0 + 1000 * (-b))
    y1 = int(y0 + 1000 * a)
    x2 = int(x0 - 1000 * (-b))
    y2 = int(y0 - 1000 * a)
    cv2.line(mask, (x1, y1), (x2, y2), 255, 2)
    cv2.line(img, (x1, y1), (x2, y2), 127, 2)
In your script, the pixel-bins and the rotation bins are too fine for the threshold you've set:
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
So you can tune the threshold parameter (200) to get only one line, or tune the rho (1) and theta (np.pi/180) parameters, or tune all of them. You can select a set of images from your collection that each contain only one horizontal or vertical line, then do a grid search to find the parameters that detect exactly one line in that test set.
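A rough sketch of such a grid search, reusing the preprocessing from your script; the test image filenames and the parameter grids are placeholders:

import itertools
import cv2
import numpy as np

test_images = ['single_line_1.png', 'single_line_2.png']  # hypothetical test set

def count_lines(path, rho, theta, threshold):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kernel = np.ones((8, 8))
    res = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    edges = cv2.Canny(res, 50, 50)
    lines = cv2.HoughLines(edges, rho, theta, threshold)
    return 0 if lines is None else len(lines)

best = None
for rho, theta, thresh in itertools.product([1, 2, 3],
                                            [np.pi / 180, np.pi / 90, np.pi / 45],
                                            [150, 200, 250, 300]):
    counts = [count_lines(p, rho, theta, thresh) for p in test_images]
    # Prefer parameter sets that detect exactly one line per test image.
    score = sum(abs(c - 1) for c in counts)
    if best is None or score < best[0]:
        best = (score, rho, theta, thresh)

print('best (score, rho, theta, threshold):', best)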

How to remove multiple polygons using Opencv python

Hi StackOverflow team,
I have an image and I want to remove many portions/parts from the image. I tried to use the below code taken from Cropping Concave polygon from Image using Opencv python
Assume I have this image. Also, I have multiple polygons (such as rectangular shapes, or any form of polygon) from the image, obtained via the labelme annotation tool. So, I want to remove those shapes from the image, or simply change their pixels to white.
In other words, the labelme tool will give you a dictionary file, where the dictionary has a key consisting of the points of each portion/polygon/shape.
Then the polygon points can be easily extracted from the dictionary file. After the points are extracted, we can define our points by giving them names (e.g. a, b, c, ..., h), and each one is in this multidimensional format: "[[1526, 319], [1526, 376], [1593, 379], [1591, 324]]"
Here I thought of whitening each region, but whitening a multidimensional array seems to be unreliable.
import numpy as np
import cv2
import json

with open('ann1.json') as f:
    data = json.load(f)
#%%
a = data['shapes'][0]['points']
b = data['shapes'][1]['points']
c = data['shapes'][2]['points']
#%%
img = cv2.imread("lena.jpg")
pts = np.array(a)  # Points
#%%
## (1) Crop the bounding rect
rect = cv2.boundingRect(pts)
x, y, w, h = rect
croped = img[y:y+h, x:x+w].copy()

## (2) make mask
pts = pts - pts.min(axis=0)
mask = np.zeros(croped.shape[:2], np.uint8)
cv2.drawContours(mask, [pts], -1, (255, 255, 255), -1, cv2.LINE_AA)

## (3) do bit-op
dst = cv2.bitwise_and(croped, croped, mask=mask)

## (4) add the white background
bg = np.ones_like(croped, np.uint8) * 255
cv2.bitwise_not(bg, bg, mask=mask)
dst2 = bg + dst

# cv2.imwrite("croped.png", croped)
# cv2.imwrite("mask.png", mask)
# cv2.imwrite("dst.png", dst)
cv2.imwrite("dst2.png", dst2)
Using Lena I have this output.
But I need to go further and whiten other points/polygons, for example the eyes.
As you can see, my code can use only one polygon's points. I tried appending the points of two other polygons, in my case the two eyes, and got the output shown.
By appending, I mean I added the multidimensional points (e.g. pts = np.array(a+b+c)).
In short, given an image, is there a short way to remove these multiple polygons from the image (while keeping the dimensions of the image) using OpenCV and Python?
Json File:
https://drive.google.com/file/d/1UyOYUVMHpu2vBBEdR99bwrRX5xIfdOCa/view?usp=sharing
You'll need to use a loop to go through all the shapes in the JSON file. I've edited your code to reflect this.
import cv2
import json
import matplotlib.pyplot as plt
import numpy as np

img_path = r"/path/to/lena.png"
json_path = r"/path/to/lena.json"

with open(json_path) as f:
    data = json.load(f)

img = cv2.imread(img_path)
for idx in np.arange(len(data['shapes'])):
    if idx == 0:  # can remove this
        continue  # can remove this
    a = data['shapes'][idx]['points']
    pts = np.array(a)  # Points
    ## (1) Crop the bounding rect
    rect = cv2.boundingRect(pts)
    print(rect)
    x, y, w, h = rect
    img[y:y+h, x:x+w] = (255, 255, 255)

plt.imshow(img)
plt.show()
Output:
I ignored the first shape, since it didn't visualize the results nicely. I took your lead and used rectangles instead of polygons. If you need polygons, you'll need to use something like cv2.drawContours(), cv2.polylines(), or cv2.fillPoly(), as recommended in the SO answer you have linked, to achieve it.
I would like to share the solution I was looking for, which is a slightly modified version of @Shawn Mathew's answer.
Input image:
Code:
with open('lena.json') as f:
    json_file = json.load(f)

img = cv2.imread("folder/lena.jpg")
for polygon in np.arange(len(json_file['shapes'])):
    pts = np.array(json_file['shapes'][polygon]['points'])
    # If your polygons are rectangular, you can fill the areas you want removed
    # with white by uncommenting the two lines below
    # x, y, w, h = cv2.boundingRect(pts)
    # cv2.rectangle(img, (x, y), (x+w, y+h), (255, 255, 255), -1)
    # If your polygons are shapes other than rectangles, you can just use the line below
    cv2.fillPoly(img, pts=[pts], color=(255, 255, 255))

plt.imshow(img)
plt.show()
The colors of the displayed image look different because Matplotlib expects RGB while OpenCV stores BGR; if you want to preserve the colors, save the image using cv2.imwrite.
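For example, continuing from the snippet above (the output filename is just an example):

# Convert BGR to RGB for Matplotlib, or write the BGR image directly to disk.
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
cv2.imwrite("lena_whitened.png", img)  # hypothetical output filename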

How to get the angle (theta) value using HoughLinesP?

I have sample code which detects lines. I need to detect lines at a specific angle, but this code detects all the lines in the image.
import cv2
import numpy as np

img = cv2.imread('myhouse.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

minLineLength = 200
maxLineGap = 10
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength, maxLineGap)
for x1, y1, x2, y2, theta in lines[0]:
    print(theta)
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite('houghlinesmyhouse.png', img)
The image I use for detection and the result image are:
image
result
I need to detect the roof of the house in the image I provided. Please help me detect the roof. In my method, I planned to detect the roof by checking the angle in degrees.
Please refer to the documentation of the Hough Line Transform.
From my point of view, you cannot get the angle from HoughLinesP; you can get it from HoughLines.
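For reference, a sketch of filtering HoughLines results by angle, reusing the preprocessing from the question; the angle bands are placeholders to tune to the roof pitch:

import cv2
import numpy as np

img = cv2.imread('myhouse.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# HoughLines returns (rho, theta) directly; theta is the angle of the line's
# normal in radians, so a roughly 45-degree roof edge shows up near 45 or 135 degrees.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
if lines is not None:
    for rho, theta in lines[:, 0]:
        angle = np.degrees(theta)
        # keep only slanted lines (placeholder bands, tune for your roof pitch)
        if 20 < angle < 70 or 110 < angle < 160:
            a, b = np.cos(theta), np.sin(theta)
            x0, y0 = a * rho, b * rho
            p1 = (int(x0 + 1000 * (-b)), int(y0 + 1000 * a))
            p2 = (int(x0 - 1000 * (-b)), int(y0 - 1000 * a))
            cv2.line(img, p1, p2, (0, 255, 0), 2)

cv2.imwrite('houghlines_roof.png', img)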
