I'm trying to do lane detection. The code is below; I applied HoughLinesP on the output of Canny edge detection. The idea is to display only the lines that are usually present in the video and are more probable to be a lane (i.e. by picking them based on their angle). I don't want to use any machine learning algorithms. So please help.
Here are the details:
Code:
import cv2
import time
import numpy as np
import matplotlib.pyplot as plt

vid = cv2.VideoCapture('4.mp4')

while True:
    #cv2.namedWindow('frame',cv2.WINDOW_NORMAL)
    ret, img_color = vid.read()
    if not ret:
        # current video finished, switch to the next one
        vid = cv2.VideoCapture('5.mp4')
        continue

    num_rows, num_cols = img_color.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((num_cols/2, num_rows/2), 270, 0.56) #3
    img_rotated = cv2.warpAffine(img_color, rotation_matrix, (num_cols, num_rows))

    height, width = img_rotated.shape[:2]
    img_resize = cv2.resize(img_rotated, (int(0.8*width), int(0.8*height)), interpolation=cv2.INTER_CUBIC) #2

    img_clone = img_resize[10:842, 530:1000].copy()
    img_roi = img_resize[10+250:842-200, 530:1000]

    img_gray = cv2.cvtColor(img_roi, cv2.COLOR_BGR2GRAY) #1

    # sharpening kernel
    kernel = np.array([[0,-1,0], [-1,5,-1], [0,-1,0]])
    img_sharp = cv2.filter2D(img_gray, -1, kernel)
    blur = cv2.GaussianBlur(img_sharp, (5,5), 0)

    img_canny = cv2.Canny(blur, 130, 170, apertureSize=3) #4
    lines = cv2.HoughLinesP(img_canny, 1, np.pi/180, 60, maxLineGap=240)

    if lines is not None:
        print(len(lines))
        for line in lines:
            x1, y1, x2, y2 = line[0]
            cv2.line(img_clone, (x1, y1+250), (x2, y2+250), (0,255,0), 2)
            #cv2.line(img_clone, (x1,y1), (x2,y2), (255,255,0), 3)

    cv2.imshow('frame', img_clone)
    cv2.imshow('frame2', img_canny)

    k = cv2.waitKey(35) & 0xFF
    if k == 27:
        break

vid.release()
cv2.destroyAllWindows()
Here's a link to the videos I'm using.
In 4.mp4 you can see that, a few seconds after running this code, a person walks in and a huge number of lines appear, because Canny detects so many edges in that region. Secondly, I've fixed the region of the image, but I want it to be dynamic: the idea is to set the region of the image based on the most probable lanes.
Also, a cluster of lines appears; I want to narrow it down to the single most probable line. Thank you for reading.
You won't get much better results; this is just the nature of your problem. You will now have to create a mathematical model of your lane and use your Hough lines to correct that model.
E.g. you could track the lane in certain bands of the image using a Kalman filter. You can then use the predict step of this filter and correct it when you observe line segments that are within the expected angle around your observation band.
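A minimal sketch of that idea, assuming segments from cv2.HoughLinesP and a single lane tracked by the x-position where it crosses one horizontal band (the band position, expected angle, and noise values below are illustrative assumptions, not tuned values):

import cv2
import numpy as np

# constant-velocity Kalman filter for the lane's x-position in one image band
kf = cv2.KalmanFilter(2, 1)                        # state: [x, vx], measurement: [x]
kf.transitionMatrix = np.array([[1, 1], [0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0]], np.float32)
kf.processNoiseCov = np.eye(2, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.array([[5.0]], np.float32)
kf.errorCovPost = np.eye(2, dtype=np.float32)

def update_band(lines, band_y, expected_angle=65, tol=15):
    """Predict the lane x in this band, then correct with segments whose angle fits."""
    pred_x = kf.predict()[0, 0]
    xs = []
    for x1, y1, x2, y2 in (l[0] for l in lines):
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if abs(angle - expected_angle) < tol and y1 != y2:
            # x where the segment crosses the band (linear interpolation)
            xs.append(x1 + (band_y - y1) * (x2 - x1) / (y2 - y1))
    if xs:
        kf.correct(np.array([[np.median(xs)]], np.float32))
    return pred_x                                  # use this as the lane estimate for drawing

The idea is then to call something like update_band() once per frame (and per band) and draw only the filtered positions, instead of every raw segment.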
Related
I'm trying to clean the following image in order to perform edge and then polygon detection, so that I can extract the key buildings/features. Ideally I wish to end up with an image where the contours around houses and buildings are extracted correctly. The input image is (the full-sized image is given here)
Currently my processing has worked as follows:
Read in image and perform canny edge detection
Apply Gaussian and median blurs
Perform a probabilistic Hough transform
Draw the lines given from the Hough transform in red
Remove non-red lines, apply blurs to the red and perform contour detection
Which is done with the following code:
import cv2
import numpy as np
from PIL import Image

def read_tif(PATH):
    img = Image.open(PATH)
    # Turn into np array
    img = np.array(img, dtype=np.uint8)*255
    return img

# Read in image ('here' and 'params' come from the rest of my project)
img = read_tif(here("Images/" + params['img']))

dst = cv2.Canny(img, 50, 200, None, 3)

# Apply blurs
dst = cv2.GaussianBlur(dst, (5, 5), 0)
dst = cv2.medianBlur(dst, 3)

cdst = cv2.cvtColor(dst, cv2.COLOR_GRAY2BGR)
cdstP = np.copy(cdst)

# Probabilistic Hough Transform
linesP = cv2.HoughLinesP(dst, 1, np.pi / 180, 50, None, 50, 10)

# Draw lines from Hough
if linesP is not None:
    for i in range(0, len(linesP)):
        l = linesP[i][0]
        cv2.line(cdstP, (l[0], l[1]), (l[2], l[3]), (0,0,255), 6, cv2.LINE_AA)
Unfortunately this method is not having much success: although the lines are detected well, many houses are returned as several irregular polygons:
(see the full image here). I have tried playing around with various other methods, like dilation (to increase the width of the house boundary lines), but these don't seem to improve the results and amplify some of the noise in the image.
Any advice or help on methods/approaches that could improve these results is much appreciated, TIA!!
I'm pretty new to both Python and OpenCV. I just need it for one project: users take a picture of an ECG with their phones and send it to the server, and I need to extract the graph data, that's all.
Here's a sample image:
Original Image Sample
I think I should first crop the image so that only the graph remains. As I couldn't find a way to do that automatically, I did it manually.
Here's some code which tries to isolate the graph by making the lines white (it works on the cropped image). It still leaves some nasty noise and inaccurate polygons at the end, and some parts are not detected:
import cv2
import numpy as np

img = cv2.imread('image.jpg')

kernel = np.ones((6,6), np.uint8)
dilation = cv2.dilate(img, kernel, iterations=1)

gray = cv2.cvtColor(dilation, cv2.COLOR_BGR2GRAY)

ret, gray = cv2.threshold(gray, 160, 255, 0)
gray2 = gray.copy()
mask = np.zeros(gray.shape, np.uint8)

contours, hier = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt) > 400:
        approx = cv2.approxPolyDP(cnt, 0.005 * cv2.arcLength(cnt, True), True)
        if len(approx) >= 5:
            cv2.drawContours(img, [approx], 0, (0, 0, 255), 5)

res = cv2.bitwise_and(gray2, gray2, mask=mask)   # mask must be passed as a keyword argument
cv2.imwrite('output.png', img)
Now I need to make it better. I found most of the code in different places and stitched it together.
np.ones((6,6),np.uint8)
For example, if I use anything other than (6,6) here, I'm in trouble :(
also
cv2.threshold(gray,160,255,0)
I found 160 and 255 by tweaking, and the same goes for all the other hardcoded values in my code. What if the lighting in another picture is different and these values no longer work?
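For reference, here's a minimal sketch (not part of my current code) of letting OpenCV pick the threshold per image with Otsu's method instead of a hardcoded 160; the blur step and flag combination are just illustrative:

import cv2

gray = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(gray, (5, 5), 0)        # smoothing usually helps Otsu
# the threshold value of 0 is ignored when THRESH_OTSU is set; it is computed per image
otsu_val, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(otsu_val)                                 # the threshold Otsu picked for this image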
And other than this, I still don't get the result I want: some polygons are traced by two different lines, one from the bottom and one from the top! I just want one line going from the beginning to the end.
Please guide me on how to tweak and fix this for more general use.
I have a picture that looks like this.
I do know that in simplecv, you can use:
img = Image('hallway.jpg')
img.show()
img.edges.show()
lines = img.findLines()
lines = lines.filter(lines.length() > 50)
lines.show()
I am wondering if anyone knows of any library or documentation, or can point me in any direction, for detecting the edges of corners, doors, etc. in real time or in still images with OpenCV?
OpenCV-Python has an implementation of Hough lines, which could help. While the algorithm is heavy, there is a probabilistic version of it that works in real time. You can even adjust its parameters to make it faster at the cost of accuracy.
import cv2
import numpy as np

img = cv2.imread('hallway.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

minLineLength = 100
maxLineGap = 10
# pass the length/gap limits as keyword arguments so they are not mistaken for the 'lines' argument
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)

if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow("preview", img)
cv2.waitKey(0)
Note that you might have to adjust the thresholds in Canny and the other parameters according to your requirements.
An alternative is to use contours. This might help https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html
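A rough sketch of that contour route, reusing the same hallway image and Canny settings as above (the arc-length cutoff of 100 is an arbitrary assumption):

import cv2

img = cv2.imread('hallway.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# find contours on the edge map and keep only reasonably long ones (OpenCV 4.x return signature)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
long_contours = [c for c in contours if cv2.arcLength(c, False) > 100]
cv2.drawContours(img, long_contours, -1, (0, 0, 255), 2)

cv2.imshow("contours", img)
cv2.waitKey(0)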
I need to detect the vein junctions of bee wings (the image is just one example). I am using OpenCV-Python.
PS: maybe the image lost a little bit of quality, but the image is fully connected, with lines one pixel wide.
This is an interesting question. The result I got is not perfect, but it might be a good start. I filtered the image with a kernel that only looks at the edges of the kernel. The idea is that a junction has at least 3 lines crossing the kernel edge, where regular lines only have 2. This means that when the kernel is over a junction, the resulting value will be higher, so a threshold will reveal them.
Due to the nature of the lines there are some false positives and some false negatives. A single junction will most likely be found several times, so you'll have to account for that. You can make them unique by drawing small dots and detecting those dots (a short sketch of this is shown after the code below).
Result:
Code:
import cv2
import numpy as np

# load the image as grayscale
img = cv2.imread('xqXid.png', 0)
# make a copy to display the result
im_or = img.copy()
# convert image to a larger datatype so the filter response does not overflow
img = img.astype(np.float32)
# create kernel that only samples the border of a 7x7 neighbourhood
kernel = np.ones((7,7))
kernel[2:5,2:5] = 0
print(kernel)
# apply kernel
res = cv2.filter2D(img, -1, kernel)
# filter results
loc = np.where(res > 2800)
print(len(loc[0]))
# draw circles on found locations
for x in range(len(loc[0])):
    cv2.circle(im_or, (loc[1][x], loc[0][x]), 10, (127), 5)
# display result
cv2.imshow('Result', im_or)
cv2.waitKey(0)
cv2.destroyAllWindows()
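As a possible way to make the repeated detections unique, as mentioned above, here is a small sketch that reuses im_or and loc from the code above (the dot radius of 10 is an assumption):

# stamp a filled dot at every raw detection, then keep one centroid per blob
dots = np.zeros(im_or.shape, np.uint8)
for x in range(len(loc[0])):
    cv2.circle(dots, (loc[1][x], loc[0][x]), 10, 255, -1)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(dots)
print(num - 1)                      # label 0 is the background
for cx, cy in centroids[1:]:
    cv2.circle(im_or, (int(cx), int(cy)), 10, (127), 2)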
Note: you can try to tweak the kernel and the threshold. For example, with the code above I got 126 matches. But when I use
kernel = np.ones((5,5))
kernel[1:4,1:4] = 0
with threshold
loc = np.where(res > 1550)
I got 33 matches in these locations:
You can use the Harris corner detector algorithm to detect the vein junctions in the above image. Compared to previous techniques, the Harris corner detector takes the differential of the corner score into account with reference to direction directly, instead of using shifting patches for every 45-degree angle, and has been proved to be more accurate in distinguishing between edges and corners (source: Wikipedia).
Code:

import cv2
import numpy as np

img = cv2.imread('wings-bee.png')
# convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
'''
args:
img - Input image, it should be grayscale and float32 type.
blockSize - It is the size of neighbourhood considered for corner detection
ksize - Aperture parameter of Sobel derivative used.
k - Harris detector free parameter in the equation.
'''
dst = cv2.cornerHarris(gray, 9, 5, 0.04)
# result is dilated for marking the corners
dst = cv2.dilate(dst, None)
# Threshold for an optimal value, it may vary depending on the image.
img_thresh = cv2.threshold(dst, 0.32*dst.max(), 255, 0)[1]
img_thresh = np.uint8(img_thresh)
# get the matrix with the x and y locations of each centroid
centroids = cv2.connectedComponentsWithStats(img_thresh)[3]
stop_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# refine corner coordinates to subpixel accuracy
corners = cv2.cornerSubPix(gray, np.float32(centroids), (5,5), (-1,-1), stop_criteria)
for i in range(1, len(corners)):
    #print(corners[i])
    cv2.circle(img, (int(corners[i,0]), int(corners[i,1])), 5, (0,255,0), 2)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
You can check the theory behind Harris Corner detector algorithm from here.
I am trying to develop a program that can detect lanes on the road. I have experimented with both the Hough Line Transform and the Probabilistic Hough Line Transform. However, neither of these gets the results that I want.
Original Image:
Hough Line Transform
Probabilistic Hough Line Transform
It seems that with the Hough Line Transform I can at least detect the entire lane, but unfortunately the lines just go on infinitely (until they move off the picture), to the point where they intersect with each other, which is not a good graphical lane detection marker.
I also tried the Probabilistic Hough Line Transform, and the green lines used for lane detection do not go off to infinity like the others, but they fail to mark and detect the entire lane.
I am trying to replicate results here (by writing it in Python)
http://www.transistor.io/revisiting-lane-detection-using-opencv.html
What can I do to fix this problem?
Code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
from PIL import Image
import imutils

def invert_img(img):
    img = (255-img)
    return img

def canny(imgray):
    imgray = cv2.GaussianBlur(imgray, (5,5), 200)
    canny_low = 5
    canny_high = 150
    thresh = cv2.Canny(imgray, canny_low, canny_high)
    return thresh

def filtering(imgray):
    thresh = canny(imgray)
    minLineLength = 1
    maxLineGap = 1
    lines = cv2.HoughLines(thresh, 1, np.pi/180, 0)
    #lines = cv2.HoughLinesP(thresh, 2, np.pi/180, 100, minLineLength, maxLineGap)
    print(lines.shape)

    # Code for HoughLinesP
    '''
    for i in range(0, lines.shape[0]):
        for x1,y1,x2,y2 in lines[i]:
            cv2.line(img, (x1,y1), (x2,y2), (0,255,0), 2)
    '''

    # Code for HoughLines (draws on the global img)
    for i in range(0, 5):
        for rho, theta in lines[i]:
            a = np.cos(theta)
            b = np.sin(theta)
            x0 = a*rho
            y0 = b*rho
            x1 = int(x0 + 1000*(-b))
            y1 = int(y0 + 1000*(a))
            x2 = int(x0 - 1000*(-b))
            y2 = int(y0 - 1000*(a))
            cv2.line(img, (x1,y1), (x2,y2), (0,0,255), 2)

    return thresh

img = cv2.imread('images/road_0.bmp')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

img = imutils.resize(img, height=500)
imgray = imutils.resize(imgray, height=500)

thresh = filtering(imgray)
cv2.imshow('original', img)
cv2.imshow('result', thresh)
cv2.waitKey(0)
Cool topic! First of all, why did you add the Gaussian blur? Your source article doesn't mention that at all. If I remove that, I instantly get extra crazy lines, which I can tone down with the canny_low and canny_high. About the best I could find was low=100 and high=180.
Second, you did quite a good job translating the article to Python. However, I think you left out a crucial detail. The author writes:
// Canny algorithm
Mat contours;
Canny(image,contours,50,350);
Mat contoursInv;
threshold(contours,contoursInv,128,255,THRESH_BINARY_INV);
You implement the Canny function (cv2.Canny()), but you don't call the threshold function. According to the documentation I found, this function "applies a fixed-level threshold to each array element." I experimented with your code and came up with the following.
#thresh = canny(imgray) # original
edges = canny(imgray) # docs refer to return value as "edges"
retval, dst = cv2.threshold(edges, 128, 255,cv2.THRESH_BINARY_INV)
Two values are returned; retval isn't particularly important for us right now. dst is the destination 2D array of image data after thresholding. You would then update your calls to cv2.HoughLines and cv2.HoughLinesP, replacing "thresh" with "dst". When I did this I got a lot more interesting behavior, though I was not able to find the correct tuning values to make the lines work well.
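For instance, the relevant lines inside filtering() would become something like this (just a sketch; the parameters are the ones from the question, not tuned values):

edges = canny(imgray)              # was: thresh = canny(imgray)
retval, dst = cv2.threshold(edges, 128, 255, cv2.THRESH_BINARY_INV)
lines = cv2.HoughLines(dst, 1, np.pi/180, 0)
#lines = cv2.HoughLinesP(dst, 2, np.pi/180, 100, minLineLength, maxLineGap)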
So, hopefully that gives you some pointers. Try my tips, and also read the article once or twice more to double check that you have the same program flow as the author. This seems like a fun project, have fun!