I have a picture that looks like this.
I do know that in SimpleCV you can use:
img = Image('hallway.jpg')
img.show()
img.edges().show()
lines = img.findLines()
lines = lines.filter(lines.length() > 50)
lines.show()
I am wondering if anyone knows of a library or documentation, or can point me in any direction, for detecting the edges of corners, doors, etc. in real time or in still images with OpenCV?
OpenCV's Python bindings include implementations of Hough line detection, which could help. While the algorithm is heavy, there is a probabilistic version of it that works in real time. You can even adjust its parameters to make it faster at the cost of accuracy.
import cv2
import numpy as np
img = cv2.imread('hallway.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
minLineLength = 100
maxLineGap = 10
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow("preview", img)
cv2.waitKey(0)
Note that you might have to adjust the Canny thresholds and other parameters according to your requirements.
An alternative is to use contours. This might help https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html
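For illustration, a minimal contour-based sketch could look like this (the hallway.jpg filename and the length threshold of 100 are just assumptions for the example):
import cv2
img = cv2.imread('hallway.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# find external contours on the edge map and keep only the long ones
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
long_contours = [c for c in contours if cv2.arcLength(c, False) > 100]
cv2.drawContours(img, long_contours, -1, (0, 255, 0), 2)
cv2.imshow('contours', img)
cv2.waitKey(0)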
I'm currently working on a Python implementation of the algorithm presented in https://arxiv.org/abs/1611.03270. In that paper there is a part where we create epipolar lines and want to take the part of the image between those lines. Creating the lines is fairly easy, and it can be done with the approach presented, for instance, here: https://docs.opencv.org/3.4/da/de9/tutorial_py_epipolar_geometry.html. I tried to find a solution that would get me the part of the image between those lines (with some set width), but I couldn't find any. I know that I could manually take values from pixels by calculating whether they are under or above the lines, but maybe there is a more elegant solution to this problem? Do you have any ideas, or have you experienced a similar problem in the past?
You can do it like this:
import numpy as np
import cv2
# let's say this is our image
np.random.seed(42)
img = np.random.randint(0, high=256, size=(400, 400), dtype=np.uint8)
cv2.imshow('random image', img)
# we can create a mask from the epipolar points and AND it with the original image
mask = np.zeros((400, 400), dtype=np.uint8)
pts = np.array([[20, 20], [100, 350], [165, 240], [30, 30]], np.int32)
cv2.fillPoly(mask, [pts], 255)
cv2.imshow('mask', mask)
filt_img = img & mask
cv2.imshow('filtered image', filt_img)
cv2.waitKey(0)
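If you would rather derive the region directly from an epipolar line instead of hardcoding polygon points, here is a minimal sketch, assuming the line comes from cv2.computeCorrespondEpilines (which returns (a, b, c) normalized so that a^2 + b^2 = 1); the coefficients and the band half-width below are made-up values:
import numpy as np
import cv2
h, w = 400, 400
img = np.random.randint(0, high=256, size=(h, w), dtype=np.uint8)
# assumed epipolar line a*x + b*y + c = 0, normalized so a^2 + b^2 == 1
a, b, c = 0.6, -0.8, 40.0
half_width = 10  # assumed band half-width in pixels
# with a normalized line, |a*x + b*y + c| is the distance of pixel (x, y) to the line
ys, xs = np.mgrid[0:h, 0:w]
dist = np.abs(a * xs + b * ys + c)
mask = np.zeros((h, w), dtype=np.uint8)
mask[dist <= half_width] = 255
band = cv2.bitwise_and(img, mask)
cv2.imshow('band between the lines', band)
cv2.waitKey(0)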
Problem:
I'm working with a dataset that contains many images that look something like this:
Now I need all these images to be oriented horizontally or vertically, such that the color palette is either at the bottom or the right side of the image. This can be done by simply rotating the image, but the tricky part is figuring out which images should be rotated and which shouldn't.
What I have tried:
I thought the best way to do this is to detect the white line that separates the color palette from the image. I decided to rotate all images that have the palette at the bottom so that they have it on the right side.
# yes, I am mixing PIL and OpenCV (I like the PIL resizing more)
# resize image to be 128 by 128 pixels
img = img.resize((128, 128), PIL.Image.BILINEAR)
img = np.array(img)
# perform edge detection, not sure if these are the best parameters for Canny
edges = cv2.Canny(img, 30, 50, apertureSize=3)
has_line = False
# take a numpy slice of the area where the white line usually is
# (not always exactly in the same spot, which probably has to do with the way I resize my image)
for line in edges[75:80]:
    # check if most of one of the rows consists of white pixels
    counts = np.bincount(line)
    if np.argmax(counts) == 255:
        has_line = True
# rotate if we found such a line
if has_line:
    img = np.rot90(img)
An example of it working correctly:
An example of it working incorrectly:
This works on maybe 98% of images, but there are some cases where it rotates images that shouldn't be rotated, or fails to rotate images that should be. Maybe there is an easier way to do this, or maybe a more elaborate way that is more consistent? I could do it manually, but I'm dealing with a lot of images. Thanks for any help and/or comments.
Here are some images where my code fails for testing purposes:
You can start by thresholding your image with a very high threshold, like 250, to take advantage of the property that your lines are white. This will make the background black. Now create a special horizontal kernel with a shape like (1, 15) and erode your image with it. This will remove the vertical lines from the image, leaving only the horizontal lines.
import cv2
import numpy as np
img = cv2.imread('horizontal2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)
kernel_hor = np.ones((1, 15), dtype=np.uint8)
erode = cv2.erode(thresh, kernel_hor)
As stated in the question, the color palettes can only be on the right or the bottom, so we can check how many contours the right region has. For this, just divide the image in half and take the right part. Before finding contours, dilate the result with a normal (3, 3) kernel to fill in any gaps. Using the cv2.RETR_EXTERNAL flag, find the contours and count them; if there are more than a certain number, the image is the correct side up and there is no need to rotate.
right = erode[:, erode.shape[1]//2:]
kernel = np.ones((3, 3), dtype=np.uint8)
right = cv2.dilate(right, kernel)
cnts, _ = cv2.findContours(right, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(cnts) > 3:
    print('No need to rotate')
else:
    print('rotate')
    # ADD YOUR ROTATE CODE HERE
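    # for example (just an assumption, rotating as in the question):
    # img = np.rot90(img)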
P.S. I tested this on all four images you provided and it worked well. If it does not work for some image, let me know.
I've been trying to convert stereo images into a depth map with OpenCV, but no matter what I do the result comes out unreadable.
I was able to get an accurate depth image from the example images provided in the OpenCV tutorial, but not from any other image. Even when I downloaded other premade, calibrated stereo images online, I got terrible results that are neither accurate nor even close to the quality I get with the example images.
Here is the main Python script I use to make the depth map:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread('calimg_L.png',0)
imgR = cv2.imread('calimg_R.png',0)
# imgL = cv2.imread('./images/example_L.png',0)
# imgR = cv2.imread('./images/example_R.png',0)
stereo = cv2.StereoSGBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgR,imgL)
norm_image = cv2.normalize(disparity, None, alpha = 0, beta = 1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
cv2.imwrite("disparityImage.jpg", norm_image)
plt.imshow(norm_image)
plt.show()
where calimg_L.png is a calibrated version of the original image.
Here is the code I use to calibrate my images:
import numpy as np
import cv2
import glob
from matplotlib import pyplot as plt
def createCalibratedImage(inputImage, outputName):
    # termination criteria
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ..., (2,2,0)
    objp = np.zeros((3*3, 3), np.float32)
    objp[:, :2] = np.mgrid[0:3, 0:3].T.reshape(-1, 2)
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d points in real world space
    imgpoints = []  # 2d points in image plane
    # org = cv2.imread('./chess.jpg')
    # orig_cal_img = cv2.resize(org, (384, 288))
    # cv2.imwrite("cal_chess.jpg", orig_cal_img)
    images = glob.glob('./chess_webcam/*.jpg')
    for fname in images:
        print('file in use: ' + fname)
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Find the chessboard corners
        ret, corners = cv2.findChessboardCorners(gray, (3, 3), None)
        # print("doing the thing")
        print('status: ' + str(ret))
        # If found, add object points, image points (after refining them)
        if ret == True:
            # print("found something")
            objpoints.append(objp)
            corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners)
            # Draw and display the corners
            cv2.drawChessboardCorners(img, (3, 3), corners, ret)
            cv2.imshow('img', img)
            cv2.waitKey(500)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
    img = inputImage
    h, w = img.shape[:2]
    newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
    # undistort
    print('undistorting...')
    mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), 5)
    dst = cv2.remap(inputImage, mapx, mapy, cv2.INTER_LINEAR)
    # crop the image
    x, y, w, h = roi
    dst = dst[y:y+h, x:x+w]
    # cv2.imwrite('calibresult.png', dst)
    cv2.imwrite(outputName + '.png', dst)
    cv2.destroyAllWindows()
original_L = cv2.imread('capture_L.jpg')
original_R = cv2.imread('capture_R.jpg')
createCalibratedImage(original_R, "calimg_R")
createCalibratedImage(original_L, "calimg_L")
print("images calibrated and outputed")
This code was taken from the OpenCV tutorial on how to calibrate images, and it was given at least 16 images of the chessboard, but it was only able to identify the chessboard in about 4-5 of them. The reason I used such a relatively small 3x3 grid is that anything larger left me without any images to use for calibration, due to its inability to find the chessboard.
Here is what I get from an example image (sorry for the weird link, I couldn't figure out how to upload):
https://ibb.co/DYMcdZc
Here are the originals:
https://ibb.co/gMkqyXD
https://ibb.co/YQZY40C
This acts as it should, but when I use it with any other image it gives me a mess, for example:
Output:
https://ibb.co/kXwgDVn
It looks like just a mess of pixels. To be fair, when you display it with the 'gray' colormap in imshow it looks more readable, but it is still not very representative of the image's depth. Here are the originals:
https://ibb.co/vqDKGS0
https://ibb.co/f0X1gMB
Even worse, when I take images myself and calibrate them through the chessboard code, the result comes out as just a random mess of white and black pixels; some values go negative and some pixels have impossibly high values.
tl;dr I can't get any stereo images to be made into a depth map even though the example image works just fine, why is that?
First, I want to say that obtaining a good depth map is not such a simple task, and using basic stereo matching won't always lead to good results. Nevertheless, something better can be achieved.
In order:
Calibration: you should be able to find the checkerboard in more images; 4-5 is a very low number for calibration, and it is very hard to estimate the camera parameters correctly from so few. What do the images look like? Did you read them as grayscale images? Using a different number of rows and columns (e.g. a 4x3 grid instead of 3x3) also helps to resolve the checkerboard's orientation (with a square grid it is ambiguous which side is up or right; for example, a 90° rotation would be indistinguishable from no rotation). A small sketch of the detection call follows.
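For example (the (9, 6) pattern size matches the OpenCV sample board, not necessarily yours, and the flags are just standard detection helpers):
# assumed: a board with 9x6 inner corners, unlike the 3x3 in the question
ret, corners = cv2.findChessboardCorners(
    gray, (9, 6),
    flags=cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE)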
Rectification: this can easily be checked by looking at the images. Open the two images on two different layers (using GIMP or similar) and check for corresponding points. After you rectify the images, corresponding points should lie on the same row. Are they really on the same line? If yes, rectification works; otherwise, you need a better calibration. Stereo matching won't work without this step. A quick sketch of that check in code is shown below.
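A sketch of the check (assuming both rectified images exist and have the same height; the filenames are the ones from the question):
import cv2
import numpy as np
left = cv2.imread('calimg_L.png')
right = cv2.imread('calimg_R.png')
pair = np.hstack((left, right))
# draw a horizontal line every 25 pixels; corresponding features in the
# two halves should sit on the same line if rectification worked
for y in range(0, pair.shape[0], 25):
    cv2.line(pair, (0, y), (pair.shape[1] - 1, y), (0, 255, 0), 1)
cv2.imshow('rectification check', pair)
cv2.waitKey(0)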
Stereo matching: if all the above steps are correct, then you may have a problem with the parameters of the stereo matching. The first thing to check is the disparity range (since it looks like you have a different resolution between the example images and your images, you should check and adapt that value). The minimum disparity can also help (if you reduce the disparity range, you reduce the possibility of errors), and so can the block size (15 is quite big; smaller is often enough).
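As a sketch, a fuller StereoSGBM setup could look like this (all values are starting points to tune, not taken from the question):
block_size = 5  # smaller than the 15 used in the question
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # must be divisible by 16; adapt it to your resolution
    blockSize=block_size,
    P1=8 * block_size ** 2,   # smoothness penalties for single-channel input
    P2=32 * block_size ** 2,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2)
disparity = stereo.compute(imgL, imgR)  # note: left image first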
From what you say, my guess is that the problem is in the calibration. You should check the rectified images, and if the problem is there, try to acquire a new dataset (or find a better one online) and calibrate your images with it. Once you can calibrate and rectify your images correctly, you should get better results.
I see the code is similar to the tutorial here, so I guess that part is correct and the main problem is the images. Hope this can help; I can help you more if you test and see where the problem is!
I have some sample code which detects lines. I need to detect lines at a specific angle, but this code detects all the lines in the image.
import cv2
import numpy as np
img = cv2.imread('myhouse.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
minLineLength = 200
maxLineGap = 10
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines:
    x1, y1, x2, y2 = line[0]
    # HoughLinesP returns only endpoints; the angle has to be derived from them
    theta = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    print(theta)
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite('houghlinesmyhouse.png', img)
The image I use for detection and the resulting image are:
image
result
I need to detect the roof of the house in the image I provided. Please help me with how to detect the roof. My plan is to detect the roof by checking the angles of the lines.
Please refer to the documentation of the
Hough Line Transform.
In my view, you cannot get the angle from HoughLinesP; you can get it from HoughLines.
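For example, a sketch of filtering HoughLines output by angle (the 20-70 degree window is just an assumption you would tune for the roof):
import cv2
import numpy as np
img = cv2.imread('myhouse.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
lines = cv2.HoughLines(edges, 1, np.pi/180, 100)
for line in lines:
    rho, theta = line[0]
    # theta is the angle of the line's normal: ~0 for vertical lines,
    # ~90 degrees for horizontal ones
    angle = np.degrees(theta)
    if 20 < angle < 70 or 110 < angle < 160:  # roughly diagonal lines
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        p1 = (int(x0 + 1000 * (-b)), int(y0 + 1000 * a))
        p2 = (int(x0 - 1000 * (-b)), int(y0 - 1000 * a))
        cv2.line(img, p1, p2, (0, 255, 0), 2)
cv2.imwrite('houghlines_angle.png', img)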
I am new to OpenCV and have a few questions. I need to detect a bottle or a can based on its shape. For this I am using a Raspberry Pi board and Pi camera. The background is always black and does not change. I have tried many possible solutions to this problem but could not get satisfactory results. The things I have tried include edge detection, morphological transformations, matchShapes(), and matchTemplate(). Please let me know how I can do this task efficiently and with maximum accuracy.
A sample image:
I came up with an approach that may help! If you know more about the can, i.e. its width-to-height ratio, the approach can be made more robust by adjusting the rectangle size!
Approach
Convert the image to the HSV color space. Increase V by a factor of 2 in order to make things more visible.
Find the Sobel derivatives in the x and y directions. Compute the magnitude with equal weight for both directions.
Threshold the image using Otsu's method.
Apply closing to the image.
Apply the Canny edge detector.
Apply the Hough line transform.
Find the bounding rectangle of the line image.
Superimpose it onto your image. (Finally done :P)
Code
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = cv2.imread('image3.jpg', cv2.IMREAD_COLOR)
if image is None:
    print('Can not read/find the image.')
    exit(-1)
original = np.copy(image)
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
H, S, V = hsv_image[:,:,0], hsv_image[:,:,1], hsv_image[:,:,2]
# double V with saturation (plain uint8 multiplication would wrap around)
V = np.clip(V.astype(np.int32) * 2, 0, 255).astype(np.uint8)
hsv_image = cv2.merge([H, S, V])
image = cv2.cvtColor(hsv_image, cv2.COLOR_HSV2RGB)
image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# plt.figure(), plt.imshow(image)
Dx = cv2.Sobel(image, cv2.CV_8UC1, 1, 0)
Dy = cv2.Sobel(image, cv2.CV_8UC1, 0, 1)
M = cv2.addWeighted(Dx, 1, Dy, 1, 0)
# plt.subplot(1,3,1), plt.imshow(Dx, 'gray'), plt.title('Dx')
# plt.subplot(1,3,2), plt.imshow(Dy, 'gray'), plt.title('Dy')
# plt.subplot(1,3,3), plt.imshow(M, 'gray'), plt.title('Magnitude')
ret, binary = cv2.threshold(M, 10, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# plt.figure(), plt.imshow(binary, 'gray')
binary = binary.astype(np.uint8)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (20, 20)))
edges = cv2.Canny(binary, 50, 100)
# plt.figure(), plt.imshow(edges, 'gray')
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 50, minLineLength=20, maxLineGap=10)
output = np.zeros_like(M, dtype=np.uint8)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(output, (x1, y1), (x2, y2), 255, thickness=2)
# plt.figure(), plt.imshow(output, 'gray')
# np.where yields (row, col) points, so the rect comes back as (top, left, height, width)
points = np.array([np.transpose(np.where(output != 0))], dtype=np.float32)
rect = cv2.boundingRect(points)
cv2.rectangle(original, (rect[1], rect[0]), (rect[1]+rect[3], rect[0]+rect[2]), (255, 255, 255), thickness=2)
original = cv2.cvtColor(original, cv2.COLOR_BGR2RGB)
plt.figure(), plt.imshow(original, 'gray')
plt.show()
NOTE: you can uncomment the lines to show the result of each step! I commented them out just for the sake of readability.
Result
NOTE: if you know the aspect ratio of your can, you can fix the rectangle better!
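For instance, a rough sketch (R below is a made-up width-to-height ratio; recall that rect comes back as (top, left, height, width) here because the points were built from np.where's (row, col) output):
R = 0.5  # assumed width:height ratio of the can, not from the original answer
top, left, h, w = rect
h = int(w / R)  # recompute the height from the known ratio
cv2.rectangle(original, (left, top), (left + w, top + h), (255, 255, 255), 2)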
I hope that will help. Good Luck :)