Why isn't my contour closed (Python, OpenCV)?

I've been trying to write a script in OpenCV for some greyscale image processing. However, I keep running into an issue when finding and drawing contours on thresholded images. Finding contours is easy and gives me the kind of result I'm looking for. But when I choose the largest contour by area and try to draw it separately, I get a much more 'broken up' result. I've been trying to figure out what is wrong with my code for a while now, but frankly can't work it out. Has anybody else had a similar experience, or a possible solution?
My (admittedly very messy) current code is:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
import os
import math as m
import imutils as imt
read_directory = r'E:\Other\Ultrasound_Trad_Alg\Input'
write_directory = r'E:\Other\Ultrasound_Trad_Alg\Output'
os.chdir(read_directory)
image_files = os.listdir(read_directory)
for image_file in image_files:
    input_image_grey = cv.imread(image_file,0)
    input_image_color = cv.imread(image_file)
    input_image_color_2 = input_image_color.copy()
    #Initial Black Background Masking Process:
    blurred_input = cv.GaussianBlur(input_image_grey,(7,7),0)
    _,thresholded_image_binary = cv.threshold(blurred_input,0,255,cv.THRESH_BINARY)
    input_contours,hierarchy = cv.findContours(thresholded_image_binary,cv.RETR_EXTERNAL,cv.CHAIN_APPROX_SIMPLE)
    chosen_input_contour = max(input_contours, key=cv.contourArea)
    input_mask = np.zeros_like(input_image_grey)
    cv.drawContours(input_mask, chosen_input_contour, -1, 255, -1)
    #For Testing and Visualization:
    cv.drawContours(input_image_color, chosen_input_contour, -1, (0,255,0), 1)
    cv.drawContours(input_image_color_2, input_contours, -1, (0,255,0), 1)
    cv.imshow("test1",thresholded_image_binary)
    cv.imshow("test2",input_image_color)
    cv.imshow("test3",input_image_color_2)
    cv.imshow("mask",input_mask)
    print(cv.isContourConvex(chosen_input_contour))
    print(cv.contourArea(chosen_input_contour))
    cv.waitKey(0)
When I run this code, I get the following set of 4 images just to demonstrate what I am talking about:
https://i.stack.imgur.com/IP7Ip.jpg
As can be seen, the initial set of contours is pretty much exactly what I'm looking for. However, by isolating the largest contour in the image and drawing it separately, I get a broken-up contour that does not work for me. I've also checked the contour areas, and they show that I should have exactly 5 contours, meaning there shouldn't be any tiny hidden contours messing with the results either. Any thoughts on what is happening?

I think your issue in your Python/OpenCV code is that you have:
input_mask = np.zeros_like(input_image_grey)
and it should be
input_mask = np.zeros_like(thresholded_image_binary)
The binary mask needs to be the same bit depth as your binary image, not your grayscale image. That is, it needs to be 1-bit and not 8-bit.
See if that fixes it.
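A minimal sketch of the suggested change, reusing the variable names from the question (the contour is also wrapped in a list here, since drawContours expects a sequence of contours):
input_mask = np.zeros_like(thresholded_image_binary)
cv.drawContours(input_mask, [chosen_input_contour], -1, 255, -1)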

Related

How to crop part of the image between specified lines in opencv

I'm currently working on a Python implementation of the algorithm presented in https://arxiv.org/abs/1611.03270. In that paper there is a step where we create epipolar lines and want to take the part of the image between those lines. Creating the lines is fairly easy and can be done with the approach presented, for instance, here: https://docs.opencv.org/3.4/da/de9/tutorial_py_epipolar_geometry.html. I tried to find a solution that would give me the part of the image between those lines (with some set width), but I couldn't find any. I know I could manually check each pixel by calculating whether it lies above or below the lines, but maybe there is a more elegant solution to this problem? Do you have any ideas, or have you run into a similar problem in the past?
You can do it like this:
import numpy as np
import cv2
# lets say this is our image
np.random.seed(42)
img = np.random.randint(0, high=256, size=(400,400), dtype=np.uint8)
cv2.imshow('random image', img)
# we can create a mask with epipolar points and AND with the original image
mask = np.zeros([400, 400],dtype=np.uint8)
pts = np.array([[20,20],[100,350],[165,240],[30,30]], np.int32)
cv2.fillPoly(mask, [pts], 255)
cv2.imshow('mask', mask)
filt_img = img&mask
cv2.imshow('filtered image', filt_img)
cv2.waitKey(0)
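If you also want a tight crop of just that region rather than the full-size masked image (an extra step, not part of the original answer), the polygon's bounding rectangle can be used:
# crop to the bounding box of the masked polygon
x, y, w, h = cv2.boundingRect(pts)
cropped = filt_img[y:y+h, x:x+w]
cv2.imshow('cropped region', cropped)
cv2.waitKey(0)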

How to improve depth map and what are my stereo images lacking?

I've been trying to convert stereo images into a depth map with OpenCV, but no matter what I do it seems to come out unreadable.
I was able to get an accurate depth image from the example images provided in the OpenCV tutorial, but not from any other image. Even when I download other premade, calibrated stereo images from online, I get terrible results that are neither accurate nor even close to the quality I get with the example images.
Here is my main Python script that I use to make the depth map:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread('calimg_L.png',0)
imgR = cv2.imread('calimg_R.png',0)
# imgL = cv2.imread('./images/example_L.png',0)
# imgR = cv2.imread('./images/example_R.png',0)
stereo = cv2.StereoSGBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgR,imgL)
norm_image = cv2.normalize(disparity, None, alpha = 0, beta = 1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
cv2.imwrite("disparityImage.jpg", norm_image)
plt.imshow(norm_image)
plt.show()
where calimg_L.png is a calibrated version of the original image.
Here is the code I use to calibrate my images:
import numpy as np
import cv2
import glob
from matplotlib import pyplot as plt
def createCalibratedImage(inputImage, outputName):
    # termination criteria
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
    objp = np.zeros((3*3,3), np.float32)
    objp[:,:2] = np.mgrid[0:3,0:3].T.reshape(-1,2)
    # Arrays to store object points and image points from all the images.
    objpoints = [] # 3d point in real world space
    imgpoints = [] # 2d points in image plane.
    # org = cv2.imread('./chess.jpg')
    # orig_cal_img = cv2.resize(org, (384, 288))
    # cv2.imwrite("cal_chess.jpg", orig_cal_img)
    images = glob.glob('./chess_webcam/*.jpg')
    for fname in images:
        print('file in use: ' + fname)
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
        # Find the chess board corners
        ret, corners = cv2.findChessboardCorners(gray, (3,3),None)
        # print("doing the thing");
        print('status: ' + str(ret));
        # If found, add object points, image points (after refining them)
        if ret == True:
            # print("found something");
            objpoints.append(objp)
            cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
            imgpoints.append(corners)
            # Draw and display the corners
            cv2.drawChessboardCorners(img, (3,3), corners,ret)
            cv2.imshow('img',img)
            cv2.waitKey(500)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)
    img = inputImage
    h, w = img.shape[:2]
    newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h))
    # undistort
    print('undistorting...')
    mapx,mapy = cv2.initUndistortRectifyMap(mtx,dist,None,newcameramtx,(w,h),5)
    dst = cv2.remap(inputImage ,mapx,mapy,cv2.INTER_LINEAR)
    # crop the image
    x,y,w,h = roi
    dst = dst[y:y+h, x:x+w]
    # cv2.imwrite('calibresult.png',dst)
    cv2.imwrite(outputName + '.png',dst)
cv2.destroyAllWindows()
original_L = cv2.imread('capture_L.jpg')
original_R = cv2.imread('capture_R.jpg')
createCalibratedImage(original_R, "calimg_R")
createCalibratedImage(original_L, "calimg_L")
print("images calibrated and outputed")
This code was taken from the OpenCV tutorial on how to calibrate images and was given at least 16 images of the chessboard, but it was only able to identify the chessboard in about 4-5 of them. The reason I used such a relatively small 3x3 grid is that anything larger left me without any images to use for calibration, because the chessboard could not be found at all.
Here is what I get from an example image (sorry for the weird link, couldn't find how to upload):
https://ibb.co/DYMcdZc
here is the original:
https://ibb.co/gMkqyXD
https://ibb.co/YQZY40C
This acts as it should, but when I use it with any other image it gives me a mess, for example:
output:
https://ibb.co/kXwgDVn
It looks like just a mess of pixels. To be fair, when you view it with the 'gray' colormap in imshow it looks more readable, but it is still not very representative of the image's depth. Here are the originals:
https://ibb.co/vqDKGS0
https://ibb.co/f0X1gMB
Even worse, when I take images myself and calibrate them through the chessboard code, the result comes out as just a random mess of white and black pixels, with some values going negative and some pixels taking impossibly high values.
tl;dr: I can't turn any stereo images into a depth map even though the example images work just fine. Why is that?
First, I want to say that obtaining a good depth map is not such a simple task, and using basic stereo matching won't always lead to good results. Nevertheless, something better can be achieved.
In order:
Calibration: you should be able to find the checkerboard in more images; 4-5 is a very low number for calibration, and it is very hard to estimate the camera parameters correctly from so few views. What do the images look like? Did you read them as grayscale images? Using different numbers of rows and columns (e.g. a 4x3 grid rather than 3x3) also helps to resolve the checkerboard's orientation (with a square grid it can be ambiguous which side is up or right; for example, a 90° rotation would look the same as no rotation).
Rectification: this can easily be checked by looking at the images. Open the two images on two different layers (using GIMP or similar) and look for corresponding points. After rectification, corresponding points should lie on the same row (a code version of this check is sketched after this list). Are they really on the same line? If yes, rectification works; otherwise, you need a better calibration. Stereo matching won't work without this step.
Stereo matching: if all the steps above are correct, then you may have a problem with the stereo matcher's parameters. The first thing to check is the disparity range (since it looks like your images have a different resolution than the example images, you should check and adapt that value). The minimum disparity can also help (if you reduce the disparity range, you reduce the room for error), as can the block size (15 is quite big; smaller is usually enough). A parameter sketch follows at the end of this answer.
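A rough sketch of the rectification check mentioned above, done in code rather than GIMP (this assumes the two rectified images have the same size, and reuses the filenames from the question):
import cv2
import numpy as np

imgL = cv2.imread('calimg_L.png', 0)
imgR = cv2.imread('calimg_R.png', 0)

# put the two rectified images side by side and draw horizontal lines;
# corresponding features should sit on the same line in both halves
both = cv2.cvtColor(np.hstack((imgL, imgR)), cv2.COLOR_GRAY2BGR)
for y in range(0, both.shape[0], 25):
    cv2.line(both, (0, y), (both.shape[1] - 1, y), (0, 255, 0), 1)
cv2.imshow('rectification check', both)
cv2.waitKey(0)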
From what you say, my guess would be that the problem is in the calibration. You should try checking the rectified images, and if the problem is there, try to acquire a new dataset (or find a better one online) and calibrate your images with that. Once you can calibrate and rectify your images correctly, you should get better results.
I see the code is similar to the tutorial here, so I guess that's correct and the main problem is the images. Hope this helps; I can help you more if you test and see where the problem is!
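As an illustration of the stereo matching point only, a minimal sketch of a more fully parameterised matcher (the specific numbers here are assumptions to be tuned, not values from the post):
import cv2

imgL = cv2.imread('calimg_L.png', 0)  # left image
imgR = cv2.imread('calimg_R.png', 0)  # right image

block = 5  # smaller block size than 15
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,      # must be a multiple of 16; adapt to the image resolution
    blockSize=block,
    P1=8 * block * block,   # smoothness penalties, following the OpenCV samples
    P2=32 * block * block,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
# compute() expects the left image first; the raw result is fixed-point, scaled by 16
disparity = stereo.compute(imgL, imgR).astype('float32') / 16.0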

OpenCV thresholding code working in Python 2.7 (Windows) but not on Raspberry Pi

I have this Python code that applies a series of thresholding operations to an image of an eye so that it can detect the pupil. I wrote this code using Python 2.7 in Windows 10. It actually worked great, since I was able to get my desired output.
Here is the code that I wrote in Windows 10:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('C:\Users\User\Documents\module4\input\left.jpg',0)
image = cv2.medianBlur(img,5)
#Apply Adaptive Threshold with Laplacian
th = cv2.adaptiveThreshold(image,255,cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY,11,2)
laplacian = cv2.Laplacian(th,cv2.CV_64F)
cv2.imwrite('C:\Users\User\Documents\module4\output\output1.jpg', laplacian)
#Apply Inverse Binary Threshold
binthresh = cv2.imread('C:\Users\User\Documents\module4\output\output1.jpg',0)
ret,thresh2 = cv2.threshold(laplacian,127,255,cv2.THRESH_BINARY_INV)
cv2.imwrite('C:\Users\User\Documents\module4\output\output2.jpg', thresh2)
#Apply First Otsu's Threshold
otsuthresh1 = cv2.imread('C:\Users\User\Documents\module4\output\output2.jpg',0)
blur = cv2.GaussianBlur(otsuthresh1,(5,5),0)
ret3,th3 = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imwrite('C:\Users\User\Documents\module4\output\output3.jpg', th3)
#Apply Gaussian Blur
gaussblur = cv2.imread('C:\Users\User\Documents\module4\output\output3.jpg',0)
blur2 = cv2.GaussianBlur(gaussblur,(5,5),0)
cv2.imwrite('C:\Users\User\Documents\module4\output\output4.jpg', blur2)
#Apply Second Otsu's Threshold
otsuthresh2 = cv2.imread('C:\Users\User\Documents\module4\output\output4.jpg',0)
blur3 = cv2.GaussianBlur(otsuthresh2,(5,5),0)
ret4,th4 = cv2.threshold(blur3,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
#Apply Circular Hough Transform
circles = cv2.HoughCircles(th4,cv2.HOUGH_GRADIENT,1,20,param1=50,param2=30,minRadius=0,maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
    # draw the outer circle
    cv2.circle(th4,(i[0],i[1]),i[2],(100,100,0),2)
    # draw the center of the circle
    cv2.circle(th4,(i[0],i[1]),2,(0,50,100),3)
cv2.imshow('combined', th4)
cv2.imwrite('C:\Users\User\Documents\module4\output\output5.jpg', th4)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is a screenshot of all the outputs of the code (including the original input image):
I tried running this same code on my Raspberry Pi; I just changed the file path of the input image as well as where to store the output images.
Here is the code that I ran on my Raspberry Pi:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('/home/pi/IPD/images/image1.jpg',0)
image = cv2.medianBlur(img,5)
#Apply Adaptive Threshold with Laplacian
th = cv2.adaptiveThreshold(image,255,cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY,11,2)
laplacian = cv2.Laplacian(th,cv2.CV_64F)
#Apply Inverse Binary Threshold
binthresh = cv2.imread('/home/pi/IPD/temp/output1.jpg',0)
ret,thresh2 = cv2.threshold(binthresh,127,255,cv2.THRESH_BINARY_INV)
cv2.imwrite('/home/pi/IPD/temp/output2.jpg', thresh2)
#Apply First Otsu's Threshold
otsuthresh1 = cv2.imread('/home/pi/IPD/temp/output2.jpg',0)
blur = cv2.GaussianBlur(otsuthresh1,(5,5),0)
ret3,th3 = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imwrite('/home/pi/IPD/temp/output3.jpg', th3)
#Apply Gaussian Blur
gaussblur = cv2.imread('/home/pi/IPD/temp/output3.jpg',0)
blur2 = cv2.GaussianBlur(gaussblur,(5,5),0)
cv2.imwrite('/home/pi/IPD/temp/output4.jpg', blur2)
#Apply Second Otsu's Threshold
otsuthresh2 = cv2.imread('C/home/pi/IPD/temp/output4.jpg',0)
blur3 = cv2.GaussianBlur(otsuthresh2,(5,5),0)
ret4,th4 = cv2.threshold(blur3,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
#Apply Circular Hough Transform
circles = cv2.HoughCircles(th4,cv2.HOUGH_GRADIENT,1,20,param1=50,param2=30,minRadius=0,maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
    # draw the outer circle
    cv2.circle(th4,(i[0],i[1]),i[2],(100,100,0),2)
    # draw the center of the circle
    cv2.circle(th4,(i[0],i[1]),2,(0,50,100),3)
cv2.imshow('combined', th4)
cv2.imwrite('/home/pi/IPD/images/final.jpg', th4)
cv2.waitKey(0)
cv2.destroyAllWindows()
However, I get the following error:
Traceback (most recent call last):
File "/home/pi/IPD/mod4.py", line 18, in
ret,thresh2 = cv2.threshold(binthresh,127,255,cv2.THRESH_BINARY_INV)
error: /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/core/src/matrix.cpp:269: error: (-215) m.dims >= 2 in function Mat
Actually, I also encountered this error when I first wrote the code in Windows 10, but I solved it by writing the newly thresholded image to disk and just loading it again (as you can see in my code; I know it's an inefficient way) so that I could apply a new threshold to it. I've tried searching for possible explanations of why this might be, and I figured it has something to do with how many channels the input has (I think). However, I'm still new to using OpenCV and image processing in general, and I really don't understand the concepts that well (even though I've already researched them).
If you can help me and point me in the right direction, I would be really grateful. And if you can also suggest how I can avoid storing the newly thresholded image and loading it again (which is really an inefficient way of going about this) without causing any errors, I would really, really appreciate it.
I spotted two errors; I hope those are the only two.
1) In your first code you have:
laplacian = cv2.Laplacian(th,cv2.CV_64F)
cv2.imwrite('C:\Users\User\Documents\module4\output\output1.jpg', laplacian)
#Apply Inverse Binary Threshold
binthresh = cv2.imread('C:\Users\User\Documents\module4\output\output1.jpg',0)
ret,thresh2 = cv2.threshold(laplacian,127,255,cv2.THRESH_BINARY_INV)
Here you are applying the Laplacian operator and saving the result as a CV_64F image (doubles), but threshold ONLY takes CV_8U or CV_32F. You have two options: change the 64F to 32F, or use the normalize function to convert it to an 8U image. Something like:
output1 = cv2.normalize(laplacian, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
2) In the second code you are missing:
cv2.imwrite('C:\Users\User\Documents\module4\output\output1.jpg', laplacian)
So you are not saving that image, and therefore you are not loading it either... no image, and an error jumps out.
General suggestions: always use imshow to see what is going on up to each point. Use relative paths for saving and loading the temp images; this way you only need to change the input path.
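To avoid the write-then-reload round trips entirely (something the question asks about but the answer does not spell out), here is a minimal sketch of the same chain kept in memory, assuming the normalize fix above (paths and parameters are taken from the Raspberry Pi version):
import cv2

img = cv2.imread('/home/pi/IPD/images/image1.jpg', 0)
image = cv2.medianBlur(img, 5)

# adaptive threshold, then Laplacian
th = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 11, 2)
laplacian = cv2.Laplacian(th, cv2.CV_64F)

# convert the 64F Laplacian to 8U so threshold() will accept it
lap8u = cv2.normalize(laplacian, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)

# the rest of the chain, passing arrays directly instead of re-reading JPEGs
_, thresh2 = cv2.threshold(lap8u, 127, 255, cv2.THRESH_BINARY_INV)
blur = cv2.GaussianBlur(thresh2, (5, 5), 0)
_, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
blur2 = cv2.GaussianBlur(th3, (5, 5), 0)
blur3 = cv2.GaussianBlur(blur2, (5, 5), 0)
_, th4 = cv2.threshold(blur3, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)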

OpenCV (Python): Construct Rectangle from thresholded image

The image below shows an aerial photo of a house block (re-oriented with the longest side vertical), and the same image subjected to Adaptive Thresholding and Difference of Gaussians.
Images: Base; Adaptive Thresholding; Difference of Gaussians
The roof-print of the house is obvious (to the human eye) on the AdThresh image: it's a matter of connecting some obvious dots. In the sample image, finding the blue-bounded box below -
Image with desired rectangle marked in blue
I've had a crack at implementing HoughLinesP() and findContours(), but I get nothing sensible (probably because there's some nuance I'm missing). The Python script chunk that fails to find anything remotely like the blue box is as follows:
import cv2
import numpy as np
from matplotlib import pyplot as plt
# read in full (RGBA) image - to get alpha layer to use as mask
img = cv2.imread('rotated_12.png', cv2.IMREAD_UNCHANGED)
grey = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Otsu's thresholding after Gaussian filtering
blur_base = cv2.GaussianBlur(grey,(9,9),0)
blur_diff = cv2.GaussianBlur(grey,(15,15),0)
_,thresh1 = cv2.threshold(grey,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
thresh = cv2.adaptiveThreshold(grey,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,2)
DoG_01 = blur_base - blur_diff
edges_blur = cv2.Canny(blur_base,70,210)
# Find Contours
(ed, cnts,h) = cv2.findContours(grey, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:4]
for c in cnts:
    approx = cv2.approxPolyDP(c, 0.1*cv2.arcLength(c, True), True)
    cv2.drawContours(grey, [approx], -1, (0, 255, 0), 1)
# Hough Lines
minLineLength = 30
maxLineGap = 5
lines = cv2.HoughLinesP(edges_blur,1,np.pi/180,20,minLineLength,maxLineGap)
print "lines found:", len(lines)
for line in lines:
    cv2.line(grey,(line[0][0], line[0][1]),(line[0][2],line[0][3]),(255,0,0),2)
# plot all the images
images = [img, thresh, DoG_01]
titles = ['Base','AdThresh','DoG01']
for i in xrange(len(images)):
    plt.subplot(1,len(images),i+1),plt.imshow(images[i],'gray')
    plt.title(titles[i]), plt.xticks([]), plt.yticks([])
plt.savefig('a_edgedetect_12.png')
cv2.destroyAllWindows()
I am trying to set things up without excessive parameterisation. I'm wary of 'tailoring' an algorithm for just this one image, since this process will be run on hundreds of thousands of images (with roofs of different colours which may be less distinguishable from the background). That said, I would love to see a solution that 'hit' the blue-box target; that way I could at the very least work out what I've done wrong.
If anyone has a quick-and-dirty way to do this sort of thing, it would be awesome to get a Python code snippet to work with.
The 'base' image ->
Base Image
You should apply the following:
1. Contrast Limited Adaptive Histogram Equalization (CLAHE) and convert to grayscale.
2. Gaussian blur & morphological transforms (dilation, erosion, etc.) as mentioned by #bad_keypoints. This will help you get rid of the background noise. This is the trickiest step, as the results will depend on the order in which you apply them (first Gaussian blur and then morphological transforms, or vice versa) and the window sizes you choose.
3. Apply adaptive thresholding.
4. Apply Canny edge detection.
5. Find the contour having four corner points.
As said earlier, you need to tweak the input parameters of these functions and also validate them against other images, since settings that work for this case may not work for others. You will need to fix the parameter values based on trial and error; a rough sketch of the pipeline follows.
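A rough sketch of that pipeline (the parameter values are assumptions to be tuned per the note above, and the findContours return signature varies between OpenCV versions):
import cv2

img = cv2.imread('rotated_12.png')

# 1. CLAHE on the grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
gray = clahe.apply(gray)

# 2. Gaussian blur + morphological closing to suppress background noise
blur = cv2.GaussianBlur(gray, (5, 5), 0)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(blur, cv2.MORPH_CLOSE, kernel)

# 3. Adaptive threshold
thresh = cv2.adaptiveThreshold(closed, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# 4. Canny edges
edges = cv2.Canny(thresh, 70, 210)

# 5. Look for a contour that approximates to four corner points
contours = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
contours = sorted(contours, key=cv2.contourArea, reverse=True)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [approx], -1, (255, 0, 0), 2)
        break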

How can I find cycles in a skeleton image with Python libraries?

I have many skeletonized images like this:
How can I detect a cycle, i.e. a loop, in the skeleton?
Are there "special" functions that do this, or should I implement it as a graph?
If the graph option is the only one, can the Python graph library NetworkX help me?
You can exploit the topology of the skeleton. A cycle encloses a hole, so we can use scipy.ndimage to fill any holes and compare against the original. This isn't the fastest method, but it's extremely easy to code.
import scipy.misc, scipy.ndimage
# Read the image
img = scipy.misc.imread("Skel.png")
# Retain only the skeleton
img[img!=255] = 0
img = img.astype(bool)
# Fill the holes
img2 = scipy.ndimage.binary_fill_holes(img)
# Compare the two, an image without cycles will have no holes
print "Cycles in image: ", ~(img == img2).all()
# As a test break the cycles
img3 = img.copy()
img3[0:200, 0:200] = 0
img4 = scipy.ndimage.binary_fill_holes(img3)
# Compare the two, an image without cycles will have no holes
print "Cycles in image: ", ~(img3 == img4).all()
I've used your "B" picture as an example. The first two images are the original and the filled version which detects a cycle. In the second version, I've broken the cycle and nothing gets filled, thus the two images are the same.
First, let's build an image of the letter B with PIL:
import Image, ImageDraw, ImageFont
from matplotlib import pyplot as plt
image = Image.new("RGBA", (600,150), (255,255,255))
draw = ImageDraw.Draw(image)
fontsize = 150
font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationMono-Regular.ttf", fontsize)
txt = 'B'
draw.text((30, 5), txt, (0,0,0), font=font)
img = image.resize((188,45), Image.ANTIALIAS)
print type(img)
plt.imshow(img)
You may find a better way to do that, particularly with the path to the fonts. It would be better to load an image instead of generating it. Anyway, we now have something to work on:
Now, the real part:
import mahotas as mh
import numpy as np
img = np.array(img)
im = img[:,0:50,0]
im = im < 128
skel = mh.thin(im)
noholes = mh.morph.close_holes(skel)
plt.subplot(311)
plt.imshow(im)
plt.subplot(312)
plt.imshow(skel)
plt.subplot(313)
cskel = np.logical_not(skel)
choles = np.logical_not(noholes)
holes = np.logical_and(cskel,noholes)
lab, n = mh.label(holes)
print 'B has %s holes'% str(n)
plt.imshow(lab)
And we have in the console (ipython):
B has 2 holes
Converting your skeleton image to a graph representation is not trivial, and I don't know of any tools to do that for you.
One way to do it on the bitmap would be to use a flood fill, like the paint bucket in Photoshop. If you start a flood fill of the image, the entire background will get filled if there are no cycles. If the fill doesn't reach the entire background, then you've found a cycle. Robustly finding all the cycles could require filling multiple times.
This is likely to be very slow to execute, but probably much faster to code than a technique where you trace the skeleton into a graph data structure.
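A rough sketch of that flood-fill idea (this assumes the skeleton is white, 255, on a black background, that the top-left corner is background, and it reuses the "Skel.png" filename from the answer above):
import cv2
import numpy as np

# read the skeleton as a binary image: 1 = skeleton, 0 = background
img = cv2.imread("Skel.png", 0)
skel = (img == 255).astype(np.uint8)

# flood fill the background starting from a corner
h, w = skel.shape
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a mask 2 pixels larger
filled = skel.copy()
cv2.floodFill(filled, mask, (0, 0), 1)

# any background pixel the fill could not reach is enclosed by a cycle
print("Cycles in image:", bool((filled == 0).any()))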
