I have an input image and would like to draw ellipses on all the contours that are found. I'm using python-opencv and getting the following error. Can anyone help me with this? I know how to draw an ellipse, but I'm unsure how to draw one over every detected object in an image. I am new to this area, so please excuse me if this is a silly question.
OpenCV Error: Incorrect size of input array (There should be at least 5 points to fit the ellipse) in cv::fitEllipse, file
C:\bld\opencv_1498171314629\work\opencv-3.2.0\modules\imgproc\src\shapedescr.cpp, line 358
Traceback (most recent call last):
File "D:/project/test.py", line 41, in <module>
ellipse = cv2.fitEllipse(contours[0])
cv2.error: C:\bld\opencv_1498171314629\work\opencv-3.2.0\modules\imgproc\src\shapedescr.cpp:358: error: (-201) There should be at least 5 points to fit the ellipse in function cv::fitEllipse
With ellipse = cv2.fitEllipse(contours[0]) you are fitting only the first contour, which has fewer than 5 points. You should iterate through all the contours and fit ellipses only to the ones with at least 5 points.
Try something like this:
import numpy as np
import cv2

image = cv2.imread("cell.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
grey_hist = cv2.calcHist([grey], [0], None, [256], [0, 256])  # histogram (not used below)
eq = cv2.equalizeHist(grey)
blurredA1 = cv2.blur(eq, (3, 3))
(T, thresh) = cv2.threshold(blurredA1, 190, 255, cv2.THRESH_BINARY)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for i in range(len(contours)):
    if len(contours[i]) >= 5:  # fitEllipse needs at least 5 points
        cv2.drawContours(thresh, contours, i, (150, 10, 255), 3)
        ellipse = cv2.fitEllipse(contours[i])
        cv2.ellipse(thresh, ellipse, (150, 10, 255), 2)
    else:
        # optional: "delete" the small contours by filling them with black
        cv2.drawContours(thresh, contours, i, (0, 0, 0), -1)

cv2.imshow("Fitted ellipses", thresh)
cv2.waitKey(0)
As you can see, I iterate over all of the contours and check whether each one has at least 5 points. I also added an else branch that blacks out the contours that are too small to fit.
I do not know what your input image looks like, so this might or might not work. When you use cv2.RETR_EXTERNAL in the findContours function, it returns only the external contours. Instead, use cv2.RETR_TREE.
This retrieves all of the contours and reconstructs the full hierarchy of nested contours. Refer here for documentation. Your code would change as follows.
im2, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
Hope this helps. If it does not work, it would be great if you could upload your input image so that we can work on it!
Also, as for putting an ellipse on every contour found: as api55 mentioned, you are trying to fit an ellipse only to the first contour; I hope his answer helps you with that. If you want to put an ellipse on the largest contour instead, you can sort the found contours by area and then fit an ellipse to the largest one, or to every contour bigger than a certain size.
Related
I have several thousand images of fluid pathlines -- below is a simple example --
and I would like to automatically detect them: Length and position.
For the position a defined point would be sufficient (e.g. left end).
I don't need the full shape information.
This is a pretty common task but I did not find a reliable method.
How could I do this?
My choice would be Python, but it's not a necessity as long as I can export the results.
Counting curves, angles and straights in a binary image in openCV and python pretty much answers your question.
I tried it on your image and it works.
I used:
ret, thresh = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)
and commented out these two lines:
pts = remove_straight(pts) # remove almost straight angles
pts = max_corner(pts) # remove nearby points with greater angle
I am trying to make an object detection tool (given a sample) using Contours.
I have made some progress; however, when the object is in front of another object with a complicated structure (a hand or a face, for example), or when the object and its background merge in color, it stops detecting the edges and thus doesn't give a good contour.
After reading through the algorithm's documentation, I discovered that it works on the basis that edges are detected by differences in color intensity; for example, if the object is black and the background is also black, it will not be detected.
So now I am trying to apply some effects and blurs to try to make it work.
I am currently trying to get a combined Sobel blur (in both axes), hoping that given enough light it will define the edges, since the product will be used on phones that have a flash.
So this is what I tried:
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = cv2.GaussianBlur(frame, (5, 5), 10)
frameX = cv2.Sobel(frame, cv2.CV_64F, 1, 0)
frameY = cv2.Sobel(frame, cv2.CV_64F, 0, 1)
frame = cv2.bitwise_or(frameX, frameY)
I get an error saying the cv2.findContours supports only CV_8UC1 images when the mode is not CV_RETR_FLOODFILL.
Here is the line that triggers the error:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
I started messing around with this only a week ago and I'm surprised how easy it is to get results, but some of the error messages are ridiculous.
Edit: I did try swapping the mode to CV_RETR_FLOODFILL, but that did not fix the problem; then it didn't work at all.
The reason is that the findContours function expects a binary image (one consisting of 0s and non-zero values) whose type is an 8-bit unsigned integer (uint8), whereas your Sobel output is 64-bit float (CV_64F). The developers presumably did this to reduce memory usage, since there is no point in storing binary values in 64 bits instead of 8. Convert frame into uint8 type, e.g. with
frame = np.uint8(frame)
I followed (and modified) the method from the best-rated answer of this post.
My image is a little bit different. I used HoughLinesP and managed to detect the majority of red lines.
I was wondering: is there a way to remove the detected lines from the image without damaging the black lines that intersect them? I am interested in the black lines only. Is there a smarter way to isolate the black lines without losing too many pixels and segments?
If you want to isolate just the black lines, a simple Otsu threshold and a bitwise AND should do it:
import cv2

image = cv2.imread('3.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# keep only the masked (dark) pixels, then paint everything else white
result = cv2.bitwise_and(image, image, mask=thresh)
result[thresh == 0] = (255, 255, 255)

cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()
This looks like a signal-separation/processing problem.
I don't know whether this will work; it is just a hunch, so give it a shot. Treat your image as the measuring strip convolved with the ECG trace. If that assumption holds, you may be able to disentangle the two signals in the Fourier domain.
Take the Fourier transform (FFT) of the image (scipy has FFT functionality). Call the original image f and its FFT F.
Take an image of just the measuring strip (with no ECG pattern on it) and compute its FFT as well. Call this image g and its FFT G.
Compute the inverse FFT of F/G and see if that clears up the background effect.
In case this does not work, please leave a note in the comment section.
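For what it's worth, here is a rough numpy sketch of the idea, under the stated assumption that the observed image really is a (circular) convolution of the ECG pattern with the strip; the arrays are synthetic stand-ins, and the small epsilon guards against division by near-zero frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.random((64, 64)) + 1.0    # stand-in for the strip image
h = np.zeros((64, 64))
h[32, 10:50] = 1.0                # stand-in for the ECG pattern to recover

# Forward model: f is h circularly convolved with g (done via the FFT)
f = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(g)))

# Deconvolution: divide in the Fourier domain and invert
F = np.fft.fft2(f)
G = np.fft.fft2(g)
h_est = np.real(np.fft.ifft2(F / (G + 1e-8)))
```

On real images, noise makes the plain division unstable wherever G is small, which is why practical deconvolution usually uses a regularized variant such as a Wiener filter.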
I am trying to detect lines in a certain image. I run it through a skeletonization process before applying the cv2.HoughLinesP. I used the skeletonization code here.
No matter what I try I keep getting results similar to what is described here i.e. 'only fragments of a line..'
As suggested by Jiby, I use the named notation for the parameters and also high rho and theta, but to no avail.
Here is my code:
lines = cv2.HoughLinesP(skel, rho=5, theta=np.deg2rad(10), threshold=0, minLineLength=0, maxLineGap=0)
Prior to this, I threshold an RGB image to extract most of my 'blue' hollow rectangle, then convert it to grayscale, which I feed to the skeletonizer.
Please advise.
I am testing the cv2.threshold() function with different values, but each time I get unexpected results, so clearly I do not understand the effect of this parameter:
maxval
Could someone clarify this for me?
For example, I want to draw the contours of this star following the white color:
Here is what I got:
From this code:
import cv2
im=cv2.imread('image.jpg') # read picture
imgray=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) # BGR to grayscale
ret,thresh=cv2.threshold(imgray,200,255,cv2.THRESH_BINARY_INV)
countours,hierarchy=cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im,countours,-1,(0,255,0),3)
cv2.imshow("Contour",im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Each time I change the value of maxval I get a strange result that I can not understand. How can I draw the contour of this star correctly using this function then ?
Thank you in advance.
You may want to experiment with a very simple image that clearly lets you understand the various parameters. The interesting thing about the image attached below is that the grayscale value of a number shown in the image is equal to the number. E.g. 200 is written with grayscale value 200. Here is example python code you can use.
import cv2

# Read image as grayscale (cv2.CV_LOAD_IMAGE_GRAYSCALE is the old 2.x name)
src = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)

# Set threshold and maxValue
thresh = 127
maxValue = 255

# Basic threshold example
th, dst = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY)

# Find contours
contours, hierarchy = cv2.findContours(dst, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Draw contours
cv2.drawContours(dst, contours, -1, (255, 255, 255), 3)
cv2.imshow("Contour", dst)
cv2.waitKey(0)
I copied the following image from an OpenCV Threshold Tutorial I wrote recently. It explains the various parameters with an example image and Python and C++ code. Hope this helps.
Input Image
Result Image
You can convert the image to HSV with COLOR_BGR2HSV and then pick out a color range; making the contour is then quite easy. Try it and let me know.
In a plain black-and-white conversion, yellow and white end up with (nearly) the same value, which is why your approach is not working.
For better accuracy when finding contours, apply a threshold first, since findContours tends to give better results on a binary image, and then use the contours method. Hope this helps!