When we want to find the contours of a given image according to a certain threshold, we use the function cv2.findContours(), which returns, among other things, the list of contours (a Python list of arrays representing the contours of the picture).
Here is how to use the function:
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
My question:
What is the order that OpenCV uses to order the contours?
I mean, by which criterion does it decide that this contour is in position 0, that one in position 1, and so on?
I need this information because I want to know which contour I am dealing with in my program and why it is given this or that position.
Hope you won't close my question, I am just a beginner.
I do not fully understand the question, but from my understanding the contour positions are based on the x and y pixel coordinates of the image, which is what the returned list of arrays contains. So presumably the criterion is their position in the image. (Please consider this as a comment.)
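If a predictable order is what you actually need, one common approach (not part of the answer above, just a sketch) is to impose the order yourself, for example sorting the contours left to right by the x coordinate of their bounding boxes:

import cv2

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# boundingRect returns (x, y, w, h); sorting by x gives a left-to-right order
contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[0])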
I want to detect parking lots.
I already detect each slot using cv2.findContours and draw it with cv2.drawContours(imgB, contours, -1, (0, 255, 0), 1).
Then I want to compare the difference between reference image and input image from cctv using compare_ssim.
The problem is that the contours I use are not rectangular, so I can't compare them using SSIM. Is there any way to compare non-rectangular ROIs?
I have tried to create a boundingRect and compare, but the result is inaccurate, because the ROI that I want to compare intersects with other ROIs, and compare_ssim can't compare a non-rectangular ROI. I tried (score, diff) = compare_ssim(grayA[[c]], grayB[[c]], full=True), but that gives me an error like this:
IndexError: index 463 is out of bounds for axis 0 with size 360!
I expect the output to tell me whether a specific parking slot is empty or not.
But at this point I just want to compare the difference with non rectangular ROI.
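One possible workaround (a sketch under assumptions, not something from this thread): crop the bounding box of the contour from both grayscale images, compute the full SSIM map with full=True, then average that map only over the pixels that lie inside the contour, using a filled-contour mask:

import cv2
import numpy as np
from skimage.measure import compare_ssim  # skimage.metrics.structural_similarity in newer scikit-image

def ssim_inside_contour(grayA, grayB, contour):
    # Crop the bounding box of the (non-rectangular) contour from both grayscale images
    x, y, w, h = cv2.boundingRect(contour)
    roiA = grayA[y:y+h, x:x+w]
    roiB = grayB[y:y+h, x:x+w]
    # Build a filled mask of the contour, shifted into bounding-box coordinates
    mask = np.zeros((h, w), dtype=np.uint8)
    shifted = (contour - [x, y]).astype(np.int32)
    cv2.drawContours(mask, [shifted], -1, 255, -1)
    # Full SSIM map over the rectangle, then average it only inside the contour
    score, diff = compare_ssim(roiA, roiB, full=True)
    return diff[mask == 255].mean()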
Is there an easy and direct way to extract the internal contours (holes) from an image using OpenCV 3.1 in Python?
I know that I can use "area" as a condition. However, if I change the image resolution, the "areas" are not the same.
For instance, with this image:
How can I extract the internal holes?
import cv2
import numpy as np

_, contours, hier = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
max_area = np.max(areas)
Mask = np.ones(img.shape[:2], dtype="uint8") * 255
# I can do something like this (currently not working, just to show an example)
for c in contours:
    if (cv2.contourArea(c) > 8) and (cv2.contourArea(c) < 100000):
        cv2.drawContours(Mask, [c], -1, 0, 1)
As I explained in my comment, you have to check the hierarchy return variable. After findContours you will get the contours (a list of lists of points) and the hierarchy (a list of lists).
The documentation is very clear on this:
hierarchy – Optional output vector, containing information about the
image topology. It has as many elements as the number of contours. For
each i-th contour contours[i], the elements hierarchy[i][0],
hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] are set to
0-based indices in contours of the next and previous contours at the
same hierarchical level, the first child contour and the parent
contour, respectively. If for the contour i there are no next,
previous, parent, or nested contours, the corresponding elements of
hierarchy[i] will be negative.
So this means that for each contours[i] you should get a hierarchy[i] that contains a list with 4 values:
hierarchy[i][0]: the index of the next contour of the same level
hierarchy[i][1]: the index of the previous contour of the same level
hierarchy[i][2]: the index of the first child
hierarchy[i][3]: the index of the parent
That being said, in your case there should be one contour without a parent, and you can check which one it is by testing whether hierarchy[i][3] is negative.
It should be something like (untested code; note that cv2 returns the hierarchy with an extra outer dimension, hence hierarchy[0]):
holes = [contours[i] for i in range(len(contours)) if hierarchy[0][i][3] >= 0]
UPDATE:
To summarize what we discussed in the chat:
The image was too big, and the contours had small holes: this was solved with dilate and erode using a kernel of size 75.
The image needed to be inverted, since OpenCV's dilate expects a black background.
The algorithm was giving 2 big contours, one outside (as expected) and one inside (almost the same as the outside one); this is probably due to the image having some external (and closed) bumps. It was solved by removing the contour without a parent and its first child.
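A rough sketch of those steps put together (the file name and the threshold value are assumptions; the 75x75 kernel comes from the discussion above):

import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)           # hypothetical file name
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)   # invert so the background is black
kernel = np.ones((75, 75), np.uint8)
bw = cv2.dilate(bw, kernel)                                   # close the small gaps in the outline
bw = cv2.erode(bw, kernel)
_, contours, hierarchy = cv2.findContours(bw, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
holes = [contours[i] for i in range(len(contours)) if hierarchy[0][i][3] >= 0]  # contours with a parent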
I have been working on an image of bacteria and I wish to get the number of bacteria from the image, and I also need to classify the bacteria by specific shape and size.
I am using OpenCV with Python. Currently I use the contour method.
contours, hierarchy = cv2.findContours(dst, 1, 2)  # 1 = cv2.RETR_LIST, 2 = cv2.CHAIN_APPROX_SIMPLE
cnt = contours[0]
l = len(contours)
print l
li = list(contours)
print li
This gives an output of l = 115 and li = some array values.
What does this mean?
Please help me find the answer. E. coli image below:
findContours connects continuous points and puts each resulting contour in an array. So each element in this array probably corresponds to a different bacterium (or a false detection due to a connected color group that is a shadow, etc.).
When you call len(contours), you get the number of elements in this array. Therefore, you get a rough estimate of the number of bacteria.
In your case there are 115 bacteria, or color groups that differ from their surroundings and which may or may not be bacteria. When you make a list of them and print it, you get the contents of each element in the list, i.e. of each "connected point group" or each "object that is possibly a bacterium". It's all pretty straightforward, really.
If you realize that you have many false detections, here is what you can do (a rough sketch follows below):
A group of bacteria appearing as one:
Threshold the image (convert it to black and white) and use the erode function first, then the dilate function, to remove their connections. Then run findContours once more.
Stains detected as bacteria:
Make your thresholding cover only the bacteria's color range, so everything else will be ignored.
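A rough sketch of that approach (the Otsu thresholding, the inverted binarization, the kernel size and the iteration counts are all assumptions, not values from the answer):

import cv2
import numpy as np

img = cv2.imread("ecoli.png", cv2.IMREAD_GRAYSCALE)                         # hypothetical file name
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
kernel = np.ones((3, 3), np.uint8)
bw = cv2.erode(bw, kernel, iterations=2)    # break thin connections between touching bacteria
bw = cv2.dilate(bw, kernel, iterations=2)   # restore the blobs to roughly their original size
contours, hierarchy = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))                        # rough bacteria count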
See sources below, they might help:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
http://docs.opencv.org/trunk/d4/d73/tutorial_py_contours_begin.html#gsc.tab=0
cv2.findContours returns a list of contours, where each contour is a numpy array of points (two columns, for the x and y coordinates). len(foo) is the length of the list foo. So in your case it found 115 contours, and your li is just a copy of the contours list.
You can easily examine the contours by using the drawContours function.
# draws contours in white color, outlines only (not filled)
cv2.drawContours(dst, contours, -1, (255,))
cv2.imshow("result", dst)
cv2.waitKey(-1)
I have "n" number of contour detected images(frame). I wants to find mean value for the rectangle portion of that image. (Instead of finding mean value for a full image, i need to calculate the mean value for the rectangle portion of that image.)
I have rectangle's x,y position and width, height values. First Image x,y,w,h is 109,45 ,171,139 and second image x,y,w,h is 107,71,175,110. I get the values using the below code. cv2.rectangle(frame, (x,y),(x+w,y+h), (0,0,255), 3) I know using "ROI" concept we can do mean calculation. So, I referred some links. Ex. Get the ROI of two binary images and find difference of the mean image intesities between 2 ROI in python. But, I have confused with the parameter settings. Can anyone help me to resolve my problem ? Thanks in advance...
There's an easier way to get a rectangle from an image in Python. Since cv2 operates on NumPy arrays, you can use normal slicing (note that i corresponds to y and j to x, not the other way around):
rect = image[i:i+h, j:j+w]
And taking mean is even simpler:
rect.mean()
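For example, with the numbers from the question (assuming frame is the first BGR image, already loaded):

x, y, w, h = 109, 45, 171, 139     # first image from the question
rect = frame[y:y+h, x:x+w]
print(rect.mean())                 # mean over all pixels and channels
print(rect.mean(axis=(0, 1)))      # per-channel means (B, G, R)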
I am currently using FindContours and DrawContours function in order to segment an image.
I only extract external contours, and want to save only the contour which contains a given point.
I use h_next to move through the cv_seq structure and test whether the point is contained using PointPolygonTest.
I actually can find the contour that interests me, but my problem is to extract it.
Here is the Python code:
def contour_from_point(contours, point, in_img):
    """
    Extract the contour from a sequence of contours which contains the point.
    For this to work in the eagle road case, the sequence has to be produced by the
    FindContours function with CV_RETR_EXTERNAL.
    """
    if contours:
        # We got at least one contour. Search for the one which contains point.
        contour = contours  # first contour of the list
        distance = cv.PointPolygonTest(contour, point, 0)
        while distance < 0:  # 0 means on the edge of the contour
            contour = contour.h_next()
            if contour:  # avoid running past the end of the contours
                distance = cv.PointPolygonTest(contour, point, 0)
            else:
                contour = None
                break  # no contour contains the point
    else:
        contour = None
    return contour
In the end, I get contour. But this structure still contains all the contours that have not been tested yet.
How can I keep only the first contour of my output sequence?
Thanks in advance!
There is finally a way to get only one contour: just pass it to another function that takes a CvSeq as input, ConvexHull for example. The output will be based only on the first contour of the sequence.
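For completeness, with the newer cv2 interface the problem does not arise, because findContours returns a plain Python list and the contour you pick carries no links to the others. This is an alternative sketch, not the fix described above:

import cv2

def contour_from_point(contours, point):
    # point is an (x, y) tuple; pointPolygonTest returns >= 0 when the point is inside or on the contour
    for contour in contours:
        if cv2.pointPolygonTest(contour, point, False) >= 0:
            return contour
    return None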