I have been working with an image of bacteria and want to get the number of bacteria from the image, and I also need to classify the bacteria by shape and size.
I am using OpenCV in Python. Right now I use the contour method:
contours, hierarchy = cv2.findContours(dst, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # 1 == RETR_LIST, 2 == CHAIN_APPROX_SIMPLE
cnt = contours[0]
l = len(contours)
print(l)
li = list(contours)
print(li)
This gives an output of l = 115 and li = some array values.
What does this mean?
Please help me find the answer. E. coli image below:
findContours connects continuous points and puts each connected group into an array. So each element in this array probably corresponds to a different bacterium (or to a false detection, due to a connected colour group that is a shadow, etc.).
When you call len(contours), you get the number of elements in this array, and therefore a rough estimate of the number of bacteria.
In your case there are 115 bacteria, or regions that differ from their surroundings and may or may not be bacteria. When you make a list out of them and print it, you get the point arrays of each element in the list, i.e. of each "connected point group" or each "object that is possibly a bacterium". It's all pretty straightforward really.
If you realize that you have many false detections, here is what you do:
A group of bacteria appearing as one:
Threshold the image (convert it to black and white) and use the erode function first, then the dilate function, to break the connections between them. Then run findContours once more.
Stains detected as bacteria:
Make your thresholding cover only the bacteria's colour range, so everything else is ignored.
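A minimal sketch of that clean-up, assuming the file name, threshold mode, kernel size and iteration counts are placeholders to tune for your own image (note that OpenCV 3.x would return three values from findContours):
import cv2

img = cv2.imread("bacteria.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
# invert the threshold (THRESH_BINARY_INV) if the bacteria are darker than the background
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
bw = cv2.erode(bw, kernel, iterations=2)   # break thin bridges between touching blobs
bw = cv2.dilate(bw, kernel, iterations=2)  # grow the blobs back to roughly their size

contours, hierarchy = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))  # rough bacteria count after the clean-up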
See sources below, they might help:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
http://docs.opencv.org/trunk/d4/d73/tutorial_py_contours_begin.html#gsc.tab=0
cv2.findContours returns a list of contours, where each contour is a numpy array of points (shape (N, 1, 2), holding the x, y coordinates). len(foo) is the length of the list foo. So in your case it found 115 contours, and your li is just a copy of the contours list.
You can easily examine the contours by using the drawContours function.
# draws contours in white color, outlines only (not filled)
cv2.drawContours(dst, contours, -1, (255,))
cv2.imshow("result", dst)
cv2.waitKey(-1)
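To see what each of those arrays actually holds, you can inspect a single contour; a quick sketch, assuming contours comes from the call in your question:
cnt = contours[0]
print(cnt.shape)                 # e.g. (42, 1, 2): N points, each an (x, y) pair
print(cnt[0])                    # the first point, e.g. [[17  5]]
print(cv2.contourArea(cnt))      # area enclosed by the contour
print(cv2.arcLength(cnt, True))  # perimeter (True = treat the contour as closed)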
Complete beginner at OpenCV and numpy here. Here is the image; here is my code:
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
imgray = cv2.medianBlur(imgray, ksize=7)
ret, thresh = cv2.threshold(imgray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print ("number of countours detected before filtering %d -> "%len(contours))
new = np.zeros(imgray.shape)
new = cv2.drawContours(im,contours,len(contours)-1,(0,0,255),18)
cv2.namedWindow('Display',cv2.WINDOW_NORMAL)
cv2.imshow('Display',new)
cv2.waitKey()
mask = np.zeros(imgray.shape,np.uint8)
cv2.drawContours(mask,[contours[len(contours)-1]],0,255,-1)
pixelpoints = cv2.findNonZero(mask)
cv2.imwrite("masked_image.jpg",mask)
print(len(pixelpoints))
print("type of pixelpoints is %s" %type(pixelpoints))
The length of pixelpoints is nearly 2 million, since it contains all the points covered by the contour. But I only require the border points of that contour. How do I do it? I have tried several methods from the OpenCV documentation, but I always get errors with tuples and sorting operations. Please help.
I only require the border points of the contour :(
Is this what you mean by border points of a contour?
The white lines you see are points that I have marked out in white against the blue drawn contours. There is a little spot at the bottom right because your black background probably isn't really black, so when I did thresholding and a flood fill to get this,
there was a tiny white speck at the same spot. But if you play around with the parameters and do more careful thresholding and flood filling, it shouldn't be an issue.
In OpenCV's drawContours function, cnts contains a list of contours, and each contour is an array of points. Each point is also of type numpy.ndarray. If you want to place all the points of every contour in one place, so that you end up with a single set of boundary points (like the white dot outline in the image above), you can append them all into one list. You can try this:
# note: OpenCV uses BGR order, so (255, 0, 0) draws blue, not red
contoured = cv2.drawContours(black, cnts, -1, (255, 0, 0), 3)

# list of ALL points of ALL contours
all_pixels = []
for i in range(0, len(cnts)):
    for j in range(0, len(cnts[i])):
        all_pixels.append(cnts[i][j])
When I tried to
print(len(all_pixels))
it returned 2139 points.
Do this if you want to mark the points out for visualization purposes (e.g. like my white points), remembering that contour points are stored as (x, y) while numpy images are indexed [row, col]:
# contouredC is a copy of the contoured image above
contouredC[y_val, x_val] = [255, 255, 255]
If you want fewer points, just use a step when iterating through the points to draw them out. Something like this:
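A minimal sketch of that stepping idea, assuming all_pixels and contouredC from the snippets above:
step = 10  # increase to draw fewer points
for i in range(0, len(all_pixels), step):
    x_val, y_val = all_pixels[i][0]             # each entry has shape (1, 2): an (x, y) pair
    contouredC[y_val, x_val] = [255, 255, 255]  # mark the point in white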
In Python, for loops are slow, so there are probably better ways to replace the nested for loops with something vectorised like np.where(); I will update this if/when I figure it out. Also, this needs better thresholding and binarization techniques. The flood-fill technique is referenced from: Python 2.7: Area opening and closing binary image in Python not so accurate.
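As a rough sketch of that kind of vectorised replacement (my assumption, not part of the original answer), the nested loops above collapse into a single numpy call, since every contour is already an array of shape (N, 1, 2):
import numpy as np

all_pixels = np.vstack(cnts)  # shape (total_points, 1, 2); iterating yields the same (1, 2) arrays
print(len(all_pixels))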
Hope it helps.
I am working with images of definitions from a dictionary, such as this one:
I want to get rid of the small elements from neighboring entries (top and bottom) if they touch the upper or lower boundary of the image and extend no further than 20 pixels from it (so as not to include any actual letters touching the top or bottom), as this image indicates (in red):
The way I tried to do it was:
1. Load an image in grayscale
2. Get contours of the image using cv2.findContours
3. Find contours that begin at x = 0 but end no further than x = 20
4. Find contours that begin at height-1 and end at height-21
5. Paint these contours in white
The problem is that cv2.findContours returns a list of arrays of arrays of coordinate pairs. While I was able to delete certain pairs of coordinates, I have difficulty applying that here.
I tried a number of approaches and am currently stuck with this:
import cv2
import os

def def_trimmer(img):
    height, width = img.shape
    img_rev = cv2.bitwise_not(img)
    _, contours, _ = cv2.findContours(img_rev, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # contours = np.concatenate(contours, axis=0)
    # contours = contours[((contours < [20-1, width]) | (contours > [height-20-1, -1])).all(axis=(1, 2))]
    for outer in contours:
        oldlen = len(outer)
        outer = outer[((outer < [20-1, width]) | (outer > [height-20-1, -1])).all(axis=(1, 2))]
        newlen = len(outer)
        print((oldlen, newlen))
    cv2.drawContours(img, contours, -1, (255, 255, 255), -1)
    return img

img = cv2.imread("img.png", cv2.IMREAD_GRAYSCALE)  # read as grayscale so img.shape unpacks to (height, width)
img_out = def_trimmer(img)
cv2.imshow("out", img_out)
cv2.waitKey(0)
I think it is not necessary to use findContours here.
What I would do in your case is simply iterate over the pixels on the border of the image and remove the components that touch the border, using a region-growing algorithm. In more detail:
Iterate over the border pixels until you find a black one.
Initialize a list to store pixel coordinates.
Use recursion on neighbouring black pixels to remove them and store their coordinates in the list. If the recursion goes further than 20 pixels away from the border of the image, stop removing pixels and restore the ones you erased, using the coordinates stored in the list.
Repeat from the beginning until no more border components are left. (A rough sketch of this idea follows.)
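A sketch of this idea, using cv2.connectedComponentsWithStats instead of explicit recursion (the effect is the same: components that touch the top or bottom edge and stay within 20 pixels of it are erased); the function and file names are placeholders:
import cv2

def remove_border_specks(img, margin=20):
    # binarize so the dark text/specks become white components
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bw, connectivity=8)
    height = img.shape[0]
    for i in range(1, n):  # label 0 is the background
        top = stats[i, cv2.CC_STAT_TOP]
        bottom = top + stats[i, cv2.CC_STAT_HEIGHT]
        if (top == 0 and bottom <= margin) or (bottom == height and top >= height - margin):
            img[labels == i] = 255  # paint the whole component white
    return img

img = cv2.imread("img.png", cv2.IMREAD_GRAYSCALE)
cv2.imshow("trimmed", remove_border_specks(img))
cv2.waitKey(0)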
Is there an easy and direct way to extract the internal contours (holes) from an image using OpenCV 3.1 in Python?
I know that I can use the area as a condition. However, if I change the image resolution, the areas are not the same.
For instance, with this image:
How can I extract the internal holes?
_, contours, hierarchy = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
max_area = np.max(areas)
Mask = np.ones(img.shape[:2], dtype="uint8") * 255

# I can do something like this (currently not working, just to show an example)
for c in contours:
    if (cv2.contourArea(c) > 8) and (cv2.contourArea(c) < 100000):
        cv2.drawContours(Mask, [c], -1, 0, 1)
As I explained in my comment, you have to check the hierarchy return value. After findContours you get the contours (a list of lists of points) and the hierarchy (a list of lists).
The documentation is very clear on this:
hierarchy – Optional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the elements hierarchy[i][0], hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.
So this means that for each contours[i] you get a hierarchy[i] containing a list with 4 values:
hierarchy[i][0]: the index of the next contour of the same level
hierarchy[i][1]: the index of the previous contour of the same level
hierarchy[i][2]: the index of the first child
hierarchy[i][3]: the index of the parent
So, in your case there should be one contour without a parent (the outer boundary), which you can identify by checking whether hierarchy[i][3] is negative; the holes are the contours that do have a parent.
It should be something like this (untested code; note that cv2.findContours actually returns the hierarchy with shape (1, N, 4), hence the extra [0] below):
holes = [contours[i] for i in range(len(contours)) if hierarchy[0][i][3] >= 0]
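A fuller sketch along those lines, assuming OpenCV 3.1 (which returns three values from findContours) and a placeholder file name:
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

_, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

mask = np.ones(thresh.shape, dtype="uint8") * 255
for i, c in enumerate(contours):
    if hierarchy[0][i][3] >= 0:                 # has a parent, so it is a hole
        cv2.drawContours(mask, [c], -1, 0, -1)  # fill the hole in black on the mask
cv2.imwrite("holes.png", mask)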
UPDATE:
To summarize what we discussed in the chat:
The image was too big and the contours had small holes; solved with dilate and erode using a kernel of size 75.
The image needed to be inverted, since OpenCV expects a black background for dilate.
The algorithm was giving 2 big contours, one outside (as expected) and one inside (almost the same as the outside one), probably because the image has some external (and closed) bumps. This was solved by removing the contour without a parent, together with its first child.
When we want to find the contours of a given image according to a certain threshold, we use the function cv2.findContours() which returns among other things, the list of contours (a Pythonic list of arrays representing contours of the picture).
Here is how the function is used:
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
My question:
What is the order that OpenCV uses to order the contours?
I mean, by which criterion does it decide that this contour is in position 0, another in position 1, and so on?
I need this information because I want to know which contour I am dealing with in my program and why it is given this or that position.
Hope you won't close my question, I am just a beginner.
I do not fully understand the question, but from my understanding the contour positions are based on the x and y pixel coordinates in the image, which is what the returned list of point arrays contains. So the criterion is their position in the image. (Please consider this a comment.) If a specific order matters for your program, it is safer to sort the contours explicitly, as in the sketch below.
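A small sketch of that workaround (my addition, not part of the original answer), with a placeholder file name:
import cv2

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# impose an explicit order instead of relying on the internal traversal order:
# left to right by bounding-box x coordinate ...
contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[0])
# ... or largest area first
# contours = sorted(contours, key=cv2.contourArea, reverse=True)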
I am currently using the FindContours and DrawContours functions in order to segment an image.
I only extract external contours, and I want to save only the contour which contains a given point.
I use h_next to move through the CvSeq structure and test whether the point is contained using PointPolygonTest.
I can actually find the contour that interests me, but my problem is extracting it.
Here is the Python code:
def contour_from_point(contours, point, in_img):
    """
    Extract the contour, from a sequence of contours, which contains the point.
    For this to work in the eagle road case, the sequence has to be produced by the
    FindContours function with CV_RETR_EXTERNAL.
    """
    if contours:
        # We got at least one contour. Search for the one which contains point.
        contour = contours  # the sequence itself acts as its first contour
        distance = cv.PointPolygonTest(contour, point, 0)
        while distance < 0:  # negative = outside, 0 = on the edge, positive = inside
            contour = contour.h_next()
            if contour:  # avoid running past the end of the sequence
                distance = cv.PointPolygonTest(contour, point, 0)
            else:
                contour = None
                break
    else:
        contour = None
    return contour
At the end, I get contour. But this structure still contains all the contours that have not been tested yet.
What can I do to keep only the first contour of my output sequence?
Thanks in advance!
There is finally a way to get only one contour: just use another function that takes a CvSeq as input, ConvexHull for example. The output will then be only the first contour of the sequence.
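For what it is worth, with the newer cv2 API this problem goes away, because findContours returns a plain Python list and the matching contour can simply be returned on its own. A minimal sketch (the helper name is hypothetical):
import cv2

def contour_containing_point(contours, point):
    # point is an (x, y) tuple; >= 0 means inside or on the edge of the contour
    for contour in contours:
        if cv2.pointPolygonTest(contour, point, False) >= 0:
            return contour
    return None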