How to extract raw coordinates from OpenCV contours in Python

I am trying to use contours without using the standard OpenCV contour functions.
At the moment I am trying to take out the first "line" and the last "line" in each contour, and I have got a bit stuck on how to read the NumPy array correctly. After a lot of messing about, this is the current state of the code, which doesn't work. Can anyone provide an example of how I should be doing this?
contours,hierarchy = cv2.findContours(mask, 1, 2)
for cnt in contours:
    #draw first line
    img = cv2.line(img,(cnt[0][0],cnt[0][1]),(cnt[1][0], cnt[1][1]),(255,0,0),2)
    #draw last line
    img = cv2.line(img,(cnt[cnt.size-1][0],cnt[cnt.size-1][1]),(cnt[cnt.size-1][0], cnt[cnt.size-1][1]),(255,0,0),2)
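For what it's worth, each contour returned by findContours is a NumPy array of shape (N, 1, 2), so every point needs an extra [0] index, and negative indices are the simplest way to reach the last points. A minimal sketch of the intended first/last-segment drawing, assuming the same mask and img variables as above (the 2-value unpacking matches the question's OpenCV version):
contours, hierarchy = cv2.findContours(mask, 1, 2)
for cnt in contours:
    # cnt has shape (N, 1, 2); cnt[i][0] is the i-th (x, y) point
    first_pt = tuple(cnt[0][0])
    second_pt = tuple(cnt[1][0])
    last_pt = tuple(cnt[-1][0])
    second_last_pt = tuple(cnt[-2][0])
    # first "line" of the contour
    img = cv2.line(img, first_pt, second_pt, (255, 0, 0), 2)
    # last "line" of the contour
    img = cv2.line(img, second_last_pt, last_pt, (255, 0, 0), 2)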

Related

findContours result causes ValueError: not enough values to unpack (expected 3, got 2) [duplicate]

I'm trying to use cv2.findContours() on OpenCV version 4.4.0 (I'm using Python version 3.8.5), but it throws an error I can't figure out. I'm not sure what's wrong with the code. Here's some background:
According to OpenCV, the syntax for cv2.findContours() is as follows:
Python:
contours, hierarchy = cv.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
I looked for some examples to make sure how to properly implement it; here's what I found:
example 1
(_, contours, _) = cv2.findContours(binary_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
example 2
(_, cnts, _) = cv2.findContours(thresholded.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Those are from working projects I found online; there are plenty of examples like those. So I'm trying to implement some code I got from a video to gain some understanding of the topic, but it does not seem to work for me and I can't find out why. Here's the code:
import cv2
import numpy as np
imagen = cv2.imread('lettuce.jpg')
gray = cv2.cvtColor(imagen,cv2.COLOR_BGR2GRAY)
_,binary = cv2.threshold(gray,100,255,cv2.THRESH_BINARY)
image,contours,hierarchy = cv2.findContours(binary,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image,contours, -1, (0,255,0),3)
Error:
Traceback (most recent call last):
line 8, in <module>
image,contours,hierarchy = cv2.findContours(binary,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
ValueError: not enough values to unpack (expected 3, got 2)
In Python/OpenCV 4.4.0, findContours returns only 2 values; you list 3.
You show:
image,contours,hierarchy = cv2.findContours(binary,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
OpenCV 4.4.0 lists:
contours, hierarchy = cv.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
Please always check the documentation. See
https://docs.opencv.org/4.4.0/d3/dc0/group__imgproc__shape.html#gadf1ad6a0b82947fa1fe3c3d497f260e0
One way to handle this in a version-independent manner, if all you want are the contours, is (credit to @nathancy):
contours = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
If you do not want all nested contours, then use RETR_EXTERNAL and not RETR_LIST.
The problem is that the cv2.findContours function returns two things, not three:
contours
hierarchy
The cv2.findContours function takes three input arguments. The first argument is the source image, which should be a single-channel (typically binary) image. The second is the retrieval mode and the third is the approximation mode. In older OpenCV versions (before 3.2), findContours modified the source image, so the best practice was to pass in a copy of the image.
OpenCV stores the contours as a list of arrays. Each array represents a different contour, and within that array are all the coordinates (points) of that contour.
We can store these coordinates differently. How? This is where the approximation mode comes into play.
Using cv2.CHAIN_APPROX_NONE stores all the boundary points. But we don't necessarily need all of them: if the points form a straight line, we only need the start and end points of that line. Using cv2.CHAIN_APPROX_SIMPLE instead provides only these start and end points, resulting in much more efficient storage of contour information.
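To see the difference concretely, here is a rough sketch that just counts how many points each mode keeps (binary is assumed to be the thresholded image from the snippet above; the 2-value unpacking matches OpenCV 4.x):
cnts_none, _ = cv2.findContours(binary.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
cnts_simple, _ = cv2.findContours(binary.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# CHAIN_APPROX_SIMPLE keeps far fewer points along straight edges
print('CHAIN_APPROX_NONE points:  ', sum(len(c) for c in cnts_none))
print('CHAIN_APPROX_SIMPLE points:', sum(len(c) for c in cnts_simple))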
What is the retrieval mode? The retrieval mode essentially defines the contour hierarchy that is returned: do you want sub-contours, only external contours, or all contours?
There are four types in retrieval mode in OpenCV.
cv2.RETR_LIST → Retrieve all contours
cv2.RETR_EXTERNAL → Retrieves external or outer contours only
cv2.RETR_CCOMP → Retrieves all contours in a 2-level hierarchy
cv2.RETR_TREE → Retrieves all contours in the full hierarchy
The hierarchy is stored in the following format: [next, previous, first child, parent].
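As a rough illustration of that layout (assuming hierarchy comes from a cv2.findContours call with cv2.RETR_TREE), a contour whose parent index is -1 is a top-level, outer contour:
# hierarchy has shape (1, N, 4): one [next, previous, first child, parent] row per contour
for idx, (nxt, prev, child, parent) in enumerate(hierarchy[0]):
    if parent == -1:
        print('contour %d is a top-level (outer) contour' % idx)
    else:
        print('contour %d is nested inside contour %d' % (idx, parent))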
Here is another way to store all the tuples returned from cv2.findContours() irrespective of the OpenCV version installed in your system/environment:
First, get the version of OpenCV installed (we don't want the entire version string, just the major number, either 3 or 4):
import cv2
major_number = cv2.__version__[0]
Based on the version, either of the following two statements will be executed and the corresponding variables will be populated:
if major_number == '4':
    contours, hierarchy = cv2.findContours(img_binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
elif major_number == '3':
    img, contours, hierarchy = cv2.findContours(img_binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
In either scenario, the contours returned from the function will be stored in contours.

Finding the boundary points of a contour in OpenCV/NumPy

Complete noob at OpenCV and NumPy here. Here is the image, and here is my code:
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
imgray = cv2.medianBlur(imgray, ksize=7)
ret, thresh = cv2.threshold(imgray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print ("number of countours detected before filtering %d -> "%len(contours))
new = np.zeros(imgray.shape)
new = cv2.drawContours(im,contours,len(contours)-1,(0,0,255),18)
cv2.namedWindow('Display',cv2.WINDOW_NORMAL)
cv2.imshow('Display',new)
cv2.waitKey()
mask = np.zeros(imgray.shape,np.uint8)
cv2.drawContours(mask,[contours[len(contours)-1]],0,255,-1)
pixelpoints = cv2.findNonZero(mask)
cv2.imwrite("masked_image.jpg",mask)
print(len(pixelpoints))
print("type of pixelpoints is %s" %type(pixelpoints))
The length of pixelpoints is nearly 2 million, since it contains all the points covered by the contour. But I only require the bordering points of that contour. How do I do it? I have tried several methods from the OpenCV documentation but always get errors with tuples and sorting operations. Please... help?
I only require the border points of the contour :(
Is this what you mean by border points of a contour?
The white lines you see are points that I have marked out in white against the blue drawn contours. There's a little spot at the bottom right because I think it's most likely that your black background isn't really black, and so when I did thresholding and a flood fill to get this,
there was a tiny white speck at the same spot. But if you play around with the parameters and do more proper thresholding and flood filling, it shouldn't be an issue.
In OpenCV's drawContours function, cnts contains the list of contours, and each contour is an array of points. Each point is itself of type numpy.ndarray. If you want to place all the points of every contour in one place, so that you end up with a single set of boundary points (like the white dot outline in the image above), you might want to append them all into a list. You can try this:
# rgb is bgr instead
contoured = cv2.drawContours(black, cnts, -1, (255,0,0), 3)
# list of ALL points of ALL contours
all_pixels = []
for i in range(0, len(cnts)):
    for j in range(0, len(cnts[i])):
        all_pixels.append(cnts[i][j])
When I tried to
print(len(all_pixels))
it returned 2139 points.
Do this if you want to mark out the points for visualization purposes (e.g. like my white points):
#contouredC is a copy of the contoured image above
contouredC[x_val, y_val] = [255, 255, 255]
If you want fewer points, just use a step function when iterating through to draw the white points out. Something like this:
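A rough sketch of that idea (step is a made-up sampling interval; all_pixels and contouredC are the list and image copy from the snippets above):
step = 10
for k in range(0, len(all_pixels), step):
    x_val, y_val = all_pixels[k][0]              # each entry is a 1x2 array of (x, y)
    contouredC[y_val, x_val] = [255, 255, 255]   # NumPy indexing is [row, col], i.e. [y, x]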
In Python, for loops are slow, so I think there are better ways of replacing the nested for loops with something like an np.where() call instead. Will update this if/when I figure it out. Also, this needs better thresholding and binarization techniques. Flood-fill technique referenced from: Python 2.7: Area opening and closing binary image in Python not so accurate.
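One possible vectorized alternative (not from the original answer): np.vstack can flatten every contour's (N, 1, 2) array into a single (M, 2) array of points, which avoids the nested Python loops entirely, assuming the same cnts list:
import numpy as np
all_pixels = np.vstack(cnts).reshape(-1, 2)   # all boundary points of all contours as (x, y) rows
print(len(all_pixels))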
Hope it helps.

OpenCV finding contours

We are part of a FIRST robotics team that is using OpenCV for vision detection. Other teams have posted working detection software, which can be found on team2053tigertronics's GitHub (/2016Code/blob/master/Robot2016/src/vision/vision.cpp), and we are attempting to convert their code into Python, using it as sample code that we can adjust later. While converting, we ran into a weird issue.
For debugging purposes we are using a print statement to print the contours, so that we can see why we got the error when we tried to pass the values into the boundingRect method.
Here is our code so far:http://pastebin.com/7zh4c7Ej
Here is our output: http://pastebin.com/5fRQhC28
The error we are getting:
Traceback:
in line 146, processImage()
in line 98: rec = cv2.boundingRect(i[x])
index 256 is out of bounds for axis 0 with size 15
Our output is a list of different NumPy arrays that hold integer values. We are confused about how to use these values to draw rectangles and eventually use them as coordinates during the robotics game.
We appreciate any help!!!
Thanks!
Axton (and Team)
EDIT:
As asked for by other members, here is a simpler question:
Here is our code that we are having problems with:
filler, contours, heirarchy = cv2.findContours(matThresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i in contours:
    for x in i:
        rec = Rect()
        print i[x]
        rec = cv2.boundingRect(i[x])
We would like to know how to use the contour values as points to use the boundingRect method.
Thanks again!
It is indeed as easy as @Miki said above.
Since contours is a list of contours, you only need one loop to access each single contour and get its bounding rect:
for contour in contours:
    rec = cv2.boundingRect(contour)
    print rec

## now, just for the fun, let's look
## at a *single* contour, it consists of points:
for point in contour:
    print point
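If the goal is to actually draw those rectangles (as the question mentions), here is a minimal sketch on top of the same loop; matDraw is just a placeholder name for whichever image you want to draw on:
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    # draw the bounding rectangle; (x, y) is the top-left corner
    cv2.rectangle(matDraw, (x, y), (x + w, y + h), (0, 255, 0), 2)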

Shape matching in an Autodesk DXF

I have a drawing (in DXF format) containing 9 different shapes arranged in a random pattern. I need to find the center point of each shape and derive its x,y coordinate so that I can append it to a list for machining purposes.
The problem is that I'm using AutoCAD, which saves each shape as a series of vertices even if I first convert them to distinct joined polylines. In other words, opening the drawing in a text editor just gives me a standard vertex list from which it's impossible to say where one shape ends and the next begins.
So far the only solutions I've had any success with seem awfully Rube Goldbergian. As an example, I can export the DXF to a BMP and then use Python and OpenCV to identify each shape based on the number of contours it contains:
import sys
import numpy as np
import cv2
im = cv2.imread('drawing.bmp')
im3 = im.copy()
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)
contours0, hierarchy = cv2.findContours( thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = [cv2.approxPolyDP(cnt, 4, True) for cnt in contours0]
samples = np.empty((0,100))
responses = []
keys = [i for i in range(30,90)]
for cnt in contours:
    tot = cv2.contourArea(cnt)
    [x,y,w,h] = cv2.boundingRect(cnt)
    if tot in range(1200,1250):
        cv2.putText(im,"shape 3",(x+(w/2),y+(h/2)),0,1,(0,255,0))
cv2.imshow('norm',im)
key = cv2.waitKey(0)
I could then take the output, scale it as necessary, and list the x,y coordinates. This is, however, incredibly time-consuming and may ultimately lose too much precision to be usable (pixels aren't floats).
There has to be some way of finding these shapes just by reading the DXF; otherwise AutoCAD couldn't render them and I would just have a point cloud.
So how exactly does it know, and what can I tell Python to look for to identify a distinct shape when reading a DXF as a text file?
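For the OpenCV route shown above, one way to derive an x,y center for each matched contour is via image moments; a minimal sketch, reusing the contours variable from the snippet above (the results are in pixel coordinates, so they would still need scaling back to drawing units):
centres = []
for cnt in contours:
    m = cv2.moments(cnt)
    if m['m00'] == 0:            # skip degenerate contours with zero area
        continue
    cx = m['m10'] / m['m00']     # centroid x in pixel coordinates
    cy = m['m01'] / m['m00']     # centroid y in pixel coordinates
    centres.append((cx, cy))
print(centres)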

OpenCV get centers of multiple objects

I'm trying to build a simple image-analyzing tool that will find items that fit in a color range and then find the centers of said objects.
As an example, after masking, I'm analyzing an image like this:
What I'm doing so far code-wise is rather simple:
import cv2
import numpy
bound = 30
inc = numpy.array([225,225,225])
lower = inc - bound
upper = inc + bound
img = cv2.imread("input.tiff")
cv2.imshow("Original", img)
mask = cv2.inRange(img, lower, upper)
cv2.imshow("Range", mask)
contours = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
print contours
This, however, gives me a countless number of contours. I'm somewhat at a loss while reading the corresponding manpage. Can I use moments to reasonably analyze the contours? Are contours even the right tool for this?
I found this question, that vaguely covers finding the center of one object, but how would I modify this approach when there are multiple items?
How do I find the centers of the objects in the image? For example, in the above sample image I'm looking to find three points (the centers of the rectangle and the two circles).
Try print len(contours). That will give you something close to the expected answer. The output you see is the full representation of the contours, which could be thousands of points.
Try this code:
import cv2
import numpy
img = cv2.imread('inp.png', 0)
_, contours, _ = cv2.findContours(img.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_L1)
print len(contours)
centres = []
for i in range(len(contours)):
moments = cv2.moments(contours[i])
centres.append((int(moments['m10']/moments['m00']), int(moments['m01']/moments['m00'])))
cv2.circle(img, centres[-1], 3, (0, 0, 0), -1)
print centres
cv2.imshow('image', img)
cv2.imwrite('output.png',img)
cv2.waitKey(0)
This gives me 4 centres:
[(474, 411), (96, 345), (58, 214), (396, 145)]
The obvious thing to do here is to also check the area of the contours, and if it is too small as a percentage of the image, don't count it as a real contour; it will just be noise. Just add something like this to the top of the for loop:
if cv2.contourArea(contours[i]) < 100:
    continue
For the return values from findContours, I'm not sure what the first value is for, as it is not present in the C++ version of OpenCV (which is what I use). The second value is obviously just the contours (an array of arrays), and the third value is a hierarchy holding information on the nesting of contours, which can be very handy.
You can use the OpenCV minEnclosingCircle() function on your contours to get the center of each object.
Check out this example, which is in C++, but you can adapt the logic: Example
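A minimal Python sketch of that idea, assuming contours came from a cv2.findContours call on the mask above:
centres = []
for cnt in contours:
    # the circle's center is a reasonable estimate of the object's center
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    centres.append((int(x), int(y)))
print(centres)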
