cv2.contourArea doesn't work anymore in OpenCV 4.0 - Python

I am trying to extract a hand out of an image. I am using OpenCV 4.0 and Python 3.6.
cnts = cv2.findContours(thresholded.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# return None, if no contours detected
if len(cnts) == 0:
    return
else:
    # based on contour area, get the maximum contour which is the hand
    segmented = max(cnts, key=cv2.contourArea)
And it gives me this error:
Traceback (most recent call last):
File "test.py", line 85, in <module>
hand = segment(gray)
File "test.py", line 37, in segment
segmented = max(cnts, key=cv2.contourArea)
TypeError: Expected cv::UMat for argument 'contour'
Since this worked about half a year ago, I assume the error comes from some kind of module change. How can it be fixed?

You didn't pay attention to the outputs of cv2.findContours. For any OpenCV version >= 4.0, it returns two values:
contours, hierarchy = cv2.findContours(...)
I made up an example:
import cv2
import numpy as np

# Set up dummy image
image = np.zeros((400, 400), np.uint8)
cv2.circle(image, (150, 250), 100, 255, cv2.FILLED)

# Find contours: OpenCV 4.x
cnts, _ = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if len(cnts) == 0:
    # return None, if no contours detected
    segmented = None
else:
    # based on contour area, get the maximum contour which is the hand
    segmented = max(cnts, key=cv2.contourArea)
    print('Number of contour points:', segmented.shape[0])

cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The example image looks like this:
The output:
Number of contour points: 292
Hope that helps!
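If the snippet also needs to run on OpenCV 3.x (which returns three values), here is one version-agnostic sketch, relying on the fact that the contours are always the second-to-last element of whatever findContours returns:
res = cv2.findContours(thresholded.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = res[-2]  # second-to-last element is the contour list in both 3.x and 4.x
segmented = max(cnts, key=cv2.contourArea) if len(cnts) > 0 else None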

Related

Draw grid on a gridless (fully or partially) table image

I have an image that contains a table; the table and the image can both come in many sizes. The table can be fully gridded (with only some blank spots that need to be filled), it can have only vertical grid lines, or it can have only horizontal grid lines.
I've searched the web for a long time and found no solution that worked for me.
I found the following questions that seem to be suitable for me:
Python & OpenCV: How to add lines to the gridless table
Draw a line on a gridless image Python Opencv
How to repair incomplete grid cells and fix missing sections in image
Python & OpenCV: How to crop half-formed bounding boxes
My code is taken from the answers to the above questions, and the "best" result I got from those codes is that it drew 2 lines, one at the rightmost part and one at the leftmost part.
I'm fairly new to OpenCV and the image-processing field, so I am not sure how to adapt the code from the above questions to my needs, or how to accomplish this exactly. I would appreciate any help you can provide.
Example of an image table:
Update:
To remove the horizontal lines I use exactly the code you can find in here, but the result I get on the example image is this:
As you can see, it removed most of them but not all of them. Then, when I try to apply the same for the vertical ones (I tried the same code with rotation, or with the kernel flipped), it does not work at all...
I also tried this code, but it didn't work at all either.
Update 2:
I was able to remove the lines using this code:
def removeLines(result, axis) -> np.ndarray:
    img = result.copy()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    if axis == "horizontal":
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 25))
    elif axis == "vertical":
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1))
    else:
        raise ValueError("Axis must be either 'horizontal' or 'vertical'")
    detected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
    cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    result = img.copy()
    for c in cnts:
        cv2.drawContours(result, [c], -1, (255, 255, 255), 2)
    return result

gridless = removeLines(removeLines(cv2.imread(image_path), 'horizontal'), 'vertical')
Result:
Problem:
After I remove lines, when I try to draw the vertical lines using this code:
# read image
img = old_image.copy()  # cv2.imread(image_path1)
hh, ww = img.shape[:2]

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# average gray image to a single row
column = cv2.resize(gray, (ww, 1), interpolation=cv2.INTER_AREA)

# threshold on white
thresh = cv2.threshold(column, 248, 255, cv2.THRESH_BINARY)[1]

# get contours
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

# Draw vertical
for cntr in contours:
    x, y, w, h = cv2.boundingRect(cntr)
    xcenter = x + w // 2
    cv2.line(original_image, (xcenter, 0), (xcenter, hh - 1), (0, 0, 0), 1)
I get this result:
Update 3:
When I try even thresh = cv2.threshold(column, 254, 255, cv2.THRESH_BINARY)[1] (I tried lowering it one by one down to 245, for both the threshold value and the max value; each time I get a different or similar result, but always too many lines or too few lines), I get the following:
Input:
Output:
It's putting too many lines instead of just 1 line in each column
Code:
# read image
img = old_image.copy()  # cv2.imread(image_path1)
hh, ww = img.shape[:2]

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# average gray image to a single row
column = cv2.resize(gray, (ww, 1), interpolation=cv2.INTER_AREA)

# threshold on white
thresh = cv2.threshold(column, 254, 255, cv2.THRESH_BINARY)[1]

# get contours
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

for cntr in contours:
    x, y, w, h = cv2.boundingRect(cntr)
    xcenter = x + w // 2
    cv2.line(original_image, (xcenter, 0), (xcenter, hh - 1), (0, 0, 0), 1)
Here is one way to get the lines in Python/OpenCV. Average the image down to 1 column. Then threshold and get the contours. Then get the bounding boxes and find the vertical centers. Draw lines at those places.
If you do not want the extra lines, crop your image first to get the inside of the table (a short cropping sketch follows the result below).
Input:
import cv2
import numpy as np

# read image
img = cv2.imread("table4.png")
hh, ww = img.shape[:2]

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# average gray image to one column
column = cv2.resize(gray, (1, hh), interpolation=cv2.INTER_AREA)

# threshold on white
thresh = cv2.threshold(column, 248, 255, cv2.THRESH_BINARY)[1]

# get contours
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

# loop over contours, get bounding boxes and ycenter, and draw a horizontal line at each ycenter
result = img.copy()
for cntr in contours:
    x, y, w, h = cv2.boundingRect(cntr)
    ycenter = y + h // 2
    cv2.line(result, (0, ycenter), (ww - 1, ycenter), (0, 0, 255), 1)

# write results
cv2.imwrite("table4_lines3.png", result)

# display results
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Result:
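For the cropping suggestion above, a minimal sketch, assuming the table's outer border is the largest external contour in the thresholded image:
import cv2

# read and threshold the input
img = cv2.imread("table4.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# assume the table's outer border is the largest external contour and crop to its bounding box
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
inside = img[y:y + h, x:x + w]  # table interior, to be fed into the averaging step above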
You wrote that you tried to remove the lines using the code, but it did not work.
It works fine for me in Python/OpenCV.
Read the input
Convert to grayscale
Threshold to show the horizontal lines
Apply morphology open with a horizontal kernel to isolate the horizontal lines
Get their contours
Draw the contours on a copy of the input as white to cover over the black horizontal lines
Save the results
Input:
import cv2
import numpy as np

# read the input
img = cv2.imread('table4.png')

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# do morphology to detect lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 1))
detected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)

# get contours
cnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

# draw contours as white on copy of input
result = img.copy()
for c in cnts:
    cv2.drawContours(result, [c], -1, (255, 255, 255), 2)

# save results
cv2.imwrite('table4_horizontal_lines_threshold.png', thresh)
cv2.imwrite('table4_horizontal_lines_detected.png', detected_lines)
cv2.imwrite('table4_horizontal_lines_removed.png', result)

# show results
cv2.imshow('thresh', thresh)
cv2.imshow('morphology', detected_lines)
cv2.imshow('result', result)
cv2.waitKey(0)
Threshold Image:
Morphology Detected Lines Image:
Result:
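The same morphology can be applied to the vertical grid lines with a tall, thin kernel; a short sketch continuing from the thresh and result variables above:
# detect vertical lines with a (1, 25) kernel, i.e. width 1 and height 25
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 25))
detected_vlines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)

# cover the detected vertical lines with white, as done for the horizontal ones
vcnts = cv2.findContours(detected_vlines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
vcnts = vcnts[0] if len(vcnts) == 2 else vcnts[1]
for c in vcnts:
    cv2.drawContours(result, [c], -1, (255, 255, 255), 2)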

Error: (-215:Assertion failed) npoints > 0 while working with contours using OpenCV

When I run this code:
import cv2
image = cv2.imread('screenshoot10.jpg')
cv2.imshow('input image', image)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 30, 200)
cv2.imshow('canny edges', edged)
_, contours = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('canny edges after contouring', edged)
print(contours)
print('Numbers of contours found=', len(contours))
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)
cv2.imshow('contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am getting this error:
OpenCV(4.1.1)
C:\projects\opencv-python\opencv\modules\imgproc\src\drawing.cpp:2509:
error: (-215:Assertion failed) npoints > 0 in function
'cv::drawContours'
What am I doing wrong?
According to the documentation for findContours, the method returns (contours, hierarchy), so I think the code should be:
contours, _ = cv2.findContours(edged,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
instead of
_, contours = cv2.findContours(edged,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
Depending on the OpenCV version, cv2.findContours() has varying return signatures. In v3.X, three items are returned:
image, contours, hierarchy = cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
In v2.X and v4.X, two items are returned:
contours, hierarchy = cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
You can easily obtain the contours regardless of the version like this:
cnts = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
...
Since the last two returned values are always the contours and the hierarchy, we can further condense this into a single line by using [-2:] to extract them from the tuple returned by cv2.findContours():
cnts, _ = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
Because this question shows up on top when looking for Error: (-215:Assertion failed) npoints > 0 while working with contours using OpenCV, I want to point out another possible reason as to why one can get this error:
I was using cv2.boxPoints (cv2.cv.BoxPoints in OpenCV 2) after cv2.minAreaRect to get a rotated rectangle around a contour. I hadn't applied np.int0 to its output (to convert the returned array to integer values) before passing it to cv2.drawContours. This fixed it:
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)  # cv2.cv.BoxPoints(rect) in OpenCV 2
box = np.int0(box)  # convert to integer values
cv2.drawContours(im, [box], 0, (0, 0, 255), 2)
All the solutions provided here are either tied to a particular version or do not store all the values returned from cv2.findContours(). You can store every value returned by the function, independent of the cv2 version.
The following snippet will work irrespective of the OpenCV version installed in your system/environment and will also store all the tuples in individual variables.
First, get the version of OpenCV installed (we don't want the entire version string, just the major number, either 3 or 4):
import cv2
version = cv2.__version__[0]
Based on the version either of the following two statements will be executed and the corresponding variables will be populated:
if version == '4':
    contours, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
elif version == '3':
    img, contours, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
The contours returned from the function in either scenario will be stored in contours.
Note: the above snippet is written assuming either version 3 or 4 of OpenCV is installed. For older versions, please refer to the documentation or update to the latest.
If you are using OpenCV version 4 and are wondering what the first variable returned by version 3's cv2.findContours() is: it is just the same as the input image (in this case binary_image).
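If you prefer a single reusable function, here is a small sketch wrapping the same version check (the name find_contours_compat is just illustrative and, as above, assumes OpenCV 3 or 4):
import cv2

def find_contours_compat(binary_image, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_SIMPLE):
    # assumes OpenCV 3 or 4, as noted above
    if cv2.__version__[0] == '4':
        contours, hierarchy = cv2.findContours(binary_image, mode, method)
    else:
        img, contours, hierarchy = cv2.findContours(binary_image, mode, method)
    return contours, hierarchy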

OpenCV Python: find contours based on edges problem

I want to detect a car number plate.
See this photo.
I am using this code:
import numpy as np
import imutils
import cv2

# Read the image file
image = cv2.imread('Car_Image_1.jpg')

# Resize the image - change width to 500
image = imutils.resize(image, width=500)

# Display the original image
cv2.imshow("Original Image", image)

# RGB to grayscale conversion
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("1 - Grayscale Conversion", gray)

# Noise removal with iterative bilateral filter (removes noise while preserving edges)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
cv2.imshow("2 - Bilateral Filter", gray)

# Find edges of the grayscale image
edged = cv2.Canny(gray, 170, 200)
cv2.imshow("4 - Canny Edges", edged)

# Find contours based on edges
(new, cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# sort contours by area in descending order and keep only the 30 largest
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:30]

# we currently have no number plate contour
NumberPlateCnt = None

# loop over our contours to find the best possible approximate contour of the number plate
count = 0
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:  # Select the contour with 4 corners
        NumberPlateCnt = approx  # This is our approx number plate contour
        break

# Drawing the selected contour on the original image
cv2.drawContours(image, [NumberPlateCnt], -1, (0, 255, 0), 3)
cv2.imshow("Final Image With Number Plate Detected", image)
cv2.waitKey(0)  # Wait for user input before closing the images displayed
But when I run my code, I get this error:
C:\Python27\python.exe C:/Users/Crypt/PycharmProjects/MyDetector/CarPlateDetection.py
Traceback (most recent call last):
File "C:/Users/Crypt/PycharmProjects/MyDetector/CarPlateDetection.py", line 27, in <module>
(new, cnts , _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: need more than 2 values to unpack
Process finished with exit code 1
I changed this line of code:
(new, cnts , _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
to
(new, cnts) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
and ran it again; now the error is:
C:\Python27\python.exe C:/Users/Crypt/PycharmProjects/MyDetector/CarPlateDetection.py
Traceback (most recent call last):
File "C:/Users/Crypt/PycharmProjects/MyDetector/CarPlateDetection.py", line 29, in <module>
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:30]
cv2.error: OpenCV(4.0.0) C:\projects\opencv-python\opencv\modules\imgproc\src\shapedescr.cpp:272: error: (-215:Assertion failed) npoints >= 0 && (depth == CV_32F || depth == CV_32S) in function 'cv::contourArea'
Process finished with exit code 1
Here is the code on GitHub:
https://github.com/Aqsa-K/Car-Number-Plate-Detection-OpenCV-Python
From OpenCV 4 on, findContours returns 2 values, contours and hierarchy, so the contours end up in your new variable. It should be:
cnts, hier = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
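With that unpacking, the rest of the pipeline from the question stays the same; for example:
cnts, hier = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:30]  # cnts is now a list of contours, so contourArea works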

ValueError while using cv2.findContours() in Python -> not enough values to unpack (expected 3, got 2)

Getting an error:
Traceback (most recent call last):
File "motion_detector.py", line 21, in <module>
(_, cnts, _) = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
ValueError: not enough values to unpack (expected 3, got 2)
I am having problems detecting contours in an image. I have been double-checking against the tutorial and also searching Stack Overflow to understand what I am missing, but I can't find the solution. I am using Python 3.6.4 and OpenCV 4.0.0. Thanks for the help!
Code here:
import cv2, time

first_frame = None
video = cv2.VideoCapture(0)

while True:
    check, frame = video.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if first_frame is None:
        first_frame = gray
    delta_frame = cv2.absdiff(first_frame, gray)
    thresh_frame = cv2.threshold(delta_frame, 30, 255, cv2.THRESH_BINARY)[1]
    thresh_frame = cv2.dilate(thresh_frame, None, iterations=2)
    (_, cnts, _) = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in cnts:
        if cv2.contourArea(contour) < 1000:
            continue
        (x, y, w, h) = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
    cv2.imshow("Gray Frame", gray)
    cv2.imshow("Delta Frame", delta_frame)
    cv2.imshow("Threshold Frame", thresh_frame)
    cv2.imshow("Color Frame", frame)
    key = cv2.waitKey(1)
    print(gray)
    print(delta_frame)
    if key == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
I also encountered the same problem. Older tutorials target OpenCV versions where cv2.findContours() returns 3 values, but in later versions it returns 2, so you can drop the first variable assignment and write it like this:
cnts, _ = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Well, in OpenCV 3 findContours() used to return 3 values, so we saved them as (_, cnts, _); however, in OpenCV 4 it returns 2 values, which are the contours and the hierarchy, so we need to save them as (cnts, _).
So for OpenCV 3 the code goes like:
(_,cnts,_) = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
And for OpenCV 4 the code goes like:
(cnts,_) = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
It's just a version difference; nothing to worry about. Change it this way and you will get the desired output.
If you are using OpenCV 4.0, then findContours returns two values. See the example here or the documentation for findContours. The function signature looks like this:
contours, hierarchy = cv.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
As pointed out, the problem is with this line:
(_, cnts, _) = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
According to the documentation, cv2.findContours returns two things, contours and hierarchy, so when you try to unpack it into (_, cnts, _), which expects 3 elements, the error appears. Please try replacing the line above with
cnts = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
and check whether that solves your problem.

Find and draw the largest contour in opencv on a specific color (Python)

I'm trying to get the largest contour of a red book.
I've got a little problem with the code: it is getting the contours of the smallest objects (blobs) instead of the largest one, and I can't seem to figure out why this is happening.
The code I use:
camera = cv2.VideoCapture(0)
kernel = np.ones((2, 2), np.uint8)

while True:
    # Loading Camera
    ret, frame = camera.read()
    blurred = cv2.pyrMeanShiftFiltering(frame, 3, 3)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    lower_range = np.array([150, 10, 10])
    upper_range = np.array([180, 255, 255])
    mask = cv2.inRange(hsv, lower_range, upper_range)

    dilation = cv2.dilate(mask, kernel, iterations=1)
    closing = cv2.morphologyEx(dilation, cv2.MORPH_GRADIENT, kernel)
    closing = cv2.morphologyEx(dilation, cv2.MORPH_CLOSE, kernel)

    # Getting the edge of morphology
    edge = cv2.Canny(closing, 175, 175)

    _, contours, hierarchy = cv2.findContours(edge, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    # Find the index of the largest contour
    areas = [cv2.contourArea(c) for c in contours]
    max_index = np.argmax(areas)
    cnt = contours[max_index]

    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('threshold', frame)
    cv2.imshow('edge', edge)

    if cv2.waitKey(1) == 27:
        break

camera.release()
cv2.destroyAllWindows()
As you can see in this picture:
Hopefully there is someone who can help.
You can start by defining a mask in the range of the red tones of the book you are looking for.
Then you can just find the contour with the biggest area and draw the rectangular shape of the book.
import numpy as np
import cv2

# load the image
image = cv2.imread("path_to_your_image.png", 1)

# red color boundaries [B, G, R]
lower = [1, 0, 20]
upper = [60, 40, 220]

# create NumPy arrays from the boundaries
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")

# find the colors within the specified boundaries and apply
# the mask
mask = cv2.inRange(image, lower, upper)
output = cv2.bitwise_and(image, image, mask=mask)

ret, thresh = cv2.threshold(mask, 40, 255, 0)
if int(cv2.__version__[0]) > 3:
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
else:
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

if len(contours) != 0:
    # draw in blue the contours that were found
    cv2.drawContours(output, contours, -1, 255, 3)

    # find the biggest contour (c) by area
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)

    # draw the biggest contour (c) in green
    cv2.rectangle(output, (x, y), (x + w, y + h), (0, 255, 0), 2)

# show the images
cv2.imshow("Result", np.hstack([image, output]))
cv2.waitKey(0)
Using your image:
If you want the rectangle to rotate with the book, you can use rect = cv2.minAreaRect(cnt), as you can find here.
Edit:
You should also consider other colour spaces besides RGB, such as HSV or HLS. Usually, people use HSV, since the H channel stays fairly consistent in shadow or excessive brightness. In other words, you should get better results if you use the HSV colour space.
Specifically, in OpenCV the Hue range is [0, 179]. In the following figure (made by @Knight), you can find a 2D slice of that cylinder at V = 255, where the horizontal axis is H and the vertical axis is S. As you can see from that figure, to capture red you need to include both the lower (e.g., H=0 to H=10) and the upper (e.g., H=170 to H=179) region of the Hue values.
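As a minimal sketch of that two-range red mask (the threshold values below are illustrative, not tuned for the book image):
import cv2
import numpy as np

image = cv2.imread("path_to_your_image.png", 1)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# red wraps around the Hue axis, so combine the low and high Hue regions
mask_low = cv2.inRange(hsv, np.array([0, 70, 50]), np.array([10, 255, 255]))
mask_high = cv2.inRange(hsv, np.array([170, 70, 50]), np.array([179, 255, 255]))
mask = cv2.bitwise_or(mask_low, mask_high)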
Use this to convert grayscale masks to rectangles:
import cv2
import numpy as np

def mask_to_rect(image):
    '''
    Give rectangle coordinates according to the mask image
    Params:  image : (numpy.array) grayscale image
    Returns: coordinates : (list) list of coordinates [x, y, w, h]
    '''
    # Getting the thresholds and ret
    ret, thresh = cv2.threshold(image, 0, 1, 0)

    # Checking the version of OpenCV (I tried this for version 4)
    # Getting contours on the basis of thresh
    if int(cv2.__version__[0]) > 3:
        contours, hierarchy = cv2.findContours(thresh.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    else:
        im2, contours, hierarchy = cv2.findContours(thresh.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

    # Getting the biggest contour by area
    if len(contours) != 0:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        return [x, y, w, h]
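For example (a sketch; the mask file name is hypothetical):
mask = cv2.imread("some_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
x, y, w, h = mask_to_rect(mask)
print("Bounding rectangle:", x, y, w, h)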
Result
