I am currently using OpenCV to try to detect the rectangle around each individual parking sign in a group of signs. Using "findContours" and "approxPolyDP" has gotten me close, but I want to consolidate the contours into a single rectangle. Using "boundingRect" has not worked, since there are breaks in the shape.
Let me know if you have any advice on how to approach this problem.
Here is my code:
import cv2
import numpy as np

image = cv2.imread(sign_directory)
# find all the 'red' shapes in the image (BGR thresholds)
lower = np.array([0, 0, 110])
upper = np.array([100, 100, 250])
shapeMask = cv2.inRange(image, lower, upper)
# find the contours in the mask (OpenCV 3.x returns three values)
(img, cnts, _) = cv2.findContours(shapeMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("Image", image)
cv2.waitKey(0)
This is the image it produces
I want to get something like this so that I can crop it
This is the original
Instead of passing each contour to cv2.boundingRect(), merge them all into a single array and pass that to the function; then you get the bounding box you need for the crop. But before doing that, you need to remove outliers/noise, i.e. the contours that are too far from the others.
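A minimal sketch of that idea, assuming OpenCV 4.x (two return values from findContours) and an area threshold you would still need to tune for your images:

import cv2
import numpy as np

image = cv2.imread(sign_directory)
# same red range as the question's code
shapeMask = cv2.inRange(image, np.array([0, 0, 110]), np.array([100, 100, 250]))
cnts, _ = cv2.findContours(shapeMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# drop small/noisy contours first (the area threshold of 100 is a guess)
cnts = [c for c in cnts if cv2.contourArea(c) > 100]
# stack all remaining contour points into one array and take its bounding box
all_points = np.vstack(cnts)
x, y, w, h = cv2.boundingRect(all_points)
crop = image[y:y+h, x:x+w]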
I want to extract the individual persons from the video screenshot as an image.
So from this frame I want 5 images, which I'll export as 1.jpg, 2.jpg, ..., 5.jpg, by creating a bounding box for each video tile.
Zoom conference example.
How would you tackle this? I need a robust method.
Is there any fast simple method I'm not thinking of? Any ML model that takes care of this or is basic image processing the way to go?
Thanks in advance
I tried OpenCV thresholding, but the background color also appears in the attendees' video feeds, which adds noise as you can see.
thresholding result
Your thresholding result looks fine to me. findContours() plus boundingRect() would clean up the black parts of each camera view. contourArea() could be used to reject small white parts from becoming their own camera view.
So, for example, here's how to run findContours():
import cv2
from matplotlib import pyplot as plt

# This is your post-thresholding image
img_orig = cv2.imread('test183_image.png')
img_gray = cv2.cvtColor(img_orig, cv2.COLOR_BGR2GRAY)
# Find the contours
contours, hierarchy = cv2.findContours(img_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Draw them for debug purposes
img = img_orig.copy()
cv2.drawContours(img, contours, -1, (0, 255, 0), 10)
plt.imshow(img)
Output:
There's a seventh contour here, in the upper left corner of the image. It can be filtered out like this:
# Reject any contour smaller than min_area
min_area = 20000 # in square pixels
contours = [contour for contour in contours if cv2.contourArea(contour) >= min_area]
Output:
The next step is to find the upright bounding rectangle of each camera using boundingRect():
# Get bounding rectangle for each contour
bounding_rects = [cv2.boundingRect(contour) for contour in contours]
# Display each rectangle
img = img_orig.copy()
for rect in bounding_rects:
    x, y, w, h = rect
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 10)
plt.imshow(img)
Output:
In the bounding_rects list, you now have the x, y, width, and height of every camera.
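If you also want to export each camera as its own file, as described in the question, here is a small follow-up sketch (not part of the original answer; it reuses img_orig and bounding_rects from above, and you would index into the original color frame instead if you want the unthresholded pixels):

for i, (x, y, w, h) in enumerate(bounding_rects, start=1):
    crop = img_orig[y:y+h, x:x+w]
    cv2.imwrite(f'{i}.jpg', crop)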
This is my code. I am trying to delete the mask (noise) from the binary image, but what I am getting is white lines left around the noise. I am aware that there is a contour around that noise creating the final white line in the results. Any help?
Original Image
Mask and results
Code
import numpy as np
import cv2

img = cv2.imread('11_otsu.png')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#cv2.drawContours(img, contours, -1, (0,255,0), 2)
# create an empty mask
mask = np.zeros(img.shape[:2], dtype=np.uint8)
# loop through the contours
for i, cnt in enumerate(contours):
    # if the contour has no other contours inside of it
    if hierarchy[0][i][2] == -1:
        # if the size of the contour is less than a threshold (noise)
        if cv2.contourArea(cnt) < 70:
            cv2.drawContours(mask, [cnt], 0, (255), -1)
# display result
cv2.imshow("Mask", mask)
cv2.imshow("Img", img)
image = cv2.bitwise_not(img, img, mask=mask)
cv2.imshow("Mask", mask)
cv2.imshow("After", image)
cv2.waitKey()
cv2.destroyAllWindows()
Your code is perfectly fine; just make these adjustments and it should work:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)  # Use cv2.RETR_CCOMP for a two-level hierarchy
if hierarchy[0][i][3] != -1:  # basically look for holes
    # if the size of the contour is less than a threshold (noise)
    if cv2.contourArea(cnt) < 70:
        # Fill the holes in the original image
        cv2.drawContours(img, [cnt], 0, (255), -1)
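For reference, a consolidated sketch of how those adjustments fit into the asker's script (assuming the same file name and the 70-pixel area threshold from the question; it fills the small holes directly in the original image):

import cv2
import numpy as np

img = cv2.imread('11_otsu.png')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, cv2.THRESH_BINARY)
# two-level hierarchy: outer contours and the holes inside them
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for i, cnt in enumerate(contours):
    # a parent index != -1 means this contour is a hole inside another contour
    if hierarchy[0][i][3] != -1 and cv2.contourArea(cnt) < 70:
        cv2.drawContours(img, [cnt], 0, (255, 255, 255), -1)
cv2.imshow("Filled", img)
cv2.waitKey(0)
cv2.destroyAllWindows()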
Instead of trying to find inner contours and filling those in, may I suggest using cv2.floodFill instead? The flood fill operation is commonly used to fill in holes inside closed objects. Specifically, if you set the seed pixel to be the top left corner of the image then flood fill the image, what will get filled is the background while closed objects are left alone. If you invert this image, you will find all of the pixels that are interior to the closed objects that have "holes". If you take this inverted image and use the non-zero locations to directly set the original image, you will thus fill in the holes.
Therefore:
import cv2
import numpy as np

im = cv2.imread('8AdUp.png', 0)
h, w = im.shape[:2]
mask = np.zeros((h+2, w+2), dtype=np.uint8)
holes = cv2.floodFill(im.copy(), mask, (0, 0), 255)[1]
holes = ~holes
im[holes == 255] = 255
cv2.imshow('Holes Filled', im)
cv2.waitKey(0)
cv2.destroyAllWindows()
First we read in the image you've provided, which is thresholded and taken before the "noise filtering", then get its height and width. We also use an input mask to tell the flood fill which pixels it is allowed to operate on. Using a mask of all zeroes means it will operate on the entire image. It's also important to note that the mask needs a 1-pixel border surrounding the image, which is why it is two pixels larger in each dimension. We flood fill the image using the top-left corner as the seed point, invert the result, set any "hole" pixels to 255 and show it. Take note that the input image is mutated once the method finishes, so you need to pass in a copy to leave the input image untouched. Also, cv2.floodFill (in OpenCV 4) returns a tuple of four elements. I'll let you look at the documentation, but you need the second element of this tuple, which is the filled-in image.
We thus get:
I think using the cv2.GaussianBlur() method might help you. After you convert the image to grayscale, blur it using this method (as the name suggests, this is a Gaussian filter). Here is the documentation:
https://docs.opencv.org/4.3.0/d4/d86/group__imgproc__filter.html
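A minimal sketch of that suggestion, applied before thresholding (the 5x5 kernel size is an assumption; tune it for your noise level):

import cv2

img = cv2.imread('11_otsu.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # sigma is computed from the kernel size
ret, thresh = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)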
I would like to crop an image which has a hand drawn highlighted area in orange as shown below,
The result should be a cropped image along the major axis of the blob or contour with a rectangular bounding box, as shown below,
Here's what I have tried:
import numpy as np
import cv2

# load the image
image = cv2.imread("frame50.jpg", 1)
# color boundaries [B, G, R]
lower = [0, 3, 30]
upper = [30, 117, 253]
# create NumPy arrays from the boundaries
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")
# find the colors within the specified boundaries and apply
# the mask
mask = cv2.inRange(image, lower, upper)
output = cv2.bitwise_and(image, image, mask=mask)
ret, thresh = cv2.threshold(mask, 50, 255, 0)
if int(cv2.__version__[0]) > 3:
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
else:
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    # find the biggest contour (c) by the area
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    ROI = image[y:y+h, x:x+w]
    cv2.imshow('ROI', ROI)
    cv2.imwrite('ROI.png', ROI)
cv2.waitKey(0)
This does not seem to work most of the time. For some images, the following happens:
I would like to know if there is a better way to go about this, or how I can fix what I have right now. Note that the highlighted area is hand-drawn and can be of any shape, but it is closed (not left open), and the colour of the highlight is that same shade of orange in all cases.
Also, is there a way to retain only the content inside the circle and black out everything outside it?
EDIT1:
I was able to fix the wrong clipping by varying the threshold more. But my main query now is: is there a way to retain only the content inside the circle and black out everything outside it? I can see the mask as shown below:
How do I fill this mask, retain the content inside the circle, and black out everything outside it, with the same rectangular bounding box?
Have you tried
image[x:x+w, y:y+h]
And could you check the bounding box with the code below?
cv2.rectangle(thresh,(x,y),(x+w,y+h),(255,0,0),2)
First of all, it is always better to use an HSV image instead of a BGR image for masking (extracting a color). You can do this with the following code:
HSV_Image = cv2.cvtColor(Image, cv2.COLOR_BGR2HSV)
ThreshImage = cv2.inRange(HSV_Image, np.array([0, 28, 191]), np.array([24, 255, 255]))
The range values here were found for the orange color in this case.
Image is the input image and ThreshImage is the output image, with the orange region colored white and everything else black.
Now finding the contours in ThreshImage with the cv2.RETR_EXTERNAL flag will give only one contour, which is the outer boundary of the orange region.
Contours, Hierarchy = cv2.findContours(ThreshImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
To crop the orange region:
BoundingRect = cv2.boundingRect(Contours[0])
(x, y, w, h) = BoundingRect
CroppedImage = Image[y:y+h, x:x+w].copy()
"CroppedImage" will store the cropped orange region as desired.
To get contents of only inside the contour:
Bitwise AND operation will be useful here as we already have detected the contour.
First, we have to create a black image with the same shape as the input image and draw the contour on it, filled with white.
ContourFilledImage = np.zeros(Image.shape, dtype=np.uint8)
cv2.drawContours(ContourFilledImage, Contours, -1, (255, 255, 255), -1)
Now perform a bitwise AND operation on Input Image and "ContourFilledImage"
OnlyInnerData = cv2.bitwise_and(ContourFilledImage, Image)
"OnlyInnerData" image is the desired output image having only the content of inside the circle.
I'm doing cell segmentation, so I'm trying to code a function that removes all minor contours around the main one in order to make a mask.
That happens because I load an image with some color markers:
The problem is that when I threshold, it treats that "box" between the color markers as part of the main contour.
As you may see in my code, I don't convert the color image directly to grayscale, because the red turns black, but there are other colors too, at least 8, and they are always different in each image. I've got thousands of images like this where just one cell is displayed, but in most of them there are always outsider contours attached. My goal is a function that gives a binary image of a single cell for each input image like this. So I'm starting with this code:
import cv2 as cv

cell1 = cv.imread(image_cell)  # load in color so the HSV conversion below works
imgray = cv.cvtColor(cell1, cv.COLOR_BGR2HSV)
imgray = cv.cvtColor(imgray, cv.COLOR_BGR2GRAY)
ret, thresh_binary = cv.threshold(imgray, 107, 255, cv.THRESH_BINARY)
cnts = cv.findContours(image=cv.convertScaleAbs(thresh_binary), mode=cv.RETR_TREE, method=cv.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv.drawContours(thresh_binary, [c], 0, (255, 255, 255), -1)
kernel = cv.getStructuringElement(cv.MORPH_RECT, (3, 3))
opening = cv.morphologyEx(thresh_binary, cv.MORPH_OPEN, kernel, iterations=2)  # erosion followed by dilation
Summing up, how do I get just the red contour from image 1?
So another approach, without color ranges.
A couple of things are not going right in your code, I think. First, you are drawing the contours on thresh_binary, but that already has the outer lines of the other cells as well, the very lines you are trying to get rid of. I think that is why you use opening(?), while in this case you shouldn't.
To fix things, first a little information on how findContours works. findContours starts by looking for white shapes on a black background, then looks for black shapes inside that white contour, and so on. That means that the white outline of each cell in thresh_binary is detected as a contour. Inside of it are other contours, including the one you want. (See the docs with examples.)
What you should do is first look only for contours that have no contours inside of them. findContours also returns a hierarchy of contours, which indicates whether a contour has 'children'. If it has none (value: -1), then you look at the size of the contour and disregard the ones that are too small. You could also just look for the largest, as that is probably the one you want. Finally you draw the contour on a black mask.
Result:
Code:
import cv2 as cv
import numpy as np

# load image as grayscale
cell1 = cv.imread("PjMQR.png", 0)
# threshold image
ret, thresh_binary = cv.threshold(cell1, 107, 255, cv.THRESH_BINARY)
# find contours
contours, hierarchy = cv.findContours(image=thresh_binary, mode=cv.RETR_TREE, method=cv.CHAIN_APPROX_SIMPLE)
# create an empty mask
mask = np.zeros(cell1.shape[:2], dtype=np.uint8)
# loop through the contours
for i, cnt in enumerate(contours):
    # if the contour has no other contours inside of it
    if hierarchy[0][i][2] == -1:
        # if the size of the contour is greater than a threshold
        if cv.contourArea(cnt) > 10000:
            cv.drawContours(mask, [cnt], 0, (255), -1)
# display result
cv.imshow("Mask", mask)
cv.imshow("Img", cell1)
cv.waitKey(0)
cv.destroyAllWindows()
Note: I used the image you uploaded; your image probably has far fewer pixels, so a smaller contourArea threshold may be needed.
Note 2: enumerate loops through the contours and returns an index along with each contour on every iteration.
Actually, in your code the 'box' is a legitimate extra contour. And you draw all contours on the final image, so that includes the 'box'. This could cause issues if any of the other colored cells are fully in the image.
A better approach is to separate out the color you want. The code below creates a binary mask that only displays the pixels that are in the defined range of red colors. You can use this mask with findContours.
Result:
Code:
import cv2
import numpy as np

# load image
img = cv2.imread("PjMQR.png")
# Convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of red color in HSV
lower_val = np.array([0, 20, 0])
upper_val = np.array([15, 255, 255])
# Threshold the HSV image to get only red colors
mask = cv2.inRange(hsv, lower_val, upper_val)
# display image
cv2.imshow("Mask", mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
This code can help you understand how the different values in this process (HSV with inRange) work. See the inRange docs.
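From there, one possible continuation (not part of the original answer) is to feed the red mask into findContours, keep the largest contour, and draw it filled on a black mask, which gives the single-cell binary image the question asks for:

import cv2
import numpy as np

img = cv2.imread("PjMQR.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([0, 20, 0]), np.array([15, 255, 255]))
# assumes OpenCV 4.x (two return values from findContours)
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
c = max(contours, key=cv2.contourArea)  # the red cell outline
cell_mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.drawContours(cell_mask, [c], 0, 255, -1)
cv2.imshow("Cell mask", cell_mask)
cv2.waitKey(0)
cv2.destroyAllWindows()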
OpenCV in Python provides the following code:
regions, hierarchy = cv2.findContours(binary_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for region in regions:
    x, y, w, h = cv2.boundingRect(region)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)
This gives some contours within contours. How do I remove them in Python?
For that, you should take a look at this tutorial on how to use the hierarchy object returned by the method findContours.
The main point is that you should use cv2.RETR_TREE instead of cv2.RETR_LIST to get parent/child relationships between your clusters:
regions, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Then you can check whether a contour with index i is inside another by checking if hierarchy[0,i,3] equals -1 or not. If it is different from -1, then your contour is inside another.
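A minimal sketch of that check applied to the question's loop (assuming a findContours build that returns two values, as in the question's snippet):

regions, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for i, region in enumerate(regions):
    # keep only top-level contours, i.e. those whose parent index is -1
    if hierarchy[0, i, 3] == -1:
        x, y, w, h = cv2.boundingRect(region)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)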
In order to remove the contours inside a contour:
shapes, hierarchy = cv2.findContours(image=image, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
However, in some cases you may observe that a big contour is formed on the whole image, and applying the above returns you that one big contour.
In order to avoid this, try inverting the image:
image = cv2.imread("Image Path")
image = 255 - image
shapes, hierarchy = cv2.findContours(image=image, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
This will give you the desired result.
UPDATE:
The reason why hierarchy does not work when a big bounding box is approximated over the whole image is that hierarchy[0,iteration,3] is -1 only for that one bounding box drawn over the whole image; all other bounding boxes are inside this big bounding box, so hierarchy[0,iteration,3] is not equal to -1 for any of them. Thus, inverting the image is required in order to comply with the following:
In OpenCV, finding contours is like finding white object from black background. So remember, object to be found should be white and background should be black.
However, as pointed out by @Jeru, this is not a generalized solution and one must visualize the image before inverting it.
Consider this image:
Running
shapes, hierarchy = cv2.findContours(image=image, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_SIMPLE)
results in
Now, only displaying the contour with hierarchy[0,iteration,3] = -1 results in
which is not correct. If we want to obtain the rectangle containing the shapes and the text shapes, we can do
thresh = 255 - thresh
shapes, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
In this case we get:
Code:
import cv2

image = cv2.imread("Image Path")
deep_copy = image.copy()
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(image_gray, 210, 255, cv2.THRESH_BINARY)
thresh = 255 - thresh
shapes, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image=deep_copy, contours=shapes, contourIdx=-1, color=(0, 255, 0), thickness=2, lineType=cv2.LINE_AA)
for iteration, shape in enumerate(shapes):
    if hierarchy[0, iteration, 3] == -1:
        print(hierarchy[0, iteration, 3])
        print(iteration)
cv2.imshow('Shapes', deep_copy)
cv2.waitKey(0)
cv2.destroyAllWindows()
img_output, contours, hierarchy = cv2.findContours(blank_image_firstImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
This removes the child contours. (The three return values here correspond to OpenCV 3.x; in OpenCV 4.x, findContours returns only the contours and the hierarchy.)