How can I find contours inside ROI using OpenCV and Python?

I'm trying to find contours in a specific area of the image. Is it possible to just show the contours inside the ROI and not the contours in the rest of the image? I read in another similar post that I should use a mask, but I don't think I used it correctly. I'm new to OpenCV and Python, so any help is much appreciated.
import numpy as np
import cv2

cap = cv2.VideoCapture('size4.avi')
x, y, w, h = 150, 50, 400, 350
roi = (x, y, w, h)

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 127, 255, 0)
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    roi = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    mask = np.zeros(roi.shape, np.uint8)
    cv2.drawContours(mask, contours, -1, (0, 255, 0), 3)
    cv2.imshow('img', frame)

Since you claim to be a novice, I have worked out a solution along with an illustration.
Consider the following to be your original image:
Assume that the following region in red is your region of interest (ROI), where you would like to find your contours:
First, construct an image of black pixels. It must be the same size as the original:
black = np.zeros((img.shape[0], img.shape[1], 3), np.uint8) #---black in RGB
Now to form the mask and highlight the ROI:
black1 = cv2.rectangle(black,(185,13),(407,224),(255, 255, 255), -1) #---the dimension of the ROI
gray = cv2.cvtColor(black,cv2.COLOR_BGR2GRAY) #---converting to gray
ret,b_mask = cv2.threshold(gray,127,255, 0) #---converting to binary image
Now mask your thresholded original image (thresh in your code) with the binary ROI mask above:
fin = cv2.bitwise_and(thresh, thresh, mask=b_mask)
Now use cv2.findContours() to find contours in the image above.
Then use cv2.drawContours() to draw contours on the original image. You will finally obtain the following:
There might be better methods as well, but this was done to make you aware of the bitwise AND operation available in OpenCV, which is widely used for masking.
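Putting those steps together for the video loop in the question, here is a minimal sketch (not from the original answer) that assumes frame and thresh from your code and uses your x, y, w, h as the ROI; the three-value findContours return matches the OpenCV 3 style used in this thread:
# Sketch: build a white-rectangle mask, AND it with the thresholded frame, then find contours
black = np.zeros((frame.shape[0], frame.shape[1], 3), np.uint8)    # black image, same size as frame
cv2.rectangle(black, (x, y), (x + w, y + h), (255, 255, 255), -1)  # white-filled ROI
gray_mask = cv2.cvtColor(black, cv2.COLOR_BGR2GRAY)
_, b_mask = cv2.threshold(gray_mask, 127, 255, 0)                  # binary ROI mask
fin = cv2.bitwise_and(thresh, thresh, mask=b_mask)                 # keep only pixels inside the ROI
_, contours, _ = cv2.findContours(fin, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)              # only contours inside the ROI are drawn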

For setting a ROI in Python, one uses standard NumPy indexing such as in this example.
So, to select the right ROI, you don't use the cv2.rectangle function (that is for drawing a rectangle), but you do this instead:
_, thresh = cv2.threshold(gray, 127, 255, 0)
roi = thresh[y:(y+h), x:(x+w)]  # note: NumPy indexes rows (y) first, then columns (x)
im2, contours, hierarchy = cv2.findContours(roi, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
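If you also want those contours drawn back on the full frame, one option (a sketch, not part of the original answer) is to pass findContours' offset parameter, which shifts the returned points by the ROI's top-left corner so they are expressed in full-frame coordinates:
# Sketch: contours found inside the ROI, expressed in full-frame coordinates via offset
im2, contours, hierarchy = cv2.findContours(roi, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE, offset=(x, y))
cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)  # draw the ROI itself for reference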

Related

OpenCV - Can't find correct contours in similar images

The task I want to do looks pretty simple: I take as input several images with an object centered in the photo and a little color chart needed for other purposes. My code normally works for the majority of cases, but sometimes it fails miserably and I just can't understand why.
For example (these are the source images), it works correctly on this https://imgur.com/PHfIqcb but not on this https://imgur.com/qghzO3V
Here's the code of the interested part:
img = cv2.imread(path)
height, width, channel = img.shape
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
kernel = np.ones((31, 31), np.uint8)
dil = cv2.dilate(gray, kernel, iterations=1)
_, th = cv2.threshold(dil, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
th_er1 = cv2.bitwise_not(th)
_, contours, _ = cv2.findContours(th_er1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
cnt = contours[max_index]
x, y, w, h = cv2.boundingRect(cnt)
After that I'm just going to crop the image according to the given results (taking the bounding rectangle of the biggest contour), basically cropping the photo down to just the main object.
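The crop described here is presumably just NumPy slicing on that bounding box; a minimal sketch, with the output filename as a placeholder:
# Sketch of the crop described above, using the boundingRect result
crop = img[y:y + h, x:x + w]
cv2.imwrite('cropped.png', crop)  # 'cropped.png' is a hypothetical output name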
But as I said, with very similar images it sometimes works and sometimes doesn't.
Thank you in advance.
Maybe you could try not using Otsu's method and just set the threshold manually, if that's possible... ;)
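A minimal sketch of what that comment suggests, applied to the question's variables, with the 200 cutoff being an arbitrary placeholder you would tune for your images:
# Sketch: fixed threshold instead of Otsu; 200 is a placeholder cutoff to tune
_, th = cv2.threshold(dil, 200, 255, cv2.THRESH_BINARY)
th_er1 = cv2.bitwise_not(th)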
You can use the Canny edge detector. In the two images, there is a good threshold value to isolate the object in the center of the image. After applying the threshold, we blur the results and apply the Canny edge detector before finding the contours:
import cv2
import numpy as np

def process(img):
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(img_gray, 190, 255, cv2.THRESH_BINARY_INV)
    img_blur = cv2.GaussianBlur(thresh, (3, 3), 1)
    img_canny = cv2.Canny(img_blur, 0, 0)
    kernel = np.ones((5, 5))
    img_dilate = cv2.dilate(img_canny, kernel, iterations=1)
    return cv2.erode(img_dilate, kernel, iterations=1)

def get_contours(img):
    contours, hierarchies = cv2.findContours(process(img), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)
    cv2.drawContours(img, [cnt], -1, (0, 255, 0), 30)
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 30)

img = cv2.imread("image.jpeg")
get_contours(img)
cv2.imshow("Result", img)
cv2.waitKey(0)
Input images:
Output images:
The green outlines are the contours of the objects, and the red outlines are the bounding boxes of the objects.

How can I create bounding boxes/contour around the outer object only - Python OpenCV

So I've been trying to make bounding boxes around a couple of fruits that I made in paint. I'm a total beginner to OpenCV, so I watched a couple of tutorials, and the code that I typed makes contours around the objects, and using those I create bounding boxes. However, it makes way too many tiny contours and bounding boxes. For example, here's the initial picture:
and here's the picture after my code runs:
However, this is what I want:
Here's my code:
import cv2
import numpy as np

img = cv2.imread(r'C:\Users\bob\Desktop\coursera\coursera-2021\project2\fruits.png')
grayscaled = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(grayscaled, (7,7), 1)
canny = cv2.Canny(blur, 50, 50)

img1 = img.copy()
all_pics = []
contours, Hierarchy = cv2.findContours(canny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02*peri, True)
    x, y, w, h = cv2.boundingRect(approx)
    bob = img[y:y+h, x:x+w]
    all_pics.append((x, bob))
    cv2.rectangle(img1, (x, y), (x+w, y+h), (0, 255, 0), 2)

cv2.imshow("title", img1)
cv2.waitKey(0)
I think FloodFill in combination with FindContours can help you:
import os
import cv2
import numpy as np

# Read original image
dir = os.path.abspath(os.path.dirname(__file__))
im = cv2.imread(dir+'/'+'im.png')

# Convert image to grayscale and black/white
gry = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
cv2.imwrite(dir+'/im_1_grayscale.png', gry)
bw = cv2.threshold(gry, 127, 255, cv2.THRESH_BINARY)[1]
cv2.imwrite(dir+'/im_2_black_white.png', bw)

# Use floodfill to identify outer shape of objects
imFlood = bw.copy()
h, w = bw.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
cv2.floodFill(imFlood, mask, (0,0), 0)
cv2.imwrite(dir+'/im_3_floodfill.png', imFlood)

# Combine flood filled image with original objects
imFlood[np.where(bw==0)] = 255
cv2.imwrite(dir+'/im_4_mixed_floodfill.png', imFlood)

# Invert output colors
imFlood = ~imFlood
cv2.imwrite(dir+'/im_5_inverted_floodfill.png', imFlood)

# Find objects and draw bounding box
cnts, _ = cv2.findContours(imFlood, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
for cnt in cnts:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02*peri, True)
    x, y, w, h = cv2.boundingRect(approx)
    bob = im[y:y+h, x:x+w]
    cv2.rectangle(im, (x, y), (x+w, y+h), (0, 255, 0), 2)

# Save final image
cv2.imwrite(dir+'/im_6_output.png', im)
Grayscale and black/white:
Add flood fill:
Blend original objects with the flood filled version:
Invert the colors of blended version:
Final output:
This guy's answer was just what I was looking for. If anyone runs into this problem, try this:
Draw contours around objects with OpenCV

Find area with content and get its bounding rect

I'm using OpenCV 4 with Python 3 to find a specific area in a black & white image.
This area is not a 100% filled shape. It may have some gaps between the white lines.
This is the base image from where I start processing:
This is the rectangle I expect - made with photoshop -:
These are the results I got with Hough transform lines - not accurate:
So basically, I start from the first image and I expect to find what you see in the second one.
Any idea of how to get the rectangle of the second image?
I'd like to present an approach which might be computationally less expensive than the solution in fmw42's answer, using only NumPy's nonzero function. Basically, all non-zero indices for both axes are found, and then the minima and maxima are obtained. Since we have binary images here, this approach works pretty well.
Let's have a look at the following code:
import cv2
import numpy as np
# Read image as grayscale; threshold to get rid of artifacts
_, img = cv2.threshold(cv2.imread('images/LXSsV.png', cv2.IMREAD_GRAYSCALE), 0, 255, cv2.THRESH_BINARY)
# Get indices of all non-zero elements
nz = np.nonzero(img)
# Find minimum and maximum x and y indices
y_min = np.min(nz[0])
y_max = np.max(nz[0])
x_min = np.min(nz[1])
x_max = np.max(nz[1])
# Create some output
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.rectangle(output, (x_min, y_min), (x_max, y_max), (0, 0, 255), 2)
# Show results
cv2.imshow('img', img)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
I borrowed the cropped image from fmw42's answer as input, and my output should be the same (or very similar):
Hope that (also) helps!
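As a side note, cv2.boundingRect also accepts a grayscale image and boxes its non-zero pixels (the modified answer further down uses it that way on its dilated image), so the min/max step can be collapsed into a single call; a minimal sketch on the same variables:
# Sketch: cv2.boundingRect on the binary image yields the same bounding box in one call
x, y, w, h = cv2.boundingRect(img)
cv2.rectangle(output, (x, y), (x + w, y + h), (0, 0, 255), 2)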
In Python/OpenCV, you can use morphology to connect all the white parts of your image and then get the outer contour. Note I have modified your image to remove the parts at the top and bottom from your screen snap.
import cv2
import numpy as np

# read image
img = cv2.imread('blackbox.png')

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)

# apply close to connect the white areas
kernel = np.ones((75,75), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

# get contours (presumably just one around the outside)
result = img.copy()
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    x, y, w, h = cv2.boundingRect(cntr)
    cv2.rectangle(result, (x, y), (x+w, y+h), (0, 0, 255), 2)

# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("Bounding Box", result)
cv2.waitKey(0)
cv2.destroyAllWindows()

# save resulting images
cv2.imwrite('blackbox_thresh.png', thresh)
cv2.imwrite('blackbox_result.png', result)
Input:
Image after morphology:
Result:
Here's a slight modification to @fmw42's answer. The idea of connecting the desired regions into a single contour is very similar; however, you can find the bounding rectangle directly since there's only one object. Using the same cropped input image, here's the result.
We can optionally extract the ROI too:
import cv2
# Grayscale, threshold, and dilate
image = cv2.imread('3.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Connect into a single contour and find rect
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.dilate(thresh, kernel, iterations=1)
x,y,w,h = cv2.boundingRect(dilate)
ROI = original[y:y+h,x:x+w]
cv2.rectangle(image, (x, y), (x+w, y+h), (36, 255, 12), 2)
cv2.imshow('image', image)
cv2.imshow('ROI', ROI)
cv2.waitKey()

Removing partial borders with openCV

I'm using OpenCV to find tabular data within images so that I can use an OCR on it. So far I have been able to find the table in the image, find the columns of the table, then find each cell within each column. It works pretty well, but I'm having an issue with the cell walls getting stuck in my images, and I'm unable to remove them reliably. This is one example that I'm having difficulty with. This would be another example.
I have tried several approaches to get these images better. I have been having the most luck with finding the contours.
img2gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 180, 255, cv2.THRESH_BINARY)
image_final = cv2.bitwise_and(img2gray, img2gray, mask=mask)
ret, new_img = cv2.threshold(image_final, 180, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 3))
# dilate; more iterations means more dilation
dilated = cv2.dilate(new_img, kernel, iterations=3)
cv2.imwrite('../test_images/find_contours_dilated.png', dilated)
I have been toying with the kernel size and dilation iterations and have found this to be the best configuration.
Another approach I used was with PIL, but it is only really good if the border is uniform around the whole image, which in my case it is not.
copy = Image.fromarray(img)
try:
    bg = Image.new(copy.mode, copy.size, copy.getpixel((0, 0)))
except:
    return None
diff = ImageChops.difference(copy, bg)
diff = ImageChops.add(diff, diff, 2.0, -100)
bbox = diff.getbbox()
if bbox:
    return np.array(copy.crop(bbox))
There were a few other ideas that I tried, but none got me very far. Any help would be appreciated.
You could try finding the contours and "drawing" them out. That is, you can manually draw a border with cv2.rectangle along the edges of your image so that it connects with the "walls" (combining all the wall contours into one). The two biggest contours will then be the outer and inner lines of your walls, and you can draw those contours in white to remove the border. Then apply the threshold again to remove the rest of the noise. Cheers!
Example:
import cv2
import numpy as np
# Read the image
img = cv2.imread('borders2.png')
# Get image shape
h, w, channels = img.shape
# Draw a rectangle on the border to combine the wall to one contour
cv2.rectangle(img,(0,0),(w,h),(0,0,0),2)
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply binary threshold
_, threshold = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY_INV)
# Search for contours and sort them by size
_, contours, hierarchy = cv2.findContours(threshold,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
area = sorted(contours, key=cv2.contourArea, reverse=True)
# Draw the two biggest contours filled with white to remove the border
cv2.drawContours(img, [area[0], area[1]], -1, (255, 255, 255), -1)
# Apply binary threshold again to the new image to remove little noises
_, img = cv2.threshold(img, 180, 255, cv2.THRESH_BINARY)
# Display results
cv2.imshow('img', img)
Result:

OpenCV - Appending contours inside ROI as numpy array in python

In the picture below I have found and drawn the contour (green) of my object. I would like to get the contour coordinates as a numpy array, so I have used
array = np.vstack(contours).squeeze(). This gives me the coordinates of the entire contour in an array.
My question: is it possible to append only the contour coordinates (two green lines) inside the ROI (red square) into an array?
I know I can find contours inside an ROI separately, but I'm interested in doing it this way since the plan is to have several ROIs, and I don't want to find contours separately for each ROI. Any help is much appreciated.
Image with the contour
Original picture:
Here is the code if anyone is interested:
import numpy as np
import cv2
img = cv2.imread('image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, 0)
inv = 255 - thresh
im2, contours, hierarchy = cv2.findContours(inv, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0,255,0), 2)
array = np.vstack(contours).squeeze()
x, y, w, h = 150, 50, 500, 150
roi = img[y:(y+h),x:(x+w)]
cv2.rectangle(img,(x,y), (x+w, y+h), (0,0,255), 1)
cv2.imshow('image', img)
cv2.waitKey(0)
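For what it's worth, one way to keep only the stacked coordinates that fall inside the red rectangle is a NumPy boolean mask; a minimal sketch (not from the thread), assuming array is the (N, 2) result of np.vstack(contours).squeeze() above with columns (x, y):
# Sketch: keep only contour points that lie inside the ROI rectangle
inside = (array[:, 0] >= x) & (array[:, 0] <= x + w) & (array[:, 1] >= y) & (array[:, 1] <= y + h)
roi_points = array[inside]  # coordinates of the contour segments inside the red square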
