I'm attempting to extract a blue object, very much like the one described in https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html#object-tracking
An example of a raw image with three blue shapes to extract is here:
The captured image is noisy and the unfiltered shape detection returns hundreds to thousands of "blue" shapes. In order to mitigate this, I applied the following steps:
Blurring the image before filtering it, resulting in closed surfaces
Converting the masked image (after bitwise_and) back to grayscale
Applying an OTSU threshold
Finally, detecting the contours
The complete code is:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()

    # Blur before colour filtering so the detected surfaces come out closed
    blur = cv2.GaussianBlur(frame, (15, 15), 0)
    hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)

    # HSV range for blue (hue roughly 115-125)
    lower_blue = np.array([115, 50, 50])
    upper_blue = np.array([125, 255, 255])
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    blue = cv2.bitwise_and(blur, blur, mask=mask)

    # Back to grayscale, then Otsu threshold
    gray = cv2.cvtColor(blue, cv2.COLOR_BGR2GRAY)
    T, ted = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Detect the contours and draw them on the original frame
    im2, contours, hierarchy = cv2.findContours(
        ted, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        cv2.drawContours(frame, [cnt], 0, (0, 255, 0), 3)

    # Show the number of detected contours
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(frame, str(len(contours)), (10, 500), font, 2, (0, 0, 255), 2, cv2.LINE_AA)

    cv2.imshow('mask', mask)
    cv2.imshow('blue', blue)
    cv2.imshow('grey', gray)
    cv2.imshow('thresholded', ted)
    cv2.imshow('frame', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Unfortunately, there are still 6-7 contours left whereas there should be three.
How can I further refine image processing to get just the three shapes?
You could use morphological operations coupled with connected components analysis:
Apply erosion on grayscale image: https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=erode#erode
Find connected components, function cv::connectedComponents (https://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gac2718a64ade63475425558aa669a943a)
Retain connected components whose area is bigger than a given threshold.
Dilate the resulting mask: https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=dilate#dilate
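A minimal sketch of those four steps, using cv2.connectedComponentsWithStats (a variant of the connectedComponents function mentioned above) so each component's area is available directly. The 5x5 kernel and the 500-pixel area threshold are assumptions, not values from the question:

import cv2
import numpy as np

def keep_large_blobs(binary, min_area=500):
    kernel = np.ones((5, 5), np.uint8)

    # 1. Erode to break the thin bridges left by noise
    eroded = cv2.erode(binary, kernel)

    # 2. Label connected components, with per-component statistics
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(eroded)

    # 3. Keep only the components whose area exceeds the threshold
    cleaned = np.zeros_like(binary)
    for i in range(1, n_labels):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255

    # 4. Dilate to roughly restore the original blob size
    return cv2.dilate(cleaned, kernel)

Running this on the thresholded image (ted in the question's code) before findContours should leave only the few large blue blobs, provided the spurious detections are small.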
If you're looking for specific shapes, you could use some shape descriptors.
Finally, I suggest trying to replace the Gaussian filter with a bilateral filter (https://docs.opencv.org/3.0-beta/modules/imgproc/doc/filtering.html#bilateralfilter) to better preserve the shapes. If you want an even better filter, have a look at this tutorial on the NL-means filter (https://docs.opencv.org/3.3.1/d5/d69/tutorial_py_non_local_means.html).
I am trying to remove objects from the image using the following method; however, as can be seen from the outcome image, there is a residual fine colored line around each object. Although I have dilated my image, the lines are still present.
Is there a way to remove those lines?
import cv2
import numpy as np

img = cv2.imread('TRY.jpg')

# Blur, then detect edges; helps in defining the edges
image_edges = cv2.GaussianBlur(img, (3, 3), 1)
image_edges = cv2.Canny(image_edges, 100, 200)
image_edges = cv2.dilate(image_edges, (3, 3), iterations=3)

# Fill every detected contour with black on a white mask
contours_draw, hierarchy = cv2.findContours(image_edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
mask = np.zeros(img.shape, np.uint8)
mask.fill(255)
for c in contours_draw:
    cv2.drawContours(mask, [c], -1, (0, 0, 0), -1)

# Whiten the masked-out objects in a copy of the original
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
res = img.copy()
res[mask == 0] = 255

cv2.imshow('img', res)
cv2.waitKey(0)
Original Image:
Outcome Image:
The outlines are still there because the threshold is too high.
Try this:
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 20, 255, 0)
contours_draw, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
The value you pass as the threshold determines how inclusive or exclusive findContours is going to be: a lower threshold keeps more pixels in the binary mask (more inclusive), a higher one keeps fewer.
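For illustration, a possible end-to-end version of that suggestion applied to the question's code. Only the file name and the threshold of 20 come from above; everything else is an assumption:

import cv2
import numpy as np

img = cv2.imread('TRY.jpg')

# Threshold the grayscale image instead of using Canny edges
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 20, 255, 0)

contours_draw, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Fill the detected objects with black on a white mask, then blank them out in the result
mask = np.full(img.shape[:2], 255, np.uint8)
cv2.drawContours(mask, contours_draw, -1, 0, -1)

res = img.copy()
res[mask == 0] = 255

cv2.imshow('img', res)
cv2.waitKey(0)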
I'm using OpenCV 4 with Python 3 to find a specific area in a black & white image.
This area is not a 100% filled shape. It may have some gaps between the white lines.
This is the base image from where I start processing:
This is the rectangle I expect (made with Photoshop):
These are the results I got with Hough transform lines (not accurate):
So basically, I start from the first image and I expect to find what you see in the second one.
Any idea of how to get the rectangle of the second image?
I'd like to present an approach which might be computationally less expensive than the solution in fmw42's answer, using only NumPy's nonzero function. Basically, all non-zero indices for both axes are found, and then the minima and maxima are obtained. Since we have binary images here, this approach works pretty well.
Let's have a look at the following code:
import cv2
import numpy as np
# Read image as grayscale; threshold to get rid of artifacts
_, img = cv2.threshold(cv2.imread('images/LXSsV.png', cv2.IMREAD_GRAYSCALE), 0, 255, cv2.THRESH_BINARY)
# Get indices of all non-zero elements
nz = np.nonzero(img)
# Find minimum and maximum x and y indices
y_min = np.min(nz[0])
y_max = np.max(nz[0])
x_min = np.min(nz[1])
x_max = np.max(nz[1])
# Create some output
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.rectangle(output, (x_min, y_min), (x_max, y_max), (0, 0, 255), 2)
# Show results
cv2.imshow('img', img)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
I borrowed the cropped image from fmw42's answer as input, and my output should be the same (or very similar):
Hope that (also) helps!
In Python/OpenCV, you can use morphology to connect all the white parts of your image and then get the outer contour. Note that I have modified your image to remove the parts at the top and bottom of your screen snap.
import cv2
import numpy as np
# read image
img = cv2.imread('blackbox.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
_,thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY)
# apply close to connect the white areas
kernel = np.ones((75,75), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get contours (presumably just one around the outside)
result = img.copy()
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    x, y, w, h = cv2.boundingRect(cntr)
    cv2.rectangle(result, (x, y), (x+w, y+h), (0, 0, 255), 2)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("Bounding Box", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save resulting images
cv2.imwrite('blackbox_thresh.png',thresh)
cv2.imwrite('blackbox_result.png',result)
Input:
Image after morphology:
Result:
Here's a slight modification to @fmw42's answer. The idea of connecting the desired regions into a single contour is very similar; however, you can find the bounding rectangle directly since there's only one object. Using the same cropped input image, here's the result.
We can optionally extract the ROI too:
import cv2
# Grayscale, threshold, and dilate
image = cv2.imread('3.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Connect into a single contour and find rect
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.dilate(thresh, kernel, iterations=1)
x,y,w,h = cv2.boundingRect(dilate)
ROI = original[y:y+h,x:x+w]
cv2.rectangle(image, (x, y), (x+w, y+h), (36, 255, 12), 2)
cv2.imshow('image', image)
cv2.imshow('ROI', ROI)
cv2.waitKey()
The objective is to blur the edges of a selected object in an image.
I've done the steps to obtain the contours of the object by using the following code:
image = cv2.imread('path of image')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)[1]
im, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
I am also able to plot the contour using:
cv2.drawContours(image, contours, -1, (0, 255, 0), 2)
Now I want to make use of the points stored in contours to blur / feather the edge of the object, perhaps using a Gaussian blur. How can I achieve that?
Thanks so much!
Similar to what I have mentioned here, you can do it in the following steps:
Load original image and find contours.
Blur the original image and save it in a different variable.
Create an empty mask and draw the detected contours on it.
Use the np.where() method to select the pixels from the mask (the drawn contours) where you want blurred values, and then replace them.
import cv2
import numpy as np

# Load the original image and a blurred copy of it
image = cv2.imread('./asdf.jpg')
blurred_img = cv2.GaussianBlur(image, (21, 21), 0)

# Empty mask on which the contours will be drawn
mask = np.zeros(image.shape, np.uint8)

# Find contours on the thresholded grayscale image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)[1]
contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Draw thick white contours on the mask, then take blurred pixels wherever the mask is white
cv2.drawContours(mask, contours, -1, (255, 255, 255), 5)
output = np.where(mask == np.array([255, 255, 255]), blurred_img, image)
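To inspect the result (a small addition, not part of the original answer; the window name is arbitrary):

cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()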
I have a portrait image, to which I:
Convert to grayscale
Convert to binary
Apply morphological erosion
Apply morphological dilation
Now, as the last step, I am trying to find the contours and draw them. The contours are found and drawn, but they are only drawn in white, which makes it look as if no contours were drawn at all. What am I doing wrong?
Here is my code:
import numpy as np
import cv2
img = cv2.imread("Test1.jpg")
image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("GrayScaled", image)
cv2.waitKey(0)
ret, thresh = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY_INV)
cv2.imshow("Black&White", thresh)
cv2.waitKey(0)
kernel1 = np.ones((2,2),np.uint8)
erosion = cv2.erode(thresh,kernel1,iterations = 4)
cv2.imshow("AfterErosion", erosion)
cv2.waitKey(0)
kernel2 = np.ones((1,1),np.uint8)
dilation = cv2.dilate(erosion,kernel2,iterations = 5)
cv2.imshow("AfterDilation", dilation)
cv2.waitKey(0)
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.drawContours(dilation, contours, -1, (255, 0, 0), 2)
cv2.imshow("Contours", dilation)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here are images step by step:
Original Image:
GrayScaled Image:
Binary Image:
After Erosion:
After Dilation:
Contour:
I am defining the color of the contour boundaries to be red in the above code. So why is it not showing red boundaries?
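One thing worth checking (an assumption on my part, not from the original post): dilation is a single-channel image, so only the first value of the colour tuple is used and the contours are drawn white on a white foreground. A minimal sketch of drawing on a 3-channel copy instead:

# Sketch: convert the single-channel mask to BGR so drawContours keeps the colour
contour_img = cv2.cvtColor(dilation, cv2.COLOR_GRAY2BGR)
cv2.drawContours(contour_img, contours, -1, (0, 0, 255), 2)  # red in BGR order
cv2.imshow("Contours", contour_img)
cv2.waitKey(0)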
I'm trying to find contours in a specific area of the image. Is it possible to show just the contours inside the ROI and not the contours in the rest of the image? I read in another similar post that I should use a mask, but I don't think I used it correctly. I'm new to OpenCV and Python, so any help is much appreciated.
import numpy as np
import cv2

cap = cv2.VideoCapture('size4.avi')
x, y, w, h = 150, 50, 400, 350
roi = (x, y, w, h)

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 127, 255, 0)
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    roi = cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 2)
    mask = np.zeros(roi.shape, np.uint8)
    cv2.drawContours(mask, contours, -1, (0, 255, 0), 3)
    cv2.imshow('img', frame)
Since you claim to be a novice, I have worked out a solution along with an illustration.
Consider the following to be your original image:
Assume that the following region in red is your region of interest (ROI), where you would like to find your contours:
First, construct an image of black pixels of the same size. It MUST be the same size as the original:
black = np.zeros((img.shape[0], img.shape[1], 3), np.uint8) #---black in RGB
Now to form the mask and highlight the ROI:
black1 = cv2.rectangle(black,(185,13),(407,224),(255, 255, 255), -1) #---the dimension of the ROI
gray = cv2.cvtColor(black,cv2.COLOR_BGR2GRAY) #---converting to gray
ret,b_mask = cv2.threshold(gray,127,255, 0) #---converting to binary image
Now apply the ROI mask obtained above to the thresholded version of your original image (call it th):
fin = cv2.bitwise_and(th, th, mask=b_mask)
Now use cv2.findContours() to find contours in the image above.
Then use cv2.drawContours() to draw contours on the original image. You will finally obtain the following:
There might be better methods as well, but this was done to make you aware of the bitwise AND operation available in OpenCV, which is widely used for masking.
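Putting those fragments together, a minimal sketch of the whole pipeline. The file name is an assumption, th is the thresholded original image, and the [-2] indexing keeps the snippet working on both OpenCV 3 and 4:

import cv2
import numpy as np

img = cv2.imread('frame.png')                 # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, th = cv2.threshold(gray, 127, 255, 0)    # thresholded original image

# Black image of the SAME size, with the ROI filled in white
black = np.zeros((img.shape[0], img.shape[1], 3), np.uint8)
cv2.rectangle(black, (185, 13), (407, 224), (255, 255, 255), -1)
gray_mask = cv2.cvtColor(black, cv2.COLOR_BGR2GRAY)
ret, b_mask = cv2.threshold(gray_mask, 127, 255, 0)

# Keep only the thresholded pixels inside the ROI
fin = cv2.bitwise_and(th, th, mask=b_mask)

# Find contours in the masked image and draw them on the original
contours = cv2.findContours(fin, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)

cv2.imshow('result', img)
cv2.waitKey(0)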
For setting a ROI in Python, one uses standard NumPy indexing such as in this example.
So, to select the right ROI, you don't use the cv2.rectangle function (that is for drawing a rectangle), but you do this instead:
_, thresh = cv2.threshold(gray, 127, 255, 0)
roi = thresh[y:(y+h), x:(x+w)]  # NumPy indexing is rows (y) first, then columns (x)
im2, contours, hierarchy = cv2.findContours(roi, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
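One caveat, added here as a note rather than part of the answer above: contours found on the cropped roi are in the ROI's own coordinate system, so when drawing them back on the full frame you can shift them with the offset parameter of drawContours:

# Shift the ROI-relative contours by (x, y) so they land correctly on the full frame
cv2.drawContours(frame, contours, -1, (0, 255, 0), 3, offset=(x, y))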