I have two intersecting ellipses in a black and white image. I am trying to use OpenCV's findContours to identify the separate shapes as separate contours using this code (and the attached image below).
import numpy as np
import matplotlib.pyplot as plt
import cv2
import skimage.morphology
img_3d = cv2.imread("C:/temp/test_annotation_overlap.png")
img_grey = cv2.cvtColor(img_3d, cv2.COLOR_BGR2GRAY)
contours = cv2.findContours(img_grey, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
fig, ax = plt.subplots(len(contours)+1,1, figsize=(5, 20))
thicker_img_grey = skimage.morphology.dilation(img_grey, skimage.morphology.disk(radius=3))
ax[0].set_title("ORIGINAL IMAGE")
ax[0].imshow(thicker_img_grey, cmap="Greys")
for i, contour in enumerate(contours):
    new_img = np.zeros_like(img_grey)
    cv2.drawContours(new_img, contour, -1, (255, 255, 255), 10)
    ax[i+1].set_title(f"Contour {i}")
    ax[i+1].imshow(new_img, cmap="Greys")
plt.show()
However, four contours are found, none of which matches an original shape:
How can I configure cv2.findContours to identify the two separate shapes? (Note: I have already played around with Hough circles and found it unreliable for the images I am analysing.)
Maybe I overkilled it with this approach, but it could be used as a working solution. You can find all the contours in the image: you will get the two contours that look like a "semicircle", the contour of the intersection, and the contour that is the outer shape of the two adjoined circles. The three smallest contours should be the two semicircles and the intersection. If you draw combinations of two out of these three contours, you will get three masks, two of which combine one semicircle with the intersection. If you perform closing on such a mask, you get your circle. Then you simply need an algorithm to detect which two masks represent full circles, and you have your result. Here is the sample solution:
import numpy as np
import cv2
# Function for returning solidity of contour - ratio of contour area to its
# convex hull area.
def checkSolidity(cnt):
    area = cv2.contourArea(cnt)
    hull = cv2.convexHull(cnt)
    hull_area = cv2.contourArea(hull)
    solidity = float(area) / hull_area
    return solidity
img_orig = cv2.imread("circles.png")
# Had to dilate the image so the contour was completely connected.
img = cv2.dilate(img_orig, np.ones((3, 3), np.uint8))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Grayscale transformation.
# Otsu threshold.
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1]
# Search for contours.
contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
# Sort contours from smallest to biggest (findContours returns a tuple,
# so build a sorted list instead of sorting in place).
contours = sorted(contours, key=cv2.contourArea)
# Three contours - two semi circles and the intersection of the circles.
cnt1 = contours[0]
cnt2 = contours[1]
cnt3 = contours[2]
# Create three empty images
h, w = img.shape[:2]
mask1 = np.zeros((h, w), np.uint8)
mask2 = np.zeros((h, w), np.uint8)
mask3 = np.zeros((h, w), np.uint8)
# Draw all combinations of two out of three contours on the masks.
# The goal here is to draw one semicircle and the intersection together.
cv2.drawContours(mask1, [cnt1], 0, (255, 255, 255), -1)
cv2.drawContours(mask1, [cnt3], 0, (255, 255, 255), -1)
cv2.drawContours(mask2, [cnt2], 0, (255, 255, 255), -1)
cv2.drawContours(mask2, [cnt3], 0, (255, 255, 255), -1)
cv2.drawContours(mask3, [cnt1], 0, (255, 255, 255), -1)
cv2.drawContours(mask3, [cnt2], 0, (255, 255, 255), -1)
# Perform closing operation on the masks so that you get uniform contours.
kernel_size = 25
kernel = np.ones((kernel_size, kernel_size), np.uint8)
mask1 = cv2.morphologyEx(mask1, cv2.MORPH_CLOSE, kernel)
mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernel)
mask3 = cv2.morphologyEx(mask3, cv2.MORPH_CLOSE, kernel)
masks = [] # List for storing all the masks.
masks.append(mask1)
masks.append(mask2)
masks.append(mask3)
# List where you will append solidity of the found biggest contour of every mask.
solidity = []
for mask in masks:
    cnts = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
    cnt = max(cnts, key=cv2.contourArea)
    s = checkSolidity(cnt)
    solidity.append(s)
# Index of the mask with smallest solidity.
min_solidity = solidity.index(min(solidity))
# The mask with the contour that has the smallest solidity should be the one
# that has the two semicircles drawn instead of one semicircle and the
# intersection. You could build a better check for which mask is the one with
# two semicircles, e.g. the contour with the largest bounding-box width and
# height. I chose solidity because it is enough for this example.
# Selection of colors.
colors = {
    0: (0, 0, 255),
    1: (0, 255, 0),
    2: (255, 0, 0),
}
# Draw the contours of the other two masks - the two that contain one
# semicircle and the intersection.
for i, s in enumerate(solidity):
    if s != solidity[min_solidity]:
        cnts = cv2.findContours(
            masks[i], cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
        cnt = max(cnts, key=cv2.contourArea)
        cv2.drawContours(img_orig, [cnt], 0, colors[i], 1)
# Display result
cv2.imshow("img", img_orig)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Philosophically, you want to find two circles: you search for them, so you expect centers and radii. Graphically the figures are connected; we only see them as separate because we know what a "circle" is and extrapolate the coordinates, which match the parts that overlap.
So what about finding the minimum enclosing circle for each contour (or in some cases fitEllipse and using its parameters): https://docs.opencv.org/master/dd/d49/tutorial_py_contour_features.html
Then draw that circle in a clear image and take the pixel coordinates which are not zero, either via a mask or by computing them while drawing the circle step by step.
Then compare these coordinates with the coordinates in the other contours, with some tolerance, and append the matching coordinates to the current contour.
Finally, draw the extended contour on a clear canvas and apply HoughCircles for a single non-overlapping circle. (Or compute the center and radius, derive the coordinates of the circle, and compare them to the contour with some tolerance.)
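A minimal sketch of the enclosing-circle step, assuming a binary image named thresh that already contains the shapes (the variable name and the follow-up coordinate matching are assumptions left to the reader):
import cv2
import numpy as np

contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
circle_pixels = []
for cnt in contours:
    (cx, cy), r = cv2.minEnclosingCircle(cnt)
    canvas = np.zeros(thresh.shape, np.uint8)  # clear image per contour
    cv2.circle(canvas, (int(round(cx)), int(round(cy))), int(round(r)), 255, 1)
    # Non-zero pixel coordinates of the drawn circle, ready to be compared
    # against the points of the other contours.
    circle_pixels.append(np.column_stack(np.nonzero(canvas)))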
For reference, here I will post the solution I came up with based on some ideas here and a few more. This solution was 99.9% effective at recovering ellipses from images, often including many overlapping or contained ellipses and other image noise such as lines, text and so on.
The code is too lengthy and distributed to post here, but the pseudocode is as follows.
Segment the image
Run cv2 findContours with RETR_EXTERNAL to get separate regions in the image
For each region, fill in the interior, apply a mask, and extract the region to be processed independently of other regions
Remaining steps are executed for each region independently
Run cv2 findContours with RETR_LIST to get all internal and external contours
For each contour found, apply polygon smoothing to reduce the effect of pixellation
For each smoothed contour, identify the continuous segments within that contour which have the same curvature sign, i.e. segments which are entirely curving right or entirely curving left (just compute angles and sign changes)
Within each segment, fit an ellipse model with least squares (scikit-image's EllipseModel; a minimal sketch follows after this list)
Run the Lee algorithm on the original image to compute, for each pixel, its minimum distance to a white pixel
For each model, perform a greedy local neighbourhood search to improve the fit against the original image, the fitness being the fitted ellipse's maximum distance to a white pixel, taken from the output of the Lee algorithm
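A hedged sketch of that ellipse-fitting step, assuming the points of one constant-curvature-sign segment arrive as a hypothetical (N, 2) array of (x, y) coordinates; EllipseModel lives in skimage.measure:
import numpy as np
from skimage.measure import EllipseModel

def fit_segment(points):
    # points: hypothetical (N, 2) array of (x, y) pixel coordinates
    # belonging to one constant-curvature-sign segment.
    model = EllipseModel()
    if model.estimate(points.astype(float)):
        return model.params  # (xc, yc, a, b, theta)
    return None  # fit failed, e.g. a degenerate segment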
Not simple or elegant, but it is highly accurate for the content I am dealing with, confirmed by manual review of a large number of images.
The task that I'm trying to accomplish is isolating certain objects in an image by finding contours in the mask of the image, then taking each contour (based on area) and isolating it, and then using this contour to crop the same region in the original image, in order to get the pixel values of the region.
e.g.:
The code I wrote in order to get just one contour and then isolate it with the original pixel values:
import cv2
import matplotlib.pyplot as plt
import numpy as np
image = cv2.imread("./xxxx/xx.png")
mask = cv2.imread("./xxxx/xxx.png")
# making them the same size (function I wrote)
image, mask = resize_two_images(image,mask)
#grayscalling the mask (using cv2.cvtCOLOR)
mask = to_gray(mask)
# a function I wrote to display images using plt
display(image,"image: original image")
display(mask,"mask: mask of the image")
th, mask = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(
mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
for i in range(len(contours)):
    hier = hierarchy[0][i][3]
    cnt = contours[i]
    cntArea = cv2.contourArea(cnt)
    if 1000 < cntArea < 2000:
        break
# breaking because I'm just keeping the first contour that fills the condition
# Creating two zero numpy array
img1 = np.zeros(image.shape, dtype=np.uint8)
img2 = img1.copy()
# drawing the contour, which will basically give the edges of the object
cv2.drawContours(img1, [cnt], -1, (255,255,255), 2)
# drawing inside the edges
cv2.fillPoly(img2, [cnt], (255,255,255))
# adding the filled poly to the drawn contour gives a bigger object that
# contains both the edges and the inside of the object
img1 = img1 + img2
display(img1,"img1: final")
res = np.bitwise_and(img1,image)
display(res,"res : the ROI with original pixel values")
#cropping the ROI (the object we want)
x,y,w,h = cv2.boundingRect(cnt)
# the crop is expanded by one pixel on each side so border pixels are not lost
res1 = res[y-1:y+h+1,x-1:x+w+1]
display(res1,"res1: cropped ROI")
The problem is that I found a way to do it for just one contour, but is there another way where I can do it more efficiently, given that per image there could be hundreds of contours?
It's not clear whether you want just one image with all selected contours as the output, or one individual image per selected contour.
You can get one image with all selected contours in an efficient manner.
First select all the contours you want to work with, then plot all the contours, filling them with white so you can use the result as a mask, and then mask the original image:
selected_contours = [c for c in contours if cv2.contourArea(c) >= 2000]
# the last parameter, negative line thickness, fills the contour
mask = cv2.drawContours(img1, selected_contours, -1, (255,255,255), -1)
res = np.bitwise_and(mask,image)
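If instead you want one individual image per selected contour, here is a minimal sketch reusing image and selected_contours from above (the per-contour masking strategy is an assumption about your desired output):
rois = []
for c in selected_contours:
    x, y, w, h = cv2.boundingRect(c)
    single_mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.drawContours(single_mask, [c], -1, 255, -1)  # filled contour as the mask
    roi = cv2.bitwise_and(image, image, mask=single_mask)[y:y+h, x:x+w]
    rois.append(roi)
Cropping each result to its bounding rectangle keeps the per-contour images small, which helps when there are hundreds of contours.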
Given a dental form as input, I need to find all the checkboxes present in the form using image processing. I have posted my current approach as an answer below. Is there any better approach to find the checkboxes in low-quality documents as well?
sample input:
This is one approach with which we can solve the issue:
import cv2
import numpy as np
image=cv2.imread('path/to/image.jpg')
### binarising image
gray_scale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
th1, img_bin = cv2.threshold(gray_scale, 150, 255, cv2.THRESH_BINARY)
Defining vertical and horizontal kernels
lineWidth = 7
lineMinWidth = 55
kernel1 = np.ones((lineWidth, lineWidth), np.uint8)
kernel1h = np.ones((1, lineWidth), np.uint8)
kernel1v = np.ones((lineWidth, 1), np.uint8)
kernel6 = np.ones((lineMinWidth, lineMinWidth), np.uint8)
kernel6h = np.ones((1, lineMinWidth), np.uint8)
kernel6v = np.ones((lineMinWidth, 1), np.uint8)
Detect horizontal lines
img_bin_h = cv2.morphologyEx(~img_bin, cv2.MORPH_CLOSE, kernel1h)  # bridge small gaps in horizontal lines
img_bin_h = cv2.morphologyEx(img_bin_h, cv2.MORPH_OPEN, kernel6h)  # keep only horizontal lines by eroding everything else in the horizontal direction
Detect vertical lines
img_bin_v = cv2.morphologyEx(~img_bin, cv2.MORPH_CLOSE, kernel1v)  # bridge small gaps in vertical lines
img_bin_v = cv2.morphologyEx(img_bin_v, cv2.MORPH_OPEN, kernel6v)  # keep only vertical lines by eroding everything else in the vertical direction
Merge vertical and horizontal lines to get blocks, adding a layer of dilation to remove small gaps.
### function to snap the image to clean binary values
def fix(img):
    img[img > 127] = 255
    img[img < 127] = 0
    return img
img_bin_final = fix(fix(img_bin_h)|fix(img_bin_v))
finalKernel = np.ones((5,5), np.uint8)
img_bin_final=cv2.dilate(img_bin_final,finalKernel,iterations=1)
Apply Connected component analysis on the binary image to get the blocks required.
ret, labels, stats, centroids = cv2.connectedComponentsWithStats(~img_bin_final, connectivity=8, ltype=cv2.CV_32S)
### skipping the first two stats, which belong to the background
for x, y, w, h, area in stats[2:]:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
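Checkboxes are roughly square, so an optional refinement is to filter the components by size and aspect ratio before drawing; the bounds below are assumptions that you would tune per form:
for x, y, w, h, area in stats[2:]:
    aspect = w / float(h)
    # keep only components that are plausibly checkbox-sized and near-square
    if 10 < w < 60 and 10 < h < 60 and 0.7 < aspect < 1.3:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)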
You can also use contours for this problem.
import cv2
import numpy as np

# Reading the image in grayscale and thresholding it
Image = cv2.imread("findBox.jpg", 0)
ret, Thresh = cv2.threshold(Image, 100, 255, cv2.THRESH_BINARY)
Now perform dilation and erosion twice to join the dotted lines present inside the boxes.
kernel = np.ones((3, 3), dtype=np.uint8)
Thresh = cv2.dilate(Thresh, kernel, iterations=2)
Thresh = cv2.erode(Thresh, kernel, iterations=2)
Find contours in the image with the cv2.RETR_TREE flag to get all contours with parent-child relations, which are recorded in the returned hierarchy.
Contours, Hierarchy = cv2.findContours(Thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
Now all the boxes, along with all the letters in the image, are detected. We have to eliminate the letters, the very small contours (due to noise), and those boxes which contain smaller boxes inside them.
For this, I am running a for loop over all the contours detected, saving 3 values for each contour in 3 different lists.
1st value: Area of contour(Number of pixels a contour encloses)
2nd value: Contour's bounding rectangle info.
3rd value: Ratio of area of contour to the area of its bounding rectangle.
Areas = []
Rects = []
Ratios = []
for Contour in Contours:
    # Getting the bounding rectangle
    Rect = cv2.boundingRect(Contour)
    # Drawing the contour on a new image and counting white pixels for the contour area
    C_Image = np.zeros(Thresh.shape, dtype=np.uint8)
    cv2.drawContours(C_Image, [Contour], -1, 255, -1)
    ContourArea = np.sum(C_Image == 255)
    # Area of the bounding rectangle
    Rect_Area = Rect[2] * Rect[3]
    # Calculating the ratio as explained above
    Ratio = ContourArea / Rect_Area
    # Storing the data
    Areas.append(ContourArea)
    Rects.append(Rect)
    Ratios.append(Ratio)
Filtering out undesired contours:
Get the indices of those contours which have an area greater than 3600 (the threshold value for this image) and which have Ratio >= 0.99.
The ratio defines the extent of overlap of a contour with its bounding rectangle. As the desired contours are rectangular in shape, this ratio is expected to be close to "1.0" (0.99 leaves a small margin for noise).
BoxesIndices = [i for i in range(len(Contours)) if Ratios[i] >= 0.99 and Areas[i] > 3600]
The final contours are those among the contours at indices "BoxesIndices" which either have no child contour (this extracts the innermost contours) or whose child contour is not itself one of the contours at indices "BoxesIndices".
FinalBoxes = [Rects[i] for i in BoxesIndices if Hierarchy[0][i][2] == -1 or BoxesIndices.count(Hierarchy[0][i][2]) == 0]
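To visualize the result, a short sketch (assuming the original image is re-read in color from "findBox.jpg") draws each surviving bounding rectangle:
Output = cv2.imread("findBox.jpg")
for x, y, w, h in FinalBoxes:
    cv2.rectangle(Output, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("boxes.jpg", Output)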
Final output image
I am looking for a procedure to accurately detect the corners of a distorted rectangle with OpenCV in Python.
I've tried the solutions from different suggestions found by googling, but because of a sinusoidal superposition on the straight edges (see the thresholded image) I can't reliably detect the corners. I tried findContours and HoughLines so far, without good results.
Unfortunately I don't understand the C code from Xu Bin in "how to find blur corner position with opencv?".
This is my initial image:
After resizing and thresholding I apply canny edge detection to get following image:
contours, hierarchy = cv2.findContours(g_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)  # added so the snippet runs; picks the main outline
box = cv2.minAreaRect(contour)
box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
box = np.array(box, dtype="float")
box = perspective.order_points(box)
I only get the following result with some extra drawing:
I thought line fitting would be a good way to solve the problem, but unfortunately I couldn't get HoughLines working, and after looking at OpenCV Python - How to implement RANSAC to detect straight lines?, RANSAC also seems difficult to apply to my problem.
Any help is highly appreciated.
Though this is old, it can at least help others who have the same issue. In addition to nathancy's answer, this should allow you to find the very blurry corners with much more accuracy:
Pseudo Code
Resize if desired, not necessary though
Convert to grayscale
Apply blurring or bilateral filtering
Apply Otsu's threshold to get a binary image
Find contour that makes up rectangle
Approximate contour as a rectangle
Points of approximation are your rectangle's corners!
Code to do so
Resize:
The function takes the new width and height, so I am just making the image 5 times bigger than it currently is. Note that cv2.resize expects (width, height), while img.shape is (height, width), so the indices have to be swapped:
img = cv2.resize(img, (img.shape[1] * 5, img.shape[0] * 5))
Grayscale conversion:
Just converting to grayscale from OpenCV's default BGR colorspace.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Blurring/bilateral filtering:
You could use any number of techniques to soften up this image further, if needed. Maybe a Gaussian blur, or, as nathancy suggested, a bilateral filter, but no need for both.
# choose one, or a different function
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
blurred = cv2.bilateralFilter(gray, 9, 75, 75)
Otsu's threshold
Using the threshold function, pass 0 and 255 as the arguments for the threshold value and the max value. We pass 0 because we are using the thresholding technique cv2.THRESH_OTSU, which determines the value for us. The computed threshold is returned along with the thresholded image, but I just assign it to _ because we don't need it.
_, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_OTSU)
Finding contour
There is a lot more to contours than I will explain here; feel free to check out the docs.
The important thing for us to know is that it returns a list of contours along with a hierarchy. We don't need the hierarchy, so it is set to _, and we only need the single contour it finds, so we set contour = contours[0].
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0]
Approximate contour as a rectangle
First we calculate the perimeter of the contour. Then we approximate it with the cv2.approxPolyDP function, telling it that the maximum distance between the original curve and its approximation is 0.05 * perimeter. You may need to play around with the factor for a better approximation.
The approx is a numpy array with shape (num_points, 1, 2), which in this case is (4, 1, 2) because it found the 4 corners of the rectangle.
Feel free to read up more in the docs.
perimeter = cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, 0.05 * perimeter, True)
Find your skewed rectangle!
You're already done! Here's just how you could draw those points: first we draw the circles by looping over the points and grabbing the x and y coordinates, then we draw the skewed rectangle itself.
# drawing points
for point in approx:
    x, y = point[0]
    cv2.circle(img, (x, y), 3, (0, 255, 0), -1)
# drawing skewed rectangle
cv2.drawContours(img, [approx], -1, (0, 255, 0))
Finished Code
import cv2

img = cv2.imread("rect.png")
img = cv2.resize(img, (img.shape[1] * 5, img.shape[0] * 5))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.bilateralFilter(gray, 9, 75, 75)
_, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0]
perimeter = cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, 0.05 * perimeter, True)
for point in approx:
    x, y = point[0]
    cv2.circle(img, (x, y), 3, (0, 255, 0), -1)
cv2.drawContours(img, [approx], -1, (0, 255, 0))
To detect corners, you can use cv2.goodFeaturesToTrack(). The function takes four parameters
corners = cv2.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance)
image - Input 8-bit or floating-point 32-bit grayscale single-channel image
maxCorners - Maximum number of corners to return
qualityLevel - Minimum accepted quality level of corners between 0-1. All corners below quality level are rejected
minDistance - Minimum possible Euclidean distance between corners
Now that we know how to find corners, we have to find the rotated rectangle and apply the function. Here's an approach:
We first enlarge the image, convert to grayscale, apply a bilateral filter, then Otsu's threshold to get a binary image
Next we find the distorted rectangle by finding contours with cv2.findContours() then obtain the rotated bounding box highlighted in green. We draw this bounding box onto a mask
Now that we have the mask, we simply use cv2.goodFeaturesToTrack() to find the corners on the mask
Here's the result on the original input image and the (x, y) coordinates for each corner
Corner points
(377.0, 375.0)
(81.0, 344.0)
(400.0, 158.0)
(104.0, 127.0)
Code
import cv2
import numpy as np
import imutils
# Resize image, blur, and Otsu's threshold
image = cv2.imread('1.png')
resize = imutils.resize(image, width=500)
mask = np.zeros(resize.shape, dtype=np.uint8)
gray = cv2.cvtColor(resize, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray,9,75,75)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find distorted rectangle contour and draw onto a mask
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
rect = cv2.minAreaRect(cnts[0])
box = cv2.boxPoints(rect)
box = np.intp(box)  # np.int0 was an alias of np.intp and was removed in NumPy 2.0
cv2.drawContours(resize,[box],0,(36,255,12),2)
cv2.fillPoly(mask, [box], (255,255,255))
# Find corners on the mask
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(mask, maxCorners=4, qualityLevel=0.5, minDistance=150)
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(resize, (int(x), int(y)), 8, (155, 20, 255), -1)  # circle needs integer coordinates
    print("({}, {})".format(x, y))
cv2.imshow('resize', resize)
cv2.imshow('thresh', thresh)
cv2.imshow('mask', mask)
cv2.waitKey()
I want to fit an ellipse to a partly damaged object in a picture. (The pictures here are only simplified examples for illustration!)
Image with damaged elliptical object
By doing this
def sort(n):
    return n.size
Image = cv2.imread('acA2500/1.jpg', cv2.IMREAD_GRAYSCALE)
#otsu binarization
_, binary = cv2.threshold(Image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
#invert binary image for opencv findContours
inverse_binary = cv2.bitwise_not(binary)
#find contours
contours, _ = cv2.findContours(inverse_binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
#sort contours by length
largest_contours = sorted(contours, key=sort, reverse=True)
#fit ellipse
contour = largest_contours[0]
ellipse = cv2.fitEllipseDirect(contour)
I get this result, which is not very satisfying.
Result of cv2.findContours and cv2.fitEllipse
So, I've built this loop to get rid of the contour points which are not on the ellipse.
contour = largest_contours[0]
newcontour = np.zeros([1, 1, 2])
newcontour = newcontour.astype(int)
for coordinate in contour:
    if coordinate[0][0] < 600:
        newcontour = np.insert(newcontour, 0, coordinate, 0)
newellipse = cv2.fitEllipse(newcontour)
And get this result, which is good.
Result after trimming the contour points
The problem is that I have to do a lot of these fits in a short time period, and so far this does not reach the desired speed.
Is there any better/faster/nicer way to trim the contour points? Since I don't have a lot of coding experience, I would be happy to find some help here :-)
Edit:
I edited the example pictures so that it is now clear that unfortunately a cv2.minEnclosingCircle approach does not work.
The pictures also demonstrate why I sort the contours. In my real code I fit an ellipse to the three longest contours and then see which one I want to use through a different routine.
If I don't trim the contour and pick the contour for cv2.fitEllipse by hand, the code needs around 0.5 s. With contour trimming and three cv2.fitEllipse calls it takes around 2 s. It should take at most 1 s.
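One note on the trimming loop above: np.insert copies the whole array on every iteration, which is likely the main bottleneck. A vectorized boolean mask (assuming the same hard-coded x < 600 cut-off) does the same trim in one step:
contour = largest_contours[0]
pts = contour.reshape(-1, 2)          # (N, 2) array of (x, y) points
trimmed = pts[pts[:, 0] < 600]        # keep only points left of x = 600
newcontour = trimmed.reshape(-1, 1, 2).astype(np.int32)
newellipse = cv2.fitEllipse(newcontour)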
If the object is a circle, then you can use cv2.minEnclosingCircle on the contour to capture it.
#!/usr/bin/python3
# 2019/02/13 08:50 (CST)
# https://stackoverflow.com/a/54661012/3547485
import cv2
img = cv2.imread('test.jpg')
## Convert to grayscale and threshold it
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th, threshed = cv2.threshold(gray, 100, 255, cv2.THRESH_OTSU|cv2.THRESH_BINARY_INV)
## Find the outer contours
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
cv2.drawContours(img, cnts, -1, (255, 0, 0), 2, cv2.LINE_AA)
## Use minEnclosingCircle
(cx, cy), r = cv2.minEnclosingCircle(cnts[0])
cv2.circle(img, (int(cx), int(cy)), int(r), (0, 255, 0), 1, cv2.LINE_AA)
## That's it
cv2.imwrite("dst.jpg", img)
Here is my result.
Problem Statement & Background Info:
EDIT: Constraints: The red coloring on the flange changes over time, so I'm not trying to use color recognition to identify my object at this moment, unless it can be made robust. Additionally, external illumination may be a factor, since this will be in an outdoor area in the future.
I have an RGB-depth camera, and with it I'm able to capture this scene, where each pixel (x, y) has a depth value.
Applying a gradient magnitude filter to the depth map associated with my image, I'm able to get the following edge map.
Pixels are given the value 0 if their gradient magnitude was non-zero; 255 (black) marks magnitudes of 0 (homogeneous depth, i.e. a flat surface).
From this edge map I dilated the edges so that picking up the contours would be easier.
Then I found the contours in the image and tried to plot just the 5 biggest contours.
PROBLEM
Is there a way to reliably find the contours associated with my objects (the red box and metal fixture) and then find their geometric centroid? I keep running into the issue that I can find contours in the image, but I have no way of selectively screening for the contours that are my objects and not noise.
I have provided the image I used for the image processing, but for some reason, OpenCV saves the image as a black image, and when you read it in using...
gray = cv2.imread('GRAYTEST.jpeg', cv2.IMREAD_GRAYSCALE)
it appears blue-ish and not a binary white/black image as I show. So sorry about that.
Here is the image:
Sorry, I don't know why it saved just as a black image, but if you read it in OpenCV it should show up with the same lines as "magnitude of gradients" plot.
MY CODE
gray = cv2.imread('GRAYTEST.jpeg', cv2.IMREAD_GRAYSCALE)
plt.imshow(gray)
plt.title('gray start image')
plt.show()
blurred = cv2.bilateralFilter(gray, 8, 25, 25)  # blur image while preserving edges
kernel = np.ones((3, 3), np.uint8)  # define a kernel (block) to apply filters to
dilated = cv2.dilate(blurred, kernel, iterations=1)
plt.title('dilated')
plt.imshow(dilated)
plt.show()
# Just performs a Canny edge detection on an image
edges_empty = self.Commons.CannyE_Auto(dilated)  # Canny edge image for some sigma
# makes an empty image using the same dimensions as the given image
empty2 = self.Commons.make_empty(gray)
_, contours, _ = cv2.findContours(edges_empty, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(contours, key=cv2.contourArea, reverse=True)[:5] # get largest five contour area
cv2.drawContours(empty2, cnts, -1, (255, 0, 0), thickness=1)
plt.title('contours')
plt.imshow(empty2)
plt.show()
Instead of performing the blurring, dilation, and Canny edge detection on my already thresholded image, I just performed the contour detection on my original image.
Then I was able to find a decent contour for the outline of my image by modifying the findContours command.
_, contours, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
By replacing cv2.RETR_TREE with cv2.RETR_EXTERNAL I was able to get only the contours associated with an object's outline, rather than contours within the object. Switching to cv2.CHAIN_APPROX_NONE didn't show any noticeable improvements, but it may provide better contours for more complex geometries.
cnts = sorted(contours, key=cv2.contourArea, reverse=True)[:5]  # largest five, as before
for c in cnts:
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    # draw the contour and center of the shape on the image
    cv2.drawContours(empty2, [c], -1, (255, 0, 0), thickness=1)
    perimeter = np.around(cv2.arcLength(c, True), decimals=3)
    area = np.around(cv2.contourArea(c), decimals=3)
    cv2.circle(empty2, (cX, cY), 7, (255, 255, 255), -1)
    cv2.putText(empty2, "center", (cX - 20, cY - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
    cv2.putText(empty2, "P:{}".format(perimeter), (cX - 50, cY - 50),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
    cv2.putText(empty2, "A:{}".format(area), (cX - 100, cY - 100),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
Using the above code I was able to label the centroid of each contour, as well as information about each contour's perimeter and area.
However, I was unable to devise a test that would select which contour was my desired contour. I have an idea: capture my object in a more ideal setting and find its centroid, perimeter, and associated area. This way, when I find a new contour I can compare how close it is to my known values.
I think this method could work to remove contours that are too large or too small.
If anyone knows of a better solution that would be fantastic!
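A minimal sketch of that screening idea, assuming reference values measured from an ideal capture (the numbers below are placeholders, not measurements):
KNOWN_AREA = 5000.0        # placeholder: area measured from an ideal capture
KNOWN_PERIMETER = 400.0    # placeholder: perimeter from the same capture
TOLERANCE = 0.25           # accept deviations of up to 25 %

def matches_reference(c):
    # compare a candidate contour against the reference object
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    return (abs(area - KNOWN_AREA) <= TOLERANCE * KNOWN_AREA and
            abs(perimeter - KNOWN_PERIMETER) <= TOLERANCE * KNOWN_PERIMETER)

candidates = [c for c in cnts if matches_reference(c)]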