I have an irregular shape like the one below:
I need to get the centre of that white area. I tried using contours in OpenCV like below:
import cv2 as cv

img = cv.imread('mypic', 0)
ret, thresh = cv.threshold(img, 127, 255, cv.THRESH_BINARY_INV)
# findContours returns (contours, hierarchy) in OpenCV 4:
cnts, _ = cv.findContours(thresh.copy(), cv.RETR_EXTERNAL,
                          cv.CHAIN_APPROX_SIMPLE)
cnt = cnts[0]
x, y, w, h = cv.boundingRect(cnt)
res_img = cv.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv.imwrite('output.png', res_img)
But those contours don't give me very good results, as you can see from the original image and the two small black points below the shape. Can someone point me to a good way of getting the centre of an irregular shape like the one above?
As I suggested, perform an erosion followed by a dilation (an opening operation) on the binary image, then compute central moments and use this information to calculate the centroid. These are the steps:
Get a binary image from the input via Otsu's thresholding
Apply a morphological opening to filter out small noise blobs
Compute central moments using cv2.moments
Compute the blob's centroid from the previous information
Let's see the code:
import cv2
import numpy as np
# Set image path
path = "C:/opencvImages/"
fileName = "pn43H.png"
# Read Input image
inputImage = cv2.imread(path+fileName)
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu + bias adjustment:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
This is the binary image you get. Notice the small noise:
The opening operation will get rid of the small blobs. A rectangular structuring element will suffice; let's use 3 iterations:
# Apply an erosion + dilation to get rid of small noise:
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 3
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform the opening:
openingImage = cv2.morphologyEx(binaryImage, cv2.MORPH_OPEN, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the filtered image:
Now, compute the central moments and then the blob's centroid:
# Calculate the moments
imageMoments = cv2.moments(openingImage)
# Compute centroid
cx = int(imageMoments['m10']/imageMoments['m00'])
cy = int(imageMoments['m01']/imageMoments['m00'])
# Print the point:
print("Cx: "+str(cx))
print("Cy: "+str(cy))
Additionally, let's draw this point onto the binary image to check out the results:
# Draw the centroid onto a BGR copy of the binary image:
bgrImage = cv2.cvtColor(binaryImage, cv2.COLOR_GRAY2BGR)
# A zero-length line with thickness 10 renders as a filled dot:
bgrImage = cv2.line(bgrImage, (cx, cy), (cx, cy), (0, 255, 0), 10)
This is the result:
One can think of the centroid calculated from the image moments as the "mass" center of the object with respect to pixel intensity. Depending on the actual shape of the object, it may not even lie inside the object.
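If that matters for your use case, you can check whether the centroid actually falls inside the blob. A minimal sketch, assuming cnt is the blob's contour (e.g. from cv2.findContours on the filtered image) and (cx, cy) is the centroid computed above:
# pointPolygonTest returns a positive value inside the contour,
# 0 on the edge and a negative value outside:
inside = cv2.pointPolygonTest(cnt, (float(cx), float(cy)), False)
print("Centroid inside object:", inside >= 0)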
An alternative would be calculating the center of the bounding circle:
import cv2
import numpy as np

# `thresh` is the binary image obtained earlier:
thresh = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, np.ones((3, 3), np.uint8))
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Discard tiny noise contours:
contours = [c for c in contours if cv2.contourArea(c) > 100]
(x, y), r = cv2.minEnclosingCircle(contours[0])
output = thresh.copy()
cv2.circle(output, (int(x), int(y)), 3, (0, 0, 0), -1)
cv2.putText(output, f"{(int(x), int(y))}", (int(x - 50), int(y - 10)), cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 0), 1)
cv2.circle(output, (int(x), int(y)), int(r), (255, 0, 0), 2)
The output of that code looks like this:
You may want to try the cv2.connectedComponentsWithStats function. It returns centroids, areas and bounding-box parameters. Blurring and morphology (dilate/erode) also help a lot with noise, as noted above. If you're generous enough with the erosion, you'll get almost no stray pixels after processing.
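A minimal sketch of that approach, with blob.png as a placeholder file name and assuming a light blob on a dark background:
import cv2
import numpy as np

binaryImage = cv2.imread("blob.png", cv2.IMREAD_GRAYSCALE)
_, binaryImage = cv2.threshold(binaryImage, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# connectivity=8 treats diagonal neighbours as connected:
numLabels, labels, stats, centroids = cv2.connectedComponentsWithStats(binaryImage, connectivity=8)

# Row 0 describes the background; keep the largest foreground component:
areas = stats[1:, cv2.CC_STAT_AREA]
largest = 1 + int(np.argmax(areas))
cx, cy = centroids[largest]
print("Centroid: ({:.1f}, {:.1f})".format(cx, cy))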
I want to detect the center of a cross. But since the two rectangles are connected, I don't know how to find it. I have these images for example:
Cross 1
Cross 2
I would like to find the "red dot".
The idea is that the point where a vertical and horizontal line touch is the intersection. A potential approach is:
Obtain binary image. Load image, convert to grayscale, Gaussian blur, then Otsu's threshold.
Obtain horizontal and vertical line masks. Create horizontal and vertical structuring elements with cv2.getStructuringElement then perform cv2.morphologyEx to isolate the lines.
Find joints. We cv2.bitwise_and the two masks together to get the joints.
Find centroid on joint mask. We find contours then calculate the centroid to get the intersection point.
Input image -> Horizontal mask -> Vertical mask -> Joints
Detected intersection in green
Results for the other image
Input image -> Horizontal mask -> Vertical mask -> Joints
Detected intersection in green
Code
import cv2
import numpy as np
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread('4.PNG')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (150,5))
horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
# Find vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,150))
vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
# Find joints
joints = cv2.bitwise_and(horizontal, vertical)
# Find centroid of the joints
cnts = cv2.findContours(joints, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    # Find centroid and draw center point
    M = cv2.moments(c)
    cx = int(M['m10']/M['m00'])
    cy = int(M['m01']/M['m00'])
    cv2.circle(image, (cx, cy), 15, (36,255,12), -1)
cv2.imshow('horizontal', horizontal)
cv2.imshow('vertical', vertical)
cv2.imshow('joints', joints)
cv2.imshow('image', image)
cv2.waitKey()
Here's a possible solution. It is based on my answer here: How can i get the inner contour points without redundancy in OpenCV - Python. The main idea is to convolve the image with a special kernel that identifies intersections. After this operation, you create a mask with possible intersection points, apply some morphology and get the coordinates.
You did not provide your input image, I'm testing this algorithm with the "cross" image you posted. This is the code:
# Imports:
import cv2
import numpy as np
# Image path
path = "D://opencvImages//"
fileName = "cross.png" # Your "cross" image
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Prepare a deep copy of the input for results:
inputImageCopy = inputImage.copy()
# Grayscale conversion:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
Now, the convolution must receive an image where the shapes have been reduced to a width of 1 pixel. This can be done by computing the skeleton of the image. The skeleton is a version of the binary image where lines have been normalized to a width of 1 pixel. We can then convolve the image with a 3 x 3 kernel and look for specific pixel patterns.
Before computing the skeleton, we will add a border around the image. This prevents some artifacts that the skeleton yields if a shape extends all the way to the borders of the image:
# Add borders to prevent skeleton artifacts:
borderThickness = 1
borderColor = (0, 0, 0)
grayscaleImage = cv2.copyMakeBorder(grayscaleImage, borderThickness, borderThickness, borderThickness, borderThickness,
cv2.BORDER_CONSTANT, None, borderColor)
# Compute the skeleton:
skeleton = cv2.ximgproc.thinning(grayscaleImage, None, 1)
This is the skeleton, free of artifacts:
Now, let's find the intersections. The approach is based on Mark Setchell's info on this post. The post mainly shows the method for finding end-points of a shape, but I extended it to also identify line intersections. The main idea is that the convolution yields a very specific value where patterns of black and white pixels are found in the input image. Refer to the post for the theory behind this idea, but here, we are looking for a value of 130:
# Threshold the image so that white pixels get a value of 10 and
# black pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)
# Set the intersections kernel:
h = np.array([[1, 1, 1],
[1, 10, 1],
[1, 1, 1]])
# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)
# Prepare the final mask of points:
(height, width) = binaryImage.shape
pointsMask = np.zeros((height, width, 1), np.uint8)
# Perform convolution and create points mask:
thresh = 130
# Locate the threshold in the filtered image:
pointsMask = np.where(imgFiltered == thresh, 255, 0)
# Convert and shape the image to a uint8 height x width x channels
# numpy array:
pointsMask = pointsMask.astype(np.uint8)
pointsMask = pointsMask.reshape(height, width, 1)
This is the pointsMask image:
If we apply some morphology, we can join the individual pixels into blobs. Here, a dilation will do:
# Set kernel (structuring element) size:
kernelSize = 7
# Set operation iterations:
opIterations = 3
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform Dilate:
pointsMask = cv2.morphologyEx(pointsMask, cv2.MORPH_DILATE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the result of applying the dilation:
Now, we can find the coordinates of the white pixels and compute their mean values (or centroids):
# Get the coordinates of the end-points:
(Y, X) = np.where(pointsMask == 255)
# Get the centroid:
y = int(np.mean(Y))
x = int(np.mean(X))
Let's draw a circle using these coordinates on the original image:
# Draw the intersection point:
# Set circle color:
color = (0, 0, 255)
# Draw Circle
cv2.circle(inputImageCopy, (x, y), 3, color, -1)
# Show Image
cv2.imshow("Intersections", inputImageCopy)
cv2.waitKey(0)
This is the final result:
I have two intersecting ellipses in a black and white image. I am trying to use OpenCV findContours to identify the separate shapes as separate contours using this code (and attached image below).
import numpy as np
import matplotlib.pyplot as plt
import cv2
import skimage.morphology
img_3d = cv2.imread("C:/temp/test_annotation_overlap.png")
img_grey = cv2.cvtColor(img_3d, cv2.COLOR_BGR2GRAY)
contours = cv2.findContours(img_grey, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
fig, ax = plt.subplots(len(contours)+1,1, figsize=(5, 20))
thicker_img_grey = skimage.morphology.dilation(img_grey, skimage.morphology.disk(radius=3))
ax[0].set_title("ORIGINAL IMAGE")
ax[0].imshow(thicker_img_grey, cmap="Greys")
for i, contour in enumerate(contours):
    new_img = np.zeros_like(img_grey)
    cv2.drawContours(new_img, [contour], -1, (255, 255, 255), 10)
    ax[i+1].set_title(f"Contour {i}")
    ax[i+1].imshow(new_img, cmap="Greys")
plt.show()
However, four contours are found, none of which is the original contour:
How can I configure OpenCV.findContours to identify the two separate shapes? (Note I have already played around with Hough circles and found it unreliable for the images I am analysing)
Maybe I overkilled with this approach, but it could be used as a working solution. You could find all the contours in the image: you will get the two contours that are like a "semicircle", the contour of the intersection, and the contour that is the outer shape of the two adjoined circles. The smallest three contours should be the two semicircles and the intersection.
If you draw combinations of two out of these three contours, you will get three masks, two of which will combine one semicircle with the intersection. If you perform a closing on such a mask, you will get your circle. Then you simply need an algorithm to detect which two masks represent full circles and you will get your result. Here is the sample solution:
import numpy as np
import cv2
# Function for returning solidity of contour - ratio of contour area to its
# convex hull area.
def checkSolidity(cnt):
    area = cv2.contourArea(cnt)
    hull = cv2.convexHull(cnt)
    hull_area = cv2.contourArea(hull)
    solidity = float(area)/hull_area
    return solidity
img_orig = cv2.imread("circles.png")
# Had to dilate the image so the contour was completely connected.
img = cv2.dilate(img_orig, np.ones((3, 3), np.uint8))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Grayscale transformation.
# Otsu threshold.
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)[1]
# Search for contours.
# Search for contours (findContours takes image, mode, method in that order).
contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
# Sorting contours from smallest to biggest (findContours returns a tuple,
# so build a sorted list):
contours = sorted(contours, key=cv2.contourArea)
# Three contours - two semi circles and the intersection of the circles.
cnt1 = contours[0]
cnt2 = contours[1]
cnt3 = contours[2]
# Create three empty images
h, w = img.shape[:2]
mask1 = np.zeros((h, w), np.uint8)
mask2 = np.zeros((h, w), np.uint8)
mask3 = np.zeros((h, w), np.uint8)
# Draw all combinations of two out of three contours on the masks.
# The goal here is to draw one semicircle and the intersection together.
cv2.drawContours(mask1, [cnt1], 0, (255, 255, 255), -1)
cv2.drawContours(mask1, [cnt3], 0, (255, 255, 255), -1)
cv2.drawContours(mask2, [cnt2], 0, (255, 255, 255), -1)
cv2.drawContours(mask2, [cnt3], 0, (255, 255, 255), -1)
cv2.drawContours(mask3, [cnt1], 0, (255, 255, 255), -1)
cv2.drawContours(mask3, [cnt2], 0, (255, 255, 255), -1)
# Perform closing operation on the masks so that you get uniform contours.
kernel_size = 25
kernel = np.ones((kernel_size, kernel_size), np.uint8)
mask1 = cv2.morphologyEx(mask1, cv2.MORPH_CLOSE, kernel)
mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernel)
mask3 = cv2.morphologyEx(mask3, cv2.MORPH_CLOSE, kernel)
masks = [] # List for storing all the masks.
masks.append(mask1)
masks.append(mask2)
masks.append(mask3)
# List where you will append solidity of the found biggest contour of every mask.
solidity = []
for mask in masks:
    cnts = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
    cnt = max(cnts, key=lambda c: cv2.contourArea(c))
    s = checkSolidity(cnt)
    solidity.append(s)
# Index of the mask with smallest solidity.
min_solidity = solidity.index(min(solidity))
# The mask with the contour that has the smallest solidity should be the one
# that has two semicircles drawn instead of one semicircle and the intersection.
# You could build a better function to check which mask is the one with
# two semicircles... like maybe the contour with the largest
# height and width of the bounding box etc.
# I chose solidity because it is enough for this example.
# Selection of colors.
colors = {
    0: (0, 0, 255),
    1: (0, 255, 0),
    2: (255, 0, 0),
}
# Draw the contours of the other two masks - the ones that combine a
# semicircle with the intersection.
for i, s in enumerate(solidity):
    if s != solidity[min_solidity]:
        cnts = cv2.findContours(
            masks[i], cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[0]
        cnt = max(cnts, key=lambda c: cv2.contourArea(c))
        cv2.drawContours(img_orig, [cnt], 0, colors[i], 1)
# Display result
cv2.imshow("img", img_orig)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Philosophically, you want to find two circles: you search for them, so you expect centers and radii. Graphically the figures are connected, but we see them as separate because we know what a "circle" is and can extrapolate the coordinates of the parts that overlap.
So what about finding the minimum enclosing circle for each contour (or, in some cases, fitEllipse and using its parameters): https://docs.opencv.org/master/dd/d49/tutorial_py_contour_features.html
Then draw that circle on a clear image and take the pixel coordinates that are not zero, either via a mask or by computing them while drawing the circle step by step.
Then compare these coordinates with the coordinates of the other contours with some precision, and append the matching coordinates to the current contour.
Finally: draw the extended contour on a clear canvas and apply HoughCircles for a single non-overlapping circle. (Or compute the center and radius, then the coordinates of the circle, and compare them to the contour with some precision.)
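To make the first two steps concrete, here is a minimal sketch (overlap.png is a placeholder for the binary image of the two shapes) that fits an enclosing circle to each contour and rasterizes it onto a clean canvas, giving the extrapolated coordinates to compare against the other contours:
import cv2
import numpy as np

thresh = cv2.imread("overlap.png", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(thresh, 127, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

circle_points = []
for cnt in contours:
    (x, y), r = cv2.minEnclosingCircle(cnt)
    # Draw the extrapolated circle on a clear canvas:
    canvas = np.zeros_like(thresh)
    cv2.circle(canvas, (int(x), int(y)), int(r), 255, 1)
    # Non-zero pixel coordinates of that circle, ready to be matched
    # against the points of the other contours:
    ys, xs = np.nonzero(canvas)
    circle_points.append(np.column_stack([xs, ys]))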
For reference, here I will post the solution I came up with, based on some of the ideas here plus a few more. This solution was 99.9% effective at recovering ellipses from images, which often included many overlapping and contained ellipses along with other image noise such as lines, text and so on.
The code is too lengthy and distributed to post here, but the pseudocode is as follows.
Segment the image
Run cv2 findContours with RETR_EXTERNAL to get separate regions in the image
For each region, fill in the interior, apply a mask, and extract the region to be processed independently of the other regions.
Remaining steps are executed for each region independently
Run cv2 findContours with RETR_LIST to get all internal and external contours
For each contour found, apply polygon smoothing to reduce effect of pixellation
For each smoothed contour, identify the continuous segments within that contour which have the same curvature sign i.e. segments which are entirely curving right, or curving left (just compute angles and sign changes)
Within each segment, fit an ellipse model with least squares (scikit-image's skimage.measure.EllipseModel; see the sketch after this list)
Perform the Lee algorithm on the original image to compute for each pixel its minimum distance to a white pixel
For each model, perform a greedy local neighbourhood search to improve the fit against the original, the fit score being the fitted ellipse's maximum distance to a white pixel, taken from the output of the Lee algorithm
Not simple or elegant, but it is highly accurate for the content I am dealing with, as confirmed by manual review of a large number of images.
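As a minimal sketch of the ellipse-fitting step (with a synthetic half-ellipse standing in for a real curvature segment):
import numpy as np
from skimage.measure import EllipseModel

# Synthetic stand-in for a segment of contour points with a single
# curvature sign, as an (N, 2) array of (x, y) coordinates:
t = np.linspace(0, np.pi, 40)
segment = np.column_stack([100 + 50 * np.cos(t), 80 + 30 * np.sin(t)])

model = EllipseModel()
if model.estimate(segment):
    xc, yc, a, b, theta = model.params
    print(f"centre=({xc:.1f}, {yc:.1f}), axes=({a:.1f}, {b:.1f}), angle={theta:.2f} rad")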
I am looking for a procedure to accurately detect the corners of a distorted rectangle with OpenCV in Python.
I've tried different suggestions found by googling, but because of a sinusoidal superposition on the straight edges (see the thresholded image) I probably can't detect the corners. I have tried findContours and HoughLines so far, without good results.
Unfortunately I don't understand the C code from Xu Bin in how to find blur corner position with opencv?
This is my initial image:
After resizing and thresholding I apply canny edge detection to get following image:
contours, hierarchy = cv2.findContours(g_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
box = cv2.minAreaRect(contour)
box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
box = np.array(box, dtype="float")
box = perspective.order_points(box)
I only get the following result with some extra drawing:
I thought line fitting would be a good way to solve the problem, but unfortunately I couldn't get HoughLines working and after looking in OpenCV Python - How to implement RANSAC to detect straight lines? RANSAC seems also difficult to apply for my problem.
Any help is highly appreciated.
Though this is old, it can at least help anyone else who has the same issue. In addition to nathancy's answer, this should allow you to find the very blurry corners with much more accuracy:
Pseudo Code
Resize if desired, not necessary though
Convert to grayscale
Apply blurring or bilateral filtering
Apply Otsu's threshold to get a binary image
Find contour that makes up rectangle
Approximate contour as a rectangle
Points of approximation are your rectangle's corners!
Code to do so
Resize:
The function takes the new width and height; since img.shape is (height, width, channels), note the swapped indices below. I am just making the image 5 times bigger than it currently is.
img = cv2.resize(img, (img.shape[1] * 5, img.shape[0] * 5))
Grayscale conversion:
Just converting to grayscale from OpenCV's default BGR colorspace.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Blurring/bilateral filtering:
You could use any number of techniques to soften up this image further, if needed. Maybe a Gaussian blur, or, as nathancy suggested, a bilateral filter, but no need for both.
# choose one, or a different function
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
blurred = cv2.bilateralFilter(gray, 9, 75, 75)
Otsu's threshold
Using the threshold function, pass 0 and 255 as the arguments for the threshold value and the max value. We pass in 0 because we are using the thresholding technique cv2.THRESH_OTSU, which determines the value for us. The computed threshold is returned along with the thresholded image, but I just set it to _ because we don't need it.
_, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_OTSU)
Finding contour
There is a lot more to contours than I will explain here; feel free to check out the docs.
The important thing for us to know is that it returns a list of contours along with a hierarchy. We don't need the hierarchy, so it is set to _, and since we only need the single contour it finds, we set contour = contours[0].
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0]
Approximate contour as a rectangle
First we calculate the perimeter of the contour. Then we approximate it with the cv2.approxPolyDP function, telling it that the maximum distance between the original curve and its approximation is 0.05 * perimeter. You may need to play around with the decimal for a better approximation.
approx is a numpy array with shape (num_points, 1, 2), which in this case is (4, 1, 2) because it found the 4 corners of the rectangle.
Feel free to read up more in the docs.
perimeter = cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, 0.05 * perimeter, True)
Find your skewed rectangle!
You're already done! Here's how you could draw those points: first we draw the circles by looping over the points and grabbing their x and y coordinates, and then we draw the rectangle itself.
# drawing points
for point in approx:
    x, y = point[0]
    cv2.circle(img, (x, y), 3, (0, 255, 0), -1)
# drawing skewed rectangle
cv2.drawContours(img, [approx], -1, (0, 255, 0))
Finished Code
import cv2

img = cv2.imread("rect.png")
img = cv2.resize(img, (img.shape[1] * 5, img.shape[0] * 5))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.bilateralFilter(gray, 9, 75, 75)
_, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0]
perimeter = cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, 0.05 * perimeter, True)
for point in approx:
    x, y = point[0]
    cv2.circle(img, (x, y), 3, (0, 255, 0), -1)
cv2.drawContours(img, [approx], -1, (0, 255, 0))
To detect corners, you can use cv2.goodFeaturesToTrack(). The function takes four parameters
corners = cv2.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance)
image - Input 8-bit or floating-point 32-bit grayscale single-channel image
maxCorners - Maximum number of corners to return
qualityLevel - Minimum accepted quality level of corners, between 0 and 1. All corners below the quality level are rejected
minDistance - Minimum possible Euclidean distance between corners
Now that we know how to find corners, we have to find the rotated rectangle and apply the function. Here's an approach:
We first enlarge the image, convert to grayscale, apply a bilateral filter, then Otsu's threshold to get a binary image
Next we find the distorted rectangle by finding contours with cv2.findContours() then obtain the rotated bounding box highlighted in green. We draw this bounding box onto a mask
Now that we have the mask, we simply use cv2.goodFeaturesToTrack() to find the corners on the mask
Here's the result on the original input image and the (x, y) coordinates for each corner
Corner points
(377.0, 375.0)
(81.0, 344.0)
(400.0, 158.0)
(104.0, 127.0)
Code
import cv2
import numpy as np
import imutils
# Resize image, blur, and Otsu's threshold
image = cv2.imread('1.png')
resize = imutils.resize(image, width=500)
mask = np.zeros(resize.shape, dtype=np.uint8)
gray = cv2.cvtColor(resize, cv2.COLOR_BGR2GRAY)
blur = cv2.bilateralFilter(gray,9,75,75)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find distorted rectangle contour and draw onto a mask
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
rect = cv2.minAreaRect(cnts[0])
box = cv2.boxPoints(rect)
box = np.int32(box)  # np.int0 was removed in newer NumPy releases
cv2.drawContours(resize,[box],0,(36,255,12),2)
cv2.fillPoly(mask, [box], (255,255,255))
# Find corners on the mask
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(mask, maxCorners=4, qualityLevel=0.5, minDistance=150)
for corner in corners:
    x, y = corner.ravel()
    # goodFeaturesToTrack returns float coordinates, so cast to int for drawing:
    cv2.circle(resize, (int(x), int(y)), 8, (155, 20, 255), -1)
    print("({}, {})".format(x, y))
cv2.imshow('resize', resize)
cv2.imshow('thresh', thresh)
cv2.imshow('mask', mask)
cv2.waitKey()
I want to fit an ellipse to a partly damaged object in a picture. (The pictures here are only simplified examples for illustration!)
Image with damaged elliptical object
By doing this
def sort(n):
    return n.size
Image = cv2.imread('acA2500/1.jpg', cv2.IMREAD_GRAYSCALE)
#otsu binarization
_, binary = cv2.threshold(Image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
#invert binary image for opencv findContours
inverse_binary = cv2.bitwise_not(binary)
#find contours
contours, _ = cv2.findContours(inverse_binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
#sort contours by length
largest_contours = sorted(contours, key=sort, reverse=True)
#fit ellipse
contour = largest_contours[0]
ellipse = cv2.fitEllipseDirect(contour)
I get this result, which is not very satisfying.
Result of cv2.findContours and cv2.fitEllipse
So, I've built this loop to get rid of the contour points which are not on the ellipse.
contour = largest_contours[0]
newcontour = np.zeros([1, 1, 2])
newcontour = newcontour.astype(int)
for coordinate in contour:
    if coordinate[0][0] < 600:
        newcontour = np.insert(newcontour, 0, coordinate, 0)
newellipse = cv2.fitEllipse(newcontour)
And get this result, which is good.
Result after trimming the contour points
The problem is that I have to do a lot of these fits in a short time period, and so far this does not reach the desired speed.
Is there any better/faster/nicer way to trim the contour points? Since I don't have a lot of coding experience, I would be happy to find some help here :-)
Edit:
I edited the example pictures so that it is now clear that, unfortunately, a cv2.minEnclosingCircle approach does not work.
The pictures also demonstrate why I sort the contours. In my real code I fit an ellipse to the three longest contours and then decide which one to use through a different routine.
If I don't trim the contour and pick the contour for cv2.fitEllipse by hand, the code needs around 0.5 s. With contour trimming and three cv2.fitEllipse calls it takes around 2 s. It should take 1 s at most.
If the object is a circle, then you can use cv2.minEnclosingCircle on the contour to capture it.
#!/usr/bin/python3
# 2019/02/13 08:50 (CST)
# https://stackoverflow.com/a/54661012/3547485
import cv2
img = cv2.imread('test.jpg')
## Convert to grayscale and threshed it
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th, threshed = cv2.threshold(gray, 100, 255, cv2.THRESH_OTSU|cv2.THRESH_BINARY_INV)
## Find the max outers contour
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
cv2.drawContours(img, cnts, -1, (255, 0, 0), 2, cv2.LINE_AA)
## Use minEnclosingCircle
(cx,cy),r = cv2.minEnclosingCircle(cnts[0])
cv2.circle(img, (int(cx), int(cy)), int(r), (0, 255, 0), 1, cv2.LINE_AA)
## That's it
cv2.imwrite("dst.jpg", img)
Here is my result.
Problem Statement & Background Info:
EDIT: Constraints: The red coloring on the flange changes over time, so I'm not trying to use color recognition to identify my object at this moment unless it can be made robust. Additionally, external illumination may be a factor, since this will be in an outdoor area in the future.
I have an RGB-depth camera and, with it, I'm able to capture this scene, where each pixel (x, y) has a depth value.
Applying a gradient-magnitude filter to the depth map associated with my image, I'm able to get the following edge map.
Pixels are given a value of 0 wherever the gradient magnitude is nonzero; 255 (black) marks magnitudes of 0 (homogeneous depth, i.e. a flat surface).
From this edge map I dilated the edges so that picking up the contours would be easier.
Then I found the contours in the image and tried to plot just the 5 biggest contours.
PROBLEM
Is there a way to reliably find the contours associated with my objects (the red box and metal fixture) and then find their geometric centroid? I keep running into the issue that I can find contours in the image, but I have no way of selectively screening for the contours that are my objects and not noise.
I have provided the image I used for the image processing, but for some reason, OpenCV saves the image as a black image, and when you read it in using...
gray = cv2.imread('GRAYTEST.jpeg', cv2.IMREAD_GRAYSCALE)
it appears blue-ish and not a binary white/black image as I show. So sorry about that.
Here is the image:
Sorry, I don't know why it saved just as a black image, but if you read it in OpenCV it should show up with the same lines as "magnitude of gradients" plot.
MY CODE
gray = cv2.imread('GRAYTEST.jpeg', cv2.IMREAD_GRAYSCALE)
plt.imshow(gray)
plt.title('gray start image')
plt.show()

blurred = cv2.bilateralFilter(gray, 8, 25, 25)  # blur the image while preserving edges
kernel = np.ones((3, 3), np.uint8)  # define a kernel (block) to apply filters to
dilated = cv2.dilate(blurred, kernel, iterations=1)
plt.title('dilated')
plt.imshow(dilated)
plt.show()

# Just performs a Canny edge detection on an image
edges_empty = self.Commons.CannyE_Auto(dilated)  # Canny edge image for some sigma

# Makes an empty image using the same dimensions as the given image
empty2 = self.Commons.make_empty(gray)

_, contours, _ = cv2.findContours(edges_empty, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(contours, key=cv2.contourArea, reverse=True)[:5]  # get the five largest contours by area
cv2.drawContours(empty2, cnts, -1, (255, 0, 0), thickness=1)

plt.title('contours')
plt.imshow(empty2)
plt.show()
Instead of performing the blurring, dilation, and Canny edge detection on my already thresholded image, I just performed the contour detection on my original image.
Then I was able to find a decent contour for the outline of my image by modifying the findContours call.
_, contours, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
By replacing cv2.RETR_TREE with cv2.RETR_EXTERNAL, I was able to get only the contours associated with an object's outline, rather than contours within the object. Switching to cv2.CHAIN_APPROX_NONE didn't show any noticeable improvement, but it may provide better contours for more complex geometries.
for c in cnts:
    # compute the center of the contour
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    # draw the contour and center of the shape on the image
    cv2.drawContours(empty2, [c], -1, (255, 0, 0), thickness=1)
    perimeter = np.around(cv2.arcLength(c, True), decimals=3)
    area = np.around(cv2.contourArea(c), decimals=3)
    cv2.circle(empty2, (cX, cY), 7, (255, 255, 255), -1)
    cv2.putText(empty2, "center", (cX - 20, cY - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
    cv2.putText(empty2, "P:{}".format(perimeter), (cX - 50, cY - 50),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
    cv2.putText(empty2, "A:{}".format(area), (cX - 100, cY - 100),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
Using the above code I was able to label the centroid of each contour as well as information about each contour's perimeter and area.
However, I was unable to devise a test that would select which contour was my desired contour. My idea is to capture my object in a more ideal setting and measure its centroid, perimeter, and area; then, when I find a new contour, I can compare how close its values are to those known ones.
I think this method could work to remove contours that are too large or too small; a sketch of that screening test follows below.
If anyone knows of a better solution, that would be fantastic!
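For what it's worth, a minimal sketch of that screening test, with the reference values and tolerance as placeholder assumptions:
import cv2

# Reference values measured from the object in an ideal setting
# (placeholder numbers - substitute your own measurements):
REF_AREA = 5000.0
REF_PERIMETER = 320.0
TOLERANCE = 0.25  # accept contours within 25% of the reference

def screen_contours(contours):
    # Keep only contours whose area and perimeter are close to the
    # known reference values:
    kept = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if (abs(area - REF_AREA) <= TOLERANCE * REF_AREA and
                abs(perimeter - REF_PERIMETER) <= TOLERANCE * REF_PERIMETER):
            kept.append(c)
    return kept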