Ellipse accuracy from 2D points - python

I have an ellipse which I detected from an image using OpenCV, where the ellipse is defined as (x_centre, y_centre), (minor_axis, major_axis), angle. I also have a list of points of the form [(x1, y1), (x2, y2), ...] which define where the ellipse should be in the image.
How can I find the accuracy of the found ellipse from the ellipse defined by the points?
Update
For better understanding, this is the result from my actual script:
ellipse detection. The red ellipse was detected from the image and the green dots are just loaded from a file.
Less accurate example: ellipse detection 2
I need some method to validate how accurate the ellipse is to the outer points.

This answer describes one way to find how accurately a found ellipse matches the ellipse as defined by a list of points.
The first step is to create a mask image, and draw the ellipse on it.
mask = np.zeros((img.shape[0], img.shape[1]), np.uint8)
mask = cv2.ellipse(mask, ellipse, 255, 5)
Next, iterate through the list of points and check whether each one falls in the white part or the black part of the mask image.
hit, miss = 0, 0
for point in cnt:
    if mask[point[0][1], point[0][0]] == 0:
        miss += 1
    else:
        hit += 1
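Not part of the original answer, but the two counts fold naturally into a single score:

accuracy = hit / (hit + miss)
print('{:.1%} of the points lie on the drawn ellipse band'.format(accuracy))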
This is an ellipse that fits perfectly:
Here is an ellipse that doesn't fit so well:
A more precise measure is the root-mean-square error (RMSE) of the distances between the points and the ellipse. It can be found with the help of the function cv2.pointPolygonTest:
import math

_, ellipse_contours, hierarchy = cv2.findContours(mask, 1, 2)
ellipse_contour = ellipse_contours[0]

total_dist = 0
for point in cnt:
    total_dist += cv2.pointPolygonTest(ellipse_contour, tuple(point[0]), True)**2
rmse = math.sqrt(total_dist / len(cnt))

Related

Trying to detect all the circles with HoughCircles in openCV (python)

I am following this tutorial: https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/
I was playing around with the parameters of HoughCircles (even those you don't see in the code, e.g. param2) and it seems very inaccurate. In my project, the disks you see in the picture will be placed at random spots, and I need to be able to detect them and their color.
Currently I am only able to detect a few circles, and sometimes some random circles are drawn where there are no circles, so I am a bit confused.
Is this the best way to do circle detection with OpenCV, or is there a more accurate way of doing it?
Also, why is my code not detecting every circle?
Initial board : https://imgur.com/BrPB5Ox
Circle drawn : https://imgur.com/dT7k29E
My code:
import cv2
import numpy as np

img = cv2.imread('Photos/board.jpg')
output = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    # show the output image
    cv2.imshow("output", np.hstack([img, output]))
    cv2.waitKey(0)
Thanks a lot.
First of all, you cannot expect HoughCircles to detect all circles in every kind of situation. It is not an AI; it has parameters that must be tuned to get the desired results. You can check here to learn more about those parameters.
HoughCircles relies on edge detection internally, so you should be sure the edges (contours) are being detected properly. In your example I am sure bad edge results will come up because of the lighting problem: metallic materials cause strong specular highlights in image processing, and this badly affects finding contours.
What you should do:
Solve the lighting problem
Be sure about the HoughCircle parameters to get desired output
Instead of using HoughCircles you can detect each contour and its mass center (moments help you find the mass center). Then you can measure the distance from each contour point to that mass center; if all the distances are roughly equal, it's a circle. A sketch of this idea follows below.
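A minimal sketch of that moments-based circle test, under stated assumptions: the file name 'board.jpg', the Otsu threshold and the 0.1 tolerance are placeholders to tune, and the [-2] index keeps findContours working across OpenCV 3 and 4 return signatures.

import cv2
import numpy as np

# hypothetical input: a binarized version of the board photo
binary = cv2.threshold(cv2.imread('board.jpg', 0), 0, 255,
                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
for cnt in contours:
    m = cv2.moments(cnt)
    if m['m00'] == 0:
        continue  # degenerate contour with no area
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']  # mass center
    pts = cnt.reshape(-1, 2).astype(np.float64)
    dists = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)  # point-to-center distances
    # for a circle, every distance is roughly the radius
    if dists.std() / dists.mean() < 0.1:
        print('circle at ({:.0f}, {:.0f}), radius ~{:.0f}'.format(cx, cy, dists.mean()))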
The Hough transform works best on a monochromatic/binary image, so you may want to preprocess it with some sort of threshold function. The parameter values for the function are very important for proper recognition.
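For example, one possible preprocessing sketch; the blur size, the Otsu thresholding, and all the HoughCircles parameter values here are assumptions that would need tuning on the actual board image:

import cv2

img = cv2.imread('Photos/board.jpg')  # path from the question
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # knock down glare speckles
# binarize with Otsu; a clean high-contrast input helps the
# internal Canny stage of HoughCircles
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=100, param2=30, minRadius=10, maxRadius=80)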
Is this the best way to do circle detection with OpenCV or is there a more accurate way of doing it? Also why is my code not detecting every circle?
There is also the findContours function:
https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gadf1ad6a0b82947fa1fe3c3d497f260e0
which, to my taste, is more robust and more general; you may want to give it a try.
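A rough sketch of that contour route, using the fill ratio of the minimum enclosing circle as the circle test (the file name, the threshold, and the 0.8 cutoff are all assumptions):

import cv2
import numpy as np

mask = cv2.threshold(cv2.imread('board.jpg', 0), 0, 255,
                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
for cnt in contours:
    (x, y), r = cv2.minEnclosingCircle(cnt)
    if r < 5:
        continue  # skip tiny specks
    # a true circle fills almost all of its minimum enclosing circle
    if cv2.contourArea(cnt) / (np.pi * r * r) > 0.8:
        print('circle at ({:.0f}, {:.0f}), radius {:.0f}'.format(x, y, r))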

How to plot centroids on image after kmeans clustering?

I have a color image and wanted to do k-means clustering on it using OpenCV.
This is the image on which I wanted to do k-means clustering.
This is my code:
import numpy as np
import cv2
import matplotlib.pyplot as plt

image1 = cv2.imread("./triangle.jpg", 0)
Z1 = image1.reshape((-1))
Z1 = np.float32(Z1)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K1 = 2
ret, mask, center = cv2.kmeans(Z1, K1, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
center = np.uint8(center)
print(center)
res_image1 = center[mask.flatten()]
clustered_image1 = res_image1.reshape((image1.shape))
for c in center:
    plt.hlines(c, xmin=0, xmax=max(clustered_image1.shape[0], clustered_image1.shape[1]), lw=1.)
plt.imshow(clustered_image1)
plt.show()
This is what I get from the center variable.
[[112]
[255]]
This is the output image
My problem is that I'm unable to understand the output. I have two lists in the center variable because I wanted two classes, but why does each of them hold only one value?
Shouldn't it be something like this (which makes sense because centroids should be points):
[[x1, y1]
[x2, y2]]
instead of this:
[[x]
[y]]
and if I read the image as a color image like this:
image1 = cv2.imread("./triangle.jpg")
Z1 = image1.reshape((-1, 3))
I get this output:
[[255 255 255]
[ 89 173 1]]
Color image output
Can someone explain to me how I can get 2D points instead of lines? Also, how do I interpret the output I got from the center variable when using the color image?
Please let me know if I'm unclear anywhere. Thanks!!
K-means clustering finds clusters of similar values. Your input is an array of color values, hence you find the colors that describe the 2 clusters. [255 255 255] is the white color, [ 89 173 1] is the green color. Similarly for [112] and [255] in the grayscale version. What you are doing is color quantization.
They are indeed the centroids, but their dimension is color, not location, therefore you cannot plot them anywhere. Well, you can, but it looks like this:
See how the 'color location' determines to which class each pixel belongs?
This is not something you can locate in your image. What you can do is find the pixels that belong to the different clusters, and use the locations of the found pixels to determine their centroid or 'average' position.
To get the 'average' position of each color, you have to separate out the pixel coordinates according to the class/color to which they belong. In the code below I used np.where(img <= 240), where 240 is the threshold. I used 240 for convenience, but you could use K-Means to determine where the threshold should be (inRange() might be useful at some point). If you sum the coordinates and divide by the number of pixels found, you'll have what I think you are looking for:
Result:
Code:
import cv2
import numpy as np

# load image as grayscale
img = cv2.imread('D21VU.jpg', 0)
# get the positions of all pixels that are not full white (= triangle)
triangle_px = np.where( img <= 240)
# dividing the sum of the values by the number of pixels
# to get the average location
ty = int(sum(triangle_px[0])/len(triangle_px[0]))
tx = int(sum(triangle_px[1])/len(triangle_px[1]))
# print location and draw filled black circle
print("Triangle ({},{})".format(tx,ty))
cv2.circle(img, (tx,ty), 10,(0), -1)
# the same process, but now with only white pixels
white_px = np.where( img > 240)
wy = int(sum(white_px[0])/len(white_px[0]))
wx = int(sum(white_px[1])/len(white_px[1]))
# print location and draw white filled circle
print("White: ({},{})".format(wx,wy))
cv2.circle(img, (wx,wy), 10,(255), -1)
# display result
cv2.imshow('Result',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is an Imagemagick solution, since I am not proficient with OpenCV.
Basically, I convert your actual image (from your link in the comments) to binary, then use image moments to extract the centroid and other statistics.
I suspect you can do something similar in OpenCV, Skimage, or Python Wand, which is based upon Imagemagick. (See for example:
https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#ga556a180f43cab22649c23ada36a8a139
https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.moments_coords_central
https://en.wikipedia.org/wiki/Image_moment)
Input:
Your image does not have just two colors; perhaps this image did not have kmeans clustering applied with only 2 colors. So I will do that with an Imagemagick script that I have built.
kmeans -n 2 -m 5 img.png img2.png
final colors:
count,hexcolor
99234,#65345DFF
36926,#27AD0EFF
Then I convert the two colors to black and white by simply thresholding and stretching the dynamic range to full black and white.
convert img2.png -threshold 50% -auto-level img3.png
Then I get all the image moment statistics for the white pixels, which includes the x,y centroid in pixels relative to the top left corner of the image. It also includes the equivalent ellipse major and minor axes, angle of major axis, eccentricity of the ellipse, and equivalent brightness of the ellipse, plus the 8 Hu image moments.
identify -verbose -moments img3.png
Channel moments:
Gray:
--> Centroid: 208.523,196.302 <--
Ellipse Semi-Major/Minor axis: 170.99,164.34
Ellipse angle: 140.853
Ellipse eccentricity: 0.197209
Ellipse intensity: 106.661 (0.41828)
I1: 0.00149333 (0.380798)
I2: 3.50537e-09 (0.000227937)
I3: 2.10942e-10 (0.00349771)
I4: 7.75424e-13 (1.28576e-05)
I5: 9.78445e-24 (2.69016e-09)
I6: -4.20164e-17 (-1.77656e-07)
I7: 1.61745e-24 (4.44704e-10)
I8: 9.25127e-18 (3.91167e-08)
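For reference, the centroid part of those statistics can be reproduced in OpenCV roughly like this (a sketch, assuming the thresholded image from above was saved as img3.png):

import cv2

binary = cv2.imread('img3.png', 0)
m = cv2.moments(binary, binaryImage=True)  # moments of the white pixels
cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
print('Centroid: {:.3f},{:.3f}'.format(cx, cy))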

Identifying the densest region/cluster

I have an image like so:
I would like to automatically identify the dense white box area in the top left and then fill it and black out the rest of image. Producing something like this:
Essentially, I just want to return the coordinates of the densest cluster. I have tried ad-hoc methods such as erosion, dilation and binary closing, but they do not quite suit my needs. I'm not sure if I could use k-means here? Looking for an efficient method; any help is appreciated.
You could erode the image a little bit more, to remove more of the noise, and then find the contours and filter them by area. Here is what I would use (not tested):
kernel = np.ones((2, 2), np.uint8)
img = cv2.erode(img, kernel, iterations=2)

# Finding contours of the white square:
_, conts, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in conts:
    area = cv2.contourArea(cnt)
    # filter more noise
    if area > 200:  # optimize this number
        x1, y1, w, h = cv2.boundingRect(cnt)
        x2 = x1 + w  # (x1, y1) = top-left vertex
        y2 = y1 + h  # (x2, y2) = bottom-right vertex
        rect = cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)
One right approach here would be to apply a large square averaging filter. If you know approximately the size of the box you're looking for, match the filter size to it. After applying this filter, the largest pixel value in the image will be at the middle of the densest region. Let's call this point p.
Next, apply segmentation and connected component labeling to your original image. From your example image, it seems that the box you're looking for is connected. You might want to apply some morphological operations to make sure it's connected. You can also paint a reasonably-sized blob centered at point p; it'll connect lots of small regions that together form a dense area.
Next, remove all connected components except the one containing point p. You can do this by finding the label at pixel p, and comparing all pixels in the labeled image for equality with that label.
This should leave you with a connected, compact region. You can find the bounding box of this region and paint it on your image, if you really want to enforce that the found area be a box. A sketch of these steps follows below.
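A sketch of the whole recipe under stated assumptions: the file name, the filter size (matched to the expected box), the closing kernel and the blob radius are all placeholders to tune.

import cv2
import numpy as np

img = cv2.imread('dots.png', 0)              # hypothetical file name
binary = (img > 128).astype(np.uint8) * 255  # white dots on black

# 1. large square averaging filter; the peak marks the densest region
box = 101                                    # roughly the expected box size
density = cv2.blur(binary, (box, box))
py, px = np.unravel_index(np.argmax(density), density.shape)  # point p

# 2. morphology plus a painted blob at p to make the dense area one component
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE,
                          np.ones((15, 15), np.uint8))
cv2.circle(closed, (px, py), 20, 255, -1)

# 3. keep only the connected component containing p
_, labels = cv2.connectedComponents(closed)
region = labels == labels[py, px]

# 4. paint the bounding box of that region white, black out the rest
ys, xs = np.nonzero(region)
out = np.zeros_like(img)
out[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 255
cv2.imwrite('result.png', out)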

OpenCV - Detecting circular shapes

I have some code which detects circular shapes but I am unable to understand how it works.
From this code:
How can I find the radius and center point of the circle?
What is the behaviour of cv2.approxPolyDP for detecting circles?
# Now find the contours in the segmented mask
contours, hierarchy = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Sort the contours w.r.t. the contour rect x
contours.sort(key=lambda x: cv2.boundingRect(x)[0])

for contour in contours:
    approx = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
    if len(approx) > 8:
        # Find the bounding rect of the contour.
        contour_bounding_rect = cv2.boundingRect(contour)
        mid_point = contour_bounding_rect[0] + contour_bounding_rect[2] / 2, contour_bounding_rect[1] + contour_bounding_rect[3] / 2
        print(mid_point[1] / single_element_height, ", ", end="")
So I have figured out the answer to your first question: determining the center and radius of circles in the image.
First I find all the contours present in the image. Then, using a for loop, I find the center and radius of each contour with cv2.minEnclosingCircle and print them to the console:
contours, hierarchy = cv2.findContours(thresh, 2, 1)
print(len(contours))
cnt = contours
for i in range(len(cnt)):
    (x, y), radius = cv2.minEnclosingCircle(cnt[i])
    center = (int(x), int(y))
    radius = int(radius)
    cv2.circle(img, center, radius, (0, 255, 0), 2)
    print('Circle' + str(i) + ': Center = ' + str(center) + ' Radius = ' + str(radius))
To answer your second question about cv2.approxPolyDP(): this function draws an approximate contour around the object in the image based on a parameter called epsilon. The higher the value of epsilon, the more roughly the contour is approximated; for a lower value of epsilon, the contour grazes almost every edge of the object in the image. Visit THIS PAGE for a better understanding.
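A quick way to see the effect (reusing a contour cnt from the loop above; the three factors are arbitrary):

perimeter = cv2.arcLength(cnt, True)
for factor in (0.001, 0.01, 0.1):
    approx = cv2.approxPolyDP(cnt, factor * perimeter, True)
    # smaller epsilon -> more vertices -> closer to the original contour
    print('epsilon = {:.1%} of perimeter: {} vertices'.format(factor, len(approx)))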
Hope this helped!! :)
I don't think approxPolyDP is the right way to go here.
If you have an image where you only have circles and you want to find center and radius, try minEnclosingCircle()
If you have an image with various shapes and you want to find the circles, try the Hough transform (it may take a long time) or fitEllipse(), where you check whether the bounding box it returns is square.
See documentation for both these functions
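The fitEllipse() check might look like this minimal sketch (the 10% tolerance is an arbitrary assumption):

import cv2

# cnt: any contour with at least 5 points (fitEllipse requires 5)
(cx, cy), (w, h), angle = cv2.fitEllipse(cnt)
# a circle's fitted ellipse has nearly equal axes, i.e. a square bounding box
if abs(w - h) / max(w, h) < 0.1:
    print('circle at ({:.0f}, {:.0f}), radius ~{:.0f}'.format(cx, cy, (w + h) / 4))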

Python OpenCV - Find black areas in a binary image

Is there any method/function in the Python wrapper of OpenCV that finds black areas in a binary image? (like regionprops in Matlab)
Up to now I load my source image, transform it into a binary image via thresholding, and then invert it to highlight the black areas (which are now white).
I can't use third party libraries such as cvblobslob or cvblob
Basically, you use the findContours function, in combination with many other functions OpenCV provides especially for this purpose.
Useful functions used (surprise, surprise, they all appear on the Structural Analysis and Shape Descriptors page in the OpenCV Docs):
findContours
drawContours
moments
contourArea
arcLength
boundingRect
convexHull
fitEllipse
Example code (I have all the properties from Matlab's regionprops except WeightedCentroid and EulerNumber; you could work out EulerNumber by using cv2.RETR_TREE in findContours and looking at the resulting hierarchy, and I'm sure WeightedCentroid wouldn't be that hard either):
import cv2
import numpy as np

# grab contours
cs, _ = cv2.findContours(BW.astype('uint8'), mode=cv2.RETR_LIST,
                         method=cv2.CHAIN_APPROX_SIMPLE)
# set up the 'FilledImage' bit of regionprops.
filledI = np.zeros(BW.shape[0:2]).astype('uint8')
# set up the 'ConvexImage' bit of regionprops.
convexI = np.zeros(BW.shape[0:2]).astype('uint8')
# for each contour c in cs:
# will demonstrate with cs[0] but you could use a loop.
i=0
c = cs[i]
# calculate some things useful later:
m = cv2.moments(c)
# ** regionprops **
Area = m['m00']
Perimeter = cv2.arcLength(c,True)
# bounding box: x,y,width,height
BoundingBox = cv2.boundingRect(c)
# centroid = m10/m00, m01/m00 (x,y)
Centroid = ( m['m10']/m['m00'],m['m01']/m['m00'] )
# EquivDiameter: diameter of circle with same area as region
EquivDiameter = np.sqrt(4*Area/np.pi)
# Extent: ratio of area of region to area of bounding box
Extent = Area/(BoundingBox[2]*BoundingBox[3])
# FilledImage: draw the region on in white
cv2.drawContours( filledI, cs, i, color=255, thickness=-1 )
# calculate indices of that region..
regionMask = (filledI==255)
# FilledArea: number of pixels filled in FilledImage
FilledArea = np.sum(regionMask)
# PixelIdxList : indices of region.
# (np.array of xvals, np.array of yvals)
PixelIdxList = regionMask.nonzero()
# CONVEX HULL stuff
# convex hull vertices
ConvexHull = cv2.convexHull(c)
ConvexArea = cv2.contourArea(ConvexHull)
# Solidity := Area/ConvexArea
Solidity = Area/ConvexArea
# convexImage -- draw on convexI
cv2.drawContours(convexI, [ConvexHull], -1,
                 color=255, thickness=-1)
# ELLIPSE - determine best-fitting ellipse.
centre,axes,angle = cv2.fitEllipse(c)
MAJ = np.argmax(axes) # this is MAJor axis, 1 or 0
MIN = 1-MAJ # 0 or 1, minor axis
# Note: axes length is 2*radius in that dimension
MajorAxisLength = axes[MAJ]
MinorAxisLength = axes[MIN]
Eccentricity = np.sqrt(1-(axes[MIN]/axes[MAJ])**2)
Orientation = angle
EllipseCentre = centre # x,y
# ** if an image is supplied with the BW:
# Max/Min Intensity (only meaningful for a one-channel img..)
MaxIntensity = np.max(img[regionMask])
MinIntensity = np.min(img[regionMask])
# Mean Intensity
MeanIntensity = np.mean(img[regionMask],axis=0)
# pixel values
PixelValues = img[regionMask]
After inverting the binary image to turn the black areas white, apply the cv.FindContours function. It will give you the boundaries of the regions you need.
Later you can use cv.BoundingRect to get the minimum bounding rectangle around a region. Once you have the rectangle vertices, you can find its center etc.
Or, to find the centroid of a region, use the cv.Moments function after finding contours, and then use cv.GetSpatialMoments in the x and y directions. It is explained in the OpenCV manual.
To find the area, use the cv.ContourArea function.
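In the newer cv2 API, that pipeline might look roughly like this (a sketch, not the answerer's original code; the file name and threshold are placeholders):

import cv2

gray = cv2.imread('input.png', 0)
# threshold + invert so the black areas become white
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)  # minimum bounding rectangle
    m = cv2.moments(c)
    if m['m00'] > 0:
        cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']  # centroid
        print('region at ({:.0f}, {:.0f}), area {:.0f}, bbox {}x{}'.format(
            cx, cy, cv2.contourArea(c), w, h))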
Transform it to a binary image using threshold with the CV_THRESH_BINARY_INV flag; that way you get threshold + inversion in one step.
If you can consider using another free library, you could use SciPy. It has a very convenient way of counting areas:
from scipy import ndimage

def count_labels(mask_image):
    """This function returns the count of labels in a mask image."""
    label_im, nb_labels = ndimage.label(mask_image)
    return nb_labels
If necessary you can use:
import cv2 as opencv
image = opencv.inRange(image, lower_threshold, upper_threshold)
beforehand, to get a mask image that contains only black and white, where white marks the objects in the given range.
I know this is an old question, but for completeness I wanted to point out that cv2.moments() will not always work for small contours. In that case, you can use cv2.minEnclosingCircle(), which will always return center coordinates (and a radius), even if you have only a single point. Slightly more resource-hungry, though, I think...
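A tiny contrived example of the failure mode (two points enclose zero area, so the centroid division would blow up, while minEnclosingCircle still answers):

import cv2
import numpy as np

tiny = np.array([[[10, 10]], [[12, 10]]], dtype=np.int32)  # a 2-point "contour"
print(cv2.moments(tiny)['m00'])            # 0.0 -- m10/m00 would divide by zero
(cx, cy), r = cv2.minEnclosingCircle(tiny)
print((cx, cy), r)                         # still gives a sensible center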
