circle detection using python, numpy? - python

I want to implement the Hough transform algorithm using Python, NumPy and SciPy.
I do not want to use OpenCV.
I am trying to detect the center of a circle, or the circle itself, in an image without knowing the radius.
How do I proceed?

The process of implementing the Hough Transform is pretty straightforward. I suggest you look on YouTube for some videos about it; there are even videos with code/pseudocode for it.
That being said, I've been in the same situation, looking to implement the HT to detect circles. However, the approach I decided to use was a bit different from the traditional HT. Instead of looping over all pixels to generate the circles that pass through at least one of the circle points, I used the circle points as centers, incrementing the radius from min_radius to max_radius and accumulating in the same way as the classic HT.
This way, you end up with a 3D accumulator array indexed by (x, y, radius). The center and radius are the (x, y, radius) triple with the maximum accumulated value.
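That accumulation scheme can be sketched roughly like this (a minimal illustration, assuming `edge_points` is an N×2 array of (y, x) edge coordinates obtained beforehand, e.g. from an edge detector):

```python
import numpy as np

def hough_circles(edge_points, shape, min_radius, max_radius):
    """Vote in a 3D (y, x, radius) accumulator: each edge point votes
    for every candidate centre lying at distance r from it."""
    h, w = shape
    acc = np.zeros((h, w, max_radius - min_radius + 1), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(range(min_radius, max_radius + 1)):
            cy = np.round(y + r * np.sin(thetas)).astype(int)
            cx = np.round(x + r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            acc[cy[ok], cx[ok], ri] += 1
    # The best (centre, radius) is the accumulator cell with most votes.
    cy, cx, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return cy, cx, min_radius + ri
```

Note this is O(points × radii × samples), so restrict the radius range as much as you can.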
Simplified Hough Transform

I have googled a bit and found the following:
http://nabinsharma.wordpress.com/2012/12/26/linear-hough-transform-using-python/
Maybe this is what you are searching for.
Sorry, I think for circles you should try the following instead:
http://nullege.com/codes/search/houghcircles

Related

Measure exact position of known circle using Python OpenCV

I have an image containing a bunch of circular features. I'd like to know the exact location of the centre of a particular circle and its radius (preferably with sub-pixel accuracy).
Rather than using a circle detector, which will try to find all of the circles, is there a method in OpenCV for fitting a circle to the image? Like this:
Update:
I have tried using the Hough circle detection method, and it seems to get confused about whether the circle should be on the inside or outside edge of the black line. The circle jumps around between the inside and outside edges, or sometimes tries to do both.
All I can think of now is: if you know the approximate centre and radius, detect all the circles and use least-squares fitting with the circle equation to find the one you are looking for.
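As a sketch of that idea, an algebraic least-squares (Kasa) circle fit takes only a few lines of NumPy (assuming `xs`, `ys` are the extracted edge coordinates, e.g. from one of the two edges the Hough method keeps jumping between):

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: rewrite
    (x-a)^2 + (y-b)^2 = r^2 as x^2 + y^2 = 2ax + 2by + c
    and solve the linear system for (a, b, c), giving
    centre (a, b) and radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)
```

Because the fit is linear, it is fast and gives sub-pixel results when the edge points themselves are sub-pixel.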

Python Fit circle with center outside of image

I have the following binary image of a disk, and extracted the border of it:
How can I calculate the center and the radius of the circle? I already tried some methods with cv2.HoughCircles() and cv2.findContours() + cv2.fitEllipse(), however these don't work with images where the circle center is far outside of the image.
You can find the center of a circle from 3 points, but for a robust solution it is better to use the RANSAC method. It tries many candidate solutions over your set of boundary points and gives you a more accurate result. For instance, check here.
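A minimal sketch of that RANSAC idea (the 3-point circle construction plus inlier counting; the names, iteration count and tolerance are illustrative):

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Centre of the circle through three points (circumcentre);
    returns None for (nearly) collinear points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

def ransac_circle(points, n_iter=200, tol=2.0, rng=None):
    """Repeatedly fit a circle to 3 random border points and keep the
    candidate with the most inliers (points within tol of the circle).
    Works even when the centre lies far outside the image."""
    if rng is None:
        rng = np.random.default_rng(0)
    pts = np.asarray(points, float)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
        c = circle_from_3pts(p1, p2, p3)
        if c is None:
            continue
        r = np.hypot(p1[0] - c[0], p1[1] - c[1])
        d = np.abs(np.hypot(pts[:, 0] - c[0], pts[:, 1] - c[1]) - r)
        inliers = int(np.sum(d < tol))
        if inliers > best_inliers:
            best, best_inliers = (c[0], c[1], r), inliers
    return best
```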

What do the parameters in Hough Circle signify and how to know what values to use?

While using Hough circles, I didn't quite understand how to define the parameters, or even what they signify. As far as I know:
image: It is the source image
method: It is the process it uses to detect the circles (are there any other than the Hough gradient?)
minDist: It is the minimum distance between the centres of two circles
minRadius: It is the minimum radius of the circles
maxRadius: It is the maximum radius
The rest of the parameters I don't understand at all. Can someone explain them to me?
Good question!
This is an example of the HoughCircles() function from the opencv-python tutorials. Let's look at it in detail:
HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])
image
This is the input image you want to detect circles in. It is strongly recommended that the image be grayscale, because HoughCircles() uses the Canny() function to detect edges in the image.
method
This is the detection method used to find circles. The only available method is cv2.HOUGH_GRADIENT, so you have no choice but to use it.
dp
Check out this answer here. If you can't understand it, don't worry. The Hough Transform is a broad subject, and I advise you to research it in more detail if you want to know what this variable means. In short, it should be between 0 and 2 and is of type double, so try values like 0.6 or 1.3.
minDist
This is the minimum distance between the center of circles to be detected. How close are the circles in your image? Do you want the function to detect closely connected circles or far between circles?
param1 and param2
As mentioned before, HoughCircles() internally uses the Canny() function. For cv2.HOUGH_GRADIENT, param1 is the higher Canny threshold (the lower one is half of it), and param2 is the accumulator threshold for circle centres: the smaller it is, the more (possibly false) circles are detected.
The thresholder used in the Canny operator uses a method called "hysteresis". Most thresholders use a single threshold limit, which means that if the edge values fluctuate above and below this value, the line will appear broken (commonly referred to as "streaking"). Hysteresis counters streaking by setting an upper and a lower edge value limit. Considering a line segment: if a value lies above the upper threshold limit, it is immediately accepted; if the value lies below the low threshold, it is immediately rejected. Points which lie between the two limits are accepted if they are connected to pixels which exhibit a strong response.
minRadius and maxRadius
The size of circle is represented by its radius. The bigger the radius the bigger the circle and vice versa. These parameters specify the range of sizes of the circles you want to detect.
Finally
When you're using HoughCircles() and similar functions, much of your time will be spent tuning these parameters to find the combination that best detects the circles in your image. So don't be frustrated if you think your parameters are wrong.

Method to determine polygon surface rotation from top-down camera

I have a webcam looking down on a surface which rotates about a single-axis. I'd like to be able to measure the rotation angle of the surface.
The camera position and the rotation axis of the surface are both fixed. The surface is a distinct solid color right now, but I do have the option to draw features on the surface if it would help.
Here's an animation of the surface moving through its full range, showing the different apparent shapes:
My approach thus far:
Record a series of "calibration" images, where the surface is at a known angle in each image
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(). I iterate through various epsilon values until I find one that yields exactly 4 points.
Order the points consistently (top-left, top-right, bottom-right, bottom-left)
Compute the angles between adjacent points with atan2.
Use the angles to fit a sklearn linear_model.LinearRegression()
This approach gets me predictions within about 10% of actual with only 3 training images (covering the full positive, full negative, and middle positions). I'm pretty new to both OpenCV and sklearn; is there anything I should consider doing differently to improve the accuracy of my predictions? (Probably increasing the number of training images is a big one?)
I did experiment with cv2.moments directly as my model features, and then with some values derived from the moments, but these did not perform as well as the angles. I also tried a RidgeCV model, but it seemed to perform about the same as the linear model.
If I understand correctly, you want to estimate the rotation of the polygon with respect to the camera. If you know the dimensions of the object in 3D, you can use solvePnP to estimate the pose of the object, from which you can get its rotation.
Steps:
Calibrate your webcam and get the intrinsic matrix and distortion matrix.
Get the 3D measurements of the object corners and find the corresponding points in 2D. Assuming a rectangular planar object, the corners in 3D would be (0,0,0), (0, 100, 0), (100, 100, 0), (100, 0, 0).
Use solvePnP to get the rotation and translation of the object
The rotation will be the rotation of your object about the axis. Here you can find an example that estimates the pose of a head; you can modify it to suit your application.
Your first step is good -- everything after that becomes way way way more complicated than necessary (if I understand correctly).
Don't think of it as 'learning,' just think of it as a reference. Every time you're in a particular position where you DON'T know the angle, take a picture, and find the reference picture that looks most like it. Guess it's THAT angle. You're done! (There may well be indeterminacies; maybe the relationship isn't bijective, but that's where I'd start.)
You can consider this a 'nearest-neighbor classifier,' if you want, but that's just to make it sound better. Measure a simple distance (Euclidean! Why not!) between the uncertain picture, and all the reference pictures -- meaning, between the raw image vectors, nothing fancy -- and choose the angle that corresponds to the minimum distance between observed, and known.
If this isn't working -- and maybe, do this anyway -- stop throwing away so much information! You're stripping things down, then trying to re-estimate them, propagating error all over the place for no obvious (to me) benefit. So when you do a nearest neighbor, reference pictures and all that, why not just use the full picture? (Maybe other elements will change in it? That's a more complicated question, but basically, throw away as little as possible -- it should all be useful in, later, accurately choosing your 'nearest neighbor.')
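The nearest-neighbour lookup described above really is just a few lines (a sketch, assuming the reference images all have the same shape as the query):

```python
import numpy as np

def nearest_angle(query, ref_images, ref_angles):
    """Return the angle of the reference image with the smallest
    Euclidean distance to the query (raw pixel vectors, nothing fancy)."""
    refs = np.stack([r.ravel().astype(float) for r in ref_images])
    d = np.linalg.norm(refs - query.ravel().astype(float), axis=1)
    return ref_angles[int(np.argmin(d))]
```

With more reference images per degree of rotation, the guess gets correspondingly finer.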
Another option that is rather easy to implement, especially since you've already done part of the job, is the following (I've used it to compute the orientation of a cylindrical part from 3 images acquired while the tube was rotating):
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(), alternatively you could find the four sides of your part with LineSegmentDetector (available from OpenCV 3).
Compute the angle alpha, as depicted in the image below.
When your part is rotating, this angle alpha will follow a sine curve. That is, you will measure alpha(theta) = A sin(theta + B) + C. Given alpha you want to know theta, but first you need to determine A, B and C.
You've acquired many "calibration" or reference images, you can use all of these to fit a sine curve and determine A, B and C.
Once this is done, you can determine theta from alpha.
Notice that you have to deal with the ambiguity sin(Pi - a) = sin(a). It is not a problem if you acquire more than one image sequentially; if you have a single static image, you have to use an extra mechanism.
Hope I'm clear enough, the implementation really shouldn't be a problem given what you have done already.
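A sketch of the fit-and-invert step with `scipy.optimize.curve_fit` (the calibration data here is synthetic; in practice `thetas` are your known reference angles and `alphas` the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(theta, A, B, C):
    return A * np.sin(theta + B) + C

# Calibration data: known angles and the alpha measured at each one
# (synthetic here, generated from an assumed A=0.8, B=0.3, C=1.5).
thetas = np.radians([-40, -20, 0, 20, 40])
alphas = model(thetas, 0.8, 0.3, 1.5)

(A, B, C), _ = curve_fit(model, thetas, alphas, p0=[1.0, 0.0, 1.0])

def theta_from_alpha(alpha):
    """Invert the fitted model. arcsin leaves the sin(pi - x) = sin(x)
    ambiguity, which a second image or a prior must resolve."""
    return np.arcsin(np.clip((alpha - C) / A, -1, 1)) - B
```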

Detect an arc from an image contour or edge

I am trying to detect arcs inside an image. The one piece of information I know for certain is the radius of the arc. I can also try to get the centre of the circle whose arc I want to identify.
Is there any algorithm in OpenCV which can tell us whether a detected contour (or an edge from Canny edge detection) is an arc, or an approximation of one?
Any help on how this would be possible in OpenCV with Python, or even a general approach, would be very helpful.
Thanks
If you think that the shape will not change (I mean the arc won't become a line or something like that), then you can have a look at the Generalized Hough Transform (GHT), which can detect any shape you want.
Cons:
There is no function for the GHT directly in the OpenCV library, but you can find several source code implementations on the internet.
It is sometimes slow, but can become fast if you set the parameters properly.
It won't be able to detect the shape if it changes. For example, I tried to detect squares using the GHT and got good results, but when the squares were not perfect (i.e. rectangles or similar), it didn't detect them.
You can do it this way:
Convert the image to edges using the Canny filter.
Make the image binary using the threshold function (there are options for a regular threshold, Otsu, or adaptive).
Find contours of sufficient length (findContours function).
Iterate over all the contours and try to fit an ellipse (fitEllipse function).
Validate the fitted ellipses by radius.
Check whether a detected ellipse is a good fit by checking how many of the contour pixels lie on it.
Select the best one.
You can try to increase the speed using RANSAC: each time, select 6 points from the binarized image and try to fit.
My math is rusty, but...
What about evaluating a contour by looping over its component edge nodes and finding those where the angle between the edges doesn't change too rapidly AND doesn't change sign?
A chain of angles (θ) where:
0 < θ_i < θ_max
with a number of edges (c) where:
c > d_const
would indicate an arc of:
radius ∝ 1 / ((θ_i + θ_{i+1} + ... + θ_n) / n)
or:
r ∝ 1/θ_avg
and:
arclength ∝ c
A way of finding these angles is discussed at Get angle from OpenCV Canny edge detector
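A sketch of that angle-chain test in NumPy (the threshold θ_max and the contour input are illustrative; `points` is an ordered polyline of contour coordinates):

```python
import numpy as np

def _turns(points):
    """Signed turning angle between consecutive segments, wrapped to (-pi, pi]."""
    v = np.diff(np.asarray(points, float), axis=0)   # edge vectors
    ang = np.arctan2(v[:, 1], v[:, 0])
    turn = np.diff(ang)
    return (turn + np.pi) % (2 * np.pi) - np.pi

def arc_like(points, theta_max=0.5):
    """Arc test: turning angles stay small and keep the same sign."""
    turn = _turns(points)
    if not np.all(np.abs(turn) < theta_max):
        return False
    signs = np.sign(turn[np.abs(turn) > 1e-9])
    return len(signs) == 0 or bool(np.all(signs == signs[0]))

def arc_radius(points):
    """r ~ step length / turning angle, i.e. r proportional to 1/theta_avg."""
    pts = np.asarray(points, float)
    v = np.diff(pts, axis=0)
    step = np.hypot(v[:, 0], v[:, 1]).mean()
    return step / np.abs(_turns(pts)).mean()
```

The radius estimate matches the r ∝ 1/θ_avg relation above: each step advances a roughly constant arc length, so the mean turning angle shrinks as the radius grows.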
