I want to find the exact center of these attached images. The things I've tried:
1- HoughCircles, but it didn't work because it's not a perfect circle...
2- Thresholded the picture so it's all black and white -> contour -> center of contour. This doesn't work on either of these images; it gives a center which isn't correct.
Does one of you know another approach I can try?
EDIT: In the first image you can see why just taking the center of the contour doesn't work: it's not perfectly in the center of the 'circle'.
EDIT2: The definition of the center can be seen in the second image, where the circle touches all the 'sides' at the same time.
Thanks,
Try the following:
Threshold the image using the Otsu method (which will leave you with a binary image where only the middle section is kept; you may have to tweak the actual threshold value a little bit)
On the thresholded image find the center using image moments (see e.g. https://docs.opencv.org/3.4/d0/d49/tutorial_moments.html)
This assumes that the middle section will always be brighter than the area outside.
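A minimal sketch of those two steps in Python/OpenCV (the file name is a placeholder, and as said above you may still need to tweak the threshold):

import cv2

img = cv2.imread("disk.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Otsu picks the threshold automatically; the passed value 0 is ignored in that mode
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Image moments of the binary mask give the centroid of the white (middle) region
m = cv2.moments(binary, binaryImage=True)
cx = m["m10"] / m["m00"]
cy = m["m01"] / m["m00"]
print("center:", (cx, cy))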
One way you could try:
Find corners
Clean up outliers
Average their locations
I expect that to be quite in the center.
If not, please define the question more precisely.
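A rough sketch of that idea, assuming Shi-Tomasi corners (cv2.goodFeaturesToTrack) and a simple median-distance rule for the outlier cleanup (both are my choices, not a fixed recipe):

import cv2
import numpy as np

img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Detect up to 200 strong corners
corners = cv2.goodFeaturesToTrack(img, maxCorners=200, qualityLevel=0.01, minDistance=5)
pts = corners.reshape(-1, 2)

# Crude outlier cleanup: drop points much farther from the median point than typical
median = np.median(pts, axis=0)
dists = np.linalg.norm(pts - median, axis=1)
pts = pts[dists < 1.5 * np.median(dists)]

# Average the remaining corner locations
center = pts.mean(axis=0)
print("estimated center:", center)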
Related
I have the following binary image of a disk, and extracted the border of it:
How can I calculate the center and the radius of the circle? I already tried some methods with cv2.HoughCircles() and cv2.findContours() + cv2.fitEllipse(); however, these don't work with images where the circle center is far outside of the image.
You can find the center of a circle from 3 points, but for a robust solution it is better to use the RANSAC method. It fits many candidate circles against your whole set of boundary points and keeps the one that the most points agree with, which gives a more accurate solution. For instance check: here
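If it helps, here is a minimal RANSAC sketch (plain NumPy, my own helper names): fit a circle through 3 random border points many times and keep the fit with the most inliers.

import numpy as np

def circle_from_3_points(p1, p2, p3):
    # Circumscribed circle of three points; returns (cx, cy, r) or None if collinear
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

def ransac_circle(points, iterations=500, tol=2.0):
    # points: (N, 2) array of border pixel coordinates
    best, best_inliers = None, 0
    for _ in range(iterations):
        sample = points[np.random.choice(len(points), 3, replace=False)]
        fit = circle_from_3_points(*sample)
        if fit is None:
            continue
        cx, cy, r = fit
        dist = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        inliers = np.count_nonzero(dist < tol)
        if inliers > best_inliers:
            best, best_inliers = fit, inliers
    return best  # (cx, cy, r) of the most-supported circle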
I'm facing some general problems regarding the edge detection in an image (the image should be irrelevant for my question).
I want the canny edge detector to ignore a certain pixel value. For example: It should only look for edges if the gray value is not 0. Otherwise there will be "false edges" detected.
I usually use the cv2.canny function, which works quite fast and well. The problem is, it is not customizable. So I took this code for a custom Canny edge detector (https://rosettacode.org/wiki/Canny_edge_detector#Python) in order to customize it. It works, but it calculates the edges way too slowly (it takes several minutes, whereas the cv2.canny function takes a fraction of a second).
This is my first problem.
Is there another way to make the cv2.canny function "ignore" pixels of a certain value? Imagine somewhere in the picture there is an area filled with black (see the image below). I don't want the edge detector to detect the edge of this black area.
Once I have some clear edges detected in my image, I want to create masks based on those edges. I couldn't find any examples for this online. So if anyone knows where to find a good tutorial on how to create masks from edges it would be great if you could help me out.
Thanks in advance
Here's an approach:
Calculate your Canny as usual using the fast OpenCV function.
Now locate all the black pixels in the image - you can do that with _,thr = cv2.threshold(im,1,255,cv2.THRESH_BINARY) and dilate those areas by 1 pixel with morphology to allow edges to be offset a little as they often are.
Multiply the normal Canny image with the mask you created so that anything it found in the black areas gets multiplied by zero, i.e. lost.
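A minimal sketch of those three steps (I grow the black zones by eroding the white mask, and use bitwise_and instead of a literal multiplication; the Canny thresholds and the file name are placeholders):

import cv2
import numpy as np

im = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# 1. Normal, fast Canny
edges = cv2.Canny(im, 50, 150)

# 2. Mask of "not black" pixels: black stays 0, everything else becomes 255
_, mask = cv2.threshold(im, 1, 255, cv2.THRESH_BINARY)
# Growing the black zones by one pixel = eroding the white part of the mask
mask = cv2.erode(mask, np.ones((3, 3), np.uint8), iterations=1)

# 3. Keep only edges outside the black areas (anything inside them is zeroed out)
edges_clean = cv2.bitwise_and(edges, mask)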
The task is: Find all 4 corners of a given red polygon on this image
I know how to find all corners if all borders are visible (linear extrapolation, phew, easy).
My question is how to go about finding bottom corners if we don't see the bottom border?
The answer is quite simple. You can't.
If you don't have both dimensions of your rectangle you cannot find missing corners. You cannot know how far that rectangle extends outside your image.
For your example the only way would be to assume that corner 3 is on the image border. Then you can calculate the missing 4th corner.
I have this image:
Here I have an image on a green background with an area marked by a red line within it. I want to calculate the area of the marked portion with respect to the image.
I am cropping the image to remove the green background and calculating the area of the cropped image. From here I don't know how to proceed.
I have noticed that contours can be used for this, but the problem is how to draw the contour in this case.
I guess if I can create the contour and fill the marked area with some color, I can subtract it from the whole (cropped) image and get both areas.
In your link, they use the threshold method with a colour value as a parameter. Basically it takes your source image and sets to white all pixels greater than this value, and to black otherwise (this means that your source image needs to be a greyscale image). This thresholding is what enables you to "fill the marked area" in order to make contour detection possible.
However, I think you should try to use the method inRange on your cropped picture. It is pretty much the same as threshold, but instead of having one threshold, you have a minimum and a maximum boundary. If your pixel is in the range of colours given by your boundaries, then it will be set as white. If it isn't, then it will be set as black. I don't know if this will work, but if you try to isolate the "most green" colours in your range, then you might get your big white area on the top right.
Then you apply the method findContours on your binarized image. It will give you all the contours it found, so if you have small white dots on other places in your image it doesn't matter, you'll only have to select the biggest contour found by the method.
Be careful, if the range of inRange isn't appropriate, the big white zone you should find on top right might contain some noise, and it could mess with the detection of contours. To avoid that, you could blur your image and do some stuff like erosion/dilation. This way you might get a better detection.
EDIT
I'll add some code here, but it can't be used as is. As I said, I have no knowledge in Python so all I can do here is provide you the OpenCV methods with the parameters to provide.
Let's make also a review of the steps:
Binarize your image with inRange. You need to find appropriate values for your minimum and maximum boundaries. What you want to do here is isolate the green colours, since that is mostly what composes the area inside your contour. I can't really suggest anything better than trial and error to find the best thresholds. Let's start with these min and max values: (0, 125, 0) and (255, 250, 255)
inRange(source_image, Scalar(0, 125, 0), Scalar(255, 250, 255), binarized_image)
Check your result with imshow
imshow("bin", binarized_image)
If your binarization is OK (you can detect the area you want quite well), apply findContours. I'm sorry, I don't understand the syntax used in your tutorial nor in the documentation, but here are the parameters:
binarized_mat: your binarized image
contours: an array of arrays of Point which will contain all the contours detected. Each contour is stored as an array of points.
mode: you can choose whatever you want, but I'd suggest RETR_EXTERNAL in your case.
Get the array with the biggest size, since it is probably the contour with the highest number of points (i.e. the largest one).
Calculate the area inside
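Since I can only give the C++-style calls above, here is a rough Python/OpenCV equivalent of the whole pipeline (OpenCV 4 return signature; I pick the largest contour by area, which is usually more robust than counting points):

import cv2

img = cv2.imread("cropped.png")  # the cropped image, BGR

# Binarize with inRange; the boundaries are only a starting point to tune
binarized = cv2.inRange(img, (0, 125, 0), (255, 250, 255))
cv2.imshow("bin", binarized)
cv2.waitKey(0)

# Find the external contours on the binary image
contours, _ = cv2.findContours(binarized, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep the biggest contour and compute the area inside it
largest = max(contours, key=cv2.contourArea)
print("marked area in pixels:", cv2.contourArea(largest))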
Hope this helps!
I have a panoramic one shot lens from here: http://www.0-360.com/ and I wrote a script using the python image library to "unwrap" the image into a panorama. I want to automate this process though, as currently I have to specify the center of the image. Also, getting the radius of the circle would be good too. The input image looks like this:
And the "unwrapped" image looks like this:
So far I have been trying the Hough circle detection. The issue I have is selecting the correct values to use. Also, sometimes, dark objects near the center circle seem to throw it off.
Other Ideas I had:
Hough Line detection of the unwrapped image. Basically, choose center pixel as center, then unwrap and see if the lines on the top and bottom are straight or "curvy". If not straight, then keep trying with different centers.
Moments/blob detection. Maybe I can find the center blob and find the center of that. The problem is sometimes I get a bright ring in the center of the dark disk as seen in the image above. Also, the issue with dark objects near the center.
Paint the top bevel of the mirror a distinct color like green to make circle detection easier? If I use green and only use the green channel, would the detection be easier?
What's the best method I should try and use to get the center of this image, and possibly the radius of the outer and inner rings?
As your image has multiple circles with a common centre, you can go about it this way:
Detect circles with the Hough circle transform and keep only those that share a common centre.
Now check the ratio of the radii of those concentric circles, as your images keep that ratio constant.
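A rough sketch of that idea, grouping the HoughCircles results whose centres nearly coincide (all parameter values are guesses to tune on your images):

import cv2
import numpy as np

gray = cv2.imread("panorama_raw.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
gray = cv2.medianBlur(gray, 5)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=40, minRadius=0, maxRadius=0)
if circles is not None:
    circles = circles[0]  # shape (N, 3): x, y, radius
    # For each detected circle, collect the circles sharing (roughly) its centre
    best = None
    for c in circles:
        group = circles[np.linalg.norm(circles[:, :2] - c[:2], axis=1) < 5]
        if best is None or len(group) > len(best):
            best = group
    cx, cy = best[:, :2].mean(axis=0)
    print("common centre:", (cx, cy), "radii:", sorted(best[:, 2]))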
I guess don't make it too fancy. The black center is at the center of the image, right? Cut a square ROI close to the image center and look for the 'black' region there. Store all the 'black' pixel locations and find their center. You may consider using the CMYK color space for detecting the black region.
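A small sketch of that idea, assuming the dark disk lies inside a central square ROI (the ROI size and the darkness threshold are guesses, and I simply test all BGR channels instead of converting to CMYK):

import cv2
import numpy as np

img = cv2.imread("lens_image.jpg")  # placeholder file name
h, w = img.shape[:2]

# Square ROI around the image centre (a quarter of the smaller dimension, as a guess)
s = min(h, w) // 4
roi = img[h // 2 - s:h // 2 + s, w // 2 - s:w // 2 + s]

# "Black" pixels: all channels below a small threshold
dark = np.all(roi < 40, axis=2)
ys, xs = np.nonzero(dark)

# Average location of the dark pixels, mapped back to full-image coordinates
cx = xs.mean() + (w // 2 - s)
cy = ys.mean() + (h // 2 - s)
print("estimated centre:", (cx, cy))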