Finding Circles from Partial Circular Shapes - python

I am writing a program to analyze microscopy images. I have an edge mapping of an image that looks like this:
The program I have written draws bounding boxes around circular shapes, but when the contours are not closed (like in the image above) it struggles and the resulting bounding box can include multiple circles.
So given this image, is there a way to differentiate between the two circular (or ovular) bodies, so the bounding boxes accurately enclose each shape?
(The image is an example of a bounding box incorrectly drawn around multiple circles)

If all or most of your error cases look like the one above, i.e. bounding boxes that should be further subdivided, you could try robust fitting of ellipses to the edge points.
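A minimal sketch of that idea, assuming OpenCV 4 and an edge map saved as "edges.png" (the filename and the axis-ratio check are illustrative); note that cv2.fitEllipse is a plain least-squares fit, so making it robust as suggested would mean wrapping it in a RANSAC-style loop over subsets of the edge points:

import cv2
import numpy as np

# Fit an ellipse to each (possibly open) edge fragment and derive one
# bounding box per fitted ellipse instead of per raw contour.
edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

boxes = []
for c in contours:
    if len(c) < 5:                       # fitEllipse needs at least 5 points
        continue
    ellipse = cv2.fitEllipse(c)          # ((cx, cy), (axis1, axis2), angle)
    (cx, cy), (a1, a2), angle = ellipse
    if max(a1, a2) / max(min(a1, a2), 1e-6) > 3:
        continue                         # discard very elongated fits (illustrative threshold)
    corners = cv2.boxPoints(ellipse)     # rotated rectangle enclosing the ellipse
    boxes.append(cv2.boundingRect(corners))   # one axis-aligned box per ellipse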

Related

Take the image inside a contour and place the irregular contour at a specific location on another image

I have a rotated object (clips for wires on a motherboard)
on which I have used thresholding and findContours to get the contour of the region of interest (in green). It is an irregular shape that may not always be a rectangle. I know the coordinates where I want to place the center of this object within the section of the motherboard,
but I do not know how to do this without using bg[y:y+obj_h, x:x+obj_w] = obj, which assumes a rectangular shape and would introduce a large noisy perimeter around the rotated object. I have tried using transparency around the object, but that does not work with cv2. The target goal is this
Any help would be appreciated.
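One common way to do this is to copy only the pixels inside the contour through a binary mask; below is a minimal sketch, where obj, contour, bg, x and y are assumed names for the cropped object, its contour in the object's own coordinate frame, the motherboard image and the top-left corner of the target region.

import cv2
import numpy as np

# Build a filled mask of the irregular contour in the object's coordinate frame.
mask = np.zeros(obj.shape[:2], dtype=np.uint8)
cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)

# View into the background at the target location, same size as the object crop.
roi = bg[y:y + obj.shape[0], x:x + obj.shape[1]]

# Copy only the pixels inside the contour; the background keeps everything else,
# so no rectangular "noise perimeter" is introduced.
np.copyto(roi, obj, where=mask[..., None].astype(bool))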

Corner detection: getting rid of unwanted corners

I'd like to find the corners of the following box
However, as you can see, I'm detecting a lot of corners I don't want to find. I'm completely stuck on this one. No matter what I try, I always seem to find corners in the dots on the box. I used the function goodFeaturesToTrack(), but I also tried cornerHarris().
The most important thing to me is to find the coordinates of the corner pixels so I can draw a wire frame.
Kind regards, Schweini
Edit:
To draw the wire frame onto the image, the following process could work.
When you extract the outline of the black box region, the outline consists of 6 straight line segments.
Therefore, you should be able to find at least 6 corners of the box as the intersections of each pair of adjacent line segments.
Additionally, the outlines of the 6 faces of the box can be coarsely estimated from the adjacent line segment pairs (assuming each face is a parallelogram).
This means you can estimate the positions of the remaining two corners (to draw the wire frame).
Furthermore, if you want, comparing those estimates with your corner detection results can refine the coordinates of a corner. A rough code sketch of this process follows below.
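A rough sketch of the process (the filename and all thresholds are illustrative): isolate the dark box, detect straight outline segments with a probabilistic Hough transform, and intersect segment pairs to get corner candidates.

import cv2
import numpy as np

img = cv2.imread("box.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, box_mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)   # dark box -> white
edges = cv2.Canny(box_mask, 50, 150)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=10)

def intersect(l1, l2):
    # Intersection of the two infinite lines through segments (x1,y1,x2,y2) and (x3,y3,x4,y4).
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                              # parallel lines
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

corners = []
if segments is not None:
    segs = [s[0] for s in segments]
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            p = intersect(segs[i], segs[j])
            if p is not None:
                corners.append(p)
# "corners" still needs filtering: keep intersections that lie close to the
# outline and merge near-duplicates to end up with the 6 visible box corners.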

Lucas Kanade Feature Tracking, problem with bounding box and template update

I am implementing a Lucas-Kanade pyramidal tracker in Python based on affine transforms of the neighborhood of features chosen with the Shi-Tomasi corner detector (as described here), for a video with 600 frames.
The algorithm works well: it finds corners in the given bounding box in the first frame and tracks those corners correctly, but there are two problems.
Some of the corners that I detect in the bounding box are not on the object that I want to track but on the background.
I need to transform the bounding box according to the movement of the tracked features, but as the frames advance some features get lost and the bounding box starts growing instead of following the object.
To move the bounding box, I am estimating a similarity transform between the previous and current corners and multiplying this transform with the corners of the bounding box.
How could I fix these problems?
Thank you very much!
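For reference, the bounding-box update described in the question could look roughly like the sketch below; prev_pts, curr_pts and bbox are assumed names for the tracked corner positions (Nx2) and the four box corners (4x2), and estimating the transform with RANSAC is one way to reduce the influence of features that sit on the background.

import cv2
import numpy as np

# Robust similarity (partial affine) transform between previous and current corners.
M, inliers = cv2.estimateAffinePartial2D(
    prev_pts.astype(np.float32), curr_pts.astype(np.float32),
    method=cv2.RANSAC, ransacReprojThreshold=3.0)

if M is not None:
    # Apply the transform to the bounding-box corners.
    bbox = cv2.transform(bbox.reshape(-1, 1, 2).astype(np.float32), M).reshape(-1, 2)
    # Keep only the inlier points for the next frame, so lost or background
    # features stop dragging the box.
    curr_pts = curr_pts[inliers.ravel() == 1]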

Detecting center of image from parabolic mirror?

I have a panoramic one-shot lens from here: http://www.0-360.com/ and I wrote a script using the Python Imaging Library to "unwrap" the image into a panorama. I want to automate this process though, as currently I have to specify the center of the image. Also, getting the radius of the circle would be good too. The input image looks like this:
And the "unwrapped" image looks like this:
So far I have been trying Hough circle detection. The issue I have is selecting the correct parameter values to use. Also, dark objects near the center circle sometimes seem to throw it off.
Other Ideas I had:
Hough Line detection of the unwrapped image. Basically, choose center pixel as center, then unwrap and see if the lines on the top and bottom are straight or "curvy". If not straight, then keep trying with different centers.
Moments/blob detection. Maybe I can find the center blob and find the center of that. The problem is sometimes I get a bright ring in the center of the dark disk as seen in the image above. Also, the issue with dark objects near the center.
Paint the top bevel of the mirror a distinct color like green to make circle detection easier? If I use green and only use the green channel, would the detection be easier?
What's the best method to get the center of this image, and possibly the radius of the outer and inner rings?
Since your image has multiple circles with a common centre, you can use that, like so:
Detect circles with the Hough circle transform and keep the circles that share a common centre.
Then check the ratio between the radii of the concentric circles, since your images keep that ratio constant.
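A rough sketch of this suggestion (the filename, the Hough parameters and the 10-pixel centre tolerance are all illustrative):

import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("mirror.jpg"), cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=2, minDist=10,
                           param1=100, param2=60, minRadius=20, maxRadius=0)

if circles is not None:
    circles = circles[0]                      # shape (N, 3): x, y, radius
    centres = circles[:, :2]
    # Keep the largest group of circles whose centres agree within a few pixels.
    best_centre, best_count = None, 0
    for c in centres:
        close = np.linalg.norm(centres - c, axis=1) < 10
        if close.sum() > best_count:
            best_count = int(close.sum())
            best_centre = centres[close].mean(axis=0)
    radii = circles[np.linalg.norm(centres - best_centre, axis=1) < 10][:, 2]
    # best_centre is the common centre; the ratio min(radii)/max(radii) should
    # stay roughly constant across your images and can serve as a sanity check.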
I guess don't make it too fancy. The black center is at the center of the image, right? Cut a square ROI close to the image center and look for the 'black' region there. Store all the 'black' pixel locations and find their center. You may consider using the CMYK color space for detecting the black region.
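A minimal sketch of this simpler approach (the filename and the darkness threshold of 40 are illustrative): crop a square ROI around the image centre, threshold for dark pixels, and take their centroid as the mirror centre.

import cv2
import numpy as np

img = cv2.imread("mirror.jpg")
h, w = img.shape[:2]
half = min(h, w) // 4
roi = img[h // 2 - half:h // 2 + half, w // 2 - half:w // 2 + half]

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
_, dark = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)   # dark pixels -> 255

# Centroid of the dark region, shifted back into full-image coordinates.
m = cv2.moments(dark, binaryImage=True)
if m["m00"] > 0:
    cx = m["m10"] / m["m00"] + (w // 2 - half)
    cy = m["m01"] / m["m00"] + (h // 2 - half)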

why are my contours not being totally filled in opencv?

I am processing video that looks like this (these are moving blobs):
I am successfully able to do basic processing in opencv to find contours around the blobs, draw circles around them, etc. Below is some tracking info drawn over an inverted image of the original video frame:
I would like to do a projection of these moving blob sequences, such that I get an image with a black trace of the blobs movement. I call this a blob track. I think I'm using the term "projection" correctly, because if this was a 3d stack it would be called a "z-projection" (only the projection this time is through time/frames in the video).
When I do this, I get an image of the blob track that is close to what I want, but there are tiny green pixels inside that I do not expect to be there considering I am filling a contour with black and then merging these filled contours. I get something like this:
Note the small green pixels present inside the blob track. They might seem subtle, but I don't want them there and can't figure out why they are there considering all I am doing in the code is stamping black blobs on top of one-another. The fact that they are green implies to me that they are part of the green background image on which I draw the blob tracks.
I know my filling is working because if I take a single frame instead of making a blob-track I get this, which looks good:
Something must be going wrong with the way I am doing the projection (addition of the filled contours through time).
The code for the single frame (working and shown up above) is:
cv2.drawContours(contourimage,bigbadlistofcontours[0],-1,(0,0,0),-1)
where contourimage is a green image the size of my frame, and bigbadlistofcontours[0] is the first entry in a list of my contours, and as you can see bigbadlistofcontours[0] contains two contours, which represent the two blobs, drawn successfully above.
The code for adding/projecting the multiple frames (not working and having these small green pixels inside) is:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage, xx[0], -1, (0, 0, 0), -1)
    cv2.drawContours(contourimage, xx[1], -1, (0, 0, 0), -1)
    #cv2.fillPoly(contourimage, pts=xx[0], color=(0, 0, 0))
    #cv2.fillPoly(contourimage, pts=xx[1], color=(0, 0, 0))
As you can see I tried it using two methods - one using drawContours, and the other using fillPoly. Both produce small pixels inside the blob track that should not be there. Any idea what could cause these pixels to be there?
Make a small correction to your code and try the code below:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage, [xx[0]], -1, (0, 0, 0), -1)
    cv2.drawContours(contourimage, [xx[1]], -1, (0, 0, 0), -1)
Or simply try the following:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage, xx, -1, (0, 0, 0), -1)
drawContours needs a list of contours as its argument, i.e. a list of numpy arrays. When you passed bigbadlistofcontours[0], it was a list of two numpy arrays, i.e. two contours. But inside the loop you passed xx[0] and xx[1], each of which is a single numpy array. In that case drawContours treats every point in the array as a separate single-point contour, so it draws only isolated points, not the full filled contours.
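A minimal demonstration of the difference (the green canvas and the square contour here are made up for illustration):

import cv2
import numpy as np

canvas = np.full((200, 200, 3), (0, 255, 0), dtype=np.uint8)   # green background
contour = np.array([[[50, 50]], [[150, 50]], [[150, 150]], [[50, 150]]], dtype=np.int32)

# Passing the bare array: each point is treated as its own one-point contour,
# so only four isolated black dots are drawn.
cv2.drawContours(canvas, contour, -1, (0, 0, 0), -1)

# Wrapping it in a list: the whole polygon is filled, as intended.
cv2.drawContours(canvas, [contour], -1, (0, 0, 0), -1)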
More Details on Contours
