Python: How to replace curvilinear points within a circle with a line?

I have the following problem. If you take a look at the image below, you see a microscopic image of a circular strand of DNA, called a teardrop. An initial skeletonization (morphology.skeletonize) yielded the trace in red. The trace is a NumPy array of shape (N, 2), where N is the number of points in the trace. In orange, the strand is traced after I applied an improvement algorithm I designed myself. Now, as you can see, around the bright region the trace twists a little and doesn't resemble the real shape. To work around that, I want to do the following:
Define a fixed radius around the blue dot (e.g. 5 nanometres)
Remove all points of the trace that lie within that radius (this is my problem)
After those points are removed, connect the blue dot with the first point that lies outside the circle
In the end, what I don't know is how to tell Python, for a given array of coordinates, to delete all those that lie within a certain area of the image. That area is, as explained, a circle around the blue dot.
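A minimal sketch of the point-removal step, assuming the trace coordinates and the circle radius are already in the same units (pixels); the names trace, blue_dot and radius, and the example values, are hypothetical:

    import numpy as np

    # Hypothetical inputs: trace is the (N, 2) coordinate array, blue_dot
    # is the circle centre, radius is the cutoff converted from
    # nanometres to the trace's pixel units.
    trace = np.array([[3.0, 4.0], [5.0, 5.0], [6.0, 8.0], [12.0, 14.0]])
    blue_dot = np.array([5.0, 5.0])
    radius = 5.0

    dist = np.linalg.norm(trace - blue_dot, axis=1)  # distance to blue dot
    trace_outside = trace[dist >= radius]            # keep points outside

    # Step 3: prepend the blue dot so it connects to the first point
    # that survived the cut.
    new_trace = np.vstack([blue_dot, trace_outside])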

Related

find the polygon enclosing the given coordinates and find the coordinates of polygon (python opencv)

Example image used in program
I am trying to find the coordinates of a polygon in an image
(just like the flood-fill algorithm: we are given a coordinate and we search the surrounding pixels for the boundary; if a boundary pixel is found we append its coordinate to the list, otherwise we keep searching other pixels) and once all the pixels are traversed the program should stop and return the list of pixels.
Usually the boundary color is black and the image is a grayscale image of a building map.
It seems that flood-fill will be good enough to completely fill a room, despite the extra annotations. After filling, extract the outer outline. Now you can detect the straight portions of the outline by checking the angle formed by three successive points. I would keep a spacing between them to avoid local inaccuracies.
You will find a sequence of line segments, possibly interrupted at corners. Optionally use line fitting to maximize accuracy, and recompute the corners by intersecting the segments. Also consider joining aligned segments that are interrupted by short excursions.
If the rooms are not well closed, flood filling can leak and you are a little stuck. Consider filling with a larger brush, though this can cause other problems.
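A minimal sketch of this fill-then-outline pipeline under stated assumptions: the file name "plan.png", the seed point and the spacing k are hypothetical, and the seed is the given coordinate inside the room:

    import cv2
    import numpy as np

    img = cv2.imread("plan.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

    # Flood-fill the room from the seed; the mask must be 2 px larger.
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    cv2.floodFill(binary, mask, (120, 80), 128)  # seed (x, y), hypothetical

    # Keep only the filled region, then extract its outer outline.
    filled = np.where(binary == 128, 255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=cv2.contourArea).squeeze()  # (M, 2) points

    # Flag straight portions via the angle of three points spaced k apart.
    k = 10
    a, b, c = outline[:-2 * k], outline[k:-k], outline[2 * k:]
    turn = np.arctan2(c[:, 1] - b[:, 1], c[:, 0] - b[:, 0]) \
         - np.arctan2(b[:, 1] - a[:, 1], b[:, 0] - a[:, 0])
    straight = np.abs((turn + np.pi) % (2 * np.pi) - np.pi) < 0.05  # ~3 deg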

Create array from image of chessboard

Basically, I'm working on a robot arm that will play checkers.
There is a camera attached above the board supplying pictures (or even video material, but I guess that is just a series of images, and since checkers is not really a fast-paced game I can just take a picture every few seconds and go from there).
I need to find a way to translate the visual board into, e.g., a 2D array to feed into the AI that computes the robot's moves.
I have line detection working which draws lines along the edges of the squares (with Canny edge detection as a prior step). Moreover, I detect green and red (the squares of my board are green and red) and return each as a mask.
I also have sphere detection in place to find the positions of the pieces, and black and white color detection, each returning a mask of the detected areas.
My question is how I can now combine these pieces and get some type of array out of which I can deduce which squares my pieces are in.
How would I map a 2D (e.g. 8x8) array onto the image of the board using the lines and/or the masks of the red/green tiles? I guess I have to do some type of calibration?
And secondly, is there a way to overlay the masks so that I know which pieces are in which squares?
Well, first of all remember that chess always starts with the same pieces in the same positions, e.g. the black knight starts at B8, which can be [1][7] in your 2D array. If I were you I would start with a 2D array holding the starting positions of all the chess pieces.
As to knowing which pieces are where: you do not need to recognize the pieces themselves. What I would do is detect the empty spots on the chessboard, which is actually quite easy in comparison to really recognizing the different chess pieces.
Once your detection system detects that one of the previously empty spots is no longer empty, you know that a chess piece was moved there. Since you can also detect the newly opened spot (the spot the piece came from), you know exactly which piece was moved. If you keep track of this throughout the whole game, you always know which pieces have moved and where every piece is.
Edit:
As noted in the comments my answer was based on chess instead of checkers. The idea is however still the same but instead of chess pieces you can now put men and kings in the 2D array.
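A minimal sketch of that bookkeeping, assuming an empty-spot detector already yields a set of (row, col) tuples per frame; empty_before and empty_after are hypothetical example values:

    # Hypothetical detector output for two consecutive frames.
    empty_before = {(2, 1), (4, 5)}
    empty_after = {(2, 1), (3, 4)}

    vacated = empty_after - empty_before    # square a piece just left
    occupied = empty_before - empty_after   # square it moved to
    if vacated and occupied:
        src, dst = vacated.pop(), occupied.pop()
        print(f"piece moved from {src} to {dst}")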
Based on either the edge detector or the red/green square detector, calculate the center coordinates of each square on the game board. For example, average the x-coordinate of the left and right edge of a square to get the x-coordinate of the square's center. Similarly, average the y-coordinate of the top and bottom edge to get the y-coordinate of the center.
It might also be possible to find the top, left, bottom and right edge of the board and then interpolate to find the centers of all the squares. The sides of each square are probably more than a hundred pixels in length, so the calculations don't need to be that accurate.
To determine where the pieces are, iterate over a list of the center coordinates and look at the color of the pixel. If it is red or green, the square is empty. If it is black or white, the square has a corresponding piece in it. Fill an array with this information for the AI.
If the images are noisy, it might be necessary to average several pixels near the center or to average the center pixel over several frames.
It would work best if the camera is above the center of the board. If it is off to the side, the edges wouldn't be parallel/orthogonal in the picture, which might complicate the math for finding the centers.
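A minimal sketch of this interpolation approach, assuming the four outer corners of the board have already been located; the corner coordinates, the file name "board.png" and the brightness thresholds are all hypothetical:

    import cv2
    import numpy as np

    frame = cv2.imread("board.png")
    # Outer board corners: top-left, top-right, bottom-right, bottom-left.
    tl, tr, br, bl = (np.array(p, dtype=np.float64)
                      for p in [(40, 50), (680, 55), (675, 690), (35, 685)])

    def square_center(row, col):
        u, v = (col + 0.5) / 8.0, (row + 0.5) / 8.0  # fractional position
        top = tl + u * (tr - tl)                     # along the top edge
        bottom = bl + u * (br - bl)                  # along the bottom edge
        return top + v * (bottom - top)              # then between the two

    def classify(bgr):
        # Bright pixel -> white piece, dark -> black piece; anything in
        # between is the red/green square showing, i.e. the square is empty.
        brightness = int(bgr[0]) + int(bgr[1]) + int(bgr[2])
        if brightness > 600:
            return "white"
        if brightness < 150:
            return "black"
        return "empty"

    board = [[classify(frame[int(y), int(x)])
              for x, y in (square_center(r, c) for c in range(8))]
             for r in range(8)]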

Method to determine polygon surface rotation from top-down camera

I have a webcam looking down on a surface which rotates about a single-axis. I'd like to be able to measure the rotation angle of the surface.
The camera position and the rotation axis of the surface are both fixed. The surface is a distinct solid color right now, but I do have the option to draw features on the surface if it would help.
Here's an animation of the surface moving through its full range, showing the different apparent shapes:
My approach thus far:
Record a series of "calibration" images, where the surface is at a known angle in each image
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(). I iterate through various epsilon values until I find one that yields exactly 4 points (see the sketch after this question).
Order the points consistently (top-left, top-right, bottom-right, bottom-left)
Compute the angle between each pair of adjacent points with atan2.
Use the angles to fit a sklearn linear_model.LinearRegression()
This approach gets me predictions within about 10% of the actual angle with only 3 training images (covering the full positive, full negative, and middle positions). I'm pretty new to both OpenCV and sklearn; is there anything I should consider doing differently to improve the accuracy of my predictions? (Probably increasing the number of training images is a big one??)
I did experiment with cv2.moments directly as my model features, and then some values derived from the moments, but these did not perform as well as the angles. I also tried using a RidgeCV model, but it seemed to perform about the same as the linear model.
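A minimal sketch of the epsilon search described in step 3, assuming contour is the largest contour of the thresholded surface; the search range is an assumption:

    import cv2

    def four_corners(contour):
        """Grow epsilon until cv2.approxPolyDP yields exactly 4 points."""
        perimeter = cv2.arcLength(contour, True)
        for frac in (f / 1000.0 for f in range(1, 200)):
            approx = cv2.approxPolyDP(contour, frac * perimeter, True)
            if len(approx) == 4:
                return approx.reshape(4, 2)
        return None  # no epsilon produced a quadrilateral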
If I understand correctly, you want to estimate the rotation of the polygon with respect to the camera. If you know the dimensions of the object in 3D, you can use cv2.solvePnP to estimate the pose of the object, from which you can get its rotation.
Steps:
Calibrate your webcam and get the intrinsic matrix and distortion coefficients.
Get the 3D measurements of the object corners and find the corresponding points in 2D. Assuming a rectangular planar object, the corners in 3D will be (0, 0, 0), (0, 100, 0), (100, 100, 0), (100, 0, 0).
Use cv2.solvePnP to get the rotation and translation of the object.
The rotation will be the rotation of your object about the axis. Here you can find an example that estimates the pose of a head; you can modify it to suit your application.
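A minimal sketch of those steps under stated assumptions: the intrinsics, distortion coefficients and corner pixel coordinates below are hypothetical placeholders for values you would get from calibration and corner detection:

    import cv2
    import numpy as np

    # 3D corners of a 100x100 planar rectangle (arbitrary units).
    object_points = np.array([(0, 0, 0), (0, 100, 0),
                              (100, 100, 0), (100, 0, 0)], dtype=np.float32)

    # Hypothetical detected corners, ordered to match object_points.
    image_points = np.array([(310, 180), (305, 390),
                             (540, 400), (545, 190)], dtype=np.float32)

    # Hypothetical intrinsics; use the output of cv2.calibrateCamera.
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix of the surface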
Your first step is good -- everything after that becomes way way way more complicated than necessary (if I understand correctly).
Don't think of it as 'learning,' just think of it as a reference. Every time you're in a particular position where you DON'T know the angle, take a picture, and find the reference picture that looks most like it. Guess it's THAT angle. You're done! (There may well be indeterminacies, maybe the relationship isn't bijective, but that's where I'd start.)
You can consider this a 'nearest-neighbor classifier,' if you want, but that's just to make it sound better. Measure a simple distance (Euclidean! Why not!) between the uncertain picture and all the reference pictures -- meaning, between the raw image vectors, nothing fancy -- and choose the angle that corresponds to the minimum distance between observed and known.
If this isn't working -- and maybe, do this anyway -- stop throwing away so much information! You're stripping things down, then trying to re-estimate them, propagating error all over the place for no obvious (to me) benefit. So when you do a nearest neighbor against the reference pictures, why not just use the full picture? (Maybe other elements will change in it? That's a more complicated question, but basically, throw away as little as possible; it should all be useful later in accurately choosing your 'nearest neighbor.')
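A minimal sketch of that nearest-neighbor lookup; ref_images, ref_angles and the toy arrays below are hypothetical stand-ins for the calibration set:

    import numpy as np

    def estimate_angle(query, ref_images, ref_angles):
        """Return the known angle of the reference image closest to query."""
        q = query.astype(np.float64).ravel()
        # Plain Euclidean distance between raw pixel vectors, nothing fancy.
        dists = [np.linalg.norm(q - r.astype(np.float64).ravel())
                 for r in ref_images]
        return ref_angles[int(np.argmin(dists))]

    # Toy stand-ins for real calibration frames and their known angles.
    ref_images = [np.full((4, 4), v, np.uint8) for v in (0, 128, 255)]
    ref_angles = [-30.0, 0.0, 30.0]
    print(estimate_angle(np.full((4, 4), 120, np.uint8),
                         ref_images, ref_angles))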
Another option that is rather easy to implement, especially since you've already done part of the work, is the following (I've used it to compute the orientation of a cylindrical part from 3 images acquired while the tube was rotating):
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(); alternatively, you could find the four sides of your part with LineSegmentDetector (available since OpenCV 3).
Compute the angle alpha, as depicted in the image below.
When your part is rotating, this angle alpha will follow a sine curve. That is, you will measure alpha(theta) = A sin(theta + B) + C. Given alpha you want to know theta, but first you need to determine A, B and C.
You've acquired many "calibration" or reference images; you can use all of these to fit a sine curve and determine A, B and C.
Once this is done, you can determine theta from alpha.
Notice that you have to deal with the ambiguity sin(Pi - a) = sin(a): a single measured alpha corresponds to two possible values of theta. It is not a problem if you acquire more than one image sequentially; if you have a single static image, you have to use an extra mechanism.
Hope I'm clear enough; the implementation really shouldn't be a problem given what you have done already.
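A minimal sketch of the sine fit, assuming thetas holds the known calibration angles (radians) and alphas the corresponding measured corner angles; both arrays are hypothetical:

    import numpy as np
    from scipy.optimize import curve_fit

    def model(theta, A, B, C):
        return A * np.sin(theta + B) + C

    # Hypothetical calibration data: known angles and measured alphas.
    thetas = np.radians([-60.0, -30.0, 0.0, 30.0, 60.0])
    alphas = np.array([0.35, 0.55, 0.80, 1.00, 1.10])

    (A, B, C), _ = curve_fit(model, thetas, alphas, p0=(1.0, 0.0, 0.0))

    def theta_from_alpha(alpha):
        # Invert alpha = A*sin(theta + B) + C. The arcsin branch is
        # ambiguous (see the note above); a second image or an external
        # mechanism is needed to pick the right solution.
        return np.arcsin(np.clip((alpha - C) / A, -1.0, 1.0)) - B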

Smoothing the edge in a large scale

I have images of eyes and eyebrows like the following one.
And I want it to be processed to be smoother on the edges, like the following one, which is drawn by hand.
I've tried morphological opening, but with different sizes of the structuring element (SE) it either fills unexpected areas or leaves some rough edges. Here are the results with a circular SE of size 9 and 7, respectively.
Another idea is to compute the convex hull of the eyebrow and fill it with color. But since the eyebrow usually bends, the convex hull becomes something like the following image, which is also not ideal.
Or should I make every pixel on the edge a vertex of a polygon and then smooth the polygon? Any specific ideas here?
Any idea how I can get the result in the second image?
I'm using Python OpenCV. Code or general idea are both welcomed. Thanks!
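A minimal sketch of the polygon-smoothing idea floated in the question: treat the contour coordinates as periodic signals and low-pass filter them. The file name "mask.png" and sigma=9 are assumptions:

    import cv2
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea).squeeze().astype(np.float64)

    # Smooth x and y independently; mode="wrap" keeps the contour closed.
    smooth = np.stack([gaussian_filter1d(cnt[:, 0], sigma=9, mode="wrap"),
                       gaussian_filter1d(cnt[:, 1], sigma=9, mode="wrap")],
                      axis=1)

    out = np.zeros_like(mask)
    cv2.fillPoly(out, [smooth.round().astype(np.int32)], 255)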

Find 3d perspective on 2d image

I have an image like this:
I want to find a perpendicular line to the red line (I mean a line perpendicular to the track). How can I do this using OpenCV and Python? The problem is that the height of the camera is unknown, and a visible angle of 90 degrees is not a real 90-degree angle. I have found an option to use OpenCV's cv2.projectPoints() method, but it looks like it needs to know the position of the resulting point and be passed some vector. Can somebody help with how I can achieve this? Or is it even possible?
@Chiefir, you don't have enough data to get the perpendicular line you ask for.
I believe your best chance is to find some parallel lines in the image, like those marks in the grass (right where the green eleven is).
Some methods look for parallels in the image automatically, assuming a world of perpendicular straight lines (like a city of roads and buildings), and recover a 3D pose. I don't think those would work on your image.
There is very little information in this one image (almost none, in fact) to accomplish your goal, so every solution will necessarily be imprecise. If this is a frame of a video sequence, you can apply the method below to a sequence of frames around this one to improve its accuracy.
One way is to assume that
The height of the rail from the ground is small (compared to its distance from the camera).
The long edges of the "11" number cut in the grass are perpendicular to the red line.
You can then estimate the vanishing point V of the "11". Then, any line drawn from V to a point of your red line is, by construction, the image of a line on the ground plane orthogonal to the one represented by the red line.
You can improve the accuracy a little by using, instead of your (presumably) hand-drawn red line, a line joining the bottom points of the rail supports, since that would really lie on the ground.
If the poles supporting the railing were vertical (they aren't, as evidenced by the ones supporting the other rail higher in the image), you could compute their vanishing point P and use it in place of V in the method above.
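A minimal sketch of the vanishing-point construction in homogeneous coordinates; all the pixel coordinates below are hypothetical stand-ins for the edges of the "11" and a point on the red line:

    import numpy as np

    def line_through(p, q):
        # Homogeneous line through two image points via the cross product.
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

    # Hypothetical endpoints of the two long edges of the "11".
    e1a, e1b = (100, 400), (180, 120)
    e2a, e2b = (160, 410), (235, 125)

    V = np.cross(line_through(e1a, e1b), line_through(e2a, e2b))
    V = V[:2] / V[2]  # vanishing point in pixel coordinates

    # Any line from V through a point p on the red line is the image of a
    # ground line perpendicular to the one the red line represents.
    p = (300, 350)  # hypothetical point on the red line
    perpendicular = line_through(V, p)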
