Get corners of quadrilateral from outline - python

I have the coordinates of the outline of a (convex) quadrilateral. These are stored as a list in the form of outline = [(x1,y1), (x2,y2), ..., (xn,yn)], though this can be changed/modified in any way convenient. I want to find the vertices of the quadrilateral.
[Image of plotted outline]
So far, I've considered using linear programming to find the coordinates, but because of the finite granularity of the outline, this wouldn't work. It would also probably require heuristics to implement, which I want to avoid for robustness.
Obviously, one can find two of the vertices by taking the points with the highest and lowest y-values, but from there, I'm rather stuck on where to go next.
How can I get the coordinates of the corners?
Note - I've tagged this python, as that's the language that I'm using for my project, though a description of an algorithmic approach would be a much-appreciated answer as well.

If the data points have exact coordinates (they lie exactly on the quad's sides), you can take some extremal point (the topmost, for example), sort the other points by angle, choose the smallest angle, and take the farthest point at that angle as the next vertex; then repeat from the new vertex, and so on. This is essentially convex hull construction with the gift-wrapping algorithm.
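For illustration, here is a minimal sketch of the same idea using scipy's ready-made hull instead of hand-rolled gift wrapping: take the convex hull, then keep only the hull vertices where the boundary actually turns. The helper name and the angle tolerance are arbitrary placeholders.

```python
import numpy as np
from scipy.spatial import ConvexHull

def quad_corners(outline, angle_tol_deg=1.0):
    """Return hull vertices where the boundary turns by more than
    angle_tol_deg; for exact data on a quadrilateral this yields
    the four corners."""
    pts = np.asarray(outline, dtype=float)
    hull = pts[ConvexHull(pts).vertices]   # vertices in counter-clockwise order
    corners = []
    for i in range(len(hull)):
        prev, cur, nxt = hull[i - 1], hull[i], hull[(i + 1) % len(hull)]
        v1, v2 = cur - prev, nxt - cur
        # turn angle at this vertex; ~0 means the point lies mid-edge
        turn = np.degrees(np.arctan2(v1[0] * v2[1] - v1[1] * v2[0], v1 @ v2))
        if abs(turn) > angle_tol_deg:
            corners.append(tuple(cur))
    return corners
```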
If the point positions are not perfect, consider approximating the sides with straight lines using a Hough transform, as sketched below.
Some libraries like OpenCV contain Hough transform implementations, as well as convex hull ones.
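A rough sketch of the Hough route with OpenCV, assuming the points are dense enough to rasterize; the accumulator threshold, the deduplication tolerances, and the in-image corner filter are all placeholders that would need tuning for real data.

```python
import numpy as np
import cv2

def quad_corners_hough(outline, size=512):
    """Rasterize the outline, keep the four strongest distinct Hough
    lines, and intersect them pairwise to recover corner candidates."""
    pts = np.asarray(outline, dtype=float)
    lo, span = pts.min(axis=0), np.ptp(pts, axis=0).max()
    img = np.zeros((size, size), np.uint8)
    ij = np.rint((pts - lo) / span * (size - 1)).astype(int)
    img[ij[:, 1], ij[:, 0]] = 255

    # peaks come back sorted by votes; assumes at least 4 lines are found
    lines = cv2.HoughLines(img, 1, np.pi / 180, threshold=50)[:, 0, :]
    distinct = []
    for rho, theta in lines:
        if all(abs(theta - t) > 0.3 or abs(rho - r) > 20 for r, t in distinct):
            distinct.append((rho, theta))
        if len(distinct) == 4:
            break

    corners = []
    for i in range(len(distinct)):
        for j in range(i + 1, len(distinct)):
            (r1, t1), (r2, t2) = distinct[i], distinct[j]
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-6:
                continue                      # near-parallel opposite sides
            x, y = np.linalg.solve(A, [r1, r2])
            if 0 <= x < size and 0 <= y < size:   # crude in-image filter
                corners.append(tuple(np.array([x, y]) / (size - 1) * span + lo))
    return corners
```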

Find triangle containing point in spherical triangular mesh (Python, spherical coordinates)

Main problem
In Python, I have triangulated the surface of a (unit) sphere using an icosahedral mesh. I have a list simplices of tuples containing the indices of the three vertices of each triangle, and I have two lists describing the coordinates (in radians) of each vertex: its latitude and longitude.
For about a million points, I want to determine which triangle each point is in. I am looking for an efficient algorithm that returns, for each point, the index of its containing triangle (an index into the list simplices).
I am willing to trade memory for speed, so I am fine with constructing a tree or using some other lookup method.
Caveats
The triangles are of roughly equal size, but not exactly, so I suspect that a simple nearest-neighbor KDTree implementation is not exact.
Extra information
The icosahedral mesh has been obtained using the stripy package. It projects the vertices of the icosahedron onto the unit sphere and subsequently bisects the triangles, so that each edge is split in half or, equivalently, each triangle is split into four. stripy has a built-in method for calculating the triangle a point is contained in, but for a mesh refinement of 6 (i.e. 6 bisections) and about a million points, this takes hours. I suspect that this method does not use a tree/lookup structure, and I hope there is a method that improves on this significantly.
1. Compute a latitude/longitude bounding box for each triangle. Remember that the largest-magnitude latitudes may be on an edge (easily found by considering the normal to the great circle including each edge) or (if the pole is enclosed) in the interior.
2. Divide all triangles that cross the periodic longitude boundary in two (or, to be cheap, just divide their bounding boxes).
3. Build an extended-object k-d tree over the triangles (and the triangle pieces from step 2). This still uses only the latitude/longitude values.
4. Run the obvious recursive, conservative containment search to find candidate triangles. (It doesn't matter which piece of a divided triangle you find.)
5. Test carefully for triangle inclusion: for each triangle side, judge which hemisphere (defined by the great circle containing the segment) contains the query point, in a fashion (probably just a cross product on the three-dimensional vectors) that doesn't depend on the order in which the vertices are presented and never produces "on the dividing line". Then every point is guaranteed to be in exactly one triangle.
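A minimal sketch of step 5, assuming triangle vertices are stored counter-clockwise as seen from outside the sphere; a consistent tie-break at exactly zero would still be needed to fully honor the "never on the dividing line" requirement.

```python
import numpy as np

def to_xyz(lat, lon):
    """Latitude/longitude in radians -> unit vector in 3D."""
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def in_spherical_triangle(p, a, b, c):
    """p, a, b, c are unit vectors. For each side, a scalar triple
    product tells which hemisphere of that side's great circle
    contains p; p is inside when all three agree in sign."""
    return all(np.dot(np.cross(u, v), p) >= 0
               for u, v in ((a, b), (b, c), (c, a)))
```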

When to transform GPS co-ordinates

I have no real experience with GIS data, so when what I believed to be a simple problem has turned out to have more subtleties to it, I am dangerously unprepared!
I want to be able to classify a GPS position as inside/outside a polygon defined by GPS co-ordinates. It turns out this is the well-known (but not to me) point-in-polygon problem. I have read many questions/answers on https://gis.stackexchange.com/ (and here e.g. this).
Shapely seems a good solution, but it assumes the co-ordinates are on the same Cartesian plane, i.e. not GPS. So I would first need to transform my GPS points to UTM points.
Do I need to introduce this extra step, however, if the points being compared (i.e. the point and the polygon) are always going to be within the same UTM zone? They should always be within the same town/city, so can I just leave them as GPS and use the lat/long co-ordinates in Shapely?
I also came across this UTM-WGS84 converter so I could convert my lat/long pairs using this package, and then use those UTM pairs in Shapely, but I would like to avoid any extra dependencies where possible.
Point-in-polygon already assumes a 2D restriction, and GPS coordinates are 3D. Right away, that gets you in trouble.
A simple workaround is to discard the GPS height, reducing it to 2D surface coordinates. Your next problem is that your 2D surface is now a sphere. On a spherical surface, a polygon divides the surface in two parts, but there is no obvious "inside". There's a left-hand side and a right-hand side which follows from the order of points in the polygon, but neither side is the obvious "inside". Consider the equator as a trivial polygon: which hemisphere is "inside" the equator?
Next up is the issue of the polygon edges. By definition, these are straight, i.e. line segments. But lines on a spherical surface are weird - they're generally known as great circles. And any two great circles cross in exactly two points. That's not how cartesian lines behave. Worse, the equations for a great circle are not linear when expressed in GPS coordinates, because those are longitude/latitude pairs.
I can imagine that at this point you're feeling a bit confused. You might want to look at this from another side - we have a similar problem with maps. Globe maps are by definition attempts to flatten that non-flat surface. Since that's not exactly possible, you end up with map projections. You can also project the corner points of your polygons on such projections. And because the projections are flat, you can draw the edges on the projection. You now see the problem visually: On two different projections, identical polygons will contain different parts of the world!
So, since we agreed that in the real world, the edges of the polygons are great circles, we should really consider a projection that keeps the great circles straight. There's exactly one family of projections that has this property, and that's the Gnomonic projection. It's a family of projections because you can pick any point as the center.
As it happens, we have one natural point to consider here: the GPS point we're considering. If you put that in the center, draw a gnomonic projection around it, project the polygon edges, and then draw the polygon, you have an exact solution.
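A minimal sketch of that exact test with Shapely, assuming a spherical earth and a polygon whose vertices all lie within 90 degrees of the query point (so the gnomonic projection is well defined); the helper names are mine.

```python
import numpy as np
from shapely.geometry import Point, Polygon

def gnomonic(lat, lon, lat0, lon0):
    """Gnomonic projection of (lat, lon) about center (lat0, lon0),
    all in radians; great circles map to straight lines."""
    cos_c = (np.sin(lat0) * np.sin(lat)
             + np.cos(lat0) * np.cos(lat) * np.cos(lon - lon0))
    x = np.cos(lat) * np.sin(lon - lon0) / cos_c
    y = (np.cos(lat0) * np.sin(lat)
         - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0)) / cos_c
    return x, y

def inside(lat, lon, polygon_latlon):
    """Project the polygon around the query point, which lands at the
    origin, then run a plain planar point-in-polygon test."""
    xy = [gnomonic(la, lo, lat, lon) for la, lo in polygon_latlon]
    return Polygon(xy).contains(Point(0.0, 0.0))
```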
Except that the actual earth isn't spherical. Sorry. How exact did you need the test to be, anyway?

Remove outer hull of 2D scatterpoints in Python

For my project I use 2D images from a telescope. The outer border of each image is known to be oversaturated with points due to a telescope malfunction. Therefore I want to extract the points that make up the outer border of the 2D image.
So what I want to do is somehow extract the points that make up the outer shell, with a shell width chosen according to my preference.
What I have tried so far:
In Python I have tried finding the points that make up the edge by using scipy.spatial.ConvexHull to find the outer points and then removing them. Doing this in a loop should remove the outer edge with a width that depends on the number of iterations. However, this method depends on the point density and removes fewer points in places on the edge where the density is high. What I want is for an approximately equal width of outer edge to be removed across the whole image; see the images below:
To show what I mean, I have added the ConvexHull result; in red are the points it gives as outer edge points after 15 iterations:
For clarification, this is the desired result I would like my algorithm to give me: an outer edge of equal width over the whole image, independent of point density.
Since you showed only ideas and graphics without code, I will do the same.
I see several ways to get the smaller polygon within your convex hull with a near-constant width between them. There are also variations on each. I illustrate with a convex hull that is a simplified version of the one in your graphics. Each of my solutions ignores the majority of points in the problem and uses only the vertices of the convex hull, so the "point density" is ignored.
Before choosing a polygon, you could find the "center point" of your convex hull. There are multiple ways to define this. You could use the centroid of the vertices of the hull, where the x- and y-coordinates are the averages of the coordinates of the vertices, but this biases toward parts of the hull with many small segments. You could use the center of the bounding rectangle, where the x- and y-coordinates are the average of the maximum and the minimum coordinates of the hull's vertices. This is the approach I used in my graphics. There are other possible "center points."
My first inner polygon moves each vertex a proportional distance toward the center point. In my example, I moved each point one-fourth of the distance toward the center point.
My second inner polygon moves the vertices a fixed distance toward the center point. I chose a distance one-fourth of the average distance of the vertices from the center point. Note that for this particular example there is very little difference between this polygon and the previous one. The differences would be more obvious for a hull where some points are much closer to the center point than others.
My third polygon abandons the center point. It moves each side of the hull a fixed distance toward the inside of the hull. The intersections of these new segments are used to define the new polygon. In other words, I did "inward polygon offsetting" or "polygon buffering." This is a non-trivial task in computational geometry, but some discussion on this task and similar tasks can be found at this SO question. This does look different from the other polygons, since the smaller sides of the hull tend to shrink or completely disappear from the result.
Choose whichever polygon suits your needs; the first two are easier to compute than the third, but the third comes closest to your ideal of "equal width of outer edge."
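A sketch of all three constructions, using scipy for the hull and Shapely's negative buffer for the inward offset; the one-fourth factor mirrors the illustrations above, and the function name is mine.

```python
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import Polygon

def inner_polygons(points, frac=0.25):
    pts = np.asarray(points, dtype=float)
    hull = pts[ConvexHull(pts).vertices]
    # "center point": center of the bounding rectangle
    center = (hull.max(axis=0) + hull.min(axis=0)) / 2.0

    # 1) move each vertex a proportional distance toward the center
    proportional = center + (1.0 - frac) * (hull - center)

    # 2) move each vertex the same fixed distance toward the center
    d = np.linalg.norm(hull - center, axis=1)
    step = frac * d.mean()
    fixed = hull + (center - hull) / d[:, None] * step

    # 3) inward polygon offsetting, i.e. a negative buffer
    offset = Polygon(hull).buffer(-step)
    return proportional, fixed, offset
```

Points of the original cloud that fall outside the chosen inner polygon would then be the "shell" to discard.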

Find the most significant corner of a skeleton and segment the skeleton at that corner

I have images of ore seams which I have first skeletonised (medial axis multiplied by the distance transform), then extracted corners (see the green dots). It looks like this:
The problem is to find a turning point and then segment the seam by separating the seam at the turning point. Not all skeletons have turning points, some are quite linear, and the turning points can be in any orientation. But the above image shows a seam which does have a defined turning point. Other examples of turning points look like (using ASCII): "- /- _". "X" turning points don't really exist.
I've tried a number of methods including downsampling the image, curve fitting, k-means clustering, and corner detection at various thresholds and window sizes, and I haven't figured it out yet. (I'm new to using scikit.)
The technique must be able to give me some value which I can use to heuristically determine whether there is a turning point or not.
What I'd like to do is some sort of two-line ("piecewise"?) regression and find an intersection, or some sort of rotated polynomial regression, then determine whether a turning point exists and, if it does, the coordinate that best represents it. Here is my work in progress: https://gist.github.com/anonymous/40eda19e50dec671126a
From there, I learned that a watershed segmentation with appropriate label coordinates should be able to segment the skeleton.
I found this resource: Fit a curve for data made up of two distinct regimes
But I wasn't able to figure out how to apply it to my current situation. More importantly, there's no way for me to guess a priori what the initial coefficients for the fitting function should be, since the skeletons can be in any orientation.
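A minimal sketch of the two-line idea described above, assuming a PCA rotation removes the orientation problem (so no initial coefficients are needed); the residual-ratio threshold of 0.5 is an arbitrary placeholder for the heuristic value, and the function name is mine.

```python
import numpy as np

def turning_point(points):
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    # rotate into the principal-axis frame so the skeleton runs along x
    _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
    xy = (pts - mean) @ vt.T
    order = np.argsort(xy[:, 0])
    x, y = xy[order, 0], xy[order, 1]

    def ssr(xs, ys):
        # sum of squared residuals of a single straight-line fit
        coeff = np.polyfit(xs, ys, 1)
        return float(np.sum((np.polyval(coeff, xs) - ys) ** 2))

    single = ssr(x, y)
    best_k, best_ssr = None, single
    for k in range(2, len(x) - 2):             # brute-force breakpoint search
        total = ssr(x[:k], y[:k]) + ssr(x[k:], y[k:])
        if total < best_ssr:
            best_k, best_ssr = k, total
    if best_k is None or best_ssr > 0.5 * single:   # placeholder threshold
        return None                            # skeleton is roughly linear
    # map the breakpoint back into the original image frame
    return tuple(np.array([x[best_k], y[best_k]]) @ vt + mean)
```

The returned coordinate could then serve as a label for the watershed segmentation mentioned above.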

OpenCV find object's position with solvePnPRansac with not-corresponding points

I am trying to find object's position relative to camera position in real-world coordinates by tracking a known 2D LED pattern on the object.
I did camera calibration. I was able to successfully detect the LEDs in the pattern and find their exact coordinates in the image frame. These points, however, do not correspond 1-to-1 to the known coordinates in the pattern; they are in random order. The correspondence is important in functions like solvePnPRansac or findHomography, which would be my first choice to use.
How can I find the correspondence between these sets of points, or should I use some other function to calculate the transformation, just as solvePnPRansac does?
As you did not ask about the way to estimate the relative pose between your object and your camera, I will leave that topic aside and focus on the way to find correspondences between each LED and its 2D projection.
In order to obtain a unique 1-to-1 correspondence set, the LED pattern you use should be unambiguous with respect to rotation. For example, you may use a regular NxN grid with the top-left cell containing an additional LED, or LEDs located on a circle with one extra LED underneath a single one, etc. Then, the method to find the correspondences depends on the pattern you chose.
In the case of the circle pattern, you could do the following:
1. Estimate the center of gravity of the points.
2. Find the disambiguating point, which is the only one not lying on the circle, and define the closest of the other points as the first observed point.
3. Order the remaining points by increasing angle with respect to the center of gravity (i.e. clockwise order).
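A sketch of those three steps, assuming image coordinates (y pointing down, so increasing mathematical angle reads as clockwise on screen); the function name is mine.

```python
import numpy as np

def order_circle_pattern(points):
    """Return indices into `points`: the disambiguating LED first,
    then the on-circle LEDs in clockwise order starting next to it."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                      # center of gravity
    r = np.linalg.norm(pts - center, axis=1)
    # the disambiguating LED is the radial outlier
    extra = int(np.argmax(np.abs(r - np.median(r))))
    rest = np.delete(np.arange(len(pts)), extra)
    # first observed point: the on-circle LED closest to the extra one
    dists = np.linalg.norm(pts[rest] - pts[extra], axis=1)
    first = rest[int(np.argmin(dists))]
    # order the on-circle LEDs by increasing angle about the center
    ang = np.arctan2(pts[rest, 1] - center[1], pts[rest, 0] - center[0])
    order = rest[np.argsort(ang)]
    start = int(np.where(order == first)[0][0])
    return [extra] + list(np.roll(order, -start))
```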
In the case of the regular grid pattern, you could try the following:
1. Find the four corners of the grid (those with min/max coordinates).
2. Estimate the homography which transforms these four corners to the corners of a regular NxN square (with orthogonal angles).
3. Transform the other points using this homography.
4. Find the disambiguating point, which is the only one for which X-floor(X) and Y-floor(Y) are close to 0.5, and define the closest of the four initial corners as the first observed point.
5. Order the remaining points by increasing angle with respect to the center of the grid and decreasing distance to the center of the grid.
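And a sketch of steps 1-3 for the grid pattern; picking corners via coordinate sums and differences is a crude assumption that works for roughly axis-aligned views, and the function name is mine.

```python
import numpy as np
import cv2

def rectify_grid(points, n):
    """Map the four extreme detections onto an (n-1) x (n-1) square and
    rectify all LED centers into grid coordinates."""
    pts = np.asarray(points, dtype=np.float32)
    # crude corner pick: extremes of x+y and x-y
    s, d = pts.sum(axis=1), pts[:, 0] - pts[:, 1]
    corners = pts[[s.argmin(), d.argmax(), s.argmax(), d.argmin()]]
    target = np.float32([[0, 0], [n - 1, 0], [n - 1, n - 1], [0, n - 1]])
    H = cv2.getPerspectiveTransform(corners, target)
    return cv2.perspectiveTransform(pts[None], H)[0]   # grid coordinates
```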
You could also study the algorithm used by the function findChessboardCorners (see calibinit.cpp in the calib3d module), which uses a similar approach to order the detected corners.
