How to find the sides of a polygon in an image? - python

I have a list of connected pixels (tuples of x and y coordinates) forming the outer boundary of a polygon in an image. How can I pick out its sides, assuming they are all straight? What should I do if the lines are not perfectly straight and contain deviations? Is there anything ready-made in the Python Imaging Library or NumPy for such a task? Thanks!

You could use OpenCV's approxPolyDP.
You can reduce the number of vertices using that function.
The function approxPolyDP approximates a curve or a polygon with
another curve/polygon with fewer vertices so that the distance between
them is less than or equal to the specified precision. It uses the
Douglas-Peucker algorithm:
http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
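A minimal sketch of how that might look, assuming the boundary pixels are already available as a list of (x, y) tuples (the variable names and the epsilon heuristic are illustrative):

```python
import numpy as np
import cv2

# boundary pixels of the polygon, assumed given as (x, y) tuples
boundary = [(10, 10), (120, 12), (118, 80), (12, 78)]

# approxPolyDP expects an N x 1 x 2 array, like a contour from cv2.findContours
curve = np.array(boundary, dtype=np.int32).reshape(-1, 1, 2)

# epsilon = maximum allowed deviation from the original curve, in pixels;
# a small fraction of the perimeter is a common heuristic
epsilon = 0.01 * cv2.arcLength(curve, True)
corners = cv2.approxPolyDP(curve, epsilon, True).reshape(-1, 2)  # True = closed curve

# consecutive corner pairs are the sides of the polygon
sides = list(zip(corners, np.roll(corners, -1, axis=0)))
print(corners)
```

Increasing epsilon is what handles the "not too straight" case: small deviations along a side are absorbed into a single segment.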

Related

Rasterization algorithms with anti-aliasing in Python

I have vector graphics. (In my first case, it's the epigraph of a function whose formula is given. So it is a shape whose outline is given by a parametric curve.)
I want to rasterize this image with anti-aliasing. So I want raster graphics, i.e. a numpy array. I want to obtain this array in a low-level way, avoiding libraries that are meant for object-oriented, interactive GUI visualizations with plot axes, etc. I just want an array. The only problem with doing something like Y,X = np.ogrid(...) and then picture = Y > f(X) is that the result is not anti-aliased. (Note that blurring that binary picture is worse than a good dedicated anti-aliasing algorithm.) How can I rasterize with anti-aliasing in Python without any overkill GUI-centered libraries?
If the curve is given by an implicit equation F(x,y)=0, evaluate the value of the function at the four corners of every pixel. If the signs are the same, the pixel is wholly outside or inside. If the signs vary, the area inside the polygon formed by the corners and the points along the edges where the function vanishes (find these by a mere linear interpolation) tells you the mixture of background and foreground colors (alpha blending coefficient).
Tracing the polygon isn't that difficult: traverse the four edges of the square and keep the positive vertices and the zero-crossing points in the order you meet them. You will get anything from a triangle to a hexagon. The area is obtained by the shoelace formula.
The case of a parametric function is a little harder. You need to find the intersections of the curve with the grid lines, and perform the area estimation in all cells that are traversed. For this, draw the curve as a polyline (this is called flattening), and slice the polyline with horizontals, then verticals.
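A minimal sketch of the implicit-curve case described above, taking the inside to be F > 0 (the example function and grid resolution at the end are just placeholders):

```python
import numpy as np

def coverage(F, x0, y0, x1, y1, nx, ny):
    """Anti-aliased rasterization of the region {F > 0} on an nx-by-ny pixel grid."""
    xs = np.linspace(x0, x1, nx + 1)   # pixel corner coordinates
    ys = np.linspace(y0, y1, ny + 1)
    img = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # corners of pixel (i, j), walked counter-clockwise
            corners = [(xs[i], ys[j]), (xs[i + 1], ys[j]),
                       (xs[i + 1], ys[j + 1]), (xs[i], ys[j + 1])]
            vals = [F(x, y) for x, y in corners]
            if all(v > 0 for v in vals):          # pixel wholly inside
                img[j, i] = 1.0
                continue
            if all(v <= 0 for v in vals):         # pixel wholly outside
                continue
            # mixed pixel: polygon of positive corners and zero crossings
            poly = []
            for k in range(4):
                (xa, ya), va = corners[k], vals[k]
                (xb, yb), vb = corners[(k + 1) % 4], vals[(k + 1) % 4]
                if va > 0:
                    poly.append((xa, ya))
                if (va > 0) != (vb > 0):
                    t = va / (va - vb)            # linear interpolation to F = 0
                    poly.append((xa + t * (xb - xa), ya + t * (yb - ya)))
            # shoelace area of the polygon, normalized by the pixel area
            area = 0.0
            for k in range(len(poly)):
                xa, ya = poly[k]
                xb, yb = poly[(k + 1) % len(poly)]
                area += xa * yb - xb * ya
            pixel_area = (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
            img[j, i] = abs(area) / 2.0 / pixel_area
    return img

# placeholder example: the epigraph y >= sin(x), i.e. F(x, y) = y - sin(x)
img = coverage(lambda x, y: y - np.sin(x), 0.0, -1.5, 2 * np.pi, 1.5, 400, 200)
```

The parametric case would replace the per-pixel corner evaluation with the flattening-and-slicing step described below.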
Manim might be able to rasterize epigraphs and parametric curves well and fast. Its community edition seems actively maintained.
Edits/comments with details are welcome.

How to divide a polygon into tiny polygons of a particular size?

I would like to divide/cut an irregular polygon into small polygons of a particular size (1.6 m x 1 m) in such a way that as much of the irregular polygon's area as possible is utilized (an optimization model).
The length and width of the small polygons can be interchanged (either 1.6 m x 1 m or 1 m x 1.6 m).
So, in the end, I need as many polygons of size 1.6 m x 1 m as possible.
You may consider it a packing problem: I need to pack as many rectangles of size 1.6 m x 1 m as possible inside a polygon. The rectangles can be translated and rotated but must not intersect each other.
I used the "Create Grid" feature, but it just cuts the whole polygon in one fixed orientation.
What I also want is that the polygon can additionally be cut in the vertical orientation (1 m x 1.6 m).
So, I would like to know whether there is a plugin for this in QGIS/ArcGIS, or any Python script, for this kind of polygon optimization.
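There may not be a ready-made plugin, but a crude Python starting point for the grid idea could look like the sketch below. It is not an optimizer; it only lays a regular grid in each orientation over the bounding box and keeps the fully contained cells. shapely is assumed to be available, and the polygon coordinates are illustrative:

```python
from shapely.geometry import Polygon, box

def grid_pack(poly, w, h):
    """Lay a regular w x h grid over poly's bounding box and keep fully contained cells."""
    minx, miny, maxx, maxy = poly.bounds
    cells = []
    y = miny
    while y + h <= maxy:
        x = minx
        while x + w <= maxx:
            cell = box(x, y, x + w, y + h)
            if poly.contains(cell):        # keep only rectangles lying entirely inside
                cells.append(cell)
            x += w
        y += h
    return cells

# illustrative irregular polygon, coordinates in meters
poly = Polygon([(0, 0), (10, 0), (12, 6), (5, 9), (-1, 5)])

# try both orientations and keep whichever fits more rectangles
placements = max(grid_pack(poly, 1.6, 1.0), grid_pack(poly, 1.0, 1.6), key=len)
print(len(placements), "rectangles placed")
```

A real packing optimizer would additionally search over grid offsets and rotations, or cast the problem as an integer program; this only shows the containment test.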

Find triangle containing point in spherical triangular mesh (Python, spherical coordinates)

Main problem
In Python, I have triangulated the surface of a (unit) sphere using an icosahedral mesh. I have a list simplices of tuples containing the indices of the three vertices of each triangle, and I have two lists describing the coordinates (in radians) of each vertex: its latitude and longitude.
For about a million points, I want to determine which triangle each point is in. I am looking for an efficient algorithm that returns, for each point, the index of its containing triangle (indices corresponding to the list simplices).
I am willing to sacrifice memory for efficiency, so I am fine with constructing a tree or using some other lookup method.
Caveats
The triangles are of roughly equal size, but not exactly, so I suspect that a simple nearest-neighbor KDTree implementation is not exact.
Extra information
The icosahedral mesh has been obtained using the stripy package. It projects the vertices of the icosahedron onto the unit sphere and subsequently bisects the triangles, so that each edge is split in half, or equivalently, each triangle is split into four. stripy has a built-in method for calculating the triangle a point is contained in, but for a mesh refinement of 6 (i.e. 6 bisections) and about a million points, this takes hours. I suspect that this method does not make use of a tree/lookup structure, and I hope there is a method that improves on it significantly.
Compute a latitude/longitude bounding box for each triangle. Remember that the largest-magnitude latitudes may be on an edge (easily found by considering the normal to the great circle including each edge) or (if the pole is enclosed) in the interior.
Divide all triangles that cross the periodic longitude boundary in two—or, to be cheap, just their bounding boxes.
Build an extended-object k-d tree over the triangles (and triangle pieces from above). This still uses only the latitude/longitude values.
Run the obvious recursive, conservative containment search to find candidate triangles. (It doesn’t matter which piece of the divided triangles you find.)
Test carefully for triangle inclusion: for each triangle side, judge which hemisphere (defined by the great circle containing the segment) contains the query point in a fashion (probably just a cross product on the three-dimensional vectors) that doesn’t depend on the order in which the vertices are presented and never produces “on the dividing line”. Then every point is guaranteed to be in exactly one triangle.
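A minimal sketch of that final inclusion test on 3D unit vectors (the k-d tree candidate search is omitted; latitudes and longitudes are assumed to be in radians, and the example vertices are made up):

```python
import numpy as np

def to_xyz(lat, lon):
    """Unit vector on the sphere from latitude/longitude in radians."""
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def in_spherical_triangle(p, a, b, c):
    """True if unit vector p lies in the spherical triangle with vertices a, b, c.

    For every edge, p must lie in the same hemisphere (bounded by the great
    circle through that edge) as the opposite vertex, so the result does not
    depend on the order in which the vertices are given.
    """
    for u, v, w in ((a, b, c), (b, c, a), (c, a, b)):
        n = np.cross(u, v)                       # normal of the edge's great circle
        if np.dot(n, p) * np.dot(n, w) < 0:      # p and w on opposite sides
            return False
    return True

# tiny usage example with made-up vertices and query point (radians)
a, b, c = to_xyz(0.1, 0.0), to_xyz(0.0, 0.2), to_xyz(0.2, 0.2)
print(in_spherical_triangle(to_xyz(0.10, 0.13), a, b, c))
```

Points lying exactly on a shared edge would still need the consistent tie-break described above so that each point ends up in exactly one triangle.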

Defining non-linear limiting lines to dataset in python

I have some set of (x, y) data. Plotting this in Python simply gives a scatter of points (plot not shown).
I want to be able to fit some lines or contours to this data, defining the inner and outer radii, such that I can then randomly select a point within those limits.
However, I have no idea where to even start. Any ideas?
If those are concentric circles, I would calculate the average x and y coordinates (that would be the center) and then compute the distances between each point and the center. The maximal and minimal distances would be the radii of the outer and inner circles. (You could also choose other percentiles if you wish.)
If they are not circles but something more complex, you could approximate the outer shape as a polygon using the convex hull, and the inner one using something like marching squares (or use marching squares for both). Then you could do a point-in-polygon test on the points you choose.
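A minimal numpy sketch of the concentric-circle case (the x and y arrays here are synthetic stand-ins for the actual data):

```python
import numpy as np

# synthetic stand-in for the real (x, y) data: points scattered in an annulus
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)
r = rng.uniform(2.0, 3.0, 1000)
x, y = r * np.cos(theta), r * np.sin(theta)

# center = mean of the coordinates, radii from the distances to the center
cx, cy = x.mean(), y.mean()
d = np.hypot(x - cx, y - cy)
r_in, r_out = np.percentile(d, [1, 99])   # or simply d.min(), d.max()

# draw a random point uniformly (by area) inside the annulus
phi = rng.uniform(0, 2 * np.pi)
rad = np.sqrt(rng.uniform(r_in**2, r_out**2))
px, py = cx + rad * np.cos(phi), cy + rad * np.sin(phi)
```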

OpenCV find object's position with solvePnPRansac with not-corresponding points

I am trying to find object's position relative to camera position in real-world coordinates by tracking a known 2D LED pattern on the object.
I did the camera calibration. I was able to successfully detect the LEDs in the pattern and find their exact coordinates in the image frame. These points, however, do not correspond 1-to-1 to the known coordinates in the pattern; they are in random order. The correspondence matters in functions like solvePnPRansac or findHomography, which would be my first choice to use.
How can I find the correspondence between these two sets of points, or should I use some other function that calculates the transformation the way solvePnPRansac does?
As you did not ask about the way to estimate the relative pose between your object and your camera, I will leave that topic aside and focus on the way to find correspondences between each LED and its 2D projection.
In order to obtain a unique 1-to-1 correspondence set, the LED pattern you use should be unambiguous with respect to rotation. For example, you may use a regular NxN grid with the top-left cell containing an additional LED, or LEDs located on a circle with one extra LED underneath a single one, etc. Then, the method to find the correspondences depends on the pattern you chose.
In the case of the circle pattern, you could do the following:
Estimate the center of gravity of the points
Find the disambiguating point, which is the only one not lying on the circle, and define the closest of the other points as the first observed point
Order the remaining points by increasing angle with respect to the center of gravity (i.e. clockwise order); a rough sketch of these steps follows below
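A rough sketch of those three steps (pts is assumed to be an N x 2 array of detected LED image coordinates; identifying the marker by its deviation from the median radius is an illustrative heuristic):

```python
import numpy as np

def order_circle_pattern(pts):
    """Order detected LEDs of a circle-plus-marker pattern in a reproducible way."""
    c = pts.mean(axis=0)                                  # center of gravity
    d = np.linalg.norm(pts - c, axis=1)
    marker = int(np.argmax(np.abs(d - np.median(d))))     # point farthest from the common radius
    circle = np.delete(np.arange(len(pts)), marker)
    # first observed point: the circle point closest to the marker
    first = circle[np.argmin(np.linalg.norm(pts[circle] - pts[marker], axis=1))]
    # order the circle points by angle around the center, starting from 'first'
    ang = np.arctan2(pts[circle, 1] - c[1], pts[circle, 0] - c[0])
    order = circle[np.argsort(ang)]
    start = int(np.where(order == first)[0][0])
    return np.concatenate(([marker], np.roll(order, -start)))

# the returned index order can then be matched 1-to-1 against the known pattern layout
```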
In the case of the regular grid pattern, you could try the following:
Find the four corners of the grid (those with min/max coordinates)
Estimate the homography which transforms these four corners to the corners of a regular NxN square (with orthogonal angles)
Transform the other points using this homography
Find the disambiguating point, which is the only one for which X - floor(X) and Y - floor(Y) are close to 0.5, and define the closest of the four initial corners as the first observed point
Order the remaining points by increasing angle and decreasing distance with respect to the center of the grid; a rough sketch of the homography step follows below
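A rough sketch of the rectification step only (the corner detection and the grid size N are assumed given; function and variable names are illustrative):

```python
import numpy as np
import cv2

def rectify_grid(pts, corners, N):
    """Map detected grid points into regular N x N grid coordinates.

    pts:     M x 2 array of detected LED image points
    corners: 4 x 2 array of the detected outer grid corners, in a consistent
             cyclic order (the rotation ambiguity is resolved later by the marker)
    N:       number of cells per side of the grid
    """
    dst = np.float32([[0, 0], [N, 0], [N, N], [0, N]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    rect = cv2.perspectiveTransform(np.float32(pts).reshape(-1, 1, 2), H)
    return rect.reshape(-1, 2)

# usage idea: after rectification the regular LEDs sit near integer positions,
# so the disambiguating LED is the one whose fractional parts are near 0.5:
#   rect = rectify_grid(pts, corners, N)
#   frac = rect - np.floor(rect)
#   marker = np.argmin(np.linalg.norm(frac - 0.5, axis=1))
```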
You could also study the algorithm used by the function findChessboardCorners (see calibinit.cpp in the calib3d module), which uses a similar approach to order the detected corners.
