Optimizing Polygon Search - python

I split the world into X random polygons.
Then I am given a coordinate C1, for instance (-21.45, 7.10), and I want to assign the right polygon to that coordinate.
The first solution is to apply my ‘point_in_polygon’ algorithm (given a set of coordinates that defines a polygon and a coordinate that defines a point, tell me if the point is inside or not) on each polygon until I find the right one.
But that is very expensive if I have a lot of points to put in a lot of polygons.
An improvement on that relies on the following idea:
To optimize the search, I create a grid (a collection) with steps n, k, in which I pre-register every pair of coordinates:

grid = set()
for i in range(-180, 181, n):      # longitude, step n (integer steps assumed)
    for j in range(-90, 91, k):    # latitude, step k
        grid.add((i, j))
Then I create a dictionary, and for each pair in the collection I find the corresponding polygon:

my_dict = {}
for g in grid:
    for p in polygons:
        if point_in_polygon(g, p):
            my_dict[g] = p
            break    # each grid node belongs to exactly one polygon
Then, when I receive C1, I look for the closest coordinate in my grid, let’s say g1.
Thanks to my_dict, I can quickly get p1 = my_dict[g1].
Then I compute point_in_polygon(C1, p1), which is likely to be true. If it's not, I find the closest g that is assigned to a different polygon and redo the test, and so on until I have found the right polygon.
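For concreteness, a minimal sketch of that lookup, assuming the my_dict built above and a point_in_polygon(point, polygon) predicate; the ring-by-ring walk over neighboring grid nodes stands in for "find the closest g assigned to a different polygon":

def find_polygon(c, my_dict, n, k, point_in_polygon, max_rings=10):
    # Snap c to the nearest grid node.
    g = (round(c[0] / n) * n, round(c[1] / k) * k)
    tested = []                        # polygons already ruled out
    for r in range(max_rings + 1):     # ring 0 is the nearest node itself
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                if max(abs(di), abs(dj)) != r:
                    continue           # visit only nodes on the current ring
                p = my_dict.get((g[0] + di * n, g[1] + dj * k))
                if p is None or any(p is q for q in tested):
                    continue
                if point_in_polygon(c, p):
                    return p
                tested.append(p)
    return None                        # give up and fall back to brute force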
Now, the question is: what is the optimal n, k to create the grid?
So that I can find the right polygon in the minimum number of steps.
I don't want the step too low, because searching for the closest g that is assigned to a different polygon might then be expensive.
I don't want it too high either, because then I might be missing some polygons and the search never converges.
My intuition is that the smallest polygon is going to dictate the step size.
I am not sure if this is a programming problem, a maths problem, or just something I can find empirically, that's why I ask it here.
Any inputs appreciated!

Let me suggest a slight modification to your grid. Currently, you store for each cell the polygon that the cell's center belongs to. Instead, store all the polygons that overlap the cell. Then, whenever you see that a cell has only a single overlapping polygon, you don't need to do any inclusion testing. The grid can be built by methods of conservative rasterization (note that the referenced article is not focused on conservative but rather general rasterization).
The efficiency of your grid correlates with the ratio of single-polygon cells to total cells (because this is the probability of not having to perform polygon-inclusion tests). The storage itself is pretty cheap. You can use a dense array and get constant-time access to the cells. Hence, from a theoretical point of view, you should have as many cells as possible (as you add cells, the single-polygon cell ratio increases). In practice, you might find that cache and other memory effects make large grids impractical. However, there is no good way to know other than to test. So, just try a couple of sizes on a few different machines and try to find a good fit.
If I had to guess, I would say that your cells should be square and have an area of about 1% - 5% of the average polygon area. Also, more compact polygons can be handled more efficiently than many long and thin polygons.
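A minimal sketch of such a grid, assuming polygons given as vertex lists and a point_in_polygon predicate; for brevity it uses bounding-box overlap instead of true conservative rasterization - still conservative (it may list extra polygons per cell, never fewer), just coarser:

from collections import defaultdict

def build_grid(polygons, cell_size):
    # Map each cell index to every polygon whose bounding box touches it.
    grid = defaultdict(list)
    for poly in polygons:                      # poly: list of (x, y) vertices
        xs = [x for x, y in poly]
        ys = [y for x, y in poly]
        i0, i1 = int(min(xs) // cell_size), int(max(xs) // cell_size)
        j0, j1 = int(min(ys) // cell_size), int(max(ys) // cell_size)
        for i in range(i0, i1 + 1):
            for j in range(j0, j1 + 1):
                grid[(i, j)].append(poly)
    return grid

def locate(point, grid, cell_size, point_in_polygon):
    cell = (int(point[0] // cell_size), int(point[1] // cell_size))
    candidates = grid.get(cell, [])
    if len(candidates) == 1:     # single-polygon cell: no inclusion test
        return candidates[0]     # (valid when the polygons tile the plane)
    for poly in candidates:
        if point_in_polygon(point, poly):
            return poly
    return None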

Pick any point and draw a line straight down from that point. The first polygon edge you hit tells you what polygon the point is in.
So, if you don't want to do polygon tests, then instead of dividing the space into a regular grid, first cut it into strips with vertical cuts that go through all polygon intersections.
Now, within each strip none of the polygon edges cross or end, so you can make an ordered list of all those edges from bottom to top.
If you want to find the polygon that contains a point, first do a binary search using the x coordinate to find the proper strip. Then, in the list of edges that span the strip, do a binary search using the y coordinate to find the closest edge underneath the point; that tells you what polygon the point is in.
Google 'trapezoidal decomposition' to find lots of information about similar techniques.
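A sketch of the query side, assuming the decomposition has already been built: cuts is the sorted list of x-coordinates of the vertical cuts, and strips[i] holds (edge, polygon_above) pairs for the i-th strip, ordered bottom to top (all names here are illustrative):

import bisect

def edge_height(edge, x):
    # Height of a strip-spanning edge at abscissa x (linear interpolation).
    (x1, y1), (x2, y2) = edge
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

def locate(point, cuts, strips):
    # cuts: sorted x-coordinates of the cuts (point assumed inside the range).
    # strips[i]: (edge, polygon_above) pairs between cuts[i] and cuts[i+1],
    # ordered bottom to top; edges span the strip and never cross.
    x, y = point
    i = bisect.bisect_right(cuts, x) - 1        # which strip?
    edges = strips[i]
    lo, hi = 0, len(edges)                      # binary search on height
    while lo < hi:
        mid = (lo + hi) // 2
        if edge_height(edges[mid][0], x) <= y:
            lo = mid + 1
        else:
            hi = mid
    if lo == 0:
        return None                             # below every edge
    return edges[lo - 1][1]                     # polygon above the edge we hit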

Related

How to determine if a 3D point lies in a certain volume or not?

I have a set of 3D points in a txt file in the form (x, y, z), as shown in Figure 1. I want to specify the boundaries of these points as in Figure 2, such that any new point added outside the boundaries is deleted (like the blue points), and points inside the boundaries (like the green ones) are kept. How can I achieve that in Python? I tried a convex hull, but it only gets the boundary points!
The real data can be found here, I used figures for simplification. https://drive.google.com/file/d/1ei9NaJHN922pYItK2CRIXyLfwqm_xgrt/view?usp=sharing
Figure 1
Figure 2
For 2D points, you can apply the test as described in Wikipedia:
One simple way of finding whether the point is inside or outside a simple polygon is to test how many times a ray, starting from the point and going in any fixed direction, intersects the edges of the polygon. If the point is on the outside of the polygon the ray will intersect its edge an even number of times. If the point is on the inside of the polygon then it will intersect the edge an odd number of times. The status of a point on the edge of the polygon depends on the details of the ray intersection algorithm.
The n-dimensional case involves a convex hull test and requires linear programming techniques as described here.
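As a concrete illustration, here is a minimal even-odd (crossing-number) implementation of that test; polygon is a list of (x, y) vertices, and points exactly on an edge may land on either side, as the quoted text warns:

def point_in_polygon(point, polygon):
    # Cast a horizontal ray to the right and count edge crossings:
    # an odd count means the point is inside.
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):               # edge straddles the ray's line
            # x-coordinate where the edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside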

Find triangle containing point in spherical triangular mesh (Python, spherical coordinates)

Main problem
In Python, I have triangulated the surface of a (unit) sphere using an icosahedral mesh. I have a list simplices of tuples containing the indices of the three vertices of each triangle, and I have two lists describing the coordinates (in radians) of each vertex: its latitude and longitude.
For about a million points, I want to determine which triangle each point is in. I am looking for an efficient algorithm that returns, for each point, the index of its containing triangle (an index into the list simplices).
I am willing to sacrifice memory over efficiency, so I am fine with constructing a tree or using some lookup method.
Caveats
The triangles are of roughly equal size, but not exactly, so I suspect that a simple nearest-neighbor KDTree implementation is not exact.
Extra information
The icosahedral mesh has been obtained using the stripy package. It projects the vertices of the icosahedron onto the unit sphere and subsequently bisects the triangles, so that each edge is split in half or, equivalently, each triangle is split into four. stripy has a built-in method for calculating the triangle a point is contained in, but for a mesh refinement of 6 (i.e. 6 bisections) and about a million points, this takes hours. I suspect that this method does not make use of a tree/lookup method, and I hope there is a method that significantly improves on this.
1. Compute a latitude/longitude bounding box for each triangle. Remember that the largest-magnitude latitudes may be on an edge (easily found by considering the normal to the great circle including each edge) or (if the pole is enclosed) in the interior.
2. Divide all triangles that cross the periodic longitude boundary in two - or, to be cheap, just their bounding boxes.
3. Build an extended-object k-d tree over the triangles (and triangle pieces from above). This still uses only the latitude/longitude values.
4. Run the obvious recursive, conservative containment search to find candidate triangles. (It doesn't matter which piece of the divided triangles you find.)
5. Test carefully for triangle inclusion: for each triangle side, judge which hemisphere (defined by the great circle containing the segment) contains the query point in a fashion (probably just a cross product on the three-dimensional vectors) that doesn't depend on the order in which the vertices are presented and never produces "on the dividing line". Then every point is guaranteed to be in exactly one triangle.
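A sketch of that final inclusion test, assuming numpy and triangle vertices given as 3D unit vectors (converted from latitude/longitude in radians); the symmetric sign test keeps the result independent of vertex order, though production code still needs a consistent tie-break for points exactly on a great circle:

import numpy as np

def to_xyz(lat, lon):
    # Unit vector for a (latitude, longitude) pair given in radians.
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def in_spherical_triangle(p, a, b, c):
    # For each side, det(u, v, p) tells which hemisphere of the great
    # circle through u and v contains p; p is inside the triangle iff
    # all three signs agree (either vertex orientation is accepted).
    s1 = np.dot(np.cross(a, b), p)
    s2 = np.dot(np.cross(b, c), p)
    s3 = np.dot(np.cross(c, a), p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)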

Split quadrilateral into sub-regions of a maximum area

It is pretty easy to split a rectangle/square into smaller regions while enforcing a maximum area for each sub-region: just divide it into regions with sides of length sqrt(max_area) and treat the leftovers with some care.
With a quadrilateral, however, I am stumped. Let's assume I don't know the angle of any of the corners. Let's also assume that all four points are on the same plane. Also, I don't need the small regions to all be the same size. The only requirement I have is that the area of each individual region is less than the max area.
Is there a particular data structure I could use to make this easier?
Is there an algorithm I'm just not finding?
Could I use quadtrees to do this? I'm not incredibly versed in trees but I do know how to implement the structure.
I have GIS work in mind when I'm doing this, but I am fairly confident that that will have no impact on the algorithm to split the quad.
You could recursively split the quad in half on the long sides until the resulting area is small enough.
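A rough sketch of that recursion, assuming the quad is given as four vertices in order and that cutting between the midpoints of the longer pair of opposite sides counts as "splitting in half" (the pieces are not exactly equal, but every piece shrinks):

def area(quad):
    # Shoelace area of a polygon given as a list of (x, y) vertices.
    s = 0.0
    for (x1, y1), (x2, y2) in zip(quad, quad[1:] + quad[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def split_quad(quad, max_area):
    # Recursively cut the quad across its longer pair of opposite sides
    # until every piece is at most max_area.
    if area(quad) <= max_area:
        return [quad]
    a, b, c, d = quad
    def length2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    if length2(a, b) + length2(c, d) >= length2(b, c) + length2(d, a):
        m1, m2 = midpoint(a, b), midpoint(c, d)
        halves = [[a, m1, m2, d], [m1, b, c, m2]]
    else:
        m1, m2 = midpoint(b, c), midpoint(d, a)
        halves = [[a, b, m1, m2], [m2, m1, c, d]]
    return [piece for half in halves for piece in split_quad(half, max_area)]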
If your quadrilateral is convex, then in fact you can split it into two equal-area pieces which at the same time have equal perimeters! This is called a fair partitioning, and is described at The Open Problems Project (it is open for larger number of pieces, but solved for two pieces).
For nonconvex quadrilaterals, it is not difficult to find a line to partition it into two equal pieces. I believe this will work: pass a line through the one reflex vertex, and spin it about that vertex until it partitions the area equally. The same method works for convex polygons, if your only goal is to partition the area into two equal halves.
The generic problem (for arbitrary polygons) goes under the name of "ham-sandwich sectioning of polygons." In fact, I wrote a paper with that exact title.

Point-in-polygon for a large number of points

I'm wondering what may be the most efficient way of determining whether a large number of points (O(1 million)) are inside or outside a collection (O(10)) of polygons? The latter are not necessarily convex, but do not have holes in them. At the moment I prune the number of points by comparing their positions to the bounding boxes, then use this crossing-number method on the remaining points. But is there perhaps a faster method?
There is an efficient matplotlib function for that: matplotlib.nxutils.points_inside_poly(). The algorithm is documented on this page. (Note: the nxutils module was removed in matplotlib 1.3; matplotlib.path.Path.contains_points() is its replacement.)
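For current matplotlib versions, the vectorized replacement looks like this (the polygon vertices and point count below are made up for illustration):

import numpy as np
from matplotlib.path import Path

# Path.contains_points tests many points against one polygon in a single call.
polygon = [(0, 0), (4, 0), (4, 3), (2, 5), (0, 3)]      # illustrative vertices
points = np.random.uniform(-1, 6, size=(1_000_000, 2))

inside = Path(polygon).contains_points(points)          # boolean array
print(inside.sum(), "of", len(points), "points are inside")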
Assuming you have axis-aligned bounding boxes, you could sort the list of points by their x coordinate, use binary search to find where the points enter and leave each bounding box, and potentially discard a large number of points at once. Repeat for the y coordinate. Then continue as before with the remaining points. You could perform polygon triangulation to speed up the test within the bounding box.
This would perform best when the area of the plane is much greater than the area of the polygons, and the polygons are reasonably compact (i.e. not long and thin, which may give you many false positives).
I'd probably use a Quadtree for fast rough test of "am I inside or outside of the polygon" to some level of precision that you determine when you generate the quadtree.
Each lookup is O(log n), which will be about as fast as you can get. For the points that lie within a cell of the quadtree that's marked as "contains an edge" then you'll have to do a traditional point-in-polygon test.

Test if point is in some rectangle

I have a large collection of rectangles, all of the same size. I am generating random points that should not fall in these rectangles, so what I wish to do is test if the generated point lies in one of the rectangles, and if it does, generate a new point.
Using R-trees seems to work, but they are really meant for rectangles and not points. I could use a modified version of an R-tree algorithm that works with points too, but I'd rather not reinvent the wheel if there is already some better solution. I'm not very familiar with data structures, so maybe there already exists some structure that works for my problem?
In summary, basically what I'm asking is if anyone knows of a good algorithm, that works in Python, that can be used to check if a point lies in any rectangle in a given set of rectangles.
edit: This is in 2D and the rectangles are not rotated.
This Reddit thread addresses your problem:
I have a set of rectangles, and need to determine whether a point is contained within any of them. What are some good data structures to do this, with fast lookup being important?
If your universe is integer, or if the level of precision is well known and is not too high, you can use abelsson's suggestion from the thread, using O(1) lookup using coloring:
As usual you can trade space for time... here is an O(1) lookup with a very low constant. Init: create a bitmap large enough to envelop all rectangles with sufficient precision, and initialize it to black. Color all pixels containing any rectangle white. O(1) lookup: is the point (x, y) white? If so, a rectangle was hit.
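A small numpy sketch of that bitmap idea, assuming an integer universe of known extent and axis-aligned rectangles given as (x0, y0, x1, y1) tuples with inclusive integer corners:

import numpy as np

WIDTH, HEIGHT = 1000, 1000              # assumed extent of the universe

def build_bitmap(rects):
    # Mark every cell covered by any rectangle; setup cost is the total area.
    bitmap = np.zeros((WIDTH, HEIGHT), dtype=bool)
    for x0, y0, x1, y1 in rects:
        bitmap[x0:x1 + 1, y0:y1 + 1] = True
    return bitmap

def hits_rectangle(bitmap, x, y):
    # O(1) lookup: is the integer point (x, y) inside some rectangle?
    return bool(bitmap[x, y])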
I recommend you go to that post and fully read ModernRonin's answer which is the most accepted one. I pasted it here:
First, the micro problem. You have an arbitrarily rotated rectangle, and a point. Is the point inside the rectangle?
There are many ways to do this. But the best, I think, is using the 2d vector cross product. First, make sure the points of the rectangle are stored in clockwise order. Then do the vector cross product with 1) the vector formed by the two points of the side and 2) a vector from the first point of the side to the test point. Check the sign of the result - positive is inside (to the right of) the side, negative is outside. If it's inside all four sides, it's inside the rectangle. Or equivalently, if it's outside any of the sides, it's outside the rectangle. More explanation here.
This method will take 3 subtracts per vector, times 2 vectors per side, plus one cross product per side, which is three multiplies and two adds. 11 flops per side, 44 flops per rectangle.
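In Python, that cross-product test comes out to a few lines; this sketch accepts either consistent winding order (so it does not depend on clockwise storage) and works unchanged for any convex polygon:

def cross(o, a, b):
    # 2D cross product of vectors OA and OB.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_rect(p, rect):
    # rect: four corners in consistent (clockwise or counter-clockwise) order.
    # The point is inside when it lies on the same side of every edge,
    # i.e. all four cross products share one sign.
    signs = [cross(rect[i], rect[(i + 1) % 4], p) for i in range(4)]
    return all(s <= 0 for s in signs) or all(s >= 0 for s in signs)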
If you don't like the cross product, then you could do something like: figure out the inscribed and circumscribed circles for each rectangle, and check if the point is inside the inscribed one. If so, it's in the rectangle as well. If not, check if it's outside the circumscribed circle. If so, it's outside the rectangle as well. If it falls between the two circles, you're f****d and you have to check it the hard way.
Finding if a point is inside a circle in 2d takes two subtractions and two squarings (= multiplies), and then you compare distance squared to avoid having to do a square root. That's 4 flops, times two circles is 8 flops - but sometimes you still won't know. Also, this assumes that you don't pay any CPU time to compute the circumscribed or inscribed circles, which may or may not be true depending on how much pre-computation you're willing to do on your rectangle set.
In any event, it's probably not a great idea to test the point against every rectangle, especially if you have a hundred million of them.
Which brings us to the macro problem. How to avoid testing the point against every single rectangle in the set? In 2D, this is probably a quad-tree problem. In 3d, what generic_handle said - an octree. Off the top of my head, I would probably implement it as a B+ tree. It's tempting to use d = 5, so that each node can have up to 4 children, since that maps so nicely onto the quad-tree abstraction. But if the set of rectangles is too big to fit into main memory (not very likely these days), then having nodes the same size as disk blocks is probably the way to go.
Watch out for annoying degenerate cases, like some data set that has ten thousand nearly identical rectangles with centers at the same exact point. :P
Why is this problem important? It's useful in computer graphics, to check if a ray intersects a polygon. I.e., did that sniper rifle shot you just made hit the person you were shooting at? It's also used in real-time map software, like say GPS units. GPS tells you the coordinates you're at, but the map software has to find where that point is in a huge amount of map data, and do it several times per second.
Again, credit to ModernRonin...
For rectangles that are aligned with the axes, you only need two points (four numbers) to identify the rectangle - conventionally, the bottom-left and top-right corners. To establish whether a given point (Xtest, Ytest) lies within a rectangle (XBL, YBL, XTR, YTR), test both:
Xtest >= XBL && Xtest <= XTR
Ytest >= YBL && Ytest <= YTR
Clearly, for a large enough set of points to test, this could be fairly time consuming. The question, then, is how to optimize the testing.
Clearly, one optimization is to establish the minimum and maximum X and Y values for the box surrounding all the rectangles (the bounding box): a swift test on this shows whether there is any need to look further.
Xtest >= Xmin && Xtest <= Xmax
Ytest >= Ymin && Ytest <= Ymax
Depending on how much of the total surface area is covered with rectangles, you might be able to find non-overlapping sub-areas that contain rectangles, and you could then avoid searching those sub-areas that cannot contain a rectangle overlapping the point, again saving comparisons during the search at the cost of pre-computing suitable data structures. If the set of rectangles is sparse enough, there may be no overlap, in which case this degenerates into the brute-force search. Equally, the set of rectangles may be so dense that there are no sub-ranges in the bounding box that can be split up without breaking rectangles, and then the pre-computation buys you nothing.
However, you could also arbitrarily break the bounding area up into, say, quarters (half in each direction). You would then use a list of boxes which would include more boxes than in the original set (two or four boxes for each box that overlapped one of the arbitrary boundaries). The advantage of this is that you could then eliminate three of the four quarters from the search, reducing the total amount of searching to be done - at the expense of auxiliary storage.
So, there are space-time trade-offs, as ever. And pre-computation versus search trade-offs. If you are unlucky, the pre-computation achieves nothing (for example, there are two boxes only, and they don't overlap on either axis). On the other hand, it could achieve considerable search-time benefit.
I suggest you take a look at BSP trees (and possibly quadtrees or octrees; links are available on that page as well). They are used to partition the whole space recursively and allow you to quickly check, for a point, which rectangles you need to check at all.
At minimum, you just have one huge partition and need to check all rectangles; at maximum, your partitions get so small that they shrink to the size of single rectangles. Of course, the more fine-grained the partition, the longer you need to walk down the tree in order to find the rectangles you want to check.
However, you can freely decide how many rectangles are suitable to be checked for a point and then create the corresponding structure.
Pay attention to overlapping rectangles though. As the BSP tree needs to be precomputed anyways, you may as well remove overlaps during that time, so you can get clear partitions.
Your R-tree approach is the best approach I know of (it's the approach I would choose over quadtrees, B+ trees, or BSP trees, as R-trees seem convenient to build in your case). Caveat: I'm no expert, even though I remember a few things from my senior-year university class on algorithmics!
Why not try this. It seems rather light on both computation and memory.
Consider the projections of all the rectangles onto the base line of your space. Denote that set of line intervals as
{[Rl1, Rr1], [Rl2, Rr2],..., [Rln, Rrn]}, ordered by increasing left coordinates.
Now suppose your point is (x, y). Search from the left of this set until you reach a line interval that contains x.
If none does, your point (x, y) is outside all rectangles.
If some do, say [Rlk, Rrk], ..., [Rlh, Rrh] (k <= h), then just check whether y is within the vertical extent of any of these rectangles.
Done.
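A direct transcription of this scheme, using bisect so the scan can skip every rectangle whose interval starts to the right of x; rectangles are assumed to be axis-aligned (x0, y0, x1, y1) tuples:

import bisect

def build_index(rects):
    # Sort rectangles by left edge; tuples sort by x0 first.
    rects = sorted(rects)
    lefts = [r[0] for r in rects]
    return rects, lefts

def point_hits(rects, lefts, x, y):
    # Only rectangles with x0 <= x can contain x; bisect finds that prefix.
    hi = bisect.bisect_right(lefts, x)
    for x0, y0, x1, y1 in rects[:hi]:
        if x <= x1 and y0 <= y <= y1:   # interval contains x; check y extent
            return True
    return False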
Good luck.
John Doner
