Find out if a GPS waypoint has been passed - python

Think of several runners in a marathon. The athletes are all wearing GPS devices. The track itself has no sensors, and I need to know when each athlete crosses a predetermined set of GPS coordinates. However, each athlete may cross the waypoint at a slightly different lat/long, since the track/road might be wide enough that different parts of it are used.
What is the best way to determine whether an athlete has passed a waypoint?
I'm using Python, and am open to using an external library. I'm working with pre-processed GPS data, so I have only the latitude and longitude at each time point (and a few other bits and pieces like speed and distance travelled).

IMHO there are a few ways of solving your problem.
The first one that came to my mind is this:
from shapely.geometry import LineString
line1 = LineString([(i, i) for i in range(5)])
line2 = LineString(list(zip(range(5)[::-1], range(5))))
if line1.crosses(line2):
    print('yeah!')
Add a loop and iterate over every waypoint line.
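For example, with toy coordinates and one short LineString drawn across the road per waypoint (both are placeholders for your real data):
from shapely.geometry import LineString
# Toy data: a runner's recorded track and two waypoint lines drawn across the road.
runner_track = LineString([(0.0, 0.0), (1.0, 0.1), (2.0, 0.0), (3.0, 0.1)])
waypoint_lines = [
    LineString([(1.5, -0.5), (1.5, 0.5)]),  # crossed by the track
    LineString([(5.0, -0.5), (5.0, 0.5)]),  # never reached
]
for i, wp_line in enumerate(waypoint_lines):
    if runner_track.intersects(wp_line):
        print('waypoint', i, 'passed')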
Other possible options:
a simple math calculation using the intersection of two straight lines - high school stuff
import your data into Postgres with PostGIS and use a PostGIS function, e.g. ST_Crosses (if Postgres is too heavy for you, I would give SpatiaLite/SQLite a try)
pyshp, shapely, gdal/geos, geodjango, geoalchemy
combine some of the above and write a slightly fancier algorithm, like creating a buffer around one line/point, checking whether it "ST_Contains" the GPS position, and also checking whether any later positions are outside the buffer zone
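For example, a rough sketch of that last buffer idea with Shapely (the buffer radius and coordinates are toy values, not real lat/long data):
from shapely.geometry import Point
# Toy waypoint and runner positions (in practice these would be projected GPS coordinates).
waypoint = Point(2.0, 0.0)
zone = waypoint.buffer(0.2)  # tolerance zone around the waypoint; the radius is a guess
positions = [Point(0.0, 0.0), Point(1.0, 0.05), Point(2.05, 0.1), Point(3.0, 0.0)]
entered = passed = False
for p in positions:
    if zone.contains(p):
        entered = True   # runner is currently inside the buffer zone
    elif entered:
        passed = True    # runner was in the zone and has now left it
        break
print('passed waypoint:', passed)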

You may try this:
The waypoint is the point at which two track (line) segments meet (black lines in the picture). For each of the two line segments meeting at the waypoint, draw a line through the waypoint orthogonal to that segment (the red and blue lines through the waypoint in the picture). The runner is considered to be near the waypoint when they enter the area marked red in the picture (assuming the runner comes from the right). After some time the runner may enter the area marked blue in the picture - when this happens, the runner has passed the waypoint.
If the runner never shows up in the area marked blue, the runner has deviated from the track and the waypoint has not been passed.
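A loose sketch of that idea with plain vectors (toy coordinates; the travel directions and thresholds are assumptions, not your real track data):
import numpy as np
# Toy geometry: the runner approaches the waypoint travelling in +x,
# and the track then turns to head in +y.
waypoint = np.array([2.0, 0.0])
d_in = np.array([1.0, 0.0])   # travel direction on the segment entering the waypoint
d_out = np.array([0.0, 1.0])  # travel direction on the segment leaving the waypoint
positions = [(0.5, 0.0), (1.8, 0.05), (2.1, 0.2), (2.05, 0.9)]
near = passed = False
for p in positions:
    v = np.asarray(p) - waypoint
    if np.dot(v, d_in) >= 0:             # crossed the orthogonal to d_in: the "red" area
        near = True
    if near and np.dot(v, d_out) >= 0:   # crossed the orthogonal to d_out: the "blue" area
        passed = True
        break
print('near:', near, 'passed:', passed)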

How to check if there is a line segment between two given points?

I made a model that predicts electrical symbols and junctions:
image of model inference.
Given the xywh coordinates of each junction's bounding box in the form of a dataframe (image of the dataframe), how would I produce an output that stores the location of all the wires in a .txt file in the form (xstart, ystart), (xend, yend)?
I'm stuck at writing a way to check if there is a valid line (wire) between any two given junctions.
data = df.loc[df['name'] == 'junction']
# iterate through every pair of junctions
for index, row in data.iterrows():
    for index2, row2 in data.iterrows():
        check_if_wire_is_valid()
My attempt was to erase all electrical symbols (make everything in bounding boxes white except for junctions) from the inference image and run cv.HoughLinesP to find wires. How can I write a function that checks if the cv.HoughLinesP output lies between two junctions?
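For reference, the kind of check I am aiming for would look roughly like this (xywh junction boxes; the pixel tolerance is a placeholder):
def segment_connects(p1, p2, box_a, box_b, tol=10):
    # True if a HoughLinesP segment with endpoints p1, p2 has one endpoint
    # near junction box_a and the other near box_b (boxes are xywh tuples).
    def near(p, box):
        x, y, w, h = box
        return (x - tol <= p[0] <= x + w + tol) and (y - tol <= p[1] <= y + h + tol)
    return (near(p1, box_a) and near(p2, box_b)) or (near(p1, box_b) and near(p2, box_a))
# Hypothetical example: a horizontal segment joining two junction boxes
print(segment_connects((105, 52), (300, 52), (95, 45, 15, 15), (295, 45, 15, 15)))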
Note that checking whether the amount of line between two junctions is greater than 1 px is not sufficient, because if I have a parallel circuit like this one, the top-left and bottom-right junctions would "detect" more than 1 px of line between them and misinterpret that as a valid line.
EDIT: minAreaRect on contours. I've drawn this circuit with no elements, for simplification and testing. This is the resulting minAreaRect found for the given contours. I can't seem to find a way to properly validate lines from this.
My initial solution was to compare any two junctions: if they are relatively close on the x-axis, I would say that those two junctions form a vertical wire, and if two other junctions were close on the y-axis I would conclude they form a horizontal wire. junction distance to axis.
Now, this would create a problem if I had a diagonal line. I'm trying to find a solution that is consistent and applicable to every circuit. I believe I'm onto something with the HoughLinesP method or contours, but that's as far as my knowledge can assist me.
The main goal is to create an LTSpice readable circuit for simulating purposes. Should I change my method of finding valid lines? If so, what is your take on the problem?
This should be doable using findContours(). A wire is always a (roughly) straight line, right?
Paint the classified boxes white, as you said
threshold() to get a binary image with the wires (and other symbols and letters) in white, everything else black.
run findContours() on that to extract objects.
Get the bounding boxes (minAreaRect) for all contours
discard all contours with a too-wide side ratio; those are letters or symbols. Keep only the ones slim enough to be a wire.
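A rough sketch of steps 2-5 with OpenCV (this assumes OpenCV 4's findContours return signature; the input file name, threshold mode, and side-ratio cutoff are guesses you will need to tune):
import cv2
# Hypothetical input: the inference image with the symbol boxes already painted white.
img = cv2.imread('circuit_no_symbols.png', cv2.IMREAD_GRAYSCALE)
# Dark wires on a light background become white in the binary image.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
wire_boxes = []
for cnt in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    if min(w, h) == 0:
        continue
    if max(w, h) / min(w, h) > 5:  # long and thin: probably a wire, not a letter or symbol
        wire_boxes.append(((cx, cy), (w, h), angle))
print(len(wire_boxes), 'candidate wires')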
Now you have all wires as objects, similar to the junction list. As for how to merge those two, some options come to mind:
Grow the boxes by a certain amount, and check if they overlap.
Interpolate a line from the wire boxes and check if they cross any intersection box close by.
Or the other way around: draw a line between intersections and check how much of it goes through a certain wire box.
This is a pure math problem, and I don't know what your performance requirements are, so I'll leave it at that.
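A minimal sketch of the first merge option above (grow the boxes and test for overlap), using plain axis-aligned xywh boxes; the margin is a guess:
def grow(box, margin):
    x, y, w, h = box
    return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)
def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
# Hypothetical xywh boxes for one wire and one junction
wire_box = (100, 50, 200, 4)
junction_box = (298, 48, 8, 8)
if overlaps(grow(wire_box, 5), junction_box):
    print('this wire touches this junction')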

Python 2.7.1 - Pynomo - alignment of scales in circular type 8 nomograms

First question from my side. Over the last couple of months of self-teaching I have figured out everything using Stack Overflow and my own clumsy designs, but now I have been stuck for days:
I use Pynomo to make nomograms with some very fine results. My newest project is to design a circular nomogram using multiple "type 8" conversion charts like in
http://www.myreckonings.com/pynomo/CreatingNomogramsWithPynomo.pdf
starting from page 30.
However I cannot properly line up, or influence how to line up, the circular charts. In the example given the charts mostly shared a common minimum (zero). My functions however (all like AxB+C) do not share a common minimum. I easily manage to distribute the values for each scale in a circle but the circles do not line up at their minima.
Does anyone have an idea or a workaround how I can line up the minimum of scale "32 to 340" with "66 to 285" (for example) so 32 and 66 correspond?
I could provide some example code but I guess the problem is very specific to Pynomo and regular users will know what I am talking about.
To whom it may concern: I figured out a solution. It is clumsy all right, but I am neither a programmer nor a mathematician.
I was trying to show 3 different items in one graph as "pie pieces", 110 degrees each, with some space in between (think Mercedes-Benz sign). For each piece, a number of circle fragments with scales was to be displayed, with a result scale on the outside.
So for item 1, you find the group you are interested in (circle fragment), draw from the bullseye through it to the outer ring and read a result. If you want, repeat for items 2 and 3.
Since all fragments and pieces followed the general formula:
Result = (Measurement x A) + B
I was running into the problem that the circle fragments appeared rotated against each other with one starting at 10 degrees, the other at 113 and so on.
To set each fragment to the correct beginning, therefore forming a nice cake piece, I had to do the following:
Determine the dimensions of your cake piece (I chose 110 degrees).
Determine the result range you are interested in (I was only interested in results from 1.5 to 2.0).
For each group in a cake piece, calculate the minimum and maximum measurement for the minimum and maximum results.
Subtract the minimum from the maximum and divide by the cake-piece dimensions (/110 in my case) to give you "units per degree".
Now decide on the starting position of the cake piece (zero is usually the "east" position) and calculate the relative degrees to your minimum measurement value as the "starting point", for example 45 degrees.
Multiply that number of degrees by the "units per degree". Now you have everything to construct the nomograph pieces. You do not have to fill it in completely; checking your assumptions by applying the numbers to the outermost or innermost part should give you a good estimate of where everything ends up in the full nomograph.
In my case (one block only):
Center2_params={
    'u_min':25.2413,
    'u_max':42.4827,
    'function_x':lambda u:9*math.cos((math.radians(110/(42.4827-25.2413))*(u+10.025078))),#*2*pi/360*56.4),
    'function_y':lambda u:9*math.sin((math.radians(110/(42.4827-25.2413))*(u+10.025078))),#*2*pi/360*56.4),
    'title':r'your title here',
    'tick_levels':4,
    'tick_text_levels':2,
    'tick_side':'right',
    'axis_color':color.gray(0.10),
    'text_color':color.gray(0.10),
    'title_color':color.gray(0.10),
    }
Center2_block_params={
    'block_type':'type_8',
    'f_params':Center2_params,
    'width':15.0,
    'height':15.0,
    }
Read up on circle coordinates if needed.
As in the pdf example, I included a bullseye center. I could have displayed all pieces in one nomogram, but instead I created 3 single nomograms and imported them from pdf into GIMP, where I combined them (select by color, click white, pick "Selection - Invert", copy and paste as a new layer).
If you go this route, make sure to comment out the "scale paper" option:
'transformations':[('rotate',0.01)],#('scale paper',)],
This option tries to use all the paper optimally, producing curvy nomograms from the pie piece. That looks cool and more sophisticated, but it makes combining 3 nomograms a pain. With it commented out you get nice pieces. Overlay them on the bullseye in GIMP and, if necessary, rotate the pieces with the rotate tool. Make sure to move the pivot point onto the bullseye.
Hope this helps anybody.
PS: note that in this nomogram style, all scales are independent (basically overlaid with ticks pointing to left and right), so nothing "automatically follows".

Computing the similarity between two line drawings

I have a Python program where people can draw simple line drawings using a touch screen. The images are documented in two ways. First, they are saved as actual image files. Second, I record 4 pieces of information at every refresh: the time point, whether contact was being made with the screen at the time (1 or 0), the x coordinate, and the y coordinate.
What I'd like to do is gain some measure of how similar a given drawing is to any other drawing. I've tried a few things, including simple Euclidean distance and pixel-by-pixel similarity, and I've looked at Fréchet distance. None of these gives what I'm looking for.
The issues are that each drawing might have a different number of points, one segment does not always immediately connect to the next, and the order of the points is irrelevant. For instance, if you and I both draw something as simple as an ice cream cone, I might draw ice cream first, and you might draw the cone first. We may get an identical end result, but many of the most intuitive metrics would be totally thrown off.
Any ideas anyone has would be greatly appreciated.
If you care about how similar a drawing is to another, then there's no need to collect data at every refresh; just collect it once the drawer is done drawing.
Then you can use Fourier analysis to break the images down into the frequency domain and run cross-correlations on that,
or some kind of 2D cross-correlation on the images, I guess.
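A rough sketch of the cross-correlation idea with NumPy/SciPy (the normalization and the toy images are my own choices, not a standard recipe):
import numpy as np
from scipy.signal import fftconvolve
def peak_correlation(img_a, img_b):
    # Peak of the 2D cross-correlation of two zero-mean images,
    # normalized so that identical images score 1.0.
    a = img_a.astype(float) - img_a.mean()
    b = img_b.astype(float) - img_b.mean()
    corr = fftconvolve(a, b[::-1, ::-1], mode='full')  # cross-correlation via FFT
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return corr.max() / denom if denom > 0 else 0.0
# Toy example: two 64x64 binary "drawings", the second a shifted copy of the first
img1 = np.zeros((64, 64))
img1[20:40, 30:32] = 1
img2 = np.roll(img1, (3, 5), axis=(0, 1))
print(peak_correlation(img1, img2))  # close to 1.0 despite the shift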

Strategy for isolating 3d data points

I have two sets of points, one from an analysis and another that I will use for the results of post-processing on the analysis data.
The analysis data, in black, is scattered.
The points used for results are red.
Here are the two sets on the same plot:
The problem I have is this: I will be interpolating onto the red points, but as you can see there are red points which fall inside areas of the black data set that are in voids. Interpolation causes there to be non-zero values at those points but it is essential that these values be zero in the final data set.
I have been thinking of several strategies for getting those values to zero. Here are several in no particular order:
Find a convex hull whose vertices only contain black data points and which contains only red data points inside the convex set. Also, the area of this hull should be maximized while still meeting the two criteria.
This has proven to be fairly difficult to implement, mostly due to having to select which black data points should be excluded from the iterative search for a convex hull.
Add an extra dimension to the data sets with a single value, like 1 or 0, so both can be part of the same data set yet still be distinguishable. Use a kNN (nearest neighbor) algorithm to pick out only the red points in the voids. The basic idea is that red points in voids will have n (6?) nearest neighbors that are all in their own set. Red data points separated only by a void boundary will have a different proportion, and lastly, the red points at least one step removed from a boundary will have almost all black-data-set neighbors. The existing algorithms I have seen for this approach return indices or array masks, either of which would be a good solution. I have not tried implementing this yet (a rough sketch is given below).
Manually extract boundary points from the SolidWorks model that was used to create the black data set. No, on so many levels. This would have to be done manually, z-level by z-level, and the pictures I have shown represent only a small portion of the actual, full set.
Manually create masks by making several refinements to a subset of red data points that I visually confirm to be of interest. Also, no. Not unless I have run out of options.
If this is a problem with a clear solution, then I am not seeing it. I'm hoping that proposed solution 2 will be the one, because that actually looks like it would be the most fun to implement and see in action. Either way, like the title says, I'm still looking for direction on strategies to solve this problem. About the only thing I'm sure of is that Python is the right tool.
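A rough sketch of what I have in mind for option 2, with random toy points standing in for the real sets and a neighbour count and threshold that would need tuning:
import numpy as np
from scipy.spatial import cKDTree
# Toy stand-ins for the real (x, y, z) point sets.
black_pts = np.random.rand(1000, 3)
red_pts = np.random.rand(200, 3)
all_pts = np.vstack([black_pts, red_pts])
labels = np.r_[np.zeros(len(black_pts)), np.ones(len(red_pts))]  # 0 = black, 1 = red
tree = cKDTree(all_pts)
_, idx = tree.query(red_pts, k=7)                # each red point plus its 6 nearest neighbours
red_fraction = labels[idx[:, 1:]].mean(axis=1)   # drop the point itself
in_void = red_fraction > 0.8                     # mostly-red neighbourhood: likely inside a void
print(in_void.sum(), 'red points flagged as inside voids')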
EDIT:
The analysis data contains x, y, z, and 3 electric field component values, Ex, Ey, and Ez. The voids in the black data set are inside of metal and hence have no change in electric potential, or put another way, the electric field values are all exactly zero.
This image shows a single z-layer using linear interpolation of the Ex component with scipy's griddata. The black oval is a rough indicator of the void boundary for that central racetrack-shaped void. You can see that there is red and blue (for + and - E field in the x direction) inside the oval. It should be zero (light green in this plot). The finished data is going to be used to track a beam of charged particles, and so if the path of one of the particles actually crosses into the void, the software that does the tracking can only tell if the electric potential remains constant, i.e. it knows that the path goes through solid metal and it discards that path.
If electric field exists in the void the particle tracking software doesn't know that some structure is there and bad things happen.
You might be able to solve this with the machine-learning technique called a "Support Vector Machine" (SVM). Assign the 0 and 1 classifications as you mentioned, and then run this through the libsvm algorithm. You should be able to use the resulting model to classify and identify the points you need to zero out, and do so programmatically.
I realize that there is a learning curve for SVM and the libsvm implementation. If this is outside your effort budget, my apologies.
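One hedged way to set that up is a one-class SVM trained on the black points (scikit-learn's SVM classes wrap libsvm), which flags red points lying outside the region the black data occupies; the kernel parameters and toy data below are placeholders:
import numpy as np
from sklearn.svm import OneClassSVM
# Toy stand-ins for the real (x, y, z) point clouds.
black_pts = np.random.rand(2000, 3)   # analysis points, none of which fall inside voids
red_pts = np.random.rand(300, 3)      # post-processing grid points
model = OneClassSVM(kernel='rbf', gamma='scale', nu=0.05).fit(black_pts)
pred = model.predict(red_pts)         # +1 = like the training data, -1 = outlier
void_points = red_pts[pred == -1]     # candidates whose field values should be forced to zero
print(len(void_points), 'red points flagged for zeroing')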

Assigning surfaces to zones based on the 3D regions they enclose

Given a set of surfaces in three-dimensional space, I am attempting to assign each surface to a zone referring to the smallest 3D region the set encloses, or no zone if this is not applicable. I also want to determine if a surface is an interface between two zones. So, for example, if we had 11 surfaces representing two cubes stacked on top of each other, the surfaces in the top cube would be in the same zone and the surfaces in the bottom would be in a different zone (with the interface surface being in both zones).
As an example, I want to take in a set of surfaces such as this and turn it into this. Each color here represents a zone, with gray meaning no zone is associated (as in the flap at the bottom).
I have done some searching around trying to find if someone has already come up with an algorithm to do this, but I have not found anything (most seem to identify regions rather than link surfaces to the region they enclose). As such I am trying to come up with my own algorithm and am wondering if there are any other alternatives or if my method would work.
I am assuming all surfaces are connected.
My idea is the following:
Select a random surface whose sides each touch exactly one other surface, and add this to zone 1.
Add each connected surface to zone 1 provided each of its sides touches exactly one other surface.
For those connected surfaces that touch more than one surface on at least one of their sides, add them to the "maybe" list.
For each new surface in zone 1, repeat steps 2-3.
Once a surface has been added to the "maybe" list twice, add it to zone 1 and remove it from the "maybe" list. Mark this surface as a zone interface.
Add the zone interface to zone 2.
Select one random surface from the "maybe" list and assign it to zone 2 and clear the "maybe" list.
Repeat steps 2-7 (updating the zone number of course) until there are no surfaces that are unassigned.
This seems to work for simple scenarios (e.g., two cubes stacked on top of one another), but I am not sure if there are any tricky conditions I need to watch out for, or if it falls apart once there are more than two zones that share a side.
Any improvement on my rough algorithm/alternate ideas for implementation would be appreciated. Thanks!
EDIT: Here are some more details in response to some comments.
A zone by my definition is simply a group of surfaces that completely bound a 3D region with no gaps. So if I had two cubes, A and B, that do not touch, I would have two zones: one consisting of all the surfaces of cube A and the other of all the surfaces for cube B. If I had a cube that was missing one side, there would be no zone associated with those surfaces.
My end goal is to make an automated process for grouping surfaces in a modeling tool I am creating. The specifics are classified, but essentially I am dealing with models where certain properties are common only between surfaces in the same "zone" as described above. I want to make an automated process that creates these zones so that the user can apply these properties to all surfaces in the zone at once instead of doing it manually.
Essentially the problem boils down to finding the smallest 3D regions that are completely enclosed by an arbitrary set of surfaces, and keeping track of which surfaces belong to which regions. I hope this makes my question more clear.
What you are interested in, then, is discovering closed surface (volume) mesh topology from a set of input polygons; in other words, polytopes. This is common to pretty much every 3D modeling package. I would guess that Blender has code that does this. There are different ways of doing it; commonly, however, some version of a half-edge graph is used. See the wiki link here: doubly linked half-edge graph. The idea is to walk your input polygons and build these graphs. Once done, you can easily query each graph to see if there are holes (missing edges, etc.).
I attached a picture explaining how to use a half-edge structure to get what you want. Say you are given a soup of five rectangles (they make up a cube without a top). You process your first rectangle, say ABCD; this creates your first graph, say G1. Now you process the second polygon, say FEHG; none of these vertices has been seen yet, so you create a second graph, G2. Now say you process polygon CDGH. You have seen these vertices before, so instead of creating a new graph, you merge (connect) the existing graphs that share these nodes. Proceed until you have processed all polygons. You get the graph in the picture.
Now, to query the graph for your information: once you walk the graph, you will see that there are exactly four vertices (nodes) that are missing edges. Those verts correspond to the missing top of the box (the edges are red in the illustration). Hence you know that this graph is not a closed manifold. If you had another box that did not share nodes with this one, you would have another graph. So each graph, once you are done processing your polygons, is a "zone" for you.
Note that if you have, say, two intersecting shapes, you can track those too using these graphs, but it is much more complicated. Basically, when processing a new polygon, you would not only have to check whether any of its vertices belong to already-processed graphs, but also whether the polygon intersects any of the previously processed polygons; if so, split the polygon and add all of this to the intersected graph.
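A simplified sketch of the bookkeeping (a plain edge-to-faces map standing in for a full half-edge structure): group faces into connected components via shared edges, and call a component closed, i.e. a "zone", when every one of its edges is used by exactly two faces:
from collections import defaultdict
def face_edges(face):
    # Undirected edges of a face given as an ordered tuple of vertex indices.
    return [frozenset((a, b)) for a, b in zip(face, face[1:] + face[:1])]
def zones_from_faces(faces):
    edge_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for e in face_edges(face):
            edge_faces[e].append(fi)
    # Union-find: merge faces that share an edge into components.
    parent = list(range(len(faces)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for shared in edge_faces.values():
        for fi in shared[1:]:
            parent[find(fi)] = find(shared[0])
    zones = defaultdict(list)
    for fi in range(len(faces)):
        zones[find(fi)].append(fi)
    # A component is closed when every edge of its faces is shared by exactly two faces.
    closed = {root: all(len(edge_faces[e]) == 2 for fi in members for e in face_edges(faces[fi]))
              for root, members in zones.items()}
    return dict(zones), closed
# Open box (a cube missing its top): five quad faces over vertices 0-7.
faces = [(0, 1, 2, 3), (0, 1, 5, 4), (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
print(zones_from_faces(faces))  # one component, reported as not closed (the rim edges have only one face)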
