Given two NumPy arrays of (x,y) points representing the inner and outer borders of a racetrack, I want to plot these points onto a grid system. There is a problem, however: with these points you would normally draw a "line" between them, but that cannot exactly happen with a grid system.
The inner track looks like:
[[0.83937341 2.87564301]
[0.85329181 2.74359298]
[0.8711707 2.61805296]
[0.89493519 2.49186611]
[0.92430413 2.36440611]
[0.95832813 2.2375989 ]
[0.99367839 2.12898302]
[1.03462696 2.02958798]
[1.08152199 1.93906105]
[1.13470805 1.85674906]
[1.17767704 1.80507398]
[1.21820199 1.77083302]
...
As you can see, the points are very fine; 0.02 meters makes all the difference. To scale this to a grid, I figured I would need to multiply each coordinate by something like 1000, plot that on the grid, then figure out which squares of the grid to fill in to connect the points (maybe using A*?).
I tried using pygame, and even visualizing the grid, but when I tried to use more than 500 rows, the program crashed. I don't necessarily need to visualize the program; I just want it to meet the specifications.
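Since consecutive border points are already in order, a search algorithm like A* isn't needed to connect them; Bresenham's line algorithm directly yields the grid squares between two cells. A minimal sketch of the scale-then-connect idea (the scale factor and function names here are illustrative):

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Return the integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = [(x0, y0)]
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while (x0, y0) != (x1, y1):
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
        cells.append((x0, y0))
    return cells

def rasterize_track(points, scale=100):
    """Scale float (x, y) points to integer grid indices, then fill in
    the squares connecting each consecutive pair with Bresenham lines."""
    grid_pts = np.round(points * scale).astype(int)
    filled = set()
    for (x0, y0), (x1, y1) in zip(grid_pts[:-1], grid_pts[1:]):
        filled.update(bresenham(x0, y0, x1, y1))
    return filled
```

The set of filled cells can then be written into a 2D occupancy array of whatever resolution the grid needs, without any pygame visualization.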
I have two coordinate systems for each record in my dataset. Lat-lon coordinates and what I suppose is utm x-y coordinates.
50% of my dataset has only x-y data without lat-lon; vice versa (lat-lon only) is 6%.
There is a good portion of the dataset that has both (33%) for each single record.
I wanted to know if there is a way to take advantage of the intersection (and maybe the x-y-only part, since it's the biggest) to obtain a full dataset in a single coordinate system that makes sense. The problem is that after a little preprocessing, the two systems look "relaxed" in different ways and the intersection doesn't really match. The scatter plot shows what I believe to be a non-linear, warped relationship between the two systems: normalizing both to [0, 1] and centering them at (0, 0) (by subtracting the mean) gives two slightly different point distributions, and multiplying by a coefficient to scale one down to look like the other is not enough to make them match completely. It looks like some more complicated relationship between the two.
I also tried using an external library called utm to convert the lat-lon coordinates to x-y, to have a third pair of attributes (let's call it my_xy), only to find that it does not match either of the first two systems; instead it shows yet another slight warp.
Notes: When I say I do not have data from one coordinate system, assume NaN.
Furthermore, I know the warping could be a result of the fundamental geometrical differences between lat-lon and x-y systems, but I still do not know what else I could try, given that the UTM conversion and the scaling did not work.
Blue: latlon, Red: original xy, Green: my_xy calculated from latlon
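One thing worth trying on the 33% of records that carry both systems: fit a best least-squares affine transform from lat-lon to x-y and inspect the residuals. A single scaling coefficient is only one degree of freedom; an affine fit also captures rotation, shear, and offset, and if the relationship is genuinely warped beyond that, the residuals will show it. A sketch (function names are illustrative; paired arrays are assumed to be shape (n, 2) with NaN rows dropped):

```python
import numpy as np

def fit_affine(latlon, xy):
    """Least-squares affine map: xy ~ latlon @ A + b.
    latlon, xy: (n, 2) arrays of records that have both systems."""
    n = len(latlon)
    # Augment with a ones column so the intercept b is fitted too.
    X = np.hstack([latlon, np.ones((n, 1))])
    coef, *_ = np.linalg.lstsq(X, xy, rcond=None)
    A, b = coef[:2], coef[2]
    return A, b

def apply_affine(latlon, A, b):
    """Convert lat-lon-only records into the x-y system."""
    return latlon @ A + b
```

Once fitted on the intersection, `apply_affine` can fill in x-y for the 6% that only have lat-lon; large residuals on the intersection would confirm a non-affine warp and point toward a projection-aware conversion instead.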
In short, I'm trying to find a faster way to plot real-time data coming through a serial input. Each datum is a coordinate (x,y), and about 40 arrive each second. The stream stores the data in an array, using x as the index and y as the value. This portion is threaded. While the stream can read in data immediately, the pyqtgraph library isn't able to keep up with this speed.
Here's the portion of the code where I am plotting the data. The distances and theta variables are arrays with 6400 entries. They have been transformed into polar values and are plotted on each iteration. I added a delay there just to help keep it real-time, though it's only a temporary solution.
while True:
    x = distances * np.cos(theta)
    y = distances * np.sin(theta)
    plot.plot(x, y, pen=None, symbol='o', clear=True)
    pg.QtGui.QApplication.processEvents()
    # sleep(0.025)
While it's behaving the way I expect, it's not able to plot the most recent data from the serial input. It's easily several seconds behind the most recent reads, probably because it cannot plot 6400 points every 1/40 of a second. I'm wondering if there's a way to update only 1 point rather than having to re-plot the entire scatter every time in pyqtgraph.
It may be possible to plot point by point, but if so, is there a way to keep track of each individual point? No two points should share the same angle value with different distances; a new reading at the same angle should essentially overwrite the old one.
I'm also wondering if there are other graphing animation libraries out there that may be a possible solution worth considering.
This is what it looks like, if you're wondering:
Threading allows you to always have data available to plot, but the plot speed is bottlenecked by the paintEvent latency of each plot iteration. From my understanding, there is no way to update 1 point per paint event with setData instead of replotting the entire data set each iteration. So if you have 6400 points, you must repaint all of them even if the update adds just 1 point.
Potential workarounds include downsampling your data or plotting only once every X data points. Essentially, you are capped at the speed at which you can paint to the screen, but you can alter your data set to display the most relevant information with fewer screen refreshes.
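The batching idea above can be sketched as a small buffer that overwrites readings by angle bin and only hands a fresh (x, y) pair to the plot every `batch` samples, decoupling the 40 Hz serial rate from the much slower repaint rate (class, method, and parameter names here are illustrative):

```python
import numpy as np

class PolarBuffer:
    """Keep one distance per angle bin; a new reading at the same angle
    overwrites the old one. Flush to the plot only every `batch` samples
    instead of repainting on every point."""
    def __init__(self, n_bins=6400, batch=40):
        self.distances = np.zeros(n_bins)
        self.theta = np.linspace(0, 2 * np.pi, n_bins, endpoint=False)
        self.batch = batch
        self._pending = 0

    def add(self, bin_index, distance):
        """Record one serial reading; return True when a redraw is due."""
        self.distances[bin_index] = distance   # overwrite in place
        self._pending += 1
        return self._pending >= self.batch

    def flush(self):
        """Return (x, y) arrays for a single redraw, e.g. for pyqtgraph:
        scatter.setData(x, y)."""
        self._pending = 0
        x = self.distances * np.cos(self.theta)
        y = self.distances * np.sin(self.theta)
        return x, y
```

With `batch=40`, the GUI repaints roughly once per second of incoming data while the buffer itself always holds the latest reading for every angle.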
I'm working on a heatmap generation program which hopefully will fill in the colors based on value samples provided from a building layout (this is not GPS based).
If I have only a few known data points such as these in a large matrix of unknowns, how do I get the values in between interpolated in Python?:
0,0,0,0,1,0,0,0,0,0,5,0,0,0,0,9
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,2,0,0,0,0,0,0,0,0,8,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,8,0,0,0,0,0,0,0,6,0,0,0,0,0,0
0,0,0,0,0,3,0,0,0,0,0,0,0,0,7,0
I understand that bilinear won't do it, and Gaussian will bring all the peaks down to low values due to the sheer number of surrounding zeros. This is obviously a matrix-handling proposition, and I don't need it to be Bézier-curve smooth; just close enough to serve as a graphic representation would be fine. My matrix will end up being about 1500×900 cells, with approximately 100 known points.
Once the values are interpolated, I have written code to convert it all to colors, no problem. It's just that right now I'm getting single colored pixels sprinkled over a black background.
Proposing a naive solution:
Step 1: interpolate and extrapolate existing data points onto surroundings.
This can be done using "wave propagation" type algorithm.
The known points "spread out" their values onto surroundings until all the grid is "flooded" with some known values. At the end of this stage you have a number of intersected "disks", and no zeroes left.
Step 2: smooth the result (using bilinear filtering or some other filtering).
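The two steps above can be sketched with SciPy: a nearest-neighbour fill (one way to realize the "wave propagation" flooding) followed by a Gaussian smoothing pass. The function name is illustrative, and zeros are assumed to mean "unknown":

```python
import numpy as np
from scipy import ndimage

def flood_and_smooth(grid, sigma=2.0):
    """Step 1: propagate each known value outward so every cell takes
    the value of its nearest known point (the 'flooded disks').
    Step 2: smooth the blocky result with a Gaussian filter."""
    known = grid != 0
    # For every cell, indices of the nearest known cell.
    _, (iy, ix) = ndimage.distance_transform_edt(
        ~known, return_indices=True)
    flooded = grid[iy, ix]          # no zeroes left after this
    return ndimage.gaussian_filter(flooded, sigma=sigma)
```

Because the Gaussian runs after the flood, the peaks are no longer dragged down by surrounding zeros; sigma only controls how soft the disk boundaries look.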
If you are able to use SciPy, then interp2d does exactly what you want. A possible problem with it is that it seems not to extrapolate smoothly, according to this issue. This means that all values near the walls are going to be the same as their closest neighbour points. This can be solved by putting thermometers in all 4 corners :)
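Note that interp2d has been deprecated in recent SciPy versions; scipy.interpolate.griddata takes scattered known points directly and is a close substitute. A sketch using a few of the known samples from the matrix in the question (the nearest-neighbour fallback handles cells outside the samples' convex hull, where cubic interpolation returns NaN):

```python
import numpy as np
from scipy.interpolate import griddata

# Known samples from the example matrix: (row, col) positions and values.
points = np.array([[0, 4], [0, 10], [2, 3], [4, 1], [5, 5]])
values = np.array([1.0, 5.0, 2.0, 8.0, 3.0])

# Full target grid (the real one would be ~1500 x 900 cells).
gy, gx = np.mgrid[0:6, 0:16]
heat = griddata(points, values, (gy, gx), method='cubic')

# Cubic leaves NaN outside the convex hull of the samples;
# fall back to nearest-neighbour there so every cell gets a value.
nearest = griddata(points, values, (gy, gx), method='nearest')
heat = np.where(np.isnan(heat), nearest, heat)
```

The resulting `heat` array can be fed straight into the existing value-to-colour code.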
I am doing a project on image processing, and I asked about how to solve a very large overdetermined system of linear equations here. Before I can figure out a better way to accomplish the task, I just split the image into four equal parts and solve the systems of equations separately. The result is shown in the attached file.
The image represents the surface height of a pictured object. You can think of the two axes as the x and y axis, and the z-axis is the axis coming out of the screen. I solved the very large systems of equations to get z(x,y), which is displayed in this intensity plot. I have the following questions:
The lower-left part is not shown because, when I solved the equations for that region, the intensity plot was affected by some extreme values. One or two pixels have an intensity (which represents the height) as high as 60, and because of the colourbar scaling, the rest of the image (whose height ranges only from -15 to 9) appears largely the same colour. I am still figuring out why those one or two pixels give such abnormal results, but if I do get them, how can I eliminate/ignore them so the rest of the image can be seen properly?
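A common way to keep a couple of extreme pixels from flattening the colour scale is to set imshow's vmin/vmax from robust percentiles rather than the data extremes. A sketch with synthetic data standing in for the computed z(x, y):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')   # headless backend; omit when running interactively
import matplotlib.pyplot as plt

# Synthetic stand-in for z(x, y): a smooth surface plus two bad pixels.
z = np.random.default_rng(1).normal(0, 3, (100, 100))
z[10, 10] = 60.0
z[20, 30] = 55.0

# Robust colour limits: clip the scale at the 1st/99th percentiles so
# one or two extreme pixels no longer dominate the colourbar.
vmin, vmax = np.percentile(z, [1, 99])
im = plt.imshow(z, vmin=vmin, vmax=vmax, cmap='viridis')
plt.colorbar(im)
```

Values beyond vmax simply saturate at the top colour, so the -15 to 9 range keeps its full contrast; np.clip(z, vmin, vmax) before plotting achieves the same effect.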
I am using imshow() in matplotlib. I also tried a 3D plot, with the z-axis representing the surface height, but the result is not good. Are there other visualization tools that can display the results nicely (preferably in 3D), given that I have obtained z(x,y) for many pairs of (x,y)?
The four separate parts are clearly visible. Is there a way to merge them together? First, I am thinking of sharing the central column and row. E.g. the top-left region spans columns 0 to 250, and the top-right region spans from column 250 rightward. In this case, the values in column 250 are calculated twice, and the values from each region will almost certainly differ slightly. How do I reconcile the two slightly different values to combine the regions? Just take the average of the two, do something related to curve fitting to merge the two regions, or what? Or should I stick to columns 0 to 250, then 251 to the rightmost?
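Rather than averaging a single shared column, one option is to solve the regions with an overlap of several columns and cross-fade linearly across the strip, which hides the seam. A sketch for the horizontal join (array names and the overlap width are illustrative; the vertical join is the same idea transposed):

```python
import numpy as np

def blend_columns(left, right, overlap):
    """Merge two solutions that share `overlap` columns: inside the
    shared strip, fade linearly from the left solution to the right
    one, so the two slightly different value sets meet smoothly."""
    w = np.linspace(1.0, 0.0, overlap)          # weight of the left part
    strip = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], strip, right[:, overlap:]])
```

With overlap=1 this reduces to just keeping one of the two shared-column values; a wider strip (say 10-20 columns) spreads the discrepancy so no single column shows a jump.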
thanks
About point 2: you could try hill shading. See the matplotlib example and/or the novitsky blog
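A minimal hill-shading sketch with matplotlib's LightSource, using a synthetic surface in place of the computed z(x,y):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')   # headless backend; omit when running interactively
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

# Synthetic stand-in for the recovered surface z(x, y).
y, x = np.mgrid[0:1:100j, 0:1:100j]
z = np.sin(6 * x) * np.cos(6 * y)

# Hill shading: illuminate the surface from a given azimuth/altitude so
# small height variations show up as relief in a flat 2D image.
ls = LightSource(azdeg=315, altdeg=45)
rgb = ls.shade(z, cmap=plt.cm.terrain, vert_exag=2, blend_mode='overlay')
plt.imshow(rgb)
```

`vert_exag` exaggerates the vertical scale, which helps when the height range (here -15 to 9) is small relative to the image size.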