Creating a grid-based map over a race track - python

Given 2 numpy arrays of (x,y) points representing the inner and outer borders of a racetrack, I want to plot these points onto a grid system. There is a problem, however: with these points you would normally draw a "line" between them, but that cannot exactly happen with a grid system.
The inner track looks like:
[[0.83937341 2.87564301]
[0.85329181 2.74359298]
[0.8711707 2.61805296]
[0.89493519 2.49186611]
[0.92430413 2.36440611]
[0.95832813 2.2375989 ]
[0.99367839 2.12898302]
[1.03462696 2.02958798]
[1.08152199 1.93906105]
[1.13470805 1.85674906]
[1.17767704 1.80507398]
[1.21820199 1.77083302]
...
As you can see, the points are very fine; 0.02 meters makes all the difference. So in order to scale this to a usable grid, I figured I would need to multiply each of these by maybe 1000, plot that on the grid, then figure out which squares of the grid to fill in to connect the points (maybe using A*?).
I tried using pygame, and even visualizing the grid, but when I tried to use more than 500 rows the program crashed. I don't necessarily need to visualize the program; I just want it to meet the specifications.
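Rather than scaling by 1000 and running A* (a search algorithm is overkill when consecutive points are already known to be connected), each segment can be rasterized directly with Bresenham's line algorithm. A minimal sketch, assuming a cell size of 1 cm (scale=100) and using the first few inner-track points from above:

```python
import numpy as np

def rasterize_track(points, scale=100):
    """Scale float (x, y) points to integer grid cells and connect
    consecutive cells with Bresenham's line algorithm."""
    cells = np.rint(np.asarray(points) * scale).astype(int)
    filled = set()
    for (x0, y0), (x1, y1) in zip(cells[:-1].tolist(), cells[1:].tolist()):
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
        err = dx + dy
        while True:
            filled.add((x0, y0))          # mark this grid square
            if (x0, y0) == (x1, y1):
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
    return filled

inner = np.array([[0.839, 2.876], [0.853, 2.744], [0.871, 2.618]])
cells = rasterize_track(inner, scale=100)   # grid squares at 1 cm resolution
```

The returned set of (col, row) cells is the "filled squares" representation; nothing ever needs to be drawn, so the 500-row pygame limit does not apply.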

Related

How to create a curved line vector plot of a triangle in Python?

Question
Suppose one has 3 random coordinates with 3 random functions that describe the continuous lines between them: how would one create a vector plot in Python that allows for smooth lines after infinite zooming in?
Example
The functions should be rotated and translated from their specification to map onto the edge/line in the geometry. For example, one curved line may be specified as y = -x(x-5), describing the line from (x,y) coordinates (2,6) to (5,2) (which has length 5). Another curved line, from (x,y) coordinates (2,2) to (2,6), may be specified as y = sin(x/4*pi). One can assume all bulges point outward (of the triangle in this case).
Approach
I can perform a translation and rotation of the respective functions onto the lines between the coordinates, and then save the plot as a .eps or .pdf. Before doing that, however, I thought it would be wise to ask how these functions are represented and how these plots are generated, as I expect the dpi setting may simply turn it into a (very) high-resolution plot instead of something that still provides smooth lines after infinite scrolling.
Doubt
I can imagine that using a sinusoid does not allow for infinitely smooth scrolling, as sinusoids may be stored numerically. If the representation is finite for sinusoids but analytical/symbolic for polynomials, I would be happy to constrain this question to polynomials only, to get smooth, infinitely scrollable images (like fractals).
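On the representation question: vector formats such as SVG, PDF and EPS store Bézier paths, and a polynomial arc of degree 2 or 3 maps onto a Bézier segment exactly, so polynomial edges can indeed stay smooth at any zoom; matplotlib, by contrast, generally flattens curves into many short line segments before writing the file. A minimal sketch of the exact route, writing a quadratic segment straight to SVG with only the standard library:

```python
def quad_bezier_svg(p0, p1, p2, size=(200, 200)):
    """Emit an SVG containing one quadratic Bezier path. A quadratic
    polynomial arc maps exactly onto a quadratic Bezier, so the file
    is truly vector: smooth at any zoom level."""
    (x0, y0), (cx, cy), (x2, y2) = p0, p1, p2
    d = f"M {x0} {y0} Q {cx} {cy} {x2} {y2}"
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size[0]}" height="{size[1]}">'
            f'<path d="{d}" fill="none" stroke="black"/></svg>')

# y = -x(x-5) on [0, 5]: endpoints (0, 0) and (5, 0); the Bezier
# control point for a quadratic f on [a, b] is
# ((a + b)/2, f(a) + f'(a) * (b - a)/2) = (2.5, 12.5) here
svg = quad_bezier_svg((0, 0), (2.5, 12.5), (5, 0))
```

Sinusoids have no exact Bézier form, so constraining the question to polynomials (as suggested above) is what makes the exact representation possible.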

How to efficiently store, check for inclusion and retrieve large amounts of float numbers in python?

Let me describe my problem: I am creating a simple graphing calculator. The way I did it is that every y coordinate is calculated by putting x into a function f(x) and then graphing the point (x, f(x)).
To make things simple for myself, whenever I want to shift the graph or zoom in, I just adjust the dimensions of the current view and then recalculate all the new points on the screen. So zooming in or shifting the screen means that every single point has been recalculated. To get the graph to look like it is formed by actual lines instead of just points, I divide the width of the screen into about 1000 to 10000 points and plot them; with enough points, it just looks like lines. These points are tuple pairs of floats.
As you can imagine, there is a lot of overlap and recalculation that may be slowing down the program, so I am wondering about the best way to calculate an (x, f(x)) point, store it, and, any time I change the view of the graph, retrieve f(x) and skip the calculation if that x happens to be in view. The thing is, there are going to be thousands and thousands of these points, so I figured list operations like "i in lst" are not efficient enough.
I am trying to make my graph as fast as possible so any suggestions would be helpful! Thanks.
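One cheap way to get the lookup asked for above is to quantize x to a fixed precision and memoize f on those keys: a dict-backed cache like functools.lru_cache gives O(1) lookups, unlike "i in lst", which is O(n). A minimal sketch (the cubic is just a stand-in for the real function):

```python
from functools import lru_cache

@lru_cache(maxsize=100_000)
def f(x):
    return x ** 3 - 2 * x          # stand-in for the real function

def sample_view(xmin, xmax, n_points=1000):
    """Sample f across the current view. x values are rounded to a
    fixed precision so identical x values hit the cache when the
    view is shifted or zoomed later."""
    step = (xmax - xmin) / n_points
    xs = [round(xmin + i * step, 9) for i in range(n_points + 1)]
    return [(x, f(x)) for x in xs]
```

The rounding matters: without it, floating-point noise makes "the same" x from two different views miss the cache. Note that after a zoom the sample grid usually lands on mostly new x values, so the cache helps most for panning and for re-renders of the same view.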

How to divide a polygon into tiny polygons of a particular size?

I would like to divide/cut an irregular polygon into tiny polygons of a particular size (1.6m x 1m) in such a way that most of the irregular polygon's area is utilized (an OPTIMIZATION MODEL).
The length and width of the small polygons can be interchanged (either 1.6m x 1m or 1m x 1.6m).
So, in the end, I need to have as many polygons of size 1.6m x 1m as possible.
You may consider it a packing problem: I need to pack as many rectangles of size 1.6m x 1m as possible inside a polygon. The rectangles can be translated and rotated but must not intersect each other.
I used the "Create Grid" feature but it just cuts the whole polygon in a particular fashion.
But I also want the blue polygon to be cut in a vertical manner (1m x 1.6m) too.
So, I would like to know whether there is a plugin for this in QGIS/ArcGIS or any python script for this kind of polygon optimization?
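Outside of QGIS/ArcGIS, a rough axis-aligned baseline is easy to script: lay a 1.6m x 1m grid over the polygon's bounding box and keep the cells whose four corners all fall inside. The corner test is only exact for convex polygons (a concave edge can cut through a cell whose corners pass), so treat this as a starting point, not the optimum; shapely's contains() would be the more robust check, and true packing with rotation needs an optimization solver. A sketch in plain Python:

```python
def point_in_polygon(pt, poly):
    """Standard ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                inside = not inside
    return inside

def pack_grid(poly, w=1.6, h=1.0):
    """Greedy axis-aligned packing: keep every grid cell whose four
    corners lie inside the polygon (exact only for convex shapes).
    Run a second pass with w, h swapped to try the 1m x 1.6m cut."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    placed = []
    y = min(ys)
    while y + h <= max(ys):
        x = min(xs)
        while x + w <= max(xs):
            corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
            if all(point_in_polygon(c, poly) for c in corners):
                placed.append((x, y, w, h))
            x += w
        y += h
    return placed
```

Running both orientations and keeping whichever places more rectangles gives the interchangeable-dimensions behaviour asked for above.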

how to merge images in intensity plot

I am doing a project about image-processing, and I asked about how to solve a very large overdetermined systems of linear equations here. Before I can figure out a better way to accomplish the task, I just split the image into four equal parts and solve the systems of equations separately. The result is shown in the attached file.
The image represents the surface height of a pictured object. You can think of the two axes as the x and y axis, and the z-axis is the axis coming out of the screen. I solved the very large systems of equations to get z(x,y), which is displayed in this intensity plot. I have the following questions:
The lower-left part is not shown because, when I solved the equations for that region, the intensity plot was affected by some extreme values. One or two pixels have an intensity (which represents the height) as high as 60, and because of the scaling of the colour bar, the rest of the image (whose height ranges only from -15 to 9) appears largely the same colour. I am still figuring out why those one or two pixels give such abnormal results, but if I do get them, how can I eliminate/ignore them so the rest of the image can be seen properly?
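For the outlier issue: rather than tracking down each rogue pixel first, robust colour limits can be taken from percentiles, and/or the data clipped to those limits, so one or two extreme values no longer stretch the colour bar. A sketch with synthetic data standing in for the solved height map:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(0, 3, size=(100, 100))    # stand-in for the solved z(x, y)
z[0, 0] = 60.0                           # one rogue pixel like those described

# robust colour limits: ignore the extreme 1% of values at each end
vmin, vmax = np.percentile(z, [1, 99])
z_clipped = np.clip(z, vmin, vmax)
# then either plt.imshow(z, vmin=vmin, vmax=vmax) or plt.imshow(z_clipped)
# keeps the -15..9 range visible instead of one flat colour
```

Passing vmin/vmax to imshow() only rescales the colours; clipping actually overwrites the outliers, which also helps any later 3D rendering.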
I am using the imshow() in matplotlib. I also tried using a 3D plot, with the z-axis representing the surface height, but the result is not good. Are there any other visualization tools that can display the results in a nice way (preferably showing it in a 3D way) given that I have obtained z(x,y) for many pairs of (x,y)?
The four separate parts are clearly visible. Are there any ways to merge the separate parts together? First, I am thinking of sharing the central column and row. e.g. The top-left region spans from column=0 to 250, and the top-right region spans from column=250 to the right. In this case, values in col=250 will be calculated twice in total, and the values in each region will almost certainly differ from the other one slightly. How to reconcile the two slightly different values together to combine the different regions? Just taking the average of the two, do something related to curve fitting to merge the two regions, or what? Or should I stick to col=0 to 250, then col=251 to rightmost?
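For the seam question: averaging just the one shared column usually still leaves a visible step, because the two solutions differ by a slowly varying offset, not only at that column. A linear cross-fade over a few shared columns hides the seam better. A sketch for merging a left and a right tile (the same idea applies to rows; overlap width and the linear weights are choices, not requirements):

```python
import numpy as np

def merge_lr(left, right, overlap):
    """Merge two tiles that share `overlap` columns, cross-fading
    linearly across the shared region instead of leaving a hard seam."""
    w = np.linspace(0.0, 1.0, overlap)      # 0 -> left only, 1 -> right only
    blended = left[:, -overlap:] * (1 - w) + right[:, :overlap] * w
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])
```

With overlap=1 this reduces to taking the left tile's value at the shared column; a wider overlap (solve each region a few columns past the midline) gives a smoother transition than either averaging or the col=0..250 / col=251.. split.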
thanks
About point 2: you could try hill shading. See the matplotlib example and/or the novitsky blog
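The hill-shading suggestion is nearly a one-liner with matplotlib's LightSource; it renders z(x, y) as shaded relief, giving a 3D impression without a 3D plot. A sketch with a synthetic surface standing in for the real data:

```python
import numpy as np
from matplotlib import cm
from matplotlib.colors import LightSource

x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
z = np.exp(-x**2 - y**2)                 # stand-in for the solved z(x, y)

ls = LightSource(azdeg=315, altdeg=45)   # light from the upper left
rgb = ls.shade(z, cmap=cm.gist_earth, vert_exag=1, blend_mode="soft")
# plt.imshow(rgb) then shows the surface relief in 2D
```

vert_exag exaggerates the vertical scale if the relief is too subtle; blend_mode can be "soft", "overlay", or "hsv".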

Render a mayavi scene with a large pipeline faster

I am using mayavi.mlab to display 3D data extracted from images. The data is as follows:
3D camera parameters as 3 lines in the x, y, z directions around the camera center, usually for about 20 cameras, using mlab.plot3d().
3D coloured points in space for about 4000 points using mlab.points3d().
For (1) I have a function to draw each line for each camera separately. If I am correct, all these lines are added to the mayavi pipeline for the current scene. Upon mlab.show(), the scene takes about 10 seconds to render all these lines.
For (2) I couldn't find a way to plot all the points at once with each point a different color, so at the moment I iterate with mlab.points3d(x, y, z, color=color). I have never waited for this routine to finish, as it takes too long. If I plot all the points at once with the same color, it takes about 2 seconds.
I already tried to start my script with fig.scene.disable_render = True and resetting fig.scene.disable_render = False before displaying the scene with mlab.show().
How can I display my data with mayavi within a reasonable waiting time?
The general principle is that VTK objects have a lot of overhead, so for rendering performance you want to pack as many things into one object as possible. When you call an mlab convenience function like points3d, it creates a new VTK object to handle that data. Iterating and creating thousands of single points as VTK objects is therefore a very bad idea.
The trick of temporarily disabling the rendering, as in that other question, helps, but the "right" way to do it is to have one VTK object that holds all of the different points.
To set the different points as different colors, give scalar values to the vtk object.
import numpy as np
from mayavi import mlab

x, y, z = np.random.random((3, 100))
some_data = mlab.points3d(x, y, z, colormap='cool')
some_data.mlab_source.dataset.point_data.scalars = np.random.random((100,))
This only works if you can adequately represent the color values you need in a colormap. This is easy if you need a small finite number of colors or a small finite number of simple colormaps, but very difficult if you need completely arbitrary colors.
