Render a mayavi scene with a large pipeline faster - python

I am using mayavi.mlab to display 3D data extracted from images. The data is as follows:
1. 3D camera parameters, drawn as 3 lines in the x, y, z directions around each camera center, usually for about 20 cameras, using mlab.plot3d().
2. 3D coloured points in space, about 4000 points, using mlab.points3d().
For (1) I have a function to draw each line for each camera separately. If I am correct, all these lines are added to the mayavi pipeline for the current scene. Upon mlab.show(), the scene takes about 10 seconds to render all these lines.
For (2) I couldn't find a way to plot all the points at once with each point a different color, so at the moment I iterate with mlab.points3d(x, y, z, color=color). I have never waited for this routine to finish, as it takes too long. If I plot all the points at once with the same color, it takes about 2 seconds.
I already tried to start my script with fig.scene.disable_render = True and resetting fig.scene.disable_render = False before displaying the scene with mlab.show().
How can I display my data with mayavi within a reasonable waiting time?

The general principle is that VTK objects have a lot of overhead, so for rendering performance you want to pack as many things into one object as possible. When you call an mlab convenience function like points3d, it creates a new VTK object to handle that data. Iterating and creating thousands of single points as separate VTK objects is therefore a very bad idea.
Temporarily disabling rendering, as in that other question, is a useful trick, but the "right" way to do it is to have one VTK object that holds all of the different points.
To give the different points different colors, assign scalar values to the VTK object:
import numpy as np
from mayavi import mlab

x, y, z = np.random.random((3, 100))
some_data = mlab.points3d(x, y, z, colormap='cool')  # one VTK object for all 100 points
some_data.mlab_source.dataset.point_data.scalars = np.random.random((100,))  # per-point colors via the colormap
This only works if you can adequately represent the color values you need in a colormap. This is easy if you need a small finite number of colors or a small finite number of simple colormaps, but very difficult if you need completely arbitrary colors.
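If you do need arbitrary per-point colors, one workaround (not part of the answer above, but a known Mayavi technique) is to use each point's index as its scalar and replace the colormap's lookup table with one RGBA entry per point. A minimal sketch, assuming an (n, 4) uint8 RGBA array of the colors you want:

import numpy as np
from mayavi import mlab

n = 100
x, y, z = np.random.random((3, n))
rgba = (np.random.random((n, 4)) * 255).astype(np.uint8)  # arbitrary color per point
rgba[:, 3] = 255  # fully opaque

# scalars are the point indices; scale_mode='none' keeps glyph sizes uniform
pts = mlab.points3d(x, y, z, np.arange(n), scale_mode='none')
pts.module_manager.scalar_lut_manager.lut.number_of_colors = n
pts.module_manager.scalar_lut_manager.lut.table = rgba
mlab.show()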

Related

Creating a grid-based representation of a race track

Given two arrays of (x, y) points (numpy arrays) representing the inner and outer borders of a racetrack, I want to plot these points onto a grid system. There is a problem, however: with these points you would normally draw a "line" between them, but that cannot exactly happen with a grid system.
The inner track looks like:
[[0.83937341 2.87564301]
[0.85329181 2.74359298]
[0.8711707 2.61805296]
[0.89493519 2.49186611]
[0.92430413 2.36440611]
[0.95832813 2.2375989 ]
[0.99367839 2.12898302]
[1.03462696 2.02958798]
[1.08152199 1.93906105]
[1.13470805 1.85674906]
[1.17767704 1.80507398]
[1.21820199 1.77083302]
...
As you can see, the points are very fine; 0.02 meters makes all the difference. So, in order to scale this to a usable grid, I figured that I would need to multiply each of these by, say, 1000, plot that on the grid, then figure out which squares of the grid to fill in to connect the points (maybe using A*?).
I tried using pygame, and even visualizing the grid, but when I tried to use more than 500 rows, the program crashed. I don't necessarily need to visualize the program, I just want it to meet the specifications.
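One possible approach (an illustrative sketch, not from the original thread; the scale factor and function names are assumptions) is to scale the points to integer grid coordinates and rasterize each consecutive pair with Bresenham's line algorithm, so every grid square along the border gets filled:

import numpy as np

def bresenham(x0, y0, x1, y1):
    # yield the integer grid cells on the line from (x0, y0) to (x1, y1)
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def rasterize_track(points, scale=1000):
    # fill a boolean grid with the cells connecting consecutive track points
    cells = np.round(np.asarray(points) * scale).astype(int)
    grid = np.zeros(cells.max(axis=0) + 1, dtype=bool)
    for (x0, y0), (x1, y1) in zip(cells[:-1], cells[1:]):
        for x, y in bresenham(x0, y0, x1, y1):
            grid[x, y] = True
    return grid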

How to create a curved line vector plot of a triangle in Python?

Question
Suppose one has 3 random coordinates and 3 random functions that describe the continuous lines between them*; how would one create a vector plot in Python that allows for smooth lines after infinite zooming in?
Example
The functions should be rotated and translated from their specification to map onto the edge/line in the geometry. For example, one curved line may be specified as -x(x-5)=0, which describes the edge from (x, y) coordinates (2, 6) to (5, 2) (which has length 5). Another curved line, from (x, y) coordinates (2, 2) to (2, 6), may be specified as sin(x/4*pi)=0. One can assume all bulges point outward (of the triangle, in this case).
Approach
I can perform a translation and rotation of the respective functions onto the lines between the coordinates, and then save the plot as a .eps or .pdf. Before doing that, however, I thought it would be wise to ask how these functions are represented and how these plots are generated, as I expect the dpi setting may simply turn the output into a (very) high-resolution raster plot instead of something that still provides smooth lines after infinite zooming.
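For what it's worth, matplotlib's .eps/.pdf output is genuinely vector, but curves are stored as sampled polylines, so smoothness under zoom is bounded by the sampling density. A minimal sketch of mapping a bulge function onto one edge and saving vector output (the endpoints and bulge profile are illustrative, taken loosely from the example above):

import numpy as np
import matplotlib.pyplot as plt

# edge from (2, 2) to (2, 6), with a sinusoidal bulge along its normal
p0, p1 = np.array([2.0, 2.0]), np.array([2.0, 6.0])
t = np.linspace(0.0, 1.0, 2000)        # dense sampling; the PDF stores a polyline
edge = p0 + np.outer(t, p1 - p0)
normal = np.array([p1[1] - p0[1], p0[0] - p1[0]])
normal /= np.linalg.norm(normal)
curve = edge + np.outer(np.sin(t * np.pi), normal)

plt.plot(curve[:, 0], curve[:, 1])
plt.axis('equal')
plt.savefig('triangle_edge.pdf')       # vector output, but still finitely sampled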
Doubt
I can imagine that using a sinusoid does not allow for infinitely smooth scrolling, as sinusoids may be stored numerically. If the representation is finite for sinusoids but analytical/symbolic for polynomials, I would be happy to constrain this question to polynomials only, to get smooth, infinitely scrollable images (like fractals).

Highlighting many ranges on an axis of a Bokeh plot?

I have a scatter plot of data and would like to highlight certain ranges of the x-axis. When the number of ranges to highlight is relatively small, using BoxAnnotation works well. However, I'm trying to make many adjacent highlights (with different opacities). With many adjacent BoxAnnotations, zoomed out, the boxes slightly overlap, creating lines. Additionally, thousands of BoxAnnotations take a long time to generate and do not run smoothly when interacting with the plot.
To be more specific about my case, I have some temporal data and a predictive model detecting the probability of some event occurring in the data. I want each segment to be highlighted with an opacity given by the probability that an event is occurring at that point in time. However, my current BoxAnnotation approach results in artificial lines from overlap of boxes when zoomed out (they disappear when zooming in on a region), and slow responsiveness of the interactive plot.
Is there a way to accomplish something similar to this without the artifacts and with a smoother experience?
Current method:
from bokeh.plotting import figure, show
from bokeh.models import BoxAnnotation, ColumnDataSource

# data_frame is assumed to have 'time', 'intensity' and 'prediction' columns
source = ColumnDataSource(data=data_frame)
figure_ = figure(x_axis_label='Time', y_axis_label='Intensity')
# one BoxAnnotation per time interval, with opacity from the model's prediction
for index in range(data_frame.shape[0] - 1):
    figure_.add_layout(
        BoxAnnotation(left=data_frame['time'].values[index],
                      right=data_frame['time'].values[index + 1],
                      fill_alpha=data_frame['prediction'].values[index],
                      fill_color='red', line_alpha=0)
    )
figure_.circle(x='time', y='intensity', source=source)
show(figure_)
[Screenshots: artificial lines appear when there are too many small adjacent BoxAnnotations; when zooming in on the x-axis, the lines disappear.]
There's probably not any way to salvage this exact approach. The artifacts are due to how the underlying raster HTML canvas composites overlapping translucent regions, and there's not anything that can be done about that. The slowness is due to the fact that this kind of use of BoxAnnotation (with so very many individual instances) is not at all what was envisioned; it is simply not optimized to show thousands of instances the way e.g. scatter glyphs are. You are trying to use box annotations to construct a sort of translucent heat map, and that is not a good fit, for the reasons above.
You could potentially overcome the slowness by using a single rect or vbar glyph that draws all the boxes at once in a vectorized way, but that won't alleviate the compositing issues.
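A minimal sketch of that vectorized variant, using one quad glyph with per-box alpha taken from a column (assuming data_frame has the same 'time', 'intensity' and 'prediction' columns as above):

from bokeh.plotting import figure, show
from bokeh.models import ColumnDataSource

times = data_frame['time'].values
preds = data_frame['prediction'].values

# all boxes live in one glyph; fill_alpha accepts a column name,
# so each box gets its own opacity in a single vectorized draw
box_source = ColumnDataSource(data=dict(
    left=times[:-1],
    right=times[1:],
    alpha=preds[:-1],
))
figure_ = figure(x_axis_label='Time', y_axis_label='Intensity')
figure_.quad(left='left', right='right',
             top=data_frame['intensity'].max(),
             bottom=data_frame['intensity'].min(),
             source=box_source, fill_color='red',
             fill_alpha='alpha', line_alpha=0)
figure_.circle(x='time', y='intensity', source=ColumnDataSource(data_frame))
show(figure_)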
Your best bet is to create a semi-transparent "heatmap" image overlay yourself with a tool or code that can afford better control over the details of rasterization and compositing. I can't really advise you on how to do that in any detail. The Datashader library might be useful for this.

2D color and quiver plot in python with large datasets

In the very near future I will be doing some analysis of measurement data. This data is geographical (e.g. height measurements and wind measurements) with a high resolution (some 50 million x, y and z points, for example). Plotting such a dataset is very slow in matplotlib, and I wonder if there are better options.
The plots I see myself creating in the near future are a quiver plot (for the wind directions) and color plots for terrain heights. It must be noted that the x, y and z values do not line up in a square or rectangular grid.
Besides creating figures, it is likely that the dataset will also need to be shown on Google Maps. Would this be possible as an overlay (even with such a large dataset, or would I need to overlay an image)?
You could consider using PyQt and its Graphics View framework.
You would define a class for each type of item, inheriting from QGraphicsItem, then just add these items to a QGraphicsScene and leave the rendering itself to QGraphicsView. This is expected to be very performant.
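A minimal sketch of that structure, assuming PyQt5 (the item class and toy data are illustrative, not from the answer):

import sys
from PyQt5.QtWidgets import QApplication, QGraphicsItem, QGraphicsScene, QGraphicsView
from PyQt5.QtGui import QPainter, QPen
from PyQt5.QtCore import QRectF, QPointF

class WindArrowItem(QGraphicsItem):
    # one measurement rendered as a small line segment (quiver element)
    def __init__(self, x, y, dx, dy):
        super().__init__()
        self.setPos(x, y)
        self.dx, self.dy = dx, dy

    def boundingRect(self):
        return QRectF(-5, -5, 10, 10)

    def paint(self, painter, option, widget=None):
        painter.setPen(QPen())
        painter.drawLine(QPointF(0, 0), QPointF(self.dx, self.dy))

app = QApplication(sys.argv)
scene = QGraphicsScene()
for x, y, dx, dy in [(0, 0, 5, 2), (20, 10, -3, 4)]:  # toy data
    scene.addItem(WindArrowItem(x, y, dx, dy))
view = QGraphicsView(scene)
view.setRenderHint(QPainter.Antialiasing)
view.show()
sys.exit(app.exec_())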
As for Google Maps, you can export a subset of your data to KML, and render it using a KmlLayer, or you can use an ImageOverlay as you said, or else you can try the DataLayer API.
(As an alternative, you can embed a QWebKit widget pointing to GoogleMaps and overlay a QGraphicsView over it, but I think that would be a bit overkill).

Pygame Large Surfaces

I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter-based position to a screen position using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is a 1:1 scale of the real world and then scale it right before I blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame?
I don't fully understand your question, but I'll attempt to answer it with the following.
No, you should not draw the full surface and then scale it; that is the wrong approach. You should tile very large surfaces and only draw the relevant tiles. If you need a very zoomed-out view, use a pre-scaled-down image. The reason is that the amount of memory required to hold an extremely large surface is prohibitive, and scaling it every frame would be slow.
Convert world coordinates to screen coordinates using some sort of global transform that scales everything to the size you expect. You should also cull sprites that are not visible by testing their inclusion inside the bounding box of your viewport. Keep track of your viewport's position; you will then be able to calculate where in the viewport each sprite should be located based on its "world" coordinates.
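A minimal sketch of that viewport/tiling idea (the tile size and helper names are illustrative; tiles is assumed to be a dict of pre-rendered Surfaces keyed by (col, row), and viewport a pygame.Rect in world coordinates):

import pygame

TILE = 1024  # tile edge in world units; illustrative

def visible_tiles(viewport):
    # yield (col, row) indices of tiles intersecting the viewport rect
    first_col, first_row = viewport.left // TILE, viewport.top // TILE
    last_col, last_row = viewport.right // TILE, viewport.bottom // TILE
    for row in range(first_row, last_row + 1):
        for col in range(first_col, last_col + 1):
            yield col, row

def draw(screen, tiles, viewport):
    # blit only the tiles in view, offset from world to screen coordinates
    for col, row in visible_tiles(viewport):
        tile = tiles.get((col, row))  # pre-rendered Surface or None
        if tile is not None:
            screen.blit(tile, (col * TILE - viewport.left,
                               row * TILE - viewport.top))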
If your map is not dynamic, I would suggest drawing the map outside the game and loading it in-game.
If you plan on converting the game environment into a map, it might be difficult for a large environment. 100,000mm x 200,000mm is a very large area when converted into pixels, so I would suggest you scale it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also, as the first answer mentions, scaling can take significant memory and time for very large images, and scaling a very large image down to a very small one can make it incomprehensible.
