Creating a "box" environment in python for optimization - python

I am currently working on a project related to space optimization.
The project basically involves arranging a set of objects within a box so that they use the least amount of space. I have each object's dimensions, surface area, volume, shape, etc. (the units are originally in meters).
The problem now is that I need to create a three-dimensional box container to hold a certain number of objects, where I can track whether each coordinate is filled by an object or not. Say 'object 1' fills the corner of the box, so it occupies coordinates [0,0,0] to [150,150,150].
Note that I converted meters to millimeters, so when I put the dimensions in a 3D array I can disregard their decimals (minimizing the inaccuracy). I tried using a three-dimensional array for the box, but a problem arises when its dimensions get too big: a box of 1 x 1 x 1 meter is equivalent to 1000 x 1000 x 1000 millimeters, which leads to a memory error.
Is there any alternative way of creating such a "box" environment in Python that my algorithm can operate on?
I tried matplotlib's pyplot 3D, but I can't track the coordinates or give a specific length, width, and height to the plot, since it is really meant for plotting data, not storing it.

So I got this idea from #f5r5e5d: when creating NumPy 3D arrays, I can specify the dtype.
Since I need a box with at least 1000 x 1000 x 1000 cells, and assuming a box holds on average 100-150 objects, I only need to store unsigned integers from 0 to 150.
So in creating my array I can just do
box = np.zeros((1750, 1750, 1750), dtype='u1')
where 'u1' is an 8-bit unsigned integer.
This gives a memory error when I go beyond 1750, but for a box with a billion or more elements it works perfectly!
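
As an illustration of how such an array can be used, here is a minimal sketch; place_object is a hypothetical helper, not from the original post. Each cell holds the id of the object occupying it, with 0 meaning empty:

import numpy as np

box = np.zeros((1000, 1000, 1000), dtype='u1')     # 1 m box at 1 mm resolution, ~1 GB

def place_object(box, object_id, start, end):
    # mark the axis-aligned region [start, end) as occupied by object_id
    x0, y0, z0 = start
    x1, y1, z1 = end
    box[x0:x1, y0:y1, z0:z1] = object_id

place_object(box, 1, (0, 0, 0), (150, 150, 150))   # 'object 1' fills the corner
print(box[75, 75, 75], box[500, 500, 500])          # prints: 1 0

Note that a 'u1' cell can only hold ids 0-255, which is enough for the 100-150 objects assumed above.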

Related

Creating a grid-based map of a race track

Given two arrays of (x,y) points (numpy arrays) representing the inner and outer borders of a racetrack, I want to plot these points onto a grid system. There is a problem, however: with these points you would normally draw a "line" between them, but that cannot exactly happen with a grid system.
The inner track looks like:
[[0.83937341 2.87564301]
[0.85329181 2.74359298]
[0.8711707 2.61805296]
[0.89493519 2.49186611]
[0.92430413 2.36440611]
[0.95832813 2.2375989 ]
[0.99367839 2.12898302]
[1.03462696 2.02958798]
[1.08152199 1.93906105]
[1.13470805 1.85674906]
[1.17767704 1.80507398]
[1.21820199 1.77083302]
...
As you can see, the points are very fine; 0.02 meters makes all the difference. So in order to scale this to a usable grid, I figured that I would need to multiply each of these by something like 1000, plot that on the grid, then figure out which squares of the grid to fill in to connect the points (maybe using A*?).
I tried using pygame, and even visualizing the grid, but when I tried to use more than 500 rows, the program crashed. I don't necessarily need to visualize the program; I just want it to meet the specifications.
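
One way to sidestep the crash at high resolution is to store only the occupied cells in a set rather than a dense grid; below is a minimal sketch under that assumption. rasterize_track and cells_per_meter are illustrative names, and the straight-line fill merely stands in for whatever connection rule (A* or otherwise) is eventually chosen:

import numpy as np

def rasterize_track(points, cells_per_meter=1000):
    # scale meter coordinates to integer grid cells
    cells = np.round(np.asarray(points) * cells_per_meter).astype(int)
    filled = set()
    # fill the cells along the straight segment between consecutive points
    for (x0, y0), (x1, y1) in zip(cells[:-1], cells[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in np.linspace(0.0, 1.0, steps + 1):
            filled.add((int(round(x0 + t * (x1 - x0))), int(round(y0 + t * (y1 - y0)))))
    return filled

inner = np.array([[0.83937341, 2.87564301], [0.85329181, 2.74359298], [0.8711707, 2.61805296]])
print(len(rasterize_track(inner)))

Because only occupied cells are stored, memory use scales with the track length rather than with the full grid area.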

Animation of image matrices in python

I have a list of (two dimensional) matrices which I want to combine into an animation. The matrices are integer valued with a specified lower and upper bound (in my case from zero to six). I would now like to build an animation where each of the frames consists of the contents of the matrix.
Now, I have managed to get this working when each frame merely shows the numeric values via matplotlib's built-in matplotlib.pyplot.matshow. Namely, I defined an update function
def update_frames(frame):
    animation_image.set_array(matrices[frame])
    return animation_image,
where matrices contains the list of integer valued matrices I want to animate. Then, I could pass this into the animation module via
animation_image = ax.imshow(matrices[0])
ani1 = animation.FuncAnimation(fig, update_frames, frames=len(matrices), blit=True)
to obtain the desired result.
I now, however, want to take this a step further by replacing each square on the grid with a specific image, where all the images are resized to a fixed square shape. For example, at each frame I want to draw a grid of images where the image for each grid cell depends on the integer in the matrix.
Here is a sample animation that I get when using the code snippet above:
I do not have finalized versions of the pictures I would like to show up in the grid yet, but I was thinking of textures that resemble these:
Ideally, I would then end up with grids looking like this (aside from the different placement of objects on the grid compared to the animation above).
Any help would be greatly appreciated!
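
Building on the snippet above, one possible sketch of the tile idea is to assemble each frame as a single RGB image by replacing every cell with its texture; the flat-colour tiles and random matrices below are stand-ins for the real textures and data:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

rng = np.random.default_rng(0)
tile = 16
tiles = [np.full((tile, tile, 3), rng.random(3)) for _ in range(7)]   # one tile per value 0-6
matrices = [rng.integers(0, 7, size=(8, 8)) for _ in range(20)]       # stand-in data

def matrix_to_image(matrix):
    # replace every cell with its texture tile and stitch the rows together
    return np.vstack([np.hstack([tiles[v] for v in row]) for row in matrix])

fig, ax = plt.subplots()
animation_image = ax.imshow(matrix_to_image(matrices[0]))

def update_frames(frame):
    animation_image.set_array(matrix_to_image(matrices[frame]))
    return animation_image,

ani = animation.FuncAnimation(fig, update_frames, frames=len(matrices), blit=True)
plt.show()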

Sampling in each region according to a segmentation map in PyTorch

I want to sample a certain number of points in each region of an image according to a segmentation map (like what the SLIC algorithm produces: a map with the same size as the image, containing integers from 0 to num_segment indicating which segment each pixel belongs to).
Currently, my code works as follows (shown here as runnable PyTorch rather than pseudocode; torch.nn.functional.interpolate plays the role of the torch.Upsample step):
import torch
import torch.nn.functional as F

sampled = []
for i in range(num_segment):
    ys, xs = torch.where(seg_map == i)    # indices of the pixels in the i-th segment
    vals = image[ys, xs].float()          # pick out those pixels as a 1-D tensor
    vals = F.interpolate(vals[None, None], size=n_sample)[0, 0]   # uniformly sample n_sample points
    sampled.append(vals)
sampled = torch.stack(sampled)            # one row of n_sample points per segment
I need not only the original image value for each selected point, but also its indices.
I drew a picture to illustrate.
So, my question is: is there a native way to implement this in PyTorch? The above code runs a little slowly; maybe the for-loop slows it down. And since all the sampling processes are independent of each other, how can I speed this up?
Generally, I have ~400 segments and want to sample 20-50 points for each segment.
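
For keeping the indices alongside the values, one minimal sketch (assuming the same seg_map, image, and n_sample names as above) is to resample positions along the segment and index with them, so each sampled value arrives together with its (y, x) location:

import torch
import torch.nn.functional as F

def sample_segment(image, seg_map, i, n_sample):
    ys, xs = torch.where(seg_map == i)                 # pixel indices of segment i
    pos = torch.arange(len(ys), dtype=torch.float32)   # positions along the segment
    pos = F.interpolate(pos[None, None], size=n_sample)[0, 0].long()
    return image[ys[pos], xs[pos]], ys[pos], xs[pos]   # values plus their indices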

how to handle lists and arrays of varying length?

I have the following code:
ray_points = [np.array([r1_1, r2_1, r3_1, r4_1]),np.array([r1_2, r2_2, r3_2, r4_2]),np.array([r1_3, r2_3, r3_3, r4_3])]
The list ray_points contains the coordinates of light rays. For this example, I have four light rays (r1, r2, r3, r4). Each time I have located the next coordinates of the light rays (they get 'new' coordinates as they travel and hit another surface), I append a new array of shape (4, 3) to ray_points (4 for the four light rays, 3 for the three spatial coordinates). So, for example, r1_2 is an array of length three holding the 2nd coordinates of ray r1, r3_3 holds the 3rd coordinates of ray r3, and so on.
My previous approach was to have N different lists for N different rays, each containing coordinates as arrays of shape (3, 1). Now I use a single (4, 3) array to store all the coordinates of the light rays at each new intersection, because I want to 'vectorize' my code to speed things up (imagine if I had 100k light rays). Every time I want to calculate the next coordinates, next refracted directions, etc., I just pass the most recent coordinates of the rays, which is simply ray_points[-1], to my functions, and the functions calculate everything directly on this array. This is significantly faster than my previous approach of creating N different lists and looping over each light ray.
The problem now is that different rays can have different numbers of surface intersections, and hence different numbers of new coordinates. For instance, if ray r1 has refracted 10 times and has already terminated, I will only have r1_1, r1_2, ..., r1_10, but I still need to keep track of the other rays r2, r3, r4. Since I use a single 4x3 array containing the coordinates of all 4 rays, I have to keep adding coordinates for r1. I could keep the new points/directions of r1 unchanged, but then I am just wasting memory; imagine if all the rays but one have terminated and that single ray keeps travelling and getting new coordinates, then most of the elements of ray_points are actually redundant.
I would like to know how I should set up my code initially so that I can use this vectorized approach to speed things up, yet still keep the advantage of the original approach: treating each ray separately so I can handle rays with different numbers of new coordinates.
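
One possible setup, sketched below with illustrative names, is to keep a boolean active mask alongside the coordinate array: the vectorized update still runs on one array, but history is recorded per ray only while that ray is alive, so terminated rays stop consuming memory:

import numpy as np

n_rays = 4
positions = np.zeros((n_rays, 3))          # current coordinates of each ray
active = np.ones(n_rays, dtype=bool)       # flips to False once a ray terminates
histories = [[] for _ in range(n_rays)]    # per-ray coordinate history, ragged

def propagate(positions, active):
    # stand-in for the real intersection/refraction update
    new_positions = positions.copy()
    new_positions[active] += 1.0           # vectorized update on live rays only
    return new_positions

while active.any():
    positions = propagate(positions, active)
    for i in np.flatnonzero(active):       # record only rays still travelling
        histories[i].append(positions[i].copy())
    active &= positions[:, 0] < 5.0        # hypothetical termination test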

display subsets of vtkPoints using different glyphs

I am trying to display some timeseries data in 3D with colormapped values using VTK.
I have a single array of 3D positions for two different kinds of objects, say cones and spheres. These positions are interspersed. I also have a 2D array with timeseries data for these objects. The n-th entry in the position array corresponds to the n-th row in the timeseries array (all of these are numpy arrays).
Now I want an animated display (using python-vtk) of the cones and the spheres, with their colors varying according to the entries in the timeseries array. Currently I have made it work by splitting each of the arrays into two: one for cones and one for spheres. But ideally I would like to just pipe the cone entries of the position array through a cone glyph and the sphere entries through a sphere glyph, and set the timeseries values for all the positions directly. What is the way to do that, if it is possible at all?
I bypassed the issue by storing the indices of the two kinds of objects (i.e., cone_indices and sphere_indices). I also created two vtkPoints objects with the cone positions and the sphere positions; these are used to create the vtkPolyData objects for the respective classes. In the update method of the timer class, I use these index arrays to pick out the data for that time point for each type and assign the scalar values to the point data.
conePoints.GetPointData().SetScalars(coneData)
spherePoints.GetPointData().SetScalars(sphereData)
numpy's array lookup using an index-array is fast enough.
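
As a rough sketch of that update step (the VTK pipeline setup is omitted and the array names are assumed from the description above), the per-timestep lookup and conversion might look like:

import numpy as np
from vtk.util import numpy_support

timeseries = np.random.rand(10, 100)        # stand-in: (n_objects, n_timepoints)
cone_indices = np.array([0, 2, 4, 6, 8])
sphere_indices = np.array([1, 3, 5, 7, 9])

def scalars_for_timestep(t):
    # numpy index-array lookup, then conversion to VTK scalar arrays
    cone_data = numpy_support.numpy_to_vtk(timeseries[cone_indices, t].copy(), deep=True)
    sphere_data = numpy_support.numpy_to_vtk(timeseries[sphere_indices, t].copy(), deep=True)
    return cone_data, sphere_data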
