I am trying to display some timeseries data in 3D, with colormapped values, using VTK.
I have a single array of 3D positions for two different kinds of objects, say cones and spheres; these positions are interspersed. I also have a 2D array with timeseries data for these objects: the n-th entry in the position array corresponds to the n-th row in the timeseries array (all of these are numpy arrays).
Now I want an animated display (using python-vtk) of the cones and the spheres, with their colors varying according to the entries in the timeseries array. Currently I have made it work by splitting each of the arrays into two - one for cones and one for spheres. But ideally I would like to just pipe the cone entries of the position array through a cone glyph and the sphere entries through a sphere glyph, and set the timeseries values for all the positions directly. What is the way to do that, if it is possible at all?
I bypassed the issue by storing the indices of the two kinds of objects (i.e., cone_indices and sphere_indices). I also create two vtkPoints objects with the cone positions and the sphere positions. These are used to create the vtkPolyData for the respective classes. In the update method of the timer class, I use these index arrays to pick up the data for that time point for each type and assign the scalar values to the point data.
conePoints.GetPointData().SetScalars(coneData)
spherePoints.GetPointData().SetScalars(sphereData)
numpy's array lookup using an index-array is fast enough.
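For concreteness, a minimal sketch of that timer-driven update, assuming conePoints and spherePoints here refer to the vtkPolyData objects built from the split positions, timeseries is the (n_objects, n_times) numpy array, and numpy_to_vtk handles the conversion; the helper name and signature are illustrative, not the original code:

import numpy as np
from vtk.util.numpy_support import numpy_to_vtk

def update_scalars(time_index, timeseries, cone_indices, sphere_indices,
                   conePoints, spherePoints):
    # Column of the timeseries array for this time point, split by object type
    # via the stored index arrays (fast numpy fancy indexing).
    frame = timeseries[:, time_index]
    coneData = numpy_to_vtk(frame[cone_indices], deep=True)
    sphereData = numpy_to_vtk(frame[sphere_indices], deep=True)

    # Assign the scalars; the mappers' lookup tables pick them up on the next render.
    conePoints.GetPointData().SetScalars(coneData)
    spherePoints.GetPointData().SetScalars(sphereData)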
A Python novice here, so I am sorry if the question seems stupid. I am trying to create a 3D binary mask using a list of contour points. Basically, I want to take an empty 3-dimensional numpy array of a fixed shape and set the region (all the pixels) enclosed by some points in a list (a list of Cartesian coordinates) to 1 and everything outside to 0. I will then write this array out into a volume file to use for some other things.
I haven't been able to find a package that can do this for a 3D array; I could only find polygon filling for 2D arrays (using OpenCV). I would appreciate it if you could point me to a package/approach that I can use for this.
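If the contour points can be grouped by slice, one workaround is to rasterize the mask slice by slice with the 2D tools that do exist; a minimal sketch using matplotlib's Path (the volume shape, the contours_by_slice dict, and the square contour are made-up placeholders):

import numpy as np
from matplotlib.path import Path

mask = np.zeros((64, 64, 32), dtype=np.uint8)      # placeholder volume shape
rows, cols = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
pixel_centers = np.column_stack([rows.ravel(), cols.ravel()])

# Hypothetical contours: z-index -> (N, 2) array of (row, col) polygon vertices
contours_by_slice = {10: np.array([[10, 10], [10, 50], [50, 50], [50, 10]])}

for z, vertices in contours_by_slice.items():
    path = Path(vertices)                            # closed polygon for this slice
    inside = path.contains_points(pixel_centers)     # boolean flag per pixel
    mask[:, :, z] = inside.reshape(mask.shape[:2])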
I have a numpy array whose shape is:
(30,40,100,200)
Those are 3D points (30 along the x-axis x 40 along the y-axis x 100 along the z-axis) for different times (200 in total):
For visualization only (this is not my dataset, the picture comes from here: http://15462.courses.cs.cmu.edu/fall2016/article/35)
Now, I have issues with understanding how I can slice it:
How do I extract a 3D cluster for one specific time, i.e. 140?
From that extracted 3D cluster, how can I plot a 2D x-z cross-section for a specific y-position, i.e. 45?
You should read up on basic numpy slicing: https://numpy.org/doc/stable/reference/arrays.indexing.html
How do I extract a 3D cluster for one specific time, i.e. 140?
Just specify the time index, i.e. data[:, :, :, 140]. Be aware that Python indexing starts from 0.
From that extracted 3D cluster, how can I plot a 2D x-z cross-section for a specific y-position, i.e. 45?
You can acquire a 2D cross-section by a similar slicing operation, i.e. cluster[:, 45, :]. It can be plotted in various ways depending on the plotting library. imshow() from matplotlib might be one possibility.
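Putting both operations together, a minimal sketch (random data stands in for the real array; imshow is just one way to plot the cross-section):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(30, 40, 100, 200)   # placeholder for the real (x, y, z, t) array

cluster = data[:, :, :, 140]              # 3D cluster at one time index (0-based)
cross_section = cluster[:, 45, :]         # x-z plane at y-position 45, shape (30, 100)

plt.imshow(cross_section.T, origin='lower', aspect='auto')  # transpose so z runs vertically
plt.xlabel('x')
plt.ylabel('z')
plt.show()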
Is your question about the data set (how the data is organized and how to get a 3D cluster at a specific time), or about the coding?
If it is about "how to get a cluster at a specific time", then your problem is about your particular dataset, and Stack Overflow is not the right place for that type of question.
If it is about coding, then state your question clearly and provide your code and the problem you have with it.
Based on your explanation, I think that for each time step you have a complete set of xyz data, so the solution is very straightforward.
I have a standard shapefile which has a column called geometry. This column contains a shapely Polygon object, which is essentially a list of paired latitudes and longitudes that trace an object (e.g. the border of a country). The shapes are very complex/detailed, meaning that they have many points and are thus hard to plot. Is there a way to intelligently sub-sample the points in each polygon (e.g. taking only 1% of the points) in Python?
I use the word intelligently because I need to ensure a few things:
The first and last point in the polygon must be the same (so that the shape reconnects to itself).
If I have two polygons that share points (e.g. when two countries share a border), there should be minimal overlap between the boundaries after sub-sampling.
An example of a piece of software that does this is https://mapshaper.org/. I want to do the same thing as their "simplify" tool, but in python.
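One readily available building block (a minimal sketch, not a full mapshaper equivalent) is the Douglas-Peucker simplification exposed by shapely/geopandas. Note that simplifying each polygon independently does not guarantee that shared borders stay aligned; mapshaper handles that with a topology-aware simplification, which the python topojson package also aims at. The file name and tolerance below are placeholders:

import geopandas as gpd

gdf = gpd.read_file("countries.shp")     # hypothetical shapefile path

# tolerance is in the layer's coordinate units; preserve_topology keeps rings closed and valid
gdf["geometry] = gdf.geometry.simplify(tolerance=0.01, preserve_topology=True)

gdf.to_file("countries_simplified.shp")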
I am currently working on a project related to space optimization.
The project is basically arranging a set of objects within a box so as to use the least amount of space. I have the objects' dimensions, surface area, volume, shape, etc. (units are originally in meters).
The problem now is that I need to create a three-dimensional box container to hold a certain number of objects, where I can track whether each coordinate is filled by an object or not. Say 'object 1' fills out the corner of the box, so it fills coordinates [0,0,0] to [150,150,150].
Note that I converted meters to millimeters, so when I put them in a 3D array I can disregard the decimals of the dimensions (to minimize the inaccuracy). I tried using a three-dimensional array for the box, but a problem arises when its dimensions are too big: a box of 1x1x1 meter is equivalent to 1000 x 1000 x 1000 millimeters, which leads to a memory error.
Is there any alternative for creating such a "box" environment that my algorithm can operate on in Python?
I tried matplotlib's pyplot 3D, but I can't track the coordinates or give a specific length, width, and height to the plot, since it is really meant for plotting data, not storing it.
So I got this idea from #f5r5e5d: when creating numpy 3D arrays, I specify the dtype.
Since I need a box with at least 1000 x 1000 x 1000 cells, and assuming a box can hold on average 100-150 objects, I only need to store unsigned integers from 0 to 150.
So in creating my array I can just do
box = np.zeros((1750,1750,1750), dtype = 'u1')
where u1 is an 8-bit unsigned integer.
This gives a memory error when I go beyond 1750, but for a box with 1 billion or more elements it works perfectly!
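The memory arithmetic behind that dtype choice, as a quick sanity check:

import numpy as np

cells = 1750 ** 3                                   # number of cells in the box

print(cells * np.dtype('u1').itemsize / 1e9)        # ~5.4 GB with 8-bit unsigned ints
print(cells * np.dtype('f8').itemsize / 1e9)        # ~42.9 GB with the default float64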
Does anyone know of any efficient ways of computing a pixel's neighborhood and returning it in array form based on an input image and pixel coordinates? I would like to compute a number of pixel level features, such as integrated intensity and the Haralick features, based on these neighborhoods.
I can think of at least two approaches:
Construct slice objects corresponding to your coordinates, then index into the array with those slices individually.
Construct index arrays based on your coordinates, and use fancy indexing to extract the windows. This requires more memory, but is faster if you want to grab all windows in one go. For something similar, have a look at this example: http://stefanvdwalt.bitbucket.org/assp2013/numpy/#25
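A minimal sketch of the first approach, assuming a square window of odd size centered on (row, col); the border clipping and all the names are illustrative:

import numpy as np

def neighborhood(image, row, col, size=5):
    # Return the size x size window around (row, col), clipped at the image borders.
    half = size // 2
    r0, r1 = max(row - half, 0), min(row + half + 1, image.shape[0])
    c0, c1 = max(col - half, 0), min(col + half + 1, image.shape[1])
    return image[r0:r1, c0:c1]

img = np.arange(100).reshape(10, 10)
window = neighborhood(img, 4, 7)       # 5x5 block centered on pixel (4, 7)

For the second approach, fancy indexing with precomputed offset arrays grabs all windows at once; numpy.lib.stride_tricks.sliding_window_view is another way to get every window as a single array view.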