I want to permanently change the position and rotation of the object inside my STL file by giving a transformation matrix and a vector. Do you know how I could realize something like this:
newMesh = Mesh * transformationMatrix - Vector
I finally did it with numpy-stl.
It's a nice library for Python. It has a function called transform(), which takes a 4x4 homogeneous transformation matrix as input.
I am working on a project and I have to do some material derivatives. I can't find a function in the module that can do this type of operation for me. Even if this function does not exist I can write it myself, but then there is another problem: I don't know how to extract a single component from a vector. If I have a vector (5, 10), I can't extract the y component alone (10) without bringing the x component along with it.
I read a lot about similar problems on this forum and I also read the documentation for the sympy vector module, but I can't seem to find an answer.
from sympy.physics.vector import ReferenceFrame
A = ReferenceFrame('A')
v = 5*A.x + 10*A.y
I'd like to do a material derivative of vector "v". If this isn't possible I would like to make a function myself (def fun...) but I also don't know how to get one component from a vector.
I imagined that the component extraction would look something like v[0] or something similar, but it doesn't work.
1) As far as material derivatives go, you'd probably be best off writing your own function, which shouldn't be too difficult. Otherwise, you might be able to make use of other modules within sympy.physics, such as the mechanics module. I'm not sure whether it will help, but it's definitely worth a look.
2) To your second question (how to extract the components of a sympy vector object):
This is a little hacky because once you convert to matrix form the underlying reference frame information is lost but you could do something like this:
from sympy.physics import vector
A = vector.ReferenceFrame('A')
v = 5*A.x + 10*A.y
x_component = v.to_matrix(A)[0]
y_component = v.to_matrix(A)[1]
If you still want the result in vector form, as opposed to just the scalar coefficient, you can immediately re-multiply by the corresponding basis vector, like so:
x_vec = (v.to_matrix(A)[0])*A.x
y_vec = (v.to_matrix(A)[1])*A.y
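For the material-derivative half of the question, here is a sketch of such a hand-written function. It uses sympy.vector (CoordSys3D and gradient), a sibling module to the sympy.physics.vector module used above, because gradient is readily available there; the formula implemented is the standard Df/Dt = ∂f/∂t + (u·∇)f:

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, gradient

N = CoordSys3D('N')
t = sp.symbols('t')

def material_derivative(f, u):
    """Df/Dt = df/dt + u . grad(f) for a scalar field f(x, y, z, t) and velocity u."""
    return sp.diff(f, t) + u.dot(gradient(f))

def material_derivative_vec(v, u):
    """Apply the scalar material derivative to each component of a vector field v."""
    out = Vector.zero
    for comp, e in zip(v.to_matrix(N), (N.i, N.j, N.k)):
        out += material_derivative(comp, u) * e
    return out

# Example: f = x*t advected with unit velocity in x gives Df/Dt = x + t.
print(sp.simplify(material_derivative(N.x * t, N.i)))
```

Note the same to_matrix() trick from the answer above is used to peel off the components before rebuilding the vector.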
Is there any way by which 3D models can be represented as 3D arrays? Are there any libraries that take .obj or .blend files as input and give an array representation of the same?
My plan was to slice the object and export the slices to images, then use those images in OpenCV to build an array for each slice, and finally combine the arrays of all the slices into a 3D array representation of my .obj file. But I gave up halfway through, because getting the image slices aligned with each other is a painfully long process.
Is there any other index based representation I could use to represent 3D models in code?
A 3D array would be very convenient for my purposes.
Binvox can be used to generate a voxel representation of a 3D model from a .obj file.
Binvox-rw-py can be used to read and parse the .binvox file generated above.
Here's what I used to a get a numpy array:
>>> import binvox_rw
>>> with open("chair.binvox","rb") as f:
... ml = binvox_rw.read_as_3d_array(f)
...
>>> type(ml.data)
<type 'numpy.ndarray'>
>>> ml.data.shape
(32, 32, 32)
PyGame has an OBJFileLoader class.
PyWavefront has a 3D object model.
If I understand correctly, you want to create a voxel representation of 3D models? Something like the visible human displays?
I would use one of the OBJ file loaders recommended above to import the model into an OpenGL program. Rotate and scale to whatever alignment you want along XYZ.
Then draw the object with a fragment shader that discards any pixel with Z < 0.001 or Z >= 0.002 (or whatever resolution works - I'm just trying to explain the method). This gives you the first image slice, which you store or save. Clear and draw again this time discarding Z < 0.002 or Z >= 0.003 … Because it's the same model in the same position, all your slices will be aligned.
However, are you aware that OBJ (and nearly all other 3D formats) are surface descriptions, not solids? They're hollow inside, like origami models, so your 3D array representation will be mostly empty.
Hope this helps.
I want to make a 2D contour plot using one SkyCoord object containing an array of coordinates as an input parameter.
To do this, I wanted to make a mesh grid over the parameters.
The code is something like this.
l = np.linspace(0, 360, 180)
b = np.linspace(-90, 90, 90)  # the two axes I want to make the contour on
y = y.reshape(y.size, 1, 1)   # originally a 1D array the same size as `coords`
l = l.reshape(1, l.size, 1)
b = b.reshape(1, 1, b.size)
coords = coords.reshape(y.shape)  # No, this does not work.
coords.shape = y.shape            # You can't set the attribute like this either. How frustrating.
z = Something_Fun((l, b), y, coords)
The problem comes here.
I tried to use np.meshgrid over coords, but it returns a np.array of SkyCoord objects rather than one SkyCoord object containing an array of coordinates, which is not what I want: the function Something_Fun calls member functions of SkyCoord, which certainly do not work on a np.array.
Unfortunately, SkyCoord does not provide a built-in reshape method, even though it does have a shape attribute! And if I keep the original shape of coords, the code won't work, because operations cannot broadcast arrays of different dimensions.
Is there any elegant way to do this? I do not wish to rewrite the code that generates coords, or the function Something_Fun, because that would mess up many other things.
Exporting the SkyCoord data to strings and importing them again might do the trick, but that is much too "dirty" and loses precision. I might try it as a last resort.
OK, I've come up with a solution on my own. It still involves exporting and importing back and forth, but it does not lose precision. And it just works:
coords=SkyCoord(coords.ra.reshape(y.shape),coords.dec.reshape(y.shape))
I wish they would provide a built-in reshape method in the future; it would save me some time.
Is there a way to convert a stl file to a numpy array?
The numpy array, resolved with x*y*z data points, should contain volumetric information in the sense of "inside" or "outside" the geometry, say as 0 or 1.
To my surprise I didn't find anything on this yet, although numpy2stl seems to be quite popular.
The problem is a complex geometry of porous media, so convex hull conversion does not work either.
import numpy
import stl
from stl import mesh
stl.stl.MAX_COUNT = 1e10
your_mesh = stl.mesh.Mesh.from_file('Data.stl')
print(your_mesh.data)
seems to be able to export triangles only.
In addition, even this usually leads to MemoryError messages; numpy-stl does, however, (usually) work for loading the data points into numpy.
Is there a way to convert the stl data into volume data that contains information if every point is inside or outside the geometry?
The resulting 3D array could technically be of binary data type, but this isn't required.
With commercial software this conversion seems to be trivial, but it isn't Python or free. Implementing a ray-casting algorithm from scratch seems overcomplicated for a file-format conversion.
I do believe that what you want to do is a voxelization of your volume. You can do that with the trimesh package at https://trimsh.org/
import trimesh
mesh = trimesh.load_mesh('path2yourstlfile.stl')
assert mesh.is_watertight  # you cannot build a solid if your mesh is not watertight
volume = mesh.voxelized(pitch=0.1)
mat = volume.matrix # matrix of boolean
You can also check whether a list of points is inside the volume using:
mesh.contains(points)
Small typo in [4]: trimesh meshes have no matrix attribute; you get it from the VoxelGrid. The answer originally read:
mat = mesh.matrix
Fixed:
mat = volume.matrix
I have a huge sparse matrix, and I would like to save its dense equivalent to the file system.
The problem is the memory limit on my machine.
My original idea is:
convert huge_sparse_matrix to ndarray by np.asarray(huge_sparse_matrix)
assign values
save it back to file system
However, at step 1, Python raises MemoryError.
One possible approach in my mind is:
create a chunk of the dense array
assign values from the corresponding sparse one
save the dense array chunk back to file system
repeat 1-3
But how to do that?
You can use scipy.sparse to read the sparse matrix and then convert it to numpy; see the documentation here: scipy.sparse docs and examples.
I think np.asarray() is not really the function you're looking for.
You might try the SciPy format coo_matrix() (coordinate-format matrix):
scipy.sparse.coo_matrix
This format lets you store huge sparse matrices in very little memory. Furthermore, many of SciPy's mathematical functions also work with this matrix format.
The matrix representation in this format is basically three lists:
row: the index of the row
col: the index of the column
data: the value at this position
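A minimal sketch of the three-list representation (the values are chosen arbitrarily):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Three parallel lists: row index, column index, and the value at that position.
row = np.array([0, 1, 3])
col = np.array([0, 1, 2])
data = np.array([4.0, 7.0, 5.0])

m = coo_matrix((data, (row, col)), shape=(4, 4))  # only 3 values are stored
print(m.toarray())  # dense view, for small matrices only
```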
hope that helped, cheers
The common and most straightforward answer to memory problems is: Do not create objects, use an iterator or a generator.
If I understand correctly, you have a sparse matrix and you want to transform it into a list representation. Here's some sample code:
def iter_sparse_matrix(m, d1, d2):
    for i in range(d1):
        for j in range(d2):
            if m[i, j]:
                yield (i, j, m[i, j])

triplets = list(iter_sparse_matrix(m, d1, d2))
You might also want to look here:
http://cvxopt.org/userguide/matrices.html#sparse-matrices
If I'm not wrong, your problem is that the dense form of the sparse matrix does not fit in your memory, and thus you are not able to save it.
What I would suggest is to use HDF5. HDF5 handles big data on disk, passing it to memory only when needed.
Something like this should work:
import h5py
data = # your sparse matrix
cx = data.tocoo() # coo sparse representation
This will create your data matrix (of zeros) on disk:
f = h5py.File('dset.h5','w')
dataset = f.create_dataset("data", data.shape)
Fill the matrix with the sparse data:
dataset[cx.row, cx.col] = cx.data
Add any modifications you want to dataset:
dataset[something, something] = something
And finally, save it:
f.close()
The way HDF5 works is, I think, perfect for your needs. The matrix is always stored on disk, so it doesn't require memory; however, you can operate on it as if it were a standard numpy matrix (indexing, slicing, numpy operations and so on), and the h5py driver will bring into memory only the parts of the matrix that you need (never the whole matrix, unless you specifically request it with something like data[:, :]).
PS: I'm assuming your sparse matrix is one of SciPy's sparse matrices. If not, replace cx.row, cx.col and cx.data with the equivalents from your matrix representation (they should be something similar).
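One caveat: h5py's fancy indexing is more limited than numpy's (only one index array per selection), so the point-wise assignment above may need to be replaced by a chunked write. Here is a sketch of the asker's chunk-by-chunk plan on top of HDF5, with a small random matrix standing in for the real data:

```python
import numpy as np
import h5py
import scipy.sparse as sp

# Small random CSR matrix standing in for the huge one.
m = sp.random(200, 100, density=0.05, format='csr', random_state=0)

with h5py.File('dense.h5', 'w') as f:
    dset = f.create_dataset('data', shape=m.shape, dtype=m.dtype)
    step = 50  # rows per chunk; tune to the available memory
    for start in range(0, m.shape[0], step):
        stop = min(start + step, m.shape[0])
        # Densify only a slab of rows at a time, then write it to disk.
        dset[start:stop, :] = m[start:stop, :].toarray()

with h5py.File('dense.h5', 'r') as f:
    dense = f['data'][:]
```

At no point does more than one slab of the dense matrix live in memory, which is exactly the create-chunk / assign / save loop from the question.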