How to represent a 3D .obj object as a 3D array? - python

Is there any way to represent a 3D model as a 3D array? Are there any libraries that take a .obj or .blend file as input and produce such an array representation?
I thought I would slice the object and export the slices as images. I would then use those images in OpenCV to build an array for each slice, and in the end combine the arrays of all the slices into a 3D array representation of my .obj file. But I gave up halfway through, because it is a painfully long process to get the image slices aligned with each other.
Is there any other index-based representation I could use to represent 3D models in code?
A 3D array would be very convenient for my purposes.

Binvox can be used to generate a voxel representation of a 3D model from a .obj file.
Binvox-rw-py can be used to read and parse the .binvox file generated above.
Here's what I used to get a numpy array:
>>> import binvox_rw
>>> with open("chair.binvox","rb") as f:
... ml = binvox_rw.read_as_3d_array(f)
...
>>> type(ml.data)
<type 'numpy.ndarray'>
>>> ml.data.shape
(32, 32, 32)
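Once you have the boolean voxel grid, plain numpy is enough to work with it. A minimal sketch (a synthetic array stands in for ml.data here, so it runs without binvox_rw; read_as_3d_array gives you an array of the same shape and dtype):

```python
import numpy as np

# Stand-in for ml.data: a 32x32x32 boolean occupancy grid.
vox = np.zeros((32, 32, 32), dtype=bool)
vox[10:20, 10:20, 10:20] = True  # a solid 10x10x10 block

print(vox.sum())             # number of occupied voxels: 1000
occupied = np.argwhere(vox)  # (N, 3) array of x, y, z indices
print(occupied.min(axis=0), occupied.max(axis=0))  # bounding box of the block
```

Indexing the grid as `vox[:, :, k]` then gives you exactly the kind of aligned slice the question was after.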

PyGame has an OBJFileLoader class.
PyWavefront can also load a .obj file into a 3D object model.

If I understand correctly, you want to create a voxel representation of 3D models? Something like the visible human displays?
I would use one of the OBJ file loaders recommended above to import the model into an OpenGL program. Rotate and scale to whatever alignment you want along XYZ.
Then draw the object with a fragment shader that discards any pixel with Z < 0.001 or Z >= 0.002 (or whatever resolution works - I'm just trying to explain the method). This gives you the first image slice, which you store or save. Clear and draw again this time discarding Z < 0.002 or Z >= 0.003 … Because it's the same model in the same position, all your slices will be aligned.
However, are you aware that OBJ (and nearly all other 3D formats) are surface descriptions, not solid? They're hollow inside like origami models. So your 3D array representation will be mostly empty.
Hope this helps.
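The slice-and-stack idea can also be sketched in plain numpy, without OpenGL: because every slice is cut from the same vertex data in the same pose, alignment comes for free. (The point cloud below is a hypothetical stand-in for a loaded, aligned model; a real .obj would come from one of the loaders above.)

```python
import numpy as np

# Hypothetical stand-in for a loaded, aligned model: N points in [0, 1)^3.
rng = np.random.default_rng(0)
points = rng.random((5000, 3))

res = 32  # number of slices along Z, and pixels per slice edge
# Bin every point into a (res, res, res) occupancy grid in one pass;
# using the same coordinates for all slices keeps them aligned.
idx = np.clip((points * res).astype(int), 0, res - 1)
vox = np.zeros((res, res, res), dtype=bool)
vox[idx[:, 0], idx[:, 1], idx[:, 2]] = True

slice_k = vox[:, :, 10]  # one "image slice", analogous to one shader Z window
print(vox.shape, slice_k.shape)  # (32, 32, 32) (32, 32)
```

Sampling only vertices (or surface points) reproduces the hollow-shell caveat above: the grid marks the surface, not the interior.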

Related

how can I extract the points of a vector shape in Krita using the python scripter?

I want to get the positions of the points that make up my vector shape.
(https://i.stack.imgur.com/mfmGV.png)
(https://i.stack.imgur.com/J9Noq.png)
So far I have found out how to get the shape, but I don't know how to continue from there. I thought the QPoints were children of the QObject (the vector shape), so I tried to get them with .children(), but the array I got was empty. Besides that, I couldn't find a function in the documentation that sounds like what I am looking for.
from krita import *
shapes = Krita.instance().activeDocument().activeNode().shapes()
Points = shapes.children()

Python: Transform a STL-Mesh with a transformation matrix

I want to permanently change the position and rotation of the object inside my STL file by applying a transformation matrix and a vector. Do you know how I could realize something like this:
newMesh = Mesh * transformationMatrix - Vector
I finally did it with numpy-stl.
It's a nice Python library. It has a transform() method, which takes a 4x4 homogeneous transformation matrix as input.
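As a sketch of what such a 4x4 homogeneous matrix does to the vertex data (pure numpy, so it runs without numpy-stl; the matrix here is a hypothetical 90° rotation about Z plus a translation):

```python
import numpy as np

# Hypothetical vertices of one facet (numpy-stl stores an (n, 3, 3) array).
verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

# 4x4 homogeneous matrix: rotate 90 degrees about Z, then translate by (1, 2, 3).
T = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 2.0],
              [0.0,  0.0, 1.0, 3.0],
              [0.0,  0.0, 0.0, 1.0]])

# Append w=1, multiply, drop w again -- the same math a call like
# numpy-stl's Mesh.transform(T) applies to every vertex of the mesh.
homo = np.hstack([verts, np.ones((len(verts), 1))])
out = (T @ homo.T).T[:, :3]
print(out)
```

With numpy-stl itself, the workflow is to load the file, call transform() with the matrix, and save the mesh back out.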

imread octave to imread cv2

I have to translate some code from Octave to Python. Among other things, the program does something like this:
load_image = imread('image.bmp')
which, as you can see, loads a bitmap. If I then do
size(load_image) it prints (1200, 1600, 3), which is fine. But when I do:
load_image
it prints what looks like a one-dimensional array, which does not make any sense to me. My question is how Octave interprets these values, because I have to load the same image in OpenCV and I couldn't find the way.
Thanks.
What you have is a 3D array in Octave. The first two dimensions are the rows and columns of the image, and the third dimension holds the RGB channels, one value per pixel per channel. When you print the whole variable, Octave dumps every value, plane by plane, which is why it looks like a 1D array.
Try something like this and look at the output:
load_image(:,:,i)
Here i is the channel index (1, 2, or 3 for R, G, B). If you want to display one channel of your 3D image with matplotlib or similar, you slice it the same way.
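The OpenCV side can be sketched with plain numpy (cv2.imread('image.bmp') would give you exactly such an array, except with the channels in BGR order; a synthetic array stands in for the bitmap here so the sketch runs without OpenCV):

```python
import numpy as np

# Stand-in for cv2.imread('image.bmp'): a (rows, cols, channels) uint8 array.
img = np.zeros((1200, 1600, 3), dtype=np.uint8)
img[..., 2] = 255  # fill the third channel (note: OpenCV orders channels BGR)

print(img.shape)        # (1200, 1600, 3), the same triple size() prints in Octave
channel = img[:, :, 2]  # Python equivalent of Octave's load_image(:,:,3)
print(channel.shape)    # (1200, 1600)
```

The only real differences from Octave are 0-based indexing and the BGR channel order.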

How to reshape array inside an astropy.coordinate.SkyCoord object?

I want to make a 2D contour plot using one SkyCoord object containing an array of coordinates as an input parameter.
To do this, I wanted to make a mesh grid over the parameters.
The code is something like this.
l = np.linspace(0, 360, 180)
b = np.linspace(-90, 90, 90) # Two axes I wanted to make contour on.
y = y.reshape(y.size, 1, 1)  # originally a 1D array the same size as `coords`
l = l.reshape(1, l.size, 1)
b = b.reshape(1, 1, b.size)
coords = coords.reshape(y.shape)  # No, this does not work.
coords.shape = y.shape  # You can't write attributes like this. How frustrating.
z = Something_Fun((l,b),y,coords)
The problem comes here.
I tried to use np.meshgrid on coords, but it returns a np.array of SkyCoord objects rather than one SkyCoord containing an array of coordinates, which is not what I want, because Something_Fun calls member functions of SkyCoord, and those certainly do not work on a np.array.
Unfortunately, SkyCoord does not provide a built-in reshape method, even though it does have a shape attribute! If I keep the shape of coords, the code won't work, because operations cannot broadcast arrays of different dimensions.
Is there any elegant way to do this? I do not wish to rewrite the code that generates coords, or the function Something_Fun, because that would mess up many other things.
Exporting the SkyCoord data to strings and importing them again might do the trick, but that is much too "dirty" and loses precision. I might try it as a last resort.
OK, I've come up with a solution on my own. It still involves exporting and importing back and forth, but it does not lose precision. And it just works:
coords = SkyCoord(coords.ra.reshape(y.shape), coords.dec.reshape(y.shape))
Wish they would provide a built-in reshape method in the future, which would save me some time~
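The reshaping in the question is ordinary numpy broadcasting; a minimal sketch with plain arrays (sizes taken from the question, and a hypothetical function standing in for Something_Fun) shows why each variable is moved to its own axis:

```python
import numpy as np

y = np.arange(5.0)            # stand-in for the 1D array paired with coords
l = np.linspace(0, 360, 180)
b = np.linspace(-90, 90, 90)

# Put each variable on its own axis so any arithmetic broadcasts to 3D.
y3 = y.reshape(y.size, 1, 1)
l3 = l.reshape(1, l.size, 1)
b3 = b.reshape(1, 1, b.size)

# Hypothetical stand-in for Something_Fun: any elementwise expression
# of the three now broadcasts to a full (y, l, b) grid.
z = y3 * np.cos(np.radians(b3)) * np.sin(np.radians(l3))
print(z.shape)  # (5, 180, 90)
```

This is exactly why coords must end up with y's shape: anything of shape (N, 1, 1) broadcasts against the l and b axes automatically.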

Convert stl 2 numpy, volume data

Is there a way to convert an STL file to a numpy array?
The numpy array, resolved with x*y*z data points, should contain volumetric information in the sense of "inside" or "outside" the geometry, say as 0 or 1.
To my surprise I didn't find anything on this yet, although numpy2stl seems to be quite popular.
The problem is a complex geometry of porous media, so convex hull conversion does not work either.
import numpy
import stl
from stl import mesh
stl.stl.MAX_COUNT = 1e10
your_mesh = stl.mesh.Mesh.from_file('Data.stl')
print(your_mesh.data)
seems to be able to export triangles only.
In addition, even this usually leads to MemoryError messages; but numpy-stl (usually) works for loading the datapoints into numpy.
Is there a way to convert the stl data into volume data that contains information if every point is inside or outside the geometry?
The resulting 3D array could technically be of binary data type, but this isn't required.
With commercial software this conversion seems to be trivial, but it's not python or free. Implementing a ray casting algorithm from scratch seems over complicated for file type conversion.
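For a sense of what the target representation looks like: an occupancy grid is just a boolean 3D array. A sketch with an analytic solid (a sphere standing in for the mesh, since the actual surface-to-volume step needs a voxelizer):

```python
import numpy as np

n = 64  # grid resolution along each axis
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")

# "Inside or outside" as a boolean mask: here the solid is a sphere of
# radius 0.5; for a real STL this mask would come from a voxelizer instead.
inside = (x**2 + y**2 + z**2) <= 0.5**2

print(inside.shape, inside.dtype, int(inside.sum()))
```

Whatever tool produces it, this boolean array (or its 0/1 view via `inside.astype(np.uint8)`) is the volume data the question asks for.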
I do believe that what you want to do is a voxelization of your volume. You can do that with the trimesh package at https://trimsh.org/
import trimesh
mesh = trimesh.load_mesh('path2yourstlfile.stl')
assert mesh.is_watertight  # you cannot build a solid if your mesh is not watertight
volume = mesh.voxelized(pitch=0.1)
mat = volume.matrix # matrix of boolean
You can also check if a list of point are inside the volume using:
mesh.contains(points)
Small typo in the answer above: a trimesh mesh has no matrix attribute; you get it from the VoxelGrid. So
mat = mesh.matrix
should be
mat = volume.matrix
