Is there a way to convert an STL file to a numpy array?
The numpy array, resolved with x*y*z data points, should contain volumetric information in the sense of "inside" or "outside" the geometry, say as 0 or 1.
To my surprise I didn't find anything on this yet, although numpy2stl seems to be quite popular.
The problem is a complex geometry of porous media, so convex hull conversion does not work either.
import numpy
import stl
from stl import mesh

# raise numpy-stl's size limit so large files can be loaded
stl.stl.MAX_COUNT = 1e10
your_mesh = stl.mesh.Mesh.from_file('Data.stl')
print(your_mesh.data)
seems to be able to export triangles only.
In addition, even this usually leads to MemoryError messages, although numpy-stl usually does manage to load the data points into numpy.
Is there a way to convert the STL data into volume data that contains information on whether every point is inside or outside the geometry?
The resulting 3D array could technically be of binary data type, but this isn't required.
With commercial software this conversion seems to be trivial, but it isn't Python or free. Implementing a ray-casting algorithm from scratch seems overcomplicated for a file type conversion.
I believe what you want to do is a voxelization of your volume. You can do that with the trimesh package at https://trimsh.org/
import trimesh

mesh = trimesh.load_mesh('path2yourstlfile.stl')
# you cannot build a solid if your surface is not watertight
assert mesh.is_watertight
volume = mesh.voxelized(pitch=0.1)  # pitch is the edge length of one voxel
mat = volume.matrix  # 3D boolean numpy array
You can also check whether a list of points is inside the volume using:
mesh.contains(points)
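If you need the x*y*z inside/outside array from the question, a minimal sketch built on mesh.contains could look like this (the resolution n and the sampling of voxel centers over the bounding box are my assumptions, not part of the answer above):

import numpy as np
import trimesh

mesh = trimesh.load_mesh('path2yourstlfile.stl')
n = 64  # hypothetical resolution per axis
lo, hi = mesh.bounds  # corners of the axis-aligned bounding box
axes = [np.linspace(lo[i], hi[i], n) for i in range(3)]
# all voxel-center coordinates as an (n**3, 3) array
grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)
# 1 = inside the watertight surface, 0 = outside
inside = mesh.contains(grid).reshape(n, n, n).astype(np.uint8)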
Small typo in [4]: trimesh has no matrix attribute, you get it from the VoxelGrid.
mat = mesh.matrix
fixed:
mat = volume.matrix
I am using the RPLidar A1: https://www.adafruit.com/product/4010
My goal is to collect data sets and plot them in order to get a live visual representation of the data.
My current code is:
import numpy as np
import matplotlib.pyplot as plt
from rplidar import RPLidar
def get_data():
    lidar = RPLidar('COM6', baudrate=115200)
    for scan in lidar.iter_scans(max_buf_meas=500):
        break
    lidar.stop()
    return scan

for i in range(1000000):
    if i % 7 == 0:
        x = []
        y = []
    print(i)
    current_data = get_data()
    for point in current_data:
        if point[0] == 15:
            x.append(point[2] * np.sin(point[1]))
            y.append(point[2] * np.cos(point[1]))
    plt.clf()
    plt.scatter(x, y)
    plt.pause(.1)
plt.show()
The above code produces a refreshing graph with changing data as shown below:
The issue is that this is not an accurate representation. There is a native application by SLAMTEC called frame_grabber which clearly shows this device giving an accurate rectangular representation of my room. Instead, I keep getting a circular shape ranging from small to large.
The raw data from the sensor comes in the form of an array containing roughly 100 sets of the following data: (quality, theta, r). My code checks whether the quality is good (15 is the maximum), then plots the data sets, and clears the data array every seven iterations to get rid of old data.
I checked the raw polar data in Excel and it also appears to form a roughly circular shape.
After a few days of trying out various libraries for plotting and trying a few other things I've noticed the mistake.
Hopefully this can prevent someone from making the same mistake in the future.
Lidars typically give data in terms of theta and r, with theta in degrees. On the other hand, numpy as well as Python's built-in math library expect arguments in radians for the cos and sin operations.
I've converted the units from degrees to radians and now everything works perfectly.
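For reference, the fix amounts to converting theta before the trig calls, roughly like this (same point layout as in the question's code):

theta_rad = np.radians(point[1])  # the sensor reports theta in degrees
x.append(point[2] * np.sin(theta_rad))
y.append(point[2] * np.cos(theta_rad))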
Basically, I have a corpus of ~10,000 STL files, and I need to turn them all into 32x32x32 arrays of 1's and 0's (voxels).
I already have this script that turns STL files into voxels: https://github.com/rcpedersen/stl-to-voxel , but sometimes, even though I specify that I need a 32x32x32 array, it gives me some huge array; besides being buggy, it also takes forever (it processed ~600 files in 48 hours...).
Would it be easier to try to fix this script, or to write my own? Voxelizing an STL doesn't seem like a hard task, but I don't know any of the methods out there for this; any strategies/tips would be greatly appreciated.
Sorry to be a bummer, but voxelisation is actually quite a hard task, and not something Python is suited to do quickly. Even for the simple slice/crossing test, I would expect a C++ implementation to beat Python 100:1. I recommend libigl. Or do it on the GPU for real time :) Look for conservative rasterization. But that is for "good" meshes that are non-intersecting and closed. Otherwise it becomes a lot harder. Look for "generalized winding numbers" - also in igl.
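As a rough sketch of the generalized-winding-number route with libigl's Python bindings (the function names here are my assumption of the igl package API, so double-check them):

import numpy as np
import igl

v, f = igl.read_triangle_mesh("model.stl")
# query the winding number at voxel centers over the bounding box
n = 32
lo, hi = v.min(axis=0), v.max(axis=0)
axes = [np.linspace(lo[i], hi[i], n) for i in range(3)]
q = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)
w = igl.winding_number(v, f, q)  # ~1 inside, ~0 outside, robust to small holes
voxels = (w > 0.5).reshape(n, n, n).astype(np.uint8)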
Basically, voxelizing a facet surface means separating inside from outside. It can be done in different ways: the easiest way is to find the signed distance from each voxel, but it requires the input mesh to be closed; the other way is to find the winding number. You can find implementations of both in MeshLib. There is also a Python module that can help you:
pip install --upgrade pip
pip install meshlib
from meshlib import mrmeshpy as mm

# load mesh
mesh = mm.loadMesh(mm.Path("path_to_file.stl"))
mtvParams = mm.MeshToVolumeParams()
# Signed will have negative values inside the mesh and positive outside,
# but requires a closed mesh
mtvParams.type = mm.MeshToVolumeParamsType.Signed
# voxels with precise distance: 3 inside, 3 outside
mtvParams.surfaceOffset = 3
# pick the voxel size so the result is roughly a 32x32x32 volume
# (about 27 voxels across the bounding box, plus the surface offset layers)
meshBox = mesh.computeBoundingBox()
boxSize = meshBox.max - meshBox.min
mtvParams.voxelSize = boxSize / 27.0
voxels = mm.meshToVolume(mesh, mtvParams)
# save voxels as tiff slices
vsParams = mm.VoxelsSaveSavingSettings()
vsParams.path = "save_voxels_dir"
vsParams.slicePlane = mm.SlicePlane.XY
mm.saveAllSlicesToImage(voxels, vsParams)
Is there any way by which 3D models can be represented as 3D arrays? Are there any libraries that take .obj or .blend files as input and give an array representation of the same?
I thought that I would slice the object and export the slices as images. I would then use those images in opencv to build arrays for each slice. In the end I would combine the arrays of all the slices to finally get a 3D array representation of my .obj file. But I gave up halfway through because it is a painfully long process to get the image slices aligned with each other.
Is there any other index based representation I could use to represent 3D models in code?
A 3D array would be very convenient for my purposes.
Binvox can be used to generate a voxel representation of a 3D model from a .obj file.
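If memory serves, the binvox command line lets you set the grid dimension with -d, so generating the file used below was something like this (treat the exact flags as an assumption and check binvox's help output):

binvox -d 32 chair.obj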
Binvox-rw-py can be used to read and parse the .binvox file generated above.
Here's what I used to get a numpy array:
>>> import binvox_rw
>>> with open("chair.binvox","rb") as f:
... ml = binvox_rw.read_as_3d_array(f)
...
>>> type(ml.data)
<type 'numpy.ndarray'>
>>> ml.data.shape
(32, 32, 32)
PyGame has an OBJFileLoader class.
PyWavefront has a 3D object model.
If I understand correctly, you want to create a voxel representation of 3D models? Something like the visible human displays?
I would use one of the OBJ file loaders recommended above to import the model into an OpenGL program. Rotate and scale to whatever alignment you want along XYZ.
Then draw the object with a fragment shader that discards any pixel with Z < 0.001 or Z >= 0.002 (or whatever resolution works - I'm just trying to explain the method). This gives you the first image slice, which you store or save. Clear and draw again this time discarding Z < 0.002 or Z >= 0.003 … Because it's the same model in the same position, all your slices will be aligned.
However, are you aware that OBJ (and nearly all other 3D formats) are surface descriptions, not solid? They're hollow inside like origami models. So your 3D array representation will be mostly empty.
Hope this helps.
We are using the shapely library to check that some random point is not in certain prohibited areas stored in a shapefile.
import fiona
import shapely.geometry

with fiona.open(path) as source:
    geometry = get_exclusive_item(source[0])
    geom = shapely.geometry.shape(geometry['geometry'])

def check(lat, lng):
    point = shapely.geometry.Point(lng, lat)
    return not geom.contains(point)
But the latter call, geom.contains(point), takes about a second to complete. Are there any faster libraries for Python, or could we optimize the shapefile somehow to get better speed?
Thanks to @iant for the pointer to use spatial indexes.
My shapefile was a single MultiPolygon with a lot of points, which makes .contains() really slow.
I solved the issue by splitting it into smaller shapes and using an Rtree index.
To split the shapefile I used QGIS, as described here - https://gis.stackexchange.com/a/23694/65569
The core idea of how to use RTree in Python is here - https://gis.stackexchange.com/a/144764/65569
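A minimal sketch of that approach, assuming split_geoms holds the smaller polygons produced by the QGIS split:

from rtree import index
import shapely.geometry

# build the R-tree over the bounding boxes of the split polygons
idx = index.Index()
for i, g in enumerate(split_geoms):
    idx.insert(i, g.bounds)

def check(lat, lng):
    point = shapely.geometry.Point(lng, lat)
    # the bbox query narrows the candidates before the exact (slow) test
    candidates = idx.intersection(point.bounds)
    return not any(split_geoms[i].contains(point) for i in candidates)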
In total this gave me a 1000x speed-up for .contains() lookups!
I am looking for a minimalistic solution for doing basic geospatial search in Python.
We have a dataset of roughly 10k locations and we need to find all locations within a radius of N kilometers from a given location. I am not looking for an explicit database with geospatial support; I hope to get by without another external system. Is there something that would use Python only?
Shapely seems to be a good solution. Its description seems to correspond to what you're looking for:
[Shapely] It lets you do PostGIS-ish stuff outside the context of a database using Python.
It is based on GEOS, which is a widely used C++ library.
Here is a link to the documentation
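For the radius query itself, a hedged sketch with Shapely alone (planar coordinates are assumed; lat/lng data would need projecting first):

from shapely.geometry import Point

center = Point(3.0, 8.0)
disc = center.buffer(10.0)  # polygon approximating a radius-10 circle
locations = [Point(22.2, 33.3), Point(8.0, 5.0), Point(3.1, 5.0)]
nearby = [p for p in locations if disc.contains(p)]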
scipy.spatial has a kd-tree implementation that might be the most popular in Python.
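A small sketch of that approach (Euclidean distance on planar coordinates is assumed; for lat/lng you would project or use a haversine-based method instead):

import numpy as np
from scipy.spatial import cKDTree

locations = np.random.rand(10000, 2) * 100  # hypothetical 10k points
tree = cKDTree(locations)
idx = tree.query_ball_point([3.0, 8.0], r=10.0)  # indices within radius 10
nearby = locations[idx]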
A self-made solution without any external modules could be something like this:
import numpy as np

points = np.array([[ 22.22,  33.33],
                   [  8.00,   5.00],
                   [  3.12,   5.00],
                   [  9.00,   8.00],
                   [ -2.50,   3.00],
                   [  0.00,  -1.00],
                   [-10.00, -10.00],
                   [ 12.00,  12.00],
                   [ -4.00,  -6.00]])

r = 10.0  # radius within which the points should lie
xm = 3    # center x coordinate
ym = 8    # center y coordinate

# keep the points whose Euclidean distance to (xm, ym) is less than r
points_i = points[((points[:, 0] - xm)**2 + (points[:, 1] - ym)**2)**0.5 < r]
points_i contains those points which lie within the radius. This solution requires the data to be in a numpy array, which to my knowledge is also a very fast way to go through large data sets, as opposed to for loops. I guess this solution is pretty much minimalistic. The plot below shows the outcome with the data given in the code.