Basically, I have a corpus of ~10,000 STL files, and I need to turn them all into 32x32x32 arrays of 1s and 0s (voxels).
I already have a script that turns STL files into voxels (https://github.com/rcpedersen/stl-to-voxel), but sometimes, even though I specify that I need a 32x32x32 array, it gives me some huge array; and on top of being buggy, it takes forever (it processed ~600 files in 48 hours...).
Would it be easier to fix this script or to write my own? Voxelizing an STL doesn't seem like a hard task, but I don't know any of the methods out there for this; any strategies or tips would be greatly appreciated.
Sorry to be a bummer, but voxelization is actually quite a hard task, and not something Python is suited to do quickly. Even for a simple slice/crossing test, I would expect a C++ implementation to beat Python by a factor of 100. I recommend libigl. Or do it on the GPU for real-time speed :) Look for "conservative rasterization". But that is for "good" meshes that are non-intersecting and closed. Otherwise it becomes a lot harder. Look for "generalized winding numbers" - also in igl.
Basically, voxelizing a facet surface means separating inside from outside. It can be done in different ways: the easiest is to compute a signed distance for each voxel, but that requires the input mesh to be closed; another way is to compute winding numbers. You can find implementations of both in MeshLib. There is also a Python module that can help you:
pip install --upgrade pip
pip install meshlib
from meshlib import mrmeshpy as mm
# load mesh
mesh = mm.loadMesh(mm.Path("path_to_file.stl"))
mtvParams = mm.MeshToVolumeParams()
# Signed mode: negative values inside the mesh, positive outside (requires a closed mesh)
mtvParams.type = mm.MeshToVolumeParamsType.Signed
# voxels with precise distance: 3 inside, 3 outside
mtvParams.surfaceOffset = 3
# find a voxel size that yields roughly a 32x32x32 volume
# (27 voxels across the bounding box, plus the surface offset on each side)
meshBox = mesh.computeBoundingBox()
boxSize = meshBox.max - meshBox.min
mtvParams.voxelSize = boxSize / 27.0
voxels = mm.meshToVolume(mesh, mtvParams)
# save voxels as tiff slices
vsParams = mm.VoxelsSaveSavingSettings()
vsParams.path = "save_voxels_dir"
vsParams.slicePlane = mm.SlicePlane.XY
mm.saveAllSlicesToImage(voxels, vsParams)
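If the goal is specifically a 32x32x32 numpy array of 1s and 0s rather than image slices, another route is trimesh's voxelizer; this is my own sketch, not part of the MeshLib answer, and it assumes a reasonably watertight mesh:
import numpy as np
import trimesh

mesh = trimesh.load("path_to_file.stl")
# choose the pitch so the longest bounding-box edge spans ~32 voxels
pitch = mesh.extents.max() / 32
vox = mesh.voxelized(pitch).fill()  # surface voxels, then flood-fill the interior
grid = vox.matrix.astype(np.uint8)  # boolean occupancy -> 1s and 0s
# pad/crop to exactly 32x32x32 (the raw grid can be off by a voxel)
out = np.zeros((32, 32, 32), dtype=np.uint8)
s = tuple(min(n, 32) for n in grid.shape)
out[:s[0], :s[1], :s[2]] = grid[:s[0], :s[1], :s[2]]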
I'm working on a computer science project, a CNC plotter. Basically all of the methods I've seen for generating G-code use Inkscape. I have already written software to convert normal images to black-and-white, edges-only images, and I have pulled the coordinates from the image. Is there any way X,Y coordinates can be used to generate G-code, or would I have to use Inkscape?
G-code is just a set of instructions that you call with arguments.
The machine executes the G-code line by line and interprets each instruction to move its motors or perform regulation, depending on its firmware.
So if you want to create G-code in Python, just create a text file and append commands to it.
You first need the list of G-code instructions available on your machine (the set Inkscape would otherwise generate for you).
For example in Marlin:
G1 X90.6 Y13.8 ; move to 90.6mm on the X axis and 13.8mm on the Y axis
To create this file in Python:
positions = [  # get your data or format it like this:
    [90.6, 13.8],  # point 1 [x, y]
    [10.6, 3.98],
]

with open("myGCode.gcode", "w") as f:
    for x, y in positions:
        f.write(f"G1 X{x} Y{y} ;\n")
The created file then contains:
G1 X90.6 Y13.8 ;
G1 X10.6 Y3.98 ;
It really depends on the machine and its controller, but most of the time a linear-interpolation command like G1 (or G01) only needs to be specified once, e.g.
G01 X1.0 Y2.0;
After that, linear interpolation is already enabled, so the following lines can be just coordinates:
X1.0 Y3.0;
...
up to the point where you want to go back to rapid movement (G0/G00) or to circular interpolation (G02, G03). Even then, plain coordinates are usually enough after switching to the specific interpolation once.
That said, I assume this is for simple milling; more recent mills (I was trained on Haas) have some fancy pocketing functions, where you specify just a few key points of the contour and the rest can be deduced mathematically.
It would be interesting to see your program for getting a contour out of a photo.
Specifying the type of interpolation before each set of coordinates is also OK, it just makes the code slightly harder to read.
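For completeness, a small sketch of that modal style (my own illustration; it assumes a controller that accepts bare coordinate lines once an interpolation mode is set - Marlin, for instance, may still expect the G word on every line):
positions = [[90.6, 13.8], [10.6, 3.98], [1.0, 3.0]]

with open("modal.gcode", "w") as f:
    f.write("G01 X%.3f Y%.3f;\n" % tuple(positions[0]))  # enable linear interpolation once
    for x, y in positions[1:]:
        f.write("X%.3f Y%.3f;\n" % (x, y))  # coordinates only from here on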
TL;DR: Can I multiply a numpy.average by 2? If yes, how?
For an orientation discrimination experiment, in which people report how well they can discriminate the angle between a visible grating and a non-visible reference grating, I want to calculate the Just Noticeable Difference (JND).
At the end of the code I have this:
# write JND to logfile (average of last 10 reversals)
if len(staircase[stairnum].reversalIntensities) < 10:
    dataFile.write('JND = %.3f\n' % numpy.average(staircase[stairnum].reversalIntensities))
else:
    dataFile.write('JND = %.3f\n' % numpy.average(staircase[stairnum].reversalIntensities[-10:]))
This is where the JND is written to the file, and I thought it would be easy to multiply that numpy.average value by 2, but that didn't work. I then thought of making two variables containing the same array and using numpy.sum to add them together.
# possible solution
x = numpy.average(staircase[stairnum].reversalIntensities[-10:])
y = numpy.average(staircase[stairnum].reversalIntensities[-10:])
numpy.sum(x, y, [et cetera])
I am sure the procedure is very simple, but my current programming abilities are limited, and the PsychoPy and Python reference materials did not provide what I was looking for (if they do, please share!).
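For what it's worth, numpy.average returns a plain scalar here, so it can be multiplied directly; a minimal sketch with made-up data standing in for the staircase object:
import numpy

reversals = [1.2, 0.9, 1.1, 1.0]  # stand-in for staircase[stairnum].reversalIntensities
jnd = 2 * numpy.average(reversals[-10:])  # multiplying the average works directly
print('JND = %.3f' % jnd)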
I wrote a rather clunky Python program that creates a unit cell of a certain geometry and duplicates it so that I get e.g. a 3x3x3 array of the unit cell. The result is saved as an .stl. The goal was to create these structures directly instead of using CAD.
My problem is that the computation time is annoying (~2 minutes for 8x8x8, which is smaller than what I usually need). The biggest problem is that 10x10x10 isn't even possible: VTK immediately throws the error "Unable to allocate [large number] of elements of size 8 bytes." This leads me to believe that my memory management is insufficient (non-existent).
I read about vtkSmartPointer, but can only find explanations for C++. How do I correctly use vtkSmartPointer with Python? I should add that I have no experience with C++ whatsoever.
An MWE is all but impossible due to the program's size. Here is a shortened example of part of my algorithm pipeline instead:
import vtk

appendFilter = vtk.vtkAppendPolyData()
# create all 12 struts and combine them into one object
for i in range(1, 13, 1):
    # create_strut() builds my unit cell out of 12 struts; its parameters are irrelevant here
    tf = create_strut(i, node_dist, amp, d, sides, mode, render=False)
    appendFilter.AddInputData(tf.GetOutput())
appendFilter.Update()

# clean up poly data
cleanFilter = vtk.vtkCleanPolyData()
cleanFilter.SetInputConnection(appendFilter.GetOutputPort())
cleanFilter.Update()

# cut the cell to its right size
planes = plane_collection(node_dist)  # generates 6 clipping planes
clip = vtk.vtkClipClosedSurface()
clip.SetInputData(cleanFilter.GetOutput())
clip.SetClippingPlanes(planes)
clip.Update()

# assemble an array from the unit cell
array = duplicate_cells(clip, xyz[0], xyz[1], xyz[2], node_dist)

# save as .stl
data = array.GetOutputPort()
stlwriter = vtk.vtkSTLWriter()
stlwriter.SetInputConnection(data)
stlwriter.SetFileName("Z:/example.stl")
stlwriter.Update()
stlwriter.Write()
This should demonstrate that I use a lot of filters and never bother to delete them or clean anything up. What is the correct/preferred way to clean up my memory?
Since I posted this, I have read a lot about VTK and Python and have come to the conclusion that Python takes care of memory management itself: vtkSmartPointer and Delete() should only be necessary in a C++ environment.
Furthermore, I upgraded from Python 2.7 and VTK 6.2 to Python 3.4 and VTK 7.0, which vastly improved the overall performance of the program. I therefore think the problem was with the old VTK version.
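To illustrate that point (my own sketch, not from the original post): VTK's Python wrappers tie object lifetime to Python reference counting, so dropping the last reference to a filter releases it without any explicit Delete():
import vtk

sphere = vtk.vtkSphereSource()
sphere.Update()
poly = sphere.GetOutput()
print(poly.GetReferenceCount())  # references held by the pipeline and by `poly`
del sphere  # drop the Python reference; the filter is freed once nothing else references it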
We are using the shapely library to check that a random point is not inside certain prohibited areas stored in a shapefile.
import fiona
import shapely.geometry

with fiona.open(path) as source:
    geometry = get_exclusive_item(source[0])
    geom = shapely.geometry.shape(geometry['geometry'])

def check(lat, lng):
    point = shapely.geometry.Point(lng, lat)
    return not geom.contains(point)
But the call geom.contains(point) takes about a second to complete. Are there any faster libraries for Python, or could we somehow optimize the shapefile to get better speed?
Thanks to @iant for the pointer to use spatial indexes.
My shapefile was a single MultiPolygon with a lot of points, which made .contains() really slow.
I solved the issue by splitting it into smaller shapes and using an R-tree index.
To split the shapefile I used QGIS, as described here - https://gis.stackexchange.com/a/23694/65569
The core idea of how to use an R-tree in Python is here - https://gis.stackexchange.com/a/144764/65569
In total this gave me a 1000x speed-up for .contains() lookups!
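A minimal sketch of the same idea using Shapely's built-in STRtree instead of the rtree package (my own illustration; it assumes Shapely 2.x, where query() returns indices of candidate geometries, and geom is the MultiPolygon from the question):
from shapely.strtree import STRtree
from shapely.geometry import Point

parts = list(geom.geoms)  # split the MultiPolygon into individual polygons
tree = STRtree(parts)     # spatial index over the parts' bounding boxes

def check(lat, lng):
    point = Point(lng, lat)
    # only run contains() on the few parts whose bounding boxes hit the point
    return not any(parts[i].contains(point) for i in tree.query(point))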
I am looking for a minimalistic solution for doing basic geospatial search in Python.
We have a dataset of roughly 10k locations, and we need to find all locations within a radius of N kilometers from a given location. I am not looking for an explicit database with geospatial support; I hope to get by without another external solution. Is there something that would use Python only?
Shapely seems to be a good solution. Its description seems to correspond to what you're looking for:
[Shapely] It lets you do PostGIS-ish stuff outside the context of a database using Python.
It is based on GEOS, which is a widely used C++ library.
Here is a link to the documentation
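As a quick illustration (my own sketch with made-up points; Shapely works in planar coordinates, so for latitude/longitude you would first project, or approximately convert the kilometer radius to degrees):
from shapely.geometry import Point

locations = [Point(22.22, 33.33), Point(8.0, 5.0), Point(3.12, 5.0)]  # made-up data
center = Point(3.0, 8.0)
radius = 10.0  # in the same planar units as the coordinates

nearby = [p for p in locations if p.distance(center) <= radius]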
scipy.spatial has a kd-tree implementation that might be the most popular in Python.
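For example (a sketch with made-up planar points; query_ball_point returns the indices of all points within the given radius):
import numpy as np
from scipy.spatial import cKDTree

locations = np.random.rand(10000, 2) * 100  # made-up (x, y) data
tree = cKDTree(locations)

idx = tree.query_ball_point([3.0, 8.0], r=10.0)  # indices of points within radius 10
nearby = locations[idx]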
A self-made solution without any external modules could be something like this:
import numpy as np

points = np.array([[ 22.22,  33.33],
                   [  8.00,   5.00],
                   [  3.12,   5.00],
                   [  9.00,   8.00],
                   [ -2.50,   3.00],
                   [  0.00,  -1.00],
                   [-10.00, -10.00],
                   [ 12.00,  12.00],
                   [ -4.00,  -6.00]])

r = 10.0  # radius within which the points should lie
xm = 3    # center x coordinate
ym = 8    # center y coordinate

# keep only the points whose Euclidean distance to (xm, ym) is below r
points_i = points[((points[:,0] - xm)**2 + (points[:,1] - ym)**2)**0.5 < r]
points_i contains the points that lie within the radius. This solution requires the data to be in a numpy array, which to my knowledge is also a very fast way to go through large data sets, as opposed to for loops. I guess this solution is pretty much minimalistic.