What I am trying to do is create a 3D triangulated mesh that can be parsed into a .vtk or .stl file for use in a 3D printing application. Right now I am stuck on the creation of the triangle mesh. The geometry I want to create is basically three-dimensional sine waves that have a certain thickness and intersect each other. So far I have one sine wave. Here's a MWE:
import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage
import scipy.spatial
# create empty 3d array
array = np.zeros((100, 100, 100))
# create 3D sine wave in empty array
strut = np.sin(np.linspace(1, 10, 100))*12
for i, val in enumerate(strut):
    y_shift = int(np.round(val))
    array[i, 50 + y_shift, 50] = 1
pattern = np.ones((4, 4, 4))
# convolve the array with the pattern / apply thickness
conv_array = ndimage.convolve(array, pattern)
# create list with data coordinates from convolved array
data = list()
for j in range(conv_array.shape[0]):
    for k in range(conv_array.shape[1]):
        for l in range(conv_array.shape[2]):
            if conv_array[j, k, l] != 0:
                data.append([j, k, l])
data = np.asarray(data)
tri = scipy.spatial.Delaunay(data)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_trisurf(data[:, 0], data[:, 1], data[:, 2], triangles=tri.simplices)
plt.show()
What it does: I create an empty array which I fill with a sine wave represented by ones. I convolve that array with a rectangular array of a defined size, which gives me a thicker sine wave in space. Then the array gets converted into coordinate form so that it can be triangulated using Delaunay triangulation. What I get is this:
[Plot of the triangulated sine wave]
As you can see, the triangulation kind of worked, but it fills the space between the sine wave's amplitudes. Is there a way to remove the filled spaces? Or to prevent it from creating them in the first place? The sine wave also looks wrong at the ends, and I am not sure why. Is this even the best method to achieve what I am trying to do?
The parsing to a .vtk file should not present a problem, but I need a clean structure first. Thanks in advance for any kind of help!
I would not reinvent the wheel and do all that on my own. Instead, use python-vtk and ParaView (a post-processing application for 3D data) to do the triangulation for you. "Just" create the points and do the rest in that application.
I don't know much about 3D printing, but I know my fair share about STL and VTK. It is a pain to do manually, and the VTK library has some nice Python examples and a dedicated STLWriter. You just need to wrap your head around VTK's workflow and how it manages things internally. This is where ParaView comes in quite handy: it lets you record the actions you perform in the GUI and displays them as Python code, which is a great way to learn how it works internally.
Finally I got something very close to what I want. In case someone is interested in the answer:
Instead of going with the point-cloud approach, I dug into VTK (which is a pain to learn, but has a lot of functionality) with Python.
My algorithm is basically this (a rough code sketch follows the list):
Approximate the sine wave as a simple triangular wave first.
Feed the x, y and z coordinates of the wave into a vtkPoints object
Use vtkParametricSpline to get a smooth wave
vtkSplineFilter to have control over the smoothness of the wave
vtkTubeFilter to create a volume from the line
vtkTriangleFilter for meshing
vtkSTLWriter
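Putting those steps together, here is a minimal sketch of the pipeline, assuming the same 100-point sine wave as in the question (the coordinates, tube radius and resolutions are illustrative, not my exact values):

import numpy as np
import vtk

# Rough approximation of the strut (illustrative coordinates)
x = np.arange(100, dtype=float)
y = 50 + 12 * np.sin(np.linspace(1, 10, 100))
z = np.full(100, 50.0)

points = vtk.vtkPoints()
for xi, yi, zi in zip(x, y, z):
    points.InsertNextPoint(xi, yi, zi)

# Smooth spline through the points
spline = vtk.vtkParametricSpline()
spline.SetPoints(points)
spline_source = vtk.vtkParametricFunctionSource()
spline_source.SetParametricFunction(spline)
spline_source.SetUResolution(500)

# Control over the smoothness (number of segments) of the wave
spline_filter = vtk.vtkSplineFilter()
spline_filter.SetInputConnection(spline_source.GetOutputPort())
spline_filter.SetSubdivideToSpecified()
spline_filter.SetNumberOfSubdivisions(500)

# Create a volume from the line
tube = vtk.vtkTubeFilter()
tube.SetInputConnection(spline_filter.GetOutputPort())
tube.SetRadius(2.0)
tube.SetNumberOfSides(24)
tube.CappingOn()

# Mesh with triangles and write the STL
tri = vtk.vtkTriangleFilter()
tri.SetInputConnection(tube.GetOutputPort())
writer = vtk.vtkSTLWriter()
writer.SetFileName('sine_strut.stl')
writer.SetInputConnection(tri.GetOutputPort())
writer.Write()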
I want to find the derivatives of some scattered data. I have tried two different methods:
projecting the scattered data onto a regular grid using scipy.interpolate.griddata, then computing the gradients with numpy.gradient, and then projecting the values back onto the scattered locations.
creating a CloughTocher2DInterpolator (but I have the same issue with other interpolators) and getting the gradients out of it
The second one is an order of magnitude faster than the first, but unfortunately it also goes crazy quite quickly when the data are a bit complex. For instance, starting with this signal (called F, which is a simple addition of tanh stepwise functions along x and y):
When I process F using the two methods, I get:
Method 1 gives a good approximation. Method 2 is also good, but I have to force the colormap limits because of some extreme values.
Now, if I add a small amount of noise (amplitude 0.1, while the signal has amplitudes between -3 and 3), the interpolator just goes crazy, giving very large extreme values:
I don't know how to deal with this. I understand the interpolator won't like an irregular function or noise, but I was not expecting such a discrepancy. My first idea was to smooth the data first, but strangely I can't find any method that would help me with this. Another idea would be to make a 2D fit of F to try to remove the noise, but I'm dry here too... any ideas?
Here is the corresponding Python example (working on Python 3.6.9):
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
plt.interactive(True)
# scattered data
N = 200
coordu = np.random.rand(N**2,2)
Xu=coordu[:,0]
Yu=coordu[:,1]
# noise = 0.  # uncomment (and drop the next line) for the noise-free case
noise = np.random.rand(Xu.shape[0])*0.1
Zu=np.tanh((Xu-0.25)/0.01+(Yu-0.25)/0.001)+np.tanh((Xu-0.5)/0.01+(Yu-0.5)/0.001)+np.tanh((Xu-0.75)/0.001+(Yu-0.75)/0.001)+noise
plt.figure();plt.scatter(Xu,Yu,1,Zu)
plt.title('Data signal F')
#plt.savefig('signalF_noisy.png')
### get the gradient
# using griddata np.gradients
Xs,Ys=np.meshgrid(np.linspace(0,1,N),np.linspace(0,1,N))
coords = np.array([Xs,Ys]).T
Zs = interpolate.griddata(coordu,Zu,coords)
nearest = interpolate.griddata(coordu,Zu,coords,method='nearest')
znan = np.isnan(Zs)
Zs[znan] = nearest[znan]
dZs = np.gradient(Zs,np.min(np.diff(Xs[0,:])))
dZus = interpolate.griddata(coords.reshape(N*N,2),dZs[0].reshape(N*N),coordu)
hist_dzus = np.histogram(dZus,100)
plt.figure();plt.scatter(Xu,Yu,1,dZus)
plt.colorbar()
plt.clim([0 ,10])
plt.title('dF/dx using griddata and np.gradients')
#plt.savefig('dxF_griddata_noisy.png')
# using interpolation method Clough
interp = interpolate.CloughTocher2DInterpolator(coordu,Zu)
dZuCT = interp.grad
hist_dzct = np.histogram(dZuCT[:,0,0],100)
plt.figure();plt.scatter(Xu,Yu,1,dZuCT[:,0,0])
plt.colorbar()
plt.clim([0 ,10])
plt.title('dF/dx using CloughTocher2DInterpolator')
#plt.savefig('dxF_CT2D_noisy.png')
# histograms
plt.figure()
plt.semilogy(hist_dzus[1][:-1],hist_dzus[0],'.-')
plt.semilogy(hist_dzct[1][:-1],hist_dzct[0],'.-')
plt.title('histogram of dF/dx')
plt.legend(('griddata','CloughTocher'))
#plt.savefig('dxF_hist_noisy.png')
I am trying to find intersection points between a 3D object in an .stl file (that I imported into Python using numpy-stl) and a 3D plot (that I generated using matplotlib). The 3D object is a CAD drawing and the 3D plot is a bunch of curves in 3D. I can plot them on the same figure, but I have no idea how I can find the intersection points between the trajectory and the drawing. My idea was to convert the curve into .stl using the save() function from the numpy-stl module after plotting it, show them together in a CAD application like FreeCAD, and find the intersection using the application's functionality. But it is not that simple, because the plots are point-based and .stl is triangle-based. If anyone has any advice on how to approach this problem, please let me know!
Here is the code to plot both .stl object and the 3d plot. This is what I have so far.
#allInitialE is 1D list, allX, allY, allZ are all 2D lists
from stl import mesh
from numpy import *
from mpl_toolkits import mplot3d
from matplotlib import pyplot as plt
fig = plt.figure()
ax = plt.axes(projection = '3d')
your_mesh = mesh.Mesh.from_file('fileName.stl')
your_mesh.translate([0,7,0])
ax.add_collection3d(mplot3d.art3d.Poly3DCollection(your_mesh.vectors))
ax.view_init(azim = -90, elev = 0)
maxE = max(allInitialE)
ax.set_xlabel('x axis (m)')  # y and z are flipped to make it easier for me to visualize
ax.set_ylabel('z axis (m)')
ax.set_zlabel('y axis (m)')
plt.title('Particle Trajectory')
for k in range(numParticles):  # iterate through each particle's xyz data
    e = allInitialE[k]
    if e < maxE/3:
        ax.plot3D(allX[k], allZ[k], allY[k], 'g-')
    elif e < maxE/2:
        ax.plot3D(allX[k], allZ[k], allY[k], 'b-')
    else:
        ax.plot3D(allX[k], allZ[k], allY[k], 'r-')
plt.show()
1st idea: Is there any way to convert an .stl object into a set of plane functions? If so, I could turn the plots into lines and find the intersection between each plane and line.
2nd idea: Or, since .stl is vector-based, could I use vector calculations? I.e., check whether a vector on a curve (a line segment) shares a common point with a triangle of the .stl object (each triangle being defined by three vertices).
Please give me any idea you may have! Thank you so much.
Another way might be to use VTK. You can convert your numpy-stl object (obj) to a vtkPolyData:
import vedo
import vtk
import itertools
obj = your_mesh
verts = list(itertools.chain(*(obj.vectors)))
faces = [[i*3, i*3+1, i*3+2] for i in range(len(verts)//3)]
vpoly = vedo.Mesh([verts, faces]).clean().polydata()
Then you could use vtkCutter (see https://discourse.vtk.org/t/get-intersection-of-polydata-line-and-a-plane/3894/3) or vtkIntersectionPolyDataFilter (see the SO question "VTK check polydata point objects for intersection").
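For the line-segment-versus-mesh case specifically, another VTK route (my suggestion, not from the links above) is vtkOBBTree, which intersects a line with the polydata directly. A minimal sketch, assuming vpoly from the snippet above and hypothetical endpoints p0, p1 of one trajectory segment:

import vtk

obb = vtk.vtkOBBTree()
obb.SetDataSet(vpoly)
obb.BuildLocator()

p0 = [0.0, 0.0, 0.0]     # hypothetical segment endpoints from your curve
p1 = [10.0, 7.0, 3.0]

hits = vtk.vtkPoints()
obb.IntersectWithLine(p0, p1, hits, None)
for i in range(hits.GetNumberOfPoints()):
    print(hits.GetPoint(i))  # intersection coordinates along the segment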
Concerning your 1st idea: yes, you can convert your STL file into a set of triangles, just check out:
your_mesh.vectors
which is an array of triangle vertices in 3D. From those you can construct a plane, and then calculate the intersection between segment and plane.
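As an illustration of that segment-triangle test, here is a minimal sketch using the Möller-Trumbore algorithm (my choice of method, not from the answer above; v0, v1, v2 would come from one row of your_mesh.vectors, and p0, p1 are numpy arrays holding one segment of the trajectory):

import numpy as np

def segment_triangle_intersect(p0, p1, v0, v1, v2, eps=1e-9):
    # Möller-Trumbore: intersection of segment p0->p1 with triangle (v0, v1, v2).
    # Returns the intersection point, or None if there is none.
    d = p1 - p0
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:              # segment parallel to the triangle plane
        return None
    f = 1.0 / a
    s = p0 - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:        # outside the triangle
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:    # outside the triangle
        return None
    t = f * np.dot(e2, q)
    if 0.0 <= t <= 1.0:           # within the segment, not just the infinite ray
        return p0 + t * d
    return None

# e.g. testing one trajectory segment against every facet:
# for v0, v1, v2 in your_mesh.vectors:
#     hit = segment_triangle_intersect(p0, p1, v0, v1, v2)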
Concerning the 2nd idea: that would only work if your trajectory passes exactly through a vertex, otherwise it would not detect the intersection (or you would need to give it a margin).
I found a way using pyoctree, which has a function to find intersection between line segments and mesh. Here is the link: https://pypi.org/project/pyoctree/
I was able to use rayIntersection() to do what I wanted, and it was really quick. The SO question "Intersection between line and triangle in 3D" also helped, but that calculation was very slow when dealing with 60K+ points in a curve.
I have a 3D numpy array that I want to rotate by an angle of my choosing. I have tried the scipy.ndimage.rotate function, and it does the job. However, it does a lot of rounding when rotating. This is a problem for me because my 3D array represents an object, and the number in each voxel encodes the material that voxel is filled with (which I store in a different file). Therefore, I need a way to rotate the array without approximating or blending the values; making the object blurry is not a problem.
Here is what I got with the function I used:
The problem you are dealing with is essentially a sampling issue. Your resolution is too low for the data you are dealing with. One possibility to solve this is to increase the resolution of the image you are working with, enforce the color values as you rotate (i.e. no blending of colors at the edges), and create a size/shape template that must be met after the rotation.
Edit: For clarity, it isn't the data that is at too low of a resolution, it's the image in which the data is stored that should be at a high enough resolution. The wikipedia page on multidimensional sampling is good for this topic: https://en.wikipedia.org/wiki/Multidimensional_sampling
I think the way I would approach it, short of someone knowing an actual package that does this, is to start with the indices and rotate them; then, given that they may be floating point, round them. This may not be the best approach, but I think it should work.
Most of this example is loading a 3D dataset I found to use as an example.
import matplotlib.pyplot as plt
import os
import numpy as np
from scipy.ndimage import rotate
def load_example_data():
    # Example dataset found online: the Stanford MRbrain volume
    from urllib.request import urlopen
    import tarfile
    response = urlopen('http://graphics.stanford.edu/data/voldata/MRbrain.tar.gz')
    with open('MRbrain.tar.gz', 'wb') as f:
        f.write(response.read())  # save the archive to disk before extracting
    with tarfile.open('MRbrain.tar.gz') as tar_file:
        os.makedirs('mri_data', exist_ok=True)
        tar_file.extractall('mri_data')
    data = np.array([np.fromfile(os.path.join('mri_data', 'MRbrain.%i' % i),
                                 dtype='>u2') for i in range(1, 110)])
    data.shape = (109, 256, 256)
    return data

def rotate_nn(data, angle, axes):
    """
    Rotate `data` by rotating grids of its coordinates and indexing
    with the rounded (nearest-neighbour) result.
    """
    # Create grids of indices
    shape = data.shape
    d1, d2, d3 = np.mgrid[0:shape[0], 0:shape[1], 0:shape[2]]
    # Rotate the index grids
    d1r = rotate(d1, angle=angle, axes=axes)
    d2r = rotate(d2, angle=angle, axes=axes)
    d3r = rotate(d3, angle=angle, axes=axes)
    # Round to integer indices and clip to the valid index range
    d1r = np.clip(np.round(d1r).astype(int), 0, shape[0] - 1)
    d2r = np.clip(np.round(d2r).astype(int), 0, shape[1] - 1)
    d3r = np.clip(np.round(d3r).astype(int), 0, shape[2] - 1)
    return data[d1r, d2r, d3r]
data = load_example_data()
# Rotate the coordinates indices
angle = 5
axes = (0, 1)
data_r = rotate_nn(data, angle, axes)
I think the general idea will work. You will have to consider which axes to rotate around.
For anyone with this problem stumbling upon this thread: brechmos' comment under the OP put me on the right track to an actual solution. rotate() by default uses third-order spline interpolation, which gives nice smooth edges. We want sharp edges though, without in-between values. Setting order=0 does exactly this. No need for extra functions or implementing anything yourself; just change a single argument.
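A minimal sketch (data being the labelled 3D array from the question):

from scipy.ndimage import rotate

# order=0 means nearest-neighbour interpolation: every voxel in the output
# keeps one of the original material numbers instead of a blended value
rotated = rotate(data, angle=5, axes=(0, 1), order=0, reshape=False)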
Short version: I have a NxNxN matrix full of different values. I want to create a 2D projection of it looking exactly like this: http://tinyurl.com/bellfkn (3D if possible too!)
Long version: I have made a density matrix of dimension NxNxN with the following loop:
ndim = 512
massmat = np.zeros((ndim, ndim, ndim))
for i in range(0, npoints):
    massmat[int(x1[i]), int(y1[i]), int(z1[i])] += mpart
densemat = massmat / volumeofcell
massmat is a numpy array.
So basically I now have an NxNxN matrix with certain cells containing, in this case, a density (units of g/cm^3). Is there a way to turn this into a 2D projection - a side-on view of the densities, with a colorbar indicating dense areas and less dense areas?
In Matlab I would just do:
imageArray2Dmesh = mean(densemat, 3);
figure
sc(imageArray2Dmesh, 'pink')
And it gives me a density projection - I'd like to do the same but in Python. Is there a way to view the whole NxNxN matrix in a 3D projection too? Just like the link but in 3D. That would be great.
You can use very similar code in numpy and matplotlib:
import numpy as np
import pylab as plt
imageArray2Dmesh = np.mean(densemat, axis=2)  # densemat from the question
plt.figure()
plt.pcolor(imageArray2Dmesh, cmap=plt.cm.pink)
plt.colorbar()
plt.show()
You need a couple more commands, but that is just down to the different approaches to graphics in Matlab and matplotlib (hint: in the long run, the matplotlib way is way better).
If you want the projection along another direction, just change the axis parameter (remember that Python indices start from 0 and not from 1 like Matlab).
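For example, a small sketch using densemat from the question:

proj_z = np.mean(densemat, axis=2)  # the side-on view shown above
proj_y = np.mean(densemat, axis=1)  # projection along the second axis
proj_x = np.mean(densemat, axis=0)  # projection along the first axis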
For a projection along a generic direction... well, that is quite a bit more difficult.
By the way, if you need to look at some 3D data, I strongly suggest you spend some time exploring mayavi. It's still a Python library, and it's really powerful for 3D imaging:
http://docs.enthought.com/mayavi/mayavi/auto/examples.html
So, I have three numpy arrays which store latitude, longitude, and some property value on a grid -- that is, I have LAT(y,x), LON(y,x), and, say temperature T(y,x), for some limits of x and y. The grid isn't necessarily regular -- in fact, it's tripolar.
I then want to interpolate these property (temperature) values onto a bunch of different lat/lon points (stored as lat1(t), lon1(t), for about 10,000 t...) which do not fall on the actual grid points. I've tried matplotlib.mlab.griddata, but that takes far too long (it's not really designed for what I'm doing, after all). I've also tried scipy.interpolate.interp2d, but I get a MemoryError (my grids are about 400x400).
Is there any sort of slick, preferably fast way of doing this? I can't help but think the answer is something obvious... Thanks!!
Try the combination of inverse-distance weighting and scipy.spatial.KDTree, described in the SO answer inverse-distance-weighted-idw-interpolation-with-python. Kd-trees work nicely in 2d, 3d, ...; inverse-distance weighting is smooth and local, and the k = number of nearest neighbours can be varied to trade off speed against accuracy.
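A minimal sketch of that combination (my own illustration; the helper name idw_kdtree and its parameter defaults are made up):

import numpy as np
from scipy.spatial import cKDTree

def idw_kdtree(xy_known, z_known, xy_query, k=8, power=2, eps=1e-12):
    # Inverse-distance weighting over the k nearest neighbours,
    # found with a kd-tree
    tree = cKDTree(xy_known)
    dist, idx = tree.query(xy_query, k=k)
    w = 1.0 / (dist + eps) ** power       # eps guards against zero distance
    w /= w.sum(axis=1, keepdims=True)     # normalise the weights
    return (w * z_known[idx]).sum(axis=1)

# e.g. for the question's setup (names from the question):
# pts = np.column_stack([LAT.ravel(), LON.ravel()])
# t1 = idw_kdtree(pts, T.ravel(), np.column_stack([lat1, lon1]))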
There is a nice inverse distance example by Roger Veciana i Rovira along with some code using GDAL to write to geotiff if you're into that.
This is of course interpolation onto a regular grid, but it assumes you first project the data onto a pixel grid with pyproj or something similar, all the while being careful about which projection is used for your data.
A copy of his algorithm and example script:
from math import pow
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt

def pointValue(x, y, power, smoothing, xv, yv, values):
    numerator = 0
    denominator = 0
    for i in range(0, len(values)):
        dist = sqrt((x-xv[i])*(x-xv[i]) + (y-yv[i])*(y-yv[i]) + smoothing*smoothing)
        # If the point is really close to one of the data points,
        # return that data point's value to avoid singularities
        if dist < 0.0000000001:
            return values[i]
        numerator = numerator + (values[i] / pow(dist, power))
        denominator = denominator + (1 / pow(dist, power))
    # Return NODATA if the denominator is zero
    if denominator > 0:
        value = numerator / denominator
    else:
        value = -9999
    return value

def invDist(xv, yv, values, xsize=100, ysize=100, power=2, smoothing=0):
    valuesGrid = np.zeros((ysize, xsize))
    for x in range(0, xsize):
        for y in range(0, ysize):
            valuesGrid[y][x] = pointValue(x, y, power, smoothing, xv, yv, values)
    return valuesGrid

if __name__ == "__main__":
    power = 1
    smoothing = 20
    # Create some data, with the coordinates and values stored in separate lists
    xv = [10, 60, 40, 70, 10, 50, 20, 70, 30, 60]
    yv = [10, 20, 30, 30, 40, 50, 60, 70, 80, 90]
    values = [1, 2, 2, 3, 4, 6, 7, 7, 8, 10]
    # Create the output grid (100x100 in the example)
    ti = np.linspace(0, 100, 100)
    XI, YI = np.meshgrid(ti, ti)
    # Create the interpolation function and populate the output matrix
    ZI = invDist(xv, yv, values, 100, 100, power, smoothing)
    # Plot the result
    plt.subplot(1, 1, 1)
    plt.pcolor(XI, YI, ZI)
    plt.scatter(xv, yv, 100, values)
    plt.title('Inv dist interpolation - power: ' + str(power) +
              ' smoothing: ' + str(smoothing))
    plt.xlim(0, 100)
    plt.ylim(0, 100)
    plt.colorbar()
    plt.show()
There are a bunch of options here; which one is best will depend on your data... However, I don't know of an out-of-the-box solution for you.
You say your input data comes from tripolar data. There are three main cases for how this data could be structured.
Sampled from a 3d grid in tripolar space, projected back to 2d LAT, LON data.
Sampled from a 2d grid in tripolar space, projected into 2d LAT LON data.
Unstructured data in tripolar space projected into 2d LAT LON data
The easiest of these is 2. Instead of interpolating in LAT LON space, "just" transform your point back into the source space and interpolate there.
Another option that works for cases 1 and 2 is to search for the cells that map from tripolar space to cover your sample point (you can use a BSP or grid-type structure to speed up this search), pick one of the cells, and interpolate inside it.
Finally there's a heap of unstructured interpolation options .. but they tend to be slow.
A personal favourite of mine is to use a linear interpolation of the nearest N points; finding those N points can again be done with gridding or a BSP. Another good option is to Delaunay-triangulate the unstructured points and interpolate on the resulting triangular mesh, as sketched below.
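For the Delaunay option, a minimal sketch with scipy, assuming the LAT, LON, T grids and the lat1, lon1 query points from the question (scipy.interpolate.LinearNDInterpolator triangulates the points internally and interpolates linearly on each triangle):

import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Flatten the 2D grids into scattered (lat, lon) points
pts = np.column_stack([LAT.ravel(), LON.ravel()])
interp = LinearNDInterpolator(pts, T.ravel())  # Delaunay triangulation inside

# Evaluate at the ~10,000 off-grid points (NaN outside the convex hull)
t1 = interp(np.column_stack([lat1, lon1]))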
Personally if my mesh was case 1, I'd use an unstructured strategy as I'd be worried about having to handle searching through cells with overlapping projections. Choosing the "right" cell would be difficult.
I suggest taking a look at the interpolation features of GRASS (an open-source GIS package), e.g. http://grass.ibiblio.org/gdp/html_grass62/v.surf.bspline.html. It's not in Python, but you can reimplement it or interface with the C code.
Am I right in thinking your data grids look something like this (red is the old data, blue is the new interpolated data)?
(Image: http://www.geekops.co.uk/photos/0000-00-02%20%28Forum%20images%29/DataSeparation.png)
This might be a slightly brute-force-ish approach, but what about rendering your existing data as a bitmap? OpenGL will do simple interpolation of colours for you with the right options configured, and you could render the data as triangles, which should be fairly fast. You could then sample pixels at the locations of the new points.
Alternatively, you could sort your first set of points spatially and then find the closest old points surrounding your new point and interpolate based on the distances to those points.
There is a FORTRAN library called BIVAR which is very suitable for this problem. With a few modifications you can make it usable in Python using f2py.
From the description:
BIVAR is a FORTRAN90 library which interpolates scattered bivariate data, by Hiroshi Akima.
BIVAR accepts a set of (X,Y) data points scattered in 2D, with associated Z data values, and is able to construct a smooth interpolation function Z(X,Y), which agrees with the given data, and can be evaluated at other points in the plane.