Newbie here!
I have an STL file which is not watertight, and the gap is too big to repair with trimesh's close-vertex repair.
I tried open3d by following this, but I get the following error: "ValueError: vector too long".
Is there any way to make the mesh watertight? I need to calculate the CoM and the inertia matrix, and the values will not be correct if my mesh is not watertight / a closed surface.
For open3d, I first loaded the STL file, converted it to a NumPy array, and then used the following code:
import open3d as o3d

# Build a point cloud from the NumPy vertex array and save it
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(DataNP)
o3d.io.write_point_cloud("testinggggg.ply", pcd)

# Poisson surface reconstruction, then crop to the point cloud's bounding box
poisson_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8, width=0, scale=1.1, linear_fit=False)[0]
bbox = pcd.get_axis_aligned_bounding_box()
p_mesh_crop = poisson_mesh.crop(bbox)
o3d.io.write_triangle_mesh("output_testinggggg.ply", p_mesh_crop)
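As a side note, Open3D's TriangleMesh has an is_watertight() method, so once the Poisson step succeeds you can check the reconstruction directly:

print(poisson_mesh.is_watertight())  # True only if the reconstructed surface is closed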
Any help is highly appreciated!
I have managed to make the mesh watertight. I will post the solution here in case anyone runs into the same trouble in the future.
My mesh was actually made of two smaller meshes, so I first had to merge them and then use the VTK library to clean the mesh and fill the holes. This made the mesh watertight, and I could calculate everything I needed.
This is the code:
import vtk

input1 = vtk.vtkPolyData()
input2 = vtk.vtkPolyData()
input1.DeepCopy(Data1.GetOutput())
input2.DeepCopy(Data2.GetOutput())

# Append the two meshes
appendFilter = vtk.vtkAppendPolyData()
appendFilter.AddInputData(input1)
appendFilter.AddInputData(input2)
appendFilter.Update()

# Remove any duplicate points
cleanFilter = vtk.vtkCleanPolyData()
cleanFilter.SetInputConnection(appendFilter.GetOutputPort())
cleanFilter.Update()

# Fill the holes in the cleaned mesh
fill = vtk.vtkFillHolesFilter()
fill.SetInputConnection(cleanFilter.GetOutputPort())
fill.SetHoleSize(100)
fill.Update()
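As a quick sanity check (not part of the original answer), you can write the filled mesh to disk and let trimesh report whether it is watertight, along with the centre of mass and inertia tensor the question asked about. A minimal sketch, assuming the fill filter from the code above and a placeholder file name "filled.stl":

import vtk
import trimesh

# Write the filled mesh to an STL file (placeholder name)
writer = vtk.vtkSTLWriter()
writer.SetFileName("filled.stl")
writer.SetInputConnection(fill.GetOutputPort())
writer.Write()

# Load it with trimesh and inspect the mass properties
m = trimesh.load("filled.stl")
print(m.is_watertight)    # True if the hole filling closed the surface
print(m.center_mass)      # centre of mass
print(m.moment_inertia)   # 3x3 inertia tensor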
I'm creating an application that requires the mesh's vertex array to be sorted in a particular way so I can access the vertices more easily: from left to right, then from bottom to top (in the xy plane), like this:
[indexes][1]
My code successfully does that, but the resulting mesh comes out all glitched:
[resulting mesh][2]
I think it's likely that the new edges are causing the problem, but I haven't found a way to fix them. Here's the mesh-sorting code I wrote:
# This example assumes we have a mesh object selected
import bpy
import bmesh

# Weights for the ordering: left -> right, then down -> up
def order(vector):
    return vector[0] + 200 * vector[1]

# Get the active mesh
me = bpy.context.object.data
verts = []

# Get a BMesh representation
bm = bmesh.new()   # create an empty BMesh
bm.from_mesh(me)   # fill it in from a Mesh

# Convert the current verts to a list of (x, y, z) tuples
for v in bm.verts:
    verts.append((v.co.x, v.co.y, v.co.z))

# Sort the verts
verts.sort(key=order)

# Assign the sorted coordinates back to the mesh
i = 0
for v in bm.verts:
    v.co.x, v.co.y, v.co.z = verts[i]
    i += 1

# Debugging edges (possible problem?)
i = 0  # reset the counter so the first few edges actually print
for e in bm.edges:
    if i < 10:
        print(e)
    i += 1

# Finish up, write the bmesh back to the mesh
bm.verts.index_update()
bm.to_mesh(me)
# bmesh.update_edit_mesh(me)
bm.free()  # free and prevent further access
Is there any way to rearrange the edges? Or any post-processing trick I can apply to the mesh? Anything helps.
Thanks in advance.
[1]: https://i.stack.imgur.com/iuoBc.png
[2]: https://i.stack.imgur.com/atxvF.png
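One possible way to get this ordering without scrambling the geometry is to sort the vertex sequence itself rather than swapping coordinates, so edges and faces stay attached to their original vertices. A rough sketch using bmesh's built-in sort, reusing the weighting from order() above (untested):

import bpy
import bmesh

me = bpy.context.object.data
bm = bmesh.new()
bm.from_mesh(me)

# Reorder the BMVert sequence in place; connectivity follows the vertices
bm.verts.sort(key=lambda v: v.co.x + 200 * v.co.y)
bm.verts.index_update()

bm.to_mesh(me)
bm.free()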
I wrote a Python script to create a .ply file from the image and the point cloud of a 3D scan, both stored as NumPy arrays.
I can open the resulting file.ply in MeshLab and it works well.
But when I import it into Blender, there are no points; the resulting object is empty.
Do you have an idea of how to solve this?
Thanks
import os

import numpy as np
import pandas as pd
from pyntcloud import PyntCloud

def row_col_xyz_to_ply(self, xyz, rgb, name="output"):
    """Converts a NumPy (row, col, xyz) cloud of points to .ply format
    Parameters:
        xyz (NDArray): 3D points for each image pixel (row, col, (x, y, z))
        rgb (NDArray): RGBA values for each image pixel (row, col, (r, g, b, a))
        name (str): base name of the output file
    Returns:
        None: saves the .ply file to disk instead
    """
    # Extract the coordinates of the points that have actual values (not NaN) in the xyz cloud of points
    points_rows, points_cols = np.where(~np.isnan(xyz[:, :, 0]))
    # Grab the corresponding points in the xyz cloud of points
    points_xyz = xyz[points_rows, points_cols, :]  # n*3 array of 3D points (after NaN filtering)
    # Grab the corresponding points in the image
    points_image = rgb[points_rows, points_cols, 0:3]  # n*3 array of RGB points (after NaN filtering)
    # Create a dict of data
    data = {
        'x': points_xyz[:, 0],
        'y': points_xyz[:, 1],
        'z': points_xyz[:, 2],
        'red': points_image[:, 0],
        'green': points_image[:, 1],
        'blue': points_image[:, 2]
    }
    # Convert it to a cloud of points
    cloud = PyntCloud(pd.DataFrame(data=data))
    # Path where to save it
    filename = f"{name}.ply"
    path = os.path.join(self.path_exports, filename)
    # Save it
    cloud.to_file(path)
    # Debug
    print("row_col_xyz_to_ply > saved:", filename)
The problem is in Blender's .ply importer: it doesn't support points that are not used by any triangle.
I ran into the same problem that Rockcat pointed out. I'm not sure if you're still looking for an answer, but I found that this custom importer works as a bit of a workaround: it imports every point as a vertex and doesn't need them to be connected.
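If you would rather avoid a separate add-on, one possible workaround is to read the vertices yourself and build a vertex-only mesh in Blender. A rough sketch, assuming the plyfile package, Blender 2.8+, and a placeholder file name "output.ply":

import bpy
from plyfile import PlyData

# Read the vertex coordinates from the .ply file
ply = PlyData.read("output.ply")
v = ply["vertex"]
coords = [(x, y, z) for x, y, z in zip(v["x"], v["y"], v["z"])]

# Build a mesh with vertices only (no edges or faces) and link it to the scene
me = bpy.data.meshes.new("scan_points")
me.from_pydata(coords, [], [])
obj = bpy.data.objects.new("scan_points", me)
bpy.context.collection.objects.link(obj)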
When you export r.out.vtk from GRASS GIS, you get a bad surface with -99999 values instead of nulls:
I want to remove them, but a simple clip is not enough:
import pyvista as pv

pd = pv.read('./pid1.vtk')
pd = pd.clip((0, 1, 1), invert=False).extract_surface()
p = pv.Plotter()
p.add_mesh(pd)  # add atoms to scene
p.show()
resulting in:
So I wonder how to keep only the top (> -999) points and their connected cells, in order to get just the top plane (which is actually curved, not flat), using pyvista?
link to example .vtk
There is an easy way to do this and there isn't...
You could use pyvista's threshold filter with all_scalars=True as long as you have only one set of scalars:
import pyvista as pv
pd = pv.read('./pid1.vtk')
pd = pd.threshold(-999, all_scalars=True)
plotter = pv.Plotter()
plotter.add_mesh(pd) #add atoms to scene
plotter.show()
Since all_scalars filters based on every scalar array, this will only do what you expect if there are no other scalars. Unfortunately, there also seems to be a bug in pyvista (expected to be fixed in version 0.32.0) which makes this keyword unusable.
What you can do in the meantime (if you don't want to use pyvista's main branch before the fix is released) is to threshold the data yourself using numpy:
import pyvista as pv
pd = pv.read('./pid1.vtk')
scalars = pd.active_scalars
keep_inds = (scalars > -999).nonzero()[0]
pd = pd.extract_points(keep_inds, adjacent_cells=False)
plotter = pv.Plotter()
plotter.add_mesh(pd) #add atoms to scene
plotter.show()
The main point of both all_scalars (in threshold) and adjacent_cells (in extract_points) is to only keep cells where every point satisfies the condition.
With both of the above I get the following figure using your data:
The background to my problem is that I have a 3D structure saved in a .vtk file that I need to manipulate (dilate, erode, etc.). The following code snippets are designed to be run sequentially, i.e. if you run them one after the other, there should be no problems (apart from those I mention!).
I'm very new to VTK, so apologies for any very basic mistakes!
Problem
My problem stems from SimpleITK being unable to read UnstructuredGrid or PolyData:
In [1]: import SimpleITK as sitk
In [2]: img_vtk = sitk.ReadImage(file_vtk)
Traceback (most recent call last):
  File "<ipython-input-52-435ce999db50>", line 1, in <module>
    img_vtk = sitk.ReadImage(file_vtk)
  File "/usr/local/lib/python3.5/dist-packages/SimpleITK/SimpleITK.py", line 8614, in ReadImage
    return _SimpleITK.ReadImage(*args)
RuntimeError: Exception thrown in SimpleITK ReadImage: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:97:
sitk::ERROR: Unable to determine ImageIO reader for "/data/ROMPA_MRIandSeg/09S/Analysis/1_model/clip_dilate.vtk"
SimpleITK can, however, read StructuredGrid, so I tried to solve this by reading using VTK and converting.
import vtk
reader = vtk.vtkGenericDataObjectReader() # Using generic to allow it to match either Unstructured or PolyData
reader.SetFileName(file_vtk)
reader.Update()
output = reader.GetOutput()
However, from that point on, every method I've tried seems to have failed.
Proposed Solutions
Conversion to numpy, then conversion to sitk image
I attempted to convert it to a numpy array, then interpolate onto a regular grid, with a dummy value of 1 to mark the values on the structure.
from vtk.util import numpy_support
import scipy.interpolate
import numpy as np
import math

nparray = numpy_support.vtk_to_numpy(output.GetPointData().GetArray(0))

output_bounds = output.GetBounds()
x_grid = range(math.floor(output_bounds[0]), math.ceil(output_bounds[1]), 1)
y_grid = range(math.floor(output_bounds[2]), math.ceil(output_bounds[3]), 1)
z_grid = range(math.floor(output_bounds[4]), math.ceil(output_bounds[5]), 1)

grid = list()
for x in x_grid:
    for y in y_grid:
        for z in z_grid:
            grid.append((x, y, z))

dummy = np.array([1 for i in range(nparray.shape[0])])
npgrid = scipy.interpolate.griddata(nparray, dummy, grid, fill_value=0)
npgrid = npgrid.reshape(len(x_grid), len(y_grid), len(z_grid))

img = sitk.GetImageFromArray(npgrid)
sitk.WriteImage(img, file_out)
However, when I load this in ParaView, a bounding box is displayed for the output, but a contour of the output is empty.
Using ShepardMethod
I attempted to interpolate using the built-in ShepardMethod, after converting the UnstructuredGrid to PolyData (as I'd mostly seen ShepardMethod being applied to PolyData):
bounds = output.GetBounds()
spacings = [1.0, 1.0, 1.0]  # arbitrary spacing
dimensions = [0, 0, 0]
for i, spacing in enumerate(spacings):
    dimensions[i] = int(math.ceil((bounds[i*2 + 1] - bounds[i*2]) / spacing))

vtkPoints = vtk.vtkPoints()
for i in range(0, nparray.shape[0]):
    x = nparray[i, 0]
    y = nparray[i, 1]
    z = nparray[i, 2]
    p = [x, y, z]
    vtkPoints.InsertNextPoint(p)

poly = vtk.vtkPolyData()
poly.SetPoints(vtkPoints)

shepard = vtk.vtkShepardMethod()
shepard.SetInputData(poly)
shepard.SetSampleDimensions(dimensions)
shepard.SetModelBounds(output.GetBounds())
shepard.Update()

shepard_data = shepard.GetOutput().GetPointData().GetArray(0)
shepard_numpy = numpy_support.vtk_to_numpy(shepard_data)
shepard_numpy = shepard_numpy.reshape(dimensions[0], dimensions[1], dimensions[2])
shepard_img = sitk.GetImageFromArray(shepard_numpy)
sitk.WriteImage(shepard_img, file_out)
As with the numpy effort above, this provided a bounding box in ParaView. Applying a contour provided a structure of two triangles, i.e. next to nothing seems to have been successfully written. Alternatively, I attempted to write the output directly using VTK.
shepard_data = shepard.GetOutput()
shepard_grid = vtk.vtkImageToStructuredGrid()
shepard_grid.SetInputData(shepard_data)
shepard_grid.Update()
writer = vtk.vtkStructuredGridWriter()
writer.SetFileName(file_out)
writer.SetInputData(shepard_grid.GetOutput())
writer.Write()
This produced the same output as before.
Using ProbeFilter
I tried the above using ProbeFilter instead (with both conversion to numpy and writing directly). Unfortunately, the output was the same as above.
mesh = vtk.vtkStructuredGrid()
mesh.SetDimensions(dimensions)
probe = vtk.vtkProbeFilter()
probe.SetInputData(mesh)
probe.SetSourceData(output)
probe.Update()
probe_out = probe.GetOutput()
writer = vtk.vtkStructuredGridWriter()
writer.SetFileName(file_out)
writer.SetInputData(probe.GetOutput())
writer.Write()
probe_data = probe.GetOutput().GetPointData().GetArray(0)
probe_numpy = numpy_support.vtk_to_numpy(probe_data)
probe_numpy = probe_numpy.reshape(dimensions[0],dimensions[1],dimensions[2])
probe_img = sitk.GetImageFromArray(probe_numpy)
sitk.WriteImage(probe_img,file_out)
However, this seemed to produce no viable output (vtkStructuredGridWriter produced an empty file, and probe_numpy was empty).
Changing ParaView output
My original data comes from a StructuredGrid .vtk file that I open in ParaView and then clip to remove structures that aren't required in the mesh. Saving the output produces an UnstructuredGrid, and I have been unable to figure out whether I can change that and avoid this mess in the first place!
Just use "Resample With Dataset" filter in ParaView.
Open ParaView
Open a StructuredGrid file file with the geometry you want it to have
Open your UnstructuredGrid file
Add a "Resample with dataset" filter
Select structured data as source input
Apply
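If you prefer to script it, the same operation is available in VTK as vtkResampleWithDataSet. Note that at the VTK level the input is the geometry to sample onto and the source is the data being sampled, which reads the other way round from the ParaView dialog. A rough sketch with placeholder file names:

import vtk

# Geometry to sample onto (placeholder file name)
grid_reader = vtk.vtkStructuredGridReader()
grid_reader.SetFileName("geometry.vtk")
grid_reader.Update()

# Data to sample from, e.g. the clipped UnstructuredGrid (placeholder file name)
ug_reader = vtk.vtkUnstructuredGridReader()
ug_reader.SetFileName("clipped.vtk")
ug_reader.Update()

resample = vtk.vtkResampleWithDataSet()
resample.SetInputConnection(grid_reader.GetOutputPort())   # point locations
resample.SetSourceConnection(ug_reader.GetOutputPort())    # values to resample
resample.Update()

# The output has the same structure as the input geometry
writer = vtk.vtkStructuredGridWriter()
writer.SetFileName("resampled.vtk")
writer.SetInputConnection(resample.GetOutputPort())
writer.Write()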
I have been looking for this answer but I can't seem to find it anywhere, so I hope I can get my answer here...
I'm using the Maya Python API and I want to apply a transformation matrix to a mesh.
This is how I made the mesh:
import maya.cmds as cmds
import maya.OpenMaya as om

mesh = om.MFnMesh()
ShapeMesh = cmds.group(em=True)
parentOwner = get_mobject(ShapeMesh)
meshMObj = mesh.create(NumVerts, len(FaceCount), VertArray, FaceCount, FaceArray, parentOwner)
cmds.sets(ShapeMesh, e=True, forceElement='initialShadingGroup')
defaultUVSetName = mesh.currentUVSetName(-1)
mesh.setUVs(UArray, VArray, defaultUVSetName)
mesh.assignUVs(FaceCount, FaceArray, defaultUVSetName)
This is how I create the transformation matrix:
import struct

m = struct.unpack("<16f", f.read(64))  # read 16 little-endian floats from the open binary file f
mm = om.MMatrix()
om.MScriptUtil.createMatrixFromList(m, mm)
mt = om.MTransformationMatrix(mm)
Basically I read 16 floats and convert them into a transformation matrix, but I don't know how to apply the mt matrix to my mesh...
I did manage to get the position, rotation, and scale from it, in case that helps:
translate = mt.translation(om.MSpace.kWorld)
rotate = mt.rotation().asEulerRotation()
scaleUtil = om.MScriptUtil()
scaleUtil.createFromList([0,0,0],3)
scaleVec = scaleUtil.asDoublePtr()
mt.getScale(scaleVec,om.MSpace.kWorld)
scale = [om.MScriptUtil.getDoubleArrayItem(scaleVec,i) for i in range(0,3)]
Now my last step is applying this matrix to the mesh, but I can't find a good way to do it. Does someone know how to do this in Maya?
Thanks in advance:
Seyren.
Not sure what you mean by applying the matrix to your mesh, but if you want to update the position of each point by transforming it with that matrix, then here you go, for a given MFnMesh mesh and a given MMatrix matrix:
import banana.maya
banana.maya.patch()

from maya import OpenMaya

mesh = OpenMaya.MFnMesh.bnn_get('pCubeShape1')
matrix = OpenMaya.MMatrix()
points = OpenMaya.MPointArray()
mesh.getPoints(points)
for i in range(points.length()):
    points.set(points[i] * matrix, i)
mesh.setPoints(points)
If you don't want to directly update the points of the mesh, then you need to apply the matrix to the transformation node by retrieving its parent transform and using the MFnTransform::set() method.
Note that my code snippet uses a set of extensions that I've written and that might be helpful if you're using the Maya Python API. The code is available on GitHub and also comes with documentation to give you an idea.
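For the second option (applying the matrix to the transform node rather than to the points), a rough sketch with the API 1.0 classes used in the question, assuming mesh is the MFnMesh and mt is the MTransformationMatrix built above (untested):

import maya.OpenMaya as om

# Get the DAG path of the mesh shape and step up to its parent transform
dag_path = om.MDagPath()
mesh.getPath(dag_path)
transform_fn = om.MFnTransform(dag_path.transform())

# Push the transformation matrix onto the transform node
transform_fn.set(mt)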