Plotting a 3d triangular mesh from point cloud - python

I have this object/point cloud, rendered with PyOpenGL and Pygame.
My object is a numpy array of the coordinates of the points. I wish to generate a 3D triangular mesh of this object; it would also be nice to be able to reduce the number of triangles.
I have tried scipy.spatial.Delaunay, but it doesn't generate a surface triangulation for 3D objects.

Dual Contouring would probably work well here; it's an algorithm that takes voxelized data and turns it into a mesh. It's too involved to outline fully here, but the idea is to place your points into a 3D grid where a cell is set to 1 (full) if it contains a point and 0 (empty) if it doesn't, and then run the DC algorithm on that grid to produce a mesh. The nice thing about this algorithm is that it supports internal cavities and concave shapes.
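A minimal sketch of that voxelization step, assuming an axis-aligned grid and a hand-picked voxel_size (the names and the random stand-in cloud are illustrative, not part of any DC library):

```python
import numpy as np

points = np.random.rand(1000, 3)            # stand-in for your (n, 3) point array
voxel_size = 0.05                            # assumed grid resolution

origin = points.min(axis=0)
idx = np.floor((points - origin) / voxel_size).astype(int)   # grid cell of each point
grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)         # 0 = empty cell
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1                    # 1 = cell contains a point
# `grid` is the occupancy volume a Dual Contouring implementation would consume.
```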
Here are some links I found that may help you if you decide to use DC:
Basic Dual Contouring Theory
http://ngildea.blogspot.com/2014/11/implementing-dual-contouring.html
This is the GitHub repo of the source I used when I implemented this algorithm in Unity3D:
https://github.com/nickgildea/DualContouringSample

Related

Rasterization algorithms with anti-aliasing in Python

I have vector graphics. (In my first case, it's the epigraph of a function whose formula is given. So it is a shape whose outline is given by a parametric curve.)
I want to rasterize this image with anti-aliasing. So I want raster graphics, i.e. a numpy array. I want to obtain this array in a low-level way, avoiding libraries meant for object-oriented interactive GUI visualizations with plot axes, etc. I just want an array. The only problem with doing something like Y, X = np.ogrid[...] and then picture = Y > f(X) is that the result is not anti-aliased. (Note that blurring that binary picture is worse than a good dedicated anti-aliasing algorithm.) How can I rasterize with anti-aliasing in Python without any overkill GUI-centered libraries?
If the curve is given by an implicit equation F(x,y)=0, evaluate the value of the function at the four corners of every pixel. If the signs are the same, the pixel is wholly outside or inside. If the signs vary, the area inside the polygon formed by the corners and the points along the edges where the function vanishes (find these by a mere linear interpolation) tells you the mixture of background and foreground colors (alpha blending coefficient).
Tracing the polygon isn't that difficult: traverse the four edges of the square and keep the positive vertices and the zero points in the order you meet them. You will get anything from a triangle to a hexagon. The area is obtained by the shoelace formula.
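A minimal sketch of this corner-sampling scheme, assuming (as above) that the interior is where F > 0; the function name coverage and the disc example are only illustrations:

```python
import numpy as np

def coverage(F, x0, y0, x1, y1, nx, ny):
    """Per-pixel coverage in [0, 1] of the region F(x, y) > 0, estimated from
    the signs of F at the four pixel corners and linear interpolation of the
    zero crossings along the pixel edges (shoelace area of the inside polygon)."""
    xs = np.linspace(x0, x1, nx + 1)              # pixel corner coordinates
    ys = np.linspace(y0, y1, ny + 1)
    V = F(*np.meshgrid(xs, ys))                   # F sampled at every corner
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # corners of pixel (i, j), counter-clockwise
            quad = [(xs[i], ys[j], V[j, i]), (xs[i + 1], ys[j], V[j, i + 1]),
                    (xs[i + 1], ys[j + 1], V[j + 1, i + 1]), (xs[i], ys[j + 1], V[j + 1, i])]
            poly = []
            for (xa, ya, va), (xb, yb, vb) in zip(quad, quad[1:] + quad[:1]):
                if va > 0:                        # positive corner lies inside
                    poly.append((xa, ya))
                if (va > 0) != (vb > 0):          # edge crosses the curve F = 0
                    t = va / (va - vb)            # linear interpolation of the zero
                    poly.append((xa + t * (xb - xa), ya + t * (yb - ya)))
            if len(poly) >= 3:
                x, y = np.array(poly).T
                area = 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))  # shoelace
                out[j, i] = area / ((xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j]))
    return out

# example: anti-aliased disc of radius 0.8 on a 64x64 grid
img = coverage(lambda x, y: 0.8**2 - x**2 - y**2, -1.0, -1.0, 1.0, 1.0, 64, 64)
```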
The case of a parametric function is a little harder. You need to find the intersections of the curve with the grid lines, and perform the area estimation in all cells that are traversed. For this, draw the curve as a polyline (this is called flattening), and slice the polyline with horizontals, then verticals.
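For the parametric case, a minimal sketch of the flattening step and of slicing the resulting polyline with grid lines, assuming the curve is supplied as a vectorized callable t -> (x(t), y(t)) (the function names are illustrative):

```python
import numpy as np

def flatten(curve, t0, t1, n=1000):
    """Flatten the parametric curve t -> (x(t), y(t)) into an (n+1, 2) polyline."""
    t = np.linspace(t0, t1, n + 1)
    return np.column_stack(curve(t))

def grid_crossings(poly, lines, axis):
    """Intersections of a polyline with horizontal (axis=1) or vertical (axis=0)
    grid lines, found by linear interpolation on each polyline segment."""
    p, q = poly[:-1], poly[1:]
    crossings = []
    for c in lines:
        a, b = p[:, axis] - c, q[:, axis] - c
        hit = (a < 0) != (b < 0)                    # segment straddles the grid line
        t = a[hit] / (a[hit] - b[hit])              # parameter of the crossing point
        crossings.append(p[hit] + t[:, None] * (q[hit] - p[hit]))
    return crossings

# example: quarter circle sliced by the pixel rows y = 0.1, 0.2, ..., 0.9
pts = flatten(lambda t: (np.cos(t), np.sin(t)), 0.0, np.pi / 2)
rows = grid_crossings(pts, np.arange(0.1, 1.0, 0.1), axis=1)
```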
Manim might be able to rasterize epigraphs and parametric curves well and fast. Its community edition seems actively maintained.
Edits/comments with details are welcome.

Finding the intersection(s) of two 3D Polylines

I currently have 2 paths generated by a motion planning algorithm in Python, which I interpolated into two 3D polylines with equidistant points. Next I need to find whether these two paths of different lengths intersect (or, ideally, the point(s) at which they come within a certain distance of each other, if any). This is to eventually simulate a collision avoidance algorithm.
After much searching I have found multiple libraries and algorithms that can return point/line intersections of two polylines, however they are all in 2D. I tried the Bentley-Ottmann algorithm and its Python library, the Shapely library, the Geometry3D library, and other 2D solutions found on SO, but none have worked in a 3D environment. Next I tried using a 2D geometry library to find the intersections in the x-y, x-z, and y-z planes individually, as shown in the graphs below for a simple case, but I'm not sure where to go from there or whether that's even the right approach. The polylines would usually contain 300+ points.
Any help pointing me towards a 3D algorithm or simple adjustments to make to the 2D libraries work in 3D would be really appreciated!
3D graph view, x-y view, z-y view.
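For concreteness, a brute-force sketch of the proximity check described above, computing the minimum distance between every pair of 3D segments and keeping those under a threshold (for ~300 points per path this is on the order of 10^5 pairs, which is still cheap; the function names are illustrative):

```python
import numpy as np

def segment_distance(p1, p2, q1, q2, eps=1e-12):
    """Minimum distance between the 3D segments p1-p2 and q1-q2.

    Standard clamped closest-point computation; degenerate (zero-length)
    segments fall back to point-to-segment or point-to-point distances."""
    d1, d2, r = p2 - p1, q2 - q1, p1 - q1
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
    t = np.clip((b * s + f) / e, 0.0, 1.0) if e > eps else 0.0
    s = np.clip((b * t - c) / a, 0.0, 1.0) if a > eps else 0.0
    return np.linalg.norm((p1 + s * d1) - (q1 + t * d2))

def close_approaches(path_a, path_b, threshold):
    """All (i, j, distance) triples where segment i of path_a and segment j of
    path_b come within `threshold` of each other."""
    hits = []
    for i in range(len(path_a) - 1):
        for j in range(len(path_b) - 1):
            d = segment_distance(path_a[i], path_a[i + 1], path_b[j], path_b[j + 1])
            if d <= threshold:
                hits.append((i, j, d))
    return hits

# example with two short polylines that pass within 0.1 of each other near (1, 1, 0)
path_a = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 2.0, 0.0]])
path_b = np.array([[0.0, 2.0, 0.1], [1.0, 1.0, 0.1], [2.0, 0.0, 0.1]])
print(close_approaches(path_a, path_b, threshold=0.2))
```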

How to Transform Centroid from Pixel to Real World Coordinates

I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image and from there calculate the center point of these objects in pixel space.
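For reference, the pixel-space centroid step looks roughly like this with OpenCV (a sketch; the synthetic depth image and the threshold values are stand-ins for the real camera data):

```python
import cv2
import numpy as np

# `depth` stands in for the camera's depth image; in practice it comes from the
# IFM camera driver. Thresholding around the part's height gives a binary mask.
depth = np.zeros((240, 320), dtype=np.float32)
depth[100:140, 150:200] = 0.6                      # a fake part 0.6 m from the camera
mask = cv2.inRange(depth, 0.5, 0.7)                # keep pixels near the part's depth

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
centroids = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:                               # skip degenerate contours
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # (u, v) in pixels
```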
My next task is to transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0,0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and an extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into 3D space, but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to correct lens distortion (barrel and pincushion warping, etc.) whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 vector [x, y, 1], and if so, will the returned values be relative to the camera center or the traditional (0,0) point of the image?
Thanks in advance for any insight! Also if it's any consolation I am doing everything in python and openCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
The only reason why you maybe could directly transform your 2D centroid to 3D is that you use a 3D camera. Read the manual of the camera, it should be explained how the relation between 2D and 3D coordinates is given for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
No, of course not. Please don't be lazy; start by reading the basics of camera projection instead of asking others to explain fundamentals that are written down all over the web and in the literature.
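To make the pinhole relationship concrete, here is a minimal sketch of back-projecting the centroid pixel when a depth value is available at that pixel (which a 3D camera can provide). The model and all names here are an assumption, not this camera's API:

```python
import numpy as np

def backproject(u, v, Z, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with known depth Z.

    fx, fy, cx, cy come from the intrinsic matrix K = [[fx, 0, cx],
    [0, fy, cy], [0, 0, 1]]. The result is in the camera frame (origin at
    the optical center, x right, y down, z along the optical axis)."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# To express the point in the robot/world frame, apply the extrinsics [R | t]
# (rotation R and translation t from the camera frame to that frame):
#   p_world = R @ backproject(u, v, Z, fx, fy, cx, cy) + t
```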

Python Implementation for creating a triangular mesh from an array of closed loop planar contours

I'm a wee bit stuck.
I have a 3D point cloud (an array of (n,3) vertices) from which I am trying to generate a 3D triangular mesh. So far I have had no luck.
The format my data comes in:
(x,y) values in regularly spaced (z) intervals. Think of the data as closed loop planar contours stored slice by slice in the z direction.
The vertices in my data must be absolute positions for the mesh triangles (i.e. I don't want them to be smoothed out such that the volume begins to change shape, but linear interpolation between the layers is fine).
Illustration:
Z=2. : ..x-------x... <- Contour 2
Z=1.5: ...\......|... <- Join the two contours into a mesh.
Z=1. : .....x----x... <- Contour 1
Repeat for n slices, end up with an enclosed 3D triangular mesh.
Things I have tried:
Using Open3D:
The rolling ball (pivot) method only gets about 75% of the mesh completed and leaves large areas incomplete (despite a range of ball sizes). It has particular problems at the top and bottom slices, where there tend to be large gaps in the middle (i.e. a flat face).
The Poisson reconstruction method smooths out the volume too much and I no longer have an accurate representation of the volume. This occurs at all depth settings from 3 to 12.
CGAL:
I cannot get this to work for the life of me. SWIG is not very good, and the SWIG-based CGAL bindings are not very good either.
There are two PyBind-based CGAL bindings; however, they have not incorporated CGAL's 3D triangulation libraries.
I have explored other modules like PyMesh, TriMesh, TetGen, Scikit-Geometry, Shapely, etc. I may have missed the answer somewhere along the line.
Given that my data is a list of closed-loop planar contours, it seems as though there must be some simple solution to just "joining" adjacent slice contours into one big 3D mesh, kind of like you would in Blender.
There are non-python solutions (like MeshLab) that may well solve these problems, but I require a python solution. Does anyone have any ideas? I've had a bit of a look into VTK and ITK but haven't found exactly what I'm looking for as of yet.
I'm also starting to consider that maybe I can interpolate intermediate contours between slices, and fill the contours on the top and bottom with vertices to make the data a bit more "pivot ball" method friendly.
Thank you in advance for any help, it is appreciated.
If there is a good way of doing this that isn't coded yet, I promise to code it and make it available for people in my situation :)
Actually, there are two ways of getting MeshLab functionality in Python:
The first is MeshLabXML (https://github.com/3DLIRIOUS/MeshLabXML), a third-party Python interface to MeshLab's scripting interface.
The second is PyMeshLab (https://github.com/cnr-isti-vclab/PyMeshLab), an ongoing effort by the MeshLab authors (currently in alpha stage) to provide direct Python bindings to all the MeshLab filters.
There is a very neat paper titled "Technical Note: an algorithm and software for conversion of radiotherapy contour-sequence data to ready-to-print 3D structures" in the Journal of Medical Physics that describes this problem quite nicely. No Python packages are required, although it is more easily implemented with numpy; no 3D packages are needed.
A useful excerpt is provided:
...
1. The number of slices (2D contours) constituting the specified structure is determined.
2. The number of points in each slice is determined.
3. Cartesian coordinates of each of the points in each slice are extracted and stored within dedicated data structures...
4. Numbers of points in each slice (curve) are re-arranged in such a way that the starting points (points with indices 0) are the closest points between the subsequent slices. Renumeration starts at point 0, slice 0 (slice with the lowest z coordinate).
5. Orientation (i.e., the direction determined by the increasing indices of points with relation to the interior/exterior of the curve) of each curve is determined. If differences between slices are found, numbering of points in non-matching curves (and thus, orientation) is reversed.
6. The lateral surface of the considered structure is discretized. Points at the neighboring layers are arranged into threes, constituting triangular facets for the STL file. For each triangle the closest points with the subsequent indices from each layer are connected.
7. Lower and upper base surfaces of the considered structure are discretized. The program iterates over every subsequent three points on the curve and checks if they belong to a convex part of the edge. If yes, they are connected into a facet, and the middle point is removed from further iterations.
So basically it's a problem of renumbering the points in each slice so that adjacent slices start at their closest points, then aligning the orientation of each contour, and then joining the points between two layers based on distance.
The paper also provides code to do this (for a DICOM file), however I re-wrote it myself and it works like a charm.
I hope this helps others! Make sure you credit the authors in any work you do that uses this.
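A minimal sketch of the lateral-surface step (step 6 above), assuming the two contours have already been renumbered and oriented as described in steps 4 and 5. The greedy "advance whichever side gives the shorter diagonal" rule stands in for the paper's closest-subsequent-points criterion, and the function name is illustrative:

```python
import numpy as np

def stitch_layers(a, b):
    """Greedy "zipper" triangulation of the band between two closed contours.

    a, b : (n, 3) and (m, 3) arrays of ordered points on adjacent slices,
    already renumbered so a[0] and b[0] are the closest pair and both curves
    share the same orientation. Returns a list of triangles, each a tuple of
    three 3D points (triangle winding is not normalized here)."""
    n, m = len(a), len(b)
    tris = []
    i = j = 0
    while i < n or j < m:
        ai, aj = a[i % n], a[(i + 1) % n]
        bi, bj = b[j % m], b[(j + 1) % m]
        # advance on whichever layer gives the shorter new diagonal
        adv_a = np.linalg.norm(aj - bi)
        adv_b = np.linalg.norm(bj - ai)
        if j >= m or (i < n and adv_a <= adv_b):
            tris.append((ai, aj, bi))   # triangle using the next point of contour a
            i += 1
        else:
            tris.append((ai, bj, bi))   # triangle using the next point of contour b
            j += 1
    return tris

# example: stitch two offset square contours on slices z = 1 and z = 2
lower = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
upper = np.array([[0.2, 0.2, 2], [0.8, 0.2, 2], [0.8, 0.8, 2], [0.2, 0.8, 2]])
band = stitch_layers(lower, upper)      # 8 triangles forming the closed band
```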
A recent feature of pymadcad can do things like this; I'm not sure though whether it fits your exact expectations in terms of the "pivot ball" and such. Check out the docs for blending.
Starting from a list of outlines, it can generate blended surfaces to join them:
For your purpose, I guess the best is one of:
blendpair(line1, line2)
junction(*lines)

Is there a Python library to do constrained triangulations in 3D?

I have a function whose range is a non-convex, simply connected region in R3. When I sample the function I know whether the resulting point is on the surface of the region or not. I'd like to triangulate those samples subject to the surface constraints, i.e., the resulting tetrahedra should not "hide" surface points. The hull would not be convex, of course.
I searched around for a library. So far I found Triangle, but it only works in R2. I also found TetGen, which works in R3, but it requires the surface triangulation to be provided (which I don't have). Also, as far as I can see, these C/C++ libraries do not have Python bindings.
Any suggestions? Thanks!
You may take a look at CGAL; it has Python bindings.
One side note: if you need the surface triangulation (which seems to be a 2D problem), you may take each face, project it to 2D, triangulate it, and map the result back to your 3D face.
EDIT due to comment: CGAL does "only" unconstrained 3D triangulation. 3D constrained triangulation requires Steiner points, since not every input can be triangulated in 3D without them (the Schönhardt polyhedron is the classical counter-example).
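A minimal sketch of that projection trick, assuming the face is planar and convex so plain 2D Delaunay suffices (a constrained 2D triangulator would be needed for non-convex faces; the helper name is illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_planar_face(pts3d):
    """Triangulate a planar 3D face by projecting it onto its own plane.

    pts3d : (n, 3) array of coplanar boundary points. Works as-is only for
    convex faces, because 2D Delaunay triangulates the convex hull.
    Returns (m, 3) index triples into pts3d."""
    pts3d = np.asarray(pts3d, dtype=float)
    centered = pts3d - pts3d.mean(axis=0)
    # the two leading right-singular vectors span the plane of the face
    _, _, vt = np.linalg.svd(centered)
    uv = centered @ vt[:2].T          # 2D coordinates within the plane
    return Delaunay(uv).simplices     # triangles reference the original 3D points
```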
Maybe you can take a look at MeshPy; it looks like it has what you are looking for: "MeshPy provides Python interfaces to three well-regarded mesh generators, Triangle by J. Shewchuk, TetGen by Hang Si, and gmsh by Christophe Geuzaine and Jean-Francois Remacle."
