Python: Mesh a voxel assembly to compute the surface area

I have a voxel assembly which represents a stone in 3D. It's a binarized 3D numpy array in which 1 is assigned to the voxels that make up my stone and 0 elsewhere.
I want to:
create its meshed surface
calculate the surface area of that mesh.
But how?

I can see two options depending on your application: a 3D convex hull or marching cubes.
3D convex hull
First, you will need to obtain an array of the 3D coordinates of every voxel where your array equals 1.
Then you can use scipy.spatial.ConvexHull. The SciPy method is basically a wrapper for the Qhull library; when using Qhull, one of the output options (FA) computes the total area and volume. A minimal sketch follows the links below.
You can find some more information here:
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
http://www.qhull.org/html/qconvex.htm
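A minimal sketch of the ConvexHull route, assuming a binarized volume as described in the question; the cube-shaped test volume is purely illustrative:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Illustrative stand-in for the stone: a solid cube inside a larger volume.
volume = np.zeros((20, 20, 20), dtype=np.uint8)
volume[5:15, 5:15, 5:15] = 1

# Step 1: coordinates of every voxel equal to 1, as an (N, 3) array.
points = np.argwhere(volume == 1)

# Step 2: the hull; SciPy exposes Qhull's total facet area and volume directly.
hull = ConvexHull(points)
print("hull surface area:", hull.area)
print("hull volume:", hull.volume)
```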
Marching Cubes
If you need to preserve concavities, the previous option will not be enough; marching cubes is a good alternative. There is already a Python implementation in scikit-image.
http://scikit-image.org/docs/dev/auto_examples/plot_marching_cubes.html
Unlike the convex hull, marching cubes works on the 3D array itself rather than on a point list. You calculate the vertices/faces with:
http://scikit-image.org/docs/dev/api/skimage.measure.html#marching-cubes
And then you can calculate the area with:
http://scikit-image.org/docs/dev/api/skimage.measure.html#mesh-surface-area

I solved my problem; I am posting it here in case it is useful. The Marching Cubes algorithm is good, but it does not work well directly on binarized arrays (the mesh comes out as a jagged staircase). So, as sketched in code below:
1) Apply a Gaussian filter to the 3D array (scipy.ndimage)
2) Mesh it with the Marching Cubes algorithm (scikit-image)
3) Sum up the areas of the triangles (scikit-image)
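A minimal sketch of those three steps, assuming a recent scikit-image (where marching_cubes returns verts, faces, normals, values); the test volume and the sigma value are illustrative:

```python
import numpy as np
from scipy import ndimage
from skimage import measure

# Illustrative binarized volume (1 = stone, 0 = background).
volume = np.zeros((30, 30, 30), dtype=float)
volume[8:22, 8:22, 8:22] = 1.0

# 1) Gaussian filter: smooths the staircase edges of the binary array.
smoothed = ndimage.gaussian_filter(volume, sigma=1.0)

# 2) Marching cubes: 0.5 is the midpoint between inside (1) and outside (0).
verts, faces, normals, values = measure.marching_cubes(smoothed, level=0.5)

# 3) Sum the areas of the resulting triangles.
area = measure.mesh_surface_area(verts, faces)
print("surface area:", area)
```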

Related

Rasterization algorithms with anti-aliasing in Python

I have vector graphics. (In my first case, it's the epigraph of a function whose formula is given. So it is a shape whose outline is given by a parametric curve.)
I want to rasterize this image with anti-aliasing. So I want raster graphics, i.e. a numpy array. I want to obtain this array in a low-level way, avoiding libraries that are meant for object-oriented interactive GUI visualizations with plot axes, etc. I just want an array. The only problem with doing something like Y,X=np.ogrid(...) and then picture = Y>f(X) is that the result is not anti-aliased. (Note that blurring that binary picture is worse than a good dedicated anti-aliasing algorithm.) How can I rasterize with anti-aliasing in Python without any overkill GUI-centered libraries?
If the curve is given by an implicit equation F(x,y)=0, evaluate the function at the four corners of every pixel. If the signs are all the same, the pixel is wholly outside or inside. If the signs vary, the area of the polygon formed by the inside corners and the points along the edges where the function vanishes (found by simple linear interpolation) tells you the mixture of background and foreground colors (the alpha blending coefficient).
Tracing the polygon isn't that difficult: traverse the four edges of the square and keep the positive vertices and the zero points in the order you meet them; you will get anything from a triangle to a hexagon. The area is then obtained by the shoelace formula.
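A minimal per-pixel sketch of that procedure for an implicit curve; here "inside" is taken as F(x, y) <= 0 (flip the sign for the opposite convention), and the coverage function and the disc used as F are illustrative:

```python
import numpy as np

def coverage(F, x0, y0):
    """Fraction of the unit pixel with lower corner (x0, y0) where F <= 0.

    Traverses the four edges, keeping inside corners and the linearly
    interpolated zero crossings, then applies the shoelace formula.
    """
    corners = [(x0, y0), (x0 + 1, y0), (x0 + 1, y0 + 1), (x0, y0 + 1)]
    vals = [F(x, y) for x, y in corners]
    if all(v <= 0 for v in vals):
        return 1.0                      # pixel wholly inside
    if all(v > 0 for v in vals):
        return 0.0                      # pixel wholly outside
    poly = []
    for i in range(4):
        (xa, ya), (xb, yb) = corners[i], corners[(i + 1) % 4]
        va, vb = vals[i], vals[(i + 1) % 4]
        if va <= 0:
            poly.append((xa, ya))       # keep the inside vertex
        if (va <= 0) != (vb <= 0):      # sign change: the curve crosses this edge
            t = va / (va - vb)          # linear interpolation of the zero point
            poly.append((xa + t * (xb - xa), ya + t * (yb - ya)))
    # Shoelace formula; the pixel has unit area, so this is the alpha value.
    area = 0.0
    for i in range(len(poly)):
        xa, ya = poly[i]
        xb, yb = poly[(i + 1) % len(poly)]
        area += xa * yb - xb * ya
    return abs(area) / 2.0

# Illustrative implicit curve: a disc of radius 20 centered at (32, 32).
F = lambda x, y: (x - 32.0) ** 2 + (y - 32.0) ** 2 - 20.0 ** 2
picture = np.array([[coverage(F, x, y) for x in range(64)] for y in range(64)])
```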
The case of a parametric function is a little harder. You need to find the intersections of the curve with the grid lines, and perform the area estimation in all cells that are traversed. For this, draw the curve as a polyline (this is called flattening), and slice the polyline with horizontals, then verticals.
Manim might be able to rasterize epigraphs and parametric curves well and fast. Its community edition seems actively maintained.
Edits/comments with details are welcome.

How to Transform Centroid from Pixel to Real World Coordinates

I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image and from there calculate the center point of these objects in pixel space.
My next task is to transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0,0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and an extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into 3D space, but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to fix lens distortion (barrel and pinhole warping, etc.), whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method, but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 matrix [x, y, 1], and if so, will the returned values be relative to the camera center or to the traditional (0,0) point of an image?
Thanks in advance for any insight! Also if it's any consolation I am doing everything in python and openCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
The only reason you may be able to directly transform your 2D centroid to 3D is that you use a 3D camera. Read the camera's manual; it should explain how the relation between 2D and 3D coordinates is defined for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
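For the depth-camera case this answer describes, a minimal sketch of the standard pinhole back-projection; the intrinsics (fx, fy, cx, cy), the depth value, and the pixel coordinates are all illustrative placeholders for what your camera actually reports:

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into the camera frame,
    assuming an undistorted pinhole model."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])

# Illustrative numbers only: centroid at pixel (320, 240), 0.85 m away.
p_cam = pixel_to_camera(320.0, 240.0, 0.85, fx=580.0, fy=580.0, cx=319.5, cy=239.5)

# The extrinsics [R | t] then move the point from the camera frame into the
# robot/world frame: p_world = R @ p_cam + t
```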
No, of course not. Please don't be lazy; start reading the basics of camera projection instead of asking others to explain common fundamentals that are written down everywhere on the web and in the literature.

How to find the sides of a polygon in an image?

I have a list of connected pixels (tuples of x and y coordinates) forming the outer boundary of a polygon in an image. How can I pick out its sides, supposing they are all straight? What should I do if the lines are not perfectly straight and contain deviations? Is there anything ready-made in the Python Imaging Library or Numpy for such a task? Thanks!
You could use OpenCV's approxPolyDP to reduce the number of vertices. From the documentation:
The function approxPolyDP approximates a curve or a polygon with another curve/polygon with fewer vertices, so that the distance between them is less than or equal to the specified precision. It uses the Douglas-Peucker algorithm:
http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
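A minimal sketch of that call, with an illustrative square outline standing in for the question's boundary pixels; the epsilon choice is a common rule of thumb, not a fixed rule:

```python
import numpy as np
import cv2

# Illustrative boundary: pixels along the outline of a 100x100 square.
pts = ([(i, 0) for i in range(100)] +          # bottom edge
       [(100, i) for i in range(100)] +        # right edge
       [(100 - i, 100) for i in range(100)] +  # top edge
       [(0, 100 - i) for i in range(100)])     # left edge
contour = np.array(pts, dtype=np.int32).reshape(-1, 1, 2)  # OpenCV layout

# epsilon: maximum allowed distance between the contour and its simplification.
epsilon = 0.01 * cv2.arcLength(contour, closed=True)
approx = cv2.approxPolyDP(contour, epsilon, closed=True)
print(approx.reshape(-1, 2))  # the polygon's corner vertices (4 for a square)
```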

Plotting a 3d triangular mesh from point cloud

I have this object / point cloud, rendered with PyOpenGL and Pygame.
My object is a numpy array of the coordinates of the points. I wish to generate a 3D triangular mesh of this object; it would also be nice to be able to decrease the number of triangles.
I have tried scipy.spatial.Delaunay, but in 3D it produces tetrahedra rather than a surface triangulation.
Dual Contouring would probably work well here; it's an algorithm that takes voxelized data and turns it into a mesh. I don't understand it well enough to outline it here, but basically you'd take your array of points and place them into a 3D grid, where a cell is set to 1 (full) if it contains a point and 0 (empty) otherwise; you would then run the DC algorithm on this grid and it would output a mesh (the voxelization step is sketched after the links below). The nice thing about this algorithm is that it supports internal cavities and concave shapes.
Here are some links I found that may help you if you decide to use DC:
Basic Dual Contouring Theory
http://ngildea.blogspot.com/2014/11/implementing-dual-contouring.html
This is the github repo to the source I used when I implemented this algorithm in Unity3D:
https://github.com/nickgildea/DualContouringSample
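A minimal sketch of just the voxelization step described above (the DC meshing itself is covered by the linked sources); the random point cloud and the grid resolution are illustrative:

```python
import numpy as np

# Illustrative point cloud: an (N, 3) array of coordinates, as in the question.
points = np.random.rand(1000, 3) * 10.0

# Scale the points into a fixed-resolution grid and mark occupied cells.
resolution = 32
mins, maxs = points.min(axis=0), points.max(axis=0)
idx = ((points - mins) / (maxs - mins) * (resolution - 1)).astype(int)
grid = np.zeros((resolution, resolution, resolution), dtype=np.uint8)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1   # 1 = full, 0 = empty

# 'grid' is the input a dual contouring implementation would turn into a mesh.
```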

How to perform image cross-correlation with subpixel accuracy with scipy

The image below shows two circles of the same radius, rendered with antialiasing, only that the left circle is shifted half a pixel horizontally (notice that the circle's horizontal center is in the middle of a pixel on the left, and at a pixel border on the right).
If I perform a cross-correlation, I can take the position of the maximum on the correlation array, and then calculate the shift. But since pixel positions are always integers, my question is:
"How can I obtain a sub-pixel (floating point) offset between two images using cross-correlation in Numpy/Scipy?"
In my scripts, I am using either scipy.signal.correlate2d or scipy.ndimage.filters.correlate, and they seem to produce identical results.
The circles here are just examples, but my domain-specific features tend to have sub-pixel shifts, and currently getting only integer shifts gives results that are not so good...
Any help will be much appreciated!
The discrete cross-correlation (as implemented by those functions) only has single-pixel precision. The only solution I can see is to interpolate your 2D arrays onto a finer grid (up-sampling).
Here's some discussion on DSP about upsampling before cross-correlation.
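A minimal sketch of that up-sampling idea, using an illustrative pair of Gaussian blobs offset by half a pixel; scipy.ndimage.zoom does the interpolation, and the correlation is done via FFT to keep it fast:

```python
import numpy as np
from scipy import ndimage, signal

# Illustrative test pair: the same blob, the second shifted +0.5 px along x.
y, x = np.mgrid[0:32, 0:32]
image_a = np.exp(-((x - 16.0) ** 2 + (y - 16.0) ** 2) / 10.0)
image_b = np.exp(-((x - 16.5) ** 2 + (y - 16.0) ** 2) / 10.0)

factor = 10  # up-sampling factor: shift precision becomes 1/10 pixel
a_up = ndimage.zoom(image_a, factor, order=3)
b_up = ndimage.zoom(image_b, factor, order=3)

# Flipping one image turns FFT convolution into cross-correlation.
corr = signal.fftconvolve(a_up, b_up[::-1, ::-1], mode='full')
peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
center = np.array(a_up.shape) - 1       # peak position for zero shift
shift = (peak - center) / factor        # in original-pixel units
print("estimated (row, col) shift:", shift)  # about (0, -0.5): the negative
                                             # of image_b's displacement
```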
I had a very similar issue, also with shifted circles, and stumbled upon a great Python package called image_registration by Adam Ginsburg. It gives you sub-pixel 2D image shifts and is fairly fast. I believe it's a Python implementation of a popular MATLAB module, which only upsamples the images around the peak of the cross-correlation.
Check it out: https://github.com/keflavich/image_registration
I've been using 'chi2_shifts.py' with good results.
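As a footnote, recent scikit-image ships what appears to be the same upsampled-DFT approach as skimage.registration.phase_cross_correlation; a minimal sketch, with illustrative test images:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Illustrative pair: a Gaussian blob and the same blob shifted +0.5 px along x.
y, x = np.mgrid[0:64, 0:64]
reference = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 20.0)
moving = np.exp(-((x - 32.5) ** 2 + (y - 32.0) ** 2) / 20.0)

# upsample_factor=100 resolves the shift to 1/100 of a pixel.
shift, error, diffphase = phase_cross_correlation(reference, moving,
                                                  upsample_factor=100)
print("estimated (row, col) shift:", shift)  # about (0, -0.5): the shift that
                                             # registers 'moving' to 'reference'
```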
