Python: Calculate the 2D area of an object from a 3D projection

Context
I have an object in 3D for which I have the coordinates. I then rotate the object n times, and I want to calculate the 2D area (in square nanometres) that the object covers when it is projected onto a grid.
For example,
I have an image below which describes my problem: the same object in three different orientations. If I were to project that object onto the grid, how would I calculate the area the object occupies in 2D?
My question is
How can I project the object onto a grid (if necessary) and measure the area of the object at that particular orientation? I'm not necessarily looking for code; pointers to suitable libraries would also help.

If I understand correctly, if your 3D object is convex, then you only need to project its vertices and take the convex hull of those. The area of the convex-hull polygon will be your area.
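As a rough sketch of that approach (my own example, assuming the vertices are held in an (N, 3) NumPy array and the grid is the z = 0 plane), you can rotate the vertex array, drop the z coordinate as an orthographic projection, and take the convex hull of the 2D points with SciPy. Note that for 2D input, ConvexHull stores the enclosed area in its volume attribute (area is the perimeter):

    import numpy as np
    from scipy.spatial import ConvexHull

    def projected_area(vertices, rotation):
        """Rotate the (N, 3) vertex array, project onto z = 0, return the 2D hull area."""
        rotated = vertices @ rotation.T      # apply the 3x3 rotation matrix
        projected = rotated[:, :2]           # orthographic projection: drop z
        hull = ConvexHull(projected)
        return hull.volume                   # for 2D input, .volume is the area

    # Example: a unit cube rotated 45 degrees about z still projects to area 1
    theta = np.radians(45)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    print(projected_area(cube, Rz))          # ~1.0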

Related

How to Transform Centroid from Pixel to Real World Coordinates

I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image, and from there calculate the center point of these objects in pixel space.
My next task is to transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0, 0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and an extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into three-space, but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to fix lens distortion (barrel and pincushion warping, etc.), whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard-corners method, but are they not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 matrix [x, y, 1], and if so, will the returned values be relative to the camera center or to the traditional (0, 0) point of an image?
Thanks in advance for any insight! Also, if it's any consolation, I am doing everything in Python and OpenCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
The only reason you might be able to transform your 2D centroid directly to 3D is that you are using a 3D camera. Read the camera's manual; it should explain how the relation between 2D and 3D coordinates is given for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
No, of course not. Please don't be lazy: start reading the basics of camera projection instead of asking others to explain fundamentals that are written down everywhere on the web and in the literature.
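For what it's worth, here is a minimal sketch of the pinhole relations the answer refers to, assuming the 3D camera gives you a depth value Z at the centroid pixel, an intrinsic matrix K, and an extrinsic pair (R, t) mapping world to camera coordinates. The function and variable names are illustrative, not the camera SDK's API:

    import numpy as np

    def pixel_to_world(u, v, Z, K, R, t):
        """Back-project pixel (u, v) with depth Z into the world frame."""
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        # Pinhole back-projection: pixel + depth -> 3D point in the camera frame
        X_cam = np.array([(u - cx) * Z / fx,
                          (v - cy) * Z / fy,
                          Z])
        # Invert the extrinsics: X_cam = R @ X_world + t  =>  X_world = R.T @ (X_cam - t)
        return R.T @ (X_cam - t)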

How to calculate an intersection of polyline and 3D surface?

I want to calculate the intersection points of a polyline and a closed 3D surface (given as a set of polygons).
For simple 2D cases I can use shapely. Can you advise me on something for the 3D case?
P.S. My programming language is Python, but C++ libraries could be useful too.
Consider every line segment and every polygon independently and build their axis-aligned bounding boxes. Then, for every pair of overlapping boxes, rotate the polygon to the horizontal plane, rotate the segment accordingly, and find the piercing point of the segment's supporting line with that plane.
Then check whether the piercing point belongs to the segment and to the inside of the polygon.
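As an illustration of the per-segment test (not shapely; a plain NumPy sketch assuming the surface has been triangulated, which any convex polygon can be by fanning), the Moller-Trumbore formulation below finds the same piercing point without explicitly rotating the polygon to the horizontal plane:

    import numpy as np

    def segment_triangle_intersection(p0, p1, a, b, c, eps=1e-12):
        """Return the point where segment p0->p1 pierces triangle (a, b, c), or None."""
        d = p1 - p0                       # segment direction (not normalised)
        e1, e2 = b - a, c - a             # triangle edges
        h = np.cross(d, e2)
        det = np.dot(e1, h)
        if abs(det) < eps:                # segment parallel to the triangle's plane
            return None
        f = 1.0 / det
        s = p0 - a
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = f * np.dot(d, q)
        if v < 0.0 or u + v > 1.0:        # outside the triangle
            return None
        t = f * np.dot(e2, q)             # parameter along the segment
        if 0.0 <= t <= 1.0:               # piercing point lies within the segment
            return p0 + t * d
        return None

    # Example: a vertical segment through a triangle lying in the z = 0 plane
    print(segment_triangle_intersection(
        np.array([0.5, 0.5, -1.0]), np.array([0.5, 0.5, 1.0]),
        np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])))
    # -> [0.5 0.5 0. ]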

Plotting a 3d triangular mesh from point cloud

I have this object/point cloud, rendered with pyopengl and pygame.
My object is a numpy array of the coordinates of the points. I wish to generate a 3D triangular mesh of this object; it would also be nice to be able to decrease the number of triangles.
I have tried scipy.spatial.Delaunay and it doesn't generate triangles for 3D objects.
Dual Contouring would probably work well here; it's an algorithm that takes voxelized data and turns it into a mesh. I don't understand it well enough to outline it here, but basically you'd take your array of points and place them into a 3D grid array, where a grid cell is set to 1 (full) if it contains a point and to 0 (empty) if it doesn't; you would then run the DC algorithm on this grid and it would output a mesh. The nice thing about this algorithm is that it supports internal cavities and concave shapes.
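A rough sketch of that voxelization step (my own illustration, assuming `points` is an (N, 3) numpy array; the Dual Contouring pass itself is not shown):

    import numpy as np

    def voxelize(points, cell_size):
        """Bin points into a binary occupancy grid: 1 = cell contains a point, 0 = empty."""
        mins = points.min(axis=0)
        idx = np.floor((points - mins) / cell_size).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
        return grid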
Here are some links I found that may help you if you decide to use DC:
Basic Dual Contouring Theory
http://ngildea.blogspot.com/2014/11/implementing-dual-contouring.html
This is the GitHub repo for the source I used when I implemented this algorithm in Unity3D:
https://github.com/nickgildea/DualContouringSample

OpenCV Pose Estimation and projectPoints

I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/t... regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world-coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes of the object in the image plane.
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I've done the rotation and translation on the axis points manually, and then used projectPoints to project them onto the image plane (passing the identity matrix and a zero matrix for rotation and translation, respectively), but the results seem way off. How can I eliminate the projection step and just get the world coordinates of the axes once they've been rotated and translated? Thanks!
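For reference, a minimal sketch of the manual transform described in the question (placeholder rvec/tvec values, my own illustration): convert rvec to a rotation matrix with cv2.Rodrigues and apply R @ p + t to each axis point, which yields the transformed coordinates without any projection step:

    import numpy as np
    import cv2

    rvec = np.float32([[0.1], [0.2], [0.3]])   # placeholder pose; use your own values
    tvec = np.float32([[0.5], [0.0], [1.0]])

    # Axis end-points in the model frame (origin plus unit x, y, z)
    axis = np.float32([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])

    R, _ = cv2.Rodrigues(rvec)                         # 3x1 rvec -> 3x3 rotation matrix
    transformed = (R @ axis.T).T + tvec.reshape(1, 3)  # rotate then translate each point
    print(transformed)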

Python: Mesh a voxels assembly to compute the surface area

I have a voxel assembly which represents a stone in 3D. It's a binarized numpy 3D array in which 1 is assigned to the voxels which make up my stone and 0 elsewhere.
I want to:
create its meshed surface
calculate its surface area.
But how?
I can see a few options depending on your application: a 3D convex hull or marching cubes.
3D convex hull
First, you will need to obtain an array of all the voxel 3D coordinates where your array is equal to 1.
Then you can use scipy.spatial.ConvexHull. SciPy's method is basically a wrapper for the Qhull library. When using Qhull, one of the output options (FA) computes the total area and volume.
You can find some more information here:
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
http://www.qhull.org/html/qconvex.htm
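A minimal sketch of this route, assuming `volume` is the binarized 3D numpy array from the question (the placeholder cube is just for illustration):

    import numpy as np
    from scipy.spatial import ConvexHull

    volume = np.zeros((20, 20, 20), dtype=np.uint8)
    volume[5:15, 5:15, 5:15] = 1                  # placeholder stone: a 10x10x10 block

    points = np.argwhere(volume == 1)             # (N, 3) coordinates of filled voxels
    hull = ConvexHull(points)
    print("surface area:", hull.area)             # Qhull's total facet area
    print("volume:", hull.volume)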
Marching Cubes
If you need to keep concavities, then the previous option will not be enough. Marching cubes will be a good alternative. There is already a Python implementation in scikit-image.
http://scikit-image.org/docs/dev/auto_examples/plot_marching_cubes.html
You will again need to obtain all the points as in the previous case. You calculate the verts/faces with:
http://scikit-image.org/docs/dev/api/skimage.measure.html#marching-cubes
And then you can calculate the area with:
http://scikit-image.org/docs/dev/api/skimage.measure.html#mesh-surface-area
I solved my problem; posting the solution here in case it is useful. The marching cubes algorithm is good, but it doesn't work well directly on binarized arrays. So:
1) Apply a Gaussian filter to the 3D array (scipy.ndimage)
2) Mesh it with the marching cubes algorithm (scikit-image)
3) Sum up the areas of the triangles (scikit-image)
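A sketch of those three steps, assuming `volume` is the binarized voxel array and a reasonably recent scikit-image (where the function is skimage.measure.marching_cubes); the sigma and iso-level values are illustrative choices:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import measure

    volume = np.zeros((30, 30, 30), dtype=float)
    volume[8:22, 8:22, 8:22] = 1.0                # placeholder stone

    # 1) Smooth the binary array so marching cubes sees a gradual transition
    smoothed = gaussian_filter(volume, sigma=1.0)

    # 2) Mesh the iso-surface (level 0.5 sits between "empty" and "full")
    verts, faces, normals, values = measure.marching_cubes(smoothed, level=0.5)

    # 3) Sum the triangle areas
    print("surface area (voxel units):", measure.mesh_surface_area(verts, faces))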
