I projected the 3D point cloud acquired through the depth camera onto a 2D plane.
As a result, I got a list of points projected onto the plane, as shown in the example image (fig. 1).
How can I compute the real area of the polygon formed by a point cloud projected onto a 2D plane?
I tried to calculate the area using shapely's Polygon.area, but Polygon expects the ordered boundary coordinates of the polygon as input rather than a scattered set of points, so it failed.
(https://shapely.readthedocs.io/en/stable/reference/shapely.Polygon.html#shapely.Polygon)
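A minimal sketch of one possible workaround, assuming the projected region is roughly convex (the points_2d array below is only a placeholder for the projected coordinates): take the convex hull of the scattered points with shapely, which yields an ordered boundary polygon whose area can then be read off. For a concave outline, an alpha shape (e.g. the alphashape package) would be needed instead.

import numpy as np
from shapely.geometry import MultiPoint

# points_2d: (N, 2) array of projected coordinates in metric units
# (placeholder data here; substitute the real projected point cloud)
points_2d = np.random.rand(500, 2)

# The convex hull of the scattered points is an ordered boundary polygon,
# which is the kind of input shapely's area computation expects
hull = MultiPoint(points_2d).convex_hull
print("approximate area:", hull.area)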
Related: this is similar to the question "Using just vertices to generate faces in PyMeshLab".
I am looking for a function in either PyVista or PyMeshLab which constructs a mesh surface given vertices. However, my vertices already have normals: they were obtained with a laser scanner, so I used the estimate-normals function in CloudCompare, knowing the sensor locations the points were obtained from. Therefore, all my point cloud normals are true surface normals, and I want to preserve this information.
All I really need is to march through the cloud and connect the vertices with faces of their nearest neighbors. That is, I would like my point cloud to be the mesh vertices. No Delaunay triangulations or anything like that. No ball pivoting algorithm or Poisson reconstruction that skips over some points. My points are already subsampled and sparse.
Are there any functions that can do this? Maybe it is even easy to do manually.
Thank you.
For example, in this code, nbr_sz dictates the size of the neighborhood of points used for estimating surface normals.
import pyvista as pv

# Load the Stanford bunny and keep only its vertices as a raw point cloud
bunny = pv.read("Stanford_Bunny.ply")
points = pv.wrap(bunny.points)

# Reconstruct a surface; nbr_sz controls the neighborhood size used
# when the filter estimates surface normals internally
surf = points.reconstruct_surface(nbr_sz=10)

# Show the raw points and the reconstructed surface side by side
pl = pv.Plotter(shape=(1, 2))
pl.add_mesh(points)
pl.add_title("Point Cloud of 3D Surface")
pl.subplot(0, 1)
pl.add_mesh(surf, color=True, show_edges=True)
pl.add_title("Reconstructed Surface")
pl.show()
See https://docs.pyvista.org/api/core/_autosummary/pyvista.PolyDataFilters.reconstruct_surface.html#pyvista.PolyDataFilters.reconstruct_surface
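As far as I understand, reconstruct_surface wraps VTK's implicit surface reconstruction, which re-estimates normals and re-samples the surface, so its output vertices are generally not the original points. If the CloudCompare normals should at least travel with the cloud while experimenting with other filters, they can be stored as point data; a minimal sketch, where the file names and array shapes are assumptions:

import numpy as np
import pyvista as pv

# Assumed inputs: (N, 3) point coordinates and (N, 3) unit normals
# exported from CloudCompare (file names are placeholders)
xyz = np.loadtxt("scan_points.txt")
normals = np.loadtxt("scan_normals.txt")

cloud = pv.PolyData(xyz)
# Attach the precomputed normals so downstream code can reuse them
cloud.point_data["Normals"] = normals

# Quick visual check: render the normals as small arrows
cloud.glyph(orient="Normals", scale=False, factor=0.01).plot()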
I have the coordinates of four 3D points bounding a 3D plane that represents a wall. I want to create a 3D object of that plane using Python. The red points shown in the figure are the boundary of the plane, while the green points are the boundaries of voids in the plane. Any help with simple code, or the name of a library or method?
PS: The figure shows a mesh, but I want a planar surface, something like that.
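One lightweight possibility, if PyVista is acceptable, is to build the wall as a single quad whose vertices are the four corner points; a minimal sketch with made-up coordinates (cutting out the green void regions is a separate step and is not shown here):

import numpy as np
import pyvista as pv

# Four corner points of the wall (placeholder coordinates)
corners = np.array([
    [0.0, 0.0, 0.0],
    [4.0, 0.0, 0.0],
    [4.0, 0.0, 3.0],
    [0.0, 0.0, 3.0],
])

# PyVista face format: [n_vertices_in_face, id0, id1, id2, id3]
faces = np.array([4, 0, 1, 2, 3])

wall = pv.PolyData(corners, faces=faces)
wall.plot(show_edges=True)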
Assume we have the camera pose P for a point cloud that has been derived from a spherical panorama and its depth map. This camera pose refers to an arbitrary position inside the point cloud. Is there a way to obtain, for this specific pose, the corresponding sphere parameters (r, phi, theta) needed to obtain the perspective view in the spherical panorama?
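I'm not certain of the conventions in this pipeline, but assuming the panorama sphere is centred at the position the panorama and depth map were captured from, and the pose P gives the new camera centre C in the same coordinate frame, then (r, phi, theta) for C are just the Cartesian-to-spherical conversion of the offset between C and the sphere centre; a minimal sketch (axis convention and angle ranges are assumptions):

import numpy as np

def cartesian_to_spherical(c, center):
    # Convention assumed here: phi is the azimuth in the x-y plane,
    # theta is the polar angle measured from the +z axis.
    d = np.asarray(c, dtype=float) - np.asarray(center, dtype=float)
    r = np.linalg.norm(d)
    phi = np.arctan2(d[1], d[0])
    theta = np.arccos(d[2] / r)
    return r, phi, theta

# Example: camera centre C taken from the pose, sphere centred at the origin
print(cartesian_to_spherical([1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))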
I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroids of these objects using contours from a depth image and, from there, calculate the center point of each object in pixel space.
My next task is to transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0,0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and an extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into 3D space, but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to fix lens distortion (barrel and pincushion warping, etc.), whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 matrix [x, y, 1], and if so, will the returned values be relative to the camera center or to the traditional (0,0) point of the image?
Thanks in advance for any insight! Also, if it's any consolation, I am doing everything in Python and OpenCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters; for the pinhole camera model, these are the focal length and the principal point.
The only reason you may be able to transform your 2D centroid directly to 3D is that you are using a 3D camera. Read the camera's manual; it should explain how the relation between 2D and 3D coordinates is defined for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
No, of course not. Please don't be lazy: start by reading the basics of camera projection instead of asking others to explain fundamentals that are written down everywhere on the web and in the literature.
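That said, since this particular camera delivers depth, the standard pinhole back-projection is at least a reasonable starting point: the depth at the centroid pixel plus the intrinsics gives a 3D point in the camera frame, and the extrinsic [R|t] then maps it into whatever frame the extrinsics were calibrated against. A minimal sketch with placeholder values (the exact meaning of the IFM's depth values and matrices should be checked in its manual):

import numpy as np

# Intrinsics (placeholder values): focal lengths fx, fy and principal
# point (cx, cy), all in pixels
fx, fy, cx, cy = 580.0, 580.0, 320.0, 240.0

# Extrinsics (placeholder values): rotation R and translation t taking
# camera-frame points into the calibrated world/robot frame
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])

def centroid_to_world(u, v, depth):
    # Pinhole back-projection of pixel (u, v) with metric depth
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    p_cam = np.array([x, y, depth])
    # Rigid transform into the world frame
    return R @ p_cam + t

print(centroid_to_world(400, 260, 1.2))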
I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/t... regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world-coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes of the object in the image plane.
I was wondering whether it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test this, I applied the rotation and translation to the axis points manually, and then used projectPoints to project them onto the image plane (passing an identity matrix and a zero matrix for the rotation and translation, respectively), but the results seem way off. How can I eliminate the projection step and just get the world coordinates of the axes once they've been rotated and translated? Thanks!
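If I've read the question correctly, the world coordinates of the transformed axis points can be obtained by applying the rigid transform directly, with no call to projectPoints at all; a minimal sketch (rvec, tvec, and the axis points are placeholders):

import numpy as np
import cv2

# Pose of the object in the world frame (placeholder values)
rvec = np.array([0.1, -0.2, 0.3], dtype=np.float64).reshape(3, 1)
tvec = np.array([0.5, 0.0, 2.0])

# Axis endpoints in the object's frame: origin, then X, Y, Z (length 0.1)
axis = np.float64([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])

# Convert the rotation vector to a 3x3 rotation matrix
R, _ = cv2.Rodrigues(rvec)

# Rotate then translate: these are the 3D world coordinates of the axes,
# with no projection involved
axis_world = (R @ axis.T).T + tvec
print(axis_world)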