3D quadrilateral mesh using Mayavi - python

I have an array of 3D coordinates of a mesh, where each group of four rows represents one mesh face.
Data:
X11,Y11,Z11
X12,Y12,Z12
X13,Y13,Z13
X14,Y14,Z14
...
XN1,YN1,ZN1
XN2,YN2,ZN2
XN3,YN3,ZN3
XN4,YN4,ZN4
where N is the total number of mesh faces
Does anyone know the best way to display this mesh using Mayavi/matplotlib?
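One approach (a minimal sketch, assuming the data is stored comma-separated as shown above and that each quad's four vertices are listed in order around the face) is to split every quad into two triangles and hand them to mlab.triangular_mesh:

import numpy as np
from mayavi import mlab

pts = np.loadtxt("mesh.txt", delimiter=",")   # hypothetical file with the data above, shape (4*N, 3)
n_faces = pts.shape[0] // 4

# Split each quad (v0, v1, v2, v3) into the triangles (v0, v1, v2) and (v0, v2, v3),
# since mlab.triangular_mesh only accepts triangle connectivity
quads = np.arange(4 * n_faces).reshape(n_faces, 4)
tris = np.vstack([quads[:, [0, 1, 2]], quads[:, [0, 2, 3]]])

mlab.triangular_mesh(pts[:, 0], pts[:, 1], pts[:, 2], tris, representation="surface")
mlab.show()

Mayavi can also render genuine quad cells by building a tvtk.PolyData object and passing it through mlab.pipeline.surface, but the triangle split above is usually the shortest route.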

Related

Transform 3D points to points in 2D image

Is there a way to transform 3D points into 2D points corresponding to positions in a 2D image?
Is there a function in Python/OpenCV with the following properties:
input: 3D points, camera position in 3D, direction of the camera, maybe additional features of the camera
output: corresponding 2D points on the 2D image of the camera?
Have you seen the cv2.projectPoints method?
Python: cv2.projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs[, imagePoints[, jacobian[, aspectRatio]]]) → imagePoints, jacobian
The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters.
More details are in the OpenCV documentation for projectPoints.
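A minimal usage sketch with made-up numbers (a pinhole camera with an 800 px focal length, principal point at the centre of a 640x480 image, no rotation, no distortion):

import numpy as np
import cv2

object_points = np.array([[0.0, 0.0, 5.0]])   # one 3D point, 5 units in front of the camera
rvec = np.zeros(3)                            # no rotation
tvec = np.zeros(3)                            # camera at the origin
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)                     # no lens distortion

image_points, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist_coeffs)
print(image_points)   # [[[320. 240.]]] -- the point lands at the principal point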

Camera calibration and point projection from 3D to 2D image

I am trying to project a 3D point from a point cloud onto a 2D panorama image. I have manually picked 8 points on the 2D image and the corresponding 3D point-cloud points, and used the OpenCV library to find the camera matrix, translation, and rotation vectors that are then used in the projection step.
# initial guess for the camera matrix from the 2D/3D correspondences
camera_matrix = cv2.initCameraMatrix2D([objectPoints_3D], [imagePoints_2D], image_size_2D)
# calibrate the camera (refines camera_matrix; intrinsic_matrix is the refined version)
intrinsics_error, intrinsic_matrix, intrinsics_distortion_coeffs, rotation_vectors, translation_vectors = \
    cv2.calibrateCamera([objectPoints_3D], [imagePoints_2D], image_size_2D,
                        camera_matrix, None, flags=cv2.CALIB_USE_INTRINSIC_GUESS)
# project a 3D point from the point cloud onto the 2D image
# (cv2.projectPoints returns a (points, jacobian) tuple)
projected_point_2D, _ = cv2.projectPoints(selected_point_3D, rotation_vectors[0],
                                          translation_vectors[0], intrinsic_matrix,
                                          intrinsics_distortion_coeffs)
The projected_point_2D seems reasonable for the same image I used when calibrating, but it fails for a different set of 2D and 3D images. Am I doing this wrong?
Images contain buildings and trees
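One possible explanation (an assumption, not something the question confirms): the rotation_vectors/translation_vectors returned by calibrateCamera describe the pose of the calibration image only, so reusing them for a different image cannot work. A sketch of re-estimating the pose for the new view with cv2.solvePnP, given hypothetical correspondences picked in that image:

import numpy as np
import cv2

# Hypothetical 3D/2D correspondences picked in the *new* image (at least 4 points)
object_points_new = np.asarray(objectPoints_3D_new, dtype=np.float64)   # shape (M, 3)
image_points_new = np.asarray(imagePoints_2D_new, dtype=np.float64)     # shape (M, 2)

# Re-estimate the pose of the new view, reusing the calibrated intrinsics
ok, rvec_new, tvec_new = cv2.solvePnP(object_points_new, image_points_new,
                                      intrinsic_matrix, intrinsics_distortion_coeffs)

# Project the selected 3D point with the pose of the new image
projected_new, _ = cv2.projectPoints(selected_point_3D, rvec_new, tvec_new,
                                     intrinsic_matrix, intrinsics_distortion_coeffs)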

How to reconstruct 3D triangular meshes from points with SDF?

If I have 64 points with SDF (signed distance function) values on an 8x8 grid, I can reconstruct the isosurface with Marching Cubes.
The question is:
If I only know the SDF at some of the points (marked by orange squares in the original figure),
can I reconstruct the isosurface?
If so, what should I do?
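For the fully sampled case the question mentions, here is a minimal sketch with scikit-image, using a synthetic SDF of a sphere on a small grid; for the partially known case, the missing samples would first have to be filled in, for example by interpolating the known values, before marching cubes can run:

import numpy as np
from skimage import measure

# Synthetic dense SDF: signed distance to a sphere of radius 0.5, sampled on a 3D grid
x, y, z = np.mgrid[-1:1:16j, -1:1:16j, -1:1:16j]
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the zero level set (the isosurface) as a triangle mesh
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)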

How to extract the surface mesh from binary mask of 3D images?

I have a binary mask of 3D medical images in nii format, reconstructed from 2D slices, and I need to extract a surface mesh from the mask. AFAIK, a surface mesh is constructed from the surface points of a 3D surface, but I do not have the point sets (vertices).
Do you know any function or software with which I can extract the mesh from the binary mask?
Thanks
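One common route (a sketch, assuming the mask is a standard .nii volume; the file name below is made up) is to run marching cubes directly on the binary volume with scikit-image, which yields the vertices and triangle faces of the surface mesh:

import nibabel as nib
import numpy as np
from skimage import measure

img = nib.load("mask.nii")                    # hypothetical file name
mask = img.get_fdata()                        # binary volume, 0 = background, 1 = object

# level=0.5 places the surface halfway between background and foreground voxels
verts, faces, normals, values = measure.marching_cubes(mask.astype(np.float32), level=0.5)

# verts are in voxel coordinates; scale by the voxel spacing from the NIfTI header
# (img.header.get_zooms()) if the mesh is needed in physical (mm) coordinates
print(verts.shape, faces.shape)

Tools such as 3D Slicer or ITK-SNAP can also export a surface mesh from a label mask without any code.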

Getting the projection matrix which can transform 3d points to 2d points in an image as done by cv2.projectPoints function in opencv

The OpenCV function cv2.projectPoints() takes 3D points, a rotation vector, a translation vector, the camera matrix, distortion coefficients, and an aspect ratio, and generates the 2D projections of the 3D points on the image plane. Is there some way to get the full projection matrix that does the same to any 3D point, so that I can do the projection without using the projectPoints function?
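A sketch of that idea, with made-up intrinsics and pose: the 3x4 matrix P = K [R | t] reproduces projectPoints only when the distortion coefficients are zero, because lens distortion is a nonlinear step that cannot be folded into a single matrix.

import numpy as np
import cv2

K = np.array([[800.0,   0.0, 320.0],          # hypothetical camera matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
rvec = np.array([0.0, 0.1, 0.0])              # hypothetical rotation vector
tvec = np.array([0.0, 0.0, 5.0])              # hypothetical translation vector

R, _ = cv2.Rodrigues(rvec)                    # rotation vector -> 3x3 rotation matrix
P = K @ np.hstack([R, tvec.reshape(3, 1)])    # full 3x4 projection matrix

# Project a 3D point: append 1, multiply, divide by the last component
X = np.array([0.2, -0.1, 3.0, 1.0])
u, v, w = P @ X
print(u / w, v / w)

# Cross-check against cv2.projectPoints with zero distortion
pts, _ = cv2.projectPoints(X[:3].reshape(1, 3), rvec, tvec, K, np.zeros(5))
print(pts)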
