I have a binary mask of a 3D medical image in NIfTI (.nii) format, reconstructed from 2D slices, and I need to extract a surface mesh from the mask. AFAIK, a surface mesh is constructed from points sampled on the object's surface, but I do not have that point set (the vertices).
Do you know of any function or software that can extract the mesh from the binary mask?
Thanks
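One common route (though not the only one) is scikit-image's marching cubes, which produces vertices and triangle faces directly from the voxel mask; nibabel is a typical choice for loading .nii files. A minimal sketch, where the package choices and the file name are assumptions (a synthetic sphere stands in for the real mask so the snippet is self-contained):

```python
import numpy as np
from skimage import measure

# import nibabel as nib                      # pip install nibabel
# mask = nib.load("mask.nii").get_fdata()    # binary 3D volume

# synthetic stand-in: a solid sphere as the binary mask
z, y, x = np.mgrid[-16:16, -16:16, -16:16]
mask = (x**2 + y**2 + z**2 < 10**2).astype(np.uint8)

# level=0.5 places the iso-surface halfway between the 0 and 1 voxels;
# pass spacing=img.header.get_zooms() for anisotropic medical volumes
verts, faces, normals, values = measure.marching_cubes(mask, level=0.5)
print(verts.shape, faces.shape)   # (n_vertices, 3), (n_triangles, 3)
```

The resulting verts/faces pair can then be saved or rendered with any mesh library.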
Is there a way to transform 3D points into the 2D points corresponding to their positions in a 2D image?
Is there a function in Python/OpenCV that has the following properties:
input: 3D points, the camera position in 3D, the direction of the camera, and possibly additional camera parameters
output: the corresponding 2D points on the camera's 2D image?
Have you seen the cv2.projectPoints method?
Python: cv2.projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs[, imagePoints[, jacobian[, aspectRatio]]]) → imagePoints, jacobian
The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters.
More details are in the OpenCV documentation.
I am trying to project a 3D point from a point cloud onto a 2D panorama image. I manually picked 8 points on the 2D image and the corresponding 3D point-cloud points, and used the OpenCV library to find the camera matrix, translation vector, and rotation vector that are then used in the projection step.
# initial guess for the camera matrix
camera_matrix = cv2.initCameraMatrix2D([objectPoints_3D], [imagePoints_2D],
                                       image_size_2D)

# calibrate the camera; the initial guess is refined in place, so
# camera_matrix and intrinsic_matrix end up the same
(intrinsics_error, intrinsic_matrix, intrinsics_distortion_coeffs,
 rotation_vectors, translation_vectors) = cv2.calibrateCamera(
    [objectPoints_3D], [imagePoints_2D], image_size_2D,
    camera_matrix, None, flags=cv2.CALIB_USE_INTRINSIC_GUESS)

# project a 3D point in the point cloud onto the 2D image
# (projectPoints returns a tuple: the image points and the Jacobian)
projected_point_2D, _ = cv2.projectPoints(
    selected_point_3D, rotation_vectors[0], translation_vectors[0],
    intrinsic_matrix, intrinsics_distortion_coeffs)
The projected_point_2D seems reasonable for the same image that I used when calibrating, but it fails on another pair of 2D image and 3D point cloud. Am I doing this wrong? The images contain buildings and trees.
The OpenCV function cv2.projectPoints() takes 3D points, a rotation vector, a translation vector, a camera matrix, distortion coefficients, and an aspect ratio, and generates the 2D projection of each 3D point onto the image plane. Is there some way to get the full projection matrix that does the same for any 3D point, so that I can do the projection without using the projectPoints function?
I have an array of the 3D coordinates of a mesh, where every four rows represent one (quadrilateral) mesh face.
Data:
X11,Y11,Z11
X12,Y12,Z12
X13,Y13,Z13
X14,Y14,Z14
...
XN1,YN1,ZN1
XN2,YN2,ZN2
XN3,YN3,ZN3
XN4,YN4,ZN4
where N is the total number of mesh faces
Does anyone know the best way to display this mesh using Mayavi/matplotlib?
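One way with matplotlib is mpl_toolkits' Poly3DCollection, which accepts an (N, 4, 3) array of quad faces, so the flat (4N, 3) array only needs a reshape. A sketch using a synthetic two-face stand-in for the real data (the Agg backend is just so it runs headless; drop it for interactive use):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # headless backend; remove for an interactive window
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# coords: (4*N, 3) array, every 4 consecutive rows form one quad face
# (two synthetic faces here as a stand-in for the real data)
coords = np.array([
    [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],   # bottom face
    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],   # top face
], dtype=float)

faces = coords.reshape(-1, 4, 3)    # (N, 4, 3): one sub-array per face

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.add_collection3d(Poly3DCollection(faces, facecolor="lightsteelblue",
                                     edgecolor="k"))
ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_zlim(0, 1)
fig.savefig("mesh.png")
```

Mayavi's mlab can render the same data, but it expects triangles, so each quad would first be split into two triangles.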
How can I rotate an image about a specific pixel in Python? I am trying to de-rotate a set of images of the night sky. Since the stars rotate around Polaris, I could define Polaris as the center of rotation and rotate each image to line up the stars.
In phadej's answer the transformation between the old and new coordinates of a point on the image is an affine transformation.
PIL (Python Imaging Library) has an image method called transform which can perform an affine transformation of an image.
The documentation for transform is near the bottom of this page.
With a little math: if a pixel's position is the vector a, and the position of Polaris is p, then the new position new_p is
new_p = p + R * (a - p)
where R is the rotation matrix.
There will be a problem, as new_p is generally not an integer-valued position vector. You
can do it backwards instead: for each pixel of the rotated image, apply the inverse of the above transform,
and you will get the corresponding pixel of the original image. As that position may not be an integer either, sample the neighboring pixels as in Wu anti-aliased pixels (the amount of the dot spread over each neighbor can be used as the sampling weight).
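Pillow's Image.transform with the AFFINE mode implements exactly this backwards mapping: its six coefficients map each output pixel to the input location to sample, and bilinear resampling plays the role of the neighbor-weighted sampling. A sketch with a synthetic image and an assumed angle and Polaris pixel:

```python
import numpy as np
from PIL import Image

# tiny synthetic "sky" image; (px, py) is the assumed Polaris pixel
img = Image.fromarray(np.arange(64 * 64, dtype=np.uint8).reshape(64, 64))
px, py = 20.0, 30.0
theta = np.radians(15.0)            # assumed de-rotation angle

# Image.transform(AFFINE) maps each OUTPUT pixel (x, y) back to the INPUT
# position (a*x + b*y + c, d*x + e*y + f) -- the "do it backwards" trick
c_, s_ = np.cos(theta), np.sin(theta)
a, b = c_, s_
d, e = -s_, c_
# offsets chosen so that the center of rotation (px, py) maps to itself
cx = px - a * px - b * py
cy = py - d * px - e * py

rotated = img.transform(img.size, Image.AFFINE, (a, b, cx, d, e, cy),
                        resample=Image.BILINEAR)
```

Bilinear resampling weights the four surrounding pixels by overlap, which is the same idea as the Wu-pixel sampling described above.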