Given a 2D image of a blade and its corresponding 3D scan data in STL/PLY/PCD format, is it possible to map the 2D image onto the 3D scan data using Python? Or is it possible to extract the color information from the 2D image and map that color texture onto the 3D scan data using Python libraries?
I am working on a project where I want to localize the defect position on the blade. I have already implemented an AI algorithm to locate the defect on the 2D image, but now I want to transfer this information to the 3D CAD data.
P.S. I am new to handling 3D data, so any suggestions would be of great help.
1. If your 3D data is a point cloud captured by a lidar device at a single timestamp, you can calculate the transformation from your camera to your lidar, project the lidar points onto the image plane, and read the color there (see the sketch after this list). You can also reference code from here.
2. If your 3D data was reconstructed with multi-view stereo software such as COLMAP, there is also a pose relationship between the images and the 3D model, so you can take the same approach as in case 1.
3. The worst case is that there is no known relative pose between the images and the 3D model. The solution is then image-to-geometry registration: image-to-geometry methods estimate the 3D pose of the images relative to the 3D model.
4. If you only want to know how to map a texture onto a 3D model, there is an answer for that.
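For case 1, the projection step looks roughly like the following sketch. The intrinsics K, the lidar-to-camera extrinsics R, t, and the file names are placeholders I made up; substitute your own calibration.

```python
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])          # camera intrinsics (placeholder values)
R = np.eye(3)                            # rotation lidar -> camera (placeholder)
t = np.zeros(3)                          # translation lidar -> camera (placeholder)

image = cv2.imread("blade.jpg")          # hypothetical image file
points = np.load("blade_points.npy")     # (N, 3) lidar/scan points, hypothetical file

# Transform points into the camera frame and keep only those in front of the camera.
pts_cam = points @ R.T + t
in_front = pts_cam[:, 2] > 0
pts_cam = pts_cam[in_front]

# Perspective projection: pixel = K @ (X/Z, Y/Z, 1).
uv = pts_cam @ K.T
uv = uv[:, :2] / uv[:, 2:3]
u, v = uv[:, 0].round().astype(int), uv[:, 1].round().astype(int)

# Keep projections that land inside the image and sample their color (BGR per point).
h, w = image.shape[:2]
valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
colors = np.zeros((len(points), 3), dtype=np.uint8)
colors[np.flatnonzero(in_front)[valid]] = image[v[valid], u[valid]]
```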
I have a large model of a house, with every internal object such as walls, tables, doors, and TVs inside. The file is a 3D object, either a .obj or a .fbx file. I also have a point cloud from a 180-degree lidar scanner that scanned from somewhere inside the house. I know where the scanner stood to within about 3 meters, and I want to find out which part of my 3D model my point cloud corresponds to. In other words, I want to find the translation and rotation required to move my point cloud to the correct position in the model.
I have tried turning the 3D model into a point cloud and then using ICP (iterative closest point), but since the points I generate do not necessarily correspond to those from the scanner, I get quite weird results from time to time. Roughly, what I tried looks like the sketch below.
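A minimal Open3D sketch of that attempt (the file names, sample count, initial guess, and ICP threshold below are placeholders, not my real values):

```python
import numpy as np
import open3d as o3d

# Sample the house mesh into a point cloud so ICP has a target to match against.
mesh = o3d.io.read_triangle_mesh("house.obj")          # hypothetical file
model_pcd = mesh.sample_points_uniformly(number_of_points=200000)
scan_pcd = o3d.io.read_point_cloud("scan.ply")         # hypothetical file

# Rough initial guess built from the approximate scanner position (known to ~3 m).
init = np.eye(4)
init[:3, 3] = [10.0, 2.0, 1.5]                         # placeholder position

# Point-to-plane ICP needs normals on both clouds.
for pcd in (model_pcd, scan_pcd):
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

# Register the scan against the sampled model; result.transformation maps scan -> model.
result = o3d.pipelines.registration.registration_icp(
    scan_pcd, model_pcd, max_correspondence_distance=0.5, init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(result.transformation)
```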
I have also looked at this question: Match 3D point cloud to CAD model, but in my case I only have a scan of a small portion of the full model.
Does anyone have any advice on how to do this with python?
My goal is to project feature vectors that I have at the image-pixel level onto voxels (via ray casting/marching). The ideal output would be to cast a ray and get the first intersected voxel as output. I have the camera intrinsics, extrinsics, and the voxels, so I should have everything needed to do it. Currently my voxels are in a sparse format, i.e. an array of coordinates and a feature vector associated with each coordinate.
Unfortunately I couldn't find a simple way to do it. Is there a performant Python library that works well for this use case?
I know of open3d, but it only seems to support this use case for meshes.
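For reference, this is roughly what I am after, written as a plain NumPy sketch (the voxel size, camera parameters, and file names are placeholders; fixed-step marching can also skip thin voxels, so it is only an illustration):

```python
import numpy as np

voxel_size = 0.05                                     # placeholder
coords = np.load("voxel_coords.npy")                  # (N, 3) integer voxel indices, hypothetical file
features = np.load("voxel_features.npy")              # (N, F) feature vector per voxel
occupied = {tuple(c) for c in coords.tolist()}        # sparse occupancy lookup
feat_of = {tuple(c): f for c, f in zip(coords.tolist(), features)}

def first_hit(origin, direction, max_dist=10.0, step=0.025):
    """March along the ray; return (voxel index, feature) of the first occupied voxel."""
    direction = direction / np.linalg.norm(direction)
    for d in np.arange(0.0, max_dist, step):
        idx = tuple(np.floor((origin + d * direction) / voxel_size).astype(int))
        if idx in occupied:
            return idx, feat_of[idx]
    return None, None

# Cast the ray through pixel (u, v): back-project with K, rotate into the world frame.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])  # placeholder intrinsics
cam_to_world = np.eye(4)                                                    # placeholder extrinsics
u, v = 320, 240
ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
ray_world = cam_to_world[:3, :3] @ ray_cam
hit_voxel, hit_feature = first_hit(cam_to_world[:3, 3], ray_world)
```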
I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image and from there calculate the center point of these objects in pixel space.
My next task is to transform the 2D centroid coordinates into a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0,0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and an extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into 3D space, but the following questions remain:
My current understanding from googling is the intrinsic matrix is used to fix lens distortion (barrel and pinhole warping, etc.) whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 matrix [x, y, 1], and if so, will the returned values be relative to the camera center or to the traditional (0,0) point of an image?
Thanks in advance for any insight! Also if it's any consolation I am doing everything in python and openCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
The only reason you may be able to directly transform your 2D centroid to 3D is that you are using a 3D camera. Read the camera's manual; it should explain how the relation between 2D and 3D coordinates is defined for your specific model.
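For a plain pinhole model with a depth value per pixel, the back-projection usually looks like the sketch below. Check your camera's manual for its exact convention; the focal lengths, principal point, and depth array here are made-up placeholders, not values from the IFM SDK.

```python
import numpy as np

fx, fy = 560.0, 560.0          # focal lengths in pixels (placeholders)
cx, cy = 176.0, 132.0          # principal point (placeholders)

def backproject(u, v, z):
    """Return the 3D point, in the camera frame, seen at pixel (u, v) with depth z."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

depth_image = np.full((264, 352), 1.2)   # dummy depth map; the 3D camera supplies the real one
u, v = 180, 140                          # centroid pixel from the contour step
point_cam = backproject(u, v, depth_image[v, u])

# To express the point in the robot frame you still need the extrinsics,
# e.g. a 4x4 camera-to-robot transform T: point_robot = (T @ np.append(point_cam, 1.0))[:3]
```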
If you have only image data, you can only compute a 3D point from at least two views.
No, of course not. Please don't be lazy: start by reading the basics of camera projection instead of asking others to explain fundamentals that are written down all over the web and in the literature.
I am currently working on a summer research project and we have generated 360 slices of a tumor. I now need to compile (if that's the right word) these images into one large 3D image. Is there a way to do this with either a Python module or an external tool? I would prefer to use free software if possible.
Perhaps via matplotlib, though it may require some preprocessing, I suppose:
https://www.youtube.com/watch?v=5E5mVVsrwZw
In your case, the z axis (3rd dimension) would be given by your stack of images. Before proceeding, though, I suppose you would need to extract the shape of the object you want to reconstruct. For instance, if I take any one of your many 2D images, I expect to find an RGB value for each pixel; but if you want to plot a skull as in the linked video, as I understand it you would need to extract the borders of your object from each frame (2D shape) and then plot the series. The processing may depend on how your information is encoded; perhaps it is sufficient to simply stack and plot the series of images, as in the sketch below.
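A minimal sketch of the "stack the slices" idea (the file pattern and intensity threshold are placeholder assumptions): build a 3D volume from the 2D slices, then extract a surface with marching cubes from scikit-image.

```python
import glob
import numpy as np
from PIL import Image
from skimage import measure

# Stack the 2D slices along a new z axis to form a (Z, H, W) volume.
files = sorted(glob.glob("slices/slice_*.png"))       # hypothetical file pattern
volume = np.stack([np.array(Image.open(f).convert("L")) for f in files], axis=0)

# Segment the object with a simple intensity threshold (placeholder value),
# then run marching cubes to get a triangle surface mesh of the tumor.
mask = (volume > 100).astype(np.float32)
verts, faces, normals, values = measure.marching_cubes(mask, level=0.5)
print(volume.shape, verts.shape, faces.shape)
```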
Some useful links I found:
https://www.researchgate.net/post/How_to_reconstruct_3D_images_from_two_or_four_2D_images
Python: 3D contour from a 2D image - pylab and contourf
I'm sorry if someone has already answered this question, but I have looked at every page on the Internet without finding the perfect answer to my problem. I need to reconstruct a 3D model from multiple 2D images. The problem is that I already have the images and I don't have any info about the camera; I just know that a single camera was rotated around the object. In order to reconstruct the 3D shape of the object I need to establish the camera matrix, but I have no idea how to do it. I'm using ORB feature detection, treating two images as a stereo pair to establish correspondences and find the fundamental matrix and the homography, but I can't proceed to find the camera parameters. I'm using Python and OpenCV. Thanks in advance.
It's been a while since you asked this question, but you can extract the focal length of an image from its Exif tags. This will only work if the images you have are in JPEG format. The optical center can be approximated as (width/2, height/2); here is good material on the topic: http://phototour.cs.washington.edu/focal.html
Reading Exif data can be done with any of a plethora of available packages/libraries; one such example in Python is ExifRead: https://pypi.org/project/ExifRead/
Note: Exif focal lengths are in mm, so you may have to convert them to pixels beforehand using the sensor width (also encoded in an Exif tag):
F(pixels) = F(mm) × ImageWidth(pixels) / SensorWidth(mm)
Once you have determined the focal length and cx, cy, you can fit these into the camera matrix K and proceed via SfM/MVS or stereo reconstruction, depending on the images you have at hand.
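A hedged sketch of that step using ExifRead (the file name and, importantly, the sensor width are assumptions you must supply yourself, since many JPEGs do not embed the sensor size):

```python
from fractions import Fraction
import exifread
from PIL import Image

path = "view_001.jpg"              # hypothetical image file
sensor_width_mm = 6.17             # assumption: look this up for your camera model

# Read the focal length (in mm) from the Exif tags.
with open(path, "rb") as f:
    tags = exifread.process_file(f)
focal_mm = float(Fraction(str(tags["EXIF FocalLength"].values[0])))

# Convert to pixels: F(pixels) = F(mm) * ImageWidth(pixels) / SensorWidth(mm).
width, height = Image.open(path).size
fx = fy = focal_mm * width / sensor_width_mm
cx, cy = width / 2.0, height / 2.0          # approximate the principal point

K = [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]]
print(K)
```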