Projection of point cloud on 2D image based on mesh information - python

I have a point cloud and a mesh (its vertices are the points of the point cloud).
I want to project the point cloud into an image with a certain virtual camera.
Since the point cloud is sparse, the rendered result includes points that should be occluded by foreground objects.
To resolve this, I want to use the mesh information to identify which points should be occluded.
Is there any smart way to do this in Python?
Kind advice will be greatly appreciated.

After hours of searching, I concluded that I would have to implement a custom rendering pipeline to achieve this directly.
So, instead of doing that, I use a mesh-based renderer to render a depth map, and then I simply project the points of the point cloud with a projection matrix.
I use the depth map to check whether each projected point is consistent with the rendered depth: if a point should be occluded, its depth is larger than the depth-map value at the corresponding pixel, so such points are ignored while rendering.
I know this is an inelegant and inefficient trick, but it works very well :)
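For illustration, here is a minimal sketch of that depth test, assuming you already have a depth map rendered from the mesh with the same virtual camera plus the camera intrinsics K and pose (R, t); the function name and tolerance are placeholders, not code from the original project:

```python
import numpy as np

def visible_points(points, K, R, t, depth_map, eps=1e-3):
    """Keep only the points that are not hidden by the mesh surface.

    points:    (N, 3) world-space points.
    K:         (3, 3) camera intrinsics.
    R, t:      camera rotation (3, 3) and translation (3,).
    depth_map: (H, W) depth image rendered from the mesh with the same camera.
    eps:       tolerance for the depth comparison.
    """
    # Transform into the camera frame; z is the depth along the optical axis.
    cam = points @ R.T + t
    z = cam[:, 2]

    # Perspective projection to pixel coordinates.
    uvw = cam @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)

    # Only points that land inside the image can be tested.
    h, w = depth_map.shape
    inside = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Occluded points lie behind the mesh: their depth exceeds the
    # depth-map value at the pixel they project to.
    visible = np.zeros(len(points), dtype=bool)
    idx = np.nonzero(inside)[0]
    visible[idx] = z[idx] <= depth_map[v[idx], u[idx]] + eps
    return visible
```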

Related

Finding coordinates of a triangle Mesh

I have a triangle mesh and I am looking for a way to programmatically get, for a given 2D point (x,z), all y coordinates represented by the mesh, i.e. (x,y1,z), (x,y2,z), ..., preferably in Python. I have the mesh stored in one of the common file formats (.stl, .obj, ...).
The problem behind this question is that I convert a 2D face image into a 3D mesh of the face (using the marvelous https://github.com/sicxu/Deep3DFaceRecon_pytorch project) and then want to map the depth information of the 3D model back to the 2D image (to build something fancy in Blender ...).
I finally found a solution which is both slow and inelegant, but it does the job for me for now.
I use the section_multiplane function of the trimesh Python library for that. Basically, I use it to intersect the mesh with 2D planes parallel to the y,z-plane and calculate the depth values by analyzing the resulting 2D paths. The library is very fast at calculating the intersections, but the part I wrote - extracting the depth information from the 2D paths - is painfully slow right now (which doesn't matter in my particular application).
The code I have now is deeply interwoven with my particular application, so it doesn't make sense to share it, but if someone is interested in this approach there is a very helpful example included in the trimesh library which covers the crucial points: section_multiplane example
I am sure there are much more elegant solutions available, but I wanted to share this approach in case somebody else struggles to find a better one ...
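For anyone curious, here is a stripped-down sketch of the crucial steps (not my actual code; the function name and the interpolation step are just one way to do it):

```python
import numpy as np
import trimesh

def ys_at(mesh, x, z):
    """Approximate y values of the mesh at the 2D location (x, z)."""
    # Slice the mesh with a single plane of constant x (normal along +x).
    sections = mesh.section_multiplane(plane_origin=[x, 0.0, 0.0],
                                       plane_normal=[1.0, 0.0, 0.0],
                                       heights=[0.0])
    section = sections[0]
    if section is None:                       # the plane misses the mesh
        return []

    ys = []
    # Lift the 2D section back to 3D and walk its polylines, interpolating
    # the y value wherever a segment crosses the requested z.
    for line in section.to_3D().discrete:     # each line is an (n, 3) array
        for a, b in zip(line[:-1], line[1:]):
            if (a[2] - z) * (b[2] - z) <= 0 and a[2] != b[2]:
                t = (z - a[2]) / (b[2] - a[2])
                ys.append(a[1] + t * (b[1] - a[1]))
    return sorted(set(np.round(ys, 6)))

# e.g. mesh = trimesh.load('face.obj'); print(ys_at(mesh, 0.0, 0.1))
```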
[Heatmap displaying the extracted depth information]

Converting depth map to pointcloud on Raspberry PI for realtime application

I am developing a robot based on StereoPI. I have successfully calibrated the cameras and obtained a fairly accurate depth map. However, I am unable to convert my depth map to a point cloud so that I can obtain the actual distance of an object. I have been trying to use cv2.reprojectImageTo3D, but without success. May I ask if there is a tutorial or guide which teaches how to convert a disparity map to a point cloud?
I have been trying very hard to learn and find reliable sources, but to no avail. Thank you very much in advance.
By calibrating your cameras you compute their interior orientation parameters (IOP, or intrinsic parameters). To compute XYZ coordinates from the disparity you also need the exterior orientation parameters (EOP).
If you want your point cloud relative to the robot position, the EOP can be simplified; otherwise, you need to take into account the robot's position and rotation, which can be retrieved with a GNSS receiver and an inertial measurement unit (IMU). Note that it is very likely that such data needs to be processed with a Kalman filter.
Then, assuming you have both (i) the IOP and EOP of your cameras and (ii) the disparity map, you can generate the point cloud by intersection. There are several ways to accomplish this; I suggest using the collinearity equations.
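If you stay in the simplified case (points relative to the camera/robot), OpenCV's rectification already encodes that intersection for you. A minimal sketch, assuming K1, D1, K2, D2, R, T come from your stereo calibration (the function and variable names are placeholders):

```python
import cv2
import numpy as np

def disparity_to_pointcloud(disparity, K1, D1, K2, D2, R, T, image_size):
    # stereoRectify returns the 4x4 reprojection matrix Q, which maps
    # (u, v, disparity, 1) to homogeneous 3D coordinates.
    _, _, _, _, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)

    # StereoBM/StereoSGBM return fixed-point disparities; convert to
    # float pixels before reprojecting (divide by 16 if needed).
    points = cv2.reprojectImageTo3D(disparity.astype(np.float32), Q)

    # Keep only pixels with a valid, finite disparity.
    mask = (disparity > 0) & np.isfinite(points).all(axis=2)
    return points[mask]          # (N, 3) points in the left-camera frame
```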

Python point cloud data to surface fit/function

I have unstructured (taken in no regular order) point cloud data (x,y,z) for a surface. This surface has bulges (+z) and depressions (-z) scattered around in an irregular fashion. I would like to generate some surface that is a function of the original data points and then be able to input a specific (x,y) and get the surface roughness value from it (z value). How would I go about doing this?
I've looked at scipy's interpolation functions, but I don't know whether creating a single function for the entire surface is the correct approach. Is there a technical name for what I am trying to do? I would appreciate any suggestions/direction.
"I don't know if creating a single function for the entire surface is the correct approach?"
I guess this depends on your data. Let's assume the base form of your surface is spherical. Then you can model it as such.
If your surface is more complex than a sphere, you might still be able to model the neighborhood of (x,y) as such. Maybe you could even treat the surface as planar in the near neighborhood of (x,y).
What you are trying to do can be called surface fitting, or two-dimensional curve fitting. You will find plenty of available algorithms by searching for those terms. The choice of a particular algorithm/method should be dictated:
by the origin of your data (there are specialized algorithms, or variations of more common ones, that are tailored for certain application areas);
by the future use of your data (depending on what you are going to do with it, you may need to be able to calculate derivatives easily, etc.).
It is not easy to represent complicated (especially noisy) data with a single function, so there is a lot of research on the topic. However, in many applications curve fitting is very successful and widely used.
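As one concrete starting point, here is a sketch with made-up data using SciPy's RBFInterpolator (a smoothing spline such as SmoothBivariateSpline would work similarly):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Made-up scattered (x, y, z) samples standing in for the measured surface.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1]) + 0.05 * rng.standard_normal(500)

# Thin-plate-spline radial basis fit; smoothing > 0 avoids chasing the noise.
surface = RBFInterpolator(xy, z, kernel='thin_plate_spline', smoothing=1e-3)

# Query the fitted surface at an arbitrary (x, y) to get the z value.
print(surface(np.array([[0.25, -0.40]]))[0])
```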

SLAM Vs Registration

I am working with 3D point clouds acquired from an object, and I need to align them into a single global point cloud. I am having a hard time understanding the difference between SLAM and registration, especially since both of them can use ICP.
The point clouds have been acquired in spatial and temporal order and have an extended overlapping area; therefore I thought I could use SLAM to align them.
Can anyone clarify this point for me?
Thanks!
anna
Based on your question it sounds like you are working with a depth sensor of some kind moving through an environment. You would like to create a consistent map or point cloud with this sensor. I would have commented this, but my reputation is too low at the moment.
Registration just refers to the alignment of two measurements through some transformation. For image registration this typically means finding some transformation, whether a simple translation or an affine warp, that makes two images 'look' similar. Point cloud registration typically refers to finding a rotation and translation which align two point clouds.
SLAM, as you probably know, refers to simultaneous localization and mapping. The goal of SLAM is to find the sensor's motion through a scene and to map the scene at the same time.
I think the reason you are having a hard time seeing the difference is that, for your application, registration is a way of accomplishing a simple form of SLAM: ICP essentially finds the relative transformation of your depth sensor between two measurements, which acts as odometry for your sensor.
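As a sketch of that idea, a pairwise ICP registration (shown here with Open3D, which is my own choice of library, and placeholder file names) gives exactly that relative pose:

```python
import numpy as np
import open3d as o3d

# Two consecutive scans (file names are placeholders).
source = o3d.io.read_point_cloud("scan_001.ply")
target = o3d.io.read_point_cloud("scan_000.ply")

# Point-to-point ICP; the correspondence distance must match your data scale.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# result.transformation is the 4x4 relative pose of source w.r.t. target;
# chaining these pairwise transforms is the "odometry" mentioned above.
source.transform(result.transformation)
```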
However, registration is not necessarily going to give you a relative sensor pose in all applications. For example, a KLT tracker is a form of simple image registration, but it does not give the relative transformation of two cameras directly.
I hope this clears it up.

Snap 3D cursor to an opaque part of plane (blender)

I have a question regarding Python scripting in Blender, and I'd really appreciate it if someone could give me at least some conceptual guidelines on how I could do this:
Basically I have around 100 planes (simple primitive planes), each of them has its own material, and each material has its own transparency map applied to it.
I need a way to snap each of those planes' pivots to their opaque parts. In other words, I want to tell Blender the following through Python: "go over every one of these planes and, for each one, snap the 3D cursor to an opaque part of the plane (it doesn't matter where exactly, as long as it's inside an opaque part of the plane) and then snap the plane's pivot point to the 3D cursor".
Of course I don't expect anyone to write a full algorithm for me; I am just asking for a little help and a push in the right direction, as I do have experience with Python, but not with Blender :/
Any help would be appreciated.
You can find documentation on Blender's Python API here.
Within Blender's image class you can access the pixel data at image.pixels as a flat array of floats, 4 floats per pixel (RGBA, I think). image.size[0] is the width in pixels and image.size[1] is the height.
bpy.data.objects['Plane'].bound_box is an [8][3] array of points defining the outer extremes of the plane, so you can locate the point on the plane that corresponds to a given pixel location and use it as the target point for the origin. You will also find bpy.data.objects['Plane'].matrix_world useful to translate object coordinates to global coordinates.
bpy.context.scene.cursor_location = Vector((x,y,z)) will move the cursor to where you want.
bpy.ops.object.origin_set(type='ORIGIN_CURSOR') will set the active object's origin to the cursor. Note that this works on the active object, so you will need to alter your selection as you go.
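Putting those pieces together, a rough sketch using the pre-2.8 API names above (in Blender 2.8+ it is scene.cursor.location, obj.select_set() and matrix_world @ vector instead; retrieving the right image for each plane's material is left out here):

```python
import bpy
from mathutils import Vector

ALPHA_THRESHOLD = 0.5   # what counts as "opaque"

def first_opaque_uv(image):
    """Return (u, v) in [0, 1] of the first pixel whose alpha exceeds the threshold."""
    w, h = image.size
    px = image.pixels[:]                       # flat RGBA floats, 4 per pixel
    for i in range(3, len(px), 4):             # every 4th value is an alpha
        if px[i] > ALPHA_THRESHOLD:
            p = i // 4
            return ((p % w) + 0.5) / w, ((p // w) + 0.5) / h
    return None

def snap_origin_to_opaque(obj, image):
    uv = first_opaque_uv(image)
    if uv is None:
        return
    # Assume the image U/V axes line up with the plane's local X/Y
    # (true for a default plane with an unrotated UV map).
    corners = [Vector(c) for c in obj.bound_box]
    xs = [c.x for c in corners]
    ys = [c.y for c in corners]
    local = Vector((min(xs) + uv[0] * (max(xs) - min(xs)),
                    min(ys) + uv[1] * (max(ys) - min(ys)),
                    0.0))
    # Move the cursor to the opaque spot (object -> world coordinates).
    bpy.context.scene.cursor_location = obj.matrix_world * local
    # origin_set works on the selected/active objects, so adjust the selection.
    for other in bpy.context.scene.objects:
        other.select = False
    obj.select = True
    bpy.context.scene.objects.active = obj
    bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
```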
