Firstly I'm sorry if this is a duplicate!
To explain the situation, I am developing an application that will display a 3D, real-time model of an object. On this model I have a series of pressure sensors which will relay their readings to my application. Each pressure reading will then be assigned a colour to produce a 3D pressure map on the surface of my model. I have 144 pressure sensors and around 21,000 vertices on my mesh. Each sensor will be assigned an RGB colour.
Please can someone help me understand how I can use barycentric interpolation to interpolate the known colours (144 of them) across the rest of my model?
This website nicely shows what I'm trying to achieve: https://codeplea.com/triangular-interpolation. However, I cannot find anything that helps me in three dimensions.
Help! :)
Firstly, thank you for the nice page about barycentric interpolation you provided in the question!
Secondly, you can triangulate your model with a sensor at every vertex and interpolate between sensor values inside every triangle, no matter whether it's 2D or 3D -- a triangle is a triangle. With barycentric interpolation you'll get nicely matching colours along every edge, and the whole model is going to look very cool.
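Since the same weights work for a 3D triangle as long as the point lies in (or is projected onto) its plane, here is a minimal NumPy sketch of the idea. The sensor positions, colours, and test vertex below are made-up placeholders:

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c).
    Works in 2D or 3D; p is assumed to lie in (or be projected onto)
    the triangle's plane."""
    v0, v1, v2 = b - a, c - a, p - a
    d00 = np.dot(v0, v0)
    d01 = np.dot(v0, v1)
    d11 = np.dot(v1, v1)
    d20 = np.dot(v2, v0)
    d21 = np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    return u, v, w

def interpolate_colour(p, tri_pts, tri_colours):
    """Blend the three sensor colours at point p using its barycentric weights."""
    u, v, w = barycentric_coords(p, *tri_pts)
    return u * tri_colours[0] + v * tri_colours[1] + w * tri_colours[2]

# Hypothetical example: three sensors (triangle corners) with RGB colours,
# and one mesh vertex lying inside that triangle.
sensors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
colours = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
vertex = np.array([0.25, 0.25, 0.0])
print(interpolate_colour(vertex, sensors, colours))
```

In practice you would loop over all ~21,000 mesh vertices, find the sensor triangle each one belongs to, and apply the same blend.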
I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image and from there calculate the center point of these objects in pixel space.
My next task is then to transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0,0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and an extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into 3D space, but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to fix lens distortion (barrel and pincushion warping, etc.), whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 vector [x, y, 1], and if so, will the returned values be relative to the camera center or to the traditional (0,0) point of an image?
Thanks in advance for any insight! Also if it's any consolation I am doing everything in python and openCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
The only reason you may be able to directly transform your 2D centroid to 3D is that you use a 3D camera. Read the camera's manual; it should explain how the relation between 2D and 3D coordinates is defined for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
No, of course not. Please don't be lazy and start reading the basics about camera projection instead of asking others to explain the common basics that are written down everywhere on the web and in the literature.
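For what it's worth, here is a minimal sketch of the usual pinhole relation, as an assumption only: if the 3D camera reports a depth value z for the centroid pixel and the intrinsics are focal lengths fx, fy and principal point cx, cy, the pixel can be back-projected into the camera frame and then mapped by an extrinsic [R|t]. All numbers below are placeholders, and whether [R|t] maps camera-to-robot or robot-to-camera must be checked against your camera's documentation:

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth z into the camera frame.
    Assumes the image has already been undistorted."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical intrinsics and extrinsics -- take the real values from your camera.
fx, fy, cx, cy = 560.0, 560.0, 320.0, 240.0
R = np.eye(3)                     # rotation, camera frame -> robot/world frame (assumed)
t = np.array([0.0, 0.0, 0.5])     # translation, camera frame -> robot/world frame (assumed)

p_cam = backproject(u=400.0, v=260.0, z=0.75, fx=fx, fy=fy, cx=cx, cy=cy)
p_world = R @ p_cam + t           # the same point expressed in the robot/world frame
print(p_cam, p_world)
```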
I'd like to create synthetic training data for DL models for segmentation and classification in point clouds. The ground truth / real data comprise LiDAR point clouds. I scripted a simple mesh sampling model in python/open3d and I'm able to quickly transfer 3D scenes to point clouds (see fig 1), but I need to include certain characteristics of LiDAR sensors.
Blensor (https://www.blensor.org/) works the way I need it (fig 2), but I don't want to use Blender at the moment. Also, the results don't have sufficient quality for my use case.
As a first step I'd just like to cut off the points that are not reachable from a given LiDAR sensor position, mainly to create the "shadows" which are important for making the training data more realistic. Do you have any suggestions for a simple and fast workaround? My point cloud is saved in a pandas dataframe including x, y, z and nx, ny, nz values.
Thx in advance,
reiti
If your 3D scene can be described in the form of distance functions (essentially consisting of a range of simple geometric shapes, as opposed to point cloud data), you may be good to go with an easily modified ray tracing algorithm that emulates a LiDAR sensor.
For each LiDAR "ray" (i.e. for every direction) you only need to save the xyz coordinates of the first scene collision. This also gives you full freedom to match the original real-world sensor's properties (like angles and number of points).
How easy the calculation of the distance between the scene and a sensor ray will be depends on the scene you have set up and how it is represented. Sorry for not being able to provide a ready-to-use implementation, but this might give you some direction.
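As a rough illustration of that idea, here is a small sphere-tracing sketch. The scene is a couple of hypothetical spheres given as signed distance functions, and each emulated LiDAR direction keeps only its first hit, so occluded regions naturally end up as "shadows". The sphere positions, field of view, and angular resolution are all assumptions:

```python
import numpy as np

# Hypothetical scene: two spheres, each described by a centre and a radius.
spheres = [(np.array([2.0, 0.0, 0.0]), 0.5),
           (np.array([3.0, 1.0, 0.2]), 0.7)]

def scene_sdf(p):
    """Signed distance from p to the closest object in the scene."""
    return min(np.linalg.norm(p - c) - r for c, r in spheres)

def cast_ray(origin, direction, max_dist=20.0, eps=1e-3):
    """Sphere tracing: march along the ray and return the first hit point, or None."""
    d = direction / np.linalg.norm(direction)
    travelled = 0.0
    p = origin.copy()
    while travelled < max_dist:
        dist = scene_sdf(p)
        if dist < eps:
            return p                       # first collision -> keep this xyz
        travelled += dist
        p = origin + travelled * d
    return None

# Emulated LiDAR: a grid of azimuth/elevation angles matching the sensor's FOV.
sensor_pos = np.array([0.0, 0.0, 0.0])
hits = []
for az in np.radians(np.arange(-30, 30, 0.5)):
    for el in np.radians(np.arange(-10, 10, 0.5)):
        direction = np.array([np.cos(el) * np.cos(az),
                              np.cos(el) * np.sin(az),
                              np.sin(el)])
        hit = cast_ray(sensor_pos, direction)
        if hit is not None:
            hits.append(hit)

cloud = np.array(hits)   # only the surfaces visible from the sensor position remain
print(cloud.shape)
```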
I have a photo taken from a camera (whose focal length, principal point, and distortion coefficients I know). The photo shows an 8cm x 8cm post-it on a table, and the center of the post-it is the origin (0, 0), again in cm. I've also indicated the positive-y axis on the post-it.
From this information is it possible to compute the location of the camera and the vector in which the camera is looking in Python using OpenCV? If someone has a snippet of code that does that (assuming you know the coordinates of the post-it corners already) that would be amazing!
Use OpenCV's solvePnP specifying SOLVEPNP_IPPE_SQUARE in the flags. With only 4 points (and a post-it) the solution will be quite sensitive to how accurately you mark their images, so ask yourself whether you really need the camera pose and location for your application, and how accurately. E.g., if you just want to make a flat CG "sticker" stay fixed on the table while the camera moves, all you need is to estimate a homography, a much simpler task.
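A minimal sketch of that suggestion, assuming you already have the four corner pixels. Note that SOLVEPNP_IPPE_SQUARE expects the object points in the order top-left, top-right, bottom-right, bottom-left, centred at the origin; the pixel coordinates, intrinsics, and distortion values below are placeholders:

```python
import numpy as np
import cv2

# 8 cm post-it, centred at the origin, corners in the order IPPE_SQUARE expects:
# top-left, top-right, bottom-right, bottom-left (all on the z = 0 plane).
s = 8.0  # cm
object_points = np.array([[-s / 2,  s / 2, 0],
                          [ s / 2,  s / 2, 0],
                          [ s / 2, -s / 2, 0],
                          [-s / 2, -s / 2, 0]], dtype=np.float64)

# Hypothetical pixel coordinates of the same corners -- replace with your own.
image_points = np.array([[310.0, 220.0],
                         [420.0, 225.0],
                         [415.0, 330.0],
                         [305.0, 325.0]], dtype=np.float64)

# Intrinsics and distortion coefficients you said you already know (placeholders here).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_IPPE_SQUARE)

# Camera position in post-it coordinates, and the viewing direction (camera +z axis).
R, _ = cv2.Rodrigues(rvec)
cam_pos = -R.T @ tvec
look_dir = R.T @ np.array([[0.0], [0.0], [1.0]])
print(cam_pos.ravel(), look_dir.ravel())
```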
It does look like you have all the information required. The marker you use can be easily segmented, and shape analysis will provide the corners. I did something similar for basic eye tracking:
Here is a complete example.
Segmentation result for the example:
Please note that accuracy really matters, so it might be useful to rely on several sets of points.
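A possible corner-extraction sketch along those lines (not the original example above): the colour threshold and file name are assumptions and would need tuning for your own photo.

```python
import cv2
import numpy as np

# Threshold the post-it by colour, take the largest contour, reduce it to 4 corners.
img = cv2.imread("photo.jpg")                            # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (20, 80, 80), (40, 255, 255))    # e.g. a yellow post-it

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
marker = max(contours, key=cv2.contourArea)

# Approximate the contour with a polygon; with a clean mask this yields 4 corners.
peri = cv2.arcLength(marker, True)
corners = cv2.approxPolyDP(marker, 0.02 * peri, True)
print(corners.reshape(-1, 2))    # order these consistently and feed them into solvePnP
```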
I have unstructured (taken in no regular order) point cloud data (x,y,z) for a surface. This surface has bulges (+z) and depressions (-z) scattered around in an irregular fashion. I would like to generate some surface that is a function of the original data points and then be able to input a specific (x,y) and get the surface roughness value from it (z value). How would I go about doing this?
I've looked at scipy's interpolation functions, but I don't know whether creating a single function for the entire surface is the correct approach. Is there a technical name for what I am trying to do? I would appreciate any suggestions/direction.
"I don't know if creating a single function for the entire surface is the correct approach?"
I guess this depends on your data. Let's assume the base form of your surface is spherical. Then you can model it as such.
If your surface is more complex than a sphere, you might still be able to model the neighborhood of (x,y) as such. Maybe you could even consider your surface to be planar in the near neighborhood of (x,y).
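As a sketch of that "locally planar" idea (the sample surface and query point below are made up): fit z = a*x + b*y + c to the nearest samples around (x, y) by least squares and evaluate the plane at the query point.

```python
import numpy as np

def local_plane_z(points, x, y, k=30):
    """Estimate z at (x, y) by fitting a plane z = a*x + b*y + c
    to the k points nearest to (x, y) in the x-y plane."""
    d2 = (points[:, 0] - x) ** 2 + (points[:, 1] - y) ** 2
    nearest = points[np.argsort(d2)[:k]]
    A = np.c_[nearest[:, 0], nearest[:, 1], np.ones(len(nearest))]
    coeffs, *_ = np.linalg.lstsq(A, nearest[:, 2], rcond=None)
    a, b, c = coeffs
    return a * x + b * y + c

# Hypothetical bumpy surface sampled at random (x, y) locations.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(5000, 2))
z = 0.1 * np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])
cloud = np.c_[xy, z]

print(local_plane_z(cloud, 0.2, -0.3))
```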
What you are trying to do can be called surface fitting, or two-dimensional curve fitting. You will be able to find lots of available algorithms by searching for those terms. Now, the choice of the particular algorithm/method should be dictated:
by the origin of your data (there are specialized algorithms or variations of more common ones that are tailored for certain application areas)
by the future use of your data (depending on what you are going to do with it, maybe you need to be able to calculate derivatives easily, etc)
It is not easy to represent complicated data (especially noisy data) using a single function, which is why there is a lot of research on the topic. However, in many applications curve fitting is very successful and very widely used.
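Since the question already mentions scipy, here is one common way to build such a surface with scipy's RBFInterpolator. This is only a sketch; the sample data, kernel, and smoothing value are assumptions to be tuned for real data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical scattered (x, y, z) samples of the surface.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(2000, 2))
z = 0.05 * np.sin(4 * xy[:, 0]) + 0.03 * np.cos(5 * xy[:, 1])

# A smoothing radial basis function gives a single callable surface z = f(x, y);
# the `smoothing` parameter controls how much noise is averaged out.
surface = RBFInterpolator(xy, z, smoothing=1e-3, kernel="thin_plate_spline")

# Query the height (roughness) value at specific (x, y) locations.
queries = np.array([[0.1, 0.2], [-0.4, 0.7]])
print(surface(queries))
```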
I'm just a newbie in Blender.
I'm going to create a jigsaw-puzzle sphere model, like the one on Wikipedia, or the plastic 3D puzzles you have probably seen.
For now, I have created a Python script which creates arbitrary flat 2D puzzles with Bezier curves, which can later easily be converted to a mesh.
But how do I wrap it around a sphere?
PS. I just had an idea:
unwrap a cube onto the puzzle plane and copy the edges as a negative, as shown below
(there is no copy of the edges in the picture yet).
Then, with affine transformations, transform each 2D cube face to its respective 3D place, and finally apply Object -> Transform -> To Sphere.
What do you think? Is there a better way to create a puzzled sphere?
Thanks for your attention!
EDIT: There is also a dodecahedron, which can likewise be assembled from pentagonal faces.
I have turned the code into a Blender add-on:
https://bitbucket.org/ios29A/blender_puzzle_generator, maybe it will help someone.
It is about 30 KB of Python code, and the cube faces are transformed by hand.
I just finished it with affine transformations of the cube faces.
See the heart made from a 7x7x7 cube, so here is the plane -> cube -> sphere -> lattice transform.
I think this method makes it possible to create any 3D shape from squares, not only spheres.
I was going to print it in plastic before metal.
Here it is at 3x3x3, made right before the 14th of February.
It's much simpler than you may imagine.
Download the addon from the given link above (https://bitbucket.org/ios29A/blender_puzzle_generator).
Install it (refer to the documentation).
In Blender, create a cuboid puzzle (Add -> Curve -> Puzzle; choose Cuboid in the shape option).
Convert the curve into a mesh (Object -> Convert To -> Mesh).
Select the cuboid and enter Edit Mode (Tab).
Select all (A).
Mesh -> Transform -> To Sphere.
Move your mouse.
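If you prefer scripting the convert/select/To Sphere steps above instead of clicking through the menus, a rough bpy sketch could look like this. It assumes the puzzle curve created by the add-on is the active object; operator names may differ slightly between Blender versions:

```python
import bpy

# Assumes the cuboid puzzle curve created by the add-on is the active object.

# Convert the curve into a mesh.
bpy.ops.object.convert(target='MESH')

# Enter Edit Mode, select everything, and apply To Sphere fully.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.transform.tosphere(value=1.0)   # value=1.0 corresponds to a fully spherical result
bpy.ops.object.mode_set(mode='OBJECT')
```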