Error with triangulation using cv.triangulatePoints() - python

I am trying to find the corresponding 3D point from two images using the OpenCV function triangulatePoints() in Python. It takes as input the two projection matrices of both cameras and two corresponding image point coordinates (I have all four of these inputs): cv.triangulatePoints(projMatr1, projMatr2, projPoints1, projPoints2).
However, I can't seem to figure out in which form the two image point coordinates should be. I've looked up the documentation, which says:
projPoints1: 2xN array of feature points in the first image. It can also be a cell array of feature points {[x,y], ...} or a two-channel array of size 1xNx2 or Nx1x2.
However I format these coordinates, I always get an error. Does anyone know how I should pass them in?
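In case it helps, one shape that the function accepts is the points as plain 2xN float NumPy arrays, with row 0 holding the x coordinates and row 1 the y coordinates of all N points. Here is a minimal sketch; the projection matrices and point values are placeholders, not taken from the question:

```python
import numpy as np
import cv2 as cv

# Placeholder 3x4 projection matrices -- replace with your real ones
P1 = np.eye(3, 4, dtype=np.float64)
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

# Image points as 2xN float arrays (here N = 2)
pts1 = np.array([[100.0, 150.0],    # x coordinates of the points in image 1
                 [230.0, 310.0]])   # y coordinates of the points in image 1
pts2 = np.array([[102.0, 149.0],    # the same points as seen in image 2
                 [228.0, 312.0]])

points4d = cv.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous coordinates
points3d = (points4d[:3] / points4d[3]).T             # divide by w to get Nx3 Euclidean points
print(points3d)
```

Passing integer arrays, or arrays of shape Nx2 instead of 2xN, is a common cause of errors here, so make sure the dtype is float and the shape matches one of the documented forms.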

Related

Python: Project 2D features onto voxels

My goal is to project feature vectors I have on an image pixel level onto voxels (via ray casting/marching). The ideal output would be to cast a ray and get the first intersected voxel as output. I have the camera intrinsics, extrinsics and the voxels. So I should have everything that is needed to do it. Currently my voxels are in a sparse format, i.e. an array of coordinates and feature vectors associated with each coordinate.
But unfortunately I couldn't find a simple way to do it. Is there a performant Python library that works well for this use case?
I know of Open3D, but it only seems to support this use case for meshes.
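I don't know of a single library call for this, but for comparison, here is a brute-force sketch of the ray-marching idea under assumed names (K for the intrinsics, R and t for a world-to-camera extrinsic, voxel_coords for the sparse Nx3 integer voxel coordinates, voxel_size for the voxel edge length). A real implementation would use a proper grid traversal such as Amanatides-Woo, but this shows the structure:

```python
import numpy as np

def first_hit_voxel(u, v, K, R, t, voxel_coords, voxel_size, max_dist=10.0, step=0.01):
    occupied = {tuple(c) for c in voxel_coords}          # sparse voxels as a hash set
    cam_center = -R.T @ t                                # camera centre in world coordinates
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-project the pixel to a camera ray
    ray_world = R.T @ ray_cam
    ray_world /= np.linalg.norm(ray_world)
    for d in np.arange(0.0, max_dist, step):             # march along the ray in small steps
        p = cam_center + d * ray_world
        idx = tuple(np.floor(p / voxel_size).astype(int))
        if idx in occupied:
            return idx                                    # first intersected voxel
    return None

# Made-up inputs purely to exercise the sketch
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
voxels = np.array([[0, 0, 30]])
print(first_hit_voxel(320, 240, K, R, t, voxels, voxel_size=0.1, max_dist=5.0))
```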

Python: Is there a module that would help me transpose a group of points from shape A to shape B?

During a process, my panel samples deform into certain linear and non-linear shapes due to heat.
Based on these deformation shapes, I want to estimate how each point in the original panel has moved after the thermal deformation, as shown in the image below.
I am collecting reference coordinates of a few points, as shown in the image below.
The number of red coordinates between the blue ones is much larger than 1 (~5000).
So here is what I need to do, and I have no idea which module I should start with.
1: Create a mesh of the coordinates I can measure, and create an approximate shape.
2: Map these coordinates onto the shape created in step 1, assuming the deformation is uniform between the measured coordinates.
Are there any modules that support these functions?
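One possible starting point, sketched below under assumed array names (ref_before/ref_after are the measured reference coordinates before and after deformation, panel_points are the original panel points to map): SciPy's LinearNDInterpolator can interpolate the displacement measured at the reference points across the rest of the panel, which roughly covers steps 1 and 2:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Measured reference points (made-up values for illustration)
ref_before = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # before heating
ref_after  = np.array([[0.2, 0.1], [10.3, 0.0], [0.1, 10.4], [10.2, 10.5]])  # after heating
displacement = ref_after - ref_before

# Interpolate the displacement field over the original panel points (~5000 in practice)
panel_points = np.array([[5.0, 5.0], [2.0, 8.0]])
interp = LinearNDInterpolator(ref_before, displacement)
deformed_points = panel_points + interp(panel_points)
print(deformed_points)
```

For strongly non-linear deformations, an RBF interpolator (scipy.interpolate.RBFInterpolator) may give a smoother field than piecewise-linear interpolation.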

How to Transform Centroid from Pixel to Real World Coordinates

I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image and from there calculate the center point of these objects in pixel space.
My next task is to then transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0,0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into three space but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to fix lens distortion (barrel and pinhole warping, etc.), whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 matrix [x, y, 1]? And if so, will the returned values be relative to the camera center or to the traditional (0,0) point of an image?
Thanks in advance for any insight! Also if it's any consolation I am doing everything in python and openCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
The only reason you might be able to directly transform your 2D centroid to 3D is that you are using a 3D camera. Read the camera's manual; it should explain how the relation between 2D and 3D coordinates is given for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
No, of course not. Please don't be lazy; start reading the basics about camera projection instead of asking others to explain the common basics that are written down everywhere on the web and in the literature.
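For reference, the usual pinhole back-projection looks like the sketch below. It assumes you can read a depth value Z for the centroid pixel from the 3D camera, and that the extrinsic [R | t] maps camera coordinates into the robot/world frame (check your camera's convention; it may be the inverse):

```python
import numpy as np

def pixel_to_world(u, v, Z, K, R, t):
    """Back-project pixel (u, v) with depth Z into the camera frame, then move it to the world frame."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * Z / fx, (v - cy) * Z / fy, Z])  # point in camera coordinates
    return R @ p_cam + t  # assumes [R | t] maps camera -> world; invert it if yours goes the other way

# Made-up intrinsics/extrinsics purely to make the sketch runnable
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])
print(pixel_to_world(400, 300, 0.85, K, R, t))
```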

How to compare the orientation of a 3D vector against a plane in three dimensions

I am currently trying to plot a plane in three-dimensional space but am not sure how to do it for the problem I have.
Currently I have code that defines a 3D vector from coordinates I have; this includes the ability to rotate, translate, and work out the angle between vectors.
The next step is to define a plane. I am not sure of the best way to do this, however. The plane will be in a 100x100x100 box, be flat, and likely sit at a z height of around 30.
My issue comes because I need this plane to do a couple of things:
1: I need to be able to rotate it around the three axes.
2: I need to be able to measure the smallest angle between the plane and the vector I have defined where the vector intersects the plane.
I was initially playing around with filling a numpy array with 1s where the plane would be, etc., but I don't see that really working the way I need it to.
Does anyone know of any other tool that I would be able to use in this situation? Many thanks.
First of all, you'll need the normal vector to the plane. From there and following this link it should be easy for you to figure it out :)
Basically, you take the arcsin of the dot product of your vector and the plane's normal vector, divided by the product of the norms of both vectors.
PS: If the plane is parallel to the XY plane, then its normal vector is just (0,0,1).
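As a small sketch of that formula (assuming the direction vector and the plane normal are plain 3-element sequences):

```python
import numpy as np

def angle_vector_plane(v, n):
    """Smallest angle, in degrees, between vector v and a plane with normal n."""
    v, n = np.asarray(v, dtype=float), np.asarray(n, dtype=float)
    sin_angle = abs(np.dot(v, n)) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arcsin(np.clip(sin_angle, -1.0, 1.0)))

print(angle_vector_plane([1, 0, 1], [0, 0, 1]))   # 45.0 -- against a plane parallel to XY
```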

Calculating and Plotting 2nd moment of image

I am trying to plot the 2nd moments onto an image file (the image file is a numpy array of the brightness distribution). I have a rough understanding that the 2nd moment is sort of like the moment of inertia (Ixx, Iyy), which is a tensor, but I am not too sure how to calculate it or how it would translate into two intersecting lines with the centroid at their intersection. I tried using scipy.stats.mstats.moment, but I am unsure what to put as axis if I just want the two 2nd moments that intersect at the centroid.
Also, it returns an array, but I am not exactly sure what the values in the array signify and how they relate to what I am going to plot (the scatter method in the plotting module takes at least two corresponding values in order to plot).
Thank you.
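For what it's worth, here is a rough sketch of one way to do this without scipy.stats: compute the centroid and the 2x2 matrix of second central moments of the brightness array directly, then plot its eigenvectors as the two intersecting lines through the centroid (img below is a placeholder array, not your data):

```python
import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(100, 120)                       # placeholder brightness distribution
y, x = np.indices(img.shape)
total = img.sum()
cx, cy = (x * img).sum() / total, (y * img).sum() / total   # intensity-weighted centroid

# Second central moments (an inertia-like 2x2 tensor)
mxx = ((x - cx) ** 2 * img).sum() / total
myy = ((y - cy) ** 2 * img).sum() / total
mxy = ((x - cx) * (y - cy) * img).sum() / total
cov = np.array([[mxx, mxy], [mxy, myy]])

# Eigenvectors of the moment tensor give the directions of the two principal axes
evals, evecs = np.linalg.eigh(cov)
plt.imshow(img, cmap='gray')
for val, vec in zip(evals, evecs.T):
    half = 2 * np.sqrt(val) * vec                    # half-length scaled by the moment
    plt.plot([cx - half[0], cx + half[0]], [cy - half[1], cy + half[1]])
plt.scatter([cx], [cy])
plt.show()
```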
