How to find the rotation matrix with known points - python

The corner points of a rectangle are given in 2D image coordinates. I also know the real-world distances between the points, and I have the camera matrix.
Now I want to find the rotation vector with respect to the camera, but without using the cv2.calibrateCamera() method with chessboard corners.

Related

How to find the center of a circle if the data points are in a curved coordinate system (Horizontal - AltAz)

I have 6 points with their coordinates in the Cartesian XY plane, placed along one side of a circle. Using the least-squares method it is relatively easy to fit a circle to those 6 points and to find the radius and the center of the fitted circle, again in XY coordinates.
However, I also have altitude-azimuth coordinates for those 6 points, because the points are on the sky, so I was wondering whether it is possible to fit a curve in those curved coordinates and then find the center of that circle.
Project your points onto the unit sphere and compute the best-fitting plane. The normal vector of the plane points towards the center of that circle. The radius of your circle will be sqrt(1-d²), where d is the distance between the plane and the origin, or acos(d) if you want the angle between the center and a point of the circle (since we're doing spherical geometry).
EDIT: use an orthogonal regression, because otherwise the z-axis could be favored over the others or vice versa.
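A minimal sketch of that recipe, assuming one common altitude/azimuth-to-Cartesian convention (conventions vary, so check yours):

```python
import numpy as np

def circle_on_sphere(alt_deg, az_deg):
    """Fit a small circle to sky points via orthogonal regression.

    alt_deg, az_deg: altitude/azimuth in degrees (assumed convention:
    altitude measured up from the horizon, azimuth in the horizontal plane).
    Returns the circle center as a unit vector, the angular radius acos(d)
    in radians, and the Euclidean radius sqrt(1 - d^2).
    """
    alt = np.radians(np.asarray(alt_deg, float))
    az = np.radians(np.asarray(az_deg, float))
    # Project the points onto the unit sphere.
    pts = np.column_stack([np.cos(alt) * np.cos(az),
                           np.cos(alt) * np.sin(az),
                           np.sin(alt)])
    centroid = pts.mean(axis=0)
    # Orthogonal regression: the plane normal is the right singular vector
    # with the smallest singular value of the centered point matrix.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    d = float(n @ centroid)   # distance from the origin to the plane
    if d < 0:                 # orient the normal toward the circle center
        n, d = -n, -d
    return n, np.arccos(d), np.sqrt(1.0 - d * d)
```

For example, six points all at 60 degrees altitude lie on a circle around the zenith, and the fit returns a normal of (0, 0, 1) with a 30-degree angular radius.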

Projecting a Texture Mask onto an existing 3D Mesh given the camera extrinsics

Given an image mask, I want to project the pixels onto a mesh with respect to the position and orientation of the camera, and convert these pixels into a point cloud. I have the intrinsic and extrinsic parameters of the camera with respect to the world, and the location of the mesh in world coordinates. I know the mapping from world coordinates to the camera image is as follows:
imgpoint = Intrinsic * Extrinsic * worldpoint
So when I want to do the opposite, I invert the matrices:
worldpoint = Extrinsic^(-1) * Intrinsic^(-1) * imgpoint
However, the idea I had was to obtain two points from one pixel, with different depth values, to get a line, and then look for the closest intersection of that line with the mesh, but I do not know how to properly generate a point away from the original camera plane. How can I find this extra point, and/or am I overcomplicating this problem?
The projection of a world point (x, y, z) onto a pixel (u, v) is
alpha * [u, v, 1]^T = K * (R * [x, y, z]^T + t)
The extrinsic parameters are the 3x3 rotation matrix R and the translation t. The intrinsic parameters are the focal lengths f_x, f_y and the principal point (c_x, c_y), which make up K. The value alpha is the perspective foreshortening term that is divided out.
Reversing the process projects a ray from the camera position through the pixel (u, v) out into the scene as the parameter alpha varies from 0 to infinity:
[x, y, z]^T = R^T * (alpha * K^(-1) * [u, v, 1]^T - t)
Now we have converted the problem into a ray-casting problem: find the intersection of the ray with your mesh, which is a standard computer-graphics problem.
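Generating that ray is straightforward once K, R and t are known; the intersection step itself can then be handed to a ray-casting library (trimesh, for example, provides one). A sketch:

```python
import numpy as np

def pixel_ray(u, v, K, R, t):
    """Back-project pixel (u, v) into a world-space ray.

    The forward model is alpha * [u, v, 1]^T = K (R X + t), so the world
    points imaging to (u, v) are X(alpha) = R^T (alpha * K^-1 [u, v, 1]^T - t).
    Returns the camera center (alpha = 0) and a unit direction vector.
    """
    K = np.asarray(K, float)
    R = np.asarray(R, float)
    t = np.asarray(t, float).reshape(3)
    origin = -R.T @ t  # camera center in world coordinates
    direction = R.T @ np.linalg.solve(K, np.array([u, v, 1.0]))
    return origin, direction / np.linalg.norm(direction)
```

Any point `origin + s * direction` with s > 0 projects back to the same pixel, which gives you the "extra point" the question asks about, or the ray directly.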

Transform 3D points to points in 2D image

Is there a way to transform 3D points into 2D points corresponding to the positions in a 2D image?
Is there a function in Python/OpenCV that has the following properties:
input: 3D points, camera position in 3D, direction of the camera, maybe additional features of the camera
output: corresponding 2D points on the 2D image of the camera?
Have you seen the cv2.projectPoints method?
Python: cv2.projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs[, imagePoints[, jacobian[, aspectRatio]]]) → imagePoints, jacobian
The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters.
More details are in the OpenCV documentation for projectPoints.

Using OpenCV to find radius of a dome from a picture

I have a camera positioned a known distance away from a dome, facing it at a known angle.
As seen in the picture, the dome has a strip on it, so I can use image processing to fit an ellipse to it and find the radius. If the camera were positioned at a 90-degree angle to the dome, the full radius could be seen (in this case 140 mm), while at 0 degrees no curvature can be seen.
My question is: how does this vary with angle? I believe the answer lies in OpenCV's rotation matrices, but how can I use them for this purpose? If I have the following image from a known angle, how would I rotate it about an axis to find the radius as if it were viewed from a 90-degree angle?

Getting the projection matrix which can transform 3D points to 2D points in an image, as done by the cv2.projectPoints function in OpenCV

The OpenCV function cv2.projectPoints() takes 3D points, a rotation vector, a translation vector, the camera matrix, distortion coefficients and an aspect ratio, and generates the 2D projections of the 3D points in the image plane. Is there some way to get the full projection matrix that does the same for any 3D point, so that I can do the projection without using the projectPoints function?
