Using OpenCV to find radius of a dome from a picture - python

I have a camera positioned a known distance away from a dome, facing it at a known angle.
As seen in the picture, the dome has a strip on it, so I can use image processing to fit an ellipse to it and find the radius. If the camera were positioned at a 90 degree angle to the dome, the full radius could be seen (in this case 140 mm), while at 0 degrees no curvature can be seen.
My question is: how does this vary with angle? I believe the answer lies in OpenCV's rotation matrices, but how can I use them for this purpose? If I have the following image from a known angle, how would I rotate it about an axis to find the radius as if it were viewed from a 90 degree angle?

Related

How to find the center of the circle if the data points are in a curved coordinate system (Horizontal - AltAz)

I have 6 points with their coordinates in the Cartesian XY plane, placed on one side of a circle. Using the least-squares method it is relatively easy to fit a circle to those 6 points and to find the radius and center of the fitted circle, again in XY coordinates.
However, I also have altitude/azimuth coordinates for those 6 points, because they are on the sky, so I was wondering whether it is possible to fit a curve in those curved coordinates and then find the center of that circle.
Project your points onto the unit sphere and compute the best-fitting plane. The normal vector of the plane points towards the center of the circle. The radius of the circle will be sqrt(1-d²), where d is the distance between the plane and the origin, or acos(d) if you want the angle between the center and a point on the circle (since we're doing spherical geometry).
EDIT: use an orthogonal regression, because otherwise the z-axis could be favored over the others, or vice versa.
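The steps above can be sketched as follows. The orthogonal regression is done with an SVD of the centered points (the singular vector with the smallest singular value is the plane normal, which treats all three axes symmetrically); the Alt/Az convention below is an assumption for illustration:

```python
import numpy as np

def circle_on_sphere(alt_deg, az_deg):
    """Fit a circle to points on the celestial sphere (Alt/Az in degrees).

    Projects the points onto the unit sphere, finds the best-fitting
    plane by orthogonal regression (SVD), and returns the direction of
    the circle center, the chord radius sqrt(1-d^2), and the angular
    radius acos(d).
    """
    alt = np.radians(np.asarray(alt_deg, dtype=float))
    az = np.radians(np.asarray(az_deg, dtype=float))
    # Unit vectors on the sphere
    pts = np.stack([np.cos(alt) * np.cos(az),
                    np.cos(alt) * np.sin(az),
                    np.sin(alt)], axis=1)
    centroid = pts.mean(axis=0)
    # Orthogonal regression: plane normal = right singular vector with
    # the smallest singular value of the centered point matrix
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = abs(centroid @ normal)                     # plane-to-origin distance
    center = np.sign(centroid @ normal) * normal   # unit vector to circle center
    return center, np.sqrt(1.0 - d**2), np.arccos(d)
```

For example, 6 points at altitude 60° spread around the full azimuth range lie on a circle centered on the zenith with angular radius 30°.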

How to find the rotation matrix with known points

The corner points of a rectangle are given in 2D coordinates. I also know the real distances between the points, and I have the camera matrix.
Now I want to find the rotation vector with respect to the camera, but without using the cv2.calibrateCamera() method with the chessboard corners.

Projecting a Texture Mask onto an existing 3D Mesh given the camera extrinsics

Given an image mask, I want to project the pixels onto a mesh with respect to the position and orientation of the camera, and convert these pixels into a point cloud. I have the intrinsic and extrinsic parameters of the camera with respect to the world, and the location of the mesh in world coordinates. I know the mapping from world coordinates to the camera image is as follows:
imgpoint = Intrinsic * Extrinsic * worldpoint
So when I want to do the opposite, I invert the intrinsic and extrinsic matrices (note that the order reverses):
worldpoint = Extrinsic^(-1) * Intrinsic^(-1) * imgpoint
However, my idea was to obtain two points from one pixel, with different depth values, to obtain a line, and then look for the closest intersection of that line with the mesh. But I do not know how to properly generate a point away from the original camera plane. How can I find this extra point, and/or am I overcomplicating this problem?
The forward model projects a world point (x, y, z) onto a pixel (u, v):
alpha * [u, v, 1]^T = K * (R * [x, y, z]^T + t)
The extrinsic parameters are the 3x3 rotation matrix R and the translation t. The intrinsic matrix K holds the focal distances f_x, f_y and the principal point (c_x, c_y). The value alpha is the perspective foreshortening term that is divided out.
Reversing the process projects a ray from the camera position through the pixel (u, v) out into the scene as alpha varies from 0 to infinity:
[x, y, z]^T = R^T * (alpha * K^(-1) * [u, v, 1]^T - t)
Now we have converted the problem into a ray-casting problem: find the intersection of the ray with your mesh, which is a standard computer graphics problem.
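The reversed equation can be sketched as a small helper that returns the camera center and the ray direction in world coordinates (a minimal sketch, assuming the convention x_cam = R * x_world + t; any two alpha values then give the "two points at different depths" the question asks for):

```python
import numpy as np

def pixel_ray(u, v, K, R, t):
    """Return (origin, unit direction) of the ray through pixel (u, v),
    expressed in world coordinates.

    K is the 3x3 intrinsic matrix; R (3x3) and t (3,) map world points
    into the camera frame: x_cam = R @ x_world + t.
    """
    origin = -R.T @ t                                  # camera center in world
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction, camera frame
    d_world = R.T @ d_cam                              # rotate into world frame
    return origin, d_world / np.linalg.norm(d_world)

# Any point on the ray: p = origin + alpha * direction, for alpha > 0;
# intersecting this ray with the mesh triangles gives the point cloud sample.
```

For the principal point itself the direction reduces to the camera's optical axis, which is a quick sanity check.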

Getting Pixel coordinates of a contour from the overall image

I would like to get the XY coordinates of the center of a contour in an image. How would I go about this? I have the contours defined and the objects detected; now how do I get the pixel XY of the center?
Get the normal vectors at the start and end of the contour and calculate their intersection.
^ This works for a simple 3-point contour, i.e. a circular arc. If you have more points, "center" is a somewhat ambiguous term, but if you average all the points contributing to the contour, that gives you a center of mass, which may be what you are after.
You could also get bounding boxes or circles, etc., as described here.
You could also use image moments to get the center of mass, as shown in the documentation.

How to get the camera rotation? (Aruco Library)

I've been trying to understand the output of the aruco_test.cpp program that is included when you download the Aruco Library.
The output has this format:
22=(236.87,86.4296) (422.581,78.3856) (418.21,228.032) (261.347,228.529) Txyz=0.00813142 -0.0148134 0.140595 Rxyz=-2.14032 0.0777095 0.138929
22 is the unique identifier of the marker, and the next four pairs of numbers are the four corners of the marker. My problem is the two vectors Tvec and Rvec.
I've read on the Internet that tvec is the translation vector from my camera's center to my object (the marker in this case), and that rvec is the rotation of the object with respect to my camera.
I've got a few questions regarding this:
How can I know the axes of my camera? I mean, is there a way to know where x, y and z are facing?
How can I get the rotation of the camera from the rotation of the object with respect to the camera?
Can someone explain the meaning of the vectors to me better so I can really understand them? I think my main problem is that I don't really know what those numbers actually mean.
EDIT: I've been doing some testing to check how the rotation works, and I don't really understand the results:
Moving the camera, marker fixed on the floor:
Initial position: camera looking at the marker, with the marker's 'z' axis pointing toward the camera, 'y' going upwards and 'x' to the right: Rxyz=2.40804 -0.0823451 0.23141
Moving the camera along the marker's 'x' axis (tilting the camera up): Rxyz=-1.97658 -0.0506794 -0.020052
Moving the camera along the marker's 'y' axis (tilting the camera to the right): Rxyz=2.74544 -0.118551 -0.973627
Turning the camera 90 degrees (to the right): Rxyz=1.80194 -1.86528 0.746029
Moving the marker instead of the camera, leaving the camera fixed and looking at the marker (same initial position as in the previous case):
Moving the marker along its 'x' axis: Rxyz=2.23619 -0.0361307 -0.0843008
Moving the marker along its 'y' axis: Rxyz=-2.9065 -0.0291299 -1.13356
Moving the marker along its 'z' axis (90° turn to the right): Rxyz=1.78398 1.74161 -0.690203
I've been assuming that each number of the vector is the rotation about the respective axis, but I think that assumption is wrong, as these values don't make much sense if that were the case.
How can I know the axes of my camera? I mean, is there a way to know where x, y and z are facing?
This is defined by the OpenCV convention: the x-axis increases from left to right of the image, the y-axis increases from top to bottom of the image, and the z-axis increases towards the front of the camera, out into the scene.
How can I get the rotation of the camera from the rotation of the object with respect to the camera?
rvec is the rotation of the marker with respect to the camera frame. You can convert rvec to a 3x3 rotation matrix using the built-in Rodrigues function. If the marker is aligned with the camera frame, this rotation matrix is the 3x3 identity matrix.
The inverse of this matrix (since it is a rotation matrix, the inverse is the transpose) is the rotation of the camera with respect to the marker.
Can someone explain the meaning of the vectors to me better so I can really understand them?
tvec is the translation from the origin of the camera frame to the center of the detected marker (the F_c - P line in the figure); rvec is as described in the answer above.
