Calculate camera world position with OpenCV Python

I want to calculate my camera's position in world coordinates. This should be fairly easy, but I don't get the results I expect. I believe I've read everything on this topic, but my code isn't working. Here's what I do:
I have a camera looking at an area.
1) I drew a map of the area.
2) I calculated the homography by matching 4 image points to 4 points on my map using cv2.getPerspectiveTransform
3) The homography H transforms every world (map) coordinate to image coordinates; this is working properly
4) To calculate the camera matrix I followed this:
translation = np.zeros((3,1))
translation[:,0] = homography[:,2]
rotation = np.zeros((3,3))
rotation[:,0] = homography[:,0]
rotation[:,1] = homography[:,1]
rotation[:,2] = np.cross(homography[0:3,0],homography[0:3,1])
cameraMatrix = np.zeros((3,4))
cameraMatrix[:,0:3] = rotation
cameraMatrix[:,3] = homography[:,2]
cameraMatrix = cameraMatrix/cameraMatrix[2][3] #normalize the matrix
5) According to this, the camera's position should be calculated like this:
x,y,z = np.dot(-np.transpose(rotation),translation)
The coordinates I'm getting are totally wrong. The problem should be somewhere in step 4 or 5 I guess. What's wrong with my method?

I think I've got it now. The problem was with the method described in step 4. The camera position cannot be calculated from the homography matrix alone. The camera intrinsics matrix is also necessary. So, the correct procedure is the following:
1) draw a map of the area
2) calibrate the camera from chessboard images: cv2.findChessboardCorners finds the corner points and cv2.calibrateCamera then yields the camera matrix and the distortion coefficients
3) solvePnP with the world coordinates (3D) and image coordinates (2D). solvePnP returns the object's origin in the camera's coordinate system, given the 4 corresponding points and the camera matrix.
4) Now I need to calculate the camera's position in world coordinates. The rotation matrix is: rotM = cv2.Rodrigues(rvec)[0]
5) The x,y,z position of the camera is: cameraPosition = -np.matrix(rotM).T * np.matrix(tvec)
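Putting steps 2) to 5) together, a minimal sketch of the working procedure (the camera matrix, distortion coefficients and the four point correspondences below are placeholder values, not the original data):

import numpy as np
import cv2

# Intrinsics from cv2.calibrateCamera (placeholder values)
camera_matrix = np.array([[1000.0,    0.0, 640.0],
                          [   0.0, 1000.0, 360.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)

# Four known world points on the map (Z = 0 plane) and their pixel locations (placeholders)
world_points = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0]], dtype=np.float32)
image_points = np.array([[100, 200], [900, 210], [880, 600], [120, 590]], dtype=np.float32)

# solvePnP returns the pose of the world frame expressed in the camera frame
ok, rvec, tvec = cv2.solvePnP(world_points, image_points, camera_matrix, dist_coeffs)

# Invert the pose: the camera position in world coordinates is -R^T * t
rotM = cv2.Rodrigues(rvec)[0]
cameraPosition = -rotM.T @ tvec
print(cameraPosition.ravel())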

Related

Projecting a Texture Mask onto an existing 3D Mesh given the camera extrinsics

Given an image mask, I want to project the pixels onto a mesh in respect to the position and orientation of the camera and convert these pixels into a pointcloud. I have the intrinsic and extrinsic parameters of the camera in respect to the world, and the location of the mesh in world coordinates. I know the mapping from world coordinates to camera image is as follow:
imgpoint = Intrinsic * Extrinsic * worldpoint
So when I want to do the opposite, I take the inverse of the intrinsic and extrinsic matrices:
worldpoint = Extrinsic^(-1) * Intrinsic^(-1) * imgpoint
However, the idea that I had was to obtain two points from one pixel, with different depth values, to obtain a line and then look for the closest intersection for the mesh I want with the line, but I do not know how to properly generate a point away from the original camera plane. How can I find this extra point and/or am I complicating this problem?
The first equation below shows how to project a point (x, y, z) onto a pixel (u, v). The extrinsic parameters are the 3x3 rotation matrix R and the translation t; the intrinsic parameters are the focal distances f_x, f_y and the principal point (c_x, c_y). The value alpha is the perspective foreshortening term that is divided out:
alpha * [u, v, 1]^T = K * (R * [x, y, z]^T + t),   with K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]
The second equation reverses the process: it describes the ray from the camera position through the pixel (u, v) out into the scene as the parameter alpha varies from 0 to infinity:
[x, y, z]^T = R^T * (alpha * K^(-1) * [u, v, 1]^T - t)
Now we have converted the problem into a ray-casting problem: find the intersection of the ray with your mesh, which is a standard computer-graphics problem.
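A minimal Python sketch of the back-projection step, assuming placeholder intrinsics K and world-to-camera extrinsics R, t (i.e. x_cam = R * x_world + t) and a placeholder pixel; the actual ray/mesh intersection is left to a ray-casting routine (e.g. trimesh's RayMeshIntersector):

import numpy as np

# Placeholder intrinsics and extrinsics (assumed world-to-camera convention)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

u, v = 320.0, 240.0                      # pixel to back-project

camera_center = -R.T @ t                 # alpha = 0 in the second equation above
pixel_h = np.array([u, v, 1.0])
point_on_ray = R.T @ (1.0 * np.linalg.inv(K) @ pixel_h - t)   # alpha = 1

ray_origin = camera_center
ray_direction = point_on_ray - camera_center
ray_direction /= np.linalg.norm(ray_direction)
# hand ray_origin / ray_direction to your mesh intersection routine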

Camera calibration, focal length value seems too large

I tried a camera calibration with python and opencv to find the camera matrix. I used the following code from this link
https://automaticaddison.com/how-to-perform-camera-calibration-using-opencv/
import cv2 # Import the OpenCV library to enable computer vision
import numpy as np # Import the NumPy scientific computing library
import glob # Used to get retrieve files that have a specified pattern
# Path to the image that you want to undistort
distorted_img_filename = r'C:\Users\uid20832\3.jpg'
# Chessboard dimensions
number_of_squares_X = 10 # Number of chessboard squares along the x-axis
number_of_squares_Y = 7 # Number of chessboard squares along the y-axis
nX = number_of_squares_X - 1 # Number of interior corners along x-axis
nY = number_of_squares_Y - 1 # Number of interior corners along y-axis
# Store vectors of 3D points for all chessboard images (world coordinate frame)
object_points = []
# Store vectors of 2D points for all chessboard images (camera coordinate frame)
image_points = []
# Set termination criteria. We stop either when an accuracy is reached or when
# we have finished a certain number of iterations.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Define real world coordinates for points in the 3D coordinate frame
# Object points are (0,0,0), (1,0,0), (2,0,0) ...., (5,8,0)
object_points_3D = np.zeros((nX * nY, 3), np.float32)
# These are the x and y coordinates
object_points_3D[:,:2] = np.mgrid[0:nY, 0:nX].T.reshape(-1, 2)
def main():
    # Get the file paths for the calibration images
    images = glob.glob(r'C:\Users\Kalibrierung\*.jpg')
    # Go through each chessboard image, one by one
    for image_file in images:
        # Load the image
        image = cv2.imread(image_file)
        # Convert the image to grayscale
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # Find the corners on the chessboard
        success, corners = cv2.findChessboardCorners(gray, (nY, nX), None)
        # If the corners are found by the algorithm, refine and draw them
        if success == True:
            # Append object points
            object_points.append(object_points_3D)
            # Find more exact corner pixels
            corners_2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
            # Append the refined image points
            image_points.append(corners_2)
            # Draw the corners
            cv2.drawChessboardCorners(image, (nY, nX), corners_2, success)
            # Display the image. Used for testing.
            #cv2.imshow("Image", image)
            # Display the window for a short period. Used for testing.
            #cv2.waitKey(200)
    # Now take a distorted image and undistort it
    distorted_image = cv2.imread(distorted_img_filename)
    # Perform camera calibration to return the camera matrix, distortion coefficients,
    # rotation and translation vectors etc.
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(object_points,
                                                       image_points,
                                                       gray.shape[::-1],
                                                       None,
                                                       None)
But I think I always get wrong parameters. From the calibration, my focal length is around 1750 in both the x and y directions. I don't think this can be right; it seems far too large. The camera documentation says the focal length is between 4-7 mm, so I'm not sure why it comes out so high from the calibration. Here are some of my photos for the calibration. Maybe something is wrong with them. I moved the chessboard under the camera in different directions, angles and heights.
I was also wondering why I don't need the size of the squares in the code. Can someone explain it to me, or did I forget this input somewhere?
Your misconception is about "focal length". It's an overloaded term.
"focal length" (unit mm) in the optical part: it describes the distance between the lens plane and image/sensor plane, assuming a focus to infinity
"focal length" (unit pixels) in the camera matrix: it describes a scale factor for mapping the real world to a picture of a certain resolution
1750 may very well be correct, if you have a high resolution picture (Full HD or something).
The calculation goes:
f [pixels] = (focal length [mm]) / (pixel pitch [µm / pixel])
(take care of the units and prefixes, 1 mm = 1000 µm)
Example: a Pixel 4a phone, which has 1.40 µm pixel pitch and 4.38 mm focal length, has f = ~3128.57 (= fx = fy).
Another example: A Pixel 4a has a diagonal Field of View of approximately 77.7 degrees, and a resolution of 4032 x 3024 pixels, so that's 5040 pixels diagonally. You can calculate:
f = (5040 / 2) / tan(~77.7° / 2)
f = ~3128.6 [pixels]
And that calculation you can apply to arbitrary cameras for which you know the field of view and picture size. Use horizontal FoV and horizontal resolution if the diagonal resolution is ambiguous. That can happen if the sensor isn't 16:9 but the video you take from it is cropped to 16:9... assuming the crop only crops vertically, and leaves the horizontal alone.
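Both ways of estimating f in pixels are easy to reproduce; a quick check with the Pixel 4a numbers used above:

import math

# from focal length and pixel pitch (mind the mm -> µm conversion)
f_from_pitch = 4.38 * 1000.0 / 1.40                                  # ≈ 3128.6 pixels

# from diagonal field of view and diagonal resolution
diag_pixels = math.hypot(4032, 3024)                                 # = 5040 pixels
f_from_fov = (diag_pixels / 2) / math.tan(math.radians(77.7 / 2))    # ≈ 3129 pixels

print(f_from_pitch, f_from_fov)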
Why don't you need the size of the chessboard squares in this code? Because it only calibrates the intrinsic parameters (camera matrix and distortion coefficients). Those don't depend on the distance to the board or any other object in the scene.
If you were to calibrate extrinsic parameters, i.e. the distance of cameras in a stereo setup, then you would need to give the size of the squares.
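For completeness, the square size would enter through the object points: scaling them to real units makes the recovered translations metric, while the intrinsics stay the same. A sketch, assuming a hypothetical 24 mm square size and the object_points_3D array from the code above:

square_size_mm = 24.0                                  # hypothetical square size
object_points_3D_metric = object_points_3D * square_size_mm
# Passing the scaled points to cv2.calibrateCamera / cv2.stereoCalibrate makes the
# translation vectors (and a stereo baseline) come out in millimetres; the camera
# matrix and distortion coefficients are unaffected.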

How to apply the essential matrix to transform a 3D point from camera 1's world coordinate to camera 2's world coordinates

Using OpenCV's cv2.stereoCalibrate I have calibrated a pair of cameras, one of them being a time-of-flight camera. So, I have the intrinsic calibration parameters and the essential / fundamental matrix.
Now I would like to project a point from the ToF camera to the 2D camera.
To convert image to world coordinates in the ToF camera, I did:
p = [(15, 15, 1)]
z = depth[p[0][0], p[0][1]] # measured ToF depth for this single point
# from ToF image coordinates (including ToF depth!) to ToF world coordinates
invR_x_invM_x_uv1 = R_inv * cameraMatrix_inv_3 * p[0]
invR_x_tvec = R_inv * T
wcPoint = (z + invR_x_tvec[2]) / invR_x_invM_x_uv1[2] * invR_x_invM_x_uv1 - invR_x_tvec
wcPoint = wcPoint[:, -1]
So I have the point in world coordinates.
What I don't get is (1) how to transform this point to the world coordinate system of the second camera, and then (2) how to project this point to image coordinates of the second camera. Can anybody point me to an OpenCV function, in particular for (1)?
First, you should brush up on linear algebra: you can't multiply matrices elementwise, you have to use the matrix product. In Python/NumPy the matrix product of two matrices A and B is written as A @ B or A.dot(B).
If you have a stereo calibration and two fixed cameras, you also should have the extrinsic parameters of these cameras. The translation and rotation between these cameras and the world coordinate system (there is only one), which could be located at the position of one of the mentioned cameras, enables you to transform your 3D data to obtain the coordinates in the chosen coordinate system. I seriously suggest that you read the basics about camera projection in Hartley/Zisserman, Multiple View Geometry, freely available online.
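As a sketch of the two steps (not the poster's code): R and T below stand for the rotation and translation returned by cv2.stereoCalibrate, which map points from the first (ToF) camera's coordinate system into the second camera's, and K2, dist2 stand for the second camera's intrinsics; all numbers are placeholders.

import numpy as np
import cv2

R = np.eye(3)                                   # placeholder stereo rotation
T = np.array([[0.1], [0.0], [0.0]])             # placeholder stereo translation
K2 = np.array([[800.0,   0.0, 320.0],
               [  0.0, 800.0, 240.0],
               [  0.0,   0.0,   1.0]])
dist2 = np.zeros(5)

# (1) transform a 3D point from the ToF camera's frame to the second camera's frame
point_tof = np.array([[0.05], [0.02], [1.50]])
point_cam2 = R @ point_tof + T

# (2) project it into the second camera's image (cv2.projectPoints does both steps)
img_pts, _ = cv2.projectPoints(point_tof.reshape(1, 1, 3), cv2.Rodrigues(R)[0], T, K2, dist2)
print(point_cam2.ravel(), img_pts.ravel())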

Homogeneous transform of point clouds in camera coordinates to world coordinates using OpenCV

I have camera calibration intrinsics and extrinsics (including rotations and translations, i.e. rvecs and tvecs) for a set of N camera poses, relative to a fixed ChArUco target.
Additionally, for every camera pose, I have a set of 3D coordinates ("point cloud") defined in the standard OpenCV camera coordinate system.
After a lot of reading, I would have thought that I first need to compute each camera pose relative to the ChArUco board, by constructing a homogeneous transformation matrix like so in Python:
# initialize 4x4 transform
inverted_homogeneous_transform_matrix = np.zeros(shape=(4,4))
inverted_homogeneous_transform_matrix[3,3] = 1.0
# convert Rodrigues vector into a rotation matrix, then invert it (transpose)
rotation_matrix, _ = cv2.Rodrigues(rvecs)
inverted_rotation = rotation_matrix.transpose()
# add inverted rotation to transform
inverted_homogeneous_transform_matrix[:3,:3] = inverted_rotation
# compute inverted translation, e.g. see http://ksimek.github.io/2012/08/22/extrinsic/
inverted_translation_vector = -inverted_rotation @ np.asarray(tvecs).reshape(3, 1)
inverted_homogeneous_transform_matrix[:3,3] = inverted_translation_vector.flatten()
# x_coords, y_coords, z_coords are defined in camera coordinate system
x_coords=np.asarray([1,2,3,4,5])
y_coords=np.asarray([2,4,6,8,10])
z_coords=np.asarray([3,6,9,12,15])
homogeneous_ones = np.ones(len(x_coords))
homogeneous_points = np.vstack([x_coords, y_coords, z_coords, homogeneous_ones])
# perform the transformation
transformed_points = inverted_homogeneous_transform_matrix @ homogeneous_points
# clean up to extract x, y, z values from the result and save as 1D arrays
x_coords = np.asarray(transformed_points[0,:]).flatten()
y_coords = np.asarray(transformed_points[1,:]).flatten()
z_coords = np.asarray(transformed_points[2,:]).flatten()
Basically, the above code works, but when I run it on multiple point clouds from different camera perspectives, they do not magically line up as I would expect. I can confirm that my inverse homogeneous transform is indeed an inverse to the homogeneous transform constructed directly from rvecs and tvecs; have tried breaking it apart into translation first then rotation, and vice versa; and have seen a non-inverse transformation make everything almost line up from one perspective (but all rotated weirdly from every other perspective)...
Any help appreciated!
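One thing worth double-checking here (a sketch under the usual OpenCV convention, with placeholder pose values): rvec/tvec map board coordinates into camera coordinates, x_cam = R * x_board + t, so camera-frame points go into the board frame with the inverse, x_board = R^T * (x_cam - t). A round trip should reproduce the original points:

import numpy as np
import cv2

rvec = np.array([0.1, -0.2, 0.3])               # placeholder board pose
tvec = np.array([[0.05], [0.10], [1.00]])

R, _ = cv2.Rodrigues(rvec)
points_cam = np.array([[1.0, 2.0, 3.0],
                       [2.0, 4.0, 6.0],
                       [3.0, 6.0, 9.0]])        # 3xN: rows are x, y, z in the camera frame

points_board = R.T @ (points_cam - tvec)        # camera frame -> board frame
points_back = R @ points_board + tvec           # board frame -> camera frame
assert np.allclose(points_back, points_cam)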

Aruco Marker World Coordinates

I've been working with Python's OpenCV library, using ArUco for object tracking.
The goal is to get the x/y/z coordinates at the center of the ArUco marker, and the angle in relation to the calibrated camera.
I am able to display axes on the aruco marker with the code I have so far, but cannot find how to get x/y/z coordinates from the rotation and translation vectors (if that's even the right way to go about it).
This is the line of code which defines the rotation/translation vectors:
rvec, tvec, _ = aruco.estimatePoseSingleMarkers(corners, markerLength, camera_matrix, dist_coeffs) # For a single marker
Any ideas on how to get angle/marker position in the camera world?
Thanks!
After some tribulation, I found that the x and y coordinates in aruco can be determined by the average of the corners:
x = (corners[i-1][0][0][0] + corners[i-1][0][1][0] + corners[i-1][0][2][0] + corners[i-1][0][3][0]) / 4
y = (corners[i-1][0][0][1] + corners[i-1][0][1][1] + corners[i-1][0][2][1] + corners[i-1][0][3][1]) / 4
And the orientation relative to the camera can be determined via cv2.Rodrigues on the rotation vector; the output matrix must be allocated beforehand:
rotM = np.zeros(shape=(3,3))
cv2.Rodrigues(rvec[i-1], rotM, jacobian = 0)
Finally, yaw pitch and roll can be obtained by taking the RQ Decompose of the rotation matrix
ypr = cv2.RQDecomp3x3(rotM)
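Put together, a compact version of those steps (assuming corners and rvec come from cv2.aruco.detectMarkers / estimatePoseSingleMarkers as above, and i is the marker index used in this answer):

import numpy as np
import cv2

# marker centre in pixels: mean of the four corners; corners[i-1] has shape (1, 4, 2)
x, y = corners[i-1][0].mean(axis=0)

# rotation vector -> rotation matrix
rotM, _ = cv2.Rodrigues(rvec[i-1])

# RQDecomp3x3 returns three Euler angles (in degrees) as its first result,
# which is what this answer refers to as yaw / pitch / roll
angles, mtxR, mtxQ, Qx, Qy, Qz = cv2.RQDecomp3x3(rotM)
yaw, pitch, roll = angles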
As said by chungzuwalla, tvec represents the position of the marker center in the camera coordinate system, and it doesn't change with the rotation of the marker at a given position. If you want to know the location of the corners in the camera coordinate system, you need both rvec and tvec.
Here is a perfect explanation
Aruco markers with openCv, get the 3d corner coordinates?
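A short sketch of that last point, with placeholder pose values and a hypothetical marker side length: the corner locations in the camera coordinate system need both rvec and tvec.

import numpy as np
import cv2

marker_length = 0.05                             # hypothetical side length in metres
rvec = np.array([0.1, -0.2, 0.3])                # placeholder pose from estimatePoseSingleMarkers
tvec = np.array([[0.02], [0.01], [0.60]])

# corner coordinates in the marker's own frame (marker centre at the origin, z = 0)
h = marker_length / 2.0
corners_marker = np.array([[-h,  h, 0.0],
                           [ h,  h, 0.0],
                           [ h, -h, 0.0],
                           [-h, -h, 0.0]]).T     # 3x4

R, _ = cv2.Rodrigues(rvec)
corners_camera = R @ corners_marker + tvec       # both rvec and tvec are required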
