How to embark on calibrating two cameras in OpenCV? - python

I realise there is a lot of information about calibrating cameras in a single set-up, and even in a stereo set-up, but all of these instances differed somewhat from mine. My set-up is as follows:
Within the calibration zone there will be a big chessboard with which I will calibrate the cameras using OpenCV's findChessboardCorners. The problem I have at the moment is that I have only been using the calibration to undistort the pictures, which is mostly what I have found online. However, I also need to find the relative positions of the cameras (x, y, z) and, if possible, their angles of inclination. I thought this should surely be possible with some sort of calibration that lines the points up, since both cameras will be looking at the same calibration target. Essentially my question boils down to:
Is there an OpenCV calibration which will give me the relative distances and angles (x, y, z, θ)?
Is it as trivial as calibrating the two cameras individually?
If not with OpenCV, how would this be accomplished?

Related

Calculate Depth Map in OpenCV Python

I want to calculate the depth map of a stereo system, not the disparity alone.
Input:
I am a little bit new to this part of OpenCV. I have two cameras, the intrinsic and extrinsic parameters for both, and the rotation and translation between them.
Output:
I want to use the existing rotation, translation, and camera parameters to calculate the depth map; I do not want to re-calculate those parameters. However, I do not know the proper Python OpenCV functions.
My two cameras are at an angle with respect to each other.

Calculate the motion of a simple object in 3D

I'm a beginner in Python and Raspberry Pi. For a school project we're using a Raspberry Pi 3 and a (fixed) camera module. The idea is that when you move something in the camera frame, the program outputs where this object is relative to the camera (in 3D) and the distance the object has travelled, something like (x=2.98m, y=5.56m, z=3.87m, distance=0.677m), all of this using optical flow and Python. Is it possible to do this, and if not, is there something close or similar?
Any help appreciated.
The first thing you can do is camera calibration. If you have the intrinsics of the camera, you can infer the direction of the 3D vector from your camera to your object.
The problem is how to find the distance, or the length of this vector.
There are two solutions I can think of:
Use two cameras - if you have two calibrated cameras (intrinsics and extrinsics), you can triangulate the two detections of the object in the images and get its position in 3D.
If you know the size of the object, you can infer its distance from the camera using the intrinsics alone.
hope I helped.

How to overlay two live images of the same scene having multiple calibrated cameras in python

I have multiple cameras that are closely located to each other, looking at the same scene.
I can calibrate all of them at once (currently using the OpenCV algorithm).
What I now want to do is overlay, for example, the following:
Let one camera be a LIDAR depth camera, the second a grayscale camera and the third an infrared camera. I want to overlay the "grayscale scene" as an image with the depth and infrared information on the correct pixels (similar to the depth-grayscale overlays that many 3D cameras provide).
The cameras have different opening angles and resolutions.
I appreciate any hint or comment :-)
Cheers.
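One way to sketch this registration in plain NumPy, with invented intrinsics and identity extrinsics: back-project every depth pixel to 3D in the depth camera's frame, transform it with the inter-camera extrinsics (R, t) from your calibration, and project it with the grayscale camera's intrinsics. The differing resolutions and opening angles are absorbed by the two different K matrices:

```python
import numpy as np

# Hedged sketch: register a depth image onto a grayscale camera's pixel
# grid. K_d/K_g, R, t and the flat depth map are all synthetic stand-ins.

K_d = np.array([[400.0, 0, 160], [0, 400.0, 120], [0, 0, 1]])   # depth cam
K_g = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # gray cam
R = np.eye(3)        # extrinsics depth -> gray, from your calibration
t = np.zeros(3)

def register_depth_to_gray(depth, K_d, K_g, R, t, gray_shape):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each depth pixel to a 3D point in the depth camera frame.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K_d) @ pix * depth.reshape(1, -1)
    # Move into the grayscale camera frame and project.
    pts_g = R @ pts + t.reshape(3, 1)
    proj = K_g @ pts_g
    uv = (proj[:2] / proj[2]).round().astype(int)
    # Scatter depth values onto the grayscale grid (NaN = no sample).
    out = np.full(gray_shape, np.nan, np.float32)
    ok = ((uv[0] >= 0) & (uv[0] < gray_shape[1]) &
          (uv[1] >= 0) & (uv[1] < gray_shape[0]))
    out[uv[1, ok], uv[0, ok]] = depth.reshape(-1)[ok]
    return out

depth = np.full((240, 320), 1000.0, np.float32)   # flat synthetic depth (mm)
overlay = register_depth_to_gray(depth, K_d, K_g, R, t, (480, 640))
print(overlay[240, 320])   # depth value aligned to the gray pixel grid
```

The same loop with the infrared extrinsics overlays the third camera; for production use you would also handle distortion and occlusion, which this sketch ignores.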

OpenCV: What is the correlation between camera calibration and depth map computing?

I followed the OpenCV tutorials called 'Camera Calibration and 3D Reconstruction' (https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_table_of_contents_calib3d/py_table_of_contents_calib3d.html) and I do not understand the correlation between camera calibration and creating a depth map. I can see that the focal length is in the equation for disparity, but according to the code there's no need to compute the camera matrix, because it's not used anywhere.
Am I right, or can someone point out my mistake?
All code is in the opencv link.
I'm using python.
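The tutorial's disparity demo never touches the camera matrix only because its sample images are already rectified; converting disparity into metric depth does need calibration outputs. A back-of-the-envelope example with invented numbers:

```python
# Depth from disparity needs two calibration results: the focal length in
# pixels (from the camera matrix) and the stereo baseline (from the
# extrinsics). All values here are invented for illustration.
f_px = 800.0         # focal length in pixels, from the camera matrix
baseline_m = 0.10    # distance between the cameras, from the extrinsics
disparity_px = 16.0  # one pixel's disparity, from the block matcher

depth_m = f_px * baseline_m / disparity_px
print(depth_m)  # -> 5.0
```

So calibration and depth computation are linked: without f and the baseline, a disparity map stays in pixels and never becomes distance.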

How to find the Camera Matrix and Camera Distortion file for detecting aruco markers?

I'm trying to detect the location of an ArUco marker, but I don't know how to find the camera matrix and camera distortion file. Can anyone tell me how to calculate or find these files for my camera?
You have to calibrate your camera yourself. See this tutorial to learn how to get the camera matrix and distortion coefficients. If you need to understand the general process
of calibrating a camera, see the tutorial on camera calibration.
