I am trying to create a structured-light 3D scanner.
Camera calibration
Camera calibration follows the official OpenCV tutorial. As a result I have the camera intrinsic parameters (camera matrix).
Projector calibration
The projector calibration may not be correct, but the process was: the projector shows a chessboard pattern and the camera takes photos of it from different angles. The images are undistorted with cv.undistort using the camera parameters, and the resulting images are then used for calibration, again following the official OpenCV tutorial. As a result I have the projector intrinsic parameters (projector matrix).
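A minimal sketch of that process as described (all names are placeholders, not the original code): undistort each photo of the projected chessboard with the camera intrinsics, find the corners, then run the standard calibration on the undistorted images.
import cv2
import numpy as np

# pattern_points: the chessboard's known 3D corner grid (z = 0), as in the OpenCV tutorial
objpoints, imgpoints = [], []
for img in chessboard_photos:                        # photos of the projected chessboard
    und = cv2.undistort(img, cam_matrix, cam_dist)   # remove camera lens distortion first
    gray = cv2.cvtColor(und, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    if found:
        objpoints.append(pattern_points)
        imgpoints.append(corners)
ret, proj_matrix, proj_dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)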
Rotation and Translation
From cv.calibrate I get rotation and translation vectors, but there are as many vectors as there are images, and I don't think these are the ones I need, because I moved the camera and the projector during calibration.
My new idea is to project the chessboard onto the scanning background and perform calibration there, so that I get a single rotation vector and translation vector. I don't know whether that is the correct way.
Scanning
The scanning process is:
Generate patterns -> undistort the patterns with the projector matrix -> project each pattern and take a photo with the camera -> undistort the captured photos with the camera matrix
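Roughly, the first two steps could look like this, assuming opencv-contrib-python (the structured_light module) and placeholder names proj_w/proj_h, proj_matrix, proj_dist; the exact Python binding name for GrayCodePattern can differ between OpenCV versions:
import cv2

proj_w, proj_h = 1280, 800                               # assumed projector resolution
graycode = cv2.structured_light_GrayCodePattern.create(proj_w, proj_h)
ok, patterns = graycode.generate()                       # list of Gray code pattern images
patterns = [cv2.undistort(p, proj_matrix, proj_dist) for p in patterns]
# each undistorted pattern is then projected and photographed; the photos are
# undistorted with the camera matrix before decoding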
Camera-projector pixel map
I use Gray code patterns and, with cv.graycode.getProjPixel, I get the pixel mapping between camera and projector. My projector does not have a very high resolution, so the finest (last) patterns are not very readable; I will create a custom function that generates the mapping without those last patterns.
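A minimal sketch of the decoding step as I understand it (graycode is the GrayCodePattern object from the sketch above, captured is the list of undistorted grayscale photos of the patterns in generation order; per the OpenCV docs, getProjPixel signals an error, e.g. a shadowed pixel, via its first return value):
import cv2

graycode = cv2.structured_light_GrayCodePattern.create(proj_w, proj_h)  # same object as above
cam_to_proj = {}
h, w = captured[0].shape[:2]
for y in range(h):
    for x in range(w):
        err, proj_pix = graycode.getProjPixel(captured, x, y)
        if not err:                                      # pixel decoded successfully
            cam_to_proj[(x, y)] = (int(proj_pix[0]), int(proj_pix[1]))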
Problem
I don't know how to get the depth map (Z) from all this information. My confusion comes from the fact that there are three coordinate systems: camera, projector and world.
How do I find Z in code? Can I get Z directly from the pixel mapping between image and pattern?
Information that I have:
p(x, y, 1) = R*q(x, y, z) + T, where p is the image point, q is the real-world point (maybe), and R and T are the rotation and translation. How do I find R and T?
Z = B*f/(x - x'), where Z is the depth, B is the baseline (the distance between camera and projector, which I can measure by hand, but maybe that is not the right way), f is the focal length, and (x - x') is the disparity between a camera pixel and its matching projector pixel. I don't know how to get the baseline. Maybe it is the translation vector?
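For reference, a small numeric sketch of that formula under the usual rectified-stereo assumptions (all numbers are placeholders; the baseline corresponds to the length of the translation between camera and projector from a stereo calibration):
f = 1400.0                       # focal length in pixels, from the camera matrix
B = 0.20                         # baseline in metres, e.g. the norm of T from a stereo calibration
x_cam, x_proj = 812.0, 640.0     # matched camera and projector x-coordinates
Z = B * f / (x_cam - x_proj)     # depth in metres for this correspondence (about 1.63 m)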
I tried to pick 4 meaningful points, use them in cv.getPerspectiveTransform, and feed the result to cv.reprojectImageTo3D. But cv.getPerspectiveTransform returns a 3x3 matrix, while cv.reprojectImageTo3D expects Q, the 4x4 perspective transformation matrix that can be obtained with stereoRectify.
Similar Questions:
How is point cloud data acquired from the structured light 3D scanning? - The answer is that you need to define a vector that goes from the camera perspective center to the pixel in the image and then rotate this vector by the camera rotation. But I don't know how to define/find this vector, and the rotation vector is also needed.
How to compute the rotation and translation between 2 cameras? - The question is about R and T between two cameras, but almost everywhere it is written that a projector is an inverse camera. One good answer is: "The only thing you have to do is to make sure that the calibration chessboard is seen by both of the cameras." But I think that if I project the chessboard pattern it will be additionally distorted by the wall (a projective transformation).
There are many other resources, and I will update the list in the comments. I have missed something and I can't figure out how to implement it.
Let's assume p(x, y) is the image point and (x - x') is the disparity. You can obtain the depth as:
import cv2
import numpy as np

disparity = x - x_prime    # x - x' (camera pixel minus matched projector pixel)
point_and_disparity = np.array([[[x, y, disparity]]], dtype=np.float32)
depth = cv2.perspectiveTransform(point_and_disparity, q_matrix)  # -> (X, Y, Z); Z is the depth
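Here q_matrix is the 4x4 disparity-to-depth matrix Q. One way to obtain it, assuming the camera-projector pair has been calibrated as a stereo pair (placeholder names cam_mtx/cam_dist, proj_mtx/proj_dist, and R, T between them, e.g. from cv2.stereoCalibrate), is:
R1, R2, P1, P2, q_matrix, roi1, roi2 = cv2.stereoRectify(
    cam_mtx, cam_dist, proj_mtx, proj_dist, (img_w, img_h), R, T)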
Related
I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image and from there calculate the center point of these objects in pixel space.
My next task is to transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot so that its coordinate frame is either at the center of the image or at the traditional (0, 0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and an extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into 3D space, but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to fix lens distortion (barrel and pinhole warping, etc.), whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 matrix [x, y, 1], and if so, will the returned values be relative to the camera center or to the traditional (0, 0) point of an image?
Thanks in advance for any insight! Also if it's any consolation I am doing everything in python and openCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
The only reason why you might be able to directly transform your 2D centroid to 3D is that you use a 3D camera. Read the camera's manual; it should explain how the relation between 2D and 3D coordinates is defined for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
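For illustration only, a minimal pinhole-model sketch (an assumption, not the IFM camera's documented mapping) of how a pixel (u, v) with a known depth Z could be turned into a 3D point:
import numpy as np

fx, fy = 570.0, 570.0            # focal lengths in pixels (placeholders)
cx, cy = 320.0, 240.0            # principal point (placeholders)
u, v, Z = 400.0, 260.0, 1.5      # pixel coordinates and depth in metres (placeholders)

X = (u - cx) * Z / fx            # back-projection under the pinhole model
Y = (v - cy) * Z / fy
point_cam = np.array([X, Y, Z])  # 3D point in the camera frame
# if the extrinsics map world to camera (p_cam = R @ p_world + t), then
# p_world = R.T @ (point_cam - t)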
No, of course not. Please don't be lazy; start reading the basics about camera projection instead of asking others to explain the common basics that are written down everywhere on the web and in the literature.
I am trying to find the ground-plane coordinates of a contour in an image. I have been able to detect the contour and get its x, y, w and h, but I now need it in a 2D ground-plane coordinate system. I am trying to perform camera calibration to find out data about the camera so that I can compute a homography to get the ground-plane data. However, I am using pre-made videos, so I cannot use the checkerboard solution that I keep finding online. Does anyone have any ideas?
Below is an example image:
For objects far away, the height of the bridge in the foreground is very small compared to their distance from the camera, so a homography computed from points on the bridge will be approximately correct far away, provided the bridge is approximately horizontal (no roll). This is another way of saying that the images of all horizontal planes pass through the horizon.
For objects nearby, but not on the bridge, the above approximation will suffer from parallax error. Unless you have an object of known scale on the plane of interest (the sea), or an estimate of the distance of the bridge from that plane, there is no information available to resolve depth - as far as you can tell the bridge could be 100m or 1mm above the sea.
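A sketch of that approximation with made-up correspondences: four image points on the bridge deck whose ground-plane coordinates are known give a homography, which then maps other image points to approximate ground coordinates:
import cv2
import numpy as np

img_pts = np.float32([[412, 603], [980, 590], [301, 710], [1105, 698]])  # pixels (made up)
gnd_pts = np.float32([[0, 0], [10, 0], [0, 5], [10, 5]])                 # metres on the plane (made up)
H = cv2.getPerspectiveTransform(img_pts, gnd_pts)

contour_pt = np.float32([[[640, 650]]])               # e.g. a detected contour's centre
ground_xy = cv2.perspectiveTransform(contour_pt, H)   # approximate ground-plane position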
OpenCV provides methods to calibrate a camera. I want to know if it also has a way to simply generate a view projection matrix if and when the parameters are known.
i.e. I know the camera position, rotation, up vector, FOV... and whatever else is needed, and then call MagicOpenCVCamera(parameters) to obtain a 4x4 transformation matrix.
I have searched this up but I can only find information about calibrating the camera, not about creating one if you already know the parameters.
The projection matrix is simply a 3x4 matrix whose [0:3,0:3] left square is occupied by the product K.dot(R) of the camera intrinsic calibration matrix K and its camera-from-world rotation matrix R, and the last column is K.dot(t), where t is the camera-from-world translation. To clarify, R is the matrix that brings into camera coordinates a vector decomposed in world coordinates, and t is the vector whose tail is at the camera center, and whose tip is at the world origin.
The OpenCV calibration routines produce the camera orientations as rotation vectors, not matrices, but you can use cv.Rodrigues to convert them.
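A sketch of assembling that matrix from a calibration result (K is the 3x3 intrinsic matrix; rvec, tvec are one camera-from-world pose, e.g. from cv2.solvePnP or cv2.calibrateCamera; all names are placeholders):
import cv2
import numpy as np

R, _ = cv2.Rodrigues(rvec)                    # rotation vector -> 3x3 rotation matrix
P = K @ np.hstack((R, tvec.reshape(3, 1)))    # P = [K*R | K*t], the 3x4 projection matrix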
I'm currently extracting the calibration parameters of two images that were taken in a stereo vision setup via cv2.aruco.calibrateCameraCharucoExtended(). I'm using the cv2.undistortPoints() & cv2.triangulatePoints() function to convert any two 2D points to a 3D point coordinate, which works perfectly fine.
I'm now looking for a way to convert the two 2D images, which can be seen under approach 1, into one 3D image. I need this 3D image because I would like to determine the order of the cups from left to right, in order to use the triangulatePoints function correctly. If I determine the order of the cups purely based on the x-coordinates in each of the 2D images, I get different results for each camera (the cup at the front-left corner of the table, for example, ends up in a different 'order' depending on the camera angle).
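For context, a sketch of that triangulation step as I understand it (placeholder names: K1/dist1 and K2/dist2 are the two intrinsic calibrations, R1/t1 and R2/t2 the world-to-camera poses, (u1, v1) and (u2, v2) a matched pair of pixels):
import cv2
import numpy as np

pts1 = cv2.undistortPoints(np.float32([[[u1, v1]]]), K1, dist1)   # normalized image coordinates
pts2 = cv2.undistortPoints(np.float32([[[u2, v2]]]), K2, dist2)
P1 = np.hstack((R1, t1.reshape(3, 1)))        # extrinsic-only projection of camera 1
P2 = np.hstack((R2, t2.reshape(3, 1)))        # extrinsic-only projection of camera 2
X_h = cv2.triangulatePoints(P1, P2, pts1.reshape(2, 1), pts2.reshape(2, 1))
X = (X_h[:3] / X_h[3]).ravel()                # homogeneous -> Euclidean 3D point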
Approach 1: Keypoint Feature Matching
I was first thinking about using a keypoint feature extractor like SIFT or SURF, so I therefore tried to do some keypoint extraction and matching. I tried using both the Brute-Force Matching and FLANN based Matcher, but the results are not really good:
Brute-Force
FLANN-based
I also tried to swap the images, but it still gives more or less the same results.
Approach 2: ReprojectImageTo3D()
I looked further into the issue and I think I need the cv2.reprojectImageTo3D() [docs] function. However, to use this function I first need the Q matrix, which has to be obtained with cv2.stereoRectify [docs]. This stereoRectify function in turn expects a couple of parameters that I'm able to provide, but there are two I'm confused about:
R – Rotation matrix between the coordinate systems of the first and the second cameras.
T – Translation vector between the coordinate systems of the cameras.
I do have the rotation and translation vectors for each camera separately, but not between the two cameras. Also, do I really need to do this stereoRectify all over again when I have already done a full ChArUco calibration and already have the camera matrix, distortion coefficients, rotation vectors and translation vectors?
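Regarding R and T between the cameras, a sketch of composing them from one ChArUco view seen by both cameras (rvec1/tvec1 and rvec2/tvec2 would be the board pose in each camera, e.g. from cv2.aruco.estimatePoseCharucoBoard; cv2.stereoCalibrate over many shared views is the more robust route):
import cv2
import numpy as np

R1, _ = cv2.Rodrigues(rvec1)
R2, _ = cv2.Rodrigues(rvec2)
R = R2 @ R1.T                                        # rotation from camera 1 to camera 2
T = tvec2.reshape(3, 1) - R @ tvec1.reshape(3, 1)    # translation from camera 1 to camera 2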
Some extra info that might be useful
I'm using 40 calibration images per camera of the ChArUco board to calibrate. I first extract all corners and markers after which I estimate the calibration parameters with the following code:
(ret, camera_matrix, distortion_coefficients0,
 rotation_vectors, translation_vectors,
 stdDeviationsIntrinsics, stdDeviationsExtrinsics,
 perViewErrors) = cv2.aruco.calibrateCameraCharucoExtended(
    charucoCorners=allCorners,
    charucoIds=allIds,
    board=board,
    imageSize=imsize,
    cameraMatrix=cameraMatrixInit,
    distCoeffs=distCoeffsInit,
    flags=flags,
    # note: the termination-criteria flags must be combined with +, not &
    criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 10000, 1e-9))
The board parameter is created with the following settings:
CHARUCO_BOARD = aruco.CharucoBoard_create(
    squaresX=9,
    squaresY=6,
    squareLength=4.4,
    markerLength=3.5,
    dictionary=ARUCO_DICT)
Thanks a lot in advance!
I need to calculate the distance from the camera to a depth-image pixel. I searched the internet, but I only found information and code examples related to stereo images, whereas I need them for a depth image.
Here, the depth image is in grayscale (0-255) and I defined a particular range (say a pixel value of 0 corresponds to 5 m and a value of 255 corresponds to 500 m).
The camera's intrinsics (focal length, image sensor format) and extrinsics (rotation and translation matrices) are given. I need to calculate the distance for different camera orientations and rotations.
I want to do it using OpenCV in Python. Is there any specific documentation or code example regarding this?
Or is any further information necessary to find this?
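A minimal sketch under the stated assumptions only (a linear grey-value-to-metres mapping with 0 -> 5 m and 255 -> 500 m, the grey value encoding depth along the optical axis, and pinhole intrinsics fx, fy, cx, cy), giving the Euclidean distance from the camera centre to the point seen at pixel (u, v):
import numpy as np

def distance_to_pixel(depth_img, u, v, fx, fy, cx, cy):
    Z = 5.0 + (depth_img[v, u] / 255.0) * (500.0 - 5.0)   # grey value -> depth in metres
    X = (u - cx) * Z / fx                                 # back-project under the pinhole model
    Y = (v - cy) * Z / fy
    return np.sqrt(X**2 + Y**2 + Z**2)                    # straight-line distance from the camera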
My research topic is the same as yours, but I have a problem now. I use stereoCalibrate() to calibrate the binocular (stereo) camera and found that the obtained translation vector is very different from the actual baseline distance. In addition, the parameters used in stereoCalibrate() were obtained by calibrating the two cameras individually with calibrate().