I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/t... regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints(), along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the object's axes in the image plane.
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I've done the rotation and translation on the axis points manually, and then used projectPoints to project them onto the image plane (passing an identity matrix and a zero matrix for rotation and translation respectively), but the results seem way off. How can I eliminate the projection step and just get the world coordinates of the axes once they've been rotated and translated? Thanks!
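For reference, here is a minimal sketch of how that manual rotation and translation could be done in Python with OpenCV; the pose, axis points, and camera parameters below are placeholders, not values from the question:

import numpy as np
import cv2

# Placeholder pose and camera parameters -- substitute your own rvec, tvec,
# cameraMatrix and distCoeffs (e.g. from calibration and solvePnP).
rvec = np.array([[0.1], [-0.2], [0.05]])
tvec = np.array([[0.0], [0.0], [5.0]])
cameraMatrix = np.array([[800.0, 0.0, 320.0],
                         [0.0, 800.0, 240.0],
                         [0.0, 0.0, 1.0]])
distCoeffs = np.zeros(5)

# Axis endpoints in the object's own (model) frame: origin, X, Y, Z.
axis = np.float32([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])

# Turn the rotation vector into a 3x3 rotation matrix.
R, _ = cv2.Rodrigues(rvec)

# Apply the rotation and translation directly, with no projection:
# each transformed point is R * p + t, expressed in the target frame.
axis_transformed = R.dot(axis.T).T + tvec.reshape(1, 3)

# Projecting the already-transformed points with an identity rotation and zero
# translation should then give the same pixels as projecting the original axis
# with rvec/tvec.
img_pts, _ = cv2.projectPoints(axis_transformed, np.zeros((3, 1)), np.zeros((3, 1)),
                               cameraMatrix, distCoeffs)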
I am working on an application using an IFM 3D camera to identify parts prior to a robot pickup. Currently I am able to find the centroid of these objects using contours from a depth image and from there calculate the center point of these objects in pixel space.
My next task is to then transform the 2D centroid coordinates to a 3D point in 'real' space. I am able to train the robot such that its coordinate frame is either at the center of the image or at the traditional (0,0) point of an image (top left).
The 3D camera I am using provides both an intrinsic and extrinsic matrix. I know I need to use some combination of these matrices to project my centroid into three space but the following questions remain:
My current understanding from googling is that the intrinsic matrix is used to fix lens distortion (barrel and pincushion warping, etc.) whereas the extrinsic matrix is used to project points into the real world. Is this simplified assumption correct?
How can a camera supply a single extrinsic matrix? I know traditionally these matrices are found using the checkerboard corners method but are these not dependent on the height of the camera?
Is the solution as simple as taking the 3x4 extrinsic matrix and multiplying it by a 3x1 matrix [x, y, 1], and if so, will the returned values be relative to the camera center or to the traditional (0,0) point of an image?
Thanks in advance for any insight! Also, if it's any consolation, I am doing everything in Python and OpenCV.
No. I suggest you read the basics in Multiple View Geometry by Hartley and Zisserman, freely available on the web. Depending on the camera model, the intrinsics contain different parameters. For the pinhole camera model, these are the focal length and the principal point.
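For illustration, a pinhole intrinsic matrix built from those parameters might look like this (the numbers below are just placeholders):

import numpy as np

# Pinhole intrinsic matrix: fx, fy are focal lengths in pixels,
# (cx, cy) is the principal point. Values here are placeholders.
fx, fy = 580.0, 580.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])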
The only reason you might be able to transform your 2D centroid directly to 3D is that you are using a 3D camera. Read the manual of the camera; it should explain how the relation between 2D and 3D coordinates is defined for your specific model.
If you have only image data, you can only compute a 3D point from at least two views.
No, of course not. Please don't be lazy: start reading the basics about camera projection instead of asking others to explain the common basics that are written down everywhere on the web and in the literature.
So if you take a pinhole camera, place it at the origin of 3D space, pick a pixel on the image plane, and connect the two with a straight line, you get a vector with a direction and a length. Think of this as the path followed by the light reflected from an object into the camera lens. This is what I want to calculate, and I think I have to use the camera's intrinsic properties for it.
Below is a statement that made me think about it all.
In a pinhole camera model, each pixel defines a direction vector in 3D space, specifically the vector from the projection center through the pixel's position on the image plane.
Here is a diagram better explaining this.
I want to calculate the three red lines, and the known parameters would be, I guess, the camera position (origin), the pixel coordinates, and the intrinsic camera parameters.
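A minimal sketch of that back-projection in Python with NumPy, assuming a pinhole model with intrinsic matrix K and no lens distortion (undistort the pixel first if there is distortion); the intrinsics and the pixel used below are placeholders:

import numpy as np

# Placeholder intrinsics: focal lengths fx, fy and principal point (cx, cy).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

def pixel_to_ray(u, v, K):
    # Back-project the homogeneous pixel [u, v, 1] through the inverse intrinsics.
    d = np.linalg.inv(K).dot(np.array([u, v, 1.0]))
    # Return a unit direction vector in camera coordinates; the camera center is the origin.
    return d / np.linalg.norm(d)

ray = pixel_to_ray(100, 200, K)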
I have a flat quadrilateral plane that I know the dimensions of, and I can find the contours of it, along with the 4 corners. I need assistance in figuring out the method of determining its 3D position. I have managed to get a 3x3 perspective transform of it, which looks something like this:
[[ 3.91873630e-02  1.20990983e+00 -2.81213415e+02]
 [ 1.21202027e+00 -1.85962357e-15 -3.52697898e+02]
 [ 3.83991908e-04  2.52680041e-05  1.00000000e+00]]
(this is OpenCV 3.1.0, in python, and that matrix is just one frame, so it might not be representative of all potential orientations)
Can I determine the angle away from the camera and distance from the camera from this information, or do I need to perform more calculations?
I'm not sure what else you need to know, so I'll be happy to give more information if you need it.
You can use the solvePnP function to determine the quadrilateral's position and orientation with respect to the camera.
All you need to have is
3D coordinates of quadrilateral's corners in world frame
corresponding pixel coordinates
Camera's intrinsic parameters
You can directly use the solvePnP() function of OpenCV.
As output you get the pose of the world coordinate system in the camera coordinate system. If you take the quadrilateral itself as the world frame, you get the pose information (rotation and translation) of the quadrilateral in the camera coordinate system.
C++: bool solvePnP(InputArray objectPoints, InputArray imagePoints,
InputArray cameraMatrix, InputArray distCoeffs,
OutputArray rvec, OutputArray tvec,
bool useExtrinsicGuess=false, int flags=ITERATIVE );
Use CV_P3P argument for flags, in case you have exactly 4 points.
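Since you are in Python, a minimal sketch of what that call could look like for a quadrilateral with four known corners is below; the dimensions, pixel coordinates, and intrinsics are placeholders, and the P3P flag is spelled cv2.SOLVEPNP_P3P in the Python bindings of OpenCV 3.x:

import numpy as np
import cv2

# Corners of the quadrilateral in its own (world) frame, in mm -- placeholder size.
objectPoints = np.array([[0.0, 0.0, 0.0],
                         [100.0, 0.0, 0.0],
                         [100.0, 50.0, 0.0],
                         [0.0, 50.0, 0.0]], dtype=np.float32)

# Corresponding pixel coordinates of the detected corners -- placeholders.
imagePoints = np.array([[320.0, 240.0],
                        [420.0, 245.0],
                        [415.0, 300.0],
                        [325.0, 295.0]], dtype=np.float32)

# Intrinsics and distortion from calibration -- placeholders.
cameraMatrix = np.array([[800.0, 0.0, 320.0],
                         [0.0, 800.0, 240.0],
                         [0.0, 0.0, 1.0]])
distCoeffs = np.zeros(5)

# P3P variant for exactly 4 point correspondences.
retval, rvec, tvec = cv2.solvePnP(objectPoints, imagePoints,
                                  cameraMatrix, distCoeffs,
                                  flags=cv2.SOLVEPNP_P3P)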
I'm trying to find the relative position of the camera to the chessboard (or the other way around) - I feel OK with converting between different coordinate systems, e.g. as suggested here. I decided to use chessboard not only for calibration but actual position determination as well at this stage, since I can use the findChessboardCorners to get the imagePoints (and this works OK).
I've read a lot on this topic and feel that I understand the solvePnP outputs (even though I'm completely new to OpenCV and computer vision in general). Unfortunately, the results I get from solvePnP and from physically measuring the test set-up are different: the translation in the z-direction is off by approx. 25%, and the x and y directions are completely wrong, off by several orders of magnitude and in a different direction than what I've read to be the camera coordinate system (x pointing up the image, y to the right, z away from the camera). The difference persists if I convert tvec and rvec to the camera pose in world coordinates.
My questions are:
What are the directions of camera and world coordinate systems' axes?
Does solvePnP output the translation in the same units as I specify the objectPoints?
I specified the world origin as the first of the objectPoints (one of the chessboard corners). Is that OK and is tvec the translation to exactly that point from the camera coordinates?
This is my code (I attach it pro forma as it does not throw any exceptions etc.). I used grayscale images to get the camera intrinsics matrix and distortion coefficients during calibration so decided to perform localisation in grayscale as well. chessCoordinates is a list of chessboard points location in mm with respect to the origin (one of the corner points). camMatrix and distCoefficients come from calibration (performed using the same chessboard and objectPoints).
import numpy as np
import cv2

camCapture = cv2.VideoCapture(0)                    # open the camera to take a picture of the target for the imagePoints
retval, tempImg = camCapture.read()                 # read() returns (success flag, frame)
imgPts = []
tgtPts = []
tempImg = cv2.cvtColor(tempImg, cv2.COLOR_BGR2GRAY) # calibration was done in grayscale, so localise in grayscale too
found_all, corners = cv2.findChessboardCorners(tempImg, chessboardDim)
imgPts.append(corners.reshape(-1, 2))               # detected corner pixel coordinates
tgtPts.append(np.array(chessCoordinates, dtype=np.float32))  # corresponding 3D chessboard points in mm
retval, myRvec, myTvec = cv2.solvePnP(objectPoints=np.array(tgtPts), imagePoints=np.array(imgPts),
                                      cameraMatrix=camMatrix, distCoeffs=distCoefficients)
The camera coordinate axes are the same as the image axes. So you have the x axis pointing to the right of the camera, the y axis pointing down, and z pointing in the direction the camera is facing. This is a clockwise axis system, and the same applies to the chessboard: if you specified the origin in, let's say, the upper right corner of the chessboard, the x axis goes along the longer side to the right and y along the shorter side of the chessboard, and the z axis would be pointing downward, toward the ground.
solvePnP outputs the translation in the same units as the units in which you specified the length of the chessboard fields, but it may also be affected by the units used in the camera calibration, since it uses the camera matrix.
Tvec points to the origin of the world coordinate system in which you placed the calibration object. So if you placed the first object point at (0,0), that's where tvec will point to.
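One way to sanity-check this, sketched below on the assumption that the variables from the code above (myRvec, myTvec, camMatrix, distCoefficients, corners) are still available: projecting the world origin (0, 0, 0) with the solvePnP result should land on the pixel of the corner you chose as the origin.

import numpy as np
import cv2

# Project the world origin with the pose returned by solvePnP.
origin = np.zeros((1, 3), dtype=np.float32)
projected, _ = cv2.projectPoints(origin, myRvec, myTvec, camMatrix, distCoefficients)

# This should be (very close to) the pixel of the first detected chessboard corner.
print(projected.reshape(2), corners[0].ravel())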
What are the directions of camera and world coordinate systems' axes?
The (0,0,0) corner on the board is chosen so that the X and Y axes point towards the rest of the corner points. The Z axis always points away from the board, which means it is usually pointing somewhat in the direction of the camera.
Does solvePnP output the translation in the same units as I specify the objectPoints?
Yes
I specified the world origin as the first of the objectPoints (one of the chessboard corners). Is that OK and is tvec the translation to exactly that point from the camera coordinates?
Yes, this is pretty common. In most cases, the first chessboard corner is set as (0,0,0) and the subsequent corners are set in the z=0 plane (e.g. (1,0,0), (0,1,0), etc.).
The tvec, combined with the rotation, describes where that origin point lies as seen from the camera. In short, the tvec and rvec provide you with the world -> camera transformation. With some basic geometry you can calculate the inverse transformation, camera -> world, as sketched below.
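A minimal sketch of that inversion in Python, assuming rvec and tvec come from solvePnP (the values below are placeholders):

import numpy as np
import cv2

# Placeholder pose -- substitute the rvec/tvec returned by solvePnP.
rvec = np.array([[0.1], [-0.3], [0.02]])
tvec = np.array([[10.0], [-5.0], [400.0]])

# world -> camera:  X_cam = R * X_world + t
R, _ = cv2.Rodrigues(rvec)

# camera -> world: invert the rigid transform.
R_inv = R.T                           # rotation of the camera in world coordinates
cam_pos_world = -R.T.dot(tvec)        # camera center expressed in world coordinates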
I am writing a program using PyGTK that displays a gtk.Image. The desktop is projected onto the inside of a spherical dome. If the image displayed is rectangular on the screen, once projected onto a sphere it gets distorted.
To help picture this: The desktop itself is square. The center pixel of the desktop projects to the zenith and a circle inscribed inside the square desktop becomes the horizon (0 degrees elevation in polar coordinates). Everything outside that (in the corners of the desktop) is not displayed.
I would like to somehow modify the gtk.Image such that it still appears rectangular on the spherical surface. I'm sure there are lots of details in how this projection could be done, but very simplistically I have to convert the rectangular image into a curved trapezoid. Converting to a range of polar coordinates (e.g., map this rectangle to the area between two azimuth and two elevation angles) would be a good first approximation, though you can imagine if the elevation angles are 0 and 90, the resulting image will be a wedge of the sphere and not look rectangular at all.
How can I apply transformations like this to a gtk.Image (or its underlying Pixbuf)? Is there a package already that can do this? If not, how should I go about writing it from scratch? Presumably I would have to pull out the pixel values, map them to some new grid, and replace the original image. I just don't want to reinvent something that has already been done.
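Not a GTK-specific answer, but as a sketch of the remapping step itself: one common approach is to build a lookup grid that maps each output pixel (in the dome's azimuth/elevation parameterisation) back to a source pixel in the rectangular image, then resample. Below is a rough illustration with NumPy and OpenCV's remap; the file name, output size, and angular ranges are placeholders, and you would still have to convert the result back into a GdkPixbuf.

import numpy as np
import cv2

# Load the rectangular source image (placeholder path).
src = cv2.imread("input.png")
h, w = src.shape[:2]

# Target: a square desktop whose centre projects to the zenith and whose
# inscribed circle is the horizon (polar parameterisation of the dome).
out_size = 1024
cy = cx = out_size / 2.0
ys, xs = np.mgrid[0:out_size, 0:out_size].astype(np.float32)

# Polar coordinates of every output pixel: radius -> elevation, angle -> azimuth.
r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) / (out_size / 2.0)   # 0 at zenith, 1 at horizon
az = np.arctan2(ys - cy, xs - cx)                                  # azimuth, -pi..pi
el = (1.0 - r) * np.radians(90.0)                                  # elevation in radians

# Place the image between two azimuth and two elevation angles (placeholders).
az0, az1 = np.radians(-30), np.radians(30)
el0, el1 = np.radians(20), np.radians(60)

# Map dome coordinates back to source-image coordinates; positions outside the
# chosen angular window fall outside the source and are filled with black.
map_x = (az - az0) / (az1 - az0) * (w - 1)
map_y = (1.0 - (el - el0) / (el1 - el0)) * (h - 1)
warped = cv2.remap(src, map_x.astype(np.float32), map_y.astype(np.float32),
                   interpolation=cv2.INTER_LINEAR,
                   borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0))

cv2.imwrite("warped.png", warped)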