Strange results with OpenCV camera calibration - Python

I'm calibrating my GoPro following the OpenCV tutorial. To calibrate, I have a bunch of pictures with a chessboard in different locations. I then plot the 3D axis on top of the chessboard and everything looks fine, the calibration seems good:
Then I want to see the undistorted version of the image without cropping anything and I get this:
Which clearly doesn't make any sense. I have tried to do the same thing with another set of calibration images and it worked:
I don't understand why it didn't work with the first set of pictures. Any ideas?
Here's the relevant code:
import os
import re
import cPickle
import numpy as np
import cv2
from matplotlib import pyplot as plt

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
files = os.listdir('frames')
for fname in files:
    g = re.match(r'frame(\d+).png', fname)
    n = g.groups()[0]
    if n not in numbers:  # 'numbers' is defined elsewhere
        os.remove('frames/%s' % fname)
        continue
    img = cv2.imread('frames/%s' % fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        img = cv2.drawChessboardCorners(img, (9,6), corners2, ret)
        objpoints.append(objp)
        imgpoints.append(corners2)
        cv2.imshow('img', img)
        cv2.waitKey(1)
cv2.destroyAllWindows()

ret, K, D, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
# K: intrinsic (camera) matrix
# D: distortion coefficients

# Compute the reconstruction error
mean_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], K, D)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error
print "total error: ", mean_error / len(objpoints)

ret, rvec, tvec = cv2.solvePnP(objp, corners2, K, D)
# Project 3D points to the image plane using OpenCV
# ('axis' and 'draw' are from the pose estimation tutorial)
imgpts, jac = cv2.projectPoints(axis, rvec, tvec, K, D)
img = draw(img, corners2, imgpts)
cv2.imshow('img', img)
k = cv2.waitKey(0) & 0xff

f = file('camera.pkl', 'wb')
cPickle.dump((K, D, rvec, tvec), f)
f.close()

# Get the undistorted image
h, w = img.shape[:2]
newcameraK, roi = cv2.getOptimalNewCameraMatrix(K, D, (w, h), 1, (w, h))
dst = cv2.undistort(img, K, D, None, newcameraK)

plt.figure()
plt.axis("off")
plt.imshow(cv2.cvtColor(dst, cv2.COLOR_BGR2RGB))
plt.show()

Micka's point is important: make sure the calibration grid fills as much of the image as possible, or at least have some pictures where the checkerboard is near the image boundaries.
The issue is that distortion is small near the center and large at the boundaries, so even a large distortion doesn't affect the center of the image very much. Equivalently, a small error in estimating the distortion from the center of the image can lead to a very wrong overall estimate of the distortion.
Redundant images matter "a little": they effectively give those images (and therefore the location of the checkerboard pattern in them) more weight.
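As a quick sanity check, you can visualize how much of the frame your detected corners actually cover; dark regions near the borders mean the distortion is essentially unconstrained there. A minimal sketch (assuming the imgpoints list and the gray image from the calibration code above):

import numpy as np
import cv2

coverage = np.zeros(gray.shape, np.uint8)
for corners in imgpoints:
    for cx, cy in corners.reshape(-1, 2):
        cv2.circle(coverage, (int(cx), int(cy)), 9, 255, -1)
cv2.imshow('coverage', coverage)  # bright = constrained, dark = unconstrained
cv2.waitKey(0)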

I also observed the same effect, but even stranger:
When I use getOptimalNewCameraMatrix, my resulting images also show the strong circular distortion effect seen above, and changing the alpha does not fix it. BUT: when I use the same distortion/calibration results WITHOUT getOptimalNewCameraMatrix (i.e. the same fx/fy/cx/cy as in the camera calibration result), I get a very good undistortion result. Very strange.
Could it be that in some situations the getOptimalNewCameraMatrix function is buggy? My calibration (images etc.) looks very good, and there are points near the edges too.
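For reference, the two variants being compared here are roughly the following (K and D as returned by calibrateCamera, img/w/h as in the question's code; which of the two misbehaves is exactly the open question):

# Variant 1: undistort with a new camera matrix (shows the strange effect)
newK, roi = cv2.getOptimalNewCameraMatrix(K, D, (w, h), 1, (w, h))
bad = cv2.undistort(img, K, D, None, newK)

# Variant 2: undistort with the original intrinsics (same fx/fy/cx/cy)
good = cv2.undistort(img, K, D)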

Related

OpenCV: Optimized window size for cv.cornerSubPix()

I am writing code to calibrate cameras with Zhang's checkerboard method, and I am trying to find the best way to reduce the mean error when finding corners.
As I understand it, having a window size as big as possible gives the best results, but if it is too big (so that more than one corner falls inside the window) it completely falsifies the results.
I would like to know if there is a better way than trying all possible combinations or asking a human to check that the corners have been correctly placed in all images.
Computational time is not a big issue here; it can take some time to get the best results.
For the moment, I use the code from the OpenCV tutorial (the winSize of (11,11) here gives a search window of (23,23)):
import numpy as np
import cv2 as cv
import glob

# termination criteria
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
images = glob.glob('*.jpg')
for fname in images:
    img = cv.imread(fname)
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv.findChessboardCorners(gray, (7,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        corners2 = cv.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # Draw and display the corners
        cv.drawChessboardCorners(img, (7,6), corners2, ret)
        cv.imshow('img', img)
        cv.waitKey(500)
cv.destroyAllWindows()
My question is: what do I have to change in this part to get the best localization of the corners, i.e. how do I find the most appropriate winSize (here the (11,11))?
corners2 = cv.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
Thank you
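One heuristic I have been considering, instead of brute force, is deriving winSize from the minimum spacing between the detected corners, so the search window can never straddle two corners. A sketch of that idea (my own heuristic, not an established method):

import numpy as np

def winsize_from_corners(corners):
    # corners as returned by cv.findChessboardCorners
    pts = corners.reshape(-1, 2)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    half = max(int(dists.min() / 2) - 1, 2)  # half-size of the search window
    return (half, half)

# corners2 = cv.cornerSubPix(gray, corners, winsize_from_corners(corners), (-1, -1), criteria)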

Why does the performance of cv2.calibrateCamera() decrease drastically with more images?

I am using a camera calibration routine and I want to calibrate a camera with a large set of images.
Code: (from here)
import numpy as np
import cv2
import glob
import argparse

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

def calibrate():
    """ Apply camera calibration operation for images in the given directory path. """
    height = 8
    width = 10
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(9,7,0)
    objp = np.zeros((height*width, 3), np.float32)
    objp[:, :2] = np.mgrid[0:width, 0:height].T.reshape(-1, 2)
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d point in real world space
    imgpoints = []  # 2d points in image plane.
    # Get the images
    images = glob.glob('thermal_final set/*.png')
    # Iterate through the images and find chessboard corners. Add them to the arrays.
    # If OpenCV can't find the corners in an image, we discard the image.
    for fname in images:
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Find the chess board corners
        ret, corners = cv2.findChessboardCorners(gray, (width, height), None)
        # If found, add object points, image points (after refining them)
        if ret:
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners2)
            # Draw and display the corners
            img = cv2.drawChessboardCorners(img, (width, height), corners2, ret)
    e1 = cv2.getTickCount()
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
    e2 = cv2.getTickCount()
    t = (e2 - e1) / cv2.getTickFrequency()
    print(t)
    return [ret, mtx, dist, rvecs, tvecs]

if __name__ == '__main__':
    ret, mtx, dist, rvecs, tvecs = calibrate()
    print("Calibration is finished. RMS: ", ret)
Now, the problem is the time that cv2.calibrateCamera() takes, which grows with the number of points (derived from the images) used.
Result with 40 images:
9.34462341234 seconds
Calibration is finished. RMS: 2.357820395255311
Result with 80 images:
66.378870749 seconds
Calibration is finished. RMS: 2.864052963156834
The time taken increases exponentially with the number of images.
Now, I have a really huge set of images (500).
I have tried calibrating the camera with points from a single image and then averaging all the results, but those results differ from what I get with this method.
Also, I am sure that my setup is using optimized OpenCV, checked using:
print(cv2.useOptimized())
How do I make this process faster? Can I leverage threads here?
Edit: Updated the concept and wording from "calibrating images" to "calibrating the camera using images".
First, I strongly suspect the reason for your dramatic slowdown is memory related: you may be running out of memory and starting to swap.
But the basic approach you seem to be following is incorrect. You don't calibrate images, you calibrate a camera, i.e. a lens + sensor combo.
Calibrating a camera means estimating the parameters of a mathematical model of that lens + sensor package. Therefore you only need enough independent data points to achieve the desired level of accuracy in the parameter estimation.
A couple dozen well-chosen images will be enough most of the time, provided you are following a well-designed calibration procedure. Some time ago I wrote a few tips on how to do such a design; you can find them here.
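In practice that means you can subsample your 500 views down to a couple dozen before the calibrateCamera call. A minimal sketch (reusing the objpoints/imgpoints lists from your code):

import random

n_views = min(25, len(objpoints))
idx = random.sample(range(len(objpoints)), n_views)  # pick ~25 views at random
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    [objpoints[i] for i in idx], [imgpoints[i] for i in idx],
    gray.shape[::-1], None, None)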

How to calculate an epipolar line with a stereo pair of images in Python OpenCV

How can I take two images of an object from different angles and draw epipolar lines on one based on points from the other?
For example, I would like to be able to select a point on the left picture using a mouse, mark the point with a circle, and then draw an epipolar line on the right image corresponding to the marked point.
I have 2 XML files which contain a 3x3 camera matrix and a list of 3x4 projection matrices for each picture. The camera matrix is K. The projection matrix for the left picture is P_left. The projection matrix for the right picture is P_right.
I have tried this approach (a code sketch of these steps follows the list):
Choose a pixel coordinate (x,y) in the left image (via mouse click)
Calculate a point p in the left image with K^-1 * (x,y,1)
Calculate the pseudo-inverse matrix P+ of P_left (using np.linalg.pinv)
Calculate the epipole e' of the right image: P_right * (0,0,0,1)
Calculate the skew symmetric matrix e'_skew of e'
Calculate the Fundamental matrix F: e'_skew * P_right * P+
Calculate the epipolar line l' on the right image: F * p
Calculate a point p' in the right image: P_right * P+ * p
Transform p' and l' back to pixel coordinates
Draw the line l' through p' using cv2.line
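For reference, steps 3 to 7 can be written down compactly in numpy. A sketch, assuming P_left = K[I|0] (which is what the epipole formula in step 4 implies), with P_left and P_right as 3x4 arrays:

import numpy as np

def fundamental_from_projections(P_left, P_right):
    P_pinv = np.linalg.pinv(P_left)                    # step 3: P+
    e2 = P_right.dot(np.array([0.0, 0.0, 0.0, 1.0]))   # step 4: epipole e'
    e2_skew = np.array([[0, -e2[2], e2[1]],            # step 5: [e']_x
                        [e2[2], 0, -e2[0]],
                        [-e2[1], e2[0], 0]])
    return e2_skew.dot(P_right).dot(P_pinv)            # step 6: F = [e']_x P_right P+

# step 7: epipolar line in the right image for a clicked pixel (x, y)
# l_prime = fundamental_from_projections(P_left, P_right).dot(np.array([x, y, 1.0]))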
I did this just a few days ago and it works fine. Here's the method I used:
Calibrate the camera(s) to obtain the camera matrices and distortion matrices (using OpenCV getCorners and calibrateCamera; you can find lots of tutorials on this, but it sounds like you already have this info).
Perform stereo calibration with OpenCV stereoCalibrate(). It takes as parameters all of the camera and distortion matrices. You need this to determine the correlation between the two visual fields. You will get back several matrices: the rotation matrix R, translation vector T, essential matrix E and fundamental matrix F.
Then do undistortion using OpenCV getOptimalNewCameraMatrix and undistort(). This will get rid of a lot of camera aberrations (it will give you better results).
Finally, use OpenCV's computeCorrespondEpilines to calculate the lines and plot them. I will include some code below that you can try out in Python. When I run it, I get images like this (the colored points have their corresponding epilines drawn in the other image):
Here's some code (Python 3). It uses two static images and static points, but you could easily select the points with the cursor. You can also refer to the OpenCV docs on calibration and stereo calibration here.
import os
import cv2
import numpy as np

# find object corners from chessboard pattern and create a correlation with image corners
def getCorners(images, chessboard_size, show=True):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
    objp = np.zeros((chessboard_size[1] * chessboard_size[0], 3), np.float32)
    objp[:, :2] = np.mgrid[0:chessboard_size[0], 0:chessboard_size[1]].T.reshape(-1, 2)*3.88  # multiply by 3.88 for large chessboard squares
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d point in real world space
    imgpoints = []  # 2d points in image plane.
    for image in images:
        frame = cv2.imread(image)
        # height, width, channels = frame.shape  # get image parameters
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, chessboard_size, None)  # Find the chess board corners
        if ret:  # if corners were found
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)  # refine corners
            imgpoints.append(corners2)  # add to corner array
            if show:
                # Draw and display the corners
                frame = cv2.drawChessboardCorners(frame, chessboard_size, corners2, ret)
                cv2.imshow('frame', frame)
                cv2.waitKey(100)
    cv2.destroyAllWindows()  # close open windows
    return objpoints, imgpoints, gray.shape[::-1]

# perform undistortion on provided image
def undistort(image, mtx, dist):
    img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
    image = os.path.splitext(image)[0]
    h, w = img.shape[:2]
    newcameramtx, _ = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    return dst

# draw the provided points on the image
def drawPoints(img, pts, colors):
    for pt, color in zip(pts, colors):
        cv2.circle(img, tuple(pt[0]), 5, color, -1)

# draw the provided lines on the image
def drawLines(img, lines, colors):
    _, c, _ = img.shape
    for r, color in zip(lines, colors):
        x0, y0 = map(int, [0, -r[2]/r[1]])
        x1, y1 = map(int, [c, -(r[2]+r[0]*c)/r[1]])
        cv2.line(img, (x0, y0), (x1, y1), color, 1)

if __name__ == '__main__':
    # mtxL/distL, mtxR/distR, F, chessboard_size, colors and
    # combineSideBySide come from the calibration code omitted here
    # undistort our chosen images using the left and right camera and distortion matricies
    imgL = undistort("2L/2L34.bmp", mtxL, distL)
    imgR = undistort("2R/2R34.bmp", mtxR, distR)
    imgL = cv2.cvtColor(imgL, cv2.COLOR_GRAY2BGR)
    imgR = cv2.cvtColor(imgR, cv2.COLOR_GRAY2BGR)
    # use getCorners to get the new image locations of the checkerboard corners (undistort will have moved them a little)
    _, imgpointsL, _ = getCorners(["2L34_undistorted.bmp"], chessboard_size, show=False)
    _, imgpointsR, _ = getCorners(["2R34_undistorted.bmp"], chessboard_size, show=False)
    # get 3 image points of interest from each image and draw them
    ptsL = np.asarray([imgpointsL[0][0], imgpointsL[0][10], imgpointsL[0][20]])
    ptsR = np.asarray([imgpointsR[0][5], imgpointsR[0][15], imgpointsR[0][25]])
    drawPoints(imgL, ptsL, colors[3:6])
    drawPoints(imgR, ptsR, colors[0:3])
    # find epilines corresponding to points in the right image and draw them on the left image
    epilinesR = cv2.computeCorrespondEpilines(ptsR.reshape(-1, 1, 2), 2, F)
    epilinesR = epilinesR.reshape(-1, 3)
    drawLines(imgL, epilinesR, colors[0:3])
    # find epilines corresponding to points in the left image and draw them on the right image
    epilinesL = cv2.computeCorrespondEpilines(ptsL.reshape(-1, 1, 2), 1, F)
    epilinesL = epilinesL.reshape(-1, 3)
    drawLines(imgR, epilinesL, colors[3:6])
    # combine the corresponding images into one and display them
    combineSideBySide(imgL, imgR, "epipolar_lines", save=True)
Hopefully this helps!

Camera calibration for Structure from Motion with OpenCV (Python)

I want to calibrate a car video recorder and use it for 3D reconstruction with Structure from Motion (SfM). The original size of the pictures I took with this camera is 1920x1080. Basically, I have been using the source code from the OpenCV tutorial for the calibration.
But there are some problems and I would really appreciate any help.
So, as usual (at least in the above source code), here is the pipeline:
Find the chessboard corner with findChessboardCorners
Get its subpixel value with cornerSubPix
Draw it for visualisation with drawChessboardCorners
Then, we calibrate the camera with a call to calibrateCamera
Call the getOptimalNewCameraMatrix and the undistort function to undistort the image
In my case, since the image is too big (1920x1080), I have resized it to 640x360 (during SfM I will also use this image size, so I don't think that will be a problem). I have also used a 9x6 chessboard for the calibration.
Here the problem arose. After the call to getOptimalNewCameraMatrix, the distortion comes out totally wrong. Even the returned ROI is [0,0,0,0]. Below are the original image and its undistorted version:
You can see that the content of the undistorted image is squeezed into the bottom left.
But if I don't call getOptimalNewCameraMatrix and just undistort directly, I get a quite good image.
So, I have three questions.
Why is this? I have tried another dataset taken with the same camera, and also my iPhone 6 Plus, but the results are the same as above.
Another question is: what does getOptimalNewCameraMatrix do? I have read the documentation several times but still cannot understand it. From what I have observed, if I don't call getOptimalNewCameraMatrix, my image retains its size but is zoomed and blurred. Can anybody explain this function in more detail?
For SfM, I guess the call to getOptimalNewCameraMatrix is important? Because otherwise the undistorted image would be zoomed and blurred, making keypoint detection harder (in my case, I will be using optical flow)?
I have tested the code with the OpenCV sample pictures and the results are fine.
Below is my source code:
from sys import argv
import numpy as np
import imutils  # for imutils.resize: resizing while preserving the image's
                # aspect ratio; in this case, resizing 1920x1080 into 640x360
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
images = glob.glob(argv[1] + '*.jpg')
width = 640
for fname in images:
    img = cv2.imread(fname)
    if width:
        img = imutils.resize(img, width=width)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9,6), corners2, ret)
        cv2.imshow('img', img)
        cv2.waitKey(500)
cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

for fname in images:
    img = cv2.imread(fname)
    if width:
        img = imutils.resize(img, width=width)
    h, w = img.shape[:2]
    newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
    # undistort
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    # crop the image
    x, y, w, h = roi
    dst = dst[y:y+h, x:x+w]
    cv2.imshow("undistorted", dst)
    cv2.waitKey(500)

mean_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error
print "total error: ", mean_error / len(objpoints)
I already asked someone on answers.opencv.org; he tried my code and my dataset with success. I wonder what is actually wrong.
Question #2:
With cv::getOptimalNewCameraMatrix(...) you can compute a new camera matrix according to the free scaling parameter alpha.
If alpha is set to 1, all the source image pixels are retained in the undistorted image, so you'll see a black, curved border along the undistorted image (like a pincushion). This scenario is unlucky for several computer vision algorithms, because, for example, new edges appear in the undistorted image.
By default cv::undistort(...) regulates the subset of the source image that will be visible in the corrected image, and that's why only the sensible pixels are shown there: no pincushion around the corrected image, but data loss.
Anyway, you can control the subset of the source image that will be visible in the corrected image:
cv::Mat image, cameraMatrix, distCoeffs;
// ...
cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, image.size(), 1.0);
cv::Mat correctedImage;
cv::undistort(image, correctedImage, cameraMatrix, distCoeffs, newCameraMatrix);
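In the Python API used in the question, the equivalent is roughly:

newCameraMatrix, roi = cv2.getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, (w, h), 1.0)
correctedImage = cv2.undistort(image, cameraMatrix, distCoeffs, None, newCameraMatrix)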
Question #1:
It is just my feeling, but you should also take care: if you resize your image after the calibration, then the camera matrix must be "scaled" as well, for example:
cv::Mat cameraMatrix;
cv::Size calibSize; // Image during the calibration, e.g. 1920x1080
cv::Size imageSize; // Your current image size, e.g. 640x360
// ...
cv::Matx31d t(0.0, 0.0, 1.0);
t(0) = (double)imageSize.width / (double)calibSize.width;
t(1) = (double)imageSize.height / (double)calibSize.height;
cv::Mat cameraMatrixScaled = cv::Mat::diag(cv::Mat(t)) * cameraMatrix;
This must be done only for the camera matrix, because the distortion coefficients do not depend on the resolution.
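The same scaling in Python, with the question's numbers (a sketch; K is the 3x3 matrix returned by calibrateCamera):

import numpy as np

def scale_camera_matrix(K, calib_size, image_size):
    # scale fx/cx by the width ratio and fy/cy by the height ratio
    sx = image_size[0] / float(calib_size[0])
    sy = image_size[1] / float(calib_size[1])
    return np.diag([sx, sy, 1.0]).dot(K)

# K_scaled = scale_camera_matrix(K, (1920, 1080), (640, 360))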
Question #3:
Anyway, I think cv::getOptimalNewCameraMatrix(...) is not important in your case: the undistorted image can be zoomed and blurred because you remove the effect of a non-linear transformation. If I were you, I would try the optical flow without calling cv::undistort(...). I think that even a distorted image can contain a lot of good features for tracking.

Stereo Calibration Opencv Python and Disparity Map

I am interested in finding the disparity map of a scene. To start with, I did stereo calibration using the following code (I wrote it myself with a little help from Google, after failing to find any helpful tutorials written in Python for OpenCV 2.4.10).
I took images of a chessboard simultaneously on both cameras and saved them as left*.jpg and right*.jpg.
import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpointsL = [] # 3d point in real world space
imgpointsL = [] # 2d points in image plane.
objpointsR = []
imgpointsR = []

images = glob.glob('left*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayL = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersL = cv2.findChessboardCorners(grayL, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsL.append(objp)
        cv2.cornerSubPix(grayL, cornersL, (11,11), (-1,-1), criteria)
        imgpointsL.append(cornersL)

images = glob.glob('right*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayR = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersR = cv2.findChessboardCorners(grayR, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsR.append(objp)
        cv2.cornerSubPix(grayR, cornersR, (11,11), (-1,-1), criteria)
        imgpointsR.append(cornersR)

retval, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (320,240))
How do I rectify the images? What other steps should I do before going on to find the disparity map? I read somewhere that while calculating the disparity map, the features detected in both frames should lie on the same horizontal line. Please help me out here. Any help would be much appreciated.
You need cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2 and a "newCameraMatrix" for cv2.undistort().
You can get the "newCameraMatrix" using cv2.getOptimalNewCameraMatrix().
So in the remainder of your script, paste this:
# Assuming you have left01.jpg and right01.jpg that you want to rectify
lFrame = cv2.imread('left01.jpg')
rFrame = cv2.imread('right01.jpg')
h, w = lFrame.shape[:2] # both frames should be of same shape
frames = [lFrame, rFrame]

# Params from camera calibration
camMats = [cameraMatrix1, cameraMatrix2]
distCoeffs = [distCoeffs1, distCoeffs2]

camSources = [0,1]
for src in camSources:
    distCoeffs[src][0][4] = 0.0 # zero the last (k3) radial coefficient in distCoeffs

# The rectification process
newCams = [0,0]
roi = [0,0]
for src in camSources:
    newCams[src], roi[src] = cv2.getOptimalNewCameraMatrix(cameraMatrix = camMats[src],
                                                           distCoeffs = distCoeffs[src],
                                                           imageSize = (w,h),
                                                           alpha = 0)

rectFrames = [0,0]
for src in camSources:
    rectFrames[src] = cv2.undistort(frames[src],
                                    camMats[src],
                                    distCoeffs[src],
                                    None,
                                    newCams[src]) # use the new camera matrix computed above

# See the results
view = np.hstack([frames[0], frames[1]])
rectView = np.hstack([rectFrames[0], rectFrames[1]])

cv2.imshow('view', view)
cv2.imshow('rectView', rectView)

# Wait indefinitely for any keypress
cv2.waitKey(0)
Hope that gets you on your way to the next thing, which might be calculating "disparity maps" ;)
Reference:
http://www.janeriksolem.net/2014/05/how-to-calibrate-camera-with-opencv-and.html
Try this code; I have already been able to solve the mistake:
retVal, cm1, dc1, cm2, dc2, r, t, e, f = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (320, 240), None, None,None,None)
Firstly, use the OpenCV calibration application or the MATLAB calibration toolbox to calculate your camera parameters. With the parameters, you can rectify your images.
After rectification, refer to the Python sample in OpenCV's codebase (samples/python/stereo_match.py) to compute the disparity map.
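Once the pair is rectified, the disparity computation itself boils down to a block matcher. A minimal sketch with StereoBM (class name as in OpenCV 3+; in 2.4 the class is cv2.StereoBM), assuming rectL and rectR are your rectified grayscale images:

import cv2

# numDisparities must be a multiple of 16; blockSize must be odd
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(rectL, rectR)
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
cv2.imshow('disparity', disp_vis)
cv2.waitKey(0)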
