I am interested in finding the disparity map of a scene. To start with, I did stereo calibration using the following code (I wrote it myself with a little help from Google, after failing to find any helpful tutorials written in Python for OpenCV 2.4.10).
I took images of a chessboard simultaneously on both cameras and saved them as left*.jpg and right*.jpg.
import numpy as np
import cv2
import glob
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpointsL = [] # 3d point in real world space
imgpointsL = [] # 2d points in image plane.
objpointsR = []
imgpointsR = []
images = glob.glob('left*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayL = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersL = cv2.findChessboardCorners(grayL, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsL.append(objp)
        cv2.cornerSubPix(grayL, cornersL, (11,11), (-1,-1), criteria)
        imgpointsL.append(cornersL)
images = glob.glob('right*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayR = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersR = cv2.findChessboardCorners(grayR, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsR.append(objp)
        cv2.cornerSubPix(grayR, cornersR, (11,11), (-1,-1), criteria)
        imgpointsR.append(cornersR)
retval,cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (320,240))
How do I rectify the images? What other steps should I do before going on to find the disparity map? I read somewhere that while calculating the disparity map, the features detected on both frames should lie on the same horizontal line. Please help me out here. Any help would be much appreciated.
You need cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, and a newCameraMatrix for cv2.undistort().
You can get the newCameraMatrix using cv2.getOptimalNewCameraMatrix().
So paste this into the remainder of your script:
# Assuming you have left01.jpg and right01.jpg that you want to rectify
lFrame = cv2.imread('left01.jpg')
rFrame = cv2.imread('right01.jpg')
h, w = lFrame.shape[:2] # both frames should be of the same shape; note shape is (height, width)
frames = [lFrame, rFrame]
# Params from camera calibration
camMats = [cameraMatrix1, cameraMatrix2]
distCoeffs = [distCoeffs1, distCoeffs2]
camSources = [0,1]
for src in camSources:
    distCoeffs[src][0][4] = 0.0 # zero out k3, the fifth distortion coefficient
# The rectification process
newCams = [0,0]
roi = [0,0]
for src in camSources:
    newCams[src], roi[src] = cv2.getOptimalNewCameraMatrix(cameraMatrix=camMats[src],
                                                           distCoeffs=distCoeffs[src],
                                                           imageSize=(w, h),
                                                           alpha=0)
rectFrames = [0, 0]
for src in camSources:
    rectFrames[src] = cv2.undistort(frames[src],
                                    camMats[src],
                                    distCoeffs[src],
                                    None,
                                    newCams[src]) # pass the new camera matrix computed above
# See the results
view = np.hstack([frames[0], frames[1]])
rectView = np.hstack([rectFrames[0], rectFrames[1]])
cv2.imshow('view', view)
cv2.imshow('rectView', rectView)
# Wait indefinitely for any keypress
cv2.waitKey(0)
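Note that cv2.undistort() alone removes lens distortion but does not row-align the two views. For full stereo rectification (so that corresponding features end up on the same horizontal line, as asked above), here is a minimal sketch using cv2.stereoRectify, assuming R and T come from the cv2.stereoCalibrate call in the question and (w, h), lFrame, rFrame are as above:
# Hedged sketch: full stereo rectification via cv2.stereoRectify
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, (w, h), R, T, alpha=0)
mapLx, mapLy = cv2.initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1, (w, h), cv2.CV_32FC1)
mapRx, mapRy = cv2.initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2, (w, h), cv2.CV_32FC1)
rectL = cv2.remap(lFrame, mapLx, mapLy, cv2.INTER_LINEAR)
rectR = cv2.remap(rFrame, mapRx, mapRy, cv2.INTER_LINEAR)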
hope that gets you on your way to the next thing which might be calculating "disparity maps" ;)
Reference:
http://www.janeriksolem.net/2014/05/how-to-calibrate-camera-with-opencv-and.html
Try this code; this is how I was able to fix the error (in OpenCV 3 the four intrinsic arguments come before the image size):
retVal, cm1, dc1, cm2, dc2, r, t, e, f = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, None, None, None, None, (320, 240))
First, use the OpenCV calibration application or the MATLAB calibration toolbox to compute your camera parameters. With those parameters, you can rectify your images.
After rectification, refer to the Python sample in OpenCV's codebase (samples/python/stereo_match.py) to compute the disparity map.
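For reference, a minimal disparity computation in the 2.4-era API (the version the question uses), assuming grayL and grayR are the rectified grayscale left/right images:
# Hedged sketch: block-matching disparity on rectified grayscale images
stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET, ndisparities=64, SADWindowSize=15)
disparity = stereo.compute(grayL, grayR)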
I am trying to calibrate two cameras. I want to calibrate each one individually, and at this point my script can calibrate both cameras successfully. But now I want to use those calibrated cameras in real time. The code I am using is the one available in the OpenCV documentation.
Below is the code. I'll only share this part because it's the part that isn't working as I want.
def calibrateCamera(self, chessboardRows=9, chessboardCols=6, imshow=False):
    self.chessboardRows = chessboardRows
    self.chessboardCols = chessboardCols
    self.imshow = imshow
    chessboardSize = (self.chessboardRows, self.chessboardCols)
    criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    objp = np.zeros((self.chessboardCols*self.chessboardRows, 3), np.float32)
    objp[:, :2] = np.mgrid[0:self.chessboardRows, 0:self.chessboardCols].T.reshape(-1, 2)
    objpoints = []
    imgpoints = []
    for path, index in zip(self.paths, self.indices):
        images = glob.glob(path + "*.png")
        for img in images:
            frame = cv.imread(img)
            gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
            ret, corners = cv.findChessboardCorners(gray, chessboardSize, None)
            if ret == True:
                objpoints.append(objp)
                corners2 = cv.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
                imgpoints.append(corners2)
                cv.drawChessboardCorners(frame, chessboardSize, corners2, ret)
                if self.imshow == True:
                    cv.imshow(f"Calibrated images, Camera{index}", frame)
                    cv.waitKey(0)
            if ret == False:
                print("No pattern detected")
                break
        ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
        print(f"Camera{index} matrix\n", mtx)
        print(f"Camera{index} distortion coefficients\n", dist)
        h, w = frame.shape[:2]
        newcameramtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
        mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
        dst = cv.remap(frame, mapx, mapy, cv.INTER_LINEAR)
        x, y, w, h = roi
        dst = dst[y:y+h, x:x+w]
        cv.imshow('calibresult.png', dst)
        k = cv.waitKey(0)
Can anyone help me to use this "remap" in real time?
And, lastly, is there any limitation in terms of frame rate to use this kind of method in real time?
Thanks in advance,
From calibration (OpenCV's calibrateCamera(), not your own function), you gain "intrinsics", i.e. camera matrix and distortion coefficients.
Store those intrinsics.
Then call initUndistortRectifyMap() with those intrinsics. You receive lookup maps suitable for remap(). You do this once, not for every video frame.
Then you use remap() on video frames, using those maps.
remap() of an entire image is fast enough for real-time processing but it has some cost still.
If you can, do your processing on untouched camera images (those frames you have before you call remap()). Then undistort whatever point data you get from your processing. Undistorting points is also not cheap, but cheaper if done on a few points instead of an entire image.
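A minimal sketch of that flow, assuming the intrinsics were stored as .npy files and the camera is a cv2.VideoCapture source (the file names and capture index are illustrative):
import cv2 as cv
import numpy as np

# Illustrative: load stored intrinsics from a previous calibration run
mtx = np.load("camera_matrix.npy")
dist = np.load("dist_coeffs.npy")

cap = cv.VideoCapture(0)
ok, frame = cap.read()
h, w = frame.shape[:2]

# Build the undistortion lookup maps once, outside the loop
newcameramtx, _ = cv.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
mapx, mapy = cv.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), cv.CV_16SC2)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    undistorted = cv.remap(frame, mapx, mapy, cv.INTER_LINEAR)  # cheap per frame
    cv.imshow("undistorted", undistorted)
    if cv.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv.destroyAllWindows()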
As mentioned by Christoph, you should use cv.initUndistortRectifyMap only once, outside your loop, to generate the map. Then, at each frame, you can use cv.remap.
Remapping (or undistorting) the entire image comes at a cost (especially for large images). Working on distorted images and only undistorting some selected points might be a better option. The function you can use to do so is cv.undistortPoints.
More information is available in the documentation of OpenCV.
https://docs.opencv.org/4.6.0/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e
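For reference, a minimal cv.undistortPoints call (assuming mtx and dist are your intrinsics; the sample points are illustrative):
import cv2 as cv
import numpy as np

# pts: N x 1 x 2 float32 array of distorted pixel coordinates
pts = np.array([[[320.0, 240.0]], [[100.0, 50.0]]], dtype=np.float32)
# Passing P=mtx keeps the output in pixel coordinates instead of normalized ones
undistorted_pts = cv.undistortPoints(pts, mtx, dist, P=mtx)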
I am using a camera calibration routine and I want to calibrate a camera with a large set of images.
Code: (from here)
import numpy as np
import cv2
import glob
import argparse
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
def calibrate():
    """ Apply camera calibration operation for images in the given directory path. """
    height = 8
    width = 10
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(9,7,0)
    objp = np.zeros((height*width, 3), np.float32)
    objp[:, :2] = np.mgrid[0:width, 0:height].T.reshape(-1, 2)
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d point in real world space
    imgpoints = []  # 2d points in image plane.
    # Get the images
    images = glob.glob('thermal_final set/*.png')
    # Iterate through the images and find chessboard corners; add them to the arrays.
    # If OpenCV can't find the corners in an image, we discard the image.
    for fname in images:
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Find the chess board corners
        ret, corners = cv2.findChessboardCorners(gray, (width, height), None)
        # If found, add object points, image points (after refining them)
        if ret:
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners2)
            # Draw the corners on the image to check that the pattern was found
            img = cv2.drawChessboardCorners(img, (width, height), corners2, ret)
    e1 = cv2.getTickCount()
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
    e2 = cv2.getTickCount()
    t = (e2 - e1) / cv2.getTickFrequency()
    print(t)
    return [ret, mtx, dist, rvecs, tvecs]

if __name__ == '__main__':
    ret, mtx, dist, rvecs, tvecs = calibrate()
    print("Calibration is finished. RMS: ", ret)
Now, the problem is the time cv2.calibrateCamera() takes, which depends on the number of points (derived from the images) used.
Result with 40 images:
9.34462341234 seconds
Calibration is finished. RMS: 2.357820395255311
Result with 80 images:
66.378870749 seconds
Calibration is finished. RMS: 2.864052963156834
The time taken increases dramatically with the number of images.
Now, I have a really huge set of images (500).
I have tried calibrating the camera with points from a single image and then averaging all the results I get, but they differ from what I get with this method.
Also, I am sure that my setup is using optimized OpenCV, checked using:
print(cv2.useOptimized())
How do I make this process faster? Can I leverage threads here?
Edit: Updated the concept and language from "calibrating images" to "calibrating camera using images"
First, I strongly suspect the reason of your dramatic slowdown is memory related: you may be running out and starting to swap.
But the basic approach you seem to be following is incorrect. You don't calibrate images, you calibrate a camera, i.e. a lens + sensor combo.
Calibrating a camera means estimating the parameters of a mathematical model of that lens+sensor package. Therefore you only need to use enough independent data points to achieve the desired level of accuracy in the parameter estimation.
A couple dozen well chosen images will be enough most of the time, provided you are following a well designed calibration procedure. Some time ago I wrote a few tips on how to do such a design, you can find them here.
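If you do want to cut the runtime directly, a simple approach in line with the above is to subsample your image set before corner detection; a hedged sketch:
import glob
import random

images = glob.glob('thermal_final set/*.png')
random.seed(0)                                        # reproducible selection
subset = random.sample(images, min(30, len(images)))  # roughly a couple dozen views
# run the corner detection and cv2.calibrateCamera on `subset` instead of `images`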
How can I take two images of an object from different angles and draw epipolar lines on one based on points from the other?
For example, I would like to be able to select a point on the left picture using a mouse, mark the point with a circle, and then draw an epipolar line on the right image corresponding to the marked point.
I have 2 XML files which contain a 3x3 camera matrix and a list of 3x4 projection matrices for each picture. The camera matrix is K. The projection matrix for the left picture is P_left. The projection matrix for the right picture is P_right.
I have tried this approach:

1. Choose a pixel coordinate (x,y) in the left image (via mouse click)
2. Calculate a point p in the left image with K^-1 * (x,y,1)
3. Calculate the pseudo-inverse matrix P+ of P_left (using np.linalg.pinv)
4. Calculate the epipole e' of the right image: P_right * (0,0,0,1)
5. Calculate the skew-symmetric matrix e'_skew of e'
6. Calculate the fundamental matrix F: e'_skew * P_right * P+
7. Calculate the epipolar line l' on the right image: F * p
8. Calculate a point p' in the right image: P_right * P+ * p
9. Transform p' and l' back to pixel coordinates
10. Draw a line using cv2.line through p' and l'
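(For reference, steps 3-6 can be written as the following sketch, assuming P_left and P_right are loaded as 3x4 NumPy arrays; step 4 implicitly assumes the left camera sits at the origin, i.e. P_left = K[I|0].)
import numpy as np

def skew(v):
    # skew-symmetric cross-product matrix of a 3-vector
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

P_plus = np.linalg.pinv(P_left)                # step 3
e_prime = P_right @ np.array([0, 0, 0, 1.0])   # step 4: epipole in the right image
F = skew(e_prime) @ P_right @ P_plus           # steps 5-6: fundamental matrix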
I just did this a few days ago and it works just fine. Here's the method I used:

1. Calibrate the camera(s) to obtain the camera matrices and distortion matrices (using OpenCV getCorners and calibrateCamera; you can find lots of tutorials on this, but it sounds like you already have this info).
2. Perform stereo calibration with OpenCV stereoCalibrate(). It takes as parameters all of the camera and distortion matrices. You need this to determine the correlation between the two visual fields. You will get back several matrices: the rotation matrix R, translation vector T, essential matrix E and fundamental matrix F.
3. Then do undistortion using OpenCV getOptimalNewCameraMatrix and undistort(). This will get rid of a lot of camera aberrations (it will give you better results).
4. Finally, use OpenCV's computeCorrespondEpilines to calculate the lines and plot them. I will include some code below you can try out in Python. When I run it, I can get images like this (the colored points have their corresponding epilines drawn in the other image).

Here's some code (Python 3). It uses two static images and static points, but you could easily select the points with the cursor. You can also refer to the OpenCV docs on calibration and stereo calibration here.
import os

import cv2
import numpy as np
# find object corners from chessboard pattern and create a correlation with image corners
def getCorners(images, chessboard_size, show=True):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
    objp = np.zeros((chessboard_size[1] * chessboard_size[0], 3), np.float32)
    objp[:, :2] = np.mgrid[0:chessboard_size[0], 0:chessboard_size[1]].T.reshape(-1, 2)*3.88  # multiply by 3.88 for large chessboard squares
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d point in real world space
    imgpoints = []  # 2d points in image plane.
    for image in images:
        frame = cv2.imread(image)
        # height, width, channels = frame.shape  # get image parameters
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, chessboard_size, None)  # Find the chess board corners
        if ret:  # if corners were found
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)  # refine corners
            imgpoints.append(corners2)  # add to corner array
            if show:
                # Draw and display the corners
                frame = cv2.drawChessboardCorners(frame, chessboard_size, corners2, ret)
                cv2.imshow('frame', frame)
                cv2.waitKey(100)
    cv2.destroyAllWindows()  # close open windows
    return objpoints, imgpoints, gray.shape[::-1]
# perform undistortion on provided image
def undistort(image, mtx, dist):
    img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
    image = os.path.splitext(image)[0]
    h, w = img.shape[:2]
    newcameramtx, _ = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    return dst
# draw the provided points on the image
def drawPoints(img, pts, colors):
    for pt, color in zip(pts, colors):
        cv2.circle(img, tuple(map(int, pt[0])), 5, color, -1)  # cv2.circle needs integer coordinates
# draw the provided lines on the image
def drawLines(img, lines, colors):
    _, c, _ = img.shape
    for r, color in zip(lines, colors):
        x0, y0 = map(int, [0, -r[2]/r[1]])
        x1, y1 = map(int, [c, -(r[2]+r[0]*c)/r[1]])
        cv2.line(img, (x0, y0), (x1, y1), color, 1)
if __name__ == '__main__':
    # Note: mtxL, distL, mtxR, distR, F, chessboard_size and colors are assumed to come
    # from the calibration / stereo-calibration steps described above (elided here), and
    # combineSideBySide is a small helper that stacks two images side by side.
    # undistort our chosen images using the left and right camera and distortion matrices
    imgL = undistort("2L/2L34.bmp", mtxL, distL)
    imgR = undistort("2R/2R34.bmp", mtxR, distR)
    imgL = cv2.cvtColor(imgL, cv2.COLOR_GRAY2BGR)
    imgR = cv2.cvtColor(imgR, cv2.COLOR_GRAY2BGR)
    # use getCorners to get the new image locations of the chessboard corners (undistort will have moved them a little)
    _, imgpointsL, _ = getCorners(["2L34_undistorted.bmp"], chessboard_size, show=False)
    _, imgpointsR, _ = getCorners(["2R34_undistorted.bmp"], chessboard_size, show=False)
    # get 3 image points of interest from each image and draw them
    ptsL = np.asarray([imgpointsL[0][0], imgpointsL[0][10], imgpointsL[0][20]])
    ptsR = np.asarray([imgpointsR[0][5], imgpointsR[0][15], imgpointsR[0][25]])
    drawPoints(imgL, ptsL, colors[3:6])
    drawPoints(imgR, ptsR, colors[0:3])
    # find epilines corresponding to points in right image and draw them on the left image
    epilinesR = cv2.computeCorrespondEpilines(ptsR.reshape(-1, 1, 2), 2, F)
    epilinesR = epilinesR.reshape(-1, 3)
    drawLines(imgL, epilinesR, colors[0:3])
    # find epilines corresponding to points in left image and draw them on the right image
    epilinesL = cv2.computeCorrespondEpilines(ptsL.reshape(-1, 1, 2), 1, F)
    epilinesL = epilinesL.reshape(-1, 3)
    drawLines(imgR, epilinesL, colors[3:6])
    # combine the corresponding images into one and display them
    combineSideBySide(imgL, imgR, "epipolar_lines", save=True)
Hopefully this helps!
I was doing stereo camera calibration using Python 2.7 and OpenCV 3.3. The code I used is (I got it from Stereo Calibration Opencv Python and Disparity Map):
import numpy as np
import cv2
import glob
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpointsL = [] # 3d point in real world space
imgpointsL = [] # 2d points in image plane.
objpointsR = []
imgpointsR = []
images = glob.glob('left*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayL = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersL = cv2.findChessboardCorners(grayL, (7,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsL.append(objp)
        cv2.cornerSubPix(grayL, cornersL, (11,11), (-1,-1), criteria)
        imgpointsL.append(cornersL)
images = glob.glob('right*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayR = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersR = cv2.findChessboardCorners(grayR, (7,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsR.append(objp)
        cv2.cornerSubPix(grayR, cornersR, (11,11), (-1,-1), criteria)
        imgpointsR.append(cornersR)
retval,cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (640,480))
But I got an error like:
retval,cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (640,480))
TypeError: Required argument 'distCoeffs1' (pos 5) not found
I have my code and the left and right images in the same folder. I have read other similar answers, but they don't get this error (Stereo Calibration Opencv Python and Disparity Map). I want to know why I get this error and how to solve it.
Try changing the last lines of your code to
cameraMatrix1 = None
distCoeffs1 = None
cameraMatrix2 = None
distCoeffs2 = None
R = None
T = None
E = None
F = None
retval, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(
    objpointsL, imgpointsL, imgpointsR,
    cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
    (640, 480), flags=cv2.CALIB_FIX_INTRINSIC)
In OpenCV 3 the four intrinsic arguments are required positional parameters that come before the image size (hence the TypeError), and the old cv2.cv.CV_CALIB_FIX_INTRINSIC constant is now cv2.CALIB_FIX_INTRINSIC. Note that CALIB_FIX_INTRINSIC only makes sense if you pass real intrinsics from a per-camera calibration; with None placeholders, drop the flag and let stereoCalibrate estimate them.
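A hedged sketch of the more typical flow (calibrate each camera on its own first, then fix those intrinsics during stereo calibration), reusing the point lists and the (640, 480) image size from the question:
# Per-camera intrinsics first
retL, cm1, dc1, rvL, tvL = cv2.calibrateCamera(objpointsL, imgpointsL, (640, 480), None, None)
retR, cm2, dc2, rvR, tvR = cv2.calibrateCamera(objpointsR, imgpointsR, (640, 480), None, None)
# Then stereo calibration with the intrinsics held fixed
retval, cm1, dc1, cm2, dc2, R, T, E, F = cv2.stereoCalibrate(
    objpointsL, imgpointsL, imgpointsR, cm1, dc1, cm2, dc2,
    (640, 480), flags=cv2.CALIB_FIX_INTRINSIC)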
I'm calibrating my GoPro following the OpenCV tutorial. To calibrate, I have a bunch of pictures with a chessboard in different locations. I then plot the 3D axes on top of the chessboard, and everything looks fine; the calibration seems good:
Then I want to see the undistorted version of the image without cropping anything and I get this:
Which clearly doesn't make any sense. I have tried to do the same thing with another set of calibration images and it worked:
I don't understand why it didn't work with the first set of pictures. Any ideas?
Here's the relevant code:
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
files = os.listdir('frames')
for fname in files:
    g = re.match(r'frame(\d+).png', fname)
    n = g.groups()[0]
    if n not in numbers:
        os.remove('frames/%s' % fname)
        continue
    img = cv2.imread('frames/%s' % fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        img = cv2.drawChessboardCorners(img, (9,6), corners2, ret)
        objpoints.append(objp)
        imgpoints.append(corners2)
        cv2.imshow('img', img)
        cv2.waitKey(1)
cv2.destroyAllWindows()
ret, K, D, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)
# K: intrinsic matrix - camera matrix
# D: Distortion coefficients
# Compute reconstruction error
mean_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], K, D)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error
print "total error: ", mean_error/len(objpoints)
ret, rvec, tvec = cv2.solvePnP(objp, corners2, K, D)
# project 3D points to image plane Using openCV
imgpts, jac = cv2.projectPoints(axis, rvec, tvec, K, D)
img = draw(img,corners2,imgpts)
cv2.imshow('img',img)
k = cv2.waitKey(0) & 0xff
f = file('camera.pkl','wb')
cPickle.dump((K,D,rvec,tvec),f)
f.close()
#get the Undistorted image
h, w = img.shape[:2]
newcameraK, roi=cv2.getOptimalNewCameraMatrix(K,D,(w,h),1,(w,h))
dst = cv2.undistort(img, K, D, None, newcameraK)
plt.figure()
plt.axis("off")
plt.imshow(cv2.cvtColor(dst,cv2.COLOR_BGR2RGB))
plt.show()
Micka's point is important: make sure the calibration grid fills as much of the image as possible, or at least include some pictures where the checkerboard is near the image boundaries.
The issue is that distortion is small near the center and large at the boundaries. A large distortion therefore doesn't affect the center of the image very much; equivalently, a small error in estimating the distortion from the center of the image can lead to a very wrong overall estimate of the distortion.
Redundant images matter "a little": they effectively make those images (and therefore the location of the checkerboard pattern in those images) more important.
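On the center-versus-boundary point above: OpenCV's radial model scales a normalized point at radius r by (1 + k1*r^2 + k2*r^4 + k3*r^6), so near the center (r close to 0) the correction is tiny regardless of the coefficients, while at the boundary small coefficient errors are amplified. A quick numeric illustration with hypothetical coefficients:
k1, k2, k3 = -0.3, 0.1, 0.0           # hypothetical radial coefficients
for r in (0.05, 0.5, 1.0):            # normalized radius: near center vs. boundary
    scale = 1 + k1*r**2 + k2*r**4 + k3*r**6
    print("r=%.2f -> radial scale %.4f" % (r, scale))
# The deviation from 1.0 grows rapidly with r, so coefficient errors barely
# show near the center but dominate at the image edges.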
I also observed the same effect, but even stranger: when I use getOptimalNewCameraMatrix, my resulting images also have the strong circular distortion effect shown above, and changing the alpha does not solve it. BUT: when I use the same distortion/calibration results WITHOUT getOptimalNewCameraMatrix (i.e. the same fx/fy/cx/cy as in the camera calibration result), I get a very good undistortion result... very strange.
Could it be that in some situations the getOptimalNewCameraMatrix function is buggy? My calibration (images etc.) looks very good, and there are points near the edges too.