OpenCV Python camera calibration: the objp matrix

This is the camera calibration code from the OpenCV Python docs. I want to know how objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2) works. What does reshape(-1,2) do? I tried changing the values in this line but got errors. Can someone explain how it works and why only these numbers work?
import numpy as np
import cv2
import glob
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
print "objp: ",objp
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
print "objp: ",objp
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
images = glob.glob('left*.jpg')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6),None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
        imgpoints.append(corners)
        # Draw and display the corners
        cv2.drawChessboardCorners(img, (7,6), corners,ret)
        cv2.imshow('img',img)
        cv2.waitKey(500)
cv2.destroyAllWindows()
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None) # camera matrix, distortion coefficients, rotation and translation vectors
Also, objpoints are 3D real-world coordinates. These should be manually measured, right? Why do we assign points from (0,0,0), (1,0,0), (2,0,0) ...., (6,5,0)?
Thanks. Any help would be appreciated.

You should post the error message and the exception, so we can help you fix it.
-1 means "infer this axis from the total number of elements":
np.mgrid[0:7,0:6].T.reshape(-1,2)
you can split the code as following:
a = np.mgrid[0:7, 0:6]
b = a.T
c = b.reshape(-1, 2)
print a.shape, b.shape, c.shape
the output is:
(2, 7, 6) (6, 7, 2) (42, 2)
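The -1 just tells NumPy to infer that axis from the total element count: 2*7*6 = 84 elements with 2 columns forces 42 rows. A tiny self-contained check (my own illustration, not from the original answer) of why arbitrary numbers fail:
import numpy as np
a = np.arange(12)
print a.reshape(-1, 2).shape   # (6, 2): 12 elements / 2 columns = 6 rows
print a.reshape(-1, 3).shape   # (4, 3)
# a.reshape(-1, 5) raises ValueError: 12 is not divisible by 5,
# which is the kind of error you get when changing the numbers arbitrarily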
If the chained expression is hard to read, it builds the same grid of coordinate pairs as:
x, y = np.mgrid[0:7, 0:6]
np.c_[x.ravel(), y.ravel()]
(note that the two results list the points in a different row order; for calibration the object-point order has to match the order in which the corners are detected)
objpoints are 3D real-world coordinates, but the unit of length is arbitrary, so you don't need to measure anything as long as all the squares have the same edge length. If each edge is 16 cm, then a distance of 1 in objp corresponds to 16 cm.
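For example, a minimal sketch of building objp in real units (the 24 mm square size is an assumed value, not from the question):
import numpy as np
square_size = 0.024  # edge length of one chessboard square in metres (assumed value)
objp = np.zeros((6*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2) * square_size
# translation vectors from calibrateCamera / solvePnP now come out in metres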

Related

OpenCV: Optimized window size for cv.cornerSubPix()

I am writing code to calibrate cameras with Zhang's checkerboard method, and I am trying to find the best way to reduce the mean error when locating the corners.
As I understand it, a window size as big as possible gives the best results, but if it is too big (and more than one corner falls inside the window) it completely falsifies the results.
I would like to know if there is a better way than trying all possible combinations or asking a human to check that the corners have been correctly placed on all images.
Computational time is not a big issue here; it can take some time to get the best results.
For the moment, I use the code from the OpenCV tutorial (here giving me a window size of (23,23)):
import glob
import numpy as np
import cv2 as cv

# termination criteria
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
images = glob.glob('*.jpg')
for fname in images:
    img = cv.imread(fname)
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv.findChessboardCorners(gray, (7,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        corners2 = cv.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # Draw and display the corners
        cv.drawChessboardCorners(img, (7,6), corners2, ret)
        cv.imshow('img', img)
        cv.waitKey(500)
cv.destroyAllWindows()
My point is: what do I have to change in this part to get the best localization of the corners, i.e. how do I find the most appropriate winSize (here the (11,11))?
corners2 = cv.cornerSubPix(gray,corners, (11,11), (-1,-1), criteria)
Thank you
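A possible heuristic (a sketch and an assumption on my part, not an official OpenCV recommendation): derive winSize from the detected corner spacing, capping the half-window just below half the minimum distance between corners so the search window can never contain two corners:
import numpy as np

def auto_winsize(corners, cap=23):
    # Heuristic sketch: half-window just under half the minimum corner spacing,
    # so the cornerSubPix search window can never contain a neighbouring corner.
    pts = corners.reshape(-1, 2)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    half = int(d.min() / 2) - 1
    half = max(2, min(half, cap))
    return (half, half)

# usage, replacing the fixed (11,11):
# corners2 = cv.cornerSubPix(gray, corners, auto_winsize(corners), (-1,-1), criteria)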

How to calculate an epipolar line with a stereo pair of images in Python OpenCV

How can I take two images of an object from different angles and draw epipolar lines on one based on points from the other?
For example, I would like to be able to select a point on the left picture using a mouse, mark the point with a circle, and then draw an epipolar line on the right image corresponding to the marked point.
I have 2 XML files which contain a 3x3 camera matrix and a list of 3x4 projection matrices for each picture. The camera matrix is K. The projection matrix for the left picture is P_left. The projection matrix for the right picture is P_right.
I have tried this approach (a code sketch follows the list):
1. Choose a pixel coordinate (x,y) in the left image (via mouse click)
2. Calculate a point p in the left image with K^-1 * (x,y,1)
3. Calculate the pseudo-inverse matrix P+ of P_left (using np.linalg.pinv)
4. Calculate the epipole e' of the right image: P_right * (0,0,0,1)
5. Calculate the skew-symmetric matrix e'_skew of e'
6. Calculate the fundamental matrix F: e'_skew * P_right * P+
7. Calculate the epipolar line l' on the right image: F * p
8. Calculate a point p' in the right image: P_right * P+ * p
9. Transform p' and l' back to pixel coordinates
10. Draw a line using cv2.line through p' and l'
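A minimal numpy sketch of steps 3-7 (my own restatement of the approach above; the (0,0,0,1) centre assumes the left camera sits at the world origin, i.e. P_left = K[I|0]):
import numpy as np

def fundamental_from_projections(P_left, P_right):
    P_pinv = np.linalg.pinv(P_left)                  # P+ (step 3)
    e = P_right.dot(np.array([0.0, 0.0, 0.0, 1.0]))  # epipole e' (step 4)
    e_skew = np.array([[0, -e[2], e[1]],
                       [e[2], 0, -e[0]],
                       [-e[1], e[0], 0]])            # [e']_x (step 5)
    return e_skew.dot(P_right).dot(P_pinv)           # F (step 6)

# epipolar line on the right image for a homogeneous left-image pixel (x, y, 1):
# l_prime = fundamental_from_projections(P_left, P_right).dot([x, y, 1.0])  # step 7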
I just did this a few days ago and it works just fine. Here's the method I used:
1. Calibrate the camera(s) to obtain the camera matrices and distortion matrices (using OpenCV getCorners and calibrateCamera; you can find lots of tutorials on this, but it sounds like you already have this info).
2. Perform stereo calibration with OpenCV stereoCalibrate(). It takes as parameters all of the camera and distortion matrices. You need this to determine the correlation between the two visual fields. You get back several matrices: the rotation matrix R, translation vector T, essential matrix E and fundamental matrix F.
3. Then do undistortion using OpenCV getOptimalNewCameraMatrix and undistort(). This gets rid of a lot of camera aberrations (it will give you better results).
4. Finally, use OpenCV's computeCorrespondEpilines to calculate the lines and plot them. I will include some code below that you can try out in Python. When I run it, I get images where the colored points have their corresponding epilines drawn in the other image.
Here's some code (Python 3.0). It uses two static images and static points, but you could easily select the points with the cursor. You can also refer to the OpenCV docs on calibration and stereo calibration.
import os
import cv2
import numpy as np

# find object corners from chessboard pattern and create a correlation with image corners
def getCorners(images, chessboard_size, show=True):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
    objp = np.zeros((chessboard_size[1] * chessboard_size[0], 3), np.float32)
    objp[:, :2] = np.mgrid[0:chessboard_size[0], 0:chessboard_size[1]].T.reshape(-1, 2)*3.88  # multiply by 3.88 for large chessboard squares
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d point in real world space
    imgpoints = []  # 2d points in image plane.
    for image in images:
        frame = cv2.imread(image)
        # height, width, channels = frame.shape  # get image parameters
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, chessboard_size, None)  # Find the chess board corners
        if ret:  # if corners were found
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)  # refine corners
            imgpoints.append(corners2)  # add to corner array
            if show:
                # Draw and display the corners
                frame = cv2.drawChessboardCorners(frame, chessboard_size, corners2, ret)
                cv2.imshow('frame', frame)
                cv2.waitKey(100)
    cv2.destroyAllWindows()  # close open windows
    return objpoints, imgpoints, gray.shape[::-1]

# perform undistortion on provided image
def undistort(image, mtx, dist):
    img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
    image = os.path.splitext(image)[0]  # strip the extension (kept for file naming elsewhere)
    h, w = img.shape[:2]
    newcameramtx, _ = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    return dst

# draw the provided points on the image
def drawPoints(img, pts, colors):
    for pt, color in zip(pts, colors):
        cv2.circle(img, tuple(pt[0]), 5, color, -1)

# draw the provided lines (a, b, c) with a*x + b*y + c = 0 on the image
def drawLines(img, lines, colors):
    _, c, _ = img.shape
    for r, color in zip(lines, colors):
        x0, y0 = map(int, [0, -r[2]/r[1]])           # intersection with the left image edge
        x1, y1 = map(int, [c, -(r[2]+r[0]*c)/r[1]])  # intersection with the right image edge
        cv2.line(img, (x0, y0), (x1, y1), color, 1)

if __name__ == '__main__':
    # NOTE: mtxL, distL, mtxR, distR, F, colors, chessboard_size and
    # combineSideBySide come from the full script (the calibration and
    # stereoCalibrate steps are omitted in this excerpt)
    # undistort our chosen images using the left and right camera and distortion matrices
    imgL = undistort("2L/2L34.bmp", mtxL, distL)
    imgR = undistort("2R/2R34.bmp", mtxR, distR)
    imgL = cv2.cvtColor(imgL, cv2.COLOR_GRAY2BGR)
    imgR = cv2.cvtColor(imgR, cv2.COLOR_GRAY2BGR)
    # use getCorners to get the new image locations of the checkerboard corners (undistort will have moved them a little)
    _, imgpointsL, _ = getCorners(["2L34_undistorted.bmp"], chessboard_size, show=False)
    _, imgpointsR, _ = getCorners(["2R34_undistorted.bmp"], chessboard_size, show=False)
    # get 3 image points of interest from each image and draw them
    ptsL = np.asarray([imgpointsL[0][0], imgpointsL[0][10], imgpointsL[0][20]])
    ptsR = np.asarray([imgpointsR[0][5], imgpointsR[0][15], imgpointsR[0][25]])
    drawPoints(imgL, ptsL, colors[3:6])
    drawPoints(imgR, ptsR, colors[0:3])
    # find epilines corresponding to points in right image and draw them on the left image
    epilinesR = cv2.computeCorrespondEpilines(ptsR.reshape(-1, 1, 2), 2, F)
    epilinesR = epilinesR.reshape(-1, 3)
    drawLines(imgL, epilinesR, colors[0:3])
    # find epilines corresponding to points in left image and draw them on the right image
    epilinesL = cv2.computeCorrespondEpilines(ptsL.reshape(-1, 1, 2), 1, F)
    epilinesL = epilinesL.reshape(-1, 3)
    drawLines(imgR, epilinesL, colors[3:6])
    # combine the corresponding images into one and display them
    combineSideBySide(imgL, imgR, "epipolar_lines", save=True)
Hopefully this helps!

Strange results with OpenCV camera calibration

I'm calibrating my GoPro following the OpenCV tutorial. To calibrate, I have a bunch of pictures of a chessboard in different locations. I then plot the 3D axes on top of the chessboard and everything looks fine; the calibration seems good:
Then I want to see the undistorted version of the image without cropping anything, and I get this:
Which clearly doesn't make any sense. I tried the same thing with another set of calibration images and it worked:
I don't understand why it didn't work with the first set of pictures. Any ideas?
Here's the relevant code:
import os
import re
import cPickle
import cv2
import numpy as np
import matplotlib.pyplot as plt

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
# NOTE: 'numbers', 'axis' and draw() are defined elsewhere in the full script
files = os.listdir('frames')
for fname in files:
    g = re.match(r'frame(\d+).png', fname)
    n = g.groups()[0]
    if n not in numbers:
        os.remove('frames/%s' % fname)
        continue
    img = cv2.imread('frames/%s' % fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        img = cv2.drawChessboardCorners(img, (9,6), corners2, ret)
        objpoints.append(objp)
        imgpoints.append(corners2)
        cv2.imshow('img', img)
        cv2.waitKey(1)
cv2.destroyAllWindows()

ret, K, D, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
# K: intrinsic matrix - camera matrix
# D: distortion coefficients

# Compute reconstruction error
mean_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], K, D)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2)/len(imgpoints2)
    mean_error += error
print "total error: ", mean_error/len(objpoints)

ret, rvec, tvec = cv2.solvePnP(objp, corners2, K, D)
# project 3D points to image plane using OpenCV
imgpts, jac = cv2.projectPoints(axis, rvec, tvec, K, D)
img = draw(img, corners2, imgpts)
cv2.imshow('img', img)
k = cv2.waitKey(0) & 0xff

f = file('camera.pkl','wb')
cPickle.dump((K, D, rvec, tvec), f)
f.close()

# get the undistorted image
h, w = img.shape[:2]
newcameraK, roi = cv2.getOptimalNewCameraMatrix(K, D, (w,h), 1, (w,h))
dst = cv2.undistort(img, K, D, None, newcameraK)
plt.figure()
plt.axis("off")
plt.imshow(cv2.cvtColor(dst, cv2.COLOR_BGR2RGB))
plt.show()
Micka's point is important: make sure that the calibration grid fills as much of the image as possible, or at least have some pictures where the checkerboard is near the image boundaries.
The issue is that distortion is small near the center and large at the boundaries, so a large distortion doesn't affect the center of the image very much. Equivalently, a small error in estimating the distortion from the center of the image can lead to a very wrong overall estimate of the distortion.
Redundant images matter "a little": they effectively make those images (and therefore the location of the checkerboard pattern in them) weigh more in the estimate.
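One practical way to act on this (a sketch, reusing the reprojection-error loop already in the question) is to compute the error per image and drop the worst views before re-calibrating:
import cv2

# objpoints, imgpoints, rvecs, tvecs, K, D as produced by calibrateCamera above
per_view = []
for i in range(len(objpoints)):
    proj, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], K, D)
    per_view.append(cv2.norm(imgpoints[i], proj, cv2.NORM_L2) / len(proj))
worst = sorted(range(len(per_view)), key=lambda i: per_view[i], reverse=True)
print worst[:3], [per_view[i] for i in worst[:3]]  # indices of the 3 worst views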
I also observed the same effect, but much stranger:
When I use getOptimalNewCameraMatrix, my resulting images also show the strong circular distortion effect seen above, and changing the alpha does not solve it. BUT: when I use the same distortion/calibration results WITHOUT getOptimalNewCameraMatrix (i.e. the same fx/fy/cx/cy as in the camera calibration result), I get a very good undistortion result. Very strange.
Could it be that in some situations the getOptimalNewCameraMatrix function is buggy? My calibration (images etc.) looks very good, and there are points near the edges too.
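For reference, a minimal sketch of the two variants being compared (img, K, D stand for a test image and the calibration results; this is my reading of the comment, not code from it):
import cv2

h, w = img.shape[:2]
# variant 1: undistort through getOptimalNewCameraMatrix (alpha=1 keeps all pixels)
newK, roi = cv2.getOptimalNewCameraMatrix(K, D, (w, h), 1, (w, h))
dst1 = cv2.undistort(img, K, D, None, newK)
# variant 2: undistort reusing the calibrated matrix itself (same fx/fy/cx/cy)
dst2 = cv2.undistort(img, K, D)  # newCameraMatrix defaults to the input K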

Stereo Calibration Opencv Python and Disparity Map

I am interested in finding the disparity map of a scene. To start, I did stereo calibration using the following code (I wrote it myself with a little help from Google, after failing to find any helpful tutorials written in Python for OpenCV 2.4.10).
I took images of a chessboard simultaneously on both cameras and saved them as left*.jpg and right*.jpg.
import numpy as np
import cv2
import glob
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpointsL = [] # 3d point in real world space
imgpointsL = [] # 2d points in image plane.
objpointsR = []
imgpointsR = []
images = glob.glob('left*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayL = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersL = cv2.findChessboardCorners(grayL, (9,6),None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsL.append(objp)
        cv2.cornerSubPix(grayL,cornersL,(11,11),(-1,-1),criteria)
        imgpointsL.append(cornersL)

images = glob.glob('right*.jpg')
for fname in images:
    img = cv2.imread(fname)
    grayR = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, cornersR = cv2.findChessboardCorners(grayR, (9,6),None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpointsR.append(objp)
        cv2.cornerSubPix(grayR,cornersR,(11,11),(-1,-1),criteria)
        imgpointsR.append(cornersR)
retval,cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T, E, F = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (320,240))
How do I rectify the images? What other steps should I do before going on to find the disparity map? I read somewhere that while calculating the disparity map, the features detected on both frames should lie on the same horizontal line. Please help me out here. Any help would be much appreciated.
You need cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2 and a "newCameraMatrix" for cv2.undistort().
You can get the "newCameraMatrix" using cv2.getOptimalNewCameraMatrix().
So in the remainder of your script, paste this:
# Assuming you have left01.jpg and right01.jpg that you want to rectify
lFrame = cv2.imread('left01.jpg')
rFrame = cv2.imread('right01.jpg')
h, w = lFrame.shape[:2]  # both frames should be of same shape (shape is rows, cols)
frames = [lFrame, rFrame]

# Params from camera calibration
camMats = [cameraMatrix1, cameraMatrix2]
distCoeffs = [distCoeffs1, distCoeffs2]
camSources = [0, 1]
for src in camSources:
    distCoeffs[src][0][4] = 0.0  # zero k3: use only the first 4 distortion coefficients

# The rectification process
newCams = [0, 0]
roi = [0, 0]
for src in camSources:
    newCams[src], roi[src] = cv2.getOptimalNewCameraMatrix(cameraMatrix = camMats[src],
                                                           distCoeffs = distCoeffs[src],
                                                           imageSize = (w, h),
                                                           alpha = 0)
rectFrames = [0, 0]
for src in camSources:
    rectFrames[src] = cv2.undistort(frames[src],
                                    camMats[src],
                                    distCoeffs[src])

# See the results
view = np.hstack([frames[0], frames[1]])
rectView = np.hstack([rectFrames[0], rectFrames[1]])
cv2.imshow('view', view)
cv2.imshow('rectView', rectView)
# Wait indefinitely for any keypress
cv2.waitKey(0)
Hope that gets you on your way to the next thing, which might be calculating "disparity maps" ;)
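Note that undistortion alone does not put corresponding features on the same row; for that you need cv2.stereoRectify. A minimal sketch (assuming R and T from a successful stereoCalibrate call, and (w, h) as above):
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(cameraMatrix1, distCoeffs1,
                                                  cameraMatrix2, distCoeffs2,
                                                  (w, h), R, T, alpha=0)
mapLx, mapLy = cv2.initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1, (w, h), cv2.CV_32FC1)
mapRx, mapRy = cv2.initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2, (w, h), cv2.CV_32FC1)
rectL = cv2.remap(lFrame, mapLx, mapLy, cv2.INTER_LINEAR)
rectR = cv2.remap(rFrame, mapRx, mapRy, cv2.INTER_LINEAR)
# corresponding features now lie on the same horizontal line in rectL / rectR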
Reference:
http://www.janeriksolem.net/2014/05/how-to-calibrate-camera-with-opencv-and.html
Try this code; I was able to fix the mistake with it:
retVal, cm1, dc1, cm2, dc2, r, t, e, f = cv2.stereoCalibrate(objpointsL, imgpointsL, imgpointsR, (320, 240), None, None,None,None)
First, use the OpenCV calibration application or the MATLAB calibration toolbox to calculate your camera parameters. With the parameters, you can rectify your images.
After rectification, refer to the Python sample in OpenCV's codebase (samples/python/stereo_match.py) to compute the disparity map.
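For the disparity step itself, a minimal sketch using a block matcher on a rectified grayscale pair (grayRectL/grayRectR stand for the rectified left/right images; parameters are illustrative and need tuning, and this uses the OpenCV 3+ StereoSGBM_create API, the 2.4 constructor differs):
import cv2

stereo = cv2.StereoSGBM_create(minDisparity=0,
                               numDisparities=64,  # must be divisible by 16
                               blockSize=9)
disparity = stereo.compute(grayRectL, grayRectR).astype('float32') / 16.0  # SGBM returns fixed-point, scaled by 16
cv2.imshow('disparity', disparity / disparity.max())
cv2.waitKey(0)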

Camera displacement estimation in OpenCV - incorrect pose output

I am currently filming using one camera facing downwards with a chessboard pattern in a fixed position on the ground. I am using python with OpenCV to track the displacement of the camera and using the output to plot displacement in terms of the x,y,z directions. Ultimately I want to mount the camera to the underside of a hovering multirotor UAV in order to calibrate the GPS accuracy.
The basic method I am using is:
1. Define object points
2. Open video
3. Undistort frame based on saved camera matrix (camera calibration already performed)
4. Find chessboard corners
5. If corners found, refine corners
6. Find the rotation and translation vectors (cv2.solvePnPRansac)
7. Project 3D points to image plane (cv2.projectPoints)
8. Convert rotation vector to rotation matrix as per this answer:
np_rodrigues = np.asarray(rvecs_new[:,:],np.float64)
rmatrix = cv2.Rodrigues(np_rodrigues)[0]
9. Calculate camera pose as per this answer:
cam_pos = -np.matrix(rmatrix).T * np.matrix(tvecs_new)
10. Store values:
camx.append(cam_pos.item(0))
camy.append(cam_pos.item(1))
camz.append(cam_pos.item(2))
However, when I run this code on a video of motion that should be a straight line at constant altitude, the plotted x,y graph is curved, and z is not constant, as shown by the x,z plot: http://imgur.com/QIY3wgQ,pDM5T0x,HEDJtAt#1
Is there any reason why this should not give a straight line on the graph? Perhaps an error in the camera pose calculation in step 9?
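As a quick sanity check of the formula in step 9 (a sketch; rvecs_new and tvecs_new as in the code below): the recovered camera centre C = -R^T * t must map back to the origin of the camera frame, i.e. R*C + t = 0.
import numpy as np
import cv2

rmatrix, _ = cv2.Rodrigues(rvecs_new)  # rvecs_new, tvecs_new from solvePnPRansac
cam_pos = -rmatrix.T.dot(tvecs_new)    # camera centre in chessboard coordinates
assert np.allclose(rmatrix.dot(cam_pos) + tvecs_new, 0)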
The complete code is as follows:
import cv2
import numpy as np
import time
import matplotlib.pyplot as plt

# IMPORTANT: Enter chess board dimensions
chw = 9
chh = 6

# Defining draw function for axis lines
def draw(img, corners, imgpts):
    corner = tuple(corners[0].ravel())
    cv2.line(img, corner, tuple(imgpts[0].ravel()), (255,0,0), 5)
    cv2.line(img, corner, tuple(imgpts[1].ravel()), (0,255,0), 5)
    cv2.line(img, corner, tuple(imgpts[2].ravel()), (0,0,255), 5)
    return img

# Load previously saved data
with np.load('camera_calib.npz') as X:
    mtx, dist, _, _, _ = [X[i] for i in ('mtx','dist','rvecs','tvecs','imgpts')]

# Criteria, defining object points
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
objp = np.zeros((chh*chw,3), np.float32)
objp[:,:2] = np.mgrid[0:chw,0:chh].T.reshape(-1,2)

# Setting axis
axis = np.float32([[9,0,0], [0,6,0], [0,0,-10]]).reshape(-1,3)

cap = cv2.VideoCapture('Calibration\\video_chess2.MP4')
count = 0
fcount = 0
lim = 500                      # frame limit ('lim' was used below but never defined in the original post)
camx, camy, camz = [], [], []  # position histories (not initialised in the original post)
while(cap.isOpened()):
    ret1, img = cap.read()
    if ret1 == False or count == lim:
        print('Video analysis complete.')
        break
    if count > 0:
        h, w = img.shape[:2]
        newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
        # Undistorting
        img2 = cv2.undistort(img, mtx, dist, None, newcameramtx)
        gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        ret2, corners = cv2.findChessboardCorners(gray, (chw,chh), None)
        if ret2 == True:
            fcount = fcount + 1
            # Refining corners
            cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
            # Find the rotation and translation vectors
            rvecs_new, tvecs_new, inliers = cv2.solvePnPRansac(objp, corners, mtx, dist)
            # Project 3D points to image plane
            imgpts, jac = cv2.projectPoints(axis, rvecs_new, tvecs_new, mtx, dist)
            draw(img2, corners, imgpts)
            cv2.imshow('img', img2)
            cv2.waitKey(1)
            # Converting rotation vector to rotation matrix
            np_rodrigues = np.asarray(rvecs_new[:,:], np.float64)
            rmatrix = cv2.Rodrigues(np_rodrigues)[0]
            # Pose (according to https://stackoverflow.com/questions/16265714/camera-pose-estimation-opencv-pnp)
            cam_pos = -np.matrix(rmatrix).T * np.matrix(tvecs_new)
            camx.append(cam_pos.item(0))
            camy.append(cam_pos.item(1))
            camz.append(cam_pos.item(2))
        else:
            print 'Board not found'
    count += 1
    print count
cv2.destroyAllWindows()
plt.plot(camx, camy)
plt.show()
