I am currently filming with a single downward-facing camera and a chessboard pattern in a fixed position on the ground. I am using Python with OpenCV to track the displacement of the camera, and using the output to plot the displacement in the x, y and z directions. Ultimately I want to mount the camera on the underside of a hovering multirotor UAV in order to evaluate the GPS accuracy.
The basic method I am using is:
1. Define object points
2. Open the video
3. Undistort each frame based on the saved camera matrix (camera calibration already performed)
4. Find the chessboard corners
5. If corners are found, refine them
6. Find the rotation and translation vectors (cv2.solvePnPRansac)
7. Project 3D points to the image plane (cv2.projectPoints)
8. Convert the rotation vector to a rotation matrix as per this answer:
   np_rodrigues = np.asarray(rvecs_new[:,:],np.float64)
   rmatrix = cv2.Rodrigues(np_rodrigues)[0]
9. Calculate the camera pose as per this answer (see the note after this list):
   cam_pos = -np.matrix(rmatrix).T * np.matrix(tvecs_new)
10. Store the values:
    camx.append(cam_pos.item(0))
    camy.append(cam_pos.item(1))
    camz.append(cam_pos.item(2))
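For reference, the pose formula in step 9 is just the inverse of the chessboard-to-camera transform returned by solvePnP: if x_cam = R * x_world + t, the camera centre in chessboard coordinates is C = -R^T * t. A quick numpy sanity check with an arbitrary rotation and translation:

import cv2
import numpy as np

rvec = np.array([[0.1], [0.2], [0.3]])   # arbitrary rotation vector
tvec = np.array([[1.0], [2.0], [3.0]])   # arbitrary translation vector
R, _ = cv2.Rodrigues(rvec)
C = -R.T @ tvec                          # camera centre in world (chessboard) coordinates
print(R @ C + tvec)                      # transforming C back gives ~[[0.] [0.] [0.]]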
However, when I run this code on a video that should correspond to a straight line at constant altitude, the plotted x,y graph comes out curved, and z is not constant, as shown by the x,z plot: http://imgur.com/QIY3wgQ,pDM5T0x,HEDJtAt#1
Is there any reason why this should not give a straight line on the graph? Perhaps an error with the camera pose calculation in step 9?
The complete code is as follows:
import cv2
import numpy as np
import time
import matplotlib.pyplot as plt
#IMPORTANT: Enter chess board dimensions
chw = 9
chh = 6
#Defining draw functions for lines
def draw(img, corners, imgpts):
    corner = tuple(corners[0].ravel())
    cv2.line(img, corner, tuple(imgpts[0].ravel()), (255,0,0), 5)
    cv2.line(img, corner, tuple(imgpts[1].ravel()), (0,255,0), 5)
    cv2.line(img, corner, tuple(imgpts[2].ravel()), (0,0,255), 5)
    return img
# Load previously saved data
with np.load('camera_calib.npz') as X:
    mtx, dist, _, _, _ = [X[i] for i in ('mtx','dist','rvecs','tvecs','imgpts')]
# Criteria, defining object points
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
objp = np.zeros((chh*chw,3), np.float32)
objp[:,:2] = np.mgrid[0:chw,0:chh].T.reshape(-1,2)
# Setting axis
axis = np.float32([[9,0,0], [0,6,0], [0,0,-10]]).reshape(-1,3)
cap = cv2.VideoCapture('Calibration\\video_chess2.MP4')
count = 0
fcount = 0
lim = 500                        # frame limit used below; not defined in the original snippet, value is arbitrary
camx, camy, camz = [], [], []    # camera position history (missing from the original snippet)
while cap.isOpened():
    ret1, img = cap.read()
    if ret1 == False or count == lim:
        print('Video analysis complete.')
        break
    if count > 0:
        h, w = img.shape[:2]
        newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
        # Undistorting
        img2 = cv2.undistort(img, mtx, dist, None, newcameramtx)
        gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        ret2, corners = cv2.findChessboardCorners(gray, (chw,chh), None)
        if ret2 == True:
            fcount = fcount + 1
            # Refining corners
            cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
            # Find the rotation and translation vectors
            # (on OpenCV 3+ this returns four values: retval, rvecs_new, tvecs_new, inliers)
            rvecs_new, tvecs_new, inliers = cv2.solvePnPRansac(objp, corners, mtx, dist)
            # Project 3D points to image plane
            imgpts, jac = cv2.projectPoints(axis, rvecs_new, tvecs_new, mtx, dist)
            draw(img2, corners, imgpts)
            cv2.imshow('img', img2)
            cv2.waitKey(1)
            # Converting rotation vector to rotation matrix
            np_rodrigues = np.asarray(rvecs_new[:,:], np.float64)
            rmatrix = cv2.Rodrigues(np_rodrigues)[0]
            # Pose (according to https://stackoverflow.com/questions/16265714/camera-pose-estimation-opencv-pnp)
            cam_pos = -np.matrix(rmatrix).T * np.matrix(tvecs_new)
            camx.append(cam_pos.item(0))
            camy.append(cam_pos.item(1))
            camz.append(cam_pos.item(2))
        else:
            print('Board not found')
    count += 1
    print(count)
cv2.destroyAllWindows()
plt.plot(camx,camy)
plt.show()
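For completeness, the x,z plot mentioned above can be produced the same way; a minimal sketch reusing the camx and camz lists from the loop:

plt.figure()
plt.plot(camx, camz)
plt.xlabel('x')
plt.ylabel('z')
plt.show()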
Pardon me, for I am completely new to coding. To start with, I am using the Python bindings to OpenCV for this project.
My camera is calibrated, as it displayed fish-eye distortion. I have obtained the following values for K and D, the intrinsic camera matrix and the distortion coefficients respectively:
K = [[438.76709 0.00000 338.13894]
[0.00000 440.79169 246.80081]
[0.00000 0.00000 1.00000]]
D = [-0.098034379506 0.054022224927 -0.046172648829 -0.009039512970]
Focal length: 2.8mm
Field of view: 145 degrees (from manual)
When I undistort the image and display it, I obtain an image with black pixels in areas that were stretched too far (expected). However, this does not hamper the calculation of the object width, since the object isn't large and fills only about 20% of the image.
I will be placing the object 10 cm from the lens of the camera. Based on what I have read about the pinhole camera model, I will require the extrinsic parameters governing the 3D-to-2D transformation. However, I am not sure how I am supposed to derive them.
Assuming I have the pixel coordinates of the two points (each along the edges between which I want to measure the distance), how do I find the real-world distance between those two points using these derived matrices?
Also, if my rectangular object is not parallel to the principal axis of the camera, is there an algorithm to calculate the width even under such conditions?
Given that the distance between your camera and the object is fixed, you can first find the pixel distance between the detected corners and then convert it into millimetres for your object width using a pixels-per-millimetre ratio (scale factor).
The algorithm used is Harris Corner Detection.
Capture the frame with the object in it:
# imports needed by the snippets below
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.imshow('LIVE FRAME!', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# Save it to some location
cv2.imwrite('Your location', frame)
Calibrate the Pixels-Per-Millimeter Ratio using a reference object first.
#Read Image
image = cv2.imread('Location of your previously saved frame with the object in it.')
object_width = int(input("Enter the width of your object: "))
object_height = int(input("Enter the height of your object: "))
#Find Corners
def find_centroids(dst):
    ret, dst = cv2.threshold(dst, 0.01 * dst.max(), 255, 0)
    dst = np.uint8(dst)
    # find centroids
    ret, labels, stats, centroids = cv2.connectedComponentsWithStats(dst)
    # define the criteria to stop and refine the corners
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
    # refine the corner locations to sub-pixel accuracy (uses the grayscale image 'gray' defined below)
    corners = cv2.cornerSubPix(gray, np.float32(centroids[1:]), (5,5), (-1,-1), criteria)
    return corners
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
dst = cv2.cornerHarris(gray, 5, 3, 0.04)
dst = cv2.dilate(dst, None)
# Get coordinates of the corners.
corners = find_centroids(dst)
for i in range(0, len(corners)):
    print("Pixels found for this object are:", corners[i])
    image[dst > 0.1*dst.max()] = [0,0,255]
    cv2.circle(image, (int(corners[i,0]), int(corners[i,1])), 7, (0,255,0), 2)
for corner in corners:
    image[int(corner[1]), int(corner[0])] = [0, 0, 255]
a = len(corners)
print("Number of corners found:", a)
#List to store pixel difference.
distance_pixel = []
#List to store mm distance.
distance_mm = []
P1 = corners[0]
P2 = corners[1]
P3 = corners[2]
P4 = corners[3]
P1P2 = cv2.norm(P2-P1)
P1P3 = cv2.norm(P3-P1)
P2P4 = cv2.norm(P4-P2)
P3P4 = cv2.norm(P4-P3)
pixelsPerMetric_width1 = P1P2 / object_width
pixelsPerMetric_width2 = P3P4 / object_width
pixelsPerMetric_height1 = P1P3 / object_height
pixelsPerMetric_height2 = P2P4 / object_height
#Average of PixelsPerMetric
pixelsPerMetric_avg = pixelsPerMetric_width1 + pixelsPerMetric_width2 + pixelsPerMetric_height1 + pixelsPerMetric_height2
pixelsPerMetric = pixelsPerMetric_avg / 4
print(pixelsPerMetric)
P1P2_mm = P1P2 / pixelsPerMetric
P1P3_mm = P1P3 / pixelsPerMetric
P2P4_mm = P2P4 / pixelsPerMetric
P3P4_mm = P3P4 / pixelsPerMetric
distance_mm.append(P1P2_mm)
distance_mm.append(P1P3_mm)
distance_mm.append(P2P4_mm)
distance_mm.append(P3P4_mm)
distance_pixel.append(P1P2)
distance_pixel.append(P1P3)
distance_pixel.append(P2P4)
distance_pixel.append(P3P4)
Print the distances in pixels and in mm, i.e. your width and height:
print(distance_pixel)
print(distance_mm)
The pixelsPerMetric is your scale factor and gives the average number of pixels per mm. You can modify this code to work according to your needs.
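As a usage note, any later pixel measurement taken at the same camera distance converts with the same scale factor; a small standalone sketch with hypothetical numbers:

pixelsPerMetric = 8.0          # hypothetical scale factor (pixels per mm) from the calibration step above
new_pixel_distance = 150.0     # hypothetical pixel measurement from another frame
new_distance_mm = new_pixel_distance / pixelsPerMetric
print(new_distance_mm)         # 18.75 mm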
I would use similar triangles to determine that the width in the image is proportional to the object width by a scale factor of (distance of camera to object)/(focal length), which is 100/2.8 in your case. This is under the assumption that the object is in the center of the image (i.e. directly in front of the camera).
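A rough numeric sketch of the same idea in pixel units, using fx from the K matrix above as the focal length in pixels (so no sensor size is needed); pixel_width is a hypothetical measured width, and the result is only reliable near the image centre:

fx = 438.76709          # focal length in pixels, from K[0][0] above
Z = 100.0               # object distance from the lens, in mm
pixel_width = 250.0     # hypothetical measured width of the object in pixels
real_width_mm = pixel_width * Z / fx
print(real_width_mm)    # roughly 57 mm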
I am using OpenCV for a robot vision project - navigating a maze. I can detect the lines where the walls of the maze meet the floor, and now need to use these detected lines to calculate which way the robot should turn.
In order to work out which way the robot should move, I believe the solution is to calculate the angle of the walls in relation to the position of the robot. However, where both walls are found, how do I select which points to use as a reference?
I understand that I can use Python's atan2 function to calculate the angle between two points, but after that I am completely lost.
Here is my code:
# https://towardsdatascience.com/finding-driving-lane-line-live-with-opencv-f17c266f15db
# Testing edge detection for maze
import cv2
import numpy as np
import math
image = cv2.imread("/Users/BillHarvey/Documents/Electronics_and_Robotics/Robot_Vision_Project/mazeme/maze1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size,kernel_size),0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
# create a mask of the edges image using cv2.fillPoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255
# define the Region of Interest (ROI) - source code sets as a trapezoid for roads
imshape = image.shape
vertices = np.array([[(0,imshape[0]),(100, 420), (1590, 420),(imshape[1],imshape[0])]], dtype=np.int32)
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_edges = cv2.bitwise_and(edges, mask)
# my basic ROI bounded by a blue rectangle
#ROI = cv2.rectangle(image,(0,420),(1689,839),(0,255,0),3)
# define the Hough Transform parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 #minimum number of pixels making up a line
max_line_gap = 30 # maximum gap in pixels between connectable line segments
# make a blank the same size as the original image to draw on
line_image = np.copy(image)*0
# run Hough on edge detected image
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),min_line_length, max_line_gap)
for line in lines:
    for x1,y1,x2,y2 in line:
        cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),10)
        angle = math.atan2(x2-x1, y2-y1)
        angle = angle * 180 / 3.14
        print("Angle = ", angle)
# draw the line on the original image
lines_edges = cv2.addWeighted(image, 0.8, line_image, 1, 0)
#return lines_edges
#cv2.imshow("original", image)
#cv2.waitKey(0)
#cv2.imshow("edges", edges)
#cv2.waitKey(0)
cv2.imshow("detected", lines_edges)
cv2.waitKey(0)
cv2.imwrite("lanes_detected.jpg", lines_edges)
cv2.destroyAllWindows()
I have added the atan2 formula to the piece of code that draws blue lines where HoughLinesP has detected lines.
And to convert the results (angle) to degrees I found this formula:
angle = angle * 180 / 3.14
The following piece of code:
print("Angle = ", angle)
Prints 13 angles, which may or may not correspond to the lines in the picture (do they?). To avoid getting negative degrees I had to use x2-x1, y2-y1 rather than the other way around, which I have seen elsewhere.
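For reference, a minimal sketch of the same calculation using math.degrees and the usual atan2(dy, dx) argument order; the endpoints are hypothetical, and negative angles can simply be folded into [0, 180) if only the wall orientation matters:

import math

x1, y1, x2, y2 = 100, 400, 600, 200                 # hypothetical line endpoints from HoughLinesP
angle = math.degrees(math.atan2(y2 - y1, x2 - x1))  # dy first, then dx
orientation = angle % 180                           # fold into [0, 180) so the line's direction doesn't matter
print(angle, orientation)                           # about -21.8 and 158.2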
I do apologise for my fundamental lack of Python and mathematical knowledge, but any help would be gratefully received.
How can I take two images of an object from different angles and draw epipolar lines on one based on points from the other?
For example, I would like to be able to select a point on the left picture using a mouse, mark the point with a circle, and then draw an epipolar line on the right image corresponding to the marked point.
I have 2 XML files which contain a 3x3 camera matrix and a list of 3x4 projection matrices for each picture. The camera matrix is K. The projection matrix for the left picture is P_left. The projection matrix for the right picture is P_right.
I have tried this approach (a numpy sketch of these steps follows the list):
Choose a pixel coordinate (x,y) in the left image (via mouse click)
Calculate a point p in the left image with K^-1 * (x,y,1)
Calculate the pseudo-inverse matrix P+ of P_left (using np.linalg.pinv)
Calculate the epipole e' of the right image: P_right * (0,0,0,1)
Calculate the skew symmetric matrix e'_skew of e'
Calculate the Fundamental matrix F: e'_skew * P_right * P+
Calculate the epipolar line l' on the right image: F * p
Calculate a point p' in the right image: P_right * P+ * p
Transform p' and l' back to pixel coordinates
Draw a line using cv2.line through p' and along l'
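For reference, here is a minimal numpy sketch of the pseudo-inverse and fundamental-matrix steps above, with illustrative matrices standing in for the ones from my XML files and the line computed directly from the homogeneous pixel coordinate:

import numpy as np

# illustrative matrices; in practice K, P_left and P_right come from the XML files
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left camera taken as the world origin
R, t = np.eye(3), np.array([[0.1], [0.0], [0.0]])       # small baseline to the right
P_right = K @ np.hstack([R, t])

P_plus = np.linalg.pinv(P_left)                          # pseudo-inverse of P_left
e_prime = P_right @ np.array([0.0, 0, 0, 1])             # epipole of the right image
e_skew = np.array([[0, -e_prime[2], e_prime[1]],         # skew-symmetric matrix of e'
                   [e_prime[2], 0, -e_prime[0]],
                   [-e_prime[1], e_prime[0], 0]])
F = e_skew @ P_right @ P_plus                            # fundamental matrix

x = np.array([400.0, 250.0, 1.0])                        # clicked pixel in the left image (homogeneous)
l_prime = F @ x                                          # epipolar line a*x + b*y + c = 0 in the right image
print(l_prime)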
I just did this a few days ago and it works just fine. Here's the method I used:
Calibrate the camera(s) to obtain the camera matrices and distortion matrices (using OpenCV getCorners and calibrateCamera; you can find lots of tutorials on this, but it sounds like you already have this info)
Perform stereo calibration with OpenCV stereoCalibrate(). It takes as parameters all of the camera and distortion matrices. You need this to determine the correlation between the two visual fields. You will get back several matrices: the rotation matrix R, translation vector T, essential matrix E and fundamental matrix F (a short sketch of this call follows these steps).
You then want to do undistortion using openCV getOptimalNewCameraMatrix and undistort(). This will get rid of a lot of camera aberrations (it will give you better results)
Finally, use openCV's computeCorrespondEpilines to calculate the lines and plot them. I will include some code below you can try out in Python. When I run it, I can get images like this (The colored points have their corresponding epilines drawn in the other image)
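For reference, a minimal sketch of the stereoCalibrate call from the step above; it assumes objpoints, imgpointsL and imgpointsR come from getCorners() below, mtxL/distL/mtxR/distR come from cv2.calibrateCamera run on each camera, and image_size is the image shape returned by getCorners:

import cv2

ret, mtxL, distL, mtxR, distR, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpointsL, imgpointsR,
    mtxL, distL, mtxR, distR, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)   # keep the per-camera intrinsics fixed; solve for R, T, E and F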
Here's some code (Python 3.0). It uses two static images and static points, but you could easily select the points with the cursor. You can also refer to the OpenCV docs on calibration and stereo calibration here.
import os
import cv2
import numpy as np
# find object corners from chessboard pattern and create a correlation with image corners
def getCorners(images, chessboard_size, show=True):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
    objp = np.zeros((chessboard_size[1] * chessboard_size[0], 3), np.float32)
    objp[:, :2] = np.mgrid[0:chessboard_size[0], 0:chessboard_size[1]].T.reshape(-1, 2)*3.88  # multiply by 3.88 for large chessboard squares
    # Arrays to store object points and image points from all the images.
    objpoints = []  # 3d point in real world space
    imgpoints = []  # 2d points in image plane.
    for image in images:
        frame = cv2.imread(image)
        # height, width, channels = frame.shape # get image parameters
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, chessboard_size, None)  # Find the chess board corners
        if ret:  # if corners were found
            objpoints.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)  # refine corners
            imgpoints.append(corners2)  # add to corner array
            if show:
                # Draw and display the corners
                frame = cv2.drawChessboardCorners(frame, chessboard_size, corners2, ret)
                cv2.imshow('frame', frame)
                cv2.waitKey(100)
    cv2.destroyAllWindows()  # close open windows
    return objpoints, imgpoints, gray.shape[::-1]
# perform undistortion on provided image
def undistort(image, mtx, dist):
    img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
    image = os.path.splitext(image)[0]
    h, w = img.shape[:2]
    newcameramtx, _ = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    return dst
# draw the provided points on the image
def drawPoints(img, pts, colors):
    for pt, color in zip(pts, colors):
        cv2.circle(img, tuple(pt[0]), 5, color, -1)
# draw the provided lines on the image
def drawLines(img, lines, colors):
    _, c, _ = img.shape
    for r, color in zip(lines, colors):
        x0, y0 = map(int, [0, -r[2]/r[1]])
        x1, y1 = map(int, [c, -(r[2]+r[0]*c)/r[1]])
        cv2.line(img, (x0, y0), (x1, y1), color, 1)
if __name__ == '__main__':
    # assumed to be defined by the (omitted) calibration/stereo-calibration steps above:
    # chessboard_size, mtxL, distL, mtxR, distR, F, colors, combineSideBySide
    # undistort our chosen images using the left and right camera and distortion matrices
    imgL = undistort("2L/2L34.bmp", mtxL, distL)
    imgR = undistort("2R/2R34.bmp", mtxR, distR)
    imgL = cv2.cvtColor(imgL, cv2.COLOR_GRAY2BGR)
    imgR = cv2.cvtColor(imgR, cv2.COLOR_GRAY2BGR)
    # use getCorners to get the new image locations of the checkerboard corners (undistort will have moved them a little)
    _, imgpointsL, _ = getCorners(["2L34_undistorted.bmp"], chessboard_size, show=False)
    _, imgpointsR, _ = getCorners(["2R34_undistorted.bmp"], chessboard_size, show=False)
    # get 3 image points of interest from each image and draw them
    ptsL = np.asarray([imgpointsL[0][0], imgpointsL[0][10], imgpointsL[0][20]])
    ptsR = np.asarray([imgpointsR[0][5], imgpointsR[0][15], imgpointsR[0][25]])
    drawPoints(imgL, ptsL, colors[3:6])
    drawPoints(imgR, ptsR, colors[0:3])
    # find epilines corresponding to points in right image and draw them on the left image
    epilinesR = cv2.computeCorrespondEpilines(ptsR.reshape(-1, 1, 2), 2, F)
    epilinesR = epilinesR.reshape(-1, 3)
    drawLines(imgL, epilinesR, colors[0:3])
    # find epilines corresponding to points in left image and draw them on the right image
    epilinesL = cv2.computeCorrespondEpilines(ptsL.reshape(-1, 1, 2), 1, F)
    epilinesL = epilinesL.reshape(-1, 3)
    drawLines(imgR, epilinesL, colors[3:6])
    # combine the corresponding images into one and display them
    combineSideBySide(imgL, imgR, "epipolar_lines", save=True)
Hopefully this helps!
I'm calibrating my GoPro following the OpenCV tutorial. To calibrate, I have a bunch of pictures with a chessboard in different locations. I then plot the 3D axis on top of the chessboard and everything looks fine, the calibration seems good:
Then I want to see the undistorted version of the image without cropping anything and I get this:
Which clearly doesn't make any sense. I have tried to do the same thing with another set of calibration images and it worked:
I don't understand why it didn't work with the first set of pictures. Any ideas?
Here's the relevant code:
# imports (omitted from the original snippet)
import os
import re
import cPickle
import cv2
import numpy as np
import matplotlib.pyplot as plt

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
files = os.listdir('frames')
for fname in files:
    g = re.match('frame(\d+).png', fname)
    n = g.groups()[0]
    if n not in numbers:   # 'numbers' is a list of wanted frame numbers defined elsewhere
        os.remove('frames/%s' % fname)
        continue
    img = cv2.imread('frames/%s' % fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        img = cv2.drawChessboardCorners(img, (9,6), corners2, ret)
        objpoints.append(objp)
        imgpoints.append(corners2)
        cv2.imshow('img', img)
        cv2.waitKey(1)
cv2.destroyAllWindows()
ret, K, D, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)
# K: intrinsic matrix - camera matrix
# D: Distortion coefficients
#Compute reconstruction error
mean_error = 0
for i in xrange(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], K, D)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2)/len(imgpoints2)
    mean_error += error
print "total error: ", mean_error/len(objpoints)
ret, rvec, tvec = cv2.solvePnP(objp, corners2, K, D)   # uses corners2 from the last image in the loop above
# project 3D points to image plane using OpenCV ('axis' and 'draw' are defined earlier, as in the tutorial)
imgpts, jac = cv2.projectPoints(axis, rvec, tvec, K, D)
img = draw(img, corners2, imgpts)
cv2.imshow('img',img)
k = cv2.waitKey(0) & 0xff
f = file('camera.pkl','wb')
cPickle.dump((K,D,rvec,tvec),f)
f.close()
#get the Undistorted image
h, w = img.shape[:2]
newcameraK, roi=cv2.getOptimalNewCameraMatrix(K,D,(w,h),1,(w,h))
dst = cv2.undistort(img, K, D, None, newcameraK)
plt.figure()
plt.axis("off")
plt.imshow(cv2.cvtColor(dst,cv2.COLOR_BGR2RGB))
plt.show()
Micka's point is important: make sure the calibration grid fills as much of the image as possible, or at least have some pictures where the checkerboard is near the image boundaries.
The issue is that distortion is small near the center and large at the boundaries, so a large distortion doesn't affect the center of the image very much. Equivalently, a small error in estimating the distortion from the center of the image can lead to a very wrong overall estimate of the distortion.
Redundant images matter "a little": they effectively make those images (and therefore the location of the checkerboard pattern in them) more important.
I also observed the same effect, but in an even stranger form:
When I use getOptimalNewCameraMatrix, my resulting images also have the strong circular distortion effect shown above, and changing the alpha does not solve it. BUT: when I use the same distortion/calibration results WITHOUT getOptimalNewCameraMatrix (i.e. the same fx/fy/cx/cy as in the camera calibration result), I get a very good undistortion result. Very strange.
Could it be that in some situations the getOptimalNewCameraMatrix function is buggy? My calibration (images etc.) looks very good, and there are points near the edges too.
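For reference, a minimal sketch of that workaround (reusing K, D, img, w and h from the question's code above): pass the original camera matrix to cv2.undistort instead of the alpha=1 optimal matrix, or use alpha=0 so the invalid border pixels are cropped.

# undistort with the original camera matrix (same fx/fy/cx/cy as the calibration result)
dst_plain = cv2.undistort(img, K, D, None, K)
# or keep getOptimalNewCameraMatrix but with alpha=0, which crops the invalid border pixels
newK, roi = cv2.getOptimalNewCameraMatrix(K, D, (w, h), 0, (w, h))
dst_alpha0 = cv2.undistort(img, K, D, None, newK)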
This is the camera calibration code as given in the OpenCV Python docs. I want to know how objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2) works. What does reshape(-1,2) do? I tried to change the values in this line of code but got errors. Can someone explain how this works and why only these numbers work?
import numpy as np
import cv2
import glob
# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
print "objp: ",objp
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
print "objp: ",objp
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
images = glob.glob('left*.jpg')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners)
        # Draw and display the corners
        cv2.drawChessboardCorners(img, (7,6), corners, ret)
        cv2.imshow('img', img)
        cv2.waitKey(500)
cv2.destroyAllWindows()
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None) # camera matrix, distortion coefficients, rotation and translation vectors
Also, objpoints are 3D real-world coordinates. Should these be manually measured? Why do we assign the points (0,0,0), (1,0,0), (2,0,0), ..., (6,5,0)?
Thanks. Any help would be appreciated.
You should post the error code and the exception, so we can help you to fix it.
-1 means that the missing dimension is calculated from the total number of elements:
np.mgrid[0:7,0:6].T.reshape(-1,2)
you can split the code up as follows:
a = np.mgrid[0:7, 0:6]
b = a.T
c = b.reshape(-1, 2)
print a.shape, b.shape, c.shape
the output is:
(2, 7, 6) (6, 7, 2) (42, 2)
if the code is difficult to understand, it is the same as:
x, y = np.mgrid[0:7, 0:6]
np.c_[x.ravel(), y.ravel()]
objpoints are 3D real-world coordinates, but the length unit is arbitrary, so you don't need to measure anything manually as long as all the squares have the same edge length. If the edge length is 16 cm, then a value of 1 in objp corresponds to 16 cm.
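For example, a minimal sketch of scaling objp by a hypothetical 16 cm square size, so that the calibration output (the translation vectors) comes out in centimetres:

import numpy as np

square_size = 16.0   # hypothetical edge length of one chessboard square, in cm
objp = np.zeros((6*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2) * square_size
print(objp[:3])      # [[ 0. 0. 0.] [16. 0. 0.] [32. 0. 0.]]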