Camera calibration with OpenCV-Python

I'm trying to do camera calibration and have taken the code from the OpenCV documentation. Here is my code:
import numpy as np
import cv2
import glob

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
objpoints = []
imgpoints = []
images = glob.glob('/usr/local/share/OpenCV/samples/cpp/chess*.jpg')

img = cv2.imread("2.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret = False
ret, corners = cv2.findChessboardCorners(gray, (7, 6))
print(ret)
if ret == True:
    objpoints.append(objp)
    cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
    imgpoints.append(corners)
    # Draw and display the corners
    cv2.drawChessboardCorners(img, (7,6), corners, ret)
    cv2.imshow('img', img)
    cv2.imwrite('Corners_detected.jpg', img, None)
    cv2.waitKey(0)
cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

img = cv2.imread('2.jpg')
h, w = img.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))

# undistort
dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
cv2.imwrite('calibration_result.png', dst)
In this code, the image 2.jpg is used for calibration.
This is the image considered for understanding the calibration.
My code detects corners only for this image. With other checkerboard images it is not able to detect the corners. Why is that?

Unfortunately, I do not have enough reputation to comment and clarify some points, but I will try to answer anyway. Given that you have added the print(ret), I assume this is where your problem lies.
It looks like you are using the wrong checkerboard size in cv2.findChessboardCorners(gray, (7, 6)). I have found that this function returns False when given the wrong input dimensions.
This is also the case for the objp object.
Given the image you are showing, this should be n-1 and m-1 (where n and m are the checkerboard dimensions in squares).
For your image, this should be cv2.findChessboardCorners(gray, (9, 6)).
Notice that in the OpenCV calibration example the checkerboard is 8x7, hence the given 7x6 input value.
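As a minimal sketch (assuming, per the suggestion above, that the board in the question has 9x6 inner corners), the pattern size passed to the detector and the objp grid need to agree:
import numpy as np
import cv2

pattern_size = (9, 6)  # inner corners: (squares per row - 1, squares per column - 1)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

gray = cv2.cvtColor(cv2.imread("2.jpg"), cv2.COLOR_BGR2GRAY)
ret, corners = cv2.findChessboardCorners(gray, pattern_size)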

The thing about this camera calibration method is that it sometimes will not recognize a checkerboard grid that isn't the maximum size. You could most likely get away with (8, 6) or (9, 5) as the size; however, with (7, 6) there is too much of a difference, so the method won't recognize it.
I don't have any research sources, but I've tested this myself before.
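If the exact grid size is unclear, a quick sanity check (a sketch, not from the original answer; it assumes gray is the grayscale board image) is to probe a few candidate inner-corner counts and see which ones the detector accepts:
for size in [(9, 6), (8, 6), (9, 5), (7, 6)]:
    found, _ = cv2.findChessboardCorners(gray, size)
    print(size, found)  # only sizes the detector can match will report True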

Related

Problem removing distortion from an image (after camera calibration)

The image is a single frame of a 90-minute video.
If one draws a line from the top left corner (see, left-distorted) to the right (see, right-distorted), the distortion is visible by looking at the top-line (see, middle-distorted).
The white line between the points to the left and the right should be straight but has a slight U-shape compared to the straight line in red.
Performing a camera calibration with open-cv (reference) using a sample of entire frames from calibration videos (for some examples see, indoor, outdoor-1, and outdoor-2) results in this undistorted image.
Here, the line from the top left corner (see, left-undistorted) to the right (see, right-undistorted) overcorrects the distortion as the difference between the red-line and the white has become an inverse U shape (see, middle-undistorted).
I took the recording with an iPhone 11, and I am using python 3.8.8 and open-cv 4.5.3.
I followed the advice I could find on StackOverflow (and in the most popular search results), but no checkerboard setup (variations of size, camera angle, distance, and setting) corrects the distortion, and I fail to understand why.
UPDATE:
Based on the conversation below, the calibration footage needs to
have a focus similar to the reference scenario (in my case, objects are far away),
the board should cover a minimum of, say, 20% of the image (see the sketch below).
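A rough way to check the coverage point (my own sketch, not part of the original calibration script; it assumes corners is the array returned by cv2.findChessboardCorners) is to compare the area of the corners' convex hull with the frame area:
import cv2

def board_coverage(corners, image_shape):
    # fraction of the frame covered by the convex hull of the detected corners
    hull = cv2.convexHull(corners)
    h, w = image_shape[:2]
    return cv2.contourArea(hull) / float(h * w)

# e.g. skip frames where board_coverage(corners, gray.shape) < 0.2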
Here is the video that I used (focus fixed on objects far away) to extract a sample of 50 frames for the camera calibration:
import cv2
import numpy as np

SQUARE_SIZE = 0.012
CHECKERBOARD_WIDTH = 22
CHECKERBOARD_HEIGHT = 15
objpoints = []
imgpoints = []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
val = cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE
SAMPLE_SIZE = 50
objp = np.zeros((CHECKERBOARD_HEIGHT * CHECKERBOARD_WIDTH, 3), np.float32)
objp[:, :2] = np.mgrid[0:CHECKERBOARD_WIDTH, 0:CHECKERBOARD_HEIGHT].T.reshape(-1, 2)
objp = objp * SQUARE_SIZE

# for loop going over the sample of images
img = cv2.imread('path to image')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, corners = cv2.findChessboardCorners(gray, (CHECKERBOARD_WIDTH, CHECKERBOARD_HEIGHT), val)
if ret == True:
    objpoints.append(objp)
    # refining pixel coordinates for given 2d points.
    corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
    imgpoints.append(corners2)

# Calculate camera calibration
img = cv2.imread('path to reference image')
h, w = img.shape[:2]
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, (w,h), None, None)
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))

# Undistort
mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)
dst = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
cv2.imwrite('path to output image', dst)
The result is unfortunately not correct, as shown by the deviation between the red and white lines at the top of the pitch.

Python + OpenCV + Webcam Calibration: Doesn't detect full pattern

I am trying to undistort a photo of a laptop screen in order to learn how camera calibration and undistortion with OpenCV + Python work.
Based on a camera matrix obtained from one photo with controlled content on the screen, I would like to undistort subsequent images.
Neither the camera nor the display will move, so I think I need only one image for calibration (or is this assumption already completely wrong?).
My first image with controlled content is this 24x13 chessboard. The camera's distortion is nicely visible in the corners of the photo:
This is the script that i use for calibration and undistortion:
import numpy as np
import cv2

img = cv2.imread("img.png")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def chessboard_objectpoints(boxes_x, boxes_y):
    objp = np.zeros(((boxes_x - 1) * (boxes_y - 1), 3), np.float32)
    objp[:, :2] = np.mgrid[0:boxes_x-1, 0:boxes_y-1].T.reshape(-1, 2)
    return objp

def chessboard_imagepoints(img_gray, boxes_x, boxes_y, out_img=None):
    boxdim = (boxes_x - 1, boxes_y - 1)
    ret, corners = cv2.findChessboardCorners(img_gray, boxdim, None)
    assert ret
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners2 = cv2.cornerSubPix(img_gray, corners, (11, 11), (-1, -1), criteria)
    if out_img is not None:
        out_img = cv2.drawChessboardCorners(out_img, boxdim, corners2, ret)
    return corners2

BOXES_X = 23  # should be 24
BOXES_Y = 12  # should be 13
obj_points = chessboard_objectpoints(BOXES_X, BOXES_Y)
img_points = chessboard_imagepoints(img_gray, BOXES_X, BOXES_Y, img)
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    [obj_points], [img_points], img_gray.shape[::-1], None, None
)
h, w = img.shape[:2]
dimension = w, h
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, dimension, 0)
dst = cv2.undistort(img, mtx, dist)
x, y, w, h = roi
dst = dst[y : y + h, x : x + w]
cv2.imshow("img", dst)
cv2.waitKey()
The resulting image of this script is the following, and it is indeed less distorted than the original:
Now I have two questions:
Why can cv2.findChessboardCorners only find a subset of the corners (see image and source code)? I expect it to be able to find a pattern of 23x12 corners, but that won't work.
With the smaller subset of the chessboard, there is a lot of remaining distortion (see image), and it doesn't look like the full pattern would really help either. How do I undistort this kind of image completely?

Assertion failed when using OpenCV in Python

I am doing camera calibration with OpenCV in Python and followed the tutorial on this page. My code is copied from the page, with only tiny adjustments to the parameters.
Code:
import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob('../easyimgs/*.jpg')
print('...loading')

for fname in images:
    print(f'processing img:{fname}')
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    print('grayed')
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (8, 11), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        print('chessboard detected')
        objpoints.append(objp)
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (8,11), corners2, ret)
        cv2.namedWindow('img', 0)
        cv2.resizeWindow('img', 500, 500)
        cv2.imshow('img', img)
        cv2.waitKey(500)

cv2.destroyAllWindows()

img2 = cv2.imread("../easyimgs/5.jpg")
print(f"type objpoints:{objpoints[0].shape}")
print(f"type imgpoints:{imgpoints[0].shape}")

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

h, w = img2.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 1, (w,h))
dst = cv2.undistort(img2, mtx, dist, None, newcameramtx)

# crop the image
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
cv2.namedWindow('result', 0)
cv2.resizeWindow('result', 400, 400)
cv2.imshow('result', dst)
cv2.destroyAllWindows()
But when I run it, the following error shows up:
Traceback (most recent call last):
File "*/undistortion.py", line 51, in <module>
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)
cv2.error: OpenCV(3.4.2) C:\projects\opencv-python\opencv\modules\calib3d\src\calibration.cpp:3143: error: (-215:Assertion failed) ni == ni1 in function 'cv::collectCalibrationData'
Here is my image.
I have searched the Internet and found that many people run into this problem. Most of the solutions on blogs say it is caused by the type of the first and second parameters of calibrateCamera(), i.e. objpoints and imgpoints, but those are all solutions for OpenCV in C++.
Could anyone tell me how to solve it in Python?
The number of points in each objpoints entry must match the number of detected corners in the corresponding imgpoints entry; this assertion means they don't. It looks like you're creating a set of 6*7 = 42 object points, which is intended for a chessboard with 6x7 inner corners, but your actual chessboard has 8*11 = 88 inner corners. So as you process images, each imgpoints entry holds 88 corners while each objpoints entry holds only 42 points. You need to modify the creation/initialization of objp to have 8*11 = 88 points, whose coordinates correspond to the real physical coordinates on your chessboard.
To do this, you'll need to really understand the code you are using. Putting in more debug statements will help you to trace through what is happening in your code.
And note that the Python API to OpenCV is just a wrapper around the C++ API, so any solution for OpenCV with C++ is (usually) relevant in Python too.
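For illustration, a minimal sketch of object points matching an 8x11 inner-corner pattern (assuming a square size of 1 unit) would be:
objp = np.zeros((8 * 11, 3), np.float32)
objp[:, :2] = np.mgrid[0:8, 0:11].T.reshape(-1, 2)  # one 3D point per inner corner, z = 0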

Python + OpenCV name not defined

I'm using a source code example from the OpenCV-Python documentation, as follows:
import numpy as np
import cv2
import glob

# termination criteria: 30 max number of iterations, 0.001 minimum accuracy
# CV_TERMCRIT_ITER or CV_TERMCRIT_EPS tells the algorithm that we want to terminate either after some number of iterations or when the convergence metric reaches some small value (respectively).
# The next two arguments set the values at which one, the other, or both of these criteria should terminate the algorithm.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0), ..., (6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob('*.jpg')
# fname = 'C:\\Users\\Bender\\Desktop\\fotospayloads\\'

for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9,6), corners2, ret)
        cv2.imshow('img', img)
        cv2.waitKey(500)

cv2.destroyAllWindows()

rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
Unfortunately when I run the source code I get the following error:
"NameError: name 'gray' is not defined" (line 50).
Any help would be very much appreciated.
Thanks
Isaac
There are no images in the folder where your script is located, which is why glob.glob('*.jpg') does not return any files. The loop body never runs, so the gray object is never created.
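A quick way to make that failure explicit (a sketch; the absolute pattern reuses the path from the commented-out line in the question) is to check the result of glob before the loop:
images = glob.glob('*.jpg')  # or e.g. glob.glob(r'C:\Users\Bender\Desktop\fotospayloads\*.jpg')
if not images:
    raise SystemExit('No .jpg files found - check the working directory or use an absolute path')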

Raspberry pi OpenCV error: (-215) ni == ni1 in function collectCalibrationData

I'm running Python 3 on a Raspberry Pi 3 with OpenCV installed. I took 10 images of a checkerboard; it detects the pattern in all 10 images and displays them, but when it gets to the last line, it throws an error. Here are the images I used: https://imgur.com/gallery/IDfHH This is my code:
import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob('*.jpg')

for fname in images:
    print('test')
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (6,9), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        print('test2')
        objpoints.append(objp)
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)
        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (6,9), corners2, ret)
        cv2.imshow('img', img)
        cv2.waitKey(500)
        print('test3')

cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
The example assumes that you have a 6x7 chessboard pattern; I think yours is 6x9.
You have to prepare the objp variable for a 6x9 calibration pattern, so the code has to be:
objp = np.zeros((6*9,3), np.float32)
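Presumably the coordinate grid then needs the same dimensions (a guess, following the 6x9 pattern size used in the question's findChessboardCorners call):
objp[:, :2] = np.mgrid[0:6, 0:9].T.reshape(-1, 2)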
Thanks @Rui Sebastiao.
I was using a 14x10 pattern, so I changed the following lines and at least there is no error now :)
objp = np.zeros((14*10, 3), np.float32)
objp[:, :2] = np.mgrid[0:14, 0:10].T.reshape(-1, 2)
