Python + OpenCV + Webcam Calibration: Doesn't detect full pattern

I am trying to undistort a photo of a laptop screen in order to learn how camera calibration and undistortion with OpenCV + Python work.
Based on a camera matrix obtained from one photo with controlled content on the screen, I would like to undistort subsequent images.
Neither the camera nor the display will move, so I think I only need one image for calibration (or is this assumption already completely wrong?).
My first image with controlled content is this 24x13 chessboard. The camera's distortion is clearly visible in the corners of the photo:
This is the script that I use for calibration and undistortion:
import numpy as np
import cv2

img = cv2.imread("img.png")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def chessboard_objectpoints(boxes_x, boxes_y):
    objp = np.zeros(((boxes_x - 1) * (boxes_y - 1), 3), np.float32)
    objp[:, :2] = np.mgrid[0:boxes_x-1, 0:boxes_y-1].T.reshape(-1, 2)
    return objp

def chessboard_imagepoints(img_gray, boxes_x, boxes_y, out_img=None):
    boxdim = (boxes_x - 1, boxes_y - 1)
    ret, corners = cv2.findChessboardCorners(img_gray, boxdim, None)
    assert ret
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners2 = cv2.cornerSubPix(img_gray, corners, (11, 11), (-1, -1), criteria)
    if out_img is not None:
        out_img = cv2.drawChessboardCorners(out_img, boxdim, corners2, ret)
    return corners2

BOXES_X = 23  # should be 24
BOXES_Y = 12  # should be 13

obj_points = chessboard_objectpoints(BOXES_X, BOXES_Y)
img_points = chessboard_imagepoints(img_gray, BOXES_X, BOXES_Y, img)

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    [obj_points], [img_points], img_gray.shape[::-1], None, None
)

h, w = img.shape[:2]
dimension = w, h
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, dimension, 0)
dst = cv2.undistort(img, mtx, dist)

x, y, w, h = roi
dst = dst[y : y + h, x : x + w]

cv2.imshow("img", dst)
cv2.waitKey()
The resulting image of this script is shown below; it is indeed less distorted than the original:
Now I have two questions:
1. Why can cv2.findChessboardCorners only find a subset of the corners (see image and source code)? I expect it to find a pattern of 23x12 corners, but that does not work (see the detection sketch below).
2. With the smaller subset of the chessboard, there is a lot of remaining distortion (see image), and it does not look like the full pattern alone would help much. How do I undistort this kind of image completely?
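On the detection question, one thing worth trying is the alternative detector cv2.findChessboardCornersSB. This is a minimal sketch, assuming OpenCV 4.x (where this detector is available) and the same img.png as above; it is generally more robust than findChessboardCorners on large boards and strongly distorted views:

import cv2

img = cv2.imread("img.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

pattern = (23, 12)  # inner corners of the full 24x13 board
flags = cv2.CALIB_CB_EXHAUSTIVE | cv2.CALIB_CB_ACCURACY
found, corners = cv2.findChessboardCornersSB(gray, pattern, flags)
print(found, None if corners is None else corners.shape)

If even this detector cannot see all 23x12 inner corners, the outermost corners are probably too close to the image border or too degraded (blur, moiré from the screen) to be localized reliably.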

Related

Problem to remove distortion (after camera-calibration) from image

The image is a single frame of a 90-minute video.
If one draws a line from the top left corner (see left-distorted) to the right (see right-distorted), the distortion is visible by looking at the top line (see middle-distorted).
The white line between the points on the left and the right should be straight but has a slight U-shape compared to the straight line in red.
Performing a camera calibration with OpenCV (reference) using a sample of entire frames from calibration videos (for some examples see indoor, outdoor-1, and outdoor-2) results in this undistorted image.
Here, the line from the top left corner (see left-undistorted) to the right (see right-undistorted) overcorrects the distortion: the difference between the red line and the white one has become an inverted U shape (see middle-undistorted).
I took the recording with an iPhone 11, and I am using Python 3.8.8 and OpenCV 4.5.3.
I followed the advice I could find on StackOverflow (and through the most popular search results), but no checkerboard variation (size, camera angle, distance, or setting) corrects the distortion. I fail to understand why.
UPDATE:
Based on the conversation below, the calibration footage needs to have a focus similar to the reference scenario (in my case, objects are far away), and the board should cover a minimum of, say, 20% of the image.
Here is the video that I used (focus fixed on objects far away) to extract a sample of 50 frames for the camera calibration:
import numpy as np
import cv2

SQUARE_SIZE = 0.012
CHECKERBOARD_WIDTH = 22
CHECKERBOARD_HEIGHT = 15
SAMPLE_SIZE = 50

objpoints = []
imgpoints = []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
val = cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE

objp = np.zeros((CHECKERBOARD_HEIGHT * CHECKERBOARD_WIDTH, 3), np.float32)
objp[:, :2] = np.mgrid[0:CHECKERBOARD_WIDTH, 0:CHECKERBOARD_HEIGHT].T.reshape(-1, 2)
objp = objp * SQUARE_SIZE

# for loop going over the sample of images
for i in range(SAMPLE_SIZE):
    img = cv2.imread('path to image')  # path of the i-th sampled frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (CHECKERBOARD_WIDTH, CHECKERBOARD_HEIGHT), val)
    if ret == True:
        objpoints.append(objp)
        # refining pixel coordinates for given 2d points
        corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners2)

# Calculate camera calibration
img = cv2.imread('path to reference image')
h, w = img.shape[:2]
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints,
                                                   imgpoints,
                                                   (w, h),
                                                   None,
                                                   None)
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

# Undistort
mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), 5)
dst = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
cv2.imwrite('path to output image', dst)
The result is unfortunately not correct as shown by the deviation of the red and white line at the top of the pitch.
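One sanity check worth running at this point is the per-frame reprojection error. This is a minimal sketch, reusing objpoints, imgpoints, rvecs, tvecs, mtx and dist from the script above; frames with a clearly larger error than the rest are usually the ones pulling the distortion model in the wrong direction and can be dropped before recalibrating:

# per-frame reprojection error of the calibration above
for i in range(len(objpoints)):
    projected, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], projected, cv2.NORM_L2) / len(projected)
    print("frame", i, "mean reprojection error:", error, "px")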

Python OpenCV remap/undistort cuts the image

I am trying to implement camera calibration from the OpenCV Python tutorial. This is my code:
import numpy as np
import cv2 as cv
import glob

CHECKERBOARD = (8,6)
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)

objp = np.zeros((CHECKERBOARD[0]*CHECKERBOARD[1], 3), np.float32)  # [48][3]
# (0,0,0), (1,0,0), (2,0,0) ...., (7,5,0)
objp[:, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

# images like *.jpg
lastImageWithPattern = None  # last image in which the pattern was recognized
images = glob.glob('*.jpg')
for fname in images:
    img = cv.imread(fname)  # read image
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)  # grayscale
    # find corners
    # ret: whether corners were found, corners: corner coordinates
    ret, corners = cv.findChessboardCorners(gray, (CHECKERBOARD[0], CHECKERBOARD[1]), None)
    if ret == True:
        lastImageWithPattern = fname
        print("Found pattern in " + fname)
        # refine corner coordinates
        corners2 = cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        # save corner coordinates
        objpoints.append(objp)
        imgpoints.append(corners2)

if lastImageWithPattern is not None:
    # Camera calibration
    # returns: success flag, camera matrix, distortion, rotation vectors, translation vectors
    ret, matrix, distortion, r_vecs, t_vecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
    img = cv.imread(lastImageWithPattern)
    h, w = img.shape[:2]  # image dimensions
    newCameraMatrix, roi = cv.getOptimalNewCameraMatrix(matrix, distortion, (w, h), 1, (w, h))
    # fix distortion
    mapx, mapy = cv.initUndistortRectifyMap(matrix, distortion, None, newCameraMatrix, (w, h), 5)
    dst = cv.remap(img, mapx, mapy, cv.INTER_LINEAR)
    # Crop image
    x, y, w, h = roi
    dst = dst[y:y+h, x:x+w]
    cv.imwrite('calibResult.jpg', dst)
I use 10 photos. As an example, I will show only one image here; the problem with the result is the same if you use only this image:
At the end of the script I expect an image with the distortion fixed and maybe a few missing pixels, but not almost all of them. In this case my result image (calibResult.jpg) is:
The result is the same if I use cv.undistort from the same tutorial.
I want to know why the image is cropped so heavily and possibly even more distorted.
My code works OK when I use samples/data/left01.jpg – left14.jpg, so maybe something in my images is not OK, but I don't know what, or how to debug it.
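A quick way to see how much of the crop comes from getOptimalNewCameraMatrix versus from a bad distortion estimate is to compare alpha = 0 and alpha = 1. This is a minimal sketch, reusing cv, img, matrix and distortion from the script above; if even alpha = 1 produces a heavily warped or mostly empty frame, the distortion coefficients themselves are off:

h, w = img.shape[:2]
for alpha in (0, 1):
    # alpha=0 crops away all invalid pixels, alpha=1 keeps every source pixel
    newMtx, roi = cv.getOptimalNewCameraMatrix(matrix, distortion, (w, h), alpha, (w, h))
    print("alpha =", alpha, "-> valid ROI:", roi)
    undist = cv.undistort(img, matrix, distortion, None, newMtx)
    cv.imwrite("calibResult_alpha%d.jpg" % alpha, undist)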

Undistortion crops the image and distorts edges

I'm trying to calibrate and undistort an image from a fisheye camera.
My code is:
import cv2
import os
import numpy as np

CHECKERBOARD = (5,7)
subpix_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1)
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_CHECK_COND + cv2.fisheye.CALIB_FIX_SKEW

R = np.zeros((1, 1, 3), dtype=np.float64)
T = np.zeros((1, 1, 3), dtype=np.float64)

objp = np.zeros((CHECKERBOARD[0]*CHECKERBOARD[1], 1, 3), np.float64)
objp[:, 0, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

_img_shape = None
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

images = os.listdir('./images/')
for fname in images:
    ext = os.path.splitext(fname)[-1].lower()
    if ext == ".jpg":
        img = cv2.imread('./images/' + fname)
        #print(fname + str(os.path.exists('./images/'+fname)))
        print(img.shape[:2])
        if _img_shape is None:
            _img_shape = img.shape[:2]
        else:
            assert _img_shape == img.shape[:2], "All images must share the same size."
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, CHECKERBOARD, cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)
        if ret == True:
            objpoints.append(objp)
            cv2.cornerSubPix(gray, corners, (3, 3), (-1, -1), subpix_criteria)
            imgpoints.append(corners)

N_OK = len(objpoints)
K = np.zeros((3, 3))
D = np.zeros((4, 1))
rvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(N_OK)]
tvecs = [np.zeros((1, 1, 3), dtype=np.float64) for i in range(N_OK)]
rms, K, D, rvecs, tvecs = \
    cv2.fisheye.calibrate(
        objpoints,
        imgpoints,
        gray.shape[::-1],
        K,
        D,
        rvecs,
        tvecs,
        calibration_flags,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6)
    )
DIM = _img_shape[::-1]
K = np.array(K.tolist())
D = np.array(D.tolist())
And the function to undistort:
def undistort(img_path):
    img = cv2.imread(img_path)
    h, w = img.shape[:2]
    nk = K.copy()
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, DIM, cv2.CV_16SC2)
    undistorted_img = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
    cv2.imwrite('calibresult1.png', undistorted_img)
It gives the following image:
undistorted image
While the original image is:
original image
The center seems to be undistorted, but the corners are still distorted and the image itself is cropped.
I'm not sure the calibration process is correct. If anyone has experience with it, I would appreciate a look at the code to spot possible errors.
Quick answer: bad calibration. Your undistorted image was obtained with (very) wrong intrinsics and distortion coefficients.
Sorry, I'm not going to debug your code, but code can be fine and still not produce the right calibration. Not everything is code: you must choose useful chessboard poses and use many images to improve the calibration.
Recommendation: with fisheye calibration, start by trying to get only the intrinsics (camera center and focal length), and avoid computing distortion coefficients at first.
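Independently of calibration quality, part of the cropping in the posted undistort() comes from passing K itself as the new camera matrix. This is a minimal sketch of an alternative, reusing K, D and DIM from the question's code and the standard cv2.fisheye API: estimateNewCameraMatrixForUndistortRectify with a balance parameter trades cropping against stretched corners:

def undistort_balanced(img_path, balance=0.5):
    # balance=0 reproduces the tight centre crop, balance=1 keeps the whole source frame
    img = cv2.imread(img_path)
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, DIM, np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, DIM, cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)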

OpenCV camera calibration for world-to-viewport mapping

I am making an image-processing application in which I would like to map viewport coordinates to world-window coordinates. I am using Python and have obtained the transformation/rotation matrices and distortion coefficients. The OpenCV C++ documentation says that camera calibration can be used for this purpose, but I can't figure out how. Can anyone help?
For camera calibration, you first have to print a chessboard sheet from this link: chess for print
Then use the following code for calibration:
1) Hold the chessboard sheet in different positions in front of the camera; this code opens the camera and detects the chessboard. After several detections, it calculates the camera matrix and distortion coefficients needed to undistort the camera image.
import numpy as np
import cv2

objp = np.zeros((6 * 7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

objpoints = []
imgpoints = []
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

cam = cv2.VideoCapture(0)
(w, h) = (int(cam.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT)))

while True:
    _, frame = cam.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (7, 6), None)
    if ret == True:
        objpoints.append(objp)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners)
        cv2.drawChessboardCorners(frame, (7, 6), corners, ret)
        cv2.imshow('Find Chessboard', frame)
        cv2.waitKey(0)
    cv2.imshow('Find Chessboard', frame)
    print("Number of chess boards found:", len(imgpoints))
    if cv2.waitKey(1) == 27:
        break

ret, oldMtx, coef, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints,
                                                      gray.shape[::-1], None, None)
newMtx, roi = cv2.getOptimalNewCameraMatrix(oldMtx, coef, (w, h), 1, (w, h))

print("Original Camera Matrix:\n", oldMtx)
print("Optimal Camera Matrix:\n", newMtx)

np.save("Original camera matrix", oldMtx)
np.save("Distortion coefficients", coef)
np.save("Optimal camera matrix", newMtx)

cam.release()
cv2.destroyAllWindows()
2) Then load the matrices and apply the undistort function to correct camera images with the sample code below:
import numpy as np
import cv2

oldMtx = np.load("Original camera matrix.npy")
coef = np.load("Distortion coefficients.npy")
newMtx = np.load("Optimal camera matrix.npy")

cam = cv2.VideoCapture(0)
(w, h) = (int(cam.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT)))

while True:
    _, frame = cam.read()
    undis = cv2.undistort(frame, oldMtx, coef, None, newMtx)
    cv2.imshow("Original vs Undistortion", np.hstack([frame, undis]))
    key = cv2.waitKey(1)
    if key == 27:
        break

cam.release()
cv2.destroyAllWindows()
This code works for me. Good luck!
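The original question is about mapping between world and viewport coordinates, which the calibration alone does not give you yet. This is a minimal sketch of that last step, reusing objp, imgpoints, oldMtx and coef from the calibration code in step 1 and treating the chessboard frame as the world frame: solvePnP recovers the board pose in one view, and projectPoints then maps any 3D point to pixel coordinates:

# Pose of the board in the last view where it was detected (assumes at least one detection)
ok, rvec, tvec = cv2.solvePnP(objp, imgpoints[-1], oldMtx, coef)

# Any 3D point given in board coordinates (here: the corner 3 squares across, 2 down)
world_point = np.float32([[3.0, 2.0, 0.0]])
pixel, _ = cv2.projectPoints(world_point, rvec, tvec, oldMtx, coef)
print("world point maps to pixel", pixel.ravel())

Going the other way (pixel back to world) additionally needs an assumption such as the point lying on the board plane, since a single camera cannot recover depth on its own.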

Camera calibration with OpenCV-Python

I'm trying to do camera calibration; I have taken the code from the OpenCV documentation. Here is my code:
import numpy as np
import cv2
import glob

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

objp = np.zeros((6*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

objpoints = []
imgpoints = []

images = glob.glob('/usr/local/share/OpenCV/samples/cpp/chess*.jpg')

img = cv2.imread("2.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

ret = False
ret, corners = cv2.findChessboardCorners(gray, (7, 6))
print(ret)

if ret == True:
    objpoints.append(objp)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    imgpoints.append(corners)
    # Draw and display the corners
    cv2.drawChessboardCorners(img, (7, 6), corners, ret)
    cv2.imshow('img', img)
    cv2.imwrite('Corners_detected.jpg', img, None)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints,
                                                   gray.shape[::-1], None, None)
img = cv2.imread('2.jpg')
h, w = img.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
# undistort
dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
cv2.imwrite('calibration_result.png', dst)
In this code, image 2.jpg is used for calibration.
This is the image used for the calibration:
My code detects corners only for this image. It does not work with other checkerboard images; it is not able to detect the corners. Why is that?
Unfortunately, I do not have enough reputation to comment and clarify some points. However, I will try to answer anyway. Given that you have added the print(ret), I assume this is where your problem lies.
It looks like you are using the wrong checkerboard size in cv2.findChessboardCorners(gray, (7, 6)). I have found that this function returns False when given the wrong dimensions.
This is also the case for the objp object.
Given the image you are showing, this should be (n-1, m-1), where n and m are the checkerboard dimensions in squares.
For your given image, this should be cv2.findChessboardCorners(gray, (9, 6)).
Notice that in the OpenCV calibration example the checkerboard is 8x7, hence the 7x6 input value.
The thing about the camera calibration method is that it sometimes will not recognize a checkerboard grid that isn't the maximum size. You could most likely get away with (8, 6) or (9, 5) as the size; however, with (6, 7) there is too much of a difference, so the method won't recognize it.
I don't have any research sources, but I've tested this myself before.
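To take the guesswork out of the pattern size, a small probe loop makes it obvious which inner-corner grid the detector actually accepts. This is a sketch, assuming the question's 2.jpg and a few hypothetical candidate sizes:

import cv2

img = cv2.imread("2.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Candidate (columns, rows) of inner corners; adjust to your printout
for size in [(9, 6), (8, 6), (7, 6), (7, 5)]:
    found, _ = cv2.findChessboardCorners(gray, size, None)
    print(size, "->", found)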
