cv2.stereoCalibrate throws assertion failed - python

I'm using opencv-contrib-python (4.5.4.60) to calibrate a stereo setup that is emulated with two pictures taken by a single camera (for now I only have one camera), as if there were two cameras, so I can do stereo depth estimation later. I find the intrinsic parameters of the camera from several photos and then try to pass the ChArUco marker points from the two photos into stereoCalibrate, but I get an assertion failure:
ret, M1, d1, M2, d2, R, T, E, F = cv2.stereoCalibrate(objpoints_L, imgpoints_L, imgpoints_R, camera_matrix, distortion_coefficients0, camera_matrix, distortion_coefficients0, img_r1.shape[:2], F=F)
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\calibration.cpp:1088: error: (-215:Assertion failed) (count >= 4) || (count == 3 && useExtrinsicGuess) in function 'cvFindExtrinsicCameraParams2'
I have checked the input types of the object points and image points with cv2.utils.dumpInputArray():
InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=40 dims(-1)=2 size(-1)=1x40 type(-1)=CV_32FC3
InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=40 dims(-1)=2 size(-1)=1x40 type(-1)=CV_32FC2
InputArray: empty()=false kind=0x00010000 flags=0x01010000 total(-1)=40 dims(-1)=2 size(-1)=1x40 type(-1)=CV_32FC2
I have sorted them so that I only pass points that match in both photos, but I still get the assertion failure and can't figure out what I'm doing wrong.

The problem was that the ChArUco marker points are returned by cv2.aruco.detectMarkers as an array of n objects with a single point each, i.e. shape (n, 1, 2). Some functions accept this point format (for example cv2.solvePnP or cv2.findFundamentalMat), but cv2.stereoCalibrate checks that every object has more than 3 points; since it gets n objects with a single point each, the assertion fails. To use these points you have to reshape the array into a single object containing all ChArUco points, i.e. (1, n, 2). Do the same reshape to (1, n, 3) for the object points obtained from cv2.aruco.getBoardObjectAndImagePoints.
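For concreteness, here is a minimal sketch of that reshape, reusing the variable names from the snippet above (imgpoints_L, imgpoints_R, objpoints_L, camera_matrix, distortion_coefficients0, img_r1); it assumes the matched points for the single stereo pair come out of the detection step with shapes (n, 1, 2) and (n, 1, 3):

import numpy as np
import cv2

# Reshape each per-view array into a single "object" holding all n ChArUco points.
objpoints = [objpoints_L.reshape(1, -1, 3).astype(np.float32)]    # (1, n, 3)
imgpoints_l = [imgpoints_L.reshape(1, -1, 2).astype(np.float32)]  # (1, n, 2)
imgpoints_r = [imgpoints_R.reshape(1, -1, 2).astype(np.float32)]  # (1, n, 2)

# One list entry per stereo pair; imageSize is (width, height).
ret, M1, d1, M2, d2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r,
    camera_matrix, distortion_coefficients0,
    camera_matrix, distortion_coefficients0,
    img_r1.shape[1::-1])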

Related

OpenCV stereoCalibration of two cameras using ChArUco

I want to calculate the relative transformation between two cameras ([R|t] matrix) using multiple frames of a charuco board. My idea was to obtain image-object point pairs from all the frames and then use a function which takes all of the detected point pairs and outputs relative transformation between cameras (e.g. stereoCalibrate).
What is the best approach to do that? I could not get stereoCalibrate to work, since it always throws assertion errors -> bugreport.
Current implementation (not working):
imagePointsA = []
imagePointsB = []
objectPoints = []

for frameA, frameB in color_framesets(...):
    try:
        # Find corners
        cornersA, idsA, rejected = cv2.aruco.detectMarkers(frameA, charucoDict)
        cornersB, idsB, rejected = cv2.aruco.detectMarkers(frameB, charucoDict)
        if not cornersA or not cornersB: raise Exception("No markers detected")

        retA, cornersA, idsA = cv2.aruco.interpolateCornersCharuco(cornersA, idsA, frameA, charucoBoard)
        retB, cornersB, idsB = cv2.aruco.interpolateCornersCharuco(cornersB, idsB, frameB, charucoBoard)
        if not retA or not retB: raise Exception("Can't interpolate corners")

        # Find common points in both frames (is there a nicer way?)
        objPtsA, imgPtsA = cv2.aruco.getBoardObjectAndImagePoints(charucoBoard, cornersA, idsA)
        objPtsB, imgPtsB = cv2.aruco.getBoardObjectAndImagePoints(charucoBoard, cornersB, idsB)

        # Create dictionary for each frame objectPoint:imagePoint
        ptsA = {tuple(a): tuple(b) for a, b in zip(objPtsA[:, 0], imgPtsA[:, 0])}
        ptsB = {tuple(a): tuple(b) for a, b in zip(objPtsB[:, 0], imgPtsB[:, 0])}
        common = set(ptsA.keys()) & set(ptsB.keys())  # intersection between obj points

        for objP in common:
            objectPoints.append(np.reshape(objP, (1, 3)))
            imagePointsA.append(np.reshape(ptsA[objP], (1, 2)))
            imagePointsB.append(np.reshape(ptsB[objP], (1, 2)))
    except Exception as e:
        print(f"Skipped frame: {e}")
        continue

result = cv2.stereoCalibrateExtended(objectPoints, imagePointsA, imagePointsB, intrA, distA, intrB, distB, (848, 480), flags=cv2.CALIB_FIX_INTRINSIC+cv2.CALIB_USE_EXTRINSIC_GUESS)
I have just made something similar earlier today. I assume you solved at least part of your issue, since you closed the bug you mentioned. In any case, it seems to me that the issue is that you are passing an array of points, while it should be an array of arrays of points (one array of points for each frame with sufficient data).
On a related note, cv2.aruco.getBoardObjectAndImagePoints is probably not what you are looking for: cornersA and cornersB are already image points (of the chessboard pattern corners), and the object points (the positions of the chessboard pattern corners) can be computed from the ChArUco corner ids, whereas getBoardObjectAndImagePoints deals with the ArUco marker corners as far as I can tell.
Internally, cv2.aruco.calibrateCameraCharuco simply calls cv2.calibrateCamera with the passed corners as image points and with object points computed from the passed ChArUco corner ids. Unfortunately, getting the object point for a given id isn't exposed in the API, but it's pretty easy to compute: https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/src/charuco.cpp#L157-L166
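To make that concrete, here is a rough sketch (not the asker's exact code) of the per-frame layout, with the object points taken from the board's chessboard corners. It assumes charucoBoard.chessboardCorners is exposed by the Python bindings of this aruco module; if it isn't, the corner positions can be computed from the square length exactly as in the linked charuco.cpp lines.

import numpy as np
import cv2

objectPoints, imagePointsA, imagePointsB = [], [], []

for frameA, frameB in color_framesets(...):  # same placeholder generator as above
    cornersA, idsA, _ = cv2.aruco.detectMarkers(frameA, charucoDict)
    cornersB, idsB, _ = cv2.aruco.detectMarkers(frameB, charucoDict)
    if len(cornersA) == 0 or len(cornersB) == 0:
        continue
    retA, chCornersA, chIdsA = cv2.aruco.interpolateCornersCharuco(cornersA, idsA, frameA, charucoBoard)
    retB, chCornersB, chIdsB = cv2.aruco.interpolateCornersCharuco(cornersB, idsB, frameB, charucoBoard)
    if not retA or not retB:
        continue

    # Map ChArUco corner id -> row index, then keep only corners seen in both frames
    idxA = {int(i): k for k, i in enumerate(chIdsA.ravel())}
    idxB = {int(i): k for k, i in enumerate(chIdsB.ravel())}
    common = sorted(set(idxA) & set(idxB))
    if len(common) < 4:
        continue

    # One (N, 3) / (N, 2) float32 array per frame: an array of arrays of points
    boardCorners = charucoBoard.chessboardCorners.reshape(-1, 3)
    objectPoints.append(boardCorners[common].astype(np.float32))
    imagePointsA.append(chCornersA[[idxA[i] for i in common]].reshape(-1, 2).astype(np.float32))
    imagePointsB.append(chCornersB[[idxB[i] for i in common]].reshape(-1, 2).astype(np.float32))

ret, M1, d1, M2, d2, R, T, E, F = cv2.stereoCalibrate(
    objectPoints, imagePointsA, imagePointsB,
    intrA, distA, intrB, distB, (848, 480),
    flags=cv2.CALIB_FIX_INTRINSIC)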

Using Board_create for aruco markers leads to an error because objPoints is not the correct type

I am trying to reverse engineer an aruco board by detecting it in an image.
I made a snippet that reproduces the problem: I create a GridBoard, render it to an image, detect the corners and ids in that image, and then try to use Board_create on them.
# Settings for the marker
max_amount_of_markers_w = 10
max_amount_of_markers_h = 6
ar = aruco.DICT_6X6_1000
aruco_dict = aruco.Dictionary_get(ar)

# create an aruco board
grid_board = cv2.aruco.GridBoard_create(max_amount_of_markers_w,
                                        max_amount_of_markers_h,
                                        0.05,
                                        0.01,
                                        aruco_dict)

# convert to image
img = grid_board.draw((1920,180))

# detect corners and ids
corners, ids, rejected = aruco.detectMarkers(img, aruco_dict)

# convert to X, Y, Z
new_corners = np.zeros(shape=(len(corners), 4, 3))
for cnt, corner in enumerate(corners):
    new_corners[cnt, :, :-1] = corner

# try to create a board via Board_create
aruco.Board_create(new_corners, aruco_dict, ids)
The error comes from the last line, the error is the following:
error: OpenCV(4.1.1)
C:\projects\opencv-python\opencv_contrib\modules\aruco\src\aruco.cpp:1458:
error: (-215:Assertion failed) objPoints.type() == CV_32FC3 ||
objPoints.type() == CV_32FC1 in function 'cv::aruco::Board::create'
This suggests that it needs something with 3 channels (for x, y and z), which the numpy array already provides.
A bit late, but I just came across the same problem, so I'll answer for posterity.
The error is not about the number of channels, which is correct, but about the datatype of new_corners: here new_corners.dtype == np.float64, while OpenCV asks for a 32-bit float, as the CV_32F in your error message shows.
A simple cast of new_corners fixes the issue. Your last line becomes:
aruco.Board_create(new_corners.astype(np.float32), aruco_dict, ids)
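A small variation on the same fix is to allocate the array as 32-bit float from the start, so no cast is needed later:

# Allocate the corner array with the dtype OpenCV expects (CV_32FC3-compatible)
new_corners = np.zeros(shape=(len(corners), 4, 3), dtype=np.float32)
for cnt, corner in enumerate(corners):
    new_corners[cnt, :, :-1] = corner
aruco.Board_create(new_corners, aruco_dict, ids)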

Get orientation of an image with respect to base image using Surf and Perspective Transform

In module del5.py
import cv2
import numpy as np

base_img = cv2.imread("/tmp/a/1.jpg")
test_img = cv2.imread("/tmp/a/1_1.jpg")

surf = cv2.xfeatures2d.SURF_create()
base_keyPoints, base_descriptors = surf.detectAndCompute(base_img, None)
test_keyPoints, test_descriptors = surf.detectAndCompute(test_img, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(base_descriptors, test_descriptors, k=2)

goodMatches = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        goodMatches.append(m)
print len(goodMatches)

sourcePoints = np.float32([base_keyPoints[m.queryIdx].pt for m in goodMatches])
destinationPoints = np.float32([test_keyPoints[m.trainIdx].pt for m in goodMatches])
print len(sourcePoints)
print len(destinationPoints)

sourcePoints = np.float32([[c[0], c[1]] for c in sourcePoints])
destinationPoints = np.float32([[c[0], c[1]] for c in destinationPoints])

_m = cv2.getPerspectiveTransform(sourcePoints, destinationPoints)
I am using Python 2.7 and OpenCV 3. I have two copies of the same image, but the test image is rotated 90 degrees with respect to the base image.
In the above code I try to recover a view of the test (rotated) image that matches the base image. My algorithm's steps are:
read both images (base and test)
create SURF
get features of both images
extract good features
get feature points of both images (source points and destination points)
get the perspective transform and warp the image to get the corrected view
but when I try to get the perspective view I get this error:
Output:
4116
4116
4116
OpenCV Error: Assertion failed (src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4) in getPerspectiveTransform, file /opt/opencv/modules/imgproc/src/imgwarp.cpp, line 7135
Traceback (most recent call last):
  File "del5.py", line 41, in <module>
    _m = cv2.getPerspectiveTransform(sourcePoints, destinationPoints)
cv2.error: /opt/opencv/modules/imgproc/src/imgwarp.cpp:7135: error: (-215) src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4 in function getPerspectiveTransform
In your getPerspectiveTransform call you need to pass exactly four points, but you are passing the full lists.
Change this line:
_m = cv2.getPerspectiveTransform(sourcePoints, destinationPoints)
into this one:
_m = cv2.getPerspectiveTransform(sourcePoints[0:4], destinationPoints[0:4])
For perspective transformation, you need a 3x3 transformation matrix. Straight lines will remain straight even after the transformation. To find this transformation matrix, you need 4 points on the input image and corresponding points on the output image. Among these 4 points, 3 of them should not be collinear. The transformation matrix can be found by the function cv2.getPerspectiveTransform. Then apply cv2.warpPerspective with this 3x3 transformation matrix.
This is as per the OpenCV documentation on Perspective Transformation.
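Putting the two answers together, a minimal sketch would look like the following. It reuses sourcePoints, destinationPoints, base_img and test_img from the question and assumes the four chosen correspondences are well spread and not collinear; in practice you would want to pick four such matches rather than blindly take the first four.

# Exactly four matched (x, y) points per image, float32, shape (4, 2)
src_quad = sourcePoints[0:4]
dst_quad = destinationPoints[0:4]

# Map test-image coordinates onto the base-image frame, then warp the test image
M = cv2.getPerspectiveTransform(dst_quad, src_quad)
h, w = base_img.shape[:2]
aligned = cv2.warpPerspective(test_img, M, (w, h))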

trouble getting cv.transform to work

I'd like to use the same affine matrix M on some individual (x, y) points that I use on images with cv2.warpAffine. It seems cv2.transform is the way to go. But when I try to pass an Nx2 matrix of points, I get an error:
src = np.array([[x1, y1], [x2, y2], [x3, y3], [x4, y4]], dtype="float32")
print('source shape ' + str(src.shape))
dst = cv2.transform(src, M)
cv2.error: /home/jeremy/sw/opencv-3.1.0/modules/core/src/matmul.cpp:1947: error: (-215) scn == m.cols || scn + 1 == m.cols in function transform
I can get the transform I want just using numpy arithmetic:
dst = np.dot(src, M[:, 0:2]) + M[:, 2]
print('dest:{}'.format(dst))
But I would like to understand what's going on. The docs say that cv2.transform wants a number of channels equal to the number of columns in M, but I'm not clear what the channels would be - maybe an 'x' channel and a 'y' channel, but then what would the third one be, and what would the different rows signify?
OpenCV on Python often wants points in the form
np.array([ [[x1, y1]], ..., [[xn, yn]] ])
This is not clear in the documentation for cv2.transform(), but it is clearer in the documentation for other functions that use points, like cv2.perspectiveTransform(), where they mention that the coordinates should be on separate channels:
src – input two-channel or three-channel floating-point array
Transforms can also be used in 3D (using a 4x4 perspective transformation matrix) so that would explain the ability to use two- or three-channel arrays in cv2.transform().
The channel is the last dimension of the source array. Let's start from the docs of cv2.transform().
To the question:
Because the function transforms each element of src, src needs a channel dimension: as a numpy array it must have more than 2 dimensions, with the coordinates in the last one.
import cv2
import numpy as np
rotation_mat = np.array([[0.8660254, 0.5, -216.41978046], [-0.5, 0.8660254, 264.31038357]]) # 2x3
rotate_box = np.array([[410, 495], [756, 295], [956, 642], [610, 842]]) # 4x2
result_box = cv2.transform(rotate_box, rotation_mat) # error: (-215:Assertion failed) scn == m.cols || scn + 1 == m.cols in function 'transform'
The reason is that each element of rotate_box has shape (2,); there is no channel dimension, so the matrix multiplication cannot proceed.
To the other answer:
As long as the last dimension fits, the other dimensions do not matter. Continuing the above snippet:
rotate_box_1 = np.array([rotate_box]) # 1x4x2
result_box = cv2.transform(rotate_box_1, rotation_mat) # 1x4x2
rotate_box_2 = np.array([[[410, 495]], [[756, 295]], [[956, 642]], [[610, 842]]]) # 4x1x2
result_box = cv2.transform(rotate_box_2, rotation_mat) # 4x1x2
To the reader:
Note that the shape returned by cv2.transform() is the same as that of src.

OpenCV imread and getFundamentalMatrix

Cannot make this work:
img1 = cv::imread('glassL.jpg')
img2 = cv::imread('glassR.jpg')
img1g = cv::Mat.new
cv::cvtColor(img1, img1g, CV_BGR2GRAY);
img2g = cv::Mat.new
cv::cvtColor(img2, img2g, CV_BGR2GRAY);
F = cv::findFundamentalMat(img1g, img2g, cv::FM_RANSAC, 0.1, 0.99)
It throws this error:
OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type()) in findFundamentalMat, file /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp, line 1103
/usr/local/var/rbenv/versions/2.1.0/lib/ruby/gems/2.1.0/gems/ropencv-0.0.15/lib/ropencv/ropencv_types.rb:10509:in `find_fundamental_mat': /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp:1103: error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function findFundamentalMat (RuntimeError)
I am using ropencv (Ruby + FFI), but I tried with Python cv2 and got exactly the same error. I cannot find any documentation on this and I am lost. checkVector(2) returns -1 on both grayscale and color images and I don't know how to convert them to make them work with findFundamentalMat. Help please.
You are passing the images directly to the function that computes the fundamental matrix, and that is not correct. The documentation says:
points1 – Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2 – Array of the second image points of the same size and format as points1 .
Therefore, you cannot simply pass the images. There is a full example here using OpenCV. But there is a nice explanation here, step by step, using MATLAB, so you can understand which points you are going to use (such as Harris corners).
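For reference, here is a rough Python (cv2) sketch of that idea, using ORB simply because it ships with OpenCV (any matched corners, e.g. Harris, would do) and the newer cv2 API rather than the 2.4 one from the question:

import cv2
import numpy as np

img1g = cv2.imread('glassL.jpg', cv2.IMREAD_GRAYSCALE)
img2g = cv2.imread('glassR.jpg', cv2.IMREAD_GRAYSCALE)

# Detect and match features; findFundamentalMat needs matched point arrays, not images
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1g, None)
kp2, des2 = orb.detectAndCompute(img2g, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])  # N x 2
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 0.1, 0.99)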
