Cannot make this work:
require 'ropencv'
include OpenCV

img1 = cv::imread('glassL.jpg')
img2 = cv::imread('glassR.jpg')
img1g = cv::Mat.new
cv::cvtColor(img1, img1g, CV_BGR2GRAY);
img2g = cv::Mat.new
cv::cvtColor(img2, img2g, CV_BGR2GRAY);
F = cv::findFundamentalMat(img1g, img2g, cv::FM_RANSAC, 0.1, 0.99)
It throws this error:
OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type()) in findFundamentalMat, file /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp, line 1103
/usr/local/var/rbenv/versions/2.1.0/lib/ruby/gems/2.1.0/gems/ropencv-0.0.15/lib/ropencv/ropencv_types.rb:10509:in `find_fundamental_mat': /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp:1103: error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function findFundamentalMat (RuntimeError)
I am using ropencv (Ruby + FFI), but I tried with Python cv2 and got exactly the same error. I cannot find any documentation on this and I am lost. checkVector(2) returns -1 on both grayscale and color images and I don't know how to convert them to make them work with findFundamentalMat. Help please.
You are passing the images directly to the function that computes the fundamental matrix, and this is not correct. The documentation says:
points1 – Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2 – Array of the second image points of the same size and format as points1.
Therefore, you cannot simply pass in the images. There is a full example here using OpenCV, and there is a nice step-by-step explanation here using MATLAB, so you can understand which points you are going to use (such as Harris corners).
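For instance, here is a minimal Python sketch of the whole pipeline (written against the modern cv2 API; ORB and the brute-force matcher stand in for whatever detector/matcher you prefer, and under OpenCV 2.4 the detector is constructed as cv2.ORB() rather than cv2.ORB_create()):

import cv2
import numpy as np

img1 = cv2.imread("glassL.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("glassR.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and match their descriptors.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Build the Nx2 floating-point point arrays the assertion asks for.
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 0.1, 0.99)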
Related
I am trying to reverse engineer an aruco board from its detection in an image.
I made a snippet that reproduces the problem: it creates a GridBoard, draws it to an image, detects the markers, and then tries to use Board_create on the detected corners and ids.
import cv2
import numpy as np
from cv2 import aruco

# Settings for the marker
max_amount_of_markers_w = 10
max_amount_of_markers_h = 6
ar = aruco.DICT_6X6_1000
aruco_dict = aruco.Dictionary_get(ar)

# create an aruco board
grid_board = cv2.aruco.GridBoard_create(max_amount_of_markers_w,
                                        max_amount_of_markers_h,
                                        0.05,
                                        0.01,
                                        aruco_dict)

# convert to image
img = grid_board.draw((1920,180))

# detected corners and ids
corners, ids, rejected = aruco.detectMarkers(img, aruco_dict)

# convert to X,Y,Z
new_corners = np.zeros(shape=(len(corners), 4, 3))
for cnt, corner in enumerate(corners):
    new_corners[cnt, :, :-1] = corner

# try to create a board via Board_create
aruco.Board_create(new_corners, aruco_dict, ids)
The error comes from the last line; it is the following:
error: OpenCV(4.1.1)
C:\projects\opencv-python\opencv_contrib\modules\aruco\src\aruco.cpp:1458:
error: (-215:Assertion failed) objPoints.type() == CV_32FC3 ||
objPoints.type() == CV_32FC1 in function 'cv::aruco::Board::create'
This means that it needs something with 3 channels (for x, y and z), which is what the numpy array provides.
A bit late, but I just came across the same problem, so I'll answer for posterity.
The error is not about the number of channels (that part of your code is correct), but about the datatype of new_corners: here new_corners.dtype == np.float64, while OpenCV asks for a 32-bit float, as the CV_32F in your error shows.
A simple cast of new_corners fixes the issue. Your last line becomes:
aruco.Board_create(new_corners.astype(np.float32),aruco_dict,ids)
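Equivalently, you can allocate new_corners with the right dtype from the start, so no cast is needed later:

new_corners = np.zeros(shape=(len(corners), 4, 3), dtype=np.float32)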
In this code I am trying to compute an integral image. Every time I run it, a window flashes and disappears, and then I get this error in the terminal:
import cv2
import numpy as np
image = cv2.imread("nancy.jpg")
(rows,cols,dims) = image.shape
sum = np.zeros((rows,cols), np.uint8)
imageIntegral = cv2.integral(image, sum, -1)
cv2.imshow("imageIntegral", imageIntegral)
cv2.waitKey()
Error:
cv2.imshow("imageIntegral",imageIntegral)cv2.error: OpenCV(4.1.0) C:/projects/opencv-python/opencv/modules/highgui/src/precomp.hpp:131:
error: (-215:Assertion failed) src_depth != CV_16F && src_depth !=
CV_32S in function 'convertToShow'
Check whether your image is uint8 or not
image = image.astype(np.uint8)
Help on cv2.integral:
>>> import cv2
>>> print(cv2.__version__)
4.0.1-dev
>>> help(cv2.integral)
Help on built-in function integral:
integral(...)
integral(src[, sum[, sdepth]]) -> sum
. #overload
A simple demo:
import numpy as np
import cv2
img = np.uint8(np.random.random((2,2,3))*255)
dst = cv2.integral(img)
>>> print(img.shape, img.dtype)
(2, 2, 3) uint8
>>> print(dst.shape, dst.dtype)
(3, 3, 3) int32
And you shouldn't use imshow directly on the dst image because it's not np.uint8. Normalize it to np.uint8 (range 0 to 255) or np.float32 (range 0.0 to 1.0) first. You can find the reason at this link: How to use `cv2.imshow` correctly for the float image returned by `cv2.distanceTransform`?
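Continuing the demo above, one way to bring dst into a displayable range is a sketch like this (cv2.normalize is one option; rescaling by dst.max() as in the next answer works just as well):

# Rescale the int32 integral image into 0..255 and convert to uint8 for display.
shown = cv2.normalize(dst, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow("integral", shown)
cv2.waitKey()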
cv.imshow requires the given image to have dtype from the set of np.uint8, np.uint16, np.float32, np.float64. It does not accept other types.
Your integral image has type np.int32. The error message explicitly rejected your data because it was np.int32 (or half-float, CV_16F, but that's not the case here).
So, you need to convert it to a dtype that is acceptable to imshow. Before converting with .astype(...), you must make sure that your value range is mapped into the range of 8-bit integers (0 to 255), or floats (imshow expects 0.0 to 1.0). Normalize your data using the maximum, like this:
import cv2 as cv

imageIntegral = cv.integral(src=image)
# division results in values ranging from 0.0 to 1.0
# type is floating point array (float64)
presentable = imageIntegral / imageIntegral.max()
cv.imshow("imageIntegral", presentable)
cv.waitKey()
cv.destroyWindow("imageIntegral")
[input image, from the OpenCV documentation]
[presentable: the normalized integral image, which displays as a smooth gradient]
You obviously don't see much in an integral image because, by its nature, it's mostly a gradient; that is what happens when one integrates a series of values chosen from a range. That is also why imageIntegral.astype(np.uint8) results in noise.
In module del5.py
import cv2
import numpy as np

base_img = cv2.imread("/tmp/a/1.jpg")
test_img = cv2.imread("/tmp/a/1_1.jpg")

surf = cv2.xfeatures2d.SURF_create()
base_keyPoints, base_descriptors = surf.detectAndCompute(base_img, None)
test_keyPoints, test_descriptors = surf.detectAndCompute(test_img, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(base_descriptors, test_descriptors, k=2)

goodMatches = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        goodMatches.append(m)
print len(goodMatches)

sourcePoints = np.float32([base_keyPoints[m.queryIdx].pt for m in goodMatches])
destinationPoints = np.float32([test_keyPoints[m.trainIdx].pt for m in goodMatches])
print len(sourcePoints)
print len(destinationPoints)

sourcePoints = np.float32([[c[0], c[1]] for c in sourcePoints])
destinationPoints = np.float32([[c[0], c[1]] for c in destinationPoints])

_m = cv2.getPerspectiveTransform(sourcePoints, destinationPoints)
I am using Python 2.7 and OpenCV 3. I have two identical images, but the test image is rotated 90 degrees with respect to the base image.
In the above code I try to obtain a correct view of the test (rotated) image, like the base image. My algorithm's steps are:
read both images (base and test)
create SURF
get features of both images
extract good features
get the feature points of both images (source points and destination points)
get the perspective transform and perform warpPerspective on the image to get the corrected view
But when I try to compute the perspective transform I get the error.
Output:
> 4116
> 4116
> 4116
> OpenCV Error: Assertion failed (src.checkVector(2,
> CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4) in
> getPerspectiveTransform, file
> /opt/opencv/modules/imgproc/src/imgwarp.cpp, line 7135 Traceback (most
> recent call last): File "del5.py", line 41, in <module>
> _m = cv2.getPerspectiveTransform(sourcePoints, destinationPoints) cv2.error: /opt/opencv/modules/imgproc/src/imgwarp.cpp:7135: error:
> (-215) src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F)
> == 4 in function getPerspectiveTransform
Your getPerspectiveTransform call needs exactly four points, and you are passing it the whole list of matches.
In your code, change this line:
_m = cv2.getPerspectiveTransform(sourcePoints, destinationPoints)
into this one:
_m = cv2.getPerspectiveTransform(sourcePoints[0:4], destinationPoints[0:4])
For perspective transformation, you need a 3x3 transformation matrix. Straight lines will remain straight even after the transformation. To find this transformation matrix, you need 4 points on the input image and corresponding points on the output image. Among these 4 points, 3 of them should not be collinear. The transformation matrix can be found by the function cv2.getPerspectiveTransform. Then apply cv2.warpPerspective with this 3x3 transformation matrix.
As per the OpenCV documentation on Perspective Transformation.
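A minimal sketch of that pattern (the four corner coordinates here are made up for illustration, not computed from your matches):

import cv2
import numpy as np

img = cv2.imread("/tmp/a/1_1.jpg")  # the rotated test image

# Exactly four corresponding point pairs (illustrative values).
src = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])
dst = np.float32([[0, 479], [0, 0], [639, 0], [639, 479]])

M = cv2.getPerspectiveTransform(src, dst)         # 3x3 matrix from exactly 4 pairs
warped = cv2.warpPerspective(img, M, (640, 480))  # apply it to the whole image

With thousands of noisy matches, as in this question, picking an arbitrary four is fragile; cv2.findHomography(sourcePoints, destinationPoints, cv2.RANSAC) accepts all the matched points and robustly estimates the same kind of 3x3 matrix, which is usually the better fit here.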
In my previous post I optimized the Python iteration loop into the numpy way.
Now I face the next problem: converting the result to a binary image, like this:
def convertRed(rawimg):
    blue = rawimg[:, :, 0]
    green = rawimg[:, :, 1]
    red = rawimg[:, :, 2]
    exg = 1.5 * red - green - blue
    processedimg = np.where(exg > 50, exg, 2)
    ret2, th2 = cv2.threshold(processedimg, 0, 255, cv2.THRESH_OTSU)  # error line
    return processedimg
The error is here
error: (-215) src.type() == CV_8UC1 in function cv::threshold
How to solve this problem?
With THRESH_OTSU, the cv2.threshold function only accepts uint8 values; this means you can only apply Otsu's algorithm if your image is an 8-bit single-channel array with pixel values between 0 and 255.
As you can see, when you multiply your values by 1.5 your image starts to contain floating-point values, making it unsuitable for cv2.threshold, hence the error message src.type() == CV_8UC1.
You can modify your code as follows:
processedimg = np.where(exg > 50, exg, 2)
processedimg = cv2.convertScaleAbs(processedimg)
ret2, th2 = cv2.threshold(processedimg, 0, 255, cv2.THRESH_OTSU)  # the formerly failing line
What we are doing here is using the OpenCV function cv2.convertScaleAbs, you can see in the OpenCV Documentation:
cv2.convertScaleAbs
Scales, calculates absolute values, and converts the result to 8-bit.
Python: cv2.convertScaleAbs(src[, dst[, alpha[, beta]]]) → dst
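A tiny demonstration of what convertScaleAbs does to out-of-range float values (a sketch; the array contents are arbitrary):

import cv2
import numpy as np

a = np.array([[-10.6, 300.2]])       # float64, values outside 0..255
print(cv2.convertScaleAbs(a))        # [[ 11 255]] -- abs, round, saturate
print(cv2.convertScaleAbs(a).dtype)  # uint8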
It is a data-type error.
As Eliezer said, when you multiply by 1.5 the exg matrix is converted to float64, which does not work for cv2.threshold, which requires the uint8 data type.
So one of the solutions could be:
def convertRed(rawimg):
    b = rawimg[:, :, 0]
    g = rawimg[:, :, 1]
    r = rawimg[:, :, 2]
    exg = 1.5 * r - g - b
    processedimg = np.where(exg > 50, exg, 2)
    processedimg = np.uint8(np.abs(processedimg))  # abs to fix negative values
    ret2, th2 = cv2.threshold(processedimg, 0, 255, cv2.THRESH_OTSU)  # formerly the error line
    return processedimg
I used np.uint8() after np.abs() to avoid wrong results (negative values wrapping around to white) in the conversion to the uint8 data type.
Although your processedimg array is already positive because of the np.where applied before, this practice is usually safer.
Why does it convert to float64? Because in Python, multiplying any integer value by a float produces a float:
type(1.5 * int(7)) == float  # True
Another point is the usage of numpy functions instead of OpenCV's, which is usually faster.
I need to apply some rapid conversion on the color channels of an image.
1) I have stored the corresponding output values in a list:
ListaVred = [0] * 255
for i in range(0, 255):
    ListaVred[i] = i * 127 / 255 + 128
2) I get the color input value, from 0 to 255, from the image.
3) I should replace the input values in the image with the output values, i.e. if red[45] == 0 and ListaVred[0] == 128, then red[45] becomes 128.
I've looked at the cv2.LUT(src, dst) function, but I'm not sure about its use:
http://docs.opencv.org/trunk/modules/core/doc/operations_on_arrays.html#cv2.LUT
cv2.LUT(ListaVred,red)
TypeError: src is not a numpy array, neither a scalar
ListaVred = np.array(ListaVred)
cv2.LUT(ListaVred,red)
cv2.error: /build/opencv-Ai8DTy/opencv-2.4.6.1+dfsg/modules/core/src/convert.cpp:1195: error: (-215) (lutcn == cn || lutcn == 1) && lut.total() == 256 && lut.isContinuous() && (src.depth() == CV_8U || src.depth() == CV_8S) in function LUT
You looked into the documentation of a yet-to-be-released version of OpenCV (version 3.0). I guess you are using OpenCV 2.4.8 or lower. Check this documentation instead, and notice the differences in the usage of the LUT function. At the top of the page you will see the version of OpenCV to which the documentation belongs.
LUT(srcImage, lookupTable, dstImage)
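Beyond the argument order (the image comes first, the table second), the assertion in your traceback also demands that the table have exactly 256 entries (lut.total() == 256, while [0]*255 gives only 255) and that the source image be 8-bit. A minimal sketch putting this together (the file name is hypothetical):

import cv2
import numpy as np

img = cv2.imread("input.jpg")  # hypothetical input, dtype uint8

# Exactly 256 entries, dtype uint8; the image is the first argument.
lut = np.array([i * 127 // 255 + 128 for i in range(256)], dtype=np.uint8)
red_mapped = cv2.LUT(img[:, :, 2], lut)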