I am trying to convert some Python code to C++.
Here is my first variable, defined like this:
x = np.ones((h,w,4),dtype=np.uint8)*255
I want to set the fourth channel (index 3) equal to another variable, like this:
x[:,:,3] = y
I am using OpenCV. I wrote a little C++ like this:
cv::Mat x;
x = cv::Mat::ones(cv::Size(h, w), CV_32FC1)*255;
cv::Mat channel[3];
cv::split(x, channel);
channel[0] = cv::Mat::zeros(x.rows, x.cols, CV_32FC1);
cv::merge(channel,3,x);
I got this error when I run the code:
merge.dispatch.cpp:129: error: (-215:Assertion failed) mv[i].size == mv[0].size && mv[i].depth() == depth in function 'merge'
How can I fix this?
I am trying to reverse-engineer an aruco board by detecting it in an image.
I made a snippet that reproduces the problem: it creates a GridBoard, draws it to an image, detects the markers in that image, and then tries to use Board_create on the detected corners and ids.
# Settings for the marker
max_amount_of_markers_w = 10
max_amount_of_markers_h = 6
ar = aruco.DICT_6X6_1000
aruco_dict = aruco.Dictionary_get(ar)
# create an aruco Board
grid_board = cv2.aruco.GridBoard_create(max_amount_of_markers_w,
                                        max_amount_of_markers_h,
                                        0.05,
                                        0.01,
                                        aruco_dict)
# convert to image
img = grid_board.draw((1920,180))
# detected corners and ids
corners, ids, rejected = aruco.detectMarkers(img, aruco_dict)
# convert to X,Y,Z
new_corners = np.zeros(shape=(len(corners), 4, 3))
for cnt, corner in enumerate(corners):
    new_corners[cnt, :, :-1] = corner
# try to create a board via Board_create
aruco.Board_create(new_corners,aruco_dict,ids)
The error comes from the last line; it is the following:
error: OpenCV(4.1.1)
C:\projects\opencv-python\opencv_contrib\modules\aruco\src\aruco.cpp:1458:
error: (-215:Assertion failed) objPoints.type() == CV_32FC3 ||
objPoints.type() == CV_32FC1 in function 'cv::aruco::Board::create'
This means that it needs something with 3 channels (for x, y and z), which is what the numpy array provides.
A bit late but I just came across the same problem, so I'll answer for posterity.
The error is not about the number of channels, which is correct, but about the datatype of new_corners: here new_corners.dtype == np.float64, while OpenCV asks for 32-bit floats, as the CV_32F in your error shows.
A simple cast of new_corners fixes the issue. Your last line becomes:
aruco.Board_create(new_corners.astype(np.float32),aruco_dict,ids)
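As a minor variation on the same fix (my suggestion, not part of the original answer), you could allocate the array as float32 from the start, so no cast is needed later:
import numpy as np

# An explicit dtype avoids numpy's float64 default entirely;
# corners comes from the detectMarkers call in the question.
new_corners = np.zeros(shape=(len(corners), 4, 3), dtype=np.float32)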
I am facing a problem where cv2 methods like detectAndCompute and cv2.HoughLinesP fail with either a depth error or a NoneType result when fed a binary image.
For example, in the following:
def high_blue(B, G):
    if (B > 70 and abs(B-G) <= 15):
        return 255
    else:
        return 0
img = cv2.imread(os.path.join(dirPath,filename))
b1 = img[:,:,0] # Blue channel
b2 = img[:,:,1] # Green channel
b3 = img[:,:,2] # Red channel
zlRhmn_query = np.zeros((2400, 2400), dtype=np.uint8)
zlRhmn_query[300:2100, 300:2100] = 255
zlRhmn_query[325:2075, 325:2075] = 0
cv2.imwrite('zlRhmn_query.jpg',zlRhmn_query)
zl_Rahmen_bin = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8)
zl_Rahmen_bin = np.vectorize(high_blue)
zl_Rahmen_bin = zl_Rahmen_bin(b1, b2)
cv2.imwrite('zl_Rahmen_bin.jpg',zl_Rahmen_bin)
sift = cv2.xfeatures2d.SIFT_create()
kp_query, des_query = sift.detectAndCompute(zlRhmn_query,None)
kp_train, des_train = sift.detectAndCompute(zl_Rahmen_bin,None)
Only the last line, the one with zl_Rahmen_bin, fails, with a depth mismatch error. Strangely, zlRhmn_query does not throw any error.
Next, when I use the skeletonization snippet given here (http://opencvpython.blogspot.de/2012/05/skeletonization-using-opencv-python.html) and pass the skeleton to HoughLinesP, I get a lines object of type NoneType. On inspection I noticed that the skeleton array is also binary, i.e. 0 or 255.
Please advise.
I have programmed maybe 50 lines of Python in my life, so excuse me if I am mistaken here.
zl_Rahmen_bin = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8)
With this you have just created a 2D array filled with zeros.
zl_Rahmen_bin = np.vectorize(high_blue)
Now you immediately assign another value (a function) to the same variable, which makes the first line pretty pointless.
zl_Rahmen_bin = zl_Rahmen_bin(b1, b2)
So as far as I understand it, you just called the function zl_Rahmen_bin (the vectorized high_blue) with the blue and green channels as input. The output should be another 2D array with values of either 0 or 255.
I was wondering how np.vectorize knows which datatype the output should be, as you obviously need uint8. The documentation says that the output type, if not given, is determined by calling the function with the first element of the input. So I guess it is the default type of 255 or 0 in this example,
and zl_Rahmen_bin.dtype actually is np.uint32.
So I modified this:
zl_Rahmen_bin = np.vectorize(high_blue)
To
zl_Rahmen_bin = np.vectorize(high_blue, otypes=[np.uint8])
which seems to do the job.
Maybe it is sufficient to just cast the result afterwards:
zl_Rahmen_bin = zl_Rahmen_bin.astype(np.uint8)
(Assigning to the .dtype attribute directly would reinterpret the raw bytes instead of converting the values, so astype is the safe way.)
But as I said, I have no idea about Python...
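For completeness, here is a minimal self-contained sketch of that fix (my illustration, not the original answer's code), reusing the question's names and assuming img is any BGR image loaded with cv2.imread:
import numpy as np

def high_blue(B, G):
    # 255 where blue is strong and close to green, else 0;
    # the int() casts avoid uint8 wrap-around in the subtraction
    return 255 if B > 70 and abs(int(B) - int(G)) <= 15 else 0

# Without otypes, np.vectorize infers the output dtype from the first call;
# SIFT's detectAndCompute expects an 8-bit single-channel image.
high_blue_vec = np.vectorize(high_blue, otypes=[np.uint8])
zl_Rahmen_bin = high_blue_vec(img[:, :, 0], img[:, :, 1])
print(zl_Rahmen_bin.dtype)  # uint8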
I'd like to apply the same affine matrix M that I use on images with cv2.warpAffine to some individual (x,y) points. It seems cv2.transform is the way to go, but when I try to send an Nx2 matrix of points the call fails:
src = np.array([
[x1,y1],[x2,y2],[x3,y3],[x4,y4]], dtype = "float32")
print('source shape '+str(src.shape))
dst=cv2.transform(src,M)
cv2.error: /home/jeremy/sw/opencv-3.1.0/modules/core/src/matmul.cpp:1947: error: (-215) scn == m.cols || scn + 1 == m.cols in function transform
I can get the transform I want using plain numpy arithmetic:
dst = np.dot(src,M[:,0:2]) +M[:,2]
print('dest:{}'.format(dst))
But I would like to understand what's going on. The docs say that cv2.transform wants a number of channels equal to the number of columns in M, but I'm not clear on what the channels would be - maybe an 'x' channel and a 'y' channel, but then what would the third be, and what would the different rows signify?
OpenCV on Python often wants points in the form
np.array([ [[x1, y1]], ..., [[xn, yn]] ])
This is not clear in the documentation for cv2.transform(), but it is clearer in the documentation for other functions that use points, like cv2.perspectiveTransform(), where the coordinates are said to be on separate channels:
src – input two-channel or three-channel floating-point array
Transforms can also be used in 3D (using a 4x4 perspective transformation matrix), so that would explain the ability to use two- or three-channel arrays in cv2.transform().
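In other words, reshaping the question's Nx2 array to Nx1x2 puts the coordinates on the channel axis. A minimal sketch with made-up points and matrix (my illustration, not from the original answer):
import numpy as np
import cv2

M = np.float32([[1, 0, 10], [0, 1, 20]])    # example 2x3 affine matrix
src = np.float32([[1, 2], [3, 4], [5, 6]])  # Nx2 points

# Nx1x2 makes each point a two-channel element, satisfying scn + 1 == m.cols
dst = cv2.transform(src.reshape(-1, 1, 2), M)
dst = dst.reshape(-1, 2)                    # back to Nx2 if preferred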
The channel is the last dimension of the source array. Let's start by reading the docs of cv2.transform().
To the question:
Because the function transforms each element of src, src is required to have more than 2 dimensions, so that its last dimension can serve as the channels.
import cv2
import numpy as np
rotation_mat = np.array([[0.8660254, 0.5, -216.41978046], [-0.5, 0.8660254, 264.31038357]]) # 2x3
rotate_box = np.array([[410, 495], [756, 295], [956, 642], [610, 842]], dtype=np.float32) # 4x2
result_box = cv2.transform(rotate_box, rotation_mat) # error: (-215:Assertion failed) scn == m.cols || scn + 1 == m.cols in function 'transform'
The reason is that each element of rotate_box has shape (2,), i.e. there is no channel axis, so the matrix multiplication cannot proceed.
Regarding the other answer:
As long as the last dimension fits, the other dimensions do not matter. Continuing the above snippet:
rotate_box_1 = np.array([rotate_box]) # 1x4x2
result_box = cv2.transform(rotate_box_1, rotation_mat) # 1x4x2
rotate_box_2 = np.array([[[410, 495]], [[756, 295]], [[956, 642]], [[610, 842]]], dtype=np.float32) # 4x1x2
result_box = cv2.transform(rotate_box_2, rotation_mat) # 4x1x2
To the reader:
Note that the shape returned by cv2.transform() is the same as that of src.
Cannot make this work:
img1 = cv::imread('glassL.jpg')
img2 = cv::imread('glassR.jpg')
img1g = cv::Mat.new
cv::cvtColor(img1, img1g, CV_BGR2GRAY);
img2g = cv::Mat.new
cv::cvtColor(img2, img2g, CV_BGR2GRAY);
F = cv::findFundamentalMat(img1g, img2g, cv::FM_RANSAC, 0.1, 0.99)
It throws this error:
OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type()) in findFundamentalMat, file /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp, line 1103
/usr/local/var/rbenv/versions/2.1.0/lib/ruby/gems/2.1.0/gems/ropencv-0.0.15/lib/ropencv/ropencv_types.rb:10509:in `find_fundamental_mat': /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp:1103: error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function findFundamentalMat (RuntimeError)
I am using ropencv (Ruby + FFI), but I tried with Python cv2 and got exactly the same error. I cannot find any documentation on this and I am lost. checkVector(2) returns -1 on both grayscale and color images and I don't know how to convert them to make them work with findFundamentalMat. Help please.
You are passing the images directly to the function that computes the fundamental matrix, and this is not correct. The documentation says:
points1 – Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2 – Array of the second image points of the same size and format as points1 .
Therefore, you cannot simply pass the images. There is a full example here using OpenCV, and a nice step-by-step explanation here using MATLAB, so you can understand which points you are going to use (such as Harris corners).
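As an illustration (not part of the original answer), here is a minimal Python sketch of the intended flow, assuming a reasonably recent OpenCV with ORB available; any detector that yields matched point pairs would do:
import cv2
import numpy as np

img1 = cv2.imread('glassL.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('glassR.jpg', cv2.IMREAD_GRAYSCALE)

# findFundamentalMat needs matched point coordinates, not whole images
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Float32 Nx2 arrays satisfy the checkVector(2) assertion from the error
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 0.1, 0.99)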
I need to apply a fast conversion to the color channels of an image.
1) I have stored the corresponding output values in a list:
ListaVred = [0]*255
for i in range(0,255):
    ListaVred[i] = i*127 / 255 + 128
2) I get the color input value, from 0 to 255, from the image.
3) I should replace the input values in the image with the output values, i.e.:
red[45] = 0
ListaVred[0] = 128
red[45]= 128
I've looked at the cv2.LUT(src, dst) function but I am not sure about its use:
http://docs.opencv.org/trunk/modules/core/doc/operations_on_arrays.html#cv2.LUT
cv2.LUT(ListaVred,red)
TypeError: src is not a numpy array, neither a scalar
ListaVred = np.array(ListaVred)
cv2.LUT(ListaVred,red)
cv2.error: /build/opencv-Ai8DTy/opencv-2.4.6.1+dfsg/modules/core/src/convert.cpp:1195: error: (-215) (lutcn == cn || lutcn == 1) && lut.total() == 256 && lut.isContinuous() && (src.depth() == CV_8U || src.depth() == CV_8S) in function LUT
You looked into the documentation of a yet-to-be-released version of OpenCV (version 3.0). I guess you are using OpenCV 2.4.8 or lower. Check this documentation instead; at the top of the page you will see the version of OpenCV that the documentation belongs to. Notice the difference in the usage of the LUT function:
LUT(srcImage, lookupTable, dstImage)
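In Python under the 2.4 API that looks roughly like the sketch below (my illustration, not the original answer's code). Note that the table also needs a full 256 entries, as the lut.total() == 256 part of the assertion hints; red is assumed to be an 8-bit channel as in the question:
import numpy as np
import cv2

# Build a full 256-entry table (the question's list had only 255 entries
# and its loop skipped index 255); dtype must be uint8 for an 8-bit src.
lut = np.array([i * 127 // 255 + 128 for i in range(256)], dtype=np.uint8)

# The source image/channel is the first argument, the table the second
red_mapped = cv2.LUT(red, lut)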