I need to apply a fast conversion to the color channels of an image.
1) I have stored the corresponding output values in a list:
ListaVred = [0]*255
for i in range(0,255):
    ListaVred[i] = i*127 / 255 + 128
2) I get the color input value, from 0 to 255, from the image.
3) I should replace the input values in the image with the output values,
i.e. if red[45] = 0 and ListaVred[0] = 128, then red[45] becomes 128.
I've looked at the cv2.LUT(src,dst) function but I'm not sure about its use:
http://docs.opencv.org/trunk/modules/core/doc/operations_on_arrays.html#cv2.LUT
cv2.LUT(ListaVred,red)
TypeError: src is not a numpy array, neither a scalar
ListaVred = np.array(ListaVred)
cv2.LUT(ListaVred,red)
cv2.error: /build/opencv-Ai8DTy/opencv-2.4.6.1+dfsg/modules/core/src/convert.cpp:1195: error: (-215) (lutcn == cn || lutcn == 1) && lut.total() == 256 && lut.isContinuous() && (src.depth() == CV_8U || src.depth() == CV_8S) in function LUT
You looked at the documentation of a yet-to-be-released version of OpenCV (version 3.0). I guess you are using OpenCV 2.4.8 or lower; check the documentation for that version instead, and notice the difference in the usage of the LUT function. At the top of the page you will see the version of OpenCV to which the documentation belongs.
LUT(srcImage, lookupTable, dstImage)
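For what it's worth, a minimal sketch of the call in Python (the input file name is hypothetical): the source image comes first, and the lookup table must be a 256-entry array applied to an 8-bit image, exactly as the assertion in your error demands (lut.total() == 256, src.depth() == CV_8U).
import cv2
import numpy as np

# build a 256-entry uint8 lookup table
lut = np.array([i * 127 // 255 + 128 for i in range(256)], dtype=np.uint8)

red = cv2.imread("input.jpg")  # hypothetical 8-bit input image
mapped = cv2.LUT(red, lut)     # src first, then the lookup table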
I am trying to translate Python code to C++.
Here is my first variable defined like this:
x = np.ones((h,w,4),dtype=np.uint8)*255
I want to set the third channel equal to another variable, like this:
x[:,:,3] = y
I am using OpenCV. I wrote a bit of C++ like this:
cv::Mat x;
x = cv::Mat::ones(cv::Size(h, w), CV_32FC1)*255;
cv::Mat channel[3];
cv::split(x, channel);
channel[0] = cv::Mat::zeros(x.rows, x.cols, CV_32FC1);
cv::merge(channel,3,x);
I get this error when I run the code:
merge.dispatch.cpp:129: error: (-215:Assertion failed) mv[i].size == mv[0].size && mv[i].depth() == depth in function 'merge'
How can I fix this?
I am trying to reverse-engineer an ArUco board by detecting it in an image.
I made a snippet that reproduces the problem: it creates a GridBoard, then tries to use Board_create with the corners and ids detected in the drawn image.
# Settings for the marker
max_amount_of_markers_w = 10
max_amount_of_markers_h = 6
ar = aruco.DICT_6X6_1000
aruco_dict = aruco.Dictionary_get(ar)
# create an aruco Board
grid_board = cv2.aruco.GridBoard_create(max_amount_of_markers_w,
                                        max_amount_of_markers_h,
                                        0.05,
                                        0.01,
                                        aruco_dict)
# convert to image
img = grid_board.draw((1920,180))
# detected corners and ids
corners, ids, rejected = aruco.detectMarkers(img, aruco_dict)
# convert to X,Y,Z
new_corners = np.zeros(shape=(len(corners),4,3))
for cnt, corner in enumerate(corners):
    new_corners[cnt,:,:-1] = corner
# try to create a board via Board_create
aruco.Board_create(new_corners, aruco_dict, ids)
The error comes from the last line; it is the following:
error: OpenCV(4.1.1)
C:\projects\opencv-python\opencv_contrib\modules\aruco\src\aruco.cpp:1458:
error: (-215:Assertion failed) objPoints.type() == CV_32FC3 ||
objPoints.type() == CV_32FC1 in function 'cv::aruco::Board::create'
This means that it needs something with 3 channels (for X, Y, and Z), which is exactly what the numpy array provides.
A bit late but I just came across the same problem, so I'll answer for posterity.
The error is not about the number of channels (that part is correct), but about the datatype of new_corners: here new_corners.dtype == np.float64, while OpenCV asks for a 32-bit float, as your error shows with CV_32F.
A simple cast of new_corners fixes the issue. Your last line becomes:
aruco.Board_create(new_corners.astype(np.float32),aruco_dict,ids)
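If it helps, a quick way to confirm the mismatch with the same variables as above (np.zeros defaults to float64):
print(new_corners.dtype)                      # float64
new_corners = new_corners.astype(np.float32)  # the 32-bit float Board_create expects
print(new_corners.dtype)                      # float32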
In this code I am trying to compute an integral image. Every time I run it, a window flashes and disappears, and then I get this error in the terminal:
import cv2
import numpy as np
image = cv2.imread("nancy.jpg")
(rows,cols,dims) = image.shape
sum = np.zeros((rows,cols), np.uint8)
imageIntegral = cv2.integral(image, sum, -1)
cv2.imshow("imageIntegral", imageIntegral)
cv2.waitKey()
Error:
cv2.imshow("imageIntegral",imageIntegral)cv2.error: OpenCV(4.1.0) C:/projects/opencv-python/opencv/modules/highgui/src/precomp.hpp:131:
error: (-215:Assertion failed) src_depth != CV_16F && src_depth !=
CV_32S in function 'convertToShow'
Check whether your image is uint8 or not:
image = image.astype(np.uint8)
Help on cv2.integral:
>>> import cv2
>>> print(cv2.__version__)
4.0.1-dev
>>> help(cv2.integral)
Help on built-in function integral:
integral(...)
integral(src[, sum[, sdepth]]) -> sum
. #overload
A simple demo:
import numpy as np
import cv2
img = np.uint8(np.random.random((2,2,3))*255)
dst = cv2.integral(img)
>>> print(img.shape, img.dtype)
(2, 2, 3) uint8
>>> print(dst.shape, dst.dtype)
(3, 3, 3) int32
Note that the integral image gains one extra row and column (of zeros) and is promoted to int32, since sums of uint8 values would overflow. You shouldn't use imshow directly on the dst image because it's not np.uint8; normalize it to np.uint8 (range 0 to 255) or np.float32 (range 0.0 to 1.0) first. You can find the reason at this link: How to use `cv2.imshow` correctly for the float image returned by `cv2.distanceTransform`?
cv.imshow requires the given image to have a dtype from the set of np.uint8, np.uint16, np.float32, np.float64; it does not accept other types.
Your integral image has type np.int32. The error message explicitly rejected your data because it was np.int32 (CV_32S); it would also reject half-float (CV_16F), but that's not the case here.
So, you need to convert it to a dtype that imshow accepts. Before converting with .astype(...), you must make sure that your value range is mapped into the range of 8-bit integers (0 to 255) or of floats (imshow expects 0.0 to 1.0). Normalize your data using its maximum, like this:
import cv2 as cv  # this answer uses the cv alias

imageIntegral = cv.integral(src=image)  # image as loaded in the question
# division results in values ranging from 0.0 to 1.0
# the result is a floating-point array (float64), which imshow accepts
presentable = imageIntegral / imageIntegral.max()
cv.imshow("imageIntegral", presentable)
cv.waitKey()
cv.destroyWindow("imageIntegral")
[Screenshots: the input image from the OpenCV documentation, and the presentable result.]
You obviously don't see much in an integral image because, by its nature, it's mostly a gradient; that is what happens when one integrates a series of values chosen from a range. That is also why imageIntegral.astype(np.uint8) results in noise, as the sketch below shows.
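A minimal illustration of that noise (the values here are made up): casting int32 to uint8 keeps only the low 8 bits, so large sums wrap around modulo 256.
import numpy as np

vals = np.array([300, 512, 100000], dtype=np.int32)
print(vals.astype(np.uint8))  # [ 44   0 160] -- only the low 8 bits survive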
I want to convert a float32 image into a uint8 image in Python using the OpenCV library. I used the following code, but I do not know whether it is correct or not. Here, I is the float32 image.
J = I*255
J = J.astype(np.uint8)
I would really appreciate it if you could help me.
If you want to convert an image from single-precision floating point (i.e. float32) to uint8, numpy and OpenCV in Python offer two convenient approaches.
If you know that your image has a range between 0 and 255, or between 0 and 1, then you can simply make the conversion the way you already do:
I *= 255 # or any coefficient
I = I.astype(np.uint8)
If you don't know the range, I suggest you apply a min-max normalization, i.e. (value - min) / (max - min).
With OpenCV you simply call the following:
I = cv2.normalize(I, None, 255, 0, cv2.NORM_MINMAX, cv2.CV_8U)
The returned variable I will have the dtype np.uint8 (as specified by the last argument) and a range between 0 and 255.
Using numpy you can also write something similar:
def normalize8(I):
    mn = I.min()
    mx = I.max()
    mx -= mn
    I = ((I - mn)/mx) * 255
    return I.astype(np.uint8)
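A quick sanity check of normalize8 on made-up data (the values are arbitrary):
import numpy as np

I = np.random.random((4, 4)).astype(np.float32) * 1000.0
J = normalize8(I)
print(J.dtype, J.min(), J.max())  # uint8 0 255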
It is actually very simple:
img_uint8 = img_float32.astype(np.uint8)
Cannot make this work:
img1 = cv::imread('glassL.jpg')
img2 = cv::imread('glassR.jpg')
img1g = cv::Mat.new
cv::cvtColor(img1, img1g, CV_BGR2GRAY);
img2g = cv::Mat.new
cv::cvtColor(img2, img2g, CV_BGR2GRAY);
F = cv::findFundamentalMat(img1g, img2g, cv::FM_RANSAC, 0.1, 0.99)
It throws this error:
OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type()) in findFundamentalMat, file /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp, line 1103
/usr/local/var/rbenv/versions/2.1.0/lib/ruby/gems/2.1.0/gems/ropencv-0.0.15/lib/ropencv/ropencv_types.rb:10509:in `find_fundamental_mat': /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp:1103: error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function findFundamentalMat (RuntimeError)
I am using ropencv (Ruby + FFI), but I tried with Python cv2 and got exactly the same error. I cannot find any documentation on this and I am lost. checkVector(2) returns -1 on both grayscale and color images, and I don't know how to convert them to make them work with findFundamentalMat. Please help.
You are passing the images directly to the function that computes the fundamental matrix, and this is not correct. The documentation says:
points1 – Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2 – Array of the second image points of the same size and format as points1 .
Therefore, you cannot simply pass in the images. There is a full example here using OpenCV, and there is a nice step-by-step explanation here using MATLAB, so you can understand which points you are going to use (such as Harris corners).
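For illustration, a minimal Python sketch of the expected call, assuming a newer OpenCV (3.x/4.x) Python API; the feature detector and matcher (ORB plus brute-force matching) are arbitrary choices here, and the point is that findFundamentalMat receives Nx2 floating-point arrays of matched points, not images:
import cv2
import numpy as np

img1 = cv2.imread('glassL.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('glassR.jpg', cv2.IMREAD_GRAYSCALE)

# detect and match features (ORB chosen only for illustration)
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Nx2 float arrays of matched point coordinates -- what the function expects
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 0.1, 0.99)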