OpenCV Otsu thresholding error after using numpy? - python

In my previous post I optimized a Python iteration loop using numpy. Now I face the next problem: converting the result to a binary image, like this:
def convertRed(rawimg):
    blue = rawimg[:,:,0]
    green = rawimg[:,:,1]
    red = rawimg[:,:,2]
    exg = 1.5*red - green - blue
    processedimg = np.where(exg > 50, exg, 2)
    ret2, th2 = cv2.threshold(processedimg, 0, 255, cv2.THRESH_OTSU)  # error line
    return processedimg
The error is:
error: (-215) src.type() == CV_8UC1 in function cv::threshold
How to solve this problem?

The cv2.threshold function with cv2.THRESH_OTSU only accepts uint8 input; this means you can only apply Otsu's algorithm if the pixel values in your image are 8-bit integers between 0 and 255.
When you multiply your values by 1.5, your image becomes a floating-point array, making it unsuitable for cv2.threshold, hence the error message src.type() == CV_8UC1.
You can modify the following parts of your code:
processedimg = np.where(exg > 50, exg, 2)
processedimg = cv2.convertScaleAbs(processedimg)
ret2, th2 = cv2.threshold(processedimg, 0, 255, cv2.THRESH_OTSU)  # previously the error line
What we are doing here is using the OpenCV function cv2.convertScaleAbs, as described in the OpenCV documentation:
cv2.convertScaleAbs
Scales, calculates absolute values, and converts the result to 8-bit.
Python: cv2.convertScaleAbs(src[, dst[, alpha[, beta]]]) → dst
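Putting it together, a minimal sketch of the corrected function (assuming rawimg is a BGR image loaded with cv2.imread; note that th2, not processedimg, holds the binary result):
import cv2
import numpy as np

def convertRed(rawimg):
    blue = rawimg[:, :, 0]
    green = rawimg[:, :, 1]
    red = rawimg[:, :, 2]
    exg = 1.5 * red - green - blue            # float64 because of the 1.5 factor
    processedimg = np.where(exg > 50, exg, 2)
    # scale/abs and cast to 8-bit so Otsu's method can run
    processedimg = cv2.convertScaleAbs(processedimg)
    ret2, th2 = cv2.threshold(processedimg, 0, 255, cv2.THRESH_OTSU)
    return th2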

It is a "data type" error.
As Eliezer said, when you multiply by 1.5 the exg matrix is converted to float64, which does not work for cv2.threshold, which requires the uint8 data type.
So one of the solutions could be:
def convertRed(rawimg):
    b = rawimg[:,:,0]
    g = rawimg[:,:,1]
    r = rawimg[:,:,2]
    exg = 1.5*r - g - b
    processedimg = np.where(exg > 50, exg, 2)
    processedimg = np.uint8(np.abs(processedimg))  # abs to fix negative values
    ret2, th2 = cv2.threshold(processedimg, 0, 255, cv2.THRESH_OTSU)  # previously the error line
    return processedimg
I used np.uint8() after np.abs() to avoid a wrong result (negative values turning white) in the conversion to the uint8 data type.
Although your array processedimg is already non-negative because of the np.where statement applied before, this practice is usually safer.
Why does it convert to float64? Because in Python, multiplying any integer by a floating-point literal produces a float, like:
type(1.5*int(7))==float # true
Another point is the use of numpy functions instead of OpenCV's, which can be faster.
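The same promotion is easy to see on arrays; a minimal sketch:
import numpy as np

a = np.array([10, 200], dtype=np.uint8)
print((a - a).dtype)    # uint8: integer arithmetic keeps the integer dtype
print((1.5 * a).dtype)  # float64: the Python float promotes the whole array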

Related

I have two matrices that I have converted to grayscale images and I want to find the mean SSIM value between them (Python)

For example I have:
X = [[1,2,3],[4,5,6]]
Y = [[1,4,7],[5,5,1]]
a=np.array(X)
grayA=(a-np.amin(a))/(np.amax(a)-np.amin(a))
b=np.array(Y)
grayB=(b-np.amin(b))/(np.amax(b)-np.amin(b))
However, when I do
compare_ssim(grayA, grayB)
I get the error
ValueError: win_size exceeds image extent. If the input is a multichannel (color) image, set multichannel=True.
I tried
compare_ssim(grayA, grayB, multichannel = True)
but I am still getting the same error.
The error is produced because the default value of win_size is 7 and
np.any((np.asarray(grayA.shape) - win_size) < 0)
is True, i.e. your 2x3 images are smaller than the 7x7 window.
To solve the problem, you should set win_size to an odd value that is smaller than every image dimension. So, in your example, it should be win_size=1.
However, when win_size is equal to 1, you also need to set use_sample_covariance=False, because otherwise the code would divide by zero. Therefore, your example works using
compare_ssim(grayA, grayB, win_size=1, use_sample_covariance=False)
The problem vanishes if your images are 7x7 or larger. For instance:
X = np.random.rand(7,7)
Y = np.random.rand(7,7)
compare_ssim(X, Y)
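For reference, a minimal runnable sketch of the fix; note that in current scikit-image versions compare_ssim has been renamed to structural_similarity in skimage.metrics:
import numpy as np
from skimage.metrics import structural_similarity  # formerly compare_ssim

a = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float64)
b = np.array([[1, 4, 7], [5, 5, 1]], dtype=np.float64)
grayA = (a - a.min()) / (a.max() - a.min())
grayB = (b - b.min()) / (b.max() - b.min())

# win_size must be odd and no larger than the smallest image dimension
score = structural_similarity(grayA, grayB, win_size=1,
                              use_sample_covariance=False, data_range=1.0)
print(score)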

Why do PIL.ImageChops.difference and np.array difference have different results?

Why do PIL.ImageChops.difference and the np.array absolute difference give different results? The Pillow documentation says ImageChops.difference is exactly the absolute difference (https://pillow.readthedocs.io/en/3.1.x/reference/ImageChops.html).
tamp_image = Image.open(tamp_file_path).convert("RGB")
orig_image = Image.open(orig_file_path).convert("RGB")
diff = ImageChops.difference(orig_image, tamp_image)
diff.show() #1
Image.fromarray(abs(np.array(tamp_image)-np.array(orig_image))).show() #2
Results (top: #1, bottom: #2; screenshots omitted):
Interestingly, if I convert diff to an np.array and then back to an Image object, it displays like #1.
I had a similar problem. The solution is quite simple: the converted numpy arrays have the datatype uint8. When the subtraction underflows, the values wrap around entirely, and you get a weird-looking result.
So the solution is to convert the images to a datatype with an appropriate range, like int16 (int8 is not quite enough, since differences of uint8 values range from -255 to 255).
img1 = np.array(tamp_image, dtype='int16')
img2 = np.array(orig_image, dtype='int16')
diff = np.abs(img1 - img2)
Image.fromarray(diff.astype('uint8')).show()
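A minimal sketch of the wraparound effect on two single-pixel arrays:
import numpy as np

a = np.array([10], dtype=np.uint8)
b = np.array([250], dtype=np.uint8)
print(a - b)  # [16]: uint8 wraps around, (10 - 250) % 256
print(np.abs(a.astype(np.int16) - b.astype(np.int16)))  # [240]: the true absolute difference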

cv.imshow throws (-215:Assertion failed) src_depth != CV_16F && src_depth != CV_32S in function 'convertToShow'

In this code I am trying to compute an integral image. Every time I run it, a window flashes and disappears, and then I get this error in the terminal:
import cv2
import numpy as np
image = cv2.imread("nancy.jpg")
(rows,cols,dims) = image.shape
sum = np.zeros((rows,cols), np.uint8)
imageIntegral = cv2.integral(image, sum, -1)
cv2.imshow("imageIntegral", imageIntegral)
cv2.waitKey()
Error:
cv2.error: OpenCV(4.1.0) C:/projects/opencv-python/opencv/modules/highgui/src/precomp.hpp:131:
error: (-215:Assertion failed) src_depth != CV_16F && src_depth != CV_32S in function 'convertToShow'
Check whether your image is uint8 or not:
image = image.astype(np.uint8)
Help on cv2.integral:
>>> import cv2
>>> print(cv2.__version__)
4.0.1-dev
>>> help(cv2.integral)
Help on built-in function integral:
integral(...)
integral(src[, sum[, sdepth]]) -> sum
. #overload
A simple demo:
import numpy as np
import cv2
img = np.uint8(np.random.random((2,2,3))*255)
dst = cv2.integral(img)
>>> print(img.shape, img.dtype)
(2, 2, 3) uint8
>>> print(dst.shape, dst.dtype)
(3, 3, 3) int32
And you shouldn't use imshow directly on the dst image because it's not np.uint8. Normalize it to np.uint8 (range 0 to 255) or np.float32 (range 0.0 to 1.0). You can find the reason at this link: How to use `cv2.imshow` correctly for the float image returned by `cv2.distanceTransform`?
cv.imshow requires the given image to have a dtype from the set of np.uint8, np.uint16, np.float32, np.float64; it does not accept other types.
Your integral image has type np.int32. The error message explicitly rejected your data because it was np.int32 (CV_32S); the other rejected type, half-float (CV_16F), doesn't apply here.
So you need to convert it to a dtype that imshow accepts. Before converting with .astype(...), make sure your value range is mapped into the range of 8-bit integers (0 to 255) or of floats (imshow expects 0.0 to 1.0). Normalize your data using the maximum, like this:
imageIntegral = cv.integral(src=image)
# division results in values ranging from 0.0 to 1.0
# type is floating point array (float64)
presentable = imageIntegral / imageIntegral.max()
cv.imshow("imageIntegral", presentable)
cv.waitKey()
cv.destroyWindow("imageIntegral")
(The input image and the presentable result shown in the original answer are omitted here.)
You obviously don't see much in an integral image because, by its nature, it's mostly a gradient; that is what happens when one integrates a series of values chosen from a range. That is also why imageIntegral.astype(np.uint8) results in noise.
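To see why the plain cast looks like noise, a small sketch: integral values grow far beyond 255, so astype(np.uint8) keeps only value % 256.
import numpy as np

vals = np.array([0, 255, 256, 1000, 123456], dtype=np.int32)
print(vals.astype(np.uint8))  # [  0 255   0 232  64]: only the low byte survives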

How should I convert a float32 image to a uint8 image?

I want to convert a float32 image into a uint8 image in Python using the OpenCV library. I used the following code, but I do not know whether it is correct or not.
Here I is the float32 image.
J = I*255
J = J.astype(np.uint8)
I would really appreciate it if you could help me.
If you want to convert an image from single-precision floating point (i.e. float32) to uint8, numpy and OpenCV in Python offer two convenient approaches.
If you know that your image has a range between 0 and 255, or between 0 and 1, then you can simply make the conversion the way you already do:
I *= 255 # or any coefficient
I = I.astype(np.uint8)
If you don't know the range, I suggest you apply a min-max normalization, i.e. (value - min) / (max - min).
With OpenCV you simply call the following instruction:
I = cv2.normalize(I, None, 255, 0, cv2.NORM_MINMAX, cv2.CV_8U)
The returned variable I will have the type np.uint8 (as specified by the last argument) and a range between 0 and 255.
Using numpy you can also write something similar:
def normalize8(I):
    mn = I.min()
    mx = I.max()
    mx -= mn
    I = ((I - mn)/mx) * 255
    return I.astype(np.uint8)
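A quick usage check for this helper on random data (a sketch; any float array with more than one distinct value works, otherwise mx is zero):
import numpy as np

img = np.random.rand(4, 4).astype(np.float32) * 7.3  # arbitrary range
out = normalize8(img)
print(out.dtype, out.min(), out.max())  # uint8 0 255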
It is actually very simple:
img_uint8 = img_float32.astype(np.uint8)
Note, though, that this truncates rather than rescales: values are cut to their integer part, so it only makes sense if the float image already uses the 0 to 255 range.
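A minimal sketch contrasting the plain cast with scaling first (assuming the float image is in the 0..1 range):
import numpy as np

img_float32 = np.array([[0.2, 0.9]], dtype=np.float32)
print(img_float32.astype(np.uint8))          # [[0 0]]: fractional parts are truncated
print((img_float32 * 255).astype(np.uint8))  # [[ 51 229]]: scale first if range is 0..1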

Does cv2 accept binary images?

I am facing a problem: cv2 methods like detectAndCompute and cv2.HoughLinesP fail, either with a depth error or with a NoneType result, when fed a binary image.
For example, in the following:
def high_blue(B, G):
    if B > 70 and abs(B-G) <= 15:
        return 255
    else:
        return 0
img = cv2.imread(os.path.join(dirPath,filename))
b1 = img[:,:,0]  # Blue channel
b2 = img[:,:,1]  # Green channel
b3 = img[:,:,2]  # Red channel
zlRhmn_query = np.zeros((2400, 2400), dtype=np.uint8)
zlRhmn_query[300:2100, 300:2100] = 255
zlRhmn_query[325:2075, 325:2075] = 0
cv2.imwrite('zlRhmn_query.jpg',zlRhmn_query)
zl_Rahmen_bin = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8)
zl_Rahmen_bin = np.vectorize(high_blue)
zl_Rahmen_bin = zl_Rahmen_bin(b1, b2)
cv2.imwrite('zl_Rahmen_bin.jpg',zl_Rahmen_bin)
sift = cv2.xfeatures2d.SIFT_create()
kp_query, des_query = sift.detectAndCompute(zlRhmn_query,None)
kp_train, des_train = sift.detectAndCompute(zl_Rahmen_bin,None)
Only the last line, the one using zl_Rahmen_bin, fails with a depth mismatch error. Strangely, zlRhmn_query does not throw any error.
Next, when I use the skeletonization snippet given here (http://opencvpython.blogspot.de/2012/05/skeletonization-using-opencv-python.html) and pass the skeleton to HoughLinesP, I get a lines object of type NoneType. On inspection I noticed that the skeleton array is also binary, i.e. 0 or 255.
Please advise.
I have programmed like 50 lines of Python in my life so excuse me if I am mistaken here.
zl_Rahmen_bin = np.zeros((img.shape[0], img.shape[1]), dtype=np.uint8)
you have just created a 2D array filled with zeros.
zl_Rahmen_bin = np.vectorize(high_blue)
now you immediately assign another value (the vectorized function) to the same variable, which makes the first line pointless.
zl_Rahmen_bin = zl_Rahmen_bin(b1, b2)
So as far as I understand it, you just called the function zl_Rahmen_bin (the vectorized high_blue) with the blue and green channels as input. The output is another 2D array with values of either 0 or 255.
I was wondering how np.vectorize knows which datatype the output should be, as you obviously need uint8. The documentation says that the output type, if not given, is determined by calling the function with the first element of the input. So I guess it is the default integer type for 255 or 0 in this example,
and indeed zl_Rahmen_bin.dtype actually is np.uint32.
So I modified this:
zl_Rahmen_bin = np.vectorize(high_blue)
To
zl_Rahmen_bin = np.vectorize(high_blue, otypes=[np.uint8])
which seems to do the job.
Maybe it is sufficient to just convert the result afterwards:
zl_Rahmen_bin = zl_Rahmen_bin.astype(np.uint8)
(Assigning to zl_Rahmen_bin.dtype directly would only reinterpret the raw bytes, not convert the values.) But as I said, I have no idea about Python...
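To make the otypes fix concrete, a minimal self-contained sketch (the channels here are random stand-ins, same per-pixel rule as the question; casting to int inside high_blue avoids uint8 wraparound in abs(B-G)):
import numpy as np

def high_blue(B, G):
    return 255 if (B > 70 and abs(int(B) - int(G)) <= 15) else 0

b1 = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in blue channel
b2 = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in green channel

default = np.vectorize(high_blue)(b1, b2)
typed = np.vectorize(high_blue, otypes=[np.uint8])(b1, b2)
print(default.dtype)  # platform default integer, not uint8
print(typed.dtype)    # uint8, which OpenCV functions like SIFT expect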
