CV2 imwrite not saving as 16-bit - Python

Hi all, I am working on code that saves a numpy array of dtype uint16. However, when I save it with cv2.imwrite, it ends up saved as uint8.
print(pixel_array.dtype)
im = cv2.imwrite('result.TIFF', pixel_array)
im = cv2.imread('result.TIFF')
print(im.dtype)
The output for this code is
uint16
uint8
I also tried explicitly casting the ndarray's dtype:
pixel_array = ds.pixel_array
print(pixel_array.dtype)
im = cv2.imwrite('result.TIFF', pixel_array.astype(np.uint16))
im = cv2.imread('result.TIFF')
print(im.dtype)
Unfortunately the second version also saves the image as 8-bit. I don't know what I am missing here.
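(It may be worth checking the read side as well: by default cv2.imread converts everything to 8-bit BGR, so even a correctly written 16-bit TIFF will print as uint8 when read back without flags. A minimal sketch, assuming the file was in fact written as 16-bit:)
import cv2

im = cv2.imread('result.TIFF', cv2.IMREAD_UNCHANGED)  # keep the file's original bit depth
print(im.dtype)  # expect uint16 if the file really is 16-bit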

Related

Writing an int16 numpy array as a proper image

I am trying to write (save) an int16 numpy array as an image using OpenCV. The numpy file of the image can be found at the link below: https://drive.google.com/file/d/1nEq_CeNmSgacARa2ADr_f_qVaSfJSZZX/view?usp=sharing
The image I saved in BMP, PNG, or TIFF format looks like this:
(screenshot: saved uint16 image)
I converted the numpy array to uint8 and the image became very dark, with a maximum value of just 34, as shown below:
(screenshot: uint8 conversion, very dark)
Please let me know how to properly save and visualize this int16 format image.
Note: plt.imshow of the int16 numpy array shows a proper visual. (screenshot: matplotlib imshow)
I have saved the image properly using the following syntax
import numpy as np
import cv2
from skimage.util import img_as_ubyte

img = np.load('brain.npy')        # int16 array
img1 = img / img.max()            # scale values into [0, 1]
img2 = img_as_ubyte(img1)         # convert to uint8
cv2.imwrite('brain_u8.png', img2)
(screenshot: correct output)
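If the goal is to keep the full dynamic range rather than compress to 8-bit, a possible alternative (a sketch, assuming the int16 values are all non-negative so the cast to uint16 is lossless; the output filename is arbitrary) is to write a 16-bit PNG directly, since cv2.imwrite keeps uint16 data at 16-bit for PNG and TIFF:
import numpy as np
import cv2

img = np.load('brain.npy')             # int16 array
img_u16 = img.astype(np.uint16)        # lossless only if all values are >= 0
cv2.imwrite('brain_u16.png', img_u16)  # uint16 input is written as a 16-bit PNG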

The dimension of the image array is 3D, not 2D as in the Python course

In my Python course, the instructor uploads a greyscale picture of himself and reads it in Python with the following code:
import numpy as np
import math
from PIL import Image
from IPython.display import display
im = Image.open("chris.tiff")
array = np.array(im)
print(array.shape)
and he gets
(200,200)
When I write the code and run my own image, with the exact same "tiff" extension, I get a 3-dimensional array. I was told it's because my image was coloured and so the third entry is for RGB. So I used a greyscale photo just like he did, but I still obtain a 3D array. Why?
Any help is greatly appreciated, thank you
EDIT
For extra clarity, the shape I get for my greyscale image with the tiff extension is
(3088, 2316, 4)
Your photo appears to be grey, but based on the posted shape it actually has four channels (likely RGBA), not a single greyscale channel.
So, you need to convert it to greyscale using the following line:
im = Image.open("chris.tiff").convert('L')

Reading and saving TIFF images with Python

I am trying to read this TIFF image with Python. I have tried PIL to read and save this image. The process goes smoothly, but the output image comes out completely dark. Here is the code I used.
import numpy as np
from PIL import Image

im = Image.open('file.tif')
imarray = np.array(im)
data = Image.fromarray(imarray)
data.save('x.tif')
Please let me know if I have done anything wrong, or if there is any other working way to read and save TIFF images. I mainly need the image as a NumPy array for processing purposes.
The problem is simply that the image is dark. If you open it with PIL, and convert to a Numpy array, you can see the maximum brightness is 2455, which on a 16-bit image with possible range 0..65535, means it is only 2455/65535, or 3.7% bright.
import numpy as np
from PIL import Image

# Open image
im = Image.open('5 atm_gain 80_C001H001S0001000025.tif')

# Make into Numpy array
na = np.array(im)
print(na.max())  # prints 2455
So, you need to normalise your image or scale up the brightnesses. A VERY CRUDE method is to multiply by 50, for example:
Image.fromarray(na*50).show()
But really, you should use a proper normalisation, like PIL.ImageOps.autocontrast() or OpenCV normalize().
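For instance, a rough min-max stretch with NumPy (a sketch; the output filename is arbitrary, and cv2.normalize with NORM_MINMAX would achieve much the same thing):
import numpy as np
from PIL import Image

im = Image.open('file.tif')
na = np.array(im).astype(np.float64)

# Stretch the actual value range onto the full 16-bit range
stretched = (65535 * (na - na.min()) / (na.max() - na.min())).astype(np.uint16)
Image.fromarray(stretched).save('x_stretched.tif')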

Save 16-bit numpy arrays as 16-bit PNG image

I'm trying to save a 16-bit numpy array as a 16-bit PNG, but what I obtain is only a black picture. Here is a minimal example of what I'm talking about.
import numpy as np
import matplotlib.pyplot as plt
im = np.random.randint(low=1, high=6536, size=65536).reshape(256, 256)  # sample numpy array to save as image
plt.imshow(im, cmap=plt.cm.gray)
Given the above numpy array, this is the image I see with matplotlib, but when I then save the image as a 16-bit PNG I obtain the picture below:
import imageio
imageio.imwrite('result.png', im)
Image saved:
where some light grey spots are visible but the image is essentially black. However, when I read the image back and visualise it again with matplotlib, I see the same starting image. I also tried other libraries instead of imageio (like PIL or PyPNG), but with the same result.
I know that 16-bit image values range from 0 to 65535, and the numpy array here only contains values from 1 to 6536, but I need to save numpy arrays as images similar to this, i.e. where the maximum value represented in the image isn't the maximum representable value. I think some sort of normalization is involved in the saving process. I need to save the array exactly as I see it in matplotlib, at its maximum resolution and without compression or shrinkage of its values (so division by 255 or conversion to an 8-bit array is not suitable).
It looks like imageio.imwrite will do the right thing if you convert the data type of the array to numpy.uint16 before writing the PNG file:
imageio.imwrite('result.png', im.astype(np.uint16))
When I do that, result.png is a 16-bit grayscale PNG file.
If you want the image to have the full grayscale range from black to white, you'll have to scale the values to the range [0, 65535]. E.g. something like:
im2 = (65535*(im - im.min())/im.ptp()).astype(np.uint16)
Then you can save that array with
imageio.imwrite('result2.png', im2)
For writing a NumPy array to a PNG file, an alternative is numpngw (a package that I created). For example,
from numpngw import write_png
im2 = (65535*(im - im.min())/im.ptp()).astype(np.uint16)
write_png('result2.png', im2)
If you are already using imageio, there is probably no significant advantage to using numpngw. It is, however, a much lighter dependency than imageio: it depends only on NumPy (no dependence on PIL/Pillow and no dependence on libpng).
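A quick way to confirm the file really came out at 16-bit is to read it back and inspect the dtype and value range (a small sketch using imageio, which is already in use above):
import imageio

check = imageio.imread('result2.png')
print(check.dtype, check.min(), check.max())  # expect uint16, spanning roughly 0..65535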

Best dtype for creating large arrays with numpy

I am looking to store pixel values from satellite imagery into an array. I've been using
np.empty((image_width, image_length))
and it worked for smaller subsets of an image, but when using it on the entire image (3858 x 3743) the code terminates very quickly and all I get is an array of zeros.
I load the image values into the array using a loop, opening the image with GDAL:
img = gdal.Open(os.path.join(fn + "\{0}".format(fname))).ReadAsArray()
but when I print img_array, I end up with just zeros.
I have tried almost every single dtype that I could find in the numpy documentation but keep getting the same result.
Is numpy unable to load this many values or is there a way to optimize the array?
I am working with 8-bit tiff images that contain NDVI (decimal) values.
Thanks
I'm not certain what type of images you are trying to read, but in the case of RADARSAT-2 images you can do the following:
dataset = gdal.Open("RADARSAT_2_CALIB:SIGMA0:" + inpath + "product.xml")
S_HH = dataset.GetRasterBand(1).ReadAsArray()
S_VV = dataset.GetRasterBand(2).ReadAsArray()
# gets the intensity (Intensity = re**2+imag**2), and amplitude = sqrt(Intensity)
self.image_HH_I = numpy.real(S_HH)**2+numpy.imag(S_HH)**2
self.image_VV_I = numpy.real(S_VV)**2+numpy.imag(S_VV)**2
But that is specifically for that type of image (in this case each image contains several bands, so I need to read in each band separately with GetRasterBand(i) and then do ReadAsArray()). If there is a specific GDAL driver for the type of images you want to read in, life gets very easy.
If you give some more info on the type of images you want to read in, I can maybe help more specifically.
Edit: did you try something like this? (Not sure if that will work on TIFF, or how long the header is, hence the something:)
A=open(filename,"r")
B=numpy.fromfile(A,dtype='uint8')[something:].reshape(3858,3743)
C=B*1.0
A.close()
Edit: the problem was solved by using 64-bit Python instead of 32-bit, due to memory errors at 2 GB when using the 32-bit Python version.
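A quick way to confirm which interpreter is actually running (a small sketch, unrelated to GDAL itself):
import struct

print(struct.calcsize("P") * 8)  # prints 64 for a 64-bit Python, 32 for a 32-bit build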
