Restore grayscale image from jpg file - python

I have a 2-D numpy array that I save as a .jpg image.
For simplicity, let's assume my numpy array contains the numbers 0...255.
My problem is that once I save this array as .jpg image, I can't restore its values.
So my code is:
import cv2
import numpy as np
from scipy.ndimage import imread  # note: removed in newer SciPy releases; imageio.imread can be used instead

# 16x16 array holding the values 0..255 (uint8 is the depth JPEG writing expects)
arr = np.arange(256, dtype=np.uint8).reshape(16, 16)
cv2.imwrite('arr.jpg', arr)

restored = imread('arr.jpg')
print((arr == restored).sum())  # output is 224 rather than 256, i.e. 32 pixels are different!
So, how can I save the image so that I can see it and restore the values afterwards?
Any help will be appreciated!
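For context, the mismatch comes from JPEG compression being lossy, so some pixel values change when the file is written. A minimal sketch, assuming a lossless format such as PNG is acceptable for viewing, shows the values surviving the round trip exactly:

import cv2
import numpy as np

arr = np.arange(256, dtype=np.uint8).reshape(16, 16)

# PNG is lossless, so every pixel value survives the save/load cycle
cv2.imwrite('arr.png', arr)
restored = cv2.imread('arr.png', cv2.IMREAD_GRAYSCALE)
print((arr == restored).sum())  # 256 - all pixels match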

Related

Read Grayscale raster in opencv

I have a raster image which contains only one band. I am trying to apply histogram equalization to this raster. The problem is that I cannot read it as grayscale. If I read it as unchanged, it shows the shape and the values.
import cv2
import numpy as np
img = cv2.imread('blur3.tif',cv2.IMREAD_UNCHANGED)
print(img.shape)
which prints the shape:
(1678, 1064)
But when I change the code to read it as grayscale, the result is empty.
import cv2
import numpy as np
img = cv2.imread('blur3.tif',cv2.IMREAD_GRAYSCALE)
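One possible workaround, assuming the unchanged read gives a usable single-band array (some TIFF variants do not convert cleanly with IMREAD_GRAYSCALE): scale the band to 8-bit yourself and equalize that, since cv2.equalizeHist only accepts 8-bit single-channel images.

import cv2
import numpy as np

img = cv2.imread('blur3.tif', cv2.IMREAD_UNCHANGED)

# Stretch the single band to 0..255 and convert to uint8 for equalizeHist
img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

equalized = cv2.equalizeHist(img8)
cv2.imwrite('blur3_equalized.png', equalized)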

Saving grayscale image to a directory in python

I have a piece of code that takes in image data as grayscale values and then converts it into an image using matplotlib, as shown below:
import matplotlib.pyplot as plt
import numpy
image_data = image_result.GetNDArray()
numpy.savetxt('data.csv', image_data)  # dump the raw grayscale values as text
# Draws an image on the current figure
image = plt.imshow(image_data, cmap='gray')
I want to be able to export this data to LabView as a .png file, so I need to save these images to a folder where LabView can display them. Is there a function in Pillow or os that can do this?
plt.imsave('output.png', image)
Does this work?
If image_data is a Numpy array of shape height x width with dtype=np.uint8 or dtype=np.uint16, you can make a PIL Image and save it as a PNG like this:
from PIL import Image
# Make PIL Image from Numpy array
pImage = Image.fromarray(image_data)
pImage.save('forLabView.png')
You can equally use OpenCV to save a Numpy array as a PNG for LabView like this:
import cv2
# Save Numpy array as PNG
cv2.imwrite('forLabView.png', image_data)
Check what your array is with:
print(image_data.shape, image_data.dtype)
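As for the plt.imsave suggestion above: a small sketch, assuming you pass the pixel array itself rather than the AxesImage object that plt.imshow returns (imsave expects an array):

import numpy as np
import matplotlib.pyplot as plt

# Stand-in array so the snippet runs on its own; in the real code this is image_result.GetNDArray()
image_data = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Pass the array itself and force a grey colormap
plt.imsave('forLabView.png', image_data, cmap='gray')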

Writing an int16 numpy array as a proper image

I am trying to write (save) an int16 numpy array as an image using OpenCV. You can find the numpy file of the image at the link below: https://drive.google.com/file/d/1nEq_CeNmSgacARa2ADr_f_qVaSfJSZZX/view?usp=sharing
The image I saved in bmp, png, or tiff format looks like this:
(screenshot: the image saved as uint16)
I converted the numpy array to uint8, and the image became very dark; the maximum value of the image is just 34, as shown below:
(screenshot: the image saved as uint8)
Please let me know how to properly save and visualize this int16 image.
Note: plt.imshow of the int16 numpy array shows the correct visual. (screenshot: matplotlib imshow result)
I saved the image properly using the following code:
import cv2
import numpy as np
from skimage.util import img_as_ubyte

img = np.load('brain.npy')     # the int16 array
img1 = img / img.max()         # scale values into the 0..1 range
img2 = img_as_ubyte(img1)      # convert to uint8 (0..255)
cv2.imwrite('brain_u8.png', img2)
(screenshot: correct output)
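An alternative sketch, assuming you want to preserve more of the dynamic range than uint8 allows: PNG can also store 16-bit data, so the int16 values can be rescaled into uint16 instead (the rescaling shown here is an illustration, not part of the original answer).

import cv2
import numpy as np

img = np.load('brain.npy').astype(np.float64)

# Stretch to the full uint16 range so less precision is thrown away
img_u16 = ((img - img.min()) / (img.max() - img.min()) * 65535).astype(np.uint16)
cv2.imwrite('brain_u16.png', img_u16)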

The dimension of the array of an image is 3D not 2D as it is in the Python course

In my Python course, the instructor uploads a greyscale picture of himself and reads it in Python with the following code:
import numpy as np
import math
from PIL import Image
from IPython.display import display
im = Image.open("chris.tiff")
array = np.array(im)
print(array.shape)
and he gets
(200,200)
When I write the code and run my own image, with the exact same extension "tiff", I get a 3-dimensional array. I was told it's because my image was colored and so the third entry is for RBG. So I used a greyscale photo just like he did but I still obtain a 3D array, why?
Any help is greatly appreciated, thank you
EDIT
For extra clarity, the array I get for my greyscale image with tiff extension is
(3088, 2316, 4)
Your photo appears to be grey, but based on the posted shape it actually has four channels (RGBA) rather than a single one.
So, you need to convert it to greyscale using the following line:
im = Image.open("chris.tiff").convert('L')
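A short usage sketch, assuming the same file as in the question: after the conversion, the channel axis disappears.

import numpy as np
from PIL import Image

im = Image.open("chris.tiff").convert('L')  # 'L' = single-channel 8-bit greyscale
array = np.array(im)
print(array.shape)  # now (3088, 2316) - 2-D, matching the course example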

Converting PIL.Image to skimage

I have two modules in my project: the first works with the image in bytes format, the second requires a skimage object. I need to combine them.
I have this code:
import io
from PIL import Image
import skimage.io
area = (...)
image = Image.open(io.BytesIO(image_bytes))
image = image.crop(area)  # crop is a method of the Image instance
image = skimage.io.imread(image)
But I get this error:
How can I convert an image (object/variable) to skimage? I don't necessarily need a PIL Image; this is just one way to work with a bytes image, because I need to crop my image.
Thanks!
Scikit-image works with images stored as Numpy arrays - same as OpenCV and wand. So, if you have a PIL Image, you can make a Numpy array for scikit-image like this:
import numpy as np

# Make Numpy array for scikit-image from "PIL Image"
na = np.array(YourPILImage)
Just in case you want to go the other way, and make a PIL Image from a Numpy array, you can do:
# Make "PIL Image" from Numpy array
pi = Image.fromarray(na)
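Putting the pieces together for the original bytes-to-skimage flow, a sketch assuming image_bytes already holds the encoded image and the crop box is known:

import io
import numpy as np
from PIL import Image
import skimage.io

area = (0, 0, 100, 100)                 # placeholder crop box (left, upper, right, lower)

image = Image.open(io.BytesIO(image_bytes))
image = image.crop(area)                # crop via the instance, not the Image module

na = np.array(image)                    # this Numpy array is what scikit-image functions accept
skimage.io.imsave('cropped.png', na)    # e.g. pass it straight to a scikit-image call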
