Saving grayscale image to a directory in Python

I have a piece of code that takes in image data as grayscale values and then converts it into an image using matplotlib, shown below:
import matplotlib.pyplot as plt
import numpy
image_data = image_result.GetNDArray()
numpy.savetxt('data.csv', image_data)
# Draws an image on the current figure
image = plt.imshow(image_data, cmap='gray')
I want to be able to export this data to LabView as a .png file, so I need to save these images to a folder where LabView can display them. Is there a function in Pillow or os that can do this?

# imsave needs the pixel array itself, not the object returned by imshow
plt.imsave('output.png', image_data, cmap='gray')
Does this work?

If image_data is a Numpy array of shape height x width with dtype=np.uint8 or dtype=np.uint16, you can make a PIL Image and save it as a PNG like this:
from PIL import Image
# Make PIL Image from Numpy array
pImage = Image.fromarray(image_data)
pImage.save('forLabView.png')
You can equally use OpenCV to save a Numpy array as a PNG for LabView like this:
import cv2
# Save Numpy array as PNG
cv2.imwrite('forLabView.png', image_data)
Check what your array is with:
print(image_data.shape, image_data.dtype)
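If the printed dtype turns out to be a float type rather than np.uint8 or np.uint16, PIL will refuse to write it as a PNG, since PNG has no float format. A minimal sketch of rescaling to 16-bit first (the array below is a hypothetical stand-in for image_result.GetNDArray(), and a reasonably recent Pillow is assumed):
import numpy as np
from PIL import Image

# Hypothetical float array standing in for image_result.GetNDArray()
image_data = np.random.rand(480, 640)

# Stretch the values to the full 16-bit range and cast before saving as PNG
lo, hi = image_data.min(), image_data.max()
scaled = ((image_data - lo) / (hi - lo if hi > lo else 1) * 65535).astype(np.uint16)
Image.fromarray(scaled).save('forLabView.png')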

Related

Read Grayscale raster in opencv

I have a raster image which contains only one band. I am trying to apply histogram equalization to this raster. The problem is I cannot read it as grayscale. If I read it as unchanged, it shows the shape and the values.
import cv2
import numpy as np
img = cv2.imread('blur3.tif',cv2.IMREAD_UNCHANGED)
print(img.shape)
This prints the shape:
(1678, 1064)
But when I change the code to read it as grayscale, it shows empty.
import cv2
import numpy as np
img = cv2.imread('blur3.tif',cv2.IMREAD_GRAYSCALE)
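One possible workaround (just a sketch, assuming the TIFF is a 16-bit or floating-point single-band raster that the grayscale flag cannot decode) is to keep IMREAD_UNCHANGED, rescale the band to 8-bit yourself, and then equalize, since cv2.equalizeHist only accepts 8-bit single-channel images:
import cv2
import numpy as np

# Read the single-band raster as-is (may come back as uint16 or float32)
img = cv2.imread('blur3.tif', cv2.IMREAD_UNCHANGED)

# Rescale to 0-255 and convert to uint8, which equalizeHist requires
img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Apply histogram equalization and save the result
equalized = cv2.equalizeHist(img8)
cv2.imwrite('blur3_equalized.png', equalized)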

Saving Image in tensor array as jpg or png

I am trying to detect the face using MTCNN. The main aim is to detect the face, crop it, and save the cropped image as a jpg or png file. The code implemented is below.
from facenet_pytorch import MTCNN
from PIL import Image
import numpy as np
from matplotlib import pyplot as plt
img = Image.open("example.jpg")
mtcnn = MTCNN(margin=20, keep_all=True, post_process=False)
faces = mtcnn(img)
print(faces.shape)
This gives the shape:
torch.Size([1, 3, 160, 160])
How do I save this cropped portion as a jpg file?
torch.save(faces, "faces.torch")
That won't be saved as an image; if you want to save it as an image:
# faces has shape [1, 3, 160, 160] (channels first), so reorder to H x W x C and cast to uint8
img = Image.fromarray(faces[0].detach().permute(1, 2, 0).byte().cpu().numpy())
img.save("faces.png")
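Since keep_all=True can return several faces at once, a short sketch for saving each one as its own PNG (same assumption as above: post_process=False, so the pixel values are already in the 0-255 range):
from PIL import Image

# faces has shape [N, 3, 160, 160]; write each detected face to its own file
for i, face in enumerate(faces):
    hwc = face.detach().permute(1, 2, 0).byte().cpu().numpy()
    Image.fromarray(hwc).save(f"face_{i}.png")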

Converting PIL.Image to skimage

I have 2 modules in my project: the first works with the image in bytes format, the second requires a skimage object. I need to combine them.
I have this code:
import io
from PIL import Image
import skimage.io
area = (...)
image = Image.open(io.BytesIO(image_bytes))
image = image.crop(area)
image = skimage.io.imread(image)
But I get this error:
How can I convert an image (object/variable) to skimage? I don't necessarily need a PIL Image; this is just one way to work with a bytes image, because I need to crop my image.
Thanks!
Scikit-image works with images stored as Numpy arrays - same as OpenCV and wand. So, if you have a PIL Image, you can make a Numpy array for scikit-image like this:
import numpy as np

# Make Numpy array for scikit-image from "PIL Image"
na = np.array(YourPILImage)
Just in case you want to go the other way, and make a PIL Image from a Numpy array, you can do:
# Make "PIL Image" from Numpy array
pi = Image.fromarray(na)
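Putting the two together for this question, a rough sketch (assuming image_bytes decodes to an RGB image and area is a valid crop box) could look like this:
import io
import numpy as np
from PIL import Image
from skimage.color import rgb2gray

# Decode the bytes with PIL and crop
image = Image.open(io.BytesIO(image_bytes))
image = image.crop(area)

# Convert to a Numpy array, which is what scikit-image functions accept
na = np.array(image)

# Any scikit-image function can now be applied, e.g. converting to grayscale
gray = rgb2gray(na)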

Viewing .npy images

How can I view images stored with a .npy extension and save my own files in that format?
.npy is the file extension for numpy arrays - you can read them using numpy.load:
import numpy as np
img_array = np.load('filename.npy')
One of the easiest ways to view them is using matplotlib's imshow function:
from matplotlib import pyplot as plt
plt.imshow(img_array, cmap='gray')
plt.show()
You could also use PIL or pillow:
from PIL import Image
im = Image.fromarray(img_array)
# this might fail if `img_array` contains a data type that is not supported by PIL,
# in which case you could try casting it to a different dtype e.g.:
# im = Image.fromarray(img_array.astype(np.uint8))
im.show()
These functions aren't part of the Python standard library, so you may need to install matplotlib and/or PIL/pillow if you haven't already. I'm also assuming that the files are either 2D [rows, cols] (black and white) or 3D [rows, cols, rgb(a)] (color) arrays of pixel values.
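For the other half of the question, saving your own array in that format is just numpy.save, e.g.:
import numpy as np

# Any Numpy array (for example an image) can be written as .npy
img_array = np.zeros((480, 640), dtype=np.uint8)  # placeholder image
np.save('filename.npy', img_array)  # appends .npy if the name lacks it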
Thanks Ali_m. In my case I inspected the npy file to check how many images were in the file with:
from PIL import Image
import numpy as np
data = np.load('imgs.npy')
data.shape
then I plotted the images in a loop:
from matplotlib import pyplot as plt
for i in range(len(data)):
    plt.imshow(data[i], cmap='gray')
    plt.show()

Python PIL cut off my 16-bit grayscale image at 8-bit

I'm working on a Python program to display images of stars. The images are 16-bit grayscale TIFFs.
If I try to display them in an external program, e.g. ImageMagick, they are correct, but if I load them in Python and then use show() or put them in a canvas in Tkinter they are, apart from a few pixels, totally white.
So I assume Python sets every pixel above 255 to white, but I don't know why. If I load the image and then save it as a TIFF again, ImageMagick can show it correctly.
Thanks for help.
Try to convert the image to a numpy array and display that:
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
img = Image.open('image.tiff')
arr = np.asarray(img.getdata()).reshape(img.size[1], img.size[0])
plt.imshow(arr)
plt.show()
You can change the color mapping too:
from matplotlib import cm
plt.imshow(arr, cmap=cm.gray)
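If you want show() or a Tkinter canvas to display the 16-bit image sensibly rather than going through matplotlib, one option (a sketch, assuming Pillow opens the TIFF as a 16-bit or 32-bit integer mode) is to stretch it down to 8-bit first:
from PIL import Image
import numpy as np

img = Image.open('image.tiff')

# Stretch the full value range into 0-255 and cast to 8-bit grayscale
arr = np.array(img, dtype=np.float64)
lo, hi = arr.min(), arr.max()
arr8 = ((arr - lo) / (hi - lo if hi > lo else 1) * 255).astype(np.uint8)

# An 8-bit 'L' image displays correctly with show() or via ImageTk in Tkinter
Image.fromarray(arr8).show()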
