I am testing a segmentation algorithm on several VHSR satellite images, which originally come in 16-bit format, but when I convert them to 8-bit images, the results show a striped appearance.
I've tried different Python libraries (skimage, cv2, scipy) and get similar results.
1) The original 16-bit image is a 4-band image (NIR, B, G, R), so you need to choose the right bands (4, 3, 2) to create a true color RGB image. Thanks in advance. It can be downloaded from this link:
16bit image
2) I use this code to convert each pixel value from a 16-bit integer to one that fits within the 8-bit range:
from skimage import io
from scipy.misc import bytescale
import numpy as np
import matplotlib.pyplot as plt
SS = io.imread('Imag16bit.tif')
SS = bytescale(SS)
SS = np.asarray(SS)
plt.imshow(SS)
plt.show()
This is the result of the above code:
bytescale works for me. I think the asarray step messes up something.
import cv2
from skimage import io
from scipy.misc import bytescale

image = io.imread('SkySat_16bit.tif')
cv2.imshow('Original', image)
print(image.dtype)   # uint16 for the 16-bit source
image = bytescale(image)
print(image.dtype)   # uint8 after scaling
cv2.imshow('Converted', image)
cv2.waitKey(0)
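Note that scipy.misc.bytescale was deprecated and then removed in newer SciPy releases, so the import above fails on a current install. If it is unavailable, a minimal numpy stand-in (bytescale_np is a hypothetical name, and it covers only the default min/max scaling, not bytescale's cmin/cmax/high/low options) could look like this:

import numpy as np

def bytescale_np(arr):
    # Linearly rescale the array's min..max range to 0..255 as uint8
    a = arr.astype(np.float64)
    lo, hi = a.min(), a.max()
    if hi == lo:  # avoid division by zero for a constant image
        return np.zeros(arr.shape, dtype=np.uint8)
    return ((a - lo) / (hi - lo) * 255).astype(np.uint8)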
I think this is a way to do it:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
from tifffile import imread
# Load image
im = imread('SkySat_16bit.tif')
# Extract Red, Green and Blue bands into separate 8-bit arrays
R = (im[:,:,3]/256).astype(np.uint8)
G = (im[:,:,2]/256).astype(np.uint8)
B = (im[:,:,1]/256).astype(np.uint8)
# Combine bands into RGB array
RGB = np.dstack((R,G,B))
# Save to disk
Image.fromarray(RGB).save('result.png')
You may want to adjust the contrast a bit, and check I selected the correct bands.
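For the contrast adjustment, one common approach is a percentile-based linear stretch per band. Here is a minimal sketch (stretch is a hypothetical helper, and the 2/98 percentile cut-offs are an arbitrary choice):

import numpy as np
from tifffile import imread

def stretch(band, lo_pct=2, hi_pct=98):
    # Clip the band to the given percentiles, then rescale linearly to 0-255
    lo, hi = np.percentile(band, (lo_pct, hi_pct))
    scaled = np.clip((band - lo) / (hi - lo), 0, 1) * 255
    return scaled.astype(np.uint8)

im = imread('SkySat_16bit.tif')
RGB = np.dstack([stretch(im[:, :, b]) for b in (3, 2, 1)])  # R, G, B bands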
I am concatenating two 1-channel grayscale images into one 2-channel image and writing it to a folder.
import glob
import os
import numpy as np
import imageio
from PIL import Image
from matplotlib import pyplot as plt
from keras.preprocessing import image
filenames1 = glob.glob("folder1/*.png")
filenames1.sort()
filenames2 = glob.glob("folder2/*.png")
filenames2.sort()
for f1, f2 in zip(filenames1, filenames2):
    img_name = os.path.basename(f1)  # e.g. 'img001.png'
    img1 = Image.open(f1)
    img2 = Image.open(f2)
    img1a = image.img_to_array(img1)
    img2a = image.img_to_array(img2)
    # Merged 2-channel image
    merge_image = np.concatenate((img1a, img2a), axis=2)
    # plt.imsave('folder3/{}'.format(img_name), merge_image)
    imageio.imwrite('folder4/{}'.format(img_name), merge_image)
When I used matplotlib's 'imsave' function, I got the following error:
ValueError: Third dimension must be 3 or 4
When I used Imageio 'imwrite' function, I got the following error:
ValueError: Image must be 2D (grayscale, RGB, or RGBA)
How can I write the 2-channel image to a folder in this case?
As your error messages say, you cannot use matplotlib's imsave or imageio's imwrite here, as they support only 1 channel (grayscale), 3 channels (RGB, BGR, HSV, etc.) or 4 channels (the same 3 plus an alpha channel). I don't know whether the PNG format supports 2 channels at all, but if it does, the result would be a single-channel (grayscale) image plus an alpha channel.
The solution depends on what these images represent and what you're actually trying to achieve:
if you want to save a single-channel image plus an alpha channel, you'd better replicate the first channel 3 times, so that your channels are (BW, BW, BW, alpha)
if you're fusing two kinds of spatial information, for example the angle and magnitude of an optical flow, you have to do the conversion manually (see OpenCV displaying a 2-channel image (optical flow)) and fill the remaining channel with something else.
if you're only trying to stack two images and save them, PNG is not the correct solution. You could stack them using numpy and save/store them as .npy objects, as shown in the sketch after this list.
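For that last option, a minimal sketch of stacking and round-tripping with numpy (the file names are placeholders):

import numpy as np
from PIL import Image

# Two single-channel images (placeholder paths)
a = np.asarray(Image.open('folder1/img001.png'))
b = np.asarray(Image.open('folder2/img001.png'))

merged = np.dstack((a, b))                # shape (H, W, 2)
np.save('folder4/img001.npy', merged)     # lossless, keeps both channels

restored = np.load('folder4/img001.npy')  # round-trips exactly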
Why does this code plot different images?
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
x = (np.random.random((32,32))*255).astype(np.int16)
img1 = Image.fromarray(x, mode='L')
img2 = Image.fromarray(x)
plt.imshow(img1, cmap='gray')
plt.show()
plt.imshow(img2, cmap='gray')
plt.show()
see images:
PIL requires mode 'L' images to be 8-bit, see here. So if you pass in your 16-bit array, where every high byte is zero, PIL reads the buffer one byte at a time and every second pixel comes out black.
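A minimal sketch of the fix, assuming (as here) that the values already fit in 0-255: cast to uint8 before handing the array to PIL:

import numpy as np
from PIL import Image

x = (np.random.random((32, 32)) * 255).astype(np.int16)

# One byte per pixel, as mode 'L' expects; both images now match
img = Image.fromarray(x.astype(np.uint8), mode='L')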
As the title states, I'm converting my image to a numpy array and then converting it right back. Here's my code:
import os
import numpy as np
from PIL import Image
img = Image.open(os.path.join(no_black_border, png_files[0]))
img.show()
np_arr = np.asarray(img)
img1 = Image.fromarray(np_arr)
img1.show()
Here's my image before converting:
Here's my image after converting it back:
Your image is not RGB, it is a palette image. That means it does not have a Red, a Green and a Blue value at every pixel location; instead it has a single 8-bit palette index at each location, which PIL uses to look up the colour. You lose the palette when you convert to a Numpy array.
You have 2 choices.
Either convert your image to RGB when you open it and all 3 values will be carried across to Numpy:
# Load image and make RGB
im = Image.open(...).convert('RGB')
# Convert to Numpy array and process
numpyarray = np.array(im)
Or, do as you currently do, but re-apply the palette from the original image after converting back to a PIL Image:
# Load image
im = Image.open(...)
# Convert to Numpy array
numpyarray = np.array(im)
... do Numpy stuff ...
# Convert back to PIL Image and re-apply original palette
r = Image.fromarray(numpyarray, mode='P')
r.putpalette(im.getpalette())
# Optionally save
r.save('result.png')
See answer here and accompanying comments.
I am using the Python Pillow library to change an image before sending it to a device.
I need to make sure the image meets the following requirements:
Resolution (width x height) = 298 x 144
Grayscale
Color Depth (bits) = 4
Format = .png
I can do all of them with the exception of Color Depth to 4 bits.
Can anyone point me in the right direction on how to achieve this?
So far, I haven't been able to save 4-bit images with Pillow. You can use Pillow to reduce the number of gray levels in an image with:
import PIL.Image as Image
im = Image.open('test.png')
im1 = im.point(lambda x: int(x/17)*17)
Assuming test.png is an 8-bit graylevel image, i.e. it contains values in the range 0-255 (im.mode == 'L'), im1 now contains only 16 different values (0, 17, 34, ..., 255). This is what ufp.image.changeColorDepth does, too. However, you still have an 8-bit image. So instead of the above, you can do
im2 = im.point(lambda x: int(x/17))
and you end up with an image that contains only 16 different values (0, 1, 2, ..., 15), so the values would all fit in a 4-bit type. However, if you save such an image with Pillow
im2.save('test.png')
the png will still have a color depth of 8 bits (and if you open the image, you see only really dark gray pixels). You can use PyPng to save a real 4-bit png:
import png
import numpy as np
png.from_array(np.asarray(im2, np.uint8), 'L;4').save('test4bit_pypng.png')
Unfortunately, PyPng seems to take much longer to save the images.
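For illustration, this is essentially what the 'L;4' mode stores: two 4-bit values packed into each byte. A numpy sketch of the packing, reusing im2 from above and assuming an even image width:

import numpy as np

vals = np.asarray(im2, dtype=np.uint8)         # values 0..15, one per byte
assert vals.shape[1] % 2 == 0                  # this sketch assumes an even width

# High nibble from even columns, low nibble from odd columns:
packed = (vals[:, 0::2] << 4) | vals[:, 1::2]  # half as many bytes per row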
Use the changeColorDepth function in the ufp.image module.
import ufp.image
import PIL.Image
im = PIL.Image.open('test.png')
im = im.convert('L')  # convert to grayscale
im.thumbnail((298, 144))  # resize to 298x144
ufp.image.changeColorDepth(im, 16)  # reduce to 16 gray levels (4-bit depth); modifies the original PIL.Image object
# If you need a better conversion, use the ufp.image.quantizeByImprovedGrayScale function, which quantizes the image.
im.save('changed.png')
See this example: image quantization by Improved Gray Scale [Python].
My aim:
Read an image into the PIL format.
Convert it to grayscale.
Plot the image using pylab.
Here is the code I'm using:
from PIL import Image
from pylab import *
import numpy as np
inputImage = r'C:\Test\Test1.jpg'
##outputImage = r'C:\Test\Output\Test1.jpg'
pilImage=Image.open(inputImage)
pilImage.draft('L', (500, 500))
imageArray = np.asarray(pilImage)
imshow(imageArray)
##pilImage.save(outputImage)
axis('off')
show()
My Problem:
The image gets displayed as if the colours are inverted.
But I know the image is being converted to grayscale, because when I write it to disk it appears as a grayscale image (just as I expect).
I feel that the problem is somewhere in the numpy conversion.
I've just started programming in Python for Image Processing.
Any tips and guidelines will also be appreciated.
You want to override the default color map:
imshow(imageArray, cmap="Greys_r")
Here's a page on plotting images and pseudocolor in matplotlib.
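A self-contained sketch of the effect, using a synthetic gradient as stand-in data: without an explicit cmap, matplotlib applies its default colormap to any 2D array, which is why grayscale data comes out false-coloured:

import numpy as np
import matplotlib.pyplot as plt

# A horizontal gradient as stand-in grayscale data
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

plt.subplot(121)
plt.imshow(gradient)                   # default colormap: false colour
plt.subplot(122)
plt.imshow(gradient, cmap='Greys_r')   # explicit grayscale rendering
plt.show()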
This produces a B&W image:
pilImage=Image.open(inputImage)
pilImage = pilImage.convert('1')  # this converts to black & white
pilImage.draft('L',(500,500))
pilImage.save('outfile.png')
From the convert method docs:
convert
im.convert(mode) => image
Returns a converted copy of an image.
When translating from a palette image, this translates pixels through the palette.
If mode is omitted, a mode is chosen so that all information in the image and the palette can be represented without a palette.
When translating from a colour image to black and white, the library uses the ITU-R 601-2 luma transform:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
When converting to a bilevel image (mode "1"), the source image is first converted to black and white.
Resulting values larger than 127 are then set to white, and the image is dithered.
To use other thresholds, use the point method.
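For example, a minimal sketch of a custom threshold via point (the input path is a placeholder and the cutoff of 100 is arbitrary): convert to 'L' first, map the pixels to 0 or 255 yourself, then switch to mode "1":

from PIL import Image

im = Image.open('input.jpg').convert('L')  # 8-bit grayscale first

threshold = 100  # arbitrary cutoff instead of the default 127
bilevel = im.point(lambda p: 255 if p > threshold else 0).convert('1')
bilevel.save('bilevel.png')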