Imageio.imwrite does not save the correct values - python

Can someone please explain why I get this inconsistency in RGB values after saving the image?
import imageio as io
image = 'img.jpg'
type = image.split('.')[-1]
output = 'output.' + type
img = io.imread(image)
print(img[0][0][1]) # 204
img[0][0][1] = 255
print(img[0][0][1]) # 255
io.imwrite(output, img, type, quality = 100)
imgTest = io.imread(output)
print(imgTest[0][0][1]) # 223
# io.help('jpg')
Image used = img.jpg

The reason that pixels change when you load a JPEG image and then save it as a JPEG again is that JPEG uses lossy compression. To save storage space, JPEG stores pixel values in a compressed, approximate representation rather than exactly. You can find some information about the specific algorithm here.
The advantage of lossy compression is that the image size can be reduced significantly without the human eye noticing much change. However, without any additional measures, we will not get the original pixel values back after saving the image in JPEG format.
An alternative that does not use lossy compression is the png format, which we can verify by converting your example image to png and running the code again:
import imageio as io
import numpy as np
import matplotlib.pyplot as plt
image = '/content/drive/My Drive/img.png'
type = image.split('.')[-1]
output = 'output.' + type
img = io.imread(image)
print(img[0][0][1]) # 204
img[0][0][1] = 255
print(img[0][0][1]) # 255
io.imwrite(output, img, type)
imgTest = io.imread(output)
print(imgTest[0][0][1]) # 255
# io.help('jpg')
Output:
204
255
255
We can also see that the png image takes up much more storage space than the jpg image:
import os
os.path.getsize('img.png')
# output: 688444
os.path.getsize('img.jpg')
# output: 69621
Here is the png image:

There is a defined channel order in imageio: it reads images as RGB. If you are going to save the array with OpenCV, you need to convert from RGB to BGR first; likewise, if you plot with matplotlib, keep in mind that it expects RGB. The workflow is:
Read the image with imageio
Convert RGB to BGR
Save it with OpenCV's imwrite, as sketched below
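A minimal sketch of that workflow (the filenames are just placeholders):
import cv2
import imageio as io
img = io.imread('img.jpg')                 # imageio returns the pixels in RGB order
bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) # OpenCV expects BGR
cv2.imwrite('output.jpg', bgr)             # cv2.imwrite assumes the array is BGR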

Related

When PIL is converting an RGB image in the form of a numpy array to a png it delivers an odd result

I am trying to convert this .tif file to a .png, here is the image (I attached a link because it is 250mb): https://drive.google.com/file/d/1nEvG8O5NM1bsKM-fSo66QJF7mZyR_fh-/view?usp=sharing
Here is my current code; it returns a grayscale image with multiple copies of the original .tif in one .png, when it is supposed to return an RGB image:
import rasterio
import numpy as np
from PIL import Image
dataset = rasterio.open("world.tif")
window = rasterio.windows.Window(0, 0, 21600, 10800)
out = dataset.read(window=window)
out = out.reshape(10800, 21600, 3).astype(np.uint8)
img = Image.fromarray(out, "RGB")
img.save("out.png")
I'm not sure why you are mixing up PIL/Pillow and rasterio like that. You can just do the following with PIL:
from PIL import Image
# Allow monster large images
Image.MAX_IMAGE_PIXELS = None
# Load image
im = Image.open('world.tif')
# Reduce to manageable size and save as PNG
small = im.resize((2160,1080))
small.save('result.png')
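If you do want to stay with rasterio, the grayscale result with multiple copies comes from calling reshape on the (bands, rows, cols) array that rasterio returns; the bands have to be moved to the last axis instead. A minimal sketch of that fix, assuming the same world.tif, window and 8-bit RGB bands as in the question:
import numpy as np
import rasterio
from PIL import Image
dataset = rasterio.open("world.tif")
out = dataset.read(window=rasterio.windows.Window(0, 0, 21600, 10800))
# rasterio returns (bands, rows, cols); move the bands to the last axis
out = np.transpose(out, (1, 2, 0)).astype(np.uint8)
Image.fromarray(out, "RGB").save("out.png")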

Why does 16-bit to 8-bit conversion produce a striped image?

I am testing a segmentation algorithm on several VHSR satellite images, which originally come in 16-bit format, but when I convert them to 8-bit images, the produced images show a striped appearance.
I've been trying different Python libraries (skimage, cv2, scipy) and getting similar results.
1) The original 16-bit image is a 4-band image (NIR, B, G, R), so you need to choose the right bands to create a true-color RGB image (bands 4, 3, 2). It can be downloaded from this link:
16bit image
2) I use this code to convert each pixel value from a 16-bit integer to one fitting within the 8-bit range:
from skimage import io
from scipy.misc import bytescale
import numpy as np
import matplotlib.pyplot as plt
SS = io.imread('Imag16bit.tif')
SS = bytescale(SS)
SS = np.asarray(SS)
plt.imshow(SS)
This is the result of the above code:
bytescale works for me. I think the asarray step messes something up.
import cv2
from skimage import io
from scipy.misc import bytescale
image = io.imread('SkySat_16bit.tif')
cv2.imshow('Original', image)
print(image.dtype)
image = bytescale(image)
print(image.dtype)
cv2.imshow('Converted', image)
cv2.waitKey(0)
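Note that scipy.misc.bytescale was deprecated and later removed from SciPy. If it is not available in your version, a rough NumPy stand-in (a sketch, not the exact SciPy implementation) looks like this:
import numpy as np
def bytescale_like(arr):
    # linearly rescale the array's min..max range to 0..255 and cast to uint8
    arr = arr.astype(np.float64)
    lo, hi = arr.min(), arr.max()
    if hi == lo:
        return np.zeros(arr.shape, dtype=np.uint8)
    return ((arr - lo) / (hi - lo) * 255).astype(np.uint8)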
I think this is a way to do it:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
from tifffile import imread
# Load image
im = imread('SkySat_16bit.tif')
# Extract Red, Green and Blue bands into separate 8-bit arrays
R = (im[:,:,3]/256).astype(np.uint8)
G = (im[:,:,2]/256).astype(np.uint8)
B = (im[:,:,1]/256).astype(np.uint8)
# Combine bands into RGB array
RGB = np.dstack((R,G,B))
# Save to disk
Image.fromarray(RGB).save('result.png')
You may want to adjust the contrast a bit, and check I selected the correct bands.
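If the result looks washed out or too dark, a simple percentile-based contrast stretch per band is one option (a sketch; the 2/98 percentiles are just a starting assumption):
import numpy as np
def stretch(band, lower=2, upper=98):
    # clip the band to the given percentiles and rescale to 0..255
    lo, hi = np.percentile(band, (lower, upper))
    band = np.clip(band.astype(np.float64), lo, hi)
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)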

convert EXR to JPEG using ImageIO and Python

I'm simply trying to convert an EXR to a jpg image; however, my results are turning out very dark. Does anyone know what I'm doing wrong here? I'm normalizing the image values and then placing them into the 0-255 color space. It still appears incorrect though.
Dropbox link to test exr image: https://www.dropbox.com/s/9a5z6fjsyth7w98/torus.exr?dl=0
import sys, os
import imageio

def convert_exr_to_jpg(exr_file, jpg_file):
    if not os.path.isfile(exr_file):
        return False
    filename, extension = os.path.splitext(exr_file)
    if not extension.lower().endswith('.exr'):
        return False
    # imageio.plugins.freeimage.download() #DOWNLOAD IT
    image = imageio.imread(exr_file, format='EXR-FI')
    # remove alpha channel for jpg conversion
    image = image[:,:,:3]
    # normalize the image
    data = image.astype(image.dtype) / image.max() # normalize the data to 0 - 1
    data = 255 * data # Now scale by 255
    rgb_image = data.astype('uint8')
    # rgb_image = imageio.core.image_as_uint(rgb_image, bitdepth=8)
    imageio.imwrite(jpg_file, rgb_image, format='jpeg')
    return True

if __name__ == '__main__':
    exr = "C:/Users/John/images/torus.exr"
    jpg = "C:/Users/John/images/torus.jpg"
    convert_exr_to_jpg(exr, jpg)
The sample image is an EXR image with 16-bit depth per channel. Here is a Python script to convert the EXR image to PNG with OpenCV:
import numpy as np
import cv2
im=cv2.imread("torus.exr",-1)
im=im*65535
im[im>65535]=65535
im=np.uint16(im)
cv2.imwrite("torus.png",im)
Here is the modified code with imageio that will save the image in jpeg format
import sys, os
import imageio

def convert_exr_to_jpg(exr_file, jpg_file):
    if not os.path.isfile(exr_file):
        return False
    filename, extension = os.path.splitext(exr_file)
    if not extension.lower().endswith('.exr'):
        return False
    # imageio.plugins.freeimage.download() #DOWNLOAD IT
    image = imageio.imread(exr_file)
    print(image.dtype)
    # remove alpha channel for jpg conversion
    image = image[:,:,:3]
    data = 65535 * image
    data[data>65535] = 65535
    rgb_image = data.astype('uint16')
    print(rgb_image.dtype)
    #rgb_image = imageio.core.image_as_uint(rgb_image, bitdepth=16)
    imageio.imwrite(jpg_file, rgb_image, format='jpeg')
    return True

if __name__ == '__main__':
    exr = "torus.exr"
    jpg = "torus3.jpeg"
    convert_exr_to_jpg(exr, jpg)
(Tested with Python 3.5.2, Ubuntu 16.04)
I ran into the same issue, and fixed it here. Because ImageIO converts everything to a numpy array you can gamma correct the values (fixing your darkness issue) and then convert that back to a PIL Image to work with easily:
import imageio
import numpy
from PIL import Image
im = imageio.imread("image.exr")
im_gamma_correct = numpy.clip(numpy.power(im, 0.45), 0, 1)
im_fixed = Image.fromarray(numpy.uint8(im_gamma_correct*255))
I tested this on your very beautiful torus knot and it worked perfectly. Just let me know if you need a more complete code snippet, but I think the above answers your actual question.
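If you then want to write the corrected image back out, one extra Pillow line is enough (the filename and quality value here are just assumptions):
im_fixed.save("torus_fixed.jpg", quality=95)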

OpenCV returns None when imread is called with GDAL support

I have a very simple program in Python with OpenCV and GDAL. In this program I read a GeoTiff image with the following line:
image = cv2.imread(sys.argv[1], cv2.IMREAD_LOAD_GDAL | cv2.IMREAD_COLOR)
The problem is that for a specific image imread returns None. I am using images from: https://www.sensefly.com/drones/example-datasets.html
The image in Assessing crops with RGB imagery (eBee SQ) > Map (orthomosaic) works well. Its size is 19428 x 19784 with 4 bands.
The image in Urban mapping (eBee Plus/senseFly S.O.D.A.) > Map (orthomosaic) doesn't work. Its size is 26747 x 25388 with 4 bands.
Any help to figure out what the problem is?
Edit: I tried the solution suggested by @en_lorithai and it works; the problem is that I then need to do some image processing with OpenCV, and the image loaded by GDAL has several issues:
GDAL loads images as RGB instead of BGR (the OpenCV default)
The image shape expected by OpenCV is (height, width, channels), while GDAL returns an array with shape (channels, height, width)
The image returned by GDAL is flipped along the Y-axis and rotated clockwise by 90 degrees.
The image loaded by OpenCV is (resized to 700x700):
The image loaded by GDAL (after change shape, of course) is (resized to 700x700)
Finally, if I try to convert this image from BGR to RGB with
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
I get (resized to 700x700)
I can convert from GDAL format to OpenCV format with the following code
image = ds.ReadAsArray() # Load image with GDAL
tmp = image.copy()
image[0] = tmp[2,:,:] # swap red channel and blue channel
image[2] = tmp[0,:,:]
image = np.swapaxes(image,2,0) # convert from (channels, height, width) to (width, height, channels)
image = cv2.flip(image,0) # flip in Y-axis
image = cv2.transpose(image) # rotate by 90 degrees (clockwise)
image = cv2.flip(image,1)
The problem is that I think this is a very slow process, and I want to know if there is an automatic conversion process.
You can try and open the image in gdal instead
from osgeo import gdal
g_image = gdal.Open('161104_hq_transparent_mosaic_group1.tif')
a_image = g_image.ReadAsArray()
I can't test, as I don't have enough available memory to open that image.
Edit: equivalent operation on another image
from osgeo import gdal
import matplotlib.pyplot as plt
import numpy as np
import cv2
g_image = gdal.Open('Water-scenes-014.jpg') # 3 channel rgb image
a_image = g_image.ReadAsArray()
s_image = np.dstack((a_image[0],a_image[1],a_image[2]))
plt.imshow(s_image) # show image in matplotlib (no need for color swap)
s_image = cv2.cvtColor(s_image,cv2.COLOR_RGB2BGR) # colorswap for cv
cv2.imshow('name',s_image)
Another method of getting individual bands from gdal
g_image = gdal.Open('image_name.PNG')
band1 = g_image.GetRasterBand(1).ReadAsArray()
You can then do a numpy dstack of each of the bands.
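If the aim is only to get the GDAL array into the layout OpenCV expects, without the per-channel copies shown in the question, a single transpose plus cvtColor is usually enough. A sketch, assuming the raster's first three bands are ordered R, G, B:
import numpy as np
import cv2
from osgeo import gdal
ds = gdal.Open('161104_hq_transparent_mosaic_group1.tif')
arr = ds.ReadAsArray()[:3]                                # (bands, rows, cols), keep the first three bands
img = np.ascontiguousarray(np.transpose(arr, (1, 2, 0)))  # -> (rows, cols, bands), i.e. RGB
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)                # OpenCV works in BGR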

Using Python Pillow lib to set Color depth

I am using the Python Pillow lib to change an image before sending it to device.
I need to change the image to make sure it meets the following requirements
Resolution (width x height) = 298 x 144
Grayscale
Color Depth (bits) = 4
Format = .png
I can do all of them with the exception of Color Depth to 4 bits.
Can anyone point me in the right direction on how to achieve this?
So far, I haven't been able to save 4-bit images with Pillow. You can use Pillow to reduce the number of gray levels in an image with:
import PIL.Image as Image
im = Image.open('test.png')
im1 = im.point(lambda x: int(x/17)*17)
Assuming test.png is an 8-bit grayscale image, i.e. it contains values in the range 0-255 (im.mode == 'L'), im1 now only contains 16 different values (0, 17, 34, ..., 255). This is what ufp.image.changeColorDepth does, too. However, you still have an 8-bit image. So instead of the above, you can do
im2 = im.point(lambda x: int(x/17))
and you end up with an image that only contains 16 different values (0, 1, 2, ..., 15), so they would all fit in a uint4 type. However, if you save such an image with Pillow
im2.save('test.png')
the PNG will still have a color depth of 8 bits (and if you open the image, you only see really dark gray pixels). You can use PyPng to save a real 4-bit PNG:
import png
import numpy as np
png.from_array(np.asarray(im2, np.uint8), 'L;4').save('test4bit_pypng.png')
Unfortunately, PyPng seems to take much longer to save the images.
Using the changeColorDepth function from the ufp.image module:
import ufp.image
import PIL
im = PIL.Image.open('test.png')
im = im.convert('L') # change to grayscale image
im.thumbnail((298, 144)) # resize to 298x144
ufp.image.changeColorDepth(im, 16) # reduce to 16 gray levels (4-bit depth); this changes the original PIL.Image object
# if you need a better conversion, use the ufp.image.quantizeByImprovedGrayScale function, which quantizes the image
im.save('changed.png')
See the example: image quantization by Improved Gray Scale [Python].
