I am reading an image from an S3 bucket, resizing it, and getting the numpy array of the resized image, called "a". I also save the resized image, reopen it, and get the numpy array of that, called "b". My question is: why are a and b different?
import io
import numpy as np
from PIL import Image

# s3 is an existing boto3 S3 client; event comes from the Lambda handler
resp = s3.get_object(Bucket=event['bucket'], Key=event['image_keys'][0])
data = resp['Body']
image_as_bytes = io.BytesIO(data.read())
image = Image.open(image_as_bytes).convert('RGB').resize((299, 299), Image.NEAREST)
a = np.asarray(image)
# Save the resized image, reopen it, and take its array
image.save('IMAGE_58990004_110132026B_13d64039_resized_lambda.jpg')
b = np.asarray(Image.open('IMAGE_58990004_110132026B_13d64039_resized_lambda.jpg'))
Does ".save" changes the numpy array?
Assuming that image.save(...) uses the filename extension (.jpg) to pick a file format (I don't know if it does, but it seems reasonable), you are saving as a JPEG file, and JPEG compression is lossy, i.e., it discards some information to make the file smaller.
Try using a file format with lossless compression, such as PNG.
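As a quick check (a minimal sketch reusing the image object from your code; the file names are just examples), a lossless round trip gives back an identical array, while the JPEG round trip usually does not:
import numpy as np
from PIL import Image
a = np.asarray(image)
# JPEG round trip: lossy compression, so the arrays usually differ
image.save('resized.jpg')
b_jpg = np.asarray(Image.open('resized.jpg'))
print(np.array_equal(a, b_jpg))   # typically False
# PNG round trip: lossless, so the arrays match exactly
image.save('resized.png')
b_png = np.asarray(Image.open('resized.png'))
print(np.array_equal(a, b_png))   # True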
Sorry for my English, but it's not my first language.
I would like to create a program that:
Transforms a jpeg or png image into an array (very important: I would like an array composed only of the pixel values of the image, not metadata or other information, and in which I can select each specific pixel of the image).
Saves this array in a txt file.
Transforms this array, composed only of the pixel values of the image, back into a jpg or png image and saves it to a file.
Requests:
Is the array I created with the program I wrote composed only of the pixel values of the image, or is there also metadata or other information?
Is this a valid way to remove metadata from an image?
Is this a valid way to create the array representing that image pixel by pixel?
Is this a valid way to convert png images to jpeg or jpeg to png?
Thank you!
This is the program I created; any opinions?
import numpy as np
from PIL import Image
import sys

# Load the image and convert it to a numpy array of pixel values
img_data = Image.open("imagea.jpeg")
img_arr = np.array(img_data)
np.set_printoptions(threshold=sys.maxsize)  # print full arrays instead of a truncated view
print(img_arr.shape)

# Rebuild an image from the pixel array and save it
new_img = Image.fromarray(img_arr)
new_img.save("imageb.jpeg")
print("Image saved!")

# Write the array as text
file = open("file1.txt", "w+")
content = str(img_arr)
file.write(content)
file.close()
print("Finished!")
Loading an image and converting it to a Numpy array is a perfectly legitimate way of discarding all metadata including:
EXIF data, copyright data,
IPTC and XMP data,
ICC colour profile data
You can tell it's all gone by thinking about the Numpy array you hold and its dimensions and data type.
Note that you need to be careful with PNG palette images and images with an alpha channel.
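As a rough check (a sketch assuming the file names above; getexif() only covers EXIF, not every kind of metadata), you can compare what PIL reports before and after the round trip through a Numpy array:
import numpy as np
from PIL import Image
original = Image.open("imagea.jpeg")
print(dict(original.getexif()))   # EXIF tags, if any
print(original.info.keys())       # other metadata PIL picked up, e.g. 'exif', 'icc_profile'
# Round-trip through a plain pixel array: only the pixel values survive
clean = Image.fromarray(np.array(original))
print(dict(clean.getexif()))      # {}
clean.save("imageb_clean.jpeg")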
Note that you can achieve this more simply on the command-line with ImageMagick using:
magick mogrify -strip IMAGE.JPG
Or with exiftool.
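For example, this strips everything in place (exiftool keeps a backup copy with an _original suffix by default):
exiftool -all= IMAGE.JPG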
Note that you can achieve this by using a format that doesn't support metadata, such as NetPBM, with extension .ppm e.g.:
magick INPUT.JPG -strip -compress none RESULT.PPM # gives P3/plain ASCII file
magick INPUT.JPG -strip RESULT.PPM # gives P6/binary file
You can also read/write PPM files with PIL.
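For example (a minimal sketch with made-up file names), converting to PPM with PIL also drops the metadata, because the format simply cannot carry it:
from PIL import Image
# PPM stores only the pixel data, so nothing else survives the conversion
Image.open("INPUT.JPG").convert("RGB").save("RESULT.ppm")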
torchvision.io.read_image takes as input a file stored at the given path argument. How can I achieve the same output if the image is stored in a variable? Of course, I could just save the image to a file and then read it back, but that takes additional time. Is there a way to get the same result as torchvision.io.read_image with a variable as input, not a path?
If the images in memory are PIL images, you can use a transform function to convert them to tensors in the right format (achieving the same effect as torchvision.io.read_image without needing to read anything from disk).
import PIL
import torchvision.transforms.functional as transform
# Reads a file using pillow
PIL_image = PIL.Image.open(image_path)
# The image can be converted to tensor using
tensor_image = transform.to_tensor(PIL_image)
# The tensor can be converted back to PIL using
new_PIL_image = transform.to_pil_image(tensor_image)
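One caveat: to_tensor returns a float tensor scaled to [0, 1], whereas torchvision.io.read_image returns a uint8 tensor of shape (C, H, W). If you need to match read_image exactly, pil_to_tensor keeps the raw uint8 values (a sketch; image_path is a placeholder):
import PIL
import torchvision.transforms.functional as transform
PIL_image = PIL.Image.open(image_path)
# uint8 tensor of shape (C, H, W), same dtype and layout as torchvision.io.read_image
uint8_tensor = transform.pil_to_tensor(PIL_image)
# float tensor in [0, 1], which is what to_tensor gives you instead
float_tensor = transform.to_tensor(PIL_image)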
I am trying to read this tiff image with Python. I have tried using PIL to read and save the image. The process goes smoothly, but the output image appears completely dark. Here is the code I used.
import numpy as np
from PIL import Image

im = Image.open('file.tif')
imarray = np.array(im)
data = Image.fromarray(imarray)
data.save('x.tif')
Please let me know if I have done anything wrong, or if there is any other working way to read and save tif images. I mainly need the image as a NumPy array for processing purposes.
The problem is simply that the image is dark. If you open it with PIL and convert to a Numpy array, you can see the maximum brightness is 2455, which, on a 16-bit image with a possible range of 0..65535, means it is only 2455/65535, or 3.7% bright.
import numpy as np
from PIL import Image

# Open image
im = Image.open('5 atm_gain 80_C001H001S0001000025.tif')

# Make into Numpy array
na = np.array(im)
print(na.max())    # prints 2455
So, you need to normalise your image or scale up the brightnesses. A VERY CRUDE method is to multiply by 50, for example:
Image.fromarray(na*50).show()
But really, you should use a proper normalisation, like PIL.ImageOps.autocontrast() or OpenCV normalize().
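As a sketch of a proper scaling (assuming the image is 16-bit greyscale and you want to stretch it to the full 0..65535 range), you can normalise the Numpy array directly:
import numpy as np
from PIL import Image
na = np.array(Image.open('5 atm_gain 80_C001H001S0001000025.tif'))
# Stretch the brightest pixel to 65535 while keeping 16-bit precision
scaled = (na.astype(np.float64) / na.max() * 65535).astype(np.uint16)
Image.fromarray(scaled).save('brighter.tif')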
I'm trying to encrypt and decrypt an image using the RSA algorithm. For that, I need to read the image as greyscale, apply the keys, and save the uint16 array to a png or any other image format that supports 16-bit data. Then I need to read that 16-bit data back, convert it into an array, and do the decryption. Previously, I tried to save the image as .tif, and when I read it with
img = sk.imread('image.tiff', plugin = 'tifffile')
it treats the image as RGB, which is not what I want. Now I want to save the uint16 array to a 16-bit png image, which can take values between 0 and 65535, and then read it back as uint16 data. I tried to save the values to a 16-bit png file using
img16 = img.astype(np.uint16)
imgOut = Image.fromarray(img16)
imgOut.save('en.png')
This gives me this error: OSError: cannot write mode I;16 as PNG
I have also tried imgOut = Image.fromarray(img16, 'I'), but this yields not enough image data.
Please help me to save the 16bit data into a .png image. Thank you.
There are a couple of possibilities...
First, using imageio to write a 16-bit PNG:
import imageio
import numpy as np
# Construct 16-bit gradient greyscale image
im = np.arange(65536,dtype=np.uint16).reshape(256,256)
# Save as PNG with imageio
imageio.imwrite('result.png',im)
You can then read the image back from disk and change the first pixel to mid-grey (32768) like this:
# Now read image back from disk into Numpy array
im2 = imageio.imread('result.png')
# Change first pixel to mid-grey
im2[0][0] = 32768
Or, if you don't like imageio, you can use PIL/Pillow and save a 16-bit TIFF:
from PIL import Image
import numpy as np
# Construct 16-bit gradient greyscale image
im = np.arange(65536,dtype=np.uint16).reshape(256,256)
# Save as TIFF with PIL/Pillow
Image.fromarray(im).save('result.tif')
You can then read back the image from disk and change the first pixel to mid-grey like this:
# Read image back from disk into PIL Image
im2 = Image.open('result.tif')
# Convert PIL Image to Numpy array
im2 = np.array(im2)
# Make first pixel mid-grey
im2[0][0] = 32768
Keywords: Image, image processing, Python, Numpy, PIL, Pillow, imageio, TIF, TIFF, PNG, 16 bit, 16-bit, short, unsigned short, save, write.
I have used PIL to convert and resize a JPG/BMP file to PNG format. I can easily resize and convert it to PNG, but the file size of the new image is too big.
im = Image.open('input.jpg')
im_resize = im.resize((400, 400), Image.ANTIALIAS) # best down-sizing filter
im.save('output.png')
What do I have to do to reduce the image file size?
PNG images still have to hold the data for every single pixel in the image, so there is a limit to how far you can compress them.
One way to decrease the size further, since your 400x400 image is to be used as a "thumbnail" of sorts, is to use indexed mode:
im_indexed = im_resize.convert("P")
im_indexed.save(...)
Wait...
Just saw an error in your example code:
You are saving the original image, not the resized image:
im = Image.open('input.jpg')
im_resize = im.resize((400, 400), Image.ANTIALIAS) # best down-sizing filter
im.save('output.png')
When you should be doing:
im_resize.save('output.png')
You are just saving back the original image; that is why it looks so big. You probably won't need to use indexed mode then.
Another thing: indexed mode images can look pretty poor. A better way out, if you come to need it, might be to save your smaller sizes as .jpg instead of .png; JPEGs can get as small as you need, trading size for quality.
You can also use other tools, like PNGOUT.
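Putting it together (a sketch using the input.jpg / output file names from the question; Image.LANCZOS is the current Pillow name for ANTIALIAS), the resized image can be saved directly, or quantised to a palette first if you want a smaller PNG:
from PIL import Image
im = Image.open('input.jpg')
im_resize = im.resize((400, 400), Image.LANCZOS)
# Save the *resized* image, not the original
im_resize.save('output.png', optimize=True)
# Optional: palette (indexed) mode is usually much smaller, at some quality cost
im_indexed = im_resize.convert('P', palette=Image.ADAPTIVE, colors=256)
im_indexed.save('output_indexed.png', optimize=True)
# Or trade quality for size by saving a JPEG instead
im_resize.save('output.jpg', quality=85)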