PIL image to array and back - python

EDIT: Sorry, the first version of the code was wrong; I tried to strip out irrelevant details and made a mistake. The problem stays the same, but this is now the code I actually used.
I think my problem is probably very basic, but I can't find a solution. I just wanted to play around with PIL: convert an image to an array and back again, then save the image. It should look the same, right? In my case the new image is just gibberish; it seems to have some structure, but it is not a picture of a plane like it should be:
from PIL import Image
import numpy as np

def array_image_save(array, image_path='plane_2.bmp'):
    image = Image.fromarray(array, 'RGB')
    image.save(image_path)
    print("Saved image: {}".format(image_path))

im = Image.open('plane.bmp').convert('L')
w, h = im.size
array_image_save(np.array(list(im.getdata())).reshape((w, h)))

Not entirely sure what you are trying to achieve, but if you just want to transform the image to a NumPy array and back, the following works:
from PIL import Image
import numpy as np

def array_image_save(array, image_path='plane_2.bmp'):
    image = Image.fromarray(array)
    image.save(image_path)
    print("Saved image: {}".format(image_path))

im = Image.open('plane.bmp')
array_image_save(np.array(im))
You can just pass a PIL image to np.array and it takes care of the proper shaping. The reason you get distorted data is that you convert the PIL image to greyscale (.convert('L')) but then try to save it as RGB.
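If keeping the greyscale conversion is the goal, a minimal sketch would be to stay consistent about the mode on both ends (the output file name here is just an example, not from the question):

from PIL import Image
import numpy as np

im = Image.open('plane.bmp').convert('L')    # greyscale, single channel
arr = np.array(im)                           # 2-D uint8 array, shape (height, width)
Image.fromarray(arr).save('plane_grey.bmp')  # mode 'L' is inferred from the 2-D uint8 array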

Related

cv2.imread or Image from PIL changes PNG image color

Seems like cv2.imread() or Image.fromarray() is changing the original image color to a bluish tint. What I am trying to accomplish is to crop the original PNG image and keep the same colors, but the colors change. Not sure how to revert to the original color. Please help, thank you!
import cv2
from PIL import Image

# start cropping logic
img = cv2.imread("image.png")
crop = img[1280:, 2250:2730]
cropped_rendered_image = Image.fromarray(crop)
cropped_rendered_image.save("newImageName.png")
I tried this and other fixes but no luck yet:
https://stackoverflow.com/a/50720612/13206968
There is no "changing" going on. It's simply a matter of channel order.
OpenCV natively uses BGR order (in numpy arrays)
PIL natively uses RGB order
Numpy doesn't care
When you call cv.imread(), you're getting BGR data in a numpy array.
When you repackage that into a PIL Image, you are giving it BGR order data, but you're telling it that it's RGB, so PIL takes your word for it... and misinterprets the data.
You can try telling PIL that it's BGR;24 data. See https://pillow.readthedocs.io/en/stable/handbook/concepts.html
Or you can use cv.cvtColor() with the cv.COLOR_BGR2RGB flag (because you have BGR and you want RGB). For the opposite direction, there is the cv.COLOR_RGB2BGR flag.
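Applied to the posted crop snippet, a minimal sketch of that conversion route (reusing the same file name and crop indices from the question) might look like this:

import cv2
from PIL import Image

img = cv2.imread("image.png")                     # BGR-ordered numpy array
crop = img[1280:, 2250:2730]
crop_rgb = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)  # reorder channels to RGB before handing to PIL
Image.fromarray(crop_rgb).save("newImageName.png")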

The dimensions of an image array are 3D, not 2D as in the Python course

In my Python course, the instructor uploads a greyscale picture of himself and reads it in Python with the following code:
import numpy as np
import math
from PIL import Image
from IPython.display import display
im = Image.open("chris.tiff")
array = np.array(im)
print(array.shape)
and he gets
(200,200)
When I write the code and run my own image, with the exact same extension "tiff", I get a 3-dimensional array. I was told it's because my image was coloured, so the third entry is for RGB. But I used a greyscale photo just like he did and I still obtain a 3D array. Why?
Any help is greatly appreciated, thank you
EDIT
For extra clarity, the shape I get for my greyscale image with the tiff extension is
(3088, 2316, 4)
Your photo appears to be grey, but based on the posted shape it actually has four channels (most likely RGBA, i.e. colour plus an alpha channel).
So, you need to convert it to greyscale using the following line:
im = Image.open("chris.tiff").convert('L')
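As a quick check, the shape before and after the conversion can be compared; the file name below is the instructor's example, so substitute your own image:

from PIL import Image
import numpy as np

im_rgba = Image.open("chris.tiff")     # four channels, likely RGBA
im_grey = im_rgba.convert('L')         # single-channel greyscale
print(np.array(im_rgba).shape)         # e.g. (3088, 2316, 4)
print(np.array(im_grey).shape)         # (3088, 2316)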

Reading and saving tif images with python

I am trying to read this TIFF image with Python. I have tried using PIL to read and save the image. The process goes smoothly, but the output image seems to be plain dark. Here is the code I used.
from PIL import Image
import numpy as np

im = Image.open('file.tif')
imarray = np.array(im)
data = Image.fromarray(imarray)
data.save('x.tif')
Please let me know if I have done anything wrong, or if there is any other working way to read and save TIFF images. I mainly need it as a NumPy array for processing purposes.
The problem is simply that the image is dark. If you open it with PIL, and convert to a Numpy array, you can see the maximum brightness is 2455, which on a 16-bit image with possible range 0..65535, means it is only 2455/65535, or 3.7% bright.
from PIL import Image
import numpy as np

# Open image
im = Image.open('5 atm_gain 80_C001H001S0001000025.tif')
# Make into Numpy array
na = np.array(im)
print(na.max())  # prints 2455
So, you need to normalise your image or scale up the brightness. A VERY CRUDE method is to multiply by 50, for example:
Image.fromarray(na*50).show()
But really, you should use a proper normalisation, like PIL.ImageOps.autocontrast() or OpenCV normalize().
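As one concrete alternative, a plain NumPy rescale by the array's own maximum stretches the image to full range; this assumes the image really is single-channel 16-bit as the 0..65535 range above suggests, and the output file name is just an example:

import numpy as np
from PIL import Image

im = Image.open('5 atm_gain 80_C001H001S0001000025.tif')
na = np.array(im).astype(np.float32)
na = na / na.max() * 65535                                  # stretch so the brightest pixel hits full range
Image.fromarray(na.astype(np.uint16)).save('x_scaled.tif')  # back to 16-bit before saving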

Creating an RGB picture in Python with OpenCV from a randomized array

I want to create an RGB image from a random array of pixel values in Python with an OpenCV/NumPy setup.
I'm able to create a grey image, which looks amazingly lifelike, with this code:
import numpy as np
import cv2

pic_array = np.random.randint(255, size=(900, 800))
pic_array_8bit = pic_array.astype(np.uint8)
pic_g = cv2.imwrite("pic-from-random-array.png", pic_array_8bit)
But I want to make it in colour as well. I've tried converting with cv2.cvtColor(), but it didn't work.
The issue might be in the array definition or a missed step. I couldn't find a similar situation. Any help on how to make a random RGB image would be great.
Thanks!
An RGB image is composed of three grayscale channels. You can make all three at once like this:
import numpy as np
import cv2

rgb = np.random.randint(255, size=(900, 800, 3), dtype=np.uint8)
cv2.imshow('RGB', rgb)
cv2.waitKey(0)
First, define random image data consisting of 3 channels using NumPy, as shown below:
import numpy as np
data = np.random.randint(0, 255, size=(900, 800, 3), dtype=np.uint8)
Now use the Python Imaging Library (PIL) as shown below:
from PIL import Image
img = Image.fromarray(data, 'RGB')
img.show()
You can also save the image easily using the save function:
img.save('image.png')
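Since the question works with OpenCV, the same random array can also be written out with cv2.imwrite; for random data the BGR vs RGB channel order makes no visible difference. The output file name here is just an example:

import cv2

cv2.imwrite('random_rgb.png', data)  # 'data' is the (900, 800, 3) uint8 array defined above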

Image height and width getting swapped when read using opencv imread

When I read an image using the OpenCV imread function, I find its height and width swapped from what they should be. For example, my original image has dimensions 610 by 406, but after being read with cv2.imread its dimensions are 406 by 610. Also, if I rotate my original image before passing it to the function, nothing changes: the image read in still has the original dimensions.
Please see example code and images for clarification:
So, below I have provided the input images: one is the original and the second is rotated (I rotated it using the Windows rotate command, by right-clicking and selecting 'Rotate right'). The output I get for both images is the same. It seems to me that rotating the image did not actually change its shape; I think so because when I tried to upload the rotated image here, the preview still showed the un-rotated version, so I had to take a screen capture of it and paste that instead.
This is the code:
import cv2
import numpy as np
import sys
import os

image = cv2.imread("C:/img_8075.jpg")
print("image shape: ", image.shape)
cv2.imshow("image", image)
cv2.waitKey(0)

image2 = cv2.imread("C:/img_8075_Rotated.jpg")
print("image shape: ", image2.shape)
cv2.imshow("image", image2)
cv2.waitKey(0)
The result I get is:
image shape: (406, 610, 3)
image shape: (406, 610, 3)
for both images.
I am unable to paste the input/output pictures here since it says you need 10 reputation and I have only just joined.
Any suggestions would be helpful. Thanks!
I believe you are just getting the conventions mixed up. OpenCV Mat structures are accessed as (row, column).
So a 1920x1080 image will be 1080 ROWS by 1920 COLUMNS (1080,1920)
Commonly, Mat.rows represents the image's height and Mat.cols represents the image's width.
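A small sketch of the same convention on the Python side (reusing the file name from the question):

import cv2

image = cv2.imread("C:/img_8075.jpg")
rows, cols, channels = image.shape  # NumPy/OpenCV order: (height, width, channels)
height, width = image.shape[:2]     # a 610-wide by 406-tall image therefore prints (406, 610, 3)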
