I am trying to read this TIFF image with Python. I have tried using PIL to open and save this image. The process goes smoothly, but the output image appears completely dark. Here is the code I used.
from PIL import Image
import numpy as np

im = Image.open('file.tif')
imarray = np.array(im)
data = Image.fromarray(imarray)
data.save('x.tif')
Please let me know if I have done anything wrong, or if there is any other working way to read and save TIFF images. I mainly need the image as a NumPy array for processing purposes.
The problem is simply that the image is dark. If you open it with PIL and convert it to a NumPy array, you can see the maximum brightness is 2455, which on a 16-bit image with a possible range of 0..65535 means it is only 2455/65535, or about 3.7% of full brightness.
from PIL import Image
import numpy as np
# Open image
im = Image.open('5 atm_gain 80_C001H001S0001000025.tif')
# Make into Numpy array
na = np.array(im)
print(na.max()) # prints 2455
So, you need to normalise your image or scale up the brightness. A VERY CRUDE method is to multiply by 50, for example:
Image.fromarray(na*50).show()
But really, you should use a proper normalisation, like PIL.ImageOps.autocontrast() or OpenCV's cv2.normalize().
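For example, a simple linear min-max rescale to the full 16-bit range can be sketched like this (a tiny toy array stands in for the dim image data from the question):

```python
import numpy as np

def normalise_to_full_range(na, dtype=np.uint16):
    """Linearly rescale na so its min..max spans the full range of dtype."""
    info = np.iinfo(dtype)
    na = na.astype(np.float64)
    lo, hi = na.min(), na.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full(na.shape, info.min, dtype=dtype)
    scaled = (na - lo) / (hi - lo) * (info.max - info.min) + info.min
    return scaled.astype(dtype)

# A dim 16-bit "image" whose maximum is 2455, like the one in the question
dim = np.array([[0, 1000], [2000, 2455]], dtype=np.uint16)
bright = normalise_to_full_range(dim)
print(bright.max())  # 65535
```

Unlike multiplying by a fixed factor, this cannot overflow the dtype, since the maximum always maps exactly to the top of the range.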
It seems like cv2.imread() or Image.fromarray() is changing the original image color to a bluish color. What I am trying to accomplish is to crop the original PNG image and keep the same colors, but the colors change. Not sure how to revert to the original colors. Please help! Thank you.
import cv2
from PIL import Image

# start cropping logic
img = cv2.imread("image.png")
crop = img[1280:, 2250:2730]
cropped_rendered_image = Image.fromarray(crop)
cropped_rendered_image.save("newImageName.png")
I tried this and other fixes, but no luck yet.
https://stackoverflow.com/a/50720612/13206968
There is no "changing" going on. It's simply a matter of channel order.
OpenCV natively uses BGR order (in numpy arrays)
PIL natively uses RGB order
Numpy doesn't care
When you call cv.imread(), you're getting BGR data in a numpy array.
When you repackage that into a PIL Image, you are giving it BGR order data, but you're telling it that it's RGB, so PIL takes your word for it... and misinterprets the data.
You can try telling PIL that it's BGR;24 data. See https://pillow.readthedocs.io/en/stable/handbook/concepts.html
Or you can use cv.cvtColor() with the cv.COLOR_BGR2RGB flag (because you have BGR and you want RGB). For the opposite direction, there is the cv.COLOR_RGB2BGR flag.
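As a minimal sketch of the channel swap (plain NumPy slicing here, which is equivalent to cv.cvtColor(img, cv.COLOR_BGR2RGB) for a 3-channel image):

```python
import numpy as np

# A 1x1 "image" that is pure blue in OpenCV's native BGR order
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)

# Reversing the last axis swaps BGR <-> RGB (same effect as cv.cvtColor
# with cv.COLOR_BGR2RGB for 3-channel data)
rgb = bgr[..., ::-1]
print(rgb[0, 0].tolist())  # [0, 0, 255] -- blue is now the last channel
```

After this swap, Image.fromarray(rgb) interprets the data correctly, because the channel order now matches PIL's RGB expectation.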
I am trying to apply a BinaryMorphologicalClosingImageFilter to a binary TIFF image to fill empty spaces between structures (where ImageJ Fill Holes doesn't help). Here is the code I use:
import SimpleITK as sitk
#Import TIFF image
image = sitk.ReadImage("C:/Users/Christian Nikolov/Desktop/STL/4_bina.tif")
#Apply Filter
sitk.BinaryMorphologicalClosingImageFilter()
#Export Image
sitk.WriteImage(image, "C:/Users/Christian Nikolov/Desktop/STL/4_bina_itk.tif")
The code runs without an error, but the problem is that I can't figure out how to set the kernel size of the filter, and the image doesn't change. Any advice?
(I got the idea to use this filter from the following post on SO: Fill holes on a 3D image)
You've created the BinaryMorphologicalClosingImageFilter object, but you haven't actually applied it to your input image. Your code should be something like this:
import SimpleITK as sitk
#Import TIFF image
image = sitk.ReadImage("C:/Users/Christian Nikolov/Desktop/STL/4_bina.tif")
#Apply Filter
closing_filter = sitk.BinaryMorphologicalClosingImageFilter()
closing_filter.SetKernelRadius([2, 2])
output_image = closing_filter.Execute(image)
#Export Image
sitk.WriteImage(output_image, "C:/Users/Christian Nikolov/Desktop/STL/4_bina_itk.tif")
I set the kernel radius to [2, 2], but you can use whatever unsigned integer sizes work best for you.
In my python course, the instructor uploads a greyscale picture of himself and reads it on Python with the following code:
import numpy as np
import math
from PIL import Image
from IPython.display import display
im = Image.open("chris.tiff")
array = np.array(im)
print(array.shape)
and he gets
(200,200)
When I write the code and run it on my own image, with the exact same "tiff" extension, I get a 3-dimensional array. I was told it's because my image was colored, and so the third entry is for RGB. So I used a greyscale photo just like he did, but I still obtain a 3D array. Why?
Any help is greatly appreciated, thank you
EDIT
For extra clarity, the array I get for my greyscale image with tiff extension is
(3088, 2316, 4)
Your photo appears to be grey, but based on the posted shape it actually has four channels (RGBA), not one.
So, you need to convert it to greyscale using the following line:
im = Image.open("chris.tiff").convert('L')
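To illustrate, here is a small RGBA image built in memory (a stand-in for the TIFF in the question) and the effect of .convert('L') on the array shape:

```python
import numpy as np
from PIL import Image

# 4x6 RGBA image: np.array() gives a 3-D (height, width, 4) array
rgba = Image.fromarray(np.zeros((4, 6, 4), dtype=np.uint8), mode='RGBA')
print(np.array(rgba).shape)              # (4, 6, 4)

# After .convert('L') the array is 2-D, like the instructor's (200, 200)
grey = rgba.convert('L')
print(np.array(grey).shape)              # (4, 6)
```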
I am reading an RGB image and converting it into HSV mode using PIL. Now I am trying to save this HSV image but I am getting an error.
filename = r'\trial_images\cat.jpg'
img = Image.open(filename)
img = img.convert('HSV')
destination = r'\demo\temp.jpg'
img.save(destination)
I am getting the following error:
OSError: cannot write mode HSV as JPEG
How can I save my transformed image? Please help
Easy one... save it as a NumPy array. This works fine, but the file might be pretty big (for me it got about 7 times bigger than the JPEG image). You can use NumPy's savez_compressed function to cut that roughly in half, to about 3-4 times the size of the original image. Not fantastic, but when you are doing image processing you are probably fine.
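A sketch of that approach (a random array stands in for the converted HSV image, and the temp-file path is arbitrary):

```python
import os
import tempfile
import numpy as np

# Stand-in for np.array(img.convert('HSV')): an HxWx3 uint8 array
hsv = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Lossless and compressed on disk; reload with np.load()
path = os.path.join(tempfile.gettempdir(), 'temp_hsv.npz')
np.savez_compressed(path, hsv=hsv)

restored = np.load(path)['hsv']
print(np.array_equal(restored, hsv))  # True
```

Unlike re-encoding as JPEG, this round-trips the HSV values exactly, which is usually what you want mid-pipeline.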
EDIT: Sorry, the first version of the code was wrong; I tried to remove useless information and made a mistake. The problem stays the same, but now it's the code I actually used.
I think my problem is probably very basic, but I can't find a solution. I basically just wanted to play around with PIL, convert an image to an array and back, then save the image. It should look the same, right? In my case the new image is just gibberish; it seems to have some structure, but it is not a picture of a plane like it should be:
def array_image_save(array, image_path='plane_2.bmp'):
    image = Image.fromarray(array, 'RGB')
    image.save(image_path)
    print("Saved image: {}".format(image_path))

im = Image.open('plane.bmp').convert('L')
w, h = im.size
array_image_save(np.array(list(im.getdata())).reshape((w, h)))
Not entirely sure what you are trying to achieve but if you just want to transform the image to a numpy array and back, the following works:
from PIL import Image
import numpy as np

def array_image_save(array, image_path='plane_2.bmp'):
    image = Image.fromarray(array)
    image.save(image_path)
    print("Saved image: {}".format(image_path))

im = Image.open('plane.bmp')
array_image_save(np.array(im))
You can just pass a PIL image to np.array, and it takes care of the proper shaping. The reason you get distorted data is that you convert the PIL image to greyscale (.convert('L')) but then try to save it as RGB. (Note also that getdata() is row-major, so that reshape should be (h, w), not (w, h).)
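For comparison, the correct greyscale round-trip can be sketched as follows, letting PIL infer mode 'L' from a 2-D uint8 array rather than forcing 'RGB':

```python
import numpy as np
from PIL import Image

# PIL infers mode 'L' from 2-D uint8 data; no explicit mode argument needed
na = np.arange(256, dtype=np.uint8).reshape(16, 16)
im = Image.fromarray(na)
print(im.mode)                    # L

back = np.array(im)
print(np.array_equal(back, na))   # True
```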