In the following code sample, the assertion in the test function fails.
from pathlib import Path

import numpy as np
import PIL.Image

def make_images(tmp_path):
    np.random.seed(0)
    shape = (4, 6, 3)
    rgb = np.random.randint(0, 256, shape, dtype=np.uint8)
    test_image = PIL.Image.fromarray(rgb)
    image_path = tmp_path / 'test_image.jpg'
    test_image.save(image_path)
    return image_path, rgb

def test_Image_load_rgb(tmp_path):
    image_path, original_rgb = make_images(tmp_path)
    rgb2 = np.array(PIL.Image.open(image_path))
    assert np.array_equal(rgb2, original_rgb)

if __name__ == '__main__':
    test_Image_load_rgb(Path('.'))  # stand-in for the pytest tmp_path fixture when run as a script
When I look at the two arrays, original_rgb and rgb2, they have different values, so of course the assertion fails, but I don't understand why the values differ.
Opening them both as images using PIL.Image.fromarray(), they look similar but not the same: the brightness values are slightly altered. I don't understand why this is.
Note: this fails the same way both under pytest and when run as a script.
It occurred to me to test this with BMP and PNG images, and this problem does not happen with them.
So it occurs to me that the JPEG compression process somehow alters the data slightly, since it is lossy compression. But I was surprised that it would have an effect on such a small and simple image.
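As a quick check of that hypothesis, here is a minimal sketch (using the same random array as above) that round-trips the data through both formats; the PNG comparison passes while the JPEG one fails:

import numpy as np
import PIL.Image

np.random.seed(0)
rgb = np.random.randint(0, 256, (4, 6, 3), dtype=np.uint8)

for ext in ('png', 'jpg'):
    path = 'roundtrip.' + ext
    PIL.Image.fromarray(rgb).save(path)
    rgb2 = np.array(PIL.Image.open(path))
    print(ext, np.array_equal(rgb2, rgb))  # png True, jpg False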
I am leaving this question up in case someone else stumbles onto this. A more detailed explanation would be great!
UPDATE: I noticed the colors in BMP/PNG are much different from the JPG. Any reason why?
I'm fairly new to Python, and I have been trying to port a working IDL program to Python, but I'm stuck and keep getting errors. I haven't been able to find a solution yet.
The program requires 4 FITS files in total (img and correctional images dark, flat1, flat2). The operations are as follows:
flat12 = (flat1 + flat2)/2
img1 = (img - dark)/flat12
The files in question have dimensions (1024, 1024, 1). I have reshaped them to (1024, 1024) just to be able to use the im_show() function at all.
I have also tried using cv2.add(), but I get this:
TypeError: Expected Ptr<cv::UMat> for argument 'src1'
Is there any workaround for this? Thanks in advance.
To read your FITS files use astropy.io.fits: http://docs.astropy.org/en/latest/io/fits/index.html
This will give you Numpy arrays (and FITS headers if needed; there are different ways to do this, as explained in the documentation), so you could do something like:
>>> from astropy.io import fits
>>> img = fits.getdata('image.fits', ext=0) # extension number depends on your FITS files
>>> dark = fits.getdata('dark.fits') # by default it reads the first "data" extension
>>> darksub = img - dark
>>> fits.writeto('out.fits', darksub) # save output
If your data has an extra length-1 dimension, as the (1024, 1024, 1) shape shows, and you want to remove that axis, you can use normal Numpy slicing (e.g. darksub = img[0] - dark[0] when the extra axis comes first) or np.squeeze, which drops a length-1 axis wherever it sits.
Otherwise the example above will produce and save a (1024, 1024, 1) image.
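For the full correction described in the question, a sketch along these lines should work (the filenames are placeholders; casting to float avoids integer division and overflow with integer FITS data):

from astropy.io import fits
import numpy as np

# Placeholder filenames - adjust to your actual files.
img = np.squeeze(fits.getdata('image.fits')).astype(float)
dark = np.squeeze(fits.getdata('dark.fits')).astype(float)
flat1 = np.squeeze(fits.getdata('flat1.fits')).astype(float)
flat2 = np.squeeze(fits.getdata('flat2.fits')).astype(float)

flat12 = (flat1 + flat2) / 2    # average the two flats
img1 = (img - dark) / flat12    # dark-subtract, then flat-field
fits.writeto('img1.fits', img1, overwrite=True)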
I have the following code:
import cv2 as cv
import numpy as np
from base64 import b64encode

im = cv.imread('outline.png', cv.IMREAD_UNCHANGED)
cv.imwrite('output.png', im)

f1 = open('outline.png', 'rb')
f2 = open('output.png', 'rb')

img1_b = b64encode(f1.read())
img2_b = b64encode(f2.read())

print(img1_b)
print(img2_b)
What is the reason that img1_b and img2_b are different? And why is img2_b so much longer?
I do not want to just copy the file - I would like to process it before saving, but that part of the code is not included here.
Both outline.png and output.png look the same after the operation.
What can I change in my code to make the img2_b value the same as img1_b?
I have tried PIL's Image with the same result.
The phenomenon you have run into is the result of data compression not being 100% rigidly defined. PNG files use DEFLATE compression, which requires that a given compressed file always decompress to the same output, but does not require that a given input always produce the same compressed file. This leaves room for implementations to differ: one encoder may find a more compact representation of the same data than another. It sounds like your original image was compressed by a better (or just different) encoder than the one cv2 uses. To duplicate the exact compressed bytes you would likely need the exact same implementation of the compression algorithm that was used to create the original image.
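To illustrate, here is a minimal sketch using Python's built-in zlib module (which implements DEFLATE): two compression settings produce different compressed bytes that nevertheless decompress to identical data.

import zlib

data = b'example payload ' * 100
fast = zlib.compress(data, 1)   # favour speed
best = zlib.compress(data, 9)   # favour compression ratio

print(fast == best)                   # False: different compressed bytes
print(zlib.decompress(fast) == data)  # True
print(zlib.decompress(best) == data)  # True: both round-trip losslessly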
If you want to ensure that the images are indeed identical, you should compare the decoded pixel values. In the name of not re-inventing the wheel, I'll refer you to this excellent blog post on the subject.
Edit: the linked article wasn't loading consistently for me, so I copied the code here for reference.
import cv2
import numpy as np

original = cv2.imread("images/original_golden_bridge.jpg")
duplicate = cv2.imread("images/duplicate.jpg")

# 1) Check if the two images are equal
if original.shape == duplicate.shape:
    print("The images have same size and channels")
    difference = cv2.subtract(original, duplicate)
    b, g, r = cv2.split(difference)
    if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
        print("The images are completely Equal")
I'm trying to combine multiple .tif stacks (each already consisting of 40 images) into a single tiff stack. I would prefer to do this using Python. What I have tried so far is below (keep in mind I don't have a lot of experience writing code, so sorry if I'm missing something obvious):
import numpy as np
from skimage import io

im1 = io.imread('filename1.ome.tif')
for i in range(2, 10):
    im = io.imread('filename' + str(i) + '.ome.tif')
    im1 = np.concatenate((im1, im))
io.imsave('filescombined.ome.tif', im1)
This does leave me with a .tif file, and according to
print(im1.shape)
it is the correct shape; using im1.dtype I can see that both are uint16. However, I cannot open the resulting image in ImageJ (or any other viewer I've tried). The problem doesn't seem to come from data being lost in io.imread or io.imsave, because if I do:
image = io.imread('filename1.ome.tif')
io.imsave('testing.ome.tif', image)
The result can be opened. So I guess the problem has to stem from np.concatenate, but I have no idea what exactly the problem is, let alone how to fix it.
If you have any ideas on how to fix it, that would be very much appreciated!
Try the external.tifffile module of scikit-image; it does not seem to have the problem you describe.
The following works for me on Windows 7 and Python 3.5. It correctly saves a stack of 180 images, each 100x100 pixels, that can be imported straight into ImageJ:
from skimage.external import tifffile as tif
import numpy as np

stack1 = np.random.randint(255, size=(20, 100, 100))
for i in range(2, 10):
    stack = np.random.randint(255, size=(20, 100, 100))
    stack1 = np.concatenate((stack1, stack))
tif.imsave('stack1.tif', stack1.astype('uint16'), bigtiff=True)
When you drag and drop the file into ImageJ, the Bio-Formats Import Options dialog will pop up (screenshot below). Just select the View Stack as "Standard ImageJ" and the data will be loaded.
[Screenshot of the ImageJ Bio-Formats Import Options popup window]
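Note that newer scikit-image releases removed the skimage.external.tifffile module; the standalone tifffile package provides the same functionality, so an equivalent sketch (assuming tifffile is installed) would be:

import numpy as np
import tifffile

stack1 = np.random.randint(255, size=(20, 100, 100))
for i in range(2, 10):
    stack = np.random.randint(255, size=(20, 100, 100))
    stack1 = np.concatenate((stack1, stack))
tifffile.imwrite('stack1.tif', stack1.astype('uint16'), bigtiff=True)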
I'm attempting to write a reasonably simple script that will be able to read the size of an image and return all the RGB values. I'm using PIL on Python 2.7, and my code goes like this:
import os, sys
from PIL import Image
img = Image.open('C:/image.png')
pixels = img.load()
print(pixels[0, 1])
Now, this code was actually taken from this site as a way to read a GIF file. I'm trying to get the code to print an RGB tuple (in this case (55, 55, 55)), but all it gives me is a small sequence of unrelated numbers, usually containing 34.
I have tried many other code examples, whether from here or not, but none of them seem to work. Is it something wrong with the .png format? Do I need further code for the RGB part? I'm happy for any help.
My guess is that your image file is using premultiplied alpha values. The values of about 8 that you see are pretty close to 55*34/255 (where 34 is the alpha channel value).
PIL uses the mode "RGBa" (with a lowercase a) to indicate premultiplied alpha. You may be able to tell PIL to convert it to normal "RGBA", where the pixels will have roughly the values you expect:
img = Image.open('C:/image.png').convert("RGBA")
Note that if your image isn't supposed to be partly transparent at all, you may have larger issues going on. We can't help you with that without knowing more about your image.
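A quick way to check whether this is what's going on (a minimal sketch, assuming the path from the question) is to print the image's mode and compare the pixel before and after conversion:

from PIL import Image

img = Image.open('C:/image.png')
print(img.mode)          # 'RGBa' would confirm premultiplied alpha
print(img.load()[0, 1])  # the raw stored values

rgba = img.convert('RGBA')  # un-premultiplies if the mode was 'RGBa'
print(rgba.load()[0, 1])    # roughly (55, 55, 55, 34) if the guess is right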
I am working with 2D floating-point numpy arrays and saving them as .png files with high precision (see this question for how I came to this point). To do this I use the freeimage plugin, as in that linked question.
This creates a weird behaviour where the images are flipped (both left-right and up-down) if saved to 16-bit. This behaviour happens only for RGB or RGBA images, not for greyscale images. Here is some example code:
import numpy as np
from skimage import io, img_as_uint, img_as_ubyte

im = np.random.uniform(size=(256, 256))
im[:128, :128] = 1
im = img_as_ubyte(im)
io.use_plugin('freeimage')
io.imsave('test_1.png', im)
creates the expected picture: random noise with a white square in the top-left quadrant.
When I try to save this in 16 bit, I get the same result (albeit taking 99 kB instead of 50, so I know the bit depth change is working).
Now do the same as an RGB image:
im = np.random.uniform(size=(256, 256, 3))
im[:128, :128] = 1
im = img_as_ubyte(im)
io.use_plugin('freeimage')
io.imsave('test_1.png', im)
The 8-bit result is as expected, with the white square in the top-left quadrant.
but doing the following
im = img_as_uint(im)
io.use_plugin('freeimage')
io.imsave('test_1.png', im)
gives me the image flipped both left-right and up-down.
This also happens if the array contains an alpha channel.
It can be fixed by including
im = np.fliplr(np.flipud(im))
before saving. However, this seems like pretty weird and undesirable behaviour. Any idea why it is happening, or whether it is intended? As far as I could see, it's not documented.
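For reference, a minimal version of the workaround applied to the 16-bit RGB case (same setup as above; im[::-1, ::-1] is equivalent to np.fliplr(np.flipud(im))) would be:

import numpy as np
from skimage import io, img_as_uint

im = np.random.uniform(size=(256, 256, 3))
im[:128, :128] = 1
im = img_as_uint(im)

io.use_plugin('freeimage')
# Flip both axes before saving to counteract the flip the plugin introduces.
io.imsave('test_16bit.png', im[::-1, ::-1])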