I have a naive question, but after a long day I am still not able to find the answer.
I am currently loading my PNG images using PIL, and it works well. However, some of my PNG images are 16-bit per channel. I am trying desperately to query this information, but I am not able to get it using PIL. If I simply use the file command-line utility, it works:
$ file flower_16b.png
flower_16b.png: PNG image data, 660 x 600, 16-bit/color RGB, non-interlaced
However, in my Python code:
from PIL import Image
img = Image.open(filename, "r")
print(img.mode)
I get RGB. According to the documentation, PIL's RGB mode means (3x8-bit pixels, true color), so it looks like the image has been cast down. Is there a way to get the bit depth of an image using PIL or another Python module?
PIL/Pillow doesn't support 48-bit (16 bits per channel RGB) images like that. One option is OpenCV, but be aware that it reads images in BGR order, not RGB:
import cv2
# Read with whatever bit depth is specified in the image file
BGR = cv2.imread('image.png', cv2.IMREAD_ANYDEPTH|cv2.IMREAD_ANYCOLOR)
# Check dtype and number of channels
print(BGR.dtype, BGR.shape)
uint16 (768, 1024, 3)
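If you need the channels in RGB order for the rest of your pipeline, a small sketch of the conversion looks like this (the dtype stays uint16, so no precision is lost):
import cv2
# Read at native bit depth (BGR channel order), then reorder to RGB
BGR = cv2.imread('image.png', cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
RGB = cv2.cvtColor(BGR, cv2.COLOR_BGR2RGB)
# The bit depth is visible from the dtype: uint8 means 8-bit, uint16 means 16-bit per channel
print(RGB.dtype, RGB.shape)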
Another option may be pyvips, which works a slightly different way, but has some good benefits:
import pyvips
im = pyvips.Image.new_from_file('image.png', access="sequential")
print(im)
<pyvips.Image 1024x768 ushort, 3 bands, rgb16>
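If you then need the pixels as a numpy array, one route (a sketch based on pyvips' write_to_memory(); recent pyvips versions also offer a numpy() convenience method) is:
import numpy as np
import pyvips
im = pyvips.Image.new_from_file('image.png', access="sequential")
# Copy the decoded pixels into numpy; np.uint16 assumes the ushort image shown above
arr = np.ndarray(buffer=im.write_to_memory(),
                 dtype=np.uint16,
                 shape=[im.height, im.width, im.bands])
print(arr.dtype, arr.shape)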
If you are really, really stuck and can't/won't install OpenCV or pyvips, you have a couple more options with ImageMagick...
You could reduce your 3 RGB channels (16-bits each) to 3 RGB channels (8-bits each) with:
magick input.png PNG24:output.png # then open "output.png" with PIL
Or, you could separate the 3 RGB channels into 3 separate 16-bit files and process them separately with PIL/Pillow:
magick input.png -separate channel-%d.png
and you will get the red channel as a 16-bit greyscale image in channel-0.png, the green in channel-1.png and the blue in channel-2.png, each of which you can open with PIL/Pillow.
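As a sketch, reading the separated channels back and stacking them into a single 16-bit array might look like this (the channel-N.png names come from the -separate command above; Pillow usually opens a 16-bit greyscale PNG in mode I;16):
import numpy as np
from PIL import Image
# Open the three single-channel 16-bit files written by "magick ... -separate"
channels = [Image.open(f'channel-{i}.png') for i in range(3)]
print(channels[0].mode)   # typically I;16 for a 16-bit greyscale PNG
# Stack red, green and blue back into a (height, width, 3) uint16 array
rgb16 = np.dstack([np.asarray(c, dtype=np.uint16) for c in channels])
print(rgb16.dtype, rgb16.shape)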
Related
I have a numpy array with 3 RGB channels and two alpha channels (I am using Python).
I want to convert it to a Photoshop .psd file so I can later apply transformations in Photoshop to the annotated alpha layers.
I guess it is a very simple task, but I haven't found any way to do it with the packages I found by googling.
I guess it should be something along the lines of the following:
>> im.shape
(.., .., 5)
psd = PSD()
psd.add_layers_from_numpy(im, names=["R", "G", "B", "alpha-1", "alpha-2"])
with open(of, 'wb') as f:
    psd.write(f)
If you know how to do this, please let me know.
Thanks in advance!
I opened your PSD file in Photoshop and saved it as a TIFF. I then checked with tiffinfo and determined that your file is saved as RGB with 3 layers of "Extra samples":
tiffinfo MULTIPLE_ALPHA.tif
TIFF Directory at offset 0x8 (8)
Subfile Type: (0 = 0x0)
Image Width: 1000 Image Length: 1430
Resolution: 72, 72 pixels/inch
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: RGB color <--- HERE
Extra Samples: 3<unspecified, unspecified, unspecified> <--- HERE
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 6
Rows/Strip: 1430
Planar Configuration: single image plane
Software: Adobe Photoshop 21.2 (Macintosh)
DateTime: 2020:09:08 19:34:38
You can load that into Python with:
import tifffile
import numpy as np
# Load TIFF saved by Photoshop
im = tifffile.imread('MULTIPLE_ALPHA.tif')
# Check shape
print(im.shape) # prints (1430, 1000, 6)
# Save with 'tifffile'
tifffile.imwrite('saved.tif', im, photometric='RGB')
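The extra samples simply come back as additional channels in that array, so (as a sketch, assuming the layout shown by tiffinfo above: 3 colour channels followed by 3 extra samples) you can split them apart with plain numpy slicing:
import tifffile
# Reload the 6-channel image and split it: RGB first, then the extra (alpha) samples
im = tifffile.imread('MULTIPLE_ALPHA.tif')
rgb    = im[..., :3]
extras = im[..., 3:]
print(rgb.shape, extras.shape)   # (1430, 1000, 3) (1430, 1000, 3)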
And you can now check that Photoshop sees and treats the tifffile-written image the same as your original.
You may want to experiment with the compress parameter. I noticed your file comes out as 8.5MB uncompressed, but as 2.4MB of lossless Zip-compressed data if I use:
tifffile.imwrite('saved.tif', im, photometric='RGB', compress=1)
Note that reading/writing with compression requires you to install imagecodecs:
pip install imagecodecs
Note that I am not suggesting it is impossible with a PSD-writer package, I am just saying I believe you can get what you want with a TIFF.
Keywords: Image processing, Photoshop, PSD, TIFF, save multi-channel, multi-layer, multi-image, multiple alpha, tifffile, Python
I'm trying to convert EPS images to JPEG using Pillow, but the results are of low quality. I'm trying to use the resize method, but it gets completely ignored. I set the size of the JPEG image to (3600, 4700), but the resulting image has size (360, 470). My code is:
from PIL import Image
eps_image = Image.open('img.eps')
height = eps_image.height * 10
width = eps_image.width * 10
new_size = (height, width)
print(new_size) # prints (3600, 4700)
eps_image.resize(new_size, Image.ANTIALIAS)
eps_image.save(
    'img.jpeg',
    format='JPEG',
    dpi=(9000, 9000),
    quality=95)
UPD: Vasu Deo.S noticed one of my errors, and thanks to him the JPG image has become bigger, but the quality is still low. I've tried different DPI values, sizes and resample values for the resize function, but the result does not change much. How can I make it better?
The problem is that PIL is a raster image processor, as opposed to a vector image processor. It "rasterises" vector images (such as your EPS file and SVG files) onto a grid when it opens them because it can only deal with rasters.
If that grid doesn't have enough resolution, you can never regain it. Normally, it rasterises at 100dpi, so if you want to make bigger images, you need to rasterise onto a larger grid before you even get started.
Compare:
from PIL import Image
eps_image = Image.open('image.eps')
eps_image.save('a.jpg')
The result is 540x720.
And this:
from PIL import Image
eps_image = Image.open('image.eps')
# Rasterise onto 4x higher resolution grid
eps_image.load(scale=4)
eps_image.save('a.jpg')
The result is 2160x2880.
You now have enough quality to resize however you like.
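So, as a sketch, the code from the question could become something like this (scale=10 matches the 10x target size the question asks for):
from PIL import Image
eps_image = Image.open('img.eps')
# Rasterise the EPS onto a 10x denser grid before doing anything else
eps_image.load(scale=10)
# resize() returns a copy, so assign the result; the size is (width, height)
eps_image = eps_image.resize((3600, 4700), Image.LANCZOS)
eps_image.save('img.jpeg', format='JPEG', quality=95)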
Note that you don't need to write any Python to do this at all - ImageMagick will do it all for you. It is included in most Linux distros and is available for macOS and Windows; you just use it in the Terminal. The equivalent command is like this:
magick -density 400 input.eps -resize 800x600 -quality 95 output.jpg
It's because eps_image.resize(new_size, Image.ANTIALIAS) returns a resized copy of the image, so you have to store it in a separate variable. Just change:
eps_image.resize(new_size, Image.ANTIALIAS)
to
eps_image = eps_image.resize(new_size, Image.ANTIALIAS)
UPDATE:
These may not solve the problem completely, but they should still help.

You are trying to save your output image as a .jpeg, which is a lossy compression format, so information is lost during the compression/transformation (for the most part). Change the output file extension to a lossless compression format like .png so that data is not compromised during compression. Also change quality=95 to quality=100 in Image.save().

You are using Image.ANTIALIAS for resampling the image, which is not that good when upscaling (it has been replaced by Image.LANCZOS in newer versions; the old name still exists for backward compatibility). Try using Image.BICUBIC instead, which produces quite favorable results (for the most part) when upscaling, as in the sketch below.
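Putting both suggestions together, a minimal sketch (the 10x target size is taken from the question) might be:
from PIL import Image
eps_image = Image.open('img.eps')
new_size = (eps_image.width * 10, eps_image.height * 10)
# resize() returns a copy, so the result has to be assigned back
eps_image = eps_image.resize(new_size, Image.BICUBIC)
# Save to a lossless format so no further quality is lost
eps_image.save('img.png', format='PNG')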
I want to convert a 24-bit PNG image to 32-bit so that it can be displayed on an LED matrix. Here is the code I have used, but it converted the 24-bit image to 48-bit:
import cv2
import numpy as np
i = cv2.imread("bbb.png")
img = np.array(i, dtype = np.uint16)
img *= 256
cv2.imwrite('test.png', img)
I looked at the christmas.png image in the code you linked to, and it appears to be a 624x8 pixel image with a palette and an 8-bit alpha channel.
Assuming the sample image works, you can make one with the same characteristics by taking a PNG image and adding a fully opaque alpha channel like this:
#!/usr/local/bin/python3
from PIL import Image
# Load the image and convert to 32-bit RGBA
im = Image.open("image.png").convert('RGBA')
# Save result
im.save("result.png")
I generated a gradient image and applied that processing, and it produced the expected result, so maybe you can try that.
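If you want to confirm the result really is 32-bit (4 channels of 8 bits each), a quick check with numpy might look like this (result.png is the file saved by the snippet above):
import numpy as np
from PIL import Image
im = Image.open('result.png')
print(im.mode)                 # RGBA, i.e. 4 channels
arr = np.asarray(im)
print(arr.dtype, arr.shape)    # uint8, (height, width, 4) -> 4 x 8 = 32 bits per pixel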
I think you have confused the color bit-depth with the size of the input image/array. From the links posted in the comments, there is no mention of 32 as a bit depth. The script at that tutorial link uses an image with 3-channel, 8-bit color (red, green, and blue code values each represented as numbers from 0-255). The input image must have the same height as the array, but can be a different width to allow scrolling.
For more on bit-depth: https://en.wikipedia.org/wiki/Color_depth
I am reading an RGB image and converting it into HSV mode using PIL. Now I am trying to save this HSV image but I am getting an error.
from PIL import Image
filename = r'\trial_images\cat.jpg'
img = Image.open(filename)
img = img.convert('HSV')
destination = r'\demo\temp.jpg'
img.save(destination)
I am getting the following error:
OSError: cannot write mode HSV as JPEG
How can I save my transformed image? Please help
Easy one... save it as a numpy array. This works fine, but the file might be pretty big (for me it was about 7 times bigger than the JPEG image). You can use numpy's savez_compressed function to cut that roughly in half, to about 3-4 times the size of the original image. Not fantastic, but if you are doing image processing you are probably fine.
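A minimal sketch of that idea (the filenames are just placeholders):
import numpy as np
from PIL import Image
img = Image.open('cat.jpg').convert('HSV')
# Store the raw HSV values as a compressed numpy archive instead of a JPEG
hsv = np.asarray(img)
np.savez_compressed('cat_hsv.npz', hsv=hsv)
# Later: load the raw HSV data back for further processing
hsv = np.load('cat_hsv.npz')['hsv']
print(hsv.shape)   # (height, width, 3)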
I want to take a BMP or JPG and duplicate it so the new image will be darker (or brighter). What function can I use?
Ariel
You can use the ImageEnhance module of PIL:
from PIL import Image, ImageEnhance
image = Image.open(r'c:\temp\20090809210.jpg')
enhancer = ImageEnhance.Brightness(image)
brighter_image = enhancer.enhance(2)
darker_image = enhancer.enhance(0.5)
Look at the PIL and ImageEnhance documentation for more details.
Note: I think the ImageEnhance documentation is a bit too terse, and you may need some experimenting at the interactive prompt to get it right.
If you want to do it the hard way, i.e. code up a pixel-by-pixel intensity change, here is how:
1) Convert from RGB to HSI
2) Increase or decrease the intensity component
3) Convert back from HSI to RGB
A true fade-out, i.e. an alpha channel, is not available in the JPG or BMP formats [an RGBA-format image in PIL]. You get black-to-white fading using the intensity technique. If you want to use alpha, use PNG or TIFF instead.
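PIL itself exposes an HSV mode rather than HSI, but the same idea applies; here is a minimal sketch of the intensity approach under that assumption (scaling the V channel):
from PIL import Image
image = Image.open('input.jpg').convert('HSV')
h, s, v = image.split()
# Scale the value (intensity) channel: factor > 1 brightens, < 1 darkens
factor = 0.5
v = v.point(lambda px: min(int(px * factor), 255))
darker = Image.merge('HSV', (h, s, v)).convert('RGB')
darker.save('darker.jpg')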