I'm trying to write a simple program that loads an image, divides the value of each pixel by 2, and stores the result. The image is stored in an array [1280][720][3]. After changing the value of each pixel I checked that the values were as expected. The values are correct, but when I store the new image and check it again, the pixel values are not the same as before...
The image is 1280x720 pixels and each pixel has 3 bytes (one for each color: R, G, B).
import matplotlib.image as mpimg

img = mpimg.imread('image.jpg')  # (1280, 720, 3)

myImg = []
for row in img:
    myRow = []
    for pixel in row:
        myPixel = []
        for color in pixel:
            myPixel.append(color // 2)
        myRow.append(myPixel)
    myImg.append(myRow)

mpimg.imsave("foo.jpg", myImg)
img is a NumPy array, so you can just use img / 2. It's also much faster than looping over nested Python lists.
myImg = img / 2
mpimg.imsave("foo.jpg", myImg)
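One detail worth noting: true division with / promotes the uint8 array to float64 (which mpimg.imsave interprets as values in [0, 1]), while floor division with // keeps the integer dtype. A minimal sketch with a made-up 2x2 RGB array standing in for the loaded image:

```python
import numpy as np

# Made-up 2x2 RGB image standing in for the loaded JPEG data
img = np.array([[[10, 20, 30], [40, 50, 60]],
                [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)

half_float = img / 2   # true division promotes to float64
half_int = img // 2    # floor division keeps uint8

print(half_float.dtype)         # float64
print(half_int.dtype)           # uint8
print(half_int[0, 0].tolist())  # [5, 10, 15]
```

Also note that JPEG is a lossy format, so pixel values read back from a saved .jpg will generally not match the written values exactly; saving as PNG avoids that, and is likely the real reason the checked values differ.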
I have a transparent image with RGBA code (0, 0, 0, 0), on which I added some pictures and text. Now, I am trying to paste that on a GIF image, but it is completely ruining it.
Here is my transparent image:
Here is my GIF:
And, this is what I get:
This is my code:
from PIL import Image, ImageSequence

card = Image.open('card.png')  # the transparent overlay image
im = Image.open('newav.gif')

frames = []
for frame in ImageSequence.Iterator(im):
    frame = frame.copy()
    frame.paste(card, (0, 0), card)
    frames.append(frame)

frames[0].save('rank_card_gif.gif', save_all=True, append_images=frames[1:], loop=0)
Combining existing, animated GIFs with static PNGs having transparency doesn't work that easily – at least solely using Pillow. Your GIF can only store up to 256 different colors using some color palette, and thus has mode P (or PA) when opened using Pillow. Now, your PNG probably has a lot more colors. When pasting the PNG onto the GIF, the color palette of the GIF is used to convert some of the PNG's colors, which gives unexpected or unwanted results, cf. your output.
My idea would be, since you're already iterating each frame:
Convert the frame to RGB, to get the "explicit" colors from the palette.
Convert the frame to some NumPy array, and manually alpha blend the frame and the PNG using its alpha channel.
Convert the resulting frame back to a Pillow Image object.
Thus, all frames are stored as RGB, all colors are the same for all frames. So, when now saving a new GIF, the new color palette is determined from this set of images.
Here's my code for the described procedure:
import cv2
import numpy as np
from PIL import Image, ImageSequence

# Read GIF using Pillow
gif = Image.open('gif.gif')

# Read PNG using OpenCV
pngg = cv2.imread('png.png', cv2.IMREAD_UNCHANGED)

# Extract alpha channel, repeat for later alpha blending
alpha = np.repeat(pngg[..., 3, np.newaxis], 3, axis=2) / 255

frames = []
for frame in ImageSequence.Iterator(gif):
    frame = frame.copy()

    # Convert frame to RGB
    frame = frame.convert('RGB')

    # Convert frame to NumPy array; convert RGB to BGR for OpenCV
    frame = cv2.cvtColor(np.asarray(frame), cv2.COLOR_RGB2BGR)

    # Manual alpha blending
    frame = np.uint8(pngg[..., :3] * alpha + frame * (1 - alpha))

    # Convert BGR to RGB for Pillow; convert frame to Image object
    frame = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    frames.append(frame)

frames[0].save('output.gif', append_images=frames[1:], save_all=True,
               loop=0, duration=gif.info['duration'])
And, this is the result:
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
Pillow: 8.1.0
----------------------------------------
I am working on a solution to crop GIFs. I got it working, but the size of the image increases a lot; i.e., it was a 500 KB animated GIF, but after cropping it's an 8 MB animated GIF.
I suspect that's because I convert it to RGB and then merge each frame with the previous one if the GIF uses partial mode.
here is an example of how I do that:
img = Image.open(file_path)
last_frame = img.convert('RGBA')
p = img.getpalette()

# this method analyzes the image and determines if it's in partial mode
mode = analyseImage(img)['mode']

all_frames = []
frames = ImageSequence.Iterator(img)
for frame in frames:
    if not frame.getpalette():
        frame.putpalette(p)

    new_frame = Image.new('RGBA', img.size)
    if mode == 'partial':
        new_frame.paste(last_frame)

    new_frame.paste(frame, (0, 0), frame.convert('RGBA'))
    last_frame = new_frame.copy()

    new_frame.thumbnail(size, Image.ANTIALIAS)
    all_frames.append(new_frame)

return all_frames
and then I store it as new image using method:
new_image_bytes = BytesIO()
om = all_frames[0]
om.info = img.info
om.save(new_image_bytes, format='gif', optimize=True, save_all=True,
        append_images=all_frames[1:], duration=img.info.get('duration'),
        loop=img.info.get('loop'))
and this image is 8MB instead of 500KB
Am I missing anything obvious?
Basically, what you do here is discard all compression optimizations that the original GIF might've had.
Let's say there is a 2-frame GIF whose second frame only changes pixels outside the crop window. After being cropped, that frame could've become empty (and thus small in size), but your process makes it a full copy of the first frame, ultimately leading to the bloat you describe.
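One way to claw some of that size back (a sketch, not a drop-in fix for your exact pipeline) is to re-quantize the flattened RGBA frames to palette ('P') mode before saving, so the GIF writer can build a compact colour table again instead of storing full-colour frames:

```python
from PIL import Image

def quantize_frames(rgba_frames, colors=256):
    """Convert flattened RGBA frames back to palette ('P') mode so the
    saved GIF can use a colour table instead of full RGB data."""
    return [frame.convert('RGB').quantize(colors=colors) for frame in rgba_frames]

# Hypothetical usage with two solid-colour frames
frames = [Image.new('RGBA', (16, 16), (255, 0, 0, 255)),
          Image.new('RGBA', (16, 16), (0, 0, 255, 255))]
palette_frames = quantize_frames(frames)
print([f.mode for f in palette_frames])  # ['P', 'P']
```

This won't restore frame-difference optimizations, but it avoids paying the full-RGB cost on every frame.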
I have a GIF that I would like to resize with Pillow so that its file size decreases. The current size of the GIF is 2 MB.
I am trying to
resize it so its height / width is smaller
decrease its quality.
With JPEG, the following piece of code is usually enough to make large images drastically decrease in size.
from PIL import Image
im = Image.open("my_picture.jpg")
im = im.resize((im.size[0] // 2, im.size[1] // 2), Image.ANTIALIAS) # decreases width and height of the image
im.save("out.jpg", optimize=True, quality=85) # decreases its quality
With a GIF, though, it does not seem to work. The following piece of code even makes out.gif bigger than the initial GIF:
im = Image.open("my_gif.gif")
im.seek(im.tell() + 1) # loads all frames
im.save("out.gif", save_all=True, optimize=True, quality=10) # should decrease its quality
print(os.stat("my_gif.gif").st_size) # 2096558 bytes / roughly 2MB
print(os.stat("out.gif").st_size) # 7536404 bytes / roughly 7.5MB
If I add the following line, then only the first frame of the GIF is saved instead of all of its frames.
im = im.resize((im.size[0] // 2, im.size[1] // 2), Image.ANTIALIAS) # should decrease its size
I've been thinking about calling resize() on im.seek() or im.tell(), but neither of these methods returns an Image object, so I cannot call resize() on their output.
Would you know how I can use Pillow to decrease the size of my GIF while keeping all of its frames?
[edit] Partial solution:
Following Old Bear's response, I have made the following changes:
I am using BigglesZX's script to extract all frames. It is useful to note that this is a Python 2 script, while my project is written in Python 3 (I did mention that detail initially, but it was edited out by the Stack Overflow community). Running 2to3 -w gifextract.py makes the script compatible with Python 3.
I have been resizing each frame individually: frame.resize((frame.size[0] // 2, frame.size[1] // 2), Image.ANTIALIAS)
I have been saving all the frames together: img.save("out.gif", save_all=True, optimize=True).
The new GIF is now saved and works, but there are 2 main problems:
I am not sure that the resize method works, as out.gif is still 7.5 MB; the initial GIF was 2 MB.
The GIF speed is increased and the GIF does not loop; it stops after its first run.
Example:
original gif my_gif.gif:
GIF after processing (out.gif): https://i.imgur.com/zDO4cE4.mp4 (I could not add it to Stack Overflow). Imgur made it slower (and converted it to mp4). When I open the GIF file from my computer, the entire GIF lasts about 1.5 seconds.
Using BigglesZX's script, I have created a new script which resizes a GIF using Pillow.
Original GIF (2.1 MB):
Output GIF after resizing (1.7 MB):
I have saved the script here. It uses the thumbnail method of Pillow rather than the resize method, as I found the resize method did not work.
The script is not perfect, so feel free to fork and improve it. Here are a few unresolved issues:
While the GIF displays just fine when hosted on imgur, there is a speed issue when I open it from my computer: the entire GIF only takes 1.5 seconds.
Likewise, while imgur seems to make up for the speed problem, the GIF wouldn't display correctly when I tried to upload it to stack.imgur; only the first frame was displayed (you can see it here).
Full code (should the above gist be deleted):
def resize_gif(path, save_as=None, resize_to=None):
    """
    Resizes the GIF to a given size.

    Args:
        path: the path to the GIF file
        save_as (optional): path of the resized GIF. If not set, the original GIF will be overwritten.
        resize_to (optional): new size of the GIF. Format: (int, int). If not set, the original GIF
            will be resized to half of its size.
    """
    all_frames = extract_and_resize_frames(path, resize_to)
    if not save_as:
        save_as = path

    if len(all_frames) == 1:
        print("Warning: only 1 frame found")
        all_frames[0].save(save_as, optimize=True)
    else:
        all_frames[0].save(save_as, optimize=True, save_all=True,
                           append_images=all_frames[1:], loop=1000)


def analyseImage(path):
    """
    Pre-process pass over the image to determine the mode (full or additive).
    Necessary as assessing single frames isn't reliable. Need to know the mode
    before processing all frames.
    """
    im = Image.open(path)
    results = {
        'size': im.size,
        'mode': 'full',
    }
    try:
        while True:
            if im.tile:
                tile = im.tile[0]
                update_region = tile[1]
                update_region_dimensions = update_region[2:]
                if update_region_dimensions != im.size:
                    results['mode'] = 'partial'
                    break
            im.seek(im.tell() + 1)
    except EOFError:
        pass
    return results


def extract_and_resize_frames(path, resize_to=None):
    """
    Iterates the GIF, extracting and resizing each frame.

    Returns:
        A list of all frames
    """
    mode = analyseImage(path)['mode']

    im = Image.open(path)

    if not resize_to:
        resize_to = (im.size[0] // 2, im.size[1] // 2)

    i = 0
    p = im.getpalette()
    last_frame = im.convert('RGBA')

    all_frames = []
    try:
        while True:
            # print("saving %s (%s) frame %d, %s %s" % (path, mode, i, im.size, im.tile))

            # If the GIF uses local colour tables, each frame will have its own palette.
            # If not, we need to apply the global palette to the new frame.
            if not im.getpalette():
                im.putpalette(p)

            new_frame = Image.new('RGBA', im.size)

            # Is this file a "partial"-mode GIF where frames update a region of a
            # different size to the entire image? If so, we need to construct the new
            # frame by pasting it on top of the preceding frames.
            if mode == 'partial':
                new_frame.paste(last_frame)

            new_frame.paste(im, (0, 0), im.convert('RGBA'))

            new_frame.thumbnail(resize_to, Image.ANTIALIAS)
            all_frames.append(new_frame)

            i += 1
            last_frame = new_frame
            im.seek(im.tell() + 1)
    except EOFError:
        pass

    return all_frames
According to the Pillow 4.0.x documentation, the Image.resize function only works on a single image/frame.
To achieve what you want, I believe you have to first extract every frame from the .gif file, resize each frame one at a time, and then reassemble them into a new GIF.
To do the first step, there are some details that need attention, e.g. whether each GIF frame uses a local palette or a global palette applied over all frames, and whether the GIF replaces each image using a full or partial frame. BigglesZX has developed a script to address these issues while extracting every frame from a GIF file, so leverage that.
Next, you have to write the scripts to resize each of the extracted frame and assemble them all as a new .gif using the PIL.Image.resize() and PIL.Image.save().
I noticed you wrote im.seek(im.tell() + 1) # load all frames. I think this comment is incorrect; rather, that call is used to increment between frames of a .gif file. I also noticed you used quality=10 in your save call for the .gif file; I did not find that option in the PIL documentation. You can learn more about the tile attribute mentioned in BigglesZX's script by reading this link.
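To make the seek point concrete: seek only advances the file's frame pointer one step, so "loads all frames" doesn't hold. A small sketch (with a throwaway GIF generated on the fly, since no real file is assumed) showing how to actually visit every frame with ImageSequence.Iterator:

```python
from PIL import Image, ImageSequence

def count_frames(path):
    """Visit every frame with ImageSequence.Iterator; a single
    im.seek(im.tell() + 1) would only advance by one frame."""
    with Image.open(path) as im:
        return sum(1 for _ in ImageSequence.Iterator(im))

# Build a throwaway 3-frame grayscale GIF to demonstrate
frames = [Image.new('L', (8, 8), v) for v in (0, 100, 200)]
frames[0].save('demo.gif', save_all=True, append_images=frames[1:])
print(count_frames('demo.gif'))  # 3
```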
I am using the function below to resize and crop images, including animated ones (GIF, WEBP). Simply put, we need to iterate over each frame in the GIF or WEBP.
from math import floor, fabs

from PIL import Image, ImageSequence


def transform_image(original_img, crop_w, crop_h):
    """
    Resizes and crops the image to the specified crop_w and crop_h if necessary.
    Works with multi-frame GIF and WEBP images as well.

    args:
        original_img is the image instance created by Pillow ( Image.open(filepath) )
        crop_w is the width in pixels for the image that will be resized and cropped
        crop_h is the height in pixels for the image that will be resized and cropped

    returns:
        Instance of an Image, or a list of frames which are each individually an Image instance
    """
    img_w, img_h = (original_img.size[0], original_img.size[1])
    n_frames = getattr(original_img, 'n_frames', 1)

    def transform_frame(frame):
        """
        Resizes and crops the individual frame in the image.
        """
        # resize the image to the specified height if crop_w is null in the recipe
        if crop_w is None:
            if crop_h == img_h:
                return frame
            new_w = floor(img_w * crop_h / img_h)
            new_h = crop_h
            return frame.resize((new_w, new_h))

        # return the original image if crop size is equal to img size
        if crop_w == img_w and crop_h == img_h:
            return frame

        # first resize to keep the most visible area of the image, then crop
        w_diff = fabs(crop_w - img_w)
        h_diff = fabs(crop_h - img_h)
        enlarge_image = crop_w > img_w or crop_h > img_h
        shrink_image = crop_w < img_w or crop_h < img_h

        if enlarge_image:
            new_w = floor(crop_h * img_w / img_h) if h_diff > w_diff else crop_w
            new_h = floor(crop_w * img_h / img_w) if h_diff < w_diff else crop_h

        if shrink_image:
            new_w = crop_w if h_diff > w_diff else floor(crop_h * img_w / img_h)
            new_h = crop_h if h_diff < w_diff else floor(crop_w * img_h / img_w)

        left = (new_w - crop_w) // 2
        right = left + crop_w
        top = (new_h - crop_h) // 2
        bottom = top + crop_h
        return frame.resize((new_w, new_h)).crop((left, top, right, bottom))

    # single-frame image
    if n_frames == 1:
        return transform_frame(original_img)
    # multi-frame image
    else:
        frames = []
        for frame in ImageSequence.Iterator(original_img):
            frames.append(transform_frame(frame))
        return frames
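The resize-then-centre-crop step at the heart of the function above can be condensed into a short standalone sketch (same idea, simplified scale computation; not the author's exact branching):

```python
from PIL import Image

def cover_crop(frame, crop_w, crop_h):
    """Scale the frame just enough to cover the crop_w x crop_h box
    while keeping aspect ratio, then crop the centre region."""
    img_w, img_h = frame.size
    scale = max(crop_w / img_w, crop_h / img_h)
    new_w, new_h = round(img_w * scale), round(img_h * scale)
    frame = frame.resize((new_w, new_h))
    left = (new_w - crop_w) // 2
    top = (new_h - crop_h) // 2
    return frame.crop((left, top, left + crop_w, top + crop_h))

# e.g. a 100x50 frame cropped to a 40x40 square
print(cover_crop(Image.new('RGB', (100, 50)), 40, 40).size)  # (40, 40)
```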
I tried to use the script given in the chosen answer, but as Pauline commented, it had some problems, such as the speed issue.
The problem was that the speed wasn't carried over when saving the new GIF. To solve that, you must take the speed from the original GIF and pass it to the new one when saving it.
Here is my script:
from PIL import Image


def scale_gif(path, scale, new_path=None):
    gif = Image.open(path)
    if not new_path:
        new_path = path
    old_gif_information = {
        'loop': bool(gif.info.get('loop', 1)),
        'duration': gif.info.get('duration', 40),
        'background': gif.info.get('background', 223),
        'extension': gif.info.get('extension', (b'NETSCAPE2.0')),
        'transparency': gif.info.get('transparency', 223)
    }
    new_frames = get_new_frames(gif, scale)
    save_new_gif(new_frames, old_gif_information, new_path)


def get_new_frames(gif, scale):
    new_frames = []
    actual_frames = gif.n_frames
    for frame in range(actual_frames):
        gif.seek(frame)
        new_frame = Image.new('RGBA', gif.size)
        new_frame.paste(gif)
        new_frame.thumbnail(scale, Image.ANTIALIAS)
        new_frames.append(new_frame)
    return new_frames


def save_new_gif(new_frames, old_gif_information, new_path):
    new_frames[0].save(new_path,
                       save_all=True,
                       append_images=new_frames[1:],
                       duration=old_gif_information['duration'],
                       loop=old_gif_information['loop'],
                       background=old_gif_information['background'],
                       extension=old_gif_information['extension'],
                       transparency=old_gif_information['transparency'])
Also, I noticed that you must save the new GIF using new_frames[0] instead of creating a new Pillow Image object, to avoid adding a black frame to the GIF.
If you want to see a test of this script using pytest, you can check my GitHub repo.
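The key point of this answer, carrying the source GIF's timing over to the save call, can be isolated into a small helper (a sketch; the 40 ms fallback duration is an assumption, matching Pillow's common default):

```python
from PIL import Image

def save_with_source_timing(src_path, frames, dst_path):
    """Save processed frames while copying the original GIF's frame
    duration and loop count, so playback speed is preserved."""
    src = Image.open(src_path)
    frames[0].save(dst_path, save_all=True, append_images=frames[1:],
                   duration=src.info.get('duration', 40),
                   loop=src.info.get('loop', 0))
```

Calling this right after the frame-processing loop keeps the output GIF at the original speed instead of falling back to the viewer's default delay.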
I wrote a simple script that resizes a GIF while keeping the same speed and background transparency. I think it could be helpful.
"""
# Resize an animated GIF
Inspired from https://gist.github.com/skywodd/8b68bd9c7af048afcedcea3fb1807966
Useful links:
* https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#saving
* https://stackoverflow.com/a/69850807
Example:
```
python resize_gif.py input.gif output.gif 400,300
```
"""
import sys
from PIL import Image
from PIL import ImageSequence
def resize_gif(input_path, output_path, max_size):
input_image = Image.open(input_path)
frames = list(_thumbnail_frames(input_image))
output_image = frames[0]
output_image.save(
output_path,
save_all=True,
append_images=frames[1:],
disposal=input_image.disposal_method,
**input_image.info,
)
def _thumbnail_frames(image):
for frame in ImageSequence.Iterator(image):
new_frame = frame.copy()
new_frame.thumbnail(max_size, Image.Resampling.LANCZOS)
yield new_frame
if __name__ == "__main__":
max_size = [int(px) for px in sys.argv[3].split(",")] # "150,100" -> (150, 100)
resize_gif(sys.argv[1], sys.argv[2], max_size)
I am trying to use a DICOM image and manipulate it using OpenCV in a Python environment. So far I have used the pydicom library to read the DICOM (.dcm) image data, and I'm using the pixel_array attribute to display the picture with OpenCV's imshow method. But the output is just a blank window. Here is the snippet of code I am using at the moment.
import numpy as np
import cv2
import pydicom as dicom

ds = dicom.dcmread('sample.dcm')
cv2.imshow('sample image dicom', ds.pixel_array)
cv2.waitKey()
If I print out the array used here, the output is different from what I would get with a normal NumPy array. I have tried using matplotlib's imshow method as well, and it was able to display the image with some colour distortion. Is there a way to convert the array into a legible format for OpenCV?
Faced a similar issue. Used exposure.equalize_adapthist() (source). The resulting image isn't a hundred percent identical to what you would see using a DICOM viewer, but it's the best I was able to get.
import numpy as np
import cv2
import pydicom as dicom
from skimage import exposure

ds = dicom.dcmread('sample.dcm')
dcm_sample = ds.pixel_array
dcm_sample = exposure.equalize_adapthist(dcm_sample)
cv2.imshow('sample image dicom', dcm_sample)
cv2.waitKey()
I have figured out a way to get the image to show. As Dan mentioned in the comments, the values of the matrix were scaled down, and due to the imshow function the output was too dark for the human eye to differentiate. So, in the end, the only thing I needed to do was multiply the entire mat data by 128. The image shows perfectly now. Multiplying the matrix by 255 overexposes the picture and causes certain features to blow out. Here is the revised code.
import numpy as np
import cv2
import pydicom as dicom

ds = dicom.dcmread('sample.dcm')
dcm_sample = ds.pixel_array * 128
cv2.imshow('sample image dicom', dcm_sample)
cv2.waitKey()
I don't think that is a correct answer. It works for that particular image only because most of its pixel values are in the lower range. Check this: OpenCV: How to visualize a depth image. It is for C++ but easily adapted to Python.
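A more general alternative to a fixed *128 factor is min-max scaling, which maps whatever value range the image has onto 0-255 (this is what cv2.normalize with NORM_MINMAX does; here is a hedged sketch in plain NumPy so it's easy to verify):

```python
import numpy as np

def to_uint8(arr):
    """Linearly map the array's full value range to 0-255 for display,
    instead of relying on a magic multiplier that fits only one image."""
    arr = arr.astype(np.float64)
    lo, hi = arr.min(), arr.max()
    if hi == lo:  # flat image: nothing to scale
        return np.zeros(arr.shape, np.uint8)
    return ((arr - lo) / (hi - lo) * 255).astype(np.uint8)

# e.g. 12-bit-style data squeezed into the displayable range
print(to_uint8(np.array([[0, 1024], [2048, 4095]])))
```

This is display scaling only; it ignores the DICOM window/level settings, which the next answer handles.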
This is the best way (in my opinion) to open an image in OpenCV as a NumPy array while preserving the image quality:
import numpy as np
import pydicom, os, cv2


def dicom_to_numpy(ds):
    DCM_Img = ds

    rows = DCM_Img.get(0x00280010).value  # get number of rows from tag (0028, 0010)
    cols = DCM_Img.get(0x00280011).value  # get number of cols from tag (0028, 0011)
    Instance_Number = int(DCM_Img.get(0x00200013).value)  # get actual slice instance number from tag (0020, 0013)

    Window_Center = int(DCM_Img.get(0x00281050).value)  # get window center from tag (0028, 1050)
    Window_Width = int(DCM_Img.get(0x00281051).value)  # get window width from tag (0028, 1051)

    Window_Max = int(Window_Center + Window_Width / 2)
    Window_Min = int(Window_Center - Window_Width / 2)

    if DCM_Img.get(0x00281052) is None:
        Rescale_Intercept = 0
    else:
        Rescale_Intercept = int(DCM_Img.get(0x00281052).value)

    if DCM_Img.get(0x00281053) is None:
        Rescale_Slope = 1
    else:
        Rescale_Slope = int(DCM_Img.get(0x00281053).value)

    New_Img = np.zeros((rows, cols), np.uint8)
    Pixels = DCM_Img.pixel_array

    for i in range(0, rows):
        for j in range(0, cols):
            Pix_Val = Pixels[i][j]
            Rescale_Pix_Val = Pix_Val * Rescale_Slope + Rescale_Intercept

            if Rescale_Pix_Val > Window_Max:  # if intensity is greater than max window
                New_Img[i][j] = 255
            elif Rescale_Pix_Val < Window_Min:  # if intensity is less than min window
                New_Img[i][j] = 0
            else:
                # normalize the intensities into 0..255
                New_Img[i][j] = int(((Rescale_Pix_Val - Window_Min) / (Window_Max - Window_Min)) * 255)

    return New_Img


file_path = "C:/example.dcm"
image = pydicom.read_file(file_path)
image = dicom_to_numpy(image)

# show image
cv2.imshow('sample image dicom', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
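The per-pixel loop above can be vectorized with NumPy, which is dramatically faster on full-size slices. A sketch of the same windowing math (the function and parameter names here are mine, not from the answer):

```python
import numpy as np

def apply_window(pixels, center, width, slope=1, intercept=0):
    """Vectorized DICOM windowing: apply the rescale slope/intercept, clip
    to the window, and map Window_Min..Window_Max onto 0..255 - the same
    result as the nested loop, without per-pixel Python overhead."""
    lo = center - width / 2
    hi = center + width / 2
    img = pixels.astype(np.float64) * slope + intercept
    img = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
    return (img * 255).astype(np.uint8)

# e.g. center=100, width=200 maps raw values 0..200 onto 0..255
print(apply_window(np.array([[0, 50], [100, 250]]), 100, 200))
```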