I'm trying to create a set of thumbnails, each one separately downscaled from the original image.
image = Image.open(path)
image = image.crop((left, upper, right, lower))
for size in sizes:
    temp = copy.copy(image)
    temp.thumbnail((size, height), Image.ANTIALIAS)
    temp.save('%s%s%s.%s' % (path, name, size, format), quality=95)
The above code seemed to work fine, but while testing I discovered that some images (I can't tell what's special about them; maybe it only happens with PNGs) raise this error:
/usr/local/lib/python2.6/site-packages/PIL/PngImagePlugin.py in read(self=<PIL.PngImagePlugin.PngStream instance>)
line: s = self.fp.read(8)
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'read'
Without the copy() these images work just fine.
I could just open and crop the image anew for every thumbnail, but I'd rather have a better solution.
copy.copy() does not work reliably for the PIL Image class: the shallow copy shares the original's lazy-loading state (the not-yet-read file pointer), which fits the NoneType error above. Use Image.copy() instead, since it is there for a reason:
image = Image.open(path)
image = image.crop((left, upper, right, lower))
for size in sizes:
    temp = image.copy()  # <-- Instead of copy.copy(image)
    temp.thumbnail((size, height), Image.ANTIALIAS)
    temp.save('%s%s%s.%s' % (path, name, size, format), quality=95)
Related
Can anyone see where I'm going wrong? I've been through it line by line, and it all produces the expected results up until new_face, which ends up being None.
import numpy, cv2
from PIL import Image
face_cascade = cv2.CascadeClassifier("..data/haarcascade_frontalface_default.xml") #absolute path cut down for privacy
def find_faces(image_for_faces):
    image = image_for_faces.resize((1800,3150))
    image_np = numpy.asarray(image)
    image_gray = cv2.cvtColor(image_np, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(image_gray) #correct numpy array, prints correct coordinates per face
    face_list = []
    image.show()
    for face in faces:
        print(face)
        box = (face[0], face[1], face[0]+face[2], face[1]+face[3]) #correctly produces numpy coordinates
        copy_image = image.copy()
        cropped = copy_image.crop(box = (box))
        new_face = cropped.thumbnail((128,128))
        face_list.append(new_face)
    return face_list
y = Image.open('famphoto.jpg')
z = y.convert('RGB')
x = find_faces(z)
Image.thumbnail() modifies the Image object in place; read more about it in the docs. It returns None, which is why new_face ends up being None.
Image.thumbnail:
Make this image into a thumbnail. This method modifies the image to
contain a thumbnail version of itself, no larger than the given size.
This method calculates an appropriate thumbnail size to preserve the
aspect of the image, calls the draft() method to configure the file
reader (where applicable), and finally resizes the image.
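A minimal sketch of the fixed loop body, reusing the question's names: call thumbnail() for its in-place effect and append the cropped image itself rather than the None return value.
for face in faces:
    box = (face[0], face[1], face[0] + face[2], face[1] + face[3])
    cropped = image.crop(box)       # crop() already returns a new image
    cropped.thumbnail((128, 128))   # resizes cropped in place, returns None
    face_list.append(cropped)       # keep the image, not thumbnail()'s return value
Since crop() returns a new Image, the explicit image.copy() in the original loop is not needed either.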
I am working on a solution to crop GIFs. I got it working, but the size of the image increases a lot: it was a 500 KB animated GIF, but after cropping it's an 8 MB animated GIF.
I suspect that's because I transform it to RGB, then merge frame with previous one if GIF has a partial mode.
here is an example of how I do that:
img = Image.open(file_path)
last_frame = img.convert('RGBA')
p = img.getpalette()
# this method analyzes image and determines if it's in partial mode.
mode = analyseImage(img)['mode']
all_frames = []
frames = ImageSequence.Iterator(img)
for frame in frames:
    if not frame.getpalette():
        frame.putpalette(p)
    new_frame = Image.new('RGBA', img.size)
    if mode == 'partial':
        new_frame.paste(last_frame)
    new_frame.paste(frame, (0, 0), frame.convert('RGBA'))
    last_frame = new_frame.copy()
    new_frame.thumbnail(size, Image.ANTIALIAS)
    all_frames.append(new_frame)
return all_frames
and then I store it as new image using method:
new_image_bytes = BytesIO()
om = all_frames[0]
om.info = img.info
om.save(new_image_bytes, format='gif', optimize=True, save_all=True, append_images=all_frames[1:], duration=img.info.get('duration'), loop=img.info.get('loop'))
and this image is 8MB instead of 500KB
Do I miss anything obvious ?
Basically, what you do here is discard all compression optimizations that the original GIF might've had.
Let's say there is a 2-frame GIF, the second frame of which only changes pixels outside the crop window. After being cropped it could've become empty (and thus small in size), but your process makes it a full copy of the first frame, ultimately leading to the bloat you describe.
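A rough sketch along those lines (it assumes a single constant frame duration in img.info and won't restore every optimization the original encoder applied): drop frames that became identical to their predecessor after cropping and fold their duration into the previous frame.
from PIL import ImageChops

base_duration = img.info.get('duration', 100)    # assumed constant per-frame duration
deduped = [all_frames[0]]
durations = [base_duration]
for frame in all_frames[1:]:
    if ImageChops.difference(frame, deduped[-1]).getbbox() is None:
        durations[-1] += base_duration            # frame adds nothing new after the crop
    else:
        deduped.append(frame)
        durations.append(base_duration)

om = deduped[0]
om.info = img.info
om.save(new_image_bytes, format='gif', optimize=True, save_all=True,
        append_images=deduped[1:], duration=durations,
        loop=img.info.get('loop'))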
I have a folder of JPG images having size 2048x1536. In every image date, time, temperature are given in the top and camera model name is given at the end. I want to crop only upper part and lower part of those image.
Image sample: https://drive.google.com/file/d/1TefkFws5l2RBnI2iH22EI5vkje8JbeBk/view?usp=sharing
With the below code, I am getting the error "tile cannot extend outside image" for any size I provide, for example (500, 500, 500, 500). My target is (1500, 2000, 1500, 2000).
from PIL import Image
import os
#Create an Image Object from an Image
dir=r"C:\\Users\\Desktop\\crop1"
output_dir = r"C:\\Users\\Desktop\\crop2"
file_names = os.listdir(dir)
for file_name in file_names:
    file_path = dir + "\{}".format(file_name)
    im = Image.open(r"{}".format(file_path))
    cropped = im.crop((2000,1500,2000,1500))
    output_file = output_dir + "\{}".format(file_name)
    cropped.save(r"{}".format(output_file))
"(2000,1500,2000,1500)" is an empty box, which could explain why crop fails even if the "tile cannot extend outside image" error isn't exactly fitting. The 4-tuple argument to crop has the meaning "(left, upper, right, lower)". Example from the Image.crop() documentation:
from PIL import Image
im = Image.open("hopper.jpg")
# The crop method from the Image module takes four coordinates as input.
# The right can also be represented as (left+width)
# and lower can be represented as (upper+height).
(left, upper, right, lower) = (20, 20, 100, 100)
# Here the image "im" is cropped and assigned to new variable im_crop
im_crop = im.crop((left, upper, right, lower))
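For the stated goal (trim the date/time/temperature banner at the top and the camera-model banner at the bottom of a 2048x1536 photo), the box needs left < right and upper < lower. A sketch; the 50-pixel strip heights are assumptions to adjust to the actual banners:
im = Image.open(file_path)
width, height = im.size             # 2048 x 1536 for these photos
top_strip = 50                      # assumed height of the date/time/temperature banner
bottom_strip = 50                   # assumed height of the camera-model banner
cropped = im.crop((0, top_strip, width, height - bottom_strip))
cropped.save(output_file)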
I want to generate 32x32 sized thumbnails from uploaded images (actually avatars).
To prevent a thumbnail from being smaller than that size, I want to create a transparent 32x32 background and paste the thumbnail on it.
The code below tries to do so. However, the avatar is displayed on a black, opaque background; I lose the transparency information somewhere in the process. Where am I going wrong?
def handle_image(img):
    size = SMALL_AVATAR_IMAGE_SIZE
    img.thumbnail(size, Image.ANTIALIAS)
    img = img.convert('RGBA')
    background = Image.new('RGBA', size, (255, 255, 255, 0))
    background.paste(img, (0, (size[1] - img.size[1]) / 2), img)
    img = background
    processed_image_small = ContentFile(img.tostring('jpeg', img.mode))
    targetpath = str(self.user.id) + '_S' + '.jpg'
    self.img_small.save(targetpath, processed_image_small, save=False)
That is because JPEG cannot store the transparency information contained in an RGBA image. You may want to save the avatar in a format like PNG, which can keep that information.
You're generating a JPG image. JPEGs don't support transparency; you need to generate a PNG image instead.
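A minimal sketch of the PIL side of the handler with the alpha channel preserved (the helper name make_small_avatar is made up, and SMALL_AVATAR_IMAGE_SIZE and the surrounding model fields are assumed to be as in the question); the key change is writing PNG bytes instead of JPEG bytes:
from io import BytesIO
from PIL import Image

def make_small_avatar(img, size=(32, 32)):
    img = img.convert('RGBA')
    img.thumbnail(size, Image.LANCZOS)
    background = Image.new('RGBA', size, (255, 255, 255, 0))   # fully transparent canvas
    background.paste(img, (0, (size[1] - img.size[1]) // 2), img)
    buf = BytesIO()
    background.save(buf, format='PNG')                         # PNG keeps the alpha channel
    return buf.getvalue()
The returned bytes can then be wrapped in a ContentFile and saved under a .png filename, as in the question's last two lines.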
I have some strange problem with PIL not resizing the image.
from math import floor
from PIL import Image
img = Image.open('foo.jpg')
width, height = img.size
ratio = floor(height / width)
newheight = ratio * 150
img.resize((150, newheight), Image.ANTIALIAS)
img.save('mugshotv2.jpg', format='JPEG')
This code runs without any errors and produces an image named mugshotv2.jpg in the correct folder, but it does not resize it. It does something to it, because the size of the picture drops from 120 kB to 20 kB, but the dimensions remain the same.
Perhaps you can also suggest a way to crop images into squares with less code. I kind of thought Image.thumbnail would do it, but it scaled my image to 150 px by its width, leaving the height at 100 px.
resize() returns a resized copy of an image. It doesn't modify the original. The correct way to use it is:
from PIL import Image
#...
img = img.resize((150, newheight), Image.ANTIALIAS)
source
I think what you are looking for is the ImageOps.fit function. From the PIL docs:
ImageOps.fit(image, size, method, bleed, centering) => image
Returns a sized and cropped version of the image, cropped to the requested aspect ratio and size. The size argument is the requested output size in pixels, given as a (width, height) tuple.
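For the square-thumbnail part of the question, a short sketch using ImageOps.fit with the question's 150 px target (the output filename is made up):
from PIL import Image, ImageOps

img = Image.open('foo.jpg')
# Scale and center-crop to a 150x150 square in one call
square = ImageOps.fit(img, (150, 150), Image.LANCZOS)
square.save('mugshot_square.jpg', format='JPEG')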
[Update]
ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead. Code such as image.resize((100, 100), Image.ANTIALIAS) now triggers this deprecation warning.
Today you should use something like this:
from PIL import Image
img = Image.open(r"C:\test.png")
img.show()
img_resized = img.resize((100, 100), Image.Resampling.LANCZOS)
img_resized.show()