I want to generate 32x32 sized thumbnails from uploaded images (actually avatars).
To prevent a thumbnail from being smaller than that size, I want to create a transparent 32x32 background and paste the thumbnail on it.
The code below tries to do so. However, the avatar is displayed on a black, opaque background; I lose the transparency information somewhere in the process. Where am I going wrong?
def handle_image(img):
    size = SMALL_AVATAR_IMAGE_SIZE
    img.thumbnail(size, Image.ANTIALIAS)
    img = img.convert('RGBA')
    background = Image.new('RGBA', size, (255, 255, 255, 0))
    background.paste(img, (0, (size[1] - img.size[1]) / 2), img)
    img = background
    processed_image_small = ContentFile(img.tostring('jpeg', img.mode))
    targetpath = str(self.user.id) + '_S' + '.jpg'
    self.img_small.save(targetpath, processed_image_small, save=False)
That is because JPEG cannot store the transparency information contained in an RGBA image. You may want to save the avatar in a format like PNG, which can keep this information.
You're generating a JPG image. JPEGs don't support background transparency. You need to generate a PNG image to preserve transparency.
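As a minimal sketch of the fix (using Pillow directly and an in-memory buffer instead of the Django `ContentFile` wrapper from the question; the red square stands in for the uploaded avatar), saving the composited RGBA image as PNG keeps the alpha channel:

```python
from io import BytesIO
from PIL import Image

size = (32, 32)  # SMALL_AVATAR_IMAGE_SIZE in the question

# Stand-in for the uploaded avatar: a 20x20 opaque red square
img = Image.new('RGBA', (20, 20), (255, 0, 0, 255))
img.thumbnail(size, Image.LANCZOS)  # Image.ANTIALIAS in older Pillow
img = img.convert('RGBA')

# Transparent 32x32 background, avatar centered vertically
background = Image.new('RGBA', size, (255, 255, 255, 0))
background.paste(img, (0, (size[1] - img.size[1]) // 2), img)

# Save as PNG: unlike JPEG, it keeps the alpha channel
buf = BytesIO()
background.save(buf, format='PNG')

reloaded = Image.open(BytesIO(buf.getvalue()))
print(reloaded.mode)  # RGBA
```

Round-tripping through the buffer and checking `reloaded.mode` confirms the transparency survived the save.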
I have the following code:
def draw():
    img = Image.open(image)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("Poppins-Thin.ttf", 20)
    watermark_text = "some text"
    draw.text((0, img.height - 25), watermark_text, (211, 211, 211), font=font)
    return img
It works with some images, but others throw a ValueError: cannot allocate more than 256 colors error. Manipulating a failing image works fine without the draw step (I am resizing and compressing it).
How can I make it work for all supported images?
Working image example.
Failing image example.
This image uses indexed colors: it keeps the RGB colors in a palette, and every pixel stores only an index into that palette, which can hold at most 256 entries.
This makes the file smaller, so it uses less RAM and less disk space, and a web page can load it faster - users don't give up on visiting the page (and the server sends fewer bytes, so it costs less).
You have to convert it to RGB (or RGBA if you want to keep the transparent background):
img = img.convert('RGBA')
Alternatively, you can use a palette index (e.g. 211) instead of an (R, G, B) tuple in
draw.text((0, img.height - 25), watermark_text, 211)
but then it is hard to predict what color that index will produce in the image.
You can detect the type of palette (mode) with
print(img.mode)
and the Pillow documentation on image modes explains the possible values.
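Putting the conversion together, a minimal self-contained sketch (I build a palette-mode image in code rather than opening the failing example file, and use the default font instead of Poppins-Thin.ttf):

```python
from PIL import Image, ImageDraw

# Build a palette ('P' mode) image, the kind that triggers the error
palette_img = Image.new('RGB', (200, 60), (30, 120, 200)).convert('P')
print(palette_img.mode)  # P

# Convert to RGBA first; then drawing with an RGB(A) tuple works
img = palette_img.convert('RGBA')
draw = ImageDraw.Draw(img)
draw.text((0, img.height - 25), "some text", (211, 211, 211))

print(img.mode)  # RGBA
```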
I have some images opened from a POST request in Django. When I export an image to PNG, the file is exported correctly and the transparency is preserved. When I export to WebP format, the transparent layer becomes white. I think the problem is in the first line of the code. The last two lines work just fine when I use them in another project.
This is the relevant part of my code:
import cv2
import numpy as np

img = cv2.imdecode(np.frombuffer(files[x].read(), np.uint8), cv2.IMREAD_UNCHANGED)
...
resized = cv2.resize(img, dimension, interpolation=cv2.INTER_AREA)
cv2.imwrite('img.webp', resized, [cv2.IMWRITE_WEBP_QUALITY, 70])
Update:
I checked the exported WebP image's shape and it has 4 channels. When I open it in the browser the background is white, but when I check it in VSCode it is transparent. Splitting the image I got r = 255, g = 255, b = 255, alpha = 0 for the transparent pixels.
You need to convert the image img to four channels so that it contains an alpha channel.
I have two images: an image with a text and an image as the dirty background.
Clean Image
Dirty Background Image
How will I overlay the clean image to the dirty background image using Python? Please assume that the clean image has the smaller size compared to the dirty background image.
There's a library called Pillow (which is a fork of PIL) that can do this for you. You can play around with the placements a little, but I think it looks good.
from PIL import Image

# Open your two images
cleantxt = Image.open('cleantext.jpg')
dirtybackground = Image.open('dirtybackground.jpg')
# Convert the image to RGBA
cleantxt = cleantxt.convert('RGBA')
# Return a sequence object of every pixel in the text
data = cleantxt.getdata()
new_data = []
# Turn every pixel that looks lighter than gray into a transparent pixel
# This turns everything except the text transparent
for item in data:
    if item[0] >= 123 and item[1] >= 123 and item[2] >= 123:
        new_data.append((255, 255, 255, 0))
    else:
        new_data.append(item)
# Replace the old pixel data of the clean text with the transparent pixel data
cleantxt.putdata(new_data)
# Resize the clean text to fit on the dirty background (which is 850 x 555 pixels)
cleantxt.thumbnail((555, 555), Image.LANCZOS)  # Image.ANTIALIAS in older Pillow
# Save the clean text if we want to use it for later
cleantxt.save("cleartext.png", "PNG")
# Overlay the clean text on top of the dirty background
## (0, 0) is the pixel where you place the top left pixel of the clean text
## The second cleantxt is used as a mask
## If you pass in an RGBA image, its alpha channel is used as a mask
dirtybackground.paste(cleantxt, (0, 0), cleantxt)
# Show it!
dirtybackground.show()
# Save it!
dirtybackground.save("dirtytext.png", "PNG")
Here's the output image:
I am attempting to take a screenshot of my desktop across multiple monitors using pywin32.
I have the screenshot I need on my third monitor but I only need a certain region of the image.
I am able to crop it when I save the bitmapped image to my hard drive like so:
# initial set up code hooking up pywin32 to the desktop
import win32gui, win32ui, win32con, win32api
from PIL import Image
hwin = win32gui.GetDesktopWindow()
# example dimensions
width = 800
height = 600
left = 3300
top = 100
hwindc = win32gui.GetWindowDC(hwin)
srcdc = win32ui.CreateDCFromHandle(hwindc)
memdc = srcdc.CreateCompatibleDC()
bmp = win32ui.CreateBitmap()
bmp.CreateCompatibleBitmap(srcdc, width, height)
memdc.SelectObject(bmp)
# saving of the file (what I am currently doing)
bmp.SaveBitmapFile(memdc, 'fullshot.bmp')
# strangely enough this crops the portion I need,
# but within an image that's the entire length of my desktop
# (not sure how to fix that, you could say this is part of the problem)
memdc.BitBlt((0, 0), (width, height), srcdc, (left, top), win32con.SRCCOPY)
img = Image.open('fullshot.bmp')
img = img.crop((0,0,800,600))
# now the cropped image is in memory but I want just the portion I need without saving it to disk
The bmp is of type 'PyCBitmap'. I've tried np.array(bmp) but this doesn't work either. Is there a way I can take the bmp screenshotted by pywin32 and crop it to the dimensions I need in program memory, without saving it to disk?
update:
I tried the following code which does not work either. When I try to display it with cv2.imshow('image', img) I get an unresponsive window.
signedIntsArray = bmp.GetBitmapBits(True)
img = np.frombuffer(signedIntsArray, dtype='uint8')
img.shape = (height,width,4)
srcdc.DeleteDC()
memdc.DeleteDC()
win32gui.ReleaseDC(hwin, hwindc)
win32gui.DeleteObject(bmp.GetHandle())
cv2.imshow('image', img)
The problem I was having wasn't with the bit of code below, which was actually working:
signedIntsArray = bmp.GetBitmapBits(True)
img = np.frombuffer(signedIntsArray, dtype='uint8')
img.shape = (h,w,4)
When I sent it to cv2.imshow as cv2.imshow('image', img), I realized I needed to set a waitKey:
cv2.imshow('image', np.array(screen_grab()))
cv2.waitKey(0)
cv2.destroyAllWindows()
This gave me what I was looking for. Hopefully this helps somebody in the future.
I'm trying to make a composite image from a JPEG photo (1600x900) and a PNG logo with alpha channel (400x62).
Here is a command that does the job with image magick:
composite -geometry +25+25 watermark.png original_photo.jpg watermarked_photo.jpg
Now I'd like to do something similar in a python script, without invoking this shell command externally, with PIL.
Here is what I tried:
photo = Image.open('original_photo.jpg')
watermark = Image.open('watermark.png')
photo.paste(watermark, (25, 25))
The problem here is that the alpha channel is completely ignored, and the result is as if my watermark were black and white rather than rgba(0, 0, 0, 0) and rgba(255, 255, 255, 128).
Indeed, PIL docs state : "See alpha_composite() if you want to combine images with respect to their alpha channels."
So I looked at alpha_composite(). Unfortunately, this function requires both images to be of the same size and mode.
Eventually, I read Image.paste() more carefully and found this out:
If a mask is given, this method updates only the regions indicated by the mask. You can use either “1”, “L” or “RGBA” images (in the latter case, the alpha band is used as mask). Where the mask is 255, the given image is copied as is. Where the mask is 0, the current value is preserved. Intermediate values will mix the two images together, including their alpha channels if they have them.
So I tried the following:
photo = Image.open('original_photo.jpg')
watermark = Image.open('watermark.png')
photo.paste(watermark, (25, 25), watermark)
And... it worked!
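For completeness, the alpha_composite() route mentioned above also works despite the same-size requirement, if you first place the watermark on a fully transparent canvas the size of the photo. A minimal sketch, with synthetic stand-ins for the two files from the question:

```python
from PIL import Image

# Synthetic stand-ins for original_photo.jpg (1600x900) and watermark.png (400x62)
photo = Image.new('RGB', (1600, 900), (90, 140, 60))
watermark = Image.new('RGBA', (400, 62), (255, 255, 255, 128))

# alpha_composite() needs matching sizes and modes, so put the watermark
# on a fully transparent canvas the size of the photo, at offset (25, 25)
canvas = Image.new('RGBA', photo.size, (0, 0, 0, 0))
canvas.paste(watermark, (25, 25))

result = Image.alpha_composite(photo.convert('RGBA'), canvas)
print(result.size)  # (1600, 900)
```

The paste-with-mask approach from the answer is simpler for a single watermark; alpha_composite() is handy when you want true alpha blending of full layers.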