I have a table with image sizes. There are multiple images of different sizes (66x66, 400x400, etc.). I have one original image that always has a size of 600x532, and this image shows a product (a TV, a PC, etc.).
I have to resize this image, which isn't a problem. But if I resize it proportionally I get something like 66x55, and if I ignore the proportions the image doesn't look good.
The background of the original is always white. Is there a way to extend the area of the image and fill the rest with white?
So like this: 600x532 -> 600x600 -> 66x66 etc.
It should be like an anti-crop.
EDIT: I found out that if I use crop() from PIL with a box larger than the actual image size (instead of smaller), it creates the extra area, but that area is black.
Any idea how I could fill this area with white?
EDIT2: I guess it has something to do with ImageDraw.
EDIT3: ImageDraw was indeed the solution, so my problem is solved. Please close this.
Here is my solution:
from PIL import Image, ImageDraw

img1 = Image.open("img.jpg")
# Crop beyond the image bounds to get a 600x600 canvas; the new area is black.
img2 = img1.crop((0, 0, 600, 600))
draw = ImageDraw.Draw(img2)
# Paint the added strip (rows 532-600) white.
draw.rectangle((0, 532, 600, 600), fill="white")
del draw
img2.save("img2.jpg", "JPEG", quality=75)
The next thing I will do is split the extra crop between the top and the bottom, so the picture stays centered.
EDIT4: final solution
from PIL import Image, ImageDraw

img1 = Image.open("img1.jpg")
# Crop 34 pixels beyond the top and bottom edges: 532 + 34 + 34 = 600.
img2 = img1.crop((0, -34, 600, 566))
draw = ImageDraw.Draw(img2)
draw.rectangle((0, 0, 600, 34), fill="white")
draw.rectangle((0, 566, 600, 600), fill="white")
del draw
img2.save("img2.jpg", "JPEG", quality=75)
Supposing we use PIL to process the image:
from PIL import Image

def white_bg_square(img):
    "return a white-background-color image having the img in the exact center"
    size = (max(img.size),) * 2
    layer = Image.new('RGB', size, (255, 255, 255))
    # Integer division so the paste offset is a tuple of ints.
    layer.paste(img, tuple(map(lambda x: (x[0] - x[1]) // 2, zip(size, img.size))))
    return layer
You can resize a PIL Image object, for example img:
img = img.resize((width, height), resample=Image.ANTIALIAS)
Thus in the Python shell it looks like this (note that resize() returns a new image rather than modifying the original in place):
>>> from PIL import Image
>>> img = Image.open('path/to/image')
>>> square_one = white_bg_square(img)
>>> thumb = square_one.resize((100, 100), Image.ANTIALIAS)
>>> thumb.save('path/to/result')
There are nice examples in the PIL documentation and in sorl-thumbnail 3.2.5:
http://effbot.org/imagingbook/image.htm
http://pypi.python.org/pypi/sorl-thumbnail/3.2.5
If we use OpenCV to process the image:
import cv2

def make_square(image_in):
    size = image_in.shape[:2]
    max_dim = max(size)
    delta_w = max_dim - size[1]
    delta_h = max_dim - size[0]
    top, bottom = delta_h // 2, delta_h - (delta_h // 2)
    left, right = delta_w // 2, delta_w - (delta_w // 2)
    color = [255, 255, 255]  # white
    # BORDER_CONSTANT fills the new border with `color`; BORDER_REPLICATE
    # would repeat the edge pixels instead (and ignores `value`).
    image_out = cv2.copyMakeBorder(image_in, top, bottom, left, right,
                                   cv2.BORDER_CONSTANT, value=color)
    return image_out

image_in = cv2.imread(image_path)  # image_path: path to one of your images
image_out = make_square(image_in)
Related
On a blank canvas I want to draw a square, pixel by pixel, using Pillow.
I have tried using img.putpixel((30,60), (155,155,55)) to draw one pixel, but it doesn't seem to do anything.
from PIL import Image

def newImg():
    img = Image.new('RGB', (1280, 768))
    img.save('sqr.png')
    return img

wallpaper = newImg()
wallpaper.show()
Running the code you say you have tried works fine; see below.
To draw the full square, repeat the img.putpixel((30,60), (155,155,55)) call with other coordinates (see the sketch after the result below).
from PIL import Image

def newImg():
    img = Image.new('RGB', (100, 100))
    img.putpixel((30, 60), (155, 155, 55))
    img.save('sqr.png')
    return img

wallpaper = newImg()
wallpaper.show()
sqr.png
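To draw a whole filled square this way, a minimal sketch simply repeats putpixel over a range of coordinates (the square's size, position, and color below are arbitrary choices, not from the question):
from PIL import Image

img = Image.new('RGB', (100, 100))
# Draw a 20x20 filled square with its top-left corner at (30, 60),
# one pixel at a time.
for x in range(30, 50):
    for y in range(60, 80):
        img.putpixel((x, y), (155, 155, 55))
img.save('sqr.png')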
I want to make something like this in Python.
I want the image in the background and to write text with a transparent fill, so that the image shows through the text.
Here's one way I found to do it, using the Image.composite() function, which is documented in the Pillow documentation.
The approach used is described (very) tersely in this answer to the question Is it possible to mask an image in Python Imaging Library (PIL)? by @Mark Ransom. The following is just an illustration of applying it to accomplish what you want to do.
from PIL import Image, ImageDraw, ImageFont

BACKGROUND_IMAGE_FILENAME = 'cookie_cutter_background_cropped.png'
RESULT_IMAGE_FILENAME = 'cookie_cutter_text_result.png'
THE_TEXT = 'LOADED'
FONT_NAME = 'arialbd.ttf'  # Arial Bold

# Read the background image and convert to an RGB image with alpha.
with open(BACKGROUND_IMAGE_FILENAME, 'rb') as file:
    bgr_img = Image.open(file)
    bgr_img = bgr_img.convert('RGBA')  # Give image an alpha channel.

bgr_img_width, bgr_img_height = bgr_img.size
cx, cy = bgr_img_width//2, bgr_img_height//2  # Center of image.

# Create a transparent foreground to be the result in non-text areas.
fgr_img = Image.new('RGBA', bgr_img.size, color=(0, 0, 0, 0))

font_size = bgr_img_width//len(THE_TEXT)
font = ImageFont.truetype(FONT_NAME, font_size)
txt_width, txt_height = font.getsize(THE_TEXT)  # Size of text w/font if rendered.
tx, ty = cx - txt_width//2, cy - txt_height//2  # Top-left corner of the centered text.

mask_img = Image.new('L', bgr_img.size, color=255)
mask_img_draw = ImageDraw.Draw(mask_img)
mask_img_draw.text((tx, ty), THE_TEXT, fill=0, font=font, align='center')

res_img = Image.composite(fgr_img, bgr_img, mask_img)
res_img.save(RESULT_IMAGE_FILENAME)
res_img.show()
Which, using the following BACKGROUND_IMAGE:
produced the image shown below, viewed here in Photoshop so that its transparent background is discernible (not to scale):
Here's an enlargement, showing the smoothly rendered edges of the characters:
avatar.jpg
back.jpg
How can I combine the two images above (avatar.jpg and back.jpg) as follows?
Desired effect:
Here's an example using your images. Dimensions are hardcoded in the example, but you can easily replace them with calculations. avatar.jpg and back.jpg are the images from your post, saved as is.
Here's a link to the GitHub repo for this example: python_pillow_circular_thumbnail
from PIL import Image, ImageOps, ImageDraw
im = Image.open('avatar.jpg')
im = im.resize((120, 120))
bigsize = (im.size[0] * 3, im.size[1] * 3)
mask = Image.new('L', bigsize, 0)
draw = ImageDraw.Draw(mask)
draw.ellipse((0, 0) + bigsize, fill=255)
mask = mask.resize(im.size, Image.ANTIALIAS)
im.putalpha(mask)
output = ImageOps.fit(im, mask.size, centering=(0.5, 0.5))
output.putalpha(mask)
output.save('output.png')
background = Image.open('back.jpg')
background.paste(im, (150, 10), im)
background.save('overlap.png')
output.png:
overlap.png:
The crop part of this code is borrowed from this answer.
Hope it helps!
I'm very much a noob when it comes to image processing :(
I have a bunch of PNG files (300 of them) that have large areas of transparency that I wish to crop away. I want to automate the process, obviously, hence why I tried using Python and PIL.
I have had a look at the following links:
Crop a PNG image to its minimum size, and also using NumPy as suggested by this link, Automatically cropping an image with python/PIL, both to no success :( The output files are identical to the input files: no cropping of the transparency, same size. getbbox() is returning the same width and height.
Here's a link to one of those images; 98x50button
The image is a button icon in the shape of a bell. It's drawn in white, so it's hard to see against the transparent background. The expected outcome would be a 20x17 button (with the transparency inside that 20x17 box remaining intact).
Here's the code I'm using:
#!/usr/bin/env python
import sys
import os
from PIL import Image
import numpy as np

def autocrop_image2(image):
    image.load()
    image_data = np.asarray(image)
    image_data_bw = image_data.max(axis=2)
    non_empty_columns = np.where(image_data_bw.max(axis=0) > 0)[0]
    non_empty_rows = np.where(image_data_bw.max(axis=1) > 0)[0]
    cropBox = (min(non_empty_rows), max(non_empty_rows),
               min(non_empty_columns), max(non_empty_columns))
    image_data_new = image_data[cropBox[0]:cropBox[1] + 1,
                                cropBox[2]:cropBox[3] + 1, :]
    new_image = Image.fromarray(image_data_new)
    return new_image

def autocrop_image(image, border=0):
    # Get the bounding box
    bbox = image.getbbox()
    # Crop the image to the contents of the bounding box
    image = image.crop(bbox)
    # Determine the width and height of the cropped image
    (width, height) = image.size
    # Add border
    width += border * 2
    height += border * 2
    # Create a new image object for the output image
    cropped_image = Image.new("RGBA", (width, height), (0, 0, 0, 0))
    # Paste the cropped image onto the new image
    cropped_image.paste(image, (border, border))
    # Done!
    return cropped_image

walk_dir = sys.argv[1]
print('walk_dir = ' + walk_dir)

# If your current working directory may change during script execution, it's recommended to
# immediately convert program arguments to an absolute path. Then the variable root below will
# be an absolute path as well. Example:
# walk_dir = os.path.abspath(walk_dir)
print('walk_dir (absolute) = ' + os.path.abspath(walk_dir))

for root, subdirs, files in os.walk(walk_dir):
    print('--\nroot = ' + root)
    list_file_path = os.path.join(root, 'my-directory-list.txt')
    print('list_file_path = ' + list_file_path)
    with open(list_file_path, 'wb') as list_file:
        for subdir in subdirs:
            print('\t- subdirectory ' + subdir)
        for filename in files:
            file_path = os.path.join(root, filename)
            print('\t- file %s (full path: %s)' % (filename, file_path))
            filename, file_extension = os.path.splitext(filename)
            if file_extension.lower().endswith('.png'):
                # Open the input image
                image = Image.open(file_path)
                # Do the cropping
                # image = autocrop_image(image, 0)
                new_image = autocrop_image2(image)
                # Save the output image
                output = os.path.join("output", filename + ".png")
                print(output)
                new_image.save(output)
Thank you all for the help :)
The issue you're having is that your images contain transparent white pixels, and your code is only going to crop pixels that are both transparent and black. The RGBA values for most of the pixels in your example image are (255, 255, 255, 0).
In autocrop_image2, you're taking the max of the channel values. You probably just want the alpha channel's value directly, so change:
image_data_bw = image_data.max(axis=2)
To:
image_data_bw = image_data[:,:,3]
The rest of the function should then work as intended.
The autocrop_image function has the same problem. The getbbox method returns the bounds of the non-zero pixels, and transparent white pixels are not zero. To fix it, try converting the image from "RGBA" mode to premultiplied alpha "RGBa" mode before finding the bounding box:
bbox = image.convert("RGBa").getbbox()
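For example, a minimal sketch of the getbbox() route with that conversion applied (the filenames are placeholders for one of your button PNGs):
from PIL import Image

image = Image.open('button.png')  # placeholder filename
# Premultiplying the alpha ("RGBa") zeroes out RGB wherever alpha is 0,
# so getbbox() only considers genuinely visible pixels.
bbox = image.convert('RGBa').getbbox()
cropped = image.crop(bbox)
cropped.save('button_cropped.png')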
Here is one solution to crop the transparent borders.
Just throw this script into the folder with your batch of .png files:
from PIL import Image
import numpy as np
from os import listdir

def crop(image_name):
    pil_image = Image.open(image_name)
    np_array = np.array(pil_image)
    blank_px = [255, 255, 255, 0]
    mask = np_array != blank_px
    coords = np.argwhere(mask)
    x0, y0, z0 = coords.min(axis=0)
    x1, y1, z1 = coords.max(axis=0) + 1
    cropped_box = np_array[x0:x1, y0:y1, z0:z1]
    pil_image = Image.fromarray(cropped_box, 'RGBA')
    print(pil_image.width, pil_image.height)
    # Overwrite the original file with the cropped version.
    pil_image.save(image_name)
    print(image_name)

for f in listdir('.'):
    if f.endswith('.png'):
        crop(f)
Here's a new solution; I just ran into this problem:
You have an RGBA image.
When a pixel's alpha is 0, that pixel should be fully transparent,
but some of your pixels have alpha 0 and non-zero RGB values.
Pillow's getbbox() and other functions then fail.
You want to force the RGB channels to 0 wherever alpha is 0.
So:
Make a pure black RGBA image, each pixel being (0, 0, 0, 0).
Make a composite of your image and the black image, using your image as the mask.
Wherever alpha was 0, the RGB values will now be zero as well.
This works; there is probably a lower-memory solution.
Here is the code:
from PIL import Image

black = Image.new('RGBA', myImage.size)  # every pixel is (0, 0, 0, 0)
myImage = Image.composite(myImage, black, myImage)
myCroppedImage = myImage.crop(myImage.getbbox())
I have a large number of images of a fixed size (say 500x500). I want to write a python script which will resize them to a fixed size (say 800x800) but will keep the original image at the center and fill the excess area with a fixed color (say black).
I am using PIL. I can resize the image using the resize function now, but that changes the aspect ratio. Is there any way to do this?
You can create a new image with the desired new size, paste the old image in the center, and then save it. If you want, you can overwrite the original image (are you sure? ;o)
from PIL import Image

old_im = Image.open('someimage.jpg')
old_size = old_im.size
new_size = (800, 800)
new_im = Image.new("RGB", new_size)  # luckily, this is already black!
# Center the old image on the new canvas.
box = tuple((n - o) // 2 for n, o in zip(new_size, old_size))
new_im.paste(old_im, box)
new_im.show()
# new_im.save('someimage.jpg')
You can also set the color of the new border with a third argument of Image.new() (for example: Image.new("RGB", new_size, "White"))
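For example, a white canvas instead of the default black (a minimal variation of the snippet above; the output filename is just an example):
from PIL import Image

old_im = Image.open('someimage.jpg')
new_size = (800, 800)
new_im = Image.new("RGB", new_size, "white")  # third argument sets the fill color
box = tuple((n - o) // 2 for n, o in zip(new_size, old_im.size))
new_im.paste(old_im, box)
new_im.save('someimage_padded.jpg')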
Yes, there is.
Do something like this:
from PIL import Image, ImageOps
ImageOps.expand(Image.open('original-image.png'), border=300, fill='black').save('imaged-with-border.png')
You can write the same thing over several lines:
from PIL import Image, ImageOps
img = Image.open('original-image.png')
img_with_border = ImageOps.expand(img, border=300, fill='black')
img_with_border.save('imaged-with-border.png')
And since you say you have a list of images, you need a loop to process all of them:
from PIL import Image, ImageOps

list_of_images = ['original-image.png']  # your list of filenames
for i in list_of_images:
    img = Image.open(i)
    img_with_border = ImageOps.expand(img, border=300, fill='black')
    img_with_border.save('bordered-%s' % i)
Alternatively, if you are using OpenCV, they have a function called copyMakeBorder that allows you to add padding to any of the sides of an image. Beyond solid colors, they've also got some cool options for fancy borders like reflecting or extending the image.
import cv2
img = cv2.imread('image.jpg')
color = [101, 52, 152] # 'cause purple!
# border widths; I set them all to 150
top, bottom, left, right = [150]*4
img_with_border = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)
Sources: OpenCV border tutorial and
OpenCV 3.1.0 Docs for copyMakeBorder
PIL's crop method can actually handle this for you by using numbers that are outside the bounding box of the original image, though it's not explicitly stated in the documentation. Negative numbers for left and top will add black pixels to those edges, while numbers greater than the original width and height for right and bottom will add black pixels to those edges.
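For instance, with the 600x532 image from the original question, a symmetric crop beyond the image bounds pads it to 600x600 (a minimal sketch; the filename is a placeholder and the added pixels come out black):
from PIL import Image

im = Image.open('img.jpg')            # the 600x532 original
padded = im.crop((0, -34, 600, 566))  # 34 extra rows above and below -> 600x600
padded.save('img_padded.jpg')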
This code accounts for odd pixel sizes:
from PIL import Image

with Image.open('/path/to/image.gif') as im:
    old_size = im.size
    new_size = (800, 800)
    if new_size > old_size:
        # Set number of pixels to expand to the left, top, right,
        # and bottom, making sure to account for even or odd numbers
        if old_size[0] % 2 == 0:
            add_left = add_right = (new_size[0] - old_size[0]) // 2
        else:
            add_left = (new_size[0] - old_size[0]) // 2
            add_right = ((new_size[0] - old_size[0]) // 2) + 1

        if old_size[1] % 2 == 0:
            add_top = add_bottom = (new_size[1] - old_size[1]) // 2
        else:
            add_top = (new_size[1] - old_size[1]) // 2
            add_bottom = ((new_size[1] - old_size[1]) // 2) + 1

        left = 0 - add_left
        top = 0 - add_top
        right = old_size[0] + add_right
        bottom = old_size[1] + add_bottom

        # By default, the added pixels are black
        im = im.crop((left, top, right, bottom))
(Note that Image.crop itself always takes the full 4-tuple box; the shorthand of passing a 2-tuple for equal left/right and top/bottom padding, or a single integer for all four sides, applies to the border argument of ImageOps.expand used in the other answers. A quick sketch follows.)
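A minimal sketch of those border forms with ImageOps.expand (the filename and pixel counts are arbitrary examples):
from PIL import Image, ImageOps

img = Image.open('input.png')  # placeholder filename
same_all = ImageOps.expand(img, border=50, fill='black')               # 50 px on every side
lr_tb = ImageOps.expand(img, border=(30, 10), fill='black')            # 30 px left/right, 10 px top/bottom
individual = ImageOps.expand(img, border=(5, 10, 15, 20), fill='black')  # left, top, right, bottom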
It is important to consider the old dimensions, the new dimensions, and their difference here. If the difference is odd (not even), you will need slightly different values for the left/top and right/bottom borders.
Assume the old dimensions are ow,oh and the new ones are nw,nh.
So, this would be the answer:
from PIL import Image, ImageOps

img = Image.open('original-image.png')
ow, oh = img.size   # old dimensions
nw, nh = 800, 800   # new (target) dimensions, for example
deltaw = nw - ow
deltah = nh - oh
# Integer division; any leftover pixel goes on the right/bottom border.
ltrb_border = (deltaw//2, deltah//2, deltaw - (deltaw//2), deltah - (deltah//2))
img_with_border = ImageOps.expand(img, border=ltrb_border, fill='black')
img_with_border.save('imaged-with-border.png')
You can load the image with scipy.misc.imread as a NumPy array. Then create an array with the desired background using numpy.zeros((height, width, channels)) (with the dimensions of the output canvas) and paste the image at the desired location:
import numpy as np
import scipy.misc

im = scipy.misc.imread('foo.jpg', mode='RGB')
height, width, channels = im.shape

# make canvas -- use the desired output dimensions here, which must be
# at least as large as the input image
im_bg = np.zeros((height, width, channels), dtype=np.uint8)
im_bg = (im_bg + 1) * 255  # e.g., make it white

# Your work: compute where the image should be placed
# (one centered-placement sketch follows below)
pad_left = ...
pad_top = ...

im_bg[pad_top:pad_top + height,
      pad_left:pad_left + width,
      :] = im
# im_bg is now the image with the background.
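One way to fill in those offsets, assuming you simply want the image centered on a larger canvas (new_height, new_width, the stand-in array, and the white fill are assumptions, not from the answer above):
import numpy as np

new_height, new_width = 800, 800              # assumed target canvas size
im = np.zeros((532, 600, 3), dtype=np.uint8)  # stand-in for the loaded image
height, width, channels = im.shape

im_bg = np.full((new_height, new_width, channels), 255, dtype=np.uint8)  # white canvas
pad_top = (new_height - height) // 2
pad_left = (new_width - width) // 2
im_bg[pad_top:pad_top + height, pad_left:pad_left + width, :] = im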
# Tkinter example: display the resized image on a white Frame.
# qpath, Body, and func_ResizeImage come from the poster's own code
# (func_ResizeImage returns the target width and height).
import tkinter as tk
from PIL import Image, ImageTk

ximg = Image.open(qpath)
xwid, xhgt = func_ResizeImage(ximg)
qpanel_3 = tk.Frame(Body, width=xwid+10, height=xhgt+10, bg='white', bd=5)
ximg = ximg.resize((xwid, xhgt), Image.ANTIALIAS)
ximg = ImageTk.PhotoImage(ximg)
panel = tk.Label(qpanel_3, image=ximg)
panel.image = ximg
panel.grid(row=2)
from PIL import Image
from PIL import ImageOps
img = Image.open("dem.jpg").convert("RGB")
This part will add black borders at the sides (10% of the width):
img_side = ImageOps.expand(img, border=(int(0.1*img.size[0]),0,int(0.1*img.size[0]),0), fill=(0,0,0))
img_side.save("sunset-sides.jpg")
This part will add black borders at the top and bottom (10% of the height):
img_updown = ImageOps.expand(img, border=(0,int(0.1*img.size[1]),0,int(0.1*img.size[1])), fill=(0,0,0))
img_updown.save("sunset-top_bottom.jpg")
This part will add black borders on all four sides (10% of the height and width):
img_updown_side = ImageOps.expand(img, border=(int(0.1*img.size[0]),int(0.1*img.size[1]),int(0.1*img.size[0]),int(0.1*img.size[1])), fill=(0,0,0))
img_updown_side.save("sunset-all_sides.jpg")
img.close()
img_side.close()
img_updown.close()
img_updown_side.close()