I am trying to combine 4 images: image 1 on the top left, image 2 on the top right, image 3 on the bottom left, and image 4 on the bottom right. However, my images are different sizes and I am not sure how to resize them to the same size. I am pretty new to Python and this is my first time using PIL.
I have this so far (after opening the images):
img1 = img1.resize(img2.size)
img1 = img1.resize(img3.size)
img1 = img1.resize(img4.size)
This should satisfy your basic requirement.
Steps:
The images are read and stored in a list of arrays using io.imread(img) in a list comprehension.
The images are resized to a custom width and height. You can change IMAGE_WIDTH and IMAGE_HEIGHT as needed, with respect to the input image size.
You just have to pass the locations of n images (n = 4 in your case) to the function.
If you pass more than 2 images (4 in your case), it will create 2 rows of images: the images in the first half of the list are stacked in the top row and the remaining ones in the bottom row, using hconcat().
The two rows are then stacked vertically using vconcat().
Finally, the result is converted to an RGB image using image.convert("RGB") and saved using image.save().
The code:
import cv2
from PIL import Image
from skimage import io

IMAGE_WIDTH = 1920
IMAGE_HEIGHT = 1080

def create_collage(images):
    # Read every image into a NumPy array
    images = [io.imread(img) for img in images]
    # Resize them all to the same width and height
    images = [cv2.resize(image, (IMAGE_WIDTH, IMAGE_HEIGHT)) for image in images]
    if len(images) > 2:
        # First half of the list goes in the top row, the rest in the bottom row
        half = len(images) // 2
        h1 = cv2.hconcat(images[:half])
        h2 = cv2.hconcat(images[half:])
        concat_images = cv2.vconcat([h1, h2])
    else:
        concat_images = cv2.hconcat(images)
    # Convert the result to an RGB image and save it
    image = Image.fromarray(concat_images)
    image_name = "result.jpg"
    image = image.convert("RGB")
    image.save(image_name)
    return image_name

images = ["image1.png", "image2.png", "image3.png", "image4.png"]
# image1 on top left, image2 on top right, image3 on bottom left, image4 on bottom right
create_collage(images)
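Since you mentioned you are using PIL, here is a minimal PIL-only sketch of the same 2x2 layout (the 400x300 tile size is an arbitrary assumption; pick whatever suits your images):
from PIL import Image

paths = ["image1.png", "image2.png", "image3.png", "image4.png"]
tile_w, tile_h = 400, 300  # assumed common tile size

# Resize every image to the same tile size
tiles = [Image.open(p).resize((tile_w, tile_h)) for p in paths]

# Paste into a 2x2 grid: top-left, top-right, bottom-left, bottom-right
collage = Image.new("RGB", (2 * tile_w, 2 * tile_h))
collage.paste(tiles[0], (0, 0))
collage.paste(tiles[1], (tile_w, 0))
collage.paste(tiles[2], (0, tile_h))
collage.paste(tiles[3], (tile_w, tile_h))
collage.save("collage.png")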
To create more advanced collages, you can look into this:
https://codereview.stackexchange.com/questions/275727/python-3-script-to-make-photo-collages
I was trying to combine 3 grayscale images into a single overlapping image, with a different colour for each.
For that, I added each one as a channel of a 3-channel numpy array.
But when plotting with im.show() I don't get a colourful image. Up to the second channel it works, but when I add the third channel it doesn't: the final image has only red and blue colour.
It is supposed to be red, green and blue, corresponding to the overlapping images.
Why would that be?
image1 = Image.open("E:/imaging/04102022_Bronze/Copper_4_2/10.tif")  # opening image 1
image1_norm = (np.array(image1) - np.array(image1).min()) / (np.array(image1).max() - np.array(image1).min())  # normalising image 1
image2 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 2
image2_norm = (np.array(image2) - np.array(image2).min()) / (np.array(image2).max() - np.array(image2).min())  # normalising image 2
image3 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 3
image3_norm = (np.array(image3) - np.array(image3).min()) / (np.array(image3).max() - np.array(image3).min())  # normalising image 3

im = np.array(image2)
new_image = np.zeros(im.shape + (3,))  # creating an empty 3-channel numpy array; its shape is (255, 1024, 3)
new_image[:, :, 0] = image1_norm  # adding the three images into three channels
new_image[:, :, 1] = image2_norm
new_image[:, :, 2] = image3_norm
new_image1 = new_image * 255.999
new_image2 = new_image1.astype(np.uint8)
final_image = Image.fromarray(new_image2, mode='RGB')
A few possible issues...
When you open an image in PIL, if you want to be sure it is single-channel greyscale, and not accidentally 3-channel RGB, or a palette image, force it to greyscale:
im = Image.open('image.png').convert('L')
Try not to repeat complicated calculations or expressions several times - it just makes for a maintenance nightmare. Maybe use a function instead:
def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float) - min) / (max - min)
You can use Numpy's dstack() to merge channels - it means "depth"-stack, as opposed to np.vstack() which stacks images vertically above/below each other and np.hstack() which stacks images side-by-side horizontally. It is a lot simpler than creating an image of the right size and individually pushing each channel into it.
result = np.dstack((im1, im2, im3))
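For intuition, a tiny shape check (a sketch, assuming NumPy is imported as np):
import numpy as np

a = np.zeros((100, 200))             # one single-channel "image"
print(np.dstack((a, a, a)).shape)    # (100, 200, 3) - merged depth-wise into channels
print(np.vstack((a, a)).shape)       # (200, 200)    - stacked vertically
print(np.hstack((a, a)).shape)       # (100, 400)    - stacked side-by-side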
That would make the overall code more like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float) - min) / (max - min)
# Load images as single channel Numpy arrays
im1 = np.array(Image.open('ch1.png').convert('L'))
im2 = np.array(Image.open('ch2.png').convert('L'))
im3 = np.array(Image.open('ch3.png').convert('L'))
# Normalize and scale
n1 = normalize(im1) * 255.999
n2 = normalize(im2) * 255.999
n3 = normalize(im3) * 255.999
# Merge channels to RGB
result = np.dstack((n1,n2,n3))
result = Image.fromarray(result.astype(np.uint8))
result.save('result.png')
That makes these three input images:
into this merged image:
How do I downscale a TIFF image of 10 m resolution and create a new image of 50 m resolution where each pixel is a statistic computed from the first image?
The initial TIFF image is a binary classification map, meaning each (10 m) pixel belongs either to class "water" (value = 0) or class "ice" (value = 1).
I would like to create a new image where each pixel is the percentage of water in a 5 x 5 block of the initial map, meaning each pixel of the new image will have a 50 m resolution and represent the ratio or percentage of "water" pixels in every 5x5 block of the former map. You can see the example here: Example
Here is an image sample (can be downloaded from google drive):
https://drive.google.com/uc?export=download&id=19hWQODERRsvoESiUZuL0GQHg4Mz4RbXj
Your image is saved in a rather odd format, using a 32-bit float to represent just two classes of data which could be represented in a single bit, so I converted it to PNG with ImageMagick using:
magick YOURIMAGE.TIF -alpha off image.png
Many Python libraries will stutter on your actual TIFF so maybe think about using a different way of writing it.
Once that is done, the code might look something like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Set size of tiles for subsampling
tileX = tileY = 5
# Open image and convert to greyscale and thence to Numpy array
im = Image.open('image.png').convert('L')
na = np.array(im)
# Round height and width down to next lower multiple of tile sizes
h = (na.shape[0] // tileY) * tileY
w = (na.shape[1] // tileX) * tileX
# Create empty output array to fill
res = np.empty((h//tileY,w//tileX), np.uint8)
pxPerTile = tileX * tileY
for yoffset in range(0, h, tileY):
    for xoffset in range(0, w, tileX):
        # Count ice pixels in this 5x5 tile
        nonZero = np.count_nonzero(na[yoffset:yoffset+tileY, xoffset:xoffset+tileX])
        percent = int((100.0 * (pxPerTile - nonZero)) / pxPerTile)
        res[yoffset//tileY, xoffset//tileX] = percent
# Make Numpy array back into PIL Image and save
Image.fromarray(res.astype(np.uint8)).save('result.png')
On reflection, you can probably do it faster and more simply with cv2.resize(), a decimation of 0.2 on both axes, and cv2.INTER_AREA interpolation, which averages the pixels inside each block.
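A minimal sketch of that idea (the file names are assumptions; the thresholding simply forces the mask to 0/1 regardless of whether the source stores ice as 1 or 255):
import cv2
import numpy as np

# 1 = ice, 0 = water
labels = (cv2.imread('image.png', cv2.IMREAD_GRAYSCALE) > 0).astype(np.float32)

# INTER_AREA averages each 5x5 block, giving the mean ice fraction per output pixel
ice_fraction = cv2.resize(labels, None, fx=0.2, fy=0.2, interpolation=cv2.INTER_AREA)

# Water percentage is the complement of the ice fraction
water_percent = (100.0 * (1.0 - ice_fraction)).astype(np.uint8)
cv2.imwrite('result.png', water_percent)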
I did a version in pyvips:
#!/usr/bin/python3
import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1])
# label (0 == water, 1 == ice) is in the first band
label = image[0]
# average 5x5 areas
label = label.shrink(5, 5)
# turn into a percentage of water
water_percent = 100 * (1 - label)
# ... and save
water_percent.write_to_file(sys.argv[2])
I can run it on your test image like this:
$ ./average.py ~/pics/meltPondClassifiedAndS12.tif x.png
To make this (rather dark) output:
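The output is dark because the values only span 0-100 inside an 8-bit 0-255 range. If you want it brighter for viewing, you could rescale before saving (an extra step I am assuming you may want, not part of the original answer):
# stretch 0-100 up to roughly 0-255 for display
water_percent = (water_percent * 2.55).cast("uchar")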
I am working on a project where I am using different masks on two different pictures and then would like to combine them into one picture. So far I have the masking (albeit with some errors on the edges) and now I am trying to combine the images.
How can I improve the masking so the result has no errors on the edges (see images)?
How do I effectively combine the images into one, resulting in the third image? I have been trying to use some transparency effects but it hasn't worked. What I am trying to do is merge the two images so they form a complete circle. If any of the original images are needed, please let me know.
from PIL import Image
# load images
img_day = Image.open('Day.jpeg')
img_night = Image.open('Night_mirror.jpg')
night_mask = Image.open('Masks/12.5.jpg')
day_mask = Image.open('Masks/11.5.jpg')
# convert images
#img_org = img_org.convert('RGB') # or 'RGBA'
night_mask = night_mask.convert('L') # grayscale
day_mask = day_mask.convert('L')
# the same size
img_day = img_day.resize((750,750))
img_night = img_night.resize((750,750))
night_mask = night_mask.resize((750,750))
day_mask = day_mask.resize((750,750))
# add alpha channel
img_day.putalpha(day_mask)
img_night.putalpha(night_mask)
img_night = img_night.rotate(-170)
# save as png which keeps alpha channel
img_day.save('image_day.png')
img_night.save('image_night.png')
img_night.show()
img_day.show()
Any help is appreciated
The main problem is the (JPG) artifacts in your masks (the white line at the top, the "smoothed" edges). Why not use ImageDraw.arc to generate the masks on-the-fly? The final step you need is to use Image.composite to merge your two images.
Here's some code (I took your first image as desired output, thus the chosen angles):
from PIL import Image, ImageDraw
# Load images
img_day = Image.open('day.jpg')
img_night = Image.open('night.jpg')
# Resize images
target_size = (750, 750)
img_day = img_day.resize(target_size)
img_night = img_night.resize(target_size)
# Generate proper masks
day_mask = Image.new('L', target_size)
draw = ImageDraw.Draw(day_mask)
draw.arc([10, 10, 740, 740], 120, 270, 255, 150)
night_mask = Image.new('L', target_size)
draw = ImageDraw.Draw(night_mask)
draw.arc([10, 10, 740, 740], 270, 120, 255, 150)
# Put alpha channels
img_day.putalpha(day_mask)
img_night.putalpha(night_mask)
# Compose and save image
img = Image.composite(img_day, img_night, day_mask)
img.save('img.png')
That'd be the output:
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
Pillow: 8.0.1
----------------------------------------
To your points:
Your problem with the masking simply originates from the fact that your masks are not perfect. Open them in Paint and you will see that on the top side there is a white line remaining. Just use the fill tool to fill that white part with black. Afterwards it should work.
I suggest mirroring your image horizontally instead of rotating it. You can use PIL.ImageOps.mirror for that. Then you paste one image onto the other using img.paste(). As the second argument, you give the coordinates where the image should be pasted; and, very importantly, as the third argument, you specify a transparency mask. Since your image already has an alpha channel, you can just use the same image as the mask: PIL will automatically use its alpha channel for masking. Note that I had to adjust the position of pasting by 4 pixels to overlap the images correctly.
from PIL import Image, ImageOps
# load images
img_day = Image.open('day.jpg')
img_night = Image.open('night.jpg')
night_mask = Image.open('night_mask.jpg')
day_mask = Image.open('day_mask.jpg')
# convert images
#img_org = img_org.convert('RGB') # or 'RGBA'
night_mask = night_mask.convert('L') # grayscale
day_mask = day_mask.convert('L')
# the same size
img_day = img_day.resize((750,750))
img_night = img_night.resize((750,750))
night_mask = night_mask.resize((750,750))
day_mask = day_mask.resize((750,750))
# add alpha channel
img_day.putalpha(day_mask)
img_night.putalpha(night_mask)
img_night = ImageOps.mirror(img_night)
img_night.paste(img_day, (-4, 0), img_day)
img_night.save('composite.png')
Result:
I have run into an issue with a stitching program I made. The way I am slicing the image means it only works if the first image is to the left of and above the one it is being stitched to.
def stitchMatches(self, image1, image2, homography):
    # gather x and y axes of the images that will be stitched
    height1, width1 = image1.shape[0], image1.shape[1]
    height2, width2 = image2.shape[0], image2.shape[1]
    # create a blank image that will be large enough to hold the stitched image
    blank_image = np.zeros(((width1 + width2), (height1 + height2), 3), np.uint8)
    # stitch image two into the resulting image while using blank_image
    # to create a large enough frame for the images
    result = cv2.warpPerspective(image1, homography, blank_image.shape[0:2])
    # numpy notation for slicing a matrix together
    # allows you to see the image
    result[0:image2.shape[0], 0:image2.shape[1]] = image2
This code runs when the left-most image is represented by image1.
When I reverse the order of the images, however, I only get one image back, as the final line in my code, "result[0... = image2", is unable to slice an image into an orientation where the first of the two images being stitched is not in the upper-left corner.
Here is a full example with homography.
This is the homography between the two images and their result:
This is the correct result with image1 on the left.
This is the incorrect result with image1 on the right.
I know the issue is with the final slicing line; I am just at a loss to get it to work. Any help is appreciated.
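A common fix for this (a sketch of the general technique, not code from the question) is to warp the corners of image1 first, translate the homography so that no output coordinate is negative, and size the canvas from the bounding box of both images:
import cv2
import numpy as np

def stitch_any_orientation(image1, image2, homography):
    h1, w1 = image1.shape[:2]
    h2, w2 = image2.shape[:2]
    # Where do image1's corners land after warping?
    corners1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners1, homography)
    corners2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
    all_corners = np.concatenate((warped, corners2), axis=0)
    # Bounding box of both images in the output plane
    x_min, y_min = np.int32(np.floor(all_corners.min(axis=0).ravel()))
    x_max, y_max = np.int32(np.ceil(all_corners.max(axis=0).ravel()))
    # Shift everything so all coordinates are positive
    shift = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)
    size = (int(x_max - x_min), int(y_max - y_min))
    result = cv2.warpPerspective(image1, shift @ homography, size)
    # Paste image2 at its shifted position
    result[-y_min:h2 - y_min, -x_min:w2 - x_min] = image2
    return result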
I have a folder full of images, each containing at least 4 smaller images. I would like to know how I can cut the smaller images out using Python PIL so that they all exist as independent image files. Fortunately there is one constant: the background is either white or black, so what I'm guessing I need is a way to cut these images out by searching for rows, or preferably columns, which are entirely black or entirely white. Here is an example image:
From the image above, there would be 10 separate images, each containing a number. Thanks in advance.
EDIT: I have another sample image that is more realistic, in the sense that the backgrounds of some of the smaller images are the same colour as the background of the image they are contained in, e.g.:
The output of which would be 13 separate images, each containing 1 letter.
Using scipy.ndimage for labeling:
import numpy as np
import scipy.ndimage as ndi
from PIL import Image

THRESHOLD = 100
MIN_SHAPE = np.asarray((5, 5))

filename = "eQ9ts.jpg"
im = np.asarray(Image.open(filename))

# Threshold to a binary mask and label the connected components
gray = im.sum(axis=-1)
bw = gray > THRESHOLD
label, n = ndi.label(bw)

# Bounding slices for each labelled component (labels run from 1 to n)
indices = [np.where(label == ind) for ind in range(1, n + 1)]
slices = [tuple(slice(ind[i].min(), ind[i].max() + 1) for i in (0, 1)) + (slice(None),)
          for ind in indices]
images = [im[s] for s in slices]

# filter out small images
images = [im for im in images if not np.any(np.asarray(im.shape[:-1]) < MIN_SHAPE)]
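To write each surviving crop to its own file, a short usage sketch (the piece_*.png names are just an assumption):
# Save every extracted sub-image as an independent file
for i, piece in enumerate(images):
    Image.fromarray(piece).save(f"piece_{i}.png")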