I have two GIF files, and I want to combine them horizontally so that they are shown beside each other and play together.
They have the same number of frames.
I tried a lot of solutions online, but didn't find anything that supports GIF. I think the imageio package supports GIF, but I can't find a way to use it to combine two files together.
Simply, I want something like this example
Any ideas on how to do this?
I would code something like this:
import imageio
import numpy as np

# Create reader objects for the gifs
gif1 = imageio.get_reader('file1.gif')
gif2 = imageio.get_reader('file2.gif')

# If they don't have the same number of frames, take the shorter
number_of_frames = min(gif1.get_length(), gif2.get_length())

# Create writer object
new_gif = imageio.get_writer('output.gif')

for frame_number in range(number_of_frames):
    img1 = gif1.get_next_data()
    img2 = gif2.get_next_data()
    # here is the magic
    new_image = np.hstack((img1, img2))
    new_gif.append_data(new_image)

gif1.close()
gif2.close()
new_gif.close()
So the magic trick is to use numpy's hstack function, which stacks the frames horizontally. This only works if the two GIFs have the same height.
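If the two GIFs do differ in height, one option (not part of the answer above; pad_to_same_height is a hypothetical helper) is to pad the shorter frame with black rows before stacking:

```python
import numpy as np

# Hypothetical helper: pad two frames to a common height so np.hstack
# works even when the source GIFs differ vertically.
def pad_to_same_height(img1, img2):
    h = max(img1.shape[0], img2.shape[0])
    def pad(img):
        extra = h - img.shape[0]
        # add black rows at the bottom; leave width and channels untouched
        return np.pad(img, ((0, extra), (0, 0), (0, 0)), mode="constant")
    return pad(img1), pad(img2)

# tiny synthetic frames of different heights
a = np.zeros((4, 3, 3), dtype=np.uint8)
b = np.ones((6, 2, 3), dtype=np.uint8)
pa, pb = pad_to_same_height(a, b)
combined = np.hstack((pa, pb))
print(combined.shape)  # (6, 5, 3)
```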
Related
I have two images (assume I know their file paths and can access them). How would I "add" them together so that the function returns an image with them next to each other, so basically image1 + image2 = image1image2, left to right?
Assuming you aren't restricted to TensorFlow, and that the images are the same size, as Richard said, you could use numpy's concatenate function (images are treated just like normal matrices):
import numpy as np
stackedImg = np.concatenate((img1, img2), axis=1)
# axis=1 for horizontal stacking, axis=0 for vertical
And if you want to test it out with open cv
import cv2
cv2.imwrite('output.png', stackedImg)
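To see what the axis argument does, here is a minimal sketch with two tiny synthetic "images" (just zero and full-white arrays standing in for real pictures):

```python
import numpy as np

# two small same-size "images": 2x2 pixels, 3 channels
img1 = np.zeros((2, 2, 3), dtype=np.uint8)
img2 = np.full((2, 2, 3), 255, dtype=np.uint8)

horizontal = np.concatenate((img1, img2), axis=1)  # side by side
vertical = np.concatenate((img1, img2), axis=0)    # stacked top to bottom

print(horizontal.shape)  # (2, 4, 3) - width doubled
print(vertical.shape)    # (4, 2, 3) - height doubled
```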
I want to read 100 colour images and use them for further processing. Suppose one image is 256x256; reading it in Python with OpenCV gives a size of (256, 256, 3). I now want to read 100 images and, after reading, get an array of size (100, 256, 256, 3).
You could do something like this, supposing that your images are named like 0.png to 99.png:
import cv2
import numpy as np

result = np.empty((100, 256, 256, 3))
for i in range(100):
    result[i, :, :, :] = cv2.imread('{}.png'.format(i), 1)
import glob
import cv2
import numpy as np

## your image names
fnames = sorted(glob.glob("images/*.png"))
## read and stack
img = np.stack([cv2.imread(fname) for fname in fnames])
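As a quick sanity check of the stacking behaviour (with synthetic zero arrays standing in for the real files): np.stack adds a new leading axis, so 100 images of shape (256, 256, 3) become a single array of shape (100, 256, 256, 3).

```python
import numpy as np

# simulate 100 decoded images with zero arrays
frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(100)]

# np.stack joins them along a brand-new first axis
batch = np.stack(frames)
print(batch.shape)  # (100, 256, 256, 3)
```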
I have created a program in python using pyscreenshot which periodically takes a screenshot of a specific area of screen which will contain one of several pre-defined images. I am looking to load each of these images from file into a list and compare them with the screenshot to see which is currently displayed. Initially the files were created by screenshotting the images as they were on screen:
import time
from PIL import ImageGrab

i = 0
while True:
    filenm = str(i) + ".png"
    im = ImageGrab.grab(bbox=(680, 640, 735, 690))  # across, down
    im.save(filenm)
    time.sleep(1)
    i = i + 1
Then when I attempt to compare them it always reports false:
image2 = Image.open("04.png")
im = ImageGrab.grab(bbox=(680, 640, 735, 690))  # across, down
if im == image2:
    print "TRUE"
else:
    print "FALSE"
However comparing two of the images saved to files works:
image = Image.open("03.png")
image2 = Image.open("04.png")
if image == image2:
    print "TRUE"
else:
    print "FALSE"
So my question is how do the images differ once loaded from file and how can I compare the 'live' screenshot with an image loaded from file?
It looks like when I use ImageGrab.grab(), a PIL.Image.Image object is created, whereas Image.open() creates a PIL.PngImagePlugin.PngImageFile object. You don't want to call == on the objects themselves, since there are no special comparison semantics implemented across these two object types; it just checks whether they are the same object in memory. Code I would use to compare the two images properly (using numpy) would look something like this:
import numpy as np
from PIL import Image

def image_compare(image_1, image_2):
    arr1 = np.array(image_1)
    arr2 = np.array(image_2)
    if arr1.shape != arr2.shape:
        return False
    maxdiff = np.max(np.abs(arr1 - arr2))
    return maxdiff == 0

def image_compare_file(filename_1, filename_2):
    im1 = Image.open(filename_1)
    im2 = Image.open(filename_2)
    return image_compare(im1, im2)
Here I take advantage of PIL images auto-casting to numpy ndarrays with np.array(). I then check that the dimensions match, and compute the max of the absolute error if they do. If this max is zero, the images are identical. Now you could just call
if image_compare_file('file1.png', 'file2.png'):
    pass  # images in file are identical
else:
    pass  # images differ
or
if image_compare(image1, image2):
    pass  # images are identical
else:
    pass  # images differ
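One caveat worth adding (my own note, not from the answer above): subtracting uint8 arrays wraps around, so np.abs(arr1 - arr2) can misreport the size of a difference. For the pure equality check above this doesn't matter, since the difference is zero exactly when the pixels match, but for any thresholded "almost equal" comparison, cast to a signed type first:

```python
import numpy as np

a = np.array([10], dtype=np.uint8)
b = np.array([20], dtype=np.uint8)

# uint8 arithmetic wraps: 10 - 20 becomes 246, and abs() can't undo that
wrapped = np.abs(a - b)

# casting to a signed type first gives the true difference of 10
safe = np.abs(a.astype(np.int16) - b.astype(np.int16))

print(wrapped[0], safe[0])  # 246 10
```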
You might be interested in using a perceptual diff tool, which will let you quickly identify differences in the screenshots. imgdiff is a library that wraps a tool for this in Python. A simple version can probably be implemented with PIL's ImageChops, as in this answer:
from PIL import Image, ImageChops

im1 = Image.open("splash.png")
im2 = Image.open("splash2.png")

diff = ImageChops.difference(im2, im1)
For more on perceptual diffing, check out Bret Slatkin's talk about using it for safe continuous deployment.
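A handy shortcut with ImageChops.difference: calling getbbox() on the diff returns None when the images are identical, because the difference image is entirely black. A small sketch with synthetic images:

```python
from PIL import Image, ImageChops

# two synthetic images: one all-red, one with a single changed pixel
im1 = Image.new("RGB", (10, 10), (255, 0, 0))
im2 = im1.copy()
im2.putpixel((5, 5), (0, 255, 0))

diff = ImageChops.difference(im1, im2)
print(diff.getbbox())  # bounding box around the changed pixel: (5, 5, 6, 6)

# identical images -> difference is all black -> getbbox() is None
print(ImageChops.difference(im1, im1).getbbox())  # None
```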
I create an image and fill the pixels:
from PIL import Image

img = Image.new('RGB', (2000, 2000), "black")  # create a new black image
pixels = img.load()  # create the pixel map

for i in range(img.size[0]):  # for every pixel:
    for j in range(img.size[1]):
        # do some stuff that requires i and j as parameters
Can this be done more elegant (and may be faster, since theoretically the loops are parallelizable)?
Note: I will first answer the question, then propose what is, in my opinion, a better alternative.
Answering the question
It is hard to give advice without knowing what changes you intend to apply and whether the loading of the image as a PIL image is part of the question or a given.
More elegant in Python-speak typically means using list comprehensions
For parallelization, you would look at something like the multiprocessing module or joblib
Depending on your method of creating / loading in images, the list_of_pixels = list(img.getdata()) and img.putdata(new_list_of_pixels) functions may be of interest to you.
An example of what this might look like:
from PIL import Image
from multiprocessing import Pool

img = Image.new('RGB', (2000, 2000), "black")

# a function that fixes the green component of a pixel to the value 50
def update_pixel(p):
    return (p[0], 50, p[2])

list_of_pixels = list(img.getdata())
pool = Pool(4)
new_list_of_pixels = pool.map(update_pixel, list_of_pixels)
pool.close()
pool.join()
img.putdata(new_list_of_pixels)
However, I don't think that is a good idea... When you see loops (and list comprehensions) over thousands of elements in Python and you have performance on your mind, you can be sure there is a library that will make this faster.
Better Alternative
First, a quick pointer to PIL's Channel Operations module.
Since you don't specify the kind of pixel operation you intend to do and you clearly already know about the PIL library, I'll assume you're aware of it and it doesn't do what you want.
Then, any moderately complex matrix manipulation in Python will benefit from pulling in Pandas, Numpy or Scipy...
Pure numpy example:
import numpy as np
import matplotlib.pyplot as plt
#black image
img = np.zeros([100,100,3],dtype=np.uint8)
#show
plt.imshow(img)
#make it green
img[:,:, 1] = 50
#show
plt.imshow(img)
Since you are just working with a standard numpy.ndarray, you can use any of the available functionalities, such as np.vectorize, apply, map etc. To show a similar solution as above with the update_pixel function:
import numpy as np
import matplotlib.pyplot as plt
#black image
img = np.zeros([100,100,3],dtype=np.uint8)
#show
plt.imshow(img)
#make it green
def update_pixel(p):
    return (p[0], 50, p[2])
green_img = np.apply_along_axis(update_pixel, 2, img)
#show
plt.imshow(green_img)
One more example, this time calculating the image content directly from the indexes, instead of from existing image pixel content (no need to create an empty image first):
import numpy as np
import matplotlib.pyplot as plt
def calc_pixel(x, y):
    return np.array([100-x, x+y, 100-y])
img = np.frompyfunc(calc_pixel, 2, 1).outer(np.arange(100), np.arange(100))
plt.imshow(np.array(img.tolist()))
#note: I don't know any other way to convert a 2D array of arrays to a 3D array...
And, lo and behold, scipy has methods to read and write images, and in between you can just use numpy to manipulate them as "classic" multi-dimensional arrays. (scipy.misc.imread depends on PIL, by the way.)
More example code.
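A note from the editor: scipy.misc.imread has since been deprecated and removed; the imageio package mentioned at the top of this page covers the same read/write role, and the result is a plain numpy array. A small round-trip sketch (the file name is just an illustration):

```python
import os
import tempfile
import numpy as np
import imageio

# build a small synthetic green image as a plain ndarray
img = np.zeros((50, 50, 3), dtype=np.uint8)
img[:, :, 1] = 200

# write it out and read it back; PNG is lossless, so pixels survive intact
path = os.path.join(tempfile.gettempdir(), "demo.png")
imageio.imwrite(path, img)
loaded = imageio.imread(path)
print(loaded.shape)  # (50, 50, 3)
```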
I am trying to flip a picture on its vertical axis; I am doing this in Python, using the media module.
Like this:
I try to find the relationship between the original and the flipped. Since I can't use negative coordinates in Python, I decided to use the middle of the picture as the reference.
So I split the picture in half, and this is what I am going to do:
I create a new blank picture and copy each (x, y) pixel to the corresponding (-x, y) if the original pixel is after the middle;
if it's before the middle, I copy the pixel (-x, y) to (x, y).
So I coded it in Python, and this is the result.
Original:
I got this:
import media

pic = media.load_picture(media.choose_file())
height = media.get_height(pic)
width = media.get_width(pic)
new_pic = media.create_picture(width, height)

for pixel in pic:
    x_org = media.get_x(pixel)
    y_org = media.get_y(pixel)
    colour = media.get_color(pixel)
    new_pixel_0 = media.get_pixel(new_pic, x_org + mid_width, y_org)  # replace with suggested answer below
    media.set_color(new_pixel_0, colour)

media.show(new_pic)
This is not what I wanted, and I am confused. I tried to find the relationship between the original pixel location and its transform (x, y) -> (-x, y), but I think that's wrong. If anyone could help me with this method I would be grateful.
At the end of the day I want a picture like this:
http://www.misterteacher.com/alphabetgeometry/transformations.html#Flip
Why not just use Python Imaging Library? Flipping an image horizontally is a one-liner, and much faster to boot.
from PIL import Image
img = Image.open("AFLAC.jpg").transpose(Image.FLIP_LEFT_RIGHT)
Your arithmetic is incorrect. Try this instead...
new_pixel_0 = media.get_pixel(new_pic, (width - 1) - x_org, y_org)
There is no need to treat the two halves of the image separately.
This is essentially negating the x-coordinate, as your first diagram illustrates, then sliding (translating) the flipped image by width - 1 pixels to the right to put it back in the range 0 to width - 1.
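The index arithmetic can be sanity-checked with numpy: copying every column x of a small test array to column (width - 1) - x reproduces exactly what np.fliplr does.

```python
import numpy as np

img = np.arange(12).reshape(3, 4)  # a tiny 3x4 "image"
width = img.shape[1]

# negate x, then translate by width - 1 to stay in range [0, width)
flipped = np.empty_like(img)
for x in range(width):
    flipped[:, (width - 1) - x] = img[:, x]

print(np.array_equal(flipped, np.fliplr(img)))  # True
```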
Here is a simple function to flip an image using scipy and numpy:
import numpy as np
from scipy.misc import imread
import matplotlib.pyplot as plt

def flip_image(file_name):
    img = imread(file_name)
    flipped_img = np.ndarray(img.shape, dtype='uint8')
    flipped_img[:, :, 0] = np.fliplr(img[:, :, 0])
    flipped_img[:, :, 1] = np.fliplr(img[:, :, 1])
    flipped_img[:, :, 2] = np.fliplr(img[:, :, 2])
    plt.imshow(flipped_img)
    return flipped_img
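As a side note (my addition, not part of the answer above): np.fliplr flips along the second axis, so it can be applied to the full 3-channel array at once; the per-channel loop gives an identical result.

```python
import numpy as np

# random 4x5 RGB test image
img = np.random.randint(0, 256, size=(4, 5, 3), dtype=np.uint8)

# channel-by-channel flip, as in the answer above
per_channel = np.empty_like(img)
for c in range(3):
    per_channel[:, :, c] = np.fliplr(img[:, :, c])

# single-call flip of the whole array matches exactly
print(np.array_equal(per_channel, np.fliplr(img)))  # True
```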