I've been using ImageMagick for a while to create simple animated gifs that demonstrate how GAN-generated faces (from thispersondoesnotexist dot com) all have a "family resemblance".
The animated gif starts by showing an initial image, progressively merges it with the second image & then progressively demerges it until the second image is shown.
I've used a crude bash script that works fine but is slow, and as I code a lot in Python, I wanted to try to do the same in PIL.
I don't know much about image processing & I'm not a professional programmer.
The bash script is like this:
#!/bin/bash
# $1, $2 are input files, $3 is a string
#
for i in {00..100..05}
do composite $1 $2 -blend $i $3_$i.png
done
convert $3_*png -set delay 26 animated.gif
This creates an animated gif like this
My first attempt was using PIL.Image.blend() method:
from PIL import Image
one = Image.open("somepath/some_jpg1.jpg")
two = Image.open("somepath/some_jpg2.jpg")
img_list = [Image.blend(one, two, i/100) for i in range(0, 105, 5)]
img_list[0].save('test_animation.gif', save_all=True, append_images=img_list[1:], duration=250)
This works after a fashion but the images are quite degraded (if it were film I'd call it "reticulation")
I've looked at the PIL docs for other methods such as PIL.Image.composite() and PIL.Image.paste() in case there's other ways of doing this, but I can't understand how to create & deploy transparency masks to achieve what I want.
I don't understand how the images appear to be being degraded or how to stop this happening.
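For reference, here is my understanding of how the same cross-fade would look with composite() and a solid greyscale mask - a sketch I haven't actually verified against my images, and I suspect it would show the same degradation:
from PIL import Image

one = Image.open("somepath/some_jpg1.jpg").convert("RGB")
two = Image.open("somepath/some_jpg2.jpg").convert("RGB")

frames = []
for pct in range(0, 105, 5):
    # a solid greyscale mask: 0 keeps "one", 255 keeps "two"
    mask = Image.new("L", one.size, int(255 * pct / 100))
    frames.append(Image.composite(two, one, mask))

frames[0].save("test_animation.gif", save_all=True,
               append_images=frames[1:], duration=250)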
It looks like the PIL palette optimisation and dither code is not very fancy. GIF is a really terrible image format and getting nice results takes a lot of work, unfortunately.
I realise you asked for PIL solutions, but pyvips has a high-quality GIF writer -- it uses libimagequant (the quantisation and dither library from pngquant) and the results are even better than imagemagick.
I tried:
#!/usr/bin/python3
import sys
import pyvips
if len(sys.argv) < 4:
    print(f"usage: {sys.argv[0]} OUTPUT-FILENAME IMAGE1 IMAGE2 ...")
    sys.exit(1)
# load the images at 200 pixels across
# add a copy of the first face to the end, so we loop
faces = [pyvips.Image.thumbnail(filename, 200).copy_memory()
         for filename in sys.argv[2:]]
faces.append(faces[0])
# fade between two images
def fade(a, b):
    # factor is a percentage, so 20 steps
    frames = [a * (factor / 100) + b * (1 - factor / 100)
              for factor in range(100, 0, -5)]
    return pyvips.Image.arrayjoin(frames, across=1)
# fade between face1 and face2, then between face2 and face3, etc
fades = []
for a, b in zip(faces, faces[1:]):
    fades.append(fade(a, b))
# join all the fades into a single image and save it
pyvips.Image.arrayjoin(fades, across=1) \
    .write_to_file(sys.argv[1], page_height=faces[0].height)
If I run like this, using three faces from thispersondoesnotexist:
$ ./fade.py x.gif ~/pics/face-*
It finishes in about 3 seconds and I get:
Related
I am trying to create a GIF file. The file is created successfully, but it is pixelated, so I would appreciate any help on how to increase the resolution.
Here is the code:
import PIL
from PIL import Image
import numpy as np

image_frames = []
days = np.arange(0, 12)

for i in days:
    new_frame = PIL.Image.open(
        r"C:\Users\Harsh Kotecha\PycharmProjects\pythonProject1\totalprecipplot" + "//" + str(i) + ".jpg"
    )
    image_frames.append(new_frame)

image_frames[0].save(
    "precipitation.gif",
    format="GIF",
    append_images=image_frames[1:],
    save_all=True,
    duration=800,
    loop=0,
    quality=100,
)
Here is the GIF file:
Here are the original images:
image1
image2
image3
Updated Answer
Now that you have provided some images I had a go at disabling the dithering:
#!/usr/bin/env python3
from PIL import Image
# User editable values
method = Image.FASTOCTREE
colors = 250
# Load images precip-01.jpg through precip-12.jpg, quantize to common palette
imgs = []
for i in range(1, 13):
    filename = f'precip-{i:02d}.jpg'
    print(f'Loading: {filename}')
    try:
        im = Image.open(filename)
        pImage = im.quantize(colors=colors, method=method, dither=0)
        imgs.append(pImage)
    except Exception:
        print(f'ERROR: Unable to open {filename}')

imgs[0].save(
    "precipitation.gif",
    format="GIF",
    append_images=imgs[1:],
    save_all=True,
    duration=800,
    loop=0
)
Original Answer
Your original images are JPEGs, which means they likely have many thousands of colours². When you make an animated GIF (or even a static GIF), each frame can only have 256 colours in its palette.
This can create several problems:
- each frame gets a new, distinct palette stored with it, thereby increasing the size of the GIF (each palette is 0.75kB)
- colours get dithered in an attempt to make the image look as close as possible to the original colours
- different colours can get chosen for frames that are nearly identical, which means colours flicker between distinct shades on successive frames - this can cause "twinkling", like stars
If you want to learn about GIFs, you can learn 3,872 times as much as I will ever know by reading Anthony Thyssen's excellent notes here, here and here.
Your image is suffering from the first problem because it has 12 "per frame" local colour tables as well as a global colour table³. It is also suffering from the second problem - dithering.
To avoid the dithering, you probably want to do some of the following:
- load all images and append them all together into a 12x1 monster image, and find the best palette for all the colours. As all your images are very similar, I think you'll get away with generating a palette just from the first image without needing to montage all 12 - that'll be quicker
- now palettise each image, with dithering disabled, using the single common palette
- save your animated sequence of the palettised images, pushing in the single common palette from the first step above - see the sketch after this list
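A minimal sketch of that recipe, assuming the same precip-NN.jpg filenames as above (untested against your actual images):
from PIL import Image

filenames = [f'precip-{i:02d}.jpg' for i in range(1, 13)]
images = [Image.open(f).convert("RGB") for f in filenames]

# derive one palette from the first frame only (assumes all frames are similar)
common = images[0].quantize(colors=250, method=Image.FASTOCTREE)

# remap every frame onto that single palette, with dithering disabled
frames = [im.quantize(palette=common, dither=0) for im in images]

frames[0].save(
    "precipitation.gif",
    format="GIF",
    append_images=frames[1:],
    save_all=True,
    duration=800,
    loop=0,
)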
²: You can count the number of colours in an image with ImageMagick, using:
magick YOURIMAGE -format %k info:
³: You can see the colour tables in a GIF with gifsicle, using:
gifsicle -I YOURIMAGE.GIF
I'm using imageio in Python to read in jpg images and write them as a gif, using something resembling the code below.
import imageio
with imageio.get_writer('mygif.gif', mode='I') as writer:
    for filename in framefiles:  # iterate over names of jpg files I want to turn into gif frames
        frame = imageio.imread(filename)
        writer.append_data(frame)
I'm noticing that the image quality in the gifs I produce is quite poor; I suspect this is due to some form of compression. Is there a way to tell imageio not to use any compression? Or maybe a way to do this with opencv instead?
The real problem is that GIF can display only 256 colours (8-bit colour), so it has to reduce 24-bit (RGB) colours to 256 colours, or it has to emulate more colours by placing dots of different colours next to each other - dithering.
As for options:
Digging in the source code, I found that it accepts two parameters, quantizer and palettesize, which can control image/animation quality. (There is also subrectangles to reduce file size.)
But there are two plugins for GIF which use different modules, Pillow or FreeImage, and they need different values for quantizer:
PIL needs an integer 0, 1 or 2.
FI needs the string 'wu' or 'nq' (but later it converts it to the integer 0 or 1).
They also keep these values in different ways, so if you want to get the current value, or change it after get_writer(), then you also need different code.
You can select the module with format='GIF-PIL' or format='GIF-FI'.
with imageio.get_writer('mygif.gif', format='GIF-PIL', mode='I',
                        quantizer=2, palettesize=32) as writer:
    print(writer)
    #print(dir(writer))
    #print(writer._writer)
    #print(dir(writer._writer))
    print('quantizer:', writer._writer.opt_quantizer)
    print('palette_size:', writer._writer.opt_palette_size)
    #writer._writer.opt_quantizer = 1
    #writer._writer.opt_palette_size = 256
    #print('quantizer:', writer._writer.opt_quantizer)
    #print('palette_size:', writer._writer.opt_palette_size)

with imageio.get_writer('mygif.gif', format='GIF-FI', mode='I',
                        quantizer='nq', palettesize=32) as writer:
    print(writer)
    #print(dir(writer))
    print('quantizer:', writer._quantizer)
    print('palette_size:', writer._palettesize)
    #writer._quantizer = 1
    #writer._palettesize = 256
    #print('quantizer:', writer._quantizer)
    #print('palette_size:', writer._palettesize)
I tried to create animations with different settings but they don't look much better.
I get a better result using the external program ImageMagick in a console/terminal:
convert image*.jpg mygif.gif
but it still wasn't as good as a video or static images.
You can run it from Python:
os.system("convert image*.jpg mygif.gif")
subprocess.run("convert image*.jpg mygif.gif", shell=True)
Or you can try to do it with the module Wand, which is a wrapper around ImageMagick - see the sketch below.
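An untested sketch of the Wand version (the input filenames are placeholders; delay is in ticks of 1/100 second, like ImageMagick's -delay):
from wand.image import Image

# hypothetical input frames - replace with your own filenames
filenames = ['image1.jpg', 'image2.jpg', 'image3.jpg']

with Image() as animation:
    for filename in filenames:
        with Image(filename=filename) as frame:
            animation.sequence.append(frame)
    for i in range(len(animation.sequence)):
        with animation.sequence[i] as frame:
            frame.delay = 80  # 80/100 s per frame
    animation.format = 'gif'
    animation.save(filename='mygif.gif')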
Source code: GifWriter in pillowmulti.py and in freeimagemulti.py
* wu - Wu, Xiaolin, Efficient Statistical Computations for Optimal Color Quantization
* nq (neuquant) - Dekker A. H., Kohonen neural networks for optimal color quantization
Doc: GIF-PIL Static and animated gif (Pillow), GIF-FI Static and animated gif (FreeImage)
There is a 4 GB .TIF image that needs to be processed. Due to a memory constraint I can't load the whole image into a numpy array, so I need to load it lazily, in parts, from the hard disk.
Basically the image needs to be read and processed in chunks, and it has to be done in Python as a project requirement. I also tried looking at the tifffile library on PyPI, but I found nothing useful. Please help.
pyvips can do this. For example:
import sys
import numpy as np
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
for y in range(0, image.height, 100):
    area_height = min(image.height - y, 100)
    area = image.crop(0, y, image.width, area_height)
    array = np.ndarray(buffer=area.write_to_memory(),
                       dtype=np.uint8,
                       shape=[area.height, area.width, area.bands])
The access option to new_from_file turns on sequential mode: pyvips will only load pixels from the file on demand, with the restriction that you must read pixels out top to bottom.
The loop runs down the image in blocks of 100 scanlines. You can tune this, of course.
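As an illustration of where per-block work could go, here is the same loop with a purely hypothetical processing step (a running mean of pixel values, not something from the question) dropped into the body:
import sys
import numpy as np
import pyvips

image = pyvips.Image.new_from_file(sys.argv[1], access="sequential")

total = 0.0
count = 0
for y in range(0, image.height, 100):
    area_height = min(image.height - y, 100)
    area = image.crop(0, y, image.width, area_height)
    array = np.ndarray(buffer=area.write_to_memory(),
                       dtype=np.uint8,
                       shape=[area.height, area.width, area.bands])
    # hypothetical per-block step: accumulate a global mean pixel value
    total += array.sum(dtype=np.float64)
    count += array.size

print("mean pixel value:", total / count)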
I can run it like this:
$ vipsheader eso1242a-pyr.tif
eso1242a-pyr.tif: 108199x81503 uchar, 3 bands, srgb, tiffload_stream
$ /usr/bin/time -f %M:%e ./sections.py ~/pics/eso1242a-pyr.tif
273388:479.50
So on this sad old laptop it took 8 minutes to scan a 108,000 x 82,000 pixel image and needed a peak of 270 MB of memory.
What processing are you doing? You might be able to do the whole thing in pyvips. It's quite a bit quicker than numpy.
import pyvips
img = pyvips.Image.new_from_file("space.tif", access='sequential')
out = img.resize(0.01, kernel="linear")
out.write_to_file("resized_image.jpg")
If you want to convert the file to another format with a smaller size, this code will be enough; it will do it without any memory spike and in very little time.
I am making a steganography program. I got it to work without PIL, but it only worked with bitmaps, so I did some research and learned the basics of PIL. I converted my algorithm to be compatible with PIL and it looks like it works, but when I go to decode it seems to pull numbers out of nowhere. After some debugging I believe there is a pattern of sorts, because it is only a few values off.
To further debug, I've created a similar program that makes the image completely red, then reopens it and reads its pixel values; however, I seem to be encountering the same error. The weird thing is that my computer science teacher, who uses Python 2, isn't encountering this error. I was wondering if any more experienced PIL users know why this is happening, and any fixes. I am using Python 3 on Windows 10.
Here is my code (This is the program I made for debugging):
from PIL import Image
def redify(file_name):  # Function that turns the whole image red
    image = Image.open(file_name)
    image = image.convert("RGB")
    pixels = list(image.getdata())
    fileTypeIndex = 0
    for i in range(0, len(file_name)):
        if file_name[-i] == ".":
            fileTypeIndex = i
            break
    for x in range(0, len(pixels)):
        pixels[x] = (255, 0, 0)
    final = Image.new(image.mode, image.size)
    final.putdata(pixels)
    final.save(file_name[:-fileTypeIndex] + "_redified" + file_name[-fileTypeIndex:])

def readImage(file_name):  # Function that opens an image and reads its data
    image = Image.open(file_name)
    image = image.convert("RGB")
    rgbList = list(image.getdata())
    print(rgbList)  # This returns every pixel as (254,0,0)
                    # When I set each pixel to 0,255,0 it returns (0,255,1)
                    # When I set each pixel to 0,0,255 it returns (0,0,254)
                    # None of these should be occurring
redify("moon.jpg")
readImage("moon_redified.jpg")
I have built code which will stitch approximately 100x100 images. I want to view this stitching process in real time. I am using pyvips to create the large image. I am saving the final image in .DZI format as it has a very small memory footprint to display.
The code below is copied, just for testing purposes, from https://github.com/jcupitt/pyvips/issues/43.
#!/usr/bin/env python
import sys
import pyvips
# overlap joins by this many pixels
H_OVERLAP = 100
V_OVERLAP = 100
# number of images in mosaic
ACROSS = 40
DOWN = 40
if len(sys.argv) < 2 + ACROSS * DOWN:
    print(f'usage: {sys.argv[0]} output-image input1 input2 ..')
    sys.exit(1)
def join_left_right(filenames):
    images = [pyvips.Image.new_from_file(filename) for filename in filenames]
    row = images[0]
    for image in images[1:]:
        row = row.merge(image, 'horizontal', H_OVERLAP - row.width, 0)
    return row
def join_top_bottom(rows):
    image = rows[0]
    for row in rows[1:]:
        image = image.merge(row, 'vertical', 0, V_OVERLAP - image.height)
    return image
rows = []
for y in range(0, DOWN):
    start = 2 + y * ACROSS
    end = start + ACROSS
    rows.append(join_left_right(sys.argv[start:end]))
image = join_top_bottom(rows)
image.write_to_file(sys.argv[1])
To run this code:
$ export VIPS_DISC_THRESHOLD=100
$ export VIPS_PROGRESS=1
$ export VIPS_CONCURRENCY=1
$ mkdir sample
$ for i in {1..1600}; do cp ~/pics/k2.jpg sample/$i.jpg; done
$ time ./mergeup.py x.dz sample/*.jpg
Here cp ~/pics/k2.jpg copies the k2.jpg image 1600 times from the pics folder, so change it according to your image name and location.
I want to display this process in real time. Right now I am only able to display the final mosaiced image after it has been created. Just an idea: I am thinking of making a large image and displaying it, then inserting the smaller images into it. I don't know how this can be done. I am confused because we also have to build the pyramidal structure, so if we create the large image first, we have to replace the images at each level with the new images. Creating a .DZI image is expensive, so I don't want to create it on every loop iteration. Replacing images may be a solution. Any suggestions, folks?
I suppose you have two challenges: how to keep the pyramid up-to-date on the server, and how to keep it up-to-date on the client. The brute force method would be to constantly rebuild the DZI on the server, and periodically flush the tiles on the client (so they reload). For something like that you'll also need to add a cache bust to the tile URLs each time, or the browser will think it should just use its local copy (not realizing it has updated). Of course this brute force method is probably too slow (though it might be interesting to try!).
For a little more finesse, you'd want to make a pyramid that's exactly aligned with the sub images. That way when you change a single sub image, it's obvious which tiles need to be updated. You can do this with DZI if you have square sub images and you use a tile size that is some even fraction of the sub image size. Also no tile overlap. Of course you'll have to build your own DZI constructor, since the existing ones aren't primed to simply replace individual tiles. If you know which tiles you changed on the server, you can communicate that to the client (either via periodic polling or with something like web sockets) and then flush only those tiles (again with the cache busting).
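For the server side of that, a rough pyvips sketch of writing such an aligned pyramid - the sub-image size of 1000 px and the filenames here are assumptions, not values from your setup - with no tile overlap and a tile size that is an even fraction of the sub-image size:
import pyvips

SUB_IMAGE_SIZE = 1000            # assumed size of each square sub image
TILE_SIZE = SUB_IMAGE_SIZE // 4  # an even fraction of the sub-image size

image = pyvips.Image.new_from_file("mosaic.tif")

# overlap=0 keeps tile boundaries aligned with sub-image boundaries,
# so replacing one sub image only touches a predictable set of tiles
image.dzsave("mosaic", tile_size=TILE_SIZE, overlap=0, suffix=".jpg")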
Another solution you could experiment with would be to not attempt a pyramid, per se, but just a flat set of tiles at a reasonable resolution to allow the user to pan around the scene. This would greatly simplify your pyramid updating on the server, since all you would need to do would be replace a single image for each sub image. This could be loaded and shown in a custom (non-OpenSeadragon) fashion on the client, or you could even use OpenSeadragon's multi-image feature to take advantage of its panning and zooming, like here: http://www.letsfathom.com/ (each album cover is its own independent image object).