Green screen in MoviePy - Python

I'm trying to automate the implementation of some animations in my videos using MoviePy. These animations are played on a green background, so I've been searching for green-screen-related features in MoviePy. There aren't many results on this topic, though. However, I did find some code that should work. The code I'm using is the following:
final_clip = concatenate_videoclips(self.clips, method="compose")
subclip = mpe.vfx.mask_color(subclip, color=[100, 227, 4])
subclip = subclip.resize(height=int(final_clip.h / 2))
subclip = subclip.set_position((final_clip.w / 2 - subclip.w / 2, final_clip.h / 2))
comp = CompositeVideoClip([image_clip, final_clip, subclip.set_start(final_clip.duration / 2)])
So, final_clip is every clip stitched together, subclip is the animation on a green screen that I want to display on top somewhere in the middle of the video, and comp is the end result. comp combines final_clip and subclip and places them on an ImageClip, which serves as the background. This all runs and gives no errors, but the green in subclip is not removed; the full frame is still visible. And I know the RGB color [100, 227, 4] should be correct, as I used multiple pieces of software to verify exactly which green it is. So I have subclip, which I want to be half the width and half the height of the video, placed in the center but a bit down. The positioning and sizing of the clip are done correctly; it's really just the green that is not being removed here.
I don't get what's wrong here; the samples I found online use this exact same technique, so if anyone could help me out here, that would be much appreciated!
Also, if anyone knows any hints for speeding up render times in MoviePy, that would be a great help as well!
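One hedged guess about the keying: in MoviePy 1.x, vfx.mask_color also takes thr (threshold) and s (stiffness) parameters, and thr defaults to 0, which masks only pixels that exactly equal the key color. Compressed video almost never contains the exact RGB value, which would leave the full frame visible, as described above. A minimal sketch, where the thr and s values are guesses to tune:
import moviepy.editor as mpe
subclip = mpe.VideoFileClip("animation.mp4")  # hypothetical input path
# thr > 0 keys out every pixel within that distance of the key color;
# s controls how sharply the mask falls off around that threshold
subclip = mpe.vfx.mask_color(subclip, color=[100, 227, 4], thr=100, s=5)
On the speed question, write_videofile accepts threads and preset arguments (for example threads=4, preset='ultrafast'), which can shorten encode times at some cost in output size.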

Related

Merging images in Python PIL to produce animated gifs

I've been using ImageMagick for a while to create simple animated gifs that demonstrate how GAN-generated faces (from thispersondoesnotexist dot com) all have a "family resemblance".
The animated gif starts by showing an initial image, progressively merges it with the second image & then progressively demerges it until the second image is shown.
I've used a crude bash script that works fine but is slow, and as I code a lot in Python, I wanted to try to do the same in PIL.
I don't know much about image processing & I'm not a professional programmer.
The bash script is like this:
#!/bin/bash
# $1, $2 are input files, $3 is a string
#
for i in {00..100..05}
do composite $1 $2 -blend $i $3_$i.png
done
convert $3_*png -set delay 26 animated.gif
This creates an animated gif like this
My first attempt used the PIL.Image.blend() method:
from PIL import Image
one = Image.open("somepath/some_jpg1.jpg")
two = Image.open("somepath/some_jpg2.jpg")
img_list = [Image.blend(one, two, i/100) for i in range(0, 105, 5)]
img_list[0].save('test_animation.gif', save_all=True, append_images=img_list[1:], duration=250)
This works after a fashion, but the images are quite degraded (if it were film, I'd call it "reticulation").
I've looked at the PIL docs for other methods such as PIL.Image.composite() and PIL.Image.paste() in case there's other ways of doing this, but I can't understand how to create & deploy transparency masks to achieve what I want.
I don't understand how the images appear to be being degraded or how to stop this happening.
It looks like the PIL palette optimisation and dither code is not very fancy. GIF is a really terrible image format and getting nice results takes a lot of work, unfortunately.
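One way to reduce that speckling while staying in PIL, as a sketch: quantize every frame against one shared palette instead of letting the GIF writer pick a fresh palette (and dither pattern) per frame. The colors and method values here are assumptions to experiment with, and on newer Pillow the constant lives at Image.Quantize.MEDIANCUT:
from PIL import Image
one = Image.open("somepath/some_jpg1.jpg")
two = Image.open("somepath/some_jpg2.jpg")
blended = [Image.blend(one, two, i / 100) for i in range(0, 105, 5)]
# build one 256-color palette from the first frame and reuse it everywhere,
# so the palette doesn't shift (and dither differently) from frame to frame
palette_frame = blended[0].quantize(colors=256, method=Image.MEDIANCUT)
frames = [im.quantize(palette=palette_frame) for im in blended]
frames[0].save('test_animation.gif', save_all=True,
               append_images=frames[1:], duration=250)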
I realise you asked for PIL solutions, but pyvips has a high-quality GIF writer -- it uses libimagequant (the quantisation and dither library from pngquant) and the results are even better than imagemagick.
I tried:
#!/usr/bin/python3
import sys
import pyvips

if len(sys.argv) < 4:
    print(f"usage: {sys.argv[0]} OUTPUT-FILENAME IMAGE1 IMAGE2 ...")
    sys.exit(1)

# load the images at 200 pixels across
# add a copy of the first face to the end, so we loop
faces = [pyvips.Image.thumbnail(filename, 200).copy_memory()
         for filename in sys.argv[2:]]
faces.append(faces[0])

# fade between two images
def fade(a, b):
    # factor is a percentage, so 20 steps
    frames = [a * (factor / 100) + b * (1 - factor / 100)
              for factor in range(100, 0, -5)]
    return pyvips.Image.arrayjoin(frames, across=1)

# fade between face1 and face2, then between face2 and face3, etc.
fades = []
for a, b in zip(faces, faces[1:]):
    fades.append(fade(a, b))

# join all the fades into a single image and save it
pyvips.Image.arrayjoin(fades, across=1) \
    .write_to_file(sys.argv[1], page_height=faces[0].height)
If I run it like this, using three faces from thispersondoesnotexist:
$ ./fade.py x.gif ~/pics/face-*
It finishes in about 3 seconds and I get:

How to auto-crop out backgrounds of logo images (ideally of any color)

I am trying to auto-crop the background around logos. Right now I am cropping using getbbox but it does not always work.
Here are the images that I am using to test the autocropper:
After running
# crop out background
queries = [logo.crop(ImageOps.invert(logo).getbbox()) for logo in queries]
queries = [logo.crop(logo.getbbox()) for logo in queries]
This is the result:
As you can see it mostly works except for a few cases such as the logos in Adobe, Google, and LinkedIn in which the bounding box isn't what one would expect it to be. Can someone give me some insight as to why this doesn't always perform as expected and how I can improve it?
EDIT:
I tried using thresholding as suggested in the comments. It helped for the Google and LinkedIn images but had the same effect on the Adobe logo.
logos_gray = [logo.convert('L') for logo in logos]
threshold = 250
logos_gray = [logo.point(lambda p: 255 if p > threshold else 0) for logo in logos_gray]
logos_gray_inv = [ImageOps.invert(logo) for logo in logos_gray]
for i in range(len(logos_gray_inv)):
    logos[i] = logos[i].crop(logos_gray_inv[i].getbbox())
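A sketch of a more tolerant approach, in case it helps: measure each pixel's distance from whatever color sits in a corner (rather than assuming pure white) and crop to the pixels that differ by more than some tolerance. The autocrop name and the tolerance value are made up for illustration:
from PIL import Image, ImageChops

def autocrop(im, tolerance=10):
    # assume the top-left pixel is representative of the background
    rgb = im.convert('RGB')
    bg = Image.new('RGB', rgb.size, rgb.getpixel((0, 0)))
    diff = ImageChops.difference(rgb, bg)
    # keep only pixels that differ noticeably from the background color
    mask = diff.convert('L').point(lambda p: 255 if p > tolerance else 0)
    bbox = mask.getbbox()
    return im.crop(bbox) if bbox else im

logos = [autocrop(logo) for logo in logos]
Because the background color is sampled from the image itself, this should also handle non-white backgrounds, which matches the "ideally of any color" goal.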

How to apply transparency to clips in moviepy?

So I'm trying to create a clip with MoviePy where five semi-transparent clips are overlaid on each other using CompositeVideoClip.
The output should be a clip of length equal to the longest clip, where all the layers of the composite clip are visible.
My code looks something like this:
from moviepy.editor import *
clip_1 = VideoFileClip('some\\path\\here.mp4')
clip_2 = VideoFileClip('some\\path\\here.mp4')
clip_3 = VideoFileClip('some\\path\\here.mp4')
clip_4 = VideoFileClip('some\\path\\here.mp4')
clip_5 = VideoFileClip('some\\path\\here.mp4')
list_of_clips = [clip_1, clip_2, clip_3, clip_4, clip_5]
for index, clip in enumerate(list_of_clips):
    list_of_clips[index] = clip.set_opacity(.20)
output_clip = CompositeVideoClip(list_of_clips)
output_clip.write_videofile('some\\path\\here.mp4')
Code runs fine, except transparency is not applied.
Neither does this work:
clip = VideoFileClip('some\\path\\here.mp4').set_opacity(.30)
clip.write_videofile('some\\path\\here.mp4')
Export works fine, but clip is fully opaque.
Any suggestions for how to achieve transparency in clip outputs?
The MP4 format (I'm assuming H.264) does not offer transparency. WebM (VP9) and some variants of H.265 do offer transparency.
I'm not sure exactly what you are trying to do, but perhaps creating the overlaid videos as WebM (transparency supported) and then converting to H.264 at the end might work for you.
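If the goal is just to see the five layers blended together in an MP4, one workaround (a sketch, assuming MoviePy 1.x) is to composite the semi-transparent clips over an opaque background clip, so the blend is flattened before the alpha-less H.264 export:
from moviepy.editor import ColorClip, CompositeVideoClip, VideoFileClip

paths = ['some\\path\\here.mp4'] * 5  # stand-ins for the five input files
clips = [VideoFileClip(p).set_opacity(.20) for p in paths]
# opaque black canvas, sized to the first clip and as long as the longest one
bg = ColorClip(size=clips[0].size, color=(0, 0, 0),
               duration=max(c.duration for c in clips))
output_clip = CompositeVideoClip([bg] + clips)
output_clip.write_videofile('some\\path\\out.mp4')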

Adding black borders to image without quality loss

I have a 21:9 image at 1920x816 resolution, and I want to add black bars on top and bottom in order to compare it with the same one at 1920x1080 resolution. I tried two solutions for that, one using OpenCV and the second using Image from Pillow. However, both of those reduced the quality of the images.
The unedited images are taken from a video file using VapourSynth and FFMS2.
Comparison between the files (1920x816 frame.png and 1920x1080 frame.png are the unedited files):
https://diff.pics/rKVbxTRRPG35
Am I missing some important options that would prevent the quality loss? Or should I use a different library for this?
Code that I used for OpenCV:
import cv2
img = cv2.imread('1920x816 frame.png')
color = [0, 0, 0]
top, bottom, left, right = [132, 132, 0, 0]
img_with_border = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)
cv2.imwrite("1920x1080 after OpenCV.png", img_with_border)
And for Pillow:
from PIL import Image, ImageOps
old_im = Image.open("1920x816 frame.png")
new_im = ImageOps.expand(old_im, border=(0, 132))
new_im.save("1920x1080 after Pillow.png", "PNG")
Doesn't look like there's a difference in quality. But there is a difference in the gamma value between the image files. Your original was saved with gamma 0.45455, and the processed image was saved with no gamma value.
Here's an explanation of what gamma means in png files: https://hsivonen.fi/png-gamma/
It's probably best to just strip the gamma value from both images. Pillow doesn't provide any simple way to work with the gamma metadata, and I'm not sure if openCV does either.
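For what it's worth, a quick sketch to confirm the gamma difference: Pillow exposes a PNG's gAMA chunk, when present, through the image's info dictionary:
from PIL import Image

for fn in ('1920x816 frame.png', '1920x1080 after Pillow.png'):
    im = Image.open(fn)
    # 'gamma' only appears here when the file contains a gAMA chunk
    print(fn, im.info.get('gamma'))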
After some searching, I realized that I am an idiot. If I take the frame with VapourSynth, I might as well use it to add the needed border.
video = core.std.AddBorders(clip=video, top=132, bottom=132, color=[0, 0, 0])
That gets the job done without any additional compression. Maybe someone will make use of it.

Python PIL: best scaling method that preserves lines

I have a 2D drawing with a black background and white lines (exported from AutoCAD), and I want to create a thumbnail preserving the lines, using the Python PIL library.
But what I obtain using the 'thumbnail' method is just a black picture scattered with white dots.
Note that if I put the image into an IMG tag with fixed width, I obtain exactly what I want (but the image is entirely loaded).
After your comments, here is my sample code:
from PIL import Image
fn = 'filename.gif'
im = Image.open(fn)
im.convert('RGB')
im.thumbnail((300, 300), Image.ANTIALIAS)
im.save('newfilename.png', 'PNG')
How can I do this?
The default resizing method used by thumbnail is NEAREST, which is a really bad choice. If you're resizing to 1/5 of the original size for example, it will output one pixel and throw out the next 4 - a one-pixel wide line has only a 1 out of 5 chance of showing up at all in the result!
The surprising thing is that BILINEAR and BICUBIC aren't much better. They take a formula and apply it to the 2 or 3 closest pixels to the source point, but there's still lots of pixels they don't look at, and the formula will deemphasize the line anyway.
The best choice is ANTIALIAS, which appears to take all of the original image into consideration without throwing away any pixels. The lines will become dimmer but they won't disappear entirely; you can do an extra step to improve the contrast if necessary.
Note that all of these methods will fall back to NEAREST if you're working with a paletted image, i.e. im.mode == 'P'. You must always convert to 'RGB'.
from PIL import Image
im = Image.open(fn)
im = im.convert('RGB')
im.thumbnail(size, Image.ANTIALIAS)
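The "extra step to improve the contrast" mentioned above could be as simple as this sketch, using ImageOps.autocontrast (one reasonable choice, not the only one), continuing from the snippet above:
from PIL import ImageOps
# stretch the dimmed-but-surviving lines back toward full white
im = ImageOps.autocontrast(im)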
Here's an example taken from the electronics.stackexchange site https://electronics.stackexchange.com/questions/5412/easiest-and-best-poe-ethernet-chip-micro-design-for-diy-interface-with-custom-ard/5418#5418
Using the default NEAREST algorithm, which I assume is similar to the results you had:
Using the ANTIALIAS algorithm:
By default, im.resize uses the NEAREST filter, which is going to do what you're seeing -- lose information unless it happens to fall on an appropriately moduloed pixel.
Instead call
im.resize(size, Image.BILINEAR)
This should preserve your lines. If not, try Image.BICUBIC or Image.ANTIALIAS. Any of those should work better than NEAREST.
