How to apply transparency to clips in moviepy?

So I'm trying to create a clip with moviepy where five semi-transparent clips are overlaid on each other using CompositeVideoClip.
The output should be a clip whose length equals that of the longest clip, with all the layers of the composite visible.
My code looks something like this:
from moviepy.editor import *
clip_1 = VideoFileClip('some\\path\\here.mp4')
clip_2 = VideoFileClip('some\\path\\here.mp4')
clip_3 = VideoFileClip('some\\path\\here.mp4')
clip_4 = VideoFileClip('some\\path\\here.mp4')
clip_5 = VideoFileClip('some\\path\\here.mp4')
list_of_clips = [clip_1, clip_2, clip_3, clip_4, clip_5]
for index, clip in enumerate(list_of_clips):
    list_of_clips[index] = clip.set_opacity(0.20)
output_clip = CompositeVideoClip(list_of_clips)
output_clip.write_videofile('some\\path\\here.mp4')
Code runs fine, except transparency is not applied.
Neither does this work:
clip = VideoFileClip('some\\path\\here.mp4').set_opacity(.30)
clip.write_videofile('some\\path\\here.mp4')
Export works fine, but clip is fully opaque.
Any suggestions for how to achieve transparency in clip outputs?

The MP4 format (I'm assuming H.264) does not offer transparency. WebM (VP9) and some variants of H.265 do offer transparency.
I'm not sure exactly what you are trying to do, but perhaps creating the overlaid videos as WebM (transparency supported) and then converting to H.264 at the end might work for you.

Related

how do i increase the resolution of my gif file?

I am trying to create a GIF file; the file is created successfully, but it comes out pixelated. Can anyone help me with how to increase the resolution?
Here is the code:
import PIL
from PIL import Image
import numpy as np

image_frames = []
days = np.arange(0, 12)

for i in days:
    new_frame = PIL.Image.open(
        r"C:\Users\Harsh Kotecha\PycharmProjects\pythonProject1\totalprecipplot" + "//" + str(i) + ".jpg"
    )
    image_frames.append(new_frame)

image_frames[0].save(
    "precipitation.gif",
    format="GIF",
    append_images=image_frames[1:],
    save_all=True,
    duration=800,
    loop=0,
)
Here is the GIF file:
Here are the original images:
image1
image2
image3
Updated Answer
Now that you have provided some images I had a go at disabling the dithering:
#!/usr/bin/env python3
from PIL import Image

# User editable values
method = Image.FASTOCTREE
colors = 250

# Load images precip-01.jpg through precip-12.jpg, quantize to common palette
imgs = []
for i in range(1, 13):
    filename = f'precip-{i:02d}.jpg'
    print(f'Loading: {filename}')
    try:
        im = Image.open(filename)
        if not imgs:
            # First image: build the palette
            pImage = im.quantize(colors=colors, method=method, dither=0)
        else:
            # Remaining images: remap to the first image's palette, no dithering
            pImage = im.quantize(palette=imgs[0], dither=0)
        imgs.append(pImage)
    except OSError:
        print(f'ERROR: Unable to open {filename}')

imgs[0].save(
    "precipitation.gif",
    format="GIF",
    append_images=imgs[1:],
    save_all=True,
    duration=800,
    loop=0
)
Original Answer
Your original images are JPEGs, which means they likely have many thousands of colours [2]. When you make an animated GIF (or even a static GIF) each frame can only have 256 colours in its palette.
This can create several problems:
each frame gets a new, distinct palette stored with it, thereby increasing the size of the GIF (each palette is 0.75kB)
colours get dithered in an attempt to make the image look as close as possible to the original colours
different colours can get chosen for frames that are nearly identical which means colours flicker between distinct shades on successive frames - can cause "twinkling" like stars
If you want to learn about GIFs, you can learn 3,872 times as much as I will ever know by reading Anthony Thyssen's excellent notes here, here and here.
Your image is suffering from the first problem because it has 12 "per frame" local colour tables as well as a global colour table [3]. It is also suffering from the second problem - dithering.
To avoid the dithering, you probably want to do some of the following:
load all images and append them all together into a 12x1 monster image, and find the best palette for all the colours. As all your images are very similar, I think you'll get away with generating a palette just from the first image without needing to montage all 12 - that'll be quicker
now palettise each image, with dithering disabled and using the single common palette
save your animated sequence of the palettised images, pushing in the single common palette from the first step above
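The three steps above can be sketched with Pillow; the frame sizes and colours below are made up for illustration, and real code would load the twelve JPEGs instead:

```python
import io
from PIL import Image

# Stand-ins for the 12 precipitation JPEGs (sizes and colours invented)
frames = [Image.new('RGB', (40, 30), (i * 20, 40, 200 - i * 15)) for i in range(12)]

# Step 1: stack all frames into one tall "monster" image so quantize()
# sees every colour that occurs anywhere in the sequence
strip = Image.new('RGB', (40, 30 * len(frames)))
for i, frame in enumerate(frames):
    strip.paste(frame, (0, 30 * i))
common = strip.quantize(colors=250)

# Step 2: remap each frame to the single common palette, dithering disabled
remapped = [f.quantize(palette=common, dither=0) for f in frames]

# Step 3: save the sequence; the shared palette travels with frame 0
buf = io.BytesIO()
remapped[0].save(buf, format='GIF', save_all=True,
                 append_images=remapped[1:], duration=800, loop=0)
```

Because every frame shares one palette, the GIF needs no per-frame local colour tables and near-identical frames cannot flicker between shades.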
[2]: You can count the number of colours in an image with ImageMagick, using:
magick YOURIMAGE -format %k info:
[3]: You can see the colour tables in a GIF with gifsicle using:
gifsicle -I YOURIMAGE.GIF

Mixing audio files causes clipping in Python

I have some audio files, and I mixed them together:
import pydub

for idx, f in enumerate(files):
    if idx == 0:
        sound = pydub.AudioSegment.from_file(f)
    else:
        temp = pydub.AudioSegment.from_file(f)
        sound = sound.overlay(temp, position=0)
sound.export("totakmix.wav", format="wav")
Each individual audio file is not clipping; however, the mixed file is.
Is there any way to prevent this?
The easiest thing you can do to prevent clipping while using overlay is to apply a negative gain correction with gain_during_overlay, like this:
sound = sound.overlay(temp, position=0, gain_during_overlay=-3)
This turns the audio down by 3 dB while overlaying. Why 3 dB? A 3 dB change corresponds to roughly a factor of two in power, so if each original track was not clipping on its own, two of them summed at -3 dB each should not clip either.
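More generally, summing n tracks of similar level raises the power roughly n-fold, so reducing each track by 10·log10(n) dB restores the headroom. A small helper (hypothetical name, not part of pydub) to compute that gain:

```python
import math

def overlay_gain_db(n_tracks):
    """Negative gain (in dB) to apply to each of n similar-level tracks
    so that their power sum roughly matches a single track."""
    return -10 * math.log10(n_tracks)

print(round(overlay_gain_db(2), 2))   # two tracks -> about -3.01 dB
print(round(overlay_gain_db(4), 2))   # four tracks -> about -6.02 dB
```

The result can be passed straight to gain_during_overlay when mixing a known number of files.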

Making video from clips on moviepy only showing last image

I'm trying to make a video from a list of images using moviepy. I have issues using moviepy.editor since it does not like being frozen using PyInstaller, so I'm using moviepy.video.VideoClip.ImageClip for the images and moviepy.video.compositing.CompositeVideoClip.CompositeVideoClip for the clip. I have a list of .jpg images in a list called images:
from moviepy.video.VideoClip import ImageClip
from moviepy.video.compositing.CompositeVideoClip import CompositeVideoClip
clips = [ImageClip(m).set_duration(1) for m in images]
concat_clip = CompositeVideoClip(clips)
concat_clip.write_videofile('VIDEO.mp4', fps=1)
It successfully makes an .mp4, but the video is only one second long and shows only the last image in the list. I can check clips and it has the ~30 images that should be in the video. I could do this using methods from moviepy.editor, following this SO question and answer, but there doesn't seem to be an analogous parameter in CompositeVideoClip for method='compose', which is where I think the issue is.
Using concatenate_videoclips might help. I use the code below and it works just fine (if you are avoiding moviepy.editor, it can be imported from moviepy.video.compositing.concatenate):
from moviepy.video.compositing.concatenate import concatenate_videoclips

clips = [ImageClip(m).set_duration(1/25)
         for m in file_list_sorted]
concat_clip = concatenate_videoclips(clips, method="compose")
concat_clip.write_videofile("test.mp4", fps=25)
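On why only the last image showed: CompositeVideoClip stacks every clip at t=0, so the last (topmost) image covers the rest for the whole duration. If you do want to stay with CompositeVideoClip, each clip needs its own start time; a sketch of just the timing arithmetic (the clip construction itself, shown in the comment, is assumed from the question's code):

```python
# Timing for ~30 one-second images, as in the question
n_images = 30
duration = 1.0
starts = [i * duration for i in range(n_images)]
total = n_images * duration

# Each clip would then be built as (not executed here):
#   ImageClip(images[i]).set_duration(duration).set_start(starts[i])
# and the CompositeVideoClip of all of them lasts `total` seconds.
print(starts[:3], total)   # [0.0, 1.0, 2.0] 30.0
```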

How To Resize a Video Clip in Python

I want to resize a video clip in Python 2.7.
For example, given "movie.mp4" at 1080p quality,
the result should be "movie.mp4" at 360p quality.
I think there should be a solution with MoviePy; if you know one,
I would be grateful if you could answer.
Here is how you resize a movie with moviepy:
see the moviepy doc here
import moviepy.editor as mp
clip = mp.VideoFileClip("movie.mp4")
clip_resized = clip.resize(height=360) # make the height 360px (according to the MoviePy documentation, the width is then computed so that the width/height ratio is conserved)
clip_resized.write_videofile("movie_resized.mp4")
You can also tune the quality by adding the parameter bitrate="500k" or bitrate="5000k" in the last line.
As said above, you could also use ffmpeg directly, it will be simpler if you just need a quick script.
Why not ffmpeg?
ffmpeg -i movie.mp4 -vf scale=640:360 movie_360p.mp4
If you use 640:-2 then, in this example, the scale filter will preserve the aspect ratio and automatically calculate the correct height.
Look at the H.264 encoding guide for additional options.
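If the ffmpeg route is taken from Python, the command can be assembled and run with subprocess; the helper name below is made up for illustration:

```python
import subprocess

def build_scale_cmd(src, dst, height=360):
    # -2 lets ffmpeg pick an even width that preserves the aspect ratio
    # (libx264 requires even dimensions, so -2 is safer than -1)
    return ['ffmpeg', '-y', '-i', src, '-vf', f'scale=-2:{height}', dst]

cmd = build_scale_cmd('movie.mp4', 'movie_360p.mp4')
# subprocess.run(cmd, check=True)   # uncomment to actually invoke ffmpeg
print(' '.join(cmd))
```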
Moviepy Resize function
>>> myClip.resize( (460,720) ) # New resolution: (460,720)
>>> myClip.resize(0.6) # width and height multiplied by 0.6
>>> myClip.resize(width=800) # height computed automatically.
>>> myClip.resize(lambda t : 1+0.02*t) # slow swelling of the clip

Video Manipulation

Before a couple of days ago I had never used OpenCV or done any video processing. I've been asked to computationally overlay a video based upon some user inputs and build a new video with the overlays incorporated, for download in AVI format. Essentially, the goal is to have a form that takes as input 3 images (icon, screenshot #1, screenshot #2) and 3 text inputs, and overlays the original video with them. Here is a link to the video. When the video is running you'll notice the icon in the center of the iPhone at the beginning is stretched and pulled. I've been iteratively testing OpenCV methods by breaking the video down frame by frame and doing stuff to each one, then rebuilding (obviously this is probably the only way to successfully rebuild a video with OpenCV with edits, but anyway). This video is one I overlaid with a colored circle that moves back and forth.
# the method I've been using
import cv2 as cv
import numpy as np

cap = cv.VideoCapture('the_vid.avi')
flag, frame = cap.read()
width = np.size(frame, 1)
height = np.size(frame, 0)
writer = cv.VideoWriter('output.avi', cv.VideoWriter_fourcc('I', '4', '2', '0'), 35, (width, height), 1)

while True:
    flag, frame = cap.read()
    if not flag:
        break
    x = width // 2
    y = height // 2
    # add a line or circle or something: a filled red circle, radius 20, at the centre
    cv.circle(frame, (x, y), 20, (0, 0, 255), -1)
    # write our new frame
    writer.write(frame)
Now we've got a very large uncompressed AVI file as output, which can be compressed using ffmpeg:
ffmpeg -i output.avi -vcodec msmpeg4v2 compressed_output.avi
Ok, so that's the method I've been using to rebuild this video, and from that method I'm not seeing it possible to take a static image and stretch it around like is shown in the first 90 frames or so. The only other possibility I saw was maybe doing something like below. If you can tell me if there is even a way to implement this pseudo-code that would be awesome, I'm thinking it will be extremely difficult:
# example for the first image and first few seconds of video only
first_image = cv.imread('user_uploaded_icon.png')
flag, first_frame = cap.read()
# section of the frame that contains the original icon
the_section = algorithm_to_get_array_of_original_icon_in_first_frame(first_frame)
rows, cols = the_section.shape
# somehow find the array within the first image that is the same size as the_section
# containing JUST the icon
icon = array_of_icon(first_image)
# build a blank image with the size of the original icon in the current frame
blank_image = np.zeros((rows,cols,3),np.uint8)
for i in xrange(rows):
    for j in xrange(cols):
        blank_image[i, j] = icon[i, j]
What seems like it might not work about this is the fact that the_section in the first_frame will be stretched to different dimensions than the static image...so I'm not sure if there is ANY viable way to handle this. I appreciate all the time saving help in advance.
