Concatenating videos using ffmpeg writes over empty audio stream - python

I have four movie files that I am trying to overlay and concatenate:
1. An intro file with an empty audio stream (generated using lavfi)
2. Main movie file(s) that need to be concatenated and trimmed
3. A watermark that needs to be overlaid on top of #2
4. An outro movie that also has an empty audio stream
Here is the command I am using to do all this:
ffmpeg -i temp_intro.mp4 -f concat -i tempFile.txt -i scoreboard.mp4 -i temp_outro.mp4 \
-filter_complex "[1]trim=end=24:start=12[s0];[s0]setpts=PTS-STARTPTS[s1];[1]atrim=end=24:start=12[s2];[s2]asetpts=PTS-STARTPTS[s3];\
[s1][s3]concat=a=1:n=1:v=1[s4];\
[2]format=yuva444p[s5];[s5]colorchannelmixer=aa=0.5[s6];\
[s4][s6]overlay=eof_action=repeat:x=(main_w-overlay_w)/2:y=main_h-overlay_h-20[s7];\
[0][s7][3]concat=n=3[s8]" test.mp4
Despite how ugly it looks, it mostly works, except for the audio: the audio starts playing as soon as the intro clip starts. I cannot render the overlaid movie to an intermediate file first, because I also need to add fade-out and fade-in effects for the intro and outro, and I can only afford to re-encode once, since I will be doing this over multiple large files every night.
Please suggest how I might fix the audio issue.

The solution is easy: just name the audio and video streams for each file explicitly in the second concat. Here is the command:
ffmpeg -i temp_intro.mp4 -f concat -i tempFile.txt -i scoreboard.mp4 -i temp_outro.mp4 \
-filter_complex "[1]trim=end=24:start=12[s0];[s0]setpts=PTS-STARTPTS[s1];[1]atrim=end=24:start=12[s2];[s2]asetpts=PTS-STARTPTS[s3]; \
[2]format=yuva444p[s4];[s4]colorchannelmixer=aa=0.5[s5]; \
[s1][s5]overlay=eof_action=repeat:x=(main_w-overlay_w)/2:y=main_h-overlay_h-20[s7]; \
[0:v:0][0:a:0] [s7][s3] [3:v:0][3:a:0]concat=n=3:v=1:a=1[s8]" -map [s8] test.mp4
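The fix works because `concat` with `v=1:a=1` consumes one video/audio pair per segment, so each segment's own (possibly silent) audio stays aligned with its video. Since this runs nightly, the command can also be built in Python. A sketch using subprocess, with the filenames from the question and a hypothetical `build_concat_cmd` helper for the variable trim window:

```python
import subprocess

# Hypothetical helper: builds the corrected command so the trim window
# and watermark opacity can vary per nightly run.
def build_concat_cmd(start, end, opacity=0.5, out="test.mp4"):
    filt = (
        f"[1]trim=start={start}:end={end}[s0];[s0]setpts=PTS-STARTPTS[s1];"
        f"[1]atrim=start={start}:end={end}[s2];[s2]asetpts=PTS-STARTPTS[s3];"
        f"[2]format=yuva444p[s4];[s4]colorchannelmixer=aa={opacity}[s5];"
        "[s1][s5]overlay=eof_action=repeat:x=(main_w-overlay_w)/2:y=main_h-overlay_h-20[s7];"
        "[0:v:0][0:a:0][s7][s3][3:v:0][3:a:0]concat=n=3:v=1:a=1[s8]"
    )
    return ["ffmpeg", "-i", "temp_intro.mp4",
            "-f", "concat", "-i", "tempFile.txt",
            "-i", "scoreboard.mp4", "-i", "temp_outro.mp4",
            "-filter_complex", filt, "-map", "[s8]", out]

# subprocess.run(build_concat_cmd(12, 24), check=True)  # runs the encode
```

The run call is left commented out since it needs ffmpeg on PATH and the actual input files.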

Related

How can I combine three ffmpeg commands in Python?

I want to combine three ffmpeg commands on my video and get a single output. In the following code I give a video (footage) as input; I want to compress the video, add a watermark, and display frame numbers on it, and produce one output video after executing these commands. How can I combine them?
f'''ffmpeg -i {footage} -vcodec libx264 -acodec aac \
    "drawtext=text='{hostname}\n {socket.gethostbyname(hostname)}\n {getpass.getuser()}':x=10:y=H-th-10:fontfile=KhmerOS.ttf:fontsize=50:fontcolor=black:shadowcolor=black:shadowx=2:shadowy=2" \
    "drawtext=fontfile=KhmerOS.ttf: text='%{{frame_num}}': start_number=1: x=(w-tw)/2: y=h-(2*lh): fontcolor=black: fontsize=50: box=1: boxcolor=white: boxborderw=5" \
    -c:a copy {mov_path}'''
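For what it's worth, the two drawtext filters can be comma-chained under a single `-vf` so that compression, watermark, and frame counter happen in one encode (note that `-acodec aac` and `-c:a copy` conflict, so only one of them should be kept). A sketch with placeholder paths, with the hostname/IP/user string stubbed out as `watermark`:

```python
import subprocess

# Sketch: run all three steps in one encode by comma-chaining the two
# drawtext filters under a single -vf. Paths and the watermark text are
# placeholders standing in for the question's f-string variables.
footage, out_path = "footage.mp4", "out.mp4"
watermark = "hostname"  # stand-in for the hostname/IP/user string
vf = (
    f"drawtext=text='{watermark}':x=10:y=H-th-10:fontfile=KhmerOS.ttf:"
    "fontsize=50:fontcolor=black:shadowcolor=black:shadowx=2:shadowy=2,"
    "drawtext=fontfile=KhmerOS.ttf:text='%{frame_num}':start_number=1:"
    "x=(w-tw)/2:y=h-(2*lh):fontcolor=black:fontsize=50:box=1:"
    "boxcolor=white:boxborderw=5"
)
cmd = ["ffmpeg", "-i", footage, "-vf", vf,
       "-vcodec", "libx264", "-acodec", "aac", out_path]
# subprocess.run(cmd, check=True)  # needs ffmpeg on PATH and the input file
```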

set equal duration to images with ffmpeg python

Hi, I want to make a video using images. Let's say I have 60 seconds of audio and 6 images, and I want my video to show each image for an equal duration, i.e. 10 seconds per image,
but I couldn't figure out how to do it.
here is my code
import ffmpeg
input_audio = ffmpeg.input('./SARS/SARS.mp3')
input_still = ffmpeg.input('./SARS/*.jpg',t=20, pattern_type='glob', framerate=24)
(
ffmpeg
.concat(input_still, input_audio, v=1, a=1)
.filter('scale', size='hd1080', force_original_aspect_ratio='increase')
.output('./SARS/output.mp4')
.run(overwrite_output=True)
)
Any help is appreciated.
I'm sure you can achieve this with ffmpeg-python, but you can try one of the following:
Plain CLI
ffmpeg \
-y \
-i './SARS/SARS.mp3' \
-pattern_type glob -framerate 0.1 -i './SARS/*.jpg' \
-vf scale=size=hd1080:force_original_aspect_ratio=increase \
'./SARS/output.mp4'
You can run this in Python with subprocess.run(['ffmpeg', '-y', ...]).
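Spelled out, the same CLI command as a full subprocess argument list (the run call is left commented out since it needs ffmpeg on PATH and the input files):

```python
import subprocess

# The plain-CLI command above as an argument list: one image every
# 10 seconds (-framerate 0.1), scaled to hd1080.
cmd = [
    "ffmpeg", "-y",
    "-i", "./SARS/SARS.mp3",
    "-pattern_type", "glob", "-framerate", "0.1", "-i", "./SARS/*.jpg",
    "-vf", "scale=size=hd1080:force_original_aspect_ratio=increase",
    "./SARS/output.mp4",
]
# subprocess.run(cmd, check=True)
```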
ffmpegio Package
For a one-time transcoding need, ffmpegio is overkill: calling ffmpeg directly via subprocess is more than sufficient and faster. But if you do this kind of operation often, you can give it a try.
pip install ffmpegio-core
from ffmpegio.ffmpegprocess import run
run({'inputs': [
        ('./SARS/SARS.mp3', None),
        ('./SARS/*.jpg', {'pattern_type': 'glob', 'framerate': 0.1})],
     'outputs': [
        ('./SARS/output.mp4', {'vf': 'scale=size=hd1080:force_original_aspect_ratio=increase'})]},
    overwrite=True)
Essentially, it's the subprocess counterpart, but it takes a dict of FFmpeg parameters.

Merging of audio and video by ffmpeg

I have 2 separate webm files, a video part and an audio part, and now I want to merge them. I am using Python and ffmpeg:
input_video = ffmpeg.input(f'{title}-video.webm').output("out1.webm")
input_audio = ffmpeg.input(f'{title}-audio.webm').output("out2.webm")
ffmpeg.merge_outputs(input_video, input_audio).run()
The output file looks OK (it plays audio and video), but the merge takes a long time. I suspect there is a needless re-encode that I could avoid. Is this possible with the given API?
I googled a command
ffmpeg -i 12m.mp4 -i 6m.mp4 -c copy -map 1:v -map 0:a -shortest new.mp4
that should be run via the command line, but I'd like to implement it by means of the API if possible.

Programmatically add a simple pause to a video

Say I have a 30-second video. I want to produce a 40-second video that is just the first video but with an extra "frozen" frame (for, say, 10 seconds) somewhere in the middle of it (think of it as wanting to comment on the video at a specific point).
I know I can do this easily with video editing software. However, I am looking for a command-line tool that lets me do this efficiently, since I need to do this several times with variable freeze points.
I am using Python
I thought of using ffmpeg, splitting the video into two, creating a third video composed of a given frame, and then concatenating the three videos.
But maybe there is a much simpler technique?
I found a way to do it.
Let's say I have the original movie file in.mp4,
and I want to pause it for 10 seconds on the frame found at the 15-second mark:
# First extract the frame
ffmpeg -ss 00:00:15 -i in.mp4 -vframes 1 -q:v 2 -y static_frame.png
# Create movie_aux1.mp4: the first 15 seconds of the original video
ffmpeg -t 00:00:15 -i in.mp4 movie_aux1.mp4
# Create movie_aux2.mp4: the extracted frame looped for 10 seconds
ffmpeg -loop 1 -i static_frame.png -t 10 movie_aux2.mp4
# Create movie_aux3.mp4: the remainder of the original video, from the 15-second mark
ffmpeg -ss 00:00:15 -i in.mp4 movie_aux3.mp4
# Create a list of the movies to concatenate. Don't forget to erase this file afterwards
echo "file 'movie_aux1.mp4'" >> mylist.txt
echo "file 'movie_aux2.mp4'" >> mylist.txt
echo "file 'movie_aux3.mp4'" >> mylist.txt
# Concatenate all three movies
ffmpeg -f concat -safe 0 -i mylist.txt out.mp4
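Since the freeze point varies per run, the five steps above can be parameterized from Python. A sketch (`freeze_commands` is a hypothetical helper; each returned list is meant to be passed to `subprocess.run`, and `mylist.txt` still has to be written as above):

```python
import subprocess  # for running each command afterwards

def freeze_commands(src, at, dur, out="out.mp4"):
    """Return the ffmpeg commands from the recipe above, parameterized
    by the freeze point `at` and the pause duration `dur` (seconds)."""
    at, dur = str(at), str(dur)
    return [
        # extract the frame at the freeze point
        ["ffmpeg", "-y", "-ss", at, "-i", src, "-vframes", "1", "-q:v", "2", "static_frame.png"],
        # part 1: everything before the freeze point
        ["ffmpeg", "-y", "-t", at, "-i", src, "movie_aux1.mp4"],
        # part 2: the still frame looped for `dur` seconds
        ["ffmpeg", "-y", "-loop", "1", "-i", "static_frame.png", "-t", dur, "movie_aux2.mp4"],
        # part 3: everything after the freeze point
        ["ffmpeg", "-y", "-ss", at, "-i", src, "movie_aux3.mp4"],
        # join the three parts listed in mylist.txt
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "mylist.txt", out],
    ]

# for cmd in freeze_commands("in.mp4", 15, 10):
#     subprocess.run(cmd, check=True)
```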

FFMPEG concat cutting audio off after certain clip

I am creating multiple video clips in Python using FFmpeg and then trying to concat them together. I create videos named result1000, result1001, etc., plus a transition effect that I want to insert between them. The result1000, result1001, ... clips concat together perfectly fine; however, inserting the transition effect between them causes every clip after the first transition to lose audio.
Creating the transition
ffmpeg -loop 1 -y -i media/templates/bg.png -i media/swoosh_sound.mp3 -shortest -acodec copy -vcodec libx264rgb output/swoosh.mp4
Creating video clips
ffmpeg -loop 1 -y -i image_files/image+str(1000+i)+.png -i audio_files/audio+str(1000+i)+.mp3 -shortest -acodec copy -vcodec libx264rgb output/result+str(1000+i)+.mp4
The ffmpeg_files.txt then looks something like this
file 'output/result1000.mp4'
file 'output/result1001.mp4'
file 'output/result1002.mp4'
file 'output/result1003.mp4'
file 'output/result1004.mp4'
file 'output/swoosh.mp4'
file 'output/result1005.mp4'
file 'output/result1006.mp4'
and the concat command I'm using is
ffmpeg -f concat -safe 0 -i ffmpeg_files.txt output/no_bg_out.mp4
On running the concat command, the console prints
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001f289b44c40] Auto-inserting h264_mp4toannexb bitstream filter
for each resultXXXX clip; then, as soon as it reaches a transition clip, it starts spamming
[mp4 @ 000001aa093ad100] Non-monotonous DTS in output stream 0:1; previous: 13619623, current: 8777816; changing to 13619624. This may result in incorrect timestamps in the output file.
I have read over the solutions mentioned here, but none of them seem to solve my issue. It should be noted that all video clips are created from .mp3 audio files and .png image files.
All attributes must match, but swoosh.mp4 differs from the rest with a different audio sample rate and channel layout. Re-encode its audio and try again:
ffmpeg -i swoosh.mp4 -c:v copy -c:a libmp3lame -ar 24000 -ac 1 -b:a 32k swoosh2.mp4
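To spot the mismatched clip before concatenating, the audio parameters of each file can be compared with ffprobe. A sketch (`audio_params` is a hypothetical helper; the ffprobe flags are standard options):

```python
import json
import subprocess

def parse_audio_params(probe_json):
    """Extract (codec, sample_rate, channels) from ffprobe JSON output."""
    stream = json.loads(probe_json)["streams"][0]
    return stream["codec_name"], stream["sample_rate"], stream["channels"]

def audio_params(path):
    """Probe the first audio stream of a clip (needs ffprobe on PATH)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "a:0",
         "-show_entries", "stream=codec_name,sample_rate,channels",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    return parse_audio_params(out)

# for clip in ["output/swoosh.mp4", "output/result1000.mp4"]:
#     print(clip, audio_params(clip))
```

Any clip whose triple differs from the rest is the one to re-encode as shown above.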
