I am trying to make a video out of all the pictures in the pics2 directory using ffmpeg. This is my code:
import subprocess
command = 'ffmpeg -framerate 1 -i "pics2/Image%02d.png" -r 1 -vcodec libx264 -s 1280x720 -pix_fmt yuv420p Output.mp4'
p = subprocess.call(command, shell=True)
This saves an Output.mp4 successfully, but the video has no frames (it is completely black).
How do I solve this?
Your player does not like 1 fps
Many players behave similarly.
You can still provide a low frame rate for the input, but give a more normal frame rate for the output; ffmpeg will simply duplicate the frames. The output video won't look any different from the one your original command produced, but it will play.
ffmpeg -framerate 1 -i "pics2/Image%02d.png" -r 10 -vcodec libx264 -s 1280x720 -pix_fmt yuv420p Output.mp4
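In the Python script from the question, that could look like this (a sketch; passing the arguments as a list avoids shell=True and its quoting issues):

import subprocess

command = [
    "ffmpeg", "-framerate", "1", "-i", "pics2/Image%02d.png",
    "-r", "10", "-vcodec", "libx264", "-s", "1280x720",
    "-pix_fmt", "yuv420p", "Output.mp4",
]
subprocess.call(command)  # returns ffmpeg's exit code (0 on success)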
The problem with your original script comes from setting both -framerate and -r. Try using this as your command:
command = "ffmpeg -framerate 1 -i 'pics2/Image%02d.png' -vcodec libx264 -s 1280x720 -pix_fmt yuv420p Output.mp4"
Brief explanation (based on Saaru Lindestøkke's answer):
ffmpeg                    <- call ffmpeg
-framerate 1              <- set the input frame rate to 1 fps
-i 'pics2/Image%02d.png'  <- read PNG images named Image01.png, Image02.png, etc.
-vcodec libx264           <- set the codec to libx264
-s 1280x720               <- set the resolution to 1280x720
-pix_fmt yuv420p          <- set the pixel format to planar YUV 4:2:0, 12bpp
Output.mp4                <- the output filename
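Back in Python, a sketch of the same call using subprocess.run with check=True, so a failing ffmpeg raises an exception instead of silently leaving a bad file (assumes Python 3.5+):

import subprocess

command = "ffmpeg -framerate 1 -i 'pics2/Image%02d.png' -vcodec libx264 -s 1280x720 -pix_fmt yuv420p Output.mp4"
subprocess.run(command, shell=True, check=True)  # raises CalledProcessError if ffmpeg exits non-zero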
Related
I'm trying to play a video like a preview before rendering:
ffplay test.mp4 -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS" -s 640x480 -aspect 4:2
Without -s it works fine, but when I add -s it outputs an error:
Option -s is deprecated, use -video_size.
Option video_size not found.
Let me know the correct syntax. Also, sometimes after converting I don't see the change reflected in the video details:
ffmpeg -i 21.mp4 -vf "scale=1280*720" 21_edit.mkv
These errors are specific to ffplay
-s is deprecated for -video_size only in ffplay, not ffmpeg.
-video_size is used to manually tell ffplay the size for videos that do not contain a header (such as raw video). It is not used to resize videos.
To resize/scale video
Use the scale filter:
ffplay -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS,scale=640:-1" test.mp4
To resize the player window
Use the -x or -y options:
-x width
Force displayed width.
-y height
Force displayed height.
Example:
ffplay -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS" -x 640 test.mp4
The -s option is deprecated in ffplay, so you can use the scale filter directly to specify the video dimensions. The modified ffplay command is:
ffplay test.mp4 -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS,scale=640:480" -aspect 4:2
I would like to do something like this:
'ffmpeg -i "video.ts" -vf "crop=768:126:0:400 ,
select=gt(scene\\,0.06) " -vsync vfr -f null /dev/null -loglevel debug
2>&1 | grep select:1'
However, this gives me wrong results compared to first cropping the video:
ffmpeg -i "video.ts" -vf "crop=768:126:0:400" cropped.ts
and then taking out the scene changes in a second command:
ffmpeg -i "cropped.ts" -vf "select=gt(scene\\,0.06) " -vsync vfr -f null
/dev/null -loglevel debug 2>&1 | grep select:1'
Is there a single command that could crop a particular portion of the video and get scene changes only for the cropped portion?
I ran this command in cmd:
ffmpeg -y -re -acodec pcm_s16le -rtsp_transport tcp -i rtsp://192.168.1.200:554/11 -vcodec copy -af asetrate=22050 -acodec aac -b:a 96k test.mp4
and I got a video file. But if I run the same command in Python:
import os
os.system("ffmpeg -y -re -acodec pcm_s16le -rtsp_transport tcp -i
rtsp://192.168.1.200:554/11 -vcodec copy -af asetrate=22050 -acodec aac -b:a 96k test.mp4")
I get this error:
"ffmpeg" �� ���� ����७��� ��� ���譥� ��������, �ᯮ��塞�� �ணࠬ��� ���
������ 䠩���.
I tried to create a bat file and run it from Python, but I get the same error. If I don't run it through Python, everything works fine.
How do I start recording a video stream from an ip camera in python?
tl;dr: how to use a bash ffmpeg command in python
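For what it's worth, a minimal sketch of the same invocation through subprocess with an argument list, which sidesteps shell quoting entirely (same URL and options as above):

import subprocess

cmd = [
    "ffmpeg", "-y", "-re",
    "-acodec", "pcm_s16le",
    "-rtsp_transport", "tcp",
    "-i", "rtsp://192.168.1.200:554/11",
    "-vcodec", "copy",
    "-af", "asetrate=22050",
    "-acodec", "aac", "-b:a", "96k",
    "test.mp4",
]
subprocess.call(cmd)  # if ffmpeg is not on PATH for Python, give the full path to the executable instead of "ffmpeg"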
So I'm trying to take one JPEG image and an audio file as input and generate a video file of the same duration as the audio file (by displaying the still image for the whole duration).
So, I found this:
https://superuser.com/questions/1041816/combine-one-image-one-audio-file-to-make-one-video-using-ffmpeg
So, I now have the code for the merging:
ffmpeg -loop 1 -i image.jpg -i audio.wav -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4
Then I want to use that in Python, but I'm unable to figure out how to port it to ffmpeg-python or ffpy.
I found this: Combining an audio file with video file in python
So, I tried the same thing as him:
import subprocess

cmd = 'ffmpeg -loop 1 -i image.jpg -i message.mp3 -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4'
subprocess.check_output(cmd, shell=True)
subprocess.call(cmd, shell=True)
But I got "returned non-zero exit status 1". So what did I do wrong?
Try running the command in a bash shell first. I pasted the same code and it works for me.
Exit status 1 indicates the error is with the ffmpeg process, not with Python. If all goes well you will end up with an exit status of 0.
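To see what ffmpeg actually complained about behind "returned non-zero exit status 1", one option is to capture stderr; a sketch, assuming Python 3.7+ for capture_output:

import subprocess

cmd = 'ffmpeg -loop 1 -i image.jpg -i message.mp3 -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4'
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
if result.returncode != 0:
    print(result.stderr)  # ffmpeg writes its diagnostics to stderr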
For the purpose of making a time-lapse recording of desktop activity, I would like to "stream" the frame list to ffmpeg over time, rather than supplying it all at once at the beginning.
Currently, it is a two-step process:
save individual snapshots to disk
im = ImageGrab.grab()
im.save("frame_%05d.jpg" % count, "JPEG")  # zero-padded so the glob pattern picks the frames up in order
compile those snapshots with ffmpeg via
ffmpeg -r 1 -pattern_type glob -i '*.jpg' -c:v libx264 out.mp4
It would be nice if there were a way to merge the two steps so that I'm not flooding my hard drive with thousands of individual snapshots. Is it possible to do this?
ffmpeg can grab the screen: How to grab the desktop (screen) with FFmpeg
On Linux you could do it with:
ffmpeg -video_size 1024x768 -framerate 1 -f x11grab -i :0.0 -c:v libx264 out.mp4
(change video_size to your desktop size)
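If you would rather keep grabbing frames with PIL (e.g. on Windows, where x11grab is unavailable), a sketch of merging the two steps by piping JPEG frames straight into ffmpeg's stdin, so nothing is written to disk (assumes the image2pipe demuxer and one frame per second):

import subprocess
import time
from PIL import ImageGrab

proc = subprocess.Popen(
    ["ffmpeg", "-y", "-f", "image2pipe", "-vcodec", "mjpeg", "-framerate", "1",
     "-i", "-", "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"],
    stdin=subprocess.PIPE,
)
for _ in range(60):  # e.g. one frame per second for a minute
    im = ImageGrab.grab()
    im.save(proc.stdin, "JPEG")  # write the frame into the pipe instead of to a file
    time.sleep(1)
proc.stdin.close()
proc.wait()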