How to ignore auto rotation when reading videos with ffmpeg-python?

When reading a video with ffmpeg-python, if the video metadata contains a "rotate" attribute, it seems that by default ffmpeg transposes the incoming bytes according to the rotation value.
I would like to remove the automatic rotation. I tried the following, with no success:
import ffmpeg

process = (
    ffmpeg
    .input(filename)
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p', loglevel=0, vsync='0')
    .global_args('-noautorotate')
    .run_async(pipe_stdout=True)
)
The code runs without any issue, but the rotation is not ignored, as I would have expected.
According to this: https://gist.github.com/vxhviet/5faf792f9440e0511547d20d17208a76, the -noautorotate argument should be passed before the input.
I tried the following:
process = (
    ffmpeg
    .global_args('-noautorotate')
    .input(filename)
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p', loglevel=0, vsync='0')
    .run_async(pipe_stdout=True)
)
also with no success:
AttributeError: module 'ffmpeg' has no attribute 'global_args'
Any suggestion?
EDIT
Passing noautorotate as a kwarg does not work either (the size of the video after reading is 0):
process = (
    ffmpeg
    .input(self.file, **{'noautorotate':''})
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p', loglevel=1, vsync='0')
    .run_async(pipe_stdout=True)
)

Replace **{'noautorotate':''} with **{'noautorotate':None}.
Correct syntax:
process = (
    ffmpeg
    .input(self.file, **{'noautorotate':None})
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p', loglevel=1, vsync='0')
    .run_async(pipe_stdout=True)
)
When using **{'noautorotate':''}, FFmpeg outputs an error:
Option noautorotate (automatically insert correct rotate filters) cannot be applied to output url -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.
Error parsing options for output file .
Error opening output files: Invalid argument
For testing, we may add .global_args('-report') and look at the log file.
Executing the following command:
process = (
    ffmpeg
    .input('input.mkv', **{'noautorotate':None})
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p', vsync='0')
    .global_args('-report')
    .run_async()
    .wait()
)
The log file shows the built FFmpeg command line, which looks correct:
ffmpeg -noautorotate -i input.mkv -f rawvideo -pix_fmt yuv420p -vsync 0 pipe: -report
Executing the following command:
process = (
    ffmpeg
    .input('input.mkv', **{'noautorotate':''})
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p', vsync='0')
    .global_args('-report')
    .run_async()
    .wait()
)
The log file shows the following built FFmpeg command line:
ffmpeg -noautorotate  -i input.mkv -f rawvideo -pix_fmt yuv420p -vsync 0 pipe: -report
There is just an extra space before the -i, but for some reason FFmpeg interprets it as a URL (I think it might be a Unicode-related issue - the empty string is interpreted as a character that is not a space).
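For completeness, here is a minimal sketch (not from the original post; it assumes a placeholder input.mkv, uses numpy, and reads the coded frame size with ffmpeg.probe) of how the raw yuv420p frames might be consumed once -noautorotate is in place:

import ffmpeg
import numpy as np

filename = 'input.mkv'  # placeholder path

# Probe the file for the coded width/height (the rotate metadata is ignored here).
stream = next(s for s in ffmpeg.probe(filename)['streams'] if s['codec_type'] == 'video')
width, height = int(stream['width']), int(stream['height'])

process = (
    ffmpeg
    .input(filename, **{'noautorotate': None})
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p', vsync='0')
    .run_async(pipe_stdout=True)
)

frame_size = width * height * 3 // 2  # yuv420p uses 1.5 bytes per pixel

while True:
    raw = process.stdout.read(frame_size)
    if len(raw) < frame_size:
        break  # end of stream (or a truncated last read)
    frame = np.frombuffer(raw, np.uint8)  # flat YUV 4:2:0 buffer for one frame

process.wait()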

Related

I cannot convert a set of images taken from an FTP to a video using FFMPEG

I am writing Python code where I try to convert a set of images that I take from an FTP into a video using FFMPEG, but I cannot. I have tried, instead of reading the folder where the images are, to read a txt file with the names of the images that I want to use, in the format FFMPEG needs in order to read it properly, but I always get the same error: Protocol 'ftp' not on whitelist 'tcp'
In the same code, I also try to change the format of one video and change its resolution and size, and that part of the code works well.
However, using the same FTP reference as input, the images code fails while the video code works.
Besides, I have tried running in my local terminal the same command that I use in the code for the images, and locally it works properly, but not in the code.
Here is part of my code:
Video's code (it works):
command = """ffmpeg -i {i} -an -crf {r} {o}""".format(i=src_path,o=path,r=resolution)
An example of this command when I run it is the following (I don't want to write the exact IP and port):
ffmpeg -i ftp://user:user#ip:port/landing_ffmpeg/pruebas/pruebahd.mp4 -an -crf 45 tmp/pruebasalida456.mp4
And here is the images command (it doesn't work):
command = """ffmpeg -loop 1 -framerate {ips} -i {i} -t 10 -pix_fmt yuv420p {o}""".format(i=src_path,o=path,ips=img_per_sec)
An example of this command is the following:
ffmpeg -loop 1 -framerate 2 -i ftp://user:user#ip:port/landing_ffmpeg/pruebas/prueba_imagenes/prueba06.jpg -t 10 -pix_fmt yuv420p tmp/videoimagen.mp4
And the error I get with this command is the following:
[ftp # 0x560eb3e11800] Protocol 'ftp' not on whitelist 'tcp'!
[image2 # 0x560eb3e09380] Could not open file : ftp://user:user#ip:port/landing_ffmpeg/pruebas/prueba_imagenes/prueba06.jpg
I don't get this error when I run the video command, only for the images. And both commands run properly when I type them in my local terminal, with local paths.
I would appreciate it if someone could help me solve the problem and fix my code.
Thanks!
The error says it all. Try whitelisting the ftp protocol:
ffmpeg -protocol_whitelist ftp -loop 1 -framerate 2 \
-i ftp://user:user#ip:port/landing_ffmpeg/pruebas/prueba_imagenes/prueba06.jpg \
-t 10 -pix_fmt yuv420p tmp/videoimagen.mp4
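As a side note, here is a hedged sketch of running that command from Python with subprocess instead of a shell string (placeholder FTP URL and output path as in the question; tcp is added to the whitelist as well, as an assumption, since the ftp protocol is carried over tcp):

import subprocess

src_path = 'ftp://user:user#ip:port/landing_ffmpeg/pruebas/prueba_imagenes/prueba06.jpg'  # placeholder, as in the question
out_path = 'tmp/videoimagen.mp4'  # placeholder output

cmd = [
    'ffmpeg',
    '-protocol_whitelist', 'ftp,tcp',  # assumption: tcp may also need whitelisting, since ftp runs over tcp
    '-loop', '1', '-framerate', '2',
    '-i', src_path,
    '-t', '10', '-pix_fmt', 'yuv420p',
    out_path,
]
subprocess.run(cmd, check=True)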

Can moviepy be used without re-encoding video?

I wish to combine a remastered audio track to a video without re-encoding the source video.
I could achieve this with ffmpeg by running this command:
ffmpeg -i original_video.mp4 -i remastered_audio.wav -map 0:v -map 1:a -c:v copy -shortest remastered_video.mp4
The resulting remastered_video.mp4 file will be slightly larger than original_video.mp4 because the video track is left untouched and the newly encoded audio track is added. I wish to use moviepy to achieve the same, but I can't figure out how to do it without re-encoding the source video.
from moviepy.editor import AudioFileClip, VideoFileClip
original_video = VideoFileClip(<path_to_original_video_file.mp4>)
remastered_audio = AudioFileClip(<path_to_remastered_audio_track.wav>)
remastered_video = original_video.set_audio(remastered_audio)
remastered_video.write_videofile(<path_of_output_remastered_video.mp4>)
Here the output video file will be smaller than the source because the video track was re-encoded.
I then tried to pass my ffmpeg arguments like so:
args = ['-map', '0:v', '-map', '1:a', '-c:v', 'copy']
remastered_video.write_videofile(<path_of_output_remastered_video.mp4>,
                                 ffmpeg_params=args,
                                 logger=None)
but that generated the following error:
#[mp4 # 00000190c731d280] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
#Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
QUESTION
I know how to use subprocess to directly run ffmpeg, but I would like to avoid this and use moviepy. Is it possible to tell moviepy to copy the source video instead of re-encoding it, similar to the ffmpeg example command I posted above?
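Not a confirmed answer, but one direction worth testing is the ffmpeg_tools helper bundled with moviepy 1.x, which as far as I know wraps roughly the same stream-copy merge as the ffmpeg command above (treat the import path and the default parameters as assumptions for your moviepy version):

# Sketch only: assumes moviepy 1.x, where ffmpeg_merge_video_audio performs a stream-copy merge.
from moviepy.video.io.ffmpeg_tools import ffmpeg_merge_video_audio

ffmpeg_merge_video_audio(
    'original_video.mp4',    # placeholder input video
    'remastered_audio.wav',  # placeholder remastered audio
    'remastered_video.mp4',  # placeholder output
    vcodec='copy',           # keep the video track as-is, no re-encoding
)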

FFMPEG How to extract sound from a video and combine it with a picture

I use this code, but the picture is not visible :(
os.system(f'ffmpeg -stream_loop -1 -re -i {dir_path}/maxresdefault.jpg -re -i "{url}" -c:v libx264 -preset superfast -b:v 2500k -bufsize 3000k -maxrate 5000k -c:a aac -ar 44100 -b:a 128k -pix_fmt yuv420p -f flv rtmp://a.rtmp.youtube.com/live2/###########')
This is the main part of the code:
p = Playlist("https://www.youtube.com/playlist?list=PLSVhVwIpndKJyjEltpgM37o-OoH1p1840")
for video in p.videos[4:len(p.videos)]:
    url = ""
    while True:
        try:
            url = video.streams.get_highest_resolution().url
            break
        except:
            continue
    print(video.title)
    ffmpeg(url)  # the code that I wrote above
url is a YouTube video URL.
It looks like -stream_loop is not working as it is supposed to (I don't know the reason).
Add -f image2 before -i im001.jpg to force FFmpeg to read the input as an image (it is supposed to be the default, but I think there is an issue when using -stream_loop).
Remove the -re from the video input (I think there is an issue when using -re with -stream_loop).
Add -r 1 to set the frame rate of the input to 1 Hz (it is not a must, but the default framerate is too high for a single image).
Add -vn before the audio input to make sure FFmpeg doesn't decode the video from the URL (in case there is a video stream).
I added -vf "setpts=N/1/TB" -af "asetpts=PTS-STARTPTS" -map:v 0:v -map:a 1:a for correcting the timestamps and the mapping (we probably don't need it). The 1 in "setpts=N/1/TB" is the video framerate.
I added -bsf:v dump_extra (I think we need it for FLV streaming).
Add -shortest -fflags +shortest so that FFmpeg quits when the audio ends.
I don't know how to stream it to YouTube.
I used localhost and FFplay as a listener (it takes some time for the video to appear).
Here is the complete code sample:
from pytube import YouTube, Playlist
import subprocess as sp
import os

p = Playlist("https://www.youtube.com/playlist?list=PLSVhVwIpndKJyjEltpgM37o-OoH1p1840")

rtmp_url = "rtmp://127.0.0.1:1935/live/test"  # Used localhost for testing (instead of streaming to YouTube).

for video in p.videos[4:len(p.videos)]:
    url = ""
    while True:
        try:
            url = video.streams.get_highest_resolution().url
            break
        except:
            continue

    print(video.title)

    # Start the TCP server first, before the sending client.
    ffplay_process = sp.Popen(['ffplay', '-listen', '1', '-i', rtmp_url])  # Use FFplay sub-process for receiving the RTMP video.

    os.system(f'ffmpeg -r 1 -stream_loop -1 -f image2 -i maxresdefault.jpg -re -vn -i "{url}" -vf "setpts=N/1/TB" -af "asetpts=PTS-STARTPTS" -map:v 0:v -map:a 1:a -c:v libx264 -preset superfast -b:v 2500k -bufsize 5000k -maxrate 5000k -pix_fmt yuv420p -c:a aac -ar 44100 -b:a 128k -shortest -fflags +shortest -f flv -bsf:v dump_extra {rtmp_url}')

    ffplay_process.kill()  # Forcefully terminate FFplay sub-process.
Please make sure it's working with FFplay first (it is supposed to be reproducible).
Replace rtmp_url in my example with rtmp://a.rtmp.youtube.com/live2/###########.

Why is ffmpeg warning "Guessed Channel Layout for Input Stream #0.0 : mono"?

I am using ffmpeg to read and write raw audio to/from my Python script. Both the save and load commands I use produce the warning "Guessed Channel Layout for Input Stream #0.0 : mono". This is despite the fact that I am telling ffmpeg, using -ac 1 before both the input and output, that there is only one channel. I saw some other answers saying I should set -guess_layout_max 0, but this seems like a hack since I don't want ffmpeg to guess; I am telling it exactly how many channels there are with -ac 1. It should not need to make any guess.
My save command is formatted as follows with r being the sample rate and f being the file I want to save the raw audio to. I am sending raw audio via stdin from python over a pipe.
ffmpeg_cmd = 'ffmpeg -hide_banner -loglevel warning -y -ar %d -ac 1 -f u16le -i pipe: -ac 1 %s' % (r, shlex.quote(f))
Likewise my load command is the following with ffmpeg reading from f and writing raw audio to stdout.
ffmpeg_cmd = 'ffmpeg -hide_banner -loglevel warning -i %s -ar %d -ac 1 -f u16le -c:a pcm_u16le -ac 1 pipe:' % (shlex.quote(f), r)
-ac sets the number of channels, not their layout; there can be multiple layouts for any given channel count.
Use the option -channel_layout.
ffmpeg -hide_banner -loglevel warning -y -ar %d -ac 1 -channel_layout mono -f u16le -i pipe: ...
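For reference, a minimal sketch of the corrected save command launched from Python (placeholder sample rate and output path; the raw u16le bytes here are just one second of midpoint "silence" for demonstration):

import shlex
import subprocess

r = 16000      # placeholder sample rate
f = 'out.wav'  # placeholder output file

ffmpeg_cmd = (
    'ffmpeg -hide_banner -loglevel warning -y '
    '-ar %d -ac 1 -channel_layout mono -f u16le -i pipe: '
    '-ac 1 %s' % (r, shlex.quote(f))
)

raw_audio = b'\x00\x80' * r  # one second of u16le "silence" (midpoint samples)

proc = subprocess.Popen(shlex.split(ffmpeg_cmd), stdin=subprocess.PIPE)
proc.communicate(input=raw_audio)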

Video File Size Optimization

I'm trying the FFmpeg 2-pass technique in Python but couldn't find any Python tutorials for this task.
Is there no way other than using subprocess? If there's an illustrative example, please provide it.
Note:
I have tried 2-pass in the script like this:
input_fit = {self.video_in: None}
output = {None: "-c:v h264 -b:v 260k -pass 1 -an -f mp4 NUL && ^",
          self.video_out: ("ffmpeg -i \"%s\" -c:v h264 -b:v 260k -pass 2 " % self.video_in)}
## video_out IS The Name of The output File ##
model = FFmpeg(inputs=input_fit, outputs=output)
print(model.cmd)
It raises an error:
FFRuntimeError: exited with status 1
but when I take the generated command and run it directly in the terminal, it runs without errors and generates the video perfectly.
Could anyone tell me what the problem is, please?
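For illustration, here is a hedged sketch (placeholder file names, Windows-style NUL sink) of running the two passes as two separate ffmpeg processes via subprocess; the "&& ^" chaining in the attempt above is a shell feature, so it may well fail when the command is executed without a shell:

import subprocess

video_in = 'input.mp4'    # placeholder input
video_out = 'output.mp4'  # placeholder output
null_sink = 'NUL'         # use '/dev/null' on Linux/macOS

# Pass 1: analysis only, no audio, output discarded.
subprocess.run(
    ['ffmpeg', '-y', '-i', video_in, '-c:v', 'h264', '-b:v', '260k',
     '-pass', '1', '-an', '-f', 'mp4', null_sink],
    check=True,
)

# Pass 2: the actual encode, reusing the log file written by pass 1.
subprocess.run(
    ['ffmpeg', '-y', '-i', video_in, '-c:v', 'h264', '-b:v', '260k',
     '-pass', '2', video_out],
    check=True,
)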
