Video stabilization of a spinning-product video with OpenCV - Python

I'm trying to stabilize video using OpenCV, but my video is a 360° walkaround product spin, and the stabilization isn't perfect: the result still shakes, because optical flow doesn't work well on the spinning video.
Can anyone share Python code for this?
I also tried FFmpeg, but I still have the same issue. This is the command:
INPUT=$1
TRF=transforms.trf
STB_OUTPUT=stb1.mp4
ffmpeg -y -i "$INPUT" -vf vidstabdetect=stepsize=32:shakiness=5:accuracy=15:result="$TRF" -f null -
ffmpeg -y -i "$INPUT" -vf vidstabtransform=input="$TRF":smoothing=30,unsharp=5:5:0.8:3:3:0.4 "$STB_OUTPUT"
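The thread didn't include any Python code, so here is a minimal sketch of feature-based stabilization with OpenCV (cv2 and numpy assumed; the names smooth_trajectory and stabilize are my own). For a 360° spin, keep the smoothing radius small: a large window tries to "correct" the deliberate rotation itself and makes the output drift.

```python
import numpy as np

def smooth_trajectory(trajectory, radius=5):
    """Moving-average smoothing of an (n_frames, 3) dx/dy/da trajectory."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode='edge')
    return np.stack(
        [np.convolve(padded[:, i], kernel, mode='valid')
         for i in range(trajectory.shape[1])],
        axis=1,
    )

def stabilize(in_path, out_path, radius=5):
    import cv2  # imported here so the smoothing helper stays NumPy-only

    cap = cv2.VideoCapture(in_path)
    n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

    # Pass 1: estimate the inter-frame motion (dx, dy, rotation angle).
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    transforms = []
    for _ in range(n - 1):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        if pts is None:
            transforms.append([0.0, 0.0, 0.0])
            prev_gray = gray
            continue
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_prev = pts[status.ravel() == 1]
        good_next = nxt[status.ravel() == 1]
        m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
        if m is None:
            m = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        transforms.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])])
        prev_gray = gray

    # Pass 2: smooth the cumulative trajectory and re-warp each frame.
    transforms = np.array(transforms)
    trajectory = np.cumsum(transforms, axis=0)
    smoothed = transforms + (smooth_trajectory(trajectory, radius) - trajectory)

    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
    for dx, dy, da in smoothed:
        ok, frame = cap.read()
        if not ok:
            break
        m = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]])
        out.write(cv2.warpAffine(frame, m, (w, h)))
    cap.release()
    out.release()
```

This mirrors what vidstab does in the FFmpeg commands above (detect motion, smooth the path, transform), but in Python so the feature detection can be tuned for the spin.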


Option -s is deprecated, use -video_size. Option video_size not found

I'm trying to play a video like a preview before rendering:
ffplay test.mp4 -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS" -s 640x480 -aspect 4:2
Without -s it works fine, but when I add -s it outputs an error:
Option -s is deprecated, use -video_size.
Option video_size not found.
Let me know the correct syntax. Also, sometimes after converting I don't see the change in the video's details:
ffmpeg -i 21.mp4 -vf "scale=1280*720" 21_edit.mkv
These errors are specific to ffplay.
-s is deprecated in favor of -video_size only in ffplay, not in ffmpeg.
-video_size is used to manually tell ffplay the frame size of videos that do not contain a header (such as raw video). It is not used to resize videos.
To resize/scale video
Use the scale filter:
ffplay -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS,scale=640:-1" test.mp4
To resize the player window
Use the -x or -y options:
-x width
Force displayed width.
-y height
Force displayed height.
Example:
ffplay -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS" -x 640 test.mp4
The -s option is deprecated, so you can use the scale filter directly to specify the video dimensions. The modified ffplay command is:
ffplay test.mp4 -af "volume=8.0,atempo=4.0" -vf "transpose=2,transpose=2,setpts=1/4*PTS,scale=640:480" -aspect 4:2

ffmpeg-python library to add a custom thumbnail to a .mp4 file using ffmpeg?

Based on this post:
How do I add a custom thumbnail to a .mp4 file using ffmpeg?
Using ffmpeg 4.0 (released Apr 20, 2018) or newer:
ffmpeg -i video.mp4 -i image.png -map 1 -map 0 -c copy -disposition:0 attached_pic out.mp4
As of version 4.2.2 (see Section 5.4 in the FFmpeg documentation), to add an embedded cover/thumbnail:
ffmpeg -i in.mp4 -i IMAGE -map 0 -map 1 -c copy -c:v:1 png -disposition:v:1 attached_pic out.mp4
How can I do that using the ffmpeg-python library?
https://github.com/kkroening/ffmpeg-python/tree/master/examples
thanks
Even if the OP might not need a solution anymore, I came here to find it.
I found an example in the issues of the ffmpeg-python GitHub repository:
import ffmpeg

video = ffmpeg.input('in.mp4')
cover = ffmpeg.input('cover.png')
(
    ffmpeg
    .output(video, cover, 'out.mp4', c='copy', **{'c:v:1': 'png'}, **{'disposition:v:1': 'attached_pic'})
    .global_args('-map', '0')
    .global_args('-map', '1')
    .global_args('-loglevel', 'error')
    .run()
)

Add watermark to a video using ffmpeg in python

I want to add a watermark to my video. With ffmpeg I found this command:
ffmpeg -i input.mp4 -i watermark.png -filter_complex "overlay=1500:1000" output.mp4
But it runs in the CLI, not in Python code (I couldn't find how). Is there any way to embed it in Python code (without calling it in a subprocess)?
Edit: I found pyffmpeg, but there's no guide on how to use it either:
from pyffmpeg import FFmpeg
ff = FFmpeg()
ff.options("-i input.mp4 -i watermark.png -filter_complex overlay=1500:10 output.mp4")

Edit random videos from youtube using ffmpeg

I'm trying to edit several videos of different sizes, formats, ratios, etc. (random videos from YouTube).
To edit these I use: ffmpeg -f concat -i videos.txt -s 1280x720 -c copy output/edited_video.mp4 (all videos are listed in videos.txt).
But it doesn't work like that; it seems ffmpeg needs homogeneous videos to produce the final video. For now the first video displays fine, but the following ones are glitched...
How can I homogenize any video to the same size, ratio, format, codec, etc.?
(For now I use: ffmpeg -i input/video.mp4 -r 30 -vcodec mpeg4 -s 1280x720 -c copy temp/video.mp4 but it seems it's not enough.)
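The thread has no answer; a hedged sketch of the usual approach is to re-encode every input to the same size, frame rate, and codecs before concatenating. (One likely reason the question's normalization command "is not enough": -c copy overrides -vcodec mpeg4, so no re-encoding actually happens.) The helper below only builds an ffmpeg argument list; the paths and parameter choices are illustrative:

```python
def normalize_cmd(src, dst, width=1280, height=720, fps=30):
    """ffmpeg arguments to re-encode one clip to a uniform size/rate/codec.
    scale+pad preserves the aspect ratio instead of stretching."""
    vf = (f"scale={width}:{height}:force_original_aspect_ratio=decrease,"
          f"pad={width}:{height}:(ow-iw)/2:(oh-ih)/2")
    return ["ffmpeg", "-y", "-i", src,
            "-vf", vf, "-r", str(fps),
            "-c:v", "libx264", "-c:a", "aac", "-ar", "44100",
            dst]

# Run this (e.g. via subprocess.run) for each clip, then concat the
# normalized copies, which can now safely use stream copy:
# ffmpeg -f concat -safe 0 -i videos.txt -c copy output/edited_video.mp4
```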

With ffmpeg's image to movie feature, is it possible to pass in frames over a period of time versus all at once?

For the purpose of making a time-lapse recording of desktop activity, is it possible to "stream" the frame list to ffmpeg over time, rather than supplying it all at once at the beginning?
Currently, it is a two-step process:
1. Save individual snapshots to disk:
from PIL import ImageGrab
im = ImageGrab.grab()
im.save("frame_%s.jpg" % count, "JPEG")
2. Compile those snapshots with ffmpeg via:
ffmpeg -r 1 -pattern_type glob -i '*.jpg' -c:v libx264 out.mp4
It would be nice if there were a way to merge the two steps so that I'm not flooding my hard drive with thousands of individual snapshots. Is it possible to do this?
ffmpeg can grab the screen: How to grab the desktop (screen) with FFmpeg
On Linux you could do it with:
ffmpeg -video_size 1024x768 -framerate 1 -f x11grab -i :0.0 -c:v libx264 out.mp4
(change video_size to your desktop size)
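If you specifically want to feed the PIL frames yourself as you capture them (rather than letting ffmpeg grab the screen), a hedged alternative sketch is to pipe raw RGB bytes into ffmpeg's stdin, so no snapshots ever hit the disk. The helper name rawvideo_pipe_cmd is my own, and the capture loop is illustrative:

```python
import subprocess

def rawvideo_pipe_cmd(width, height, fps, out_path):
    """ffmpeg arguments that read raw RGB frames from stdin and encode them."""
    return ["ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-video_size", f"{width}x{height}", "-framerate", str(fps),
            "-i", "-",                       # read frames from stdin
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]

# Illustrative capture loop (requires Pillow and the ffmpeg binary):
# from PIL import ImageGrab
# first = ImageGrab.grab()
# proc = subprocess.Popen(rawvideo_pipe_cmd(*first.size, 1, "out.mp4"),
#                         stdin=subprocess.PIPE)
# for _ in range(600):                      # one frame per second for 10 min
#     proc.stdin.write(ImageGrab.grab().convert("RGB").tobytes())
# proc.stdin.close()
# proc.wait()
```

Because -f rawvideo carries no header, the -video_size and -framerate options here are exactly the headerless-input case that -video_size is meant for, as noted in the ffplay answer above.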
