How can I make cv2.VideoWriter lighter - python

I'm working on a project where I need to perform some manipulations on a video file using opencv.
A big problem I noticed is that the video files I create using cv2.VideoWriter are extremely large.
For instance, when I perform video stabilization, my input video is 10MB and the output video I create can reach 80MB, despite having the same fps and the same number of frames. Both input and output are in color, both are in AVI format, and so on.
Is this a known feature of cv2.VideoWriter? Or is there a way that I can make the output videos smaller?

Re-encoding with ffmpeg can handle the problem:
ffmpeg -i output.mp4 output_light.mp4
This simple re-encode typically reduces the size of the video by about a third.
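How small the file ends up also depends heavily on the FOURCC codec you pass to cv2.VideoWriter; codecs like MJPG or raw FOURCCs produce much larger files than MPEG-4-family ones. A minimal sketch, assuming an AVI container and that your OpenCV build ships the XVID (MPEG-4) encoder:

import cv2

cap = cv2.VideoCapture("input.avi")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# XVID usually compresses far better than MJPG or raw FOURCCs
fourcc = cv2.VideoWriter_fourcc(*"XVID")
out = cv2.VideoWriter("output.avi", fourcc, fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... your stabilization / manipulation here ...
    out.write(frame)  # frame must match (w, h) and have 3 channels

cap.release()
out.release()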

Related

I am trying to save frames from a webcam via Python and ffmpeg, but the video becomes way too fast

I get a stream of cv2 images from my webcam and want to save it to a video file. After playing a bit with cv2.VideoWriter() it turned out that using ffmpeg would provide more options and - apparently, following a few threads here on SO - lead to better results. So I gave the VidGear Python library a try, and it seems to work fine.
There is one catch though: My webcam provides a variable framerate, most of the time between 10 and 30 FPS. When saving these frames the video file becomes way too fast, like watching in fast-forward. One real-time minute becomes only a few seconds in the video.
I tried to play with various combinations of ffmpeg's -framerate and -r parameters, but without luck. Here is the command I am using right now:
ffmpeg -y -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt bgra -framerate 25.0 -i - -vcodec libx265 -crf 25 -r 25 -preset fast <output_video_file>
For the record, I am creating the WriteGear instance from the VidGear library like this:
video_params = {
    "-vcodec": "libx265",
    "-crf": 25,
    "-input_framerate": 25,
    "-r": 25,
}
WriteGear(output_filename=video_file, logging=True, **video_params)
Any ideas what I am doing wrong here and how I need to call ffmpeg?
Ok, I solved my issue now by using cv2.VideoWriter() instead of the VidGear library. Don't know why, but I was not able to get the latter working properly - either I was not using it correctly, or the library is broken for my use case.
Either way, after playing around with the OpenCV solution, it turned out that the VideoWriter class works quite well, as long as I provide it the FPS value reported by the webcam.
For the record, one thing to note though: even after finding a correct codec for my platform and double-checking the dimensions of the images to be saved, I still ended up with an "empty" (1k) .mp4 video file. The reason was that all frames from my webcam came with 4 channels, but the VideoWriter class apparently expects frames with 3 channels, silently dropping all other, "invalid" frames. With the following code I was able to convert my 4-channel images:
if len(image.shape) > 2 and image.shape[2] == 4:
    # convert BGRA (4 channels) to BGR (3 channels) so VideoWriter accepts the frame
    image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)
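A minimal sketch of that setup, assuming the webcam reports a usable FPS value (device index, codec and filename are illustrative):

import cv2

cap = cv2.VideoCapture(0)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # some drivers report 0; fall back (assumption)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("webcam.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, image = cap.read()
    if not ok:
        break
    if len(image.shape) > 2 and image.shape[2] == 4:
        image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)  # VideoWriter wants 3 channels
    out.write(image)

cap.release()
out.release()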

What are the steps needed to convert a video to a Gif in Python?

I have a project that involves creating a program that converts a video into a GIF. Sounds simple enough if I use OpenCV, but I need to organize the bitstream of the GIF file myself. I Googled around and I can't find any resources that outline the steps required to achieve this or how to organize the bitstream myself.
I'm assuming the steps I need are image compression for each frame, but I'm not sure if I still need motion estimation if I want a smooth GIF in the end.
Edit: Just to be clear, I need this to be done without using a library that converts the video to a GIF for me, so moviepy won't work
from moviepy.editor import *

clip = (VideoFileClip("ABCD.mp4")
        .subclip((1, 22.65), (1, 23.2))
        .resize(0.3))
clip.write_gif("ABCD.gif")
You can download the YouTube video with this command if you have youtube-dl installed:
youtube-dl 2Jw-AeaU5WI -o ABCD.mp4
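If the bitstream really has to be assembled by hand, the usual pipeline is: decode the frames, quantize each one to a palette of at most 256 colours, then LZW-encode the palette indices into GIF data blocks. GIF has no motion estimation; smoothness comes from the frame rate and any frame-differencing you choose to do yourself. A minimal first-step sketch, assuming OpenCV for decoding and Pillow only for the quantization step (the LZW packing and block layout are left to you):

import cv2
from PIL import Image

cap = cv2.VideoCapture("ABCD.mp4")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # quantize to at most 256 colours, as the GIF format requires
    frames.append(Image.fromarray(rgb).quantize(colors=256))
cap.release()
# each item in frames now holds palette indices (frame.tobytes()),
# ready for your own LZW encoder and GIF block writer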

ffmpeg - output images in memory instead of disk

I have a Python script which basically converts a video into images and stores them in a folder; then all these images are read, information is extracted from them, and the images are deleted. Since the image-writing step is so slow and apparently useless for what I need, I would like to store the images in memory instead of on disk, read them from there and do my operations, which would speed up my process a lot.
Now my code look like:
1st step:
ffmpeg -i myvideo.avi -r 1 -f image2 C:\img_temp\img%04d.png
2nd step:
for i in range(1, len(os.listdir(IMGTEMP))):
    # My operations for each image
3rd step:
for image in os.listdir(IMGTEMP):
    os.remove(IMGTEMP + "\\" + image)
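One way to skip the disk entirely is to have ffmpeg write raw frames to its stdout and read them straight into numpy arrays; a minimal sketch, assuming you know the frame size of myvideo.avi (adjust W and H):

import subprocess
import numpy as np

W, H = 1280, 720  # assumed frame size of myvideo.avi
cmd = [
    "ffmpeg", "-i", "myvideo.avi", "-r", "1",
    "-f", "rawvideo", "-pix_fmt", "bgr24", "-",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
while True:
    raw = proc.stdout.read(W * H * 3)
    if len(raw) < W * H * 3:
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(H, W, 3)
    # my operations for each image, now on an in-memory array
proc.stdout.close()
proc.wait()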
With MoviePy:
import moviepy.editor as mpy

clip = mpy.VideoFileClip("video.avi")
for frame in clip.iter_frames():
    # do something with the frame (a HxWx3 numpy array)
    pass
The short version is that you could use a ramdisk or tmpfs or something like that, so that the files are indeed stored in memory. However, I'm wondering about your "operations for each image". Do you really need an image file for them? If all you're doing is reading their size, why do you need the image (with its compression/decompression etc.) overhead at all? Why not just use the FFmpeg API, read the AVI file, decode frames, and compute your metrics on the decoded data directly?
Have a look at PyAV. (There is documentation, but it's rather sparse.)
It looks like you could just open a video, and then iterate over the frames.
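For reference, a minimal PyAV sketch along those lines (the file name is assumed):

import av

container = av.open("myvideo.avi")
for frame in container.decode(video=0):
    img = frame.to_ndarray(format="bgr24")  # HxWx3 numpy array, never written to disk
    # do your per-image operations on img here
container.close()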

Why do the frames of a VideoClip change when it is written to a video file?

I wrote the following code:
from moviepy.editor import *
from PIL import Image
clip = VideoFileClip("video.mp4")
video = CompositeVideoClip([clip])
video.write_videofile("video_new.mp4", fps=clip.fps)
Then, to check whether the frames had changed and, if so, which function changed them, I retrieved the first frame of 'clip', 'video' and 'video_new.mp4' and compared them:
clip1 = VideoFileClip("video_new.mp4")
img1 = clip.get_frame(0)
img2 = video.get_frame(0)
img3 = clip1.get_frame(0)
a = img1[0, 0, 0]
b = img2[0, 0, 0]
c = img3[0, 0, 0]
I found that a=24 and b=24, but c=26. In fact, on running an array-compare loop, I found that img1 and img2 were identical but img3 was different.
I suspect that the function video.write_videofile is responsible for the change in the array, but I don't know why. Can anybody explain this to me and also suggest a way to write clips without changing their frames?
PS: I read the docs of VideoFileClip, FFMPEG_VideoWriter and FFMPEG_VideoReader but could not find anything useful. I need to read back the exact frame as it was before writing, for some code I'm working on. Please suggest me a way.
Like JPEG, MPEG-4 uses lossy compression, so it's not surprising that the frames read from "video_new.mp4" are not perfectly identical to those in "video.mp4". And as well as the variations caused purely by the lossy compression there are also variations that arise due to the wide variety of encoding options that can be used by programs that write MPEG data.
If you really need to be able to read back the exact same frame data that you write then you will have to use a different file format, but be warned: your files will be huge!
The choice of video format partly depends on what the image data is like and on what you want to do with it. If the data uses 256 colours or less, and you don't intend to perform transformations on it that will modify the colours, a simple GIF anim is a good choice. But bear in mind that even something like non-integer scaling modifies colours.
If you want to analyze the image data and transform it in various ways, it makes sense to use a format with better colour support than GIF, eg a stream of PNG images, which I assume is what Zulko mentions in his answer. FWIW, there's an anim format related to PNG called MNG, but it is not well supported or widely known.
Another option is to use a stream of PPM images, or maybe even a stream of YUV data, which is useful for certain kinds of analysis and convenient if you do intend to encode as MPEG for final consumption. The PPM format is very simple and easy to work with; YUV is slightly messy since it's a raw format with no header data, so you have to keep track of the image size and resolution data yourself.
The file size of PPM or YUV streams is large, since they incorporate no compression at all, but of course they can be compressed using standard compression techniques, if you want to save a little space when saving them to disk. OTOH, typical video processing workflows that use such streams often don't bother writing them to disk: they are sent in pipelines (perhaps using named pipes), so the file size is (mostly) irrelevant.
Although such formats take up a lot of space compared to MPEG-based files, they are far superior for use as intermediate formats while performing image data analysis and transformation, since every time you write & read back MPEG you are losing a little bit of quality.
I assume that you intend to do your image data analysis and transformations using PIL/Pillow. But you can also work with PPM & YUV streams using the ffmpeg / avconv command line programs; and the ffmpeg family happily work with sets of individual image files and GIF anims, too.
You can have lossless compression with the 'png' codec:
clip.write_videofile('clip_new.avi', codec='png')
EDIT @PM 2Ring: when you write the line above, it makes a video that is compressed using the PNG algorithm (I'm not sure whether each frame is a png or if it's more subtle).
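To check that the round trip really is lossless, you can compare first frames the same way the question does; a sketch reusing the file names above:

import numpy as np
from moviepy.editor import VideoFileClip

clip = VideoFileClip("video.mp4")
clip.write_videofile("clip_new.avi", codec="png", fps=clip.fps)

clip1 = VideoFileClip("clip_new.avi")
print(np.array_equal(clip.get_frame(0), clip1.get_frame(0)))  # should print True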

Video editing with python or command line

I need to perform the following operations in my python+django project:
joining videos with same size and bitrate
joining videos and images (for the image manipulation I'll use PIL: writing text to an existing image)
fading in the transitions between videos
I already know of some video editing libraries for python: MLT framework (too complex for my needs), pygame and pymedia (don't include all the features I want), gstreamer bindings (terrible documentation).
I could also do all the work from command line, using ffmpeg, mencoder or transcode.
What's the best approach to do such a thing on a Linux machine?
EDIT: I eventually chose to work with melt (MLT's command-line tool).
AviSynth (http://avisynth.org/mediawiki/Main_Page) is a scripting language for video.
Because ffmpeg is available on GNU/Linux, I think using it through modules such as pexpect or subprocess is the best solution.
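For example, joining clips that share codec, size and bitrate with ffmpeg's concat demuxer via subprocess might look like this (file names assumed):

import subprocess

clips = ["part1.mp4", "part2.mp4"]
with open("list.txt", "w") as f:
    for c in clips:
        f.write("file '%s'\n" % c)

# -c copy avoids re-encoding; it only works when the clips
# share codec, resolution and framerate
subprocess.check_call([
    "ffmpeg", "-y", "-f", "concat", "-safe", "0",
    "-i", "list.txt", "-c", "copy", "joined.mp4",
])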
You can use OpenCV for joining videos and images. See the documentation, in particular the image/video I/O functions.
However, I'm not sure if the library has functions that will do the fading for you.
What codec are you using?
There are two ways to compress video: lossy and lossless. It's easy to tell them apart. Depending on their length, lossy video files are in the megabyte range, lossless (including uncompressed) are in the gigabyte range.
Here's an oversimplification. Editing video files is a lot different from editing film, where you just glue the pieces of film together. It's not just about bitrate, frame rate and resolution. Most lossy video codecs (MPEG 1-4, Ogg Theora, H.26x, VC-1, etc.) start out with a full frame then record only the changes in movement. When you watch the video what you're actually seeing is a static scene with layer after layer of changes pasted on top of it. It looks like you're seeing full frame after full frame, but if you looked at the data in the file all you'd see would be a black background and scrambled blocks of video.
If it's uncompressed or uses a lossless codec (HuffYUV, Lagarith, FFV1, etc.) then you can edit your video file just like film. You still have to re-encode the video, but it won't affect video quality, and you can cut, copy and paste however you like as long as the resolution and frame rate are the same. If your video is lossy you have to re-encode it with some loss of video quality, just like saving the same image as JPEG over and over.
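If the source is lossy, a common workflow is a single re-encode into a lossless intermediate, doing all the cutting and pasting there, and encoding once at the end for delivery; for example with ffmpeg and FFV1 (file names assumed):
ffmpeg -i input.mp4 -c:v ffv1 -c:a copy intermediate.mkv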
Another option might be to put several pieces of video into a container like MKV and use chapters to have it jump from piece to piece. I seem to remember being told this is possible but I've never tried it so maybe it isn't.
