Incorrect colours with moviepy - python

I have an image with a grey background and I'm trying to insert it into the main video. Everything looks fine in the preview, but after writing to file the grey background turns black (with the default libx264 codec).
Then I tried the png codec and it reproduced the true colours of the image, but at around 130x the size of the mp4/libx264 output (it's a 20-minute video, so the file came out at 48 GB). That was unacceptably huge, so I tried converting it with VLC and it worked! The quality of the video was unchanged and the size only grew by around 10-20 MB.
I wanted to automate this process with moviepy, but rendering the video and then manually converting it with VLC takes forever. So is there a better approach using only Python? (For the conversion I used a basic VLC codec profile.)
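One thing worth trying before falling back to VLC is staying with libx264 but pinning the pixel format and quality explicitly instead of relying on moviepy's defaults. A minimal sketch, assuming the overlay image and main video are named overlay.png and main.mp4 (the exact ffmpeg flags are an assumption to test, not a verified fix for this clip):

from moviepy.editor import VideoFileClip, ImageClip, CompositeVideoClip

# Hypothetical composite: overlay the grey-background image on the main video.
video = VideoFileClip("main.mp4")
image = ImageClip("overlay.png").set_duration(video.duration)
final = CompositeVideoClip([video, image.set_position(("center", "bottom"))])

# Keep libx264 but pass explicit ffmpeg options: -pix_fmt pins the pixel
# format conversion, and -crf 18 is near-lossless while staying far
# smaller than the png codec's output.
final.write_videofile(
    "out.mp4",
    codec="libx264",
    ffmpeg_params=["-pix_fmt", "yuv420p", "-crf", "18"],
)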

Related

Fast-Forward issue with saved video from PiCam

I'm working on code that reads incoming video from a Raspberry Pi, performs face detection on the frames, draws boxes around the faces, and then writes the frames back into an MP4 file at the same FPS. I use OpenCV to open and read from the PiCam.
When I look at the saved video, it plays back too fast. I let my code run for around 2 minutes, but the resulting video is only 30 seconds long. When I disable all post-processing (face detection), the output video plays at a stable speed.
I can understand that the Raspberry Pi has a small processor for heavy computation, but I can't understand why the video is shorter. Is it possible that my face detection pipeline runs much slower than the camera FPS, so the camera buffer drops frames that aren't grabbed by the pipeline in time?
Any help here is highly appreciated!
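That theory is easy to test: if the writer is opened at the camera's nominal FPS but the detection loop runs slower, the saved file will play back faster than real time. A minimal sketch, assuming the camera is reachable as device 0 and with the detector left as a placeholder: measure the FPS the pipeline actually achieves and open cv2.VideoWriter with that rate instead.

import time
import cv2

cap = cv2.VideoCapture(0)  # placeholder; use your PiCam capture here

# Measure how fast the full pipeline (grab + detect + draw) really runs.
frames = []
t0 = time.time()
while len(frames) < 200:
    ok, frame = cap.read()
    if not ok:
        break
    # ... face detection and box drawing would go here ...
    frames.append(frame)
effective_fps = len(frames) / (time.time() - t0)

# Open the writer with the measured rate, not the camera FPS, so the
# video's duration matches the wall-clock recording time.
h, w = frames[0].shape[:2]
out = cv2.VideoWriter("out.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      effective_fps, (w, h))
for f in frames:
    out.write(f)
out.release()
cap.release()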

How can I make cv2.VideoWriter lighter

I'm working on a project where I need to perform some manipulations on a video file using opencv.
A big problem I noticed is that the video files I create using cv2.VideoWriter are extremely large.
For instance, when I perform video stabilization, my input video is 10 MB and the output I create can reach even 80 MB, despite having the same FPS and the same number of frames. Both input and output are in color, both are in AVI format, and so on...
Is this a known feature of cv2.VideoWriter? Or is there a way that I can make the output videos smaller?
Using ffmpeg handles the problem:
ffmpeg -i output.mp4 output_light.mp4
This simple re-encode reduces the size of the video by about a third.
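Staying inside OpenCV, the fourcc passed to cv2.VideoWriter is what picks the codec, and the codec is what drives the file size: something like MJPG compresses every frame independently and produces much larger files than an interframe codec. A sketch of re-encoding with mp4v (which fourccs are available depends on your OpenCV/ffmpeg build; file names are placeholders):

import cv2

cap = cv2.VideoCapture("input.avi")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# mp4v uses interframe compression, so the output is typically several
# times smaller than the same frames written with MJPG or a raw fourcc.
out = cv2.VideoWriter("output_light.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)
cap.release()
out.release()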

python write_videofile results in a black screen video

Code:
from moviepy.editor import ImageSequenceClip

clip = ImageSequenceClip(new_frames, fps=fps1)
clip.write_videofile("out.mp4", fps=fps1)
TL;DR: this code produces a black-screen video.
fps1 comes from the original video I'm stitching onto.
I am trying to stitch a video using frames from many videos.
I created an array holding all the images in their respective places, then went frame by frame over each video and assigned the correct frame in the array. Done that way the result was fine, but the process was slow, so I saved each frame to a file and loaded it during the stitching process. Python then threw an exception that the array was too big, so I chunked the video into parts and saved each chunk. The result came out as a black screen, even though when I debugged I could display each frame of the ImageSequenceClip correctly. I tried reinstalling moviepy. I'm on Windows 10 and I converted all frames to PNG.
Well, @BajMile was indeed right in suggesting OpenCV.
What took me a while to realize is that I had to use only OpenCV functions, including for the images I was opening and resizing.
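For anyone hitting the same wall, a minimal all-OpenCV version of that pipeline might look like the sketch below (the frame directory, size, and FPS are placeholders). The point is that cv2.imread returns BGR uint8 arrays, which is exactly what cv2.VideoWriter expects; mixing in images opened or resized with other libraries is an easy way to end up with empty or black output.

import glob
import cv2

frame_paths = sorted(glob.glob("frames/*.png"))  # needs zero-padded names
size = (1280, 720)  # target (width, height)

out = cv2.VideoWriter("out.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, size)
for path in frame_paths:
    img = cv2.imread(path)       # BGR uint8, as VideoWriter expects
    img = cv2.resize(img, size)  # every frame must match the writer size
    out.write(img)
out.release()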

What are the steps needed to convert a video to a Gif in Python?

I have a project that involves creating a program that converts a video into a GIF. It sounds simple enough if I use OpenCV, but I need to organize the bitstream of the GIF file myself. I've Googled around and can't find any resources that outline the steps required or how to organize the bitstream myself.
I'm assuming the main step is compressing each frame as an image, but I'm not sure whether I still need motion estimation to end up with a smooth GIF.
Edit: just to be clear, I need this to be done without using a library that converts the video to a GIF for me, so moviepy won't work.
from moviepy.editor import *

clip = (VideoFileClip("ABCD.mp4")
        .subclip((1, 22.65), (1, 23.2))
        .resize(0.3))
clip.write_gif("ABCD.gif")
You can download the YouTube video with this command if you have youtube-dl installed:
youtube-dl 2Jw-AeaU5WI -o ABCD.mp4
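If the bitstream really has to be assembled by hand, the first concrete step is reducing every frame to at most 256 colours, because GIF stores indexed-colour frames against a palette. A sketch of that stage using only OpenCV and NumPy, with a crude uniform 3-3-2 palette rather than a proper quantizer (the input name is a placeholder):

import cv2
import numpy as np

cap = cv2.VideoCapture("ABCD.mp4")
indexed_frames = []
while True:
    ok, bgr = cap.read()
    if not ok:
        break
    # 3-3-2 quantization: 8 levels of red/green, 4 of blue = 256 colours,
    # exactly the size of one GIF global colour table.
    r = bgr[..., 2] >> 5
    g = bgr[..., 1] >> 5
    b = bgr[..., 0] >> 6
    indexed_frames.append(((r << 5) | (g << 2) | b).astype(np.uint8))
cap.release()

# What remains is pure byte-level work: the GIF89a header, the 256-entry
# colour table, a graphics control extension per frame (frame delay),
# and LZW compression of each index array.

As for motion estimation: GIF has no motion compensation at all, so a full motion search isn't needed. The usual optimization is simply diffing consecutive frames and encoding only the changed rectangle, which the format supports via per-frame image descriptors and disposal methods.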

Video editing with python or command line

I need to perform the following operations in my python+django project:
joining videos with same size and bitrate
joining videos and images (for the image manipulation I'll use PIL: writing text to an existing image)
fading in the transitions between videos
I already know of some video editing libraries for python: MLT framework (too complex for my needs), pygame and pymedia (don't include all the features I want), gstreamer bindings (terrible documentation).
I could also do all the work from command line, using ffmpeg, mencoder or transcode.
What's the best approach to doing this on a Linux machine?
EDIT: I eventually chose to work with melt (MLT's command-line tool).
AviSynth (http://avisynth.org/mediawiki/Main_Page) is a scripting language for video.
Because ffmpeg is available on GNU/Linux, I think driving it from Python with modules such as subprocess or pexpect is the best solution.
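For example, the joining requirement maps directly onto ffmpeg's concat demuxer driven through subprocess. A sketch with placeholder file names (-c copy avoids re-encoding, which works here precisely because the clips share size and bitrate):

import subprocess

clips = ["part1.mp4", "part2.mp4", "part3.mp4"]

# The concat demuxer reads its inputs from a text file listing them.
with open("clips.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

# -c copy remuxes without re-encoding, so this is fast and lossless,
# but it requires all clips to share codec, resolution, and parameters.
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", "clips.txt", "-c", "copy", "joined.mp4"],
    check=True,
)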
You can use OpenCV for joining videos and images. See the documentation, in particular the image/video I/O functions.
However, I'm not sure if the library has functions that will do the fading for you.
What codec are you using?
There are two ways to compress video: lossy and lossless, and it's easy to tell them apart. For a given length, lossy video files sit in the megabyte range, while lossless (including uncompressed) files sit in the gigabyte range.
Here's an oversimplification. Editing video files is a lot different from editing film, where you just glue the pieces of film together. It's not just about bitrate, frame rate and resolution. Most lossy video codecs (MPEG 1-4, Ogg Theora, H.26x, VC-1, etc.) start out with a full frame and then record only the changes in movement. When you watch the video, what you're actually seeing is a static scene with layer after layer of changes pasted on top of it. It looks like you're seeing full frame after full frame, but if you looked at the data in the file, all you'd see would be a black background and scrambled blocks of video.
If it's uncompressed or uses a lossless codec (HuffYUV, Lagarith, FFV1, etc.), then you can edit your video file just like film. You still have to re-encode the video, but it won't affect video quality, and you can cut, copy and paste however you like as long as the resolution and frame rate are the same. If your video is lossy, you have to re-encode it with some loss of video quality, just like saving the same image as JPEG over and over.
Another option might be to put several pieces of video into a container like MKV and use chapters to have it jump from piece to piece. I seem to remember being told this is possible but I've never tried it so maybe it isn't.
