python write_videofile results in a black screen video

Code:
clip = ImageSequenceClip(new_frames, fps=fps1)
clip.write_videofile("out.mp4", fps=fps1)
TL;DR:
This code produces a black-screen video.
Here, fps1 comes from the original video I am stitching onto.
I am trying to stitch together a video using frames from many videos.
I created an array holding every image in its proper place, then went frame by frame through each video and assigned the correct frame into the array. Done that way the result was fine, but the process was slow, so I saved each frame to a file and loaded it during the stitching process. Python threw an exception that the array was too big, so I split the video into chunks and saved each chunk separately. The result came out as a black screen, even though when I debugged I could display each frame of the ImageSequenceClip correctly. I tried reinstalling moviepy. I am on Windows 10 and I converted all frames to PNG.

Well, #BajMile was indeed right in suggesting OpenCV.
What took me a while to realize is that I had to use only OpenCV functions, including for the images I was opening and resizing.
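As a rough illustration, a stitching loop that stays entirely inside OpenCV might look like the sketch below; the frame paths, resolution, fps and codec are placeholders, not values from the original project.

import cv2

# Placeholder inputs: paths of the saved frames plus the target fps and size.
frame_paths = ["frames/frame_0001.png", "frames/frame_0002.png"]  # hypothetical list
fps1 = 30.0
width, height = 1920, 1080

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("out.mp4", fourcc, fps1, (width, height))

for path in frame_paths:
    frame = cv2.imread(path)                    # open with OpenCV (BGR, uint8)
    frame = cv2.resize(frame, (width, height))  # resize with OpenCV as well
    writer.write(frame)                         # every frame must match (width, height)

writer.release()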

Related

How to edit video frames without extracting each frame first? OpenCV

I am currently extracting frames from a video so I can add some bounding boxes to each frame. Then I want to put all the frames back together and make a new video, using OpenCV. The problem is that every time I want to do that, I have to extract thousands of frames first. Is there a way to do it without having to extract the frames?
Thanks
That assumption isn't correct. You can use OpenCV's VideoCapture to load a video, get a frame, do some processing on it and save it using a VideoWriter object - one frame at a time. There is no need to load all frames into memory and then store them all at once.
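A rough sketch of that one-frame-at-a-time approach (the file names, codec and box coordinates here are made up for illustration):

import cv2

cap = cv2.VideoCapture("input.mp4")                  # hypothetical source video
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

out = cv2.VideoWriter("boxed.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # process the current frame, e.g. draw a (placeholder) bounding box
    cv2.rectangle(frame, (50, 50), (200, 200), (0, 255, 0), 2)
    out.write(frame)                                 # write it immediately, then move on

cap.release()
out.release()

Only one frame is held in memory at any time, so the video never has to be extracted to disk or loaded whole.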

Incorrect colours with moviepy

I have an image with a grey background and I'm trying to insert it into the main video. Everything looks good in preview, but after writing to file the grey background turns black (with the default libx264).
Then I tried the png codec and it preserved the true colour of the image, but at about 130x the size of the mp4/libx264 output (it's a 20-minute video, so the output was 48 GB). That was unacceptably huge, so I tried converting it with VLC and it worked! Nothing changed in the quality of the video and the size only increased by around 10-20 MB.
I wanted to automate this process with moviepy, but it takes forever to render the video and then manually convert it with VLC. So is there a better approach using only Python? This is the basic VLC codec:
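For reference, the two moviepy encode paths being compared look roughly like this; the clip and output file names are placeholders, and the VLC side is not part of this sketch.

from moviepy.editor import VideoFileClip

clip = VideoFileClip("composited.mp4")          # hypothetical clip with the inserted image

# Default H.264 encode -- the path where the grey background came out black:
clip.write_videofile("out_h264.mp4", codec="libx264")

# Lossless "png" codec -- keeps the colours but produces a very large .avi file:
clip.write_videofile("out_lossless.avi", codec="png")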

Combining two images horizontally in python using OpenCV

I have four images, each a slice of a larger image. If I string them together horizontally, I get the larger image. To complete this task I'm using Python 2.7 and the OpenCV library, specifically the hconcat() function. Here is the code:
with open("tempfds.jpg", 'ab+') as f:
    f.write(cv2.hconcat(cv2.hconcat(cv2.imread("491411.jpg"), cv2.imread("491412.jpg")),
                        cv2.hconcat(cv2.imread("491413.jpg"), cv2.imread("491414.jpg"))))
When I run it, everything works fine. But when I try to open the image itself, I get an error: Error interpreting JPEG image file (Not a JPEG file: starts with 0x86 0x7e). All the images I'm using are jpg's, so I don't understand why this error is occurring. Any insight is appreciated.
If you want to write a JPEG, you need:
cv2.imwrite('lovely.jpg', image)
where image is all your images concatenated together.
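Put together, a corrected version of the snippet above could look like this (same image names as in the question):

import cv2

# Concatenate the four slices left to right, then save with imwrite
# instead of dumping the raw array bytes into a file handle.
left = cv2.hconcat([cv2.imread("491411.jpg"), cv2.imread("491412.jpg")])
right = cv2.hconcat([cv2.imread("491413.jpg"), cv2.imread("491414.jpg")])
image = cv2.hconcat([left, right])
cv2.imwrite("lovely.jpg", image)

imwrite encodes the array as a real JPEG; writing the raw NumPy buffer to a .jpg file produces data no image viewer can interpret, which is exactly the error reported.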

Select specific frame fast OpenCV Python

I am trying to rapidly select and process different frames from a video using OpenCV in Python. To select a frame I use CAP_PROP_POS_FRAMES (i.e. cap.set(cv2.CAP_PROP_POS_FRAMES, frame_no)). However, when using this I noticed a delay of about 200 ms to decode the selected frame. My script will be jumping between frames a lot (not necessarily in chronological order), which means this will cause a big delay between iterations.
I suspected OpenCV buffers the upcoming frames after I set the frame number, so I tried pre-decoding the video by loading the entire thing into a list so it can be accessed from RAM. This worked fantastically, except that bigger videos completely eat up my memory.
I was hoping someone knows a way to either set the frame number without this 200 ms delay, or to decode the video without using all of my memory. Any suggestions are welcome!
I don't know how to avoid that 200ms delay, but I have a suggestion on how you could decode the video first even if its size is greater than your RAM. You could use numpy's memmap:
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.memmap.html.
In practice you could have a function that initializes this memory-mapped matrix, then iterates over the video frame by frame using VideoCapture, storing each frame in the matrix. After that you can jump to any frame just by indexing the memory-mapped matrix.
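A minimal sketch of that idea, assuming the frame count reported by VideoCapture is reliable and the frames are 3-channel uint8 (file names are placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")                  # hypothetical video
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Disk-backed array: the decoded frames live in a file, not in RAM.
frames = np.memmap("frames.dat", dtype=np.uint8, mode="w+",
                   shape=(n_frames, height, width, 3))

i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames[i] = frame
    i += 1
cap.release()
frames.flush()

# Jumping between frames is now plain array indexing, no seek/decode step.
frame_500 = frames[500]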

Reading tiffs in opencv swaps top and bottom third of image

I've got a pretty strange issue. I have several tif images of astronomical objects. I'm trying to use opencv's python bindings to process them. Upon reading the image file, it appears that segments of the images are swapped or rotated. I've stripped it down to the bare minimum, and it still reproduces:
img = cv2.imread('image.tif', 0)          # flag 0: load as single-channel grayscale
cv2.imwrite('image_unaltered.tif', img)   # write it straight back out, untouched
I've uploaded some samples to imgur to show the effect. The images aren't super clear (that's the nature of preprocessed astronomical images), but you can see it:
First set:
http://imgur.com/vXzRQvS
http://imgur.com/wig99KR
Second set:
http://imgur.com/pf7tnPz
http://imgur.com/xGn9C77
The same rotated/swapped images appear if I use cv2.imshow(...) as well, so I believe it's something that happens when I read the file. Furthermore, it persists if I save as jpg. Opening the original in Photoshop shows the correct image. I'm using OpenCV 2.4.10 on Linux Mint 17.1. If it matters, the original tifs were created with FITS Liberator on Windows.
Any idea what's happening here?
