So I wrote a script that creates a video out of a series of images. I have the images stored in a folder called "TEMP". The part of the script that takes a very long time is the following:
import os
import time
from moviepy.editor import *

def createVideoFromImages(genre, audio):
    # export name
    timeStamp = str(time.time()).split(".")[0]
    exportFolderName = f"./finished/{genre}{timeStamp}"
    exportFileName = f"{exportFolderName}/video.mp4"

    images = os.listdir("TEMP")
    clips = [ImageClip(f"./TEMP/{m}").set_duration(10).crossfadein(1.0) for m in images]
    concat_clip = concatenate_videoclips(clips, method="compose")

    audio_duration = audio.duration
    videoclip = concat_clip.set_duration(audio_duration)
    exportClip = videoclip.set_audio(audio)

    # create folder and save video there
    os.mkdir(exportFolderName)
    exportClip.write_videofile(exportFileName, fps=60, preset="ultrafast")
    return exportFolderName
I tried a couple of things, like changing the concatenation of the video clips to method="chain", but that broke the output: I got a glitchy video where only the first image showed properly.
I also tried adding preset="ultrafast", as I found suggested online, but it seemed to slow things down rather than speed them up.
I suspect the script runs this slowly (8-9 hours for a ~300 second video) because it uses almost all the RAM of my computer.
Is there a way to speed up this script, preferably with a minimal sacrifice of quality?
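One idea I have not tried yet (a sketch only; it assumes every image in TEMP has the same dimensions and drops the crossfades) is to build the video with ImageSequenceClip, which loads frames lazily instead of holding one ImageClip per image in RAM, and to render at a lower fps, since still images gain nothing from 60 fps:
import os
from moviepy.editor import ImageSequenceClip

# Sketch, not tested: ImageSequenceClip reads each frame on demand,
# which should keep RAM usage roughly constant. Assumes every image
# in TEMP has the same dimensions; the crossfades are dropped.
images = sorted(os.path.join("TEMP", m) for m in os.listdir("TEMP"))
clip = ImageSequenceClip(images, durations=[10] * len(images))
clip.write_videofile("video.mp4", fps=24)  # 24 fps instead of 60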
How can I extract multiple smaller video clips from a long video using some Python package? I need it as part of the video preprocessing for my project.
ffmpeg is one option, but it is too complex.
Any other method would be really helpful.
I tried using moviepy, but the documentation is not that clear, so I could only extract one clip at a time rather than multiple.
Using the Moviepy package, we can extract multiple smaller clips from a large video.
from moviepy.video.io.VideoFileClip import VideoFileClip

def extract_clips(video_file, clip_duration, clip_start_times):
    # Keep the source video open: the subclips share its file reader,
    # so closing it (e.g. with a "with" block) before the clips are
    # written out would break them.
    video = VideoFileClip(video_file)
    clip_list = []
    for start_time in clip_start_times:
        clip = video.subclip(start_time, start_time + clip_duration)
        clip_list.append(clip)
    return clip_list
video_file = "path/to/your/video.mp4"
clip_duration = 5 # duration of each clip in seconds
clip_start_times = [0, 10, 20] # start times of each clip in seconds
clips = extract_clips(video_file, clip_duration, clip_start_times)
# save the clips to disk
for i, clip in enumerate(clips):
    clip.write_videofile("clip_{}.mp4".format(i))
The function takes three parameters: the video file, the duration of each clip, and the start times at which you want the cuts made.
You can use write_videofile() to save these clips to the desired location on your system.
I hope this gives some more clarity on how to perform this operation.
I would like to continually play movies in a loop from different file paths using Python 3.6 and PsychoPy 1.90.2. The file paths are listed in a CSV file; each path shares common ancestors but has a different parent directory and filename, e.g. '/media/michael/shared_network_drive/dataset/training/jumping/man_jumps_through_hoop3342.mp4' and '/media/michael/shared_network_drive/dataset/training/shouting/h555502.mp4'.
Currently there is a very large delay when creating the visual.MovieStim3 object, which results in a long pause before every video. Here is the code so far:
import csv
from psychopy import visual

def play_videos(csv_file, vid_location='/media/michael/shared_network_drive/dataset/training/'):
    # Open a window
    win = visual.Window([400, 400])
    # open csv file and cycle through each video
    for vid, label, val1, val2 in csv.reader(open(csv_file, 'r')):
        glob_vid_path = vid_location + vid
        # Define a MovieStim3 object
        mov = visual.MovieStim3(win, glob_vid_path, flipVert=False, flipHoriz=False)
        # Loop through each frame of the video
        while mov.status != visual.FINISHED:
            mov.draw()
            win.flip()
    win.close()
Why is the delay so long and how can I overcome this?
For those with similar problems: the delay was caused by the videos being located on a shared network drive. Placing the videos on the home drive, or even an external hard drive, solved the problem.
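If moving the whole dataset is not practical, a minimal sketch of the same idea (untested; the temp-directory handling and function name are my own) is to copy each file to the local disk just before playing it:
import csv
import os
import shutil
import tempfile
from psychopy import visual

def play_videos_local(csv_file, vid_location='/media/michael/shared_network_drive/dataset/training/'):
    win = visual.Window([400, 400])
    local_dir = tempfile.mkdtemp()  # scratch space on the local disk
    for vid, label, val1, val2 in csv.reader(open(csv_file, 'r')):
        local_path = os.path.join(local_dir, os.path.basename(vid))
        # Pull the file off the network before MovieStim3 touches it,
        # so decoding reads from the local disk instead of the share.
        shutil.copy(vid_location + vid, local_path)
        mov = visual.MovieStim3(win, local_path, flipVert=False, flipHoriz=False)
        while mov.status != visual.FINISHED:
            mov.draw()
            win.flip()
        os.remove(local_path)  # free the space before the next copy
    win.close()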
I have built code which stitches approximately 100x100 images. I want to view this stitching process in real time. I am using pyvips to create the large image, and I am saving the final image in .DZI format, since it has a very small memory footprint to display.
The code below is copied, just for testing purposes, from https://github.com/jcupitt/pyvips/issues/43.
#!/usr/bin/env python
import sys
import pyvips

# overlap joins by this many pixels
H_OVERLAP = 100
V_OVERLAP = 100

# number of images in mosaic
ACROSS = 40
DOWN = 40

if len(sys.argv) < 2 + ACROSS * DOWN:
    print('usage: %s output-image input1 input2 ..' % sys.argv[0])
    sys.exit(1)

def join_left_right(filenames):
    images = [pyvips.Image.new_from_file(filename) for filename in filenames]
    row = images[0]
    for image in images[1:]:
        row = row.merge(image, 'horizontal', H_OVERLAP - row.width, 0)
    return row

def join_top_bottom(rows):
    image = rows[0]
    for row in rows[1:]:
        image = image.merge(row, 'vertical', 0, V_OVERLAP - image.height)
    return image

rows = []
for y in range(0, DOWN):
    start = 2 + y * ACROSS
    end = start + ACROSS
    rows.append(join_left_right(sys.argv[start:end]))
image = join_top_bottom(rows)
image.write_to_file(sys.argv[1])
To run this code:
$ export VIPS_DISC_THRESHOLD=100
$ export VIPS_PROGRESS=1
$ export VIPS_CONCURRENCY=1
$ mkdir sample
$ for i in {1..1600}; do cp ~/pics/k2.jpg sample/$i.jpg; done
$ time ./mergeup.py x.dz sample/*.jpg
Here cp ~/pics/k2.jpg copies k2.jpg 1600 times from the pics folder, so change it according to your image name and location.
I want to display this process in real time. Right now I can only display the final mosaiced image after it has been created. Just an idea: I am thinking of creating a large image, displaying it, and then inserting the smaller images into it as they arrive. I don't know how this can be done, and I am confused because we also have to maintain the pyramidal structure, so if we create the large image first, we have to replace the images at each level with the new ones. Creating a .DZI image is expensive, so I don't want to create it in every loop iteration. Replacing images may be a solution. Any suggestions?
I suppose you have two challenges: how to keep the pyramid up-to-date on the server, and how to keep it up-to-date on the client. The brute force method would be to constantly rebuild the DZI on the server, and periodically flush the tiles on the client (so they reload). For something like that you'll also need to add a cache bust to the tile URLs each time, or the browser will think it should just use its local copy (not realizing it has updated). Of course this brute force method is probably too slow (though it might be interesting to try!).
For a little more finesse, you'd want to make a pyramid that's exactly aligned with the sub images. That way when you change a single sub image, it's obvious which tiles need to be updated. You can do this with DZI if you have square sub images and you use a tile size that is some even fraction of the sub image size. Also no tile overlap. Of course you'll have to build your own DZI constructor, since the existing ones aren't primed to simply replace individual tiles. If you know which tiles you changed on the server, you can communicate that to the client (either via periodic polling or with something like web sockets) and then flush only those tiles (again with the cache busting).
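For illustration, here is a rough sketch of that tile bookkeeping; the sizes and names are hypothetical, and it assumes square sub images, no tile overlap, and a tile size that evenly divides the sub image size:
SUB_SIZE = 1000  # hypothetical sub image edge length in pixels
TILE_SIZE = 250  # hypothetical tile edge length; must divide SUB_SIZE

def tiles_for_sub_image(col, row):
    """Base-level (x, y) tile indices covered by sub image (col, row)."""
    n = SUB_SIZE // TILE_SIZE  # tiles per sub image edge
    return [(col * n + dx, row * n + dy)
            for dy in range(n)
            for dx in range(n)]

# If sub image (2, 3) changes, only these base-level tiles need rebuilding
# (plus their ancestors on the higher pyramid levels).
print(tiles_for_sub_image(2, 3))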
Another solution you could experiment with would be to not attempt a pyramid, per se, but just a flat set of tiles at a reasonable resolution to allow the user to pan around the scene. This would greatly simplify your pyramid updating on the server, since all you would need to do would be replace a single image for each sub image. This could be loaded and shown in a custom (non-OpenSeadragon) fashion on the client, or you could even use OpenSeadragon's multi-image feature to take advantage of its panning and zooming, like here: http://www.letsfathom.com/ (each album cover is its own independent image object).
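As a sketch of that flat variant (the paths, tile size, and thumbnail call are assumptions, not part of the original setup):
import pyvips

def replace_tile(sub_image_path, col, row, tile_size=256):
    # One displayed tile per sub image: updating a sub image means
    # rewriting exactly one file, with no pyramid to keep in sync.
    tile = pyvips.Image.thumbnail(sub_image_path, tile_size)
    tile.write_to_file('tiles/%d_%d.jpg' % (col, row))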
I'm trying to make a video from a list of images using moviepy. I have issues using moviepy.editor, since it does not like being frozen with PyInstaller, so I'm using moviepy.video.VideoClip.ImageClip for the images and moviepy.video.compositing.CompositeVideoClip.CompositeVideoClip for the clip. I have the paths of the .jpg images in a list called images:
from moviepy.video.VideoClip import ImageClip
from moviepy.video.compositing.CompositeVideoClip import CompositeVideoClip
clips = [ImageClip(m).set_duration(1) for m in images]
concat_clip = CompositeVideoClip(clips)
concat_clip.write_videofile('VIDEO.mp4', fps=1)
It successfully makes an .mp4, but the video is only one second long and shows only the last image in the list. I can check clips and it has the ~30 images that should be in the video. I can do this using methods from moviepy.editor, following this SO question and answer, but there doesn't seem to be an analogous parameter in CompositeVideoClip to method='compose', which is where I think the issue is.
Using concatenate_videoclips might help: CompositeVideoClip layers all its clips on top of each other starting at t=0, so only the top image shows, whereas concatenate_videoclips plays them in sequence. I use the code below and it works just fine:
# in moviepy 1.x these live outside moviepy.editor, which matters
# if moviepy.editor breaks under PyInstaller
from moviepy.video.VideoClip import ImageClip
from moviepy.video.compositing.concatenate import concatenate_videoclips

clips = [ImageClip(m).set_duration(1 / 25)
         for m in file_list_sorted]
concat_clip = concatenate_videoclips(clips, method="compose")
concat_clip.write_videofile("test.mp4", fps=25)
I have just started using OpenCV 2.4.8.2 in Python 2.7.6 on a MacBook Pro Retina running OS X 10.9.2. My main goal is to make a video file from a few NumPy arrays. I would also like to do the inverse: decompose a video into separate frames (and consequently into NumPy arrays).
To make a video, I use the following piece of code:
import cv2
# Composes a movie from separate frames.
videoMaker = cv2.VideoWriter("videoTest.mov", cv2.cv.CV_FOURCC('m', 'p', '4', 'v'), 1, (256, 256))
if videoMaker.isOpened():
    videoMaker.write(cv2.imread("videoTestFrame0.jpg"))
    videoMaker.write(cv2.imread("videoTestFrame1.jpg"))
    videoMaker.write(cv2.imread("videoTestFrame2.jpg"))
    videoMaker.release()
This piece of code seems to work fine: videoTest.mov is created, can be played in QuickTime, is 3 seconds long, and consists of 3 different frames.
To load the video again, I have put the following directly under the piece of code above:
# Decomposes a movie into separate frames.
videoReader = cv2.VideoCapture("videoTest.mov")
count = 0
while True:
    gotImage, image = videoReader.read()
    if gotImage:
        cv2.imwrite("videoTestDecomposedFrame%d.jpg" % count, image)
        count += 1
    else:
        break
The problem: when inspecting the 3 decomposed frames, the first frame of the movie (videoTestFrame0.jpg) is not among them, while the last frame (videoTestFrame2.jpg) is stored twice.
How can I fix this and retrieve videoTestFrame0.jpg from the video, without working around it by writing the first frame twice? Might something be wrong with cv2.VideoCapture.read()? I have tried saving the movie as videoTest.avi instead of videoTest.mov, but the decomposition behaves the same.
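One check that might narrow this down (just a sketch; frame-count metadata can be unreliable for some containers) is to ask the container how many frames it reports, which hints at whether the writer dropped frame 0 or the reader is misreading it:
# Diagnostic sketch: compare the container's reported frame count
# with the number of frames the loop above actually decoded.
videoReader = cv2.VideoCapture("videoTest.mov")
print(videoReader.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))  # expected: 3.0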
Thanks in advance for your kind help! Martijn