How to extract all the frames from a video? - python

Hey, so I wanted to know how I can extract all frames from a video using MoviePy.
Currently I am doing it in OpenCV, which is really slow. My current code:
import cv2

vidObj = cv2.VideoCapture("./video.mp4")
count = 0
flag = 1
while flag:
    flag, image = vidObj.read()
    try:
        cv2.imwrite(f"./images/frame{count}.jpg", image)
    except:
        # imwrite raises once read() stops returning frames
        break
    count += 1
Can someone tell me how to achieve the same in MoviePy please?

You can use the write_images_sequence() method.
%05d in the filename pattern represents a 5-digit, zero-padded frame index.
from moviepy.editor import VideoFileClip
video = VideoFileClip('video.mp4')
video.write_images_sequence('frame%05d.png', logger='bar')
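If you don't need every single frame, write_images_sequence() also accepts an fps argument so you can thin the output; the rate below is just an illustrative value, not something from the original post:

from moviepy.editor import VideoFileClip

video = VideoFileClip('video.mp4')
# write roughly one frame per second of video instead of every frame
video.write_images_sequence('frame%05d.png', fps=1, logger='bar')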

Related

Moviepy doubles the speed of the video without affecting audio. I suspect a framerate issue but I haven't been able to fix it

Right now, here is all I'm having MoviePy do:
full_video = VideoFileClip(input_video_path)
full_video.write_videofile("output.mp4")
quit()
It just takes the video and writes it to another file with no changes. But when the input video looks like this, the output ends up looking like this, with the video speed doubled but the audio unchanged. I could take the audio and video separately, halve the speed of the video, and then put them back together, but is there a way I can correct for whatever problem is causing this?
edit 2: It is most likely the VideoFileClip method causing the speedup, not the write_videofile method. When I try
full_video = VideoFileClip(input_video_path)
print( full_video.fps )
full_video.preview(fps = full_video.fps)
quit()
it is still double speed in the preview.
edit 3: The problem only happens with videos captured with the Windows game bar. I tried a different video and it worked just fine with no speedup. I'll probably just find a different way to capture the screen recordings to fix it, but I don't know what the root problem was.
edit 1: the full code
from moviepy.editor import *

# get all dash times
times_path = "times.txt"
input_video_path = "input.mp4"
offset_time = 0
clip_length = float(input("Enter clip length: "))

def get_times(path, offset):
    dash_times_str = []
    with open(path, "r") as file:
        dash_times_str = file.readlines()
    count = 0
    # Strips the newline character
    # also add offset time
    temp = []
    for line in dash_times_str:
        count += 1
        temp.append("{}".format(line.strip()))
    dash_times_str = temp
    dash_times = []
    for time in dash_times_str:
        dash_times.append(float(time) + offset)
    return dash_times

dash_times = get_times(times_path, offset_time)

def get_offset_time():
    a = float(input("Enter time for first dash in video"))
    b = dash_times[0]
    return a - b

offset_time = get_offset_time()
full_video = VideoFileClip(input_video_path)
counter = 0
times_count = len(dash_times)
clips = []
for dash_time in dash_times:
    clip = full_video.subclip(dash_time, dash_time + clip_length)
    clips.append(clip)
    counter += 1
    print("Clip " + str(counter) + " out of " + str(times_count))
final_clip = concatenate_videoclips(clips)
final_clip.write_videofile("output.mp4")
I haven't been able to go deep down in the source code to figure out why this is, but I could indeed duplicate your bug with videos recorded with the Windows game bar.
I also agree with you that it seems to be tied directly to the VideoFileClip method.
I got my code to work by writing it like this:
full_video = VideoFileClip(input_video_path, fps_source="fps")
with the key detail being the fps_source="fps" argument.

Moviepy set ImageClip duration not working and duplicate clips being removed?

I am setting the duration of an ImageClip, but it doesn't seem to care. When I try to add duplicate clips it doesn't register. Any help would be appreciated!
img = os.listdir(picDir)
rawClips = [ImageClip(str(picDir) + m).set_duration(2) for m in img]
clips = []
for i in rawClips:
    for x in range(imageDuration):
        clips.append(i)
print(len(clips))
music = AudioFileClip("music.mp3")
video = concatenate_videoclips(clips, method="compose")
# video.set_audio(music.set_duration(video.duration))
video.write_videofile('result.mp4', fps=120)
return send_file("result.mp4")
It should set every clip's duration to 5, but it sets it to 1.
It was a Flask error; I have no idea how, though.

Parse video at lower frame rate

I currently am working on a project where I need to parse a video and pass it through several models. The videos come in at 60 fps. I do not need to run every frame through the models. I am running into some issues when trying to skip the unneeded frames. I have tried two methods, which are both fairly slow.
Method 1 code snippet:
The issue here is that I am still reading every frame of the video. Only every 4th frame is run through my model.
cap = cv2.VideoCapture(self.video)
while cap.isOpened():
    success, frame = cap.read()
    if count % 4 != 0:
        count += 1
        continue
    if success:
        '''Run frame through models'''
    else:
        break
Method 2 code snippet:
This method is slower. I am trying to avoid reading unnecessary frames in this case.
cap = cv2.VideoCapture(video)
count = 0
while True:
    if count % 4 != 0:
        cap.set(cv2.CV_CAP_PROP_POS_FRAMES, count*15)
        count += 1
        success, frame = cap.read()
Any advice on how to achieve this in the most efficient way would be greatly appreciated.
Getting and setting frames by changing CV_CAP_PROP_POS_FRAMES is not accurate (and slow) due to how video compression works: by using keyframes.
It might help to not use the read() function at all. Instead use grab() and only retrieve() the needed frames. From the documentation: The (read) methods/functions combine VideoCapture::grab() and VideoCapture::retrieve() in one call.
grab() gets the frame data, and retrieve() decodes it (the computationally heavy part). What you might want to do is only grab frames you want to skip but not retrieve them.
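A minimal sketch of that grab-only skipping pattern (the file name and the every-4th-frame step are illustrative assumptions, and the model call is a placeholder):

import cv2

cap = cv2.VideoCapture("video.mp4")  # hypothetical input path
frame_index = 0
while cap.isOpened():
    # grab() advances to the next frame without decoding it
    if not cap.grab():
        break
    if frame_index % 4 == 0:
        # retrieve() decodes only the frames we actually want
        success, frame = cap.retrieve()
        if success:
            pass  # run the frame through the models here
    frame_index += 1
cap.release()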
Depending on your system and opencv build, you could also have ffmpeg decode video using hardware acceleration.
As I understand it, you are trying to process every fourth frame. You are using the condition:
if count % 4 != 0
which is triggered on 3 out of 4 frames instead (you are processing frames 1, 2, 3, 5, 6, 7, etc.)! Use the opposite:
if count % 4 == 0
Also, although these are just code snippets, the two methods do not seem to process the same frames. In both cases your counter seems to increase by 1 on each frame, but in the second case you actually point to 15x that counter (cap.set(cv2.CV_CAP_PROP_POS_FRAMES, count*15)).
Some comments on your code also (maybe I misunderstood something):
Case 1:
while cap.isOpened():
    success, frame = cap.read()
    if count % 4 != 0:
        count += 1
        continue
Here you seem to count only some frames (3 out of 4, as mentioned), since frames which are multiples of 4 are skipped: the condition count % 4 != 0 is not met in that case, and your counter is not updated even though you read a frame. So you have an inaccurate counter here. It's not shown how and where you process your frames, though, so I can't judge that part.
Case 2:
while True:
    if count % 4 != 0:
        cap.set(cv2.CV_CAP_PROP_POS_FRAMES, count*15)
        count += 1
        success, frame = cap.read()
Here you read frames only if the condition is met, so in this code snippet you don't actually read any frame, since frame 0 does not trigger the condition! Whether you update the counter outside the if scope is not clear here, but if you do, you should also read a frame there. Anyway, more code would need to be shown to tell.
As general advice, you should update the counter every time you read a frame; a sketch of that pattern follows.
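A minimal sketch of that corrected pattern, assuming the goal is to process every fourth frame (the file name is an illustrative assumption and the model call is a placeholder):

import cv2

cap = cv2.VideoCapture("video.mp4")  # hypothetical input path
count = 0
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    if count % 4 == 0:
        pass  # run the frame through the models here
    count += 1  # the counter advances on every read
cap.release()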
Instead of putting a threshold on the number of frames, which would make OpenCV process ALL the frames (which, as you rightly pointed out, slows the video processing), it's better to use CAP_PROP_POS_MSEC and offload that processing to cv2. By using this option you can configure cv2 to sample 1 frame every nth millisecond. So setting subsample_rate=1000 in vidcap.set(cv2.CAP_PROP_POS_MSEC, (frame_count * subsample_rate)) would sample 1 frame every second (as 1000 milliseconds equals 1 second). Hopefully this improves your video processing speed.
import os
import cv2
import numpy as np

def extractImagesFromVideo(path_in, subsample_rate, path_out, saveImage, resize=(), debug=False):
    vidcap = cv2.VideoCapture(path_in)
    if not vidcap.isOpened():
        raise IOError
    if debug:
        length = int(vidcap.get(cv2.CAP_PROP_FRAME_COUNT))
        width = int(vidcap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = vidcap.get(cv2.CAP_PROP_FPS)
        print('Length: %.2f | Width: %.2f | Height: %.2f | Fps: %.2f' % (length, width, height, fps))
    success, image = vidcap.read()  # extract first frame
    frame_count = 0
    while success:
        # jump to the next sample position (in milliseconds) before reading
        vidcap.set(cv2.CAP_PROP_POS_MSEC, (frame_count * subsample_rate))
        success, image = vidcap.read()
        if saveImage and np.any(image):
            cv2.imwrite(os.path.join(path_out, "frame_%d.png" % frame_count), image)
        frame_count = frame_count + 1
    return frame_count
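A hypothetical call, sampling one frame per second from an assumed video.mp4 into an existing ./frames directory:

num_frames = extractImagesFromVideo("video.mp4", subsample_rate=1000,
                                    path_out="./frames", saveImage=True)
print(num_frames, "frames written")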

How to read the first frame of video using OpenCV in python?

Here is some code I am using which should succeed in returning the first frame of a video. The relevant bits here I think are lines 2:5 and the second to last one where count=2.
def define_bounds(video_file, threshold_value, index_value):
    vidcap = cv2.VideoCapture(video_file)
    count = 0
    while count < 1:
        success, color_img = vidcap.read()
        blkwht_img = np.asarray(Image.fromarray(color_img).convert('L'))[index_value]
        retval, final_img = cv2.threshold(blkwht_img, threshold_value, 200, cv2.ADAPTIVE_THRESH_GAUSSIAN_C)
        for pixel_row in range(139, -1, -1):
            if np.median(final_img[pixel_row, :]) < 10:
                silenced = pixel_row - 2
                upmotion = pixel_row - 16
                destination = pixel_row - 44
                downmotion = pixel_row - 31
                break
        count = 2
    return silenced, upmotion, destination, downmotion
I do know for a fact that vidcap.read() succeeds in reading frames from the first to the last (I have other code I use to determine this, where it just exports all frames from first to last).
But this code above fails in reading the first frame... it reads SOME frame but I don't know which. It's not the first, second or last as the output of the function does not match when I manually input the first, second and last frames.
Is there something stupidly wrong in my code? Also: am I using break correctly? I'm still new to this. Thanks!
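For comparison, a minimal sketch of grabbing just the first frame with OpenCV (the file name is a hypothetical stand-in, not from the original post); the very first read() after opening the capture should return frame 0:

import cv2

vidcap = cv2.VideoCapture("video.mp4")  # hypothetical input path
success, first_frame = vidcap.read()    # first read() returns the first frame
vidcap.release()
if not success:
    raise IOError("could not read the first frame")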

More problems extracting frames from GIFs

Following my previous question (Gifs opened with python have broken frames) I now have code that works sometimes.
For example, this code
from PIL import Image

img = Image.open('pigs.gif')
counter = 0
collection = []
current = img.convert('RGBA')
while True:
    try:
        current.save('original%d.png' % counter, 'PNG')
        img.seek(img.tell() + 1)
        current = Image.alpha_composite(current, img.convert('RGBA'))
        counter += 1
    except EOFError:
        break
…works on most GIFs perfectly, but on others it produces weird results. For example, when applied to this 2-frame GIF:
It produces these two frames:
The first one is ok, the second one not so much.
What now?
Sounds like you want to do this:
while True:
    try:
        current.save('original%d.gif' % counter)
        img.seek(img.tell() + 1)
        current = img.convert('RGBA')
        counter += 1
    except EOFError:
        break
Try Wand (a simple ctypes-based ImageMagick binding for Python).
from wand.image import Image

def extract_frames(gif_file):
    with Image(filename=gif_file) as gif:
        for i, single_img in enumerate(gif.sequence):
            with Image(image=single_img) as frame:
                frame.format = 'PNG'
                frame.save(filename='frame-%d.png' % i)

extract_frames('test.gif')
frame-0.png
frame-1.png
