Following my previous question (Gifs opened with python have broken frames), I now have code that works sometimes.
For example, this code
from PIL import Image

img = Image.open('pigs.gif')
counter = 0
collection = []
current = img.convert('RGBA')
while True:
    try:
        current.save('original%d.png' % counter, 'PNG')
        img.seek(img.tell() + 1)
        current = Image.alpha_composite(current, img.convert('RGBA'))
        counter += 1
    except EOFError:
        break
…works on most GIFs perfectly, but on others it produces weird results. For example, when applied to this 2-frame GIF:
It produces these two frames:
The first one is ok, the second one not so much.
What now?
Sounds like you want to do this:
while True:
    try:
        current.save('original%d.gif' % counter)
        img.seek(img.tell() + 1)
        current = img.convert('RGBA')
        counter += 1
    except EOFError:
        break
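As a self-contained alternative, Pillow's ImageSequence iterator can drive the frame loop instead of manual seek()/EOFError handling. A minimal sketch (the input path and output prefix are placeholders; depending on the Pillow version and the GIF's disposal method, partial frames may still need compositing as in the question):

```python
from PIL import Image, ImageSequence

def extract_gif_frames(path, prefix="original"):
    """Save every frame of a GIF as a PNG; returns the frame count."""
    with Image.open(path) as img:
        count = 0
        for frame in ImageSequence.Iterator(img):
            # convert each frame to RGBA before saving as PNG
            frame.convert("RGBA").save("%s%d.png" % (prefix, count), "PNG")
            count += 1
    return count
```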
Try Wand (a simple ctypes-based ImageMagick binding for Python):
from wand.image import Image

def extract_frames(gif_file):
    with Image(filename=gif_file) as gif:
        for i, single_img in enumerate(gif.sequence):
            with Image(image=single_img) as frame:
                frame.format = 'PNG'
                frame.save(filename='frame-%d.png' % i)

extract_frames('test.gif')
frame-0.png
frame-1.png
Right now here is all I'm having moviepy do:
full_video = VideoFileClip(input_video_path)
full_video.write_videofile("output.mp4")
quit()
It just takes the video and writes it to another file with no changes. But when the input video looks like this, the output ends up looking like this, with the video speed doubled but the audio unchanged. I could take the audio and video separately, halve the speed of the video, then put them back together, but is there a way I can correct for whatever problem is causing this?
edit 2: It is most likely the VideoFileClip method causing the speedup, not the write_videofile method. When I try
full_video = VideoFileClip(input_video_path)
print( full_video.fps )
full_video.preview(fps = full_video.fps)
quit()
it is still double speed in the preview.
edit 3: The problem only happens with videos captured with the Windows game bar. I tried a different video and it worked just fine with no speedup. I'll probably just find a different way to capture the screen recordings to fix it, but I don't know what the root problem was.
edit 1: the full code
from moviepy.editor import *

# get all dash times
times_path = "times.txt"
input_video_path = "input.mp4"
offset_time = 0
clip_length = float(input("Enter clip length: "))

def get_times(path, offset):
    dash_times_str = []
    with open(path, "r") as file:
        dash_times_str = file.readlines()
    count = 0
    # Strips the newline character
    # also add offset time
    temp = []
    for line in dash_times_str:
        count += 1
        temp.append("{}".format(line.strip()))
    dash_times_str = temp
    dash_times = []
    for time in dash_times_str:
        dash_times.append(float(time) + offset)
    return dash_times

dash_times = get_times(times_path, offset_time)

def get_offset_time():
    a = float(input("Enter time for first dash in video"))
    b = dash_times[0]
    return a - b

offset_time = get_offset_time()
full_video = VideoFileClip(input_video_path)
counter = 0
times_count = len(dash_times)
clips = []
for dash_time in dash_times:
    clip = full_video.subclip(dash_time, dash_time + clip_length)
    clips.append(clip)
    counter += 1
    print("Clip " + str(counter) + " out of " + str(times_count))
final_clip = concatenate_videoclips(clips)
final_clip.write_videofile("output.mp4")
I haven't been able to go deep down in the source code to figure out why this is, but I could indeed duplicate your bug with videos recorded with the Windows game bar.
I also agree with you that it seems to be tied directly to the VideoFileClip method.
I got my code to work by writing it like this:
full_video = VideoFileClip(input_video_path, fps_source="fps")
with the key detail being the fps_source="fps" argument.
Hey, so I wanted to know how I can extract all frames from a video using MoviePy.
Currently I am doing it in OpenCV, which is really, really slow. My current code:
import cv2

vidObj = cv2.VideoCapture("./video.mp4")
count = 0
flag = 1
while flag:
    flag, image = vidObj.read()
    try:
        cv2.imwrite(f"./images/frame{count}.jpg", image)
    except:
        break
    count += 1
Can someone tell me how to achieve the same in MoviePy please?
You can use the write_images_sequence() method.
%05d represents a 5-digit, zero-padded frame index.
from moviepy.editor import VideoFileClip
video = VideoFileClip('video.mp4')
video.write_images_sequence('frame%05d.png', logger='bar')
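The %05d in the filename is ordinary printf-style formatting (not MoviePy-specific), so each frame number is zero-padded to five digits:

```python
# %05d zero-pads the integer to five digits
name = 'frame%05d.png' % 3
print(name)  # frame00003.png
```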
Here is some code I am using which should succeed in returning the first frame of a video. The relevant bits, I think, are the first few lines of the loop and the second-to-last line, where count=2.
import cv2
import numpy as np
from PIL import Image

def define_bounds(video_file, threshold_value, index_value):
    vidcap = cv2.VideoCapture(video_file)
    count = 0
    while count < 1:
        success, color_img = vidcap.read()
        blkwht_img = np.asarray(Image.fromarray(color_img).convert('L'))[index_value]
        retval, final_img = cv2.threshold(blkwht_img, threshold_value, 200, cv2.ADAPTIVE_THRESH_GAUSSIAN_C)
        for pixel_row in range(139, -1, -1):
            if np.median(final_img[pixel_row, :]) < 10:
                silenced = pixel_row - 2
                upmotion = pixel_row - 16
                destination = pixel_row - 44
                downmotion = pixel_row - 31
                break
        count = 2
    return silenced, upmotion, destination, downmotion
I do know for a fact that vidcap.read() succeeds in reading frames from the first to the last (I have other code I use to determine this, where it just exports all frames from first to last).
But this code above fails to read the first frame... it reads SOME frame, but I don't know which. It's not the first, second, or last, since the output of the function does not match when I manually input the first, second, and last frames.
Is there something stupidly wrong in my code? Also: am I using break correctly? I'm still new to this. Thanks!
My objective is to determine the similarity between two identical videos. My approach is a bit naive, i.e. to compare them frame by frame and see whether two frames match exactly or not. I am using the following Python code for this:
import cv2
import numpy as np

capture = cv2.VideoCapture("video.wmv")
capture2 = cv2.VideoCapture("video.wmv")
counter = 0
while True:
    f, frame = capture.read()
    f2, frame2 = capture2.read()
    frame = cv2.GaussianBlur(frame, (15, 15), 0)
    frame2 = cv2.GaussianBlur(frame2, (15, 15), 0)
    try:
        res = frame - frame2
        if np.count_nonzero(res) > 0:
            counter += 1
        else:
            continue
    except:
        print(counter)
The total number of frames in my video is around 600K. The code runs for almost 20K frames with exact matches (the counter remains zero), but after 20K frames it starts raising the following exception for every frame:
unsupported operand type(s) for -: 'NoneType' and 'NoneType'
From the exception I understand that it is no longer reading any frames, which is why it returns NoneType. Please guide me on whether my approach for comparing videos is correct (I know it is not an efficient way). Also, why am I getting this error?
You didn't break out of your while loop, so it kept reading frames even after the videos had finished, hence the NoneType objects.
You can try:
while True:
    f, frame = capture.read()
    f2, frame2 = capture2.read()
    if not f and not f2:
        break
    frame = cv2.GaussianBlur(frame, (15, 15), 0)
    frame2 = cv2.GaussianBlur(frame2, (15, 15), 0)
    try:
        res = frame - frame2
        if np.count_nonzero(res) > 0:
            counter += 1
        else:
            continue
    except:
        print(counter)
I am writing an OpenCV program where I track the position of an object with a USB camera. To make sure I get as high a frame rate as possible with the camera, I made a threaded process that reads the images from the camera. The image processing is done in another loop, which also writes the object position to a file.
Now I want a way to avoid processing the same frame multiple times. So I thought I could compare the image just processed with the one available from the video stream thread.
First I tried to use if frame1 == frame2, but got the error message "the truth value of an array with more than one element is ambiguous. Use a.any() or a.all()."
After some googling I found cv2.compare and the flag CMP_EQ. I made a sample code and got it working in some way. However, my question is: how could this be done in an easier or better way?
import cv2

cv2.namedWindow('image1', cv2.WINDOW_NORMAL)
cv2.namedWindow('image2', cv2.WINDOW_NORMAL)

frame1 = cv2.imread("sample1.png")
frame2 = frame1
frame3 = cv2.imread("sample2.png")

compare1 = cv2.compare(frame1, frame2, 0)
compare2 = cv2.compare(frame1, frame3, 0)

cv2.imshow('image1', compare1)
cv2.imshow('image2', compare2)

if compare1.all():
    print("equal")
else:
    print("not equal")

if compare2.all():
    print("equal")
else:
    print("not equal")

cv2.waitKey(0)
cv2.destroyAllWindows()
open("image1.jpg","rb").read() == open("image2.jpg","rb").read()
should tell you if they are exactly the same ...
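The standard library's filecmp module wraps the same byte-for-byte check; a small sketch (the function name is made up, the paths are placeholders):

```python
import filecmp

def files_identical(path_a, path_b):
    # shallow=False forces a byte-by-byte comparison instead of
    # trusting the os.stat() signature (size and mtime)
    return filecmp.cmp(path_a, path_b, shallow=False)
```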
I was doing something close to what you are doing; I was trying to get the difference between two images. I used the subtract function; it may help you.
UPDATE:
import cv2
import numpy as np

a = cv2.imread("sample1.png")
b = cv2.imread("sample2.png")

difference = cv2.subtract(a, b)
result = not np.any(difference)
if result:
    print("Pictures are the same")
else:
    cv2.imwrite("ed.jpg", difference)
    print("Pictures are different, the difference is stored as ed.jpg")
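If the frames are already loaded as NumPy arrays (which is what cv2.imread returns), the subtract-then-np.any check reduces to a single np.array_equal call; a sketch using synthetic arrays in place of real images:

```python
import numpy as np

def images_equal(a, b):
    """True when both arrays have the same shape and identical pixels."""
    return np.array_equal(a, b)

# synthetic stand-ins for two loaded images
img1 = np.zeros((4, 4, 3), dtype=np.uint8)
img2 = img1.copy()
img3 = img1.copy()
img3[0, 0, 0] = 255  # flip one pixel
```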
How about giving your images an index?
Pseudocode:
class Frame
{
cvImage img;
uint idx;
}
Then simply check whether the current index is greater than the last one you processed.
It is simple and definitely faster than any image processing based approach.
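A minimal Python sketch of that idea (the Frame class and helper names are hypothetical, since the capture thread from the question isn't shown):

```python
from dataclasses import dataclass, field
from itertools import count

_frame_ids = count()  # monotonically increasing index source

@dataclass
class Frame:
    img: object                                   # the image data (e.g. a NumPy array)
    idx: int = field(default_factory=lambda: next(_frame_ids))

def process_if_new(frame, last_idx):
    """Run processing only when the frame is newer than the last one handled."""
    if frame.idx > last_idx:
        # ... the actual image processing would go here ...
        return frame.idx                          # new last-processed index
    return last_idx                               # duplicate frame: skipped
```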
You can compare the sizes of the two image files as a first-level check to reduce computational complexity. With compression, it is highly unlikely that two different image files have exactly the same size in bytes. If the file sizes are equal, you can then compare the image contents.
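A sketch of that two-level check, assuming plain file paths (the helper name is made up):

```python
import os

def quick_image_compare(path_a, path_b):
    # Different byte counts can never mean identical contents,
    # and os.path.getsize is much cheaper than reading the files.
    if os.path.getsize(path_a) != os.path.getsize(path_b):
        return False
    # Equal sizes: fall back to a full byte comparison.
    with open(path_a, 'rb') as fa, open(path_b, 'rb') as fb:
        return fa.read() == fb.read()
```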
You should try something like this:
import cv2

original = cv2.imread("rpi1.jpg")
duplicate = cv2.imread("rpi2.jpg")

if original.shape == duplicate.shape:
    print("The images have same size and channels")
    difference = cv2.subtract(original, duplicate)
    b, g, r = cv2.split(difference)
    if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
        print("The images are completely equal")
    else:
        print("images are different")