I am trying to concatenate a group of images with associated audio, plus a video clip at the start and end of the video. Whenever I concatenate an image with its associated audio, it doesn't play back correctly in VLC media player: the image is displayed for only one frame before the video cuts to black while the audio keeps playing. I came across this GitHub issue: https://github.com/kkroening/ffmpeg-python/issues/274. The accepted solution there is the one I implemented, but one of the comments mentions this same incorrect-playback issue and an error on YouTube.
import os
import ffmpeg

'''
Generates a clip from an image and a wav file, helper function for export_video
'''
def generate_clip(img):
    transition_cond = os.path.exists("static/transitions/" + img + ".mp4")
    chart_path = os.path.exists("charts/" + img + ".png")
    if transition_cond:
        clip = ffmpeg.input("static/transitions/" + img + ".mp4")
    elif chart_path:
        clip = ffmpeg.input("charts/" + img + ".png")
    else:
        clip = ffmpeg.input("static/transitions/Transition.jpg")
    audio_clip = ffmpeg.input("audio/" + img + ".wav")
    clip = ffmpeg.concat(clip, audio_clip, v=1, a=1)
    clip = ffmpeg.filter(clip, "setdar", "16/9")
    return clip
'''
Combines the charts from charts/ and the audio from audio/ to generate one final video that will be uploaded to Youtube
'''
def export_video(CHARTS):
    clips = []
    intro = generate_clip("Intro")
    clips.append(intro)
    for key in CHARTS.keys():
        value = CHARTS.get(key)
        value.insert(0, key)
        subclip = []
        for img in value:
            subclip.append(generate_clip(img))
        concat_clip = ffmpeg.concat(*subclip)
        clips.append(concat_clip)
    outro = generate_clip("Outro")
    clips.append(outro)
    concat_clip = ffmpeg.concat(*clips)
    concat_clip.output("export/export.mp4").run(overwrite_output=True)
It is unfortunate that the concat filter does not offer a shortest option like overlay does. Anyway, the issue here is that the image2 demuxer uses 25 fps by default, so a video stream made from a single image only lasts 1/25 of a second. There are several ways to address this, but you first need to get the duration of the paired audio files. To incorporate the duration information into the ffmpeg command, you can:
Use the tpad filter on each video (in series with setdar) to pad the video duration so it matches the audio. The padded amount should be 1/25 of a second less than the audio duration.
Specify the -loop 1 input option so the image loops (indefinitely), then add a -t {duration} input option to limit the number of loops. Be aware that the video duration may not be exact.
Specify -r {1/duration} so the single image lasts as long as the audio, and use the fps filter on each input to set the output frame rate.
I'm not familiar with ffmpeg-python, so I cannot provide its solution, but if you're interested, I'd be happy to post equivalent code with my ffmpegio package.
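For readers who want to stay with ffmpeg-python, a rough, untested sketch of the second option might look like the following. It assumes ffmpeg.probe() can report the wav duration, and generate_image_clip is a hypothetical helper, not the asker's original function:

import ffmpeg

def generate_image_clip(img_path, wav_path):
    # hypothetical helper: probe the audio duration with ffprobe
    # (via ffmpeg.probe), then loop the still image and trim it
    # to that duration with -loop 1 / -t
    duration = float(ffmpeg.probe(wav_path)["format"]["duration"])
    video = ffmpeg.input(img_path, loop=1, t=duration, framerate=25)
    audio = ffmpeg.input(wav_path)
    video = video.filter("setdar", "16/9")
    return ffmpeg.concat(video, audio, v=1, a=1)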
[edit]
ffmpegio Solution
Here is how I'd code the 3rd solution with ffmpegio:
import ffmpegio
from os import path

def generate_clip(img):
    """
    Generates a clip from an image and a wav file,
    helper function for export_video
    """
    transition_cond = path.exists("static/transitions/" + img + ".mp4")
    chart_path = path.exists("charts/" + img + ".png")
    if transition_cond:
        video_file = "static/transitions/" + img + ".mp4"
    elif chart_path:
        video_file = "charts/" + img + ".png"
    else:
        video_file = "static/transitions/Transition.jpg"
    audio_file = "audio/" + img + ".wav"
    video_opts = {}
    if not transition_cond:
        # audio_streams_basic() returns audio duration in seconds as Fraction;
        # set the "framerate" of the video to be the reciprocal
        info = ffmpegio.probe.audio_streams_basic(audio_file)
        video_opts["r"] = 1 / info[0]["duration"]
    return [(video_file, video_opts), (audio_file, None)]
def export_video(CHARTS):
    """
    Combines the charts from charts/ and the audio from audio/
    to generate one final video that will be uploaded to Youtube
    """
    # get all input files (video/audio pairs); each group is the key image
    # followed by its chart images, matching the original value.insert(0, key)
    clips = [
        generate_clip("Intro"),
        *(generate_clip(img) for key, value in CHARTS.items() for img in (key, *value)),
        generate_clip("Outro"),
    ]
    # number of clips
    nclips = len(clips)
    # filter chains to set DAR and fps of all video streams
    vfilters = (f"[{2*n}:v]setdar=16/9,fps=30[v{n}]" for n in range(nclips))
    # concatenation filter input: [v0][1:a][v1][3:a][v2][5:a]...
    concatfilter = (
        "".join(f"[v{n}][{2*n+1}:a]" for n in range(nclips))
        + f"concat=n={nclips}:v=1:a=1[vout][aout]"
    )
    # form the full filtergraph
    fg = ";".join((*vfilters, concatfilter))
    # set output file and options
    output = ("export/export.mp4", {"map": ["[vout]", "[aout]"]})
    # run ffmpeg
    ffmpegio.ffmpegprocess.run(
        {
            "inputs": [input for pair in clips for input in pair],
            "outputs": [output],
            "global_options": {"filter_complex": fg},
        },
        overwrite=True,
    )
Since this code does not use the read/write features, the ffmpegio-core package suffices:
pip install ffmpegio-core
Make sure that the FFmpeg binary can be found by ffmpegio; see the installation doc.
Here are the direct links to the documentation of the functions used:
ffmpegprocess.run
ffmpeg_args dict argument
probe.audio_streams_basic (ignore the documentation error: duration and start_time are both of Fraction type)
The code has not been fully validated. If you encounter a problem, it might be easiest to post it on the GitHub Discussions to proceed.
I used opencv and ffmpeg to extract the frames of a video.
opencv
import cv2

# class that lets you capture the images of a video sequentially
vidcap = cv2.VideoCapture("D:/godzillakingofmonster/GodzillaKingOfMonsters_clip.mkv")
count = 0
while vidcap.isOpened():
    # read() combines the two functions grab() and retrieve() into one call;
    # they are combined because, when no frame exists, grab() is used
    # to return False or a NULL value
    ret, image = vidcap.read()
    # save the captured image
    print("D:/godzillakingofmonster/frame/frame%d.jpg" % count)
    cv2.imwrite("D:/godzillakingofmonster/frame/frame%d.jpg" % count, image)
    print('Saved frame%d.jpg' % count)
    count += 1
vidcap.release()
ffmpeg
ffmpeg -i "{target_video}" "{save_folder_path}/{media_name}_%08d.{exp}"
I am wondering which of the two methods gives more accurate results.
When the video is split into frames, the two methods save different images. Why do the results differ?
Which method, ffmpeg or opencv, is more accurate and produces results closer to the original?
I'm assuming ffmpeg is also storing jpg files. In both methods you don't specify the amount of JPEG compression, so you're running with default values, and those defaults are likely different.
Output to a lossless format such as .png to get 100% accurate images from both ffmpeg and opencv.
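For example, a minimal OpenCV sketch with placeholder paths; the ffmpeg side is the same idea, just give it a .png output pattern such as ffmpeg -i input.mkv frames/frame%08d.png:

import cv2

# "input.mkv" and the frames/ directory are placeholders
vidcap = cv2.VideoCapture("input.mkv")
count = 0
while True:
    ret, image = vidcap.read()
    if not ret:
        break
    # PNG is lossless, so the decoded pixels are written out exactly
    cv2.imwrite("frames/frame%d.png" % count, image)
    count += 1
vidcap.release()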
I am working on a project where I am trying to save the video from a drone on my computer and also show it live. I thought about converting the video into images, about 30 per second, and updating my frontend with these pictures so that it looks like a video.
Since it is the first time I am working with video and image strings, I need some help. As far as I can figure out with my knowledge, I am receiving a byte string.
I cannot use the libh264 decoder because I am unable to integrate it with Python 3.7; it only works with Python 2.
Here are some of the strings:
b'\x00\x00\x00\x01A\xe2\x82_\xc1^\xa9y\xae\xa3\xf2G\x1a \x89z\' \x8c\xa6\xe7I}\xf3F\x07t\xf4*b\xd8\xc7\xff\x82\xb32\xb7\x07\x9b\xf5r0\xa0\x1e\x10\x8e\x80\x07\xc1\xdd\xb8g\xba)\x02\xee\x9f\x16][\xe4\xc6\x16\xc8\x17y\x02\xdb\x974\x13Mfn\xcc6TB\xadC\xb3Y\xadB~"\xd5\xdb\xdbg\xbaQ:{\xbftV\xc4s\xb8\xa3\x1cC\xe9?\xca\xf6\xef\x84]?\xbd`\x94\xf6+\xa8\xb4]\xc3\xe9\xa8-I\xd1\x180\xed\xc9\xee\xf4\x93\xd2\r\x00p1\xb3\x1d\xa2~\xfa\xe8\xf4\x97\x08\xc1\x18\x8ad,\xb6\x80\x86\xc6\x05V\x0ba\xcb\x7f\x82\xf2\x03\x9a)\xd6\xd9\x11\x92\x7f\xb5\x8a)R\xaa\xa0 \x85$\xd82(\xee\xd2\x8b\x94N\xacg\x07\x98n\x95OJ\xa4\xcc_\\-\x17\x13\xf3V\x96_\xb5\x97\xe2\xa2;\x03q\xce\x9b\x9e,\xe37{Z\x00\xce|\\\xf9\xdb\xa7\xba\xf3\'c\xee\xc9\xe7I\xfadZ\xb2\xfb\t\xb6\x03\x03\xfe\x9dM!!k\xec\xe0t{\xfeig\xcbL\xf6\x0bOP\r\x97\t\x95Hb\xd81\xb5\xbfVLZ#\x16s\xb6\x1adf\xb5\xe2\xb5\xb7\xccI\x82l\x05\xe9\x85\xd3\'x\x14C\xeb\xc4\xcb\xa5\xc7\xb6=\x7f\\m4\xa4\x00~\xdb\x97\xe4\xbb\xf3A\x86 Mm\xc7\x9a\x90\xda&\xc5\xf2wY\nr.1\xb9\x0c\xb4\xb1\xb2!\x03)\xb3\x19\x1d\xba\xfb)\xb0\xd2LS\x93\xe3\xb4t\x91\xed\xa7\xfe\xceV\x10\xa7Vcd\xcbIt\xdf\xff0\xcb9Q\xef(\x11&W0|p\x13\xfe\xd6\x93A\xa7\xc2(f\xde\xcc[\x8f#P\x07\x1f\xb0\\.\xd0\xa07\xab\xd5\xce\xb1N\xfb\xd3\xcc\x0f\x89+gm1p4\x87_\xf6\xfe\x13\xe8\xec\xa3vd,\xb3jW\x96\xe2\x937\xcb\xc5\xc4\xdb\xd9(wj\xa85y\xccE \xf8\xe4\x83\xd5\xcf\xe5A\xf9\x18T;v\x00\xbc\xac\xd1a\xed\tK\xd6\xd4\xd4\xc4W\xe4F7L\xfc\xb4\xeb3\x937\x94\x02i\xf3\x85\xbe\x05B\xf5\xb8\xccO\x84\xfb]M\x0c\xd8k\x00va\x0f\x91M\xd9\x9f9\xfc\x0f6\xa4f\xc5\xbe\xd9GItD\xdf7*\x93Kv)~[\xf1%\xeb(o\xef;\xc0\xb4,\xa1\xc2V\x8a\xff\xe1\x86\x17\xe7\xf17\xe81l&\x14<j\xb0AS\xf92\xb1C;\x81\x8a\x06D\xab\x11j\xcd\xb1q\x9e\xefm\x0ei7\x15\x8d\x03\xdd6B\xd9qg*X\x0f\xe6F\xdc\xb6\x93N\xbe\x12\xc9#I\xe3\xd4\x80j\xe8z\xd5t\x05,Y\xd7\xec\xd1\x9a\x97\xae\x16\xb0\xdfi\xb2\xb8\xb5J-\xde9&\x1ai\x19\xb7\x81\xa3\'\xccf]\xeeK#\x8bk3\x11\x97\\T\x88\xfb\xee\xd3El:\x16\x13\xafi\xc0\xf9\xef\xefe7\xe4w\x14\xdf76g^\xd02J\x96Z\xedl\x19\x8eG\xb7\xc6\xebHj\x86\x84/:R{+co\xa0\xaa\xeb.\xbb\x0e\xc9\xf3\xa8\x1e\xd4\x1a\x010\x87;\xef\xbe\xaf.\x87\x9a5\xfdG\x82\xd5\xb2\x01\x1e\xf2\xd3l\xef\tb\xe7=1\x03\x8f\xae\x83\x84:0\x9bE;x\x03UB\x87\xbco\xb2\x80xZ\x96\x1a\x0e?i\xe51^\x9b\x1d\xb4\\|\xccH\xdf3G\x83\xbd/\rhS0;\x9a\xdb\xf6NG\x16 ?\xf3\x13<\xcf!p\xd5\n\xb1\xf2\x0e\xcc\xdc\x0b\xe6\xe8\xcb#\x85\x17s#\x87\xb4\xf8f\xc7\x9fi\xcc\xe4b\xca\xc0\x1eh\xc1u\xad\x98\x92\x12\x00\xb5`\xfa!~{\xac\xc0\x14:\xce\xfc\xa4\x90\x12\xc4K\xa5\xb9\x83\xd1\x03\x1a\xd8z\xf6A\xe9\xfbb\x07\x99\xf80\x9b,\x17\x8d /ZXb]\xb2P\\\'\xcb\n\xae\x82\x99X\xf5\t\xd1\xc9p\x11\x8d\xcaD\xf2\x8b\x8bc%\x17] \x89b\xa9kF\x93\xc0\xe1{INUg\xec\xb4\x1b`{\xd1:\xb3\xa4\x7f\t\x9b\xde\xb0V\x1f\xd7\x85>\xbeT\xbb\xe5\xf0u\x96\x98\xad\x9a\xc3N\xf8A\x91\xd95h\x1ef\xbc\xf2\x08B\xe0\x9f\xe0\x1d+\xb6$\xafA\xca\xf6\xc5MX\x88\x9e\xf1\xbawZ\x87\xe7\xf7\xf4\xcd\xe4\x92|L\x1ep69\x81\x8f\xc6\'\xc1q\xe3\x98\x1ev\x94\xa3\xd5\xb8g\xee\x82\xd3Y\xccs\x81\x06\x97\x02\xf0\xd8S\xf1\x1b!\x8emp\x02w\x97\x11t]5?\x16\xfa\xf2\xfb\xf7\xef\xdf\xe4\x82V\x07?F`\xcf\xee\xef\xe7\xae\x18\xef\x83a\x87\xb1zh\xe7\xaez]\x1e\xc5\xd9\xe7&\x9a\xf0\xd0\xa4!\x05\x07\xff\xca\x10\xfa\xb7\x01\x9aU\x8b(\xb5#\x11\x95\x98\x8b\xe3\x84\x9b\x13\xecw\x0e\xc9\xad<X\xde\x11\tuo\xd2\xfd\xb6\xc2\x1c\xfb\x82 
\xb2\xa6\x02\x8c0\x19\xadP\x1b\xc3C\x08\xc9-\xaa\xd0\x15\xb3\xd2g\x07\x980:u\r\xfc\xf4&\xf9\x06$#\x85\xe1l\x16\x8a\x9f\xedX\xa0b\x1a^\x90#256\xc0z\xc7\xfax\xde\xa2\x0fKHY\xed8\xc6`\xa7^#\x0b.\xc4\x1a\r\x938\x17\xe2|\xb0\x95-\xce\xaa}\xc3\xb5\x0bS\xbb\xc6\x0cA\x00`\xe5:\x00\xc6\x0b\x93(1]\xb1\xb6\xc0\xc0de;]~\xa1\xc6d\xf7\x12\xc9\x0f\xfc\xd4\xd0\xfcJ\xb9\xd5\nE\x9a\x7f\x12\xbd\x83\x87\xff\xb8\x15\x0fm\x14p\xba\xc0\xef\x87v\x9e\\\xfd\x8f;\xe3\xb5\x03\x94\xd6t\xa5\xc2\xe9\x92\xd1\xcd9cS\x15\x9c}\xdd\x9f\xf4\xe1\xd2\xb6cR\xb1\x18\x83\xe7\n\xde\xfeUM\x90\xf9\xbf\xf6\xd8J\xc7\x1a:z\x0bGL\x00l\xf6\xa5\x1f$\x86O6\xfa\x13\x04G\x0e\xfe\xca\xbe\xaf\xe1\xb6\xfa\x91\x9b\xb5\x9f]\x12N\x9c\xcf4b}E\x07\xa6B\xd2\x10\xe0Xjxi\x93\x92w\x1d \xd5\xd1\x87,5\xa0\xd3\x18\x8e\xe0\xad9o\x92\x8d\xb1\x95o\x0c"\xb4\xadW\xf9\xc9\xa0\xe5i\xdb\x17\xea\xd6o$Y\xfb\xb5\x9c\x93\x16\xf7\xc0\x1cz\x00\xfc$\x08\x9ay38Y\xe1_8\xb2\xe2\xd1\t\xcdfmcpSEt\x86\xa6'
I would appreciate it if you could help me understand where every picture starts and where it ends. I assume that there have to be parity bits of some kind.
How is it possible to make a picture out of this?
Here is my code and what I've tried so far:
def videoLogging(self):
    logging.info("-----------[Tello] Video Thread: started------------------")
    INTERVAL = 0.2
    index = 0
    while True:
        try:
            packet_data = None
            index += 1
            res_string, ip = self.video_socket.recvfrom(2048)
            packet_data = res_string
            print(packet_data)
            self.createImg(packet_data)
            time.sleep(5)
            # videoResponse = self.video_socket.recv(2048)
            # mv = memoryview(videoResponse).cast('H')
            # if mv is not None:
            #     self.createImg(mv)
            #     print("image created")
            # print('VIDEO %s' % videoResponse)
            # time.sleep(3)
        except Exception as ex:
            logging.error("Error in listening to tello\t\t %s" % ex)

def createImg(self, data):
    with open('image.jpg', 'wb') as f:
        f.write(data)
Unfortunately, the image can't be opened.
Thanks in advance.
This looks like an Annex B stream. There are no parity bits. You can read about the bitstream format here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream.
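For illustration, NAL units in an Annex B stream are separated by 0x000001 or 0x00000001 start codes, so you can split a raw buffer on those markers. A minimal sketch; split_nal_units is a hypothetical helper that only separates units, it does not decode them into pictures:

import re

def split_nal_units(buf):
    # Annex B start codes are 00 00 01 or 00 00 00 01;
    # the bytes between two start codes form one NAL unit
    for unit in re.split(b"\x00\x00\x00\x01|\x00\x00\x01", buf):
        if unit:
            yield unit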
I would like to generate text files for frames extracted with ffmpeg, containing the subtitle of the frame if there is any, from a video into which I have burned the subtitles, also using ffmpeg.
I use a Python script with pysrt to open the SubRip file and generate the text files.
Each frame is named with the frame number by ffmpeg, and since the frames are extracted at a constant rate, I can easily retrieve the time position of a frame using the formula t1 = fnum/fps, where fnum is the frame number retrieved from the filename and fps is the rate passed to ffmpeg for the frame extraction.
Even though I am using the same subtitle file to retrieve the text positions in the timeline as the one that was burned into the video, I still get accuracy errors: some text files are missing, and some are present that shouldn't be.
Because time is not really continuous when talking about frames, I have tried recalibrating t using the fps of the video with the hardcoded subtitles; let's call that vfps, for video fps (I have ensured that the video fps is the same before and after subtitle burning). I get the formula: t2 = int(t1*vfps)/vfps.
It still is not 100% accurate.
For example, my video is at 30 fps (vfps=30) and I extracted frames at 4 fps (fps=4).
The extracted frame 166 (fnum=166) shows no subtitle. In the SubRip file, the previous subtitle ends at t_prev=41.330 and the next subtitle begins at t_next=41.400, which means that t_sub should satisfy t_prev < t_sub < t_next, but I can't make this happen.
Formulas I have tried:
t1 = fnum/fps # 41.5 > t_next
t2 = int(fnum*vfps/fps)/vfps # 41.5 > t_next
# is it because of a indexing problem? No:
t3 = (fnum-1)/fps # 41.25 < t_prev
t4 = int((fnum-1)*vfps/fps)/vfps # 41.23333333 < t_prev
t5 = int(fnum*vfps/fps - 1)/vfps # 41.466666 > t_next
t6 = int((fnum-1)*vfps/fps + 1)/vfps # 41.26666 < t_prev
Command used:
# burning subtitles
# (previously)
# ffmpeg -r 25 -i nosub.mp4 -vf subtitles=sub.srt withsub.mp4
# now:
ffmpeg -i nosub.mp4 -vf subtitles=sub.srt withsub.mp4
# frames extraction
ffmpeg -i withsub.mp4 -vf fps=4 extracted/%05d.bmp -hide_banner
Why does this happen and how can I solve this?
One thing I have noticed: if I extract frames from the original video and from the subtitled one and take the difference of the frames, the result is not only the subtitles; there are variations in the background (which shouldn't happen). If I do the same experiment using the same video twice, the difference is null, which means that the frame extraction is consistent.
Code for the difference:
ffmpeg -i withsub.mp4 -vf fps=4 extracted/%05d.bmp -hide_banner
ffmpeg -i no_sub.mp4 -vf fps=4 extracted_no_sub/%05d.bmp -hide_banner
for img in extracted_no_sub/*.bmp; do
convert extracted/${img##*/} $img -compose minus -composite diff/${img##*/}
done
Thanks.
You can extract frames with accurate timestamps like this:
ffmpeg -i nosub.mp4 -vf subtitles=sub.srt,settb=AVTB,select='if(eq(n\,0)\,1\,floor(4*t)-floor(4*prev_t))' -vsync 0 -r 1000 -frame_pts true extracted/%08d.bmp
This will extract the first frame from each quarter second. The output filename is 8 digits long, where the first 5 digits are seconds and the last 3 are milliseconds. You can change the field size based on the maximum file duration.
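Because -r 1000 with -frame_pts true encodes each frame's presentation timestamp in milliseconds into the filename, recovering the time in seconds is just a matter of parsing the name. A small illustrative helper, not part of the original command:

import re

def frame_time(filename):
    # e.g. "extracted/00041250.bmp" -> 41.25 seconds
    ms = int(re.search(r"(\d{8})\.bmp$", filename).group(1))
    return ms / 1000.0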
So I've followed this tutorial, but it doesn't seem to do anything. Simply nothing. It waits a few seconds and then the program closes. What is wrong with this code?
import cv2

vidcap = cv2.VideoCapture('Compton.mp4')
success, image = vidcap.read()
count = 0
success = True
while success:
    success, image = vidcap.read()
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    if cv2.waitKey(10) == 27:  # exit if Escape is hit
        break
    count += 1
Also, in the comments it says that this limits the frames to 1000. Why?
EDIT:
I tried doing success = True first but that didn't help. It only created one image that was 0 bytes.
From here, download this video so we have the same video file for the test. Make sure that mp4 file is in the same directory as your Python code, and run the Python interpreter from that same directory.
Then modify the code: ditch waitKey, which wastes time and, without a window, cannot capture keyboard events anyway. Also print the success value to make sure it's reading the frames successfully.
import cv2

vidcap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4')
success, image = vidcap.read()
count = 0
while success:
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    success, image = vidcap.read()
    print('Read a new frame: ', success)
    count += 1
How does that go?
To extend this question (and the answer by @user2700065) for a slightly different case: if anyone does not want to extract every frame but wants to extract a frame every second. So a 1-minute video will give 60 frames (images).
import sys
import argparse
import cv2

print(cv2.__version__)

def extractImages(pathIn, pathOut):
    count = 0
    vidcap = cv2.VideoCapture(pathIn)
    success, image = vidcap.read()
    success = True
    while success:
        vidcap.set(cv2.CAP_PROP_POS_MSEC, (count*1000))  # added this line
        success, image = vidcap.read()
        print('Read a new frame: ', success)
        cv2.imwrite(pathOut + "\\frame%d.jpg" % count, image)  # save frame as JPEG file
        count = count + 1

if __name__ == "__main__":
    a = argparse.ArgumentParser()
    a.add_argument("--pathIn", help="path to video")
    a.add_argument("--pathOut", help="path to images")
    args = a.parse_args()
    print(args)
    extractImages(args.pathIn, args.pathOut)
This is a function that will convert most video formats into the individual frames they contain. It works on Python 3 with OpenCV 3+.
import cv2
import time
import os

def video_to_frames(input_loc, output_loc):
    """Function to extract frames from input video file
    and save them as separate frames in an output directory.
    Args:
        input_loc: Input video file.
        output_loc: Output directory to save the frames.
    Returns:
        None
    """
    try:
        os.mkdir(output_loc)
    except OSError:
        pass
    # Log the time
    time_start = time.time()
    # Start capturing the feed
    cap = cv2.VideoCapture(input_loc)
    # Find the number of frames
    video_length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
    print("Number of frames: ", video_length)
    count = 0
    print("Converting video..\n")
    # Start converting the video
    while cap.isOpened():
        # Extract the frame
        ret, frame = cap.read()
        if not ret:
            continue
        # Write the results back to output location.
        cv2.imwrite(output_loc + "/%#05d.jpg" % (count+1), frame)
        count = count + 1
        # If there are no more frames left
        if count > (video_length-1):
            # Log the time again
            time_end = time.time()
            # Release the feed
            cap.release()
            # Print stats
            print("Done extracting frames.\n%d frames extracted" % count)
            print("It took %d seconds for conversion." % (time_end-time_start))
            break

if __name__ == "__main__":
    input_loc = '/path/to/video/00009.MTS'
    output_loc = '/path/to/output/frames/'
    video_to_frames(input_loc, output_loc)
It supports .mts and normal files like .mp4 and .avi. Tried and tested on .mts files. Works like a charm.
This is a tweak of the previous answer (by @GShocked) for Python 3.x. I would post it as a comment, but I don't have enough reputation.
import sys
import argparse
import cv2

print(cv2.__version__)

def extractImages(pathIn, pathOut):
    vidcap = cv2.VideoCapture(pathIn)
    success, image = vidcap.read()
    count = 0
    success = True
    while success:
        success, image = vidcap.read()
        print('Read a new frame: ', success)
        cv2.imwrite(pathOut + "\\frame%d.jpg" % count, image)  # save frame as JPEG file
        count += 1

if __name__ == "__main__":
    print("aba")
    a = argparse.ArgumentParser()
    a.add_argument("--pathIn", help="path to video")
    a.add_argument("--pathOut", help="path to images")
    args = a.parse_args()
    print(args)
    extractImages(args.pathIn, args.pathOut)
The previous answers lose the first frame. Also, it is nice to store the images in a folder.
# create a folder to store extracted images
import os
folder = 'test'
os.mkdir(folder)
# use opencv to do the job
import cv2
print(cv2.__version__)  # my version is 3.1.0

vidcap = cv2.VideoCapture('test_video.mp4')
count = 0
while True:
    success, image = vidcap.read()
    if not success:
        break
    cv2.imwrite(os.path.join(folder, "frame{:d}.jpg".format(count)), image)  # save frame as JPEG file
    count += 1
print("{} images are extracted in {}.".format(count, folder))
By the way, you can check the frame rate with VLC: go to Window -> Media Information -> Codec Details.
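You can also query the frame rate programmatically with OpenCV. A quick sketch; the filename is a placeholder:

import cv2

vidcap = cv2.VideoCapture('test_video.mp4')  # placeholder filename
fps = vidcap.get(cv2.CAP_PROP_FPS)  # frame rate reported by the container
print("fps:", fps)
vidcap.release()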
After a lot of research on how to convert frames to video, I have created this function; hope this helps. We require OpenCV for this:
import cv2
import numpy as np
import os
from os.path import isfile, join

def frames_to_video(inputpath, outputpath, fps):
    image_array = []
    files = [f for f in os.listdir(inputpath) if isfile(join(inputpath, f))]
    files.sort(key=lambda x: int(x[5:-4]))
    for i in range(len(files)):
        img = cv2.imread(inputpath + files[i])
        size = (img.shape[1], img.shape[0])
        img = cv2.resize(img, size)
        image_array.append(img)
    fourcc = cv2.VideoWriter_fourcc('D', 'I', 'V', 'X')
    out = cv2.VideoWriter(outputpath, fourcc, fps, size)
    for i in range(len(image_array)):
        out.write(image_array[i])
    out.release()

inputpath = 'folder path'
outpath = 'video file path/video.mp4'
fps = 29
frames_to_video(inputpath, outpath, fps)
Change the value of fps (frames per second), the input folder path, and the output folder path according to your own local locations.
This code extracts frames from the video and saves them in .jpg format:
import cv2
import numpy as np
import os

# set video file path of input video with name and extension
vid = cv2.VideoCapture('VideoPath')

if not os.path.exists('images'):
    os.makedirs('images')

# for frame identity
index = 0
while True:
    # Extract images
    ret, frame = vid.read()
    # end of frames
    if not ret:
        break
    # Saves images
    name = './images/frame' + str(index) + '.jpg'
    print('Creating...' + name)
    cv2.imwrite(name, frame)
    # next frame
    index += 1
In 2022 you also have the option to use ImageIO to do this, which IMHO is much more hassle-free and readable.
import imageio.v3 as iio

for idx, frame in enumerate(iio.imiter("imageio:cockatoo.mp4")):
    iio.imwrite(f"extracted_images/frame{idx:03d}.jpg", frame)
Sidenote 1: "imageio:cockatoo.mp4" is a standard image provided by ImageIO for testing and demonstration purposes. You can simply replace it with "path/to/your/video.mp4".
Sidenote 2: You will have to install one of ImageIO's optional dependencies to read video data, which can be done via pip install imageio-ffmpeg or pip install av.
You can time this against OpenCV, and you will find that there isn't that much to gain from OpenCV on this front either:
Read-Only Timings
=================
OpenCV: 0.453
imageio_ffmpeg: 0.765
imageio_pyav: 0.272
Read + Write Timings
====================
OpenCV: 3.237
imageio_ffmpeg: 1.597
imageio_pyav: 1.506
By default, OpenCV and ImageIO+av are about equally fast when reading. Both bind directly to the FFmpeg libraries under the hood, so this is rather unsurprising. However, ImageIO allows you to tweak FFmpeg's default threading model (thread_type="FRAME"), which is much faster when bulk reading.
More importantly, ImageIO is much faster at writing JPEG than OpenCV. This is because Pillow, which ImageIO uses here, is faster than OpenCV at encoding JPEG. Writing images dominates the runtime in this scenario, so you end up with an overall 2x improvement when using ImageIO instead of OpenCV.
Here is the code for reference:
import imageio.v3 as iio
import cv2
import timeit
from pathlib import Path

# create a common local file for benchmarking
video_file = "shared_video.mp4"
if not Path(video_file).exists():
    frames = iio.imread("imageio:cockatoo.mp4")
    meta = iio.immeta("imageio:cockatoo.mp4", exclude_applied=False)
    iio.imwrite(video_file, frames, fps=meta["fps"])

repeats = 10

def read_cv2():
    vidcap = cv2.VideoCapture(video_file)
    success, image = vidcap.read()
    idx = 0
    while success:
        cv2.imwrite(f"extracted_images/frame{idx:03d}.jpg", image)
        success, image = vidcap.read()
        idx += 1

def read_imageio_ffmpeg():
    for idx, frame in enumerate(iio.imiter(video_file, plugin="FFMPEG")):
        iio.imwrite(f"extracted_images/frame{idx:03d}.jpg", frame)

def read_imageio_pyav():
    for idx, frame in enumerate(
        iio.imiter(video_file, plugin="pyav", format="rgb24", thread_type="FRAME")
    ):
        iio.imwrite(f"extracted_images/frame{idx:03d}.jpg", frame)

time_cv2 = (
    timeit.timeit("read_cv2()", setup="from __main__ import read_cv2", number=repeats)
    / repeats
)
time_imageio_ffmpeg = (
    timeit.timeit(
        "read_imageio_ffmpeg()",
        setup="from __main__ import read_imageio_ffmpeg",
        number=repeats,
    )
    / repeats
)
time_imageio_pyav = (
    timeit.timeit(
        "read_imageio_pyav()",
        setup="from __main__ import read_imageio_pyav",
        number=repeats,
    )
    / repeats
)

print(
    f"""
Timings
=======
OpenCV: {time_cv2:<3.3f}
imageio_ffmpeg: {time_imageio_ffmpeg:<3.3f}
imageio_pyav: {time_imageio_pyav:<3.3f}
"""
)
The following script will extract frames every half second from all videos in a folder. (Works on Python 3.7.)
import cv2
import os

listing = os.listdir(r'D:/Images/AllVideos')
count = 1
for vid in listing:
    vid = r"D:/Images/AllVideos/" + vid
    vidcap = cv2.VideoCapture(vid)

    def getFrame(sec):
        vidcap.set(cv2.CAP_PROP_POS_MSEC, sec*1000)
        hasFrames, image = vidcap.read()
        if hasFrames:
            cv2.imwrite("D:/Images/Frames/image"+str(count)+".jpg", image)  # Save frame as JPG file
        return hasFrames

    sec = 0
    frameRate = 0.5  # Change this number to 1 for each 1 second
    success = getFrame(sec)
    while success:
        count = count + 1
        sec = sec + frameRate
        sec = round(sec, 2)
        success = getFrame(sec)
This function extracts images from a video at 1 fps; in addition, it identifies the last frame and stops reading when it reaches it:
import cv2
import numpy as np

def extract_image_one_fps(video_source_path):
    vidcap = cv2.VideoCapture(video_source_path)
    count = 0
    success = True
    while success:
        vidcap.set(cv2.CAP_PROP_POS_MSEC, (count*1000))
        success, image = vidcap.read()
        ## Stop when last frame is identified
        image_last = cv2.imread("frame{}.png".format(count-1))
        if np.array_equal(image, image_last):
            break
        cv2.imwrite("frame%d.png" % count, image)  # save frame as PNG file
        print('{}.sec reading a new frame: {} '.format(count, success))
        count += 1
I am using Python via Anaconda's Spyder software. Using the original code listed in the question of this thread by @GShocked, the code does not work (Python won't read the mp4 file). So I downloaded OpenCV 3.2 and copied "opencv_ffmpeg320.dll" and "opencv_ffmpeg320_64.dll" from the "bin" folder. I pasted both of these dll files into Anaconda's "Dlls" folder.
Anaconda also has a "pckgs" folder... I copied and pasted the entire "OpenCV 3.2" folder that I downloaded into that Anaconda "pckgs" folder.
Finally, Anaconda has a "Library" folder which has a "bin" subfolder. I pasted the "opencv_ffmpeg320.dll" and "opencv_ffmpeg320_64.dll" files into that folder.
After closing and restarting Spyder, the code worked. I'm not sure which of the three methods did it, and I'm too lazy to go back and figure it out. But it works, so cheers!
I might be late here, but you can use this pip package to quickly generate images from videos. You can also get images at a specific fps.
pip install videoToImages
Then type the following command in the terminal:
videoToimages --videoFolder [pathToVideosFolder]
Example: videoToimages --videoFolder "c:/videos"
For a specific output fps, set --fps to any required value; --fps 1 means one image per second of the video.
Full commands:
videoToimages --videoFolder "c:/videos"
videoToimages --videoFolder "c:/videos" --fps 10 --img_size (512, 512)
This code is simple and guarantees reliable execution.
import cv2

# path of video file
video_path = "path/to/video.mp4"

# Open video file
video = cv2.VideoCapture(video_path)

# number of frames in video
frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT))

# Convert each frame to an image and save it to a file
for i in range(frame_count):
    ret, frame = video.read()
    if ret:
        image_path = f"path/to/image_{i}.jpg"
        cv2.imwrite(image_path, frame)

# Close video file
video.release()
There are several reasons to extract slides/frames from a video presentation, especially in the case of education or conference related videos. It allows you to access the study notes without watching the whole video.
I have faced this issue several times, so I decided to create a solution for it myself using Python. I have made the code open source; you can easily set up this tool and run it in a few simple steps.
Refer to this YouTube video tutorial.
Steps on how to use this tool.
Clone this project video2pdfslides
Set up your environment by running "pip install -r requirements.txt"
Copy your video path
Run "python video2pdfslides.py <video_path>"
Boom! The pdf slides will be available in the output folder.
Make notes and enjoy!