How to get the duration of a video in Python?

I need to get the duration of a video in Python. The video formats I need to support are MP4, Flash video, AVI, and MOV... I have a shared hosting solution, so I have no FFmpeg support.
What would you suggest?
Thanks!

You can use the external command ffprobe for this. Specifically, adapt this command from the FFmpeg Wiki into Python:
import subprocess

def get_length(filename):
    result = subprocess.run(["ffprobe", "-v", "error", "-show_entries",
                             "format=duration", "-of",
                             "default=noprint_wrappers=1:nokey=1", filename],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    return float(result.stdout)

(year 2020 answer)
Solutions:
opencv 0.0065 sec ✔
ffprobe 0.0998 sec
moviepy 2.8239 sec
✔ OpenCV method:
def with_opencv(filename):
    import cv2
    video = cv2.VideoCapture(filename)
    duration = video.get(cv2.CAP_PROP_POS_MSEC)
    frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
    return duration, frame_count
Usage: print(with_opencv('my_video.webm'))
Other:
ffprobe method:
def with_ffprobe(filename):
    import subprocess, json
    result = subprocess.check_output(
        f'ffprobe -v quiet -show_streams -select_streams v:0 -of json "{filename}"',
        shell=True).decode()
    fields = json.loads(result)['streams'][0]
    duration = fields['tags']['DURATION']
    fps = eval(fields['r_frame_rate'])
    return duration, fps
moviepy method:
def with_moviepy(filename):
    from moviepy.editor import VideoFileClip
    clip = VideoFileClip(filename)
    duration = clip.duration
    fps = clip.fps
    width, height = clip.size
    return duration, fps, (width, height)

As reported here https://www.reddit.com/r/moviepy/comments/2bsnrq/is_it_possible_to_get_the_length_of_a_video/
you could use the moviepy module
from moviepy.editor import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )

To make things a little easier, the following code puts the ffprobe output into JSON.
You can use probe(filename) to get the full JSON, or duration(filename) to get just the duration:
json_info = probe(filename)
seconds = duration(filename)  # float number of seconds
It works on Ubuntu 14.04, where ffprobe is of course installed. The code is not optimized for speed or beauty, but it works on my machine; hope it helps.
#
# Command line use of 'ffprobe':
#
# ffprobe -loglevel quiet -print_format json \
# -show_format -show_streams \
# video-file-name.mp4
#
# man ffprobe # for more information about ffprobe
#
import subprocess32 as sp
import json

def probe(vid_file_path):
    ''' Give a json from ffprobe command line

    vid_file_path : The absolute (full) path of the video file, string.
    '''
    if type(vid_file_path) != str:
        raise Exception('Give ffprobe a full file path of the video')

    command = ["ffprobe",
               "-loglevel", "quiet",
               "-print_format", "json",
               "-show_format",
               "-show_streams",
               vid_file_path
               ]

    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.STDOUT)
    out, err = pipe.communicate()
    return json.loads(out)


def duration(vid_file_path):
    ''' Video's duration in seconds, return a float number
    '''
    _json = probe(vid_file_path)

    if 'format' in _json:
        if 'duration' in _json['format']:
            return float(_json['format']['duration'])

    if 'streams' in _json:
        # commonly stream 0 is the video
        for s in _json['streams']:
            if 'duration' in s:
                return float(s['duration'])

    # if we got here, none of the returns above happened and no duration was found
    raise Exception('I found no duration')


if __name__ == "__main__":
    video_file_path = "/tmp/tt1.mp4"
    duration(video_file_path)  # 10.008

Check out this new Python library: https://github.com/sbraz/pymediainfo
To get the duration:
from pymediainfo import MediaInfo
media_info = MediaInfo.parse('my_video_file.mov')
#duration in milliseconds
duration_in_ms = media_info.tracks[0].duration
The above code was tested against a valid MP4 file and works, but you should add more checks because it relies heavily on the output of MediaInfo.
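For example, a minimal sketch (not from the original answer; the file name is a placeholder) that converts the millisecond value to seconds and guards against a missing duration:
from pymediainfo import MediaInfo

def duration_seconds(path):
    media_info = MediaInfo.parse(path)
    ms = media_info.tracks[0].duration  # general track duration in milliseconds, may be None
    return ms / 1000.0 if ms is not None else None

print(duration_seconds('my_video_file.mov'))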

Use a modern method with https://github.com/kkroening/ffmpeg-python (pip install ffmpeg-python --user). Don't forget to install ffmpeg too.
Get video info:
import ffmpeg
info=ffmpeg.probe(filename)
print(f"duration={info['format']['duration']}")
print(f"framerate={info['streams'][0]['avg_frame_rate']}")
The ffmpeg-python package also lets you easily create, edit and apply filters to videos, as in the sketch below.
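For instance, a small sketch along the lines of the project's README (file names are placeholders), horizontally flipping a video:
import ffmpeg

# Flip input.mp4 horizontally and write the result; file names are placeholders.
ffmpeg.input('input.mp4').hflip().output('output.mp4').run()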

from subprocess import check_output
file_name = "movie.mp4"
#For Windows
a = str(check_output('ffprobe -i "'+file_name+'" 2>&1 |findstr "Duration"',shell=True))
#For Linux
#a = str(check_output('ffprobe -i "'+file_name+'" 2>&1 |grep "Duration"',shell=True))
a = a.split(",")[0].split("Duration:")[1].strip()
h, m, s = a.split(':')
duration = int(h) * 3600 + int(m) * 60 + float(s)
print(duration)

A function I came up with. It basically just wraps ffprobe with the right arguments:
from subprocess import check_output, CalledProcessError, STDOUT

def getDuration(filename):
    command = [
        'ffprobe',
        '-v',
        'error',
        '-show_entries',
        'format=duration',
        '-of',
        'default=noprint_wrappers=1:nokey=1',
        filename
    ]
    try:
        output = check_output(command, stderr=STDOUT).decode()
    except CalledProcessError as e:
        output = e.output.decode()
    return output

fn = '/app/648c89e8-d31f-4164-a1af-034g0191348b.mp4'
print(getDuration(fn))
Outputs duration like this:
7.338000

As reported here
https://www.reddit.com/r/moviepy/comments/2bsnrq/is_it_possible_to_get_the_length_of_a_video/
you could use the moviepy module
from moviepy.editor import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )
If you're trying to get the duration of many videos in a folder, it'll crash, giving the error:
AttributeError: 'AudioFileClip' object has no attribute 'reader'
So, in order to avoid that, you'll need to add
clip.close()
Based on this:
https://zulko.github.io/moviepy/_modules/moviepy/video/io/VideoFileClip.html
So the code would look like this:
from moviepy.editor import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )
clip.close()
Cheers! :)

The above pymediainfo answer really helped me. Thank you.
As a beginner, it did take a while to find out what was missing (sudo apt install mediainfo) and how to also address attributes in other ways (see below).
Hence this additional example:
# sudo apt install mediainfo
# pip3 install pymediainfo
from pymediainfo import MediaInfo

media_info = MediaInfo.parse('/home/pi/Desktop/a.mp4')
for track in media_info.tracks:
    #for k in track.to_data().keys():
    #    print("{}.{}={}".format(track.track_type, k, track.to_data()[k]))
    if track.track_type == 'Video':
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
        print("{} width {}".format(track.track_type, track.to_data()["width"]))
        print("{} height {}".format(track.track_type, track.to_data()["height"]))
        print("{} duration {}s".format(track.track_type, track.to_data()["duration"]/1000.0))
        print("{} duration {}".format(track.track_type, track.to_data()["other_duration"][3][0:8]))
        print("{} other_format {}".format(track.track_type, track.to_data()["other_format"][0]))
        print("{} codec_id {}".format(track.track_type, track.to_data()["codec_id"]))
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
    elif track.track_type == 'Audio':
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
        print("{} format {}".format(track.track_type, track.to_data()["format"]))
        print("{} codec_id {}".format(track.track_type, track.to_data()["codec_id"]))
        print("{} channel_s {}".format(track.track_type, track.to_data()["channel_s"]))
        print("{} other_channel_s {}".format(track.track_type, track.to_data()["other_channel_s"][0]))
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print("********************************************************************")
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Video width 1920
Video height 1080
Video duration 383.84s
Video duration 00:06:23
Video other_format AVC
Video codec_id avc1
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Audio format AAC
Audio codec_id mp4a-40-2
Audio channel_s 2
Audio other_channel_s 2 channels
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Open a cmd terminal and install the Python package mutagen using this command:
python -m pip install mutagen
Then use this code to get the video duration and its file size:
import os
from mutagen.mp4 import MP4
audio = MP4("filePath")
print(audio.info.length)
print(os.path.getsize("filePath"))

Here is what I use in prod today; the cv2 way works well for MP4, WMV and FLV, which is what I needed:
try:
    import cv2  # opencv-python - optional if using ffprobe
except ImportError:
    cv2 = None
import json
import subprocess


def get_playback_duration(video_filepath, method='cv2'):  # pragma: no cover
    """
    Get video playback duration in seconds and fps
    "This epic classic car collection centres on co.webm"

    :param video_filepath: str, path to video file
    :param method: str, method cv2 or default ffprobe
    """
    if method == 'cv2':  # Use opencv-python
        video = cv2.VideoCapture(video_filepath)
        fps = video.get(cv2.CAP_PROP_FPS)
        frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
        duration_seconds = frame_count / fps if fps else 0
    else:  # ffprobe
        result = subprocess.check_output(
            f'ffprobe -v quiet -show_streams -select_streams v:0 -of json "{video_filepath}"', shell=True).decode()
        fields = json.loads(result)['streams'][0]
        duration_seconds = fields['tags'].get('DURATION')
        fps = eval(fields.get('r_frame_rate'))
    return duration_seconds, fps
ffprobe does not work for flv and I couldn't get anything to work for webm. Otherwise, this works great and is being used in prod today.

Referring to the answer by Nikolay Gogol using opencv-python (cv2):
His method did not work for me (Python 3.8.10, opencv-python==4.5.5.64), and
the comments saying that OpenCV can not be used in this case are also not true.
CAP_PROP_POS_MSEC gives you the millisecond of the current frame that the VideoCapture is at and not the total milliseconds of the video, so when just loading the video this is obviously 0.
But we can actually get the frame rate and the number of total frames to calculate the total number of milliseconds of the video:
import cv2
video = cv2.VideoCapture("video.mp4")
# the frame rate or frames per second
frame_rate = video.get(cv2.CAP_PROP_FPS)
# the total number of frames
total_num_frames = video.get(cv2.CAP_PROP_FRAME_COUNT)
# the duration in seconds
duration = total_num_frames / frame_rate

For anyone who likes using the mediainfo program:
import json
import subprocess

#===============================
def getMediaInfo(mediafile):
    cmd = "mediainfo --Output=JSON %s" % (mediafile)
    proc = subprocess.Popen(cmd, shell=True,
                            stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    stdout, stderr = proc.communicate()
    data = json.loads(stdout)
    return data

#===============================
def getDuration(mediafile):
    data = getMediaInfo(mediafile)
    duration = float(data['media']['track'][0]['Duration'])
    return duration

Using ffprobe in a function that returns the duration of a video in seconds:
def video_duration(filename):
    import subprocess
    secs = subprocess.check_output(
        f'ffprobe -v error -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 "{filename}"',
        shell=True).decode()
    return float(secs)

Related

if statement with ffprobe stream analyzer

I am trying to use an if statement to display a video stream parameter (bitrate) when it exists, or display a message (cannot measure bitrate) when there is no bit rate, so my GUI doesn't crash.
Where is the mistake in my code, or how can I make it better?
import json
import shlex
import subprocess

cmd = "ffprobe -v quiet -print_format json -show_streams"
args = shlex.split(cmd)
myurl = "udp://#239.168.2.6:2113"  # this is my video stream
args.append(myurl)

ffprobeOutput = subprocess.check_output(args).decode('utf-8')
ffprobeOutput = json.loads(ffprobeOutput)

video_stream = next((stream for stream in ffprobeOutput['streams'] if
                     stream['codec_type'] == 'video'))

if int(video_stream['bit_rate']) == True:
    bit_rate1 = int(video_stream['bit_rate'])
    print(bit_rate1)
else:
    print('can not measure bitrate')
My video stream has a bitrate parameter, but my if statement always takes the else branch (print('can not measure bitrate')).
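One possible fix (a minimal sketch, not taken from the thread): a nonzero integer never compares equal to True, so test for the key's presence instead of comparing the bitrate to True:
# Sketch: check whether the stream actually reports a bit_rate key.
bit_rate = video_stream.get('bit_rate')
if bit_rate is not None:
    print(int(bit_rate))
else:
    print('can not measure bitrate')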

Saving audio from livestream using VLC library

I am trying to save audio clips (15 seconds per clip) from a live stream using the VLC library. I am unable to find any option that would let me record only 15 seconds from the live stream, so I ended up using a timer in my code, but the recorded clips sometimes contain 10 seconds, sometimes 20 seconds (rarely 15 seconds). Also, sometimes the audio content is repeated in the clips.
Here is the code (I am a newbie so please guide me)
Code.py
import os
import sys
import vlc
import time

clipNumber = sys.argv[1]
filepath = 'http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'
movie = os.path.expanduser(filepath)

if 'http://' not in filepath:
    if not os.access(movie, os.R_OK):
        print('Error: %s file is not readable' % movie)
        sys.exit(1)

filename_and_command = "--sout=#transcode{vcodec=none,acodec=mp3,ab=320,channels=2,samplerate=44100}:file{dst=clip" + str(clipNumber) + ".mp3}"
# filename_and_command = "--sout=file/ts:clip" + str(clipNumber) + ".mp3"

instance = vlc.Instance(filename_and_command)
try:
    media = instance.media_new(movie)
except NameError:
    print('NameError: %s (%s vs LibVLC %s)' % (sys.exc_info()[1],
          vlc.__version__, vlc.libvlc_get_version()))
    sys.exit(1)
player = instance.media_player_new()
player.set_media(media)
player.play()

time.sleep(15)
exit()
Since I want to record 1 minute of the live stream, I invoke this Python code from a bash script 4 times, and it creates 4 audio clips (clip1.mp3, clip2.mp3, clip3.mp3 and clip4.mp3).
Script.sh
for ((i=1; i<=4; i++))
do
    printf "Recording stream #%d\n" "$i"
    python code.py "$i"
    printf "Finished stream #%d\n" "$i"
done
Is there any way to loop the code in Python instead of invoking it again and again from the bash script? (I tried to put the code in a loop in Python, but the first clip, clip1, keeps recording and never finishes.) And is there a way to specify that I only want to record 15 seconds from the live stream instead of using time.sleep(15)?
If you just want to save the file, no need to use vlc. Here is a short procedure I use to do that:
from datetime import datetime, timedelta

def record(filepath, stream, duration):
    fd = open(filepath, 'wb')
    begin = datetime.now()
    duration = timedelta(milliseconds=duration)
    while datetime.now() - begin < duration:
        data = stream.read(10000)
        fd.write(data)
    fd.close()
Example of use, to record for one second:
from urllib.request import urlopen
record('clip.mp3', urlopen('http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'), 1000)
All of the work required can easily be done with FFmpeg (a Python sketch follows below):
ffmpeg -i streamURL -c copy -vn -ac 2 -acodec aac -t 15 output.aac
-vn records only the audio part (without video)
-t specifies the duration of the stream you want to record (15 sec here)
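If you want to run that from Python, here is a minimal subprocess sketch (the stream URL and output file name are placeholders, not from the original answer):
import subprocess

# Record 15 seconds of audio from the stream into an AAC file.
subprocess.run([
    'ffmpeg', '-i', 'http://example.com/stream/playlist.m3u8',
    '-vn', '-ac', '2', '-acodec', 'aac', '-t', '15', 'clip.aac'
], check=True)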

Capture webcam using ffmpeg-python library

Hi, I'm attempting to capture a webcam stream in Python using the ffmpeg-python wrapper library (https://github.com/kkroening/ffmpeg-python).
I have a working ffmpeg command which is:
ffmpeg -f v4l2 -video_size 352x288 -i /dev/video0 -vf "drawtext='fontfile=fonts/FreeSerif.ttf: text=%{pts} : \
x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000#1'" -an -y -t 15 videotests/out_localtime8.mp4
This captures 15s of video in resolution 352x288, and writes a timestamp in the bottom centre of the video.
To play with the ffmpeg-python library, I'm simply attempting to only get the drawtext filter working, here is my script:
#!/usr/bin/env python
import ffmpeg
stream = ffmpeg.input('videotests/example.mov')
stream = ffmpeg.filter_(stream,'drawtext',("fontfile=fonts/FreeSerif.ttf:text=%{pts}"))
stream = ffmpeg.output(stream, 'videotests/output4.mp4')
ffmpeg.run(stream)
The error is
[Parsed_drawtext_0 # 0x561f59d494e0] Either text, a valid file or a timecode must be provided
[AVFilterGraph # 0x561f59d39080] Error initializing filter 'drawtext' with args 'fontfile\\\=fonts/FreeSerif.ttf\\\:text\\\=%{pts}'
Error initializing complex filters.
Invalid argument
The above appears to at least reach ffmpeg but the format of the arguments is incorrect, how to correct them?
Alternatively, when I attempt to split the arguments and pass just one of them, I get a different error, as follows:
stream = ffmpeg.filter_(stream,'drawtext',('text=%{pts}'))
Error is
subprocess.CalledProcessError: Command '['ffmpeg', '-i', 'videotests/example.mov', '-filter_complex', "[0]drawtext=(\\\\\\\\\\\\\\'text\\\\\\\\\\\\=%{pts}\\\\\\\\\\\\\\'\\,)[s0]", '-map', '[s0]', 'videotests/output4.mp4']' returned non-zero exit status 1.
How come there are so many backslashes? Any advice on how to proceed please.
Thank you
I worked out the correct syntax eventually. Here is a working example
#!/usr/bin/env python
import ffmpeg
stream = ffmpeg.input('videotests/example.mov')
stream = ffmpeg.filter_(stream,'drawtext',fontfile="fonts/hack/Hack-Regular.ttf",text="%{pts}",box='1', boxcolor='0x00000000#1', fontcolor='white')
stream = ffmpeg.output(stream, 'videotests/output6.mp4')
ffmpeg.run(stream)
The syntax is
ffmpeg.filter_(<video stream name>,'<filter name>',filter_parameter_name='value',<filter_parameter_name>=value)
Where necessary use quotes for the filter_parameter_name values.
Hope this helps someone.
Step 1: Set the environment variable for ffmpeg.
Step 2: The code below helps capture an image as well as video using ffmpeg in Python, tagging each file with the current date and time.
import subprocess
from datetime import datetime
import time


class Webcam:
    def Image(self):
        try:
            user = int(input("How many Images to be captured:"))
        except ValueError:
            print("\nPlease only use integers")
        for i in range(user):
            subprocess.call("ffmpeg -f vfwcap -vstats_file c:/test/log" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".txt -t 10 -r 25 -i 0 c:/test/sample" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".jpg")
            time.sleep(3)

    def Video(self):
        try:
            user = int(input("How many videos to be captured:"))
        except ValueError:
            print("\nPlease only use integers")
        for i in range(user):
            subprocess.call("ffmpeg -f vfwcap -vstats_file c:/test/log" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".txt -t 10 -r 25 -i 0 c:/test/sample" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".avi")
            time.sleep(5)


Web = Webcam()
print("press 1 to capture image")
print("Press 2 to capture video")
choose = int(input("Enter choice:"))
if choose == 1:
    Web.Image()
elif choose == 2:
    Web.Video()
else:
    print("wrong choice")
import subprocess is used to call the FFmpeg command;
subprocess is a built-in module provided by Python.

Using ffmpeg to obtain video durations in python

I've installed ffprobe using the pip ffprobe command on my PC, and installed ffmpeg from here.
However, I'm still having trouble running the code listed here.
I try to use the following code unsuccessfully.
SyntaxError: Non-ASCII character '\xe2' in file GetVideoDurations.py
on line 12, but no encoding declared; see
http://python.org/dev/peps/pep-0263/ for details
Does anyone know what's wrong? Am I not referencing the directories correctly? Do I need to make sure the .py and video files are in a specific location?
import subprocess

def getLength(filename):
    result = subprocess.Popen(["ffprobe", "filename"],
                              stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return [x for x in result.stdout.readlines() if "Duration" in x]
fileToWorkWith = ‪'C:\Users\PC\Desktop\Video.mkv'
getLength(fileToWorkWith)
Apologies if the question is somewhat basic. All I need is to be able to iterate over a group of video files and get their start time and end time.
Thank you!
There is no need to iterate through the output of FFprobe. There is one simple command which returns only the duration of the input file:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 <input_video>
You can use the following method instead to get the duration:
def get_length(input_video):
    result = subprocess.run(['ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', input_video], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return float(result.stdout)
I'd suggest using FFprobe (comes with FFmpeg).
The answer Chamath gave was pretty close, but ultimately failed for me.
Just as a note, I'm using Python 3.5 and 3.6 and this is what worked for me.
import subprocess

def get_duration(file):
    """Get the duration of a video using ffprobe."""
    cmd = 'ffprobe -i {} -show_entries format=duration -v quiet -of csv="p=0"'.format(file)
    output = subprocess.check_output(
        cmd,
        shell=True,  # Let this run in the shell
        stderr=subprocess.STDOUT
    )
    # return round(float(output))  # ugly, but rounds your seconds up or down
    return float(output)
If you want to throw this function into a class and use it in Django (1.8 - 1.11), just change one line and put this function into your class, like so:
def get_duration(file):
to:
def get_duration(self, file):
Note: Using a relative path worked for me locally, but the production server required an absolute path. You can use os.path.abspath(os.path.dirname(file)) to get the path to your video or audio file.
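For instance, a tiny sketch (the relative path is a placeholder) of resolving the file to an absolute path before probing it:
import os

video = 'media/my_video.mp4'  # placeholder relative path
absolute_path = os.path.join(os.path.abspath(os.path.dirname(video)), os.path.basename(video))
print(get_duration(absolute_path))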
Using the python ffmpeg package (https://pypi.org/project/python-ffmpeg)
import ffmpeg
duration = ffmpeg.probe(local_file_path)["format"]["duration"]
where local_file_path is a relative or absolute path to your file.
I think Chamath's second comment answers the question: you have a strange character somewhere in your script, either because you are using a ` instead of a ' or because you have a word with non-English accents, or something like that.
As a remark, for what you are doing you can also try MoviePy, which parses the ffmpeg output like you do (but maybe in the future I'll use Chamath's ffprobe method, it looks cleaner):
import moviepy.editor as mp
duration = mp.VideoFileClip("my_video.mp4").duration
Updated solution using ffprobe, based on llogan's guidance and the linked page:
import subprocess

def get_duration(input_video):
    cmd = ["ffprobe", "-i", input_video, "-show_entries", "format=duration",
           "-v", "quiet", "-sexagesimal", "-of", "csv=p=0"]
    return subprocess.check_output(cmd).decode("utf-8").strip()
Fragile solution, because it parses stderr output:
the stderr output from ffmpeg is not intended for machine parsing and
is considered fragile.
I got help from the following documentation (https://codingwithcody.com/2014/05/14/get-video-duration-with-ffmpeg-and-python/) and https://stackoverflow.com/a/6239379/2402577
Actually, sed is unnecessary: ffmpeg -i file.mp4 2>&1 | grep -o -P "(?<=Duration: ).*?(?=,)"
You can use the following method to get the duration in HH:MM:SS format:
import subprocess

def get_duration(input_video):
    # cmd: ffmpeg -i file.mkv 2>&1 | grep -o -P "(?<=Duration: ).*?(?=,)"
    p1 = subprocess.Popen(['ffmpeg', '-i', input_video], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    p2 = subprocess.Popen(["grep", "-o", "-P", "(?<=Duration: ).*?(?=,)"], stdin=p1.stdout, stdout=subprocess.PIPE)
    p1.stdout.close()
    return p2.communicate()[0].decode("utf-8").strip()
Example output for both: 01:37:11.83
Have you tried adding the encoding? That error is typical of that, as Chamath said.
Add the utf-8 encoding to your script header:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
I like to build a shared library with FFmpeg and load it in Python.
C++ code:
#ifdef __WIN32__
#define LIB_CLASS __declspec(dllexport)
#else
#define LIB_CLASS
#endif

extern "C" {
#define __STDC_CONSTANT_MACROS
#include "libavformat/avformat.h"
}

extern "C" LIB_CLASS int64_t getDur(const char* url) {
    AVFormatContext* pFormatContext = avformat_alloc_context();
    if (avformat_open_input(&pFormatContext, url, NULL, NULL)) {
        avformat_free_context(pFormatContext);
        return -1;
    }
    int64_t t = pFormatContext->duration;
    avformat_close_input(&pFormatContext);
    avformat_free_context(pFormatContext);
    return t;
}
Then use gcc to compile it and get a shared library.
Python code:
from ctypes import *

lib = CDLL('/the/path/to/your/library')
getDur = lib.getDur
getDur.restype = c_longlong
getDur.argtypes = [c_char_p]
duration = getDur(b'the path/URL to your file')
It works well in my python program.
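One usage note: libavformat reports AVFormatContext->duration in AV_TIME_BASE units (microseconds), so a small sketch (the path is a placeholder) to convert the returned value to seconds:
AV_TIME_BASE = 1000000  # libavformat duration unit: microseconds
duration_us = getDur(b'/path/to/your/file')
print(duration_us / AV_TIME_BASE, 'seconds')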
Python Code
import re
import subprocess

cmnd = ['/root/bin/ffmpeg', '-i', videopath]
process = subprocess.Popen(cmnd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, stderr = process.communicate()
stdout = stdout.decode("utf-8")

# This regex matches the duration in H:M:S format
matches = re.search(r"Duration:\s{1}(?P<hours>\d+?):(?P<minutes>\d+?):(?P<seconds>\d+\.\d+?),", stdout, re.DOTALL).groupdict()
t_hour = matches['hours']
t_min = matches['minutes']
t_sec = matches['seconds']
t_hour_sec = int(t_hour) * 3600
t_min_sec = int(t_min) * 60
t_s_sec = int(round(float(t_sec)))
total_sec = t_hour_sec + t_min_sec + t_s_sec

# matches1 gets the frame rate of the video
matches1 = re.search(r'(\d+) fps', stdout)
frame_rate = matches1.group(0)  # This will give "20 fps"
frame_rate = matches1.group(1)  # This will give "20"
We can also use ffmpeg to get the duration of any video or audio file.
To install ffmpeg, follow this link.
import subprocess
import re

process = subprocess.Popen(['ffmpeg', '-i', path_of_video_file], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, stderr = process.communicate()
matches = re.search(r"Duration:\s{1}(?P<hours>\d+?):(?P<minutes>\d+?):(?P<seconds>\d+\.\d+?),", stdout.decode(), re.DOTALL).groupdict()

print(matches['hours'])
print(matches['minutes'])
print(matches['seconds'])

Scanning QR Code via zbar and Raspicam module

I want to use my Raspberry Pi camera module to scan QR codes.
For detecting and decoding QR codes I want to use zbar.
My current code:
import io
import time
import picamera
import zbar
import Image

if len(argv) < 2: exit(1)

# Create an in-memory stream
my_stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    # Camera warm-up time
    time.sleep(2)
    camera.capture(my_stream, 'jpeg')

scanner = zbar.ImageScanner()
scanner.parse_config('enable')

pil = Image.open(argv[1]).convert('L')
width, height = pil.size
raw = pil.tostring()

my_stream = zbar.Image(width, height, 'Y800', raw)

scanner.scan(image)

for symbol in image:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
As you may see, I want to create a picture stream and send this stream to zbar to check whether a QR code is contained in the picture.
I am not able to run this code; this error is the result:
Segmentation fault
------------------ (program exited with code: 139) Press return to continue
I can't find any solution for how to fix this bug, any idea?
Kind regards
The shortcoming of all the other answers is that they have a large amount of DELAY - for example, what they scan and display on the screen is actually a frame taken several seconds ago.
This is due to the slow CPU of the Raspberry Pi, so the camera's frame rate is much higher than the rate our software can read and scan.
With a lot of effort, I finally made this code, which has LITTLE DELAY: when you show it a QR code or barcode, it gives you a result in less than a second.
The trick I use is explained in the code.
import cv2
import cv2.cv as cv
import numpy
import zbar
import time
import threading

'''
LITTLE-DELAY BarCodeScanner
Author: Chen Jingyi (From FZYZ Junior High School, China)
PS. If your pi's V4L is not available, the cv-Window may have some error sometimes, but other parts of this code work fine.
'''


class BarCodeScanner(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.WINDOW_NAME = 'Camera'
        self.CV_SYSTEM_CACHE_CNT = 5  # Cv has 5-frame cache
        self.LOOP_INTERVAL_TIME = 0.2

        cv.NamedWindow(self.WINDOW_NAME, cv.CV_WINDOW_NORMAL)
        self.cam = cv2.VideoCapture(-1)

    def scan(self, aframe):
        imgray = cv2.cvtColor(aframe, cv2.COLOR_BGR2GRAY)
        raw = str(imgray.data)

        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')

        #print 'ScanZbar', time.time()
        width = int(self.cam.get(cv.CV_CAP_PROP_FRAME_WIDTH))
        height = int(self.cam.get(cv.CV_CAP_PROP_FRAME_HEIGHT))
        imageZbar = zbar.Image(width, height, 'Y800', raw)
        scanner.scan(imageZbar)
        #print 'ScanEnd', time.time()

        for symbol in imageZbar:
            print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

    def run(self):
        #print 'BarCodeScanner run', time.time()
        while True:
            #print time.time()
            ''' Why we read several times and throw the data away: I guess OpenCV has a `cache-queue` whose length is 5.
            `read()` will *dequeue* a frame from it if it is not empty, otherwise wait until it has one.
            When the camera has a new frame, if the queue is not full, the frame will be *enqueued*, otherwise thrown away.
            So in this case, the frame rate is far bigger than the number of times the while loop executes; by the time the code gets here, the queue is full.
            Therefore, if we want the newest frame, we need to dequeue the 5 frames already in the queue, which are useless because they are old. That's why.
            '''
            for i in range(0, self.CV_SYSTEM_CACHE_CNT):
                #print 'Read2Throw', time.time()
                self.cam.read()
            #print 'Read2Use', time.time()
            img = self.cam.read()
            self.scan(img[1])

            cv2.imshow(self.WINDOW_NAME, img[1])
            cv.WaitKey(1)
            #print 'Sleep', time.time()
            time.sleep(self.LOOP_INTERVAL_TIME)

        cam.release()


scanner = BarCodeScanner()
scanner.start()
In the line
scanner.scan(image)
you're using a variable that hasn't appeared in the code before. Because zbar is written in C, it doesn't catch that the variable is undefined, and the library tries to read garbage data as if it were an image. Hence, the segfault. I'm guessing you meant my_stream instead of image.
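A guess at the intended last lines of the script, using the zbar.Image that was actually created (a sketch, untested):
scanner.scan(my_stream)
for symbol in my_stream:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data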
I'm using QR decoding on a Raspberry Pi for my project. I solved it by using the
subprocess module.
Here is my function for QR decoding:
import subprocess

def detect():
    """Detects qr code from camera and returns string that represents that code.

    return -- qr code from image as string
    """
    subprocess.call(["raspistill -n -t 1 -w 120 -h 120 -o cam.png"], shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()

    qr_code = None
    # out looks like "QR-code: Xuz213asdY" so you need
    # to remove first 8 characters plus whitespaces
    if len(out) > 8:
        qr_code = out[8:].strip()

    return qr_code
You can easily add parameters to the function, such as img_width and img_height,
and change this part of the code
"raspistill -n -t 1 -w 120 -h 120 -o cam.png"
to
"raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height)
if you want a different image size for decoding; see the sketch below.
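Putting that together, a sketch of the parameterized function (the default size is just an example, not from the original answer):
import subprocess

def detect(img_width=120, img_height=120):
    # Capture one frame with raspistill and decode it with zbarimg.
    subprocess.call("raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height), shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    out, err = process.communicate()
    # Output looks like "QR-Code: data", so strip the 8-character prefix.
    return out[8:].strip() if len(out) > 8 else None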
After reading this, I was able to come up with a pythonic solution involving OpenCV.
First, you build OpenCV on the Pi by following these instructions. That will probably take several hours to complete.
Now reboot the Pi and use the following script (assuming you have python-zbar installed) to get the QR/barcode data:
import cv2
import cv2.cv as cv
import numpy
import zbar


class test():
    def __init__(self):
        cv.NamedWindow("w1", cv.CV_WINDOW_NORMAL)
        # self.capture = cv.CaptureFromCAM(camera_index)  # for some reason, this doesn't work
        self.capture = cv.CreateCameraCapture(-1)
        self.vid_contour_selection()

    def vid_contour_selection(self):
        while True:
            self.frame = cv.QueryFrame(self.capture)
            aframe = numpy.asarray(self.frame[:,:])
            g = cv.fromarray(aframe)
            g = numpy.asarray(g)

            imgray = cv2.cvtColor(g, cv2.COLOR_BGR2GRAY)
            raw = str(imgray.data)

            scanner = zbar.ImageScanner()
            scanner.parse_config('enable')

            imageZbar = zbar.Image(self.frame.width, self.frame.height, 'Y800', raw)
            scanner.scan(imageZbar)

            for symbol in imageZbar:
                print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

            cv2.imshow("w1", aframe)

            c = cv.WaitKey(5)
            if c == 110:  # pressing the 'n' key will cause the program to exit
                exit()

#
p = test()
Note: I had to turn the Raspi Camera's lens counterclockwise about 1/4 - 1/3 of a turn before zbar was able to detect the QR/barcodes.
With the above code, whenever zbar detects a QR code or barcode, the decoded data is printed to the console. It runs continuously, stopping only when the n key is pressed.
For anyone who is still looking for a solution to this...
This code is ugly but it works pretty well from a regular webcam; I haven't tried the Pi camera yet. I'm new to Python, so this is the best I could come up with that works in both Python 2 and 3.
Make a bash script called kill.sh and make it executable (chmod +x):
#kill all running zbar tasks ... call from python
ps -face | grep zbar | awk '{print $2}' | xargs kill -s KILL
Then do a system call from python like so...
import sys
import os

def start_cam():
    while True:
        # Initializes an instance of zbarcam on the command line to detect barcode data-strings.
        p = os.popen('/usr/bin/zbarcam --prescale=300x200', 'r')
        # Barcode variable read by Python from the command line.
        print("Please Scan a QRcode to begin...")
        barcode = p.readline()
        barcodedata = str(barcode)[8:]
        if barcodedata:
            print("{0}".format(barcodedata))
            # Kills the webcam window by executing the bash file
            os.system("/home/pi/Desktop/kill.sh")

start_cam()
Hopefully this helps people with the same questions in the future!
Quite a late response, but I ran into a number of issues while trying to get zbar working. Though I was using a USB webcam, I had to install multiple libraries before I could get zbar installed: fswebcam, python-zbar, libzbar-dev, and finally I ran setup.py.
More importantly, the zbar from SourceForge did not work for me, but the one from GitHub, which has a Python wrapper, did.
I documented my steps at http://techblog.saurabhkumar.com/2015/09/scanning-barcodes-using-raspberry-pi.html in case it might help.
Just a small modification of Dan2theR's solution, because I don't want to create another shell file.
import sys
import os

p = os.popen('/usr/bin/zbarcam --prescale=300x300 --Sdisable -Sqrcode.enable', 'r')

def start_scan():
    global p
    while True:
        print('Scanning')
        data = p.readline()
        qrcode = str(data)[8:]
        if qrcode:
            print(qrcode)

try:
    start_scan()
except KeyboardInterrupt:
    print('Stop scanning')
finally:
    p.close()
