I want to use my Raspberry Pi camera module to scan QR codes.
For detecting and decoding the QR codes I want to use zbar.
My current code:
import io
import time
import picamera
import zbar
import Image

if len(argv) < 2: exit(1)

# Create an in-memory stream
my_stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    # Camera warm-up time
    time.sleep(2)
    camera.capture(my_stream, 'jpeg')

scanner = zbar.ImageScanner()
scanner.parse_config('enable')

pil = Image.open(argv[1]).convert('L')
width, height = pil.size
raw = pil.tostring()
my_stream = zbar.Image(width, height, 'Y800', raw)

scanner.scan(image)

for symbol in image:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
As you can see, I want to capture a picture into an in-memory stream and then hand that stream to zbar to check whether the picture contains a QR code. I am not able to run this code; it fails with this error:
Segmentation fault
------------------ (program exited with code: 139) Press return to continue
I can't find any solution for how to fix this bug. Any ideas?
Kind regards
The shortcoming of all the other answers is that they have a large amount of DELAY: what they are scanning and displaying on the screen is actually a frame taken several seconds earlier.
This is due to the slow CPU of the Raspberry Pi: the camera's frame rate is much higher than the rate at which our software can read and scan frames.
With a lot of effort, I finally made this code, which has LITTLE DELAY. When you show it a QR code/barcode, it will give you a result in less than a second.
The trick I use is explained in the code.
import cv2
import cv2.cv as cv
import numpy
import zbar
import time
import threading

'''
LITTLE-DELAY BarCodeScanner
Author: Chen Jingyi (From FZYZ Junior High School, China)
PS. If your pi's V4L is not available, the cv-Window may have some error sometimes, but the other parts of this code work fine.
'''

class BarCodeScanner(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.WINDOW_NAME = 'Camera'
        self.CV_SYSTEM_CACHE_CNT = 5  # Cv has a 5-frame cache
        self.LOOP_INTERVAL_TIME = 0.2
        cv.NamedWindow(self.WINDOW_NAME, cv.CV_WINDOW_NORMAL)
        self.cam = cv2.VideoCapture(-1)

    def scan(self, aframe):
        imgray = cv2.cvtColor(aframe, cv2.COLOR_BGR2GRAY)
        raw = str(imgray.data)

        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')

        #print 'ScanZbar', time.time()
        width = int(self.cam.get(cv.CV_CAP_PROP_FRAME_WIDTH))
        height = int(self.cam.get(cv.CV_CAP_PROP_FRAME_HEIGHT))
        imageZbar = zbar.Image(width, height, 'Y800', raw)
        scanner.scan(imageZbar)
        #print 'ScanEnd', time.time()

        for symbol in imageZbar:
            print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

    def run(self):
        #print 'BarCodeScanner run', time.time()
        while True:
            #print time.time()
            ''' Why read several frames and throw the data away: I guess OpenCV
            has a `cache-queue` whose length is 5. `read()` will *dequeue* a
            frame from it if it is not empty, otherwise it waits until there is
            one. When the camera has a new frame, it is *enqueued* if the queue
            is not full, otherwise it is thrown away. In this case, the frame
            rate is far higher than the rate at which the while loop executes,
            so by the time the code gets here, the queue is full. Therefore, if
            we want the newest frame, we have to dequeue the 5 stale frames in
            the queue first, which are useless because they are old. That's why.
            '''
            for i in range(0, self.CV_SYSTEM_CACHE_CNT):
                #print 'Read2Throw', time.time()
                self.cam.read()
            #print 'Read2Use', time.time()
            img = self.cam.read()
            self.scan(img[1])

            cv2.imshow(self.WINDOW_NAME, img[1])
            cv.WaitKey(1)
            #print 'Sleep', time.time()
            time.sleep(self.LOOP_INTERVAL_TIME)
        self.cam.release()

scanner = BarCodeScanner()
scanner.start()
In the line
scanner.scan(image)
you're using a variable that hasn't appeared in the code before. Because zbar is written in C, it doesn't catch that the variable is undefined, and the library tries to read garbage data as if it were an image. Hence, the segfault. I'm guessing you meant my_stream instead of image.
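For reference, a minimal corrected sketch of the intended flow, feeding the PiCamera capture to zbar instead of a file from argv (same zbar calls as in the question; untested on actual hardware):
import io
import time
import picamera
import zbar
import Image

# Capture a JPEG into an in-memory stream
my_stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)  # camera warm-up time
    camera.capture(my_stream, 'jpeg')

# Decode the capture as 8-bit grayscale, which matches zbar's Y800 format
my_stream.seek(0)
pil = Image.open(my_stream).convert('L')
width, height = pil.size
raw = pil.tostring()  # tobytes() on newer Pillow

scanner = zbar.ImageScanner()
scanner.parse_config('enable')
image = zbar.Image(width, height, 'Y800', raw)
scanner.scan(image)
for symbol in image:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data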
I'm using QR decoding on a Raspberry Pi for my project. I solved it by using the subprocess module.
Here is my function for QR decoding:
import subprocess

def detect():
    """Detects a QR code from the camera and returns the string that code represents.

    return -- QR code from image as a string
    """
    subprocess.call(["raspistill -n -t 1 -w 120 -h 120 -o cam.png"], shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()

    qr_code = None
    # out looks like "QR-code: Xuz213asdY" so you need
    # to remove the first 8 characters plus whitespace
    if len(out) > 8:
        qr_code = out[8:].strip()
    return qr_code
You can easily add parameters to the function, such as img_width and img_height,
and change this part of the code
"raspistill -n -t 1 -w 120 -h 120 -o cam.png"
to
"raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height)
if you want a different size of image for decoding. A parameterised sketch follows.
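For instance, a parameterised version might look like this (a sketch, untested; same raspistill and zbarimg calls as above):
import subprocess

def detect(img_width=120, img_height=120):
    # same as detect() above, but with the capture size as parameters
    subprocess.call(["raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height)],
                    shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()
    if len(out) > 8:
        return out[8:].strip()
    return None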
After reading this, I was able to come up with a pythonic solution involving OpenCV.
First, you build OpenCV on the Pi by following these instructions. That will probably take several hours to complete.
Now reboot the Pi and use the following script (assuming you have python-zbar installed) to get the QR/barcode data:
import cv2
import cv2.cv as cv
import numpy
import zbar

class test():
    def __init__(self):
        cv.NamedWindow("w1", cv.CV_WINDOW_NORMAL)
        # self.capture = cv.CaptureFromCAM(camera_index)  # for some reason, this doesn't work
        self.capture = cv.CreateCameraCapture(-1)
        self.vid_contour_selection()

    def vid_contour_selection(self):
        while True:
            self.frame = cv.QueryFrame(self.capture)
            aframe = numpy.asarray(self.frame[:,:])
            g = cv.fromarray(aframe)
            g = numpy.asarray(g)

            imgray = cv2.cvtColor(g, cv2.COLOR_BGR2GRAY)
            raw = str(imgray.data)

            scanner = zbar.ImageScanner()
            scanner.parse_config('enable')

            imageZbar = zbar.Image(self.frame.width, self.frame.height, 'Y800', raw)
            scanner.scan(imageZbar)

            for symbol in imageZbar:
                print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

            cv2.imshow("w1", aframe)
            c = cv.WaitKey(5)
            if c == 110:  # pressing the 'n' key will cause the program to exit
                exit()

p = test()
Note: I had to turn the Raspi camera's lens counterclockwise about 1/4 to 1/3 of a turn before zbar was able to detect the QR/barcodes.
With the above code, whenever zbar detects a QR/barcode, the decoded data is printed to the console. It runs continuously, stopping only when the n key is pressed.
For anyone that is still looking for a solution to this...
This code is ugly, but it works pretty well from a regular webcam; I haven't tried the Pi camera yet. I'm new to Python, so this is the best I could come up with that works in both Python 2 and 3.
Make a bash script called kill.sh and make it executable (chmod +x)...
#kill all running zbar tasks ... call from python
ps -face | grep zbar | awk '{print $2}' | xargs kill -s KILL
Then do a system call from python like so...
import sys
import os

def start_cam():
    while True:
        # Initializes an instance of zbar on the command line to detect barcode data strings.
        p = os.popen('/usr/bin/zbarcam --prescale=300x200', 'r')
        # Barcode variable read by Python from the command line.
        print("Please scan a QR code to begin...")
        barcode = p.readline()
        barcodedata = str(barcode)[8:]
        if barcodedata:
            print("{0}".format(barcodedata))
            # Kills the webcam window by executing the bash file
            os.system("/home/pi/Desktop/kill.sh")

start_cam()
Hopefully this helps people with the same questions in the future!
Quite a late response, but I ran into a number of issues while trying to get zbar working. Though I was using a USB webcam, I had to install multiple libraries before I could get zbar installed: fswebcam, python-zbar, libzbar-dev, and finally I ran setup.py.
More importantly, the zbar from SourceForge did not work for me, but the one from GitHub, which has a Python wrapper, did.
I documented my steps at http://techblog.saurabhkumar.com/2015/09/scanning-barcodes-using-raspberry-pi.html in case it helps.
Just a small modification of Dan2theR's answer, because I don't want to create another shell file.
import sys
import os

p = os.popen('/usr/bin/zbarcam --prescale=300x300 -Sdisable -Sqrcode.enable', 'r')

def start_scan():
    global p
    while True:
        print('Scanning')
        data = p.readline()
        qrcode = str(data)[8:]
        if qrcode:
            print(qrcode)

try:
    start_scan()
except KeyboardInterrupt:
    print('Stop scanning')
finally:
    p.close()
I'm trying to read data from the memory of a process by inputting the process name and then finding the PID using psutil. So far I have this:
import ctypes
from ctypes import *
from ctypes.wintypes import *
import win32ui
import psutil  # install, not a default module
import sys

# input process name
nameprocess = "notepad.exe"

# find pid
def getpid():
    for proc in psutil.process_iter():
        if proc.name() == nameprocess:
            return proc.pid

PROCESS_ID = getpid()
if PROCESS_ID == None:
    print "Process was not found"
    sys.exit(1)

# read from addresses
STRLEN = 255
PROCESS_VM_READ = 0x0010

process = windll.kernel32.OpenProcess(PROCESS_VM_READ, 0, PROCESS_ID)
readProcMem = windll.kernel32.ReadProcessMemory
buf = ctypes.create_string_buffer(STRLEN)

for i in range(1, 100):
    if readProcMem(process, hex(i), buf, STRLEN, 0):
        print buf.raw
The last for loop should read and print the contents of the first 100 addresses in the process, if I'm getting this right. The only thing is, the output looks like complete gibberish.
There are two problems for me here: first, am I really reading the addresses of the selected process this way? And second, how can I figure out how far the loop should go; is there maybe some kind of end address?
I didn't install psutil, but just pulled a process ID and valid virtual address using Task Manager and SysInternals VMMap. The numbers will vary of course.
Good practice with ctypes is to define the argument types and return value via .argtypes and .restype. Get your own instance of the kernel32 library because changing the attributes of the cached windll.kernel32 instance could cause issues with other modules using ctypes and kernel32.
You need a valid virtual address. In answer to your 2nd problem, I think VMMap proves there is a way to do it. Pick up a copy of Windows Internals to learn the techniques.
from ctypes import *
from ctypes.wintypes import *

PROCESS_ID = 9476                      # From Task Manager for Notepad.exe
PROCESS_HEADER_ADDR = 0x7ff7b81e0000   # From SysInternals VMMap utility

# read from addresses
STRLEN = 255
PROCESS_VM_READ = 0x0010

k32 = WinDLL('kernel32')
k32.OpenProcess.argtypes = DWORD, BOOL, DWORD
k32.OpenProcess.restype = HANDLE
k32.ReadProcessMemory.argtypes = HANDLE, LPVOID, LPVOID, c_size_t, POINTER(c_size_t)
k32.ReadProcessMemory.restype = BOOL

process = k32.OpenProcess(PROCESS_VM_READ, 0, PROCESS_ID)
buf = create_string_buffer(STRLEN)
s = c_size_t()
if k32.ReadProcessMemory(process, PROCESS_HEADER_ADDR, buf, STRLEN, byref(s)):
    print(s.value, buf.raw)
Output (Note 'MZ' is the start of a program header):
255 b'MZ\x90\x00\x03\x00\x00\x00\x04\x00\x00\x00\xff\xff\x00\x00\xb8\x00\x00\x00\x00\x00\x00\x00#\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xe8\x00\x00\x00\x0e\x1f\xba\x0e\x00\xb4\t\xcd!\xb8\x01L\xcd!This program cannot be run in DOS mode.\r\r\n$\x00\x00\x00\x00\x00\x00\x00\xd0\x92\xa7\xd1\x94\xf3\xc9\x82\x94\xf3\xc9\x82\x94\xf3\xc9\x82\x9d\x8bZ\x82\x8a\xf3\xc9\x82\xfb\x97\xca\x83\x97\xf3\xc9\x82\xfb\x97\xcd\x83\x83\xf3\xc9\x82\xfb\x97\xcc\x83\x91\xf3\xc9\x82\xfb\x97\xc8\x83\x8f\xf3\xc9\x82\x94\xf3\xc8\x82\x82\xf2\xc9\x82\xfb\x97\xc1\x83\x8d\xf3\xc9\x82\xfb\x976\x82\x95\xf3\xc9\x82\xfb\x97\xcb\x83\x95\xf3\xc9\x82Rich\x94\xf3\xc9\x82\x00\x00\x00\x00\x00\x00\x00\x00PE\x00\x00d\x86\x06\x00^\'\x0f\x84\x00\x00\x00\x00\x00\x00\x00\x00\xf0\x00"'
Here's a screenshot of VMMap indicating the header address of notepad.exe:
Here's a screenshot of a hexdump of the content of notepad.exe that matches the output of the program:
On Windows, the PyMem library can help you with that: https://pymem.readthedocs.io/
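For example, a minimal sketch with Pymem (hedged: the attach-by-name and read_bytes calls are taken from Pymem's docs, not tested here):
from pymem import Pymem

pm = Pymem('notepad.exe')                    # attach by process name
data = pm.read_bytes(pm.base_address, 255)   # read from the module base address
print(data[:2])                              # expect b'MZ', the DOS header magic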
I am trying to write a small python program that uses curses and a SWIGed C++ library. That library logs a lot of information to STDOUT, which interferes with the output from curses. I would like to somehow intercept that content and then display it nicely through ncurses. Is there some way to do this?
A minimal demonstrating example will hopefully show how this all works. I am not going to set up SWIG just for this, and opt for a quick and dirty demonstration of calling a .so file through ctypes to emulate that external C library usage. Just put the following in the working directory.
testlib.c
#include <stdio.h>

int vomit(void);

int vomit()
{
    printf("vomiting output onto stdout\n");
    fflush(stdout);
    return 1;
}
Build with gcc -shared -Wl,-soname,testlib -o _testlib.so -fPIC testlib.c
testlib.py
import ctypes
from os.path import dirname
from os.path import join
testlib = ctypes.CDLL(join(dirname(__file__), '_testlib.so'))
demo.py (for minimum demonstration)
import os
import sys
import testlib
from tempfile import mktemp

pipename = mktemp()
os.mkfifo(pipename)
pipe_fno = os.open(pipename, os.O_RDWR | os.O_NONBLOCK)
stdout_fno = os.dup(sys.stdout.fileno())

os.dup2(pipe_fno, 1)
result = testlib.testlib.vomit()
os.dup2(stdout_fno, 1)

buf = bytearray()
while True:
    try:
        buf += os.read(pipe_fno, 1)
    except Exception:
        break

print("the captured output is: %s" % buf.decode('utf8'))
print('the result of the program is: %d' % result)
os.unlink(pipename)
The caveat is that the output generated by the .so might be buffered somewhere inside the C runtime (I have no idea how that part all works), and I cannot find a way to flush the output from the Python side unless the fflush call is inside the .so; so there can be complications with how this ultimately behaves. One possible workaround is sketched below.
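A sketch of that workaround, assuming the library buffers through stdio: glibc's fflush(NULL) flushes every open output stream, and it can be reached through ctypes before draining the pipe:
import ctypes

# load the C runtime already linked into this process; passing None
# (i.e. NULL) to fflush flushes all open stdio output streams
libc = ctypes.CDLL(None)
libc.fflush(None)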
With threading, this can also be done (the code is becoming quite atrocious, but it shows the idea):
import os
import sys
import testlib

from threading import Thread
from time import sleep
from tempfile import mktemp

def external():
    # the thread that will call the .so that produces output
    for i in range(7):
        testlib.testlib.vomit()
        sleep(1)

# setup
stdout_fno = os.dup(sys.stdout.fileno())
pipename = mktemp()
os.mkfifo(pipename)
pipe_fno = os.open(pipename, os.O_RDWR | os.O_NONBLOCK)
os.dup2(pipe_fno, 1)

def main():
    thread = Thread(target=external)
    thread.start()

    buf = bytearray()
    counter = 0
    while thread.is_alive():
        sleep(0.2)
        try:
            while True:
                buf += os.read(pipe_fno, 1)
        except BlockingIOError:
            if buf:
                # do some processing to show that the string is fully captured
                output = 'external lib: [%s]\n' % buf.strip().decode('utf8')
                # low level write to the original stdout
                os.write(stdout_fno, output.encode('utf8'))
                buf.clear()
            os.write(stdout_fno, b'tick: %d\n' % counter)
            counter += 1

main()

# cleanup
os.dup2(stdout_fno, 1)
os.close(pipe_fno)
os.unlink(pipename)
Example execution:
$ python demo2.py
external lib: [vomiting output onto stdout]
tick: 0
tick: 1
tick: 2
tick: 3
external lib: [vomiting output onto stdout]
tick: 4
Note that everything is captured.
Now, since you do have to make use of ncurses and also run that function in a thread, this is a bit tricky. Here be dragons.
We will need the part of the ncurses API that actually lets us create a new screen to redirect the output, and again ctypes can be handy for this. Unfortunately, I am using absolute paths for the DLLs on my system; adjust as required.
lib.py
import ctypes

libc = ctypes.CDLL('/lib64/libc.so.6')
ncurses = ctypes.CDLL('/lib64/libncursesw.so.6')

class FILE(ctypes.Structure):
    pass

class SCREEN(ctypes.Structure):
    pass

FILE_p = ctypes.POINTER(FILE)
libc.fdopen.restype = FILE_p

SCREEN_p = ctypes.POINTER(SCREEN)
ncurses.newterm.restype = SCREEN_p
ncurses.set_term.restype = SCREEN_p

fdopen = libc.fdopen
newterm = ncurses.newterm
set_term = ncurses.set_term
delscreen = ncurses.delscreen
endwin = ncurses.endwin
Now that we have newterm and set_term, we can finally complete the script. Remove everything from the main function, and add the following:
# set up the curses window
import curses
from lib import newterm, fdopen, set_term, endwin, delscreen

stdin_fno = sys.stdin.fileno()
stdscr = curses.initscr()
# use the ctypes library to create a new screen and redirect output
# back to the original stdout
screen = newterm(None, fdopen(stdout_fno, 'w'), fdopen(stdin_fno, 'r'))
old_screen = set_term(screen)
stdscr.clear()
curses.noecho()

border = curses.newwin(8, 68, 4, 4)
border.border()
window = curses.newwin(6, 66, 5, 5)
window.scrollok(True)
window.clear()
border.refresh()
window.refresh()

def main():
    thread = Thread(target=external)
    thread.start()

    buf = bytearray()
    counter = 0
    while thread.is_alive():
        sleep(0.2)
        try:
            while True:
                buf += os.read(pipe_fno, 1)
        except BlockingIOError:
            if buf:
                output = 'external lib: [%s]\n' % buf.strip().decode('utf8')
                buf.clear()
                window.addstr(output)
                window.refresh()
            window.addstr('tick: %d\n' % counter)
            counter += 1
            window.refresh()
main()
# cleanup
os.dup2(stdout_fno, 1)
endwin()
delscreen(screen)
os.close(pipe_fno)
os.unlink(pipename)
This should show, more or less, that the intended result with ncurses can be achieved. However, in my case it hung at the end, and I am not sure what else might be going on. I thought this could be caused by accidentally using 32-bit Python with that 64-bit shared object, but either way things somehow don't play nicely on exit (I knew ctypes was easy to misuse, and it turns out it really is!). At least this shows the output inside an ncurses window as you might expect.
@metatoaster pointed to a link which describes a way to temporarily redirect the standard output to /dev/null. That shows something about how to use dup2, but it is not quite an answer by itself.
Python's interface to curses uses only initscr, which means that the curses library writes its output to the standard output. The SWIG'd library also writes its output to the standard output, which would interfere with the curses output. You could solve the problem by
redirecting the curses output to /dev/tty, and
redirecting the SWIG'd output to a temporary file, and
reading the file, checking for updates to add to the screen.
Once initscr has been called, the curses library has its own copy of the output stream. If you can temporarily point the real standard output to a file first (before initializing curses), then open a new standard output on /dev/tty (for initscr), and then restore the (global!) output stream, that should work. A minimal sketch of the file-redirection half follows; the curses half is omitted, since the stdlib curses module does not expose newterm (the previous answer reached it through ctypes).
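This sketch reuses the testlib module from the earlier answer (an assumption; any chatty .so would do):
import os
import tempfile
import testlib

log_path = tempfile.mktemp()
log_fd = os.open(log_path, os.O_RDWR | os.O_CREAT)
saved_stdout = os.dup(1)

os.dup2(log_fd, 1)          # the SWIG'd output now lands in the temp file
testlib.testlib.vomit()
os.dup2(saved_stdout, 1)    # restore the real standard output

# a curses loop would poll the file for updates and addstr() them
with open(log_path) as f:
    print('captured: %r' % f.read())
os.unlink(log_path)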
I'm trying to read a Twitch stream via streamlink (https://streamlink.github.io/api_guide.html) into OpenCV for further processing in Python.
What works: reading the stream into a stream.ts file via Popen and then into OpenCV:
import subprocess
import os
import time
import cv2

def create_new_streaming_file(stream_filename="stream0", stream_link="https://www.twitch.tv/tsm_viss"):
    try:
        os.remove('./Engine/streaming_util/' + stream_filename + '.ts')
    except OSError:
        pass
    cmd = "streamlink --force --output ./Engine/streaming_util/" + stream_filename + ".ts " + stream_link + " best"
    subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

create_new_streaming_file()
video_capture = cv2.VideoCapture('./Engine/streaming_util/stream0.ts')
This is very slow, and the stream stops after about 30 seconds.
I would like to read the byte stream directly into OpenCV out of streamlink's Python API.
What works is printing the latest n bytes of the stream to the console:
import streamlink

streams = streamlink.streams("https://www.twitch.tv/grimmmz")
stream = streams["best"]
fd = stream.open()

while True:
    data = fd.read(1024)
    print(data)
I'm looking for something like this (does not work but you'll get the concept):
streams = streamlink.streams("https://www.twitch.tv/grimmmz")
stream = streams["best"]
fd = stream.open()

bytes = ''
while True:
    # to read mjpeg frame -
    bytes += fd.read(1024)
    a = bytes.find('\xff\xd8')
    b = bytes.find('\xff\xd9')
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        img = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.CV_LOAD_IMAGE_COLOR)
        cv2.imwrite('messigray.png', img)
        cv2.imshow('cam2', img)
    else:
        continue
Thanks a lot in advance!
It was quite tricky to accomplish with reasonable performance, though.
Check out the project: https://github.com/DanielTea/rage-analytics/blob/master/README.md
The main file is realtime_Videostreamer.py in the engine folder. If you initialize this object, it creates an ffmpeg subprocess and fills a queue with video frames in an extra thread. This architecture prevents the main thread from blocking, so depending on your network speed and CPU power, a couple of streams can be analyzed in parallel.
This solution works very well with Twitch streams; I haven't tried other streaming sites.
More info about this project. A rough sketch of the queue-plus-ffmpeg idea follows.
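A rough, untested sketch of that architecture (the URL is from the question; the 1280x720 resolution is an assumption, adjust to the actual stream):
import subprocess
import threading
import queue

import numpy as np
import streamlink

WIDTH, HEIGHT = 1280, 720  # assumed stream resolution

streams = streamlink.streams("https://www.twitch.tv/grimmmz")
stream_fd = streams["best"].open()

# decode the container to raw BGR frames on stdout
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-loglevel", "quiet", "-i", "pipe:0",
     "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

frames = queue.Queue(maxsize=16)

def feeder():
    # copy the streamlink bytes into ffmpeg's stdin
    while True:
        chunk = stream_fd.read(4096)
        if not chunk:
            break
        ffmpeg.stdin.write(chunk)

def reader():
    # slice ffmpeg's raw output into numpy frames for the main thread
    frame_size = WIDTH * HEIGHT * 3
    while True:
        raw = ffmpeg.stdout.read(frame_size)
        if len(raw) < frame_size:
            break
        frames.put(np.frombuffer(raw, np.uint8).reshape((HEIGHT, WIDTH, 3)))

threading.Thread(target=feeder, daemon=True).start()
threading.Thread(target=reader, daemon=True).start()
# the main thread consumes with frames.get() and can hand frames to cv2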
I have the following code:
#!/usr/bin/python

import StringIO
import subprocess
import os
import time
from datetime import datetime
from PIL import Image

# Original code written by brainflakes and modified to exit
# the image-scanning for loop as soon as the sensitivity value is exceeded.
# This can speed up taking the larger photo if motion is detected early in the scan.

# Motion detection settings:
# need future changes to read values dynamically via command line parameter or xml file
# --------------------------
# Threshold      - (how much a pixel has to change by to be marked as "changed")
# Sensitivity    - (how many changed pixels before capturing an image) needs to be higher if noisy view
# ForceCapture   - (whether to force an image to be captured every forceCaptureTime seconds)
# filepath       - location of folder to save photos
# filenamePrefix - string that prefixes the file name for easier identification of files.
threshold = 10
sensitivity = 180
forceCapture = True
forceCaptureTime = 60 * 60  # Once an hour
filepath = "/home/pi/camera"
filenamePrefix = "capture"

# File photo size settings
saveWidth = 1280
saveHeight = 960
diskSpaceToReserve = 40 * 1024 * 1024  # Keep 40 MB free on disk

# Capture a small test image (for motion detection)
def captureTestImage():
    command = "raspistill -w %s -h %s -t 500 -e bmp -o -" % (100, 75)
    imageData = StringIO.StringIO()
    imageData.write(subprocess.check_output(command, shell=True))
    imageData.seek(0)
    im = Image.open(imageData)
    buffer = im.load()
    imageData.close()
    return im, buffer

# Save a full size image to disk
def saveImage(width, height, diskSpaceToReserve):
    keepDiskSpaceFree(diskSpaceToReserve)
    time = datetime.now()
    filename = filepath + "/" + filenamePrefix + "-%04d%02d%02d-%02d%02d%02d.jpg" % (time.year, time.month, time.day, time.hour, time.minute, time.second)
    subprocess.call("raspistill -w 1296 -h 972 -t 1000 -e jpg -q 15 -o %s" % filename, shell=True)
    print "Captured %s" % filename

# Keep free space above given level
def keepDiskSpaceFree(bytesToReserve):
    if (getFreeSpace() < bytesToReserve):
        for filename in sorted(os.listdir(filepath + "/")):
            if filename.startswith(filenamePrefix) and filename.endswith(".jpg"):
                os.remove(filepath + "/" + filename)
                print "Deleted %s to avoid filling disk" % filename
                if (getFreeSpace() > bytesToReserve):
                    return

# Get available disk space
def getFreeSpace():
    st = os.statvfs(".")
    du = st.f_bavail * st.f_frsize
    return du

# Get first image
image1, buffer1 = captureTestImage()

# Reset last capture time
lastCapture = time.time()

# added this to give visual feedback of camera motion capture activity. Can be removed as required
os.system('clear')
print "     Motion Detection Started"
print "     ------------------------"
print "Pixel Threshold (How much)   = " + str(threshold)
print "Sensitivity (changed Pixels) = " + str(sensitivity)
print "File Path for Image Save     = " + filepath
print "---------- Motion Capture File Activity --------------"

while (True):
    # Get comparison image
    image2, buffer2 = captureTestImage()

    # Count changed pixels
    changedPixels = 0
    for x in xrange(0, 100):
        # Scan one line of image then check sensitivity for movement
        for y in xrange(0, 75):
            # Just check green channel as it's the highest quality channel
            pixdiff = abs(buffer1[x, y][1] - buffer2[x, y][1])
            if pixdiff > threshold:
                changedPixels += 1

        # Changed logic - If movement sensitivity exceeded then
        # save image and exit before the full image scan completes
        if changedPixels > sensitivity:
            lastCapture = time.time()
            saveImage(saveWidth, saveHeight, diskSpaceToReserve)
            break
        continue

    # Check force capture
    if forceCapture:
        if time.time() - lastCapture > forceCaptureTime:
            changedPixels = sensitivity + 1

    # Swap comparison buffers
    image1 = image2
    buffer1 = buffer2
This code takes a picture once movement is detected and keeps doing so until I manually stop it. (I should mention that the code is for use with the Raspberry Pi computer.)
I also have the following code, courtesy of Nathan Jhaveri here on Stack Overflow:
import SocketServer
from BaseHTTPServer import BaseHTTPRequestHandler

def some_function():
    print "some_function got called"

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/captureImage':
            saveImage(saveWidth, saveHeight, diskSpaceToReserve)
        self.send_response(200)

httpd = SocketServer.TCPServer(("", 8080), MyHandler)
httpd.serve_forever()
This code runs a simple server that executes the
saveImage(saveWidth, saveHeight, diskSpaceToReserve)
function when the URL /captureImage is visited on the server. I have run into a problem with this, though: since the two pieces of code are both infinite loops, they cannot run side by side. I assume I need to do some kind of multithreading, but that is something I have never experimented with in Python before. I would appreciate it if anyone could help me get back on track with this.
This isn't a small question. Your best bet is to work through some Python threading tutorials, such as this one: http://www.tutorialspoint.com/python/python_multithreading.htm (found via Google).
Try taking the webserver and running it on a background thread, so that calling serve_forever() does not block the main thread's while True: loop. Replace the existing call to httpd.serve_forever() with something like:
import threading

class WebThread(threading.Thread):
    def run(self):
        httpd = SocketServer.TCPServer(("", 8080), MyHandler)
        httpd.serve_forever()

WebThread().start()
Make sure that chunk of code runs before the while (True): loop, and you should have both the webserver loop and the main thread loop running.
Keep in mind that having multiple threads can get complicated: what happens when two threads access the same resource at the same time? As Velox mentioned in another answer, it is worthwhile to learn more about threading.
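For example, a minimal sketch guarding the shared camera code with a lock (safe_saveImage is a hypothetical wrapper over the saveImage above):
import threading

camera_lock = threading.Lock()

def safe_saveImage():
    # only one thread may drive the camera at a time; call this from
    # both the do_GET handler and the motion-detection loop
    with camera_lock:
        saveImage(saveWidth, saveHeight, diskSpaceToReserve)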
I can illustrate a simple example using multithreading.
from http.server import BaseHTTPRequestHandler, HTTPServer
import concurrent.futures
import logging
import time

hostName = "localhost"
serverPort = 5001

class MyServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(bytes("<html><head><title>python3 http server</title><body>", "utf-8"))

def serverThread():
    webServer = HTTPServer((hostName, serverPort), MyServer)
    logging.info("Server started http://%s:%s" % (hostName, serverPort))
    try:
        webServer.serve_forever()
    except:
        pass
    webServer.server_close()
    logging.info("Server stopped.")

def logThread():
    while True:
        time.sleep(2.0)
        logging.info('hi from log thread')

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        # Run server
        executor.submit(serverThread)
        # Run a parallel thread
        executor.submit(logThread)
Here we have two threads: a server, and another parallel thread that logs a line every 2 seconds.
You have to define the code corresponding to each thread in a separate function and submit it to the concurrent.futures thread pool.
By the way, I have not studied how efficient it is to run a server this way.
I need to get the duration of a video in Python. The video formats that I need to get are MP4, Flash video, AVI, and MOV... I have a shared hosting solution, so I have no FFmpeg support.
What would you suggest?
Thanks!
You can use the external command ffprobe for this. Specifically, this wraps the bash command from the FFmpeg Wiki:
import subprocess

def get_length(filename):
    result = subprocess.run(["ffprobe", "-v", "error", "-show_entries",
                             "format=duration", "-of",
                             "default=noprint_wrappers=1:nokey=1", filename],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    return float(result.stdout)
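For example, assuming ffprobe is on the PATH (the file name is just a placeholder):
print(get_length("my_video.mp4"))  # duration in seconds as a float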
(year 2020 answer)
Solutions:
opencv 0.0065 sec ✔
ffprobe 0.0998 sec
moviepy 2.8239 sec
✔ OpenCV method:
def with_opencv(filename):
    import cv2
    video = cv2.VideoCapture(filename)
    duration = video.get(cv2.CAP_PROP_POS_MSEC)
    frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
    return duration, frame_count
Usage: print(with_opencv('my_video.webm'))
Other:
ffprobe method:
def with_ffprobe(filename):
    import subprocess, json
    result = subprocess.check_output(
        f'ffprobe -v quiet -show_streams -select_streams v:0 -of json "{filename}"',
        shell=True).decode()
    fields = json.loads(result)['streams'][0]

    duration = fields['tags']['DURATION']
    fps = eval(fields['r_frame_rate'])
    return duration, fps
moviepy method:
def with_moviepy(filename):
from moviepy.editor import VideoFileClip
clip = VideoFileClip(filename)
duration = clip.duration
fps = clip.fps
width, height = clip.size
return duration, fps, (width, height)
As reported here https://www.reddit.com/r/moviepy/comments/2bsnrq/is_it_possible_to_get_the_length_of_a_video/
you could use the moviepy module
from moviepy.editor import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )
To make things a little bit easier, the following code puts the output into JSON.
You can use it by calling probe(filename), or get the duration by calling duration(filename):
json_info = probe(filename)
secondes_dot_ = duration(filename) # float number of seconds
It works on Ubuntu 14.04, where of course ffprobe is installed. The code is not optimized for speed or beauty, but it works on my machine; I hope it helps.
#
# Command line use of 'ffprobe':
#
#     ffprobe -loglevel quiet -print_format json \
#             -show_format    -show_streams \
#             video-file-name.mp4
#
# man ffprobe    # for more information about ffprobe
#

import subprocess32 as sp
import json

def probe(vid_file_path):
    ''' Give a json from ffprobe command line

    vid_file_path : The absolute (full) path of the video file, string.
    '''
    if type(vid_file_path) != str:
        raise Exception('Give ffprobe a full file path of the video')

    command = ["ffprobe",
               "-loglevel", "quiet",
               "-print_format", "json",
               "-show_format",
               "-show_streams",
               vid_file_path
               ]

    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.STDOUT)
    out, err = pipe.communicate()
    return json.loads(out)

def duration(vid_file_path):
    ''' Video's duration in seconds, return a float number
    '''
    _json = probe(vid_file_path)

    if 'format' in _json:
        if 'duration' in _json['format']:
            return float(_json['format']['duration'])

    if 'streams' in _json:
        # commonly stream 0 is the video
        for s in _json['streams']:
            if 'duration' in s:
                return float(s['duration'])

    # if we got here, none of the returns above was reached
    raise Exception('I found no duration')

if __name__ == "__main__":
    video_file_path = "/tmp/tt1.mp4"
    duration(video_file_path)  # 10.008
I found this new Python library: https://github.com/sbraz/pymediainfo
To get the duration:
from pymediainfo import MediaInfo
media_info = MediaInfo.parse('my_video_file.mov')
# duration in milliseconds
duration_in_ms = media_info.tracks[0].duration
The above code is tested against a valid MP4 file and works, but you should do more checks, because it relies heavily on the output of MediaInfo.
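For example, a slightly more defensive sketch (an assumption on my part, based on pymediainfo's documented track_type and duration attributes, where duration may be None):
from pymediainfo import MediaInfo

def duration_in_ms(filename):
    # the 'General' track carries the container-level duration in milliseconds
    for track in MediaInfo.parse(filename).tracks:
        if track.track_type == 'General' and track.duration is not None:
            return float(track.duration)
    return None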
Use a modern method with https://github.com/kkroening/ffmpeg-python (pip install ffmpeg-python --user). Don't forget to install ffmpeg too.
Get video info:
import ffmpeg
info=ffmpeg.probe(filename)
print(f"duration={info['format']['duration']}")
print(f"framerate={info['streams'][0]['avg_frame_rate']}")
Use ffmpeg-python package to also easily create, edit and apply filters to videos.
from subprocess import check_output
file_name = "movie.mp4"
#For Windows
a = str(check_output('ffprobe -i "'+file_name+'" 2>&1 |findstr "Duration"',shell=True))
#For Linux
#a = str(check_output('ffprobe -i "'+file_name+'" 2>&1 |grep "Duration"',shell=True))
a = a.split(",")[0].split("Duration:")[1].strip()
h, m, s = a.split(':')
duration = int(h) * 3600 + int(m) * 60 + float(s)
print(duration)
A function I came up with. It basically uses only ffprobe arguments:
from subprocess import check_output, CalledProcessError, STDOUT

def getDuration(filename):
    command = [
        'ffprobe',
        '-v',
        'error',
        '-show_entries',
        'format=duration',
        '-of',
        'default=noprint_wrappers=1:nokey=1',
        filename
    ]
    try:
        output = check_output(command, stderr=STDOUT).decode()
    except CalledProcessError as e:
        output = e.output.decode()
    return output

fn = '/app/648c89e8-d31f-4164-a1af-034g0191348b.mp4'
print(getDuration(fn))
Outputs duration like this:
7.338000
As reported here
https://www.reddit.com/r/moviepy/comments/2bsnrq/is_it_possible_to_get_the_length_of_a_video/
you could use the moviepy module
from moviepy.editor import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )
If you're trying to get the duration of many videos in a folder, it'll crash, giving the error:
AttributeError: 'AudioFileClip' object has no attribute 'reader'
So, in order to avoid that, you'll need to add
clip.close()
Based on this:
https://zulko.github.io/moviepy/_modules/moviepy/video/io/VideoFileClip.html
So the code would look like this:
from moviepy.editor import VideoFileClip
clip = VideoFileClip("my_video.mp4")
print( clip.duration )
clip.close()
Cheers! :)
The pymediainfo answer above really helped me. Thank you.
As a beginner, it did take a while to find out what was missing (sudo apt install mediainfo) and how to also address attributes in other ways (see below).
Hence this additional example:
# sudo apt install mediainfo
# pip3 install pymediainfo

from pymediainfo import MediaInfo

media_info = MediaInfo.parse('/home/pi/Desktop/a.mp4')
for track in media_info.tracks:
    #for k in track.to_data().keys():
    #    print("{}.{}={}".format(track.track_type, k, track.to_data()[k]))
    if track.track_type == 'Video':
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
        print("{} width {}".format(track.track_type, track.to_data()["width"]))
        print("{} height {}".format(track.track_type, track.to_data()["height"]))
        print("{} duration {}s".format(track.track_type, track.to_data()["duration"]/1000.0))
        print("{} duration {}".format(track.track_type, track.to_data()["other_duration"][3][0:8]))
        print("{} other_format {}".format(track.track_type, track.to_data()["other_format"][0]))
        print("{} codec_id {}".format(track.track_type, track.to_data()["codec_id"]))
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
    elif track.track_type == 'Audio':
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
        print("{} format {}".format(track.track_type, track.to_data()["format"]))
        print("{} codec_id {}".format(track.track_type, track.to_data()["codec_id"]))
        print("{} channel_s {}".format(track.track_type, track.to_data()["channel_s"]))
        print("{} other_channel_s {}".format(track.track_type, track.to_data()["other_channel_s"][0]))
        print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print("********************************************************************")
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Video width 1920
Video height 1080
Video duration 383.84s
Video duration 00:06:23
Video other_format AVC
Video codec_id avc1
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Audio format AAC
Audio codec_id mp4a-40-2
Audio channel_s 2
Audio other_channel_s 2 channels
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Open a cmd terminal and install the Python package mutagen using this command:
python -m pip install mutagen
Then use this code to get the video duration and its size:
import os
from mutagen.mp4 import MP4
audio = MP4("filePath")
print(audio.info.length)
print(os.path.getsize("filePath"))
Here is what I use in prod today; the cv2 way works well for MP4, WMV and FLV, which is what I needed:
try:
    import cv2  # opencv-python - optional if using ffprobe
except ImportError:
    cv2 = None
import json
import subprocess

def get_playback_duration(video_filepath, method='cv2'):  # pragma: no cover
    """
    Get video playback duration in seconds and fps,
    e.g. for "This epic classic car collection centres on co.webm"

    :param video_filepath: str, path to video file
    :param method: str, method cv2 or default ffprobe
    """
    if method == 'cv2':  # Use opencv-python
        video = cv2.VideoCapture(video_filepath)
        fps = video.get(cv2.CAP_PROP_FPS)
        frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
        duration_seconds = frame_count / fps if fps else 0
    else:  # ffprobe
        result = subprocess.check_output(
            f'ffprobe -v quiet -show_streams -select_streams v:0 -of json "{video_filepath}"',
            shell=True).decode()
        fields = json.loads(result)['streams'][0]
        duration_seconds = fields['tags'].get('DURATION')
        fps = eval(fields.get('r_frame_rate'))
    return duration_seconds, fps
ffprobe does not work for FLV, and I couldn't get anything to work for WebM. Otherwise, this works great and is being used in prod today.
Referring to the answer of @Nikolay Gogol using opencv-python (cv2):
His method did not work for me (Python 3.8.10, opencv-python==4.5.5.64), and the comments say that opencv cannot be used in this case, which is also not true.
CAP_PROP_POS_MSEC gives you the millisecond of the current frame that the VideoCapture is positioned at, not the total milliseconds of the video, so when the video has just been loaded this is obviously 0.
But we can actually get the frame rate and the total number of frames to calculate the total duration of the video:
import cv2
video = cv2.VideoCapture("video.mp4")
# the frame rate or frames per second
frame_rate = video.get(cv2.CAP_PROP_FPS)
# the total number of frames
total_num_frames = video.get(cv2.CAP_PROP_FRAME_COUNT)
# the duration in seconds
duration = total_num_frames / frame_rate
For anyone that likes using the mediainfo program:
import json
import subprocess

#===============================
def getMediaInfo(mediafile):
    cmd = "mediainfo --Output=JSON %s" % (mediafile)
    proc = subprocess.Popen(cmd, shell=True,
                            stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    stdout, stderr = proc.communicate()
    data = json.loads(stdout)
    return data

#===============================
def getDuration(mediafile):
    data = getMediaInfo(mediafile)
    duration = float(data['media']['track'][0]['Duration'])
    return duration
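Usage, assuming the mediainfo CLI is installed (the file name is a placeholder):
print(getDuration("my_video.mp4"))  # duration in seconds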
Using ffprobe in a function, this returns the duration of a video in seconds.
def video_duration(filename):
    import subprocess
    secs = subprocess.check_output(
        f'ffprobe -v error -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 "{filename}"',
        shell=True).decode()
    return float(secs)