I have the following code:
#!/usr/bin/python
import StringIO
import subprocess
import os
import time
from datetime import datetime
from PIL import Image
# Original code written by brainflakes and modified to exit
# image scanning for loop as soon as the sensitivity value is exceeded.
# this can speed taking of larger photo if motion detected early in scan
# Motion detection settings:
# need future changes to read values dynamically via command line parameter or xml file
# --------------------------
# Threshold - (how much a pixel has to change by to be marked as "changed")
# Sensitivity - (how many changed pixels before capturing an image) needs to be higher if noisy view
# ForceCapture - (whether to force an image to be captured every forceCaptureTime seconds)
# filepath - location of folder to save photos
# filenamePrefix - string that prefixes the file name for easier identification of files.
threshold = 10
sensitivity = 180
forceCapture = True
forceCaptureTime = 60 * 60 # Once an hour
filepath = "/home/pi/camera"
filenamePrefix = "capture"
# File photo size settings
saveWidth = 1280
saveHeight = 960
diskSpaceToReserve = 40 * 1024 * 1024 # Keep 40 mb free on disk
# Capture a small test image (for motion detection)
def captureTestImage():
    command = "raspistill -w %s -h %s -t 500 -e bmp -o -" % (100, 75)
    imageData = StringIO.StringIO()
    imageData.write(subprocess.check_output(command, shell=True))
    imageData.seek(0)
    im = Image.open(imageData)
    buffer = im.load()
    imageData.close()
    return im, buffer
# Save a full size image to disk
def saveImage(width, height, diskSpaceToReserve):
    keepDiskSpaceFree(diskSpaceToReserve)
    time = datetime.now()
    filename = filepath + "/" + filenamePrefix + "-%04d%02d%02d-%02d%02d%02d.jpg" % (time.year, time.month, time.day, time.hour, time.minute, time.second)
    subprocess.call("raspistill -w 1296 -h 972 -t 1000 -e jpg -q 15 -o %s" % filename, shell=True)
    print "Captured %s" % filename
# Keep free space above given level
def keepDiskSpaceFree(bytesToReserve):
    if (getFreeSpace() < bytesToReserve):
        for filename in sorted(os.listdir(filepath + "/")):
            if filename.startswith(filenamePrefix) and filename.endswith(".jpg"):
                os.remove(filepath + "/" + filename)
                print "Deleted %s to avoid filling disk" % filename
                if (getFreeSpace() > bytesToReserve):
                    return
# Get available disk space
def getFreeSpace():
    st = os.statvfs(".")
    du = st.f_bavail * st.f_frsize
    return du
# Get first image
image1, buffer1 = captureTestImage()
# Reset last capture time
lastCapture = time.time()
# added this to give visual feedback of camera motion capture activity. Can be removed as required
os.system('clear')
print " Motion Detection Started"
print " ------------------------"
print "Pixel Threshold (How much) = " + str(threshold)
print "Sensitivity (changed Pixels) = " + str(sensitivity)
print "File Path for Image Save = " + filepath
print "---------- Motion Capture File Activity --------------"
while (True):
    # Get comparison image
    image2, buffer2 = captureTestImage()
    # Count changed pixels
    changedPixels = 0
    for x in xrange(0, 100):
        # Scan one line of the image, then check sensitivity for movement
        for y in xrange(0, 75):
            # Just check the green channel as it's the highest quality channel
            pixdiff = abs(buffer1[x, y][1] - buffer2[x, y][1])
            if pixdiff > threshold:
                changedPixels += 1
        # Changed logic - if movement sensitivity is exceeded,
        # save the image and exit before the full image scan completes
        if changedPixels > sensitivity:
            lastCapture = time.time()
            saveImage(saveWidth, saveHeight, diskSpaceToReserve)
            break
        continue
    # Check force capture
    if forceCapture:
        if time.time() - lastCapture > forceCaptureTime:
            changedPixels = sensitivity + 1
    # Swap comparison buffers
    image1 = image2
    buffer1 = buffer2
This code takes a picture once movement is detected, and keeps doing so until I manually stop it. (I should mention that the code is for use with the Raspberry Pi computer)
I also have the following code courtesy of Nathan Jhaveri here on Stack Overflow:
import SocketServer
from BaseHTTPServer import BaseHTTPRequestHandler
def some_function():
    print "some_function got called"

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/captureImage':
            saveImage(saveWidth, saveHeight, diskSpaceToReserve)
        self.send_response(200)

httpd = SocketServer.TCPServer(("", 8080), MyHandler)
httpd.serve_forever()
This code runs a simple server that would execute the
saveImage(saveWidth, saveHeight, diskSpaceToReserve)
function when the URL /captureImage is visited on the server. I have run into a problem with this, though. Since the two pieces of code are both infinite loops, they cannot run side by side. I assume I need some kind of multithreading, but that is something I have never experimented with in Python before. I would appreciate it if anyone could help me get back on track with this.
This isn't a small question. Your best bet is to work through some python threading tutorials such as this one: http://www.tutorialspoint.com/python/python_multithreading.htm (found via google)
Try taking the webserver and running it on a background thread, so that calling "serve_forever()" does not block the main thread's "while True:" loop. Replace the existing call to httpd.serve_forever() with something like:
import threading
class WebThread(threading.Thread):
    def run(self):
        httpd = SocketServer.TCPServer(("", 8080), MyHandler)
        httpd.serve_forever()

WebThread().start()
Make sure that chunk of code runs before the "while (True):" loop, and you should have both the webserver loop and the main thread loop running.
Keep in mind that having multiple threads can get complicated. What happens when two threads access the same resource at the same time? As Velox mentioned in another answer, it is worthwhile to learn more about threading.
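For instance, a minimal sketch (my addition, not part of the original code) of guarding shared state with a threading.Lock, so that the webserver thread and the motion loop cannot update a shared variable such as lastCapture at the same time:

import threading

lock = threading.Lock()
lastCapture = 0

def update_last_capture(value):
    # Only one thread may hold the lock at a time, so concurrent
    # updates to the shared variable cannot interleave.
    global lastCapture
    with lock:
        lastCapture = value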
I can illustrate a simple example using multi-threading.
from http.server import BaseHTTPRequestHandler, HTTPServer
import concurrent.futures
import logging
import time
hostName = "localhost"
serverPort = 5001
class MyServer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(bytes("<html><head><title>python3 http server</title><body>", "utf-8"))

def serverThread():
    webServer = HTTPServer((hostName, serverPort), MyServer)
    logging.info("Server started http://%s:%s" % (hostName, serverPort))
    try:
        webServer.serve_forever()
    except:
        pass
    webServer.server_close()
    logging.info("Server stopped.")

def logThread():
    while True:
        time.sleep(2.0)
        logging.info('hi from log thread')

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        # Run the server
        executor.submit(serverThread)
        # Run a parallel thread
        executor.submit(logThread)
Here we have two threads: a server, and a parallel thread that logs a line every 2 seconds. You define the code for each thread in a separate function and submit those functions to the concurrent.futures thread pool.
By the way, I have not studied how efficient it is to run a server this way.
I am trying to save audio clips (15 seconds per clip) from a live stream using the VLC library. I am unable to find any option that would allow me to record only 15 seconds from the live stream, so I ended up using a timer in my code, but the recorded clips sometimes contain 10 seconds, sometimes 20 seconds (rarely 15 seconds). Also, the audio content is sometimes repeated in the clips.
Here is the code (I am a newbie so please guide me)
Code.py
import os
import sys
import vlc
import time
clipNumber = sys.argv[1]
filepath = 'http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'
movie = os.path.expanduser(filepath)
if 'http://' not in filepath:
    if not os.access(movie, os.R_OK):
        print('Error: %s file is not readable' % movie)
        sys.exit(1)

filename_and_command = "--sout=#transcode{vcodec=none,acodec=mp3,ab=320,channels=2,samplerate=44100}:file{dst=clip" + str(clipNumber) + ".mp3}"
# filename_and_command = "--sout=file/ts:clip" + str(clipNumber) + ".mp3"

instance = vlc.Instance(filename_and_command)
try:
    media = instance.media_new(movie)
except NameError:
    print('NameError: %s (%s vs LibVLC %s)' % (sys.exc_info()[1],
          vlc.__version__, vlc.libvlc_get_version()))
    sys.exit(1)
player = instance.media_player_new()
player.set_media(media)
player.play()
time.sleep(15)
exit()
Now, to record 1 minute of the live stream, I invoke this Python code from a bash script 4 times, and it creates 4 audio clips (clip1.mp3, clip2.mp3, clip3.mp3 and clip4.mp3).
Script.sh
for ((i=1; i<=4; i++))
do
    printf "Recording stream #%d\n" "$i"
    python code.py "$i"
    printf "Finished stream #%d\n" "$i"
done
Is there any way to just loop the code in Python instead of invoking it again and again from the bash script? (I tried putting the code in a loop in Python, but the first clip, clip1, keeps recording and never finishes.) And is there a way to specify recording only 15 seconds from the live stream instead of using time.sleep(15)?
If you just want to save the file, no need to use vlc. Here is a short procedure I use to do that:
from datetime import datetime, timedelta

def record(filepath, stream, duration):
    fd = open(filepath, 'wb')
    begin = datetime.now()
    duration = timedelta(milliseconds=duration)
    while datetime.now() - begin < duration:
        data = stream.read(10000)
        fd.write(data)
    fd.close()
Example of use, to record for one second:
from urllib.request import urlopen
record('clip.mp3', urlopen('http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'), 1000)
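To get the four 15-second clips the question asks for in a plain Python loop, something like this should work (an untested sketch; like the example above, it assumes the URL yields raw audio data):

from urllib.request import urlopen

stream = urlopen('http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8')
for i in range(1, 5):
    # Each call reads roughly the next 15 seconds of data from the same stream.
    record('clip%d.mp3' % i, stream, 15000)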
All of the work required can easily be done with FFMPEG as:
ffmpeg -i streamURL -c copy -vn -ac 2 -acodec aac -t 15 output.aac
-vn for just recording the audio part (without video)
-t for specifying the duration of stream you want to record (15 sec here)
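And if you want to drive ffmpeg from Python instead of the bash script, here is a rough sketch (the clip names and the -y overwrite flag are my additions):

import subprocess

streamURL = 'http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'
for i in range(1, 5):
    # Record four consecutive 15-second audio-only clips.
    subprocess.call([
        'ffmpeg', '-y', '-i', streamURL,
        '-vn', '-ac', '2', '-acodec', 'aac',
        '-t', '15', 'clip%d.aac' % i,
    ])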
So I need to be able to read and count the number of lines from a FTP server WITHOUT downloading it to my local machine while using Python.
I know the code to connect to the server:
ftp = ftplib.FTP('example.com')  # Object ftp set to server address
ftp.login('username', 'password')  # Login info
ftp.retrlines('LIST')  # List file directories
ftp.cwd('/parent folder/another folder/file/')  # Change directory
I also know the basic code to count the number of line If it is already downloaded/stored locally :
with open('file') as f:
    count = sum(1 for line in f)
    print(count)
I just need to know how to connect these 2 pieces of code without having to download the file to my local system.
Any help is appreciated.
Thank You
As far as I know, FTP doesn't provide any kind of functionality to read file content without actually downloading it. However, you could try something like Is it possible to read FTP files without writing them using Python?
(You haven't specified which Python you are using.)
#!/usr/bin/env python
from ftplib import FTP

def countLines(s):
    print len(s.split('\n'))

ftp = FTP('ftp.kernel.org')
ftp.login()
ftp.retrbinary('RETR /pub/README_ABOUT_BZ2_FILES', countLines)
Please take this code as a reference only
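Note that retrbinary may invoke the callback once per downloaded block, so the version above prints one count per chunk rather than a total. A sketch (my variation, same Python 2 style) that accumulates the count across chunks:

#!/usr/bin/env python
from ftplib import FTP

class LineCounter(object):
    def __init__(self):
        self.count = 0

    def __call__(self, chunk):
        # Called once per retrieved block; accumulate instead of printing.
        self.count += chunk.count('\n')

counter = LineCounter()
ftp = FTP('ftp.kernel.org')
ftp.login()
ftp.retrbinary('RETR /pub/README_ABOUT_BZ2_FILES', counter)
print counter.count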
There is a way: I adapted a piece of code that I created for processing CSV files "on the fly". It is implemented with a producer-consumer approach; applying this pattern lets us assign each task to a thread (or process) and show partial results for huge remote files. You can adapt it for FTP requests.
The download stream is saved in a queue and consumed "on the fly". No extra disk space is needed, and it is memory efficient. Tested with Python 3.5.2 (vanilla) on Fedora Core 25 x86_64.
This is the source, adapted for retrieval over HTTP (the same idea applies to FTP):
from threading import Thread, Event
from queue import Queue, Empty
import urllib.request
import time
import argparse

FILE_URL = 'http://cdiac.ornl.gov/ftp/ndp030/CSV-FILES/nation.1751_2010.csv'

def download_task(url, chunk_queue, event):
    CHUNK = 1 * 1024
    response = urllib.request.urlopen(url)
    event.clear()
    print('%% - Starting Download - %%')
    print('%% - ------------------ - %%')
    while True:
        chunk = response.read(CHUNK)
        if not chunk:
            print('%% - Download completed - %%')
            event.set()
            break
        chunk_queue.put(chunk)

def count_task(chunk_queue, event):
    part = False
    time.sleep(5)  # give the producer some time
    M = 0
    contador = 0
    # VT100 control codes.
    CURSOR_UP_ONE = '\x1b[1A'
    ERASE_LINE = '\x1b[2K'
    while True:
        try:
            # queue.get() blocks by default when the queue is empty.
            # With block=False it raises queue.Empty instead, which is
            # used here to show a partial result of the process.
            chunk = chunk_queue.get(block=False)
            for line in chunk.splitlines(True):
                if line.endswith(b'\n'):
                    if part:  # handle the trailing partial line of the previous chunk
                        line = linepart + line
                        part = False
                    M += 1
                else:
                    # A line without '\n' is the last line of the chunk:
                    # a partial line, completed in the iteration over the next chunk.
                    part = True
                    linepart = line
        except Empty:
            # QUEUE EMPTY
            print(CURSOR_UP_ONE + ERASE_LINE + CURSOR_UP_ONE)
            print(CURSOR_UP_ONE + ERASE_LINE + CURSOR_UP_ONE)
            print('Downloading records ...')
            if M > 0:
                print('Partial result: Lines: %d ' % M)  # M-1 because M contains the header
            if event.is_set():  # the end: no elements in queue and download finished (event is set)
                print(CURSOR_UP_ONE + ERASE_LINE + CURSOR_UP_ONE)
                print(CURSOR_UP_ONE + ERASE_LINE + CURSOR_UP_ONE)
                print(CURSOR_UP_ONE + ERASE_LINE + CURSOR_UP_ONE)
                print('The consumer has waited %s times' % str(contador))
                print('RECORDS = ', M)
                break
            contador += 1
            time.sleep(1)  # give some time for loading more records

def main():
    chunk_queue = Queue()
    event = Event()
    args = parse_args()
    url = args.url
    p1 = Thread(target=download_task, args=(url, chunk_queue, event))
    p1.start()
    p2 = Thread(target=count_task, args=(chunk_queue, event))
    p2.start()
    p1.join()
    p2.join()

# The user of this module can customize one parameter:
# + URL where the remote file can be found.
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('-u', '--url', default=FILE_URL,
                        help='remote-csv-file URL')
    return parser.parse_args()

if __name__ == '__main__':
    main()
Usage
$ python ftp-data.py -u <ftp-file>
Example:
python ftp-data-ol.py -u 'http://cdiac.ornl.gov/ftp/ndp030/CSV-FILES/nation.1751_2010.csv'
The consumer has waited 0 times
RECORDS = 16327
Csv version on Github: https://github.com/AALVAREZG/csv-data-onthefly
I want to use my Raspberry Pi camera module to scan QR codes.
For detecting and decoding QR codes I want to use zbar.
My current code:
import io
import time
import picamera
import zbar
import Image
from sys import argv

if len(argv) < 2:
    exit(1)

# Create an in-memory stream
my_stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    # Camera warm-up time
    time.sleep(2)
    camera.capture(my_stream, 'jpeg')

scanner = zbar.ImageScanner()
scanner.parse_config('enable')

pil = Image.open(argv[1]).convert('L')
width, height = pil.size
raw = pil.tostring()

my_stream = zbar.Image(width, height, 'Y800', raw)
scanner.scan(image)

for symbol in image:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
As you can see, I want to create a picture stream and send this stream to zbar to check whether a QR code is contained in the picture.
I am not able to run this code; this error is the result:
Segmentation fault
------------------ (program exited with code: 139) Press return to continue
I can't find any solution for how to fix this bug. Any ideas?
Kind regards.
The shortcoming of all the other answers is that they have a large amount of DELAY: what they scan and display to the screen is actually a frame taken several seconds earlier.
This is due to the slow CPU of the Raspberry Pi, so the frame rate is much higher than the rate at which our software can read and scan.
With a lot of effort, I finally made this code, which has LITTLE DELAY: when you show it a QR code or barcode, it gives you a result in less than a second.
The trick I use is explained in the code.
import cv2
import cv2.cv as cv
import numpy
import zbar
import time
import threading

'''
LITTLE-DELAY BarCodeScanner
Author: Chen Jingyi (From FZYZ Junior High School, China)
PS. If your pi's V4L is not available, the cv-Window may have some error sometimes, but other parts of this code work fine.
'''

class BarCodeScanner(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.WINDOW_NAME = 'Camera'
        self.CV_SYSTEM_CACHE_CNT = 5  # Cv has a 5-frame cache
        self.LOOP_INTERVAL_TIME = 0.2
        cv.NamedWindow(self.WINDOW_NAME, cv.CV_WINDOW_NORMAL)
        self.cam = cv2.VideoCapture(-1)

    def scan(self, aframe):
        imgray = cv2.cvtColor(aframe, cv2.COLOR_BGR2GRAY)
        raw = str(imgray.data)
        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')
        width = int(self.cam.get(cv.CV_CAP_PROP_FRAME_WIDTH))
        height = int(self.cam.get(cv.CV_CAP_PROP_FRAME_HEIGHT))
        imageZbar = zbar.Image(width, height, 'Y800', raw)
        scanner.scan(imageZbar)
        for symbol in imageZbar:
            print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data

    def run(self):
        while True:
            ''' Why read several times and throw the data away: I guess OpenCV has a
            cache-queue whose length is 5. read() will dequeue a frame from it if it
            is not empty, otherwise wait until it has one. When the camera has a new
            frame, the frame is enqueued if the queue is not full, otherwise it is
            thrown away. In this case the camera's frame rate is far higher than the
            rate at which the while loop executes, so by the time the code gets here
            the queue is full. Therefore, to get the newest frame, we first need to
            dequeue the 5 stale frames in the queue. That's why.
            '''
            for i in range(0, self.CV_SYSTEM_CACHE_CNT):
                self.cam.read()  # read and discard a stale cached frame
            img = self.cam.read()
            self.scan(img[1])
            cv2.imshow(self.WINDOW_NAME, img[1])
            cv.WaitKey(1)
            time.sleep(self.LOOP_INTERVAL_TIME)
        self.cam.release()

scanner = BarCodeScanner()
scanner.start()
In the line
scanner.scan(image)
you're using a variable that hasn't appeared in the code before. Because zbar is written in C, it doesn't catch that the variable is undefined, and the library tries to read garbage data as if it were an image. Hence, the segfault. I'm guessing you meant my_stream instead of image.
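If so, a minimal fix would look like this (using a fresh name, since my_stream already holds the BytesIO object earlier in the script):

# Scan the zbar.Image that was just created, instead of the undefined `image`.
zbar_image = zbar.Image(width, height, 'Y800', raw)
scanner.scan(zbar_image)
for symbol in zbar_image:
    print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data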
I'm using QR decoding on the Raspberry Pi for my project. I solved it by using the subprocess module.
Here is my function for QR decoding:
import subprocess

def detect():
    """Detects qr code from camera and returns string that represents that code.

    return -- qr code from image as string
    """
    subprocess.call(["raspistill -n -t 1 -w 120 -h 120 -o cam.png"], shell=True)
    process = subprocess.Popen(["zbarimg -D cam.png"], stdout=subprocess.PIPE, shell=True)
    (out, err) = process.communicate()

    qr_code = None
    # out looks like "QR-code: Xuz213asdY" so you need
    # to remove the first 8 characters plus whitespace
    if len(out) > 8:
        qr_code = out[8:].strip()

    return qr_code
You can easily add parameters to the function, such as img_width and img_height, and change this part of the code
"raspistill -n -t 1 -w 120 -h 120 -o cam.png"
to
"raspistill -n -t 1 -w %d -h %d -o cam.png" % (img_width, img_height)
if you want a different image size for decoding.
After reading this, I was able to come up with a pythonic solution involving OpenCV.
First, you build OpenCV on the Pi by following these instructions. That will probably take several hours to complete.
Now reboot the Pi and use the following script (assuming you have python-zbar installed) to get the QR/barcode data:
import cv2
import cv2.cv as cv
import numpy
import zbar

class test():
    def __init__(self):
        cv.NamedWindow("w1", cv.CV_WINDOW_NORMAL)
        # self.capture = cv.CaptureFromCAM(camera_index) # for some reason, this doesn't work
        self.capture = cv.CreateCameraCapture(-1)
        self.vid_contour_selection()

    def vid_contour_selection(self):
        while True:
            self.frame = cv.QueryFrame(self.capture)
            aframe = numpy.asarray(self.frame[:, :])
            g = cv.fromarray(aframe)
            g = numpy.asarray(g)
            imgray = cv2.cvtColor(g, cv2.COLOR_BGR2GRAY)
            raw = str(imgray.data)
            scanner = zbar.ImageScanner()
            scanner.parse_config('enable')
            imageZbar = zbar.Image(self.frame.width, self.frame.height, 'Y800', raw)
            scanner.scan(imageZbar)
            for symbol in imageZbar:
                print 'decoded', symbol.type, 'symbol', '"%s"' % symbol.data
            cv2.imshow("w1", aframe)
            c = cv.WaitKey(5)
            if c == 110:  # pressing the 'n' key will cause the program to exit
                exit()

p = test()
Note: I had to turn the Raspi Camera's lens counterclockwise about 1/4 - 1/3 of a turn before zbar was able to detect the QR/barcodes.
With the above code, whenever zbar detects a QR/barcode, the decoded data is printed to the console. It runs continuously, stopping only when the n key is pressed.
For anyone that is still looking for a solution to this...
This code is ugly but it works from a regular webcam pretty well; I haven't tried the Pi camera yet. I'm new to Python, so this is the best I could come up with that works in both Python 2 and 3.
Make a bash script called kill.sh and make it executable (chmod +x):
#kill all running zbar tasks ... call from python
ps -face | grep zbar | awk '{print $2}' | xargs kill -s KILL
Then do a system call from python like so...
import sys
import os

def start_cam():
    while True:
        # Initializes an instance of zbarcam on the command line to detect barcode data strings.
        p = os.popen('/usr/bin/zbarcam --prescale=300x200', 'r')
        # Barcode variable read by Python from the command line.
        print("Please scan a QRcode to begin...")
        barcode = p.readline()
        barcodedata = str(barcode)[8:]
        if barcodedata:
            print("{0}".format(barcodedata))
            # Kills the webcam window by executing the bash file
            os.system("/home/pi/Desktop/kill.sh")

start_cam()
Hopefully this helps people with the same questions in the future!
Quite a late response, but I ran into a number of issues while trying to get zbar working. Though I was using a USB webcam, I had to install multiple libraries before I could get zbar installed. I installed fswebcam, python-zbar, libzbar-dev, and finally ran setup.py.
More importantly, the zbar from SourceForge did not work for me, but the one from GitHub, which has a Python wrapper, did.
I documented my steps at http://techblog.saurabhkumar.com/2015/09/scanning-barcodes-using-raspberry-pi.html in case it helps.
Just a small modification of Dan2theR's answer, because I don't want to create another shell file.
import sys
import os

p = os.popen('/usr/bin/zbarcam --prescale=300x300 -Sdisable -Sqrcode.enable', 'r')

def start_scan():
    global p
    while True:
        print('Scanning')
        data = p.readline()
        qrcode = str(data)[8:]
        if qrcode:
            print(qrcode)

try:
    start_scan()
except KeyboardInterrupt:
    print('Stop scanning')
finally:
    p.close()
I have a Linux (Ubuntu) machine as a client. I want to measure the data transfer rate when 200 users try to download files from my server at the same time.
Is there some python or linux tool for this? Or can you recommend an approach?
I saw this speedcheck code, and I can wrap it in threads, but I don't understand why the code there is so "complicated" and the block size changes all the time.
I used Multi-Mechanize recently to run some performance tests. It was fairly easy and worked pretty well.
Not sure if you're talking about an actual dedicated server. For traffic graphs and so on I prefer to use Munin. It is a pretty complete monitoring application which builds you nice graphs using rrdtool. Examples are linked on the munin site: full setup, eth0 traffic graph.
The new munin 2 is even more flashy, but I did not use it yet as it's not in my repos and I don't like to mess with perl applications.
Maybe ab from Apache?
ab -n 1000 -c 200 [http[s]://]hostname[:port]/path
-n Number of requests to perform
-c Number of multiple requests to make at a time
It has many options, http://httpd.apache.org/docs/2.2/programs/ab.html
or man ab
import threading
import time
import urllib2

block_sz = 8192
num_threads = 1
url = "http://192.168.1.1/bigfile2"
secDownload = 30

class DownloadFileThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.u = urllib2.urlopen(url)
        self.file_size_dl = 0

    def run(self):
        while True:
            buffer = self.u.read(block_sz)
            if not buffer:
                raise Exception('There is nothing to read. You should have a bigger file or a smaller time')
            self.file_size_dl += len(buffer)

if __name__ == "__main__":
    print 'Download from url ' + url + ' using ' + str(num_threads) + ' thread(s), test time ' + str(secDownload)
    threads = []
    for i in range(num_threads):
        downloadThread = DownloadFileThread()
        downloadThread.daemon = True
        threads.append(downloadThread)
    for i in range(num_threads):
        threads[i].start()
    time.sleep(secDownload)
    sumBytes = 0
    for i in range(num_threads):
        sumBytes = sumBytes + threads[i].file_size_dl
    print sumBytes
    print str(sumBytes / (secDownload * 1000000.0)) + ' MBps'
I have an upload script done, but I need to figure out how to make a script that I can run as a daemon in Python to handle the conversion part and move the converted file to its final resting place. Here's what I have so far for the directory-watcher script:
#!/usr/bin/python
import os
import sys, time, syslog, config
import ConfigParser
from pyinotify import WatchManager, Notifier, ThreadedNotifier, ProcessEvent, IN_DELETE, IN_CREATE
from os import system
from daemon import Daemon

class myLog(ProcessEvent):
    def process_IN_CREATE(self, event):
        syslog.syslog("creating: " + event.pathname)

    def process_IN_DELETE(self, event):
        syslog.syslog("deleting: " + event.pathname)

    def process_default(self, event):
        syslog.syslog("default: " + event.pathname)

class MyDaemon(Daemon):
    def loadConfig(self):
        """Load user configuration file"""
        self.config = {}
        self.parser = ConfigParser.ConfigParser()
        if not os.path.isfile(self.configfile):
            self.parser.write(open(self.configfile, 'w'))
        self.parser.readfp(open(self.configfile, 'r'))
        variables = {
            'mplayer': ['paths', self.findProgram("mplayer")],
            'mencoder': ['paths', self.findProgram("mencoder")],
            'tcprobe': ['paths', self.findProgram("tcprobe")],
            'transcode': ['paths', self.findProgram("transcode")],
            'ogmmerge': ['paths', self.findProgram("ogmmerge")],
            'outputdir': ['paths', os.path.expanduser("~")],
        }
        for key in variables.keys():
            self.cautiousLoad(variables[key][0], key, variables[key][1])

    def cautiousLoad(self, section, var, default):
        """Load a configurable variable within an exception clause,
        in case variable is not in configuration file"""
        try:
            self.config[var] = int(self.parser.get(section, var))
        except:
            self.config[var] = default
            try:
                self.parser.set(section, var, default)
            except:
                self.parser.add_section(section)
                self.parser.set(section, var, default)
            self.parser.write(open(self.configfile, 'w'))

    def findProgram(self, program):
        """Looks for program in path, and returns full path if found"""
        for path in config.paths:
            if os.path.isfile(os.path.join(path, program)):
                return os.path.join(path, program)
        self.ui_configError(program)

    def run(self):
        syslog.openlog('mediaConvertor', syslog.LOG_PID, syslog.LOG_DAEMON)
        syslog.syslog('daemon started, entering loop')
        wm = WatchManager()
        mask = IN_DELETE | IN_CREATE
        notifier = ThreadedNotifier(wm, myLog())
        notifier.start()
        wdd = wm.add_watch(self.config['outputdir'], mask, rec=True)
        while True:
            time.sleep(1)
        wm.rm_watch(wdd.values())
        notifier.stop()
        syslog.syslog('exiting media convertor')
        syslog.closelog()

if __name__ == "__main__":
    daemon = MyDaemon('/tmp/mediaconvertor.pid')
    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            daemon.run()
        if 'stop' == sys.argv[1]:
            daemon.stop()
        if 'restart' == sys.argv[1]:
            daemon.restart()
        else:
            print "Unknown Command"
            sys.exit(2)
        sys.exit(0)
    else:
        print "Usage: %s start|stop|restart" % sys.argv[0]
        sys.exit(2)
Not sure where to go from here.
I don't run on Linux and have never used the inotify capabilities you are using here. I'll describe how I would do things generically.
In the simplest case, you need to check if there's a new file in the upload directory and, when there is one, start the conversion.
To check if there are new files you can do something like:
import os
import time
import os
import time

def watch_directory(dirname="."):
    old_files = set(os.listdir(dirname))
    while 1:
        time.sleep(1)
        new_files = set(os.listdir(dirname))
        diff = new_files - old_files
        if diff:
            print "New files", diff
        old_files = new_files

watch_directory()
You may be able to minimize some filesystem overhead by first stat'ing the directory to see if there are any changes.
def watch_directory(dirname="."):
    old_files = set(os.listdir(dirname))
    old_stat = os.stat(dirname)
    while 1:
        time.sleep(1)
        new_stat = os.stat(dirname)
        if new_stat == old_stat:
            continue
        new_files = set(os.listdir(dirname))
        diff = new_files - old_files
        if diff:
            print "New files", diff
        old_stat = new_stat
        old_files = new_files
With inotify I think this is all handled for you, and you put your code into process_IN_CREATE() which gets called when a new file is available.
One bit of trickiness: how does the watcher know that the upload is complete? What happens if the upload is canceled partway through? This could be as simple as having the web server do a rename() to use one extension during upload and another extension when done, as sketched below.
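A sketch of that rename convention (the .part/.done extensions are hypothetical, not something your uploader already does):

import os

def handle_new_file(pathname):
    # The uploader writes "<name>.part" and renames it to "<name>.done"
    # when the upload finishes, so the watcher ignores partial files.
    if not pathname.endswith(".done"):
        return
    final_name = pathname[:-len(".done")]
    os.rename(pathname, final_name)
    print "Ready for conversion:", final_name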
Once you know the file, use subprocess.Popen([conversion_program, new_filename]) or os.system("conversion_program new_filename &") to spawn off the conversion in a new process. You'll need to handle things like error reporting, for when the input isn't in the right format. It should also clean up, meaning that once the conversion is done it should remove the input file from consideration; this might be as easy as deleting the file.
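For example, a rough sketch (conversion_program is a placeholder for whatever converter you use):

import os
import subprocess

def convert(filename):
    proc = subprocess.Popen(["conversion_program", filename])
    if proc.wait() == 0:  # wait() blocks, so call this from a worker thread or child process
        os.remove(filename)  # clean up: the input is no longer needed
    else:
        print "conversion failed for", filename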
You'll also need to worry about restarting any conversions that were killed: if the machine goes down, how does the restarted watcher know which conversions were killed along with it and need to be restarted? But this might be doable as a manual step.