I need to write a simple app that runs two threads:
- thread 1: runs at timed periods, let's say every 1 minute
- thread 2: just a 'normal' while True loop that does 'stuff'
If it weren't for the requirement to run at a timed interval I would not have looked at Twisted at all, but a simple sleep(60) is not good enough, and a construction like:
l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.run()
looked really simple and seemed to achieve exactly what I wanted.
Now, how do I 'properly' add another thread?
I see two options here:
Use the threading library and run two 'Python threads': one executing my while loop, and another running reactor.run(). But Google seems to object to this approach and suggests using Twisted's threading instead.
Use Twisted's threading. That's what I've tried, but somehow it looks a bit clumsy to me.
Here's what I came up with:
import time
from twisted.internet import task, reactor

def timed_thread():
    print 'i will be called every 1 minute'
    return

def normal_thread():
    print 'this is a normal thread'
    time.sleep(30)
    return

l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.callInThread(normal_thread)
reactor.run()
That seems to work, but now I can't stop the app. If I press ^C it does nothing (without 'callInThread' it stops just as you'd expect). ^Z drops me back to the shell, and if I then do 'kill %1' the shell reports that the process was killed, but the 'normal' thread keeps on running. kill PID won't get rid of it, and the only cure is kill -9. Really strange.
So, what am I doing wrong? Is this the correct approach to implementing two threads in Twisted? Should I not bother with Twisted at all? What other 'standard' alternatives are there for implementing timed calls? (By 'standard' I mean something I can easy_install or yum install; I don't want to start downloading and using random scripts from random web pages.)
You didn't explain why you actually need threads here. If you had, I might have been able to explain why you don't need them. ;)
That aside, I can confirm that your basic understanding of things is correct. One possible misunderstanding I can clear up, though, is the notion that "python threads" and "Twisted threads" are at all different from each other. They're not. Python provides a threading library. All of Twisted's thread APIs are implemented in terms of Python's threading library. Only the API is different.
As far as shutdown goes, you have two options.
Start your run-forever thread using Python's threading APIs directly and make the thread a daemon. Your process can exit even while daemon threads are still running. A possible problem with this solution is that some versions of Python have issues with daemon threads that will lead to a crash at shutdown time.
Create your thread using either Twisted's APIs or the stdlib threading APIs but also add a Twisted shutdown hook using reactor.addSystemEventTrigger('before', 'shutdown', f). In that hook, communicate with the work thread and tell it to shut down. For example, you could share a threading.Event between the Twisted thread and your work thread and have the hook set it. The work thread can periodically check to see if it has been set and exit when it notices that it has been. Aside from not crashing, this gives another advantage over daemon threads - it will let you run some cleanup or finalization code in your work thread before the process exits.
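Concretely, here is a minimal sketch of that second option applied to the code from the question (the 30-second wait simply stands in for the real work):

import threading
from twisted.internet import task, reactor

stop_event = threading.Event()

def timed_thread():
    print('i will be called every 1 minute')

def normal_thread():
    # stand-in for your real 'stuff'; checks the flag between chunks of work
    while not stop_event.is_set():
        stop_event.wait(30)

def shut_down_worker():
    # runs in the reactor thread just before Twisted shuts down
    stop_event.set()

l = task.LoopingCall(timed_thread)
l.start(60.0)
reactor.addSystemEventTrigger('before', 'shutdown', shut_down_worker)
reactor.callInThread(normal_thread)
reactor.run()

Because the reactor joins its thread pool while it shuts down, the worker thread gets a chance to finish the current loop iteration and run any cleanup before the process exits.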
Assuming that your main is relatively non-blocking:
import random
from twisted.internet import task

class MyProcess:
    def __init__(self):
        self.stats = []
        self.lp = None

    def myloopingCall(self):
        print "I have %s stats" % len(self.stats)

    def myMainFunction(self, reactor):
        self.stats.append(random.random())
        reactor.callLater(0, self.myMainFunction, reactor)

    def start(self, reactor):
        self.lp = task.LoopingCall(self.myloopingCall)
        self.lp.start(2)
        reactor.callLater(0, self.myMainFunction, reactor)

    def stop(self):
        if self.lp is not None:
            self.lp.stop()
        print "I'm done"

if __name__ == '__main__':
    myproc = MyProcess()

    from twisted.internet import reactor
    reactor.callWhenRunning(myproc.start, reactor)
    reactor.addSystemEventTrigger('during', 'shutdown', myproc.stop)
    reactor.callLater(10, reactor.stop)
    reactor.run()
$ python bleh.py
I have 0 stats
I have 33375 stats
I have 66786 stats
I have 100254 stats
I have 133625 stats
I'm done
I want to create a command-line tool which is a countdown timer with some custom features I need.
My idea is to use a Python script to fire up a process which does the work in the background (e.g. play a sound when close to the end). Once the timer process is running I would like to communicate with it via the command line (send inquiries like 'remaining' or commands like 'start XXmin' and 'stop'). There should be only a single instance of the timer process, of course.
Usage might look like
>>> timer start 25min
>>> timer remaining
17:34 min remaining
>>> timer stop
timer stopped.
>>> timer start 90sec
What would the timer process need to look like to do its work while waiting for messages to arrive? What, in turn, would the interface script need to do to fire up the process and to communicate with it later? Is using a separate process the best idea to achieve my goal?
I have no clue how to go about it. My idea sounds very simple yet almost all of what I found is concerned with the concurrency of child processes of a parent script, which is not what I want.
Thank you.
What you're looking for here is a basic client-server architecture. You'll need two programs - one which runs in the background and listens for messages (the server), and a second that sends messages to the server, and does something with the responses (the client).
There are a lot of ways to do this, and the area is legitimately complex, so don't expect it to be super easy. For starting out, I'd recommend a simple HTTP server built on the standard library's http.server module. For the client side, I'd recommend the requests library. HTTP is definitely not the best possible choice for a local client-server setup, but with the existing libraries it's by far the easiest to get up and running, and once you're comfortable with that you can look into other approaches if you want to.
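A very small sketch of that, using Python 3's http.server; the port, the /remaining endpoint and the fixed 25-minute duration are all made up for illustration:

# server.py -- run this in the background
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

started = time.time()
duration = 25 * 60          # a fixed 25-minute timer, just for the sketch

class TimerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/remaining':
            left = max(0, duration - (time.time() - started))
            body = ('%d seconds remaining' % left).encode()
        else:
            body = b'unknown command'
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('127.0.0.1', 8765), TimerHandler).serve_forever()

# client.py -- the command-line front end
import requests
print(requests.get('http://127.0.0.1:8765/remaining').text)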
The easy way would be to use & in the shell to execute your script in the background. And then communicate with the process with just USR1 and USR2 signals.
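For example, the timer script could install handlers for those signals (the handler bodies here are only illustrative):

import signal
import time

def handle_usr1(signum, frame):
    # e.g. report the remaining time (printing is just for illustration)
    print('USR1 received: report remaining time here')

def handle_usr2(signum, frame):
    # e.g. stop the timer
    print('USR2 received: stop the timer here')

signal.signal(signal.SIGUSR1, handle_usr1)
signal.signal(signal.SIGUSR2, handle_usr2)

while True:                 # the timer's main loop
    time.sleep(1)

From another shell you would then send the commands with kill -USR1 <pid> and kill -USR2 <pid>.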
In Python, I guess the easiest way would be to use the daemon module (from the python-daemon package):
import daemon

def do_something():
    pass

if __name__ == "__main__":
    with daemon.DaemonContext():
        do_something()
Or you could fork() your own daemon process.
import os

def doSomething():
    pass

def createDaemon():
    try:
        # Store the Fork PID
        pid = os.fork()
        if pid > 0:
            print 'PID: %d' % pid
            os._exit(0)
        os.chdir("/")
        os.setsid()
        os.umask(0)
    except OSError, error:
        print 'Unable to fork. Error: %d (%s)' % (error.errno, error.strerror)
        os._exit(1)
    doSomething()
Then, for example, you could use os.pipe to communicate with that daemon process. Or, in this simple case, on a *nix system, even just signals.
Another option is to use multiprocessing module to create the daemon process and also to communicate with it.
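The multiprocessing package can also provide the message channel itself: the Listener/Client pair in multiprocessing.connection works between separately started scripts. A rough sketch, with a made-up address, authkey and message format, showing only the command channel (not the end-of-timer alarm):

# timer_server.py -- run this in the background
from multiprocessing.connection import Listener
import time

started, duration = None, 0
listener = Listener(('127.0.0.1', 6000), authkey=b'timer')
while True:
    conn = listener.accept()            # wait for the next client command
    cmd = conn.recv()                   # e.g. ('start', 1500), ('remaining',), ('stop',)
    if cmd[0] == 'start':
        duration, started = cmd[1], time.time()
        conn.send('timer started')
    elif cmd[0] == 'remaining':
        if started is None:
            conn.send('no timer running')
        else:
            left = max(0, duration - (time.time() - started))
            conn.send('%d seconds remaining' % left)
    elif cmd[0] == 'stop':
        started = None
        conn.send('timer stopped.')
    conn.close()

# timer_client.py -- run once per command-line invocation
from multiprocessing.connection import Client

conn = Client(('127.0.0.1', 6000), authkey=b'timer')
conn.send(('remaining',))
print(conn.recv())
conn.close()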
Hi all, I'm really new to Python and I'm facing a task which I can't completely grasp.
I've created an interface with Tkinter which should accomplish a couple of apparently easy feats.
By clicking a "Start" button two threads/processes will be started (each calling multiple subfunctions) which mainly read data from a serial port (one port per process, of course) and write them to file.
The I/O actions are looped within a while loop with a very high counter to allow them to go onward almost indefinitely.
The "Stop" button should stop the acquisition and essentially it should:
Kill the read/write Thread
Close the file
Close the serial port
Unfortunately I still do not understand how to accomplish point 1, i.e.: how to create killable threads without killing the whole GUI. Is there any way of doing this?
Thank you all!
First, you have to choose whether you are going to use threads or processes.
I will not go too much into the differences, google it ;) Anyway, here are some things to consider: it is much easier to establish communication between threads than between processes; in CPython, only one thread executes Python bytecode at a time (see the Python GIL), whereas subprocesses can use multiple cores.
Processes
If you are using subprocesses, there are two ways: subprocess.Popen and multiprocessing.Process. With Popen you can run anything, whereas Process gives a simpler thread-like interface to running python code which is part of your project in a subprocess.
Both can be killed using terminate method.
See documentation for multiprocessing and subprocess
Of course, if you want a more graceful exit, you will want to send an "exit" message to the subprocess, rather than just terminate it, so that it gets a chance to do the clean-up. You could do that e.g. by writing to its stdin. The process should read from stdin and when it gets message "exit", it should do whatever you need before exiting.
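A rough sketch of that handshake, with a hypothetical worker.py whose actual work is left as placeholder comments (select() on stdin works on *nix; on Windows you would use a small reader thread instead):

import subprocess

# parent (the GUI side): start the worker, later ask it to exit cleanly
proc = subprocess.Popen(["python", "worker.py"], stdin=subprocess.PIPE)
# ... later, when the user clicks Stop:
proc.stdin.write(b"exit\n")
proc.stdin.flush()
proc.wait()

# worker.py: between chunks of work, peek at stdin for the 'exit' message
import sys
import select

while True:
    # read from the serial port and write to the file here (placeholder)
    ready, _, _ = select.select([sys.stdin], [], [], 0)
    if ready and sys.stdin.readline().strip() == "exit":
        break
# close the file and the serial port here, then exit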
Threads
For threads, you have to implement your own mechanism for stopping, rather than using something as violent as process.terminate().
Usually, a thread runs in a loop and in that loop you check for a flag which says stop. Then you break from the loop.
I usually have something like this:
import threading
from threading import Thread

SLEEP_TIME = 0.5  # how long to wait between checks of the stop flag

class MyThread(Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            # do something
            self._stop_event.wait(SLEEP_TIME)
        # clean-up before exit

    def stop(self, timeout):
        self._stop_event.set()
        self.join(timeout)
Of course, you need some exception handling etc, but this is the basic idea.
EDIT: Answers to questions in comment
thread.start_new_thread(your_function) starts a new thread, that is correct. On the other hand, the threading module gives you a higher-level API which is much nicer.
With threading module, you can do the same with:
t = threading.Thread(target=your_function)
t.start()
or you can make your own class which inherits from Thread and put your functionality in the run method, as in the example above. Then, when user clicks the start button, you do:
t = MyThread()
t.start()
You should store the t variable somewhere. Exactly where depends on how you designed the rest of your application. I would probably have some object which holds all active threads in a list.
When user clicks stop, you should:
t.stop(some_reasonable_time_in_which_the_thread_should_stop)
After that, you can remove the t from your list, it is not usable any more.
First you can use subprocess.Popen() to spawn child processes, then later you can use Popen.terminate() to terminate them.
Note that you could also do everything in a single Python thread, without subprocesses, if you want to. It's perfectly possible to "multiplex" reading from multiple ports in a single event loop.
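For example, a sketch of such a loop with select() and pyserial (POSIX only, since that is where pyserial port objects expose fileno(); the device and file names are made up, and wiring this loop into the Tkinter mainloop is left out):

import select
import serial               # pyserial, assumed to be the serial library in use

ports = {
    serial.Serial('/dev/ttyUSB0', 9600, timeout=0): open('port0.log', 'wb'),
    serial.Serial('/dev/ttyUSB1', 9600, timeout=0): open('port1.log', 'wb'),
}

running = True              # a Stop button could clear this flag
while running:
    # wait until at least one port has data (or one second passes)
    readable, _, _ = select.select(list(ports), [], [], 1.0)
    for port in readable:
        data = port.read(1024)          # timeout=0 keeps this non-blocking
        if data:
            ports[port].write(data)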
How can I modify this code (that uses twisted) so that CTRL+C will cause it to exit? I expect the problem is that doWork does not yield control back to the reactor, so the reactor is not able to terminate its execution.
import time
from twisted.internet import reactor, threads

def loop_forever():
    i = 0
    while True:
        yield i
        i += 1
        time.sleep(5)

def doWork():
    for i in loop_forever():
        print i

def main():
    threads.deferToThread(doWork)
    reactor.run()

main()
Note that this code:
def main():
    try:
        threads.deferToThread(doWork)
        reactor.run()
    except KeyboardInterrupt:
        print "user interrupted task"
does catch the exception on Windows, but not on Ubuntu.
Twisted uses Python's threading library to implement deferToThread. All of the rules that apply to Python threads apply to the threads you get with deferToThread. One rule is that signals and threads are a bad combination (Ctrl-C sends SIGINT on Linux).
The basic idea for solving this problem is to put some logic into doWork so that it will stop. Perhaps this means setting a global flag that it checks once per iteration. You can probably find lots of information elsewhere regarding strategies for getting a long-running thread to cooperate with shutdown.
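For example, here is a sketch of the question's code with a shared threading.Event as that flag, set from a Twisted shutdown hook:

import threading
from twisted.internet import reactor, threads

stop_requested = threading.Event()

def doWork():
    i = 0
    while not stop_requested.is_set():
        print(i)
        i += 1
        stop_requested.wait(5)      # replaces time.sleep(5), but wakes up early

def main():
    reactor.addSystemEventTrigger('before', 'shutdown', stop_requested.set)
    threads.deferToThread(doWork)
    reactor.run()

main()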
You may also want to not use deferToThread for this. If you expect your job to run for most of the lifetime of the process then you may just want to use the stdlib threading module directly. Consider that a job like this is using up one of the thread pool slots. If you have enough of these then your thread pool will be full and other work will not be able to proceed.
You may also want to take doWork out of the thread. It doesn't look like it does a lot of blocking. Instead, run doWork in the reactor thread and only run iterations of loop_forever with deferToThread. Now you no longer have a long-running operation in a thread and several of your problems will probably go away.
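A sketch of that arrangement, where the loop itself lives in the reactor thread and only the blocking sleep (standing in for the real blocking work) goes through deferToThread:

import time
from twisted.internet import reactor, threads

def one_iteration(i):
    # the only blocking part; runs in a thread from the reactor's pool
    time.sleep(5)
    return i + 1

def doWork(i=0):
    # runs in the reactor thread; schedules the next iteration only
    # after the previous one has finished in the pool
    print(i)
    threads.deferToThread(one_iteration, i).addCallback(doWork)

def main():
    reactor.callWhenRunning(doWork)
    reactor.run()

main()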
I have a python program which operates an external program and starts a timeout thread. Timeout thread should countdown for 10 minutes and if the script, which operates the external program isn't finished in that time, it should kill the external program.
My thread seems to work fine at first glance: my main script and the thread run simultaneously with no issues. But if a pop-up window appears in the external program, it stops my scripts, so that even the countdown thread stops counting, therefore totally failing its job.
I assume the issue is that the script calls a blocking function in the external program's API, which is blocked by the pop-up window. I understand why it blocks my main program, but I don't understand why it blocks my countdown thread. So one possible solution might be to run a separate script for the countdown, but I would like to keep it as clean as possible and it seems really messy to start a script just for this.
I have searched everywhere for a clue, but I didn't find much. There was a reference to the gevent library here:
background function in Python
, but it seems like such a basic task that I don't want to include an external library for it.
I also found a solution which uses a Windows multimedia timer here, but I've never worked with that before and am afraid the code won't be flexible with it. The script is Windows-only, but it should work on all Windows versions from XP on.
For Unix I found signal.alarm which seems to do exactly what I want, but it's not available for Windows. Any alternatives for this?
Any ideas on how to work with this in the most simplified manner?
This is the simplified thread I'm creating (run in IDLE to reproduce the issue):
import threading
import time

class timeToKill():
    def __init__(self, minutesBeforeTimeout):
        self.stop = threading.Event()
        self.countdownFrom = minutesBeforeTimeout * 60

    def startCountdown(self):
        self.countdownThread = threading.Thread(target=self.countdown, args=(self.countdownFrom,))
        self.countdownThread.start()

    def stopCountdown(self):
        self.stop.set()
        self.countdownThread.join()

    def countdown(self, seconds):
        for second in range(seconds):
            if self.stop.is_set():
                break
            else:
                print(second)
                time.sleep(1)

timeout = timeToKill(1)
timeout.startCountdown()
raw_input("Blocking call, waiting for input:\n")
One possible explanation for a function call blocking another Python thread is that CPython uses a global interpreter lock (GIL) and the blocking API call doesn't release it (note: CPython releases the GIL around blocking I/O calls, therefore your raw_input() example should work as-is).
If you can't make the buggy API call to release GIL then you could use a process instead of a thread e.g., multiprocessing.Process instead of threading.Thread (the API is the same). Different processes are not limited by GIL.
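For example, here is a sketch of the countdown from the question moved into a multiprocessing.Process; the parent's blocked API call can then no longer freeze it:

import time
import multiprocessing

def countdown(stop, seconds):
    # runs in its own process, so a call blocked in the parent cannot stop it
    for second in range(seconds):
        if stop.is_set():
            return
        print(second)
        time.sleep(1)
    # timeout reached: kill the external program here (placeholder)

if __name__ == '__main__':
    stop = multiprocessing.Event()
    worker = multiprocessing.Process(target=countdown, args=(stop, 60))
    worker.start()
    raw_input("Blocking call, waiting for input:\n")   # Python 2, as in the question
    stop.set()
    worker.join()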
For quick and dirty threading, I usually resort to subprocess commands. It is quite robust and OS-independent. It does not give as fine-grained control as the thread and queue modules, but for external calls to programs it generally does nicely. Note that shell=True must be used with caution.
import time
import subprocess

# this can be any command
p1 = subprocess.Popen(["python", "SUBSCRIPTS/TEST.py", "0"], shell=True)
# the process p1 will run in the background - asynchronously. If you want to
# kill it after some time, then you need to check on it and kill it yourself,
# as below.

# here do some other tasks/computations
time.sleep(10)

currentStatus = p1.poll()
if currentStatus is None:  # then it is still running
    try:
        p1.kill()  # maybe try os.kill(p1.pid, 2) if p1.kill does not work
    except:
        # do something else if process is done running - maybe do nothing?
        pass
I have made a home security program in Python that uses the Raspberry Pi's GPIOs to sense movement and actuate the siren. The users activate/deactivate the system by holding an NFC tag to an NFC reader, which is also connected to the Raspberry Pi.
For this I need to constantly check for NFC tags in a non-blocking manner and at the same time constantly check the sensors for movement, also non-blocking. I have some more parallel stuff to do, but I think these two are enough to make my point.
Right now I use threads which I start/stop like this - Stopping a thread after a certain amount of time -
I'm not sure if this is the optimal way but as of now the system works fine.
Now I want to extend its functionality to offer notifications through WebSockets. I found that this can be done with Twisted, but I am confused.
Here is an example code of how I am trying to do it:
from threading import Thread, Event

from twisted.internet import reactor
from autobahn.websocket import WebSocketServerFactory, \
                               WebSocketServerProtocol, \
                               listenWS

def thread1(stop_event):
    while not stop_event.is_set():
        stop_event.wait(4)
        print "checking sensor"
        # sensor_state = GPIO.input(11)
        if sensor_state == 1:
            # how can I call send_m("sensor detected movement") here? <---
            t1_stop_event.set()

t1_stop_event = Event()
t1 = Thread(target=thread1, args=(t1_stop_event,))

class EchoServerProtocol(WebSocketServerProtocol):
    def onMessage(self, msg, binary):
        print "received: " + msg
        print "stopping thread1"
        t1_stop_event.set()

    def send_m(self, msg):
        self.sendMessage(msg)

if __name__ == '__main__':
    t1.start()

    factory = WebSocketServerFactory("ws://localhost:9000")
    factory.protocol = EchoServerProtocol
    listenWS(factory)
    reactor.run()
So how can I call the send method of the server protocol from a thread like the thread1?
As is often the case, the answer to your question about threads and Twisted is "don't use threads".
The reason you're starting a thread here appears to be so you can repeatedly check a GPIO sensor. Does checking the sensor block? I'm guessing not, since if it's a GPIO it's locally available hardware and its results will be available immediately. But I'll give you the answer both ways.
The main thing you are using threads for here is to do something repeatedly. If you want to do something repeatedly in Twisted, there is never a reason to use threads :). Twisted includes a great API for recurring tasks: LoopingCall. Your example, re-written to use LoopingCall (again, assuming that the GPIO call does not block) would look like this:
from somewhere import GPIO
from twisted.internet import reactor, task
from autobahn.websocket import WebSocketServerFactory, \
                               WebSocketServerProtocol, \
                               listenWS

class EchoServerProtocol(WebSocketServerProtocol):
    def check_movement(self):
        print "checking sensor"
        sensor_state = GPIO.input(11)
        if sensor_state == 1:
            self.send_m("sensor detected movement")

    def connectionMade(self):
        WebSocketServerProtocol.connectionMade(self)
        self.movement_checker = task.LoopingCall(self.check_movement)
        self.movement_checker.start(4)

    def onMessage(self, msg, binary):
        self.movement_checker.stop()

    def send_m(self, msg):
        self.sendMessage(msg)

if __name__ == '__main__':
    factory = WebSocketServerFactory("ws://localhost:9000")
    factory.protocol = EchoServerProtocol
    listenWS(factory)
    reactor.run()
Of course, there is one case where you still need to use threads: if the GPIO checker (or whatever your recurring task is) needs to run in a thread because it is a potentially blocking operation in a library that can't be modified to make better use of Twisted, and you don't want to block the main loop.
In that case, you still want to use LoopingCall, and take advantage of another one of its features: if you return a Deferred from the function that LoopingCall is calling, then it won't call that function again until the Deferred fires. This means you can shuttle a task off to a thread and not worry about the main loop piling up queries for that thread: you can just resume the loop on the main thread automatically when the thread completes.
To give you a more concrete idea of what I mean, here's the check_movement function modified to work with a long-running blocking call that's run in a thread, instead of a quick polling call that can be run on the main loop:
def check_movement(self):
    from twisted.internet.threads import deferToThread

    def get_input():
        # this is run in a thread
        return GPIO.input(11)

    def check_input(sensor_state):
        # this is back on the main thread, and can safely call send_m
        if sensor_state == 1:
            self.send_m("sensor movement detected")

    return deferToThread(get_input).addCallback(check_input)
Everything else about the above example stays exactly the same.
There are a few factors at play in your example. Short answer: study this documentation on threads in Twisted.
While you don't have to use Twisted's reactor to use protocol classes (threading and protocol implementation are decoupled), you have called reactor.run, so I consider all of the below applicable to you.
Let Twisted create threads for you. Going outside the framework can get you in trouble. There are no "public" APIs for IPC messaging with the reactor (I think), so if you use Twisted, you pretty much need to go all the way.
By default, Twisted does not switch threads to call your callbacks. To delegate to a worker thread from the main reactor thread (i.e. to perform blocking I/O), you don't have to create a thread yourself, you use reactor.callInThread and it will run in a worker thread. If you never do this, everything runs in the main reactor thread, meaning for example any I/O operations will block the reactor thread and you can't receive any events until your I/O completes.
Code running in worker threads should use reactor.callFromThread to do anything that is not thread-safe. Provide a callback, which will run in the main reactor thread. You're better safe than sorry here, trust me.
All of the above applies to Deferred processing also. So don't be afraid to use partial(reactor.callFromThread, mycallback) or partial(reactor.callInThread, mycallback) instead of simply mycallback when setting up callbacks. I learned that the hard way; without that, I found that any blocking I/O that I might do in deferred callbacks was either erroring out (due to thread safety issues) or blocking the main thread.
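For example (do_blocking_io here is just a placeholder for a real blocking call):

import time
from twisted.internet import reactor

def do_blocking_io():
    # placeholder for a real blocking call (file, database, serial port, ...)
    time.sleep(2)
    return 'some result'

def handle_result(result):
    # runs back in the main reactor thread, so it is safe to touch
    # protocols, Deferreds and other non-thread-safe Twisted objects here
    print('got: %s' % result)

def worker():
    # runs in a pool thread, started with reactor.callInThread below
    result = do_blocking_io()
    reactor.callFromThread(handle_result, result)

reactor.callInThread(worker)
reactor.callLater(5, reactor.stop)      # only so this example terminates
reactor.run()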
If you're just starting out in Twisted, it's a bit of a "trust fall". Learn to let go of managing your own threads and passing messages via Queue objects, etc. Once you figure out how Deferred and the reactor work (it's called "Twisted" for a reason!), it will seem perfectly natural to you. Twisted does force you to decouple and separate concerns in a functional programming style, but once you're over that, I've found that it's very clean and works well.
One tip: I wrote some decorators to use on all my callback functions so that I didn't have to be constantly calling callInThread and callFromThread and setting up Deferred for exception handling callbacks throughout the code; my decorators enable that behavior for me. It's likely prevented bugs from forgetting to do that, and it's certainly made Twisted development more pleasant for me.
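Something along these lines (a sketch of the idea, not the original decorators):

from functools import wraps
from twisted.internet import reactor

def in_reactor_thread(f):
    # run the wrapped callback on the main reactor thread,
    # no matter which thread invokes it
    @wraps(f)
    def wrapper(*args, **kwargs):
        reactor.callFromThread(f, *args, **kwargs)
    return wrapper

def in_worker_thread(f):
    # push the wrapped (blocking) callback into the reactor's thread pool
    @wraps(f)
    def wrapper(*args, **kwargs):
        reactor.callInThread(f, *args, **kwargs)
    return wrapper

@in_worker_thread
def fetch_data():
    pass        # blocking I/O would go here

@in_reactor_thread
def update_protocol(result):
    pass        # safe to touch protocols/Deferreds here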