Twisted + qtreactor: How to clean up after the last window is closed?

I have a Twisted/PyQt application that (among other things) connects to a bunch of remote resources. When the user closes the window I want to shut down all the connections, cleanly if possible, forcibly if not.
The problem is that by the time I go to close the connections, it appears that the reactor is no longer alive to let me do so.
Here's my app code:
# Create app and connect the Twisted/Qt reactors
app = QApplication(sys.argv)
qtreactor.qt4reactor.install()

# Shutdown Twisted when window is closed
@defer.inlineCallbacks
def stop():
    print "="*40, "Closing connections..."
    yield closeConnections()
    print "="*40, "closed."
    print "="*40, "Stopping reactor..."
    reactor.stop()
    print "="*40, "stopped."

app.connect(app, SIGNAL("lastWindowClosed()"), stop)

reactor.runReturn()
rc = app.exec_()
exit(rc)
And here's a stripped down version of my cleanup code:
@defer.inlineCallbacks
def closeConnections():
    for connection in connections:
        print "Closing connection #%s" % connection
        yield threads.deferToThread(popen("/foo/bar/cleanup %s" % connection))
        print "Connection closed."
The first print statement is reached, and the command is executed, but I never get the second one, nor do I go through the for loop multiple times.
Is my analysis correct? Is the problem that the reactor is already down, so I never hear back from threads.deferToThread? Or is there some other problem? Furthermore, how do I fix it?
Thanks,
Jonathan

I don't know exactly when that lastWindowClosed() signal fires. However, even if it fires early enough, before the reactor has shut down (preventing you from doing what you want to do), I'm sure that PyQt doesn't know what to do with the Deferred that is returned by your stop function. This means that the shutdown process will proceed merrily onward while your asynchronous cleanup code tries to run. Likely the GUI shutdown will finish before your network shutdown gets anywhere.
So, use reactor.addSystemEventTrigger('before', 'shutdown', stop) instead. I don't know if this will run slightly earlier or slightly later than lastWindowClosed(), but it will run early enough that the reactor will still be usable, and it will pay attention to the Deferred your function returns. Shutdown will be suspended, in fact, until that Deferred fires. This gives you all the time you need to do your cleanup.
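For example, the shutdown hook might look something like this (a minimal sketch, assuming the same closeConnections, qtreactor setup, and PyQt signal API as in the question):

from twisted.internet import defer, reactor

@defer.inlineCallbacks
def stop():
    # Runs as a 'before shutdown' trigger: the reactor is still usable here,
    # and shutdown is paused until the Deferred returned by this function fires.
    yield closeConnections()

reactor.addSystemEventTrigger('before', 'shutdown', stop)

# The window-closed signal now only needs to stop the reactor; the trigger
# above takes care of the cleanup.
app.connect(app, SIGNAL("lastWindowClosed()"), reactor.stop)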
Separately from all that, you shouldn't do threads.deferToThread(popen("/foo/bar/cleanup %s" % connection)):
You need to pass a callable to deferToThread, not the result of calling the callable. As written, your code runs popen in the reactor thread and passes a file object to the thread to be called (which makes no sense, of course).
Mixing threads and child processes is iffy. You might get away with it most of the time, I dunno.
reactor.spawnProcess will let you run a child process without blocking, without threads, and without worrying about mixing threads and processes. See also twisted.internet.utils.getProcessOutput if you don't need all the features of spawnProcess (which you appear not to).
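As a rough sketch of that last suggestion (assuming /foo/bar/cleanup takes the connection identifier as its only argument), the cleanup loop could avoid threads entirely:

from twisted.internet import defer
from twisted.internet.utils import getProcessOutput

@defer.inlineCallbacks
def closeConnections():
    for connection in connections:
        print "Closing connection #%s" % connection
        # Run the external cleanup tool as a child process; the Deferred
        # fires with its stdout once the process exits.
        yield getProcessOutput("/foo/bar/cleanup", [str(connection)])
        print "Connection closed."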

Related

python function not running as thread

This is done in Python 2.7.12.
serialHelper is a class module around the Python serial library, and this code does work nicely:
#!/usr/bin/env python
import threading
from time import sleep
import serialHelper

sh = serialHelper.SerialHelper()

def serialGetter():
    h = 0
    while True:
        h = h + 1
        s_resp = sh.getResponse()
        print ('response ' + s_resp)
        sleep(3)

if __name__ == '__main__':
    try:
        t = threading.Thread(target=sh.serialReader)
        t.setDaemon(True)
        t.start()

        serialGetter()
        #tSR = threading.Thread(target=serialGetter)
        #tSR.setDaemon(True)
        #tSR.start()
    except Exception as e:
        print (e)
However, the attempt to run serialGetter as a thread (the commented-out lines) just dies.
Any reason why that function cannot run as a thread?
Quoting from the Python documentation:
The entire Python program exits when no alive non-daemon threads are left.
So if you setDaemon(True) every new thread and then exit the main thread (by falling off the end of the script), the whole program will exit immediately. This kills all of the threads. Either don't use setDaemon(True), or don't exit the main thread without first calling join() on all of the threads you want to wait for.
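Applied to the example above, that means something like this (a minimal sketch based on the question's code; since both loops run forever, the join() calls simply keep the main thread alive):

if __name__ == '__main__':
    t = threading.Thread(target=sh.serialReader)
    t.start()                 # no setDaemon(True): the program waits for this thread

    tSR = threading.Thread(target=serialGetter)
    tSR.start()

    t.join()                  # keep the main thread alive until the workers finish
    tSR.join()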
Stepping back for a moment, it may help to think about the intended use case of a daemon thread. In Unix, a daemon is a process that runs in the background and (typically) serves requests or performs operations, either on behalf of remote clients over the network or local processes. The same basic idea applies to daemon threads:
1. You launch the daemon thread with some kind of work queue.
2. When you need some work done on the thread, you hand it a work object.
3. When you want the result of that work, you use an event or a future to wait for it to complete.
4. After requesting some work, you always eventually wait for it to complete, or perhaps cancel it (if your worker protocol supports cancellation).
5. You don't have to clean up the daemon thread at program termination. It just quietly goes away when there are no other threads left.
The problem is step (4). If you forget about some work object, and exit the app without waiting for it to complete, the work may get interrupted. Daemon threads don't gracefully shut down, so you could leave the outside world in an inconsistent state (e.g. an incomplete database transaction, a file that never got closed, etc.). It's often better to use a regular thread, and replace step (5) with an explicit "Finish up your work and shut down" work object that the main thread hands to the worker thread before exiting. The worker thread then recognizes this object, stops waiting on the work queue, and terminates itself once it's no longer doing anything else. This is slightly more up-front work, but is much safer in the event that a work object is inadvertently abandoned.
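A minimal sketch of that pattern (the STOP sentinel and worker function are made up for illustration):

import threading
import Queue  # 'queue' on Python 3

STOP = object()                      # the "finish up and shut down" work object
work_queue = Queue.Queue()

def worker():
    while True:
        job = work_queue.get()
        if job is STOP:
            break                    # stop waiting on the queue and terminate
        job()                        # run an ordinary work object

t = threading.Thread(target=worker)  # a regular, non-daemon thread
t.start()

work_queue.put(lambda: None)         # hand the worker some work
work_queue.put(STOP)                 # ask it to finish up before we exit
t.join()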
Because of all of the above, I recommend not using daemon threads unless you have a strong reason for them.

Terminating blocking thread that has been deferredToThread

How can I modify this code (that uses twisted) so that CTRL+C will cause it to exit? I expect the problem is that doWork does not yield control back to the reactor, so the reactor is not able to terminate its execution.
def loop_forever():
    i = 0
    while True:
        yield i
        i += 1
        time.sleep(5)

def doWork():
    for i in loop_forever():
        print i

def main():
    threads.deferToThread(doWork)
    reactor.run()
Note that this code:
def main():
    try:
        threads.deferToThread(doWork)
        reactor.run()
    except KeyboardInterrupt:
        print "user interrupted task"
does catch the exception on Windows, but not on Ubuntu.
Twisted uses Python's threading library to implement deferToThread. All of the rules that apply to Python threads apply to the threads you get with deferToThread. One rule is that signals and threads are a bad combination (Ctrl-C sends SIGINT on Linux).
The basic idea for solving this problem is to put some logic into doWork so that it will stop. Perhaps this means setting a global flag that it checks once per iteration. You can probably find lots of information elsewhere regarding strategies for getting a long-running thread to cooperate with shutdown.
You may also want to not use deferToThread for this. If you expect your job to run for most of the lifetime of the process then you may just want to use the stdlib threading module directly. Consider that a job like this is using up one of the thread pool slots. If you have enough of these then your thread pool will be full and other work will not be able to proceed.
You may also want to take doWork out of the thread. It doesn't look like it does a lot of blocking. Instead, run doWork in the reactor thread and only run iterations of loop_forever with deferToThread. Now you no longer have a long-running operation in a thread and several of your problems will probably go away.
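A rough sketch of that last idea, keeping the loop on the reactor thread and pushing only each blocking iteration into the thread pool (the step function here is made up for illustration):

import time
from twisted.internet import reactor, threads

def step(i):
    # One blocking iteration, run in a worker thread.
    time.sleep(5)
    return i + 1

def doWork(i=0):
    # Runs on the reactor thread; Ctrl-C is handled normally because the
    # reactor thread never blocks for long.
    print i
    threads.deferToThread(step, i).addCallback(doWork)

def main():
    reactor.callWhenRunning(doWork)
    reactor.run()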

Call a twisted protocol method from another thread

I have made a home security program in Python that uses a Raspberry Pi's GPIOs to sense movement and actuate the siren. Users activate/deactivate the system with an NFC tag held against an NFC reader that is also connected to the Raspberry Pi.
For this I need to constantly check for NFC tags in a non-blocking manner and, at the same time, constantly check the sensors for movement, also non-blocking. I have more parallel tasks to do, but I think these two are enough to make my point.
Right now I use threads, which I start/stop as described in "Stopping a thread after a certain amount of time".
I'm not sure if this is the optimal way, but as of now the system works fine.
Now I want to extend its functionality to offer notifications through WebSockets. I found that this can be done with Twisted, but I am confused.
Here is an example code of how I am trying to do it:
from threading import Thread, Event

from twisted.internet import reactor
from autobahn.websocket import WebSocketServerFactory, \
                               WebSocketServerProtocol, \
                               listenWS

def thread1(stop_event):
    while not stop_event.is_set():
        stop_event.wait(4)
        print "checking sensor"
        # sensor_state = GPIO.input(11)
        if sensor_state == 1:
            # how can I call send_m("sensor detected movement") #<---
            t1_stop_event.set()

t1_stop_event = Event()
t1 = Thread(target=thread1, args=(t1_stop_event,))

class EchoServerProtocol(WebSocketServerProtocol):
    def onMessage(self, msg, binary):
        print "received: " + msg
        print "stopping thread1"
        t1_stop_event.set()

    def send_m(self, msg):
        self.sendMessage(msg)

if __name__ == '__main__':
    t1.start()
    factory = WebSocketServerFactory("ws://localhost:9000")
    factory.protocol = EchoServerProtocol
    listenWS(factory)
    reactor.run()
So how can I call the send method of the server protocol from a thread like the thread1?
As is often the case, the answer to your question about threads and Twisted is "don't use threads".
The reason you're starting a thread here appears to be so you can repeatedly check a GPIO sensor. Does checking the sensor block? I'm guessing not, since if it's a GPIO it's locally available hardware and its results will be available immediately. But I'll give you the answer both ways.
The main thing you are using threads for here is to do something repeatedly. If you want to do something repeatedly in Twisted, there is never a reason to use threads :). Twisted includes a great API for recurring tasks: LoopingCall. Your example, re-written to use LoopingCall (again, assuming that the GPIO call does not block) would look like this:
from somewhere import GPIO
from twisted.internet import reactor, task
from autobahn.websocket import WebSocketServerFactory, \
                               WebSocketServerProtocol, \
                               listenWS

class EchoServerProtocol(WebSocketServerProtocol):
    def check_movement(self):
        print "checking sensor"
        sensor_state = GPIO.input(11)
        if sensor_state == 1:
            self.send_m("sensor detected movement")

    def connectionMade(self):
        WebSocketServerProtocol.connectionMade(self)
        self.movement_checker = task.LoopingCall(self.check_movement)
        self.movement_checker.start(4)

    def onMessage(self, msg, binary):
        self.movement_checker.stop()

    def send_m(self, msg):
        self.sendMessage(msg)

if __name__ == '__main__':
    factory = WebSocketServerFactory("ws://localhost:9000")
    factory.protocol = EchoServerProtocol
    listenWS(factory)
    reactor.run()
Of course, there is one case where you still need to use threads: if the GPIO checker (or whatever your recurring task is) needs to run in a thread because it is a potentially blocking operation in a library that can't be modified to make better use of Twisted, and you don't want to block the main loop.
In that case, you still want to use LoopingCall, and take advantage of another one of its features: if you return a Deferred from the function that LoopingCall is calling, then it won't call that function again until the Deferred fires. This means you can shuttle a task off to a thread and not worry about the main loop piling up queries for that thread: you can just resume the loop on the main thread automatically when the thread completes.
To give you a more concrete idea of what I mean, here's the check_movement function modified to work with a long-running blocking call that's run in a thread, instead of a quick polling call that can be run on the main loop:
def check_movement(self):
    from twisted.internet.threads import deferToThread

    def get_input():
        # this is run in a thread
        return GPIO.input(11)

    def check_input(sensor_state):
        # this is back on the main thread, and can safely call send_m
        if sensor_state == 1:
            self.send_m("sensor movement detected")

    return deferToThread(get_input).addCallback(check_input)
Everything else about the above example stays exactly the same.
There are a few factors at play in your example. Short answer: study this documentation on threads in Twisted.
While you don't have to use Twisted's reactor to use protocol classes (threading and protocol implementation are decoupled), you have called reactor.run so all of the below I consider applicable to you.
Let Twisted create threads for you. Going outside the framework can get you in trouble. There are no "public" APIs for IPC messaging with the reactor (I think), so if you use Twisted, you pretty much need to go all the way.
By default, Twisted does not switch threads to call your callbacks. To delegate to a worker thread from the main reactor thread (i.e. to perform blocking I/O), you don't have to create a thread yourself, you use reactor.callInThread and it will run in a worker thread. If you never do this, everything runs in the main reactor thread, meaning for example any I/O operations will block the reactor thread and you can't receive any events until your I/O completes.
Code running in worker threads should use reactor.callFromThread to do anything that is not thread-safe. Provide a callback, which will run in the main reactor thread. You're better safe than sorry here, trust me.
All of the above applies to Deferred processing also. So don't be afraid to use partial(reactor.callFromThread, mycallback) or partial(reactor.callInThread, mycallback) instead of simply mycallback when setting up callbacks. I learned that the hard way; without that, I found that any blocking I/O that I might do in deferred callbacks was either erroring out (due to thread safety issues) or blocking the main thread.
If you're just starting out in Twisted, it's a bit of a "trust fall". Learn to let go of managing your own threads and passing messages via Queue objects, etc. Once you figure out how Deferred and the reactor work (it's called "Twisted" for a reason!), it will seem perfectly natural to you. Twisted does force you to decouple and separate concerns in a functional programming style, but once you're over that, I've found that it's very clean and works well.
One tip: I wrote some decorators to use on all my callback functions so that I didn't have to be constantly calling callInThread and callFromThread and setting up Deferred for exception handling callbacks throughout the code; my decorators enable that behavior for me. It's likely prevented bugs from forgetting to do that, and it's certainly made Twisted development more pleasant for me.
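Such a decorator is easy to sketch (this is a rough version of my own, not the exact decorators mentioned above): it marshals the wrapped callback onto the reactor thread so worker threads can call it safely.

from functools import wraps
from twisted.internet import reactor

def run_in_reactor_thread(f):
    # Hypothetical decorator: always invoke f on the main reactor thread.
    @wraps(f)
    def wrapper(*args, **kwargs):
        reactor.callFromThread(f, *args, **kwargs)
    return wrapper

@run_in_reactor_thread
def on_result(result):
    # Safe to touch protocols and transports here: we are on the reactor thread.
    print "got result:", result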

terminate/restart python script if all children are stopped and prevent other children from spawning

I have a socket server that uses threading to open a thread for each client that connects.
I also have two other threads that run constantly that are doing maintenance operations.
Basically there is the main thread plus two children running constantly, plus one child for each client that connects.
I want to be able to terminate or restart safely.
I would like to be able to trigger a termination function somehow that would instruct all child processes to terminate safely and then the parent could exit.
Any ideas?
Please do not suggest connecting as a client and sending a command that would trigger that; I have already thought of it.
I am looking for a way to do this by executing something in the console.
The Python socket server runs as a system service, and I would like to implement the termination in the init script.
The best way to do this is setup a signal handler in your main thread. This can be done using the signal module. See: http://docs.python.org/library/signal.html. A good way would be to trap the CTRL-C signal (SIGINT).
Please note that the signal handler can also be a class method, so you do not have to use a global method (it took me a while to discover that).
def __init__(self):
    signal.signal(signal.SIGINT, self.just_kill_me)

def just_kill_me(self, sig, frame):
    self.stopped = True
    for t in self.threads:
        t.join()
It is not possible to send the equivalent of a kill signal to a thread. Instead you should set a flag that will signal the children to stop.
Your child threads should run in a loop, periodically checking if the parent requests them to stop.
while not parent.stopped:
    do_some_maintenance_work()
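Putting the two pieces together, a minimal self-contained sketch (the maintenance loop body here is only a stand-in):

import signal
import threading
import time

class Server(object):
    def __init__(self):
        self.stopped = False
        self.threads = [threading.Thread(target=self.maintenance)
                        for _ in range(2)]
        signal.signal(signal.SIGINT, self.just_kill_me)  # trap Ctrl-C / kill -INT
        for t in self.threads:
            t.start()

    def maintenance(self):
        while not self.stopped:
            time.sleep(1)            # stand-in for the real maintenance work

    def just_kill_me(self, sig, frame):
        self.stopped = True          # ask the children to stop
        for t in self.threads:
            t.join()                 # wait for them, then the parent can exit

The init script can then stop the service by sending SIGINT to the server's PID (kill -INT).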

Gracefully Terminating Python Threads

I am trying to write a Unix client program that is listening to a socket, stdin, and reading from file descriptors. I assign each of these tasks to an individual thread and have them successfully communicating with the "main" application using synchronized queues and a semaphore. The problem is that when I want to shut down these child threads, they are all blocking on input. Also, the threads cannot register signal handlers, because in Python only the main thread of execution is allowed to do so.
Any suggestions?
There is no good way to work around this, especially when the thread is blocking.
I had a similar issue (Python: How to terminate a blocking thread), and the only way I was able to stop my threads was to close the underlying connection, which caused the blocking thread to raise an exception and then allowed me to check the stop flag and close.
Example code:
class Example(object):
    def __init__(self):
        self.stop = threading.Event()
        self.connection = Connection()
        self.mythread = Thread(target=self.dowork)
        self.mythread.start()

    def dowork(self):
        while not self.stop.is_set():
            try:
                blockingcall()
            except CommunicationException:
                pass

    def terminate(self):
        self.stop.set()
        self.connection.close()
        self.mythread.join()
Another thing to note is that blocking operations commonly offer a timeout. If you have that option, I would consider using it. My last comment is that you could always set the thread to be daemonic.
From the pydoc:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
Also, the threads cannot register signal handlers
Using signals to kill threads is potentially horrible, especially in C, especially if you allocate memory as part of the thread, since it won't be freed when that particular thread dies (it belongs to the heap of the process). There is no garbage collection in C, so if that pointer goes out of scope, it's gone, and the memory remains allocated. So just be careful with that one: only do it that way in C if you're going to actually kill all the threads and end the process so that the memory is handed back to the OS. Adding and removing threads from a thread pool that way, for example, will give you a memory leak.
The problem is that when I want to shutdown these child threads they are all blocking on input.
Funnily enough I've been fighting with the same thing recently. The solution is literally don't make blocking calls without a timeout. So, for example, what you want ideally is:
def threadfunc(running):
    while running:
        blockingcall(timeout=1)
where running is passed from the controlling thread. I've never used threading, but I have used multiprocessing, and with that you actually need to pass an Event() object and check is_set(). But you asked for design patterns; that's the basic idea.
Then, when you want this thread to end, you run:
running.clear()
mythread.join()
and your main thread should then allow your client thread to handle its last call, and return, and the whole program folds up nicely.
What do you do if you have a blocking call without a timeout? Use the asynchronous option, and sleep (as in call whatever method you have to suspend the thread for a period of time so you're not spinning) if you need to. There's no other way around it.
See these answers:
Python SocketServer
How to exit a multithreaded program?
Basically, don't block on recv() by using select() with a timeout to check for readability of the socket, and poll a quit flag when select() times out.
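That pattern looks roughly like this (a minimal sketch; handle_data and the quit_flag Event are made up for illustration):

import select

def socket_reader(sock, quit_flag, handle_data):
    while not quit_flag.is_set():
        # Wait up to one second for the socket to become readable,
        # then loop around and re-check the quit flag.
        readable, _, _ = select.select([sock], [], [], 1.0)
        if readable:
            data = sock.recv(4096)
            if not data:
                break                # peer closed the connection
            handle_data(data)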
