How to catch exceptions in twisted? - python

I'm running a pretty simple server in Python using Twisted. When I try to run two at the same time, this exception occurs:
twisted.internet.error.CannotListenError: Couldn't listen on 127.0.0.1:5050: [Errno 98] Address already in use.
It makes a lot of sense. How can I catch this exception?
I'd simply like to terminate the reactor and shut everything down if an existing server is running. Otherwise, I get the exception and it just hangs indefinitely until I kill the process.

You need to use an error handler callback, an "errback" in Twisted lingo. You can add one to a Deferred using the addErrback method.
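For illustration, here is a minimal sketch of that suggestion, assuming the server is built with an endpoint (Echo, EchoFactory, and the port are placeholder names): endpoint.listen() returns a Deferred, so a CannotListenError arrives as a failure that an errback can turn into a clean shutdown.

from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ServerEndpoint
from twisted.internet.protocol import Factory, Protocol

class Echo(Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

class EchoFactory(Factory):
    protocol = Echo

def listenFailed(failure):
    # Most likely CannotListenError: another process already owns the port.
    print("Couldn't listen:", failure.getErrorMessage())
    reactor.stop()

def start():
    endpoint = TCP4ServerEndpoint(reactor, 5050, interface="127.0.0.1")
    endpoint.listen(EchoFactory()).addErrback(listenFailed)

reactor.callWhenRunning(start)
reactor.run()

(If you call reactor.listenTCP directly instead of using an endpoint, CannotListenError is raised synchronously, so an ordinary try/except around that call works too.)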

Related

Python3 -- breaking out of a catch-all try block

I've just rewritten something akin to a basic python server
( https://docs.python.org/3/library/socketserver.html ) because I thought I needed to.
My question is, did I?
What I wanted to do is break out of the handler and out of the server loop if a certain request is received (a stop-the-server request, if you will).
Originally, I tried to break out of the server loop by raising an exception, but it turns out the socketserver handlers are run inside a catch-all try/except block, which means exceptions raised inside a handler never propagate past the function that invokes the handler (the one with the catch-all except).
So does Python have a longjmp-style mechanism that can pierce a catch-all try/except block, or could I run the serve_forever() loop inside a thread and then, from the handler, do something like Thread.current.kill() (and how would I do that)?
As far as I know, there is no way to skip stack frames when you raise an exception.
But if you really need this functionality, you can find other ways for one part of your code to send messages to another part. If both the handler and server are running in the same interpreter instance (i.e. not in separate threads), you can have the handler change some variable accessible to the main server loop, which the server loop checks for. If you're in different interpreters, you could have the handler write to a log file that the server loop watches. The log file idea is kind of hackish, but logging is a good thing to have for servers anyway.
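For the single-interpreter case, here is a minimal sketch of that shared-variable idea (ControlHandler, the port, and the line-based "stop" command are all made up for illustration): the handler sets a flag on the server object, and the main loop checks it between requests instead of calling serve_forever().

import socketserver

class ControlHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().strip()
        if line == b"stop":
            # Don't raise here: BaseServer catches handler exceptions and
            # keeps serving.  Set a flag the outer loop can see instead.
            self.server.stop_requested = True
            self.wfile.write(b"bye\n")
        else:
            self.wfile.write(b"ok\n")

with socketserver.TCPServer(("127.0.0.1", 9000), ControlHandler) as server:
    server.stop_requested = False
    while not server.stop_requested:
        server.handle_request()   # one request per iteration, no serve_forever()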

Twisted multi-thread, signal handling

I wrote a Twisted program that handles requests from TCP sockets and raw sockets.
Since Twisted doesn't support raw sockets, I wrote the raw-socket select poll loop in a function named 'raw_socket_loop'. The main reactor program creates a separate thread to run this loop with the reactor.callInThread() function.
My problem is that when I press Ctrl-C in the console, the reactor does not stop. I think the reactor's main thread receives the signal and handles it fine, but the spawned thread never sees it. Is there a graceful shutdown approach for a multi-threaded reactor program?
Thanks a lot,
Threads aren't interruptible. You have to build a mechanism into the code running in the thread so it can receive a shutdown notification and exit in response.
If you're using select(2) in the thread, then you can use the self-pipe trick (which is how Twisted itself does this for its own thread-control needs).
However, if you're using select(2) in a thread, then maybe you should consider not using a thread and instead implementing IFileDescriptor and using it with the reactor's IReactorFDSet implementation to get readiness events on it. This way you avoid threads, you let the reactor actually implement the event loop, and you still get your raw sockets.
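A rough sketch of that second suggestion, assuming a raw ICMP socket (RawSocketReader is a hypothetical name): implementing IReadDescriptor and handing the object to reactor.addReader lets the reactor's IReactorFDSet machinery deliver readiness events, with no extra thread and no hand-written select() loop.

import socket
from zope.interface import implementer
from twisted.internet import reactor
from twisted.internet.interfaces import IReadDescriptor

@implementer(IReadDescriptor)
class RawSocketReader:
    def __init__(self, sock):
        self._sock = sock
        self._sock.setblocking(False)

    def fileno(self):
        return self._sock.fileno()

    def doRead(self):
        data = self._sock.recv(65535)
        # ... process the raw packet here ...

    def connectionLost(self, reason):
        self._sock.close()

    def logPrefix(self):
        return "RawSocketReader"

# Creating a raw socket requires root privileges:
# rawSock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
# reactor.addReader(RawSocketReader(rawSock))
# reactor.run()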

How to properly manage connectionLost in twisted

I have written the following piece of code to handle a lost connection in Twisted:

from twisted.internet import reactor
from twisted.internet.error import ConnectionLost
from twisted.protocols.basic import LineReceiver

class Foo(LineReceiver):
    def connectionLost(self, reason):
        if reason.type != ConnectionLost:
            reactor.stop()

    def terminate(self):
        self.transport.loseConnection()
The terminate method is called by some input/output protocol.
I had to test reason.type in the connectionLost method to avoid the error
"Can't stop reactor that isn't running" when I interrupt my program with Ctrl-C instead
of calling the terminate method.
This code works, but I wonder if there is a more elegant way of managing the end of a connection in Twisted?
Thanks!
The problem you're facing is that control+C is invoking an already-installed signal handler that stops the reactor. The reactor, while stopping, invokes your connectionLost method, because shutting down the reactor automatically closes all connections. In turn, your protocol tries to stop the reactor - only to find it's already been stopped!
A simple (but not entirely correct) way to avoid this condition is to look at the running attribute on the reactor before stopping it, like so:
def connectionLost(self, reason):
    if reactor.running:
        reactor.stop()
(Note also that you probably shouldn't compare the exception type that way; use Failure.check instead if you actually care about the type of the reason. In most cases, though, you shouldn't care about the type at all: a broken connection is just a broken connection, unless you are trying to ascertain some very specific attribute of its brokenness, such as whether the termination should cause an SSL session termination.)
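For reference, a small illustrative sketch of that Failure.check style, mirroring the question's class:

from twisted.internet.error import ConnectionDone
from twisted.protocols.basic import LineReceiver

class Foo(LineReceiver):
    def connectionLost(self, reason):
        if reason.check(ConnectionDone):
            # The connection closed cleanly; anything else was an error.
            pass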
If you want to be more thorough about detecting that the reactor is already shutting down, you will need to monitor the reactor's state via a ("before", "shutdown") system event trigger, or use the twisted.internet.task.react API introduced in Twisted 12.3.
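A minimal sketch of the task.react route (the Deferred wiring below is only a placeholder): react() runs the reactor, waits for the Deferred returned by main, and then shuts everything down itself, so the protocol never has to call reactor.stop() at all.

from twisted.internet import task, defer

def main(reactor):
    done = defer.Deferred()
    # ... set up connections here and arrange for `done` to fire (for
    # example from connectionLost) when the protocol's work is finished ...
    reactor.callLater(1, done.callback, None)   # placeholder so the sketch runs
    return done

task.react(main, [])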

twisted: catch keyboardinterrupt and shutdown properly

UPDATE: For ease of reading, here is how to add a callback before the reactor gets shutdown:
reactor.addSystemEventTrigger('before', 'shutdown', callable)
Original question follows.
If I have a client connected to a server, and it's chilling in the reactor main loop waiting for events, when I hit CTRL-C, I get a "Connection to the other side was lost in a non-clean fashion: Connection lost." How can I set it up so that I know when a KeyboardInterrupt happens, so that I can do proper clean-up and disconnect cleanly? Or how can I implement a cleaner way to shutdown that doesn't involve CTRL-C, if possible?
If you really, really want to catch C-c specifically, then you can do this in the usual way for a Python application - use signal.signal to install a handler for SIGINT that does whatever you want to do. If you invoke any Twisted APIs from the handler, make sure you use reactor.callFromThread since almost all other Twisted APIs are unsafe for invocation from signal handlers.
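As a purely illustrative sketch of that route (assuming you are willing to forgo the reactor's default signal handling): install your own SIGINT handler, run the reactor with installSignalHandlers=False so it doesn't replace your handler, and hop onto the reactor thread with callFromThread before touching any Twisted API.

import signal
from twisted.internet import reactor

def onInterrupt(signum, frame):
    # Signal handlers run outside the reactor's normal flow, so hand the
    # real work to the reactor thread.
    reactor.callFromThread(reactor.stop)

signal.signal(signal.SIGINT, onInterrupt)
reactor.run(installSignalHandlers=False)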
However, if you're really just interested in inserting some shutdown-time cleanup code, then you probably want to use IService.stopService (or the mechanism in terms of which it is implemented, reactor.addSystemEventTrigger) instead.
If you're using twistd, then using IService.stopService is easy. You already have an Application object with at least one service attached to it. You can add another one with a custom stopService method that does your shutdown work. The method is allowed to return a Deferred. If it does, then the shutdown process is paused until that Deferred fires. This lets you clean up your connections nicely, even if that involves some more network (or any other asynchronous) operations.
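A rough sketch of that Service approach (CleanupService, self.connections, and gracefulClose() are all hypothetical names): stopService returns a Deferred, and shutdown waits for it to fire before the process exits.

from twisted.application import service
from twisted.internet import defer

class CleanupService(service.Service):
    def __init__(self, connections):
        # e.g. a list of live protocol instances maintained elsewhere
        self.connections = connections

    def stopService(self):
        service.Service.stopService(self)
        # Each gracefulClose() is assumed to return a Deferred that fires
        # once the connection is really gone; shutdown waits for all of them.
        return defer.gatherResults(
            [p.gracefulClose() for p in self.connections],
            consumeErrors=True)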
If you're not using twistd, then using reactor.addSystemEventTrigger directly is probably easier. You can install a before shutdown trigger which will get called in the same circumstance IService.stopService would have been called. This trigger (just any callable object) can also return a Deferred to delay shutdown. This is done with a call to reactor.addSystemEventTrigger('before', 'shutdown', callable) (sometime before shutdown is initiated, so that it's already registered whenever shutdown does happen).
service.tac gives an example of creating and using a custom service.
wxacceptance.py gives an example of using addSystemEventTrigger and delaying shutdown by (an arbitrary) three seconds.
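Along the same lines, a minimal runnable sketch of a "before shutdown" trigger that delays shutdown, mirroring the arbitrary three-second delay in wxacceptance.py:

from twisted.internet import reactor, task

def beforeShutdown():
    print("cleaning up before shutdown...")
    # Returning a Deferred pauses shutdown until it fires; here it's just a
    # stand-in three-second delay.
    return task.deferLater(reactor, 3, lambda: None)

reactor.addSystemEventTrigger('before', 'shutdown', beforeShutdown)
reactor.callLater(10, reactor.stop)   # or press Ctrl-C sooner
reactor.run()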
Both of these mechanisms will give you notification whenever the reactor is stopping. This may be due to a C-c keystroke, or it may be because someone used kill -INT ..., or it may be because somewhere reactor.stop() was called. They all lead to reactor shutdown, and reactor shutdown always processes shutdown event triggers.
I'm not sure whether you're talking about a client or a server that you've written.
Anyway, there's nothing wrong with Ctrl-C.
If you're writing a server as an Application, subclass twisted.application.service.Service and define startService and stopService. Maintain a list of active protocol instances, and use stopService to go through them and close them gracefully.
If you've got a client, you could also subclass Service, but it could be simpler to use reactor.addSystemEventTrigger('before','shutdown',myCleanUpFunction), and close connection(s) gracefully in this function.

too many threads due to synch communication

I'm using threads and xmlrpclib in Python at the same time. Periodically, I create a bunch of threads to complete a service on a remote server via xmlrpclib. The problem is that there are times when the remote server doesn't answer. This causes a thread to wait forever for a response it never gets. Over time, the number of threads in this state increases and eventually reaches the maximum number of threads allowed on the system (I'm using Fedora).
I tried socket.setdefaulttimeout(10), but the exception it raises causes the server to become defunct. I also tried it on the server side, but it doesn't seem to work. :/
Any idea how I can handle this issue?
You are doing what I usually call (originally in Spanish xD) "happy road programming". You should implement your programs to handle undesired cases, not only the ones you want to happen.
The threads here are only showing an underlying mistake: your server can't handle a timeout, and the implementation is rigid in a way that adding a timeout causes the server to crash due to an unhandled exception.
Implement it more robustly: it must be able to withstand an exception; servers can't die because of a misbehaving client. If you don't fix this kind of problem now, you may have similar issues later on.
It seems like your real problem is that the server hangs on certain requests, and dies if the client closes the socket - the threads are just a side effect of the implementation. If I'm understanding what you're saying correctly, then the only way to fix this would be to fix the server to respond to all requests, or to be more robust with network failure, or (preferably) both.
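A rough sketch of the defensive handling being recommended (the URL, method name, and call_remote are placeholders; xmlrpc.client is the Python 3 spelling of xmlrpclib): give each call a timeout and handle the resulting exceptions, so a worker thread can neither hang forever nor take the process down.

import socket
import xmlrpc.client   # xmlrpclib on Python 2

def call_remote(arg, timeout=10):
    # Note: setdefaulttimeout is process-wide; it affects every socket created
    # afterwards, which is why applying it inside the server caused trouble.
    socket.setdefaulttimeout(timeout)
    proxy = xmlrpc.client.ServerProxy("http://example.com:8000/")
    try:
        return proxy.some_method(arg)
    except (socket.timeout, OSError, xmlrpc.client.Fault) as exc:
        # Handle the failure instead of letting the worker thread hang or
        # die with an unhandled exception.
        print("remote call failed:", exc)
        return None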
