I know about this Python bug, which prevents sys.excepthook from being used inside a thread. A recommended workaround is to wrap the thread's run method in try/except. However, since I am using Django, I am already locked into Django's main thread.
So: Is there any way to globally catch exceptions in Django?
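For reference, the workaround mentioned above looks roughly like this (a minimal sketch; the MyThread class and the traceback call are illustrative, not part of any existing code):

import threading
import traceback

class MyThread(threading.Thread):
    def run(self):
        try:
            # The thread's real work goes here (or delegate to super().run(),
            # which calls the target passed to the constructor).
            super().run()
        except Exception:
            # Do here whatever you wanted sys.excepthook to do.
            traceback.print_exc()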
Related
I am embedding Python in a multi-threaded C++ application, is it safe to call
Py_Initialize() in multiple threads? Or should I call it in the main thread?
The Py_Initialize() code contains:
if (initialized)
return;
initialized = 1;
The documentation for the function also says:
https://docs.python.org/2/c-api/init.html#c.Py_Initialize
This is a no-op when called for a second time (without calling Py_Finalize() first).
My recommendation, though, is to only call it from the main thread, although depending on what you are doing, it can get complicated.
The problem is that signal handlers are only triggered in the context of the main Python thread, that is, whatever thread was the one to call Py_Initialize(). So if that is a transient thread that is used only once and then discarded, there is no chance for signal handlers to ever be called. So you have to give some thought to how you handle signals.
Also be careful about creating lots of transient threads in C code using the native thread API and calling into the Python interpreter, as each one creates data in the Python interpreter. That data will accumulate if you keep creating and discarding these external threads. If you are calling in from external threads, you should use a thread pool instead, and keep reusing prior threads.
The default handler for SIGINT raises KeyboardInterrupt. However, if the program is inside a __del__ method (because of ongoing garbage collection), the exception is ignored and the following message is printed to stderr:
Exception KeyboardInterrupt in <...> ignored
As a result, the program continues to run despite receiving SIGINT. Of course, I can define my own handler for SIGINT that sets a global variable sigint_received to True, and then check the value of that variable frequently throughout my program. But this looks ugly.
Is there an elegant and reliable way to make sure that the python program gets interrupted after receiving SIGINT?
Before I dive into my solution, I want to highlight the scary red "Warning:" sidebar in the docs for object.__del__ (emphasis mine):
Due to the precarious circumstances under which __del__() methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to sys.stderr instead. [...] __del__() methods should do the absolute minimum needed to maintain external invariants.
This suggests to me that any __del__ method that's at serious risk of being interrupted by an interactive user's Ctrl-C might be doing too much. So my first suggestion would be to look for ways to minimize your __del__ method, whatever it is.
Or to put it another way: If your __del__ method really does do "the absolute minimum needed", then how can it be safe to kill the process half-way through?
Custom Signal Handler
The only solution I could find was indeed a custom signal handler for signal.SIGINT... but a lot of the obvious tricks didn't work:
Failed: sys.exit
Calling sys.exit from the signal handler just raised a SystemExit exception, which was ignored. Python's C API docs suggest that it is impossible for the Python interpreter to raise any exception during a __del__ method:
void PyErr_WriteUnraisable(PyObject *obj)
[Called when...] it is impossible for the interpreter to actually raise the exception [...] for example, when an exception occurs in an __del__() method.
Partial Success: Flag Variable
Your idea of setting a global "drop dead" variable inside the signal handler worked only partially: although it updated the variable, nothing got a chance to read that variable until after the __del__ method returned. So for several seconds, the Ctrl-C appeared to have done nothing.
This might be good enough if you just want to terminate the process "eventually", since it will exit whenever the __del__ method returns. But since you probably want to shut down the process without waiting (both SIGINT and KeyboardInterrupt typically come from an impatient user), this won't do.
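For completeness, the flag-variable handler looks roughly like this (a minimal sketch; the variable and function names are made up):

import signal

SIGINT_RECEIVED = False

def _set_flag(signum, frame):
    global SIGINT_RECEIVED
    SIGINT_RECEIVED = True   # nothing reads this until __del__ returns

signal.signal(signal.SIGINT, _set_flag)

# Elsewhere, code that only runs once the long __del__ has finished:
# if SIGINT_RECEIVED:
#     shut_down_cleanly()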
Success: os.kill
Since I couldn't find a way to convince the Python interpreter to kill itself, my solution was to have the (much more persuasive) operating system do it for me. This signal handler uses os.kill to send a stronger SIGTERM to its own process ID, causing the Python interpreter itself to exit.
import os
import signal

def _sigterm_this_process(signum, frame):
    # Ask the OS to terminate this very process; unlike a Python-level
    # exception, SIGTERM is not swallowed by a running __del__ method.
    pid = os.getpid()
    os.kill(pid, signal.SIGTERM)

# Elsewhere...
signal.signal(signal.SIGINT, _sigterm_this_process)
Once the custom signal handler was set, Ctrl-C caused the __del__ method (and the entire program) to exit immediately.
I've just rewritten something akin to a basic Python server (https://docs.python.org/3/library/socketserver.html) because I thought I needed to.
My question is, did I?
What I wanted to do was break out of the handler and out of the server loop when a certain request is received (a stop-the-server request, if you will).
Originally, I tried to break out of the server loop by throwing an exception, but it turns out the socketserver handlers are run inside a catch-all try/except block, which means exceptions raised inside a handler never propagate beyond the function that invokes the handler (the one with the catch-all except clause).
So: does Python have a longjmp-like mechanism that can pierce a catch-all try/except block, or could I run the serve_forever loop inside a thread and then, from the handler, do something like Thread.current.kill() (and how would I do that)?
As far as I know, there is no way to skip stack frames when you raise an exception.
But if you really need this functionality, you can find other ways for one part of your code to send messages to another part. If both the handler and the server loop are running in the same interpreter instance (i.e. not in separate processes), you can have the handler set some variable accessible to the main server loop, which the server loop checks. If you're in different interpreters, you could have the handler write to a log file that the server loop watches. The log file idea is kind of hackish, but logging is a good thing to have for servers anyway.
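As a concrete illustration of the shared-variable idea, here is a minimal sketch using the standard socketserver module: the handler asks the server to stop when it sees a made-up STOP request (the class name, port, and STOP token are illustrative). Note that server.shutdown() just flips an internal flag that serve_forever() checks, but it must be called from another thread, otherwise it deadlocks while handle() is still running:

import socketserver
import threading

class StoppableHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().strip()
        if line == b"STOP":
            # Request shutdown from a separate thread to avoid deadlocking
            # the thread that is currently running serve_forever().
            threading.Thread(target=self.server.shutdown).start()
        else:
            self.wfile.write(b"OK\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("localhost", 9999), StoppableHandler) as server:
        server.serve_forever()  # returns after shutdown() has been requested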
I'm writing a multi-threaded application that uses QThreads. I know that, in order to start a thread, I need to override the run() method and then call thread.start() somewhere (in my case, in my GUI thread).
I was wondering, however: is it required to call the .wait() method anywhere, and am I supposed to call .quit() once the thread finishes, or is that done automatically?
I am using PySide.
Thanks
Both answers depend on what your code is doing and what you expect from the thread.
If the logic that uses the thread needs to wait synchronously for the moment the QThread finishes, then yes, you need to call wait(). However, such a requirement is a sign of a sloppy threading model, except in very specific situations like application startup and shutdown. Use of QThread::wait() suggests creeping sequential operation, which means that you are effectively not using threads concurrently.
quit() exits the QThread's internal event loop, which is not mandatory to use. A long-running thread (as opposed to a one-task worker) must have an event loop of some sort; this is a generic statement, not specific to QThread. You either provide it yourself (in the form of some while (keepRunning) { ... } loop) or use the Qt-provided event loop, which you start by calling exec() in your run() method. The former implementation is finishable by you, because you provided the keepRunning condition. The Qt-provided implementation is hidden from you, and that is where quit() comes in: internally it does nothing more than set a similar flag inside Qt.
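To make the two styles concrete, here is a minimal sketch (PySide2 import shown; for the original PySide it would be from PySide.QtCore import QThread). The class names and the sleep interval are illustrative:

from PySide2.QtCore import QThread

class PollingThread(QThread):
    """Long-running thread with a hand-rolled loop; stopped via a flag."""
    def __init__(self, parent=None):
        super().__init__(parent)
        self._keep_running = True

    def run(self):
        while self._keep_running:
            QThread.msleep(100)   # periodic work would go here

    def stop(self):
        self._keep_running = False

class EventLoopThread(QThread):
    """Thread driven by Qt's event loop; stopped with quit()."""
    def run(self):
        self.exec_()              # blocks until quit() is called

# Typical shutdown from the GUI thread:
#   polling_thread.stop()      or      event_loop_thread.quit()
#   thread.wait()  # only if you really must block until run() has returned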
I am building a django application which depends on a python module where a SIGINT signal handler has been implemented.
Assuming I cannot change the module I depend on, how can I work around the "signal only works in main thread" error I get when integrating it into Django?
Can I run it on the Django main thread?
Is there a way to inhibit the handler so that the module can run on non-main threads?
Thanks!
Django's built-in development server has its auto-reload feature enabled by default, which spawns a new thread as a means of reloading code. To work around this you can simply do the following, although you'd obviously lose the convenience of auto-reloading:
python manage.py runserver --noreload
You'll also need to be mindful of this when choosing your production setup. At least some of the deployment options (such as threaded FastCGI) are certain to execute your code outside the main thread.
I use Python 3.5 and Django 1.8.5 in my project, and I ran into a similar problem recently. I can easily run my xxx.py code with its signal handling directly, but it can't be executed under Django as a package, precisely because of the error "signal only works in main thread".
Firstly, running the server with --noreload --nothreading works, but it runs my multi-threaded code too slowly for me.
Secondly, I found that code in the __init__.py of my package does run in the main thread. But since only the main thread can catch the signal, my code elsewhere in the package still can't catch it at all. That didn't solve my problem, although it may be a solution for you.
Finally, I found that Python has a built-in module named subprocess. It lets you run a real, complete child process, which is to say a process with its own main thread, so you can run your signal-handling code there easily. Though I don't know the performance cost of doing this, it works well for me. PS: you can find all the details about subprocess in the Python documentation.
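For illustration, here is a minimal sketch of the subprocess approach (the script name run_signal_task.py and its argument are made up):

import subprocess
import sys

def run_in_own_process(arg):
    # The child process has its own main thread, so code in
    # run_signal_task.py can register signal handlers freely.
    result = subprocess.run(
        [sys.executable, "run_signal_task.py", arg],
        stdout=subprocess.PIPE,
        universal_newlines=True,
        check=True,
    )
    return result.stdout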
Thank you~
There is a cleaner way, that doesn't break your ability to use threads and processes.
Put your registration calls in manage.py:
import os
import signal
import sys
import threading

def handleKill(signum, frame):
    print("Killing Thread.")
    # Or whatever code you want here; ForceTerminate is an
    # application-specific flag holder, not part of Django.
    ForceTerminate.FORCE_TERMINATE = True
    print(threading.active_count())
    sys.exit(0)

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    from django.core.management import execute_from_command_line
    signal.signal(signal.SIGINT, handleKill)
    signal.signal(signal.SIGTERM, handleKill)
    execute_from_command_line(sys.argv)
Although the question does not describe exactly the situation you are in, here is some more generic advice:
Python only executes signal handlers in the main thread. For this reason, the signal handler should be installed in the main thread.
From that point on, the action that the signal triggers, needs to be communicated to the other threads. I usually do this using Events. The signal handler sets the event, which the other threads will read, and then realize that action X has been triggered. Obviously this implies that the event attribute should be shared among the threads.
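A minimal sketch of that Event pattern (the handler name, worker function, and timeout are illustrative):

import signal
import threading

stop_event = threading.Event()

def handle_sigint(signum, frame):
    # Runs in the main thread; just record that shutdown was requested.
    stop_event.set()

signal.signal(signal.SIGINT, handle_sigint)

def worker():
    while not stop_event.is_set():
        stop_event.wait(timeout=1.0)   # stand-in for a unit of real work
    print("worker: stop requested, shutting down")

t = threading.Thread(target=worker)
t.start()
t.join()   # Ctrl-C sets the event; the worker notices it and exits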