The default handler for SIGINT raises KeyboardInterrupt. However, if the program is inside a __del__ method (because of ongoing garbage collection), the exception is ignored and the following message is printed to stderr:
Exception KeyboardInterrupt in <...> ignored
As a result, the program keeps running despite receiving SIGINT. Of course, I can define my own handler for SIGINT that sets a global variable sigint_received to True, and then check the value of that variable frequently throughout my program. But this looks ugly.
Is there an elegant and reliable way to make sure that the python program gets interrupted after receiving SIGINT?
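Here is a minimal sketch of the kind of situation I mean (the Slow class and the sleep are purely illustrative):
import time

class Slow(object):
    def __del__(self):
        # Press Ctrl-C while this sleep is running: instead of the program
        # terminating, "Exception KeyboardInterrupt in ... ignored" is
        # printed and execution continues after __del__.
        time.sleep(10)

obj = Slow()
del obj   # __del__ runs here
print("still alive after Ctrl-C")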
Before I dive into my solution, I want to highlight the scary red "Warning:" sidebar in the docs for object.__del__ (emphasis mine):
Due to the precarious circumstances under which __del__() methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to sys.stderr instead. [...] __del__() methods should do the absolute minimum needed to maintain external invariants.
This suggests to me that any __del__ method that's at serious risk of being interrupted by an interactive user's Ctrl-C might be doing too much. So my first suggestion would be to look for ways to minimize your __del__ method, whatever it is.
Or to put it another way: If your __del__ method really does do "the absolute minimum needed", then how can it be safe to kill the process half-way through?
Custom Signal Handler
The only solution I could find was indeed a custom signal handler for signal.SIGINT... but a lot of the obvious tricks didn't work:
Failed: sys.exit
Calling sys.exit from the signal handler just raised a SystemExit exception, which was ignored. Python's C API docs suggest that it is impossible for the Python interpreter to raise any exception during a __del__ method:
void PyErr_WriteUnraisable(PyObject *obj)
[Called when...] it is impossible for the interpreter to actually raise the exception [...] for example, when an exception occurs in an __del__() method.
Partial Success: Flag Variable
Your idea of setting a global "drop dead" variable inside the signal handler worked only partially: although it updated the variable, nothing got a chance to read that variable until after the __del__ method returned. So for several seconds, the Ctrl-C appeared to have done nothing.
This might be good enough if you just want to terminate the process "eventually", since it will exit whenever the __del__ method returns. But since you probably want to shut down the process without waiting (both SIGINT and KeyboardInterrupt typically come from an impatient user), this won't do.
Success: os.kill
Since I couldn't find a way to convince the Python interpreter to kill itself, my solution was to have the (much more persuasive) operating system do it for me. This signal handler uses os.kill to send a stronger SIGTERM to its own process ID, causing the Python interpreter itself to exit.
import os
import signal

def _sigterm_this_process(signum, frame):
    pid = os.getpid()
    os.kill(pid, signal.SIGTERM)
    return

# Elsewhere...
signal.signal(signal.SIGINT, _sigterm_this_process)
Once the custom signal handler was set, Ctrl-C caused the __del__ method (and the entire program) to exit immediately.
Related
I have an application that relies on SIGINT for a graceful shutdown. I noticed that every once in a while it just keeps running. The cause turned out to be a generator in xml/etree/ElementTree.py.
If SIGINT arrives while that generator is being cleaned up, all exceptions are ignored (recall that the default action for SIGINT is to raise a KeyboardInterrupt). That's not unique to this particular generator, or even to generators in general.
From the Python docs:
"Due to the precarious circumstances under which __del__() methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to sys.stderr instead"
In over five years of programming in Python, this is the first time I have run into this issue.
If garbage collection can occur at any point, then SIGINT can also theoretically be ignored at any point, and I can't ever rely on it. Is that correct? Have I just been lucky this whole time?
Or is it something about this particular package and this particular generator?
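To illustrate the kind of thing I mean, here's a rough sketch (the generator and the sleep are stand-ins, not the actual ElementTree code):
import time

def parser():
    try:
        yield 1
    finally:
        # Cleanup runs when the half-consumed generator is garbage collected.
        # A Ctrl-C arriving during this sleep is reported as "ignored"
        # instead of raising KeyboardInterrupt in the rest of the program.
        time.sleep(5)

g = parser()
next(g)   # leave the generator suspended inside the try block
del g     # cleanup happens during garbage collection
print("SIGINT during cleanup did not stop the program")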
My Python program takes a lot of time to complete all the iterations of a for loop. The moment I hit a particular key/key combination on the keyboard while it is running, I want it to go into another method, save the variables to disk (using pickle, which I know how to do), and exit the program safely.
Any idea how I can do this?
Is KeyboardInterrupt a safe way to do this, just by wrapping the for loop in a try block, catching the KeyboardInterrupt, and then saving the variables in the except block?
It is only safe if, at every point in your loop, your variables are in a state which allows you to save them and resume later.
To be safe, you could instead intercept the signal before it becomes a KeyboardInterrupt and set a flag that you can test for. The signal that causes the KeyboardInterrupt is SIGINT. In your signal handler, you set the flag; in your calculation function, you check it at convenient points. Example:
import signal
import time

interrupted = False

def on_interrupt(signum, stack):
    global interrupted
    interrupted = True

def long_running_function():
    signal.signal(signal.SIGINT, on_interrupt)
    while not interrupted:
        time.sleep(1)  # do your work here
    signal.signal(signal.SIGINT, signal.SIG_DFL)

long_running_function()
The key advantage is that you have control over the point at which the function is interrupted. You can add checks for if interrupted at any place you like, which helps keep the function in a consistent, resumable state when it is interrupted.
(With Python 3, this could be solved more cleanly using nonlocal; that is left as an exercise for the reader, as the asker did not specify which Python version they are using.)
(This should work on Windows according to the documentation, but I have not tested it. Please report back if it does not so that future readers are warned.)
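For reference, a rough sketch of the nonlocal variant (assuming Python 3) could look like this:
import signal
import time

def long_running_function():
    interrupted = False

    def on_interrupt(signum, stack):
        nonlocal interrupted
        interrupted = True

    previous_handler = signal.signal(signal.SIGINT, on_interrupt)
    try:
        while not interrupted:
            time.sleep(1)  # do your work here
    finally:
        signal.signal(signal.SIGINT, previous_handler)

long_running_function()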
Is there any way to defer an exception coming from an external source for the duration of a certain block of code? For example:
some_statement_one
some_statement_two
# Begin suppression of KeyboardInterrupt
with KeyboardInterrupt suppressing context:
important_statement_one
important_statement_two
important_statement_three
some_statement_three
What I would like to be able to do is to delay reception of a KeyboardInterrupt (or whatever error(s) I specify) until after the end of the important block. That is, if the user sends a keyboard interrupt during the execution of important_statement_one, the code will continue normally until it has finished important_statement_three. Then, the KeyboardInterrupt will be received as if it had been sent right then.
My concern is only with an exception that is generated external to the code being executed, such as KeyboardInterrupts, SystemExits, etc. Is there even a meaningful way to distinguish between an exception raised by code being executed and an exception raised due to something external? I probably can't do what I'm trying to do unless there is.
Some additional information:
I'm using the technique suggested in Is there any way to kill a Thread in Python? to send exceptions to threads. Does raising an exception like this make a difference?
I can't really use try/except blocks for this: I'm passing some information between threads, so if the exception is raised on the line where the information is passed, the information will be lost.
I think you want to override the default handler for the SIGINT signal.
Here is a simple way to do so.
import signal

some_statement_one
some_statement_two

# Save the original SIGINT handler
original_sigint_handler = signal.getsignal(signal.SIGINT)
# Replace it with a handler that does nothing (signal handlers take two arguments)
signal.signal(signal.SIGINT, lambda signum, frame: None)

important_statement_one
important_statement_two
important_statement_three

# Restore the original handler
signal.signal(signal.SIGINT, original_sigint_handler)

some_statement_three
You may want to wrap this in a class and use __enter__ and __exit__ methods to get cleaner code.
Your code would then look the way you wanted (using with).
Read: Explaining Python's '__enter__' and '__exit__'
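A minimal sketch of such a context manager (the name SuppressKeyboardInterrupt is just illustrative) could be:
import signal

class SuppressKeyboardInterrupt(object):
    # Ignores SIGINT inside the block and restores the previous handler afterwards.

    def __enter__(self):
        self._original_handler = signal.getsignal(signal.SIGINT)
        signal.signal(signal.SIGINT, lambda signum, frame: None)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        signal.signal(signal.SIGINT, self._original_handler)
        return False  # do not swallow exceptions raised inside the block

# Usage, with the placeholder statements from the question:
# with SuppressKeyboardInterrupt():
#     important_statement_one
#     important_statement_two
#     important_statement_three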
Note that this doesn't fully answer my question, but this works specifically for deferring a keyboard interrupt.
As suggested by Martijn Pieters, a signal handler can be used to handle a keyboard interrupt before it becomes an exception that needs to be caught:
import signal
import threading
interruptLock = threading.Lock()
def handleInterrupt(signum, frame):
    with interruptLock:
        raise KeyboardInterrupt()

signal.signal(signal.SIGINT, handleInterrupt)
Then in the important code:
some_statement_one
some_statement_two
with interruptLock:
    important_statement_one
    ...
This only allows one thread in the important section at a time, but it defers the exception until the interrupt lock has been released, so it should not be interrupted during that section.
If you want to have multiple threads going through the important section at a time, you might want to use a reader-writer lock. One implementation of a reader-writer lock can be found here.
Sometimes it happens that an ongoing ipython evaluation won't respond to one, or even several, Ctrl-C's from the keyboard¹.
Is there some other way to goose the ipython process to abort the current evaluation, and come back to its "read" state?
Maybe with kill -SOMESECRETSIGNAL <pid>? I've tried a few (SIGINT, SIGTERM, SIGUSR1, ...) to no avail: either they have no effect (e.g. SIGINT), or they kill the ipython process. Or maybe some arcane ipython configuration? Some sentinel file? ... ?
1"Promptly enough", that is. Of course, it is impossible to specify precisely how promptly is "promptly enough"; it depends on the situation, the reliability of the delay's duration, the temperament of the user, the day's pickings at Hacker News, etc.
It depends on where execution is occurring when you decide to interrupt (in a Python function, in a lower-level library, ...). If this commonly occurs within a function you have created, you can try putting a try/except block in the function and catching KeyboardInterrupt. It may not break out of a low-level library (if that is indeed where execution is stuck), but it should prevent the ipython interpreter from exiting.
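As a rough sketch of that idea (do_step is a placeholder for whatever your function actually computes):
def my_long_computation():
    results = []
    try:
        for i in range(1000000):
            results.append(do_step(i))  # do_step is a placeholder, not a real function
    except KeyboardInterrupt:
        # Swallow the Ctrl-C so the ipython session itself keeps running,
        # and return whatever has been computed so far.
        print("Interrupted after %d steps" % len(results))
    return results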
I have a chunk of code like this
import thread

def f(x):
    try:
        g(x)
    except Exception, e:
        print "Exception %s: %s" % (x, e)

def h(x):
    thread.start_new_thread(f, (x,))
Once in a while, I get this:
Unhandled exception in thread started by
Error in sys.excepthook:
Original exception was:
Unlike the code sample, that's the complete text. I assume after the "by" there's supposed to be a thread ID and after the colon there are supposed to be stack traces, but nope, nothing. I don't know how to even start to debug this.
The error you're seeing means the interpreter was exiting (because the main thread exited) while another thread was still executing Python code. Python will clean up its environment, cleaning out and throwing away all of the loaded modules (to make sure as many finalizers as possible execute) but unfortunately that means the still-running thread will start raising exceptions when it tries to use something that was already destroyed. And then that exception propagates up to the start_new_thread function that started the thread, and it will try to report the exception -- only to find that what it tries to use to report the exception is also gone, which causes the confusing empty error messages.
In your specific example, this is all caused by your thread being started and your main thread exiting right away. Whether the newly started thread gets a chance to run before, during or after the interpreter exits (and thus whether you see it run as normal, run partially and report an error or never see it run) is entirely up to the OS thread scheduler.
If you're using threads (and avoiding them is not a bad idea), you probably don't want threads still running while you're exiting the interpreter. The threading.Thread class is a better interface for starting new threads, and by default it makes the interpreter wait for all threads on exit. If you really don't want to wait for a thread to end, you can set the Thread object's daemon flag to get the old behaviour, including the problem you see here.
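As a rough sketch of that suggestion, reusing the f and g from the question (g is still the question's own, undefined function):
import threading

def f(x):
    try:
        g(x)  # g is the question's function, not defined here
    except Exception as e:
        print("Exception %s: %s" % (x, e))

def h(x):
    t = threading.Thread(target=f, args=(x,))
    # Setting t.daemon = True before start() would restore the old
    # "don't wait at exit" behaviour, including the confusing shutdown
    # errors described above.
    t.start()
    return t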