How can I trap or be notified when a program reaches its end, just before exit, so I can do some cleanup?
Capturing errors with try/except and registering handlers via the signal module lets me cover most situations, but this is not enough: for example, when using daemonize, the interactive task finishes and a new one starts in the background, with no control over the close.
To be more specific, I want to be notified when the PID is released.
The atexit module is not a solution, as stated in its documentation:
The functions registered via this module are not called when the
program is killed by a signal not handled by Python, when a Python
fatal internal error is detected, or when os._exit() is called
Related
I have a Python script for automating simple tasks. Its main loop looks like this:
while True:
    input = download_task_input()
    if input:
        output = process_task(input)
        upload_task_output(output)
    sleep(60)
Some local files are altered during task processing. They are modified when the task is started, and restored to their proper state when the task is done, or if an exception is caught. Restoring these files on program exit is very important to me: leaving them in an altered state causes trouble later that I'd like to avoid.
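For illustration, the restore logic looks roughly like this (alter_files, restore_files, and do_processing are hypothetical stand-ins for the real file handling):

def process_task(task_input):
    alter_files()
    try:
        return do_processing(task_input)
    finally:
        restore_files()  # runs on normal return and on KeyboardInterrupt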
When I want to terminate the script, I hit Ctrl+C. It raises KeyboardInterrupt exception which both stops task processing and triggers files restoration. However, if I hit Ctrl+Break, the program is simply terminated: if a task is being processed at this moment, then local files are left in altered state (which is undesirable).
The question: I'm worried about the situation when Windows OS is shutdown by pressing the Power button. Is it possible to make Python handle it exactly like it handles Ctrl+C? I.e. I'd like to detect OS shutdown in Python script and raise Python exception on the main thread.
I know it is possible to call the SetConsoleCtrlHandler function from the WinAPI and install my own handler for situations like Ctrl+C, Ctrl+Break, Shutdown, etc. However, this handler seems to be executed in an additional thread, and raising an exception in it does not achieve anything. On the other hand, Python itself supposedly uses the same WinAPI feature to raise KeyboardInterrupt on the main thread on Ctrl+C, so it should be doable.
This is not a serious automation script, so I don't mind if a solution is hacky or not 100% reliable.
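For reference, here is a minimal, untested sketch of that SetConsoleCtrlHandler route via ctypes. It assumes that _thread.interrupt_main() is enough to surface a KeyboardInterrupt on the main thread; the event constants come from the Windows API, and Windows may not deliver shutdown events to console apps reliably:

import ctypes
import _thread

# Console control event codes from the Windows API (wincon.h).
CTRL_BREAK_EVENT = 1
CTRL_CLOSE_EVENT = 2
CTRL_SHUTDOWN_EVENT = 6

HANDLER = ctypes.WINFUNCTYPE(ctypes.c_int, ctypes.c_uint)

@HANDLER
def ctrl_handler(event):
    if event in (CTRL_BREAK_EVENT, CTRL_CLOSE_EVENT, CTRL_SHUTDOWN_EVENT):
        _thread.interrupt_main()  # raise KeyboardInterrupt on the main thread
        return 1  # tell Windows the event was handled
    return 0  # fall through to the default handler (Ctrl+C stays as-is)

ctypes.windll.kernel32.SetConsoleCtrlHandler(ctrl_handler, 1)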
Python has async kill signal handling:
import signal
import sys

def signal_handler(*_):
    print("\nExiting...")
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
From what I understand, when Python receives the signal registered here, it stops doing whatever it was doing and executes the signal handler. If the signal handler doesn't stop the program, then when it returns, Python continues whatever it was doing.
Is this some sort of special case? It seems strange that something can just interrupt the main thread, make it jump to a completely different function (the signal handler), and then return to what it was doing when the handler returns. I've been programming in Python for a few years, and I'm not aware of anything else in Python that lets you register a function that, at some point, simply forces the main thread to temporarily do something else, no matter what it was doing, before returning to it. Sure, Python has async support, but you need to await things there before Python will switch what it is executing. The main thread here is not doing any kind of awaiting before focus is taken away from whatever it was doing.
I'm currently studying Rust. As far as I can tell, Rust has a few options for letting you handle kill signals, including:
Running a background thread that watches for an incoming kill signal (like SIGINT), and sets a boolean flag for your main thread to check and act on (this pattern is sketched in Python below)
Running a background thread that watches for an incoming kill signal, and handles it there directly
Neither interrupts the main thread and makes it execute the signal handler before returning the main thread to what it was doing. This sounds like a Python-specific way of handling kill signals that is built into the interpreter itself, and only possible in a higher-level language like Python, where the interpreter can run code in between each piece of user code it executes. Or am I wrong?
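For comparison, the flag half of the first pattern translates directly to Python; a minimal sketch, where do_work is a hypothetical unit of work (note that in Python the handler runs on the main thread between bytecodes, not on a background thread):

import signal

stop_requested = False

def request_stop(signum, frame):
    global stop_requested
    stop_requested = True  # only set a flag; the main loop reacts to it

signal.signal(signal.SIGINT, request_stop)

while not stop_requested:
    do_work()  # hypothetical unit of work

print("Exiting cleanly")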
Basically, I am writing a script that can be stopped and resumed at any time. So if the user executes the program from, say, the PyCharm console, they can just click the stop button whenever they want.
Now, I need to save some variables and let an ongoing function finish before terminating. What functions do I use for this?
I have already tried atexit.register() to no avail.
Also, how do I make sure that an ongoing function is completed before the program can exit?
Solved it using a really bad workaround. I tried all the exit-related functions in Python, including the SIG* handlers, but I could not find a way to catch the exit signal when the Python program is stopped by pressing the "Stop" button in PyCharm. I finally got a workaround by using tkinter to open an empty window, with my program running in a background thread, and used that window to close/stop program execution. It works wonderfully, and catches the SIG* signals as well as executing atexit handlers. Anyway, massive thanks to #scrineym, as the link really gave a lot of useful information that helped in the development of the final version.
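For anyone curious, the workaround boils down to something like this sketch, where run_program stands in for the actual script body:

import threading
import tkinter as tk

def run_program():
    ...  # the actual long-running work goes here

def on_close():
    # Save variables / let the ongoing function finish here, then exit.
    root.destroy()

root = tk.Tk()
root.protocol("WM_DELETE_WINDOW", on_close)  # fires when the window is closed
threading.Thread(target=run_program, daemon=True).start()
root.mainloop()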
It looks like you might want to catch a signal.
When a program is told to stop, a signal is sent to the process by the OS; you can catch these signals and do cleanup before exit. There are many different signals: for example, when you press Ctrl+C, a SIGINT signal is sent by the OS to stop your process, but there are many others.
See here: How do I capture SIGINT in Python?
and here for the signal library: https://docs.python.org/2/library/signal.html
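A minimal sketch of that cleanup-on-signal pattern, where cleanup is a hypothetical stand-in for your own restore logic:

import signal
import sys

def handle_exit(signum, frame):
    cleanup()  # hypothetical: restore files, flush state, etc.
    sys.exit(0)

for sig in (signal.SIGINT, signal.SIGTERM):
    signal.signal(sig, handle_exit)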
I have a Python script with multiple threads launched via threading, in which several threads occasionally freeze (apparently simultaneously). In this script, I've registered a signal handler to dump stack traces from all the running threads. When it's frozen, no dumped stacks appear. What could be causing this?
A couple of possibilities that come to mind:
A thread is not releasing a mutex, freezing any other threads that attempt to acquire it. I would expect the signal handler to work in this case, however. Am I mistaken?
I log various things to stdout and stderr, which are redirected with a bash command line to a log file. Perhaps precisely timed output from two threads could be blocking at the OS level? This script has been running for months without problems, though there was a kernel update just recently (it's Ubuntu 12.04). If this is the case, the signal is not being ignored, just not producing any output...
I have a few global variables that are read and written by the freezing threads. I had thought that Python 2.7's global interpreter lock makes this safe, and it's not been a problem before.
Python's signal module runs signal handlers on the main interpreter thread exclusively. If your main thread is hung and unable to execute Python code, your signal handlers will not run. Signals will be fired and caught, but if the thread can't execute Python code then nothing will happen.
The best way to avoid this situation is to ensure your main thread (the first thread that exists in the Python interpreter upon startup) does not deadlock. This may mean ensuring that nothing important happens on that thread after initialization.
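As a sketch of that advice: park the main thread in an idle loop so it is always free to run a stack-dumping handler. Here worker is hypothetical, and SIGUSR1 is POSIX-only:

import signal
import sys
import threading
import time
import traceback

def dump_stacks(signum, frame):
    # Print a stack trace for every thread currently alive.
    for thread_id, stack in sys._current_frames().items():
        print("Thread %d:" % thread_id)
        traceback.print_stack(stack)

signal.signal(signal.SIGUSR1, dump_stacks)

def worker():
    ...  # real work happens on worker threads, never on the main thread

for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

while True:
    time.sleep(1)  # main thread stays idle and responsive to signals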
When using mpirun, is it possible to catch signals (for example, the SIGINT generated by ^C) in the code being run?
For example, I'm running a parallelized python code. I can except KeyboardInterrupt to catch those errors when running python blah.py by itself, but I can't when doing mpirun -np 1 python blah.py.
Does anyone have a suggestion? Even finding how to catch signals in a C or C++ compiled program would be a helpful start.
If I send a signal to the spawned Python processes, they can handle the signals properly; however, signals sent to the parent orterun process (i.e. from exceeding wall time on a cluster, or pressing control-C in a terminal) will kill everything immediately.
I think it is really implementation-dependent.
In SLURM, I tried using sbatch --signal=USR1@30 to send SIGUSR1 (whose signal number is 30, 10, or 16 depending on the platform) to the program launched by srun commands, and the process received SIGUSR1 as signal number 10.
For platform MPI of IBM, according to https://www.ibm.com/support/knowledgecenter/en/SSF4ZA_9.1.4/pmpi_guide/signal_propagation.html
SIGINT, SIGUSR1, and SIGUSR2 are propagated to the processes.
In MPICH, SIGUSR1 is used by the process manager for internal notification of abnormal failures.
ref: http://lists.mpich.org/pipermail/discuss/2014-October/003242.html
Open MPI, on the other hand, will forward SIGUSR1 and SIGUSR2 from mpiexec to the other processes.
ref: http://www.open-mpi.org/doc/v1.6/man1/mpirun.1.php#sect14
For IntelMPI, according to https://software.intel.com/en-us/mpi-developer-reference-linux-hydra-environment-variables
the environment variables I_MPI_JOB_SIGNAL_PROPAGATION and I_MPI_JOB_TIMEOUT_SIGNAL can be set to control which signal is sent and whether it is propagated.
Another thing worth noting: many Python scripts invoke other libraries or code through Cython, and if SIGUSR1 is caught by a subprocess, something unwanted might happen.
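For the SLURM case above, each Python rank can then handle the forwarded signal itself; a minimal sketch, where save_checkpoint is a hypothetical stand-in:

import signal

def on_usr1(signum, frame):
    save_checkpoint()  # hypothetical: persist state before the job is killed
    raise KeyboardInterrupt  # unwind the main loop cleanly

signal.signal(signal.SIGUSR1, on_usr1)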
If you use mpirun --nw, then mpirun itself should terminate as soon as it has started the subprocesses, instead of waiting for their termination; if that's acceptable, then I believe your processes would be able to catch their own signals.
The signal module supports setting signal handlers using signal.signal:
Set the handler for signal signalnum to the function handler. handler can be a callable Python object taking two arguments (see below), or one of the special values signal.SIG_IGN or signal.SIG_DFL. The previous signal handler will be returned ...
import signal

def ignore(sig, stack):
    print("I'm ignoring signal %d" % sig)

signal.signal(signal.SIGINT, ignore)

while True:
    pass
If you send a SIGINT to a Python interpreter running this script (via kill -INT <pid>), it will print a message and simply continue to run.