I have a Python script that does some jobs. I use multiprocessing.Pool to have a few workers run some commands for me.
My problem is terminating the script. When I press Ctrl-C, I would like every worker to immediately clean up its experiment (which is some custom code, or actually even a subprocess command, not just releasing locks or memory) and stop.
I know that I can catch Ctrl-C with a signal handler. How can I make all currently running workers of a multiprocessing.Pool terminate, while still running their cleanup command?
Pool.terminate() is not useful, because it terminates the processes without letting them clean up.
How about trying the atexit standard module?
It allows you to register a function that will be executed upon termination.
Are you working on Unix? If so, why not catch SIGTERM in the subprocesses? In fact, the documentation of Process.terminate() reads:
Terminate the process. On Unix this is done using the SIGTERM signal
(I have not tested this.)
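An untested sketch of that idea: install a SIGTERM handler in each worker via the pool's initializer, so that Pool.terminate() (or a manual SIGTERM from your Ctrl-C handler) runs the cleanup first. run_experiment, the sleep, and the pool size are placeholders:

    import signal
    import sys
    import time
    import multiprocessing

    def cleanup_and_exit(signum, frame):
        # Placeholder for the experiment's teardown (custom code or a
        # subprocess command, as described in the question).
        print("%s cleaning up" % multiprocessing.current_process().name)
        sys.exit(0)

    def init_worker():
        # Ignore Ctrl-C in the workers so the parent decides when to
        # terminate them; Pool.terminate() delivers SIGTERM on Unix,
        # which now runs the cleanup before the worker exits.
        signal.signal(signal.SIGINT, signal.SIG_IGN)
        signal.signal(signal.SIGTERM, cleanup_and_exit)

    def run_experiment(i):
        time.sleep(10)  # stand-in for the real work
        return i

    if __name__ == "__main__":
        pool = multiprocessing.Pool(4, initializer=init_worker)
        try:
            pool.map(run_experiment, range(8))
        except KeyboardInterrupt:
            pool.terminate()  # sends SIGTERM to the workers
            pool.join()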
Related
I would like to keep another Python process (a Python function) running even after the main process has completed. Can this be done without using subprocess?
Currently, if I start a non-daemonic process, it is automatically joined to the main process.
If I set the process to be a daemon, then the child process exits once the main process is complete.
How do I keep another process running in the background, even after the main process is complete?
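To illustrate the two behaviors described above (a toy sketch; background() is just a placeholder):

    import multiprocessing
    import time

    def background():
        time.sleep(60)  # stand-in for long-running work

    if __name__ == "__main__":
        p = multiprocessing.Process(target=background)
        # daemon=False (the default): the child is joined automatically
        # when the main process exits, so the script blocks for 60 s.
        # daemon=True: the child is terminated as soon as the main
        # process finishes.
        p.daemon = True
        p.start()
        # Main process ends here; neither setting leaves the child
        # running on its own afterwards.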
I'm developing a Python script that runs as a daemon in a Linux environment. If and when I need to issue a shutdown/restart operation to the device, I want to do some cleanup and log data to a file to persist it through the shutdown.
I've looked around regarding Linux shutdown and I can't find anything detailing which signal, if any, is sent to applications at shutdown/restart time. I assumed SIGTERM, but my tests (which are not very good tests) seem to disagree with this.
When Linux is shutting down (and this depends slightly on what kind of init scripts you are using), it first sends SIGTERM to all processes to shut them down, and then, I believe, it tries SIGKILL to force them to close if they are not responding to SIGTERM.
Please note, however, that your script may not receive the SIGTERM: init may send this signal to the shell it's running in instead, and the shell could kill Python without actually passing the signal on to your script.
Hope this helps!
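So in the script, an untested sketch along these lines could do the cleanup and logging (the log file path is a placeholder):

    import logging
    import signal
    import sys

    logging.basicConfig(filename="/var/log/mydaemon.log", level=logging.INFO)

    def on_sigterm(signum, frame):
        # Persist whatever needs to survive the shutdown, then exit
        # before init escalates to SIGKILL.
        logging.info("SIGTERM received, flushing state before shutdown")
        # ... cleanup goes here ...
        sys.exit(0)

    signal.signal(signal.SIGTERM, on_sigterm)
    signal.pause()  # the daemon's main loop would go here instead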
Using parallel python 1.6.4, I spawn a subprocess.Popen command on a remote server. For whatever reason, the command isn't completing in a timely manner, i.e., within the socket_timeout I've set. In this case, I expected parallel python to fail, kill the remote process, and maybe raise an exception. Instead, the long process keeps running, and the ppserver quietly spawns another one!
How can I configure ppserver to fail?
Short of that, I suppose I have to set a timer and destroy the job_server to make it close out and clean up the bad process?
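A sketch of that timer idea (untested; the host/port, timeouts, and task are placeholders, and it assumes pp.Server.destroy() tears the workers down as the pp docs suggest):

    import threading
    import pp  # parallel python

    def long_task(n):
        import time
        time.sleep(n)
        return n

    job_server = pp.Server(ppservers=("remote_host:60000",))

    # Watchdog: if the job overruns its budget, destroy the job server
    # so the remote process is cleaned up instead of lingering.
    timer = threading.Timer(30.0, job_server.destroy)
    timer.start()

    job = job_server.submit(long_task, (60,))
    result = job()  # blocks until the job returns or the server dies
    timer.cancel()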
I need to spawn off a process in Python that allows the calling process to exit while the child is still running. What is an effective way to do this?
Note: I'm running on a UNIX environment.
Terminating the parent process does not terminate child processes in Unix-like operating systems, so you don't need to do anything special. Just start your subprocesses with subprocess.Popen and terminate the main process. The orphaned processes will automatically be adopted by init.
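A minimal sketch (the command is a placeholder):

    import subprocess

    # Start the child; no wait()/communicate(), so nothing blocks on it.
    subprocess.Popen(["/bin/sh", "-c", "sleep 60"])

    # The parent can now exit; on Unix the orphaned child keeps running
    # and is adopted by init.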
I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb).
It appears that only SIGTERM is available in Win32, but clearly if I run gdb from the console and press Ctrl+C, it thinks it's receiving a SIGINT. Is there a way I can fake this so that the functionality is available on all platforms?
(I am using the subprocess module, and Python 2.5/2.6.)
Windows doesn't have the Unix signals IPC mechanism.
I would look at sending a CTRL-C to the gdb process.
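For example, a ctypes-based sketch (untested; it avoids signal.CTRL_C_EVENT, which only appeared in Python 2.7/3.2). Win32 won't deliver CTRL_C_EVENT to a specific process group, so this uses CTRL_BREAK_EVENT against a child started in its own group; whether gdb treats Ctrl-Break like Ctrl-C is an assumption to verify:

    import ctypes
    import subprocess

    CREATE_NEW_PROCESS_GROUP = 0x00000200
    CTRL_BREAK_EVENT = 1  # CTRL_C_EVENT (0) can't target a process group

    # Start gdb in its own process group so the event reaches only it.
    proc = subprocess.Popen(["gdb", "target.exe"],
                            creationflags=CREATE_NEW_PROCESS_GROUP)

    # Later, to interrupt it: the group id is the child's pid.
    ctypes.windll.kernel32.GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, proc.pid)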