I need to spawn off a process in Python that allows the calling process to exit while the child is still running. What is an effective way to do this?
Note: I'm running on a UNIX environment.
Terminating the parent process does not terminate child processes in Unix-like operating systems, so you don't need to do anything special. Just start your subprocesses with subprocess.Popen and terminate the main process. The orphaned processes will automatically be adopted by init.
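A minimal sketch of that approach ('sleep 60' stands in for the real command):

import subprocess
import sys

# Start a long-running child; its lifetime is not tied to the parent's.
child = subprocess.Popen(['sleep', '60'])

# The parent can exit immediately; on Unix the orphaned child keeps
# running and is adopted by init (or the nearest subreaper).
sys.exit(0)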
Related
I would like to keep another Python process (a Python function) running even after the main process has completed. Can this be done without using subprocess?
Currently, if I run a non-daemonic process, the main process automatically joins it (waits for it to finish) before exiting.
If I set the process to be a daemon, then the child process exits as soon as the main process completes (illustrated below).
How do I have another process keep running in the background, even after the main process is complete?
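For reference, a minimal sketch of the behavior described above (the daemon keyword argument requires Python 3.3+):

import time
from multiprocessing import Process

def worker():
    time.sleep(30)

if __name__ == '__main__':
    # daemon=False (the default): the main process joins this child at
    # interpreter exit, so it blocks for ~30 seconds before terminating.
    # daemon=True: the child is killed as soon as the main process ends.
    Process(target=worker, daemon=True).start()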
I am using pexpect to run a start command on an in-house application. The start command starts a number of processes. As the processes start one by one in the background, everything looks good, but when the 'start' command finishes and the pexpect process ends, the processes that were started also die.
import pexpect

log = open('foo.log', 'wb')  # any writable binary file; the name is illustrative
child = pexpect.spawn('foo start')
child.logfile = log
child.wait()
For this scenario, I can use nohup and it works as expected.
child = pexpect.spawn('bash -c "nohup foo start"')
However, there is also an installer for the same in-house application that has the same issue: part of the installation is to start the processes. The installer is interactive and requires input, so nohup will not work.
How can I prevent the processes that are started by the installer from dying when the pexpect session ends?
Note: The start and install processes work fine when executed from a standard terminal session. They are not tied to the session in any way.
I couldn't find much in the documentation about it, but including the "ignore_sighup=True" option in the spawn command fixed my issue.
child = pexpect.spawn('foo start', ignore_sighup=True)
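For the interactive installer mentioned above, the same option can be combined with the usual expect/sendline flow; the prompt string and response here are hypothetical:

import pexpect

child = pexpect.spawn('foo install', ignore_sighup=True)
child.expect('Install directory:')   # hypothetical installer prompt
child.sendline('/opt/foo')
child.expect(pexpect.EOF)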
I am working on Unix systems and have a GUI application that in turn spawns a couple of other processes. These processes are required to run independently of the parent process (the GUI application). Basically, when the GUI crashes or is closed, the child processes should keep running.
One approach could be to daemonize the processes. There is a useful answer that runs a process in the background through double forking.
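For context, a minimal sketch of the double-fork technique (error handling, umask, and working-directory changes omitted):

import os
import time

def spawn_detached(target):
    # First fork: the original parent reaps the intermediate child and returns.
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)
        return
    os.setsid()              # new session: detach from the controlling tty
    # Second fork: the grandchild can never reacquire a controlling terminal.
    if os.fork() > 0:
        os._exit(0)
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):     # detach stdio from the parent
        os.dup2(devnull, fd)
    try:
        target()
    finally:
        os._exit(0)

if __name__ == '__main__':
    spawn_detached(lambda: time.sleep(60))   # keeps running after the parent exits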
What I would like to ask is whether it is possible to achieve the same result using a terminal multiplexer such as tmux or GNU Screen. I am not sure how these terminal multiplexers create and maintain shell sessions, but the basic idea would be for the GUI application to use tmux or screen to create a shell session and run the child processes within it. Would that make the child processes independent of the parent process?
Thanks in advance!
It should work if your GUI runs something like this:
tmux new-session -s test -d vim
which creates a detached session named "test", running the "vim" command. The session can then be attached with:
tmux attach-session -t test
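From Python, the GUI could start each child that way through subprocess (the worker command and session name are hypothetical):

import subprocess

# Run the child inside a detached tmux session; its lifetime is then
# tied to the tmux server, not to the GUI process.
subprocess.run(
    ['tmux', 'new-session', '-d', '-s', 'worker1', 'long_running_child'],
    check=True,
)
# The GUI may now crash or exit; reattach later with:
#   tmux attach-session -t worker1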
Using Parallel Python 1.6.4, I spawn a subprocess.Popen command on a remote server. For whatever reason, the command isn't completing in a timely manner, i.e., within the socket_timeout I've set. In this case, I expected Parallel Python to fail, kill the remote process, and perhaps raise an exception. Instead, the long-running process keeps going, and the ppserver quietly spawns another one!
How can I configure ppserver to fail?
Short of that, I suppose I have to set a timer and destroy the job_server to make it close out and clean up the bad process?
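A sketch of that fallback, using a threading.Timer as the watchdog (the host, port, and timeout are placeholders, and pp's API is assumed from its 1.6.x documentation):

import threading
import time
import pp

def slow_task(n):
    time.sleep(n)      # stand-in for the real remote work
    return n

# ncpus=0 forces jobs to the remote server instead of local workers.
job_server = pp.Server(ncpus=0, ppservers=('remote-host:60000',))

# Watchdog: if the job overruns its budget, destroy the whole job server
# so the stuck remote process is torn down rather than left running.
watchdog = threading.Timer(300.0, job_server.destroy)
watchdog.start()

job = job_server.submit(slow_task, (10,), modules=('time',))
result = job()         # blocks until the job returns (or the server dies)
watchdog.cancel()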
I have a Python script that does some jobs. I use multiprocessing.Pool to have a few workers run some commands for me.
My problem is when I try to terminate the script. When I press Ctrl-C, I would like every worker to immediately clean up its experiment (which involves custom code, or even a subprocess command, not just releasing locks or memory) and stop.
I know that I can catch Ctrl-C with a signal handler. How can I make all currently running workers of a multiprocessing.Pool terminate while still performing their cleanup?
Pool.terminate() will not be useful, because the processes will be terminated without cleaning up.
How about trying the atexit standard module?
It allows you to register functions that are executed when the interpreter exits normally (note that such handlers do not run when a process is killed by an unhandled signal).
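A minimal sketch:

import atexit

def cleanup():
    # hypothetical cleanup: stop the experiment's subprocess, remove temp files
    print('cleaning up')

atexit.register(cleanup)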
Are you working on Unix? If so, why not catch SIGTERM in the subprocesses? In fact, the documentation of Process.terminate() reads:
Terminate the process. On Unix this is done using the SIGTERM signal
(I have not tested this.)
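A hedged sketch of that idea: install a SIGTERM handler in every worker through the Pool initializer, so that Pool.terminate() triggers the cleanup (the cleanup body is a placeholder):

import signal
import sys
import time
from multiprocessing import Pool

def cleanup_and_exit(signum, frame):
    # placeholder: stop the experiment's own subprocess, flush results, ...
    sys.exit(0)

def init_worker():
    # Pool.terminate() delivers SIGTERM on Unix, so this handler gets a
    # chance to clean up before the worker exits.
    signal.signal(signal.SIGTERM, cleanup_and_exit)
    signal.signal(signal.SIGINT, signal.SIG_IGN)  # let the parent handle Ctrl-C

def run_experiment(i):
    time.sleep(60)   # stand-in for the real experiment

if __name__ == '__main__':
    pool = Pool(4, initializer=init_worker)
    try:
        pool.map(run_experiment, range(8))
    except KeyboardInterrupt:
        pool.terminate()   # SIGTERM -> cleanup_and_exit in each worker
        pool.join()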