I would like to keep another Python process (Python function) running, even after the main process is completed. Can this be done without using subprocess?
Currently, if I start a non-daemonic process, it is automatically joined when the main process exits.
If I make the process daemonic, the child process is terminated once the main process is complete.
How do I have another process keep running in the background, even after the main process is complete?
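For reference, a minimal sketch of the two behaviours described above; the worker function and the sleep are just placeholders:

```python
import multiprocessing
import time

def worker():
    # Placeholder for the real background job.
    time.sleep(60)
    print("worker finished")

if __name__ == "__main__":
    # daemon=False (the default): the interpreter joins this child at exit,
    # so the program as a whole keeps running until the child is done.
    # daemon=True: the child is killed as soon as the main process exits.
    p = multiprocessing.Process(target=worker, daemon=True)
    p.start()
    print("main process done")
```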
Related
I was starting a multiprocessing job that ran out of memory. I could see (online, in a SageMaker container) that it raised an OSError, but the overall process was not terminated. I am unsure what to blame (SageMaker, Docker, or multiprocessing itself). What could cause such an error not to propagate upwards?
Make sure your main process exits when a sub-process fails:
When SageMaker runs your container, for example as part of a Training job, it starts the container and waits for the container to exit. It has no knowledge of what your processes are doing.
To have the container exit when one of the sub-processes fails, make sure your main process detects this case and exits (see the sketch below).
Note: A container’s main running process is the ENTRYPOINT and/or CMD at the end of the Dockerfile - in the case of bring-your-own-script training it will be your train.py.
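A hedged sketch of that pattern inside train.py; the worker function, shard count, and failure message are illustrative, not SageMaker-specific:

```python
import multiprocessing
import sys

def training_worker(shard):
    # Hypothetical per-shard training work; this is where an OSError
    # (e.g. out of memory) might be raised in a real job.
    pass

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=training_worker, args=(s,)) for s in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    # If any worker died with a non-zero exit code, exit non-zero ourselves
    # so the container stops and SageMaker marks the training job as failed.
    if any(p.exitcode != 0 for p in procs):
        sys.exit("a training worker failed")
```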
I executed a Python program and ran the command "ps -mH -p 10934" (the number was that process's ID), and found only one thread in the process. A Java process, by contrast, starts more than 20 threads, such as the GC daemon, the management daemon, and so on.
Why does a Python process have only one thread? How does Python do garbage collection?
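For context, CPython starts with a single thread by default and does garbage collection in-process, using reference counting plus the cyclic collector in the gc module, rather than separate collector threads as in the JVM. A quick, purely illustrative way to inspect this:

```python
import gc
import threading

print(threading.active_count())   # typically 1 for a plain script: just MainThread
print(threading.enumerate())      # the live threads in this process
print(gc.isenabled())             # the cyclic collector is enabled by default
print(gc.get_threshold())         # allocation thresholds that trigger a collection
```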
Using parallel python 1.6.4, I spawn a subprocess.Popen command on a remote server. For whatever reason, the command isn't completing in a timely manner, i.e., within the socket_timeout I've set. In this case, I expected parallel python to fail, kill the remote process, and maybe raise an exception. Instead, the long process keeps running, and the ppserver quietly spawns another one!
How can I configure ppserver to fail?
Short of that, I suppose I have to set a timer and destroy the job_server to make it close out and clean up the bad process?
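One possible version of that fallback, assuming pp.Server takes a socket_timeout argument and exposes destroy(), as in parallel python 1.6.x; the host, port, and task are made up:

```python
import threading
import pp  # parallel python

def slow_remote_task(n):
    # Stands in for the subprocess.Popen command run on the remote server.
    return n * n

job_timeout = 300  # seconds; pick something above the expected runtime

job_server = pp.Server(ppservers=("remote-host:60000",), socket_timeout=job_timeout)

# Watchdog: if the job has not finished by the deadline, destroy the job
# server so the remote process is cleaned up instead of running indefinitely.
watchdog = threading.Timer(job_timeout, job_server.destroy)
watchdog.start()

job = job_server.submit(slow_remote_task, (7,))
result = job()       # blocks until the job returns (or the server is torn down)
watchdog.cancel()    # finished in time, so the watchdog is no longer needed
```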
I need to spawn off a process in Python that allows the calling process to exit while the child is still running. What is an effective way to do this?
Note: I'm running in a UNIX environment.
Terminating the parent process does not terminate child processes in Unix-like operating systems, so you don't need to do anything special. Just start your subprocesses with subprocess.Popen and terminate the main process. The orphaned processes will automatically be adopted by init.
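A minimal sketch of that; the sleep command just stands in for your long-running child:

```python
import subprocess
import sys

# No special flags needed: when this script exits, the child keeps running
# and is re-parented to init (PID 1). Replace "sleep 300" with your command.
subprocess.Popen(["sleep", "300"])

print("parent exiting, child keeps running")
sys.exit(0)
```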
I have a python script that does some jobs. I use multiprocessing.Pool to have a few workers do some commands for me.
My problem is when I try to terminate the script. When I press Ctrl-C, I would like every worker to immediately clean up its experiment (which is some custom code, or even a subprocess command, not just releasing locks or memory) and stop.
I know that I can catch Ctrl-C with a signal handler. How can I make all currently running workers of a multiprocessing.Pool terminate while still running their cleanup code?
Pool.terminate() will not be useful, because the processes will be terminated without cleaning up.
How about trying the atexit standard module?
It allows you to register a function that will be executed upon termination.
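A minimal sketch of that idea, registering the cleanup function in each worker through the Pool initializer (function names are made up). Note that atexit hooks run on a normal interpreter exit, not when a worker is killed outright:

```python
import atexit
import multiprocessing

def cleanup_experiment():
    # Custom cleanup for this worker: stop its subprocess, remove temp files, etc.
    print("worker cleaning up")

def init_worker():
    # Runs once in every worker process when the pool starts it.
    atexit.register(cleanup_experiment)

def run_experiment(i):
    return i * i

if __name__ == "__main__":
    pool = multiprocessing.Pool(4, initializer=init_worker)
    print(pool.map(run_experiment, range(8)))
    pool.close()   # let the workers exit normally, so their atexit hooks run
    pool.join()
```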
Are you working on Unix? If so, why not catch SIGTERM in the subprocesses? In fact, the documentation of Process.terminate() reads:
Terminate the process. On Unix this is done using the SIGTERM signal
(I have not tested this.)
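An untested sketch of that idea: install a SIGTERM handler in each worker via the Pool initializer, ignore SIGINT in the workers so only the parent reacts to Ctrl-C, and call Pool.terminate() from the main process (function names are made up):

```python
import multiprocessing
import signal
import sys

def cleanup_and_exit(signum, frame):
    # Custom cleanup for this worker: terminate its own subprocess,
    # remove temporary files, etc., then exit.
    print("worker got SIGTERM, cleaning up")
    sys.exit(0)

def init_worker():
    # Let only the parent react to Ctrl-C; workers are stopped via SIGTERM.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    signal.signal(signal.SIGTERM, cleanup_and_exit)

def run_experiment(i):
    return i * i

if __name__ == "__main__":
    pool = multiprocessing.Pool(4, initializer=init_worker)
    try:
        print(pool.map(run_experiment, range(8)))
        pool.close()
    except KeyboardInterrupt:
        # Ctrl-C: Pool.terminate() sends SIGTERM to each worker (on Unix),
        # which triggers cleanup_and_exit before the worker goes away.
        pool.terminate()
    pool.join()
```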