cancel join after sys.exit in multiprocessing - python

On OSX I create a tree of processes with multiprocessing.Process. When I send a signal to a parent process, the process enters a join state:
[INFO/MainProcess] process shutting down
[INFO/MainProcess] calling join() for process Process-1
I am already catching the signal with a signal handler and then calling sys.exit(1). Is there something I can call before sys.exit(1) that will prevent this process from waiting for its child?

You can avoid this by setting the daemon property to True on your child processes. From the multiprocessing.Process docs (emphasis mine):
daemon
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits.
Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
So if p.daemon == True, your parent process will just kill your child process, rather than join it. Note, though, that your daemonic processes cannot create their own children (as stated in the docs).
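For illustration, a minimal sketch of this (the worker function and the sleep are made up stand-ins):

import multiprocessing
import time

def worker():
    # Stands in for a long-running child task.
    time.sleep(60)

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.daemon = True  # must be set before start()
    p.start()
    # When the parent exits here, the daemonic child is terminated
    # rather than joined, so there is no wait on the child.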

I solved this by using os._exit(1) instead of sys.exit(1).
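That works because os._exit() skips the interpreter's shutdown machinery, including the atexit handler that multiprocessing uses to join children. A minimal sketch of the approach, assuming SIGTERM is the shutdown path (the handler name is illustrative):

import os
import signal

def _handle_term(signum, frame):
    # Bypass the multiprocessing atexit hook that would join children.
    # Note this also skips flushing buffered output and other cleanup.
    os._exit(1)

signal.signal(signal.SIGTERM, _handle_term)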

Related

Terminate all processes before finishing the main method

I run a separate process for some logging tasks in parallel to my main process. They share some resources and I run into issues terminating the logging process before the main process finishes.
Are there any drawbacks to finishing the main Python program while keeping the subprocess alive? Can I be sure that it will be terminated when the main program exits? Or would it be better to call Process.terminate() as my last call in the main script?
As long as the processes you're launching are daemons, the main process will terminate them automatically before it exits:
daemon
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
This flag is automatically set for processes created by a multiprocessing.Pool, but defaults to False for Process objects. The parent process will call join() on all non-daemonic children, so if you have any of those running, they will prevent the parent from exiting until they exit themselves.
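To make the difference concrete, a small sketch (the logging worker is a placeholder):

import multiprocessing
import time

def logger():
    # Stands in for the parallel logging task.
    while True:
        time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=logger)
    p.daemon = True  # terminated automatically when the parent exits
    p.start()
    print('main exiting')
    # With daemon=False, the implicit join() on this never-ending
    # worker would keep the program from ever finishing.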

Python thread or process model where child thread or process can survive parent?

This is a design question about Python scripting using threads versus multiple processes. As I understand it, a thread spawned with the threading module cannot survive termination of the parent thread, i.e. the process. The parent thread must either do a join (i.e. wait, timeout notwithstanding) or exit; if there is no join, the child threads are terminated when the parent exits. This is due to the shared-resources model of threads, right?
Whereas with the multiprocessing module, a spawned process can survive, i.e. continue to completion, regardless of whether the parent process that created it exits or terminates. This assumes, of course, that the parent process never called join for the child process to complete.
Both threading and multiprocessing are designed to achieve parallelism within a program. Their goal is not to launch independent processes. Hence both packages implicitly terminate their parallel execution paths during preparation for interpreter shutdown.
Threads are subsets of processes; they cannot outlive the process that created them.
Active non-daemonic threads are implicitly joined upon interpreter shutdown by the _shutdown() function in the threading module, which is called during the finalization routine of the Python interpreter lifecycle. Daemonic threads simply end with the interpreter process.
If processes created via multiprocessing are still alive when the interpreter prepares to shut down, they are handled by _exit_function(), which has been registered as an exit handler via atexit. As with threading, multiprocessing joins non-daemonic child processes; on daemonic children, terminate() is called.
If you want to launch processes from a Python program and have that program exit afterwards, use subprocess.Popen. If you are on a POSIX platform, you might also want to take a look at python-daemon.
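For example, a sketch of launching a process meant to outlive the launcher (the script name is a placeholder; start_new_session requires Python 3.2+ and a POSIX platform):

import subprocess

# Put the child in its own session so it is not tied to the
# launcher's process group or controlling terminal.
subprocess.Popen(['python', 'long_running_job.py'],
                 start_new_session=True)
# The launcher can exit now; the child keeps running.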

Multiprocess Daemon Not Terminating on Parent Exit

I have a Python 2.7 multiprocessing Process which will not exit on parent process exit. I've set the daemon flag which should force it to exit on parent death. The docs state that:
"When a process exits, it attempts to terminate all of its daemonic child processes."
p = Process(target=_serverLaunchHelper, args=args)
p.daemon = True
print p.daemon # prints True
p.start()
When I terminate the parent process via a kill command, the daemon is left alive and running (which blocks the port on the next run). The child process starts a SimpleHTTPServer-style server and calls serve_forever without doing anything else. My guess is that the "attempts" part of the docs means that the blocking server process is preventing the process from dying and it gets orphaned as a result. I could have the child push the serving to another thread and have the main thread check for parent process id changes, but this seems like a lot of code just to replicate the daemon functionality.
Does someone have insight into why the daemon flag isn't working as described? This is repeatable on Windows 8 (64-bit) and an Ubuntu 12 (32-bit) VM.
A boiled down version of the process function is below:
import SocketServer  # Python 2; this module is socketserver in Python 3

def _serverLaunchHelper(port):
    # Handler is defined elsewhere in the full program.
    httpd = SocketServer.TCPServer(("", port), Handler)
    httpd.serve_forever()
When a process exits, it attempts to terminate all of its daemonic child processes.
The key word here is "attempts". Also, "exits".
Depending on your platform and implementation, it may be that the only way to get daemonic child processes terminated is to do so explicitly. If the parent process exits normally, it gets a chance to do so explicitly, so everything is fine. But if the parent process is terminated abruptly, it doesn't.
For CPython in particular, if you look at the source, terminating daemonic processes is handled the same way as joining non-daemonic processes: by walking active_children() in an atexit function. So, your daemons will be killed if and only if your atexit handlers get to run. And, as that module's docs say:
Note: the functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called.
Depending on how you're killing the parent, you might be able to work around this by adding a signal handler to intercept abrupt termination. But you might not: on POSIX, for example, SIGKILL cannot be intercepted, so if you kill -9 $PARENTPID, this isn't an option.
Another option is to kill the process group instead of just the parent process. For example, if your parent has PID 12345, kill -- -12345 on Linux will kill it and all of its children (assuming you haven't done anything fancy).
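The same thing can be done from Python on POSIX; a sketch (note that the sender is in the group too, so this also terminates the caller unless it handles SIGTERM):

import os
import signal

# Children inherit the parent's process group by default, so signaling
# the group reaches the whole tree (assuming nothing changed its group).
os.killpg(os.getpgrp(), signal.SIGTERM)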

How to Detect in Sub Process When Parent Process Has Died?

In Python, I have a parent process that spawns a handful of child processes. I've run into a situation where, due to an unhandled exception, the parent process was dying and the child processes were left orphaned. How do I get the child processes to recognize that they've lost their parent?
I tried some code that hooks the child process up to every available signal and none of them were fired. I could theoretically put a giant try/except around the parent process to ensure that it at least fires a sigterm to the children, but this is inelegant and not foolproof. How can I prevent orphaned processes?
on UNIX (including Linux):
import os

def is_parent_running():
    # Signal 0 performs error checking only; no signal is delivered.
    try:
        os.kill(os.getppid(), 0)
        return True
    except OSError:
        return False
Note that on UNIX, signal 0 is not a real signal. It is used just to test whether a given process exists. See the manual for the kill command.
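A related variant avoids signals entirely: record the parent's PID once and notice when os.getppid() changes, since orphaned children are reparented (typically to PID 1). A sketch of a child-side polling loop:

import os
import time

original_ppid = os.getppid()
while os.getppid() == original_ppid:
    time.sleep(1)
# The parent has exited and this process was reparented;
# clean up and exit here.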
You can use socketpair() to create a pair of Unix domain sockets before creating the subprocess. Have the parent keep one end open and the child the other. When the parent exits, its end of the socket will be shut down. The child will then know the parent exited because it can select()/poll() for read events on its socket, and it will receive end-of-file at that point.
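A POSIX-only sketch of that technique using os.fork() (with multiprocessing the same idea works under the fork start method):

import os
import select
import socket

parent_sock, child_sock = socket.socketpair()
pid = os.fork()
if pid == 0:
    # Child: close the parent's end, then block until ours hits EOF.
    parent_sock.close()
    select.select([child_sock], [], [])  # becomes readable on EOF
    if child_sock.recv(1) == b'':
        print('parent is gone, exiting')
    os._exit(0)
else:
    # Parent: close the child's end and carry on. When this process
    # exits for any reason, the OS closes parent_sock and the child's
    # recv() returns end-of-file.
    child_sock.close()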

disabling "joining" when process shuts down

Is there a way to stop the multiprocessing Python module from trying to call & wait on join() on child processes of a parent process shutting down?
2010-02-18 10:58:34,750 INFO calling join() for process procRx1
I want the process to which I sent a SIGTERM to exit as quickly as possible (i.e. "fail fast") instead of waiting for several seconds before finally giving up on the join attempt.
Clarifications: I have a "central process" which creates a bunch of "child processes". I am looking for a way to cleanly process a "SIGTERM" signal from any process in order to bring down the whole process tree.
Have you tried explicitly using Process.terminate?
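For example, from a shutdown path in the central process, something like this (a sketch; the helper name is illustrative, and multiprocessing.active_children() returns the live child Process objects):

import multiprocessing

def shutdown_children():
    # Forcibly stop every live child this process created.
    for p in multiprocessing.active_children():
        p.terminate()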
You could try joining in a loop with a timeout (1 sec?) and checking if the thread is still alive, something like:
while True:
    a_thread.join(1)
    if not a_thread.isAlive():
        break

Terminating a_thread will trigger the break clause.
Sounds like setting your subprocess' flag Process.daemon = True may be what you want:
Process.daemon:
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
