I have a Python 2.7 multiprocessing Process which will not exit when the parent process exits. I've set the daemon flag, which should force the child to exit when the parent dies. The docs state:
"When a process exits, it attempts to terminate all of its daemonic child processes."
p = Process(target=_serverLaunchHelper, args=args)
p.daemon = True
print p.daemon # prints True
p.start()
When I terminate the parent process via a kill command, the daemon is left alive and running (which blocks the port on the next run). The child process starts a SimpleHTTPServer and calls serve_forever without doing anything else. My guess is that the "attempts" part of the docs means that the blocking server call prevents process death, and the child gets orphaned as a result. I could have the child push the serving to another Thread and have the main thread check for parent-process-id changes, but that seems like a lot of code just to replicate the daemon functionality.
Does someone have insight into why the daemon flag isn't working as described? This is repeatable on Windows 8 64-bit and an Ubuntu 12 32-bit VM.
A boiled down version of the process function is below:
def _serverLaunchHelper(port):
    # Handler is the request handler class (elided from this boiled-down version)
    httpd = SocketServer.TCPServer(("", port), Handler)
    httpd.serve_forever()
When a process exits, it attempts to terminate all of its daemonic child processes.
The key word here is "attempts". Also, "exits".
Depending on your platform and implementation, it may be that the only way to get daemonic child processes terminated is to do so explicitly. If the parent process exits normally, it gets a chance to do so explicitly, so everything is fine. But if the parent process is terminated abruptly, it doesn't.
For CPython in particular, if you look at the source, terminating daemonic processes is handled the same way as joining non-daemonic processes: by walking active_children() in an atexit function. So, your daemons will be killed if and only if your atexit handlers get to run. And, as that module's docs say:
Note: the functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called.
Depending on how you're killing the parent, you might be able to work around this by adding a signal handler to intercept abrupt termination. But you might not: on POSIX, for example, SIGKILL cannot be intercepted, so if you kill -9 $PARENTPID, this isn't an option.
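A minimal sketch of that workaround (POSIX-flavored; the time.sleep target is just a stand-in for the server child from the question): converting SIGTERM into a normal SystemExit lets the atexit machinery run, which terminates the daemonic children.

import signal
import sys
import time
from multiprocessing import Process

def _handle_term(signum, frame):
    # Raising SystemExit unwinds the interpreter normally, so the
    # multiprocessing atexit handler runs and terminates daemonic children.
    sys.exit(1)

signal.signal(signal.SIGTERM, _handle_term)

p = Process(target=time.sleep, args=(3600,))  # stand-in for the server child
p.daemon = True
p.start()
signal.pause()  # parent idles; a SIGTERM now exits cleanly and reaps the daemon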
Another option is to kill the process group instead of just the parent process. For example, if your parent has PID 12345, kill -- -12345 on Linux will kill it and all of its children (assuming you haven't done anything fancy).
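The Python equivalent (POSIX only, and assuming the children are still in the parent's process group) would be something like:

import os
import signal

parent_pid = 12345  # the example PID from above
os.killpg(os.getpgid(parent_pid), signal.SIGTERM)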
Related
I run a separate process for some logging tasks in parallel to my main process. They share some resources and I run into issues terminating the logging process before the main process finishes.
Are there any drawbacks to finishing the main Python program while keeping the subprocess alive? Can I be sure that it will be terminated when the main program exits? Or would it be better to call Process.terminate() as my last call in the main script?
As long as the processes you're launching are daemons, the main process will terminate them automatically before it exits:
daemon
The process’s daemon flag, a Boolean value. This must be set
before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic
child processes.
Note that a daemonic process is not allowed to create child processes.
Otherwise a daemonic process would leave its children orphaned if it
gets terminated when its parent process exits. Additionally, these are
not Unix daemons or services, they are normal processes that will be
terminated (and not joined) if non-daemonic processes have exited.
This flag is automatically set for processes created by a multiprocessing.Pool, but defaults to False for Process objects. The parent process will try to call join() on all non-daemonic children, so any of those still running will prevent the parent from exiting until they exit themselves.
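A minimal sketch of the difference (the sleep length is arbitrary): at exit, the parent terminates the daemonic child but joins the non-daemonic one, so it blocks until that child finishes.

import time
from multiprocessing import Process

def worker():
    time.sleep(2)

daemon_child = Process(target=worker)
daemon_child.daemon = True   # terminated (not joined) at parent exit

normal_child = Process(target=worker)  # daemon defaults to False

daemon_child.start()
normal_child.start()
# Parent exits here: daemon_child is terminated immediately,
# while normal_child is joined, delaying exit by ~2 seconds.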
On OSX I create a tree of processes with multiprocessing.Process. When I send a signal to the parent process, the process enters a join state:
[INFO/MainProcess] process shutting down
[INFO/MainProcess] calling join() for process Process-1
I am already catching the signal with a signal handler and then calling sys.exit(1). Is there something I can call before sys.exit(1) that will prevent this process from waiting for its child?
You can avoid this by setting the daemon property to True on your child processes. From the multiprocessing.Process docs (emphasis mine):
daemon
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children
orphaned if it gets terminated when its parent process exits.
Additionally, these are not Unix daemons or services, they are normal
processes that will be terminated (and not joined) if non-daemonic
processes have exited.
So if p.daemon == True, your parent process will just kill your child process, rather than join it. Note, though, that your daemonic processes cannot create their own children (as stated in the docs).
I solved this by using os._exit(1) instead of sys.exit(1).
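That is, a signal handler along these lines (a sketch; note that os._exit() also skips atexit handlers, so daemonic children are not cleaned up automatically either):

import os
import signal

def _handle_signal(signum, frame):
    # Unlike sys.exit(), os._exit() terminates immediately:
    # no atexit handlers and no implicit join() on children.
    os._exit(1)

signal.signal(signal.SIGTERM, _handle_signal)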
I'm trying to create a background process for some heavy calculations from the main Pylons process. Here's the code:
p = Process(target=instance_process,
            args=(instance_tuple.instance, parent_pipe, child_pipe))
p.start()
The process is created and started, but it seems to be a fork of the main process: it is listening on the same port and the whole application hangs. What am I doing wrong?
Thanks in advance.
Process IS a fork. If you look through its implementation you'll find that Process.start() calls a fork. It does NOT, however, call any of the exec variations to change the execution context.
Still, this may have nothing to do with listening on the same port (unless the parent process is multi-threaded). At which point is the program hanging?
I know that when you shut down a Python program without terminating a child process created through multiprocessing, the program will hang until the child process terminates.
This might be caused if, for instance, you do not close the pipe between the processes.
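For example, with multiprocessing.Pipe each side should close the connection end it does not use, otherwise a reader can block forever waiting for end-of-file (a sketch; the names are illustrative):

from multiprocessing import Process, Pipe

def child(conn):
    conn.send("done")
    conn.close()

parent_conn, child_conn = Pipe()
p = Process(target=child, args=(child_conn,))
p.start()
child_conn.close()        # parent closes its copy of the child's end
print parent_conn.recv()  # prints "done"
p.join()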
In Python, I have a parent process that spawns a handful of child processes. I've run into a situation where, due to an unhandled exception, the parent process was dying and the child processes were left orphaned. How do I get the child processes to recognize that they've lost their parent?
I tried some code that hooks the child process up to every available signal, and none of them fired. I could theoretically put a giant try/except around the parent process to ensure that it at least sends SIGTERM to the children, but this is inelegant and not foolproof. How can I prevent orphaned processes?
On UNIX (including Linux):

import os

def is_parent_running():
    try:
        os.kill(os.getppid(), 0)  # signal 0 checks existence only
        return True
    except OSError:
        return False
Note that on UNIX, signal 0 is not a real signal; it is used just to test whether a given process exists. See the manual page for the kill command.
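One way to use this from the child (a sketch): capture the original parent PID once, because after the parent dies os.getppid() starts returning the new parent (usually init, PID 1), and then poll periodically.

import os
import time

original_ppid = os.getppid()  # capture before any reparenting happens

def parent_alive():
    try:
        os.kill(original_ppid, 0)
        return True
    except OSError:
        return False

while parent_alive():  # the child's main loop
    time.sleep(1)
# Parent is gone; clean up and exit here.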
You can use socketpair() to create a pair of unix domain sockets before creating the subprocess. Have the parent have one end open, and the child the other end open. When the parent exits, it's end of the socket will shut down. Then the child will know it exited because it can select()/poll() for read events from its socket and receive end of file at that time.
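A POSIX-only sketch of that idea (the names are illustrative): under the fork start method each process inherits both socket ends, so each side must close its copy of the other side's end or the end-of-file never arrives.

import os
import select
import socket
from multiprocessing import Process

def watch_parent(child_end, parent_end):
    parent_end.close()  # drop the inherited copy of the parent's end
    # Block until the parent's end closes; recv() returning an
    # empty string means end-of-file, i.e. the parent has exited.
    select.select([child_end], [], [])
    if child_end.recv(1) == "":
        os._exit(0)

parent_end, child_end = socket.socketpair()
p = Process(target=watch_parent, args=(child_end, parent_end))
p.start()
child_end.close()  # parent keeps only its own end

In a real child, this wait would typically run in a background thread alongside the actual work.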
Is there a way to stop the multiprocessing Python module from trying to call & wait on join() on child processes of a parent process shutting down?
2010-02-18 10:58:34,750 INFO calling join() for process procRx1
I want the process to which I sent a SIGTERM to exit as quickly as possible (i.e. "fail fast") instead of waiting for several seconds before finally giving up on the join attempt.
Clarifications: I have a "central process" which creates a bunch of "child processes". I am looking for a way to cleanly process a "SIGTERM" signal from any process in order to bring down the whole process tree.
Have you tried explicitly using Process.terminate?
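For instance, a sketch that terminates each child from a SIGTERM handler and then exits (the children list stands in for whatever collection of Process objects you keep):

import signal
import sys

children = []  # populate with the Process objects you start

def _shutdown(signum, frame):
    for child in children:
        child.terminate()  # sends SIGTERM to each child on POSIX
    sys.exit(1)

signal.signal(signal.SIGTERM, _shutdown)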
You could try joining in a loop with a timeout (1 sec?) and checking if the thread is still alive, something like:
while True:
    a_thread.join(1)
    if not a_thread.isAlive():
        break
Terminating a_thread will trigger the break clause.
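The equivalent for a multiprocessing.Process uses is_alive() (a sketch; the sleep target stands in for your child's work):

import time
from multiprocessing import Process

a_process = Process(target=time.sleep, args=(5,))
a_process.start()
while True:
    a_process.join(1)  # wait at most one second per attempt
    if not a_process.is_alive():
        break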
Sounds like setting your subprocess's flag Process.daemon = True may be what you want:
Process.daemon:
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.