Child process detecting the parent process' death in Python

Is there a way for a child process in Python to detect if the parent process has died?

If your Python process is running under Linux, and the prctl() system call is exposed, you can use PR_SET_PDEATHSIG.
This makes the kernel send a signal of your choice to the child when the parent process dies.
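A minimal sketch of that approach via ctypes (Linux only; PR_SET_PDEATHSIG is the constant from <sys/prctl.h>, and the call must be made in the child, e.g. at the top of the child's main or via Popen's preexec_fn):
import ctypes
import signal

PR_SET_PDEATHSIG = 1  # from <sys/prctl.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)
# Ask the kernel to deliver SIGTERM to this process when its parent dies.
if libc.prctl(PR_SET_PDEATHSIG, signal.SIGTERM) != 0:
    raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")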

Assuming the parent is alive when you start checking, you can test whether it is still alive in a busy loop, using psutil:
import os
import time

import psutil

me = psutil.Process(os.getpid())
parent = me.parent()  # parent() is a method in current psutil
while True:
    if parent is not None and parent.is_running():
        # still alive (is_running() also guards against PID reuse)
        time.sleep(0.1)
        continue
    else:
        print("my parent is gone")
        break
Not very nice but...

The only reliable way I know of is to create a pipe specifically for this purpose. The child has to repeatedly attempt to read from the pipe, preferably in a non-blocking fashion or using select. It will see end-of-file (or an error) once the pipe has no writers left, presumably because of the parent's death.
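A sketch of that pipe trick with os.fork() and select (the timing and the work placeholder are illustrative):
import os
import select
import sys

r, w = os.pipe()                   # create the pipe before forking
pid = os.fork()
if pid == 0:                       # child
    os.close(w)                    # child keeps only the read end
    while True:
        ready, _, _ = select.select([r], [], [], 1.0)
        if ready and os.read(r, 1) == b"":
            print("parent is gone")    # EOF: the last write end was closed
            sys.exit(0)
        # ... do the child's real work between checks ...
else:                              # parent
    os.close(r)                    # parent holds the write end, never writes;
                                   # it is closed automatically when the parent dies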

You might get away with reading your parent process' ID very early in your process, and then checking, but of course that is prone to race conditions. The parent that did the spawn might have died immediately, and even before your process got to execute its first instruction.
Unless you have a way of verifying if a given PID refers to the "expected" parent, I think it's hard to do reliably.
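One way to write that check, subject to exactly the race just described (on Linux an orphaned child is reparented, typically to PID 1, so getppid() changes when the parent dies):
import os
import time

original_parent = os.getppid()     # read the parent PID as early as possible
while os.getppid() == original_parent:
    time.sleep(0.1)
print("parent is gone (we have been reparented)")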

Related

Does a process always need to be terminated?

I am using a Python process to run one of my functions like so:
from multiprocessing import Process

Process1 = Process(target=someFunction)
Process1.start()
Now that function has no loop or anything; it just does its thing, then ends. Does the Process die with it, or do I always need to call:
Process1.terminate()
afterwards?
The child process will exit by itself once the function returns, so Process1.terminate() is unnecessary in that regard. Avoiding terminate() is especially important if the child and parent share any resources. From the Python documentation:
Avoid terminating processes
Using the Process.terminate method to stop a process is liable to cause any shared resources (such as locks, semaphores, pipes and queues) currently being used by the process to become broken or unavailable to other processes.
Therefore it is probably best to only consider using Process.terminate on processes which never use any shared resources.
However, if you want the parent process to wait for the child process to finish (perhaps the child process is modifying something that the parent will access afterwards), use Process1.join() to block the parent process until the child process completes. This is generally good practice when using child processes, and it avoids zombie processes and orphaned children.
No, as per the documentation it only sends a SIGTERM or TerminateProcess() to the process in question. If it has already exited then there is nothing to terminate.
However, it is always good practice to use exit codes in your subprocesses:
import sys
sys.exit(1)
And then check the exit code once you know the process has terminated:
if Process1.exitcode != 0:  # exitcode is an attribute, not a method
    errorHandle()
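Putting the two answers together, a self-contained sketch (the failing child and the error handling stand in for the question's own code):
import sys
from multiprocessing import Process

def someFunction():
    sys.exit(1)                      # report failure via the exit code

if __name__ == "__main__":
    Process1 = Process(target=someFunction)
    Process1.start()
    Process1.join()                  # wait for, and reap, the child
    if Process1.exitcode != 0:       # 0 is a clean exit; negative means killed by a signal
        print("child failed")        # stand-in for errorHandle()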

multiprocessing.Queue hanging when Process dies

I have a subprocess via multiprocessing.Process and a queue via multiprocessing.Queue.
The main process is using multiprocessing.Queue.get() to get some new data. I don't want to have a timeout there and I want it to be blocking.
However, when the child process dies for whatever reason (killed manually by the user, a segfault, etc.), Queue.get() will just hang forever.
How can I avoid that?
I think multiprocessing.Queue is not what I want.
I'm using now
parent_conn, child_conn = multiprocessing.Pipe(duplex=True)
to get two multiprocessing.Connection objects. Then I os.fork() or use multiprocessing.Process. In the child, I do:
parent_conn.close()
# read/write on child_conn
In the parent (after the fork), I do:
child_conn.close()
# read/write on parent_conn
That way, when I call recv() on the connection, it raises EOFError if the other side has died in the meantime.
Note that this works only for a single child. I guess Queue is meant for the case where you have multiple children. In that case, you would probably have some manager anyway which watches whether all the children are alive and restarts them accordingly.
The Queue has no way of knowing when there are no possible writers left. You could pass the object to any number of subprocesses, and it does not track which of them still hold it, so it has to keep waiting even if a subprocess dies. A queue is not a file descriptor that is closed automatically when the child dies.
What you are looking for is some kind of supervisor in the parent process that notices when children die unexpectedly and handles that situation in whatever way you think appropriate. You can do this by catching the SIGCHLD signal, checking Process.is_alive, or using Process.join in a thread. A simple implementation would use the timeout parameter of the Queue.get call and do a Process.is_alive check whenever it times out.
If you have a bit more control over the death of the child process, it should send an "EOF"-type object (None, or some kind of marker that it is done) to the queue so your parent process can handle it correctly.
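A sketch of that timeout-plus-liveness pattern (get_or_die is a hypothetical helper name):
import queue  # multiprocessing.Queue.get raises queue.Empty on timeout

def get_or_die(q, worker, interval=1.0):
    # Block on the queue in short intervals; if the worker has died in the
    # meantime, raise instead of hanging forever.
    while True:
        try:
            return q.get(timeout=interval)
        except queue.Empty:
            if not worker.is_alive():
                raise RuntimeError("worker died with exit code %s"
                                   % worker.exitcode)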

Parallel programming using python's multiprocessing and process defunc

I have a problem creating a parallel program using multiprocessing. AFAIK when I start a new process using this module, I should call os.wait() or childProcess.join() to get its exit status. But placing those calls in my program can block the main process if something happens to the child process and it hangs.
The problem is that if I don't do that, the child processes go zombie (and are listed as something like "python <defunct>" in the top listing).
Is there any way to avoid waiting for child processes to end, avoid creating zombie processes, and/or keep the main process from having to care so much about its child processes?
Though ars' answer should solve your immediate issues, you might consider looking at celery: http://ask.github.com/celery/index.html. It's a relatively developer-friendly approach to accomplishing these goals and more.
You may have to provide more information or actual code to figure this out. Have you been through the documentation, in particular the sections labeled "Warning"? For example, you may be facing something like this:
Warning: As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread()), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
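If you mainly need the zombies reaped without blocking the main process, note that multiprocessing.active_children() has the documented side effect of joining any children that have already finished. A sketch:
import multiprocessing
import time

def work():
    pass                                      # stand-in for the real child task

if __name__ == "__main__":
    for _ in range(4):
        multiprocessing.Process(target=work).start()
    while multiprocessing.active_children():  # joins (reaps) finished children
        time.sleep(0.5)                       # the main process can do its own work here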

How do I check if a process is alive in Python on Linux?

I have a process id in Python. I know I can kill it with os.kill(), but how do I check if it is alive ? Is there a built-in function or do I have to go to the shell?
Use the subprocess module to spawn the process.
There is a proc.poll() method: it returns None if the process is still alive; otherwise it returns the process's returncode.
http://docs.python.org/library/subprocess.html
os.kill does not kill processes, it sends them signals (it's poorly named).
If you send signal 0, you can determine whether you are allowed to send other signals. An error code will indicate whether it's a permission problem or a missing process.
See man 2 kill for more info.
Also, if the process is your child, you can get a SIGCHLD when it dies, and you can use one of the wait calls to deal with it.
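A sketch of the signal-0 probe (pid_alive is a hypothetical helper name):
import os

def pid_alive(pid):
    # Signal 0 delivers nothing, but the error reporting still happens.
    try:
        os.kill(pid, 0)
    except ProcessLookupError:   # ESRCH: no such process
        return False
    except PermissionError:      # EPERM: the process exists but is not ours
        return True
    return True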
With psutil you can check if a process id exists:
import psutil
print(psutil.pid_exists(1234))

Recover process with subprocess.Popen?

I have a python program that uses subprocess.Popen to launch another process (a python process or whatever), and after launching it I save the child's PID to a file. Let's suppose that suddenly the parent process dies (because of an exception or whatever). Is there any way to get access again to the object returned by Popen?
I mean, the basic idea is to read the file first, and if it exists and has a PID written in it, attach to that process somehow in order to learn its return code or whatever. If there isn't a PID, then launch the process with Popen.
Thanks a lot!!
The Popen object is effectively just a wrapper for the child process's PID, stdin, stdout, and stderr, plus some convenience functions for using those.
So the question is why do you need access to the Popen object? Do you want to communicate with the child, terminate it, or check whether it's still running?
In any case there is no way to reacquire a Popen object for an already-running process.
The proper way to approach this is to launch the child as a daemon, like Tobu suggested. Part of the procedure for daemonising a process is to close stdin and stdout, so you cannot use those to talk to the child process. Instead most daemons use either pipes or sockets to allow clients to connect to them and to send them messages.
The easiest way to talk to the child is to open a named pipe from the child process at e.g. /etc/my_pipe, open that named pipe from the parent / controlling process, and write / read to / from it.
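A sketch of that named-pipe handshake (the path and the handle_command helper are illustrative; /tmp is used here because writing under /etc normally requires root):
import os

FIFO = "/tmp/my_pipe"

# In the daemonised child: create the FIFO and read commands from it.
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)
with open(FIFO) as f:             # open() blocks until a writer connects
    for line in f:
        handle_command(line)      # hypothetical command handler

# In the parent / controlling process:
#     with open(FIFO, "w") as f:
#         f.write("status\n")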
After a quick look at python-daemon it seems to me that python-daemon will help you daemonise your child process, which is tricky to get right, but it doesn't help you with the messaging side of things.
But like I said, I think you need to tell us why you need a Popen object for the child process before we can help you any further.
If a process dies, all its open file handles are closed. This includes any unnamed pipes created by popen(). So, no, there's no way to recover a Popen object from just a PID. The OS won't even consider your new process the parent, so you won't even get SIGCHLD signals (though waitpid() might still work).
I'm not sure if the child is guaranteed to survive, either, since a write to a pipe with no reader (namely, the redirected stdout of the child) should kill the child with a SIGPIPE.
If you want your parent process to pick up where the child left off, you need to spawn the child to write to a file, usually in /tmp or /var/log, and have it record its PID like you are now (the usual location is /var/run). (Having it write to a named pipe risks getting it killed with SIGPIPE as above.) If you suffix your filename with the PID, then it becomes easy for the manager process to figure out which file belongs to which daemon.
Looks like you're trying to write a daemon, and it needs pidfile support. You can't go wrong with python-daemon.
For example:
import daemon
import lockfile
import os
with daemon.DaemonContext(pidfile=lockfile.FileLock('/var/run/spam.pid')):
    os.execl('/path/to/prog', args…)
