multiprocessing.Queue hanging when Process dies - python

I have a subprocess via multiprocessing.Process and a queue via multiprocessing.Queue.
The main process is using multiprocessing.Queue.get() to get some new data. I don't want to have a timeout there and I want it to be blocking.
However, when the child process dies for whatever reason (killed manually by the user via kill, a segfault, etc.), Queue.get() will just hang forever.
How can I avoid that?

I think multiprocessing.Queue is not what I want.
I'm now using
parent_conn, child_conn = multiprocessing.Pipe(duplex=True)
to get two multiprocessing.Connection objects. Then I os.fork() or use multiprocessing.Process. In the child, I do:
parent_conn.close()
# read/write on child_conn
In the parent (after the fork), I do:
child_conn.close()
# read/write on parent_conn
That way, when I call recv() on the connection, it will raise an exception (EOFError) when the child/parent dies in the meantime.
Note that this works only for a single child. I guess Queue is meant for when you want multiple children. In that case, you would probably have some manager anyway which watches whether all the children are alive and restarts them accordingly.
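A minimal sketch of that pattern (the payloads sent here are placeholders):

import multiprocessing

def child_main(parent_conn, child_conn):
    parent_conn.close()                  # the child only uses child_conn
    for item in range(3):
        child_conn.send(item)
    # when the child exits, its end of the pipe is closed by the OS

if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe(duplex=True)
    proc = multiprocessing.Process(target=child_main, args=(parent_conn, child_conn))
    proc.start()
    child_conn.close()                   # the parent only uses parent_conn
    try:
        while True:
            print(parent_conn.recv())
    except EOFError:
        # raised once the child has exited (or died) and no data is left
        print("child is gone")
    proc.join()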

The Queue has no way of knowing when it no longer has any possible writers. You could pass the object to any number of subprocesses, and it cannot tell whether you passed it to any given one. So it has to keep waiting, even if a subprocess dies; a queue is not a file descriptor that is automatically closed when the child dies.
What you are looking for is some kind of supervisor in the parent process that notices when children die unexpectedly and handles that situation in whatever way you think appropriate. You can do this by catching the SIGCHLD signal, checking Process.is_alive, or using Process.join in a thread. A simple implementation would use the timeout parameter in the Queue.get call and do a Process.is_alive check whenever the get times out.
If you have a bit more control over the death of the child process, have it send an "EOF"-type object (None, or some other marker that it is done) to the queue so your parent process can handle it correctly.
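A sketch of that approach, combining the Queue.get timeout with a Process.is_alive check and a sentinel marker (the worker body is a placeholder):

import queue                      # only for the Empty exception
import multiprocessing

SENTINEL = None                   # "EOF"-type marker sent by a child that finishes cleanly

def worker(q):
    for item in range(5):         # placeholder for the real work
        q.put(item)
    q.put(SENTINEL)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    proc = multiprocessing.Process(target=worker, args=(q,))
    proc.start()
    while True:
        try:
            item = q.get(timeout=1)       # block, but wake up periodically
        except queue.Empty:
            if not proc.is_alive():       # the child died without sending the sentinel
                print("worker died unexpectedly")
                break
            continue
        if item is SENTINEL:              # clean shutdown
            break
        print("got", item)
    proc.join()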

Related

How do I ensure children of a subprocess don't get SIGINT in Python (on Linux)?

I'm trying to use a custom job pool (using multiprocessing.Process), and it works nicely, except SIGINT gets passed on from the parent process all the way to the children of the pool workers, Chrome instances in this case. I used signal.signal(signal.SIGINT, signal.SIG_IGN) in the pool workers, which seems to keep them from getting (or at least responding to) the SIGINT, but I'm using third-party code (Selenium) which creates subprocesses that get the SIGINT and stop because of it. I want only the parent process to receive and handle the SIGINT. How do I do this?
My inclination would be to give the pool workers /dev/null as STDIN, so it would give that to its children rather than the terminal, which, if I understand correctly, will keep it from receiving SIGINT from Ctrl+C. However, it seems that multiprocessing.Process already does that, though presumably too late (after the process is created; using a double-fork should get past this). Is there a good way to do that (or anything else that blocks SIGINT) through multiprocessing, or do I need a different solution to work around that? Maybe there's an easy way to use multiprocessing.Queue with Popen(), so I can use that instead?
To be clear, I still need to be able to log to the console (indirectly through the parent process is fine, maybe better even).
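A sketch of the workaround the question already describes: ignoring SIGINT inside each worker before it does any real work (the task handling here is a placeholder). As the question notes, third-party code may still spawn subprocesses that receive the terminal's SIGINT, so this only shields the workers themselves:

import signal
import multiprocessing

def worker(task_queue):
    # Ignore SIGINT in this worker; the parent keeps the default handler.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    for task in iter(task_queue.get, None):   # None is the shutdown marker
        print("processing", task)             # placeholder for the real work

if __name__ == "__main__":
    task_queue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=worker, args=(task_queue,))
               for _ in range(2)]
    for w in workers:
        w.start()
    try:
        for task in range(10):
            task_queue.put(task)
    except KeyboardInterrupt:
        # Only the parent reacts to Ctrl+C; clean up however is appropriate.
        pass
    for _ in workers:
        task_queue.put(None)
    for w in workers:
        w.join()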

Does a process always need to be terminated?

I am using a Python process to run one of my functions like so:
Process1 = Process(target = someFunction)
Process1.start()
Now, that function has no looping or anything; it just does its thing and then ends. Does the Process die with it, or do I always need to drop a:
Process1.terminate()
Afterwards?
The child process will exit by itself once the function returns; Process1.terminate() is unnecessary in that regard. In fact, terminate() is best avoided, especially when the child and parent share any resources. From the Python documentation:
Avoid terminating processes
Using the Process.terminate method to stop a process is liable to cause any shared resources (such as locks, semaphores, pipes and queues) currently being used by the process to become broken or unavailable to other processes.
Therefore it is probably best to only consider using Process.terminate on processes which never use any shared resources.
However, if you want the parent process to wait for the child process to finish (perhaps the child process is modifying something that the parent will access afterwards), then you'll want to use Process1.join() to block the parent process from continuing until the child process completes. This is generally good practice when using child processes, to avoid zombie processes or orphaned children.
No; as per the documentation, terminate() only sends a SIGTERM (or calls TerminateProcess() on Windows) to the process in question. If the process has already exited, there is nothing to terminate.
However, it is always good practice to use exit codes in your subprocesses:
import sys
sys.exit(1)
And then check the exit code once you know the process has terminated:
if Process1.exitcode != 0:
    errorHandle()
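Putting the join() advice and the exit-code check together, a minimal sketch (the failing function is made up for illustration):

import sys
import multiprocessing

def someFunction():
    # ... do the work ...
    sys.exit(1)              # non-zero exit code tells the parent something went wrong

def errorHandle():
    print("child reported an error")

if __name__ == "__main__":
    Process1 = multiprocessing.Process(target=someFunction)
    Process1.start()
    Process1.join()                  # wait for the child; no terminate() needed
    if Process1.exitcode != 0:       # 0 is success; a negative value means killed by a signal
        errorHandle()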

Gracefully Terminating Python Threads

I am trying to write a Unix client program that is listening to a socket, stdin, and reading from file descriptors. I assign each of these tasks to an individual thread and have them successfully communicating with the "main" application using synchronized queues and a semaphore. The problem is that when I want to shut down these child threads, they are all blocking on input. Also, the threads cannot register signal handlers, since in Python only the main thread of execution is allowed to do so.
Any suggestions?
There is no good way to work around this, especially when the thread is blocking.
I had a similar issue (Python: How to terminate a blocking thread) and the only way I was able to stop my threads was to close the underlying connection. That caused the blocking thread to raise an exception, which then allowed me to check the stop flag and close.
Example code:
import threading
from threading import Thread

class Example(object):
    def __init__(self):
        self.stop = threading.Event()
        self.connection = Connection()        # placeholder for the real connection object
        self.mythread = Thread(target=self.dowork)
        self.mythread.start()

    def dowork(self):
        while not self.stop.is_set():
            try:
                blockingcall()                # placeholder for the blocking read
            except CommunicationException:    # raised once the connection is closed
                pass

    def terminate(self):
        self.stop.set()
        self.connection.close()               # unblocks the thread stuck in blockingcall()
        self.mythread.join()
Another thing to note is that blocking operations commonly offer a timeout parameter. If you have that option, I would consider using it. My last comment is that you could always make the thread daemonic.
From the Python documentation:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
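For example (a sketch; the endless loop just stands in for a blocked worker):

import time
import threading

def poll_forever():
    while True:
        time.sleep(1)        # stands in for a blocking call that never returns

t = threading.Thread(target=poll_forever, daemon=True)
t.start()
# When the main thread finishes, the program exits even though t is still running.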
Also, the threads cannot register signal handlers
Using signals to kill threads is potentially horrible, especially in C, and especially if you allocate memory as part of the thread, since that memory won't be freed when that particular thread dies (it belongs to the heap of the process). There is no garbage collection in C, so if the pointer goes out of scope, the memory simply stays allocated. So be careful with that one: only do it that way in C if you're going to kill all the threads and end the process anyway, so that the memory is handed back to the OS. Adding and removing threads from a thread pool that way, for example, will give you a memory leak.
The problem is that when I want to shutdown these child threads they are all blocking on input.
Funnily enough, I've been fighting with the same thing recently. The solution is literally: don't make blocking calls without a timeout. So, for example, what you ideally want is:
def threadfunc(running):
    while running.is_set():        # running is an Event that starts out set
        blockingcall(timeout=1)    # placeholder for the real blocking call
where running is passed in from the controlling thread. I've used this with multiprocessing rather than threading, but in both cases you pass an Event() object and check is_set(). You asked for design patterns; that's the basic idea.
Then, when you want this thread to end, you run:
running.clear()
mythread.join()
and your main thread should then let your client thread handle its last call and return, and the whole program folds up nicely.
What do you do if you have a blocking call without a timeout? Use the asynchronous option and sleep (that is, call whatever method you have to suspend the thread for a period of time so you're not spinning) if you need to. There's no other way around it.
See these answers:
Python SocketServer
How to exit a multithreaded program?
Basically: instead of blocking on recv(), use select() with a timeout to check the socket for readability, and poll a quit flag whenever select() times out.
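A sketch of that pattern (the socket and the handling of the received data are assumptions):

import select
import threading

quit_flag = threading.Event()

def reader_thread(sock):
    while not quit_flag.is_set():
        # Wait up to 1 second for the socket to become readable.
        readable, _, _ = select.select([sock], [], [], 1.0)
        if not readable:
            continue                              # timed out: re-check the quit flag
        data = sock.recv(4096)
        if not data:                              # peer closed the connection
            break
        print("received", len(data), "bytes")     # placeholder for the real processing

# To shut the thread down from the main thread:
#     quit_flag.set(); thread.join()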

disabling "joining" when process shuts down

Is there a way to stop the multiprocessing Python module from trying to call & wait on join() on child processes of a parent process shutting down?
2010-02-18 10:58:34,750 INFO calling join() for process procRx1
I want the process to which I sent a SIGTERM to exit as quickly as possible (i.e. "fail fast") instead of waiting for several seconds before finally giving up on the join attempt.
Clarifications: I have a "central process" which creates a bunch of "child processes". I am looking for a way to cleanly process a "SIGTERM" signal from any process in order to bring down the whole process tree.
Have you tried explicitly using Process.terminate?
You could try joining in a loop with a timeout (1 sec?) and checking if the thread is still alive, something like:
while True:
    a_thread.join(1)
    if not a_thread.is_alive():
        break
Terminating a_thread will then trigger the break.
Sounds like setting your subprocess's daemon flag to True (Process.daemon = True) may be what you want, since daemonic children are terminated rather than joined when the parent exits:
Process.daemon:
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services; they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
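A sketch of what that looks like (the child's receive loop is a placeholder):

import time
import multiprocessing

def procRx():
    while True:
        time.sleep(1)        # placeholder for the real receive loop

if __name__ == "__main__":
    child = multiprocessing.Process(target=procRx, name="procRx1", daemon=True)
    child.start()
    time.sleep(2)
    # When the parent exits here, the daemonic child is terminated
    # immediately instead of being joined.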

Child process detecting the parent process' death in Python

Is there a way for a child process in Python to detect if the parent process has died?
If your Python process is running under Linux and the prctl() system call is exposed, you can use prctl(PR_SET_PDEATHSIG, ...). This causes a signal to be sent to the child when the parent process dies.
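A sketch of that on Linux using ctypes (PR_SET_PDEATHSIG is 1 in <sys/prctl.h>; note the signal is tied to the death of the parent thread that created the child, and there is a race if the parent dies before prctl() runs):

import os
import signal
import ctypes
import multiprocessing

PR_SET_PDEATHSIG = 1          # from <sys/prctl.h>

def child():
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    # Ask the kernel to send this process SIGTERM when the parent dies.
    libc.prctl(PR_SET_PDEATHSIG, signal.SIGTERM)
    while True:
        signal.pause()        # just wait; SIGTERM arrives once the parent is gone

if __name__ == "__main__":
    multiprocessing.Process(target=child).start()
    os._exit(0)               # simulate the parent dying abruptly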
Assuming the parent is alive when you start, you can check whether it is still alive in a busy loop using psutil:
import os
import time
import psutil

parent = psutil.Process(os.getpid()).parent()   # snapshot the parent process at startup
while True:
    if parent is not None and parent.is_running():
        # still alive
        time.sleep(0.1)
        continue
    else:
        print("my parent is gone")
        break
Not very nice but...
The only reliable way I know of is to create a pipe specifically for this purpose. The child has to repeatedly attempt to read from the pipe, preferably in a non-blocking fashion or using select. It will see end-of-file once the write end of the pipe no longer exists (presumably because of the parent's death).
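A sketch of that pipe trick (Unix-only; a bare os.fork() is used for brevity, and a real child would check the pipe alongside its actual work):

import os
import select

def spawn_child():
    r, w = os.pipe()              # the parent keeps the write end open but never writes
    pid = os.fork()
    if pid == 0:                  # child
        os.close(w)
        # Since nothing is ever written, r only becomes readable at EOF,
        # i.e. once every copy of the write end is closed (the parent died).
        select.select([r], [], [])
        print("parent is gone, cleaning up")
        os._exit(0)
    os.close(r)                   # parent
    return pid, w                 # keep w open for the parent's lifetime

if __name__ == "__main__":
    spawn_child()
    # ... the parent does its work; whenever it exits, the child notices.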
You might get away with reading your parent process' ID very early in your process, and then checking, but of course that is prone to race conditions. The parent that did the spawn might have died immediately, and even before your process got to execute its first instruction.
Unless you have a way of verifying if a given PID refers to the "expected" parent, I think it's hard to do reliably.
