I have a Python multiprocessing setup (i.e. worker processes) with custom signal handling, which prevents the workers from cleanly using multiprocessing themselves. (See the extended problem description below.)
The Setup
The master class that spawns all worker processes looks like the following (stripped down to the important parts).
It re-binds its own signals only to print Master teardown; the received signals are actually propagated down the process tree and must be handled by the workers themselves. This is achieved by re-binding the signals only after the workers have been spawned.
import logging
import signal
import sys

log = logging.getLogger(__name__)

class Midlayer(object):
    def __init__(self, nprocs=2):
        self.nprocs = nprocs
        self.procs = []

    def handle_signal(self, signum, frame):
        log.info('Master teardown')
        for p in self.procs:
            p.join()
        sys.exit()

    def start(self):
        # Start desired number of workers
        for _ in range(self.nprocs):
            p = Worker()
            self.procs.append(p)
            p.start()

        # Bind signals for master AFTER workers have been spawned and started
        signal.signal(signal.SIGINT, self.handle_signal)
        signal.signal(signal.SIGTERM, self.handle_signal)

        # Serve forever, only exit on signals
        for p in self.procs:
            p.join()
The worker class subclasses multiprocessing.Process and implements its own run() method.
In this method, it connects to a distributed message queue and polls the queue for items forever. "Forever" means: until the worker receives SIGINT or SIGTERM. The worker should not quit immediately; instead, it has to finish whatever calculation it is doing and will quit afterwards (once quit_req is set to True).
import signal
from multiprocessing import Process

class Worker(Process):
    def __init__(self):
        self.quit_req = False
        Process.__init__(self)

    def handle_signal(self, signum, frame):
        print('Stopping worker (pid: {})'.format(self.pid))
        self.quit_req = True

    def run(self):
        # Set signals for worker process
        signal.signal(signal.SIGINT, self.handle_signal)
        signal.signal(signal.SIGTERM, self.handle_signal)

        q = connect_to_some_distributed_message_queue()

        # Start consuming
        print('Starting worker (pid: {})'.format(self.pid))
        while not self.quit_req:
            message = q.poll()
            if len(message):
                try:
                    print('{} handling message "{}"'.format(self.pid, message))
                    # Facade pattern: pick the correct target function for the
                    # requested message and execute it.
                    MessageRouter.route(message)
                except Exception as e:
                    print('{} failed handling "{}": {}'.format(self.pid, message, e))
The Problem
So much for the basic setup, where (almost) everything works fine:
The master process spawns the desired number of workers
Each worker connects to the message queue
Once a message is published, one of the workers receives it
The facade pattern (using a class named MessageRouter) routes the received message to the respective function and executes it
Now for the problem: target functions (to which the message gets directed by the MessageRouter facade) may contain very complex business logic and thus may require multiprocessing.
If, for example, the target function contains something like this:
from multiprocessing import Pool

nproc = 4
# Spawn a pool, because we have an expensive calculation here
p = Pool(processes=nproc)
# Collect result proxy objects for async apply calls to 'some_expensive_calculation'
rpx = [p.apply_async(some_expensive_calculation, ()) for _ in range(nproc)]
# Collect results from all processes
res = [r.get(timeout=.5) for r in rpx]
# Print all results
print(res)
Then the processes spawned by the Pool will also redirect their signal handling for SIGINT and SIGTERM to the worker's handle_signal function (because signal handlers are inherited down the process subtree), essentially printing Stopping worker (pid: ...) and not stopping at all. I know that this happens because I re-bound the signals for the worker before its own child processes were spawned.
This is where I'm stuck: I cannot re-bind the worker's signals after it spawns its child processes, because I do not know whether it spawns any (target functions are opaque and may be written by others), and because the worker stays (by design) in its poll loop. At the same time, I cannot expect a target function that uses multiprocessing to re-bind its own signal handlers to (whatever) default values.
Currently, I feel like restoring the signal handlers in each loop iteration in the worker (before the message is routed to its target function) and resetting them after the function has returned is the only option, but it simply feels wrong.
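For illustration, here is a minimal sketch of that save/restore idea, hypothetical and based on the run() loop above:

while not self.quit_req:
    message = q.poll()
    if len(message):
        # Temporarily restore default handlers so anything the target
        # function spawns inherits default signal handling.
        old_int = signal.signal(signal.SIGINT, signal.SIG_DFL)
        old_term = signal.signal(signal.SIGTERM, signal.SIG_DFL)
        try:
            MessageRouter.route(message)
        finally:
            # Re-bind the worker's own handlers once the call returns.
            signal.signal(signal.SIGINT, old_int)
            signal.signal(signal.SIGTERM, old_term)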
Am I missing something? Do you have any advice? I'd be really happy if someone could give me a hint on how to fix the flaws of my design here!
There is no clean approach for tackling the issue the way you want to proceed. I often find myself in situations where I have to run unknown code (represented as Python entry-point functions which might get down into some C weirdness) in multiprocessing environments.
This is how I approach the problem.
The main loop
Usually the main loop is pretty simple: it fetches a task from some source (HTTP, pipe, Rabbit queue...) and submits it to a pool of workers. I make sure the KeyboardInterrupt exception is correctly handled to shut down the service.
try:
    while 1:
        task = get_next_task()
        service.process(task)
except KeyboardInterrupt:
    service.wait_for_pending_tasks()
    logging.info("Sayonara!")
The workers
The workers are managed by a pool of workers, either from multiprocessing.Pool or from concurrent.futures.ProcessPoolExecutor. If I need more advanced features such as timeout support, I use either billiard or pebble.
Each worker will ignore SIGINT, as recommended here. SIGTERM is left at its default.
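For example, with multiprocessing.Pool this can be done via the initializer argument. A minimal sketch (the pool size is a placeholder):

import signal
from multiprocessing import Pool

def init_worker():
    # Workers ignore SIGINT; only the main process reacts to Ctrl+C.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

if __name__ == '__main__':
    pool = Pool(processes=4, initializer=init_worker)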
The service
The service is controlled either by systemd or supervisord. In either case, I make sure that the termination request is always delivered as a SIGINT (Ctrl+C).
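With systemd, for example, this can be expressed with the KillSignal= directive; a hypothetical unit fragment (the paths are placeholders):

[Service]
ExecStart=/usr/bin/python3 /opt/myservice/main.py
# Deliver the stop request as SIGINT (Ctrl+C) instead of the default SIGTERM
KillSignal=SIGINT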
I want to keep SIGTERM as an emergency shutdown rather than relying only on SIGKILL for that. SIGKILL is not portable and some platforms do not implement it.
"I whish it was that simple"
If things are more complex, I'd consider the use of frameworks such as Luigi or Celery.
In general, reinventing the wheel on such things is quite detrimental and gives little gratification, especially if someone else will have to look at that code.
The latter sentence does not apply if your aim is to learn how these things are done of course.
I was able to do this using Python 3 and set_start_method(method) with the 'forkserver' flavour. Another way Python 3 > Python 2!
Where by "this" I mean:
Have a main process with its own signal handler which just joins the children.
Have some worker processes with a signal handler which may spawn...
further subprocesses which do not have a signal handler.
The behaviour on Ctrl-C is then:
the main process waits for the workers to exit.
the workers run their signal handlers (and maybe set a stop flag and continue executing to finish their job, although I didn't bother in my example; I just joined the child I knew I had) and then exit.
all children of the workers die immediately.
Of course, note that if your intention is for the children of the workers not to crash, you will need to install some ignore handler for them in your worker process's run() method, or somewhere. For example:
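A hedged variation of the NormalWorker defined in the code further below, which explicitly ignores Ctrl-C:

from multiprocessing import Process
from signal import signal, SIGINT, SIG_IGN
from time import sleep

class IgnoringWorker(Process):
    """Hypothetical variant of NormalWorker that ignores Ctrl-C."""
    def run(self):
        signal(SIGINT, SIG_IGN)  # grandchildren survive the interrupt
        while True:
            print('%d %s work' % (self.pid, type(self).__name__))
            sleep(1)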
To mercilessly lift from the docs:
When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
Available on Unix platforms which support passing file descriptors over Unix pipes.
The idea is therefore that the "server process" inherits the default signal handling behaviour before you install your new ones, so all its children also have default handling.
Code in all its glory:
from multiprocessing import Process, set_start_method
import sys
from signal import signal, SIGINT
from time import sleep

class NormalWorker(Process):

    def run(self):
        while True:
            print('%d %s work' % (self.pid, type(self).__name__))
            sleep(1)

class SpawningWorker(Process):

    def handle_signal(self, signum, frame):
        print('%d %s handling signal %r' % (
            self.pid, type(self).__name__, signum))

    def run(self):
        signal(SIGINT, self.handle_signal)

        sub = NormalWorker()
        sub.start()
        print('%d joining %d' % (self.pid, sub.pid))
        sub.join()
        print('%d %s joined sub worker' % (self.pid, type(self).__name__))

def main():
    set_start_method('forkserver')

    processes = [SpawningWorker() for ii in range(5)]

    for pp in processes:
        pp.start()

    def sig_handler(signum, frame):
        print('main handling signal %d' % signum)
        for pp in processes:
            pp.join()
        print('main out')
        sys.exit()

    signal(SIGINT, sig_handler)

    while True:
        sleep(1.0)

if __name__ == '__main__':
    main()
Since my previous answer was Python 3 only, I thought I'd also suggest a dirtier method, for fun, which should work on both Python 2 and Python 3. Not Windows though...
multiprocessing just uses os.fork() under the covers, so patch it to reset the signal handling in the child:
import os
from signal import signal, SIGINT, SIG_DFL

def patch_fork():
    print('Patching fork')
    os_fork = os.fork

    def my_fork():
        print('Fork fork fork')
        cpid = os_fork()
        if cpid == 0:
            # child
            signal(SIGINT, SIG_DFL)
        return cpid

    os.fork = my_fork
You can call that at the start of the run method of your Worker processes (so that you don't affect the master) and so be sure that any children will get the default handling for those signals.
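A usage sketch, with a hypothetical Worker standing in for the question's worker logic:

from multiprocessing import Process
from signal import signal, SIGINT

class Worker(Process):
    # handle_signal is a hypothetical stand-in for the question's handler.
    def handle_signal(self, signum, frame):
        print('Stopping worker (pid: {})'.format(self.pid))

    def run(self):
        patch_fork()  # from the snippet above: forked children get SIG_DFL
        signal(SIGINT, self.handle_signal)
        # Poll the queue and route messages here; any Pool spawned by a
        # target function now forks children with default SIGINT handling.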
This might seem crazy, but if you're not too concerned about portability it might actually not be a bad idea as it's simple and probably pretty resilient over different python versions.
You can store the PID of the main process (when registering the signal handler) and use it inside the signal handler to route the execution flow:

if os.getpid() != main_pid:
    sys.exit(128 + signum)
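A minimal self-contained sketch of this idea (assuming the fork start method, so that the handler and main_pid are inherited by the children):

import os
import signal
import sys

main_pid = os.getpid()  # captured when the handler is registered

def handler(signum, frame):
    # Forked children inherit this handler; only the main process
    # should run the real teardown.
    if os.getpid() != main_pid:
        sys.exit(128 + signum)
    print('main process tearing down')
    sys.exit(0)

signal.signal(signal.SIGTERM, handler)
signal.signal(signal.SIGINT, handler)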
Related
I am trying to find a way to handle SIGTERM nicely and have my subprocesses terminate when the main process receives a SIGTERM.
Basically, I am creating processes manually (but I believe the issue is the same with mp.Pool, for example),
import multiprocessing as mp
...
workers = [
    mp.Process(
        target=worker,
        args=(...,)
    ) for _ in range(nb_workers)
]
and I am catching signals
signal.signal(signal.SIGTERM, term)
signal.signal(signal.SIGINT, term)
signal.signal(signal.SIGQUIT, term)
signal.signal(signal.SIGABRT, term)
When a signal is caught, I want to terminate all my subprocesses and exit. I do not want to wait for them to finish running, as their individual run times can be pretty long (think a few minutes).
Similarly, I cannot really set a threading.Event() that all processes would look at periodically, as they are basically doing one huge but slow operation (depending on a few libraries).
My idea was to set a flag when a signal is caught, and then have a watchdog terminate all subprocesses when the flag is set. But .terminate() also uses SIGTERM, which is caught again by my signal handlers.
For example, simplified code:
import multiprocessing as mp
import signal
import time

FLAG = False

def f(x):
    time.sleep(5)
    print(x)
    return x * x

def term(signum, frame):
    print(f'Received Signal {signum}')
    global FLAG
    FLAG = True

def terminate(w):
    for process in w:
        print('Terminating worker {}'.format(process.pid))
        process.terminate()
        process.join()
        process.close()

signal.signal(signal.SIGTERM, term)
signal.signal(signal.SIGINT, term)
signal.signal(signal.SIGQUIT, term)
signal.signal(signal.SIGABRT, term)

if __name__ == '__main__':
    workers = [
        mp.Process(
            target=f,
            args=(i,)
        ) for i in range(4)
    ]

    for process in workers:
        process.start()

    while not FLAG:
        time.sleep(0.1)

    print('flag set')
    terminate(workers)
    print('Done')
If I interrupt the code before the processes are done (with Ctrl-C):
Received Signal 2
Received Signal 2
Received Signal 2
Received Signal 2
Received Signal 2
flag set
Terminating worker 27742
Received Signal 15
0
Terminating worker 27743
Received Signal 15
1
3
2
Terminating worker 27744
Terminating worker 27745
Done
As you can see, .terminate() does not seem to terminate the subprocesses, as they keep running to their end, and it appears we catch the resulting SIGTERM (15) too.
So far, my solutions are:
somehow manage to have the processes periodically check a threading.Event(). This means completely rethinking what our current processes are doing.
use .kill() instead of .terminate(). This works on Linux, but it is a less clean exit. Not sure about Windows, but I was under the impression that on Windows .kill == .terminate.
do not catch SIGTERM anymore, assuming the program will never get killed this way (unlikely)
Is there any clean way to handle this?
The solution very much depends on what platform you are running on, as is often the case for Python questions tagged with [multiprocessing]; for that reason one is supposed to also tag such questions with the specific platform, such as [linux]. I am inferring that your platform is not Windows, since signal.SIGQUIT is not defined for that platform. So I will go with Linux.
For Linux you do not want your subprocesses to handle the signals at all (and it's sort of nonsensical for them to call the term function on a Ctrl-C interrupt, for example). For Windows, however, you want your subprocesses to ignore these interrupts. That means you want your main process to call signal only after it has created the subprocesses.
Instead of using FLAG to indicate that the main process should terminate, and having the main process loop testing this value periodically, it is simpler, cleaner and more efficient to have the main process just wait on a threading.Event instance, done_event. Although, for some reason, this does not seem to work on Windows; the main process's wait call does not get satisfied immediately.
You would like some provision to terminate gracefully if and when your processes complete normally and no signal has been triggered. The easiest way to accomplish all your goals, including this one, is to make your subprocesses daemon processes that will terminate when the main process terminates. Then create a daemon thread that simply waits for the subprocesses to terminate normally and sets done_event when that occurs. The main process will then fall through the call to done_event.wait() on either an interrupt of some sort or normal completion. All it has to do now is just end normally; there is no need to call terminate against the subprocesses, since they will end when the main process ends.
import multiprocessing as mp
from threading import Thread, Event
import signal
import time
import sys

IS_WINDOWS = sys.platform == 'win32'

def f(x):
    if IS_WINDOWS:
        signal.signal(signal.SIGTERM, signal.SIG_IGN)
        signal.signal(signal.SIGINT, signal.SIG_IGN)
        signal.signal(signal.SIGABRT, signal.SIG_IGN)
    time.sleep(5)
    print(x)
    return x * x

def term(signum, frame):
    print(f'Received Signal {signum}')
    if IS_WINDOWS:
        globals()['FLAG'] = True
    else:
        done_event.set()

def process_wait_thread():
    """
    Wait for processes to finish normally and set done_event.
    """
    for process in workers:
        process.join()
    if IS_WINDOWS:
        globals()['FLAG'] = True
    else:
        done_event.set()

if __name__ == '__main__':
    if IS_WINDOWS:
        globals()['FLAG'] = False
    else:
        done_event = Event()

    workers = [
        mp.Process(
            target=f,
            args=(i,),
            daemon=True
        ) for i in range(4)
    ]

    for process in workers:
        process.start()

    # We don't want subprocesses to inherit these, so
    # call signal only after we start the processes:
    signal.signal(signal.SIGTERM, term)
    signal.signal(signal.SIGINT, term)
    if not IS_WINDOWS:
        signal.signal(signal.SIGQUIT, term)  # not supported by Windows at all
        signal.signal(signal.SIGABRT, term)

    Thread(target=process_wait_thread, daemon=True).start()

    if IS_WINDOWS:
        while not globals()['FLAG']:
            time.sleep(0.1)
    else:
        done_event.wait()

    print('Done')
I'm trying to implement a Python daemon in the traditional start/stop/restart style to control a consumer for a messaging queue. I've successfully used python-daemon to create a single consumer, but I need more than one listener for the volume of messages. This led me to use the multiprocessing library in my run function, along with an os.kill call in the stop function:
def run(self):
    for num in range(self.num_instances):
        p = multiprocessing.Process(target=self.start_listening)
        p.start()

def start_listening(self):
    with open('/tmp/pids/listener_{}.pid'.format(os.getpid()), 'w') as f:
        f.write("{}".format(os.getpid()))
    while True:
        pass  # implement message queue listener here

def stop(self):
    for pidfile in os.listdir('/tmp/pids/'):
        # filenames look like 'listener_<pid>.pid'
        os.kill(int(pidfile.split('_')[1].split('.')[0]), signal.SIGTERM)
    shutil.rmtree('/tmp/pids/')
    super().stop()
This is almost ok, but I'd really like to have a graceful shutdown of the child processes and do some clean up which would include logging. I read about signal handlers so I switched the signal.SIGTERM to signal.SIGINT and added a handler to the daemon class.
def __init__(self):
    ...
    signal.signal(signal.SIGINT, self.graceful_stop)

def stop(self):
    for pidfile in os.listdir('/tmp/pids/'):
        os.kill(int(pidfile.split('_')[1].split('.')[0]), signal.SIGINT)
    super().stop()

def graceful_stop(self, signum, frame):
    self.log.debug("Gracefully stopping the child {}".format(os.getpid()))
    os.remove('/tmp/pids/listener_{}.pid'.format(os.getpid()))
    ...
However, when tested, the child processes get killed, but it seems the graceful_stop function never gets called (files remain, logging doesn't get logged, etc.). Am I implementing the handler wrong for the child processes? Is there a better way of having multiple listeners with a single control point?
I figured it out. The signal.signal call had to be explicitly put in each subprocess's start_listening function.
def start_listening(self):
    signal.signal(signal.SIGINT, self.graceful_stop)
    with open('/tmp/pids/listener_{}.pid'.format(os.getpid()), 'w') as f:
        f.write("{}".format(os.getpid()))
    while True:
        pass  # implement message queue listener here
I have some code that needs to run against several other systems that may hang or have problems not under my control. I would like to use python's multiprocessing to spawn child processes to run independent of the main program and then when they hang or have problems terminate them, but I am not sure of the best way to go about this.
When terminate is called, it does kill the child process, but it then becomes a defunct zombie that is not released until the process object is gone. The example code below, where the loop never ends, works to kill it and allow a respawn when called again, but it does not seem like a good way of going about this (i.e. it would seem better to create the multiprocessing.Process() in __init__()).
Anyone have a suggestion?
import multiprocessing
import time

class Process(object):
    def __init__(self):
        self.thing = Thing()
        self.running_flag = multiprocessing.Value("i", 1)

    def run(self):
        self.process = multiprocessing.Process(target=self.thing.worker, args=(self.running_flag,))
        self.process.start()
        print(self.process.pid)

    def pause_resume(self):
        self.running_flag.value = not self.running_flag.value

    def terminate(self):
        self.process.terminate()

class Thing(object):
    def __init__(self):
        self.count = 1

    def worker(self, running_flag):
        while True:
            if running_flag.value:
                self.do_work()

    def do_work(self):
        print("working {0} ...".format(self.count))
        self.count += 1
        time.sleep(1)
You might run the child processes as daemons in the background.
process.daemon = True
Any errors and hangs (or an infinite loop) in a daemon process will not affect the main process, and it will only be terminated once the main process exits.
This will work for simple problems, until you run into a lot of child daemon processes which will keep consuming memory from the parent process without any explicit control.
A better way is to set up a Queue to let all the child processes communicate with the parent process, so that we can join them and clean up nicely. Here is some simple code that checks whether a child process is hanging (aka time.sleep(1000)) and sends a message to the queue for the main process to take action on it:
import multiprocessing as mp
import time
import queue

running_flag = mp.Value("i", 1)

def worker(running_flag, q):
    count = 1
    while True:
        if running_flag.value:
            print(f"working {count} ...")
            count += 1
            q.put(count)
            time.sleep(1)
            if count > 3:
                # Simulate hanging with sleep
                print("hanging...")
                time.sleep(1000)

def watchdog(q):
    """
    Check the queue for updates and send a signal to it
    when the child process isn't sending anything for too long.
    """
    while True:
        try:
            msg = q.get(timeout=10.0)
        except queue.Empty:
            print("[WATCHDOG]: Maybe WORKER is slacking")
            q.put("KILL WORKER")

def main():
    """The main process"""
    q = mp.Queue()
    workr = mp.Process(target=worker, args=(running_flag, q))
    wdog = mp.Process(target=watchdog, args=(q,))

    # Run the watchdog as daemon so it terminates with the main process
    wdog.daemon = True

    workr.start()
    print("[MAIN]: starting process P1")
    wdog.start()

    # Poll the queue
    while True:
        msg = q.get()
        if msg == "KILL WORKER":
            print("[MAIN]: Terminating slacking WORKER")
            workr.terminate()
            time.sleep(0.1)
            if not workr.is_alive():
                print("[MAIN]: WORKER is a goner")
                workr.join(timeout=1.0)
                print("[MAIN]: Joined WORKER successfully!")
                q.close()
                break  # watchdog process daemon gets terminated

if __name__ == '__main__':
    main()
Without terminating the worker, an attempt to join() it from the main process would have blocked forever, since the worker never finishes.
The way Python multiprocessing handles processes is a bit confusing.
From the multiprocessing guidelines:
Joining zombie processes
On Unix when a process finishes but has not been joined it becomes a zombie. There should never be very many because each time a new process starts (or active_children() is called) all completed processes which have not yet been joined will be joined. Also calling a finished process’s Process.is_alive will join the process. Even so it is probably good practice to explicitly join all the processes that you start.
In order to avoid a process becoming a zombie, you need to call its join() method once you kill it.
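In code, the pattern is simply (a minimal sketch, using the question's names):

self.process.terminate()
self.process.join()  # reap the child so it does not linger as a zombie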
If you want a simpler way to deal with the hanging calls in your system you can take a look at pebble.
I am trying to use a Queue in Python which will be shared across threads. I just want to know whether the approach I am using is correct, whether I am doing something redundant, and whether there is a better approach I should use.
I am trying to get new requests from a table and schedule them using some logic to perform some operation like running a query.
So here, from the main thread, I spawn a separate thread for the queue:
if __name__ == '__main__':
    request_queue = SetQueue(maxsize=-1)
    worker = Thread(target=request_queue.process_queue)
    worker.setDaemon(True)
    worker.start()

    while True:
        try:
            # Connect to the database, get all the new requests to be verified
            db = Database(username_testschema, password_testschema, mother_host_testschema,
                          mother_port_testschema, mother_sid_testschema, 0)
            # Get new requests for verification
            verify_these = db.query("SELECT JOB_ID FROM %s.table WHERE JOB_STATUS='%s' ORDER BY JOB_ID" %
                                    (username_testschema, 'INITIATED'))

            # If there are some requests to be verified, put them in the queue.
            if len(verify_these) > 0:
                for row in verify_these:
                    print "verifying : %s" % row[0]
                    verify_id = row[0]
                    request_queue.put(verify_id)
        except Exception as e:
            logger.exception(e)
        finally:
            time.sleep(10)
Now, in the SetQueue class, I have a process_queue function which is used for processing the top two requests added to the queue in every run.
'''
Overriding the Queue class to use a set as all_items instead of a list, to ensure unique items are added and processed all the time.
'''
class SetQueue(Queue.Queue):
    def _init(self, maxsize):
        Queue.Queue._init(self, maxsize)
        self.all_items = set()

    def _put(self, item):
        if item not in self.all_items:
            Queue.Queue._put(self, item)
            self.all_items.add(item)

    '''
    The multithreaded queue for the verification process. Takes the top two items, verifies them in separate threads and sleeps for 10 sec.
    This way at most two requests per run will be processed.
    '''
    def process_queue(self):
        while True:
            scheduler_obj = Scheduler()
            try:
                if self.qsize() > 0:
                    threads = []
                    for i in range(2):
                        job_id = self.get()
                        t = Thread(target=scheduler_obj.verify_func, args=(job_id,))
                        t.start()
                        threads.append(t)
                    for t in threads:
                        t.join(timeout=1)
                        self.task_done()
            except Exception as e:
                logger.exception(
                    "QUEUE EXCEPTION : Exception occurred while processing requests in the VERIFICATION QUEUE")
            finally:
                time.sleep(10)
I want to see if my understanding is correct and if there can be any issues with it.
So the main thread, running in the while True in the main function, connects to the database, gets new requests and puts them in the queue. The worker thread (daemon) for the queue keeps getting new requests from the queue and forks non-daemon threads which do the processing, and since the timeout for the join is 1, the worker thread will keep taking new requests without getting blocked, and its child threads will keep processing in the background. Correct?
So in case the main process exits, these won't be killed until they finish their work, but the worker daemon thread would exit.
Doubt: if the parent is a daemon thread and the child is non-daemon, and the parent exits, does the child exit?
I also read here: David Beazley multiprocessing
By David Beazley, in his "Using a Pool as a Thread Coprocessor" section, where he is trying to solve a similar problem. So should I follow his steps:
1. Create a pool of processes.
2. Open a thread like I am doing for request_queue.
3. In that thread:
def process_verification_queue(self):
    while True:
        try:
            if self.qsize() > 0:
                job_id = self.get()
                pool.apply_async(Scheduler.verify_func, args=(job_id,))
        except Exception as e:
            logger.exception("QUEUE EXCEPTION : Exception occurred while processing requests in the VERIFICATION QUEUE")
Use a process from the pool and run the verify_func in parallel. Will this give me better performance?
While it's possible to create a new independent thread for the queue and process its data separately the way you are doing it, I believe it is more common for each independent worker thread to post messages to a queue that they already "know" about. Then that queue is processed from some other thread by pulling messages out of it.
Design Idea
The way I envision your application is three threads: the main thread and two worker threads. One worker thread would get requests from the database and put them in the queue. The other worker thread would process the data from the queue.
The main thread would just wait for the other threads to finish by using the thread function .join().
You would protect the queue that the threads have access to, and make it thread-safe, by using a mutex. I have seen this pattern in many other designs in other languages as well.
Suggested Reading
"Effective Python" by Brett Slatkin has a great example of this very question.
Instead of inheriting from Queue, he just creates a wrapper for it in his class called MyQueue, and adds get() and put(message) functions.
He even provides the source code at his Github repo
https://github.com/bslatkin/effectivepython/blob/master/example_code/item_39.py
I'm not affiliated with the book or its author, but I highly recommend it as I learned quite a few things from it :)
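A rough sketch of what such a wrapper might look like (not the book's actual code; a hypothetical simplification):

from collections import deque
from threading import Lock

class MyQueue(object):
    """A queue wrapper whose items are guarded by a mutex."""
    def __init__(self):
        self.items = deque()
        self.lock = Lock()

    def put(self, message):
        with self.lock:
            self.items.append(message)

    def get(self):
        with self.lock:
            return self.items.popleft()  # raises IndexError when empty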
I like this explanation of the advantages & differences between using threads and processes -
".....But there's a silver lining: processes can make progress on multiple threads of execution simultaneously. Since a parent process doesn't share the GIL with its child processes, all processes can execute simultaneously (subject to the constraints of the hardware and OS)...."
He has some great explanations for getting around the GIL and how to improve performance.
Read more here:
http://jeffknupp.com/blog/2013/06/30/pythons-hardest-problem-revisited/
When using python-daemon, I'm creating subprocesses like so:
import multiprocessing

class Worker(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.queue = queue  # we wait for things from this in Worker.run()
    ...

q = multiprocessing.Queue()

with daemon.DaemonContext():
    for i in xrange(3):
        Worker(q).start()
    while True:  # let the Workers do their thing
        q.put(_something_we_wait_for())
When I kill the parent daemonic process (i.e. not a Worker) with a Ctrl-C or SIGTERM, etc., the children don't die. How does one kill the kids?
My first thought is to use atexit to kill all the workers, like so:
with daemon.DaemonContext():
    workers = list()
    for i in xrange(3):
        workers.append(Worker(q))

    @atexit.register
    def kill_the_children():
        for w in workers:
            w.terminate()

    while True:  # let the Workers do their thing
        q.put(_something_we_wait_for())
However, the children of daemons are tricky things to handle, and I'd be obliged for thoughts and input on how this ought to be done.
Thank you.
Your options are a bit limited. If doing self.daemon = True in the constructor for the Worker class does not solve your problem, and trying to catch signals in the parent (i.e. SIGTERM, SIGINT) doesn't work, you may have to try the opposite solution: instead of having the parent kill the children, you can have the children commit suicide when the parent dies.
The first step is to give the Worker constructor the PID of the parent process (you can do this with os.getpid()). Then, instead of just doing self.queue.get() in the worker loop, do something like this:
waiting = True
while waiting:
    # see if Parent is at home
    if os.getppid() != self.parentPID:
        # woe is me! My Parent has died!
        sys.exit()  # or whatever you want to do to quit the Worker process
    try:
        # I picked the timeout randomly; use what works
        data = self.queue.get(block=True, timeout=0.1)
        waiting = False
    except queue.Empty:
        continue  # try again
# now do stuff with data
The solution above checks to see if the parent PID is different from what it originally was (that is, if the child process was adopted by init or launchd because the parent died) - see reference. However, if that doesn't work for some reason, you can replace it with the following function (adapted from here):
def parentIsAlive(self):
    try:
        # try to call Parent
        os.kill(self.parentPID, 0)
    except OSError:
        # *beeep* oh no! The phone's disconnected!
        return False
    else:
        # *ring* Hi mom!
        return True
Now, when the Parent dies (for whatever reason), the child Workers will spontaneously drop like flies - just as you wanted, you daemon! :-D
You should store the parent PID when the child is first created (let's say in self.myppid); when self.myppid differs from getppid(), it means that the parent died.
To avoid checking whether the parent has changed over and over again, you can use PR_SET_PDEATHSIG, which is described in the signals documentation.
5.8 The Linux "parent death" signal
For each process there is a variable pdeath_signal, that is initialized to 0 after fork() or clone(). It gives the signal that the process should get when its parent dies.
In this case, since you want your process to die, you can just set it to SIGHUP, like this:
prctl(PR_SET_PDEATHSIG, SIGHUP);
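From Python, the same call can be made via ctypes; a Linux-only sketch (1 is the value of PR_SET_PDEATHSIG from <sys/prctl.h>):

import ctypes
import signal

def set_parent_death_signal():
    # Deliver SIGHUP to this process when its parent dies (Linux only).
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    PR_SET_PDEATHSIG = 1  # from <sys/prctl.h>
    if libc.prctl(PR_SET_PDEATHSIG, signal.SIGHUP) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")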
Atexit won't do the trick -- it only gets run on successful non-signal termination -- see the note near the top of the docs. You need to set up signal handling via one of two means.
The easier-sounding option: set the daemon flag on your worker processes (a sketch follows below), per http://docs.python.org/library/multiprocessing.html#process-and-exceptions
Somewhat harder-sounding option: PEP-3143 seems to imply there is a built-in way to hook program cleanup needs in python-daemon.
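A minimal sketch of the daemon-flag option mentioned above, using the names from the question:

w = Worker(q)
w.daemon = True  # must be set before start()
w.start()        # daemonic children are terminated when the parent exits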