Delegate signal handling to a child process in Python

How can I run a command from a Python script and delegate signals like Ctrl+C to it? For example, when I run:
from subprocess import call
call(["child_proc"])
I want child_proc to handle Ctrl+C

I'm guessing that your problem is that you want the subprocess to receive Ctrl-C without the parent Python process terminating. If your child process installs its own signal handler for Ctrl-C (SIGINT), then this might do the trick:
import signal, subprocess
old_action = signal.signal(signal.SIGINT, signal.SIG_IGN)
subprocess.call(['less', '/etc/passwd'])
signal.signal(signal.SIGINT, old_action) # restore original signal handler
Now you can hit Ctrl-C (which generates SIGINT), Python will ignore it but less will still see it.
However this only works if the child sets its signal handlers up properly (otherwise these are inherited from the parent).
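To make sure the original handler comes back even if the call raises, the same pattern can be wrapped in try/finally. A minimal sketch, with `["sleep", "0"]` standing in for the real child; note the caveat above applies here too, since ignored dispositions survive exec:

```python
import signal
import subprocess

old_action = signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
    # Stand-in for the real child; because the parent ignores SIGINT,
    # the child only sees Ctrl-C if it installs its own handler.
    subprocess.call(["sleep", "0"])
finally:
    # Restore the parent's original SIGINT handler even on error.
    signal.signal(signal.SIGINT, old_action)
```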

Related

Forking a child in another thread: parent doesn't receive SIGCHLD on termination

Normally, when a process forks a child process, it receives a SIGCHLD signal when that child terminates. But if the fork happens in a thread other than the application's main thread, the parent won't receive anything.
I tested this in Python on several GNU/Linux machines, all x86_64.
My question is: is this Python behaviour, or is it behaviour defined by the POSIX standard? And in either case, why is it so?
Here is a sample to reproduce the behaviour.
import signal
import multiprocessing
import threading
import time

def signal_handler(*args):
    print("Signal received")

def child_process():
    time.sleep(100000)

def child_thread():
    p = multiprocessing.Process(target=child_process)
    p.start()
    time.sleep(100000)

signal.signal(signal.SIGCHLD, signal_handler)

p = multiprocessing.Process(target=child_process)
p.start()

t = threading.Thread(target=child_thread)
t.start()

time.sleep(100000)
print("Waked")
time.sleep(100000)
Then send a SIGKILL to each child. When the first child (the one forked from the main thread) terminates, signal_handler will be called. But when the second child terminates, nothing happens.
I also tested the same scenario with os.fork instead of multiprocessing.Process, with the same result.
This is documented Python behaviour:
only the main thread can set a new signal handler, and the main
thread will be the only one to receive signals (this is enforced by
the Python signal module, even if the underlying thread implementation
supports sending signals to individual threads). This means that
signals can’t be used as a means of inter-thread communication. Use
locks instead.
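Given that only the main thread receives signals, one workaround (an illustrative sketch, not from the answer above) is to reap the child in the worker thread itself and report the exit to the main thread over a queue:

```python
import multiprocessing
import queue
import threading

def child_process():
    pass  # the child's real work would go here

def child_thread(events):
    # Fork from a worker thread, then reap the child ourselves and
    # notify the main thread over a queue instead of using SIGCHLD.
    p = multiprocessing.Process(target=child_process)
    p.start()
    p.join()
    events.put(("child-exited", p.pid))

events = queue.Queue()
t = threading.Thread(target=child_thread, args=(events,))
t.start()

kind, pid = events.get(timeout=10)  # main thread blocks until notified
t.join()
```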

What is the proper way to register signal handlers in Python?

I have a python application where I create several new instances of multiprocessing.Process object from a main process in a fashion similar to:
self.my_proc = Process(name='foo', target=self.bar, args=(self.some_var,))
self.my_proc.daemon = True
self.my_proc.start()
...
@staticmethod
def bar(some_var):
    while True:
        pass  # do stuff forever
I've noticed that if I register a signal handler in the main process before I spawn the child processes, a signal event causes each spawned process to call the signal handler. If I register the signal handler after I spawn the child processes, a signal event only causes the parent process to call the signal handler.
I really only want the main (parent) process to receive the signal handler callback because it's the process that will clean up all the subprocesses. So my program works as I need it to. But my concern is that there is a better (right?) way to handle signals in multi-process Python applications. Is there?
This has nothing to do with Python as such. It reflects the UNIX (Linux) implementation of signals and their behaviour when a process creates child processes (using the fork system call). Here's a quote from the signal(7) manual which explains the behaviour you have noticed:
A child created via fork(2) inherits a copy of its parent's signal dispositions.
During an execve(2), the dispositions of handled signals are reset to the default;
the dispositions of ignored signals are left unchanged.
If it's convenient in your main process (as it seems to be), and if you only care about catching SIGINT, I prefer to catch KeyboardInterrupt rather than installing a signal handler:
@staticmethod
def bar():
    try:
        while True:
            pass  # do stuff forever
    except KeyboardInterrupt:
        # Tell subprocesses to shut down gracefully
        pass
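A fuller sketch of that pattern (the worker and the simulated Ctrl-C below are illustrative, not from the question): spawn the children, then let only the parent react to KeyboardInterrupt and clean them up:

```python
import multiprocessing
import time

def worker():
    while True:          # runs until the parent terminates it
        time.sleep(0.1)

def main():
    procs = [multiprocessing.Process(target=worker, daemon=True)
             for _ in range(2)]
    for p in procs:
        p.start()
    try:
        # Real code would block here doing work; a Ctrl-C raises
        # KeyboardInterrupt, which we simulate for the sketch.
        raise KeyboardInterrupt
    except KeyboardInterrupt:
        # Only the parent cleans up: terminate and reap every child.
        for p in procs:
            p.terminate()
        for p in procs:
            p.join()
    return all(not p.is_alive() for p in procs)

ok = main()
```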

terminate/restart python script if all children are stopped and prevent other children from spawning

I have a socket server that uses threading to open a thread for each client that connects.
I also have two other threads that run constantly that are doing maintenance operations.
Basically there is the main thread plus two children running constantly, plus one child for each client that connects.
I want to be able to terminate or restart safely.
I would like to be able to trigger a termination function somehow that would instruct all child processes to terminate safely and then the parent could exit.
Any ideas?
Please do not suggest to connect as a client and send a command that would trigger that.
Already thought of it.
I am looking for a way to do this by executing something in the console.
The Python socket server runs as a system service, and I would like to implement the termination in the init script.
The best way to do this is to set up a signal handler in your main thread, using the signal module (see http://docs.python.org/library/signal.html). A good way would be to trap the Ctrl-C signal (SIGINT).
Please note that the signal handler can also be a class method, so you do not have to use a global method (it took me a while to discover that).
def __init__(self):
    signal.signal(signal.SIGINT, self.just_kill_me)

def just_kill_me(self, sig, frame):
    self.stopped = True
    for t in self.threads:
        t.join()
It is not possible to send the equivalent of a kill signal to a thread. Instead you should set a flag that will signal the children to stop.
Your child threads should run in a loop, periodically checking if the parent requests them to stop.
while not parent.stopped:
    do_some_maintenance_work()
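Since the question mentions stopping the service from an init script (which normally sends SIGTERM), the same idea can be sketched with a threading.Event as the stop flag; the names here are illustrative:

```python
import signal
import threading

stop = threading.Event()

def just_kill_me(sig, frame):
    # Runs in the main thread on SIGTERM; just flip the shared flag.
    stop.set()

def maintenance_worker():
    while not stop.is_set():
        # periodic maintenance work would go here
        stop.wait(0.1)  # sleeps, but wakes as soon as the flag is set

signal.signal(signal.SIGTERM, just_kill_me)  # init scripts send SIGTERM
worker = threading.Thread(target=maintenance_worker)
worker.start()
```

The init script's kill $PID then sets the event, every loop notices it on its next check, and the main thread can join the workers and exit.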

Twisted program and TERM signal

I have a simple example:
from twisted.internet import utils, reactor

def test():
    utils.getProcessOutput(executable="/bin/sleep", args=["10000"])

reactor.callWhenRunning(test)
reactor.run()
When I send the TERM signal to the program, sleep keeps running; when I press Ctrl-C on the keyboard, sleep stops. (Is Ctrl-C not equivalent to the TERM signal?) Why? And how do I kill sleep after sending TERM to this program?
Ctrl-C sends SIGINT to the entire foreground process group. That means it gets sent to your Twisted program and to the sleep child process.
If you want to kill the sleep process whenever the Python process is going to exit, then you may want a before shutdown trigger:
def killSleep():
    pass  # Do it, somehow

reactor.addSystemEventTrigger('before', 'shutdown', killSleep)
As your example code is written, killSleep is difficult to implement. getProcessOutput doesn't give you something that easily allows the child to be killed (for example, you don't know its pid). If you use reactor.spawnProcess and a custom ProcessProtocol, this problem is solved though - the ProcessProtocol will be connected to a process transport which has a signalProcess method which you can use to send a SIGTERM (or whatever you like) to the child process.
You could also ignore SIGINT at this point and then manually deliver it to the whole process group:
import os, signal

def killGroup():
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    os.kill(-os.getpgid(os.getpid()), signal.SIGINT)

reactor.addSystemEventTrigger('before', 'shutdown', killGroup)
Ignore SIGINT at this point because the Twisted process is already shutting down and another signal won't do any good (and will probably confuse it, or at least lead to spurious errors being reported). Passing -os.getpgid(os.getpid()) to os.kill is how you send the signal to your entire process group.

kill subprocess when python process is killed?

I am writing a Python program that launches a subprocess (using Popen).
I am reading stdout of the subprocess, doing some filtering, and writing to stdout of the main process.
When I kill the main process (Ctrl-C) the subprocess keeps running.
How do I kill the subprocess too? The subprocess is likely to run a long time.
Context:
I'm launching only one subprocess at a time, I'm filtering its stdout.
The user might decide to interrupt to try something else.
I'm new to python and I'm using windows, so please be gentle.
Windows doesn't have signals, so you can't use the signal module. However, you can still catch the KeyboardInterrupt exception when Ctrl-C is pressed.
Something like this should get you going:
import subprocess

child = subprocess.Popen(blah)  # blah: your command and its arguments
try:
    child.wait()
except KeyboardInterrupt:
    child.terminate()
subprocess.Popen objects come with a kill and a terminate method (they differ in which signal they send to the process).
signal.signal allows you to install signal handlers, in which you can call the child's kill method.
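On POSIX, terminate sends SIGTERM and kill sends SIGKILL (on Windows both call TerminateProcess). A minimal sketch, with a sleeping Python interpreter as a stand-in for a long-running child:

```python
import signal
import subprocess
import sys

# Stand-in long-running child: a Python interpreter that just sleeps.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

child.terminate()   # SIGTERM on POSIX, TerminateProcess() on Windows
child.wait()

# On POSIX a negative returncode means "killed by that signal number",
# so after terminate() it equals -signal.SIGTERM.
```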
You can use python atexit module.
For example:
import atexit

def killSubprocess():
    mySubprocess.kill()  # mySubprocess: your Popen object

atexit.register(killSubprocess)
