How to send signal from subprocess to parent process? - python

I tried using os.kill but it does not seem to be working.
import signal
import os
from time import sleep

def isr(signum, frame):
    print "Hey, I'm the ISR!"

signal.signal(signal.SIGALRM, isr)

pid = os.fork()
if pid == 0:
    def sisr(signum, frame):
        os.kill(os.getppid(), signal.SIGALRM)

    signal.signal(signal.SIGVTALRM, sisr)
    signal.setitimer(signal.ITIMER_VIRTUAL, 1, 1)

    while True:
        print "2"
else:
    sleep(2)

Your sisr handler never executes.
signal.setitimer(signal.ITIMER_VIRTUAL, 1, 1)
This line sets a virtual timer, which, according to documentation, “Decrements interval timer only when the process is executing, and delivers SIGVTALRM upon expiration”.
The thing is, your process is almost never executing. The prints take almost no time inside your process, all the work is done by the kernel delivering the output to your console application (xterm, konsole, …) and the application repainting the screen. Meanwhile, your child process is asleep, and the timer does not run.
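To see the difference, here is a small POSIX-only sketch (the handler name is mine): a real timer fires even while the process is blocked in sleep(), which is exactly the situation where a virtual timer stays frozen, because a sleeping process consumes almost no CPU time.

```python
import signal
import time

fired = []

def on_alarm(signum, frame):
    fired.append(signum)

signal.signal(signal.SIGALRM, on_alarm)

# ITIMER_REAL counts wall-clock time, so it expires even while the
# process is blocked in sleep(); ITIMER_VIRTUAL only counts CPU time
# consumed by this process, which barely advances during a sleep.
signal.setitimer(signal.ITIMER_REAL, 0.2)
time.sleep(0.5)  # the handler runs during this sleep

print(fired == [signal.SIGALRM])  # True
```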
Change it to a real timer and it works :)
import signal
import os
from time import sleep

def isr(signum, frame):
    print "Hey, I'm the ISR!"

signal.signal(signal.SIGALRM, isr)

pid = os.fork()
if pid == 0:
    def sisr(signum, frame):
        print "Child running sisr"
        os.kill(os.getppid(), signal.SIGALRM)

    signal.signal(signal.SIGALRM, sisr)
    signal.setitimer(signal.ITIMER_REAL, 1, 1)

    while True:
        print "2"
        sleep(1)
else:
    sleep(10)
    print "Parent quitting"
Output:
spectras#etherbee:~/temp$ python test.py
2
Child running sisr
2
Hey, I'm the ISR!
Parent quitting
spectras#etherbee:~/temp$ Child running sisr
Traceback (most recent call last):
  File "test.py", line 22, in <module>
    sleep(1)
  File "test.py", line 15, in sisr
    os.kill(os.getppid(), signal.SIGALRM)
OSError: [Errno 1] Operation not permitted
Note: the child crashes the second time it runs sisr because by then the parent has exited, so the child has been reparented and os.getppid() no longer returns the original parent's PID; signalling the process it does return (typically init, PID 1) is not permitted for an unprivileged process.
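A defensive sketch of that point (the function name is mine): on POSIX, a child can check whether its original parent is still around before trying to signal it, using the classic "signal 0" existence probe.

```python
import os

def parent_alive():
    """Best-effort check that our original parent still exists (POSIX).

    After the parent dies, the child is reparented (typically to PID 1),
    so a getppid() of 1 almost certainly means the parent is gone.
    """
    ppid = os.getppid()
    if ppid <= 1:
        return False
    try:
        os.kill(ppid, 0)  # signal 0: existence/permission probe, nothing is sent
    except ProcessLookupError:
        return False      # no such process
    except PermissionError:
        return True       # it exists, we just may not signal it
    return True

print(parent_alive())
```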

Related

Catch process kill exception in Python? (Raspberry Pi) [duplicate]

Let's assume we have such a trivial daemon written in python:
def mainloop():
    while True:
        # 1. do
        # 2. some
        # 3. important
        # 4. job
        # 5. sleep
        pass

mainloop()
and we daemonize it using start-stop-daemon which by default sends SIGTERM (TERM) signal on --stop.
Let's suppose the current step performed is #2. And at this very moment we're sending TERM signal.
What happens is that the execution terminates immediately.
I've found that I can handle the signal event using signal.signal(signal.SIGTERM, handler) but the thing is that it still interrupts the current execution and passes the control to handler.
So, my question is - is it possible to not interrupt the current execution but handle the TERM signal in a separated thread (?) so that I was able to set shutdown_flag = True so that mainloop() had a chance to stop gracefully?
A clean, class-based solution:
import signal
import time

class GracefulKiller:
    kill_now = False

    def __init__(self):
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

    def exit_gracefully(self, *args):
        self.kill_now = True

if __name__ == '__main__':
    killer = GracefulKiller()
    while not killer.kill_now:
        time.sleep(1)
        print("doing something in a loop ...")
    print("End of the program. I was killed gracefully :)")
First, I'm not certain that you need a second thread to set the shutdown_flag.
Why not set it directly in the SIGTERM handler?
An alternative is to raise an exception from the SIGTERM handler, which will be propagated up the stack. Assuming you've got proper exception handling (e.g. with with/contextmanager and try: ... finally: blocks) this should be a fairly graceful shutdown, similar to if you were to Ctrl+C your program.
Example program signals-test.py:
#!/usr/bin/python
from time import sleep
import signal
import sys

def sigterm_handler(_signo, _stack_frame):
    # Raises SystemExit(0):
    sys.exit(0)

if sys.argv[1] == "handle_signal":
    signal.signal(signal.SIGTERM, sigterm_handler)

try:
    print "Hello"
    i = 0
    while True:
        i += 1
        print "Iteration #%i" % i
        sleep(1)
finally:
    print "Goodbye"
Now see the Ctrl+C behaviour:
$ ./signals-test.py default
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
^CGoodbye
Traceback (most recent call last):
  File "./signals-test.py", line 21, in <module>
    sleep(1)
KeyboardInterrupt
$ echo $?
1
This time I send it SIGTERM after 4 iterations with kill $(ps aux | grep signals-test | awk '/python/ {print $2}'):
$ ./signals-test.py default
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
Terminated
$ echo $?
143
This time I enable my custom SIGTERM handler and send it SIGTERM:
$ ./signals-test.py handle_signal
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
Goodbye
$ echo $?
0
Here is a simple example without threads or classes.
import signal

run = True

def handler_stop_signals(signum, frame):
    global run
    run = False

signal.signal(signal.SIGINT, handler_stop_signals)
signal.signal(signal.SIGTERM, handler_stop_signals)

while run:
    pass  # do stuff including other IO stuff
I think you are close to a possible solution.
Execute mainloop in a separate thread and extend it with the property shutdown_flag. The signal can be caught with signal.signal(signal.SIGTERM, handler) in the main thread (not in a separate thread). The signal handler should set shutdown_flag to True and wait for the thread to end with thread.join().
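A minimal sketch of that arrangement (all names are mine). signal.raise_signal (Python 3.8+) is used here only to simulate an external kill -TERM so the example is self-contained:

```python
import signal
import threading
import time

shutdown_flag = threading.Event()

def mainloop():
    while not shutdown_flag.is_set():
        time.sleep(0.05)  # stand-in for one unit of real work

def handler(signum, frame):
    shutdown_flag.set()  # ask the worker to finish its current iteration

# handlers can only be registered in the main thread
signal.signal(signal.SIGTERM, handler)

worker = threading.Thread(target=mainloop)
worker.start()

signal.raise_signal(signal.SIGTERM)  # simulate `kill -TERM <pid>`
worker.join()
print("worker stopped cleanly")
```

The handler itself only flips the event; the worker notices the flag at the top of its next iteration and exits on its own, which is what makes the shutdown graceful.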
Based on the previous answers, I have created a context manager which protects from SIGINT and SIGTERM.
import logging
import signal
import sys

class TerminateProtected:
    """ Protect a piece of code from being killed by SIGINT or SIGTERM.
    It can still be killed by a force kill.

    Example:
        with TerminateProtected():
            run_func_1()
            run_func_2()

    Both functions will be executed even if a SIGINT or SIGTERM has been received.
    """
    killed = False

    def _handler(self, signum, frame):
        logging.error("Received SIGINT or SIGTERM! Finishing this block, then exiting.")
        self.killed = True

    def __enter__(self):
        self.old_sigint = signal.signal(signal.SIGINT, self._handler)
        self.old_sigterm = signal.signal(signal.SIGTERM, self._handler)

    def __exit__(self, type, value, traceback):
        if self.killed:
            sys.exit(0)
        signal.signal(signal.SIGINT, self.old_sigint)
        signal.signal(signal.SIGTERM, self.old_sigterm)

if __name__ == '__main__':
    print("Try pressing ctrl+c while the sleep is running!")
    from time import sleep
    with TerminateProtected():
        sleep(10)
        print("Finished anyway!")
    print("This only prints if there was no sigint or sigterm")
This was the easiest way I found.
Here is an example with fork, showing that this approach is useful for flow control.
import signal
import time
import sys
import os

def handle_exit(sig, frame):
    raise SystemExit

def main():
    time.sleep(120)

signal.signal(signal.SIGTERM, handle_exit)

p = os.fork()
if p == 0:
    main()
    os._exit(0)

try:
    os.waitpid(p, 0)
except (KeyboardInterrupt, SystemExit):
    print('exit handled')
    os.kill(p, signal.SIGTERM)
    os.waitpid(p, 0)
The simplest solution I have found, taking inspiration from the responses above, is:
import signal
import logging
import traceback

class SignalHandler:
    def __init__(self):
        # register signal handlers
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)
        self.logger = logging.getLogger(__name__)

    def exit_gracefully(self, signum, frame):
        self.logger.info('captured signal %d' % signum)
        traceback.print_stack(frame)
        ###### do your resources clean up here! ####
        raise SystemExit
A sample of my code showing how I use signal:
#! /usr/bin/env python
import signal

def ctrl_handler(signum, frm):
    print "You can't kill me"

print "Installing signal handler..."
signal.signal(signal.SIGINT, ctrl_handler)
print "done"

while True:
    # do something
    pass
You can set a threading.Event when catching the signal.
threading.Event is threadsafe to use and pass around, can be waited on, and the same event can be set and cleared from other places.
import signal, threading

quit_event = threading.Event()
signal.signal(signal.SIGTERM, lambda *_args: quit_event.set())

while not quit_event.is_set():
    print("Working...")

Cleanly stopping a multiprocessing.Process - KeyboardInterrupt escapes in Windows

I'm using multiprocessing to spawn a task (multiprocessing.Process) that can be stopped (without cooperation from the task itself, e.g.: without using something like multiprocessing.Event to signal the task to gracefully stop).
Since .terminate() (or .kill()) won't stop it cleanly (the finally: clause won't execute), I thought I would use os.kill() to emulate a CTRL+C event:
from multiprocessing import Process
from time import sleep
import os
import signal

def task(n):
    try:
        for i in range(n):
            sleep(1)
            print(f'task: i={i}')
    finally:
        print('task: finally clause executed!')
    return i

if __name__ == '__main__':
    t = Process(target=task, args=(10,))
    print(f'main: starting task...')
    t.start()
    sleep(5)
    for i in ('CTRL_C_EVENT', 'SIGINT'):
        if hasattr(signal, i):
            sig = getattr(signal, i)
            break
    print(f'main: attempt to stop task...')
    os.kill(t.pid, sig)
The finally: clause executes on Windows, macOS, and Linux; however, on Windows it additionally spits out the error:
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "c:\Python38\lib\multiprocessing\util.py", line 357, in _exit_function
    p.join()
  File "c:\Python38\lib\multiprocessing\process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "c:\Python38\lib\multiprocessing\popen_spawn_win32.py", line 108, in wait
    res = _winapi.WaitForSingleObject(int(self._handle), msecs)
KeyboardInterrupt
while on macOS and Linux it only prints the messages meant to be printed.
It seems CTRL_C_EVENT in Windows is also propagated from the child process to the parent process. See for example this related question.
I added some bookkeeping code and a try...except block to the code. It shows what happens, and that the KeyboardInterrupt needs to be caught in the parent process as well.
from multiprocessing import Process
from time import sleep
import os
import signal

def task(n):
    try:
        for i in range(n):
            sleep(1)
            print(f'task: i={i}')
    except KeyboardInterrupt:
        print("task: caught KeyboardInterrupt")
    finally:
        print('task: finally clause executed!')
    return i

if __name__ == '__main__':
    try:
        t = Process(target=task, args=(10,))
        print(f'main: starting task...')
        t.start()
        sleep(5)
        for i in ('CTRL_C_EVENT', 'SIGINT'):
            if hasattr(signal, i):
                sig = getattr(signal, i)
                break
        print(f'main: attempt to stop task...')
        os.kill(t.pid, sig)
    finally:
        try:
            print("main: finally in main process. Waiting for 3 seconds")
            sleep(3)
        except KeyboardInterrupt:
            print("main: caught KeyboardInterrupt in finally block")
It prevents the error and produces the following output:
main: starting task...
task: i=0
task: i=1
task: i=2
task: i=3
main: attempt to stop task...
main: finally in main process. Waiting for 3 seconds
task: caught KeyboardInterrupt
main: caught KeyboardInterrupt in finally block
task: finally clause executed!

Python threads and linux ioctl wait

I have the following toy example for Python threading module
from __future__ import print_function

import threading
import time
import signal
import sys
import os

class ThreadShutdown(Exception):
    # Custom exception to allow clean thread exit
    pass

def thread_shutdown(signum, frame):
    print(" o Signal {} caught and raising ThreadShutdown exception".format(signum))
    raise ThreadShutdown

def main():
    """
    Register the signal handlers needed to stop
    cleanly the child accessing thread
    """
    signal.signal(signal.SIGTERM, thread_shutdown)
    signal.signal(signal.SIGINT, thread_shutdown)
    test_run_seconds = 120

    try:
        thread = ChildThread()
        thread.start()
        time.sleep(1)

        while test_run_seconds > 0:
            test_run_seconds -= 1
            print(" o [{}] remaining time is {} seconds".format(time.asctime(time.localtime(time.time())), test_run_seconds))
            time.sleep(1)
    except ThreadShutdown:
        thread.shutdown_flag.set()
        thread.join()
        print(" o ThreadShutdown procedure complete")
        return

    thread.shutdown_flag.set()
    thread.join()
    print(" o Test terminated")

class ChildThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.shutdown_flag = threading.Event()

    def run(self):
        while not self.shutdown_flag.is_set():
            print(" o [{}] is the current time in child, sleep for 10s".format(time.asctime(time.localtime(time.time()))))
            time.sleep(10)
        return

if __name__ == "__main__":
    sys.exit(main())
which behaves as expected (the main thread counts every second while the spawned thread prints only every 10 seconds).
I was trying to understand the behaviour of the same code snippet in the presence of blocking waits in kernel mode in the spawned thread. For example, assume that the spawned thread now goes into a killable wait in an ioctl with a timeout of 10s; I would still expect the main thread to keep counting every second. For some reason, it instead counts every 10s, as if it were blocked in the spawned thread's wait as well. What is the reason?

How to let child thread deal with main process killed or key interrupt in Python?

In a multi-threaded design, I want to do some clean-up steps when the program exits abnormally. The running thread should clean up the current task and then quit, rather than be killed immediately and leave some dirty data. I found that using the threading module I could not catch the KeyboardInterrupt exception.
Here is my test code:
#!/usr/bin/env python3
from time import sleep

def do_sth():
    print("I'm doing something...")
    sleep(10)

if __name__ == "__main__":
    do_sth()
Python will raise a KeyboardInterrupt exception when I press CTRL-C:
$ python3 test.py
I'm doing something ...
^C
Traceback (most recent call last):
  File "test.py", line 10, in <module>
    do_sth()
  File "test.py", line 7, in do_sth
    sleep(10)
KeyboardInterrupt
So I can catch this exception.
def do_sth():
    try:
        print("I'm doing something ...")
        sleep(10)
    except (KeyboardInterrupt, SystemExit):
        print("I'm doing some clean steps and exit.")
But when I use threading module, this exception is not raised at all.
#!/usr/bin/env python3
from time import sleep
import threading

def do_sth():
    print("I'm doing something...")
    sleep(10)

if __name__ == '__main__':
    t = threading.Thread(target=do_sth)
    t.start()
    t.join()
result:
$ python3 test.py
I'm doing something...
^C
The running thread has been killed directly and no exception is raised.
How do I deal with this?
One way is to handle KeyboardInterrupt exceptions.
Another thing to do in such scenarios is to manage the state of your application across all threads.
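Combining those two ideas, a small sketch (all names are mine): catch KeyboardInterrupt in the main thread and flip a shared flag, so worker threads can finish their current task instead of dying mid-work. The raise here simulates a real Ctrl+C for the sake of a self-contained example:

```python
import threading
import time

stop = threading.Event()
finished_cleanly = []

def worker():
    while not stop.is_set():
        time.sleep(0.05)  # pretend to work
    finished_cleanly.append(True)  # clean-up would happen here

t = threading.Thread(target=worker)
t.start()
try:
    raise KeyboardInterrupt  # stand-in for a real Ctrl+C in the main thread
except KeyboardInterrupt:
    stop.set()  # let the worker exit its loop instead of being killed mid-task
    t.join()
print(finished_cleanly)  # [True]
```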
One of the solutions is to add support for Signals in your code. It allows graceful handling of the shutting down of your process.
Here's one simple setup for that:
import logging
import signal
import threading

class SignalHandler:
    continue_running = True

    def __init__(self):
        signal.signal(signal.SIGUSR2, self.signal_handler)
        signal.signal(signal.SIGTERM, self.signal_handler)
        signal.signal(signal.SIGINT, self.signal_handler)
        logging.info("SignalHandler::Init")

    def signal_handler(self, num, stack):
        logging.warning('Received signal %d in %s' % (num, threading.currentThread()))
        SignalHandler.continue_running = False
        logging.warning("Time to SHUT DOWN ALL MODULES")
All threads would try and utilise the status from SignalHandler.continue_running to ensure that they all know when to stop.
If somebody tried to kill this python process by calling kill -2 [PID] for example - all threads will come to know about the need to shut down.
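To sketch how the worker threads would use that shared flag, here is a hypothetical stand-in class (flipping the flag by hand instead of via a real signal, so the demo is self-contained):

```python
import threading
import time

class StopFlag:
    # hypothetical stand-in for SignalHandler.continue_running above
    continue_running = True

def worker(name, exited):
    while StopFlag.continue_running:
        time.sleep(0.02)  # the thread's real work would go here
    exited.append(name)   # record the clean shutdown

exited = []
threads = [threading.Thread(target=worker, args=(n, exited)) for n in range(3)]
for t in threads:
    t.start()

time.sleep(0.1)
StopFlag.continue_running = False  # what the signal handler would do on kill -2
for t in threads:
    t.join()

print(sorted(exited))  # [0, 1, 2]
```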

Handling keyboard interrupt when using subprocess

I have a python script called monitiq_install.py which calls other scripts (or modules) using the subprocess python module. However, if the user sends a keyboard interrupt (CTRL + C) it exits, but with an exception. I want it to exit, but nicely.
My Code:
import os
import sys
from os import listdir
from os.path import isfile, join
from subprocess import Popen, PIPE
import json

# Run a module and capture output and exit code
def runModule(module):
    try:
        # Run Module
        process = Popen(os.path.dirname(os.path.realpath(__file__)) + "/modules/" + module, shell=True, stdout=PIPE, bufsize=1)
        for line in iter(process.stdout.readline, b''):
            print line,
        process.communicate()
        exit_code = process.wait();
        return exit_code;
    except KeyboardInterrupt:
        print "Got keyboard interupt!";
        sys.exit(0);
The error I'm getting is below:
python monitiq_install.py -a
Invalid module filename: create_db_user_v0_0_0.pyc
Not Running Module: '3parssh_install' as it is already installed
######################################
Running Module: 'create_db_user' Version: '0.0.3'
Choose username for Monitiq DB User [MONITIQ]
^CTraceback (most recent call last):
  File "/opt/monitiq-universal/install/modules/create_db_user-v0_0_3.py", line 132, in <module>
    inputVal = raw_input("");
Traceback (most recent call last):
  File "monitiq_install.py", line 40, in <module>
KeyboardInterrupt
    module_install.runModules();
  File "/opt/monitiq-universal/install/module_install.py", line 86, in runModules
    exit_code = runModule(module);
  File "/opt/monitiq-universal/install/module_install.py", line 19, in runModule
    for line in iter(process.stdout.readline, b''):
KeyboardInterrupt
A solution or some pointers would be helpful :)
--EDIT
With try catch
Running Module: 'create_db_user' Version: '0.0.0'
Choose username for Monitiq DB User [MONITIQ]
^CGot keyboard interupt!
Traceback (most recent call last):
  File "monitiq_install.py", line 36, in <module>
    module_install.runModules();
  File "/opt/monitiq-universal/install/module_install.py", line 90, in runModules
    exit_code = runModule(module);
  File "/opt/monitiq-universal/install/module_install.py", line 29, in runModule
    sys.exit(0);
NameError: global name 'sys' is not defined
Traceback (most recent call last):
  File "/opt/monitiq-universal/install/modules/create_db_user-v0_0_0.py", line 132, in <module>
    inputVal = raw_input("");
KeyboardInterrupt
If you press Ctrl + C in a terminal then SIGINT is sent to all processes within the process group. See child process receives parent's SIGINT.
That is why you see the traceback from the child process despite try/except KeyboardInterrupt in the parent.
You could suppress the stderr output from the child process: stderr=DEVNULL. Or start it in a new process group: start_new_session=True:
import sys
from subprocess import call

try:
    call([sys.executable, 'child.py'], start_new_session=True)
except KeyboardInterrupt:
    print('Ctrl C')
else:
    print('no exception')
If you remove start_new_session=True in the above example then KeyboardInterrupt may be raised in the child too and you might get the traceback.
If subprocess.DEVNULL is not available; you could use DEVNULL = open(os.devnull, 'r+b', 0). If start_new_session parameter is not available; you could use preexec_fn=os.setsid on POSIX.
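A portability sketch along those lines (the helper name is mine): prefer start_new_session where available, fall back to preexec_fn=os.setsid on older POSIX Pythons, so the child lands in its own session either way.

```python
import os
import subprocess
import sys

def popen_detached_group(args, **kwargs):
    """Start the child in its own session/process group so a terminal
    Ctrl+C (SIGINT to the foreground group) does not reach it."""
    if os.name == "posix":
        try:
            kwargs.setdefault("start_new_session", True)
            return subprocess.Popen(args, **kwargs)
        except TypeError:  # very old Python without start_new_session
            kwargs.pop("start_new_session", None)
            kwargs["preexec_fn"] = os.setsid
    return subprocess.Popen(args, **kwargs)

p = popen_detached_group([sys.executable, "-c", "print('child ran')"],
                         stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out.decode().strip())  # child ran
```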
You can do this using try and except as below:
import subprocess

try:
    proc = subprocess.Popen("dir /S", shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while proc.poll() is None:
        print proc.stdout.readline()
except KeyboardInterrupt:
    print "Got Keyboard interrupt"
As a security best practice, you could avoid shell=True in your execution.
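For example, passing an argument list instead of a shell string avoids the shell entirely, so metacharacters in untrusted input cannot inject extra commands (the command below is just an illustration):

```python
import subprocess
import sys

# With a list, each element is passed verbatim as one argv entry;
# nothing is interpreted by /bin/sh or cmd.exe.
unsafe_looking = "hello; rm -rf /"  # would be dangerous under shell=True
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", unsafe_looking])
print(out.decode().strip())  # hello; rm -rf /
```

The child simply receives the string as a single argument and echoes it back; no shell ever sees the `;`.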
This code spawns a child process and hands signals like SIGINT, ... to them just like shells (bash, zsh, ...) do it.
This means KeyboardInterrupt is no longer seen by the Python process, but the child receives this and is killed correctly.
It works by running the process in a new foreground process group set by Python.
import os
import signal
import subprocess
import sys
import termios

def run_as_fg_process(*args, **kwargs):
    """
    the "correct" way of spawning a new subprocess:
    signals like C-c must only go
    to the child process, and not to this python.

    the args are the same as subprocess.Popen

    returns Popen().wait() value

    Some side-info about "how ctrl-c works":
    https://unix.stackexchange.com/a/149756/1321

    fun fact: this function took a whole night
    to be figured out.
    """

    old_pgrp = os.tcgetpgrp(sys.stdin.fileno())
    old_attr = termios.tcgetattr(sys.stdin.fileno())

    user_preexec_fn = kwargs.pop("preexec_fn", None)

    def new_pgid():
        if user_preexec_fn:
            user_preexec_fn()

        # set a new process group id
        os.setpgid(os.getpid(), os.getpid())

        # generally, the child process should stop itself
        # before exec so the parent can set its new pgid.
        # (setting pgid has to be done before the child execs).
        # however, Python guarantees that `preexec_fn`
        # is run before `Popen` returns.
        # this is because `Popen` waits for the closure of
        # the error relay pipe '`errpipe_write`',
        # which happens at child's exec.
        # this is also the reason the child can't stop itself
        # in Python's `Popen`, since the `Popen` call would never
        # terminate then.
        # `os.kill(os.getpid(), signal.SIGSTOP)`

    try:
        # fork the child
        child = subprocess.Popen(*args, preexec_fn=new_pgid,
                                 **kwargs)

        # we can't set the process group id from the parent since the child
        # will already have exec'd. and we can't SIGSTOP it before exec,
        # see above.
        # `os.setpgid(child.pid, child.pid)`

        # set the child's process group as new foreground
        os.tcsetpgrp(sys.stdin.fileno(), child.pid)
        # revive the child,
        # because it may have been stopped due to SIGTTOU or
        # SIGTTIN when it tried using stdout/stdin
        # after setpgid was called, and before we made it
        # the foreground process by tcsetpgrp.
        os.kill(child.pid, signal.SIGCONT)

        # wait for the child to terminate
        ret = child.wait()

    finally:
        # we have to mask SIGTTOU because tcsetpgrp
        # raises SIGTTOU to all current background
        # process group members (i.e. us) when switching tty's pgrp;
        # if we didn't do that, we'd get SIGSTOP'd
        hdlr = signal.signal(signal.SIGTTOU, signal.SIG_IGN)
        # make us tty's foreground again
        os.tcsetpgrp(sys.stdin.fileno(), old_pgrp)
        # now restore the handler
        signal.signal(signal.SIGTTOU, hdlr)
        # restore terminal attributes
        termios.tcsetattr(sys.stdin.fileno(), termios.TCSADRAIN, old_attr)

    return ret


# example:
run_as_fg_process(['openage', 'edit', '-f', 'random_map.rms'])
