Python restricts signal handler functions (handling SIGINT, SIGTERM, etc.) to the following signature, with no option to pass extra arguments:
def signal_handler(sig, frame):
In the following scenario, where the program consists of multiple processes, I would like to terminate the processes gracefully in a staggered fashion upon receiving a termination signal.
The problem is that when trying to pass the shutdown events and processes to the signal handler, the only way I managed to do so is via globals.
My question is: In this scenario, how can one avoid using globals?
from multiprocessing import Event, Process
import sys

# taskhandler and eventlogger are application modules (not shown) providing capture()

# shutdown events for graceful termination
taskhandler_shutdown = Event()
logger_shutdown = Event()

# start the processes
p_taskhandler = Process(target=taskhandler.capture, args=[taskhandler_shutdown])
p_taskhandler.start()
p_eventlogger = Process(target=eventlogger.capture, args=[logger_shutdown])
p_eventlogger.start()

def termination_signal_handler(sig, frame):
    # staggered shutdown, first terminate taskhandler
    taskhandler_shutdown.set()
    p_taskhandler.join()
    # now terminate logger process
    logger_shutdown.set()
    p_eventlogger.join()
    sys.exit(0)
The signature for the handler is not really restricted; it's just that it is called with these two arguments. You're free to write a signal handler with more parameters and pre-set them with the help of functools.partial().
import signal
from functools import partial

def sigterm_handler(signum, frame, myobj):
    ...

def register_handler(myobj):
    global sigterm_handler
    sigterm_handler = partial(sigterm_handler, myobj=myobj)
    signal.signal(signal.SIGTERM, sigterm_handler)
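Applied to the question's snippet, a minimal sketch (assuming the events and processes defined there are in scope; the handler name is illustrative) could look like this:

import signal
import sys
from functools import partial

def staggered_shutdown_handler(sig, frame, shutdown_events, procs):
    # staggered shutdown: signal each process in order and wait for it
    for event, proc in zip(shutdown_events, procs):
        event.set()
        proc.join()
    sys.exit(0)

handler = partial(staggered_shutdown_handler,
                  shutdown_events=[taskhandler_shutdown, logger_shutdown],
                  procs=[p_taskhandler, p_eventlogger])
signal.signal(signal.SIGTERM, handler)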
I have a thread, spawned from the main one, that basically runs an infinite loop around a blocking system call, something like:
def run(self):
    global EXIT
    while not EXIT:
        data = self.conn.recv(1024)
        ...
I have defined a signal handler for SIGINT
def sig_handler(signum, frame):
    global EXIT, threads
    if signum == 2:  # defensive
        print("Called SIGINT")
        EXIT = True
Since the signal is caught by the main thread, it interrupts the main thread. However, the other thread is stuck on the blocking call: is there a way to interrupt a blocking system call in Python so that the function returns?
I do not want to stop the process outright, as SIGINT normally does; I would just like to interrupt recv so that the while condition is no longer true and I can do other things before exiting.
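For illustration only (not part of the original question): one common approach is to give the socket a timeout so that recv returns periodically and the exit condition is re-checked. A rough sketch, assuming conn is a regular socket.socket and the flag is a threading.Event set from the SIGINT handler in the main thread:

import socket
import threading

exit_event = threading.Event()  # set from the SIGINT handler in the main thread

def run(conn):
    # Give recv a timeout so the loop re-checks the exit condition
    # at least once per second instead of blocking forever.
    conn.settimeout(1.0)
    while not exit_event.is_set():
        try:
            data = conn.recv(1024)
        except socket.timeout:
            continue
        if not data:
            break
        # ... handle data ...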
I'm trying to implement a Python daemon in the traditional start/stop/restart style to control a consumer for a messaging queue. I've successfully used python-daemons to create a single consumer, but I need more than one listener for the volume of messages. This led me to use the multiprocessing library in my run function, along with an os.kill call in the stop function:
def run(self):
    for num in range(self.num_instances):
        p = multiprocessing.Process(target=self.start_listening)
        p.start()

def start_listening(self):
    with open('/tmp/pids/listener_{}.pid'.format(os.getpid()), 'w') as f:
        f.write("{}".format(os.getpid()))
    while True:
        # implement message queue listener
        ...

def stop(self):
    for pid_file in os.listdir('/tmp/pids/'):
        # file names look like listener_<pid>.pid
        pid = int(pid_file.split('_')[1].split('.')[0])
        os.kill(pid, signal.SIGTERM)
    shutil.rmtree('/tmp/pids/')
    super().stop()
This is almost ok, but I'd really like to have a graceful shutdown of the child processes and do some cleanup, which would include logging. I read about signal handlers, so I switched signal.SIGTERM to signal.SIGINT and added a handler to the daemon class.
def __init__(self):
    ....
    signal.signal(signal.SIGINT, self.graceful_stop)

def stop(self):
    for pid_file in os.listdir('/tmp/pids/'):
        # file names look like listener_<pid>.pid
        pid = int(pid_file.split('_')[1].split('.')[0])
        os.kill(pid, signal.SIGINT)
    super().stop()

def graceful_stop(self, signum, frame):
    self.log.debug("Gracefully stopping the child {}".format(os.getpid()))
    os.remove('/tmp/pids/listener_{}.pid'.format(os.getpid()))
    ...
However, when tested, the child processes get killed, but it doesn't seem like the graceful_stop function ever gets called (files remain, nothing gets logged, etc.). Am I implementing the handler wrong for the child processes? Is there a better way of having multiple listeners with a single control point?
I figured it out. The signal.signal registration had to be explicitly placed in each subprocess's start_listening function.
def start_listening(self):
    signal.signal(signal.SIGINT, self.graceful_stop)
    with open('/tmp/pids/listener_{}.pid'.format(os.getpid()), 'w') as f:
        f.write("{}".format(os.getpid()))
    while True:
        # implement message queue listener
        ...
I have a python multiprocessing setup (i.e. worker processes) with custom signal handling, which prevents the worker from cleanly using multiprocessing itself. (See extended problem description below).
The Setup
The master class that spawns all worker processes looks like the following (some parts stripped to only contain the important parts).
Here, it re-binds its own signals only to print Master teardown; actually the received signals are propagated down the process tree and must be handled by the workers themselves. This is achieved by re-binding the signals after workers have been spawned.
class Midlayer(object):
    def __init__(self, nprocs=2):
        self.nprocs = nprocs
        self.procs = []

    def handle_signal(self, signum, frame):
        log.info('Master teardown')
        for p in self.procs:
            p.join()
        sys.exit()

    def start(self):
        # Start desired number of workers
        for _ in range(self.nprocs):
            p = Worker()
            self.procs.append(p)
            p.start()

        # Bind signals for master AFTER workers have been spawned and started
        signal.signal(signal.SIGINT, self.handle_signal)
        signal.signal(signal.SIGTERM, self.handle_signal)

        # Serve forever, only exit on signals
        for p in self.procs:
            p.join()
The worker class subclasses multiprocessing.Process and implements its own run() method.
In this method, it connects to a distributed message queue and polls the queue for items forever. Forever should be: until the worker receives SIGINT or SIGTERM. The worker should not quit immediately; instead, it has to finish whatever calculation it does and will quit afterwards (once quit_req is set to True).
class Worker(Process):
    def __init__(self):
        self.quit_req = False
        Process.__init__(self)

    def handle_signal(self, signum, frame):
        print('Stopping worker (pid: {})'.format(self.pid))
        self.quit_req = True

    def run(self):
        # Set signals for worker process
        signal.signal(signal.SIGINT, self.handle_signal)
        signal.signal(signal.SIGTERM, self.handle_signal)

        q = connect_to_some_distributed_message_queue()

        # Start consuming
        print('Starting worker (pid: {})'.format(self.pid))
        while not self.quit_req:
            message = q.poll()
            if len(message):
                try:
                    print('{} handling message "{}"'.format(self.pid, message))
                    # Facade pattern: Pick the correct target function for the
                    # requested message and execute it.
                    MessageRouter.route(message)
                except Exception as e:
                    print('{} failed handling "{}": {}'.format(self.pid, message, e))
The Problem
So far for the basic setup, where (almost) everything works fine:
The master process spawns the desired number of workers
Each worker connects to the message queue
Once a message is published, one of the workers receives it
The facade pattern (using a class named MessageRouter) routes the received message to the respective function and executes it
Now for the problem: Target functions (where the message gets directed to by the MessageRouter facade) may contain very complex business logic and thus may require multiprocessing.
If, for example, the target function contains something like this:
nproc = 4
# Spawn a pool, because we have expensive calculation here
p = Pool(processes=nproc)
# Collect result proxy objects for async apply calls to 'some_expensive_calculation'
rpx = [p.apply_async(some_expensive_calculation, ()) for _ in range(nproc)]
# Collect results from all processes
res = [r.get(timeout=.5) for r in rpx]
# Print all results
print(res)
Then the processes spawned by the Pool will also redirect their signal handling for SIGINT and SIGTERM to the worker's handle_signal function (because of signal propagation to the process subtree), essentially printing Stopping worker (pid: ...) and not stopping at all. I know that this happens because I have re-bound the signals for the worker before its own child processes are spawned.
This is where I'm stuck: I just cannot set the workers' signals after spawning its child processes, because I do not know whether or not it spawns some (target functions are masked and may be written by others), and because the worker stays (as designed) in its poll-loop. At the same time, I cannot expect the implementation of a target function that uses multiprocessing to re-bind its own signal handlers to (whatever) default values.
Currently, I feel like restoring signal handlers in each loop in the worker (before the message is routed to its target function) and resetting them after the function has returned is the only option, but it simply feels wrong.
Am I missing something? Do you have any advice? I'd be really happy if someone could give me a hint on how to solve the flaws of my design here!
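For illustration only, the save-and-restore workaround described in the question might look roughly like the sketch below (not from the original post; MessageRouter is the question's facade class):

import signal

def route_with_default_signals(message):
    # Temporarily restore default handlers so any Pool spawned by the
    # target function inherits the defaults instead of the worker's handler.
    old_int = signal.signal(signal.SIGINT, signal.SIG_DFL)
    old_term = signal.signal(signal.SIGTERM, signal.SIG_DFL)
    try:
        MessageRouter.route(message)
    finally:
        signal.signal(signal.SIGINT, old_int)
        signal.signal(signal.SIGTERM, old_term)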
There is no clear approach for tackling the issue in the way you want to proceed. I often find myself in situations where I have to run unknown code (represented as Python entry point functions which might get down into some C weirdness) in multiprocessing environments.
This is how I approach the problem.
The main loop
Usually the main loop is pretty simple: it fetches a task from some source (HTTP, pipe, Rabbit queue, ...) and submits it to a Pool of workers. I make sure the KeyboardInterrupt exception is correctly handled to shut down the service.
try:
    while 1:
        task = get_next_task()
        service.process(task)
except KeyboardInterrupt:
    service.wait_for_pending_tasks()
    logging.info("Sayonara!")
The workers
The workers are managed by a Pool of workers from either multiprocessing.Pool or from concurrent.futures.ProcessPoolExecutor. If I need more advanced features such as timeout support I either use billiard or pebble.
Each worker will ignore SIGINT as recommended here. SIGTERM is left as default.
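A minimal sketch of that pattern, assuming a plain multiprocessing.Pool (the worker count is arbitrary):

import signal
from multiprocessing import Pool

def init_worker():
    # workers ignore SIGINT; only the parent reacts to Ctrl+C
    signal.signal(signal.SIGINT, signal.SIG_IGN)

pool = Pool(processes=4, initializer=init_worker)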
The service
The service is controlled either by systemd or supervisord. In either case, I make sure that the termination request is always delivered as SIGINT (Ctrl+C).
I want to keep SIGTERM as an emergency shutdown rather than relying only on SIGKILL for that. SIGKILL is not portable and some platforms do not implement it.
"I whish it was that simple"
If things are more complex, I'd consider the use of frameworks such as Luigi or Celery.
In general, reinventing the wheel on such things is quite detrimental and gives little gratification. Especially if someone else will have to look at that code.
The latter sentence does not apply if your aim is to learn how these things are done of course.
I was able to do this using Python 3 and set_start_method(method) with the 'forkserver' flavour. Another way Python 3 > Python 2!
Where by "this" I mean:
Have a main process with its own signal handler which just joins the children.
Have some worker processes with a signal handler which may spawn...
further subprocesses which do not have a signal handler.
The behaviour on Ctrl-C is then:
manager process waits for workers to exit.
workers run their signal handlers (and maybe set a stop flag and continue executing to finish their job, although I didn't bother in my example; I just joined the child I knew I had) and then exit.
all children of the workers die immediately.
Of course note that if your intention is for the children of the workers not to crash you will need to install some ignore handler or something for them in your worker process run() method, or somewhere.
To mercilessly lift from the docs:
When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
Available on Unix platforms which support passing file descriptors over Unix pipes.
The idea is therefore that the "server process" inherits the default signal handling behaviour before you install your new ones, so all its children also have default handling.
Code in all its glory:
from multiprocessing import Process, set_start_method
import sys
from signal import signal, SIGINT
from time import sleep


class NormalWorker(Process):

    def run(self):
        while True:
            print('%d %s work' % (self.pid, type(self).__name__))
            sleep(1)


class SpawningWorker(Process):

    def handle_signal(self, signum, frame):
        print('%d %s handling signal %r' % (
            self.pid, type(self).__name__, signum))

    def run(self):
        signal(SIGINT, self.handle_signal)
        sub = NormalWorker()
        sub.start()
        print('%d joining %d' % (self.pid, sub.pid))
        sub.join()
        print('%d %s joined sub worker' % (self.pid, type(self).__name__))


def main():
    set_start_method('forkserver')

    processes = [SpawningWorker() for ii in range(5)]

    for pp in processes:
        pp.start()

    def sig_handler(signum, frame):
        print('main handling signal %d' % signum)
        for pp in processes:
            pp.join()
        print('main out')
        sys.exit()

    signal(SIGINT, sig_handler)

    while True:
        sleep(1.0)


if __name__ == '__main__':
    main()
Since my previous answer was Python 3 only, I thought I'd also suggest a dirtier method, just for fun, which should work on both Python 2 and Python 3. Not Windows though...
multiprocessing just uses os.fork() under the covers, so patch it to reset the signal handling in the child:
import os
from signal import signal, SIGINT, SIG_DFL


def patch_fork():

    print('Patching fork')
    os_fork = os.fork

    def my_fork():
        print('Fork fork fork')
        cpid = os_fork()
        if cpid == 0:
            # child
            signal(SIGINT, SIG_DFL)
        return cpid

    os.fork = my_fork
You can call that at the start of the run method of your Worker processes (so that you don't affect the Manager), and so be sure that any children fall back to the default handling for those signals.
This might seem crazy, but if you're not too concerned about portability it might actually not be a bad idea as it's simple and probably pretty resilient over different python versions.
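Usage might then look roughly like this (a sketch, assuming the Worker class from the question):

from multiprocessing import Process

class Worker(Process):
    def run(self):
        patch_fork()  # children forked from here on get SIG_DFL for SIGINT
        # ... signal setup and polling loop as before ...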
You can store the pid of the main process (when registering the signal handler) and use it inside the signal handler to route the execution flow:
if os.getpid() != main_pid:
    sys.exit(128 + signum)
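A slightly fuller sketch of that idea (the shutdown() call stands in for whatever cleanup the main process should do and is purely illustrative):

import os
import signal
import sys

main_pid = os.getpid()  # captured when the handler is registered

def handle_signal(signum, frame):
    if os.getpid() != main_pid:
        # an inherited copy of the handler running in a child: just exit
        sys.exit(128 + signum)
    shutdown()  # hypothetical cleanup in the main process

signal.signal(signal.SIGTERM, handle_signal)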
I am trying the code pasted below on Windows, but instead of handling the signal, it kills the process.
However, the same code is working in Ubuntu.
import os, sys
import time
import signal


def func(signum, frame):
    print 'You raised a SigInt! Signal handler called with signal', signum


signal.signal(signal.SIGINT, func)

while True:
    print "Running...", os.getpid()
    time.sleep(2)
    os.kill(os.getpid(), signal.SIGINT)
Python's os.kill wraps two unrelated APIs on Windows. It calls GenerateConsoleCtrlEvent when the sig parameter is CTRL_C_EVENT or CTRL_BREAK_EVENT. In this case the pid parameter is a process group ID. If the latter call fails, and for all other sig values, it calls OpenProcess and then TerminateProcess. In this case the pid parameter is a process ID, and the sig value is passed as the exit code. Terminating a Windows process is akin to sending SIGKILL to a POSIX process. Generally this should be avoided since it doesn't allow the process to exit cleanly.
Note that the docs for os.kill mistakenly claim that "kill() additionally takes process handles to be killed", which was never true. It calls OpenProcess to get a process handle.
The decision to use WinAPI CTRL_C_EVENT and CTRL_BREAK_EVENT, instead of SIGINT and SIGBREAK, is unfortunate for cross-platform code. It's also not defined what GenerateConsoleCtrlEvent does when passed a process ID that's not a process group ID. Using this function in an API that takes a process ID is dubious at best, and potentially very wrong.
For your particular needs you can write an adapter function that makes os.kill a bit more friendly for cross-platform code. For example:
import os
import sys
import time
import signal

if sys.platform != 'win32':
    kill = os.kill
    sleep = time.sleep
else:
    # adapt the conflated API on Windows.
    import threading

    sigmap = {signal.SIGINT: signal.CTRL_C_EVENT,
              signal.SIGBREAK: signal.CTRL_BREAK_EVENT}

    def kill(pid, signum):
        if signum in sigmap and pid == os.getpid():
            # we don't know if the current process is a
            # process group leader, so just broadcast
            # to all processes attached to this console.
            pid = 0
        thread = threading.current_thread()
        handler = signal.getsignal(signum)
        # work around the synchronization problem when calling
        # kill from the main thread.
        if (signum in sigmap and
                thread.name == 'MainThread' and
                callable(handler) and
                pid == 0):
            event = threading.Event()

            def handler_set_event(signum, frame):
                event.set()
                return handler(signum, frame)

            signal.signal(signum, handler_set_event)
            try:
                os.kill(pid, sigmap[signum])
                # busy wait because we can't block in the main
                # thread, else the signal handler can't execute.
                while not event.is_set():
                    pass
            finally:
                signal.signal(signum, handler)
        else:
            os.kill(pid, sigmap.get(signum, signum))

    if sys.version_info[0] > 2:
        sleep = time.sleep
    else:
        import errno

        # If the signal handler doesn't raise an exception,
        # time.sleep in Python 2 raises an EINTR IOError, but
        # Python 3 just resumes the sleep.

        def sleep(interval):
            '''sleep that ignores EINTR in 2.x on Windows'''
            while True:
                try:
                    t = time.time()
                    time.sleep(interval)
                except IOError as e:
                    if e.errno != errno.EINTR:
                        raise
                interval -= time.time() - t
                if interval <= 0:
                    break


def func(signum, frame):
    # note: don't print in a signal handler.
    global g_sigint
    g_sigint = True
    #raise KeyboardInterrupt


signal.signal(signal.SIGINT, func)

g_kill = False
while True:
    g_sigint = False
    g_kill = not g_kill
    print('Running [%d]' % os.getpid())
    sleep(2)
    if g_kill:
        kill(os.getpid(), signal.SIGINT)
    if g_sigint:
        print('SIGINT')
    else:
        print('No SIGINT')
Discussion
Windows doesn't implement signals at the system level [*]. Microsoft's C runtime implements the six signals that are required by standard C: SIGINT, SIGABRT, SIGTERM, SIGSEGV, SIGILL, and SIGFPE.
SIGABRT and SIGTERM are implemented just for the current process. You can call the handler via C raise. For example (in Python 3.5):
>>> import signal, ctypes
>>> ucrtbase = ctypes.CDLL('ucrtbase')
>>> c_raise = ucrtbase['raise']
>>> foo = lambda *a: print('foo')
>>> signal.signal(signal.SIGTERM, foo)
<Handlers.SIG_DFL: 0>
>>> c_raise(signal.SIGTERM)
foo
0
SIGTERM is useless.
You also can't do much with SIGABRT using the signal module because the abort function kills the process once the handler returns, which happens immediately when using the signal module's internal handler (it trips a flag for the registered Python callable to be called in the main thread). For Python 3 you can instead use the faulthandler module. Or call the CRT's signal function via ctypes to set a ctypes callback as the handler.
The CRT implements SIGSEGV, SIGILL, and SIGFPE by setting a Windows structured exception handler for the corresponding Windows exceptions:
STATUS_ACCESS_VIOLATION SIGSEGV
STATUS_ILLEGAL_INSTRUCTION SIGILL
STATUS_PRIVILEGED_INSTRUCTION SIGILL
STATUS_FLOAT_DENORMAL_OPERAND SIGFPE
STATUS_FLOAT_DIVIDE_BY_ZERO SIGFPE
STATUS_FLOAT_INEXACT_RESULT SIGFPE
STATUS_FLOAT_INVALID_OPERATION SIGFPE
STATUS_FLOAT_OVERFLOW SIGFPE
STATUS_FLOAT_STACK_CHECK SIGFPE
STATUS_FLOAT_UNDERFLOW SIGFPE
STATUS_FLOAT_MULTIPLE_FAULTS SIGFPE
STATUS_FLOAT_MULTIPLE_TRAPS SIGFPE
The CRT's implementation of these signals is incompatible with Python's signal handling. The exception filter calls the registered handler and then returns EXCEPTION_CONTINUE_EXECUTION. However, Python's handler only trips a flag for the interpreter to call the registered callable sometime later in the main thread. Thus the errant code that triggered the exception will continue to trigger in an endless loop. In Python 3 you can use the faulthandler module for these exception-based signals.
That leaves SIGINT, to which Windows adds the non-standard SIGBREAK. Both console and non-console processes can raise these signals, but only a console process can receive them from another process. The CRT implements this by registering a console control event handler via SetConsoleCtrlHandler.
The console sends a control event by creating a new thread in an attached process that begins executing at CtrlRoutine in kernel32.dll or kernelbase.dll (undocumented). That the handler doesn't execute on the main thread can lead to synchronization problems (e.g. in the REPL or with input). Also, a control event won't interrupt the main thread if it's blocked while waiting on a synchronization object or waiting for synchronous I/O to complete. Care needs to be taken to avoid blocking in the main thread if it should be interruptible by SIGINT. Python 3 attempts to work around this by using a Windows event object, which can also be used in waits that should be interruptible by SIGINT.
When the console sends the process a CTRL_C_EVENT or CTRL_BREAK_EVENT, the CRT's handler calls the registered SIGINT or SIGBREAK handler, respectively. The SIGBREAK handler is also called for the CTRL_CLOSE_EVENT that the console sends when its window is closed. Python defaults to handling SIGINT by raising a KeyboardInterrupt in the main thread. However, SIGBREAK initially uses the default CTRL_BREAK_EVENT handler, which calls ExitProcess(STATUS_CONTROL_C_EXIT).
You can send a control event to all processes attached to the current console via GenerateConsoleCtrlEvent. This can target a subset of processes that belong to a process group, or target group 0 to send the event to all processes attached to the current console.
Process groups aren't a well-documented aspect of the Windows API. There's no public API to query the group of a process, but every process in a Windows session belongs to a process group, even if it's just the wininit.exe group (services session) or winlogon.exe group (interactive session). A new group is created by passing the creation flag CREATE_NEW_PROCESS_GROUP when creating a new process. The group ID is the process ID of the created process. To my knowledge, the console is the only system that uses the process group, and that's just for GenerateConsoleCtrlEvent.
What the console does when the target ID isn't a process group ID is undefined and should not be relied on. If both the process and its parent process are attached to the console, then sending it a control event basically acts like the target is group 0. If the parent process isn't attached to the current console, then GenerateConsoleCtrlEvent fails, and os.kill calls TerminateProcess. Weirdly, if you target the "System" process (PID 4) and its child process smss.exe (session manager), the call succeeds but nothing happens except that the target is mistakenly added to the list of attached processes (i.e. GetConsoleProcessList). It's probably because the parent process is the "Idle" process, which, since it's PID 0, is implicitly accepted as the broadcast PGID. The parent process rule also applies to non-console processes. Targeting a non-console child process does nothing -- except mistakenly corrupt the console process list by adding the unattached process. I hope it's clear that you should only send a control event to either group 0 or to a known process group that you created via CREATE_NEW_PROCESS_GROUP.
Don't rely on being able to send CTRL_C_EVENT to anything but group 0, since it's initially disabled in a new process group. It's not impossible to send this event to a new group, but the target process first has to enable CTRL_C_EVENT by calling SetConsoleCtrlHandler(NULL, FALSE).
CTRL_BREAK_EVENT is all you can depend on since it can't be disabled. Sending this event is a simple way to gracefully kill a child process that was started with CREATE_NEW_PROCESS_GROUP, assuming it has a Windows CTRL_BREAK_EVENT or C SIGBREAK handler. If not, the default handler will terminate the process, setting the exit code to STATUS_CONTROL_C_EXIT. For example:
>>> import os, signal, subprocess
>>> p = subprocess.Popen('python.exe',
... stdin=subprocess.PIPE,
... creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
>>> os.kill(p.pid, signal.CTRL_BREAK_EVENT)
>>> STATUS_CONTROL_C_EXIT = 0xC000013A
>>> p.wait() == STATUS_CONTROL_C_EXIT
True
Note that CTRL_BREAK_EVENT wasn't sent to the current process, because the example targets the process group of the child process (including all of its child processes that are attached to the console, and so on). If the example had used group 0, the current process would have been killed as well since I didn't define a SIGBREAK handler. Let's try that, but with a handler set:
>>> ctrl_break = lambda *a: print('^BREAK')
>>> signal.signal(signal.SIGBREAK, ctrl_break)
<Handlers.SIG_DFL: 0>
>>> os.kill(0, signal.CTRL_BREAK_EVENT)
^BREAK
[*]
Windows has asynchronous procedure calls (APC) to queue a target function to a thread. See the article Inside NT's Asynchronous Procedure Call for an in-depth analysis of Windows APCs, especially to clarify the role of kernel-mode APCs. You can queue a user-mode APC to a thread via QueueUserAPC. They also get queued by ReadFileEx and WriteFileEx for the I/O completion routine.
A user-mode APC executes when the thread enters an alertable wait (e.g. WaitForSingleObjectEx or SleepEx with bAlertable as TRUE). Kernel-mode APCs, on the other hand, get dispatched immediately (when the IRQL is below APC_LEVEL). They're typically used by the I/O manager to complete asynchronous I/O Request Packets in the context of the thread that issued the request (e.g. copying data from the IRP to a user-mode buffer). See Waits and APCs for a table that shows how APCs affect alertable and non-alertable waits. Note that kernel-mode APCs don't interrupt a wait, but instead are executed internally by the wait routine.
Windows could implement POSIX-like signals using APCs, but in practice it uses other means for the same ends. For example:
Structured Exception Handling, e.g. __try, __except, __finally, __leave, RaiseException, AddVectoredExceptionHandler.
Kernel Dispatcher Objects (i.e. Synchronization Objects), e.g. SetEvent, SetWaitableTimer.
Window Messages, e.g. SendMessage (to a window procedure), PostMessage (to a thread's message queue to be dispatched to a window procedure), PostThreadMessage (to a thread's message queue), WM_CLOSE, WM_TIMER.
Window messages can be sent and posted to all threads that share the calling thread's desktop and that are at the same or lower integrity level. Sending a window message puts it in a system queue to call the window procedure when the thread calls PeekMessage or GetMessage. Posting a message adds it to the thread's message queue, which has a default quota of 10,000 messages. A thread with a message queue should have a message loop to process the queue via GetMessage and DispatchMessage. Threads in a console-only process typically do not have a message queue. However, the console host process, conhost.exe, obviously does. When the close button is clicked, or when the primary process of a console is killed via the task manager or taskkill.exe, a WM_CLOSE message is posted to the message queue of the console window's thread. The console in turns sends a CTRL_CLOSE_EVENT to all of its attached processes. If a process handles the event, it's given 5 seconds to exit gracefully before it's forcefully terminated.
For Python >= 3.8, use signal.raise_signal. This triggers the signal directly in the current process, avoiding the complications of os.kill interpreting the process ID incorrectly.
import os
import time
import signal


def func(signum, frame):
    print(f"You raised a SigInt! Signal handler called with signal {signum}")


signal.signal(signal.SIGINT, func)

while True:
    print(f"Running...{os.getpid()}")
    time.sleep(2)
    signal.raise_signal(signal.SIGINT)
Works great!
I have a very simple python code:
import sys
import threading


def monitor_keyboard_interrupt():
    is_done = False
    while True:
        if is_done:
            break
        try:
            print(sys._getframe().f_code.co_name)
        except KeyboardInterrupt:
            is_done = True


def test():
    monitor_keyboard_thread = threading.Thread(target=monitor_keyboard_interrupt)
    monitor_keyboard_thread.start()
    monitor_keyboard_thread.join()


def main():
    test()


if '__main__' == __name__:
    main()
However, when I press Ctrl-C the thread isn't stopped. Can someone explain what I'm doing wrong? Any help is appreciated.
Simple reason:
Because only the <_MainThread(MainThread, started 139712048375552)> can create signal handlers and listen for signals.
This includes KeyboardInterrupt, which is raised for SIGINT.
This comes straight from the signal docs:
Some care must be taken if both signals and threads are used in the
same program. The fundamental thing to remember in using signals and
threads simultaneously is: always perform signal() operations in the
main thread of execution. Any thread can perform an alarm(),
getsignal(), pause(), setitimer() or getitimer(); only the main thread
can set a new signal handler, and the main thread will be the only one
to receive signals (this is enforced by the Python signal module, even
if the underlying thread implementation supports sending signals to
individual threads). This means that signals can’t be used as a means
of inter-thread communication. Use locks instead.
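A sketch of what the docs suggest, applied to the code above: catch the interrupt in the main thread and tell the worker thread to stop through a threading.Event:

import sys
import threading

stop_event = threading.Event()

def monitor_keyboard_interrupt():
    while not stop_event.is_set():
        print(sys._getframe().f_code.co_name)
        stop_event.wait(0.5)  # avoid a busy loop

def main():
    worker = threading.Thread(target=monitor_keyboard_interrupt)
    worker.start()
    try:
        while worker.is_alive():
            worker.join(timeout=0.5)  # keep the main thread interruptible
    except KeyboardInterrupt:
        stop_event.set()
        worker.join()

if __name__ == '__main__':
    main()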