I am trying the code pasted below on Windows, but instead of handling the signal, it kills the process.
However, the same code works on Ubuntu.
import os, sys
import time
import signal

def func(signum, frame):
    print 'You raised a SigInt! Signal handler called with signal', signum

signal.signal(signal.SIGINT, func)

while True:
    print "Running...", os.getpid()
    time.sleep(2)
    os.kill(os.getpid(), signal.SIGINT)
Python's os.kill wraps two unrelated APIs on Windows. It calls GenerateConsoleCtrlEvent when the sig parameter is CTRL_C_EVENT or CTRL_BREAK_EVENT; in this case the pid parameter is a process group ID. If that call fails, and for all other sig values, it calls OpenProcess and then TerminateProcess; in this case the pid parameter is a process ID, and the sig value is passed as the exit code. Terminating a Windows process is akin to sending SIGKILL to a POSIX process. Generally this should be avoided, since it doesn't allow the process to exit cleanly.
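For illustration, a small sketch of the TerminateProcess path (Windows only; the sleeping child command is arbitrary):

import os
import signal
import subprocess
import sys

# illustrative sketch (Windows): SIGTERM is neither CTRL_C_EVENT nor
# CTRL_BREAK_EVENT, so os.kill falls back to OpenProcess plus
# TerminateProcess, and the signal number becomes the exit code.
p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])
os.kill(p.pid, signal.SIGTERM)  # TerminateProcess(handle, 15)
print(p.wait())                 # 15: the "signal" is really an exit code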
Note that the docs for os.kill mistakenly claim that "kill() additionally takes process handles to be killed", which was never true. It calls OpenProcess to get a process handle.
The decision to use WinAPI CTRL_C_EVENT and CTRL_BREAK_EVENT, instead of SIGINT and SIGBREAK, is unfortunate for cross-platform code. It's also not defined what GenerateConsoleCtrlEvent does when passed a process ID that's not a process group ID. Using this function in an API that takes a process ID is dubious at best, and potentially very wrong.
For your particular needs you can write an adapter function that makes os.kill a bit more friendly for cross-platform code. For example:
import os
import sys
import time
import signal

if sys.platform != 'win32':
    kill = os.kill
    sleep = time.sleep
else:
    # adapt the conflated API on Windows.
    import threading

    sigmap = {signal.SIGINT: signal.CTRL_C_EVENT,
              signal.SIGBREAK: signal.CTRL_BREAK_EVENT}

    def kill(pid, signum):
        if signum in sigmap and pid == os.getpid():
            # we don't know if the current process is a
            # process group leader, so just broadcast
            # to all processes attached to this console.
            pid = 0
        thread = threading.current_thread()
        handler = signal.getsignal(signum)
        # work around the synchronization problem when calling
        # kill from the main thread.
        if (signum in sigmap and
            thread.name == 'MainThread' and
            callable(handler) and
            pid == 0):
            event = threading.Event()

            def handler_set_event(signum, frame):
                event.set()
                return handler(signum, frame)

            signal.signal(signum, handler_set_event)
            try:
                os.kill(pid, sigmap[signum])
                # busy wait because we can't block in the main
                # thread, else the signal handler can't execute.
                while not event.is_set():
                    pass
            finally:
                signal.signal(signum, handler)
        else:
            os.kill(pid, sigmap.get(signum, signum))

    if sys.version_info[0] > 2:
        sleep = time.sleep
    else:
        import errno

        # If the signal handler doesn't raise an exception,
        # time.sleep in Python 2 raises an EINTR IOError, but
        # Python 3 just resumes the sleep.
        def sleep(interval):
            '''sleep that ignores EINTR in 2.x on Windows'''
            while True:
                try:
                    t = time.time()
                    time.sleep(interval)
                except IOError as e:
                    if e.errno != errno.EINTR:
                        raise
                interval -= time.time() - t
                if interval <= 0:
                    break

def func(signum, frame):
    # note: don't print in a signal handler.
    global g_sigint
    g_sigint = True
    #raise KeyboardInterrupt

signal.signal(signal.SIGINT, func)

g_kill = False
while True:
    g_sigint = False
    g_kill = not g_kill
    print('Running [%d]' % os.getpid())
    sleep(2)
    if g_kill:
        kill(os.getpid(), signal.SIGINT)
    if g_sigint:
        print('SIGINT')
    else:
        print('No SIGINT')
Discussion
Windows doesn't implement signals at the system level [*]. Microsoft's C runtime implements the six signals that are required by standard C: SIGINT, SIGABRT, SIGTERM, SIGSEGV, SIGILL, and SIGFPE.
SIGABRT and SIGTERM are implemented just for the current process. You can call the handler via C raise. For example (in Python 3.5):
>>> import signal, ctypes
>>> ucrtbase = ctypes.CDLL('ucrtbase')
>>> c_raise = ucrtbase['raise']
>>> foo = lambda *a: print('foo')
>>> signal.signal(signal.SIGTERM, foo)
<Handlers.SIG_DFL: 0>
>>> c_raise(signal.SIGTERM)
foo
0
SIGTERM is useless.
You also can't do much with SIGABRT using the signal module because the abort function kills the process once the handler returns, which happens immediately when using the signal module's internal handler (it trips a flag for the registered Python callable to be called in the main thread). For Python 3 you can instead use the faulthandler module. Or call the CRT's signal function via ctypes to set a ctypes callback as the handler.
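Here is a sketch of that ctypes route (it assumes the Universal CRT, as in the REPL example above; note that a handler installed via the CRT's signal runs synchronously inside raise, unlike Python's deferred handlers):

import ctypes

SIGABRT = 22  # the Microsoft CRT's value for SIGABRT
ucrtbase = ctypes.CDLL('ucrtbase')
c_signal = ucrtbase['signal']
c_signal.restype = ctypes.c_void_p
c_signal.argtypes = (ctypes.c_int, ctypes.c_void_p)

@ctypes.CFUNCTYPE(None, ctypes.c_int)
def crt_handler(signum):
    # called synchronously by the CRT, before abort() can kill the process
    print('C handler called with signal', signum)

c_signal(SIGABRT, ctypes.cast(crt_handler, ctypes.c_void_p))
ucrtbase['raise'](SIGABRT)  # the handler runs; raise() then returns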
The CRT implements SIGSEGV, SIGILL, and SIGFPE by setting a Windows structured exception handler for the corresponding Windows exceptions:
STATUS_ACCESS_VIOLATION         SIGSEGV
STATUS_ILLEGAL_INSTRUCTION      SIGILL
STATUS_PRIVILEGED_INSTRUCTION   SIGILL
STATUS_FLOAT_DENORMAL_OPERAND   SIGFPE
STATUS_FLOAT_DIVIDE_BY_ZERO     SIGFPE
STATUS_FLOAT_INEXACT_RESULT     SIGFPE
STATUS_FLOAT_INVALID_OPERATION  SIGFPE
STATUS_FLOAT_OVERFLOW           SIGFPE
STATUS_FLOAT_STACK_CHECK        SIGFPE
STATUS_FLOAT_UNDERFLOW          SIGFPE
STATUS_FLOAT_MULTIPLE_FAULTS    SIGFPE
STATUS_FLOAT_MULTIPLE_TRAPS     SIGFPE
The CRT's implementation of these signals is incompatible with Python's signal handling. The exception filter calls the registered handler and then returns EXCEPTION_CONTINUE_EXECUTION. However, Python's handler only trips a flag for the interpreter to call the registered callable sometime later in the main thread. Thus the errant code that triggered the exception will continue to trigger in an endless loop. In Python 3 you can use the faulthandler module for these exception-based signals.
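A minimal sketch of the faulthandler approach (the deliberate access violation is just for demonstration):

import ctypes
import faulthandler

faulthandler.enable()  # dumps a traceback on SIGSEGV, SIGFPE, SIGABRT, SIGILL
ctypes.string_at(0)    # deliberate access violation: traceback, then termination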
That leaves SIGINT, to which Windows adds the non-standard SIGBREAK. Both console and non-console processes can raise these signals, but only a console process can receive them from another process. The CRT implements this by registering a console control event handler via SetConsoleCtrlHandler.
The console sends a control event by creating a new thread in an attached process that begins executing at CtrlRoutine in kernel32.dll or kernelbase.dll (undocumented). That the handler doesn't execute on the main thread can lead to synchronization problems (e.g. in the REPL or with input). Also, a control event won't interrupt the main thread if it's blocked while waiting on a synchronization object or waiting for synchronous I/O to complete. Care needs to be taken to avoid blocking in the main thread if it should be interruptible by SIGINT. Python 3 attempts to work around this by using a Windows event object, which can also be used in waits that should be interruptible by SIGINT.
When the console sends the process a CTRL_C_EVENT or CTRL_BREAK_EVENT, the CRT's handler calls the registered SIGINT or SIGBREAK handler, respectively. The SIGBREAK handler is also called for the CTRL_CLOSE_EVENT that the console sends when its window is closed. Python defaults to handling SIGINT by raising a KeyboardInterrupt in the main thread. However, SIGBREAK initially uses the default handler, which calls ExitProcess(STATUS_CONTROL_C_EXIT).
You can send a control event to all processes attached to the current console via GenerateConsoleCtrlEvent. This can target a subset of processes that belong to a process group, or target group 0 to send the event to all processes attached to the current console.
Process groups aren't a well-documented aspect of the Windows API. There's no public API to query the group of a process, but every process in a Windows session belongs to a process group, even if it's just the wininit.exe group (services session) or winlogon.exe group (interactive session). A new group is created by passing the creation flag CREATE_NEW_PROCESS_GROUP when creating a new process. The group ID is the process ID of the created process. To my knowledge, the console is the only system that uses the process group, and that's just for GenerateConsoleCtrlEvent.
What the console does when the target ID isn't a process group ID is undefined and should not be relied on. If both the process and its parent process are attached to the console, then sending it a control event basically acts like the target is group 0. If the parent process isn't attached to the current console, then GenerateConsoleCtrlEvent fails, and os.kill calls TerminateProcess. Weirdly, if you target the "System" process (PID 4) and its child process smss.exe (session manager), the call succeeds but nothing happens except that the target is mistakenly added to the list of attached processes (i.e. GetConsoleProcessList). It's probably because the parent process is the "Idle" process, which, since it's PID 0, is implicitly accepted as the broadcast PGID. The parent process rule also applies to non-console processes. Targeting a non-console child process does nothing -- except mistakenly corrupt the console process list by adding the unattached process. I hope it's clear that you should only send a control event to either group 0 or to a known process group that you created via CREATE_NEW_PROCESS_GROUP.
Don't rely on being able to send CTRL_C_EVENT to anything but group 0, since it's initially disabled in a new process group. It's not impossible to send this event to a new group, but the target process first has to enable CTRL_C_EVENT by calling SetConsoleCtrlHandler(NULL, FALSE).
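For reference, a ctypes sketch of that call as the target process would make it (Windows only):

import ctypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)

# In a process started with CREATE_NEW_PROCESS_GROUP, Ctrl+C is initially
# disabled; SetConsoleCtrlHandler(NULL, FALSE) re-enables CTRL_C_EVENT.
if not kernel32.SetConsoleCtrlHandler(None, False):
    raise ctypes.WinError(ctypes.get_last_error())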
CTRL_BREAK_EVENT is all you can depend on since it can't be disabled. Sending this event is a simple way to gracefully kill a child process that was started with CREATE_NEW_PROCESS_GROUP, assuming it has a Windows CTRL_BREAK_EVENT or C SIGBREAK handler. If not, the default handler will terminate the process, setting the exit code to STATUS_CONTROL_C_EXIT. For example:
>>> import os, signal, subprocess
>>> p = subprocess.Popen('python.exe',
... stdin=subprocess.PIPE,
... creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
>>> os.kill(p.pid, signal.CTRL_BREAK_EVENT)
>>> STATUS_CONTROL_C_EXIT = 0xC000013A
>>> p.wait() == STATUS_CONTROL_C_EXIT
True
Note that CTRL_BREAK_EVENT wasn't sent to the current process, because the example targets the process group of the child process (including all of its child processes that are attached to the console, and so on). If the example had used group 0, the current process would have been killed as well since I didn't define a SIGBREAK handler. Let's try that, but with a handler set:
>>> ctrl_break = lambda *a: print('^BREAK')
>>> signal.signal(signal.SIGBREAK, ctrl_break)
<Handlers.SIG_DFL: 0>
>>> os.kill(0, signal.CTRL_BREAK_EVENT)
^BREAK
[*]
Windows has asynchronous procedure calls (APC) to queue a target function to a thread. See the article Inside NT's Asynchronous Procedure Call for an in-depth analysis of Windows APCs, especially to clarify the role of kernel-mode APCs. You can queue a user-mode APC to a thread via QueueUserAPC. They also get queued by ReadFileEx and WriteFileEx for the I/O completion routine.
A user-mode APC executes when the thread enters an alertable wait (e.g. WaitForSingleObjectEx or SleepEx with bAlertable as TRUE). Kernel-mode APCs, on the other hand, get dispatched immediately (when the IRQL is below APC_LEVEL). They're typically used by the I/O manager to complete asynchronous I/O Request Packets in the context of the thread that issued the request (e.g. copying data from the IRP to a user-mode buffer). See Waits and APCs for a table that shows how APCs affect alertable and non-alertable waits. Note that kernel-mode APCs don't interrupt a wait, but instead are executed internally by the wait routine.
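As an illustrative ctypes sketch (Windows only): queue a user-mode APC to the current thread and observe that it runs only inside an alertable wait:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)

# VOID CALLBACK ApcProc(ULONG_PTR Parameter)
PAPCFUNC = ctypes.WINFUNCTYPE(None, ctypes.c_void_p)
kernel32.GetCurrentThread.restype = wintypes.HANDLE
kernel32.QueueUserAPC.argtypes = (PAPCFUNC, wintypes.HANDLE, ctypes.c_void_p)

@PAPCFUNC
def apc_proc(param):
    print('APC executed')

kernel32.QueueUserAPC(apc_proc, kernel32.GetCurrentThread(), None)
print('APC queued; nothing runs yet')
kernel32.SleepEx(1000, True)  # alertable wait: the queued APC runs here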
Windows could implement POSIX-like signals using APCs, but in practice it uses other means for the same ends. For example:
Structured Exception Handling, e.g. __try, __except, __finally, __leave, RaiseException, AddVectoredExceptionHandler.
Kernel Dispatcher Objects (i.e. Synchronization Objects), e.g. SetEvent, SetWaitableTimer.
Window Messages, e.g. SendMessage (to a window procedure), PostMessage (to a thread's message queue to be dispatched to a window procedure), PostThreadMessage (to a thread's message queue), WM_CLOSE, WM_TIMER.
Window messages can be sent and posted to all threads that share the calling thread's desktop and that are at the same or lower integrity level. Sending a window message puts it in a system queue to call the window procedure when the thread calls PeekMessage or GetMessage. Posting a message adds it to the thread's message queue, which has a default quota of 10,000 messages. A thread with a message queue should have a message loop to process the queue via GetMessage and DispatchMessage.

Threads in a console-only process typically do not have a message queue. However, the console host process, conhost.exe, obviously does. When the close button is clicked, or when the primary process of a console is killed via the task manager or taskkill.exe, a WM_CLOSE message is posted to the message queue of the console window's thread. The console in turn sends a CTRL_CLOSE_EVENT to all of its attached processes. If a process handles the event, it's given 5 seconds to exit gracefully before it's forcefully terminated.
For Python >= 3.8, use signal.raise_signal. It triggers the signal directly in the current process, avoiding the complications of os.kill's console-control-event semantics on Windows.
import os
import time
import signal

def func(signum, frame):
    print(f"You raised a SigInt! Signal handler called with signal {signum}")

signal.signal(signal.SIGINT, func)

while True:
    print(f"Running...{os.getpid()}")
    time.sleep(2)
    signal.raise_signal(signal.SIGINT)
Works great!
Related
I have a function in a Tkinter script that uses subprocess.Popen.wait(), which keeps freezing my GUI. After going through some docs (1 and 2), I found that I need to use asyncio to solve this via asynchronous waiting, but I am rather confused:
Note: The function is implemented using a busy loop (non-blocking call
and short sleeps). Use the asyncio module for an asynchronous wait:
see asyncio.create_subprocess_exec.
Does this mean that I have to create a subprocess for my wait() function?
Also, here's what I have so far:
def foo(self):
    try:
        self.proc = subprocess.Popen(["python", "someModule.py"])
        try:
            self.proc.wait(11)  # <-- freezes GUI
        except Exception as l:
            someFunc()
    except Exception as e:
        print(e)
Note: the TimeoutException catch on wait() serves as an indicator for another function to begin execution, whereas successful execution of wait() (i.e., a return value of 0) is for the opposite. Further, my goal is to have a timer in my Tkinter script activated upon successful execution of the child process, which as aforementioned, is indicated by a TimeoutException. The Tkinter interface will always be running whereas the child process will only start on user input and, similarly, end on user input (or if it unexpectedly crashes).
Edit: someModule.py is a script that activates an external data collecting bluetooth device. It must (1) establish a connection to the device, and if (1) is successful then (2) begin collecting data. Function (1) will wait 10 seconds for a connection to the bluetooth device to be established. If after 10 seconds a connection is not made, someModule.py (i.e., the child process) prints an error and terminates. This is why wait() executes for 11 seconds.
How do I implement asynchronous waiting?
Workaround: executing foo() via a thread solved the GUI freezing problem.
import threading

def thread(self):
    t1 = threading.Thread(target=foo)
    t1.start()  # will terminate independently upon completion of foo()
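For completeness, the asynchronous wait the docs refer to would look roughly like the following sketch (Python 3.8+, using asyncio.wait_for with the question's 11-second timeout and the same "python someModule.py" command; wiring this into the Tkinter event loop is a separate exercise):

import asyncio

async def run_child():
    proc = await asyncio.create_subprocess_exec('python', 'someModule.py')
    try:
        # same 11-second window as proc.wait(11) in the question
        await asyncio.wait_for(proc.wait(), timeout=11)
        print('child exited with', proc.returncode)
    except asyncio.TimeoutError:
        print('child still running after 11s')

asyncio.run(run_child())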
I have a python multiprocessing setup (i.e. worker processes) with custom signal handling, which prevents the worker from cleanly using multiprocessing itself. (See extended problem description below).
The Setup
The master class that spawns all worker processes looks like the following (some parts stripped to only contain the important parts).
Here, it re-binds its own signals only to print Master teardown; actually the received signals are propagated down the process tree and must be handled by the workers themselves. This is achieved by re-binding the signals after workers have been spawned.
class Midlayer(object):
    def __init__(self, nprocs=2):
        self.nprocs = nprocs
        self.procs = []

    def handle_signal(self, signum, frame):
        log.info('Master teardown')
        for p in self.procs:
            p.join()
        sys.exit()

    def start(self):
        # Start desired number of workers
        for _ in range(self.nprocs):
            p = Worker()
            self.procs.append(p)
            p.start()

        # Bind signals for master AFTER workers have been spawned and started
        signal.signal(signal.SIGINT, self.handle_signal)
        signal.signal(signal.SIGTERM, self.handle_signal)

        # Serve forever, only exit on signals
        for p in self.procs:
            p.join()
The worker class subclasses multiprocessing.Process and implements its own run() method.
In this method, it connects to a distributed message queue and polls the queue for items forever. Forever should be: until the worker receives SIGINT or SIGTERM. The worker should not quit immediately; instead, it has to finish whatever calculation it does and will quit afterwards (once quit_req is set to True).
class Worker(Process):
    def __init__(self):
        self.quit_req = False
        Process.__init__(self)

    def handle_signal(self, signum, frame):
        print('Stopping worker (pid: {})'.format(self.pid))
        self.quit_req = True

    def run(self):
        # Set signals for worker process
        signal.signal(signal.SIGINT, self.handle_signal)
        signal.signal(signal.SIGTERM, self.handle_signal)

        q = connect_to_some_distributed_message_queue()

        # Start consuming
        print('Starting worker (pid: {})'.format(self.pid))
        while not self.quit_req:
            message = q.poll()
            if len(message):
                try:
                    print('{} handling message "{}"'.format(
                        self.pid, message)
                    )
                    # Facade pattern: Pick the correct target function for the
                    # requested message and execute it.
                    MessageRouter.route(message)
                except Exception as e:
                    print('{} failed handling "{}": {}'.format(
                        self.pid, message, e.message)
                    )
The Problem
So far for the basic setup, where (almost) everything works fine:
The master process spawns the desired number of workers
Each worker connects to the message queue
Once a message is published, one of the workers receives it
The facade pattern (using a class named MessageRouter) routes the received message to the respective function and executes it
Now for the problem: Target functions (where the message gets directed to by the MessageRouter facade) may contain very complex business logic and thus may require multiprocessing.
If, for example, the target function contains something like this:
nproc = 4
# Spawn a pool, because we have expensive calculation here
p = Pool(processes=nproc)
# Collect result proxy objects for async apply calls to 'some_expensive_calculation'
rpx = [p.apply_async(some_expensive_calculation, ()) for _ in range(nproc)]
# Collect results from all processes
res = [r.get(timeout=.5) for r in rpx]
# Print all results
print(res)
Then the processes spawned by the Pool will also redirect their signal handling for SIGINT and SIGTERM to the worker's handle_signal function (because of signal propagation to the process subtree), essentially printing Stopping worker (pid: ...) and not stopping at all. I know, that this happens due to the fact that I have re-bound the signals for the worker before its own child-processes are spawned.
This is where I'm stuck: I just cannot set the workers' signals after spawning its child processes, because I do not know whether or not it spawns some (target functions are masked and may be written by others), and because the worker stays (as designed) in its poll-loop. At the same time, I cannot expect the implementation of a target function that uses multiprocessing to re-bind its own signal handlers to (whatever) default values.
Currently, I feel like restoring signal handlers in each loop in the worker (before the message is routed to its target function) and resetting them after the function has returned is the only option, but it simply feels wrong.
Do I miss something? Do you have any advice? I'd be really happy if someone could give me a hint on how to solve the flaws of my design here!
There is not a clear approach for tackling the issue in the way you want to proceed. I often find myself in situations where I have to run unknown code (represented as Python entry point functions which might get down into some C weirdness) in multiprocessing environments.
This is how I approach the problem.
The main loop
Usually the main loop is pretty simple, it fetches a task from some source (HTTP, Pipe, Rabbit Queue..) and submits it to a Pool of workers. I make sure the KeyboardInterrupt exception is correctly handled to shutdown the service.
try:
    while 1:
        task = get_next_task()
        service.process(task)
except KeyboardInterrupt:
    service.wait_for_pending_tasks()
    logging.info("Sayonara!")
The workers
The workers are managed by a Pool of workers from either multiprocessing.Pool or from concurrent.futures.ProcessPoolExecutor. If I need more advanced features such as timeout support I either use billiard or pebble.
Each worker will ignore SIGINT as recommended here. SIGTERM is left as default.
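A minimal sketch of that pattern, using the pool's initializer so every worker ignores SIGINT (the function names are illustrative):

import signal
from multiprocessing import Pool

def init_worker():
    # pool workers never see Ctrl-C; the parent decides when to stop
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def some_task(n):
    return n * n

if __name__ == '__main__':
    with Pool(processes=4, initializer=init_worker) as pool:
        print(pool.map(some_task, range(10)))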
The service
The service is controlled either by systemd or supervisord. In either cases, I make sure that the termination request is always delivered as a SIGINT (CTL+C).
I want to keep SIGTERM as an emergency shutdown rather than relying only on SIGKILL for that. SIGKILL is not portable and some platforms do not implement it.
"I whish it was that simple"
If things are more complex, I'd consider the use of frameworks such as Luigi or Celery.
In general, reinventing the wheel on such things is quite detrimental and gives little gratifications. Especially if someone else will have to look at that code.
The latter sentence does not apply if your aim is to learn how these things are done of course.
I was able to do this using Python 3 and set_start_method(method) with the 'forkserver' flavour. Another way Python 3 > Python 2!
Where by "this" I mean:
Have a main process with its own signal handler which just joins the children.
Have some worker processes with a signal handler which may spawn...
further subprocesses which do not have a signal handler.
The behaviour on Ctrl-C is then:
manager process waits for workers to exit.
workers run their signal handlers (and maybe set a stop flag and continue executing to finish their job, although I didn't bother in my example; I just joined the child I knew I had) and then exit.
all children of the workers die immediately.
Of course note that if your intention is for the children of the workers not to crash you will need to install some ignore handler or something for them in your worker process run() method, or somewhere.
To mercilessly lift from the docs:
When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
Available on Unix platforms which support passing file descriptors over Unix pipes.
The idea is therefore that the "server process" inherits the default signal handling behaviour before you install your new ones, so all its children also have default handling.
Code in all its glory:
from multiprocessing import Process, set_start_method
import sys
from signal import signal, SIGINT
from time import sleep

class NormalWorker(Process):
    def run(self):
        while True:
            print('%d %s work' % (self.pid, type(self).__name__))
            sleep(1)

class SpawningWorker(Process):
    def handle_signal(self, signum, frame):
        print('%d %s handling signal %r' % (
            self.pid, type(self).__name__, signum))

    def run(self):
        signal(SIGINT, self.handle_signal)
        sub = NormalWorker()
        sub.start()
        print('%d joining %d' % (self.pid, sub.pid))
        sub.join()
        print('%d %s joined sub worker' % (self.pid, type(self).__name__))

def main():
    set_start_method('forkserver')

    processes = [SpawningWorker() for ii in range(5)]
    for pp in processes:
        pp.start()

    def sig_handler(signum, frame):
        print('main handling signal %d' % signum)
        for pp in processes:
            pp.join()
        print('main out')
        sys.exit()

    signal(SIGINT, sig_handler)

    while True:
        sleep(1.0)

if __name__ == '__main__':
    main()
Since my previous answer was python 3 only, I thought I'd also suggest a more dirty method for fun which should work on both python 2 and python 3. Not Windows though...
multiprocessing just uses os.fork() under the covers, so patch it to reset the signal handling in the child:
import os
from signal import signal, SIGINT, SIG_DFL

def patch_fork():
    print('Patching fork')
    os_fork = os.fork

    def my_fork():
        print('Fork fork fork')
        cpid = os_fork()
        if cpid == 0:
            # child: restore default SIGINT handling
            signal(SIGINT, SIG_DFL)
        return cpid

    os.fork = my_fork
You can call that at the start of the run method of your Worker processes (so that you don't affect the Manager) and so be sure that any children will ignore those signals.
This might seem crazy, but if you're not too concerned about portability it might actually not be a bad idea as it's simple and probably pretty resilient over different python versions.
You can store the PID of the main process (at the time you register the signal handler) and use it inside the signal handler to route the execution flow:

if os.getpid() != main_pid:
    sys.exit(128 + signum)
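Fleshed out, that idea might look like the following sketch (main_pid and handle_signal are illustrative names):

import os
import signal
import sys

main_pid = os.getpid()  # recorded when the handler is registered

def handle_signal(signum, frame):
    if os.getpid() != main_pid:
        # a child that inherited this handler: just exit
        sys.exit(128 + signum)
    # the main process: perform the real teardown here
    sys.exit(0)

signal.signal(signal.SIGINT, handle_signal)
signal.signal(signal.SIGTERM, handle_signal)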
I have a Tkinter wrapper written over robocopy.exe; the code is organized as shown below:
Tkinter Wrapper:
Spawns a new thread and passes the arguments, which include source/destination and other parameters.
(Note: a Queue object is also passed to the thread, since the thread will read the output from robocopy and put it in the queue; the main Tkinter thread will keep polling the queue and update the Tkinter text widget with the output.)
Code Snippet
... Code to poll queue and update tk widget ...
q = Queue.Queue()
t1 = threading.Thread(target=CopyFiles,args=(q,src,dst,), kwargs={"ignore":ignore_list})
t1.daemon = True
t1.start()
Thread : (In a separate file)
Below is the code snippet from thread
def CopyFiles(q, src, dst, ignore=None):
    extra_args = ['/MT:15', '/E', '/LOG:./log.txt', '/tee', '/r:2', '/w:2']
    if len(ignore) > 0:
        extra_args.append('/xf')
        extra_args.extend(ignore)
        extra_args.append('/xd')
        extra_args.extend(ignore)
    command_to_pass = ["robocopy", src, dst]
    command_to_pass.extend(extra_args)
    proc = subprocess.Popen(command_to_pass, stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.readline()
        if line == '':
            break
        q.put(line.strip())
Code which is called when the Tkinter application is closed:
def onQuit(self):
    global t1
    if t1.isAlive():
        pass
    if tkMessageBox.askyesno("Title", "Do you really want to exit?"):
        self.destroy()
        self.master.destroy()
Problem
Whenever I close the Tkinter application while robocopy is running, the Python application closes but robocopy.exe keeps running.
I have tried setting the thread as a daemon, but it has no effect. How can I stop robocopy.exe when the onQuit method is called?
To simplify things let's ignore Tkinter and the fact that a separate thread is used.
The situation is that your app spawns a subprocess to execute an external program (robocopy.exe in this question), and you need to stop the spawned program from your application on a certain event (when the Tkinter app is closing in this question).
This requires an inter-process communication mechanism, so the spawned process is notified of the event and reacts accordingly. A common mechanism is to use signals provided by the OS.
You could send a signal (SIGTERM) to the external process and ask it to quit. Assuming that the program reacts to the signal as expected (most well-written applications do), you'll get the desired behavior (the process will terminate).
Using the terminate method of the subprocess will send the appropriate platform-specific signal to the subprocess (on Windows it calls TerminateProcess).
You'd need a reference to the subprocess object proc in the onQuit method (in the provided code, proc is a local variable in CopyFiles, so one option is to store it in a global variable), so you can call the terminate method of the process:
def onQuit(self):
    global t1, proc
    if t1.isAlive():
        pass
    # by the way I'm not sure about the logic, but I guess this
    # below statement should be an elif instead of if
    if tkMessageBox.askyesno("Title", "Do you really want to exit?"):
        proc.terminate()
        self.destroy()
        self.master.destroy()
This code assumes you're storing the reference to the subprocess in the global scope, so you'd have to modify the CopyFiles as well.
I'm not sure how robocopy handles terminate signals and I'm guessing that's not something that we have any control over.
If you had more control on the external program (could modify the source), there might have been more options, for example sending messages using stdio, or using a shared memory, etc.
I have a very simple python code:
import sys
import threading

def monitor_keyboard_interrupt():
    is_done = False
    while True:
        if is_done:
            break
        try:
            print(sys._getframe().f_code.co_name)
        except KeyboardInterrupt:
            is_done = True

def test():
    monitor_keyboard_thread = threading.Thread(target=monitor_keyboard_interrupt)
    monitor_keyboard_thread.start()
    monitor_keyboard_thread.join()

def main():
    test()

if '__main__' == __name__:
    main()
However when I press 'Ctrl-C' the thread isn't stopped. Can someone explain what I'm doing wrong. Any help is appreciated.
Simple reason:
Because only the main thread (<_MainThread(MainThread, started 139712048375552)>) can set signal handlers and receive signals.
This includes KeyboardInterrupt, which is raised for SIGINT.
This comes straight from the signal docs:
Some care must be taken if both signals and threads are used in the
same program. The fundamental thing to remember in using signals and
threads simultaneously is: always perform signal() operations in the
main thread of execution. Any thread can perform an alarm(),
getsignal(), pause(), setitimer() or getitimer(); only the main thread
can set a new signal handler, and the main thread will be the only one
to receive signals (this is enforced by the Python signal module, even
if the underlying thread implementation supports sending signals to
individual threads). This means that signals can’t be used as a means
of inter-thread communication. Use locks instead.
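Following that advice, here is a sketch of inter-thread communication using a threading.Event instead of a signal:

import threading
import time

stop = threading.Event()

def monitor():
    # the worker waits on an Event instead of trying to catch signals
    while not stop.wait(0.5):
        pass
    print('worker saw the stop request')

t = threading.Thread(target=monitor)
t.start()
try:
    while t.is_alive():   # keep the main thread free to receive SIGINT
        time.sleep(0.1)
except KeyboardInterrupt:
    stop.set()            # relay the interrupt to the worker
    t.join()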
I am confused as to why the following code snippet would not exit when called in the thread, but would exit when called in the main thread.
import sys, time
from threading import Thread

def testexit():
    time.sleep(5)
    sys.exit()
    print "post thread exit"

t = Thread(target=testexit)
t.start()
t.join()
print "pre main exit, post thread exit"
sys.exit()
print "post main exit"
The docs for sys.exit() state that the call should exit from Python. I can see from the output of this program that "post thread exit" is never printed, but the main thread just keeps on going even after the thread calls exit.
Is a separate instance of the interpreter being created for each thread, and the call to exit() is just exiting that separate instance? If so, how does the threading implementation manage access to shared resources? What if I did want to exit the program from the thread (not that I actually want to, but just so I understand)?
sys.exit() raises the SystemExit exception, as does thread.exit(). So, when sys.exit() raises that exception inside that thread, it has the same effect as calling thread.exit(), which is why only the thread exits.
What if I did want to exit the program from the thread?
For Linux:
os.kill(os.getpid(), signal.SIGINT)
This sends a SIGINT to the main thread which raises a KeyboardInterrupt. With that you have a proper cleanup. Also you can register a handler, if you want to react differently.
For Windows:
The above does not work on Windows, as you can only send a SIGTERM signal, which is not handled by Python and has the same effect as os._exit().
The only option is to use:
os._exit()
This will exit the entire process without any cleanup. If you need cleanup, you need to communicate with the main thread in another way.
What if I did want to exit the program
from the thread?
Apart from the method Deestan described you can call os._exit (notice the underscore). Before using it make sure that you understand that it does no cleanups (like calling __del__ or similar).
What if I did want to exit the program
from the thread (not that I actually
want to, but just so I understand)?
My preferred method is Erlang-ish message passing. Slightly simplified, I do it like this:
import sys, time
import threading
import Queue # thread-safe

class CleanExit:
    pass

ipq = Queue.Queue()

def testexit(ipq):
    time.sleep(5)
    ipq.put(CleanExit)
    return

threading.Thread(target=testexit, args=(ipq,)).start()

while True:
    print "Working..."
    time.sleep(1)
    try:
        if ipq.get_nowait() == CleanExit:
            sys.exit()
    except Queue.Empty:
        pass
Is the fact that "pre main exit, post thread exit" is printed what's bothering you?
Unlike some other languages (like Java) where the analog to sys.exit (System.exit, in Java's case) causes the VM/process/interpreter to immediately stop, Python's sys.exit just throws an exception: a SystemExit exception in particular.
Here are the docs for sys.exit (just print sys.exit.__doc__):
Exit the interpreter by raising SystemExit(status).
If the status is omitted or None, it defaults to zero (i.e., success).
If the status is numeric, it will be used as the system exit status.
If it is another kind of object, it will be printed and the system
exit status will be one (i.e., failure).
This has a few consequences:
in a thread it just kills the current thread, not the entire process (assuming it gets all the way to the top of the stack...)
object destructors (__del__) are potentially invoked as the stack frames that reference those objects are unwound
finally blocks are executed as the stack unwinds
you can catch a SystemExit exception
The last is possibly the most surprising, and is yet another reason why you should almost never have an unqualified except statement in your Python code.
_thread.interrupt_main() is another option: it raises KeyboardInterrupt in the main thread (in Python 2 the module is named thread).
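A sketch of exiting the program from a worker thread this way:

import _thread
import threading
import time

def worker():
    time.sleep(2)
    _thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

threading.Thread(target=worker).start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    print('interrupted by the worker thread')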