Ctrl-C doesn't work when using threading.Timer - python

I'm writing a multithreaded Python app on Windows.
I used to terminate the app using Ctrl-C, but once I added threading.Timer instances, Ctrl-C stopped working (or sometimes takes a very long time to take effect).
How could this be?
What's the relation between having Timer threads and Ctrl-C?
UPDATE:
I found the following in Python's thread documentation:
Threads interact strangely with interrupts: the KeyboardInterrupt exception will be received by an arbitrary thread. (When the signal module is available, interrupts always go to the main thread.)

The way threading.Thread (and thus threading.Timer) works is that each thread registers itself with the threading module, and upon interpreter exit the interpreter will wait for all registered threads to exit before terminating the interpreter proper. This is done so threads actually finish execution, instead of having the interpreter brutally removed from under them. So when you hit ^C, the main thread receives the signal, decides to terminate and waits for the timers to finish.
You can set threads daemonic (with the setDaemon method) to make the threading module not wait for these threads, but if they happen to be executing Python code while the interpreter exits, you get confusing errors during exit. Even if you cancel the threading.Timer (and set it daemonic) it can still wake up while the interpreter is being destroyed -- because threading.Timer's cancel method just tells the threading.Timer not to execute anything when it wakes up, but it has to actually execute Python code to make that determination.
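To make that concrete, here is a minimal sketch (names are illustrative, not from the question's code) of a daemonic Timer and the cancel caveat just described:
import threading

def fire():
    print("timer fired")

t = threading.Timer(10.0, fire)
t.daemon = True        # same effect as the older t.setDaemon(True)
t.start()

# cancel() prevents the callback from running, but the timer thread still has
# to wake up and execute a little Python code to notice it was cancelled.
t.cancel()
t.join()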
There is no graceful way to terminate threads (other than the current one), and no reliable way to interrupt a thread that's blocked. A more manageable approach to timers is usually an event loop, like the ones GUIs and other event-driven systems offer you. What to use depends entirely on what else your program will be doing.

There is a presentation by David Beazley that sheds some light on the topic. The PDF is available here. Look around pages 22--25 ("Interlude: Signals" to "Frozen Signals").

This is a possible workaround: using time.sleep() instead of Timer means a "graceful shutdown" mechanism can be implemented, at least for Python 3, where it appears that KeyboardInterrupt is only raised in user code for the main thread. In any other thread the exception is apparently "ignored": in fact it makes the thread where it occurs die immediately, but not any ancestor threads, and, problematically, it can't be caught there.
Let's say you want Ctrl-C responsiveness to be 0.5 seconds, but you only want to repeat some actual work every 5 seconds (work is of random duration as below):
import threading, sys, time, random

blip_counter = 0
work_threads = []

def repeat_every_5():
    global blip_counter
    print( f'counter: {blip_counter}')

    def real_work():
        real_work_duration_s = random.randrange(10)
        print( f'do some real work every 5 seconds, lasting {real_work_duration_s} s: starting...')
        # in a real world situation stop_event.is_set() can be tested anywhere in the code
        for interval_500ms in range( real_work_duration_s * 2 ):
            if threading.current_thread().stop_event.is_set():
                print( f'stop_event SET!')
                return
            time.sleep(0.5)
        print( f'...real work ends')

    # clean up work_threads as appropriate
    for work_thread in work_threads:
        if not work_thread.is_alive():
            print( f'work thread {work_thread} dead, removing from list')
            work_threads.remove( work_thread )

    new_work_thread = threading.Thread(target=real_work)
    # stop event for graceful shutdown
    new_work_thread.stop_event = threading.Event()
    work_threads.append(new_work_thread)
    # in fact, because a graceful shutdown is now implemented, new_work_thread doesn't have to be daemon
    # new_work_thread.daemon = True
    new_work_thread.start()

    blip_counter += 1
    time.sleep( 5 )

    # re-schedule this function in a new daemon thread
    timer_thread = threading.Thread(target=repeat_every_5)
    timer_thread.daemon = True
    timer_thread.start()

repeat_every_5()

while True:
    try:
        time.sleep( 0.5 )
    except KeyboardInterrupt:
        print( f'shutting down due to Ctrl-C..., work threads left: {len(work_threads)}')
        # trigger stop event for graceful shutdown
        for work_thread in work_threads:
            if work_thread.is_alive():
                print( f'work_thread {work_thread}: setting STOP event')
                work_thread.stop_event.set()
                print( f'work_thread {work_thread}: joining to main...')
                work_thread.join()
                print( f'work_thread {work_thread}: ...joined to main')
            else:
                print( f'work_thread {work_thread} has died')
        sys.exit(1)
This while True: mechanism looks a bit clunky. But I think, as I say, that currently (Python 3.8.x) KeyboardInterrupt can only be caught on the main thread.
PS: according to my experiments, handling child processes may be easier, in the sense that, in a simple case at least, Ctrl-C seems to cause a KeyboardInterrupt to occur simultaneously in all running processes.
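A minimal sketch of that child-process observation (an assumption on my part: behaviour may differ by platform and start method; on POSIX, Ctrl-C is delivered to the whole foreground process group, so each process raises its own KeyboardInterrupt):
import multiprocessing, time

def child():
    try:
        while True:
            time.sleep(0.5)
    except KeyboardInterrupt:
        print('child process got KeyboardInterrupt')

if __name__ == '__main__':
    p = multiprocessing.Process(target=child)
    p.start()
    try:
        while True:
            time.sleep(0.5)
    except KeyboardInterrupt:
        print('main process got KeyboardInterrupt')
    p.join()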

Wrap your main while loop in a try except:
from threading import Timer
import time

def randomfn():
    print("Heartbeat sent!")

class RepeatingTimer(Timer):
    def run(self):
        while not self.finished.is_set():
            self.function(*self.args, **self.kwargs)
            self.finished.wait(self.interval)

t = RepeatingTimer(10.0, function=randomfn)
print("Starting...")
t.start()

while True:
    try:
        print("Hello")
        time.sleep(1)
    except KeyboardInterrupt:
        print("Cancelled timer...")
        t.cancel()
        print("Cancelled loop...")
        break

print("End")
Results:
Heartbeat sent!
Hello
Hello
Hello
Hello
Hello
Hello
Hello
Hello
Hello
Cancelled timer...
Cancelled loop...
End


ThreadPoolExecutor KeyboardInterrupt

I've got the following code which uses a concurrent.futures.ThreadPoolExecutor to launch processes of another program in a metered way (no more than 30 at a time). I additionally want the ability to stop all work if I ctrl-C the python process. This code works with one caveat: I have to ctrl-C twice. The first time I send the SIGINT, nothing happens; the second time, I see the "sending SIGKILL to processes", the processes die, and it works. What is happening to my first SIGINT?
import concurrent.futures
import signal
import subprocess

execution_list = [['prog', 'arg1'], ['prog', 'arg2']]  # ... etc

processes = []

def launch_instance(args):
    process = subprocess.Popen(args)
    processes.append(process)
    process.wait()

try:
    with concurrent.futures.ThreadPoolExecutor(max_workers=30) as executor:
        results = list(executor.map(launch_instance, execution_list))
except KeyboardInterrupt:
    print('sending SIGKILL to processes')
    for p in processes:
        if p.poll() is None:  # if process is still alive
            p.send_signal(signal.SIGKILL)
I stumbled upon your question while trying to solve something similar. Not 100% sure that it will solve your use case (I'm not using subprocesses), but I think it will.
Your code will stay within the context manager of the executor as long as the jobs are still running. My educated guess is that the first KeyboardInterrupt will be caught by the ThreadPoolExecutor, whose default behaviour would be to not start any new jobs, wait until the current ones are finished, and then clean up (and probably reraise the KeyboardInterrupt). But the processes are probably long running, so you wouldn't notice. The second KeyboardInterrupt then interrupts this error handling.
How I solved my problem (infinite background processes in separate threads) is with the following code:
from concurrent.futures import ThreadPoolExecutor
import signal
import threading
from time import sleep

def loop_worker(exiting):
    while not exiting.is_set():
        try:
            print("started work")
            sleep(10)
            print("finished work")
        except KeyboardInterrupt:
            print("caught keyboardinterrupt")  # never caught here. just for demonstration purposes

def loop_in_worker():
    exiting = threading.Event()

    def signal_handler(signum, frame):
        print("Setting exiting event")
        exiting.set()

    signal.signal(signal.SIGTERM, signal_handler)

    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(loop_worker, exiting)
        try:
            while not exiting.is_set():
                sleep(1)
                print('waiting')
        except KeyboardInterrupt:
            print('Caught keyboardinterrupt')
            exiting.set()

    print("Main thread finished (and thus all others)")

if __name__ == '__main__':
    loop_in_worker()
It uses an Event to signal to the threads that they should stop what they are doing. The main thread runs a loop just to keep busy and check for any exceptions. Note that this loop is within the context of the ThreadPoolExecutor.
As a bonus it also handles the SIGTERM signal by using the same exiting Event.
If you add a loop between processes.append(process) and process.wait() that checks for such a signal, it will probably solve your use case as well. Which action to take there depends on what you want to do with the running processes.
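A hedged sketch of that polling idea, reusing an exiting event like the one above (the SIGTERM choice is just an example; the question used SIGKILL):
import signal
import subprocess
import threading
import time

exiting = threading.Event()   # set from a signal handler or an except KeyboardInterrupt block
processes = []

def launch_instance(args):
    process = subprocess.Popen(args)
    processes.append(process)
    # poll instead of process.wait(), so this worker thread can notice the exiting event
    while process.poll() is None:
        if exiting.is_set():
            process.send_signal(signal.SIGTERM)   # or SIGKILL, as in the question
            process.wait()
            return
        time.sleep(0.5)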
If you run my script from the command line and press ctrl-C you should see something like:
started work
waiting
waiting
^CCaught keyboardinterrupt
# some time passes here
finished work
Main thread finished (and thus all others)
Inspiration for my solution came from this blog post

Python threading interrupt sleep

Is there a way in python to interrupt a thread when it's sleeping?
(As we can do in java)
I am looking for something like that.
import threading
from time import sleep

def f():
    print('started')
    try:
        sleep(100)
        print('finished')
    except SleepInterruptedException:
        print('interrupted')

t = threading.Thread(target=f)
t.start()

if input() == 'stop':
    t.interrupt()
The thread is sleeping for 100 seconds, and if I type 'stop', it should be interrupted.
The correct approach is to use threading.Event. For example:
import threading
e = threading.Event()
e.wait(timeout=100) # instead of time.sleep(100)
In the other thread, you need to have access to e. You can interrupt the sleep by issuing:
e.set()
This will immediately interrupt the sleep. You can check the return value of e.wait to determine whether it's timed out or interrupted. For more information refer to the documentation: https://docs.python.org/3/library/threading.html#event-objects .
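Put together, a minimal sketch of the original example rewritten with an Event (the name interrupt and the 'stop' input are just illustrative):
import threading

interrupt = threading.Event()

def f():
    print('started')
    if interrupt.wait(timeout=100):   # True if interrupt.set() was called, False on timeout
        print('interrupted')
    else:
        print('finished')

t = threading.Thread(target=f)
t.start()

if input() == 'stop':
    interrupt.set()
t.join()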
How about using condition objects: https://docs.python.org/2/library/threading.html#condition-objects
Instead of sleep() you use wait(timeout). To "interrupt" you call notify().
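A sketch of the Condition variant (note: wait() must be called with the lock held, and to avoid a lost or spurious wakeup, real code usually waits on a predicate, as wait_for does here):
import threading

cond = threading.Condition()
interrupted = False

def sleeper():
    with cond:
        cond.wait_for(lambda: interrupted, timeout=100)   # sleep until notified or timeout
    print('woke up')

t = threading.Thread(target=sleeper)
t.start()

with cond:
    interrupted = True
    cond.notify()   # "interrupt" the sleeping thread
t.join()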
If you, for whatever reason, needed to use the time.sleep function and happened to expect the time.sleep function to throw an exception and you simply wanted to test what happened with large sleep values without having to wait for the whole timeout...
Firstly, sleeping threads are lightweight and there's no problem just letting them run in daemon mode with threading.Thread(target=f, daemon=True) (so that they exit when the program does). You can check the result of the thread without waiting for the whole execution with t.join(0.5).
But if you absolutely need to halt the execution of the function, you could use multiprocessing.Process, and call .terminate() on the spawned process. This does not give the process time to clean up (e.g. except and finally blocks aren't run), so use it with care.
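If the hard-stop route is really needed, a minimal sketch using multiprocessing.Process (assuming the work can live in a separate process):
import multiprocessing, time

def f():
    time.sleep(100)
    print('finished')   # never reached if the process is terminated

if __name__ == '__main__':
    p = multiprocessing.Process(target=f)
    p.start()
    if input() == 'stop':
        p.terminate()   # no cleanup runs in the child (no except/finally blocks)
    p.join()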

Python: Timer, how to stop thread when program ends?

I have a function I'm calling every 5 seconds like such:
def check_buzz(super_buzz_words):
    print 'Checking buzz'
    t = Timer(5.0, check_buzz, args=(super_buzz_words,))
    t.dameon = True
    t.start()
    buzz_word = get_buzz_word()
    if buzz_word is not 'fail':
        super_buzz_words.put(buzz_word)

main()
check_buzz()
I'm exiting the script by catching either a KeyboardInterrupt or a SystemExit and calling this:
sys.exit('\nShutting Down\n')
I'm also restarting the program every so often by calling:
execv(sys.executable, [sys.executable] + sys.argv)
My question is, how do I get that timer thread to shut off? If I keyboard interrupt, the timer keeps going.
I think you just spelled daemon wrong, it should have been:
t.daemon = True
Then sys.exit() should work
Expanding on the answer from notorious.no, and the comment asking:
How can I call t.cancel() if I have no access to t outside the function?
Give the Timer thread a distinct name when you first create it:
import threading
from threading import Timer

def check_buzz(super_buzz_words):
    print 'Checking buzz'
    t = Timer(5.0, check_buzz, args=(super_buzz_words,))
    t.daemon = True
    t.name = "check_buzz_daemon"
    t.start()
Although the local variable t soon goes out of scope, the Timer thread that t pointed to still exists and still retains the name assigned to it.
Your atexit-registered method can then identify this thread by its name and cancel it:
from atexit import register

def all_done():
    for thr in threading.enumerate():
        if thr.name == "check_buzz_daemon":
            if thr.is_alive():
                thr.cancel()
                thr.join()

register(all_done)
Calling join() after calling cancel() is based on a StackOverflow answer by Cédric Julien.
HOWEVER, your thread is set to be a Daemon. According to this StackOverflow post, daemon threads do not need to be explicitly terminated.
from atexit import register

def all_done():
    if t.is_alive():
        # do something that will close your thread gracefully
        pass

register(all_done)
Basically when your code is about to exit, it will fire one last function and this is where you will check if your thread is still running. If it is, do something that will either cancel the transaction or otherwise exit gracefully. In general, it's best to let threads finish by themselves, but if it's not doing anything important (please note the emphasis) then you can just do t.cancel(). Design your code so that threads will finish on their own if possible.
Another way would be to use the Queue module to send and receive info from a thread, using .put() outside the thread and .get() inside the thread.
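A rough sketch of that queue idea (the 'stop' sentinel and the timings are just illustrative):
import queue, threading, time

commands = queue.Queue()

def worker():
    while True:
        try:
            cmd = commands.get(timeout=5)   # also acts as the periodic wait
        except queue.Empty:
            print('doing periodic work')
            continue
        if cmd == 'stop':                   # sentinel put from outside the thread
            break

t = threading.Thread(target=worker)
t.start()
time.sleep(12)
commands.put('stop')   # ask the worker to finish
t.join()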
What you can also do is create a text file and make the program write to it when you exit, and put an if statement in the thread function to check it after each iteration (this is not a really good solution, but it also works).
I would have put a code example, but I am writing from mobile, sorry.
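For what it's worth, a rough sketch of what that file-flag idea might look like (the flag file name is made up for illustration):
import os, threading, time

STOP_FILE = 'stop.flag'   # hypothetical flag file

def worker():
    while not os.path.exists(STOP_FILE):   # check the flag after each iteration
        print('working...')
        time.sleep(1)

t = threading.Thread(target=worker)
t.start()
try:
    while t.is_alive():
        time.sleep(0.5)
except KeyboardInterrupt:
    open(STOP_FILE, 'w').close()   # "write to it when you exit"
    t.join()
    os.remove(STOP_FILE)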

How to exit the entire application from a Python thread?

How can I exit my entire Python application from one of its threads? sys.exit() only terminates the thread in which it is called, so that is no help.
I would not like to use an os.kill() solution, as this isn't very clean.
Short answer: use os._exit.
Long answer with example:
I yanked and slightly modified a simple threading example from a tutorial on DevShed:
import threading, sys, os

theVar = 1

class MyThread ( threading.Thread ):

    def run ( self ):
        global theVar
        print 'This is thread ' + str ( theVar ) + ' speaking.'
        print 'Hello and good bye.'
        theVar = theVar + 1
        if theVar == 4:
            # sys.exit(1)
            os._exit(1)
        print '(done)'

for x in xrange ( 7 ):
    MyThread().start()
If you keep sys.exit(1) commented out, the script will die after the third thread prints out. If you use sys.exit(1) and comment out os._exit(1), the third thread does not print (done), and the program runs through all seven threads.
os._exit "should normally only be used in the child process after a fork()" -- and a separate thread is close enough to that for your purpose. Also note that there are several enumerated values listed right after os._exit in that manual page, and you should prefer those as arguments to os._exit instead of simple numbers like I used in the example above.
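For instance (a small illustration; the os.EX_* constants are only available on Unix):
import os

# symbolic exit status instead of a bare integer (Unix-only constants)
os._exit(os.EX_SOFTWARE)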
If all your threads except the main ones are daemons, the best approach is generally thread.interrupt_main() -- any thread can use it to raise a KeyboardInterrupt in the main thread, which can normally lead to reasonably clean exit from the main thread (including finalizers in the main thread getting called, etc).
Of course, if this results in some non-daemon thread keeping the whole process alive, you need to followup with os._exit as Mark recommends -- but I'd see that as the last resort (kind of like a kill -9;-) because it terminates things quite brusquely (finalizers not run, including try/finally blocks, with blocks, atexit functions, etc).
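A minimal sketch of the interrupt_main() approach (note: in Python 3 the module is spelled _thread; in Python 2 it was thread):
import _thread
import threading
import time

def watchdog():
    time.sleep(5)
    _thread.interrupt_main()   # raises KeyboardInterrupt in the main thread

threading.Thread(target=watchdog, daemon=True).start()

try:
    while True:
        time.sleep(1)
        print('main thread working')
except KeyboardInterrupt:
    print('main thread interrupted, exiting cleanly')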
Using thread.interrupt_main() may not help in some situations. KeyboardInterrupt is often used in command line applications to exit the current command or to clear the input line.
In addition, os._exit will kill the process immediately without running any finally blocks in your code, which may be dangerous (files and connections will not be closed for example).
The solution I've found is to register a signal handler in the main thread that raises a custom exception. Use the background thread to fire the signal.
import signal
import os
import threading
import time

class ExitCommand(Exception):
    pass

def signal_handler(signal, frame):
    raise ExitCommand()

def thread_job():
    time.sleep(5)
    os.kill(os.getpid(), signal.SIGUSR1)

signal.signal(signal.SIGUSR1, signal_handler)
threading.Thread(target=thread_job).start()  # thread will fire in 5 seconds

try:
    while True:
        user_input = raw_input('Blocked by raw_input loop ')
        # do something with 'user_input'
except ExitCommand:
    pass
finally:
    print('finally will still run')
Related questions:
Why does sys.exit() not exit when called inside a thread in Python?
Python: How to quit CLI when stuck in blocking raw_input?
The easiest way to exit the whole program is to terminate the process using its process id (pid).
import os
import psutil
current_system_pid = os.getpid()
ThisSystem = psutil.Process(current_system_pid)
ThisSystem.terminate()
To install psutil: pip install psutil
For Linux you can use os.kill() and pass the current process's ID together with the SIGINT signal to start the steps to exit the app.
import os
import signal

os.kill(os.getpid(), signal.SIGINT)
