I have a Python program like this:
from threading import Thread

def foo():
    while True:
        blocking_function()  # actually waiting for a message on a socket

def run():
    Thread(target=foo).start()

run()
This program does not terminate on KeyboardInterrupt, because the main thread exits before the thread running foo() has a chance to terminate. I tried keeping the main thread alive with a while True loop after calling run(), but that doesn't exit the program either (blocking_function() just blocks the thread, I guess, waiting for the message). I also tried catching KeyboardInterrupt in the main thread and calling sys.exit(0), with the same outcome (I would actually expect that to kill the thread running foo(), but apparently it doesn't).
Now, I could simply put a timeout on blocking_function(), but that's no fun. Can I unblock it on KeyboardInterrupt, or anything similar?
Main goal: terminate a program with a blocked thread on Ctrl+C.
Maybe a little bit of a workaround, but you could use the thread module instead of threading. This is not really advised, but if it suits you and your program, why not.
You will need to keep your program running, otherwise the thread exits right after run() returns:
import thread, time

def foo():
    while True:
        blocking_function()  # actually waiting for a message on a socket

def run():
    thread.start_new_thread(foo, ())

run()

while True:
    # keep the main thread alive
    time.sleep(1)
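Note that the thread module is Python 2 only (it survives as _thread in Python 3). A minimal sketch of the same idea on Python 3, assuming blocking_function is the question's socket read: threads started with daemon=True are killed when the program exits, and Ctrl+C interrupts the main thread while it sleeps.

import threading, time

def foo():
    while True:
        blocking_function()  # actually waiting for a message on a socket

# daemon threads die abruptly when the main thread exits
threading.Thread(target=foo, daemon=True).start()

while True:
    time.sleep(1)  # KeyboardInterrupt lands here on Ctrl+C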
I wrote this code to create an infinite threading loop without duplicating or interrupting the thread's task.
import threading
import time

thread = None

def loopMyTask():
    global thread
    if thread is not None and thread.isAlive():
        thread.cancel()
        thread.join()
    thread = threading.Timer(6.0, loopMyTask)
    thread.daemon = True
    thread.start()
    myTask()

def myTask():
    # simulate a task
    for i in range(14):
        print(str(i))
        time.sleep(1)

while True:
    loopMyTask()
Apparently it's working, but it raises an error.
I am not sure what you want to do, but only the main thread does some work here:
1. You call loopMyTask().
2. It sets a timer that will start a new thread calling loopMyTask() again in 6 seconds.
3. It calls myTask(), which prints 0, 1, 2, 3, 4, 5...
4. The timer triggers a call to loopMyTask() in a new thread.
5. The new thread finds that the global variable thread is set (and the thread is alive), so it calls cancel(). That does nothing: cancel() is meant to stop the timer before it fires, but it has already fired; indeed, this part of the code is only running because the time has arrived.
6. The new thread calls thread.join(), which would cause a deadlock, since it would be waiting for itself to finish. Fortunately, the threading module detects this kind of deadlock and raises a RuntimeError. The thread dies.
7. The main thread resumes its execution of myTask(), printing 6, 7, 8...
8. Once it finishes, the loop goes back to step 1. The thread.join() call does not trigger an error this time, but everything repeats again.
So, you would get the same results (aside from the error) if you just called myTask() in a loop.
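If the underlying goal was to repeat a task on a fixed interval, a hedged alternative sketch (my addition, reusing the question's myTask) is a single worker thread waiting on an Event, which avoids the timer/cancel/join dance entirely:

import threading

stop = threading.Event()

def loopMyTask():
    while not stop.is_set():
        myTask()
        stop.wait(6.0)  # pause between runs; returns early if stop is set

worker = threading.Thread(target=loopMyTask, daemon=True)
worker.start()
# later, to stop cleanly: stop.set(); worker.join()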
I've got the following code which uses a concurrent.futures.ThreadPoolExecutor to launch processes of another program in a metered way (no more than 30 at a time). I additionally want the ability to stop all work if I ctrl-C the python process. This code works with one caveat: I have to ctrl-C twice. The first time I send the SIGINT, nothing happens; the second time, I see the "sending SIGKILL to processes", the processes die, and it works. What is happening to my first SIGINT?
import signal
import subprocess
import concurrent.futures

execution_list = [['prog', 'arg1'], ['prog', 'arg2']]  # ... etc

processes = []

def launch_instance(args):
    process = subprocess.Popen(args)
    processes.append(process)
    process.wait()

try:
    with concurrent.futures.ThreadPoolExecutor(max_workers=30) as executor:
        results = list(executor.map(launch_instance, execution_list))
except KeyboardInterrupt:
    print('sending SIGKILL to processes')
    for p in processes:
        if p.poll() is None:  # if the process is still alive
            p.send_signal(signal.SIGKILL)
I stumbled upon your question while trying to solve something similar. Not 100% sure that it will solve your use case (I'm not using subprocesses), but I think it will.
Your code will stay within the context manager of the executor as long as the jobs are still running. My educated guess is that the first KeyboardInterrupt will be caught by the ThreadPoolExecutor, whose default behaviour would be to not start any new jobs, wait until the current ones are finished, and then clean up (and probably reraise the KeyboardInterrupt). But the processes are probably long running, so you wouldn't notice. The second KeyboardInterrupt then interrupts this error handling.
How I solved my problem (infinite background processes in separate threads) is with the following code:
from concurrent.futures import ThreadPoolExecutor
import signal
import threading
from time import sleep

def loop_worker(exiting):
    while not exiting.is_set():
        try:
            print("started work")
            sleep(10)
            print("finished work")
        except KeyboardInterrupt:
            print("caught keyboardinterrupt")  # never caught here, just for demonstration purposes

def loop_in_worker():
    exiting = threading.Event()

    def signal_handler(signum, frame):
        print("Setting exiting event")
        exiting.set()

    signal.signal(signal.SIGTERM, signal_handler)

    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(loop_worker, exiting)
        try:
            while not exiting.is_set():
                sleep(1)
                print('waiting')
        except KeyboardInterrupt:
            print('Caught keyboardinterrupt')
            exiting.set()
    print("Main thread finished (and thus all others)")

if __name__ == '__main__':
    loop_in_worker()
It uses an Event to signal the threads that they should stop what they are doing. In the main thread there is a loop, just to keep busy and check for any exceptions. Note that this loop is within the context of the ThreadPoolExecutor.
As a bonus it also handles the SIGTERM signal by using the same exiting Event.
If you add a loop between processes.append(process) and process.wait() that checks for a signal, then it will probably solve your use case as well (see the sketch at the end of this answer). What actions you should take there depends on what you want to do with the running processes.
If you run my script from the command line and press ctrl-C you should see something like:
started work
waiting
waiting
^CCaught keyboardinterrupt
# some time passes here
finished work
Main thread finished (and thus all others)
Inspiration for my solution came from this blog post.
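A hedged sketch of that adaptation (my addition; processes and the exiting Event are assumed from the two examples above): poll the child instead of blocking in wait(), so the loop can notice the event and stop the process.

import subprocess
from time import sleep

def launch_instance(args):
    process = subprocess.Popen(args)
    processes.append(process)
    # poll instead of blocking in process.wait(), so we can react to the event
    while process.poll() is None:
        if exiting.is_set():
            process.terminate()  # or send SIGKILL, depending on the program
            break
        sleep(0.5)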
I have code like the following:

import threading

def run():
    while True:
        doSomething()

def main():
    thread = threading.Thread(target=run)
    thread.setDaemon(True)
    thread.start()
    doSomethingElse()
If I write code like the above, then when the main thread exits, the daemon thread will exit too, but it may still be in the middle of doSomething().
The main function will be called from outside, and I am not allowed to use join in the main thread.
Is there any way to make the daemon thread exit gracefully when the main thread completes?
You can use a threading.Event to signal the child thread from the main thread when to exit.
Example:
import threading

class DaemonThread(threading.Thread):
    def __init__(self):
        super().__init__()
        self.shutdown_flag = threading.Event()

    def run(self):
        while not self.shutdown_flag.is_set():
            # run your code here
            pass

def main_thread():
    daemon_thread = DaemonThread()
    daemon_thread.setDaemon(True)
    daemon_thread.start()

    # stop your thread
    daemon_thread.shutdown_flag.set()
    daemon_thread.join()
You are not allowed to use join, but you can set an Event and not use the daemonic flag. The official docs note:
Note: Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
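A minimal sketch of that variant (my addition, assuming the question's doSomething and doSomethingElse): the thread is deliberately non-daemonic, so the interpreter waits for it at shutdown, and setting the Event at the end of main() lets it finish its current iteration and return without any join:

import threading

shutdown_flag = threading.Event()

def run():
    while not shutdown_flag.is_set():
        doSomething()  # finishes its current iteration before exiting

def main():
    thread = threading.Thread(target=run)  # non-daemonic on purpose
    thread.start()
    doSomethingElse()
    shutdown_flag.set()  # no join: Python waits for non-daemon threads at exit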
I have a thread spawned from the main one, which basically runs an infinite loop around a blocking system call, something like:
def run(self):
    global EXIT
    while not EXIT:
        data = self.conn.recv(1024)
        ...
I have defined a signal handler for SIGINT:
def sig_handler(signum, frame):
    global EXIT, threads
    if signum == 2:  # defensive
        print("Called SIGINT")
        EXIT = True
Since the signal is caught by the main thread, it only interrupts the main thread. The other thread is still stuck on the blocking function: is there a way in Python to interrupt a blocking system call so that the function returns?
I do not want to stop the process directly, as SIGINT normally does; I would just like to interrupt recv so that the while condition is no longer true and I can do other things before exiting.
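The thread leaves this one unanswered; one common workaround (my addition, hedged, assuming self.conn is a regular socket.socket) is to put a timeout on the socket so recv returns periodically and the loop can re-check EXIT:

import socket

def run(self):
    global EXIT
    self.conn.settimeout(1.0)  # recv now raises socket.timeout every second
    while not EXIT:
        try:
            data = self.conn.recv(1024)
        except socket.timeout:
            continue  # no data yet; loop around and re-check EXIT
        ...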
Is there a way in Python to interrupt a thread when it's sleeping (as we can do in Java)?
I am looking for something like this:
import threading
from time import sleep

def f():
    print('started')
    try:
        sleep(100)
        print('finished')
    except SleepInterruptedException:
        print('interrupted')

t = threading.Thread(target=f)
t.start()

if input() == 'stop':
    t.interrupt()
The thread sleeps for 100 seconds, and if I type 'stop', it gets interrupted.
The correct approach is to use threading.Event. For example:
import threading

e = threading.Event()
e.wait(timeout=100)  # instead of time.sleep(100)
In the other thread, you need to have access to e. You can interrupt the sleep by issuing:
e.set()
This will immediately interrupt the sleep. You can check the return value of e.wait to determine whether it timed out or was interrupted. For more information refer to the documentation: https://docs.python.org/3/library/threading.html#event-objects
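For completeness, a minimal runnable sketch wiring this onto the question's example (e.wait returns True if e.set() was called, False on timeout):

import threading

e = threading.Event()

def f():
    print('started')
    if e.wait(timeout=100):  # True means we were woken by e.set()
        print('interrupted')
    else:                    # False means the full timeout elapsed
        print('finished')

t = threading.Thread(target=f)
t.start()

if input() == 'stop':
    e.set()  # wakes the waiting thread immediately
t.join()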
How about using condition objects: https://docs.python.org/2/library/threading.html#condition-objects
Instead of sleep() you use wait(timeout). To "interrupt" you call notify().
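A hedged sketch of that approach (my addition; note that, unlike Event, a Condition must be acquired around both wait() and notify()):

import threading

cond = threading.Condition()

def f():
    print('started')
    with cond:  # the lock must be held around wait()
        interrupted = cond.wait(timeout=100)  # True if notified, False on timeout (Python 3)
    print('interrupted' if interrupted else 'finished')

t = threading.Thread(target=f)
t.start()

if input() == 'stop':
    with cond:  # and around notify()
        cond.notify()
t.join()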
If you, for whatever reason, need to use the time.sleep function itself, happen to expect time.sleep to throw an exception, or simply want to test what happens with large sleep values without having to wait out the whole timeout...
Firstly, sleeping threads are lightweight and there's no problem just letting them run in daemon mode with threading.Thread(target=f, daemon=True) (so that they exit when the program does). You can check the result of the thread without waiting for the whole execution with t.join(0.5).
But if you absolutely need to halt the execution of the function, you could use multiprocessing.Process, and call .terminate() on the spawned process. This does not give the process time to clean up (e.g. except and finally blocks aren't run), so use it with care.
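A minimal sketch of that route (my addition, reusing the question's f as the target):

import multiprocessing
from time import sleep

def f():
    print('started')
    sleep(100)
    print('finished')  # never reached if the process is terminated

if __name__ == '__main__':
    p = multiprocessing.Process(target=f, daemon=True)
    p.start()
    if input() == 'stop':
        p.terminate()  # kills the process; no cleanup, finally blocks are skipped
    p.join()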