I am trying to implement a method to force-stop children that have been started with ThreadPoolExecutor / ProcessPoolExecutor. I would like a cross-platform implementation (Windows and Linux).
When the signal is triggered from main, the main process exits as well, and I do NOT want that; only the child should exit.
What is the correct way to force the child to quit? I do NOT want to use Events, because in the following example the while loop can get stuck and never reach event.is_set() again, e.g.:
while not event.is_set():
    # Do stuff
    while waiting_for_something:
        # Here is blocked
Here is the code I am using, but I'm missing something and I don't know what:
import os
import signal
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time


def handler(signum, frame):
    print(signum, os.getpid())
    os.kill(os.getpid(), signal.SIGINT)


class asd:
    def __init__(self):
        pass

    def run(self):
        signal.signal(signal.SIGBREAK, handler)  # SIGBREAK only exists on Windows
        while True:
            print('running thread', os.getpid())
            time.sleep(1)
            while True:
                print('running 2 ', os.getpid())
                time.sleep(1)
        print("after while")


if __name__ == "__main__":
    t1 = asd()
    pool = ProcessPoolExecutor(max_workers=4)
    # pool = ThreadPoolExecutor(max_workers=4)
    pool.submit(t1.run)
    print('running main', os.getpid())
    time.sleep(3)
    signal.raise_signal(signal.SIGBREAK)
    while True:
        print("after killing process")
        time.sleep(1)
Thank you!
You are sending the signal to your main Python process, not to the children.
In order to send signals to your children you need their PIDs, which are not available through the concurrent.futures module. Instead you should use multiprocessing.Pool; then you can get the PIDs of the children and send the signal to them using os.kill.
Just remember to eventually call pool.terminate() to guarantee resource cleanup.
import os
import signal
import time
import multiprocessing


def handler(signum, frame):
    print(signum, os.getpid())
    os.kill(os.getpid(), signal.SIGINT)


class asd:
    def __init__(self):
        pass

    def run(self):
        signal.signal(signal.SIGBREAK, handler)
        while True:
            print('running thread', os.getpid())
            time.sleep(1)
            while True:
                print('running 2 ', os.getpid())
                time.sleep(1)
        print("after while")


if __name__ == "__main__":
    t1 = asd()
    pool = multiprocessing.Pool(4)
    children = multiprocessing.active_children()  # worker PIDs are available here
    res = pool.apply_async(t1.run)
    print('running main', os.getpid())
    time.sleep(3)
    for child in children:
        os.kill(child.pid, signal.SIGBREAK)
    while True:
        print("after killing process")
        time.sleep(1)
With the result:
running main 16860
running thread 14212
running 2 14212
running 2 14212
after killing process
after killing process
after killing process
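As noted above, eventually terminate the pool to guarantee resource cleanup; a minimal sketch of the shutdown calls:

pool.terminate()  # stop the worker processes immediately
pool.join()       # wait for the workers to exit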
You can take a look at pebble, which has been designed to solve this problem transparently for the user.
It provides concurrent.futures compatible APIs and allows you to end a processing job either by cancelling the returned Future object or by setting a computing timeout.
import time

from pebble import ProcessPool
from concurrent.futures import TimeoutError

TIMEOUT = 5


def function(sleep):
    while True:
        time.sleep(sleep)


with ProcessPool() as pool:
    future = pool.submit(function, TIMEOUT, 1)

    assert isinstance(future.exception(), TimeoutError)
Note that you cannot stop executing threads in Python, so only process pools can support such functionality.
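The cancellation route mentioned above works similarly; a minimal sketch, assuming the same pebble submit signature as in the example (the function, then the computing timeout, then its arguments):

import time

from pebble import ProcessPool


def function(sleep):
    while True:
        time.sleep(sleep)


with ProcessPool() as pool:
    future = pool.submit(function, None, 1)  # no computing timeout this time
    time.sleep(3)
    future.cancel()  # pebble can cancel a running job: the worker process is terminated

    assert future.cancelled()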
Let's assume we have such a trivial daemon written in python:
def mainloop():
    while True:
        # 1. do
        # 2. some
        # 3. important
        # 4. job
        # 5. sleep

mainloop()
and we daemonize it using start-stop-daemon, which by default sends the SIGTERM (TERM) signal on --stop.
Let's suppose the current step performed is #2. And at this very moment we're sending TERM signal.
What happens is that the execution terminates immediately.
I've found that I can handle the signal event using signal.signal(signal.SIGTERM, handler), but the thing is that it still interrupts the current execution and passes control to handler.
So, my question is: is it possible to not interrupt the current execution, but handle the TERM signal in a separate thread (?) so that I am able to set shutdown_flag = True and mainloop() gets a chance to stop gracefully?
A class-based, clean-to-use solution:
import signal
import time


class GracefulKiller:
    kill_now = False

    def __init__(self):
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

    def exit_gracefully(self, *args):
        self.kill_now = True


if __name__ == '__main__':
    killer = GracefulKiller()
    while not killer.kill_now:
        time.sleep(1)
        print("doing something in a loop ...")
    print("End of the program. I was killed gracefully :)")
First, I'm not certain that you need a second thread to set the shutdown_flag.
Why not set it directly in the SIGTERM handler?
An alternative is to raise an exception from the SIGTERM handler, which will be propagated up the stack. Assuming you've got proper exception handling (e.g. with with/contextmanager and try: ... finally: blocks), this should be a fairly graceful shutdown, similar to pressing Ctrl+C in your program.
Example program signals-test.py:
#!/usr/bin/python
from time import sleep
import signal
import sys


def sigterm_handler(_signo, _stack_frame):
    # Raises SystemExit(0):
    sys.exit(0)

if sys.argv[1] == "handle_signal":
    signal.signal(signal.SIGTERM, sigterm_handler)

try:
    print("Hello")
    i = 0
    while True:
        i += 1
        print("Iteration #%i" % i)
        sleep(1)
finally:
    print("Goodbye")
Now see the Ctrl+C behaviour:
$ ./signals-test.py default
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
^CGoodbye
Traceback (most recent call last):
File "./signals-test.py", line 21, in <module>
sleep(1)
KeyboardInterrupt
$ echo $?
1
This time I send it SIGTERM after 4 iterations with kill $(ps aux | grep signals-test | awk '/python/ {print $2}'):
$ ./signals-test.py default
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
Terminated
$ echo $?
143
This time I enable my custom SIGTERM handler and send it SIGTERM:
$ ./signals-test.py handle_signal
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
Goodbye
$ echo $?
0
Here is a simple example without threads or classes.
import signal

run = True


def handler_stop_signals(signum, frame):
    global run
    run = False

signal.signal(signal.SIGINT, handler_stop_signals)
signal.signal(signal.SIGTERM, handler_stop_signals)

while run:
    pass  # do stuff including other IO stuff
I think you are close to a possible solution.
Execute mainloop in a separate thread and extend it with the property shutdown_flag. The signal can be caught with signal.signal(signal.SIGTERM, handler) in the main thread (not in a separate thread). The signal handler should set shutdown_flag to True and wait for the thread to end with thread.join().
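A minimal sketch of that idea, using a threading.Event as the shutdown_flag (mainloop and shutdown_flag come from the question; the rest of the names are illustrative):

import signal
import threading

shutdown_flag = threading.Event()


def mainloop():
    while not shutdown_flag.is_set():
        # do some important job ...
        shutdown_flag.wait(1)  # ... then sleep


thread = threading.Thread(target=mainloop)


def handler(signum, frame):
    shutdown_flag.set()  # let the current iteration finish
    thread.join()        # wait for the thread to end

signal.signal(signal.SIGTERM, handler)
thread.start()
thread.join()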
Based on the previous answers, I have created a context manager which protects against SIGINT and SIGTERM.
import logging
import signal
import sys


class TerminateProtected:
    """ Protect a piece of code from being killed by SIGINT or SIGTERM.
    It can still be killed by a force kill.

    Example:
        with TerminateProtected():
            run_func_1()
            run_func_2()

    Both functions will be executed even if a SIGINT or SIGTERM has been received.
    """
    killed = False

    def _handler(self, signum, frame):
        logging.error("Received SIGINT or SIGTERM! Finishing this block, then exiting.")
        self.killed = True

    def __enter__(self):
        self.old_sigint = signal.signal(signal.SIGINT, self._handler)
        self.old_sigterm = signal.signal(signal.SIGTERM, self._handler)

    def __exit__(self, type, value, traceback):
        if self.killed:
            sys.exit(0)
        signal.signal(signal.SIGINT, self.old_sigint)
        signal.signal(signal.SIGTERM, self.old_sigterm)


if __name__ == '__main__':
    print("Try pressing ctrl+c while the sleep is running!")
    from time import sleep
    with TerminateProtected():
        sleep(10)
        print("Finished anyway!")
    print("This only prints if there was no SIGINT or SIGTERM")
This was the easiest way I found.
Here is an example with fork, showing that this approach is useful for flow control.
import signal
import time
import os


def handle_exit(sig, frame):
    raise SystemExit


def main():
    time.sleep(120)

signal.signal(signal.SIGTERM, handle_exit)  # inherited by the child after fork

p = os.fork()
if p == 0:
    # child
    main()
    os._exit(0)

# parent
try:
    os.waitpid(p, 0)
except (KeyboardInterrupt, SystemExit):
    print('exit handled')
    os.kill(p, signal.SIGTERM)
    os.waitpid(p, 0)
The simplest solution I have found, taking inspiration from the responses above, is:
import logging
import signal
import traceback


class SignalHandler:
    def __init__(self):
        # register signal handlers
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)
        self.logger = logging.getLogger(__name__)  # the original used a custom Logger(level=ERROR)

    def exit_gracefully(self, signum, frame):
        self.logger.info('captured signal %d' % signum)
        traceback.print_stack(frame)
        ###### do your resources clean up here! ####
        raise SystemExit
A sample of my code, showing how I use signal:
#! /usr/bin/env python
import signal


def ctrl_handler(signum, frm):
    print("You can't kill me")


print("Installing signal handler...")
signal.signal(signal.SIGINT, ctrl_handler)
print("done")

while True:
    # do something
    pass
You can set a threading.Event when catching the signal.
threading.Event is threadsafe to use and pass around, can be waited on, and the same event can be set and cleared from other places.
import signal, threading
quit_event = threading.Event()
signal.signal(signal.SIGTERM, lambda *_args: quit_event.set())
while not quit_event.is_set():
    print("Working...")
I want to kill a thread in Python. This thread can be stuck in a blocking operation, and join can't terminate it.
Similar to this:
from threading import Thread
import time


def block():
    while True:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block)
    thread.start()
    # kill thread
    # do other stuff
My problem is that the real blocking operation is in another module that is not mine, so there is no place where I can break out of it with a running variable.
The thread will be killed when exiting the main process if you set it up as a daemon:
from threading import Thread
import time
import sys


def block():
    while True:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    sys.exit(0)
Otherwise just set a flag; I'm using a bad example (you should use some synchronization, not just a plain variable):
from threading import Thread
import time

RUNNING = True


def block():
    global RUNNING
    while RUNNING:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    RUNNING = False  # thread will stop, not killed until next loop iteration
    # ... continue your stuff here
Use a running variable:
from threading import Thread
import time

running = True


def block():
    global running
    while running:
        print("running")
        time.sleep(1)


if __name__ == "__main__":
    thread = Thread(target=block)
    thread.start()
    running = False
    # do other stuff
I would prefer to wrap it all in a class, but this should work (untested though).
EDIT
There is a way to asynchronously raise an exception in a separate thread which could be caught by a try: except: block, but it's a dirty dirty hack: https://gist.github.com/liuw/2407154
Original post
"I want to kill a thread in python." you can't. Threads are only killed when they're daemons when there are no more non-daemonic threads running from the parent process. Any thread can be asked nicely to terminate itself using standard inter-thread communication methods, but you state that you don't have any chance to interrupt the function you want to kill. This leaves processes.
Processes have more overhead, and are more difficult to pass data to and from, but they do support being killed by sending SIGTERM or SIGKILL.
from multiprocessing import Process, Queue
from time import sleep


def workfunction(*args, **kwargs):  # any arguments you send to a child process must be picklable by python's pickle module
    sleep(args[0])  # really long computation you might want to kill
    return 'results'  # anything you want to get back from a child process must be picklable by python's pickle module


class daemon_worker(Process):
    def __init__(self, target_func, *args, **kwargs):
        self.return_queue = Queue()
        self.target_func = target_func
        self.args = args
        self.kwargs = kwargs
        super().__init__(daemon=True)
        self.start()

    def run(self):  # called by self.start()
        self.return_queue.put(self.target_func(*self.args, **self.kwargs))

    def get_result(self):  # raises queue.Empty if no result is ready
        return self.return_queue.get()


if __name__ == '__main__':
    # start some work that takes 1 sec:
    worker1 = daemon_worker(workfunction, 1)
    worker1.join(3)  # wait up to 3 sec for the worker to complete
    if not worker1.is_alive():  # if we didn't hit 3 sec timeout
        print('worker1 got: {}'.format(worker1.get_result()))
    else:
        print('worker1 still running')
        worker1.terminate()
        print('killing worker1')
        sleep(.1)  # calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet.
    print('worker1 is alive: {}'.format(worker1.is_alive()))

    # start some work that takes 100 sec:
    worker2 = daemon_worker(workfunction, 100)
    worker2.join(3)  # wait up to 3 sec for the worker to complete
    if not worker2.is_alive():  # if we didn't hit 3 sec timeout
        print('worker2 got: {}'.format(worker2.get_result()))
    else:
        print('worker2 still running')
        worker2.terminate()
        print('killing worker2')
        sleep(.1)  # calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet.
    print('worker2 is alive: {}'.format(worker2.is_alive()))
I have two threads, and I want one thread to run for 10 seconds, then have this thread stop while another thread executes, and then have the first thread start up again; this process is repeated. So e.g.:
from threading import Thread
import sys
import time


class Worker(Thread):
    Listened = False

    def __init__(self):
        while 1:
            if self.Listened == False:
                time.sleep(0)
            else:
                time.sleep(20)
            for x in range(0, 10):
                print("I'm working")
            self.Listened = True


class Processor(Thread):
    Listened = False

    def __init__(self):
        # this is where I'm confused!!
        pass


Worker().start()
Processor().start()
(P.S. I have indented correctly, however, SO seems to have messed it up a bit)
Basically, what I want is:
The worker thread works for 10 seconds (or so) and then stops; the "processor" starts up, and once the processor has processed the data from the last run of the "Worker" thread, it re-starts the "worker" thread. I don't specifically have to re-start the "worker" thread from its current position; it can start from the beginning.
Does anyone have any ideas?
You can use a counting semaphore to block a thread, and then wake it up later.
A counting semaphore is an object that has a non-negative integer count. If a thread calls acquire() on the semaphore when the count is 0, the thread will block until the semaphore's count becomes greater than zero. To unblock the thread, another thread must increase the count of the semaphore by calling release() on the semaphore.
Create two semaphores, one to block the worker, and one to block the processor. Start the worker semaphore's count at 1 since we want it to run right away. Start the processor semaphore's count at 0 since we want it to block until the worker is done.
Pass the semaphores to the worker and processor classes. After the worker has run for 10 seconds, it should wake up the processor by calling processorSemaphore.release(), then it should sleep on its own semaphore by calling workerSemaphore.acquire(). The processor does the same.
#!/usr/bin/env python
from threading import Thread, Semaphore
import time

INTERVAL = 10


class Worker(Thread):
    def __init__(self, workerSemaphore, processorSemaphore):
        super(Worker, self).__init__()
        self.workerSemaphore = workerSemaphore
        self.processorSemaphore = processorSemaphore

    def run(self):
        while True:
            # wait for the processor to finish
            self.workerSemaphore.acquire()

            start = time.time()
            while True:
                if time.time() - start > INTERVAL:
                    # wake-up the processor
                    self.processorSemaphore.release()
                    break

                # do work here
                print("I'm working")


class Processor(Thread):
    def __init__(self, workerSemaphore, processorSemaphore):
        super(Processor, self).__init__()
        print("init P")
        self.workerSemaphore = workerSemaphore
        self.processorSemaphore = processorSemaphore

    def run(self):
        print("running P")
        while True:
            # wait for the worker to finish
            self.processorSemaphore.acquire()

            start = time.time()
            while True:
                if time.time() - start > INTERVAL:
                    # wake-up the worker
                    self.workerSemaphore.release()
                    break

                # do processing here
                print("I'm processing")


workerSemaphore = Semaphore(1)
processorSemaphore = Semaphore(0)

worker = Worker(workerSemaphore, processorSemaphore)
processor = Processor(workerSemaphore, processorSemaphore)

worker.start()
processor.start()

worker.join()
processor.join()
See Alvaro's answer. But if you must really use threads, then you can do something like the below. However, you can call start() on a Thread object only once. So either your data should preserve state as to where the next Worker thread should start from (and you create a new Worker thread in Processor every time), or you can try to use a critical section so that the Worker and Processor threads take turns accessing it (see the sketch after the code below).
#!/usr/bin/env python
from threading import Thread
import time


class Worker(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        for x in range(0, 10):
            print("I'm working")
            time.sleep(1)


class Processor(Thread):
    def __init__(self, w):
        Thread.__init__(self)
        self.worker = w

    def run(self):
        # process data from worker thread, add your logic here
        self.worker.start()


w = Worker()
p = Processor(w)
p.start()
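A rough sketch of the critical-section idea mentioned above, using a threading.Condition and a hypothetical turn flag so the two loops take turns (the names are illustrative, not from the original):

import threading

turn = "worker"                 # whose turn it is; illustrative name
turn_changed = threading.Condition()


def worker_loop():
    global turn
    for _ in range(3):          # three rounds, for demonstration
        with turn_changed:
            while turn != "worker":
                turn_changed.wait()
            print("I'm working")
            turn = "processor"  # hand over to the processor
            turn_changed.notify_all()


def processor_loop():
    global turn
    for _ in range(3):
        with turn_changed:
            while turn != "processor":
                turn_changed.wait()
            print("I'm processing")
            turn = "worker"     # hand back to the worker
            turn_changed.notify_all()


threading.Thread(target=worker_loop).start()
threading.Thread(target=processor_loop).start()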