How can I terminate a running process, started using concurrent.futures? As I understand, the cancel() method is there to remove a process from the queue if it is not running. But what about killing a running process? For example, if I have a long running process, and I want to stop it when I press a Cancel button in a GUI.
In this case it would probably be better to use a multiprocessing.Process for a long running task.
Create a multiprocessing.Event before starting the new process. Have the child process check the status of this Event regularly, and make it exit when Event.is_set() returns True.
In your GUI code, have the callback associated with the button call set() on the Event.
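A minimal sketch of that pattern might look like this (the worker and stop_event names are illustrative, not from any particular API):

import multiprocessing
import time

def worker(stop_event):
    # Long-running task, broken into small chunks so it can check for cancellation
    while not stop_event.is_set():
        time.sleep(0.1)  # stand-in for one chunk of real work

if __name__ == '__main__':
    stop_event = multiprocessing.Event()
    p = multiprocessing.Process(target=worker, args=(stop_event,))
    p.start()
    # In a GUI, the Cancel button's callback would simply do this:
    stop_event.set()
    p.join()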
You may want to look at my answer to a related StackOverflow question here.
In short, there does not appear to be a simple way to cancel a running process inside a concurrent.futures.ProcessPoolExecutor. But you can accomplish it in a hacky way by killing the child processes manually.
You can use the executor's private _processes attribute.
For example script.py:
import signal
import time
from concurrent.futures import ProcessPoolExecutor
def sleep_square(x):
    def sigterm_handler(signum, frame):
        raise SystemExit(signum)

    # Turn SIGTERM into SystemExit so the worker can return a sentinel value
    signal.signal(signal.SIGTERM, sigterm_handler)
    try:
        time.sleep(x)
    except SystemExit:
        return -1
    return x * x

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as ex:
        results = ex.map(sleep_square, [1, 5])
        time.sleep(3)
        for pid, proc in ex._processes.items():
            proc.terminate()
        print(list(results))
In this case, we send SIGTERM to all worker processes after 3 seconds (Process.terminate() sends SIGTERM on Unix, not SIGKILL).
$ python3 script.py
[1, -1]
Related
Is there a way in python to interrupt a thread when it's sleeping?
(As we can do in java)
I am looking for something like that.
import threading
from time import sleep
def f():
    print('started')
    try:
        sleep(100)
        print('finished')
    except SleepInterruptedException:
        print('interrupted')

t = threading.Thread(target=f)
t.start()

if input() == 'stop':
    t.interrupt()
The thread sleeps for 100 seconds, and if I type 'stop', the sleep is interrupted.
The correct approach is to use threading.Event. For example:
import threading
e = threading.Event()
e.wait(timeout=100) # instead of time.sleep(100)
In the other thread, you need to have access to e. You can interrupt the sleep by issuing:
e.set()
This will immediately interrupt the sleep. You can check the return value of e.wait to determine whether it timed out or was interrupted. For more information refer to the documentation: https://docs.python.org/3/library/threading.html#event-objects
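For instance, a fuller version of the question's example using this approach might look like the following sketch (f is adapted from the question):

import threading

e = threading.Event()

def f():
    print('started')
    # wait() returns True if e.set() was called, False if the timeout expired
    if e.wait(timeout=100):
        print('interrupted')
    else:
        print('finished')

t = threading.Thread(target=f)
t.start()

if input() == 'stop':
    e.set()
t.join()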
How about using condition objects: https://docs.python.org/2/library/threading.html#condition-objects
Instead of sleep() you use wait(timeout). To "interrupt" you call notify().
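A rough sketch of that idea (assuming Python 3, where Condition.wait returns False on timeout):

import threading
import time

cond = threading.Condition()

def f():
    with cond:
        # wait(timeout) returns True if notified, False if the timeout expired
        if cond.wait(timeout=100):
            print('interrupted')
        else:
            print('finished')

t = threading.Thread(target=f)
t.start()
time.sleep(1)  # crude way to ensure f() is already waiting before we notify

# To "interrupt" the wait from another thread:
with cond:
    cond.notify()
t.join()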
If you, for whatever reason, need to use time.sleep, expect it to be interruptible by an exception, and simply want to test what happens with large sleep values without waiting out the whole timeout...
Firstly, sleeping threads are lightweight and there's no problem just letting them run in daemon mode with threading.Thread(target=f, daemon=True) (so that they exit when the program does). You can check the result of the thread without waiting for the whole execution with t.join(0.5).
But if you absolutely need to halt the execution of the function, you could use multiprocessing.Process, and call .terminate() on the spawned process. This does not give the process time to clean up (e.g. except and finally blocks aren't run), so use it with care.
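Putting that together, a sketch (not a drop-in replacement for the question's code, since terminate() skips cleanup):

import multiprocessing
import time

def f():
    print('started')
    time.sleep(100)
    print('finished')  # never reached if the process is terminated

if __name__ == '__main__':
    p = multiprocessing.Process(target=f)
    p.start()
    if input() == 'stop':
        p.terminate()  # sends SIGTERM on Unix; except/finally in f() will not run
    p.join()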
I've got the following code which uses a concurrent.futures.ThreadPoolExecutor to launch processes of another program in a metered way (no more than 30 at a time). I additionally want the ability to stop all work if I ctrl-C the python process. This code works with one caveat: I have to ctrl-C twice. The first time I send the SIGINT, nothing happens; the second time, I see the "sending SIGKILL to processes", the processes die, and it works. What is happening to my first SIGINT?
import concurrent.futures
import signal
import subprocess

execution_list = [['prog', 'arg1'], ['prog', 'arg2']]  # ... etc

processes = []

def launch_instance(args):
    process = subprocess.Popen(args)
    processes.append(process)
    process.wait()

try:
    with concurrent.futures.ThreadPoolExecutor(max_workers=30) as executor:
        results = list(executor.map(launch_instance, execution_list))
except KeyboardInterrupt:
    print('sending SIGKILL to processes')
    for p in processes:
        if p.poll() is None:  # if process is still alive
            p.send_signal(signal.SIGKILL)
I stumbled upon your question while trying to solve something similar. Not 100% sure that it will solve your use case (I'm not using subprocesses), but I think it will.
Your code will stay within the context manager of the executor as long as the jobs are still running. My educated guess is that the first KeyboardInterrupt will be caught by the ThreadPoolExecutor, whose default behaviour would be to not start any new jobs, wait until the current ones are finished, and then clean up (and probably reraise the KeyboardInterrupt). But the processes are probably long running, so you wouldn't notice. The second KeyboardInterrupt then interrupts this error handling.
How I solved my problem (infinite background processes in separate threads) is with the following code:
from concurrent.futures import ThreadPoolExecutor
import signal
import threading
from time import sleep
def loop_worker(exiting):
    while not exiting.is_set():
        try:
            print("started work")
            sleep(10)
            print("finished work")
        except KeyboardInterrupt:
            # never caught here; just for demonstration purposes
            print("caught keyboardinterrupt")

def loop_in_worker():
    exiting = threading.Event()

    def signal_handler(signum, frame):
        print("Setting exiting event")
        exiting.set()

    signal.signal(signal.SIGTERM, signal_handler)
    with ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(loop_worker, exiting)
        try:
            while not exiting.is_set():
                sleep(1)
                print('waiting')
        except KeyboardInterrupt:
            print('Caught keyboardinterrupt')
            exiting.set()
    print("Main thread finished (and thus all others)")

if __name__ == '__main__':
    loop_in_worker()
It uses an Event to signal to the threads that they should stop what they are doing. In the main thread, there is a loop just to keep busy and check for any exceptions. Note that this loop is within the context of the ThreadPoolExecutor.
As a bonus it also handles the SIGTERM signal by using the same exiting Event.
If you add a loop between processes.append(process) and process.wait() that checks for a signal, it will probably solve your use case as well. What actions to take there depends on what you want to do with the running processes; a sketch follows below.
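For example, a hypothetical variant of your launch_instance that polls instead of blocking in wait(), reusing the processes list from your code and an exiting Event like the one in my script:

def launch_instance(args):
    process = subprocess.Popen(args)
    processes.append(process)
    # Poll instead of blocking in process.wait(), so we can react to the Event
    while process.poll() is None:
        if exiting.wait(timeout=0.5):  # True as soon as exiting.set() is called
            process.terminate()  # or process.send_signal(signal.SIGKILL)
            break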
If you run my script from the command line and press ctrl-C you should see something like:
started work
waiting
waiting
^CCaught keyboardinterrupt
# some time passes here
finished work
Main thread finished (and thus all others)
Inspiration for my solution came from this blog post
I am trying to write a Python multi-threaded script that does the following two things in different threads:
Parent: Start Child Thread, Do some simple task, Stop Child Thread
Child: Do some long running task.
Below is a simple way to do it. And it works for me:
from multiprocessing import Process
import time
def child_func():
    while not stop_thread:
        time.sleep(1)

if __name__ == '__main__':
    child_thread = Process(target=child_func)
    stop_thread = False
    child_thread.start()
    time.sleep(3)
    stop_thread = True
    child_thread.join()
But a complication arises because in actuality, instead of the while-loop in child_func(), I need to run a single long-running process that doesn't stop unless it is killed by Ctrl-C. So I cannot periodically check the value of stop_thread in there. So how can I tell my child process to end when I want it to?
I believe the answer has to do with using signals, but I haven't seen a good example of how to use them in this exact situation. Can someone please help by modifying my code above to use signals to communicate between the child and the parent, making the child process terminate when the user hits Ctrl-C?
There is no need to use the signal module here unless you want to do cleanup in your child process. It is possible to stop any child process using the terminate method (which has the same effect as sending SIGTERM).
from multiprocessing import Process
import time
def child_func():
    time.sleep(1000)

if __name__ == '__main__':
    child_thread = Process(target=child_func)
    child_thread.start()
    time.sleep(3)
    child_thread.terminate()
    child_thread.join()
The docs are here: https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.terminate
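If you do want cleanup to run, a sketch of the signal-based variant (the handler name is made up): install a SIGTERM handler in the child so terminate() triggers an orderly exit.

from multiprocessing import Process
import signal
import sys
import time

def child_func():
    def handler(signum, frame):
        # run any cleanup here, then exit
        sys.exit(0)
    signal.signal(signal.SIGTERM, handler)
    time.sleep(1000)

if __name__ == '__main__':
    child = Process(target=child_func)
    child.start()
    time.sleep(3)
    child.terminate()  # delivers SIGTERM, so the handler (and cleanup) runs
    child.join()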
How can I exit my entire Python application from one of its threads? sys.exit() only terminates the thread in which it is called, so that is no help.
I would not like to use an os.kill() solution, as this isn't very clean.
Short answer: use os._exit.
Long answer with example:
I yanked and slightly modified a simple threading example from a tutorial on DevShed:
import threading, sys, os
theVar = 1
class MyThread(threading.Thread):

    def run(self):
        global theVar
        print 'This is thread ' + str(theVar) + ' speaking.'
        print 'Hello and good bye.'
        theVar = theVar + 1
        if theVar == 4:
            # sys.exit(1)
            os._exit(1)
        print '(done)'

for x in xrange(7):
    MyThread().start()
If you keep sys.exit(1) commented out, the script will die after the third thread prints out. If you use sys.exit(1) and comment out os._exit(1), the third thread does not print (done), and the program runs through all seven threads.
os._exit "should normally only be used in the child process after a fork()" -- and a separate thread is close enough to that for your purpose. Also note that there are several enumerated values listed right after os._exit in that manual page, and you should prefer those as arguments to os._exit instead of simple numbers like I used in the example above.
If all your threads except the main ones are daemons, the best approach is generally thread.interrupt_main() -- any thread can use it to raise a KeyboardInterrupt in the main thread, which can normally lead to reasonably clean exit from the main thread (including finalizers in the main thread getting called, etc).
Of course, if this results in some non-daemon thread keeping the whole process alive, you need to followup with os._exit as Mark recommends -- but I'd see that as the last resort (kind of like a kill -9;-) because it terminates things quite brusquely (finalizers not run, including try/finally blocks, with blocks, atexit functions, etc).
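In Python 3 the function lives in the _thread module; a sketch of the pattern:

import _thread
import threading
import time

def worker():
    time.sleep(2)
    _thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

threading.Thread(target=worker, daemon=True).start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    print('main thread interrupted; exiting cleanly')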
Using thread.interrupt_main() may not help in some situations. KeyboardInterrupt is often used in command line applications to exit the current command or to clear the input line.
In addition, os._exit will kill the process immediately without running any finally blocks in your code, which may be dangerous (files and connections will not be closed for example).
The solution I've found is to register a signal handler in the main thread that raises a custom exception. Use the background thread to fire the signal.
import signal
import os
import threading
import time

class ExitCommand(Exception):
    pass

def signal_handler(signal, frame):
    raise ExitCommand()

def thread_job():
    time.sleep(5)
    os.kill(os.getpid(), signal.SIGUSR1)

signal.signal(signal.SIGUSR1, signal_handler)
threading.Thread(target=thread_job).start()  # thread will fire in 5 seconds

try:
    while True:
        user_input = raw_input('Blocked by raw_input loop ')
        # do something with 'user_input'
except ExitCommand:
    pass
finally:
    print('finally will still run')
Related questions:
Why does sys.exit() not exit when called inside a thread in Python?
Python: How to quit CLI when stuck in blocking raw_input?
The easiest way to exit the whole program is to terminate it by its process id (pid).
import os
import psutil
current_system_pid = os.getpid()
ThisSystem = psutil.Process(current_system_pid)
ThisSystem.terminate()
To install psutil: pip install psutil
For Linux you can use os.kill() and pass the current process's ID and the SIGINT signal to start the steps to exit the app.

import os
import signal

os.kill(os.getpid(), signal.SIGINT)