Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:
the thread is holding a critical resource that must be closed properly
the thread has created several other threads that must be killed as well.
The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit.
For example:
import threading
class StoppableThread(threading.Thread):
"""Thread class with a stop() method. The thread itself has to check
regularly for the stopped() condition."""
def __init__(self, *args, **kwargs):
super(StoppableThread, self).__init__(*args, **kwargs)
self._stop_event = threading.Event()
def stop(self):
self._stop_event.set()
def stopped(self):
return self._stop_event.is_set()
In this code, you should call stop() on the thread when you want it to exit, and wait for the thread to exit properly using join(). The thread should check the stop flag at regular intervals.
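For example, a run() loop for such a thread might look like this (a minimal sketch; the printed "working" line is a placeholder for real work):
import time

class Worker(StoppableThread):
    def run(self):
        while not self.stopped():
            print("working...")   # placeholder for the real work
            time.sleep(1)         # poll the stop flag roughly once per second

worker = Worker()
worker.start()
time.sleep(3)
worker.stop()
worker.join()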
There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy with long calls and you want to interrupt it.
The following code allows (with some restrictions) to raise an Exception in a Python thread:
import ctypes
import inspect

def _async_raise(tid, exctype):
'''Raises an exception in the threads with id tid'''
if not inspect.isclass(exctype):
raise TypeError("Only types can be raised (not instances)")
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
ctypes.py_object(exctype))
if res == 0:
raise ValueError("invalid thread id")
elif res != 1:
# "if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"
ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
raise SystemError("PyThreadState_SetAsyncExc failed")
class ThreadWithExc(threading.Thread):
'''A thread class that supports raising an exception in the thread from
another thread.
'''
def _get_my_tid(self):
"""determines this (self's) thread id
CAREFUL: this function is executed in the context of the caller
thread, to get the identity of the thread represented by this
instance.
"""
if not self.isAlive():
raise threading.ThreadError("the thread is not active")
# do we have it cached?
if hasattr(self, "_thread_id"):
return self._thread_id
# no, look for it in the _active dict
for tid, tobj in threading._active.items():
if tobj is self:
self._thread_id = tid
return tid
# TODO: in python 2.6, there's a simpler way to do: self.ident
raise AssertionError("could not determine the thread's id")
def raiseExc(self, exctype):
"""Raises the given exception type in the context of this thread.
If the thread is busy in a system call (time.sleep(),
socket.accept(), ...), the exception is simply ignored.
If you are sure that your exception should terminate the thread,
one way to ensure that it works is:
t = ThreadWithExc( ... )
...
t.raiseExc( SomeException )
while t.isAlive():
time.sleep( 0.1 )
t.raiseExc( SomeException )
If the exception is to be caught by the thread, you need a way to
check that your thread has caught it.
CAREFUL: this function is executed in the context of the
caller thread, to raise an exception in the context of the
thread represented by this instance.
"""
_async_raise( self._get_my_tid(), exctype )
(Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.)
As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
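As a minimal sketch of that pattern (it assumes the ThreadWithExc class above; the JobCancelled exception and the file resource are illustrative):
import time

class JobCancelled(Exception):
    pass

class Worker(ThreadWithExc):
    def run(self):
        resource = open("scratch.txt", "w")   # stand-in for a resource that needs cleanup
        try:
            while True:
                resource.write("working\n")
                time.sleep(0.5)
        except JobCancelled:
            pass                               # interrupted from outside
        finally:
            resource.close()                   # cleanup still runs

w = Worker()
w.start()
time.sleep(2)
w.raiseExc(JobCancelled)   # inject the exception from the caller's thread
w.join()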
A multiprocessing.Process can p.terminate()
In the cases where I want to kill a thread, but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full blown processes. For code that makes use of just a few threads the overhead is not that bad.
E.g. this comes in handy to easily terminate helper "threads" which execute blocking I/O
The conversion is trivial: in the related code, replace every threading.Thread with multiprocessing.Process and every queue.Queue with multiprocessing.Queue, and add the required p.terminate() calls in the parent process that wants to kill its child p (a sketch of such a conversion follows the example below).
See the Python documentation for multiprocessing.
Example:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
# Terminate the process
proc.terminate() # sends a SIGTERM
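As a sketch of the threading-to-multiprocessing conversion described above (the helper function and the queue payload are illustrative):
import multiprocessing

def helper(q):
    while True:
        item = q.get()            # blocking call, like the blocking I/O case above
        print("got", item)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=helper, args=(q,))
    p.start()
    q.put("work")
    # ... later, when the helper is no longer needed:
    p.terminate()                 # note: terminating while the queue is in use can corrupt it
    p.join()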
There is no official API to do that, no.
You need to use the platform API to kill the thread, e.g. pthread_kill or TerminateThread. You can access such an API e.g. through pythonwin, or through ctypes.
Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
If you are trying to terminate the whole program, you can set the thread as a "daemon". See
Thread.daemon
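For example (a minimal sketch): a daemon thread does not keep the interpreter alive, so it dies when the main thread finishes:
import threading
import time

def background():
    while True:
        time.sleep(1)

t = threading.Thread(target=background, daemon=True)
t.start()
time.sleep(2)
# the main thread ends here; the daemon thread is killed along with the process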
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in if stop().)
import threading
import time
def do_work(id, stop):
print("I am thread", id)
while True:
print("I am thread {} doing something".format(id))
if stop():
print(" Exiting loop.")
break
print("Thread {}, signing off".format(id))
def main():
stop_threads = False
workers = []
for id in range(0,3):
tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads))
workers.append(tmp)
tmp.start()
time.sleep(3)
print('main: done sleeping; time to stop the threads.')
stop_threads = True
for worker in workers:
worker.join()
print('Finis.')
if __name__ == '__main__':
main()
Replacing print() with a pr() function that always flushes (sys.stdout.flush()) may improve the precision of the shell output.
(Only tested on Windows/Eclipse/Python3.3)
In Python, you simply cannot kill a Thread directly.
If you do NOT really need to have a Thread (!), what you can do, instead of using the threading package, is to use the
multiprocessing package. Here, to kill a process, you can simply call the method:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, on Windows through the TerminateProcess() call). Be careful when using it while a Queue or a Pipe is in use! (it may corrupt the data in the Queue/Pipe)
Note that multiprocessing.Event and multiprocessing.Semaphore work in exactly the same way as threading.Event and threading.Semaphore, respectively. In fact, the former are clones of the latter.
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no alive non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if there are still some daemon threads working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
Of course you can, and should, use daemon threads with multiprocessing as well. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please note that sys.exit() and os.kill() are not options.
This is based on the thread2 -- killable threads ActiveState recipe.
You need to call PyThreadState_SetAsyncExc(), which is only available through the ctypes module.
This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. PyThreadState_SetAsyncExc() still exists in Python 3 for backwards compatibility (but I have not tested it).
import ctypes
def terminate_thread(thread):
"""Terminates a python thread from another thread.
:param thread: a threading.Thread instance
"""
if not thread.isAlive():
return
exc = ctypes.py_object(SystemExit)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
ctypes.c_long(thread.ident), exc)
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("PyThreadState_SetAsyncExc failed")
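A short usage sketch for terminate_thread() above (the worker loop is a placeholder; the injected SystemExit still unwinds try/finally blocks, but it can arrive at an arbitrary bytecode boundary):
import threading
import time

def worker():
    try:
        while True:
            time.sleep(0.1)
    finally:
        print("worker cleanup")   # runs when the injected SystemExit unwinds the stack

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
terminate_thread(t)
t.join()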
You should never forcibly kill a thread without cooperating with it.
Killing a thread removes any guarantees that try/finally blocks set up so you might leave locks locked, files open, etc.
The only time you can argue that forcibly killing threads is a good idea is to kill a program fast, but never single threads.
If you are explicitly calling time.sleep() as part of your thread (say, polling some external service), an improvement upon Philippe's method is to use the timeout in the Event's wait() method wherever you sleep().
For example:
import threading
class KillableThread(threading.Thread):
def __init__(self, sleep_interval=1):
super().__init__()
self._kill = threading.Event()
self._interval = sleep_interval
def run(self):
while True:
print("Do Something")
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
Then to run it
t = KillableThread(sleep_interval=5)
t.start()
# Every 5 seconds it prints:
#: Do Something
t.kill()
#: Killing Thread
The advantage of using wait() instead of sleep()ing and regularly checking the event is that you can program longer sleep intervals, the thread is stopped almost immediately (when you would otherwise be sleep()ing), and, in my opinion, the code for handling the exit is significantly simpler.
You can kill a thread by installing a trace function into it that raises an exception to exit the thread. See the attached link for one possible implementation.
Kill a thread in Python
It is better if you don't kill a thread.
One way is to introduce a "try" block into the thread's cycle and raise an exception when you want to stop the thread (for example, a break/return/... that stops your for/while/...).
I've used this on my app and it works...
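One hedged reading of that idea (the names are illustrative): keep the loop body in a try block and raise a custom exception from inside the loop when a stop condition is seen, so nested for/while loops unwind in one place:
import threading
import time

class StopWork(Exception):
    pass

stop_requested = False   # set from the main thread when it is time to stop

def worker():
    try:
        while True:                    # outer cycle
            for chunk in range(10):    # inner work
                if stop_requested:
                    raise StopWork()   # unwinds both loops at once
                time.sleep(0.1)
    except StopWork:
        print("worker stopped cleanly")

t = threading.Thread(target=worker)
t.start()
time.sleep(1)
stop_requested = True
t.join()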
It is definitely possible to implement a Thread.stop method as shown in the following example code:
import sys
import threading
import time
class StopThread(StopIteration):
pass
threading.SystemExit = SystemExit, StopThread
class Thread2(threading.Thread):
def stop(self):
self.__stop = True
def _bootstrap(self):
if threading._trace_hook is not None:
raise ValueError('Cannot run thread with tracing!')
self.__stop = False
sys.settrace(self.__trace)
super()._bootstrap()
def __trace(self, frame, event, arg):
if self.__stop:
raise StopThread()
return self.__trace
class Thread3(threading.Thread):
def _bootstrap(self, stop_thread=False):
def stop():
nonlocal stop_thread
stop_thread = True
self.stop = stop
def tracer(*_):
if stop_thread:
raise StopThread()
return tracer
sys.settrace(tracer)
super()._bootstrap()
###############################################################################
def main():
test1 = Thread2(target=printer)
test1.start()
time.sleep(1)
test1.stop()
test1.join()
test2 = Thread2(target=speed_test)
test2.start()
time.sleep(1)
test2.stop()
test2.join()
test3 = Thread3(target=speed_test)
test3.start()
time.sleep(1)
test3.stop()
test3.join()
def printer():
while True:
print(time.time() % 1)
time.sleep(0.1)
def speed_test(count=0):
try:
while True:
count += 1
except StopThread:
print('Count =', count)
if __name__ == '__main__':
main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I'm way late to this game, but I've been wrestling with a similar question and the following appears to both resolve the issue perfectly for me AND lets me do some basic thread state checking and cleanup when the daemonized sub-thread exits:
import threading
import time
import atexit
def do_work():
i = 0
@atexit.register
def goodbye():
print ("'CLEANLY' kill sub-thread with value: %s [THREAD: %s]" %
(i, threading.currentThread().ident))
while True:
print(i)
i += 1
time.sleep(1)
t = threading.Thread(target=do_work)
t.daemon = True
t.start()
def after_timeout():
print("KILL MAIN THREAD: %s" % threading.currentThread().ident)
raise SystemExit
threading.Timer(2, after_timeout).start()
Yields:
0
1
KILL MAIN THREAD: 140013208254208
'CLEANLY' kill sub-thread with value: 2 [THREAD: 140013674317568]
from ctypes import *
pthread = cdll.LoadLibrary("libpthread-2.15.so")
pthread.pthread_cancel(c_ulong(t.ident))
t is your Thread object.
Read the Python source (Modules/threadmodule.c and Python/thread_pthread.h) and you can see that Thread.ident is a pthread_t, so you can do anything pthread can do in Python by using libpthread.
The following workaround can be used to kill a thread:
import thread  # Python 2 module; on Python 3, use: import _thread as thread

kill_threads = False
def doSomething():
global kill_threads
while True:
if kill_threads:
thread.exit()
......
......
thread.start_new_thread(doSomething, ())
This can be used even for terminating threads, whose code is written in another module, from the main thread. We can declare a global variable in that module and use it to terminate the thread(s) spawned in that module.
I usually use this to terminate all the threads at program exit. This might not be the perfect way to terminate threads, but it could help.
Here's yet another way to do it, but with extremely clean and simple code, that works in Python 3.7 in 2021:
import ctypes
def kill_thread(thread):
"""
thread: a threading.Thread object
"""
thread_id = thread.ident
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
if res > 1:
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
print('Exception raise failure')
Adapted from here: https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
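A short usage sketch for kill_thread() above (the target loop is a placeholder):
import threading
import time

def loop():
    while True:
        time.sleep(0.2)    # the injected SystemExit is delivered once control returns to Python bytecode

t = threading.Thread(target=loop)
t.start()
time.sleep(1)
kill_thread(t)
t.join()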
One thing I want to add: if you read the official documentation of Python's threading library, it recommends avoiding "daemonic" threads when you don't want your threads to end abruptly, and using the flag that Paolo Rovelli mentioned instead.
From official documentation:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signaling mechanism such as an Event.
I think that creating daemonic threads depends on your application, but in general (and in my opinion) it's better to avoid killing them or making them daemonic. In multiprocessing you can use is_alive() to check the process status and terminate() to finish it (and you also avoid GIL problems). But you can sometimes run into more problems when you execute your code on Windows.
And always remember that if you have "live threads", the Python interpreter will keep running to wait for them. (Because of this, daemonic threads can help if abrupt ends don't matter.)
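For the multiprocessing route mentioned above, a minimal sketch (the work function is a placeholder):
import multiprocessing
import time

def work():
    while True:
        time.sleep(1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=work)
    p.start()
    time.sleep(2)
    if p.is_alive():
        p.terminate()   # SIGTERM on Unix, TerminateProcess() on Windows
    p.join()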
There is a library built for this purpose, stopit. Although some of the same cautions listed herein still apply, at least this library presents a regular, repeatable technique for achieving the stated goal.
While it's rather old, this might be a handy solution for some:
A little module that extends the threading's module functionality --
allows one thread to raise exceptions in the context of another
thread. By raising SystemExit, you can finally kill python threads.
import threading
import ctypes
def _async_raise(tid, excobj):
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(excobj))
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
raise SystemError("PyThreadState_SetAsyncExc failed")
class Thread(threading.Thread):
def raise_exc(self, excobj):
assert self.isAlive(), "thread must be started"
for tid, tobj in threading._active.items():
if tobj is self:
_async_raise(tid, excobj)
return
# the thread was alive when we entered the loop, but was not found
# in the dict, hence it must have been already terminated. should we raise
# an exception here? silently ignore?
def terminate(self):
# must raise the SystemExit type, instead of a SystemExit() instance
# due to a bug in PyThreadState_SetAsyncExc
self.raise_exc(SystemExit)
So, it allows a "thread to raise exceptions in the context of another thread" and in this way, the terminated thread can handle the termination without regularly checking an abort flag.
However, according to its original source, there are some issues with this code.
The exception will be raised only when executing python bytecode. If your thread calls a native/built-in blocking function, the
exception will be raised only when execution returns to the python
code.
There is also an issue if the built-in function internally calls PyErr_Clear(), which would effectively cancel your pending exception.
You can try to raise it again.
Only exception types can be raised safely. Exception instances are likely to cause unexpected behavior, and are thus restricted.
For example: t1.raise_exc(TypeError) and not t1.raise_exc(TypeError("blah")).
IMHO it's a bug, and I reported it as one. For more info, http://mail.python.org/pipermail/python-dev/2006-August/068158.html
I asked to expose this function in the built-in thread module, but since ctypes has become a standard library (as of 2.5), and this
feature is not likely to be implementation-agnostic, it may be kept
unexposed.
Assuming that you want to have multiple threads of the same function, this is, IMHO, the easiest implementation for stopping one by id:
import time
from threading import Thread
def doit(id=0):
doit.stop=0
print("start id:%d"%id)
while 1:
time.sleep(1)
print(".")
if doit.stop==id:
doit.stop=0
break
print("end thread %d"%id)
t5=Thread(target=doit, args=(5,))
t6=Thread(target=doit, args=(6,))
t5.start() ; t6.start()
time.sleep(2)
doit.stop =5 #kill t5
time.sleep(2)
doit.stop =6 #kill t6
The nice thing here is that you can have multiple threads of the same and of different functions, and stop them all by functionname.stop.
If you want to have only one thread of a function, then you don't need to remember the id. Just stop if doit.stop > 0.
Just to build on @SCB's idea (which was exactly what I needed), here is a KillableThread subclass with a customized function:
import time
from threading import Thread, Event
class KillableThread(Thread):
def __init__(self, sleep_interval=1, target=None, name=None, args=(), kwargs={}):
super().__init__(None, target, name, args, kwargs)
self._kill = Event()
self._interval = sleep_interval
print(self._target)
def run(self):
while True:
# Call custom function with arguments
self._target(*self._args)
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
if __name__ == '__main__':
def print_msg(msg):
print(msg)
t = KillableThread(10, print_msg, args=("hello world",))
t.start()
time.sleep(6)
print("About to kill thread")
t.kill()
Naturally, as with @SCB's example, the thread doesn't wait for a new loop iteration to stop. In this example, you would see the "Killing Thread" message printed right after "About to kill thread", instead of waiting 4 more seconds for the thread to complete (since we have already slept for 6 seconds).
The second argument in the KillableThread constructor is your custom function (print_msg here). The args argument holds the arguments that will be used when calling the function (("hello world",) here).
Python version: 3.8
Use a daemon thread to execute what we want; if we want the daemon thread to be terminated, all we need is to make its parent thread exit, and the system will then terminate the daemon thread that the parent thread created.
It also supports coroutines and coroutine functions.
import concurrent.futures
import time

def main():
start_time = time.perf_counter()
t1 = ExitThread(time.sleep, (10,), debug=False)
t1.start()
time.sleep(0.5)
t1.exit()
try:
print(t1.result_future.result())
except concurrent.futures.CancelledError:
pass
end_time = time.perf_counter()
print(f"time cost {end_time - start_time:0.2f}")
Below is the ExitThread source code:
import concurrent.futures
import threading
import typing
import asyncio
class _WorkItem(object):
""" concurrent\futures\thread.py
"""
def __init__(self, future, fn, args, kwargs, *, debug=None):
self._debug = debug
self.future = future
self.fn = fn
self.args = args
self.kwargs = kwargs
def run(self):
if self._debug:
print("ExitThread._WorkItem run")
if not self.future.set_running_or_notify_cancel():
return
try:
coroutine = None
if asyncio.iscoroutinefunction(self.fn):
coroutine = self.fn(*self.args, **self.kwargs)
elif asyncio.iscoroutine(self.fn):
coroutine = self.fn
if coroutine is None:
result = self.fn(*self.args, **self.kwargs)
else:
result = asyncio.run(coroutine)
if self._debug:
print("_WorkItem done")
except BaseException as exc:
self.future.set_exception(exc)
# Break a reference cycle with the exception 'exc'
self = None
else:
self.future.set_result(result)
class ExitThread:
""" Like a stoppable thread
Using coroutine for target then exit before running may cause RuntimeWarning.
"""
def __init__(self, target: typing.Union[typing.Coroutine, typing.Callable] = None
, args=(), kwargs={}, *, daemon=None, debug=None):
#
self._debug = debug
self._parent_thread = threading.Thread(target=self._parent_thread_run, name="ExitThread_parent_thread"
, daemon=daemon)
self._child_daemon_thread = None
self.result_future = concurrent.futures.Future()
self._workItem = _WorkItem(self.result_future, target, args, kwargs, debug=debug)
self._parent_thread_exit_lock = threading.Lock()
self._parent_thread_exit_lock.acquire()
self._parent_thread_exit_lock_released = False # When done it will be True
self._started = False
self._exited = False
self.result_future.add_done_callback(self._release_parent_thread_exit_lock)
def _parent_thread_run(self):
self._child_daemon_thread = threading.Thread(target=self._child_daemon_thread_run
, name="ExitThread_child_daemon_thread"
, daemon=True)
self._child_daemon_thread.start()
# Block manager thread
self._parent_thread_exit_lock.acquire()
self._parent_thread_exit_lock.release()
if self._debug:
print("ExitThread._parent_thread_run exit")
def _release_parent_thread_exit_lock(self, _future):
if self._debug:
print(f"ExitThread._release_parent_thread_exit_lock {self._parent_thread_exit_lock_released} {_future}")
if not self._parent_thread_exit_lock_released:
self._parent_thread_exit_lock_released = True
self._parent_thread_exit_lock.release()
def _child_daemon_thread_run(self):
self._workItem.run()
def start(self):
if self._debug:
print(f"ExitThread.start {self._started}")
if not self._started:
self._started = True
self._parent_thread.start()
def exit(self):
if self._debug:
print(f"ExitThread.exit exited: {self._exited} lock_released: {self._parent_thread_exit_lock_released}")
if self._parent_thread_exit_lock_released:
return
if not self._exited:
self._exited = True
if not self.result_future.cancel():
if self.result_future.running():
self.result_future.set_exception(concurrent.futures.CancelledError())
As mentioned in @Kozyarchuk's answer, installing a trace works. Since that answer contained no code, here is a working, ready-to-use example:
import sys, threading, time
class TraceThread(threading.Thread):
def __init__(self, *args, **keywords):
threading.Thread.__init__(self, *args, **keywords)
self.killed = False
def start(self):
self._run = self.run
self.run = self.settrace_and_run
threading.Thread.start(self)
def settrace_and_run(self):
sys.settrace(self.globaltrace)
self._run()
def globaltrace(self, frame, event, arg):
return self.localtrace if event == 'call' else None
def localtrace(self, frame, event, arg):
if self.killed and event == 'line':
raise SystemExit()
return self.localtrace
def f():
while True:
print('1')
time.sleep(2)
print('2')
time.sleep(2)
print('3')
time.sleep(2)
t = TraceThread(target=f)
t.start()
time.sleep(2.5)
t.killed = True
It stops after having printed 1 and 2. 3 is not printed.
An alternative is to use signal.pthread_kill to send a stop signal. Note, though, that the default action for SIGTSTP suspends the whole process, not just the target thread, as the shell output below shows.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep
def target():
for num in count():
print(num)
sleep(1)
thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
result
0
1
2
3
4
[14]+ Stopped
Pieter Hintjens -- one of the founders of the ØMQ project -- says that using ØMQ and avoiding synchronization primitives like locks, mutexes, events, etc. is the sanest and most secure way to write multi-threaded programs:
http://zguide.zeromq.org/py:all#Multithreading-with-ZeroMQ
This includes telling a child thread that it should cancel its work. This is done by equipping the thread with a ØMQ socket and polling on that socket for a message saying that it should cancel.
The link also provides an example of multi-threaded Python code with ØMQ.
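A minimal sketch of that pattern, assuming pyzmq is installed (the PAIR sockets, the inproc endpoint name, and the b"cancel" message are illustrative choices, not taken from the guide):
import threading
import time
import zmq

def worker(ctx):
    sock = ctx.socket(zmq.PAIR)               # control channel back to the parent
    sock.connect("inproc://control")
    poller = zmq.Poller()
    poller.register(sock, zmq.POLLIN)
    while True:
        # do one unit of work here, then poll briefly for a cancel message
        events = dict(poller.poll(timeout=100))
        if sock in events and sock.recv() == b"cancel":
            break
    sock.close()

ctx = zmq.Context.instance()
control = ctx.socket(zmq.PAIR)
control.bind("inproc://control")              # bind before the worker connects (older libzmq requires it for inproc)
t = threading.Thread(target=worker, args=(ctx,))
t.start()
time.sleep(1)
control.send(b"cancel")                       # tell the worker to stop
t.join()
control.close()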
This seems to work with pywin32 on Windows 7:
my_thread = threading.Thread()
my_thread.start()
my_thread._Thread__stop()
If you really need the ability to kill a sub-task, use an alternate implementation. multiprocessing and gevent both support indiscriminately killing a "thread".
Python's threading does not support cancellation. Do not even try. Your code is very likely to deadlock, corrupt or leak memory, or have other unintended "interesting" hard-to-debug effects which happen rarely and nondeterministically.
You can execute your command in a process and then kill it using the process id.
I needed to sync between two threads, one of which doesn't return by itself.
import os
import signal
import subprocess
from threading import Thread

processIds = []
def executeRecord(command):
print(command)
process = subprocess.Popen(command, stdout=subprocess.PIPE)
processIds.append(process.pid)
print(processIds[0])
#Command that doesn't return by itself
process.stdout.read().decode("utf-8")
return;
def recordThread(command, timeOut):
thread = Thread(target=executeRecord, args=(command,))
thread.start()
thread.join(timeOut)
os.kill(processIds.pop(), signal.SIGINT)
return;
The simplest way is this:
from threading import Thread
from time import sleep
def do_something():
global thread_work
while thread_work:
print('doing something')
sleep(5)
print('Thread stopped')
thread_work = True
Thread(target=do_something).start()
sleep(5)
thread_work = False
This is a bad answer; see the comments.
Here's how to do it:
from threading import *
...
for thread in enumerate():
if thread.isAlive():
try:
thread._Thread__stop()
except:
print(str(thread.getName()) + ' could not be terminated')
Give it a few seconds, then your thread should be stopped. Check also the thread._Thread__delete() method.
I'd recommend a thread.quit() method for convenience. For example, if you have a socket in your thread, I'd recommend creating a quit() method in your socket-handle class that terminates the socket and then runs thread._Thread__stop() inside quit().
I stumbled upon a piece of code that is a bit weird. I expected that, when the program is signaled, exit() would raise SystemExit once and cause the program to exit; however, in this case, when the main thread is blocking on th.join(), the exit() statement needs to be called twice for the program to exit.
It is not a practical exercise, but I want to know what is going on under the hood.
import threading
import time
import signal
def task():
while True:
time.sleep(1)
def sig_handler(self, *_):
# raise ValueError()
exit()
def main():
signal.signal(signal.SIGINT, sig_handler)
th = threading.Thread(target=task)
th.start()
th.join()
if __name__ == "__main__":
main()
Usually, our main program implicitly waits until all other threads have completed their work. Using daemon threads is useful for services where there may not be an easy way to interrupt the thread, or where letting the thread die in the middle of its work does not lose or corrupt data. To set a thread as a daemon that runs without blocking the main program from exiting, use the setDaemon() method (or set th.daemon = True). Your main function will be:
def main():
signal.signal(signal.SIGINT, sig_handler)
th = threading.Thread(target=task)
th.setDaemon(True)
th.start()
th.join()
Regarding the PyThreadState_SetAsyncExc approach shown in the answer quoted at the top of this page: as noted in the documentation, it is not a magic bullet, because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern for this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
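As a rough illustration of that pattern (the names are made up for the example), the worker below catches a dedicated exception type, which would be injected with the PyThreadState_SetAsyncExc technique described above, and releases its resources before exiting:
import time

class JobAborted(Exception):
    """Exception type injected into the worker to request termination."""

def worker():
    resource = open("work.log", "w")   # stand-in for any critical resource
    try:
        while True:
            resource.write("still working\n")
            time.sleep(0.1)            # the async exception lands once we are
                                       # back executing Python bytecode
    except JobAborted:
        print("aborted, cleaning up")
    finally:
        resource.close()               # cleanup runs either way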
A multiprocessing.Process can be killed with p.terminate().
In the cases where I want to kill a thread but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full-blown processes. For code that uses just a few threads, the overhead is not that bad.
E.g., this comes in handy to easily terminate helper "threads" which execute blocking I/O.
The conversion is trivial: in the related code, replace all threading.Thread with multiprocessing.Process and all queue.Queue with multiprocessing.Queue, and add the required calls of p.terminate() to the parent process that wants to kill its child p.
See the Python documentation for multiprocessing.
Example:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
# Terminate the process
proc.terminate() # sends a SIGTERM
There is no official API to do that, no.
You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes.
Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed.
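For example, on Windows, a rough ctypes sketch might look like the following. It assumes the CPython detail that Thread.ident is the OS thread id on Windows, and it is shown only to illustrate how such unsafe, low-level termination would be wired up, not as a recommendation.
import ctypes
from ctypes import wintypes

THREAD_TERMINATE = 0x0001

kernel32 = ctypes.windll.kernel32
kernel32.OpenThread.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.DWORD)
kernel32.OpenThread.restype = wintypes.HANDLE
kernel32.TerminateThread.argtypes = (wintypes.HANDLE, wintypes.DWORD)
kernel32.CloseHandle.argtypes = (wintypes.HANDLE,)

def terminate_os_thread(py_thread):
    """Forcibly terminate a threading.Thread on Windows (unsafe, see above)."""
    handle = kernel32.OpenThread(THREAD_TERMINATE, False, py_thread.ident)
    if not handle:
        raise ctypes.WinError()
    try:
        # TerminateThread gives the thread no chance to release locks or run
        # finally: blocks, which is exactly the danger described above.
        if not kernel32.TerminateThread(handle, 0):
            raise ctypes.WinError()
    finally:
        kernel32.CloseHandle(handle)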
If you are trying to terminate the whole program, you can set the thread as a "daemon". See Thread.daemon.
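A minimal illustration (my own example, not from the answer): with daemon=True the interpreter exits as soon as the main thread finishes, without waiting for the worker.
import threading
import time

def worker():
    while True:
        print("working...")
        time.sleep(1)

t = threading.Thread(target=worker, daemon=True)
t.start()

time.sleep(3)
print("main thread done; the daemon worker dies with the process")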
As others have mentioned, the norm is to set a stop flag. For something lightweight (no subclassing of Thread, no global variable), a lambda callback is an option. (Note the parentheses in if stop().)
import threading
import time
def do_work(id, stop):
print("I am thread", id)
while True:
print("I am thread {} doing something".format(id))
if stop():
print(" Exiting loop.")
break
print("Thread {}, signing off".format(id))
def main():
stop_threads = False
workers = []
for id in range(0,3):
tmp = threading.Thread(target=do_work, args=(id, lambda: stop_threads))
workers.append(tmp)
tmp.start()
time.sleep(3)
print('main: done sleeping; time to stop the threads.')
stop_threads = True
for worker in workers:
worker.join()
print('Finis.')
if __name__ == '__main__':
main()
Replacing print() with a pr() function that always flushes (sys.stdout.flush()) may improve the precision of the shell output.
(Only tested on Windows/Eclipse/Python3.3)
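Such a pr() helper might look like the sketch below (my own example; in Python 3.3+, print(..., flush=True) achieves the same thing):
import sys

def pr(*args):
    """print() replacement that flushes stdout immediately."""
    print(*args)
    sys.stdout.flush()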
In Python, you simply cannot kill a Thread directly.
If you do NOT really need to have a Thread (!), what you can do, instead of using the threading package, is to use the multiprocessing package. Here, to kill a process, you can simply call the method:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, on Windows through the TerminateProcess() call). Be careful when using it with a Queue or a Pipe, as it may corrupt the data in the Queue/Pipe!
Note that multiprocessing.Event and multiprocessing.Semaphore work in exactly the same way as threading.Event and threading.Semaphore, respectively. In fact, the former are clones of the latter.
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no alive non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if some daemon threads are still working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
Of course you can, and should, use the daemon flag with multiprocessing as well. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please note that sys.exit() and os.kill() are not options.
This is based on the thread2 -- killable threads ActiveState recipe.
You need to call PyThreadState_SetAsyncExc(), which is only available through the ctypes module.
This has only been tested on Python 2.7.3, but it is likely to work with other recent 2.x releases. PyThreadState_SetAsyncExc() still exists in Python 3 for backwards compatibility (but I have not tested it).
import ctypes
def terminate_thread(thread):
"""Terminates a python thread from another thread.
:param thread: a threading.Thread instance
"""
if not thread.isAlive():
return
exc = ctypes.py_object(SystemExit)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
ctypes.c_long(thread.ident), exc)
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
raise SystemError("PyThreadState_SetAsyncExc failed")
You should never forcibly kill a thread without cooperating with it.
Killing a thread removes any guarantees that try/finally blocks set up so you might leave locks locked, files open, etc.
The only time you can argue that forcibly killing threads is a good idea is to kill a program fast, but never single threads.
If you are explicitly calling time.sleep() as part of your thread (say, polling some external service), an improvement on Phillipe's method is to use the timeout in the event's wait() method wherever you would sleep().
For example:
import threading
class KillableThread(threading.Thread):
def __init__(self, sleep_interval=1):
super().__init__()
self._kill = threading.Event()
self._interval = sleep_interval
def run(self):
while True:
print("Do Something")
# If no kill signal is set, sleep for the interval,
# If kill signal comes in while sleeping, immediately
# wake up and handle
is_killed = self._kill.wait(self._interval)
if is_killed:
break
print("Killing Thread")
def kill(self):
self._kill.set()
Then to run it
t = KillableThread(sleep_interval=5)
t.start()
# Every 5 seconds it prints:
#: Do Something
t.kill()
#: Killing Thread
The advantage of using wait() instead of sleep()ing and regularly checking the event is that you can program longer sleep intervals, the thread is stopped almost immediately (when you would otherwise be sleep()ing), and, in my opinion, the code for handling the exit is significantly simpler.
You can kill a thread by installing a trace function into the thread that will exit it. See the attached link for one possible implementation.
Kill a thread in Python
It is better if you don't kill a thread.
One approach is to introduce a "try" block into the thread's cycle and raise an exception when you want to stop the thread (for example, a break/return/... that stops your for/while/...), as sketched below.
I've used this in my app and it works...
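A minimal sketch of that idea (my own illustration, with made-up names): the worker wraps its loop in a try block, and a shared flag makes it raise out of the cycle so any cleanup can run.
import threading
import time

class StopWorker(Exception):
    """Raised inside the worker loop when a stop is requested."""

stop_requested = threading.Event()

def worker():
    try:
        while True:
            if stop_requested.is_set():
                raise StopWorker
            time.sleep(0.5)   # the thread's regular work goes here
    except StopWorker:
        print("worker stopping, cleaning up")

t = threading.Thread(target=worker)
t.start()
time.sleep(2)
stop_requested.set()
t.join()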
It is definitely possible to implement a Thread.stop method as shown in the following example code:
import sys
import threading
import time
class StopThread(StopIteration):
pass
threading.SystemExit = SystemExit, StopThread
class Thread2(threading.Thread):
def stop(self):
self.__stop = True
def _bootstrap(self):
if threading._trace_hook is not None:
raise ValueError('Cannot run thread with tracing!')
self.__stop = False
sys.settrace(self.__trace)
super()._bootstrap()
def __trace(self, frame, event, arg):
if self.__stop:
raise StopThread()
return self.__trace
class Thread3(threading.Thread):
def _bootstrap(self, stop_thread=False):
def stop():
nonlocal stop_thread
stop_thread = True
self.stop = stop
def tracer(*_):
if stop_thread:
raise StopThread()
return tracer
sys.settrace(tracer)
super()._bootstrap()
###############################################################################
def main():
test1 = Thread2(target=printer)
test1.start()
time.sleep(1)
test1.stop()
test1.join()
test2 = Thread2(target=speed_test)
test2.start()
time.sleep(1)
test2.stop()
test2.join()
test3 = Thread3(target=speed_test)
test3.start()
time.sleep(1)
test3.stop()
test3.join()
def printer():
while True:
print(time.time() % 1)
time.sleep(0.1)
def speed_test(count=0):
try:
while True:
count += 1
except StopThread:
print('Count =', count)
if __name__ == '__main__':
main()
The Thread3 class appears to run code approximately 33% faster than the Thread2 class.
I'm way late to this game, but I've been wrestling with a similar question and the following appears to both resolve the issue perfectly for me AND lets me do some basic thread state checking and cleanup when the daemonized sub-thread exits:
import threading
import time
import atexit

def do_work():
    i = 0

    @atexit.register
    def goodbye():
        print("'CLEANLY' kill sub-thread with value: %s [THREAD: %s]" %
              (i, threading.currentThread().ident))

    while True:
        print(i)
        i += 1
        time.sleep(1)

t = threading.Thread(target=do_work)
t.daemon = True
t.start()

def after_timeout():
    print("KILL MAIN THREAD: %s" % threading.currentThread().ident)
    raise SystemExit

threading.Timer(2, after_timeout).start()
Yields:
0
1
KILL MAIN THREAD: 140013208254208
'CLEANLY' kill sub-thread with value: 2 [THREAD: 140013674317568]
from ctypes import *
pthread = cdll.LoadLibrary("libpthread-2.15.so")
pthread.pthread_cancel(c_ulong(t.ident))
t is your Thread object.
Read the Python source (Modules/threadmodule.c and Python/thread_pthread.h) and you can see that Thread.ident is a pthread_t type, so you can do anything pthread can do in Python using libpthread.
The following workaround can be used to kill a thread:
import thread  # Python 2 module; renamed _thread in Python 3

kill_threads = False

def doSomething():
    global kill_threads
    while True:
        if kill_threads:
            thread.exit()
        ......
        ......

thread.start_new_thread(doSomething, ())
This can be used even to terminate threads, whose code is written in another module, from the main thread. We can declare a global variable in that module and use it to terminate the thread(s) spawned in that module.
I usually use this to terminate all the threads at program exit. This might not be the perfect way to terminate threads, but it could help.
Here's yet another way to do it, but with extremely clean and simple code, that works in Python 3.7 in 2021:
import ctypes
def kill_thread(thread):
"""
thread: a threading.Thread object
"""
thread_id = thread.ident
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
if res > 1:
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
print('Exception raise failure')
Adapted from here: https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
One thing I want to add: if you read the official documentation of the Python threading library, it recommends avoiding "daemonic" threads when you don't want threads to end abruptly, and instead using the flag that Paolo Rovelli mentioned.
From official documentation:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signaling mechanism such as an Event.
I think that creating daemonic threads depends on your application, but in general (and in my opinion) it's better to avoid killing them or making them daemonic. In multiprocessing you can use is_alive() to check the process status and terminate() to finish it (you also avoid GIL problems), though you can sometimes run into further problems when executing your code on Windows.
And always remember that if you have "live threads", the Python interpreter will keep running to wait for them. (Because of this, daemonic threads can help if it doesn't matter that they end abruptly.)
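A small multiprocessing sketch of that is_alive()/terminate() combination (my own example):
import multiprocessing
import time

def worker():
    while True:
        time.sleep(1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(3)

    if p.is_alive():          # check the process status
        p.terminate()         # ask the OS to kill it (SIGTERM on Unix)
    p.join()                  # reap the process and collect its exit code
    print("exit code:", p.exitcode)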
There is a library built for this purpose, stopit. Although some of the same cautions listed herein still apply, at least this library presents a regular, repeatable technique for achieving the stated goal.
While it's rather old, this might be a handy solution for some:
A little module that extends the threading module's functionality --
it allows one thread to raise exceptions in the context of another
thread. By raising SystemExit, you can finally kill Python threads.
import threading
import ctypes
def _async_raise(tid, excobj):
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(excobj))
if res == 0:
raise ValueError("nonexistent thread id")
elif res > 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
raise SystemError("PyThreadState_SetAsyncExc failed")
class Thread(threading.Thread):
def raise_exc(self, excobj):
assert self.isAlive(), "thread must be started"
for tid, tobj in threading._active.items():
if tobj is self:
_async_raise(tid, excobj)
return
# the thread was alive when we entered the loop, but was not found
# in the dict, hence it must have been already terminated. should we raise
# an exception here? silently ignore?
def terminate(self):
# must raise the SystemExit type, instead of a SystemExit() instance
# due to a bug in PyThreadState_SetAsyncExc
self.raise_exc(SystemExit)
So, it allows a "thread to raise exceptions in the context of another thread" and in this way, the terminated thread can handle the termination without regularly checking an abort flag.
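A hypothetical usage sketch of the Thread class above (my own example, assuming the module is defined in the same file); note the caveats that follow.
import time

def crunch():
    # Pure-Python busy loop, so the injected exception can be delivered
    total = 0
    try:
        while True:
            total += 1
    except SystemExit:
        print("terminated after", total, "iterations")
        raise

t = Thread(target=crunch)   # the Thread subclass defined above
t.start()
time.sleep(1)
t.terminate()               # injects SystemExit into the worker
t.join()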
However, according to its original source, there are some issues with this code.
The exception will be raised only when executing python bytecode. If your thread calls a native/built-in blocking function, the
exception will be raised only when execution returns to the python
code.
There is also an issue if the built-in function internally calls PyErr_Clear(), which would effectively cancel your pending exception.
You can try to raise it again.
Only exception types can be raised safely. Exception instances are likely to cause unexpected behavior, and are thus restricted.
For example: t1.raise_exc(TypeError) and not t1.raise_exc(TypeError("blah")).
IMHO it's a bug, and I reported it as one. For more info, http://mail.python.org/pipermail/python-dev/2006-August/068158.html
I asked to expose this function in the built-in thread module, but since ctypes has become a standard library (as of 2.5), and this
feature is not likely to be implementation-agnostic, it may be kept
unexposed.
Assuming that you want to have multiple threads of the same function, this is IMHO the easiest implementation to stop one by id:
import time
from threading import Thread
def doit(id=0):
doit.stop=0
print("start id:%d"%id)
while 1:
time.sleep(1)
print(".")
if doit.stop==id:
doit.stop=0
break
print("end thread %d"%id)
t5=Thread(target=doit, args=(5,))
t6=Thread(target=doit, args=(6,))
t5.start() ; t6.start()
time.sleep(2)
doit.stop =5 #kill t5
time.sleep(2)
doit.stop =6 #kill t6
The nice thing here is that you can have multiple threads of the same and of different functions, and stop them all by functionname.stop.
If you want to have only one thread of the function, you don't need to remember the id; just stop when doit.stop > 0.
Just to build on #SCB's idea (which was exactly what I needed) to create a KillableThread subclass with a customized function:
from threading import Thread, Event
import time

class KillableThread(Thread):
    def __init__(self, sleep_interval=1, target=None, name=None, args=(), kwargs={}):
        super().__init__(None, target, name, args, kwargs)
        self._kill = Event()
        self._interval = sleep_interval
        print(self._target)

    def run(self):
        while True:
            # Call custom function with arguments
            self._target(*self._args)
            # If no kill signal is set, sleep for the interval,
            # If kill signal comes in while sleeping, immediately
            # wake up and handle
            is_killed = self._kill.wait(self._interval)
            if is_killed:
                break
        print("Killing Thread")

    def kill(self):
        self._kill.set()

if __name__ == '__main__':
    def print_msg(msg):
        print(msg)

    t = KillableThread(10, print_msg, args=("hello world",))  # note the one-element tuple
    t.start()
    time.sleep(6)
    print("About to kill thread")
    t.kill()
Naturally, as with #SCB's example, the thread doesn't wait for a new loop iteration to stop. In this example, you would see the "Killing Thread" message printed right after "About to kill thread", instead of waiting 4 more seconds for the thread to complete (since we have already slept for 6 seconds).
The second argument in the KillableThread constructor is your custom function (print_msg here). The args argument holds the arguments that will be used when calling the function, ("hello world",) here.
Python version: 3.8
The idea is to use a daemon thread to execute what we want; if we want the daemon thread to be terminated, all we need is to make its parent thread exit, and then the system will terminate the daemon thread that the parent thread created.
It also supports coroutines and coroutine functions.
def main():
start_time = time.perf_counter()
t1 = ExitThread(time.sleep, (10,), debug=False)
t1.start()
time.sleep(0.5)
t1.exit()
try:
print(t1.result_future.result())
except concurrent.futures.CancelledError:
pass
end_time = time.perf_counter()
print(f"time cost {end_time - start_time:0.2f}")
Below is the ExitThread source code:
import concurrent.futures
import threading
import typing
import asyncio
class _WorkItem(object):
""" concurrent\futures\thread.py
"""
def __init__(self, future, fn, args, kwargs, *, debug=None):
self._debug = debug
self.future = future
self.fn = fn
self.args = args
self.kwargs = kwargs
def run(self):
if self._debug:
print("ExitThread._WorkItem run")
if not self.future.set_running_or_notify_cancel():
return
try:
coroutine = None
if asyncio.iscoroutinefunction(self.fn):
coroutine = self.fn(*self.args, **self.kwargs)
elif asyncio.iscoroutine(self.fn):
coroutine = self.fn
if coroutine is None:
result = self.fn(*self.args, **self.kwargs)
else:
result = asyncio.run(coroutine)
if self._debug:
print("_WorkItem done")
except BaseException as exc:
self.future.set_exception(exc)
# Break a reference cycle with the exception 'exc'
self = None
else:
self.future.set_result(result)
class ExitThread:
""" Like a stoppable thread
Using coroutine for target then exit before running may cause RuntimeWarning.
"""
def __init__(self, target: typing.Union[typing.Coroutine, typing.Callable] = None
, args=(), kwargs={}, *, daemon=None, debug=None):
#
self._debug = debug
self._parent_thread = threading.Thread(target=self._parent_thread_run, name="ExitThread_parent_thread"
, daemon=daemon)
self._child_daemon_thread = None
self.result_future = concurrent.futures.Future()
self._workItem = _WorkItem(self.result_future, target, args, kwargs, debug=debug)
self._parent_thread_exit_lock = threading.Lock()
self._parent_thread_exit_lock.acquire()
self._parent_thread_exit_lock_released = False # When done it will be True
self._started = False
self._exited = False
self.result_future.add_done_callback(self._release_parent_thread_exit_lock)
def _parent_thread_run(self):
self._child_daemon_thread = threading.Thread(target=self._child_daemon_thread_run
, name="ExitThread_child_daemon_thread"
, daemon=True)
self._child_daemon_thread.start()
# Block manager thread
self._parent_thread_exit_lock.acquire()
self._parent_thread_exit_lock.release()
if self._debug:
print("ExitThread._parent_thread_run exit")
def _release_parent_thread_exit_lock(self, _future):
if self._debug:
print(f"ExitThread._release_parent_thread_exit_lock {self._parent_thread_exit_lock_released} {_future}")
if not self._parent_thread_exit_lock_released:
self._parent_thread_exit_lock_released = True
self._parent_thread_exit_lock.release()
def _child_daemon_thread_run(self):
self._workItem.run()
def start(self):
if self._debug:
print(f"ExitThread.start {self._started}")
if not self._started:
self._started = True
self._parent_thread.start()
def exit(self):
if self._debug:
print(f"ExitThread.exit exited: {self._exited} lock_released: {self._parent_thread_exit_lock_released}")
if self._parent_thread_exit_lock_released:
return
if not self._exited:
self._exited = True
if not self.result_future.cancel():
if self.result_future.running():
self.result_future.set_exception(concurrent.futures.CancelledError())
As mentioned in #Kozyarchuk's answer, installing trace works. Since this answer contained no code, here is a working ready-to-use example:
import sys, threading, time
class TraceThread(threading.Thread):
def __init__(self, *args, **keywords):
threading.Thread.__init__(self, *args, **keywords)
self.killed = False
def start(self):
self._run = self.run
self.run = self.settrace_and_run
threading.Thread.start(self)
def settrace_and_run(self):
sys.settrace(self.globaltrace)
self._run()
def globaltrace(self, frame, event, arg):
return self.localtrace if event == 'call' else None
def localtrace(self, frame, event, arg):
if self.killed and event == 'line':
raise SystemExit()
return self.localtrace
def f():
while True:
print('1')
time.sleep(2)
print('2')
time.sleep(2)
print('3')
time.sleep(2)
t = TraceThread(target=f)
t.start()
time.sleep(2.5)
t.killed = True
It stops after having printed 1 and 2. 3 is not printed.
An alternative is to use signal.pthread_kill to send a stop signal.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep
def target():
for num in count():
print(num)
sleep(1)
thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
result
0
1
2
3
4
[14]+ Stopped
Pieter Hintjens -- one of the founders of the ØMQ project -- says that using ØMQ and avoiding synchronization primitives like locks, mutexes, events, etc. is the sanest and most secure way to write multi-threaded programs:
http://zguide.zeromq.org/py:all#Multithreading-with-ZeroMQ
This includes telling a child thread that it should cancel its work. This would be done by equipping the thread with a ØMQ socket and polling on that socket for a message saying that it should cancel.
The link also provides an example of multi-threaded Python code with ØMQ.
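A rough sketch of that pattern with pyzmq (my own illustration; the inproc address and the STOP message are made up):
import threading
import time
import zmq

context = zmq.Context.instance()

def worker():
    sock = context.socket(zmq.PAIR)
    sock.connect("inproc://worker-control")
    poller = zmq.Poller()
    poller.register(sock, zmq.POLLIN)
    while True:
        # Do a slice of work, then poll briefly for a cancel message
        time.sleep(0.2)
        if dict(poller.poll(timeout=10)).get(sock) and sock.recv() == b"STOP":
            print("worker: cancel received, cleaning up")
            break

control = context.socket(zmq.PAIR)
control.bind("inproc://worker-control")   # bind before the worker connects
t = threading.Thread(target=worker)
t.start()

time.sleep(2)
control.send(b"STOP")   # tell the child thread to cancel its work
t.join()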
This seems to work with pywin32 on Windows 7:
my_thread = threading.Thread()
my_thread.start()
my_thread._Thread__stop()
I have some threads running, and one of those threads contains an object that will be spawning subprocesses. I want one such subprocess to be able to kill the entire application. The aforementioned object will need to save some state when it receives this signal. Unfortunately I can't get the signal to be handled in the thread that causes the kill.
Here is some example code that attempts to replicate the situation.
parent.py: starts a thread. That thread runs some subprocesses, one of which will try to kill the parent process.
#!/usr/local/bin/python3
import subprocess, time, threading, random
def killer_func():
possible_cmds = [['echo', 'hello'],
['echo', 'world'],
['/work/turbulencetoo/tmp/killer.py']
]
random.shuffle(possible_cmds)
for cmd in possible_cmds:
try:
time.sleep(2)
subprocess.check_call(cmd)
time.sleep(2)
except KeyboardInterrupt:
print("Kill -2 caught properly!!")
print("Here I could properly save my state")
break
except Exception as e:
print("Unhandled Exception: {}".format(e))
else:
print("No Exception")
killer_thread = threading.Thread(target=killer_func)
killer_thread.start()
try:
while True:
killer_thread.join(4)
if not killer_thread.is_alive():
print("The killer thread has died")
break
else:
print("Killer thread still alive, try to join again.")
except KeyboardInterrupt:
print("Caught the kill -2 in the main thread :(")
print("Main program shutting down")
killer.py, a simple program that tries to kill its parent process with SIGINT:
#!/usr/local/bin/python3
import time, os, subprocess, sys
ppid = os.getppid()
# -2 specifies SIGINT, python handles this as a KeyboardInterrupt exception
cmd = ["kill", "-2", "{}".format(ppid)]
subprocess.check_call(cmd)
time.sleep(3)
sys.exit(0)
Here is some sample output from running the parent program:
$ ./parent.py
hello
Killer thread still alive, try to join again.
No Exception
Killer thread still alive, try to join again.
Caught the kill -2 in the main thread :(
Main program shutting down
No Exception
world
No Exception
I've tried using signal.signal() inside killer_func, but it doesn't work in a sub thread.
Is there a way to force the signal or exception to be handled by the function without the main thread being aware?
The main thread of your program will always be the one that receives the signal. The signal module documentation states this:
Some care must be taken if both signals and threads are used in the
same program. The fundamental thing to remember in using signals and
threads simultaneously is: always perform signal() operations in the
main thread of execution. Any thread can perform an alarm(),
getsignal(), pause(), setitimer() or getitimer(); only the main thread
can set a new signal handler, and the main thread will be the only one
to receive signals (this is enforced by the Python signal module, even
if the underlying thread implementation supports sending signals to
individual threads). This means that signals can’t be used as a means
of inter-thread communication. Use locks instead.
You'll need to refactor your program so that the main thread receiving the signal doesn't prevent you from saving state. The easiest way is to use something like threading.Event() to tell the background thread that the program has been aborted, and let it clean up when it sees that the event has been set:
import subprocess
import threading
import random
def killer_func(event):
possible_cmds = [['echo', 'hello'],
['echo', 'world'],
['/home/cycdev/killer.py']
]
random.shuffle(possible_cmds)
for cmd in possible_cmds:
subprocess.check_call(cmd)
event.wait(4)
if event.is_set():
print("Main thread got a signal. Time to clean up")
# save state here.
return
event = threading.Event()
killer_thread = threading.Thread(target=killer_func, args=(event,))
killer_thread.start()
try:
killer_thread.join()
except KeyboardInterrupt:
print("Caught the kill -2 in the main thread :)")
event.set()
killer_thread.join()
print("Main program shutting down")
Signals are always handled in the main thread. When you receive a signal, you don't know where it comes from. You can't say "handle it in the thread that spawned the signal-sending-process" because you don't know what signal-sending-process is.
The way to solve this is to use Condition Variables to notify all threads that a signal was received and that they have to shut down.
import threading

got_interrupt = False  # global variable

def killer_func(cv):
    ...
    with cv:
        cv.wait(2)
        interrupted = got_interrupt  # Read got_interrupt while holding the lock
    if interrupted:
        cleanup()
    ...

lock = threading.Lock()
notifier_cv = threading.Condition(lock)
killer_thread = threading.Thread(target=killer_func, args=(notifier_cv,))
killer_thread.start()

try:
    ...
except KeyboardInterrupt:
    with notifier_cv:
        got_interrupt = True
        notifier_cv.notify_all()