I start a thread using the following code.
t = thread.start_new_thread(myfunction, ())
How can I kill the thread t from another thread? Basically, in terms of code, I want to be able to do something like this:
t.kill()
Note that I'm using Python 2.4.
In Python, you simply cannot kill a Thread.
If you do NOT really need a Thread, then instead of using the threading package (http://docs.python.org/2/library/threading.html) you can use the multiprocessing package (http://docs.python.org/2/library/multiprocessing.html). There, to kill a process, you can simply call:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, on Windows through the TerminateProcess() call). Be careful when using it together with a Queue or a Pipe, as it may corrupt the data in the Queue/Pipe.
Note that multiprocessing.Event and multiprocessing.Semaphore work exactly the same way as threading.Event and threading.Semaphore respectively; in fact, the former are clones of the latter.
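For illustration, here is a minimal sketch of that approach; the worker function below is made up and not from the original question:

import time
from multiprocessing import Process

def worker():
    # Hypothetical long-running job that cannot be interrupted cooperatively.
    while True:
        time.sleep(1)

if __name__ == '__main__':
    p = Process(target=worker)
    p.start()
    time.sleep(3)       # let the child run for a while
    p.terminate()       # SIGTERM on Unix, TerminateProcess() on Windows
    p.join()            # reap the child process
    print(p.exitcode)   # e.g. -15 on Unix (negative of the signal number)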
If you REALLY need to use a Thread, there is no way to kill it directly. What you can do, however, is use a "daemon thread". In fact, in Python, a Thread can be flagged as a daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no non-daemon threads are left alive. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if there are still some daemon threads working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
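A minimal sketch of the daemon pattern, with an invented worker function:

import time
import threading

def worker():
    while True:
        time.sleep(1)   # stand-in for background work

t = threading.Thread(target=worker)
t.daemon = True         # must be set before start()
t.start()

time.sleep(3)
# The main thread ends here; the daemon thread is stopped abruptly with it.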
Of course you can, and should, use the daemon flag with multiprocessing as well. Here, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please note that sys.exit() and os.kill() are not options here.
If your thread is busy executing Python code, you have a bigger problem than the inability to kill it. The GIL will prevent any other thread from even running whatever instructions you would use to do the killing. (After a bit of research, I've learned that the interpreter periodically releases the GIL, so the preceding statement is bogus. The remaining comment stands, however.)
Your thread must be written in a cooperative manner. That is, it must periodically check in with a signalling object such as a semaphore, which the main thread can use to instruct the worker thread to voluntarily exit.
while not sema.acquire(False):
    # Do a small portion of work…
or:
for item in work:
    # Keep working…
    # Somewhere deep in the bowels…
    if sema.acquire(False):
        thread.exit()
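Put together, that semaphore pattern might look like the following sketch (using the threading module rather than the low-level thread module; the "work" here is just a sleep):

import time
import threading

stop_sema = threading.Semaphore(0)   # starts at 0, so acquire(False) fails

def worker():
    while not stop_sema.acquire(False):
        time.sleep(0.1)              # do a small portion of work here
    print("worker exiting voluntarily")

t = threading.Thread(target=worker)
t.start()

time.sleep(1)
stop_sema.release()                  # tell the worker to stop
t.join()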
You can't kill a thread from another thread. You need to signal to the other thread that it should end. And by "signal" I don't mean use the signal function, I mean that you have to arrange for some communication between the threads.
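The usual way to arrange that communication is a threading.Event that the worker polls between units of work; a sketch, with an invented worker body:

import time
import threading

stop_requested = threading.Event()

def worker():
    while not stop_requested.is_set():
        time.sleep(0.1)   # one small unit of work per iteration

t = threading.Thread(target=worker)
t.start()

stop_requested.set()      # "signal" the worker to finish
t.join()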
Related
I am in a situation where I have two endpoints I can ask for a value, and one may be faster than the other. The calls to the endpoints are blocking. I want to wait for one to complete and take that result without waiting for the other to complete.
My solution was to issue the requests in separate threads and have those threads set a flag to true when they complete. In the main thread, I continuously check the flags (I know it is a busy wait, but that is not my primary concern right now) and when one completes it takes that value and returns it as the result.
The issue I have is that I never clean up the other thread. I can't find any way to do it without using .join(), which would just block and defeat the purpose of this whole thing. So, how can I clean up that other, slower thread that is blocking without joining it from the main thread?
What you want is to make your threads daemons, so that when you get the result and your main thread finishes, the other running thread will be forced to finish too. You do that by setting the daemon keyword to True:
tr = threading.Thread(daemon=True)
From the threading docs:
The significance of this flag is that the entire Python program exits when only daemon threads are left.
Although:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
I don't have any particular experience with Event objects, so I can't elaborate on that; the threading documentation covers them in more detail.
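For the endpoint race described above, a sketch of the daemon approach might look like this (Python 3 syntax; call_a and call_b stand in for the two blocking endpoint calls, and a Queue replaces the busy-wait on flags):

import threading
from queue import Queue   # the module is named Queue on Python 2

def race(call_a, call_b):
    results = Queue()

    def ask(call):
        results.put(call())          # blocking endpoint call

    for call in (call_a, call_b):
        threading.Thread(target=ask, args=(call,), daemon=True).start()

    # Block until the faster thread delivers a value; the slower daemon
    # thread is simply abandoned and dies when the program exits.
    return results.get()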
One quick and dirty solution is to implement a method on the thread which closes the socket it is blocking on. You then have to catch the resulting exception in the main thread.
Will the resources of a non-daemon thread get released back to the OS once the thread completes? I.e., if the main thread does not call join() on these non-daemon threads, will the Python GC call join on them and release the resources once held by the thread?
If you spawn a thread that runs a function, and then that function completes before the end of the program, then yes, the thread will get garbage collected once it is (a) no longer running, and (b) no longer referenced by anything else.
"
A thread can be flagged as a “daemon thread”. The significance of this
flag is that the entire Python program exits when only daemon threads
are left. The initial value is inherited from the creating thread. The
flag can be set through the daemon property.
Note Daemon threads are abruptly stopped at shutdown. Their resources
(such as open files, database transactions, etc.) may not be released
properly. If you want your threads to stop gracefully, make them
non-daemonic and use a suitable signalling mechanism such as an Event.
" -- Python Thread Docs
Daemon threads are cleaned up by Python; non-daemonic threads are not - you have to signal them to stop. This is useful when executing some complicated parallel code, though :)
This does mean, though, that you can have random dummy/useless threads sitting around for you to clean up manually if you use non-daemonic threads.
tl;dr Non-daemonic threads are never 'finished' for you; you have to signal them to finish via your own mechanism or one of the signals, e.g. SIGTERM.
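If you go the signal route, remember that only the main thread receives OS signals, so a handler there can set an Event that the workers poll; a sketch with invented names:

import signal
import threading
import time

stop = threading.Event()

def worker():
    while not stop.is_set():
        time.sleep(0.1)          # stand-in for one chunk of real work

def handle_sigterm(signum, frame):
    stop.set()                   # ask the workers to wind down

signal.signal(signal.SIGTERM, handle_sigterm)

t = threading.Thread(target=worker)
t.start()

# Join with a timeout so the main thread keeps getting a chance
# to run the signal handler.
while t.is_alive():
    t.join(timeout=0.5)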
My Python script creates a lot of threads, all of them daemon threads, and I find that I get an error saying "out of memory".
How do I kill a daemon thread whilst my script/application is running?
I understand the concept of daemon threads - that they destroy themselves when my process (script or application) closes/finishes. But I want to kill some of my daemon threads whilst my script is still running, to avoid the "out of memory" error.
Will my thread below kill itself when there are no more tasks in the queue?
class ParsePageThread(threading.Thread):
    THREAD_NUM = 0

    def __init__(self, _queue):
        threading.Thread.__init__(self)
        self.queue = _queue

    def run(self):
        while True:
            try:
                url = self.queue.get()
            except Queue.Empty:
                return  # WILL this kill the thread?
            finally:
                self.queue.task_done()
I'll answer your second question first because it is easier. Yes, returning from the run method will indeed stop the thread; a detailed explanation is in the threading Thread Objects documentation.
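One caveat, not from the original answer: a bare queue.get() blocks forever and never raises Queue.Empty, so the except clause above would never fire. A sketch of a run() that can actually reach the return (it assumes the same import Queue as the question's code, uses a timeout, and calls task_done() only for items that were really fetched):

def run(self):
    while True:
        try:
            url = self.queue.get(timeout=1)   # block at most one second
        except Queue.Empty:
            return                            # returning from run() ends the thread
        try:
            pass                              # ... parse url here ...
        finally:
            self.queue.task_done()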
To stop a thread before its natural completion, you have to get a little more creative. There is no direct kill method on a thread object. What you need to do is use a shared variable to define the state of the thread.
alive = True

class MyThread(threading.Thread):
    def run(self):
        while alive:
            pass  # do work here
In some other piece of code, when you detect a condition for stopping that thread, the other thread simply sets alive to False:
alive = False
This is a simple example, I'll leave it to you to scale to multiple threads.
DANGER
This example works because reading and setting a boolean variable are atomic actions in Python, thanks to the Global Interpreter Lock. There are good tutorials on lower-level Python threading if you want more depth. You should stick to using the Queue object for handing work between threads, because that's exactly what it's for.
If you do anything more than reading and setting simple variables from multiple threads you should use Locks or alternatively Reentrant Locks depending on your design and needs. Even something as simple as a compare and swap without a lock can cause problems in your program that are very difficult to debug.
Another piece of advice for Python multithreading is never to do any significant work in the main interpreter thread. It should set up and start all the other threads and then sleep or wait on a condition object until the program exits. The reason is that no other Python thread can receive operating system signals, which means that no other thread can deal with Ctrl+C, a.k.a. KeyboardInterrupt exceptions. It is good practice to have the main thread handle the KeyboardInterrupt exception and then set all the alive variables to False so you can exit your program quickly. This is especially helpful while developing, so you don't have to constantly kill things when you make a mistake.
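A rough sketch of that main-thread pattern (names invented; each worker polls its own alive flag instead of a module-level one):

import time
import threading

class Worker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.alive = True

    def run(self):
        while self.alive:
            time.sleep(0.1)        # stand-in for real work

workers = [Worker() for _ in range(4)]
for w in workers:
    w.start()

try:
    while any(w.is_alive() for w in workers):
        time.sleep(0.5)            # main thread only waits, so it can receive Ctrl+C
except KeyboardInterrupt:
    for w in workers:
        w.alive = False            # ask every worker to stop
    for w in workers:
        w.join()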
I am trying to write a Unix client program that is listening to a socket, stdin, and reading from file descriptors. I assign each of these tasks to an individual thread and have them successfully communicating with the "main" application using synchronized queues and a semaphore. The problem is that when I want to shut down these child threads they are all blocking on input. Also, the threads cannot register signal handlers, because in Python only the main thread of execution is allowed to do so.
Any suggestions?
There is no good way to work around this, especially when the thread is blocking.
I had a similar issue (Python: How to terminate a blocking thread) and the only way I was able to stop my threads was to close the underlying connection, which caused the blocking thread to raise an exception and then allowed me to check the stop flag and close.
Example code:
import threading
from threading import Thread

class Example(object):
    def __init__(self):
        self.stop = threading.Event()
        self.connection = Connection()
        self.mythread = Thread(target=self.dowork)
        self.mythread.start()

    def dowork(self):
        while not self.stop.is_set():
            try:
                blockingcall()
            except CommunicationException:
                pass

    def terminate(self):
        self.stop.set()
        self.connection.close()
        self.mythread.join()
Another thing to note is that blocking operations generally offer a timeout. If you have that option I would consider using it. My last comment is that you could always make the thread daemonic.
From the pydoc:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
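For example, with a socket, the timeout route mentioned above could look like this sketch (the stop Event and the reader function are invented for illustration):

import socket
import threading

stop = threading.Event()

def reader(sock):
    sock.settimeout(1.0)               # make recv() return periodically
    while not stop.is_set():
        try:
            data = sock.recv(4096)
        except socket.timeout:
            continue                   # no data yet; re-check the stop flag
        if not data:
            break                      # peer closed the connection
        # ... handle data ...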
Also, the threads cannot register signal handlers
Signals to kill threads are potentially horrible, especially in C, especially if you allocate memory as part of the thread, since it won't be freed when that particular thread dies (as it belongs to the heap of the process). There is no garbage collection in C, so if that pointer goes out of scope, it's gone, and the memory remains allocated. So just be careful with that one: only do it that way in C if you're going to actually kill all the threads and end the process so that the memory is handed back to the OS. Adding and removing threads from a thread pool, for example, will give you a memory leak.
The problem is that when I want to shutdown these child threads they are all blocking on input.
Funnily enough, I've been fighting with the same thing recently. The solution is literally: don't make blocking calls without a timeout. So, for example, what you want ideally is:
def threadfunc(running):
    while running:
        blockingcall(timeout=1)
where running is passed from the controlling thread. I've never used threading, but I have used multiprocessing, and with that you actually need to pass an Event() object and check is_set(). But you asked for design patterns; that's the basic idea.
Then, when you want this thread to end, you run:
running.clear()
mythread.join()
and your main thread should then allow your client thread to handle its last call, and return, and the whole program folds up nicely.
What do you do if you have a blocking call without a timeout? Use the asynchronous option, and sleep (as in call whatever method you have to suspend the thread for a period of time so you're not spinning) if you need to. There's no other way around it.
See these answers:
Python SocketServer
How to exit a multithreaded program?
Basically, don't block on recv() by using select() with a timeout to check for readability of the socket, and poll a quit flag when select() times out.
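A sketch of that select()-based loop (the quit flag here is a threading.Event; the names are illustrative only):

import select
import threading

quit_flag = threading.Event()

def reader(sock):
    while not quit_flag.is_set():
        # Wait at most one second for the socket to become readable.
        readable, _, _ = select.select([sock], [], [], 1.0)
        if not readable:
            continue                   # timed out; re-check quit_flag
        data = sock.recv(4096)
        if not data:
            break                      # peer closed the connection
        # ... process data ...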
I am creating a thread in my Python app with thread.start_new_thread.
How do I stop it if it hasn't finished in three seconds time?
You can't do that directly. In any case, aborting a thread is not good practice; rather, think about using synchronization mechanisms that let you stop the thread in a "soft" way.
But daemonic threads will automatically be aborted if no non-daemonic threads remain (e.g. when the main thread, the only non-daemon one, ends). Maybe that's what you want.
If you really need to do this (e.g. the thread calls code that may hang forever) then consider rewriting your code to spawn a process with the multiprocessing module. You can then kill the process with the Process.terminate() method. You will need 2.6 or later for this, of course.
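A sketch of that multiprocessing variant for the three-second limit (myfunction stands in for the real work; requires Python 2.6+):

from multiprocessing import Process

def myfunction():
    pass   # potentially runs (or hangs) for a long time

if __name__ == '__main__':
    p = Process(target=myfunction)
    p.start()
    p.join(3)              # wait at most three seconds
    if p.is_alive():
        p.terminate()      # give up and kill the process
        p.join()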
You cannot. Threads can't be killed from the outside. The only thing you can do is add a way to ask the thread to exit. Obviously you won't be able to do this if the thread is blocked in some system call.
As noted in a related question, you might be able to raise an exception through ctypes.pythonapi, but not while it's waiting on a system call.
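For completeness, that ctypes trick usually looks like the sketch below; it only takes effect once the target thread executes Python bytecode again (not while it is blocked in a system call), and it is widely regarded as a last resort:

import ctypes
import threading
import time

def worker():
    while True:
        time.sleep(0.1)    # must keep running Python code for this to work

t = threading.Thread(target=worker)
t.start()
time.sleep(1)

# Asynchronously raise SystemExit in the worker thread.
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
    ctypes.c_long(t.ident), ctypes.py_object(SystemExit))
if res > 1:
    # We accidentally affected more than one thread state: undo it.
    ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(t.ident), None)
t.join()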