Python Threading, kill thread [duplicate] - python

This question already has answers here:
Is there any way to kill a Thread?
(31 answers)
Closed 8 years ago.
I want only one Thread to be active, here is the algorithm:
def myfunction(update):
    if threading.activeCount() >= 1:
        # kill all previous threads
        pass
    while True:
        # some execution
        pass

while True:
    t = threading.Thread(target=myfunction, args=(update,))
    t.start()
So in here the thread goes into an infinite loop in myfunction, so before starting a new one I need to close the previous one. Please guide me.

You can't kill threads; your function has to return somehow. The trick, then, is to write your function in such a way that it knows when to stop what it's doing and return. That's very context-dependent, of course. It's common to implement your code as a class that has a 'close' method that knows what to do. For instance,
if the thread is waiting on a subprocess, the close could kill the subprocess.
If the thread does a sleep, you could use an event instead - the close would set the event.
If your thread is waiting on a queue, you could have a special method that tells it to return.
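As a sketch of the event-based case (the Worker class and its timings are illustrative, not from the question): a close() method sets a threading.Event, and the worker loop waits on the event instead of calling time.sleep(), so it wakes up and returns as soon as close() is called.

```python
import threading

class Worker:
    """Runs a loop until close() is called."""
    def __init__(self):
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            # do one small unit of work here, then wait instead of sleeping;
            # wait() returns early the moment the event is set
            self._stop.wait(1.0)

    def close(self):
        self._stop.set()    # tell the loop to stop
        self._thread.join() # wait for it to return

w = Worker()
w.close()  # returns promptly instead of waiting out a sleep
```

The same shape works for the queue case: close() puts a sentinel on the queue instead of setting an event.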

Related

Killing processes in ProcessPoolExecutor [duplicate]

This question already has answers here:
Python Multiprocess Pool. How to exit the script when one of the worker process determines no more work needs to be done?
(2 answers)
Closed 2 years ago.
I am using Python's ProcessPoolExecutor to run multiple processes in parallel and process them as any of them finishes. Then I look at their output and as soon as at least one of them gives satisfying answer I want to exit the program.
However, this is not possible since upon calling pool.shutdown(wait=False) I will have to wait for all active tasks in the pool to finish before I can exit my script.
Is there a way to kill all the remaining active children and exit?
Also, is there a better way to stop as soon as at least one child returns the answer we are waiting for?
What you're doing is not quite clear, but executors strike me as the wrong tool entirely.
multiprocessing.Pool seems much more suitable, and allows doing exactly what you're asking for: you can iterate on imap_unordered (or apply_async and poll the results), and once you have what you were looking for just terminate() the pool then join() it.
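A minimal sketch of that approach (the check worker and its "divisible by 7" success condition are invented for illustration): iterate over imap_unordered, break on the first satisfying result, then terminate() and join() the pool so the remaining children are killed immediately.

```python
import multiprocessing

def check(n):
    # hypothetical worker: here "satisfying" just means divisible by 7
    return n, n % 7 == 0

def first_satisfying():
    pool = multiprocessing.Pool(4)
    answer = None
    # imap_unordered yields results in completion order, not submission order
    for n, ok in pool.imap_unordered(check, range(1, 1000)):
        if ok:
            answer = n
            break
    pool.terminate()  # kill the remaining workers instead of waiting for them
    pool.join()
    return answer

if __name__ == '__main__':
    print(first_satisfying())
```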

What is the pythonic way of doing nothing? [duplicate]

This question already has answers here:
python multithreading wait till all threads finished
(9 answers)
Closed 7 years ago.
I need my main program, at some point, to do nothing - forever (there are threads started earlier which do some work).
I was wondering what was the pythonic way to do so. I particularly would like to avoid losing CPU cycles on this nothingness.
My immediate guess was time.sleep(100000), where 100000 is large enough for the life of my program. It works but is not aesthetically pleasing. I checked the CPU load of a Python session running this: not measurable (close to zero).
I also tried
while True:
    pass
It looks nicer but the hit on the CPU is enormous (~25%).
Is there one-- and preferably only one --obvious way to do it?
If you would like to wait for the thread to exit, use thread.join():
join([timeout])
Wait until the thread terminates. This blocks the
calling thread until the thread whose join() method is called
terminates – either normally or through an unhandled exception – or
until the optional timeout occurs.
Example usage:
import threading

def non_daemon():
    pass  # whatever work this thread performs

t = threading.Thread(name='non-daemon', target=non_daemon)
t.start()  # this will not block
t.join()   # this is blocking

Python Thread Wait() Timeout For Queue .join() [duplicate]

This question already has answers here:
Add timeout argument to python's Queue.join()
(4 answers)
Closed 7 years ago.
In Python, I have multiple threads running and I need the main process to wait until they are done, so I did this with the Queue class's .join() method. However, I wanted to implement SIGINT, but its handler would never execute because join() was blocking it (the threads run for at least 5 minutes for what I have them doing). So I subclassed Queue, overrode .join(), and placed a timeout in wait():
class CustomQueue(Queue.Queue):
    # Can't use the stock .join() because it would block any
    # processing of SIGINT until the threads are done. To counter
    # this, wait() is given a timeout, and the global kill_received
    # flag is checked on every wake-up.
    def join(self):
        self.all_tasks_done.acquire()
        try:
            while not kill_received and self.unfinished_tasks:
                self.all_tasks_done.wait(10.0)
        finally:
            self.all_tasks_done.release()
This works beautifully and is perfect for me.
But what I don't understand is the timeout in wait(). For instance, I should not be able to send a SIGINT and have it processed for at least 10 seconds, yet it gets processed in less than 10 seconds. Whatever the timeout is, the SIGINT handler function runs without being blocked. Why is this? I would expect to wait at least 10 seconds for wait() to time out and self.all_tasks_done.release() to run before the SIGINT handler could proceed, rather than the handler running before the wait() times out.
We're missing information here that may be important:
Which version of Python?
Which OS?
It may be important because mixing threads with signals is a cross-platform mess. CPython tries to make some sense of it all, but has had various degrees of success across various Python versions and OSes.
Anyway, the answer to your question may be simple ;-) Queue.all_tasks_done is a threading.Condition, and Condition implements .wait(timeout) (eventually, drilling down) using time.sleep(). As documented,
time.sleep(secs)
Suspend execution for the given number of seconds. ... The actual
suspension time may be less than that requested because any caught
signal will terminate the sleep() following execution of that
signal’s catching routine. ...
By default SIGINT raises KeyboardInterrupt. So if you don't have a handler installed for SIGINT, SIGINT terminates the sleep() early and raises KeyboardInterrupt. If you do have a SIGINT handler installed, SIGINT will still terminate the sleep() early, but what happens after that depends on what your handler does.

How can I kill a thread in python [duplicate]

This question already has answers here:
Is there any way to kill a Thread?
(31 answers)
Closed 6 years ago.
I start a thread using the following code.
t = thread.start_new_thread(myfunction, ())  # start_new_thread requires an args tuple
How can I kill the thread t from another thread. So basically speaking in terms of code, I want to be able to do something like this.
t.kill()
Note that I'm using Python 2.4.
In Python, you simply cannot kill a Thread.
If you do NOT really need a Thread, then instead of the threading package (http://docs.python.org/2/library/threading.html) you can use the multiprocessing package (http://docs.python.org/2/library/multiprocessing.html). There, to kill a process, you can simply call:
yourProcess.terminate() # kill the process!
Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call). Pay attention to use it while using a Queue or a Pipe! (it may corrupt the data in the Queue/Pipe)
Note that multiprocessing.Event and multiprocessing.Semaphore work in exactly the same way as threading.Event and threading.Semaphore respectively. In fact, the former are clones of the latter.
If you REALLY need to use a Thread, there is no way to kill your threads directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
yourThread.daemon = True # set the Thread as a "daemon thread"
The main program will exit when no live non-daemon threads are left. In other words, when your main thread (which is, of course, a non-daemon thread) finishes its operations, the program will exit even if some daemon threads are still working.
Note that it is necessary to set a Thread as daemon before the start() method is called!
Of course you can, and should, use the daemon flag with multiprocessing as well. There, when the main process exits, it attempts to terminate all of its daemonic child processes.
Finally, please note that sys.exit() and os.kill() are not options.
If your thread is busy executing Python code, you have a bigger problem than the inability to kill it. The GIL will prevent any other thread from even running whatever instructions you would use to do the killing. (After a bit of research, I've learned that the interpreter periodically releases the GIL, so the preceding statement is bogus. The remaining comment stands, however.)
Your thread must be written in a cooperative manner. That is, it must periodically check in with a signalling object such as a semaphore, which the main thread can use to instruct the worker thread to voluntarily exit.
while not sema.acquire(False):
    # Do a small portion of work…
    ...
or:
for item in work:
    # Keep working…
    ...
    # Somewhere deep in the bowels…
    if sema.acquire(False):
        thread.exit()
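Here is a runnable version of that cooperative pattern, assuming a threading.Semaphore as the signalling object (the worker and its "work" are invented for the demo): the worker polls sema.acquire(False) between small units of work and returns once the main thread releases the semaphore.

```python
import threading

sema = threading.Semaphore(0)   # released by the main thread to request exit
results = []

def worker():
    # acquire(False) is non-blocking: it returns True once the main
    # thread has released the semaphore, signalling "please exit"
    while not sema.acquire(False):
        results.append(len(results))  # a small portion of work
        if len(results) >= 5:         # keep the demo finite
            break

t = threading.Thread(target=worker)
t.start()
sema.release()  # ask the worker to stop at its next check
t.join()
```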
You can't kill a thread from another thread. You need to signal to the other thread that it should end. And by "signal" I don't mean use the signal function, I mean that you have to arrange for some communication between the threads.

Gracefully Terminating Python Threads

I am trying to write a Unix client program that listens on a socket and stdin, and reads from file descriptors. I assign each of these tasks to an individual thread and have them successfully communicating with the "main" application using synchronized queues and a semaphore. The problem is that when I want to shut down these child threads, they are all blocking on input. Also, the threads cannot register signal handlers, since in Python only the main thread of execution is allowed to do so.
Any suggestions?
There is no good way to work around this, especially when the thread is blocking.
I had a similar issue (Python: How to terminate a blocking thread) and the only way I was able to stop my threads was to close the underlying connection. That caused the blocking thread to raise an exception, which then allowed me to check the stop flag and close.
Example code:
class Example(object):
    def __init__(self):
        self.stop = threading.Event()
        self.connection = Connection()
        self.mythread = Thread(target=self.dowork)
        self.mythread.start()

    def dowork(self):
        while not self.stop.is_set():
            try:
                blockingcall()
            except CommunicationException:
                pass

    def terminate(self):
        self.stop.set()
        self.connection.close()
        self.mythread.join()
Another thing to note is that blocking operations generally offer a timeout. If you have that option, I would consider using it. My last comment is that you could always set the thread to daemonic.
From the pydoc :
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
Also, the threads cannot register signal handlers
Killing threads with signals is potentially horrible, especially in C, and especially if you allocate memory as part of the thread, since that memory won't be freed when the thread dies (it belongs to the heap of the process). There is no garbage collection in C, so once the pointer goes out of scope, the memory remains allocated. So be careful with that one: only do it that way in C if you're going to kill all the threads and end the process, so that the memory is handed back to the OS. Adding and removing threads from a thread pool that way, for example, will give you a memory leak.
The problem is that when I want to shutdown these child threads they are all blocking on input.
Funnily enough I've been fighting with the same thing recently. The solution is literally don't make blocking calls without a timeout. So, for example, what you want ideally is:
def threadfunc(running):
    while running:
        blockingcall(timeout=1)
where running is passed from the controlling thread - I've never used threading but I have used multiprocessing and with this you actually need to pass an Event() object and check is_set(). But you asked for design patterns, that's the basic idea.
Then, when you want this thread to end, you run:
running.clear()
mythread.join()
and your main thread should then allow your client thread to handle its last call, and return, and the whole program folds up nicely.
What do you do if you have a blocking call without a timeout? Use the asynchronous option, and sleep (as in call whatever method you have to suspend the thread for a period of time so you're not spinning) if you need to. There's no other way around it.
See these answers:
Python SocketServer
How to exit a multithreaded program?
Basically, don't block on recv() by using select() with a timeout to check for readability of the socket, and poll a quit flag when select() times out.
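A small sketch of that select()-based pattern (the socketpair demo and the 0.5-second timeout are illustrative): the reader thread only calls recv() when select() reports the socket readable, and polls a quit flag each time select() times out, so it can be shut down without killing it.

```python
import select
import socket
import threading

quit_flag = threading.Event()

def reader(sock):
    while not quit_flag.is_set():
        # select() with a timeout: returns early if the socket is
        # readable, otherwise wakes after 0.5s so the flag gets polled
        readable, _, _ = select.select([sock], [], [], 0.5)
        if readable:
            data = sock.recv(4096)
            if not data:
                break  # peer closed the connection

# demo with a connected socket pair
a, b = socket.socketpair()
t = threading.Thread(target=reader, args=(a,))
t.start()
quit_flag.set()  # request shutdown; the thread exits within one timeout
t.join()
a.close()
b.close()
```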
