Python threading event object - How to notify a specific thread?

I have multiple threads that use an event object to wait with a timeout. If I call set() on the event, this unblocks all of the threads. What would be a good way to unblock a specific thread and leave the other threads waiting?
I've thought about having each thread poll a global variable in a while loop instead of waiting, to signal when the thread should return; however, I'm not sure how that approach could preserve the per-thread timeout without checking timestamps.
import threading
import time

timeoutEvent = threading.Event()

def startTimeout():
    check = timeoutEvent.wait(1)
    if check:
        pass  # set was called
    else:
        pass  # timeout

t1 = threading.Thread(target=startTimeout)
t2 = threading.Thread(target=startTimeout)
t1.start()
t2.start()

time.sleep(0.3)
timeoutEvent.set()
# How to have individual timeoutEvents for specific threads?

You can create an event for each thread, or for a group of threads; just pass it as an argument to them or store it somewhere.
import threading
import time

def startTimeout(event):
    check = event.wait(1)
    if check:
        print('pass')
    else:
        print("didn't pass")

timeoutEvent1 = threading.Event()
timeoutEvent2 = threading.Event()

t1 = threading.Thread(target=startTimeout, args=(timeoutEvent1,))
t2 = threading.Thread(target=startTimeout, args=(timeoutEvent2,))

t1.start()
t2.start()

time.sleep(0.3)
timeoutEvent1.set()

t1.join()
t2.join()
Output:
pass
didn't pass

Related

Is sys.exit(0) a valid way to exit/kill a thread in python?

I'm writing a timer in python. When the timer reaches 0, I want the thread I made to automatically exit.
class Rollgame:
    timer = 0

    def timerf(self, timer):
        self.timer = timer
        while self.timer > 0:
            time.sleep(0.1)
            self.timer -= 0.1
        sys.exit(0)
Is this a valid way to exit a thread? It seems to be working in the context of the program I'm building, however I'm not sure if it's a good way to do it.
If I ever choose to implement this in something like a flask/django app, will this still be valid?
Sorry if the question seems stupid or too simple, I've never worked with threading in python before.
In general, killing threads abruptly is considered bad programming practice. Killing a thread abruptly might leave open a critical resource that must be closed properly. But you might want to kill a thread once some specific time period has passed or some interrupt has been generated. There are various methods by which you can kill a thread in Python.
Set/Reset stop flag:
In order to kill a thread, we can declare a stop flag, which the thread checks occasionally. For example:
# Python program showing
# how to kill threads
# using a set/reset stop flag
import threading
import time

def run():
    while True:
        print('thread running')
        global stop_threads
        if stop_threads:
            break

stop_threads = False
t1 = threading.Thread(target=run)
t1.start()

time.sleep(1)
stop_threads = True
t1.join()
print('thread killed')
In the above code, as soon as the global variable stop_threads is set, the target function run() ends and the thread t1 can be killed with t1.join(). But one may refrain from using a global variable for various reasons. For those situations, a function object can be passed in to provide similar functionality, as shown below:
# Python program killing
# threads using a stop flag
import threading
import time

def run(stop):
    while True:
        print('thread running')
        if stop():
            break

def main():
    stop_threads = False
    t1 = threading.Thread(target=run, args=(lambda: stop_threads,))
    t1.start()

    time.sleep(1)
    stop_threads = True
    t1.join()
    print('thread killed')

main()
Using traces to kill threads:
This method works by installing a trace in each thread. The trace terminates itself on detection of some stimulus or flag, thus killing the associated thread. For example:
# Python program using
# traces to kill threads
import sys
import trace
import threading
import time

class thread_with_trace(threading.Thread):
    def __init__(self, *args, **keywords):
        threading.Thread.__init__(self, *args, **keywords)
        self.killed = False

    def start(self):
        self.__run_backup = self.run
        self.run = self.__run
        threading.Thread.start(self)

    def __run(self):
        sys.settrace(self.globaltrace)
        self.__run_backup()
        self.run = self.__run_backup

    def globaltrace(self, frame, event, arg):
        if event == 'call':
            return self.localtrace
        else:
            return None

    def localtrace(self, frame, event, arg):
        if self.killed:
            if event == 'line':
                raise SystemExit()
        return self.localtrace

    def kill(self):
        self.killed = True

def func():
    while True:
        print('thread running')

t1 = thread_with_trace(target=func)
t1.start()
time.sleep(2)
t1.kill()
t1.join()
if not t1.is_alive():
    print('thread killed')
In this code, start() is slightly modified to set the system trace function using settrace(). The local trace function is defined such that, whenever the kill flag (killed) of the respective thread is set, a SystemExit exception is raised upon execution of the next line of code, which ends the execution of the target function func. Now the thread can be killed with join().
Finally, using the multiprocessing module to kill threads:
The multiprocessing module of Python allows you to spawn processes in much the same way you spawn threads using the threading module, and its interface is similar to that of the threading module. For example, in the code below we create three processes which count from 1 to 9. Now, suppose we wanted to terminate all of them; you could use multiprocessing to do that.
# Python program killing
# a thread using the
# multiprocessing module
import multiprocessing
import time

def func(number):
    for i in range(1, 10):
        time.sleep(0.01)
        print('Processing ' + str(number) + ': prints ' + str(number * i))

# list of all processes, so that they can be killed afterwards
all_processes = []

for i in range(0, 3):
    process = multiprocessing.Process(target=func, args=(i,))
    process.start()
    all_processes.append(process)

# kill all processes after 0.03s
time.sleep(0.03)
for process in all_processes:
    process.terminate()
To sum it up, there are many ways to terminate threads, but I personally wouldn't use sys.exit().

killing a thread without waiting for join

I want to kill a thread in Python. This thread can be stuck in a blocking operation, and join() can't terminate it.
Similar to this:
from threading import Thread
import time

def block():
    while True:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target=block)
    thread.start()
    # kill thread
    # do other stuff
My problem is that the real blocking operation is in another module that isn't mine, so there is no place where I can break out of a loop with a running variable.
The thread will be killed when exiting the main process if you set it up as a daemon:
import sys
import time
from threading import Thread

def block():
    while True:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    sys.exit(0)
Otherwise, just set a flag. I'm using a simplistic example here (you should use some synchronization primitive, not just a plain variable):
from threading import Thread
import time

RUNNING = True

def block():
    global RUNNING
    while RUNNING:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target=block, daemon=True)
    thread.start()
    RUNNING = False  # thread will stop; not killed until the next loop iteration
    # ... continue your stuff here
Use a running variable:
from threading import Thread
import time

running = True

def block():
    global running
    while running:
        print("running")
        time.sleep(1)

if __name__ == "__main__":
    thread = Thread(target=block)
    thread.start()
    running = False
    # do other stuff
I would prefer to wrap it all in a class, but this should work (untested though).
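For reference, here is a minimal sketch of what such a class wrapper might look like (names like StoppableThread are my own, not from the answer above): the stop flag becomes a threading.Event owned by the thread, and run() polls it instead of a global variable.
from threading import Thread, Event

class StoppableThread(Thread):
    # Thread with a stop() method; run() polls the internal event instead of a global flag.
    def __init__(self):
        super(StoppableThread, self).__init__()
        self._stop_event = Event()

    def stop(self):
        self._stop_event.set()

    def run(self):
        while not self._stop_event.is_set():
            print("running")
            # wait() doubles as an interruptible sleep
            self._stop_event.wait(1)

if __name__ == "__main__":
    thread = StoppableThread()
    thread.start()
    # ... do other stuff, then ask the thread to finish
    thread.stop()
    thread.join()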
EDIT
There is a way to asynchronously raise an exception in a separate thread which could be caught by a try: except: block, but it's a dirty dirty hack: https://gist.github.com/liuw/2407154
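For context, the usual form of this trick (not necessarily identical to the linked gist) relies on CPython's PyThreadState_SetAsyncExc called through ctypes. A rough sketch, CPython-only, and it cannot interrupt a thread blocked inside C code:
import ctypes

def async_raise(thread, exc_type=SystemExit):
    # ask CPython to raise exc_type in the given thread at its next bytecode boundary
    tid = ctypes.c_long(thread.ident)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exc_type))
    if res == 0:
        raise ValueError("invalid thread id")
    elif res > 1:
        # something went wrong: clear the pending exception again
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")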
Original post
"I want to kill a thread in python." you can't. Threads are only killed when they're daemons when there are no more non-daemonic threads running from the parent process. Any thread can be asked nicely to terminate itself using standard inter-thread communication methods, but you state that you don't have any chance to interrupt the function you want to kill. This leaves processes.
Processes have more overhead, and are more difficult to pass data to and from, but they do support being killed by sending SIGTERM or SIGKILL.
from multiprocessing import Process, Queue
from time import sleep

def workfunction(*args, **kwargs):
    # any arguments you send to a child process must be picklable by python's pickle module
    sleep(args[0])  # really long computation you might want to kill
    return 'results'  # anything you want to get back from a child process must be picklable by python's pickle module

class daemon_worker(Process):
    def __init__(self, target_func, *args, **kwargs):
        self.return_queue = Queue()
        self.target_func = target_func
        self.args = args
        self.kwargs = kwargs
        super().__init__(daemon=True)
        self.start()

    def run(self):  # called by self.start()
        self.return_queue.put(self.target_func(*self.args, **self.kwargs))

    def get_result(self):  # raises queue.Empty if no result is ready
        return self.return_queue.get()

if __name__ == '__main__':
    # start some work that takes 1 sec:
    worker1 = daemon_worker(workfunction, 1)
    worker1.join(3)  # wait up to 3 sec for the worker to complete
    if not worker1.is_alive():  # if we didn't hit the 3 sec timeout
        print('worker1 got: {}'.format(worker1.get_result()))
    else:
        print('worker1 still running')
        worker1.terminate()
        print('killing worker1')
        sleep(.1)  # calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet
        print('worker1 is alive: {}'.format(worker1.is_alive()))

    # start some work that takes 100 sec:
    worker2 = daemon_worker(workfunction, 100)
    worker2.join(3)  # wait up to 3 sec for the worker to complete
    if not worker2.is_alive():  # if we didn't hit the 3 sec timeout
        print('worker2 got: {}'.format(worker2.get_result()))
    else:
        print('worker2 still running')
        worker2.terminate()
        print('killing worker2')
        sleep(.1)  # calling worker.is_alive() immediately might incur a race condition where it may or may not have shut down yet
        print('worker2 is alive: {}'.format(worker2.is_alive()))

Threading an endless while loop in Python 2

I'm not sure why this does not work. The thread starts as soon as it is defined and seems to not be in an actual thread... Maybe I'm missing something.
import threading
import time

def endless_loop1():
    while True:
        print('EndlessLoop1:' + str(time.time()))
        time.sleep(2)

def endless_loop2():
    while True:
        print('EndlessLoop2:' + str(time.time()))
        time.sleep(1)

print('Here1')
t1 = threading.Thread(name='t1', target=endless_loop1(), daemon=True)
print('Here2')
t2 = threading.Thread(name='t2', target=endless_loop2(), daemon=True)
print('Here3')
t1.start()
print('Here4')
t2.start()
Outputs:
Here1
EndlessLoop1:1446675282.8
EndlessLoop1:1446675284.8
EndlessLoop1:1446675286.81
You need to give target= a callable object.
target=endless_loop1()
Here you're actually calling endless_loop1(), so it gets executed in your main thread right away. What you want to do is:
target=endless_loop1
which passes your Thread the function object so it can call it itself.
Also, in Python 2, daemon isn't an init parameter; you need to set it separately before calling start() (Python 3.3 and later do accept daemon= in the constructor):
t1 = threading.Thread(name='t1', target=endless_loop1)
t1.daemon = True

Mutual exclusion thread locking, with dropping of queued functions upon mutex/lock release, in Python?

This is the problem I have: I'm using Python 2.7, and I have code which runs in a thread, with a critical region that only one thread should execute at a time. That code currently has no mutex mechanism, so I wanted to ask what I could use for my specific use case, which involves "dropping" of "queued" functions. I've tried to simulate that behavior with the following minimal working example:
useThreading = False  # True
if useThreading: from threading import Thread, Lock
else: from multiprocessing import Process, Lock

mymutex = Lock()

import time
tstart = None

def processData(data):
    #~ mymutex.acquire()
    try:
        print('thread {0} [{1:.5f}] Do some stuff'.format(data, time.time()-tstart))
        time.sleep(0.5)
        print('thread {0} [{1:.5f}] 1000'.format(data, time.time()-tstart))
        time.sleep(0.5)
        print('thread {0} [{1:.5f}] done'.format(data, time.time()-tstart))
    finally:
        #~ mymutex.release()
        pass

# main:
tstart = time.time()
for ix in xrange(0, 3):
    if useThreading: t = Thread(target=processData, args=(ix,))
    else: t = Process(target=processData, args=(ix,))
    t.start()
    time.sleep(0.001)
Now, if you run this code, you get a printout like this:
thread 0 [0.00173] Do some stuff
thread 1 [0.00403] Do some stuff
thread 2 [0.00642] Do some stuff
thread 0 [0.50261] 1000
thread 1 [0.50487] 1000
thread 2 [0.50728] 1000
thread 0 [1.00330] done
thread 1 [1.00556] done
thread 2 [1.00793] done
That is to say, the three threads quickly get "queued" one after another (something like 2-3 ms apart). Actually, they don't really get queued; they simply start executing in parallel, 2-3 ms after each other.
Now, if I enable the mymutex.acquire()/.release() commands, I get what would be expected:
thread 0 [0.00174] Do some stuff
thread 0 [0.50263] 1000
thread 0 [1.00327] done
thread 1 [1.00350] Do some stuff
thread 1 [1.50462] 1000
thread 1 [2.00531] done
thread 2 [2.00547] Do some stuff
thread 2 [2.50638] 1000
thread 2 [3.00706] done
Basically, now with locking, the threads don't run in parallel, but they run one after another thanks to the lock - as long as one thread is working, the others will block at the .acquire(). But this is not exactly what I want to achieve, either.
What I want to achieve is this: let's assume that when .acquire() is first triggered by a thread function, it registers an id of a function (say a pointer to it) in a queue. After that, the behavior is basically the same as with the Lock - while the one thread works, the others block at .acquire(). When the first thread is done, it goes in the finally: block - and here, I'd like to check to see how many threads are waiting in the queue; then I'd like to delete/drop all waiting threads except for the very last one - and finally, I'd .release() the lock; meaning that after this, what was the last thread in the queue would execute next. I'd imagine, I would want to write something like the following pseudocode:
...
finally:
    if len(mymutex.queue) > 2:  # more than this instance plus one other waiting:
        while len(mymutex.queue) > 2:
            mymutex.queue.pop(1)  # leave [0]=this instance alone, remove the next element
    # at this point, there should be only queue[0]=this instance, and queue[1]=whatever thread was queued last
    mymutex.release()  # once we release, queue[0] should be gone, and the next in the queue should acquire the mutex/lock
    pass
...
With that, I'd expect a printout like this:
thread 0 [0.00174] Do some stuff
thread 0 [0.50263] 1000
thread 0 [1.00327] done
# here upon lock release, thread 1 would be deleted - and the last one in the queue, thread 2, would acquire the lock next:
thread 2 [1.00350] Do some stuff
thread 2 [1.50462] 1000
thread 2 [2.00531] done
What would be the most straightforward way to achieve this in Python?
Seems like you want a queue-like behaviour, so why not use Queue?
import threading
from Queue import Queue
import time

# threads advertise to this queue when they're waiting
wait_queue = Queue()
# threads get their task from this queue
task_queue = Queue()

def do_stuff():
    print "%s doing stuff" % str(threading.current_thread())
    time.sleep(5)

def queue_thread(sleep_time):
    # advertise current thread waiting
    time.sleep(sleep_time)
    wait_queue.put("waiting")

    # wait for permission to pass
    message = task_queue.get()
    print "%s got task: %s" % (threading.current_thread(), message)

    # unregister current thread waiting
    wait_queue.get()

    if message == "proceed":
        do_stuff()
        # kill size-1 threads waiting
        for _ in range(wait_queue.qsize() - 1):
            task_queue.put("die")
        # release last
        task_queue.put("proceed")

    if message == "die":
        print "%s died without doing stuff" % threading.current_thread()
        pass

t1 = threading.Thread(target=queue_thread, args=(1,))
t2 = threading.Thread(target=queue_thread, args=(2,))
t3 = threading.Thread(target=queue_thread, args=(3,))
t4 = threading.Thread(target=queue_thread, args=(4,))

# allow first thread to pass
task_queue.put("proceed")

t1.start()
t2.start()
t3.start()
t4.start()
Thread 1 arrives first and "acquires" the section; the other threads come later and wait at the queue (advertising that they're waiting). Then, when thread 1 leaves, it gives permission to the last thread in the queue by telling all other threads to die and the last one to proceed.
You can have finer control using different messages, a typical one would be a thread-id in the wait_queue (so you know who is waiting, and the order in which it arrived).
You can probably utilize non-blocking operations (queue.put(block=False) and queue.get(block=False)) in your favour when you're set on what you need.
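As a small illustration of those non-blocking calls (a sketch only; the queue name here is hypothetical): put(block=False) raises Full on a bounded queue and get(block=False) raises Empty when there is nothing to take, so the caller decides what to do instead of waiting.
from Queue import Queue, Full, Empty

task_queue = Queue(maxsize=1)  # bounded, so a non-blocking put can actually fail

try:
    task_queue.put("proceed", block=False)  # return immediately instead of waiting for space
except Full:
    print "queue full, dropping message"

try:
    message = task_queue.get(block=False)   # return immediately instead of waiting for an item
    print "got: %s" % message
except Empty:
    print "nothing waiting"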

Stopping a thread after a certain amount of time

I'm looking to terminate some threads after a certain amount of time. These threads will be running an infinite while loop, during which they can stall for a random, large amount of time. A thread cannot last longer than the time set by the duration variable.
How can I make the threads stop after the length of time set by duration?
def main():
    t1 = threading.Thread(target=thread1, args=1)
    t2 = threading.Thread(target=thread2, args=2)
    time.sleep(duration)
    # the threads must be terminated after this sleep
This will work if you are not blocking.
If you are planning on doing sleeps, it's absolutely imperative that you use the event to do the sleep. If you leverage the event to sleep, then when someone tells you to stop while you're "sleeping", the thread wakes up immediately. If you use time.sleep(), your thread will only stop after it wakes up.
import threading
import time

duration = 2

def main():
    t1_stop = threading.Event()
    t1 = threading.Thread(target=thread1, args=(1, t1_stop))

    t2_stop = threading.Event()
    t2 = threading.Thread(target=thread2, args=(2, t2_stop))

    time.sleep(duration)
    # stops thread t2
    t2_stop.set()

def thread1(arg1, stop_event):
    while not stop_event.is_set():
        stop_event.wait(timeout=5)

def thread2(arg1, stop_event):
    while not stop_event.is_set():
        stop_event.wait(timeout=5)
If you want the threads to stop when your program exits (as implied by your example), then make them daemon threads.
If you want your threads to die on command, then you have to do it by hand. There are various methods, but all involve doing a check in your thread's loop to see if it's time to exit (see Nix's example).
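For the first option, here is a minimal sketch (assuming the worker holds no resources that need orderly cleanup, since daemon threads are killed abruptly at interpreter exit):
import threading
import time

def worker():
    while True:
        print("working")
        time.sleep(1)

# daemon threads are killed automatically when the main program exits
t = threading.Thread(target=worker, daemon=True)
t.start()

time.sleep(3)  # the main program does its work for a while
# no join() needed: the daemon thread dies with the program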
If you want to use a class:
from datetime import datetime, timedelta

class MyThread():
    def __init__(self, name, timeLimit):
        self.name = name
        self.timeLimit = timeLimit

    def run(self):
        # get the start time
        startTime = datetime.now()
        while True:
            # stop if the time limit is reached:
            if (datetime.now() - startTime) > self.timeLimit:
                break
            print('A')

mt = MyThread('aThread', timedelta(microseconds=20000))
mt.run()
An alternative is to use signal.pthread_kill to send a stop signal. While it's not as robust as @Nix's answer (and I don't think it will work on Windows), it works in cases where Events don't (e.g., stopping a Flask server).
test.py
from signal import pthread_kill, SIGTSTP
from threading import Thread
import time

DURATION = 5

def thread1(arg):
    while True:
        print(f"processing {arg} from thread1...")
        time.sleep(1)

def thread2(arg):
    while True:
        print(f"processing {arg} from thread2...")
        time.sleep(1)

if __name__ == "__main__":
    t1 = Thread(target=thread1, args=(1,))
    t2 = Thread(target=thread2, args=(2,))
    t1.start()
    t2.start()

    time.sleep(DURATION)

    # stops all threads
    pthread_kill(t2.ident, SIGTSTP)
result
$ python test.py
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
[19]+ Stopped python test.py
