There is a socket-related function call in my code; that function is from another module and thus out of my control. The problem is that it occasionally blocks for hours, which is totally unacceptable. How can I limit the function's execution time from my code? I guess the solution must utilize another thread.
An improvement on @rik.the.vik's answer would be to use the with statement to give the timeout function some syntactic sugar:
import signal
from contextlib import contextmanager

class TimeoutException(Exception): pass

@contextmanager
def time_limit(seconds):
    def signal_handler(signum, frame):
        raise TimeoutException("Timed out!")
    signal.signal(signal.SIGALRM, signal_handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)

try:
    with time_limit(10):
        long_function_call()
except TimeoutException as e:
    print("Timed out!")
I'm not sure how cross-platform this is (SIGALRM is not available on Windows), but using signals and alarm is a good way of looking at this. With a little work you could make this completely generic and usable in any situation.
http://docs.python.org/library/signal.html
So your code is going to look something like this.
import signal

def signal_handler(signum, frame):
    raise Exception("Timed out!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(10)  # Ten seconds
try:
    long_function_call()
except Exception as msg:
    print("Timed out!")
Here's a Linux/OS X way to limit a function's running time. This is for the case where you don't want to use threads and want your program to wait until the function ends or the time limit expires.
from multiprocessing import Process
from time import sleep

def f(time):
    sleep(time)

def run_with_limited_time(func, args, kwargs, time):
    """Runs a function with time limit

    :param func: The function to run
    :param args: The function's args, given as tuple
    :param kwargs: The function's keywords, given as dict
    :param time: The time limit in seconds
    :return: True if the function ended successfully. False if it was terminated.
    """
    p = Process(target=func, args=args, kwargs=kwargs)
    p.start()
    p.join(time)
    if p.is_alive():
        p.terminate()
        return False
    return True

if __name__ == '__main__':
    print(run_with_limited_time(f, (1.5,), {}, 2.5))  # True
    print(run_with_limited_time(f, (3.5,), {}, 2.5))  # False
I prefer a context manager approach because it allows the execution of multiple Python statements within a with time_limit statement. Because Windows does not have SIGALRM, a more portable and perhaps more straightforward method is to use a Timer:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception):
    def __init__(self, msg=''):
        self.msg = msg

@contextmanager
def time_limit(seconds, msg=''):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        raise TimeoutException("Timed out for operation {}".format(msg))
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()

import time

# ends after 5 seconds
with time_limit(5, 'sleep'):
    for i in range(10):
        time.sleep(1)

# this will actually end after 10 seconds
with time_limit(5, 'sleep'):
    time.sleep(10)
The key technique here is the use of _thread.interrupt_main to interrupt the main thread from the timer thread. One caveat is that the main thread does not always respond quickly to the KeyboardInterrupt raised by the Timer. For example, time.sleep() calls a system function, so the KeyboardInterrupt will be handled only after the sleep call returns.
Here's a simple way of getting the desired effect:
https://pypi.org/project/func-timeout
This saved my life.
And now an example of how it works: let's say you have a huge list of items to process and you are iterating your function over those items. However, for some strange reason, your function gets stuck on item n, without raising an exception. You need the other items to be processed, the more the better. In this case, you can set a timeout for processing each item:
import time
import func_timeout

def my_function(n):
    """Sleep for n seconds and return n squared."""
    print(f'Processing {n}')
    time.sleep(n)
    return n**2

def main_controller(max_wait_time, all_data):
    """
    Feed my_function with a list of items to process (all_data).
    However, if max_wait_time is exceeded, return the item and a fail info.
    """
    res = []
    for data in all_data:
        try:
            my_square = func_timeout.func_timeout(
                max_wait_time, my_function, args=[data]
            )
            res.append((my_square, 'processed'))
        except func_timeout.FunctionTimedOut:
            print('error')
            res.append((data, 'fail'))
            continue
    return res

timeout_time = 2.1  # my time limit
all_data = range(1, 10)  # the data to be processed
res = main_controller(timeout_time, all_data)
print(res)
Doing this from within a signal handler is dangerous: you might be inside an exception handler at the time the exception is raised, and leave things in a broken state. For example,
def function_with_enforced_timeout():
    f = open_temporary_file()
    try:
        ...
    finally:
        here()
        unlink(f.filename)
If the exception is raised in here(), the temporary file will never be deleted.
The solution here is for asynchronous exceptions to be postponed until the code is not inside exception-handling code (an except or finally block), but Python doesn't do that.
Note that this won't interrupt anything while it's executing native code; it'll only interrupt when the function returns, so this may not help in this particular case. (SIGALRM itself might interrupt the blocking call, but socket code typically simply retries after an EINTR.)
Doing this with threads is a better idea, since it's more portable than signals. Since you're starting a worker thread and blocking until it finishes, there are none of the usual concurrency worries. Unfortunately, there's no way to deliver an exception asynchronously to another thread in Python (other thread APIs can do this). It'll also have the same issue with sending an exception during an exception handler, and require the same fix.
You don't have to use threads. You can use another process to do the blocking work, for instance, maybe using the subprocess module. If you want to share data structures between different parts of your program then Twisted is a great library for giving yourself control of this, and I'd recommend it if you care about blocking and expect to have this trouble a lot. The bad news with Twisted is you have to rewrite your code to avoid any blocking, and there is a fair learning curve.
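For illustration, here's a minimal sketch of the subprocess route, assuming the blocking call has been wrapped in a hypothetical script named blocking_worker.py:

import subprocess

# Run the blocking work in a child process and kill it after 10 seconds.
# "blocking_worker.py" is a hypothetical stand-in for whatever wraps the call.
try:
    result = subprocess.run(
        ["python", "blocking_worker.py"],
        capture_output=True,
        timeout=10,
    )
    print(result.stdout.decode())
except subprocess.TimeoutExpired:
    print("Timed out!")

On timeout, subprocess.run kills the child process before raising TimeoutExpired, so nothing is left running.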
You can use threads to avoid blocking, but I'd regard this as a last resort, since it exposes you to a whole world of pain. Read a good book on concurrency before even thinking about using threads in production, e.g. Jean Bacon's "Concurrent Systems". I work with a bunch of people who do really cool high performance stuff with threads, and we don't introduce threads into projects unless we really need them.
The only "safe" way to do this, in any language, is to use a secondary process to do that timeout-thing, otherwise you need to build your code in such a way that it will time out safely by itself, for instance by checking the time elapsed in a loop or similar. If changing the method isn't an option, a thread will not suffice.
Why? Because you're risking leaving things in a bad state when you do. If the thread is simply killed mid-method, locks being held, etc. will just be held, and cannot be released.
So look at the process way, do not look at the thread way.
I would usually prefer using a contextmanager as suggested by @josh-lee.
But in case someone is interested in having this implemented as a decorator, here's an alternative.
Here's how it would look:
import time
from timeout import timeout

class Test(object):
    @timeout(2)
    def test_a(self, foo, bar):
        print(foo)
        time.sleep(1)
        print(bar)
        return 'A Done'

    @timeout(2)
    def test_b(self, foo, bar):
        print(foo)
        time.sleep(3)
        print(bar)
        return 'B Done'

t = Test()
print(t.test_a('python', 'rocks'))
print(t.test_b('timing', 'out'))
And this is the timeout.py module:
import threading

class TimeoutError(Exception):
    pass

class InterruptableThread(threading.Thread):
    def __init__(self, func, *args, **kwargs):
        threading.Thread.__init__(self)
        self._func = func
        self._args = args
        self._kwargs = kwargs
        self._result = None

    def run(self):
        self._result = self._func(*self._args, **self._kwargs)

    @property
    def result(self):
        return self._result

class timeout(object):
    def __init__(self, sec):
        self._sec = sec

    def __call__(self, f):
        def wrapped_f(*args, **kwargs):
            it = InterruptableThread(f, *args, **kwargs)
            it.start()
            it.join(self._sec)
            if not it.is_alive():
                return it.result
            raise TimeoutError('execution expired')
        return wrapped_f
The output:
python
rocks
A Done
timing
Traceback (most recent call last):
...
timeout.TimeoutError: execution expired
out
Notice that even if the TimeoutError is raised, the decorated method will continue to run in a different thread. If you also want this thread to be "stopped", see: Is there any way to kill a Thread in Python?
Using simple decorator
Here's the version I made after studying the answers above. It's pretty straightforward.
import signal
from contextlib import contextmanager
from functools import wraps

class TimeoutException(Exception):
    pass

def function_timeout(seconds: int):
    """Wrapper of Decorator to pass arguments"""
    def decorator(func):
        @contextmanager
        def time_limit(seconds_):
            def signal_handler(signum, frame):  # noqa
                raise TimeoutException(f"Timed out in {seconds_} seconds!")
            signal.signal(signal.SIGALRM, signal_handler)
            signal.alarm(seconds_)
            try:
                yield
            finally:
                signal.alarm(0)

        @wraps(func)
        def wrapper(*args, **kwargs):
            with time_limit(seconds):
                return func(*args, **kwargs)
        return wrapper
    return decorator
How to use?
@function_timeout(seconds=5)
def my_naughty_function():
    while True:
        print("Try to stop me ;-p")
Well of course, don't forget to import the function if it is in a separate file.
Here's a timeout function I think I found via Google, and it works for me.
From:
http://code.activestate.com/recipes/473878/
def timeout(func, args=(), kwargs={}, timeout_duration=1, default=None):
    '''This function will spawn a thread and run the given function
    using the args and kwargs, and return the given default value if
    the timeout_duration is exceeded.
    '''
    import threading

    class InterruptableThread(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            self.result = default

        def run(self):
            try:
                self.result = func(*args, **kwargs)
            except:
                self.result = default

    it = InterruptableThread()
    it.start()
    it.join(timeout_duration)
    # whether or not the thread is still alive, it.result holds either
    # the function's return value or the default
    return it.result
The method from @user2283347 is tested and working, but we want to get rid of the traceback messages. Using the pass trick from Remove traceback in Python on Ctrl-C, the modified code is:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception): pass

@contextmanager
def time_limit(seconds):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        pass
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()
def timeout_svm_score(i):
    # from sklearn import svm
    # import numpy as np
    # from IPython.core.display import display
    # %store -r names X Y
    clf = svm.SVC(kernel='linear', C=1).fit(np.nan_to_num(X[[names[i]]]), Y)
    score = clf.score(np.nan_to_num(X[[names[i]]]), Y)
    # scoressvm.append((score, names[i]))
    display((score, names[i]))

%%time
with time_limit(5):
    i = 0
    timeout_svm_score(i)
# Wall time: 14.2 s

%%time
with time_limit(20):
    i = 0
    timeout_svm_score(i)
# (0.04541284403669725, '计划飞行时间')
# Wall time: 16.1 s

%%time
with time_limit(5):
    i = 14
    timeout_svm_score(i)
# Wall time: 5h 43min 41s
We can see that this method may take far longer than requested to interrupt the calculation: we asked for 5 seconds, but it finished after more than 5 hours.
This code works on Windows Server Datacenter 2016 with Python 3.7.3; I didn't test it on Unix. After mixing some answers from Google and Stack Overflow, it finally worked for me like this:
from multiprocessing import Process, Lock
import time
import os

def f(lock, id, sleepTime):
    lock.acquire()
    print("I'm P" + str(id) + " Process ID: " + str(os.getpid()))
    lock.release()
    time.sleep(sleepTime)  # sleeps for some time
    print("Process: " + str(id) + " took this much time:" + str(sleepTime))
    time.sleep(sleepTime)
    print("Process: " + str(id) + " took this much time:" + str(sleepTime * 2))

if __name__ == '__main__':
    timeout_function = float(9)  # 9 seconds for max function time
    print("Main Process ID: " + str(os.getpid()))
    lock = Lock()
    p1 = Process(target=f, args=(lock, 1, 6,))  # Here you can change from 6 to 3, for instance, so you can watch the behavior
    start = time.time()
    print(type(start))
    p1.start()
    if p1.is_alive():
        print("process running a")
    else:
        print("process not running a")
    while p1.is_alive():
        timeout = time.time()
        if timeout - start > timeout_function:
            p1.terminate()
            print("process terminated")
        print("watching, time passed: " + str(timeout - start))
        time.sleep(1)
    if p1.is_alive():
        print("process running b")
    else:
        print("process not running b")
    p1.join()
    if p1.is_alive():
        print("process running c")
    else:
        print("process not running c")
    end = time.time()
    print("I am the main process, the two processes are done")
    print("Time taken:- " + str(end - start) + " secs")  # MainProcess terminates at approx ~ 5 secs.
    time.sleep(5)  # To see if on Task Manager the child process is really being terminated, and it is
    print("finishing")
The main code is from this link:
Create two child process using python(windows)
Then I used .terminate() to kill the child process. You can see that the function f makes two prints, one after 5 seconds and another after 10 seconds. However, with a 7-second sleep and the terminate(), it does not show the last print.
It worked for me, hope it helps!
I created a background thread this way:
def listenReply(self):
    while self.SOCK_LISTENING:
        fromNodeRED = self.nodeRED_sock.recv(1024).decode()
        if fromNodeRED == "closeDoor":
            self.door_closed()

...

self.listenThread = Thread(target=self.listenReply, daemon=True)
self.SOCK_LISTENING = True
self.listenThread.start()
But self.door_closed() does some UI work, so that's no good. How do I call self.door_closed in the main thread instead?
One thing to keep in mind is that each thread is a sequential execution of a single flow of code, starting from the function the thread was started on.
It doesn't make much sense to simply run something on an existing thread, since that thread is already executing something, and doing so would disrupt its current flow.
However, it's quite easy to communicate between threads and it's possible to implement a thread's code such that it simply receives functions/events from other threads which tell it what to do. This is commonly known as an event loop.
For example, your main thread could look something like this:
from queue import Queue

tasks = Queue()

def event_loop():
    while True:
        next_task = tasks.get()
        print('Executing function {} on main thread'.format(next_task))
        next_task()
In your other threads you could ask the main thread to run a function by simply adding it to the tasks queue:
def listenReply(self):
    while self.SOCK_LISTENING:
        fromNodeRED = self.nodeRED_sock.recv(1024).decode()
        if fromNodeRED == "closeDoor":
            tasks.put(door_closed)
You can use threading.Event() and set it whenever you receive "closeDoor" from recv.
For example:
g_should_close_door = threading.Event()

def listenReply(self):
    while self.SOCK_LISTENING:
        fromNodeRED = self.nodeRED_sock.recv(1024).decode()
        if fromNodeRED == "closeDoor":
            g_should_close_door.set()

...

self.listenThread = Thread(target=self.listenReply, daemon=True)
self.SOCK_LISTENING = True
self.listenThread.start()

# in the main thread, e.g. inside its main loop:
if g_should_close_door.is_set():
    door_closed()
    g_should_close_door.clear()
Solved it myself using the signals and slots of PyQt.
class App(QWidget):
    socketSignal = QtCore.pyqtSignal(object)  # must be defined at class level

    # BG THREAD
    def listenReply(self):
        while self.SOCK_LISTENING:
            fromNodeRED = self.nodeRED_sock.recv(1024).decode()
            print(fromNodeRED)
            self.socketSignal.emit(fromNodeRED)

    # ... somewhere in the __init__ of the main thread:
    self.socketSignal.connect(self.executeOnMain)
    self.listenThread = Thread(target=self.listenReply, daemon=True)
    self.SOCK_LISTENING = True
    self.listenThread.start()

    # ...

    def executeOnMain(self, data):
        if data == "closeDoor":
            self.door_closed()  # a method that changes the UI
Works great for me.
I have a single line of encrypted code in the form of a string, and a function that decodes and executes the encrypted code (the code establishes a backdoor connection to another computer in the network, but that isn't very relevant here). Once the encrypted code runs, the program won't proceed to the next line, because the code does not stop (in order to keep the connection alive).
What I want to do is run the code for 10 seconds, then terminate it, and re-run it.
But I am not able to terminate the code because it's encrypted and can't be changed. I thought I could run it on a separate thread and then forcibly terminate it after 10 seconds, but the problem is that all the solutions mentioned on the internet would require changing the encrypted code slightly.
Python does not seem to have a way to forcibly close a running thread after n seconds, which would solve my problem very easily.
The following code demonstrates my problem:
import threading, time

encrypted_code = ")'cba'(tnirp :eurT elihw"

def decrypt_and_execute(encrypted_code):
    exec(encrypted_code[::-1])

def run(encrypted_code, run_time):
    my_thread = threading.Thread(target=decrypt_and_execute, args=(encrypted_code,))
    my_thread.start()
    time.sleep(run_time)
    # CODE TO STOP 'my_thread' without changing 'encrypted_code' NEEDED HERE #

while True:
    threading.Thread(target=run, args=(encrypted_code, 10)).start()
    time.sleep(10)
Above was my attempted solution (the encrypted code here is an infinite while loop that keeps printing abc).
Any code I tried to stop my_thread without closing the whole application did not work.
Maybe the technique of a thread with traces works for you: it is a Thread subclass that you can stop. From https://www.geeksforgeeks.org/python-different-ways-to-kill-a-thread/
Or use subprocesses; they have terminate(). That same post covers that nicely too.
Edit: for some reason, this failed to kill the thread in the case in question. The multiprocessing module solved the issue: killing processes is a reliable way to stop things on all operating systems, and multiprocessing has a spawn start method that gives a Thread-like API for subprocesses and can start new processes with Python code on Windows as well, even without Python installed.
Anyway, for other cases this technique with threading should work too. Threads are rarely actually needed, but sometimes they truly are.
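For reference, a minimal sketch of that process route using the standard multiprocessing module with the spawn start method (the worker function below is a stand-in for the code you cannot stop cooperatively):

import multiprocessing
import time

def worker():
    # stand-in for code that never returns on its own
    while True:
        print('abc')
        time.sleep(1)

if __name__ == '__main__':
    ctx = multiprocessing.get_context('spawn')  # same behavior on Windows and Unix
    p = ctx.Process(target=worker)
    p.start()
    p.join(10)  # let it run for 10 seconds
    if p.is_alive():
        p.terminate()  # unlike threads, processes can be killed reliably
        p.join()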
# Python program using
# traces to kill threads

import sys
import trace
import threading
import time

class thread_with_trace(threading.Thread):
    def __init__(self, *args, **keywords):
        threading.Thread.__init__(self, *args, **keywords)
        self.killed = False

    def start(self):
        self.__run_backup = self.run
        self.run = self.__run
        threading.Thread.start(self)

    def __run(self):
        sys.settrace(self.globaltrace)
        self.__run_backup()
        self.run = self.__run_backup

    def globaltrace(self, frame, event, arg):
        if event == 'call':
            return self.localtrace
        else:
            return None

    def localtrace(self, frame, event, arg):
        if self.killed:
            if event == 'line':
                raise SystemExit()
        return self.localtrace

    def kill(self):
        self.killed = True

def func():
    while True:
        print('thread running')

t1 = thread_with_trace(target=func)
t1.start()
time.sleep(2)
t1.kill()
t1.join()
if not t1.is_alive():
    print('thread killed')
What's the proper way to tell a looping thread to stop looping?
I have a fairly simple program that pings a specified host in a separate threading.Thread class. In this class it sleeps 60 seconds, then runs again until the application quits.
I'd like to implement a 'Stop' button in my wx.Frame to ask the looping thread to stop. It doesn't need to end the thread right away; it can just stop looping once it wakes up.
Here is my threading class (note: I haven't implemented looping yet, but it would likely fall under the run method in PingAssets)
class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset

    def run(self):
        config = controller.getConfig()
        fmt = config['timefmt']
        start_time = datetime.now().strftime(fmt)
        try:
            if onlinecheck.check_status(self.asset):
                status = "online"
            else:
                status = "offline"
        except socket.gaierror:
            status = "an invalid asset tag."
        msg = ("{}: {} is {}. \n".format(start_time, self.asset, status))
        wx.CallAfter(self.window.Logger, msg)
And in my wxPython Frame I have this function called from a Start button:
def CheckAsset(self, asset):
    self.count += 1
    thread = PingAssets(self.count, asset, self)
    self.threads.append(thread)
    thread.start()
Threaded stoppable function
Instead of subclassing threading.Thread, one can modify the function to allow stopping by a flag.
We need an object, accessible to the running function, on which we set the flag to stop running.
We can use the threading.current_thread() object.
import threading
import time

def doit(arg):
    t = threading.current_thread()
    while getattr(t, "do_run", True):
        print("working on %s" % arg)
        time.sleep(1)
    print("Stopping as you wish.")

def main():
    t = threading.Thread(target=doit, args=("task",))
    t.start()
    time.sleep(5)
    t.do_run = False

if __name__ == "__main__":
    main()
The trick is that additional properties can be attached to the running thread. The solution builds on these assumptions:
the thread has a property "do_run" with default value True
the driving parent process can set the started thread's "do_run" property to False.
Running the code, we get following output:
$ python stopthread.py
working on task
working on task
working on task
working on task
working on task
Stopping as you wish.
Pill to kill - using Event
Another alternative is to pass a threading.Event as a function argument. It is
False by default, but an external process can "set it" (to True), and the function can
learn about it using the wait(timeout) function.
We can wait with zero timeout, but we can also use it as the sleeping timer (used below).
def doit(stop_event, arg):
    while not stop_event.wait(1):
        print("working on %s" % arg)
    print("Stopping as you wish.")

def main():
    pill2kill = threading.Event()
    t = threading.Thread(target=doit, args=(pill2kill, "task"))
    t.start()
    time.sleep(5)
    pill2kill.set()
    t.join()
Edit: in Python 3, stop_event.wait(timeout) returns True if the event is set and False if the timeout elapses, so the loop above works as written; a bare stop_event.wait() with no timeout simply blocks until the event is set. If you prefer, you can poll with stop_event.is_set() instead.
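For clarity, here is a sketch of the is_set() variant, which behaves the same as the loop above:

def doit(stop_event, arg):
    # poll the flag explicitly instead of using wait()'s return value
    while not stop_event.is_set():
        print("working on %s" % arg)
        stop_event.wait(1)  # sleep up to 1 second, waking early if the event is set
    print("Stopping as you wish.")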
Stopping multiple threads with one pill
The advantage of the pill to kill is easier to see if we have to stop multiple threads
at once, as one pill works for all.
The doit function will not change at all; only main handles the threads a bit differently.
def main():
    pill2kill = threading.Event()
    tasks = ["task ONE", "task TWO", "task THREE"]

    def thread_gen(pill2kill, tasks):
        for task in tasks:
            t = threading.Thread(target=doit, args=(pill2kill, task))
            yield t

    threads = list(thread_gen(pill2kill, tasks))
    for thread in threads:
        thread.start()
    time.sleep(5)
    pill2kill.set()
    for thread in threads:
        thread.join()
This has been asked before on Stack. See the following links:
Is there any way to kill a Thread in Python?
Stopping a thread after a certain amount of time
Basically you just need to set up the thread with a stop function that sets a sentinel value that the thread will check. In your case, you'll have something in your loop check the sentinel value to see if it's changed, and if it has, the loop can break and the thread can die. See the sketch below.
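A minimal sketch of that sentinel pattern (class and method names here are illustrative, not from the question):

import threading
import time

class PingThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self._stop_requested = False  # the sentinel value

    def stop(self):
        self._stop_requested = True

    def run(self):
        while not self._stop_requested:
            # ping the host here, then sleep (shortened from 60 s for the example)
            time.sleep(1)

t = PingThread()
t.start()
time.sleep(3)
t.stop()  # ask the loop to end; the thread dies once it wakes up
t.join()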
I read the other questions on Stack Overflow, but I was still a little confused about communicating across classes. Here is how I approached it:
I use a list to hold all my threads in the __init__ method of my wxFrame class: self.threads = []
As recommended in How to stop a looping thread in Python?, I use a signal in my thread class, which is set to True when initializing the threading class.
class PingAssets(threading.Thread):
    def __init__(self, threadNum, asset, window):
        threading.Thread.__init__(self)
        self.threadNum = threadNum
        self.window = window
        self.asset = asset
        self.signal = True

    def run(self):
        while self.signal:
            do_stuff()
            sleep()

and I can stop these threads by iterating over my threads:

def OnStop(self, e):
    for t in self.threads:
        t.signal = False
I had a different approach. I've subclassed the Thread class, and in the constructor I've created an Event object. Then I've written a custom join() method, which first sets this event and then calls the parent's version of itself.
Here is the class I'm using for serial port communication in a wxPython app:
import wx, threading, serial, Events, queue

class PumpThread(threading.Thread):
    def __init__(self, port, queue, parent):
        super(PumpThread, self).__init__()
        self.port = port
        self.queue = queue
        self.parent = parent

        self.serial = serial.Serial()
        self.serial.port = self.port
        self.serial.timeout = 0.5
        self.serial.baudrate = 9600
        self.serial.parity = 'N'

        self.stopRequest = threading.Event()

    def run(self):
        try:
            self.serial.open()
        except Exception as ex:
            print("[ERROR]\tUnable to open port {}".format(self.port))
            print("[ERROR]\t{}".format(ex))
            self.stopRequest.set()
        else:
            print("[INFO]\tListening port {}".format(self.port))
            self.serial.write(b"FLOW?\r")

        while not self.stopRequest.is_set():
            msg = ''
            if not self.queue.empty():
                try:
                    command = self.queue.get()
                    self.serial.write(command)
                except queue.Empty:
                    continue

            while self.serial.inWaiting():
                char = self.serial.read(1).decode()
                if '\r' in char and len(msg) > 1:
                    char = ''
                    #~ print('[DATA]\t{}'.format(msg))
                    event = Events.PumpDataEvent(Events.SERIALRX, wx.ID_ANY, msg)
                    wx.PostEvent(self.parent, event)
                    msg = ''
                    break
                msg += char
        self.serial.close()

    def join(self, timeout=None):
        self.stopRequest.set()
        super(PumpThread, self).join(timeout)

    def SetPort(self, serial):
        self.serial = serial

    def Write(self, msg):
        if self.serial.is_open:
            self.queue.put(msg)
        else:
            print("[ERROR]\tPort {} is not open!".format(self.port))

    def Stop(self):
        if self.is_alive():
            self.join()
The Queue is used for sending messages to the port, and the main loop takes responses back. I've used no serial.readline() method because of the different end-line chars, and I found the usage of the io classes to be too much fuss.
It depends on what you run in that thread.
If it's your code, then you can implement a stop condition (see the other answers).
However, if you want to run someone else's code, then you should fork and start a process. Like this:
import multiprocessing
proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()
Now, whenever you want to stop that process, send it a SIGTERM like this:
proc.terminate()
proc.join()
And it's not slow: fractions of a second.
Enjoy :)
My solution is:
import threading, time

def a():
    t = threading.current_thread()
    while getattr(t, "do_run", True):
        print('Do something')
        time.sleep(1)

def getThreadByName(name):
    threads = threading.enumerate()  # list of all live threads
    for thread in threads:
        if thread.name == name:
            return thread

threading.Thread(target=a, name='228').start()  # init thread
t = getThreadByName('228')  # get thread by name
time.sleep(5)
t.do_run = False  # signal the thread to stop
t.join()
I find it useful to have a class, derived from threading.Thread, to encapsulate my thread functionality. You simply provide your own main loop in an overridden version of run() in this class. Calling start() arranges for the object’s run() method to be invoked in a separate thread.
Inside the main loop, periodically check whether a threading.Event has been set. Such an event is thread-safe.
Inside this class, you have your own join() method that sets the stop event object before calling the join() method of the base class. It can optionally take a time value to pass to the base class's join() method to ensure your thread is terminated in a short amount of time.
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, sleep_time=0.1):
        self._stop_event = threading.Event()
        self._sleep_time = sleep_time
        """call base class constructor"""
        super().__init__()

    def run(self):
        """main control loop"""
        while not self._stop_event.is_set():
            # do work
            print("hi")
            self._stop_event.wait(self._sleep_time)

    def join(self, timeout=None):
        """set stop event and join within a given time period"""
        self._stop_event.set()
        super().join(timeout)

if __name__ == "__main__":
    t = MyThread()
    t.start()
    time.sleep(5)
    t.join(1)  # wait 1s max
Having a small sleep inside the main loop before checking the threading.Event is less CPU intensive than looping continuously. You can have a default sleep time (e.g. 0.1s), but you can also pass the value in the constructor.
Sometimes you don't have control over the running target. In those cases you can use signal.pthread_kill to send a stop signal.
from signal import pthread_kill, SIGTSTP
from threading import Thread
from itertools import count
from time import sleep

def target():
    for num in count():
        print(num)
        sleep(1)

thread = Thread(target=target)
thread.start()
sleep(5)
pthread_kill(thread.ident, SIGTSTP)
result
0
1
2
3
4
[14]+ Stopped
I have a threaded python application with a long-running mainloop in the background thread. This background mainloop is actually a call to pyglet.app.run(), which drives a GUI window and also can be configured to call other code periodically. I need a do_stuff(duration) function to be called at will from the main thread to trigger an animation in the GUI, wait for the animation to stop, and then return. The actual animation must be done in the background thread because the GUI library can't handle being driven by separate threads.
I believe I need to do something like this:
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.max_n_times = 0
        self.total_n_times = 0
        self.paused_ev = threading.Event()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.paused_ev.is_set():
            if self.max_n_times > self.total_n_times:
                self.paused_ev.clear()
        else:
            if self.total_n_times >= self.max_n_times:
                self.paused_ev.set()
        if not self.paused_ev.is_set():
            # do stuff that must execute in the background thread
            self.total_n_times += 1

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.max_n_times += n_times
    sd.paused_ev.wait_for_clear()  # wait_for_clear() does not exist
    sd.paused_ev.wait()
    assert (sd.total_n_times == sd.max_n_times)
EDIT: use max_n_times instead of stop_time to clarify why Thread.join(duration) won't do the trick.
From the documentation for threading.Event:
wait([timeout])
    Block until the internal flag is true. If the internal flag is true on
    entry, return immediately. Otherwise, block until another thread calls
    set() to set the flag to true, or until the optional timeout occurs.
I've found I can get the behavior I'm looking for if I have a pair of events, paused_ev and not_paused_ev, and use not_paused_ev.wait(). I could almost just use Thread.join(duration), except that it needs to return precisely when the background thread actually registers that the time is up. Is there some other synchronization object or other strategy I should be using instead?
I'd also be open to arguments that I'm approaching this whole thing the wrong way, provided they're good arguments.
Hoping I get some revision or additional info from my comment, but I'm kind of wondering if you're not overworking things by subclassing Thread. You can do things like this:
from threading import Thread

class MyWorker(object):
    def __init__(self):
        t = Thread(target=self._do_work, name="Worker Owned Thread")
        t.daemon = True
        t.start()

    def _do_work(self):
        while True:
            # Something going on here, forever if necessary. This thread
            # will go away if the other non-daemon threads terminate, possibly
            # raising an exception depending on this function's body.
            pass
I find this makes more sense when the method you want to run is more appropriately a member function of some other class than it would be as the run method on the thread. Additionally, this saves you from having to encapsulate a bunch of business logic inside a Thread. All IMO, of course.
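Usage is then just construction; a sketch assuming the MyWorker class above (since the thread is a daemon, it exits when the non-daemon threads do):

import time

worker = MyWorker()  # the owned thread starts immediately in the constructor
time.sleep(5)        # the main thread carries on with its own work
# no explicit shutdown is needed: the daemon thread dies with the program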
It appears that your GUI animation thread is using a spin lock in its while True loop. This can be avoided using thread-safe queues. Based on my reading of your question, this approach would be functionally equivalent and more efficient.
I'm omitting some details of your code above which would not change. I'm also assuming here that the run() method you do not control uses the self.stop_time value to do its work; otherwise there is no need for a thread-safe queue.
from queue import Queue
from threading import Event
import time

class StuffDoer:
    def __init__(self, inq, ready):
        self.inq = inq
        self.ready = ready

    def _do_stuff(self):
        self.ready.set()
        self.stop_time = self.inq.get()

GUIqueue = Queue()
control = Event()
sd = StuffDoer(GUIqueue, control)

def do_stuff(duration):
    control.clear()
    GUIqueue.put(time.time() + duration)
    control.wait()
I ended up using a Queue similar to what @wberry suggested, making use of Queue.task_done and Queue.join:
import queue
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True
        self.max_n_times = 0
        self.total_n_times = 0
        self.do_queue = queue.Queue()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.total_n_times >= self.max_n_times:
            try:
                self.max_n_times += self.do_queue.get(block=False)
            except queue.Empty:
                pass
        if self.max_n_times > self.total_n_times:
            # do stuff that must execute in the background thread
            self.total_n_times += 1
            if self.total_n_times >= self.max_n_times:
                self.do_queue.task_done()

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.do_queue.put(n_times)
    sd.do_queue.join()
    assert (sd.total_n_times == sd.max_n_times)
I made a solution based on @g.d.d.c's advice for this question. Here is my code:
threads = []

# initializing aux thread(s) in the main thread ...
t = threading.Thread(target=ThreadF, args=(...))
# t.daemon = True  # I'm not sure whether this is really needed
t.start()
threads.append(t.ident)

# Block main thread until all aux threads have finished
while any(thread.ident in threads for thread in threading.enumerate()):
    time.sleep(10)
Also, you can use Thread.join to block the main thread, which is a better way.
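A sketch of that join-based variant, keeping the Thread objects themselves instead of their idents (ThreadF and its args are placeholders from the snippet above):

threads = []

t = threading.Thread(target=ThreadF, args=(...))
t.start()
threads.append(t)  # keep the Thread object, not just the ident

# Block the main thread until every aux thread has finished
for t in threads:
    t.join()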