From my understanding, there is no such out-of-the-box solution in the Python stdlib.
The solution has to have the following characteristics:
Start the thread as a daemon with a target function and optional arguments
Have polling: the thread should rerun the target every X seconds
Allow easy and graceful stopping: not breaking target execution midway
Expose the ability to stop from outside the program it belongs to; in particular, be able to stop the thread from testing code
Allow thread restarting after stop (or pseudo-restarting with a new thread)
I ran across a couple of suggestions on SO, but would like to aggregate any collected knowledge here (I will contribute to that in a follow-up answer), so that any new, alternative, or additional ideas can be heard.
My proposal uses the threading library, as it is advertised as more high-level than the lower-level thread (_thread in Python 3) module.
A middle ground is this solution, found in another SO answer:

import threading
import time

def main():
    duration = 10  # how long to keep polling, in seconds
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop))
    t.daemon = True
    t.start()
    time.sleep(duration)
    # stop the thread
    t_stop.set()

def thread(arg, stop_event, wait_time=1):
    while not stop_event.is_set():
        # Code to execute here
        stop_event.wait(wait_time)
This, unfortunately, requires us to have the t_stop object handy when testing (in order to stop the thread), and that handle to the object is not designed to be exposed.
A solution would be to add the t and t_stop handles to a top-level or global dictionary somewhere, for the testing code to reach.
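As a minimal sketch of that idea (the POLLING_THREADS registry name and the helper functions are my own invention for illustration), the handles could live in a module-level dictionary that test code imports:

```python
import threading

# Hypothetical module-level registry so test code can reach the handles
POLLING_THREADS = {}

def target(arg, stop_event, interval=0.1):
    while not stop_event.is_set():
        # Code to execute here
        stop_event.wait(interval)

def start_polling(name, arg):
    stop_event = threading.Event()
    t = threading.Thread(target=target, args=(arg, stop_event), daemon=True)
    POLLING_THREADS[name] = (t, stop_event)
    t.start()

def stop_polling(name):
    # Test code only needs the name, not the thread object itself
    t, stop_event = POLLING_THREADS[name]
    stop_event.set()
    t.join()

start_polling('poller', 1)
stop_polling('poller')
```

The registry trades a bit of global state for testability; a fixture can iterate over it in teardown and stop anything left running.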
Another solution (copied and improved from another answer) is the use of the following:

import threading

def main():
    t = DaemonStoppableThread(sleep_time, target=target_function,
                              name='polling_thread',
                              args=(arg1, arg2))
    t.start()

# Stopping code from a test
def stop_polling_threads():
    polling_threads = [thread for thread in threading.enumerate()
                       if 'polling_thread' in thread.name]
    for poll_thread in polling_threads:
        poll_thread.stop()

class DaemonStoppableThread(threading.Thread):
    def __init__(self, sleep_time, target=None, **kwargs):
        # Keep the args ourselves: run() is overridden, so Thread's own
        # target/args handling never fires
        self.target_args = kwargs.pop('args', ())
        super(DaemonStoppableThread, self).__init__(**kwargs)
        self.daemon = True
        self.stop_event = threading.Event()
        self.sleep_time = sleep_time
        self.target = target

    def stop(self):
        self.stop_event.set()

    def stopped(self):
        return self.stop_event.is_set()

    def run(self):
        while not self.stopped():
            if self.target:
                self.target(*self.target_args)
            else:
                raise ValueError('No target function given')
            self.stop_event.wait(self.sleep_time)
As good as these solutions may be, none of them addresses restarting the polling target function.
I avoided the expression "restarting thread", as I understand that Python threads cannot be restarted, so a new thread has to be used to allow for this "pseudo-restarting".
EDIT:
To improve on the above, a solution to start/stop the polling target multiple times:

class ThreadManager(object):
    def __init__(self):
        self.thread = None

    def start_thread(self):
        if not self.thread or not self.thread.is_alive():
            self.thread = DaemonStoppableThread(sleep_time=5, target=some_func,
                                                args=(1, 2))
            self.thread.start()
        return 'thread running'

    def stop_thread(self):
        if self.thread and self.thread.is_alive():
            self.thread.stop()
            return 'thread stopped'
        else:
            return 'dead thread'

    def check_thread(self):
        if self.thread and self.thread.is_alive():
            return 'thread alive'
        else:
            return 'dead thread'
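A self-contained start/stop/restart cycle might look like the sketch below (timings and the simplified PollingThread class are illustrative, not the exact classes above): each call to start_thread() after a stop builds a brand-new thread object, which is the "pseudo-restart".

```python
import threading
import time

class PollingThread(threading.Thread):
    """Simplified stoppable polling thread, for illustration only."""
    def __init__(self, sleep_time, target):
        super(PollingThread, self).__init__(daemon=True)
        self.stop_event = threading.Event()
        self.sleep_time = sleep_time
        self.target = target

    def run(self):
        while not self.stop_event.is_set():
            self.target()
            self.stop_event.wait(self.sleep_time)

    def stop(self):
        self.stop_event.set()

class ThreadManager(object):
    def __init__(self, target):
        self.target = target
        self.thread = None

    def start_thread(self):
        if not self.thread or not self.thread.is_alive():
            # A fresh thread object each time: threads cannot be restarted
            self.thread = PollingThread(sleep_time=0.01, target=self.target)
            self.thread.start()

    def stop_thread(self):
        if self.thread and self.thread.is_alive():
            self.thread.stop()
            self.thread.join()

ticks = []
manager = ThreadManager(lambda: ticks.append(1))
manager.start_thread()
time.sleep(0.05)
manager.stop_thread()
first_run = len(ticks)

manager.start_thread()   # pseudo-restart: a new thread, same manager
time.sleep(0.05)
manager.stop_thread()
```

After the second stop, len(ticks) has grown past first_run, confirming the new thread polled the same target again.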
Related
When using a class Pup for creating stoppable threads that are meant to run in the background until .stop() is called:
What happens when pup.join() is not called after pup.stop()? Will the following result in a leak?
pup = Pup()
pup.start()
time.sleep(5)
pup.stop()
pup2 = Pup()
pup2.start()
time.sleep(5)
pup2.stop()
pup3 = Pup()
pup3.start()
time.sleep(5)
pup3.stop()
Must pup be a daemonized thread since we are running it in the background?
Main code below is borrowed from this SO answer
import time
import threading

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self, *args, **kwargs):
        super(StoppableThread, self).__init__(*args, **kwargs)
        self._stopper = threading.Event()

    def stop(self):
        self._stopper.set()

    def stopped(self):
        return self._stopper.is_set()

class Pup(StoppableThread):
    def __init__(self, i, *args, **kwargs):
        super(Pup, self).__init__(*args, **kwargs)
        self.i = i

    def run(self):
        while True:
            if self.stopped():
                return
            print("Hello, world!", self.i)
            time.sleep(1)

for i in range(100):
    pup = Pup(i)
    pup.start()
    time.sleep(5)
    pup.stop()
The StoppableThread should be joined, because it is just a thin wrapper around threading.Thread that gives you the possibility of setting and checking the stopper flag.
In this case, there must be code that regularly checks this flag. The amount of delay between checks is up to the user of the class.
And given that the thread is supposed to be stopped correctly, you must use join, because if you make the thread a daemon and try to stop it while the application is finishing:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
A leak is possible only if the code responsible for checking the stopper flag and subsequently exiting the thread does not work correctly. Otherwise there are no leaks: even if join is not called, the app will wait for the completion of all non-daemon threads. But using join gives more control over the program flow.
Taking all of the above into consideration, I think that making StoppableThread a daemon is a bad idea.
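To make the recommendation concrete, here is a minimal non-daemon sketch (timings are illustrative) that stops via the Event and then joins for a deterministic shutdown:

```python
import threading
import time

class StoppableThread(threading.Thread):
    """Thread with a stop() flag; run() must check stopped() regularly."""
    def __init__(self, *args, **kwargs):
        super(StoppableThread, self).__init__(*args, **kwargs)
        self._stopper = threading.Event()

    def stop(self):
        self._stopper.set()

    def stopped(self):
        return self._stopper.is_set()

class Pup(StoppableThread):
    def run(self):
        while not self.stopped():
            # Simulate work; wait() doubles as a responsive sleep that
            # wakes immediately when the flag is set
            self._stopper.wait(0.05)

pup = Pup()      # non-daemon: the interpreter waits for it at exit
pup.start()
time.sleep(0.1)
pup.stop()
pup.join()       # explicit join: the thread is guaranteed finished here
```

Using `self._stopper.wait(...)` instead of `time.sleep(...)` means stop() takes effect within one poll interval rather than after a full sleep.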
I am writing a multiprocess program. There are four classes: Main, Worker, Request and Ack. The Main class is the entry point of the program. It creates a sub-process called Worker to do some jobs. The main process puts a Request into a JoinableQueue, and then the Worker gets the request from the queue. When the Worker has finished the request, it puts the Ack into the queue. Part of the code is shown below:
Main:
class Main():
    def __init__(self):
        self.cmd_queue = JoinableQueue()
        self.worker = Worker(self.cmd_queue)

    def call_worker(self, cmd_code):
        if self.cmd_queue.empty():
            request = Request(cmd_code)
            self.cmd_queue.put(request)
            self.cmd_queue.join()
            ack = self.cmd_queue.get()
            self.cmd_queue.task_done()
            return ack.value == 0
        else:
            # TODO: Error handling.
            pass

    def run_worker(self):
        self.worker.start()
Worker:
class Worker(Process):
    def __init__(self, cmd_queue):
        super(Worker, self).__init__()
        self.cmd_queue = cmd_queue
        ...

    def run(self):
        while True:
            ack = Ack(0)
            try:
                request = self.cmd_queue.get()
                if request.cmd_code == ReqCmd.enable_handler:
                    self.enable_handler()
                elif request.cmd_code == ReqCmd.disable_handler:
                    self.disable_handler()
                else:
                    pass
            except Exception:
                ack.value = -1
            finally:
                self.cmd_queue.task_done()
                self.cmd_queue.put(ack)
                self.cmd_queue.join()
It often works normally, but sometimes the Main process gets stuck at self.cmd_queue.join(), and the Worker gets stuck at self.cmd_queue.join(). It is so weird! Does anyone have any ideas? Thanks.
There's nothing weird about the above issue: you shouldn't call the queue's join within a typical single-worker process activity, because
Queue.join()
Blocks until all items in the queue have been gotten and
processed.
Such calls, placed where they are in your current implementation, make the processing pipeline wait.
Usually queue.join() is called in the main (supervisor) thread after initiating/starting all threads/workers.
https://docs.python.org/3/library/queue.html#queue.Queue.join
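One common restructure (a sketch, not the asker's exact code) is to use two queues, a request queue and a separate ack queue, so neither side ever joins inside the request/response cycle. Shown here with threads and queue.Queue for brevity; the same shape applies to multiprocessing with two JoinableQueues or plain Queues:

```python
import queue
import threading

cmd_queue = queue.Queue()   # requests flow main -> worker
ack_queue = queue.Queue()   # acks flow worker -> main

def worker():
    while True:
        request = cmd_queue.get()
        if request is None:              # sentinel to shut down
            break
        # ... handle the request here ...
        ack_queue.put(('ok', request))   # answer on the *other* queue

t = threading.Thread(target=worker, daemon=True)
t.start()

def call_worker(cmd_code):
    cmd_queue.put(cmd_code)
    status, echoed = ack_queue.get()     # blocks until the worker answers
    return status == 'ok'

result = call_worker('enable_handler')
cmd_queue.put(None)                      # tell the worker to exit
t.join()
```

Because requests and acks never share a queue, the main process can never accidentally consume its own request, and neither side blocks on a join() the other side is also waiting on.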
I have my own thread class called TimeBasedLogThread. I would like to fire a function my_function when the TimeBasedLogThread is being killed because the main process is exiting, and I would like to do it from within this object. Is it possible to do so?
Here is my current approach:
class TimeBasedBufferingHandler(MemoryHandler):
    # This is a logging-based handler that buffers logs to send
    # them as emails.
    # The target of this handler is an SMTPHandler.
    def __init__(self, capacity=10, flushLevel=logging.ERROR, target=None,
                 flushOnClose=True, timeout=60):
        MemoryHandler.__init__(self, capacity=capacity, flushLevel=flushLevel,
                               target=target, flushOnClose=flushOnClose)
        self.timeout = timeout  # in seconds (as time.time())

    def flush(self):
        # Send the emails that are younger than timeout, all together
        # in the same email
        pass

class TimeBasedLogThread(Thread):
    def __init__(self, handler, timeout=60):
        Thread.__init__(self)
        self.handler = handler
        self.timeout = timeout

    def run(self):
        while True:
            self.handler.flush()
            time.sleep(self.timeout)

    def my_function(self):
        print("my_function is being called")
        self.handler.flush()

def setup_thread():
    smtp_handler = SMTPHandler()
    new_thread = TimeBasedLogThread(smtp_handler, timeout=10)
    new_thread.start()
In my main thread, I have:
setup_thread()
logging.error("DEBUG_0")
time.sleep(5)
logging.error("DEBUG_1")
time.sleep(5)
logging.error("DEBUG_2")
The time.sleep(5) releases the main thread 5 seconds before the timeout of my other thread expires. So I receive the first two emails, "DEBUG_0" and "DEBUG_1", but not the last one, "DEBUG_2", because the main process exits before the timeout has finished.
I would like to link the TimeBasedLogThread class and the my_function function so that it flushes (sends the emails) before exiting. How can I do that? I looked at the source code of threading but did not understand which method I could use.
Build your function as a Thread too (e.g. AfterDeadThread).
You have two strategies here:
TimeBasedLogThread calls AfterDeadThread before it dies
AfterDeadThread checks whether TimeBasedLogThread is alive or not; if not, it runs some methods
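A sketch of the second strategy (the AfterDeadThread class and the callback names are illustrative assumptions, not a fixed API): the watcher simply joins the watched thread and fires the callback once it has finished.

```python
import threading
import time

events = []

def worker_loop():
    # Stand-in for the logging thread's activity
    for _ in range(3):
        events.append('work')
        time.sleep(0.01)

def my_function():
    # Final flush would go here
    events.append('cleanup')

class AfterDeadThread(threading.Thread):
    """Hypothetical watcher: waits for another thread to end, then
    fires the on_dead callback."""
    def __init__(self, watched, on_dead):
        super(AfterDeadThread, self).__init__()
        self.watched = watched
        self.on_dead = on_dead

    def run(self):
        self.watched.join()   # blocks until the watched thread finishes
        self.on_dead()

worker = threading.Thread(target=worker_loop)
watcher = AfterDeadThread(worker, my_function)
worker.start()
watcher.start()
watcher.join()
```

Joining instead of polling `is_alive()` in a loop keeps the watcher idle until the exact moment the watched thread dies.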
Extend the run() method (representing the thread's activity) to fire the on_terminate handler passed to the custom thread's constructor as a keyword argument.
On a slightly changed custom thread class (for demonstration):
from threading import Thread
import time, random

class TimeBasedLogThread(Thread):
    def __init__(self, handler, timeout=2, on_terminate=None):
        Thread.__init__(self)
        self.handler = handler
        self.timeout = timeout
        self.terminate_handler = on_terminate

    def run(self):
        while True:
            num = self.handler()
            if num > 5:
                break
            time.sleep(self.timeout)
            print(num)
        if self.terminate_handler:
            self.terminate_handler()

def my_term_function():
    print("my_function is being called")

f = lambda: random.randint(3, 10)
tlog_thread = TimeBasedLogThread(f, on_terminate=my_term_function)
tlog_thread.start()
tlog_thread.join()
Sample output:
3
4
5
4
5
my_function is being called
I have a producer thread that produces data from a serial connection and puts it into multiple queues that will be used by different consumer threads. However, I'd like to be able to add additional queues (additional consumers) from the main thread after the producer thread has already started running.
I.e., in the code below, how could I add a Queue to listOfQueues from the main thread while this thread is running? Can I add a method such as addQueue(newQueue) to this class that appends to its listOfQueues? This doesn't seem likely, as the thread will be in the run method. Can I create some sort of Event similar to the stop event?
class ProducerThread(threading.Thread):
    def __init__(self, listOfQueues):
        super(ProducerThread, self).__init__()
        self.listOfQueues = listOfQueues
        self._stop_event = threading.Event()  # Flag to be set when the thread should stop

    def run(self):
        ser = serial.Serial()  # Some serial connection
        while not self.stopped():
            try:
                bytestring = ser.readline()  # Serial connection or "producer" at some rate
                for q in self.listOfQueues:
                    q.put(bytestring)
            except serial.SerialException:
                continue

    def stop(self):
        '''
        Call this function to stop the thread. Must also use .join() in the main
        thread to fully ensure the thread has completed.
        :return:
        '''
        self._stop_event.set()

    def stopped(self):
        '''
        Call this function to determine if the thread has stopped.
        :return: boolean True or False
        '''
        return self._stop_event.is_set()
Sure, you can simply have an append function that adds to your list. E.g.

def append(self, element):
    self.listOfQueues.append(element)

That will work even after your thread's start() method has been called.
Edit: for non-thread-safe procedures you can use a lock, e.g.:

def unsafe(self, element):
    with self.lock:
        # do stuff
        pass

You would then also need to take the same lock inside your run method, e.g.:

with self.lock:
    for q in self.listOfQueues:
        q.put(bytestring)

Any code acquiring a lock will wait for the lock to be released elsewhere.
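Putting the pieces together, a runnable sketch might look like this (the serial reads are replaced by a counter so the example is self-contained; the lock guards listOfQueues on both the iterating and the appending side):

```python
import queue
import threading

class ProducerThread(threading.Thread):
    def __init__(self):
        super(ProducerThread, self).__init__()
        self.listOfQueues = []
        self.lock = threading.Lock()
        self._stop_event = threading.Event()

    def run(self):
        n = 0
        while not self._stop_event.is_set():
            n += 1                          # stand-in for ser.readline()
            with self.lock:                 # guard iteration against addQueue()
                for q in self.listOfQueues:
                    q.put(n)
            self._stop_event.wait(0.01)

    def addQueue(self, new_queue):
        with self.lock:                     # guard mutation against run()
            self.listOfQueues.append(new_queue)

    def stop(self):
        self._stop_event.set()

producer = ProducerThread()
producer.start()

q1 = queue.Queue()
producer.addQueue(q1)        # added after the thread is already running
first = q1.get(timeout=1)    # the new consumer sees live data immediately

producer.stop()
producer.join()
```

Note that in CPython, list.append is atomic under the GIL, so the simple append alone is safe; the lock matters once you do compound operations (e.g. remove-then-iterate) on the shared list.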
I've made a server based on CherryPy, but I have a repetitive task that takes a long time (more than a minute) to run. This is all fine until I need to shut down the server; then I am waiting forever for the threads to finish.
I was wondering how you'd detect a CherryPy shutdown inside the client thread, so that the thread could abort when the server is shutting down.
I'm after something like this:

class RootServer:
    @cherrypy.expose
    def index(self, **keywords):
        for i in range(0, 1000):
            lengthyprocess(i)
            if server_is_shutting_down():
                return
You can inspect the state directly:

if cherrypy.engine.state != cherrypy.engine.states.STARTED:
    return

Or you can register a listener on the 'stop' channel:

class RootServer:
    def __init__(self):
        cherrypy.engine.subscribe('start', self.start)
        cherrypy.engine.subscribe('stop', self.stop)

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    @cherrypy.expose
    def index(self, **keywords):
        for i in range(0, 1000):
            lengthyprocess(i)
            if not self.running:
                return
The latter is especially helpful if you also want to have the lengthyprocess start (or perform some preparation) when the server starts up, rather than upon a request.