Python: concurrent.futures: cancel not possible [duplicate]

I would like to start a blocking function in an Executor using the asyncio call loop.run_in_executor and then cancel it later, but that doesn't seem to be working for me.
Here is the code:
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_func(seconds_to_block):
    for i in range(seconds_to_block):
        print('blocking {}/{}'.format(i, seconds_to_block))
        time.sleep(1)
    print('done blocking {}'.format(seconds_to_block))

@asyncio.coroutine
def non_blocking_func(seconds):
    for i in range(seconds):
        print('yielding {}/{}'.format(i, seconds))
        yield from asyncio.sleep(1)
    print('done non blocking {}'.format(seconds))

@asyncio.coroutine
def main():
    non_blocking_futures = [non_blocking_func(x) for x in range(1, 4)]
    blocking_future = loop.run_in_executor(None, blocking_func, 5)
    print('wait a few seconds!')
    yield from asyncio.sleep(1.5)
    blocking_future.cancel()
    yield from asyncio.wait(non_blocking_futures)

loop = asyncio.get_event_loop()
executor = ThreadPoolExecutor(max_workers=1)
loop.set_default_executor(executor)
asyncio.async(main())
loop.run_forever()
I would expect the code above to only allow the blocking function to output:
blocking 0/5
blocking 1/5
and then show the output of the non-blocking functions. But instead, the blocking future keeps running even after I have cancelled it.
Is it possible? Is there some other way of doing it?
Thanks
Edit: More discussion on running blocking and non-blocking code using asyncio: How to interface blocking and non-blocking code with asyncio

In this case, there is no way to cancel the Future once it has actually started running, because you're relying on the behavior of concurrent.futures.Future, and its docs state the following:
cancel()
Attempt to cancel the call. If the call is currently being executed
and cannot be cancelled then the method will return False, otherwise
the call will be cancelled and the method will return True.
So the only time the cancellation will succeed is if the task is still pending inside the Executor. Now, you're actually dealing with an asyncio.Future wrapped around a concurrent.futures.Future, and in practice the asyncio.Future returned by loop.run_in_executor() will raise a CancelledError if you try to yield from it after you call cancel(), even if the underlying task is already running. But it won't actually cancel the execution of the task inside the Executor.
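To see that limitation in isolation, here is a minimal sketch (not part of the original post) showing that concurrent.futures.Future.cancel() only succeeds while the work item is still waiting in the executor's queue:
import time
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as pool:
    running = pool.submit(time.sleep, 2)   # picked up by the single worker immediately
    queued = pool.submit(time.sleep, 2)    # still waiting behind the first task
    time.sleep(0.1)                        # give the first task a moment to start
    print(running.cancel())   # False: already executing, keeps running to completion
    print(queued.cancel())    # True: still pending, will never run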
If you need to actually cancel the task, you'll need to use a more conventional method of interrupting the work running in the thread. The specifics of how you do that are use-case dependent. For the use-case you presented in the example, you could use a threading.Event:
import threading  # add this to the imports from the question

def blocking_func(seconds_to_block, event):
    for i in range(seconds_to_block):
        if event.is_set():
            return
        print('blocking {}/{}'.format(i, seconds_to_block))
        time.sleep(1)
    print('done blocking {}'.format(seconds_to_block))

...

event = threading.Event()
blocking_future = loop.run_in_executor(None, blocking_func, 5, event)
print('wait a few seconds!')
yield from asyncio.sleep(1.5)
blocking_future.cancel()  # Mark Future as cancelled
event.set()  # Actually interrupt blocking_func

As threads share the memory address space of their process, there is no safe way to terminate a running thread. This is why most programming languages do not allow killing running threads (and there are lots of ugly hacks around this limitation).
Java learned this the hard way.
A solution is to run your function in a separate process instead of a thread and terminate it gracefully.
The Pebble library offers an interface similar to concurrent.futures that supports cancelling running Futures.
from pebble import ProcessPool

def function(foo, bar=0):
    return foo + bar

with ProcessPool() as pool:
    future = pool.schedule(function, args=[1])
    # if running, the container process will be terminated
    # a new process will be started consuming the next task
    future.cancel()
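If you prefer to stay in the standard library, a similar effect can be approximated by running the work in a multiprocessing.Process and terminating it yourself. This is only a rough sketch of the same idea, not the Pebble API, and terminate() is abrupt rather than graceful:
import multiprocessing
import time

def blocking_func(seconds_to_block):
    for i in range(seconds_to_block):
        print('blocking {}/{}'.format(i, seconds_to_block))
        time.sleep(1)
    print('done blocking {}'.format(seconds_to_block))

if __name__ == '__main__':
    proc = multiprocessing.Process(target=blocking_func, args=(5,))
    proc.start()
    time.sleep(1.5)
    proc.terminate()   # unlike a thread, a process can be killed mid-task
    proc.join()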

Related

How can you wait for completion of a callback submitted from another thread?

I have two Python threads that share some state, A and B. At one point, A submits a callback to be run by B on its loop with something like:
# This line is executed by A
loop.call_soon_threadsafe(callback)
After this I want to continue doing something else, but I want to make sure that callback has been run by B before doing so. Is there any way (besides standard threading synchronization primitives) to make A wait for the completion of the callback? I know call_soon_threadsafe returns an asyncio.Handle object that can be used to cancel the callback, but I am not sure whether it can be used for waiting (I still don't know much about asyncio).
In this case, this callback calls loop.close() and cancels the remaining tasks, and after that, in B, after loop.run_forever() there is a loop.close(). So for this use case in particular a thread-safe mechanism that allows me to know from A when the loop has been effectively closed would also work for me - again, not involving a mutex/condition variable/etc.
I know that asyncio is not meant to be thread-safe, with very few exceptions, but I wanted to know if a convenient way to achieve this is provided.
Here is a very small snippet of what I mean in case it helps.
import asyncio
import threading
import time

def thread_A():
    print('Thread A')
    loop = asyncio.new_event_loop()
    threading.Thread(target=thread_B, args=(loop,)).start()
    time.sleep(1)
    handle = loop.call_soon_threadsafe(callback, loop)
    # How do I wait for the callback to complete before continuing?
    print('Thread A out')

def thread_B(loop):
    print('Thread B')
    asyncio.set_event_loop(loop)
    loop.run_forever()
    loop.close()
    print('Thread B out')

def callback(loop):
    print('Stopping loop')
    loop.stop()

thread_A()
I have tried this variation with asyncio.run_coroutine_threadsafe, but it does not work: thread A hangs forever. I am not sure whether I am doing something wrong or whether it is because I am stopping the loop.
import asyncio
import threading
import time

def thread_A():
    global future
    print('Thread A')
    loop = asyncio.new_event_loop()
    threading.Thread(target=thread_B, args=(loop,)).start()
    time.sleep(1)
    future = asyncio.run_coroutine_threadsafe(callback(loop), loop)
    future.result()  # Hangs here
    print('Thread A out')

def thread_B(loop):
    print('Thread B')
    asyncio.set_event_loop(loop)
    loop.run_forever()
    loop.close()
    print('Thread B out')

async def callback(loop):
    print('Stopping loop')
    loop.stop()

thread_A()
Callbacks are set-and-(mostly)-forget. They are not intended for something you need to get a result back from. This is why the handle produced only lets you cancel a callback ('this callback is no longer needed'), nothing more.
If you need to wait for a result from an asyncio-managed coroutine in another thread, use a coroutine and schedule it as a task with asyncio.run_coroutine_threadsafe(); this gives you a Future() instance, which you can then wait on until it is done.
However, stopping the loop with run_coroutine_threadsafe() does require the loop to handle one more round of callbacks than it'll actually be able to run; the Future returned by run_coroutine_threadsafe() would otherwise not be notified of the state change of the task it scheduled. You can remedy this by running asyncio.sleep(0) through loop.run_until_complete() in thread B before closing the loop:
def thread_A():
    # ...
    # when done, schedule the asyncio loop to exit
    future = asyncio.run_coroutine_threadsafe(shutdown_loop(loop), loop)
    future.result()  # wait for the shutdown to complete
    print("Thread A out")

def thread_B(loop):
    print("Thread B")
    asyncio.set_event_loop(loop)
    loop.run_forever()
    # run one last noop task in the loop to clear remaining callbacks
    loop.run_until_complete(asyncio.sleep(0))
    loop.close()
    print("Thread B out")

async def shutdown_loop(loop):
    print("Stopping loop")
    loop.stop()
This is, of course, slightly hacky and depends on the internals of callback management and cross-thread task scheduling not changing. As the default asyncio implementation stands, running a single noop task is plenty to handle several rounds of callbacks that create further callbacks, but alternative loop implementations may handle this differently.
So for shutting down the loop, you may be better off using thread-based coordination:
def thread_A():
    # ...
    callback_event = threading.Event()
    loop.call_soon_threadsafe(callback, loop, callback_event)
    callback_event.wait()  # wait for the shutdown to complete
    print("Thread A out")

def thread_B(loop):
    print("Thread B")
    asyncio.set_event_loop(loop)
    loop.run_forever()
    loop.close()
    print("Thread B out")

def callback(loop, callback_event):
    print("Stopping loop")
    loop.stop()
    callback_event.set()
Is there any way (besides standard threading synchronization primitives) to make A wait for the completion of the callback?
Normally you'd use run_coroutine_threadsafe, as Martijn initially suggested. But your use of loop.stop() makes the callback somewhat specific. Given that, you are probably best off using the standard thread synchronization primitives, which are in this case very straightforward and can be completely decoupled from the callback implementation and the rest of your code. For example:
def submit_and_wait(loop, fn, *args):
    "Submit fn(*args) to loop, and wait until the callback executes."
    done = threading.Event()
    def wrap_fn():
        try:
            fn(*args)
        finally:
            done.set()
    loop.call_soon_threadsafe(wrap_fn)
    done.wait()
Instead of using loop.call_soon_threadsafe(callback), use submit_and_wait(loop, callback). The threading synchronization is there, but completely hidden inside submit_and_wait.
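Applied to the earlier snippet, thread_A would then look roughly like this (the rest of the example stays the same):
def thread_A():
    print('Thread A')
    loop = asyncio.new_event_loop()
    threading.Thread(target=thread_B, args=(loop,)).start()
    time.sleep(1)
    submit_and_wait(loop, callback, loop)  # returns only after callback(loop) has run in B
    print('Thread A out')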

The workers in ThreadPoolExecutor are not really daemon threads

The thing I cannot figure out is that although ThreadPoolExecutor uses daemon workers, they keep running even after the main thread exits.
I can provide a minimal example in Python 3.6.4:
import concurrent.futures
import time

def fn():
    while True:
        time.sleep(5)
        print("Hello")

thread_pool = concurrent.futures.ThreadPoolExecutor()
thread_pool.submit(fn)
while True:
    time.sleep(1)
    print("Wow")
Both the main thread and the worker thread are infinite loops. So if I use KeyboardInterrupt to terminate the main thread, I expect the whole program to terminate too. But actually the worker thread keeps running, even though it is a daemon thread.
The source code of ThreadPoolExecutor confirms that worker threads are daemon threads:
t = threading.Thread(target=_worker,
                     args=(weakref.ref(self, weakref_cb),
                           self._work_queue))
t.daemon = True
t.start()
self._threads.add(t)
Further, if I manually create a daemon thread, it works like a charm:
from threading import Thread
import time

def fn():
    while True:
        time.sleep(5)
        print("Hello")

thread = Thread(target=fn)
thread.daemon = True
thread.start()

while True:
    time.sleep(1)
    print("Wow")
So I really cannot figure out this strange behavior.
Suddenly... I found out why. Reading more of the ThreadPoolExecutor source code:
# Workers are created as daemon threads. This is done to allow the interpreter
# to exit when there are still idle threads in a ThreadPoolExecutor's thread
# pool (i.e. shutdown() was not called). However, allowing workers to die with
# the interpreter has two undesirable properties:
#   - The workers would still be running during interpreter shutdown,
#     meaning that they would fail in unpredictable ways.
#   - The workers could be killed while evaluating a work item, which could
#     be bad if the callable being evaluated has external side-effects e.g.
#     writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads finish.

_threads_queues = weakref.WeakKeyDictionary()
_shutdown = False

def _python_exit():
    global _shutdown
    _shutdown = True
    items = list(_threads_queues.items())
    for t, q in items:
        q.put(None)
    for t, q in items:
        t.join()

atexit.register(_python_exit)
There is an exit handler which will join all unfinished workers...
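If you control the worker function, one way around this is to make it cooperative, so that the exit handler has nothing left to wait for. A sketch of that idea applied to the example above (the stop_event name is mine, not from the original code):
import concurrent.futures
import threading
import time

stop_event = threading.Event()

def fn():
    # wait() returns True as soon as the event is set, so the worker
    # exits promptly instead of sleeping out the full 5-second interval
    while not stop_event.wait(5):
        print("Hello")

thread_pool = concurrent.futures.ThreadPoolExecutor()
thread_pool.submit(fn)
try:
    while True:
        time.sleep(1)
        print("Wow")
except KeyboardInterrupt:
    stop_event.set()                 # ask fn() to return on its own
    thread_pool.shutdown(wait=True)  # now the exit handler has nothing to join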
Here's a way to avoid this problem. Bad design can be beaten by another bad design. People write daemon=True only if they really know that the worker won't damage any objects or files.
In my case, I created a ThreadPoolExecutor with a single worker, and after a single submit I just deleted the newly created thread from the queue so the interpreter won't wait until this thread stops on its own. Notice that worker threads are created after submit, not when the ThreadPoolExecutor is initialized.
import concurrent.futures.thread
from concurrent.futures import ThreadPoolExecutor

...

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(lambda: self._exec_file(args))
del concurrent.futures.thread._threads_queues[list(executor._threads)[0]]
It works in Python 3.8 but may not work in 3.9+ since this code is accessing private variables.
See the working piece of code on github

How to combine python asyncio with threads?

I have successfully built a RESTful microservice with Python asyncio and aiohttp that listens to a POST event to collect realtime events from various feeders.
It then builds an in-memory structure to cache the last 24h of events in a nested defaultdict/deque structure.
Now I would like to periodically checkpoint that structure to disk, preferably using pickle.
Since the memory structure can be >100MB I would like to avoid holding up my incoming event processing for the time it takes to checkpoint the structure.
I'd rather create a snapshot copy (e.g. deepcopy) of the structure and then take my time to write it to disk and repeat on a preset time interval.
I have been searching for examples on how to combine threads (and is a thread even the best solution for this?) and asyncio for that purpose but could not find something that would help me.
Any pointers to get started are much appreciated!
It's pretty simple to delegate a method to a thread or sub-process using BaseEventLoop.run_in_executor:
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

def cpu_bound_operation(x):
    time.sleep(x)  # This is some operation that is CPU-bound

@asyncio.coroutine
def main():
    # Run cpu_bound_operation in the ProcessPoolExecutor
    # This will make your coroutine block, but won't block
    # the event loop; other coroutines can run in meantime.
    yield from loop.run_in_executor(p, cpu_bound_operation, 5)

loop = asyncio.get_event_loop()
p = ProcessPoolExecutor(2)  # Create a ProcessPool with 2 processes
loop.run_until_complete(main())
As for whether to use a ProcessPoolExecutor or ThreadPoolExecutor, that's kind of hard to say; pickling a large object will definitely eat some CPU cycles, which initially would make you think ProcessPoolExecutor is the way to go. However, passing your 100MB object to a Process in the pool would require pickling the instance in your main process, sending the bytes to the child process via IPC, unpickling it in the child, and then pickling it again so you can write it to disk. Given that, my guess is the pickling/unpickling overhead will be large enough that you're better off using a ThreadPoolExecutor, even though you're going to take a performance hit because of the GIL.
That said, it's very simple to test both ways and find out for sure, so you might as well do that.
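A rough way to run that comparison (a sketch; the data structure and file name are placeholders, not from the question):
import asyncio
import pickle
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

data = {i: list(range(100)) for i in range(100000)}  # stand-in for the real cache

def checkpoint(obj):
    with open('checkpoint.pickle', 'wb') as f:
        pickle.dump(obj, f)

async def timed(executor):
    loop = asyncio.get_running_loop()
    start = time.perf_counter()
    await loop.run_in_executor(executor, checkpoint, data)
    return time.perf_counter() - start

async def main():
    with ThreadPoolExecutor() as tp:
        print('threads:  ', await timed(tp))
    with ProcessPoolExecutor() as pp:
        print('processes:', await timed(pp))  # includes the extra IPC pickling cost

if __name__ == '__main__':
    asyncio.run(main())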
I also used run_in_executor, but I found this function kinda gross under most circumstances, since it requires partial() for keyword args and I'm never calling it with anything other than a single executor and the default event loop. So I made a convenience wrapper around it with sensible defaults and automatic keyword argument handling.
from time import sleep
import asyncio as aio

loop = aio.get_event_loop()

class Executor:
    """In most cases, you can just use the 'execute' instance as a
    function, i.e. y = await execute(f, a, b, k=c) => run f(a, b, k=c) in
    the executor, assign result to y. The defaults can be changed, though,
    with your own instantiation of Executor, i.e. execute =
    Executor(nthreads=4)"""

    def __init__(self, loop=loop, nthreads=1):
        from concurrent.futures import ThreadPoolExecutor
        self._ex = ThreadPoolExecutor(nthreads)
        self._loop = loop

    def __call__(self, f, *args, **kw):
        from functools import partial
        return self._loop.run_in_executor(self._ex, partial(f, *args, **kw))

execute = Executor()

...

def cpu_bound_operation(t, alpha=30):
    sleep(t)
    return 20*alpha

async def main():
    y = await execute(cpu_bound_operation, 5, alpha=-2)

loop.run_until_complete(main())
Another alternative is to use loop.call_soon_threadsafe along with an asyncio.Queue as the intermediate channel of communication.
The current documentation for Python 3 also has a section on Developing with asyncio - Concurrency and Multithreading:
import asyncio

# This method represents your blocking code
def blocking(loop, queue):
    import time
    while True:
        loop.call_soon_threadsafe(queue.put_nowait, 'Blocking A')
        time.sleep(2)
        loop.call_soon_threadsafe(queue.put_nowait, 'Blocking B')
        time.sleep(2)

# This method represents your async code
async def nonblocking(queue):
    await asyncio.sleep(1)
    while True:
        queue.put_nowait('Non-blocking A')
        await asyncio.sleep(2)
        queue.put_nowait('Non-blocking B')
        await asyncio.sleep(2)

# The main sets up the queue as the communication channel and synchronizes them
async def main():
    queue = asyncio.Queue()
    loop = asyncio.get_running_loop()
    blocking_fut = loop.run_in_executor(None, blocking, loop, queue)
    nonblocking_task = loop.create_task(nonblocking(queue))
    running = True  # use whatever exit condition
    while running:
        # Get messages from both blocking and non-blocking in parallel
        message = await queue.get()
        # You could send any messages, and do anything you want with them
        print(message)

asyncio.run(main())
How to send asyncio tasks to loop running in other thread may also help you.
If you need a more "powerful" example, check out my Wrapper to launch async tasks from threaded code. It will handle the thread safety part for you (for the most part) and let you do things like this:
# See https://gist.github.com/Lonami/3f79ed774d2e0100ded5b171a47f2caf for the full example

async def async_main(queue):
    # your async code can go here
    while True:
        command = await queue.get()
        if command.id == 'print':
            print('Hello from async!')
        elif command.id == 'double':
            await queue.put(command.data * 2)

with LaunchAsync(async_main) as queue:
    # your threaded code can go here
    queue.put(Command('print'))
    queue.put(Command('double', 7))
    response = queue.get(timeout=1)
    print('The result of doubling 7 is', response)

Simulating Cancellation Tokens in Python Threading

I just wrote a task queue in Python whose job is to limit the number of tasks that are run at one time. This is a little different than Queue.Queue because instead of limiting how many items can be in the queue, it limits how many can be taken out at one time. It still uses an unbounded Queue.Queue to do its job, but it relies on a Semaphore to limit the number of threads:
from Queue import Queue
from threading import BoundedSemaphore, Lock, Thread

class TaskQueue(object):
    """
    Queues tasks to be run in separate threads and limits the number
    concurrently running tasks.
    """

    def __init__(self, limit):
        """Initializes a new instance of a TaskQueue."""
        self.__semaphore = BoundedSemaphore(limit)
        self.__queue = Queue()
        self.__cancelled = False
        self.__lock = Lock()

    def enqueue(self, callback):
        """Indicates that the given callback should be ran."""
        self.__queue.put(callback)

    def start(self):
        """Tells the task queue to start running the queued tasks."""
        thread = Thread(target=self.__process_items)
        thread.start()

    def stop(self):
        self.__cancel()
        # prevent blocking on a semaphore.acquire
        self.__semaphore.release()
        # prevent blocking on a Queue.get
        self.__queue.put(lambda: None)

    def __cancel(self):
        print 'canceling'
        with self.__lock:
            self.__cancelled = True

    def __process_items(self):
        while True:
            # see if the queue has been stopped before blocking on acquire
            if self.__is_canceled():
                break
            self.__semaphore.acquire()
            # see if the queue has been stopped before blocking on get
            if self.__is_canceled():
                break
            callback = self.__queue.get()
            # see if the queue has been stopped before running the task
            if self.__is_canceled():
                break
            def runTask():
                try:
                    callback()
                finally:
                    self.__semaphore.release()
            thread = Thread(target=runTask)
            thread.start()
            self.__queue.task_done()

    def __is_canceled(self):
        with self.__lock:
            return self.__cancelled
The Python interpreter runs forever unless I explicitly stop the task queue. This is a lot more tricky than I thought it would be. If you look at the stop method, you'll see that I set a canceled flag, release the semaphore and put a no-op callback on the queue. The last two parts are necessary because the code could be blocking on the Semaphore or on the Queue. I basically have to force these to go through so that the loop has a chance to break out.
This code works. This class is useful when running a service that is trying to run thousands of tasks in parallel. In order to keep the machine running smoothly and to prevent the OS from screaming about too many active threads, this code will limit the number of threads living at any one time.
I have written a similar chunk of code in C# before. What made that code particular cut 'n' dry was that .NET has something called a CancellationToken that just about every threading class uses. Any time there is a blocking operation, that operation takes an optional token. If the parent task is ever canceled, any child tasks blocking with that token will be immediately canceled, as well. This seems like a much cleaner way to exit than to "fake it" by releasing semaphores or putting values in a queue.
I was wondering if there is an equivalent way of doing this in Python? I definitely want to be using threads instead of something like asynchronous events. I am wondering if there is a way to achieve the same thing using two Queue.Queues where one has a max size and the other doesn't - but I'm still not sure how to handle cancellation.
I think your code can be simplified by using poisoning and Thread.join():
from Queue import Queue
from threading import Thread

poison = object()

class TaskQueue(object):

    def __init__(self, limit):
        def process_items():
            while True:
                callback = self._queue.get()
                if callback is poison:
                    break
                try:
                    callback()
                except:
                    pass
                finally:
                    self._queue.task_done()
        self._workers = [Thread(target=process_items) for _ in range(limit)]
        self._queue = Queue()

    def enqueue(self, callback):
        self._queue.put(callback)

    def start(self):
        for worker in self._workers:
            worker.start()

    def stop(self):
        for worker in self._workers:
            self._queue.put(poison)
        while self._workers:
            self._workers.pop().join()
Untested.
I removed the comments, for brevity.
Also, in this version process_items() is truly private.
BTW: The whole point of the Queue module is to free you from the dreaded locking and event stuff.
You seem to be creating a new thread for each task from the queue. This is wasteful in itself, and also leads you to the problem of how to limit the number of threads.
Instead, a common approach is to create a fixed number of worker threads and let them freely pull tasks from the queue. To cancel the queue, you can clear it and let the workers stay alive in anticipation of future work.
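A minimal sketch of that approach (Python 3 spelling; the names are mine, not from the answer): the workers never exit, and "cancelling" simply drains the queue so nothing further gets started:
import queue
import threading

task_queue = queue.Queue()

def worker():
    while True:
        callback = task_queue.get()
        try:
            callback()
        finally:
            task_queue.task_done()

# daemon threads, so the idle pool does not keep the interpreter alive
workers = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for w in workers:
    w.start()

def cancel_pending():
    # drain whatever has not started yet; already-running tasks finish normally
    while True:
        try:
            task_queue.get_nowait()
            task_queue.task_done()
        except queue.Empty:
            break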
I took Janne Karila's advice and created a thread pool. This eliminated the need for a semaphore. The problem is that if you ever expect the queue to go away, you have to stop the worker threads from running (just a variation of what I did before). The new code is fairly similar:
class TaskQueue(object):
    """
    Queues tasks to be run in separate threads and limits the number
    concurrently running tasks.
    """

    def __init__(self, limit):
        """Initializes a new instance of a TaskQueue."""
        self.__workers = []
        for _ in range(limit):
            worker = Thread(target=self.__process_items)
            self.__workers.append(worker)
        self.__queue = Queue()
        self.__cancelled = False
        self.__lock = Lock()
        self.__event = Event()

    def enqueue(self, callback):
        """Indicates that the given callback should be ran."""
        self.__queue.put(callback)

    def start(self):
        """Tells the task queue to start running the queued tasks."""
        for worker in self.__workers:
            worker.start()

    def stop(self):
        """
        Stops the queue from processing anymore tasks. Any actively running
        tasks will run to completion.
        """
        self.__cancel()
        # prevent blocking on a Queue.get
        for _ in range(len(self.__workers)):
            self.__queue.put(lambda: None)
            self.__event.wait()

    def __cancel(self):
        with self.__lock:
            self.__queue.queue.clear()
            self.__cancelled = True

    def __process_items(self):
        while True:
            callback = self.__queue.get()
            # see if the queue has been stopped before running the task
            if self.__is_canceled():
                break
            try:
                callback()
            except:
                pass
            finally:
                self.__queue.task_done()
        self.__event.set()

    def __is_canceled(self):
        with self.__lock:
            return self.__cancelled
If you look carefully, I had to do some accounting to kill off the workers. I basically wait on an Event as many times as there are workers. I clear the underlying queue to prevent the workers from being cancelled any other way. I also wait after pumping each bogus value into the queue, so only one worker can cancel out at a time.
I've run some tests on this and it appears to be working. It would still be nice to eliminate the need for bogus values.
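If you can require Python 3.13 or newer (much later than this question), queue.Queue.shutdown() removes the need for the bogus values: once the queue is shut down and drained, blocked get() calls raise queue.ShutDown instead of waiting forever. A rough sketch of a worker loop built on that (Python 3 spelling, not the original code):
import queue
import threading

task_queue = queue.Queue()

def worker():
    while True:
        try:
            callback = task_queue.get()   # raises queue.ShutDown after shutdown()
        except queue.ShutDown:
            break
        try:
            callback()
        finally:
            task_queue.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

def stop():
    task_queue.shutdown()   # pending items still drain; then blocked getters raise ShutDown
    for w in workers:
        w.join()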
