I am writing a Python program that runs tasks taken from a queue concurrently, in order to learn asyncio.
Items will be put onto a queue by interacting with a main thread (within REPL).
Whenever a task is put onto the queue, it should be consumed and executed immediately.
My approach is to kick off a separate thread and pass a queue to the event loop within that thread.
The tasks are running, but only sequentially, and I am not clear on how to run them concurrently. My attempt is as follows:
import asyncio
import time
import queue
import threading


def do_it(task_queue):
    '''Process tasks in the queue until the sentinel value is received'''
    _sentinel = 'STOP'

    def clock():
        return time.strftime("%X")

    async def process(name, total_time):
        status = f'{clock()} {name}_{total_time}:'
        print(status, 'START')
        current_time = time.time()
        end_time = current_time + total_time
        while current_time < end_time:
            print(status, 'processing...')
            await asyncio.sleep(1)
            current_time = time.time()
        print(status, 'DONE.')

    async def main():
        while True:
            item = task_queue.get()
            if item == _sentinel:
                break
            await asyncio.create_task(process(*item))

    print('event loop start')
    asyncio.run(main())
    print('event loop end')


if __name__ == '__main__':
    tasks = queue.Queue()
    th = threading.Thread(target=do_it, args=(tasks,))
    th.start()
    tasks.put(('abc', 5))
    tasks.put(('def', 3))
Any advice pointing me in the direction of running these tasks concurrently would be greatly appreciated!
Thanks
UPDATE
Thank you Frank Yellin and cynthi8! I have reformed main() according to your advice:

- removed await before asyncio.create_task - this fixed the concurrency
- added a wait loop so that main() would not return prematurely
- used the non-blocking mode of Queue.get()
The program now works as expected 👍
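For reference, here is a minimal sketch of the reformed main() along those lines (my own reconstruction from the three bullet points above, with arbitrary sleep intervals, reusing the _sentinel, task_queue, and process names from the question):

async def main():
    while True:
        try:
            item = task_queue.get(block=False)  # non-blocking Queue.get()
        except queue.Empty:
            await asyncio.sleep(0.1)  # yield to the event loop while the queue is empty
            continue
        if item == _sentinel:
            break
        asyncio.create_task(process(*item))  # no await: let it run in the background
    # wait so that main() does not return before the spawned tasks have finished
    while len(asyncio.all_tasks()) > 1:  # any task besides main() itself?
        await asyncio.sleep(0.2)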
UPDATE 2
user4815162342 has offered further improvements; I have annotated his suggestions below.
'''
Starts an auxiliary thread which establishes a queue and consumes tasks within
that queue.

Allows enqueueing of tasks from within __main__ and termination of the aux
thread.
'''
import asyncio
import time
import threading
import functools


def do_it(started):
    '''Process tasks in the queue until the sentinel value is received'''
    _sentinel = 'STOP'

    def clock():
        return time.strftime("%X")

    async def process(name, total_time):
        print(f'{clock()} {name}_{total_time}:', 'Started.')
        current_time = time.time()
        end_time = current_time + total_time
        while current_time < end_time:
            print(f'{clock()} {name}_{total_time}:', 'Processing...')
            await asyncio.sleep(1)
            current_time = time.time()
        print(f'{clock()} {name}_{total_time}:', 'Done.')

    async def main():
        # get_running_loop() gets the running event loop in the current OS
        # thread; expose it to the __main__ thread
        started.loop = asyncio.get_running_loop()
        started.queue = task_queue = asyncio.Queue()
        started.set()
        while True:
            item = await task_queue.get()
            if item == _sentinel:
                # task_done is used to tell join when the work in the queue is
                # actually finished. A queue length of zero does not mean work
                # is complete.
                task_queue.task_done()
                break
            task = asyncio.create_task(process(*item))
            # Add a callback to be run when the Task is done.
            # It indicates that a formerly enqueued task is complete. For each
            # get() used to fetch a task, a subsequent call to task_done()
            # tells the queue that the processing on the task is complete.
            task.add_done_callback(lambda _: task_queue.task_done())
        # keep the loop going until all the work has completed;
        # when the count of unfinished tasks drops to zero, join() unblocks
        await task_queue.join()

    print('event loop start')
    asyncio.run(main())
    print('event loop end')


if __name__ == '__main__':
    # the started Event is used for communication with thread th
    started = threading.Event()
    th = threading.Thread(target=do_it, args=(started,))
    th.start()
    # started.wait() blocks until started.set(), ensuring that the queue and
    # loop variables are available from the event loop thread
    started.wait()
    tasks, loop = started.queue, started.loop
    # call_soon schedules a callback to be called with the given arguments at
    # the next iteration of the event loop; call_soon_threadsafe is required
    # to schedule callbacks from another thread.
    # put_nowait enqueues items in non-blocking fashion, == put(block=False)
    loop.call_soon_threadsafe(tasks.put_nowait, ('abc', 5))
    loop.call_soon_threadsafe(tasks.put_nowait, ('def', 3))
    loop.call_soon_threadsafe(tasks.put_nowait, 'STOP')
As others pointed out, the problem with your code is that it uses a blocking queue, which halts the event loop while waiting for the next item. The problem with the proposed solution, however, is that it introduces latency because it must occasionally sleep to allow other tasks to run. In addition to introducing latency, it prevents the program from ever going to sleep, even when there are no items in the queue.
An alternative is to switch to an asyncio queue, which is designed for use with asyncio. This queue must be created inside the running loop, so you can't pass it to do_it; you must retrieve it. Also, since it's an asyncio primitive, its put method must be invoked through call_soon_threadsafe to ensure that the event loop notices it.
One final issue is that your main() function uses another busy loop to wait for all the tasks to complete. This can be avoided by using Queue.join, which is explicitly designed for this use case.
Here is your code adapted to incorporate all of the above suggestions, with the process function remaining unchanged from your original:
import asyncio
import time
import threading


def do_it(started):
    '''Process tasks in the queue until the sentinel value is received'''
    _sentinel = 'STOP'

    def clock():
        return time.strftime("%X")

    async def process(name, total_time):
        status = f'{clock()} {name}_{total_time}:'
        print(status, 'START')
        current_time = time.time()
        end_time = current_time + total_time
        while current_time < end_time:
            print(status, 'processing...')
            await asyncio.sleep(1)
            current_time = time.time()
        print(status, 'DONE.')

    async def main():
        started.loop = asyncio.get_running_loop()
        started.queue = task_queue = asyncio.Queue()
        started.set()
        while True:
            item = await task_queue.get()
            if item == _sentinel:
                task_queue.task_done()
                break
            task = asyncio.create_task(process(*item))
            task.add_done_callback(lambda _: task_queue.task_done())
        await task_queue.join()

    print('event loop start')
    asyncio.run(main())
    print('event loop end')


if __name__ == '__main__':
    started = threading.Event()
    th = threading.Thread(target=do_it, args=(started,))
    th.start()
    started.wait()
    tasks, loop = started.queue, started.loop
    loop.call_soon_threadsafe(tasks.put_nowait, ('abc', 5))
    loop.call_soon_threadsafe(tasks.put_nowait, ('def', 3))
    loop.call_soon_threadsafe(tasks.put_nowait, 'STOP')
Note: an unrelated issue with your code was that it awaited the result of create_task(), which nullified the usefulness of create_task() because it wasn't allowed to run in the background. (It would be equivalent to immediately joining a thread you've just started - you can do it, but it doesn't make much sense.) This issue is fixed both in the above code and in your edit to the question.
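To make the difference concrete, here is a minimal illustration of my own (using the process coroutine from the question):

# Sequential: each task is awaited immediately, so the next one
# cannot start until the previous one has finished.
await asyncio.create_task(process('abc', 5))
await asyncio.create_task(process('def', 3))

# Concurrent: create both tasks first, then await them together.
t1 = asyncio.create_task(process('abc', 5))
t2 = asyncio.create_task(process('def', 3))
await asyncio.gather(t1, t2)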
There are two problems with your code.
First, you should not have the await before the asyncio.create_task. This is possibly what is causing your code to run synchronously.
Then, once you've made your code run asynchronously, you need something after the while loop in main so that the code doesn't return immediately, but instead waits for all the jobs to finish. Another Stack Overflow answer recommends:

while len(asyncio.Task.all_tasks()) > 1:  # Any task besides main() itself?
    await asyncio.sleep(0.2)
Alternatively there are versions of Queue that can keep track of running tasks.
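(Note that asyncio.Task.all_tasks() has since been removed in favour of asyncio.all_tasks().) As for queues that keep track of running tasks, here is a minimal sketch of my own showing how asyncio.Queue counts outstanding work via task_done() and join():

import asyncio

async def demo():
    q = asyncio.Queue()
    q.put_nowait('work')
    item = await q.get()
    print('processing', item)
    q.task_done()   # mark the fetched item as finished
    await q.join()  # unblocks once every fetched item is marked done

asyncio.run(demo())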
As an additional problem:
If a queue.Queue is empty, get() blocks by default and does not return a sentinel string. https://docs.python.org/3/library/queue.html
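A minimal sketch of the non-blocking variant (queue.Empty is the exception raised when nothing is available):

import queue

q = queue.Queue()
try:
    item = q.get(block=False)  # equivalent to q.get_nowait()
except queue.Empty:
    item = None  # nothing queued yet; caller decides how to wait or retry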
The post this question was marked as a duplicate of does not answer my question.
Right now, I have a function f1() which contains a CPU-intensive part and an async-IO-intensive part; therefore f1() itself is an async function. How can I run the whole of f1() with a given timeout? I found that the method provided in that post cannot solve my situation. The following code shows:

RuntimeWarning: coroutine 'f1' was never awaited
  handle = None  # Needed to break cycles when an exception occurs.
import asyncio
import time
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(1)


async def f1():
    print("start sleep")
    time.sleep(3)  # simulate CPU intensive part
    print("end sleep")
    print("start asyncio.sleep")
    await asyncio.sleep(3)  # simulate IO intensive part
    print("end asyncio.sleep")


async def process():
    print("enter process")
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(executor, f1)


async def main():
    print("-----f1-----")
    t1 = time.time()
    try:
        await asyncio.wait_for(process(), timeout=2)
    except:
        pass
    t2 = time.time()
    print(f"f1 cost {(t2 - t1)} s")


if __name__ == '__main__':
    asyncio.run(main())
As noted in a previous post, loop.run_in_executor only works with a normal function, not an async function.
One way to do it is to make process a normal (non-async) function, so it can run in another thread, and have it start its own asyncio loop in that thread to run f1.
Note that starting another loop means you cannot share coroutines and futures between the two loops.
import asyncio
import time
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(1)


async def f1():
    print("start sleep")
    time.sleep(3)  # simulate CPU intensive part
    print("end sleep")
    print("start asyncio.sleep")
    await asyncio.sleep(3)  # simulate IO intensive part
    print("end asyncio.sleep")


def process():
    print("enter process")
    asyncio.run(asyncio.wait_for(f1(), 2))


async def main():
    print("-----f1-----")
    t1 = time.time()
    try:
        loop = asyncio.get_running_loop()
        await loop.run_in_executor(executor, process)
    except:
        pass
    t2 = time.time()
    print(f"f1 cost {(t2 - t1)} s")


if __name__ == '__main__':
    asyncio.run(main())
-----f1-----
enter process
start sleep
end sleep
start asyncio.sleep
f1 cost 3.0047199726104736 s
Keep in mind that cancellation can only happen when f1 returns control to the event loop at an await point; you cannot cancel the CPU-intensive part of the code unless it does something like await asyncio.sleep(0), which yields to the event loop momentarily. This is why time.sleep cannot be cancelled.
To restate the cause of the issue: the time.sleep in f1 blocks the thread, so asyncio.wait_for cannot enforce the timeout. You should remove it or replace it.
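As a sketch of that idea (my own illustration, with arbitrary chunk sizes), the CPU part can be split into blocking chunks that yield in between, giving wait_for a point at which to cancel:

import asyncio
import time

async def f1():
    # CPU-intensive part, split into blocking chunks that yield in between
    for _ in range(30):
        time.sleep(0.1)         # one chunk of blocking work
        await asyncio.sleep(0)  # yield; cancellation can only land here
    await asyncio.sleep(3)      # IO-intensive part

async def main():
    try:
        await asyncio.wait_for(f1(), timeout=2)
    except asyncio.TimeoutError:
        print("f1 cancelled after 2 s")

asyncio.run(main())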
Regarding the RuntimeWarning:

RuntimeWarning: coroutine 'f1' was never awaited
  handle = None  # Needed to break cycles when an exception occurs.

It occurs because loop.run_in_executor expects a non-async function as its second argument.
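For completeness, a minimal sketch of the split that avoids the warning: hand only the blocking part to the executor and await the async part directly (cpu_part is an illustrative name, not from the original code):

import asyncio
import time
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(1)

def cpu_part():
    time.sleep(3)  # a plain function, so run_in_executor accepts it

async def f1():
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(executor, cpu_part)  # not the coroutine itself
    await asyncio.sleep(3)  # the IO part stays in the coroutine

asyncio.run(f1())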
I want to start a new Process (Pricefeed) from my Executor class and then have the Executor class keep running in its own event loop (the shoot method). In my current attempt, the asyncio loop gets blocked on the line p.join(). However, without that line, my code just exits. How do I do this properly?
Note: fh.run() blocks as well.
import asyncio
from multiprocessing import Process, Queue

from cryptofeed import FeedHandler
from cryptofeed.defines import L2_BOOK
from cryptofeed.exchanges.ftx import FTX


class Pricefeed(Process):
    def __init__(self, queue: Queue):
        super().__init__()
        self.coin_symbol = 'SOL-USD'
        self.fut_symbol = 'SOL-USD-PERP'
        self.queue = queue

    async def _book_update(self, feed, symbol, book, timestamp, receipt_timestamp):
        self.queue.put(book)

    def run(self):
        fh = FeedHandler()
        fh.add_feed(FTX(symbols=[self.fut_symbol, self.coin_symbol], channels=[L2_BOOK],
                        callbacks={L2_BOOK: self._book_update}))
        fh.run()


class Executor:
    def __init__(self):
        self.q = Queue()

    async def shoot(self):
        print('in shoot')
        for i in range(5):
            msg = self.q.get()
            print(msg)
            await asyncio.sleep(1)  # do some stuff

    async def run(self):
        asyncio.create_task(self.shoot())
        p = Pricefeed(self.q)
        p.start()
        p.join()


async def main():
    g = Executor()
    await g.run()


if __name__ == '__main__':
    asyncio.run(main())
Since you're using a queue to communicate, this is a somewhat tricky problem. To answer your first question as to why removing join makes the program work: join blocks until the process finishes, and in asyncio you can't do anything blocking in a function marked async or it will freeze the event loop. To do this properly you'll need to run your process with the asyncio event loop's run_in_executor method, which runs things in a process pool and returns an awaitable that is compatible with the asyncio event loop.
Secondly, you'll need to use a multiprocessing Manager, which creates shared state that can be used by multiple processes, to properly share your queue. Managers directly support creation of a shared queue. Using these two bits of knowledge you can adapt your code to something like the following, which works:
import asyncio
import functools
import time
from multiprocessing import Manager
from concurrent.futures import ProcessPoolExecutor


def run_pricefeed(queue):
    i = 0
    while True:  # simulate putting an item on the queue every 250ms
        queue.put(f'test-{i}')
        i += 1
        time.sleep(.25)


class Executor:
    async def shoot(self, queue):
        print('in shoot')
        for i in range(5):
            while not queue.empty():
                msg = queue.get(block=False)
                print(msg)
            await asyncio.sleep(1)  # do some stuff

    async def run(self):
        with ProcessPoolExecutor() as pool:
            with Manager() as manager:
                queue = manager.Queue()
                asyncio.create_task(self.shoot(queue))
                await asyncio.get_running_loop().run_in_executor(
                    pool, functools.partial(run_pricefeed, queue))


async def main():
    g = Executor()
    await g.run()


if __name__ == '__main__':
    asyncio.run(main())
This code has a drawback: you need to empty the queue in a non-blocking fashion from your asyncio process and then wait a while for new items to come in before emptying it again, effectively implementing a polling mechanism. If you don't wait after emptying, you'll wind up with blocking code and freeze the event loop again. This isn't as good as just blocking until the queue has an item in it, but may suit your needs. If possible, I would avoid asyncio here and use multiprocessing entirely, for example by implementing queue processing as a separate process.
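If you do stay with asyncio, one way to avoid the polling (a sketch of my own, not part of the original answer) is to hand the blocking queue.get to the default thread executor, so the coroutine suspends until an item actually arrives:

class Executor:
    async def shoot(self, queue):
        print('in shoot')
        loop = asyncio.get_running_loop()
        for i in range(5):
            # Blocks a worker thread, not the event loop, until an item arrives.
            msg = await loop.run_in_executor(None, queue.get)
            print(msg)
            await asyncio.sleep(1)  # do some stuff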
Goal: run main() consisting of a bunch of asyncio functions between start_time and end_time
import asyncio
import datetime as dt

start_time, end_time = dt.time(9, 29), dt.time(16, 20)
current_time() just keeps writing the current time to the global workspace. This time is used at several different points in the script, not shown here:
async def current_time():
    while True:
        globals()['now'] = dt.datetime.now().replace(microsecond=0)
        await asyncio.sleep(.001)
Another function that does something:
async def balance_check():
    while True:
        # do something here
        await asyncio.sleep(.001)
main() awaits the previous coroutines:
async def main():
    while True:
        # Issue: this is my issue. I am trying to make these
        # coroutines only run between start_time and end_time.
        # Outside this interval, I want the loop to
        # shut down and for the script to stop.
        if start_time < now.time() < end_time:
            await asyncio.wait([
                current_time(),
                balance_check(),
            ])
        else:
            print('loop stopped since {} is outside {} and {}'.format(now, start_time, end_time))
            loop.stop()


loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    loop.run_until_complete(main())
finally:
    loop.close()
Issue: this keeps working even after end_time
The problem is incorrect use of await asyncio.wait([current_time(), balance_check(),]).
Awaiting asyncio.wait() waits for the specified awaitables to complete, i.e. either return a value or raise an exception. Since neither current_time nor balance_check ever returns out of its infinite loop, the execution of main() never gets past await asyncio.wait(...), as this expression waits for both to finish. In turn, the while loop in main() never gets to its second iteration, and loop.stop() has no chance to run.
If the code's intention was to use asyncio.wait() to give coroutines a chance to run, asyncio.wait is not the tool for that. Instead, one can just start the two tasks by calling asyncio.create_task(), and then do nothing. As long as the event loop can run (i.e. it's not blocked with a call to time.sleep() or similar), asyncio will automatically switch between the coroutines, in this case current_time and balance_check at their ~1-millisecond pace. Of course, you will want to regain control by the end_time deadline, so "doing nothing" is best expressed as a single call to asyncio.sleep():
async def main():
    t1 = asyncio.create_task(current_time())
    t2 = asyncio.create_task(balance_check())
    end = dt.datetime.combine(dt.date.today(), end_time)
    now = dt.datetime.now()
    await asyncio.sleep((end - now).total_seconds())
    print('loop stopped since {} is outside {} and {}'.format(now, start_time, end_time))
    t1.cancel()
    t2.cancel()
Note that an explicit loop.stop() is not even necessary, because run_until_complete() will automatically stop the loop once the given coroutine completes. Calling cancel() on the tasks has no practical effect because the loop stops pretty much immediately; it is included to make those tasks complete, so that the garbage collector doesn't warn about destroying tasks that are pending.
As noted in the comments, this code doesn't wait for start_time, but that functionality is easily accomplished by adding another sleep before spawning the tasks.
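A sketch of that addition, to be placed at the top of main() (reusing start_time from the question):

start = dt.datetime.combine(dt.date.today(), start_time)
delay = (start - dt.datetime.now()).total_seconds()
if delay > 0:
    await asyncio.sleep(delay)  # don't spawn the tasks until start_time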
You could use synchronization primitives to do this, specifically events.
import datetime as dt
import asyncio


async def function(start_event, end_event):
    # Waits until we get the start event command.
    await start_event.wait()
    # Run our function until the end_event is set.
    index = 0
    while not end_event.is_set():
        print(f"Function running; index {index}.")
        await asyncio.sleep(2)
        index += 1


async def controller(start_event, end_event):
    # Will usually be this. Commented out for testing purposes. Can be passed by main.
    # Subtraction/addition does not work on dt.time(). Inequality does (> or <).
    # now, start, end = dt.datetime.now().time(), dt.time(17, 24), dt.time(18, 0, 30)
    now = dt.datetime.now()
    start, end = now + dt.timedelta(seconds=-10), now + dt.timedelta(seconds=10)

    # Starts our function.
    print(f"Starting at {start}. Ending in {end - now}. ")
    start_event.set()

    # Keeps checking until we pass the scheduled time.
    while start < now < end:
        print(f"Ending in {end - now}.")
        await asyncio.sleep(2)
        now = dt.datetime.now()

    # Ends our function.
    end_event.set()
    print(f"Function ended at {now}.")


async def main():
    # Creates two independent events.
    # start_event is flagged when controller is ready for function to run (prevents misfires).
    # end_event is flagged when controller sees start < time.now < end.
    start_event = asyncio.Event()
    end_event = asyncio.Event()
    # Starts both functions at the same time.
    await asyncio.gather(
        controller(start_event, end_event), function(start_event, end_event)
    )


if __name__ == "__main__":
    # Starting our process.
    asyncio.run(main())
This method will require both function and controller to take up space in the asyncio loop. However, function will be locked for most of the time, and it will instead be the controller hogging loop resources with its while loop, so keep that in the back of your mind.
I have made an application that schedules a task to run soon without waiting for its result, but I've noticed that asyncio.run_coroutine_threadsafe sometimes skips tasks and runs them in no particular order.
I integrated this with Flask served by uWSGI (processes = 1, threads = 10), so run_coroutine_threadsafe can be called by any of the threads at the same time.
import asyncio
import threading

# get the asyncio event loop
loop = asyncio.get_event_loop()


# define the function that runs the loop in a thread
def InitAsyncio(loop):
    asyncio.set_event_loop(loop)
    loop.run_forever()


# start the asyncio loop in a thread
t = threading.Thread(target=InitAsyncio, args=(loop,))
t.start()


async def ProcessPicture(InputInt):
    try:
        print("This is a book. %d" % InputInt)
        return
    except:
        return


def main():
    for i in range(1000):
        asyncio.run_coroutine_threadsafe(ProcessPicture(i), loop)
How can I ensure that asyncio.run_coroutine_threadsafe never skips a task and runs the tasks in order?
Similar Question (but answer does not work for me): How to cancel long-running subprocesses running using concurrent.futures.ProcessPoolExecutor?
Unlike the question linked above and the solution provided, in my case the computation itself is rather long (CPU bound) and cannot be run in a loop to check if some event has happened.
Reduced version of the code below:
import asyncio
import concurrent.futures as futures
import time


class Simulator:
    def __init__(self):
        self._loop = None
        self._lmz_executor = None
        self._tasks = []
        self._max_execution_time = time.monotonic() + 60
        self._long_running_tasks = []

    def initialise(self):
        # Initialise the main asyncio loop
        self._loop = asyncio.get_event_loop()
        self._loop.set_default_executor(
            futures.ThreadPoolExecutor(max_workers=3))

        # Run separate processes of long computation task
        self._lmz_executor = futures.ProcessPoolExecutor(max_workers=3)

    def run(self):
        self._tasks.extend(
            [self.bot_reasoning_loop(bot_id) for bot_id in [1, 2, 3]]
        )

        try:
            # Gather bot reasoner tasks
            _reasoner_tasks = asyncio.gather(*self._tasks)
            # Send the reasoner tasks to main monitor task
            asyncio.gather(self.sample_main_loop(_reasoner_tasks))
            self._loop.run_forever()
        except KeyboardInterrupt:
            pass
        finally:
            self._loop.close()

    async def sample_main_loop(self, reasoner_tasks):
        """This is the main monitor task"""
        await asyncio.wait_for(reasoner_tasks, None)
        for task in self._long_running_tasks:
            try:
                await asyncio.wait_for(task, 10)
            except asyncio.TimeoutError:
                print("Oops. Some long operation timed out.")
                task.cancel()  # Doesn't cancel and has no effect
                task.set_result(None)  # Doesn't seem to have an effect
        self._lmz_executor.shutdown()
        self._loop.stop()
        print('And now I am done. Yay!')

    async def bot_reasoning_loop(self, bot):
        import math

        _exec_count = 0
        _sleepy_time = 15
        _max_runs = math.floor(self._max_execution_time / _sleepy_time)

        self._long_running_tasks.append(
            self._loop.run_in_executor(
                self._lmz_executor, really_long_process, _sleepy_time))

        while time.monotonic() < self._max_execution_time:
            print("Bot#{}: thinking for {}s. Run {}/{}".format(
                bot, _sleepy_time, _exec_count, _max_runs))
            await asyncio.sleep(_sleepy_time)
            _exec_count += 1

        print("Bot#{} Finished Thinking".format(bot))


def really_long_process(sleepy_time):
    print("I am a really long computation.....")
    _large_val = 9729379273492397293479237492734 ** 344323
    print("I finally computed this large value: {}".format(_large_val))


if __name__ == "__main__":
    sim = Simulator()
    sim.initialise()
    sim.run()
The idea is that there is a main simulation loop that runs and monitors three bot threads. Each of these bot threads performs some reasoning but also starts a really long background process using ProcessPoolExecutor, which may end up running longer than their own threshold/max execution time for reasoning on things.
As you can see in the code above, I attempted to .cancel() these tasks when a timeout occurs. This does not really cancel the actual computation, though, which keeps happening in the background, and the asyncio loop doesn't terminate until all the long-running computations have finished.
How do I terminate such long running CPU-bound computations within a method?
Other similar SO questions, but not necessarily related or helpful:
asyncio: Is it possible to cancel a future been run by an Executor?
How to terminate a single async task in multiprocessing if that single async task exceeds a threshold time in Python
Asynchronous multiprocessing with a worker pool in Python: how to keep going after timeout?
How do I terminate such long running CPU-bound computations within a method?
The approach you tried doesn't work because the futures returned by ProcessPoolExecutor are not cancellable. Although asyncio's run_in_executor tries to propagate the cancellation, it is simply ignored by Future.cancel once the task starts executing.
There is no fundamental reason for that. Unlike threads, processes can be safely terminated, so it would be perfectly possible for ProcessPoolExecutor.submit to return a future whose cancel terminated the corresponding process. Asyncio coroutines have well-defined cancellation semantics and could automatically make use of it. Unfortunately, ProcessPoolExecutor.submit returns a regular concurrent.futures.Future, which assumes the lowest common denominator of the underlying executors, and treats a running future as untouchable.
As a result, to cancel tasks executed in subprocesses, one must circumvent the ProcessPoolExecutor altogether and manage one's own processes. The challenge is how to do this without reimplementing half of multiprocessing. One option offered by the standard library is to (ab)use multiprocessing.Pool for this purpose, because it supports reliable shutdown of worker processes. A CancellablePool could work as follows:
- Instead of spawning a fixed number of processes, spawn a fixed number of 1-worker pools.
- Assign tasks to pools from an asyncio coroutine. If the coroutine is canceled while waiting for the task to finish in the other process, terminate the single-process pool and create a new one.
- Since everything is coordinated from the single asyncio thread, don't worry about race conditions such as accidentally killing a process which has already started executing another task. (This would need to be prevented if one were to support cancellation in ProcessPoolExecutor.)
Here is a sample implementation of that idea:
import asyncio
import multiprocessing


class CancellablePool:
    def __init__(self, max_workers=3):
        self._free = {self._new_pool() for _ in range(max_workers)}
        self._working = set()
        self._change = asyncio.Event()

    def _new_pool(self):
        return multiprocessing.Pool(1)

    async def apply(self, fn, *args):
        """
        Like multiprocessing.Pool.apply_async, but:
         * is an asyncio coroutine
         * terminates the process if cancelled
        """
        while not self._free:
            await self._change.wait()
            self._change.clear()
        pool = usable_pool = self._free.pop()
        self._working.add(pool)

        loop = asyncio.get_event_loop()
        fut = loop.create_future()

        def _on_done(obj):
            loop.call_soon_threadsafe(fut.set_result, obj)

        def _on_err(err):
            loop.call_soon_threadsafe(fut.set_exception, err)

        pool.apply_async(fn, args, callback=_on_done, error_callback=_on_err)

        try:
            return await fut
        except asyncio.CancelledError:
            pool.terminate()
            usable_pool = self._new_pool()
        finally:
            self._working.remove(pool)
            self._free.add(usable_pool)
            self._change.set()

    def shutdown(self):
        for p in self._working | self._free:
            p.terminate()
        self._free.clear()
A minimalistic test case showing cancellation:
def really_long_process():
    print("I am a really long computation.....")
    large_val = 9729379273492397293479237492734 ** 344323
    print("I finally computed this large value: {}".format(large_val))


async def main():
    loop = asyncio.get_event_loop()
    pool = CancellablePool()
    tasks = [loop.create_task(pool.apply(really_long_process))
             for _ in range(5)]
    for t in tasks:
        try:
            await asyncio.wait_for(t, 1)
        except asyncio.TimeoutError:
            print('task timed out and cancelled')
    pool.shutdown()


asyncio.get_event_loop().run_until_complete(main())
Note how the CPU usage never exceeds 3 cores, and how it starts dropping near the end of the test, indicating that the processes are being terminated as expected.
To apply it to the code from the question, make self._lmz_executor an instance of CancellablePool and change self._loop.run_in_executor(...) to self._loop.create_task(self._lmz_executor.apply(...)).
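A sketch of that substitution applied to the question's code (the other parts stay unchanged):

# in Simulator.initialise():
self._lmz_executor = CancellablePool(max_workers=3)

# in Simulator.bot_reasoning_loop():
self._long_running_tasks.append(
    self._loop.create_task(
        self._lmz_executor.apply(really_long_process, _sleepy_time)))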