Let's say I have an async generator like this:
async def event_publisher(connection, queue):
    while True:
        if not await connection.is_disconnected():
            event = await queue.get()
            yield event
        else:
            return
I consume it like this:
published_events = event_publisher(connection, queue)

async for event in published_events:
    # do event processing here
It works just fine; however, when the connection is disconnected and no new event is published, the async for will just wait forever, so ideally I would like to close the generator forcefully like this:
if connection.is_disconnected():
    await published_events.aclose()
But I get the following error:
RuntimeError: aclose(): asynchronous generator is already running
Is there a way to stop processing of an already running generator?
It seems to be related to this issue. Notably:
As shown in
https://gist.github.com/1st1/d9860cbf6fe2e5d243e695809aea674c, it's an
error to close a synchronous generator while it is being iterated.
...
In 3.8, calling "aclose()" can crash with a RuntimeError. It's no longer possible to reliably cancel a running asynchronous generator.
Well, since we can't close a running asynchronous generator directly, let's cancel the __anext__ call that is running it instead.
import asyncio
from contextlib import suppress

async def cancel_gen(agen):
    task = asyncio.create_task(agen.__anext__())
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task
    await agen.aclose()  # probably a good idea,
                         # but if you get errors, try commenting this line out
...
if connection.is_disconnected():
    await cancel_gen(published_events)
I can't test whether this works, since you didn't provide a reproducible example.
You can use a timeout on the queue so that is_disconnected() is polled regularly when there is no item to get:
async def event_publisher(connection, queue):
    while True:
        if not await connection.is_disconnected():
            try:
                event = await asyncio.wait_for(queue.get(), timeout=10.0)
            except asyncio.TimeoutError:
                continue
            yield event
        else:
            return
Alternatively, it is possible to poll with Queue.get_nowait(), as sketched below.
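A minimal sketch of that variant, assuming the same connection/queue objects as in the question (the 0.1-second sleep interval is an arbitrary choice):

async def event_publisher(connection, queue):
    while not await connection.is_disconnected():
        try:
            event = queue.get_nowait()
        except asyncio.QueueEmpty:
            # nothing queued yet; yield control, then re-check the connection
            await asyncio.sleep(0.1)
            continue
        yield event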
Related
Given the following minimal example:
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def async_context():
    try:
        yield
    finally:
        await asyncio.sleep(1)
        print('finalize context')

async def async_gen():
    try:
        yield
    finally:
        await asyncio.sleep(2)
        # will never be called if timeout is larger than in async_context
        print('finalize gen')

async def main():
    async with async_context():
        async for _ in async_gen():
            break

if __name__ == "__main__":
    asyncio.run(main())
I'm breaking while iterating over the async generator, and I want its finally block to complete before my async context manager's finally block runs. In this example "finalize gen" will never be printed, because the program exits before that happens.
Note that I intentionally chose a timeout of 2 in the generator's finally block so that the context manager's finally block has a chance to run first. If I choose 1 for both timeouts, both messages are printed.
Is this a kind of race condition? I expected all finally blocks to complete before the program finishes.
How can I prevent the context manager's finally block from running before the generator's finally block has completed?
For context:
I use playwright to control a chromium browser. The outer context manager provides a page that it closes in the finally block.
I'm using python 3.9.0.
Try this example: https://repl.it/#trixn86/AsyncGeneratorRaceCondition
The async context manager doesn't know anything about the asynchronous generator. Nothing in main knows about the asynchronous generator after you break, in fact. You've given yourself no way to wait for the generator's finalization.
If you want to wait for the generator to close, you need to handle closure explicitly:
async def main():
    async with async_context():
        gen = async_gen()
        try:
            async for _ in gen:
                break
        finally:
            await gen.aclose()
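Because await gen.aclose() runs while async_context() is still open, the generator's finally block is guaranteed to finish before the context manager's finally block starts.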
In Python 3.10, you'll be able to use contextlib.aclosing instead of the try/finally:
async def main():
    async with async_context():
        gen = async_gen()
        async with contextlib.aclosing(gen):
            async for _ in gen:
                break
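On Python 3.9 you can write a minimal stand-in for contextlib.aclosing yourself; a sketch (the real 3.10 version is a small class, but this behaves the same for this purpose):

from contextlib import asynccontextmanager

@asynccontextmanager
async def aclosing(agen):
    try:
        yield agen
    finally:
        await agen.aclose()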
If I have a coroutine that's consuming items from an async generator what is the "best" way to terminate that loop from an external condition?
Consider this,
while not self.shutdown_event.is_set():
    async with self.external_lib_client as client:
        async for message in client:
            if self.shutdown_event.is_set():
                break
            await self.handle(message)
If I set shutdown_event, it will break out of the while loop, but not until the next message has been handled by the async for loop. What is the correct way to structure the async for iterator so that it can short-circuit if a condition has been met between yields?
Is there a standard way to add a Timeout?
One way would be to move the iteration to an async def and use cancellation:
async def iterate(client):
    async for message in client:
        # shield() because we want cancellation to cancel retrieval
        # of the next message, not ongoing handling of a message
        await asyncio.shield(self.handle(message))

async with self.external_lib_client as client:
    iter_task = asyncio.create_task(iterate(client))
    shutdown_task = asyncio.create_task(self.shutdown_event.wait())
    await asyncio.wait([iter_task, shutdown_task],
                       return_when=asyncio.FIRST_COMPLETED)
    if iter_task.done():
        # iteration has completed; access the result to propagate the
        # exception if one was raised
        iter_task.result()
        shutdown_task.cancel()
    else:
        # shutdown was requested, cancel iteration
        iter_task.cancel()
Another way would be to turn shutdown_event into a one-shot async stream and use aiostream to monitor both. That way the for loop gets an object when the shutdown event is signaled and can break out of the loop without bothering to finish waiting for the next message:
# a stream that just yields something (the return value of `wait()`)
# when shutdown_event is set
done_stream = aiostream.stream.just(self.shutdown_event.wait())

async with self.external_lib_client as client, \
        aiostream.stream.merge(done_stream, client).stream() as stream:
    async for message in stream:
        # the merged stream will provide a bogus value (whatever
        # `shutdown_event.wait()` returned) when the event is set,
        # so check that before using `message`:
        if self.shutdown_event.is_set():
            break
        await self.handle(message)
Note: since the code in the question is not runnable, the above examples are untested.
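As for the timeout sub-question: there is no built-in timeout on async for itself, but you can drive the iterator by hand and wrap each step in asyncio.wait_for. A sketch under the question's assumptions (shutdown_event, handle); note that wait_for cancels the pending __anext__() on timeout, which not every async iterator tolerates, so this too is untested against your client:

async def iterate_with_timeout(self, client, timeout=5.0):
    it = client.__aiter__()
    while not self.shutdown_event.is_set():
        try:
            message = await asyncio.wait_for(it.__anext__(), timeout)
        except asyncio.TimeoutError:
            continue  # nothing arrived; re-check shutdown_event and keep waiting
        except StopAsyncIteration:
            break  # the client closed the stream
        await self.handle(message)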
I'm trying to figure out whether it's possible to throw a custom exception into a running asyncio task, similar to what is achieved by Task.cancel(), which schedules a CancelledError to be raised in the underlying coroutine.
I came across Task.get_coro().throw(exc), but calling it seems like opening a big can of worms as we may leave the task in a bad state. Especially considering all the machinery that happens when a task is throwing CancelledError into its coroutine.
Consider the following example:
import asyncio

class Reset(Exception):
    pass

async def infinite():
    while True:
        try:
            print('work')
            await asyncio.sleep(1)
            print('more work')
        except Reset:
            print('reset')
            continue
        except asyncio.CancelledError:
            print('cancel')
            break

async def main():
    infinite_task = asyncio.create_task(infinite())
    await asyncio.sleep(0)  # Allow infinite_task to enter its work loop.
    infinite_task.get_coro().throw(Reset())
    await infinite_task

asyncio.run(main())
## OUTPUT ##
# "work"
# "reset"
# "work"
# hangs forever ... bad :(
Is what I'm trying to do even feasible? It feels as if I shouldn't be manipulating the underlying coroutine like this. Is there a workaround?
There's no way to throw a custom exception into a running task. You shouldn't mess with .throw - it's an implementation detail, and altering it will probably break something.
If you want to pass information (about the reset) into the task, do it through an argument. Here's how that can be implemented:
import asyncio
from contextlib import suppress

async def infinite(need_reset):
    try:
        while True:
            inner_task = asyncio.create_task(inner_job())
            # wrap the event's wait() in a task: asyncio.wait() requires
            # tasks/futures (passing bare coroutines is an error since 3.11)
            reset_task = asyncio.create_task(need_reset.wait())
            await asyncio.wait(
                [reset_task, inner_task],
                return_when=asyncio.FIRST_COMPLETED
            )
            reset_task.cancel()  # no-op if it already finished
            if need_reset.is_set():
                print('reset')
                await cancel(inner_task)
                need_reset.clear()
    except asyncio.CancelledError:
        print('cancel')
        raise  # you should never suppress this, see:
               # https://stackoverflow.com/a/33578893/1113207

async def inner_job():
    print('work')
    await asyncio.sleep(1)
    print('more work')

async def cancel(task):
    # more info: https://stackoverflow.com/a/43810272/1113207
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task

async def main():
    need_reset = asyncio.Event()
    infinite_task = asyncio.create_task(infinite(need_reset))
    await asyncio.sleep(1.5)
    need_reset.set()
    await asyncio.sleep(1.5)
    await cancel(infinite_task)

asyncio.run(main())
Output:
work
more work
work
reset
work
more work
work
cancel
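The key point of this design: rather than injecting an exception into a running coroutine, the work is split into an inner task that the outer loop owns, so the reset signal travels through an asyncio.Event and cancellation stays on asyncio's supported paths.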
Reproducible error
I tried to reproduce the error in an online REPL here. However, it is not exactly the same implementation (and hence behavior) as my real code (where I do async for response in position_stream(), instead of for position in count() in the REPL).
More details on my actual implementation
I define somewhere a coroutine like so:
async def position(self):
    request = telemetry_pb2.SubscribePositionRequest()
    position_stream = self._stub.SubscribePosition(request)
    try:
        async for response in position_stream:
            yield Position.translate_from_rpc(response)
    finally:
        position_stream.cancel()
where position_stream is infinite (or possibly very long lasting). I use it from an example code like this:
async def print_altitude():
    async for position in drone.telemetry.position():
        print(f"Altitude: {position.relative_altitude_m}")
and print_altitude() is run on the loop with:
asyncio.ensure_future(print_altitude())
asyncio.get_event_loop().run_forever()
That works well. Now, at some point, I'd like to close the stream from the caller. I thought that I could just run asyncio.ensure_future(loop.shutdown_asyncgens()) and wait for my finally clause above to get called, but it doesn't happen.
Instead, I receive a warning on an unretrieved exception:
Task exception was never retrieved
future: <Task finished coro=<print_altitude() done, defined at [...]
Why is that, and how can I make it such that all my async generators actually get closed (and run their finally clause)?
First of all, if you stop a loop, none of your coroutines will have a chance to shut down properly. Calling close basically means irreversibly destroying the loop.
If you do not care what happens to those running tasks, you can simply cancel them all; this will stop asynchronous generators as well:
import asyncio
from contextlib import suppress

async def position_stream():
    while True:
        await asyncio.sleep(1)
        yield 0

async def print_position():
    async for position in position_stream():
        print(f'position: {position}')

async def cleanup_awaiter():
    await asyncio.sleep(3)
    print('cleanup!')

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        asyncio.ensure_future(print_position())
        asyncio.ensure_future(print_position())
        loop.run_until_complete(cleanup_awaiter())

        # get all running tasks (asyncio.Task.all_tasks() was removed
        # in 3.9; use the module-level asyncio.all_tasks() instead):
        tasks = asyncio.gather(*asyncio.all_tasks(loop))
        # schedule throwing CancelledError into them:
        tasks.cancel()
        # allow them to process the exception and be cancelled:
        with suppress(asyncio.CancelledError):
            loop.run_until_complete(tasks)
    finally:
        print('closing loop')
        loop.close()
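For what it's worth, on Python 3.7+ asyncio.run() already performs this kind of cleanup on exit: it cancels any remaining tasks and calls loop.shutdown_asyncgens() before closing the loop. A sketch of the same example in that style (reusing position_stream, print_position and suppress from above):

async def main():
    task = asyncio.create_task(print_position())
    await asyncio.sleep(3)
    print('cleanup!')
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task

# asyncio.run() cancels leftover tasks and runs loop.shutdown_asyncgens()
# itself before closing the loop
asyncio.run(main())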
So I have a basic discord bot which accepts input:
import discord
import asyncio
import threading

loop = asyncio.new_event_loop()
bot = discord.Client()

def run_asyncio_loop(loop):
    asyncio.set_event_loop(loop)
    loop.run_forever()

Hangman.set_bot(bot)

@bot.event
async def on_message(message):
    bot.loop.create_task(Hangman.main(message))

asyncioLoop = threading.Thread(target=run_asyncio_loop, args=(loop,))
asyncioLoop.start()

bot.run(BotConstants.TOKEN)
In this example it calls the Hangman game, which does not block anything, as I have tested this using asyncio.sleep(n). But when I go to do something in Hangman, it blocks:
class Hangman():
    async def main(message):
        await Hangman.make_guess(message)

    async def update_score(message):
        sheetLoaded = Spreadsheet.load_ws(...)
        userExists = Spreadsheet.user_exists(...)
        if (not userExists):
            Spreadsheet.add_user(...)
            Spreadsheet.add_score(...)
            await Hangman.bot.send_message(message.channel, msg)
        elif (not sheetLoaded):
            await Hangman.bot.send_message(message.channel, msg)

    async def make_guess(message):
        # perform guess
        if (matched):
            await Hangman.bot.send_message(message.channel, msg)
            Hangman.GAMES.pop(message.server.id)
            await Hangman.update_score(message)
When Hangman.update_score() is called, it blocks, so the bot won't process any commands until the score has been updated. That takes about 5 seconds (not long, but with lots of users spamming it, it's an issue), during which the bot does not accept any other messages.
What am I missing to be able to make this work run in the background while still accepting new input?
Asyncio is still single-threaded. The only way for the event loop to run is for no other coroutine to be actively executing. Using yield from/await suspends the coroutine temporarily, giving the event loop a chance to work. So unless you suspend via yield (from) or await (or return), the loop stays blocked. You can add await asyncio.sleep(0) between steps of Hangman.update_score to split the blocking work into smaller chunks, but that only reduces the maximum "hang" time; it does not actually speed anything up.
To make the process actually run in the background, you could try something along the lines of:
from concurrent.futures import ProcessPoolExecutor
executor = ProcessPoolExecutor(2)
asyncio.ensure_future(loop.run_in_executor(executor, Hangman.update_score(message)))
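If the Spreadsheet calls are I/O-bound (e.g. network requests to a sheets API) rather than CPU-bound, a thread pool is likely a better fit than a process pool, since it avoids pickling the message object. A sketch under the same assumption of a synchronous update_score_blocking helper:

from concurrent.futures import ThreadPoolExecutor

thread_executor = ThreadPoolExecutor(2)

async def update_score(message):
    loop = asyncio.get_event_loop()
    # run the blocking spreadsheet work in a worker thread; the event
    # loop stays free to handle other messages in the meantime
    await loop.run_in_executor(thread_executor, Hangman.update_score_blocking, message)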