How to throw a custom exception into a running task - python

I'm trying to figure out if it's possible to throw a custom exception into a running asyncio task, similarly to what is achieved by Task.cancel(self), which schedules a CancelledError to be raised in the underlying coroutine.
I came across Task.get_coro().throw(exc), but calling it seems like opening a big can of worms, as we may leave the task in a bad state, especially considering all the machinery that happens when a task throws CancelledError into its coroutine.
Consider the following example:
import asyncio

class Reset(Exception):
    pass

async def infinite():
    while True:
        try:
            print('work')
            await asyncio.sleep(1)
            print('more work')
        except Reset:
            print('reset')
            continue
        except asyncio.CancelledError:
            print('cancel')
            break

async def main():
    infinite_task = asyncio.create_task(infinite())
    await asyncio.sleep(0)  # Allow infinite_task to enter its work loop.
    infinite_task.get_coro().throw(Reset())
    await infinite_task

asyncio.run(main())
## OUTPUT ##
# "work"
# "reset"
# "work"
# hangs forever ... bad :(
Is what I'm trying to do even feasible? It feels as if I shouldn't be manipulating the underlying coroutine like this. Is there a workaround?

There's no way to throw a custom exception into a running task. You shouldn't mess with .throw - it's an implementation detail, and relying on it will probably break something.
If you want to pass information (about a reset) into the task, do it through an argument. Here's how it can be implemented:
import asyncio
from contextlib import suppress

async def infinite(need_reset):
    try:
        while True:
            inner_task = asyncio.create_task(inner_job())
            # asyncio.wait() only accepts tasks (passing bare coroutines
            # is an error since Python 3.11), so wrap the event waiter
            # in a task as well.
            reset_task = asyncio.create_task(need_reset.wait())
            await asyncio.wait(
                [reset_task, inner_task],
                return_when=asyncio.FIRST_COMPLETED
            )
            if need_reset.is_set():
                print('reset')
                await cancel(inner_task)
                need_reset.clear()
            await cancel(reset_task)  # drop the stale waiter before looping
    except asyncio.CancelledError:
        print('cancel')
        raise  # you should never suppress CancelledError, see:
               # https://stackoverflow.com/a/33578893/1113207

async def inner_job():
    print('work')
    await asyncio.sleep(1)
    print('more work')

async def cancel(task):
    # more info: https://stackoverflow.com/a/43810272/1113207
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task

async def main():
    need_reset = asyncio.Event()
    infinite_task = asyncio.create_task(infinite(need_reset))
    await asyncio.sleep(1.5)
    need_reset.set()
    await asyncio.sleep(1.5)
    await cancel(infinite_task)

asyncio.run(main())
Output:
work
more work
work
reset
work
more work
work
cancel

Related

Python asyncio await on future seems to be blocking

I'm trying to use the code below to implement an async data-waiting process. The basic idea is that a future is passed into wait_on_data, and another function, set_status, sets the result of this future.
The problem is that when I run the code, await fut doesn't seem to suspend the task; it always reaches the timeout limit, and set_status never runs concurrently to set the result.
Is this because of the GIL? If it is, what can I do alternatively to achieve this?
def set_status(fut):
    print("set future")
    fut.set_result((True, True))

async def wait_on_data(fut):
    try:
        async with asyncio.timeout(10):
            print("waiting for future")
            await fut
            return fut.result()
    except TimeoutError:
        print("timeout for future")
        return (False, False)

async def main():
    fut = asyncio.get_running_loop().create_future()
    result = await wait_on_data(fut)
    await asyncio.get_event_loop().run_in_executor(set_status(fut))
    assert result == True, True
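For what it's worth, this does not look like a GIL issue: await fut does suspend wait_on_data, but nothing ever sets the future's result, because main awaits wait_on_data(fut) to completion before the line that would call set_status is reached (and run_in_executor(set_status(fut)) calls set_status immediately instead of passing it as a callable; the signature is run_in_executor(executor, func, *args)). A minimal sketch of one possible fix, reusing wait_on_data from above and scheduling the setter on the loop before awaiting:
import asyncio

def set_status(fut):
    print("set future")
    fut.set_result((True, True))

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Schedule the setter *before* awaiting, so it runs while
    # wait_on_data() is suspended on `await fut`.
    loop.call_soon(set_status, fut)
    result = await wait_on_data(fut)
    assert result == (True, True)

asyncio.run(main())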

Asyncio: cancelling tasks and starting new ones when a signal flag is raised

My program is supposed to read data forever from provider classes stored in PROVIDERS, defined in the config. Every second, it should check whether the config has changed, and if so, stop all tasks, reload the config, and create new tasks.
The code below raises CancelledError because I'm cancelling my tasks. Should I really try/except each of those to achieve my goal, or is there a better pattern?
async def main(config_file):
    load_config(config_file)
    tasks = []
    config_task = asyncio.create_task(watch_config(config_file))  # checks every 1s if config changed and raises ConfigChangedSignal if so
    tasks.append(config_task)
    for asset_name, provider in PROVIDERS.items():
        task = asyncio.create_task(provider.read_forever())
        tasks.append(task)
    try:
        await asyncio.gather(*tasks, return_exceptions=False)
    except ConfigChangedSignal:
        # Restarting
        for task in asyncio.tasks.all_tasks():
            task.cancel()  # raises CancelledError
        await main(config_file)

try:
    asyncio.run(main(config_file))
except KeyboardInterrupt:
    logger.debug("Ctrl-C pressed. Aborting")
If you are on Python 3.11, your pattern maps directly to asyncio.TaskGroup, the "successor" to asyncio.gather, which makes use of the new exception groups. By default, if any task in the group raises an exception, all tasks in the group are cancelled.
I played around with this snippet in the ipython console, running asyncio.run(main(False)) for no exception and asyncio.run(main(True)) to induce an exception, just to check the results:
import asyncio

async def doit(i, n, cancel=False):
    await asyncio.sleep(n)
    if cancel:
        raise RuntimeError()
    print(i, "done")

async def main(cancel):
    try:
        async with asyncio.TaskGroup() as group:
            tasks = [group.create_task(doit(i, 2)) for i in range(10)]
            group.create_task(doit(42, 1, cancel=cancel))
            group.create_task(doit(11, .5))
    except Exception:
        pass
    await asyncio.sleep(3)
Your code can accommodate that.
Apart from the best practice for cancelling tasks, though, you are doing a recursive call to your main which, although it will work for most practical purposes, can make seasoned developers go "sigh". It can also break in edge cases (it will fail after roughly 1000 cycles, for example) and leak resources.
The correct way to do it is with a while loop, since Python function calls, even tail calls, won't clean up the resources in the calling scope:
import asyncio
...

async def main(config_file):
    while True:
        load_config(config_file)
        try:
            async with asyncio.TaskGroup() as tasks:
                tasks.create_task(watch_config(config_file))  # checks every 1s if config changed and raises ConfigChangedSignal if so
                for asset_name, provider in PROVIDERS.items():
                    tasks.create_task(provider.read_forever())
                # all tasks are awaited at the end of the with block
        except* ConfigChangedSignal:  # <- the new syntax in Python 3.11
            # Restarting is just a matter of re-doing the while loop
            # ... log.info("config changed")
            pass
        # any other exception won't be caught and will error, allowing one
        # to review what went wrong
...
For Python 3.10, looping over the tasks and cancelling each one seems alright, but you should fix that recursive call. If you don't want a while loop inside your current main, refactor the code so that main itself is called from an outer while loop:
async def main(config_file):
    while True:
        await inner_main(config_file)

async def inner_main(config_file):
    load_config(config_file)
    # keep the existing body
    ...
    except ConfigChangedSignal:
        # Restarting
        for task in asyncio.tasks.all_tasks():
            task.cancel()  # raises CancelledError
        # await main call dropped from here
jsbueno’s answer is appropriate.
An easy alternative is to enclose the entire event loop in an outer “while”:
async def main(config_file):
    load_config(config_file)
    tasks = []
    for asset_name, provider in PROVIDERS.items():
        task = asyncio.create_task(provider.read_forever())
        tasks.append(task)
    try:
        await watch_config(config_file)
    except ConfigChangedSignal:
        pass

try:
    while True:
        asyncio.run(main(config_file))
except KeyboardInterrupt:
    logger.debug("Ctrl-C pressed. Aborting")

Python Asyncio TimeoutError (tasks.loop)

I have a script that should run endlessly. My problem is that after about 10 minutes a timeout error appears.
I tried a try/except: if the error is caught, the start method should be called again. But this does not work. The catch works, but the start method cannot be called again.
Here is my code:
@tasks.loop()
async def beginn(self):
    print(something)
    self.csvToList()
    await self.find_price()

def start():
    try:
        print("run")
        mon = monitor()
        mon.beginn.start()
        client.run(token)
    except asyncio.TimeoutError:
        print("Timeout")
        start()

start()
(The error message and the relevant line numbers were posted as screenshots.)
As I can see in your code, you wrote client.run(token), so this is probably discord.py. I think the best solution, in order for the loop not to end, is this:
@tasks.loop(seconds=1)
async def beginn(self):
    print(something)
    self.csvToList()
    await self.find_price()

@beginn.before_loop
async def before():
    await bot.wait_until_ready()
    print("Finished waiting")

beginn.start()
bot.run(token)
Do not forget to delete YOUR start() function in order for this to work properly.

How to forcefully close an async generator?

Let's say I have an async generator like this:
async def event_publisher(connection, queue):
    while True:
        if not await connection.is_disconnected():
            event = await queue.get()
            yield event
        else:
            return
I consume it like this:
published_events = event_publisher(connection, queue)

async for event in published_events:
    # do event processing here
It works just fine; however, when the connection is disconnected and no new event is published, the async for will just wait forever, so ideally I would like to close the generator forcefully, like this:
if connection.is_disconnected():
    await published_events.aclose()
But I get the following error:
RuntimeError: aclose(): asynchronous generator is already running
Is there a way to stop processing of an already running generator?
It seems to be related to this issue. Notably:
As shown in
https://gist.github.com/1st1/d9860cbf6fe2e5d243e695809aea674c, it's an
error to close a synchronous generator while it is being iterated.
...
In 3.8, calling "aclose()" can crash with a RuntimeError. It's no
longer possible to reliably cancel a running asynchrounous
generator.
Well, since we can't close a running asynchronous generator, let's try to cancel the iteration it is currently running instead:
import asyncio
from contextlib import suppress

async def cancel_gen(agen):
    task = asyncio.create_task(agen.__anext__())
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task
    await agen.aclose()  # probably a good idea,
                         # but if you'll be getting errors, try to comment this line

...

if connection.is_disconnected():
    await cancel_gen(published_events)
I can't test whether it'll work, since you didn't provide a reproducible example.
You can use a timeout on the queue, so that is_disconnected() is polled regularly when there is no item to pop:
async def event_publisher(connection, queue):
    while True:
        if not await connection.is_disconnected():
            try:
                event = await asyncio.wait_for(queue.get(), timeout=10.0)
            except asyncio.TimeoutError:
                continue
            yield event
        else:
            return
Alternatively, it is possible to use Queue.get_nowait().
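As a rough sketch of that alternative (my adaptation, not from the original answer), polling with get_nowait() and a short sleep so the loop regularly yields control and re-checks the connection:
async def event_publisher(connection, queue):
    while not await connection.is_disconnected():
        try:
            event = queue.get_nowait()
        except asyncio.QueueEmpty:
            # nothing queued yet: yield to the event loop, then
            # re-check the connection state
            await asyncio.sleep(0.1)
            continue
        yield event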

Shutdown infinite async generator

Reproducible error
I tried to reproduce the error in an online REPL here. However, it is not exactly the same implementation (and hence behavior) as my real code (where I do async for response in position_stream(), instead of for position in count() in the REPL).
More details on my actual implementation
I define somewhere a coroutine like so:
async def position(self):
    request = telemetry_pb2.SubscribePositionRequest()
    position_stream = self._stub.SubscribePosition(request)
    try:
        async for response in position_stream:
            yield Position.translate_from_rpc(response)
    finally:
        position_stream.cancel()
where position_stream is infinite (or possibly very long lasting). I use it from an example code like this:
async def print_altitude():
    async for position in drone.telemetry.position():
        print(f"Altitude: {position.relative_altitude_m}")
and print_altitude() is run on the loop with:
asyncio.ensure_future(print_altitude())
asyncio.get_event_loop().run_forever()
That works well. Now, at some point, I'd like to close the stream from the caller. I thought that I could just run asyncio.ensure_future(loop.shutdown_asyncgens()) and wait for my finally clause above to get called, but it doesn't happen.
Instead, I receive a warning on an unretrieved exception:
Task exception was never retrieved
future: <Task finished coro=<print_altitude() done, defined at [...]
Why is that, and how can I make it such that all my async generators actually get closed (and run their finally clause)?
First of all, if you stop a loop, none of your coroutines will have a chance to shut down properly. Calling close basically means irreversibly destroying the loop.
If you do not care what happens to those running tasks, you can simply cancel them all; this will stop asynchronous generators as well:
import asyncio
from contextlib import suppress

async def position_stream():
    while True:
        await asyncio.sleep(1)
        yield 0

async def print_position():
    async for position in position_stream():
        print(f'position: {position}')

async def cleanup_awaiter():
    await asyncio.sleep(3)
    print('cleanup!')

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        asyncio.ensure_future(print_position())
        asyncio.ensure_future(print_position())
        loop.run_until_complete(cleanup_awaiter())
        # get all running tasks
        # (asyncio.Task.all_tasks() was removed in Python 3.9):
        tasks = asyncio.gather(*asyncio.all_tasks(loop))
        # schedule throwing CancelledError into them:
        tasks.cancel()
        # allow them to process the exception and be cancelled:
        with suppress(asyncio.CancelledError):
            loop.run_until_complete(tasks)
    finally:
        print('closing loop')
        loop.close()
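On current Python versions, a rough equivalent under asyncio.run (my adaptation of the snippet above, not part of the original answer). Note that asyncio.run() also calls loop.shutdown_asyncgens() during its own shutdown:
import asyncio

# reuses position_stream(), print_position() and cleanup_awaiter() from above

async def main():
    tasks = [asyncio.create_task(print_position()) for _ in range(2)]
    await cleanup_awaiter()
    for task in tasks:
        task.cancel()  # schedule CancelledError inside each task
    # let the tasks process the cancellation; any finally clauses in the
    # generators run while it propagates
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(main())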
