Python asyncio: handling exceptions in gather() - documentation unclear?

The documentation for asyncio.gather says that
If return_exceptions is False (default), the first raised exception is
immediately propagated to the task that awaits on gather(). Other
awaitables in the aws sequence won’t be cancelled and will continue to
run.
However, a simple test suggests that if one of the tasks raises an exception while return_exceptions is False, all other awaitables are cancelled (or, to be more precise, in case my terminology is off: the other awaitables do not finish their job):
import asyncio

async def factorial(name, number, raise_exception=False):
    # If raise_exception is True, will raise an exception when
    # the loop counter > 3
    f = 1
    for i in range(2, number + 1):
        print(f'Task {name}: Compute factorial({i})...')
        if raise_exception and i > 3:
            print(f'Task {name}: raising Exception')
            raise Exception(f'Bad Task {name}')
        await asyncio.sleep(1)
        f *= i
    print(f'==>> Task {name} DONE: factorial({number}) = {f}')
    return f

async def main():
    tasks = [factorial('A', 5),  # this will not be finished
             factorial('B', 10, raise_exception=True),
             factorial('C', 2)]
    try:
        results = await asyncio.gather(*tasks)
        print('Results:', results)
    except Exception as e:
        print('Got an exception:', e)

asyncio.run(main())
To make it simpler, what this piece of code does is define three tasks and call asyncio.gather() on them. One of the tasks raises an exception before another one is done, and that other task never finishes.
Actually, I cannot even make sense of what the documentation says - if an exception is raised and caught by the task awaiting on gather(), I would not even be able to get the returned results (even if the other tasks did, somehow, get done).
Am I missing anything, or is there a problem with the documentation?
This was tested with Python 3.7.2.

I've run your code and got the following output, as expected from the documentation:
Task C: Compute factorial(2)...
Task A: Compute factorial(2)...
Task B: Compute factorial(2)...
==>> Task C DONE: factorial(2) = 2
Task A: Compute factorial(3)...
Task B: Compute factorial(3)...
Task A: Compute factorial(4)...
Task B: Compute factorial(4)...
Task B: raising Exception
Got an exception: Bad Task B
Task A: Compute factorial(5)...
==>> Task A DONE: factorial(5) = 120
What's going on
Tasks A, B and C are submitted to the event loop;
All tasks are running; C finishes earliest.
Task B raises an exception.
The await asyncio.gather() returns immediately and print('Got an exception:', e) writes to the screen.
Task A continues to run and prints "==>> Task A DONE ...".
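For contrast, here is a hedged sketch of return_exceptions=True (reusing the same factorial coroutine), where gather() stores exceptions in the results list instead of raising:

results = await asyncio.gather(
    factorial('A', 5),
    factorial('B', 10, raise_exception=True),
    factorial('C', 2),
    return_exceptions=True)
# No exception propagates; the Exception('Bad Task B') instance shows up
# in the results list, and tasks A and C run to completion.
print('Results:', results)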
What's wrong with your test
As @deceze commented, your program exited immediately after the exception was caught and main() returned. Thus, the still-running task A was terminated because the entire process died, not because of cancellation.
To fix it, add await asyncio.sleep(20) to the end of the main() function, as sketched below.
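A minimal sketch of that change, reusing the factorial coroutine from the question:

async def main():
    tasks = [factorial('A', 5),
             factorial('B', 10, raise_exception=True),
             factorial('C', 2)]
    try:
        results = await asyncio.gather(*tasks)
        print('Results:', results)
    except Exception as e:
        print('Got an exception:', e)
    # Keep the event loop alive long enough for task A to finish.
    await asyncio.sleep(20)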

The answer to the main question here is to use asyncio.as_completed. Change your main() function code to:
async def main():
    tasks = [factorial('A', 5),  # this will not be finished
             factorial('B', 10, raise_exception=True),
             factorial('C', 2)]
    # Handle results in the order the tasks are completed;
    # if there is an exception, you can handle that as well.
    for coroutine in asyncio.as_completed(tasks):
        try:
            results = await coroutine
        except Exception as e:
            print('Got an exception:', e)
        else:
            print('Results:', results)

Related

Unable to cancel future - asyncio.sleep()

I have a signal handler defined that cancels all the tasks in the currently running asyncio event loop when the SIGINT signal is raised. In main, I have defined a new loop and the loop runs until the sleep function completes. I have used print statements inside signal_handler for better understanding as to what happens when an asyncio task is cancelled.
Below is my implementation:
import asyncio
import signal

class temp:
    def signal_handler(self, sig, frame):
        loop = asyncio.get_event_loop()
        tasks = asyncio.all_tasks(loop=loop)
        for task in tasks:
            print(task.get_name())  # returns the name of the task
            ret_val = asyncio.Future.cancel(task)  # returns True if task was just cancelled
            print(f"Return value : {ret_val}")
            print(f"Task Cancelled : {task.cancelled()}")  # returns True if task is cancelled
        return

    def main(self):
        try:
            signal.signal(signal.SIGINT, self.signal_handler)
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            loop.run_until_complete(asyncio.sleep(20))
        except asyncio.CancelledError as err:
            print("Cancellation error raised")
        finally:
            if not loop.is_closed():
                loop.close()

if __name__ == "__main__":
    test = temp()
    test.main()
Expected Behaviour:
When I raise a SIGINT at any time using Ctrl+C, the task (asyncio.sleep()) gets cancelled instantaneously, a CancelledError is raised, and there is a graceful exit.
Actual Behaviour:
The CancelledError is raised only after the time t (in seconds) passed to asyncio.sleep(t) has elapsed. For example, the CancelledError is raised after 20 seconds for the above code.
Unusual Observation:
The behaviour of the code is in line with the Expected Behaviour when executed on Windows.
The issue described above only happens on Linux.
What could be the reason for this platform-dependent behaviour?
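One commonly suggested direction, sketched here under the assumption that the goal is immediate cancellation (it is not a verified diagnosis of the Linux-specific delay), is to register the handler with the loop itself via loop.add_signal_handler (Unix only), so the callback runs inside the event loop and wakes it up:

import asyncio
import signal

def cancel_all():
    # Runs inside the event loop, so the cancellation is processed
    # immediately instead of waiting for the sleep to finish.
    for task in asyncio.all_tasks():
        print(f"Cancelling {task.get_name()}")
        task.cancel()

def main():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.add_signal_handler(signal.SIGINT, cancel_all)
    try:
        loop.run_until_complete(asyncio.sleep(20))
    except asyncio.CancelledError:
        print("Cancellation error raised")
    finally:
        loop.close()

if __name__ == "__main__":
    main()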

How to timeout asyncio.to_thread?

I'm experimenting with the new asyncio features in Python 3.9 and have the following code:
import asyncio

async def inc(start):
    i = start
    while True:
        print(i)
        i += 2
    return None

async def main():
    x = asyncio.gather(
        asyncio.to_thread(inc, 0),
        asyncio.to_thread(inc, 1)
    )
    try:
        await asyncio.wait_for(x, timeout=5.0)
    except asyncio.TimeoutError:
        print("timeout!")

asyncio.run(main())
My expectation is that the program will print numbers starting from 0 and terminate after 5 seconds. However, when I execute the program, I see no output and the following warning:
/usr/lib/python3.9/asyncio/events.py:80: RuntimeWarning: coroutine 'inc' was never awaited
self._context.run(self._callback, *self._args)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
/usr/lib/python3.9/concurrent/futures/thread.py:85: RuntimeWarning: coroutine 'inc' was never awaited
del work_item
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
asyncio.to_thread converts a call to a regular function into a coroutine that runs it in a separate thread.
All you need is to change async def inc to def inc. You can do this because there is no await in it; it is not a real coroutine.
However, you cannot kill a thread. The timeout will stop the waiting, not the computing of numbers.
UPDATE: In detail, asyncio.gather awaits the coroutines until the timeout. When cancelled by the timeout, the gather cancels all still running coroutines and waits for their termination. Cancelling a coroutine means delivering an exception at the nearest await statement. In this case, there is no await, so the coroutines will never receive the cancellation exception. The program hangs at that point.
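The same limitation can be worked around cooperatively. A sketch under the assumption that the worker can poll a shared flag (the stop_event parameter is an illustration, not part of the original code):

import asyncio
import threading

def inc(start, stop_event):
    # A plain function, as the answer suggests; it checks a shared
    # flag on every iteration so it can be asked to stop.
    i = start
    while not stop_event.is_set():
        print(i)
        i += 2

async def main():
    stop_event = threading.Event()
    x = asyncio.gather(
        asyncio.to_thread(inc, 0, stop_event),
        asyncio.to_thread(inc, 1, stop_event),
    )
    try:
        await asyncio.wait_for(x, timeout=5.0)
    except asyncio.TimeoutError:
        print("timeout!")
        stop_event.set()  # ask the threads to stop computing

asyncio.run(main())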

Exception in python asyncio.Task not raised until main Task complete

Here is the code. I thought the program would crash at once because of the uncaught exception. However, it waited the full 10 seconds until the main task coro2 completed.
import asyncio

@asyncio.coroutine
def coro1():
    print("coro1 primed")
    yield
    raise Exception("abc")

@asyncio.coroutine
def coro2(loop):
    try:
        print("coro2 primed")
        ts = [asyncio.Task(coro1(), loop=loop) for _ in range(2)]
        res = yield from asyncio.sleep(10)
        print(res)
    except Exception as e:
        print(e)
        raise

loop = asyncio.get_event_loop()
loop.run_until_complete(coro2(loop))
I think this is a serious problem because, in more complicated programs, it makes the process get stuck forever instead of crashing with exception information.
Besides, I set a breakpoint in the except block in the source code of run_until_complete, but it is not triggered. I am interested in which piece of code handles that exception in Python asyncio.
First, there is no reason to use generator-based coroutines in Python with the async/await syntax available for many years, and the coroutine decorator now deprecated and scheduled for removal. Also, you don't need to pass the event loop down to each coroutine, you can always use asyncio.get_event_loop() to obtain it when you need it. But these are unrelated to your question.
The except block in coro2 didn't trigger because the exception raised in coro1 didn't propagate to coro2. This is because you explicitly ran coro1 as a task, which executed it in the background, and didn't await it. You should always ensure that your tasks are awaited and then exceptions won't pass unnoticed; doing this systematically is sometimes referred to as structured concurrency.
The correct way to write the above would be something like:
async def coro1():
    print("coro1 primed")
    await asyncio.sleep(0)  # yield to the event loop
    raise Exception("abc")

async def coro2():
    try:
        print("coro2 primed")
        ts = [asyncio.create_task(coro1()) for _ in range(2)]
        res = await asyncio.sleep(10)
        # ensure we pick up results of the tasks that we've started
        for t in ts:
            await t
        print(res)
    except Exception as e:
        print(e)
        raise

asyncio.run(coro2())
Note that this will run sleep() to completion and only then propagate the exceptions raised by the background tasks. If you wanted to propagate immediately, you could use asyncio.gather(), in which case you wouldn't have to bother with explicitly creating tasks in the first place:
async def coro2():
    try:
        print("coro2 primed")
        res, *ignored = await asyncio.gather(
            asyncio.sleep(10),
            *[coro1() for _ in range(2)]
        )
        print(res)
    except Exception as e:
        print(e)
        raise
I am interested in which piece of code handled that exception in python asyncio.
An exception raised by a coroutine which is not handled is caught by asyncio and stored in the task object. This allows you to await the task or (if you know it's completed) obtain its result using the result() method, either of which will propagate (re-raise) the exception. Since your code never accessed the task's result, the exception instance remained forgotten inside the task object. Python goes so far to notice this and print a "Task exception was never retrieved" warning when the task object is destroyed along with a traceback, but this warning is provided on a best-effort basis, usually comes too late, and should not be relied upon.
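A minimal sketch of that mechanism (the coroutine name is illustrative):

import asyncio

async def fail():
    raise Exception("abc")

async def main():
    task = asyncio.create_task(fail())
    await asyncio.sleep(0)   # let the task run and fail
    print(task.done())       # True: the exception is stored in the task
    try:
        task.result()        # retrieving the result re-raises it
    except Exception as e:
        print("retrieved:", e)

asyncio.run(main())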

RuntimeError: Cannot close a running event loop

I'm trying to resolve this error: RuntimeError: Cannot close a running event loop in my asyncio process. I believe it's happening because there's a failure while tasks are still pending, and then I try to close the event loop. I'm thinking I need to await the remaining responses prior to closing the event loop, but I'm not sure how to accomplish that correctly in my specific situation.
def start_job(self):
    if self.auth_expire_timestamp < get_timestamp():
        api_obj = api_handler.Api('Api Name', self.dbObj)
        self.api_auth_resp = api_obj.get_auth_response()
        self.api_attr = api_obj.get_attributes()
    try:
        self.queue_manager(self.do_stuff(json_data))
    except aiohttp.ServerDisconnectedError as e:
        logging.info("Reconnecting...")
        api_obj = api_handler.Api('API Name', self.dbObj)
        self.api_auth_resp = api_obj.get_auth_response()
        self.api_attr = api_obj.get_attributes()
        self.run_eligibility()

async def do_stuff(self, data):
    tasks = []
    async with aiohttp.ClientSession() as session:
        for row in data:
            task = asyncio.ensure_future(self.async_post('url', session, row))
            tasks.append(task)
        result = await asyncio.gather(*tasks)
    self.load_results(result)

def queue_manager(self, method):
    self.loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(method)
    self.loop.run_until_complete(future)

async def async_post(self, resource, session, data):
    async with session.post(self.api_attr.api_endpoint + resource, headers=self.headers, data=data) as response:
        resp = []
        try:
            headers = response.headers['foo']
            content = await response.read()
            resp.append(headers)
            resp.append(content)
        except KeyError as e:
            logging.error('KeyError at async_post response')
            logging.error(e)
    return resp

def shutdown(self):
    # need to do something here to await the remaining tasks, and then I need
    # to re-start a new event loop, which I think I can do; I just don't know
    # how to appropriately stop the current one.
    self.loop.close()
    return True
How can I handle the error and properly close the event loop, so I can start a new one and essentially re-boot the whole program and continue on?
EDIT:
This is what I'm trying now, based on this SO answer. Unfortunately, this error only happens rarely, so unless I can force it, I will have to wait and see if it works. In my queue_manager method I changed it to this:
try:
    self.loop.run_until_complete(future)
except Exception as e:
    future.cancel()
    self.loop.run_until_complete(future)
    future.exception()
UPDATE:
I got rid of the shutdown() method and added this to my queue_manager() method instead, and it seems to be working without issue:
try:
    self.loop.run_until_complete(future)
except Exception as e:
    future.cancel()
    self.check_in_records()
    self.reconnect()
    self.start_job()
    future.exception()
To answer the question as originally stated: there is no need to close() a running loop; you can reuse the same loop for the whole program.
Given the code in the update, your queue_manager could look like this:
try:
    self.loop.run_until_complete(future)
except Exception as e:
    self.check_in_records()
    self.reconnect()
    self.start_job()
Cancelling future is not necessary and as far as I can tell has no effect. This is different from the referenced answer which specifically reacts to KeyboardInterrupt, special because it is raised by asyncio itself. KeyboardInterrupt can be propagated by run_until_complete without the future having actually completed. Handling Ctrl-C correctly in asyncio is very hard or even impossible (see here for details), but fortunately the question is not about Ctrl-C at all, it is about exceptions raised by the coroutine. (Note that KeyboardInterrupt doesn't inherit from Exception, so in case of Ctrl-C the except body won't even execute.)
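This is easy to check against the exception hierarchy:

# KeyboardInterrupt derives from BaseException but not Exception,
# so an `except Exception` clause will not catch it.
print(issubclass(KeyboardInterrupt, Exception))      # False
print(issubclass(KeyboardInterrupt, BaseException))  # True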
I was cancelling the future because in this instance there are remaining tasks pending, and I want to essentially remove those tasks and start a fresh event loop.
This is a correct thing to want to do, but the code in the (updated) question is only canceling a single future, the one already passed to run_until_complete. Recall that a future is a placeholder for a result value that will be provided at a later point. Once the value is provided, it can be retrieved by calling future.result(). If the "value" of the future is an exception, future.result() will raise that exception. run_until_complete has the contract that it will run the event loop for as long as it takes for the given future to produce a value, and then it returns that value. If the "value" is in fact an exception to raise, then run_until_complete will re-raise it. For example:
loop = asyncio.get_event_loop()
fut = loop.create_future()
loop.call_soon(fut.set_exception, ZeroDivisionError)
# raises ZeroDivisionError, as that is the future's result,
# manually set
loop.run_until_complete(fut)
When the future in question is in fact a Task, an asyncio-specific object that wraps a coroutine into a Future, the result of such future is the object returned by the coroutine. If the coroutine raises an exception, then retrieving the result will re-raise it, and so will run_until_complete:
async def fail():
    1/0
loop = asyncio.get_event_loop()
fut = loop.create_task(fail())
# raises ZeroDivisionError, as that is the future's result,
# because the coroutine raises it
loop.run_until_complete(fut)
When dealing with a task, run_until_complete finishing means that the coroutine has finished as well, having either returned a value or raised an exception, as determined by run_until_complete returning or raising.
On the other hand, cancelling a task works by arranging for the task to be resumed and the await expression that suspended it to raise CancelledError. Unless the task specifically catches and suppresses this exception (which well-behaved asyncio code is not supposed to do), the task will stop executing and the CancelledError will become its result. However, if the coroutine is already finished when cancel() is called, then cancel() cannot do anything because there is no pending await to inject CancelledError into.
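A short sketch of that last point (names are illustrative):

import asyncio

async def quick():
    return 42

async def main():
    task = asyncio.create_task(quick())
    await task               # the task completes here
    print(task.cancel())     # False: there is nothing left to cancel
    print(task.cancelled())  # False: the task finished normally
    print(task.result())     # 42

asyncio.run(main())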
I got the same error:
RuntimeError: Cannot close a running event loop
It occurred when I called loop.close() in test(), as shown below:
import asyncio

async def test(loop):
    print("Test")
    loop.stop()
    loop.close()  # Here

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.create_task(test(loop))
loop.run_forever()
So I moved loop.close() to after loop.run_forever(), wrapped in try: and finally: as shown below, and the error was solved:
import asyncio

async def test(loop):
    print("Test")
    loop.stop()

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.create_task(test(loop))
try:
    loop.run_forever()
finally:
    loop.close()  # Here
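On Python 3.7+, the same create/run/close lifecycle can also be delegated to asyncio.run(), which creates a fresh loop and closes it even on error; a rough equivalent of the example above:

import asyncio

async def test():
    print("Test")
    # No loop.stop() needed: asyncio.run() returns when the coroutine does.

asyncio.run(test())  # the loop is created and closed internally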

asyncio: prevent task from being cancelled twice

Sometimes, my coroutine cleanup code includes some blocking parts (in the asyncio sense, i.e. they may yield).
I try to design them carefully, so they don't block indefinitely. So "by contract", a coroutine must never be interrupted once it's inside its cleanup fragment.
Unfortunately, I can't find a way to prevent this, and bad things occur when it happens (whether it's caused by actual double cancel call; or when it's almost finished by itself, doing cleanup, and happens to be cancelled from elsewhere).
Theoretically, I can delegate cleanup to some other function, protect it with a shield, and surround it with a try-except block, but it's just ugly.
Is there a Pythonic way to do so?
#!/usr/bin/env python3
import asyncio

@asyncio.coroutine
def foo():
    """
    This is the function in question,
    with blocking cleanup fragment.
    """
    try:
        yield from asyncio.sleep(1)
    except asyncio.CancelledError:
        print("Interrupted during work")
        raise
    finally:
        print("I need just a couple more seconds to cleanup!")
        try:
            # upload results to the database, whatever
            yield from asyncio.sleep(1)
        except asyncio.CancelledError:
            print("Interrupted during cleanup :(")
        else:
            print("All cleaned up!")

@asyncio.coroutine
def interrupt_during_work():
    # this is a good example, all cleanup
    # finishes successfully
    t = asyncio.ensure_future(foo())  # asyncio.async() in the original, renamed in Python 3.4.4
    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    t.cancel()
    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def interrupt_during_cleanup():
    # here, cleanup is interrupted
    t = asyncio.ensure_future(foo())
    try:
        yield from asyncio.wait_for(t, 1.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    t.cancel()
    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def double_cancel():
    # cleanup is interrupted here as well
    t = asyncio.ensure_future(foo())
    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    t.cancel()
    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    # although double cancel is easy to avoid in
    # this particular example, it might not be so obvious
    # in more complex code
    t.cancel()
    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def comain():
    print("1. Interrupt during work")
    yield from interrupt_during_work()
    print("2. Interrupt during cleanup")
    yield from interrupt_during_cleanup()
    print("3. Double cancel")
    yield from double_cancel()

def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(comain())
    loop.run_until_complete(task)

if __name__ == "__main__":
    main()
I ended up writing a simple function that provides a stronger shield, so to speak.
Unlike asyncio.shield, which protects the callee, but raises CancelledError in its caller, this function suppresses CancelledError altogether.
The drawback is that this function doesn't allow you to handle CancelledError later. You won't see whether it has ever happened. Something slightly more complex would be required to do so.
@asyncio.coroutine
def super_shield(arg, *, loop=None):
    arg = asyncio.ensure_future(arg)  # asyncio.async() in the original
    while True:
        try:
            return (yield from asyncio.shield(arg, loop=loop))
        except asyncio.CancelledError:
            continue
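Usage mirrors asyncio.shield; for instance (do_cleanup is a hypothetical coroutine standing in for the blocking cleanup):

# Even if the surrounding task is cancelled repeatedly,
# do_cleanup() runs to completion before super_shield returns.
result = yield from super_shield(do_cleanup())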
I found WGH's solution when encountering a similar problem. I'd like to await a thread, but regular asyncio cancellation (with or without shield) will just cancel the awaiter and leave the thread floating around, uncontrolled. Here is a modification of super_shield that optionally allows reacting to cancel requests and also handles cancellation from within the awaitable:
await protect(aw, lambda: print("Cancel request"))
This guarantees that the awaitable has finished or raised CancelledError from within. If your task could be cancelled by other means (e.g. setting a flag observed by a thread), you can use the optional cancel callback to enable cancellation.
Implementation:
import asyncio
import typing

async def protect(aw, cancel_cb: typing.Callable = None):
    """
    A variant of `asyncio.shield` that protects the awaitable as well
    as the awaiter from being cancelled.
    Cancellation events from the awaiter are turned into callbacks
    for handling cancellation requests manually.
    :param aw: Awaitable.
    :param cancel_cb: Optional cancellation callback.
    :return: Result of awaitable.
    """
    task = asyncio.ensure_future(aw)
    while True:
        try:
            return await asyncio.shield(task)
        except asyncio.CancelledError:
            if task.done():
                raise
            if cancel_cb is not None:
                cancel_cb()
