My understanding of async tasks is that they can be created and the returned task object can simply be discarded, because the task is automatically scheduled on the loop; asyncio.all_tasks() can then be called later to join them. However, calling all_tasks() once won't account for any tasks created from within other tasks, and, as a result, an exception gets raised but never propagated. Even after the task that dispatches its own task has run, asyncio.all_tasks() still doesn't seem to see the child.
So, do we actually need proper task accounting and to make sure to gather/run every created task? It seems like this is partially managed, but the important part is not.
Example code:
import asyncio

async def func2():
    print("Second function")
    raise Exception("Inner exception")

async def func1(loop):
    print("First function")
    loop.create_task(func2())

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

awaitable = func1(loop)
loop.create_task(awaitable)

reported_tasks = asyncio.all_tasks(loop=loop)
awaitable = asyncio.gather(*reported_tasks)
loop.run_until_complete(awaitable)
I'm creating the loop myself because my real scenario runs in a non-main thread (where a loop won't be implicitly created), and I don't want some unspoken nuance of that setup to confuse things.
func1() and func2() execute, but I merely get warned about never retrieving the exception from func2() unless I manually collect all of the tasks, enumerate them, and check each one's exception.
On the other hand, if I start adding all of the tasks to a list, do I grab that list, take its length, shift that many items off the front, wait on those to be done, check them for exceptions, and loop until I determine that all tasks have completed? That will potentially include some long-running tasks as well as some failed ones, so maybe I also need to factor in a timeout to make sure those exceptions are seen soon after being raised? Is there a more elegant solution?
I looked, but the documentation seems to be mostly about the API rather than use-cases/patterns, and all of the searches I ran still left these questions unanswered.
The documentation does note that you should retain the created tasks, and implies that this is only so they don't get garbage-collected, but it says nothing about the issues above:
Important
Save a reference to the result of this function, to avoid a task disappearing mid-execution.
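What the docs are getting at is roughly the following pattern (a minimal sketch; background_tasks and worker are illustrative names, not from the snippet above): keep a strong reference to every created task so it can't be garbage-collected, let each task remove itself once it is done, and explicitly await or inspect whatever is still tracked.

import asyncio

background_tasks = set()

async def worker(n):
    await asyncio.sleep(0)
    return n

async def main():
    for n in range(3):
        task = asyncio.create_task(worker(n))
        background_tasks.add(task)                        # strong reference
        task.add_done_callback(background_tasks.discard)  # drop it when done
    # Whatever is still tracked can later be awaited/inspected explicitly;
    # return_exceptions=True means failures show up as results, not raises.
    results = await asyncio.gather(*background_tasks, return_exceptions=True)
    print(results)

asyncio.run(main())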
The problem with your snippet seems to be only that you call all_tasks before the body of func1 is run, and therefore, before the task that will run func2 is ever created.
I typed some stuff on the interactive interpreter, without all the boilerplate you added there, and all_tasks seems to be "seeing" the grand-daughter task, with no problems, and the exception is also raised in the awaiting code:
In [1]: import asyncio
In [2]: async def func2():
   ...:     print("second function")
   ...:     1 / 0  # raises
   ...:

In [3]: async def func1():
   ...:     print("first function")
   ...:     asyncio.create_task(func2())
   ...:

In [4]: async def main():
   ...:     await func1()
   ...:     z = asyncio.all_tasks()
   ...:     print(z)
   ...:
In [5]: asyncio.run(main())
first function
{<Task pending name='Task-1148' coro=<main() running at <ipython-input-4-489680e1b4c1>:4> cb=[_run_until_complete_cb() at /home/gwidion/.pyenv/versions/3.10-dev/lib/python3.10/asyncio/base_events.py:184]>, <Task pending name='Task-1149' coro=<func2() running at <ipython-input-2-555de5f64f41>:1>>}
second function
Task exception was never retrieved
future: <Task finished name='Task-1149' coro=<func2() done, defined at <ipython-input-2-555de5f64f41>:1> exception=ZeroDivisionError('division by zero')>
Traceback (most recent call last):
File "<ipython-input-2-555de5f64f41>", line 3, in func2
1 / 0 # raises
ZeroDivisionError: division by zero
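If you also want that exception to be retrieved instead of merely logged, one option (a sketch, not something taken from the answer above) is to keep draining asyncio.all_tasks() until nothing but the current task is left, so that tasks spawned by other tasks are picked up on a later pass:

import asyncio

async def func2():
    print("Second function")
    raise Exception("Inner exception")

async def func1():
    print("First function")
    asyncio.create_task(func2())

async def main():
    asyncio.create_task(func1())
    while True:
        # Everything except the task running this coroutine.
        pending = list(asyncio.all_tasks() - {asyncio.current_task()})
        if not pending:
            break
        # return_exceptions=True lets us inspect failures instead of
        # having the first one abort the gather.
        results = await asyncio.gather(*pending, return_exceptions=True)
        for task, result in zip(pending, results):
            if isinstance(result, BaseException):
                print(f"{task.get_name()} failed: {result!r}")

asyncio.run(main())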
Related
I have an asyncio loop already running, and from within a coroutine I'm calling a sync function. Is there any way to call an async function, and get its result, from inside that sync function?
I tried the code below, but it does not work.
I want to print the output of hel() inside i() without changing i() into an async function.
Is that possible, and if yes, how?
import asyncio

async def hel():
    return 4

def i():
    loop = asyncio.get_running_loop()
    x = asyncio.run_coroutine_threadsafe(hel(), loop)  ## need to change
    y = x.result()                                     ## this lines
    print(y)

async def h():
    i()

asyncio.run(h())
This is one of the most commonly asked types of questions here. The tools to do it are in the standard library and require only a few lines of setup code. However, the result is not 100% robust and needs to be used with care. This is probably why it's not already a high-level function.
The basic problem with running an async function from a sync function is that async functions contain await expressions. Await expressions pause the execution of the current task and allow the event loop to run other tasks. Therefore async functions (coroutines) have special properties that allow them to yield control and resume again where they left off. Sync functions cannot do this. So when your sync function calls an async function and that function encounters an await expression, what is supposed to happen? The sync function has no ability to yield and resume.
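To make that concrete, here is a tiny sketch (reusing the hel/i names from the question) of what actually happens when the sync function just calls the coroutine function directly:

import asyncio

async def hel():
    return 4

def i():
    # Calling the coroutine function does NOT run it; it only builds a
    # coroutine object, and a plain function has no way to await it.
    x = hel()
    print(x)   # <coroutine object hel at 0x...>, not 4
    x.close()  # close it so Python doesn't warn that it was never awaited

async def h():
    i()

asyncio.run(h())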
A simple solution is to run the async function in another thread, with its own event loop. The calling thread blocks until the result is available. The async function behaves like a normal function, returning a value. The downside is that the async function now runs in another thread, which can cause all the well-known problems that come with threaded programming. For many cases this may not be an issue.
This can be set up as follows. This is a complete script that can be imported anywhere in an application. The test code that runs in the if __name__ == "__main__" block is almost the same as the code in the original question.
The thread is lazily initialized so it doesn't get created until it's used. It's a daemon thread so it will not keep your program from exiting.
The solution doesn't care if there is a running event loop in the main thread.
import asyncio
import threading

_loop = asyncio.new_event_loop()
_thr = threading.Thread(target=_loop.run_forever, name="Async Runner",
                        daemon=True)

# This will block the calling thread until the coroutine is finished.
# Any exception that occurs in the coroutine is raised in the caller.
def run_async(coro):  # coro is a coroutine, see example
    if not _thr.is_alive():
        _thr.start()
    future = asyncio.run_coroutine_threadsafe(coro, _loop)
    return future.result()

if __name__ == "__main__":
    async def hel():
        await asyncio.sleep(0.1)
        print("Running in thread", threading.current_thread())
        return 4

    def i():
        y = run_async(hel())
        print("Answer", y, threading.current_thread())

    async def h():
        i()

    asyncio.run(h())
Output:
Running in thread <Thread(Async Runner, started daemon 28816)>
Answer 4 <_MainThread(MainThread, started 22100)>
To call an async function from a sync method, you would normally use asyncio.run(). However, asyncio.run() is meant to be the single entry point of an async program, and asyncio refuses to run it while another event loop is already running in the same thread, so you can't do that here.
That being said, this project https://github.com/erdewit/nest_asyncio patches the asyncio event loop to do that, so after using it you should be able to just call asyncio.run in your sync function.
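For illustration, a minimal sketch of that approach (assuming the nest_asyncio package is installed; hel/i/h are the functions from the question):

import asyncio
import nest_asyncio

nest_asyncio.apply()  # patch the event loop to allow re-entrant running

async def hel():
    return 4

def i():
    y = asyncio.run(hel())  # works only because the loop has been patched
    print(y)

async def h():
    i()

asyncio.run(h())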
So, given a somewhat complex setup that is used to generate a list of queries to be run semi-parallel (using a semaphore to avoid running too many queries at the same time and DDoSing the server):
I have an (itself async) function that creates a number of queries:
async def run_query(self, url):
    async with self.semaphore:
        return await some_http_lib(url)

async def create_queries(self, base_url):
    # ...gathering logic is of course a bit more complex in the real setting
    urls = await some_http_lib(base_url).json()
    coros = [self.run_query(url) for url in urls]  # note: not executed just yet
    return coros

async def execute_queries(self):
    queries = await self.create_queries('/some/url')
    _logger.info(f'prepared {len(queries)} queries')
    results = []
    done = 0
    # note: of course, in this simple example these would not actually be
    # executed concurrently. In the real case I'm using asyncio.gather; this
    # just makes for a slightly more understandable example.
    for query in queries:
        # at this point, the request is actually triggered
        result = await query
        # ...some postprocessing
        if not result['success']:
            raise QueryException(result['message'])  # ...internal exception
        done += 1
        _logger.info(f'{done} of {len(queries)} queries done')
        results.append(result)
    return results
Now this works very nicely, executing exactly as I planned, and I can handle an exception in one of the queries by aborting the whole operation:
async def run():
    try:
        return await QueryRunner.execute_queries()
    except QueryException:
        _logger.error('something went horribly wrong')
        return None
The only problem is that the program terminates but leaves me with the usual RuntimeWarning: coroutine QueryRunner.run_query was never awaited, because the queries later in the queue are (rightfully) not executed and thus never awaited.
Is there any way to cancel these unawaited coroutines? Would it otherwise be possible to suppress this warning?
[Edit] A bit more context on how the queries are executed outside this simplified example:
The queries are usually grouped together, so there are multiple calls to create_queries() with different parameters. All collected groups are then looped over and the queries of each group are executed using asyncio.gather(group). This awaits all the queries of one group, but if one fails, the other groups are cancelled as well, which is what produces the warning.
So you are asking how to cancel a coroutine that has not yet been either awaited or passed to gather. There are two options:
you can call asyncio.create_task(c).cancel()
you can directly call c.close() on the coroutine object
The first option is a bit more heavyweight (it creates a task only to immediately cancel it), but it uses the documented asyncio functionality. The second option is more lightweight, but also more low-level.
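A small sketch showing both options side by side (run_query here is a trivial stand-in, not the asker's real coroutine):

import asyncio

async def run_query(url):
    return url  # stand-in for the real query coroutine

async def main():
    coros = [run_query(f"/url/{i}") for i in range(3)]
    print(await coros[0])  # only the first one is actually awaited
    # Option 1: wrap the leftover coroutine in a task and cancel it.
    asyncio.create_task(coros[1]).cancel()
    # Option 2: close the bare coroutine object directly.
    coros[2].close()
    # Neither leftover coroutine triggers "was never awaited" now.

asyncio.run(main())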
The above applies to coroutine objects that have never been converted to tasks (by passing them to gather or wait, for example). If they have, for example if you called asyncio.gather(*coros), one of them raised and you want to cancel the rest, you should change the code to first convert them to tasks using asyncio.create_task(), then call gather, and use finally to cancel the unfinished ones:
tasks = list(map(asyncio.create_task, coros))
try:
    results = await asyncio.gather(*tasks)
finally:
    # if there are unfinished tasks, that is because one of them
    # raised - cancel the rest
    for t in tasks:
        if not t.done():
            t.cancel()
Use
pending = asyncio.Task.all_tasks()  # Python < 3.7
or
pending = asyncio.all_tasks()  # Python >= 3.7
to get the set of pending tasks. You can wait for them with
await asyncio.wait(pending, return_when=asyncio.ALL_COMPLETED)
or cancel them:
for task in pending:
    task.cancel()
When one of the tasks passed to gather raises an exception, the others are still allowed to continue.
Well, that's not exactly what I need. I want to distinguish between errors that are fatal and need to cancel all remaining tasks, and errors that are not and instead should be logged while allowing other tasks to continue.
Here is my failed attempt to implement this:
from asyncio import gather, get_event_loop, sleep

class ErrorThatShouldCancelOtherTasks(Exception):
    pass

async def my_sleep(secs):
    await sleep(secs)
    if secs == 5:
        raise ErrorThatShouldCancelOtherTasks('5 is forbidden!')
    print(f'Slept for {secs}secs.')

async def main():
    try:
        sleepers = gather(*[my_sleep(secs) for secs in [2, 5, 7]])
        await sleepers
    except ErrorThatShouldCancelOtherTasks:
        print('Fatal error; cancelling')
        sleepers.cancel()
    finally:
        await sleep(5)

get_event_loop().run_until_complete(main())
(the finally await sleep here is to prevent the interpreter from closing immediately, which would on its own cancel all tasks)
Oddly, calling cancel on the gather does not actually cancel it!
PS C:\Users\m> .\AppData\Local\Programs\Python\Python368\python.exe .\wtf.py
Slept for 2secs.
Fatal error; cancelling
Slept for 7secs.
I am very surprised by this behavior since it seems to be contradictory to the documentation, which states:
asyncio.gather(*coros_or_futures, loop=None, return_exceptions=False)
Return a future aggregating results from the given coroutine objects or futures.
(...)
Cancellation: if the outer Future is cancelled, all children (that have not completed yet) are also cancelled. (...)
What am I missing here? How to cancel the remaining tasks?
The problem with your implementation is that it calls sleepers.cancel() after sleepers has already raised. Technically the future returned by gather() is in a completed state at that point, so cancelling it is a no-op.
To correct the code, you just need to cancel the children yourself instead of trusting gather's future to do it. Of course, coroutines are not themselves cancelable, so you need to convert them to tasks first (which gather would do anyway, so you're doing no extra work). For example:
async def main():
    tasks = [asyncio.ensure_future(my_sleep(secs))
             for secs in [2, 5, 7]]
    try:
        await asyncio.gather(*tasks)
    except ErrorThatShouldCancelOtherTasks:
        print('Fatal error; cancelling')
        for t in tasks:
            t.cancel()
    finally:
        await sleep(5)
I am very surprised by this behavior since it seems to be contradictory to the documentation[...]
The initial stumbling block with gather is that it doesn't really run tasks, it's just a helper to wait for them to finish. For this reason gather doesn't bother to cancel the remaining tasks if some of them fail with an exception - it just abandons the wait and propagates the exception, leaving the remaining tasks to proceed in the background. This was reported as a bug, but wasn't fixed, for backward compatibility and because the behavior is documented and has been unchanged from the beginning.

But here we have another wart: the documentation explicitly promises being able to cancel the returned future. Your code does exactly that, and it doesn't work, without it being obvious why (at least it took me a while to figure it out, and it required reading the source). It turns out that the contract of Future actually prevents this from working. By the time you call cancel(), the future returned by gather has already completed, and cancelling a completed future is meaningless, it is just a no-op. (The reason is that a completed future has a well-defined result that could have been observed by outside code. Cancelling it would change its result, which is not allowed.)
In other words, the documentation is not wrong, because cancelling would have worked had you performed it prior to await sleepers having completed. However, it's misleading, because it appears to allow cancelling gather() in this important use case of one of its awaitables raising, but in reality it doesn't.
Problems like this that pop up when using gather are reason why many people eagerly await (no pun intended) trio-style nurseries in asyncio.
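For what it's worth, newer Python versions (3.11+) ship asyncio.TaskGroup, which provides exactly that nursery-style behavior: when one child raises, the remaining children are cancelled. A rough sketch of the example above rewritten with it (a TaskGroup wraps child exceptions in an ExceptionGroup, hence except*):

import asyncio

class ErrorThatShouldCancelOtherTasks(Exception):
    pass

async def my_sleep(secs):
    await asyncio.sleep(secs)
    if secs == 5:
        raise ErrorThatShouldCancelOtherTasks('5 is forbidden!')
    print(f'Slept for {secs}secs.')

async def main():
    try:
        # TaskGroup (Python 3.11+) cancels the remaining children as soon
        # as any child task raises.
        async with asyncio.TaskGroup() as tg:
            for secs in (2, 5, 7):
                tg.create_task(my_sleep(secs))
    except* ErrorThatShouldCancelOtherTasks:
        print('Fatal error; the 7-second sleeper was cancelled')

asyncio.run(main())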
You can create your own custom gather function that cancels all its children when any exception occurs:
import asyncio

async def gather(*tasks, **kwargs):
    tasks = [task if isinstance(task, asyncio.Task) else asyncio.create_task(task)
             for task in tasks]
    try:
        return await asyncio.gather(*tasks, **kwargs)
    except BaseException as e:
        for task in tasks:
            task.cancel()
        raise e

# If a() or b() raises an exception, both are immediately cancelled
a_result, b_result = await gather(a(), b())
What you can do with Python 3.10 (and, probably, earlier versions) is use asyncio.wait. It takes an iterable of awaitables and a condition as to when to return, and when the condition is met, it returns two sets of tasks: completed ones and pending ones. You can have it return on the first exception and then cancel the pending tasks one by one:
async def my_task(x):
    try:
        ...
    except RecoverableError as e:
        ...

tasks = [asyncio.create_task(my_task(x)) for x in xs]
done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
for p in pending:
    p.cancel()
And you can wrap your tasks in try-except, re-raising the fatal exceptions and processing the non-fatal ones. It's not gather, but it looks like it does what you want.
https://docs.python.org/3/library/asyncio-task.html#id9
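Here is a slightly fleshed-out sketch of that idea (FatalError, RecoverableError, and the failure conditions are made up for illustration): each task logs and swallows recoverable errors, lets fatal ones escape, and the waiting code cancels whatever is still pending once a fatal error surfaces.

import asyncio

class FatalError(Exception):
    pass

class RecoverableError(Exception):
    pass

async def my_task(x):
    await asyncio.sleep(x)
    try:
        if x == 2:
            raise RecoverableError(f"{x}: recoverable")
        if x == 3:
            raise FatalError(f"{x}: fatal")
    except RecoverableError as e:
        print("logged and ignored:", e)  # non-fatal: swallow and continue
        return None
    return x

async def main():
    tasks = [asyncio.create_task(my_task(x)) for x in (1, 2, 3, 4)]
    # Returns as soon as a task finishes by raising (the fatal one).
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
    for p in pending:
        p.cancel()
    for d in done:
        if d.exception() is not None:
            print("fatal, cancelled the rest:", d.exception())

asyncio.run(main())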
I have been experimenting with asyncio for a little while and have read the PEPs, a few tutorials, and even the O'Reilly book.
I think I got the hang of it, but I'm still puzzled by the behavior of loop.close(), and I can't quite figure out when it is "safe" to invoke it.
Distilled to its simplest, my use case is a bunch of blocking "old school" calls, which I wrap in the run_in_executor() and an outer coroutine; if any of those calls goes wrong, I want to stop progress, cancel the ones still outstanding, print a sensible log and then (hopefully, cleanly) get out of the way.
Say, something like this:
import asyncio
import time

def blocking(num):
    time.sleep(num)
    if num == 2:
        raise ValueError("don't like 2")
    return num

async def my_coro(loop, num):
    try:
        result = await loop.run_in_executor(None, blocking, num)
        print(f"Coro {num} done")
        return result
    except asyncio.CancelledError:
        # Do some cleanup here.
        print(f"man, I was canceled: {num}")

def main():
    loop = asyncio.get_event_loop()
    tasks = []
    for num in range(5):
        tasks.append(loop.create_task(my_coro(loop, num)))
    try:
        # No point in waiting; if any of the tasks go wrong, I
        # just want to abandon everything. The ALL_DONE is not
        # a good solution here.
        future = asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
        done, pending = loop.run_until_complete(future)
        if pending:
            print(f"Still {len(pending)} tasks pending")
            # I tried putting a stop() - with/without a run_forever()
            # after the for - same exception raised.
            # loop.stop()
            for future in pending:
                future.cancel()
        for task in done:
            res = task.result()
            print("Task returned", res)
    except ValueError as error:
        print("Outer except --", error)
    finally:
        # I also tried placing the run_forever() here,
        # before the stop() - no dice.
        loop.stop()
        if pending:
            print("Waiting for pending futures to finish...")
            loop.run_forever()
        loop.close()

main()
I tried several variants of the stop() and run_forever() calls; "run_forever first, then stop" seems to be the one to use according to the pydoc, and, without the call to close(), it yields a satisfying:
Coro 0 done
Coro 1 done
Still 2 tasks pending
Task returned 1
Task returned 0
Outer except -- don't like 2
Waiting for pending futures to finish...
man, I was canceled: 4
man, I was canceled: 3
Process finished with exit code 0
However, when the call to close() is added (as shown above) I get two exceptions:
exception calling callback for <Future at 0x104f21438 state=finished returned int>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/_base.py", line 324, in _invoke_callbacks
callback(self)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/futures.py", line 414, in _call_set_state
dest_loop.call_soon_threadsafe(_set_state, destination, source)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 620, in call_soon_threadsafe
self._check_closed()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
which is at best annoying, but to me totally puzzling: and, to make matters worse, I've been unable to figure out what would be The Right Way to handle such a situation.
Thus, two questions:
what am I missing? how should I modify the code above so that, with the call to close() included, it does not raise?
what actually happens if I don't call close() - in this trivial case, I presume it's largely redundant; but what might the consequences be in a "real" production code?
For my own personal satisfaction, also:
why does it raise at all? what more does the loop want from the coros/tasks: they either exited; raised; or were canceled: isn't this enough to keep it happy?
Many thanks in advance for any suggestions you may have!
Distilled to its simplest, my use case is a bunch of blocking "old school" calls, which I wrap in the run_in_executor() and an outer coroutine; if any of those calls goes wrong, I want to stop progress, cancel the ones still outstanding
This can't work as envisioned because run_in_executor submits the function to a thread pool, and OS threads can't be cancelled in Python (or in other languages that expose them). Canceling the future returned by run_in_executor will attempt to cancel the underlying concurrent.futures.Future, but that will only have effect if the blocking function is not yet running, e.g. because the thread pool is busy. Once it starts to execute, it cannot be safely cancelled. Support for safe and reliable cancellation is one of the benefits of using asyncio compared to threads.
If you are dealing with synchronous code, be it a legacy blocking call or longer-running CPU-bound code, you should run it with run_in_executor and incorporate a way to interrupt it. For example, the code could occasionally check a stop_requested flag and exit if that is true, perhaps by raising an exception. Then you can "cancel" those tasks by setting the appropriate flag or flags.
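For example, a sketch along those lines (the stop_requested event and the particular loop structure are illustrative, not part of the question's code): the blocking function polls a threading.Event, so "cancelling" it just means setting the event.

import asyncio
import threading
import time

stop_requested = threading.Event()

def blocking(num):
    # A long-running legacy call rewritten to poll the flag occasionally.
    for _ in range(num * 10):
        if stop_requested.is_set():
            raise RuntimeError("aborted by request")
        time.sleep(0.1)
    return num

async def main():
    loop = asyncio.get_running_loop()
    futs = [loop.run_in_executor(None, blocking, n) for n in (1, 3, 5)]
    done, pending = await asyncio.wait(futs, return_when=asyncio.FIRST_COMPLETED)
    print("first result:", [f.result() for f in done])
    # "Cancel" the rest by asking the blocking functions to stop themselves.
    stop_requested.set()
    if pending:
        await asyncio.wait(pending)  # they exit quickly once they see the flag
        for f in pending:
            try:
                f.result()
            except RuntimeError as exc:
                print("worker stopped:", exc)

asyncio.run(main())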
how should I modify the code above so that, with the call to close() included, it does not raise?
As far as I can tell, there is currently no way to do so without modifications to blocking and the top-level code. run_in_executor will insist on informing the event loop of the result, and this fails when the event loop is closed. It doesn't help that the asyncio future is cancelled, because the cancellation check is performed in the event loop thread, and the error occurs before that, when call_soon_threadsafe is called by the worker thread. (It might be possible to move the check to the worker thread, but it should be carefully analyzed whether that leads to a race condition between the call to cancel() and the actual check.)
why does it raise at all? what more does the loop want from the coros/tasks: they either exited; raised; or were canceled: isn't this enough to keep it happy?
It wants the blocking functions passed to run_in_executor (literally called blocking in the question) that have already been started to finish running before the event loop is closed. You cancelled the asyncio future, but the underlying concurrent future still wants to "phone home", finding the loop closed.
It is not obvious whether this is a bug in asyncio, or if you are simply not supposed to close an event loop until you somehow ensure that all work submitted to run_in_executor is done. Doing so requires the following changes:
Don't attempt to cancel the pending futures. Canceling them looks correct superficially, but it prevents you from being able to wait() for those futures, as asyncio will consider them complete.
Instead, send an application-specific event to your background tasks informing them that they need to abort.
Call loop.run_until_complete(asyncio.wait(pending)) before loop.close().
With these modifications (except for the application-specific event - I simply let the sleep()s finish their course), the exception did not appear.
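Put together, a sketch of the question's main() with those changes applied (the application-specific abort signal is omitted here, so the remaining sleeps simply run to completion before the loop is closed):

import asyncio
import time

def blocking(num):
    time.sleep(num)
    if num == 2:
        raise ValueError("don't like 2")
    return num

async def my_coro(loop, num):
    result = await loop.run_in_executor(None, blocking, num)
    print(f"Coro {num} done")
    return result

def main():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    tasks = [loop.create_task(my_coro(loop, num)) for num in range(5)]
    done, pending = set(), set()
    try:
        done, pending = loop.run_until_complete(
            asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION))
        for task in done:
            print("Task returned", task.result())
    except ValueError as error:
        print("Outer except --", error)
    finally:
        # Do NOT cancel the pending futures; wait for the executor-backed
        # work they wrap to finish, and only then close the loop.
        if pending:
            loop.run_until_complete(asyncio.wait(pending))
        loop.close()

main()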
what actually happens if I don't call close() - in this trivial case, I presume it's largely redundant; but what might the consequences be in a "real" production code?
Since a typical event loop runs as long as the application, there should be no issue in not calling close() at the very end of the program. The operating system will clean up the resources on program exit anyway.
Calling loop.close() is important for event loops that have a clear lifetime. For example, a library might create a fresh event loop for a specific task, run it in a dedicated thread, and dispose of it. Failing to close such a loop could leak its internal resources (such as the pipe it uses for inter-thread wakeup) and cause the program to fail. Another example is test suites, which often start a new event loop for each unit test to ensure separation of test environments.
EDIT: I filed a bug for this issue.
EDIT 2: The bug was fixed by devs.
Until the upstream issue is fixed, another way to work around the problem is by replacing the use of run_in_executor with a custom version without the flaw. While rolling one's own run_in_executor sounds like a bad idea at first, it is in fact only a small piece of glue between a concurrent.futures future and an asyncio future.
A simple version of run_in_executor can be cleanly implemented using the public API of those two classes:
import asyncio
import functools

def run_in_executor(executor, fn, *args):
    """Submit FN to EXECUTOR and return an asyncio future."""
    loop = asyncio.get_event_loop()
    if args:
        fn = functools.partial(fn, *args)
    work_future = executor.submit(fn)
    aio_future = loop.create_future()
    aio_cancelled = False

    def work_done(_f):
        if not aio_cancelled:
            loop.call_soon_threadsafe(set_result)

    def check_cancel(_f):
        nonlocal aio_cancelled
        if aio_future.cancelled():
            work_future.cancel()
            aio_cancelled = True

    def set_result():
        if work_future.cancelled():
            aio_future.cancel()
        elif work_future.exception() is not None:
            aio_future.set_exception(work_future.exception())
        else:
            aio_future.set_result(work_future.result())

    work_future.add_done_callback(work_done)
    aio_future.add_done_callback(check_cancel)
    return aio_future
When loop.run_in_executor(blocking) is replaced with run_in_executor(executor, blocking), executor being a ThreadPoolExecutor created in main(), the code works without other modifications.
Of course, in this variant the synchronous functions will continue running in the other thread to completion despite being canceled -- but that is unavoidable without modifying them to support explicit interruption.
I have a question about how the event loop in python's asyncio module manages outstanding tasks. Consider the following code:
import asyncio

@asyncio.coroutine
def a():
    for i in range(0, 3):
        print('a.' + str(i))
        yield

@asyncio.coroutine
def b():
    for i in range(0, 3):
        print('b.' + str(i))
        yield

@asyncio.coroutine
def c():
    for i in range(0, 3):
        print('c.' + str(i))
        yield

tasks = [
    asyncio.Task(a()),
    asyncio.Task(b()),
    asyncio.Task(c()),
]

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))
Running this will print:
a.0
b.0
c.0
a.1
b.1
c.1
a.2
b.2
c.2
Notice that it always prints out 'a' then 'b' then 'c'. I'm guessing that no matter how many iterations each coroutine goes through it will always print in that order. So you'd never see something like
b.100
c.100
a.100
Coming from a node.js background, this tells me that the event loop here is maintaining a queue internally that it uses to decide which task to run next. It initially puts a() at the front of the queue, then b(), then c() since that's the order of the tasks in the list passed to asyncio.wait(). Then whenever it hits a yield statement it puts that task at the end of the queue. I guess in a more realistic example, say if you were doing an async http request, it would put a() back on the end of the queue after the http response came back.
Can I get an amen on this?
Currently your example doesn't include any blocking I/O code. Try this to simulate some tasks:
import asyncio

@asyncio.coroutine
def coro(tag, delay):
    for i in range(1, 8):
        print(tag, i)
        yield from asyncio.sleep(delay)

loop = asyncio.get_event_loop()

print("---- await 0 seconds :-) --- ")
tasks = [
    asyncio.Task(coro("A", 0)),
    asyncio.Task(coro("B", 0)),
    asyncio.Task(coro("C", 0)),
]
loop.run_until_complete(asyncio.wait(tasks))

print("---- simulate some blocking I/O --- ")
tasks = [
    asyncio.Task(coro("A", 0.1)),
    asyncio.Task(coro("B", 0.3)),
    asyncio.Task(coro("C", 0.5)),
]
loop.run_until_complete(asyncio.wait(tasks))

loop.close()
As you can see, coroutines are scheduled as needed, and not in order.
DISCLAIMER: For at least v3.9 with the default implementation this appears to be true. However, the inner workings of the event loop are not a public interface and thus may change in new versions. Additionally, asyncio allows the BaseEventLoop implementation to be substituted, which may change its behavior.
When a Task object is created, it calls loop.call_soon to register its _step method as a callback. The _step method actually does the work of calling your coroutine with calls to send() and processing the results.
In BaseEventLoop, loop.call_soon places the _step callback at the end of a _ready list of callbacks. Each run of the event loop, iterates the list of _ready callbacks in a FIFO order and calls them. Thus, for the initial run of tasks, they are executed in the order they are created.
When the task awaits or yields a future, exactly when the task's _wakeup method gets put into the queue depends on the nature of that future.
Also, note that other callbacks can be registered in between creation of tasks. While it is true that if TaskA is created before TaskB, the initial run of TaskA will happen before TaskB, there could still be other callbacks that get run in between.
Last, the above behavior applies to the default Task class that comes with asyncio. It's possible, however, to specify a custom task factory and use an alternative task implementation, which could also change this behavior.
(This is a follow up to D-Rock's answer, was too long to be a comment.)
The execution order of callbacks is guaranteed in the asyncio documentation in a few places.
The loop.call_soon() docs guarantee the execution order:
Callbacks are called in the order in which they are registered. Each callback will be called exactly once.
The Future.add_done_callback() docs specify that callbacks are scheduled via loop.call_soon(), and thus have this guaranteed FIFO order.
And asyncio.Task is described as a subclass of asyncio.Future, and so has the same behaviour for add_done_callback().
So I think it's pretty safe to rely on FIFO ordering of asyncio callbacks, at least when using vanilla asyncio.
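A tiny demo of that FIFO guarantee (just an illustration; a single sleep(0) is enough here to let the already-queued callbacks run before the task resumes):

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    order = []
    # call_soon queues callbacks onto the loop's ready list in FIFO order.
    loop.call_soon(order.append, "first")
    loop.call_soon(order.append, "second")
    loop.call_soon(order.append, "third")
    await asyncio.sleep(0)  # yield once so the queued callbacks can run
    print(order)            # ['first', 'second', 'third']

asyncio.run(main())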