I'm working with the Anki Cozmo SDK, which requires the use of async functions to make API calls. I'm trying to coordinate two of them, which sometimes requires an optional "move" call prior to another async task.
Simply put I need two asynchronous tasks to run on the same loop, but not start the second until the first is finished.
loop = asyncio.get_event_loop()
async_tasks = []
async_tasks.append(asyncio.ensure_future(bot1.draw_line(), loop=loop))
async_tasks.append(asyncio.ensure_future(bot2.draw_line(), loop=loop))

if draw_util.point_conflicts(selected_plans, bot.position, CDIST):
    safe_position = draw_util.find_safe_point_2_robots(selected_plans, bot.position, CDIST + 1)
    task = asyncio.ensure_future(bot.move_to(safe_position), loop=loop)
    await task

await asyncio.gather(*async_tasks)
I need some way to wait for the move_to task to complete before continuing onto running the async_tasks. How could this be done?
I've tried using loop.run_until_complete() to the same effect.
I noticed this question a whole year later, but here it is:
The asyncio.wait_for function does exactly what you're (or, actually, were) looking for: it blocks until a task is completed.
Please note that this function is a coroutine too, so you'll have to call it inside an async function.
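For illustration, a minimal sketch against the question's snippet (bot, bot1, bot2, draw_util, selected_plans and CDIST are the asker's names, assumed here; untested):

import asyncio

async def run_plans():
    if draw_util.point_conflicts(selected_plans, bot.position, CDIST):
        safe_position = draw_util.find_safe_point_2_robots(selected_plans, bot.position, CDIST + 1)
        # asyncio.wait_for is itself a coroutine: awaiting it blocks this
        # function until move_to finishes (timeout=None waits indefinitely).
        await asyncio.wait_for(bot.move_to(safe_position), timeout=None)
    # only after the optional move completes do the two draws start
    await asyncio.gather(bot1.draw_line(), bot2.draw_line())

asyncio.get_event_loop().run_until_complete(run_plans())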
Apologies for any incorrect terminology or poor description - I'm new to asyncio. Please feel free to edit/correct/improve my question.
I have a scenario where my asyncio application needs to use run_in_executor to run a sync function from a third-party library. This in turn calls a sync function in my code at certain times. I want this function to be able to then add tasks to my main loop.
I tried creating a new shorter-lived mini-loop within the sync function/thread, but it turns out another library (tortoise-orm with sqlite) requires the task to be on the main loop (it uses an asyncio lock).
I'm wondering if there is a best-practice way of achieving this, my attempts so far are messy and have mixed results. I'm not sure if I'm thinking correctly that this is a valid use of contextvar / asyncio.run_coroutine_threadsafe.
Any tips would be appreciated.
A simplified example:
def add_item_to_database(item: dict) -> None:
    # Is there a way to obtain the "main" loop here? contextvar?
    # Do I use this with asyncio.run_coroutine_threadsafe?
    # TODO
    raise NotImplementedError(
        "How do I now schedule/submit an async function back "
        "on my main loop?"
    )
import asyncio

async def my_coro() -> None:
    loop = asyncio.get_running_loop()
    # third party sync function will be long-running,
    # and occasionally call my sync "callback"
    # add_item_to_database function from another module.
    await loop.run_in_executor(
        None,
        third_party_sync_function_that_will_call_add_item_to_database
    )

async def main() -> None:
    # other 'proper' asyncio coroutine tasks also exist - omitted here.
    tasks = [asyncio.create_task(my_coro())]
    await asyncio.gather(*tasks)

asyncio.run(main())
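For what it's worth, a sketch of the asyncio.run_coroutine_threadsafe idea mentioned above (untested; save_item and the module-level _main_loop holder are hypothetical names introduced here):

import asyncio

_main_loop = None  # hypothetical holder for the main loop

async def save_item(item: dict) -> None:
    ...  # e.g. the tortoise-orm work that must run on the main loop

def add_item_to_database(item: dict) -> None:
    # runs in the executor thread: schedule the coroutine on the main
    # loop in a thread-safe way and (optionally) block this worker on it
    future = asyncio.run_coroutine_threadsafe(save_item(item), _main_loop)
    future.result()  # blocks the worker thread, not the event loop

async def main() -> None:
    global _main_loop
    _main_loop = asyncio.get_running_loop()  # capture before handing off
    await _main_loop.run_in_executor(
        None,
        third_party_sync_function_that_will_call_add_item_to_database,
    )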
I am trying to learn to use asyncio in Python to optimize scripts.
My example produces a "coroutine was never awaited" warning; can you help me understand it and find how to solve it?
import time
import datetime
import random
import asyncio
import aiohttp
import requests
def requete_bloquante(num):
    print(f'Get {num}')
    uid = requests.get("https://httpbin.org/uuid").json()['uuid']
    print(f"Res {num}: {uid}")

def faire_toutes_les_requetes():
    for x in range(10):
        requete_bloquante(x)

print("Bloquant : ")
start = datetime.datetime.now()
faire_toutes_les_requetes()
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")
async def requete_sans_bloquer(num, session):
    print(f'Get {num}')
    async with session.get("https://httpbin.org/uuid") as response:
        uid = (await response.json()['uuid'])
        print(f"Res {num}: {uid}")

async def faire_toutes_les_requetes_sans_bloquer():
    loop = asyncio.get_event_loop()
    with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        loop.run_until_complete(asyncio.gather(*futures))
    loop.close()
    print("Fin de la boucle !")

print("Non bloquant : ")
start = datetime.datetime.now()
faire_toutes_les_requetes_sans_bloquer()
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")
The first classic part of the code runs correctly, but the second half only produces:
synchronicite.py:43: RuntimeWarning: coroutine 'faire_toutes_les_requetes_sans_bloquer' was never awaited
You made faire_toutes_les_requetes_sans_bloquer an awaitable function, a coroutine, by using async def.
When you call an awaitable function, you create a new coroutine object. The code inside the function won't run until you then await on the function or run it as a task:
>>> async def foo():
... print("Running the foo coroutine")
...
>>> foo()
<coroutine object foo at 0x10b186348>
>>> import asyncio
>>> asyncio.run(foo())
Running the foo coroutine
You want to keep that function synchronous, because you don't start the loop until inside that function:
def faire_toutes_les_requetes_sans_bloquer():
    loop = asyncio.get_event_loop()
    # ...
    loop.close()
    print("Fin de la boucle !")
However, you are also trying to use an aiohttp.ClientSession() object, and that's an asynchronous context manager: you are expected to use it with async with, not plain with, so it has to be used inside an awaitable task. If you use with instead of async with, a TypeError("Use async with instead") exception will be raised.
That all means you need to move the loop.run_until_complete() call out of your faire_toutes_les_requetes_sans_bloquer() function, so you can keep that as the main task to be run; you can then call and await asyncio.gather() directly:
async def faire_toutes_les_requetes_sans_bloquer():
    async with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        await asyncio.gather(*futures)
    print("Fin de la boucle !")
print("Non bloquant : ")
start = datetime.datetime.now()
asyncio.run(faire_toutes_les_requetes_sans_bloquer())
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")
I used the new asyncio.run() function (Python 3.7 and up) to run the single main task. This creates a dedicated loop for that top-level coroutine and runs it until complete.
Next, you need to move the closing ) parenthesis on the await response.json() expression:
uid = (await response.json())['uuid']
You want to access the 'uuid' key on the result of the await, not the coroutine that response.json() produces.
With those changes your code works, but the asyncio version finishes in sub-second time; you may want to print fractional seconds:
exec_time = (datetime.datetime.now() - start).total_seconds()
print(f"Pour faire 10 requêtes, ça prend {exec_time:.3f}s\n")
On my machine, the synchronous requests code completes in about 4-5 seconds, and the asyncio code completes in under 0.5 seconds.
Do not call loop.run_until_complete inside an async function. The purpose of that method is to run an async function in a sync context. Anyway, here's how you should change the code:
async def faire_toutes_les_requetes_sans_bloquer():
    async with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        await asyncio.gather(*futures)
    print("Fin de la boucle !")

loop = asyncio.get_event_loop()
loop.run_until_complete(faire_toutes_les_requetes_sans_bloquer())
Note that a bare faire_toutes_les_requetes_sans_bloquer() call creates a coroutine object that has to be either awaited via an explicit await (for which you have to be inside an async context) or passed to some event loop. When left alone, Python complains about that. In your original code you did neither.
Not sure if this was the issue for you, but for me the response from the coroutine was another coroutine, so my code started warning me (note: not actually crashing) that I had created coroutines that weren't being awaited. After I actually awaited them (although I didn't really use the response), the error went away.
The main code I added was:
content_from_url_as_str: list[str] = await asyncio.gather(*content_from_url, return_exceptions=True)
inspired by seeing:
response: str = await content_from_url[0]
Full code:
"""
-- Notes from [1]
Threading and asyncio both run on a single processor and therefore only run one task at a time [1]. It's cooperative concurrency.
Note: threads.py has a very good block with good definitions for io-bound, cpu-bound if you need to recall it.
Note: coroutine is an important definition to understand before proceeding. A definition is provided at the end of this tutorial.
General idea for asyncio is that there is a general event loop that controls how and when each tasks gets run.
The event loop is aware of each task and knows what states they are in.
For simplicity of exposition, assume there are only two states:
a) Ready state
b) Waiting state
a) indicates that a task has work to do and can be run, while b) indicates that a task is waiting for a response from an
external thing (e.g. io, printer, disk, network, coq, etc.). This simplified event loop has two lists of tasks
(ready_to_run_lst, waiting_lst) and runs things from the ready-to-run list. Once a task runs it is in complete control
until it cooperatively hands back control to the event loop.
The way it works is that the task that was run does what it needs to do (usually an io operation, or an interleaved op
or something like that), but crucially it gives control back to the event loop when the running task (with control) thinks it is best.
(Note that this means the task might not have fully completed getting what it "fully needs".
This is probably useful when the user wants to implement the interleaving himself.)
Once the task cooperatively gives back control to the event loop it is placed by the event loop in either the
ready-to-run list or the waiting list (depending on how fast the io ran, etc.). Then the event loop goes through the waiting
list to see if anything waiting has "returned".
Once all the tasks have been sorted into the right list, the event loop is able to choose what to run next (e.g. by
choosing the one that has been waiting to be run the longest). This repeats until the event loop code you wrote is done.
The crucial point (and distinction from threads) that we want to emphasize is that in asyncio, an operation is never
interrupted in the middle, and every switch/interleaving is done deliberately by the programmer.
In a way you don't have to worry about making your code thread safe.
For more details see [2], [3].
Asyncio syntax:
i) await = this is where the code you wrote calls an expensive function (e.g. an io) and thus hands back control to the
event loop. The event loop will then likely put it in the waiting list and run some other task. Eventually
the event loop comes back to this function and runs the remaining code, given that we now have the value from the io.
await = the keyword that does (mainly) two things: 1) gives control back to the event loop to see if there is something
else to run, if we called it on a really expensive io operation (e.g. calling network, printer, etc.); 2) gives control to
the new coroutine (code that might give up control cooperatively) that it is awaiting. If this is your own code with async,
then execution will go into this new async function (coroutine) you defined.
No real async benefits are experienced until you call (await) a real io; e.g. asyncio.sleep is the typical debug example.
todo: clarify, I think await doesn't actually give control back to the event loop but instead runs the "coroutine" this
await is pointing to. This means that if it's a real IO then it will actually give control back to the event loop
to do something else. In this case it is actually doing something "in parallel" in the async way.
Otherwise, it is your own python coroutine, and thus it gives it control but "no true async parallelism" happens.
ii) async = approximately a flag that tells python the defined function might use await. This is not strictly true, but
it gives you a simple model while you're getting started. todo - clarify async.
async = defines a coroutine. This doesn't define a real io; it only defines a function that can give up
execution power to other coroutines or the (asyncio) event loop.
todo - context manager with async
iii) awaiting = when you call something (e.g. a function) that usually requires waiting for the io response/return/value.
todo: though it seems it's also the python keyword to give control to a coroutine you wrote in python, or to give
control to the event loop, assuming you're awaiting an actual io call.
iv) async with = this creates a context manager from an object you would normally await - i.e. an object you would
wait on to get the return value from an io. So usually we swap out (switch) from this object.
todo - e.g.
Note: - any function that calls await needs to be marked with async, or you'll get a syntax error.
- a task never gives up control without intentionally doing so, e.g. never in the middle of an op.
Cons: - note how this also requires thinking carefully about the concurrency (though it feels less dangerous than
threading, due to no pre-emptive switching). Another disadvantage is again the idiosyncrasies of using this in python +
learning new syntax and details for it to actually work.
- understanding the semantics of the new syntax + learning where to really put the syntax to avoid semantic errors.
- we needed a special asyncio-compatible lib for requests, since the normal requests is not designed to inform
the event loop that it's blocking (or done blocking).
- if one of the tasks doesn't cooperate properly then the whole code can be a mess and slow things down.
- not all libraries support the async IO paradigm in python (e.g. asyncio, trio, etc.).
Pro: + despite learning where to put await and async being annoying, it forces you to think carefully about your code,
which in itself can be an advantage (e.g. better, faster, fewer bugs due to thinking carefully)
+ often faster...? (skeptical)
1. https://realpython.com/python-concurrency/
2. https://realpython.com/async-io-python/
3. https://stackoverflow.com/a/51116910/6843734
todo - read [2] later (or [3], but that's not a tutorial and it's more detailed, so perhaps not a priority).
asynchronous = 1) dictionary def: not happening at the same time,
e.g. happening independently; 2) computing def: happening independently of the main program flow
coroutine = computer program components that generalize subroutines for non-preemptive multitasking, by allowing execution to be suspended and resumed.
So basically it's a routine/"function" that can give up control in "a controlled way" (i.e. not randomly as with threads).
Usually they are associated with a single process -- so it's concurrent but not parallel.
Interesting note: Coroutines are well-suited for implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, infinite lists and pipes.
Likely we have an event loop in this document as an example. I guess yield (and generators) are good examples too!
Interesting contrast with subroutines: Subroutines are special cases of coroutines.[3] When subroutines are invoked, execution begins at the start,
and once a subroutine exits, it is finished; an instance of a subroutine only returns once, and does not hold state between invocations.
By contrast, coroutines can exit by calling other coroutines, which may later return to the point where they were invoked in the original coroutine;
from the coroutine's point of view, it is not exiting but calling another coroutine.
Coroutines are very similar to threads. However, coroutines are cooperatively multitasked, whereas threads are typically preemptively multitasked.
event loop = event loop is a programming construct or design pattern that waits for and dispatches events or messages in a program.
Appendix:
For I/O-bound problems, there’s a general rule of thumb in the Python community:
“Use asyncio when you can, threading when you must.”
asyncio can provide the best speed up for this type of program, but sometimes you will require critical libraries that
have not been ported to take advantage of asyncio.
Remember that any task that doesn’t give up control to the event loop will block all of the other tasks
-- Notes from [2]
see asyncio_example2.py file.
The sync file should take longer, e.g. in one run the async file took:
Downloaded 160 sites in 0.4063692092895508 seconds
While the sync option took:
Downloaded 160 in 3.351937770843506 seconds
"""
import asyncio
from asyncio import Task
from asyncio.events import AbstractEventLoop
import aiohttp
from aiohttp import ClientResponse
from aiohttp.client import ClientSession
from typing import Coroutine
import time
async def download_site(session: ClientSession, url: str) -> str:
    async with session.get(url) as response:
        print(f"Read {response.content_length} from {url}")
        return response.text()

async def download_all_sites(sites: list[str]) -> list[str]:
    # async with = this creates a context manager from an object you would normally await - i.e. an object you would wait on to get the return value from an io. So usually we swap out (switch) from this object.
    async with aiohttp.ClientSession() as session:  # we will usually await session.FUNCS
        # create all the download coroutines/tasks to be later managed/run by the event loop
        tasks: list[Task] = []
        for url in sites:
            # creates a task from a coroutine. todo: basically it seems it creates a callable coroutine? (i.e. a function that is able to give up control cooperatively, or runs an external io and thus also gives back control cooperatively to the event loop). read more? https://stackoverflow.com/questions/36342899/asyncio-ensure-future-vs-baseeventloop-create-task-vs-simple-coroutine
            task: Task = asyncio.ensure_future(download_site(session, url))
            tasks.append(task)
        # runs tasks/coroutines in the event loop and aggregates the results. todo: does this halt until all coroutines have returned? I think so, due to the paradigm of how async code works.
        content_from_url: list[ClientResponse.text] = await asyncio.gather(*tasks, return_exceptions=True)
        assert isinstance(content_from_url[0], Coroutine)  # note all responses are coroutines
        print(f'result after aggregating/doing all coroutine tasks/jobs = {content_from_url=}')
        # this is needed since the response is in a coroutine object for some reason
        content_from_url_as_str: list[str] = await asyncio.gather(*content_from_url, return_exceptions=True)
        print(f'result after getting response from coroutines that hold the text = {content_from_url_as_str=}')
        return content_from_url_as_str
if __name__ == "__main__":
# - args
num_sites: int = 80
sites: list[str] = ["https://www.jython.org", "http://olympus.realpython.org/dice"] * num_sites
start_time: float = time.time()
# - run the same 160 tasks but without async paradigm, should be slower!
# note: you can't actually do this here because you have the async definitions to your functions.
# to test the synchronous version see the synchronous.py file. Then compare the two run times.
# await download_all_sites(sites)
# download_all_sites(sites)
# - Execute the coroutine coro and return the result.
asyncio.run(download_all_sites(sites))
# - run event loop manager and run all tasks with cooperative concurrency
# asyncio.get_event_loop().run_until_complete(download_all_sites(sites))
# makes explicit the creation of the event loop that manages the coroutines & external ios
# event_loop: AbstractEventLoop = asyncio.get_event_loop()
# asyncio.run(download_all_sites(sites))
# making creating the coroutine that hasn't been ran yet with it's args explicit
# event_loop: AbstractEventLoop = asyncio.get_event_loop()
# download_all_sites_coroutine: Coroutine = download_all_sites(sites)
# asyncio.run(download_all_sites_coroutine)
# - print stats about the content download and duration
duration = time.time() - start_time
print(f"Downloaded {len(sites)} sites in {duration} seconds")
print('Success.\a')
I want to do web scraping over a set of categories, where each category has a list of URLs. So I decided to call a function for each category in the main function, and within the inner function there are non-blocking calls.
So here is the code:
def main():
    loop = asyncio.get_event_loop()
    p = loop.create_task(f("p", all_p_list))
    f = loop.create_task(f("f", all_f_list))
    loop.run_until_complete(asyncio.gather(p, f))
It should execute the two f calls concurrently.
But the f function also has to run the loop, since inside it, it calls a function for each URL simultaneously:
async def f(category, total):
    urls = [urls_template[category].format(t) for t in t_list]
    soups_coro = map(parseURL_async, urls)
    loop = asyncio.get_event_loop()
    result = await loop.run_until_complete(asyncio.gather(*soups_coro))
But after I run the script, I got a This event loop is already running error, and I found that it is because I call loop.run_until_complete() in both the inner and outer functions.
However, when I strip out the run_until_complete() and just call f() in main(), the function call finishes immediately and doesn't wait for the inner function to finish. So it is inevitable to call the loop in main(). But then I think it is incompatible with the inner function, which also must call it.
How can I deal with the problem and run the loop? The original code is all in the same main() and it worked, but I want to make it cleaner if possible.
How can I deal with the problem and run the loop?
The loop is already running. You don't need to (and can't) run it again.
result = await loop.run_until_complete(asyncio.gather(*soups_coro))
You're awaiting the wrong thing. loop.run_until_complete doesn't return something you can await (a Future); it returns the result of whatever you're running until completion.
The reason nothing appears to happen when you call f directly is that f is an asyncio-style coroutine. As such it returns a future that must be scheduled with the event loop. It doesn't execute until a running event loop tells it to. loop.run_until_complete takes care of all of that for you.
To wrap up your question, you want to await asyncio.gather.
async def f(category, total):
    urls = [urls_template[category].format(t) for t in t_list]
    soups_coro = map(parseURL_async, urls)
    result = await asyncio.gather(*soups_coro)
And you probably also want to include return result at the end of f, too.
Convert main() into an async function and execute it with loop.run_until_complete().
When the code has only one run_until_complete() call, everything becomes much easier. In Python 3.7 you can write just asyncio.run(main()).
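A minimal sketch of that suggestion, reusing the asker's names (f, all_p_list and all_f_list come from the question; f_task is introduced here to avoid shadowing f):

async def main():
    p = asyncio.ensure_future(f("p", all_p_list))
    f_task = asyncio.ensure_future(f("f", all_f_list))
    await asyncio.gather(p, f_task)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())  # the only run_until_complete() in the program
# Python 3.7+: asyncio.run(main())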
Let's assume I'm new to asyncio. I'm using async/await to parallelize my current project, and I've found myself passing all of my coroutines to asyncio.ensure_future. Lots of stuff like this:
coroutine = my_async_fn(*args, **kwargs)
task = asyncio.ensure_future(coroutine)
What I'd really like is for a call to an async function to return an executing task instead of an idle coroutine. I created a decorator to accomplish what I'm trying to do.
def make_task(fn):
    def wrapper(*args, **kwargs):
        return asyncio.ensure_future(fn(*args, **kwargs))
    return wrapper

@make_task
async def my_async_func(*args, **kwargs):
    # usually making a request of some sort
    pass
Does asyncio have a built-in way of doing this I haven't been able to find? Am I using asyncio wrong if I'm led to this problem to begin with?
asyncio had a @task decorator in very early pre-release versions, but we removed it.
The reason is that the decorator has no knowledge of what loop to use.
asyncio doesn't instantiate a loop on import; moreover, the test suite usually creates a new loop per test for the sake of test isolation.
Does asyncio have a built-in way of doing this I haven't been able to find?
No, asyncio doesn't have a decorator to cast coroutine functions into tasks.
Am I using asyncio wrong if I'm led to this problem to begin with?
It's hard to say without seeing what you're doing, but I think it may be true. While creating tasks is a usual operation in asyncio programs, I doubt you have created that many coroutines that should always be tasks.
Awaiting a coroutine is a way to "call some function asynchronously" while blocking the current execution flow until it finishes:

await some()
# you'll reach this line *only* when some() is done

A Task, on the other hand, is a way to "run a function in the background"; it won't block the current execution flow:

task = asyncio.ensure_future(some())
# you'll reach this line immediately
When we write asyncio programs we usually need the first way, since we usually need the result of one operation before starting the next one:

text = await request(url)
links = parse_links(text)  # we need to reach this line only when we got 'text'

Creating a task, on the other hand, usually means that the code that follows doesn't depend on the task's result. But that isn't always the case.
Since ensure_future returns immediately, some people try to use it as a way to run some coroutines concurrently:
# wrong way to run concurrently:
asyncio.ensure_future(request(url1))
asyncio.ensure_future(request(url2))
asyncio.ensure_future(request(url3))
The correct way to achieve this is to use asyncio.gather:
# correct way to run concurrently:
await asyncio.gather(
request(url1),
request(url2),
request(url3),
)
Maybe this is what you want?
Upd:
I think using tasks in your case is a good idea. But I don't think you should use a decorator: the coroutine's functionality (making a request) is still separate from its concrete usage detail (that it will be used as a task). If controlling the synchronization of requests is separate from their main functionality, it also makes sense to move the synchronization into a separate function. I would do something like this:
import asyncio

async def request(i):
    print(f'{i} started')
    await asyncio.sleep(i)
    print(f'{i} finished')
    return i

async def when_ready(conditions, coro_to_start):
    await asyncio.gather(*conditions, return_exceptions=True)
    return await coro_to_start

async def main():
    t = asyncio.ensure_future
    t1 = t(request(1))
    t2 = t(request(2))
    t3 = t(request(3))
    t4 = t(when_ready([t1, t2], request(4)))
    t5 = t(when_ready([t2, t3], request(5)))
    await asyncio.gather(t1, t2, t3, t4, t5)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.close()
I've seen several basic Python 3.5 tutorials on asyncio doing the same operation in various flavours.
In this code:
import asyncio

async def doit(i):
    print("Start %d" % i)
    await asyncio.sleep(3)
    print("End %d" % i)
    return i

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    # futures = [asyncio.ensure_future(doit(i), loop=loop) for i in range(10)]
    # futures = [loop.create_task(doit(i)) for i in range(10)]
    futures = [doit(i) for i in range(10)]
    result = loop.run_until_complete(asyncio.gather(*futures))
    print(result)
All three variants above that define the futures variable achieve the same result; the only difference I can see is that with the third variant the execution happens out of order (which should not matter in most cases). Is there any other difference? Are there cases where I can't just use the simplest variant (a plain list of coroutines)?
Actual info:
Starting from Python 3.7, the high-level function asyncio.create_task(coro) was added for this purpose.
You should use it instead of other ways of creating tasks from coroutines. However, if you need to create a task from an arbitrary awaitable, you should use asyncio.ensure_future(obj).
Old info:
ensure_future vs create_task
ensure_future is a method to create a Task from a coroutine. It creates tasks in different ways based on its argument (including using create_task for coroutines and future-like objects).
create_task is an abstract method of AbstractEventLoop. Different event loops can implement this function in different ways.
You should use ensure_future to create tasks. You'll need create_task only if you're going to implement your own event loop type.
Upd:
@bj0 pointed at Guido's answer on this topic:
The point of ensure_future() is if you have something that could
either be a coroutine or a Future (the latter includes a Task because
that's a subclass of Future), and you want to be able to call a method
on it that is only defined on Future (probably about the only useful
example being cancel()). When it is already a Future (or Task) this
does nothing; when it is a coroutine it wraps it in a Task.
If you know that you have a coroutine and you want it to be scheduled,
the correct API to use is create_task(). The only time when you should
be calling ensure_future() is when you are providing an API (like most
of asyncio's own APIs) that accepts either a coroutine or a Future and
you need to do something to it that requires you to have a Future.
and later:
In the end I still believe that ensure_future() is an appropriately
obscure name for a rarely-needed piece of functionality. When creating
a task from a coroutine you should use the appropriately-named
loop.create_task(). Maybe there should be an alias for that
asyncio.create_task()?
This is surprising to me. My main motivation for using ensure_future all along was that it's a higher-level function compared to the loop's member create_task (the discussion contains some ideas like adding asyncio.spawn or asyncio.create_task).
I can also point out that, in my opinion, it's pretty convenient to use a universal function that can handle any Awaitable rather than coroutines only.
However, Guido's answer is clear: "When creating a task from a coroutine you should use the appropriately-named loop.create_task()"
When should coroutines be wrapped in tasks?
Wrapping a coroutine in a Task is a way to start that coroutine "in the background". Here's an example:
import asyncio

async def msg(text):
    await asyncio.sleep(0.1)
    print(text)

async def long_operation():
    print('long_operation started')
    await asyncio.sleep(3)
    print('long_operation finished')

async def main():
    await msg('first')
    # Now you want to start long_operation, but you don't want to wait until it's finished:
    # long_operation should be started, but the second msg should be printed immediately.
    # Create a task to do so:
    task = asyncio.ensure_future(long_operation())
    await msg('second')
    # Now, when you want, you can wait until the task is finished:
    await task

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
Output:
first
long_operation started
second
long_operation finished
You can replace asyncio.ensure_future(long_operation()) with just await long_operation() to feel the difference.
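For contrast, a sketch of main() with the task replaced by a plain await (the other functions unchanged):

async def main():
    await msg('first')
    await long_operation()  # blocks here until long_operation is done
    await msg('second')

Expected output:

first
long_operation started
long_operation finished
second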
create_task()
- accepts coroutines,
- returns a Task,
- is invoked in the context of the loop.
ensure_future()
- accepts Futures, coroutines, and other awaitable objects,
- returns a Task (or the Future itself if a Future is passed),
- if the given arg is a coroutine, it uses create_task,
- a loop object can be passed.
As you can see, create_task is more specific.
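To illustrate that difference, a small sketch (pre-3.7 style, matching this answer; run where a loop is available):

import asyncio

async def coro():
    return 42

async def demo():
    loop = asyncio.get_event_loop()
    t = loop.create_task(coro())       # OK: the argument is a coroutine
    fut = loop.create_future()
    # loop.create_task(fut)            # TypeError: a coroutine was expected
    same = asyncio.ensure_future(fut)  # OK: a Future is returned unchanged
    assert same is fut
    fut.set_result('done')
    print(await t, await same)         # -> 42 done

asyncio.get_event_loop().run_until_complete(demo())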
async function without create_task or ensure_future
Simply invoking an async function returns a coroutine:
>>> async def doit(i):
... await asyncio.sleep(3)
... return i
>>> doit(4)
<coroutine object doit at 0x7f91e8e80ba0>
And since gather under the hood ensures (via ensure_future) that the args are futures, an explicit ensure_future is redundant.
Similar question What's the difference between loop.create_task, asyncio.async/ensure_future and Task?
Note: Only valid for Python 3.7 (for Python 3.5 refer to the earlier answer).
From the official docs:
asyncio.create_task (added in Python 3.7) is the preferred way of spawning new tasks, instead of ensure_future().
Detail:
So now, in Python 3.7 onwards, there are two top-level wrapper functions (similar but different):
asyncio.create_task: simply calls event_loop.create_task(coro) directly. (see source code)
ensure_future: also calls event_loop.create_task(coro) if the argument is a coroutine; otherwise it simply ensures that the return type is an asyncio.Future. (see source code). Either way, a Task is still a Future due to its class inheritance (ref).
Ultimately, both of these wrapper functions help you call BaseEventLoop.create_task. The only difference is that ensure_future accepts any awaitable object and helps you convert it into a Future. Also, you can provide your own event_loop parameter in ensure_future. Depending on whether you need those capabilities, you can choose which wrapper to use.
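A small sketch of that last point (illustrative only; shows Python 3.7+ behavior as I understand it):

import asyncio

async def coro():
    return 1

# asyncio.create_task(coro())  # RuntimeError: no running event loop
loop = asyncio.new_event_loop()
task = asyncio.ensure_future(coro(), loop=loop)  # explicit loop accepted
print(loop.run_until_complete(task))  # -> 1
loop.close()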
For your example, all three variants execute asynchronously. The only difference is that, in the third example, you pre-generated all 10 coroutines and submitted them to the loop together, so only that last variant gives output in random order.