My Source Code:
import asyncio

async def mycoro(number):
    print(f'Starting {number}')
    await asyncio.sleep(1)
    print(f'Finishing {number}')
    return str(number)

c = mycoro(3)
task = asyncio.create_task(c)
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
loop.close()
Error:
RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'mycoro' was never awaited
I was following a tutorial, and the error says my coroutine was never awaited even though I wrote the same code as the video, where it clearly runs.
Simply run the coroutine directly without creating a task for it:
import asyncio

async def mycoro(number):
    print(f'Starting {number}')
    await asyncio.sleep(1)
    print(f'Finishing {number}')
    return str(number)

c = mycoro(3)
loop = asyncio.get_event_loop()
loop.run_until_complete(c)
loop.close()
The purpose of asyncio.create_task is to create an additional task from inside a running task. Since it directly starts the new task, it must be used inside a running event loop – hence the error when using it outside.
Use loop.create_task(c) if a task must be created from outside a task.
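A minimal sketch of that variant, reusing mycoro from above (new_event_loop is used here to avoid the deprecated get_event_loop behaviour outside a running loop; the short sleep is illustrative):

```python
import asyncio

async def mycoro(number):
    await asyncio.sleep(0.1)
    return str(number)

# Create the loop explicitly, then bind the task to it before it runs;
# loop.create_task works here because the task targets this exact loop.
loop = asyncio.new_event_loop()
task = loop.create_task(mycoro(3))
result = loop.run_until_complete(task)
loop.close()
print(result)  # '3'
```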
In more recent versions of Python, use asyncio.run to avoid having to handle the event loop explicitly:
c = mycoro(3)
asyncio.run(c)
In general, use asyncio.create_task only to increase concurrency. Avoid using it when another task would block immediately.
# bad task usage: concurrency stays the same due to blocking
async def bad_task():
    task = asyncio.create_task(mycoro(0))
    await task

# no task usage: concurrency stays the same due to stacking
async def no_task():
    await mycoro(0)

# good task usage: concurrency is increased via multiple tasks
async def good_task():
    tasks = [asyncio.create_task(mycoro(i)) for i in range(3)]
    print('Starting done, sleeping now...')
    await asyncio.sleep(1.5)
    await asyncio.gather(*tasks)  # ensure subtasks finished
I am new to asynchronous programming, and while I understand most concepts, there is one relating to the inner runnings of 'await' that I don't quite understand.
Consider the following:
import asyncio

async def foo():
    print('start fetching')
    await asyncio.sleep(2)
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching
vs.
async def foo():
    print('start fetching')
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching followed by done fetching
Perhaps it is my understanding of await: I know we can use it to pause (for 2 seconds in the case above), or to wait for a function to fully finish running before any further code runs.
But in the first example above, why does await cause 'done fetching' not to run?
asyncio.create_task schedules an awaitable on the event loop and returns immediately, so you are actually exiting the main function (and closing the event loop) before the task is able to finish.
you need to change main to either
async def main():
    task1 = asyncio.create_task(foo())
    await task1

or

async def main():
    await foo()
Creating a task first (the former) is useful in many cases, but they all involve situations where the event loop will outlast the task, e.g. a long-running server. Otherwise you should just await the coroutine directly, as in the latter.
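A sketch of the former pattern, with an illustrative worker coroutine (name and timings are made up) standing in for a real long-running job:

```python
import asyncio

async def worker():
    # illustrative background job; a real server task would run longer
    await asyncio.sleep(0.1)
    return 'worker done'

async def main():
    # the event loop (and main) outlasts the task here
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.2)  # main does other work meanwhile
    return task.result()      # the task has already finished at this point

res = asyncio.run(main())
print(res)  # worker done
```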
As I understand it, when I call create_task(), the task is put at the end of the event loop's queue.
My use case is the following, I have some tasks made of the same coroutine. I want to cancel all tasks on some failed condition. This is the pattern:
async def coro(params):
    # long running task...
    if failed_condition:
        await cancel_all()  # should cancel all tasks made of coro

async def cancel_all():
    for task in tasks:
        task.cancel()
    await asyncio.gather(*tasks)  # wait for cancel completion
    print("All tasks cancelled")

tasks = []

async def main():
    tasks = [loop.create_task(coro(params)) for x in range(5)]
    asyncio.gather(*tasks)
The problem is that since cancel_all() itself is awaited by one task, it is cancelled by itself.
How can I solve this?
I could use loop.create_task(cancel_all()) instead, but I want cancel_all() to run as soon as possible.
cancel_all() could exclude the current task:
async def cancel_all():
    to_cancel = set(tasks)
    to_cancel.discard(asyncio.current_task())
    for task in to_cancel:
        task.cancel()
    await asyncio.gather(*to_cancel)
You can use asyncio.wait with the FIRST_EXCEPTION parameter.
import asyncio
import random

class UnexpectedCondition(Exception):
    pass

async def coro(condition):
    # long running task...
    await asyncio.sleep(random.random() * 10)
    if condition:
        raise UnexpectedCondition("Failed")
    return "Result"

async def main(f):
    # asyncio.wait requires tasks, not bare coroutines, since Python 3.11
    tasks = [asyncio.create_task(coro(f(x))) for x in range(5)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
    for p in pending:
        # unfinished tasks will be cancelled
        print("Cancel")
        p.cancel()
    for d in done:
        try:
            # get result from finished tasks
            print(d.result())
        except UnexpectedCondition as e:
            # handle tasks with unexpected conditions
            print(e)

asyncio.run(main(lambda x: x % 2 == 0))
I'm toying around with asyncio and I can't achieve what I'm trying to achieve. Here's my code.
import random
import asyncio

async def my_long_operation(s: str):
    print("Starting operation on", s)
    await asyncio.sleep(3)
    print("End", s)

def random_string(length: int = 10):
    alphabet = [chr(x) for x in range(ord('a'), ord('z') + 1)]
    result = ""
    for i in range(length):
        result += random.choice(alphabet)
    return result

def get_strings(n=10):
    for i in range(n):
        s = random_string()
        yield s

async def several(loop):
    tasks = list()
    for s in get_strings():
        task = asyncio.create_task(my_long_operation(s))
        asyncio.ensure_future(task, loop=loop)
    print("OK")

loop = asyncio.get_event_loop()
loop.run_until_complete(several(loop))
# loop.run_forever()
loop.close()
Now my problem is the following.
I'd like to run all of my my_long_operations concurrently, waiting for all of them to finish. The thing is that when I run the code, I get the following error:
Task was destroyed but it is pending!
task: <Task pending coro=<my_long_operation() done, defined at test.py:4> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f75274f7168>()]>>
Which seems to make sense, since my script ended right after starting these operations (without waiting for them to complete).
So we arrive at my actual question: how am I supposed to do this? How can I start these tasks and wait for them to complete before terminating the script, without using run_forever (since run_forever never returns)?
Thanks!
ensure_future is basically the "put it in the event loop and then don't bother me with it" method. What you want instead is to await the completion of all your async functions. For that, use asyncio.wait if you're not particularly interested in the results, or asyncio.gather if you want the results:
tasks = [asyncio.create_task(my_long_operation(s)) for s in get_strings()]
await asyncio.wait(tasks)  # note: the loop argument was removed in Python 3.10
print('OK')
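If the results do matter, asyncio.gather returns them in argument order; a minimal sketch with a stand-in for my_long_operation (short sleep and uppercase transform are illustrative):

```python
import asyncio

async def my_long_operation(s: str):
    # stand-in for the real operation; just transforms its input
    await asyncio.sleep(0.1)
    return s.upper()

async def several():
    # gather runs the coroutines concurrently and keeps result order
    return await asyncio.gather(*(my_long_operation(s) for s in 'abc'))

results = asyncio.run(several())
print(results)  # ['A', 'B', 'C']
```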
I am trying to create a simple monitoring system that periodically checks things and logs them. Here is a cutdown example of the logic I am attempting to use but I keep getting a RuntimeWarning: coroutine 'foo' was never awaited error.
How should I reschedule an async method from itself?
Code in test.py:
import asyncio
from datetime import datetime

async def collect_data():
    await asyncio.sleep(1)
    return {"some_data": 1}

async def foo(loop):
    results = await collect_data()
    # Log the results
    print("{}: {}".format(datetime.now(), results))
    # schedule to run again in X seconds
    loop.call_later(5, foo, loop)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.create_task(foo(loop))
    loop.run_forever()
    loop.close()
Error:
pi@raspberrypi [0] $ python test.py
2018-01-03 01:59:22.924871: {'some_data': 1}
/usr/lib/python3.5/asyncio/events.py:126: RuntimeWarning: coroutine 'foo' was never awaited
self._callback(*self._args)
call_later accepts a plain sync callback (a function defined with def). A coroutine function (async def) must be awaited to be executed.
The cool thing about asyncio is that it imitates plain imperative synchronous code in many ways. How would you solve this task for a plain function? I guess just sleep some time and recursively call the function again. Do the same with asyncio (almost the same: we should use an asynchronous sleep):
import asyncio
from datetime import datetime

async def collect_data():
    await asyncio.sleep(1)
    return {"some_data": 1}

async def foo(loop):
    results = await collect_data()
    # Log the results
    print("{}: {}".format(datetime.now(), results))
    # Schedule to run again in X seconds
    await asyncio.sleep(5)
    return (await foo(loop))

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(foo(loop))
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())  # Python 3.6 only
        loop.close()
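If call_later is kept instead, its callback must stay a plain function; one sketch (shortened delays, hypothetical names) is a sync shim that wraps the coroutine in a task:

```python
import asyncio

results = []

async def collect_data():
    await asyncio.sleep(0.01)
    return {"some_data": 1}

async def foo():
    results.append(await collect_data())

def schedule_foo(loop):
    # plain sync callback, as call_later requires; it creates the task
    loop.create_task(foo())

async def main():
    loop = asyncio.get_running_loop()
    loop.call_later(0.05, schedule_foo, loop)  # no coroutine passed directly
    await asyncio.sleep(0.2)  # keep the loop alive until foo has run

asyncio.run(main())
print(results)  # [{'some_data': 1}]
```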
If you ever need to run foo in the background alongside other coroutines, you can create a task. There is also a way to cancel a task's execution.
Update:
As Andrew pointed out, a plain loop is even better:
async def foo(loop):
    while True:
        results = await collect_data()
        # Log the results
        print("{}: {}".format(datetime.now(), results))
        # Wait before next iteration:
        await asyncio.sleep(5)
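Running such a looping coroutine in the background and cancelling it later could be sketched like this (shortened delays, illustrative names):

```python
import asyncio

async def ticker():
    # stands in for the looping foo above
    while True:
        await asyncio.sleep(0.05)

async def main():
    task = asyncio.create_task(ticker())  # run in the background
    await asyncio.sleep(0.2)              # other work happens here
    task.cancel()                         # request cancellation
    try:
        await task                        # wait until it is actually cancelled
    except asyncio.CancelledError:
        return 'cancelled'

res = asyncio.run(main())
print(res)  # cancelled
```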
I would like to achieve the following using asyncio:
# Each iteration of this loop MUST last only 1 second
while True:
    # Make an async request
    sleep(1)
However, the only examples I've seen use some variation of
async def my_func():
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, requests.get, 'http://www.google.com')

loop = asyncio.get_event_loop()
loop.run_until_complete(my_func())
But run_until_complete is blocking! Using run_until_complete in each iteration of my while loop would cause the loop to block.
I've spent the last couple of hours trying to figure out how to correctly run a non-blocking task (defined with async def) without success. I must be missing something obvious, because something as simple as this should surely be simple. How can I achieve what I have described?
run_until_complete runs the main event loop. It's not "blocking" so to speak, it just runs the event loop until the coroutine you passed as a parameter returns. It has to hang because otherwise, the program would either stop or be blocked by the next instructions.
It's pretty hard to tell what you are willing to achieve, but this piece code actually does something:
async def my_func():
    loop = asyncio.get_event_loop()
    while True:
        res = await loop.run_in_executor(None, requests.get, 'http://www.google.com')
        print(res)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(my_func())
It will perform a GET request on the Google homepage every second, spawning a new thread to perform each request. You can convince yourself that it's actually non-blocking by running multiple requests virtually in parallel:
async def entrypoint():
    await asyncio.wait([
        get('https://www.google.com'),
        get('https://www.stackoverflow.com'),
    ])

async def get(url):
    loop = asyncio.get_event_loop()
    while True:
        res = await loop.run_in_executor(None, requests.get, url)
        print(url, res)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(entrypoint())
Another thing to notice is that you're running each request in a separate thread. It works, but it's sort of a hack. You should rather be using a real asynchronous HTTP client such as aiohttp.
This is Python 3.10
asyncio uses single-threaded execution; await yields the CPU to other functions until whatever is awaited is done.
import asyncio

async def my_func(t):
    print("Start my_func")
    await asyncio.sleep(t)  # The await yields cpu, while we wait
    print("Exit my_func")

async def main():
    asyncio.ensure_future(my_func(10))  # Schedules on event loop; we might want to save the returned future to later check for completion.
    print("Start main")
    await asyncio.sleep(1)  # The await yields cpu, giving my_func chance to start.
    print("running other stuff")
    await asyncio.sleep(15)
    print("Exit main")

if __name__ == "__main__":
    asyncio.run(main())  # Starts event loop