I would like to achieve the following using asyncio:
# Each iteration of this loop MUST last only 1 second
while True:
    # Make an async request
    sleep(1)
However, the only examples I've seen use some variation of
async def my_func():
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, requests.get, 'http://www.google.com')

loop = asyncio.get_event_loop()
loop.run_until_complete(my_func())
But run_until_complete is blocking! Using run_until_complete in each iteration of my while loop would cause the loop to block.
I've spent the last couple of hours trying to figure out how to correctly run a non-blocking task (defined with async def), without success. I must be missing something obvious, because something this simple should surely be easy. How can I achieve what I have described?
run_until_complete runs the main event loop. It's not "blocking" so to speak; it just runs the event loop until the coroutine you passed as a parameter returns. It has to hold control, because otherwise the program would either exit or run on to the next instructions.
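To illustrate that it returns the coroutine's result rather than "blocking" forever, here is a minimal sketch (the add coroutine is my own example, not from the question):

import asyncio

async def add(a, b):
    await asyncio.sleep(0.1)  # the loop is free to run other tasks here
    return a + b

loop = asyncio.get_event_loop()
print(loop.run_until_complete(add(1, 2)))  # prints 3 once add() returns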
It's pretty hard to tell what you are trying to achieve, but this piece of code actually does something:
import asyncio
import requests

async def my_func():
    loop = asyncio.get_event_loop()
    while True:
        res = await loop.run_in_executor(None, requests.get, 'http://www.google.com')
        print(res)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(my_func())
It will perform a GET request on the Google homepage every second, spawning a new thread to perform each request. You can convince yourself that it's actually non-blocking by running multiple requests virtually in parallel:
async def entrypoint():
    # gather runs both coroutines concurrently and waits for them
    # (passing bare coroutines to asyncio.wait is no longer allowed on
    # Python 3.11+, so gather is used here instead)
    await asyncio.gather(
        get('https://www.google.com'),
        get('https://www.stackoverflow.com'),
    )

async def get(url):
    loop = asyncio.get_event_loop()
    while True:
        res = await loop.run_in_executor(None, requests.get, url)
        print(url, res)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(entrypoint())
Another thing to notice is that you're spawning a separate thread for each request. It works, but it's something of a hack. You should instead use a real asynchronous HTTP client such as aiohttp.
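For reference, a minimal sketch of the same once-per-second loop using aiohttp instead of threads (assuming aiohttp is installed; the URL is the one from the example above):

import asyncio
import aiohttp

async def my_func():
    async with aiohttp.ClientSession() as session:
        while True:
            # The request itself is now non-blocking; no executor thread needed
            async with session.get('http://www.google.com') as res:
                print(res.status)
            await asyncio.sleep(1)

asyncio.run(my_func())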
This is Python 3.10.
asyncio uses single-threaded execution, with await yielding the CPU to other coroutines until whatever is awaited is done.
import asyncio

async def my_func(t):
    print("Start my_func")
    await asyncio.sleep(t)  # The await yields the cpu while we wait
    print("Exit my_func")

async def main():
    # Schedules my_func on the event loop; we might want to save the
    # returned future to later check for completion.
    asyncio.ensure_future(my_func(10))
    print("Start main")
    await asyncio.sleep(1)  # The await yields the cpu, giving my_func a chance to start
    print("running other stuff")
    await asyncio.sleep(15)
    print("Exit main")

if __name__ == "__main__":
    asyncio.run(main())  # Starts the event loop
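As the comment in main() hints, a variation (my own sketch) is to keep the handle that ensure_future returns and await it, instead of sleeping long enough for my_func to finish:

async def main():
    task = asyncio.ensure_future(my_func(10))  # keep the returned future
    print("Start main")
    print("running other stuff")
    await task  # suspend main until my_func is done, however long it takes
    print("Exit main")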
Related
There is a function that blocks the event loop (e.g. it makes an API request). I need to make a continuous stream of requests that run in parallel rather than sequentially, so each new request starts before the previous one has finished.
So I found this solved question with the loop.run_in_executor() solution, and used it to begin with:
import asyncio
import requests

# blocking_request_func() defined somewhere

async def main():
    loop = asyncio.get_event_loop()
    future1 = loop.run_in_executor(None, blocking_request_func, 'param')
    future2 = loop.run_in_executor(None, blocking_request_func, 'param')
    response1 = await future1
    response2 = await future2
    print(response1)
    print(response2)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
This works well, and the requests run in parallel, but there is a problem for my task: in this example we build the whole group of tasks/futures up front and then run the group all at once. What I need is something like this:
1. Send request_1 without awaiting its completion.
(After step 1 has started, but not at the same moment step 1 starts:)
2. Send request_2 without awaiting its completion.
(After step 2 has started, but not at the same moment step 2 starts:)
3. Send request_3 without awaiting its completion.
(Request 1, or any other, returns its response.)
(After step 3 has started, but not at the same moment step 3 starts:)
4. Send request_4 without awaiting its completion.
(Request 2, or any other, returns its response.)
and so on...
I tried using asyncio.TaskGroup():
async def request_func():
    global result  # the list of request results, defined somewhere in the global area
    loop = asyncio.get_event_loop()
    result.append(await loop.run_in_executor(None, blocking_request_func, 'param'))
    await asyncio.sleep(0)  # adding or removing this line gives the same result

async def main():
    async with asyncio.TaskGroup() as tg:
        for i in range(0, 10):
            tg.create_task(request_func())
All these approaches gave the same result: first we define the whole group of tasks/futures, and only then run the group concurrently. But is there a way to run all these requests concurrently while feeding them in as a stream?
I tried to make a visualization in case my explanation is not clear enough:
[diagrams: "What I have for now" vs. "What I need"]
================ Update with the answer ===================
The closest answer, though it has some limitations:
import asyncio
import concurrent.futures
import random
import time

def blockme(n):
    x = random.random() * 2.0
    time.sleep(x)
    return n, x

def cb(fut):
    print("Result", fut.result())

async def main():
    # You need to control the thread quantity
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)
    loop = asyncio.get_event_loop()
    futs = []
    n = 0
    # You need to control the requests per second
    delay = 0.5
    while await asyncio.sleep(delay, result=True):
        fut = loop.run_in_executor(pool, blockme, n)
        fut.add_done_callback(cb)
        futs.append(fut)
        n += 1
        # You need to control the futures quantity, e.g. like this:
        if len(futs) > 40:
            completed, pending = await asyncio.wait(
                futs,
                timeout=5,
                return_when=asyncio.FIRST_COMPLETED)
            futs = list(pending)

asyncio.run(main())
I think this might be what you want. You don't have to await each request - the run_in_executor function returns a Future. Instead of awaiting that, you can attach a callback function:
import asyncio
import random
import time

def blockme(n):
    x = random.random() * 2.0
    time.sleep(x)
    return n, x

def cb(fut):
    print("Result", fut.result())

async def main():
    loop = asyncio.get_event_loop()
    futs = []
    for n in range(20):
        fut = loop.run_in_executor(None, blockme, n)
        fut.add_done_callback(cb)
        futs.append(fut)
    await asyncio.gather(*futs)
    # await asyncio.sleep(10)

asyncio.run(main())
All the requests are started at the beginning, but they don't all execute in parallel, because the number of threads is limited by the thread pool. You can adjust the number of threads if you want.
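For instance, a sketch of one way to adjust it: pass an explicit executor instead of None (max_workers=8 is an arbitrary choice of mine):

import concurrent.futures

# A larger pool lets more blockme() calls run in parallel
pool = concurrent.futures.ThreadPoolExecutor(max_workers=8)
fut = loop.run_in_executor(pool, blockme, n)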
Here I simulated a blocking call with time.sleep. I needed a way to prevent main() from ending before all the callbacks occurred, so I used gather for that purpose. You can also wait for some length of time, but gather is cleaner.
Apologies if I don't understand what you want. But I think you want to avoid using await for each call, and I tried to show one way you can do that.
This is taken directly from the Python documentation. The code snippet from the documentation of the asyncio library explains how you can run blocking code concurrently using asyncio. It uses the to_thread method to create a task.
you can find more here - https://docs.python.org/3/library/asyncio-task.html#running-in-threads
import asyncio
import time

def blocking_io():
    print(f"start blocking_io at {time.strftime('%X')}")
    # Note that time.sleep() can be replaced with any blocking
    # IO-bound operation, such as file operations.
    time.sleep(1)
    print(f"blocking_io complete at {time.strftime('%X')}")

async def main():
    print(f"started main at {time.strftime('%X')}")
    await asyncio.gather(
        asyncio.to_thread(blocking_io),
        asyncio.sleep(1))
    print(f"finished main at {time.strftime('%X')}")

asyncio.run(main())
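Note that asyncio.to_thread was added in Python 3.9; on older versions a rough equivalent (a sketch) is run_in_executor with the default executor:

# Pre-3.9 equivalent of asyncio.to_thread(blocking_io), roughly:
loop = asyncio.get_running_loop()
await loop.run_in_executor(None, blocking_io)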
I'm trying to run some asynchronous functions from inside a synchronous function. The problem is that, as I understand it, the functions don't run that way, so how do I do it? I don't want to make the maze_move function asynchronous.
import asyncio
import json
import websockets

async def no_stop():
    # some logic
    await asyncio.sleep(4)

async def stop(stop_time):
    await asyncio.sleep(stop_time)
    # some logic

def maze_move():
    no_stop()
    stop(1.5)

async def main(websocket):
    global data_from_client, data_from_server, power_l, power_r
    get_params()
    get_data_from_server()
    get_data_from_client()
    while True:
        msg = await websocket.recv()
        allow_data(msg)
        cheker(data_from_client)
        data_from_server['IsBrake'] = data_from_client['IsBrake']
        data_from_server['powerL'] = power_l
        data_from_server['powerR'] = power_r
        await websocket.send(json.dumps(data_from_server))
        print(data_from_client['IsBrake'])

start_server = websockets.serve(main, 'localhost', 8080)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
How about:
def maze_move():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(no_stop())
    loop.run_until_complete(stop(1.5))
If you wanted to run two coroutines concurrently, then:
def maze_move():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(no_stop(), stop(1.5)))
Update Based on Updated Question
I am guessing what it is you want to do (see my comment to your question):
First, you cannot call coroutines such as stop directly from maze_move, since calling stop() does not execute stop; it just returns a coroutine object. So maze_move has to be modified. I will assume you do not want to make it a coroutine itself (though why not, since you have to modify it anyway?). Further assuming you want to invoke maze_move from a coroutine that runs other coroutines concurrently, you can create a new coroutine, e.g. maze_move_runner, that runs maze_move in a separate thread so that it does not block the other concurrently running coroutines:
import asyncio
import concurrent.futures

async def no_stop():
    # some logic
    print('no stop')
    await asyncio.sleep(4)

async def stop(stop_time):
    await asyncio.sleep(stop_time)
    print('stop')
    # some logic

async def some_coroutine():
    print('Enter some_coroutine')
    await asyncio.sleep(2)
    print('Exit some_coroutine')
    return 1

def maze_move():
    # In case we are being run directly and not in a separate thread:
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # This thread has no current event loop, so:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    loop.run_until_complete(no_stop())
    loop.run_until_complete(stop(1.5))
    return 'Done!'

async def maze_move_runner():
    loop = asyncio.get_running_loop()
    # Run in another thread:
    return await loop.run_in_executor(None, maze_move)

async def main():
    results = await asyncio.gather(some_coroutine(), maze_move_runner())
    print(results)

asyncio.run(main())
Prints:
Enter some_coroutine
no stop
Exit some_coroutine
stop
[1, 'Done!']
But this would be the most straightforward solution:
async def maze_move():
    await no_stop()
    await stop(1.5)
    return 'Done!'

async def main():
    results = await asyncio.gather(some_coroutine(), maze_move())
    print(results)
If you have an already running event loop, you can define an async function inside of a sync function and launch it as a task:
def maze_move():
    async def amaze_move():
        await no_stop()
        await stop(1.5)
    return asyncio.create_task(amaze_move())
This function returns an asyncio.Task object, which can be used in an await expression or not, depending on requirements. This way you won't have to make maze_move itself an async function, although I don't know why that would be a goal. Only an async function can run no_stop and stop, so you've got to have an async function somewhere.
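For illustration, a hypothetical caller (the name main is mine, not from the question) could hold on to the returned Task and await it later:

async def main():
    task = maze_move()  # schedules amaze_move() on the running loop
    # ... do other work while the maze moves ...
    await task          # suspend until amaze_move() has finished

Note that asyncio.create_task requires a running event loop, so maze_move must be called from code the loop is already executing.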
I am new to asynchronous programming, and while I understand most concepts, there is one relating to the inner workings of 'await' that I don't quite understand.
Consider the following:
import asyncio
async def foo():
    print('start fetching')
    await asyncio.sleep(2)
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching
vs.
async def foo():
    print('start fetching')
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching followed by done fetching
Perhaps it is my understanding of await, which I understand insofar as we can use it to pause (for 2 seconds in the case above), or to wait for functions to fully finish running before any further code is run.
But in the first example above, why does await cause 'done fetching' not to run?
asyncio.create_task schedules an awaitable on the event loop and returns immediately, so you are actually exiting the main function (and closing the event loop) before the task is able to finish.
You need to change main to either:
async def main():
    task1 = asyncio.create_task(foo())
    await task1
or
async def main():
    await foo()
Creating a task first (the former) is useful in many cases, but they all involve situations where the event loop will outlast the task, e.g. a long-running server; otherwise you should just await the coroutine directly, as in the latter.
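A minimal sketch of the first form (the extra prints and timings are mine), showing the task running while main does other work:

import asyncio

async def foo():
    print('start fetching')
    await asyncio.sleep(2)
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())
    print('doing other work')  # runs before foo() even starts
    await asyncio.sleep(3)     # foo() completes during this sleep
    await task1                # already done, so this returns immediately

asyncio.run(main())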
Hi, I have the following issue: I want to execute the getLastItemFromGivenInterval method, let it run briefly without waiting for the request responses, and then hand control to asyncio.sleep(60) so the whole procedure executes again in 60-second time frames. What I get instead is getLastItemFromGivenInterval() waiting for the request to end.
import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        while True:
            await bc.getLastItemFromGivenInterval(session)
            await asyncio.sleep(60)

async def getLastItemFromGivenInterval(session):
    async with session.get(BinanceClient.base_endpoint + "/api/v1/time") as currentServerTime:
        currentServerTime = await currentServerTime.json()
        currentServerTime = currentServerTime['serverTime']
    async with session.get(url) as res:
        response = await res.json()
    array = []
    print(response)
    return response

# The loop setup must come after the definitions above
loop = asyncio.get_event_loop()
task = loop.create_task(main())
loop.run_forever()
getLastItemFromGivenInterval is placed in a separate class.
Please give me a hint on how to achieve the non-waiting effect in the getLastItem...() method.
If I understand you correctly, you want to start getLastItemFromGivenInterval in the background, and do so every 60 seconds regardless of how long it takes to complete. You can replace await with create_task, and then never await the resulting task:
loop = asyncio.get_event_loop()
while True:
    # spawn the task in the background, and proceed
    loop.create_task(bc.getLastItemFromGivenInterval(session))
    # wait 60 seconds, allowing the above task (and other
    # tasks managed by the event loop) to run
    await asyncio.sleep(60)
You might want to also ensure that tasks that take a long time to complete or that hang indefinitely (e.g. due to a network failure) don't accumulate:
loop = asyncio.get_event_loop()
while True:
    # asyncio.wait_for will cancel the task if it takes longer
    # than the specified duration
    loop.create_task(asyncio.wait_for(
        bc.getLastItemFromGivenInterval(session), 500))
    await asyncio.sleep(60)
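On Python 3.11+, the asyncio.timeout context manager is an alternative to wait_for; a sketch (fetch_once is my own name, the inner call comes from the question):

async def fetch_once(session):
    # Cancels the request if it takes longer than 500 seconds
    async with asyncio.timeout(500):
        await bc.getLastItemFromGivenInterval(session)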
I am trying to create a simple monitoring system that periodically checks things and logs them. Here is a cut-down example of the logic I am attempting to use, but I keep getting a RuntimeWarning: coroutine 'foo' was never awaited error.
How should I reschedule an async method from itself?
Code in test.py:
import asyncio
from datetime import datetime

async def collect_data():
    await asyncio.sleep(1)
    return {"some_data": 1,}

async def foo(loop):
    results = await collect_data()
    # Log the results
    print("{}: {}".format(datetime.now(), results))
    # schedule to run again in X seconds
    loop.call_later(5, foo, loop)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.create_task(foo(loop))
    loop.run_forever()
    loop.close()
Error:
pi@raspberrypi [0] $ python test.py
2018-01-03 01:59:22.924871: {'some_data': 1}
/usr/lib/python3.5/asyncio/events.py:126: RuntimeWarning: coroutine 'foo' was never awaited
self._callback(*self._args)
call_later accepts a plain sync callback (a function defined with def). A coroutine function (async def) must be awaited to be executed.
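If you did want to keep call_later, the callback must therefore be a plain function; it can schedule the coroutine as a task (a sketch, with the lambda wrapper being my addition):

# Inside foo(), instead of loop.call_later(5, foo, loop):
loop.call_later(5, lambda: loop.create_task(foo(loop)))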
The cool thing about asyncio is that it imitates plain imperative synchronous code in many ways. How would you solve this task for a plain function? I guess you'd just sleep for some time and recursively call the function again. Do the same (almost: we should use the asynchronous sleep) with asyncio:
import asyncio
from datetime import datetime

async def collect_data():
    await asyncio.sleep(1)
    return {"some_data": 1,}

async def foo(loop):
    results = await collect_data()
    # Log the results
    print("{}: {}".format(datetime.now(), results))
    # Schedule to run again in X seconds
    await asyncio.sleep(5)
    return (await foo(loop))

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(foo(loop))
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())  # Python 3.6 only
        loop.close()
If at some point you need to run foo in the background alongside other coroutines, you can create a task; there is also a way to cancel a task's execution.
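For example, a sketch of running foo as a background task next to other work (other_work is a hypothetical placeholder of mine):

async def other_work():
    await asyncio.sleep(12)  # stand-in for whatever else the program does

async def runner():
    task = asyncio.ensure_future(foo(asyncio.get_event_loop()))
    await other_work()  # foo keeps logging in the background meanwhile
    task.cancel()       # stop the periodic collection when we're done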
Update:
As Andrew pointed out, a plain loop is even better:
async def foo(loop):
    while True:
        results = await collect_data()
        # Log the results
        print("{}: {}".format(datetime.now(), results))
        # Wait before next iteration:
        await asyncio.sleep(5)