I am creating a cryptocurrency exchange API client using Python 3.5 and Tkinter. I have several displays that I want to update asynchronously every 10 seconds. I am able to update the displays every 10 seconds using Tk.after(), as in this example:
def updateLoans():
    offers = dd.loanOffers()
    demands = dd.loanDemands()
    w.LoanOfferView.delete(1.0, END)
    w.LoanDemandView.delete(1.0, END)
    w.LoanOfferView.insert(END, offers)
    w.LoanDemandView.insert(END, demands)
    print('loans refreshed')
    root.after(10000, updateLoans)
For after() to keep firing every 10 seconds, updateLoans needs to be passed as a callable to after() from inside the function itself.
Now for the part that is stumping me: when I make this function asynchronous with Python's new async and await keywords:
async def updateLoans():
    offers = await dd.loanOffers()
    demands = await dd.loanDemands()
    w.LoanOfferView.delete(1.0, END)
    w.LoanDemandView.delete(1.0, END)
    w.LoanOfferView.insert(END, offers)
    w.LoanDemandView.insert(END, demands)
    print('loans refreshed')
    root.after(10000, updateLoans)
The problem here is that I cannot await the callable I pass to the after method, so I get a runtime warning: RuntimeWarning: coroutine 'updateLoans' was never awaited.
My initial function call IS placed inside of an event loop.
loop = asyncio.get_event_loop()
loop.run_until_complete(updateLoans())
loop.close()
The display populates just fine initially but never updates.
How can I use Tk.after to continuously update a tkinter display asynchronously?
tk.after accepts a normal function, not a coroutine. To run the coroutine to completion, you can use run_until_complete, just as you did the first time:
loop = asyncio.get_event_loop()
root.after(10000, lambda: loop.run_until_complete(updateLoans()))
Also, don't call loop.close(), since you'll need the loop again.
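Put together, a minimal sketch of the quick fix (assuming root, dd, w, and END from the question; the lambda replaces the plain root.after(10000, updateLoans) call at the end of the coroutine):

loop = asyncio.get_event_loop()

async def updateLoans():
    offers = await dd.loanOffers()
    demands = await dd.loanDemands()
    w.LoanOfferView.delete(1.0, END)
    w.LoanDemandView.delete(1.0, END)
    w.LoanOfferView.insert(END, offers)
    w.LoanDemandView.insert(END, demands)
    # reschedule through the lambda so the coroutine actually gets run
    root.after(10000, lambda: loop.run_until_complete(updateLoans()))

loop.run_until_complete(updateLoans())
root.mainloop()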
The above quick fix will work fine for many use cases. The fact is, however, that it will render the GUI completely unresponsive if updateLoans() takes a long time due to slow network or a problem with the remote service. A good GUI app will want to avoid this.
While Tkinter and asyncio cannot share an event loop yet, it is perfectly possible to run the asyncio event loop in a separate thread. The main thread then runs the GUI, while a dedicated asyncio thread runs all asyncio coroutines. When the event loop needs to notify the GUI to refresh something, it can use a queue as shown here. On the other hand, if the GUI needs to tell the event loop to do something, it can call call_soon_threadsafe or run_coroutine_threadsafe.
Example code (untested):
import asyncio
import queue
import threading

gui_queue = queue.Queue()

async def updateLoans():
    while True:
        offers = await dd.loanOffers()
        demands = await dd.loanDemands()
        print('loans obtained')
        # bind the current values; a plain closure would see whatever the
        # variables hold when the GUI drains the queue, not when we put it
        gui_queue.put(lambda o=offers, d=demands: updateLoansGui(o, d))
        await asyncio.sleep(10)

def updateLoansGui(offers, demands):
    w.LoanOfferView.delete(1.0, END)
    w.LoanDemandView.delete(1.0, END)
    w.LoanOfferView.insert(END, offers)
    w.LoanDemandView.insert(END, demands)
    print('loans GUI refreshed')

# http://effbot.org/zone/tkinter-threads.htm
def periodicGuiUpdate():
    while True:
        try:
            fn = gui_queue.get_nowait()
        except queue.Empty:
            break
        fn()
    root.after(100, periodicGuiUpdate)

# Run the asyncio event loop in a worker thread.
def start_loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.create_task(updateLoans())
    loop.run_forever()

threading.Thread(target=start_loop).start()

# Run the GUI main loop in the main thread.
periodicGuiUpdate()
root.mainloop()

# To stop the event loop, call loop.call_soon_threadsafe(loop.stop).
# To start a coroutine from the GUI, call asyncio.run_coroutine_threadsafe.
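For the opposite direction (the GUI telling the event loop to do something), a hypothetical sketch of a button-triggered refresh, assuming the loop created in start_loop has been stored somewhere the main thread can reach it:

async def refresh_once():
    offers = await dd.loanOffers()
    demands = await dd.loanDemands()
    gui_queue.put(lambda o=offers, d=demands: updateLoansGui(o, d))

def on_refresh_clicked():  # e.g. bound to a Tkinter Button command
    # Thread-safe: schedules the coroutine on the asyncio thread's loop
    # without blocking the GUI.
    asyncio.run_coroutine_threadsafe(refresh_once(), loop)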
I have a running asyncio loop, and from a coroutine I'm calling a sync function. Is there any way to call and get the result of an async function from inside that sync function?
I tried the code below, but it is not working.
I want to print the output of hel() in i() without changing i() into an async function.
Is that possible, and if so, how?
import asyncio

async def hel():
    return 4

def i():
    loop = asyncio.get_running_loop()
    x = asyncio.run_coroutine_threadsafe(hel(), loop)  ## need to change
    y = x.result()                                     ## these lines
    print(y)

async def h():
    i()

asyncio.run(h())
This is one of the most commonly asked types of questions here. The tools to do this are in the standard library and require only a few lines of setup code. However, the result is not 100% robust and needs to be used with care. This is probably why it's not already a high-level function.
The basic problem with running an async function from a sync function is that async functions contain await expressions. Await expressions pause the execution of the current task and allow the event loop to run other tasks. Therefore async functions (coroutines) have special properties that allow them to yield control and resume again where they left off. Sync functions cannot do this. So when your sync function calls an async function and that function encounters an await expression, what is supposed to happen? The sync function has no ability to yield and resume.
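To make the suspend/resume machinery concrete, here is a small illustrative sketch; the Suspend awaitable is a hypothetical helper, and send(None) drives the coroutine by hand the way an event loop would:

class Suspend:
    def __await__(self):
        yield  # hand control back to whatever is driving the coroutine

async def coro():
    await Suspend()  # execution pauses here
    return 42

c = coro()
c.send(None)          # runs up to the suspension point
try:
    c.send(None)      # resumes exactly where it left off
except StopIteration as exc:
    print(exc.value)  # 42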
A simple solution is to run the async function in another thread, with its own event loop. The calling thread blocks until the result is available. The async function behaves like a normal function, returning a value. The downside is that the async function now runs in another thread, which can cause all the well-known problems that come with threaded programming. For many cases this may not be an issue.
This can be set up as follows. This is a complete script that can be imported anywhere in an application. The test code that runs in the if __name__ == "__main__" block is almost the same as the code in the original question.
The thread is lazily initialized so it doesn't get created until it's used. It's a daemon thread so it will not keep your program from exiting.
The solution doesn't care if there is a running event loop in the main thread.
import asyncio
import threading

_loop = asyncio.new_event_loop()
_thr = threading.Thread(target=_loop.run_forever, name="Async Runner",
                        daemon=True)

# This will block the calling thread until the coroutine is finished.
# Any exception that occurs in the coroutine is raised in the caller.
def run_async(coro):  # coro is a coroutine, see example
    if not _thr.is_alive():
        _thr.start()
    future = asyncio.run_coroutine_threadsafe(coro, _loop)
    return future.result()

if __name__ == "__main__":
    async def hel():
        await asyncio.sleep(0.1)
        print("Running in thread", threading.current_thread())
        return 4

    def i():
        y = run_async(hel())
        print("Answer", y, threading.current_thread())

    async def h():
        i()

    asyncio.run(h())
Output:
Running in thread <Thread(Async Runner, started daemon 28816)>
Answer 4 <_MainThread(MainThread, started 22100)>
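As a quick check that exceptions really do propagate to the caller, using the run_async helper defined above:

async def boom():
    raise ValueError("oops")

try:
    run_async(boom())
except ValueError as exc:
    print("Caught in caller:", exc)  # Caught in caller: oops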
To call an async function from a sync method you would normally use asyncio.run. However, asyncio.run is meant to be the entry point of an async program, and asyncio refuses to run it from a thread that already has a running event loop, so you can't do that here.
That being said, the nest_asyncio project (https://github.com/erdewit/nest_asyncio) patches the asyncio event loop to allow exactly that, so after applying it you should be able to just call asyncio.run in your sync function.
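A minimal sketch, assuming the nest_asyncio package is installed (pip install nest_asyncio):

import asyncio
import nest_asyncio

nest_asyncio.apply()  # patch asyncio to allow nested run calls

async def hel():
    return 4

def i():
    y = asyncio.run(hel())  # now legal even while a loop is already running
    print(y)

async def h():
    i()

asyncio.run(h())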
I'm trying to test some async code, but I'm having trouble because of the complex connection between some tasks.
The context I need this is some code which reads a file in parallel to it being written by another process. There's some logic in the code where reading a truncated record will make it back off and wait() on an asyncio.Condition to be later released by an inotify event. This code should let it recover by re-reading the record when a future write has been completed by another process. I specifically want to test that this recovery works.
So my plan would be:
write a partial file
run the event loop until it suspends on the condition
write the rest of the file
run the event loop to completion
I had thought this was the answer: Detect an idle asyncio event loop
However a trial test shows that it exits too soon:
import asyncio
import random
import socket

def test_ping_pong():
    async def ping_pong(idx: int, oth_idx: int):
        for i in range(random.randint(100, 1000)):
            counters[idx] += 1
            async with conditions[oth_idx]:
                conditions[oth_idx].notify()
            async with conditions[idx]:
                await conditions[idx].wait()

    async def detect_iowait():
        loop = asyncio.get_event_loop()
        rsock, wsock = socket.socketpair()
        wsock.close()
        try:
            await loop.sock_recv(rsock, 1)
        finally:
            rsock.close()

    conditions = [asyncio.Condition(), asyncio.Condition()]
    counters = [0, 0]
    loop = asyncio.get_event_loop()
    loop.create_task(ping_pong(0, 1))
    loop.create_task(ping_pong(1, 0))
    loop.run_until_complete(loop.create_task(detect_iowait()))

    assert counters[0] > 10
    assert counters[1] > 10
After digging through the source code for Python's event loops, I've found nothing exposed that can do this publicly.
It is, however, possible to use the _ready deque created by the BaseEventLoop. See here. This contains every task that is immediately ready to run. When a task is run it is popped from the _ready deque. When a suspended task is released by another task (e.g. by calling future.set_result()) the suspended task is immediately added back to the deque. This has existed since Python 3.5.
One thing that you can do is repeatedly inject a callback to check how many items are in _ready. When all other tasks are suspended, there will be nothing left in the deque at the moment the callback runs.
The callback will run at most once per iteration of the event loop:
async def wait_for_deadlock(empty_loop_threshold: int = 0):
    def check_for_deadlock():
        nonlocal empty_loop_count
        # pylint: disable=protected-access
        if loop._ready:
            empty_loop_count = 0
            loop.call_soon(check_for_deadlock)
        elif empty_loop_count < empty_loop_threshold:
            empty_loop_count += 1
            loop.call_soon(check_for_deadlock)
        else:
            future.set_result(None)

    empty_loop_count = 0
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    asyncio.get_running_loop().call_soon(check_for_deadlock)
    await future
In the above code empty_loop_threshold is not really necessary in most cases, but it exists for cases where tasks communicate over IO. For example, if one task communicates with another through IO, there may be a moment where all tasks are suspended even though one has data ready to read. Setting empty_loop_threshold = 1 should get around this.
Using this is relatively simple. You can:
loop.run_until_complete(wait_for_deadlock())
Or as requested in my question:
def some_test():
async def async_test():
await wait_for_deadlock()
inject_something()
await wait_for_deadlock()
loop = loop.get_event_loop()
loop.create_task(task_to_test())
loop.run_until_complete(loop.create_task(async_test)
assert something
My question is more or less like this, which is really an X-Y problem leading back to this. This is, however, not a duplicate, because my use case is slightly different and the linked threads don't answer my question.
I am porting a set of synchronous programs from Java to Python. These programs interact with an asynchronous library. In Java, I could block and wait for this library's asynchronous functions to return a value and then do things with that value.
Here's a code sample to illustrate the problem.
def do_work_sync_1(arg1, arg2, arg3):
    # won't even run because await has to be called from an async function
    value = await do_work_async(arg1, arg2, arg3)

def do_work_sync_2(arg1, arg2, arg3):
    # throws "Loop already running" because the async library referenced in
    # do_work_async is already using my event loop
    event_loop = asyncio.get_event_loop()
    event_loop.run_until_complete(do_work_async(arg1, arg2, arg3))

def do_work_sync_3(arg1, arg2, arg3):
    # throws "got Future attached to a different loop" because do_work_async
    # refers back to the asynchronous library, which is stubbornly attached
    # to my main loop
    thread_pool = ThreadPoolExecutor()
    future = thread_pool.submit(asyncio.run, do_work_async(arg1, arg2, arg3))
    result = future.result()

def do_work_sync_4(arg1, arg2, arg3):
    # just hangs forever
    event_loop = asyncio.get_event_loop()
    future = asyncio.run_coroutine_threadsafe(do_work_async(arg1, arg2, arg3), event_loop)
    return_value = future.result()

async def do_work_async(arg1, arg2, arg3):
    value_1 = await async_lib.do_something(arg1)
    value_2 = await async_lib.do_something_else(arg2, arg3)
    return value_1 + value_2
Python appears to be trying very hard to keep me from blocking anything, anywhere. await can only be used from async def functions, which in their turn must be awaited. There doesn't seem to be a built-in way to keep async def/await from spreading through my code like a virus.
Tasks and Futures don't have any built-in blocking or wait_until_complete mechanisms unless I want to loop on Task.done(), which seems really bad.
I tried asyncio.get_event_loop().run_until_complete(), but that produces an error: This event loop is already running. Apparently I'm not supposed to do that for anything except main().
The second linked question above suggests using a separate thread and wrapping the async function in that. I tested this with a few simple functions and it seems to work as a general concept. The problem here is that my asynchronous library keeps a reference to the main thread's event loop and throws an error when I try to refer to it from the new thread: got Future <Future pending> attached to a different loop.
I considered moving all references to the asynchronous library into a separate thread, but I realized that I still can't block in the new thread, and I'd have to create a third thread for blocking calls, which would bring me back to the Future attached to a different loop error.
I'm pretty much out of ideas here. Is there a way to block and wait for an async function to return, or am I really being forced to convert my entire program to async/await? (If it's the latter, an explanation would be nice. I don't get it.)
It took me some time, but finally I've found the actual question:
Is there a way to block and wait for an async function to return, or am I really being forced to convert my entire program to async/await?
There is a high-level function asyncio.run(). It does three things:
create new event loop
run your async function in that event loop
wait for any unfinished tasks and close the loop
Its source code is here: https://github.com/python/cpython/blob/3221a63c69268a9362802371a616f49d522a5c4f/Lib/asyncio/runners.py#L8 You see it uses loop.run_until_complete(main) under the hood.
If you are writing completely asynchronous code, you are supposed to call asyncio.run() somewhere at the end of your main() function, I guess. But that doesn't have to be the case. You can run it wherever you want, as many times as you want. Caveats:
in a given thread, only one event loop can be running at a time
do not run it from an async def function, because, obviously, you already have one event loop running, so you can just call that function using await instead
Example:
import asyncio
async def something_async():
    print('something_async start')
    await asyncio.sleep(1)
    print('something_async done')

for i in range(3):
    asyncio.run(something_async())
You can have multiple threads with their own event loop:
import asyncio
import threading
async def something_async():
    print('something_async start in thread:', threading.current_thread())
    await asyncio.sleep(1)
    print('something_async done in thread:', threading.current_thread())

def main():
    t1 = threading.Thread(target=asyncio.run, args=(something_async(), ))
    t2 = threading.Thread(target=asyncio.run, args=(something_async(), ))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

if __name__ == '__main__':
    main()
If you encounter this error: Future attached to a different loop, that may mean two things:
you are using resources tied to another event loop than you are running right now
you have created some resource before starting an event loop - it uses a "default event loop" in that case - but when you run asyncio.run(), you start a different loop. I've encountered this before: asyncio.Semaphore RuntimeError: Task got Future attached to a different loop
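A minimal sketch of the second case (this assumes Python older than 3.10, where asyncio primitives bind to the default event loop at construction time):

import asyncio

sem = asyncio.Semaphore(1)  # created before any loop runs: binds to the default loop

async def main():
    async with sem:  # on Python < 3.10 this raises "attached to a different loop"
        print("acquired")

asyncio.run(main())  # asyncio.run creates a fresh loop, not the default one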
You need to use at least Python 3.5.3 - explanation here.
I am trying to learn to use asyncio in Python to optimize scripts.
My example produces a "coroutine was never awaited" warning; can you help me understand it and find how to solve it?
import time
import datetime
import random
import asyncio
import aiohttp
import requests
def requete_bloquante(num):
    print(f'Get {num}')
    uid = requests.get("https://httpbin.org/uuid").json()['uuid']
    print(f"Res {num}: {uid}")

def faire_toutes_les_requetes():
    for x in range(10):
        requete_bloquante(x)

print("Bloquant : ")
start = datetime.datetime.now()
faire_toutes_les_requetes()
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")

async def requete_sans_bloquer(num, session):
    print(f'Get {num}')
    async with session.get("https://httpbin.org/uuid") as response:
        uid = (await response.json()['uuid'])
        print(f"Res {num}: {uid}")

async def faire_toutes_les_requetes_sans_bloquer():
    loop = asyncio.get_event_loop()
    with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        loop.run_until_complete(asyncio.gather(*futures))
    loop.close()
    print("Fin de la boucle !")

print("Non bloquant : ")
start = datetime.datetime.now()
faire_toutes_les_requetes_sans_bloquer()
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")
The first classic part of the code runs correctly, but the second half only produces:
synchronicite.py:43: RuntimeWarning: coroutine 'faire_toutes_les_requetes_sans_bloquer' was never awaited
You made faire_toutes_les_requetes_sans_bloquer an awaitable function, a coroutine, by using async def.
When you call an awaitable function, you create a new coroutine object. The code inside the function won't run until you then await on the function or run it as a task:
>>> async def foo():
... print("Running the foo coroutine")
...
>>> foo()
<coroutine object foo at 0x10b186348>
>>> import asyncio
>>> asyncio.run(foo())
Running the foo coroutine
You want to keep that function synchronous, because you don't start the loop until inside that function:
def faire_toutes_les_requetes_sans_bloquer():
    loop = asyncio.get_event_loop()
    # ...
    loop.close()
    print("Fin de la boucle !")
However, you are also trying to use an aiohttp.ClientSession() object, and that's an asynchronous context manager: you are expected to use it with async with, not just with, and so it has to be run inside an awaitable task. If you use with instead of async with, a TypeError("Use async with instead") exception will be raised.
That all means you need to move the loop.run_until_complete() call out of your faire_toutes_les_requetes_sans_bloquer() function, so you can keep that as the main task to be run; you can then call and await on asyncio.gather() directly:
async def faire_toutes_les_requetes_sans_bloquer():
    async with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        await asyncio.gather(*futures)
    print("Fin de la boucle !")

print("Non bloquant : ")
start = datetime.datetime.now()
asyncio.run(faire_toutes_les_requetes_sans_bloquer())
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")
I used the new asyncio.run() function (Python 3.7 and up) to run the single main task. This creates a dedicated loop for that top-level coroutine and runs it until complete.
Next, you need to move the closing ) parenthesis on the await response.json() expression:
uid = (await response.json())['uuid']
You want to access the 'uuid' key on the result of the await, not the coroutine that response.json() produces.
With those changes your code works, but the asyncio version finishes in sub-second time; you may want to print fractional seconds:
exec_time = (datetime.datetime.now() - start).total_seconds()
print(f"Pour faire 10 requêtes, ça prend {exec_time:.3f}s\n")
On my machine, the synchronous requests code runs in about 4-5 seconds, and the asyncio code completes in under 0.5 seconds.
Do not use a loop.run_until_complete call inside an async function. The purpose of that method is to run an async function from a sync context. Anyway, here's how you should change the code:
async def faire_toutes_les_requetes_sans_bloquer():
    async with aiohttp.ClientSession() as session:
        futures = [requete_sans_bloquer(x, session) for x in range(10)]
        await asyncio.gather(*futures)
    print("Fin de la boucle !")

loop = asyncio.get_event_loop()
loop.run_until_complete(faire_toutes_les_requetes_sans_bloquer())
Note that a bare faire_toutes_les_requetes_sans_bloquer() call creates a coroutine object that has to be either awaited via an explicit await (for which you have to be inside an async context) or passed to some event loop. Left alone, Python complains about it. In your original code you did neither.
Not sure if this was the issue for you, but in my case the response from the coroutine was another coroutine, so my code started warning me (not actually crashing) that I had created coroutines that were never awaited. After I actually awaited them, the warning went away (although I didn't really use the response).
The main code I added was:
content_from_url_as_str: list[str] = await asyncio.gather(*content_from_url, return_exceptions=True)
inspired by what I saw:
response: str = await content_from_url[0]
Full code:
"""
-- Notes from [1]
Threading and asyncio both run on a single processor and therefore only run one at a time [1]. It's cooperative concurrency.
Note: threads.py has a very good block with good definitions for io-bound, cpu-bound if you need to recall it.
Note: coroutine is an important definition to understand before proceeding. Definition provided at the end of this tutorial.
General idea for asyncio is that there is a general event loop that controls how and when each tasks gets run.
The event loop is aware of each task and knows what states they are in.
For simplicity of exposition assume there are only two states:
a) Ready state
b) Waiting state
a) indicates that a task has work to do and can be run - while b) indicates that a task is waiting for a response from an
external thing (e.g. io, printer, disk, network, coq, etc). This simplified event loop has two lists of tasks
(ready_to_run_lst, waiting_lst) and runs things from the ready to run list. Once a task runs it is in complete control
until it cooperatively hands back control to the event loop.
The way it works is that the task that was ran does what it needs to do (usually an io operation, or an interleaved op
or something like that) but crucially it gives control back to the event loop when the running task (with control) thinks is best.
(Note that this means the task might not have fully completed getting what it "fully needs".
This is probably useful when the user wants to implement the interleaving himself.)
Once the task cooperatively gives back control to the event loop it is placed by the event loop in either the
ready to run list or waiting list (depending how fast the io ran, etc). Then the event loop goes through the waiting
loop to see if anything waiting has "returned".
Once all the tasks have been sorted into the right list the event loop is able to choose what to run next (e.g. by
choosing the one that has been waiting to be run the longest). This repeats until the event loop code you wrote is done. (A toy sketch of this ready/waiting model appears after the full code below.)
The crucial point (and distinction with threads) that we want to emphasizes is that in asyncio, an operation is never
interrupted in the middle and every switching/interleaving is done deliberately by the programmer.
In a way you don't have to worry about making your code thread safe.
For more details see [2], [3].
Asyncio syntax:
i) await = this is where the code you wrote calls an expensive function (e.g. an io) and thus hands back control to the
event loop. Then the event loop will likely put it in the waiting loop and runs some other task. Likely eventually
the event loop comes back to this function and runs the remaining code given that we have the value from the io now.
await = the key word that does (mainly) two things 1) gives control back to the event loop to see if there is something
else to run if we called it on a real expensive io operation (e.g. calling network, printer, etc) 2) gives control to
the new coroutine (code that might give up control cooperatively) that it is awaiting. If this is your own code with async
then it means it will go into this new async function (coroutine) you defined.
No real async benefits are being experienced until you call (await) a real io e.g. asyncio.sleep is the typical debug example.
todo: clarify, I think await doesn't actually give control back to the event loop but instead runs the "coroutine" this
await is pointing too. This means that if it's a real IO then it will actually give it back to the event loop
to do something else. In this case it is actually doing something "in parallel" in the async way.
Otherwise, it is your own python coroutine and thus gives it the control but "no true async parallelism" happens.
iii) async = approximately a flag that tells python the defined function might use await. This is not strictly true but
it gives you a simple model while you're getting started. todo - clarify async.
async = defines a coroutine. This doesn't define a real io, it only defines a function that can give up and give the
execution power to other coroutines or the (asyncio) event loop.
todo - context manager with async
ii) awaiting = when you call something (e.g. a function) that usually requires waiting for the io response/return/value.
todo: though it seems it's also the python keyword to give control to a coroutine you wrote in python or give
control to the event loop assuming your awaiting an actual io call.
iv) async with = this creates a context manager from an object you would normally await - i.e. an object you would
wait to get the return value from an io. So usually we swap out (switch) from this object.
todo - e.g.
Note: - any function that calls await needs to be marked with async or you'll get a syntax error otherwise.
- a task never gives up control without intentionally doing so e.g. never in the middle of an op.
Cons: - note how this also requires more thinking carefully (but feels less dangerous than threading due to no pre-emptive
switching) due to the concurrency. Another disadvantage is again the idiosyncrasies of using this in Python + learning
new syntax and details for it to actually work.
- understanding the semantics of new syntax + learning where to really put the syntax to avoid semantic errors.
- we needed a special asyncio compatible lib for requests, since the normal requests is not designed to inform
the event loop that it's blocking (or done blocking)
- if one of the tasks doesn't cooperate properly then the whole code can be a mess and slow it down.
- not all libraries support the async IO paradigm in python (e.g. asyncio, trio, etc).
Pro: + despite learning where to put await and async might be annoying, it forces you to think carefully about your code
which on itself can be an advantage (e.g. better, faster, less bugs due to thinking carefully)
+ often faster...? (skeptical)
1. https://realpython.com/python-concurrency/
2. https://realpython.com/async-io-python/
3. https://stackoverflow.com/a/51116910/6843734
todo - read [2] later (or [3] but thats not a tutorial and its more details so perhaps not a priority).
asynchronous = 1) dictionary def: not happening at the same time
e.g. happening independently 2) computing def: happening independently of the main program flow
coroutine = computer program components that generalize subroutines for non-preemptive multitasking, by allowing execution to be suspended and resumed.
So basically it's a routine/"function" that can give up control in "a controlled way" (i.e. not randomly like with threads).
Usually they are associated with a single process -- so it's concurrent but not parallel.
Interesting note: Coroutines are well-suited for implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, infinite lists and pipes.
Likely we have an event loop in this document as an example. I guess yield and operators too are good examples!
Interesting contrast with subroutines: Subroutines are special cases of coroutines.[3] When subroutines are invoked, execution begins at the start,
and once a subroutine exits, it is finished; an instance of a subroutine only returns once, and does not hold state between invocations.
By contrast, coroutines can exit by calling other coroutines, which may later return to the point where they were invoked in the original coroutine;
from the coroutine's point of view, it is not exiting but calling another coroutine.
Coroutines are very similar to threads. However, coroutines are cooperatively multitasked, whereas threads are typically preemptively multitasked.
event loop = event loop is a programming construct or design pattern that waits for and dispatches events or messages in a program.
Appendix:
For I/O-bound problems, there's a general rule of thumb in the Python community:
"Use asyncio when you can, threading when you must."
asyncio can provide the best speed up for this type of program, but sometimes you will require critical libraries that
have not been ported to take advantage of asyncio.
Remember that any task that doesn't give up control to the event loop will block all of the other tasks
-- Notes from [2]
see asyncio_example2.py file.
The sync file should have taken longer e.g. in one run the async file took:
Downloaded 160 sites in 0.4063692092895508 seconds
While the sync option took:
Downloaded 160 in 3.351937770843506 seconds
"""
import asyncio
from asyncio import Task
from asyncio.events import AbstractEventLoop

import aiohttp
from aiohttp import ClientResponse
from aiohttp.client import ClientSession

from typing import Coroutine

import time

async def download_site(session: ClientSession, url: str) -> str:
    async with session.get(url) as response:
        print(f"Read {response.content_length} from {url}")
        return response.text()

async def download_all_sites(sites: list[str]) -> list[str]:
    # async with = this creates a context manager from an object you would
    # normally await - i.e. an object you would wait to get the return value
    # from an io. So usually we swap out (switch) from this object.
    async with aiohttp.ClientSession() as session:  # we will usually await session.FUNCS
        # create all the download code as coroutines/tasks to be later managed/run by the event loop
        tasks: list[Task] = []
        for url in sites:
            # creates a task from a coroutine. todo: basically it seems it creates a callable
            # coroutine? (i.e. a function that is able to give up control cooperatively, or runs
            # an external io and thus also gives back control cooperatively to the event loop).
            # read more? https://stackoverflow.com/questions/36342899/asyncio-ensure-future-vs-baseeventloop-create-task-vs-simple-coroutine
            task: Task = asyncio.ensure_future(download_site(session, url))
            tasks.append(task)
        # runs tasks/coroutines in the event loop and aggregates the results.
        # todo: does this halt until all coroutines have returned? I think so,
        # due to the paradigm of how async code works.
        content_from_url: list[ClientResponse.text] = await asyncio.gather(*tasks, return_exceptions=True)
        assert isinstance(content_from_url[0], Coroutine)  # note all responses are coroutines
        print(f'result after aggregating/doing all coroutine tasks/jobs = {content_from_url=}')
        # this is needed since the response is in a coroutine object for some reason
        content_from_url_as_str: list[str] = await asyncio.gather(*content_from_url, return_exceptions=True)
        print(f'result after getting response from coroutines that hold the text = {content_from_url_as_str=}')
        return content_from_url_as_str

if __name__ == "__main__":
    # - args
    num_sites: int = 80
    sites: list[str] = ["https://www.jython.org", "http://olympus.realpython.org/dice"] * num_sites
    start_time: float = time.time()

    # - run the same 160 tasks but without the async paradigm; should be slower!
    # note: you can't actually do this here because your functions have async definitions.
    # to test the synchronous version see the synchronous.py file. Then compare the two run times.
    # await download_all_sites(sites)
    # download_all_sites(sites)

    # - Execute the coroutine coro and return the result.
    asyncio.run(download_all_sites(sites))
    # - run event loop manager and run all tasks with cooperative concurrency
    # asyncio.get_event_loop().run_until_complete(download_all_sites(sites))

    # makes explicit the creation of the event loop that manages the coroutines & external ios
    # event_loop: AbstractEventLoop = asyncio.get_event_loop()
    # asyncio.run(download_all_sites(sites))

    # making the creation of the not-yet-run coroutine with its args explicit
    # event_loop: AbstractEventLoop = asyncio.get_event_loop()
    # download_all_sites_coroutine: Coroutine = download_all_sites(sites)
    # asyncio.run(download_all_sites_coroutine)

    # - print stats about the content download and duration
    duration = time.time() - start_time
    print(f"Downloaded {len(sites)} sites in {duration} seconds")
    print('Success.\a')
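As promised in the notes above, here is a toy sketch of the ready/waiting event-loop model (my own illustration, not asyncio's real implementation); "tasks" are plain generators, and yield delay means "I'm waiting on something slow":

import collections
import heapq
import itertools
import time

def toy_event_loop(tasks):
    counter = itertools.count()        # tie-breaker so the heap never compares tasks
    ready = collections.deque(tasks)   # tasks that can run right now
    waiting = []                       # heap of (wake_time, n, task)
    while ready or waiting:
        if not ready:                  # nothing runnable: sleep until one wakes up
            wake_time, _, task = heapq.heappop(waiting)
            time.sleep(max(0.0, wake_time - time.monotonic()))
            ready.append(task)
        task = ready.popleft()
        try:
            delay = next(task)         # run the task until it yields control
        except StopIteration:
            continue                   # task finished; drop it
        heapq.heappush(waiting, (time.monotonic() + delay, next(counter), task))

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield 0.1                      # cooperatively hand control back

toy_event_loop([worker("a", 3), worker("b", 3)])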
Hello, I am fairly new to Python and I am trying to convert an existing application I have on Flask into Quart (https://gitlab.com/pgjones/quart), which is built on top of asyncio, so I can use the Goblin OGM to interact with JanusGraph or TinkerPop. According to the examples I found on Goblin, I need to obtain an event loop to run the commands asynchronously.
>>> import asyncio
>>> from goblin import Goblin
>>> loop = asyncio.get_event_loop()
>>> app = loop.run_until_complete(
... Goblin.open(loop))
>>> app.register(Person, Knows)
However, I can't find a way to obtain the event loop from Quart, even though it is built on top of asyncio.
Does anyone know how I can get that? Any help will be highly appreciated.
TL;DR To obtain the event loop, call asyncio.get_event_loop().
In an asyncio-based application, the event loop is typically not owned by Quart or any other protocol/application level component, it is provided by asyncio or possibly an accelerator like uvloop. The event loop is obtained by calling asyncio.get_event_loop(), and sometimes set with asyncio.set_event_loop().
This is what quart's app.run() uses to run the application, which means it works with the default event loop created by asyncio for the main thread. In your case you could simply call quart's run() after registering Goblin:
loop = asyncio.get_event_loop()
goblin_app = loop.run_until_complete(Goblin.open(loop))
goblin_app.register(Person, Knows)
quart_app = Quart(...)
# ... @app.route, etc
# now they both run in the same event loop
quart_app.run()
The above should answer the question in the practical sense. But that approach wouldn't work if more than one component insisted on having their own run() method that spins the event loop - since app.run() doesn't return, you can only invoke one such function in a thread.
If you look more closely, though, that is not really the case with quart either. While Quart examples do use app.run() to serve the application, if you take a look at the implementation of app.run(), you will see that it calls the convenience function run_app(), which trivially creates a server and spins up the main loop forever:
def run_app(...):
    loop = asyncio.get_event_loop()
    # ...
    create_server = loop.create_server(
        lambda: Server(app, loop, ...), host, port, ...)
    server = loop.run_until_complete(create_server)
    # ...
    loop.run_forever()
If you need to control how the event loop is actually run, you can always do it yourself:
# obtain the event loop from asyncio
loop = asyncio.get_event_loop()

# hook Goblin to the loop
goblin_app = loop.run_until_complete(Goblin.open(loop))
goblin_app.register(Person, Knows)

# hook Quart to the loop
quart_server = loop.run_until_complete(loop.create_server(
    lambda: quart.serving.Server(quart_app, loop), host, port))

# actually run the loop (and the program)
try:
    loop.run_forever()
except KeyboardInterrupt:  # pragma: no cover
    pass
finally:
    quart_server.close()
    loop.run_until_complete(quart_server.wait_closed())
    loop.run_until_complete(loop.shutdown_asyncgens())
    loop.close()