I am learning about asyncio in Python. There is one point of confusion that I cannot wrap my head around.
Suppose I have 2 Python scripts: dummy1.py and dummy2.py. In the first script, I have my code written like the following:
loop = asyncio.get_event_loop()
loop.create_task(a_task)
The first script will be imported into the second script, and in the second script, I arrange my code like the following:
loop = asyncio.get_event_loop()
loop.run_forever()
Are there 2 different event loops created? Thanks for your time guys!
The main thing going on in your example is that you only ever start one loop. When you get the event loop the second time, get_event_loop() still returns the same loop, and that loop still has a pending task that will not start until the loop is running.
The important thing to note here is that you have one thread. That thread's execution can create tasks, save event loops to variables, etc., but once you start the event loop, that thread is now working on that event loop. The code won't continue past the line where you started the loop until the loop ends. To have multiple event loops you would essentially need multiple threads.
You can see this play out with this example:
test1.py
import asyncio

async def a_task():
    while True:
        await asyncio.sleep(5)
        print('a_task did something')

loop = asyncio.get_event_loop()
print('Got Event loop in test1.py')
loop.create_task(a_task())
print('Created task in test1.py')
test2.py
import asyncio
import test1
loop = asyncio.get_event_loop()
print('Got event loop in test2.py')
print('Starting Event loop in test2.py')
loop.run_forever()
print('Event loop ended')
Output
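When you run test2.py, both modules get the same loop, and the pending task only starts producing output once run_forever() is reached (the last line then repeats every 5 seconds):

Got Event loop in test1.py
Created task in test1.py
Got event loop in test2.py
Starting Event loop in test2.py
a_task did something
a_task did something
...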
My project requires me to run blocking code (from another library) whilst continuing my asyncio while True loop. The code looks something like this:
import asyncio
import aiohttp

async def main():
    while True:
        session_timeout = aiohttp.ClientTimeout()
        async with aiohttp.ClientSession() as session:
            # Do async stuff like session.get and so on.
            # At a certain point, I have blocking code that I need to execute.
            # blocking_code() starts here; it needs time to get its return value.
            # Running blocking_code() is the last thing to do in my main() function.
            # My objective is to run the blocking code separately,
            # such that whilst blocking_code() runs, my loop starts from the beginning again
            # instead of waiting until blocking_code() completes and returns.
            # In other words, go back to the top of the while loop.
            # Separately, blocking_code() will continue to run independently and eventually
            # complete and return. When it returns, nothing in main() will need the return
            # value; the returned result continues to be used inside blocking_code().
            blocking_code()

asyncio.run(main())
I have tried using pool = ThreadPool(processes=1) and thread = pool.apply_async(blocking_code, params). It sort of works if there are things that need to be done after blocking_code() within main(); but since blocking_code() is the last thing in main(), it causes the whole while loop to pause until blocking_code() completes before starting back from the top.
I don't know if this is possible, and if it is, how it's done; but the ideal scenario is this.
Run main(), then run blocking_code() in its own instance, as if executing another .py file. So once the loop reaches blocking_code() in main(), it triggers the blocking_code.py file, and whilst the blocking_code.py script runs, the while loop continues from the top again.
If, by the time the while loop reaches blocking_code() on the 2nd pass, the previous run has not completed, another instance of blocking_code() will run on its own, independently.
Does what I say make sense? Is it possible to achieve the desired outcome?
Thank you!
This is possible with threads. So that you don't block your main loop, you'll need to wrap your thread in an asyncio task. You can wait for the return values once your loop is finished if you need to. You can do this with a combination of asyncio.create_task and asyncio.to_thread.
import aiohttp
import asyncio
import time

def blocking_code():
    print('Starting blocking code.')
    time.sleep(5)
    print('Finished blocking code.')

async def main():
    blocking_code_tasks = []
    while True:
        session_timeout = aiohttp.ClientTimeout()
        async with aiohttp.ClientSession() as session:
            print('Executing GET.')
            result = await session.get('https://www.example.com')
        blocking_code_task = asyncio.create_task(asyncio.to_thread(blocking_code))
        blocking_code_tasks.append(blocking_code_task)
    # do something with blocking_code_tasks, wait for them to finish, extract errors, etc.

asyncio.run(main())
The above code runs the blocking code in a thread and then wraps that in an asyncio task. We then add this task to the blocking_code_tasks list to keep track of all the currently running tasks. Later on, you can get the values or errors out with something like asyncio.gather.
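For example, once you break out of the while loop, a minimal sketch of that final step could look like this; return_exceptions=True makes gather collect exceptions alongside normal results instead of raising the first one:

# Inside main(), once the while loop has been exited:
results = await asyncio.gather(*blocking_code_tasks, return_exceptions=True)
for result in results:
    if isinstance(result, Exception):
        print(f'blocking_code task failed: {result!r}')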
I have a script in which a Slow and a Fast function process the same global object array. The Slow function fills up the array with new objects based on resource-intensive calculations; the Fast function only iterates the existing objects in the array and maintains/displays them. The Slow function only needs to run every few seconds, but it is imperative that the Fast function run as frequently as possible. I tried using asyncio and ensure_future to call the Slow function, but the result was that the Fast (main) function ran until I stopped it, and only at the end was the Slow function called. I need the Slow function to start running in the background the instant it is called and complete whenever it can, without blocking the call of the Fast function. Can you help me please?
Thank you!
An example of what I tried:
import asyncio

variable = []

async def slow():
    temp = get_new_objects()  # resource intensive
    global variable
    variable = temp

async def main():
    while True:  # Looping
        if need_to_run_slow:  # Only run sometimes
            asyncio.ensure_future(slow())
        do_fast_stuff_with(variable)  # fast part

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()
asyncio.ensure_future(slow()) only schedules slow() to run at the next pass of the event loop. Since your while loop doesn't await anything that can actually block, you are not giving the event loop a chance to run.
You can work around the issue by awaiting asyncio.sleep(0) on each pass of the while loop:
async def main():
    while True:
        if need_to_run_slow:
            asyncio.ensure_future(slow())
        await asyncio.sleep(0)
        do_fast_stuff_with(variable)
The no-op sleep ensures that every iteration of the while loop (and each run of the "fast" function) gives a previously scheduled slow() a chance to make progress.
However, your slow() doesn't await either, so all of its code will run in a single iteration, which makes the above equivalent to the much simpler:
def main():
    while True:
        slow()  # slow() is an ordinary function
        do_fast_stuff_with(variable)
A code example closer to your actual use case would probably result in a more directly usable answer.
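In the meantime, if get_new_objects() is the genuinely blocking, resource-intensive piece, one common approach is to hand it to a worker thread with run_in_executor, so the event loop (and with it the fast code) keeps running while the calculation proceeds. A sketch, assuming get_new_objects() is an ordinary blocking function:

async def slow():
    global variable
    loop = asyncio.get_event_loop()
    # Run the blocking call in a worker thread; the event loop stays
    # free to run the fast code while the calculation proceeds.
    variable = await loop.run_in_executor(None, get_new_objects)

The fast loop still needs to await something (such as the asyncio.sleep(0) above) so the scheduled slow() actually gets time on the loop.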
I am creating a cryptocurrency exchange API client using Python 3.5 and Tkinter. I have several displays that I want to update asynchronously every 10 seconds. I am able to update the displays every 10 seconds using Tk.after(), like in this example:
def updateLoans():
    offers = dd.loanOffers()
    demands = dd.loanDemands()
    w.LoanOfferView.delete(1.0, END)
    w.LoanDemandView.delete(1.0, END)
    w.LoanOfferView.insert(END, offers)
    w.LoanDemandView.insert(END, demands)
    print('loans refreshed')
    root.after(10000, updateLoans)
In order for the after method to continue updating every 10 seconds, the function updateLoans() needs to be passed as a callable into after() inside the function itself.
Now the part that is stumping me: when I make this function asynchronous with Python's new async and await keywords,
async def updateLoans():
    offers = await dd.loanOffers()
    demands = await dd.loanDemands()
    w.LoanOfferView.delete(1.0, END)
    w.LoanDemandView.delete(1.0, END)
    w.LoanOfferView.insert(END, offers)
    w.LoanDemandView.insert(END, demands)
    print('loans refreshed')
    root.after(10000, updateLoans)
The problem here is that I cannot await a callable inside the parameters of the after method, so I get a runtime warning: RuntimeWarning: coroutine 'updateLoans' was never awaited.
My initial function call IS placed inside of an event loop.
loop = asyncio.get_event_loop()
loop.run_until_complete(updateLoans())
loop.close()
The display populates just fine initially but never updates.
How can I use Tk.after to continuously update a tkinter display asynchronously?
tk.after accepts a normal function, not a coroutine. To run the coroutine to completion, you can use run_until_complete, just as you did the first time:
loop = asyncio.get_event_loop()
root.after(10000, lambda: loop.run_until_complete(updateLoans()))
Also, don't call loop.close(), since you'll need the loop again.
The above quick fix will work fine for many use cases. The fact is, however, that it will render the GUI completely unresponsive if updateLoans() takes a long time due to slow network or a problem with the remote service. A good GUI app will want to avoid this.
While Tkinter and asyncio cannot share an event loop yet, it is perfectly possible to run the asyncio event loop in a separate thread. The main thread then runs the GUI, while a dedicated asyncio thread runs all asyncio coroutines. When the event loop needs to notify the GUI to refresh something, it can use a queue as shown here. On the other hand, if the GUI needs to tell the event loop to do something, it can call call_soon_threadsafe or run_coroutine_threadsafe.
Example code (untested):
import asyncio
import queue
import threading

gui_queue = queue.Queue()

async def updateLoans():
    while True:
        offers = await dd.loanOffers()
        demands = await dd.loanDemands()
        print('loans obtained')
        gui_queue.put(lambda: updateLoansGui(offers, demands))
        await asyncio.sleep(10)

def updateLoansGui(offers, demands):
    w.LoanOfferView.delete(1.0, END)
    w.LoanDemandView.delete(1.0, END)
    w.LoanOfferView.insert(END, offers)
    w.LoanDemandView.insert(END, demands)
    print('loans GUI refreshed')

# http://effbot.org/zone/tkinter-threads.htm
def periodicGuiUpdate():
    while True:
        try:
            fn = gui_queue.get_nowait()
        except queue.Empty:
            break
        fn()
    root.after(100, periodicGuiUpdate)

# Run the asyncio event loop in a worker thread.
def start_loop():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.create_task(updateLoans())
    loop.run_forever()

threading.Thread(target=start_loop).start()

# Run the GUI main loop in the main thread.
periodicGuiUpdate()
root.mainloop()

# To stop the event loop, call loop.call_soon_threadsafe(loop.stop).
# To start a coroutine from the GUI, call asyncio.run_coroutine_threadsafe.
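To illustrate that last comment: a Tkinter callback could submit a coroutine to the worker loop like this (a sketch; refresh_now() is a hypothetical coroutine, and loop would need to be made accessible outside start_loop, e.g. stored in a global):

def on_refresh_click():
    # Runs in the Tkinter thread: schedule the (hypothetical) coroutine
    # on the asyncio thread and get back a concurrent.futures.Future.
    future = asyncio.run_coroutine_threadsafe(refresh_now(), loop)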
Hello, I am fairly new to Python and I am trying to convert an existing application I have on Flask into Quart (https://gitlab.com/pgjones/quart), which is supposed to be built on top of asyncio, so I can use the Goblin OGM to interact with JanusGraph or TinkerPop. According to the examples I found for Goblin, I need to obtain an event loop to run the commands asynchronously.
>>> import asyncio
>>> from goblin import Goblin
>>> loop = asyncio.get_event_loop()
>>> app = loop.run_until_complete(
... Goblin.open(loop))
>>> app.register(Person, Knows)
However, I can't find a way to obtain the event loop from Quart, even though it is built on top of asyncio.
Does anyone know how I can get that? Any help will be highly appreciated.
TL;DR To obtain the event loop, call asyncio.get_event_loop().
In an asyncio-based application, the event loop is typically not owned by Quart or any other protocol/application-level component; it is provided by asyncio, or possibly by an accelerator like uvloop. The event loop is obtained by calling asyncio.get_event_loop(), and sometimes set with asyncio.set_event_loop().
This is what quart's app.run() uses to run the application, which means it works with the default event loop created by asyncio for the main thread. In your case you could simply call quart's run() after registering Goblin:
loop = asyncio.get_event_loop()
goblin_app = loop.run_until_complete(Goblin.open(loop))
goblin_app.register(Person, Knows)
quart_app = Quart(...)
# ... @app.route, etc.
# now they both run in the same event loop
quart_app.run()
The above should answer the question in the practical sense. But that approach wouldn't work if more than one component insisted on having its own run() method that spins the event loop; since app.run() doesn't return, you can only invoke one such function per thread.
If you look more closely, though, that is not really the case with quart either. While Quart examples do use app.run() to serve the application, if you take a look at the implementation of app.run(), you will see that it calls the convenience function run_app(), which trivially creates a server and spins up the main loop forever:
def run_app(...):
    loop = asyncio.get_event_loop()
    # ...
    create_server = loop.create_server(
        lambda: Server(app, loop, ...), host, port, ...)
    server = loop.run_until_complete(create_server)
    # ...
    loop.run_forever()
If you need to control how the event loop is actually run, you can always do it yourself:
# obtain the event loop from asyncio
loop = asyncio.get_event_loop()

# hook Goblin to the loop
goblin_app = loop.run_until_complete(Goblin.open(loop))
goblin_app.register(Person, Knows)

# hook Quart to the loop
quart_server = loop.run_until_complete(loop.create_server(
    lambda: quart.serving.Server(quart_app, loop), host, port))

# actually run the loop (and the program)
try:
    loop.run_forever()
except KeyboardInterrupt:  # pragma: no cover
    pass
finally:
    quart_server.close()
    loop.run_until_complete(quart_server.wait_closed())
    loop.run_until_complete(loop.shutdown_asyncgens())
    loop.close()
I want to call streamSimulation four times split among 2 threads.
How can I create a second loop, create a second thread and execute the loop in that thread?
import asyncio
import functools
from concurrent.futures import ThreadPoolExecutor

async def streamSimulation(p1, p2, p3, p4):
    print("Stream init")
    while True:
        await asyncio.sleep(2)
        print("Stream Simulation")
        print("Params: " + p1 + p2 + p3 + p4)
        doSomething()

def doSomething():
    print("Did something")

def main():
    loop = asyncio.get_event_loop()

    # Supposed to run in first thread
    asyncio.ensure_future(streamSimulation("P1", "P2", "P3", "P4"))
    asyncio.ensure_future(streamSimulation("A1", "A2", "A3", "A4"))

    # Supposed to run in second thread
    asyncio.ensure_future(streamSimulation("Q1", "Q2", "Q3", "Q4"))
    asyncio.ensure_future(streamSimulation("B1", "B2", "B3", "B4"))

    loop.run_forever()

main()
Your idea conflicts with the asynchronous way of doing things, sorry.
In general you need a single event loop in the main thread and a thread pool for executing CPU-bound tasks.
The reason for the single loop is that it is IO-bound: the code executed by the loop should never block, except to wait for IO/timer events.
It means two loops will not give a performance boost: they are both blocked by the kernel IO subsystem.
The only exception is making two different kinds of event loop work together, e.g. asyncio and Qt (but for this particular case there is the quamash project).
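A minimal sketch of that single-loop arrangement, reusing the names from the question and assuming doSomething() stands in for the CPU-bound part (handed to the default thread pool via run_in_executor so the coroutines never block the loop):

import asyncio

def doSomething():
    print("Did something")  # stand-in for CPU-bound work

async def streamSimulation(p1, p2, p3, p4):
    print("Stream init")
    loop = asyncio.get_event_loop()
    while True:
        await asyncio.sleep(2)
        print("Params: " + p1 + p2 + p3 + p4)
        # Off-load the CPU-bound call to the default ThreadPoolExecutor;
        # the event loop keeps serving the other coroutines meanwhile.
        await loop.run_in_executor(None, doSomething)

loop = asyncio.get_event_loop()
# All four simulations run concurrently in the single loop; no second
# thread is needed for concurrency between IO-bound coroutines.
for params in [("P1", "P2", "P3", "P4"), ("A1", "A2", "A3", "A4"),
               ("Q1", "Q2", "Q3", "Q4"), ("B1", "B2", "B3", "B4")]:
    asyncio.ensure_future(streamSimulation(*params))
loop.run_forever()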