I am using asyncio for a network framework.
In the code below (low_level is our low-level function, the main block is our program entry point, and user_func is a user-defined function):
import asyncio

loop = asyncio.get_event_loop()
""":type :asyncio.AbstractEventLoop"""

def low_level():
    yield from asyncio.sleep(2)

def user_func():
    yield from low_level()

if __name__ == '__main__':
    co = user_func()
    loop.run_until_complete(co)
I want to wrap low_level as a normal function rather than a coroutine (for compatibility, etc.), but low_level runs in the event loop. How can I wrap it as a normal function?
Because low_level is a coroutine, it can only be used by running an asyncio event loop. If you want to be able to call it from synchronous code that isn't running an event loop, you have to provide a wrapper that actually launches an event loop and runs the coroutine until completion:
def sync_low_level():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(low_level())
If you want to be able to call low_level() from a function that is part of the running event loop, have it block for two seconds, but not have to use yield from, the answer is that you can't. The event loop is single-threaded; whenever execution is inside one of your functions, the event loop is blocked. No other events or callbacks can be processed. The only ways for a function running in the event loop to give control back to the event loop are to 1) return, or 2) use yield from. The asyncio.sleep call in low_level will never be able to complete unless you do one of those two things.
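To make that single-threaded behavior concrete, here is a small illustrative sketch in the same pre-3.5 generator style (the ticker and *_low_level names are made up): a plain time.sleep inside a coroutine freezes every other task, while yield from asyncio.sleep lets them keep running.
import asyncio
import time

@asyncio.coroutine
def ticker():
    for _ in range(3):
        print("tick")
        yield from asyncio.sleep(1)

@asyncio.coroutine
def blocking_low_level():
    time.sleep(2)  # never yields: ticker() stalls for these 2 seconds

@asyncio.coroutine
def cooperative_low_level():
    yield from asyncio.sleep(2)  # yields to the loop: ticker() keeps ticking

loop = asyncio.get_event_loop()
# Swap blocking_low_level() for cooperative_low_level() to see the difference.
loop.run_until_complete(asyncio.gather(ticker(), blocking_low_level()))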
Now, I suppose you could create an entirely new event loop, and use that to run the sleep synchronously from a coroutine running as part of the default event loop:
import asyncio

loop = asyncio.get_event_loop()

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def sync_low_level():
    new_loop = asyncio.new_event_loop()
    new_loop.run_until_complete(low_level(loop=new_loop))

@asyncio.coroutine
def user_func():
    sync_low_level()

if __name__ == "__main__":
    loop.run_until_complete(user_func())
But I'm really not sure why you'd want to do that.
If you just want low_level to act like a function returning a Future, so that you can attach callbacks etc. to it, just wrap it in asyncio.async():
loop = asyncio.get_event_loop()

def sleep_done(fut):
    print("Done sleeping")
    loop.stop()

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def user_func():
    fut = asyncio.async(low_level())
    fut.add_done_callback(sleep_done)

if __name__ == "__main__":
    loop.call_soon(user_func)
    loop.run_forever()
Output:
<2 second delay>
"Done sleeping"
Also, in your example code, you should use the @asyncio.coroutine decorator for both low_level and user_func, as stated in the asyncio docs:
A coroutine is a generator that follows certain conventions. For
documentation purposes, all coroutines should be decorated with
@asyncio.coroutine, but this cannot be strictly enforced.
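For reference, the question's example with those decorators applied would look like this (same code as above, just decorated):
import asyncio

loop = asyncio.get_event_loop()

@asyncio.coroutine
def low_level():
    yield from asyncio.sleep(2)

@asyncio.coroutine
def user_func():
    yield from low_level()

if __name__ == '__main__':
    co = user_func()
    loop.run_until_complete(co)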
Edit:
Here's how a user of a synchronous web framework could call into your application without blocking other requests:
import asyncio
import threading

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def thr_low_level():
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=loop.run_until_complete, args=(low_level(loop=loop),))
    t.start()
    t.join()
If a request being handled by Flask calls thr_low_level, it will block until low_level is done, but the GIL should be released for all of the asynchronous I/O going on in low_level, allowing other requests to be handled in separate threads.
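For illustration, a rough sketch of what the calling side could look like in a Flask view (the route name and response are made up for the example):
from flask import Flask

app = Flask(__name__)

@app.route("/slow")
def slow_endpoint():
    # Blocks only this request's worker thread for ~2 seconds
    # while the helper thread's event loop runs low_level().
    thr_low_level()
    return "done"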
Related
Is there a way to call an async function from a sync one without waiting for it to complete?
My current tests:
Issue: Waits for test_timer_function to complete
async def test_timer_function():
    await asyncio.sleep(10)
    return

def main():
    print("Starting timer at {}".format(datetime.now()))
    asyncio.run(test_timer_function())
    print("Ending timer at {}".format(datetime.now()))
Issue: Does not call test_timer_function
async def test_timer_function():
    await asyncio.sleep(10)
    return

def main():
    print("Starting timer at {}".format(datetime.now()))
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    asyncio.ensure_future(test_timer_function())
    print("Ending timer at {}".format(datetime.now()))
Any suggestions?
Async functions do not really run in the background: they always run in a single thread.
That means that when there are parallel tasks in (normal) async code, they only get executed when you give the asyncio loop a chance to run: this happens when your code uses await, uses async for or async with, or returns from a coroutine that is running as a task.
In non-async code, you have to enter the loop and pass control to it in order for the async code to run - that is what asyncio.run does and asyncio.ensure_future does not: ensure_future just registers a task to be executed whenever the asyncio loop has time for it, but here you return from the function without ever passing control to the loop, so your program just finishes.
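To see why ensure_future alone does nothing here, consider this sketch (illustrative names, not your code): the scheduled task only runs because an async main awaits it, which hands control back to the loop - and the synchronous caller still blocks until asyncio.run returns.
import asyncio
from datetime import datetime

async def test_timer_function():
    await asyncio.sleep(2)

async def async_main():
    # ensure_future schedules the coroutine on the running loop;
    # awaiting the task passes control back so the loop can run it.
    task = asyncio.ensure_future(test_timer_function())
    await task

def main():
    print("Starting timer at {}".format(datetime.now()))
    asyncio.run(async_main())  # still blocks main for ~2 seconds
    print("Ending timer at {}".format(datetime.now()))

main()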
One thing that can be done is to establish a secondary thread, where the asyncio code will run: this thread will run its asyncio loop, and you can communicate with tasks in it by using global variables and normal thread data structures like Queues.
The minimal changes for your code are:
import asyncio
import threading
from datetime import datetime

now = datetime.now

async def test_timer_function():
    await asyncio.sleep(2)
    print(f"ending async task at {now()}")
    return

def run_async_loop_in_thread():
    asyncio.run(test_timer_function())

def main():
    print(f"Starting timer at {now()}")
    t = threading.Thread(target=run_async_loop_in_thread)
    t.start()
    print(f"Ending timer at {now()}")
    return t

if __name__ == "__main__":
    t = main()
    t.join()
    print(f"asyncio thread exited normally at {now()}")
(Please, when posting Python code, include the import lines and the lines that call your functions, so the code actually runs: it is not a lot of boilerplate like it may be in other languages, and it turns your snippets into complete, ready-to-run examples.)
printout when running this snippet at the console:
Starting timer at 2022-10-20 16:47:45.211654
Ending timer at 2022-10-20 16:47:45.212630
ending async task at 2022-10-20 16:47:47.213464
asyncio thread exited normally at 2022-10-20 16:47:47.215417
The answer is simply no. It's not going to happen in a single thread.
First issue:
In your first issue, main() is a sync function. It stops at the line asyncio.run(test_timer_function()) until the event loop finishes its work.
Its only task is test_timer_function. That task does give control back to the event loop, but not to the caller main. If the event loop had other tasks, they would cooperate with each other - but only among the tasks of the event loop, not between the event loop and the caller.
So it will wait 10 seconds. There is no other task here that could use those 10 seconds to do its own work.
Second issue:
You didn't even run the event loop. Check documentation for ensure_future.
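A minimal sketch of the second snippet with the loop actually run, keeping the explicit-loop style from the question (note that this still blocks the caller until the task finishes):
import asyncio
from datetime import datetime

async def test_timer_function():
    await asyncio.sleep(2)

def main():
    print("Starting timer at {}".format(datetime.now()))
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    future = asyncio.ensure_future(test_timer_function())
    # Without running the loop, the scheduled task never gets a chance to execute.
    loop.run_until_complete(future)
    loop.close()
    print("Ending timer at {}".format(datetime.now()))

main()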
How can I change this code to run on_some_event with g() instead of f() ?
import threading

def f(data):
    pass

async def g(data):
    pass

async def on_some_event(data):
    threads = []
    for i in data:
        threads.append(threading.Thread(target=f, args=(i,)))
        threads[-1].start()
    for i in threads:
        i.join()
What should I use to execute async functions concurrently in an async function?
To run an async function (coroutine) you have to call it using an Event Loop.
Event Loops:
You can think of an event loop as the machinery that runs asynchronous tasks and
callbacks, performs network IO operations, and runs subprocesses.
For example, here is how to use an event loop to run a single async function:
import asyncio

loop = asyncio.get_event_loop()
loop.run_until_complete(on_some_event('hello'))
loop.close()
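To actually run g() once per item concurrently inside on_some_event, here is a sketch using asyncio.gather (assuming g does real async work; the sleep is just a stand-in):
import asyncio

async def g(data):
    await asyncio.sleep(1)  # stand-in for real asynchronous work

async def on_some_event(data):
    # Schedule one coroutine per item and wait for all of them together.
    await asyncio.gather(*(g(i) for i in data))

asyncio.run(on_some_event('hello'))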
I have worked through most examples but am still learning async in Python. I am having trouble understanding why this example code will not print "i am async!".
import asyncio
from threading import Thread

async def cor1():
    print("i am async!")

def myasync(loop):
    print("Async running")
    loop.run_forever()
    print("Async ended?")

def main():
    this_threads_event_loop = asyncio.get_event_loop()
    t_1 = Thread(target=myasync, args=(this_threads_event_loop,))
    t_1.start()
    print("begining the async loop")
    t1 = this_threads_event_loop.create_task(cor1())
    print("Finsihed cor1")

main()
Your code attempts to submit tasks to the event loop from a different thread. To do that, you must use run_coroutine_threadsafe:
def main():
    loop = asyncio.get_event_loop()
    # start the event loop in a separate thread
    t_1 = Thread(target=myasync, args=(loop,))
    t_1.start()
    # submit the coroutine to the event loop running in the
    # other thread
    f1 = asyncio.run_coroutine_threadsafe(cor1(), loop)
    # wait for the coroutine to finish, by asking for its result
    f1.result()
    print("Finsihed cor1")
Please note that combining asyncio and threads should only be done in special circumstances, such as when introducing asyncio to a legacy application where the new functionality needs to be added gradually. If you are writing new code, you almost certainly want main to be a coroutine, run from the top level using asyncio.run(main()).
To run a legacy synchronous function from asyncio code, you can always use run_in_executor.
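A minimal sketch of that pattern (the blocking function and its argument are made up for illustration):
import asyncio
import time

def legacy_blocking_call(seconds):
    # Stand-in for a legacy synchronous function.
    time.sleep(seconds)
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool executor
    # so it does not stall the event loop.
    result = await loop.run_in_executor(None, legacy_blocking_call, 2)
    print(result)

asyncio.run(main())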
At times I need more than one asyncio coroutine, with the routines nested: coroutine B running in coroutine A, C in B, and so on. The problem is stopping a given loop. For example, using loop.stop() in the innermost loop, such as loop 'C', actually kills all asyncio coroutines - not just loop 'C'. I suspect that stop() actually kills coroutine A, and by doing so annihilates all the dependent routines. The nested routines are started with call_soon_threadsafe, and all of them run with run_forever.
I tried using specific loop names, return, and break (in the while loop inside the coroutine), but nothing exits the loop - except stop(), which then indiscriminately kills all loops at once.
The problem I describe here is actually related to an earlier question of mine...
python daemon server crashes during HTML popup overlay callback using asyncio websocket coroutines
...which I thought I had solved - until running into this loop.stop() problem.
Below is my example code for Python 3.4.3, where I try to stop() the coroutine_overlay_websocket_server loop as soon as it is done with the websocket job. As said, my code in its current state breaks all running loops. Thereafter fmDaemon recreates a new asyncio loop that knows nothing of what was computed before:
import webbrowser
import websockets
import asyncio

class fmDaemon(Daemon):

    # Daemon - see : http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
    # Daemon - see : http://www.jejik.com/files/examples/daemon3x.py

    def __init__(self, me):
        self.me = me

    def run(self):
        while True:

            @asyncio.coroutine
            def coroutine_daemon_websocket_server(websocket, path):
                msg = yield from websocket.recv()
                if msg != None:
                    msg_out = "{}".format(msg)
                    yield from websocket.send(msg_out)
                    self.me = Function1(self.me, msg)

            loop = asyncio.get_event_loop()
            loop.run_until_complete(websockets.serve(coroutine_daemon_websocket_server, self.me.IP, self.me.PORT))
            loop.run_forever()

def Function1(me, msg):
    # doing some stuff :
    # creating HTML file,
    # loading HTML webpage with a webbrowser call,
    # awaiting HTML button press signal via websocket protocol :

    @asyncio.coroutine
    def coroutine_overlay_websocket_server(websocket, path):
        while True:
            msg = yield from websocket.recv()
            msg_out = "{}".format(msg)
            yield from websocket.send(msg_out)
            if msg == 'my_expected_string':
                me.flags['myStr'] = msg
                break
        loop.call_soon_threadsafe(loop.stop)

    loop = asyncio.get_event_loop()
    loop.call_soon_threadsafe(asyncio.async, websockets.serve(coroutine_overlay_websocket_server, me.IP, me.PORT_overlay))
    loop.run_forever()
    loop.call_soon_threadsafe(loop.close)
    # program should continue here...
My two questions: 1) Is there a way to exit a given coroutine without killing the coroutines lower down? 2) Or, alternatively, do you know of a method for reading websocket calls that does not use asyncio?
I'm still a little confused by what you're trying to do, but there's definitely no need to try to nest event loops - your program is single-threaded, so when you call asyncio.get_event_loop() multiple times, you're always going to get the same event loop back. So you're really not creating two different loops in your example; both fmDaemon.run and Function1 use the same one. That's why stopping loop inside Function1 also kills the coroutine you launched inside run.
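A quick standalone illustration of that point (not the question's code):
import asyncio

loop_a = asyncio.get_event_loop()
loop_b = asyncio.get_event_loop()

# In a single-threaded program both calls return the very same loop object.
assert loop_a is loop_b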
That said, there's no reason to try to create two different event loops to begin with. Function1 is being called from a coroutine, and wants to call other coroutines, so why not make it a coroutine, too? Then you can just call yield from websockets.serve(...) directly, and use an asyncio.Event to wait for coroutine_overlay_websocket_server to complete:
import webbrowser
import websockets
import asyncio

class fmDaemon(Daemon):

    def __init__(self, me):
        self.me = me

    def run(self):

        @asyncio.coroutine
        def coroutine_daemon_websocket_server(websocket, path):
            msg = yield from websocket.recv()
            if msg != None:
                msg_out = "{}".format(msg)
                yield from websocket.send(msg_out)
                self.me = yield from Function1(self.me, msg)

        loop = asyncio.get_event_loop()
        loop.run_until_complete(websockets.serve(coroutine_daemon_websocket_server,
                                                 self.me.IP,
                                                 self.me.PORT))
        loop.run_forever()

@asyncio.coroutine
def Function1(me, msg):

    @asyncio.coroutine
    def coroutine_overlay_websocket_server(websocket, path):
        while True:
            msg = yield from websocket.recv()
            msg_out = "{}".format(msg)
            yield from websocket.send(msg_out)
            if msg == 'my_expected_string':
                me.flags['myStr'] = msg
                break
        event.set()  # Tell the outer function it can exit.

    event = asyncio.Event()
    yield from websockets.serve(coroutine_overlay_websocket_server, me.IP, me.PORT_overlay)
    yield from event.wait()  # This will block until event.set() is called.