Is there a way to call an async function from a sync one without waiting for it to complete?
My current tests:
Issue: Waits for test_timer_function to complete
async def test_timer_function():
    await asyncio.sleep(10)
    return

def main():
    print("Starting timer at {}".format(datetime.now()))
    asyncio.run(test_timer_function())
    print("Ending timer at {}".format(datetime.now()))
Issue: Does not call test_timer_function
async def test_timer_function():
    await asyncio.sleep(10)
    return

def main():
    print("Starting timer at {}".format(datetime.now()))
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    asyncio.ensure_future(test_timer_function())
    print("Ending timer at {}".format(datetime.now()))
Any suggestions?
Async functions do not really run in the background: they always run in a single thread.
That means that when there are concurrent tasks in (normal) async code, they only make progress when you give the asyncio loop a chance to run - and that happens when your code uses await, enters an async for or async with, or returns from a coroutine that is running as a task.
In non-async code, you have to enter the loop and pass control to it for the async code to run - that is what asyncio.run does, and asyncio.ensure_future does not: it just registers a task to be executed whenever the asyncio loop has time for it. In your second snippet you return from main without ever passing control to the loop, so your program simply finishes.
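Just to illustrate that point (this still waits; it only shows where the coroutine actually runs), the second snippet would only do anything once the loop is given control, e.g.:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
task = asyncio.ensure_future(test_timer_function())
loop.run_until_complete(task)  # only here does the coroutine actually execute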
One thing that can be done is to start a secondary thread where the asyncio code will run: that thread runs its own asyncio loop, and you can communicate with the tasks in it through global variables and ordinary thread-safe data structures like Queues.
The minimal changes to your code are:
import asyncio
import threading
from datetime import datetime

now = datetime.now

async def test_timer_function():
    await asyncio.sleep(2)
    print(f"ending async task at {now()}")
    return

def run_async_loop_in_thread():
    asyncio.run(test_timer_function())

def main():
    print(f"Starting timer at {now()}")
    t = threading.Thread(target=run_async_loop_in_thread)
    t.start()
    print(f"Ending timer at {now()}")
    return t

if __name__ == "__main__":
    t = main()
    t.join()
    print(f"asyncio thread exited normally at {now()}")
(Please, when posting Python code, include the import lines and the lines that call your functions so the code actually runs: it is not a lot of boilerplate, as it might be in other languages, and it turns your snippets into complete, ready-to-run examples.)
printout when running this snippet at the console:
Starting timer at 2022-10-20 16:47:45.211654
Ending timer at 2022-10-20 16:47:45.212630
ending async task at 2022-10-20 16:47:47.213464
asyncio thread exited normally at 2022-10-20 16:47:47.215417
The answer is simply no. It cannot happen in a single thread.
First issue:
In your first issue, main() is a sync function. It blocks at the line asyncio.run(test_timer_function()) until the event loop finishes its work.
And what is the loop's only task? test_timer_function! That task does give control back to the event loop, but not to the caller main. So if the event loop had other tasks too, they would cooperate with each other - but only among the tasks inside the event loop, not between the event loop and the caller.
So it will wait 10 seconds. There is no other task here that could use those 10 seconds to do its work.
Second issue:
You didn't even run the event loop. Check the documentation for ensure_future.
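If the goal is to start the timer and keep doing other work, that is only possible from inside async code, while a loop is running. A minimal sketch using asyncio.create_task (which requires a running loop):
async def main():
    task = asyncio.create_task(test_timer_function())  # scheduled, not awaited yet
    print("doing other work while the timer runs")
    await asyncio.sleep(1)   # the timer task only progresses while we await something
    await task               # wait for it before the program exits

asyncio.run(main())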
Related
I read the following code
async def f():
    client = session.client("ec2")
    for id in ids:
        await IOLoop.current().run_in_executor(None, lambda: client.terminate(id))
How does it compare to the following code? Will client.terminate be run in parallel? But each execution is awaited?
for id in ids:
    client.terminate(id)
Will client.terminate be run in parallel?
No, it still runs sequentially.
IOLoop.current().run_in_executor runs the blocking function in a separate thread and returns an asyncio.Future, and await waits until that Future (the client.terminate call) finishes before the loop continues.
The difference between the two options you've given is:
If the program has other coroutines to run, with the first option those coroutines are not blocked, while with the second option they have to wait until your for loop finishes.
An example to make this clear (here loop.run_in_executor is used in place of IOLoop.current().run_in_executor for simplicity's sake):
test.py:
import asyncio
import concurrent.futures
import time

def client_terminate(id):
    print(f"start terminate {id}")
    time.sleep(5)
    print(f"end terminate {id}")

async def f():
    loop = asyncio.get_running_loop()
    for id in range(2):
        with concurrent.futures.ThreadPoolExecutor() as pool:
            await loop.run_in_executor(pool, client_terminate, id)
            # client_terminate(id)

async def f2():
    await asyncio.sleep(1)
    print("other task")

async def main():
    await asyncio.gather(*[f(), f2()])

asyncio.run(main())
The run output is:
$ python3 test.py
start terminate 0
other task
end terminate 0
start terminate 1
end terminate 1
You can see that the two client_terminate calls in the for loop still run in sequence, BUT the function f2, which prints other task, runs in between them: the for loop does not block the asyncio scheduler from scheduling f2.
Additional:
If you comment out the two lines related to await loop.run_in_executor and the thread pool, and call client_terminate(id) directly, the output will be:
$ python3 test.py
start terminate 0
end terminate 0
start terminate 1
end terminate 1
other task
This means that if you don't wrap the blocking function in a Future, the other task has to wait for your for loop to finish, which keeps the event loop idle for no reason.
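If you actually wanted the two client_terminate calls to run in parallel (which is not what the snippet above does), one option is to submit both to the executor first and then gather the futures - a rough sketch reusing the names above:
async def f_parallel():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # submit both calls before awaiting, so they run in the pool at the same time
        futures = [loop.run_in_executor(pool, client_terminate, id) for id in range(2)]
        await asyncio.gather(*futures)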
I'm coding a Telegram userbot (with Telethon) which sends a message, every 60 seconds, to some chats.
I'm using 2 threads but I get the following errors: "RuntimeWarning: coroutine 'sender' was never awaited" and "no running event loop".
My code:
....
async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

schedule.every(60).seconds.do(asyncio.create_task(sender()))
...

class checker1(Thread):
    def run(self):
        while True:
            schedule.run_pending()
            time.sleep(1)

class checker2(Thread):
    def run(self):
        while True:
            client.add_event_handler(handler)
            client.run_until_disconnected()

checker2().start()
checker1().start()
I searched for a solution but I didn't find anything...
You should avoid using threads with asyncio unless you know what you're doing. The code can be rewritten using asyncio as follows, since most of the time you don't actually need threads:
import asyncio

async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

async def checker1():
    while True:
        await sender()
        await asyncio.sleep(60)  # every 60s

async def main():
    task = asyncio.create_task(checker1())  # start the background task without waiting for it
    await client.run_until_disconnected()

client.loop.run_until_complete(main())
This code is not perfect (you should properly cancel and wait for checker1 at the end of the program), but it should work.
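A rough sketch of that "proper" shutdown, reusing the names above (the cancellation is swallowed on purpose):
async def main():
    task = asyncio.create_task(checker1())      # background task
    await client.run_until_disconnected()
    task.cancel()                               # stop the 60-second sender loop
    try:
        await task
    except asyncio.CancelledError:
        pass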
As a side note, you don't need client.run_until_disconnected(). The call simply blocks (runs) until the client is disconnected. If you can keep the program running some other way, the client will keep working as long as asyncio is running.
Another thing: a bare except: is a very bad idea and will probably cause problems with exceptions. At the very least, replace it with except Exception.
There are a few problems with your code. asyncio complains about "no running event loop" because your program never starts the event loop anywhere, and tasks cannot be scheduled without a running event loop. See Asyncio in corroutine RuntimeError: no running event loop. To start the event loop, you can use loop.run_until_complete() if you have a main coroutine for your program, or loop.run_forever() to run the event loop forever.
The second problem is the incorrect usage of schedule.every(60).seconds.do(), which is hidden by the first error. schedule expects a function to be passed in, not an awaitable (which is what asyncio.create_task(sender()) would return). That would normally cause a TypeError, but the create_task() call without a running event loop raised an exception first, so the TypeError was never reached. You'll need to define a plain function and pass it to schedule, like this:
def start_sender():
    asyncio.create_task(sender())

schedule.every(60).seconds.do(start_sender)
This should work as long as the event loop is started somewhere else in your program.
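One way to start that loop, as a sketch only (and a deviation from the Thread-based layout above): drive schedule from a coroutine, so that start_sender always has a running loop in the same thread:
async def scheduler_loop():
    while True:
        schedule.run_pending()      # fires start_sender, which schedules sender() on this loop
        await asyncio.sleep(1)

client.loop.run_until_complete(scheduler_loop())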
I have worked through most of the examples but am still learning async in Python. I am having trouble understanding why this code will not print "i am async!".
import asyncio
from threading import Thread

async def cor1():
    print("i am async!")

def myasync(loop):
    print("Async running")
    loop.run_forever()
    print("Async ended?")

def main():
    this_threads_event_loop = asyncio.get_event_loop()
    t_1 = Thread(target=myasync, args=(this_threads_event_loop,))
    t_1.start()
    print("beginning the async loop")
    t1 = this_threads_event_loop.create_task(cor1())
    print("Finished cor1")

main()
Your code attempts to submit tasks to the event loop from a different thread. To do that, you must use run_coroutine_threadsafe:
def main():
    loop = asyncio.get_event_loop()
    # start the event loop in a separate thread
    t_1 = Thread(target=myasync, args=(loop,))
    t_1.start()
    # submit the coroutine to the event loop running in the other thread
    f1 = asyncio.run_coroutine_threadsafe(cor1(), loop)
    # wait for the coroutine to finish, by asking for its result
    f1.result()
    print("Finished cor1")
Please note that combining asyncio and threads should only be done in special circumstances, such as when introducing asyncio to a legacy application where the new functionality needs to be added gradually. If you are writing new code, you almost certainly want main to be a coroutine, run from the top level with asyncio.run(main()).
To run a legacy synchronous function from asyncio code, you can always use run_in_executor.
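For example, a minimal sketch (some_blocking_function is just a placeholder for your legacy function):
async def main():
    loop = asyncio.get_running_loop()
    # run the blocking, legacy call in the default thread pool; the event loop stays responsive
    result = await loop.run_in_executor(None, some_blocking_function)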
I am using asyncio for a network framework.
In the code below (low_level is our low-level function, the main block is our program entry point, and user_func is a user-defined function):
import asyncio

loop = asyncio.get_event_loop()
""":type :asyncio.AbstractEventLoop"""

def low_level():
    yield from asyncio.sleep(2)

def user_func():
    yield from low_level()

if __name__ == '__main__':
    co = user_func()
    loop.run_until_complete(co)
I want to wrap low_level as a normal function rather than a coroutine (for compatibility, etc.), but low_level runs in the event loop. How can I wrap it as a normal function?
Because low_level is a coroutine, it can only be used by running an asyncio event loop. If you want to be able to call it from synchronous code that isn't running an event loop, you have to provide a wrapper that actually launches an event loop and runs the coroutine until completion:
def sync_low_level():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(low_level())
If you want to be able to call low_level() from a function that is part of the running event loop, have it block for two seconds, but not have to use yield from, the answer is that you can't. The event loop is single-threaded; whenever execution is inside one of your functions, the event loop is blocked. No other events or callbacks can be processed. The only ways for a function running in the event loop to give control back to the event loop are to 1) return or 2) use yield from. The asyncio.sleep call in low_level will never be able to complete unless you do one of those two things.
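To make that concrete, here is a small sketch in the same generator-based style as the code below (the function name is made up for illustration):
import time

@asyncio.coroutine
def illustrates_the_point():
    time.sleep(2)                # blocking call: the whole loop is frozen for 2 seconds
    yield from asyncio.sleep(2)  # cooperative: control goes back to the loop, other tasks can run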
Now, I suppose you could create an entirely new event loop, and use that to run the sleep synchronously from a coroutine running as part of the default event loop:
import asyncio

loop = asyncio.get_event_loop()

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def sync_low_level():
    new_loop = asyncio.new_event_loop()
    new_loop.run_until_complete(low_level(loop=new_loop))

@asyncio.coroutine
def user_func():
    sync_low_level()

if __name__ == "__main__":
    loop.run_until_complete(user_func())
But I'm really not sure why you'd want to do that.
If you just want to be able to make low_level act like a method returning a Future, so you can attach callbacks, etc. to it, just wrap it in asyncio.async():
loop = asyncio.get_event_loop()

def sleep_done(fut):
    print("Done sleeping")
    loop.stop()

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def user_func():
    fut = asyncio.async(low_level())
    fut.add_done_callback(sleep_done)

if __name__ == "__main__":
    loop.call_soon(user_func)
    loop.run_forever()
Output:
<2 second delay>
"Done sleeping"
Also, in your example code, you should use the @asyncio.coroutine decorator for both low_level and user_func, as stated in the asyncio docs:
A coroutine is a generator that follows certain conventions. For documentation purposes, all coroutines should be decorated with @asyncio.coroutine, but this cannot be strictly enforced.
Edit:
Here's how a user of a synchronous web framework could call into your application without blocking other requests:
import threading

@asyncio.coroutine
def low_level(loop=None):
    yield from asyncio.sleep(2, loop=loop)

def thr_low_level():
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=loop.run_until_complete, args=(low_level(loop=loop),))
    t.start()
    t.join()
If a request being handled by Flask calls thr_low_level, it will block until the request is done, but the GIL should be released for all of the asynchronous I/O going on in low_level, allowing other requests to be handled in separate threads.