Running code between task creation and await with asyncio - python

Using asyncio, is it possible to create a task, continue with the "main" code execution, and await the task's result later on?
Consider the following code:
from functools import reduce
import asyncio

async def a_task():
    print('a_task(): before sleep')
    # waiting for something to happen
    await asyncio.sleep(30)
    print('a_task(): after sleep')
    return 42

async def main():
    # Create a Task
    print('main() before create_task')
    task = asyncio.create_task(a_task())
    print('main() task created')
    print('Doing stuff here between task creation and await')
    # Computing 200000! should take a few seconds...
    # use a smaller number if it's too slow on your machine
    x = reduce(lambda a, b: a * b, range(1, 200000))
    print('Stuff done')
    print('main() awaiting task')
    task_result = await task
    print('main() task awaited')
    return task_result

#%%
if __name__ == '__main__':
    results = asyncio.run(main())
    print(results)
This returns
main() before create_task
main() task created
Doing stuff here between task creation and await
Stuff done
main() awaiting task
a_task(): before sleep <<---- Task only starts running here!!
a_task(): after sleep
main() task awaited
42
a_task is created before we start computing 200000!, but it is executed only when we call await task. Is it possible to make a_task start running before we begin computing 200000!, and keep it running in the background?
I have read the docs and several related questions; they all say that tasks are what should be used to execute code in the background, but I can't understand how to run one without hanging the main code.

I believe the problem is the following: creating a task with create_task does not schedule it for immediate execution; it takes an await (or something similar) to trigger the event loop to switch to running something else. In your current code, the creation of the task is followed by synchronous code, so the event loop never gets a chance to suspend main and start running the task. One way to get the behaviour you expect is to put await asyncio.sleep(0) before the evaluation of the factorial: when main encounters that await, the event loop suspends main and switches to a_task.
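For example, a minimal sketch of that approach (the question's code with only the await asyncio.sleep(0) line added):

from functools import reduce
import asyncio

async def a_task():
    await asyncio.sleep(30)
    return 42

async def main():
    task = asyncio.create_task(a_task())
    # Yield to the event loop once: it switches to a_task, which runs
    # until its first await, then control comes back here.
    await asyncio.sleep(0)
    x = reduce(lambda a, b: a * b, range(1, 200000))  # still blocks the loop
    return await task

print(asyncio.run(main()))  # prints 42 once both have finished

Note that this only lets a_task run up to its first await; while the factorial is being computed, the event loop is still blocked.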
Another approach that may interest you is asyncio.gather (see the docs), which schedules multiple tasks and then suspends main until all of the tasks have completed.
from functools import reduce
import asyncio

async def a_task():
    print("a_task(): before sleep")
    # waiting for something to happen
    await asyncio.sleep(30)
    print("a_task(): after sleep")
    return 42

async def doing_something_else():
    print("We start doing something else")
    x = reduce(lambda a, b: a * b, range(1, 200000))
    print("We finish doing something else")

async def main():
    # Create a Task
    print("main() before create_task")
    task = asyncio.create_task(a_task())
    print("main() task created")
    print("main() before create_task 2")
    task_2 = asyncio.create_task(doing_something_else())
    print("main() task 2 created")
    print("main() gathering tasks")
    # task_result = await task
    await asyncio.gather(task, task_2)
    print("main() tasks finished")
    # return task_result

#%%
if __name__ == "__main__":
    results = asyncio.run(main())
    print(results)
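One caveat applies to both versions: the factorial itself still blocks the event loop while it computes, because reduce is ordinary synchronous code; the overlap you gain is that a_task's 30-second sleep timer is already running. If the event loop must stay responsive during the CPU-bound part, one option (a sketch of my own, not part of the answer above) is to offload it to an executor:

from functools import reduce
import asyncio

async def a_task():
    await asyncio.sleep(30)
    return 42

def heavy_computation():
    # plain (non-async) function so it can run in a worker thread
    return reduce(lambda a, b: a * b, range(1, 200000))

async def main():
    task = asyncio.create_task(a_task())
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor; for CPU-bound work
    # that must bypass the GIL, pass a ProcessPoolExecutor instead.
    x = await loop.run_in_executor(None, heavy_computation)
    return await task

print(asyncio.run(main()))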

Related

How to run the whole async function with a given timeout?

Following my last post, the suggested duplicate cannot answer my question.
Right now, I have a function f1() which contains a CPU-intensive part and an async IO-intensive part, so f1() itself is an async function. How can I run the whole of f1() with a given timeout? The method provided in that post cannot handle my situation: for the code below, it shows RuntimeWarning: coroutine 'f1' was never awaited handle = None # Needed to break cycles when an exception occurs.
import asyncio
import time
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(1)

async def f1():
    print("start sleep")
    time.sleep(3)  # simulate CPU intensive part
    print("end sleep")
    print("start asyncio.sleep")
    await asyncio.sleep(3)  # simulate IO intensive part
    print("end asyncio.sleep")

async def process():
    print("enter process")
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(executor, f1)

async def main():
    print("-----f1-----")
    t1 = time.time()
    try:
        await asyncio.wait_for(process(), timeout=2)
    except:
        pass
    t2 = time.time()
    print(f"f1 cost {(t2 - t1)} s")

if __name__ == '__main__':
    asyncio.run(main())
As established in the previous post, loop.run_in_executor only works for a normal function, not an async function.
One way to do it is to make process not an async function, so it can run in another thread, and have it start its own asyncio event loop in that thread to run f1.
Note that starting another loop means you cannot share coroutines and futures between the two loops.
import asyncio
import time
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(1)

async def f1():
    print("start sleep")
    time.sleep(3)  # simulate CPU intensive part
    print("end sleep")
    print("start asyncio.sleep")
    await asyncio.sleep(3)  # simulate IO intensive part
    print("end asyncio.sleep")

def process():
    print("enter process")
    asyncio.run(asyncio.wait_for(f1(), 2))

async def main():
    print("-----f1-----")
    t1 = time.time()
    try:
        loop = asyncio.get_running_loop()
        await loop.run_in_executor(executor, process)
    except:
        pass
    t2 = time.time()
    print(f"f1 cost {(t2 - t1)} s")

if __name__ == '__main__':
    asyncio.run(main())
-----f1-----
enter process
start sleep
end sleep
start asyncio.sleep
f1 cost 3.0047199726104736 s
Keep in mind that f1 must hit an await for control to return to the event loop so the future can be cancelled. You cannot cancel the CPU-intensive part of the code unless it does something like await asyncio.sleep(0), which returns to the event loop momentarily; that is also why time.sleep cannot be cancelled.
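To make that concrete, here is a sketch (my own illustration, not from the answer above) of a CPU-bound loop that becomes cancellable by yielding to the event loop periodically:

import asyncio

async def cancellable_cpu_work(n):
    total = 0
    for i in range(n):
        total += i * i
        if i % 10000 == 0:
            # returns to the event loop momentarily, so a pending
            # cancellation (e.g. from wait_for) can be delivered
            await asyncio.sleep(0)
    return total

async def main():
    try:
        print(await asyncio.wait_for(cancellable_cpu_work(10**9), timeout=1))
    except asyncio.TimeoutError:
        print("timed out")  # the CPU loop really was cancelled

asyncio.run(main())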
I have explained the cause of the issue. You should remove or replace the time.sleep in f1, as it blocks the thread and prevents asyncio.wait_for from enforcing the timeout.
Regarding the RuntimeWarning
RuntimeWarning: coroutine 'f1' was never awaited handle = None # Needed to break cycles when an exception occurs.
It occurs because loop.run_in_executor expects a non-async function as its second argument: the executor calls f1, which merely creates a coroutine object that is never awaited.
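Putting that advice together, a sketch (assuming the blocking part can be pushed into a thread; not the only possible fix): keep f1 async, offload the blocking call with run_in_executor, and wait_for can then enforce the timeout. Note that the already-running worker thread is not killed; it finishes in the background, but the await is cancelled on time.

import asyncio
import time

async def f1():
    loop = asyncio.get_running_loop()
    print("start sleep")
    # the blocking CPU part runs in the default thread pool
    await loop.run_in_executor(None, time.sleep, 3)
    print("end sleep")
    print("start asyncio.sleep")
    await asyncio.sleep(3)  # IO intensive part
    print("end asyncio.sleep")

async def main():
    t1 = time.time()
    try:
        await asyncio.wait_for(f1(), timeout=2)
    except asyncio.TimeoutError:
        pass
    print(f"f1 cost {time.time() - t1} s")  # about 2 s

asyncio.run(main())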

How to kill an asyncio coroutine (not the FIRST_COMPLETED case)

Here is an example: the main coroutine creates coroutines that take a long time to complete, which means the FIRST_COMPLETED case is unreachable. The problem: given that the await asyncio.wait(tasks) line blocks everything below it, how do I get access to the set of pending futures?
import asyncio

async def worker(i):
    # some big work
    await asyncio.sleep(100000)

async def main():
    tasks = [asyncio.create_task(worker(i), name=str(i)) for i in range(5)]
    done, pending = await asyncio.wait(tasks)  # or return_when=asyncio.FIRST_COMPLETED, no matter
    # everything below is unreachable while the tasks are in progress
    # we want to kill a certain task
    for future in pending:
        if future.get_name() == "4":
            future.cancel()

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
How can I avoid the await blocking and kill a certain coroutine, for example number 4?
You can create another task to monitor your tasks:
async def monitor(tasks):
    # monitor the state of tasks and cancel some
    while not all(t.done() for t in tasks):
        for t in tasks:
            if not t.done() and t.get_name() == "4":
                t.cancel()
            # ...
        # give the tasks some time to make progress
        await asyncio.sleep(1)

async def main():
    tasks = [asyncio.create_task(worker(i), name=str(i)) for i in range(5)]
    tasks.append(asyncio.create_task(monitor(tasks[:])))
    done, pending = await asyncio.wait(tasks)
    # ...

How to close the loop if one of the tasks is completed in asyncio

I have 3 tasks:

async def task_a():
    while True:
        file.write()
        await asyncio.sleep(10)

async def task_b():
    while True:
        file.write()
        await asyncio.sleep(10)

async def task_c():
    ...  # do something
main.py:
try:
    loop = asyncio.get_event_loop()
    A = loop.create_task(task_a())
    B = loop.create_task(task_b())
    C = loop.create_task(task_c())
    awaitable_pending_tasks = asyncio.all_tasks()
    execution_group = asyncio.gather(*awaitable_pending_tasks, return_exceptions=True)
    fi_execution = loop.run_until_complete(execution_group)
finally:
    loop.run_forever()
I want to make sure that the loop exits when task_c is completed.
I tried loop.close() in the finally block, but since everything is async it closes too early.
task_a and task_b write to a file, and another process checks when the file was last modified; if that is more than a minute ago, it results in an error (which I don't want). Hence the while loop, and the sleep() after each write.
Once task_c is complete, I need the loop to stop.
Other answers on Stack Overflow looked too complicated to understand.
Is there any way we can do this?
You could use loop.run_until_complete or asyncio.run (but not run_forever) with a function that creates the tasks you need and then awaits only the one whose completion should terminate the loop (untested):
async def main():
    asyncio.create_task(task_a())
    asyncio.create_task(task_b())
    await task_c()
    tasks = set(asyncio.all_tasks()) - {asyncio.current_task()}
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(main())
# or asyncio.get_event_loop().run_until_complete(main())

How can I run an asyncio loop as long as there are pending cancellation-shielded tasks left but no longer?

I'm trying to add some code to my existing asyncio loop to provide for a clean shutdown on Ctrl-C. Below is an abstraction of the sort of thing it's doing.
import asyncio, signal

async def task1():
    print("Starting simulated task1")
    await asyncio.sleep(5)
    print("Finished simulated task1")

async def task2():
    print("Starting simulated task2")
    await asyncio.sleep(5)
    print("Finished simulated task2")

async def tasks():
    await task1()
    await task2()

async def task_loop():
    try:
        while True:
            await asyncio.shield(tasks())
            await asyncio.sleep(60)
    except asyncio.CancelledError:
        print("Shutting down task loop")
        raise

async def aiomain():
    loop = asyncio.get_running_loop()
    task = asyncio.Task(task_loop())
    loop.add_signal_handler(signal.SIGINT, task.cancel)
    await task

def main():
    try:
        asyncio.run(aiomain())
    except asyncio.CancelledError:
        pass

#def main():
#    try:
#        loop = asyncio.get_event_loop()
#        loop.create_task(aiomain())
#        loop.run_forever()
#    except asyncio.CancelledError:
#        pass

if __name__ == '__main__':
    main()
In this example, imagine that the sequence of task1 and task2 needs to be finished once it's started, or some artifacts will be left in an inconsistent state. (Hence the asyncio.shield wrapper around calling tasks.)
With the code as above, if I interrupt the script soon after it starts, when it has just printed Starting simulated task1, then the loop stops and task2 never gets started. If I switch to the version of main that's commented out, that one never exits, even though the loop is properly cancelled and nothing further happens, at least for several minutes. It does make some progress, in that it at least finishes any in-progress sequence of task1 and task2.
Some possible solutions from brainstorming, though I still get the feeling there must be something simpler that I'm missing:
Create a wrapper around asyncio.shield which increments a variable synchronized by an asyncio.Condition object, runs the shielded function, then decrements the variable. Then, in aiomain in a CancelledError handler, wait for the variable to reach zero before reraising the exception. (In an implementation, I would probably go for combining all the parts of this into one class with __aexit__ implementing the wait for zero on CancelledError logic.)
Skip using asyncio's cancellation mechanism entirely, and instead use an asyncio.Event or similar to allow for interruption points or interruptible sleeps. This does seem more invasive, though, requiring me to specify which points are considered interruptible, as opposed to declaring which sequences need to be shielded from cancellation.
This is a very good question. I have learned some things while working out an answer, so I hope you are still monitoring this thread.
The first thing to investigate is, how does the shield() method work? On this point, the docs are confusing to say the least. I couldn't figure it out until I read the standard library test code in test_tasks.py. Here is my understanding:
Consider this code fragment:
async def coro_a():
    await asyncio.shield(task_b())
    ...

task_a = asyncio.create_task(coro_a())
task_a.cancel()
When the task_a.cancel() statement is executed, task_a is indeed cancelled. The await statement throws a CancelledError immediately, without waiting for task_b to finish. But task_b continues to run. The outer task (a) stops but the inner task (b) doesn't.
Here is a modified version of your program that illustrates this. The major change is to insert a wait in your CancelledError exception handler, to keep the program alive a few seconds longer. I'm running on Windows, which is also why I changed your signal handler a little, but that's a minor point. I also added time stamps to the print statements.
import asyncio
import signal
import time

async def task1():
    print("Starting simulated task1", time.time())
    await asyncio.sleep(5)
    print("Finished simulated task1", time.time())

async def task2():
    print("Starting simulated task2", time.time())
    await asyncio.sleep(5)
    print("Finished simulated task2", time.time())

async def tasks():
    await task1()
    await task2()

async def task_loop():
    try:
        while True:
            await asyncio.shield(tasks())
            await asyncio.sleep(60)
    except asyncio.CancelledError:
        print("Shutting down task loop", time.time())
        raise

async def aiomain():
    task = asyncio.create_task(task_loop())
    KillNicely(task)
    try:
        await task
    except asyncio.CancelledError:
        print("Caught CancelledError", time.time())
        await asyncio.sleep(5.0)
        raise

class KillNicely:
    def __init__(self, cancel_me):
        self.cancel_me = cancel_me
        self.old_sigint = signal.signal(signal.SIGINT,
                                        self.trap_control_c)

    def trap_control_c(self, signum, stack):
        if signum != signal.SIGINT:
            self.old_sigint(signum, stack)
        else:
            print("Got Control-C", time.time())
            print(self.cancel_me.cancel())

def main():
    try:
        asyncio.run(aiomain())
    except asyncio.CancelledError:
        print("Program exit, cancelled", time.time())

# Output when Ctrl-C is struck during task1
#
# Starting simulated task1 1590871747.8977509
# Got Control-C 1590871750.8385916
# True
# Shutting down task loop 1590871750.8425908
# Caught CancelledError 1590871750.8435903
# Finished simulated task1 1590871752.908434
# Starting simulated task2 1590871752.908434
# Program exit, cancelled 1590871755.8488846

if __name__ == '__main__':
    main()
You can see that your program didn't work because it exited as soon as task_loop was cancelled, before task1 and task2 had a chance to finish. They were still there all along (or rather they would have been there, if the program continued to run).
This illustrates how shield() and cancel() interact, but it doesn't actually solve your stated problem. For that, I think you need an awaitable object that you can use to keep the program alive until the vital tasks are finished. This object needs to be created at the top level and passed down the stack to the place where the vital tasks are executing. Here is a program that is similar to yours but performs the way you want.
I did three runs: (1) control-C during task1, (2) control-C during task2, (3) control-C after both tasks were finished. In the first two cases the program continued until task2 was finished. In the third case it ended immediately.
import asyncio
import signal
import time

async def task1():
    print("Starting simulated task1", time.time())
    await asyncio.sleep(5)
    print("Finished simulated task1", time.time())

async def task2():
    print("Starting simulated task2", time.time())
    await asyncio.sleep(5)
    print("Finished simulated task2", time.time())

async def tasks(kwrap):
    fut = asyncio.get_running_loop().create_future()
    kwrap.awaitable = fut
    await task1()
    await task2()
    fut.set_result(1)

async def task_loop(kwrap):
    try:
        while True:
            await asyncio.shield(tasks(kwrap))
            await asyncio.sleep(60)
    except asyncio.CancelledError:
        print("Shutting down task loop", time.time())
        raise

async def aiomain():
    kwrap = KillWrapper()
    task = asyncio.create_task(task_loop(kwrap))
    KillNicely(task)
    try:
        await task
    except asyncio.CancelledError:
        print("Caught CancelledError", time.time())
        await kwrap.awaitable
        raise

class KillNicely:
    def __init__(self, cancel_me):
        self.cancel_me = cancel_me
        self.old_sigint = signal.signal(signal.SIGINT,
                                        self.trap_control_c)

    def trap_control_c(self, signum, stack):
        if signum != signal.SIGINT:
            self.old_sigint(signum, stack)
        else:
            print("Got Control-C", time.time())
            print(self.cancel_me.cancel())

class KillWrapper:
    def __init__(self):
        self.awaitable = asyncio.get_running_loop().create_future()
        self.awaitable.set_result(0)

def main():
    try:
        asyncio.run(aiomain())
    except asyncio.CancelledError:
        print("Program exit, cancelled", time.time())

if __name__ == '__main__':
    main()

# Run 1: Control-C during task1
# Starting simulated task1 1590872408.6737766
# Got Control-C 1590872410.7344952
# True
# Shutting down task loop 1590872410.7354996
# Caught CancelledError 1590872410.7354996
# Finished simulated task1 1590872413.6747622
# Starting simulated task2 1590872413.6747622
# Finished simulated task2 1590872418.6750958
# Program exit, cancelled 1590872418.6750958
#
# Run 2: Control-C during task2
# Starting simulated task1 1590872492.927735
# Finished simulated task1 1590872497.9280624
# Starting simulated task2 1590872497.9280624
# Got Control-C 1590872499.5973852
# True
# Shutting down task loop 1590872499.5983844
# Caught CancelledError 1590872499.5983844
# Finished simulated task2 1590872502.9274273
# Program exit, cancelled 1590872502.9287038
#
# Run 3: Control-C after task2 -> immediate exit
# Starting simulated task1 1590873694.2925708
# Finished simulated task1 1590873699.2928336
# Starting simulated task2 1590873699.2928336
# Finished simulated task2 1590873704.2938952
# Got Control-C 1590873706.0790765
# True
# Shutting down task loop 1590873706.0804725
# Caught CancelledError 1590873706.0804725
# Program exit, cancelled 1590873706.0814824
Here is what I ended up using:
import asyncio, signal

async def _shield_and_wait_body(coro, finish_event):
    try:
        await coro
    finally:
        finish_event.set()

async def shield_and_wait(coro):
    finish_event = asyncio.Event()
    task = asyncio.shield(_shield_and_wait_body(coro, finish_event))
    try:
        await task
    except asyncio.CancelledError:
        await finish_event.wait()
        raise

def shield_and_wait_decorator(coro_fn):
    return lambda *args, **kwargs: shield_and_wait(coro_fn(*args, **kwargs))

async def task1():
    print("Starting simulated task1")
    await asyncio.sleep(5)
    print("Finished simulated task1")

async def task2():
    print("Starting simulated task2")
    await asyncio.sleep(5)
    print("Finished simulated task2")

@shield_and_wait_decorator
async def tasks():
    await task1()
    await task2()

async def task_loop():
    try:
        while True:
            # Alternative to applying @shield_and_wait_decorator to tasks()
            #await shield_and_wait(tasks())
            await tasks()
            await asyncio.sleep(60)
    except asyncio.CancelledError:
        print("Shutting down task loop")
        raise

def sigint_handler(task):
    print("Cancelling task loop")
    task.cancel()

async def aiomain():
    loop = asyncio.get_running_loop()
    task = asyncio.Task(task_loop())
    loop.add_signal_handler(signal.SIGINT, sigint_handler, task)
    await task

def main():
    try:
        asyncio.run(aiomain())
    except asyncio.CancelledError:
        pass

if __name__ == '__main__':
    main()
Similar to the answer by Paul Cornelius, this inserts a wait for the subtask to finish before allowing the CancelledError to propagate up the call chain. However, it does not require touching the code other than at the point you would be calling asyncio.shield.
(In my actual use case, I had three loops running simultaneously, using an asyncio.Lock to make sure one task or sequence of tasks finished before another would start. I also had an asyncio.Condition on that lock communicating from one coroutine to another. When I tried the approach of waiting in aiomain or main for all shielded tasks to be done, I ran into an issue where a cancelled parent released the lock, then a shielded task tried to signal the condition variable using that lock, giving an error. It also didn't make sense to move acquiring and releasing the lock into the shielded task - that would result in task B still running in the sequence: shielded task A starts, coroutine for task B expires its timer and blocks waiting for the lock, Control+C. By putting the wait at the point of the shield_and_wait call, on the other hand, it neatly avoided prematurely releasing the lock.)
One caveat: it seems that shield_and_wait_decorator doesn't work properly on class methods.
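A possible workaround for that caveat (an untested sketch of my own, reusing shield_and_wait, task1 and task2 from the listing above): skip the decorator on methods and call shield_and_wait explicitly inside the method body:

class Worker:
    async def tasks(self):
        # wrap the real body explicitly instead of decorating the method
        async def body():
            await task1()
            await task2()
        await shield_and_wait(body())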

Fire, Forget, and Return Value in Python 3.7

I have the following scenario:
I have a python server that, upon receiving a request, needs to parse some information, return the result to the user as quickly as possible, and then clean up after itself.
I tried to design it using the following logic:
Consumer: *==*               (wait for result) *====(continue running)=====...
              \                               / return
Producer:      *==========(parse)============*=*
                                                \
Cleanup:                                         *==========*
I've been trying to use async tasks and coroutines to make this scenario work, to no avail. Everything I tried ends up with either the producer waiting for the cleanup to finish before returning, or the return killing the cleanup.
I could in theory have the consumer call the cleanup after it displays the result to the user, but I refuse to believe Python doesn't know how to "fire-and-forget" and return.
For example, this code:
import asyncio

async def Slowpoke():
    print("I see you shiver with antici...")
    await asyncio.sleep(3)
    print("...pation!")

async def main():
    task = asyncio.create_task(Slowpoke())
    return "Hi!"

if __name__ == "__main__":
    print(asyncio.run(main()))
    while True:
        pass
returns:
I see you shiver with antici...
Hi!
and never gets to ...pation.
What am I missing?
I managed to get it working using threading instead of asyncio:
import threading
import time

def Slowpoke():
    print("I see you shiver with antici...")
    time.sleep(3)
    print("...pation")

def Rocky():
    t = threading.Thread(name="thread", target=Slowpoke)
    t.daemon = True
    t.start()
    time.sleep(1)
    return "HI!"

if __name__ == "__main__":
    print(Rocky())
    while True:
        time.sleep(1)
asyncio doesn't seem particularly suited to this problem. The reason is that your task was being killed when the parent finished; a daemon thread, by contrast, continues to run until it finishes or the program exits. You probably want simple threads:
import threading
import time

def Slowpoke():
    try:
        print("I see you shiver with antici...")
        time.sleep(3)
        print("...pation!")
    except:
        print("Yup")
        raise Exception()

def main():
    task = threading.Thread(target=Slowpoke)
    task.daemon = True
    task.start()
    return "Hi!"

if __name__ == "__main__":
    print(main())
    while True:
        pass
asyncio.run ...
[...] creates a new event loop and closes it at the end. [...]
Your coro, wrapped in a task, does not get a chance to complete during the execution of main.
If you return the Task object and print it, you'll see that it is in a cancelled state:
async def main():
    task = asyncio.create_task(Slowpoke())
    # return "Hi!"
    return task

if __name__ == "__main__":
    print(asyncio.run(main()))
    # I see you shiver with antici...
    # <Task cancelled coro=<Slowpoke() done, defined at [...]>>
When main ends after creating and scheduling the task (and printing 'Hi!'), the event loop is closed, which causes all running tasks in it to get cancelled.
You need to keep the event loop running until the task has completed, e.g. by awaiting it in main:
async def main():
    task = asyncio.create_task(Slowpoke())
    await task
    return task

if __name__ == "__main__":
    print(asyncio.run(main()))
    # I see you shiver with antici...
    # ...pation!
    # <Task finished coro=<Slowpoke() done, defined at [..]> result=None>
(I hope I have properly understood your question. The ASCII image and the text description do not fully correspond in my mind. "Hi!" is the result and the "antici...pation" is the cleanup, right? I like that musical too, BTW.)
One possible asyncio-based solution is to return the result as soon as possible. A return terminates the task, which is why it is necessary to fire-and-forget the cleanup. That must be accompanied by shutdown code that waits for all cleanups to finish.
import asyncio

async def Slowpoke():
    print("I see you shiver with antici...")
    await asyncio.sleep(3)
    print("...pation!")

async def main():
    result = "Hi!"
    asyncio.create_task(Slowpoke())
    return result

async def start_stop():
    # you can create multiple tasks to serve multiple requests
    task = asyncio.create_task(main())
    print(await task)
    # after the last request, wait for cleanups to finish
    this_task = asyncio.current_task()
    all_tasks = [
        task for task in asyncio.all_tasks()
        if task is not this_task]
    await asyncio.wait(all_tasks)

if __name__ == "__main__":
    asyncio.run(start_stop())
Another solution is to use some means other than return to deliver the result to the waiting task, so the cleanup can start right after parsing. A Future is considered low-level, but here is an example anyway.
import asyncio

async def main(fut):
    fut.set_result("Hi!")
    # result delivered, continue with cleanup
    print("I see you shiver with antici...")
    await asyncio.sleep(3)
    print("...pation!")

async def start_stop():
    fut = asyncio.get_event_loop().create_future()
    task = asyncio.create_task(main(fut))
    print(await fut)
    this_task = asyncio.current_task()
    all_tasks = [
        task for task in asyncio.all_tasks()
        if task is not this_task]
    await asyncio.wait(all_tasks)

if __name__ == "__main__":
    asyncio.run(start_stop())
