How to run blocking code independently of the asyncio loop - python

My project requires me to run blocking code (from another library) while continuing my asyncio while True: loop. The code looks something like this:
async def main():
    while True:
        session_timeout = aiohttp.ClientTimeout()
        async with aiohttp.ClientSession() as session:
            # Do async stuff like session.get and so on.
            # At a certain point, I have blocking code that I need to execute.
            # blocking_code() starts here; it needs time to get the return value,
            # and running it is the last thing to do in my main() function.
            # My objective is to run the blocking code separately,
            # such that while blocking_code() runs, the loop starts from the
            # beginning again instead of waiting for blocking_code() to complete
            # and return. In other words, go back to the top of the while loop.
            # Meanwhile, blocking_code() continues to run independently and
            # eventually completes and returns. Nothing in main() needs the
            # return value; the result is only used inside blocking_code().
            blocking_code()

asyncio.run(main())
I have tried using pool = ThreadPool(processes=1) and thread = pool.apply_async(blocking_code, params). It sort of works when there are things that need to be done after blocking_code() within main(); but since blocking_code() is the last thing in main(), the whole while loop still pauses until blocking_code() completes before starting back from the top.
I don't know if this is possible, and if it is, how it's done; but the ideal scenario is this.
Run main(), then run blocking_code() in its own instance, as if executing another .py file. So once the loop reaches blocking_code() in main(), it triggers the blocking_code.py file, and while that script runs, the while loop continues from the top again.
If, by the time the while loop reaches blocking_code() again on the second pass, the previous run has not completed, another instance of blocking_code() will run on its own, independently.
Does what I say make sense? Is it possible to achieve the desired outcome?
Thank you!

This is possible with threads. So that you don't block your main loop, you'll need to wrap your thread in an asyncio task. You can wait for the return values once your loop is finished if you need to. You can do this with a combination of asyncio.create_task and asyncio.to_thread:
import aiohttp
import asyncio
import time

def blocking_code():
    print('Starting blocking code.')
    time.sleep(5)
    print('Finished blocking code.')

async def main():
    blocking_code_tasks = []
    while True:
        session_timeout = aiohttp.ClientTimeout()
        async with aiohttp.ClientSession() as session:
            print('Executing GET.')
            result = await session.get('https://www.example.com')
            blocking_code_task = asyncio.create_task(asyncio.to_thread(blocking_code))
            blocking_code_tasks.append(blocking_code_task)
        # do something with blocking_code_tasks, wait for them to finish, extract errors, etc.

asyncio.run(main())
The above code runs the blocking code in a thread and then wraps that in an asyncio task. We then add this task to the blocking_code_tasks list to keep track of all the currently running tasks. Later on, you can get the values or errors out with something like asyncio.gather.
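For example, a minimal sketch of what could go at the end of main(), assuming the while loop gains an exit condition (the example above loops forever):

    # Drain the accumulated thread-backed tasks once the loop has exited.
    results = await asyncio.gather(*blocking_code_tasks, return_exceptions=True)
    for result in results:
        if isinstance(result, Exception):
            print('A blocking task raised:', result)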

Related

How to call async function from sync function and get result, while a loop is already running

I have an asyncio loop running, and from a coroutine I'm calling a sync function. Is there any way to call an async function from that sync function and get its result?
I tried the code below; it is not working.
I want to print the output of hel() in i() without changing i() to an async function.
Is it possible, and if yes, how?
import asyncio

async def hel():
    return 4

def i():
    loop = asyncio.get_running_loop()
    x = asyncio.run_coroutine_threadsafe(hel(), loop)  # need to change
    y = x.result()                                     # these lines
    print(y)

async def h():
    i()

asyncio.run(h())
This is one of the most commonly asked types of question here. The tools to do it are in the standard library and require only a few lines of setup code. However, the result is not 100% robust and needs to be used with care. This is probably why it's not already a high-level function.
The basic problem with running an async function from a sync function is that async functions contain await expressions. Await expressions pause the execution of the current task and allow the event loop to run other tasks. Therefore async functions (coroutines) have special properties that allow them to yield control and resume again where they left off. Sync functions cannot do this. So when your sync function calls an async function and that function encounters an await expression, what is supposed to happen? The sync function has no ability to yield and resume.
A simple solution is to run the async function in another thread, with its own event loop. The calling thread blocks until the result is available. The async function behaves like a normal function, returning a value. The downside is that the async function now runs in another thread, which can cause all the well-known problems that come with threaded programming. For many cases this may not be an issue.
This can be set up as follows. This is a complete script that can be imported anywhere in an application. The test code that runs in the if __name__ == "__main__" block is almost the same as the code in the original question.
The thread is lazily initialized so it doesn't get created until it's used. It's a daemon thread so it will not keep your program from exiting.
The solution doesn't care if there is a running event loop in the main thread.
import asyncio
import threading

_loop = asyncio.new_event_loop()
_thr = threading.Thread(target=_loop.run_forever, name="Async Runner",
                        daemon=True)

# This will block the calling thread until the coroutine is finished.
# Any exception that occurs in the coroutine is raised in the caller.
def run_async(coro):  # coro is a coroutine, see example
    if not _thr.is_alive():
        _thr.start()
    future = asyncio.run_coroutine_threadsafe(coro, _loop)
    return future.result()

if __name__ == "__main__":
    async def hel():
        await asyncio.sleep(0.1)
        print("Running in thread", threading.current_thread())
        return 4

    def i():
        y = run_async(hel())
        print("Answer", y, threading.current_thread())

    async def h():
        i()

    asyncio.run(h())
Output:
Running in thread <Thread(Async Runner, started daemon 28816)>
Answer 4 <_MainThread(MainThread, started 22100)>
In order to call an async function from a sync method, you would normally use asyncio.run. However, that is meant to be the single entry point of an async program, and asyncio makes sure you can't call it while an event loop is already running in the same thread, so you can't do that here.
That being said, the project https://github.com/erdewit/nest_asyncio patches the asyncio event loop to allow nested calls, so after applying it you should be able to just call asyncio.run in your sync function.
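A minimal sketch of that approach (assuming nest_asyncio is installed, e.g. via pip install nest_asyncio):

import asyncio
import nest_asyncio

nest_asyncio.apply()  # patch asyncio to allow nested event loops

async def hel():
    return 4

def i():
    # With the patched loop, asyncio.run should work even though an
    # event loop is already running in this thread.
    y = asyncio.run(hel())
    print(y)

async def h():
    i()

asyncio.run(h())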

how to execute 2 async functions in sequence without blocking execution

I am trying to run two functions asynchronously, one after another, but at the same time not block the code execution that follows. This is my code:
def someFunction():
    dataFromCallingContext = "data created in the calling function"
    alsoDataFromCallingContext = "also data created in the calling function"
    loop = asyncio.get_event_loop()
    asyncCode = asyncio.gather(Gateway.delete_gateways(dataFromCallingContext),
                               Gateway.set_status_for_index("available", alsoDataFromCallingContext))
    results = loop.run_until_complete(asyncCode)
    loop.close()
    # run the code below immediately and do not wait for gather to finish
#run the code below immediately and do not wait for gather to finish
The code deletes what is represented as a gateway and, after that is done, sets the status flag to "available". Gateway.delete_gateways takes a very long time.
If run, this returns an error saying that 'Gateway.delete_gateways' and 'Gateway.set_status_for_index' were never awaited, but I do not want them to be awaited. I want the code that follows to continue executing without blocking. What is the correct syntax?
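For reference, a sketch of one way to get this behavior (an illustration under assumptions, not a definitive answer): wrap the two calls in a single coroutine so they run in sequence, then hand that coroutine to an event loop running in a background thread so the calling code continues immediately. The Gateway class below is a stub standing in for the asker's real object:

import asyncio
import threading
import time

class Gateway:  # stub standing in for the asker's Gateway
    @staticmethod
    async def delete_gateways(data):
        await asyncio.sleep(2)   # simulate the long-running delete

    @staticmethod
    async def set_status_for_index(status, data):
        await asyncio.sleep(0.1)
        print('status set to', status)

async def delete_then_set_status(data, also_data):
    # Awaiting one call after the other enforces the "in sequence" part.
    await Gateway.delete_gateways(data)
    await Gateway.set_status_for_index('available', also_data)

def someFunction():
    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_forever, daemon=True).start()
    asyncio.run_coroutine_threadsafe(
        delete_then_set_status('data', 'also data'), loop)
    print('continuing immediately')  # code here runs without waiting

someFunction()
time.sleep(3)  # keep the demo process alive until the task finishes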

Is it possible to detect when all async tasks are suspended?

I'm trying to test some async code, but I'm having trouble because of the complex connection between some tasks.
The context I need this is some code which reads a file in parallel to it being written by another process. There's some logic in the code where reading a truncated record will make it back off and wait() on an asyncio.Condition to be later released by an inotify event. This code should let it recover by re-reading the record when a future write has been completed by another process. I specifically want to test that this recovery works.
So my plan would be:
write a partial file
run the event loop until it suspends on the condition
write the rest of the file
run the event loop to completion
I had thought this was the answer: Detect an idle asyncio event loop.
However, a trial test shows that it exits too soon:
import asyncio
import random
import socket  # needed for socket.socketpair() below

def test_ping_pong():
    async def ping_pong(idx: int, oth_idx: int):
        for i in range(random.randint(100, 1000)):
            counters[idx] += 1
            async with conditions[oth_idx]:
                conditions[oth_idx].notify()
            async with conditions[idx]:
                await conditions[idx].wait()

    async def detect_iowait():
        loop = asyncio.get_event_loop()
        rsock, wsock = socket.socketpair()
        wsock.close()
        try:
            await loop.sock_recv(rsock, 1)
        finally:
            rsock.close()

    conditions = [asyncio.Condition(), asyncio.Condition()]
    counters = [0, 0]
    loop = asyncio.get_event_loop()
    loop.create_task(ping_pong(0, 1))
    loop.create_task(ping_pong(1, 0))
    loop.run_until_complete(loop.create_task(detect_iowait()))
    assert counters[0] > 10
    assert counters[1] > 10
After digging through the source code for Python's event loops, I've found nothing publicly exposed that can do this.
It is, however, possible to use the _ready deque created by the BaseEventLoop. See here. This contains every task that is immediately ready to run. When a task is run, it is popped from the _ready deque. When a suspended task is released by another task (e.g. by calling future.set_result()), the suspended task is immediately added back to the deque. This has existed since Python 3.5.
One thing you can do is repeatedly inject a callback to check how many items are in _ready. When all other tasks are suspended, there will be nothing left in the deque at the moment the callback runs.
The callback will run at most once per iteration of the event loop:
async def wait_for_deadlock(empty_loop_threshold: int = 0):
    def check_for_deadlock():
        nonlocal empty_loop_count
        # pylint: disable=protected-access
        if loop._ready:
            empty_loop_count = 0
            loop.call_soon(check_for_deadlock)
        elif empty_loop_count < empty_loop_threshold:
            empty_loop_count += 1
            loop.call_soon(check_for_deadlock)
        else:
            future.set_result(None)

    empty_loop_count = 0
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    asyncio.get_running_loop().call_soon(check_for_deadlock)
    await future
In the above code the empty_loop_threshold is not really necessary in most cases, but exists for cases where tasks communicate via IO. For example, if one task communicates with another through IO, there may be a moment where all tasks are suspended even though one has data ready to read. Setting empty_loop_threshold = 1 should get around this.
Using this is relatively simple. You can:
loop.run_until_complete(wait_for_deadlock())
Or as requested in my question:
def some_test():
    async def async_test():
        await wait_for_deadlock()
        inject_something()
        await wait_for_deadlock()

    loop = asyncio.get_event_loop()
    loop.create_task(task_to_test())
    loop.run_until_complete(loop.create_task(async_test()))
    assert something

How to block on asyncio.Task call

I'm working with the Anki Cozmo SDK, which requires the use of async functions to make API calls. I'm trying to coordinate two of them, which sometimes requires an optional "move" call prior to another async task.
Simply put, I need two asynchronous tasks to run on the same loop, but not start the second until the first is finished.
loop = asyncio.get_event_loop()
async_tasks = []
async_tasks.append(asyncio.ensure_future(bot1.draw_line(), loop=loop))
async_tasks.append(asyncio.ensure_future(bot2.draw_line(), loop=loop))

if draw_util.point_conflicts(selected_plans, bot.position, CDIST):
    safe_position = draw_util.find_safe_point_2_robots(selected_plans, bot.position, CDIST + 1)
    task = asyncio.ensure_future(bot.move_to(safe_position), loop=loop)
    await task

await asyncio.gather(*async_tasks)
I need some way to wait for the move_to task to complete before continuing on to run the async_tasks. How could this be done?
I've tried using loop.run_until_complete(), to the same effect.
I noticed this question after a whole year, but here it is:
The asyncio.wait_for function does exactly what you're (or actually, were) looking for. It blocks until a task is completed.
Please note that this function is a coroutine too, so you'll have to call it inside an async function.
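A minimal sketch of that suggestion, with placeholder coroutines standing in for the Cozmo calls (move_to and draw_line here are hypothetical stubs, not the SDK's API):

import asyncio

async def move_to():              # placeholder for bot.move_to(...)
    await asyncio.sleep(1)

async def draw_line(name):        # placeholder for botN.draw_line()
    await asyncio.sleep(0.5)
    print(name, 'drew a line')

async def main():
    # Block (await) until the move finishes, or raise after 10 seconds.
    await asyncio.wait_for(move_to(), timeout=10)
    # Only then run the drawing tasks concurrently.
    await asyncio.gather(draw_line('bot1'), draw_line('bot2'))

asyncio.run(main())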

Concurrent Async Function call in Python 3.6

I have a script in which a Slow and a Fast function process the same global object array. The Slow function fills up the array with new objects based on resource-intensive calculations; the Fast function only iterates over the existing objects in the array and maintains/displays them. The Slow function only needs to run every few seconds, but it is imperative that the Fast function runs as frequently as possible. I tried using asyncio and ensure_future to call the Slow function, but the result was that the Fast (main) function ran until I stopped it, and only at the end was the Slow function called. I need the Slow function to start running in the background the instant it is called and complete whenever it can, without blocking the Fast function. Can you help me please?
Thank you!
An example of what I tried:
import asyncio

variable = []

async def slow():
    temp = get_new_objects()  # resource intensive
    global variable
    variable = temp

async def main():
    while True:  # looping
        if need_to_run_slow:  # only run sometimes
            asyncio.ensure_future(slow())
        do_fast_stuff_with(variable)  # fast part

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()
asyncio.ensure_future(slow()) only schedules slow() to run at the next pass of the event loop. Since your while loop doesn't await anything that can actually block, you are not giving the event loop a chance to run.
You can work around the issue by adding an await asyncio.sleep(0) inside the loop:
async def main():
    while True:
        if need_to_run_slow:
            asyncio.ensure_future(slow())
        await asyncio.sleep(0)
        do_fast_stuff_with(variable)
The no-op sleep ensures that every iteration of the while loop (between runs of the "fast" function) gives a previously scheduled slow() a chance to make progress.
However, your slow() doesn't await anything either, so all of its code runs in a single iteration, which makes the above equivalent to the much simpler:
def main():
    while True:
        slow()  # slow() is an ordinary function
        do_fast_stuff_with(variable)
A code example closer to your actual use case would probably result in a more directly usable answer.
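For instance, if slow() really does blocking work, a hedged sketch of keeping the loop responsive (compatible with Python 3.6) is to offload it to a thread executor. Here get_new_objects, do_fast_stuff_with, and the tick-based loop condition are illustrative stand-ins for the asker's code:

import asyncio
import time

variable = []

def get_new_objects():              # stand-in for the resource-intensive call
    time.sleep(0.05)                # simulate blocking work in the thread
    return [1, 2, 3]

def do_fast_stuff_with(objects):    # stand-in for the fast per-frame work
    pass

def _store_result(future):
    global variable
    variable = future.result()

async def main():
    loop = asyncio.get_event_loop()
    for tick in range(100):         # stand-in for `while True` with an exit
        if tick % 10 == 0:          # stand-in for need_to_run_slow
            # run_in_executor returns immediately; the callback stores
            # the result when the worker thread finishes.
            future = loop.run_in_executor(None, get_new_objects)
            future.add_done_callback(_store_result)
        await asyncio.sleep(0.01)   # yield so scheduled callbacks can run
        do_fast_stuff_with(variable)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()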
