I am trying to run two functions asynchronously, one after another, but at the same time not block the code execution that follows. This is my code:
def someFunction():
    dataFromCallingContext = "data created in the calling function"
    alsoDataFromCallingContext = "also data created in the calling function"
    loop = asyncio.get_event_loop()
    asyncCode = asyncio.gather(
        Gateway.delete_gateways(dataFromCallingContext),
        Gateway.set_status_for_index("available", alsoDataFromCallingContext))
    results = loop.run_until_complete(asyncCode)
    loop.close()
    # run the code below immediately and do not wait for gather to finish
The code deletes what is represented as a gateway and, once that is done, sets the status "available" on the correct flag. Gateway.delete_gateways takes a very long time.
If run, this raises an error saying that 'Gateway.delete_gateways' and 'Gateway.set_status_for_index' were never awaited, but I do not want to await them here. I want the code that follows to continue executing without blocking. What is the correct syntax?
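For context, one common way to get this behavior is sketched below (this is not from the thread; Gateway and the data variables are reused from the question, and delete_then_set is a hypothetical wrapper name). The two awaits run one after another inside a single coroutine, and that coroutine is handed to a background thread running its own event loop, so the synchronous caller continues immediately:

import asyncio
import threading

async def delete_then_set(data, also_data):
    # Sequential: set_status_for_index only runs after delete_gateways finishes.
    await Gateway.delete_gateways(data)
    await Gateway.set_status_for_index("available", also_data)

def someFunction():
    dataFromCallingContext = "data created in the calling function"
    alsoDataFromCallingContext = "also data created in the calling function"
    coro = delete_then_set(dataFromCallingContext, alsoDataFromCallingContext)
    # asyncio.run blocks its own thread, so run it on a daemon thread instead.
    threading.Thread(target=asyncio.run, args=(coro,), daemon=True).start()
    # The code below runs immediately; it does not wait for the gateway work.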
My project requires me to run blocking code (from another library) whilst continuing my asyncio while True loop. The code looks something like this:
async def main():
    while True:
        session_timeout = aiohttp.ClientTimeout()
        async with aiohttp.ClientSession() as session:
            # Do async stuff like session.get and so on.
            ...
        # At a certain point, I have blocking code that I need to execute.
        # blocking_code() starts here; it needs time to get its return value,
        # and running it is the last thing main() does on each pass.
        # My objective is to run the blocking code separately, so that while
        # blocking_code() runs, the loop starts from the beginning again
        # instead of waiting for blocking_code() to complete and return.
        # In other words, go back to the top of the while loop. Meanwhile,
        # blocking_code() continues to run independently until it eventually
        # completes and returns. Nothing in main() needs the return value;
        # the result is only used inside blocking_code() itself.
        blocking_code()

asyncio.run(main())
I have tried using pool = ThreadPool(processes=1) and thread = pool.apply_async(blocking_code, params). It sort of works if there are things that need to be done after blocking_code() within main(); but blocking_code() is the last thing in main(), so it causes the whole while loop to pause until blocking_code() completes before starting back from the top.
I don't know if this is possible, and if it is, how it's done; but the ideal scenario is this.
Run main(), then run blocking_code() in its own instance, as if executing another .py file. So once the loop reaches blocking_code() in main(), it triggers the blocking_code.py file, and while that script runs, the while loop continues from the top again.
If, on the second pass through the while loop, it reaches blocking_code() again and the previous run has not completed, another instance of blocking_code() will run on its own, independently.
Does what I say make sense? Is it possible to achieve the desired outcome?
Thank you!
This is possible with threads. So that you don't block your main loop, you'll need to wrap your thread in an asyncio task. You can wait for return values once your loop is finished if you need to. You can do this with a combination of asyncio.create_task and asyncio.to_thread.
import aiohttp
import asyncio
import time

def blocking_code():
    print('Starting blocking code.')
    time.sleep(5)
    print('Finished blocking code.')

async def main():
    blocking_code_tasks = []
    while True:
        session_timeout = aiohttp.ClientTimeout()
        async with aiohttp.ClientSession() as session:
            print('Executing GET.')
            result = await session.get('https://www.example.com')
            blocking_code_task = asyncio.create_task(asyncio.to_thread(blocking_code))
            blocking_code_tasks.append(blocking_code_task)
        # do something with blocking_code_tasks, wait for them to finish,
        # extract errors, etc.

asyncio.run(main())
The above code runs the blocking code in a thread and then wraps that in an asyncio task. We then add this task to the blocking_code_tasks list to keep track of all the currently running tasks. Later on, you can get the values or errors out with something like asyncio.gather.
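For example, if the while loop can exit (say, on a break), something like this inside main() would drain the list; return_exceptions=True is optional and shown here only so failures are collected rather than raised:

# Sketch: inside main(), once the while loop exits, collect results/errors.
results = await asyncio.gather(*blocking_code_tasks, return_exceptions=True)
for outcome in results:
    if isinstance(outcome, Exception):
        print(f'Background task failed: {outcome}')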
I have a loop (all this is being done in Python 3.10) that is running relatively fast compared to a function that needs to consume data from the loop. I don't want to slow down the data and am trying to find a way to run the function asynchronously but only execute the function again after completion... basically:
import multiprocessing
import time

queue = []

def flow():
    thing = queue[0]
    time.sleep(.5)
    print(str(thing))
    delete = queue.pop(0)

p1 = multiprocessing.Process(target=flow)

while True:
    print('stream')
    time.sleep(.25)
    if len(queue) < 1:
        print('emptyQ')
        queue.append('flow')
        p1.start()
I've tried running the function in a thread and in a process, and both seem to try to start another instance while the function is still running. I tried using a queue to pass the data and act as a semaphore, by not removing the item until the end of the function and only adding an item and starting the thread or process if the queue was empty, but that didn't seem to work either.
EDIT: to add an explicit question...
Is there a way to execute a function asynchronously without executing it multiple times simultaneously?
EDIT2: Updated with functional test code (it accurately reproduces the failure), since the real code is a bit more substantial... I have noticed that it seems to work on the first execution of the function (though the print doesn't work inside the function...), but the next execution fails for whatever reason. I assume it tries to start the process twice?
The error I get is: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase...
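For reference, here is a minimal sketch (not from the question) of one pattern that avoids both problems: a single long-lived worker thread consumes items from a queue.Queue, so the slow function can never run twice at once, and the fast producer loop is never blocked waiting for it. The names flow_worker and work_queue are illustrative choices:

import queue
import threading
import time

work_queue = queue.Queue()

def flow_worker():
    # One long-lived worker: items are processed strictly one at a time.
    while True:
        thing = work_queue.get()  # blocks until an item arrives
        time.sleep(.5)            # the slow consumer step
        print(str(thing))
        work_queue.task_done()

threading.Thread(target=flow_worker, daemon=True).start()

while True:
    print('stream')
    time.sleep(.25)
    if work_queue.empty():
        print('emptyQ')
        work_queue.put('flow')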
I'm working with the Anki Cozmo SDK, which requires the use of async functions to actually make API calls. I'm trying to coordinate two of them, which sometimes requires an optional "move" call prior to another async task.
Simply put I need two asynchronous tasks to run on the same loop, but not start the second until the first is finished.
loop = asyncio.get_event_loop()

async_tasks = []
async_tasks.append(asyncio.ensure_future(bot1.draw_line(), loop=loop))
async_tasks.append(asyncio.ensure_future(bot2.draw_line(), loop=loop))

if draw_util.point_conflicts(selected_plans, bot.position, CDIST):
    safe_position = draw_util.find_safe_point_2_robots(selected_plans, bot.position, CDIST + 1)
    task = asyncio.ensure_future(bot.move_to(safe_position), loop=loop)
    await task

await asyncio.gather(*async_tasks)
I need some way to wait for the move_to task to complete before continuing on to run the async_tasks. How could this be done?
I've tried using loop.run_until_complete() to the same effect.
I noticed this question a whole year later, but here it is:
The asyncio.wait_for function does exactly what you're (or, actually, were) looking for. It blocks until a task is completed.
Please note that this function is a coroutine too, so you'll have to call it inside an async function.
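For illustration, a sketch of how it would fit the code above (bot, bot1, bot2, and safe_position are the names from the question; run_bots and the 30-second timeout are arbitrary choices for this sketch):

async def run_bots():
    # Wait for the optional move to finish (or time out) before drawing.
    await asyncio.wait_for(bot.move_to(safe_position), timeout=30)
    # Only then run the two drawing tasks together.
    await asyncio.gather(bot1.draw_line(), bot2.draw_line())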
When I try to start a new thread, my entire program stops until the thread's function finishes. I am trying to make the thread start and then run alongside the rest of my program.
Code:
def do_python(string):
    while True:
        exec(string, globals())
        time.sleep(0.1)

getKeyThread = threading.Thread(target=do_python("key = cpc.get_key()"), daemon=True).start()

time.sleep(0.2)

while True:
    if key == 9:
        print("Tab pressed.")
        exit()
I have imported all of the required modules, so that is not the problem. Any functions used here that aren't defined have been defined elsewhere and work perfectly fine. I haven't included my entire program, because it is far too big to paste here.
By doing
do_python("key = cpc.get_key()")
you are actually calling the do_python function in your main thread (it has an infinite loop, so it will never stop running). Since the function never returns anything, it just keeps running forever. Even if it did return something, you'd probably get an error unless whatever is returned is a callable object.
The target argument requires a callable, so you have to pass the function itself and supply its arguments as a tuple via args. Note also that Thread.start() returns None, so assign the thread before starting it:

getKeyThread = threading.Thread(target=do_python, args=("key = cpc.get_key()",), daemon=True)
getKeyThread.start()
I want to do web scraping over a set of categories, where each category has its own list of URLs. So I decided to call a function per category in the main function, and within that inner function there are non-blocking calls.
So here is the code:
def main():
    loop = asyncio.get_event_loop()
    p = loop.create_task(f("p", all_p_list))
    f = loop.create_task(f("f", all_f_list))
    loop.run_until_complete(asyncio.gather(p, f))
It should execute the f function concurrently.
But the f function also has to run the loop, since inside it another function is called concurrently for each URL.
async def f(category, total):
    urls = [urls_template[category].format(t) for t in t_list]
    soups_coro = map(parseURL_async, urls)
    loop = asyncio.get_event_loop()
    result = await loop.run_until_complete(asyncio.gather(*soups_coro))
But after I run the script, I get a This event loop is already running error, and I found that this is because I call loop.run_until_complete() in both the inner and the outer function.
However, when I strip out run_until_complete() and just call f() in main(), the call finishes immediately and does not wait for the inner work to complete. So calling the loop in main() seems unavoidable, but then I think that is incompatible with the inner function, which also has to call it.
How can I deal with the problem and run the loop? The original code was all in the same main() and it worked, but I want to make it cleaner if possible.
How can I deal with the problem and run the loop?
The loop is already running. You don't need to (and can't) run it again.
result = await loop.run_until_complete(asyncio.gather(*soups_coro))
You're awaiting the wrong thing. loop.run_until_complete doesn't return something you can await (a Future); it returns the result of whatever you're running until completion.
The reason nothing appears to happen when you call f directly is that f is an asyncio coroutine function. Calling it returns a coroutine object that must be scheduled on the event loop; it doesn't execute until a running event loop drives it. loop.run_until_complete takes care of all of that for you.
To wrap up your question, you want to await asyncio.gather.
async def f(category, total):
    urls = [urls_template[category].format(t) for t in t_list]
    soups_coro = map(parseURL_async, urls)
    result = await asyncio.gather(*soups_coro)
And you probably also want to include return result at the end of f, too.
Convert main() into an async function and execute it with loop.run_until_complete().
When the code has only one run_until_complete(), everything becomes much easier. In Python 3.7 you will be able to write just asyncio.run(main()).
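A minimal sketch of that restructuring, reusing the names from the question (f, all_p_list, all_f_list):

import asyncio

async def main():
    # f awaits asyncio.gather internally, so no nested run_until_complete
    # is needed; a single gather here drives both categories concurrently.
    await asyncio.gather(f("p", all_p_list), f("f", all_f_list))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())  # or simply asyncio.run(main()) on Python 3.7+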