I'm trying to build an application with Python's asyncio module that is scalable and also uses other asyncio-based modules. The idea is to easily add tasks as the application grows, using synchronization primitives for resources shared between tasks, yet I'm confused about which design would best fit the intent.
import asyncio

async def task1():
    while True:
        # Code from task1
        await asyncio.sleep(1)

async def task2():
    while True:
        # Code from task2
        await asyncio.sleep(1)

async def task3():
    while True:
        # Code from task3
        await asyncio.sleep(1)

async def main():
    await asyncio.gather(
        task1(),
        task2(),
        task3()
    )

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(main())
In the above approach, each task is its own while loop, yielding control at the end of each iteration.
import asyncio

async def task1():
    # Code from task1
    await asyncio.sleep(1)

async def task2():
    # Code from task2
    await asyncio.sleep(1)

async def task3():
    # Code from task3
    await asyncio.sleep(1)

async def main():
    while True:
        await asyncio.gather(
            task1(),
            task2(),
            task3()
        )

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(main())
In this second approach, the gather call is the one inside the while loop.
I also think that perhaps none of this is necessary: since asyncio tasks run cooperatively, I would get the same result by awaiting each coroutine in series, saving the use of mutexes.
Is there anything asynchronous happening besides the asyncio.sleep(1)? If not, then why bother with asynchronicity?
In the first piece of code, your three tasks are running independently of each other; you might handle 3 items in task1, 4 items in task2, and none in task3.
In the second, you specifically handle one thing in task1, one thing in task2, one thing in task3, and then start over again. If the tasks run at different speeds, this seems inefficient.
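Since the question also asks about synchronization primitives, here is a minimal sketch of the first (independent loops) design combined with an asyncio.Lock, assuming a shared counter as a stand-in for the real shared resource:

import asyncio

shared = {"count": 0}        # stands in for the real shared resource
lock = asyncio.Lock()        # guards access to that resource

async def worker(name, delay):
    while True:
        async with lock:     # only one task touches the shared state at a time
            shared["count"] += 1
            print(name, shared["count"])
        await asyncio.sleep(delay)

async def main():
    await asyncio.gather(
        worker("task1", 1),
        worker("task2", 2),
    )

asyncio.run(main())

Each task keeps its own pace, and the lock serializes only the critical section, not the whole loop.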
For example this code:
import asyncio

async def f1(num):
    while True:
        print(num)
        await asyncio.sleep(2)

class ExampleClass:
    def __init__(self):
        self.tasks = []

    async def main(self):
        for i in range(10):
            self.tasks.append(asyncio.create_task(f1(i)))
        await asyncio.gather(*self.tasks)

    def add_new_task(self, task):
        self.tasks.append(task)
Then somewhere outside I call
ExampleClass.add_new_task(task)
What I need is to add new tasks and execute them asynchronously with the existing ones.
Maybe I should use some other construct to implement what I want?
What is important is that my tasks probably need to execute forever (forever polling).
The asyncio.gather function is not really convenient for such a task. However, you can take a look at asyncio.TaskGroup, which was added in Python 3.11 and makes it easier to add new tasks dynamically.
import asyncio

async def main():
    async with asyncio.TaskGroup() as group:
        # Create some tasks
        tasks = [
            group.create_task(asyncio.sleep(1.0))
            for _ in range(10)
        ]
        # Wait for some tasks to finish
        done, tasks = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        # Add more tasks (/!\ `tasks` is now a set)
        tasks.add(group.create_task(asyncio.sleep(2.0)))
        # Wait for the rest to complete
        await asyncio.gather(*tasks)

if __name__ == "__main__":
    asyncio.run(main())
So, I came to a possible solution for my case based on Louis's idea.
The rough idea is to use a fake async infinite loop in which all my tasks are executed and where I add new tasks within asyncio.TaskGroup:
async def fake_generator():
    while True:
        yield None

fake_gen = fake_generator()

async with asyncio.TaskGroup() as tg:
    async for _ in fake_gen:
        await add_new_data()
        for element in self.data:
            if not self.data[element]['in_use']:
                tg.create_task(job(element))
                self.data[element]['in_use'] = True
        await asyncio.sleep(60 * 60)
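For what it's worth, the fake generator only serves to drive an infinite loop; a plain while True inside the TaskGroup should express the same thing (a sketch reusing the names from the snippet above):

async with asyncio.TaskGroup() as tg:
    while True:
        await add_new_data()
        for element in self.data:
            if not self.data[element]['in_use']:
                tg.create_task(job(element))
                self.data[element]['in_use'] = True
        await asyncio.sleep(60 * 60)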
I'm trying to run some asynchronous functions inside a synchronous function. The problem is that, as I understand it, the functions don't run when simply called like that, so how do I do it? I don't want to make the maze_move function asynchronous.
async def no_stop():
    # some logic
    await asyncio.sleep(4)

async def stop(stop_time):
    await asyncio.sleep(stop_time)
    # some logic

def maze_move():
    no_stop()
    stop(1.5)
async def main(websocket):
    global data_from_client, data_from_server, power_l, power_r
    get_params()
    get_data_from_server()
    get_data_from_client()
    while True:
        msg = await websocket.recv()
        allow_data(msg)
        cheker(data_from_client)
        data_from_server['IsBrake'] = data_from_client['IsBrake']
        data_from_server['powerL'] = power_l
        data_from_server['powerR'] = power_r
        await websocket.send(json.dumps(data_from_server))
        print(data_from_client['IsBrake'])

start_server = websockets.serve(main, 'localhost', 8080)

asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
How about:
def maze_move():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(no_stop())
    loop.run_until_complete(stop(1.5))
If you wanted to run two coroutines concurrently, then:
def maze_move():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(no_stop(), stop(1.5)))
Update Based on Updated Question
I am guessing what it is you want to do (see my comment to your question):
First, you cannot call coroutines such as stop directly from maze_move, since stop() does not actually run stop; it just returns a coroutine object. So maze_move has to be modified. I will assume you do not want to make it a coroutine itself (though why not, as long as you already have to modify it?). Further assuming you want to invoke maze_move from a coroutine that wishes to run other coroutines concurrently, you can create a new coroutine, e.g. maze_move_runner, that runs maze_move in a separate thread so it does not block the other concurrently running coroutines:
import asyncio

async def no_stop():
    # some logic
    print('no stop')
    await asyncio.sleep(4)

async def stop(stop_time):
    await asyncio.sleep(stop_time)
    print('stop')
    # some logic

async def some_coroutine():
    print('Enter some_coroutine')
    await asyncio.sleep(2)
    print('Exit some_coroutine')
    return 1

def maze_move():
    # In case we are being run directly and not in a separate thread:
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # This thread has no current event loop, so:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    loop.run_until_complete(no_stop())
    loop.run_until_complete(stop(1.5))
    return 'Done!'

async def maze_move_runner():
    loop = asyncio.get_running_loop()
    # Run in another thread:
    return await loop.run_in_executor(None, maze_move)

async def main():
    results = await asyncio.gather(some_coroutine(), maze_move_runner())
    print(results)

asyncio.run(main())
Prints:
Enter some_coroutine
no stop
Exit some_coroutine
stop
[1, 'Done!']
But this would be the most straightforward solution:
async def maze_move():
    await no_stop()
    await stop(1.5)
    return 'Done!'

async def main():
    results = await asyncio.gather(some_coroutine(), maze_move())
    print(results)
If you have an already running event loop, you can define an async function inside of a sync function and launch it as a task:

def maze_move():
    async def amaze_move():
        await no_stop()
        await stop(1.5)
    return asyncio.create_task(amaze_move())
This function returns an asyncio.Task object which can be used in an await expression, or not, depending on requirements. This way you won't have to make maze_move itself an async function, although I don't know why that would be a goal. Only an async function can run no_stop and stop, so you've got to have an async function somewhere.
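For example, a brief usage sketch (with maze_move as defined just above; note that asyncio.create_task requires an event loop to be running when maze_move is called):

import asyncio

async def runner():
    task = maze_move()  # schedules amaze_move on the running loop
    # ... do other work concurrently here ...
    await task          # optionally wait for it to finish

asyncio.run(runner())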
I am new to asynchronous programming, and while I understand most concepts, there is one relating to the inner workings of 'await' that I don't quite understand.
Consider the following:
import asyncio

async def foo():
    print('start fetching')
    await asyncio.sleep(2)
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching
vs.
import asyncio

async def foo():
    print('start fetching')
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching followed by done fetching
Perhaps it is my understanding of await: I understand it insofar as we can use it to pause (for 2 seconds in the case above), or to wait for functions to fully finish running before any further code runs.
But for the first example above, why does await cause 'done fetching' to not run?
asyncio.create_task schedules an awaitable on the event loop and returns immediately, so you are actually exiting the main function (and closing the event loop) before the task is able to finish.
You need to change main to either:
async def main():
    task1 = asyncio.create_task(foo())
    await task1
or
async def main():
    await foo()
Creating a task first (the former) is useful in many cases, but they all involve situations where the event loop will outlast the task, e.g. a long-running server; otherwise you should just await the coroutine directly, like the latter.
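To make the "event loop outlasts the task" case concrete, here is a small sketch; the names handle_request and server are illustrative, standing in for a long-running service:

import asyncio

async def handle_request(n):
    await asyncio.sleep(0.1)           # simulated work
    print(f"handled request {n}")

async def server():
    background = []
    for n in range(3):                 # stands in for requests arriving
        background.append(asyncio.create_task(handle_request(n)))
        await asyncio.sleep(0.05)      # the server keeps doing other work
    await asyncio.gather(*background)  # don't exit before the tasks finish

asyncio.run(server())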
I come from the C++ world, and I'm looking for the equivalent of std::future, std::promise in Python. Is there an equivalent mechanism or another method in Python to achieve the same?
I'm aware of asyncio.Future, but I need it for threading not asyncio.
I'm using a third-party library (PJSUA2) which I call directly from my main thread, but which sends its results in asynchronous callbacks in the context of a worker thread created by the library.
Expecting future/promise support in Python, I was hoping to write my application code like this:
future = wrap_foo(...)
if future.get() != expected_result:
    raise Exception(...)
future1 = wrap_foo(...)
future2 = wrap_bar(...)
I was planning on wrapping all library asynchronous calls with a wrap_xxx function (where the library function is called xxx) taking care of creating the future/promise objects.
I need the ability to have multiple futures pending, so I cannot simply make synchronous wrap_xxx functions that block until the result is ready.
See the asyncio module -
import asyncio

async def main():
    print('hello')
    await asyncio.sleep(1)
    print('world')

asyncio.run(main())
hello
world
It supports coroutines -
import asyncio
import time

async def say_after(delay, what):
    await asyncio.sleep(delay)
    print(what)

async def main():
    print(f"started at {time.strftime('%X')}")
    await say_after(1, 'hello')
    await say_after(2, 'world')
    print(f"finished at {time.strftime('%X')}")

asyncio.run(main())
started at 17:13:52
hello
world
finished at 17:13:55
And tasks -
import asyncio

async def nested():
    return 42

async def main():
    # Schedule nested() to run soon concurrently
    # with "main()".
    task = asyncio.create_task(nested())
    # "task" can now be used to cancel "nested()", or
    # can simply be awaited to wait until it is complete:
    print(await task)

asyncio.run(main())
42
And Futures -
import asyncio

async def set_after(fut, delay, value):
    # Sleep for *delay* seconds.
    await asyncio.sleep(delay)
    # Set *value* as a result of *fut* Future.
    fut.set_result(value)

async def main():
    # Get the current event loop.
    loop = asyncio.get_running_loop()
    # Create a new Future object.
    fut = loop.create_future()
    # Run "set_after()" coroutine in a parallel Task.
    # We are using the low-level "loop.create_task()" API here because
    # we already have a reference to the event loop at hand.
    # Otherwise we could have just used "asyncio.create_task()".
    loop.create_task(
        set_after(fut, 1, '... world'))
    print('hello ...')
    # Wait until *fut* has a result (1 second) and print it.
    print(await fut)

asyncio.run(main())
hello ...
... world
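For the threading case specifically (the question rules out asyncio), concurrent.futures.Future is probably the closest analogue of std::promise/std::future: the wrapper creates the Future, the callback running in the worker thread fulfills it with set_result (or set_exception), and the caller blocks in result(). A minimal sketch, with a spawned thread simulating the library's worker thread:

import concurrent.futures
import threading
import time

def wrap_foo(arg):
    fut = concurrent.futures.Future()  # the "promise" side
    def worker():                      # stands in for the library's worker thread
        time.sleep(0.5)                # simulated asynchronous work
        fut.set_result(arg * 2)        # the callback fulfills the future
    threading.Thread(target=worker, daemon=True).start()
    return fut                         # the "future" side, returned immediately

future1 = wrap_foo(1)
future2 = wrap_foo(2)                  # several futures can be pending at once
print(future1.result(), future2.result())  # result() blocks until set_result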
How do I remove the async-everywhere insanity in a program like this?
import asyncio

async def async_coro():
    await asyncio.sleep(1)

async def sync_func_1():
    # This is blocking and synchronous
    await async_coro()

async def sync_func_2():
    # This is blocking and synchronous
    await sync_func_1()

if __name__ == "__main__":
    # Async pollution goes all the way to __main__
    asyncio.run(sync_func_2())
I need to have 3 async markers and asyncio.run at the top level just to call one async function. I assume I'm doing something wrong - how can I clean up this code to make it use async less?
FWIW, I'm interested mostly because I'm writing an API using asyncio and I don't want my users to have to think too much about whether their functions need to be def or async def depending on whether they're using an async part of the API or not.
After some research, one answer is to manually manage the event loop:
import asyncio

async def async_coro():
    await asyncio.sleep(1)

def sync_func_1():
    # This is blocking and synchronous.
    # Note: in recent Python versions asyncio.get_event_loop() is
    # deprecated when no event loop is running; asyncio.new_event_loop()
    # is the explicit alternative.
    loop = asyncio.get_event_loop()
    coro = async_coro()
    loop.run_until_complete(coro)

def sync_func_2():
    # This is blocking and synchronous
    sync_func_1()

if __name__ == "__main__":
    # No more async pollution
    sync_func_2()
If you must do that, I would recommend an approach like this:
import asyncio
import threading

async def async_coro():
    await asyncio.sleep(1)

_loop = asyncio.new_event_loop()
threading.Thread(target=_loop.run_forever, daemon=True).start()

def sync_func_1():
    # This is blocking and synchronous
    return asyncio.run_coroutine_threadsafe(async_coro(), _loop).result()

def sync_func_2():
    # This is blocking and synchronous
    sync_func_1()

if __name__ == "__main__":
    sync_func_2()
The advantage of this approach, compared to one where the sync functions run the event loop themselves, is that it supports nesting of sync functions. It also runs only a single event loop, so if the underlying library wants to set up e.g. a background task for monitoring, that task will run continuously rather than being spawned anew each time.