Issue with waiting for an Event in curio - python

I'm using curio to implement a mechanism of two tasks that communicate using a curio.Event object. The first task (called action()) runs first and waits for the event to be set. The second task (called setter()) runs after the first one and sets the event.
The code is the following:
import curio

evt = curio.Event()

async def action():
    await evt.wait()
    print('Performing action')

async def setter():
    await evt.set()
    print('Event set')

async def run():
    task = await curio.spawn(action())
    await setter()
    print('Finished run')
    await task.wait()

curio.run(run())
The output is the following:
Event set
Finished run
Performing action
This means that print('Performing action') is executed AFTER print('Finished run'), which is what I'm trying to prevent. I was expecting that calling await evt.set() would also invoke all of its waiters, and that run() wouldn't continue until all of the waiters had run, meaning action() would resume BEFORE print('Finished run') is executed. This is the output I would like to have:
Event set
Performing action
Finished run
What am I getting wrong? Is there any way to change this behavior? I would like to have more control over the execution order.
Thanks

Setting an Event is a way to signal that something happened: as you already noted, it wakes the waiters but does not run them before control returns to the setter.
If you want to report that run() finished only after the action has been performed, you should report it after awaiting the action's task:
async def run():
    task = await curio.spawn(action())
    await setter()
    await task.wait()  # wait for the action to be performed
    print('Finished run')  # and only report that run() is done after that
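With this ordering the output becomes the one you wanted:
Event set
Performing action
Finished run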
If you want to block execution of run() until something specific has happened, you can wait() on another event that is set() when that something happens:
import curio

evt = curio.Event()
evt2 = curio.Event()

async def action():
    await evt.wait()
    print('Performing action')
    await evt2.set()
    print('Event 2 set')

async def setter():
    await evt.set()
    print('Event set')

async def run():
    task = await curio.spawn(action())
    await setter()
    await evt2.wait()
    print('Finished run')
    await task.wait()

curio.run(run())
Output:
Event set
Performing action
Event 2 set
Finished run
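If the goal is only to guarantee that 'Finished run' is printed after action() completes, curio's TaskGroup is another option, since leaving the block joins all spawned tasks. A minimal sketch, assuming a curio version that provides TaskGroup:

import curio

evt = curio.Event()

async def action():
    await evt.wait()
    print('Performing action')

async def run():
    async with curio.TaskGroup() as g:
        await g.spawn(action)
        await evt.set()
        print('Event set')
    # leaving the block waits for action() to finish
    print('Finished run')

curio.run(run())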

Running asynchronous functions in non-asynchronous functions

I'm trying to run some asynchronous functions from inside a non-asynchronous function. As I understand it, coroutines can't simply be called like regular functions, so how do I do it? I don't want to make the maze_move function asynchronous.
import asyncio
import json
import websockets

async def no_stop():
    # some logic
    await asyncio.sleep(4)

async def stop(stop_time):
    await asyncio.sleep(stop_time)
    # some logic

def maze_move():
    no_stop()
    stop(1.5)

async def main(websocket):
    global data_from_client, data_from_server, power_l, power_r
    get_params()
    get_data_from_server()
    get_data_from_client()
    while True:
        msg = await websocket.recv()
        allow_data(msg)
        cheker(data_from_client)
        data_from_server['IsBrake'] = data_from_client['IsBrake']
        data_from_server['powerL'] = power_l
        data_from_server['powerR'] = power_r
        await websocket.send(json.dumps(data_from_server))
        print(data_from_client['IsBrake'])

start_server = websockets.serve(main, 'localhost', 8080)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
How about:
def maze_move():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(no_stop())
    loop.run_until_complete(stop(1.5))
If you wanted to run two coroutines concurrently, then:
def maze_move():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(no_stop(), stop(1.5)))
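Note that run_until_complete only works when there is no event loop already running in the current thread; calling it from inside a running loop raises RuntimeError. The update below addresses that situation.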
Update Based on Updated Question
I am guessing what it is you want to do (see my comment to your question):
First, you cannot call coroutines such as stop directly from maze_move, since stop() does not actually run stop; it just creates and returns a coroutine object. So maze_move has to be modified. I will assume you do not want to make it a coroutine itself (why not, as long as you already have to modify it?). Further assuming you want to invoke maze_move from a coroutine that concurrently runs other coroutines, you can create a new coroutine, e.g. maze_move_runner, that runs maze_move in a separate thread so it does not block the other concurrently running coroutines:
import asyncio

async def no_stop():
    # some logic
    print('no stop')
    await asyncio.sleep(4)

async def stop(stop_time):
    await asyncio.sleep(stop_time)
    print('stop')
    # some logic

async def some_coroutine():
    print('Enter some_coroutine')
    await asyncio.sleep(2)
    print('Exit some_coroutine')
    return 1

def maze_move():
    # In case we are being run directly and not in a separate thread:
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # This thread has no running event loop, so create one:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    loop.run_until_complete(no_stop())
    loop.run_until_complete(stop(1.5))
    return 'Done!'

async def maze_move_runner():
    loop = asyncio.get_running_loop()
    # Run in another thread so the event loop is not blocked:
    return await loop.run_in_executor(None, maze_move)

async def main():
    results = await asyncio.gather(some_coroutine(), maze_move_runner())
    print(results)

asyncio.run(main())
Prints:
Enter some_coroutine
no stop
Exit some_coroutine
stop
[1, 'Done!']
But this would be the most straightforward solution:
async def maze_move():
    await no_stop()
    await stop(1.5)
    return 'Done!'

async def main():
    results = await asyncio.gather(some_coroutine(), maze_move())
    print(results)
If you have an already running event loop, you can define an async function inside of a sync function and launch it as task:
def maze_move():
    async def amaze_move():
        await no_stop()
        await stop(1.5)
    return asyncio.create_task(amaze_move())
This function returns an asyncio.Task object, which can be used in an await expression or not, depending on requirements. This way you won't have to make maze_move itself an async function, although I don't know why that would be a goal. Only an async function can run no_stop and stop, so you've got to have an async function somewhere.
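A minimal usage sketch (caller here is a hypothetical coroutine; maze_move must be invoked while the loop is running, since create_task requires a running event loop):

async def caller():
    task = maze_move()  # schedules amaze_move() on the running loop
    # ... do other work concurrently ...
    await task  # optionally wait for it to finish

asyncio.run(caller())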

Why does 'await' break from the local function when called from main()?

I am new to asynchronous programming, and while I understand most concepts, there is one relating to the inner workings of 'await' that I don't quite understand.
Consider the following:
import asyncio

async def foo():
    print('start fetching')
    await asyncio.sleep(2)
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching
vs.
async def foo():
    print('start fetching')
    print('done fetching')

async def main():
    task1 = asyncio.create_task(foo())

asyncio.run(main())
Output: start fetching followed by done fetching
Perhaps it is my understanding of await, which I take to mean that we can pause (for 2 seconds in the case above), or wait for functions to fully finish running before any further code is run.
But in the first example above, why does await cause 'done fetching' not to be printed?
asyncio.create_task schedules the awaitable on the event loop and returns immediately, so you are actually exiting the main function (and closing the event loop) before the task is able to finish.
You need to change main to either:
async def main():
    task1 = asyncio.create_task(foo())
    await task1
or
async def main():
    await foo()
Creating a task first (the former) is useful in many cases, but they all involve situations where the event loop will outlast the task, e.g. a long-running server; otherwise you should just await the coroutine directly, like the latter. The sketch below shows the kind of situation where creating the task first pays off.
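A minimal sketch, where other_work is a hypothetical coroutine standing in for unrelated work that can overlap with foo():

import asyncio

async def foo():
    print('start fetching')
    await asyncio.sleep(2)
    print('done fetching')

async def other_work():
    await asyncio.sleep(1)  # hypothetical stand-in for unrelated work
    print('other work done')

async def main():
    task1 = asyncio.create_task(foo())  # foo() starts running at the next await
    await other_work()                  # runs while foo() is sleeping
    await task1                         # don't exit main() before foo() finishes

asyncio.run(main())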

How to multiprocess async/await functions in python?

I need to run an event every 60 seconds for all my cars. My problem with the code below is that the while loop doesn't end until the timeout (60) does, and hence only the first car in cars is run.
class RunCars(BaseEvent):
    def __init__(self):
        interval_seconds = 60  # Set the interval for this event
        super().__init__(interval_seconds)

    # run() method will be called once every {interval_seconds} seconds
    async def run(self, client, cars):
        for car in cars:
            channel = get_channel(client, "general")
            await client.send_message(channel, 'Running this ' + str(car))
            await msg.add_reaction(str(get_emoji(':smiley:')))
            reaction = None
            while True:
                if str(reaction) == str(get_emoji(':smiley:')):
                    await client.send_message(channel, 'Finished with this ' + str(car))
                try:
                    reaction, user = await client.wait_for('reaction_add', timeout=60, check=check)
                except:
                    break
I tried changing the code into a multithreaded process but had trouble with async/await inside the function and problems with pickling the function itself.
I'd appreciate any suggestions for how to go about this..
The asyncio module lets you execute multiple coroutines concurrently using the gather function. I think you can achieve the behavior you want by defining a method that handles a single car and replacing your for-loop with a call to gather, which will run multiple run_one coroutines concurrently:
import asyncio

class RunCars(BaseEvent):
    def __init__(self):
        interval_seconds = 60  # Set the interval for this event
        super().__init__(interval_seconds)

    async def run(self, client, cars):
        coroutines = [self.run_one(client, car) for car in cars]
        await asyncio.gather(*coroutines)  # gather must be awaited to actually run the coroutines

    async def run_one(self, client, car):
        channel = get_channel(client, "general")
        await client.send_message(channel, 'Running this ' + str(car))
        await msg.add_reaction(str(get_emoji(':smiley:')))
        reaction = None
        while True:
            if str(reaction) == str(get_emoji(':smiley:')):
                await client.send_message(channel, 'Finished with this ' + str(car))
            try:
                reaction, user = await client.wait_for('reaction_add', timeout=60, check=check)
            except:
                break
In general, when writing async code, you should try to replace sequential awaits of async methods (for-loops that await one call at a time) with gather calls so they execute concurrently, as in the sketch below.
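A self-contained sketch of the difference, with sleep standing in for real work (all names here are illustrative):

import asyncio
import time

async def handle(i):
    await asyncio.sleep(1)  # stand-in for awaiting real I/O
    return i

async def sequential():
    return [await handle(i) for i in range(3)]  # takes ~3 seconds

async def concurrent():
    return await asyncio.gather(*(handle(i) for i in range(3)))  # takes ~1 second

start = time.perf_counter()
print(asyncio.run(sequential()), round(time.perf_counter() - start))  # [0, 1, 2] 3
start = time.perf_counter()
print(asyncio.run(concurrent()), round(time.perf_counter() - start))  # [0, 1, 2] 1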

How to throw a custom exception into a running task

I'm trying to figure out if it's possible to throw a custom exception into a running asyncio task, similarly to what is achieved by Task.cancel(), which schedules a CancelledError to be raised in the underlying coroutine.
I came across Task.get_coro().throw(exc), but calling it seems like opening a big can of worms, as we may leave the task in a bad state, especially considering all the machinery that runs when a task throws CancelledError into its coroutine.
Consider the following example:
import asyncio

class Reset(Exception):
    pass

async def infinite():
    while True:
        try:
            print('work')
            await asyncio.sleep(1)
            print('more work')
        except Reset:
            print('reset')
            continue
        except asyncio.CancelledError:
            print('cancel')
            break

async def main():
    infinite_task = asyncio.create_task(infinite())
    await asyncio.sleep(0)  # Allow infinite_task to enter its work loop.
    infinite_task.get_coro().throw(Reset())
    await infinite_task

asyncio.run(main())
## OUTPUT ##
# "work"
# "reset"
# "work"
# hangs forever ... bad :(
Is what I'm trying to do even feasible? It feels as if I shouldn't be manipulating the underlying coroutine like this. Is there any workaround?
There's no way to throw a custom exception into a running task. You shouldn't mess with .throw: it's an implementation detail, and using it will probably break something.
If you want to pass information (about the reset) into the task, do it through an argument. Here's how that can be implemented:
import asyncio
from contextlib import suppress

async def infinite(need_reset):
    try:
        while True:
            inner_task = asyncio.create_task(inner_job())
            await asyncio.wait(
                [
                    asyncio.create_task(need_reset.wait()),  # asyncio.wait needs tasks, not bare coroutines
                    inner_task,
                ],
                return_when=asyncio.FIRST_COMPLETED,
            )
            if need_reset.is_set():
                print('reset')
                await cancel(inner_task)
                need_reset.clear()
    except asyncio.CancelledError:
        print('cancel')
        raise  # you should never suppress CancelledError, see:
        # https://stackoverflow.com/a/33578893/1113207

async def inner_job():
    print('work')
    await asyncio.sleep(1)
    print('more work')

async def cancel(task):
    # more info: https://stackoverflow.com/a/43810272/1113207
    task.cancel()
    with suppress(asyncio.CancelledError):
        await task

async def main():
    need_reset = asyncio.Event()
    infinite_task = asyncio.create_task(infinite(need_reset))
    await asyncio.sleep(1.5)
    need_reset.set()
    await asyncio.sleep(1.5)
    await cancel(infinite_task)

asyncio.run(main())
Output:
work
more work
work
reset
work
more work
work
cancel

asyncio create_task() to be first in queue?

As I understand it, when I call create_task(), the task is put at the end of the event loop queue.
My use case is the following: I have several tasks made of the same coroutine, and I want to cancel all of them on some failed condition. This is the pattern:
async def coro(params):
    # long running task...
    if failed_condition:
        await cancel_all()  # should cancel all tasks made of coro

async def cancel_all():
    for task in tasks:
        task.cancel()
    await asyncio.gather(*tasks)  # wait for cancel completion
    print("All tasks cancelled")

tasks = []

async def main():
    tasks = [loop.create_task(coro(params)) for x in range(5)]
    await asyncio.gather(*tasks)
The problem is that since cancel_all() itself is awaited by one task, it is cancelled by itself.
How can I solve this?
I could use loop.create_task(cancel_all()) instead, but I want cancel_all() to run as soon as possible.
cancel_all() could exclude the current task:
async def cancel_all():
    to_cancel = set(tasks)
    to_cancel.discard(asyncio.current_task())
    for task in to_cancel:
        task.cancel()
    await asyncio.gather(*to_cancel, return_exceptions=True)  # collect CancelledError instead of propagating it
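A self-contained sketch of this pattern (the failed condition is simulated by one task's index; names follow the question):

import asyncio

tasks = []

async def coro(i):
    await asyncio.sleep(i)
    if i == 1:  # simulated failed_condition
        await cancel_all()

async def cancel_all():
    to_cancel = set(tasks)
    to_cancel.discard(asyncio.current_task())
    for task in to_cancel:
        task.cancel()
    await asyncio.gather(*to_cancel, return_exceptions=True)
    print("All tasks cancelled")

async def main():
    global tasks
    tasks = [asyncio.create_task(coro(i)) for i in range(5)]
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(main())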
Alternatively, you can use asyncio.wait with the FIRST_EXCEPTION parameter:
import asyncio
import random

class UnexpectedCondition(Exception):
    pass

async def coro(condition):
    # long running task...
    await asyncio.sleep(random.random() * 10)
    if condition:
        raise UnexpectedCondition("Failed")
    return "Result"

async def main(f):
    # wrap the coroutines in tasks; newer asyncio.wait does not accept bare coroutines
    tasks = [asyncio.create_task(coro(f(x))) for x in range(5)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION)
    for p in pending:
        # unfinished tasks will be cancelled
        print("Cancel")
        p.cancel()
    for d in done:
        try:
            # get the result from finished tasks
            print(d.result())
        except UnexpectedCondition as e:
            # handle tasks that hit the unexpected condition
            print(e)

asyncio.run(main(lambda x: x % 2 == 0))
