Python asyncio - using timers/threading to terminate async

I have an async coroutine that I want to terminate using a timer/thread. The coroutine is based on this example from aiortc.
args = parse_args()
client = Client(connection, media, args.role)
# run event loop
loop = asyncio.get_event_loop()
try:
    timer = None
    if args.timeout:
        print("Timer started")
        timer = threading.Timer(args.timeout, loop.run_until_complete, args=(client.close(),))
        timer.start()
    loop.run_until_complete(client.run())
    if timer:
        timer.join()
except KeyboardInterrupt:
    pass
finally:
    # cleanup
    loop.run_until_complete(client.close())
This does not work and raises RuntimeError('This event loop is already running')
Why does this raise the error? My guess is that it's because the loop is running on a different thread. However, creating a new loop doesn't work as it gets a future attached to a different loop.
def timer_callback():
    new_loop = asyncio.new_event_loop()
    new_loop.run_until_complete(client.close())
Following that, how can I use a timer to end the script?

Following that, how can I use a timer to end the script?
You can call asyncio.run_coroutine_threadsafe() to submit a coroutine to an event loop running in another thread:
if args.timeout:
    print("Timer started")
    timer = threading.Timer(
        args.timeout,
        asyncio.run_coroutine_threadsafe,
        args=(client.close(), loop),
    )
    timer.start()
Note, however, that since you're working with asyncio, you don't need a dedicated thread for the timer; you could just create a coroutine and tell it to wait before doing something:
if args.timeout:
    print("Timer started")

    async def close_after_timeout():
        await asyncio.sleep(args.timeout)
        await client.close()

    loop.create_task(close_after_timeout())

This isn't the general solution I was looking for. I added a timeout variable to the client constructor, and within client.run() I added an await asyncio.sleep(timeout) that exits the loop once the timeout elapses. This is enough for my purposes.
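A minimal sketch of that workaround, under stated assumptions: the timeout attribute and the _work() coroutine below are illustrative stand-ins rather than part of aiortc, and asyncio.wait_for is used in place of a bare sleep so the pending work is actually cancelled when time runs out:

import asyncio

# Hypothetical client: only the timeout handling is shown.
class Client:
    def __init__(self, timeout=None):
        self.timeout = timeout  # seconds before run() gives up, or None

    async def _work(self):
        # stand-in for the real signalling/streaming work
        await asyncio.sleep(3600)

    async def run(self):
        if self.timeout is None:
            await self._work()
            return
        try:
            # wait_for cancels _work() and raises TimeoutError once the
            # timeout elapses, so run() returns and the caller's
            # run_until_complete() (or asyncio.run()) falls through to
            # the cleanup code.
            await asyncio.wait_for(self._work(), timeout=self.timeout)
        except asyncio.TimeoutError:
            pass

asyncio.run(Client(timeout=5).run())  # returns after ~5 seconds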

Python run non-blocking async function from sync function

Is there a way to call an async function from a sync one without waiting for it to complete?
My current tests:
Issue: Waits for test_timer_function to complete
async def test_timer_function():
    await asyncio.sleep(10)
    return

def main():
    print("Starting timer at {}".format(datetime.now()))
    asyncio.run(test_timer_function())
    print("Ending timer at {}".format(datetime.now()))
Issue: Does not call test_timer_function
async def test_timer_function():
    await asyncio.sleep(10)
    return

def main():
    print("Starting timer at {}".format(datetime.now()))
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    asyncio.ensure_future(test_timer_function())
    print("Ending timer at {}".format(datetime.now()))
Any suggestions?
Async functions do not really run in the background: they always run in a single thread.
That means that when there are parallel tasks in (normal) async code, they are only executed when you give the asyncio loop a chance to run - this happens when your code uses await, enters an async for or async with block, or returns from a coroutine function that is running as a task.
In non-async code, you have to enter the loop and pass control to it in order for the async code to run - that is what asyncio.run does and asyncio.ensure_future does not: the latter just registers a task to be executed whenever the asyncio loop has time for it, but you return from the function without ever passing control to the async loop, so your program just finishes.
One thing that can be done is to establish a secondary thread, where the asyncio code will run: this thread will run its asyncio loop, and you can communicate with tasks in it by using global variables and normal thread data structures like Queues.
The minimal changes for your code are:
import asyncio
import threading
from datetime import datetime

now = datetime.now

async def test_timer_function():
    await asyncio.sleep(2)
    print(f"ending async task at {now()}")
    return

def run_async_loop_in_thread():
    asyncio.run(test_timer_function())

def main():
    print(f"Starting timer at {now()}")
    t = threading.Thread(target=run_async_loop_in_thread)
    t.start()
    print(f"Ending timer at {now()}")
    return t

if __name__ == "__main__":
    t = main()
    t.join()
    print(f"asyncio thread exited normally at {now()}")
(Please, when posting Python code, include the import lines and the lines that call your functions, so the code actually runs: it is not a lot of boilerplate, as it may be in other languages, and it turns your snippets into complete, ready-to-run examples.)
printout when running this snippet at the console:
Starting timer at 2022-10-20 16:47:45.211654
Ending timer at 2022-10-20 16:47:45.212630
ending async task at 2022-10-20 16:47:47.213464
asyncio thread exited normally at 2022-10-20 16:47:47.215417
The answer is simply no. It's not gonna happen in a single thread.
First issue:
In your first issue, main() is a sync function. It stops at the line asyncio.run(test_timer_function()) until the event loop finishes its work.
What is its only task? test_timer_function! This task does give control back to the event loop, but not to the caller main! So if the event loop had other tasks too, they would cooperate with each other - but only among the tasks of the event loop, not between the event loop and the caller.
So it will wait 10 seconds. There is no other task here that could use those 10 seconds to do its work.
Second issue:
You didn't even run the event loop. Check the documentation for ensure_future.
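For illustration, a minimal sketch of what the second snippet is missing: the task registered by ensure_future only makes progress once the loop is actually driven, e.g. with run_until_complete. Note that this blocks again, which is exactly the answer's point - within a single thread there is no way around handing control to the loop.

import asyncio
from datetime import datetime

async def test_timer_function():
    await asyncio.sleep(10)
    return

def main():
    print("Starting timer at {}".format(datetime.now()))
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    task = asyncio.ensure_future(test_timer_function())
    # Without the next line the task is only registered, never executed:
    # the loop has to run for the coroutine to make any progress.
    loop.run_until_complete(task)
    print("Ending timer at {}".format(datetime.now()))

main()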

Send Ctrl + C to asyncio Task

So I have two tasks running until complete in my event loop. I want to handle KeyboardInterrupt when running those Tasks and send the same signal to the Tasks in the event loop when receiving it.
loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(asyncio.gather(
        task_1(),
        task_2()
    ))
except KeyboardInterrupt:
    pass
    # Here I want to send sigterm to the tasks in event loop
Is there any way to do this?
Use asyncio.run() instead. asyncio.run() cancels all remaining tasks/futures when it receives an exception.
asyncio.gather() would then cancel the tasks when it is itself cancelled. You can catch asyncio.CancelledError around asyncio.sleep() to see this; swallowing the exception instead of re-raising it would keep the task running.
async def task_1():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError as ex:
        print('task1', type(ex))
        raise

async def task_2():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError as ex:
        print('task2', type(ex))
        raise

async def main():
    await asyncio.gather(task_1(), task_2())

if __name__ == '__main__':
    asyncio.run(main())
Otherwise, you need to keep references to the tasks, cancel them, and re-run the event loop to propagate the CancelledError, like what asyncio.run() does under the hood.
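A hedged sketch of that manual route, in case you have to stay on loop.run_until_complete() rather than asyncio.run() (the task bodies are placeholders):

import asyncio

async def task_1():
    await asyncio.sleep(10)

async def task_2():
    await asyncio.sleep(10)

loop = asyncio.get_event_loop()
main_future = asyncio.gather(task_1(), task_2())
try:
    loop.run_until_complete(main_future)
except KeyboardInterrupt:
    # Cancel the gather (which cancels both tasks), then re-run the
    # loop so CancelledError propagates and the tasks can clean up.
    main_future.cancel()
    try:
        loop.run_until_complete(main_future)
    except asyncio.CancelledError:
        pass
finally:
    loop.close()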

Async Errors in python

I'm coding a telegram userbot (with telethon) which sends a message, every 60 seconds, to some chats.
I'm using 2 threads but I get the following errors: "RuntimeWarning: coroutine 'sender' was never awaited" and "no running event loop".
My code:
....
async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

schedule.every(60).seconds.do(asyncio.create_task(sender()))
...

class checker1(Thread):
    def run(self):
        while True:
            schedule.run_pending()
            time.sleep(1)

class checker2(Thread):
    def run(self):
        while True:
            client.add_event_handler(handler)
            client.run_until_disconnected()

checker2().start()
checker1().start()
I searched for a solution but I didn't find anything...
You should avoid using threads with asyncio unless you know what you're doing. The code can be rewritten using asyncio as follows, since most of the time you don't actually need threads:
import asyncio

async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

async def checker1():
    while True:
        await sender()
        await asyncio.sleep(60)  # every 60s

async def main():
    asyncio.create_task(checker1())  # run the sender loop as a background task
    await client.run_until_disconnected()

client.loop.run_until_complete(main())
This code is not perfect (you should properly cancel and wait checker1 at the end of the program), but it should work.
As a side note, you don't need client.run_until_disconnected(). The call simply blocks (runs) until the client is disconnected. If you can keep the program running differently, as long as asyncio runs, the client will work.
Another thing: a bare except: is a very bad idea and will probably cause issues with exception handling. At the very least, replace it with except Exception.
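A minimal sketch of the cleanup alluded to above, assuming main() keeps a reference to the background task; the run_until_disconnected() stub below just stands in for the telethon call so the snippet runs on its own:

import asyncio

async def checker1():
    # stand-in for the periodic sender loop from the answer above
    while True:
        await asyncio.sleep(60)

async def run_until_disconnected():
    # stand-in for client.run_until_disconnected()
    await asyncio.sleep(5)

async def main():
    checker = asyncio.create_task(checker1())  # background task
    try:
        await run_until_disconnected()
    finally:
        # Properly cancel the background task and wait for it to finish.
        checker.cancel()
        try:
            await checker
        except asyncio.CancelledError:
            pass

asyncio.run(main())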
There are a few problems with your code. asyncio is complaining about "no running event loop" because your program never starts the event loop anywhere, and tasks can't be scheduled without an event loop running. See Asyncio in corroutine RuntimeError: no running event loop. In order to start the event loop, you can use loop.run_until_complete() (or asyncio.run()) if you have a main coroutine for your program, or you can use asyncio.get_event_loop().run_forever() to run the event loop forever.
The second problem is the incorrect usage of schedule.every(60).seconds.do(), which is hidden by the first error. schedule expects a function to be passed in, not an awaitable (which is what asyncio.create_task(sender()) returns). This normally would have caused a TypeError, but calling create_task() without a running event loop raised an exception first, so that TypeError was never reached. You'll need to define a function and then pass it to schedule, like this:
def start_sender():
    asyncio.create_task(sender())

schedule.every(60).seconds.do(start_sender)
This should work as long as the event loop is started somewhere else in your program.
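For illustration, one hedged way to wire this up without extra threads, assuming the question's sender() coroutine and telethon client object: drive schedule.run_pending() from a coroutine on the same loop, so create_task() always has a running loop to attach to (the 1-second tick is arbitrary):

import asyncio
import schedule

def start_sender():
    # sender() is the coroutine from the question
    asyncio.create_task(sender())

schedule.every(60).seconds.do(start_sender)

async def run_schedule():
    # poll the schedule from inside the event loop
    while True:
        schedule.run_pending()
        await asyncio.sleep(1)

client.loop.create_task(run_schedule())
client.run_until_disconnected()  # blocks and keeps the event loop running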

how to stop() the last loop among multiple nested asyncio loops?

I need, at times, more than one asyncio coroutine, where the routines would then be nested: coroutine B running in coroutine A, C in B, and so on. The problem is stopping a given loop. For example, using loop.stop() in the last, topmost loop such as loop 'C' actually kills all asyncio coroutines - and not just this loop 'C'. I suspect that stop() actually kills coroutine A, and by doing so annihilates all the other dependent routines. The nested routines are scheduled with call_soon_threadsafe, and all routines are started with run_forever.
I tried using specific loop names, or 'return', or 'break' (in the while loop inside the coroutine) but nothing exits the loop - except stop() which then kills non-specifically all loops at once.
My problem I described here is actually related to an earlier question of mine...
python daemon server crashes during HTML popup overlay callback using asyncio websocket coroutines
...which I thought I had solved - till running into this loop.stop() problem.
Below is my example code for Python 3.4.3, where I try to stop() the coroutine_overlay_websocket_server loop as soon as it is done with the websocket job. As said, my code in its current state breaks all running loops. Thereafter the fmDaemon recreates a new asyncio loop that knows nothing of what was computed before:
import webbrowser
import websockets
import asyncio

class fmDaemon( Daemon):
    # Daemon - see : http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
    # Daemon - see : http://www.jejik.com/files/examples/daemon3x.py
    def __init__( self, me):
        self.me = me

    def run( self):
        while True:
            @asyncio.coroutine
            def coroutine_daemon_websocket_server( websocket, path):
                msg = yield from websocket.recv()
                if msg != None:
                    msg_out = "{}".format( msg)
                    yield from websocket.send( msg_out)
                    self.me = Function1( self.me, msg)
            loop = asyncio.get_event_loop()
            loop.run_until_complete( websockets.serve( coroutine_daemon_websocket_server, self.me.IP, self.me.PORT))
            loop.run_forever()

def Function1( me, msg):
    # doing some stuff :
    # creating HTML file,
    # loading HTML webpage with a webbrowser call,
    # awaiting HTML button press signal via websocket protocol :
    @asyncio.coroutine
    def coroutine_overlay_websocket_server( websocket, path):
        while True:
            msg = yield from websocket.recv()
            msg_out = "{}".format( msg)
            yield from websocket.send( msg_out)
            if msg == 'my_expected_string':
                me.flags['myStr'] = msg
                break
        loop.call_soon_threadsafe( loop.stop)
    loop = asyncio.get_event_loop()
    loop.call_soon_threadsafe( asyncio.async, websockets.serve( coroutine_overlay_websocket_server, me.IP, me.PORT_overlay))
    loop.run_forever()
    loop.call_soon_threadsafe( loop.close)
    # program should continue here...
My two questions: 1) Is there a way to exit a given coroutine without killing a coroutine lower down? 2) Or, alternatively, do you know of a method for reading websocket calls that does not make use of asyncio?
I'm still a little confused by what you're trying to do, but there's definitely no need to try to nest event loops - your program is single-threaded, so when you call asyncio.get_event_loop() multiple times, you're always going to get the same event loop back. So you're really not creating two different loops in your example; both fmDaemon.run and Function1 use the same one. That's why stopping the loop inside Function1 also kills the coroutine you launched inside run.
That said, there's no reason to try to create two different event loops to begin with. Function1 is being called from a coroutine, and wants to call other coroutines, so why not make it a coroutine, too? Then you can just call yield from websockets.serve(...) directly, and use an asyncio.Event to wait for coroutine_overlay_websocket_server to complete:
import webbrowser
import websockets
import asyncio

class fmDaemon( Daemon):
    def __init__( self, me):
        self.me = me

    def run( self):
        @asyncio.coroutine
        def coroutine_daemon_websocket_server(websocket, path):
            msg = yield from websocket.recv()
            if msg != None:
                msg_out = "{}".format( msg)
                yield from websocket.send( msg_out)
                self.me = yield from Function1( self.me, msg)
        loop = asyncio.get_event_loop()
        loop.run_until_complete(websockets.serve(coroutine_daemon_websocket_server,
                                                 self.me.IP,
                                                 self.me.PORT))
        loop.run_forever()

@asyncio.coroutine
def Function1(me, msg):
    @asyncio.coroutine
    def coroutine_overlay_websocket_server(websocket, path):
        while True:
            msg = yield from websocket.recv()
            msg_out = "{}".format( msg)
            yield from websocket.send( msg_out)
            if msg == 'my_expected_string':
                me.flags['myStr'] = msg
                break
        event.set()  # Tell the outer function it can exit.
    event = asyncio.Event()
    yield from websockets.serve(coroutine_overlay_websocket_server, me.IP, me.PORT_overlay)
    yield from event.wait()  # This will block until event.set() is called.

python asyncio run event loop once?

I am trying to understand the asyncio library, specifically how to use it with sockets. I have written some code in an attempt to gain understanding.
I wanted to run a sender and a receiver socket asynchronously. I got to the point where all the data is sent up to the last item, but then I have to run the loop one more time. Looking at how to do this, I found this link from Stack Overflow, which I implemented below -- but what is going on here? Is there a better/more sane way to do this than calling stop followed by run_forever?
The documentation for stop() in the event loop is:
Stop running the event loop.
Every callback scheduled before stop() is called will run. Callbacks scheduled after stop() is called will not run. However, those callbacks will run if run_forever() is called again later.
And run_forever()'s documentation is:
Run until stop() is called.
Questions:
Why in the world is run_forever the only way to run_once? This doesn't even make sense.
Is there a better way to do this?
Does my code look like a reasonable way to program with the asyncio library?
Is there a better way to add tasks to the event loop besides asyncio.async()? loop.create_task gives an error on my Linux system.
https://gist.github.com/cloudformdesign/b30e0860497f19bd6596
The stop(); run_forever() trick works because of how stop is implemented:
def stop(self):
    """Stop running the event loop.

    Every callback scheduled before stop() is called will run.
    Callback scheduled after stop() is called won't. However,
    those callbacks will run if run() is called again later.
    """
    self.call_soon(_raise_stop_error)

def _raise_stop_error(*args):
    raise _StopError
So, next time the event loop runs and executes pending callbacks, it's going to call _raise_stop_error, which raises _StopError. The run_forever loop will break only on that specific exception:
def run_forever(self):
    """Run until stop() is called."""
    if self._running:
        raise RuntimeError('Event loop is running.')
    self._running = True
    try:
        while True:
            try:
                self._run_once()
            except _StopError:
                break
    finally:
        self._running = False
So, by scheduling a stop() and then calling run_forever, you end up running one iteration of the event loop, then stopping once it hits the _raise_stop_error callback. You may have also noticed that _run_once is defined and called by run_forever. You could call that directly, but that can sometimes block if there aren't any callbacks ready to run, which may not be desirable. I don't think there's a cleaner way to do this currently - That answer was provided by Andrew Svetlov, who is an asyncio contributor; he would probably know if there's a better option. :)
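For illustration, the trick can be packaged as a small helper (the run_once name here is just a local convention, not an asyncio API):

import asyncio

def run_once(loop):
    # stop() tells the loop to halt after one full iteration, so
    # run_forever() executes the callbacks that are already pending
    # and then returns instead of blocking.
    loop.stop()
    loop.run_forever()

loop = asyncio.new_event_loop()
loop.call_soon(print, "ran in a single iteration")
run_once(loop)
loop.close()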
In general, your code looks reasonable, though I think that you shouldn't be using this run_once approach to begin with. It's not deterministic; if you had a longer list or a slower system, it might require more than two extra iterations to print everything. Instead, you should just send a sentinel that tells the receiver to shut down, and then wait on both the send and receive coroutines to finish:
import sys
import time
import socket
import asyncio

addr = ('127.0.0.1', 1064)
SENTINEL = b"_DONE_"

# ... (This stuff is the same)

@asyncio.coroutine
def sending(addr, dataiter):
    loop = asyncio.get_event_loop()
    for d in dataiter:
        print("Sending:", d)
        sock = socket.socket()
        yield from send_close(loop, sock, addr, str(d).encode())
    # Send a sentinel
    sock = socket.socket()
    yield from send_close(loop, sock, addr, SENTINEL)

@asyncio.coroutine
def receiving(addr):
    loop = asyncio.get_event_loop()
    sock = socket.socket()
    try:
        sock.setblocking(False)
        sock.bind(addr)
        sock.listen(5)
        while True:
            data = yield from accept_recv(loop, sock)
            if data == SENTINEL:  # Got a sentinel
                return
            print("Received:", data)
    finally:
        sock.close()

def main():
    loop = asyncio.get_event_loop()
    # add these items to the event loop
    recv = asyncio.async(receiving(addr), loop=loop)
    send = asyncio.async(sending(addr, range(10)), loop=loop)
    loop.run_until_complete(asyncio.wait([recv, send]))

main()
Finally, asyncio.async is the right way to add tasks to the event loop. create_task was added in Python 3.4.2, so if you have an earlier version it won't exist.
