I am setting an exception handler on my asyncio event loop. However, it doesn't seem to be called until the event loop thread is stopped. For example, consider this code:
import asyncio
from threading import Thread

def exception_handler(loop, context):
    print('Exception handler called')

loop = asyncio.get_event_loop()
loop.set_exception_handler(exception_handler)
thread = Thread(target=loop.run_forever)
thread.start()

async def run():
    raise RuntimeError()

asyncio.run_coroutine_threadsafe(run(), loop)
loop.call_soon_threadsafe(loop.stop, loop)
thread.join()
This code prints "Exception handler called", as we might expect. However, if I remove the line that shuts down the event loop (loop.call_soon_threadsafe(loop.stop, loop)), it no longer prints anything.
I have a few questions about this:
Am I doing something wrong here?
Does anyone know if this is the intended behaviour of asyncio exception handlers? I can't find anything that documents this, and it seems a little strange to me.
I'd quite like to have a long-running event loop that logs errors happening in its coroutines, so the current behaviour seems problematic for me.
There are a few problems in the code above:
stop() does not need a parameter
The program ends before the coroutine is executed (stop() was called before it).
Here is the fixed code (without exceptions and the exception handler):
import asyncio
from threading import Thread

async def coro():
    print("in coro")
    return 42

loop = asyncio.get_event_loop()
thread = Thread(target=loop.run_forever)
thread.start()
fut = asyncio.run_coroutine_threadsafe(coro(), loop)
print(fut.result())
loop.call_soon_threadsafe(loop.stop)
thread.join()
run_coroutine_threadsafe() returns a future object which holds the exception (it never reaches the loop's exception handler):
import asyncio
from pprint import pprint
from threading import Thread

def exception_handler(loop, context):
    print('Exception handler called')
    pprint(context)

loop = asyncio.get_event_loop()
loop.set_exception_handler(exception_handler)
thread = Thread(target=loop.run_forever)
thread.start()

async def coro():
    print("coro")
    raise RuntimeError("BOOM!")

fut = asyncio.run_coroutine_threadsafe(coro(), loop)
try:
    print("success:", fut.result())
except:
    print("exception:", fut.exception())

loop.call_soon_threadsafe(loop.stop)
thread.join()
However, coroutines that are scheduled with create_task() or ensure_future() will invoke the exception_handler if they raise:
async def coro2():
    print("coro2")
    raise RuntimeError("BOOM2!")

async def coro():
    loop.create_task(coro2())
    print("coro")
    raise RuntimeError("BOOM!")
You can use this to create a small wrapper:
async def boom(x):
    print("boom", x)
    raise RuntimeError("BOOM!")

async def call_later(coro, *args, **kwargs):
    loop.create_task(coro(*args, **kwargs))
    return "ok"

fut = asyncio.run_coroutine_threadsafe(call_later(boom, 7), loop)
However, you should probably consider using a Queue to communicate with your thread instead.
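A minimal sketch of that queue-based approach (the names and the shutdown sentinel are my own, not from the original answer): the consumer coroutine lives entirely in the loop's thread, and other threads only hand it data through call_soon_threadsafe().
import asyncio
from threading import Thread

async def worker(queue):
    while True:
        item = await queue.get()
        if item is None:          # sentinel value: shut down
            break
        print("processing", item)

async def make_queue():
    return asyncio.Queue()        # create the queue inside the loop's own thread

loop = asyncio.new_event_loop()
thread = Thread(target=loop.run_forever)
thread.start()

queue = asyncio.run_coroutine_threadsafe(make_queue(), loop).result()
worker_future = asyncio.run_coroutine_threadsafe(worker(queue), loop)

for item in ("a", "b", "c"):
    loop.call_soon_threadsafe(queue.put_nowait, item)   # thread-safe hand-off
loop.call_soon_threadsafe(queue.put_nowait, None)

worker_future.result()            # wait until the worker has drained the queue
loop.call_soon_threadsafe(loop.stop)
thread.join()
loop.close()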
Due to one use case, one of my long-running functions executes multiple instructions, but I have to give a maximum time for its execution. If the function cannot finish within the allocated time, it should clean up its progress and return.
Let's have a look at some sample code below:
import asyncio

async def eternity():
    # Sleep for one hour
    try:
        await asyncio.sleep(3600)
        print('yay!, everything is done..')
    except Exception as e:
        print("I have to clean up lot of thing in case of Exception or not able to finish by the allocated time")

async def main():
    try:
        ref = await asyncio.wait_for(eternity(), timeout=5)
    except asyncio.exceptions.TimeoutError:
        print('timeout!')

asyncio.run(main())
The function eternity is the long-running function. The catch is that, in case of some exception or reaching the maximum allocated time, the function needs to clean up the mess it has made.
P.S. eternity is an independent function and only it can understand what to clean.
I am looking for a way to raise an exception inside my task just before the timeout, OR send some interrupt or terminate signal to the task and handle it.
Basically, I want to execute some piece of code in my task before asyncio raises the TimeoutError and takes control.
Also, I am using Python 3.9.
Hope I was able to explain the problem.
What you need is an async context manager:
import asyncio

class MyClass(object):
    async def eternity(self):
        # Sleep for one hour
        await asyncio.sleep(3600)
        print('yay!, everything is done..')

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        print("I have to clean up lot of thing in case of Exception or not able to finish by the allocated time")

async def main():
    try:
        async with MyClass() as my_class:
            ref = await asyncio.wait_for(my_class.eternity(), timeout=5)
    except asyncio.exceptions.TimeoutError:
        print('timeout!')

asyncio.run(main())
And this is the output:
I have to clean up lot of thing in case of Exception or not able to finish by the allocated time
timeout!
For more details, take a look here.
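Alternatively, since wait_for() cancels the inner task when the timeout expires, eternity() itself can catch asyncio.CancelledError (or use try/finally) and run its cleanup right before the TimeoutError is raised in main(). A minimal sketch of that idea (not from the original answer):
import asyncio

async def eternity():
    try:
        await asyncio.sleep(3600)
        print('yay!, everything is done..')
    except asyncio.CancelledError:
        # wait_for() cancelled us because the timeout expired:
        # do the cleanup here, then let the cancellation propagate
        print('cleaning up after cancellation')
        raise

async def main():
    try:
        await asyncio.wait_for(eternity(), timeout=5)
    except asyncio.TimeoutError:
        print('timeout!')

asyncio.run(main())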
I have a bog-standard synchronous python program that needs to be able to read data from websockets and update the GUI with the data. However, asyncio creep is constantly tripping me up.
How do I make a module that:
accepts multiple subscriptions to multiple sources
sends an update to the requester whenever there's data
opens exactly one websocket connection per URL
resets the websocket if it closes
Here's what I have already, but it's failing at many points:
run_forever() means that the loop gets stuck before the subscription completes and then handle() is stuck in the falsey while loop
it does not seem to want to restart sockets when they're down because a websockets object does not have a connected property (websocket without an s does, but I'm not clear on the differences and can't find info online either)
I'm absolutely not sure if my approach is remotely correct.
Been fighting with this for weeks. Would appreciate some pointers.
import asyncio
import json
import websockets

class WSClient():
    subscriptions = set()
    connections = {}
    started = False

    def __init__(self):
        self.loop = asyncio.get_event_loop()

    def start(self):
        self.started = True
        self.loop.run_until_complete(self.handle())
        self.loop.run_forever()  # problematic, because it does not allow new subscribe() events

    async def handle(self):
        while len(self.connections) > 0:
            # listen to every websocket
            futures = [self.listen(self.connections[url]) for url in self.connections]
            done, pending = await asyncio.wait(futures)
            # the following is apparently necessary to avoid warnings
            # about non-retrieved exceptions etc
            try:
                data, ws = done.pop().result()
            except Exception as e:
                print("OTHER EXCEPTION", e)
            for task in pending:
                task.cancel()

    async def listen(self, ws):
        try:
            async for data in ws:
                data = json.loads(data)
                # call the subscriber (listener) back when there's data
                [s.listener._handle_result(data) for s in self.subscriptions if s.ws == ws]
        except Exception as e:
            print('ERROR LISTENING; RESTARTING SOCKET', e)
            await asyncio.sleep(2)
            self.restart_socket(ws)

    def subscribe(self, subscription):
        task = self.loop.create_task(self._subscribe(subscription))
        asyncio.gather(task)
        if not self.started:
            self.start()

    async def _subscribe(self, subscription):
        try:
            ws = self.connections.get(subscription.url, await websockets.connect(subscription.url))
            await ws.send(json.dumps(subscription.sub_msg))
            subscription.ws = ws
            self.connections[subscription.url] = ws
            self.subscriptions.add(subscription)
        except Exception as e:
            print("ERROR SUBSCRIBING; RETRYING", e)
            await asyncio.sleep(2)
            self.subscribe(subscription)

    def restart_socket(self, ws):
        for s in self.subscriptions:
            if s.ws == ws and not s.ws.connected:
                print(s)
                del self.connections[s.url]
                self.subscribe(s)
I have a bog-standard synchronous python program that needs to be able to read data from websockets and update the GUI with the data. However, asyncio creep is constantly tripping me up.
As you mentioned a GUI, this is probably not a "bog-standard synchronous python program". A GUI program usually has a non-blocking, event-driven main thread that handles user actions via callbacks, which is very similar to asyncio. A common way to combine asyncio with a GUI is therefore to replace asyncio's default event loop with a GUI-specific one, so that your coroutines run inside the GUI's event loop and you avoid a run_forever() call that blocks everything.
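For example, here is a minimal sketch of that first idea using tkinter (tkinter and the polling interval are my assumptions, not part of the original question or answer): the Tk event pump is driven from a coroutine, so asyncio and the GUI share one loop and one thread.
import asyncio
import tkinter as tk

async def run_tk(root, interval=0.05):
    """Pump Tk events from a coroutine so the GUI and asyncio share one loop."""
    try:
        while True:
            root.update()                   # process pending GUI events
            await asyncio.sleep(interval)   # hand control back to asyncio
    except tk.TclError:                     # raised once the window is destroyed
        pass

async def main():
    root = tk.Tk()
    # ... build widgets here and start websocket coroutines with create_task() ...
    await run_tk(root)

asyncio.run(main())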
An alternative is to run the asyncio event loop in a separate thread, so that your program can wait for websocket data and for user clicks at the same time. I've rewritten your code as follows:
import asyncio
import threading
import websockets
import json

class WSClient(threading.Thread):
    def __init__(self):
        super().__init__()
        self._loop = None
        self._tasks = {}
        self._stop_event = None

    def run(self):
        self._loop = asyncio.new_event_loop()
        self._stop_event = asyncio.Event(loop=self._loop)
        try:
            self._loop.run_until_complete(self._stop_event.wait())
            self._loop.run_until_complete(self._clean())
        finally:
            self._loop.close()

    def stop(self):
        self._loop.call_soon_threadsafe(self._stop_event.set)

    def subscribe(self, url, sub_msg, callback):
        def _subscribe():
            if url not in self._tasks:
                task = self._loop.create_task(
                    self._listen(url, sub_msg, callback))
                self._tasks[url] = task
        self._loop.call_soon_threadsafe(_subscribe)

    def unsubscribe(self, url):
        def _unsubscribe():
            task = self._tasks.pop(url, None)
            if task is not None:
                task.cancel()
        self._loop.call_soon_threadsafe(_unsubscribe)

    async def _listen(self, url, sub_msg, callback):
        try:
            while not self._stop_event.is_set():
                try:
                    ws = await websockets.connect(url, loop=self._loop)
                    await ws.send(json.dumps(sub_msg))
                    async for data in ws:
                        data = json.loads(data)
                        # NOTE: please make sure that `callback` won't block,
                        # and it is allowed to update GUI from threads.
                        # If not, you'll need to find a way to call it from
                        # main/GUI thread (similar to `call_soon_threadsafe`)
                        callback(data)
                except Exception as e:
                    print('ERROR; RESTARTING SOCKET IN 2 SECONDS', e)
                    await asyncio.sleep(2, loop=self._loop)
        finally:
            self._tasks.pop(url, None)

    async def _clean(self):
        for task in self._tasks.values():
            task.cancel()
        await asyncio.gather(*self._tasks.values(), loop=self._loop)
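A hypothetical usage sketch (the URL, subscribe message, and callback below are placeholders I made up, not part of the original answer):
ws_client = WSClient()
ws_client.start()                       # run() starts the event loop in its own thread
ws_client.subscribe("wss://example.com/feed",
                    {"op": "subscribe", "channel": "ticker"},
                    lambda data: print("update:", data))
# ... later, e.g. when the GUI shuts down ...
ws_client.unsubscribe("wss://example.com/feed")
ws_client.stop()
ws_client.join()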
You can try tornado and autobahn-twisted for websockets.
I am trying to schedule an asyncio coroutine from another thread using create_task(). The problem is that the coroutine is not called, at least not within a reasonable amount of time.
Is there are way to wake up the event loop or at least specify a shorter timeout?
#!/usr/bin/python3
import asyncio, threading

event_loop = None

@asyncio.coroutine
def coroutine():
    print("coroutine called")

def scheduler():
    print("scheduling...")
    event_loop.create_task(coroutine())
    threading.Timer(2, scheduler).start()

def main():
    global event_loop
    threading.Timer(2, scheduler).start()
    event_loop = asyncio.new_event_loop()
    asyncio.set_event_loop(event_loop)
    event_loop.run_forever()

main()
Output:
scheduling...
scheduling...
scheduling...
scheduling...
According to the documentation of Task, "this class is not thread safe". So scheduling from another thread is not expected to work.
I found two solutions for this based on the answers and comments here.
@wind85's answer: directly replace the create_task() call with asyncio.run_coroutine_threadsafe(coroutine(), event_loop). Requires Python 3.5.1.
Use call_soon_threadsafe to schedule a callback, which then creates the task:
def do_create_task():
    eventLoop.create_task(coroutine())

def scheduler():
    eventLoop.call_soon_threadsafe(do_create_task)
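For completeness, a minimal sketch of the first option (assuming Python 3.5.1 or newer; the structure mirrors the question's code):
import asyncio
import threading

async def coroutine():
    print("coroutine called")

def scheduler(loop):
    print("scheduling...")
    asyncio.run_coroutine_threadsafe(coroutine(), loop)  # thread-safe submission
    threading.Timer(2, scheduler, args=(loop,)).start()

loop = asyncio.new_event_loop()
threading.Timer(2, scheduler, args=(loop,)).start()
loop.run_forever()   # runs until interrupted, printing "coroutine called" every 2 seconds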
Here we go, this should work. It's a port. Try it out; since I have the latest version, I can't really assure you it will work.
#!/usr/bin/python3
import concurrent.futures
import threading, asyncio
from asyncio import coroutines, futures

def run_coroutine_threadsafe_my(coro, loop):
    """Submit a coroutine object to a given event loop.

    Return a concurrent.futures.Future to access the result.
    """
    if not coroutines.iscoroutine(coro):
        raise TypeError('A coroutine object is required')
    future = concurrent.futures.Future()

    def callback():
        try:
            futures._chain_future(asyncio.ensure_future(coro, loop=loop), future)
        except Exception as exc:
            if future.set_running_or_notify_cancel():
                future.set_exception(exc)
            raise

    loop.call_soon_threadsafe(callback)
    return future

event_loop = None

@asyncio.coroutine
async def coro():
    print("coroutine called")

def scheduler():
    print("scheduling...")
    run_coroutine_threadsafe_my(coro(), event_loop)
    threading.Timer(2, scheduler).start()

def main():
    global event_loop
    threading.Timer(2, scheduler).start()
    event_loop = asyncio.new_event_loop()
    asyncio.set_event_loop(event_loop)
    event_loop.run_forever()

main()
At times I need more than one asyncio coroutine, with the routines nested: coroutine B running in coroutine A, C in B, and so on. The problem is stopping a given loop. For example, calling loop.stop() in the innermost loop, such as loop 'C', actually kills all asyncio coroutines, not just loop 'C'. I suspect that stop() actually kills coroutine A, and by doing so annihilates all the dependent routines. The nested routines are scheduled with call_soon_threadsafe, and all of them are started with run_forever.
I tried using specific loop names, or 'return', or 'break' (in the while loop inside the coroutine), but nothing exits the loop except stop(), which then indiscriminately kills all loops at once.
The problem I describe here is actually related to an earlier question of mine...
python daemon server crashes during HTML popup overlay callback using asyncio websocket coroutines
...which I thought I had solved - till running into this loop.stop() problem.
Below is my example code for Python 3.4.3, where I try to stop() the coroutine_overlay_websocket_server loop as soon as it is done with the websocket job. As said, my code in its current state breaks all running loops. Thereafter the fmDaemon creates a new asyncio loop that knows nothing of what was computed before:
import webbrowser
import websockets
import asyncio

class fmDaemon( Daemon):
    # Daemon - see : http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
    # Daemon - see : http://www.jejik.com/files/examples/daemon3x.py
    def __init__( self, me):
        self.me = me

    def run( self):
        while True:
            @asyncio.coroutine
            def coroutine_daemon_websocket_server( websocket, path):
                msg = yield from websocket.recv()
                if msg != None:
                    msg_out = "{}".format( msg)
                    yield from websocket.send( msg_out)
                    self.me = Function1( self.me, msg)
            loop = asyncio.get_event_loop()
            loop.run_until_complete( websockets.serve( coroutine_daemon_websocket_server, self.me.IP, self.me.PORT))
            loop.run_forever()

def Function1( me, msg):
    # doing some stuff :
    # creating HTML file,
    # loading HTML webpage with a webbrowser call,
    # awaiting HTML button press signal via websocket protocol :
    @asyncio.coroutine
    def coroutine_overlay_websocket_server( websocket, path):
        while True:
            msg = yield from websocket.recv()
            msg_out = "{}".format( msg)
            yield from websocket.send( msg_out)
            if msg == 'my_expected_string':
                me.flags['myStr'] = msg
                break
        loop.call_soon_threadsafe( loop.stop)

    loop = asyncio.get_event_loop()
    loop.call_soon_threadsafe( asyncio.async, websockets.serve( coroutine_overlay_websocket_server, me.IP, me.PORT_overlay))
    loop.run_forever()
    loop.call_soon_threadsafe( loop.close)
    # program should continue here...
My two questions: 1) Is there a way to exit a given coroutine without killing a coroutine lower down? 2) Or, alternatively, do you know of a method for reading websocket calls that does not make use of asyncio?
I'm still a little confused by what you're trying to do, but there's definitely no need to try to nest event loops - your program is single-threaded, so when you call asyncio.get_event_loop() multiple times, you're always going to get the same event loop back. So you're really not creating two different loops in your example; both fmDaemon.run and Function1 use the same one. That's why stopping loop inside Function1 also kills the coroutine you launched inside run.
That said, there's no reason to try to create two different event loops to begin with. Function1 is being called from a coroutine, and wants to call other coroutines, so why not make it a coroutine, too? Then you can just call yield from websockets.serve(...) directly, and use an asyncio.Event to wait for coroutine_overlay_websocket_server to complete:
import webbrowser
import websockets
import asyncio

class fmDaemon( Daemon):
    def __init__( self, me):
        self.me = me

    def run( self):
        @asyncio.coroutine
        def coroutine_daemon_websocket_server(websocket, path):
            msg = yield from websocket.recv()
            if msg != None:
                msg_out = "{}".format( msg)
                yield from websocket.send( msg_out)
                self.me = Function1( self.me, msg)
        loop = asyncio.get_event_loop()
        loop.run_until_complete(websockets.serve(coroutine_daemon_websocket_server,
                                                 self.me.IP,
                                                 self.me.PORT))
        loop.run_forever()

@asyncio.coroutine
def Function1(me, msg):
    @asyncio.coroutine
    def coroutine_overlay_websocket_server(websocket, path):
        while True:
            msg = yield from websocket.recv()
            msg_out = "{}".format( msg)
            yield from websocket.send( msg_out)
            if msg == 'my_expected_string':
                me.flags['myStr'] = msg
                break
        event.set()  # Tell the outer function it can exit.

    event = asyncio.Event()
    yield from websockets.serve(coroutine_overlay_websocket_server, me.IP, me.PORT_overlay)
    yield from event.wait()  # This will block until event.set() is called.
I have a python multi-threaded application. I want to run an asyncio loop in a thread and post callbacks and coroutines to it from another thread. Should be easy, but I cannot get my head around the asyncio stuff.
I came up with the following solution, which does half of what I want; feel free to comment on anything:
import asyncio
import functools
import time
from threading import Thread

class B(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.loop = None

    def run(self):
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)  # why do I need that??
        self.loop.run_forever()

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)

    def add_task(self, coro):
        """this method should return a task object, that I
        can cancel, not a handle"""
        f = functools.partial(self.loop.create_task, coro)
        return self.loop.call_soon_threadsafe(f)

    def cancel_task(self, xx):
        # no idea
        pass

@asyncio.coroutine
def test():
    while True:
        print("running")
        yield from asyncio.sleep(1)

b = B()
b.start()
time.sleep(1)  # need to wait for loop to start
t = b.add_task(test())
time.sleep(10)
# here the program runs fine but how can I cancel the task?
b.stop()
So starting and stopping the loop works fine. I thought about creating a task using create_task, but that method is not threadsafe, so I wrapped it in call_soon_threadsafe. But I would like to get the task object back in order to be able to cancel the task. I could do something complicated using Future and Condition, but there must be a simpler way, isn't there?
I think you may need to make your add_task method aware of whether or not it's being called from a thread other than the event loop's. That way, if it's called from the same thread, you can just call asyncio.async directly; otherwise, it can do some extra work to pass the task from the loop's thread to the calling thread. Here's an example:
import time
import asyncio
import functools
from threading import Thread, current_thread, Event
from concurrent.futures import Future

class B(Thread):
    def __init__(self, start_event):
        Thread.__init__(self)
        self.loop = None
        self.tid = None
        self.event = start_event

    def run(self):
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)
        self.tid = current_thread()
        self.loop.call_soon(self.event.set)
        self.loop.run_forever()

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)

    def add_task(self, coro):
        """this method should return a task object, that I
        can cancel, not a handle"""
        def _async_add(func, fut):
            try:
                ret = func()
                fut.set_result(ret)
            except Exception as e:
                fut.set_exception(e)

        f = functools.partial(asyncio.async, coro, loop=self.loop)
        if current_thread() == self.tid:
            return f()  # We can call directly if we're not going between threads.
        else:
            # We're in a non-event loop thread so we use a Future
            # to get the task from the event loop thread once
            # it's ready.
            fut = Future()
            self.loop.call_soon_threadsafe(_async_add, f, fut)
            return fut.result()

    def cancel_task(self, task):
        self.loop.call_soon_threadsafe(task.cancel)

@asyncio.coroutine
def test():
    while True:
        print("running")
        yield from asyncio.sleep(1)

event = Event()
b = B(event)
b.start()
event.wait()  # Let the loop's thread signal us, rather than sleeping
t = b.add_task(test())  # This is a real task
time.sleep(10)
b.stop()
First, we save the thread id of the event loop in the run method, so we can figure out if calls to add_task are coming from other threads later. If add_task is called from a non-event loop thread, we use call_soon_threadsafe to call a function that will both schedule the coroutine, and then use a concurrent.futures.Future to pass the task back to the calling thread, which waits on the result of the Future.
A note on cancelling a task: when you call cancel() on a Task, a CancelledError will be raised in the coroutine the next time the event loop runs. This means that the coroutine the Task is wrapping will be aborted by the exception the next time it hits a yield point, unless it catches the CancelledError and prevents itself from aborting. Also note that this only works if the function being wrapped is actually an interruptible coroutine; an asyncio.Future returned by BaseEventLoop.run_in_executor, for example, can't really be cancelled, because it's actually wrapped around a concurrent.futures.Future, and those can't be cancelled once their underlying function actually starts executing. In those cases, the asyncio.Future will say it's cancelled, but the function actually running in the executor will continue to run.
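To make that cancellation behaviour concrete, here is a minimal, self-contained sketch (using the modern asyncio.run API; not part of the original answer):
import asyncio

async def worker():
    try:
        while True:
            await asyncio.sleep(1)   # cancellation is delivered at an await point
    except asyncio.CancelledError:
        print("cleaning up before exiting")
        raise                        # re-raise so the task is marked as cancelled

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.1)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("task was cancelled")

asyncio.run(main())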
Edit: Updated the first example to use concurrent.futures.Future, instead of a queue.Queue, per Andrew Svetlov's suggestion.
Note: asyncio.async is deprecated since version 3.4.4; use asyncio.ensure_future instead.
You are doing everything right.
For stopping a task, add a method:
class B(Thread):
    # ...
    def cancel(self, task):
        self.loop.call_soon_threadsafe(task.cancel)
BTW, you have to set up an event loop for the created thread explicitly with
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
because asyncio creates an implicit event loop only for the main thread.
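A tiny illustration of that point (a sketch, not part of the original answer): calling get_event_loop() in a non-main thread fails until you set a loop explicitly.
import asyncio
import threading

def worker():
    try:
        asyncio.get_event_loop()          # no loop has been set in this thread
    except RuntimeError as e:
        print("no loop in this thread:", e)
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)          # now this thread has a current loop
    assert asyncio.get_event_loop() is loop
    loop.close()

t = threading.Thread(target=worker)
t.start()
t.join()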
Just for reference, here is the code I finally implemented based on the help I got on this site. It is simpler since I did not need all the features. Thanks again!
import asyncio
from threading import Thread
from concurrent.futures import Future
import functools

class B(Thread):
    def __init__(self):
        Thread.__init__(self)
        self.loop = None

    def run(self):
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)

    def _add_task(self, future, coro):
        task = self.loop.create_task(coro)
        future.set_result(task)

    def add_task(self, coro):
        future = Future()
        p = functools.partial(self._add_task, future, coro)
        self.loop.call_soon_threadsafe(p)
        return future.result()  # block until result is available

    def cancel(self, task):
        self.loop.call_soon_threadsafe(task.cancel)
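A hypothetical usage sketch (reusing the test() coroutine from the question above; not part of the original post):
import time

b = B()
b.start()
task = b.add_task(test())   # a real asyncio.Task, created inside the loop's thread
time.sleep(10)
b.cancel(task)              # thread-safe cancellation
b.stop()
b.join()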
Since version 3.4.4 asyncio provides a function called run_coroutine_threadsafe to submit a coroutine object from a thread to an event loop. It returns a concurrent.futures.Future to access the result or cancel the task.
Using your example:
import asyncio
import threading
import time

@asyncio.coroutine
def test(loop):
    try:
        while True:
            print("Running")
            yield from asyncio.sleep(1, loop=loop)
    except asyncio.CancelledError:
        print("Cancelled")
        loop.stop()
        raise

loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever)
future = asyncio.run_coroutine_threadsafe(test(loop), loop)
thread.start()

time.sleep(5)
future.cancel()
thread.join()
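Note that test() calls loop.stop() from inside its CancelledError handler; that is what lets run_forever() return and thread.join() complete once future.cancel() is called from the main thread.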