Async Python Background Task

I have a sync program with a connection updater that checks whether a connection is still open; it runs in a background thread. This works as expected and runs continuously.
How would the same thing be possible in asyncio? Would asyncio.ensure_future be the way to do this, or is there another Pythonic way to accomplish the same?
import threading

def main():
    _check_conn = threading.Thread(target=_check_conn_bg)
    _check_conn.daemon = True
    _check_conn.start()

def _check_conn_bg():
    while True:
        ...  # do checking code

Use asyncio.ensure_future(). In your coroutine, make sure your code won't block the event loop (otherwise use loop.run_in_executor()). It's also good to catch asyncio.CancelledError exceptions.
import asyncio

class Connection:
    pass

async def _check_conn_bg(conn: Connection):
    try:
        while True:
            # do checking code
            print(f'Checking connection: {conn}')
            await asyncio.sleep(1)
    except asyncio.CancelledError:
        print('Conn check task cancelled')

def main():
    asyncio.ensure_future(_check_conn_bg(Connection()))
    asyncio.get_event_loop().run_forever()

main()
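If the check itself is blocking (for example a plain socket call that cannot be awaited), the loop.run_in_executor() suggestion above would look roughly like this. A minimal sketch; check_conn_blocking() is a hypothetical blocking helper, not part of the original code:

import asyncio

def check_conn_blocking(conn):
    # hypothetical blocking check, e.g. a plain socket operation
    ...

async def _check_conn_bg(conn):
    loop = asyncio.get_event_loop()
    try:
        while True:
            # run the blocking check in the default thread pool executor
            await loop.run_in_executor(None, check_conn_blocking, conn)
            await asyncio.sleep(1)
    except asyncio.CancelledError:
        print('Conn check task cancelled')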

Related

How to handle Timeout Exception in the asyncio task itself just before asyncio raises TimeoutError

Due to one use case, one of my long-running functions executes multiple instructions, but I have to give a maximum time for its execution. If the function is not able to finish within the allocated time, it should clean up its progress and return.
Let's have a look at the sample code below:
import asyncio

async def eternity():
    # Sleep for one hour
    try:
        await asyncio.sleep(3600)
        print('yay!, everything is done..')
    except Exception as e:
        print("I have to clean up lot of thing in case of Exception or not able to finish by the allocated time")

async def main():
    try:
        ref = await asyncio.wait_for(eternity(), timeout=5)
    except asyncio.exceptions.TimeoutError:
        print('timeout!')

asyncio.run(main())
The function eternity is the long-running function. The catch is that, in case of some exception or reaching the maximum allocated time, the function needs to clean up the mess it has made.
P.S. eternity is an independent function and only it can understand what to clean.
I am looking for a way to raise an exception inside my task just before the timeout, OR send some interrupt or terminate signal to the task and handle it.
Basically, I want to execute some piece of code in my task before asyncio raises the TimeoutError and takes control.
Also, I am using Python 3.9.
Hope I was able to explain the problem.
What you need is an async context manager:
import asyncio

class MyClass(object):
    async def eternity(self):
        # Sleep for one hour
        await asyncio.sleep(3600)
        print('yay!, everything is done..')

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        print("I have to clean up lot of thing in case of Exception or not able to finish by the allocated time")

async def main():
    try:
        async with MyClass() as my_class:
            ref = await asyncio.wait_for(my_class.eternity(), timeout=5)
    except asyncio.exceptions.TimeoutError:
        print('timeout!')

asyncio.run(main())
And this is the output:
I have to clean up lot of thing in case of Exception or not able to finish by the allocated time
timeout!
For more details, take a look here.
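Another option that matches the original ask of cleaning up inside eternity itself: asyncio.wait_for() cancels the inner task on timeout, so the coroutine receives asyncio.CancelledError and can run its own cleanup before the TimeoutError propagates. A minimal sketch, assuming the same eternity/main structure as the question:

import asyncio

async def eternity():
    try:
        await asyncio.sleep(3600)
        print('yay!, everything is done..')
    except asyncio.CancelledError:
        # runs when wait_for() cancels this task at the timeout
        print('cleaning up inside eternity before the timeout propagates')
        raise  # re-raise so wait_for() can turn the cancellation into TimeoutError

async def main():
    try:
        await asyncio.wait_for(eternity(), timeout=5)
    except asyncio.TimeoutError:
        print('timeout!')

asyncio.run(main())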

Python asyncio loop already running when using asyncio.run and trying to add tasks

I am very new to asyncio and I find the EchoServer example very confusing. I am trying to achieve a simple situation where a server accepts multiple clients, runs that in a coroutine and handles data, and a UI thread which handles ncurses input. I currently have the following, which, in code, conveys the idea I think. But it does not work.
import asyncio

async def do_ui():
    await asyncio.sleep(1)
    print('doing')
    await do_ui()

async def run_game():
    loop = asyncio.get_running_loop()
    server = await GameServer.create_server()
    loop.create_task(server.serve_forever())
    loop.create_task(do_ui())
    loop.run_forever()

def run():
    asyncio.run(run_game())
The problem starts in GameServer.create_server, to which I, for encapsulation reasons, want to delegate creating the server. However, this is an asynchronous action (for some reason) and so has to be awaited. See the server code below:
class GameServer:
    @staticmethod
    async def create_server():
        loop = asyncio.get_running_loop()
        return await loop.create_server(
            lambda: GameServerProtocol(),
            '127.0.0.1', 8888
        )
This forces me to make run_game async as well and await it in the run method, which is my setup.py entry point, so I can't do that. Using the asyncio.run method, however, starts a different event loop and I am not able to access it anymore.
How do I solve this? And to vent my frustration, how is this in any way easier than just using threads?
You cannot use loop.run_forever() whilst the event loop is already running. For example, the following code will not work:
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    loop.run_forever()

if __name__ == '__main__':
    asyncio.run(main())
But this code will work, and has the "run forever" behaviour you appear to be looking for:
import asyncio

async def do_ui():
    while True:
        await asyncio.sleep(1)
        print('doing ui')

async def main():
    loop = asyncio.get_running_loop()
    loop.create_task(do_ui())
    # insert game server code here

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.create_task(main())
    loop.run_forever()
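If you would rather keep asyncio.run() as the single entry point (the pattern the question started from), an equivalent sketch is to make the top-level coroutine itself run forever by awaiting the long-running work; GameServer here is assumed from the question's code:

import asyncio

async def do_ui():
    while True:
        await asyncio.sleep(1)
        print('doing ui')

async def run_game():
    server = await GameServer.create_server()   # from the question's class
    ui_task = asyncio.create_task(do_ui())
    # serve_forever() never returns, so asyncio.run() keeps the loop alive
    await asyncio.gather(server.serve_forever(), ui_task)

def run():
    asyncio.run(run_game())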

Async Errors in python

I'm coding a telegram userbot (with telethon) which sends a message, every 60 seconds, to some chats.
I'm using 2 threads but I get the following errors: "RuntimeWarning: coroutine 'sender' was never awaited" and "no running event loop".
My code:
....
async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

schedule.every(60).seconds.do(asyncio.create_task(sender()))
...

class checker1(Thread):
    def run(self):
        while True:
            schedule.run_pending()
            time.sleep(1)

class checker2(Thread):
    def run(self):
        while True:
            client.add_event_handler(handler)
            client.run_until_disconnected()

checker2().start()
checker1().start()
I searched for a solution but I didn't find anything...
You should avoid using threads with asyncio unless you know what you're doing. The code can be rewritten using asyncio as follows, since most of the time you don't actually need threads:
import asyncio

async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

async def checker1():
    while True:
        await sender()
        await asyncio.sleep(60)  # every 60s

async def main():
    await asyncio.create_task(checker1())  # background task
    await client.run_until_disconnected()

client.loop.run_until_complete(main())
This code is not perfect (you should properly cancel and wait checker1 at the end of the program), but it should work.
As a side note, you don't need client.run_until_disconnected(). The call simply blocks (runs) until the client is disconnected. If you can keep the program running differently, as long as asyncio runs, the client will work.
Another thing: a bare except: is a very bad idea and will probably cause issues with exception handling. At the very least, replace it with except Exception.
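A minimal sketch of the "properly cancel and wait checker1" cleanup mentioned above, with checker1 started as a non-awaited background task so the program can actually reach the shutdown code (same names as the code above):

async def main():
    task = asyncio.create_task(checker1())   # run checker1 in the background
    try:
        await client.run_until_disconnected()
    finally:
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass  # expected: the task was cancelled on purpose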
There are a few problems with your code. asyncio is complaining about "no running event loop" because your program never starts the event loop anywhere, and tasks can't be scheduled without a running event loop. See Asyncio in corroutine RuntimeError: no running event loop. In order to start the event loop, you can use loop.run_until_complete() if you have a main coroutine for your program, or you can use asyncio.get_event_loop().run_forever() to run the event loop forever.
The second problem is the incorrect usage of schedule.every(60).seconds.do(), which is hidden by the first error. schedule expects a function to be passed in, not an awaitable (which is what asyncio.create_task(sender()) returns). This normally would have caused a TypeError, but the create_task() without a running event loop raised an exception first, so this exception was never raised. You'll need to define a function and then pass it to schedule, like this:
def start_sender():
    asyncio.create_task(sender())

schedule.every(60).seconds.do(start_sender)
This should work as long as the event loop is started somewhere else in your program.
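For completeness, a minimal sketch of starting that event loop without extra threads, assuming the telethon client and the schedule setup from the question; schedule.run_pending() is driven from a coroutine, so start_sender()'s create_task() always has a running loop:

import asyncio
import schedule

async def run_schedule():
    # drive the `schedule` library from inside the asyncio event loop
    while True:
        schedule.run_pending()
        await asyncio.sleep(1)

async def main():
    asyncio.create_task(run_schedule())      # background scheduler
    await client.run_until_disconnected()    # telethon client from the question

client.loop.run_until_complete(main())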

Fire, Forget, and Return Value in Python3.7

I have the following scenario:
I have a python server that upon receiving a request, needs to parse some information, return the result to the user as quickly as possible, and then clean up after itself.
I tried to design it using the following logic:
Consumer: *==*  (wait for result)  *====(continue running)=====...
              \                   / return
Producer:      *======(parse)====*=*
                                    \
Cleanup:                             *==========*
I've been trying to use async tasks and coroutines to make this scenario work, to no avail. Everything I tried ends up with either the producer waiting for the cleanup to finish before returning, or the return killing the cleanup.
I could in theory have the consumer call the cleanup after it displays the result to the user, but I refuse to believe Python doesn't know how to "fire-and-forget" and return.
For example, this code:
import asyncio

async def Slowpoke():
    print("I see you shiver with antici...")
    await asyncio.sleep(3)
    print("...pation!")

async def main():
    task = asyncio.create_task(Slowpoke())
    return "Hi!"

if __name__ == "__main__":
    print(asyncio.run(main()))
    while True:
        pass
returns:
I see you shiver with antici...
Hi!
and never gets to ...pation.
What am I missing?
I managed to get it working using threading instead of asyncio:
import threading
import time

def Slowpoke():
    print("I see you shiver with antici...")
    time.sleep(3)
    print("...pation")

def Rocky():
    t = threading.Thread(name="thread", target=Slowpoke)
    t.daemon = True
    t.start()
    time.sleep(1)
    return "HI!"

if __name__ == "__main__":
    print(Rocky())
    while True:
        time.sleep(1)
asyncio doesn't seem particularly suited for this problem. You probably want simple threads:
The reasoning for this is that your task was being killed when the parent finished. By throwing a daemon thread out there, your task will continue to run until it finishes, or until the program exits.
import threading
import time

def Slowpoke():
    try:
        print("I see you shiver with antici...")
        time.sleep(3)
        print("...pation!")
    except:
        print("Yup")
        raise Exception()

def main():
    task = threading.Thread(target=Slowpoke)
    task.daemon = True
    task.start()
    return "Hi!"

if __name__ == "__main__":
    print(main())
    while True:
        pass
asyncio.run ...
[...] creates a new event loop and closes it at the end. [...]
Your coro, wrapped in a task, does not get a chance to complete during the execution of main.
If you return the Task object and print it, you'll see that it is in a cancelled state:
async def main():
    task = asyncio.create_task(Slowpoke())
    # return "Hi!"
    return task

if __name__ == "__main__":
    print(asyncio.run(main()))
    # I see you shiver with antici...
    # <Task cancelled coro=<Slowpoke() done, defined at [...]>>
When main ends after creating and scheduling the task (and printing 'Hi!'), the event loop is closed, which causes all running tasks in it to get cancelled.
You need to keep the event loop running until the task has completed, e.g. by awaiting it in main:
async def main():
    task = asyncio.create_task(Slowpoke())
    await task
    return task

if __name__ == "__main__":
    print(asyncio.run(main()))
    # I see you shiver with antici...
    # ...pation!
    # <Task finished coro=<Slowpoke() done, defined at [..]> result=None>
(I hope I properly understood your question. The ASCII image and the text description do not correspond fully in my mind. "Hi!" is the result and the "Antici..pation" is the cleanup, right? I like that musical too, BTW.)
One possible asyncio-based solution is to return the result ASAP. A return terminates the task, which is why it is necessary to fire-and-forget the cleanup. It must be accompanied by shutdown code that waits for all cleanups to finish.
import asyncio

async def Slowpoke():
    print("I see you shiver with antici...")
    await asyncio.sleep(3)
    print("...pation!")

async def main():
    result = "Hi!"
    asyncio.create_task(Slowpoke())
    return result

async def start_stop():
    # you can create multiple tasks to serve multiple requests
    task = asyncio.create_task(main())
    print(await task)
    # after the last request wait for cleanups to finish
    this_task = asyncio.current_task()
    all_tasks = [
        task for task in asyncio.all_tasks()
        if task is not this_task]
    await asyncio.wait(all_tasks)

if __name__ == "__main__":
    asyncio.run(start_stop())
Another solution would be to use some other method (not return) to deliver the result to the waiting task, so the cleanup can start right after parsing. A Future is considered low-level, but here is an example anyway.
import asyncio

async def main(fut):
    fut.set_result("Hi!")
    # result delivered, continue with cleanup
    print("I see you shiver with antici...")
    await asyncio.sleep(3)
    print("...pation!")

async def start_stop():
    fut = asyncio.get_event_loop().create_future()
    task = asyncio.create_task(main(fut))
    print(await fut)
    this_task = asyncio.current_task()
    all_tasks = [
        task for task in asyncio.all_tasks()
        if task is not this_task]
    await asyncio.wait(all_tasks)

if __name__ == "__main__":
    asyncio.run(start_stop())

Python: Websockets in Synchronous Program

I have a bog-standard synchronous python program that needs to be able to read data from websockets and update the GUI with the data. However, asyncio creep is constantly tripping me up.
How do I make a module that:
accepts multiple subscriptions to multiple sources
sends an update to the requester whenever there's data
opens exactly one websocket connection per URL
resets the websocket if it closes
Here's what I have already, but it's failing at many points:
run_forever() means that the loop gets stuck before the subscription completes and then handle() is stuck in the falsey while loop
it does not seem to want to restart sockets when they're down because a websockets object does not have a connected property (websocket without an s does, but I'm not clear on the differences and can't find info online either)
I'm absolutely not sure if my approach is remotely correct.
Been fighting with this for weeks. Would appreciate some pointers.
class WSClient():
    subscriptions = set()
    connections = {}
    started = False

    def __init__(self):
        self.loop = asyncio.get_event_loop()

    def start(self):
        self.started = True
        self.loop.run_until_complete(self.handle())
        self.loop.run_forever()  # problematic, because it does not allow new subscribe() events

    async def handle(self):
        while len(self.connections) > 0:
            # listen to every websocket
            futures = [self.listen(self.connections[url]) for url in self.connections]
            done, pending = await asyncio.wait(futures)
            # the following is apparently necessary to avoid warnings
            # about non-retrieved exceptions etc
            try:
                data, ws = done.pop().result()
            except Exception as e:
                print("OTHER EXCEPTION", e)
            for task in pending:
                task.cancel()

    async def listen(self, ws):
        try:
            async for data in ws:
                data = json.loads(data)
                # call the subscriber (listener) back when there's data
                [s.listener._handle_result(data) for s in self.subscriptions if s.ws == ws]
        except Exception as e:
            print('ERROR LISTENING; RESTARTING SOCKET', e)
            await asyncio.sleep(2)
            self.restart_socket(ws)

    def subscribe(self, subscription):
        task = self.loop.create_task(self._subscribe(subscription))
        asyncio.gather(task)
        if not self.started:
            self.start()

    async def _subscribe(self, subscription):
        try:
            ws = self.connections.get(subscription.url, await websockets.connect(subscription.url))
            await ws.send(json.dumps(subscription.sub_msg))
            subscription.ws = ws
            self.connections[subscription.url] = ws
            self.subscriptions.add(subscription)
        except Exception as e:
            print("ERROR SUBSCRIBING; RETRYING", e)
            await asyncio.sleep(2)
            self.subscribe(subscription)

    def restart_socket(self, ws):
        for s in self.subscriptions:
            if s.ws == ws and not s.ws.connected:
                print(s)
                del self.connections[s.url]
                self.subscribe(s)
I have a bog-standard synchronous python program that needs to be able to read data from websockets and update the GUI with the data. However, asyncio creep is constantly tripping me up.
As you mentioned a GUI, this is probably not a "bog-standard synchronous python program". A GUI program usually has a non-blocking, event-driven main thread, which allows concurrent user actions and callbacks. That is very similar to asyncio, and a common way for asyncio to work together with GUIs is to use a GUI-specific event loop to replace asyncio's default event loop, so that your asyncio coroutines run in the GUI event loop and you avoid a run_forever() call that blocks everything.
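As a rough illustration of combining a GUI with asyncio in one thread (a sketch only, using tkinter because it ships with Python; a GUI-specific event loop integration as described above is the cleaner option):

import asyncio
import tkinter as tk

async def pump_tk(root, interval=0.05):
    # periodically process tkinter events from inside the asyncio loop
    try:
        while True:
            root.update()
            await asyncio.sleep(interval)
    except tk.TclError:
        pass  # window was closed

async def main():
    root = tk.Tk()
    # ... create widgets and start websocket coroutines here ...
    await pump_tk(root)

asyncio.run(main())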
An alternative way is to run the asyncio event loop in a separate thread, so that your program can wait for websocket data and for user clicks at the same time. I've rewritten your code as follows:
import asyncio
import threading
import websockets
import json

class WSClient(threading.Thread):
    def __init__(self):
        super().__init__()
        self._loop = None
        self._tasks = {}
        self._stop_event = None

    def run(self):
        self._loop = asyncio.new_event_loop()
        self._stop_event = asyncio.Event(loop=self._loop)
        try:
            self._loop.run_until_complete(self._stop_event.wait())
            self._loop.run_until_complete(self._clean())
        finally:
            self._loop.close()

    def stop(self):
        self._loop.call_soon_threadsafe(self._stop_event.set)

    def subscribe(self, url, sub_msg, callback):
        def _subscribe():
            if url not in self._tasks:
                task = self._loop.create_task(
                    self._listen(url, sub_msg, callback))
                self._tasks[url] = task
        self._loop.call_soon_threadsafe(_subscribe)

    def unsubscribe(self, url):
        def _unsubscribe():
            task = self._tasks.pop(url, None)
            if task is not None:
                task.cancel()
        self._loop.call_soon_threadsafe(_unsubscribe)

    async def _listen(self, url, sub_msg, callback):
        try:
            while not self._stop_event.is_set():
                try:
                    ws = await websockets.connect(url, loop=self._loop)
                    await ws.send(json.dumps(sub_msg))
                    async for data in ws:
                        data = json.loads(data)
                        # NOTE: please make sure that `callback` won't block,
                        # and it is allowed to update GUI from threads.
                        # If not, you'll need to find a way to call it from
                        # main/GUI thread (similar to `call_soon_threadsafe`)
                        callback(data)
                except Exception as e:
                    print('ERROR; RESTARTING SOCKET IN 2 SECONDS', e)
                    await asyncio.sleep(2, loop=self._loop)
        finally:
            self._tasks.pop(url, None)

    async def _clean(self):
        for task in self._tasks.values():
            task.cancel()
        await asyncio.gather(*self._tasks.values(), loop=self._loop)
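A short usage sketch from the synchronous side, with a hypothetical URL and subscription message; note that the callback runs in the WSClient thread, as the comment in _listen warns:

import time

def on_data(data):
    # called from the WSClient thread whenever a message arrives
    print('got update:', data)

ws_client = WSClient()
ws_client.start()   # starts the event loop in its own thread
time.sleep(1)       # give the background loop a moment to be created in run()

# hypothetical endpoint and subscription message
ws_client.subscribe('wss://example.com/feed', {'op': 'subscribe'}, on_data)

time.sleep(30)      # the synchronous program keeps doing its own work
ws_client.stop()
ws_client.join()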
You can try tornado and autobahn-twisted for websockets.
