I am creating a program where a new class is instantiated (let's call it Profile), and data is then streamed in to update the profile. The periodic() function is also supposed to periodically perform a checkup inside the instantiated profile, to make sure things are in order.
def _run():
    stream = Stream(...)
    api = API(...)
    try:
        _LOG.info("Generating Data.")
        ...

        async def periodic():
            while True:
                if not api.get_clock().is_open:
                    raise RuntimeError()
                await asyncio.sleep(60)
                items = api.get_items()
                for item in items:
                    ...class.do_checkup(item)...

        loop = asyncio.get_event_loop()
        loop.run_until_complete(asyncio.gather(
            stream.run(),
            periodic(),
        ))
    except KeyboardInterrupt:
        # stream.stop()
        # loop.close()
        loop.stop()
        exit(0)
    except RuntimeError:
        _LOG.info('...')
    except Exception as e:
        _LOG.error(e)
    finally:
        _LOG.info("Trying to re-establish connection.")
        time.sleep(3)
        # TODO: restart here
        # _run()
When I try to stop it, though, the code above complains:
UP (BASE): Generating Data.
...
.../__main__.py:89: RuntimeWarning: coroutine '_run.<locals>.periodic' was never awaited
_LOG.error(e)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
keyboard interrupt, bye
UP (BASE): An asyncio.Future, a coroutine or an awaitable is required
What is the correct way to have asyncio run periodic correctly?
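For reference, a minimal sketch of one way to restructure this (assuming Stream.run() is a coroutine, and with the elided checkup logic left as ...): let asyncio.run() own the event loop and await both coroutines through a single gather(), so that periodic() is actually awaited:

import asyncio

async def _run_async():
    stream = Stream(...)
    api = API(...)

    async def periodic():
        while True:
            if not api.get_clock().is_open:
                raise RuntimeError()
            await asyncio.sleep(60)
            for item in api.get_items():
                ...  # do_checkup(item)

    # both coroutines are awaited inside the running loop,
    # so neither is "never awaited"
    await asyncio.gather(stream.run(), periodic())

def _run():
    try:
        asyncio.run(_run_async())
    except KeyboardInterrupt:
        exit(0)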
Related
Due to one use case, one of my long-running functions executes multiple instructions, but I have to give it a maximum time for its execution. If the function is not able to finish within the allocated time, it should clean up its progress and return.
Let's have a look at the sample code below:
import asyncio

async def eternity():
    # Sleep for one hour
    try:
        await asyncio.sleep(3600)
        print('yay!, everything is done..')
    except Exception as e:
        print("I have to clean up lot of thing in case of Exception or not able to finish by the allocated time")

async def main():
    try:
        ref = await asyncio.wait_for(eternity(), timeout=5)
    except asyncio.exceptions.TimeoutError:
        print('timeout!')

asyncio.run(main())
The function eternity is the long-running function. The catch is that, in case of some exception or reaching the maximum allocated time, the function needs to clean up the mess it has made.
P.S. eternity is an independent function and only it can understand what to clean.
I am looking for a way to raise an exception inside my task just before the timeout, OR to send some interrupt or terminate signal to the task and handle it.
Basically, I want to execute some piece of code in my task before asyncio raises the TimeoutError and takes control.
Also, I am using Python 3.9.
Hope I was able to explain the problem.
What you need is an async context manager:
import asyncio

class MyClass(object):
    async def eternity(self):
        # Sleep for one hour
        await asyncio.sleep(3600)
        print('yay!, everything is done..')

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        print("I have to clean up lot of thing in case of Exception or not able to finish by the allocated time")

async def main():
    try:
        async with MyClass() as my_class:
            ref = await asyncio.wait_for(my_class.eternity(), timeout=5)
    except asyncio.exceptions.TimeoutError:
        print('timeout!')

asyncio.run(main())
And this is the output:
I have to clean up lot of thing in case of Exception or not able to finish by the allocated time
timeout!
For more details, take a look here.
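As an aside (not part of the answer above): since Python 3.8, asyncio.CancelledError inherits from BaseException rather than Exception, which is why the except Exception inside the original eternity never fires on timeout. A finally block inside the coroutine itself is another way to run the cleanup; a minimal sketch:

import asyncio

async def eternity():
    try:
        await asyncio.sleep(3600)
        print('yay!, everything is done..')
    finally:
        # runs on normal completion and when wait_for() cancels the task
        print('cleaning up...')

async def main():
    try:
        await asyncio.wait_for(eternity(), timeout=5)
    except asyncio.TimeoutError:
        print('timeout!')

asyncio.run(main())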
Here is the code. I thought the program would crash at once because of the uncaught exception; however, it waited the full 10 s until the main task coro2 completed.
import asyncio

@asyncio.coroutine
def coro1():
    print("coro1 primed")
    yield
    raise Exception("abc")

@asyncio.coroutine
def coro2(loop):
    try:
        print("coro2 primed")
        ts = [asyncio.Task(coro1(), loop=loop) for _ in range(2)]
        res = yield from asyncio.sleep(10)
        print(res)
    except Exception as e:
        print(e)
        raise

loop = asyncio.get_event_loop()
loop.run_until_complete(coro2(loop))
I think this is a serious problem because, in more complicated programs, this makes the process hang forever instead of crashing with exception information.
Besides, I set a breakpoint in the except block in the source code of run_until_complete, but it was not triggered. I am interested in which piece of code handled that exception in Python's asyncio.
First, there is no reason to use generator-based coroutines in Python, with the async/await syntax available for many years and the coroutine decorator now deprecated and scheduled for removal. Also, you don't need to pass the event loop down to each coroutine; you can always use asyncio.get_event_loop() to obtain it when you need it. But these are unrelated to your question.
The except block in coro2 didn't trigger because the exception raised in coro1 didn't propagate to coro2. This is because you explicitly ran coro1 as a task, which executed it in the background, and didn't await it. You should always ensure that your tasks are awaited and then exceptions won't pass unnoticed; doing this systematically is sometimes referred to as structured concurrency.
The correct way to write the above would be something like:
async def coro1():
    print("coro1 primed")
    await asyncio.sleep(0)  # yield to the event loop
    raise Exception("abc")

async def coro2():
    try:
        print("coro2 primed")
        ts = [asyncio.create_task(coro1()) for _ in range(2)]
        res = await asyncio.sleep(10)
        # ensure we pick up results of the tasks that we've started
        for t in ts:
            await t
        print(res)
    except Exception as e:
        print(e)
        raise

asyncio.run(coro2())
Note that this will run sleep() to completion and only then propagate the exceptions raised by the background tasks. If you wanted to propagate immediately, you could use asyncio.gather(), in which case you wouldn't have to bother with explicitly creating tasks in the first place:
async def coro2():
    try:
        print("coro2 primed")
        res, *ignored = await asyncio.gather(
            asyncio.sleep(10),
            *[coro1() for _ in range(2)]
        )
        print(res)
    except Exception as e:
        print(e)
        raise
I am interested in which piece of code handled that exception in python asyncio.
An exception raised by a coroutine which is not handled is caught by asyncio and stored in the task object. This allows you to await the task or (if you know it's completed) obtain its result using the result() method, either of which will propagate (re-raise) the exception. Since your code never accessed the task's result, the exception instance remained forgotten inside the task object. Python goes so far to notice this and print a "Task exception was never retrieved" warning when the task object is destroyed along with a traceback, but this warning is provided on a best-effort basis, usually comes too late, and should not be relied upon.
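A minimal sketch of that behavior (the names here are illustrative, not from the question):

import asyncio

async def fails():
    raise ValueError("boom")

async def main():
    task = asyncio.create_task(fails())
    await asyncio.sleep(0.1)   # give the task a chance to run and fail
    print(task.done())         # True: the exception is stored in the task
    try:
        task.result()          # retrieving the result re-raises it
    except ValueError as e:
        print("retrieved:", e)

asyncio.run(main())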
I'm coding a telegram userbot (with telethon) which sends a message, every 60 seconds, to some chats.
I'm using 2 threads but I get the following errors: "RuntimeWarning: coroutine 'sender' was never awaited" and "no running event loop".
My code:
....

async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

schedule.every(60).seconds.do(asyncio.create_task(sender()))

...

class checker1(Thread):
    def run(self):
        while True:
            schedule.run_pending()
            time.sleep(1)

class checker2(Thread):
    def run(self):
        while True:
            client.add_event_handler(handler)
            client.run_until_disconnected()

checker2().start()
checker1().start()
I searched for a solution but I didn't find anything...
You should avoid using threads with asyncio unless you know what you're doing. The code can be rewritten using asyncio as follows, since most of the time you don't actually need threads:
import asyncio

async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

async def checker1():
    while True:
        await sender()
        await asyncio.sleep(60)  # every 60s

async def main():
    task = asyncio.create_task(checker1())  # background task, not awaited here
    await client.run_until_disconnected()

client.loop.run_until_complete(main())
This code is not perfect (you should properly cancel and wait for checker1 at the end of the program), but it should work.
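A minimal sketch of that proper shutdown, under the same assumptions (the client and checker1 from above):

async def main():
    task = asyncio.create_task(checker1())  # background task
    try:
        await client.run_until_disconnected()
    finally:
        # cancel the background task and wait for it on the way out
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass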
As a side note, you don't need client.run_until_disconnected(). The call simply blocks (runs) until the client is disconnected. If you can keep the program running differently, as long as asyncio runs, the client will work.
Another thing: a bare except: is a very bad idea and will probably cause issues with exception handling. At the very least, replace it with except Exception.
There are a few problems with your code. asyncio is complaining about "no running event loop" because your program never starts the event loop anywhere, and tasks can't be scheduled without a running event loop. See Asyncio in corroutine RuntimeError: no running event loop. In order to start the event loop, you can use loop.run_until_complete() if you have a main coroutine for your program, or you can use asyncio.get_event_loop().run_forever() to run the event loop forever.
The second problem is the incorrect usage of schedule.every(60).seconds.do(), which is hidden by the first error. schedule expects a function to be passed in, not an awaitable (which is what asyncio.create_task(sender()) returns). This normally would have caused a TypeError, but create_task() raised its own exception first, because there was no running event loop, so the TypeError was never reached. You'll need to define a function and then pass it to schedule, like this:
def start_sender():
    asyncio.create_task(sender())

schedule.every(60).seconds.do(start_sender)
This should work as long as the event loop is started somewhere else in your program.
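For example (a sketch assuming the telethon client from the question), schedule.run_pending() can be driven from inside the running event loop instead of from a thread, so that start_sender()'s create_task() always has a running loop to attach to:

import asyncio
import schedule

async def pump_schedule():
    # check for due jobs once a second without blocking the loop
    while True:
        schedule.run_pending()
        await asyncio.sleep(1)

async def main():
    pump = asyncio.create_task(pump_schedule())  # background task
    await client.run_until_disconnected()

client.loop.run_until_complete(main())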
I'm trying to resolve this error: RuntimeError: Cannot close a running event loop in my asyncio process. I believe it's happening because there's a failure while tasks are still pending, and then I try to close the event loop. I'm thinking I need to await the remaining responses prior to closing the event loop, but I'm not sure how to accomplish that correctly in my specific situation.
def start_job(self):
    if self.auth_expire_timestamp < get_timestamp():
        api_obj = api_handler.Api('Api Name', self.dbObj)
        self.api_auth_resp = api_obj.get_auth_response()
        self.api_attr = api_obj.get_attributes()
    try:
        self.queue_manager(self.do_stuff(json_data))
    except aiohttp.ServerDisconnectedError as e:
        logging.info("Reconnecting...")
        api_obj = api_handler.Api('API Name', self.dbObj)
        self.api_auth_resp = api_obj.get_auth_response()
        self.api_attr = api_obj.get_attributes()
        self.run_eligibility()

async def do_stuff(self, data):
    tasks = []
    async with aiohttp.ClientSession() as session:
        for row in data:
            task = asyncio.ensure_future(self.async_post('url', session, row))
            tasks.append(task)
        result = await asyncio.gather(*tasks)
    self.load_results(result)

def queue_manager(self, method):
    self.loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(method)
    self.loop.run_until_complete(future)

async def async_post(self, resource, session, data):
    async with session.post(self.api_attr.api_endpoint + resource, headers=self.headers, data=data) as response:
        resp = []
        try:
            headers = response.headers['foo']
            content = await response.read()
            resp.append(headers)
            resp.append(content)
        except KeyError as e:
            logging.error('KeyError at async_post response')
            logging.error(e)
    return resp

def shutdown(self):
    # need to do something here to await the remaining tasks, and then I need
    # to re-start a new event loop, which I think I can do; I just don't know
    # how to appropriately stop the current one.
    self.loop.close()
    return True
How can I handle the error and properly close the event loop so I can start a new one and essentially re-boot the whole program and continue on?
EDIT:
This is what I'm trying now, based on this SO answer. Unfortunately, this error only happens rarely, so unless I can force it, I will have to wait and see if it works. In my queue_manager method I changed it to this:
try:
    self.loop.run_until_complete(future)
except Exception as e:
    future.cancel()
    self.loop.run_until_complete(future)
    future.exception()
UPDATE:
I got rid of the shutdown() method and added this to my queue_manager() method instead and it seems to be working without issue:
try:
    self.loop.run_until_complete(future)
except Exception as e:
    future.cancel()
    self.check_in_records()
    self.reconnect()
    self.start_job()
    future.exception()
To answer the question as originally stated, there is no need to close() a running loop; you can reuse the same loop for the whole program.
Given the code in the update, your queue_manager could look like this:
try:
    self.loop.run_until_complete(future)
except Exception as e:
    self.check_in_records()
    self.reconnect()
    self.start_job()
Cancelling the future is not necessary and, as far as I can tell, has no effect. This is different from the referenced answer, which specifically reacts to KeyboardInterrupt, special because it is raised by asyncio itself. KeyboardInterrupt can be propagated by run_until_complete without the future having actually completed. Handling Ctrl-C correctly in asyncio is very hard or even impossible (see here for details), but fortunately this question is not about Ctrl-C at all; it is about exceptions raised by the coroutine. (Note that KeyboardInterrupt doesn't inherit from Exception, so in case of Ctrl-C the except body won't even execute.)
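The class hierarchy is easy to verify:

# KeyboardInterrupt derives from BaseException, not Exception,
# so an `except Exception` clause will not catch Ctrl-C:
print(issubclass(KeyboardInterrupt, Exception))      # False
print(issubclass(KeyboardInterrupt, BaseException))  # True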
I was canceling the future because in this instance there are remaining tasks pending and I want to essentially remove those tasks and start a fresh event loop.
This is a correct thing to want to do, but the code in the (updated) question is only canceling a single future, the one already passed to run_until_complete. Recall that a future is a placeholder for a result value that will be provided at a later point. Once the value is provided, it can be retrieved by calling future.result(). If the "value" of the future is an exception, future.result() will raise that exception. run_until_complete has the contract that it will run the event loop for as long as it takes for the given future to produce a value, and then it returns that value. If the "value" is in fact an exception to raise, then run_until_complete will re-raise it. For example:
loop = asyncio.get_event_loop()
fut = loop.create_future()
loop.call_soon(fut.set_exception, ZeroDivisionError)
# raises ZeroDivisionError, as that is the future's result,
# manually set
loop.run_until_complete(fut)
When the future in question is in fact a Task, an asyncio-specific object that wraps a coroutine into a Future, the result of such a future is the object returned by the coroutine. If the coroutine raises an exception, then retrieving the result will re-raise it, and so will run_until_complete:
async def fail():
    1/0

loop = asyncio.get_event_loop()
fut = loop.create_task(fail())
# raises ZeroDivisionError, as that is the future's result,
# because the coroutine raises it
loop.run_until_complete(fut)
When dealing with a task, run_until_complete finishing means that the coroutine has finished as well, having either returned a value or raised an exception, as determined by run_until_complete returning or raising.
On the other hand, cancelling a task works by arranging for the task to be resumed and the await expression that suspended it to raise CancelledError. Unless the task specifically catches and suppresses this exception (which well-behaved asyncio code is not supposed to do), the task will stop executing and the CancelledError will become its result. However, if the coroutine is already finished when cancel() is called, then cancel() cannot do anything because there is no pending await to inject CancelledError into.
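A minimal sketch of that mechanism (illustrative names):

import asyncio

async def worker():
    try:
        await asyncio.sleep(10)   # CancelledError is injected at this await
    except asyncio.CancelledError:
        print("cancelled while suspended")
        raise                     # well-behaved code re-raises it

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.1)      # let the task reach its await
    task.cancel()                 # arranges for CancelledError in the task
    try:
        await task
    except asyncio.CancelledError:
        print("task finished as cancelled")

asyncio.run(main())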
I got the same error below:
RuntimeError: Cannot close a running event loop
When I called loop.close() in test() as shown below:
import asyncio

async def test(loop):
    print("Test")
    loop.stop()
    loop.close() # Here

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.create_task(test(loop))
loop.run_forever()
So, I used loop.close() after loop.run_forever() with try: and finally: as shown below, then the error was solved:
import asyncio

async def test(loop):
    print("Test")
    loop.stop()

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.create_task(test(loop))
try:
    loop.run_forever()
finally:
    loop.close() # Here
Sometimes, my coroutine cleanup code includes some blocking parts (in the asyncio sense, i.e. they may yield).
I try to design them carefully, so they don't block indefinitely. So, "by contract", a coroutine must never be interrupted once it's inside its cleanup fragment.
Unfortunately, I can't find a way to prevent this, and bad things occur when it happens (whether it's caused by an actual double cancel call, or when the coroutine has almost finished by itself and is doing cleanup, and happens to be cancelled from elsewhere).
Theoretically, I can delegate cleanup to some other function, protect it with a shield, and surround it with a try-except block, but it's just ugly.
Is there a Pythonic way to do so?
#!/usr/bin/env python3
import asyncio

@asyncio.coroutine
def foo():
    """
    This is the function in question,
    with blocking cleanup fragment.
    """
    try:
        yield from asyncio.sleep(1)
    except asyncio.CancelledError:
        print("Interrupted during work")
        raise
    finally:
        print("I need just a couple more seconds to cleanup!")
        try:
            # upload results to the database, whatever
            yield from asyncio.sleep(1)
        except asyncio.CancelledError:
            print("Interrupted during cleanup :(")
        else:
            print("All cleaned up!")

@asyncio.coroutine
def interrupt_during_work():
    # this is a good example, all cleanup
    # finishes successfully
    t = asyncio.async(foo())
    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    t.cancel()
    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def interrupt_during_cleanup():
    # here, cleanup is interrupted
    t = asyncio.async(foo())
    try:
        yield from asyncio.wait_for(t, 1.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    t.cancel()
    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def double_cancel():
    # cleanup is interrupted here as well
    t = asyncio.async(foo())
    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    t.cancel()
    try:
        yield from asyncio.wait_for(t, 0.5)
    except asyncio.TimeoutError:
        pass
    else:
        assert False, "should've been timed out"
    # although double cancel is easy to avoid in
    # this particular example, it might not be so obvious
    # in more complex code
    t.cancel()
    # wait for finish
    try:
        yield from t
    except asyncio.CancelledError:
        pass

@asyncio.coroutine
def comain():
    print("1. Interrupt during work")
    yield from interrupt_during_work()
    print("2. Interrupt during cleanup")
    yield from interrupt_during_cleanup()
    print("3. Double cancel")
    yield from double_cancel()

def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(comain())
    loop.run_until_complete(task)

if __name__ == "__main__":
    main()
I ended up writing a simple function that provides a stronger shield, so to speak.
Unlike asyncio.shield, which protects the callee, but raises CancelledError in its caller, this function suppresses CancelledError altogether.
The drawback is that this function doesn't allow you to handle CancelledError later. You won't see whether it has ever happened. Something slightly more complex would be required to do so.
@asyncio.coroutine
def super_shield(arg, *, loop=None):
    arg = asyncio.async(arg)
    while True:
        try:
            return (yield from asyncio.shield(arg, loop=loop))
        except asyncio.CancelledError:
            continue
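An illustrative use (my example, not from the original answer): awaiting foo() through super_shield means that cancelling the awaiting task, even twice, no longer interrupts foo()'s cleanup:

@asyncio.coroutine
def shielded_caller():
    # cancellations of shielded_caller() are absorbed by super_shield(),
    # so foo()'s cleanup runs to completion
    yield from super_shield(foo())

@asyncio.coroutine
def demo():
    t = asyncio.async(shielded_caller())
    yield from asyncio.sleep(0.5)
    t.cancel()                    # absorbed by super_shield()
    yield from asyncio.sleep(0.7)
    t.cancel()                    # the double cancel is absorbed too
    yield from asyncio.wait([t])  # foo() still prints "All cleaned up!"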
I found WGH's solution when encountering a similar problem. I'd like to await a thread, but regular asyncio cancellation (with or without shield) will just cancel the awaiter and leave the thread floating around, uncontrolled. Here is a modification of super_shield that optionally allows reacting to cancel requests and also handles cancellation from within the awaitable:
await protect(aw, lambda: print("Cancel request"))
This guarantees that the awaitable has finished or raised CancelledError from within. If your task could be cancelled by other means (e.g. setting a flag observed by a thread), you can use the optional cancel callback to enable cancellation.
Implementation:
import asyncio
import typing

async def protect(aw, cancel_cb: typing.Callable = None):
    """
    A variant of `asyncio.shield` that protects the awaitable as well
    as the awaiter from being cancelled.

    Cancellation events from the awaiter are turned into callbacks
    for handling cancellation requests manually.

    :param aw: Awaitable.
    :param cancel_cb: Optional cancellation callback.
    :return: Result of the awaitable.
    """
    task = asyncio.ensure_future(aw)
    while True:
        try:
            return await asyncio.shield(task)
        except asyncio.CancelledError:
            if task.done():
                raise
            if cancel_cb is not None:
                cancel_cb()