I have a signal handler that cancels all the tasks in the currently running asyncio event loop when the SIGINT signal is raised. In main, I create a new loop and run it until the sleep function completes. I have added print statements inside signal_handler to show what happens when an asyncio task is cancelled.
Below is my implementation,
import asyncio
import signal

class temp:
    def signal_handler(self, sig, frame):
        loop = asyncio.get_event_loop()
        tasks = asyncio.all_tasks(loop=loop)
        for task in tasks:
            print(task.get_name())  # returns the name of the task
            ret_val = asyncio.Future.cancel(task)  # returns True if the task was just cancelled
            print(f"Return value : {ret_val}")
            print(f"Task Cancelled : {task.cancelled()}")  # returns True if the task is cancelled
        return

    def main(self):
        try:
            signal.signal(signal.SIGINT, self.signal_handler)
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop=loop)
            loop.run_until_complete(asyncio.sleep(20))
        except asyncio.CancelledError as err:
            print("Cancellation error raised")
        finally:
            if not loop.is_closed():
                loop.close()

if __name__ == "__main__":
    test = temp()
    test.main()
Expected Behaviour:
When I raise a SIGINT at any time using Ctrl+C, the task (asyncio.sleep()) is cancelled immediately, a CancelledError is raised, and there is a graceful exit.
Actual Behaviour:
The CancelledError is raised only after the time t (in seconds) passed to asyncio.sleep(t) has elapsed. For example, for the code above it is raised after 20 seconds.
Unusual Observation:
The code behaves in line with the Expected Behaviour when executed on Windows.
The issue described above only occurs on Linux.
What could be the reason for this inconsistent behaviour?
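For comparison, a minimal Unix-only sketch (Python 3.7+) of the asyncio-native route, where loop.add_signal_handler schedules the cancellation inside the running loop rather than in a plain signal.signal handler:

import asyncio
import signal

async def main():
    loop = asyncio.get_running_loop()
    task = asyncio.current_task()
    # Ask the loop itself to cancel this task when SIGINT arrives.
    loop.add_signal_handler(signal.SIGINT, task.cancel)
    try:
        await asyncio.sleep(20)
    except asyncio.CancelledError:
        print("Cancellation error raised")

asyncio.run(main())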
Related
I'm trying to modify the graceful shutdown example from RogueLynn to cancel running processes that were spawned by the tasks.
Below is a minimal example to demonstrate the issue I'm facing. With this example, I get a warning message that the callback function isn't awaited and when I do try to terminate the script, the asyncio.gather call doesn't seem to complete. Any idea how to resolve this such that the shutdown callback executes completely?
import asyncio
import functools
import signal

async def run_process(time):
    try:
        print(f"Starting to sleep for {time} seconds")
        await asyncio.sleep(time)
        print(f"Completed sleep of {time} seconds")
    except asyncio.CancelledError:
        print("Received cancellation terminating process")
        raise

async def main():
    tasks = [run_process(10), run_process(5), run_process(2)]
    for future in asyncio.as_completed(tasks):
        try:
            await future
        except Exception as e:
            print(f"Caught exception: {e}")

async def shutdown(signal, loop):
    # Cancel running tasks on keyboard interrupt
    print(f"Running shutdown")
    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    [task.cancel() for task in tasks]
    await asyncio.gather(*tasks, return_exceptions=True)
    print("Finished waiting for cancelled tasks")
    loop.stop()

try:
    loop = asyncio.get_event_loop()
    signals = (signal.SIGINT,)
    for sig in signals:
        loop.add_signal_handler(sig, functools.partial(asyncio.create_task, shutdown(sig, loop)))
    loop.run_until_complete(main())
finally:
    loop.close()
Output when run to completion:
Starting to sleep for 2 seconds
Starting to sleep for 10 seconds
Starting to sleep for 5 seconds
Completed sleep of 2 seconds
Completed sleep of 5 seconds
Completed sleep of 10 seconds
/home/git/envs/lib/python3.8/asyncio/unix_events.py:140: RuntimeWarning: coroutine 'shutdown' was never awaited
del self._signal_handlers[sig]
And output when script is interrupted:
Starting to sleep for 2 seconds
Starting to sleep for 10 seconds
Starting to sleep for 5 seconds
Completed sleep of 2 seconds
^CRunning shutdown
Received cancellation terminating process
Received cancellation terminating process
Task was destroyed but it is pending!
task: <Task pending name='Task-5' coro=<shutdown() running at ./test.py:54> wait_for=<_GatheringFuture finished result=[CancelledError(), CancelledError(), CancelledError()]>>
Traceback (most recent call last):
File "./test.py", line 65, in <module>
loop.run_until_complete(main())
File "/home/git/envs/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
asyncio.exceptions.CancelledError
The CancelledError you see results from your as_completed for loop.
If you just want to fix it, you could add exception handling for it, e.g.
[...]
        try:
            await future
        except Exception as e:
            print(f"Caught exception: {e}")
        except asyncio.CancelledError:
            print("task was cancelled")
[...]
Note that you will still find a warning telling you that Task was destroyed but it is pending! for your shutdown task, which you could just ignore. I guess it appears because you stop the loop from within a task.
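Separately, the coroutine 'shutdown' was never awaited warning from the clean run most likely comes from functools.partial(asyncio.create_task, shutdown(sig, loop)) building the shutdown coroutine object eagerly at registration time; if no signal ever fires, that object is never awaited. A hedged sketch that defers creation until the signal actually arrives:

for sig in signals:
    # Build the coroutine lazily, only when the signal fires, so a
    # clean run leaves no un-awaited coroutine behind.
    loop.add_signal_handler(
        sig, lambda sig=sig: asyncio.create_task(shutdown(sig, loop))
    )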
Still, I would like to point out the difference between coroutines, tasks, and futures; see https://docs.python.org/3/library/asyncio-task.html.
What you call tasks and future are actually coroutines.
For the type of problem that you are trying to solve, I would advise having a look at Asynchronous Context Managers. Graceful shutdown sounds to me like you want to close some database connections or dump some process variables... Here, you could have a look at https://docs.python.org/3/library/contextlib.html#contextlib.asynccontextmanager
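For illustration, a minimal sketch of such an asynchronous context manager; the resource dict is a hypothetical stand-in for, say, a database connection:

import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def managed_resource():
    resource = {"open": True}  # hypothetical stand-in for a DB connection
    try:
        yield resource
    finally:
        # Runs on normal exit and on cancellation alike.
        resource["open"] = False
        print("resource released")

async def main():
    async with managed_resource() as res:
        await asyncio.sleep(1)

asyncio.run(main())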
However, if things become more complex, you may want to write a signal handler which adds its own task to the loop. In that case, I would advise creating the relevant tasks explicitly with asyncio.create_task(coro, name="my-task-name") so you can select exactly the tasks you want to cancel first by name, e.g.
tasks = [
    task for task in asyncio.all_tasks()
    if task.get_name().startswith("my-task")
]
Otherwise, you may accidentally cancel a cleanup task.
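A small, hypothetical end-to-end sketch of that naming scheme (the name argument to asyncio.create_task requires Python 3.8+):

import asyncio

async def worker(n):
    await asyncio.sleep(n)

async def main():
    # Tag the cancellable work with a common name prefix.
    for n in (1, 2, 3):
        asyncio.create_task(worker(n), name=f"my-task-{n}")
    await asyncio.sleep(0)  # let the tasks start
    tasks = [
        task for task in asyncio.all_tasks()
        if task.get_name().startswith("my-task")
    ]
    for task in tasks:
        task.cancel()
    # Wait until the cancellations have been processed.
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(main())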
I've read every post I could find about how to gracefully handle a script with an asyncio event loop getting terminated with Ctrl-C, and I haven't been able to get any of them to work without printing one or more tracebacks in the process. The answers are pretty much all over the place, and I haven't been able to implement any of them into this small script:
import asyncio
import datetime
import functools
import signal

async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

def stopper(signame, loop):
    print("Got %s, stopping..." % signame)
    loop.stop()

loop = asyncio.get_event_loop()
for signame in ('SIGINT', 'SIGTERM'):
    loop.add_signal_handler(getattr(signal, signame), functools.partial(stopper, signame, loop))
loop.run_until_complete(display_date(loop))
loop.close()
What I want to happen is for the script to exit without printing any tracebacks following a Ctrl-C (or SIGTERM/SIGINT sent via kill). This code prints RuntimeError: Event loop stopped before Future completed. In the MANY other forms I've tried based on previous answers, I've gotten a plethora of other exception classes and error messages with no idea how to fix them. The code above is minimal right now, but some of my earlier attempts were anything but, and none of them were correct.
If you're able to modify the script so that it terminates gracefully, an explanation of why your way of doing it is the right way would be greatly appreciated.
Use signal handlers:
import asyncio
from signal import SIGINT, SIGTERM

async def main_coro():
    try:
        await awaitable()
    except asyncio.CancelledError:
        do_cleanup()

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    main_task = asyncio.ensure_future(main_coro())
    for signal in [SIGINT, SIGTERM]:
        loop.add_signal_handler(signal, main_task.cancel)
    try:
        loop.run_until_complete(main_task)
    finally:
        loop.close()
Stopping the event loop while it is running will never be valid.
Here, you need to catch Ctrl-C to indicate to Python that you wish to handle it yourself instead of displaying the default stack trace. This can be done with a classic try/except:
coro = display_date(loop)
try:
    loop.run_until_complete(coro)
except KeyboardInterrupt:
    print("Received exit, exiting")
And, for your use-case, that's it!
For a more real-life program, you would probably need to clean up some resources. See also Graceful shutdown of asyncio coroutines
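For example, a hedged sketch of that cleanup applied to the snippet above: after catching the interrupt, cancel whatever is still pending and let it finish cancelling before closing the loop:

coro = display_date(loop)
try:
    loop.run_until_complete(coro)
except KeyboardInterrupt:
    print("Received exit, exiting")
    # Drain pending tasks so nothing is destroyed while still pending.
    pending = asyncio.all_tasks(loop=loop)
    for task in pending:
        task.cancel()
    loop.run_until_complete(asyncio.gather(*pending, return_exceptions=True))
finally:
    loop.close()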
I am setting an exception handler on my asyncio event loop. However, it doesn't seem to be called until the event loop thread is stopped. For example, consider this code:
import asyncio
from threading import Thread

def exception_handler(loop, context):
    print('Exception handler called')

loop = asyncio.get_event_loop()
loop.set_exception_handler(exception_handler)
thread = Thread(target=loop.run_forever)
thread.start()

async def run():
    raise RuntimeError()

asyncio.run_coroutine_threadsafe(run(), loop)
loop.call_soon_threadsafe(loop.stop, loop)
thread.join()
This code prints "Exception handler called", as we might expect. However, if I remove the line that shuts down the event loop (loop.call_soon_threadsafe(loop.stop, loop)), it no longer prints anything.
I have a few questions about this:
Am I doing something wrong here?
Does anyone know if this is the intended behaviour of asyncio exception handlers? I can't find anything that documents this, and it seems a little strange to me.
I'd quite like to have a long-running event loop that logs errors happening in its coroutines, so the current behaviour seems problematic for me.
There are a few problems in the code above:
stop() does not need a parameter
The program ends before the coroutine is executed (stop() was called before it).
Here is the fixed code (without exceptions and the exception handler):
import asyncio
from threading import Thread

async def coro():
    print("in coro")
    return 42

loop = asyncio.get_event_loop()
thread = Thread(target=loop.run_forever)
thread.start()
fut = asyncio.run_coroutine_threadsafe(coro(), loop)
print(fut.result())
loop.call_soon_threadsafe(loop.stop)
thread.join()
run_coroutine_threadsafe() returns a future object which holds the exception (it does not get to the default exception handler):
import asyncio
from pprint import pprint
from threading import Thread

def exception_handler(loop, context):
    print('Exception handler called')
    pprint(context)

loop = asyncio.get_event_loop()
loop.set_exception_handler(exception_handler)
thread = Thread(target=loop.run_forever)
thread.start()

async def coro():
    print("coro")
    raise RuntimeError("BOOM!")

fut = asyncio.run_coroutine_threadsafe(coro(), loop)
try:
    print("success:", fut.result())
except:
    print("exception:", fut.exception())
loop.call_soon_threadsafe(loop.stop)
thread.join()
However, coroutines that are called using create_task() or ensure_future() will call the exception_handler:
async def coro2():
    print("coro2")
    raise RuntimeError("BOOM2!")

async def coro():
    loop.create_task(coro2())
    print("coro")
    raise RuntimeError("BOOM!")
You can use this to create a small wrapper:
async def boom(x):
    print("boom", x)
    raise RuntimeError("BOOM!")

async def call_later(coro, *args, **kwargs):
    loop.create_task(coro(*args, **kwargs))
    return "ok"

fut = asyncio.run_coroutine_threadsafe(call_later(boom, 7), loop)
However, you should probably consider using a Queue to communicate with your thread instead.
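A hedged sketch of that queue-based pattern (assuming Python 3.10+, where asyncio.Queue binds to its loop lazily; the None sentinel is just a convention for this example):

import asyncio
from threading import Thread

async def consumer(queue):
    while True:
        item = await queue.get()
        if item is None:  # sentinel: shut down
            break
        print("processing", item)

loop = asyncio.new_event_loop()
queue = asyncio.Queue()
thread = Thread(target=loop.run_forever)
thread.start()

consumer_fut = asyncio.run_coroutine_threadsafe(consumer(queue), loop)

# asyncio.Queue is not thread-safe; feed it from the loop's own thread.
for item in (1, 2, 3, None):
    loop.call_soon_threadsafe(queue.put_nowait, item)

consumer_fut.result()  # block until the consumer drains the queue
loop.call_soon_threadsafe(loop.stop)
thread.join()
loop.close()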
I'm writing a spider to crawl web pages. I know asyncio may be my best choice, so I use coroutines to process the work asynchronously. Now I'm scratching my head over how to quit the program on a keyboard interrupt. The program shuts down fine after all the work has been done. The source code runs on Python 3.5 and is attached below.
import asyncio
import aiohttp
from contextlib import suppress

class Spider(object):
    def __init__(self):
        self.max_tasks = 2
        self.task_queue = asyncio.Queue(self.max_tasks)
        self.loop = asyncio.get_event_loop()
        self.counter = 1

    def close(self):
        for w in self.workers:
            w.cancel()

    async def fetch(self, url):
        try:
            async with aiohttp.ClientSession(loop=self.loop) as self.session:
                with aiohttp.Timeout(30, loop=self.session.loop):
                    async with self.session.get(url) as resp:
                        print('get response from url: %s' % url)
        except:
            pass
        finally:
            pass

    async def work(self):
        while True:
            url = await self.task_queue.get()
            await self.fetch(url)
            self.task_queue.task_done()

    def assign_work(self):
        print('[*]assigning work...')
        url = 'https://www.python.org/'
        if self.counter > 10:
            return 'done'
        for _ in range(self.max_tasks):
            self.counter += 1
            self.task_queue.put_nowait(url)

    async def crawl(self):
        self.workers = [self.loop.create_task(self.work()) for _ in range(self.max_tasks)]
        while True:
            if self.assign_work() == 'done':
                break
            await self.task_queue.join()
        self.close()

def main():
    loop = asyncio.get_event_loop()
    spider = Spider()
    try:
        loop.run_until_complete(spider.crawl())
    except KeyboardInterrupt:
        print('Interrupt from keyboard')
        spider.close()
        pending = asyncio.Task.all_tasks()
        for w in pending:
            w.cancel()
            with suppress(asyncio.CancelledError):
                loop.run_until_complete(w)
    finally:
        loop.stop()
        loop.run_forever()
        loop.close()

if __name__ == '__main__':
    main()
But if I press 'Ctrl+C' while it's running, some strange errors may occur. I mean, sometimes the program can be shut down by 'Ctrl+C' gracefully, with no error message. However, in some cases the program will still be running after pressing 'Ctrl+C' and won't stop until all the work has been done. If I press 'Ctrl+C' at that moment, 'Task was destroyed but it is pending!' shows up.
I have read some topics about asyncio and added some code in main() to close coroutines gracefully, but it does not work. Has someone else had similar problems?
I bet the problem happens here:
except:
    pass
You should never do such a thing, and your situation is one more example of what can happen otherwise.
When you cancel a task and await its cancellation, asyncio.CancelledError is raised inside the task and shouldn't be suppressed anywhere inside it. The line where you await your task's cancellation should raise this exception; otherwise the task will continue execution.
That's why you do

task.cancel()
with suppress(asyncio.CancelledError):
    loop.run_until_complete(task)  # this line should raise CancelledError,
                                   # otherwise the task will continue

to actually cancel the task.
Upd:

But I still hardly understand why the original code could quit cleanly on 'Ctrl+C' only some of the time?

It depends on the state of your tasks:

If at the moment you press 'Ctrl+C' all tasks are done, none of them will raise CancelledError when awaited and your code will finish normally.

If at the moment you press 'Ctrl+C' some tasks are pending but close to finishing their execution, your code will block briefly on the task cancellation and finish when those tasks finish shortly afterwards.

If at the moment you press 'Ctrl+C' some tasks are pending and far from being finished, your code will get stuck trying to cancel these tasks (which can't be done, since the CancelledError is being suppressed). Another 'Ctrl+C' will interrupt the cancellation process, but the tasks won't be cancelled or finished, and you'll get the warning 'Task was destroyed but it is pending!'.
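Concretely, a hedged rewrite of fetch against current aiohttp 3.x (where the old aiohttp.Timeout context manager no longer exists), narrowing the bare except so that CancelledError can propagate:

async def fetch(self, url):
    try:
        async with aiohttp.ClientSession() as session:
            timeout = aiohttp.ClientTimeout(total=30)
            async with session.get(url, timeout=timeout) as resp:
                print('get response from url: %s' % url)
    except (aiohttp.ClientError, asyncio.TimeoutError) as e:
        # Swallow only expected network errors; CancelledError now
        # propagates, so the task can actually be cancelled.
        print('request to %s failed: %s' % (url, e))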
I assume you are using any flavor of Unix; if this is not the case, my comments might not apply to your situation.
Pressing Ctrl-C in a terminal sends all processes associated with this tty the signal SIGINT. A Python process catches this Unix signal and translates this into throwing a KeyboardInterrupt exception. In a threaded application (I'm not sure if the async stuff internally is using threads, but it very much sounds like it does) typically only one thread (the main thread) receives this signal and thus reacts in this fashion. If it is not prepared especially for this situation, it will terminate due to the exception.
Then the threading administration will wait for the still running fellow threads to terminate before the Unix process as a whole terminates with an exit code. This can take quite a long time. See this question about killing fellow threads and why this isn't possible in general.
What you want to do, I assume, is kill your process immediately, killing all threads in one step.
The easiest way to achieve this is to press Ctrl-\. This sends a SIGQUIT instead of a SIGINT, which typically affects the fellow threads as well and causes them to terminate.
If this is not enough (because for whatever reason you need to react properly on Ctrl-C), you can send yourself a signal:
import os, signal
os.kill(os.getpid(), signal.SIGQUIT)
This should terminate all running threads unless they especially catch SIGQUIT in which case you still can use SIGKILL to perform a hard kill on them. This doesn't give them any option of reacting, though, and might lead to problems.
I am developing a multi-threaded application in Python. I have the following scenario:
There are 2-3 producer threads which communicate with the DB, fetch data in large chunks, and fill a queue with them.
There is an intermediate worker which breaks the large chunks fetched by the producer threads into smaller ones and fills them into another queue.
There are 5 consumer threads which consume the queue created by the intermediate worker thread.
Data source objects are accessed by the producer threads through their APIs. These data sources are completely separate, so a producer only knows whether its data source currently has data to give out.
I create threads of these three types and make the main thread wait for their completion by calling join() on them.
Now for such a setup I want a common error handler which senses the failure of any thread or any exception and decides what to do. For example, if I press Ctrl+C after I start my application, the main thread dies but the producer and consumer threads continue to run. I would like the entire application to shut down once Ctrl+C is pressed. Similarly, if some DB error occurs in the data source module, the producer thread should be notified of it.
This is what I have done so far:
I have created a class ThreadManager whose object is passed to all threads. I have written an error handler method and assigned it to sys.excepthook. This handler should catch exceptions and errors and then call methods of the ThreadManager class to control the running threads. Here is a snippet:
class Producer(threading.Thread):
    ....
    def produce():
        data = dataSource.getData()

class DataSource:
    ....
    def getData():
        raise Exception("critical")

def customHandler(exceptionType, value, stackTrace):
    print "In custom handler"

sys.excepthook = customHandler
Now when a thread of the Producer class calls getData() of the DataSource class, an exception is thrown. But this exception is never caught by my customHandler method.
What am I missing? Also, in such a scenario, what other strategy can I apply? Please help. Thank you for having enough patience to read all this :)
What you need is a decorator. In essence you are modifying your original function by putting it inside a try-except:
import os

def exception_decorator(func):
    def _function(*args):
        try:
            result = func(*args)
        except:
            print('*** ESC default handler ***')
            os._exit(1)
        return result
    return _function
If your thread function is called myfunc, then you add the following line above your function definition
@exception_decorator
def myfunc():
    pass
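A quick, self-contained illustration of why this helps with threads: sys.excepthook is not invoked for exceptions raised in non-main threads, but the decorator's try/except runs inside the thread itself (produce here is a hypothetical stand-in for the thread function):

import os
import threading

def exception_decorator(func):  # same decorator as above
    def _function(*args):
        try:
            result = func(*args)
        except:
            print('*** ESC default handler ***')
            os._exit(1)
        return result
    return _function

@exception_decorator
def produce():
    raise Exception("critical")  # simulates DataSource.getData()

t = threading.Thread(target=produce)
t.start()
t.join()  # the decorator exits the whole process from the worker thread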
Can't you just catch "KeyboardInterrupt" when pressing Ctrl+C and do:
for thread in threading.enumerate():
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
Alternatively, have a flag self.alive in each threaded class; you could then set thread.alive = False and have each thread stop gracefully:
for thread in threading.enumerate():
    thread.alive = False
    time.sleep(5)  # Grace period
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
example:
import os
from threading import *
from time import sleep

class worker(Thread):
    def __init__(self):
        self.alive = True
        Thread.__init__(self)
        self.start()

    def run(self):
        while self.alive:
            sleep(0.1)

runner = worker()
try:
    raw_input('Press ctrl+c!')
except:
    pass

for thread in enumerate():
    thread.alive = False
    sleep(1)
    try:
        thread._Thread__stop()
        thread._Thread__delete()
    except:
        pass

# There will always be 1 thread alive and that's the __main__ thread.
while len(enumerate()) > 1:
    sleep(1)
os._exit(0)
Try going about it by changing the internal system exception handler?
import sys

origExcepthook = sys.excepthook

def uberexcept(exctype, value, traceback):
    if exctype == KeyboardInterrupt:
        print "Gracefully shutting down all the threads"
        # enumerate() thingie here.
    else:
        origExcepthook(exctype, value, traceback)

sys.excepthook = uberexcept