I've read every post I could find about how to gracefully handle a script with an asyncio event loop getting terminated with Ctrl-C, and I haven't been able to get any of them to work without printing one or more tracebacks as I do so. The answers are pretty much all over the place, and I haven't been able to implement any of them into this small script:
import asyncio
import datetime
import functools
import signal

async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

def stopper(signame, loop):
    print("Got %s, stopping..." % signame)
    loop.stop()

loop = asyncio.get_event_loop()
for signame in ('SIGINT', 'SIGTERM'):
    loop.add_signal_handler(getattr(signal, signame), functools.partial(stopper, signame, loop))

loop.run_until_complete(display_date(loop))
loop.close()
What I want to happen is for the script to exit without printing any tracebacks following a Ctrl-C (or SIGTERM/SIGINT sent via kill). This code prints RuntimeError: Event loop stopped before Future completed. In the MANY other forms I've tried based on previous answers, I've gotten a plethora of other types of exception classes and error messages with no idea how to fix them. The code above is minimal right now, but some of the attempts I made earlier were anything but, and none of them were correct.
If you're able to modify the script so that it terminates gracefully, an explanation of why your way of doing it is the right way would be greatly appreciated.
Use signal handlers:
import asyncio
from signal import SIGINT, SIGTERM

async def main_coro():
    try:
        await awaitable()
    except asyncio.CancelledError:
        do_cleanup()

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    main_task = asyncio.ensure_future(main_coro())
    for signal in [SIGINT, SIGTERM]:
        loop.add_signal_handler(signal, main_task.cancel)
    try:
        loop.run_until_complete(main_task)
    finally:
        loop.close()
Stopping the event loop while it is running will never be valid.
Here, you need to catch the Ctrl-C to indicate to Python that you wish to handle it yourself instead of letting it print the default traceback. This can be done with a classic try/except:
coro = display_date(loop)
try:
    loop.run_until_complete(coro)
except KeyboardInterrupt:
    print("Received exit, exiting")
And, for your use-case, that's it!
For a more real-life program, you would probably need to cleanup some resources. See also Graceful shutdown of asyncio coroutines
The following code requires 3 presses of CTRL-C to end; how can I make it end with only one? (So it works nicely in Docker.)
import asyncio
import time

def sleep_blocking():
    print("Sleep blocking")
    time.sleep(1000)

async def main():
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, sleep_blocking)

try:
    asyncio.run(main())
except KeyboardInterrupt:
    print("Nicely shutting down ...")
I've read many asyncio-related questions and answers but can't figure this one out yet. The 1st CTRL-C does nothing, the 2nd prints "Nicely shutting down ..." and then hangs. The 3rd CTRL-C prints an ugly error.
I'm on Python 3.9.10 and Linux.
(edit: updated code per comment #mkrieger1)
The way to exit immediately and unconditionally from a Python program is by calling os._exit(). If your background threads are in the middle of doing something important, this may not be wise. However, the following program does what you asked (Python 3.10, Windows 10):
import asyncio
import time
import os

def sleep_blocking():
    print("Sleep blocking")
    time.sleep(1000)

async def main():
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, sleep_blocking)

try:
    asyncio.run(main())
except KeyboardInterrupt:
    print("Nicely shutting down ...")
    os._exit(42)
From here we know that it's effectively impossible to kill a task running in a thread executor. If I replace the default thread executor with a ProcessPoolExecutor, I get the behavior you're looking for. Here's the code:
import concurrent.futures
import asyncio
import time

def sleep_blocking():
    print("Sleep blocking")
    time.sleep(1000)

async def main():
    loop = asyncio.get_event_loop()
    x = concurrent.futures.ProcessPoolExecutor()
    await loop.run_in_executor(x, sleep_blocking)

try:
    asyncio.run(main())
except KeyboardInterrupt:
    print("Nicely shutting down ...")
And the result is:
$ python asynctest.py
Sleep blocking
^CNicely shutting down ...
A program I am developing has a long-running process in another thread. I would like to interrupt that thread in the event something goes awry.
Other SO posts I've seen on this topic use syntax similar to this:
while True:
    if condition_here:
        break
    else:
        await asyncio.sleep(1)
which does work in catching KeyboardInterrupts. However, I'm not a big fan of using while loops like this and would like to avoid them if at all possible.
For some example code, here is what I currently have (which does not catch the interrupts until after the thread is done):
import asyncio
import time
from threading import Thread

def some_long_process():
    time.sleep(60)

async def main():
    thread = Thread(target=some_long_process)
    thread.start()

    # Doesn't work
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(None, thread.join)
    # await asyncio.wait([loop.run_in_executor(None, thread.join)])
    # await asyncio.wait_for(loop.run_in_executor(None, thread.join), None)
    # await asyncio.gather(asyncio.to_thread(thread.join))

    # Works
    # while thread.is_alive():
    #     await asyncio.sleep(1)

if __name__ == '__main__':
    asyncio.run(main())
I'm also open to suggestions to reconsider my entire approach to the way this is designed if this isn't possible. Thanks for your time.
I'm coding a telegram userbot (with telethon) which sends a message to some chats every 60 seconds.
I'm using 2 threads, but I get the following errors: "RuntimeWarning: coroutine 'sender' was never awaited" and "no running event loop".
My code:
....

async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

schedule.every(60).seconds.do(asyncio.create_task(sender()))

...

class checker1(Thread):
    def run(self):
        while True:
            schedule.run_pending()
            time.sleep(1)

class checker2(Thread):
    def run(self):
        while True:
            client.add_event_handler(handler)
            client.run_until_disconnected()

checker2().start()
checker1().start()
I searched for a solution but I didn't find anything...
You should avoid using threads with asyncio unless you know what you're doing. The code can be rewritten using asyncio as follows, since most of the time you don't actually need threads:
import asyncio

async def sender():
    for chat in chats:
        try:
            if chat.megagroup == True:
                await client.send_message(chat, messaggio)
        except:
            await client.send_message(myID, 'error')

async def checker1():
    while True:
        await sender()
        await asyncio.sleep(60)  # every 60s

async def main():
    asyncio.create_task(checker1())  # background task; don't await it here
    await client.run_until_disconnected()

client.loop.run_until_complete(main())
This code is not perfect (you should properly cancel and await checker1 at the end of the program), but it should work.
As a side note, you don't need client.run_until_disconnected(). The call simply blocks (runs) until the client is disconnected. If you can keep the program running differently, as long as asyncio runs, the client will work.
Another thing: a bare except: is a very bad idea and will probably cause issues with exception handling. At the very least, replace it with except Exception.
There are a few problems with your code. asyncio is complaining about "no running event loop" because your program never starts the event loop anywhere, and tasks can't be scheduled without an event loop running. See Asyncio in corroutine RuntimeError: no running event loop. In order to start the event loop, you can use loop.run_until_complete() if you have a main coroutine for your program, or you can use asyncio.get_event_loop().run_forever() to run the event loop forever.
The second problem is the incorrect usage of schedule.every(60).seconds.do(), which is hidden by the first error. schedule expects a function to be passed in, not an awaitable (which is what asyncio.create_task(sender()) returns). This normally would have caused a TypeError, but the create_task() call failed first for lack of a running event loop, so the TypeError was never reached. You'll need to define a function and then pass it to schedule, like this:
def start_sender():
    asyncio.create_task(sender())

schedule.every(60).seconds.do(start_sender)
This should work as long as the event loop is started somewhere else in your program.
I have created a proxychecker that operates fine when it is left alone to check all the proxies. But I would like to implement functionality so that upon keyboard interrupt it cancels all pending proxychecking coroutines and exits gracefully. In its current state, upon keyboard interrupt the program does not exit gracefully and I get error messages - "Task was destroyed but it is pending!"
After doing some research, I realize that this is happening because I am closing the event loop before the coroutines have finished canceling. I have decided to try and attempt my own implementation of the solution found in this stackoverflow post:
What's the correct way to clean up after an interrupted event loop?
However, my implementation does not work; it seems that the execution gets stuck in loop.run_forever(), because upon keyboard interrupt my terminal simply hangs.
If possible, I would truly appreciate a solution that does not involve waiting for pending tasks to be finished. The target functionality is that upon keyboard interrupt, the program drops everything, issues a report, and exits.
Also, I am new to asyncio, so any constructive criticism of how I've structured my program is also truly appreciated.
async def check_proxy(self, id, session):
    proxy = self.plist[0]
    del self.plist[0]
    try:
        self.stats["tries"] += 1
        async with session.head(self.test_url, proxy=proxy, timeout=self.timeout) as response:
            if response and response.status == 200:
                self.stats["alive"] += 1
                self.alive.append(proxy)
                print(f"{id} has found a live proxy : " + proxy)
    except Exception:
        pass

async def main(self):
    tasks = []
    connector = ProxyConnector()
    async with aiohttp.ClientSession(connector=connector, request_class=ProxyClientRequest) as session:
        while len(self.plist) > 0:
            if len(self.plist) >= self.threads:
                for i in range(self.threads):
                    tasks.append(asyncio.ensure_future(self.check_proxy(i+1, session)))
            else:
                for i in range(len(self.plist)):
                    tasks.append(asyncio.ensure_future(self.check_proxy(i+1, session)))
            await asyncio.gather(*tasks)

def start(self):
    self.load_proxies()
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(self.main())
    except KeyboardInterrupt:
        for task in asyncio.Task.all_tasks():
            task.cancel()
        loop.run_forever()
        asyncio.Task.all_tasks().exception()
    finally:
        loop.close()
        self.report_stats()
Ideally, the output should look something like this:
...
55 has found a live proxy : socks5://81.10.222.118:1080
83 has found a live proxy : socks5://173.245.239.223:16938
111 has found a live proxy : socks5://138.68.41.90:1080
^C
# of Tries: 160
# of Alive: 32
Took me a while because I was branching out on the wrong track, but here is the fix in case others run into the same problem.
except KeyboardInterrupt:
    for task in asyncio.Task.all_tasks():
        task.cancel()
    loop.stop()  # -- rather unintuitive in my opinion, works though.
    loop.run_forever()
finally:
    loop.close()
Very simple fix; rather unfortunate that I didn't realize sooner. Hopefully this can prevent someone else from a similar descent into madness. Cheers!
I'm writing a spider to crawl web pages. I know asyncio may be my best choice. So I use coroutines to process the work asynchronously. Now I scratch my head about how to quit the program by keyboard interrupt. The program shuts down fine after all the work has been done. The source code can be run in Python 3.5 and is attached below.
import asyncio
import aiohttp
from contextlib import suppress

class Spider(object):
    def __init__(self):
        self.max_tasks = 2
        self.task_queue = asyncio.Queue(self.max_tasks)
        self.loop = asyncio.get_event_loop()
        self.counter = 1

    def close(self):
        for w in self.workers:
            w.cancel()

    async def fetch(self, url):
        try:
            async with aiohttp.ClientSession(loop=self.loop) as self.session:
                with aiohttp.Timeout(30, loop=self.session.loop):
                    async with self.session.get(url) as resp:
                        print('get response from url: %s' % url)
        except:
            pass
        finally:
            pass

    async def work(self):
        while True:
            url = await self.task_queue.get()
            await self.fetch(url)
            self.task_queue.task_done()

    def assign_work(self):
        print('[*]assigning work...')
        url = 'https://www.python.org/'
        if self.counter > 10:
            return 'done'
        for _ in range(self.max_tasks):
            self.counter += 1
            self.task_queue.put_nowait(url)

    async def crawl(self):
        self.workers = [self.loop.create_task(self.work()) for _ in range(self.max_tasks)]
        while True:
            if self.assign_work() == 'done':
                break
            await self.task_queue.join()
        self.close()

def main():
    loop = asyncio.get_event_loop()
    spider = Spider()
    try:
        loop.run_until_complete(spider.crawl())
    except KeyboardInterrupt:
        print('Interrupt from keyboard')
        spider.close()
        pending = asyncio.Task.all_tasks()
        for w in pending:
            w.cancel()
            with suppress(asyncio.CancelledError):
                loop.run_until_complete(w)
    finally:
        loop.stop()
        loop.run_forever()
        loop.close()

if __name__ == '__main__':
    main()
But if I press 'Ctrl+C' while it's running, some strange errors may occur. I mean, sometimes the program can be shut down by 'Ctrl+C' gracefully, with no error message. However, in some cases the program will still be running after pressing 'Ctrl+C' and won't stop until all the work has been done. If I press 'Ctrl+C' at that moment, 'Task was destroyed but it is pending!' appears.
I have read some topics about asyncio and added some code in main() to close coroutines gracefully. But it does not work. Has anyone else had similar problems?
I bet the problem happens here:
except:
    pass
You should never do such a thing. And your situation is one more example of what can happen otherwise.
When you cancel a task and await its cancellation, asyncio.CancelledError is raised inside the task and shouldn't be suppressed anywhere inside it. The line where you await the task's cancellation should raise this exception; otherwise the task will continue execution.
That's why you do
task.cancel()
with suppress(asyncio.CancelledError):
    loop.run_until_complete(task)  # this line should raise CancelledError,
                                   # otherwise the task will continue
to actually cancel the task.
Update:
But I still hardly understand why the original code could quit well by 'Ctrl+C' at an uncertain probability?
It depends on the state of your tasks:
If at the moment you press 'Ctrl+C' all tasks are done, none of them will raise CancelledError on awaiting and your code will finish normally.
If at the moment you press 'Ctrl+C' some tasks are pending but close to finishing their execution, your code will get stuck for a bit on task cancellation and finish once those tasks complete shortly after.
If at the moment you press 'Ctrl+C' some tasks are pending and far from finished, your code will get stuck trying to cancel these tasks (which can't be done, since the bare except swallows the CancelledError). Another 'Ctrl+C' will interrupt the cancelling process, but the tasks won't be cancelled or finished then, and you'll get the 'Task was destroyed but it is pending!' warning.
I assume you are using any flavor of Unix; if this is not the case, my comments might not apply to your situation.
Pressing Ctrl-C in a terminal sends all processes associated with this tty the signal SIGINT. A Python process catches this Unix signal and translates this into throwing a KeyboardInterrupt exception. In a threaded application (I'm not sure if the async stuff internally is using threads, but it very much sounds like it does) typically only one thread (the main thread) receives this signal and thus reacts in this fashion. If it is not prepared especially for this situation, it will terminate due to the exception.
Then Python's thread management will wait for the still-running fellow threads to terminate before the Unix process as a whole terminates with an exit code. This can take quite a long time. See this question about killing fellow threads and why this isn't possible in general.
What you want to do, I assume, is kill your process immediately, killing all threads in one step.
The easiest way to achieve this is to press Ctrl-\. This will send a SIGQUIT instead of a SIGINT which typically influences also the fellow threads and causes them to terminate.
If this is not enough (because for whatever reason you need to react properly on Ctrl-C), you can send yourself a signal:
import os, signal
os.kill(os.getpid(), signal.SIGQUIT)
This should terminate all running threads unless they especially catch SIGQUIT in which case you still can use SIGKILL to perform a hard kill on them. This doesn't give them any option of reacting, though, and might lead to problems.