I've been working on a Discord bot and I have a small problem with it. The only way I'm able to stop it is with CTRL+C, which I don't think is a safe way to do it, because I always get errors thrown when I do:
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0xb4c6f3a0>
Exception ignored in: <function ClientResponse.__del__ at 0xb6fb6ec8>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/aiohttp/client_reqrep.py", line 757, in __del__
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 177, in release
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 629, in _release
File "/usr/local/lib/python3.8/dist-packages/aiohttp/client_proto.py", line 62, in close
File "/usr/lib/python3.8/asyncio/selector_events.py", line 692, in close
File "/usr/lib/python3.8/asyncio/base_events.py", line 719, in call_soon
File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
RuntimeError: Event loop is closed
Any information on how I should stop the bot without these errors would be nice. Thanks.
You can do something like this:
try:
    ...  # whole code here
except KeyboardInterrupt:
    print("Exiting")
    await client.logout()
When you press Ctrl+C, it raises a KeyboardInterrupt. If you catch it with a try/except block, you can log out and exit the program safely.
I hope it helped!
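For a fuller picture, here is a minimal sketch of that pattern, assuming discord.py 2.x (where logout() was renamed to close()) and a placeholder token:

import asyncio
import discord

client = discord.Client(intents=discord.Intents.default())

async def runner():
    try:
        await client.start("YOUR_TOKEN")  # placeholder; use your real bot token
    finally:
        # runs even when Ctrl+C cancels the task, so the aiohttp session closes cleanly
        await client.close()

try:
    asyncio.run(runner())
except KeyboardInterrupt:
    print("Exiting")

Closing the client before the loop shuts down is what avoids the "Unclosed client session" / "Event loop is closed" messages from the question.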
Make a command called shutdown:
@client.command(name="shutdown")
async def shut_down(ctx):
    print("logging out...")
    await client.logout()
Now use that command to close the program :)
If I understood your question properly, what you are trying to achieve is to have a way to exit your program and not let it run forever.
To do this, you can either handle KeyboardInterrupt to exit the code,
import sys

try:
    main()
except KeyboardInterrupt:
    print('Interrupted')
    sys.exit(0)
Or you can find some logic in your code after which the program should terminate, such as an if condition that calls sys.exit(0) once it is met.
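A trivial sketch of that second option (the loop and the "quit" command are just illustrative, not from the question):

import sys

while True:
    command = input("> ")
    if command == "quit":
        # the condition after which the program should terminate
        sys.exit(0)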
Related
I have a Python script that needs to run continuously on a Linux server; it connects to a WebSocket and retrieves a realtime feed of data.
After a dozen hours or so, the script crashes but doesn't exit: it displays an error message about losing the connection to the server, and I have to manually Ctrl+C and run it again.
I am looking for a way to automatically (instantly) restart the Python script once it crashes. How can one achieve this?
Edit: this is the message I get when the script crashes:
Server disconnected.
disconnect handler error
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/engineio/client.py", line 496, in _trigger_event
return self.handlers[event](*args)
File "/usr/local/lib/python3.8/dist-packages/socketio/client.py", line 632, in _handle_eio_disconnect
self._trigger_event('disconnect', namespace=n)
File "/usr/local/lib/python3.8/dist-packages/socketio/client.py", line 550, in _trigger_event
return self.handlers[namespace][event](*args)
File "python_script.py", line 29, in on_disconnect
quit()
File "/usr/lib/python3.8/_sitebuiltins.py", line 26, in __call__
raise SystemExit(code)
SystemExit: None
Here is the on_disconnect handler code:
@socketio_client.on('disconnect', namespace='/streaming')
def on_disconnect():
    print('Server disconnected.')
    quit()
I think I may have to call an external script to start a new instance of the python script.
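One way to do that (a sketch, not something from the thread) is a small wrapper that re-launches the program whenever it exits; python_script.py is the file name taken from the traceback above:

import subprocess
import time

while True:
    exit_code = subprocess.call(["python3", "python_script.py"])
    print(f"Script exited with code {exit_code}, restarting in 5 seconds...")
    time.sleep(5)

A systemd unit with Restart=always (or a shell while loop) would achieve the same thing without any extra Python.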
When I run the following python code:
import asyncio
import logging

logging.basicConfig(level=logging.DEBUG)

async def read_future(fut):
    print(await fut)

async def write_future(fut):
    fut.set_result('My Value')

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    asyncio.gather(read_future(fut), write_future(fut))

asyncio.run(main(), debug=True)
Instead of read_future waiting for the result of fut to be set, the program crashes with the following error:
DEBUG:asyncio:Using selector: KqueueSelector
ERROR:asyncio:_GatheringFuture exception was never retrieved
future: <_GatheringFuture finished exception=CancelledError() created at /Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py:642>
source_traceback: Object created at (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/base_events.py", line 566, in run_until_complete
self.run_forever()
File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/base_events.py", line 534, in run_forever
self._run_once()
File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/base_events.py", line 1763, in _run_once
handle._run()
File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "<stdin>", line 4, in main
File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py", line 766, in gather
outer = _GatheringFuture(children, loop=loop)
File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py", line 642, in __init__
super().__init__(loop=loop)
concurrent.futures._base.CancelledError
DEBUG:asyncio:Close <_UnixSelectorEventLoop running=False closed=False debug=True>
What am I doing wrong in this code? I want to be able to await the Future fut and continue after the Future has a value/exception set.
Your problem is that asyncio.gather itself returns an awaitable; by neither awaiting it nor storing it, you never handed control back to the event loop before main() returned, so the gather was immediately cleaned up, implicitly cancelling it and, by extension, all of the awaitables it controlled.
To fix, just make sure you await the results of the gather:
await asyncio.gather(read_future(fut), write_future(fut))
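For reference, the complete example with that one-line change applied (a sketch of the corrected version):

import asyncio

async def read_future(fut):
    print(await fut)

async def write_future(fut):
    fut.set_result('My Value')

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # awaiting the gather keeps it (and both children) alive until they complete
    await asyncio.gather(read_future(fut), write_future(fut))

asyncio.run(main())

This prints 'My Value' and exits cleanly.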
From https://docs.python.org/3/library/asyncio-future.html#asyncio.Future.result:
result()
Return the result of the Future.
If the Future is done and has a result set by the set_result() method, the result value is returned.
If the Future is done and has an exception set by the set_exception() method, this method raises the exception.
If the Future has been cancelled, this method raises a CancelledError exception.
If the Future’s result isn’t yet available, this method raises a InvalidStateError exception.
(Bold added.)
I'm not sure why the Future is getting cancelled, but that seems to be the cause of the issue.
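As a tiny illustration of the quoted behaviour (my own sketch, not part of the original answer):

import asyncio

async def demo():
    fut = asyncio.get_running_loop().create_future()
    fut.cancel()      # cancel the still-pending Future
    try:
        fut.result()  # raises because the Future was cancelled
    except asyncio.CancelledError:
        print("result() on a cancelled Future raises CancelledError")

asyncio.run(demo())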
I have a coroutine that waits for a rising edge on a signal:
@cocotb.coroutine
def wait_for_rise(self):
    yield RisingEdge(self.dut.mysignal)
I launch it in my "main" test function like this:
mythread = cocotb.fork(wait_for_rise())
I want to stop it after a while, even if no rising edge occurs, so I tried to kill it:
mythread.kill()
But an exception occurs:
Send raised exception: 'RunningCoroutine' object has no attribute '_join'
File "/opt/cocotb/cocotb/decorators.py", line 121, in send
return self._coro.send(value)
File "/myproject.py", line 206, in i2c_read
wTXDRwthread.kill()
File "/opt/cocotb/cocotb/decorators.py", line 151, in kill
cocotb.scheduler.unschedule(self)
File "/opt/cocotb/cocotb/scheduler.py", line 453, in unschedule
if coro._join in self._trigger2coros:
Is there a way to stop a forked coroutine properly?
This very much looks like it is the same problem as in https://github.com/potentialventures/cocotb/issues/650 - you can subscribe to the issue to be notified when its status changes.
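Until that issue is resolved, one possible workaround (a sketch that assumes your cocotb version supports waiting on a list of triggers, i.e. "first trigger wins") is to avoid kill() entirely and build the timeout into the coroutine itself:

import cocotb
from cocotb.triggers import RisingEdge, Timer

@cocotb.coroutine
def wait_for_rise_or_timeout(self):
    # resumes on whichever fires first: the rising edge or the timeout
    # (the Timer value/units depend on your cocotb version)
    yield [RisingEdge(self.dut.mysignal), Timer(10000)]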
I have a text-based interface (asciimatics module) for my program that uses asyncio and the discord.py module, and occasionally when my wifi adapter goes down I get an exception like this:
Task exception was never retrieved
future: <Task finished coro=<WebSocketCommonProtocol.run() done, defined at /home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py:428> exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
result = coro.throw(exc)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 434, in run
msg = yield from self.read_message()
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 456, in read_message
frame = yield from self.read_data_frame(max_size=self.max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 511, in read_data_frame
frame = yield from self.read_frame(max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 546, in read_frame
self.reader.readexactly, is_masked, max_size=max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/framing.py", line 86, in read_frame
data = yield from reader(2)
File "/usr/lib/python3.5/asyncio/streams.py", line 670, in readexactly
block = yield from self.read(n)
File "/usr/lib/python3.5/asyncio/streams.py", line 627, in read
yield from self._wait_for_data('read')
File "/usr/lib/python3.5/asyncio/streams.py", line 457, in _wait_for_data
yield from self._waiter
File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
yield self # This tells Task to wait for completion.
File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/selector_events.py", line 662, in _read_ready
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
This exception is non-fatal and the program is able to re-connect despite it - what I want to do is prevent this exception from dumping to stdout and mucking up my text interface.
I tried using ensure_future to handle it, but it doesn't seem to work. Am I missing something?
@asyncio.coroutine
def handle_exception():
    try:
        yield from WebSocketCommonProtocol.run()
    except Exception:
        print("SocketException-Retrying")

asyncio.ensure_future(handle_exception())
#start discord client
client.run(token)
"Task exception was never retrieved" is not actually an exception propagated to stdout, but a log message that warns you that you never retrieved the exception from one of your tasks. You can find details here.
I guess the easiest way to avoid this message in your case is to retrieve the exception from the task manually:
coro = WebSocketCommonProtocol.run()  # you don't need any wrapper
task = asyncio.ensure_future(coro)
try:
    # start discord client
    client.run(token)
finally:
    # retrieve exception if any:
    if task.done() and not task.cancelled():
        task.exception()  # this doesn't raise anything, just marks the exception retrieved
The answer provided by Mikhail is perfectly acceptable, but I realized it wouldn't work for me, since the task that raises the exception is buried deep in some module, so retrieving its exception is rather difficult. I found that it works if I simply set a custom exception handler for my asyncio loop (the loop is created by the discord client):
def exception_handler(loop, context):
    print("Caught the following exception")
    print(context['message'])

client.loop.set_exception_handler(exception_handler)
client.run(token)
I'm using Tornado 2.0, and occasionally when I call self.finish() to end an asynchronous request, I get an IOError with the message "Stream is closed". It looks as though this happens when the client ends the request (i.e. by navigating to another page) before the server calls finish(). Is this expected behavior and something my code just needs to handle? I found this bug from a year ago that suggests this is NOT something client code should be handling: https://github.com/facebook/tornado/issues/81. Is this indicative of a bug in my code, and if so, what are the likely causes?
Stacktrace:
Traceback (most recent call last):
File "my_code.py", line 260, in my_method
self.finish()
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 634, in finish
self.request.finish()
File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 555, in finish
self.connection.finish()
File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 349, in finish
self._finish_request()
File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 372, in _finish_request
self.stream.read_until(b("\r\n\r\n"), self._header_callback)
File "/usr/lib/python2.6/site-packages/tornado/iostream.py", line 137, in read_until
self._check_closed()
File "/usr/lib/python2.6/site-packages/tornado/iostream.py", line 403, in _check_closed
raise IOError("Stream is closed")
IOError: Stream is closed
self.finish() is called to end the asynchronous request, and some functions like self.render() will call self.finish().
If you call self.finish() after the connection is closed, it will cause the error.
So you can check whether you call some function that finishes the connection before self.finish(),
or you can do something like this:
if not self._finished:
    # the request hasn't been finished yet, so it's safe to call finish()
    self.finish()
else:
    pass