Python 3.5 Asyncio - Preventing task exception from dumping to stdout?

I have a text-based interface (asciimatics module) for my program that uses asyncio and the discord.py module, and occasionally, when my wifi adapter goes down, I get an exception like so:
Task exception was never retrieved
future: <Task finished coro=<WebSocketCommonProtocol.run() done, defined at /home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py:428> exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
    result = coro.throw(exc)
  File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 434, in run
    msg = yield from self.read_message()
  File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 456, in read_message
    frame = yield from self.read_data_frame(max_size=self.max_size)
  File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 511, in read_data_frame
    frame = yield from self.read_frame(max_size)
  File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 546, in read_frame
    self.reader.readexactly, is_masked, max_size=max_size)
  File "/home/mike/.local/lib/python3.5/site-packages/websockets/framing.py", line 86, in read_frame
    data = yield from reader(2)
  File "/usr/lib/python3.5/asyncio/streams.py", line 670, in readexactly
    block = yield from self.read(n)
  File "/usr/lib/python3.5/asyncio/streams.py", line 627, in read
    yield from self._wait_for_data('read')
  File "/usr/lib/python3.5/asyncio/streams.py", line 457, in _wait_for_data
    yield from self._waiter
  File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
    future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/selector_events.py", line 662, in _read_ready
    data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
This exception is non-fatal and the program is able to re-connect despite it. What I want to do is prevent this exception from dumping to stdout and mucking up my text interface.
I tried using ensure_future to handle it, but it doesn't seem to work. Am I missing something?
@asyncio.coroutine
def handle_exception():
    try:
        yield from WebSocketCommonProtocol.run()
    except Exception:
        print("SocketException-Retrying")

asyncio.ensure_future(handle_exception())

# start discord client
client.run(token)

Task exception was never retrieved is not actually an exception propagated to stdout, but a log message that warns you that you never retrieved the exception from one of your tasks. You can find details here.
I guess the easiest way to avoid this message in your case is to retrieve the exception from the task manually:
coro = WebSocketCommonProtocol.run()  # you don't need any wrapper
task = asyncio.ensure_future(coro)
try:
    # start discord client
    client.run(token)
finally:
    # retrieve exception if any:
    if task.done() and not task.cancelled():
        task.exception()  # this doesn't raise anything, just mark exception retrieved
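If you do have a handle on the task, the same "mark it retrieved" idea can be written as a done callback instead of a finally block; a minimal sketch, assuming the same coro, client and token as above:

import asyncio

def swallow_exception(task):
    # Calling task.exception() marks the exception as retrieved, so asyncio
    # won't log "Task exception was never retrieved" when the task is garbage collected.
    if not task.cancelled() and task.exception() is not None:
        print("SocketException - retrying")

task = asyncio.ensure_future(WebSocketCommonProtocol.run())
task.add_done_callback(swallow_exception)

# start discord client
client.run(token)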

The answer provided by Mikhail is perfectly acceptable, but I realized it wouldn't work for me since the task that is raising the exception is buried deep in some module, so trying to retrieve its exception is rather difficult. I found that instead I can simply set a custom exception handler for my asyncio loop (the loop is created by the discord client):
def exception_handler(loop, context):
    print("Caught the following exception")
    print(context['message'])

client.loop.set_exception_handler(exception_handler)
client.run(token)
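If you only want to hide connection drops and still see anything unexpected, a slightly fuller sketch of the same handler (same client and token; filtering on ConnectionError is my own addition, not something discord.py requires):

def exception_handler(loop, context):
    exc = context.get("exception")
    if isinstance(exc, ConnectionError):
        # Expected when the wifi adapter drops; keep it off the text interface.
        print("Caught the following exception")
        print(context["message"])
    else:
        # Let asyncio report anything we didn't anticipate.
        loop.default_exception_handler(context)

client.loop.set_exception_handler(exception_handler)
client.run(token)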

Related

Error retrieving the user data from TikTok

I want to extract the user stats using the unofficial TikTok API (https://github.com/davidteather/TikTok-Api).
The code I used is below:
from TikTokApi.tiktok import TikTokApi

with TikTokApi() as api:  # .get_instance no longer exists
    for trending_video in api.trending.videos():
        user_stats = trending_video.author.info_full['stats']
        if user_stats['followerCount'] >= 10000:
            print(user_stats)
but I keep on getting this error:
RuntimeError: This event loop is already running
Task exception was never retrieved
future: <Task finished name='Task-5' coro=<Connection.run() done, defined at C:\Users\siddh\anaconda3\lib\site-packages\playwright\_impl\_connection.py:240> exception=NotImplementedError()>
Traceback (most recent call last):
  File "C:\Users\siddh\anaconda3\lib\site-packages\playwright\_impl\_connection.py", line 247, in run
    await self._transport.connect()
  File "C:\Users\siddh\anaconda3\lib\site-packages\playwright\_impl\_transport.py", line 132, in connect
    raise exc
  File "C:\Users\siddh\anaconda3\lib\site-packages\playwright\_impl\_transport.py", line 120, in connect
    self._proc = await asyncio.create_subprocess_exec(
  File "C:\Users\siddh\anaconda3\lib\asyncio\subprocess.py", line 236, in create_subprocess_exec
    transport, protocol = await loop.subprocess_exec(
  File "C:\Users\siddh\anaconda3\lib\asyncio\base_events.py", line 1676, in subprocess_exec
    transport = await self._make_subprocess_transport(
  File "C:\Users\siddh\anaconda3\lib\asyncio\base_events.py", line 498, in _make_subprocess_transport
    raise NotImplementedError
NotImplementedError
I tried using asyncio but the error keeps coming up. Any fixes for this?
Please note that if you're running the code in a Jupyter Notebook, it won't work. As the documentation of this library says:
Note: Jupyter (ipynb) only works on linux
Paste your code into a .py file and try running it as a Python script instead with:
python tiktok_script.py
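For illustration, tiktok_script.py is just the code from the question saved as a standalone script (the file name comes from the command above):

# tiktok_script.py - same code as in the question, run as a plain Python script
from TikTokApi.tiktok import TikTokApi

def main():
    with TikTokApi() as api:  # .get_instance no longer exists
        for trending_video in api.trending.videos():
            user_stats = trending_video.author.info_full['stats']
            if user_stats['followerCount'] >= 10000:
                print(user_stats)

if __name__ == "__main__":
    main()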

Python Correct Way to Use asyncio streams to open a connection, send and receive multiple transmissions, then close connection gracefully

I am asking where either my thought process or my code is incorrect relative to using asyncio client streams to send data and receive responses from a server. When I call the method that disconnects the client, an exception is thrown. I am learning Python asyncio and ran across exceptions during testing while trying to close the client connection. I am trying to 1) create a client connection to a server, 2) leave the client connection open so that it can be used across multiple send/receive cycles, and 3) close the client connection gracefully when complete.
This is the class that contains the asyncio methods to create the stream writer.
class hl7_client_sender:
    SB = b'\x1B'
    EB = b'\x1C'
    CR = b'\x0D'

    def __init__(self, address, port, timeout=-1, retry=3.0):
        self._resend = 0
        self._timeout = timeout
        self._retry = retry
        # self._reader, self._writer = await asyncio.open_connection(address, port)
        self._address = address
        self._port = port
        self._writer = None
        self._reader = None

    async def connect(self):
        self._reader, self._writer = await asyncio.open_connection(self._address, self._port)

    async def disconnect(self):
        await self._writer.wait_closed()
And this is the code in my driver where the exception occurs during the call to disconnect:
# test send and respond
import asyncio
import string
import unicodedata
import simple_hl7_client
import time

## open a connection, sleep 3 seconds, then close ##
myclient = simple_hl7_client.hl7_client_sender('192.168.226.128', 54321)
asyncio.run(myclient.connect())
time.sleep(3)
asyncio.run(myclient.disconnect())
The exception occurs during the call to asyncio.run(myclient.disconnect()).
This is the exception:
Traceback (most recent call last):
  File ".\test_simple_hl7_client.py", line 11, in <module>
    asyncio.run(myclient.disconnect())
  File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\base_events.py", line 583, in run_until_complete
    return future.result()
  File "D:\data\FromOldPC\code\ASYNCIOTESTING\simple_hl7_client.py", line 23, in disconnect
    self._writer.close()
  File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\streams.py", line 317, in close
    return self._transport.close()
  File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\selector_events.py", line 663, in close
    self._loop.call_soon(self._call_connection_lost, None)
  File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\base_events.py", line 687, in call_soon
    self._check_closed()
  File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\base_events.py", line 479, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
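As far as I can tell, the immediate cause is that each asyncio.run() call creates its own event loop and closes it on return, so the writer opened during asyncio.run(myclient.connect()) belongs to a loop that is already closed by the time disconnect() runs. A sketch of the driver with everything kept inside a single asyncio.run() call (same class and address as above; disconnect() is assumed to call self._writer.close() before wait_closed(), as the traceback suggests):

import asyncio
import simple_hl7_client

async def main():
    myclient = simple_hl7_client.hl7_client_sender('192.168.226.128', 54321)
    await myclient.connect()
    await asyncio.sleep(3)       # non-blocking stand-in for time.sleep(3)
    await myclient.disconnect()  # runs on the same loop that opened the writer

asyncio.run(main())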

Awaiting an asyncio.Future raises concurrent.futures._base.CancelledError instead of waiting for a value/exception to be set

When I run the following Python code:
import asyncio
import logging

logging.basicConfig(level=logging.DEBUG)

async def read_future(fut):
    print(await fut)

async def write_future(fut):
    fut.set_result('My Value')

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    asyncio.gather(read_future(fut), write_future(fut))

asyncio.run(main(), debug=True)
Instead of read_future waiting for the result of fut to be set, the program crashes with the following error:
DEBUG:asyncio:Using selector: KqueueSelector
ERROR:asyncio:_GatheringFuture exception was never retrieved
future: <_GatheringFuture finished exception=CancelledError() created at /Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py:642>
source_traceback: Object created at (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/base_events.py", line 566, in run_until_complete
    self.run_forever()
  File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/base_events.py", line 534, in run_forever
    self._run_once()
  File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/base_events.py", line 1763, in _run_once
    handle._run()
  File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "<stdin>", line 4, in main
  File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py", line 766, in gather
    outer = _GatheringFuture(children, loop=loop)
  File "/Users/<user>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py", line 642, in __init__
    super().__init__(loop=loop)
concurrent.futures._base.CancelledError
DEBUG:asyncio:Close <_UnixSelectorEventLoop running=False closed=False debug=True>
What am I doing wrong in this code? I want to be able to await the Future fut and continue after the Future has a value/exception set.
Your problem is that asyncio.gather is itself async (returns an awaitable); by not awaiting it, you never handed control back to the event loop, nor did you store the awaitable, so it was immediately cleaned up, implicitly cancelling it, and by extension, all of the awaitables it controlled.
To fix, just make sure you await the results of the gather:
await asyncio.gather(read_future(fut), write_future(fut))
Try it online!
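For reference, here is the whole program with only that line changed; it prints My Value:

import asyncio
import logging

logging.basicConfig(level=logging.DEBUG)

async def read_future(fut):
    print(await fut)

async def write_future(fut):
    fut.set_result('My Value')

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Awaiting the gather keeps a reference to it and yields control to the
    # event loop, so both child coroutines actually run.
    await asyncio.gather(read_future(fut), write_future(fut))

asyncio.run(main(), debug=True)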
From https://docs.python.org/3/library/asyncio-future.html#asyncio.Future.result:
result()
Return the result of the Future.
If the Future is done and has a result set by the set_result() method, the result value is returned.
If the Future is done and has an exception set by the set_exception() method, this method raises the exception.
If the Future has been cancelled, this method raises a CancelledError exception.
If the Future’s result isn’t yet available, this method raises a InvalidStateError exception.
(Bold added.)
I'm not sure why the Future is getting cancelled, but that seems to be the cause of the issue.
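The cancellation behaviour described in the quote is easy to reproduce on its own, which is essentially what the un-awaited _GatheringFuture in the log ran into (a standalone sketch, not the asker's code):

import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.cancel()      # mirrors what happens to the gather nobody awaited or stored
    try:
        fut.result()  # a cancelled Future raises CancelledError here
    except asyncio.CancelledError:
        print("future was cancelled")

asyncio.run(main())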

How to catch "[Errno 32] Broken pipe" in a WSGI handler

WSGI is extremely useful for building highly concurrent HTTP servers to support e.g. long polling. Typically, however, the long-running HTTP request will at some point be ended by the client side; to clean up any resources and open handles, the WSGI server backend should be notified of any such events. However, it doesn't currently seem to be possible to catch those events in the WSGI handler:
# pseudocode example
def application(env, start_response):
    start_response(...)
    q = Queue()
    ev_handle = register_event_handler(lambda event, arg: q.put((event, arg)))
    # ??? need to call e.g. ev_handle.unregister() when the HTTP request is terminated
    return iter(lambda: render(q.get()), None)
For example, when using gevent.pywsgi, the corresponding exception (error: [Errno 32] Broken pipe) is thrown somewhere inside gevent and never even seems to surface anywhere the handler could potentially see it:
Traceback (most recent call last):
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/pywsgi.py", line 508, in handle_one_response
    self.run_application()
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/pywsgi.py", line 495, in run_application
    self.process_result()
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/pywsgi.py", line 486, in process_result
    self.write(data)
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/pywsgi.py", line 376, in write
    self._write(data)
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/pywsgi.py", line 369, in _write
    self._sendall(data)
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/pywsgi.py", line 355, in _sendall
    self.socket.sendall(data)
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/socket.py", line 458, in sendall
    data_sent += self.send(_get_memory(data, data_sent), flags)
  File "/Users/erik.allik/.virtualenvs/myproj/lib/python2.7/site-packages/gevent/socket.py", line 435, in send
    return sock.send(data, flags)
Looks like what happens when a request gets terminated is that, in addition to the (seemingly uncatchable) exception traceback, the iterator that was returned from the WSGI handler is .close()-d. It is thus possible to determine when any workers/resources/handles associated with the response should be closed. This is basically what werkzeug.wsgi.ClosingIterator does:
class ClosingIterator(object):
    def __init__(self, iterable, on_close):
        self._iterator = iter(iterable)
        self.close = on_close

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._iterator)

def application(env, start_response):
    start_response(...)
    q = Queue()
    ev_handle = register_event_handler(lambda event, arg: q.put((event, arg)))
    return ClosingIterator(
        iter(lambda: render(q.get()), None),
        on_close=ev_handle.unregister
    )
This does not, however, silence the error message/traceback, but that seems tolerable unless somebody can come up with a solution that fixes even that.

tornado IOError "Stream is closed" on request finish()

I'm using Tornado 2.0 and occasionally when I call self.finish() to end an asynchronous request, I'll get an IOError with the message "Stream is closed". It looks as though this happens when the client ends a request (i.e. by navigating to another page) prior to the server calling finish(). Is this expected behavior and something my code just needs to handle? I found this bug from a year ago that suggests this is NOT something client code should be handling: https://github.com/facebook/tornado/issues/81. Is this indicative of a bug in my code, and if so, what are the likely causes?
Stacktrace:
Traceback (most recent call last):
  File "my_code.py", line 260, in my_method
    self.finish()
  File "/usr/lib/python2.6/site-packages/tornado/web.py", line 634, in finish
    self.request.finish()
  File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 555, in finish
    self.connection.finish()
  File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 349, in finish
    self._finish_request()
  File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 372, in _finish_request
    self.stream.read_until(b("\r\n\r\n"), self._header_callback)
  File "/usr/lib/python2.6/site-packages/tornado/iostream.py", line 137, in read_until
    self._check_closed()
  File "/usr/lib/python2.6/site-packages/tornado/iostream.py", line 403, in _check_closed
    raise IOError("Stream is closed")
IOError: Stream is closed
self.finish() is called to end the asynchronous request, and some functions like self.render() will call self.finish() for you.
If you call self.finish() after the connection is closed, it will cause this error,
so check whether you call some function that finishes the connection before self.finish().
Or you can do it like this:
if not self._finished:
    # if the connection is closed, it won't call this function
    self.finish()
else:
    pass
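In a handler, that guard might sit roughly like this (a sketch in Tornado 2.x's callback style; do_long_poll is a hypothetical placeholder for whatever asynchronous work the handler kicks off):

import tornado.web

class PollHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # do_long_poll is hypothetical: any async work that calls back later
        self.do_long_poll(callback=self.on_done)

    def on_done(self, result):
        # The client may have navigated away while we were waiting; only
        # finish if Tornado hasn't already finished the request.
        if not self._finished:
            self.write(result)
            self.finish()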
