I have a scraping engine that uses an arbitrary proxy list and retries when a proxy doesn't work. So there are plenty of proxies that time out, refuse connections, have bad certificates, etc. After switching from aiohttp to httpx, I get plenty of internal exceptions that don't seem to hinder anything; they just spam the log.
16:47:37: Future exception was never retrieved
future: <Future finished exception=BrokenResourceError()>
Traceback (most recent call last):
File "/usr/lib/python3.9/asyncio/selector_events.py", line 856, in _read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pooh/venv39/lib/python3.9/site-packages/httpcore/_backends/anyio.py", line 60, in read
return await self.stream.receive(n)
File "/home/pooh/venv39/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 1095, in receive
raise self._protocol.exception
anyio.BrokenResourceError
Maybe one of the developers can shed some light on what this is?
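For context, the "Future exception was never retrieved" message comes from asyncio itself, not from httpx: a Future finished with an exception and nothing ever awaited it or called .result() before it was garbage-collected. A minimal stdlib-only sketch of the mechanism (unrelated to httpx internals) - retrieving the exception marks it as handled, which suppresses the log line:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Simulate a transport dying mid-read, as with a bad proxy.
    fut.set_exception(ConnectionResetError(104, "Connection reset by peer"))
    # Calling .exception() marks the exception as retrieved, so asyncio
    # will NOT log "Future exception was never retrieved" on GC.
    return fut.exception()

exc = asyncio.run(main())
print(type(exc).__name__)  # ConnectionResetError
```

If the future were simply dropped without the .exception() call, asyncio would emit exactly the warning shown above when the future is collected.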
For my Python 3.10 project, we were using aioredis (2.0.1) to connect to a Redis cache, and all of a sudden every request hitting the cache started failing. After some analysis, we found that aioredis has been merged into redis-py. We then removed aioredis and added redis (4.5.1) as a dependency in the Pipfile.
I didn't add any extra code, just changed the import from
import aioredis
to
from redis import asyncio as aioredis
But that didn't resolve the issue completely; now about half of the requests fail with the error below (in one hour, 145 requests succeeded while 79 failed).
redis.exceptions.ConnectionError: Error UNKNOWN while writing to socket. Connection lost
We use aioredis.Redis for the connection:
aioredis.Redis(
    host=redis_hostname,
    port=redis_port,
    db=db_name,
    password=redis_password,
    ssl=True,
    connection_pool=aioredis.ConnectionPool.from_url(
        f"{redis_protocol}://:{redis_password}@{redis_hostname}:{redis_port}/{db_name}",
        connection_class=aioredis.Connection,
        max_connections=redis_pool_size,
    ),
)
Below is the error trace
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/redis/asyncio/connection.py", line 788, in send_packed_command
    await self._writer.drain()
  File "/usr/local/lib/python3.10/asyncio/streams.py", line 371, in drain
    await self._protocol._drain_helper()
  File "/usr/local/lib/python3.10/asyncio/streams.py", line 167, in _drain_helper
    raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/ddtrace/contrib/asgi/middleware.py", line 173, in call
    return await self.app(scope, receive, wrapped_send)
  .....
  File "/usr/local/lib/python3.10/site-packages/redis/asyncio/client.py", line 487, in _send_command_parse_response
    await conn.send_command(*args)
redis.exceptions.ConnectionError: Error UNKNOWN while writing to socket. Connection lost.
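Until the root cause is found, one generic mitigation is to retry cache calls when the connection drops (redis-py 4.x also has built-in support via the `retry` and `retry_on_error` client arguments; in real code you would catch redis.exceptions.ConnectionError rather than the builtin). The sketch below uses hypothetical stand-in code, not the poster's real call sites:

```python
import asyncio

async def with_retries(coro_factory, retries=3, base_delay=0.01,
                       retry_on=(ConnectionError,)):
    # Re-run the coroutine produced by coro_factory, backing off between
    # attempts, whenever a retryable error is raised.
    for attempt in range(retries):
        try:
            return await coro_factory()
        except retry_on:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

# Hypothetical stand-in for a redis call that fails once, then succeeds.
calls = {"n": 0}

async def flaky_get():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("Error UNKNOWN while writing to socket.")
    return b"value"

result = asyncio.run(with_retries(flaky_get))
print(result)  # b'value'
```

The factory takes no arguments so the same call can be re-issued on a fresh connection from the pool after the old one was lost.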
I'm trying to connect to my websocket server as a client, but every time I try, this error comes up. I've tried literally everything, but the result is always the same (with other languages, for example Node.js, I can connect without problems).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/http.py", line 122, in read_response
raise EOFError("connection closed while reading HTTP status line") from exc
EOFError: connection closed while reading HTTP status line
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 144, in read_http_response
raise InvalidMessage("did not receive a valid HTTP response") from exc
websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
Code:
import asyncio
import websockets
from websockets import client

async def receiver(ws):
    async for message in ws:
        print(f"{message}")

async def main():
    async for websocket in client.connect('wss://localhost:8777/password/AAAAAA/1/175/'):
        try:
            print('connecting')
            await receiver(websocket)
        except websockets.ConnectionClosed:
            print('error')

asyncio.run(main())
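For what it's worth, "connection closed while reading HTTP status line" means the server closed the TCP connection before sending any HTTP response at all. With wss:// against localhost, a common cause is a TLS mismatch: the server either speaks plain ws://, or presents a self-signed certificate that the default SSL context rejects during the handshake. A sketch for local testing only (assuming the server really does speak TLS; never disable verification in production):

```python
import ssl

# A context that accepts self-signed certificates -- local testing only.
ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE

# Then pass it to the websockets client, e.g.:
#   async with websockets.connect('wss://localhost:8777/password/AAAAAA/1/175/',
#                                 ssl=ssl_ctx) as ws:
#       ...
print(ssl_ctx.verify_mode)
```

If the server does not use TLS at all, switching the URL scheme to ws:// is the fix instead.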
WARNING: Invalid HTTP request received.
Traceback (most recent call last):
File "/home/ubuntu/MySQL_UI_Backend/venv/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 136, in handle_events
event = self.conn.next_event()
File "/home/ubuntu/MySQL_UI_Backend/venv/lib/python3.8/site-packages/h11/_connection.py", line 432, in next_event
raise RemoteProtocolError(
h11._util.RemoteProtocolError: Receive buffer too long
The APIs work fine locally, but when I run them on EC2 I get the above error. The ports I have used are 7879 for the API server and 6869 for the frontend UI running on React.
1. Make sure that the ports you use are open in your security group settings.
2. Try changing the URL from https to http.
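The second point explains the h11 error: when a browser is told to use https against a port where uvicorn serves plain HTTP, the first bytes on the wire are a TLS ClientHello, not an HTTP request line. The parser never finds a valid request in the incoming bytes and keeps buffering, which plausibly ends in "Receive buffer too long". A tiny illustration of the mismatch:

```python
# First bytes a browser sends for an https:// URL: a TLS handshake record
# (content type 0x16, protocol version 3.x) -- not an ASCII request line.
tls_prefix = bytes([0x16, 0x03, 0x01])
http_prefix = b"GET / HTTP/1.1\r\n"

# A plain-HTTP parser expects something like http_prefix; the TLS bytes
# contain no method, no path, and no CRLF-terminated request line.
print(tls_prefix.startswith(b"GET"))   # False
print(http_prefix.startswith(b"GET"))  # True
```

So "Invalid HTTP request received" on the server side usually pairs with a protocol mismatch on the client side, not a bug in the route handlers.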
I have code which produces a potentially unbounded call stack (simplified):
def listen(self, pipeline):
    try:
        # BlockingChannel.consume yields (method_frame, properties, body) tuples
        for method_frame, properties, body in self.channel.consume(self.queue_name):
            pipeline.process(body)
            self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
    except (pika.exceptions.StreamLostError,
            pika.exceptions.ConnectionClosed,
            pika.exceptions.ChannelClosed,
            ConnectionResetError) as e:
        logging.warning(f'Connection dropped for queue {self.queue_name}. Exception: {e}. Reconnecting...')
        self._reconnect()
        self.listen(pipeline)
If there are any network issues, it logs a warning, reconnects and carries on. But each reconnect also adds one extra call to the call stack, so my stack trace on error ends up looking like this:
...
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 1336, in _flush_output
self._connection._flush_output(lambda: self.is_closed, *waiters)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 522, in _flush_output
raise self._closed_result.value.error
pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/msworker/queue.py", line 81, in listen
self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 2113, in basic_ack
self._flush_output()
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 1336, in _flush_output
self._connection._flush_output(lambda: self.is_closed, *waiters)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 522, in _flush_output
raise self._closed_result.value.error
pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/msworker/queue.py", line 81, in listen
self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 2113, in basic_ack
self._flush_output()
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 1336, in _flush_output
self._connection._flush_output(lambda: self.is_closed, *waiters)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 522, in _flush_output
raise self._closed_result.value.error
pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 1097, in _on_socket_writable
self._produce()
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 820, in _produce
self._tx_buffers[0])
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 79, in retry_sigint_wrap
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 861, in _sigint_safe_send
return sock.send(data)
ConnectionResetError: [Errno 104] Connection reset by peer
How can I rerun listen function from scratch, without old calls in call stack?
UPDATE
To avoid this issue, the right approach is to move the work into a nested function and rerun that, rather than having the function call itself:
def listen(self, pipeline):
    try:
        self._listen(pipeline)
    except (pika.exceptions.StreamLostError,
            pika.exceptions.ConnectionClosed,
            pika.exceptions.ChannelClosed,
            ConnectionResetError) as e:
        logging.warning(f'Connection dropped for queue {self.queue_name}. Exception: {e}. Reconnecting...')
        self._reconnect()
        self._listen(pipeline)

def _listen(self, pipeline):
    for method_frame, properties, body in self.channel.consume(self.queue_name):
        pipeline.process(body)
But still, is there a way to rerun the recursive function with a clean call stack?
Why use recursion when you can use simple iteration?
def listen(self, pipeline):
    while True:
        try:
            for method_frame, properties, body in self.channel.consume(self.queue_name):
                pipeline.process(body)
                self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
            return
        except (pika.exceptions.StreamLostError,
                pika.exceptions.ConnectionClosed,
                pika.exceptions.ChannelClosed,
                ConnectionResetError) as e:
            logging.warning(f'Connection dropped for queue {self.queue_name}. Exception: {e}. Reconnecting...')
            self._reconnect()
But still, is there a way to rerun the recursive function with a clean call stack?
Actually, what you currently have IS a "clean call stack" - it's the real call stack, with one distinct frame per call (recursive or not). Some languages do "optimize" tail-recursive calls (by squashing / reusing frames); Python's designers chose not to, to make debugging easier.
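The difference is easy to see by measuring the stack depth directly. A small illustrative sketch (toy functions, not the pika code) showing that each recursive "retry" leaves a frame behind while a loop reuses one frame:

```python
import traceback

def retry_recursive(attempts_left):
    # Simulates the recursive listen(): each "reconnect" is a nested call.
    if attempts_left == 0:
        return len(traceback.extract_stack())
    return retry_recursive(attempts_left - 1)

def retry_iterative(attempts):
    # Simulates the while-True version: every retry reuses the same frame.
    depth = None
    for _ in range(attempts):
        depth = len(traceback.extract_stack())
    return depth

base = len(traceback.extract_stack())
print(retry_recursive(5) - base)  # 6 frames deeper: one per "retry"
print(retry_iterative(5) - base)  # 1 frame deeper, regardless of retries
```

This is also why the recursive version eventually dies with RecursionError if the connection keeps flapping, while the loop can reconnect forever.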
I am trying to get my bottle server to work so that when one person in a game logs out, everyone can immediately see it. As I am using long polling, there is an open request for every user.
The bit I am having trouble with is catching the exception thrown when a user leaves the page and the long poll can no longer write to them. The error message is below.
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 438, in handle_one_response
self.run_application()
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 425, in run_application
self.process_result()
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 416, in process_result
self.write(data)
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 373, in write
self.socket.sendall(msg)
File "/usr/lib/python2.7/dist-packages/gevent/socket.py", line 509, in sendall
data_sent += self.send(_get_memory(data, data_sent), flags)
File "/usr/lib/python2.7/dist-packages/gevent/socket.py", line 483, in send
return sock.send(data, flags)
error: [Errno 32] Broken pipe
<WSGIServer fileno=3 address=0.0.0.0:8080>: Failed to handle request:
request = GET /refreshlobby/1 HTTP/1.1 from ('127.0.0.1', 53331)
application = <bottle.Bottle object at 0x7f9c05672750>
127.0.0.1 - - [2013-07-07 10:59:30] "GET /refreshlobby/1 HTTP/1.1" 200 160 6.038377
The function to handle that page is this.
@route('/refreshlobby/<id>')
def refreshlobby(id):
    while True:
        yield lobby.refresh()
        gevent.sleep(1)
I tried catching the exception within the function, and in a decorator wrapping @route, neither of which worked. I tried making an @error(500) decorator, but that didn't trigger either. It seems this is happening in the internals of bottle.
Edit: I know now that I need to be catching socket.error, but I don't know where in my code to catch it.
The WSGI runner
Look closely at the traceback: this is not happening in your function, but in the WSGI runner.
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gevent/pywsgi.py", line 438, in handle_one_response
self.run_application()
The way the WSGI runner works, in your case, is:
1. Receives a request
2. Gets a partial response from your code
3. Sends it to the client (this is where the exception is raised)
4. Repeats steps 2-3
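The steps above can be sketched like this (hypothetical objects, not gevent's actual implementation) - note where the send happens relative to your generator:

```python
def handle_one_response(app_iter, sock):
    # What a WSGI runner roughly does with a generator response.
    try:
        for chunk in app_iter:   # pulls a chunk from YOUR code
            sock.sendall(chunk)  # "Broken pipe" is raised HERE, in the
                                 # runner, if the client has gone away
    finally:
        close = getattr(app_iter, "close", None)
        if close is not None:
            close()              # your generator only sees GeneratorExit

# A client that disappears after receiving one chunk.
class FakeSocket:
    def __init__(self):
        self.sent = []
    def sendall(self, chunk):
        if self.sent:
            raise OSError(32, "Broken pipe")
        self.sent.append(chunk)

def refresh_forever():
    try:
        while True:
            yield b"lobby state\n"
    except GeneratorExit:
        # The only signal the route function ever gets.
        print("generator closed by runner")

sock = FakeSocket()
try:
    handle_one_response(refresh_forever(), sock)
except OSError as e:
    print(e)  # [Errno 32] Broken pipe
```

The socket.error is raised between two resumptions of your generator, which is why no try/except inside the route body can see it; the most your code observes is GeneratorExit when the runner discards the response iterator.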
You can't catch this exception
This error is not raised in your code.
It happens when you try to send a response to a client that closed the connection.
You'll therefore not be able to catch this error from within your code.
Alternate solutions
Unfortunately, it's not possible to tell from within the generator (your code) when it stops being consumed.
It's also not a good idea to rely on your generator being garbage collected.
You have a couple other solutions.
"Last seen"
Another way to know when a user disconnects would be to record a "last seen" timestamp after your yield statement.
You'll be able to identify clients that disconnected if their last seen is far in the past.
Other runner
A different, non-WSGI runner will be more appropriate for a realtime application. You could give Tornado a try.
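The "last seen" idea can be sketched as follows; all names here (last_seen, refreshlobby_body, the injected clock parameter) are hypothetical, not bottle API:

```python
import time

last_seen = {}  # user id -> timestamp of the last chunk they consumed

def refreshlobby_body(user_id, lobby, clock=time.time):
    # Generator body for the long-poll route: after each chunk we managed
    # to hand to the runner, record that this client was still connected.
    while True:
        yield lobby.refresh()
        last_seen[user_id] = clock()

def disconnected_users(timeout=5.0, clock=time.time):
    # Anyone whose "last seen" is older than `timeout` seconds has most
    # likely dropped their long-poll connection.
    now = clock()
    return [uid for uid, ts in last_seen.items() if now - ts > timeout]
```

The key detail is that the timestamp is written after the yield: the runner only resumes the generator once the previous chunk was sent successfully, so the timestamp approximates "the client was still reachable at this time".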