I'm having the same problem as this github issue with python websockets:
https://github.com/aaugustin/websockets/issues/367
The proposed solution isn't working for me though. The error I'm getting is:
websockets.exceptions.ConnectionClosed: WebSocket connection is closed: code = 1006 (connection closed abnormally [internal]), no reason
This is my code:
async def get_order_book(symbol):
    with open('test.csv', 'a+') as csvfile:
        csvw = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
        DT = Data(data=data, last_OB_id=ob_id, last_TR_id=tr_id, sz=10, csvw=csvw)
        websocket = await websockets.connect(ws_url)
        while True:
            if not websocket.open:
                print('Reconnecting')
                websocket = await websockets.connect(ws_url)
            else:
                resp = await websocket.recv()
                update = ujson.loads(resp)
                DT.update_data(update)
async def get_order_books():
    r = requests.get(url='https://api.binance.com/api/v1/ticker/24hr')
    await asyncio.gather(*[get_order_book(data['symbol']) for data in r.json()])

if __name__ == '__main__':
    asyncio.run(get_order_books())
The way I've been testing it is by closing my internet connection, but after a ten-second delay it still raises the 1006 error.
I'm running Python 3.7 and Websockets 7.0.
Let me know what your thoughts are, thanks!
I encountered the same problem.
After digging a while I found multiple versions of the answer saying to just reconnect, but I didn't think that was a reasonable route, so I dug some more.
Enabling DEBUG-level logging, I found out that python websockets sends ping packets by default and, failing to receive a response in time, times out the connection. I am not sure if this lines up with the standard, but at least JavaScript websockets are completely fine with the server my Python script times out against.
The fix is simple: pass another keyword argument to connect:
websockets.connect(uri, ping_interval=None)
The same argument also works for the server-side serve function.
More info at https://websockets.readthedocs.io/en/stable/api.html
So I found the solution:
When the connection closes, websocket.recv() raises an exception, which breaks out of the while loop. So in order to keep the websocket running you have to surround
resp = await websocket.recv()
with try ... except and have
print('Reconnecting')
websocket = await websockets.connect(ws_url)
in the exception handler.
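The reconnect-in-the-exception-handler pattern described above can be sketched without a network. FakeSocket and read_loop below are hypothetical stand-ins: FakeSocket plays the role of a websockets connection that delivers two messages and then drops, and ConnectionError stands in for websockets.exceptions.ConnectionClosed; with the real library, each `await connect()` would be `await websockets.connect(ws_url)`.

```python
import asyncio

class FakeSocket:
    """Stand-in for a websocket connection: two messages, then it drops."""
    def __init__(self):
        self.msgs = ["a", "b"]

    async def recv(self):
        if not self.msgs:
            raise ConnectionError("closed")  # stands in for ConnectionClosed
        return self.msgs.pop(0)

async def connect():
    # With the real library: return await websockets.connect(ws_url)
    return FakeSocket()

async def read_loop(max_reconnects=1):
    ws = await connect()
    out, reconnects = [], 0
    while True:
        try:
            # recv() raises when the connection closes...
            out.append(await ws.recv())
        except ConnectionError:
            if reconnects >= max_reconnects:
                break
            # ...so the reconnect lives in the exception handler.
            print('Reconnecting')
            reconnects += 1
            ws = await connect()
    return out

result = asyncio.run(read_loop())
print(result)  # → ['a', 'b', 'a', 'b']
```

The max_reconnects cap exists only so the demo terminates; a real client would typically retry forever, with a backoff delay.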
I ran into this same issue. The solution by shinola worked for a while, but I would still get errors sometimes.
To handle this I put the connection into a while True: loop and added two separate try except blocks. The consumer variable is a function that processes the messages received from the websocket connection.
import asyncio
import json
import websockets

async def websocketConnect(uri, payload, consumer):
    websocket = await websockets.connect(uri, ssl=True)
    await websocket.send(json.dumps(payload))
    while True:
        if not websocket.open:
            try:
                print('Websocket is NOT connected. Reconnecting...')
                websocket = await websockets.connect(uri, ssl=True)
                await websocket.send(json.dumps(payload))
            except Exception:
                print('Unable to reconnect, trying again.')
        try:
            async for message in websocket:
                if message is not None:
                    consumer(json.loads(message))
        except Exception:
            print('Error receiving message from websocket.')
I start the connection using:
def startWebsocket(uri, payload, consumer):
    asyncio.run(websocketConnect(uri, payload, consumer))
I might be a year late, but I was just having this issue. There were no connection issues in my HTML5 websocket client, but my .py test client would crash after around a minute (raising 1006 exceptions for both the client and the server). As a test I started to await connection.recv() after every frame the client sends. No more issues. I didn't need to receive data in my .py test client, but apparently it causes issues if you let it build up. It's also probably why my web version was working fine, since I was handling the .onmessage callbacks.
I'm pretty sure this is why the error occurs, so just receiving the data is an actual solution, rather than disabling pinging and defeating a vital function of the protocol.
I think it's explained here: https://websockets.readthedocs.io/en/stable/faq.html
it means that the TCP connection was lost. As a consequence, the
WebSocket connection was closed without receiving a close frame, which
is abnormal.
You can catch and handle ConnectionClosed to prevent it from being
logged.
There are several reasons why long-lived connections may be lost:
End-user devices tend to lose network connectivity often and
unpredictably because they can move out of wireless network coverage,
get unplugged from a wired network, enter airplane mode, be put to
sleep, etc.
HTTP load balancers or proxies that aren’t configured for long-lived
connections may terminate connections after a short amount of time,
usually 30 seconds.
I solved this problem by replacing uvicorn with hypercorn:
hypercorn app:app
For a class assignment I need to use the socket API to build a file transfer application. For this project there are two connections between the client and server: one, called the control connection, is used to send error messages, and the other is used to send data. My question is, on the client side, how can I keep the control socket open and waiting for any possible error messages from the server while not blocking the rest of the program from running?
Example code (removed some elements)
# Create the socket to bind to the server
clientSocket = socket(AF_INET, SOCK_STREAM)
clientSocket.connect((serverName, portNum))
clientSocket.send(sendCommand)  # Send either the list or get command on the control connection

# If the command is valid, the server makes a data connection and waits for the client to connect
clientData = socket(AF_INET, SOCK_STREAM)
clientData.connect((serverName, dataport))  # Client connects
recCommand = clientData.recv(2000)  # Receive the data from the server if the command succeeds
badCommand = clientSocket.recv(2000)  # But on error I need to skip clientData.recv and catch the error message here
When there is an error, the server should close the data socket, so recv on it ends automatically (returning an empty bytes object).
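One way to watch the control socket without blocking the rest of the program is select with a zero timeout, which only reports whether data is already pending. This is a sketch, not the assignment's required design; poll_control is a hypothetical helper, and the socketpair below merely stands in for the client/server connection so the demo runs locally.

```python
import select
import socket

def poll_control(ctrl_sock, timeout=0.0):
    """Return a pending error message from the control socket, or None.

    With timeout=0.0 this never blocks: select only reports sockets
    that are already readable.
    """
    readable, _, _ = select.select([ctrl_sock], [], [], timeout)
    if ctrl_sock in readable:
        data = ctrl_sock.recv(2000)
        return data.decode() if data else ""  # "" means the server closed the socket
    return None

# Demo: a local socket pair stands in for the control connection.
a, b = socket.socketpair()
r1 = poll_control(a)          # nothing pending yet
b.send(b"ERROR: bad command")
r2 = poll_control(a, 1.0)     # message is now waiting
print(r1, r2)                 # → None ERROR: bad command
```

The main loop can call poll_control between data transfers; alternatives with the same effect are ctrl_sock.setblocking(False) or a reader thread.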
I have a rabbitmq server and a amqp consumer (python) using kombu.
I have installed my app in a system that has a firewall that closes idle connections after 1 hour.
This is my amqp_consumer.py:
try:
    # connections
    with Connection(self.broker_url, ssl=_ssl, heartbeat=self.heartbeat) as conn:
        chan = conn.channel()
        # more stuff here
        with conn.Consumer(queue, callbacks=[messageHandler], channel=chan):
            # Process messages and handle events on all channels
            while True:
                conn.drain_events()
except Exception as e:
    pass  # do stuff
What I want is to reconnect if the firewall closes the connection. Should I use the heartbeat argument, or should I pass a timeout argument (of 3600 sec) to the drain_events() function?
What are the differences between the two options? (They seem to do the same thing.)
Thanks.
drain_events on its own would not produce any heartbeats unless there are messages to consume and acknowledge. If the queue is idle, then eventually the connection would be closed (by the rabbit server or by your firewall).
What you should do is use both the heartbeat and the timeout like so:
while True:
    try:
        conn.drain_events(timeout=1)
    except socket.timeout:
        conn.heartbeat_check()
This way even if the queue is idle the connection won't be closed.
Besides that you might want to wrap the whole thing with a retry policy in case the connection does get closed or some other network error.
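The retry policy mentioned above can be sketched framework-free. run_with_retries and the flaky consumer below are hypothetical names: the consumer stands in for a function that opens the connection and runs the drain_events loop, and raises when the connection drops.

```python
import socket
import time

def run_with_retries(consume_once, max_retries=3, delay=0.0):
    """Rerun the consume loop when the connection drops, up to max_retries times."""
    attempts = 0
    while True:
        try:
            return consume_once()
        except (ConnectionError, socket.error):
            attempts += 1
            if attempts > max_retries:
                raise  # give up after too many consecutive failures
            time.sleep(delay)  # a real policy would back off exponentially here

# Demo: a consumer that fails twice, then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("dropped")
    return "done"

result = run_with_retries(flaky)
print(result)  # → done
```

kombu also ships its own retry helpers (e.g. Connection.ensure_connection), which may be preferable to hand-rolling this.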
For an HTTP persistent connection I wrote the following code:
import logging
import tornado.ioloop
import tornado.web

class LongPolling(tornado.web.RequestHandler):
    waiters = set()

    def get(self):
        LongPolling.waiters.add(self)
        for x in LongPolling.waiters:
            x.write("Broadcast all")
            x.flush()
        return

    def on_close(self):
        logging.warning("Connection closed *********")
        LongPolling.waiters.remove(self)

if __name__ == "__main__":
    application = tornado.web.Application([
        (r"/", LongPolling),
    ])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
I am broadcasting every time a new connection comes in. But the problem with this is that the connection closes immediately after get() returns.
So how do I keep the connection open after a get() call?
There is no such thing as a "persistent" HTTP connection. The Connection: keep-alive header permits client and server to perform a new HTTP request/response cycle without creating a new underlying TCP connection, to save a bit of network traffic, but that is not visible to the application, and it is usually implemented on the server side by a reverse proxy. Clients will have to make new requests when they receive responses to their GETs.
If that's not what you had in mind, and you just want to respond to requests a bit at a time, then you might be looking for tornado.web.asynchronous. Note, however, that most in-browser clients won't benefit from this very much; XHRs, for instance, won't fire until the response completes, so browser applications will have to start a new request anyway.
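The underlying long-polling idea, hold each GET open until there is something to broadcast, can be sketched framework-free with asyncio futures. This is an illustration of the pattern, not Tornado's API: long_poll plays the role of the get() handler, and broadcast resolves every waiting request at once.

```python
import asyncio

# Pending "requests", each represented by a Future that the response awaits.
waiters = set()

async def long_poll():
    """Register as a waiter and hold the 'connection' open until broadcast()."""
    fut = asyncio.get_running_loop().create_future()
    waiters.add(fut)
    try:
        return await fut  # the response is not written until a broadcast arrives
    finally:
        waiters.discard(fut)  # analogous to cleanup in on_close()

def broadcast(msg):
    """Resolve every pending request with the same message."""
    for fut in list(waiters):
        if not fut.done():
            fut.set_result(msg)

async def main():
    tasks = [asyncio.create_task(long_poll()) for _ in range(3)]
    await asyncio.sleep(0)  # let the tasks register themselves as waiters
    broadcast("hello")
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)  # → ['hello', 'hello', 'hello']
```

In Tornado the same shape is achieved by making get() a coroutine that awaits before finishing the response, instead of returning immediately.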
I am writing an HTTP server that can serve big files to clients.
While writing to the wfile stream it is possible that the client closes the connection, and my server gets a socket error (errno 10053).
Is it possible to stop writing when the client closes the connection?
You can add these methods to your BaseHTTPRequestHandler class so that you can know if the client closed the connection:
def handle(self):
    """Handles a request, ignoring dropped connections."""
    try:
        return BaseHTTPRequestHandler.handle(self)
    except (socket.error, socket.timeout) as e:
        self.connection_dropped(e)

def connection_dropped(self, error, environ=None):
    """Called if the connection was closed by the client. By default
    nothing happens.
    """
    # add here the code you want to be executed if a connection
    # was closed by the client
In the second method, connection_dropped, you can add the code that you want executed each time a socket error (e.g. the client closed the connection) occurs.
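Putting the two methods into a complete handler might look like the sketch below. SafeHandler and the dropped list are hypothetical names for this demo; the do_GET body just sends a tiny response so the server can be exercised end to end.

```python
import socket
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

dropped = []  # record of connection errors, for demonstration

class SafeHandler(BaseHTTPRequestHandler):
    """Handler that survives clients disconnecting mid-response."""

    def handle(self):
        try:
            BaseHTTPRequestHandler.handle(self)
        except (socket.error, socket.timeout) as e:
            self.connection_dropped(e)

    def connection_dropped(self, error):
        dropped.append(error)  # e.g. stop writing the big file here

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Demo: serve on an ephemeral port and fetch one response.
server = HTTPServer(("127.0.0.1", 0), SafeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]
body = urllib.request.urlopen(url).read()
server.shutdown()
print(body)  # → b'ok'
```

When a client aborts mid-transfer, handle catches the resulting socket error and connection_dropped runs instead of the traceback being printed.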