Why are websockets connections constantly closed upon browser reload? - python

I was testing browser-based websockets using the slightly adapted (extra try) code from the documentation:
(backend)
import asyncio
import datetime
import random
import websockets

async def time(websocket, path):
    print("new connection")
    while True:
        now = datetime.datetime.utcnow().isoformat() + 'Z'
        try:
            await websocket.send(now)
        except websockets.exceptions.ConnectionClosed:
            print("connection closed")
        await asyncio.sleep(random.random() * 3)

start_server = websockets.serve(time, '127.0.0.1', 5678)

asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
(frontend)
<!DOCTYPE html>
<html>
    <head>
        <title>WebSocket demo</title>
    </head>
    <body>
        <script>
            var ws = new WebSocket("ws://127.0.0.1:5678/"),
                messages = document.createElement('ul');
            ws.onmessage = function (event) {
                var messages = document.getElementsByTagName('ul')[0],
                    message = document.createElement('li'),
                    content = document.createTextNode(event.data);
                message.appendChild(content);
                messages.appendChild(message);
            };
            document.body.appendChild(messages);
        </script>
    </body>
</html>
When starting the backend and opening the frontend .html file in a browser (Chrome), I get the expected
new connection
on the backend output, and the browser is filled in with timestamps.
After reloading the page (F5), I again get a new connection, followed by an ongoing stream of connection closed:
new connection
new connection
connection closed
connection closed
connection closed
connection closed
connection closed
At the same time, the browser behaves as expected, filling up with timestamps.
What is happening? Why is the connection stable the first time, but unstable after reloading the page? Is the connection to the websocket recreated automatically? (It looks that way, since the browser activity is fine.) But in that case, what causes it to be closed in the first place?

You catch the websockets.exceptions.ConnectionClosed exception, which is how websockets knows to unregister a closed connection.
Because the exception is caught, the closed connection is never unregistered, and messages keep being sent through it.
You can get past this by doing any of the following:
Not catching the exception.
Sending messages only via open sockets:
    if websocket.open:
        await websocket.send(now)
    # this doesn't unregister the closed socket connection.
Explicitly unregistering the closed socket connection from the websocket server:
    websocket.ws_server.unregister(websocket)
    # This raises an exception as well
Maintaining a list of connected clients in memory, sending messages to the connections in this list, and removing closed connections from the list on a caught exception:
    connected.append(websocket)
    await asyncio.wait([ws.send("Hello!") for ws in connected])
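A minimal sketch of that last pattern, assuming a module-level connected set; ConnectionError stands in for websockets.exceptions.ConnectionClosed so the sketch has no dependency on the websockets package:

```python
import asyncio

# Assumption of this sketch: clients are tracked in a module-level set.
connected = set()

async def broadcast(message):
    """Send message to every tracked client, dropping closed connections.

    In a real websockets server you would catch
    websockets.exceptions.ConnectionClosed instead of ConnectionError.
    """
    closed = set()
    for ws in connected:
        try:
            await ws.send(message)
        except ConnectionError:
            closed.add(ws)
    # Unregister the connections that failed, so we stop sending to them.
    connected.difference_update(closed)
```

Each connection handler would add its websocket to connected on entry and discard it in a finally block when the handler exits.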
Reference
http://websockets.readthedocs.io/en/stable/intro.html#common-patterns

Because you create a new WebSocket object on each browser reload. WebSockets can be persistent, but only if the object representing them is kept alive on both the server and client sides. You do nothing to preserve your JavaScript websocket upon page reload, and it can't ordinarily be done: you have to use mechanisms other than a full page reload to communicate. On page reload, the browser simply creates a new connection to the server as a new WebSocket.

Related

1006 Connection closed abnormally error with python 3.7 websockets

I'm having the same problem as this github issue with python websockets:
https://github.com/aaugustin/websockets/issues/367
The proposed solution isn't working for me though. The error I'm getting is:
websockets.exceptions.ConnectionClosed: WebSocket connection is closed: code = 1006 (connection closed abnormally [internal]), no reason
This is my code:
async def get_order_book(symbol):
    with open('test.csv', 'a+') as csvfile:
        csvw = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
        DT = Data(data=data, last_OB_id=ob_id, last_TR_id=tr_id, sz=10, csvw=csvw)
        while True:
            if not websocket.open:
                print('Reconnecting')
                websocket = await websockets.connect(ws_url)
            else:
                resp = await websocket.recv()
                update = ujson.loads(resp)
                DT.update_data(update)

async def get_order_books():
    r = requests.get(url='https://api.binance.com/api/v1/ticker/24hr')
    await asyncio.gather(*[get_order_book(data['symbol']) for data in r.json()])

if __name__ == '__main__':
    asyncio.run(get_order_books())
The way I've been testing it is by closing my internet connection, but after a ten second delay it still returns the 1006 error.
I'm running Python 3.7 and Websockets 7.0.
Let me know what your thoughts are, thanks!
I encountered the same problem.
After digging for a while I found multiple versions of the answer that say to just reconnect, but I didn't think that was a reasonable route, so I dug some more.
Enabling DEBUG-level logging, I found out that python websockets defaults to sending ping packets and, failing to receive a response, times out the connection. I am not sure if this lines up with the standard, but at least javascript websockets are completely fine with the server my python script times out with.
The fix is simple: add another kw argument to connect:
websockets.connect(uri, ping_interval=None)
The same argument should also work for server side function serve.
More info at https://websockets.readthedocs.io/en/stable/api.html
So I found the solution:
When the connection closes, it breaks out of the while loop. So in order to keep the websocket running you have to surround
    resp = await websocket.recv()
with try ... except and put
    print('Reconnecting')
    websocket = await websockets.connect(ws_url)
in the exception-handling part.
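That fix can be sketched generically. Here connect and handle are stand-ins for websockets.connect(ws_url) and the message processing from the question, ConnectionError stands in for websockets.exceptions.ConnectionClosed, and max_messages is an added parameter so the sketch can terminate:

```python
import asyncio

async def receive_forever(connect, handle, max_messages=None):
    # connect() returns a connection object with an async recv() method.
    websocket = await connect()
    received = 0
    while max_messages is None or received < max_messages:
        try:
            resp = await websocket.recv()
        except ConnectionError:
            # The connection dropped: reconnect instead of breaking the loop.
            print('Reconnecting')
            websocket = await connect()
            continue
        handle(resp)
        received += 1
```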
I ran into this same issue. The solution by shinola worked for a while, but I would still get errors sometimes.
To handle this I put the connection into a while True: loop and added two separate try except blocks. The consumer variable is a function that processes the messages received from the websocket connection.
async def websocketConnect(uri, payload, consumer):
    websocket = await websockets.connect(uri, ssl=True)
    await websocket.send(json.dumps(payload))
    while True:
        if not websocket.open:
            try:
                print('Websocket is NOT connected. Reconnecting...')
                websocket = await websockets.connect(uri, ssl=True)
                await websocket.send(json.dumps(payload))
            except:
                print('Unable to reconnect, trying again.')
        try:
            async for message in websocket:
                if message is not None:
                    consumer(json.loads(message))
        except:
            print('Error receiving message from websocket.')
I start the connection using:
def startWebsocket(uri, payload, consumer):
    asyncio.run(websocketConnect(uri, payload, consumer))
I might be a year late, but I was just having this issue. There were no connection issues on my html5 websocket client, but the .py test client would crash after around a minute (raising 1006 exceptions for both the client and server too). As a test I started to just await connection.recv() after every frame the client sends. No more issues. I didn't need the received data for my .py test client, but apparently it causes issues if you let it build up. It's also probably why my web version was working fine, since I was handling the .onmessage callbacks.
I'm pretty sure this is why this error occurs. So this solution of just receiving the data is an actual solution, and not one that disables pinging and screws up a vital function of the protocol.
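A sketch of that approach: read one frame back after every frame sent, so unread data never builds up. The connection object here is a stand-in for one returned by websockets.connect against an echo-style endpoint:

```python
import asyncio

async def send_and_drain(connection, frames):
    # After each send, also await one incoming frame instead of letting
    # unread data accumulate on the connection.
    replies = []
    for frame in frames:
        await connection.send(frame)
        replies.append(await connection.recv())
    return replies
```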
I think it is explained here: https://websockets.readthedocs.io/en/stable/faq.html
it means that the TCP connection was lost. As a consequence, the
WebSocket connection was closed without receiving a close frame, which
is abnormal.
You can catch and handle ConnectionClosed to prevent it from being
logged.
There are several reasons why long-lived connections may be lost:
End-user devices tend to lose network connectivity often and
unpredictably because they can move out of wireless network coverage,
get unplugged from a wired network, enter airplane mode, be put to
sleep, etc.
HTTP load balancers or proxies that aren’t configured for long-lived
connections may terminate connections after a short amount of time,
usually 30 seconds.
I solved this problem by replacing uvicorn with hypercorn:
hypercorn app:app

Socket programming stuck waiting for a response from the server

For a class assignment I need to use the socket API to build a file transfer application. For this project there two connections with the client and server, one is called the control and is used to send error messages and the other is used to send data. My question is, on the client side how can I keep the control socket open and waiting for any possible error messages to be received from the server while not blocking the rest of the program from running?
Example code (removed some elements)
# Create the socket to bind to the server
clientSocket = socket(AF_INET, SOCK_STREAM)
clientSocket.connect((serverName, portNum))
clientSocket.send(sendCommand)  # Send the list or get command to the server over the control connection

# (If the command is valid, the server makes a data connection and waits for the client to connect)
clientData = socket(AF_INET, SOCK_STREAM)
clientData.connect((serverName, dataport))  # Client connects
recCommand = clientData.recv(2000)  # Receive the data from the server if the command is successful
badCommand = clientSocket.recv(2000)  # But if there is an error then I need to skip the clientData.recv and catch the error message in badCommand
When there is an error, the data socket should be closed by the server, so recv ends automatically (returning an empty bytes object).
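One way to watch the control socket for error messages without blocking the rest of the program is the standard-library selectors module. A sketch, reusing the socket names from the question:

```python
import selectors

def wait_readable(clientSocket, clientData, timeout=None):
    """Wait until either socket is readable (has data, or was closed by
    the peer) and return labels for the ready ones, so the caller can
    decide whether to read an error from the control socket or payload
    from the data socket."""
    sel = selectors.DefaultSelector()
    sel.register(clientSocket, selectors.EVENT_READ, 'control')
    sel.register(clientData, selectors.EVENT_READ, 'data')
    ready = [key.data for key, _ in sel.select(timeout)]
    sel.close()
    return ready
```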

RabbitMQ heartbeat vs connection drain events timeout

I have a rabbitmq server and a amqp consumer (python) using kombu.
I have installed my app in a system that has a firewall that closes idle connections after 1 hour.
This is my amqp_consumer.py:
try:
    # connections
    with Connection(self.broker_url, ssl=_ssl, heartbeat=self.heartbeat) as conn:
        chan = conn.channel()
        # more stuff here
        with conn.Consumer(queue, callbacks=[messageHandler], channel=chan):
            # Process messages and handle events on all channels
            while True:
                conn.drain_events()
except Exception as e:
    # do stuff
What I want is to reconnect if the firewall has closed the connection. Should I use the heartbeat argument, or should I pass a timeout argument (of 3600 sec) to the drain_events() function?
What are the differences between the two options? (They seem to do the same thing.)
Thanks.
The drain_events call on its own would not produce any heartbeats unless there are messages to consume and acknowledge. If the queue is idle, the connection will eventually be closed (by the rabbit server or by your firewall).
What you should do is use both the heartbeat and the timeout like so:
while True:
    try:
        conn.drain_events(timeout=1)
    except socket.timeout:
        conn.heartbeat_check()
This way even if the queue is idle the connection won't be closed.
Besides that, you might want to wrap the whole thing with a retry policy in case the connection does get closed or some other network error occurs.
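Such a retry policy can be sketched as a small wrapper. Here run_consumer stands in for the whole Connection/drain_events block above, and the retry count and delays are illustrative assumptions:

```python
import socket
import time

def consume_with_retries(run_consumer, retries=3, base_delay=0.01):
    # Re-run the consumer after a network error, backing off between tries.
    for attempt in range(retries):
        try:
            return run_consumer()
        except (ConnectionError, socket.timeout):
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError('giving up after %d attempts' % retries)
```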

tornado (Python): persistent http connection

For http persistent connection I wrote the following code:
class LongPolling(tornado.web.RequestHandler):
    waiters = set()

    def get(self):
        LongPolling.waiters.add(self)
        for x in LongPolling.waiters:
            x.write("Broadcast all")
            x.flush()
        return

    def on_close(self):
        logging.warning("Connection closed *********")
        LongPolling.waiters.remove(self)

if __name__ == "__main__":
    application = tornado.web.Application([
        (r"/", LongPolling),
    ])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
I am broadcasting every time a new connection comes in. But the problem is that immediately after get() the connection closes.
So how do I keep the connection open after a get() call?
There is no such thing as a "persistent" HTTP connection in the sense you describe. The Connection: keep-alive header permits client and server to perform a new HTTP request/response cycle without creating a new underlying TCP connection, to save a bit of network traffic, but that is not visible to the application, and it is usually implemented on the server side by a reverse proxy. Clients still have to make new requests when they receive responses to their GETs.
If that's not what you had in mind, and you just want to respond to requests a bit at a time, then you might be looking for tornado.web.asynchronous. Note, however, that most in-browser clients won't benefit from this very much; XHRs, for instance, won't fire until the response completes, so browser applications will have to start a new request anyway.

How to know using BaseHTTPRequestHandler that client closed connection

I am writing http server that can serve big files to client.
While writing to wfile stream it is possible that client closes connection and my server gets socket error (Errno 10053).
Is it possible to stop writing when client closes connection?
You can add these methods to your BaseHTTPRequestHandler class so that you can know if the client closed the connection:
def handle(self):
    """Handles a request ignoring dropped connections."""
    try:
        return BaseHTTPRequestHandler.handle(self)
    except (socket.error, socket.timeout) as e:
        self.connection_dropped(e)

def connection_dropped(self, error, environ=None):
    """Called if the connection was closed by the client. By default
    nothing happens.
    """
    # add here the code you want to be executed if a connection
    # was closed by the client
In the second method, connection_dropped, you can add the code you want executed each time a socket error (e.g. the client closed the connection) occurs.
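For example, a subclass might simply stop writing and mark the connection for closing; the class name and the body of connection_dropped here are illustrative assumptions:

```python
import socket
from http.server import BaseHTTPRequestHandler

class BigFileHandler(BaseHTTPRequestHandler):
    def handle(self):
        """Handle a request, ignoring dropped connections."""
        try:
            return BaseHTTPRequestHandler.handle(self)
        except (socket.error, socket.timeout) as e:
            self.connection_dropped(e)

    def connection_dropped(self, error, environ=None):
        # The client went away mid-transfer: stop sending and let the
        # server tear the connection down.
        self.close_connection = True
```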
