I am writing an HTTP server that can serve big files to clients.
While writing to the wfile stream, it is possible that the client closes the connection and my server gets a socket error (Errno 10053).
Is it possible to stop writing when the client closes the connection?
You can add these methods to your BaseHTTPRequestHandler class so that you can know if the client closed the connection:
    def handle(self):
        """Handle a request, ignoring dropped connections."""
        try:
            return BaseHTTPRequestHandler.handle(self)
        except (socket.error, socket.timeout) as e:
            self.connection_dropped(e)

    def connection_dropped(self, error, environ=None):
        """Called if the connection was closed by the client.
        By default nothing happens.
        """
        # add here the code you want to be executed if a connection
        # was closed by the client
In the second method, connection_dropped, you can add the code that you want executed each time a socket error occurs (e.g. because the client closed the connection).
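Put together as a runnable Python 3 sketch (using http.server; the FileHandler class, the chunk size, and the demo request are illustrative additions, not from the question):

```python
import socket
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FileHandler(BaseHTTPRequestHandler):
    def handle(self):
        """Handle a request, ignoring dropped connections."""
        try:
            return BaseHTTPRequestHandler.handle(self)
        except (socket.error, socket.timeout) as e:
            self.connection_dropped(e)

    def connection_dropped(self, error, environ=None):
        # The write loop in do_GET has already been interrupted by the
        # raised socket error; just log and move on.
        print('client dropped the connection:', error)

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Length', '4096')
        self.end_headers()
        # Writing in chunks means a dropped client aborts the loop
        # early instead of after the whole file has been written.
        for _ in range(4):
            self.wfile.write(b'x' * 1024)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo

# Demo: serve on an ephemeral port and fetch the body once.
server = HTTPServer(('127.0.0.1', 0), FileHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
body = urllib.request.urlopen(
    'http://127.0.0.1:%d/' % server.server_address[1]).read()
print(len(body))
server.shutdown()
```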
I was testing browser-based websockets using the slightly adapted (extra try) code from the documentation:
(backend)
import asyncio
import datetime
import random

import websockets

async def time(websocket, path):
    print("new connection")
    while True:
        now = datetime.datetime.utcnow().isoformat() + 'Z'
        try:
            await websocket.send(now)
        except websockets.exceptions.ConnectionClosed:
            print("connection closed")
        await asyncio.sleep(random.random() * 3)

start_server = websockets.serve(time, '127.0.0.1', 5678)

asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
(frontend)
<!DOCTYPE html>
<html>
    <head>
        <title>WebSocket demo</title>
    </head>
    <body>
        <script>
            var ws = new WebSocket("ws://127.0.0.1:5678/"),
                messages = document.createElement('ul');
            ws.onmessage = function (event) {
                var messages = document.getElementsByTagName('ul')[0],
                    message = document.createElement('li'),
                    content = document.createTextNode(event.data);
                message.appendChild(content);
                messages.appendChild(message);
            };
            document.body.appendChild(messages);
        </script>
    </body>
</html>
When starting the backend and opening the frontend .html file in a browser (Chrome), I get the expected
new connection
on the backend output, and the browser is filled in with timestamps.
After reloading the page (F5), I again get a new connection, followed by an ongoing stream of connection closed:
new connection
new connection
connection closed
connection closed
connection closed
connection closed
connection closed
At the same time, the browser acts as expected, being filled in with timestamps.
What is happening? Why would the connection be stable the first time and unstable after reloading the page? Is the connection to the websocket recreated automatically (it looks so, since the browser activity is OK)? But in that case, what causes it to be closed in the first place?
You catch the websockets.exceptions.ConnectionClosed exception, which is how websockets knows to unregister a closed connection.
Because of this, the closed connection is never unregistered, and messages keep being sent through it.
You can get past this by doing any of the following:

- Not catching the exception.
- Sending messages only through connections that are still open:

    if websocket.open:
        await websocket.send(now)
        # this doesn't unregister the closed socket connection

- Explicitly unregistering the closed connection from the websocket server:

    websocket.ws_server.unregister(websocket)
    # this raises an exception as well

- Maintaining a set of connected clients in memory, sending messages to the connections in this set, and removing closed connections from it on a caught exception:

    connected.add(websocket)
    await asyncio.wait([ws.send("Hello!") for ws in connected])
Reference
http://websockets.readthedocs.io/en/stable/intro.html#common-patterns
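The last option is the "common pattern" from the linked documentation. A self-contained sketch of it follows; FakeSocket stands in for the real websocket object and ConnectionError for websockets.exceptions.ConnectionClosed, so it runs without a server:

```python
import asyncio

class FakeSocket:
    """Stand-in for a websocket connection, for demonstration only."""
    def __init__(self, alive=True):
        self.alive = alive
        self.sent = []

    async def send(self, message):
        if not self.alive:
            # A real server raises websockets.exceptions.ConnectionClosed
            raise ConnectionError('connection closed')
        self.sent.append(message)

connected = set()

async def broadcast(message):
    # Send to every registered client; unregister the ones that dropped.
    dead = set()
    for ws in connected:
        try:
            await ws.send(message)
        except ConnectionError:
            dead.add(ws)
    connected.difference_update(dead)

live, gone = FakeSocket(), FakeSocket(alive=False)
connected.update({live, gone})
asyncio.run(broadcast('now'))
print(live.sent, len(connected))
```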
Because you create a new WebSocket object on each browser reload. WebSockets can be persistent, but only if the object representing them is kept alive on both the server and client sides. You do nothing to preserve your JavaScript websocket across a page reload, and that can't ordinarily be done: you have to use mechanisms other than a full page reload to communicate. On page reload, the browser simply creates a new connection to the server as a new WebSocket.
For a class assignment I need to use the socket API to build a file transfer application. For this project there are two connections between the client and server: one is called the control connection and is used to send error messages, and the other is used to send data. My question is: on the client side, how can I keep the control socket open, waiting for any possible error messages from the server, while not blocking the rest of the program from running?
Example code (removed some elements)
from socket import socket, AF_INET, SOCK_STREAM

# Create the control socket and connect to the server
clientSocket = socket(AF_INET, SOCK_STREAM)
clientSocket.connect((serverName, portNum))

# Send either the list or get command on the control connection
clientSocket.send(sendCommand)

# (If the command is valid, the server makes a data connection
# and waits for the client to connect)
clientData = socket(AF_INET, SOCK_STREAM)
clientData.connect((serverName, dataport))

# Receive the data from the server if the command is successful
recCommand = clientData.recv(2000)

# But if there is an error, I need to skip the clientData.recv and
# catch the error message in badCommand instead
badCommand = clientSocket.recv(2000)
When there is an error, the data socket should be closed by the server, so recv ends automatically (it returns an empty bytes object).
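If you also want the client to watch both sockets at once instead of blocking on a single recv, select can tell you which one becomes readable first. A sketch, where wait_for_message and the socketpair demo are illustrative additions, not part of the assignment:

```python
import select
import socket

def wait_for_message(control_sock, data_sock, timeout=1.0):
    """Block until either socket is readable, without ignoring the other.

    Returns ('control', payload) or ('data', payload), or (None, b'')
    on timeout. The labels are illustrative, not from the assignment.
    """
    readable, _, _ = select.select([control_sock, data_sock], [], [], timeout)
    for sock in readable:
        payload = sock.recv(2000)
        label = 'control' if sock is control_sock else 'data'
        return (label, payload)
    return (None, b'')

# Demo with local socket pairs standing in for the server connections.
ctrl_client, ctrl_server = socket.socketpair()
data_client, data_server = socket.socketpair()
ctrl_server.sendall(b'ERROR: bad command')
source, payload = wait_for_message(ctrl_client, data_client)
print(source, payload)
```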
I'm using a Python socket server to which I connect with Android and periodically send messages.
The problem is that the request is closed after every sent message, and I need it to remain open until Android decides to close it.
Currently it looks like this:
class SingleTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        try:
            while True:
                message = self.rfile.readline().strip()  # clip input at 1Kb
                my_event = pygame.event.Event(USEREVENT, {'control': message})
                pygame.event.post(my_event)
        except KeyboardInterrupt:
            sys.exit(0)
        finally:
            self.request.close()
I've solved this by adding a while True in my handle() definition; however, this was criticized as a bad solution, and I was told that the right way is to override the process_request and shutdown methods.
Attempted solution
I removed the while from the code, connected to the server locally with netcat, sent a message, and watched to see when the connection would be closed.
I wanted to see which method the connection is closed after, to figure out what I have to override.
I have stepped with the debugger through the serve_forever() and followed it to this part of code:
> /usr/lib/python2.7/threading.py(495)start()
494 try:
--> 495 _start_new_thread(self.__bootstrap, ())
496 except Exception:
After line 495 is passed (I can't step into it), the connection is closed.
I somehow doubt that it's such a hassle to maintain a connection via a socket; that is basically the reason we chose to communicate over a socket: to have a continuous connection and not a 'one connection per sent message' system.
Ideas on implementation, or links?
The handle method is called for each client connection, and the connection is closed when it returns. Using a while loop is fine. Exit the loop when the client closes the connection.
Example (Python 3 syntax):
class EchoHandler(socketserver.StreamRequestHandler):
    def setup(self):
        print('{}:{} connected'.format(*self.client_address))

    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data)

    def finish(self):
        print('{}:{} disconnected'.format(*self.client_address))
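For completeness, here is a self-contained way to run a handler like this and exercise it from a client; the handle body matches the example above, while the ephemeral port, threading, and demo client are additions for the sake of a runnable sketch:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Loop until the client closes; recv returns b'' on EOF.
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data)

# Port 0 lets the OS pick a free port.
server = socketserver.ThreadingTCPServer(('127.0.0.1', 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as client:
    client.sendall(b'hello')
    reply = client.recv(1024)
print(reply)

server.shutdown()
server.server_close()
```

Closing the client socket is what makes recv return b'' and the handler's loop exit, which is exactly the "exit the loop when the client closes the connection" advice above.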
I am trying to use socketserver to create a simple server to send images to a client with TCP.
First I send a catalogue to the client and then it responds with a request.
In the handle method of my server, I have this loop:
class MainHandler(socketserver.BaseRequestHandler):
    def handle(self):
        while 1:
            try:
                # Sending the catalogue
                # Using my methods to get my catalogue with an HTTP header
                response = self.server.genHTTPRequest(self.server.init.catalogue)
                self.request.sendall(response.encode())
                # Response of the client
                self.data = self.request.recv(1024).decode()
                if self.data:
                    print("Data received : {}".format(self.data))
            except:
                print("transmission error")
                break
In the main, I use this line to create my server (it's in another file):
mainServer = MainServer.MainServer((init.adresse, int(init.port)), MainServer.MainHandler)
When I launch this program, the client connects successfully and receives the catalogue, but it sends back only some data and then the program jumps into the exception handler. Without the try/except, I get this error:
self.data = self.request.recv(1024).decode()
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
I don't understand what the problem is. Maybe some synchronization is missing, or do I need to use threads?
Thank you for your help.
(I am using Python 3.3)
The problem is that sendall(response.encode()) and self.request.recv(1024).decode() are performed on the same socket in a tight loop, and that can lead to a ConnectionAbortedError.
You need to read all of the pending data from the socket before putting other data into it, like flushing the data.
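One way to make sure the reply has been fully read before sending again is to loop on recv until a framing delimiter arrives. A sketch under an assumed newline-framing convention; recv_until and the delimiter are illustrative additions, not part of the original code:

```python
import socket

def recv_until(sock, delimiter=b'\n', bufsize=1024):
    """Read from sock until delimiter arrives; return the bytes before it.

    recv may deliver the reply in pieces, so a single recv(1024) is not
    guaranteed to see all of it.
    """
    data = b''
    while delimiter not in data:
        chunk = sock.recv(bufsize)
        if not chunk:  # peer closed the connection
            break
        data += chunk
    return data.split(delimiter, 1)[0]

# Demo with a local socket pair standing in for the client connection.
server_side, client_side = socket.socketpair()
client_side.sendall(b'get im')   # the request arrives...
client_side.sendall(b'age1\n')   # ...in two pieces
request = recv_until(server_side)
print(request)
```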
For an HTTP persistent connection, I wrote the following code:
class LongPolling(tornado.web.RequestHandler):
    waiters = set()

    def get(self):
        LongPolling.waiters.add(self)
        for x in LongPolling.waiters:
            x.write("Broadcast all")
            x.flush()
        return

    def on_close(self):
        logging.warning("Connection closed *********")
        LongPolling.waiters.remove(self)

if __name__ == "__main__":
    application = tornado.web.Application([
        (r"/", LongPolling),
    ])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
I am broadcasting every time a new connection comes in. But the problem is that immediately after get(), the connection closes.
So how do I keep the connection open after a get() call?
There is no such thing as a "persistent" HTTP connection in that sense. The Connection: keep-alive header permits the client and server to perform a new HTTP request/response cycle without creating a new underlying TCP connection, to save a bit of network traffic, but that is not visible to the application, and it is usually implemented on the server side by a reverse proxy. Clients still have to make new requests when they receive the responses to their GETs.
If that's not what you had in mind, and you just want to respond to requests a bit at a time, then you might be looking for tornado.web.asynchronous: the handler returns from get() without finishing the response, and calls finish() on each waiter later when there is something to broadcast. Note, however, that most in-browser clients won't benefit from this very much; XHRs, for instance, won't fire until the response completes, so browser applications will have to start a new request anyway.