BaseHTTPServer still writing although client lost network connection - python

I've implemented a server that accepts requests; after some processing the client connects to my server.
The server continuously sends data to the client, but if the client loses its network connection (e.g. I disable internet access on my mobile without exiting the client program), the server keeps writing into the void.
I've attached a shortened version of my code logic. Monitoring the input data could be a good idea, but in some cases I don't have to wait for any input.
import select
import socketserver
from http.server import BaseHTTPRequestHandler, HTTPServer

HOST, PORT = "0.0.0.0", 8000  # placeholder values; defined elsewhere in the real code


class CustomRequestHandler(BaseHTTPRequestHandler):

    def __init__(self, request, client_address, server):
        BaseHTTPRequestHandler.__init__(self, request, client_address, server)

    def do_GET(self):
        try:
            readable, writable, exceptional = select.select(
                [self.rfile], [self.wfile], [self.rfile, self.wfile], 0)
            for s in readable:
                print(s.readline())
            for s in writable:
                s.write(b"Data")
        except Exception as e:
            print(e)

    def finish(self, *args, **kw):
        print("Do finish")


class CustomServer(socketserver.ThreadingMixIn, HTTPServer):
    pass


def start_server():
    httpd = CustomServer((HOST, PORT), CustomRequestHandler)
    try:
        httpd.allow_reuse_address = True
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()


if __name__ == '__main__':
    start_server()
After a while writable becomes an empty list, but how can I detect that the network was lost on the client side? How can I catch the network error?

Your socket is not closed when you cut the network connection. The sender will only be informed when the OS decides that the socket has timed out, which usually takes 30s+.
If, on the other hand, the receiver program is closed properly, the sender is notified within milliseconds.
These left-open but actually lost connections are a major problem in network programming. There are mitigations, but there is no universal solution.
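One common mitigation is to enable TCP keepalive on the accepted connection, so the OS starts probing an idle peer and drops the connection after a few failed probes. A minimal sketch; the TCP_KEEP* option names below exist on Linux (other platforms differ), and the thresholds are only example values:

import socket

def enable_keepalive(sock, idle=10, interval=5, probes=3):
    # start probing after `idle` seconds of silence, probe every `interval`
    # seconds, and give up (erroring out the socket) after `probes` failures
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

# in the handler above, self.connection is the accepted socket:
#     enable_keepalive(self.connection)

Once the probes fail, the next write to the socket raises an error (e.g. ConnectionResetError), which the try/except in do_GET above would catch. The other common approach is an application-level heartbeat that the client must answer, which works on every platform.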

Related

How do I forcibly disconnect all currently connected clients to my TCP or HTTP server during shutdown?

I have a fake HTTP server that I use as a fixture in my testing. At some point in the test, I want to stop the server regardless of any still open connections. Clients on these open connections should get a TCP FIN.
I am aware that production servers usually need to solve a different problem, that of quiescing, sometimes called graceful shutdown. This is the opposite of what I want.
With a standalone process, it is usually possible to simply get the process to quit and let the OS take care of the rest. (Forcibly killing processes is easy, while forcibly killing threads is not.) My fake server is, however, running in a thread of the test process itself, so I don't have this option (and I don't want to externalize it if there is another way around it).
I investigated this issue in Python, with the HTTPServer class, where I was not able to find any solution.
I also investigated this in Go, where I was able to find the concept of Contexts, which is close to what I need, but it works the other way around: a http server would propagate a Context that can be used to cancel e.g. a database lookup if a client disconnected.
Edit: it looks like Go actually does what I need and has separate graceful and non-graceful shutdown methods, the non-graceful one being net/http#Server.Close.
import http.server
import threading

server = http.server.HTTPServer(...)
thread = threading.Thread(target=server.serve_forever)
thread.start()
# a client has connected ....
server.shutdown()
# at this point I want to have the server stopped,
# without waiting for the request handling to complete
I've implemented the Go solution in Python: when a new client connects, I remember the client socket, and when I want to quit, I shut down all remembered sockets.
It seems to work.
import socket
from http.server import HTTPServer
from typing import Any, List, Tuple


class MyHTTPServer(HTTPServer):
    """Adds a method to the HTTPServer to allow it to exit gracefully"""

    def __init__(self, addr, handler_cls):
        super().__init__(addr, handler_cls)
        self._client_sockets: List[socket.socket] = []
        self.server_killed = False

    def get_request(self) -> Tuple[socket.socket, Any]:
        """Remember the client socket"""
        sock, addr = super().get_request()
        self._client_sockets.append(sock)
        return sock, addr

    def shutdown_request(self, request: socket.socket) -> None:
        """Forget the client socket"""
        self._client_sockets.remove(request)
        print(f"{self._client_sockets=}")
        super().shutdown_request(request)

    def force_disconnect_clients(self) -> None:
        """Shutdown the remembered sockets"""
        for client in self._client_sockets:
            client.shutdown(socket.SHUT_RDWR)
Usage

server = MyHTTPServer(server_addr, MyRequestHandler)

# in a new thread
while not server.server_killed:
    server.handle_request()

# ... use the server (keep in mind it can have at most one client at a time) ...

# in the main program
server.server_killed = True
server.force_disconnect_clients()
server.server_close()
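For completeness, a sketch of how the pieces above might be wired together with an actual thread. MyRequestHandler and the address are placeholders, and the server.timeout assignment is an assumption so that handle_request() returns periodically and re-checks the flag:

import threading

server = MyHTTPServer(("127.0.0.1", 8080), MyRequestHandler)
server.timeout = 0.5  # assumed: let handle_request() return periodically

def serve():
    while not server.server_killed:
        server.handle_request()

t = threading.Thread(target=serve)
t.start()

# ... run the test against http://127.0.0.1:8080 ...

server.server_killed = True
server.force_disconnect_clients()  # clients still connected get a FIN/RST
server.server_close()
t.join()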

How can I write a socket server in a different thread from my main program (using gevent)?

I'm developing a Flask/gevent WSGIServer web server that needs to communicate (in the background) with a hardware device over two sockets using XML.
One socket is initiated by the client (my application) and I can send XML commands to the device. The device answers on a different port and sends back information that my application has to confirm. So my application has to listen to this second port.
Up until now I have issued a command, opened the second port as a server, waited for a response from the device and closed the second port.
The problem is that it's possible that the device sends multiple responses that I have to confirm. So my solution was to keep the port open and keep responding to incoming requests. However, in the end the device is done sending requests, and my application is still listening (I don't know when the device is done), thereby blocking everything else.
This seemed like a perfect use case for a thread, so that my application launches a listening server in a separate thread. Because I'm already using gevent as a WSGI server for Flask, I can use the greenlets.
The problem is, I have looked for a good example of such a thing, but all I can find is examples of multi-threading handlers for a single socket server. I don't need to handle a lot of connections on the socket server, but I need it launched in a separate thread so it can listen for and handle incoming messages while my main program can keep sending messages.
The second problem I'm running into is that in the server, I need to use some methods from my "main" class. Being relatively new to Python I'm unsure how to structure it in a way to make that possible.
import socket


class Device(object):
    def __init__(self, ...):
        self.clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def _connect_to_device(self):
        print "OPEN CONNECTION TO DEVICE"
        try:
            self.clientsocket.connect((self.ip, 5100))
        except socket.error as e:
            pass

    def _disconnect_from_device(self):
        print "CLOSE CONNECTION TO DEVICE"
        self.clientsocket.close()

    def deviceaction1(self, ...):
        # the data that is sent is an XML document that depends on the parameters of this method.
        self._connect_to_device()
        self._send_data(XMLdoc)
        self._wait_for_response()
        return True

    def _send_data(self, data):
        print "SEND:"
        print(data)
        self.clientsocket.send(data)

    def _wait_for_response(self):
        print "WAITING FOR REQUESTS FROM DEVICE (CHANNEL 1)"
        self.serversocket.bind(('10.0.0.16', 5102))
        self.serversocket.listen(5)  # listen for answer, maximum 5 connections
        connection, address = self.serversocket.accept()
        data = connection.recv(4096)  # the data is of a specific length I can calculate
        if len(data) > 0:
            self._process_response(data)
        self.serversocket.close()

    def _process_response(self, data):
        print "RECEIVED:"
        print(data)
        # here is some code that processes the incoming data and
        # responds to the device
        # this may or may not result in more incoming data


if __name__ == '__main__':
    machine = Device(ip="10.0.0.240")
    machine.deviceaction1(...)
This is (globally, I left out sensitive information) what I'm doing now. As you can see everything is sequential.
If anyone can provide an example of a listening server in a separate thread (preferably using greenlets) and a way to communicate from the listening server back to the spawning thread, it would be of great help.
Thanks.
EDIT:
After trying several methods, I decided to use Python's built-in select() to solve this problem. This worked, so my question regarding the use of threads is no longer relevant. Thanks to the people who provided input for their time and effort.
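The edit doesn't show the final code, but a minimal sketch of the select()-based idea looks roughly like this; the buffer size, timeout and function names are assumptions, not the asker's actual solution:

import select
import socket

serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(('10.0.0.16', 5102))
serversocket.listen(5)

def poll_device_responses(handle_response, timeout=0.5):
    # returns quickly if the device has nothing to say, so the main
    # loop can keep sending commands in between polls
    readable, _, _ = select.select([serversocket], [], [], timeout)
    for s in readable:
        connection, address = s.accept()
        data = connection.recv(4096)
        if data:
            handle_response(data)  # e.g. the existing _process_response
        connection.close()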
Hope this can provide some help. In the Example class, if we call the tenMessageSender function, it fires up a background thread without blocking the main loop, and _zmqBasedListener then listens on a separate port for as long as that thread is alive. Whatever messages tenMessageSender sends are received by the client, which responds back to _zmqBasedListener.
Server Side
import threading
import zmq
import sys


class Example:
    def __init__(self):
        self.context = zmq.Context()
        self.publisher = self.context.socket(zmq.PUB)
        self.publisher.bind('tcp://127.0.0.1:9997')
        self.subscriber = self.context.socket(zmq.SUB)
        self.thread = threading.Thread(target=self._zmqBasedListener)

    def _zmqBasedListener(self):
        self.subscriber.connect('tcp://127.0.0.1:9998')
        self.subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")
        while True:
            message = self.subscriber.recv()
            print message
            sys.exit()

    def tenMessageSender(self):
        self._decideListener()
        for message in range(10):
            self.publisher.send("testid : %d: I am a task" % message)

    def _decideListener(self):
        if not self.thread.is_alive():
            print "STARTING THREAD"
            self.thread.start()
Client
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9997')
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "testid")
count = 0
print "Listener"
while True:
    message = subscriber.recv()
    print message
    publisher.send('some_key : Message received %d' % count)
    count += 1
Instead of a thread you can use a greenlet, etc.
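Since the question explicitly mentions greenlets, a rough equivalent with gevent might look like the sketch below; the function and callback names are invented for illustration, and gevent's cooperative socket module is assumed:

import gevent
from gevent import socket  # cooperative sockets: accept() yields instead of blocking

def listen_for_device(handle_response, host='10.0.0.16', port=5102):
    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_sock.bind((host, port))
    server_sock.listen(5)
    while True:
        conn, addr = server_sock.accept()  # other greenlets keep running while we wait
        data = conn.recv(4096)
        if data:
            handle_response(data)          # call back into the main object
        conn.close()

# next to the Flask/gevent WSGI server:
# listener = gevent.spawn(listen_for_device, machine._process_response)
# ... later, when the device is done: listener.kill()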

twisted: test if connection exists before writing to transport

Is there a possibility to test if the connection still exists before executing a transport.write()?
I have modified the simpleserv/simpleclient examples so that a message is sent (written to Protocol.transport) every 5 seconds. The connection is persistent.
When disconnecting my wifi, it still writes to transport (of course the messages don't arrive on the other side) but no error is thrown.
When enabling the wifi again, the messages are being delivered, but the next attempt to send a message fails (and Protocol.connectionLost is called).
Here again is what happens chronologically:
1. Sending a message establishes the connection, the message is delivered.
2. Disabling wifi.
3. Sending a message writes to transport, does not throw an error, the message does not arrive.
4. Enabling wifi.
5. The message sent in 3. arrives.
6. Sending a message results in a Protocol.connectionLost call.
It would be nice to know before executing step 6 if I can write to transport. Is there any way?
Server:
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
from twisted.internet import reactor, protocol


class Echo(protocol.Protocol):
    """This is just about the simplest possible protocol"""

    def dataReceived(self, data):
        "As soon as any data is received, write it back."
        print
        print data
        self.transport.write(data)


def main():
    """This runs the protocol on port 8000"""
    factory = protocol.ServerFactory()
    factory.protocol = Echo
    reactor.listenTCP(8000, factory)
    reactor.run()


# this only runs if the module was *not* imported
if __name__ == '__main__':
    main()
Client:
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
An example client. Run simpleserv.py first before running this.
"""
from twisted.internet import reactor, protocol

# a client protocol
counter = 0


class EchoClient(protocol.Protocol):
    """Once connected, send a message, then print the result."""

    def connectionMade(self):
        print 'connectionMade'

    def dataReceived(self, data):
        "As soon as any data is received, write it back."
        print "Server said:", data

    def connectionLost(self, reason):
        print "connection lost"

    def say_hello(self):
        global counter
        counter += 1
        msg = '%s. hello, world' % counter
        print 'sending: %s' % msg
        self.transport.write(msg)


class EchoFactory(protocol.ClientFactory):
    def buildProtocol(self, addr):
        self.p = EchoClient()
        return self.p

    def clientConnectionFailed(self, connector, reason):
        print "Connection failed - goodbye!"

    def clientConnectionLost(self, connector, reason):
        print "Connection lost - goodbye!"

    def say_hello(self):
        self.p.say_hello()
        reactor.callLater(5, self.say_hello)


# this connects the protocol to a server running on port 8000
def main():
    f = EchoFactory()
    reactor.connectTCP("REMOTE_SERVER_ADDR", 8000, f)
    reactor.callLater(5, f.say_hello)
    reactor.run()


# this only runs if the module was *not* imported
if __name__ == '__main__':
    main()
Protocol.connectionLost is the only way to know when the connection no longer exists. It is also called at the earliest time when it is known that the connection no longer exists.
It is obvious to you or me that disconnecting your network adapter (i.e., turning off your wifi card) will break the connection - at least, if you leave it off or if you configure it differently when you turn it back on again. It's not obvious to your platform's TCP implementation, though.
Since network communication isn't instant and any individual packet may be lost for normal (non-fatal) reasons, TCP includes various timeouts and retries. When you disconnect your network adapter these packets can no longer be delivered but the platform doesn't know that this condition will outlast the longest TCP timeout. So your TCP connection doesn't get closed when you turn off your wifi. It hangs around and starts retrying the send and waiting for an acknowledgement.
At some point the timeouts and retries all expire and the connection really does get closed (although the way TCP works means that if there is no data waiting to be sent then there actually isn't a timeout, a "dead" connection will live forever; addressing this is the reason the TCP "keepalive" feature exists). This is made slightly more complicated by the fact that there are timeouts on both sides of the connection. If the connection closes as soon as you do the write in step six (and no sooner) then the cause is probably a "reset" (RST) packet.
A reset will occur after the timeout on the other side of the connection expires and closes the connection while the connection is still open on your side. Now when your side sends a packet for this TCP connection the other side won't recognize the TCP connection it belongs to (because as far as the other side is concerned that connection no longer exists) and reply with a reset message. This tells the original sender that there is no such connection. The original sender reacts to this by closing its side of the connection (since one side of a two-sided connection isn't very useful by itself). This is presumably when Protocol.connectionLost is called in your application.
All of this is basically just how TCP works. If the timeout behavior isn't suitable for your application then you have a couple options. You could turn on TCP keepalives (this usually doesn't help, by default TCP keepalives introduce timeouts that are hours long though you can tune this on most platforms) or you could build an application-level keepalive feature. This is simply some extra traffic that your protocol generates and then expects a response to. You can build your own timeouts (no response in 3 seconds? close the connection and establish a new one) on top of this or just rely on it to trigger one of the somewhat faster (~2 minute) TCP timeouts. The downside of a faster timeout is that spurious network issues may cause you to close the connection when you really didn't need to.
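As a concrete illustration of the application-level approach, here is a hedged sketch using Twisted's TimeoutMixin plus a periodic ping; the message format and the 5/15 second values are arbitrary choices, not part of the original code:

from twisted.internet import protocol, task
from twisted.protocols.policies import TimeoutMixin

class KeepaliveEchoClient(protocol.Protocol, TimeoutMixin):
    """Ping every 5 seconds; treat 15 seconds of silence as a dead link."""

    def connectionMade(self):
        self.setTimeout(15)                        # provided by TimeoutMixin
        self.pinger = task.LoopingCall(self._ping)
        self.pinger.start(5, now=False)

    def _ping(self):
        self.transport.write(b'ping\n')            # invented keepalive message

    def dataReceived(self, data):
        self.resetTimeout()                        # any traffic proves the link is alive

    def timeoutConnection(self):
        # nothing heard back in time: drop the connection without
        # waiting for buffered data that will never be delivered
        self.transport.abortConnection()

    def connectionLost(self, reason):
        self.setTimeout(None)
        if self.pinger.running:
            self.pinger.stop()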

Prevent a request getting closed in python SocketServer

I'm using a Python SocketServer to which I connect from Android, periodically sending messages.
The problem is that the request is closed after every message, and I need it to remain open until Android decides to close it.
Currently it looks like this:
import sys
import SocketServer
import pygame
from pygame.locals import USEREVENT


class SingleTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        try:
            while True:
                message = self.rfile.readline().strip()  # clip input at 1Kb
                my_event = pygame.event.Event(USEREVENT, {'control': message})
                pygame.event.post(my_event)
        except KeyboardInterrupt:
            sys.exit(0)
        finally:
            self.request.close()
I've solved this by adding a while True loop in my handle() definition; however, this was criticized as a bad solution, and I was told that the right way to go is to override the process_request and shutdown methods.
Attempt of solution
I removed the while loop from the code, connected to the server locally with netcat, sent a message and waited to see when the connection would be closed.
I wanted to see after which method the connection is closed, to figure out what I have to override.
I stepped through serve_forever() with the debugger and followed it to this part of the code:
> /usr/lib/python2.7/threading.py(495)start()
494 try:
--> 495 _start_new_thread(self.__bootstrap, ())
496 except Exception:
After line 495 is passed (I can't step into it) the connection is closed.
I somehow doubt that it is such a hassle to maintain a connection via a socket; that is basically the reason we chose to communicate over a socket in the first place: to have a continuous connection and not a 'one connection per sent message' system.
Ideas on implementation, or links?
The handle method is called for each client connection, and the connection is closed when it returns. Using a while loop is fine. Exit the loop when the client closes the connection.
Example (Python 3 syntax):
import socketserver


class EchoHandler(socketserver.StreamRequestHandler):
    def setup(self):
        print('{}:{} connected'.format(*self.client_address))

    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data)

    def finish(self):
        print('{}:{} disconnected'.format(*self.client_address))
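To run the handler, it can be combined with one of the server classes, for example a threading TCP server so one long-lived Android connection doesn't block others (the port is an arbitrary example):

import socketserver

if __name__ == '__main__':
    with socketserver.ThreadingTCPServer(('', 9999), EchoHandler) as server:
        server.serve_forever()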

SimpleHTTPServer - How to close/terminate it?

I recently learned I could run a server with this command:
sudo python -m SimpleHTTPServer
My question: how do I terminate this server when done with it?
Type Control-C. Simple as that.
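If the server is started from your own code rather than with -m, it can also be stopped programmatically; a minimal sketch using the standard http.server module and a background thread:

import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(('', 8000), SimpleHTTPRequestHandler)
thread = threading.Thread(target=server.serve_forever)
thread.start()

# ... later, when done with it ...
server.shutdown()      # makes serve_forever() return
server.server_close()  # releases the listening socket
thread.join()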
You might want to check the HttpServer class in this servlet module for a modification that allows the server to be quit: if the handler raises a SystemExit exception, the server breaks out of its serving loop.
import socket
import socketserver
import sys
import webbrowser
import http.server


class HttpServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
    """Create a server with specified address and handler.

    A generic web server can be instantiated with this class. It will listen
    on the address given to its constructor and will use the handler class
    to process all incoming traffic. Running a server is greatly simplified."""

    # We should not be binding to an
    # address that is already in use.
    allow_reuse_address = False

    @classmethod
    def main(cls, RequestHandlerClass, port=80):
        """Start server with handler on given port.

        This static method provides an easy way to start, run, and exit
        a HttpServer instance. The server will be executed if possible,
        and the computer's web browser will be directed to the address."""
        try:
            server = cls(('', port), RequestHandlerClass)
            active = True
        except socket.error:
            active = False
        else:
            addr, port = server.socket.getsockname()
            print('Serving HTTP on', addr, 'port', port, '...')
        finally:
            port = '' if port == 80 else ':' + str(port)
            addr = 'http://localhost' + port + '/'
            webbrowser.open(addr)
        if active:
            try:
                server.serve_forever()
            except KeyboardInterrupt:
                print('Keyboard interrupt received: EXITING')
            finally:
                server.server_close()

    def handle_error(self, request, client_address):
        """Process exceptions raised by the RequestHandlerClass.

        Overriding this method is necessary for two different reasons:
        (1) SystemExit exceptions are incorrectly caught otherwise and
        (2) Socket errors should be silently passed in the server code"""
        klass, value = sys.exc_info()[:2]
        if klass is SystemExit:
            self.__exit = value
            self._BaseServer__serving = None
        elif issubclass(klass, socket.error):
            pass
        else:
            super().handle_error(request, client_address)

    def serve_forever(self, poll_interval=0.5):
        """Handle all incoming client requests forever.

        This method has been overridden so that SystemExit exceptions
        raised in the RequestHandlerClass can be re-raised after being
        caught in the handle_error method above. This allows servlet
        code to terminate server execution if so desired or required."""
        super().serve_forever(poll_interval)
        if self._BaseServer__serving is None:
            raise self.__exit
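A sketch of how a handler might trigger that: raising SystemExit from inside a request is meant to be recorded by handle_error and re-raised after serve_forever returns. Note that this relies on the private _BaseServer__serving flag of the older socketserver implementation the class above targets, so it may not work unchanged on current Python; the /quit route and port are invented for illustration:

import http.server

class QuittableHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/quit':              # hypothetical shutdown route
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'shutting down\n')
            raise SystemExit(0)               # picked up by HttpServer.handle_error
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'hello\n')

if __name__ == '__main__':
    HttpServer.main(QuittableHandler, port=8080)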
