Is it safe to run server.accept() constantly with sockets? - python

Right now I'm building a server-client program using TCP in Python with the socket module. Having looked all over the internet, it has become apparent that a conn, addr = server.accept() line is required in the server code; however, there is no way for the server to know when the client will connect. It could be seconds or minutes after the server is run.
So my question is this: can I use threading to constantly run a server.accept() line of code so that any client that chooses to connect can do so? Or could this lead to something malicious connecting?

As per Can 'connect' call on socket return successfully without server calling 'accept'?:
TCP establishes the connection - the 3-way handshake - under the covers and puts it in a completed connection queue when it is ready. accept() returns the next waiting connection from the front of this queue.
From the client's perspective it is "connected", but it won't be talking to anyone until the server accepts and begins processing. Sort of like when you call a company and are immediately put in the hold queue: you are "connected", but no business is going to be done until someone actually picks up and starts talking.
So you won't "miss" connections if accept() isn't running at the exact moment a client connects. But accept() is typically run in an infinite loop anyway -- in the main thread or otherwise -- because servicing clients is the server's primary job.
According to Is accept() thread-safe?, accept() is thread-safe, so you can very well have it running in a separate thread, or even have multiple accept() calls in different threads (or even in different processes, on OSes with fork) at the same time.
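To make that concrete, here is a minimal sketch of the pattern (the address, port, and the placeholder per-client work are illustrative, not from the question). The key point is that accept() blocks in the kernel until a client actually connects, so a loop around it sleeps rather than busy-waits, whether the client shows up after seconds or minutes:

import socket
import threading

def accept_loop(server):
    # accept() blocks until the next client connects -- no busy-waiting,
    # no matter how long the wait is
    while True:
        conn, addr = server.accept()
        with conn:
            conn.sendall(b"hello\r\n")  # placeholder per-client work

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))  # illustrative address/port
server.listen()

t = threading.Thread(target=accept_loop, args=(server,), daemon=True)
t.start()
t.join()  # the main thread just waits here; a real program could do other work

As for "something malicious connecting": accept() will hand you any TCP connection that completes the handshake, so vetting peers (authentication, input validation) is your application's job either way; running accept() in a loop or a thread doesn't change that.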

Related

Python Socket reconnect after connection failure [duplicate]

Okay, I've read This Question in search of the right answer, but it does not seem to serve my purpose.
Now, getting to the trouble:
I have a conventional client-server architecture in C (all sockets are non-blocking), where the server is listening for incoming connections and the client tries to connect. The first connect succeeds and everything goes on just fine until I press Ctrl + C on my server.
The client side of the code detects that the connection is lost and arms a retry timer.
The client code is supposed to retry the connection to the server again and again, using POSIX interval timers, each time the timer pops. However, it does not close the socket or start afresh. Now, every time it retries the connection, connect() returns
Transport endpoint is already connected
Even after restarting the server, which uses SO_REUSEADDR and starts successfully, the connect does not complete.
One thing that I will need to implement is the signal handler on the server for the shutdown on Ctrl+C.
But still, do I need to close the socket descriptor on the client side and start afresh every time a disconnect happens, or is there a way out of this?
Sockets cannot be reused: once the connection a socket served has gone down in both directions, the socket is unusable.
close() the client socket on loss of connection and create a new socket for a new connection.
Update (based on the comments below):
In the OP's case one side (the server side) went down (by means of the server process ending). This implies all sockets held by this process are implicitly close()ed and therefore shutdown() in both directions.
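In Python terms, the advice above amounts to a loop like this minimal sketch (host, port, and the retry delay are illustrative): create a brand-new socket for every attempt, because a socket whose connection has gone down cannot be connect()ed a second time.

import socket
import time

def connect_with_retry(host, port, delay=5):
    while True:
        # A fresh socket per attempt; the old one is unusable after a disconnect
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect((host, port))
            return sock
        except OSError:
            sock.close()  # discard the failed socket entirely
            time.sleep(delay)  # back off before retrying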

Python Socket Client Disappears, Server Can Not Tell

I'm going crazy writing a little socket server in Python. Everything was working fine, but I noticed that in the case where the client just disappears, the server can't tell. I simulate this by pulling the ethernet cable between the client and server, closing the client, then plugging the cable back in. The server never hears that the client disconnected and will wait forever, never allowing more clients to connect.
I figured I'd solve this by adding a timeout to the read loop so that it would try to read every 10 seconds. I thought maybe if it tried to read from the socket it would notice the client was missing. But then I realized there really is no way for the server to know that.
So I added a heartbeat: if the server goes 10 seconds without reading, it sends data to the client. However, even this is successful (meaning it doesn't throw any kind of exception). So I am able to both read and write to a client that isn't there any more. Is there any way to know that the client is gone without implementing some kind of challenge/response protocol between the client and server? That would be a breaking change in this case and I'd like to avoid it.
Here is the core of my code for this:
from socket import timeout  # assumed at module level, so `except timeout` resolves

def _loop(self):
    command = ""
    while True:
        socket, address = self._listen_socket.accept()
        self._socket = socket
        self._socket.settimeout(10)
        socket.sendall("Welcome\r\n\r\n")
        while True:
            data = ""  # reset so a timeout on the very first recv() leaves data empty
            try:
                data = socket.recv(1)
            except timeout:  # Went 10 seconds without data
                pass
            except Exception as e:  # Likely the client closed the connection
                break
            if data:
                command = command + data
                if data == "\n" or data == "\r":
                    if len(command.strip()) > 0:
                        self._parse_command(command.strip(), socket)
                    command = ""
                if data == '\x08':
                    command = command[:-2]
            else:  # Timeout on read
                try:
                    self._socket.sendall("event,heartbeat\r\n")  # Send heartbeat
                except:
                    self._socket.close()
                    break
The sendall for the heartbeat never throws an exception and the recv only throws a timeout (or another exception if the client properly closes the connection under normal circumstances).
Any ideas? Am I wrong that sending to a client that doesn't ACK should eventually generate an exception? (I've tested for several minutes.)
The behavior you are observing is the expected behavior for a TCP socket connection. In particular, in general the TCP stack has no way of knowing that an ethernet cable has been pulled or that the (now physically disconnected) remote client program has shut down; all it knows is that it has stopped receiving acknowledgement packets from the remote peer, and for all it knows the packets could just be getting dropped by an overloaded router somewhere and the issue will resolve itself momentarily. Given that, it does what TCP always does when its packets don't get acknowledged: it reduces its transmission rate and its number-of-packets-in-flight limit, and retransmits the unacknowledged packets in the hope that they will get through this time.
Assuming the server's socket has outgoing data pending, the TCP stack will eventually (i.e. after a few minutes) decide that no data has gone through for a long-enough time, and unilaterally close the connection. So if you're okay with a problem-detection time of a few minutes, the easiest way to avoid the zombie-connection problem is simply to be sure to periodically send a bit of heartbeat data over the TCP connection, as you described. When the TCP stack tries (and repeatedly fails) to get the outgoing data sent-and-acknowledged, that is what eventually will trigger it to close the connection.
If you want something quicker than that, you'll need to implement your own challenge/response system with timeouts (either over the TCP socket, or over a separate TCP socket, or over UDP), but note that in doing so you are likely to suffer from false positives yourself (e.g. you might end up severing a TCP connection that was not actually dead but only suffering from a temporary condition of lost packets due to congestion). Whether or not that's a worthwhile tradeoff depends on what sort of program you are writing. (Note also that UDP has its own issues, particularly if you want your system to work across firewalls, etc.)
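One middle ground, if changing the application protocol really is off the table, is TCP-level keepalive: the OS itself generates periodic probe traffic on an idle connection and errors out the socket once the probes go unanswered. A minimal sketch, assuming Linux (TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are platform-specific option names, and the values here are illustrative, not tuned recommendations):

import socket

def enable_keepalive(sock, idle=10, interval=5, count=3):
    # Probe after `idle` seconds of silence, re-probe every `interval`
    # seconds, and drop the connection after `count` unanswered probes
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

The same false-positive caveat applies: aggressive settings will sever connections that were merely congested, not dead.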

Keeping python sockets alive in event of connection loss

I'm trying to make a socket connection that will stay alive in the event of connection loss. Basically I want to keep the server always open (and preferably the client too) and restart the client after the connection is lost. But if one end shuts down, both ends shut down. I simulated this by having both ends on the same computer ("localhost") and just clicking the X button. Could this be the source of my problems?
Anyway, my connection code
m.connect(("localhost", 5000))
is wrapped in an if, a try, and a while, e.g.
while True:
    if tryconnection:
        # Error handling
        try:
            m.connect(("localhost", 5000))
            init = True
            tryconnection = False
        except socket.error:
            init = False
            tryconnection = True
And at the end of my code I just call m.send("example") when I press a button, and if that raises an error, the code that tries to connect to "localhost" starts again. The server is a pretty generic setup with a while loop around the x.accept(). So how do I keep them both alive when the connection closes, so they can reconnect when it opens again? Or is my code alright, and it's just that simulating on the same computer is messing with it?
I'm assuming we're dealing with TCP here, since you use the word "connection".
It all depends on what you mean by "connection loss".
If by connection loss you mean that the data exchange between the server and the client may be suspended/unresponsive (important: I did not say "closed" here) for a long amount of time, seconds or minutes, then there's not much you can do about it, and that's fine, because the TCP protocol has been carefully designed to handle such situations gracefully. The timeout before TCP decides that the other side is definitely down, gives up, and closes the connection is very long (minutes). Example of such a situation: the client is your smartphone, connected to some server on the web, and you enter a long tunnel.
But when you say "But if one end shuts down both ends shut down. I simulated this by having both ends on the same computer localhost and just clicking the X button", what you are doing is actually closing the connections.
If you abruptly terminate the server, the TCP/IP implementation of your operating system will know that there's no longer a process listening on port 5000, and will cleanly close all connections to that port. In doing so, a few TCP segments will be exchanged with the client side (a TCP 4-way teardown, or a reset), and all clients will be disconnected. It is important to understand that this is done at the TCP/IP implementation level, that is to say by your operating system.
If you abruptly terminate a client, accordingly, the TCP/IP implementation of your operating system will cleanly close the connection from its port Y to your server's port 5000.
In both cases, at the network level, the result is the same as if you had explicitly (not abruptly) closed the connection in your code.
...and once closed, there's no way you can re-establish those connections as they were before. You have to establish new connections.
If you want to establish these new connections and bring the application logic back to the state it was in before, that's another topic. TCP alone can't help you here; you need a higher-level protocol, maybe your own, to implement a stateful client/server application.
The issue is not related to the programming language, in this case Python. The operating system (Windows or Linux) has the final word regarding the resilience of the socket.

Is it OK to send asynchronous notifications from server to client via the same TCP connection?

As far as I understand the basics of the client-server model, generally only the client may initiate requests; the server responds to them. Now I've run into a system where the server sends asynchronous messages back to the client via the same persistent TCP connection whenever it wants. So, a couple of questions:
Is it a reasonable thing to do at all? It seems to really overcomplicate the implementation of a client.
Are there any nice patterns/methodologies I could use to implement a client for such a system in Python? Changing the server is not an option.
Obviously, the client has to watch both the local request queue (i.e. requests to be sent to the server) and the incoming messages from the server. Launching two threads (Rx and Tx) per connection does not feel right to me. Using select() is a major PITA here. Am I missing something?
When dealing with asynchronous I/O in Python I typically use a library such as gevent or eventlet. The objective of these libraries is to allow applications written in a synchronous style to be multiplexed by a back-end reactor.
This basic example demonstrates the launching of two green threads/coroutines/fibers to handle either side of the TCP duplex. The send side of the duplex listens on an asynchronous queue.
This is all performed within a single hardware thread. Both gevent and eventlet have more substantive examples in their documentation than what I have provided below.
If you run nc -l -p 8000 you will see "012" printed out. As soon as netcat exits, this code will terminate.
from eventlet import connect, sleep, GreenPool
from eventlet.queue import Queue

def handle_i(sock, queue):
    while True:
        data = sock.recv(8)
        if data:
            print(data)
        else:
            queue.put(None)  # <- signal send side of duplex to exit
            break

def handle_o(sock, queue):
    while True:
        data = queue.get()
        if data:
            sock.send(data)
        else:
            break

queue = Queue()
sock = connect(('127.0.0.1', 8000))

gpool = GreenPool()
gpool.spawn(handle_i, sock, queue)
gpool.spawn(handle_o, sock, queue)

for i in range(0, 3):
    queue.put(str(i))
    sleep(1)

gpool.waitall()  # <- waits until nc exits
I believe what you are trying to achieve is a bit similar to JSONP. When sending to the client, wrap the data in a call to a callback method that you know exists in the client.
For example, if you are sending "some data xyz", send it like server.send("callback('some data xyz')");. This suggestion comes from JavaScript, where the returned code is executed as if it were called through that method; I believe you can port this idea to Python with some difficulty, but I am not sure.
Yes, this is very normal. A server can send messages to the client after the connection is made, as in the case of a telnet server: when you initiate a connection, it sends you a message for the capability exchange, and after that it asks for your username and password.
You could very well use select(), but if I were in your shoes I would spawn a separate thread to receive the asynchronous messages from the server and leave the main thread free to do further processing.
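For what it's worth, a minimal sketch of that layout (the endpoint and the way messages are consumed are illustrative assumptions): one daemon thread blocks on recv() for server-initiated messages while the main thread stays free to send requests. Reading in one thread and writing in another on the same TCP socket is safe.

import socket
import threading

def receiver(sock):
    # Dedicated Rx thread: blocks on recv() and reacts to whatever
    # the server pushes, whenever it decides to push it
    while True:
        data = sock.recv(4096)
        if not data:
            break  # server closed the connection
        print("async message from server:", data)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 8000))  # illustrative endpoint
threading.Thread(target=receiver, args=(sock,), daemon=True).start()

sock.sendall(b"request 1\r\n")  # Tx stays on the main thread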

Spawning more than 5 client requests on socket

If we bind a server socket like this:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((host,port))
server.listen(5)
and use something like select() to loop over each client connection, exchanging messages until the client closes it, can we make the server-client (or client-client) message exchange concurrent?
But the problem, as I've read, is that the server cannot enqueue more than 5 clients to handle one by one.
What methods are there to run multiple such server instances, with the criterion that additional server processes start listening only if the number of queued-up clients reaches 5?
When you receive a connection, you can spawn a thread/process to handle that connection, and on the main thread go back to listening for another connection. The 5 is the length of the queue of connections that are on hold, similar to a switchboard operator.
The 5 limitation you are concerned about is the size of the listener's backlog queue. This is how many connections the system will hold in abeyance before it starts rejecting new ones. When you accept a connection, room is freed on that queue. So as long as you accept your connections in a timely manner, this is not really a concern under normal load conditions. (BTW, 5 is on the low side of things; IIRC the default maximum per process on Linux, for instance, is 128.)
Probably you misunderstood the function of the backlog argument: the limit of 5 only applies to connections that have not yet been accepted.
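To make the distinction concrete, here is a hedged sketch of the thread-per-connection pattern described above (host, port, and the echo handler are placeholders): listen(5) only bounds how many handshake-completed but not-yet-accepted connections may queue up at once; because the loop keeps calling accept() and handing each client to its own thread, far more than 5 clients can be served overall.

import socket
import threading

def handle(conn, addr):
    # Placeholder per-client handler: echo until the client disconnects
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

host, port = "0.0.0.0", 5000  # illustrative
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((host, port))
server.listen(5)  # backlog: pending, not-yet-accepted connections only

while True:
    conn, addr = server.accept()  # each accept() frees a slot in the backlog
    threading.Thread(target=handle, args=(conn, addr), daemon=True).start()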
