Trouble with a Python server - python

I have a few test clients that are encountering the same issue each time. The clients can connect, and they can send their first message, but after that the server stops responding to that client. I suspect that the problem is related to s.accept(), but I'm not sure exactly what is wrong or how to work around it.
import socket

def startServer():
    host = ''
    port = 13572
    backlog = 5
    size = 1024
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen(backlog)
    print "Close the command prompt to stop Gamelink"
    while 1:
        try:
            client, address = s.accept()
            data = client.recv(size)
            if data:
                processData(data)
                client.send("OK")
            else:
                print "Disconnecting from client at client's request"
                client.close()
        except socket.error, (value, message):
            if s:
                print "Disconnecting from client, socket issue"
                s.close()
            print "Error opening socket: " + message
            break
        except:
            print "Gamelink encountered a problem"
            break
    print "End of loop"
    client.close()
    s.close()
The server is intended to be accessed across a local network, and it needs to be lightweight and very quick to respond, so if another implementation (such as a thread-based one) would be better for meeting those requirements, please let me know. The intended application is a remote gaming keyboard, hence the need for low resource use and high speed.

Writing a server using socket directly will be hard. As Keith says, you need to multiplex the connections somehow, like with select or poll or threads or fork. You might think you need only one connection, but what will you do when something hiccups and the connection is lost? Will your server be able to respond to reconnection attempts from the client if it hasn't yet realized the connection is lost?
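For example, a single-threaded server can serve several clients at once by asking select which sockets are ready before touching them. A minimal sketch of that idea, reusing the port from your code and sending a fixed "OK" reply as a stand-in for your real protocol:

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 13572))
server.listen(5)

sockets = [server]                        # everything we might want to read from
while True:
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is server:                # the listening socket is readable: a new client
            client, address = server.accept()
            sockets.append(client)
        else:                             # an existing client sent something
            data = sock.recv(1024)
            if data:
                sock.send("OK")           # reply without blocking the other clients
            else:                         # empty read: the client closed the connection
                sockets.remove(sock)
                sock.close()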
If your networking needs are basic, you might be able to let something else handle all the listening and accepting and forking stuff for you. You don't specify a platform, but examples of such programs are launchd on Mac OS and xinetd on Linux. The details differ between these tools, but basically you configure them, in some configuration file, to listen for a connection on some port. When they get it, they take care of setting up the connection, then they exec() your program with stdin and stdout aimed at the socket, so you can simply use all the basic IO you probably already know like print and sys.stdin.read().
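Under such a setup the handler can be very plain, since stdin and stdout already are the connection. A minimal sketch (the "OK" echo reply here is only an illustration, not part of any particular protocol):

import sys

while True:
    line = sys.stdin.readline()         # data arriving from the connected client
    if not line:                        # EOF: the client closed the connection
        break
    command = line.strip()
    print "OK " + command               # written straight back to the client
    sys.stdout.flush()                  # don't let buffering delay the reply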
The trouble with solutions like xinetd and launchd is that for each new connection, they must fork() and exec() a new instance of your program. These are relatively heavy operations, so a large number of connections or a high rate of new connections might hit the limits of your server. Worse, since each connection is handled in a separate process, sharing data between the connections is hard. Also, most solutions you might find for communicating between processes involve a blocking API, and then you are back to the problem of multiplexing with select or threads or similar.
If that doesn't meet your needs, I think you are better off learning to use a higher-level networking framework which will handle all the problems you will inevitably encounter if you go down the path of socket. One such framework I'd suggest is Twisted. Beyond handling the mundane details of handling connections, and the more complex task of multiplexing IO between them, you will also have a huge library of tools that will make implementing your protocol much easier.
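To give a feel for it, a Twisted server that accepts connections and answers each incoming chunk of data looks roughly like this (a minimal sketch reusing the port from your code; the fixed "OK" reply is just a placeholder for your protocol):

from twisted.internet import protocol, reactor

class Gamelink(protocol.Protocol):
    def dataReceived(self, data):
        # called whenever a connected client sends data
        self.transport.write("OK")

class GamelinkFactory(protocol.Factory):
    def buildProtocol(self, addr):
        # one Gamelink instance per accepted connection
        return Gamelink()

reactor.listenTCP(13572, GamelinkFactory())
reactor.run()

The reactor owns the listening, accepting and multiplexing; your code only has to describe what happens when data arrives on a connection.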

Related

What does "Only one usage of each socket address is normally permitted" tell me when using sockets under Python?

I have a socket server under Python:
sock= socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(0)
sock.bind((self._ipadress, port))
later I'm accepting incoming requests in a loop, using select.select:
connection, client_address = sock.accept()
...
select.select(...)
Note that connections can be closed by clients when they no longer need them.
I tested my code with a python client and was able to observe that multiple connections can be easily handled simultaneously as expected.
However, very sporadically I get the error:
“Only one usage of each socket address is normally permitted”
What does it tell me and when does it happen?
Multiple connections on the same port are definitely possible (I tested it), so why should only one usage be permitted? This contradicts the principle that multiple clients can be accepted by the same server.
I learned from
Python server "Only one usage of each socket address is normally permitted"
that it can be avoided by using SO_REUSEADDR.
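(For what it's worth, the suggestion there is to set the option on the listening socket before bind(), so in my case it would presumably look something like this:)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)   # allow re-binding the address
sock.setblocking(0)
sock.bind((self._ipadress, port))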
But why is it required? Even after a client closes its connection, the socket should still be able to accept other connections; otherwise my program wouldn't work at all.
I'm at home now and not in the office, so I cannot test it, but I'm struggling even to understand the principles behind this.

How do I connect two machines (with two different public IPs) via sockets using Python? Is it possible?

I'm new to the world of networking and wanted to clarify some of my thoughts regarding a problem I am facing at the moment. I saw this post which makes me think what I'm doing may be impossible, but I thought it would be worth a shot to ask on here and see what more qualified people think about it.
I am a TA for an intro computer science course, and I am writing a final project for students to complete at the end of the semester. Essentially, the project would be to fill in the holes in the implementation of a messaging client. I have set it up so each client would run two threads (one to listen for incoming messages, and one to wait for input to send messages to the other client). I have gotten this to work successfully on localhost with communication between two different port numbers, and am trying to find a way to have this work over the network so the two clients do not necessarily have to be on the same machine.
After struggling through a few methods, I came up with this solution: I would host a server on Heroku that would keep track of the clients' IPs and port numbers, and use a REST API so that one client could easily get the IP and port of the other client they are trying to communicate with. I have tested this, and the API seems to work. Thus, a client can create a socket endpoint and send it to this server to be entered into its database, and when the communication is terminated, it is removed from the database (this JSON would store a username as the primary key and internally manage an IP and port number) as the connection is now closed.
So, what I have is each client with an IP and port number knowing the IP and port number it is trying to communicate with. My last struggle is to actually form the connection. I understand there is a distinction between localhost (127.0.0.1) and the public IP for an internet endpoint. Upon searching, I found a way to find the public IP for the current user to share with the database, but I cannot bind to it. Whenever I try to, I get sockets error code 13: permission denied. I would imagine that if I tried connecting to the public IP of the other machine, I would get a similar error (but I cannot test the client until I can get a server running!).
I read online that some router work would be needed to actually form this connection between two machines. I guess I'm struggling to understand the practicality of socket programming if such a simple operation (connecting two socket endpoints on two different computers) requires so much tweaking. Is there something I am missing?
For reference, here is a general outline of my code thus far. The server:
# Server thread
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((LOCAL_IP, AVAILABLE_PORT))
s.listen(1)
# In my code, there is a quitting mechanism which closes s as well.
while True:
    client_socket, addr = s.accept()
    data = client_socket.recv(1024)
    print "Received: " + data
    client_socket.close()
...and the client:
# Client thread
# It is an infinite loop so I am always waiting for another potential message to send
while True:
    x = raw_input()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((OTHER_MACHINE_LOCAL_IP, OTHER_PORT))
    sock.sendall(x)
    sock.close()
Right now, I cannot make progress with these permission denied errors. Is there any way I could get this to work? Keep in mind that this would be used by around 250 intro CS students, so I want to avoid having to instruct them to do anything with their routers.
If there is another method to do this which would make this easier that I am missing, I would also love to hear any suggestions :) Thanks in advance!

Python Socket Client Disappears, Server Can Not Tell

I'm going crazy writing a little socket server in Python. Everything was working fine, but I noticed that in the case where the client just disappears, the server can't tell. I simulate this by pulling the ethernet cable between the client and server, closing the client, then plugging the cable back in. The server never hears that the client disconnected and will wait forever, never allowing more clients to connect.
I figured I'd solve this by adding a timeout to the read loop so that it would try and read every 10 seconds. I thought maybe if it tried to read from the socket it would notice the client was missing. But then I realized there really is no way for the server to know that.
So I added a heartbeat. If the server goes 10 seconds without reading, it will send data to the client. However, even this succeeds (meaning it doesn't throw any kind of exception). So I am able to both read and write to a client that isn't there any more. Is there any way to know that the client is gone without implementing some kind of challenge/response protocol between the client and server? That would be a breaking change in this case, and I'd like to avoid it.
Here is the core of my code for this:
def _loop(self):
    command = ""
    while True:
        socket, address = self._listen_socket.accept()
        self._socket = socket
        self._socket.settimeout(10)
        socket.sendall("Welcome\r\n\r\n")
        while True:
            try:
                data = socket.recv(1)
            except timeout:  # Went 10 seconds without data
                pass
            except Exception as e:  # Likely the client closed the connection
                break
            if data:
                command = command + data
                if data == "\n" or data == "\r":
                    if len(command.strip()) > 0:
                        self._parse_command(command.strip(), socket)
                    command = ""
                if data == '\x08':
                    command = command[:-2]
            else:  # Timeout on read
                try:
                    self._socket.sendall("event,heartbeat\r\n")  # Send heartbeat
                except:
                    self._socket.close()
                    break
The sendall for the heartbeat never throws an exception and the recv only throws a timeout (or another exception if the client properly closes the connection under normal circumstances).
Any ideas? Am I wrong that sending to a client that doesn't ACK should generate an exception eventually? (I've tested for several minutes.)
The behavior you are observing is the expected behavior for a TCP socket connection. In particular, in general the TCP stack has no way of knowing that an ethernet cable has been pulled or that the (now physically disconnected) remote client program has shut down; all it knows is that it has stopped receiving acknowledgement packets from the remote peer, and for all it knows the packets could just be getting dropped by an overloaded router somewhere and the issue will resolve itself momentarily. Given that, it does what TCP always does when its packets don't get acknowledged: it reduces its transmission rate and its number-of-packets-in-flight limit, and retransmits the unacknowledged packets in the hope that they will get through this time.
Assuming the server's socket has outgoing data pending, the TCP stack will eventually (i.e. after a few minutes) decide that no data has gone through for a long-enough time, and unilaterally close the connection. So if you're okay with a problem-detection time of a few minutes, the easiest way to avoid the zombie-connection problem is simply to be sure to periodically send a bit of heartbeat data over the TCP connection, as you described. When the TCP stack tries (and repeatedly fails) to get the outgoing data sent-and-acknowledged, that is what eventually will trigger it to close the connection.
If you want something quicker than that, you'll need to implement your own challenge/response system with timeouts (either over the TCP socket, or over a separate TCP socket, or over UDP), but note that in doing so you are likely to suffer from false positives yourself (e.g. you might end up severing a TCP connection that was not actually dead but only suffering from a temporary condition of lost packets due to congestion). Whether or not that's a worthwhile tradeoff depends on what sort of program you are writing. (Note also that UDP has its own issues, particularly if you want your system to work across firewalls, etc.)
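A minimal sketch of such a challenge/response check over the existing TCP socket, assuming the client is changed to answer every "ping" line with a "pong" line (the message names and the 5-second deadline here are arbitrary choices, not part of your current protocol):

import socket

def connection_alive(sock, deadline=5):
    # Send a challenge and wait briefly for the response; False means the peer
    # did not answer in time (or the send/receive itself failed).
    old_timeout = sock.gettimeout()
    try:
        sock.sendall("ping\r\n")
        sock.settimeout(deadline)
        return sock.recv(16).strip() == "pong"
    except (socket.timeout, socket.error):
        return False
    finally:
        sock.settimeout(old_timeout)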

Socket: two-way communication in Python

I want two-way communication in Python:
I want to bind to a socket that one client can connect to, and then the server and client can "chat" with each other.
I already have the basic listener:
import socket

HOST = ''  # localhost
PORT = 50008
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create an INET, STREAMing socket
s.bind((HOST, PORT))  # bind to that port
s.listen(1)  # listen for user input and accept 1 connection at a time.
conn, addr = s.accept()
print "The connection has been set up"
bool = 1
while bool == 1:
    data = conn.recv(1024)
    print data
    if "#!END!#" in data:
        print "closing the connection"
        s.close()
        bool = 0
What I want to do now is implement something so this script also accepts user input and, after the enter key is hit, sends it back to the client.
But I can't figure out how to do this. If I did it like this:
while bool == 1:
    data = conn.recv(1024)
    print data
    u_input = raw_input("input now")
    if u_input != "":
        conn.send(u_input)
        u_input = ""
The problem is that it would probably hang at the user input prompt, so it would not allow my client to send data.
How do I solve this?
I want to keep it in one window; can this be solved with threads?
(I've never used threads in Python.)
Python's sockets have a makefile tool to make this sort of interaction much easier. After creating a socket s, then run f = s.makefile(). That will return an object with a file-like interface (so you can use readline, write, writelines and other convenient method calls). The Python standard library itself makes use of this approach (see the source for ftplib and poplib for example).
To get text from the client and display it on the server console, write a loop with print f.readline().
To get text from the server console and send it to the client, write a loop with f.write(raw_input('+ ') + '\n').
To send and receive at the same time, run those two loops in separate threads:
Thread(target=read_client_and_print_to_console).start()
Thread(target=read_server_console_and_send).start()
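Put together, a minimal sketch of that approach might look like this (conn here is the connection returned by your existing s.accept() call):

from threading import Thread

f = conn.makefile()                       # file-like wrapper around the accepted socket

def read_client_and_print_to_console():
    while True:
        line = f.readline()               # blocks until the client sends a line
        if not line:                      # empty string means the client closed
            break
        print line.rstrip()

def read_server_console_and_send():
    while True:
        f.write(raw_input('+ ') + '\n')
        f.flush()                         # push the line out to the client right away

Thread(target=read_client_and_print_to_console).start()
Thread(target=read_server_console_and_send).start()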
If you prefer async over threads, here are two examples to get you started:
Basic Async HTTP Client
Basic Async Echo Server
The basic problem is that you have two sources of input you're waiting for: the socket and the user. The three main approaches I can think of are to use asynchronous I/O, to use synchronous (blocking) I/O with multiple threads, or to use synchronous I/O with timeouts. The last approach is conceptually the simplest: wait for data on the socket for up to some timeout period, then switch to waiting for the user to enter data to send, then back to the socket, etc.
I know that at a lower level you could implement this relatively easily by treating both the socket and stdin as I/O handles and using select to wait on both of them simultaneously, but I can't recall if that functionality is mapped into Python, or if so, how. That's potentially a very good way of handling this if you can make it work. EDIT: I looked it up, and Python does have a select module, but it sounds like it only works this way under Unix operating systems; on Windows, it can only accept sockets, not stdin or files.
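On a Unix system that approach would look roughly like this (a minimal sketch; conn is the accepted connection from your listener above):

import select
import sys

while True:
    # wait until either the client socket or the keyboard has something for us
    readable, _, _ = select.select([conn, sys.stdin], [], [])
    if conn in readable:
        data = conn.recv(1024)
        if not data:                      # client closed the connection
            break
        print data
    if sys.stdin in readable:
        line = sys.stdin.readline()
        if line.strip():
            conn.send(line)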
Have you checked Twisted, the Python event-driven networking engine and library?
Or oidranot, a Python library built especially for this on top of the Tornado web server?

Multithreaded Python socket server CPU usage spirals out of control

My Situation:
I have a Python server that does the following:
Listens on port 500 (for firewall reasons), and every time it receives a connection, spawns a thread to handle it. Each thread that is started responds to client input with a few methods that mostly do database interaction (I actually use the Django ORM, as my server application is coupled with a Django website).
I'd like to use select at some point (and ultimately, Twisted), but right now I can't, so I'll have to go with this.
My problem:
For some reason I can't seem to understand, the server's CPU usage sometimes spirals totally out of control and goes up to 200% CPU usage (we're running on a dual core), making it pretty difficult even to ssh in to stop it.
What I don't understand is that it usually does not happen during operation (If I have one or multiple clients connected, the CPU usage stays very low), but once all clients have disconnected, my server goes up to 200% CPU usage.
This led me to believe that the problem is not in the worker threads (if they didn't die properly, I'd expect massive RAM usage rather than high CPU), but in the server's accept method.
Up to now, I've been using this code:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((host, port))
s.listen(5)
logger.warning('Started - listening on {0}:{1}'.format(host, port))
while True:
    (newsocket, clientaddr) = s.accept()
    logger.info('Received connection from {0}:{1}'.format(clientaddr[0], clientaddr[1]))
    WorkerThread(newsocket, clientaddr, timeout).start()
I can't really grasp what could be going wrong here, but I thought that maybe I should use the following syntax:
while True:
    (newsocket, clientaddr) = s.accept()
    logger.info('Received connection from {0}:{1}'.format(clientaddr[0], clientaddr[1]))
    wk = WorkerThread(newsocket, clientaddr, timeout)
    wk.start()
I've seen this written a lot more often than what I've been doing up to now.
Does anyone know whether this could cause a problem such as the one I'm describing?
Thanks in advance,
