Strange behaviour in Python SocketServer

I have created a Python socket server, using a class inherited from SocketServer.BaseRequestHandler and overriding the setup and handle methods. Of course, SocketServer.BaseRequestHandler.setup is called at the end of my own setup.
This is my server class
class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
A typical forking socket server.
Here is how I run my server
while True:
    try:
        server = MyServer((host, port), MyRequestHandler)
        print('Server listening on', (host, port))
        server.timeout = 300  # seconds
        server.serve_forever()
    except:
        print('Error with server, retrying in 5 seconds...')
        print(sys.exc_info())
        sleep(5)
host and port are predefined, no problem with them.
The server works fine until the client count reaches 40. After that, no new connections are accepted; they are all refused. I checked this with a test client script on my own system. Only 40!
Why 40? I have checked the source code for SocketServer and found nothing related to this. I currently have no clue regarding this issue. Any, and I really mean it, any help is appreciated :))
Thanks in advance
OS: CentOS 6.5

This is probably unrelated to Python. Tune your Linux kernel; while testing, do things like:
turn syncookies off
increase the number of file handles available to the user (every open socket also uses a file handle - maybe you're running out of them?)
look at stuff like this: http://people.redhat.com/alikins/system_tuning.html#tcp
and: http://people.redhat.com/alikins/system_tuning.html#fds
check if stuff like fail2ban is installed (http://www.fail2ban.org/wiki/index.php/Main_Page)
check if rate limits are applied by iptables (while testing you could run iptables -F after making sure that the default chain policy is ACCEPT)
and last but not least, check dmesg, /var/log/messages, /var/log/syslog, etc.
One thing that theoretically might be related to Python is SO_REUSEADDR:
http://www.unixguide.net/network/socketfaq/4.5.shtml
Check if you have it set for your socket.
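If you are using the SocketServer classes, the usual knob for this is the allow_reuse_address class attribute, which makes TCPServer set SO_REUSEADDR on the listening socket before bind(); a minimal sketch, reusing the MyServer class from the question:

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    allow_reuse_address = True  # ask TCPServer to set SO_REUSEADDR before binding
    timeout = 30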
UPDATE:
I just realized that since the 40 connections your socket server maxes out at is actually pretty low, the simplest option could be running your socket server through strace (use the -f flag to trace forked processes as well). You could, e.g., start the socket server, open 35 simultaneous connections, then attach strace to the running process, set up 5 more connections, and see what strace reports. Very often in such situations syscalls fail with errors that are visible in strace and allow pinpointing the root cause relatively easily.

I really have no idea how I missed this in the source!
class ForkingMixIn:
    """Mix-in class to handle each request in a new process."""
    timeout = 300
    active_children = None
    max_children = 40
Yeah, now I see the max_children class attribute.
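For anyone hitting the same wall: overriding that attribute on the server subclass raises the cap; a minimal sketch (the value 200 is arbitrary):

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
    max_children = 200  # allow more than the default 40 simultaneous forked handlers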
Thanks guys

Related

Keeping ports open in a python script that is run continuously

I'm trying to develop a server script using Python 3.4 that runs perpetually and responds to client requests on up to 5 separate ports. My preferred platform is Debian 8.0, which currently runs on a virtual machine in the cloud. My script works fine when I run it off the command line - I now need to (1) keep it running once I log off the server and (2) keep several ports open through the script so that a Windows client can connect to them.
For (1),
After trying several options that didn't seem to work [upstart, adding the script to rc.local, using nohup with & to run it off the terminal, etc.], I eventually found something that does keep the script running, even if it's not very elegant - I wrote an hourly cron script that checks whether the script is in the process list and, if not, executes it.
Whenever I login to the VM now, I see the following output when I type 'ps -ef':
root 22007 21992 98 Nov10 14-12:52:59 /usr/bin/python3.4 /home/userxyz/cronserver.py
I assume that the script is running based on the fact that there is an active process in the system. I mention this part because I suspect that there could be a correlation with part (2) of my issue.
For (2),
The script is supposed to open ports 49100 - 49105 and listen for connection requests, etc. When I run the script from the terminal, zenmap from my client machine verifies that these ports are open. However, when the cron job initiates the script, these ports don't seem to stay open. My Windows client program can't connect to the script either.
The python code I use for listening to a port:
f = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
f.bind((serviceIP, 49101))
f.listen(5)
while True:
    scName, address = f.accept()
    [code to handle request]
    scName.shutdown(socket.SHUT_WR)
    scName.close()
Any insight or assistance would be greatly appreciated!
What you ask is not easy because it depends on a variety of factors:
What is the frequency of the data received?
How many clients are expected to connect to this server?
Is there a chance two clients try to connect at the same time?
How long does it take to handle the received data?
What do you need to do with your data?
Write to a database?
Write to a file?
Calculate something?
Etc.
Depending on your answer you'll have some design decisions to make for your solution.
But since you need an answer, here's a hack that represents one way to do things:
import socketserver
import threading
import datetime


class SleepyGaryReceptionHandler(socketserver.BaseRequestHandler):
    log_file_name = "/tmp/sleepygaryserver.log"

    def handle(self):
        # self.request is defined in BaseRequestHandler
        data_received = self.request.recv(1024)

        # self.client_address is also defined in BaseRequestHandler
        sender_address = self.client_address[0]

        # This is where you are supposed to do something with your data
        # This is an example
        self.write_to_log('Someone from {} sent us "{}"'.format(sender_address,
                                                                data_received))

        # A way to stop the server from going on forever
        # You could do this in other ways; it depends on what condition
        # should cause the shutdown
        if data_received.startswith(b"QUIT"):
            finishing_thread = threading.Thread(target=self.finish_in_another_thread)
            finishing_thread.start()

    # This will be called in another thread to terminate the server
    # self.server is also defined in BaseRequestHandler
    def finish_in_another_thread(self):
        self.write_to_log("Shutting down the server")
        self.server.shutdown()

    # Write something (with a timestamp) to a text file so that we
    # know something is happening
    def write_to_log(self, message):
        timestamp = datetime.datetime.now()
        timestamp_text = timestamp.isoformat(sep=' ', timespec='seconds')
        with open(self.log_file_name, mode='a') as log_file:
            log_file.write("{}: {}\n".format(timestamp_text, message))


service_address = "localhost"
port_number = 49101

server = socketserver.TCPServer((service_address, port_number),
                                SleepyGaryReceptionHandler)
server.serve_forever()
I'm using the socketserver module here instead of listening directly on a socket. This standard library module was written to simplify writing a server, so use it!
All I do here is write to a text file what has been received. You would have to adapt it to your use.
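A quick way to exercise it is a throwaway test client; a minimal sketch, assuming the server above is running on localhost:49101:

import socket

# Send one message; the server should log it and keep running
with socket.create_connection(("localhost", 49101)) as conn:
    conn.sendall(b"hello from the test client")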
To have it running continuously, use a cron job, but one that starts the script at boot. Since this script blocks until the server is stopped, we have to run it in the background. The crontab entry would look something like this:
@reboot /usr/bin/python3 /home/sleepygary/sleppys_server.py &
I have tested it and after 5 hours it is still doing its thing.
Now, like I said, it is a hack. If you want to go all the way and behave like any other service on your computer, you have to program it in a certain way. You can find more information on this page: https://www.freedesktop.org/software/systemd/man/daemon.html
I'm really tired so there may be some errors here and there.

Pyro4 Remote connection blocked

I am using Pyro4 to make a remote connection between a Raspberry Pi and a computer. I've tested the code locally on my computer. But now I want to use it on the Raspberry Pi, and the target machine refuses the connection. The name server is set, I can ask for the metadata, and the client is not giving any error.
Server code:
daemon = Pyro4.core.Daemon("192.168.0.199")
Pyro4.config.HOST = "192.168.0.199"
ns = Pyro4.locateNS()
print ns.lookup("client", return_metadata=True) #this works
callback = MainController()
daemon.register(callback)
vc2 = Pyro4.core.Proxy("PYRONAME:client#192.168.0.199:12345")
Client code:
ns = Pyro4.locateNS()
Pyro4.config.HOST = "192.168.0.199"
uri = daemon.register(VehicleController)
ns.register("client#192.168.0.199:12345", uri)
print "Connection set!"
daemon.requestLoop()
Firewall is also off.
Thanks
The main issue is that the server never runs the daemon request loop and so cannot respond to requests.
But there are a lot of issues with the code as shown:
it is not complete.
you're mixing up server and client responsibilities; why is the client running a daemon? That's the server's job.
you're registering an object with a logical name that appears to be a physical one. That's not how the name server works.
you're registering things in both the client and server.
the server never runs the request loop of the daemon it creates.
what is that 'vc2' proxy doing in the server? Clients are supposed to create proxies to server objects.
it's generally best to set Pyro's config variables before doing anything else; that way you don't have to repeat the IP address that the daemon binds to.
All in all you seem to be confused about various core concepts of Pyro.
Getting a better understanding (have you worked through the tutorial chapter of the manual?) and fixing the code accordingly will likely resolve your issue.
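For contrast, here is a minimal sketch of the usual split (the VehicleController class, its ping method, and the logical name "vehicle.controller" are illustrative; it assumes a Pyro4 name server is already running on the network):
Server:

import Pyro4

@Pyro4.expose
class VehicleController(object):
    def ping(self):
        return "pong"

Pyro4.config.HOST = "192.168.0.199"         # set config before creating the daemon
daemon = Pyro4.Daemon()                     # the daemon belongs in the server
ns = Pyro4.locateNS()
uri = daemon.register(VehicleController())  # register the server object with the daemon
ns.register("vehicle.controller", uri)      # a logical name, not a host:port
daemon.requestLoop()                        # the server must sit in the request loop

Client:

import Pyro4

controller = Pyro4.Proxy("PYRONAME:vehicle.controller")  # resolved via the name server
print(controller.ping())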
Okay, got some more info
I can connect when I edit my Pyro4 core URI from obj_x#0.0.0.0:x to obj_x#192.168.0.199:x and connect manually. So I guess there is something wrong with the way I register the address with the name server.
I'll keep you posted
Tom

Keeping python sockets alive in event of connection loss

I'm trying to make a socket connection that stays alive in the event of connection loss. Basically I want to keep the server always open (and preferably the client too) and restart the client after the connection is lost. But if one end shuts down, both ends shut down. I simulated this by having both ends on the same computer ("localhost") and just clicking the X button. Could this be the source of my problems?
Anyway, my connection code
m.connect(("localhost", 5000))
is inside an if, a try, and a while, e.g.
while True:
    if tryconnection:
        # Error handling
        try:
            m.connect(("localhost", 5000))
            init = True
            tryconnection = False
        except socket.error:
            init = False
            tryconnection = True
At the end of my code I just call m.send("example") when I press a button; if that returns an error, the code that tries to connect to "localhost" starts again. The server is a pretty generic setup with a while loop around x.accept(). So how do I keep them both alive when the connection closes so they can reconnect when it opens again? Or is my code alright and is simulating on the same computer just messing with it?
I'm assuming we're dealing with TCP here since you use the word "connection".
It all depends on what you mean by "connection loss".
If by connection loss you mean that the data exchange between the server and the client may be suspended/unresponsive (important: I did not say "closed" here) for a long amount of time, seconds or minutes, then there's not much you can do about it, and that's fine, because the TCP protocol has been carefully designed to handle such situations gracefully. The timeout before deciding that the other side is definitely down, giving up, and closing the connection is veeeery long (minutes). Example of such a situation: the client is your smartphone, connected to some server on the web, and you enter a long tunnel.
But when you say: "But if one end shuts down both ends shut down. I simulated this by having both ends on the same computer localhost and just clicking the X button", what you are doing is actually closing the connections.
If you abruptly terminate the server, the TCP/IP implementation of your operating system will know that there is no longer a process listening on port 5000 and will cleanly close all connections to that port. In doing so, a few TCP segments will be exchanged with the client side (a TCP 4-way teardown or a reset), and all clients will be disconnected. It is important to understand that this is done at the TCP/IP implementation level, that is to say, by your operating system.
If you abruptly terminate a client, accordingly, the TCP/IP implementation of your operating system will cleanly close the connection from its port Y to your server's port 5000.
On both sides, at the network level, this is the same as if you had explicitly (not abruptly) closed the connection in your code.
...and once closed, there's no way you can possibly re-establish those connections as they were before. You have to establish new connections.
If you want to establish these new connections and get the application logic back to the state it was in before, that's another topic. TCP alone can't help you here. You need a higher-level protocol, maybe your own, to implement a stateful client/server application.
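On the client side, "establish new connections" usually just means a retry loop; a minimal sketch (the host, port, and retry delay are illustrative):

import socket
import time

def connect_forever(host="localhost", port=5000, delay=5):
    while True:
        try:
            # A brand-new connection; the old one cannot be revived
            return socket.create_connection((host, port))
        except socket.error:
            time.sleep(delay)  # server not reachable yet, retry later

m = connect_forever()
m.sendall(b"example")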
The issue is not related to the programming language, in this case Python. The operating system (Windows or Linux) has the final word regarding the socket's degree of resilience.

Trouble with a Python server

I have a few test clients that are encountering the same issue each time. The clients can connect, and they can send their first message, but after that the server stops responding to that client. I suspect that the problem is related to s.accept(), but I'm not sure exactly what is wrong or how to work around it.
def startServer():
    host = ''
    port = 13572
    backlog = 5
    size = 1024
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen(backlog)
    print "Close the command prompt to stop Gamelink"
    while 1:
        try:
            client, address = s.accept()
            data = client.recv(size)
            if data:
                processData(data)
                client.send("OK")
            else:
                print "Disconnecting from client at client's request"
                client.close()
        except socket.error, (value, message):
            if s:
                print "Disconnecting from client, socket issue"
                s.close()
            print "Error opening socket: " + message
            break
        except:
            print "Gamelink encountered a problem"
            break
    print "End of loop"
    client.close()
    s.close()
The server is intended to be accessed across a local network, and it needs to be light weight and very quick to respond, so if another implementation (such as thread based) would be better for meeting those requirements please let me know. The intended application is to be used as a remote gaming keyboard, thus the need for low resource use and high speed.
Writing a server using socket directly will be hard. As Keith says, you need to multiplex the connections somehow, like with select or poll or threads or fork. You might think you need only one connection, but what will you do when something hiccups and the connection is lost? Will your server be able to respond to reconnection attempts from the client if it hasn't yet realized the connection is lost?
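To make the multiplexing idea concrete, here is a minimal select-based sketch (the port, buffer size, and "OK" reply mirror the question, but this is an illustration, not a drop-in fix):

import select
import socket

HOST, PORT, SIZE = "", 13572, 1024

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(5)

sockets = [server]  # every socket select() should watch
while True:
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is server:
            client, address = server.accept()  # new connection: watch it too
            sockets.append(client)
        else:
            data = sock.recv(SIZE)
            if data:
                sock.send(b"OK")      # connection stays open for more messages
            else:
                sockets.remove(sock)  # client closed its end
                sock.close()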
If your networking needs are basic, you might be able to let something else handle all the listening and accepting and forking stuff for you. You don't specify a platform, but examples of such programs are launchd on Mac OS and xinetd on Linux. The details differ between these tools, but basically you configure them, in some configuration file, to listen for a connection on some port. When they get it, they take care of setting up the connection, then they exec() your program with stdin and stdout aimed at the socket, so you can simply use all the basic IO you probably already know like print and sys.stdin.read().
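With such a tool doing the socket work, the handler itself shrinks to plain stdin/stdout code; a rough sketch of what an inetd-style handler could look like (it assumes xinetd or launchd is configured to run it once per connection):

import sys

line = sys.stdin.readline()        # data from the client arrives on stdin
print("OK, got: " + line.strip())  # anything printed goes back to the client
sys.stdout.flush()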
The trouble with solutions like xinetd and launchd is that for each new connection, they must fork() and exec() a new instance of your program. These are relatively heavy operations, so a large number of connections or a high rate of new connections might hit the limits of your server. But worse, since each connection is in a separate process, sharing data between them is hard. Also, most solutions you might find for communicating between processes involve a blocking API, and then you are back to the problem of multiplexing with select or threads or similar.
If that doesn't meet your needs, I think you are better off learning to use a higher-level networking framework which will handle all the problems you will inevitably encounter if you go down the path of socket. One such framework I'd suggest is Twisted. Beyond handling the mundane details of handling connections, and the more complex task of multiplexing IO between them, you will also have a huge library of tools that will make implementing your protocol much easier.
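For a sense of scale, a simple TCP responder in Twisted is only a few lines; a sketch (the Echo protocol and the port number are illustrative):

from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)  # reply with whatever was received

factory = protocol.Factory()
factory.protocol = Echo
reactor.listenTCP(13572, factory)
reactor.run()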

How to close a socket left open by a killed program?

I have a Python application which opens a simple TCP socket to communicate with another Python application on a separate host. Sometimes the program will either error or I will directly kill it, and in either case the socket may be left open for some unknown time.
The next time I go to run the program I get this error:
socket.error: [Errno 98] Address already in use
Now the program always tries to use the same port, so it appears as though it is still open. I checked and am quite sure the program isn't running in the background and yet my address is still in use.
SO, how can I manually (or otherwise) close a socket/address so that my program can immediately re-use it?
Update
Based on Mike's answer I checked out the socket(7) page and looked at SO_REUSEADDR:
SO_REUSEADDR
    Indicates that the rules used in validating addresses supplied in a bind(2) call
    should allow reuse of local addresses. For AF_INET sockets this means that a socket
    may bind, except when there is an active listening socket bound to the address.
    When the listening socket is bound to INADDR_ANY with a specific port then it is
    not possible to bind to this port for any local address. Argument is an integer
    boolean flag.
Assume your socket is named s... you need to set socket.SO_REUSEADDR on the server's socket before binding to an interface... this will allow you to immediately restart a TCP server...
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((ADDR, PORT))
You might want to try using Twisted for your networking. Mike gave the correct low-level answer, SO_REUSEADDR, but he didn't mention that this isn't a very good option to set on Windows. This is the sort of thing that Twisted takes care of for you automatically. There are many, many other examples of this kind of boring low-level detail that you have to pay attention to when using the socket module directly but which you can forget about if you use a higher level library like Twisted.
You are confusing sockets, connections, and ports. Sockets are endpoints of connections, which in turn are 5-tuples {protocol, local-ip, local-port, remote-ip, remote-port}. The killed program's socket has been closed by the OS, and ditto the connection. The only relic of the connection is the peer's socket and the corresponding port at the peer host. So what you should really be asking about is how to reuse the local port. To which the answer is SO_REUSEADDR as per the other answers.
