Keeping ports open in a python script that is run continuously - python

I'm trying to develop a server script using Python 3.4 that runs perpetually and responds to client requests on up to 5 separate ports. My preferred platform is Debian 8.0, which currently runs on a virtual machine in the cloud. My script works fine when I run it off the command line. I now need to (1) keep it running once I log off the server and (2) keep several ports open through the script so that a Windows client can connect to them.
For (1),
After trying several options that didn't seem to work [I tried using upstart, added the script to rc.local, used nohup with & to run it off the terminal, etc.], I eventually found something that does seem to keep the script running, even if it's not very elegant: I wrote an hourly cron script that checks whether the script is running in the process list and, if not, executes it.
Whenever I log in to the VM now, I see the following output when I type 'ps -ef':
root 22007 21992 98 Nov10 14-12:52:59 /usr/bin/python3.4 /home/userxyz/cronserver.py
I assume that the script is running based on the fact that there is an active process in the system. I mention this part because I suspect that there could be a correlation with part (2) of my issue.
For (2),
The script is supposed to open ports 49100 - 49105 and listen for connection requests, etc. When I run the script from the terminal, zenmap from my client machine verifies that these ports are open. However, when the cron job initiates the script, these ports don't seem to stay open. My Windows client program can't connect to the script either.
The python code I use for listening to a port:
import socket

f = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
f.bind((serviceIP, 49101))
f.listen(5)
while True:
    scName, address = f.accept()
    # [code to handle request]
    scName.shutdown(socket.SHUT_WR)
    scName.close()
Any insight or assistance would be greatly appreciated!

What you ask is not easy because it depends on a variety of factors:
What is the frequency of the data received?
How many clients are expected to connect to this server?
Is there a chance two clients try to connect at the same time?
How long does it take to handle the received data?
What do you need to do with your data?
Write to a database?
Write to a file?
Calculate something?
Etc.
Depending on your answer you'll have some design decisions to make for your solution.
But since you need an answer, here's a hack that represents one way to do things:
import socketserver
import threading
import datetime


class SleepyGaryReceptionHandler(socketserver.BaseRequestHandler):
    log_file_name = "/tmp/sleepygaryserver.log"

    def handle(self):
        # self.request is defined in BaseRequestHandler
        data_received = self.request.recv(1024)

        # self.client_address is also defined in BaseRequestHandler
        sender_address = self.client_address[0]

        # This is where you are supposed to do something with your data
        # This is an example
        self.write_to_log('Someone from {} sent us "{}"'.format(sender_address,
                                                                data_received))

        # A way to stop the server from going on forever
        # You could do this other ways, but it depends on what condition
        # should cause the shutdown
        if data_received.startswith(b"QUIT"):
            finishing_thread = threading.Thread(target=self.finish_in_another_thread)
            finishing_thread.start()

    # This will be called in another thread to terminate the server
    # self.server is also defined in BaseRequestHandler
    def finish_in_another_thread(self):
        self.write_to_log("Shutting down the server")
        self.server.shutdown()

    # Write something (with a timestamp) to a text file so that we
    # know something is happening
    def write_to_log(self, message):
        timestamp = datetime.datetime.now()
        # timespec= requires Python 3.6+; on 3.4 use timestamp.strftime('%Y-%m-%d %H:%M:%S')
        timestamp_text = timestamp.isoformat(sep=' ', timespec='seconds')
        with open(self.log_file_name, mode='a') as log_file:
            log_file.write("{}: {}\n".format(timestamp_text, message))


# NOTE: "localhost" only accepts connections from the same machine;
# use "" (all interfaces) or the server's IP to accept remote clients
service_address = "localhost"
port_number = 49101

server = socketserver.TCPServer((service_address, port_number),
                                SleepyGaryReceptionHandler)
server.serve_forever()
I'm using the socketserver module here instead of listening directly on a socket. This standard library module has been written to simplify writing a server, so use it!
All I do here is write to a text file what has been received. You would have to adapt it to your use.
To have it running continuously, still use a cron job, but one that starts the script at boot rather than checking every hour. Since this script will block until the server is stopped, we have to run it in the background. It would look something like this:
@reboot /usr/bin/python3 /home/sleepygary/sleppys_server.py &
I have tested it, and after 5 hours it still does its thing.
Now, like I said, it is a hack. If you want to go all the way and run it like any other service on your computer, you have to program it in a certain way. You can find more information on this page: https://www.freedesktop.org/software/systemd/man/daemon.html
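For example, a minimal systemd unit could look roughly like this (a sketch with assumed file names and paths, not taken from that page):

# /etc/systemd/system/sleepygaryserver.service
[Unit]
Description=Sleepy Gary socket server
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/sleepygary/sleppys_server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable sleepygaryserver.service and start it with systemctl start sleepygaryserver.service.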
I'm really tired so there may be some errors here and there.

Related

Is it possible to automate the process of running different python scripts in separate Spyder consoles?

I've found running different code across separate console windows in Spyder to be a handy way of executing code concurrently. I've always done this "manually" (hitting the new console button and then starting the desired piece of code in that console). However, I was wondering if there is a way to automate this process (or achieve the same effect, i.e. concurrent code with separate namespaces, in an automated way). By automated I mean something along the lines of pressing one button and having one part of the code run in one console, another part in another, and so on for a handful of consoles.
The reason I want to do this is that I am trying to run code using the zmq package and I need to have the server script and the multiple client scripts running separately from each other. I may be approaching this in a very naive way so perhaps there is a different way of doing this that doesn't require multiple consoles. I've heard the term "threading" thrown around but I'm not sure this is what I want.
Here is a specific example of why I want to be able to launch code in separate consoles automatically.
I will have one Client script that I want to be able to communicate with multiple Server scripts. Example code for what the client script and various server scripts will do is given below. This example code is very simple, but in reality the server scripts will be running calculations using a different python package. The nature of the specific calculations necessitates the use of multiple server scripts, each running in its own separate namespace (or console). That's just a requirement of my workflow. My question is: how can I launch the various server scripts automatically, without having to manually open a new Spyder console for each and run the code with the specific port number?
# Client
import zmq
import time

for request in range(10):
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    socket.connect("tcp://localhost:" + str(5555 + request))
    print("Sending request {} ...".format(request))
    socket.send(b"Hello")
    message = socket.recv()
    time.sleep(0.001)
    print("Received reply {} [ {} ]".format(request, message))

# Server
import time
import zmq

# portNumber is set separately in each console before this is run
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:" + str(portNumber))

for i in range(10):
    message = socket.recv()
    print("Received request: {}".format(message))
    time.sleep(0.001)
    message = ("World from server# " + str(portNumber)).encode()
    socket.send(message)
You should be looking into Docker! It is a great way to run multiple scripts at the same time.
https://www.docker.com/
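If the goal is simply to launch the servers programmatically rather than by hand, an alternative sketch (not Docker; it assumes the server code above is saved as server.py and modified to read portNumber from sys.argv, both of which are assumptions) is to start each one as its own process with the standard library:

import subprocess
import sys

# One interpreter per server, so each gets its own namespace,
# much like running them in separate Spyder consoles.
ports = range(5555, 5565)
processes = [subprocess.Popen([sys.executable, "server.py", str(port)])
             for port in ports]

# Wait for every server to finish its request/reply loop.
for process in processes:
    process.wait()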

Using pyngrok to keep the tunnel connected continuously

I have just started using ngrok, and while using the standard procedure, I can start the tunnel using ./ngrok tcp 22 and see the tunnel open in my dashboard.
But I would like to use pyngrok, and here when I use:
from pyngrok.conf import PyngrokConfig
from pyngrok import ngrok
ngrok.set_auth_token("<NGROK_AUTH_TOKEN>")
pyngrok_config = PyngrokConfig(config_path="/opt/ngrok/ngrok.yml")
ngrok.get_tunnels(pyngrok_config=pyngrok_config)
ssh_url = ngrok.connect()
It connects and generates a tunnel, but I can't see anything open in the dashboard, why?
Maybe it's because the Python script executes, generates the URL, and then stops and exits. But then how do I keep it running, or how do I even start a tunnel using Python or the API? Please suggest the correct script, using Python or the API.
The thread with the ngrok tunnel will terminate as soon as the Python process terminates. So you are correct, the reason this is happening is that your script is not long-lived. The easiest way to accomplish this is by following the example in the documentation.
Another issue is how you're setting the authtoken. Since you're not using the default config_path, you need to set this before setting the authtoken so it gets updated in the correct file (you'd also need to pass it to connect()). There are a couple ways to do this, but the easiest way from the docs is to just update the default config (since that's what will be used if you don't pass a pyngrok_config to any future method calls).
I also see that your response variable is ssh_url, so you probably want to start a TCP tunnel to a port other than 80 (the default). Perhaps you've configured this in your ngrok.yml, but if not, I've updated the call to connect() to ensure this is the type of tunnel started for you, and in case others try to use this same code snippet.
Full disclosure, I am the developer of pyngrok. Here is your code snippet updated with my changes.
import os, time

from pyngrok.conf import PyngrokConfig
from pyngrok import ngrok, conf

conf.get_default().config_path = "/opt/ngrok/ngrok.yml"
ngrok.set_auth_token(os.environ.get("NGROK_AUTH_TOKEN"))

ssh_tunnel = ngrok.connect(22, "tcp")

ngrok_process = ngrok.get_ngrok_process()

try:
    # Block until CTRL-C or some other terminating event
    ngrok_process.proc.wait()
except KeyboardInterrupt:
    print(" Shutting down server.")
    ngrok.kill()

Interprocess communication between Docker and the host system

I have a python program that does some machine learning. It is supposed to be accessible over the network using HTTP. Since I want Apache to act as the server, I use a python script to send the received data to my program using Python's multiprocessing.connection.
For example, the script to send will be:
#!/usr/bin/python
from multiprocessing.connection import Client
import cgi
from job import *
form = cgi.FieldStorage()
address = ('localhost', 6000)
conn = Client(address, authkey='secretpass')
conn.send(form)
And the receiving script will be:
from multiprocessing.connection import Listener
import threading

print "Starting listener"

address = ('localhost', 6000)
listener = Listener(address, authkey='secretpass')

while True:
    conn = listener.accept()
    msg = conn.recv()
    conn.close()
    # Do stuff with msg

listener.close()
Once I trigger the URL, Apache will call the first script, which will send the Python object to the other script. The other script will receive it and do the processing.
Now, I would like to put the ML part into a Docker container while Apache stays on the host system. In that case, how will I communicate?
As part of the multiprocessing library you will find the process Queue. This structure exists to allow messages to be passed between processes. If you are working on Linux, it is a matter of setting up a global variable and pushing messages. The pattern is usually: any process can post, and a single process reads. With two or more queues you can easily set up back-and-forth communication without worrying about collisions or lost messages.
This becomes harder in Windows and other more restrictive systems, as there are no globals shared between processes, and no way to pass a complex structure at creation of a process. In Windows it is far easier to simply stick to threads.
Details of the multi-processing/threads in python can be found here:
16.6. multiprocessing — Process-based “threading” interface
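A minimal sketch of that post/read pattern with multiprocessing.Queue (illustrative only; on its own it does not cross the Docker/host boundary):

from multiprocessing import Process, Queue

def producer(queue, worker_id):
    # Any process can post messages to the shared queue
    queue.put("message from worker %d" % worker_id)

if __name__ == "__main__":
    queue = Queue()
    workers = [Process(target=producer, args=(queue, i)) for i in range(3)]
    for worker in workers:
        worker.start()
    # A single process reads everything that was posted
    for _ in workers:
        print(queue.get())
    for worker in workers:
        worker.join()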

Strange behaviour in Python SocketServer

I have created a python socket server, using a class inherited from SocketServer.BaseRequestHandler and overriding the setup and handle methods. Of course, SocketServer.BaseRequestHandler.setup is called at the end of my own setup.
This is my server class
class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
A typical forking socket server.
Here is how I run my server
while True:
    try:
        server = MyServer((host, port), MyRequestHandler)
        print('Server listening on', (host, port))
        server.timeout = 300  # seconds
        server.serve_forever()
    except:
        print('Error with server, retrying in 5 seconds...')
        print(sys.exc_info())
        sleep(5)
host and port are predefined, no problem with them.
The server works fine, except when the client count reaches 40. After that number, no new connections are accepted; all are refused. I checked this with a client test python script from my own system. Only 40!
Why 40? I have checked the source code for SocketServer and found nothing related to this. I currently have no clue regarding this issue. Any, and I really mean it, any help is appreciated :))
Thanks in advance
OS: CentOS 6.5
This is probably unrelated to Python. Tune your Linux kernel; in the testing phase do stuff like:
turn syncookies off
increase file handles available for the user (every socket opened is also a file handle used - maybe you're running out of them?)
look at stuff like this: http://people.redhat.com/alikins/system_tuning.html#tcp
and: http://people.redhat.com/alikins/system_tuning.html#fds
check if stuff like fail2ban is installed (http://www.fail2ban.org/wiki/index.php/Main_Page)
check if rate limits are applied by iptables (in testing phase you could do iptables -F after making sure that default chain policy is ACCEPT)
and last but not least, check dmesg, /var/log/messages, /var/log/syslog, etc.
One thing that theoretically might be related to Python is SO_REUSEADDR:
http://www.unixguide.net/network/socketfaq/4.5.shtml
Check if you have it set for your socket.
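With the SocketServer classes, that usually means the allow_reuse_address class attribute, e.g. (a sketch based on the server class from the question):

import SocketServer

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    # When True, SO_REUSEADDR is set on the listening socket before bind()
    allow_reuse_address = True
    timeout = 30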
UPDATE:
I just realized that since the 40 connections your socket server maxes out at is actually pretty low, the simplest option could be running your socket server through systrace; just use the -f flag to track forked processes as well. You could e.g. start the socket server, open 35 simultaneous connections, then attach systrace to the running process, set up 5 more connections and see what systrace reports. Very often in such situations syscalls fail with errors that are visible in systrace and allow pinpointing the root cause relatively easily.
I really have no idea how I missed this in the source!
class ForkingMixIn:
    """Mix-in class to handle each request in a new process."""

    timeout = 300
    active_children = None
    max_children = 40
Yeah, now I see the max_children property.
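For anyone else who hits this, the limit can be raised by simply overriding it on the server class:

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
    max_children = 100  # ForkingMixIn defaults to 40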
Thanks guys

python xinetd client disconnection handling

This may or may not be a coding issue. It may also be an xinetd daemon issue, I do not know.
I have a python script which is triggered from a Linux server running xinetd. Xinetd has been set up to only allow one instance, as I only want one machine to be able to connect to the service, which is therefore also limited by IP.
Currently, when the client connects to xinetd the service works correctly and the script begins sending its output to the client machine. However, when the client disconnects (e.g. due to a reboot), the process is still alive on the server, and this blocks the client from connecting again once it has finished rebooting, and so on.
Q: How can I detect in python that the client has disconnected? Perhaps I can test whether stdout is no longer being read by the client (and then exit the script), or is there a much easier way in xinetd to have the child process be killed when the client disconnects?
(I'm using python 2.4.3 on RHEL 5 Linux - solutions for 2.4 are needed, but 3.1 solutions would be useful to know also.)
Add a signal handler for SIGHUP. (x)inetd sends this upon the socket disconnecting.
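A minimal sketch of that (exiting is just one possible reaction; adapt it to whatever cleanup your script needs):

import signal
import sys

def handle_hangup(signum, frame):
    # xinetd signals that the client side of the connection has gone away
    sys.exit(0)

signal.signal(signal.SIGHUP, handle_hangup)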
Monitor the signals sent to your process. Maybe your script isn't responding to the SIGHUP sent by xinetd; monitor the signal and let it die.
You don't seem to get a SIGHUP, but you do get a SIGPIPE, at least so long as you are attempting any IO on the connection. If the application spends long periods of time not doing any IO, then you could just start a thread reading stdin to ensure you get the SIGPIPE as soon as the disconnection occurs. This was good enough for my application but then I didn't use any pipes other than the ones xinetd gave me.
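A sketch of that watcher-thread idea (this version simply exits when stdin reaches EOF rather than waiting for a signal, which is a slight variation on the above):

import os
import sys
import threading

def watch_stdin():
    # Under (x)inetd, stdin is the client connection; read() returns
    # once the client disconnects, and then we terminate the process.
    sys.stdin.read()
    os._exit(0)

watcher = threading.Thread(target=watch_stdin)
watcher.setDaemon(True)
watcher.start()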
I've seen several places on the net where people talk about SIGHUP getting sent on client disconnection, so I've written an inetd python script to test out a couple of servers (one inetd and the other xinetd); you could use it to check on the signals getting sent. It just logs what it finds to /var/log/test.log. Perhaps it will be useful.
#!/usr/bin/python
import os, signal, sys

skip = ["SIGKILL", "SIG_DFL", "SIGSTOP", "SIG_IGN", "SIGCLD", "SIGCHLD"]

name_map = {}
identifiers = [i for i in dir(signal) if i.startswith("SIG") and not i in skip]
for i in identifiers:
    name_map[getattr(signal, i)] = i

def handler(num, frame):
    signame = name_map[num]
    os.system("echo handled %s >> /var/log/test.log" % signame)

if __name__ == "__main__":
    for id, name in name_map.iteritems():
        signal.signal(id, handler)
    while True:
        print sys.stdin.readline()
        sys.stdout.flush()
