Python code repeating n+1 times every run

I've been working on an application which contains a small, simple HTTP server to handle occasional POST requests. The server and all the functionality around it work fine, but each time the server runs, the log output shows my code executing multiple times: one extra time for each request the HTTP server has handled.
class HttpApp:
    def __init__(self, host="localhost", port=25565):
        logging = Util.configure_logging(__name__)
        server_address = (host, port)
        httpd = HTTPServer(server_address, ServerObject)
        logging.info('Starting httpd...\n')
        try:
            httpd.serve_forever()
        except KeyboardInterrupt:
            pass
        httpd.server_close()
        logging.info('Stopping httpd...\n')
class ServerObject(BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()

    def do_GET(self):
        print("GET request,\nPath: %s\nHeaders:\n%s\n" % (str(self.path), str(self.headers)))
        self._set_response()
        self.wfile.write("GET request for {}".format(self.path).encode('utf-8'))

    def do_POST(self):
        content_length = int(self.headers['Content-Length'])
        content_type = str(self.headers['Content-Type'])
        # print(content_length)
        post_data = self.rfile.read(content_length)
        if content_type == "application/json":
            parsed_data = json.loads(post_data.decode('utf-8'))
        else:
            print("Bad request!")
            self._set_response()
            self.wfile.write(json.dumps({"Response": "Bad Request"}).encode('utf-8'))
            return  # without this, parsed_data is undefined below
        print("POST request,\nPath: %s\nHeaders:\n%s\n\nBody:\n%s\n" %
              (str(self.path), str(self.headers), parsed_data))
        print("Parsed Params: %s" % parsed_data)

        def free_port():
            free_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            free_socket.bind(('0.0.0.0', 0))
            free_socket.listen(5)
            port = free_socket.getsockname()[1]
            free_socket.close()
            return port

        rand_port = free_port()
        SpawnSlave(category=parsed_data["category"], tag=parsed_data["tag"],
                   filename=parsed_data["filename"], port=rand_port)
        self._set_response()
        self.wfile.write(json.dumps({"port": rand_port}).encode('utf-8'))
A CLI application passes information to HttpApp, which then starts based on that information. The first time it receives a connection, it goes through its steps normally and prints output only once. The second time, output is printed twice, and so on. Only POST requests are handled by this server. I have gone over my code a few times to make sure I'm not calling it more than once, but I'm stumped. For more context, more of this code is available on GitHub, but this is the only relevant piece.

It turns out that this wasn't an issue with my code, but rather an issue with the logger I was using, which was adding multiple console handlers to the same logger, causing output to be repeated. I fixed this in my CLI library.
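The duplicate-handler problem described above can be sketched like this. The `configure_logging` helper here is a hypothetical stand-in for `Util.configure_logging` (the real one lives in the asker's CLI library); the key is the guard that refuses to attach a second handler to a logger that already has one:

```python
import logging

def configure_logging(name):
    # Hypothetical stand-in for Util.configure_logging: returns a named
    # logger with exactly one console handler attached.
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    # Guard: only attach a handler if none exists yet. Without this check,
    # calling configure_logging() a second time adds a second StreamHandler,
    # and every record is then emitted twice, three times, and so on --
    # exactly the n+1 repetition described in the question.
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
    return logger

log_a = configure_logging("demo")
log_b = configure_logging("demo")  # same logger object, still one handler
```

Because `logging.getLogger(name)` always returns the same object for the same name, repeated configuration calls are where the extra handlers sneak in.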

Related

Python HTTPServer and periodic tasks

I'm using HTTPServer to listen for incoming POST requests and serve them. All is working fine with that.
I need to add some periodic tasks to the script (every X seconds: do something). Since the HTTP server takes full control after
def run(server_class=HTTPServer, handler_class=S, port=9999):
    server_address = (ethernetIP, port)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
I wonder whether there's any way to include a check on time.time() as part of:
class S(BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    def do_GET(self):
        self._set_response()
        self.wfile.write("GET request for {}".format(self.path).encode('utf-8'))

    def do_POST(self):
        # my stuff here
Any ideas are welcome. Thanks!
Thanks to @rdas for pointing me to the separate-thread solution. I tried schedule, but it didn't work with the HTTP server, because I can't tell the script to run the pending jobs.
I tried threading, running my periodic task as a daemon, and it worked! Here's the code structure:
import sys
import time
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

polTime = 60           # how often we touch the file
polFile = "myfile.abc"

# this is the daemon thread
def polUpdate():
    while True:
        thisSecond = int(time.time())
        if thisSecond % polTime == 0:  # every X seconds
            f = open(polFile, "w")
            f.close()                  # touch and close
        time.sleep(1)                  # avoid firing twice in the same second
    return "should never come this way"

# here's the HTTP server starter
def run(server_class=HTTPServer, handler_class=S, port=9999):
    server_address = (ethernetIP, port)
    httpd = server_class(server_address, handler_class)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    sys.exit(1)

# init the thread as a daemon
d = threading.Thread(target=polUpdate, name='Daemon', daemon=True)
d.start()

# run the HTTP server
run(port=conf_port)
The HTTP server doesn't block the thread, so it works great.
By the way, I'm using the file 'touching' as proof of life for the process.
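A variation on the same daemon-thread idea can be sketched with threading.Event, so the loop sleeps interruptibly instead of polling time.time() every second (the file name and interval here are placeholders):

```python
import threading

def touch_periodically(path, interval, stop_event):
    # Touch the file every `interval` seconds until stop_event is set.
    # Event.wait doubles as an interruptible sleep: it returns False on
    # timeout (time to touch again) and True once the event is set.
    while not stop_event.wait(interval):
        open(path, "w").close()  # touch and close

stop = threading.Event()
worker = threading.Thread(target=touch_periodically,
                          args=("myfile.abc", 60, stop), daemon=True)
worker.start()
# ... run the HTTP server here ...
stop.set()  # clean shutdown instead of relying on daemon teardown
```

This avoids the edge cases of the modulo check (a busy second can be missed, or the file touched twice) and lets the main thread stop the worker deliberately.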

Python HTTP server keep-alive

How can I keep my Python HTTP server connected (streaming) to my browser in real time?
(Updating an image indefinitely, like the Raspberry Pi's motionEye.)
import http.server
import socketserver

class MyHttpRequestHandler(http.server.SimpleHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.send_header("Connection", "keep-alive")
        self.send_header("keep-alive", "timeout=999999, max=99999")
        self.end_headers()

    def do_GET(self):
        #self.send_response(204)
        #self.end_headers()
        if self.path == '/':
            self.path = 'abc.jpg'
        return http.server.SimpleHTTPRequestHandler.do_GET(self)

# Create an object of the above class
handler_object = MyHttpRequestHandler
PORT = 8000
my_server = socketserver.TCPServer(("", PORT), handler_object)
# Start the server
my_server.serve_forever()
Just keep writing, as in:
while True:
    self.wfile.write(b"data")
This, however, won't get you into event-stream / server-sent-events territory without external helper libraries, as far as I'm aware.
I came across the same issue. I then found by chance (after much debugging) that you need to send line breaks (\r\n or \n\n) to get the packets sent:
import http.server
import time

class MyHttpRequestHandler(http.server.BaseHTTPRequestHandler):
    value = 0
    # One can also set protocol_version = 'HTTP/1.1' here

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.send_header("Connection", "keep-alive")
        self.end_headers()
        while True:
            self.wfile.write(str(self.value).encode())
            self.wfile.write(b'\r\n')  # Or \n\n, necessary to flush
            self.value += 1
            time.sleep(1)

PORT = 8000
my_server = http.server.HTTPServer(("", PORT), MyHttpRequestHandler)
# Start the server
my_server.serve_forever()
This enables you to send Server-Sent Events (SSE), do HTTP long polling, or even stream JSON/raw HTTP with the http.server library.
As the comment in the code says, you can also set the protocol version to HTTP/1.1 to enable keep-alive by default. If you do so, you will have to specify Content-Length for every response you send, otherwise the connection will never be terminated.
It is probably best to combine this with a threaded server to allow concurrent connections, as well as maybe setting a keepalive on the socket itself.
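For the threaded server suggested above, a minimal sketch using the stdlib's ThreadingHTTPServer (Python 3.7+): each connection is handled in its own thread, so one long-lived streaming response doesn't block the others. The handler below is illustrative, not the answer's exact code:

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

    def log_message(self, fmt, *args):
        pass  # keep stderr quiet

# Port 0 asks the OS for any free port; useful for local testing.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
print("listening on port", server.server_address[1])
# server.serve_forever() would block here, spawning a thread per connection.
server.server_close()
```

On older Pythons the same thing can be built by mixing socketserver.ThreadingMixIn into HTTPServer, which is all ThreadingHTTPServer does internally.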

How to start app using sockets Tornado 4.4

I'm a newbie at creating sockets manually. My OS is Ubuntu. I have a proxy server written in Python using Tornado; everything is fine when I use the "fast version" of starting the app, I mean:
if __name__ == "__main__":
    app = make_app()
    port = options.port  # default 8000
    if len(sys.argv) > 1:
        port = int(sys.argv[1])
    app.listen(port)
    print 'tornado working on port %s' % port
    tornado.ioloop.IOLoop.current().start()
But when I want to change it to use the "socket version", it seems I'm doing something wrong: I get an error saying that the address is already in use.
code:
def make_app():
    return MyApplication()

def connection_ready(sock, fd, events):
    while True:
        try:
            connection, address = sock.accept()
        except socket.error as e:
            if e.args[0] not in (errno.EWOULDBLOCK, errno.EAGAIN):
                raise
            return
        connection.setblocking(0)
        app = make_app()
        app.listen(8000)  # I get here an error: [Errno 98] Address already in use

if __name__ == "__main__":
    port = options.port  # default port 8000
    if len(sys.argv) > 1:
        port = int(sys.argv[1])
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setblocking(False)
    sock.bind(("", port))
    sock.listen(128)
    io_loop = tornado.ioloop.IOLoop.current()
    callback = functools.partial(connection_ready, sock)
    io_loop.add_handler(sock.fileno(), callback, io_loop.READ)
    io_loop.start()
I'm trying to implement it the way the documentation shows (http://www.tornadoweb.org/en/stable/ioloop.html), but I don't see the app being started there.
Could someone tell me the proper way to start an app using sockets? I want the application to become available when the server accepts an incoming socket, so that every client connecting to the listening port set up in the main block (sock.bind(("", port)) and sock.listen(128)) gets a new socket and access to the application.
Edit: I'm adding my proxy class:
class ProxyHandler(tornado.web.RequestHandler):
    SUPPORTED_METHODS = ['GET', 'POST']

    def data_received(self, chunk):
        pass

    def compute_etag(self):
        return None  # disable tornado Etag

    def handle_response(self, response):
        if response.error and not isinstance(response.error, tornado.httpclient.HTTPError):
            self.set_status(500)
            self.write('Internal server error:\n' + str(response.error))
        else:
            self.set_status(response.code, response.reason)
            self._headers = tornado.httputil.HTTPHeaders()  # clear tornado default header
            for header, v in response.headers.get_all():
                if header not in ('Content-Length', 'Transfer-Encoding', 'Content-Encoding', 'Connection'):
                    self.add_header(header, v)  # some headers appear multiple times, eg 'Set-Cookie'
            secured_page = False
            for page in secure_pages:
                if page in self.request.uri:
                    secured_page = True
                    self.set_header('Content-Length', len(response.body))
                    self.write(response.body)
                    break
            if response.body and not secured_page:
                c.execute('SELECT filter_name FROM filters WHERE filter_type=1')
                tags = c.fetchall()
                soup = BeautifulSoup(response.body, 'html.parser')
                for row in tags:
                    catched_tags = soup.find_all(str(row[0]))
                    if catched_tags:
                        print 'catched: %s of <%s> tags' % (len(catched_tags), str(row[0]))
                        for tag in catched_tags:
                            tag.extract()
                new_body = str(soup)
                self.set_header('Content-Length', len(new_body))
                self.write(new_body)
        self.finish()

    @tornado.web.asynchronous
    def get(self):
        logger.debug('Handle %s request to %s', self.request.method, self.request.uri)
        body = self.request.body
        if not body:
            body = None
        try:
            if 'Proxy-Connection' in self.request.headers:
                del self.request.headers['Proxy-Connection']
            c.execute('SELECT filter_name FROM filters WHERE filter_type=2')
            urls = c.fetchall()
            for url in urls:
                if url[0] in self.request.path:
                    self.set_status(403)
                    self.finish()
                    return
            fetch_request(self.request.uri, self.handle_response,
                          method=self.request.method, body=body, headers=self.request.headers,
                          follow_redirects=False, allow_nonstandard_methods=True)
        except tornado.httpclient.HTTPError as e:
            if hasattr(e, 'response') and e.response:
                self.handle_response(e.response)
            else:
                self.set_status(500)
                self.write('Internal server error:\n' + str(e))
                self.finish()

    @tornado.web.asynchronous
    def post(self):
        return self.get()
And my urls for the application:
urls = [
    url(r"/admin/$", mainHandlers.MainHandler),
    url(r"/admin/delete_filter/", mainHandlers.DataDeleteHandler),
    url(r"/admin/filters/$", mainHandlers.DataGetter),
    url(r"/admin/new_filter/$", mainHandlers.FormHandler),
    url(r"/admin/stats/$", mainHandlers.StatsTableHandler),
    url(r"/admin/stats/query/$", mainHandlers.AjaxStatsGetHandler),
    url(r"/static/", StaticFileHandler, dict(path=settings['static_path'])),
    url(r'.*', myProxy.ProxyHandler),
]
It says the port is already in use because it is. You're listening on port 8000 at least twice: once in the __main__ block when you call sock.listen, and again in the connection_ready handler when you call app.listen() (which creates another socket and tries to bind it to port 8000). You need to remove the app.listen() line, but I don't understand what you're trying to do well enough to say what you should do instead.
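The error itself can be reproduced with the stdlib alone: two sockets cannot bind the same listening port, which is exactly what happens when app.listen(8000) runs after the manual sock.bind(). A minimal demonstration:

```python
import socket

# First socket binds and listens, as the __main__ block does.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
first.listen(128)
port = first.getsockname()[1]

# Second socket tries the same port, as app.listen() effectively does.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
except OSError as e:
    print("bind failed:", e)          # EADDRINUSE -- [Errno 98] on Linux
finally:
    second.close()
    first.close()
```

SO_REUSEADDR does not change this picture for a port that another socket is actively listening on; it only relaxes the rules around sockets left in TIME_WAIT.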
If you start the app on Windows, you may have to wait for the firewall to unblock it. On Windows it is safe to assume that if an application occupies a port, that port is blocked for use by other processes that might listen for packets not intended for them.
I've rewritten my proxy in pure Python on sockets. I'm not using URLs now and only handle the responses from the remote addresses. I'm not using any framework.

Python sockets; sending from client receiving on server

I am trying to send messages over TCP/IP, all on the host machine. This works, although for some reason the socket needs to be re-instantiated for every new message on the client side only. For example, here is a basic client that sends three separate messages:
import socket

host = '127.0.0.1'

class Client:
    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def connect(self):
        self.sock.connect((host, 12347))

    def send(self, message):
        self.sock.sendall(message)

    def close(self):
        self.sock.close()

if __name__ == "__main__":
    message1 = "I am message 1"
    message2 = "I am message 2"
    message3 = "I am message 3"
    #exp = Client()
    #exp.connect()
    for i in range(0, 3):
        try:
            exp = Client()
            exp.connect()
            if i == 0:
                txt = message1
            elif i == 1:
                txt = message2
            elif i == 2:
                txt = message3
            exp.send(txt)
            exp.close()
            print i
            exp.send(txt)
        except:
            pass
and the server that receives:
#!/usr/bin/env python
import socket

class communication:
    def __init__(self):
        try:
            host = '127.0.0.1'
            self.Server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.Server.bind((host, 12347))
            self.Server.listen(1)
        finally:
            print "setup finished"

    def recieve(self):
        (connection, client_address) = self.Server.accept()
        data = connection.recv(128)
        return data

    def close(self):
        self.server.close()

if __name__ == "__main__":
    exp = communication()
    while True:
        try:
            (connection, client_address) = exp.Server.accept()
            message = connection.recv(128)
        finally:
            print message
        if message == "I am message 3":
            exp.close()
You see how I re-instantiate the Client class in each iteration of the for loop. This seems necessary for sending messages 2 and 3: if the socket is instantiated only once at the start of the main code, along with the connect() call, then the server hangs on recv() after the first message has been sent.
I can't understand why this happens, while the socket only needs to be set up once on the server side. Am I doing something wrong, or is this normal?
Thanks!
It's even worse than you think. Take a look at your server code. exp.Server.accept() accepts a connection from the client, but your recieve() method ignores that connection completely and does a second self.Server.accept(). You ignore half of your connections!
Next, your server only does a single recv. Even if you tried to send more messages on the connection, the server would ignore them.
But you can't just add a recv loop. Your client and server need some way to mark message boundaries so the server knows how to pull them out. Some text-based systems use a newline. Others send a message size or a fixed-size header that the server can read. HTTP, for example, uses a combination of newlines and a data count.
If you want to learn sockets from the ground up just know that they are complicated and you'll need to study. There are lots of ways to build a server and you'll need to understand the trade-offs. Otherwise, there are many frameworks from XMLRPC to zeromq that do some of the heavy lifting for you.
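As a sketch of the framing idea the answer describes: a hypothetical 4-byte length prefix in front of each message, demonstrated over socketpair() so no server setup is needed:

```python
import socket
import struct

def send_msg(sock, payload):
    # Prefix each message with its length as a 4-byte big-endian integer.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n):
    # recv() may return fewer bytes than requested; loop until we have n.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    # Read the 4-byte header first, then exactly that many payload bytes.
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo on a connected pair of sockets -- no listening server required.
a, b = socket.socketpair()
for text in (b"I am message 1", b"I am message 2", b"I am message 3"):
    send_msg(a, text)
print(recv_msg(b), recv_msg(b), recv_msg(b))
a.close()
b.close()
```

With framing like this, one connection can carry any number of messages, so the client no longer needs to reconnect per message.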

Python BaseHTTPServer - prevent errors ("connection reset by peer," etc.) from ruining curses display

I have a Python script that implements a built-in web server:
class http_server(BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        # prevent the BaseHTTPServer log messages, we use our own logging instead
        return

    def do_GET(self):
        log("responding to http request from %s: %s" % (self.client_address[0], self.path))
        text_string = "Hello World"
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.send_header("Content-Length", len(text_string))
        self.end_headers()
        self.wfile.write(text_string)

def start_server():
    try:
        httpd = SocketServer.TCPServer(("", 8888), http_server)
        httpd.serve_forever()
    except Exception as e:
        cleanup(None, None)
        print "Error starting internal http server: %s" % repr(e)
        sys.exit(1)

# main script
# does various things, initializes curses, etc.
start_server()
This works fine, however the problem is that the python script also implements an on-screen status display using curses running in another thread. When an error occurs in the HTTP server (e.g. "connection reset by peer", etc.) the python traceback indicating said error gets splattered across my nice curses display.
I have tried adding try...except blocks around the do_GET portion of my BaseHTTPRequestHandler class, but that had no effect.
How can I silence Python traceback messages in this code?
Try overriding the handle_error method of BaseServer:
class MyServer(SocketServer.TCPServer):
    def handle_error(self, request, client_address):
        pass
Then use MyServer in your start_server function.
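A self-contained sketch of that suggestion (Python 3 socketserver here, and the class names are illustrative): the overridden handle_error swallows per-request exceptions instead of printing a traceback over the curses display:

```python
import socketserver

class QuietTCPServer(socketserver.TCPServer):
    allow_reuse_address = True

    def handle_error(self, request, client_address):
        # The default implementation prints a full traceback to stderr,
        # which is what splatters across the curses screen. Doing nothing
        # here (or logging to a file instead) keeps the display clean.
        pass

class ExplodingHandler(socketserver.BaseRequestHandler):
    def handle(self):
        raise RuntimeError("boom")  # simulated per-request failure

server = QuietTCPServer(("127.0.0.1", 0), ExplodingHandler)
# server.serve_forever() would now survive handler exceptions silently.
server.server_close()
```

handle_error is called by the server loop for any exception raised while processing a request, so serve_forever() keeps running and nothing reaches stderr.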
