Using a very simple BaseHTTPServer (code below) I am experiencing the following problem:
Starting the server in the background using ./testserver.py & works fine, and the server responds on port 12121. When I type disown, the server still responds. After closing the terminal, the server stops after the next request, leaving an Input/output error in test.log.
Steps to reproduce:
$ ./testserver &
$ bg
$ disown
Close the terminal, then send a request to the server -> the server does not respond.
The only solution I found was to redirect stdout and stderr:
$ ./testserver > /dev/null 2>&1
or, as @Daniel stated, to start it the usual way with nohup.
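For reference, the nohup variant would look like this (nohup redirects output to nohup.out when stdout is a terminal):

$ nohup ./testserver.py &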
Has anyone experienced this bug before, or why would it be desired behaviour for an HTTP server to crash when it cannot write output?
testserver.py
#! /usr/bin/env python
import time
import BaseHTTPServer

HOST_NAME = '0.0.0.0'
PORT_NUMBER = 12121

class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_HEAD(s):
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()

    def do_GET(s):
        """Respond to a GET request."""
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()
        s.wfile.write("MANUAL SERVER SUCCESS")

if __name__ == '__main__':
    server_class = BaseHTTPServer.HTTPServer
    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
    try:
        httpd.serve_forever()
    except Exception as e:
        with open('test.log', 'w') as f:
            f.write(str(e))
You can only write strings to files, not exceptions. And you have to redirect stdout and stderr somewhere, otherwise any output will hang your program. Why don't you use nohup? That's the normal way to start programs without a terminal.
To make this clear: there is no special behaviour in HTTPServer. HTTPServer writes log information to the terminal, so you need a terminal or a redirection to a file.
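If you'd rather make the script itself safe to run without a terminal, another option is to silence the handler's access log, which BaseHTTPRequestHandler writes to sys.stderr. A minimal sketch (the QuietHandler name is mine, not from the original code):

class QuietHandler(MyHandler):
    # BaseHTTPRequestHandler.log_message() writes the access log to
    # sys.stderr; overriding it removes the dependency on a terminal.
    def log_message(self, format, *args):
        pass

Pass QuietHandler instead of MyHandler when constructing the server.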
Related
I'm dabbling with Python and HTTP. I have created this simple server script that I run with python cli.py serve. The server starts and works, but the keyboard interrupt Ctrl+C only gets through on the next page refresh, not when I actually press Ctrl+C in the terminal. Any fix for this? Note: it works immediately when no request has come in yet.
from http.server import HTTPServer, BaseHTTPRequestHandler
import sys, signal

commands = ["serve"]

class Server(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            self.path = "/index.html"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(bytes("Tere, maailm!!", 'utf-8'))

if len(sys.argv) > 1 and sys.argv[1] in commands:
    index = commands.index(sys.argv[1])
    if index == 0:  # serve command
        print("Starting web server. Press Ctrl+C to exit!")
        httpd = HTTPServer(("localhost", 8080), Server)
        try:
            httpd.serve_forever()
        except KeyboardInterrupt:
            print("Shutting down...")
            httpd.socket.close()
            sys.exit(0)
else:
    print("Usage: python tab.py [command], where [command] is any of", commands)
Same problem here...
I discovered ThreadingHTTPServer, and that works (use it instead of HTTPServer).
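For reference, a minimal sketch of that swap, assuming Python 3.7+ (where ThreadingHTTPServer was added):

from http.server import ThreadingHTTPServer

# ThreadingHTTPServer handles each request in its own daemon thread,
# so a browser's keep-alive connection no longer blocks the main
# thread; serve_forever() stays in its poll loop and reacts to
# Ctrl+C right away.
httpd = ThreadingHTTPServer(("localhost", 8080), Server)
httpd.serve_forever()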
I currently have the following code:
from BaseHTTPServer import BaseHTTPRequestHandler
from pathlib import Path
from random import randint
import json
import random

example = 'logs/example.json'

class GetHandler(BaseHTTPRequestHandler):
    # Slurp data from file
    def dummy_data(self):
        json_result = Path(example)
        if json_result.is_file():
            return json.load(open(example))

    # Return data or empty
    def random_selection(self):
        data = self.dummy_data()
        try:
            return random.sample(data, randint(1, len(data)+50))
        # Purposefully introduce entropy
        except ValueError:
            return ''

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(self.random_selection()))
        return

if __name__ == '__main__':
    from BaseHTTPServer import HTTPServer
    server = HTTPServer(('localhost', 8080), GetHandler)
    print 'Starting server at http://127.0.0.1:8080'
    server.serve_forever()
I have patched /etc/hosts as follows:
server-0001 server-0002 server-0003 server-0004 127.0.0.1
I am looking for a way for servers 0001-0004 to redirect to 127.0.0.1:8080, but I am not seeing how. Is this something to do with /etc/resolv.conf? I am using OS X, but I'm hoping any *nix solution will work (unless it involves ipfw, evidently, since we no longer have it like sane people).
It looks like the sanest answer was to patch /etc/hosts and then change the port of BaseHTTPServer to 80. This does not solve the original question, but it is a fair compromise.
Patching Script:
#!/bin/bash
# ensure running as root
if [ "$(id -u)" != "0" ] ; then
    exec sudo "$0" "$@"
fi

# only patch /etc/hosts once (keyed on the closing marker written below)
if ! grep -q "^##- marker" /etc/hosts ; then
    echo -e '\n##+ marker' >> /etc/hosts
    while read server ; do
        [ 'x' != "${server}x" ] && echo -e "127.0.0.1 ${server}.io www.${server}.io" >> /etc/hosts
    done < read/servers.txt
    echo -e '\n##- marker' >> /etc/hosts
fi
Modification to Python
....
def main():
    try:
        # binding to port 80 requires root privileges
        server = HTTPServer(('', 80), GetHandler)
        print '->> Starting server at http://127.0.0.1:80 and all patched hosts'
        server.serve_forever()
    except KeyboardInterrupt:
        print '->> Interrupt received; closing server socket'
        server.socket.close()

if __name__ == '__main__':
    main()
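With the hosts file patched and the server bound to port 80, the setup can be sanity-checked with plain curl (the hostname here assumes servers.txt contains server-0001, giving the ${server}.io entries written by the patch script above):

$ curl http://server-0001.io/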
I have the strange problem that the response to a request in SimpleHTTPRequestHandler is blocked if the request handler spawns a new process with subprocess.Popen. But shouldn't Popen be asynchronous?
The behavior is reproducible by the following files and using Python 2.6.7 on my OS X machine as well as ActivePython 2.6 on a SLES machine:
webserver.py:
#!/usr/bin/env python
import SimpleHTTPServer
import SocketServer
import subprocess
import uuid

class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_POST(self):
        print 'in do_POST'
        uuid_gen = str(uuid.uuid1())
        subprocess.Popen(['python', 'sleep.py'])
        print 'subprocess spawned'
        return self.wfile.write(uuid_gen)

Handler = MyRequestHandler
server = SocketServer.TCPServer(('127.0.0.1', 9019), Handler)
server.serve_forever()
sleep.py:
import time
time.sleep(10)
When I now start the webserver and then send a curl POST request to localhost:9019, the webserver instantly prints:
$python2.6 webserver2.py
in do_POST
subprocess spawned
but the console where I issue the curl request shows the following behavior:
$curl -X POST http://127.0.0.1:9019/
<wait ~10 seconds>
cd8ee24a-0ad7-11e3-a361-34159e02ccec
When I run the same setup with Python 2.7, the answer arrives on the curl side instantly.
How can this happen, given that Popen doesn't seem to block the print statements, just the response?
The problem is that I'm bound to Python 2.6 for legacy reasons, so what would be the best workaround for this behavior to let the request return instantly?
Ok, so after Nikhil referred me to this question https://stackoverflow.com/questions/3973789/... I figured out that I need self.send_response(200), self.send_header("Content-Length", str(len(uuid_gen))) and self.end_headers().
This means the working code looks like this:
#!/usr/bin/env python
import SimpleHTTPServer
import SocketServer
import subprocess
import uuid

class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_POST(self):
        print 'in do_POST'
        uuid_gen = str(uuid.uuid1())
        subprocess.Popen(['python', 'sleep.py'])
        print 'subprocess spawned'
        self.send_response(200)
        self.send_header("Content-Length", str(len(uuid_gen)))
        self.end_headers()
        self.wfile.write(uuid_gen)

Handler = MyRequestHandler
server = SocketServer.TCPServer(('127.0.0.1', 9019), Handler)
server.serve_forever()
The problem seemed to be that the HTTP connection is left open for pipelining. I'm still not quite sure why the response arrives exactly when the Popen process finishes; most likely the spawned child inherits the server's open socket file descriptor, so without a Content-Length header the client keeps waiting for the connection to close, which only happens once the child exits.
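Consistent with that explanation, another workaround (a sketch I have not verified against the original 2.6 setup) is to keep the child from inheriting the socket in the first place:

# close_fds=True closes every file descriptor except 0, 1 and 2 in
# the child (POSIX only in Python 2.6), so the child no longer holds
# the client connection open after the handler returns.
subprocess.Popen(['python', 'sleep.py'], close_fds=True)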
The following code works fine with python.exe but fails with pythonw.exe. I'm using Python 3.1 on Windows 7.
from http.server import BaseHTTPRequestHandler, HTTPServer

class FooHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers['Content-Length'])
        data = self.rfile.read(length)
        print(data)
        self.send_response(200)
        self.send_header('Content-Length', '0')
        self.end_headers()

httpd = HTTPServer(('localhost', 8000), FooHandler)
httpd.serve_forever()
Something goes wrong when I start sending responses: nothing gets written back, and if I try another HTTP connection it won't connect. I also tried using self.wfile, but no luck either.
You are printing to stdout. pythonw.exe doesn't have a stdout, as it's not connected to a terminal. My guess is that this has something to do with it.
Try redirecting stdout to a file or, quicker, removing the print().
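For example, a minimal sketch of the redirection (the log file names are placeholders): under pythonw.exe there is no console to write to, so give stdout and stderr somewhere to go before the first print() runs:

import sys

# pythonw.exe has no console, so point stdout/stderr at files
# (hypothetical paths) before any output is attempted.
sys.stdout = open('server.out.log', 'a')
sys.stderr = open('server.err.log', 'a')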
I have a Python CGI script that checks whether a process is active and starts it if it is not found. The process itself is a webserver (based on web.py). After I ensure that the process is running, I try to make a URL request to it. The idea is to redirect the results of this request to the requester of my CGI script; basically, I want to forward a query to a local webserver that listens on a different port.
This code works fine if I have started the server first from a shell (so findImgServerProcess returns True), not via a CGI request. But if I try to start the process through the CGI script below, I get as far as the urllib2.urlopen call, which throws an exception that the connection is refused.
I don't understand why.
If I print the list of processes (r in findImgServerProcess()), I can see the process is there, so why does urllib2.urlopen throw an exception? I have the apache2 webserver set up to use suexec.
Here's the code:
#!/usr/bin/python
import cgi, cgitb
cgitb.enable()
import os, re
import sys
import subprocess
import urllib2

urlbase = "http://localhost:8080/getimage"
imgserver = "/home/martin/public_html/cgi-bin/stlimgservermirror.py"  # this is based on web.py

def findImgServerProcess():
    r = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT).communicate()[0]
    return re.match(".*%s" % os.path.split(imgserver)[-1], r, re.DOTALL)

def ensureImgServerProcess():
    if findImgServerProcess():
        return True
    os.environ['LD_LIBRARY_PATH'] = '/home/martin/lib'
    fout = open("/dev/null", "w")
    ferr = fout
    subprocess.Popen([sys.executable, imgserver], stdout=fout, stderr=subprocess.STDOUT)
    # now try and find the process
    return findImgServerProcess() != None

def main():
    if not ensureImgServerProcess():
        print "Content-type: text/plain\n"
        print "image server down!"
        return
    form = cgi.FieldStorage()
    if form.has_key("debug"):
        print "Content-type: text/plain\n"
        print os.environ['QUERY_STRING']
    else:
        try:
            img = urllib2.urlopen("%s?%s" % (urlbase, os.environ['QUERY_STRING'])).read()
        except Exception, e:
            print "Content-type: text/plain\n"
            print e
            sys.exit()
        print "Content-type: image/png\n"
        print img

if __name__ == "__main__":
    main()
A possibility is that the subprocess hasn't had a chance to start up completely before you try to connect to it. To test this, try adding a time.sleep(5) before you call urlopen.
This isn't ideal, but it will at least help you figure out whether that's the problem. Down the road, you'll probably want a better way to check that the HTTP daemon is running and to keep it running.
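A sketch of such a check (my own example, not from the original answer): instead of a fixed sleep, poll the port until the freshly spawned server accepts a TCP connection:

import socket
import time

def wait_for_server(host='localhost', port=8080, timeout=10.0):
    """Poll until the image server accepts TCP connections, or give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((host, port))
            return True          # server is up and accepting connections
        except socket.error:
            time.sleep(0.2)      # not ready yet; retry shortly
        finally:
            s.close()
    return False

Calling wait_for_server() right after the Popen in ensureImgServerProcess() would replace the fixed sleep with an upper-bounded wait.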