I have a strange problem: the response to a request in SimpleHTTPRequestHandler is blocked if the request handler spawns a new process via subprocess.Popen. But shouldn't Popen be asynchronous?
The behavior is reproducible with the following files, using Python 2.6.7 on my OS X machine as well as ActivePython 2.6 on a SLES machine:
webserver.py:
#!/usr/bin/env python
import SimpleHTTPServer
import SocketServer
import subprocess
import uuid

class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_POST(self):
        print 'in do_POST'
        uuid_gen = str(uuid.uuid1())
        subprocess.Popen(['python', 'sleep.py'])
        print 'subprocess spawned'
        return self.wfile.write(uuid_gen)

Handler = MyRequestHandler
server = SocketServer.TCPServer(('127.0.0.1', 9019), Handler)
server.serve_forever()
sleep.py:
import time
time.sleep(10)
When I start the webserver and then send a curl POST request to localhost:9019, the webserver instantly prints:
$ python2.6 webserver.py
in do_POST
subprocess spawned
but the console where I run the curl request shows the following behavior:
$curl -X POST http://127.0.0.1:9019/
<wait ~10 seconds>
cd8ee24a-0ad7-11e3-a361-34159e02ccec
When I run the same setup with Python 2.7, the answer arrives on the curl side instantly.
How can this happen, since Popen doesn't seem to block the prints, only the return?
The problem is that I'm bound to Python 2.6 for legacy reasons, so what would be the best workaround for this behavior, to let the request return instantly?
OK, so after Nikhil referred me to this question https://stackoverflow.com/questions/3973789/... I figured out that I need to call self.send_response(200), self.send_header("Content-Length", str(len(uuid_gen))) and self.end_headers().
This means the working code looks like this:
#!/usr/bin/env python
import SimpleHTTPServer
import SocketServer
import subprocess
import uuid

class MyRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_POST(self):
        print 'in do_POST'
        uuid_gen = str(uuid.uuid1())
        subprocess.Popen(['python', 'sleep.py'])
        print 'subprocess spawned'
        self.send_response(200)
        self.send_header("Content-Length", str(len(uuid_gen)))
        self.end_headers()
        self.wfile.write(uuid_gen)

Handler = MyRequestHandler
server = SocketServer.TCPServer(('127.0.0.1', 9019), Handler)
server.serve_forever()
The problem seemed to be that the HTTP connection is left open for pipelining: without a Content-Length header, curl has to wait for the connection to close before it knows the response is complete. I'm still not quite sure why the response arrives exactly when the Popen process finishes; my best guess is that the spawned child inherits the server's open socket file descriptor, so the connection cannot fully close until sleep.py exits.
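If that guess is right, another workaround (a sketch using the same Python 2.6 Popen API, untested here) would be to keep the child from inheriting the socket in the first place by passing close_fds=True:

# close_fds=True closes all file descriptors except 0/1/2 in the child,
# so the spawned process no longer holds the client socket open
subprocess.Popen(['python', 'sleep.py'], close_fds=True)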
Related
Why does subprocess.call("xdotool key XF86AudioPlay") in do_GET only last for the duration of my http request?
I am trying to play/pause Spotify on my local machine through an HTTP request, i.e. when a request is received, emulate a keypress.
When I hit localhost:8002, the music plays for ~200ms, but as soon as the request finishes, it stops.
import http.server
import socketserver
import subprocess

MYCMD = "xdotool key XF86AudioPlay"
PORT = 8002

class MyRequestHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send the html message
        subprocess.call(MYCMD, shell=True)
        self.wfile.write(b'works')
        return

Handler = MyRequestHandler

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("serving at port", PORT)
    httpd.serve_forever()
This does not really make sense.
Normally subprocess.call() waits for the called process to finish, and in your example it just simulates pressing a key in the server's X session, a key that toggles audio playback.
What I would expect is that you are somehow performing two HTTP requests: the first one starting the audio, the second one stopping it, probably executed about 200 ms apart.
What I propose is to add the following lines before the subprocess call:

with open("/tmp/mylogfile.log", "a") as fout:
    fout.write("call subprocess\n")

and the following two lines after the subprocess call:

with open("/tmp/mylogfile.log", "a") as fout:
    fout.write("called subprocess\n")

I am quite sure you will see two requests instead of one.
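A variant of that idea (just a sketch; the log path is arbitrary) is to timestamp each do_GET call and log the request path, which would show both how many requests arrive and what triggers them, e.g. a stray /favicon.ico fetch from the browser:

import time
import subprocess
import http.server

MYCMD = "xdotool key XF86AudioPlay"

class MyRequestHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        # log wall-clock time and the requested path for every GET
        with open("/tmp/mylogfile.log", "a") as fout:
            fout.write("%.3f GET %s\n" % (time.time(), self.path))
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        subprocess.call(MYCMD, shell=True)
        self.wfile.write(b'works')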
I used this code to run a Python server:
import os
from http.server import SimpleHTTPRequestHandler, HTTPServer
os.chdir('c:/users/owner/desktop/tom/tomsEnyo2.5-May27')
server_address = ('', 8000)
httpd = HTTPServer(server_address, SimpleHTTPRequestHandler)
httpd.serve_forever()
How to make it stop?
Your question is ambiguous. If you're running the server from a shell, i.e. python myscript.py, simply press Ctrl+C.
If you want to close it elegantly from code, you must decide on some condition, point, or exception on which to call shutdown. You can add a try/except block and call httpd.shutdown(), since HTTPServer itself is a SocketServer.TCPServer subclass:
The first class, HTTPServer, is a SocketServer.TCPServer subclass, and
therefore implements the SocketServer.BaseServer interface. It creates
and listens at the HTTP socket, dispatching the requests to a handler.
So BaseServer has a shutdown() method, and hence HTTPServer, being a subclass, has it too.
for example:
import os
from http.server import SimpleHTTPRequestHandler, HTTPServer

os.chdir('c:/users/owner/desktop/tom/tomsEnyo2.5-May27')
server_address = ('', 8000)
try:
    httpd = HTTPServer(server_address, SimpleHTTPRequestHandler)
    httpd.serve_forever()
except Exception:
    httpd.shutdown()
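One caveat: shutdown() is meant to be called from a different thread while serve_forever() is running; by the time the except block above runs, the serve loop has already stopped, so there is nothing left to shut down. For a plain Ctrl+C exit, a sketch like this (not tied to the asker's paths) is enough:

from http.server import SimpleHTTPRequestHandler, HTTPServer

httpd = HTTPServer(('', 8000), SimpleHTTPRequestHandler)
try:
    httpd.serve_forever()
except KeyboardInterrupt:
    pass  # Ctrl+C lands here instead of printing a traceback
finally:
    httpd.server_close()  # close the listening socket and free the port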
Helpful relevant question -
How do I shutdown an HTTPServer from inside a request handler in Python?
How to stop BaseHTTPServer.serve_forever() in a BaseHTTPRequestHandler subclass?
You can send a SIGTERM signal from the handler thread if you are OK with killing the whole process:

import os
import signal

os.kill(os.getpid(), signal.SIGTERM)
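For context, a minimal handler built around that call might look like this (a sketch; the /kill path is made up, and the response may not be fully flushed before the process dies):

import os
import signal
from http.server import BaseHTTPRequestHandler, HTTPServer

class KillableHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'bye')
        if self.path == '/kill':
            # terminates the entire process, not just the serve loop
            os.kill(os.getpid(), signal.SIGTERM)

HTTPServer(('', 8000), KillableHandler).serve_forever()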
If you need the Python HTTP server in a unit test, it is advisable to run it in a separate thread and stop it from another one, like this:
import unittest
from threading import Thread
from http.server import HTTPServer

class TestWithHTTP(unittest.TestCase):
    """
    My unit test that needs an HTTP server
    NOTE: skeleton code
    """
    def setUp(self):
        # you need to provide the host, port and request handler class
        self.myserver = HTTPServer((host, port), HandlerClass)
        # start HTTP server in another thread
        httpthread = Thread(target=self.myserver.serve_forever)
        httpthread.start()
        # ... any other setup operations ...

    def test_something(self):
        # ... your unit testing code ...
        pass

    def tearDown(self):
        # shut down the server from yet another thread
        killerthread = Thread(target=self.myserver.shutdown)
        killerthread.start()
Just use ^C (Ctrl+C) to shut down the Python server.
So here's the deal: I'm writing a simple lightweight IRC app, hosted locally, that basically does the same job as Xchat but works in your browser, just like Sabnzbd. I display search results in the browser as an HTML table, and an AJAX GET request fired by an on_click event launches the download. I use another AJAX GET request in a one-second loop to poll for download information (status, progress, speed, ETA, etc.). I hit a bump with the simultaneous AJAX requests, since my CGI handler seems to handle only one request at a time: the main thread is busy processing the download while the status requests keep arriving.
Since I had a Django app lying around, I tried implementing this IRC app there, and everything works fine: simultaneous requests are handled properly.
So is there something I have to know about the HTTP handler? Is the basic CGI handler simply unable to deal with simultaneous requests?
I use the following for my CGI IRC app:
from http.server import BaseHTTPRequestHandler, HTTPServer, CGIHTTPRequestHandler
If it's not about theory but about my code, I can gladly post the various Python scripts if that helps.
Digging a little deeper into the documentation:
These four classes process requests synchronously; each request must be completed before the next request can be started.
TL;DR: use a real web server, or handle each request in its own thread, as sketched below.
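For what it's worth, one way around the synchronous behaviour without leaving the standard library is socketserver.ThreadingMixIn (which the self-answer below uses); on Python 3.7+ there is also a ready-made http.server.ThreadingHTTPServer. A minimal sketch, assuming the CGI scripts live under /ircapp:

from http.server import ThreadingHTTPServer, CGIHTTPRequestHandler

# each request gets its own thread, so a long-running download script
# no longer blocks the one-second status polls
CGIHTTPRequestHandler.cgi_directories = ["/ircapp"]
httpd = ThreadingHTTPServer(('localhost', 8020), CGIHTTPRequestHandler)
httpd.serve_forever()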
So, after further research, here's my code, which works:
from http.server import BaseHTTPRequestHandler, HTTPServer, CGIHTTPRequestHandler
from socketserver import ThreadingMixIn
import threading
import cgitb; cgitb.enable()  # This line enables CGI error reporting
import webbrowser

class HTTPRequestHandler(CGIHTTPRequestHandler):
    """Handle requests in a separate thread."""
    def do_GET(self):
        if "shutdown" in self.path:
            self.send_head()
            print("shutdown")
            server.stop()
        else:
            self.send_head()

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    allow_reuse_address = True
    daemon_threads = True

    def shutdown(self):
        self.socket.close()
        HTTPServer.shutdown(self)

class SimpleHttpServer():
    def __init__(self, ip, port):
        self.server = ThreadedHTTPServer((ip, port), HTTPRequestHandler)
        self.status = 1

    def start(self):
        self.server_thread = threading.Thread(target=self.server.serve_forever)
        self.server_thread.daemon = True
        self.server_thread.start()

    def waitForThread(self):
        self.server_thread.join()

    def stop(self):
        self.server.shutdown()
        self.waitForThread()

if __name__ == '__main__':
    HTTPRequestHandler.cgi_directories = ["/", "/ircapp"]
    server = SimpleHttpServer('localhost', 8020)
    print('HTTP Server Running...........')
    webbrowser.open_new_tab('http://localhost:8020/ircapp/search.py')
    server.start()
    server.waitForThread()
Using a very simple BaseHttpServer (code below) I am experiencing the following problem:
Starting the server in the background using ./testserver.py & works fine, and the server responds on port 12121. When I type disown, the server still responds. After closing the terminal, the server stops at the next request, with "Input/output error" in test.log.
Steps to reproduce:
$ ./testserver.py &
$ bg
$ disown
close terminal, send a request to server -> server does not respond.
The only solution I found was to redirect stdout and stderr:
$ ./testserver.py > /dev/null 2>&1
or, as @Daniel stated, to start it the usual way with nohup.
Has anyone experienced this bug before, or is this desired behaviour for an HTTPServer, to crash when no output is possible?
testserver.py
#! /usr/bin/env python
import time
import BaseHTTPServer

HOST_NAME = '0.0.0.0'
PORT_NUMBER = 12121

class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_HEAD(s):
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()

    def do_GET(s):
        """Respond to a GET request."""
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()
        s.wfile.write("MANUAL SERVER SUCCESS")

if __name__ == '__main__':
    server_class = BaseHTTPServer.HTTPServer
    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
    try:
        httpd.serve_forever()
    except Exception as e:
        with open('test.log', 'w') as f:
            f.write(str(e))
You can only write strings to files, not exceptions. And you have to redirect stdout and stderr somewhere, otherwise any output will hang your program. Why don't you use nohup? That's the normal way to start programs without a terminal.
To make this clear: there is no special behavior in HTTPServer. The request handler writes a log line for every request to stderr, so you need a terminal or a redirection to a file.
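If you want the server to survive without any usable stderr at all, one option (a sketch on top of the asker's MyHandler) is to override the handler's log_message method, which is what normally writes each request line to stderr:

class QuietHandler(MyHandler):
    def log_message(self, format, *args):
        # swallow per-request logging instead of writing to stderr,
        # so a dead terminal can no longer cause an I/O error
        pass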
The following code works fine with python.exe but fails with pythonw.exe. I'm using Python 3.1 on Windows 7.
from http.server import BaseHTTPRequestHandler, HTTPServer

class FooHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers['Content-Length'])
        data = self.rfile.read(length)
        print(data)
        self.send_response(200)
        self.send_header('Content-Length', '0')
        self.end_headers()

httpd = HTTPServer(('localhost', 8000), FooHandler)
httpd.serve_forever()
Something goes wrong when I start sending responses: nothing gets written back, and if I try another HTTP connection it won't connect. I also tried using self.wfile, but no luck either.
You are printing to stdout. pythonw.exe doesn't have a stdout, as it's not connected to a terminal. My guess is that this has something to do with it.
Try redirecting stdout to a file, or, quicker, remove the print().
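If you want to keep the print() under pythonw.exe, a minimal sketch (the log file name is arbitrary) is to rebind stdout and stderr at the top of the script:

import sys

# under pythonw.exe there is no console, so give the streams a real file;
# any later print() or traceback then lands in server.log instead of failing
sys.stdout = sys.stderr = open('server.log', 'a', buffering=1)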