I am writing a script which involves opening an HTTP server and serving a single file. However, the request for this file is also initiated from further down the same script. Currently, I am doing it like this:
import SimpleHTTPServer
import SocketServer
from threading import Thread
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", 8000), Handler)
Thread(target=httpd.handle_request).start()
This works to handle a single request, but also creates some issues with keyboard input. What is the most efficient, non-blocking way to serve a single HTTP request? Ideally the server would close and release the port upon the completion of the request.
You can try many workarounds, but Flask is the way to go. It is not the simplest or fastest solution, but it is the most reliable one.
Example of serving a single file with Flask:
from flask import Flask, send_file

app = Flask(__name__)

@app.route('/file-downloads/')
def file_downloads():
    try:
        return send_file('downloads.html')
    except Exception as e:
        return str(e)

app.run()
For a non-blocking solution you can do this instead of app.run():
Thread(target=app.run).start()
But I don't recommend running the Flask app in a thread because of the GIL.
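If the GIL is a concern, a rough alternative sketch is to run the development server in a separate process instead of a thread, replacing the app.run() call above (the terminate/join calls are only illustrative of shutting it down once the download is finished):
from multiprocessing import Process

def run_server():
    app.run()

if __name__ == '__main__':
    server = Process(target=run_server)
    server.start()
    # ... request the file from the main process here ...
    server.terminate()  # stop the server and free the port when finished
    server.join()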
You can use the handle_request method to handle a single request, and if you use the server inside a with statement then Python will close the server and release the port when the statement exits. (Alternatively, you can use the server_close method to close the server and release the port if you want, but the with statement provides better error handling.) If you do all of that in a separate thread, you should get the behaviour you are looking for.
Using Python 3:
from threading import Thread
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_one_request():
    with HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler) as server:
        server.handle_request()

thread = Thread(target=serve_one_request)
thread.start()
# Do other work
thread.join()
I'm not sure if this will fix the issues with keyboard input you mentioned. If you elaborate on that some more I will take a look.
Below is a snippet from the Tornado documentation.
from tornado.httpclient import AsyncHTTPClient

def handle_response(response):
    if response.error:
        print("Error: %s" % response.error)
    else:
        print(response.body)

http_client = AsyncHTTPClient()
http_client.fetch("http://www.google.com/", handle_response)
But this does not print anything to the console. I tried adding a time.sleep at the end but even then nothing prints.
Also, it does not send any request to my server when I change the url above to point to my server.
tornado.httpclient.HTTPClient works fine though.
I am on a MacBook with Python 3.6.1.
Tornado is an asynchronous framework where all tasks are scheduled by a single event loop called the IOLoop. At the end of your program, put:
import tornado.ioloop
tornado.ioloop.IOLoop.current().start()
That will start the loop running and allow the AsyncHTTPClient to fetch the URL.
The IOLoop runs forever, so you need to implement some logic that determines when to call IOLoop.stop(). In your example program, call IOLoop.stop() at the bottom of handle_response. In a real HTTP client program, the loop should run until all work is complete and the program is ready to exit.
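Putting both pieces together, the example would look roughly like this (assuming a Tornado version before 6.0, where fetch still accepts a callback):
from tornado.httpclient import AsyncHTTPClient
import tornado.ioloop

def handle_response(response):
    if response.error:
        print("Error: %s" % response.error)
    else:
        print(response.body)
    # all work is done, so stop the loop and let the program exit
    tornado.ioloop.IOLoop.current().stop()

http_client = AsyncHTTPClient()
http_client.fetch("http://www.google.com/", handle_response)
tornado.ioloop.IOLoop.current().start()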
I want to start a local server and then open a link with a browser from the same python program.
This is what I tried (a very naive and foolish attempt):
from subprocess import call
import webbrowser
call(["python", "-m", "SimpleHTTPServer"]) #This creates a server at port:8000
webbrowser.open_new_tab("some/url")
However, the program never reaches the second statement because call() does not return while the server is running. To open the browser, I need to exit the server, which defeats the purpose of running the server.
Can anyone help me by suggesting a working solution?
You could start your web server in a daemon thread (a Python program exits if only daemon threads are left) and then make your requests from the main thread.
The only problem then is synchronizing your main thread with the server thread, since the HTTP server needs some time to start up and won't handle any requests until then. I am not aware of an easy and clean solution for that, but you could (somewhat hackishly) just pause your main thread for a few seconds and start making requests only after that. Another option is to send requests to the web server from the very beginning and expect them to fail for some amount of time (a sketch of this retry approach follows the example below).
Here is a small sample script with a simple HTTP webserver that serves content from the local file system over TCP on localhost:8080 and a sample request, requesting a file foo.txt from the directory the webserver (and in this case also the script) was started in.
import sys
import requests
import threading
import time
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler

# set up the HTTP server and start it in a separate daemon thread
httpd = HTTPServer(('localhost', 8080), SimpleHTTPRequestHandler)
thread = threading.Thread(target=httpd.serve_forever)
thread.daemon = True

# if startup time is too long we might want to be able to quit the program
try:
    thread.start()
except KeyboardInterrupt:
    httpd.shutdown()
    sys.exit(0)

# wait until the webserver finished starting up (maybe wait longer or shorter...)
time.sleep(5)

# start sending requests
r = requests.get('http://localhost:8080/foo.txt')
print r.status_code
# => 200 (hopefully...)
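For the retry alternative mentioned above, a rough sketch could replace the time.sleep(5) and the single request (the 10-second timeout is arbitrary):
def wait_for_server(url, timeout=10):
    # keep retrying until the server answers or the timeout expires
    deadline = time.time() + timeout
    while True:
        try:
            return requests.get(url)
        except requests.exceptions.ConnectionError:
            if time.time() > deadline:
                raise
            time.sleep(0.1)

r = wait_for_server('http://localhost:8080/foo.txt')
print r.status_code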
I have these methods:
import time
import BaseHTTPServer
import CGIHTTPServer

class dJobs():
    def server(self):
        address = ('127.0.0.1', dConfig.cgiport)
        handler = CGIHTTPServer.CGIHTTPRequestHandler
        handler.cgi_directories = ['/cgi-bin']
        self.logger.info("starting http server on port %s" % str(dConfig.cgiport))
        httpd = BaseHTTPServer.HTTPServer(address, handler)
        httpd.serve_forever()

    def job(self):
        self.runNumber = 0
        while True:
            self.logger.info("Counting: %s" % str(self.runNumber))
            self.runNumber += 1
            time.sleep(1)
I want to run job while waiting for HTTP and CGI requests, handle the requests, and then continue the job method.
Is it possible to do this using gevent (and how), or do I need to use threading?
I.e. I want to run both methods concurrently without creating threads.
This solution seems to work for me:
Import monkey:
from gevent import monkey
monkey.patch_all(thread=False)
Add and run this method:
def run(self):
    jobs = [gevent.spawn(self.server), gevent.spawn(self.job)]
    gevent.joinall(jobs)
Please try it out in your program.
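For reference, here is a small self-contained sketch of the same idea outside the class. It uses gevent's own WSGIServer instead of the CGI server from the question purely to keep the example short, and the names job, app and server are illustrative:
from gevent import monkey
monkey.patch_all(thread=False)

import gevent
from gevent.pywsgi import WSGIServer

def job():
    n = 0
    while True:
        print("Counting: %d" % n)
        n += 1
        gevent.sleep(1)

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

def server():
    WSGIServer(('127.0.0.1', 8080), app).serve_forever()

# run the server and the counting job cooperatively in one process
gevent.joinall([gevent.spawn(server), gevent.spawn(job)])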
If your job is CPU-bound, you should not use Python threads because of the GIL. An alternative is the multiprocessing module.
You could also use uWSGI, which can run CGI scripts as well as background jobs. WSGI is the main feature of uWSGI, and you may want to use that instead of CGI.
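For illustration, a minimal WSGI application (the module name app.py and the port below are just examples) could look like this:
def application(environ, start_response):
    # uWSGI looks for a callable named "application" by default
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from WSGI\n']
It could then be served with something like uwsgi --http :8080 --wsgi-file app.py, although the exact flags depend on your uWSGI setup.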
I have a flask REST endpoint that does some cpu-intensive image processing and takes a few seconds to return. Often, this endpoint gets called, then aborted by the client. In these situations I would like to cancel processing. How can I do this in flask?
In node.js, I would do something like:
req.on('close', function(){
//some handler
});
I was expecting Flask to have something similar, or a synchronous method (request.isClosed()) that I could check at certain points during my processing and return if it's closed, but I can't find one.
I thought about sending something to test that the connection is still open, and catching the exception if it fails, but it seems Flask buffers all outputs so the exception isn't thrown until the processing completes and tries to return the result:
An established connection was aborted by the software in your host machine
How can I cancel my processing half way through if the client aborts their request?
There is a potentially... hacky solution to your problem. Flask has the ability to stream content back to the user via a generator. The hacky part would be streaming blank data as a check to see if the connection is still open and then when your content is finished the generator could produce the actual image. Your generator could check to see if processing is done and return None or "" or whatever if it's not finished.
from flask import Response

@app.route('/image')
def generate_large_image():
    def generate():
        while True:
            if not processing_finished():
                yield ""
            else:
                yield get_image()
                return
    return Response(generate(), mimetype='image/jpeg')
I don't know what exception you'll get if the client closes the connection, but I'm willing to bet it's error: [Errno 32] Broken pipe
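Building on that idea, one way to react to the disconnect inside the generator itself is to catch GeneratorExit, which is raised at the pending yield when a WSGI server such as Werkzeug closes the response iterator after the client goes away. Here processing_finished, get_image and cancel_processing are hypothetical hooks standing in for your own code:
def generate():
    try:
        while not processing_finished():
            yield ""
        yield get_image()
    except GeneratorExit:
        cancel_processing()  # abort the image work when the client disconnects
        raise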
As far as I know, you can't tell whether the connection was closed by the client during execution, because the server does not test whether the connection is still open while it runs. What you can do is create a custom request_handler for your Flask application that detects, after the request has been processed, whether the connection was dropped.
For example:
from flask import Flask
from time import sleep
from werkzeug.serving import WSGIRequestHandler

app = Flask(__name__)

class CustomRequestHandler(WSGIRequestHandler):
    def connection_dropped(self, error, environ=None):
        print 'dropped, but it is called at the end of the execution :('

@app.route("/")
def hello():
    for i in xrange(3):
        print i
        sleep(1)
    return "Hello World!"

if __name__ == "__main__":
    app.run(debug=True, request_handler=CustomRequestHandler)
Maybe you want to investigate a bit more: since your custom request_handler is created when a request comes in, you could start a thread in its __init__ that checks the status of the connection every second and, when it detects that the connection is closed (check this thread), stops the image processing. But I think this is a bit complicated :(.
I was just attempting to do this same thing in a project, and I found that with my stack of uWSGI and nginx, when a streaming response was interrupted on the client's end, the following errors occurred:
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request
uwsgi_response_write_body_do(): Broken pipe [core/writer.c line 404] during GET
IOError: write error
and I could just use a regular old try/except like the one below:
try:
    for chunk in iter(process.stdout.readline, ''):
        yield chunk
    process.wait()
except:
    app.logger.debug('client disconnected, killing process')
    process.terminate()
    process.wait()
This gave me:
Instant streaming of data using Flask's generator functionality
No zombie processes on cancelled connection
I am writing a tool in Python (the platform is Linux); one of the tasks is to capture a live TCP stream and apply a function to each line. Currently I'm using:
import subprocess

proc = subprocess.Popen(['sudo', 'tcpflow', '-C', '-i', interface, '-p', 'src', 'host', ip],
                        stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    do_something(line)
This works quite well (with the appropriate entry in /etc/sudoers), but I would like to avoid calling an external program.
So far I have looked into the following possibilities:
flowgrep: a Python tool which looks just like what I need, BUT it uses pynids internally, which is 7 years old and seems pretty much abandoned. There is no pynids package for my Gentoo system, and it ships with a patched version of libnids which I couldn't compile without further tweaking.
scapy: this is a packet manipulation program/library for Python; I'm not sure if TCP stream reassembly is supported.
pypcap or pylibpcap as wrappers for libpcap. Again, libpcap is for packet capturing, where I need stream reassembly, which is not possible according to this question.
Before I dive deeper into any of these libraries, I would like to know if someone has a working code snippet (this seems like a rather common problem). I'd also be grateful if someone can give advice about the right way to go.
Thanks
Jon Oberheide has led efforts to maintain pynids, which is fairly up to date at:
http://jon.oberheide.org/pynids/
So, this might permit you to further explore flowgrep. Pynids itself handles stream reconstruction rather elegantly. See http://monkey.org/~jose/presentations/pysniff04.d/ for some good examples.
Just as a follow-up: I abandoned the idea of monitoring the stream on the TCP layer. Instead I wrote a proxy in Python and let the connection I want to monitor (an HTTP session) go through this proxy. The result is more stable and does not need root privileges to run. This solution depends on pymiproxy.
This goes into a standalone program, e.g. helper_proxy.py
from multiprocessing.connection import Listener
import StringIO
from httplib import HTTPResponse
import threading
import time
from miproxy.proxy import RequestInterceptorPlugin, ResponseInterceptorPlugin, AsyncMitmProxy

class FakeSocket(StringIO.StringIO):
    def makefile(self, *args, **kw):
        return self

class Interceptor(RequestInterceptorPlugin, ResponseInterceptorPlugin):
    conn = None

    def do_request(self, data):
        # do whatever you need with the sent data here, I'm only interested in responses
        return data

    def do_response(self, data):
        if Interceptor.conn:  # if the listener is connected, send the response to it
            response = HTTPResponse(FakeSocket(data))
            response.begin()
            Interceptor.conn.send(response.read())
        return data

def main():
    proxy = AsyncMitmProxy()
    proxy.register_interceptor(Interceptor)
    ProxyThread = threading.Thread(target=proxy.serve_forever)
    ProxyThread.daemon = True
    ProxyThread.start()
    print "Proxy started."
    address = ('localhost', 6000)  # family is deduced to be 'AF_INET'
    listener = Listener(address, authkey='some_secret_password')
    while True:
        Interceptor.conn = listener.accept()
        print "Accepted Connection from", listener.last_accepted
        try:
            Interceptor.conn.recv()
        except:
            time.sleep(1)
        finally:
            Interceptor.conn.close()

if __name__ == '__main__':
    main()
Start it with python helper_proxy.py. This will create a proxy listening for HTTP connections on port 8080 and for a connection from another Python program on port 6000. Once the other Python program has connected on that port, the helper proxy will send all HTTP replies to it. This way the helper proxy can keep running, keeping the HTTP connection up, while the listener can be restarted for debugging.
Here is how the listener works, e.g. listener.py:
from multiprocessing.connection import Client

def main():
    address = ('localhost', 6000)
    conn = Client(address, authkey='some_secret_password')
    while True:
        print conn.recv()

if __name__ == '__main__':
    main()
This will just print all the replies. Now point your browser to the proxy running on port 8080 and establish the HTTP connection you want to monitor.