Stop processing Flask route if request aborted - python

I have a flask REST endpoint that does some cpu-intensive image processing and takes a few seconds to return. Often, this endpoint gets called, then aborted by the client. In these situations I would like to cancel processing. How can I do this in flask?
In node.js, I would do something like:
req.on('close', function () {
    // some handler
});
I was expecting flask to have something similar, or a synchronous method (request.isClosed()) that I could check at certain points during my processing and return if it's closed, but I can't find one.
I thought about sending something to test that the connection is still open, and catching the exception if it fails, but it seems Flask buffers all outputs so the exception isn't thrown until the processing completes and tries to return the result:
An established connection was aborted by the software in your host machine
How can I cancel my processing half way through if the client aborts their request?

There is a potentially... hacky solution to your problem. Flask has the ability to stream content back to the user via a generator. The hacky part would be streaming blank data as a check to see if the connection is still open, and then, when your content is finished, the generator could yield the actual image. Your generator could check to see if processing is done and yield None or "" or whatever if it's not finished.
from flask import Response

@app.route('/image')
def generate_large_image():
    def generate():
        while True:
            if not processing_finished():
                yield ""
            else:
                yield get_image()
                return  # stop streaming once the image has been sent
    return Response(generate(), mimetype='image/jpeg')
I don't know what exception you'll get if the client closes the connection, but I'm willing to bet it's error: [Errno 32] Broken pipe.
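To make that idea a bit more concrete, here is a rough, untested sketch that runs the image work in a background thread and uses the generator only for keep-alives; process_image() and the cancel event are hypothetical stand-ins, not something Flask provides. Note that some WSGI servers skip empty chunks, so the disconnect may only be noticed once real data is written.
import threading
import time
from flask import Flask, Response

app = Flask(__name__)

@app.route('/image')
def generate_large_image():
    cancel = threading.Event()
    result = {}

    def work():
        # Hypothetical CPU-heavy job that checks the cancel flag periodically.
        result['image'] = process_image(cancel)

    worker = threading.Thread(target=work)
    worker.start()

    def generate():
        try:
            while worker.is_alive():
                yield b''          # keep-alive chunk while the work runs
                time.sleep(1)
            yield result['image']
        except GeneratorExit:
            # The server closes the generator when the client disconnects.
            cancel.set()
            raise

    return Response(generate(), mimetype='image/jpeg')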

As far as I know, you can't tell whether the connection was closed by the client during execution, because the server doesn't test whether the connection is still open while the request is being processed. You can, however, create a custom request_handler in your Flask application to detect, after the request has been processed, whether the connection was "dropped".
For example:
from flask import Flask
from time import sleep
from werkzeug.serving import WSGIRequestHandler

app = Flask(__name__)

class CustomRequestHandler(WSGIRequestHandler):
    def connection_dropped(self, error, environ=None):
        print 'dropped, but it is called at the end of the execution :('

@app.route("/")
def hello():
    for i in xrange(3):
        print i
        sleep(1)
    return "Hello World!"

if __name__ == "__main__":
    app.run(debug=True, request_handler=CustomRequestHandler)
Maybe you want to investigate a bit more: since your custom request_handler is created when a request comes in, you could spawn a thread in its __init__ that checks the status of the connection every second and, when it detects that the connection is closed (check this thread), stops the image processing. But I think this is a bit complicated :(.
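If you did want to try that route, a very rough and untested sketch might look something like this; it relies on the fact that, once the request body has been read, the client socket normally only becomes readable again when the client closes its end, and cancel_processing() is a hypothetical hook for your own code:
import select
import socket
import threading
from werkzeug.serving import WSGIRequestHandler

class WatchdogRequestHandler(WSGIRequestHandler):
    def handle(self):
        stop = threading.Event()

        def watch():
            while not stop.is_set():
                # Poll the client socket once a second; a peeked recv()
                # returning no data means the client closed the connection.
                readable, _, _ = select.select([self.connection], [], [], 1.0)
                if readable and not self.connection.recv(1, socket.MSG_PEEK):
                    cancel_processing()  # hypothetical: tell your worker to stop
                    break

        watcher = threading.Thread(target=watch)
        watcher.daemon = True
        watcher.start()
        try:
            WSGIRequestHandler.handle(self)
        finally:
            stop.set()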

I was attempting to do this same thing in a project, and I found that with my stack of uWSGI and nginx, when a streaming response was interrupted on the client's end, the following errors occurred:
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request
uwsgi_response_write_body_do(): Broken pipe [core/writer.c line 404] during GET
IOError: write error
and I could just use a regular old try and except like below:
try:
    for chunk in iter(process.stdout.readline, ''):
        yield chunk
    process.wait()
except:
    app.logger.debug('client disconnected, killing process')
    process.terminate()
    process.wait()
This gave me:
Instant streaming of data using Flask's generator functionality
No zombie processes on cancelled connection
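For reference, a fuller version of that pattern might look roughly like the following; 'some_slow_command' is a placeholder for whatever produces the output, and depending on the server the disconnect can surface either as a write error at the yield (as above) or as the generator being closed, so both are handled:
import subprocess
from flask import Flask, Response

app = Flask(__name__)

@app.route('/stream')
def stream():
    # Placeholder command; replace with whatever generates your data.
    process = subprocess.Popen(['some_slow_command'],
                               stdout=subprocess.PIPE,
                               universal_newlines=True)

    def generate():
        try:
            for chunk in iter(process.stdout.readline, ''):
                yield chunk
            process.wait()
        except (GeneratorExit, IOError):
            # Raised at the yield when the client goes away; clean up the child.
            app.logger.debug('client disconnected, killing process')
            process.terminate()
            process.wait()

    return Response(generate(), mimetype='text/plain')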

Related

Serve single HTTP request without blocking

I am writing a script which involves opening an HTTP server and serving a single file. However, the request for this file is also instigated from further down the script. Currently, I am doing it like this:
import SimpleHTTPServer
import SocketServer
from threading import Thread
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", 8000), Handler)
Thread(target=httpd.handle_request).start()
This works to handle a single request, but also creates some issues with keyboard input. What is the most efficient, non-blocking way to serve a single HTTP request? Ideally the server would close and release the port upon the completion of the request.
You can try many workarounds, but Flask is the way to go. It is not the simplest or fastest solution, but it is the most reliable one.
Example of serving a single file with Flask:
from flask import Flask, send_file, render_template

app = Flask(__name__)

@app.route('/file-downloads/')
def file_downloads():
    try:
        return render_template('downloads.html')
    except Exception as e:
        return str(e)

app.run()
For a non-blocking solution you can do this instead of app.run():
Thread(target=app.run).start()
But I don't recommend running the Flask app in a thread because of the GIL.
You can use the handle_request method to handle a single request, and if you use the server inside a with statement then Python will close the server and release the port when the statement exits. (Alternatively, you can use the server_close method to close the server and release the port if you want, but the with statement provides better error handling.) If you do all of that in a separate thread, you should get the behaviour you are looking for.
Using Python 3:
from threading import Thread
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_one_request():
    with HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler) as server:
        server.handle_request()

thread = Thread(target=serve_one_request)
thread.start()
# Do other work
thread.join()
I'm not sure if this will fix the issues with keyboard input you mentioned. If you elaborate on that some more I will take a look.
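If you are on a Python version where HTTPServer is not a context manager (the with form needs 3.6+), a sketch of the server_close() alternative mentioned above could look like this:
from threading import Thread
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_one_request():
    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    try:
        server.handle_request()
    finally:
        server.server_close()  # release the port even if handling failed

thread = Thread(target=serve_one_request)
thread.start()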

Python tornado AsyncHttpClient does not send any request

Below is a snippet from the tornado documentation.
def handle_response(response):
    if response.error:
        print("Error: %s" % response.error)
    else:
        print(response.body)

http_client = AsyncHTTPClient()
http_client.fetch("http://www.google.com/", handle_response)
But this does not print anything to the console. I tried adding a time.sleep at the end but even then nothing prints.
Also, it does not send any request to my server when I change the url above to point to my server.
tornado.httpclient.HTTPClient works fine though.
I am on a MacBook with Python 3.6.1.
Tornado is an asynchronous framework where all tasks are scheduled by a single event loop called the IOLoop. At the end of your program, put:
import tornado.ioloop
tornado.ioloop.IOLoop.current().start()
That will start the loop running and allow the AsyncHTTPClient to fetch the URL.
The IOLoop runs forever, so you need to implement some logic that determines when to call IOLoop.stop(). In your example program, call IOLoop.stop() at the bottom of handle_response. In a real HTTP client program, the loop should run until all work is complete and the program is ready to exit.
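Putting that together, a minimal sketch of the whole program, with the loop started and then stopped from the callback, might look like this (assuming a Tornado version that still accepts a callback argument to fetch(), i.e. before 6.0):
import tornado.ioloop
from tornado.httpclient import AsyncHTTPClient

def handle_response(response):
    if response.error:
        print("Error: %s" % response.error)
    else:
        print(response.body)
    tornado.ioloop.IOLoop.current().stop()  # all work done, let the program exit

http_client = AsyncHTTPClient()
http_client.fetch("http://www.google.com/", handle_response)
tornado.ioloop.IOLoop.current().start()     # runs until stop() is called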

How to abort a client request

I have this code in my application that use Django framework:
import os, time

def get(self, request, id=None):
    pid = os.fork()
    if pid == 0:
        self.run()
        time.sleep(600)
    else:
        time.sleep(20)
    return write_response()
I create a child process that will produce the data to be returned. I really need to do the work in a child process; the run function uses an external piece of software to calculate the data, and if I don't create a new process only the first request succeeds (a constraint of the external software).
The child process takes about 10 seconds to do the work. The parent waits 20 seconds and then returns a response using the data calculated by the child. For the client everything works, but on the server I get an exception (Broken pipe).
When the child continues executing, the client has already closed the socket, so an exception is raised. What should I do to fix my problem?
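For what it's worth, the broken pipe usually means the child is falling off the end of get() and letting the framework try to write a second response on a socket the parent has already answered. A common way around that is to have the child call os._exit() so it never returns up the Django stack; a rough sketch, keeping the poster's own run() and write_response() names:
import os, time

def get(self, request, id=None):
    pid = os.fork()
    if pid == 0:
        # Child: do the work, then exit immediately so it never tries to
        # send its own HTTP response over the shared client socket.
        try:
            self.run()
        finally:
            os._exit(0)
    # Parent: give the child time to produce the data, then respond.
    time.sleep(20)
    return write_response()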

Responding to client disconnects using bottle and gevent.wsgi?

I have a small asynchronous server implemented using bottle and gevent.wsgi. There is a routine used to implement long poll that looks pretty much like the "Event Callbacks" example in the bottle documentation:
def worker(body):
    msg = msgbus.recv()
    body.put(msg)
    body.put(StopIteration)

@route('/poll')
def poll():
    body = gevent.queue.Queue()
    worker = gevent.spawn(worker, body)
    return body
Here, msgbus is a ZMQ sub socket.
This all works fine, but if a client breaks the connection while worker is blocked on msgbus.recv(), that greenlet task will hang around "forever" (well, until a message is received), and will only find out about the disconnected client when it attempts to send a response.
I can use msgbus.poll(timeout=something) if I don't want to block forever waiting for ipc messages, but I still can't detect a client disconnect.
What I want to do is get something like a reference to the client socket so that I can use it in some kind of select or poll loop, or get some sort of asynchronous notification inside my greenlet, but I'm not sure how to accomplish either of these things with these frameworks (bottle and gevent).
Is there a way to get notified of client disconnects?
Aha! The wsgi.input variable, at least under gevent.wsgi, has an rfile member that is a file-like object. This doesn't appear to be required by the WSGI spec, so it might not work with other servers.
With this I was able to modify my code to look something like:
def worker(body, rfile):
    poll = zmq.Poller()
    poll.register(msgbus)
    poll.register(rfile, zmq.POLLIN)

    while True:
        events = dict(poll.poll())

        if rfile.fileno() in events:
            # client disconnect!
            break

        if msgbus in events:
            msg = msgbus.recv()
            body.put(msg)
            break

    body.put(StopIteration)

@route('/poll')
def poll():
    rfile = bottle.request.environ['wsgi.input'].rfile
    body = gevent.queue.Queue()
    worker = gevent.spawn(worker, body, rfile)
    return body
And this works great... except on OpenShift, where you will have to use the alternate frontend on port 8000 with websockets support.

Prevent a request getting closed in python SocketServer

I'm using a Python socket server to which I connect from Android and periodically send messages.
The problem is that the request is closed after every message, and I need it to remain open until the Android side decides to close it.
Currently it looks like this:
class SingleTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        try:
            while True:
                message = self.rfile.readline().strip()  # clip input at 1Kb
                my_event = pygame.event.Event(USEREVENT, {'control': message})
                pygame.event.post(my_event)
        except KeyboardInterrupt:
            sys.exit(0)
        finally:
            self.request.close()
I've solved this by adding a while True loop in my handle() definition; however, this was criticized as a bad solution, and I was told that the right way to go is to override the process_request and shutdown methods.
Attempted solution
I removed the while loop from the code, connected to the server locally with netcat, sent a message, and waited to see when the connection would be closed.
I wanted to find out which method the connection is closed after, to figure out what I have to override.
I stepped through serve_forever() with the debugger and followed it to this part of the code:
> /usr/lib/python2.7/threading.py(495)start()
494 try:
--> 495 _start_new_thread(self.__bootstrap, ())
496 except Exception:
After line 495 is executed (I can't step into it) the connection is closed.
I somehow doubt that it's such a hassle to maintain a connection via a socket; that is basically the reason we chose to communicate over a socket in the first place, to have a continuous connection rather than a 'one connection per sent message' system.
Ideas on implementation, or links?
The handle method is called for each client connection, and the connection is closed when it returns. Using a while loop is fine. Exit the loop when the client closes the connection.
Example (Python 3 syntax):
class EchoHandler(socketserver.StreamRequestHandler):
    def setup(self):
        print('{}:{} connected'.format(*self.client_address))

    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data)

    def finish(self):
        print('{}:{} disconnected'.format(*self.client_address))
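For completeness, a sketch of running that handler (the port number here is just an example); a threading server lets each client hold its connection open while others connect:
import socketserver

if __name__ == '__main__':
    # Each client is handled in its own thread, so one long-lived
    # connection does not block the others.
    with socketserver.ThreadingTCPServer(('0.0.0.0', 9999), EchoHandler) as server:
        server.serve_forever()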
