Making an asynchronous call synchronous in tornado - python

I have the following problem.
I work on a tornado based application server. Most of the code can be synchronous and the web interface does not really use any of the asynchronous facilities of Tornado.
I now have to interface to an (asynchronous) legacy backend for which I use the tornado.iostream interface to send commands. Responses to these commands are sent asynchronously, together with other periodic information, such as status updates.
The code is wrapped in a common interface that is also used for other backends.
What I want to achieve is the following:
# this is executed on initialization
self.stream.read_until_close(self.close, self.read_from_backend)

# this is called whenever data arrives on the input stream
def read_from_backend(self, data):
    if data in self.pending:
        # it means we got a response to a request we sent out
        del self.pending[data]
    else:
        # do something else
        pass

# this sends a request to the backend
def send_to_backend(self, data):
    self.pending[data] = True
    while data in self.pending:
        # of course this does not work
        time.sleep(1)
    return
Of course this does not work: time.sleep(1) blocks the IOLoop, so read_from_backend() never gets a chance to run.
How do I solve this? I want the send_to_backend() to return only when the response is received. Is there a way I can yield control to read_from_backend without yet returning from the method?
Please note that it is difficult to do this in the web layer using @asynchronous and @gen.engine, because that would require a full rewrite of all the requests in our web layer. Is there a way I can implement the same design pattern somewhere else?

I think a good idea may be to look into using gevent. By monkey-patching and using a simple decorator I wrote, you can very easily get nice asynchronous views that are written in a synchronous (blocking) style.
You can reuse most of the code from a previous answer of mine.
(You may not want to use gevent for various reasons, e.g. not wanting it as an extra dependency, but here is how it works.)
Assuming that you've monkey-patched your global process with:
from gevent import monkey; monkey.patch_all()
The above patches threads, sockets, sleep, etc., so they all go through gevent's hub (the hub is to gevent what the IOLoop is to Tornado).
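To illustrate what the patching buys you (my example, assuming gevent is installed): after monkey.patch_all(), a plain time.sleep() yields to gevent's hub instead of blocking the process, so the two greenlets below finish in about one second total rather than two.

from gevent import monkey; monkey.patch_all()

import time
import gevent

def worker(name):
    # the patched sleep yields to the hub; other greenlets keep running
    time.sleep(1)
    print "%s done" % name

gevent.joinall([gevent.spawn(worker, "a"), gevent.spawn(worker, "b")])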
Once patched, and by using the @gasync decorator from my previous answer, your view could simply be:
class MyHandler(tornado.web.RequestHandler):
    @gasync
    def get(self):
        # Parse the input data in some fashion
        data = get_data_from_request()
        # This could be anything using python sockets, urllib ...
        backend_response = send_data_to_backend(data)
        # Write data to HTTP client
        self.write(backend_response)
        # You have to finish the response yourself since it's asynchronous
        self.finish()
I find that gevent's simplicity and "elegance" far outweigh any advantage you would get from writing async code with Tornado's IOLoop.
In my case I had to use legacy code written in a synchronous fashion, so basically gevent was a life saver: all I had to do was monkey-patch and write that decorator, and I could use all that legacy code without any modifications.
I hope this helps.

Related

Make a non-blocking request with requests when running Flask with Gunicorn and Gevent

My Flask application will receive a request, do some processing, and then make a request to a slow external endpoint that takes 5 seconds to respond. It looks like running Gunicorn with Gevent will allow it to handle many of these slow requests at the same time. How can I modify the example below so that the view is non-blocking?
import requests

@app.route('/do', methods=['POST'])
def do():
    result = requests.get('slow api')
    return result.content
gunicorn server:app -k gevent -w 4
If you're deploying your Flask application with gunicorn, it is already non-blocking. If a client is waiting on a response from one of your views, another client can make a request to the same view without a problem. There will be multiple workers to process multiple requests concurrently. No need to change your code for this to work. This also goes for pretty much every Flask deployment option.
First, a bit of background. A blocking socket is the default kind of socket: once you start reading, your app or thread does not regain control until data is actually read or you are disconnected. This is how python-requests operates by default. There is a spin-off called grequests which provides non-blocking reads.
The major mechanical difference is that send, recv, connect and accept
can return without having done anything. You have (of course) a number
of choices. You can check return code and error codes and generally
drive yourself crazy. If you don’t believe me, try it sometime
Source: https://docs.python.org/2/howto/sockets.html
It also goes on to say:
There’s no question that the fastest sockets code uses non-blocking
sockets and select to multiplex them. You can put together something
that will saturate a LAN connection without putting any strain on the
CPU. The trouble is that an app written this way can’t do much of
anything else - it needs to be ready to shuffle bytes around at all
times.
Assuming that your app is actually supposed to do something more than
that, threading is the optimal solution
But do you want to add a whole lot of complexity to your view by having it spawn its own threads? Particularly when gunicorn already has asynchronous workers?
The asynchronous workers available are based on Greenlets (via
Eventlet and Gevent). Greenlets are an implementation of cooperative
multi-threading for Python. In general, an application should be able
to make use of these worker classes with no changes.
and
Some examples of behavior requiring asynchronous workers: Applications
making long blocking calls (Ie, external web services)
So to cut a long story short, don't change anything! Just let it be. If you make any changes at all, let them be to introduce caching. Consider using CacheControl, an extension recommended by the python-requests developers.
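For illustration, a minimal sketch using the CacheControl package (pip install CacheControl); the URL is a placeholder:

import requests
from cachecontrol import CacheControl

# wrap a regular requests session; responses are cached per their HTTP headers
sess = CacheControl(requests.session())
response = sess.get('http://slow-api.example.com/')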
You can use grequests. It allows other greenlets to run while the request is made. It is compatible with the requests library and returns a requests.Response object. The usage is as follows:
import grequests

@app.route('/do', methods=['POST'])
def do():
    result = grequests.map([grequests.get('slow api')])
    return result[0].content
Edit: I've added a test and saw that the time didn't improve with grequests since gunicorn's gevent worker already performs monkey-patching when it is initialized: https://github.com/benoitc/gunicorn/blob/master/gunicorn/workers/ggevent.py#L65

Restructuring program to use asyncio

Currently I have a game that does networking synchronously using the socket module.
It is structured like this:
Server:
while True:
    add_new_clients()
    process_game_state()
    for client in clients:
        send_data(client)
        get_data_from(client)
Client:
connect_to_server()
while True:
    get_data_from_server()
    process_game_state()
    draw_to_screen()
    send_input_to_server()
I want to replace the network code with something that uses a higher-level module than socket, e.g. asyncio or gevent. However, I don't know how to do this.
All the examples I have seen are structured like this:
class Server:
    def handle_client(self, connection):
        while True:
            input = get_input(connection)
            output = process(input)
            send(connection, output)
and then handle_client being called in parallel, using threads or something, for each client that joins.
This works fine if the clients can be handled separately. However, I still want to keep a game-loop type structure, where processing happens in one place - I don't want to have to check collisions etc. once per client. How would I do this?
I assume that you understand how to create a server using a protocol and how the asynchronous paradigm works.
All you need is to break your while loop down into handlers.
Let's look at the server case and the client case:
Server case
A client (server-side)
You need to create what we call a protocol; it will be used to create the server, and serves as a pattern where each instance = one client:
class ClientProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        # Here we have a new player; the transport represents a socket.
        self.transport = transport

    def data_received(self, data):
        packet = decode_packet(data)  # some function for reading packets
        if packet.opcode == CMSG_MOVE:  # opcode is an operation code
            self.player.move(packet[0])  # packet[0] is the first "real" datum
            self.transport.write("OK YOUR MOVE IS ACCEPTED")  # send back confirmation or whatever
OK, now you have an idea of how you can do things with your clients.
Game state
After that, you need to process your game state every X ms:
def processGameState():
    # some code...
    eventLoop.call_later(0.1, processGameState)  # every 100 ms, processGameState is called
At some point you will call processGameState() in your initialization, and it will tell the event loop to call processGameState() again 100 ms later. (This may not be the ideal way to do it, but it's one idea among others.)
As for sending new data to clients, you just need to store a list of ClientProtocol instances and write to each one's transport with a simple for loop, as in the sketch below.
The get_data_from call is gone entirely: we now receive all our data asynchronously in the data_received method of ClientProtocol.
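A minimal broadcast sketch (my illustration; it assumes you append each protocol to a clients list in connection_made and remove it in connection_lost, and encode_packet is a placeholder serializer):

clients = []

def broadcast_game_state(state):
    payload = encode_packet(state)  # placeholder: the inverse of decode_packet
    for client in clients:
        client.transport.write(payload)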
This is a sketch of how you can refactor all your synchronous code into asynchronous code. You may want to add authentication and some other things. If it's your first time with the asynchronous paradigm, I suggest you try it with Twisted rather than asyncio: Twisted is likely to be better documented and explained than asyncio (but asyncio is quite similar to Twisted, so you can switch at any time).
Client case
It's much the same here.
But you may need to pay attention to how you draw and how you manage your input. You may ultimately need one thread to call input handlers and another thread to draw to the screen at a constant framerate.
Conclusion
Thinking asynchronously is pretty difficult at first.
But it's worth the effort.
Note that even my approach may not be the best, or well adapted to games. I just feel I would do it like that; please take your time to test your code and profile it.
Make sure you don't mix synchronous and asynchronous code in the same function without proper handling, using deferToThread (or other helpers); it would destroy your game's performance.
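For illustration (not part of the original answer), a minimal Twisted deferToThread sketch; blocking_query stands in for your synchronous function and must be defined elsewhere:

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def on_result(result):
    print(result)

# runs blocking_query in Twisted's thread pool; the reactor thread stays free
d = deferToThread(blocking_query)
d.addCallback(on_result)
reactor.run()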

Bottle: execute a long running function asynchronously and send an early response to the client?

The Bottle app (behind CherryPy) that I'm working on receives a request for a resource from an HTTP client which results in an execution of a task that can take a few hours to finish. I'd like to send an early HTTP response (e.g., 202 Accepted) and continue processing the task. Is there a way to achieve this without using MQ libraries and using Python/Bottle alone?
For example:
from bottle import HTTPResponse

@route('/task')
def f():
    longRunningTask()  # <-- any way to make this asynchronous?
    return HTTPResponse(status=202)
I know this question is several years old, but I found @ahmed's answer so unbelievably unhelpful that I thought I would at least share how I solved this problem in my application.
All I did was make use of Python's existing threading libraries, as below:
from bottle import HTTPResponse
from threading import Thread

@route('/task')
def f():
    # create a thread that will execute your longRunningTask() function
    task_thread = Thread(target=longRunningTask)
    # daemon threads are killed when the main process exits,
    # so they can't outlive the server
    task_thread.daemon = True
    # launch the thread
    task_thread.start()
    return HTTPResponse(status=202)
Using threads allows you to maintain a consistent response time while still having relatively complex or time-consuming functions.
I used uWSGI, so do make sure you enable threading in your uWSGI application config if that's the way you went.
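For example, an invocation with threads enabled (the module name is a placeholder; without --enable-threads, uWSGI does not initialize the GIL and threads spawned by the application never run):
uwsgi --http :8000 --module myapp:app --enable-threads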

Building an HTTP API for continuously running python process

TL;DR: I have a beautifully crafted, continuously running piece of Python code controlling and reading out a physics experiment. Now I want to add an HTTP API.
I have written a module which controls the hardware using USB. I can script several types of autonomously operating experiments, but I'd like to control my running experiment over the internet. I like the idea of an HTTP API, and have implemented a proof-of-concept using Flask's development server.
The experiment runs as a single process claiming the USB connection and periodically (every 16 ms) all data is read out. This process can write hardware settings and commands, and reads data and command responses.
I have a few problems choosing the 'correct' way to communicate with this process:
- It works if the HTTP server only has a single worker; then I can use Python's multiprocessing.Pipe for communication.
- Using more-or-less low-level sockets (or things like zeromq) should work, even for request/response, but I'd have to implement some sort of protocol: send {'cmd': 'set_voltage', 'value': 900} instead of calling hardware.set_voltage(900) (which I can use in the stand-alone scripts).
- I could use some sort of RPC, but as far as I know they all (SimpleXMLRPCServer, Pyro) use some sort of event loop for the 'server' - in this case the process running the experiment - to process requests. But I can't have an event loop waiting for incoming requests; it should be reading out my hardware!
- I googled around quite a bit, but however I rephrase my question, I end up with Celery as the answer, which is mostly about firing off one job after another, not about communicating with a long-running process.
I'm confused. I can get this to work, but I fear I'll be reinventing a few wheels. I just want to launch my app in the terminal, open a web browser from anywhere, and monitor and control my experiment.
Update: The following code is a basic example of using the module:
from pysparc.muonlab.muonlab_ii import MuonlabII

muonlab = MuonlabII()
muonlab.select_lifetime_measurement()
muonlab.set_pmt1_voltage(900)
muonlab.set_pmt1_threshold(500)

lifetimes = []
while True:
    data = muonlab.read_lifetime_data()
    if data:
        print "Muon decays detected with lifetimes", data
        lifetimes.extend(data)
The module lives at https://github.com/HiSPARC/pysparc/tree/master/pysparc/muonlab.
My current implementation of the HTTP API lives at https://github.com/HiSPARC/pysparc/blob/master/bin/muonlab_with_http_api.
I'm pretty happy with the module (it has lots of tests), but the HTTP API runs on Flask's single-threaded development server (which the documentation and the internet tell me is a bad idea) and passes dictionaries through a Pipe as some sort of IPC. I'd love to be able to do something like this in the above script:
while True:
    data = muonlab.read_lifetime_data()
    if data:
        print "Muon decays detected with lifetimes", data
        lifetimes.extend(data)
    process_remote_requests()
where process_remote_requests is a fairly short function to call the muonlab instance or return data. Then, in my Flask views, I'd have something like:
muonlab = RemoteMuonlab()

@app.route('/pmt1_voltage', methods=['GET', 'PUT'])
def get_data():
    if request.method == 'PUT':
        voltage = request.form['voltage']
        muonlab.set_pmt1_voltage(voltage)
    else:
        voltage = muonlab.get_pmt1_voltage()
    return jsonify(voltage=voltage)
Getting the measurement data from the app is perhaps less of a problem, since I could store that in SQLite or something else that handles concurrent access.
But... you do have an IO loop; it runs every 16ms.
You can use BaseHTTPServer.HTTPServer in such a case; just set the timeout attribute to something small. Basically:
from time import sleep
from SimpleXMLRPCServer import SimpleXMLRPCServer

class XmlRpcApi:
    def do_something(self):
        print "doing something"

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_instance(XmlRpcApi())
server.timeout = 0  # never block waiting for a request

while True:
    sleep(0.016)
    do_normal_thing()
    server.handle_request()  # handles at most one pending request, then returns
Edit: Python has a built-in server, also built on BaseHTTPServer, capable of serving a Flask app. Since flask.Flask() happens to be a WSGI-compliant application, your process_remote_requests() could look like this:
import wsgiref.simple_server

# app here is just your Flask() application!
remote_server = wsgiref.simple_server.make_server('localhost', 8000, app)

# as before, set timeout to zero so that you can go right back
# to your event loop if there are no requests to handle
remote_server.timeout = 0

def process_remote_requests():
    remote_server.handle_request()
This works well enough if you only have short-running requests; but if you need to handle requests that may take longer than your event loop's normal polling interval, or more requests than you get polls per unit of time, then you can't use this approach, exactly.
You don't necessarily need to fork off another process, though; you can potentially get by with a pool of workers in another thread. Roughly:
import threading
import wsgiref.simple_server

remote_server = wsgiref.simple_server.make_server('localhost', 8000, app)

POOL_SIZE = 10  # or some other value
pool = [threading.Thread(target=remote_server.serve_forever)
        for dummy in xrange(POOL_SIZE)]
for thread in pool:
    thread.daemon = True
    thread.start()

while True:
    pass  # normal experiment processing here; don't handle requests in this thread
However, this approach has one major shortcoming: you now have to deal with concurrency! It's not safe to manipulate your program state as freely as you could with the above loop, since you might be concurrently manipulating that same state in the main thread (or another HTTP server thread). It's up to you to know when this is valid, wrapping each resource with some sort of mutex lock or whatever is appropriate.
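A minimal sketch of that locking (mine, not part of the original answer; the state dict and field names are placeholders):

import threading

state_lock = threading.Lock()
state = {'voltage': 0}

# called from an HTTP server thread
def set_voltage(value):
    with state_lock:
        state['voltage'] = value

# called from the main experiment loop
def get_voltage():
    with state_lock:
        return state['voltage']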

Python Tornado - making POST return immediately while async function keeps working

so I have a handler below:
class PublishHandler(BaseHandler):
    def post(self):
        message = self.get_argument("message")
        some_function(message)
        self.write("success")
The problem that I'm facing is that some_function() takes some time to execute and I would like the post request to return straight away when called and for some_function() to be executed in another thread/process if possible.
I'm using berkeley db as the database and what I'm trying to do is relatively simple.
I have a database of users each with a filter. If the filter matches the message, the server will send the message to the user. Currently I'm testing with thousands of users and hence upon each publication of a message via a post request it's iterating through thousands of users to find a match. This is my naive implementation of doing things and hence my question. How do I do this better?
You might be able to accomplish this by using your IOLoop's add_callback method like so:
loop.add_callback(lambda: some_function(message))
Tornado will execute the callback in the next IOLoop pass, which may (I'd have to dig into Tornado's guts to know for sure, or alternatively test it) allow the request to complete before that code gets executed.
The drawback is that that long-running code you've written will still take time to execute, and this may end up blocking another request. That's not ideal if you have a lot of these requests coming in at once.
The more foolproof solution is to run it in a separate thread or process. The best way in Python is to use a process, due to the GIL (I'd highly recommend reading up on that if you're not familiar with it). However, on a single-processor machine the threaded implementation will work just fine, and may be simpler to implement.
If you're going the threaded route, you can build a nice "async executor" module with a mutex, a thread, and a queue; a rough sketch follows. Check out the multiprocessing module if you want to use a separate process instead.
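A minimal sketch of that executor (mine, as an illustration): one daemon worker thread drains a queue, so the request handler only pays the cost of a queue.put().

import threading
import Queue  # named queue on Python 3

task_queue = Queue.Queue()

def worker():
    while True:
        func, args = task_queue.get()
        try:
            func(*args)
        finally:
            task_queue.task_done()

executor_thread = threading.Thread(target=worker)
executor_thread.daemon = True
executor_thread.start()

# in the handler, instead of calling some_function(message) directly:
# task_queue.put((some_function, (message,)))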
I've tried this, and I believe the request does not complete before the callbacks are called.
I think a dirty hack would be to call two levels of add_callback, e.g.:
def get(self):
    ...
    def _defered():
        ioloop.add_callback(<whatever you want>)
    ioloop.add_callback(_defered)
    ...
But these are hacks at best. I'm looking for a better solution right now; I'll probably end up with some message queue or a simple thread solution.
