Consume REST API from Python: do I need an async library?

I have a REST API and now I want to create a web site that will use this API as its only and primary data source. The system is distributed: the REST API is on one group of machines and the site will be on another.
I'm expecting quite a lot of load, so I'd like to make requests as efficient as possible.
Do I need some async HTTP requests library or any HTTP client library will work?
API is done using Flask, web site will be also built using Flask and Jinja as template engine.

You could use gevent with Flask to get asynchronous I/O from normally synchronous libraries. See this question for an example of someone getting help with doing that.
You could also run Flask behind gunicorn, which has support for spawning multiple workers (threads, processes, or greenlets) for handling concurrent requests. If you were to take that approach, Flask would remain completely synchronous, and gunicorn would handle creating multiple Flask instances to handle concurrent requests.
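For example, a minimal sketch of that approach (the API host and module name below are placeholders): the view stays synchronous, and the gevent worker class makes the blocking HTTP call cooperative so other requests can proceed while it waits.
import requests
from flask import Flask, jsonify

app = Flask(__name__)
API_BASE = 'http://api.internal:5000'  # placeholder for your REST API host

@app.route('/items')
def items():
    # looks blocking, but under a gevent worker this call yields to
    # other greenlets while waiting on the network
    resp = requests.get(API_BASE + '/items', timeout=5)
    return jsonify(resp.json())
Run it with gevent workers, e.g. gunicorn -k gevent -w 4 site:app (assuming the module is called site.py).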

Start simple and use whichever approach seems easiest to you. Consider optimizing later, and only if needed.
Async libraries become helpful once you're handling thousands of requests a second. Long before that, you are likely to hit performance problems related to the database (if you use one), which async magic will not resolve.

Actually, your API is on a separate machine. Even if you make your client calls asynchronous, it will have no impact on the server: the thread in your client simply won't wait for the response, and the server reacts the same whether the call is sync or async.
And if you do want to make your calls async, please check http://stackandqueue.com/?p=57 . It uses unirest to make both GET and POST calls asynchronously.
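If unirest isn't an option, a thread pool from the standard library gives the same fire-now, collect-later behavior; a minimal sketch (the URLs are placeholders):
from concurrent.futures import ThreadPoolExecutor
import requests

executor = ThreadPoolExecutor(max_workers=10)

def fetch(url):
    # each call runs in its own worker thread
    return requests.get(url, timeout=5)

# fire both calls without waiting, then collect the responses
futures = [executor.submit(fetch, u)
           for u in ('http://api.example/a', 'http://api.example/b')]
responses = [f.result() for f in futures]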

Related

Flask: making a non-blocking requests call

Flask==2.0 (+uwsgi processes=5, threads=20)
Python: 3.9
I have a route that accepts a message and redirects it to other APIs based on the type of message. Say there are 4 types of messages with 4 matching downstream APIs. The request is made using the requests library, and the downstream API returns a response that my route needs to return to the client (so I need to wait for the response).
The issue is that one of the downstream APIs can sometimes exhibit high latency. This has the undesired result of making my application slow and impacting the other message types.
Is there a way to make these requests calls non-blocking so the slow downstream API responses don't slow down the whole app?
I read the Flask async guide, and from my understanding you get a real benefit if you're making multiple requests calls within a single route, which isn't the case for me: it's always a single request.
Async requests to your downstream API could provide some relief in your case, I believe:
Async is beneficial when performing concurrent IO-bound tasks, but will probably not improve CPU-bound tasks.
Consider also using a queueing mechanism (such as background tasks in flask, or something more fully fledged such as RabbitMQ).
Final words: also consider defining a timeout for your requests.
https://flask.palletsprojects.com/en/2.1.x/async-await/
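As a concrete illustration of the timeout advice, a sketch (the downstream URL and payload are placeholders):
import requests

payload = {'type': 'a', 'body': '...'}  # placeholder message
try:
    # (connect, read) timeouts in seconds: fail fast instead of letting a
    # slow downstream API pin one of your worker threads
    resp = requests.post('http://downstream.example/msg',
                         json=payload, timeout=(3.05, 10))
except requests.Timeout:
    resp = None  # return a 504 or retry, depending on your policy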

How do I run tasks in the background using Django?

I have an endpoint in Django which initiates a function that takes a really long time to complete. I don't want the request to wait until this function has completed.
class MyRequest(APIView):
    def get(self, request, *args, **kwargs):
        a_function_which_takes_really_long_time()
        return Response({"message": "We're Working on it."})
I tried using asyncio with Django's asynchronous support, and also tried Python threading, but all of these make the request wait until the function is completed.
I know that we can easily achieve this using Celery, but that approach would require me to use a message broker such as Redis, RabbitMQ, or a similar server, which I'm not supposed to use.
In my opinion, using Celery with Django is really easy and the recommended approach. I'm not sure why you are against using a message broker; both of the options you mentioned are really stable and widely used projects.
But if you still have the no-broker constraint, I can recommend APScheduler, which doesn't need an intermediate broker.
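A minimal sketch of that, assuming a_function_which_takes_really_long_time is importable from your own module; BackgroundScheduler runs jobs in a thread pool inside the Django process, so no broker is needed:
from apscheduler.schedulers.background import BackgroundScheduler
from rest_framework.views import APIView
from rest_framework.response import Response
# from yourapp.tasks import a_function_which_takes_really_long_time

scheduler = BackgroundScheduler()
scheduler.start()

class MyRequest(APIView):
    def get(self, request, *args, **kwargs):
        # with no trigger, the job is scheduled to run once, immediately,
        # and the view returns without waiting for it
        scheduler.add_job(a_function_which_takes_really_long_time)
        return Response({"message": "We're Working on it."})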

Make a non-blocking request with requests when running Flask with Gunicorn and Gevent

My Flask application will receive a request, do some processing, and then make a request to a slow external endpoint that takes 5 seconds to respond. It looks like running Gunicorn with Gevent will allow it to handle many of these slow requests at the same time. How can I modify the example below so that the view is non-blocking?
import requests

@app.route('/do', methods=['POST'])
def do():
    result = requests.get('slow api')
    return result.content

gunicorn server:app -k gevent -w 4
If you're deploying your Flask application with gunicorn, it is already non-blocking. If a client is waiting on a response from one of your views, another client can make a request to the same view without a problem. There will be multiple workers to process multiple requests concurrently. No need to change your code for this to work. This also goes for pretty much every Flask deployment option.
First, a bit of background: a blocking socket is the default kind of socket; once you start reading, your app or thread does not regain control until data is actually read or you are disconnected. This is how python-requests operates by default. There is a spin-off called grequests which provides non-blocking reads.
The major mechanical difference is that send, recv, connect and accept can return without having done anything. You have (of course) a number of choices. You can check return code and error codes and generally drive yourself crazy. If you don’t believe me, try it sometime
Source: https://docs.python.org/2/howto/sockets.html
It also goes on to say:
There’s no question that the fastest sockets code uses non-blocking sockets and select to multiplex them. You can put together something that will saturate a LAN connection without putting any strain on the CPU. The trouble is that an app written this way can’t do much of anything else - it needs to be ready to shuffle bytes around at all times.
Assuming that your app is actually supposed to do something more than that, threading is the optimal solution
But do you want to add a whole lot of complexity to your view by having it spawn its own threads? Particularly when gunicorn already has async workers?
The asynchronous workers available are based on Greenlets (via Eventlet and Gevent). Greenlets are an implementation of cooperative multi-threading for Python. In general, an application should be able to make use of these worker classes with no changes.
and
Some examples of behavior requiring asynchronous workers: Applications making long blocking calls (Ie, external web services)
So to cut a long story short, don't change anything! Just let it be. If you make any changes at all, let them be to introduce caching. Consider using CacheControl, a caching extension recommended by the python-requests developers.
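For instance, wrapping a requests session with CacheControl is a one-liner; a sketch using the same placeholder URL as the view above:
import requests
from cachecontrol import CacheControl

# responses are cached according to the API's Cache-Control headers
session = CacheControl(requests.Session())
result = session.get('slow api')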
You can use grequests. It allows other greenlets to run while the request is made. It is compatible with the requests library and returns a requests.Response object. The usage is as follows:
import grequests

@app.route('/do', methods=['POST'])
def do():
    result = grequests.map([grequests.get('slow api')])
    return result[0].content
Edit: I've added a test and saw that the time didn't improve with grequests since gunicorn's gevent worker already performs monkey-patching when it is initialized: https://github.com/benoitc/gunicorn/blob/master/gunicorn/workers/ggevent.py#L65
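In other words, the worker effectively does the following at startup, which is why the plain requests call already cooperates (a two-line sketch of the linked code):
from gevent import monkey
monkey.patch_all()  # patches socket, ssl, time, etc. to be cooperative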

Performing a blocking request in django view

In one of the views in my django application, I need to perform a relatively lengthy network IO operation. The problem is other requests must wait for this request to be completed even though they have nothing to do with it.
I did some research and stumbled upon Celery, but as I understand it, it is used to perform background tasks independent of the request (so I cannot use the result of the task for the response to the request).
Is there a way to process views asynchronously in django so while the network request is pending other requests can be processed?
Edit: What I forgot to mention is that my application is a web service using Django REST Framework. So the result of a view is a JSON response, not a page that I can later modify using AJAX.
The usual solution here is to offload the task to celery, and return a "please wait" response in your view. If you want, you can then use an Ajax call to periodically hit a view that will report whether the response is ready, and redirect when it is.
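A minimal sketch of that pattern (the task and view names are illustrative):
from celery import shared_task
from celery.result import AsyncResult
from django.http import JsonResponse

@shared_task
def slow_network_io(arg):
    ...  # the lengthy network IO operation
    return {'data': 'done'}

def start(request):
    # kick off the task and hand the client an id to poll with
    task = slow_network_io.delay(request.GET['arg'])
    return JsonResponse({'task_id': task.id, 'status': 'pending'})

def poll(request, task_id):
    result = AsyncResult(task_id)
    if result.ready():
        return JsonResponse({'status': 'done', 'result': result.get()})
    return JsonResponse({'status': 'pending'})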
You want to maintain that HTTP connection for an extended period of time but still allow other requests to be managed, right? There's no simple solution to this problem. Also, any solution will be a level away from Django as it depends on how you process requests.
I don't know what you're currently using, so I can only tell you how I handled this in the past... I was using uWSGI to provide the WSGI interface between my Python application and nginx. In uWSGI I used the asynchronous functions to suspend my long-running connection while it was waiting on I/O. Those functions let you ask uWSGI to suspend things until there is something to read or write, and meanwhile allow other connections to be serviced.
The above-mentioned async calls use "green threads". They're much lighter weight than regular threads, and you have control over when you move from thread to thread.
I am not saying that it is a good solution for your scenario[1], but the simple answer is to use the following pattern:
async_result = some_task.delay(arg1)
result = async_result.get()
Check the documentation for the get method. Instead of the delay method, you can use anything that returns an AsyncResult (like the apply_async method).
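If you do use this pattern, at least bound the wait; a sketch (some_task is your own Celery task):
from celery.exceptions import TimeoutError as CeleryTimeout

async_result = some_task.apply_async(args=[arg1])
try:
    result = async_result.get(timeout=30)  # give up after 30 seconds
except CeleryTimeout:
    result = None  # tell the client it's still working instead of hanging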
[1] Why may it be a bad idea? Keeping a connection open and waiting for a long time is bad for Django (it is not designed for long-lived connections), may conflict with the proxy configuration (if there is a reverse proxy somewhere), and may look like a timeout to the browser. So... it seems a Bad Idea[TM] to use this pattern for a Django REST Framework view.

Non-Message Queue / Simple Long-Polling in Python (and Flask)

I am looking for a simple way (i.e., one that does not require me to set up a separate server to handle a messaging queue) to do long-polling for a small web interface that runs calculations and produces a graph. This is what my web interface needs to do:
User requests a graph/data in a web-interface
Server runs some calculations.
While the server is running calculations, a small container is updated (likely via AJAX/jQuery) with the calculation progress, similar to what you'd print to a console (e.g. print 'calculating density function...')
Calculation finishes and graph is shown to user.
As the calculation is all done server-side, I'm not really sure how to easily set this up. Obviously I'll want to set up a REST API to handle the polling, which would be easy in Flask. However, I'm not sure how to retrieve the actual updates. An obvious, albeit complicated for this purpose, solution would be to set up a messaging queue and do some long polling. However, I'm not sure this is the right approach for something this simple.
Here are my questions:
Is there a way to do this using the file system? Performance isn't a huge issue. Can AJAX/jQuery find messages from a file? Save the progress to some .json file?
What about pickling? (I don't really know much about pickling, but maybe I could pickle a message dict and it could be read by an API that is handling the polling).
Is polling even the right approach? Is there a better or more common pattern to handle this?
I have a feeling I'm overcomplicating things as I know this kind of thing is common on the web. Quite often I see stuff happening and a little "loading.gif" image is running while some calculation is going on (for example, in Google Analytics).
Thanks for your help!
I've built several apps like this using just Flask and jQuery. Based on that experience, I'd say your plan is good.
Do not use the filesystem. You will run into JavaScript security issues/protections. In the unlikely event you find reasonable workarounds, you still wouldn't have anything portable or scalable. Instead, use a small local web serving framework, like Flask.
Do not pickle. Use JSON. It's the language of web apps and REST interfaces. jQuery and those nice jQuery-based plugins for drawing charts, graphs and such will expect JSON. It's easy to use, human-readable, and for small-scale apps, there's no reason to go any place else.
Long-polling is fine for what you want to accomplish. Pure HTTP-based apps have some limitations. And WebSockets and similar socket-ish layers like Socket.IO "are the future." But finding good, simple examples of the server-side implementation has, in my experience, been difficult. I've looked hard. There are plenty of examples that want you to set up Node.js, REDIS, and other pieces of middleware. But why should we have to set up two or three separate middleware servers? It's ludicrous. So long-polling on a simple, pure-Python web framework like Flask is the way to go IMO.
The code is a little more than a snippet, so rather than including it here, I've put a simplified example into a Mercurial repository on bitbucket that you can freely review, copy, or clone. There are three parts:
serve.py a Python/Flask based server
templates/index.html a 98%-HTML, 2%-template file that the Flask-based server renders as HTML
static/lpoll.js a jQuery-based client
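For readers who can't reach the repository, here is a minimal sketch of the same long-poll pattern on the server side (the calculation and its progress messages are stand-ins):
import threading
import time
import queue
from flask import Flask, jsonify

app = Flask(__name__)
progress = queue.Queue()

def calculate():
    # stand-in for the real server-side calculation
    for step in ('loading data...', 'calculating density function...', 'done'):
        time.sleep(2)
        progress.put(step)

@app.route('/start')
def start():
    threading.Thread(target=calculate, daemon=True).start()
    return jsonify({'status': 'started'})

@app.route('/poll')
def poll():
    # the long poll: block up to 25 s until the calculation posts a message
    try:
        return jsonify({'message': progress.get(timeout=25)})
    except queue.Empty:
        return jsonify({'message': None})  # client simply re-polls

if __name__ == '__main__':
    app.run(threaded=True)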
Long-polling was a reasonable work-around before simple, natural support for Web Sockets came to most browsers, and before it was easily integrated alongside Flask apps. But here in mid-2013, Web Socket support has come a long way.
Here is an example, similar to the one above, but integrating Flask and Web Sockets. It runs atop server components from gevent and gevent-websocket.
Note this example is not intended to be a Web Socket masterpiece. It retains a lot of the lpoll structure, to make them more easily comparable. But it immediately improves responsiveness, server overhead, and interactivity of the Web app.
Update for Python 3.7+
5 years since the original answer, WebSocket has become easier to implement. As of Python 3.7, asynchronous operations have matured into mainstream usefulness. Python web apps are the perfect use case. They can now use async just as JavaScript and Node.js have, leaving behind some of the quirks and complexities of "concurrency on the side." In particular, check out Quart. It retains Flask's API and compatibility with a number of Flask extensions, but is async-enabled. A key side-effect is that WebSocket connections can be gracefully handled side-by-side with HTTP connections. E.g.:
from quart import Quart, websocket

app = Quart(__name__)

@app.route('/')
async def hello():
    return 'hello'

@app.websocket('/ws')
async def ws():
    while True:
        await websocket.send('hello')

app.run()
Quart is just one of the many great reasons to upgrade to Python 3.7.
