Make parallel, concurrent calls to external APIs from Django - Python

I have a Django app (Django + Python + Apache mod_wsgi) which acts as a middleware between two systems: the app gets a request from one system, makes multiple requests to other APIs to fetch and prepare the desired response, and then passes that response back to the requesting system.
So it's basically acting as a one-to-many middleware. The problem is that making the calls sequentially takes too much time. I tried to use threading for I/O concurrency, but it is not working (I've read that Django works on a single thread; correct me if I'm wrong). I've never made parallel requests from a web server before and have no idea how to do it.
Following is the current implementation:
with futures.ThreadPoolExecutor(max_workers=MAX_BATCH_SIZE) as executor:
    future_to_url = {}
    for pnode in plist_node:
        config = {'url': rurl}
        future = executor.submit(self.get_result_from_url, config)
        future_to_url[future] = rurl
Can someone please suggest the right way to do this?
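For reference, here's a minimal, self-contained sketch of the pattern the question is reaching for, using concurrent.futures with requests (the URLs, the MAX_BATCH_SIZE value, and the get_result_from_url helper are illustrative assumptions, not taken from the original code):

import requests
from concurrent import futures

MAX_BATCH_SIZE = 10  # illustrative; size to the number of upstream calls

def get_result_from_url(url):
    # Each worker thread blocks on I/O independently, so the calls
    # overlap instead of running one after another.
    return requests.get(url, timeout=5).json()

urls = ['https://api-a.example.com', 'https://api-b.example.com']  # hypothetical
with futures.ThreadPoolExecutor(max_workers=MAX_BATCH_SIZE) as executor:
    future_to_url = {executor.submit(get_result_from_url, u): u for u in urls}
    results = {}
    for future in futures.as_completed(future_to_url):
        results[future_to_url[future]] = future.result()

Threads spawned inside a view like this should also work under Apache/mod_wsgi, since they live only for the duration of the request.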

Related

Make a non-blocking request with requests when running Flask with Gunicorn and Gevent

My Flask application will receive a request, do some processing, and then make a request to a slow external endpoint that takes 5 seconds to respond. It looks like running Gunicorn with Gevent will allow it to handle many of these slow requests at the same time. How can I modify the example below so that the view is non-blocking?
import requests

@app.route('/do', methods=['POST'])
def do():
    result = requests.get('slow api')
    return result.content
gunicorn server:app -k gevent -w 4
If you're deploying your Flask application with gunicorn, it is already non-blocking. If a client is waiting on a response from one of your views, another client can make a request to the same view without a problem. There will be multiple workers to process multiple requests concurrently. No need to change your code for this to work. This also goes for pretty much every Flask deployment option.
First, a bit of background: a blocking socket is the default kind of socket; once you start reading, your app or thread does not regain control until the data is actually read or you are disconnected. This is how python-requests operates by default. There is a spin-off called grequests which provides non-blocking reads.
The major mechanical difference is that send, recv, connect and accept
can return without having done anything. You have (of course) a number
of choices. You can check return code and error codes and generally
drive yourself crazy. If you don’t believe me, try it sometime
Source: https://docs.python.org/2/howto/sockets.html
It also goes on to say:
There’s no question that the fastest sockets code uses non-blocking
sockets and select to multiplex them. You can put together something
that will saturate a LAN connection without putting any strain on the
CPU. The trouble is that an app written this way can’t do much of
anything else - it needs to be ready to shuffle bytes around at all
times.
Assuming that your app is actually supposed to do something more than
that, threading is the optimal solution
But do you want to add a whole lot of complexity to your view by having it spawn its own threads? Particularly when gunicorn has async workers?
The asynchronous workers available are based on Greenlets (via
Eventlet and Gevent). Greenlets are an implementation of cooperative
multi-threading for Python. In general, an application should be able
to make use of these worker classes with no changes.
and
Some examples of behavior requiring asynchronous workers: Applications
making long blocking calls (Ie, external web services)
So to cut a long story short, don't change anything! Just let it be. If you are making any changes at all, let it be to introduce caching. Consider using CacheControl, an extension recommended by the python-requests developers.
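A minimal sketch of that caching layer using the CacheControl package (the slow-API URL is a placeholder):

import requests
from cachecontrol import CacheControl

# Wrap an ordinary requests session; responses are cached according
# to their Cache-Control headers, so repeated calls to the slow API
# can be answered without hitting the network.
session = CacheControl(requests.Session())
response = session.get('http://slow-api.example.com/data')  # hypothetical URL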
You can use grequests. It allows other greenlets to run while the request is made. It is compatible with the requests library and returns a requests.Response object. The usage is as follows:
import grequests

@app.route('/do', methods=['POST'])
def do():
    result = grequests.map([grequests.get('slow api')])
    return result[0].content
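The benefit shows up when several slow calls are mapped at once, since grequests issues them concurrently on greenlets (the URLs below are placeholders):

reqs = [grequests.get(u) for u in ['http://slow-api-1.example.com', 'http://slow-api-2.example.com']]
responses = grequests.map(reqs)  # both requests are in flight at the same time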
Edit: I've added a test and saw that the time didn't improve with grequests since gunicorn's gevent worker already performs monkey-patching when it is initialized: https://github.com/benoitc/gunicorn/blob/master/gunicorn/workers/ggevent.py#L65

Performing a blocking request in django view

In one of the views in my django application, I need to perform a relatively lengthy network IO operation. The problem is other requests must wait for this request to be completed even though they have nothing to do with it.
I did some research and stumbled upon Celery, but as I understand it, it is used to perform background tasks independent of the request (so I cannot use the result of the task in the response to the request).
Is there a way to process views asynchronously in Django, so that other requests can be processed while the network request is pending?
Edit: What I forgot to mention is that my application is a web service using Django REST Framework, so the result of a view is a JSON response, not a page that I can later modify using AJAX.
The usual solution here is to offload the task to celery, and return a "please wait" response in your view. If you want, you can then use an Ajax call to periodically hit a view that will report whether the response is ready, and redirect when it is.
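A rough sketch of that pattern with Celery and Django (the task and view names are made up for illustration):

# tasks.py
import requests
from celery import shared_task

@shared_task
def fetch_remote_data(url):
    # Runs in a Celery worker, outside the request/response cycle.
    return requests.get(url, timeout=30).json()

# views.py
from celery.result import AsyncResult
from django.http import JsonResponse
from .tasks import fetch_remote_data

def start(request):
    async_result = fetch_remote_data.delay('http://slow.example.com')  # hypothetical URL
    return JsonResponse({'task_id': async_result.id})

def poll(request, task_id):
    result = AsyncResult(task_id)
    if result.ready():
        return JsonResponse({'status': 'done', 'data': result.get()})
    return JsonResponse({'status': 'pending'})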
You want to maintain that HTTP connection for an extended period of time but still allow other requests to be served, right? There's no simple solution to this problem, and any solution will live a level away from Django, since it depends on how you process requests.
I don't know what you're currently using, so I can only tell you how I handled this in the past... I was using uwsgi to provide the WSGI interface between my Python application and nginx. In uwsgi I used its asynchronous functions to suspend my long-running connection whenever it had to wait on I/O, which allowed other connections to be serviced in the meantime.
The async calls mentioned above use "green threads". These are much lighter weight than regular threads, and you control when execution moves from one thread to another.
I am not saying that it is a good solution for your scenario[1], but the simple answer is using the following pattern:
async_result = some_task.delay(arg1)
result = async_result.get()
Check the documentation for the get method. Instead of the delay method, you can use anything that returns an AsyncResult (like the apply_async method).
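For example (a sketch; the timeout value is illustrative):

async_result = some_task.apply_async(args=[arg1])
result = async_result.get(timeout=10)  # blocks at most 10 s, then raises TimeoutError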
[1] Why might it be a bad idea? Keeping a connection open and waiting for a long time is bad for Django (it is not designed for long-lived connections), may conflict with the proxy configuration (if there is a reverse proxy somewhere), and may be treated as a timeout by the browser. So... it seems a Bad Idea[TM] to use this pattern for a Django REST Framework view.

Enable multithreading of my web app using python Bottle framework

I have a web app written with the Bottle framework. It has a global dict, somedict, accessed by multiple HTTP queries.
After some research, I found that the Bottle framework only supports running my app with one thread in one process (I don't believe this is true; perhaps migrating to another framework like Flask is a good idea).
1. To enable multi-threading, I found the WSGI solution, but it does not support multiple processes (one thread per process) accessing a global variable like somedict in my app, because each process re-initializes the dict every time a query is handled. How can I handle this issue?
2. Are there any other solutions besides WSGI that would enable this app to serve multiple HTTP queries at once?
from bottle import request, route
import threading

somedict = {}
somedict_lock = threading.Lock()

@route("/read")
def read():
    with somedict_lock:
        return somedict

@route("/write", method="POST")
def write():
    with somedict_lock:
        somedict[request.forms.get("key1")] = request.forms.get("value1")
        somedict[request.forms.get("key2")] = request.forms.get("value2")
It's best to serve a WSGI app via a server like gunicorn or waitress, which will handle your concurrency needs, but almost no matter what you do for concurrency your global queue in memory will not work the way you want it to. You need to use an external memory store like memcached, redis, etc. Static data is one thing, but mutable state should never be shared between web app processes. That's contrary to Python web server idioms and the typical execution model of Python web apps.
I'm not saying it's literally impossible to do in Python, but it's not the way Python solves this problem.
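A minimal sketch of that idea with Redis standing in for the in-process dict (key names mirror the question's example; the redis-py package is assumed to be installed):

from bottle import request, route
import redis

store = redis.Redis()  # one external store, shared by every worker process

@route("/read")
def read():
    # redis-py returns bytes; decode them back into strings.
    return {k.decode(): v.decode() for k, v in store.hgetall("somedict").items()}

@route("/write", method="POST")
def write():
    # Individual Redis commands are atomic, so no lock is needed here.
    store.hset("somedict", request.forms.get("key1"), request.forms.get("value1"))
    store.hset("somedict", request.forms.get("key2"), request.forms.get("value2"))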
You can process incoming requests asynchronously; currently, Celery seems very suitable for running asynchronous tasks. Read up on how Celery can do this.

Consume REST API from Python: do I need an async library?

I have a REST API and now I want to create a web site that will use this API as its only and primary data source. The system is distributed: the REST API is on one group of machines and the site will be on the other(s).
I'm expecting quite a lot of load, so I'd like to make requests as efficient as possible.
Do I need some async HTTP requests library or any HTTP client library will work?
API is done using Flask, web site will be also built using Flask and Jinja as template engine.
You could use gevent with Flask to get asynchronous I/O from normally synchronous libraries. See this question for an example of someone getting help with doing that.
You could also run Flask behind gunicorn, which has support for spawning multiple workers (threads, processes, or greenlets) for handling concurrent requests. If you were to take that approach, Flask would remain completely synchronous, and gunicorn would handle creating multiple Flask instances to handle concurrent requests.
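For example, a typical invocation (the module and app names are placeholders):

gunicorn -w 4 -k gevent mysite:app

With -k gevent each worker can juggle many concurrent requests on greenlets; with the default sync worker you scale with -w alone.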
Start simple and use the approach that seems easiest for you. Consider optimizing later, and only if needed.
Async libraries become helpful once you are handling thousands of requests a second. Long before that, you are likely to hit performance problems related to the database (if you use one), and those are not resolved by async magic.
Actually, your API is on a separate machine, so making your client calls asynchronous will have no impact on the server. By making your calls asynchronous, the thread in the client simply does not wait for the response; the server reacts the same whether the call is sync or async.
And if you want to make your calls async, please check http://stackandqueue.com/?p=57. It uses unirest to make both GET and POST calls asynchronously.

Synchronous reading of a multipart upload alongside Django

I have a Django application which needs to have access to reading multipart file uploads as file-like objects as they're uploaded, which means that I need more or less synchronous access to the request object and a way to unpack it in chunks to binary data. Django unfortunately handles uploads by moving them directly into memory or to temporary files, which won't work for my use case.
Someone recommended that I use gevent/greenlets to handle the upload, but I'm not sure how that plays into the equation and what setup is required alongside Django to make it work. Plus, running something outside of Django would mean that I would have to implement a database connection layer to validate that the upload is allowed (using a ticket id).
With this said, how can I set this up? Django should be running as a WSGI application, and someone also recommended writing a second WSGI application to capture a single URL path for uploads. I'd like to take as much advantage of the Django framework as I can, while still being able to read uploads synchronously.
(I just became familiar with the requests Python library and have to say I'm a pretty big fan, though I wouldn't know the first thing about using it in a server context.)
I believe a lot of these suggestions are overcomplicating things.
You need to change the way Django handles uploaded files? Simply modify the upload handler.
The base class is relatively straightforward, and gives you lots of great hooks. You should be able to extend it to do what you want.
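A minimal sketch of such a handler, subclassing Django's FileUploadHandler (open_sink is a hypothetical stand-in for whatever consumes the stream):

from django.core.files.uploadhandler import FileUploadHandler

class StreamingUploadHandler(FileUploadHandler):
    """Processes upload chunks as they arrive instead of buffering them."""

    def new_file(self, *args, **kwargs):
        super().new_file(*args, **kwargs)
        self.sink = open_sink(self.file_name)  # hypothetical destination

    def receive_data_chunk(self, raw_data, start):
        self.sink.write(raw_data)  # handle each chunk immediately
        return None  # returning None stops later handlers from re-buffering the chunk

    def file_complete(self, file_size):
        self.sink.close()
        return None  # no UploadedFile is placed in request.FILES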
While I can't write all the code for you (it's complex), here is my recommended setup.
Use Tornado + Django: Tornado can embed WSGI processes, so this gives you the ability to have a single process host both Django and this one-off Tornado handler. Here's a quick sample from one of my active projects (though this uses Tornado as a Socket.io handler, it should give you the gist of the solution):
import logging
import os
from os import path

import django.core.handlers.wsgi
import momoko
import tornado.httpserver
import tornado.ioloop
import tornado.locale
import tornado.options
import tornado.web
import tornado.wsgi
import tornadio2

# Socket Server
db = momoko.Pool(DB_DSN, **DB_CONFIG)
router = tornadio2.TornadioRouter(QueryRouter, user_settings={'db': db})
sock_app = tornado.web.Application(
    router.urls,
    flash_policy_port=FL_PORT,
    flash_policy_file=path.join(PROJECT_ROOT, 'assets/xml/flashpolicy.xml'),
    socket_io_port=WS_PORT,
    debug=True,
)

# Django Server
os.environ['DJANGO_SETTINGS_MODULE'] = 'server.settings'
application = django.core.handlers.wsgi.WSGIHandler()
container = tornado.wsgi.WSGIContainer(application)

# Start the web servers
if __name__ == "__main__":
    try:
        tornado.options.parse_command_line()
        logging.getLogger().setLevel(logging.INFO)
        logging.info('Server started')
        tornado.locale.set_default_locale('us_US')
        http_server = tornado.httpserver.HTTPServer(container)
        http_server.listen(8000)
        tornadio2.SocketServer(sock_app, auto_start=True)
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
        logging.info("Stopping servers.")
This could easily be converted to two server instances running on two different ports, with 80 being reserved for Django and 8080 being used for your upload handler.
I'm recommending Tornado because it supports streaming the request body, and it is very well suited to this type of use. Here's a gist that might help you.
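The core of that approach looks something like this (a sketch; what you do with each chunk is up to you, and process_chunk is hypothetical):

import tornado.web

@tornado.web.stream_request_body
class UploadHandler(tornado.web.RequestHandler):
    def data_received(self, chunk):
        # Called repeatedly as body chunks arrive, before the request
        # is complete; process or forward each chunk here.
        process_chunk(chunk)  # hypothetical

    def post(self):
        self.write("upload complete")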
Your proxying setup will matter. If you're using NGINX, make sure to turn proxy_buffering off.
I wouldn't use a database for the ticket/upload check. Redis or memcached would probably be a much faster way to handle this. A cache would also be a great way to pass upload progress back and forth between Django and Tornado, since the overhead for setting/getting a value is so small.
This is a big hairy problem that will require serious engineering to come up with something elegant, but it's more than doable.
