I have a Django project that makes predictions, running on a VM with 2 CPU cores and 8 GB of RAM. When my Django app starts it loads a large file (2.5 GB, about 10 seconds to load) with information the app needs. My app can handle a fair number of concurrent requests before I start getting errors, but it only uses 50% of my CPU (1 core); in order to use 100% of the machine's power I need to bring the second core into play, e.g. through threading.
How can I set up my app so it can handle user requests in different threads?
Is there a recommended way to do that, or an example?
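As a hedged sketch of one common approach (this is not from the original thread): run the app under a WSGI server such as gunicorn with one worker process per core plus a few threads per worker, and preload the app so the 2.5 GB file is read once in the master before the workers fork. The module path mysite.wsgi, the port and the exact numbers are assumptions:

# gunicorn.conf.py -- illustrative sketch only; "mysite.wsgi" and the port are assumptions
bind = "0.0.0.0:8000"
workers = 2               # one worker process per CPU core on the 2-core VM
worker_class = "gthread"  # threaded workers: each worker serves several requests in threads
threads = 4
preload_app = True        # load the large file once in the master and share it with the workers

Started with something like gunicorn -c gunicorn.conf.py mysite.wsgi:application.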
Related
I have been deploying apps to Kubernetes for the last 2 years, and in my org all our apps (especially the stateless ones) run in Kubernetes. I still have a fundamental question, because very recently we found some issues with a few of our Python apps.
Initially we deployed our Python apps (written in Flask and Django) by running them with python app.py. It's known that, because of the GIL, Python threads cannot execute bytecode in parallel, so the app effectively serves one request at a time; if that one request is CPU-heavy, it cannot process further requests. This sometimes causes the health API to stop responding. We have observed that if a single request is doing non-I/O work, it holds the CPU and no other request can be processed in parallel. And since it is doing relatively little work, we also see no increase in CPU utilization, which has an impact on how the HorizontalPodAutoscaler works: it is unable to scale the pods.
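To make the GIL point concrete, here is a small illustration of my own (not from the original post): two threads doing pure-Python CPU work take roughly as long as doing the same work serially, which is why one CPU-heavy request starves every other request in a single threaded process.

# Two threads of CPU-bound Python take about as long as running the work twice serially.
import threading
import time

def burn():
    s = 0
    for i in range(10_000_000):
        s += i * i

start = time.perf_counter()
burn()
burn()
serial = time.perf_counter() - start

start = time.perf_counter()
workers = [threading.Thread(target=burn) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
threaded = time.perf_counter() - start

print(f"serial: {serial:.2f}s, two threads: {threaded:.2f}s")  # roughly equal under the GIL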
Because of this, we started using uWSGI in our pods. uWSGI runs multiple worker processes under the hood, handles multiple requests in parallel, and can spin up new processes on demand. But here comes another problem: uWSGI is too slow at scaling up the processes that serve requests, and that causes HTTP 503 errors, so we are unable to serve a few of our APIs with 100% availability.
At the same time, all our other apps, written in Node.js, Java and Go, are giving 100% availability.
I am looking for the best way to run a Python app with 100% (99.99%) availability in Kubernetes, with the following requirements:
Health and liveness APIs served by the app
An app running in Kubernetes
If possible, without uWSGI (a single process per pod is the fundamental Docker concept)
If uWSGI is required, are there any specific configs we can apply for a k8s environment?
We use Twisted's WSGI server with 30 threads and it's been solid for our Django application. It keeps to a single-process-per-pod model, which more closely matches Kubernetes' expectations, as you mentioned. Yes, the GIL means only one of those 30 threads can be running Python code at a time, but as with most webapps, most of those threads are blocked on I/O (usually waiting for a response from the database) the vast majority of the time. Then run multiple replicas on top of that, both for redundancy and to give you true concurrency at whatever level you need (we usually use 4-8 depending on the site traffic; some big ones are up to 16).
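For reference, a minimal sketch of that setup (the module path mysite.wsgi and port 8080 are my assumptions, not from the answer):

# Serve a Django WSGI app from Twisted's thread pool -- illustrative sketch only.
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource

from mysite.wsgi import application  # the Django project's WSGI callable (assumed path)

reactor.suggestThreadPoolSize(30)    # the 30 threads mentioned above
resource = WSGIResource(reactor, reactor.getThreadPool(), application)
reactor.listenTCP(8080, Site(resource))
reactor.run()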
I have exactly the same problem with a Python deployment running a Flask application. Most API calls are handled in a matter of seconds, but there are some CPU-intensive requests that hold the GIL for 2 minutes... The pod keeps accepting requests, ignores the configured timeouts, ignores a connection closed by the user; then, after 1 minute of failing liveness probes, the pod is restarted by kubelet.
So 1 fat request can dramatically drop the availability.
I see two different solutions:
have a separate deployment that hosts only the long-running API calls, and configure the ingress to route requests between the two deployments;
using multiprocessing, handle liveness/readiness probes in the main process and handle every other request in a child process (see the sketch below);
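A minimal sketch of that second idea (the Flask app, route names and numbers are illustrative assumptions, not code from this thread): the parent process answers the probes, while the actual work runs in child processes from a pool, so a CPU-bound call can never pin the interpreter that serves /healthz.

# Illustrative only: parent answers probes, children do the CPU-bound work.
import multiprocessing as mp

from flask import Flask, jsonify

app = Flask(__name__)

def heavy_work(n):
    # stand-in for the CPU-intensive call that holds the GIL for minutes
    return sum(i * i for i in range(n))

@app.route("/healthz")
def healthz():
    # runs in the parent process, which only ever waits on I/O
    return jsonify(status="ok")

@app.route("/work")
def work():
    # block a parent thread on the pool, not the parent's GIL
    result = pool.apply_async(heavy_work, (10_000_000,)).get(timeout=120)
    return jsonify(result=result)

if __name__ == "__main__":
    pool = mp.Pool(processes=2)        # child processes for the real work
    app.run(port=8000, threaded=True)  # parent threads just wait on the pool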
There are pros and cons to each solution; maybe I will need a combination of both. Also, if I need a steady flow of Prometheus metrics, I might need to create a proxy server at the application layer (one more container in the same pod). I also need to configure the ingress to keep a single upstream connection to the Python pods, so that long-running requests are queued while short ones are processed concurrently (yep, Python, concurrency, good joke). Not sure, though, that it will scale well with HPA.
So yeah, running a production-ready Python REST API server on Kubernetes is not a piece of cake. Go and Java have a much better ecosystem for microservice applications.
PS
Here is a good article showing that there is no need to run your app in Kubernetes with a traditional WSGI server:
https://techblog.appnexus.com/beyond-hello-world-modern-asynchronous-python-in-kubernetes-f2c4ecd4a38d
PPS
I'm considering using the Prometheus exporter for Flask. It looks better than running a Python client in a separate thread:
https://github.com/rycus86/prometheus_flask_exporter
I am about to deploy a Django app, and it struck me that I couldn't find a way to anticipate how many requests per second my application can handle.
Is there a way of calculating how many requests per second a Django application can handle, without resorting to things like doing a test deployment and using an external tool such as locust?
I know there are several factors involved (such as the number of database queries, etc.), but perhaps there is a convenient way of calculating, or at least estimating, how many visitors a single Django app instance can handle.
EDIT: Removed the mention of Gunicorn, since it only adds confusion about what I truly wanted to know.
Is there a way of calculating how many requests per second a Django application can handle, without resorting to things like doing a test deployment and using an external tool such as locust?
No and yes. As mackarone pointed out, I don't think there's any way to avoid measuring it. Consider the case where you did a local benchmark on your local dev server talking to a local DB instance, in order to generate a baseline for estimation. The issue with this is that the hardware and the network (the distance between services) make a huge difference, so any numbers you generated locally would be relatively worthless for capacity planning.
In my experience, local testing is great for relative changes. Consider the case where you want to see the performance impact of SQL query planning: establishing a local baseline, making the change, then observing the effect locally is useful for gauging the relative speedup.
How to generate these numbers?
I would recommend deploying the app to the hardware and network you plan to run on. This deployment should use your production configuration and component topology (i.e. if you're going to run gunicorn, make sure gunicorn is running instead of NGINX, or if you're going to have a proxy in front of gunicorn, make sure that is set up). I would run a single instance of your application using your production config.
Once this is running, I would launch a load test against the single instance using any of the popular load testing tools:
Apache Benchmark
Siege
Vegeta
K6
etc
You can launch these load tests from your single machine and ramp up traffic until response times are no longer acceptable, in order to get a feel for the number of concurrent connections and the throughput your application can accommodate.
Now you have some idea of what a single instance of your service is able to handle. Until your DB (or other shared resources) become saturated, these numbers can be used to project how many instances of your service are necessary to handle a given amount of traffic.
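For example (all numbers here are illustrative assumptions, not measurements from this thread), the projection is simple division:

# Rough capacity projection from a single-instance load test.
import math

single_instance_rps = 120   # throughput measured before response times degraded (assumed)
target_peak_rps = 900       # traffic you need to handle (assumed)
replicas = math.ceil(target_peak_rps / single_instance_rps)
print(replicas)             # 8 instances, assuming the DB is not yet the bottleneck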
According to the Gunicorn documentation
How Many Workers?
DO NOT scale the number of workers to the number of clients you expect to have. Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second.
Gunicorn relies on the operating system to provide all of the load balancing when handling requests. Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with. While not overly scientific, the formula is based on the assumption that for a given core, one worker will be reading or writing from the socket while the other worker is processing a request.
Obviously, your particular hardware and application are going to affect the optimal number of workers. Our recommendation is to start with the above guess and tune using TTIN and TTOU signals while the application is under load.
Always remember, there is such a thing as too many workers. After a point your worker processes will start thrashing system resources decreasing the throughput of the entire system.
The best thing is to tune it using a load testing tool such as locust, as you mentioned.
Emphasis mine
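As a concrete (illustrative) instance of that formula: a Gunicorn config file is plain Python, so the suggested starting point can be computed directly.

# gunicorn.conf.py -- sketch of the (2 x $num_cores) + 1 starting point quoted above.
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1   # e.g. 9 workers on a 4-core machine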
You have to install loadtest first; it is an npm package. I came across it while learning Redis, and it worked well for me.
For more, check this tutorial: https://realpython.com/caching-in-django-with-redis/#start-by-measuring-performance
npm install -g loadtest
loadtest -n 100 -k http://localhost:8000/myUrl/
I have a Flask app deployed on Elastic Beanstalk onto an EC2 instance on AWS. If 100 people connected to my server simultaneously, wouldn't that mean they have to wait in a queue of 100, since the app can only handle one request at a time?
How can I make it so that I can handle more requests using the same IP address? Thanks!
The short answer is to use uWSGI or gunicorn.
The longer answer is that your intuition is correct - what you are worrying about is "concurrency", or the number of simultaneous requests your app can handle. And yes, a single Flask app without any application server can handle one request at a time. How do you change that? For most Python apps, the unit of concurrency is a process (there are frameworks that change that, but the majority of app deployments are probably process-based). That is, you run a process for each concurrent request you think you'll need. App servers like uWSGI do the listening for your app, then dispatch the request to a process from a pool. So, how many processes do you need?
The second concept you need is "throughput" - how many requests can be served in a given time, which is influenced by, but different from, "concurrency", and is where your intuition may mislead you. Let's say you have 8 processes. You may think "but I'll have 100 users, 8 is clearly not enough". Let's assume you know that each request completes in 1/8 (0.125) seconds. That means each process can serve 8 requests a second; times 8 processes, your throughput will be (roughly) 64 requests per second. 8 processes get you a lot closer to your 100 users than you might have expected. Your 100 users probably won't all issue requests within the same 1-second window - possible, but unlikely. The issue isn't really the concurrency, but whether each user gets a response in a reasonable time.
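Spelled out (the 0.125 s service time is the assumption made in the paragraph above):

# Back-of-the-envelope throughput estimate from the paragraph above.
processes = 8
seconds_per_request = 0.125             # assumed service time per request
per_process_rps = 1 / seconds_per_request
total_rps = processes * per_process_rps
print(per_process_rps, total_rps)       # 8.0 requests/s per process, 64.0 overall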
Hope this helps. Scaling is a wonderful topic - both straightforward and frustratingly nuanced at the same time. As your traffic increases, the above guidance will shift and you'll need more and more advanced techniques. But to get started - keep it simple and focus on the basics.
See How many concurrent requests does a single Flask process receive?
I have a simple Flask application that exposes one API. Calling the API runs a Python algorithm that does a lot of string manipulation and file reading (no writing). The algorithm takes about 1000 ms. I'm trying to see if there's any way to optimize for concurrent requests. I'm running on a single 4-vCPU VM instance.
I wrote a client that makes a request every 1000 ms. There's minimal RAM usage, and CPU usage is about 35%. When I increase the rate to a request every 750 ms, RAM usage does not increase by much, but CPU usage doubles to 70%. If I increase the rate to a request every 500 ms, responses start taking longer and eventually time out. CPU usage is at 100%, and RAM usage is still minimal.
I followed this tutorial to set up my application and enabled threads in my uWSGI settings. However, I did not really notice much of a difference.
I was hoping to get some advice on what I can do, software- or settings-wise, to respond better to concurrent requests.
I have a simple webservice that I need to scale up substantially.
I'm trying to decide where to go among the various web frameworks, load balancers and app servers (e.g. Mongrel2, Tornado, nginx, mod_proxy).
I have an existing Python app (currently exposed via BaseHTTPServer) that accepts some JSON data (about 900KB per request), and returns some JSON data (about 1k). The processing is algorithmic and done in a mixture of Python and some C (via Cython).
This is already heavily optimized (down to 1.1 seconds per job, from more than an hour), and I can optimize it no further. While I rewrite it in something a bit more thread-friendly, I need to scale things out horizontally (EC2, maybe).
There is no session or state, but the startup time of the app is quite slow (even with pickling and caching): it takes about 3 seconds to load all the source data. Once running, it takes about 1.1 seconds per request.
Maybe I could spin up a number of copies and then reverse proxy them? Maybe I could do some funky worker pool in one of those frameworks? But I'm still in the unknown unknowns here.
First, you should decouple your webservice layer from the number crunching. Use an external job queue (for example http://celeryproject.org/) to offload the web frontend. Then you can scale each part independently.
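A minimal sketch of that decoupling with Celery (the broker/backend URLs, the task name and the placeholder body are my assumptions; the real algorithm is only hinted at by a comment):

# tasks.py -- illustrative sketch of pushing the number crunching to a Celery worker.
from celery import Celery

celery_app = Celery(
    "predictions",
    broker="redis://localhost:6379/0",    # assumed broker
    backend="redis://localhost:6379/0",   # assumed result backend
)

@celery_app.task
def crunch(payload):
    # placeholder for the ~1.1 s Python/Cython algorithm
    return {"result": len(payload)}

The web frontend then only enqueues crunch.delay(json_data) and either waits for the result or returns a job id, so the slow-starting worker processes can be scaled out independently of the web tier.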
You should look at IaaS-type cloud providers (EC2, Rackspace, Linode, SoftLayer, etc.), where you can add nodes automatically (the preferred way would be to spin up a preconfigured image to minimize node setup time).