How can I determine what my database's connection limits should be?

At my organization, PostgreSQL databases are created with a 20-connection limit as a matter of policy. This interacts poorly with applications that use connection pools, since many of those open their full complement of connections and hold them idle.
As soon as there are more than a couple of applications in contact with the DB, we run out of connections, as you'd expect.
Pooling behaviour is a new thing here; until now we've managed pooled connections by serializing access to them through a web-based DB gateway (?!) or by not pooling anything at all. As a consequence, I'm having to explain (literally, 5 trouble tickets from one person over the course of the project) over and over again how the pooling works.
What I want is one of the following:
A solid, inarguable rationale for increasing the number of available connections to the database in order to play nice with pools.
If so, what's a safe limit? Is there any reason to keep the limit at 20?
A reason why I'm wrong and we should cut the size of the pools down or eliminate them altogether.
For what it's worth, here are the components in play. If it's relevant how one of these is configured, please weigh in:
DB: PostgreSQL 8.2. No, we won't be upgrading it as part of this.
Web server: Python 2.7, Pylons 1.0, SQLAlchemy 0.6.5, psycopg2
This is complicated by the fact that some parts of the system access data through the SQLAlchemy ORM using a manually configured engine, while others use a different engine factory (still SQLAlchemy) written by one of my associates, which wraps the connection in an object that matches an old PHP API.
Task runner: Python 2.7, celery 2.1.4, SQLAlchemy 0.6.5, psycopg2

I think it's reasonable to require one connection per concurrent activity, and it's reasonable to assume that concurrent HTTP requests are concurrently executed.
Now, the number of concurrent HTTP requests you want to process should scale with a) the load on your server, and b) the number of CPUs you have available. If all goes well, each request will consume CPU time somewhere (in the web server, in the application server, or in the database server), meaning that you couldn't process more requests concurrently than you have CPUs. In practice, not everything goes well: some requests will wait for IO at some point and not consume any CPU. So it's OK to process somewhat more requests concurrently than you have CPUs.
Still, assuming that you have, say, 4 CPUs, allowing 20 concurrent requests is already quite a load. I'd rather throttle HTTP requests than increase the number of requests that can be processed concurrently. If you find that a single request needs more than one connection, you have a flaw in your application.
So my recommendation is to cope with the limit, and make sure that there are not too many idle connections (compared to the number of requests that you are actually processing concurrently).
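If you do keep the 20-connection cap, each application's pool has to be budgeted so the total stays under it. A minimal sketch with SQLAlchemy's create_engine (the DSN and the numbers are illustrative; pool_size and max_overflow are the standard SQLAlchemy pool parameters):

```python
# Budgeting against a 20-connection server cap: if three pooled apps
# share the database, each can hold at most ~6 connections at peak.
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:pass@dbhost/mydb",  # placeholder DSN
    pool_size=3,        # connections held open per process
    max_overflow=2,     # burst connections, closed when returned
    pool_recycle=3600,  # drop stale connections after an hour
)
```

You can check how many of the 20 slots are actually in use (versus idle) on the server side with SELECT count(*) FROM pg_stat_activity;.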

Related

Production ready Python apps on Kubernetes

I have been deploying apps to Kubernetes for the last 2 years, and in my org all our apps (especially the stateless ones) run in Kubernetes. I still have a fundamental question, because very recently we found some issues with a few of our Python apps.
When we initially deployed our Python apps (written in Flask and Django), we ran them using python app.py. Python threads are real system threads, but because of the GIL only one of them executes Python bytecode at a time, so a single-process server effectively serves one request at a time; if that request is CPU-heavy, it cannot process further requests. This sometimes causes the health API to stop responding. We have observed that the moment a single request is doing CPU-bound work rather than IO, it holds the interpreter and no other request is processed in parallel. And since only one core is busy, there is no increase in overall CPU utilization either, which impacts how the HorizontalPodAutoscaler works: it's unable to scale the pods.
Because of this, we started using uWSGI in our pods. uWSGI runs multiple worker processes under the hood, handles multiple requests in parallel, and can automatically spin up new processes on demand. But here comes another problem: we have seen that uWSGI is slow to scale up processes to serve requests, which causes HTTP 503 errors, and because of this we are unable to serve a few of our APIs with 100% availability.
At the same time, all our other apps, written in Node.js, Java, and Go, are giving 100% availability.
I am looking for the best way to run a Python app with 100% (99.99%) availability in Kubernetes, with the following:
Having health API and liveness API served by the app
An app running in Kubernetes
If possible, without uWSGI (a single process per pod is the fundamental Docker concept)
If uWSGI is needed, is there any specific config we can apply for the k8s env?
We use Twisted's WSGI server with 30 threads and it's been solid for our Django application. It keeps to a single-process-per-pod model, which more closely matches Kubernetes' expectations, as you mentioned. Yes, the GIL means only one of those 30 threads can be running Python code at a time, but as with most webapps, most of those threads are blocked on I/O (usually waiting for a response from the database) the vast majority of the time. Then run multiple replicas on top of that, both for redundancy and to give you true concurrency at whatever level you need (we usually use 4-8 depending on site traffic; some big ones are up to 16).
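For reference, a minimal sketch of that setup (the port is arbitrary, and myproject.wsgi is a hypothetical module holding your Django WSGI callable):

```python
# One process, 30 threads: serve a WSGI app from Twisted's thread pool.
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource

from myproject.wsgi import application  # hypothetical Django WSGI app

reactor.suggestThreadPoolSize(30)  # cap the WSGI worker threads at 30
resource = WSGIResource(reactor, reactor.getThreadPool(), application)
reactor.listenTCP(8000, Site(resource))
reactor.run()
```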
I have exactly the same problem with a Python deployment running a Flask application. Most API calls are handled in a matter of seconds, but there are some CPU-intensive requests that hold the GIL for 2 minutes... The pod keeps accepting requests, ignores the configured timeouts, ignores a connection closed by the user; then, after 1 minute of liveness probes failing, the pod is restarted by kubelet.
So 1 fat request can dramatically drop the availability.
I see two different solutions:
have a separate deployment that hosts only the long-running API calls, and configure ingress to route requests between the two deployments;
using multiprocessing, handle liveness/readiness probes in the main process, while every other request is handled in a child process (see the sketch below);
There are pros and cons to each solution; maybe I will need a combination of both. Also, if I need a steady flow of Prometheus metrics, I might need to create a proxy server on the application layer (one more container in the same pod). I also need to configure ingress to keep a single upstream connection to the Python pods, so that long-running requests are queued while short ones are processed concurrently (yep, python, concurrency, good joke). Not sure, though, that it will scale well with HPA.
So yeah, running a production-ready Python REST API server on Kubernetes is not a piece of cake. Go and Java have a much better ecosystem for microservice applications.
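To make the second option concrete, here is a rough sketch (endpoint names and the pool size are made up): the main process answers the probes itself and pushes CPU-heavy work into child processes, so one fat request can no longer starve the liveness probe.

```python
# Probes are served by the main process; CPU-heavy work runs in child
# processes so it cannot block the liveness endpoint.
from multiprocessing import Pool

from flask import Flask, jsonify

app = Flask(__name__)
pool = None  # created under __main__ to stay fork/spawn-safe


def crunch(n):
    # stand-in for the 2-minute CPU-bound computation
    return sum(i * i for i in range(n))


@app.route("/healthz")
def healthz():
    # served directly by the main process; never waits on crunch()
    return jsonify(status="ok")


@app.route("/heavy")
def heavy():
    # this thread blocks waiting on a child process, but it releases
    # the GIL while waiting, so /healthz stays responsive
    return jsonify(result=pool.apply(crunch, (10 ** 7,)))


if __name__ == "__main__":
    pool = Pool(processes=4)
    app.run(threaded=True)
```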
PS
Here is a good article that shows there is no need to run your app in Kubernetes with WSGI:
https://techblog.appnexus.com/beyond-hello-world-modern-asynchronous-python-in-kubernetes-f2c4ecd4a38d
PPS
I'm considering using the Prometheus exporter for Flask; it looks better than running a Python client in a separate thread:
https://github.com/rycus86/prometheus_flask_exporter

with flask, is it bad to use multiprocessing.Process to handle concurrent requests?

The built-in Flask development server can, by default, handle only one request at a time.
By using multiprocessing.Process() one can spawn a worker task for every request. Each request takes some time, for example to query a database.
question 1.
Why is this bad, and why would a WSGI server be recommended and superior?
question 2.
It works with multiprocessing.Process(). Maybe it is not well structured. But what real problems could it cause in the future?
By using the multiprocessing library and attempting a concurrent process per request, you risk limiting concurrency to the number of cores the machine's CPU has. This is loosely equivalent to using the --workers flag with something like Gunicorn and giving the Gunicorn server the maximum number of cores available. While you could certainly write the logic to give CPU time to each connection yourself, it seems like a lot of effort when WSGI servers exist to do exactly that. I'd suggest going through the Settings and Design documentation of Gunicorn to get better clarity on your situation.
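For instance, rather than spawning multiprocessing.Process yourself, you let the WSGI server own the process model. A minimal sketch of a Gunicorn config file (the bind address is a placeholder; the worker count follows the (2 × cores) + 1 rule of thumb from Gunicorn's own docs):

```python
# gunicorn.conf.py -- run with: gunicorn -c gunicorn.conf.py app:app
import multiprocessing

bind = "0.0.0.0:8000"
workers = multiprocessing.cpu_count() * 2 + 1  # Gunicorn's rule of thumb
worker_class = "sync"  # one request per worker process at a time
timeout = 30           # recycle workers stuck for more than 30 seconds
```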

Reusing database connection for multiple requests

If I don't need transactions, can I reuse the same database connection for multiple requests?
Flask documentation says:
Because database connections encapsulate a transaction, we also need to make sure that only one request at the time uses the connection.
Here's how I understand the meaning of the above sentence:
Python DB-API connection can only handle one transaction at a time; to start a new transaction, one must first commit or roll back the previous one. So if each of our requests needs its own transaction, then of course each request needs its own database connection.
Please let me know if I got it wrong.
But let's say I set autocommit mode, and handle each request in a single SQL statement. Or, alternatively, let's say I only read - not write - to the database. In either case, it seems I can just reuse the same database connection for all my requests to save the overhead of multiple connections. But I'm not sure if there's any downside to this approach.
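For concreteness, the driver-level autocommit I have in mind looks like this in psycopg2 (a sketch; the DSN and query are placeholders):

```python
# Autocommit mode: no transaction is held open between statements, so
# the connection never sits "idle in transaction".
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder DSN
conn.autocommit = True

cur = conn.cursor()
cur.execute("SELECT count(*) FROM items")  # hypothetical read-only query
print(cur.fetchone()[0])
```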
Edit: I can see one issue with what I'm proposing: each request might be handled by a different process. Since connections should probably not be reused across processes, let me clarify my question: I mean creating one connection per process, and using it for all requests that happen to be handled by this process.
On the other hand, the whole point of (green or native) threads is usually to serve one request per thread, so my proposed approach implies sharing connection across threads. It seems one connection can be used concurrently in multiple native threads, but not in multiple green threads.
So let's say for concreteness my environment is flask + gunicorn with multiple multi-threaded sync workers.
Based on @Craig Ringer's comment on a different question, I think I know the answer.
The only possible advantage of connection sharing is performance (other factors, like transaction encapsulation and simplicity, favor a separate connection per request). And since a connection can't be shared across processes or green threads, sharing only has a chance with native threads. But psycopg2 (and presumably other drivers) doesn't allow concurrent access to the same connection. So unless each request spends very little time talking to the database, there is likely a performance hit, not a benefit, from connection sharing.
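In practice that means a per-process pool with a connection checked out for the duration of each request. A sketch for the flask + gunicorn sync-worker setup above, using psycopg2's thread-safe pool (names, sizes, and the DSN are illustrative):

```python
# Each gunicorn worker process builds its own pool; every request
# checks a connection out and returns it, so threads never share one.
from flask import Flask, g
from psycopg2.pool import ThreadedConnectionPool

app = Flask(__name__)
db_pool = ThreadedConnectionPool(minconn=1, maxconn=5,
                                 dsn="dbname=mydb user=me")  # placeholder


@app.before_request
def checkout_connection():
    g.db = db_pool.getconn()


@app.teardown_request
def return_connection(exc):
    db = g.pop("db", None)
    if db is not None:
        if exc is None:
            db.commit()
        else:
            db.rollback()
        db_pool.putconn(db)


@app.route("/count")
def count():
    with g.db.cursor() as cur:
        cur.execute("SELECT count(*) FROM items")  # hypothetical table
        return str(cur.fetchone()[0])
```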

What exactly happens on the computer when multiple requests come to the webserver serving a django or pyramid application?

I am having a hard time trying to figure out the big picture of how the uwsgi server handles multiple requests for a django or pyramid application.
My understanding at the moment is this:
When multiple HTTP requests are sent to the uwsgi server concurrently, the server creates separate processes or threads (copies of itself) for every request (or assigns the requests to existing ones). Every process/thread loads the web application's code (say django or pyramid) into memory, executes it, and returns the response. In between, every copy of the code can access the session, the cache, or the database. There is usually a separate database server, and it can also handle concurrent requests to the database.
So here are some questions I am fighting with.
Is my above understanding correct or not?
Do the copies of the code interact with each other somehow, or are they wholly separated from each other?
What about the session or cache? Are they shared between them or are they local to each copy?
How are they created: by the webserver or by copies of python code?
How are responses returned to the requesters: by each process concurrently, or are they put into some kind of queue and sent synchronously?
I have googled these questions and found very interesting answers on Stack Overflow, but I still can't get the whole picture; the whole process remains a mystery to me. It would be fantastic if someone could explain the whole picture in terms of django or pyramid with uwsgi or whatever webserver.
Sorry for asking kind of dumb questions, but they really torment me every night and I am looking forward to your help:)
There's no magic in Pyramid or Django that gets you past process boundaries. The answers depend entirely on the particular server you've selected and the settings you've selected. For example, uWSGI has the ability to run multiple threads and multiple processes. If uWSGI spins up multiple processes then they will each have their own copies of data, which are not shared unless you take the time to create some IPC (this is why you should keep state in a third party like a database instead of in in-memory objects, which are not shared across processes). Each process initializes a WSGI object (let's call it app) which the server calls via body_iter = app(environ, start_response). This app object is shared across all of the threads in the process and is invoked concurrently, thus it needs to be threadsafe (usually the structures the app uses are either threadlocal or readonly to deal with this, for example a connection pool to the database).
In general the answers to your questions are that things happen concurrently, and objects may or may not be shared based on your server model but in general you should take anything that you want to be shared and store it somewhere that can handle concurrency properly (a database).
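A bare-bones illustration of that contract: the app callable below is created once per process and then invoked concurrently from the server's threads, so anything defined at module level (a connection pool, say) is shared and must be threadsafe.

```python
# Minimal WSGI application: the server imports this module once per
# process, then calls app(environ, start_response) from many threads.
def app(environ, start_response):
    body = b"hello from a shared app object\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # the body_iter from the description above
```

Running it with, say, uwsgi --http :8000 --wsgi-file app.py --processes 4 --threads 2 gives you exactly the processes-times-threads model described above.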
The power and weakness of webservers is that they are in principle stateless. This enables them to be massively parallel. So indeed, for each page request a different thread may be spawned. Whether or not this indeed happens depends on the configuration settings of your webserver. There's also a cost to spawning many threads, so where possible threads are reused from a thread pool.
Almost all serious webservers have a page cache. So if the same page is requested multiple times, it can be retrieved from the cache. In addition, browsers do their own caching. A webserver has to be clever about what to cache and what not. Static pages aren't a big problem, although they may be replaced, in which case it is quite confusing to still get the old page served from the cache.
One way to defeat the cache is by adding (dummy) parameters to the page request.
The statelessness of the web was initially welcomed as a necessity to achieve scalability, where webpages of busy sites could even be served concurrently from different servers at nearby or remote locations.
However the trend is to have stateful apps. State can be maintained on the browser, easing the burden on the server. If it's maintained on the server it requires the server to know 'who's talking'. One way to do this is saving and recognizing cookies (small identifiable bits of data) on the client.
For databases the story is a bit different. As soon as anything gets stored that relates to a particular user, the application is in principle stateful. While there's no conceptual difference between retaining state on disk and in RAM, traditionally statefulness was left to the database, which in turn used thread pools and load balancing to do its job efficiently.
With the advent of very large internet shops like Amazon and Google, mandatory disk access to achieve statefulness created a performance problem. The answer was in-memory databases. While they may be accessed traditionally using e.g. SQL, they offer much more flexibility in the way data is stored conceptually.
A type of database that enjoys growing popularity is the persistent object store. With this kind of database, while the distinction can still be made formally, the boundary between webserver and database is blurred. Both have their data in RAM (but can swap to disk if needed), and both work with objects rather than flat records as in SQL tables. These objects can be interconnected in complex ways.
In short, there's an explosion of innovative storage / thread pooling / caching / persistence / redundancy / synchronisation technology, driving what has become popularly known as 'the cloud'.

Database access strategy for a Python web app

I've been writing a Python web app (in Flask) for a while now, and I don't believe I fully grasp how database access should work across multiple request/response cycles. Prior to Python my web programming experience was in PHP (several years worth) and I'm afraid that my PHP experience is misleading some of my Python work.
In PHP, each new request creates a brand new DB connection, because nothing is shared across requests. The more requests you have, the more connections you need to support. However, in a Python web app, where there is shared state across requests, DB connections can persist.
So I need to manage those connections, and ensure that I close them. Also, I need to have some kind of connection pool, because if I have just one connection shared across all requests, then requests could block waiting on DB access, if I don't have enough connections available.
Is this a correct understanding? Or have I identified the differences well? In a Python web app, do I need to have a DB connection pool that shares its connections across many requests? And the number of connections in the pool will depend on my application's request load?
I'm using Psycopg2.
Have you looked into SQLAlchemy at all? It takes care of a lot of the dirty details: it maintains a pool of connections and reuses/closes them as necessary.
To avoid connection setup cost, it seems the best way is to use pooling: either directly with psycopg2.pool, as described in the docs, or via a higher-level abstraction layer such as SQLAlchemy, as noted in the comments here.
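A minimal sketch of the direct psycopg2.pool route (the pool bounds and DSN are placeholders):

```python
# Hand the pool a min/max size once; then check connections out and
# back in around each unit of work.
from psycopg2 import pool

conn_pool = pool.SimpleConnectionPool(1, 10, dsn="dbname=mydb user=me")

conn = conn_pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
    conn.commit()
finally:
    conn_pool.putconn(conn)
```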
