Postgres 8.4.4 + psycopg2 + Python 2.6.5 + Win7 instability

You can see the combination of software components I'm using in the title of the question.
I have a simple 10-table database running on a Postgres server (Win 7 Pro). I have client apps (python using psycopg to connect to Postgres) who connect to the database at random intervals to conduct relatively light transactions. There's only one client app at a time doing any kind of heavy transaction, and those are typically < 500ms. The rest of them spend more time connecting than actually waiting for the database to execute the transaction. The point is that the database is under light load, but the load is evenly split between reads and writes.
My client apps run as servers/services themselves. I've found that it is pretty common for me to be able to (1) take the Postgres server completely down, and (2) ruin the database by killing the client app with a keyboard interrupt.
By (1), I mean that the Postgres process on the server aborts and the service needs to be restarted.
By (2), I mean that the Postgres server crashes again whenever a client tries to access the database after the server has restarted and (presumably) finished its recovery-mode operations. I need to delete the old database/schema from the server and rebuild it each time to return things to a stable state. (After recovery, I have tried various combinations of VACUUM to see whether that improves stability; the vacuums run, but the server still goes down quickly once clients try to access the database again.)
I don't recall seeing the same effect when I kill the client app with "taskkill" - only when using a keyboard interrupt to take the Python process down. It doesn't happen every time, but frequently enough to be a major concern (maybe 25% of the time).
I'm really surprised that anything on a client could take down an "enterprise class" database. Can anyone share tips on how to improve robustness, and help me understand why this is happening in the first place? Thanks, M

If you're having problems with postgresql acting up like this, you should read this page:
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
For an example of a real bug, and how to ask a question that gets action and answers, read this thread:
http://archives.postgresql.org/pgsql-general/2010-12/msg01030.php
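None of that explains server-side corruption (which points at a server bug or a broken environment, and is worth reporting as described above), but on the client side it is still worth shutting down cleanly on Ctrl+C. Here is a minimal sketch, with hypothetical connection parameters and a hypothetical table, of rolling back and closing the psycopg2 connection when a keyboard interrupt arrives:

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb", user="myuser")  # hypothetical parameters
try:
    cur = conn.cursor()
    cur.execute("UPDATE widgets SET status = %s WHERE id = %s", ("done", 42))  # hypothetical table
    conn.commit()
except KeyboardInterrupt:
    # Ctrl+C arrived mid-transaction: roll back instead of leaving it dangling.
    conn.rollback()
    raise
finally:
    conn.close()

Whether or not this changes the server's behaviour, it ensures the client never abandons an open transaction when it is interrupted.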

Related

How do I analyze what is hanging my Flask application

I have a Python Flask web application, which uses a Postgresql database.
When I put load on my application, it stops responding. This only happens when I request pages that use the database.
My setup:
nginx frontend (although in my test environment, skipping this tier doesn't make a difference), connecting via UNIX socket to:
gunicorn application server with 3 child processes, connecting via UNIX socket to:
pgbouncer, a connection pooler for PostgreSQL, connecting via TCP/IP to:
postgresql 13, the database server.
I need pgbouncer because SQLAlchemy has connection pooling per process; if I don't use pgbouncer, my database gets overloaded with connection requests very quickly (see the sketch below).
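For illustration, a minimal sketch (the URL is hypothetical) of the two usual ways to keep this per-process pooling under control: cap SQLAlchemy's pool per gunicorn worker, or delegate all pooling to pgbouncer by disabling SQLAlchemy's pool with NullPool.

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

DATABASE_URL = "postgresql+psycopg2://app@localhost/app"  # hypothetical URL

# Option A: keep SQLAlchemy's per-process pool, but cap it per gunicorn worker.
capped_engine = create_engine(DATABASE_URL, pool_size=5, max_overflow=0)

# Option B: let pgbouncer do all the pooling; SQLAlchemy opens and closes a
# connection per checkout, so no per-process pool builds up at all.
unpooled_engine = create_engine(DATABASE_URL, poolclass=NullPool)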
I have a test environment on Debian Linux (with nginx) and on my iMac, and the application hang occurs on both machines.
I put load on the application with hey, an HTTP load generator. I use the defaults, which generate 200 requests from 50 concurrent workers. The test page issues two queries to the database.
When I run my load test, I see gunicorn worker timeouts. It kills the timed-out workers and starts new ones. Eventually (after a lot of timeouts) everything is fine again. To investigate this, I lowered PostgreSQL's statement timeout setting: first it was 30 seconds, later I set it to 15 seconds. Gunicorn's worker timeouts now happen more quickly. (I don't understand this behaviour: why would gunicorn recycle a worker when a query times out?)
When I look at pgbouncer with the show clients; command, I see some waiting clients. I think this is a hint at the problem: my web application is waiting on pgbouncer, and pgbouncer seems to be waiting for Postgres. When the waiting entries are gone, the application behaves normally again (judging by a few test requests). Also, when I restart the gunicorn processes, everything goes back to normal.
But with my application under stress, when I look at PostgreSQL directly (querying over a connection that bypasses pgbouncer), I can't see anything wrong or waiting. When I query pg_stat_activity, all I see are idle connections (apart from the connection I use to query the view).
How do I debug this? I'm a bit stuck. pg_stat_activity should show running queries, but that doesn't seem to be the case. Is something else wrong? How do I get my application to work under load, and how do I analyze this?
So, I solved my problem.
As it turned out, the most confusing part was not being able to see what SQLAlchemy was doing. I could see what Postgres was doing (pg_stat_activity) and also what pgbouncer was doing (show clients;).
SQLAlchemy does have echo and echo_pool settings, but for some reason they didn't help me.
What helped was realizing that SQLAlchemy uses standard Python logging. For me, the best way to see what it was doing was to attach Flask's default log handler to the relevant SQLAlchemy loggers, something like this:
import logging

log_level = "INFO"
app.logger.setLevel(log_level)
# Attach Flask's default handler to the SQLAlchemy loggers so their output
# shows up alongside the application's own log messages.
for log_name in ["sqlalchemy.dialects", "sqlalchemy.engine", "sqlalchemy.orm", "sqlalchemy.pool"]:
    additional_logger = logging.getLogger(log_name)
    additional_logger.setLevel(log_level)
    additional_logger.addHandler(app.logger.handlers[0])
(of course I can control my solution via a config-file, but I left that part out for clarity)
Now I could see what was actually happening. Still no statistics, like with the other tiers, but this helped.
Eventually I found the problem. I was using two (slightly) different connection strings for the same database: one for authentication (used by Flask-Session and Flask-Login via the ORM) and one for application queries (my own queries via PugSQL). In the end, the different connection strings were not necessary, but they made SQLAlchemy do strange things under stress.
I'm still not sure what the actual problem was (probably two connection pools fighting each other), but using a single connection string solved it.
Nice benefit: I don't need pgbouncer in my situation, which removes a lot of complexity.
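For reference, a minimal sketch of the fix, with a hypothetical connection string and queries path: define a single connection string and point both the ORM side and the PugSQL side at it.

import pugsql
from sqlalchemy import create_engine

# The one and only connection string (hypothetical URL).
DATABASE_URL = "postgresql+psycopg2://app@localhost/app"

# Engine used by the ORM side (Flask-Session / Flask-Login models).
engine = create_engine(DATABASE_URL, pool_size=5, max_overflow=0)

# PugSQL queries pointed at the very same URL, so there is no second,
# slightly different configuration for SQLAlchemy to juggle under load.
queries = pugsql.module("queries/")  # hypothetical path to the .sql files
queries.connect(DATABASE_URL)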

Ensuring at most a single instance of job executing on Kubernetes and writing into Postgresql

I have a Python program that I am running as a Job on a Kubernetes cluster every 2 hours. I also have a webserver that starts the job whenever user clicks a button on a page.
I need to ensure that at most only one instance of the Job is running on the cluster at any given time.
Given that I am using Kubernetes to run the job and connecting to PostgreSQL from within the job, the solution should somehow leverage these two. I thought about it a bit and came up with the following ideas:
Find a setting in Kubernetes that would enforce this limit, so that attempts to start a second instance fail. I was unable to find such a setting.
Create a shared lock or mutex. The disadvantage is that if the job crashes, it may never release the lock.
Kubernetes runs etcd; maybe I can use that.
Create a 'lock' table in PostgreSQL: when a new instance connects, it checks whether it is the only one running. Use transactions somehow so that one instance wins and proceeds while the others quit. I have not fully thought this out, but it should work.
Query the Kubernetes API for the label I use on the job and see whether any instances are running. This may not be atomic, so more than one instance could slip through.
What are the usual solutions to this problem given the platform choice I made? What should I do, so that I don't reinvent the wheel and have something reliable?
A completely different approach would be to run a (web) server that executes the job functionality. At a high level, the idea is that the webserver can contact this new job server to execute functionality. In addition, this new job server will have an internal cron to trigger the same functionality every 2 hours.
There could be 2 approaches to implementing this:
You can put the checking mechanism inside the jobserver code, to ensure that even if 2 API calls hit the job server simultaneously, only one executes while the other waits. You could use the language platform's locking features to achieve this, or use a message queue (a minimal sketch follows after this comparison).
You can put the checking mechanism outside the jobserver code (in the database) to ensure that only one API call executes. Similar to what you suggested. If you use a postgres transaction, you don't have to worry about your job crashing and the value of the lock remaining set.
The pros/cons of both approaches are straightforward. The major difference in my mind between 1 and 2 is that if you update the job server code, you might briefly have 2 job servers running at the same time, which would destroy the isolation property you want. Hence, the database might work better, or be more idiomatic in the k8s sense (all servers are stateless, so all the k8s goodies work; put any shared state in a database that can handle concurrency).
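A minimal sketch of approach 1, assuming a single job-server process and a hypothetical do_the_work() function: concurrent API calls are serialized (or rejected) with an in-process lock.

import threading

_job_lock = threading.Lock()  # one lock per job-server process

def do_the_work():
    ...  # hypothetical job body

def handle_job_request():
    # Refuse a second concurrent run; use a plain "with _job_lock:" block instead
    # if you prefer the second caller to wait and run afterwards.
    if not _job_lock.acquire(blocking=False):
        return "job already running"
    try:
        do_the_work()
        return "done"
    finally:
        _job_lock.release()

Note that this only holds as long as exactly one job-server process exists, which is exactly the rolling-update caveat above.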
Addressing your ideas, here are my thoughts:
Find a setting in k8s that will limit this: k8s will not start two objects with the same name (in the metadata of the spec), but anything else goes for a Job, and k8s will happily start another one.
a) etcd3 supports distributed locking primitives. However, I've never used this and I don't really know what to watch out for.
b) A Postgres lock value should work. Even if the job crashes, you don't have to worry about the value of the lock remaining set (see the sketch at the end of this answer).
Querying the k8s API server for things that should be atomic is not a good idea, as you said. I've used a system that reacts to k8s events (like an annotation change on an object spec), but I've had bugs where my 'operator' suddenly stopped receiving k8s events and needed to be restarted; and again, if I push an update to the event-handler server, there might briefly be 2 event handlers running at the same time.
I would recommend sticking with what you are most familiar with. In my case that would be implementing a job server as a k8s deployment that runs as a server and listens for events/API calls.
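To make the Postgres option (2b) concrete, here is a minimal sketch using a session-level advisory lock; the connection string and run_job() body are hypothetical. Because the lock belongs to the session, it is released automatically when the job's connection goes away, crash or not.

import sys
import psycopg2

JOB_LOCK_KEY = 42  # arbitrary application-wide key identifying this job (hypothetical)

def run_job():
    ...  # hypothetical job body: the actual work goes here

conn = psycopg2.connect("dbname=app user=app")  # hypothetical connection string
conn.autocommit = True
cur = conn.cursor()

# Try to take the advisory lock; returns False if another instance holds it.
cur.execute("SELECT pg_try_advisory_lock(%s)", (JOB_LOCK_KEY,))
if not cur.fetchone()[0]:
    print("another instance is already running, exiting")
    sys.exit(0)

try:
    run_job()  # keep conn open while the job runs, otherwise the lock is released
finally:
    cur.execute("SELECT pg_advisory_unlock(%s)", (JOB_LOCK_KEY,))
    conn.close()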

Persistant MySQL connection in Python for social media harvesting

I am using Python to stream large amounts of Twitter data into a MySQL database. I anticipate the job running for several weeks. I have code that interacts with the Twitter API and gives me an iterator that yields lists, each list corresponding to a database row. What I need is a means of maintaining a persistent database connection for several weeks. Right now I find myself having to restart my script repeatedly when the connection is lost, sometimes because MySQL was restarted.
Does it make the most sense to use the mysqldb library, catch exceptions, and reconnect when necessary? Or is there a ready-made solution as part of SQLAlchemy or another package? Any ideas appreciated!
I think the right answer is to try to handle the connection errors; it sounds like you'd be pulling in a much larger library just for this one feature, and try/except-and-reconnect is probably how it's done at whatever level of the stack it lives. If necessary, you could also multithread these things, since they're probably IO-bound (i.e. suitable for Python GIL threading as opposed to multiprocessing), and decouple the production and the consumption with a queue, which would take some of the load off the database connection.
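As a sketch of the catch-and-reconnect idea (table name, credentials, and retry numbers are all illustrative), the consumer can treat the connection as disposable and rebuild it whenever MySQL drops it:

import time
import MySQLdb

def store_rows(row_iterator, retries=5):
    conn = None
    for row in row_iterator:
        for attempt in range(retries):
            try:
                if conn is None:
                    # (Re)connect lazily; credentials are illustrative.
                    conn = MySQLdb.connect(host="localhost", user="harvester",
                                           passwd="secret", db="twitter")
                cur = conn.cursor()
                cur.execute("INSERT INTO tweets (tweet_id, text) VALUES (%s, %s)", row)
                conn.commit()
                break
            except MySQLdb.OperationalError:
                # Connection lost (e.g. MySQL restarted): drop it and retry with backoff.
                conn = None
                time.sleep(2 ** attempt)

A producer thread reading from the Twitter API can feed a Queue that this consumer drains, which keeps the API handling and the reconnection logic separate.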

master slave postgresql with logging and monitoring for a django application

I have a django application running.
The database backend that I use for it is PostgreSQL.
Everything is working fine for me.
Now I want to set up master-slave replication for my database, such that:
Whatever changes happen on the master are replicated to the slave.
If the master shuts down, the slave takes charge, and an error notification is sent.
Backup is created automatically of the database.
Logging is taken care of.
Monitoring is taken care of.
I went through https://docs.djangoproject.com/en/dev/topics/db/multi-db/ the entire article.
But I don't have much idea how to implement all 5 of the steps above. As you will have gathered, I don't have much experience, so please suggest pointers on how to proceed. Thanks.
Have I missed anything that should be kept in mind for the database?
It sounds like you want a dual-node HA setup for PostgreSQL, using synchronous streaming replication and failover.
Check out http://repmgr.org/ for one tool that'll help with this, particularly when coupled with a PgBouncer front-end. You may also want to read about "heartbeat", "high availability", "fencing" and "STONITH".
You need to cope with the master continuing to run but failing, not just it shutting down. Consider what happens if the master runs out of disk space; all write queries will return errors, but it won't shut down or crash.
This is really an issue of database administration / server management.

How can I determine what my database's connection limits should be?

At my organization, PostgreSQL databases are created with a 20-connection limit as a matter of policy. This tends to interact poorly with connection pools when multiple applications are in play, since many of them open their full complement of connections and hold them idle.
As soon as there are more than a couple of applications in contact with the DB, we run out of connections, as you'd expect.
Pooling behaviour is a new thing here; until now we've managed pooled connections by serializing access to them through a web-based DB gateway (?!) or by not pooling anything at all. As a consequence, I'm having to explain over and over again (literally, 5 trouble tickets from one person over the course of the project) how the pooling works.
What I want is one of the following:
A solid, inarguable rationale for increasing the number of available connections to the database in order to play nice with pools.
If so, what's a safe limit? Is there any reason to keep the limit to 20?
A reason why I'm wrong and we should cut the size of the pools down or eliminate them altogether.
For what it's worth, here are the components in play. If it's relevant how one of these is configured, please weigh in:
DB: PostgreSQL 8.2. No, we won't be upgrading it as part of this.
Web server: Python 2.7, Pylons 1.0, SQLAlchemy 0.6.5, psycopg2
This is complicated by the fact that some parts of the system access data through the SQLAlchemy ORM using a manually configured engine, while others use a different engine factory (still SQLAlchemy) written by one of my associates, which wraps the connection in an object that mimics an old PHP API.
Task runner: Python 2.7, celery 2.1.4, SQLAlchemy 0.6.5, psycopg2
I think it's reasonable to require one connection per concurrent activity, and it's reasonable to assume that concurrent HTTP requests are concurrently executed.
Now, the number of concurrent HTTP requests you want to process should scale with (a) the load on your server and (b) the number of CPUs you have available. If all goes well, each request consumes CPU time somewhere (in the web server, the application server, or the database server), meaning you couldn't process more requests concurrently than you have CPUs. In practice, not everything goes well: some requests wait for IO at some point and don't consume CPU, so it's fine to process somewhat more concurrent requests than you have CPUs.
Still, assuming you have, say, 4 CPUs, allowing 20 concurrent requests is already quite a load. I'd rather throttle HTTP requests than increase the number of requests that can be processed concurrently. If you find that a single request needs more than one connection, you have a flaw in your application.
So my recommendation is to cope with the limit, and make sure that there are not too many idle connections (compared to the number of requests that you are actually processing concurrently).
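To make that concrete, here is a sketch with illustrative numbers: keep processes × (pool_size + max_overflow) under the 20-connection cap, and recycle connections so idle ones don't linger.

from sqlalchemy import create_engine

DB_URL = "postgresql+psycopg2://app@dbhost/app"  # hypothetical URL

WEB_PROCESSES = 4     # Pylons worker processes (illustrative)
TASK_PROCESSES = 2    # celery worker processes (illustrative)

# Budget: (4 + 2) processes * (2 + 1) connections each = 18 <= 20.
engine = create_engine(DB_URL, pool_size=2, max_overflow=1, pool_recycle=3600)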
