I'm using the Django ORM in a non-Django application and would like to turn on the DEBUG setting so that I can periodically log my queries. So I have something vaguely like this:
from django.db import connection

def thread_main_loop():
    while keep_going:
        connection.queries[:] = []
        do_something()
        some_logging_function(connection.queries)
I would like to do this on my production server, but the doc warns, "It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you are debugging, but on a production server, it will rapidly consume memory."
Because the connection.queries list is cleared every time through the main loop of each thread, I believe that Django query logging will not cause my application to consume memory. Is this correct? And are there any other reasons not to turn DEBUG on in a production environment if I'm only using the Django ORM?
In DEBUG mode any error in your application will lead to the detailed Django stacktrace. This is very undesirable in a production environment as it will probably leak sensitive information that attackers can use against your site. Even if your application seems pretty stable, I wouldn't risk it.
I would rather employ a middleware that somehow logs queries to a file (a rough sketch follows below). Or take statistics from the database directly, e.g. (for MySQL):
watch -n 1 mysqladmin --user=<user> --password=<password> processlist
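Going back to the middleware idea, here is a minimal sketch; the class name, logger name and log format are arbitrary placeholders. It only does anything with DEBUG on, because connection.queries is only populated then. Point the "sql.file" logger at a FileHandler in your logging config and the queries end up in a file:

import logging
from django.db import connection

query_logger = logging.getLogger("sql.file")  # placeholder logger name

class QueryFileLoggingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # connection.queries holds dicts with 'sql' and 'time' keys,
        # but only when DEBUG is True.
        for query in connection.queries:
            query_logger.info("%s (%s s)", query["sql"], query["time"])
        return response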
Edit:
If you are only using the Django ORM, then afaik only two things will be different:
Queries will be saved with the CursorDebugWrapper
If a query results in a database warning, this will raise an exception.
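For what it's worth, django.db.reset_queries() is the documented way to clear the per-thread query log, so the loop from the question could be written roughly like this (do_something and some_logging_function are the question's own placeholders):

from django.db import connection, reset_queries

def thread_main_loop():
    while keep_going:
        reset_queries()  # clears connection.queries, same effect as the slice assignment
        do_something()
        some_logging_function(connection.queries)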
I have a Python Flask web application, which uses a Postgresql database.
When I put load on my application, it stops responding. This only happens when I request pages that use the database.
My setup:
nginx frontend (although in my test environment, skipping this tier doesn't make a difference), connecting via UNIX socket to:
gunicorn application server with 3 child processes, connecting via UNIX socket to:
pgbouncer, a connection pooler for PostgreSQL, connecting via TCP/IP to:
postgresql 13, the database server.
I need pgbouncer because SQLAlchemy has connection pooling per process. If I don't use pgbouncer, my database gets overloaded with connection requests very quickly.
I have a test environment on Debian Linux (with nginx) and on my iMac, and the application hang occurs on both machines.
I put load on the application with hey, an HTTP load generator. I use the defaults, which generate 200 requests with 50 workers. The test page issues two queries to the database.
When I run my load test, I see gunicorn worker timeouts. It kills the timed-out workers and starts new ones. Eventually (after a lot of timeouts) everything is fine again. While investigating, I lowered PostgreSQL's statement timeout setting, first to 30 seconds and later to 15; gunicorn's worker timeouts then happened more quickly. (I don't understand this behaviour; why would gunicorn recycle a worker when a query times out?)
When I look at pgbouncer with the show clients; command, I see some waiting clients. I think this is a hint of the problem: my web application is waiting on pgbouncer, and pgbouncer seems to be waiting on Postgres. When the waiting lines are gone, the application behaves normally again (verified by trying a few requests). Also, when I restart the gunicorn process, everything goes back to normal.
But with my application under stress, when I look at PostgreSQL (querying over a direct connection, bypassing pgbouncer), I can't see anything wrong or waiting. When I query pg_stat_activity, all I see are idle connections (except for the connection I use to query the view).
How do I debug this? I'm a bit stuck. pg_stat_activity should show queries running, but this doesn't seem to be the case. Is there something else wrong? How do I get my application to work under load, and how do I analyze this?
So, I solved this myself.
As it turned out, not being able to see what SQLAlchemy was doing was the most confusing part. I could see what Postgres was doing (pg_stat_activity), and also what pgbouncer was doing (show clients;).
SQLAlchemy does have echo and echo_pool settings, but for some reason these didn't help me.
What helped me was the realization that SQLAlchemy uses standard Python logging. For me, the best way to check it out was to add the default Flask logging handler to these loggers, something like this:
import logging

log_level = "INFO"
app.logger.setLevel(log_level)
for log_name in ["sqlalchemy.dialects", "sqlalchemy.engine", "sqlalchemy.orm", "sqlalchemy.pool"]:
    additional_logger = logging.getLogger(log_name)
    additional_logger.setLevel(log_level)
    additional_logger.addHandler(app.logger.handlers[0])
(of course I can control my solution via a config-file, but I left that part out for clarity)
Now I could see what was actually happening. Still no statistics, like with the other tiers, but this helped.
Eventually I found the problem. I was using two (slightly) different connection strings to the same database. I had them because the first was for authentication (used by Flask-Session and Flask-Login via the ORM), and the other for application queries (used by my own queries via PugSQL). In the end, the different connection strings were not necessary. However, they made SQLAlchemy do strange things under stress.
I'm still not sure what the actual problem was (probably there were two connection pools which were fighting each other), but this solved it.
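As a rough illustration of the fix (the URL and pool settings below are placeholders, not my real configuration): both the ORM side and the PugSQL side now get exactly the same connection string, so there is only one pool configuration in play instead of two competing ones.

from sqlalchemy import create_engine

DATABASE_URL = "postgresql+psycopg2://user:password@localhost/mydb"  # placeholder

# One engine / one pool; Flask-Session, Flask-Login and my own queries
# are all pointed at DATABASE_URL instead of two slightly different strings.
engine = create_engine(DATABASE_URL, pool_size=5, max_overflow=5)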
Nice benefit: I don't need pgbouncer in my situation, so that removes a lot of complexity.
I'm trying to deploy (including migrations) to a production environment, but my Django migrations (for example, adding columns) very often stall and never finish.
I'm working with PostgreSQL 9.3, and I have found one reason for this problem: if PostgreSQL has an active transaction holding a lock on the table, the ALTER TABLE query will not run. So far, restarting the PostgreSQL service before migrating has been my workaround, but I think this is a bad idea.
Is there a good way to make deployments go smoothly?
Open connections (more precisely, open transactions holding locks on the affected tables) will likely block schema updates. If you can't wait for existing connections to finish, or if your environment uses long-running connections, you may need to halt all connections while you run the update(s).
If the downtime is likely to be significant to you, it could be mitigated by a read-only replica that stays online. If not, and downtime for migrations is acceptable, make sure your site fails over to some sort of error/explanation page or redirect, so that requests arriving during the migration at least don't get raw failure codes.
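If it helps, here is a small sketch (assuming direct access to the same PostgreSQL instance Django uses) that lists sessions sitting "idle in transaction", which are the usual culprits holding locks that make ALTER TABLE wait forever. You could run something like this right before a migration:

from django.db import connection

with connection.cursor() as cursor:
    # Sessions that opened a transaction and never finished it hold locks
    # that schema changes have to queue behind.
    cursor.execute(
        "SELECT pid, state, xact_start, query "
        "FROM pg_stat_activity "
        "WHERE state = 'idle in transaction'"
    )
    for pid, state, xact_start, query in cursor.fetchall():
        print(pid, xact_start, query[:80])
        # If you are sure a session is stale, SELECT pg_terminate_backend(pid)
        # will end it (use with care).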
I am running two python files on one cpu in parallel, both of which make use of the same sqlite3 database. I am handling the sqlite3 database using sqlalchemy and my understanding is that sqlalchemy handles all the threading database issues within one app. My question is how to handle the access from the two different apps?
One of my two programs is a flask application and the other is a cronjob which updates the database from time to time.
It seems that even read-only tasks on the sqlite database lock the database, meaning that if both apps want to read or write at the same time I get an error.
OperationalError: (sqlite3.OperationalError) database is locked
Let's assume that my cronjob app runs every 5 min. How can I make sure that there are no collisions between my two apps? I could write some kind of flag to a file which I check before accessing the database, but it seems to me there should be a standard way to do this?
Furthermore I am running my app with gunicorn and in principle it is possible to have multiple jobs running... so I eventually want more than 2 parallel jobs for my flask app...
thanks
carl
It's true, SQLite isn't built for this kind of application. SQLite is really for lightweight, single-threaded, single-instance applications.
SQLite connections are one per instance, and if you start getting into some kind of threaded multiplexer (see https://www.sqlite.org/threadsafe.html) it'd be possible, but it's more trouble than it's worth. There are other solutions that provide that functionality: take a look at PostgreSQL or MySQL. Those databases are open source, well documented, well supported, and support the kind of concurrency you need.
I'm not sure how SQLAlchemy handles connections, but if you were using Peewee ORM then the solution is quite simple.
When your Flask app receives a request, you open a connection to the DB. Then, when Flask sends the response, you close the connection.
Similarly, in your cron script, open a connection when you start to use the DB, then close it when the process is finished.
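A minimal sketch of that open-per-request / close-on-teardown pattern with Flask and Peewee (the database filename is a placeholder):

from flask import Flask
from peewee import SqliteDatabase

app = Flask(__name__)
db = SqliteDatabase("app.db")  # placeholder path

@app.before_request
def open_db_connection():
    db.connect(reuse_if_open=True)

@app.teardown_request
def close_db_connection(exc):
    if not db.is_closed():
        db.close()

The cron script does the same around its body: connect at the start, close in a finally block, so it doesn't keep the database open longer than it is actually working.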
Another thing you might consider is using SQLite in WAL mode. This can improve concurrency. You set the journaling mode with a PRAGMA query when you open your connection.
For more info, see http://charlesleifer.com/blog/sqlite-small-fast-reliable-choose-any-three-/
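Since the question uses SQLAlchemy, one way to apply those PRAGMAs is an event listener on the engine's "connect" event. This is just a sketch; the URL and the timeout value are placeholders:

from sqlalchemy import create_engine, event

engine = create_engine("sqlite:///app.db")  # placeholder URL

@event.listens_for(engine, "connect")
def set_sqlite_pragmas(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA journal_mode=WAL")   # readers no longer block the writer
    cursor.execute("PRAGMA busy_timeout=5000")  # wait up to 5 s instead of raising "database is locked"
    cursor.close()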
I'm working on a distributed system where one process is controlling a hardware piece and I want it to be running as a service. My app is Django + Twisted based, so Twisted maintains the main loop and I access the database (SQLite) through Django, the entry point being a Django Management Command.
On the other hand, for user interface, I am writing a web application on the same Django project on the same database (also using Crossbar as websockets and WAMP server). This is a second Django process accessing the same database.
I'm looking for some validation here. Is there anything fundamentally wrong with this approach? I'm particularly worried about issues with the database (two different processes accessing it via the Django ORM).
Consider that Django, like all WSGI-based web servers, almost always has multiple processes accessing the database. Because a single WSGI process can handle only one request at a time, it's normal for servers to run multiple processes in parallel when they get any significant amount of traffic.
That doesn't mean there's no cause for concern. You have to treat the database as if data might change between any two calls to it. Familiarize yourself with how Django uses transactions (the default is autocommit mode, not atomic requests), and …
And oh, you said SQLite. Yeah, SQLite is probably not the best database to use when you need to write to it from multiple processes. I can imagine that might work for a single-user interface to a piece of hardware, but if you run into any problems when adding the webapp, you'll want to trade up to a database server like PostgreSQL.
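Coming back to the transaction point, a small sketch of wrapping a read-modify-write in an explicit transaction instead of relying on Django's default autocommit (Device here is a hypothetical model standing in for the hardware state):

from django.db import transaction

from myapp.models import Device  # hypothetical model

def update_device_state(device_id, new_state):
    # Group the read and the write into one transaction, so they apply
    # together or not at all, instead of each query committing on its own.
    with transaction.atomic():
        device = Device.objects.get(pk=device_id)
        device.state = new_state
        device.save()

On a database with row-level locks, such as PostgreSQL, you could also add select_for_update() to the query to block a concurrent writer.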
No there is nothing inherently wrong with that approach. We currently use a similar approach for a lot of our work.
Ok, here is my confusion/problem:
I develop on localhost, where I can raise exceptions and easily see the logs on the command line.
Then I deploy the code to the test, stage and production servers, and that is where the problem begins: it is not easy to see logs or debug errors and exceptions. For ordinary errors I guess django-debug-toolbar could be enabled, but I also get silent exceptions which don't crash anything, yet push the process towards failure. For example, I have a payment integration, and a few days ago payments were failing on return (the callback) to our site, but nothing was crashing; only a "payment process failed" message was shown, while the payment gateway vendor was working fine. I had to look for failure scenarios that could lead to this problem, and figured out that one DB save operation was not saving because a variable was missing.
Now my question, is Sentry (https://github.com/getsentry/sentry) an answer for that? Or is there any other option for this?
Please do ask if any further clarification is needed for my requirement.
Sentry is an option, but honestly it's too limited (I tried it a month or so ago); it's intended to track exceptions, but in the real world we should track important information and events too.
If you haven't set up application logging yet, I suggest you do so, following this example.
In my app I defined several loggers for different purposes. The Python logging configuration via dictionary (the one used by Django) is very powerful, and gives you full control over how things get logged: for example, you can write logs to a file or a database, send an email, call a third-party API, or whatever. If your app is running in a load-balanced environment (so you have several machines running your app), you can use a service like Loggly to aggregate the logs coming from your instances in a single place (and since it uses RSYSLOG, it aggregates not only your Django app logs, but also all the logs of your underlying OS).
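As an illustration of the dictionary-based config, a dedicated logger for the payment flow writing to its own file would have caught the silent failure from the question. The handler name, file path and logger name below are placeholders:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "payments_file": {
            "class": "logging.FileHandler",
            "filename": "/var/log/myapp/payments.log",  # placeholder path
            "formatter": "verbose",
        },
    },
    "loggers": {
        "myapp.payments": {  # placeholder logger name
            "handlers": ["payments_file"],
            "level": "INFO",
        },
    },
}

Then, in the payment callback view, logging.getLogger("myapp.payments").warning("missing field: %s", field_name) (with your own message and variables) leaves a trace even when nothing crashes.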
I also suggest using New Relic, which keeps track of a lot of things automatically: the queries executed and their timing, template loading time, errors, and a lot of other useful statistics.