I'm using SQLAlchemy 0.6.6 against a Postgres 8.3 DB on Windows 7 and Python 2.6. I am leaving the defaults for configuring pooling when I create my engine, which are pool_size=5, max_overflow=10.
For some reason, the connections keep piling up and I intermittently get "Too many clients" from PG. I am positive that connections are being closed in a finally block, as this application is only accessed via WSGI (CherryPy) and uses a connection-per-request pattern. I am also logging when connections are closed, just to make sure.
I've tried to see what's going on by adding echo_pool=True during my engine creation, but nothing is being logged. I can see SQL statements rolling through the console when I set echo=True, but nothing for pooling.
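For context, engine creation looks roughly like this (the connection URL is a placeholder). echo_pool logs through Python's logging under the sqlalchemy.pool logger, so configuring that logger directly is the reliable way to get output on the console:

import logging
from sqlalchemy import create_engine

# Give the pool logger a handler and verbosity; without this, pool
# messages can be swallowed even when echo_pool is set.
logging.basicConfig()
logging.getLogger("sqlalchemy.pool").setLevel(logging.DEBUG)

engine = create_engine("postgresql://user:pass@localhost/mydb",  # placeholder URL
                       echo_pool=True, pool_size=5, max_overflow=10)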
Anyway, this is driving me crazy, because my co-worker, who is on a Mac, doesn't have any of these issues (I know, get a Mac), so I'm trying to see if this is the result of a bug or something. Google has yielded nothing, so I'm hoping to get some help here.
Thanks,
cc
Turns out a ScopedSession was being used outside the normal application flow, and its close wasn't in a finally block.
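For anyone who lands here later, a minimal sketch of the fixed pattern (handle_request and do_work are hypothetical stand-ins for the real code; the URL is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder
Session = scoped_session(sessionmaker(bind=engine))

def do_work(session):
    return session.execute("SELECT 1").scalar()  # stand-in for real work

def handle_request():
    try:
        session = Session()
        return do_work(session)
    finally:
        # This was the missing piece: without remove() in a finally block,
        # an exception leaks the session's connection and they pile up in PG.
        Session.remove()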
Related
I have a working Python Cloud Foundry app on Bluemix / IBM Cloud that connects to, and otherwise works well with a DB2 on Cloud instance on Bluemix / IBM Cloud.
However, after long intervals (I haven't been able to measure the time), the connection to DB2 closes, and my queries fail. I could modify my code to check for this, but it would be great to be able to adjust the TCP keepalive settings. Something along the lines of this.
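(What I have in mind is something like the generic socket-level tuning below. SO_KEEPALIVE is portable, while the TCP_KEEP* constants are Linux-specific, and as far as I know most DB drivers, ibm_db included, don't expose their socket directly, so this is only a sketch of the idea.)

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # enable keepalive probes
if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only tuning knobs
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before dropping
sock.connect(("db2.example.com", 50000))  # placeholder host and port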
Would greatly appreciate any pointers. I'm not sure how one adjusts the client-side settings on a Python Cloud Foundry app.
Cheers.
I tried reproducing this using a Python Flask Cloud Foundry app and a Db2 on Cloud Lite plan instance. My connections seem to stay alive for hours. It might be something in the Python application you are running that has its own timeout.
I could not find a satisfactory resolution. Like #jackic23 mentioned, there may be other factors at play. A few things:
The problem is hard to replicate
The app works fine on localhost, but not when deployed
I do have other simultaneous CRUD operations happening, which may be conflicting somehow. There could be a race condition that simply never happens on localhost.
My Flask app is deployed using gunicorn, which was killing the worker after 30 seconds, so the DB connection terminated mid-query. I adjusted the timeout to 75 seconds (see the config sketch after this answer), but then the query started returning in under 1 second anyway.
At this point, I have switched to an Enterprise DB2 plan, and the app is working fine. To #jackic23's point, there may still be something else going on here (in my app's code) that may eventually need to be figured out.
For now, I'm moving on. Thank you #jackic23 for looking into this!
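For anyone hitting the same gunicorn worker timeout, the change was a one-liner (a sketch; app:app is a placeholder for the real entry point):

# gunicorn.conf.py -- picked up from the working directory, or passed
# explicitly: gunicorn -c gunicorn.conf.py app:app
# Workers silent for longer than this many seconds are killed and
# restarted, which is what was cutting the DB query off mid-flight.
timeout = 75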
I'm having trouble with MySQL timing out and going away after 8 hours. I am using google app engine as a host. My Python script uses the Tornado framework.
Right now I instantiate my MySQL db connection before any functions, right at the top of the main server script. Once I deploy that, the clock starts ticking, and 8 hours or so later MySQL goes away and I have to deploy my script again.
I haven't been using db.close() at all, because I hear that re-establishing the database connection takes a long time. Is this true? Or is there a proper way to use db.close()?
One of my friends suggested I try getting the database instance and then closing it after each function. Is that recommended, and where might I find some tutorials on that?
I'm mostly looking for resources here, but if someone wants to lay it out for me that would be awesome.
Thank you all in advance.
The connection is going away because of the wait_timeout session variable, which is "the number of seconds the server waits for activity on a noninteractive connection before closing it":
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
A good approach is to close the connection each time and create a new one when needed, if you are not reusing the same connection very frequently; otherwise, you can increase the value of wait_timeout.
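A minimal sketch of the close-each-time approach, assuming the MySQLdb driver (connection parameters are placeholders; any DB-API driver works the same way):

import MySQLdb

def run_query(sql, params=None):
    # A fresh connection per call means wait_timeout can never bite;
    # connecting to MySQL is typically only a few milliseconds.
    conn = MySQLdb.connect(host="localhost", user="app",
                           passwd="secret", db="mydb")  # placeholders
    try:
        cur = conn.cursor()
        cur.execute(sql, params or ())
        rows = cur.fetchall()
        conn.commit()  # no-op for SELECTs, needed for writes
        return rows
    finally:
        conn.close()  # always runs, even if execute() raises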
Establishing a connection to a MySQL database should be quite fast and it is certainly good practice to keep the connection open only for as long as you need it.
I am not certain why your connection should be non-responsive for 8 hours - have you tried checking your settings?
The correct command in Python is connection.close().
I've looked through stackoverflow and can see some oldish posts on this and wondered what the current thinking is about pooling connections in Python for MySQL.
We have a set of python processes that are threading with each thread creating a connection to MySQL. This all works fine, but we can have over 150 connections to MySQL.
When I look at the process list in MySQL, I can see that most of the connections are in the Sleep state most of the time. The application connects to the Twitter streaming API, so it's busy, but that only accounts for a few connections.
Is there a good way of adding connection pooling to Python MySQL and can this be done simply without re-writing all of the existing code?
Many thanks.
PT
See DBUtils
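A minimal sketch of wrapping an existing driver with DBUtils' PooledDB (the import path reflects the older DBUtils 1.x layout; connection parameters are placeholders):

import MySQLdb
from DBUtils.PooledDB import PooledDB  # DBUtils 2.x moved this to dbutils.pooled_db

# One shared pool per process; threads borrow connections and hand them back.
pool = PooledDB(creator=MySQLdb, maxconnections=20,
                host="localhost", user="app", passwd="secret", db="mydb")

conn = pool.connection()  # near drop-in replacement for MySQLdb.connect(...)
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
finally:
    conn.close()  # returns the connection to the pool rather than closing it

Because the pooled connection exposes the same cursor()/close() interface, code that funnels all connecting through one place usually only needs that one call swapped.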
If you have an abstraction layer for MySQL, you can modify that layer to avoid rewriting all of the code.
If not, you will have to patch your Python-MySQL driver.
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this; it is my first experience with Python and PostgreSQL, but I have a few years' experience with PHP, ASP.NET, MySQL, and SQL Server.
EDIT: I am running this locally; if the connections are closing like they should be, then I only have 1 connection open at a time. I did have a GUI open to the database, but even with it closed I am getting this error. It happens very shortly after I run my program. I have a function I call that returns a connection, opened like:
psycopg2.connect(connectionString)
Thanks
Final Edit:
It was my mistake: I was recursively calling the same method by accident, so it was opening a new connection over and over. It has been a long day...
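For anyone else who lands here, a minimal sketch of a close-even-on-error pattern (the connection string is a placeholder; note that in psycopg2, "with conn:" manages the transaction, not the connection, so the close has to be arranged separately):

import contextlib
import psycopg2

def fetch_all(sql):
    conn_string = "dbname=mydb user=app password=secret host=localhost"  # placeholder
    # contextlib.closing guarantees conn.close() runs even if the query raises.
    with contextlib.closing(psycopg2.connect(conn_string)) as conn:
        with conn.cursor() as cur:  # the cursor is closed on exit as well
            cur.execute(sql)
            return cur.fetchall()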
This error means what it says: there are too many clients connected to PostgreSQL.
Questions you should ask yourself:
Are you the only one connected to this database?
Are you running a graphical IDE?
What method are you using to connect?
Are you testing queries at the same time that you're running the code?
Any of these things could be the problem. If you are the admin, you can raise the client limit (max_connections in postgresql.conf), but if a program is holding connections open, that won't help for long.
There are many reasons why you could be having too many clients running at the same time.
Make sure your db connection call isn't inside any kind of loop. I was getting the same error from my script until I moved my db.database() call out of my program's repeating execution loop.
It simply means too many clients are connected to PostgreSQL at the same time.
I was running a PostGIS container and Django in separate Docker containers. In my case, restarting both the db and app containers solved the problem.
I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away')
I did a bit of checking and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem, though, because sqlalchemy.pool_recycle in the config file should automatically keep them alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much, though, and DoS my own database :).
Maybe something in my MySQL config is goofy? Not sure where to look exactly.
Other relevant details:
Python 2.5
Pylons: 0.9.6.2 (w/ sql_alchemy)
MySQL: 5.0.51
I think I fixed it. It turns out I had a simple config error. My ini file read:
sqlalchemy.default.url = [connection string here]
sqlalchemy.pool_recycle = 1800
The problem is that my environment.py file declared that the engine would only map keys with the prefix sqlalchemy.default., so pool_recycle was ignored.
The solution is to simply change the second line in the ini to:
sqlalchemy.default.pool_recycle = 1800
You might want to check MySQL's timeout variables:
show variables like '%timeout%';
You're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the default is 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different, or whoever administers the server has configured things differently.
AFAICT, pool_recycle doesn't actually keep the connections alive; it expires them on its own before MySQL kills them. I'm not familiar with Pylons, but if having the connections intermittently issue a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that Pylons thinks it has expired?
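A minimal sketch of that SELECT 1 keepalive idea, assuming a plain SQLAlchemy engine (the URL and interval are placeholders; anything comfortably below wait_timeout works, and raw-string execution matches the SQLAlchemy of this era, while newer releases want sqlalchemy.text()):

import threading
from sqlalchemy import create_engine

engine = create_engine("mysql://user:pass@localhost/mydb")  # placeholder URL

def keepalive(interval=600):
    # Ping one pooled connection, then reschedule ourselves.
    conn = engine.connect()
    try:
        conn.execute("SELECT 1")  # trivial round-trip keeps the link warm
    finally:
        conn.close()  # return the connection to the pool
    threading.Timer(interval, keepalive, args=(interval,)).start()

keepalive()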