I've looked through Stack Overflow and can see some oldish posts on this, and wondered what the current thinking is about pooling connections to MySQL from Python.
We have a set of Python processes that use threads, with each thread creating its own connection to MySQL. This all works fine, but we can have over 150 connections to MySQL.
When I look at the process state in MySQL, I can see that most of the connections are asleep most of the time. The application is connecting to a Twitter streaming API, so it's busy, but this only accounts for a few connections.
Is there a good way of adding connection pooling to Python's MySQL access, and can this be done simply, without rewriting all of the existing code?
Many thanks.
PT
See DBUtils
If you have an abstraction layer over MySQL, you can modify that layer to avoid rewriting all of the code.
If not, you have to hack your Python-MySQL driver.
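Roughly like this (a minimal sketch, assuming MySQLdb as the driver and DBUtils' PooledDB; your connection parameters will differ, and newer DBUtils releases spell the import `from dbutils.pooled_db import PooledDB`):

```python
from DBUtils.PooledDB import PooledDB
import MySQLdb

# One pool shared by all threads; maxconnections caps what MySQL sees
# instead of the 150+ raw connections you have now.
pool = PooledDB(
    creator=MySQLdb,    # any DB-API 2 module works as the creator
    maxconnections=20,
    host='localhost',   # remaining kwargs are passed through to MySQLdb.connect
    user='user',
    passwd='secret',
    db='mydb',
)

# In each thread, ask the pool instead of calling MySQLdb.connect():
conn = pool.connection()
cur = conn.cursor()
cur.execute("SELECT 1")
cur.close()
conn.close()  # returns the connection to the pool rather than closing it
```

Since pool.connection() returns an object with the same cursor/commit interface as a normal connection, swapping your connect call for the pool lookup is often the only change the rest of the code needs.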
Related
I have a Python application which interacts with a Vertica database through the vertica-python client. Currently there is no connection pool to manage the connections; instead, a new connection is opened for every request and then closed at the end of the request. This design is costly when handling concurrent requests. The application runs under uWSGI behind an Nginx server to process multiple requests.
I would like to use an existing connection pool to handle connections to Vertica from Python, but I can't seem to find connection pools like c3p0 or HikariCP for Python. Could you please help me with connection pools for Python and Vertica?
For native Postgres, have a look at some of the connection pools discussed at Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
For Vertica, it doesn't look like connection pooling is available in the native driver, though it might be worth posting an issue on GitHub if you'd like more specific details. You could probably use Vertica's ODBC driver through pyODBC, since that supports connection pooling if configured as discussed at http://www.unixodbc.org/doc/conn_pool.html
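On the Python side the change is small; the actual pooling is configured in the ODBC driver manager (unixODBC's odbcinst.ini, per the link above). A sketch, where the DSN name and credentials are assumptions:

```python
import pyodbc

# ODBC-level pooling must be enabled before the first connection is made;
# the pool itself lives in the driver manager, not in pyodbc.
pyodbc.pooling = True

# 'VerticaDSN' is a hypothetical DSN defined in odbc.ini that points at
# Vertica's ODBC driver.
conn = pyodbc.connect('DSN=VerticaDSN;UID=dbadmin;PWD=secret')
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone())
conn.close()  # with pooling on, the driver manager keeps this around for reuse
```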
So I have to make a small website for internal use at work (my work is not related to programming). Since our office is about 200 people, I thought I'd use the SocketServer module with an SQLite database, and learn some new stuff along the way. From what I see, the only way to do it is to connect to the database on every request. Isn't that expensive? What happens if two people send requests and try to connect to the database at the same time (or close to it)? So I have:
Start the server
At every request the server initializes a RequestHandler instance, and then I have to connect to the database?
I am not posting my actual code because it's a general question about how the process works, but the rough per-request pattern I mean is sketched below.
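A minimal sketch of that pattern (Python 2, since SocketServer was renamed socketserver in Python 3; the schema and port are made up):

```python
import SocketServer
import sqlite3

class Handler(SocketServer.StreamRequestHandler):
    def handle(self):
        # Open a fresh connection for every request.
        conn = sqlite3.connect('office.db')
        try:
            row = conn.execute("SELECT count(*) FROM users").fetchone()
            self.wfile.write("users: %d\n" % row[0])
        finally:
            conn.close()

server = SocketServer.ThreadingTCPServer(('0.0.0.0', 8000), Handler)
server.serve_forever()
```

With ThreadingTCPServer, a per-request connection also sidesteps SQLite's default rule that a connection may only be used from the thread that created it.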
I'm having trouble with MySQL timing out and going away after 8 hours. I am using google app engine as a host. My Python script uses the Tornado framework.
Right now I instantiate my MySQL db connection before any functions right at the top of the main server script. Once I deploy that, the clock starts ticking and 8 hours or so later, MySQL will go away and I will have to deploy my script again.
I haven't been using db.close() at all because I hear that restarting the database connection takes a long time. Is this true? Or is there a proper way to use db.close()?
One of my friends suggested I try getting the database instance and then closing it after each function... is that recommended, and where might I find some tutorials on that?
I'm mostly looking for resources here, but if someone wants to lay it out for me that would be awesome.
Thank you all in advance.
The connection is going away because of the wait_timeout session variable, which is "the number of seconds the server waits for activity on a noninteractive connection before closing it."
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
A good approach is to close the connection after each use and create a new one the next time you need it. If you are reusing the same connection frequently, you can instead increase the value of wait_timeout.
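Another option is to check the connection before each use and let the driver reconnect if the server has dropped it. A sketch, assuming the MySQLdb driver (pymysql spells the same call ping(reconnect=True)):

```python
import MySQLdb

db = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='mydb')

def get_cursor():
    # ping(True) asks the client library to reconnect transparently if the
    # server closed the connection after wait_timeout seconds of inactivity.
    db.ping(True)
    return db.cursor()
```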
Establishing a connection to a MySQL database should be quite fast and it is certainly good practice to keep the connection open only for as long as you need it.
I am not certain why your connection should be non-responsive for 8 hours - have you tried checking your settings?
The correct command in Python is connection.close().
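In practice that looks something like this (a sketch with MySQLdb; substitute your own driver and credentials):

```python
import MySQLdb

def run_query(sql):
    # Open, use, and close the connection within one unit of work, so nothing
    # sits idle long enough to hit wait_timeout.
    conn = MySQLdb.connect(host='localhost', user='user',
                           passwd='secret', db='mydb')
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()
    finally:
        conn.close()
```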
I want to do the following:
Have a piece of software written in Python 2.7 running
This software connects to a database (currently a MySQL database)
This software listens for connections on TCP port X
When a connection is established and a client requests or commands something, the software uses the database to store, remove, or fetch information (based on the request or command).
What I currently have in mind is the classic approach: connect to the database, store the connection in an object (as a variable) that is passed to the threads spawned by the connection listener, and have those threads use that stored connection for whatever they need to do with the database. (I know that multiprocessing is better than multithreading in Python, but that's not related to my question at this time.)
Now my question: how should I use SQLAlchemy in this context? I am quite confused, even though I have been reading a lot of documentation about it, and there don't seem to be good examples of how to handle this kind of situation specifically, even though I have searched quite a lot.
What is the problem here? SQLAlchemy maintains a thread-local connection pool... what else do you need?
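Concretely (a minimal sketch; the URL, table, and handler are assumptions): create the engine once at startup, and let each thread your listener spawns pull its own session, and therefore its own pooled connection, via scoped_session:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker, scoped_session

# Create one engine at process startup; it owns the connection pool.
engine = create_engine(
    'mysql://user:secret@localhost/mydb',  # hypothetical connection URL
    pool_size=5,
    max_overflow=10,
)

# scoped_session hands each thread its own Session bound to that pool.
Session = scoped_session(sessionmaker(bind=engine))

def handle_request(payload):
    # Called from the threads spawned by your connection listener.
    session = Session()
    try:
        session.execute(text("INSERT INTO log (msg) VALUES (:m)"),
                        {'m': payload})
        session.commit()
    finally:
        Session.remove()  # closes the session, returning its connection to the pool
```

There is no need to pass a connection object around yourself; the engine is the only shared state, and the pool handles checkout and return per thread.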
I'm using SQLAlchemy 0.6.6 against a Postgres 8.3 DB on Windows 7 and Python 2.6. I am leaving the defaults when configuring pooling as I create my engine, which are pool_size=5, max_overflow=10.
For some reason, the connections keep piling up and I intermittently get "Too many clients" from PG. I am positive that connections are being closed in a finally block, as this application is only accessed via WSGI (CherryPy) and uses a connection-per-request pattern. I am also logging when connections are closed, just to make sure.
I've tried to see what's going on by adding echo_pool=True during my engine creation, but nothing is being logged. I can see SQL statements rolling through the console when I set echo=True, but nothing for pooling.
Anyway, this is driving me crazy because my co-worker who is on a Mac doesn't have any of these issues (I know, get a Mac), so I'm trying to see if this is the result of a bug or something. Google has yielded nothing so I'm hoping to get some help here.
Thanks,
cc
Turns out a ScopedSession was being used outside the normal application flow, and its close wasn't in a finally block.
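For anyone hitting the same thing, the shape of the fix is just making sure every code path returns the connection (a sketch; do_background_work stands in for whatever ran outside the WSGI request cycle):

```python
session = Session()
try:
    do_background_work(session)
    session.commit()
finally:
    session.close()  # runs even on error, so the connection goes back to the pool
```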