I want to do the following:
Have software written in Python 2.7 running
This software connects to a database (currently a MySQL database)
This software listens for TCP connections on a port X
When a connection is established, a client requests or commands something, and the software uses the database to store, remove or fetch information (based on the request or command).
What I currently have in mind is the classic approach: connect to the database, store the database connection in an object (as a variable) that is passed to the threads spawned by the connection listener, and have those threads use that variable to do whatever they need with the database connection. (I know that multiprocessing is better than multithreading in Python, but that's not related to my question at this time.)
Now my question: how should I use SQLAlchemy in this context? I am quite confused; even though I have been reading a lot of documentation and searching quite a bit, there don't seem to be good examples of how to handle this kind of situation specifically.
What is the problem here? SQLAlchemy maintains a thread-local connection pool... what else do you need?
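For illustration, a minimal sketch of that pattern (the DSN, table name, and request shape are placeholders, and it assumes a reasonably recent SQLAlchemy): create one engine at startup, let it own the pool, and have each handler thread check connections out instead of passing a single connection around.

from sqlalchemy import create_engine, text

# Create the engine once at startup; it owns the connection pool.
engine = create_engine(
    "mysql://user:password@localhost/mydb",  # placeholder DSN
    pool_size=10,        # connections kept open in the pool
    max_overflow=5,      # extra connections allowed under load
    pool_recycle=3600,   # recycle connections older than an hour
)

def handle_client(request):
    # Each worker thread checks a connection out of the pool and
    # returns it automatically when the block exits.
    with engine.connect() as conn:
        result = conn.execute(
            text("SELECT data FROM items WHERE id = :id"),  # hypothetical table
            {"id": request["id"]},
        )
        return result.fetchall()

The point is that you never hand one connection object to all your threads; you share the engine and let each thread borrow and return connections.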
Related
I have a Python application which interacts with a Vertica database through the vertica-python client. Currently there is no connection pool to manage the connections; instead, a new connection is opened for every request and closed at the end of it. However, this design is costly when handling concurrent requests. The application also runs behind uWSGI and an Nginx server to process multiple requests.
I would like to use an existing connection pool to handle connections to Vertica from Python, but I can't seem to find connection pools like C3P0 or HikariCP for Python. Could you please help me with pools for Python and Vertica?
For native Postgres, have a look at some of the connection pools discussed at Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
For Vertica, it doesn't look like connection pooling is available in the native driver, though it might be worth posting an issue on GitHub if you'd like more specific details. You could probably use Vertica's ODBC driver through pyODBC, since that supports connection pooling if configured as discussed at http://www.unixodbc.org/doc/conn_pool.html
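If you'd rather stay on the native driver, a small hand-rolled pool is also an option. A minimal sketch (the connection parameters are placeholders, and it assumes vertica_python's standard connect()/cursor() DB-API surface):

try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2
import vertica_python

CONN_INFO = {"host": "localhost", "port": 5433, "user": "dbadmin",
             "password": "secret", "database": "mydb"}  # placeholders

class VerticaPool(object):
    def __init__(self, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(vertica_python.connect(**CONN_INFO))

    def execute(self, sql, params=None):
        conn = self._pool.get()        # blocks until a connection is free
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur.fetchall()
        finally:
            self._pool.put(conn)       # always return the connection

Note that uWSGI forks worker processes, so build one pool per worker (e.g. lazily on first use) rather than one in the master before forking.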
So I have to make a small website for internal use at work (my work is not related to programming). Since our office is about 200 people, I thought I'd use the SocketServer module with a SQLite database and learn some new stuff along the way. From what I see, the only way to do it is to connect to the database on every request. Isn't that expensive? What happens if two people send requests and try to connect to the database at the same time (or close to it)? So I have:
Start the server
At every request the server initializes a RequestHandler instance, and then I have to connect to the database?
I am not posting code because it's a general question about how the process works.
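For what it's worth, connecting to SQLite per request is cheap: it's a local file open, not a network handshake, and SQLite serializes writers itself, so two near-simultaneous requests simply queue briefly on the file lock. A minimal sketch of the flow described above (the port, table, and wire protocol are made up):

import sqlite3
import SocketServer  # "socketserver" on Python 3

class Handler(SocketServer.StreamRequestHandler):
    def handle(self):
        name = self.rfile.readline().strip()
        conn = sqlite3.connect("office.db")  # opened per request
        try:
            row = conn.execute(
                "SELECT phone FROM staff WHERE name = ?", (name,)
            ).fetchone()
            self.wfile.write("%s\n" % (row[0] if row else "not found"))
        finally:
            conn.close()

server = SocketServer.ThreadingTCPServer(("", 8000), Handler)
server.serve_forever()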
What are ways of keeping a persistent connection to MongoDB, instead of creating a MongoClient instance and using it when constructing queries? I noticed that it opens/closes a connection on each query operation.
I'm using Python and have pymongo installed. I've looked around and didn't find much information on connection management. In light of this, what are the general recommendations on managing database connections?
Just have a global MongoClient at the top level of a Python module:
from pymongo import MongoClient

client = MongoClient(my_connection_string)
It's critical that you create one client at your application's startup. Use that same client for every operation for the lifetime of your application, and never call close() on it. This will give optimal performance.
The client manages a connection pool and reuses connections as much as possible. It does not open and close a new connection per query; that would be awful. See PyMongo's docs on connection pooling.
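As a sketch of that pattern (the connection string and collection names are placeholders):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # one client per process
db = client.mydb

def save_event(event):
    # Borrows a pooled socket; no new connection per call.
    db.events.insert_one(event)

def find_events(user_id):
    return list(db.events.find({"user_id": user_id}))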
I've looked through Stack Overflow and can see some oldish posts on this, and wondered what the current thinking is about pooling MySQL connections in Python.
We have a set of Python processes that are threading, with each thread creating its own connection to MySQL. This all works fine, but we can have over 150 connections to MySQL.
When I look at the process state in MySQL, I can see that most of the connections are asleep most of the time. The application connects to the Twitter streaming API, so it's busy, but this only accounts for a few connections.
Is there a good way of adding connection pooling to Python's MySQL access, and can this be done simply without rewriting all of the existing code?
Many thanks.
PT
See DBUtils.
If you already have an abstraction layer over MySQL, you can modify that layer to avoid rewriting all the code.
If not, you will have to hack your Python MySQL driver.
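For example, DBUtils' PooledDB wraps an existing DB-API driver, so if your code already funnels connections through one factory function, only that function needs to change. A sketch (credentials are placeholders; the import path is dbutils.pooled_db in newer DBUtils releases):

import MySQLdb
from DBUtils.PooledDB import PooledDB

pool = PooledDB(
    creator=MySQLdb,    # the DB-API module to pool
    maxconnections=20,  # hard cap, instead of 150+ ad-hoc connections
    blocking=True,      # wait for a free connection rather than fail
    host="localhost", user="app", passwd="secret", db="tweets",
)

def get_connection():
    # Drop-in replacement for MySQLdb.connect(...); the returned
    # wrapper goes back to the pool on close().
    return pool.connection()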
Sometimes in our production environment a situation occurs where the connection between our service (a Python program that uses MySQLdb) and the MySQL server is flaky: some packets are lost, some black magic happens, and the .execute() of a MySQLdb.Cursor object never ends (or takes a very long time to end).
This is very bad because it wastes the service's worker threads. Sometimes it leads to exhaustion of the worker pool, and the service stops responding at all.
So the question is: is there a way to interrupt a MySQLdb.Connection.execute operation after a given amount of time?
If the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server over a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.
You need to analyse exactly what the problem is. MySQL connections should eventually time out if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts.
If the database is "flaky", then you definitely need to investigate how. It seems unlikely that the database really is the problem, more likely that networking in between is.
If you are using stateful firewalls of any kind, it's possible that they're losing some of their state, causing otherwise good long-lived connections to go dead.
You might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go "stale": the server and client both think it's still alive, but some stateful network element in between has "forgotten" about the TCP connection. An application trying to use such a stale connection will face a long wait before receiving an error (though it should eventually get one).
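One concrete mitigation for the original question is to bound the wait at the driver level. A sketch (the query is hypothetical; connect_timeout exists in classic MySQLdb, while read_timeout/write_timeout require the maintained mysqlclient fork):

import MySQLdb

conn = MySQLdb.connect(
    host="db.example.com", user="svc", passwd="secret", db="prod",
    connect_timeout=5,   # seconds to establish the connection
    read_timeout=30,     # seconds to wait for a result (mysqlclient only)
    write_timeout=30,    # seconds to wait sending data (mysqlclient only)
)

cur = conn.cursor()
try:
    cur.execute("SELECT id FROM heartbeat")  # raises OperationalError on timeout
except MySQLdb.OperationalError:
    conn.close()  # discard the possibly-stale connection and reconnect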