What are the options for maintaining a persistent connection to MongoDB, instead of creating a MongoClient instance and using it when constructing queries? I noticed that it appears to open and close a connection on each query operation.
I'm using Python with pymongo installed. I've looked around and didn't find much information on connection management. Given that, what are the general recommendations for managing database connections?
Just have a global MongoClient at the top level of a Python module:
from pymongo import MongoClient

client = MongoClient(my_connection_string)
It's critical that you create one client at your application's startup. Use that same client for every operation for the lifetime of your application, and never call "close" on it. This gives optimal performance.
The client manages a connection pool and reuses connections as much as possible. It does not open and close a new connection per query; that would be terrible for performance. See PyMongo's docs on connection pooling.
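For example, a minimal sketch of the pattern (the module layout, URI, and collection names here are illustrative, not from the question): keep the client in one module and import it wherever queries are made.

# db.py -- create the client once, at import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# elsewhere, e.g. handlers.py -- every handler reuses the same client
from db import client

def insert_user(name):
    # checks a socket out of the client's pool, runs the insert, returns it
    client.mydb.users.insert_one({"name": name})

Because the pool lives inside the single client, concurrent operations simply check sockets in and out; nothing is torn down between queries.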
Related
I have a Python application that interacts with a Vertica database through the vertica-python client. Currently there is no connection pool managing the connections; instead, a new connection is opened for every request and closed at the end of the request. This design is costly when handling concurrent requests. The application runs under uWSGI behind an Nginx server to process multiple requests.
I would like to use an existing connection pool to handle connections to Vertica from Python, but I can't find connection pools like c3p0 or HikariCP for Python. Could you please point me to connection pool options for Python and Vertica?
For native Postgres, have a look at some of the connection pools discussed at Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
For Vertica, it doesn't look like connection pooling is available in the native driver, though it might be worth posting an issue on GitHub if you'd like more specific details. You could probably use Vertica's ODBC driver through pyODBC, since that supports connection pooling if configured as discussed at http://www.unixodbc.org/doc/conn_pool.html
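A rough sketch of that ODBC route, assuming a Vertica DSN named VerticaDSN has been configured and pooling enabled in the driver manager as the link above describes (the DSN name is a placeholder):

import pyodbc

# Must be set before the first connect(); asks the ODBC driver manager
# to keep connections alive and reuse them on later connect() calls.
pyodbc.pooling = True

def run_query(sql):
    # with pooling on, this draws from the driver manager's pool
    # rather than opening a fresh connection every time
    conn = pyodbc.connect("DSN=VerticaDSN")
    try:
        return conn.cursor().execute(sql).fetchall()
    finally:
        conn.close()  # hands the connection back to the pool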
I am developing a function in AWS Lambda/Serverless and was wondering how to get connection pooling done. From a programmatic point of view I know how to establish a pymongo connection with pooling, but I have no idea how to achieve this with serverless, since it is stateless and every invocation would then trigger a new connection (and, in theory, a lot of them at the same time).
Any advice?
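One common pattern (a sketch; the environment variable and collection names are placeholders): create the client at module scope, outside the handler. Lambda reuses module state across warm invocations of the same container, so only cold starts pay the connection cost.

import os
from pymongo import MongoClient

# Created once per container: warm invocations reuse this client
# (and its pool) instead of reconnecting on every call.
client = MongoClient(os.environ["MONGO_URI"], maxPoolSize=1)

def handler(event, context):
    doc = client.mydb.items.find_one({"_id": event.get("id")})
    return {"found": doc is not None}

maxPoolSize=1 reflects that a single container processes one invocation at a time; concurrency in Lambda comes from running more containers, each with its own client.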
I'm connecting my API layer to an Oracle DB using the cx_Oracle connector; the issue is that my DB machine keeps restarting for unrelated reasons.
I want to make my API layer resilient, so that it re-establishes the connection or retries when this happens. What's the best possible solution?
Please don't just suggest try/except.
My connection code:
import cx_Oracle

import config  # my settings module

connection_string = "{user}/{password}@{server}:{port}/{sid}".format(
    user=config.DB_USER,
    password=config.DB_PASSWORD,
    server=config.DB_HOST,
    port=config.DB_PORT,
    sid=config.DB_SID)

db_conn = cx_Oracle.connect(connection_string)
cursor = db_conn.cursor()
I don't know much about this, but would having a session/connection pool help here?
If you use a session pool (cx_Oracle.SessionPool), dead sessions are replaced whenever a session is requested from the pool. That will not help with sessions that have already been acquired from the pool; but if you get an error, release the session back to the pool and acquire again, and you will get a session that can be used. If you want more advanced protection from database failure, you will need to explore some of the more advanced techniques that Oracle Database has to offer, such as RAC (Real Application Clusters).
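A minimal sketch of that acquire/release-and-retry pattern, reusing the config module from the question (the pool sizes and one-retry policy are illustrative):

import cx_Oracle
import config

pool = cx_Oracle.SessionPool(
    user=config.DB_USER,
    password=config.DB_PASSWORD,
    dsn=cx_Oracle.makedsn(config.DB_HOST, config.DB_PORT, sid=config.DB_SID),
    min=2, max=10, increment=1)

def execute(sql):
    # acquire() hands out a healthy session; if ours dies mid-use,
    # release it and acquire a fresh one on the retry
    for attempt in range(2):
        conn = pool.acquire()
        try:
            return conn.cursor().execute(sql).fetchall()
        except cx_Oracle.DatabaseError:
            if attempt == 1:
                raise
        finally:
            pool.release(conn)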
I initialize the MariaDB connection during webapp initialization, like this:
import MySQLdb

con = MySQLdb.connect('localhost', 'user', 'pass', 'db')
Now I've found this doesn't work, because the connection times out.
What is the best practice for setting up and keeping a connection to the DB? Increase the timeout, create a connection in each request, or something more tuned?
I recommend you build a singleton object that returns the DB connection (your con). The first time it is called, it performs the MySQLdb.connect(). After that, it keeps a static copy of the connection and returns it to callers on subsequent calls.
To recover from disconnects: first, note that a timeout is not the only reason for losing a connection; a network glitch can also cause it. So forget about timeouts and simply plan to reconnect when needed.
One way is to ping the connection to see if it's still open, and reconnect if it is not.
Another way is to turn on "autoreconnect". But that has a negative impact on transactions, @variables, and other things that could bite you.
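A rough sketch of the singleton plus ping-and-reconnect idea (the connection parameters are placeholders):

import MySQLdb

_con = None  # module-level singleton

def get_connection():
    """Return a live connection, reconnecting if the old one has died."""
    global _con
    if _con is not None:
        try:
            _con.ping()  # cheap liveness check; raises if the server is gone
            return _con
        except MySQLdb.OperationalError:
            _con = None  # stale connection; fall through and reconnect
    _con = MySQLdb.connect('localhost', 'user', 'pass', 'db')
    return _con

Request handlers call get_connection() instead of holding their own reference, so a reconnect after a DB restart is transparent to them.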
I want to do the following:
Have a piece of software, written in Python 2.7, running
This software connects to a database (currently a MySQL database)
This software listens for connections on TCP port X
When a connection is established and a client requests or commands something, the software uses the database to store, remove, or fetch information (based on the request or command).
What I currently have in mind is the classic approach: connect to the database, store the connection in an object (as a variable) that is passed to the threads spawned by the connection listener, and have those threads use that variable to do whatever they need with the database connection. (I know that multiprocessing is better than multithreading in Python, but that's not related to my question at this time.)
Now my question: how should I use SQLAlchemy in this context? I am quite confused; even though I have read a lot of its documentation, there don't seem to be good examples of how to handle this kind of situation, however much I search.
What is the problem here? SQLAlchemy's engine maintains a thread-safe connection pool... what else do you need?
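For example, a minimal sketch (the database URL and query are placeholders): create one Engine at startup and let each handler thread check a connection out of its pool.

from sqlalchemy import create_engine, text

# One engine for the whole process; it owns the connection pool.
engine = create_engine("mysql://user:pass@localhost/mydb",
                       pool_size=5, max_overflow=10)

def handle_client_request(command):
    # Each thread checks a connection out of the pool here and
    # returns it automatically when the with-block exits.
    with engine.connect() as conn:
        return conn.execute(text("SELECT 1")).fetchall()

The key point is to share the engine between threads, not an open connection; individual Connection objects should stay within a single thread.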