I'm connecting my API layer to an Oracle database using the cx_Oracle connector. The issue is that the DB machine keeps restarting for unrelated reasons.
I want to make my API layer immune to this, so that it reestablishes the connection or retries automatically. What's the best possible solution?
Please don't suggest try/except.
My connection code:
import cx_Oracle

# format: user/password@host:port/sid
connection_string = "{user}/{password}@{server}:{port}/{sid}".format(
    user=config.DB_USER,
    password=config.DB_PASSWORD,
    server=config.DB_HOST,
    port=config.DB_PORT,
    sid=config.DB_SID)

db_conn = cx_Oracle.connect(connection_string)
cursor = db_conn.cursor()
I don't know much about this, but would having a session/connection pool help here?
If you use a session pool (cx_Oracle.SessionPool), dead sessions are replaced whenever they are requested from the pool. That will not help with sessions that have already been acquired from the pool; but if you get an error, release the session back to the pool and then acquire a session again, and you will get one that can be used. If you want more advanced protection from database failure, you will need to explore some of the more advanced techniques that Oracle Database has to offer, like RAC (Real Application Clusters).
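A rough sketch of that release-and-reacquire pattern, reusing the config names from the question (pool sizes here are illustrative):

import cx_Oracle

pool = cx_Oracle.SessionPool(
    user=config.DB_USER,
    password=config.DB_PASSWORD,
    dsn="{}:{}/{}".format(config.DB_HOST, config.DB_PORT, config.DB_SID),
    min=2, max=10, increment=1)

def run_query(sql):
    connection = pool.acquire()
    try:
        cursor = connection.cursor()
        cursor.execute(sql)
        rows = cursor.fetchall()
    except cx_Oracle.DatabaseError:
        # The session died (e.g. the DB restarted): discard it instead of
        # returning it to the pool, then retry once with a fresh session.
        pool.drop(connection)
        connection = pool.acquire()
        cursor = connection.cursor()
        cursor.execute(sql)
        rows = cursor.fetchall()
    pool.release(connection)
    return rows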
I want some clarification on how the pre-ping feature works with SQLAlchemy connection pools. Let's say I make a SQL query to my database through the pool. If the pool sends a pre-ping to check the connection and the connection is broken, does it handle this automatically? By handling I mean that it reconnects and then sends the SQL query. Or do I have to handle this myself in my code?
Thanks!
From the docs, yes, stale connections are handled transparently:
The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.
... unless:
If the database is still not available when “pre ping” runs, then the initial connect will fail and the error for failure to connect will be propagated normally. In the uncommon situation that the database is available for connections, but is not able to respond to a “ping”, the “pre_ping” will try up to three times before giving up, propagating the database error last received.
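For reference, enabling it is a single flag on the engine; a minimal sketch (the connection URL is a placeholder):

from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql+pymysql://user:password@db-host/mydb",
    pool_pre_ping=True)  # test each pooled connection on checkout

with engine.connect() as conn:
    # If the pooled connection was stale, pre-ping has already replaced it
    # before this statement runs -- no retry logic is needed here.
    conn.execute(text("SELECT 1"))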
I'm wondering if cx_Oracle works with Python Flask.
If so, please share some good documentation with me; that would be great.
Thank you all for your help.
See my blog post How to use Python Flask with Oracle Database which has a runnable example.
The key thing is that during application initialization you will want to start a connection pool with cx_Oracle.SessionPool().
Then each route can get a connection from the pool with connection = pool.acquire().
Also see https://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/apaas/python/python-oracle-accs/python-oracle-accs.html; however, that tutorial doesn't use a connection pool, so it has some scalability limitations.
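A minimal sketch of that structure (credentials, DSN, and pool sizes are placeholders):

import cx_Oracle
from flask import Flask

app = Flask(__name__)

# Started once at application initialization, shared by all routes.
pool = cx_Oracle.SessionPool(
    user="scott", password="tiger", dsn="dbhost.example.com/orclpdb1",
    min=2, max=10, increment=1)

@app.route("/")
def index():
    connection = pool.acquire()
    try:
        cursor = connection.cursor()
        cursor.execute("select sysdate from dual")
        result, = cursor.fetchone()
        return str(result)
    finally:
        pool.release(connection)  # return the session to the pool for reuse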
On our dev environment, we started exploring the use of Celery. The problem is that when a task is launched, SQLAlchemy often has a hard time connecting to our Amazon AWS RDS instance. This tends to happen after a period of time regardless of what settings I've tried, but it's hard to say just how long. Our database is a snapshot of our production database on AWS RDS with all of the same parameters/settings.
The errors include...
OperationalError: (_mysql_exceptions.OperationalError) (2013, 'Lost connection to MySQL server during query')...(Background on this error at: http://sqlalche.me/e/e3q8)
...or...
OperationalError: MySQL Connection not available.
Our engine...
engine = sa.create_engine(SA_ENGINE, echo=False, pool_recycle=90, pool_pre_ping=True)
(I've tried tons of variations of pool_recycle)
On the database side, I've changed the following parameters (though some are extreme, I've tried all sorts of variations)...
interactive_timeout = 28800
wait_timeout = 28800
max_heap_table_size = 32000000000
I tried wrapping each query to reconnect and this didn't work either. Note this code is taken from a StackOverflow answer on similar topics...
def db_execute(conn, query):
    try:
        result = conn.execute(query)
        print(result)
    except sa.exc.OperationalError:   # may need more exceptions here (or trap all)
        conn = engine.connect()       # replace the stale connection
        result = conn.execute(query)  # and retry
    return result
This has been a three-day wheel spin and I'm stuck... I hope someone out there has some insight or guidance?
UPDATE
I've completely removed Celery from the equation, and connections are still randomly dropping, even between queries within the same function flow. On the production server, the software is nearly identical now.
What are some methods of keeping a persistent connection to MongoDB, instead of creating a MongoClient instance and using it when constructing queries? I noticed that it opens/closes a connection on each query operation.
I'm using Python and have pymongo installed. I've looked around and didn't find much information on connection management. In light of this, what are the general recommendations on database management?
Just have a global MongoClient at the top level of a Python module:
client = MongoClient(my_connection_string)
It's critical that you create one client at your application's startup. Use that same client for every operation for the lifetime of your application and never call "close" on it. This will provide optimal performance.
The client manages a connection pool and reuses connections as much as possible. It does not open and close a new connection per query; that would be awful. See PyMongo's docs for connection pooling.
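A minimal sketch of the pattern (module, database, and collection names are illustrative):

# db.py -- the client is created once, when the module is first imported.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["mydatabase"]

def find_user(name):
    # Each call borrows a pooled socket from the client; nothing is
    # opened or closed per query.
    return db.users.find_one({"name": name})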
My db is SQL Server on a remote machine and I am encountering quite a bit of latency.
I have a method in a controller that is structured like this:
def submitData():
    parameters = db.site.select(...)
    results = some_http_post()  # posts data to the remote server
    if results:
        rec = db.status_table.insert(...)
        rec.update_record(...)  # update the newly inserted row
What tends to happen is that some_http_post() takes several seconds to get a response, and I run out of threads.
When I hit web2py with more than 6 concurrent requests to submitData, I encounter freezes in requests rather than getting a DB error.
This has the effect of stopping any further web2py requests.
I would ideally like to close the db connection before the call to some_http_post and start another db connection after it, but I don't see a simple way to do this with the DAL API. Is this possible or is there a better optimisation that I could be trying?
Connection pooling is enabled by default.
If you add "pool_size=0" to the connection string this disables connection pooling and I assume the behavior to forcibly open/close each conn. instead of leaving them open.
If you need more threads (sounds like), increase your pool_size and see what happens.
OR, yes, you can use the DAL and do a db.close() and the connection is auto-reopened on first request
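A sketch of both options (the URI and pool size are placeholders):

# In the model file: pool_size keeps up to 10 connections open for reuse.
db = DAL('mssql://user:password@dbserver/mydb', pool_size=10)

def submitData():
    parameters = db.site.select(...)
    db.close()                  # release the connection before the slow call
    results = some_http_post()  # no DB connection is held during this request
    # per the answer above, the connection is reopened automatically afterwards
    ...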
Try enabling connection pooling: http://web2py.com/books/default/chapter/29/06#Connection-pooling