Slow speed of cx_Oracle connection - python

I am using cx_Oracle with Python 3.7 to connect to an Oracle database and execute stored procedures stored in that database.
Right now I am connecting to the database as follows:
dbconstr = "username/password#databaseip/sid"
db_connection = cx_Oracle.connect(dbconstr)
cursor = db_connection.cursor()
#calling sp here
cursor.close()
db_connection.close()
In this code the connection time for cx_Oracle.connect(dbconstr) is about 250 ms, while the whole request runs in about 500 ms; what I want is to reduce that 250 ms connection time.
I am using a Flask REST API in Python, and 250 ms just for the connection is too long when the entire response time is 500 ms.
I have also tried maintaining one connection for the lifetime of the application by declaring a global variable for the connection object and only creating and closing cursors, as shown below, which brings the response time down to about 250 ms:
dbconstr = "username/password#databaseip/sid"
db_connection = cx_Oracle.connect(dbconstr)
def api_response():
cursor = db_connection.cursor()
#calling sp here
cursor.close()
return result
With this method the response time is reduced, but a connection is kept open even when no one is using the application, and after some idle time the first request takes several seconds to execute, which is very bad.
So I would like help creating stable code with a good response time.

Creating a connection involves a lot of work on the database server: process startup, memory allocation, authentication etc.
Your solution, or using a connection pool, is the way to reduce connection times in Oracle applications. A pool with an acquire and release around the point of use in the app also has benefits for planned and unplanned DB maintenance, due to the internal implementation of the pool.
What's the load on your service? You probably want to start a pool and acquire/release connections; see
How to use cx_Oracle session pool with Flask gracefuly? and Unresponsive requests- understanding the bottleneck (Flask + Oracle + Gunicorn) and others. Pro tip: keep the pool small, and make the minimum and maximum size the same.
Is there a problem with having connections open? What is it impacting? There are solutions such as Shared Servers or DRCP, but generally there shouldn't be any need for them unless your database server is short of memory.
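As a rough sketch of the pooled approach (the credentials, DSN, and procedure name below are placeholders; size the pool to match your expected concurrency):
import cx_Oracle

# Create the pool once at application startup; min == max keeps the
# pool size fixed, as recommended above.
pool = cx_Oracle.SessionPool(user="username", password="password",
                             dsn="databaseip/sid",
                             min=4, max=4, increment=0, threaded=True)

def call_procedure():
    connection = pool.acquire()        # fast: reuses an already-open session
    try:
        cursor = connection.cursor()
        cursor.callproc("my_stored_proc")   # placeholder procedure name
        cursor.close()
    finally:
        pool.release(connection)       # hand the session back to the pool
Acquiring from the pool avoids the per-request connect cost; that cost is paid only when the pool is created or grown.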

Related

How to use multiple cursors in mysql.connector?

I want to execute multiple queries without each blocking the other. I created multiple cursors and did the following, but got mysql.connector.errors.OperationalError: 2013 (HY000): Lost connection to MySQL server during query
import mysql.connector as mc
from threading import Thread

conn = mc.connect()  # ...username, password
cur1 = conn.cursor()
cur2 = conn.cursor()
e1 = Thread(target=cur1.execute, args=("do sleep(30)",))  # A 'time taking' task
e2 = Thread(target=cur2.execute, args=("show databases",))  # A simple task
e1.start()
e2.start()
But I got that OperationalError. And reading a few other questions, some suggest that using multiple connections is better than multiple cursors. So shall I use multiple connections?
I don't have the full context of your situation to understand the performance considerations. Yes, starting a new connection could be considered heavy if you are operating under strict timing constraints that are short relative to the time it takes to start a new connection and you were forced to do that for every query...
But you can mitigate that with a shared connection pool that you create ahead of time, and then distribute your queries (in separate threads) over those connections as resources allow.
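As a rough sketch of that shared-pool idea using mysql.connector's built-in pooling (the connection parameters are placeholders):
from mysql.connector import pooling

# The pool is created once, up front, and shared by all threads.
pool = pooling.MySQLConnectionPool(
    pool_name="app_pool",
    pool_size=5,
    host="localhost", user="user", password="password", database="mydb",
)

def run_query(sql):
    conn = pool.get_connection()   # raises PoolError if the pool is exhausted
    try:
        cur = conn.cursor()
        cur.execute(sql)
        rows = cur.fetchall()
        cur.close()
        return rows
    finally:
        conn.close()               # returns the connection to the pool
Each thread checks out its own connection, so the two statements in the question would run on separate connections instead of sharing one.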
On the other hand, if all of your query times are fairly long relative to the time it takes to create a new connection, and you aren't looking to run more than a handful of queries in parallel, then it can be a reasonable option to create connections on demand. Just be aware that you will run into limits with the number of open connections if you try to go too far, as well as resource limitations on the database system itself. You probably don't want to do something like that against a shared database. Again, this is only a reasonable option within some very specific contexts.

Task queues end up with "(2062, 'Cloud SQL socket open failed with error: No such file or directory')"

We are building an application which uses heavy backend tasks (task queues), and in each task we are doing I/O against Google Cloud SQL.
GAE has a limit of 12 concurrent connections (not sure whether this is the issue; I saw it at https://stackoverflow.com/a/26155819/687692):
"Each App Engine instance running in a Standard environment or using Standard-compatible APIs cannot have more than 12 concurrent connections to a Google Cloud SQL instance." - https://cloud.google.com/sql/faq
Most of my backend tasks (100-500 tasks per second) are failing because of this issue.
Also, I checked the active connections for the last 4 days: I don't see the connection count ever going above 12.
So, what approach do I need to take to fix this? Connection pooling (how to do it in GAE with Cloud SQL?), or some other fix?
Any help - guidance is very much appreciated.
Let me know, if any one need more information.
Thanks,
It's not likely that you would exceed the 12 connection limit with the standard Python App Engine scaling settings if the connections are handled properly.
To demonstrate, I have created a small application that schedules many tasks, with each task acquiring a database connection and doing some work. I am able to run this test without hitting connection issues.
One thing worth making sure of is that you are not leaking any connections (i.e. failing to close the connection in some code paths or when exceptions happen).
For MySQLdb, you can guarantee you are not leaking connections by using closing from contextlib:
import MySQLdb
from contextlib import closing

def getDbConnection():
    return MySQLdb.connect(unix_socket='/cloudsql/instance_name',
                           db='db', user='user', charset='utf8')

with closing(getDbConnection()) as db:
    # do stuff; the connection is guaranteed to be closed afterwards
    pass

Leaking database connections: PostgreSQL, SQLAlchemy, Flask

I'm running PostgreSQL 9.3 and SQLAlchemy 0.8.2 and am experiencing database connection leaks. After deploying, the app consumes around 240 connections. Over the next 30 hours this number gradually grows to 500, at which point PostgreSQL starts dropping connections.
I use SQLAlchemy thread-local sessions:
from sqlalchemy import orm, create_engine
engine = create_engine(os.environ['DATABASE_URL'], echo=False)
Session = orm.scoped_session(orm.sessionmaker(engine))
For the Flask web app, the .remove() call on the Session proxy object is sent during request teardown:
@app.teardown_request
def teardown_request(exception=None):
    if not app.testing:
        Session.remove()
This should be the same as what Flask-SQLAlchemy is doing.
I also have some periodic tasks that run in a loop, and I call .remove() for every iteration of the loop:
def run_forever():
    while True:
        do_stuff(Session)
        Session.remove()
What am I doing wrong which could lead to a connection leak?
If I remember correctly from my experiments with SQLAlchemy, the scoped_session() is used to create sessions that you can access from multiple places. That is, you create a session in one method and use it in another without explicitly passing the session object around.
It does that by keeping a list of sessions and associating them with a "scope ID". By default, to obtain a scope ID, it uses the current thread ID; so you have one session per thread. You can supply a scopefunc to provide – for example – one ID per request:
# This is (approx.) what flask-sqlalchemy does:
from flask import _request_ctx_stack as context_stack

Session = orm.scoped_session(orm.sessionmaker(engine),
                             scopefunc=context_stack.__ident_func__)
Also, take note of the other answers and comments about doing background tasks.
First of all, that is a really bad way to run background tasks. Try an async task queue such as Celery.
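As a rough sketch of that suggestion (the broker URL and database URL are placeholders, and do_stuff stands in for the question's periodic work):
from celery import Celery
from sqlalchemy import create_engine, orm, text

celery_app = Celery("tasks", broker="redis://localhost:6379/0")

engine = create_engine("postgresql://user:password@localhost/dbname")
Session = orm.scoped_session(orm.sessionmaker(engine))

def do_stuff(session):
    session.execute(text("SELECT 1"))   # placeholder for the real work

@celery_app.task
def periodic_job():
    try:
        do_stuff(Session)
    finally:
        Session.remove()   # always return the connection to the pool
Each worker process gets its own engine and pool, and calling Session.remove() in a finally block guarantees the connection goes back even when the task raises.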
Not 100% sure so this is a bit of a guess based on the information provided, but I wonder if each page load is starting a new db connection which is then listening for notifications. If this is the case, I wonder if the db connection is effectively removed from the pool and so gets created on the next page load.
If this is the case, my recommendation would be to have a separate DBI database handle dedicated to listening for notifications so that these are not active in the queue. This might be done outside your workflow.
Also
In particular, the leak happens when making more than one simultaneous request. At the same time, I could see that some of the requests were left with uncompleted query executions and timed out. You can write something to manage this yourself.

Python MySQLdb - Is there a benefit to leaving the cursor open?

[Python/MySQLdb] - CentOS - Linux - VPS
I have a page that parses a large file and queries the database up to 100 times per run. The database is pretty large and I'm trying to reduce the execution time of this script.
My SQL functions are inside a class, currently the connection object is a class variable created when the class is instantiated. I have various fetch and query functions that create a cursor from the connection object every time they are called. Would it be faster to create the cursor when the connection object is created and reuse it or would it be better practice to create the cursor every time it's called?
import MySQLdb as mdb

class parse:
    con = mdb.connect(server, username, password, dbname)
    # cur = con.cursor()  ## create here?

    def q(self, q):
        cur = self.con.cursor()  ## it's currently here
        cur.execute(q)
Any other suggestions on how to speed up the script are welcome too. The insert statement is the same for all the queries in the script.
Opening and closing connections is never free, it always wastes some amount of performance.
The reason you wouldn't want to just leave the connection open is that if two requests were to come in at the same time the second request would have to wait till the first request had finished before it could do any work.
One way to solve this is to use connection pooling. You create a bunch of open connections and then reuse them. Every time you need to do a query you check a connection out of the pool, perform the request, and then put it back into the pool.
Setting all this up can be quite tedious, so I would recommend using SQLAlchemy. It has built-in connection pooling, relatively low overhead and supports MySQL.
Since you care about speed, I would only use the Core part of SQLAlchemy, since the ORM part is a bit slower.
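A rough sketch of that Core-only approach (the connection URL, table name, and pool settings are placeholders):
from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql://user:password@localhost/dbname",
    pool_size=5,          # keep a handful of connections open and reuse them
    pool_recycle=3600,    # refresh connections before the server's idle timeout
)

def fetch_items(item_id):
    # connect() checks a connection out of the pool and returns it
    # automatically when the block exits.
    with engine.connect() as conn:
        result = conn.execute(text("SELECT * FROM items WHERE id = :id"),
                              {"id": item_id})
        return result.fetchall()
The parsing class in the question could create the engine once at module or class level and call a helper like this from its query methods.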

Mysql connection pooling question: is it worth it?

I recall hearing that the connection process in mysql was designed to be very fast compared to other RDBMSes, and that therefore using a library that provides connection pooling (SQLAlchemy) won't actually help you that much if you enable the connection pool.
Does anyone have any experience with this?
I'm leery of enabling it because of the possibility that if some code does something stateful to a db connection and (perhaps mistakenly) doesn't clean up after itself, that state which would normally get cleaned up upon closing the connection will instead get propagated to subsequent code that gets a recycled connection.
There's no need to worry about residual state on a connection when using SQLA's connection pool, unless your application is changing connection-wide options like transaction isolation levels (which generally is not the case). SQLA's connection pool issues a connection.rollback() on the connection when it's checked back in, so that any transactional state or locks are cleared.
It is possible that MySQL's connection time is pretty fast, especially if you're connecting over unix sockets on the same machine. If you do use a connection pool, you also want to ensure that connections are recycled after some period of time, as the MySQL server will automatically shut down connections that have been idle for more than 8 hours by default (in SQLAlchemy this is the pool_recycle option).
You can quickly do some benchmarking of connection pool vs. no pool with a SQLA application by changing the pool implementation from the default QueuePool to NullPool, which is a pool implementation that doesn't actually pool anything - it connects and disconnects for real when the proxied connection is acquired and later closed.
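A quick-and-dirty way to do that comparison (the URL is a placeholder and the timing loop is deliberately crude):
import time
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

URL = "mysql://user:password@localhost/dbname"

pooled = create_engine(URL, pool_recycle=3600)      # default QueuePool
unpooled = create_engine(URL, poolclass=NullPool)   # real connect/close each time

def time_checkouts(engine, n=100):
    start = time.time()
    for _ in range(n):
        with engine.connect() as conn:
            conn.execute(text("SELECT 1"))
    return time.time() - start

print("pooled:  ", time_checkouts(pooled))
print("unpooled:", time_checkouts(unpooled))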
Even if the connection part of MySQL itself is pretty slick, presumably there's still a network connection involved (whether that's loopback or physical). If you're making a lot of requests, that could get significantly expensive. It will depend (as is so often the case) on exactly what your application does, of course - if you're doing a lot of work per connection, then that will dominate and you won't gain a lot.
When in doubt, benchmark - but I would by-and-large trust that a connection pooling library (at least, a reputable one) should work properly and reset things appropriately.
Short answer: you need to benchmark it.
Long answer: it depends. MySQL is fast at connection setup, so avoiding that cost is not a strong reason on its own to go for connection pooling. Where pooling wins is when the queries themselves are few and fast, because then connection setup is a significant fraction of the total time.
The other worry is how the application treats the SQL thread. If it does no SQL transactions, and makes no assumptions about the state of the thread, then pooling won't be a problem. OTOH, code that relies on the closing of the thread to discard temporary tables or to roll back transactions will have a lot of problems with pooling.
The connection pool speeds things up because you do not have to create a java.sql.Connection object every time you do a database query. I use the Tomcat connection pool with a MySQL database for web applications that do a lot of queries; during high user load there is a noticeable speed improvement.
I made a simple RESTful service with Django and tested it with and without connection pooling. In my case, the difference was quite noticeable.
In a LAN, without it, response time was between 1 and 5 seconds. With it, less than 20 ms.
Results may vary, but the configuration I'm using for the MySQL & Apache servers is pretty standard low-end.
If you're serving UI pages over the internet the extra time may not be noticeable to the user, but in my case it was unacceptable, so I opted for using the pool. Hope this helps you.
