Both of these work:
sel = select([self.tbl])
rec = self.engine.execute(sel)
and
sel = select([self.tbl])
conn = self.engine.connect()
rec = conn.execute(sel)
What is the underlying difference?
According to the docs:
About connect()
The engine can be used directly to issue SQL to the database. The most generic way is to first procure a connection resource, which you get via the Engine.connect() method:
connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print("username:", row['username'])
connection.close()
The connection is an instance of Connection, which is a proxy object
for an actual DBAPI connection. The DBAPI connection is retrieved from
the connection pool at the point at which Connection is created.
About execute()
The above procedure can be performed in a shorthand way by using the execute() method of Engine itself:
result = engine.execute("select username from users")
for row in result:
    print("username:", row['username'])
Where above, the execute() method acquires a new Connection on its
own, executes the statement
with that object, and returns the ResultProxy. In this case, the
ResultProxy contains a special flag known as close_with_result, which
indicates that when its underlying DBAPI cursor is closed, the
Connection object itself is also closed, which again returns the DBAPI
connection to the connection pool, releasing transactional resources.
Related
I followed the answer here to create a connection using psycopg2. It works on the first call on the endpoint. The second try gives this error psycopg2.InterfaceError: connection already closed. Below is a snippet of my code:
from config import conn

with conn:
    with conn.cursor() as cursor:
        cursor.execute("""
            select ...
        """)
        pos = cursor.fetchone()
        cursor.execute("""
            select ...
        """)
        neg = cursor.fetchone()
conn.close()
Since you're closing the connection in the last line of your code, it cannot be used again (on the second call to the endpoint) without reconnecting.
You can either delete that last line (and instead close the connection in your app's shutdown event, so it is still closed eventually) or call psycopg2.connect each time you need a connection.
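The reconnect-per-call option can be sketched like this, using sqlite3 from the standard library so it runs anywhere; with psycopg2 you would call psycopg2.connect(...) inside get_conn instead, and the helper names here are made up for illustration:

```python
import os
import sqlite3
import tempfile

# Hypothetical shared database path; stands in for your real DSN.
DB_PATH = os.path.join(tempfile.gettempdir(), "demo_endpoint.db")

def get_conn():
    # With psycopg2 this would be: psycopg2.connect(user=..., ...)
    return sqlite3.connect(DB_PATH)

def setup():
    conn = get_conn()
    conn.execute("CREATE TABLE IF NOT EXISTS t (val INTEGER)")
    conn.execute("DELETE FROM t")
    conn.execute("INSERT INTO t VALUES (1), (-1)")
    conn.commit()
    conn.close()

def endpoint():
    # A fresh connection on every call: closing it at the end
    # cannot break the next call, unlike a shared module-level conn.
    conn = get_conn()
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT val FROM t WHERE val > 0")
        pos = cursor.fetchone()
        cursor.execute("SELECT val FROM t WHERE val < 0")
        neg = cursor.fetchone()
        return pos, neg
    finally:
        conn.close()  # safe: this connection is never reused

setup()
print(endpoint())  # ((1,), (-1,))
print(endpoint())  # ((1,), (-1,)) -- second call still works
```

Each call pays the cost of a new connection, but there is no stale, already-closed connection to trip over.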
I wrote a program that uses pyodbc to connect to the database several times and run selects.
Unfortunately, I have to re-establish the connection before calling each of my methods.
Why doesn't a single connection work for all the methods?
# create object (connect to DB)
conn = db.db_connect()
# Call method with my select
weak_password_list = db.Find_LoginsWithWeakPassword(conn)
# I need to connect again
conn = db.db_connect()
# Call method with my select
logins_with_expired_password = db.LoginsWithExpiredPassword(conn)
# And again...
conn = db.db_connect()
# Call method with my select
logins_with_expiring_password = db.Find_LoginsWithExpiringPassword(conn)
######################################################
def db_connect(self):
    try:
        conn = pyodbc.connect('Driver={SQL Server};'
                              'Server=' + self.server_name + ';'
                              'Database=' + self.database_name + ';'
                              'Trusted_Connection=' + self.trusted_connection + ';')
    except Exception as e:
        conn = ""
        self.print_error("Failed to connect to the database.", e)
    return conn
############################
def Find_LoginsWithWeakPassword(self, conn):
    try:
        cursor = conn.cursor()
        query_result = cursor.execute('''SELECT * FROM table_name''')
    except Exception as e:
        query_result = ""
        self.print_error("Select failed in Find_LoginsWithWeakPassword", e)
    return query_result
If I only connect once, the second and subsequent select methods have no effect.
Why?
When you call
weak_password_list = db.Find_LoginsWithWeakPassword(conn)
the function returns the pyodbc Cursor object returned by .execute():
<pyodbc.Cursor object at 0x012F4F60>
You are not calling .fetchall() (or similar) on it, so the connection has an open cursor with unconsumed results. If you do your next call
logins_with_expired_password = db.LoginsWithExpiredPassword(conn)
without first (implicitly) closing the existing connection by clobbering it, then .execute() will fail with
('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')
TL;DR: Consume your result sets before calling another function, either by having the functions themselves call .fetchall() or by calling .fetchall() on the cursor objects they return.
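The first fix can be sketched with sqlite3 (which ships with Python) standing in for pyodbc; the table and function names are invented for the example, but the pattern is the same: each helper calls .fetchall() before returning, so no open cursor is left holding unread results on the shared connection:

```python
import sqlite3

# One connection for the whole program, as the asker wanted.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logins (name TEXT, weak INTEGER, expired INTEGER);
    INSERT INTO logins VALUES ('alice', 1, 0), ('bob', 0, 1);
""")

def find_logins_with_weak_password(conn):
    cursor = conn.cursor()
    cursor.execute("SELECT name FROM logins WHERE weak = 1")
    return cursor.fetchall()  # consume the result set before returning

def find_logins_with_expired_password(conn):
    cursor = conn.cursor()
    cursor.execute("SELECT name FROM logins WHERE expired = 1")
    return cursor.fetchall()

# The same connection now serves every call, because each result set
# is fully consumed before the next query runs.
print(find_logins_with_weak_password(conn))     # [('alice',)]
print(find_logins_with_expired_password(conn))  # [('bob',)]
```

With the SQL Server ODBC driver the unconsumed cursor actually raises the "Connection is busy" error quoted above; sqlite3 is more forgiving, but the fetch-before-return discipline is what makes the single-connection design work in both cases.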
I have written a function for connecting to a database using pymysql. Here is my code:
def SQLreadrep(sql):
    connection = pymysql.connect(host=############,
                                 user=#######,
                                 password=########,
                                 db=#########)
    with connection.cursor() as cursor:
        cursor.execute(sql)
        rows = cursor.fetchall()
    connection.commit()
    connection.close()
    return rows
I pass the SQL into this function and return the rows. However, I am making many quick queries to the database (something like "SELECT sku WHERE object='2J4423K'").
What is a way to avoid so many connections?
Should I be avoiding this many connections to begin with?
Could I crash a server using this many connections and queries?
Let me answer your last question first. Your function acquires a connection but closes it before returning, so unless you are multithreading or multiprocessing, you will never use more than one connection at a time and you should not crash the server.
The way to avoid the overhead of creating and closing so many connections would be to "cache" the connection. One way to do that would be to replace your function by a class:
import pymysql

class DB(object):
    def __init__(self, datasource, db_user, db_password):
        self.conn = pymysql.connect(db=datasource, user=db_user, password=db_password)

    def __del__(self):
        self.conn.close()

    def query(self, sql):
        with self.conn.cursor() as cursor:
            cursor.execute(sql)
            self.conn.commit()
            return cursor.fetchall()
Then you instantiate a DB object and invoke its query method. When the DB instance is garbage collected, the connection will be closed automatically.
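A minimal sketch of that usage, with sqlite3 standing in for pymysql so the example is self-contained (with pymysql you would construct the class exactly as shown above, passing your real credentials):

```python
import sqlite3

class DB(object):
    # Same caching idea as above: connect once in __init__, reuse
    # self.conn for every query. (sqlite3 cursors are not context
    # managers, so the cursor is used directly here.)
    def __init__(self, path):
        self.conn = sqlite3.connect(path)

    def __del__(self):
        self.conn.close()

    def query(self, sql):
        cursor = self.conn.cursor()
        cursor.execute(sql)
        self.conn.commit()
        return cursor.fetchall()

db = DB(":memory:")  # one connection for the lifetime of the object
db.query("CREATE TABLE sku (id TEXT, object TEXT)")
db.query("INSERT INTO sku VALUES ('2J4423K', 'widget')")
print(db.query("SELECT id FROM sku"))  # [('2J4423K',)]
```

Many quick queries now share a single cached connection instead of paying the connect/close overhead each time.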
I am using the psycopg2 library to handle connections to a Postgres database.
Are the following two approaches to handling a db connection comparable?
Snippet 1:
cnx = connect(user=<...>, password=<...>, host=<...>, database=<...>)
cursor = cnx.cursor()
cursor.execute(sql)
cnx.close()
Snippet 2:
with psycopg2.connect(user=<...>, password=<...>, host=<...>, database=<...>) as cnx:
    cursor = cnx.cursor()
    cursor.execute(sql)
I am especially interested in whether using WITH automatically closes the connection. I was told the second approach is preferable because it closes the connection automatically, yet when I test it with cnx.closed it shows the connection is still open.
The with block on exit closes (commits or rolls back) a transaction, not the connection. From the documentation:
Note that, unlike file objects or other resources, exiting the connection’s with block doesn’t close the connection but only the transaction associated with it [...]
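If you do want the connection closed when the block exits, contextlib.closing gives you that. A sketch with sqlite3 standing in for psycopg2 (sqlite3's own connection `with` block likewise ends the transaction without closing the connection, so the parallel holds):

```python
import contextlib
import sqlite3

# contextlib.closing calls .close() on exit, which the connection's
# own context manager does not do.
with contextlib.closing(sqlite3.connect(":memory:")) as cnx:
    cursor = cnx.cursor()
    cursor.execute("SELECT 1")
    row = cursor.fetchone()

print(row)  # (1,)
# cnx is now really closed; using it again would raise an error.
```

The same wrapper works around psycopg2.connect(...), giving you both the automatic transaction handling and an automatic close.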
According to http://docs.sqlalchemy.org/en/rel_0_9/core/pooling.html#disconnect-handling-pessimistic, SQLAlchemy can be instrumented to reconnect if an entry in the connection pool is no longer valid. I create the following test case to test this:
import subprocess
from sqlalchemy import create_engine, event
from sqlalchemy import exc
from sqlalchemy.pool import Pool
@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        print "pinging server"
        cursor.execute("SELECT 1")
    except:
        print "raising disconnect error"
        raise exc.DisconnectionError()
    cursor.close()

engine = create_engine('postgresql://postgres@localhost/test')
connection = engine.connect()

subprocess.check_call(['psql', str(engine.url), '-c',
                       "select pg_terminate_backend(pid) from pg_stat_activity " +
                       "where pid <> pg_backend_pid() " +
                       "and datname='%s';" % engine.url.database],
                      stdout=subprocess.PIPE)

result = connection.execute("select 'OK'")
for row in result:
    print "Success!", " ".join(row)
But instead of recovering I receive this exception:
sqlalchemy.exc.OperationalError: (OperationalError) terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Since "pinging server" is printed on the terminal it seems safe to conclude that the event listener is attached. How can SQLAlchemy be taught to recover from a disconnect?
It looks like the checkout event is only fired when you first get a connection from the pool (e.g. your connection = engine.connect() line).
If you subsequently lose your connection, you will have to explicitly replace it, so you could just grab a new one, and retry your sql:
try:
    result = connection.execute("select 'OK'")
except sqlalchemy.exc.OperationalError:  # may need more exceptions here
    connection = engine.connect()  # grab a new connection
    result = connection.execute("select 'OK'")  # and retry
This would be a pain to do around every bit of sql, so you could wrap database queries using something like:
def db_execute(conn, query):
    try:
        result = conn.execute(query)
    except sqlalchemy.exc.OperationalError:  # may need more exceptions here (or trap all)
        conn = engine.connect()  # replace your connection
        result = conn.execute(query)  # and retry
    return result
The following:
result = db_execute(connection, "select 'OK'")
should now succeed.
Another option would be to also listen for the invalidate event, and take some action at that time to replace your connection.