I have a Flask-RESTful project that uses a MySQL database. I am currently using PyMySQL to connect to the database; I previously tried flask-mysqldb and flask-mysql, but they didn't work out because I need to use the DB connection outside the request methods (primarily for decorators), so I switched to PyMySQL. Now I've found an issue with concurrency: whenever I make more than two concurrent requests at the same time (which means two concurrent queries), I get this error:
MySQLdb._exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now")
I do in fact call cursor.nextset() after each query to make sure the result set is fully consumed, and I made sure all my queries and stored procedures are correct and problem-free. I started to think the issue was caused by the server using only one cursor, since the cursor is defined in a separate module that all the views/resources import to do their work (they don't create a new cursor), but even after I made every view/resource method create its own cursor and close it afterwards, I was still getting the same error.
this is how I execute queries:
cursor.callproc('do_something', (arg1, arg2))
result = cursor.fetchall()
cursor.nextset()
It doesn't have to do with race conditions either, because the queries don't write anything. I tried serving the application with Waitress, but I am still getting the problem.
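For reference, a minimal sketch of what a fully per-request connection and cursor would look like with PyMySQL (connection details are placeholders); PyMySQL connections are not safe to share between threads, so each request gets its own:

import pymysql

def get_connection():
    # One connection per request/thread; PyMySQL connection objects
    # must not be shared between threads.
    return pymysql.connect(host='localhost', user='user', password='secret',
                           db='mydb', cursorclass=pymysql.cursors.DictCursor)

def call_do_something(arg1, arg2):
    conn = get_connection()
    try:
        with conn.cursor() as cursor:
            cursor.callproc('do_something', (arg1, arg2))
            result = cursor.fetchall()
            cursor.nextset()
        return result
    finally:
        conn.close()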
For a MySQLdb stored procedure:
cursor.callproc('do_something', (arg1, arg2))
Here you have to select a session variable for each outgoing (OUT) parameter:
cursor.execute('SELECT @outparameter1, @outparameter2, @outparameter3')
cursor.fetchall()
For PyMySQL you need some more changes; here the procedure is invoked with a plain CALL statement:
cursorType = pymysql.cursors.DictCursor
databaseConnection = pymysql.connect(host=hostName,
                                     user=userName,
                                     password=userPassword,
                                     db=databaseName,
                                     charset=databaseCharset,
                                     cursorclass=cursorType)
try:
    # Cursor object creation
    cursorObject = databaseConnection.cursor()
    # Execute the stored procedure (PyMySQL uses %s placeholders)
    cursorObject.execute("CALL do_something(%s, %s)", (arg1, arg2))
    # Print the result of the executed stored procedure
    for result in cursorObject.fetchall():
        print(result)
finally:
    databaseConnection.close()
For mysql.connector:
You have to fetch the procedure's result sets via stored_results(), like:
cursor.callproc('do_something', (arg1, arg2))
for result in cursor.stored_results():
    result.fetchall()
I have a multi-file Python project in which many of the files make connections to an Azure SQL Database. The project works fine but, for some reason, one of the files stops being able to connect to the database after the application has been running for a while, and I can see no reason why, especially when other connection attempts work fine.
The connection string, for all the connections (so across all the files), is defined as follows:
SQLServer = os.getenv('SQL_SERVER')
SQLDatabase = os.getenv('SQL_DATABASE')
SQLLogin = os.getenv('SQL_LOGIN')
SQLPassword = os.getenv('SQL_PASSWORD')
SQLConnString = 'Driver={ODBC Driver 17 for SQL Server};Server=' + SQLServer + ';Database=' + SQLDatabase + ';UID='+ SQLLogin +';PWD=' + SQLPassword
sqlConn = pyodbc.connect(SQLConnString,timeout=20)
And the function I am calling when the error happens is below:
def iscaptain(guild, user):
    userRoles = user.roles
    roleParam = ""
    for role in userRoles:
        roleParam = roleParam + "," + str(role.id)
    cursor = sqlConn.cursor()
    roleParam = roleParam[1:]
    cursor.execute('EXEC corgi.GetUserAccess ?, ?;', guild.id, roleParam)
    for row in cursor:
        if row[1] == "Team Captain":
            cursor.close()
            return True
    cursor.close()
    return False
The error specifically happens at cursor.execute. I currently get the error
pyodbc.OperationalError: ('08S01', '[08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: Error code 0x68 (104) (SQLExecDirectW)')
Previously I didn't have the timeout on the connection in the specific file that was having the problem, and I got a different error:
Communication link failure
Apologies, I don't have the full previous error.
Other connections, in other files in the same project, work fine, so the problem is not a network issue; if it were, none of the connections would work. The problem only happens in one file, where all the connection attempts fail.
Googling the latest error really doesn't get me far. For example, there's a GitHub issue that gets nowhere, and this question isn't related, as connecting works fine from other files.
Note, as well, that this happens after a period of time; I don't really know how long that period is, but it's certainly hours. Restarting the project fixes the issue as well; the above function will work fine again. That isn't really a solution, though; I can't keep restarting the application ad hoc.
The error is immediate as well; it's as if Python/pyodbc isn't even trying to connect. When stepping into cursor.execute the error is generated straight away; it's not like a timeout where you wait a few seconds, or more, for it to occur.
I'm at a loss here. Why is that file (and only that one) unable to connect any more later on? There are no locks on the database either, so it's not like I have a transaction left hanging; though in that case I would expect a timeout error, as the procedure would be unable to acquire a lock on the data.
Note, as well, that if I manually execute the procedure in sqlcmd/SSMS/ADS, data is returned fine, so the procedure itself works. And, again, if I restart the application it'll work without issue for many hours.
Edit: I attempted the answer from Sabik below; however, this only broke the application, unfortunately. The solution they provided had the parameter self on the function validate_conn, so calling validate_conn() failed as I don't have anything to pass for this "self". The call they said to use, just validate_conn, didn't do anything; it doesn't call the function (which I expected). Removing the parameter, and the references to self, also broke the application, stating that sqlConn wasn't declared even though it was; see the image below where you can clearly see that sqlConn has a value:
Yet immediately after that line I get the error below:
UnboundLocalError: local variable 'sqlConn' referenced before assignment
So something appears to be wrong with their code, but I don't know what.
One possibility being discussed in the comments is that it's a 30-minute idle timeout on the database end, in which case one solution would be to record the time the connection has been opened, then reconnect if it's been more than 25 minutes.
This would be a method like:
def validate_conn(self):
    if self.sqlConn is None or datetime.datetime.now() > self.conn_expiry:
        try:
            self.sqlConn.close()
        except:  # pylint: disable=broad-except
            # suppress all exceptions; we're in any case about to reconnect,
            # which will either resolve the situation or raise its own error
            pass
        self.sqlConn = pyodbc.connect(...)
        self.conn_expiry = datetime.datetime.now() + datetime.timedelta(minutes=25)
(Adjust as appropriate if sqlConn is a global.)
At the beginning of each function which uses sqlConn, call validate_conn first, then use the connection freely.
Note: this is one of the rare situations in which suppressing all exceptions is reasonable; we're in any case about to reconnect to the database, which will either resolve the situation satisfactorily, or raise its own error.
Edit: If sqlConn is a global, it will need to be declared as such in the function:
def validate_conn():
    global sqlConn, conn_expiry
    if sqlConn is None or datetime.datetime.now() > conn_expiry:
        try:
            sqlConn.close()
        except:  # pylint: disable=broad-except
            # suppress all exceptions; we're in any case about to reconnect,
            # which will either resolve the situation or raise its own error
            pass
        sqlConn = pyodbc.connect(...)
        conn_expiry = datetime.datetime.now() + datetime.timedelta(minutes=25)
As an unrelated style note, a shorter way to write the function would be to use (a) a with statement and (b) the any() built-in, like this:
with sqlConn.cursor() as cursor:
    roleParam = roleParam[1:]
    cursor.execute('EXEC corgi.GetUserAccess ?, ?;', guild.id, roleParam)
    return any(row[1] == "Team Captain" for row in cursor)
The with statement has the advantage that the cursor is guaranteed to be closed regardless of how the code is exited, even if there's an unexpected exception or if a later modification adds more branches.
Although the solution from Sabik didn't work for me, the answer did push me in the right direction to find a solution. That was, specifically, the use of the with clauses.
Instead of having a long-lasting connection, as I was informed I had, I've now changed these to short-lived connections which I open with a with statement, and then changed the cursor to a with as well. So, for the iscaptain function, I now have code that looks like this:
def iscaptain(guild, user):
    userRoles = user.roles
    roleParam = ""
    for role in userRoles:
        roleParam = roleParam + "," + str(role.id)
    #sqlConn = pyodbc.connect(SQLConnString,timeout=20)
    with pyodbc.connect(SQLConnString, timeout=20) as sqlConn:
        with sqlConn.cursor() as cursor:
            roleParam = roleParam[1:]
            cursor.execute('EXEC corgi.GetUserAccess ?, ?;', guild.id, roleParam)
            return any(row[1] == "Team Captain" for row in cursor)
    return False
It did appear that Azure was closing the connections after a period of time, and thus when a connection was reused it failed to connect. As, however, both hosts are in Azure, both the server running the Python application and the SQL database, I am happy to reconnect as needed, as speed should not be a massive issue; certainly it hasn't been during the last 48 hours of testing.
This does mean I have a lot of code to refactor, but for the stability it's a must.
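As a sketch of how that refactor could be kept small, a helper like the following (names are illustrative, not from the project) wraps the connect/cursor/close dance around the SQLConnString shown above:

from contextlib import contextmanager

import pyodbc

@contextmanager
def get_cursor():
    # Open a short-lived connection and cursor, and close both afterwards,
    # even if the caller raises.
    conn = pyodbc.connect(SQLConnString, timeout=20)
    cursor = conn.cursor()
    try:
        yield cursor
    finally:
        cursor.close()
        conn.close()

# Usage:
# with get_cursor() as cursor:
#     cursor.execute('EXEC corgi.GetUserAccess ?, ?;', guild.id, roleParam)
#     return any(row[1] == "Team Captain" for row in cursor)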
I am writing code to create a GUI in Python in the Spyder environment of Anaconda. Within this code I operate on a PostgreSQL database, and I therefore use the psycopg2 database adapter so that I can interact with it directly from the GUI.
The code is too long to post here, as it is over 3000 lines, but to summarize, I have no problem interacting with my database except when I try to drop a table.
When I do so, the GUI frames become unresponsive, the drop table query doesn't drop the intended table and no errors or anything else of that kind are thrown.
Within my code, all operations which result in a table being dropped are processed via a function (DeleteTable). When I call this function, there are no problems as I have inserted several print statements previously which confirmed that everything was in order. The problem occurs when I execute the statement with the cur.execute(sql) line of code.
Can anybody figure out why my tables won't drop?
def DeleteTable(table_name):
    conn = psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'")
    cur = conn.cursor()
    sql = """DROP TABLE """ + table_name + """;"""
    cur.execute(sql)
    conn.commit()
That must be because a concurrent transaction is holding a lock that blocks the DROP TABLE statement.
Examine the pg_stat_activity view and watch out for sessions whose state is idle in transaction or active and whose xact_start is more than a few seconds ago.
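As a rough sketch of that check (the connection parameters are copied from the question), run from psycopg2 itself:

import psycopg2

conn = psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'")
cur = conn.cursor()
# Look for other sessions that have held a transaction open for a while;
# any of them may hold a lock that blocks DROP TABLE.
cur.execute("""
    SELECT pid, state, xact_start, query
    FROM pg_stat_activity
    WHERE state IN ('idle in transaction', 'active')
      AND xact_start < now() - interval '5 seconds'
      AND pid <> pg_backend_pid()
""")
for row in cur.fetchall():
    print(row)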
This is essentially an application bug: you must make sure that all transactions are closed immediately, otherwise Bad Things can happen.
I was having the same issue when using psycopg2 within Airflow's Postgres hook, and I resolved it with a with statement. This probably resolves the issue because the connection becomes local to the with block.
# Airflow 2.x import paths
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook

def drop_table():
    with PostgresHook(postgres_conn_id="your_connection").get_conn() as conn:
        cur = conn.cursor()
        cur.execute("DROP TABLE IF EXISTS your_table")

task_drop_table = PythonOperator(
    task_id="drop_table",
    python_callable=drop_table
)
And a solution is possible for the original code above like this (I didn't test this one):
def DeleteTable(table_name):
    with psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'") as conn:
        cur = conn.cursor()
        sql = """DROP TABLE """ + table_name + """;"""
        cur.execute(sql)
        conn.commit()
Please comment if anyone tries this.
I'm using PostgreSQL 9.3, and SQLAlchemy 1.0.11
I have code that looks like this:
import sqlalchemy as sa
engine = sa.create_engine('postgresql+psycopg2://me@myhost/mydb')
conn = engine.connect()
metadata = sa.MetaData()
# Real table has more columns
mytable = sa.Table(
    'my_temp_table', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('something', sa.String(200)),
    prefixes=['TEMPORARY'],
)
metadata.create_all(engine)
pg_conn = engine.raw_connection()
with pg_conn.cursor() as cursor:
    cursor.copy_expert('''COPY my_temp_table (id, something)
                          FROM STDIN WITH CSV''',
                       open('somecsvfile', 'r'))
Now this works just fine - cursor.rowcount reports the expected number of rows inserted. I can even run cursor.execute('SELECT count(*) FROM my_temp_table'); print(cursor.fetchone()) and it will display the same number. The problem is when I try to run a query from SQLAlchemy's connection, e.g.:
result = conn.execute(sa.text('SELECT count(*) FROM my_temp_table'))
It doesn't matter where I put that. I've tried several places:
inside the with block
outside the with block
after a cursor.close()
after a pg_conn.close()
Nothing seems to work - no matter where I run the query from, it barfs with:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "my_temp_table" does not exist
The funny thing is that if I wrap that code in a try/except then I can do cursor.execute(...) in the except block successfully.
Actually, now that I'm writing this out, it appears that using the SQLAlchemy connection anywhere fails to see that those tables exist.
So what gives? Why doesn't my SQLAlchemy connection see these tables, but the postgres (engine.raw_connection()) does?
Edit:
To further the mystery - if I create the connection after the metadata.create_all(engine), it works! Well, sort of.
I can select from the tables, but then when I get the engine.raw_connection() it fails on .copy_expert because it can't find the table.
The first thing to note is that temporary tables are only visible to the connection which created them.
The second is that an Engine doesn't encapsulate a single connection; it manages a connection pool.
Finally, the documentation points out that operations performed directly on an Engine (engine.execute("select ...") in their example) will internally acquire and release their own connections.
With all of this in mind, it's clear what's going on in your example:
conn = engine.connect() acquires Connection #1 from the pool.
metadata.create_all(engine) implicitly acquires Connection #2 (as #1 is still "in use" from the engine's perspective), uses it to create the table, and releases it back to the pool.
pg_conn = engine.raw_connection() acquires #2 again, so the COPY executed via this object can still see the table.
conn is still using #1, and nothing you do via this object will be able to see your temp table.
In your second case:
metadata.create_all(engine) implicitly acquires/uses/releases Connection #1.
conn = engine.connect() acquires #1 and holds it.
pg_conn = engine.raw_connection() acquires #2, and the COPY fails to find the temp table.
The moral of the story: if you're doing something which relies on the connection state, you'd better be sure which connection you're using. Running commands directly on the engine is fine for standalone operations, but for anything involving temp tables, you should acquire one connection and stick with it through every step (including the table creation, which I suggest you change to metadata.create_all(conn)).
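A rough, untested sketch of that single-connection approach, reusing the table definition and COPY from the question:

import sqlalchemy as sa

engine = sa.create_engine('postgresql+psycopg2://me@myhost/mydb')
metadata = sa.MetaData()
mytable = sa.Table(
    'my_temp_table', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('something', sa.String(200)),
    prefixes=['TEMPORARY'],
)

conn = engine.connect()                   # acquire one connection and keep it
metadata.create_all(conn)                 # create the temp table on *this* connection
with conn.connection.cursor() as cursor:  # raw DBAPI cursor of the same connection
    cursor.copy_expert('''COPY my_temp_table (id, something)
                          FROM STDIN WITH CSV''',
                       open('somecsvfile', 'r'))
result = conn.execute(sa.text('SELECT count(*) FROM my_temp_table'))
print(result.scalar())

Because everything, from CREATE TEMPORARY TABLE to the final SELECT, runs on the same pooled connection, the temp table stays visible throughout.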
Well, this doesn't answer the why, but it is how to accomplish what I want.
Rather than:
pg_conn = engine.raw_connection()
with pg_conn.cursor() as cursor:
Just replace it with:
with conn.connection.cursor() as cursor:
The SQLAlchemy connection object exposes its underlying DBAPI connection via the .connection property, and whatever magic is involved there does the right thing.
SQLAlchemy (0.9.8), mysql-5.6.21-osx10.8-x86_64 and Mac OS X 10.10.3 (Yosemite)
I keep getting intermittent:
InterfaceError: (InterfaceError) 2013: Lost connection to MySQL server during query u'SELECT..... '
I have read a few threads, and most cases are resolved by adding this to my.cnf:
max_allowed_packet = 1024M
which should be more than big enough for what I am trying to do. After doing this, I still hit it intermittently. I also put these lines in /etc/my.cnf:
log-error = "/Users/<myname>/tmp/mysql.err.log"
log-warnings = 3
I am hoping to get more details, but all I see is something like this:
[Warning] Aborted connection 444 to db: 'dbname' user: 'root' host: 'localhost' (Got an error reading communication packets)
I have reached a point where I think more detailed or better logging may help, or perhaps there's something else I could try before that.
Thanks.
Looks like your MySQL connection is timing out after a long period of inactivity; I bet it wouldn't happen if you were constantly querying your DB with the existing settings. There are a couple of settings on both the MySQL and SQLAlchemy sides which should resolve this issue:
Check your SQLAlchemy engine's pool_recycle value; try a different/smaller value, e.g. 1800 (seconds). If you're reading DB settings from a file, set it as
pool_recycle: 1800
otherwise specify it during engine init, e.g.
from sqlalchemy import create_engine
e = create_engine("mysql://user:pass@localhost/db", pool_recycle=1800)
check / modify your wait_timeout MySQL variable, see https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_wait_timeout which is the number of seconds the server waits for activity on a noninteractive connection before closing it. e.g.
show global variables like 'wait_timeout';
find a combination that works for your environment.
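For instance, a combination might look like this (the 3600/1800 values are only illustrative, not recommendations):

# MySQL side: close idle connections after an hour (or set wait_timeout in my.cnf)
#   SET GLOBAL wait_timeout = 3600;
# SQLAlchemy side: recycle pooled connections well before MySQL drops them
from sqlalchemy import create_engine
e = create_engine("mysql://user:pass@localhost/db", pool_recycle=1800)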
There are two params that could help: pool_recycle and pool_pre_ping.
pool_recycle decides after how many seconds of inactivity a connection is recycled. MySQL's default (wait_timeout) is 8 hours, while SQLAlchemy's default is -1, which means never recycle; that is the mismatch: if MySQL has dropped the connection and SQLAlchemy has not recycled it, the Lost connection exception is raised.
pool_pre_ping tests the connection's liveness; as I understand it, this can be used as a back-up strategy: if a connection has been dropped by MySQL but not yet recognized as such by SQLAlchemy, SQLAlchemy will do a check and avoid using the invalid connection.
create_engine(<mysql conn url>, pool_recycle=60 * 5, pool_pre_ping=True)
Based on suggestions from this, this and many other articles on the internet, wrapping all my functions with the following decorator helped me resolve the "Lost Connection" issue with MariaDB as the backend DB. Please note that db below is an instance of flask_sqlalchemy.SQLAlchemy, but the concept remains the same for a plain SQLAlchemy session too.
def manage_session(f):
    def inner(*args, **kwargs):
        # MANUAL PRE PING
        try:
            db.session.execute("SELECT 1;")
            db.session.commit()
        except:
            db.session.rollback()
        finally:
            db.session.close()
        # SESSION COMMIT, ROLLBACK, CLOSE
        try:
            res = f(*args, **kwargs)
            db.session.commit()
            return res
        except Exception as e:
            db.session.rollback()
            raise e
            # OR return traceback.format_exc()
        finally:
            db.session.close()
    return inner
I also added a pool_recycle of 50 seconds in the Flask-SQLAlchemy config, but that didn't visibly contribute to the solution.
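For reference, this is roughly how those engine options can be set in a Flask-SQLAlchemy config, assuming a Flask-SQLAlchemy version that supports SQLALCHEMY_ENGINE_OPTIONS; the URI is a placeholder:

class Config:
    SQLALCHEMY_DATABASE_URI = "mysql+pymysql://user:pass@localhost/mydb"  # placeholder
    # Passed straight through to create_engine() by Flask-SQLAlchemy
    SQLALCHEMY_ENGINE_OPTIONS = {
        "pool_recycle": 50,
        "pool_pre_ping": True,
    }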
EDIT1:
Below is a sample snippet of how it was used in the final code:
from flask_restful import Resource

class DataAPI(Resource):
    @manage_session
    def get(self):
        # Get data rows from DB
None of the previous solutions worked for me. I managed to solve it and developed a theory. I consider myself a layman in MySQL architecture, so if you understand this better, please add to my suggestion.
In my case I was getting this error, but the query in question was not the problem, and neither was the query before it. What happened is that I had saved the results of some previous queries in instances, and I believe that this kept a connection to the database open. After a series of processing steps I only performed another query minutes later.
That connection ended up dying without warning, and when I tried to perform a new query MySQL threw this error. For some reason increasing the connection timeout did not help. I noticed that making empty commits over time fixed the problem:
db.session.commit()
I'm using Psycopg2 in Python to access a PostgreSQL database. I'm curious if it's safe to use the with closing() pattern to create and use a cursor, or if I should use an explicit try/except wrapped around the query. My question is concerning inserting or updating, and transactions.
As I understand it, all Psycopg2 queries occur within a transaction, and it's up to calling code to commit or rollback the transaction. If within a with closing(... block an error occurs, is a rollback issued? In older versions of Psycopg2, a rollback was explicitly issued on close() but this is not the case anymore (see http://initd.org/psycopg/docs/connection.html#connection.close).
My question might make more sense with an example. Here's an example using with closing(...
with closing(db.cursor()) as cursor:
    cursor.execute("""UPDATE users
                      SET password = %s, salt = %s
                      WHERE user_id = %s""",
                   (pw_tuple[0], pw_tuple[1], user_id))
    module.raise_unexpected_error()
    db.commit()
What happens when module.raise_unexpected_error() raises its error? Is the transaction rolled back? As I understand transactions, I either need to commit them or roll them back. So in this case, what happens?
Alternately I could write my query like this:
cursor = None
try:
    cursor = db.cursor()
    cursor.execute("""UPDATE users
                      SET password = %s, salt = %s
                      WHERE user_id = %s""",
                   (pw_tuple[0], pw_tuple[1], user_id))
    module.raise_unexpected_error()
    db.commit()
except BaseException:
    if cursor is not None:
        db.rollback()
finally:
    if cursor is not None:
        cursor.close()
Also I should mention that I have no idea if Psycopg2's connection class cursor() method could raise an error or not (the documentation doesn't say) so better safe than sorry, no?
Which method of issuing a query and managing a transaction should I use?
Your link to the Psycopg2 docs kind of explains it itself, no?
... Note that closing a connection without committing the changes first will cause any pending change to be discarded as if a ROLLBACK was performed (unless a different isolation level has been selected: see set_isolation_level()).
Changed in version 2.2: previously an explicit ROLLBACK was issued by Psycopg on close(). The command could have been sent to the backend at an inappropriate time, so Psycopg currently relies on the backend to implicitly discard uncommitted changes. Some middleware are known to behave incorrectly though when the connection is closed during a transaction (when status is STATUS_IN_TRANSACTION), e.g. PgBouncer reports an unclean server and discards the connection. To avoid this problem you can ensure to terminate the transaction with a commit()/rollback() before closing.
So, unless you're using a different isolation level, or using PgBouncer, your first example should work fine. However, if you desire some finer-grained control over exactly what happens during a transaction, then the try/except method might be best, since it parallels the database transaction state itself.
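For what it's worth, psycopg2 (2.5 and later) also lets the connection and cursor act as context managers themselves: leaving a with connection block commits on success and rolls back on an exception, while with connection.cursor() simply closes the cursor. A minimal sketch of that pattern, reusing the UPDATE from the question:

with db:  # commits on success, rolls back if the block raises
    with db.cursor() as cursor:  # closes the cursor on exit
        cursor.execute("""UPDATE users
                          SET password = %s, salt = %s
                          WHERE user_id = %s""",
                       (pw_tuple[0], pw_tuple[1], user_id))

Note that the with db: block does not close the connection; it only ends the transaction, so the connection can be reused afterwards.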