I've been trying to test this out, but haven't been able to come to a definitive answer. I'm using SQLAlchemy on top of MySQL and trying to prevent having threads that do a select, get a SHARED_READ lock on some table, and then hold on to it (preventing future DDL operations until it's released). This happens when queries aren't committed. I'm using SQLAlchemy Core, where as far as I could tell .execute() essentially works in autocommit mode, issuing a COMMIT after everything it runs unless explicitly told we're in a transaction. Nevertheless, in show processlist, I'm seeing sleeping threads that still have SHARED_READ locks on a table they once queried. What gives?
I assume from your post that you're operating in "non-transactional" mode: either using an SQLAlchemy Connection without an ongoing transaction, or the shorthand engine.execute(). In this mode of operation SQLAlchemy detects INSERT, UPDATE, DELETE, and DDL statements and automatically issues a COMMIT afterwards, but it does not do so for plain SELECT statements. See "Understanding Autocommit". For SELECTs that call mutating stored procedures and the like, which do require a commit, use
conn.execute(text('SELECT ...').execution_options(autocommit=True))
You should also consider closing connections when the thread is done with them for the time being. Closing calls rollback() on the underlying DBAPI connection, which per PEP 249 is (probably) always in a transactional state. The rollback removes the transactional state and/or locks, and the connection is then returned to the connection pool. This way you shouldn't need to worry about SELECTs not autocommitting.
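Here's a rough sketch of both suggestions together, assuming SQLAlchemy 1.x (where the autocommit execution option still exists; it was removed in 1.4). The URL, procedure, and table names are placeholders:

from sqlalchemy import create_engine, text

engine = create_engine('mysql://user:pass@localhost/mydb')  # placeholder URL

conn = engine.connect()
try:
    # A SELECT that mutates state (e.g. calls a stored procedure) can be
    # told to commit via the autocommit execution option:
    conn.execute(text('SELECT my_proc()').execution_options(autocommit=True))

    # A plain SELECT leaves the implicit transaction, and any metadata
    # locks it acquired, in place...
    rows = conn.execute(text('SELECT * FROM my_table')).fetchall()
finally:
    # ...so close the connection when done: this issues a rollback on the
    # DBAPI connection, releasing the locks, and returns it to the pool.
    conn.close()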
I have an SQLAlchemy session in a script. The script runs for a long time, and it only fetches data from the database, never updates or inserts.
I get quite a lot of errors like
sqlalchemy.exc.DBAPIError: (TransactionRollbackError) terminating connection due to conflict with recovery
DETAIL: User was holding a relation lock for too long.
The way I understand it, SQLAlchemy creates a transaction with the first select issued, and then reuses it. As my script may run for about an hour, it is very likely that a conflict comes up during the lifetime of that transaction.
To get rid of the error, I could use the deprecated autocommit mode (without doing anything more), but this is explicitly discouraged by the documentation.
What is the right way to deal with the error? Can I use ORM queries without transactions at all?
I ended up closing the session after (almost) every select, like
session.query(Foo).all()
session.close()
Since I do not use autocommit, a new transaction is automatically opened by the next query (sketched below).
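As a hedged sketch of that pattern in a long-running, read-only loop (Foo, process(), and the interval are placeholders from the question's setup):

import time

while True:
    foos = session.query(Foo).all()
    # Closing the session ends the implicit transaction, so the server-side
    # relation lock is released; the loaded objects stay usable (detached).
    session.close()
    process(foos)   # hypothetical helper
    # The next session.query() transparently begins a fresh transaction
    # on a pooled connection.
    time.sleep(60)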
If I want to start a transaction in my database through python do I have to execute the sql command 'BEGIN TRANSACTION' explicitly like this:
import sqlite3
conn = sqlite3.connect(db)
c = conn.cursor()
c.execute('BEGIN TRANSACTION;')
##... some updates on the database ...
conn.commit() ## or c.execute('COMMIT'). Are these two expressions the same?
Is the database locked for change from other clients when I establish the connection or when I begin the transaction or neither?
Only transactions lock the database.
However, Python tries to be clever and automatically begins transactions:
By default, the sqlite3 module opens transactions implicitly before a Data Modification Language (DML) statement (i.e. INSERT/UPDATE/DELETE/REPLACE), and commits transactions implicitly before a non-DML, non-query statement (i. e. anything other than SELECT or the aforementioned).
So if you are within a transaction and issue a command like CREATE TABLE ..., VACUUM, PRAGMA, the sqlite3 module will commit implicitly before executing that command. There are two reasons for doing that. The first is that some of these commands don’t work within transactions. The other reason is that sqlite3 needs to keep track of the transaction state (if a transaction is active or not).
You can control which kind of BEGIN statements sqlite3 implicitly executes (or none at all) via the isolation_level parameter to the connect() call, or via the isolation_level property of connections.
If you want autocommit mode, then set isolation_level to None.
Otherwise leave it at its default, which will result in a plain “BEGIN” statement, or set it to one of SQLite’s supported isolation levels: “DEFERRED”, “IMMEDIATE” or “EXCLUSIVE”.
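For instance, a short sketch of the three options (the database file name is a placeholder):

import sqlite3

# Autocommit mode: no implicit BEGIN; every statement takes effect at once,
# and you issue BEGIN/COMMIT yourself when you want a transaction.
conn_auto = sqlite3.connect('example.db', isolation_level=None)

# Default mode: the module emits a plain "BEGIN" before the first DML
# statement; you finish the transaction with conn.commit() or .rollback().
conn_default = sqlite3.connect('example.db')

# Explicit isolation level: e.g. "BEGIN IMMEDIATE" takes the write lock
# up front instead of on the first write.
conn_immediate = sqlite3.connect('example.db', isolation_level='IMMEDIATE')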
From the Python docs:
When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The timeout parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds).
If I want to start a transaction in my database through python do I have to execute the sql command 'BEGIN TRANSACTION' explicitly like this:
It depends on whether you are in auto-commit mode (in which case yes) or in manual-commit mode (in which case no before DML statements, but unfortunately yes before DDL or DQL statements, since manual-commit mode is incorrectly implemented in the current version of the SQLite3 database driver; see below).
conn.commit() ## or c.execute('COMMIT'). Are these two expressions the same?
Yes.
Is the database locked for change from other clients when I establish the connection or when I begin the transaction or neither?
When you begin the transaction (cf. SQLite3 documentation).
For a better understanding of auto-commit and manual commit modes, read this answer.
I'm building a WSGI web app and I have a MySQL database. I'm using MySQLdb, which provides cursors for executing statements and getting results. What is the standard practice for getting and closing cursors? In particular, how long should my cursors last? Should I get a new cursor for each transaction?
I believe you need to close the cursor before committing the connection. Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction? Is there a lot of overhead for getting new cursors, or is it just not a big deal?
Instead of asking what is standard practice, since that's often unclear and subjective, you might try looking to the module itself for guidance. In general, using the with keyword as another user suggested is a great idea, but in this specific circumstance it may not give you quite the functionality you expect.
As of version 1.2.5 of the module, MySQLdb.Connection implements the context manager protocol with the following code (github):
def __enter__(self):
    if self.get_autocommit():
        self.query("BEGIN")
    return self.cursor()

def __exit__(self, exc, value, tb):
    if exc:
        self.rollback()
    else:
        self.commit()
There are several existing Q&A about with already, or you can read Understanding Python's "with" statement, but essentially what happens is that __enter__ executes at the start of the with block, and __exit__ executes upon leaving the with block. You can use the optional syntax with EXPR as VAR to bind the object returned by __enter__ to a name if you intend to reference that object later. So, given the above implementation, here's a simple way to query your database:
connection = MySQLdb.connect(...)
with connection as cursor:  # connection.__enter__ executes at this line
    cursor.execute('select 1;')
    result = cursor.fetchall()  # connection.__exit__ executes after this line
print result  # prints "((1L,),)"
The question now is: what are the states of the connection and the cursor after exiting the with block? The __exit__ method shown above calls only self.rollback() or self.commit(), and neither of those methods goes on to call the close() method. The cursor itself has no __exit__ method defined (and it wouldn't matter if it did, because with is only managing the connection). Therefore, both the connection and the cursor remain open after exiting the with block. This is easily confirmed by adding the following code to the above example:
try:
    cursor.execute('select 1;')
    print 'cursor is open;',
except MySQLdb.ProgrammingError:
    print 'cursor is closed;',
if connection.open:
    print 'connection is open'
else:
    print 'connection is closed'
You should see the output "cursor is open; connection is open" printed to stdout.
I believe you need to close the cursor before committing the connection.
Why? The MySQL C API, which is the basis for MySQLdb, does not implement any cursor object, as implied in the module documentation: "MySQL does not support cursors; however, cursors are easily emulated." Indeed, the MySQLdb.cursors.BaseCursor class inherits directly from object and imposes no such restriction on cursors with regard to commit/rollback. An Oracle developer had this to say:
cnx.commit() before cur.close() sounds most logical to me. Maybe you can go by the rule: "Close the cursor if you do not need it anymore." Thus commit() before closing the cursor. In the end, for Connector/Python, it does not make much difference, but for other databases it might.
I expect that's as close as you're going to get to "standard practice" on this subject.
Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction?
I very much doubt it, and in trying to do so, you may introduce additional human error. Better to decide on a convention and stick with it.
Is there a lot of overhead for getting new cursors, or is it just not a big deal?
The overhead is negligible, and doesn't touch the database server at all; it's entirely within the implementation of MySQLdb. You can look at BaseCursor.__init__ on github if you're really curious to know what's happening when you create a new cursor.
Going back to earlier when we were discussing with, perhaps now you can understand why the MySQLdb.Connection class __enter__ and __exit__ methods give you a brand new cursor object in every with block and don't bother keeping track of it or closing it at the end of the block. It's fairly lightweight and exists purely for your convenience.
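For instance, a quick (hypothetical) demonstration that consecutive with blocks on the same connection each receive a distinct cursor:

connection = MySQLdb.connect(...)
with connection as cursor_a:
    cursor_a.execute('select 1;')
with connection as cursor_b:   # a brand new cursor object
    cursor_b.execute('select 2;')
print cursor_a is cursor_b     # prints "False"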
If it's really that important to you to micromanage the cursor object, you can use contextlib.closing to make up for the fact that the cursor object has no defined __exit__ method. For that matter, you can also use it to force the connection object to close itself upon exiting a with block. This should output "my_curs is closed; my_conn is closed":
from contextlib import closing
import MySQLdb

with closing(MySQLdb.connect(...)) as my_conn:
    with closing(my_conn.cursor()) as my_curs:
        my_curs.execute('select 1;')
        result = my_curs.fetchall()

try:
    my_curs.execute('select 1;')
    print 'my_curs is open;',
except MySQLdb.ProgrammingError:
    print 'my_curs is closed;',
if my_conn.open:
    print 'my_conn is open'
else:
    print 'my_conn is closed'
Note that with closing(arg_obj) will not call the argument object's __enter__ and __exit__ methods; it will only call the argument object's close method at the end of the with block. (To see this in action, simply define a class Foo with __enter__, __exit__, and close methods containing simple print statements, and compare what happens when you do with Foo(): pass to what happens when you do with closing(Foo()): pass; a sketch appears at the end of this answer.) This has two significant implications:
First, if autocommit mode is enabled, MySQLdb will BEGIN an explicit transaction on the server when you use with connection and commit or roll back the transaction at the end of the block. These are default behaviors of MySQLdb, intended to protect you from MySQL's default behavior of immediately committing any and all DML statements. MySQLdb assumes that when you use a context manager, you want a transaction, and uses the explicit BEGIN to bypass the autocommit setting on the server. If you're used to using with connection, you might think autocommit is disabled when actually it was only being bypassed. You might get an unpleasant surprise if you add closing to your code and lose transactional integrity; you won't be able to roll back changes, you may start seeing concurrency bugs, and it may not be immediately obvious why.
Second, with closing(MySQLdb.connect(user, pass)) as VAR binds the connection object to VAR, in contrast to with MySQLdb.connect(user, pass) as VAR, which binds a new cursor object to VAR. In the latter case you would have no direct access to the connection object! Instead, you would have to use the cursor's connection attribute, which provides proxy access to the original connection. When the cursor is closed, its connection attribute is set to None. This results in an abandoned connection that will stick around until one of the following happens:
All references to the cursor are removed
The cursor goes out of scope
The connection times out
The connection is closed manually via server administration tools
You can test this by monitoring open connections (in Workbench or by using SHOW PROCESSLIST) while executing the following lines one by one:
with MySQLdb.connect(...) as my_curs:
    pass
my_curs.close()
my_curs.connection          # None
my_curs.connection.close()  # throws AttributeError, but connection still open
del my_curs                 # connection will close here
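And here is the Foo experiment promised earlier, as a minimal sketch (Python 2 print syntax, to match the examples above):

from contextlib import closing

class Foo(object):
    def __enter__(self):
        print '__enter__'
        return self
    def __exit__(self, exc, value, tb):
        print '__exit__'
    def close(self):
        print 'close'

with Foo():
    pass             # prints "__enter__" then "__exit__"

with closing(Foo()):
    pass             # prints only "close"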
It's better to rewrite this using the with keyword. with will take care of closing the cursor automatically, which is important because it's an unmanaged resource, and it will close the cursor even if an exception is raised.
from contextlib import closing
import MySQLdb

''' At the beginning you open a DB connection. The exact moment you
open the connection depends on your approach:
- it can be inside the same function where you work with cursors
- in the class constructor
- etc.
'''
db = MySQLdb.connect("host", "user", "pass", "database")
with closing(db.cursor()) as cur:
    cur.execute("somestuff")
    results = cur.fetchall()
    # do stuff with results

    cur.execute("insert operation")
    # call commit if you do INSERT, UPDATE or DELETE operations
    db.commit()

    cur.execute("someotherstuff")
    results2 = cur.fetchone()
    # do stuff with results2

# at some point, when you decide you do not need
# the open connection anymore, you close it
db.close()
Note: this answer is for PyMySQL, which is a drop-in replacement for MySQLdb and effectively the latest version of MySQLdb since MySQLdb stopped being maintained. I believe everything here is also true of the legacy MySQLdb, but haven't checked.
First of all, some facts:
Python's with syntax calls the context manager's __enter__ method before executing the body of the with block, and its __exit__ method afterwards.
Connections have an __enter__ method that does nothing besides create and return a cursor, and an __exit__ method that either commits or rolls back (depending upon whether an exception was thrown). It does not close the connection.
Cursors in PyMySQL are purely an abstraction implemented in Python; there is no equivalent concept in MySQL itself.1
Cursors have an __enter__ method that doesn't do anything and an __exit__ method which "closes" the cursor (which just means nulling the cursor's reference to its parent connection and throwing away any data stored on the cursor).
Cursors hold a reference to the connection that spawned them, but connections don't hold a reference to the cursors that they've created.
Connections have a __del__ method which closes them.
Per https://docs.python.org/3/reference/datamodel.html, CPython (the default Python implementation) uses reference counting and automatically deletes an object once the number of references to it hits zero.
Putting these things together, we see that naive code like this is in theory problematic:
# Problematic code, at least in theory!
import pymysql

with pymysql.connect() as cursor:
    cursor.execute('SELECT 1')

# ... happily carry on and do something unrelated
The problem is that nothing has closed the connection. Indeed, if you paste the code above into a Python shell and then run SHOW FULL PROCESSLIST at a MySQL shell, you'll be able to see the idle connection that you created. Since MySQL's default maximum number of connections (max_connections) is 151, which isn't huge, you could theoretically start running into problems if you had many processes keeping these connections open.
However, in CPython, there is a saving grace that ensures that code like my example above probably won't cause you to leave around loads of open connections. That saving grace is that as soon as cursor goes out of scope (e.g. the function in which it was created finishes, or cursor gets another value assigned to it), its reference count hits zero, which causes it to be deleted, dropping the connection's reference count to zero, causing the connection's __del__ method to be called which force-closes the connection. If you already pasted the code above into your Python shell, then you can now simulate this by running cursor = 'arbitrary value'; as soon as you do this, the connection you opened will vanish from the SHOW PROCESSLIST output.
However, relying upon this is inelegant, and theoretically might fail in Python implementations other than CPython. Cleaner, in theory, would be to explicitly .close() the connection (to free up a connection on the database without waiting for Python to destroy the object). This more robust code looks like this:
import contextlib
import pymysql

with contextlib.closing(pymysql.connect()) as conn:
    with conn as cursor:
        cursor.execute('SELECT 1')
This is ugly, but doesn't rely upon Python destructing your objects to free up your (finite available number of) database connections.
Note that closing the cursor, if you're already closing the connection explicitly like this, is entirely pointless.
Finally, to answer the secondary questions here:
Is there a lot of overhead for getting new cursors, or is it just not a big deal?
Nope, instantiating a cursor doesn't hit MySQL at all and basically does nothing.
Is there any significant advantage to finding sets of transactions that don't require intermediate commits so that you don't have to get new cursors for each transaction?
This is situational and difficult to give a general answer to. As https://dev.mysql.com/doc/refman/en/optimizing-innodb-transaction-management.html puts it, "an application might encounter performance issues if it commits thousands of times per second, and different performance issues if it commits only every 2-3 hours". You pay a performance overhead for every commit, but by leaving transactions open for longer, you increase the chance of other connections having to spend time waiting for locks, increase your risk of deadlocks, and potentially increase the cost of some lookups performed by other connections.
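As a purely illustrative sketch of the batching side of that trade-off (the connection, table, and data are placeholders):

cursor = conn.cursor()
for value in values:
    cursor.execute('INSERT INTO t (x) VALUES (%s)', (value,))
# One commit amortized over the whole batch, instead of len(values)
# commits; the cost is holding the row locks on t for the loop's duration.
conn.commit()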
1 MySQL does have a construct it calls a cursor, but cursors there only exist inside stored procedures; they're completely different from PyMySQL cursors and are not relevant here.
I think you'll be better off trying to use one cursor for all of your executions, and close it at the end of your code. It's easier to work with, and it might have efficiency benefits as well (don't quote me on that one).
conn = MySQLdb.connect("host", "user", "pass", "database")
cursor = conn.cursor()

cursor.execute("somestuff")
results = cursor.fetchall()
# do stuff with results

cursor.execute("someotherstuff")
results2 = cursor.fetchall()
# do stuff with results2

cursor.close()
The point is that you can store the results of a cursor's execution in another variable, thereby freeing your cursor to make a second execution. You run into problems this way only if you're using fetchone(), and need to make a second cursor execution before you've iterated through all results from the first query.
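A hypothetical illustration of that fetchone() pitfall (the table name is a placeholder):

cursor.execute("SELECT id FROM big_table")
first = cursor.fetchone()   # only the first row has been consumed
cursor.execute("SELECT 1")  # replaces the stored result set; the rest of
                            # big_table's rows are no longer reachable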
Otherwise, I'd say just close your cursors as soon as you're done getting all of the data out of them. That way you don't have to worry about tying up loose ends later in your code.
I suggest doing it the way PHP and MySQL do: open the connection at the beginning of your code, before outputting the first data. That way, if you get a connection error, you can display a 50x error message (I don't remember the exact internal error code). Keep the connection open for the whole session, and close it when you know you won't need it anymore.
I use MySQL with MySQLdb module in Python, in Django.
I'm running in autocommit mode in this case (and Django's transaction.is_managed() actually returns False).
I have several processes interacting with the database.
One process fetches all Task models with Task.objects.all()
Then another process adds a Task model (I can see it in a database management application).
If I call Task.objects.all() on the first process, I don't see anything. But if I call connection._commit() and then Task.objects.all(), I see the new Task.
My question is: is there any caching involved at the connection level? And is this normal behaviour? (It does not seem so to me.)
This certainly seems autocommit/table-locking related.
If MySQLdb implements the DBAPI 2 spec, it will probably have a connection running as one single continuous transaction. When you say "running in autocommit mode", do you mean MySQL itself, the MySQLdb module, or Django?
Not committing intermittently perfectly explains the behaviour you are getting:
i) a connection implemented as one single transaction in MySQLdb (by default, probably);
ii) not opening/closing connections only when needed, but (re)using one or more persistent database connections (my guess: this could be inherited from Django's architecture);
iii) your selects ('reads') cause a simple read lock on a table, which means other connections can still read the table, but connections wanting to write data can't do so immediately, because this lock prevents them from getting the exclusive lock needed for writing. The write is thus postponed indefinitely, until the writer can get a short exclusive lock on the table, i.e. when you close the connection or commit manually.
I'd do the following in your case:
find out which table locks are on your database during the scenario above
read about Django and transactions here. A quick skim suggests that using standard Django functionality implicitly causes commits, which means that handcrafted SQL (INSERT, UPDATE, ...) maybe won't.
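If the stale reads are indeed the open transaction, one hedged fix for the old (pre-1.6) Django transaction API implied by transaction.is_managed() in the question would be to commit manually before re-querying; a sketch:

from django.db import transaction

# End the connection's current transaction so the next query starts a
# fresh one and sees rows committed by other processes in the meantime.
transaction.commit_unless_managed()
tasks = Task.objects.all()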