I'd like to catch a MySQL deadlock error and then retry the failed query. But, do I have to redo every query since the transaction started, or just the one in the try/catch? I'm not sure whether the deadlock error causes everything to be rolled back.
This is done in Python using raw MySQL queries.
insert into table_1 values ...
insert into table_2 values ...
try:
    delete from table_1 where ...
except:  # set to catch the deadlock error
    # Can I just retry the delete statement, or do I also have to do the inserts again?
# commit at the end
The whole transaction is rolled back.
Here's the relevant and helpful MySQL documentation:
https://dev.mysql.com/doc/refman/5.5/en/innodb-deadlock-detection.html
https://dev.mysql.com/doc/refman/5.5/en/innodb-deadlocks.html
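Since the deadlock rolls back the whole transaction, the retry loop has to wrap every statement in it, not just the DELETE inside the try/except. A minimal sketch with MySQLdb (the table layouts, values, and the conn object are placeholders standing in for the question's statements; 1213 is MySQL's ER_LOCK_DEADLOCK code):
import MySQLdb

def run_transaction(conn, retries=3):
    # Retry the *whole* transaction: after error 1213 the server has already
    # rolled back everything done since the transaction started.
    for attempt in range(retries):
        cur = conn.cursor()
        try:
            cur.execute("INSERT INTO table_1 VALUES (%s, %s)", (1, 'a'))
            cur.execute("INSERT INTO table_2 VALUES (%s, %s)", (2, 'b'))
            cur.execute("DELETE FROM table_1 WHERE id = %s", (1,))
            conn.commit()                # commit at the end, as in the question
            return
        except MySQLdb.OperationalError as e:
            conn.rollback()              # keep the client-side state in sync
            if e.args[0] != 1213:        # not a deadlock: re-raise it
                raise
        finally:
            cur.close()
    raise RuntimeError("still deadlocking after %d attempts" % retries)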
I have the following code that uses MySQLdb for DB inserts:
self.cursor.execute('START TRANSACTION;')
for item in data:
    self.cursor.execute('INSERT INTO...')
self.cursor.execute('COMMIT;')
self.conn.commit()
Is the self.conn.commit() at the end redundant, or does that need to be there?
If you start a transaction you're responsible for calling COMMIT, or it'll get rolled back when you close the connection.
As a note, it's bad form to include ; in your queries unless you're using an interactive shell. They're not necessary and immediately raise questions about how they came to be included.
The ; delimiter is what the shell uses to determine where one statement ends and the next begins, which isn't needed in code where each statement is supplied as a separate string.
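For comparison, the usual MySQLdb pattern skips the hand-written statements entirely and lets the driver manage the transaction: MySQLdb runs with autocommit off by default, so conn.commit() is what actually ends the transaction. A sketch reusing the names from the question (the table and column in the INSERT are made up, since the original elides them):
# Same objects as in the question; no START TRANSACTION / COMMIT strings needed,
# because MySQLdb opens a transaction implicitly (autocommit is off by default).
for item in data:
    self.cursor.execute("INSERT INTO some_table (some_col) VALUES (%s)", (item,))
self.conn.commit()   # this single call commits the whole batch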
I'm using Python 2 with psycopg2 v2.6.2. I'm running a series of psycopg2 commands, and catching any errors:
for r in records:
    county = r[0]
    q = 'INSERT INTO allparcels(county, geom) '
    q += "SELECT %s, ST_Union(ST_Buffer(wkb_geometry, 0)) FROM parcel "
    q += "WHERE county=%s"
    print q % (county, county)
    try:
        cursor.execute(q, (county, county))
        conn.commit()
    except Exception, e:
        print e
        print e.pgerror
cursor.close()
conn.close()
This runs for the first couple of records, then I get ERROR: current transaction is aborted, commands ignored until end of transaction block in rapid succession for all the rest of the rows.
Oddly, if I take one of the later commands and run it directly in my database, it works fine. So I think the later errors are something to do with psycopg2 and my error handling, not the SQL command.
I think I must not be handling the error correctly. I'd like my script to print the error, and then continue smoothly to the next command.
How should I do this instead?
The issue here is the following:
try:
    # it is this specific line that causes an error
    cursor.execute(q, (county, county))
    # this never happens, so the transaction is still open
    conn.commit()
except Exception, e:
    ...
    # you never issue a rollback on the transaction ... it's still open
As you can see, if cursor.execute fails you neither attempt to commit the transaction nor roll it back. Every subsequent iteration of the loop is then trying to execute SQL on a transaction that is already aborted but never rolled back.
Instead, you need to follow this kind of pattern:
try:
    cursor.execute(...)
except Exception, e:
    conn.rollback()
else:
    conn.commit()
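Applied to the loop from the question, that pattern rolls back only the failed row and keeps going, which is exactly the "print the error and continue" behavior being asked for (a sketch in the same Python 2 style as the original):
for r in records:
    county = r[0]
    q = ('INSERT INTO allparcels(county, geom) '
         'SELECT %s, ST_Union(ST_Buffer(wkb_geometry, 0)) FROM parcel '
         'WHERE county=%s')
    try:
        cursor.execute(q, (county, county))
    except Exception, e:
        print e               # psycopg2 errors also carry e.pgerror, as in the question
        conn.rollback()       # clears the aborted transaction so the next county can run
    else:
        conn.commit()         # commit this county's INSERT only if it succeeded
cursor.close()
conn.close()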
donkopotamus' answer is pretty good, but it has one problem: it rolls back the entire transaction, which could be undesirable behavior if this query is executed as part of a larger transaction block. In that case, consider using savepoints:
try:
    cursor.execute("savepoint my_save_point")
    cursor.execute(query)
except:
    cursor.execute("rollback to savepoint my_save_point")
finally:
    cursor.execute("release savepoint my_save_point")
psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block
I got this error when I was inserting some data into the DB. In my case it was resolved by increasing the size of the column attributes in the table; the INSERT that originally failed was what left the transaction in the aborted state.
Here is the code I currently have:
with transaction.commit_manually():
    try:
        m.update_accepted_url(episode_id)
        m.create_hit()
        m.do_insert()
        transaction.commit()
    except:
        transaction.rollback()
Now, what happens if the database operations fail and are rolled back, but the create_hit call has already gone through successfully? Is there a way to wrap the create_hit operation in something like a transaction, so that if the db operations fail, it fails (or is undone) too?
You can add a unique token for your request, to avoid duplicates:
http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_CreateHITOperation.html
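The idea is to derive the token from something stable (the episode id, say) and pass it as the CreateHIT UniqueRequestToken. Then, if the db operations fail, the transaction rolls back, and the whole block is retried, the second CreateHIT call is rejected as a duplicate instead of creating a second HIT. A rough sketch; the unique_request_token keyword is hypothetical and stands for however your MTurk client exposes that API field:
# Token tied to the episode, so a retry after rollback reuses the same value.
token = "hit-for-episode-%s" % episode_id

with transaction.commit_manually():
    try:
        m.update_accepted_url(episode_id)
        m.create_hit(unique_request_token=token)   # hypothetical keyword mapping to
                                                   # CreateHIT's UniqueRequestToken
        m.do_insert()
        transaction.commit()
    except:
        transaction.rollback()
This doesn't make create_hit truly transactional, but it does make a retry of the whole block idempotent on the MTurk side.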
What is the best way to deal with the 1205 "deadlock victim" error when calling SQL Server from Python?
The issue arises when I have multiple Python scripts running, and all are attempting to update a table with a MERGE statement which adds a row if it doesn't yet exist (this query will be called millions of times in each script).
MERGE table_name AS table  -- including UPDLOCK or ROWLOCK eventually results in deadlock
USING ( VALUES ( ... ) )
    AS row( ... )
ON table.feature = row.feature
WHEN NOT MATCHED THEN
    INSERT ( ... )
    VALUES ( ... )
The scripts require immediate access to the table so they can read the unique id assigned to the row.
Eventually, one of the scripts raises an OperationalError:
Transaction (Process ID 52) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
1) I have tried using a try-except block around the call in Python:
while True:
    try:
        cur.execute(stmt)
        break
    except OperationalError:
        continue
This approach slows the process down considerably. Also, I think I might be doing this incorrectly (I think I might need to reset the connection...).
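One way to tighten that loop is to roll the transaction back, back off briefly, and give up after a few attempts instead of spinning. A sketch with pymssql; the check for "1205" in the error text is an assumption about how pymssql formats the message, and committing per statement assumes each MERGE is its own transaction:
import time
import pymssql

def execute_with_retry(conn, cur, stmt, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        try:
            cur.execute(stmt)
            conn.commit()                      # assumes each MERGE is its own transaction
            return
        except pymssql.OperationalError as e:
            conn.rollback()                    # clear the doomed transaction before retrying
            if "1205" not in str(e) or attempt == max_attempts:
                raise                          # not a deadlock, or out of retries
            time.sleep(0.05 * attempt)         # small backoff so the victims stop colliding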
2) Use a try-catch in SQL Server (something like below...):
WHILE 1 = 1
BEGIN
    BEGIN TRY
        MERGE statement  -- see above
        BREAK
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER() AS ErrorNumber
        ROLLBACK
        CONTINUE
    END CATCH;
END
3) Something else?
Thanks for your help. And let me know if you need additional details, etc.
I am using Python 2.7, SQL Server 2008, and pymssql to make the connection.
I'm using Psycopg2 in Python to access a PostgreSQL database. I'm curious if it's safe to use the with closing() pattern to create and use a cursor, or if I should use an explicit try/except wrapped around the query. My question is concerning inserting or updating, and transactions.
As I understand it, all Psycopg2 queries occur within a transaction, and it's up to calling code to commit or rollback the transaction. If within a with closing(... block an error occurs, is a rollback issued? In older versions of Psycopg2, a rollback was explicitly issued on close() but this is not the case anymore (see http://initd.org/psycopg/docs/connection.html#connection.close).
My question might make more sense with an example. Here's an example using with closing(...
with closing(db.cursor()) as cursor:
    cursor.execute("""UPDATE users
                      SET password = %s, salt = %s
                      WHERE user_id = %s""",
                   (pw_tuple[0], pw_tuple[1], user_id))
    module.raise_unexpected_error()
    cursor.commit()
What happens when module.raise_unexpected_error() raises its error? Is the transaction rolled back? As I understand transactions, I either need to commit them or roll them back. So in this case, what happens?
Alternatively, I could write my query like this:
cursor = None
try:
    cursor = db.cursor()
    cursor.execute("""UPDATE users
                      SET password = %s, salt = %s
                      WHERE user_id = %s""",
                   (pw_tuple[0], pw_tuple[1], user_id))
    module.raise_unexpected_error()
    cursor.commit()
except BaseException:
    if cursor is not None:
        cursor.rollback()
finally:
    if cursor is not None:
        cursor.close()
Also I should mention that I have no idea if Psycopg2's connection class cursor() method could raise an error or not (the documentation doesn't say) so better safe than sorry, no?
Which method of issuing a query and managing a transaction should I use?
Your link to the Psycopg2 docs kind of explains it itself, no?
... Note that closing a connection without committing the changes first will
cause any pending change to be discarded as if a ROLLBACK was
performed (unless a different isolation level has been selected: see
set_isolation_level()).
Changed in version 2.2: previously an explicit ROLLBACK was issued by
Psycopg on close(). The command could have been sent to the backend at
an inappropriate time, so Psycopg currently relies on the backend to
implicitly discard uncommitted changes. Some middleware are known to
behave incorrectly though when the connection is closed during a
transaction (when status is STATUS_IN_TRANSACTION), e.g. PgBouncer
reports an unclean server and discards the connection. To avoid this
problem you can ensure to terminate the transaction with a
commit()/rollback() before closing.
So, unless you're using a different isolation level, or using PgBouncer, your first example should work fine. However, if you desire some finer-grained control over exactly what happens during a transaction, then the try/except method might be best, since it parallels the database transaction state itself.
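For reference, the explicit variant would look roughly like this. Note that in psycopg2, commit() and rollback() are methods on the connection, not the cursor (so the cursor.commit()/cursor.rollback() calls in the question would fail); db here is assumed to be the psycopg2 connection from the question:
cursor = db.cursor()
try:
    cursor.execute("""UPDATE users
                      SET password = %s, salt = %s
                      WHERE user_id = %s""",
                   (pw_tuple[0], pw_tuple[1], user_id))
    module.raise_unexpected_error()
except BaseException:
    db.rollback()    # explicitly end the failed transaction
    raise
else:
    db.commit()      # commit only if everything above succeeded
finally:
    cursor.close()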