Dealing with SQL Server Deadlock with Python

What is the best way to deal with the 1205 "deadlock victim" error when calling SQL Server from Python?
The issue arises when I have multiple Python scripts running, and all are attempting to update a table with a MERGE statement which adds a row if it doesn't yet exist (this query will be called millions of times in each script).
MERGE table_name AS table -- including UPDLOCK or ROWLOCK eventually
                          -- results in deadlock
USING ( VALUES ( ... ) )
    AS row( ... )
ON table.feature = row.feature
WHEN NOT MATCHED THEN
    INSERT (...)
    VALUES (...);
The scripts need immediate access to the table in order to read the unique id assigned to the new row.
Eventually, one of the scripts raises an OperationalError:
Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
1) I have tried using a try-except block around the call in Python:
while True:
    try:
        cur.execute(stmt)
        break
    except OperationalError:
        continue
This approach slows the process down considerably. Also, I think I might be doing this incorrectly (I suspect I need to reset the connection first; see the retry sketch at the end of this question).
2) Use a try-catch in SQL Server (something like below...):
WHILE 1 = 1
BEGIN
    BEGIN TRY
        MERGE statement -- see above
        BREAK
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER() AS ErrorNumber;
        ROLLBACK;
        CONTINUE
    END CATCH;
END
3) Something else?
Thanks for your help. And let me know if you need additional details, etc.
I am using Python 2.7, SQL Server 2008, and pymssql to make the connection.
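For what it's worth, here is roughly what I imagine a more careful version of approach (1) would look like: roll back the doomed transaction and back off before retrying. This is only a sketch; in particular, the assumption that the error number shows up in e.args[0] should be checked against your pymssql version.

import time
import pymssql

DEADLOCK = 1205  # SQL Server "deadlock victim" error number

def execute_with_retry(conn, cur, stmt, max_retries=10):
    delay = 0.01
    for attempt in range(max_retries):
        try:
            cur.execute(stmt)
            conn.commit()
            return
        except pymssql.OperationalError as e:
            # Assumes the error number is in e.args[0]; verify for your driver.
            if e.args and e.args[0] == DEADLOCK:
                conn.rollback()           # clear the doomed transaction
                time.sleep(delay)         # back off so the other writer can finish
                delay = min(delay * 2, 1.0)
            else:
                raise
    raise RuntimeError('statement kept deadlocking after %d attempts' % max_retries)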

Related

Execute a mysql statement in the background

I have a 'throwaway' sql statement that I would like to run. I don't care about the error status, and I don't need to know if it completed successfully. It is to create an index on a table that is very infrequently used. I currently have the connection and cursor object, and here is how I would normally do it:
self.cursor.execute('ALTER TABLE mytable ADD INDEX (_id)')
Easy enough. However, this statement takes about five minutes, and like I mentioned, it's not important enough to block other items that are unrelated to it. Is it possible to execute a cursor statement in the background? Again, I don't need any status or anything from it, and I don't care about 'closing the cursor/connection' or anything -- it really is a throw-away statement on a table that is probably accessed one to five times in its lifetime before being dropped.
threading.Thread(
    target=lambda tn, cursor: cursor.execute('ALTER TABLE %s ADD INDEX (_id)' % tn),
    args=('mytable', self.cursor)
).start()
What would be the best approach to execute a statement in the background so it doesn't block future SQL statements?
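One idea (just a sketch, not tested): give the background statement its own connection, since MySQLdb connections shouldn't be shared across threads. The connection parameters below are placeholders.

import threading
import MySQLdb

def execute_in_background(statement):
    # Fire-and-forget: run the statement on a dedicated connection so it
    # can't interfere with work happening on the main connection.
    def worker():
        conn = MySQLdb.connect(host='localhost', user='me',
                               passwd='secret', db='mydb')  # placeholder credentials
        try:
            conn.cursor().execute(statement)
        except Exception:
            pass  # deliberately ignore errors: this is a throwaway statement
        finally:
            conn.close()
    threading.Thread(target=worker).start()

execute_in_background('ALTER TABLE mytable ADD INDEX (_id)')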

Global query timeout in MySQL 5.6

I need to apply a query timeout at a global level in my application. The query: SET SESSION max_execution_time=1 does this with MySQL 5.7. I am using MySQL 5.6 and cannot upgrade at the moment. Any solution with SQL Alchemy would also help.
It seems there is no equivalent to max_execution_time in MySQL prior to 5.7.4 (where it was introduced as max_statement_time and renamed to max_execution_time in 5.7.8). What you can do is create your own periodic job that checks whether queries have exceeded the timeout and kills them manually. Unfortunately that is not quite the same as what the newer MySQL versions do: without inspecting the command info you'll end up killing all long-running queries, not just read-only SELECTs, and it is nigh impossible to control at session level.
One way to do that would be to create a stored procedure that queries the process list and kills as required. Such stored procedure could look like:
DELIMITER //
CREATE PROCEDURE stmt_timeout_killer (timeout INT)
BEGIN
    DECLARE query_id INT;
    DECLARE done INT DEFAULT FALSE;
    DECLARE curs CURSOR FOR
        SELECT id
        FROM information_schema.processlist
        WHERE command = 'Query' AND time >= timeout;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    -- Ignore ER_NO_SUCH_THREAD, in case the query finished between
    -- checking the process list and actually killing threads
    DECLARE CONTINUE HANDLER FOR 1094 BEGIN END;

    OPEN curs;

    read_loop: LOOP
        FETCH curs INTO query_id;

        IF done THEN
            LEAVE read_loop;
        END IF;

        -- Prevent suicide
        IF query_id != CONNECTION_ID() THEN
            KILL QUERY query_id;
        END IF;
    END LOOP;

    CLOSE curs;
END//
DELIMITER ;
Alternatively you could implement all that in your application logic, but it would require separate round trips to the database for each query to be killed. What's left then is to call this periodically:
# Somewhere suitable
engine.execute(text("CALL stmt_timeout_killer(:timeout)"), timeout=30)
How and where exactly depends heavily on your actual application.
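For example, a long-running application could reschedule the call on a timer (a sketch assuming SQLAlchemy 1.x-style execute(); the connection URL and interval are placeholders):

import threading
from sqlalchemy import create_engine, text

engine = create_engine('mysql://user:pass@localhost/mydb')  # placeholder URL

def kill_overdue_queries(interval=10, timeout=30):
    # Invoke the killer procedure, then reschedule ourselves.
    with engine.connect() as conn:
        conn.execute(text('CALL stmt_timeout_killer(:timeout)'), timeout=timeout)
    timer = threading.Timer(interval, kill_overdue_queries, args=(interval, timeout))
    timer.daemon = True  # don't keep the process alive just for this loop
    timer.start()

kill_overdue_queries()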

Is this mysql transaction code redundant?

I have the following code that is using MySQLdb for db inserts
self.cursor.execute('START TRANSACTION;')
for item in data:
    self.cursor.execute('INSERT INTO...')
self.cursor.execute('COMMIT;')
self.conn.commit()
Is the self.conn.commit() at the end redundant, or does that need to be there?
If you start a transaction, you're responsible for calling COMMIT, or it will be rolled back when you close the connection.
As a note, it's bad form to include ; in your queries unless you're using an interactive shell. They're not necessary, and they immediately raise questions about how they came to be included.
The ; delimiter is used by the shell to determine where one statement stops and the next starts, something that's not needed in code where each statement is supplied as a separate string.
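For comparison, a minimal sketch of the usual MySQLdb pattern, which drops the hand-written START TRANSACTION/COMMIT strings entirely (connection details, table, and column are made up; data is the iterable from the question):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')
cursor = conn.cursor()
try:
    for item in data:
        # MySQLdb opens a transaction implicitly on the first statement
        cursor.execute('INSERT INTO mytable (col) VALUES (%s)', (item,))
    conn.commit()    # a single commit for the whole batch
except Exception:
    conn.rollback()  # undo the partial batch on any failure
    raise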

Python and MySQL: If catch deadlock error, is everything rolled back?

I'd like to catch a MySQL deadlock error and then retry the failed query. But, do I have to redo every query since the transaction started, or just the one in the try/catch? I'm not sure whether the deadlock error causes everything to be rolled back.
This is performed in Python using raw mysql queries.
insert into table_1 values ...
insert into table_2 values ...
try:
    delete from table_1 where ...
except:  # set to catch deadlock error
    # Can I just retry the delete statement, or do I also have to do the inserts again?
# commits at end
The whole transaction is rolled back.
Here's the relevant and helpful MySQL documentation:
https://dev.mysql.com/doc/refman/5.5/en/innodb-deadlock-detection.html
https://dev.mysql.com/doc/refman/5.5/en/innodb-deadlocks.html
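Because the whole transaction is rolled back, a retry loop has to replay every statement from the beginning, not just the one that raised. A sketch with MySQLdb (error code 1213 is ER_LOCK_DEADLOCK; the statements list is a stand-in for the inserts and delete above):

import MySQLdb

DEADLOCK_ERR = 1213  # MySQL error code ER_LOCK_DEADLOCK

def run_transaction(conn, statements, max_retries=5):
    # Replay the entire statement list on deadlock, since InnoDB rolls
    # back the whole victim transaction, not just the failing statement.
    for attempt in range(max_retries):
        try:
            cur = conn.cursor()
            for stmt, params in statements:
                cur.execute(stmt, params)
            conn.commit()
            return
        except MySQLdb.OperationalError as e:
            conn.rollback()
            if e.args[0] != DEADLOCK_ERR:
                raise  # not a deadlock: propagate
    raise RuntimeError('still deadlocking after %d retries' % max_retries)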

PostgreSQL Exception: DB_Cursor: exception in execute: tuple concurrently updated

As part of the upgrade process, our product scripts update a stored procedure for a trigger. There are two daemons running, either of which can update the stored procedure. It seems that PostgreSQL is not serializing the DDL that upgrades the procedure. The exact error is "DB_Cursor: exception in execute: tuple concurrently updated". A Google search yields no exact matches for this error. It appears we have a race condition. What is the best approach for avoiding or preventing such an exception? It prevents the upgrade process from succeeding, and one or both daemons must be restarted to retry the upgrade and recover. Is this a known issue with PostgreSQL? We are running PostgreSQL 9.2.5.
It seems that PostgreSQL is not serializing the DDL to upgrade the procedure
Yes. This is mentioned from time to time on pgsql mailing lists, for example recently here:
'tuple concurrently updated' error when granting permissions
Excerpt:
We do have such locking for DDL on tables/indexes, but the theory in the past has been that it's not worth the trouble for objects represented by single catalog rows, such as functions or roles. You can't corrupt the database with concurrent updates on such a row, you'll just get a "tuple concurrently updated" error from all but the first-to-arrive update.
If you're concurrently replacing function bodies, this is clearly your problem.
And the proposed solution is:
In the meantime, you could consider using an application-managed advisory lock if you really need such grants to work transparently.
If by design multiple concurrent clients can decide to perform DDL, then you really should make sure only one of them is doing it. You can do it using advisory locks.
Example in pseudocode:
function try_upgrade(db) {
    if ( ! is_upgrade_needed(db) ) {
        // we check it before acquiring a lock to speed up a common case of
        // no upgrade available
        return UPGRADE_NOT_NEEDED;
    }
    query_result = db->begin_transaction();
    if ( query_result < 0 ) throw Error("begin failed");
    query_result = db->query(
        "select pg_advisory_xact_lock(?)", MAGIC_NUMBER_UPGRADE_LOCK
    );
    if ( query_result < 0 ) throw Error("pg_advisory_xact_lock failed");
    // another client might have performed upgrade between the previous check
    // and acquiring advisory lock
    if ( ! is_upgrade_needed(db) ) {
        query_result = db->rollback_transaction();
        return UPGRADE_NOT_NEEDED;
    }
    perform_upgrade();
    query_result = db->commit_transaction();
    if ( query_result < 0 ) throw Error("commit failed");
    return UPGRADE_PERFORMED;
}
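Translated to Python, the same pattern could look roughly like this (a sketch assuming conn is an open psycopg2 connection; UPGRADE_LOCK_KEY is an arbitrary number of your choosing, and is_upgrade_needed/perform_upgrade are hypothetical stand-ins for your own version check and DDL):

UPGRADE_LOCK_KEY = 42  # arbitrary application-wide advisory lock key

def try_upgrade(conn):
    # Cheap pre-check before taking the lock, for the common case of
    # no upgrade being available.
    if not is_upgrade_needed(conn):
        return 'UPGRADE_NOT_NEEDED'
    cur = conn.cursor()
    try:
        # Blocks until the lock is held; pg_advisory_xact_lock is released
        # automatically when the transaction commits or rolls back.
        cur.execute('select pg_advisory_xact_lock(%s)', (UPGRADE_LOCK_KEY,))
        # Another client might have performed the upgrade while we waited.
        if not is_upgrade_needed(conn):
            conn.rollback()
            return 'UPGRADE_NOT_NEEDED'
        perform_upgrade(conn)  # your CREATE OR REPLACE FUNCTION ... here
        conn.commit()
        return 'UPGRADE_PERFORMED'
    except Exception:
        conn.rollback()
        raise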
