Update fails after repeating deadlocked query in pymssql - python

I'm using SQL Server with pymssql, and found that a particularly complicated SELECT query would occasionally be selected as a deadlock victim. So I wrapped it in a while loop to retry the transaction if that happens, roughly as follows:
while True:
    try:
        cursor.execute('SELECT .......')
        count_row = cursor.fetchone()
        break
    except Exception, tec:
        print "Got error: %s" % (tec)
        time.sleep(1)

cursor.execute('UPDATE .........')
self.conn.commit()
It seems to work: if the SELECT hits a deadlock, it pauses for a second, retries, and gets the right answer. However, every time that occurs, the following UPDATE statement fails with:
pymssql.OperationalError: Cannot commit transaction: (3902, 'The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.DB-Lib error message 3902, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')
The UPDATE statement isn't in the while loop, so I have no idea why it's failing. It works fine when the SELECT doesn't hit the deadlock condition, so I think it's something to do with recovering from that error.
Any ideas?
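For what it's worth, when SQL Server picks a session as the deadlock victim (error 1205), it rolls back that session's entire transaction, so by the time the retry succeeds there is no open transaction left for the later COMMIT to match. A minimal sketch of the retry loop with an explicit rollback, keeping the placeholder queries from the question:

import time

while True:
    try:
        cursor.execute('SELECT .......')   # placeholder query, as in the question
        count_row = cursor.fetchone()
        break
    except Exception as tec:               # ideally catch pymssql.OperationalError only
        print "Got error: %s" % (tec)
        self.conn.rollback()               # acknowledge the server-side rollback before retrying
        time.sleep(1)

cursor.execute('UPDATE .........')         # now runs in the fresh transaction
self.conn.commit()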

Related

Is Postgres caching our queries and how do we get around it?

I'm trying to run the following piece of Python 3 code:
import os
import psycopg2
import logging

# Set max attempts before giving up
MAX_ATTEMPTS = 5

# Set basic logging config to debug (i.e. log everything).
# By default, this will log to stdout (i.e. it will behave the same as print)
logging.basicConfig(level=logging.DEBUG)

# Grab DB url from env variable
database_url = os.environ.get('DATABASE_URL')
assert database_url is not None, 'DATABASE_URL env variable must be set to a postgres connection string.'

# Initiate psycopg2 and instantiate a cursor object
conn = psycopg2.connect(database_url)
cursor = conn.cursor()

# Define function to delete old records
def delete_old_records(cur):
    # execute a query to delete old records. We're going to refer to this as the "delete" command
    query = 'DELETE FROM my_table WHERE id NOT IN ( SELECT id FROM ( SELECT id FROM my_table ORDER BY id DESC LIMIT 1850 ) foo);'
    cur.execute(query)

# Set variables to keep track of loop
successful = False
attempts = 0

# While not successful and max attempts not reached
while not successful and attempts < MAX_ATTEMPTS:
    try:
        # Attempt to delete old records
        delete_old_records(cursor)
        # Set successful to True if no errors were encountered in the previous line
        successful = True
        # Log a message
        logging.info('Successfully truncated old records!')
    # If some psycopg2 error happens
    except psycopg2.Error as e:
        # Log the error
        logging.exception('Got exception when executing query')
        # Rollback the cursor and get ready to try again
        conn.rollback()
        # Increment attempts by 1
        attempts += 1

# If not able to perform operation after max attempts, log message to indicate failure
if not successful:
    logging.warning(f'Was not successfully able to truncate logs after {MAX_ATTEMPTS} retries. '
                    f'Check logs for traceback (console output by default).')
Here's the problem:
The code executes successfully and without error. However, when we run the following command (hereafter referred to as the "count" command) in Postico (a Postgres GUI for Mac):
SELECT count(*) from my_table;
We get 1860 instead of 1850 (i.e. the rows were not deleted).
When running the delete command manually in psql or in Postico, the count command then returns the correct result in psql or Postico respectively. However, we get a different result when running the command from IPython.
When I have an open connection to the db in IPython on computer A and run the delete command, then open another connection to the db in IPython on computer B and run the count command, I see that the row count has not changed, i.e. it is still 1860, not cut to 1850.
I suspect caching/memoization, but I'm not really sure my command actually worked. Is there something in psycopg2, Postico, or Postgres itself that might be causing this, and how do we get around it? We don't see any obvious cache in Postico or in psycopg2/Postgres.
There is no caching involved. PostgreSQL does not cache query results.
You simply forgot to COMMIT the deleting transaction, so its effects are not visible in any concurrent transaction.
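A minimal sketch of the fix, keeping the structure of the original loop (only the conn.commit() line is new):

while not successful and attempts < MAX_ATTEMPTS:
    try:
        delete_old_records(cursor)
        conn.commit()  # publish the deletion so other connections can see it
        successful = True
        logging.info('Successfully truncated old records!')
    except psycopg2.Error:
        logging.exception('Got exception when executing query')
        conn.rollback()
        attempts += 1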

python.exe crashes while executing query from server

I'm trying to execute a tiny MDX query against an Analysis Services server at work.
The server provides data via MS OLE DB rather than ODBC, which is why I use the adodbapi library.
Here's the function I use to obtain the result of a query:
def mdx_query(query):
    conn = adodbapi.connect("PROVIDER=MSOLAP; \
        persist security info=true; \
        Data Source=***; \
        initial catalog=analyse;")
    cursor = conn.cursor()
    result = None  # avoid a NameError in the return if execute() fails
    try:
        cursor.execute(query)
        result = cursor.fetchone()
    except (adodbapi.Error, adodbapi.Warning) as e:
        print(e)
    cursor.close()
    del cursor
    conn.close()
    del conn
    return result
Primitive single-value queries work perfectly well:
select
[Physical Stock PCS] on 0,
[Goods].[Categories].[ALL] on 1
from [analyse]
If I get a syntax error, it also just gives me an adodbapi.Error message, and that's fine.
But if I try to execute more complex queries like:
select
[Physical Stock PCS] on 0,
[Goods].[Categories].[Level 01] on 1
from [analyse]
[Goods].[Categories].[Level 01] has more than one dimension, and I always get a python.exe APPCRASH message no matter what.
I tried both Python 2 and 3, running in Jupyter and console mode, and the pandas.read_sql_query method. The result is always the same: I get an APPCRASH window.
How can I cure the crashes and finally execute complicated queries?
Any help is appreciated!
UPD: here's the error window (APPCRASH screenshot; I can't change its language to EN).

Flask, SQLAlchemy error (invalid transaction)

I have a Flask website. Sometimes, on some requests, it returns this error:
Exception message: Can't reconnect until invalid transaction is rolled
back (original cause: InvalidRequestError: Can't reconnect until
invalid transaction is rolled back) u'SELECT a_auth2_user.id AS
a_auth2_user_id, a_auth2_user.username AS a_auth2_user_username,
a_auth2_user.fullname AS a_auth2_user_fullname, a_auth2_user.email AS
a_auth2_user_email, a_auth2_user.password AS a_auth2_user_password,
a_auth2_user.plain_password AS a_auth2_user_plain_password,
a_auth2_user.legacy_password AS a_auth2_user_legacy_password,
a_auth2_user.active AS a_auth2_user_active, a_auth2_user.is_admin AS
a_auth2_user_is_admin, a_auth2_user.phone AS a_auth2_user_phone,
a_auth2_user.last_activity AS a_auth2_user_last_activity \nFROM
a_auth2_user \nWHERE a_auth2_user.id = %s \n LIMIT %s'
[immutabledict({})]
The weird thing is that it returns this error only "sometimes", and other times it works fine.
Is it something like a memory issue? How can I fix it?
Because your previous commit may have raised an exception, you should roll back the session if there is an invalid transaction:
try:
    transaction.commit()
except Exception as e:
    session.rollback()
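A slightly more defensive variant, as a sketch: hook the rollback into Flask's request teardown so a failed request can never leave the session invalid for the next one (assumes a Flask-SQLAlchemy db object; the handler name is illustrative):

@app.teardown_request
def rollback_on_error(exc):
    # exc is the unhandled exception for this request, or None on success
    if exc is not None:
        db.session.rollback()  # assumed Flask-SQLAlchemy session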

Reading SSMS output message using pyodbc

I have a list of MS SQL CREATE scripts and I am trying to automate the process of executing each of them. As these CREATE scripts do not return any records, I want my automation script to capture the SSMS output message that looks like:
'Command executed successfully'
Can I read this output message using pyodbc?
Here is the sample code that I use to execute the script:
conn = pyodbc.connect(r'DRIVER={SQL Server};SERVER=%s;Trusted_Connection=True;'% (db_conn_string))
cursor = conn.cursor()
cursor.execute(query)
It is not really necessary to capture the "Command executed successfully" message because an exception will occur if the command is not executed successfully.
So your Python code can just .execute the statement, catch any exception that occurs, and proceed accordingly, e.g.,
try:
    crsr.execute("DROP TABLE dbo.nonexistent")
    print("INFO: DROP TABLE succeeded.")
except pyodbc.ProgrammingError as err:
    error_code = err.args[0]
    if error_code == "42S02":  # [table] does not exist or you do not have permission
        print("INFO: DROP TABLE did not succeed.")
    else:
        raise  # re-raise unexpected exception
cursor.rowcount
Adding the above line to your code will return the row count for the last executed query.
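As a small illustration (hypothetical table and column names), cursor.rowcount mirrors the "(N rows affected)" count that SSMS prints for DML statements:

cursor.execute("UPDATE dbo.my_table SET processed = 1 WHERE processed = 0")
conn.commit()
print("INFO: %d row(s) affected" % cursor.rowcount)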

DatabaseError: current transaction is aborted, commands ignored until end of transaction block?

I got a lot of errors with the message:
"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"
after changing from python-psycopg to python-psycopg2 as the Django project's database engine.
The code remains the same; I just don't know where those errors come from.
This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)
To fix this, you'll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your PostgreSQL server.
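For example, a sketch of the relevant postgresql.conf settings (values are illustrative):

log_statement = 'all'               # log every statement
log_min_error_statement = error     # also log the statement that caused each error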
To get rid of the error, roll back the last (erroneous) transaction after you've fixed your code:
from django.db import transaction
transaction.rollback()
You can use try-except to prevent the error from occurring:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    transaction.rollback()
Refer to the Django documentation.
In Flask you just need to write:
curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()
P.S. The documentation is here: https://www.postgresql.org/docs/9.4/static/sql-rollback.html
So, I ran into this same issue. The problem I was having here was that my database wasn't properly synced. Simple problems always seem to cause the most angst...
To sync your Django db, from within your app directory, type this in a terminal:
$ python manage.py syncdb
Edit: Note that if you are using django-south, running the '$ python manage.py migrate' command may also resolve this issue.
Happy coding!
In my experience, these errors happen this way:
try:
    code_that_executes_bad_query()
    # transaction on DB is now bad
except:
    pass

# transaction on db is still bad
code_that_executes_working_query()  # raises transaction error
There's nothing wrong with the second query, but since the real error was caught, the second query is the one that raises the (much less informative) error.
edit: this only happens if the except clause catches IntegrityError (or any other low-level database exception). If you catch something like DoesNotExist, this error will not come up, because DoesNotExist does not corrupt the transaction.
The lesson here is don't do try/except/pass.
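If you do need to catch and swallow a query error, a sketch of the pattern that keeps the outer transaction usable (Django 1.6+): wrap the risky query in transaction.atomic(), which runs it inside a savepoint and rolls back only that savepoint on failure:

from django.db import transaction, IntegrityError

try:
    with transaction.atomic():
        code_that_executes_bad_query()
except IntegrityError:
    pass  # only the inner savepoint was rolled back

code_that_executes_working_query()  # the outer transaction is still valid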
I think the pattern priestc mentions is more likely to be the usual cause of this issue when using PostgreSQL.
However I feel there are valid uses for the pattern and I don't think this issue should be a reason to always avoid it. For example:
try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)
If you do feel OK with this pattern, but want to avoid explicit transaction handling code all over the place then you might want to look into turning on autocommit mode (PostgreSQL 8.2+): https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode
DATABASES['default'] = {
    # .. your usual options ...
    'OPTIONS': {
        'autocommit': True,
    }
}
I am unsure if there are important performance considerations (or of any other type).
Just use rollback. Example code:
try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
You only need to run
rollback;
in PostgreSQL and that's it!
If you get this while in interactive shell and need a quick fix, do this:
from django.db import connection
connection._rollback()
originally seen in this answer
I encountered similar behavior while running a malfunctioning transaction in the postgres terminal. Nothing went through after this, as the database was in an error state. However, as a quick fix, if you can afford it, instead of a ROLLBACK the following did the trick for me:
COMMIT;
I've just got a similar error here. I found the answer at this link: https://www.postgresqltutorial.com/postgresql-python/transaction/
client = PsqlConnection(config)
connection = client.connection
cursor = client.cursor
try:
    for query in list_of_querys:
        # query format => "INSERT INTO <database.table> VALUES (<values>)"
        cursor.execute(query)
    connection.commit()
except BaseException as e:
    connection.rollback()
After doing this, the following queries you send to PostgreSQL will not return an error.
I've got a similar problem. The solution was to migrate the db (manage.py syncdb, or manage.py schemamigration --auto <table name> if you use south).
In the Flask shell, all I needed to do was a session.rollback() to get past this.
I have met this issue too. The error comes up because an erroneous transaction hasn't been ended properly. I found the PostgreSQL transaction control commands here:
Transaction Control
The following commands are used to control transactions:
BEGIN TRANSACTION − To start a transaction.
COMMIT − To save the changes; alternatively you can use the END TRANSACTION command.
ROLLBACK − To roll back the changes.
So I use END TRANSACTION to end the erroneous transaction; the code looks like this:
for key_of_attribute, command in sql_command.items():
    cursor = connection.cursor()
    g_logger.info("execute command :%s" % (command))
    try:
        cursor.execute(command)
        rows = cursor.fetchall()
        g_logger.info("the command:%s result is :%s" % (command, rows))
        result_list[key_of_attribute] = rows
        g_logger.info("result_list is :%s" % (result_list))
    except Exception as e:
        cursor.execute('END TRANSACTION;')
        g_logger.info("error command :%s and error is :%s" % (command, e))
return result_list
I just had this error too, but it was masking another, more relevant error message where the code was trying to store a 125-character string in a 100-character column:
DatabaseError: value too long for type character varying(100)
I had to debug through the code for the above message to show up; otherwise it displays
DatabaseError: current transaction is aborted
In response to @priestc and @Sebastian, what if you do something like this?
try:
    conn.commit()
except:
    pass

cursor.execute(sql)

try:
    return cursor.fetchall()
except:
    conn.commit()
    return None
I just tried this code and it seems to work, failing silently without having to care about any possible errors, and working when the query is good.
I believe @AnujGupta's answer is correct. However, the rollback can itself raise an exception, which you should catch and handle:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        pass  # Log or handle otherwise
If you find you're rewriting this code in various save() locations, you can extract a method:
import traceback

def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling
Finally, you can prettify it using a decorator that protects methods which use save():
from functools import wraps

def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped

@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...
Even if you implement the decorator above, it's still convenient to keep try_rolling_back() as an extracted method in case you need to use it manually for cases where specific handling is required, and the generic decorator handling isn't enough.
This is very strange behavior to me. I'm surprised that no one thought of savepoints. In my code the failing query was expected behavior:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped
I have changed the code this way to use savepoints:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        sid = transaction.savepoint()  # take a fresh savepoint for each row
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped
I am using the Python package psycopg2 and I got this error while querying.
I kept re-running just the query and the execute function, but when I re-ran the connection setup (shown below), it resolved the issue. So re-run what is above your script, i.e. the connection, because as someone said above, I think it lost the connection or was out of sync or something.
connection = psycopg2.connect(user="##",
                              password="##",
                              host="##",
                              port="##",
                              database="##")
cursor = connection.cursor()
It is an issue with bad SQL execution which does not allow other queries to execute until the previous one is rolled back.
In pgAdmin 4 (4.24) there is a rollback option; one can try this.
You could also disable transactions by putting the psycopg2 connection in autocommit mode via set_isolation_level(0).
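A minimal sketch of that approach (the connection string is hypothetical; on psycopg2 >= 2.4.2 the autocommit attribute is the more explicit spelling):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
conn.set_isolation_level(0)             # ISOLATION_LEVEL_AUTOCOMMIT
# equivalently, on modern psycopg2: conn.autocommit = True

# Each statement now runs in its own transaction, so one failed statement
# cannot leave the connection stuck in "current transaction is aborted".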
