Reading SSMS output message using pyodbc - python

I have a list of MS SQL CREATE scripts and I am trying to automate the process of executing each of these scripts. As these CREATE scripts do not return any records, I would like my automation script to capture the SSMS output message that looks like:
'Command executed successfully'
Can I read this output message using pyodbc?
Here is the sample code that I use to execute the script:
conn = pyodbc.connect(r'DRIVER={SQL Server};SERVER=%s;Trusted_Connection=True;'% (db_conn_string))
cursor = conn.cursor()
cursor.execute(query)

It is not really necessary to capture the "Command executed successfully" message because an exception will occur if the command is not executed successfully.
So your Python code can just .execute the statement, catch any exception that occurs, and proceed accordingly, e.g.,
try:
    crsr.execute("DROP TABLE dbo.nonexistent")
    print("INFO: DROP TABLE succeeded.")
except pyodbc.ProgrammingError as err:
    error_code = err.args[0]
    if error_code == "42S02":  # [table] does not exist or you do not have permission
        print("INFO: DROP TABLE did not succeed.")
    else:
        raise  # re-raise unexpected exception

cursor.rowcount
Checking this attribute after executing a statement gives you the row count for the last executed query.
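For illustration, here is a minimal sketch of the whole automation loop; the script list is a placeholder, and the connection string is the one from the question:

import pyodbc

create_scripts = ["CREATE TABLE dbo.example1 (id INT)",   # placeholder scripts
                  "CREATE TABLE dbo.example2 (id INT)"]

conn = pyodbc.connect(r'DRIVER={SQL Server};SERVER=%s;Trusted_Connection=True;' % db_conn_string)
cursor = conn.cursor()

for script in create_scripts:
    try:
        cursor.execute(script)
        conn.commit()
        print("INFO: script executed successfully.")   # stands in for the SSMS message
    except pyodbc.Error as err:
        print("ERROR: script failed: %s" % err)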

Related

Getting cx_Oracle.InterfaceError: not a query exception while trying to run alter system command

I am getting cx_Oracle.InterfaceError: not a query error while trying to end all user sessions for a specified user.
import cx_Oracle

try:
    con = cx_Oracle.connect('username/password#someip/ora12c')
    cursor = con.cursor()
    result = cursor.execute("select USERNAME,SID,SERIAL#,COMMAND,STATUS from v$session where USERNAME='uname'")
    for session in result:
        query_string = "ALTER SYSTEM KILL SESSION '#1,#2' IMMEDIATE".replace("#1", str(session[1])).replace("#2", str(session[2]))
        print(query_string)
        cursor.execute(query_string)
        con.commit()
except cx_Oracle.DatabaseError as e:
    print('Unable to kill user sessions, Subsequent steps may FAIL!!')
    print(e)
finally:
    if cursor: cursor.close()
    if con: con.close()
Running the above code I am getting:
ALTER SYSTEM KILL SESSION '1526,30533' IMMEDIATE
Traceback (most recent call last):
File "oracleKillSession.py", line 10, in <module>
for session in result:
cx_Oracle.InterfaceError: not a query
I tried the solutions at PYSPARK: CX_ORACLE.InterfaceError: not a query but that didn't help resolve the issue. Please help.
You cannot iterate over a cursor (for session in result) and then use the same cursor to execute a statement. You'll need a separate cursor for that, or you'll need to fetch all of the rows from the cursor first. So one of these two approaches will work for you:
Option 1:
alter_cursor = con.cursor()
for session in result:
    alter_cursor.execute(query_string)
Option 2 (note the loop must iterate over the fetched results, not the cursor):
results = cursor.fetchall()
for session in results:
    cursor.execute(query_string)
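For illustration, here is a rough sketch of Option 1 applied to the original loop; the connection string is copied from the question (which uses # as a placeholder separator):

import cx_Oracle

con = cx_Oracle.connect('username/password#someip/ora12c')
query_cursor = con.cursor()    # cursor used only for the SELECT
alter_cursor = con.cursor()    # separate cursor for the ALTER SYSTEM statements

query_cursor.execute(
    "select USERNAME, SID, SERIAL#, COMMAND, STATUS from v$session where USERNAME = 'uname'")
for session in query_cursor:
    kill_sql = "ALTER SYSTEM KILL SESSION '%s,%s' IMMEDIATE" % (session[1], session[2])
    print(kill_sql)
    alter_cursor.execute(kill_sql)

alter_cursor.close()
query_cursor.close()
con.close()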

python.exe crashes while executing query from server

I'm trying to execute a tiny MDX query against an Analysis Services server at work.
The server provides data via MS OLE DB, not the ODBC specification, which is why I use the adodbapi library.
Here's the function I use to obtain the result of query execution:
def mdx_query(query):
    conn = adodbapi.connect("PROVIDER=MSOLAP; \
                             persist security info=true; \
                             Data Source=***; \
                             initial catalog=analyse;")
    cursor = conn.cursor()
    try:
        cursor.execute(query)
        result = cursor.fetchone()
    except (adodbapi.Error, adodbapi.Warning) as e:
        print(e)
    cursor.close()
    del cursor
    conn.close()
    del conn
    return result
Primitive single-value queries work perfectly well:
select
[Physical Stock PCS] on 0,
[Goods].[Categories].[ALL] on 1
from [analyse]
If I make a syntax error, it also just gives me an adodbapi.Error message, which is fine.
But if I try to execute more complex queries like:
select
[Physical Stock PCS] on 0,
[Goods].[Categories].[Level 01] on 1
from [analyse]
[Goods].[Categories].[Level 01] has more than one dimension, and I always get a python.exe APPCRASH message no matter what.
I tried both Python 2 and 3, running in Jupyter and console mode, and the pandas.read_sql_query method. The result is always the same: I get the APPCRASH window.
How can I cure the crashes and finally execute complicated queries?
Any help is appreciated!
UPD: here's the error window; I can't change it to EN. [Screenshot: Appcrash error]

Update fails after repeating deadlocked query in pymssql

I'm using SQL Server with pymssql, and found that a particularly complicated SELECT query would occasionally be selected as a deadlock victim. So I wrapped it in a while loop to retry the transaction if that happens, roughly as follows:
while True:
    try:
        cursor.execute('SELECT .......')
        count_row = cursor.fetchone()
        break
    except Exception, tec:
        print "Got error: %s" % (tec)
        time.sleep(1)

cursor.execute('UPDATE .........')
self.conn.commit()
It seems to work: if the SELECT hits a deadlock then it pauses for a second, retries, and gets the right answer. However, every time that occurs, the following UPDATE statement fails with:
pymssql.OperationalError: Cannot commit transaction: (3902, 'The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.DB-Lib error message 3902, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')
The UPDATE statement isn't in the while loop, so I have no idea why it's failing. It works fine when the SELECT doesn't hit the deadlock condition, so I think it's something to do with recovering from that error.
Any ideas?

InterfaceError (0, '')

I have built a site using Django and I am receiving this annoying error when I am trying to execute a query.
If I restart the Apache server, the error will go away for a short time.
Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
100. response = callback(request, *callback_args, **callback_kwargs)
File "/home/fran/cron/views/set_caches.py" in set_caches
24. cursor.execute(query, [category['id']])
File "/usr/local/lib/python2.7/site-packages/django/db/backends/util.py" in execute
15. return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py" in execute
86. return self.cursor.execute(query, args)
File "build/bdist.linux-i686/egg/MySQLdb/cursors.py" in execute
155. charset = db.character_set_name()
Exception Type: InterfaceError at /blablabla/
Exception Value: (0, '')
This is caused by a global cursor. Try creating and closing the cursor within each method where a raw query is needed.
cursor = connection.cursor()
cursor.execute(query)
cursor.close()
You get this error when you have a db.close() call and later try to access the database without creating a new connection. Try to find if you close the connection to the database when you don't mean to.
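A minimal sketch of that failure mode (the driver and connection details here are placeholders, not from the original post):

import pymysql

conn = pymysql.connect(host="localhost", user="root", password="root", db="mydb")
cur = conn.cursor()
cur.execute("SELECT 1")
conn.close()              # the connection is closed here...

cur.execute("SELECT 1")   # ...so this later call raises InterfaceError (0, '')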
I agree with Moberg. This error is caused when we try to access the database after we have closed the connection. That can be caused by wrong indentation in the code. Below is my code.
conn = connect()
cur = conn.cursor()
tk = get_tickers(cur)
for t in tk:
    prices = read_price(t, cur)
    if prices != None:
        update_price(t, cur)
        print 'Price after update of ticker ', t, ':'
        p_open, p_high, p_low, p_close = read_price(t, cur)
        print p_open, p_high, p_low, p_close
    else:
        print 'Price for ', t, ' is not available'
conn.close()
I got the same error as reported by Marian. After dedenting conn.close(), everything worked well. Confirmed that global conn is not an issue.
I had the same problem as of April 2019, using Python 3.7 and Mysql 2.7.
At intermittent intervals, the string (0, '') would be added at random to my SQL statements, causing errors. I solved the issue by commenting out the closing of the database connection and just leaving the closing of the cursors across my code.
def set_db():
    db = pymysql.connect(host='localhost',
                         user="root",
                         passwd="root",
                         db="DATABASE")
    return db

def execute_sql(cnx, sql_clause, fetch_all):
    if sql_clause and sql_clause is not None:
        try:
            cnx.execute(sql_clause)
        except Exception as e:
            print("Error in sql: " + sql_clause + str(e))
            return 0
        if fetch_all:
            result = cnx.fetchall()
        else:
            result = cnx.fetchone()
        return result
    else:
        print("Empty sql.")
        return 0

db = set_db()
cnx = db.cursor()
sql = "SELECT * FROM TABLE"
result = execute_sql(cnx, sql, 1)
cnx.close()  # close the cursor
# db.close()  # do not close the db connection
...
I had the same issue using threading with Python3 and Pymysql. I was getting deadlocks and then I would get hit with InterfaceError (0, '').
My issue was that I was trying to do a rollback on exception of the query. I believe this rollback was trying to use a connection that no longer existed, and it was giving me the interface error. I took the rollback out (because I am OK with not doing a rollback for this query) and just let things go. This fixed my issue.
def delete_q_msg(self, assetid, queuemsgtypeid, msgid):
    """
    Given the parameters below, remove items from the msg queue equal to or older than this.
    If appropriate, send them into a history table to be processed later.
    :param assetid:
    :param queuemsgtypeid:
    :param msgid:
    :return:
    """
    params = (assetid, queuemsgtypeid, msgid,)
    db_connection = self._connect_to_db()
    sp_sql = "{db}.ps_delete_q_msg".format(db=self._db_settings["database"])

    return_value = []
    try:
        with db_connection.cursor() as cursor:
            cursor.callproc(sp_sql, params)
            return_value = cursor.fetchall()
            db_connection.commit()
    except Exception as ex:
        # i think we dont want rollback here
        # db_connection.rollback()
        raise Exception(ex)
    finally:
        db_connection.close()

    return return_value
I can confirm this is caused by a global cursor which is then later used in some functions. My symptoms were the exact same: intermittent interface errors that would temporarily be cleared up by an apache restart.
from django.db import connection

cursor = connection.cursor()  # BAD

def foo():
    cursor.execute('select * from bar')
But, I am using Django on top of Oracle 11.2 so I do not believe this is a bug in the MySQL/python driver. This is probably due to the caching done by apache/mod_wsgi.
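A minimal sketch of the fix both answers point at, creating (and closing) the cursor inside the function instead of at module level:

from django.db import connection

def foo():
    # a fresh cursor per call; closed automatically when the block exits
    with connection.cursor() as cursor:
        cursor.execute('select * from bar')
        return cursor.fetchall()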
I had the same issue with Flask+pymysql, I was getting an empty tuple as a result in the except: block, something like this "(\"(0, '')\",)" to be specific.
It turned out that the connection was getting closed and later the code tried accessing it which resulted into this error.
So I solved it by referring to above solutions and used a function for connection which assured me a conn every time I had to access the db.
You can recreate this issue by inserting conn.close() just before accessing the cursor.
For reference I used this site which helped me solve this issue.
https://hackersandslackers.com/python-mysql-pymysql/
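For illustration, a minimal sketch of such a connection helper (all connection details here are placeholders):

import pymysql

def get_connection():
    # hand back a fresh connection every time the db is needed
    return pymysql.connect(host="localhost",
                           user="root",
                           password="root",
                           db="mydb")

conn = get_connection()
try:
    with conn.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchone())
finally:
    conn.close()   # safe to close; the next unit of work calls get_connection() again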
For me, removing the conn.close() from my function worked. I was trying to access the database again after closing.
I am using flask with AWS.
Also, you can try restarting your Flask application if it has been running for a long time. If you are also using AWS RDS with MySQL Workbench, as in my case, check whether your session has expired and update the access key and ID.
Hope this helps.
I had this same problem and what worked for me in Django was what is described in this answer, which consists of:
Replacing
'ENGINE': 'django.db.backends.mysql'
with
'ENGINE': 'mysql_server_has_gone_away'
on
settings.DATABASES['ENGINE']
and installing with pip the package below:
mysql_server_has_gone_away==1.0.0
with connection.cursor() as cursor:
    res = cursor.execute(sql)

DatabaseError: current transaction is aborted, commands ignored until end of transaction block?

I got a lot of errors with the message:
"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"
after changing from python-psycopg to python-psycopg2 as the Django project's database engine.
The code remains the same; I just don't know where those errors come from.
This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)
To fix this, you'll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your postgresql server.
To get rid of the error, roll back the last (erroneous) transaction after you've fixed your code:
from django.db import transaction
transaction.rollback()
You can use try-except to prevent the error from occurring:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    transaction.rollback()
Refer to the Django documentation.
In Flask you just need to write:
curs = conn.cursor()
curs.execute("ROLLBACK")
conn.commit()
P.S. The documentation is here: https://www.postgresql.org/docs/9.4/static/sql-rollback.html
So, I ran into this same issue. The problem I was having here was that my database wasn't properly synced. Simple problems always seem to cause the most angst...
To sync your django db, from within your app directory, within terminal, type:
$ python manage.py syncdb
Edit: Note that if you are using django-south, running the '$ python manage.py migrate' command may also resolve this issue.
Happy coding!
In my experience, these errors happen this way:
try:
    code_that_executes_bad_query()
    # transaction on DB is now bad
except:
    pass

# transaction on db is still bad

code_that_executes_working_query()  # raises transaction error
There's nothing wrong with the second query, but since the real error was caught, the second query is the one that raises the (much less informative) error.
Edit: this only happens if the except clause catches IntegrityError (or any other low-level database exception). If you catch something like DoesNotExist, this error will not come up, because DoesNotExist does not corrupt the transaction.
The lesson here is don't do try/except/pass.
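As a sketch of the safer pattern (using the same placeholder function names as above and the older transaction API used elsewhere in this thread):

from django.db import transaction, IntegrityError

try:
    code_that_executes_bad_query()
except IntegrityError:
    # roll back so the connection is usable again, instead of swallowing the error
    transaction.rollback()

code_that_executes_working_query()  # now runs against a clean transaction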
I think the pattern priestc mentions is more likely to be the usual cause of this issue when using PostgreSQL.
However I feel there are valid uses for the pattern and I don't think this issue should be a reason to always avoid it. For example:
try:
    profile = user.get_profile()
except ObjectDoesNotExist:
    profile = make_default_profile_for_user(user)

do_something_with_profile(profile)
If you do feel OK with this pattern, but want to avoid explicit transaction handling code all over the place then you might want to look into turning on autocommit mode (PostgreSQL 8.2+): https://docs.djangoproject.com/en/dev/ref/databases/#autocommit-mode
DATABASES['default'] = {
    # .. your usual options ...
    'OPTIONS': {
        'autocommit': True,
    }
}
I am unsure if there are important performance considerations (or of any other type).
just use rollback
Example code
try:
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
except:
    cur.execute("rollback")
    cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
You only need to run
rollback;
in PostgreSQL and that's it!
If you get this while in interactive shell and need a quick fix, do this:
from django.db import connection
connection._rollback()
originally seen in this answer
I encountered similar behavior while running a malfunctioning transaction on the postgres terminal. Nothing went through after this, as the database was in a state of error. However, as a quick fix, if you can afford not to roll back the transaction, the following did the trick for me:
COMMIT;
I've just got a similar error here. I've found the answer in this link https://www.postgresqltutorial.com/postgresql-python/transaction/
client = PsqlConnection(config)
connection = client.connection
cursor = client.cursor

try:
    for query in list_of_querys:
        # query format => "INSERT INTO <database.table> VALUES (<values>)"
        cursor.execute(query)
    connection.commit()
except BaseException as e:
    connection.rollback()
Doing this, the following queries you send to PostgreSQL will not return an error.
I had a similar problem. The solution was to migrate the db (manage.py syncdb, or manage.py schemamigration --auto <table name> if you use south).
In Flask shell, all I needed to do was a session.rollback() to get past this.
I have met this issue; the error comes up because the erroneous transaction hasn't been ended properly. I found the PostgreSQL transaction control commands here:
Transaction Control
The following commands are used to control transactions
BEGIN TRANSACTION − To start a transaction.
COMMIT − To save the changes; alternatively, you can use the END TRANSACTION command.
ROLLBACK − To roll back the changes.
So I use END TRANSACTION to end the erroneous transaction, with code like this:
for key_of_attribute, command in sql_command.items():
    cursor = connection.cursor()
    g_logger.info("execute command :%s" % (command))
    try:
        cursor.execute(command)
        rows = cursor.fetchall()
        g_logger.info("the command:%s result is :%s" % (command, rows))
        result_list[key_of_attribute] = rows
        g_logger.info("result_list is :%s" % (result_list))
    except Exception as e:
        cursor.execute('END TRANSACTION;')
        g_logger.info("error command :%s and error is :%s" % (command, e))
return result_list
I just had this error too, but it was masking another, more relevant error message where the code was trying to store a 125-character string in a 100-character column:
DatabaseError: value too long for type character varying(100)
I had to debug through the code for the above message to show up; otherwise it only displays
DatabaseError: current transaction is aborted
In response to @priestc and @Sebastian, what if you do something like this?
try:
    conn.commit()
except:
    pass

cursor.execute(sql)

try:
    return cursor.fetchall()
except:
    conn.commit()
    return None
I just tried this code and it seems to work, failing silently without having to care about any possible errors, and working when the query is good.
I believe @AnujGupta's answer is correct. However, the rollback can itself raise an exception, which you should catch and handle:
from django.db import transaction, DatabaseError
try:
    a.save()
except DatabaseError:
    try:
        transaction.rollback()
    except transaction.TransactionManagementError:
        # Log or handle otherwise
        pass
If you find you're rewriting this code in various save() locations, you can extract-method:
import traceback

def try_rolling_back():
    try:
        transaction.rollback()
        log.warning('rolled back')  # example handling
    except transaction.TransactionManagementError:
        log.exception(traceback.format_exc())  # example handling
Finally, you can prettify it using a decorator that protects methods which use save():
from functools import wraps

def try_rolling_back_on_exception(fn):
    @wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except:
            traceback.print_exc()
            try_rolling_back()
    return wrapped

@try_rolling_back_on_exception
def some_saving_method():
    # ...
    model.save()
    # ...
Even if you implement the decorator above, it's still convenient to keep try_rolling_back() as an extracted method in case you need to use it manually for cases where specific handling is required, and the generic decorator handling isn't enough.
This is very strange behavior to me. I'm surprised that no one thought of savepoints. In my code the failing query was expected behavior:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
    return skipped
I have changed the code this way to use savepoints:
from django.db import transaction

@transaction.commit_on_success
def update():
    skipped = 0
    sid = transaction.savepoint()
    for old_model in OldModel.objects.all():
        try:
            Model.objects.create(
                group_id=old_model.group_uuid,
                file_id=old_model.file_uuid,
            )
        except IntegrityError:
            skipped += 1
            transaction.savepoint_rollback(sid)
        else:
            transaction.savepoint_commit(sid)
    return skipped
I am using the Python package psycopg2 and I got this error while querying.
I kept re-running just the query and then the execute function, but when I re-ran the connection (shown below), it resolved the issue. So re-run what is above that in your script, i.e. the connection, because, as someone said above, I think it lost the connection or was out of sync or something.
connection = psycopg2.connect(user="##",
                              password="##",
                              host="##",
                              port="##",
                              database="##")
cursor = connection.cursor()
It is an issue with a bad SQL execution which does not allow other queries to execute until the previous one gets rolled back.
In pgAdmin 4 (4.24) there is a rollback option; one can try this.
You could disable transactions via "set_isolation_level(0)".
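For illustration, a minimal psycopg2 sketch of that approach (connection details are placeholders); set_isolation_level(0) puts the connection in autocommit mode, so a failed statement no longer leaves it stuck in an aborted transaction:

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(dbname="mydb", user="me", password="secret", host="localhost")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)  # equivalent to set_isolation_level(0)

cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS test2 (id serial, qa text);")
# each statement now commits on its own; there is no open transaction left to abort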
