pyodbc & MS SQL Server - "No results. Previous SQL was not a query." - python

I am using pyodbc to retrieve data from a Microsoft SQL Server. The query is of the following form:
SET NOCOUNT ON --Ignore count statements
CREATE TABLE mytable ( ... )
INSERT mytable
EXEC some_stored_procedure
--Perform some processing...
SELECT *
FROM mytable
The stored procedure performs some aggregation over values that contain NULLs, so warnings of the form "Warning: Null value is eliminated by an aggregate or other SET operation." are issued. This results in pyodbc failing to retrieve the data, with the error message "No results. Previous SQL was not a query."
I have tried to disable the warnings with SET ANSI_WARNINGS OFF. However, the query then fails with the error message "Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query."
Is it possible to either disable the warnings, or have pyodbc ignore them?
Note that I do not have permissions to change the stored procedure.

Store the results of the query in a temporary table and execute the statement as two queries:
with pyodbc.connect(connection_string) as connection:
    connection.execute(query1)           # do the work
    result = connection.execute(query2)  # select the data
    data = result.fetchall()             # retrieve the data
The first query does the heavy lifting and is of the form
--Do some work and execute complicated queries that issue warning messages
--Store the results in a temporary table
SELECT some, column, names
INTO #datastore
FROM some_table
The second query retrieves the data and is of the form
SELECT * FROM #datastore
Thus, all warning messages are issued upon execution of the first query. They do not interfere with data retrieval during the execution of the second query.
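Putting the two pieces together, a minimal end-to-end sketch might look like this (the connection string and object names are placeholders; the local temp table survives between the two batches because both run on the same connection):
import pyodbc

connection_string = "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"

query1 = """
SET NOCOUNT ON;  -- suppress count messages
SELECT some, column, names
INTO #datastore
FROM some_table;
"""
query2 = "SELECT * FROM #datastore;"

with pyodbc.connect(connection_string) as connection:
    connection.execute(query1)           # heavy lifting; warnings surface here
    result = connection.execute(query2)  # clean batch with a single result set
    data = result.fetchall()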

I have had some luck mitigating this error by flipping ansi_warnings on and off just around the offending view or stored proc.
/* vw_someView aggregates away some nulls and presents warnings that blow up pyodbc */
set ANSI_WARNINGS off
select *
into #my_temp
from vw_someView
set ANSI_WARNINGS on
/* rest of query follows */
This assumes that the entity that produces the aggregate warning doesn't also require warnings to be turned on. If it complains, it probably means that the entity itself has a portion of code like this that requires a toggle of the ANSI_WARNINGS setting (or a rewrite to eliminate the aggregation).
One caveat is that I've found that this toggle still returns the "heterogeneous" error if I try to run it as a cross-server query. Also, while debugging, it's pretty easy to get into a state where ANSI_WARNINGS has been flipped off without you realizing it, and you start getting heterogeneous errors for seemingly no reason. Just run the "set ANSI_WARNINGS on" line by itself to get back into a good state.
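For what it's worth, the same toggle can be driven from pyodbc; a sketch assuming the view and temp table names above:
cursor = connection.cursor()
# run the toggle and the SELECT ... INTO as one batch; warnings stay here
cursor.execute("""
SET ANSI_WARNINGS OFF;
SELECT * INTO #my_temp FROM vw_someView;
SET ANSI_WARNINGS ON;
""")
rows = cursor.execute("SELECT * FROM #my_temp").fetchall()

# while debugging, recover a session left in the wrong state:
cursor.execute("SET ANSI_WARNINGS ON")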

The best thing is to add a try/except block:
sql="sp_help stored_procedure;"
print(">>>>>executing {}".format(sql))
next_cursor=cursor.execute(sql)
while next_cursor:
try:
row = cursor.fetchone()
while row:
print(row)
row = cursor.fetchone()
except Exception as my_ex:
print("stored procedure returning non-row {}".format(my_ex))
next_cursor=cursor.nextset()

Related

Pyodbc doesn't run the procedure correctly without even throwing any error [duplicate]

I can't figure out what's wrong with the following code.
The syntax is OK (checked with SQL Server Management Studio), and I have the access I should, so that works too... but for some reason, as soon as I try to create a table via pyodbc, it stops working.
import pyodbc

def SQL(QUERY, target='...', DB='...'):
    cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + target + DB + ';UID=user;PWD=pass')
    cursor = cnxn.cursor()
    cursor.execute(QUERY)
    cpn = []
    for row in cursor:
        cpn.append(row)
    return cpn

print SQL("CREATE TABLE dbo.Approvals (ID SMALLINT NOT NULL IDENTITY PRIMARY KEY, HostName char(120));")
It fails with:
Traceback (most recent call last):
  File "test_sql.py", line 25, in <module>
    print SQL("CREATE TABLE dbo.Approvals (ID SMALLINT NOT NULL IDENTITY PRIMARY KEY, HostName char(120));")
  File "test_sql.py", line 20, in SQL
    for row in cursor:
pyodbc.ProgrammingError: No results. Previous SQL was not a query.
Does anyone have any idea why this is?
I have the "SQL Server" driver installed (it's the default), running Windows 7 against a Windows 2008 SQL Server environment (not an Express edition).
Just in case some lonely net nomad comes across this issue: the solution by Torxed didn't work for me, but the following did. I was calling an SP which inserts some values into a table and then returns some data. Just add the following to the SP:
SET NOCOUNT ON
It'll work just fine :)
The Python code:
query = "exec dbo.get_process_id " + str(provider_id) + ", 0"
cursor.execute(query)
row = cursor.fetchone()
process_id = row[0]
The SP:
USE [DBNAME]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GET_PROCESS_ID](
    @PROVIDER_ID INT,
    @PROCESS_ID INT OUTPUT
)
AS
BEGIN
    SET NOCOUNT ON
    INSERT INTO processes(provider_id) VALUES(@PROVIDER_ID)
    SET @PROCESS_ID = SCOPE_IDENTITY()
    SELECT @PROCESS_ID AS PROCESS_ID
END
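As an aside, building the EXEC string by concatenating str(provider_id) invites SQL injection; pyodbc's ? parameter markers also work for procedure arguments. A minimal sketch of the same call:
query = "EXEC dbo.get_process_id ?, 0"
cursor.execute(query, provider_id)  # provider_id bound as a parameter, not concatenated
row = cursor.fetchone()
process_id = row[0]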
Using "SET NOCOUNT ON" at the top of the script will not always be sufficient to solve the problem.
In my case, it was also necessary to remove this line:
Use DatabaseName;
The database was SQL Server 2012, with Python 3.7 and SQLAlchemy 1.3.8.
Hope this helps somebody.
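If the batch needed the USE line only to pick a database, that choice can be moved onto the connection itself; sketches with placeholder server names and credentials:
# pyodbc: select the database in the connection string instead of via USE
cnxn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                      "DATABASE=DatabaseName;Trusted_Connection=yes")

# SQLAlchemy (as used here): put the database name in the engine URL
from sqlalchemy import create_engine
engine = create_engine(
    "mssql+pyodbc://user:pass@myserver/DatabaseName?driver=ODBC+Driver+17+for+SQL+Server")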
I got this because I was reusing a cursor that I was looping over:
rows = cursor.execute(...)
for row in rows:
    # run a query that returns nothing
    cursor.execute(...)
    # the next iteration of this loop will throw the 'Previous SQL' error when it
    # tries to fetch the next row, because we re-used the cursor with a query
    # that returned nothing
Use 2 different cursors instead:
rows = cursor1.execute(...)
for row in rows:
    cursor2.execute(...)
or get all results of the first cursor before using it again:
rows = cursor.execute(...)
for row in list(rows):
    cursor.execute(...)
As others covered, SET NOCOUNT ON will take care of extra result sets inside a stored procedure; however, other things can also cause extra output that NOCOUNT will not prevent (and that pyodbc will see as a result set), such as forgetting to remove a PRINT statement after debugging your stored procedure.
As Travis and others have mentioned, other things can also cause extra output that SET NOCOUNT ON will not prevent.
I had SET NOCOUNT ON at the start of my procedure but was still receiving warning messages in my result set.
I set ansi warnings off at the beginning of my script in order to remove the error messages.
SET ANSI_WARNINGS OFF
Hopefully this helps someone.
If your stored procedure calls RAISERROR, pyodbc may create a result set for that message.
CREATE PROCEDURE some_sp
AS
BEGIN
    RAISERROR ('Some error!', 1, 1) WITH NOWAIT
    RETURN 777
END
In Python, you then need to skip the leading result sets until you find one containing actual rows (see https://github.com/mkleehammer/pyodbc/issues/673#issuecomment-631206107 for details).
sql = """
SET NOCOUNT ON;
SET ANSI_WARNINGS OFF;
DECLARE #ret int;
EXEC #ret = some_sp;
SELECT #ret as ret;
"""
cursor = con.cursor()
cursor.execute(sql)
rows = None
#this section will only return the last result from the query
while cursor.nextset():
try:
rows = cursor.fetchall()
except Exception as e:
print("Skipping non rs message: {}".format(e))
continue
row = rows[0]
print(row[0]) # 777.
I think the root cause of the issue described above might be related to the fact that you receive the same error message when you execute, for example, a DELETE query, which does not return a result. So if you then run
result = cursor.fetchall()
you get this error, because a DELETE operation by definition returns nothing. Try to catch the exception as recommended here: How to check if a result set is empty?
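For statements that return no rows, cursor.rowcount is a safer probe than fetchall(); a small sketch (some_table, some_id and cnxn are placeholders):
cursor.execute("DELETE FROM some_table WHERE id = ?", some_id)
print(cursor.rowcount)  # number of rows affected; there is no result set to fetch
cnxn.commit()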
In case your SQL is not a stored proc:
usage of 'xyz != NULL' in a query will give the same error, i.e. "pyodbc.ProgrammingError: No results. Previous SQL was not a query."
Use 'xyz IS NOT NULL' instead.
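For illustration, with a hypothetical table t:
-- reportedly trips the "Previous SQL was not a query" error; a comparison
-- with NULL never evaluates to TRUE anyway
SELECT * FROM t WHERE xyz != NULL
-- the standard NULL test
SELECT * FROM t WHERE xyz IS NOT NULL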
First off:
if you're running a Windows SQL Server 2008, use the "Native Client" driver that is included with the installation of the SQL software (it gets installed with the database and toolkits, so you need to install the SQL Management application from Microsoft).
Secondly:
use "Trusted_Connection=yes" in your SQL connection statement:
cnxn = pyodbc.connect('DRIVER={SQL Server Native Client 10.0};SERVER=ServerAddress;DATABASE=my_db;Trusted_Connection=yes')
This should do the trick!
I have solved this problem by splitting the USE database statement and the SQL query into two execute statements, as sketched below.
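That is, something along these lines (the database and table names are placeholders):
cursor.execute("USE OtherDatabase")
rows = cursor.execute("SELECT * FROM dbo.SomeTable").fetchall()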

Potential problems rolling back multiple-line SQL Transaction

I need to insert a CSV file into a table on SQL Server using Python (BULK INSERT is turned off). Instead of using SQLAlchemy, I'm writing my own function (may God forgive me). I'm creating lists of SQL code as strings:
sql_code_list = ["insert into table_name values (1,'aa'),(2,'ab'),(3,'ac')...(100,'az')",
"insert into table_name values (101,'ba'),(102,'bb'),(103,'bc')...(200,'bz')"]
and I plan to run them in the DB using the pyodbc package, one by one. To ensure data integrity, I want to use BEGIN TRANSACTION ... COMMIT/ROLLBACK syntax. So I want to send this command
DECLARE @TransactionName varchar(20) = 'TransInsert'
BEGIN TRAN @TransactionName
then send all my INSERT statements, and on success send
DECLARE @TransactionName varchar(20) = 'TransInsert'
COMMIT TRAN @TransactionName
or on failure
DECLARE @TransactionName varchar(20) = 'TransInsert'
ROLLBACK TRAN @TransactionName
There will be many INSERT statements, let's say 10,000 statements each inserting 100 rows, and I plan to send them from the same connection.cursor object but in multiple batches. Does this overall look like a correct procedure? What problems may I run into when I send these commands from a Python application?
There is no need for a named transaction here.
You could submit a transactional batch of multiple statements like this to conditionally rollback and throw on error:
SET XACT_ABORT, NOCOUNT ON;
BEGIN TRY
    BEGIN TRAN;
    <insert-statements-here>;
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW;
END CATCH;
The maximum SQL Server batch size is 64K * (network packet size), and the default packet size is 4K, so each batch may be up to 256MB by default. 10K inserts will likely fit within that limit, so you could try sending them all in a single batch and break them into multiple smaller batches only if needed.
An alternative method to insert multiple rows is with an INSERT...SELECT from a table-valued parameter source. See this answer for an example of passing a TVP value. I would expect much better performance with that technique because it avoids parsing a large batch, and SQL Server internally bulk-inserts TVP data into tempdb.
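On the Python side you can also let pyodbc manage the transaction instead of hand-writing BEGIN/COMMIT batches; a sketch assuming sql_code_list holds the INSERT batches from the question:
import pyodbc

# autocommit=False (pyodbc's default) opens an implicit transaction
cnxn = pyodbc.connect(connection_string, autocommit=False)
cursor = cnxn.cursor()
try:
    for statement in sql_code_list:
        cursor.execute(statement)
    cnxn.commit()    # all inserts become permanent together
except Exception:
    cnxn.rollback()  # any failure undoes every batch
    raise
finally:
    cnxn.close()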

How to commit an UPDATE in raw SQL in Python

cursor = connection.cursor()
cursor.execute("UPDATE public.rsvp SET status=TRUE WHERE rsvp_id=%s", [rsvp_id])
cursor.execute("SELECT status, rsvp_id FROM public.rsvp WHERE rsvp_id=%s", [rsvp_id])
row = cursor.fetchall()
When I execute this in my Django project, I get the row returned as I expect to see it, but later when I query for the same row, it appears as though the statement was never actually run. In my code, the column "status" defaults to NULL. After this is run, I still see NULL in my table.
You didn't specify which database you're dealing with, which may change the answer somewhat. However, with most database connections you need to finish with connection.commit() to really save changes to the database. This applies to both UPDATE and INSERT operations. Failing to commit() usually results in the actions being rolled back.
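In DB-API terms the pattern is simply this (a generic sketch; in Django, wrapping the work in transaction.atomic achieves the same effect):
cursor = connection.cursor()
cursor.execute("UPDATE public.rsvp SET status=TRUE WHERE rsvp_id=%s", [rsvp_id])
connection.commit()  # without this, the driver may roll the UPDATE back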


How to update records in SQL Alchemy in a Loop

I am trying to use SQLSoup - the SQLAlchemy extension - to update records in a SQL Server 2008 database. I am using pyodbc for the connections. There are a number of issues which make it hard to find a relevant example.
I am reprojecting a geometry field in a very large table (2 million + records), so many of the standard ways of updating fields cannot be used. I need to extract coordinates from the geometry field to text, convert them and pass them back in. All this is fine, and all the individual pieces are working.
However, I want to execute a SQL UPDATE statement on each row while looping through the records one by one. I assume this places locks on the recordset, or that the connection is in use, because if I use the code below it hangs after successfully updating the first record.
Any advice on how to create a new connection, reuse the existing one, or accomplish this another way is appreciated.
s = select([text("%s as fid" % id_field),
            text("%s.STAsText() as wkt" % geom_field)],
           from_obj=[feature_table])
rs = s.execute()

for row in rs:
    new_wkt = ReprojectFeature(row.wkt)
    update_value = "geometry::STGeomFromText('%s', %s)" % (new_wkt, "3785")
    update_sql = ("update %s set GEOM3785 = %s where %s = %i" %
                  (full_name, update_value, id_field, row.fid))
    conn = db.connection()
    conn.execute(update_sql)
conn.close()  # or not - no effect..
Updated working code now looks like this. It works fine on a few records, but hangs on the whole table, so I guess it is reading in too much data.
db = SqlSoup(conn_string)
# create outer query
Session = sessionmaker(autoflush=False, bind=db.engine)
session = Session()
rs = session.execute(s)

for row in rs:
    # create update sql...
    session.execute(update_sql)
session.commit()
I now get connection busy errors.
DBAPIError: (Error) ('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')
It looks like this could be a problem with the ODBC driver - http://sourceitsoftware.blogspot.com/2008/06/connection-is-busy-with-results-for.html
Further Update:
On the server, Profiler shows the SELECT statement and then the first UPDATE statement "starting", but neither completes.
If I set the SELECT statement to return only the top 10 rows, then it does complete and the updates run.
SQL: Batch Starting Select...
SQL: Batch Starting Update...
I believe this is an issue with pyodbc and the SQL Server drivers. If I remove SQLAlchemy and execute the same SQL with pyodbc, it also hangs, even if I create a new connection object for the updates.
I also tried the SQL Server Native Client 10.0 driver, which is meant to allow MARS - Multiple Active Result Sets - but it made no difference. In the end I have resorted to "paging the results" and updating these batches using pyodbc and SQL (see below); however, I thought SQLAlchemy would have been able to do this for me automatically.
Try using a Session.
rs = s.execute() then becomes rs = session.execute(s), and you can replace the last three lines with session.execute(update_sql). I'd also suggest configuring your Session with autocommit off and calling session.commit() at the end.
Can I suggest that when your process hangs you run sp_who2 on the SQL Server box and see what is happening? Check for blocked SPIDs and see if you can find anything in the SQL code that suggests what is happening. If you do find a SPID that is blocking others, you can run DBCC INPUTBUFFER(spid) and see if that tells you what query it executed. Otherwise you can also attach SQL Profiler and trace your calls.
In some cases it could also be parallelism on the SQL Server that causes blocking. Unless this is a data warehouse, I suggest turning your max DOP down (set it to 1). Let me know, and when I check this again in the morning, if you still need help I'll be glad to assist.
Until I find another solution, I am using a single connection and custom SQL to return sets of records, and updating these in batches. I don't think what I am doing is a particularly unique case, so I am not sure why I cannot handle multiple result sets simultaneously.
Below works, but is very, very slow:
cnxn = pyodbc.connect(conn_string, autocommit=True)
cursor = cnxn.cursor()

# get the total number of records in the table
s = "select count(fid) as count from table"
count = cursor.execute(s).fetchone().count

# choose the number of records to update in each iteration
batch_size = 100
counter = 0

for i in range(1, count, batch_size):
    # sql to bring back the relevant records in each batch
    s = """SELECT fid, wkt FROM (SELECT ROW_NUMBER() OVER (ORDER BY FID ASC) AS RowNumber,
                                        FID,
                                        GEOM29902.STAsText() AS wkt
                                 FROM %s) features
           WHERE RowNumber >= %i AND RowNumber < %i""" % (full_name, i, i + batch_size)
    rs = cursor.execute(s).fetchall()
    for row in rs:
        new_wkt = ReprojectFeature(row.wkt)
        # ...create the update sql statement for the record
        cursor.execute(update_sql)
        counter += 1

cursor.close()
cnxn.close()
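One footnote on MARS: with the Native Client driver it is not enabled just by choosing the driver; it has to be requested in the connection string. A hedged sketch (server and database are placeholders) that may avoid the "connection is busy" error when reading and writing on one connection:
cnxn = pyodbc.connect("DRIVER={SQL Server Native Client 10.0};SERVER=myserver;"
                      "DATABASE=mydb;Trusted_Connection=yes;MARS_Connection=yes")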
