PYODBC Insert Into Database - Error: Optional Feature Not Implemented (0) (SQLBindParameter) - python

I am currently trying to use pyodbc to select data from a table within Database A and insert it into a table within Database B. I was able to establish connections with both databases, so I know there is no error there. Additionally, my first cursor.execute command (line #9) works as I was able to print all the data.
The issue I am running into is when I try to insert the data from the first cursor.execute command into Database B. There are a few questions on SO regarding this same error; however, I have checked to ensure I am not committing one of those errors. All the data types are accepted within SQL Server, I have the correct number of parameters and parameter markers, and I have ensured that the columns within my Python code match both the input and output tables. I am completely stuck and would greatly appreciate any help.
The specific error I am getting is:
('HYC00', '[HYC00] [Microsoft][ODBC SQL Server Driver]Optional feature
not implemented (0) (SQLBindParameter)')
Please see my code below:
import pyodbc
import time

cnxn1 = pyodbc.connect(r"DRIVER={SQL Server Native Client 11.0};SERVER='Server';" + \
                       "DATABASE='DatabaseA';Trusted_Connection=Yes")
cursor1 = cnxn1.cursor()

cnxn2 = pyodbc.connect(r"DRIVER={SQL Server};SERVER='Server'," + \
                       "user='Username', password='Password', database='DatabaseB'")
cursor2 = cnxn2.cursor()

SQL = cursor1.execute("select * from table.DatabaseA")

SQL2 = """insert into table.DatabaseB([col1], [col2], [col3], [col4], [col5], [col6], [col7],
                                      [col8], [col9], [col10], [col11], [col12], [col13], [col14],
                                      [col15], [col16], [col17], [col18], [col19], [col20], [col21],
                                      [col22], [col23], [col24], [col25], [col26], [col27], [col28],
                                      [col29], [col30], [col31])
          values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"""

for row in cursor1.fetchall():
    cursor2.execute(SQL2, row)
In regard to the last two lines of code, I have also tried the following with no success:
for row in SQL:
    cursor2.execute(SQL2, row)
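For comparison, here is a minimal sketch of the same copy with both connections on one driver and the insert done as a single executemany batch. The server, table and column names are placeholders, and using the newer driver for the destination is an assumption on my part; the legacy {SQL Server} driver lacks support for some newer SQL Server column types, which can surface as exactly this kind of SQLBindParameter error.

import pyodbc

# Placeholder connection strings; substitute your real server, databases and credentials.
src = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};SERVER=Server;"
                     "DATABASE=DatabaseA;Trusted_Connection=Yes")
dst = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};SERVER=Server;"
                     "DATABASE=DatabaseB;UID=Username;PWD=Password")

src_cur = src.cursor()
dst_cur = dst.cursor()

# Fetch the columns to copy from the source table (placeholder names)
rows = src_cur.execute("select [col1], [col2], [col3] from dbo.SourceTable").fetchall()

# Extend the column list and the parameter markers to match the real target table
insert_sql = "insert into dbo.TargetTable ([col1], [col2], [col3]) values (?, ?, ?)"

# executemany binds the whole batch in one call instead of looping row by row
dst_cur.executemany(insert_sql, rows)
dst.commit()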

Related

Turbodbc - Getting Output from Multi-Statement Query

I am struggling to access the results of a stored procedure giving me the identity of the row just inserted using Turbodbc 4.1.2, Python 3.7, and SQL Server 2017.
My procedure runs along the following lines:
CREATE OR ALTER PROCEDURE [dbo].[testSP] @var INT
AS
INSERT INTO testTable VALUES (@var)
SELECT 4 --intermediate step to prove concept
--SELECT SCOPE_IDENTITY() as [scope_id] --final goal
--SELECT @@IDENTITY AS '[scope_id]'
My Turbodbc code looks like this:
cnxn = connect(driver='{ODBC Driver 17 for SQL Server}', server=srv, database=db, uid=user, pwd=password, turbodbc_options=options)
crsr = cnxn.cursor()
cmd = "EXEC testSP 1"
crsr.execute(cmd)
df = pd.DataFrame(crsr.fetchallnumpy())
When running the stored procedure without any inserts (i.e., just "SELECT 4"), the result set comes back fine. However, when running with the insert, which operates correctly, I receive the error "turbodbc.exceptions.InterfaceError: No active result set". The query runs fine in SSMS.
I am guessing that this is because I am receiving two result sets back - one for the insert, and one for the select. I saw from a couple of questions on SO that a nextset function is available in pymssql and pyodbc, but that the same functionality is not available in turbodbc.
How can I access the second part in my multi-statement query using turbodbc? This seems like a relatively simple issue, but I have been banging my head against the wall for a few hours.
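One workaround that may be worth trying (an assumption on my part, not something confirmed for turbodbc specifically): add SET NOCOUNT ON to the procedure so the INSERT's rows-affected count no longer produces its own result, leaving the SELECT as the only result set to fetch. A sketch, reusing the connection variables (srv, db, user, password, options) from the question:

# The procedure change (run once, e.g. in SSMS):
#
#   CREATE OR ALTER PROCEDURE [dbo].[testSP] @var INT
#   AS
#   SET NOCOUNT ON;                      -- suppress the "1 row affected" result from the INSERT
#   INSERT INTO testTable VALUES (@var);
#   SELECT SCOPE_IDENTITY() AS scope_id; -- now the only result set
#
import pandas as pd
from turbodbc import connect

cnxn = connect(driver='{ODBC Driver 17 for SQL Server}', server=srv, database=db,
               uid=user, pwd=password, turbodbc_options=options)
crsr = cnxn.cursor()
crsr.execute("EXEC testSP 1")
df = pd.DataFrame(crsr.fetchallnumpy())  # one row containing the new identity value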

pypyodbc error 'Associated statement is not prepared'

I am trying to create an 'upsert' function for pypyodbc SQL Server. I have validated that the query built up will run in SSMS with the desired outcome, but when trying to execute and commit with pypyodbc I receive the following error: 'HY007', '[HY007] [Microsoft][ODBC SQL Server Driver]Associated statement is not prepared'.
Here is the upsert function:
def sql_upsert(sql_connection, table, key_field, key_value, **kwargs):
    keys = ["{key}".format(key=k) for k in kwargs]
    values = ["'{value}'".format(value=v) for v in kwargs.values()]
    update_columns = ["{key} = '{value}'".format(key=k, value=v) for k, v in kwargs.items()]
    sql = list()

    # update
    sql.append("UPDATE {table} SET ".format(table=table))
    sql.append(", ".join(update_columns))
    sql.append(" WHERE {} = '{}'".format(key_field, key_value))
    sql.append(" IF @@ROWCOUNT=0 BEGIN ")

    # insert
    sql.append("INSERT INTO {table} (".format(table=table))
    sql.append(", ".join(keys))
    sql.append(") VALUES (")
    sql.append(", ".join(values))
    sql.append(")")
    sql.append(" END")

    query = "".join(sql)
    print(query)
The function builds up a query string in a format based on this other thread How to insert or update using single query?
Here is an example of the output:
UPDATE test SET name='john' WHERE id=3012
IF @@ROWCOUNT=0 BEGIN
INSERT INTO test(name) VALUES('john')
END
The error message you cited is produced by the ancient "SQL Server" ODBC driver that ships as part of Windows. A more up-to-date driver version like "ODBC Driver 17 for SQL Server" should produce a meaningful error message.
If you look here or here you'll see people complaining about this over a decade ago.
Apparently SQL Server's ODBC driver returns that error when you're executing two statements that fail due to a field value being too long, or perhaps due to foreign key violations.
Use SSMS to see which statement causes the problem, or better yet, stop using ODBC and use pymssql.
This error may also occur when you haven't granted the correct permissions on the stored procedure.
Go to SQL Server --> right-click your stored procedure --> Properties --> Permissions.
Add the users and roles that will execute the stored procedure.
This may help resolve the issue.
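As an aside, the same upsert pattern can be built with parameter markers instead of values interpolated into the string, which avoids quoting problems and SQL injection. A rough sketch (not a tested drop-in replacement for the function above; it assumes a pyodbc/pypyodbc-style connection object):

def sql_upsert(sql_connection, table, key_field, key_value, **kwargs):
    # "col1 = ?, col2 = ?" for the UPDATE, "col1, col2" and "?, ?" for the INSERT
    set_clause = ", ".join("{} = ?".format(k) for k in kwargs)
    columns = ", ".join(kwargs)
    markers = ", ".join("?" for _ in kwargs)

    query = ("UPDATE {table} SET {set_clause} WHERE {key_field} = ? "
             "IF @@ROWCOUNT = 0 BEGIN "
             "INSERT INTO {table} ({columns}) VALUES ({markers}) "
             "END").format(table=table, set_clause=set_clause, key_field=key_field,
                           columns=columns, markers=markers)

    # Parameter order matches the markers: UPDATE values, then the key, then INSERT values
    params = list(kwargs.values()) + [key_value] + list(kwargs.values())

    cursor = sql_connection.cursor()
    cursor.execute(query, params)
    sql_connection.commit()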

Adding data from one Postgresql server into another Postgresql server using Python

I'm pretty new to Python and PostgreSQL in general, but I'm having problems transferring data from one server to another. Currently I have code where I pull data and assign it to a variable IT; when I try inserting IT into another PostgreSQL server, I run into errors.
First I used:
cur = con.cursor()
#Connect cursor to local server
IT = DataPull()
#Pulls the data from the remote Postgresql server and set it equal to IT
command2 = (
"""
INSERT INTO gr_data.it (column_name1,column_name2,column_name3,column_name4,column_name5,column_name6,column_name7) VALUES(?,?,?,?,?,?,?)
""")
cur.execute(command2, IT)
But I end up getting the error:
psycopg2.ProgrammingError: syntax error at or near ","
LINE 2:...column_name4,column_name5,column_name6,column_name7 VALUES<?,?,?,?,?....
^
So I figured it had to do with the question marks. I googled around and found that they should perhaps be changed to "%s", but then I received this error:
TypeError: not all arguments converted during string formatting
Any help?
Here's an example of how you should perform the insert using variables:
cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (100, "abc'def"))
It is taken from the official docs, which you can find here.
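Applied to the code in the question, that looks roughly like the sketch below. It assumes DataPull() returns a single 7-element tuple; if it actually returns a list of rows, use cur.executemany instead. The connection details are placeholders.

import psycopg2

con = psycopg2.connect("dbname=target_db user=postgres")  # placeholder connection details
cur = con.cursor()

IT = DataPull()  # from the question: the data pulled from the remote server

command2 = """
    INSERT INTO gr_data.it (column_name1, column_name2, column_name3, column_name4,
                            column_name5, column_name6, column_name7)
    VALUES (%s, %s, %s, %s, %s, %s, %s)
"""

cur.execute(command2, IT)        # IT must be a sequence of exactly 7 values
# cur.executemany(command2, IT)  # use this instead if IT is a list of 7-value rows
con.commit()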

pypyodbc: OPENJSON incorrect syntax near keyword "WITH"

I'm trying to use OPENJSON in a Python script to import some basic JSON into a SQL database. I initially tried with a more complex JSON file, but simplified it for the sake of this post. Here's what I have:
sql_statement = "declare @json nvarchar(max) = '{\"name\":\"James\"}'; SELECT * FROM OPENJSON(@json) WITH (name nvarchar(20))"
cursor.execute(sql_statement)
cursor.commit()
connection.close()
The error I receive:
pypyodbc.ProgrammingError: (u'42000', u"[42000] [Microsoft][ODBC SQL
Server Driver][SQL Server]Incorrect syntax near the keyword 'with'. If
this statement is a common table expression, an xmlnamespaces clause
or a change tracking context clause, the previous statement must be
terminated with a semicolon.")
Any thoughts on why I'm seeing this error? I was successfully able to execute other SQL queries with the same pypyodbc / database configuration.
The problem could be that your database is running at an older compatibility level, where OPENJSON is not available.
To find the compatibility level of your database, run following SQL statement:
SELECT compatibility_level FROM sys.databases WHERE name = 'your_db_name';
If the result is 120 or lower, you'll need to update your compatibility level to 130, by running:
ALTER DATABASE your_db_name SET COMPATIBILITY_LEVEL = 130;
Note: In case your database is actually Azure SQL DB, you should check the version as well, as OPENJSON is not available in versions prior to 12.x.
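Putting that together from the Python side might look something like this sketch (connection details are placeholders, and it assumes pypyodbc as in the question):

import pypyodbc

connection = pypyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=your_server;DATABASE=your_db_name;"
                              "Trusted_Connection=yes")  # placeholder connection details
cursor = connection.cursor()

# OPENJSON needs compatibility level 130 or higher
cursor.execute("SELECT compatibility_level FROM sys.databases WHERE name = 'your_db_name'")
level = cursor.fetchone()[0]
print("compatibility_level:", level)

if level >= 130:
    cursor.execute("declare @json nvarchar(max) = '{\"name\":\"James\"}'; "
                   "SELECT * FROM OPENJSON(@json) WITH (name nvarchar(20))")
    print(cursor.fetchall())
connection.close()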

How to update records in SQL Alchemy in a Loop

I am trying to use SQLSoup - the SQLAlchemy extension - to update records in a SQL Server 2008 database. I am using pyodbc for the connections. There are a number of issues which make it hard to find a relevant example.
I am reprojecting a geometry field in a very large table (2 million + records), so many of the standard ways of updating fields cannot be used. I need to extract coordinates from the geometry field to text, convert them and pass them back in. All this is fine, and all the individual pieces are working.
However, I want to execute a SQL UPDATE statement on each row while looping through the records one by one. I assume this places locks on the recordset, or that the connection is in use, because if I use the code below it hangs after successfully updating the first record.
Any advice on how to create a new connection, reuse the existing one, or accomplish this another way is appreciated.
s = select([text("%s as fid" % id_field),
            text("%s.STAsText() as wkt" % geom_field)],
           from_obj=[feature_table])
rs = s.execute()

for row in rs:
    new_wkt = ReprojectFeature(row.wkt)
    update_value = "geometry :: STGeomFromText('%s',%s)" % (new_wkt, "3785")
    update_sql = ("update %s set GEOM3785 = %s where %s = %i" %
                  (full_name, update_value, id_field, row.fid))
    conn = db.connection()
    conn.execute(update_sql)
    conn.close()  # or not - no effect..
Updated working code now looks like this. It works fine on a few records, but hangs on the whole table, so I guess it is reading in too much data.
db = SqlSoup(conn_string)
# create outer query
Session = sessionmaker(autoflush=False, bind=db.engine)
session = Session()
rs = session.execute(s)

for row in rs:
    # create update sql...
    session.execute(update_sql)

session.commit()
I now get connection busy errors.
DBAPIError: (Error) ('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')
It looks like this could be a problem with the ODBC driver - http://sourceitsoftware.blogspot.com/2008/06/connection-is-busy-with-results-for.html
Further Update:
On the server using profiler, it shows the select statement then the first update statement are "starting" but neither complete.
If I set the Select statement to return the top 10 rows, then it does complete and the updates run.
SQL: Batch Starting Select...
SQL: Batch Starting Update...
I believe this is an issue with pyodbc and SQL Server drivers. If I remove SQL Alchemy and execute the same SQL with pyodbc it also hangs. Even if I create a new connection object for the updates.
I also tried the SQL Server Native Client 10.0 driver, which is meant to allow MARS (Multiple Active Result Sets), but it made no difference. In the end I have resorted to "paging the results" and updating these batches using pyodbc and SQL (see below); however, I thought SQLAlchemy would have been able to do this for me automatically.
Try using a Session.
rs = s.execute() then becomes rs = session.execute(s), and you can replace the last three lines with session.execute(update_sql). I'd also suggest configuring your Session with autocommit off and calling session.commit() at the end.
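A minimal sketch of that suggestion, reusing the names (db, s, update_sql) from the question:

from sqlalchemy.orm import sessionmaker

# autocommit is off by default; commit once at the end
Session = sessionmaker(bind=db.engine, autoflush=False)
session = Session()

rs = session.execute(s)          # the select built in the question
for row in rs:
    # ...build update_sql for this row as in the question...
    session.execute(update_sql)

session.commit()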
Can I suggest that when your process hangs you run sp_who2 on the SQL Server box and see what is happening? Check for blocked spids and see if you can find anything in the SQL code that suggests what is happening. If you do find a spid that is blocking others, you can run DBCC INPUTBUFFER(spid) and see if that tells you what query it executed. Otherwise you can also attach the SQL Profiler and trace your calls.
In some cases it could also be parallelism on the SQL Server that causes blocks. Unless this is a data warehouse, I suggest turning your max degree of parallelism down (set it to 1). Let me know; when I check this again in the morning, if you need help, I'll be glad to help.
Until I find another solution I am using a single connection and custom SQL to return sets of records, and updating these in batches. I don't think what I am doing is a particularly unique case, so I am not sure why I cannot handle multiple result sets simultaneously.
Below works but is very, very slow..
import pyodbc

cnxn = pyodbc.connect(conn_string, autocommit=True)
cursor = cnxn.cursor()

# get total recs in the database
s = "select count(fid) as count from table"
count = cursor.execute(s).fetchone().count

# choose number of records to update in each iteration
batch_size = 100
counter = 0

for i in range(1, count, batch_size):
    # sql to bring back relevant records in each batch
    s = """SELECT fid, wkt FROM (SELECT ROW_NUMBER() OVER(ORDER BY FID ASC) AS 'RowNumber',
                                        FID,
                                        GEOM29902.STAsText() as wkt
                                 FROM %s) features
           WHERE RowNumber >= %i AND RowNumber <= %i""" % (full_name, i, i + batch_size)
    rs = cursor.execute(s).fetchall()

    for row in rs:
        new_wkt = ReprojectFeature(row.wkt)
        # ...create update sql statement for the record
        cursor.execute(update_sql)
        counter += 1

cursor.close()
cnxn.close()
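One more thing that may be worth checking, given the Native Client / MARS attempt mentioned above: as far as I know, MARS is off by default and has to be requested explicitly in the ODBC connection string rather than just by choosing a MARS-capable driver. A sketch with placeholder connection details:

import pyodbc

# MARS_Connection asks the driver to allow multiple active result sets on one connection,
# e.g. keeping the SELECT cursor open while executing UPDATEs on the same connection.
cnxn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=your_server;DATABASE=your_db;"
    "Trusted_Connection=yes;"
    "MARS_Connection=yes"
)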
