Issue running a SQL Server stored proc using pyodbc (Python)

I am trying to run a stored procedure in SQL Server (Synapse) using pyodbc. The SP merely performs an INSERT operation into a table.
sql =""" EXEC my_sp (?,?,?)"""
values = (str(v1),str(v2),int(v3))
cur.execute(sql,(values))
conn.commit()
The above code does not raise any error, but when I query the table afterwards I don't see the records. However, when I run the same SP in the SSMS console, it works fine.
I can't figure out why this is happening. Any clue?
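For reference, a minimal sketch of how this call is often written with pyodbc. Note that T-SQL's EXEC does not take parentheses around its argument list, which is one reason the ODBC CALL escape below is generally preferred. The connection string and parameter values here are placeholders, not taken from the original post:
import pyodbc

# Placeholder connection details and parameter values.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.com;DATABASE=mydb;UID=foo;PWD=bar"
)
cur = conn.cursor()
v1, v2, v3 = "a", "b", 1  # hypothetical values

# The ODBC CALL escape is the portable way to invoke a stored procedure.
cur.execute("{CALL my_sp (?, ?, ?)}", (str(v1), str(v2), int(v3)))
conn.commit()  # pyodbc's autocommit is off by default, so this is required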

Related

PYODBC query not running but works fine in MSSMS

I am trying to run a large query set up at work, possibly a stored procedure. I am unfamiliar with large queries, but I think that is the issue. I can use pyodbc to connect and run a simple "SELECT * FROM db;" and it works fine, and the full query works fine when I run it in MSSMS, but not when I copy the large query into the query variable.
I have found a few articles that say to add "SET NOCOUNT ON;". I tried that and it didn't work either.
The Python error is:
"No results. Previous SQL is not a query"
Once again, the query works fine in MSSMS, so any guidance would be appreciated.
Thanks
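As a rough sketch of what usually clears that pyodbc error (connection_string and large_query below are placeholders for the real values): either prepend SET NOCOUNT ON; so DML row counts don't arrive ahead of the actual result set, or step past the non-query results with nextset():
import pyodbc

conn = pyodbc.connect(connection_string)  # placeholder connection string
cur = conn.cursor()

# Option 1: suppress the "N rows affected" counts that otherwise sit in
# front of the real result set and trigger "Previous SQL is not a query".
cur.execute("SET NOCOUNT ON; " + large_query)  # large_query: the batch text
rows = cur.fetchall()

# Option 2: leave the counts in place and skip over non-query results
# until a result set with columns (cursor.description) shows up.
cur.execute(large_query)
while cur.description is None and cur.nextset():
    pass
rows = cur.fetchall()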

Executing stored procedures in SQL Server using Python client

I learned from a helpful post on Stack Overflow how to call stored procedures on SQL Server in Python (pyodbc). After modifying my code to what is below, I am able to connect and run execute() from the db_engine that I created.
import pyodbc
import sqlalchemy as sal
from sqlalchemy import create_engine
import pandas as pd
import urllib.parse

params = urllib.parse.quote_plus(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=myserver.com;'
    'DATABASE=mydb;'
    'UID=foo;'
    'PWD=bar')
connection_string = f'mssql+pyodbc:///?odbc_connect={params}'
db_engine = create_engine(connection_string)
db_engine.execute("EXEC [dbo].[appDoThis] 'MYDB';")
<sqlalchemy.engine.result.ResultProxy at 0x1121f55e0>
db_engine.execute("EXEC [dbo].[appDoThat];")
<sqlalchemy.engine.result.ResultProxy at 0x1121f5610>
However, even though no errors are returned after running the above code in Python, when I check the database I can confirm that nothing has been executed. What is more telling is that the above commands take one or two seconds to complete, whereas running these stored procedures successfully in the database admin tool takes about 5 minutes.
How should I understand what is not working correctly in the above setup in order to properly debug? I literally run the exact same code through my database admin tool with no issues - the stored procedures execute as expected. What could be preventing this from happening via Python? Does the executed SQL need to be committed? Is there a way to debug using the ResultProxy that is returned? Any advice here would be appreciated.
Calling .execute() directly on an Engine object is an outdated usage pattern and will emit deprecation warnings starting with SQLAlchemy version 1.4. These days the preferred approach is to use a context manager (with block) that uses engine.begin():
import sqlalchemy as sa
# …
with engine.begin() as conn:  # transaction starts here
    conn.execute(sa.text("EXEC [dbo].[appDoThis] 'MYDB';"))
    # On exiting the `with` block the transaction will automatically be committed
    # if no errors have occurred. If an error has occurred the transaction will
    # automatically be rolled back.
Notes:
When passing an SQL command string it should be wrapped in a SQLAlchemy text() object.
SQL Server stored procedures (and anonymous code blocks) should begin with SET NOCOUNT ON; in the overwhelming majority of cases. Failure to do so can result in legitimate results or errors getting "stuck behind" any row counts that may have been emitted by DML statements like INSERT, UPDATE, or DELETE.
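To make the second note concrete, a minimal sketch reusing db_engine from the question above; the anonymous batch simply prepends SET NOCOUNT ON; before the EXEC:
import sqlalchemy as sa

with db_engine.begin() as conn:
    # SET NOCOUNT ON keeps INSERT/UPDATE/DELETE row counts inside the
    # procedure from hiding its real results or errors from the driver.
    conn.execute(sa.text("SET NOCOUNT ON; EXEC [dbo].[appDoThis] 'MYDB';"))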

When using pymysql with local_infile, data is not inserted into the database

I wrote code in Python that builds and executes SQL query statements using pymysql.
If I take the SQL statement generated by the code and run it directly against the DB, it works normally.
However, if I execute the same statement from the code with cursor.execute(sql), the data never appears in the DB.
I also passed the local_infile=True option when connecting. How should I write the code?
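The usual culprit here is a missing commit: pymysql leaves autocommit off by default, so rows loaded with LOAD DATA LOCAL INFILE (or INSERT) are rolled back when the connection closes. A minimal sketch, with placeholder credentials, table, and file names:
import pymysql

conn = pymysql.connect(
    host="localhost", user="foo", password="bar", database="mydb",
    local_infile=True,  # needed on the client side for LOAD DATA LOCAL INFILE
)
with conn.cursor() as cursor:
    cursor.execute(
        "LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE mytable "
        "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
    )
conn.commit()  # without this, the loaded rows never become visible
conn.close()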

Unable to insert data on Google Cloud SQL, but UID field is still auto-incrementing

I am running into MySQL behavior on Google Cloud SQL that I have never seen before.
Every MySQL command we try works from a Python script except INSERT. We can create the table and show tables, but when we insert data, nothing appears in the table. Yet if we copy that exact same insert statement to the MySQL command line and run it, the insert works fine.
BUT here is the unusual part. Even though the Python script fails to insert data, the UID AUTO_INCREMENT field has incremented for every empty, failed insert. For example, if the Python script fails to insert a row, the next time we run an insert from the MySQL command line, we see that the UID field has incremented by one.
It is as if MySQL started to insert the data and auto-incremented the UID field, but the data itself never arrived.
We are using MySQL on Google Cloud SQL. The insert is a simple test:
insert into queue (filename, text) VALUES ('test', 'test')
Any ideas what this is or how to debug it?
It turns out AUTOCOMMIT is set to OFF on Google Cloud SQL.
All SQL inserts must be followed by a commit statement.
For example:
import MySQLdb as mdb

# Connect and run the insert (host/credentials are placeholders).
db = mdb.connect(ip, login, pword)
query = "insert into tbname (entity, attribute) VALUES ('foo', 'bar')"
cursor = db.cursor()
cursor.execute(query)
db.commit()  # the INSERT is not visible until this commit
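Alternatively, MySQLdb can switch the connection into autocommit mode so each statement commits on its own (at the cost of losing the ability to roll back a multi-statement batch):
db.autocommit(True)    # commit implicitly after every statement
cursor.execute(query)  # now visible without an explicit db.commit()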

MySQL join query works in MySQL Workbench, but hangs in PyDev

I have been trying to run this query in the PyDev environment for a while, but it just hangs.
The tables involved are large (around 700,000 tweets and about 400,000 users). I have all the indexes sorted out. The query runs perfectly within seconds in Workbench, but just hangs when executed from Eclipse...
cursor.execute("SELECT id, user, hashtags FROM users JOIN tweets ON users.user_id = tweets.user")
It seems that I don't understand the buffering differences between executing in Workbench and in any other environment!
Any thoughts or ideas?
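One buffering-related experiment worth trying (a sketch assuming MySQLdb; the connection details are placeholders): a server-side cursor streams rows as you iterate instead of materializing the entire multi-hundred-thousand-row join on the client before execute() returns, which is often what makes a big query look like a hang:
import MySQLdb
import MySQLdb.cursors

# SSCursor fetches rows lazily from the server instead of buffering the
# whole result set in client memory during execute().
conn = MySQLdb.connect(host="localhost", user="foo", passwd="bar", db="mydb",
                       cursorclass=MySQLdb.cursors.SSCursor)
cursor = conn.cursor()
cursor.execute("SELECT id, user, hashtags FROM users "
               "JOIN tweets ON users.user_id = tweets.user")
for row in cursor:  # iterate incrementally; avoid fetchall() here
    print(row)      # or any per-row processing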
