I am trying to get Python to run a stored procedure on my SQL Server which kicks off a series of procedures that import a file, process it, and output a couple of files.
So far I have got my code to the point where it accepts an input to a table, but then the whole thing hangs when it calls the stored procedure.
Checking sp_who2 on the server, the session is waiting on PREEMPTIVE_OS_PIPEOPS, which, from what searching has revealed, means it is waiting on something outside of SQL Server to finish before proceeding.
Is someone able to shed some light on whether it is possible to use pyodbc to fire off a stored procedure and then close the connection without waiting for it?
My belief is that just telling the procedure to run and then closing out should fix the issue, but I am having trouble finding the code for this.
Python code:
connection2 = pypyodbc.connect('Driver={SQL Server}; Server=server;Database=db', timeout=1)
cursor2 = connection2.cursor()
cursor2.execute("{CALL uspGoBabyGo}")
connection2.close()
return 'file uploaded successfully'
Stored procedure:
BEGIN
SET NOCOUNT ON;
EXECUTE [dbo].[uspCableMapImport]
END
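If the aim really is fire-and-forget, one generic workaround (a sketch, not pyodbc-specific) is to push the blocking call onto a daemon thread so the caller returns immediately; `run_proc` here is a hypothetical stand-in for the pyodbc call, and the connection must be created inside the thread:

```python
import threading

def fire_and_forget(fn, *args, **kwargs):
    # run fn on a background daemon thread and return immediately
    t = threading.Thread(target=fn, args=args, kwargs=kwargs, daemon=True)
    t.start()
    return t

def run_proc():
    # hypothetical stand-in for the blocking call, e.g.:
    #   conn = pyodbc.connect(...); conn.cursor().execute("{CALL uspGoBabyGo}")
    pass

fire_and_forget(run_proc)
```

Note the daemon thread dies with the process, so the process must outlive the procedure; a more robust server-side alternative is to have the procedure queue a SQL Server Agent job, since sp_start_job returns immediately.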
After more searching, and after the script stopped posting the record to the table, I found the solution to the issue: I needed to add autocommit=True to the script. The code is now as follows:
connection = pyodbc.connect('Driver={SQL Server};'
                            'Server=Server;Database=DB;Trusted_Connection=yes')
connection.autocommit = True
cursor = connection.cursor()
referee = file.filename.rsplit('.')[0]
SQLCommand = "INSERT INTO RequestTable (Reference, Requested) VALUES (?, ?)"
cursor.execute(SQLCommand, (str(referee), datetime.now().strftime('%Y-%m-%d')))
connection.commit()
SQLCommand2 = "{CALL uspGoBabyGo}"
cursor.execute(SQLCommand2)
connection.commit()
connection.close()
Working with Python and SQL Server, I'm trying to catch the SQL messages (PRINT or RAISERROR) that may occur when executing an SQL command or stored procedure.
On the SQL Server I have the following procedure (simplified for my question)
CREATE PROCEDURE dbo.pyTest (@n int)
AS
SET NOCOUNT ON
IF OBJECT_ID('dbo.tablename') IS NULL RAISERROR(N'Error. Destination table not found.', 0, 0)
ELSE INSERT INTO dbo.tablename VALUES (@n);
In Python I'm executing this script
server = 'server_name'
db = 'database_name'
from sqlalchemy import create_engine
engine = create_engine(f"mssql+pyodbc://{server}/{db}?trusted_connection=yes&driver=ODBC+Driver+17+for+SQL+Server")
query="INSERT INTO dbo.tablename VALUES (1)"
#query="SELECT GETDATE()"
with engine.begin() as conn:
    conn.execute(query)
My question is: How do I catch the messages being produced from the stored procedure?
When I execute "EXEC dbo.pyTest 1" in SSMS, I get the following text in the Messages tab:
"Error. Destination table not found."
But when I do the same in Python, I just get "Process finished with exit code 0".
I have been reading a lot of suggestions from Google, but have not been able to find a similar question.
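One way to surface those messages, assuming a reasonably recent pyodbc (which exposes a `Cursor.messages` list; with SQLAlchemy you would need the DBAPI cursor, e.g. via `engine.raw_connection()`). `clean_odbc_message` is a hypothetical helper that strips the bracketed driver prefixes, and the server/database names are placeholders:

```python
import re

def clean_odbc_message(raw):
    # pyodbc message text arrives as
    # "[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Error. ..."
    # -- strip the chain of bracketed driver prefixes
    return re.sub(r'^(\[[^\]]*\])+', '', raw)

if __name__ == "__main__":
    import pyodbc
    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=server_name;Database=database_name;Trusted_Connection=yes;"
    )
    cur = conn.cursor()
    cur.execute("EXEC dbo.pyTest 1")
    # PRINT output and RAISERROR with severity 0-10 do not raise an
    # exception; they accumulate here as (type, text) tuples
    for _type, text in cur.messages:
        print(clean_odbc_message(text))
```

Note that RAISERROR with severity 0, as in the procedure above, is informational only; with severity 11 or higher pyodbc would instead raise a pyodbc.ProgrammingError you could catch.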
I was writing a proprietary script that queries a company database to pull certain information. I was using Psycopg2. At this point the lines I was using were like:
conn = psycopg2.connect("dbname='somedb' user='usr' host='something.azure.com' password='pswd' port='5432'")
cur = conn.cursor()
query = "something"
cur.execute(query)
results = cur.fetchall()
The script was running fine until a few days ago when, after quick successions of runs during debugging, I started getting:
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
A closer look made me realize I had not properly closed the connection. So I modified it to:
with psycopg2.connect("dbname='somedb' user='usr' host='something.azure.com' password='pswd' port='5432'") as conn:
    cur = conn.cursor()
    query = "something"
    cur.execute(query)
    results = cur.fetchall()
The error persisted even when no additional connections were present on the server end, so hitting max_connection is unlikely the reason here.
Strangely, I can access the same server through PGAdmin or use sqlalchemy.create_engine with pandas.read_sql on the same machine. Additionally, the script runs fine on another colleague's machine while mimicking my IP address through VPN.
Edit: sqlalchemy worked for my machine for exactly once, after which I started to get the same error.
My sqlalchemy code:
engine = create_engine('postgresql://{0}:{1}@{2}:5432/db'.format(USR, PSWD, HOST))
query = "something"
df = pd.read_sql(sql=query, con=engine)
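If the drops are transient (managed Postgres gateways do recycle connections), one defensive pattern is to open a fresh connection per attempt and retry on OperationalError. This is only a sketch under that assumption; `make_conn` and the query are placeholders. One psycopg2 gotcha is also worth noting: `with connection` only commits or rolls back the transaction, it does not close the connection, so the close has to be explicit:

```python
import time

def run_with_retry(make_conn, query, retries=3, delay=1.0,
                   exceptions=(Exception,)):
    # open a fresh connection per attempt and retry on transient errors
    last_err = None
    for _attempt in range(retries):
        conn = make_conn()
        try:
            with conn:  # psycopg2: commits/rolls back the transaction...
                cur = conn.cursor()
                cur.execute(query)
                return cur.fetchall()
        except exceptions as e:
            last_err = e
            time.sleep(delay)
        finally:
            conn.close()  # ...but does NOT close the connection itself
    raise last_err
```

In the real script, `make_conn` would be `lambda: psycopg2.connect("dbname='somedb' ...")` and `exceptions=(psycopg2.OperationalError,)`.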
I've built a Python script that runs a report for me using one of our vendor's APIs, then uploads the data directly to an MS SQL server. I would like to add an error handler that sends an email when the insert fails for any reason.
Can I just wrap my insert statement in a try? Currently I have this going to a local server for testing...
conn = pyodbc.connect('Driver={SQL Server};'
'Server=localhost\*****local;'
'Database=Reporting;'
'Trusted_Connection=yes;')
#Set cursor variable
cursor = conn.cursor()
executeValue = """INSERT INTO New_Five9_CallLog
(Call_ID, [Timestamp],
Campaign, Call_Type, Agent_Email, Agent_Name, Disposition,
ANI, Customer_Name, DNIS, Call_Time, Rounded_Bill_Time,
Cost, IVR_Time, Queue_Wait_Time, QCB_Wait_Time,
Total_Queue_Time, Ring_Time, Talk_Time, Hold_Time, Park_Time,
ACW_Time, Transfers, Conferences, Holds, Parks, Abandoned,
Recordings, Handle_Time, Session_ID, IVR_Path,
Skill, Ticket_Number)
VALUES (""" + values + ")"
#Execute query
cursor.execute(executeValue)
#Commit and close
conn.commit()
conn.close()
I get the values variable with some other script above this section. What I'd like to know is how to capture an error on this section and then send an email to myself with the error description.
Check out this answer (https://stackoverflow.com/a/42143703/1525867) explaining how to catch pyodbc-specific errors (I personally use https://sendgrid.com/ to send emails).
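Yes, wrapping the execute/commit in try/except works; pyodbc raises subclasses of pyodbc.Error, so you can catch those specifically. A sketch using the standard library's smtplib instead of a service like SendGrid; the SMTP host and addresses are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_error_email(exc, sender="reports@example.com",
                      recipient="me@example.com"):
    # turn an exception into a ready-to-send email message
    msg = EmailMessage()
    msg["Subject"] = "Five9 call log insert failed"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"The insert raised {type(exc).__name__}:\n\n{exc}")
    return msg

if __name__ == "__main__":
    import pyodbc
    # conn, cursor and executeValue come from the snippet in the question
    try:
        cursor.execute(executeValue)
        conn.commit()
    except pyodbc.Error as exc:
        with smtplib.SMTP("smtp.example.com") as server:
            server.send_message(build_error_email(exc))
    finally:
        conn.close()
```

Catching pyodbc.Error (rather than a bare except) keeps unrelated bugs in the surrounding script from being silently emailed as "insert failures".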
I have a program using Python + the Python MySQL connector + MySQL, which is installed on multiple machines on a network and uses the same database. How do I refresh a connection without restarting the program on the other machines?
The program is installed on more than one machine and connects to the same MySQL database, which is located on a server. The program is working properly, but
when new data is entered or modified (SQL INSERT, UPDATE... statements) from one machine, it is not reflected on the other machines until the program is restarted, which means a new connection is necessary to show the new database data.
So, I would like to know how to refresh the connection without restarting the program on the other machines.
Here is the sample Connection code:
import mysql.connector as mc
conn = mc.connect(host='host', user='user', passwd='passw', db='db_name', port=port_number)
cur = conn.cursor()
How to refresh This Connection while the program is running?
Closing and reopening the connection may also solve your problem;
see the following example in a loop:
import mysql.connector as mc

conn = mc.connect(host='host', user='user', passwd='passw', db='db_name', port=port_number)

while True:
    # check if you're connected; if not, connect again
    if not conn.is_connected():
        conn = mc.connect(host='host', user='user', passwd='passw', db='db_name', port=port_number)
    cur = conn.cursor()
    sql = "INSERT INTO db_name.myTable (name) VALUES (%(val)s);"
    val = {'val': "Slim Shady"}
    cur.execute(sql, val)
    conn.commit()
    conn.close()
After inserting or updating, you should do a commit to make sure all data is written to the DB. Then it will be visible to everyone:
conn.commit()
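Rather than reconnecting unconditionally, you can also check the connection and let the connector re-establish it in place. A small helper, sketched against the is_connected()/reconnect() methods that mysql.connector connections provide:

```python
def ensure_connected(conn, attempts=3, delay=2):
    # reconnect in place if the server has dropped the connection;
    # works with any object exposing is_connected() and reconnect()
    if not conn.is_connected():
        conn.reconnect(attempts=attempts, delay=delay)
    return conn
```

Call `ensure_connected(conn)` before each query. Note also that if committed rows from other machines still don't show up, the reading side may be stuck inside an old transaction: InnoDB's default REPEATABLE READ isolation keeps a consistent snapshot open, so issuing conn.commit() on the *reading* connection before the SELECT starts a fresh snapshot and makes the new rows visible.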
I'm trying to execute a tiny MDX query against an Analysis Services server at work.
The server provides data via MS OLE DB rather than the ODBC specification; that's why I use the adodbapi library.
Here's the function I use to obtain the result of a query execution:
import adodbapi

def mdx_query(query):
    conn = adodbapi.connect("PROVIDER=MSOLAP; \
                             persist security info=true; \
                             Data Source=***; \
                             initial catalog=analyse;")
    cursor = conn.cursor()
    result = None  # so we still return something if execute fails
    try:
        cursor.execute(query)
        result = cursor.fetchone()
    except (adodbapi.Error, adodbapi.Warning) as e:
        print(e)
    cursor.close()
    del cursor
    conn.close()
    del conn
    return result
Primitive single-value queries work perfectly well:
select
[Physical Stock PCS] on 0,
[Goods].[Categories].[ALL] on 1
from [analyse]
If I make a syntax error, it also just gives me an adodbapi.Error message, which is fine.
But if I try to execute more complex queries like:
select
[Physical Stock PCS] on 0,
[Goods].[Categories].[Level 01] on 1
from [analyse]
[Goods].[Categories].[Level 01] has more than one dimension, and I always get a python.exe APPCRASH message no matter what.
I have tried both Python 2 and 3, running in Jupyter and console mode, and the pandas.read_sql_query method. The result is always the same: an APPCRASH window.
How do I cure the crashes and finally execute more complicated queries?
Any help is appreciated!
UPD: here's the error window; I can't change the language to EN. [screenshot: Appcrash error]
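The APPCRASH happens inside the native MSOLAP provider, so no Python-level try/except can catch it; the whole interpreter dies. Until the underlying provider issue is resolved, one way to keep the main program alive is to run the query in a throwaway child interpreter and treat a crash as a failed query. A generic sketch; the child code shown is a placeholder for the adodbapi call:

```python
import subprocess
import sys

def run_isolated(child_code, timeout=120):
    # run code in a separate Python process; a native crash (APPCRASH)
    # kills only the child, and shows up here as a nonzero exit code
    proc = subprocess.run([sys.executable, "-c", child_code],
                          capture_output=True, text=True, timeout=timeout)
    return proc.returncode == 0, proc.stdout, proc.stderr

ok, out, err = run_isolated("print('child alive')")
```

In practice the child code would import adodbapi, run mdx_query, and print the rows (e.g. as JSON) to stdout for the parent to parse; when the provider crashes, the parent just sees ok == False instead of dying itself.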