Executing stored procedures in SQL Server using a Python client

I learned from a helpful post on StackOverflow about how to call stored procedures on SQL Server in python (pyodbc). After modifying my code to what is below, I am able to connect and run execute() from the db_engine that I created.
import pyodbc
import sqlalchemy as sal
from sqlalchemy import create_engine
import pandas as pd
import urllib
params = urllib.parse.quote_plus(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=myserver.com;'
    'DATABASE=mydb;'
    'UID=foo;'
    'PWD=bar')
connection_string = f'mssql+pyodbc:///?odbc_connect={params}'
db_engine = create_engine(connection_string)
db_engine.execute("EXEC [dbo].[appDoThis] 'MYDB';")
<sqlalchemy.engine.result.ResultProxy at 0x1121f55e0>
db_engine.execute("EXEC [dbo].[appDoThat];")
<sqlalchemy.engine.result.ResultProxy at 0x1121f5610>
However, even though no errors are returned after running the above code in Python, when I check the database, I confirm that nothing has actually been executed (more telling is that the above commands take one or two seconds to complete, whereas running these stored procedures successfully in the database admin tool takes about 5 minutes).
How should I understand what is not working correctly in the above setup in order to properly debug? I literally run the exact same code through my database admin tool with no issues - the stored procedures execute as expected. What could be preventing this from happening via Python? Does the executed SQL need to be committed? Is there a way to debug using the ResultProxy that is returned? Any advice here would be appreciated.

Calling .execute() directly on an Engine object is an outdated usage pattern and will emit deprecation warnings starting with SQLAlchemy version 1.4. These days the preferred approach is to use a context manager (with block) that uses engine.begin():
import sqlalchemy as sa
# …
with engine.begin() as conn:  # transaction starts here
    conn.execute(sa.text("EXEC [dbo].[appDoThis] 'MYDB';"))
# On exiting the `with` block the transaction will automatically be committed
# if no errors have occurred. If an error has occurred the transaction will
# automatically be rolled back.
Notes:
When passing an SQL command string it should be wrapped in a SQLAlchemy text() object.
SQL Server stored procedures (and anonymous code blocks) should begin with SET NOCOUNT ON; in the overwhelming majority of cases. Failure to do so can result in legitimate results or errors getting "stuck behind" any row counts that may have been emitted by DML statements like INSERT, UPDATE, or DELETE.
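Putting both notes together, a minimal sketch (reusing the procedure name from the question; engine is assumed to already exist) might look like:
import sqlalchemy as sa

with engine.begin() as conn:
    # SET NOCOUNT ON suppresses the "n rows affected" messages that DML
    # statements inside the procedure would otherwise emit ahead of any
    # real results or errors.
    conn.execute(sa.text("SET NOCOUNT ON; EXEC [dbo].[appDoThis] 'MYDB';"))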

Related

Sql Alchemy Insert Statement failing to insert, but no error

I am attempting to execute a raw SQL insert statement in SQLAlchemy. SQLAlchemy throws no errors when the constructed insert statement is executed, but the lines do not appear in the database.
As far as I can tell, it isn't a syntax error (see point 2 below), it isn't an engine error as the ORM can execute an equivalent write properly (see point 1), and it's finding the table it's supposed to write to (see point 3). I think it's a problem with a transaction not being committed and have attempted to address this (see point 4), but this hasn't solved the issue. Is it possible to create a nested transaction, and what would start the 'first' one, so to speak?
Thank you for any answers.
Some background:
1. I know that the ORM facilitates this and have used this feature, and it works, but it is too slow for our application. We decided to try using raw SQL for this particular write function, due to how often it's called, and the ORM for everything else. An equivalent method using the ORM works perfectly, and the same engine is used for both, so it can't be an engine problem, right?
2. I've issued an example of the SQL that the method constructs to the database directly, and that went in fine, so I don't think it's a syntax error.
3. It's communicating with the database properly and can find the table, as any syntax errors with table and column names throw a programmatic error, so it's not just throwing stuff into the 'void', so to speak.
4. My first thought after reading around was that it was a transaction error: a transaction was being created and not closed. So I constructed the execute statement as follows, to ensure a transaction was properly created and committed.
with self.Engine.connect() as connection:
    connection.execute(Insert_Statement)
    connection.commit
The so-called 'Insert Statement' has been converted to text using the SQLAlchemy text() function. I don't quite understand why it won't execute if I pass the constructed string directly to execute(), but I mention it in case it's relevant.
Other things that may be relevant:
Python 3 is running on one EC2 instance and the Postgres database on another. The table in particular is a TimescaleDB hypertable taking real-time data, hence the need for very fast writes, but that's probably not relevant.
Currently using pg8000 as the dialect, for no particular reason other than psycopg2 was throwing errors when trying to execute an equivalent method using the ORM.
Just so this question is answered in case anyone else ends up here:
The issue was a failure to call commit as a method, as @snakecharmerb pointed out. Gord Thompson also provided an alternate approach using begin(), which commits automatically on success, rather than connect(), which is a 'commit as you go' style of transaction.
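For reference, the two working forms look roughly like this (a sketch reusing the question's names, so self.Engine and Insert_Statement are assumed to exist):
# "commit as you go" (SQLAlchemy 1.4+): commit must actually be *called*
with self.Engine.connect() as connection:
    connection.execute(Insert_Statement)
    connection.commit()  # note the parentheses

# "begin once": the transaction is committed automatically on success
with self.Engine.begin() as connection:
    connection.execute(Insert_Statement)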

Preventing write modifications to an Oracle database, using Python

I am currently using the cx_Oracle module in Python to connect to my Oracle database. I would like to only allow the user of the program to do read-only executions, like SELECT, and NOT INSERT/DELETE queries.
Is there something I can do to the connection/cursor variables once I establish the connection to prevent writable queries?
I am using the Python Language.
Appreciate any help.
Thanks.
One possibility is to issue the statement "set transaction read only" as in the following code:
import cx_Oracle
conn = cx_Oracle.connect("cx_Oracle/welcome")
cursor = conn.cursor()
cursor.execute("set transaction read only")
cursor.execute("insert into c values (1, 'test')")
That will result in the following error:
ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
Of course you'll have to make sure that you create a Connection class that calls this statement when it is first created and after each and every commit() and rollback() call. And it can still be circumvented by calling a PL/SQL block that performs a commit or rollback.
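A rough sketch of such a wrapper, relying on cx_Oracle's documented support for subclassing Connection (the class name here is hypothetical):
import cx_Oracle

class ReadOnlyConnection(cx_Oracle.Connection):
    # Re-issue "set transaction read only" on connect and after every
    # commit() or rollback(), since those calls end the read-only transaction.
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._begin_read_only()

    def _begin_read_only(self):
        cursor = self.cursor()
        cursor.execute("set transaction read only")
        cursor.close()

    def commit(self):
        super().commit()
        self._begin_read_only()

    def rollback(self):
        super().rollback()
        self._begin_read_only()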
The only other possibility that I can think of right now is to create a restricted user or role which simply doesn't have the ability to insert, update, delete, etc., and make sure the application uses that user or role. This one at least is foolproof, but a lot more effort up front!

What's psycopg2 doing when I iterate a cursor?

I'm trying to understand what this code is doing behind the scenes:
import psycopg2
c = psycopg2.connect('dbname=some_db user=me').cursor()
c.execute('select * from some_table')
for row in c:
    pass
Per PEP 249 my understanding was that this was repeatedly calling Cursor.next() which is the equivalent of calling Cursor.fetchone(). However, the psycopg2 docs say the following:
When a database query is executed, the Psycopg cursor usually fetches all the records returned by the backend, transferring them to the client process.
So I'm confused -- when I run the code above, is it storing the results on the server and fetching them one by one, or is it bringing over everything at once?
It depends on how you configure psycopg2. See itersize and server-side cursors.
By default it fetches all rows into client memory, then just iterates over the fetched rows with the cursor. But per the above docs, you can configure batch fetches from a server-side cursor instead.
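For example, a server-side cursor is created by giving the cursor a name, and itersize controls how many rows each network round trip fetches during iteration (a sketch, with placeholder connection details):
import psycopg2

conn = psycopg2.connect('dbname=some_db user=me')
# A *named* cursor is a server-side cursor: rows stay on the backend
# and are fetched in batches of `itersize` during iteration.
cur = conn.cursor(name='my_server_side_cursor')
cur.itersize = 2000
cur.execute('select * from some_table')
for row in cur:
    pass  # only a batch of rows is held in client memory at a time
cur.close()
conn.close()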

How to disable SQLAlchemy caching?

I have a caching problem when I use sqlalchemy.
I use sqlalchemy to insert data into a MySQL database. Then, I have another application process this data, and update it directly.
But sqlalchemy always returns the old data rather than the updated data. I think sqlalchemy cached my request ... so ... how should I disable it?
The usual cause for people thinking there's a "cache" at play, besides the usual SQLAlchemy identity map which is local to a transaction, is that they are observing the effects of transaction isolation. SQLAlchemy's session works by default in a transactional mode, meaning it waits until session.commit() is called in order to persist data to the database. During this time, other transactions in progress elsewhere will not see this data.
However, due to the isolated nature of transactions, there's an extra twist. Those other transactions in progress will not only not see your transaction's data until it is committed, they also can't see it in some cases until they are committed or rolled back also (which is the same effect your close() is having here). A transaction with an average degree of isolation will hold onto the state that it has loaded thus far, and keep giving you that same state local to the transaction even though the real data has changed - this is called repeatable reads in transaction isolation parlance.
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29
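As a minimal sketch of the effect (assuming a mapped class Widget and a configured Session factory, with another process updating row 1 in the meantime):
session = Session()
stale = session.query(Widget).get(1)   # state loaded into this transaction
session.commit()                       # or session.rollback(): ends the transaction
fresh = session.query(Widget).get(1)   # a new transaction sees the new data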
This issue has been really frustrating for me, but I have finally figured it out.
I have a Flask/SQLAlchemy Application running alongside an older PHP site. The PHP site would write to the database and SQLAlchemy would not be aware of any changes.
I tried the sessionmaker setting autoflush=True unsuccessfully
I tried db_session.flush(), db_session.expire_all(), and db_session.commit() before querying and NONE worked. Still showed stale data.
Finally I came across this section of the SQLAlchemy docs: http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#transaction-isolation-level
Setting the isolation_level worked great. Now my Flask app is "talking" to the PHP app. Here's the code:
engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="READ UNCOMMITTED"
)
When the SQLAlchemy engine is started with the "READ UNCOMMITTED" isolation_level it will perform "dirty reads", which means it will read uncommitted changes directly from the database.
Hope this helps
Here is a possible solution, courtesy of AaronD in the comments:
from flask_sqlalchemy import SQLAlchemy

class UnlockedAlchemy(SQLAlchemy):
    def apply_driver_hacks(self, app, info, options):
        if "isolation_level" not in options:
            options["isolation_level"] = "READ COMMITTED"
        return super(UnlockedAlchemy, self).apply_driver_hacks(app, info, options)
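Usage is then the same as with the stock extension (a sketch, assuming an existing Flask app object):
db = UnlockedAlchemy(app)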
In addition to zzzeek's excellent answer: I had a similar issue, which I solved by using short-lived sessions.
from contextlib import closing
with closing(new_session()) as sess:
    pass  # do your stuff
I used a fresh session per task, task group, or request (in the case of a web app). That solved the "caching" problem for me.
This material was very useful for me:
When do I construct a Session, when do I commit it, and when do I close it
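For completeness, a sketch of the full pattern (new_session here is assumed to be a plain sessionmaker bound to an already-created engine, db_engine):
from contextlib import closing
from sqlalchemy.orm import sessionmaker

new_session = sessionmaker(bind=db_engine)  # db_engine assumed to exist
with closing(new_session()) as sess:
    pass  # run the task's queries; stale state is discarded when the session closes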
This was happening in my Flask application, and my solution was to expire all objects in the session after every request.
from flask.signals import request_finished

def expire_session(sender, response, **extra):
    app.db.session.expire_all()

request_finished.connect(expire_session, flask_app)
Worked like a charm.
I tried session.commit() and session.flush(); neither worked for me.
After going through the SQLAlchemy source code, I found the solution to disable caching:
setting query_cache_size=0 in create_engine worked.
create_engine(connection_string, convert_unicode=True, echo=True, query_cache_size=0)
First, there is no cache in SQLAlchemy.
Depending on the method you use to fetch data from the DB, you should run some tests after the database is updated by others and see whether you can get the new data.
(1) use connection:
connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print("username:", row['username'])
connection.close()
(2) use Engine ...
(3) use MetaData...
Please follow the steps in: http://docs.sqlalchemy.org/en/latest/core/connections.html
Another possible reason is that your MySQL DB is not being updated permanently. Restart the MySQL service and check.
As far as I know, SQLAlchemy does not cache results, so you should look at the logging output.

Execute sql query with Elixir

I'm using Elixir (the declarative layer on top of SQLAlchemy) in a project that connects to a Postgres database. I want to run the following query on the database I'm connected to, but I'm not sure how to do it as I'm rather new to Elixir and SQLAlchemy. Anyone know how?
VACUUM FULL ANALYZE table
Update
The error is: "UnboundExecutionError: Could not locate a bind configured on SQL expression or this Session". And the same result with session.close() issued before. I did try doing metadata.bind.execute() and that worked for a simple select. But for the VACUUM it said - "InternalError: (InternalError) VACUUM cannot run inside a transaction block", so now I'm trying to figure out how to turn that off.
Update 2
I can get the query to execute, but I'm still getting the same error - even when I create a new session and close the previous one.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
# ... insert stuff
old_session.commit()
old_session.close()
new_sess = sessionmaker(autocommit=True)
new_sess.configure(bind=create_engine('postgres://user:pw@host/db', echo=True))
sess = new_sess()
sess.execute('VACUUM FULL ANALYZE table')
sess.close()
and the output I get is
2009-12-10 10:00:16,769 INFO sqlalchemy.engine.base.Engine.0x...05ac VACUUM FULL ANALYZE table
2009-12-10 10:00:16,770 INFO sqlalchemy.engine.base.Engine.0x...05ac {}
2009-12-10 10:00:16,770 INFO sqlalchemy.engine.base.Engine.0x...05ac ROLLBACK
finishing failed run, (InternalError) VACUUM cannot run inside a transaction block
'VACUUM FULL ANALYZE table' {}
Update 3
Thanks to everyone who responded. I wasn't able to find the solution I wanted, but I think I'm just going to go with the one described here PostgreSQL - how to run VACUUM from code outside transaction block?. It's not ideal, but it works.
Dammit. I knew the answer was going to be right under my nose. Assuming you set up your connection like I did:
metadata.bind = 'postgres://user:pw@host/db'
The solution to this was as simple as
conn = metadata.bind.engine.connect()
old_lvl = conn.connection.isolation_level
conn.connection.set_isolation_level(0)  # 0 = ISOLATION_LEVEL_AUTOCOMMIT in psycopg2
conn.execute('vacuum analyze table')
conn.connection.set_isolation_level(old_lvl)
This is similar to what was suggested here PostgreSQL - how to run VACUUM from code outside transaction block?
because underneath it all, SQLAlchemy uses psycopg2 to make the connection to Postgres. Connection.connection is a proxy to the psycopg2 connection. Once I realized this, this problem came back to mind and I decided to take another whack at it.
Hopefully this helps someone.
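On recent SQLAlchemy versions the same result can be had without touching psycopg2 directly, by asking the engine for autocommit behaviour (a sketch, reusing the thread's placeholder URL and table name):
from sqlalchemy import create_engine, text

# AUTOCOMMIT means no transaction block is opened, so VACUUM is allowed.
engine = create_engine('postgresql://user:pw@host/db',
                       isolation_level='AUTOCOMMIT')
with engine.connect() as conn:
    conn.execute(text('VACUUM ANALYZE table'))  # 'table' is the placeholder name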
You need to bind the session to an engine
session.bind = metadata.bind
session.execute('YOUR SQL STATEMENT')
UnboundExecutionError says that your session is not bound to an engine, and there is no way to discover the engine from the query passed to execute(). You can either use engine.execute() directly or pass an additional mapper parameter (either a mapper or a mapped model corresponding to the table used in the query) to session.execute() to help SQLAlchemy discover the proper engine.
The InternalError says that you are trying to execute this statement inside an explicitly started (with a BEGIN statement) transaction. Have you issued some statements before it without calling commit()? If so, just call commit() or rollback() to close the transaction before doing VACUUM. Also note that there are several parameters to sessionmaker() that tell SQLAlchemy when a transaction should be started.
If you have access to SQLAlchemy session, you can execute arbitrary SQL statements via its execute method:
session.execute("VACUUM FULL ANALYZE table")
(Depending on the Postgres version) you most likely do not want to run "VACUUM FULL".
