sqlalchemy connection flask - python

I have the following code in Flask:
sql = text('select * from person')
results = self.db.engine.execute(sql)
for row in results:
    print(".............", row)  # prints nothing
people = Person.query.all()  # shows all person data
Given this situation, it seems obvious that self.db is somehow not using the same connection that Person.query is using. Given that, can I get the connection from the Person.query object?
P.S. This is for testing and I'm using an SQLite3 database. I tried this with Postgres, but the outcome is the same.

Just figured it out. Try Person.query.session.execute(sql). Voila!
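For context, a minimal sketch of that fix, assuming a Flask-SQLAlchemy setup with a Person model mapped to a person table:

from sqlalchemy import text

sql = text('select * from person')

# Execute through the same session the ORM uses, so the raw SQL sees the same
# connection (and any uncommitted test data) as Person.query does.
results = Person.query.session.execute(sql)

for row in results:
    print(row)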

How to use Azure DevOps / VSTS to fetch query results in python

Below is my current code. It connects successfully to the organization. How can I fetch the results of a query in Azure DevOps like they have here? I know this was solved, but there isn't an explanation, and there's quite a big gap in what they're doing.
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v5_1.work_item_tracking.models import Wiql
personal_access_token = 'xxx'
organization_url = 'zzz'
# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
wit_client = connection.clients.get_work_item_tracking_client()
results = wit_client.query_by_id("my query ID here")
P.S. Please don't link me to the GitHub repo or the documentation. I've looked at both extensively for days and it hasn't helped.
Edit: I've added the results line that successfully gets the query. However, it returns a WorkItemQueryResult object, which is not exactly what is needed. I need a way to view the columns and the results of the query for those columns.
So I've figured this out in probably the most inefficient way possible, but I hope it helps someone else and that they find a way to improve it.
The issue with the WorkItemQueryResult object stored in the variable results is that it doesn't expose the contents of the work items directly.
So the goal is to be able to use the get_work_item method, which requires the id field; you can get that (in a rather roundabout way) through item.target.id from the results' work_item_relations. The code below is added on.
for item in results.work_item_relations:
    id = item.target.id
    work_item = wit_client.get_work_item(id)
    fields = work_item.fields
This gets the id of every work item in your result and then grants access to the fields of that work item, which you can read with fields.get("System.Title"), etc.
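Putting the pieces together, a minimal sketch that runs the saved query and prints a couple of fields for each returned work item. System.Title and System.State are standard system fields; swap in whichever columns your query actually selects:

results = wit_client.query_by_id("my query ID here")
for item in results.work_item_relations:
    work_item = wit_client.get_work_item(item.target.id)
    fields = work_item.fields
    # fields is a dict keyed by reference name, e.g. "System.Title"
    print(work_item.id, fields.get("System.Title"), fields.get("System.State"))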

Using Python and MySQL (and Windows as the OS), cursor.execute() is returning no results but it is connecting

So I have an issue very similar to this question, but a bit different.
I am calling cursor.execute(sqlString) on a piece of SQL that works fine when I run it directly in MySQL Workbench. When I run the code, however, I get no result set.
I have exactly the same symptoms as stated in the link, and I have tried the linked solutions, but it turns out that I do not have the same issue.
My _stored_results[] is empty when returning.
I am using the code in a try/except block. I have another Python program that uses the same code to load a CSV into my MySQL db and it works dandy.
The code where I am having the issue is within an @app.route, if that makes any difference.
My code looks like this:
def functionName():
    try:
        import mysql.connector
        from mysql.connector import errorcode
        cnx = mysql.connector.connect(user=init["dbDetails"][0], password=init["dbDetails"][1], host=init["dbDetails"][2], database=init["dbDetails"][3])
        cur = cnx.cursor()
        cur.close()  # I deffo don't need the two lines below but they were added for a sanity check, just to make sure the cur was not being read from any other code.
        cur = cnx.cursor()  # and this one obviously
        sqlString = 'CALL `schemaName`.`getProcedureName_sp`(1, 1, 0)'
        cur.execute(sqlString, multi=True)  # tried it here without the multi=True and got the msg telling me to use it.
        getSomeDetails = cur.fetchall()
        cnx.commit()  # probably don't need to commit here, I am just reading from the dB, but I am trying anything as I have no idea what my issue might be.
        return render_template('success.html')
    except Exception as e:
        return render_template('error.html', error=str(e))
    finally:
        cur.close()
        cnx.close()
I am so baffled as I have this same code working in several places.
So I was beating my head against the wall with this, and when I couldn't get anywhere, I just decided to leave it and move on, then come back with a fresh mind. Well... It worked, kinda.
So I haven't found the solution, but I have found a workaround that does the job and might even shed some light on what is actually happening in my code.
I decided that, as the fetchall() method was what was causing me the trouble, I should try to circumvent it.
I probed the cursor (cur) just before the fetchall() method was called and saw that cur._rows contains the results from the SQL call.
So I changed the line
getSomeDetails = cur.fetchall()
to
if len(cur._rows) > 0:
    getSomeDetails = list(cur._rows[0])  # I only ever expect one result in this query
    # getSomeDetails should now have the row I am looking for
    getSomeDetails[0]  # gets me the field I am looking for
and now my variable getSomeDetails has the return values from the procedure call
They are, however, not in the nice format I would have gotten from the fetchall() function, so I had to do some processing: I had to ensure that I was getting some values back, and I noted that these values were returned in a tuple.
I have come across this issue on two different machines running two different OSes and two different versions of Python (Windows 7 with Python 2.7 and Windows 10 with Python 3). The two pieces of code were different, and in fact I was using two different MySQL libraries, so the actual code for the fix was slightly different in each case, but in both cases I am now getting data from my DB into variables in Python, so that's cool.
However, this is a hack and I am aware of that. I would rather be using the proper function cur.fetchall(), so I am still open to suggestions of what could be going wrong here.
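For what it's worth, here is a sketch of the more conventional way to read result sets from a stored procedure with mysql-connector-python: call it through callproc() and fetch from stored_results() rather than from the calling cursor. This is only a sketch under assumptions (it reuses the init["dbDetails"] credentials from the question and assumes the connection's database is the procedure's schema), not a confirmed fix for the issue above:

import mysql.connector

cnx = mysql.connector.connect(user=init["dbDetails"][0], password=init["dbDetails"][1],
                              host=init["dbDetails"][2], database=init["dbDetails"][3])
cur = cnx.cursor()

# callproc() runs the procedure; any SELECT inside it comes back as a separate result set.
cur.callproc('getProcedureName_sp', (1, 1, 0))

# The result sets produced by the procedure live on stored_results(), not on the calling cursor.
for result in cur.stored_results():
    getSomeDetails = result.fetchall()

cur.close()
cnx.close()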

stored proc not committing with sqlalchemy/pyodbc

I am using sqlalchemy/pyodbc to connect to an MS SQL 2012 server. I chose sqlalchemy because of the direct integration with pandas DataFrames using .read_sql and .to_sql.
At a high level, my code is:
df = dataframe.read_sql("EXEC sp_getsomedata")
<do some stuff here>
finaldf.to_sql("loader_table", engine,...)
This part works great, very easy to read, etc. The problem is that I have to run a final stored proc to insert the data from the loader table into the live table. Normally, sqlalchemy knows to commit after INSERT/UPDATE/DELETE, but it doesn't want to do the commit for me when I run this final stored proc.
After having tried multiple approaches, I see the transaction in the db sitting uncommitted. I know sqlalchemy is very flexible and I am using about 3% of its functionality; what is the simplest way to get this working? I think I need to be using sqlalchemy core and not the ORM. I saw examples using sessionmaker, but I think that monopolizes the engine object and doesn't let pandas access it.
connection = engine.connect()
transaction = connection.begin()
connection.execute("EXEC sp_doLoaderStuff")
transaction.commit()
connection.close()
I have tried calling .execute from the connection level, from a cursor level, and even using the .raw_connection() method without success.
connection = engine.raw_connection()
connection.autocommit = True
cursor = connection.cursor()
cursor.execute("EXEC sp_doLoaderStuff")
connection.commit()
connection.close()
Any ideas what I am missing here?
Completely self-inflicted. The correct working code, using the raw_connection() method, is:
connection = engine.raw_connection()
cursor = connection.cursor()
cursor.execute("EXEC sp_doLoaderStuff")
connection.commit()
connection.close()
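If you would rather stay at the SQLAlchemy level instead of dropping to the raw DBAPI connection, a sketch of an alternative (not the poster's solution): run the call inside engine.begin(), which opens a transaction and commits it when the block exits cleanly. The connection string here is a placeholder.

from sqlalchemy import create_engine, text

engine = create_engine("mssql+pyodbc://user:password@my_dsn")  # placeholder connection string

# engine.begin() wraps the block in a transaction and commits on successful exit.
with engine.begin() as connection:
    connection.execute(text("EXEC sp_doLoaderStuff"))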

SQLAlchemy get every row the matches query and loop through them

I'm new to Python and SQLAlchemy. I've been playing about with retrieving things from the database, and it's worked every time, but I'm a little unsure what to do when the select statement will return multiple rows. I tried using some older code that worked before I started using SQLAlchemy, but db is a SQLAlchemy object and doesn't have the execute() method.
application = Applications.query.filter_by(brochureID=brochure.id)
cur = db.execute(application)
entries = cur.fetchall()
and then in my HTML file
{% for entry in entries %}
var getEmail = {{entry.2|tojson|safe}}
emailArray.push(getEmail);
I looked in the SQLAlchemy documentation and I couldn't find a .first() equivalent to getting all the rows. Can anyone point me in the right direction? No doubt it's something very small.
Your query is correct, you just need to change the way you interact with the result. The method you are looking for is all().
application = Applications.query.filter_by(brochureID=brochure.id)
entries = application.all()
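With .all() the entries are model instances, so you read attributes by name rather than by index. A small sketch; the email column name is an assumption about the Applications model:

application = Applications.query.filter_by(brochureID=brochure.id)
entries = application.all()

for entry in entries:
    print(entry.email)  # hypothetical column; use whichever field you actually need

In the template you would then write {{ entry.email }} inside the {% for entry in entries %} loop instead of {{ entry.2 }}.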
The usual way to work with ORM queries is through the Session class; somewhere you should have a
engine = sqlalchemy.create_engine("sqlite:///...")
Session = sqlalchemy.orm.sessionmaker(bind=engine)
I'm not familiar with Flask, but it likely does some of this work for you.
With a Session factory, your application is instead
session = Session()
entries = session.query(Application) \
    .filter_by(...) \
    .all()

web2py webserver - Best way to keep connection to external SQL server?

I have a simple web2py server that we use to visualize data from our PostgreSQL Server. The following functions are all part of the global models in web2py.
The current solution for fetching data is very simple: every time, I open a connection, and after I get the data I close it:
# Old way:
# (imports excluded)
def get_data(query):
    postgres_connection = psycopg2.connect("credentials")
    df = psql.frame_query(query, con=postgres_connection)  # Pandas function to put data from query into a DataFrame
    postgres_connection.close()
    return df
For small queries, opening and closing the connection takes about 9/10 of the time it takes to run the function.
Is this a good way to do it instead? If not, what is a better way?
# Better way?
def connect():
    """
    Create a connection to the server.
    """
    return psycopg2.connect("credentials")

db_connection = connect()

def create_pandas_frame(query):
    """
    Run the query if the connection is open.
    """
    return psql.frame_query(query, con=db_connection)

def get_data(query):
    """
    Try to get data; open a new connection if the connection is closed.
    """
    try:
        data = create_pandas_frame(query)
    except:
        global db_connection
        db_connection = connect()
        data = create_pandas_frame(query)
    return data
If you run that code in a web2py model file, you'll end up creating a new connection on each HTTP request anyway. Instead, you might consider connection pooling.
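A minimal sketch of that idea using psycopg2's built-in pool. For it to actually persist across requests, it would need to live in a module imported from the model file (web2py re-executes model files on every request); the credentials string and pool sizes are placeholders:

from psycopg2.pool import ThreadedConnectionPool
import pandas.io.sql as psql

# Created once when the module is first imported, then reused by later requests.
pg_pool = ThreadedConnectionPool(minconn=1, maxconn=10, dsn="credentials")

def get_data(query):
    conn = pg_pool.getconn()
    try:
        return psql.frame_query(query, con=conn)
    finally:
        pg_pool.putconn(conn)  # hand the connection back to the pool instead of closing it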
An easier option might be to use the web2py DAL to fetch the data. Something like:
from pandas.core.api import DataFrame
db = DAL([db connection string], pool_size=10, migrate_enabled=False)
rows = db.executesql(query)
data = DataFrame.from_records(rows, columns=[list, of, column, names])
If you specify the pool_size argument to DAL(), it will automatically maintain a connection pool to be used across requests.
Note, I haven't tried this, so it may need some tweaking, but something along these lines should work.
If you'd like, you can even use the DAL to generate the SQL by defining database table models:
db.define_table('mytable',
    Field('field1', 'integer'),
    Field('field2', 'double'),
    Field('field3', 'boolean'))

rows = db.executesql(db(db.mytable.id > 0)._select())
data = DataFrame.from_records(rows, columns=db.mytable.fields)
The ._select() method just generates the SQL without actually doing the select. The SQL is then passed to .executesql() to fetch the data.
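For illustration, _select() just returns the SQL string, so you can inspect what will be sent before handing it to executesql() (the SQL shown in the comment is only indicative):

sql = db(db.mytable.id > 0)._select()
print(sql)  # e.g. SELECT mytable.id, mytable.field1, ... FROM mytable WHERE (mytable.id > 0);
rows = db.executesql(sql)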
An alternative is to create a special Pandas processor and pass it as the processor argument to .select().
def pandas_processor(rows, fields, columns, cacheable):
    return DataFrame.from_records(rows, columns=columns)

data = db(db.mytable.id > 0).select(processor=pandas_processor)
I used Anthony's answer and now have functions that look like this:
# In one of the models files.
from pandas.core.api import DataFrame

external_db = DAL('postgres://connection_stuff', pool_size=10, migrate_enabled=False)

def create_simple_html_table(query):
    dict_from_db = external_db.executesql(query, as_dict=True)
    return DataFrame(dict_from_db).to_html()
Then, later in a controller or a view, an HTML table is created using:
# In Controller:
my_table = create_simple_html_table('select * from random_table limit 50')
# In View:
{{=XML(create_simple_html_table('select * from random_table limit 50'))}}
I still need to do more testing, but my understanding so far is that this solution lets me query the external db while web2py keeps the connection open and reuses the same connection for all users.
Note that this solution is only good if all you want to do is read from and write to your Postgres server with raw SQL.
If you want to use the DAL to read and write, you either need to try the DAL alternative called MyDAL or play around with the search_path option in Postgres.
