Iterating over query results from SQLAlchemy - Python

I have a SQLAlchemy query function like this:
def foo():
    local_session = Session()
    results = local_session.query(T.x, T.y, T.z, T.a, T.b, T.c,
                                  T.d, T.e, T.f, T.g, T.h, T.i,
                                  T.j, T.k, T.l, T.m, T.n, T.o,
                                  T.p, T.q, T.r, T.s, T.t, T.u,
                                  T.v, User.gender).join(User)\
        .filter(T.language == 'en', T.where_i_am_from == 'US',
                User.some_num >= 0.9).limit(1000000)
    local_session.close()
    return results, results.count()
The query works fine, and then I call this function here:
def fubar():
    raw_data, raw_data_length = myModule.foo()
    df = pd.DataFrame()
    for each in raw_data:
        df = df.append(pd.DataFrame({...}))  # add each.x etc. to df
    return df
The issue is that it won't iterate over the "for each in raw_data" loop when my foo query has a .limit above 5000, uses .all(), or has no limit at all. The program just hangs and does nothing (0% CPU usage). I've tested this both on my local SQL server and on my Amazon one. When I run the SQL directly on the database, I get back around 800,000 rows. Why is this happening?
I'm using the latest MySQL and the latest SQLAlchemy.

This may be a MySQL driver problem. I would do the following, in order:
Run Python with the -v flag, like python -v yourprogram.py. This has the potential of showing you where the program gets stuck.
Get those 800,000 results and stick them in SQLite with tables in an equivalent schema. That's relatively cheap to do; all you have to do afterwards is change the SQLAlchemy database string. This would show you whether the problem lies with the driver or in your code.
You're doing a join between two classes (T, User) - do an eager load instead of the default lazy load. If you have 800,000 rows and are doing a lazy join, that may be the problem. Add a joinedload (eagerload in earlier versions of SQLAlchemy) to the query options.
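As a rough sketch, the eager-load variant could look like the following; this assumes T has a relationship attribute to User (T.user is a hypothetical name - adjust it to your model):
from sqlalchemy.orm import joinedload

def foo():
    local_session = Session()
    results = (local_session.query(T)
               .join(User)
               .options(joinedload(T.user))  # fetch the related User rows eagerly, in one query
               .filter(T.language == 'en', T.where_i_am_from == 'US',
                       User.some_num >= 0.9)
               .limit(1000000)
               .all())  # materialize the rows while the session is still open
    local_session.close()
    return results, len(results)
Calling .all() before closing the session also ensures no lazy loads are triggered against a closed session.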

Related

pandas/sqlalchemy/pyodbc: Result object does not return rows from stored proc when UPDATE statement appears before SELECT

I'm using SQL Server 2014, pandas 0.23.4, sqlalchemy 1.2.11, pyodbc 4.0.24, and Python 3.7.0. I have a very simple stored procedure that performs an UPDATE on a table and then a SELECT on it:
CREATE PROCEDURE my_proc_1
    @v2 INT
AS
BEGIN
    UPDATE my_table_1
    SET v2 = @v2;
    SELECT * FROM my_table_1;
END
GO
This runs fine in MS SQL Server Management Studio. However, when I try to invoke it via Python using this code:
import pandas as pd
from sqlalchemy import create_engine

if __name__ == "__main__":
    conn_str = 'mssql+pyodbc://@MODEL_TESTING'
    engine = create_engine(conn_str)
    with engine.connect() as conn:
        df = pd.read_sql_query("EXEC my_proc_1 33", conn)
        print(df)
I get the following error:
sqlalchemy.exc.ResourceClosedError: This result object does not return
rows. It has been closed automatically.
(Please let me know if you want full stack trace, I will update if so)
When I remove the UPDATE from the stored proc, the code runs and the results are returned. Note also that selecting from a table other than the one being updated does not make a difference, I get the same error. Any help is much appreciated.
The issue is that the UPDATE statement is returning a row count, which is a scalar value, and the rows returned by the SELECT statement are "stuck" behind the row count where pyodbc cannot "see" them (without additional machinations).
It is considered a best practice to ensure that our stored procedures always start with a SET NOCOUNT ON; statement to suppress the returning of row count values from DML statements (UPDATE, DELETE, etc.) and allow the stored procedure to just return the rows from the SELECT statement.
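As a minimal sketch, the corrected procedure could look like this (same table and parameter as in the question):
CREATE PROCEDURE my_proc_1
    @v2 INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the UPDATE row count so pyodbc can see the SELECT rows
    UPDATE my_table_1
    SET v2 = @v2;
    SELECT * FROM my_table_1;
END
GO
With that change, the pd.read_sql_query call from the question should return the rows as expected.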
For me, the same error had another cause: I was using SQLAlchemy's newer select() syntax to get the entries of a table, and I had forgotten to pass the table class I wanted the values from, so I got this error. Adding the table class as an argument fixed it.
The code that led to the error:
query = select().where(Assessment.created_by == assessment.created_by)
Simply fix it by adding the table class name - sometimes the issue is only in the syntax:
query = select(Assessment).where(Assessment.created_by == assessment.created_by)

inconsistent results from LIKE query: pyodbc vs. Access

I have a bunch of queries that should be executed in an Access database as part of my Python script. Unfortunately, queries that produce some records when used directly in MS Access return nothing in the Python script (no error either). The connection with the database and the general syntax should be fine, as simple queries (like selecting one column from a table where something) work just fine. Here is the code with one of these queries:
import pyodbc
baza = r"C:\base.mdb"
driver = "{Microsoft Access Driver (*.mdb, *.accdb)}"
access_con_string = r"Driver={};Dbq={};".format(driver, baza)
cnn = pyodbc.connect(access_con_string)
db_cursor = cnn.cursor()
expression = """SELECT F_PARCEL.PARCEL_NR, F_PARCEL_LAND_USE.AREA_USE_CD, F_PARCEL_LAND_USE.SOIL_QUALITY_CD, F_ARODES.TEMP_ADRESS_FOREST, F_SUBAREA.AREA_TYPE_CD, F_AROD_LAND_USE.AROD_LAND_USE_AREA, F_PARCEL.COUNTY_CD, F_PARCEL.DISTRICT_CD, F_PARCEL.MUNICIPALITY_CD, F_PARCEL.COMMUNITY_CD, F_SUBAREA.SUB_AREA
FROM F_PARCEL INNER JOIN (F_PARCEL_LAND_USE INNER JOIN ((F_ARODES INNER JOIN F_AROD_LAND_USE ON F_ARODES.ARODES_INT_NUM = F_AROD_LAND_USE.ARODES_INT_NUM) INNER JOIN F_SUBAREA ON F_ARODES.ARODES_INT_NUM = F_SUBAREA.ARODES_INT_NUM) ON (F_PARCEL_LAND_USE.SHAPE_NR = F_AROD_LAND_USE.SHAPE_NR) AND (F_PARCEL_LAND_USE.PARCEL_INT_NUM = F_AROD_LAND_USE.PARCEL_INT_NUM)) ON F_PARCEL.PARCEL_INT_NUM = F_PARCEL_LAND_USE.PARCEL_INT_NUM
WHERE (((F_ARODES.TEMP_ADRESS_FOREST) Like ?) AND ((F_AROD_LAND_USE.AROD_LAND_USE_AREA)<?) AND ((F_ARODES.TEMP_ACT_ADRESS)= ?))
ORDER BY F_PARCEL.PARCEL_NR, F_PARCEL_LAND_USE.SHAPE_NR;"""
rows = db_cursor.execute(expression, ("14-17-2-03*", 0.0049, True)).fetchall()
for row in rows:
    print row
cnn.close()
I know that these queries were generated by the query builder in MS Access, so I was wondering whether that might account for the differences, but on the other hand it is still an Access database.
Anyway, it seems that the problem is in the SQL, so I would like to know which elements could possibly produce different output between queries executed directly in MS Access and through a pyodbc connection.
You are getting tripped up by the difference in LIKE wildcard characters between queries run in Access itself and queries run from an external application.
When running a query from within Access itself you need to use the asterisk as the wildcard character: "14-17-2-03*".
When running a query from an external application (like your Python app) you need to use the percent sign as the wildcard character: "14-17-2-03%".
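In this case, then, the only change needed is the wildcard in the parameter tuple:
rows = db_cursor.execute(expression, ("14-17-2-03%", 0.0049, True)).fetchall()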

PyMongo cursor operations are very slow

I'm new to both MongoDB and PyMongo, and am having some performance issues regarding cursors.
TL;DR: Any operation I try to perform using a cursor takes about a second.
Long version
I have a small database, which I bulkloaded. Each entry has 3 fields:
dom: domain name (unique)
date: date, YYYYMMDD
flag: string
I've loaded about 1.9 million entries, without incident, and quite quickly.
I created a hash index on the dom field.
Now, I want to grab certain records by the domain field, and update them, using a Python program.
That's where the problem lies.
I'm using the latest MongoDB, and the latest pyMongo.
Stripped-down program:
import pymongo
from pymongo import MongoClient

client = MongoClient()  # connection setup was omitted from the original, stripped-down snippet
db = client.myindexname
posts = db.posts
print list(db.profiles.index_information())  # shows hash index is present

for k in newdomainlist.keys():  # iterate list of domains to check
    ret = posts.find({"dom": k})  # this runs fine, and quickly
    # 'ret' is a cursor
    print ret  # this runs quickly
    # Here's the problem
    print ret.count()  # this takes about a second. why?
If I just 'print ret', the speed is fine. However, if I try to reference anything in the cursor, the speed drops to the floor - I can do about 1 operation per second. In this case, I'm just trying to see if ret.count() returns '0' (we don't have this domain) or '1' (we have it already).
I've tried adding a batch_size(10000) to the find, without it helping.
I DO have the Python C extensions loaded.
What the heck am I doing wrong?
thanks
It turned out that I'd created my hashed index on the wrong collection rather than on 'posts'. Chalk it up to MongoDB inexperience. We can close this one now, or delete it entirely.
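For reference, a minimal sketch of creating the hashed index on the collection that is actually queried (using the names from the snippet above):
import pymongo
from pymongo import MongoClient

client = MongoClient()
posts = client.myindexname.posts
posts.create_index([("dom", pymongo.HASHED)])  # hashed index on the field used in find()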

Slow MySQL queries in Python but fast elsewhere

I'm having a heckuva time dealing with slow MySQL queries in Python. In one area of my application, "load data infile" goes quick. In another area, the select queries are VERY slow.
Executing the same query in PhpMyAdmin AND Navicat (as a second test) yields a response ~5x faster than in Python.
A few notes...
I switched to MySQLdb as the connector and am also using SSCursor. No performance increase.
The database is optimized, indexed etc. I'm porting this application to Python from PHP/Codeigniter where it ran fine (I foolishly thought getting out of PHP would help speed it up)
PHP/Codeigniter executes the select queries swiftly. For example, one key aspect of the application takes ~2 seconds in PHP/Codeigniter, but is taking 10 seconds in Python BEFORE any of the analysis of the data is done.
My link to the database is fairly standard...
dbconn=MySQLdb.connect(host="127.0.0.1",user="*",passwd="*",db="*", cursorclass = MySQLdb.cursors.SSCursor)
Any insights/help/advice would be greatly appreciated!
UPDATE
In terms of fetching/handling the results, I've tried it a few ways. The initial query is fairly standard...
# Run Query
cursor.execute(query)
I removed all of the code within this loop just to make sure it wasn't the bottleneck, and it's not. I put dummy code in its place. The entire process did not speed up at all.
db_results = "test"
# Loop Results
for row in cursor:
    a = 0  # this was the dummy code I put in to test
return db_results
The query result itself is only 501 rows (large amount of columns)... took 0.029 seconds outside of Python. Taking significantly longer than that within Python.
The project is related to horse racing. The query is done within this function. The query itself is long, however, it runs well outside of Python. I commented out the code within the loop on purpose for testing... also the print(query) in hopes of figuring this out.
# Get PPs
def get_pps(race_ids):
    # Comma Race List
    race_list = ','.join(map(str, race_ids))
    # PPs Query
    query = ("SELECT raceindex.race_id, entries.entry_id, entries.prognum, runlines.line_id, runlines.track_code, runlines.race_date, runlines.race_number, runlines.horse_name, runlines.line_date, runlines.line_track, runlines.line_race, runlines.surface, runlines.distance, runlines.starters, runlines.race_grade, runlines.post_position, runlines.c1pos, runlines.c1posn, runlines.c1len, runlines.c2pos, runlines.c2posn, runlines.c2len, runlines.c3pos, runlines.c3posn, runlines.c3len, runlines.c4pos, runlines.c4posn, runlines.c4len, runlines.c5pos, runlines.c5posn, runlines.c5len, runlines.finpos, runlines.finposn, runlines.finlen, runlines.dq, runlines.dh, runlines.dqplace, runlines.beyer, runlines.weight, runlines.comment, runlines.long_comment, runlines.odds, runlines.odds_position, runlines.entries, runlines.track_variant, runlines.speed_rating, runlines.sealed_track, runlines.frac1, runlines.frac2, runlines.frac3, runlines.frac4, runlines.frac5, runlines.frac6, runlines.final_time, charts.raceshape "
             "FROM hrdb_raceindex raceindex "
             "INNER JOIN hrdb_runlines runlines ON runlines.race_date = raceindex.race_date AND runlines.track_code = raceindex.track_code AND runlines.race_number = raceindex.race_number "
             "INNER JOIN hrdb_entries entries ON entries.race_date=runlines.race_date AND entries.track_code=runlines.track_code AND entries.race_number=runlines.race_number AND entries.horse_name=runlines.horse_name "
             "LEFT JOIN hrdb_charts charts ON runlines.line_date = charts.race_date AND runlines.line_track = charts.track_code AND runlines.line_race = charts.race_number "
             "WHERE raceindex.race_id IN (" + race_list + ") "
             "ORDER BY runlines.line_date DESC;")
    print(query)
    # Run Query
    cursor.execute(query)
    # Query Fields
    fields = [i[0] for i in cursor.description]
    # PPs List
    pps = []
    # Loop Results
    for row in cursor:
        a = 0
        #this_pp = {}
        #for i, value in enumerate(row):
        #    this_pp[fields[i]] = value
        #pps.append(this_pp)
    return pps
One final note... I haven't considered the ideal way to handle the result. I believe one cursor allows the result to come back as a set of dictionaries. I haven't even made it to that point yet as the query and return itself is so slow.
Though you have only 501 rows, it looks like you have over 50 columns. How much total data is being passed from MySQL to Python?
501 rows x 55 columns = 27,555 cells returned.
If each cell averaged "only" 1K, that would be close to 27MB of data returned.
To get a sense of how much data MySQL is pushing, you can add this to your query:
SHOW SESSION STATUS LIKE "bytes_sent"
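As a rough sketch, you can check this from the same MySQLdb cursor (Bytes_sent is a standard MySQL session status variable; with SSCursor, be sure the previous result set is fully consumed first):
cursor.execute(query)
rows = cursor.fetchall()  # consume the result set before issuing another statement
cursor.execute('SHOW SESSION STATUS LIKE "Bytes_sent"')
print(cursor.fetchone())  # a ('Bytes_sent', <total bytes sent this session>) tuple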
Is your server well-resourced? Is memory allocation well configured?
My guess is that when you are using PHPMyAdmin you are getting paginated results. This masks the issue of MySQL returning more data than your server can handle (I don't use Navicat, so I'm not sure how that returns results).
Perhaps the Python process is memory-constrained and, when faced with this large result set, it has to page out to disk to handle it.
If you reduce the number of columns called and/or constrain the query to, say, LIMIT 10, do you get improved speed? (A quick way to test this is sketched after this list.)
Can you see if the server running Python is paging to disk when this query is called? Can you see what memory is allocated to Python, how much is used during the process and how that allocation and usage compares to those same values in the PHP version?
Can you allocate more memory to your constrained resource?
Can you reduce the number of columns or rows that are called through pagination or asynchronous loading?
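As a quick, hedged way to try the LIMIT suggestion without touching the rest of the code (this just rewrites the trailing semicolon of the query string from the question):
test_query = query.rstrip(';') + ' LIMIT 10;'
cursor.execute(test_query)
rows = cursor.fetchall()
print(len(rows))  # if this comes back fast, the sheer volume of data is the likely culprit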
I know this is late; however, I have run into similar issues with MySQL and Python. My solution is to run the queries from another language: I use R to make my queries, which is blindingly fast, do what I can in R, and then send the data to Python if need be for more general programming, although R has many general-purpose libraries as well. Just wanted to post something that may help someone who has a similar problem, and I know this sidesteps the heart of the problem.

Join with Python's SQLite module is slower than doing it manually

I am using Python's built-in sqlite3 module to access a database. My query executes a join between a table of 150,000 entries and a table of 40,000 entries; the result contains about 150,000 entries again. If I execute the query in the SQLite Manager it takes a few seconds, but if I execute the same query from Python, it has not finished after a minute. Here is the code I use:
cursor = self._connection.cursor()
annotationList = cursor.execute("SELECT PrimaryId, GOId " +
                                "FROM Proteins, Annotations " +
                                "WHERE Proteins.Id = Annotations.ProteinId")
annotations = defaultdict(list)
for protein, goterm in annotationList:
    annotations[protein].append(goterm)
I did the fetchall just to measure the execution time. Does anyone have an explanation for the huge difference in performance? I am using Python 2.6.1 on Mac OS X 10.6.4.
I implemented the join manually, and this works much faster. The code looks like this:
cursor = self._connection.cursor()
proteinList = cursor.execute("SELECT Id, PrimaryId FROM Proteins ").fetchall()
annotationList = cursor.execute("SELECT ProteinId, GOId FROM Annotations").fetchall()
proteins = dict(proteinList)
annotations = defaultdict(list)
for protein, goterm in annotationList:
    annotations[proteins[protein]].append(goterm)
So when I fetch the tables myself and then do the join in Python, it takes about 2 seconds. The code above takes forever. Am I missing something here?
I tried the same with apsw, and it works just fine (the code does not need to be changed at all); the performance is great. I'm still wondering why this is so slow with the sqlite3 module.
There is a discussion about it here: http://www.mail-archive.com/python-list@python.org/msg253067.html
It seems that there is a performance bottleneck in the sqlite3 module. There is advice on how to make your queries faster:
make sure that you do have indices on the join columns
use pysqlite
You haven't posted the schema of the tables in question, but I think there might be a problem with indexes, specifically not having an index on Proteins.Id or Annotations.ProteinId (or both).
Create the SQLite indexes like this
CREATE INDEX IF NOT EXISTS index_Proteins_Id ON Proteins (Id)
CREATE INDEX IF NOT EXISTS index_Annotations_ProteinId ON Annotations (ProteinId)
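A minimal sketch of running those statements through the sqlite3 module itself (the database path here is hypothetical):
import sqlite3

conn = sqlite3.connect("proteins.db")  # hypothetical path to the database file
cur = conn.cursor()
cur.execute("CREATE INDEX IF NOT EXISTS index_Proteins_Id ON Proteins (Id)")
cur.execute("CREATE INDEX IF NOT EXISTS index_Annotations_ProteinId ON Annotations (ProteinId)")
conn.commit()
conn.close()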
I wanted to update this because I am noticing the same issue and it is now 2022...
In my own application I am using python3 and sqlite3 to do some data wrangling on large databases (>100000 rows * >200 columns). In particular, I have noticed that my 3 table inner join clocks in around ~12 minutes of run time in python, whereas running the same join query in sqlite3 from the CLI runs in ~100 seconds. All the join predicates are properly indexed and the EXPLAIN QUERY PLAN indicates that the added time is most likely because I am using SELECT *, which is a necessary evil in my particular context.
The performance discrepancy caused me to pull my hair out all night until I realized there is a quick fix from here: Running a Sqlite3 Script from Command Line. This is definitely a workaround at best, but I have research due so this is my fix.
Write out the query to a .sql file (I am using f-strings to pass variables in, so I used an example with {foo} here):
fi = open("filename.sql", "w")
fi.write(f"CREATE TABLE {foo} AS SELECT * FROM Table1 INNER JOIN Table2 ON Table2.KeyColumn = Table1.KeyColumn INNER JOIN Table3 ON Table3.KeyColumn = Table1.KeyColumn;")
fi.close()
Run os.system from inside Python and send the .sql file to sqlite3:
import os
os.system(f"sqlite3 {database} < filename.sql")
Make sure you close any open connection before running this so you don't end up locked out, and note that you'll have to re-instantiate any connection objects afterward if you're going back to working with SQLite from within Python.
Hope this helps and if anyone has figured the source of this out, please link to it!
