Loop SQL Query Over Dictionary - python

I have a SQL query that uses the first and last days of calendar months to generate a subset of data for a given month. I have been trying to figure out how to loop it over a number of months and store all the results in one dataframe, and I am struggling badly. I have the dates in two lists (one for first days, one for last days), two tuples (same), and a dictionary (first days as keys, last days as values).
I can loop and get all the data if I am only using one list or tuple, but if I try to use two, it simply does not work. Is there a way to do what I am trying to do?
fd=['2018-05-01','2018-06-01','2018-07-01']
ld=['2018-05-31','2018-06-30','2018-07-31']
my_dict=dict(zip(fd, ld))
data_check=pd.DataFrame()
fd_d=','.join(my_dict.keys())
ed_d=','.join(['%%(%s)s' % x for x in my_dict])
query= """
SELECT count(distinct ids),first_date, last_date from table1
where first_date=%s and last_date =%s
group by 2,3
"""
for x in my_dict:
    df = pd.read_sql(query % (fd_d, ed_d), my_dict)
    data_check = data_check.append(df)

In general, please heed three best practices:
Avoid the quadratic copy of using DataFrame.append in a loop. Instead, build a list of data frames to be concatenated once outside the loop.
Use parameterization, which pandas' read_sql supports, rather than string concatenation. This avoids the need to string-format values and punctuate them with quotes.
Discontinue using the modulo operator, %, for string formatting, as it is de-emphasised (though not officially deprecated). Instead, use str.format.
Specifically, for your needs iterate elementwise between two lists using zip without layering it in a dictionary:
query= """SELECT count(distinct ids), first_date, last_date
FROM table1
WHERE first_date = %s and last_date = %s
GROUP BY 2, 3"""
df_list = []
for f, l in zip(fd, ld):
    df = pd.read_sql(query, conn, params=[f, l])
    df_list.append(df)
final_df = pd.concat(df_list)
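Here conn is assumed to be an open database connection, which the answer above does not create. A minimal sketch of the full flow, assuming a PostgreSQL database reached through psycopg2 (pandas also accepts a SQLAlchemy engine here), showing that the same pattern works if you prefer to keep the dictionary:
import pandas as pd
import psycopg2  # assumption: PostgreSQL; any DB-API driver using %s placeholders works the same way

conn = psycopg2.connect(host="localhost", dbname="mydb", user="me", password="secret")  # hypothetical credentials

fd = ['2018-05-01', '2018-06-01', '2018-07-01']
ld = ['2018-05-31', '2018-06-30', '2018-07-31']

df_list = []
for f, l in dict(zip(fd, ld)).items():  # equivalent to zip(fd, ld) if you keep the dictionary
    df_list.append(pd.read_sql(query, conn, params=[f, l]))
final_df = pd.concat(df_list)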
Alternatively, avoid the loop and parameters altogether by aggregating on the first and last days of every month in the table:
query= """SELECT count(distinct ids), first_date, last_date
FROM table1
WHERE DATE_PART(d, first_date) = 1
AND last_date = LAST_DAY(first_date)
GROUP BY 2, 3
ORDER BY 2, 3"""
final_df = pd.read_sql(query, conn)

Related

Using function output in SQLAlchemy join clause

I am trying to translate a fairly short bit of SQL into a SQLAlchemy ORM query. The SQL uses Postgres's generate_series to make a set of dates, and my goal is to make a set of time series arrays categorized by one of the columns.
The tables (simplified) are very simple:
counts:
-----------------
count (Integer)
day (Date)
placeID (foreign key related to places)
"counts_pkey" PRIMARY KEY (day, placeID)
places:
-----------------
id
name (varchar)
The output I'm after is a time series of counts for each place including null values when counts are not reported for a day. For example, this would correspond to a series over four days:
array_agg | name
-----------------+-------------------
{NULL,0,7,NULL} | A Place
{NULL,1,NULL,2} | Some other place
{5,NULL,3,NULL} | Yet another
I can do this fairly easily by taking a CROSS JOIN on a date range and places and joining that with the counts:
SELECT array_agg(counts.count), places.name
FROM generate_series('2018-11-01', '2018-11-04', interval '1 days') as day
CROSS JOIN places
LEFT OUTER JOIN counts on counts.day = day.day AND counts.PlaceID = places.id
GROUP BY places.name;
What I can't seem to figure out is how to get SQLAlchemy to do this. After a lot of digging, I found an old Google Groups thread which almost works, leading to this:
date_list = select([column('generate_series')])\
    .select_from(func.generate_series(backthen, today, '1 day'))\
    .alias('date_list')
time_series = db.session.query(Place.name, func.array_agg(Count.count))\
    .select_from(date_list)\
    .outerjoin(Count, (Count.day == date_list.c.generate_series) & (Count.placeID == Place.id))\
    .group_by(Place.name)
This creates a sub-select for the time series, but it produces a database error:
There is an entry for table "places", but it cannot be referenced from this part of the query.
So my question is: how would you do this in SQLAlchemy? Also, I'm open to the idea that this is difficult because my approach with the SQL is bone-headed.
The problem is that, given the query construct, SQLAlchemy produces a query along the lines of
SELECT ...
FROM places,
(...) AS date_list LEFT OUTER JOIN count ON ... AND count."placeID" = places.id
...
There are 2 FROM-list items: places and the join. Items cannot cross-reference each other¹, hence the error due to places.id in the ON clause.
SQLAlchemy does not support explicit CROSS JOIN, but on the other hand a CROSS JOIN is equivalent to an INNER JOIN ON (TRUE). You could also omit wrapping the function expression in a subquery and use it as is by giving it an alias:
date_list = func.generate_series(backthen, today, '1 day').alias('gen_day')
time_series = session.query(Place.name, func.array_agg(Count.count))\
    .join(date_list, true())\
    .outerjoin(Count, (Count.day == column('gen_day')) &
               (Count.placeID == Place.id))\
    .group_by(Place.name)
¹ Except function-call FROM-items, or when using LATERAL.
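For reference, a minimal sketch of the imports and bounds the snippets above rely on (SQLAlchemy 1.x-style names; the actual date range is an assumption):
from datetime import date, timedelta

from sqlalchemy import column, func, select, true  # select is only needed for the subquery variant shown earlier

# 'backthen' and 'today' bound the generated series, as in the original query (values assumed)
today = date.today()
backthen = today - timedelta(days=3)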

Running SQL in Python and apply parameters from Python Dataframe

I'm loading some data from a SQL database into Python, but I need to apply some criteria from a Python dataframe. To simplify, see the example below:
some_sql = """
select column1,columns2
from table
where a between '{}' and '{}'
or a between '{}' and '{}'
or a between '{}' and '{}'
""".format(date1,date2,date3,date4,date5,date6)
date1, date2, date3, date4, date5, date6 are sourced from a Python dataframe. I can manually specify all 6 parameters, but in fact I have over 20...
df = DataFrame({'col1':['date1','date3','date5'],
                'col2':['date2','date4','date6']})
Is there a way I can do a loop here to be more efficient?
Setup
# Create a dummy dataframe
df = pd.DataFrame({'col1':['date1','date3','date5'],
                   'col2':['date2','date4','date6']})
# Prepare the SQL (conditions will be added later)
some_sql = """
select column1,columns2
from table
where """
First approach
conditions = []
for row in df.iterrows():
    # Ignore the index
    data = row[1]
    conditions.append(f"a between '{data['col1']}' and '{data['col2']}'")
some_sql += '\nor '.join(conditions)
By using iterrows() we can iterate through the dataframe, row by row.
Alternative
some_sql += '\nor '.join(df.apply(lambda x: f"a between '{x['col1']}' and '{x['col2']}'", axis=1).tolist())
Using apply() should be faster than iterrows():
Although apply() also inherently loops through rows, it does so much
more efficiently than iterrows() by taking advantage of a number of
internal optimizations, such as using iterators in Cython.
source
Another alternative
some_sql += '\nor '.join([f"a between '{row['col1']}' and '{row['col2']}'" for row in df.to_dict('records')])
This converts the dataframe to a list of dicts, and then applies a list comprehension to create the conditions.
Result
select column1,columns2
from table
where a between 'date1' and 'date2'
or a between 'date3' and 'date4'
or a between 'date5' and 'date6'
As a secondary note to Kristof's answer above: even as an analyst, one should be careful about things like SQL injection, so inlining data is something to be avoided.
If possible, you should define your query once with placeholders and then create a param list to go with the placeholders. This also saves on the formatting.
So in your case your query looks like:
some_sql = """
select column1,columns2
from table
where a between ? and ?
or a between ? and ?
or a between ? and ?
And our param list generation is going to look like:
conditions = []
for row in df.iterrows():
    # Ignore the index
    data = row[1]
    conditions.append(data['col1'])
    conditions.append(data['col2'])
Then execute your SQL with placeholder syntax and params list as placeholders.
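Putting the pieces together, execution could then look roughly like this (a sketch assuming a driver that uses ? placeholders, such as sqlite3 or pyodbc; psycopg2 and MySQL drivers use %s instead):
import sqlite3  # assumption: sqlite3 chosen only so the sketch is self-contained

import pandas as pd

conn = sqlite3.connect("example.db")  # hypothetical database file

# 'conditions' is the flat list of values built above, one per ? placeholder
result_df = pd.read_sql(some_sql, conn, params=conditions)

# or, with a bare cursor instead of pandas:
cur = conn.cursor()
cur.execute(some_sql, conditions)
rows = cur.fetchall()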

Why is `for...in` returning a tuple when trying to iterate through rows returned by query?

I select 1 column from a table in a database. I want to iterate through each of the results. Why is it that when I do this, it's a tuple instead of a single value?
con = psycopg2.connect(…)
cur = con.cursor()
stmt = "SELECT DISTINCT inventory_pkg FROM {}.{} WHERE inventory_pkg IS NOT NULL;".format(schema, tableName)
cur.execute(stmt)
con.commit()
referenced = cur.fetchall()
for destTbl in referenced:  # why is destTbl a single element tuple?
    print('destTbl: ' + str(referenced))
    stmt = "SELECT attr_name, attr_rule FROM {}.{} WHERE ppm_table_name = {};".format(schema, tableName, destTbl)  # this fails because the where clause gets messed up, because 'destTbl' has a comma after it
    cur.execute(stmt)
Because that's what the db api does: always returns a tuple for each row in the result.
It's pretty simple to refer to destTbl[0] wherever you need to.
Because you are getting rows from your database, and the API is being consistent.
If your query asked for * columns, or a specific number of columns that is greater than 1, you'd also need a tuple or list to hold those columns for each row.
In other words, just because you only have one column in this query doesn't mean the API suddenly will change what kind of object it returns to model a row.
Simply always treat a row as a sequence and use indexing or tuple assignment to get a specific value out. Use:
inventory_pkg = destTbl[0]
or
inventory_pkg, = destTbl
for example.
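Putting that back into the question's loop, a minimal sketch (keeping psycopg2 and letting the driver quote the value instead of formatting it into the string; the column and table names are the ones from the question):
for (inventory_pkg,) in referenced:  # unpack the single-element tuple directly
    print('destTbl: ' + str(inventory_pkg))
    stmt = "SELECT attr_name, attr_rule FROM {}.{} WHERE ppm_table_name = %s;".format(schema, tableName)
    cur.execute(stmt, (inventory_pkg,))  # psycopg2 fills in %s and quotes the value safely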

Add MySQL query results to R dataframe

I want to convert a MySQL query from a Python script to an analogous query in R. The Python script uses a loop structure to search for specific values using genomic coordinates:
SQL = """SELECT value FROM %s FORCE INDEX (chrs) FORCE INDEX (sites)
WHERE `chrom` = %d AND `site` = %d""" % (Table, Chr, Start)
cur.execute(SQL)
In R, the chromosomes and sites are in a dataframe, and for every row in the dataframe I would like to extract a single value and add it to a new column in the dataframe.
So my current dataframe has a similar structure to the following:
df <- data.frame("Chr"=c(1,1,3,5,5), "Site"=c(100, 200, 400, 100, 300))
The amended dataframe should have an additional column with values from the database (at the corresponding genomic coordinates). The structure should be similar to:
df <- data.frame("Chr"=c(1,1,3,5,5), "Site"=c(100, 200, 400, 100, 300), "Value"=c(1.5, 0, 5, 60, 100))
So far I connected to the database using:
con <- dbConnect(MySQL(),
                 user="root", password="",
                 dbname="MyDataBase")
Rather than loop over each row in my dataframe, I would like to use something that would add the corresponding value to a new column in the existing dataframe.
Update with working solution based on answer below:
library(RMySQL)
library(plyr)  # needed for ldply() below
con <- dbConnect(MySQL(),
                 user="root", password="",
                 dbname="MyDataBase")
GetValue <- function(DataFrame, Table){
  queries <- sprintf("SELECT value as value
    FROM %s FORCE INDEX (chrs) FORCE INDEX (sites)
    WHERE chrom = %d AND site = %d UNION ALL SELECT 'NA' LIMIT 1", Table, DataFrame$Chr, DataFrame$start)
  res <- ldply(queries, function(query) { dbGetQuery(con, query) })
  DataFrame[, Table] <- res$value
  return(DataFrame)
}
df <- GetValue(df, "TableName")
Maybe you could do something like this. First, build up your queries, then execute them, storing the results in a column of your dataframe. Not sure if the do.call("rbind", ...) part is necessary, but that basically takes a bunch of dataframe rows and squishes them together by row into a dataframe.
queries=sprintf("SELECT value as value FROM %s FORCE INDEX (chrs) FORCE INDEX (sites) WHERE chrom = %d AND site = %d UNION ALL SELECT 0 LIMIT 1", "TableName", df$Chrom, df$Pos)
df$Value = do.call("rbind", lapply(queries, function(query) dbGetQuery(mydb, query)))$value
I played with your SQL a little; my concern with the original is cases where it might return more than one row.
I like the data.table package for this kind of task, as its syntax is inspired by SQL.
require(data.table)
So an example database to match the values to a table
table <- data.table(chrom=rep(1:5, each=5),
                    site=rep(100*1:5, times=5),
                    Value=runif(5*5))
Now the SQL query can be translated into something like
# select from table, where chrom=Chr and site=Site, value
Chr <- 2
Site <- 200
table[chrom==Chr & site==Site, Value] # returns data.table
table[chrom==Chr & site==Site, ]$Value # returns numeric
Key (index) the table for quick lookup (assuming chrom and site are unique):
setkey(table, chrom, site)
table[J(Chr, Site), ]$Value # very fast lookup due to indexed table
Your dataframe as a data.table with two integer columns, 'Chr' and 'Site':
df <- data.frame("Chr"=c(1,1,3,5,5), "Site"=c(100, 200, 400, 100, 300))
dt <- as.data.table(df) # adds data.table class to data.frame
setkey(dt, Chr, Site) # index for 'by' and for 'J' join
Match the values and append them in a new column (by reference, so no copying of the table):
# loop over keys Chr and Site and find the match in the table
# select the Value column and create a new column that contains this
dt[, Value:=table[chrom==Chr & site==Site]$Value, by=list(Chr, Site)]
# faster:
dt[, Value:=table[J(Chr, Site)]$Value, by=list(Chr, Site)]
# fastest: in one table merge operation assuming the keys are in the same order
table[J(dt)]
Why don't you use the RMySQL or sqldf package?
With RMySQL, you get MySQL access in R.
With sqldf, you can issue SQL queries on R data structures.
Using either of those, you do not need to reword your SQL query to get the same results.
Let me also mention the data.table package, which lets you do very efficient selects and joins on your data frames after converting them to data tables using as.data.table(your.data.frame). Another good thing about it is that a data.table object is a data.frame at the same time, so all your functions that work on the data frames work on these converted objects, too.
You could easily use the dplyr package. There is even a nice vignette about that: http://cran.rstudio.com/web/packages/dplyr/vignettes/databases.html.
One thing you need to know is:
You can connect to MySQL and MariaDB (a recent fork of MySQL) through
src_mysql(), mediated by the RMySQL package. Like PostgreSQL, you'll
need to provide a dbname, username, password, host, and port.

SELECT * in SQLAlchemy?

Is it possible to do SELECT * in SQLAlchemy?
Specifically, SELECT * WHERE foo=1?
Is no one feeling the ORM love of SQLAlchemy today? The presented answers correctly describe the lower-level interface that SQLAlchemy provides. Just for completeness, this is the more-likely (for me) real-world situation where you have a session instance and a User class that is ORM mapped to the users table.
for user in session.query(User).filter_by(name='jack'):
    print(user)
    # ...
And this does an explicit select on all columns.
The following selection works for me in the core expression language (returning a RowProxy object):
foo_col = sqlalchemy.sql.column('foo')
s = sqlalchemy.sql.select(['*']).where(foo_col == 1)
If you don't list any columns, you get all of them.
query = users.select()
query = query.where(users.c.name=='jack')
result = conn.execute(query)
for row in result:
    print(row)
Should work.
You can always use a raw SQL too:
str_sql = sql.text("YOUR STRING SQL")
#if you have some args:
args = {
    'myarg1': yourarg1,
    'myarg2': yourarg2}
#then call the execute method from your connection
results = conn.execute(str_sql,args).fetchall()
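For instance, with named bind parameters (the table, columns, and values here are made up purely for illustration):
from sqlalchemy import sql

str_sql = sql.text("SELECT col_a, col_b FROM some_table WHERE col_a BETWEEN :start AND :end")  # hypothetical table/columns
args = {'start': '2018-05-01', 'end': '2018-05-31'}
results = conn.execute(str_sql, args).fetchall()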
Where Bar is the class mapped to your table and session is your sa session:
bars = session.query(Bar).filter(Bar.foo == 1)
Turns out you can do:
sa.select('*', ...)
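A short sketch of that in context (SQLAlchemy 1.4+ Core; the table name is an assumption, and text('*') is the more explicit spelling of the same idea):
import sqlalchemy as sa

# select every column from a lightweight table construct, filtered on foo
stmt = sa.select(sa.text('*')).select_from(sa.table('users')).where(sa.column('foo') == 1)
# rows = conn.execute(stmt).fetchall()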
I had the same issue: I was trying to get all columns from a table as a list instead of getting ORM objects back, so that I could convert that list to a pandas dataframe and display it.
What works is to use .c on a subquery or cte as follows:
U = select(User).cte('U')
stmt = select(*U.c)
rows = session.execute(stmt)
Then you get a list of tuples with each column.
Another option is to use __table__.columns in the same way:
stmt = select(*User.__table__.columns)
rows = session.execute(stmt)
In case you want to convert the results to a dataframe, here is a one-liner:
pd.DataFrame.from_records(rows, columns=rows.keys())
or, building the records explicitly:
pd.DataFrame.from_records(dict(zip(r.keys(), r)) for r in rows)
For joins, if columns are not defined manually, only the columns of the target table are returned. To get all columns for joins (User table joined with Group table):
sql = User.select(from_obj(Group, User.c.group_id == Group.c.id))
# Add all columns of Group table to select
sql = sql.column(Group)
session.connection().execute(sql)
If you're using the ORM, you can build a query using the normal ORM constructs and then execute it directly to get raw column values:
query = session.query(User).filter_by(name='jack')
for cols in session.connection().execute(query):
    print(cols)
every_column = User.__table__.columns
records = session.query(*every_column).filter(User.foo==1).all()
When an ORM class is passed to the query function, e.g. query(User), the result will be composed of ORM instances. In the majority of cases, this is what the dev wants and will be easiest to deal with, as demonstrated by the popularity of the answer above that corresponds to this approach.
In some cases, devs may instead want an iterable sequence of values. In these cases, one can pass the list of desired column objects to query(). This answer shows how to pass the entire list of columns without hardcoding them, while still working with SQLAlchemy at the ORM layer.
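As a follow-up on the pandas goal mentioned above, a small sketch (column labels taken from the same __table__.columns collection, so nothing is hardcoded):
import pandas as pd

every_column = User.__table__.columns
records = session.query(*every_column).filter(User.foo == 1).all()
df = pd.DataFrame(records, columns=[c.key for c in every_column])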
