New to Python and pandas, I'm facing the following issue:
I would like to pass multiple strings into a SQL query and am struggling to insert the ',' delimiter:
Example data
import pandas as pd
data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data,columns=['Name','Age'])
print (df)
# Remove header (Not sure whether that is necessary)
df.columns = df.iloc[0]
pd.read_sql(
    """
    SELECT *
    FROM emptable
    WHERE empID IN ('{}',)
    """.format(df.ix[:, 0]),  # which corresponds to 'Alex','Bob','Clarke'
    con=connection)
I tried different combinations, but none of them worked.
Demo:
sql_ = """
SELECT *
FROM emptable
WHERE empID IN ({})
"""
sql = sql_.format(','.join(['?'] * len(df)))
print(sql)
new = pd.read_sql(sql, conn, params=tuple(df['Name']))
Output:
In [166]: print(sql)
SELECT *
FROM emptable
WHERE empID IN (?,?,?)
NOTE: this approach will not work if your DF is large, because the generated SQL string would be too big.
In this case you can save/dump Names in a helper temporary table and use it in SQL:
df[['Name']].to_sql('tmp', conn, if_exists='replace')
sql = """
SELECT *
FROM emptable
WHERE empID IN (select Name from tmp)
"""
new = pd.read_sql(sql, conn)
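If the helper table should not stick around, it can be dropped afterwards. A small optional cleanup, assuming the same tmp name and a DB-API connection as above:

cur = conn.cursor()
cur.execute('DROP TABLE IF EXISTS tmp')  # remove the helper table
conn.commit()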
I'm using pd.read_sql_query to extract data from Microsoft SQL Server in Python.
However, when I try to filter a specified row from the query result, it shows blank / no result.
sql = pd.read_sql_query('''
SELECT * FROM table
''', conn)
df = pd.DataFrame(sql)
from datetime import datetime

given = '02/03/2022'
given = datetime.strptime(given, '%d/%m/%Y')
dff = df[df['Date'] == given]
print(dff)
What is the issue?
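A likely cause (an assumption, since the column dtypes are not shown): the Date column comes back from SQL Server as strings, and comparing strings against a datetime object matches nothing. A minimal sketch of the usual fix:

# Assumption: df['Date'] arrived as text; parse it before comparing
df['Date'] = pd.to_datetime(df['Date'])
dff = df[df['Date'] == given]
print(dff)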
I want to get a db table into a pandas df in Python. I use the following code:
self.cursor = self.connection.cursor()
query = """
SELECT * FROM `an_visit` AS `visit`
JOIN `an_ip` AS `ip` ON (`visit`.`ip_id` = `ip`.`ip_id`)
JOIN `an_useragent` AS `useragent` ON (`visit`.`useragent_id` = `useragent`.`useragent_id`)
JOIN `an_pageview` AS `pageview` ON (`visit`.`visit_id` = `pageview`.`visit_id`)
WHERE `visit`.`visit_id` BETWEEN %s AND %s
"""
self.cursor.execute(query, (start_id, end_id))
df = pd.DataFrame(self.cursor.fetchall())
This code works, but I want to get the column names as well. I tried the approach from the question MySQL: Get column name or alias from query,
but it did not work:
fields = map(lambda x: x[0], self.cursor.description)
result = [dict(zip(fields, row)) for row in self.cursor.fetchall()]
How can I get the column names from the db into the df? Thanks
The easy way to include column names in the recordset is to set dictionary=True, as follows:
self.cursor = self.connection.cursor(dictionary=True)
Then fetchone(), fetchmany(), and fetchall() all return dictionaries with column names and data.
Check out these links:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursordict.html
https://mariadb-corporation.github.io/mariadb-connector-python/connection.html
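With a dictionary cursor the DataFrame picks up the column names automatically. A minimal sketch, assuming a mysql.connector connection object named connection and the an_visit table from the question:

cursor = connection.cursor(dictionary=True)
cursor.execute("SELECT * FROM an_visit")
# each row is a dict keyed by column name, so pandas infers the columns
df = pd.DataFrame(cursor.fetchall())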
What worked for me is:
field_names = [i[0] for i in self.cursor.description ]
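Those names can then be passed straight into the DataFrame constructor, for example:

# cursor.description is available after execute(); pair it with fetchall()
df = pd.DataFrame(self.cursor.fetchall(), columns=field_names)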
The best practice to list all the columns in the database is to execute this query from the connection cursor:
SELECT TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,COLUMN_NAME,DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA='<schema>' AND TABLE_NAME = '<table_name>'
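Run through pandas, that query might look like the sketch below; the %s placeholder style assumes a MySQL-flavored driver, and the schema/table values are placeholders:

query = """
    SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s
"""
df_cols = pd.read_sql(query, connection, params=('my_schema', 'an_visit'))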
There is a column_names property on the MySQL cursor that you can use:
row = dict(zip(self.cursor.column_names, self.cursor.fetchone()))
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-column-names.html
I'm using pd.read_sql with SQL Server to get the column names of a specific table. However, pandas only returns an empty DataFrame, whereas I get valid results from SSMS. Here is the code I'm using:
query = f'''select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'Table_name'
'''
df_output = pd.read_sql(query, connection)
The result is as follows:
Empty DataFrame
Columns: [TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION, COLUMN_DEFAULT, IS_NULLABLE, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, CHARACTER_OCTET_LENGTH, NUMERIC_PRECISION, NUMERIC_PRECISION_RADIX, NUMERIC_SCALE, DATETIME_PRECISION, CHARACTER_SET_CATALOG, CHARACTER_SET_SCHEMA, CHARACTER_SET_NAME, COLLATION_CATALOG, COLLATION_SCHEMA, COLLATION_NAME, DOMAIN_CATALOG, DOMAIN_SCHEMA, DOMAIN_NAME]
Index: []
Thanks in advance
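One common cause (an assumption, since the connection setup is not shown): INFORMATION_SCHEMA.COLUMNS is scoped per database, so if the connection's default database differs from the one open in SSMS, the query legitimately returns no rows. A quick check:

# DB_NAME() reports which database the connection is actually using
print(pd.read_sql('SELECT DB_NAME() AS current_db', connection))

# qualifying the database explicitly ('MyDatabase' is a placeholder)
query = """
    SELECT COLUMN_NAME
    FROM MyDatabase.INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'Table_name'
"""
print(pd.read_sql(query, connection))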
Experts,
I am struggling to find an efficient way to work with pandas and sqlite.
I am building a tool that lets users:
- extract part of a SQL database (sub_table) based on some filters
- change part of sub_table
- upload the changed sub_table back to the overall SQL table, replacing old values
Users will only see Excel data (so I need to write back and forth to Excel, which is not part of my example as it is out of scope).
Users can:
- replace existing rows (entries) with new data
- delete existing rows
- add new rows
Question: how can I most efficiently do this "replace/delete/add" using Pandas / sqlite3?
Here is my example code. If I use df_sub.to_sql("MyTable", con = conn, index = False, if_exists="replace") at the bottom, then obviously the entire table is replaced... so there must be another way I cannot think of.
import pandas as pd
import sqlite3
import numpy as np
#### SETTING EXAMPLE UP
### Create DataFrame
data = dict({"City": ["London","Frankfurt","Berlin","Paris","Brondby"],
"Population":[8,2,4,9,0.5]})
df = pd.DataFrame(data,index = pd.Index(np.arange(5)))
### Create SQL DataBase
conn = sqlite3.connect("MyDB.db")
### Upload DataFrame as Table into SQL Database
df.to_sql("MyTable", con = conn, index = False, if_exists="replace")
### Read DataFrame from SQL DB
query = "SELECT * from MyTable"
pd.read_sql_query(query, con = conn)
#### CREATE SUB_TABLE AND AMEND
#### EXTRACT sub_table FROM SQL TABLE
query = "SELECT * from MyTable WHERE Population > 2"
df_sub = pd.read_sql_query(query, con = conn)
df_sub
#### Amend Sub DF
df_sub[df_sub["City"] == "London"] = ["Brussel",4]
df_sub
#### Replace new data in SQL DB
df_sub.to_sql("MyTable", con = conn, index = False, if_exists="replace")
query = "SELECT * from MyTable"
pd.read_sql_query(query, con = conn)
Thanks for your help!
Note: I did try to achieve this via pure SQL queries but gave up. As I am not an expert on SQL, I would prefer to go with pandas if a solution exists. If not, a hint on how to achieve this via SQL would be great!
I think there is no way around using SQL queries for this task.
With pandas it is only possible to read a query into a DataFrame and to write a DataFrame to a database (replace or append).
If you want to update specific values/ rows or want to delete rows, you have to use SQL queries.
Commands you should look into are for example:
UPDATE, REPLACE, INSERT, DELETE
# Update the database, change City to 'Brussel' and Population to 4, for the first row
# (Attention! python indices start at 0, SQL indices at 1)
cur = conn.cursor()
cur.execute('UPDATE MyTable SET City=?, Population=? WHERE ROWID=?', ('Brussel', 4, 1))
conn.commit()
conn.close()
# Display the changes
conn = sqlite3.connect("MyDB.db")
query = "SELECT * from MyTable"
pd.read_sql_query(query, con=conn)
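The same pattern covers the other commands. A minimal sketch of DELETE and INSERT against the example table, where the 'Paris' and 'Madrid' rows are just illustrative values:

cur = conn.cursor()
# delete rows matching a filter
cur.execute('DELETE FROM MyTable WHERE City=?', ('Paris',))
# insert a new row
cur.execute('INSERT INTO MyTable (City, Population) VALUES (?, ?)', ('Madrid', 3.2))
conn.commit()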
For more examples on SQL and pandas, you can look at
https://www.dataquest.io/blog/python-pandas-databases/
I am trying to access PostgreSQL using psycopg2:
sql = """
SELECT
%s
FROM
table;
"""
cur = con.cursor()
input = (['id', 'name'], )
cur.execute(sql, input)
data = pd.DataFrame.from_records(cur.fetchall())
However, the returned result is:
0
0 [id, name]
1 [id, name]
2 [id, name]
3 [id, name]
4 [id, name]
If I try to access single column, it looks like:
0
0 id
1 id
2 id
3 id
4 id
It looks like something is wrong with the quoting around the column names (single quotes that should not be there):
In [49]: print cur.mogrify(sql, input)
SELECT
'id'
FROM
table;
but I am following the docs: http://initd.org/psycopg/docs/usage.html#
Can anyone tell me what is going on here? Thanks a lot!
Use the AsIs extension
import psycopg2
from psycopg2.extensions import AsIs
column_list = ['id','name']
columns = ', '.join(column_list)
cursor.execute("SELECT %s FROM table", (AsIs(columns),))
And mogrify will show that it does not quote the column names and passes them in as is.
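For example, a small sketch reusing the names above (mogrify returns the bytes that would be sent to the server):

print(cursor.mogrify("SELECT %s FROM table", (AsIs(columns),)))
# b'SELECT id, name FROM table'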
Nowadays, you can use sql.Identifier to do this in a clean and secure way:
from psycopg2 import sql
statement = """
SELECT
{id}, {name}
FROM
table;
"""
with con.cursor() as cur:
    cur.execute(sql.SQL(statement).format(
        id=sql.Identifier("id"),
        name=sql.Identifier("name")
    ))
    data = pd.DataFrame.from_records(cur.fetchall())
More information on query composition here: https://www.psycopg.org/docs/sql.html
The reason was that you were passing the string representation of the array ['id', 'name'] as a SQL query parameter, not as column names. So the resulting query was similar to
SELECT 'id, name' FROM table
It looks like your table had 5 rows, so the returned result was just this literal for each row.
Column names cannot be SQL query parameters, but they can be ordinary string parameters that you interpolate before executing the query (only do this with trusted input, since plain string interpolation is open to SQL injection):
sql = """
SELECT
%s
FROM
table;
"""
input = 'id, name'
sql = sql % input
print(sql)
cur = con.cursor()
cur.execute(sql)
data = pd.DataFrame.from_records(cur.fetchall())
In this case the resulting query is
SELECT
id, name
FROM
table;