I have a connection to a database (using pyodbc) and I need to commit a df to a new table. I've done this with SQL, but don't know how to do it with a df. Any ideas on how to alter the below code to make it work for a df?
code for SQL:
import pyodbc
import pandas as pd
conn= pyodbc.connect(r'DRIVER={Teradata};DBCNAME=foo; UID=name; PWD=password;QUIETMODE=YES;Trusted_Connection=yes')
cursor = conn.cursor()
cursor.execute(
"""
CREATE TABLE SCHEMA.NEW_TABLE AS
(
SELECT ... FROM ....
)
"""
)
conn.commit()
I tried the following code; it raised no errors, but no table was created in the database:
import pyodbc
import pandas as pd
conn= pyodbc.connect(r'DRIVER={Teradata};DBCNAME=foo; UID=name; PWD=password;QUIETMODE=YES;Trusted_Connection=yes')
sheet1.to_sql(con=conn, name='new_table', schema='Schema', if_exists='replace', index=False)
The documentation for to_sql() clearly states:
con : SQLAlchemy engine or DBAPI2 connection (legacy mode)
Using SQLAlchemy makes it possible to use any DB supported by that
library. If a DBAPI2 object, only sqlite3 is supported.
Thus, you need to pass a SQLAlchemy engine to the to_sql() function to write from Pandas directly to your Teradata database.
Another way would be to dump the data to a different data structure (e.g. to_dict()) and then use pyODBC to perform DML statements on the database, preferably using binding variables to speed up processing.
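For the second approach, here is a minimal sketch, assuming the target table (SCHEMA.NEW_TABLE is a placeholder) already exists in Teradata with columns matching sheet1, in the same order:
# Sketch only: convert the DataFrame rows into plain tuples and bind them.
rows = [tuple(r) for r in sheet1.itertuples(index=False, name=None)]
cols = ", ".join(sheet1.columns)
placeholders = ", ".join("?" for _ in sheet1.columns)
cursor = conn.cursor()
# cursor.fast_executemany = True  # can speed up bulk inserts, but not every ODBC driver supports it
cursor.executemany(
    "INSERT INTO SCHEMA.NEW_TABLE ({}) VALUES ({})".format(cols, placeholders),
    rows,
)
conn.commit()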
Related
Trying to import a table from an SQLite database into a pandas DataFrame:
import pandas as pd
import sqlite3
cnxn = sqlite3.Connection("my_db.db")
c = cnxn.cursor()
This works: pd.read_sql_query('select * from table1', con=cnxn). This doesn't: df = pd.read_sql_table('table1', con=cnxn).
Response:
ValueError: Table table1 not found
What could be the issue?
With SQLite accessed through Python's built-in sqlite3 module, pd.read_sql_table() is not possible; this is noted in the pandas documentation.
The sqlite3 connection counts as a plain DBAPI connection when you run commands through Python, and read_sql_table() does not accept those.
pd.read_sql_table() Documentation
Given a table name and a SQLAlchemy connectable, returns a DataFrame.
This function does not support DBAPI connections.
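A minimal sketch of the SQLAlchemy route, reusing my_db.db from the question:
import pandas as pd
from sqlalchemy import create_engine

# read_sql_table() needs a SQLAlchemy connectable, so build an engine
# pointing at the same SQLite file instead of a raw sqlite3 connection.
engine = create_engine("sqlite:///my_db.db")
df = pd.read_sql_table("table1", con=engine)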
I am new to using PostgreSQL in Python and to pandas.
I am trying to create a pandas dataframe from a query, with the following:
with conn.cursor() as cur:
    cur.execute('SELECT * FROM payment')
    postgres_df = pd.read_sql_query('SELECT * FROM payment', conn)
    print(postgres_df)
conn.close()
When I run the code I do get a result, but also a UserWarning.
UserWarning: pandas only supports SQLAlchemy connectable (engine/connection) or database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 objects are not tested, please consider using SQLAlchemy
warnings.warn(
I am using the psycopg2 package to connect to PostgreSQL.
I would like to know whether the code for creating the dataframe can be written more efficiently.
Also, should I be using SQLAlchemy instead of psycopg2, or is there a way to keep using the same package without getting the warning?
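A minimal sketch of the SQLAlchemy route, with hypothetical connection details; psycopg2 stays installed and is used by SQLAlchemy under the hood through the postgresql+psycopg2 dialect:
import pandas as pd
from sqlalchemy import create_engine

# Placeholder credentials/host; SQLAlchemy delegates to psycopg2 via this dialect.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/mydb")

postgres_df = pd.read_sql_query("SELECT * FROM payment", con=engine)
print(postgres_df)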
How can I insert newDF into my MySQL database in one go using executemany?
x=[
[[3141],[3141],[3169],[3251],[3285],[3302]],
[[5000],[3141],[3169],[3251],[3285],[3302]]
]
y=[
[[5],[7],[5],[2],[3],[8]],
[[6],[5],[6],[5],[3],[6]]
]
newDF=pd.DataFrame()
newDF[['x']]=x
newDF[['y']]=y
sql = "INSERT INTO new_table (`x`,`y`) VALUES (?,?)"
number_of_rows = cursor.executemany(sql, list(np.int64(newDF)))
I'm not familiar with executemany. However, I've used pandas.DataFrame.to_sql successfully; in my case, I was using the sqlalchemy and pymysql libraries to accomplish this.
This is not real code, but should be a reasonable outline; consider m to be the dataframe:
import pandas as pd
import pymysql  # only needs to be installed; SQLAlchemy loads it via the mysql+pymysql dialect
from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://username:password@host:port/db_name')
m.to_sql('table_name', engine, if_exists='append')
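If you do want to stick with executemany, here is a hedged sketch, assuming a pymysql connection (which uses %s placeholders rather than ?) and that newDF has plain integer columns x and y; the connection details are placeholders:
import pymysql

conn = pymysql.connect(host='localhost', user='username',
                       password='password', database='db_name')

# Turn the DataFrame rows into a list of (x, y) tuples for binding.
rows = list(newDF.itertuples(index=False, name=None))

with conn.cursor() as cursor:
    sql = "INSERT INTO new_table (`x`, `y`) VALUES (%s, %s)"
    number_of_rows = cursor.executemany(sql, rows)
conn.commit()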
IMPORT MODULES
import pyodbc
import pandas as pd
import csv
CREATE CONNECTION TO MICROSOFT SQL SERVER
msconn = pyodbc.connect(driver='{SQL Server}',
                        server='SERVER',
                        database='DATABASE',
                        trusted_connection='yes')
cursor = msconn.cursor()
CREATE VARIABLES THAT HOLD SQL STATEMENTS
SCRIPT = "SELECT * FROM TABLE"
PRINT DATA
cursor.execute(SCRIPT)
cursor.commit()
for row in cursor:
    print(row)
WRITE ALL ROWS WITH COLUMN NAME TO CSV --- NEED HELP HERE
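One way to do this with the csv module already imported above is a sketch along these lines ('output.csv' is a placeholder path); the column names come from cursor.description:
cursor.execute(SCRIPT)
with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow([column[0] for column in cursor.description])  # header row
    writer.writerows(cursor.fetchall())                            # data rows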
Pandas
Since pandas supports reading directly from an RDBMS with read_sql, you don't need to write this manually.
from sqlalchemy import create_engine
import pandas as pd
engine = create_engine('mssql+pyodbc://user:pass@mydsn')
df = pd.read_sql(sql='SELECT * FROM ...', con=engine)
The right tool: odo
From the odo docs:
Loading CSV files into databases is a solved problem. It’s a problem
that has been solved well. Instead of rolling our own loader every
time we need to do this and wasting computational resources, we should
use the native loaders in the database of our choosing.
And it works the other way round also.
from odo import odo
odo('mssql+pyodbc://user:pass@mydsn::tablename', 'myfile.csv')
@e4c5's answer is great, as it should be faster compared to a for loop + cursor; I would extend it by saving the result set to CSV:
...
pd.read_sql(sql='SELECT * FROM TABLE', con=msconn) \
.to_csv('/path/to/file.csv', index=False)
If you want to read all rows (no WHERE clause), you can use read_sql_table() with the SQLAlchemy engine from the answer above, since it does not accept a raw pyodbc connection:
pd.read_sql_table('TABLE', con=engine).to_csv('/path/to/file.csv', index=False)
I am trying to write a pandas DataFrame to a PostgreSQL database,
using a schema-qualified table.
I use the following code:
import pandas.io.sql as psql
from sqlalchemy import create_engine
engine = create_engine(r'postgresql://some:user@host/db')
c = engine.connect()
conn = c.connection
df = psql.read_sql("SELECT * FROM xxx", con=conn)
df.to_sql('a_schema.test', engine)
conn.close()
What happens is that pandas writes to the "public" schema, into a table named 'a_schema.test',
instead of writing to the "test" table in the "a_schema" schema.
How can I instruct pandas to use a schema other than public?
Thanks
Update: starting from pandas 0.15, writing to different schemas is supported. Then you will be able to use the schema keyword argument:
df.to_sql('test', engine, schema='a_schema')
Writing to different schemas is not yet supported at the moment with the read_sql and to_sql functions (but an enhancement request has already been filed: https://github.com/pydata/pandas/issues/7441).
However, you can get around for now using the object interface with PandasSQLAlchemy and providing a custom MetaData object:
import sqlalchemy

meta = sqlalchemy.MetaData(engine, schema='a_schema')
meta.reflect()
pdsql = pd.io.sql.PandasSQLAlchemy(engine, meta=meta)
pdsql.to_sql(df, 'test')
Beware! This interface (PandasSQLAlchemy) is not yet really public and will still undergo changes in the next version of pandas, but this is how you can do it for pandas 0.14.
Update: PandasSQLAlchemy is renamed to SQLDatabase in pandas 0.15.
Solved, thanks to joris' answer.
The code was also improved, following joris' comment, by passing around the SQLAlchemy engine instead of connection objects.
import pandas as pd
from sqlalchemy import create_engine, MetaData
engine = create_engine(r'postgresql://some:user@host/db')
meta = MetaData(engine, schema='a_schema')
meta.reflect()
pdsql = pd.io.sql.PandasSQLAlchemy(engine, meta=meta)
df = pd.read_sql("SELECT * FROM xxx", con=engine)
pdsql.to_sql(df, 'test')