Pandas UserWarning using psycopg2 - python

I am new to using postgreSQL in Python and using Pandas.
I am trying to create a pandas dataframe from a query, with the following:
with conn.cursor() as cur:
    cur.execute('SELECT * FROM payment')
    postgres_df = pd.read_sql_query('SELECT * FROM payment', conn)
    print(postgres_df)
conn.close()
When I run the code I do get a result, but I also get a UserWarning:
UserWarning: pandas only supports SQLAlchemy connectable (engine/connection) or database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 objects are not tested, please consider using SQLAlchemy
warnings.warn(
I am using the psycopg2 package to connect to PostgreSQL.
I would like to know if the code for creating a dataframe can be written more efficiently.
Also, should I be using SQLAlchemy instead of psycopg2, or is there a way to keep using the same package without getting the warning?
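One way to address both points, as the warning itself suggests, is to keep psycopg2 installed as the driver but hand pandas a SQLAlchemy engine. A minimal sketch, with placeholder credentials and database name (assumptions, adjust to your setup); note that the separate cursor.execute call is redundant in any case, since read_sql_query issues the query itself:
import pandas as pd
from sqlalchemy import create_engine

# psycopg2 stays as the driver; SQLAlchemy just manages the connection
# user/password/host/dbname below are placeholders
engine = create_engine('postgresql+psycopg2://user:password@localhost:5432/dbname')

postgres_df = pd.read_sql_query('SELECT * FROM payment', engine)
print(postgres_df)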

Related

read_sql_query works, read_sql_table doesn't

Trying to import a table from a SQLite into Pandas DF:
import pandas as pd
import sqlite3
cnxn = sqlite3.Connection("my_db.db")
c = cnxn.cursor()
Using this command works: pd.read_sql_query('select * from table1', con=cnxn). This doesn't: df = pd.read_sql_table('table1', con=cnxn).
The response:
ValueError: Table table1 not found
What could be the issue?
With SQLite in Python, pd.read_sql_table() is not possible; this is stated in the pandas docs.
A sqlite3 connection is treated as a plain DB-API connection, which the function does not support.
pd.read_sql_table() Documentation
Given a table name and a SQLAlchemy connectable, returns a DataFrame.
This function does not support DBAPI connections.
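If read_sql_table is what you need, wrapping the same file in a SQLAlchemy engine should satisfy it. A minimal sketch, reusing the my_db.db and table1 names from the question:
import pandas as pd
from sqlalchemy import create_engine

# a SQLAlchemy connectable is what read_sql_table requires
engine = create_engine('sqlite:///my_db.db')
df = pd.read_sql_table('table1', con=engine)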

Pandas: load a table into a dataframe with read_sql - `con` parameter and table name

I am trying to import an SQL database into a Python pandas dataframe, and I am getting a syntax error. I am a newbie here, so the issue is probably very simple.
After downloading sqlite sample chinook.db from http://www.sqlitetutorial.net/sqlite-sample-database/
and reading pandas documentation, I tried to load it into a pandas dataframe with
import pandas as pd
import sqlite3
conn = sqlite3.connect('chinook.db')
df = pd.read_sql('albums', conn)
where 'albums' is a table of 'chinook.db', as listed with sqlite3 from the command line.
The result is:
...
DatabaseError: Execution failed on sql 'albums': near "albums": syntax error
I tried variations of the above code in an IPython session to import the tables of the database for exploratory data analysis, with no success.
What am I doing wrong? Is there a documentation/tutorial for newbies with some examples around?
Thanks in advance for your help!
Found it!
An example of db connection with SQLAlchemy can be found here:
https://www.codementor.io/sagaragarwal94/building-a-basic-restful-api-in-python-58k02xsiq
import pandas as pd
from sqlalchemy import create_engine
db_connect = create_engine('sqlite:///chinook.db')
df = pd.read_sql('albums', con=db_connect)
print(df)
As suggested by @Anky_91, pd.read_sql_table also works, since read_sql wraps it.
The issue was the connection, that has to be made with SQLAlchemy and not with sqlite3.
Thanks
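For reference, the explicit read_sql_table form mentioned above would be:
df = pd.read_sql_table('albums', con=db_connect)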

How to insert a whole dataframe into MySQL using executemany in Python

How do I insert newDF into my MySQL database in one go using executemany?
x=[
[[3141],[3141],[3169],[3251],[3285],[3302]],
[[5000],[3141],[3169],[3251],[3285],[3302]]
]
y=[
[[5],[7],[5],[2],[3],[8]],
[[6],[5],[6],[5],[3],[6]]
]
newDF = pd.DataFrame()
newDF[['x']] = x
newDF[['y']] = y
sql = "INSERT INTO new_table (`x`,`y`) VALUES (?,?)"
number_of_rows = cursor.executemany(sql, list(np.int64(newDF)))
I'm not familiar with executemany. However, I've used pandas.DataFrame.to_sql successfully; it is covered in the pandas documentation. In my case, I was using the sqlalchemy and pymysql libraries to accomplish this.
This is not real code, but should be a reasonable outline; consider m to be the dataframe:
import pandas as pd
from sqlalchemy import create_engine

# pymysql only needs to be installed; SQLAlchemy loads it as the driver
# note the '@' (not '#') between password and host in the URL
engine = create_engine('mysql+pymysql://username:password@host:port/db_name')
m.to_sql('table_name', engine, if_exists='append')
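For the executemany route the question actually asks about, the two usual stumbling blocks are that MySQL drivers use %s placeholders rather than ?, and that executemany wants a sequence of plain row tuples. A hedged sketch, assuming a pymysql connection and the new_table from the question (credentials are placeholders):
import pandas as pd
import pymysql

# placeholder credentials; adjust to your setup
conn = pymysql.connect(host='localhost', user='user', password='passwd', database='db_name')
cursor = conn.cursor()

newDF = pd.DataFrame({'x': [3141, 3141, 3169], 'y': [5, 7, 5]})

# pymysql uses %s placeholders; itertuples yields plain row tuples
sql = "INSERT INTO new_table (`x`, `y`) VALUES (%s, %s)"
rows = list(newDF.itertuples(index=False, name=None))
cursor.executemany(sql, rows)
conn.commit()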

Python & MSSQL: Python Code to Delete All Rows of Data in MSSQL Table

I'm scripting in a Python environment. I have successfully written a pandas dataframe to a table in MSSQL.
I want to use Python code to delete all rows in my MSSQL table. I know the SQL syntax to do this (shown below).
DELETE FROM [LON].[dbo].[MREPORT]
But how do I incorporate this SQL in my Python code, so that running it in my Python environment deletes all rows in the MSSQL table?
Are you using pyodbc?
import pyodbc

conn = pyodbc.connect(
    'DRIVER={<your_driver>};'
    'SERVER=<your_server>;'
    'DATABASE=<your_database>;'
    'UID=<user>;'
    'PWD=<passwd>'
)
cursor = conn.cursor()
cursor.execute("TRUNCATE TABLE <your_table>")
conn.commit()  # pyodbc does not autocommit by default

How to commit df to SQL database using pyodbc?

I have a connection to a database (using pyodbc) and I need to commit a df to a new table. I've done this with SQL, but don't know how to do it with a df. Any ideas on how to alter the below code to make it work for a df?
code for SQL:
import pyodbc
import pandas as pd
conn = pyodbc.connect(r'DRIVER={Teradata};DBCNAME=foo; UID=name; PWD=password;QUIETMODE=YES;Trusted_Connection=yes')
cursor = conn.cursor()
cursor.execute(
"""
CREATE TABLE SCHEMA.NEW_TABLE AS
(
SELECT ... FROM ....
)
"""
)
conn.commit()
I tried this code; it raised no errors, but the table wasn't created in the database:
import pyodbc
import pandas as pd
conn = pyodbc.connect(r'DRIVER={Teradata};DBCNAME=foo; UID=name; PWD=password;QUIETMODE=YES;Trusted_Connection=yes')
sheet1.to_sql(con=conn, name='new_table', schema='Schema', if_exists='replace', index=False)
The documentation for to_sql() clearly states:
con : SQLAlchemy engine or DBAPI2 connection (legacy mode)
Using SQLAlchemy makes it possible to use any DB supported by that
library. If a DBAPI2 object, only sqlite3 is supported.
Thus, you need to pass a SQLAlchemy engine to the to_sql() function to write from Pandas directly to your Teradata database.
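A sketch of that route, assuming Teradata's teradatasqlalchemy dialect package is installed; the URL scheme and credentials below are assumptions, so check that package's documentation for the exact form:
import pandas as pd
from sqlalchemy import create_engine

# 'teradatasql://' is the scheme registered by teradatasqlalchemy (assumed installed)
engine = create_engine('teradatasql://name:password@foo')
sheet1.to_sql('new_table', con=engine, schema='Schema', if_exists='replace', index=False)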
Another way would be to dump the data to a different data structure (e.g. to_dict()) and then use pyODBC to perform DML statements on the database, preferably using binding variables to speed up processing.
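A sketch of that second route over the existing pyodbc connection, with hypothetical column names for illustration; pyodbc uses ? binding parameters:
cursor = conn.cursor()
# col1/col2 are assumed columns; match them to your dataframe
cursor.execute("CREATE TABLE Schema.new_table (col1 INT, col2 VARCHAR(100))")
rows = list(sheet1.itertuples(index=False, name=None))
cursor.executemany("INSERT INTO Schema.new_table (col1, col2) VALUES (?, ?)", rows)
conn.commit()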
