Extract from database using Python - python

I wrote a small Python script to extract data from a database. Here is my script:
#!/usr/bin/python3
import pandas as pd
from sqlalchemy import create_engine
#connect to server
mytab = create_engine('mssql+pyodbc://test:test1@mypass')
#sql query that retrieves my table
df = pd.read_sql('select * from FO_INV', mytab)
#write query result to a CSV file
df.to_csv('inventory.csv', index=False, sep=',', encoding='utf-8')
Everything works fine if I select only the top 100 rows, for example, but for the whole table it takes forever!
Do you have any ideas or recommendations, please?
Thank you in advance :)

I would suggest using pyodbc instead of SQLAlchemy.
Something like this:
import pyodbc
mytab = pyodbc.connect('DRIVER={SQL Server};SERVER=.\;DATABASE=myDB;UID=user;PWD=pwd')
Check your timings with this. This should be faster.
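Independently of the driver, if the table is too large to fit comfortably in memory, reading it in chunks and appending to the CSV usually helps. A minimal sketch reusing the engine from the question (the chunk size of 50000 is an arbitrary choice):
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine('mssql+pyodbc://test:test1@mypass')
# stream the table in chunks instead of materialising it all at once
first = True
for chunk in pd.read_sql('select * from FO_INV', engine, chunksize=50000):
    # write the header only once, then append the remaining chunks
    chunk.to_csv('inventory.csv', mode='w' if first else 'a', header=first, index=False, encoding='utf-8')
    first = False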

Related

read_sql_query works, read_sql_table doesn't

Trying to import a table from a SQLite database into a pandas DataFrame:
import pandas as pd
import sqlite3
cnxn = sqlite3.connect("my_db.db")
c = cnxn.cursor()
Using this command works: pd.read_sql_query('select * from table1', con=cnxn). This doesn't: df = pd.read_sql_table('table1', con=cnxn).
The response:
ValueError: Table table1 not found
What could be the issue?
With SQLite accessed through a plain sqlite3 connection, pd.read_sql_table() is not possible; this is stated in the pandas documentation.
A sqlite3 connection is treated as a DBAPI connection when running the commands through Python, and read_sql_table does not accept those.
pd.read_sql_table() Documentation
Given a table name and a SQLAlchemy connectable, returns a DataFrame.
This function does not support DBAPI connections.
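In practice the fix is to hand read_sql_table a SQLAlchemy engine rather than the raw sqlite3 connection. A minimal sketch using the file and table names from the question:
import pandas as pd
from sqlalchemy import create_engine
# a SQLAlchemy connectable (not a DBAPI connection) is what read_sql_table expects
engine = create_engine('sqlite:///my_db.db')
df = pd.read_sql_table('table1', con=engine)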

How to fix 'Python sqlite3.OperationalError: no such table' issue

I received a .db file from my colleague (which includes text and numeric data) that I need to load into a pandas DataFrame for further processing. I have never worked with SQLite before, but after a few Google searches I wrote the following lines of code:
import pandas as pd
import numpy as np
import sqlite3
conn = sqlite3.connect('data.db') # opens data.db, creating an empty file if it does not already exist
sql="""
SELECT * FROM data;
"""
df=pd.read_sql_query(sql,conn)
df.head()
This gives me the following error:
Execution failed on sql ' SELECT * FROM data; ': no such table: data
What table is this code referring to? I only have data.db.
I do not quite understand where I am going wrong with this. Any advice on how to get my data into the DataFrame df?
I'm also new to SQL, but based on what you've provided, "data" refers to a table inside your database file data.db.
The query you typed instructs the program to select all rows from a table called "data"; if no table with that name exists in the file, you get exactly this error. This website helped me with creating tables: https://www.tutorialspoint.com/sqlite/sqlite_create_table.htm
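If you are not sure what the file actually contains, you can list its tables first and then query one that exists. A short sketch against the data.db from the question:
import sqlite3
conn = sqlite3.connect('data.db')
cursor = conn.cursor()
# sqlite_master is SQLite's built-in catalogue of schema objects
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(cursor.fetchall())  # the actual table names stored in data.db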

Pandas: load a table into a dataframe with read_sql - `con` parameter and table name

I am trying to import an SQL database into a Python pandas DataFrame, and I am getting a syntax error. I am a newbie here, so the issue is probably very simple.
After downloading the sqlite sample chinook.db from http://www.sqlitetutorial.net/sqlite-sample-database/
and reading the pandas documentation, I tried to load it into a pandas DataFrame with
import pandas as pd
import sqlite3
conn = sqlite3.connect('chinook.db')
df = pd.read_sql('albums', conn)
where 'albums' is a table of 'chinook.db', as verified with sqlite3 from the command line.
The result is:
...
DatabaseError: Execution failed on sql 'albums': near "albums": syntax error
I tried variations of the above code in an IPython session to import the tables of the database for exploratory data analysis, with no success.
What am I doing wrong? Is there documentation or a tutorial for newbies with some examples?
Thanks in advance for your help!
Found it!
An example of db connection with SQLAlchemy can be found here:
https://www.codementor.io/sagaragarwal94/building-a-basic-restful-api-in-python-58k02xsiq
import pandas as pd
from sqlalchemy import create_engine
db_connect = create_engine('sqlite:///chinook.db')
df = pd.read_sql('albums', con=db_connect)
print(df)
As suggested by @Anky_91, pd.read_sql_table also works, since read_sql wraps it.
The issue was the connection, which has to be made with SQLAlchemy rather than sqlite3 when only a table name is passed.
Thanks
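For completeness, the original sqlite3 connection still works as long as read_sql is given a full query rather than a bare table name; a small sketch:
import pandas as pd
import sqlite3
conn = sqlite3.connect('chinook.db')
# a full SQL statement works with a plain DBAPI connection
df = pd.read_sql('SELECT * FROM albums', conn)
print(df.head())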

How to write python sql output into CSV using a dataframe

IMPORT MODULES
import pyodbc
import pandas as pd
import csv
CREATE CONNECTION TO MICROSOFT SQL SERVER
msconn = pyodbc.connect(driver='{SQL Server}',
                        server='SERVER',
                        database='DATABASE',
                        trusted_connection='yes')
cursor = msconn.cursor()
CREATE VARIABLES THAT HOLD SQL STATEMENTS
SCRIPT = "SELECT * FROM TABLE"
PRINT DATA
cursor.execute(SCRIPT)
cursor.commit()  # harmless no-op for a SELECT
for row in cursor:
    print(row)
WRITE ALL ROWS WITH COLUMN NAME TO CSV --- NEED HELP HERE
Pandas
Since pandas supports direct import from an RDBMS via read_sql, you don't need to write this manually.
from sqlalchemy import create_engine
import pandas as pd
engine = create_engine('mssql+pyodbc://user:pass@mydsn')
df = pd.read_sql(sql='SELECT * FROM ...', con=engine)
The right tool: odo
From the odo docs:
Loading CSV files into databases is a solved problem. It’s a problem
that has been solved well. Instead of rolling our own loader every
time we need to do this and wasting computational resources, we should
use the native loaders in the database of our choosing.
And it works the other way round also.
from odo import odo
odo('mssql+pyodbc://user:pass@mydsn::tablename', 'myfile.csv')
@e4c5's answer is great, as it should be faster than a for loop + cursor. I would extend it by saving the result set to CSV:
...
pd.read_sql(sql='SELECT * FROM TABLE', con=msconn) \
.to_csv('/path/to/file.csv', index=False)
If you want to read all rows (no WHERE clause), read_sql_table also works, but it needs a SQLAlchemy connectable such as the engine from the answer above rather than the raw pyodbc connection:
pd.read_sql_table('TABLE', con=engine).to_csv('/path/to/file.csv', index=False)
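If you would rather stay with the csv module already imported in the question, the cursor alone is enough. A minimal sketch (the output path is a placeholder):
import csv
cursor.execute(SCRIPT)
with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # cursor.description holds the column metadata of the last executed statement
    writer.writerow([column[0] for column in cursor.description])
    for row in cursor:
        writer.writerow(row)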

Pandas writing dataframe to other postgresql schema

I am trying to write a pandas DataFrame to a PostgreSQL database,
using a schema-qualified table.
I use the following code:
import pandas.io.sql as psql
from sqlalchemy import create_engine
engine = create_engine(r'postgresql://some:user@host/db')
c = engine.connect()
conn = c.connection
df = psql.read_sql("SELECT * FROM xxx", con=conn)
df.to_sql('a_schema.test', engine)
conn.close()
What happens is that pandas writes to the "public" schema, into a table named 'a_schema.test',
instead of writing to the "test" table in the "a_schema" schema.
How can I instruct pandas to use a schema other than public?
Thanks
Update: starting from pandas 0.15, writing to a different schema is supported. You can then use the schema keyword argument:
df.to_sql('test', engine, schema='a_schema')
Writing to a different schema is not supported at the moment with the read_sql and to_sql functions (but an enhancement request has already been filed: https://github.com/pydata/pandas/issues/7441).
However, you can work around this for now by using the object interface with PandasSQLAlchemy and providing a custom MetaData object:
meta = sqlalchemy.MetaData(engine, schema='a_schema')
meta.reflect()
pdsql = pd.io.sql.PandasSQLAlchemy(engine, meta=meta)
pdsql.to_sql(df, 'test')
Beware! This interface (PandasSQLAlchemy) is not yet really public and will still undergo changes in the next version of pandas, but this is how you can do it for pandas 0.14.
Update: PandasSQLAlchemy is renamed to SQLDatabase in pandas 0.15.
Solved, thanks to joris' answer.
The code was also improved thanks to joris' comment, by passing around the SQLAlchemy engine instead of connection objects.
import pandas as pd
from sqlalchemy import create_engine, MetaData
engine = create_engine(r'postgresql://some:user@host/db')
# bind the metadata to the target schema so reflection and to_sql use it
meta = MetaData(engine, schema='a_schema')
meta.reflect()
pdsql = pd.io.sql.PandasSQLAlchemy(engine, meta=meta)
df = pd.read_sql("SELECT * FROM xxx", con=engine)
pdsql.to_sql(df, 'test')
