How to convert SQL Query result to PANDAS Data Structure? - python

Any help on this problem will be greatly appreciated.
So basically I want to run a query against my SQL database and store the returned data as a Pandas data structure.
I have attached the code for the query.
I am reading the documentation on Pandas, but I have trouble identifying the return type of my query.
I tried to print the query result, but it doesn't give any useful information.
Thanks!
from sqlalchemy import create_engine
engine2 = create_engine('mysql://THE DATABASE I AM ACCESSING')
connection2 = engine2.connect()
dataid = 1022
resoverall = connection2.execute("""
    SELECT
        sum(BLABLA) AS BLA,
        sum(BLABLABLA2) AS BLABLABLA2,
        sum(SOME_INT) AS SOME_INT,
        sum(SOME_INT2) AS SOME_INT2,
        100 * sum(SOME_INT2) / sum(SOME_INT) AS ctr,
        sum(SOME_INT2) / sum(SOME_INT) AS cpc
    FROM daily_report_cooked
    WHERE campaign_id = '%s'
""" % dataid)
So I sort of want to understand the format/datatype of my variable "resoverall" and how to put it into a Pandas data structure.

Here's the shortest code that will do the job:
from pandas import DataFrame
df = DataFrame(resoverall.fetchall())
df.columns = resoverall.keys()
You can go fancier and parse the types as in Paul's answer.

Edit: Mar. 2015
As noted below, pandas now uses SQLAlchemy to both read from (read_sql) and insert into (to_sql) a database. The following should work:
import pandas as pd
df = pd.read_sql(sql, cnxn)
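For example, a minimal sketch assuming a MySQL database reachable through SQLAlchemy with the pymysql driver installed; the connection URL, table, and id below are placeholders borrowed from the question:
import pandas as pd
from sqlalchemy import create_engine

# Placeholder URL -- swap in your own user, password, host and database
engine = create_engine('mysql+pymysql://user:password@localhost:3306/mydb')

# Read a query result straight into a DataFrame
df = pd.read_sql("SELECT * FROM daily_report_cooked WHERE campaign_id = 1022", engine)

# ...and write a DataFrame back out to a table
df.to_sql('daily_report_copy', engine, if_exists='replace', index=False)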
Previous answer:
Via mikebmassey from a similar question
import pyodbc
import pandas.io.sql as psql
cnxn = pyodbc.connect(connection_info)
cursor = cnxn.cursor()
sql = "SELECT * FROM TABLE"
df = psql.frame_query(sql, cnxn)
cnxn.close()

If you are using SQLAlchemy's ORM rather than the expression language, you might find yourself wanting to convert an object of type sqlalchemy.orm.query.Query to a Pandas data frame.
The cleanest approach is to get the generated SQL from the query's statement attribute, and then execute it with pandas's read_sql() method. E.g., starting with a Query object called query:
df = pd.read_sql(query.statement, query.session.bind)
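For instance, here is a small self-contained sketch; the User model, the SQLite file, and the filter are purely illustrative, and only the last line is the actual technique:
import pandas as pd
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):               # hypothetical model, just for the example
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite:///example.db')   # placeholder database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

query = session.query(User).filter(User.id < 100)

# query.statement is the rendered SELECT; the session's bind is the engine
df = pd.read_sql(query.statement, query.session.bind)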

Edit 2014-09-30:
pandas now has a read_sql function. You definitely want to use that instead.
Original answer:
I can't help you with SQLAlchemy -- I always use pyodbc, MySQLdb, or psycopg2 as needed. But when doing so, a function as simple as the one below tends to suit my needs:
import datetime
import decimal
import pyodbc  # just corrected a typo here
import numpy as np
import pandas

cnn, cur = myConnectToDBfunction()
cmd = "SELECT * FROM myTable"
cur.execute(cmd)
dataframe = __processCursor(cur, dataframe=True)

def __processCursor(cur, dataframe=False, index=None):
    '''
    Processes a database cursor with data on it into either
    a structured numpy array or a pandas dataframe.
    input:
    cur - a pyodbc cursor that has just received data
    dataframe - bool. if false, a numpy record array is returned
                if true, return a pandas dataframe
    index - list of column(s) to use as index in a pandas dataframe
    '''
    datatypes = []
    colinfo = cur.description
    for col in colinfo:
        if col[1] == unicode:
            datatypes.append((col[0], 'U%d' % col[3]))
        elif col[1] == str:
            datatypes.append((col[0], 'S%d' % col[3]))
        elif col[1] in [float, decimal.Decimal]:
            datatypes.append((col[0], 'f4'))
        elif col[1] == datetime.datetime:
            datatypes.append((col[0], 'O4'))
        elif col[1] == int:
            datatypes.append((col[0], 'i4'))

    data = []
    for row in cur:
        data.append(tuple(row))

    array = np.array(data, dtype=datatypes)

    if dataframe:
        output = pandas.DataFrame.from_records(array)
        if index is not None:
            output = output.set_index(index)
    else:
        output = array

    return output

1. Using MySQL-connector-python
# pip install mysql-connector-python
import mysql.connector
import pandas as pd

mydb = mysql.connector.connect(
    host='host',
    user='username',
    passwd='pass',
    database='db_name'
)
query = 'select * from table_name'
df = pd.read_sql(query, con=mydb)
print(df)
2. Using SQLAlchemy
# pip install pymysql
# pip install sqlalchemy
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine('mysql+pymysql://username:password@localhost:3306/db_name')
query = '''
select * from table_name
'''
df = pd.read_sql_query(query, engine)
print(df)

MySQL Connector
For those who work with the MySQL connector, you can use this code as a start. (Thanks to @Daniel Velkov.)
Used refs:
Querying Data Using Connector/Python
Connecting to MYSQL with Python in 3 steps
import pandas as pd
import mysql.connector

# Setup MySQL connection
db = mysql.connector.connect(
    host="<IP>",            # your host, usually localhost
    user="<USER>",          # your username
    password="<PASS>",      # your password
    database="<DATABASE>"   # name of the database
)

# You must create a Cursor object. It will let you execute all the queries you need
cur = db.cursor()

# Use all the SQL you like
cur.execute("SELECT * FROM <TABLE>")

# Put it all to a data frame
sql_data = pd.DataFrame(cur.fetchall())
sql_data.columns = cur.column_names

# Close the session
db.close()

# Show the data
print(sql_data.head())

Here's the code I use. Hope this helps.
import pandas as pd
from sqlalchemy import create_engine

def getData():
    # Parameters
    ServerName = "my_server"
    Database = "my_db"
    UserPwd = "user:pwd"
    Driver = "driver=SQL Server Native Client 11.0"

    # Create the connection
    engine = create_engine('mssql+pyodbc://' + UserPwd + '@' + ServerName + '/' + Database + "?" + Driver)

    sql = "select * from mytable"
    df = pd.read_sql(sql, engine)
    return df

df2 = getData()
print(df2)

This is a short and crisp answer to your problem:
from __future__ import print_function
import MySQLdb
import numpy as np
import pandas as pd
import xlrd

# Connecting to MySQL Database
connection = MySQLdb.connect(
    host="hostname",
    port=0000,
    user="userID",
    passwd="password",
    db="table_documents",
    charset='utf8'
)
print(connection)

# Getting data from the database into a dataframe
sql_for_df = 'select * from tabledata'
df_from_database = pd.read_sql(sql_for_df, connection)

Like Nathan, I often want to dump the results of a sqlalchemy or sqlsoup Query into a Pandas data frame. My own solution for this is:
query = session.query(tbl.Field1, tbl.Field2)
DataFrame(query.all(), columns=[column['name'] for column in query.column_descriptions])

resoverall is a sqlalchemy ResultProxy object. You can read more about it in the SQLAlchemy docs, which explain the basic usage of working with Engines and Connections. What matters here is that resoverall is dict-like.
Pandas likes dict-like objects when creating its data structures; see the online docs.
Good luck with sqlalchemy and pandas.
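As a rough sketch of what that looks like with the question's setup (written against the SQLAlchemy 1.x API the question uses; newer versions want the raw string wrapped in text()):
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('mysql://user:password@host/dbname')   # placeholder URL
connection = engine.connect()

resoverall = connection.execute("SELECT * FROM daily_report_cooked")

# The ResultProxy yields dict-like rows, so pandas can consume it directly
df = pd.DataFrame(resoverall.fetchall(), columns=resoverall.keys())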

Simply use pandas and pyodbc together. You'll have to modify your connection string (connstr) according to your database specifications.
import pyodbc
import pandas as pd
# MSSQL Connection String Example
connstr = "Server=myServerAddress;Database=myDB;User Id=myUsername;Password=myPass;"
# Query Database and Create DataFrame Using Results
df = pd.read_sql("select * from myTable", pyodbc.connect(connstr))
I've used pyodbc with several enterprise databases (e.g. SQL Server, MySQL, MariaDB, IBM).

This question is old, but I wanted to add my two cents. I read the question as "I want to run a query against my [My]SQL database and store the returned data as a Pandas data structure [DataFrame]."
From the code it looks like you mean a MySQL database, and I assume you mean a pandas DataFrame.
import MySQLdb as mdb
import pandas.io.sql as sql
from pandas import *
conn = mdb.connect('<server>','<user>','<pass>','<db>');
df = sql.read_frame('<query>', conn)
For example,
conn = mdb.connect('localhost','myname','mypass','testdb');
df = sql.read_frame('select * from testTable', conn)
This will import all rows of testTable into a DataFrame.

A long time has passed since the last post, but maybe it helps someone...
A shorter way than Paul H's:
my_data = query.all()
my_df = pandas.DataFrame(my_data)

Here is mine. Just in case if you are using "pymysql":
import pymysql
from pandas import DataFrame
host = 'localhost'
port = 3306
user = 'yourUserName'
passwd = 'yourPassword'
db = 'yourDatabase'
cnx = pymysql.connect(host=host, port=port, user=user, passwd=passwd, db=db)
cur = cnx.cursor()
query = """ SELECT * FROM yourTable LIMIT 10"""
cur.execute(query)
field_names = [i[0] for i in cur.description]
get_data = [xx for xx in cur]
cur.close()
cnx.close()
df = DataFrame(get_data)
df.columns = field_names

pandas.io.sql.write_frame is DEPRECATED.
https://pandas.pydata.org/pandas-docs/version/0.15.2/generated/pandas.io.sql.write_frame.html
You should switch to pandas.DataFrame.to_sql:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html
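A minimal sketch of the replacement, using a throwaway SQLite engine and table name as placeholders:
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('sqlite:///example.db')   # placeholder engine

df = pd.DataFrame({'campaign_id': [1022, 1023], 'clicks': [10, 42]})

# Replaces the deprecated pandas.io.sql.write_frame
df.to_sql('daily_report_copy', engine, if_exists='replace', index=False)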
There is another solution.
PYODBC to Pandas - DataFrame not working - Shape of passed values is (x,y), indices imply (w,z)
As of Pandas 0.12 (I believe) you can do:
import pandas
import pyodbc
sql = 'select * from table'
cnn = pyodbc.connect(...)
data = pandas.read_sql(sql, cnn)
Prior to 0.12, you could do:
import pandas
from pandas.io.sql import read_frame
import pyodbc
sql = 'select * from table'
cnn = pyodbc.connect(...)
data = read_frame(sql, cnn)

The way I do this:
db = db_class()  # your own database wrapper class
db.execute(query)
mydata = [x for x in db.fetchall()]
df = pd.DataFrame(data=mydata)

If the result type is ResultSet, you should convert it to dictionary first. Then the DataFrame columns will be collected automatically.
This works in my case:
df = pd.DataFrame([dict(r) for r in resoverall])

Here is a simple solution I like:
Put your DB connection info in a YAML file in a secure location (do not version it in the code repo).
---
host: 'hostname'
port: port_number_integer
database: 'databasename'
user: 'username'
password: 'password'
Then load the conf in a dictionary, open the db connection and load the result set of the SQL query in a data frame:
import yaml
import pymysql
import pandas as pd

db_conf_path = '/path/to/db-conf.yaml'

# Load DB conf
with open(db_conf_path) as db_conf_file:
    db_conf = yaml.safe_load(db_conf_file)

# Connect to the DB
db_connection = pymysql.connect(**db_conf)

# Load the data into a DF
query = '''
SELECT *
FROM my_table
LIMIT 10
'''
df = pd.read_sql(query, con=db_connection)

Related

How to crate a table in snowflake from a python pandas dataframe (without using sqlalchemy)

Is there a way to create a table in Snowflake from a pandas dataframe in Python using just the Snowflake connector and the pandas library? The main goal here is to take a pandas dataframe and use its schema to create a new table in a specific data warehouse/database/schema in Snowflake. I have seen examples of how to do this with sqlalchemy, which I am trying to avoid, but worst case I will just use that.
I have tried other methods, including sqlalchemy and the Snowflake upload method using the PUT command, but wanted to ask if anyone has an alternative that just uses the Snowflake connector and pandas and does not require me to save the data to my drive locally or use sqlalchemy.
I appreciate any input, or feedback on how to write better questions too.
*Note:
write_pandas - the Snowflake connector function can only append to tables that already exist.
df.to_sql - only works with sqlalchemy or sqlite3 connections, so I don't think a Snowflake connection would work, but I could be wrong.
I have used the Snowflake connector functions write_pandas() and pd_writer() with the pandas function to_sql(). The issue here is that the pandas to_sql() docs state that the connection can only be "sqlalchemy.engine.(Engine or Connection) or sqlite3.Connection". I would prefer to keep using the snowflake-connector for Python. Without using sqlalchemy, I know I could do the following:
.ini config file for connecting to database (named db.ini)
[database_name]
user = user7
pass = s$cret
acc = jhn675f
wh = 22jb7tyo5
db = dev_env546
db_schema = hubspot
python module for connecting to snowflake db and code to execute
import configparser
import pandas as pd
import snowflake.connector

config = configparser.ConfigParser()
config.read('db.ini')

sn_user = config['database_name']['user']
sn_password = config['database_name']['pass']
sn_account = config['database_name']['acc']
sn_warehouse = config['database_name']['wh']
sn_database = config['database_name']['db']
sn_schema = config['database_name']['db_schema']

ctx = snowflake.connector.connect(
    user=sn_user,
    password=sn_password,
    account=sn_account,
    warehouse=sn_warehouse,
    database=sn_database,
    schema=sn_schema
)
cs = ctx.cursor()

query_extract = '''
select table1.field1,
       table1.field2,
       table1.field3,
       table1.field4,
       table1.field5,
       table1.field6,
       table1.field7,
       table2.field2,
       table2.field5,
       table2.field7,
       table2.field9,
       table3.field1,
       table3.field6
from database.schema.table1
left join database.schema.table2
    on table1.field3 = table2.field1
left join database.schema.table3
    on table1.field5 = table3.field1
'''

try:
    cs.execute(query_extract)
    df = cs.fetch_pandas_all()
except:
    # throw exception here
    raise

# clean data in the dataframe and perform some calcs
# store results in new dataframe called df_final
# would like to just use df_final to create a table in snowflake based on df_final schema and datatypes
# right now I am not sure how to do that
Current methods or alternatives
import configparser
import pandas as pd
import snowflake.connector

config = configparser.ConfigParser()
config.read('db.ini')

sn_user = config['database_name']['user']
sn_password = config['database_name']['pass']
sn_account = config['database_name']['acc']
sn_warehouse = config['database_name']['wh']
sn_database = config['database_name']['db']
sn_schema = config['database_name']['db_schema']

ctx = snowflake.connector.connect(
    user=sn_user,
    password=sn_password,
    account=sn_account,
    warehouse=sn_warehouse,
    database=sn_database,
    schema=sn_schema
)
cs = ctx.cursor()

query_extract = '''
select table1.field1,
       table1.field2,
       table1.field3,
       table1.field4,
       table1.field5,
       table1.field6,
       table1.field7,
       table2.field2,
       table2.field5,
       table2.field7,
       table2.field9,
       table3.field1,
       table3.field6
from database.schema.table1
left join database.schema.table2
    on table1.field3 = table2.field1
left join database.schema.table3
    on table1.field5 = table3.field1
'''

try:
    cs.execute(query_extract)
    df = cs.fetch_pandas_all()
except:
    # throw exception here
    raise

df_final.to_csv('data/processed_data.csv')

create_stage = '''create stage processed_data_stage
    copy_options = (on_error='skip_file');'''

create_file_format = '''create or replace file format processed_data_stage
    type = 'csv' field_delimiter = ',';'''

upload_file = '''put file:/data/processed_data.csv @processed_data_stage;'''
The other alternative is just embracing sqlalchemy and the pandas to_sql function:
from snowflake.connector.pandas_tools import pd_writer
import pandas as pd
from sqlalchemy import create_engine

account_identifier = '<account_identifier>'
user = '<user_login_name>'
password = '<password>'
database_name = '<database_name>'
schema_name = '<schema_name>'

conn_string = f"snowflake://{user}:{password}@{account_identifier}/{database_name}/{schema_name}"
engine = create_engine(conn_string)

# Create your DataFrame
table_name = 'cities'
df = pd.DataFrame(data=[['Stephen', 'Oslo'], ['Jane', 'Stockholm']], columns=['Name', 'City'])

# What to do if the table exists? replace, append, or fail?
if_exists = 'replace'

# Write the data to Snowflake, using pd_writer to speed up loading
with engine.connect() as con:
    df.to_sql(name=table_name.lower(), con=con, if_exists=if_exists, method=pd_writer)
So for anyone who is doing something similar, I ended up finding a pandas function that is not documented but can extract the schema from a dataframe.
The pandas library has a function called pd.io.sql.get_schema() that can return a string containing a SQL CREATE TABLE statement based on a dataframe and a table name. So you can do:
import configparser
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

config = configparser.ConfigParser()
config.read('db.ini')

sn_user = config['database_name']['user']
sn_password = config['database_name']['pass']
sn_account = config['database_name']['acc']
sn_warehouse = config['database_name']['wh']
sn_database = config['database_name']['db']
sn_schema = config['database_name']['db_schema']

ctx = snowflake.connector.connect(
    user=sn_user,
    password=sn_password,
    account=sn_account,
    warehouse=sn_warehouse,
    database=sn_database,
    schema=sn_schema
)
cs = ctx.cursor()

query_extract = '''
select table1.field1,
       table1.field2,
       table1.field3,
       table1.field4,
       table1.field5,
       table1.field6,
       table1.field7,
       table2.field2,
       table2.field5,
       table2.field7,
       table2.field9,
       table3.field1,
       table3.field6
from database.schema.table1
left join database.schema.table2
    on table1.field3 = table2.field1
left join database.schema.table3
    on table1.field5 = table3.field1
'''

try:
    cs.execute(query_extract)
    df = cs.fetch_pandas_all()
except:
    # throw exception here
    raise

# clean data in the dataframe and perform some calcs
# store results in new dataframe called df_final
df_final.columns = map(lambda x: str(x).upper(), df_final.columns)

tb_name = 'NEW_TABLE'
df_schema = pd.io.sql.get_schema(df_final, tb_name)
df_schema = str(df_schema).replace('TEXT', 'VARCHAR(256)')
# use replace to remove characters and quotes that might give you syntax errors
cs.execute(df_schema)

success, nchunks, nrows, _ = write_pandas(ctx, df_final, tb_name)

cs.close()
ctx.close()
I skipped over parts, but basically you can:
1. establish a connection to Snowflake
2. extract tables from Snowflake into a pandas dataframe
3. clean the dataframe, perform transformations, and save the results into a new dataframe
4. use pd.io.sql.get_schema to get a SQL string based on the pandas dataframe you want to load into Snowflake
5. use that SQL string with your connection and database cursor to execute the create table command based on the df schema
6. use the Snowflake write_pandas command to write your df to the newly created Snowflake table (a stripped-down sketch of steps 4-6 is shown below)
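For reference, a minimal sketch of steps 4-6, assuming the Snowflake connection ctx and the cleaned dataframe df_final from the code above already exist:
import pandas as pd
from snowflake.connector.pandas_tools import write_pandas

tb_name = 'NEW_TABLE'

# Step 4: build a CREATE TABLE statement from the dataframe's schema
ddl = pd.io.sql.get_schema(df_final, tb_name)
ddl = ddl.replace('TEXT', 'VARCHAR(256)').replace('"', '')

# Step 5: create the table
cs = ctx.cursor()
cs.execute(ddl)

# Step 6: bulk-load the dataframe into the new table
success, nchunks, nrows, _ = write_pandas(ctx, df_final, tb_name)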

Python looping to obtain different dataframes from a SQL database

I'm trying to connect to an SQL database and, within a loop, create separate dataframes for each different instance of Id, containing all the data related to that Id. I've tried a number of ways, without any success so far. I'm pretty new to all of this, so I'm probably making some rookie mistakes.
Attempt 1:
import pandas as pd
import pyodbc

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=Server_name;'
                      'Database=Database;'
                      'UID=Username;'
                      'PWD=password;'
                      'Trusted_Connection=yes;')

Name = ['HR','ZA','PR','FW']

for x in Name:
    SQL = '''
    SELECT *
    FROM Database
    WHERE Id = {x}'''.format(x = x)
    cursor = conn.cursor()
    cursor.execute(SQL)
    df = pd.read_sql_query(SQL)
With this code, I get an 'invalid column name' programming error on the first Name, 'HR'.
Attempt 2:
import pandas as pd
import pyodbc

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=Server_name;'
                      'Database=Database;'
                      'UID=Username;'
                      'PWD=password;'
                      'Trusted_Connection=yes;')

SQL = '''
SELECT *
FROM Database
'''
conn.autocommit = True
cursor.execute(SQL)

for [Id] in cursor:
    df = pd.Dataframe(SQL, conn)
On this code, I get a 'ValueError: too many values to unpack (expected 1)' - on the for statement.
I want to put a lot more code in the for loop so I need it to be set up to work through each Id. I hope that makes sense. Any guidance would be greatly appreciated. Thanks
UPDATE:
Thanks for all the comments/answers. For some reason I just couldn't get it to work in either of the formats above, so I took it back to where I started from; now I understand how to include the syntax for the loop variable. The following now works:
import pandas as pd
import pyodbc

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=Server_name;'
                      'Database=Database;'
                      'UID=Username;'
                      'PWD=password;'
                      'Trusted_Connection=yes;')

Name = ['HR','ZA','PR','FW']

for x in Name:
    SQL = pd.read_sql_query(
        '''
        SELECT *
        FROM Database_table
        WHERE Id = '{x}'
        '''.format(x = x), conn)
    df = pd.DataFrame(SQL)
I think that if you try a variation on your first attempt like:
for x in Name:
    SQL = '''
    SELECT *
    FROM Database
    WHERE Id = ?'''
    df = pd.read_sql_query(SQL, conn, params=[x])
It should probably work :)

How to write MySQL DB data to Excel File using Python?

Below is my code, but it is not writing anything to the file.
import os
import pymysql
import pandas as pd
import csv

host = os.getenv('MYSQL_HOST')
port = os.getenv('MYSQL_PORT')
user = os.getenv('MYSQL_USER')
password = os.getenv('MYSQL_PASSWORD')
database = os.getenv('MYSQL_DATABASE')

conn = pymysql.connect(
    host=host,
    port=int(3306),
    user="root",
    passwd="Pass200",
    db="test_db",
    charset='utf8mb4')

QUERY = 'SELECT * FROM act;'
df = pd.read_sql_query("SELECT * FROM act", conn)
df.tail(10)
The above code successfully prints the SQL DB data. I appended the code below for writing all the data, along with the column names, to an Excel sheet:
cur = conn.cursor()
cur.execute(QUERY)
result = cur.fetchall()
c = csv.writer(open('test.csv', 'w'))
for x in result:
    print(x)
    c.writerow(x)
Kindly assist as I am new to this.
Since you are using pandas, you can use df.to_csv('test.csv').
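Since the question asks for an Excel file specifically, here is a short sketch of both options; the filenames are placeholders, and to_excel needs an Excel writer package such as openpyxl installed:
# Write the DataFrame returned by read_sql_query, including column names
df.to_csv('test.csv', index=False)

# Or write a real Excel workbook instead of a CSV
df.to_excel('test.xlsx', sheet_name='act', index=False)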

creating a pandas dataframe from a database query that uses bind variables

I'm working with an Oracle database. I can do this much:
import pandas as pd
import pandas.io.sql as psql
import cx_Oracle as odb
conn = odb.connect(_user +'/'+ _pass +'#'+ _dbenv)
sqlStr = "SELECT * FROM customers"
df = psql.frame_query(sqlStr, conn)
But I don't know how to handle bind variables, like so:
sqlStr = """SELECT * FROM customers
WHERE id BETWEEN :v1 AND :v2
"""
I've tried these variations:
params = (1234, 5678)
params2 = {"v1":1234, "v2":5678}
df = psql.frame_query((sqlStr,params), conn)
df = psql.frame_query((sqlStr,params2), conn)
df = psql.frame_query(sqlStr,params, conn)
df = psql.frame_query(sqlStr,params2, conn)
The following works:
curs = conn.cursor()
curs.execute(sqlStr, params)
df = pd.DataFrame(curs.fetchall())
df.columns = [rec[0] for rec in curs.description]
but this solution is just...inelegant. If I can, I'd like to do this without creating the cursor object. Is there a way to do the whole thing using just pandas?
Try using pandas.io.sql.read_sql_query. I used it with pandas version 0.20.1 and it worked:
import pandas as pd
import pandas.io.sql as psql
import cx_Oracle as odb
conn = odb.connect(_user +'/'+ _pass +'#'+ _dbenv)
sqlStr = """SELECT * FROM customers
WHERE id BETWEEN :v1 AND :v2
"""
pars = {"v1":1234, "v2":5678}
df = psql.read_sql_query(sqlStr, conn, params=pars)
As far as I can tell, pandas expects that the SQL string be completely formed prior to passing it along. With that in mind, I would (and always do) use string interpolation:
params = (1234, 5678)
sqlStr = """
SELECT * FROM customers
WHERE id BETWEEN %d AND %d
""" % params
print(sqlStr)
which gives
SELECT * FROM customers
WHERE id BETWEEN 1234 AND 5678
So that should feed into psql.frame_query just fine. (it does in my experience with postgres, mysql, and sql server).

Retrieving Data from SQL Using pyodbc

I am trying to retrieve data from an SQL server using pyodbc and print it in a table using Python. However, I can only seem to retrieve the column name and the data type and stuff like that, not the actual data values in each row of the column.
Basically I am trying to replicate an Excel sheet that retrieves server data and displays it in a table. I am not having any trouble connecting to the server, just that I can't seem to find the actual data that goes into the table.
Here is an example of my code:
import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SQLSRV01;DATABASE=DATABASE;UID=USER;PWD=PASSWORD')
cursor = cnxn.cursor()
cursor.execute("SELECT * FROM sys.tables")
tables = cursor.fetchall()
#cursor.execute("SELECT WORK_ORDER.TYPE,WORK_ORDER.STATUS, WORK_ORDER.BASE_ID, WORK_ORDER.LOT_ID FROM WORK_ORDER")
for row in cursor.columns(table='WORK_ORDER'):
    print row.column_name
    for field in row:
        print field
However the result of this just gives me things like the table name, the column names, and some integers and 'None's and things like that that aren't of interest to me:
STATUS_EFF_DATE
DATABASE
dbo
WORK_ORDER
STATUS_EFF_DATE
93
datetime
23
16
3
None
0
None
None
9
3
None
80
NO
61
So I'm not really sure where I can get the values to fill up my table. Should it be in table='WORK_ORDER', or could it be under a different table name? Is there a way of printing the data that I am just missing?
Any advice or suggestions would be greatly appreciated.
You are so close!
import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SQLSRV01;DATABASE=DATABASE;UID=USER;PWD=PASSWORD')
cursor = cnxn.cursor()
cursor.execute("SELECT WORK_ORDER.TYPE,WORK_ORDER.STATUS, WORK_ORDER.BASE_ID, WORK_ORDER.LOT_ID FROM WORK_ORDER")
for row in cursor.fetchall():
    print row
(the "columns()" function collects meta-data about the columns in the named table, as opposed to the actual data).
You could try using Pandas to retrieve the information and get it as a dataframe:
import pyodbc
import pandas as pd

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SQLSRV01;DATABASE=DATABASE;UID=USER;PWD=PASSWORD')

# Copy to Clipboard for paste in Excel sheet
def copia(argumento):
    df = pd.DataFrame(argumento)
    df.to_clipboard(index=False, header=True)

tableResult = pd.read_sql("SELECT * FROM YOURTABLE", cnxn)

# Copy to Clipboard
copia(tableResult)

# Or create an Excel file with the results
df = pd.DataFrame(tableResult)
df.to_excel("FileExample.xlsx", sheet_name='Results')
I hope this helps!
Cheers!
In order to receive the actual data stored in the table, you should use one of the fetch...() functions or use the cursor as an iterator (i.e. "for row in cursor: ..."). This is described in the documentation:
cursor.execute("select user_id, user_name from users where user_id < 100")
rows = cursor.fetchall()
for row in rows:
    print row.user_id, row.user_name
Just do this:
import pandas as pd
import pyodbc

cnxn = pyodbc.connect("Driver={SQL Server};"
                      "Server=SERVER_NAME;"
                      "Database=DATABASE_NAME;"
                      "Trusted_Connection=yes")
df = pd.read_sql("SELECT * FROM myTableName", cnxn)
df.head()
Instead of using the pyodbc library, use the pypyodbc library... This worked for me.
import pypyodbc

conn = pypyodbc.connect("DRIVER={SQL Server};"
                        "SERVER=server;"
                        "DATABASE=database;"
                        "Trusted_Connection=yes;")
cursor = conn.cursor()
cursor.execute('SELECT * FROM [table]')
for row in cursor:
    print('row = %r' % (row,))
import pyodbc

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=db-server;'
                      'Database=db;'
                      'Trusted_Connection=yes;')
cursor = conn.cursor()
sql = "SELECT * FROM [mytable]"
cursor.execute(sql)
for r in cursor:
    print(r)
Instead of pyodbc, you can try pymssql. For more information follow this link: https://stackoverflow.com/a/70445445/8614314.
import pandas as pd
import pymssql

con = pymssql.connect(<connection to the server and db>)
query = "<Your query>"
df = pd.read_sql(query, con)   # read_sql runs the query itself, no separate cursor needed
con.close()
The upvoted answer didn't work for me. It was fixed by editing the connection line as follows (pass the parameters as keyword arguments instead of one semicolon-separated string):
import pyodbc

cnxn = pyodbc.connect(DRIVER='{SQL Server}', SERVER='SQLSRV01', DATABASE='DATABASE', UID='USER', PWD='PASSWORD')
cursor = cnxn.cursor()
cursor.execute("SELECT WORK_ORDER.TYPE,WORK_ORDER.STATUS, WORK_ORDER.BASE_ID, WORK_ORDER.LOT_ID FROM WORK_ORDER")
for row in cursor.fetchall():
    print row
