How to copy entire SQL Server table into CSV including column headers?

Summary
I have a Python program (2.7) that connects to a SQL Server database using SQLAlchemy. I want to copy the entire SQL table into a local CSV file, including column headers. I'm new to SQLAlchemy (version 0.7), and so far I'm able to dump the entire table to CSV, but I have to explicitly list my column headers.
Question
How do I copy an entire SQL table into a local CSV file (including column headers) without explicitly typing in the column names? The reason is that I want to avoid changing the code whenever the table's columns change.
Code
import csv
import sqlalchemy

# Setup connection info, assume database connection info is correct
SQLALCHEMY_CONNECTION = (DB_DRIVER_SQLALCHEMY + '://'
    + DB_UID + ":" + DB_PWD + "@" + DB_SERVER + "/" + DB_DATABASE
)
engine = sqlalchemy.create_engine(SQLALCHEMY_CONNECTION, echo=True)
metadata = sqlalchemy.MetaData(bind=engine)
vw_AllCenterChatOverview = sqlalchemy.Table(
    'vw_AllCenterChatOverview', metadata, autoload=True)
metadata.create_all(engine)
conn = engine.connect()
# Run the SQL Select Statement
result = conn.execute("""SELECT * FROM
    [LifelineChatDB].[dbo].[vw_AllCenterChatOverview]""")
# Open file 'output.csv' and write SQL query contents to it
f = csv.writer(open('output.csv', 'wb'))
f.writerow(['StartTime', 'EndTime', 'Type', 'ChatName', 'Queue', 'Account',
            'Operator', 'Accepted', 'WaitTimeSeconds', 'PreChatSurveySkipped',
            'TotalTimeInQ', 'CrisisCenterKey'])  # where I explicitly list the table headers
for row in result:
    try:
        f.writerow(row)
    except UnicodeError:
        print "Error running this line ", row
result.close()
Table Structure
In my example, 'vw_AllCenterChatOverview' is the table. Here are the table headers:
StartTime, EndTime, Type, ChatName, Queue, Account, Operator, Accepted, WaitTimeSeconds, PreChatSurveySkipped, TotalTimeInQ, CrisisCenterKey
Thanks in advance!

Use ResultProxy.keys:
# Run the SQL Select Statement
result = conn.execute("""SELECT * FROM
    [LifelineChatDB].[dbo].[vw_AllCenterChatOverview]""")
# Get column names
column_names = result.keys()
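Putting it together: since ResultProxy.keys() returns the column names in select-list order, the header row can come straight from the result. A minimal sketch, reusing the conn and view from the question (Python 2.7, hence the 'wb' file mode):

import csv

result = conn.execute("""SELECT * FROM
    [LifelineChatDB].[dbo].[vw_AllCenterChatOverview]""")
with open('output.csv', 'wb') as out:
    writer = csv.writer(out)
    writer.writerow(result.keys())  # header row taken from the result itself
    for row in result:
        writer.writerow(row)
result.close()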

Related

How to insert data frame result into one specific column in python?

I have a DataFrame result which I need to insert into an existing table row by row. How do I insert the result into the specific column named imgtext?
(table structure)
In SQL I know I can write the query as:
INSERT INTO tableName(imgtext) VALUES('Learn MySQL INSERT Statement');
Python script:
This code takes URLs from a CSV, scrapes some data with the help of BeautifulSoup, and saves the resulting data to a CSV.
Problem:
Rather than saving to CSV, how do I insert the resulting data into the SQL table, in the specific column named imgtext?
Seeking solution:
How do I process the CSV data using a DataFrame so the result is inserted into SQL rather than CSV?
import io
import pathlib

import pandas as pd
import requests
from bs4 import BeautifulSoup
from PIL import Image
import pytesseract as pt

img_text_list = []
df1 = pd.DataFrame(columns=['imgtext'])
img_formats = [".jpg", ".jpeg"]
df = pd.read_csv("urls.csv")
urls = df["urls"].tolist()
for y in urls:
    response = requests.get(y)
    soup = BeautifulSoup(response.text, 'html.parser')
    img_tags = soup.find_all('img', class_='pick')
    img_srcs = ["https://myimpact.in/" + img['src'].replace('\\', '/')
                if img.has_attr('src') else '-' for img in img_tags]
    for count, x in enumerate(img_srcs):
        if x != '-':
            if pathlib.Path(x).suffix in img_formats:
                response = requests.get(x)
                img = Image.open(io.BytesIO(response.content))
                text = pt.image_to_string(img, lang="hin")
                # how to insert this text value into sql table column name - imgtext
                img_text_list.append(text)
df1['imgtext'] = img_text_list  # column name must match the one declared above
df1.to_csv('data.csv', encoding='utf-8')
To add values from a CSV to your SQL table you will need to use a Python SQL driver (pyodbc). Please see the sample code for connecting Python to SQL.
Sample code:
import pyodbc
import pandas as pd

server = 'yourservername'
database = 'yourdatabasename'
username = 'username'
password = 'yourpassword'
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
cursor = cnxn.cursor()
# Insert Dataframe into SQL Server:
for index, row in df.iterrows():
    cursor.execute("Insert your QUERY here")
cnxn.commit()
cursor.close()
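For the question's imgtext column, the placeholder query might look like the following. This is a sketch: the table name img_table is an assumption, and pyodbc uses ? as its parameter marker.

# Hypothetical concrete version of the loop above; img_table is an assumed name.
for index, row in df1.iterrows():
    cursor.execute("INSERT INTO img_table (imgtext) VALUES (?)", row['imgtext'])
cnxn.commit()
cursor.close()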
Prerequisite:
Please install the pyodbc package: https://mkleehammer.github.io/pyodbc/
Reference:
https://learn.microsoft.com/en-us/sql/machine-learning/data-exploration/python-dataframe-sql-server?view=sql-server-ver16

How do I create a csv file using Python from sybase db table depending on a value in a column of that table?

I am new to Python and am trying to write a script that connects to a Sybase ASE database and copies the data present in a table into CSV files of a specific size, depending on one column of the table (index).
Code that I have tried:
server = 'XYZ'
database = 'user_details'
username = ''
password = ''
# have used SQL Express for now
import pyodbc
import csv

db = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
c = db.cursor()
c.execute("SELECT * FROM userinfo")
list1 = c.fetchall()
# print(list1)
cursor = db.cursor()
cursor.execute("SELECT * FROM userinfo;")
with open("C:\\Users\\ABC\\Desktop\\out.csv", "w", newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow([i[0] for i in cursor.description])  # write headers
    csv_writer.writerows(cursor)
Now I want to write the CSV in batches: for example, the first 100 records of the table in one file (file_1.csv) and the next 100 records in another file (file_2.csv), depending on the output of the SELECT statement.
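One way to do this (a sketch, not an accepted answer; it reuses the db connection from the code above): fetch the result in chunks of 100 rows with fetchmany and start a new file for each chunk.

import csv

cursor = db.cursor()
cursor.execute("SELECT * FROM userinfo;")
headers = [i[0] for i in cursor.description]

batch_size = 100
file_no = 1
while True:
    rows = cursor.fetchmany(batch_size)
    if not rows:
        break
    with open("file_%d.csv" % file_no, "w", newline='') as csv_file:
        csv_writer = csv.writer(csv_file)
        csv_writer.writerow(headers)  # repeat the header row in every file
        csv_writer.writerows(rows)
    file_no += 1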

Python: Error Importing data from CSV to Postgres (ERROR: invalid input syntax for integer:)

I have written a piece of Python code that should copy the CSV data to the table I created to host the data. Here is the code:
def sql_copy_command(csv_file, schema, database, table, delimiter=',', header=True):
    if header:
        sql_command = """COPY "{schema}_{tbl}" FROM '{the_csv_file}' DELIMITER '{dlm}' CSV HEADER;""".format(
            the_csv_file=csv_file, db=database, tbl=table, dlm=delimiter, schema=schema)
    else:
        sql_command = """COPY "{schema}_{tbl}" FROM '{the_csv_file}' DELIMITER '{dlm}' CSV;""".format(
            the_csv_file=csv_file, db=database, tbl=table, dlm=delimiter, schema=schema)
    return sql_command
This throws the following error:
DataError: invalid input syntax for integer: "Visa"
CONTEXT: COPY insight_transaction, line 2, column id: "Visa"
It seems to me that instead of the account_type, which is the first field in my model, postgres expects to see ID, which is the first column in the table. Given that the ID is automatically generated, I do not know how to address this in my code.
Specify the columns to copy to:
copy my_schema.my_table (account_type, col5, col2) from 'the_csv_file' csv
https://www.postgresql.org/docs/current/static/sql-copy.html
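Applied to the question's helper, that means emitting an explicit column list so COPY skips the auto-generated id column. A sketch only: the columns parameter is new, and the column names in the usage example are hypothetical.

# Sketch: extend the helper with a column list so COPY does not target id.
def sql_copy_command(csv_file, schema, table, columns, delimiter=',', header=True):
    col_list = ", ".join(columns)
    return """COPY "{schema}_{tbl}" ({cols}) FROM '{the_csv_file}' DELIMITER '{dlm}' CSV{hdr};""".format(
        schema=schema, tbl=table, cols=col_list, the_csv_file=csv_file,
        dlm=delimiter, hdr=" HEADER" if header else "")

# e.g. sql_copy_command('/tmp/data.csv', 'insight', 'transaction',
#                       ['account_type', 'amount'])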

I need to export a sql table into a pipe delimited text file including the column names as well using python

I am currently using the code below; it exports the data comma-separated and without the column names.
Also, there is a problem with the date columns in the table: they are returned as "datetime.date(1925, 1, 24)", where the actual date in the table is "1925-01-24".
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='username', passwd='password', port=port, db='DBNAME')
cursor = conn.cursor()
query = """SELECT * FROM myschema.mytable;"""
cursor.execute(query)
FILE = cursor.fetchall()
with open('FILE.txt', 'w') as f:
    for row in FILE:  # iterate the fetched rows; the cursor is already exhausted
        f.write("%s\n" % str(row))
Why use python at all?
SELECT 'column name1', 'column name2'
UNION ALL
SELECT * FROM myschema.mytable INTO OUTFILE '/tmp/mytable.csv' FIELDS TERMINATED BY '|';
Just type this in the mysql console. Note: if you get an error from this query saying something about secure file privileges, run
SHOW VARIABLES LIKE "secure_file_priv";
And then use the specified location instead of '/tmp' in the previous query.
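If you do want to stay in Python, here is a minimal sketch under the question's MySQLdb setup: it takes the header names from cursor.description and writes pipe-delimited lines; calling str() on each value also renders datetime.date as plain '1925-01-24' text.

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='username', passwd='password',
                       port=port, db='DBNAME')
cursor = conn.cursor()
cursor.execute("SELECT * FROM myschema.mytable;")
with open('FILE.txt', 'w') as f:
    f.write('|'.join(col[0] for col in cursor.description) + '\n')  # header row
    for row in cursor:
        f.write('|'.join(str(v) for v in row) + '\n')  # str() turns dates into ISO text
conn.close()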

Python: How to fill/append data from python loop to SQL table?

I created a Python script in PyCharm that works like a charm... But these days I'm sensing that I could have a problem with the size of the monthly .csv file, and I would also like to analyze the data using SQL from Python, so I could automate the process and make charts and pies from those queries.
So instead of exporting to CSV I would like to append to an SQL table.
Here is the part of the code that exports data to .csv:
for content in driver.find_elements_by_class_name('companychatroom'):
    people = content.find_element_by_xpath('.//div[@class="theid"]').text
    if people != "Monthy" and people != "Python":
        pass
    mybook = open(r'C:\Users\Director\PycharmProjects\MyProjects\Employees\archive' + datetime.now().strftime("%m_%y") + '.csv', 'a')
    mybook.write('%20s,%20s\n' % (people, datetime.now().strftime("%d/%m/%y %H:%M")))
    mybook.close()
================== EDIT: ==============
I tried sqlite3 and this is what I've managed to write so far, and it works... But how do I append data to the SQL table? With INSERT INTO it always overwrites the previous data, and it shouldn't, should it?
import sqlite3
from datetime import datetime

sqlite_file = (r"C:\Users\Director\PycharmProjects\MyProjects\Employees\MyDatabase.db")
conn = sqlite3.connect(sqlite_file)
cursor = conn.cursor()
table_name = 'Archive' + datetime.now().strftime("%m_%y")
sql = 'CREATE TABLE IF NOT EXISTS ' + table_name + '("first_name" varchar NOT NULL, "Currdat" date NOT NULL)'
cursor.execute(sql)
sql = 'INSERT INTO ' + table_name + '(first_name, Currdat) VALUES ("value1", CURRENT_TIMESTAMP);'
cursor.execute(sql)
sql = 'SELECT * FROM Archive06_16'
for row in cursor.execute(sql):
    print(row)
cursor.close()
Thanks in advance,
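A likely explanation (an observation on the code above, not an accepted answer): sqlite3 opens a transaction implicitly, and inserts that are never committed are rolled back when the connection closes, which can look like the table being overwritten on every run. Committing after the INSERT makes the rows accumulate:

sql = 'INSERT INTO ' + table_name + '(first_name, Currdat) VALUES ("value1", CURRENT_TIMESTAMP);'
cursor.execute(sql)
conn.commit()  # without this, the insert is rolled back when the script exits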
