So I have a working piece of code which creates and modifies data in a SQL table. I now want to transfer all the data in the SQL table to an Excel file. Which libraries would I use, and what functions in those libraries would I use?
An example using SQLite: the database file is memory.db and the table is named table1.
import csv
import os
import sqlite3

def db2csv(file, table):
    con = sqlite3.connect("memory.db")
    cur = con.cursor()
    # Create the parent directory of the output file if it does not exist
    # (makedirs on the file path itself would create a directory with that
    # name and make the open() below fail)
    os.makedirs(os.path.dirname(file) or ".", exist_ok=True)
    with open(file, 'w', newline='') as csvfile:
        spamwriter = csv.writer(csvfile, delimiter=';', quotechar='|',
                                quoting=csv.QUOTE_MINIMAL)
        # Table names cannot be bound as query parameters, so the name is
        # interpolated here; only do this with trusted input
        for row in cur.execute('SELECT * FROM {}'.format(table)):
            spamwriter.writerow(row)
    con.close()  # a SELECT does not need commit()
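Since the question asks for an actual Excel file rather than a CSV, one common route is pandas: `read_sql_query` pulls the table into a DataFrame and `to_excel` writes the .xlsx. This needs an Excel engine such as openpyxl installed (`pip install openpyxl`). A minimal sketch, using an in-memory demo table so it is self-contained:

```python
import sqlite3

import pandas as pd

# Demo database standing in for memory.db, so the sketch is self-contained
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER, name TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)", [(1, "a"), (2, "b")])

# Pull the whole table into a DataFrame, then write it as .xlsx
df = pd.read_sql_query("SELECT * FROM table1", con)
df.to_excel("table1.xlsx", index=False)  # requires openpyxl (or xlsxwriter)
con.close()
```

For a real database, replace the connection with one to your own file and drop the demo inserts.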
I have a .csv file to be loaded into a Snowflake table using the Python API.
My question is how to load one row at a time, so I can check whether every row is successfully loaded.
Although it's possible, I do not recommend doing single-row inserts into Snowflake:
import csv

import snowflake.connector

ctx = snowflake.connector.connect(
    ...
)
cursor = ctx.cursor()
with open('test.csv') as f:
    reader = csv.reader(f)
    for row in reader:
        cursor.execute("""INSERT INTO table1 (col1, col2, col3)
                          VALUES (%s, %s, %s)
                          """, row)
cursor.close()
You can validate the files before the COPY command:
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#validating-staged-files
And you can also check for errors after the COPY command:
https://docs.snowflake.com/en/sql-reference/functions/validate.html
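A sketch in SQL of the staged-COPY-with-validation flow those links describe; the stage, file path, and table names here are placeholders for your own:

```sql
-- Upload the local file to the user stage (run from SnowSQL, or via execute() in Python)
PUT file:///tmp/test.csv @~/staged;

-- Dry run: report errors without loading any data
COPY INTO table1 FROM @~/staged
  FILE_FORMAT = (TYPE = CSV)
  VALIDATION_MODE = 'RETURN_ERRORS';

-- Actual load; ON_ERROR controls how bad rows are handled
COPY INTO table1 FROM @~/staged
  FILE_FORMAT = (TYPE = CSV)
  ON_ERROR = 'CONTINUE';

-- Inspect rows rejected by the most recent COPY
SELECT * FROM TABLE(VALIDATE(table1, JOB_ID => '_last'));
```

This gives per-row error reporting without paying the round-trip cost of one INSERT per row.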
I'm trying to insert some values from a CSV file through Python, but I'm getting a "no viable alternative at input" error. When I specify the values instead of %s the code works, but when I try to use %s it fails. This is my code:
import csv

import pyodbc

conn = pyodbc.connect("myconnection")
cursor = conn.cursor()
with open('/Users/user/Desktop/TEST.csv') as f:
    reader = csv.reader(f)
    for row in reader:
        cursor.execute("INSERT INTO mytable (user_id, email) VALUES(%s,%s)", row)
# close the connection to the database.
conn.commit()
cursor.close()
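The likely cause of the error: pyodbc uses the DB-API `qmark` paramstyle, so placeholders are written as `?`, not `%s`. A minimal sketch of the corrected insert loop, demonstrated with sqlite3 (which shares the `?` placeholder style) and in-memory data, since the ODBC connection string above is environment-specific:

```python
import csv
import io
import sqlite3

# In-memory database stands in for the ODBC connection
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE mytable (user_id TEXT, email TEXT)")

# Stand-in for the CSV file on disk
csv_data = io.StringIO("1,a@example.com\n2,b@example.com\n")
reader = csv.reader(csv_data)
for row in reader:
    # '?' placeholders instead of '%s'
    cursor.execute("INSERT INTO mytable (user_id, email) VALUES (?, ?)", row)
conn.commit()

print(cursor.execute("SELECT COUNT(*) FROM mytable").fetchone()[0])  # → 2
```

With pyodbc the `execute` call is the same; only the connect line differs.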
I'm using a simple script to pull data from an Oracle DB and write the data to a CSV file using the CSV writer.
The table I'm querying contains about 25k records. The script runs perfectly, except that it is very slow: it takes 25 minutes to finish.
How could I speed this up by altering the code? Any tips from you heroes are welcome.
#
# Load libraries
#
from __future__ import print_function
import csv
import time

import cx_Oracle
#
# Connect to Oracle and select the proper data
#
con = cx_Oracle.connect('secret')
cursor = con.cursor()
sql = "select * from table"
#
# Determine how and where the filename is created
#
path = "c:\\path\\"
filename = time.strftime("%Y%m%d-%H%M%S")
extensionname = ".csv"
csv_file = open(path + filename + extensionname, "w")
writer = csv.writer(csv_file, delimiter=',', lineterminator="\n",
                    quoting=csv.QUOTE_NONNUMERIC)
cursor.execute(sql)
for row in cursor:
    writer.writerow(row)
cursor.close()
con.close()
csv_file.close()
Did you try using the writerows function from the csv module? Instead of writing each record one by one, it lets you write them all at once, which should speed things up.
data = []  # data rows: a list of dicts, e.g. [{"col1": 1, "col2": "a"}, ...]
fieldnames = ["col1", "col2"]  # DictWriter requires the column names
with open('csv_file.csv', 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(data)
Alternatively, you can use the pandas module to write large chunks of data to a CSV file. This method is explained with examples here.
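For the cx_Oracle script above, the win usually comes from fetching rows in batches and writing each batch with `writerows`; with cx_Oracle you would additionally raise `cursor.arraysize` before fetching (the value 5000 below is an assumption to tune for your row size). A sketch of the batched pattern, demonstrated with sqlite3 so it is self-contained:

```python
import csv
import io
import sqlite3

# Demo table with 1000 rows, standing in for the Oracle table
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER, val TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, "row%d" % i) for i in range(1000)])

out = io.StringIO()  # stands in for the output file
writer = csv.writer(out, lineterminator="\n")
cur.execute("SELECT * FROM t")
# With cx_Oracle you would set cur.arraysize = 5000 here so each network
# round trip carries many rows
while True:
    rows = cur.fetchmany(500)   # fetch a batch instead of one row at a time
    if not rows:
        break
    writer.writerows(rows)      # write the whole batch at once

print(out.getvalue().count("\n"))  # → 1000
```

The same loop works unchanged against a cx_Oracle cursor; only the connect line and the output target differ.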
I am new to Python and trying to write some code. I am running a select query, but I am not able to render the data to a CSV file.
This is the psql query:
# \copy (
# SELECT
# sr.imei,
# sensors.label,sr.created_at,
# sr.received_at,
# sr.type_id,
#
But how do I write it in Python to render the result to a CSV file?
Thanking you,
Vikas
sql = "COPY (SELECT * FROM sensor_readings WHERE reading=blahblahblah) TO STDOUT WITH CSV DELIMITER ';'"
with open("/tmp/sensor_readings.csv", "w") as file:
    cur.copy_expert(sql, file)
I think you just need to change the sql for your use, and it should work.
Install psycopg2 via pip install psycopg2, then you need something like this:
import csv
import psycopg2
query = """
SELECT
sr.imei,
sensors.label,sr.created_at,
sr.received_at,
sr.type_id,
sr.data FROM sensor_readings as sr LEFT JOIN sensors on sr.imei = sensors.imei
WHERE sr.imei not like 'test%' AND sr.created_at > '2019-02-01'
ORDER BY sr.received_at desc
"""
conn = psycopg2.connect(database="routing_template", user="postgres", host="localhost", password="xxxx")
cur = conn.cursor()
cur.execute(query)
with open('result.csv', 'w') as f:
    writer = csv.writer(f, delimiter=',')
    for row in cur.fetchall():
        writer.writerow(row)
cur.close()
conn.close()
I used the following Python script to dump a MySQL table to a CSV file. But it was saved in the same folder where the Python script is saved. I want to save it in another folder. How can I do that? Thank you.
print 'Writing database to csv file'
import MySQLdb
import csv
import time
import datetime
import os
currentDate=datetime.datetime.now().date()
user = ''
passwd = ''
host = ''
db = ''
table = ''
con = MySQLdb.connect(user=user, passwd=passwd, host=host, db=db)
cursor = con.cursor()
query = "SELECT * FROM %s;" % table
cursor.execute(query)
with open('Data on %s.csv' % currentDate, 'w') as f:
    writer = csv.writer(f)
    for row in cursor.fetchall():
        writer.writerow(row)
print 'Done'
Change this:
with open('/full/path/tofile/Data on %s.csv' % currentDate ,'w') as f:
This solves your problem X. But you have a problem Y, which is: "How do I efficiently dump CSV data from MySQL without having to write a lot of code?"
The answer to problem Y is SELECT ... INTO OUTFILE.
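A sketch of that server-side export; note the file is written by the MySQL server process on the server's filesystem, and the path and table name here are placeholders (the server's `secure_file_priv` setting may restrict where it can write):

```sql
SELECT *
INTO OUTFILE '/var/lib/mysql-files/data.csv'
  FIELDS TERMINATED BY ','
  ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
FROM mytable;
```

No Python loop is needed at all; the database writes the CSV directly.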