I want to fetch the SQL query from a text file and run it in a Python program. This is my code:
csvfilelist = os.listdir(inputPath)
mycursor = mydb.cursor()
for csvfilename in csvfilelist:
    with open(inputPath + csvfilename, 'r') as csvFile:
        reader = csv.reader(csvFile)
        for row in reader:
            '''r = "INSERT INTO Terminate.RAW VALUES('%s','%s','%s','%s','%s')" %(row[0],row[1],row[2],row[3],row[4],row[5])'''
            try:
                result = mycursor.execute(r)
                mydb.commit()
            except mysql.connector.Error as err:
                print(err)
    csvFile.close()
Say you have an INI file containing the query
[main]
query=INSERT INTO Terminate.RAW VALUES('%s','%s','%s','%s','%s')
you may load it with
import configparser

config = configparser.ConfigParser()
config.read('myfile.ini')
query = config['main']['query']
and later you can call it with
r = query % (row[0],row[1],row[2],row[3],row[4],row[5])
As pointed out in the comments, building the query with "%" string formatting is not a good solution; you should bind your variables when executing the query instead. With mysql-connector the syntax looks like
r = query
mycursor.execute(r, (row[0],row[1],row[2],row[3],row[4],row[5]))
(for binding to work, store the placeholders in the file as plain %s, without the surrounding quotes).
Edit: sorry, I just noticed that your file is JSON, not INI; you wrote that in the title, not in the post. If so, you should use the json module instead of the configparser module.
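A minimal sketch of the JSON variant, assuming the file is a simple object with a "query" key (the file name queries.json and the key name are placeholders):
import json

# assumed file contents: {"query": "INSERT INTO Terminate.RAW VALUES (%s, %s, %s, %s, %s)"}
with open('queries.json', 'r') as f:
    query = json.load(f)['query']

# then bind the row values when executing, as above
mycursor.execute(query, (row[0], row[1], row[2], row[3], row[4]))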
I am new to Python and trying to write some code in it. I am trying to run a select query, but I am not able to render the data to a CSV file.
This is the psql query:
# \copy (
# SELECT
# sr.imei,
# sensors.label,sr.created_at,
# sr.received_at,
# sr.type_id,
#
But how do I write it in Python to render it to a CSV file?
Thanking you,
Vikas
sql = "COPY (SELECT * FROM sensor_readings WHERE reading=blahblahblah) TO STDOUT WITH CSV DELIMITER ';'"
with open("/tmp/sensor_readings.csv", "w") as file:
    cur.copy_expert(sql, file)
I think you just need to change the SQL for your use case, and it should work.
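For context, a fuller hedged sketch of the same approach with psycopg2 (the connection details are placeholders, and the COPY options use the modern parenthesized form):
import psycopg2

# placeholder connection details; adapt to your database
conn = psycopg2.connect(database="mydb", user="postgres", host="localhost", password="xxxx")
cur = conn.cursor()

sql = "COPY (SELECT * FROM sensor_readings) TO STDOUT WITH (FORMAT csv, HEADER, DELIMITER ';')"
with open("/tmp/sensor_readings.csv", "w") as f:
    cur.copy_expert(sql, f)  # the server streams the CSV straight into the local file

cur.close()
conn.close()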
Install psycopg2 via pip install psycopg2, then you need something like this:
import csv
import psycopg2
query = """
SELECT
sr.imei,
sensors.label,sr.created_at,
sr.received_at,
sr.type_id,
sr.data FROM sensor_readings as sr LEFT JOIN sensors on sr.imei = sensors.imei
WHERE sr.imei not like 'test%' AND sr.created_at > '2019-02-01'
ORDER BY sr.received_at desc
"""
conn = psycopg2.connect(database="routing_template", user="postgres", host="localhost", password="xxxx")
cur = conn.cursor()
cur.execute(query)
with open('result.csv', 'w') as f:
    writer = csv.writer(f, delimiter=',')
    for row in cur.fetchall():
        writer.writerow(row)
cur.close()
conn.close()
I used the following Python script to dump a MySQL table to a CSV file, but it was saved in the same folder in which the Python script is saved. I want to save it in another folder. How can I do it? Thank you.
print 'Writing database to csv file'
import MySQLdb
import csv
import time
import datetime
import os
currentDate=datetime.datetime.now().date()
user = ''
passwd = ''
host = ''
db = ''
table = ''
con = MySQLdb.connect(user=user, passwd=passwd, host=host, db=db)
cursor = con.cursor()
query = "SELECT * FROM %s;" % table
cursor.execute(query)
with open('Data on %s.csv' % currentDate, 'w') as f:
    writer = csv.writer(f)
    for row in cursor.fetchall():
        writer.writerow(row)
print 'Done'
Change this:
with open('/full/path/tofile/Data on %s.csv' % currentDate, 'w') as f:
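Equivalently, you can build the target path with os.path.join (os is already imported in your script; the directory name here is just a placeholder):
output_dir = '/full/path/tofile'
output_file = os.path.join(output_dir, 'Data on %s.csv' % currentDate)

with open(output_file, 'w') as f:
    writer = csv.writer(f)
    for row in cursor.fetchall():
        writer.writerow(row)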
This solves your problem X. But you have a problem Y, which is: "How do I efficiently dump CSV data from MySQL without having to write a lot of code?"
The answer to problem Y is SELECT ... INTO OUTFILE.
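A hedged sketch of that approach using the same MySQLdb cursor (the table name and output path are placeholders; note that INTO OUTFILE writes the file on the MySQL server host, and the connecting user needs the FILE privilege):
outfile_query = (
    "SELECT * FROM mytable "
    "INTO OUTFILE '/full/path/tofile/data.csv' "
    "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' "
    "LINES TERMINATED BY '\\n'"
)
cursor.execute(outfile_query)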
I use Python and the vertica-python library to COPY data to a Vertica DB:
connection = vertica_python.connect(**conn_info)
vsql_cur = connection.cursor()
with open("/tmp/vertica-test-insert", "rb") as fs:
vsql_cur.copy( "COPY table FROM STDIN DELIMITER ',' ", fs, buffer_size=65536)
connection.commit()
It inserts data, but only 5 rows, although the file contains more. Could this be related to DB settings, or is it some client issue?
This code works for me:
For JSON
# for json file
with open("D:/SampleCSVFile_2kb/tweets.json", "rb") as fs:
    my_file = fs.read().decode('utf-8')
    cur.copy("COPY STG.unstruc_data FROM STDIN parser fjsonparser()", my_file)
    connection.commit()
For CSV
# for csv file
with open("D:/SampleCSVFile_2kb/SampleCSVFile_2kb.csv", "rb") as fs:
    my_file = fs.read().decode('utf-8', 'ignore')
    cur.copy("COPY STG.unstruc_data FROM STDIN PARSER FDELIMITEDPARSER (delimiter=',', header='false')", my_file)  # buffer_size=65536
    connection.commit()
Very likely that you have rows getting rejected. Assuming you are using 7.x, you can add:
[ REJECTED DATA {'path' [ ON nodename ] [, ...] | AS TABLE 'reject_table'} ]
You can also query this after the copy execution to see the summary of results:
SELECT GET_NUM_ACCEPTED_ROWS(), GET_NUM_REJECTED_ROWS();
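For example, a hedged sketch of the original copy call with a reject table added (the table name table_rejects is a placeholder; the REJECTED DATA AS TABLE form needs Vertica 7.x or later):
with open("/tmp/vertica-test-insert", "rb") as fs:
    vsql_cur.copy(
        "COPY table FROM STDIN DELIMITER ',' "
        "REJECTED DATA AS TABLE table_rejects",  # rejected rows are stored here for inspection
        fs,
        buffer_size=65536,
    )
connection.commit()

# then check how many rows were accepted and rejected in this session
vsql_cur.execute("SELECT GET_NUM_ACCEPTED_ROWS(), GET_NUM_REJECTED_ROWS();")
print(vsql_cur.fetchall())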
I have a CSV file without headers and am trying to create a SQL table from certain columns in the file. I tried the solutions given here: Importing a CSV file into a sqlite3 database table using Python,
but keep getting the error that col1 is not defined. I then tried inserting headers in my CSV file and am still getting a KeyError.
Any help is appreciated! (I am not very familiar with SQL at all)
If the .csv file has no headers, you don't want to use DictReader; DictReader assumes line 1 is a set of headers and uses them as keys for every subsequent line. This is probably why you're getting KeyErrors.
A modified version of the example from that link:
import csv, sqlite3
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (col1, col2);")
with open('data.csv', 'r') as fin:  # csv.reader wants text mode in Python 3 ('rb' was Python 2)
    dr = csv.reader(fin)
    dicts = ({'col1': line[0], 'col2': line[1]} for line in dr)
    to_db = ((i['col1'], i['col2']) for i in dicts)
    cur.executemany("INSERT INTO t (col1, col2) VALUES (?, ?);", to_db)

con.commit()
The code below will read all the CSV files from the path and load their data into a table. Note that LOAD DATA LOCAL INFILE is MySQL syntax (SQLite has no equivalent statement), so this variant needs a MySQL connection; a plain sqlite3 sketch follows below.
import glob
import mysql.connector

cnx = mysql.connector.connect(user='user', host='localhost', password='password',
                              database='dbname')
cursor = cnx.cursor(buffered=True)

path = 'path/*/csv'
for files in glob.glob(path + "/*.csv"):
    add_csv_file = """LOAD DATA LOCAL INFILE '%s' INTO TABLE tablename FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' IGNORE 1 LINES;""" % (files)
    print("add_csv_file: %s" % files)
    cursor.execute(add_csv_file)
    cnx.commit()

cursor.close()
cnx.close()
Let me know if this works.
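For an actual sqlite3 database, which has no LOAD DATA statement, here is a hedged sketch of the same loop done with the csv module and executemany (the table name, column names, and paths are placeholders):
import csv
import glob
import sqlite3

con = sqlite3.connect('dbname.db')
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS t (col1, col2);")

for csv_path in glob.glob('path/to/csvs/*.csv'):
    with open(csv_path, 'r', newline='') as fin:
        dr = csv.reader(fin)
        next(dr)  # skip the header row, mirroring IGNORE 1 LINES
        cur.executemany("INSERT INTO t (col1, col2) VALUES (?, ?);",
                        ((line[0], line[1]) for line in dr))
    con.commit()

cur.close()
con.close()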
I'm trying to write a script to import a database file. I wrote the script to export the file like so:
import sqlite3
con = sqlite3.connect('../sqlite.db')
with open('../dump.sql', 'w') as f:
    for line in con.iterdump():
        f.write('%s\n' % line)
Now I want to be able to import that database. I have tried:
import sqlite3
con = sqlite3.connect('../sqlite.db')
f = open('../dump.sql','r')
str = f.read()
con.execute(str)
but I'm not allowed to execute more than one statement. Is there a way to get it to run an SQL script directly?
sql = f.read() # watch out for built-in `str`
cur.executescript(sql)
Documentation.
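Put together, a minimal sketch of the import side, reusing the paths from the question:
import sqlite3

con = sqlite3.connect('../sqlite.db')
cur = con.cursor()

with open('../dump.sql', 'r') as f:
    sql = f.read()

cur.executescript(sql)  # runs every statement in the dump
con.commit()
con.close()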
Try using
con.executescript(str)
Documentation
Connection.executescript(sql_script)
This is a nonstandard shortcut that creates an intermediate cursor object
by calling the cursor method, then calls the cursor’s executescript
method with the parameters given.
Or create the cursor first
import sqlite3
con = sqlite3.connect('../sqlite.db')
f = open('../dump.sql','r')
str = f.read()
cur = con.cursor()
cur.executescript(str)