I've created a Python script that sets up an SQLite database and creates a table. Now I'm trying to read values into the table from a .txt file, as follows:
import sqlite3
conn = sqlite3.connect('mydatabase.db')
c = conn.cursor()
c.execute('''CREATE TABLE mytable
(var1 TEXT,
var2 REAL)''')
c.execute('separator "," ')
c.execute('import records.txt myTable ')
conn.commit()
for row in c.execute('SELECT * FROM myTable'):
    print(row)
conn.close()
The records.txt file looks like:
item1, 8.8
item2, 9.1
When I run the Python code from the command line I get:
c.execute('separator "," ')
sqlite3.OperationalError: near "separator": syntax error
How do I use the separator statement here? And presumably the same problem will occur for the import statement.
How do I get this code working?
I don't think this is possible with the Python module sqlite3. The .separator command (for me it only works with the leading dot), as well as .import, are features of the sqlite3 CLI frontend.
If you want to use these, you could use subprocess to invoke the commands:
import subprocess
p = subprocess.Popen(["sqlite3", "mydatabase.db"], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.communicate(b"""
CREATE TABLE mytable
(var1 TEXT,
var2 REAL);
.separator ","
.import records.txt myTable
""")
c.execute('separator "," ')
should be replaced with:
sql_statement = ''' INSERT INTO mytable(var1, var2) VALUES (?, ?) '''
values = ('text', 343.4)
c.execute(sql_statement, values)
The question marks are the placeholders for the values you want to insert. Note that you should pass these values in a tuple. If you want to insert a single value into your database, the values argument should look like this:
values = (text,)
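If you have many rows to insert, cursor.executemany() accepts the same placeholder query together with an iterable of tuples; a minimal sketch using the two records from the question:
rows = [('item1', 8.8), ('item2', 9.1)]
c.executemany('INSERT INTO mytable(var1, var2) VALUES (?, ?)', rows)
conn.commit()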
A working solution:
import sqlite3
# create database and initialize cursor
conn = sqlite3.connect('mydatabase.db')
c = conn.cursor()
# create table if not exists
c.execute('''CREATE TABLE IF NOT EXISTS mytable(var1 TEXT, var2 REAL)''')
sql_insert = ''' INSERT INTO mytable(var1, var2) VALUES (?, ?) '''
with open('records.txt', 'r') as fr:
    for line in fr.readlines():
        # parse records.txt: strip the newline and split on the comma
        line = line.replace('\n', '').split(',')
        t, f = line
        c.execute(sql_insert, (t, float(f)))
conn.commit()
sql_select = ''' SELECT * FROM mytable '''
for row in c.execute(sql_select):
print(row)
conn.close()
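A small variation of the read loop: the csv module takes care of quoted fields and, with skipinitialspace=True, of the space after each comma (a sketch reusing the same cursor and sql_insert):
import csv

with open('records.txt', 'r', newline='') as fr:
    reader = csv.reader(fr, skipinitialspace=True)  # 'item1, 8.8' -> ['item1', '8.8']
    for t, f in reader:
        c.execute(sql_insert, (t, float(f)))
conn.commit()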
The solution above will work fine. Just to make it a little more readable and easier to debug you could also go like this:
import sqlite3
conn = sqlite3.connect('mydatabase.db')
c = conn.cursor()
c.execute('''CREATE TABLE mytable (var1 TEXT, var2 REAL)''')
with open("records.txt", "r") as f:
rows = f.readlines()
for row in rows:
fields = row.split(',')
c.execute(f'INSERT INTO mytable (var1, var2)'\
f"VALUES ('{fields[0]}','{fields[1]}')")
conn.commit()
for row in c.execute('SELECT * FROM myTable'):
print(row)
conn.close()
This way you open the text file with Python and read the lines. For each line you then separate the fields and put them into the INSERT statement right away. Quick and easy!
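The same loop also works with ? placeholders, which keeps it just as short and avoids trouble if a field ever contains a quote character (a sketch of the loop only):
for row in rows:
    fields = row.split(',')
    c.execute('INSERT INTO mytable (var1, var2) VALUES (?, ?)',
              (fields[0], float(fields[1])))
conn.commit()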
Good luck!
I am trying to create a function that reads from my CSV file and inserts the rows into an SQL table.
This is my code:
def transaction_import(path):
    with open(path, 'r') as f:
        reader = csv.reader(f)
        columns = next(reader)
        query = 'insert into transactions({0}) values ({1})'
        query = query.format(','.join(columns), ','.join('?' * len(columns)))
        cursor = conn.cursor()
        for data in reader:
            cursor.execute(query, data)

transactions = transaction_import('../data/budapest.csv')
c.execute("select * from transactions")
transactions = c.fetchall()
for row in transactions:
    print(row)
What I want to do is read several transactions from different CSVs. All of them have the same structure and column names, e.g. transactions = transaction_import('../Data/Source/ny.csv') and transactions = transaction_import('../Data/Source/london.csv').
When I run it I get this error:
File "/Users/.../main.py", line 82, in transaction_import
    cursor.execute(query, (data,))
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 4, and there are 1 supplied.
The traceback shows cursor.execute(query, (data,)): the extra comma and parentheses wrap the whole row in a one-element tuple, so only one binding is supplied. Call cursor.execute(query, data) so that each column value maps to its own ? placeholder.
Also, Google is your friend; see: sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 74 supplied
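Put differently, the relevant lines should look like this (a sketch of only the part that changes, plus a commit so the inserts are persisted):
for data in reader:
    cursor.execute(query, data)  # data is a list with one value per ? placeholder
conn.commit()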
I have a CSV file from which I am trying to load data into a pysqlite database. I am not able to find a way to extract the first row of the file and turn it into the column headers of a table automatically. I have to enter the names "manually" in the code itself, which is OK for 1-2 columns but becomes cumbersome with tens or hundreds of columns. Here is my code:
import sqlite3
import csv
f_n = 'test_data_1.csv'
f = open( f_n , 'r' )
csv_reader = csv.reader(f)
header = next( csv_reader )
sqlite_file = 'survey_test_db.sqlite'
table_name01 = 'test_survey_1'
field_01 = 'analyst_name'
field_type_01 = 'text'
field_02 = 'question_name'
field_type_02 = 'text'
conn = sqlite3.connect( sqlite_file )
c = conn.cursor()
c.execute('CREATE TABLE {tn}({nf_01} {ft_01}, {nf_02} {ft_02})'
          .format(tn=table_name01, nf_01=field_01, ft_01=field_type_01, nf_02=field_02, ft_02=field_type_02))
for row in csv_reader:
    c.execute("INSERT INTO test_survey_1 VALUES (?,?)", row)
f.close()
for row in c.execute('SELECT * FROM test_survey_1'):
    print(row)
conn.commit()
conn.close()
c.execute('CREATE TABLE {tn}({fieldlist})'.format(
    tn=table_name01,
    fieldlist=', '.join('{} TEXT'.format(name) for name in header),
))
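The matching INSERT can be generated from the same header, so nothing has to be typed by hand (a sketch that reuses header and csv_reader from the question):
insert_sql = 'INSERT INTO {tn} VALUES ({ph})'.format(
    tn=table_name01,
    ph=', '.join('?' * len(header)),  # one ? per column
)
for row in csv_reader:
    c.execute(insert_sql, row)
conn.commit()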
Or use an ORM, which is designed to make this sort of thing easy. SQLAlchemy example:
from sqlalchemy import Table, Column, MetaData, String
meta = MetaData()
t = Table(table_name01, meta, *(Column(name, String()) for name in header))
t.create(engine)  # engine from sqlalchemy.create_engine('sqlite:///survey_test_db.sqlite')
You can use pandas to read your CSV file into a DataFrame and then export it to SQLite.
import sqlite3
import pandas as pd
sqlite_file = 'survey_test_db.sqlite'
table_name01 = 'test_survey_1'
conn = sqlite3.connect(sqlite_file)
pd.read_csv('test_data_1.csv').to_sql(table_name01, con=conn)
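Note that to_sql raises an error if the table already exists; the if_exists and index keyword arguments control that behaviour:
pd.read_csv('test_data_1.csv').to_sql(
    table_name01, con=conn,
    if_exists='append',  # or 'replace'; the default 'fail' raises if the table exists
    index=False,         # don't write the DataFrame index as an extra column
)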
I have a cx_Oracle connection and I am looking to run a 'batch' of sorts, trying to gather IDs from last names in a CSV file. Below is my code, which raises a cx_Oracle.DatabaseError: ORA-01756: quoted string not properly terminated error.
It is pointing to the line
and spriden_change_ind is null'''.format(lname,fname)
However, I know this approach works, as you will see: my commented-out code uses .format() in the same way and it works just fine. rows_to_dict_list is a nice function I found here some time ago that basically adds the column names to the output.
Any direction would be nice! Thank you.
import csv, cx_Oracle
def rows_to_dict_list(cursor):
    columns = [i[0] for i in cursor.description]
    new_list = []
    for row in cursor:
        row_dict = dict()
        for col in columns:
            row_dict[col] = row[columns.index(col)]
        new_list.append(row_dict)
    return new_list

connection = cx_Oracle.connect('USERNAME', 'PASSWORD', 'HOSTNAME:PORTNUMBER/SERVICEID')
cur = connection.cursor()
printHeader = True
with open('nopnumber_names.csv') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        lname = row['Last']
        fname = row['First']
        cur.execute('''select spriden_pidm as PIDM,
                              spriden_last_name as Last,
                              spriden_first_name as First,
                              spriden_mi as Middle,
                              spriden_ID as ID
                       from spriden
                       where upper(spriden_last_name) = '{0}'
                       and upper(spriden_first_name) = '{1}'
                       and spriden_change_ind is null'''.format(lname, fname)
                    )
        # THIS RECORD RUNS FINE
        # cur.execute('''select spriden_pidm as PIDM,
        #                       spriden_ID as ID,
        #                       spriden_last_name as Last,
        #                       spriden_first_name as First
        #                from spriden
        #                where spriden_pidm = '{}'
        #                and spriden_change_ind is null'''.format(99999)
        #             )
        data = rows_to_dict_list(cur)
        for row in data:
            print row
cur.close()
connection.close()
My best guess is that a first name or surname somewhere in your CSV file has a ' character in it.
You really shouldn't be building SQL by concatenating strings or using string formatting. You are at risk of SQL injection if you do so. What happens if someone has put a record in your CSV file with surname X' OR 1=1 --?
Instead, use bind parameters to send the values of your variables to the database. Try the following:
cur.execute('''select spriden_pidm as PIDM,
                      spriden_last_name as Last,
                      spriden_first_name as First,
                      spriden_mi as Middle,
                      spriden_ID as ID
               from spriden
               where upper(spriden_last_name) = :lname
               and upper(spriden_first_name) = :fname
               and spriden_change_ind is null''',
            {"lname": lname, "fname": fname})
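One extra detail: the query compares upper(spriden_last_name) with the bind value, so pass the names upper-cased as well, e.g. {"lname": lname.upper(), "fname": fname.upper()}, unless the CSV already contains upper-case names.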
import sqlite3
def setup_tracks(db, data_file):
    '''(str, file) -> NoneType
    Create and populate the Tracks table with
    the data from the open file data_file.'''
    data_file.readlines()
    # connect to the database
    con = sqlite3.connect(db)
    # create a cursor
    cur = con.cursor()
    # Create the Tracks table
    cur.execute('CREATE TABLE tracks' + '(Title TEXT, id INTEGER, Time INTEGER)')
    for line in data_file:
        data = line.strip().split(',')
        Title = data[0].strip()
        id = int(data[1].strip())
        s = int(data[2].strip().split(':'))
        Time = int(s[0]*60 + int(s[1]))
        cur.execute('INSERT INTO tracks VALUES(?, ?, ?)'+(Title, id, Time))
    cur.close()
    con.commit()
    con.close()
There is a file named tracks.csv. It has three columns: title, id, time. If I run this function, I get errors saying that 'db can not be defined'. I hope somebody can help me out with this.
Thank you!
Here is my edit of the program along with some observations.
import sqlite3
def setup_tracks(db, data_file):
    lines = data_file.readlines()
    # connect to the database
    con = sqlite3.connect(db)
    # create a cursor
    cur = con.cursor()
    # Create the Tracks table
    sql = '''create table tracks (Title text, id int, Time int)'''
    cur.execute(sql)
    for line in lines[1:]:
        data = line.strip().split(',')
        Title = data[0].strip()
        id = int(data[1].strip())
        s = data[2].strip().split(':')
        Time = int(s[0]) * 60 + int(s[1])
        Title = Title.replace("'", "''")
        # pass the values as a parameter tuple (a comma, not '+')
        cur.execute('INSERT INTO tracks VALUES(?, ?, ?)', (Title, id, Time))
    con.commit()
    con.close()

filename = 'tracks.csv'
f = open(filename, 'r')
setup_tracks('data.db', f)
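In short, the changes compared to the original are: the result of readlines() is stored in lines and the header row is skipped with lines[1:]; the mm:ss field is split on ':' first and only then converted, so the duration is int(s[0]) * 60 + int(s[1]); and the values are passed to execute() as a separate parameter tuple instead of being concatenated onto the SQL string.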
I found an example using cx_Oracle; this example shows all the information in Cursor.description.
import cx_Oracle
from pprint import pprint
connection = cx_Oracle.Connection("%s/%s@%s" % (dbuser, dbpasswd, oracle_sid))
cursor = cx_Oracle.Cursor(connection)
sql = "SELECT * FROM your_table"
cursor.execute(sql)
data = cursor.fetchall()
print "(name, type_code, display_size, internal_size, precision, scale, null_ok)"
pprint(cursor.description)
pprint(data)
cursor.close()
connection.close()
What I wanted to see was just the list of column names (element [0] of each Cursor.description tuple), so I changed the code:
import cx_Oracle
import pprint
connection = cx_Oracle.Connection("%s/%s@%s" % (dbuser, dbpasswd, oracle_sid))
cursor = cx_Oracle.Cursor(connection)
sql = "SELECT * FROM your_table"
cursor.execute(sql)
data = cursor.fetchall()
col_names = []
for i in range(0, len(cursor.description)):
    col_names.append(cursor.description[i][0])
pp = pprint.PrettyPrinter(width=1024)
pp.pprint(col_names)
pp.pprint(data)
cursor.close()
connection.close()
I think there must be better ways to print out the names of the columns. Please suggest alternatives to a Python beginner. :-)
You can use list comprehension as an alternative to get the column names:
col_names = [row[0] for row in cursor.description]
Since cursor.description returns a list of 7-element tuples, you can take the 0th element of each tuple, which is the column name.
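For example, for a hypothetical table with columns ID and NAME, cursor.description might look roughly like [('ID', <cx_Oracle.NUMBER>, ...), ('NAME', <cx_Oracle.STRING>, ...)], so the comprehension yields ['ID', 'NAME'].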
Here is the code.
import csv
import sys
import cx_Oracle
db = cx_Oracle.connect('user/pass#host:1521/service_name')
SQL = "select * from dual"
print(SQL)
cursor = db.cursor()
f = open("C:\dual.csv", "w")
writer = csv.writer(f, lineterminator="\n", quoting=csv.QUOTE_NONNUMERIC)
r = cursor.execute(SQL)
#this takes the column names
col_names = [row[0] for row in cursor.description]
writer.writerow(col_names)
for row in cursor:
    writer.writerow(row)
f.close()
The SQLAlchemy source code is a good starting point for robust methods of database introspection. Here is how SQLAlchemy reflects table names from Oracle:
SELECT table_name FROM all_tables
WHERE nvl(tablespace_name, 'no tablespace') NOT IN ('SYSTEM', 'SYSAUX')
AND OWNER = :owner
AND IOT_NAME IS NULL
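If you are using SQLAlchemy anyway, its inspector runs this kind of reflection query for you (a sketch; the connection URL is a placeholder):
from sqlalchemy import create_engine, inspect

# hypothetical Oracle URL - adjust user, password, host and service name
engine = create_engine('oracle+cx_oracle://user:pass@host:1521/?service_name=service_name')
inspector = inspect(engine)
print(inspector.get_table_names(schema='owner'))  # reflects table names, much like the query above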