I'm trying to use Python to do an SQLite query. I can get the desired result in SQLite using
SELECT Four FROM keys2 WHERE One = B
How do I do this in Python?
I am trying
c = conn.cursor()
print "Opened database successfully";
x = c.execute("SELECT Four FROM keys2 WHERE One = B")
print x
conn.close()
But I get the message "sqlite3.OperationalError: no such column: B". I'm trying to select the entry in the Four column where the One column is B. I have tried several methods to no avail.
Supply the value using parametrized SQL:
x = c.execute("SELECT Four FROM keys2 WHERE One = ?", "B")
The problem with the SQL you posted is that the value B must be quoted. So
x = c.execute("SELECT Four FROM keys2 WHERE One = 'B'")
would also have worked, but it is better to always use parametrized SQL (to guard against SQL injection) and let sqlite3 do the quoting for you.
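For reference, a minimal end-to-end version of the corrected query might look like this (the database file name here is an assumption; the table and column names come from the question):

import sqlite3

conn = sqlite3.connect('mydatabase.db')  # assumed file name
c = conn.cursor()

# Let sqlite3 substitute and quote the value via the parameter.
c.execute("SELECT Four FROM keys2 WHERE One = ?", ("B",))
row = c.fetchone()  # None if no row matched
if row is not None:
    print(row[0])   # the value in the Four column

conn.close()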
In my Python script I have an insert query, but when I want to insert multiple columns in the same query it gives an error, while for a single column it works perfectly.
Below is my code.
My database is AWS S3.
A = []
for score_row in score:
    A.append(score_row[2])
print("A=", A)

B = []
for day_row in score:
    B.append(day_row[1])
print("B=", B)

for x, y in zip(A, B):
    sql = """INSERT INTO calculated_corr_coeff(date,Day) VALUES (?,?)"""
    cursor.executemany(sql, (x,), (y,))
When I replace the above query with the following SQL insert statement, it works perfectly.
sql = """INSERT INTO calculated_corr_coeff(date,Day) VALUES (?)"""
cursor.executemany(sql, (x,))
Fix your code like this:
sql = """INSERT INTO calculated_corr_coeff(date,Day) VALUES (?,?)"""
cursor.execute(sql, (x,y,)) #<-- here
This is because it is just one insert (not several inserts).
Explanation
I guess you are mistaken about the number of inserts (rows) versus the number of parameters (fields to insert in each row). When you want to insert several rows, use executemany; for just one row you should use execute. The second parameter of execute is the "list" (or sequence) of values to be inserted in that row.
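For example, with the table from the question (cursor is assumed to be an open sqlite3 cursor, and the values shown are made up):

# One row: execute with a single sequence of field values.
cursor.execute(
    "INSERT INTO calculated_corr_coeff(date,Day) VALUES (?,?)",
    ('2019-01-01', 'Monday'),
)

# Several rows: executemany with a sequence of such sequences.
cursor.executemany(
    "INSERT INTO calculated_corr_coeff(date,Day) VALUES (?,?)",
    [('2019-01-01', 'Monday'), ('2019-01-02', 'Tuesday')],
)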
Alternative
You can change the syntax and insert all the data in one shot by passing the zipped rows straight to executemany:
values = list(zip(A, B))  # instead of the "for" loop; list() because zip is a one-shot iterator in Python 3
sql = """INSERT INTO calculated_corr_coeff(date,Day) VALUES (?,?)"""
cursor.executemany(sql, values)
Notice this approach doesn't use a for statement, so all the data is sent to the database in one call, which is more efficient.
I am trying to insert data into a table. The table is determined at the beginning of the program and remains constant throughout. How do I interpolate the table name in an executemany statement like the one below?
tbl = 'table_name'
rows = [{'this':x, 'that': x+1} for x in range(10)]
cur.executemany("""INSERT INTO %(tbl)s
VALUES(
%(this)s,
%(that)s
)""", rows)
As stated in the official documentation: "Only query values should be bound via this method: it shouldn’t be used to merge table or field names to the query. If you need to generate dynamically an SQL query (for instance choosing dynamically a table name) you can use the facilities provided by the psycopg2.sql module."
It has the following syntax:
from psycopg2 import sql
tbl = 'table_name'
rows = [{'this':x, 'that': x+1} for x in range(10)]
cur.executemany(
    sql.SQL("INSERT INTO {} VALUES (%(this)s, %(that)s);")
       .format(sql.Identifier(tbl)),
    rows)
More on http://initd.org/psycopg/docs/sql.html#module-psycopg2.sql
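As an illustration only, if the table lived in a specific schema you could compose both identifiers the same way (the schema name here is hypothetical):

from psycopg2 import sql

# Hypothetical schema name, just to show composing several identifiers safely.
query = sql.SQL("INSERT INTO {}.{} VALUES (%(this)s, %(that)s)").format(
    sql.Identifier('my_schema'),
    sql.Identifier(tbl),
)
cur.executemany(query, rows)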
I have a csv file of size 360x120 that I want to import into my sqlite database row by row. For one row, I know that the syntax below works if mytuple has 2 elements:
import sqlite3
conn = sqlite3.connect(dbLoc)
cur = conn.cursor()
mytuple = (a, b, c, ...) #some long tuple of 120 elements
cur.execute('INSERT INTO tablename VALUES (?, ?)', mytuple)
Problem is, my rows contain 120 columns and I can't really go type 120 question marks into the cur.execute() line. Actually I have, it works but yeah, it is not a good solution. One thing I have tried was:
cur.execute('INSERT INTO tablename VALUES ?', mytuple)
I thought it would just do ?=mytuple and replace ? with mytuple, but it doesn't do that. A user comment on the article sebastianraschka.com/Articles/2014_sqlite_in_python_tutorial.html shows syntax like the following, which would work for me, but it does not:
t = ('RHAT',)
c.execute('SELECT * FROM stocks WHERE symbol=?', t)
As seen there, he is able to pass a tuple into the execute string with a single ? used. How can I achieve the same with INSERT INTO tablename?
sqlite3 doesn't support a more concise syntax; you have to generate the right number of placeholders yourself:
c.execute('INSERT INTO tablename VALUES ({}?)'.format('?,' * (len(t) - 1)), t)  # builds '?,?,...,?' with len(t) marks
Note: the default SQLITE_MAX_COLUMN is 2000, and some algorithms in SQLite are O(n**2) in the number of columns, i.e., if you increase the limit, it may slow down db operations.
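Putting it together for the CSV case in the question (a sketch; the CSV file path and the assumption that every row has the same number of fields are mine, while dbLoc and tablename come from the question):

import csv
import sqlite3

conn = sqlite3.connect(dbLoc)
cur = conn.cursor()

with open('data.csv', newline='') as f:   # assumed CSV path
    rows = [tuple(r) for r in csv.reader(f)]

placeholders = ','.join(['?'] * len(rows[0]))   # '?,?,...,?', 120 marks here
cur.executemany('INSERT INTO tablename VALUES ({})'.format(placeholders), rows)
conn.commit()
conn.close()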
I select one column from a table in a database, and I want to iterate through each of the results. Why is it that when I do this I get a tuple instead of a single value?
con = psycopg2.connect(…)
cur = con.cursor()

stmt = "SELECT DISTINCT inventory_pkg FROM {}.{} WHERE inventory_pkg IS NOT NULL;".format(schema, tableName)
cur.execute(stmt)
con.commit()
referenced = cur.fetchall()

for destTbl in referenced:  # why is destTbl a single-element tuple?
    print('destTbl: ' + str(referenced))
    stmt = "SELECT attr_name, attr_rule FROM {}.{} WHERE ppm_table_name = {};".format(schema, tableName, destTbl)  # this fails because the WHERE clause gets messed up, because destTbl has a comma after it
    cur.execute(stmt)
Because that's what the db api does: always returns a tuple for each row in the result.
It's pretty simple to refer to destTbl[0] wherever you need to.
Because you are getting rows from your database, and the API is being consistent.
If your query asked for * columns, or a specific number of columns that is greater than 1, you'd also need a tuple or list to hold those columns for each row.
In other words, just because you only have one column in this query doesn't mean the API suddenly will change what kind of object it returns to model a row.
Simply always treat a row as a sequence and use indexing or tuple assignment to get a specific value out. Use:
inventory_pkg = destTbl[0]
or
inventory_pkg, = destTbl
for example.
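You can also unpack right in the loop header, which reads cleanly when the query returns a single column (a small sketch using the names from the question):

for (destTbl,) in referenced:
    print('destTbl:', destTbl)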
I am new to sqlite and I think this question should have been answered before, but I haven't been able to find an answer.
I have a list of around 50 elements that I need to write to an sqlite database with 50 columns.
I went over the documentation at https://docs.python.org/2/library/sqlite3.html, but in the examples the values are specified by ? (so for writing 3 values, 3 ?s are specified).
sample code:
row_to_write = range(50)
conn = sqlite3.connect(r'C:\sample_database\sample_database')  # raw string so the backslashes are kept literally
c = conn.cursor()
I tried these:
Approach 1:
c.execute("INSERT INTO PMU VALUES (?)", row_to_write)
ERROR: OperationalError: table PMU has 50 columns but 1 values were supplied
Approach 2: I tried writing a generator to iterate over the list.
def write_row_value_generator(row_to_write):
    for val in row_to_write:
        yield (val,)
c.executemany("INSERT INTO PMU VALUES (?)", write_row_value_generator(row_to_write))
ERROR: OperationalError: table PMU has 50 columns but 1 values were supplied
What is the correct way of doing this?
Assuming that your row_to_write has exactly the same number of items as PMU has columns, you can create a string of ? marks easily using str.join: ','.join(['?']*len(row_to_write))
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("create table PMU (%s)" % ','.join("col%d"%i for i in range(50)))
row_to_write = list(range(100,150,1))
row_value_markers = ','.join(['?']*len(row_to_write))
c.execute("INSERT INTO PMU VALUES (%s)"%row_value_markers, row_to_write)
conn.commit()
You need to specify the names of the columns. Sqlite will not guess those for you.
columns = ['A', 'B', 'C', ...]  # all 50 column names
n = len(row_to_write)
sql = "INSERT INTO PMU ({}) VALUES ({})".format(
    ', '.join(columns[:n]), ', '.join(['?'] * n))
c.execute(sql, row_to_write)
Note also that if your rows have a variable number of columns, then you might want to rethink your database schema. Usually each row should have a fixed number of columns, and the variability expresses itself in the number of rows inserted, not the number of columns used.
For example, instead of having 50 columns, perhaps you need just one extra column, whose value is one of 50 names (what used to be a column name). Each value in row_to_write would have its own row, and for each row you would have two columns: the value and the name of the column.
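A rough sketch of that normalized layout (the table and column names here are invented for illustration):

import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()

# One row per (column name, value) pair instead of one 50-column row.
c.execute("CREATE TABLE PMU_values (col_name TEXT, value REAL)")

row_to_write = list(range(100, 150))
names = ['col%d' % i for i in range(len(row_to_write))]  # the former column names
c.executemany(
    "INSERT INTO PMU_values (col_name, value) VALUES (?, ?)",
    zip(names, row_to_write),
)
conn.commit()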