I am new to SQLite and I think this question should have been answered before, but I haven't been able to find an answer.
I have a list of around 50 elements that I need to write to an SQLite database with 50 columns.
I went over the documentation (https://docs.python.org/2/library/sqlite3.html), but in the examples the values are specified by ? placeholders (so for writing 3 values, 3 ? are specified).
sample code:
row_to_write = range(50)
conn = sqlite3.connect(r'C:\sample_database\sample_database')
c = conn.cursor()
I tried these approaches:
Approach 1:
c.execute("INSERT INTO PMU VALUES (?)", row_to_write)
ERROR: OperationalError: table PMU has 50 columns but 1 values were supplied
Approach 2: I tried writing a generator to iterate over the list
def write_row_value_generator(row_to_write):
    for val in row_to_write:
        yield (val,)

c.executemany("INSERT INTO PMU VALUES (?)", write_row_value_generator(row_to_write))
ERROR: OperationalError: table PMU has 50 columns but 1 values were supplied
What is the correct way of doing this?
Assuming that your row_to_write has exactly as many items as PMU has columns, you can create the string of ? placeholders easily using str.join: ','.join(['?'] * len(row_to_write))
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("create table PMU (%s)" % ','.join("col%d" % i for i in range(50)))
row_to_write = list(range(100, 150))
row_value_markers = ','.join(['?'] * len(row_to_write))
c.execute("INSERT INTO PMU VALUES (%s)" % row_value_markers, row_to_write)
conn.commit()
You need to specify the names of the columns. SQLite will not guess those for you.
columns = ['A', 'B', 'C', ...]
n = len(row_to_write)
sql = "INSERT INTO PMU ({}) VALUES ({})".format(
    ', '.join(columns[:n]), ', '.join(['?'] * n))
c.execute(sql, row_to_write)
Note also that if your rows have a variable number of columns, then you might want to rethink your database schema. Usually each row should have a fixed number of columns, and the variability expresses itself in the number of rows inserted, not the number of columns used.
For example, instead of having 50 columns, perhaps you need just one extra column, whose value is one of 50 names (what used to be a column name). Each value in row_to_write would have its own row, and for each row you would have two columns: the value and the name of the column.
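A minimal sketch of that narrow schema (the table and column names here are illustrative, not from the original question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()

# Narrow schema: one row per (name, value) pair instead of 50 columns.
c.execute("CREATE TABLE PMU (col_name TEXT, value REAL)")

row_to_write = list(range(100, 150))
col_names = ["col%d" % i for i in range(len(row_to_write))]

c.executemany("INSERT INTO PMU (col_name, value) VALUES (?, ?)",
              zip(col_names, row_to_write))
conn.commit()

# Each former column is now addressable by name:
c.execute("SELECT value FROM PMU WHERE col_name = ?", ("col7",))
print(c.fetchone()[0])  # -> 107.0
```

With this layout, adding or removing a "column" is just inserting or deleting rows, and no schema change is needed.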
Related
I have created a python list with 41 columns and 50 rows.
Now I want to insert this into an SQLite database.
When I execute the database export, I get this error message:
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 41, and there are 40 supplied.
Most of the list fields have data; perhaps one or two don't.
Can I write into the SQLite database with a statement like
insert the data if available, otherwise write None
or something like that?
My code is like:
c.execute("""CREATE TABLE IF NOT EXISTS statstable (
    spielid integer PRIMARY KEY,
    41x field descr. (real, integer and text)
    UNIQUE (spielid)
)
""")
c.executemany("INSERT OR REPLACE INTO statstable VALUES (41x ?)", all_data)
Append the appropriate number of None values to the nested lists to make them all 41 elements:
c.executemany("INSERT OR REPLACE INTO statstable VALUES (41x ?)",
              [l + [None] * (41 - len(l)) for l in all_data])
This assumes the missing elements are always at the end of the list of columns. If they can be different columns in each row, I don't see how you can implement any automatic solution.
If the elements of all_data were dictionaries whose keys correspond to column names, you could determine which keys are missing. Then turn it into a list with the None placeholders in the appropriate places for those columns.
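A sketch of that dictionary approach, assuming all_data holds dicts keyed by column name (the column list and sample data here are made up for illustration; the real table has 41 columns):

```python
# Hypothetical fixed column order; replace with the real 41 column names.
columns = ["spielid", "col1", "col2", "col3"]

all_data = [
    {"spielid": 1, "col1": 2.5, "col3": "a"},   # col2 missing
    {"spielid": 2, "col2": 7, "col3": "b"},     # col1 missing
]

# Build each row in column order; dict.get returns None for missing keys.
rows = [[d.get(col) for col in columns] for d in all_data]

placeholders = ", ".join("?" * len(columns))
# c.executemany("INSERT OR REPLACE INTO statstable VALUES (%s)" % placeholders, rows)
print(rows)
# -> [[1, 2.5, None, 'a'], [2, None, 7, 'b']]
```

This way the None placeholders land in the correct columns regardless of which keys are missing from each dict.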
I want to insert values to SQL Server from python. Here's my code:
for value in rows:
    cursor.execute("""INSERT INTO Table ([ColumnOne]) VALUES (?)""", value)
    cnxn.commit()
rows contains lists of rows, something like this: row 1 consists of a list of float numbers:
1.0
2.0
1.5
1.75
... (1000 values in total per row/column)
and it is the same for row 2, row 3, and so on.
But when I try to run the code, I get this error:
pyodbc.ProgrammingError: ('The SQL contains 1 parameter markers, but
1000 parameters were supplied', 'HY000')
Is there any way to keep the float values from being treated individually, or to fix this problem?
I think maybe I should use a ','.join statement to build a string?
Since I am not good at explaining and am new to Python, please correct me if I have made some mistakes. Thank you.
When you attempt to insert multiple table rows in one query, you need to supply a list of values for each row.
For example, the following query would insert two rows:
INSERT INTO Table (ColumnOne) VALUES (1.0), (2.0);
So your python code needs to prepare the correct VALUES part of the query:
for row in rows:
    values = ", ".join(("(?)",) * len(row))
    cursor.execute(f"INSERT INTO Table (ColumnOne) VALUES {values}", row)
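Alternatively, executemany with one parameter tuple per value avoids building the VALUES list dynamically. A runnable sketch, using an in-memory sqlite3 database as a stand-in for the pyodbc connection (table name and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # stand-in for the pyodbc connection
cursor = conn.cursor()
cursor.execute("CREATE TABLE Test (ColumnOne REAL)")

rows = [[1.0, 2.0, 1.5, 1.75]]  # illustrative data

for row in rows:
    # Wrap each float in a 1-tuple so each value becomes its own table row.
    cursor.executemany("INSERT INTO Test (ColumnOne) VALUES (?)",
                       [(v,) for v in row])
conn.commit()

cursor.execute("SELECT COUNT(*) FROM Test")
print(cursor.fetchone()[0])  # -> 4
```

The same pattern works with a pyodbc cursor, since executemany is part of the DB-API.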
I have this example of an SQLite database. I want to use a certain item from the database in a math calculation, but I can't because the value looks like this: (25.0,) instead of just 25.0. Please see the attached picture:
https://imgur.com/a/j7JOZ5H
import sqlite3
#Create the database:
connection = sqlite3.connect('DataBase.db')
c = connection.cursor()
c.execute('CREATE TABLE IF NOT EXISTS table1 (name TEXT,age NUMBER)')
c.execute("INSERT INTO table1 VALUES('Jhon',25)")
#Pull out the value:
c.execute('SELECT age FROM table1')
data = c.fetchall()
print(data[0])
#simple math calculation:
r = data[0] + 1
print(r)
According to Python's PEP 249, the specification underlying most DB-APIs including sqlite3, fetchall returns a sequence of sequences, usually a list of tuples. Therefore, to retrieve the single value in the first column for arithmetic, index the return twice: first the specific row, then the specific position in the row.
data = c.fetchall()
data[0][0]
Alternatively, fetchone returns a single row (the first or next row in the result set), so simply index once: the position within that single row.
data = c.fetchone()
data[0]
The data returned from fetchall always comes back as a list of tuples, even if each tuple only contains one value. Your data variable will be:
[(25,)]
You need to use:
print(data[0][0])
r = data[0][0] + 1
print(r)
I am wondering how to fetch more than one column at the same time. This is the code I have so far, but I get the error:
ValueError: too many values to unpack (expected 2)
con = sqlite3.connect('database.db')
cur = con.cursor()
cur.execute("SELECT name, age FROM employees")
name, age = cur.fetchall()
Is it actually possible to fetch more than 1 column at the same time?
Thanks!
cur.fetchall() gives you a list of (name, age) tuples, as many as there are rows in your employees table. Your table contains more than just two rows, so your name, age = ... syntax fails because that requires exactly two results. Even if there were only two rows, you'd assign those two rows (each a tuple) to the names name and age, which is probably not what you wanted.
You probably want to iterate over the cursor; this lets you process each row tuple separately, including assigning the two column values to two separate targets:
for name, age in cur:
    # process each name and age separately
or just assign the cur.fetchall() result to a single variable:
results = cur.fetchall()
and process the list as needed.
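A minimal self-contained sketch of both options, using an in-memory database with made-up rows:

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute("CREATE TABLE employees (name TEXT, age INTEGER)")
cur.executemany("INSERT INTO employees VALUES (?, ?)",
                [("Alice", 30), ("Bob", 25), ("Carol", 41)])

cur.execute("SELECT name, age FROM employees")
results = cur.fetchall()       # list of (name, age) tuples

for name, age in results:      # unpack one row (two columns) at a time
    print(name, age)
```

The two-target unpacking works per row because each row tuple has exactly two elements, matching the two columns in the SELECT.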
I select one column from a table in a database. I want to iterate through each of the results. Why is it that when I do this, it's a tuple instead of a single value?
con = psycopg2.connect(…)
cur = con.cursor()
stmt = "SELECT DISTINCT inventory_pkg FROM {}.{} WHERE inventory_pkg IS NOT NULL;".format(schema, tableName)
cur.execute(stmt)
con.commit()
referenced = cur.fetchall()
for destTbl in referenced:  # why is destTbl a single-element tuple?
    print('destTbl: ' + str(destTbl))
    stmt = "SELECT attr_name, attr_rule FROM {}.{} WHERE ppm_table_name = {};".format(schema, tableName, destTbl)  # this fails because the WHERE clause gets messed up: 'destTbl' has a comma after it
    cur.execute(stmt)
Because that's what the DB-API does: it always returns a tuple for each row in the result.
It's pretty simple to refer to destTbl[0] wherever you need to.
Because you are getting rows from your database, and the API is being consistent.
If your query asked for * or for a specific number of columns greater than 1, you'd also need a tuple or list to hold those columns for each row.
In other words, just because you only have one column in this query doesn't mean the API suddenly will change what kind of object it returns to model a row.
Simply always treat a row as a sequence and use indexing or tuple assignment to get a specific value out. Use:
inventory_pkg = destTbl[0]
or
inventory_pkg, = destTbl
for example.