When using:
import datetime
import sqlite3
db = sqlite3.connect('mydb.sqlite', detect_types=sqlite3.PARSE_DECLTYPES)
c = db.cursor()
db.text_factory = str
c.execute('create table if not exists mytable (date timestamp, title str, \
custom str, x float, y float, z char default null, \
postdate timestamp default null, id integer primary key autoincrement, \
url text default null)')
c.execute('insert into mytable values(?, ?, ?, ?, ?)', \
(datetime.datetime(2018,4,23,23,00), 'Test', 'Test2', 2.1, 11.1))
I get this error:
sqlite3.OperationalError: table mytable has 9 columns but 5 values were supplied
Why doesn't SQLite take the default values (specified during table creation) into consideration when populating a new row?
(Also, as I'm reopening a project I wrote a few years ago, I can't find the datatypes str and char in the sqlite3 docs anymore; are they still valid?)
By not listing any columns, you are telling SQLite that you want to supply a value for every column.
Change 'insert into mytable values(?, ?, ?, ?, ?)'
to 'insert into mytable (date, title, custom, x, y) values(?, ?, ?, ?, ?)'
Virtually any name can be used as a column type; the declared type follows a set of rules and is mapped to one of the affinities TEXT, INTEGER, REAL, NUMERIC or BLOB. However, you can still store any type of value in any column.
STR resolves to NUMERIC,
TIMESTAMP resolves to NUMERIC,
FLOAT resolves to REAL,
CHAR resolves to TEXT.
Have a read of Datatypes In SQLite or perhaps have a look at How flexible/restrictive are SQLite column types?
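To see this in action, here is a minimal sketch (table and column names are made up for illustration); the declared type only picks an affinity, and typeof() shows the storage class actually used:
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('create table affinity_demo (a str, b timestamp, c float, d char)')
# a numeric-looking text, a non-numeric text, another numeric-looking text, and an integer
c.execute('insert into affinity_demo values (?, ?, ?, ?)', ('10', 'not a date', '2.5', 42))
print(c.execute('select typeof(a), typeof(b), typeof(c), typeof(d) from affinity_demo').fetchone())
# likely ('integer', 'text', 'real', 'text') - every insert succeeded despite the declared types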
If you're going to only supply values for some columns, you need to specify which columns. Otherwise the engine won't know where to put them. This line needs to be changed:
c.execute('insert into mytable values(?, ?, ?, ?, ?)', \
(datetime.datetime(2018,4,23,23,00), 'Test', 'Test2', 2.1, 11.1))
To this:
c.execute('insert into mytable (date, title, custom, x, y) values(?, ?, ?, ?, ?)', \
(datetime.datetime(2018,4,23,23,00), 'Test', 'Test2', 2.1, 11.1))
Example Solution
Table definition:
cursor.execute('CREATE TABLE vehicles_record (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, timestamp DATETIME DEFAULT CURRENT_TIMESTAMP)')
Insert naming only the column you supply:
cursor.execute("INSERT INTO vehicles_record(name) VALUES(?)", (name,))
Result
id would be 1, name would be the value of the name variable, and the last column would get the current timestamp.
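A runnable sketch of the example above (the value 'Tesla' is only illustrative), showing the defaults being filled in:
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('CREATE TABLE vehicles_record (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, timestamp DATETIME DEFAULT CURRENT_TIMESTAMP)')
cursor.execute("INSERT INTO vehicles_record(name) VALUES(?)", ('Tesla',))
print(cursor.execute('SELECT * FROM vehicles_record').fetchone())
# something like (1, 'Tesla', '2024-01-01 12:00:00') - id and timestamp came from their defaults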
Related
I have a list called
fullpricelist=[373.97, 381.0, 398.98, 402.98, 404.98, 457.97, 535.99, 550.97, 566.98]
I would like to write this list into an sqlite database column. I found the following code in another question and adapted it to my situation.
cursor.executemany("""INSERT INTO cardata (fullprice) VALUES (?)""",
zip(fullpricelist))
My current script is this
for name, name2, image in zip(car_names, car_names2, images):
    cursor.execute(
        "insert into cardata (carname, carmodel, imageurl, location, Fro, T, companyid) values (?, ?, ?, ?, ?, ?, ?)",
        (name.text, name2.text, image.get_attribute('src'), location, pickup_d, return_d, Rental_ID)
    )
But now I am confused about how to combine these two pieces of code.
In your second piece of code, execute() is called once per loop iteration, storing one row at a time. This is slow and inefficient.
for price in fullpricelist:
    cursor.execute("""INSERT INTO cardata (fullprice) VALUES (?)""", (price,))
executemany() reads from an iterable and inserts one row per element; each element must itself be a sequence of parameters. If you add many rows to a database and care about efficiency, you want to use executemany().
cursor.executemany("""INSERT INTO cardata (fullprice) VALUES (?)""", fullpricelist)
If you want to include the other columns from your question, zip the per-column lists together so that executemany() receives one parameter tuple per row:
cursor.executemany("""INSERT INTO cardata (carname, carmodel, imageurl, location, Fro, T, companyid) values (?, ?, ?, ?, ?, ?, ?)""",
    zip(
        [name.text for name in car_names],
        [name.text for name in car_names2],
        [image.get_attribute('src') for image in images],
        [location] * len(car_names),
        [pickup_d] * len(car_names),
        [return_d] * len(car_names),
        [Rental_ID] * len(car_names)
    )
)
This assumes location, pickup_d, return_d and Rental_ID are the same for every row, since you did not provide lists of those values.
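To see why the per-column lists are zipped, here is a tiny sketch with made-up values; executemany() expects one parameter tuple per row, and zip() produces exactly that:
rows = list(zip(['Golf', 'Polo'], ['VW', 'VW'], ['url1', 'url2']))
print(rows)
# [('Golf', 'VW', 'url1'), ('Polo', 'VW', 'url2')] - one tuple per row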
From this answer:
cursor.execute("INSERT INTO booking_meeting (room_name,from_date,to_date,no_seat,projector,video,created_date,location_name) VALUES (?, ?, ?, ?, ?, ?, ?, ?)", (rname, from_date, to_date, seat, projector, video, now, location_name ))
I'd like to shorten it to something like:
simple_insert(booking_meeting, rname, from_date, to_date, seat, projector, video, now, location_name)
The first parameter is the table name, which could be read to get the list of column names and build the first section of the SQLite3 statement:
cursor.execute("INSERT INTO booking_meeting (room_name,from_date,to_date,no_seat,projector,video,created_date,location_name)
Then the values clause (second part of the insert statement):
VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
can be formatted by counting the number of column names in the table.
I hope I explained the question properly and that you can appreciate the time savings of such a function. How to write this function in Python is my question.
There may already be a simple_insert() function in sqlite3, but I just haven't stumbled across it yet.
If you're inserting into all the columns, then you don't need to specify the column names in the INSERT query. For that scenario, you could write a function like this:
def simple_insert(cursor, table, *args):
    query = f'INSERT INTO {table} VALUES (' + '?, ' * (len(args)-1) + '?)'
    cursor.execute(query, args)
For your example, you would call it as:
simple_insert(cursor, 'booking_meeting', rname, from_date, to_date, seat, projector, video, now, location_name)
Note that I've chosen to pass cursor to the function; you could instead rely on it as a global variable.
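If you do want the column names in the statement, one possible variant (just a sketch, not a built-in sqlite3 feature) reads them from the table with PRAGMA table_info and assumes the values are passed in the table's column order:
def simple_insert_named(cursor, table, *args):
    # PRAGMA table_info returns one row per column; index 1 is the column name
    cols = [row[1] for row in cursor.execute(f'PRAGMA table_info({table})')]
    placeholders = ', '.join('?' * len(args))
    query = f'INSERT INTO {table} ({", ".join(cols[:len(args)])}) VALUES ({placeholders})'
    cursor.execute(query, args)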
I've got an MS Access table (SearchAdsAccountLevel) which needs to be updated frequently from a Python script. I've set up the pyodbc connection and now I would like to UPDATE/INSERT rows from my pandas df into the MS Access table, based on whether the Date_ AND CampaignId fields match the df data.
Looking at previous examples, I've built the UPDATE statement, which uses iterrows to iterate through all rows of the df and execute the SQL code below:
connection_string = (
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=c:\AccessDatabases\Database2.accdb;"
)
cnxn = pyodbc.connect(connection_string, autocommit=True)
crsr = cnxn.cursor()
for index, row in df.iterrows():
    crsr.execute("UPDATE SearchAdsAccountLevel SET [OrgId]=?, [CampaignName]=?, [CampaignStatus]=?, [Storefront]=?, [AppName]=?, [AppId]=?, [TotalBudgetAmount]=?, [TotalBudgetCurrency]=?, [DailyBudgetAmount]=?, [DailyBudgetCurrency]=?, [Impressions]=?, [Taps]=?, [Conversions]=?, [ConversionsNewDownloads]=?, [ConversionsRedownloads]=?, [Ttr]=?, [LocalSpendAmount]=?, [LocalSpendCurrency]=?, [ConversionRate]=?, [Week_]=?, [Month_]=?, [Year_]=?, [Quarter]=?, [FinancialYear]=?, [RowUpdatedTime]=? WHERE [Date_]=? AND [CampaignId]=?",
                 row['OrgId'],
                 row['CampaignName'],
                 row['CampaignStatus'],
                 row['Storefront'],
                 row['AppName'],
                 row['AppId'],
                 row['TotalBudgetAmount'],
                 row['TotalBudgetCurrency'],
                 row['DailyBudgetAmount'],
                 row['DailyBudgetCurrency'],
                 row['Impressions'],
                 row['Taps'],
                 row['Conversions'],
                 row['ConversionsNewDownloads'],
                 row['ConversionsRedownloads'],
                 row['Ttr'],
                 row['LocalSpendAmount'],
                 row['LocalSpendCurrency'],
                 row['ConversionRate'],
                 row['Week_'],
                 row['Month_'],
                 row['Year_'],
                 row['Quarter'],
                 row['FinancialYear'],
                 row['RowUpdatedTime'],
                 row['Date_'],
                 row['CampaignId'])
crsr.commit()
I would like to iterate through each row of my df (around 3000 rows) and, if ['Date_'] and ['CampaignId'] match an existing row, UPDATE all other fields; otherwise I want to INSERT the whole df row into my Access table (create a new row). What's the most efficient and effective way to achieve this?
Consider DataFrame.values and pass the resulting list into an executemany call, making sure to order the columns to match the UPDATE query:
cols = ['OrgId', 'CampaignName', 'CampaignStatus', 'Storefront',
'AppName', 'AppId', 'TotalBudgetAmount', 'TotalBudgetCurrency',
'DailyBudgetAmount', 'DailyBudgetCurrency', 'Impressions',
'Taps', 'Conversions', 'ConversionsNewDownloads', 'ConversionsRedownloads',
'Ttr', 'LocalSpendAmount', 'LocalSpendCurrency', 'ConversionRate',
'Week_', 'Month_', 'Year_', 'Quarter', 'FinancialYear',
'RowUpdatedTime', 'Date_', 'CampaignId']
sql = '''UPDATE SearchAdsAccountLevel
SET [OrgId]=?, [CampaignName]=?, [CampaignStatus]=?, [Storefront]=?,
[AppName]=?, [AppId]=?, [TotalBudgetAmount]=?,
[TotalBudgetCurrency]=?, [DailyBudgetAmount]=?,
[DailyBudgetCurrency]=?, [Impressions]=?, [Taps]=?, [Conversions]=?,
[ConversionsNewDownloads]=?, [ConversionsRedownloads]=?, [Ttr]=?,
[LocalSpendAmount]=?, [LocalSpendCurrency]=?, [ConversionRate]=?,
[Week_]=?, [Month_]=?, [Year_]=?, [Quarter]=?, [FinancialYear]=?,
[RowUpdatedTime]=?
WHERE [Date_]=? AND [CampaignId]=?'''
crsr.executemany(sql, df[cols].values.tolist())
cnxn.commit()
For the insert, use a temporary staging table with the exact structure of the final table, which you can create with a make-table query: SELECT TOP 1 * INTO temp FROM final. This temp table is regularly cleaned out and filled with all the data frame rows. The final query then migrates only the new rows from temp into final with NOT EXISTS, NOT IN, or LEFT JOIN with an IS NULL check. You can run this query at any time and never worry about duplicates on the Date_ and CampaignId columns.
# CLEAN OUT TEMP
sql = '''DELETE FROM SearchAdsAccountLevel_Temp'''
crsr.execute(sql)
cnxn.commit()
# APPEND TO TEMP
sql = '''INSERT INTO SearchAdsAccountLevel_Temp (OrgId, CampaignName, CampaignStatus, Storefront,
AppName, AppId, TotalBudgetAmount, TotalBudgetCurrency,
DailyBudgetAmount, DailyBudgetCurrency, Impressions,
Taps, Conversions, ConversionsNewDownloads, ConversionsRedownloads,
Ttr, LocalSpendAmount, LocalSpendCurrency, ConversionRate,
Week_, Month_, Year_, Quarter, FinancialYear,
RowUpdatedTime, Date_, CampaignId)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?,
?, ?, ?, ?, ?, ?, ?, ?, ?,
?, ?, ?, ?, ?, ?, ?, ?, ?);'''
crsr.executemany(sql, df[cols].values.tolist())
cnxn.commit()
# MIGRATE TO FINAL
sql = '''INSERT INTO SearchAdsAccountLevel
SELECT t.*
FROM SearchAdsAccountLevel_Temp t
LEFT JOIN SearchAdsAccountLevel f
ON t.Date_ = f.Date_ AND t.CampaignId = f.CampaignId
WHERE f.OrgId IS NULL'''
crsr.execute(sql)
cnxn.commit()
What is the most common way to build an SQLite query in Python?
Solution 1:
query = ('insert into events (date, title, col3, col4, int5, int6) '
         'values("%s", "%s", "%s", "%s", %s, %s)' % (date, title, col3, col4, int5, int6))
print(query)
c.execute(query)
Problem: it won't work if, for example, title contains a double quote (").
Solution 2:
query = ('insert into events (date, title, col3, col4, int5, int6) '
         'values(?, ?, ?, ?, ?, ?)')
c.execute(query, (date, title, col3, col4, int5, int6))
Problem: in solution 1 we could display/print the query (to log it); in solution 2 we can no longer log the final query string, because each ? is only replaced by its value during execute().
Is there another, cleaner way to do it? Can we avoid repeating ?, ?, ..., ? and write a single values(?) that still gets expanded to all the parameters in the tuple?
You should always use the DB-API's parameter substitution to avoid SQL injection. Query logging is relatively easy to add by subclassing sqlite3.Cursor:
import sqlite3
class MyConnection(sqlite3.Connection):
    def cursor(self):
        return super().cursor(MyCursor)

class MyCursor(sqlite3.Cursor):
    def execute(self, sql, parameters=''):
        print(f'statement: {sql!r}, parameters: {parameters!r}')
        return super().execute(sql, parameters)
conn = sqlite3.connect(':memory:', timeout=60, factory=MyConnection)
conn.execute('create table if not exists "test" (id integer, value integer)')
conn.execute('insert into test values (?, ?)', (1, 0));
conn.commit()
yields:
statement: 'create table if not exists "test" (id integer, value integer)', parameters: ''
statement: 'insert into test values (?, ?)', parameters: (1, 0)
To avoid formatting problems and SQL injection attacks, you should always use parameters.
When you want to log the query, you can simply log the parameter list together with the query string.
(SQLite has a function to get the expanded query, but Python does not expose it.)
Each parameter marker corresponds to exactly one value. If writing many markers is too tedious for you, let the computer do it:
parms = (1, 2, 3)
markers = ",".join("?" * len(parms))
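For example, with the events table from the question above (the table and column names are only illustrative), the generated markers can be dropped straight into the query string:
parms = (date, title, col3)
markers = ",".join("?" * len(parms))
c.execute("insert into events (date, title, col3) values ({})".format(markers), parms)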
Is there a way I can assign an SQLite rowid through a variable in Python? I am using Tkinter's get() method to retrieve the contents of the entry.
Here is the code:
def insertdata():
    c.execute("INSERT INTO Students VALUES (?, ?, ?, ?, ?, ?, ?, ?);", (surnamelabelentry.get(), forenamelabelentry.get(), dateofbirthlabelentry.get(), homeaddresslabelentry.get(), homephonenumberentry.get(), genderlabelentry.get(), tutorgrouplabelentry.get(), emaillabelentry.get()))
    c.execute('INSERT INTO Students(rowid) VALUES',(studentidentry.get()))
    conn.commit()
    rootE.destroy()
Here is the error:
File "R:/Documents/PYTHON/Login And Pw.py", line 124, in insertdata
c.execute('INSERT INTO Students(rowid) VALUES',(studentidentry.get()))
sqlite3.OperationalError: near "VALUES": syntax error
Thanks in advance.
When creating your table, declare one column as INTEGER PRIMARY KEY.
CREATE TABLE Students (
StudentID INTEGER PRIMARY KEY,
Surname VARCHAR NOT NULL,
Forename VARCHAR NOT NULL,
...
);
This makes that column an alias for the ROWID, and you can then use a normal INSERT statement to set it.
If for some bizarre reason you want to keep ROWID as a hidden column but still set an explicit value for it, then you can use an explicit column list for INSERT.
c.execute("INSERT INTO Students(ROWID, Surname, Forename) VALUES (?, ?, ?)", (5678, 'Tables', 'Robert'))