When I insert a new row into a table, what's the best way to get the autoincremented primary key of the row I just created? Like this:
def create_address_and_get_id(address_line_1, address_line_2, city, state_abbrev, postcode, country):
    db = get_db()
    db.execute(
        'INSERT INTO mailing_address (address_line_1, address_line_2,'
        ' city, state_abbrev, postcode, country)'
        ' VALUES (?, ?, ?, ?, ?, ?)',
        (address_line_1, address_line_2, city, state_abbrev, postcode, country)
    )
    db.commit()
    # return ???
I've seen how to do this in other systems, but not in Python.
You are looking for lastrowid:
https://docs.python.org/3/library/sqlite3.html:
lastrowid
This read-only attribute provides the rowid of the last modified row. It is only set if you issued an INSERT or a REPLACE statement using the execute() method. For operations other than INSERT or REPLACE or when executemany() is called, lastrowid is set to None.
If the INSERT or REPLACE statement failed to insert, the previous successful rowid is returned.
Changed in version 3.6: Added support for the REPLACE statement.
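Putting that together with the question's function: a minimal, self-contained sketch, in which get_db() is stubbed with an in-memory database and the mailing_address schema is an assumption for the demo:

```python
import sqlite3

# Stand-in for the app's get_db(); the schema here is assumed for illustration
_db = sqlite3.connect(':memory:')
_db.execute(
    'CREATE TABLE mailing_address ('
    ' id INTEGER PRIMARY KEY AUTOINCREMENT,'
    ' address_line_1 TEXT, address_line_2 TEXT,'
    ' city TEXT, state_abbrev TEXT, postcode TEXT, country TEXT)'
)

def get_db():
    return _db

def create_address_and_get_id(address_line_1, address_line_2, city,
                              state_abbrev, postcode, country):
    db = get_db()
    cursor = db.execute(
        'INSERT INTO mailing_address (address_line_1, address_line_2,'
        ' city, state_abbrev, postcode, country)'
        ' VALUES (?, ?, ?, ?, ?, ?)',
        (address_line_1, address_line_2, city, state_abbrev, postcode, country)
    )
    db.commit()
    # lastrowid is the autoincremented primary key of the row just inserted
    return cursor.lastrowid
```

The first call on a fresh table returns 1, the next 2, and so on.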
Related
I can do very efficient bulk inserts in Sqlite3 on Python (2.7) with this code:
cur.executemany("INSERT INTO " + tableName + " VALUES (?, ?, ?, ?);", data)
But I can't get updates to work efficiently. I thought it might be a problem of the database structure/indexing, but even on a test database with only one table of 100 rows, the update still takes about 2-3 seconds.
I've tried different code variations. The latest code I have is from this answer to a previous question about update and executemany, but it's just as slow for me as any other attempt I've made:
data = []
for s in sources:
    source_id = s['source_id']
    val = get_value(s['source_attr'])
    data.append([val, source_id])
cur.executemany("UPDATE sources SET source_attr = ? WHERE source_id = ?", data)
con.commit()
How could I improve this code to do a big bulk update efficiently?
When inserting a record, the database just needs to write a row at the end of the table (unless you have something like UNIQUE constraints).
When updating a record, the database needs to find the row. This requires scanning through the entire table (for each command), unless you have an index on the search column:
CREATE INDEX whatever ON sources(source_id);
But if source_id is the primary key, you should just declare it as such (which creates an implicit index):
CREATE TABLE sources(
    source_id INTEGER PRIMARY KEY,
    source_attr TEXT,
    [...]
);
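To see the effect in isolation, here is a small sketch (in-memory database, schema assumed from the question) where every UPDATE in the executemany batch becomes a primary-key lookup rather than a table scan:

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
# source_id INTEGER PRIMARY KEY gives each UPDATE an implicit index to use
cur.execute('CREATE TABLE sources(source_id INTEGER PRIMARY KEY, source_attr TEXT)')
cur.executemany('INSERT INTO sources VALUES (?, ?)',
                [(i, 'old') for i in range(100)])

# Bulk update: each WHERE source_id = ? is resolved via the rowid, not a scan
data = [('new', i) for i in range(100)]
cur.executemany('UPDATE sources SET source_attr = ? WHERE source_id = ?', data)
con.commit()

updated = cur.execute(
    "SELECT COUNT(*) FROM sources WHERE source_attr = 'new'").fetchone()[0]
```

With the primary key declared, the 100-row batch completes essentially instantly.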
What would be the best way to get the PK of the following:
self.cursor.execute('INSERT IGNORE INTO table (url, country) VALUES (%s, %s)', (line['url'], line['country']))
In other words, if the row is already there I need to get its PK, but if it's not, I need to INSERT it and then get the LAST_INSERT_ID. Is there a way to do this without three queries? What would be the best way to implement this pattern?
To get the LAST_INSERT_ID while inserting data, don't use INSERT IGNORE. Instead, use the ON DUPLICATE KEY UPDATE clause to get the id:
INSERT INTO table (url, country)
VALUES (%s, %s)
ON DUPLICATE KEY UPDATE
    id = LAST_INSERT_ID(id);
where id represents the unique column of your table.
You'd still need another query to fetch the updated LAST_INSERT_ID now.
I think the most straightforward way to do this without altering previous data would be to do an INSERT IGNORE followed by a SELECT to retrieve the id.
cursor.execute('INSERT IGNORE INTO...')
cursor.execute('SELECT id FROM table...')
id = cursor.fetchone()[0]
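For reference, the same two-step pattern is easy to try out against SQLite, whose INSERT OR IGNORE plays the role of MySQL's INSERT IGNORE; the urls table and its UNIQUE column here are assumptions for the illustration:

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
# The UNIQUE constraint on url is what makes the IGNORE clause meaningful
cur.execute('CREATE TABLE urls('
            'id INTEGER PRIMARY KEY AUTOINCREMENT, '
            'url TEXT UNIQUE, country TEXT)')

def get_or_create_id(url, country):
    # Step 1: insert, silently skipping duplicates
    cur.execute('INSERT OR IGNORE INTO urls (url, country) VALUES (?, ?)',
                (url, country))
    # Step 2: fetch the PK whether the row is new or pre-existing
    cur.execute('SELECT id FROM urls WHERE url = ?', (url,))
    return cur.fetchone()[0]

first_id = get_or_create_id('http://example.com', 'US')
second_id = get_or_create_id('http://example.com', 'US')  # duplicate, same id
```

Both calls return the same primary key; the second insert is ignored.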
I am having trouble finding documentation on how to pass Python variables into a SQL statement. The official docs don't go into that at http://dev.mysql.com/doc/refman/5.0/en/user-variables.html.
How would I do the following in python and valid SQL?
id, provider_id, title = row[0], row[2], row[4]
cursor.execute("INSERT INTO vendor_id VALUES (%s,%s,%s);"%(id, provider_id, title))
You're on the right track, but it should look like this:
id, provider_id, title = row[0], row[2], row[4]
cursor.execute("INSERT INTO vendor_id VALUES (%s, %s, %s)", (id, provider_id, title))
The Python SQL database API handles the escaping and quoting of variables for you, so the string formatting operator is unnecessary; as used in your version, it also performs no quoting at all. Don't interpolate values with %; pass them to execute() as a separate sequence instead. The reason is that the variables are bound rather than formatted into the query text, so an injection cannot occur.
cursor.execute("INSERT INTO vendor_id VALUES (%s, %s, %s)", (id, provider_id, title))
cursor.execute("INSERT INTO vendor_id VALUES (?, ?, ?)", (id, provider_id, title))
should work too.
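As a runnable illustration with the standard-library sqlite3 driver (which uses the ? qmark paramstyle; the vendor_id schema here is a stand-in matching the question's column count), note that binding even copes with values that would break naive string formatting:

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('CREATE TABLE vendor_id (id INTEGER, provider_id INTEGER, title TEXT)')

row = (1, 42, "O'Reilly")  # the embedded quote is handled safely by binding
cur.execute('INSERT INTO vendor_id VALUES (?, ?, ?)', row)
con.commit()

stored = cur.execute('SELECT * FROM vendor_id').fetchone()
```

The row round-trips intact; formatting the same value into the query string with % would have produced a syntax error (or worse, an injection point).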
I am using Elixir to connect to MSSQL database. The database has a table with a computed column in it. However, when I update other columns in the object and commit the changes, python tells me I can't insert to the computed column.
I am using autoload so in my model:
class Slot(Entity):
using_options(tablename='tbScheduleSlots', autoload=True)
using_table_options(schema='sch')
I create a Slot and give it some values then commit:
ss = Slot(StartDateTime='2012-01-01 13:00:00:000', Program_ID=1234, etc)
session.commit()
Important note!! I do not give the ss object any value for EndDateTime because that is a computed field. So effectively, I'm not passing anything back to the database for that field.
Error:
sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', '[42000] [FreeTDS][SQL Server]The column "EndDateTime" cannot be modified because it is either a computed column or is the result of a UNION operator. (271) (SQLPrepare)') 'INSERT INTO sch.[tbScheduleSlots] ([Program_ID], [SlotType_ID], [StartDateTime], [EndDateTime], [Duration], [Description], [Notes], [State], [MasterSlot_ID]) OUTPUT inserted.[ID_ScheduleSlot] VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)' (5130, 1, '2012-01-01 13:00:00:000', None, None, None, None, 2, None)
Ehhh, I'm not a Python programmer, but it appears that this line:
using_options(tablename='tbScheduleSlots', autoload=True)
which is using autoload is probably what is adding [EndDateTime] to the INSERT statement (as shown in your error message). That is the line that tells Python the metadata of the underlying table (i.e. the fields in the table). Look for a way to define the columns to be inserted manually; relying on autoload to build the INSERT automatically includes [EndDateTime] in the underlying query.
I am using Sqlite3 with the Flask microframework, but this question concerns only the Sqlite side of things.
Here is a snippet of the code:
g.db.execute('INSERT INTO downloads (name, owner, mimetype) VALUES (?, ?, ?)', [name, owner, mimetype])
file_entry = query_db('SELECT last_insert_rowid()')
g.db.commit()
The downloads table has another column with the following attributes: id integer primary key autoincrement,
If two people write at the same time the code above could produce errors.
Transactions can be messy. In Sqlite is there a neat built in way of returning the primary key generated after doing an INSERT ?
The way you're doing it is valid. There won't be a problem if the above snippet is executed concurrently by two scripts: last_insert_rowid() returns the rowid of the latest INSERT statement for the connection that calls it. You can also get the rowid from the lastrowid attribute of the cursor returned by g.db.execute(...).
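A quick sketch of that per-connection scoping (a file-backed database so two connections can share one table; the downloads schema follows the question):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')
a = sqlite3.connect(path)
b = sqlite3.connect(path)
a.execute('CREATE TABLE downloads('
          'id INTEGER PRIMARY KEY AUTOINCREMENT, '
          'name TEXT, owner TEXT, mimetype TEXT)')
a.commit()

# Interleaved inserts from two separate connections
cur_a = a.execute("INSERT INTO downloads (name, owner, mimetype) "
                  "VALUES ('f1', 'alice', 'text/plain')")
a.commit()
cur_b = b.execute("INSERT INTO downloads (name, owner, mimetype) "
                  "VALUES ('f2', 'bob', 'text/plain')")
b.commit()

# Each connection reports its own last insert, unaffected by the other
id_a = a.execute('SELECT last_insert_rowid()').fetchone()[0]
id_b = b.execute('SELECT last_insert_rowid()').fetchone()[0]
```

Connection a still sees 1 and connection b sees 2, even though b inserted after a; cur_a.lastrowid and cur_b.lastrowid give the same answers without the extra SELECT.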