How can I do an upsert (update and insert) query in MySQL Python?

I'm looking for a simple upsert (update/insert).
I have a books table that I insert rows into, but the next time I want to insert a row I don't want to insert the data again; I just want to update the required columns if the row already exists, and create a new row if it doesn't.
How can I do this in MySQL-python?
cursor.execute("""INSERT INTO books (book_code,book_name,created_at,updated_at) VALUES (%s,%s,%s,%s)""", (book_code,book_name,curr_time,curr_time,))

MySQL has a REPLACE statement:
REPLACE works exactly like INSERT, except that if an old row in the
table has the same value as a new row for a PRIMARY KEY or a UNIQUE
index, the old row is deleted before the new row is inserted.
cursor.execute("""
REPLACE INTO books (book_code,book_name,created_at,updated_at)
VALUES (%s,%s,%s,%s)""",
(book_code,book_name,curr_time,curr_time,)
)
UPDATE: According to a comment by @Yo-han, REPLACE works like DELETE followed by INSERT, not like a true upsert. Here's an alternative using INSERT ... ON DUPLICATE KEY UPDATE:
cursor.execute("""
INSERT INTO books (book_code,book_name,created_at,updated_at)
VALUES (%s,%s,%s,%s)
ON DUPLICATE KEY UPDATE book_name=%s, created_at=%s, updated_at=%s
""", (book_code, book_name, curr_time, curr_time, book_name, curr_time, curr_time))

Related

Insert from a list after checking in mysql if duplicate

From my list, I am inserting row by row, but I need to perform a check before inserting into the database.
Let's imagine this is my list:
unique_hrefs = [
    "https://www.linkedin.com/in/123456",
    "https://www.linkedin.com/in/789013556",
    "https://www.linkedin.com/in/888888888",
    "https://www.linkedin.com/in/082b62112",
    "https://www.linkedin.com/in/5625a1a",
    "https://www.linkedin.com/in/123456",
    "https://www.linkedin.com/in/0000000341454",
    "https://www.linkedin.com/in/55555555",
    "https://www.linkedin.com/in/55555555",
    "https://www.linkedin.com/in/66666666",
    "https://www.linkedin.com/in/666677777"
]
I need to check the table for the same string before inserting, and only insert if the string does not already exist.
I have my code to insert, but I'm struggling to find out how to do the check before inserting:
query= "INSERT INTO name_links (ulr_name) VALUES (%s)
cursor.executemany(query,[(r,) for r in unique_hrefs])
mydb.commit()
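One common way to handle this is to let the database reject duplicates instead of checking in Python. A sketch, assuming ulr_name has (or can be given) a UNIQUE index so duplicate inserts are silently skipped:
# Assumes name_links.ulr_name carries a UNIQUE index, so MySQL drops
# duplicate URLs instead of raising an integrity error.
query = "INSERT IGNORE INTO name_links (ulr_name) VALUES (%s)"
cursor.executemany(query, [(r,) for r in unique_hrefs])
mydb.commit()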

Is there a way in sqlite to insert the exact value of the primary key (or another default value) into a column while inserting fresh values?

I believe that the row must first be created in order to get an auto-increment PK. Then I could replace the one column with the generated PK value, but I am not getting the proper result.
insert = "insert into table(name, status) values (:Name, :Status)
params = {'Name': Jack,
'status' : active
}
db.execute(insert, params)
db.commit()
insert_type = "replace into table (type) values (select id from table)"
db.execute(insert_type)
db.commit()
So the table has columns id (auto increment), name, status and type (default 0 or the id).
In short: insert the primary key column value into a non-primary-key column of the same table.
You can use last_insert_rowid() to get the rowid of the row that was just inserted. For example, your second query could be:
update mytable set type = id where id = last_insert_rowid()
This only works if the update is executed right after the insert and no other thread is modifying the db at the same time.
Another solution could be to set type to null (either as default or in the inserted values) then use coalesce(type, id) in the queries.
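A minimal sketch of that approach from Python's sqlite3 module, using cursor.lastrowid (the driver-side counterpart of last_insert_rowid()); the file name and the table name mytable are assumptions, and the column layout follows the question:
import sqlite3

connection = sqlite3.connect("example.db")   # hypothetical database file
cursor = connection.cursor()
# Table layout taken from the question: id (auto increment), name, status, type.
cursor.execute("""CREATE TABLE IF NOT EXISTS mytable
                  (id INTEGER PRIMARY KEY AUTOINCREMENT,
                   name TEXT, status TEXT, type INTEGER DEFAULT 0)""")
# Insert the row first so the auto-increment id gets generated...
cursor.execute("INSERT INTO mytable (name, status) VALUES (:Name, :Status)",
               {"Name": "Jack", "Status": "active"})
new_id = cursor.lastrowid                    # id of the row just inserted
# ...then copy that id into the type column of the same row.
cursor.execute("UPDATE mytable SET type = ? WHERE id = ?", (new_id, new_id))
connection.commit()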
I think you can use a DEFAULT clause with a blank value to create a default value for any column.

Insert values in table with two execute commands

I'm trying to insert values into one MySQL table using Python.
First I insert values from a csv file with:
sql = "INSERT INTO mydb.table (time, day, number) VALUES %r" % (tuple(values),)
cursor.execute(sql)
Then I insert another value into the same table and, ideally, the same row:
sql = "INSERT INTO mydb.table(name) values(%s)"
cursor.execute(sql)
With this I get the inserts in two different rows.
But I need the value to end up in the same row, without combining everything into a single statement like sql = "INSERT INTO mydb.table (time, day, number, name) VALUES %r" % (tuple(values),).
Is there a way to insert values into the same row in two 'insert statements'?
INSERT will always add a new row. If you want to change values in this row, you have to specify a unique identifier (key) in the WHERE clause to access this row and use UPDATE or REPLACE instead.
When using REPLACE you need to be careful if your table contains an auto_increment column, since a new value will be generated.
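For example, a sketch of the insert-then-update pattern, assuming an auto_increment primary key named id, a variable name holding the extra value, and mydb.mytable standing in for the question's table (all assumptions):
# Insert the CSV-derived columns first...
cursor.execute(
    "INSERT INTO mydb.mytable (time, day, number) VALUES (%s, %s, %s)",
    tuple(values),              # expects exactly three values
)
row_id = cursor.lastrowid       # auto_increment id generated by the insert
# ...then update the very same row with the remaining value.
cursor.execute(
    "UPDATE mydb.mytable SET name = %s WHERE id = %s",
    (name, row_id),
)
connection.commit()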

Update one postgres table from another postgres table

I am loading a batch csv file into Postgres using Python (say, Table A).
I am using pandas to upload the data in chunks, which is quite fast.
for chunk in pd.read_csv(csv_file, sep='|',chunksize=chunk_size,low_memory=False):
Now I want to update another table (say Table B) from A, based on the following rules:
if there are any new records in Table A which are not in Table B, then insert them as new records in Table B (based on the Id field)
if the values change in Table A for an ID which exists in Table B, then update the records in Table B from Table A
(There are several tables which I need to update based on Table A.)
I am able to do that using the code below and then looping through each row, but Table A always has around 1,825,172 records and it becomes extremely slow. Can any forum member help speed this up or suggest an alternate approach to achieve the same?
cursor.execute(sql)
records = cursor.fetchall()
for row in records:
    id = 0 if row[0] is None else row[0]   # Use this to match with Table B and decide insert or update
    id2 = 0 if row[1] is None else row[1]
    id2 = 0 if row[2] is None else row[2]
You could leverage Postgres upsert syntax (INSERT ... ON CONFLICT), like:
insert into tableB (id, col1, col2)
select ta.id, ta.col1, ta.col2 from tableA ta
on conflict (id) do update
set col1 = excluded.col1, col2 = excluded.col2
You should do this completely inside the DBMS, not loop through the records inside your Python script. That allows your DBMS to optimize better.
UPDATE TableB
SET x = TableA.y
FROM TableA
WHERE TableA.id = TableB.id;

INSERT INTO TableB (id, x)
SELECT id, y
FROM TableA
WHERE TableA.id NOT IN (SELECT id FROM TableB);
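Either form can be sent to Postgres as a single set-based statement from Python. A sketch with psycopg2, using the ON CONFLICT form; the connection string and the column names id, col1, col2 are assumptions:
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")   # hypothetical connection details
with conn, conn.cursor() as cur:
    # One round trip: Postgres performs the whole upsert itself.
    cur.execute("""
        INSERT INTO tableB (id, col1, col2)
        SELECT ta.id, ta.col1, ta.col2
        FROM tableA ta
        ON CONFLICT (id) DO UPDATE
            SET col1 = EXCLUDED.col1,
                col2 = EXCLUDED.col2
    """)
# leaving the "with conn" block commits the transaction on success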

Insert Table Field Values from Dictionary Values with corresponding Keys

Would it be possible to insert dictionary values, one at a time, into a specific field/column of a sqlite3 database table?
The fields/columns were created with the database in a previous step.
I would prefer to use a for loop, but I haven't found the right sqlite3 "command" that selects the column (which is the same as the dictionary key) and inserts the corresponding value (the dictionary value).
import sqlite3

with sqlite3.connect(db_full_path) as connection:
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM {}".format(db_table))
    db_fields = ','.join('?' for tab in cursor.description)  # i.e. ?,?,?,?
    cursor.executemany("INSERT INTO {} VALUES ({})"
                       .format(db_table, db_fields), (ORDERED_DATA_VALUES,))
    connection.commit()
'cursor.execute' and 'cursor.executemany' both require a predefined number of columns and a sorted list with the same number of items in the right order, which I find less than ideal for my purpose.
I'd much rather iterate over a dictionary and insert the values one at a time, but into the same row:
for key, value in NOT_ORDERED_DATA_VALUES.items():
    # insert value into corresponding field/column (key)
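One way to keep the dictionary itself as the source of the column names is to build the column list and named placeholders from its keys, so the whole row goes in with one statement and ordering no longer matters. A sketch, assuming every key matches a column in the target table; the table, file name and data below are made up for the example:
import sqlite3

NOT_ORDERED_DATA_VALUES = {"title": "Dune", "author": "Herbert"}   # example data
db_table = "books"                                                 # example table

columns = ", ".join(NOT_ORDERED_DATA_VALUES)                       # "title, author"
placeholders = ", ".join(":" + key for key in NOT_ORDERED_DATA_VALUES)

with sqlite3.connect("example.db") as connection:
    cursor = connection.cursor()
    cursor.execute("CREATE TABLE IF NOT EXISTS books (title TEXT, author TEXT)")
    # The dict is passed directly; sqlite3 matches :title, :author to its keys.
    cursor.execute(
        "INSERT INTO {} ({}) VALUES ({})".format(db_table, columns, placeholders),
        NOT_ORDERED_DATA_VALUES,
    )
    connection.commit()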
