Python MySQL Syntax Error - DUPLICATE KEY UPDATE

I have the following MySQL + Python code:
data = json.loads(decoded_response)
insert_values = []
cursor = cnx.cursor()
add_data = """INSERT INTO pb_ya_camps (camp_id,camp_name) VALUES (%s,%s) ON DUPLICATE KEY UPDATE VALUES (%s,%s)"""
for jsonData in data["data"]:
    if "No" in jsonData["StatusArchive"]:
        print("...processing campaign ", jsonData["Name"], "into the database.")
        insert_values.append((jsonData["CampaignID"], jsonData["Name"]))
try:
    cursor.executemany(add_data, (insert_values, insert_values))
Which at the moment produces the following error:
MYSQL ERROR: Failed processing format-parameters; 'MySQLConverter' object has no attribute '_tuple_to_mysql'
As far as I understand, it does not like the following:
cursor.executemany(add_data,(insert_values,insert_values))
I believe you can't do that in Python... but my problem probably derives from improper MySQL syntax. Could you please take a look at it?
INSERT INTO pb_ya_camps (camp_id,camp_name) VALUES (%s,%s) ON DUPLICATE KEY UPDATE VALUES (%s,%s)
I am not sure how to properly use the ON DUPLICATE KEY UPDATE without having to re-specify all the values... <<<--- that is the main problem.
I have read the following: LINK TO PREVIOUS EXAMPLE. However, I don't want to rely on ON DUPLICATE KEY UPDATE col1 = VALUES(col1), because in a further part of my script I have too many columns to keep listing col = VALUES(col) for each one...
Thank you!

Following the MySQL Reference Manual, the syntax for INSERT ... ON DUPLICATE KEY UPDATE is:
INSERT INTO table (`a`, `b`, `c`)
VALUES (1, 2, 3)
ON DUPLICATE KEY UPDATE `c` = `c` + 1;
So in your case (note that VALUES() takes a column name, not a placeholder; VALUES(`camp_name`) refers to the value the INSERT would have written to camp_name, so no data has to be passed twice):
INSERT INTO `pb_ya_camps` (`camp_id`, `camp_name`)
VALUES (%s,%s)
ON DUPLICATE KEY UPDATE `camp_name` = VALUES(`camp_name`)
More information about the syntax at the MySQL Reference Manual.
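Putting it together in Python (a minimal sketch, assuming mysql-connector-python as in the question; the key points are that executemany takes the parameter sequence once, and that VALUES(camp_name) spares you from re-passing the data):
add_data = """INSERT INTO pb_ya_camps (camp_id, camp_name)
              VALUES (%s, %s)
              ON DUPLICATE KEY UPDATE camp_name = VALUES(camp_name)"""
insert_values = []
for jsonData in data["data"]:
    if "No" in jsonData["StatusArchive"]:
        insert_values.append((jsonData["CampaignID"], jsonData["Name"]))
# One sequence of row tuples; each tuple fills exactly the two %s placeholders
cursor.executemany(add_data, insert_values)
cnx.commit()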

Related

SQLAlchemy constructing incorrect SQL query when not reloading DataFrame from CSV in Python

I'm using Python 3, SQLAlchemy, and a MariaDB server.
I'm getting data from a REST server in JSON format, parsing it to a dictionary, and then to a DataFrame in pandas.
The error occurs when I don't save the DataFrame to CSV and then reload it, like this:
df.to_csv("temp_save.csv", index=False)
df = pd.read_csv("temp_save.csv")
When the previous lines are commented out, I get the following error:
(pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '), (), (), 0, '2022-01-26T17:32:49Z', 29101, 1, 3, 2, '2022-01-25T17:32:49Z', '2' at line 1")
[SQL: INSERT INTO `TicketRequesters` (subject, group_id, department_id, category, sub_category, item_category, requester_id, responder_id, due_by, fr_escalated, deleted, spam, email_config_id, fwd_emails, reply_cc_emails, cc_emails, is_escalated, fr_due_by, id, priority, status .....
VALUES (%(subject_m0)s, %(group_id_m0)s, %(department_id_m0)s, %(category_m0)s, %(sub_category_m0)s, %(item_category_m0)s, %(requester_id_m0)s, %(responder_id_m0)s, %(due_by_m0)s, %(fr_escalated_m0)s, %(deleted_m0)s, %(spam_m0)s, %(email_config_id_m0)s, %(fwd_emails_m0)s, %(reply_cc_emails_m0)s, %(cc_emails_m0)s, %(is_escalated_m0)s, %(fr_due_by_m0)s, %(id_m0)s, %(priority_m0)s, %(status_m0)s, %(source_m0)s, %(created_at_m0)s, %(updated_at_m0)s, %(requested_for_id_m0)s, %(to_emails_m0)s, %(type_m0)s, %(description_text_m0)s, %(custom_fields_localidad_m0)s, %(custom_fields_hora_de_la_falla_m0)s, %(custom_fields_hubo_alguna_modificacin_en_el_firewall_o_en_su_pl_m0)s, %(custom_fields_el_incidente_presentado_corresponde_a_m0)s, %(custom_fields_client_type_m0)s, %(custom_fields_riesgos_del_cambio_o_caso_m0)s, %(custom_fields_solucin_del_caso_m0)s, %(custom_fields_estado_de_cierre_m0)s, %(custom_fields_numero_de_oportunidad_m0)s, %(custom_fields_cuales_son_sus_servicios_afectados_especificar_si_m0)s, %(custom_fields_numero_de_ticket_de_cambio_m0)s, %(custom_fields_cantidad_estimada_de_personas_o_departamentos_afe_m0)s, %(cu.....
As shown, "_m0" is appended to each %()s placeholder in VALUES; I noticed the number grows up to the number of rows I'm trying to upsert:
%(stats_created_at_m29)s, %(stats_updated_at_m29)s, %(stats_ticket_id_m29)s, %(stats_opened_at_m29)s, %(stats_group_escalated_m29)s, %(stats_inbound_count_m29)s, %(stats_status_updated_at_m29)s, %(stats_outbound_count_m29)s, %(stats_pending_since_m29)s, %(stats_resolved_at_m29)s, %(stats_closed_at_m29)s, %(stats_first_assigned_at_m29)s, %(stats_assigned_at_m29)s, %(stats_agent_responded_at_m29)s, %(stats_requester_responded_at_m29)s, %(stats_first_responded_at_m29)s, %(stats_first_resp_time_in_secs_m29)s, %(stats_resolution_time_in_secs_m29)s, %(description_m29)s, %
This is the Python code I'm using, just in case:
engine = db.create_engine(
    f"mariadb+pymysql://{user}:{password}@{host}/{database_name}?charset=utf8mb4"
)
columndict: dict = {"id": Column("id", Integer, primary_key=True)}
# Prepare Column List, check columndict if exists, get Column object from dict
column_list = [columndict.get(name, Column(name, String(256))) for name in columns]
# Get an instance of the table
# TODO: Instance table from metadata (without having to give it columns)
instanceTable = Table(table_name, metadata, *column_list)
metadata.create_all(engine)
# Schema created
# Create Connection
conn = engine.connect()
# Prepare statement
insert_stmt = db.dialects.mysql.insert(instanceTable).values(values)
on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
    data=insert_stmt.inserted, status="U"
)
# Execute statement
result = conn.execute(on_duplicate_key_stmt)
# DATA UPSERTED
I investigated the limitations of MySQL/MariaDB with UTF-8 encoding, and the correct charset parameter is ?charset=utf8mb4; I thought this might be related to the query-construction issue.
EDIT: I found a fix for this error: replacing empty lists and empty strings in the DataFrame with None.
The problem was caused by sending empty lists [] and empty strings '' among the SQLAlchemy values.
Fixed by replacing those items with None.
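For illustration, a sketch of that cleanup step (my wording, not the poster's code; assumes df is the pandas DataFrame being upserted):
# Map empty strings and empty lists to None so SQLAlchemy binds NULL
# instead of rendering the empty '(), ()' fragments seen in the error.
# (On pandas >= 2.1 the same method is called DataFrame.map.)
df = df.applymap(lambda v: None if v == '' or (isinstance(v, list) and not v) else v)
values = df.to_dict(orient="records")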

Psycopg2/PostgreSQL 11.9: Syntax error at or near "::" when performing string->date type cast

I am using psycopg2 to create a table partition and insert some rows into this newly created partition. The table is RANGE partitioned on a date type column.
Psycopg2 code:
conn = connect_db()
cursor = conn.cursor()
sysdate = datetime.now().date()
sysdate_str = sysdate.strftime('%Y%m%d')
schema_name = "schema_name"
table_name = "transaction_log"
# Add partition if not exists for current day
sql_add_partition = sql.SQL("""
    CREATE TABLE IF NOT EXISTS {table_partition}
    PARTITION of {table}
    FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(
    table=sql.Identifier(schema_name, table_name),
    table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'),
)
print(cursor.mogrify(sql_add_partition, {'sysdate': dt.date(2015, 6, 30)}))
cursor.execute(sql_add_partition, {'sysdate': sysdate})
Formatted output of cursor.mogrify():
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01'::date) TO (maxvalue);
Error received:
ERROR: syntax error at or near "::"
LINE 3: for values FROM ('2021-10-01'::date) TO (maxvalue);
Interestingly enough, psycopg2 appears to be casting the string '2021-10-01' to a date with the ::date syntax. According to the PostgreSQL documentation this appears to be valid (although there are no explicit examples in the docs), yet executing the statement, both with psycopg2 and in a PostgreSQL query editor, yields this syntax error. However, executing the following statement in a PostgreSQL SQL editor is successful:
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Any ideas on how to get psycopg2 to format the query correctly?
To follow up on @LaurenzAlbe's comment:
sql_add_partition = sql.SQL("""
    CREATE TABLE IF NOT EXISTS {table_partition}
    PARTITION of {table}
    FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(
    table=sql.Identifier(schema_name, table_name),
    table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'),
)
print(cursor.mogrify(sql_add_partition, {'sysdate': '2021-10-01'}))

# OR

sql_add_partition = sql.SQL("""
    CREATE TABLE IF NOT EXISTS {table_partition}
    PARTITION of {table}
    FOR VALUES FROM ({sysdate}) TO (maxvalue);
""").format(
    table=sql.Identifier(schema_name, table_name),
    table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'),
    sysdate=sql.Literal('2021-10-01'),
)
print(cursor.mogrify(sql_add_partition))
#Formatted as
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Pass the date in as a literal value instead of a date object. psycopg2 does automatic adaptation of date(time) objects to Postgres date/timestamp types (see "Datetime adaptation" in the psycopg2 docs), which is what is biting you.
UPDATE
Per my comment, the reason why it needs to be a literal is explained in the CREATE TABLE documentation:
Each of the values specified in the partition_bound_spec is a literal, NULL, MINVALUE, or MAXVALUE. Each literal value must be either a numeric constant that is coercible to the corresponding partition key column's type, or a string literal that is valid input for that type.
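In other words, the simplest fix for the original code is to bind a plain string rather than a date object (a sketch, assuming sysdate is the datetime.date from the question):
# psycopg2 adapts a str as a quoted literal, without appending a ::date cast
cursor.execute(sql_add_partition, {'sysdate': sysdate.isoformat()})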

Insert record from list if not exists in table

cHandler = myDB.cursor()
cHandler.execute('select UserId,C1,LogDate from DeviceLogs_12_2019')  # data from remote sql server database
curs = connection.cursor()
curs.execute("""select * from biometric""")  # data from my database table
lst = []
result = cHandler.fetchall()
for row in result:
    lst.append(row)
lst2 = []
result2 = curs.fetchall()
for row in result2:
    lst2.append(row)
t = []
r = [elem for elem in lst if not elem in lst2]
for i in r:
    print(i)
    t.append(i)
for i in t:
    frappe.db.sql("""Insert into biometric(UserId,C1,LogDate) select '%s','%s','%s' where not exists(select * from biometric where UserID='%s' and LogDate='%s')""", (i[0], i[1], i[2], i[0], i[2]), as_dict=1)
I am trying above code to insert data into my table if record not exists but getting error :
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '1111'',''in'',''2019-12-03 06:37:15'' where not exists(select * from biometric ' at line 1")
Is there anything I am doing wrong or any other way to achieve this?
It appears you have potentially four problems:
1. There is a FROM clause missing between the select and the where not exists.
2. When using a prepared statement, you do not enclose your placeholder arguments, %s, within quotes; your SQL should be as shown under "SQL:" below.
3. Your loop:
t = []
r = [elem for elem in lst if not elem in lst2]
for i in r:
    print(i)
    t.append(i)
If you are trying to only include rows from the remote site that will not be duplicates, then you should explicitly check the two fields that matter, i.e. UserId and LogDate. But what is the point, since your SQL is taking care of excluding these duplicate rows? Also, what is the point of copying everything from r to t?
SQL:
Insert into biometric(UserId,C1,LogDate) select %s,%s,%s from DUAL where not exists(select * from biometric where UserID=%s and LogDate=%s)
4. Even with the above SQL there is a wrinkle: if the not exists clause is false, the select %s,%s,%s from DUAL ... returns no rows, so nothing is inserted for that record; the statement silently does nothing rather than signaling a duplicate.
If your concern is getting an error due to duplicate keys because (UserId, LogDate) is either a UNIQUE or PRIMARY KEY, then add the IGNORE keyword to the INSERT statement; then if a row with the key already exists, the insertion will be ignored. But there is no way of knowing, since you have not provided this information:
for i in t:
    frappe.db.sql("Insert IGNORE into biometric(UserId,C1,LogDate) values(%s,%s,%s)", (i[0], i[1], i[2]))
If you do not want multiple rows with the same (UserId, LogDate) combination, then you should define a UNIQUE KEY on these two columns, and then the above SQL should be sufficient. There is also an ON DUPLICATE KEY UPDATE ... variation of the INSERT statement where, if the key exists, you can do an update instead (look this up).
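For example (a sketch, assuming a UNIQUE KEY on (UserId, LogDate); here C1 stands in for whichever non-key columns you want refreshed):
INSERT INTO biometric (UserId, C1, LogDate)
VALUES (%s, %s, %s)
ON DUPLICATE KEY UPDATE C1 = VALUES(C1)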
If you don't have a UNIQUE KEY defined on these two columns, or you need to print out the rows that are being updated, then you do need to test for the presence of the existing keys. This would be the way to do it:
cHandler = myDB.cursor()
cHandler.execute('select UserId,C1,LogDate from DeviceLogs_12_2019')  # data from remote sql server database
rows = cHandler.fetchall()
curs = connection.cursor()
for row in rows:
    curs.execute("select UserId from biometric where UserId=%s and LogDate=%s", (row[0], row[2]))  # row already in biometric table?
    biometric_row = curs.fetchone()
    if biometric_row is None:  # no, it is not
        print(row)
        frappe.db.sql("Insert into biometric(UserId,C1,LogDate) values(%s, %s, %s)", (row[0], row[1], row[2]))

sqlite3 INSERT INTO fails if values is a 'itertools.chain' object

I am a newbie to sqlite3 (Python user).
I have already stored data in a database using the method below, but it is not working this time. The INSERT INTO executes with no error reported, yet nothing is stored in the db.
I searched this topic a lot on this website, focusing on "no commit". But I am pretty sure I commit and close the connection correctly. After the INSERT action, I can find table f65 in my db with its header (column names), but no data (10 columns x 4k rows expected).
Key code snippet below.
df = pd.DataFrame(pd.read_excel(r'/Users/' + username + r'/Documents/working/addf6-5.xlsx', header=0))
df = df.replace('#NUM!', '')
value = chain(reversed(grouplist2), df.values.tolist())
for x in value:
    x[4] = str(x[4])
    x[7] = str(x[7])
conn = sqlite3.connect('veradb.db')
c = conn.cursor()
c.execute("DROP TABLE IF EXISTS f65")
conn.commit()
c.close()
conn.close()
conn = sqlite3.connect('veradb.db')
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS f65 ('offer', 'code', 'desc', 'engine', 'cost', 'supplier', 'remark', 'podate', 'empty', 'border')")
c.executemany("INSERT INTO f65 VALUES (?,?,?,?,?,?,?,?,?,?)", value)
conn.commit()
c.close()
conn.close()
More details for explanation:
The cached data is in addf6-5; I fetched new data into grouplist2 (a list) and used chain(reversed(grouplist2), df.values.tolist()) to combine the data.
Why did I use
for x in value:
    x[4] = str(x[4])
    x[7] = str(x[7])
Because PyCharm reported "Error binding parameter 4 - probably unsupported type." and "Error binding parameter 7 - probably unsupported type." So I convert those fields with str(), though they should be TEXT in SQLite by default.
I tried to test value via
for x in value:
    x[4] = str(x[4])
    x[7] = str(x[7])
    print(x)
and found the rows printed correctly. Maybe this proves value (the lists) is correct?
Some people may wonder whether type declarations are missing in the CREATE TABLE statement; SQLite allows columns to be declared without a type, and I have executed this code on other files and it worked.
Looking forward to your help. Thank you in advance.
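One observation (mine, not from the post): chain() returns a one-shot iterator, so the str() conversion loop, and any print test, consumes it before executemany runs, leaving nothing to insert. A sketch of materializing it into a list first:
value = list(chain(reversed(grouplist2), df.values.tolist()))  # materialize once, iterate many times
for x in value:
    x[4] = str(x[4])
    x[7] = str(x[7])
c.executemany("INSERT INTO f65 VALUES (?,?,?,?,?,?,?,?,?,?)", value)
conn.commit()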

Can't insert into SQLite3 using dictionary

I'm trying to make a function which inserts a row into the SQLite3 database using dictionary.
I found a way to do that here on SO, but unfortunately it does not work, and there is some problem I can't figure out.
def insert_into_table(self, data):
    for key in data.keys():  # ADDING COLUMNS IF NECESSARY
        columns = self.get_column_names()
        column = key.replace(' ', '_')
        if column not in columns:
            self.cur.execute("""ALTER TABLE vsetkyfirmy ADD COLUMN {} TEXT""".format(column.encode('utf-8')))
            self.conn.commit()
    new_data = {}
    for v, k in data.iteritems():  # new dictionary with renamed keys (column = key.replace(' ', '_'))
        new_data[self.remake_name(v)] = k
    columns = ', '.join(new_data.keys())
    placeholders = ':' + ', :'.join(new_data.keys())
    query = 'INSERT INTO vsetkyfirmy (%s) VALUES (%s)' % (columns, placeholders)
    self.cur.execute(query, new_data)
    self.conn.commit()
EXCEPTION:
self.cur.execute(query, new_data)
sqlite3.ProgrammingError: You did not supply a value for binding 1.
When I print query and new_data, everything seems correct:
INSERT INTO vsetkyfirmy (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma) VALUES (:Obchodné_meno, :IČ_DPH, :Sídlo, :PSČ, :Spoločník, :IČO, :Základné_imanie, :Konateľ, :Ročný_obrat, :Dátum_vzniku, :Právna_forma)
{u'Obchodn\xe9_meno': 'PRspol. s r.o.', u'I\u010c_DPH': 'S9540', u'S\xeddlo': u'Bansk\xe1 Bystrica, Orembursk\xe1 2', u'PS\u010c': '97401', u'Spolo\u010dn\xedk': u'Dana Dzurianikov\xe1', u'I\u010cO': '3067', u'Z\xe1kladn\xe9_imanie': u'142899 \u20ac', u'Konate\u013e': 'Miroslav Dz', u'Ro\u010dn\xfd_obrat': '2014: 482 EUR', u'D\xe1tum_vzniku': '01.12.1991 ', u'Pr\xe1vna_forma': u'Spolo\u010dnos\u0165 s ru\u010den\xedm obmedzen\xfdm'}
EDIT: So I've tried to remove ':' from the query so it looks like:
INSERT INTO vsetkyfirmy (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma) VALUES (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma)
And it returns sqlite3.OperationalError: no such column: Obchodné_meno.
I don't know where the problem is; could it be the encoding?
You are calling encode('utf-8') when creating the table, but not when inserting.
SQLite indeed uses UTF-8 internally, but the sqlite3 module automatically handles conversion from/to Python's internal Unicode string representation. Don't try to re-encode manually.
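A minimal sketch of the fix (assuming Python 2, as in the question): let sqlite3 handle the Unicode column name, so the ALTER TABLE name matches the keys later used for the named placeholders:
# No manual .encode('utf-8'); sqlite3 accepts unicode SQL directly
self.cur.execute(u"ALTER TABLE vsetkyfirmy ADD COLUMN {} TEXT".format(column))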
