I'm trying to write a function that inserts a row into an SQLite3 database using a dictionary.
I found a way to do that here on SO, but unfortunately it does not work, and there is some problem I can't figure out.
def insert_into_table(self, data):
    for key in data.keys():  # ADDING COLUMNS IF NECESSARY
        columns = self.get_column_names()
        column = key.replace(' ', '_')
        if column not in columns:
            self.cur.execute("""ALTER TABLE vsetkyfirmy ADD COLUMN {} TEXT""".format(column.encode('utf-8')))
            self.conn.commit()

    new_data = {}
    for k, v in data.iteritems():  # new dictionary with remade names (column = key.replace(' ', '_'))
        new_data[self.remake_name(k)] = v

    columns = ', '.join(new_data.keys())
    placeholders = ':' + ', :'.join(new_data.keys())
    query = 'INSERT INTO vsetkyfirmy (%s) VALUES (%s)' % (columns, placeholders)
    self.cur.execute(query, new_data)
    self.conn.commit()
EXCEPTION:

    self.cur.execute(query, new_data)
sqlite3.ProgrammingError: You did not supply a value for binding 1.
When I print query and new_data, everything seems correct:
INSERT INTO vsetkyfirmy (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma) VALUES (:Obchodné_meno, :IČ_DPH, :Sídlo, :PSČ, :Spoločník, :IČO, :Základné_imanie, :Konateľ, :Ročný_obrat, :Dátum_vzniku, :Právna_forma)
{u'Obchodn\xe9_meno': 'PRspol. s r.o.', u'I\u010c_DPH': 'S9540', u'S\xeddlo': u'Bansk\xe1 Bystrica, Orembursk\xe1 2', u'PS\u010c': '97401', u'Spolo\u010dn\xedk': u'Dana Dzurianikov\xe1', u'I\u010cO': '3067', u'Z\xe1kladn\xe9_imanie': u'142899 \u20ac', u'Konate\u013e': 'Miroslav Dz', u'Ro\u010dn\xfd_obrat': '2014: 482 EUR', u'D\xe1tum_vzniku': '01.12.1991 ', u'Pr\xe1vna_forma': u'Spolo\u010dnos\u0165 s ru\u010den\xedm obmedzen\xfdm'}
EDIT: So I've tried removing ':' from the query, so it looks like:
INSERT INTO vsetkyfirmy (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma) VALUES (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma)
And that returns sqlite3.OperationalError: no such column: Obchodné_meno.
I don't know where the problem is; could it be in the encoding?
You are calling encode('utf-8') on the column name when creating the column, but not when inserting, so the stored column names never match the keys in new_data.
SQLite indeed uses UTF-8 internally, but the sqlite3 module automatically handles conversion to and from Python's Unicode strings. Don't try to re-encode manually.
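The fix can be sketched as a self-contained example: drop the manual .encode('utf-8') and pass the Unicode names straight through, normalizing keys the same way for ALTER TABLE and for the bound parameters. The table name follows the question; the standalone insert_row helper and the in-memory database are illustrative, not the asker's actual class.

```python
import sqlite3

def insert_row(conn, table, data):
    # Normalize keys identically for ALTER TABLE and for the named parameters,
    # with no manual .encode() anywhere.
    row = {k.replace(' ', '_'): v for k, v in data.items()}
    existing = {r[1] for r in conn.execute('PRAGMA table_info({})'.format(table))}
    for col in row:
        if col not in existing:
            conn.execute('ALTER TABLE {} ADD COLUMN "{}" TEXT'.format(table, col))
    columns = ', '.join('"{}"'.format(c) for c in row)
    placeholders = ', '.join(':' + c for c in row)
    conn.execute('INSERT INTO {} ({}) VALUES ({})'.format(table, columns, placeholders), row)
    conn.commit()

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE vsetkyfirmy (id INTEGER PRIMARY KEY)')
insert_row(conn, 'vsetkyfirmy', {'Obchodné meno': 'PRspol. s r.o.', 'IČO': '3067'})
print(conn.execute('SELECT "Obchodné_meno", "IČO" FROM vsetkyfirmy').fetchone())
```

SQLite accepts non-ASCII identifiers and named parameters directly, so the accented column names bind without any conversion.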
I'm using Python 3, SQLAlchemy, and a MariaDB server.
I'm getting data from a REST server in JSON format, parsing it into a dictionary and then into a pandas DataFrame.
The error occurs when I don't save the DataFrame to CSV and then reload it, like this:
df.to_csv("temp_save.csv", index=False)
df = pd.read_csv("temp_save.csv")
When the previous lines are commented out, I get the following error:
(pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '), (), (), 0, '2022-01-26T17:32:49Z', 29101, 1, 3, 2, '2022-01-25T17:32:49Z', '2' at line 1")
[SQL: INSERT INTO `TicketRequesters` (subject, group_id, department_id, category, sub_category, item_category, requester_id, responder_id, due_by, fr_escalated, deleted, spam, email_config_id, fwd_emails, reply_cc_emails, cc_emails, is_escalated, fr_due_by, id, priority, status .....
VALUES (%(subject_m0)s, %(group_id_m0)s, %(department_id_m0)s, %(category_m0)s, %(sub_category_m0)s, %(item_category_m0)s, %(requester_id_m0)s, %(responder_id_m0)s, %(due_by_m0)s, %(fr_escalated_m0)s, %(deleted_m0)s, %(spam_m0)s, %(email_config_id_m0)s, %(fwd_emails_m0)s, %(reply_cc_emails_m0)s, %(cc_emails_m0)s, %(is_escalated_m0)s, %(fr_due_by_m0)s, %(id_m0)s, %(priority_m0)s, %(status_m0)s, %(source_m0)s, %(created_at_m0)s, %(updated_at_m0)s, %(requested_for_id_m0)s, %(to_emails_m0)s, %(type_m0)s, %(description_text_m0)s, %(custom_fields_localidad_m0)s, %(custom_fields_hora_de_la_falla_m0)s, %(custom_fields_hubo_alguna_modificacin_en_el_firewall_o_en_su_pl_m0)s, %(custom_fields_el_incidente_presentado_corresponde_a_m0)s, %(custom_fields_client_type_m0)s, %(custom_fields_riesgos_del_cambio_o_caso_m0)s, %(custom_fields_solucin_del_caso_m0)s, %(custom_fields_estado_de_cierre_m0)s, %(custom_fields_numero_de_oportunidad_m0)s, %(custom_fields_cuales_son_sus_servicios_afectados_especificar_si_m0)s, %(custom_fields_numero_de_ticket_de_cambio_m0)s, %(custom_fields_cantidad_estimada_de_personas_o_departamentos_afe_m0)s, %(cu.....
As shown, "_m0" is appended to each %()s placeholder in VALUES; I noticed the number grows up to the number of rows I'm trying to upsert.
%(stats_created_at_m29)s, %(stats_updated_at_m29)s, %(stats_ticket_id_m29)s, %(stats_opened_at_m29)s, %(stats_group_escalated_m29)s, %(stats_inbound_count_m29)s, %(stats_status_updated_at_m29)s, %(stats_outbound_count_m29)s, %(stats_pending_since_m29)s, %(stats_resolved_at_m29)s, %(stats_closed_at_m29)s, %(stats_first_assigned_at_m29)s, %(stats_assigned_at_m29)s, %(stats_agent_responded_at_m29)s, %(stats_requester_responded_at_m29)s, %(stats_first_responded_at_m29)s, %(stats_first_resp_time_in_secs_m29)s, %(stats_resolution_time_in_secs_m29)s, %(description_m29)s, %
This is the Python code I'm trying to use, just in case:
engine = db.create_engine(
    f"mariadb+pymysql://{user}:{password}@{host}/{database_name}?charset=utf8mb4"
)
columndict: dict = {"id": Column("id", Integer, primary_key=True)}
# Prepare Column List, check columndict if exists, get Column object from dict
column_list = [columndict.get(name, Column(name, String(256))) for name in columns]
# Get an instance of the table
# TODO: Instance table from metadata (without having to give it columns)
instanceTable = Table(table_name, metadata, *column_list)
metadata.create_all(engine)
# Schema created
# Create Connection
conn = engine.connect()
# Prepare statement
insert_stmt = db.dialects.mysql.insert(instanceTable).values(values)
on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
    data=insert_stmt.inserted, status="U"
)
# Execute statement
result = conn.execute(on_duplicate_key_stmt)
# DATA UPSERTED
I investigated the limitations of MySQL/MariaDB with UTF-8 encoding; the correct connection charset is ?charset=utf8mb4, so I thought this might be related to the query construction issue.
EDIT: I found a fix for this error: replace empty lists and empty strings in the DataFrame with None.
The problem was caused by sending empty lists [] and empty strings '' to the SQLAlchemy values.
Replacing those items with None fixed it.
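A minimal sketch of that cleanup, assuming the records are plain dicts (the field names here are invented; with a DataFrame the same scrub can be applied to each record before calling .values()):

```python
def scrub(record):
    """Replace '' and [] values with None so the driver binds NULL."""
    return {k: (None if v == '' or v == [] else v) for k, v in record.items()}

# Invented sample rows mimicking the ticket data: note the empty list and string.
rows = [
    {'subject': 'printer down', 'fwd_emails': [], 'cc_emails': ''},
    {'subject': '', 'fwd_emails': ['a@b.c'], 'cc_emails': 'x@y.z'},
]
clean = [scrub(r) for r in rows]
print(clean)
```

Only the empty values become None; populated fields pass through untouched, so the generated INSERT no longer receives bare () or '' terms.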
Writing nested dictionaries to an SQL table. The code runs, but since I'm dealing with nested dictionaries, the main dictionary key (the User_ID value) gets written to one row and the nested dictionary values get written to another row.
I want to write all the values in a single row.
Let me know any suggestions for writing the values in a single row.
Code output (user-defined values):
{'1101': {'Name': 'Test_User', 'Age': 'Test_Age', 'Occupation': 'Test_Occupation'}}
I want to write all these nested dictionary values into a single row in the SQL table.
SQL Output (For Example)
User_ID User_name User_Age User_Occupation
1101 Test_User Test_Age Test_Occupation
Thanks in advance for your time!
What I have tried:
Part of the code used to write the values into the SQL table:
for k, v in user_details.items():
    user_col = k
    cursor.execute('INSERT INTO Details (User_ID) VALUES ("%s")' % (user_col))
    cols = v.keys()
    vals = v.values()
    sql = "INSERT INTO Details ({}) VALUES ({})".format(
        ', '.join(cols),
        ', '.join(['%s'] * len(cols)))
    cursor.execute(sql, list(vals))
    db.commit()
This is one approach: build a single INSERT that includes User_ID alongside the nested keys, and bind the values as parameters (interpolating them directly into the string would leave strings unquoted and open you to injection).
Ex:
for k, v in user_details.items():
    columns = ", ".join(["User_ID"] + list(v.keys()))
    placeholders = ", ".join(["%s"] * (len(v) + 1))
    cursor.execute('INSERT INTO Details ({}) VALUES ({})'.format(columns, placeholders),
                   [k] + list(v.values()))
Alternatively, you could add the User_ID field to the values dict and use only the second query:
v['User_ID'] = k
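The approach above can be demonstrated end to end. This sketch uses an in-memory SQLite database so it is self-contained; the question's MySQL-style driver would use %s placeholders instead of ?, but the flattening logic is identical.

```python
import sqlite3

user_details = {'1101': {'Name': 'Test_User', 'Age': 'Test_Age', 'Occupation': 'Test_Occupation'}}

db = sqlite3.connect(':memory:')
cursor = db.cursor()
cursor.execute('CREATE TABLE Details (User_ID TEXT, Name TEXT, Age TEXT, Occupation TEXT)')

for k, v in user_details.items():
    row = dict(v, User_ID=k)  # merge the outer key into the inner dict -> one row
    columns = ', '.join(row.keys())
    placeholders = ', '.join(['?'] * len(row))
    cursor.execute('INSERT INTO Details ({}) VALUES ({})'.format(columns, placeholders),
                   list(row.values()))
db.commit()
print(cursor.execute('SELECT User_ID, Name, Age, Occupation FROM Details').fetchall())
```

Because the outer key is merged into the same dict before the single INSERT, everything lands in one row instead of two.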
How do I insert a Python dictionary into a PostgreSQL table using psycopg2? I keep getting the following error, so my query is not formatted correctly:
Error syntax error at or near "To" LINE 1: INSERT INTO bill_summary VALUES(To designate the facility of...
import psycopg2
import psycopg2.extras
import json
import sys

with open('data.json', 'r') as f:
    data = json.load(f)

con = None
try:
    con = psycopg2.connect(database='sanctionsdb', user='dbuser')
    cur = con.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cur.execute("CREATE TABLE bill_summary(title VARCHAR PRIMARY KEY, summary_text VARCHAR, action_date VARCHAR, action_desc VARCHAR)")
    for d in data:
        title = d['title']
        summary_text = d['summary-text']
        action_date = d['action-date']
        action_desc = d['action-desc']
        q = "INSERT INTO bill_summary VALUES(" + str(title) + str(summary_text) + str(action_date) + str(action_desc) + ")"
        cur.execute(q)
    con.commit()
except psycopg2.DatabaseError, e:
    if con:
        con.rollback()
    print 'Error %s' % e
    sys.exit(1)
finally:
    if con:
        con.close()
You should pass the dictionary as the second parameter to cursor.execute(). See the example code after this statement in the documentation:
Named arguments are supported too using %(name)s placeholders in the query and specifying the values into a mapping.
So your code may be as simple as this:
with open('data.json', 'r') as f:
data = json.load(f)
print(data)
""" above prints something like this:
{'title': 'the first action', 'summary-text': 'some summary', 'action-date': '2018-08-08', 'action-desc': 'action description'}
use the json keys as named parameters:
"""
cur = con.cursor()
q = "INSERT INTO bill_summary VALUES(%(title)s, %(summary-text)s, %(action-date)s, %(action-desc)s)"
cur.execute(q, data)
con.commit()
Note also this warning (from the same page of the documentation):
Warning: Never, never, NEVER use Python string concatenation (+) or string parameters interpolation (%) to pass variables to a SQL query string. Not even at gunpoint.
q = "INSERT INTO bill_summary VALUES(" +str(title)+str(summary_text)+str(action_date)+str(action_desc)+")"
You're writing your query the wrong way by concatenating the values; they should rather be comma-separated elements, like this:
q = "INSERT INTO bill_summary VALUES({0},{1},{2},{3})".format(str(title), str(summary_text), str(action_date), str(action_desc))
Since you're not specifying the column names, I assume the values are in the same order as the columns in the table. There are basically two ways of writing an insert query in PostgreSQL. One is by specifying the column names and their corresponding values, like this:
INSERT INTO TABLE_NAME (column1, column2, column3,...columnN)
VALUES (value1, value2, value3,...valueN);
The other way is: you need not specify the column names in the SQL query if you are adding values for all the columns of the table. However, make sure the order of the values matches the order of the columns. This is the form you used in your query:
INSERT INTO TABLE_NAME VALUES (value1,value2,value3,...valueN);
Normally, if I want to insert values into a table, I do something like this (assuming I know which columns the values I want to insert belong to):
conn = sqlite3.connect('mydatabase.db')
conn.execute("INSERT INTO MYTABLE (ID, COLUMN1, COLUMN2) "
             "VALUES (?,?,?)", [myid, value1, value2])
But now I have a list of columns (the length of the list may vary) and a list of values for each column in the list.
For example, if I have a table with 10 columns (namely column1, column2, ..., column10), I might have a list of columns I want to update, say [column3, column4], and a list of values for those columns, [value for column3, value for column4].
How do I insert the values in the list into the columns they each belong to?
As far as I know, the parameter list in conn.execute works only for values, so we have to use string formatting, like this:
import sqlite3
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (a integer, b integer, c integer)')
col_names = ['a', 'b', 'c']
values = [0, 1, 2]
conn.execute('INSERT INTO t (%s, %s, %s) values(?,?,?)'%tuple(col_names), values)
Please note this is a risky approach, since strings interpolated into SQL should always be checked for injection attacks. You could pass the list of column names through a sanitizing/whitelisting function before insertion.
EDITED:
For a variable number of columns you could try something like:
exec_text = 'INSERT INTO t (' + ','.join(col_names) + ') values(' + ','.join(['?'] * len(values)) + ')'
conn.execute(exec_text, values)
# as long as len(col_names) == len(values)
Of course string formatting will work, you just need to be a bit cleverer about it.
col_names = ','.join(col_list)
col_spaces = ','.join(['?'] * len(col_list))
sql = 'INSERT INTO t (%s) values(%s)' % (col_names, col_spaces)
conn.execute(sql, values)
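Put together, the technique runs like this as a self-contained example (in-memory database and table t as in the earlier snippet; columns left out of the insert list simply stay NULL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (a integer, b integer, c integer)')

col_list = ['b', 'c']   # only a subset of the table's columns
values = [1, 2]

# Column names are joined into the SQL text; the values are bound as parameters.
col_names = ','.join(col_list)
col_spaces = ','.join(['?'] * len(col_list))
sql = 'INSERT INTO t (%s) values(%s)' % (col_names, col_spaces)
conn.execute(sql, values)

print(conn.execute('SELECT a, b, c FROM t').fetchone())  # (None, 1, 2)
```

Column a was not in the insert list, so it comes back as NULL (None) while b and c received the bound values.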
I was looking for a solution to create columns based on a list of unknown/variable length and found this question. However, I managed to find a nicer solution (for me, anyway) that's also a bit more modern, so I thought I'd include it in case it helps someone:
import sqlite3

def create_sql_db(my_list):
    file = 'my_sql.db'
    table_name = 'table_1'
    init_col = 'id'
    col_type = 'TEXT'

    conn = sqlite3.connect(file)
    c = conn.cursor()

    # CREATE TABLE (IF IT DOESN'T ALREADY EXIST)
    c.execute('CREATE TABLE IF NOT EXISTS {tn} ({nf} {ft})'.format(
        tn=table_name, nf=init_col, ft=col_type))

    # CREATE A COLUMN FOR EACH ITEM IN THE LIST
    for new_column in my_list:
        c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(
            tn=table_name, cn=new_column, ct=col_type))

    conn.close()

my_list = ["Col1", "Col2", "Col3"]
create_sql_db(my_list)
All my data is of type TEXT, so I just have a single variable col_type; but you could, for example, feed in a list of tuples (or a tuple of tuples, if that's what you're into):
my_other_list = [("ColA", "TEXT"), ("ColB", "INTEGER"), ("ColC", "BLOB")]
and change the CREATE A COLUMN step to:
for tupl in my_other_list:
    new_column = tupl[0]  # "ColA", "ColB", "ColC"
    col_type = tupl[1]    # "TEXT", "INTEGER", "BLOB"
    c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(
        tn=table_name, cn=new_column, ct=col_type))
As a noob, I can't comment on the very succinct, updated solution @ron_g offered. While testing, though, I had to frequently delete the sample database itself, so for any other noobs using this to test, I would advise adding:
c.execute('DROP TABLE IF EXISTS {tn}'.format(
    tn=table_name))
prior to the 'CREATE TABLE ...' portion.
It appears there are multiple instances of
.format(
    tn=table_name ....)
in both 'CREATE TABLE ...' and 'ALTER TABLE ...', so I'm trying to figure out whether it's possible to create a single instance (similar to, or included in, the def section).
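One possible way to avoid repeating .format(tn=table_name, ...) in every statement is a small wrapper that fills in the table name automatically. This is only a sketch; make_executor is an invented name, not part of any library.

```python
import sqlite3

def make_executor(cursor, table_name):
    """Return a function that substitutes {tn} (plus any extra keys) before executing."""
    def run(sql, **kw):
        return cursor.execute(sql.format(tn=table_name, **kw))
    return run

conn = sqlite3.connect(':memory:')
run = make_executor(conn.cursor(), 'table_1')
run('DROP TABLE IF EXISTS {tn}')
run('CREATE TABLE IF NOT EXISTS {tn} ({nf} {ft})', nf='id', ft='TEXT')
run('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}', cn='Col1', ct='TEXT')
print([row[1] for row in conn.execute('PRAGMA table_info(table_1)')])  # ['id', 'Col1']
```

The table name is written once, in the make_executor call, and every statement just uses {tn}.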
I have the following MySQL + Python code:
data = json.loads(decoded_response)
insert_values = []
cursor = cnx.cursor()
add_data = """INSERT INTO pb_ya_camps (camp_id,camp_name) VALUES (%s,%s) ON DUPLICATE KEY UPDATE VALUES (%s,%s)"""
for jsonData in data["data"]:
    if "No" in jsonData["StatusArchive"]:
        print("...processing campaign ", jsonData["Name"], " into the database.")
        insert_values.append((jsonData["CampaignID"], jsonData["Name"]))
try:
    cursor.executemany(add_data, (insert_values, insert_values))
Which at the moment produces the following error:
MYSQL ERROR: Failed processing format-parameters; 'MySQLConverter' object has no attribute '_tuple_to_mysql'
As far as I understand, it does not like the following:
cursor.executemany(add_data, (insert_values, insert_values))
I believe you can't do that in Python, but my problem probably derives from improper MySQL syntax. Could you please take a look at it?
INSERT INTO pb_ya_camps (camp_id,camp_name) VALUES (%s,%s) ON DUPLICATE KEY UPDATE VALUES (%s,%s)
I am not sure how to use ON DUPLICATE KEY UPDATE properly without having to re-specify all the values; that is the main problem.
I have read the following: LINK TO PREVIOUS EXAMPLE. However, I don't want to rely on KEY UPDATE col1 = VALUES(col1), because further on in my script I have too many columns to keep listing a col = value pair for each one...
Thank you!
Following the MySQL Reference Manual, the MySQL syntax for INSERT ... ON DUPLICATE KEY UPDATE is:
INSERT INTO table (`a`, `b`, `c`)
VALUES (1, 2, 3)
ON DUPLICATE KEY UPDATE `c` = `c` + 1;
So in your case, note that inside the UPDATE clause VALUES(col) refers to the value that would have been inserted into col, so it takes a column name, not another %s placeholder:
INSERT INTO `pb_ya_camps` (`camp_id`, `camp_name`)
VALUES (%s, %s)
ON DUPLICATE KEY UPDATE `camp_name` = VALUES(`camp_name`)
More information about the syntax at the MySQL Reference Manual.
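To avoid hand-listing col = VALUES(col) for many columns, the UPDATE clause can be generated from the same column list used for the placeholders. This is a sketch: upsert_statement is an invented helper, and it simply builds SQL text.

```python
def upsert_statement(table, columns):
    """Return a MySQL upsert statement with one %s placeholder per column."""
    col_sql = ", ".join("`{}`".format(c) for c in columns)
    placeholders = ", ".join(["%s"] * len(columns))
    # VALUES(`col`) refers to the value that would have been inserted,
    # so the UPDATE part needs no extra bound parameters.
    updates = ", ".join("`{0}` = VALUES(`{0}`)".format(c) for c in columns)
    return "INSERT INTO `{}` ({}) VALUES ({}) ON DUPLICATE KEY UPDATE {}".format(
        table, col_sql, placeholders, updates)

stmt = upsert_statement("pb_ya_camps", ["camp_id", "camp_name"])
print(stmt)  # one `col` = VALUES(`col`) pair per column, generated automatically
```

With that, cursor.executemany(stmt, insert_values) needs each tuple's values only once, however many columns the table grows to.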