I'm using Python 3, SQLAlchemy, and a MariaDB server.
I'm getting data from a REST server in JSON format, parsing it into a dictionary, and then into a pandas DataFrame.
The error I'm getting occurs when I don't save the DataFrame to a CSV file and reload it, like this:
df.to_csv("temp_save.csv", index=False)
df = pd.read_csv("temp_save.csv")
When the previous lines are commented out, I get the following error:
(pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '), (), (), 0, '2022-01-26T17:32:49Z', 29101, 1, 3, 2, '2022-01-25T17:32:49Z', '2' at line 1")
[SQL: INSERT INTO `TicketRequesters` (subject, group_id, department_id, category, sub_category, item_category, requester_id, responder_id, due_by, fr_escalated, deleted, spam, email_config_id, fwd_emails, reply_cc_emails, cc_emails, is_escalated, fr_due_by, id, priority, status .....
VALUES (%(subject_m0)s, %(group_id_m0)s, %(department_id_m0)s, %(category_m0)s, %(sub_category_m0)s, %(item_category_m0)s, %(requester_id_m0)s, %(responder_id_m0)s, %(due_by_m0)s, %(fr_escalated_m0)s, %(deleted_m0)s, %(spam_m0)s, %(email_config_id_m0)s, %(fwd_emails_m0)s, %(reply_cc_emails_m0)s, %(cc_emails_m0)s, %(is_escalated_m0)s, %(fr_due_by_m0)s, %(id_m0)s, %(priority_m0)s, %(status_m0)s, %(source_m0)s, %(created_at_m0)s, %(updated_at_m0)s, %(requested_for_id_m0)s, %(to_emails_m0)s, %(type_m0)s, %(description_text_m0)s, %(custom_fields_localidad_m0)s, %(custom_fields_hora_de_la_falla_m0)s, %(custom_fields_hubo_alguna_modificacin_en_el_firewall_o_en_su_pl_m0)s, %(custom_fields_el_incidente_presentado_corresponde_a_m0)s, %(custom_fields_client_type_m0)s, %(custom_fields_riesgos_del_cambio_o_caso_m0)s, %(custom_fields_solucin_del_caso_m0)s, %(custom_fields_estado_de_cierre_m0)s, %(custom_fields_numero_de_oportunidad_m0)s, %(custom_fields_cuales_son_sus_servicios_afectados_especificar_si_m0)s, %(custom_fields_numero_de_ticket_de_cambio_m0)s, %(custom_fields_cantidad_estimada_de_personas_o_departamentos_afe_m0)s, %(cu.....
As shown, "_m0" is appended to each %()s placeholder in the VALUES clause; I noticed the number grows up to the number of rows I'm trying to upsert.
%(stats_created_at_m29)s, %(stats_updated_at_m29)s, %(stats_ticket_id_m29)s, %(stats_opened_at_m29)s, %(stats_group_escalated_m29)s, %(stats_inbound_count_m29)s, %(stats_status_updated_at_m29)s, %(stats_outbound_count_m29)s, %(stats_pending_since_m29)s, %(stats_resolved_at_m29)s, %(stats_closed_at_m29)s, %(stats_first_assigned_at_m29)s, %(stats_assigned_at_m29)s, %(stats_agent_responded_at_m29)s, %(stats_requester_responded_at_m29)s, %(stats_first_responded_at_m29)s, %(stats_first_resp_time_in_secs_m29)s, %(stats_resolution_time_in_secs_m29)s, %(description_m29)s, %
This is the Python code I'm using, just in case:
engine = db.create_engine(
    f"mariadb+pymysql://{user}:{password}@{host}/{database_name}?charset=utf8mb4"
)
columndict: dict = {"id": Column("id", Integer, primary_key=True)}
# Prepare Column List, check columndict if exists, get Column object from dict
column_list = [columndict.get(name, Column(name, String(256))) for name in columns]
# Get an instance of the table
# TODO: Instance table from metadata (without having to give it columns)
instanceTable = Table(table_name, metadata, *column_list)
metadata.create_all(engine)
# Schema created
# Create Connection
conn = engine.connect()
# Prepare statement
insert_stmt = db.dialects.mysql.insert(instanceTable).values(values)
on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(
    data=insert_stmt.inserted, status="U"
)
# Execute statement
result = conn.execute(on_duplicate_key_stmt)
# DATA UPSERTED
I read about the limitations of MySQL/MariaDB with UTF-8 encoding; the correct connection charset is ?charset=utf8mb4, so I thought this might be related to the query-construction issue.
EDIT: I found a fix for this error: replace empty lists and empty strings in the DataFrame with None.
The problem was caused by sending empty lists [] and empty strings '' to the SQLAlchemy values.
Fixed by replacing those items with None.
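The fix described above can be sketched as a small pre-processing step before building the insert values. This is a minimal illustration with made-up data; the column names mimic those in the error message but are otherwise assumptions:

```python
import pandas as pd

# Hypothetical DataFrame mimicking the REST payload: empty strings and
# empty lists are what broke the generated INSERT statement.
df = pd.DataFrame({
    "id": [1, 2],
    "subject": ["Printer down", ""],
    "cc_emails": [["a@example.com"], []],
})

# Replace empty strings and empty lists with None, so the driver binds
# NULL instead of an object it cannot render into the statement.
df = df.applymap(lambda v: None if v == "" or v == [] else v)

# Records in the shape expected by .values(values) in the code above.
values = df.to_dict(orient="records")
print(values)
```

The key point is that the cleanup happens on the DataFrame, before the records are handed to SQLAlchemy.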
I have a GUI interacting with my database, and the MySQL database has around 50 tables. I need to search each table for a value and, if it is found, return the field and key of the item in each table. I would like to search for partial matches: e.g., for the search value "test", both "Protest" and "Test123" would be matches. Here is my attempt:
def searchdatabase(self, event):
    print('Searching...')
    self.connect_mysql()  # Function to connect to database
    d_tables = []
    results_list = []  # I will store results here
    s_string = "test"  # Value I am searching
    self.cursor.execute("USE db")  # select the database
    self.cursor.execute("SHOW TABLES")
    for (table_name,) in self.cursor:
        d_tables.append(table_name)
    # Loop through tables list, get column names, and check if value is in the column
    for table in d_tables:
        # Get the columns
        self.cursor.execute(f"SELECT * FROM `{table}` WHERE 1=0")
        field_names = [i[0] for i in self.cursor.description]
        # Find value
        for f_name in field_names:
            print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE {f_name} LIKE {s_string}"))
        print(table)
I get an error on the print("RESULTS:", self.cursor.execute(...)) line:
Exception: (1054, "Unknown column 'test' in 'where clause'")
I use a similar insert query that works fine, so I don't understand what the issue is.
ex. insert_query = (f"INSERT INTO `{source_tbl}` ({query_columns}) VALUES ({query_placeholders})")
It may be because of the quotes you have missed around the search value: without them, MySQL parses test as a column name, which is exactly what the error says. Wrap the column name in backticks (single quotes would turn it into a string literal), quote the value, and add % wildcards for partial matches.
TRY:
print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE `{f_name}` LIKE '%{s_string}%'"))
Don’t insert user-provided data into SQL queries like this. It is begging for SQL injection attacks. Your database library will have a way of sending parameters to queries. Use that.
The whole design is fishy. Normally, there should be no need to look for a string across several columns of 50 different tables. Admittedly, sometimes you end up in these situations because of reasons outside your control.
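The parameterized approach recommended above can be sketched as follows. Identifiers (table and column names) cannot be bound as parameters, so they are validated against a known list and wrapped in backticks; only the search value travels as a bound parameter. The table and column names here are made up for illustration:

```python
def build_search_query(table, column, known_columns):
    """Build a safe LIKE query: identifiers are whitelisted, the value is bound."""
    if column not in known_columns:
        raise ValueError(f"unknown column: {column}")
    # %s is the placeholder used by pymysql/MySQLdb; the driver does the quoting.
    return f"SELECT * FROM `{table}` WHERE `{column}` LIKE %s"

def search_params(s_string):
    # Wildcards go inside the bound value, giving partial matches
    # ("Protest", "Test123") without any string interpolation in the SQL.
    return (f"%{s_string}%",)

query = build_search_query("customers", "name", {"id", "name"})
params = search_params("test")
# With a live connection: self.cursor.execute(query, params)
print(query, params)
```

Because the value never appears in the SQL text, a search string containing quotes or SQL fragments cannot alter the query.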
I would like to get a list that is a field in a JSON document, which is a record in a MySQL table. I am using the following query:
cursor = self.database.cursor()
sql = """ SELECT result->>"$.my_list" FROM my_table
WHERE my_id = 5 ORDER BY date ASC """
cursor.execute(sql)
result = cursor.fetchall()
Using '->', the result is in string format: '[a,b,c,d]', '[a,b,c,d]'; using '->>', the result is bytes: b'[a,b,c,d]', b'[a,b,c,d]'.
How can I convert it (or get directly) a normal python list object?
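One way, assuming the column holds valid JSON (so the array elements are quoted, unlike the abbreviated '[a,b,c,d]' shown above), is to decode each fetched value with json.loads, which accepts both str and bytes:

```python
import json

# Stand-in for what cursor.fetchall() returns: one JSON document per row,
# as bytes (the shape of the real rows is an assumption based on the question).
rows = [(b'["a", "b", "c", "d"]',), (b'["a", "b", "c", "d"]',)]

# Decode each JSON document into a real Python list.
lists = [json.loads(value) for (value,) in rows]
print(lists)
```

After this, each element of lists is a normal Python list you can index and iterate over.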
New to Python, trying to use psycopg2 to read Postgres.
I am reading from a database table called deployment and trying to handle the Value from a table with three fields: id, Key, and Value.
import json
import psycopg2
conn = psycopg2.connect(host="localhost", database=database, user=user, password=password)
cur = conn.cursor()
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
records = cur.fetchall()
print(json.dumps(records))
[["newdrive"]]
I want this to be just "newdrive" so that I can do a string comparison on the next line to check whether it's "newdrive" or not.
I tried json.loads on the json.dumps output; it didn't work:
>>> a=json.loads(json.dumps(records))
>>> print(a)
[['newdrive']]
I also tried printing the records without json.dumps:
>>> print(records)
[('newdrive',)]
The result of fetchall() is a sequence of tuples. You can loop over the sequence and print the first (index 0) element of each tuple:
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
records = cur.fetchall()
for record in records:
    print(record[0])
Or simpler, if you are sure the query returns no more than one row, use fetchone() which gives a single tuple representing returned row, e.g.:
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
row = cur.fetchone()
if row:  # check whether the query returned a row
    print(row[0])
I have gone through all the results for raw Django queries but still cannot work out how to retrieve the data of both tables as key-value pairs. Please check the two different approaches I have tried:
result = TblSchedule.objects.raw("SELECT tbl_schedule.id, tbl_user.fname AS name from tbl_schedule join tbl_user using (id)")
data = serializers.serialize("json", result)
result = json.loads(data)
This gives me objects for TblSchedule but no data from the user table; I need the data from both tables.
The second approach was using a cursor:
cursor.execute(
    "select tbluser.fname from tbl_schedule tblsch "
    "inner join tbl_user tbluser on tblsch.user_id = tbluser.id "
    "where tblsch.doctor_id = %s",
    [doctorId],
)
check = []
row = cursor.fetchone()
while row is not None:
    check.append(row)
    row = cursor.fetchone()
return Response(check)
This returns only the name but no key.
I need the key-value pairs for the join (without having a foreign-key relation between the tables).
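A common way to get key-value pairs from a raw cursor is to zip the column names from cursor.description with each row tuple. The sketch below demonstrates the helper against an in-memory SQLite stand-in, since no MySQL server is available here; the table and column names mirror the ones in the question but are otherwise assumptions:

```python
import sqlite3

def dictfetchall(cursor):
    """Return all rows from a cursor as a list of {column: value} dicts."""
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

# Demonstration with an in-memory SQLite database standing in for MySQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl_user (id INTEGER, fname TEXT)")
cur.execute("INSERT INTO tbl_user VALUES (1, 'Alice')")
cur.execute("SELECT fname FROM tbl_user WHERE id = ?", (1,))
rows = dictfetchall(cur)
print(rows)  # [{'fname': 'Alice'}]
```

Applied to the join above, each row would come back as a dict like {'fname': ...}, which Response(check) can serialize as key-value pairs; the helper works with any DB-API cursor, including Django's connection.cursor().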