python "not enough arguments for format string" - python

I am trying to insert multiple rows from a DataFrame into MySQL.
I counted the arguments and the placeholders, and they match (11 of each).
But I still get this error:
not enough arguments for format string
Here's the function:
def insert_result_sets_into_db(df, filter_rule_id):
    cols = "('hash_id', 'filter_rule_id', 'task_id', 'assigned_to', 'human_verdict', 'verdict_date', 'verdict_by', 'created_by', 'updated_by', 'created_at', 'updated_at')"
    if not df.empty:
        with connections['frontend'].cursor() as cursor:
            sql = "INSERT INTO dmf_result_set_assign " + cols + " VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
            values = []
            now = datetime.now()
            formatted_date = now.strftime('%Y-%m-%d %H:%M:%S')
            # Insert DataFrame records one by one.
            for i, row in df.iterrows():
                tup = (row['_id'], filter_rule_id, 1, 1, 'Matched', formatted_date, 1, 1, 1, formatted_date, formatted_date)
                values.extend(tup)
            cursor.executemany(sql, values)
            connections.commit()
Am I missing anything here?
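(A minimal sketch of the likely fix, not part of the original post, assuming connections is Django's connection registry: executemany() expects one parameter tuple per row, but values.extend(tup) flattens every 11-value tuple into individual scalars, so each "row" the driver sees is a single value against 11 placeholders.)

# Sketch of the likely fix (assumption, not the original author's code):
# keep each row as one 11-element tuple and commit on the connection itself.
for i, row in df.iterrows():
    tup = (row['_id'], filter_rule_id, 1, 1, 'Matched', formatted_date,
           1, 1, 1, formatted_date, formatted_date)
    values.append(tup)                    # append the tuple, do not extend
cursor.executemany(sql, values)           # values: list of 11-element tuples
connections['frontend'].commit()          # commit() lives on the connection, not the registry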

Related

How to convert date format "dd/mm/yy" to "yy/mm/dd" for mysql database inserting?

When I insert a date from a file where it is formatted "dd/mm/yy" into my database table, where the date is formatted "yy/mm/dd", the date comes out wrong:
instead of getting 2019:04:11 I get 2011:04:19.
I want to keep the database format ("yy/mm/dd").
I have tried:
actualdate = DATE_FORMAT(j[0], '%y-%m-%d')
cursor.execute(actualdate)
but it tells me error: name 'DATE_FORMAT' is not defined
import mysql.connector
sql = mysql.connector.connect(host='',user='',password='',db='')
cursor = sql.cursor()
f = open("C:\Cumulus\data\Apr19log.txt","r")
st=[i.strip().split(',') for i in f.readlines()]
actualdate = DATE_FORMAT(j[0], '%y-%m-%d')
cursor.execute(actualdate)
sqllist = "INSERT INTO station_fenelon (variable, date, time,
outside_temp, outside_humidity) VALUES (%s, %s, %s, %s, %s)"
record = [(i+1, j[0], j[1], j[2], j[3]) for i, j in enumerate(st)]
cursor.executemany(sqllist, record)
sql.commit()
error: name 'DATE_FORMAT' is not defined
DATE_FORMAT() is a MySQL function, but you are calling it directly in your Python script, so Python raises a NameError because that name is not defined. You should remove that line.
You can use STR_TO_DATE() inside the query instead to convert your string to a date:
sqllist = "INSERT INTO station_fenelon (variable, date, time,
outside_temp, outside_humidity) VALUES (%s, STR_TO_DATE(%s,'%d/%m/%Y'), %s, %s, %s)"
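(For completeness, a hedged sketch of how the whole corrected script might look with mysql.connector; the literal % characters in the STR_TO_DATE format are doubled because the driver interpolates the %s parameters with %-formatting. Connection details and column names are taken from the question.)

import mysql.connector

sql = mysql.connector.connect(host='', user='', password='', db='')
cursor = sql.cursor()

with open(r"C:\Cumulus\data\Apr19log.txt") as f:
    st = [line.strip().split(',') for line in f]

# Doubled %% so the driver's %s substitution does not touch the date format.
sqllist = ("INSERT INTO station_fenelon (variable, date, time, outside_temp, outside_humidity) "
           "VALUES (%s, STR_TO_DATE(%s, '%%d/%%m/%%Y'), %s, %s, %s)")
record = [(i + 1, j[0], j[1], j[2], j[3]) for i, j in enumerate(st)]
cursor.executemany(sqllist, record)
sql.commit()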

not enough arguments for format string from excel to mysql

I'm trying to load all the information from an Excel file into MySQL, and while processing it I get this problem.
I'm struggling to solve it!
I counted all the %s placeholders and it seems like I didn't miss any of them.
query = """INSERT INTO sanction (id, organization_type, organization, date, decision_number, penalty_type, penalty_way
penalty, violation, execution_period, article, note, type_npa, department, uploaded_date)
VALUES(null, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"""
for r in range(1, sheet.nrows):
organization_type = sheet.cell(r,1).value
organization = sheet.cell(r,2).value
date = sheet.cell(r,3).value
decision_number = sheet.cell(r,4).value
penalty_type = sheet.cell(r,5).value
penalty_way = sheet.cell(r,6).value
penalty = sheet.cell(r,7).value
violation = sheet.cell(r,8).value
execution_period = sheet.cell(r,9).value
article =sheet.cell(r,10).value
note =sheet.cell(r,11).value
type_npa =sheet.cell(r,12).value
department =sheet.cell(r,13).value
uploaded_date =datetime.now().strftime("%Y-%m-%d %H:%M")
values = (organization_type, organization, date, decision_number, penalty_type,
penalty_way,penalty, violation, execution_period,article, note, type_npa, department,uploaded_date)
mycursor.execute(query, [values])
I notice 2 things that could cause this error:
Your variable values is already a tuple, so you don't need to wrap it inside a new list.
That means changing this line
mycursor.execute(query, [values])
to
mycursor.execute(query, values)
You are also missing a comma in the query's column list, between penalty_way and penalty.
With this many arguments, I would suggest restructuring your code so that you can more easily see if you missed anything.
For example, here is a version that groups the 15 parameters in a 1-3-3-3-3-2 formation in three places: the column list of the query, the VALUES part of the query, and the construction of the values tuple.
query = """
INSERT INTO sanction (
id,
organization_type, organization, date,
decision_number, penalty_type, penalty_way,
penalty, violation, execution_period,
article, note, type_npa,
department, uploaded_date)
VALUES (
null,
%s, %s, %s,
%s, %s, %s,
%s, %s, %s,
%s, %s, %s,
%s, %s)
"""
for r in range(1, sheet.nrows):
    organization_type = sheet.cell(r, 1).value
    organization = sheet.cell(r, 2).value
    date = sheet.cell(r, 3).value
    decision_number = sheet.cell(r, 4).value
    penalty_type = sheet.cell(r, 5).value
    penalty_way = sheet.cell(r, 6).value
    penalty = sheet.cell(r, 7).value
    violation = sheet.cell(r, 8).value
    execution_period = sheet.cell(r, 9).value
    article = sheet.cell(r, 10).value
    note = sheet.cell(r, 11).value
    type_npa = sheet.cell(r, 12).value
    department = sheet.cell(r, 13).value
    uploaded_date = datetime.now().strftime("%Y-%m-%d %H:%M")
    values = (
        # the first value of the INSERT statement will be NULL
        organization_type, organization, date,        # 3 elements
        decision_number, penalty_type, penalty_way,   # 3 elements
        penalty, violation, execution_period,         # 3 elements
        article, note, type_npa,                      # 3 elements
        department, uploaded_date,                    # 2 elements
    )
    mycursor.execute(query, values)

Python Generated MySQL Insert Statement converting to handle 'on duplicate'

I have the Python code below that generically inserts values into a MySQL db. It checks the length of my list field_split and builds a placeholder string var_string. Where I am stuck is that I want to modify this to support ON DUPLICATE KEY UPDATE user=user_id_var. If I were building one long string I would know how to do this, but I couldn't figure out a way to do it when passing two values to cursor.execute.
var_string = ', '.join(['%s'] * len(field_split))
query_string = 'INSERT into ' + table_name
query_string = query_string + ' VALUES (%s);' % var_string #query_string = 'INSERT INTO tbl_rtp_GET_FBA_ESTIMATED_FBA_FEES_TXT_DATA VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s);'
cursor.execute(query_string, field_split)
db.commit()
Just construct the query and pass all the parameters as a single sequence to cursor.execute(). You can also use str.format() to avoid multiple string concatenations, which can be slow:
var_string = ', '.join(['%s'] * len(field_split))
query_string = 'INSERT into {} VALUES ({}) ON DUPLICATE KEY UPDATE user=%s'.format(
    table_name, var_string)
cursor.execute(query_string, field_split + [user_id_var])
db.commit()
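(To illustrate what this builds, a small hypothetical example; the table name and values below are made up.)

field_split = ['a', 'b', 'c']   # hypothetical row values
user_id_var = 42                # hypothetical user id
var_string = ', '.join(['%s'] * len(field_split))
query_string = 'INSERT into my_table VALUES ({}) ON DUPLICATE KEY UPDATE user=%s'.format(var_string)
# query_string is now:
#   INSERT into my_table VALUES (%s, %s, %s) ON DUPLICATE KEY UPDATE user=%s
# and field_split + [user_id_var] supplies exactly the four parameters it needs.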

Insert Data to SQL Server Table using pymssql

I am trying to write a data frame into a SQL Server table. My code:
conn = pymssql.connect(host="Dev02", database="DEVDb")
cur = conn.cursor()
query = "INSERT INTO dbo.SCORE_TABLE VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
cur.executemany(query, df_sql)
conn.commit()
cur.close()
conn.close()
The dimensions of df_sql are (5860, 20), i.e. the number of columns in the data frame is the same as the number of columns in the SQL Server table. Still I am getting the following error:
ValueError: more placeholders in sql than params available
UPDATED BELOW
As per one of the comments, I tried using turbodbc as below:
conn = turbodbc.connect(driver="{SQL Server}", server="Dev02", Database="DEVDb")
conn.use_async_io = True
cur = conn.cursor()
query = "INSERT INTO dbo.STG_CONTACTABILITY_SCORE VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"
cur.executemany(query, df_sql.values)
cur.commit()
cur.close()
conn.close()
I am getting the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I don't get it. What is wrong here? I looked at df_sql.values and I don't find anything wrong.
The first row of the ndarray is as below:
[nan 'DUSTIN HOPKINS' 'SOUTHEAST MISSOURI STATE UNIVERSITY' 13.0
'5736512217' None None 'Monday' '8:00AM' '9:00AM' 'Summer' None None None
None '2017-12-22 10:39:30.626331' 'Completed' None '1-11KUFFZ'
'Central Time Zone']
I think you just need to specify each column name, and don't forget the table must have an id field to hold the data frame index:
conn = pymssql.connect(host="Dev02", database="DEVDb")
cur = conn.cursor()
query = """INSERT INTO dbo.SCORE_TABLE(index, column1, column2, ..., column20)
VALUES (?, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s,
%s, %s, %s, %s, %s, %s)"""
cur.executemany(query, df_sql)
conn.commit()
cur.close()
conn.close()
OK, I have been using pandas, and I exported the last data frame to CSV like this:
df.to_csv('new_file_name.csv', sep=',', encoding='utf-8')
Then I just used pyodbc and the BULK INSERT Transact-SQL statement like this:
import pyodbc
conn = pyodbc.connect(DRIVER='{SQL Server}', Server='server_name', Database='Database_name', trusted_connection='yes')
cur = conn.cursor()
cur.execute("""BULK INSERT table_name
FROM 'C:\\Users\\folders path\\new_file_name.csv'
WITH
(
CODEPAGE = 'ACP',
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)""")
conn.commit()
cur.close()
conn.close()
It took about a second to load 15,314 rows into SQL Server. I hope this gives you an idea.
If I understand correctly, you want to use the DataFrame.to_sql() method:
df_sql.to_sql('dbo.SCORE_TABLE', conn, index=False, if_exists='append')
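(A hedged caveat and sketch, not from the original answer: pandas' to_sql generally wants a SQLAlchemy connectable rather than a raw pymssql connection, and the schema is usually passed separately. The connection details below are placeholders.)

from sqlalchemy import create_engine

# Hypothetical connection string; adjust user, password, host and database.
engine = create_engine("mssql+pymssql://user:password@Dev02/DEVDb")
df_sql.to_sql("SCORE_TABLE", engine, schema="dbo", index=False, if_exists="append")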
Possibly executemany treats each row of the ndarray returned by df.values as a single item, since there are no comma separators between values. Hence the placeholders outnumber the actual bound values and you receive the mismatch error.
Consider converting the array to a tuple of tuples (or a list of lists/tuple of lists/list of tuples) and then passing that object into executemany:
query = "INSERT INTO dbo.SCORE_TABLE VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
sql_data = tuple(map(tuple, df.values))
cur.executemany(query, sql_data)
conn.commit()
This works for me:
insert_query = """INSERT INTO dbo.temptable(CHECK_TIME, DEVICE, METRIC, VALUE, TOWER, LOCATION, ANOMALY, ANOMALY_SCORE, ANOMALY_SEVERITY)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)"""
write_data = tuple(map(tuple, data_frame.values))
cursor.executemany(insert_query, write_data)
con.commit()
cursor.close()
con.close()

Won't store csv data into mysql table?

I have a csv file that I am loading which you can see linked below as well as the error output I am getting. I cannot figure out why this error is happening. Any help is appreciated.
def url_store():
    run_urlcrazy()
    url_file = open('url_csv')
    csv_reader = csv.reader(url_file)
    cursor = db.cursor()
    for row in csv_reader:
        cursor.execute("INSERT INTO scanresults(typotype, squatdomain, ip, id, domaincontact, mx, originaldomain, ipcontact) \
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s)", str(row))
    db.commit()
    cursor.close()
(An error screenshot and the CSV file were linked in the original post.)
The query expects 8 parameters (indicated by (%s, %s, %s, %s, %s, %s, %s, %s)), but you only provide a single parameter, namely str(row).
If you're sure that row contains exactly 8 values, you can pass the row itself (as a tuple):
cursor.execute("INSERT INTO scanresults(typotype, squatdomain, ip, id, domaincontact, mx, originaldomain, ipcontact) \
    VALUES (%s, %s, %s, %s, %s, %s, %s, %s)", tuple(row))
or just go with
cursor.execute("INSERT INTO scanresults(typotype,squatdomain, ip, id, domaincontact, mx, originaldomain, ipcontact) \
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)", (row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7]))
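(If the CSV has many rows, a hedged alternative sketch, assuming every row has exactly 8 fields, is to collect the rows and let executemany insert them in one call.)

rows = [tuple(row) for row in csv_reader if len(row) == 8]  # skip malformed lines
cursor.executemany(
    "INSERT INTO scanresults(typotype, squatdomain, ip, id, domaincontact, mx, originaldomain, ipcontact) "
    "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
    rows,
)
db.commit()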
