Dynamic MySQL insert statement in Python

I need to insert column values from a CSV file into a MySQL database, and the CSV columns are dynamic: one file may have 10 columns and another may have 5.
My understanding is that in Python we need to use a list. The question linked below is similar to my requirement, but there the values are static, so the placeholders can be predefined, whereas in my case they are dynamic. I need a solution where the number of %s placeholders under VALUES is multiplied to match my column count dynamically.
MySQL Dynamic Query Statement in Python
ds_list = ['name', 'id', 'class', 'chapter', 'MTH', 'YEAR']
vl_list = ['xxxxx', '978000048', 'x-grade', 'Science', 'mar', 2017]
# %%s becomes a literal %s after the string formatting below
sql = 'INSERT INTO ann1 (%s) VALUES (%%s, %%s, %%s, %%s, %%s, %%s)' % ','.join(ds_list)
cur.execute(sql, vl_list)
conn.commit()

So, if you have two lists, one with column names and the other with values, you can build the INSERT query dynamically.
# One %s placeholder per value, e.g. '%s, %s, %s' for three values
query_placeholders = ', '.join(['%s'] * len(vl_list))
query_columns = ', '.join(ds_list)
insert_query = 'INSERT INTO table (%s) VALUES (%s)' % (query_columns, query_placeholders)
Then execute and commit the query, passing the list of values:
cursor.execute(insert_query, vl_list)
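Putting the pieces together, here is a minimal end-to-end sketch. The connection parameters are placeholders, and the table name ann1 is taken from the question:

import mysql.connector

# Placeholder connection details -- adjust for your environment
conn = mysql.connector.connect(host='localhost', user='user',
                               password='secret', database='mydb')
cur = conn.cursor()

ds_list = ['name', 'id', 'class', 'chapter', 'MTH', 'YEAR']
vl_list = ['xxxxx', '978000048', 'x-grade', 'Science', 'mar', 2017]

# One %s per value, so the statement adapts to any number of columns
placeholders = ', '.join(['%s'] * len(vl_list))
columns = ', '.join(ds_list)
insert_query = 'INSERT INTO ann1 (%s) VALUES (%s)' % (columns, placeholders)

cur.execute(insert_query, vl_list)  # values are passed separately and escaped by the driver
conn.commit()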

Related

Insert values in table with two execute commands

I am trying to insert values into one MySQL table using Python.
First, inserting values from a CSV file, with:
sql = "INSERT INTO mydb.table (time, day, number) VALUES %r" % (tuple(values),)
cursor.execute(sql)
Then inserting another value into the same table and the same row:
sql = "INSERT INTO mydb.table (name) VALUES (%s)"
cursor.execute(sql, (name,))
With this I get the inserts in two different rows…
But I need to insert the value into the same row, without using sql = "INSERT INTO mydb.table (time, day, number, name) VALUES %r" % (tuple(values),)
Is there a way to insert values into the same row with two separate INSERT statements?
INSERT always adds a new row. If you want to change values in an existing row, you have to identify that row by a unique key and use UPDATE (with a WHERE clause) or REPLACE instead.
When using REPLACE you need to be careful if your table contains an AUTO_INCREMENT column, since a new value will be generated.
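A minimal sketch of that two-step pattern, assuming the table has an AUTO_INCREMENT id column (the *_value names are placeholders):

# First statement creates the row
sql = "INSERT INTO mydb.table (time, day, number) VALUES (%s, %s, %s)"
cursor.execute(sql, (time_value, day_value, number_value))
row_id = cursor.lastrowid  # id generated for the row we just inserted

# Second statement fills in the remaining column of the SAME row
sql = "UPDATE mydb.table SET name = %s WHERE id = %s"
cursor.execute(sql, (name_value, row_id))
conn.commit()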

In MySQL, using Python, how to INSERT INTO tables and columns, depending on variables?

I am building an interface in Python to access different queries on some big data tables (insert, search, predefined queries, etc.). The problem is that there are several different tables, each containing a number of columns. So I want to modularize my code and my MySQL queries, so that depending on which table we want to insert data into and which columns the data concerns, the code knows which MySQL command to execute.
I saw that we can use variables for values, for example:
sql = "INSERT INTO table_name (col1, col2) VALUES (%s, %s)"
values = ("val1", "val2")
mycursor.execute(sql, values)
Is it possible to have something similar with table_name and columns? To have something like, for example:
sql = "INSERT INTO (%s) (%s, %s) VALUES (%s, %s)"
table = "table_name"
columns = ("col1", "col2")
values = ("val1", "val2")
mycursor.execute(sql, table, columns, values)
With that, it would be far easier for me to initialize table, columns, and values when needed (for example when the user clicks a button, enters values in some text fields, etc.) than having a lot of such SQL queries, one for each table and each possible subset of columns.
I am not sure everything is clear given my rather rough English; if you need more information, feel free to ask!
Thank you in advance for your time,
Sanimys
For the few of you who will see this while looking for an answer: in fact, this is pretty easy! You can do it pretty much exactly like I proposed, for example:
sql = "INSERT INTO %s %s VALUES %s"
table = "table_name"
columns = "(col1, col2)"
values = "(val1, val2)"
mycursor.execute(sql % (table, columns, values))
Where every element is a string.
By doing that, you can have some nice dynamic computations, such as:
sql = "INSERT INTO %s %s VALUES %s"
table = "table_name"
values = "(val1, val2)"
// Some code
def compute_columns(list_of_columns_to_insert):
col = "("
for c in list_of_columns_to_insert:
col = col + "'" + c + "', "
col = col[:-2] + ")"
return col
// Some code generating a list of columns that we want to insert to
columns = compute_columns(list_of_columns_to_insert)
mycursor.execute(sql % (columns, values))
Here you go, hope that it can help someone who struggles like me!
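A word of caution: interpolating the values directly into the SQL string, as above, is open to SQL injection and quoting problems. A safer sketch of the same idea formats only the identifiers and passes the values as parameters (safe_insert is a hypothetical helper name):

def safe_insert(mycursor, table, columns, values):
    # Identifiers (table/column names) cannot be sent as query parameters,
    # so they are formatted in; the values go through the driver's escaping.
    col_sql = ', '.join('`%s`' % c for c in columns)
    placeholders = ', '.join(['%s'] * len(values))
    sql = 'INSERT INTO `%s` (%s) VALUES (%s)' % (table, col_sql, placeholders)
    mycursor.execute(sql, values)

safe_insert(mycursor, 'table_name', ['col1', 'col2'], ['val1', 'val2'])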

inserting a huge number of rows into MySQL

I want to read from an MSSQL table and then insert into a MySQL table, but I couldn't format my MSSQL results to run executemany on them.
cursor.execute('select * from table')  # MSSQL
rows = cursor.fetchall()
many_rows = []
for row in rows:
    many_rows.append(row)
sql = "insert into mysql.table VALUES (NULL, %s) on duplicate key update REFCOLUMN=REFCOLUMN"  # MySQL
mycursor.executemany(sql, many_rows)
mydb.commit()
This gives: Failed executing the operation; Could not process parameters.
The first NULL is for the id column and the %s for the other 49 columns. It works one row at a time, but that takes ages over a remote connection.
EDIT
My example print output of many_rows:
[
(49 columns' values, all string and separated by comma),
(49 columns' values, all string and separated by comma),
(49 columns' values, all string and separated by comma),
...
]
I was able to fix my issue by appending the data like below:
many_rows.append(list(row))
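For a table this wide, the placeholder list can also be generated from the row width instead of typed out. A sketch based on the question's statement (REFCOLUMN and the leading NULL for the AUTO_INCREMENT id are taken from it):

# One %s per source column, derived from the first fetched row
placeholders = ', '.join(['%s'] * len(many_rows[0]))
sql = ("insert into mysql.table VALUES (NULL, %s) "
       "on duplicate key update REFCOLUMN=REFCOLUMN" % placeholders)

# executemany sends the batch in far fewer round trips than row-by-row inserts
mycursor.executemany(sql, many_rows)
mydb.commit()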

psycopg2 to generate insert statements with variable column counts

I am attempting to insert Excel spreadsheets into a Postgres DB using a Python script with psycopg2.
The problem is that not all the spreadsheets have the same number of columns, and I need the insert statement to be flexible enough that I don't have to specify the columns by name.
My approach is to load the columns of the spreadsheet's header row into a tuple, and likewise with the values being inserted. So for example:
sql = ''''INSERT INTO my_table (%s) VALUES (%s);'''
cur.execute(sql, (cols, vals))
where 'cols' and 'vals' are both tuples.
'cols' can have 7, 9, 10, etc. entries, again depending on how many columns the spreadsheet had.
When I attempt to run this, I get:
psycopg2.ProgrammingError: syntax error at or near "'INSERT INTO my_table
(ARRAY['"
LINE 1: 'INSERT INTO my_table...
^
Not sure if the problem is in my calling syntax, or if you simply can't do what I'm trying to do.
There's an extra apostrophe ' at the beginning of your SQL query.
''''INSERT INTO my_table (%s) VALUES (%s);'''
should be
'''INSERT INTO my_table (%s) VALUES (%s);'''
Edit: didn't realize you were trying to set the columns dynamically. To do that, you should use string formatting for the column names, with one %s placeholder per value. Assuming cols is a list:
sql = '''INSERT INTO my_table ({}) VALUES ({})'''.format(','.join(cols), ','.join(['%s'] * len(cols)))
Then, your execution would be:
cur.execute(sql, vals)
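For completeness: psycopg2 (2.7+) also ships a sql submodule built for exactly this, composing identifiers safely instead of via plain string formatting. A sketch, assuming cols and vals come from the spreadsheet:

from psycopg2 import sql

query = sql.SQL("INSERT INTO my_table ({}) VALUES ({})").format(
    sql.SQL(', ').join(map(sql.Identifier, cols)),      # quoted column names
    sql.SQL(', ').join(sql.Placeholder() * len(vals)))  # one %s per value
cur.execute(query, vals)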

Insert into a large table in psycopg using a dictionary

I have a VERY large table (>200 columns) in a database, which I'm accessing through psycopg2. I have the rows I want to insert as dictionaries, column name as key and value as value. Using psycopg2, I want to insert the row into the table.
Because of the prohibitively huge number of columns of the table in question, I would rather not write out an insert statement manually. How do I insert the dictionary efficiently and neatly?
This is the test table:
create table testins (foo int, bar int, baz int)
You can compose an SQL statement this way:
d = dict(foo=10, bar=20, baz=30)
cur.execute(
    # builds: insert into testins (foo,bar,baz) values (%(foo)s,%(bar)s,%(baz)s)
    "insert into testins (%s) values (%s)"
    % (','.join(d), ','.join('%%(%s)s' % k for k in d)),
    d)
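The trick here is that psycopg2 accepts named placeholders of the form %(key)s together with a dictionary of parameters, so the column order never has to be spelled out. The same idea wrapped as a reusable sketch (insert_dict is a hypothetical helper name):

def insert_dict(cur, table, row):
    # row maps column name -> value; %(key)s placeholders pull values
    # out of the dict by name, so ordering does not matter
    cols = ','.join(row)
    placeholders = ','.join('%%(%s)s' % k for k in row)
    cur.execute('insert into %s (%s) values (%s)'
                % (table, cols, placeholders), row)

insert_dict(cur, 'testins', dict(foo=10, bar=20, baz=30))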
