Python to MySQL: unknown column in try/except

I just want to select from or insert into MySQL using Python 3.2 and mysql.connector.
import mysql.connector

filename = "t1.15231.0337.mod35.hdf"
try:
    cnx = mysql.connector.connect(user='root', password='', database='etl')
    cursor = cnx.cursor()
    cursor.execute('SELECT * FROM hdf_file WHERE NAMA_FILE = %s', filename)
    rows = cursor.fetchall()
    if rows == []:
        insert_hdf = cursor.execute('INSERT INTO hdf_file VALUES(%s,null,NOW(),null,null,NOW())', filename)
        cursor.execute(insert_hdf)
    cnx.commit()
    cursor.close()
    cnx.close()
except mysql.connector.Error as err:
    print("Something went wrong: {}".format(err))
But it said: Unknown column 'filename' in where clause.
I have also tried something like this:
cursor.execute('SELECT * FROM hdf_file WHERE NAMA_FILE = filename')
but I got the same error...

When using cursor.execute() with parameterised queries, the query arguments are passed as a sequence (e.g. a list or tuple), or as a dictionary when using named parameters. Your code passes only the bare string filename.
Your queries could be written like this:
cursor.execute('SELECT * FROM hdf_file WHERE NAMA_FILE = %s', (filename,))
Here the tuple (filename,) is passed to execute(). Similarly for the insert query:
cursor.execute('INSERT INTO hdf_file VALUES (%s, null, NOW(), null, null, NOW())',
               (filename,))
execute() will return None, so there is no use in storing the result in the insert_hdf variable. Calling cursor.execute(insert_hdf) afterwards also makes no sense and will cause an error, since the "query" you would be passing is None.
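The one-element tuple is the detail that trips people up here: the trailing comma, not the parentheses, is what makes a tuple. A quick stdlib-only check, reusing the filename from the question (no database needed):

```python
filename = "t1.15231.0337.mod35.hdf"

# Parentheses alone are just grouping -- this is still a plain string,
# which is why execute() treated it as a bad parameter sequence:
not_a_tuple = (filename)

# The trailing comma is what creates a one-element tuple:
one_tuple = (filename,)

print(type(not_a_tuple).__name__)  # str
print(type(one_tuple).__name__)    # tuple
print(len(one_tuple))              # 1
```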

Related

MySQL table name getting unwanted quotes, resulting in a "table does not exist" error

import mysql.connector

def add_features_to_db(stockname, timeframe, date, feature):
    try:
        conn = mysql.connector.connect(
            user='root', password='', host='localhost', database='fx003')
        cursor = conn.cursor()
        dbtable = stockname + timeframe
        mySql_insert_query = """INSERT INTO `%s` (date, trend) VALUES ( `%s`, `%s` )"""
        record = (dbtable, date, feature)
        cursor.execute(mySql_insert_query, record)
        conn.commit()
        print("Record inserted successfully")
    except mysql.connector.Error as error:
        print("Failed to insert into MySQL table {}".format(error))
    finally:
        if conn.is_connected():
            cursor.close()
            conn.close()
            print("MySQL connection is closed")

add_features_to_db("aud-cad", "_30mins", "2021-09-24 21:00:00", "Short")
I have the code above, and it gives me the error below:
Failed to insert into MySQL table 1146 (42S02): Table 'fx003.'aud-cad_30mins'' doesn't exist
The aud-cad_30mins table does exist, and an insert query like the one below does its job:
mySql_insert_query = """INSERT INTO aud-cad_30mins (date, trend) VALUES ( "2021-09-24 21:00:00","Short" )"""
So when I try to use variables in the query, it gives the error. Why is the table name getting unwanted quotes? I checked several tutorials but couldn't find a solution. Any ideas?
The table name should be hardcoded in the query string instead of being a %s placeholder, which is meant only for the values to be inserted. So if you have the table name in a variable, you can splice it in via format() before calling cursor.execute():
dbtable = stockname + timeframe
mySql_insert_query = """INSERT INTO {} (date, trend) VALUES ( %s, %s )""".format(dbtable)
see the examples in the docs
Edit: as Bill mentioned in the comments, don't add backticks around the %s placeholders.
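Since the table name cannot go through a placeholder, it is worth validating it before splicing it into the query with format(). A minimal sketch of that idea — build_insert_query is a hypothetical helper, not from the original post:

```python
def build_insert_query(stockname, timeframe):
    """Build the INSERT statement with the table name baked in.

    Identifiers cannot be bound as %s parameters, so the name is
    validated and interpolated with format(); the values stay as
    %s placeholders for cursor.execute() to bind.
    """
    dbtable = stockname + timeframe
    # Whitelist check, since format() offers no escaping of its own.
    if not all(c.isalnum() or c in "-_" for c in dbtable):
        raise ValueError("unexpected table name: {!r}".format(dbtable))
    return "INSERT INTO `{}` (date, trend) VALUES (%s, %s)".format(dbtable)

query = build_insert_query("aud-cad", "_30mins")
print(query)
# INSERT INTO `aud-cad_30mins` (date, trend) VALUES (%s, %s)
```

The cursor call then becomes cursor.execute(query, (date, feature)), with only the values parameterised.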

Insert API response into many rows at once - psycopg2

I want to INSERT a dict resulting from an API call into my db; so far I can insert one item at a time.
This is my code:
import json
import requests
import psycopg2

def my_func():
    response = requests.get("https://path/to/api/")
    data = response.json()
    while data['next'] is not None:
        response = requests.get(data['next'])
        data = response.json()
        for item in data['results']:
            try:
                connection = psycopg2.connect(user="user",
                                              password="user",
                                              host="127.0.0.1",
                                              port="5432",
                                              database="mydb")
                cursor = connection.cursor()
                postgres_insert_query = """INSERT INTO table_items (NAME) VALUES (%s)"""
                record_to_insert = item['name']
                cursor.execute(postgres_insert_query, (record_to_insert,))
                connection.commit()
                count = cursor.rowcount
                print(count, "success")
            except (Exception, psycopg2.Error) as error:
                if connection:
                    print("error", error)
            finally:
                if connection:
                    cursor.close()
                    connection.close()

my_func()
So, this one is working. But if, for example, I want to insert into table_items not just the name column but also address, weight and cost_per_unit, then I change these lines of code:
postgres_insert_query = 'INSERT INTO table_items (NAME, ADDRESS, WEIGHT, COST_PER_UNIT) VALUES (%s,%s,%s,%s)'
record_to_insert = (item['name']['address']['weight']['cost_per_unit'])
Then it will throw:
Failed to insert record into table_items table string indices must be integers
PostgreSQL connection is closed
I mean, the first version with just one field works perfectly, but I need to insert into the other three fields every time. Any ideas?
You have to fix the syntax used to look up the item keys when defining the parameters, and also change the object you pass to the parameterized query, since record_to_insert is already a tuple:
postgres_insert_query = """ INSERT INTO table_items
(NAME, ADDRESS, WEIGHT, COST_PER_UNIT) VALUES (%s,%s,%s,%s)"""
record_to_insert = (item['name'],
item['address'],
item['weight'],
item['cost_per_unit'])
cursor.execute(postgres_insert_query, record_to_insert) # you can pass the tuple directly
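If the goal really is many rows at once, psycopg2 also ships execute_values in psycopg2.extras, which sends the whole batch in a single statement. A sketch under that assumption — build_rows and insert_items are hypothetical helpers, and the psycopg2 import is deferred so the pure row-shaping part runs anywhere:

```python
def build_rows(results):
    # Flatten the API items into tuples, in column order.
    return [(item['name'], item['address'], item['weight'], item['cost_per_unit'])
            for item in results]

def insert_items(connection, results):
    # One round trip instead of one execute() per item.
    from psycopg2.extras import execute_values  # deferred: needs psycopg2 installed
    with connection.cursor() as cursor:
        execute_values(
            cursor,
            "INSERT INTO table_items (NAME, ADDRESS, WEIGHT, COST_PER_UNIT) VALUES %s",
            build_rows(results))
    connection.commit()
```

This also lets you open the connection once, outside the for loop, instead of reconnecting per item as in the question's code.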

Not all parameters were used in the SQL statement when using Python and MySQL

Hi, I am doing a Python-MySQL project. I initialize the database and try to create the record table, but it seems I cannot load data into the table. Can anyone here help me out with this?
import mysql.connector

mydb = mysql.connector.connect(host="localhost", user="root", password="asd619248636", database="mydatabase")
mycursor = mydb.cursor()
mycursor.execute("CREATE TABLE record (temperature FLOAT(20), humidity FLOAT(20))")

sql = "INSERT INTO record (temperature,humidity) VALUES (%d, %d)"
val = (2.3, 4.5)
mycursor.execute(sql, val)
mydb.commit()
print(mycursor.rowcount, "record inserted.")
and the error shows:
mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement
Changing the following should fix your problem:
sql = "INSERT INTO record (temperature,humidity) VALUES (%s, %s)"
val = ("2.3","4.5") # You can also use (2.3, 4.5)
mycursor.execute(sql,val)
The connector only recognizes the %s placeholder (and the named %(name)s form) and converts the Python values to the appropriate database types itself. Your code throws an error because %d and %f (int and float format codes) are not valid placeholders here.
For more info on this you can look here
Simply change the insert query to
sql = "INSERT INTO record (temperature,humidity) VALUES (%s, %s)"
then it works fine
This works for me.
# Insert from dataframe to table in SQL Server
import time
import pandas as pd
import pyodbc

# create timer
start_time = time.time()

df = pd.read_csv("C:\\your_path_here\\CSV1.csv")

conn_str = (
    r'DRIVER={SQL Server Native Client 11.0};'
    r'SERVER=Excel-Your_Server_Name;'
    r'DATABASE=NORTHWND;'
    r'Trusted_Connection=yes;'
)
cnxn = pyodbc.connect(conn_str)
cursor = cnxn.cursor()

for index, row in df.iterrows():
    cursor.execute('INSERT INTO dbo.Table_1 ([Name],[Address],[Age],[Work]) values (?,?,?,?)',
                   row['Name'],
                   row['Address'],
                   row['Age'],
                   row['Work'])
cnxn.commit()
cursor.close()
cnxn.close()
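On top of the row-by-row loop above, pyodbc can batch the inserts with executemany and its fast_executemany flag. A sketch assuming the same dbo.Table_1 layout; rows_to_params is a hypothetical helper kept separate so it can be exercised without a database:

```python
def rows_to_params(rows, columns=('Name', 'Address', 'Age', 'Work')):
    # Shape dict-like rows (e.g. df.to_dict('records')) into the
    # sequence of parameter tuples that executemany expects.
    return [tuple(row[c] for c in columns) for row in rows]

def bulk_insert(cnxn, rows):
    # cnxn is assumed to be an open pyodbc connection.
    cursor = cnxn.cursor()
    cursor.fast_executemany = True  # send the batch in one round trip
    cursor.executemany(
        'INSERT INTO dbo.Table_1 ([Name],[Address],[Age],[Work]) VALUES (?,?,?,?)',
        rows_to_params(rows))
    cnxn.commit()
    cursor.close()
```

With a dataframe you would call bulk_insert(cnxn, df.to_dict('records')).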

How to use multiple DB connections in a Python unit test

I need to use multiple connections in my Python test code, but the problem I'm facing is that the second connection does not see statements executed in the first one. As far as I understand, autocommit should be ON by default.
Here is the code
import testing.postgresql
from sqlalchemy import create_engine

def test_simple():
    with testing.postgresql.Postgresql() as postgresql:
        try:
            engine = create_engine(postgresql.url())
            conn = engine.connect().connection
            with conn.cursor() as cur:
                cur.execute("""CREATE TABLE country (id integer, name text);
                    INSERT INTO country(id, name) VALUES (1, 'Mali');
                    INSERT INTO country(id, name) VALUES (2, 'Congo');
                    """)
                # OK
                cur.execute('select * from country')
                countries = cur.fetchall()
                print(str(countries))
            # ERROR psycopg2.ProgrammingError: relation "country" does not exist
            conn1 = engine.connect().connection
            with conn1.cursor() as cur1:
                cur1.execute('select * from country')
                countries1 = cur1.fetchall()
                print(str(countries1))
        finally:
            conn.close()
            conn1.close()
How can I use multiple connections in my test?
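One likely explanation, offered as a sketch rather than a confirmed diagnosis: DB-API connections such as psycopg2's are not autocommit by default, so the first connection's CREATE/INSERT stay invisible to the second until conn.commit() is called (or conn.autocommit = True is set before the setup statements). The same visibility rule can be demonstrated with stdlib sqlite3 and a shared in-memory database:

```python
import sqlite3

# Two independent connections to one shared in-memory database.
uri = "file:demo?mode=memory&cache=shared"
conn = sqlite3.connect(uri, uri=True)
conn1 = sqlite3.connect(uri, uri=True)

conn.execute("CREATE TABLE country (id INTEGER, name TEXT)")
conn.execute("INSERT INTO country(id, name) VALUES (1, 'Mali')")
conn.commit()  # without this, conn1 cannot see the first connection's work

countries1 = conn1.execute("SELECT name FROM country").fetchall()
print(countries1)  # [('Mali',)]

conn.close()
conn1.close()
```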

Can't insert single column value in Python using MySQL

I have a single-column table and need to insert values into this column. The program runs without errors, but when I check the database, nothing has been inserted. When I added another column to the code and the table, the program inserted data correctly. Can you tell me how to insert data into a single-column table?
This is the single-column code that does not insert anything into the table.
import MySQLdb

conn = MySQLdb.connect(host="localhost",
                       user="root",
                       passwd="123",
                       db="dbname")
cursor = conn.cursor()
x = 100
try:
    sql = """INSERT INTO table (col1) VALUES ('%s')"""
    cursor.execute(sql, (x))
    conn.commit()
except:
    conn.rollback()
conn.close()
This is the two-column code.
import MySQLdb

conn = MySQLdb.connect(host="localhost",
                       user="root",
                       passwd="123",
                       db="dbname")
cursor = conn.cursor()
x = 100
y = 2
try:
    sql = """INSERT INTO table (col1,col2) VALUES ('%s','%s')"""
    cursor.execute(sql, (x, y))
    conn.commit()
except:
    conn.rollback()
conn.close()
You need to lose the quotes around %s; after that, note that the second argument to cursor.execute() is a tuple, and that a one-tuple is written:
(item,)
note the comma. The solution is then:
sql="""INSERT INTO table (col1) VALUES (%s)"""
cursor.execute(sql, (x,))
You can try this:
Instead of '%s', just use %s without quotes.
(Note that the ? placeholder sometimes suggested belongs to qmark-style drivers such as sqlite3 or pyodbc; MySQLdb expects %s, so ? will not work here.)
Try this:
sql = """INSERT INTO table (col1) VALUES ({});""".format(x)
cursor.execute(sql)
(Note that this splices the value straight into the SQL, bypassing parameterization, so only use it with trusted values.)
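When in doubt about ? versus %s, every DB-API driver advertises its placeholder convention in a module-level paramstyle attribute. Shown here with stdlib sqlite3; the MySQLdb line is left as a comment since that package may not be installed:

```python
import sqlite3

print(sqlite3.paramstyle)  # 'qmark' -> use ? placeholders

# import MySQLdb
# print(MySQLdb.paramstyle)  # 'format' -> use %s placeholders
```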
