pymysql.err.OperationalError: 1054, "Unknown column 'X' in 'where clause'" - python

I'm getting the following error in console, where the column name actually is the value passed through the query:
pymysql.err.OperationalError: (1054, "Unknown column 'LiqNac83437' in
'where clause'")
This is my function:
sql = f"""
SELECT
detallev.clave,
detallev.cantidad,
venta.fecha
FROM
detallev
INNER JOIN
venta ON detallev.ven_id = venta.ven_id
WHERE
clave = '{codigoBarras}'
AND (fecha BETWEEN {fecha_inicio} AND {fecha_final});"""
print(sql)
with bd:
with bd.cursor() as cursor:
cursor.execute(sql)
resultado = cursor.fetchall()
cursor.close()
which is called by:
@app.get('/{sucursal}/reporte/articulos/')
def reporte_articulo(sucursal: Origenes, clave: str = '', fecha_inicial: str = '', fecha_final: str = fechaHoy(), username: str = Depends(valida_usuario)):
    return reporte_articulos.reporte_articulo(sucursal, clave, fecha_inicial, fecha_final)
I'm using FastAPI, Python, and MySQL.
I've already tried following these solutions with no luck:
Solution 1
Solution 2
and several other solutions outside Stack Overflow, and I've already tried wrapping the concatenated value in several different ways.
When I run this query directly in MySQL Workbench it works perfectly; it only fails when called from the API.
When the value passed to the function is only numbers, such as "47839234215", instead of numbers and letters like "LiqNac83437", it works as expected.

This happens because you are substituting the values yourself, and in this case you have not properly quoted the values in the BETWEEN clause. MySQL sees LiqNac83437 bare, without quotes, and treats it as a column name.
For this reason, and to avoid SQL injection problems, you should let the database connector do the quoting:
sql = """
SELECT
detallev.clave,
detallev.cantidad,
venta.fecha
FROM
detallev
INNER JOIN
venta ON detallev.ven_id = venta.ven_id
WHERE
clave = ?
AND fecha BETWEEN ? AND ?;"""
with bd.cursor() as cursor:
cursor.execute(sql, (codigoBarras, fecha_inicio, fecha_final))
resultado = cursor.fetchall()
cursor.close()
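For illustration (not part of the original answer), pymysql's cursor has a mogrify method that returns the query exactly as it would be sent, so you can see the driver quoting each value. A minimal sketch, assuming bd is an open pymysql connection; the values are made up:
with bd.cursor() as cursor:
    rendered = cursor.mogrify(
        "SELECT 1 FROM detallev WHERE clave = %s AND fecha BETWEEN %s AND %s",
        ("LiqNac83437", "2022-01-01", "2022-12-31"),
    )
    print(rendered)
    # SELECT 1 FROM detallev WHERE clave = 'LiqNac83437'
    # AND fecha BETWEEN '2022-01-01' AND '2022-12-31'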

Related

Dynamic SQL Error - SQL error code = -104, Token unknown, happening on "select * from table" - Python Firebird

I seem to be getting an error while trying to dump all the data from the fdb file.
This is the error ('BLOB' is the name of the table):
("Error while preparing SQL statement:\n- SQLCODE: -104\n- Dynamic SQL Error\n- SQL error code = -104\n- Token unknown - line 1, column 15\n- 'BLOB'", -104, 335544569)
the code
import datetime
import decimal

import fdb

def js(val):
    if type(val) == int:
        return val
    if type(val) == str:
        return val
    if val is None:
        return val
    if type(val) == decimal.Decimal:
        return str(val)
    if type(val) == datetime.datetime:
        return val.isoformat()
    raise Exception(type(val))

con = fdb.connect(dsn='202204.fdb', user='sysdba', password='masterkey')
cur = con.cursor()
cur.execute(
    "SELECT a.RDB$RELATION_NAME FROM RDB$RELATIONS a WHERE RDB$SYSTEM_FLAG=0")
tables = [row[0].strip() for row in cur.fetchall()]
db = {}
for table in tables:
    db[table] = {}
    cur.execute(
        f"select rdb$field_name from rdb$relation_fields where rdb$relation_name='{table}' order by rdb$field_position")
    db[table]['cols'] = [head[0].strip() for head in cur.fetchall()]
    print(db)
    cur.execute(f"select * from '{table}'")  # code breaks here
    db[table]['rows'] = [[js(field) for field in row]
                         for row in cur.fetchall()]
the expected structure
{"BLOB": {"cols": ["GUID", "UPDATE", "DATA"], "rows": []}}
Python 3.9
Firebird 2.5
The interpolated string "select * from '{table}'" will not produce a valid query. Things enclosed in single quotes are string literals, and you cannot select from a string literal. If you intended this as a quoted identifier, you should enclose it in double quotes ("), not single quotes (').
That is, right now you produce a statement select * from 'BLOB', which is why the error refers to the unknown token 'BLOB', as Firebird expects a (quoted or unquoted) identifier. Change your code to produce select * from "BLOB".
Also, please be aware that string interpolation like this makes your code vulnerable to SQL injection. That is less of a problem here, since you're selecting from system tables, but there are edge cases: a table name containing a double quote breaks the third query, and one containing a single quote breaks the second. Your second query should use parameters rather than string interpolation (that is not possible for the third, where the name is an identifier).
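A rough sketch of both fixes (untested; fdb uses the qmark ? paramstyle):
# Second query: here the table name is a value, so bind it as a parameter.
cur.execute(
    "select rdb$field_name from rdb$relation_fields"
    " where rdb$relation_name = ? order by rdb$field_position",
    (table,),
)
# Third query: here the table name is an identifier, so enclose it in
# double quotes and double any embedded double quotes.
escaped = table.replace('"', '""')
cur.execute(f'select * from "{escaped}"')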

Safely Inserting Strings Into a SQLite3 UNION Query Using Python

I'm aware that the best way to prevent SQL injection is to write Python queries of this form (or similar):
query = 'SELECT %s %s from TABLE'
fields = ['ID', 'NAME']
cur.execute(query, fields)
The above will work for a single query, but what if we want to do a UNION of 2 SQL commands? I've set this up with sqlite3 for the sake of reproducibility, though technically I'm using pymysql. It looks as follows:
import sqlite3

conn = sqlite3.connect('dummy.db')
cur = conn.cursor()
query = 'CREATE TABLE DUMMY(ID int AUTO INCREMENT, VALUE varchar(255))'
query2 = 'CREATE TABLE DUMMY2(ID int AUTO INCREMENT, VALUE varchar(255))'
try:
    cur.execute(query)
    cur.execute(query2)
except:
    print('Already made table!')
tnames = ['DUMMY', 'DUMMY2']
sqlcmds = []
for i in range(0, 2):
    query = 'SELECT %s FROM {}'.format(tnames[i])
    sqlcmds.append(query)
fields = ['VALUE', 'VALUE']
sqlcmd = ' UNION '.join(sqlcmds)
cur.execute(sqlcmd, fields)
When I run this, I get a sqlite Operational Error:
sqlite3.OperationalError: near "%": syntax error
I've validated that the query prints as expected:
SELECT %s FROM DUMMY UNION SELECT %s FROM DUMMY2
All looks good there. What is the issue with the string substitutions here? I can confirm that running a query with direct string substitution works fine. I've tried it with both selects and inserts.
EDIT: I'm aware there are multiple ways to do this with executemany and a few others. I need to do this with UNION for my purposes; this is a very, very simplified example of the operational code I'm using.
The code below executes a few INSERTs at once:
import sqlite3

conn = sqlite3.connect('dummy.db')
cur = conn.cursor()
query = 'CREATE TABLE DUMMY(ID int AUTO INCREMENT NOT NULL, VALUE varchar(255))'
try:
    cur.execute(query)
except:
    print('Already made table!')
valid_fields = [('ya dummy',), ('stupid test example',)]
cur.executemany('INSERT INTO DUMMY (VALUE) VALUES (?)', valid_fields)
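For the UNION case itself, identifiers (table and column names) cannot be bound as parameters, so one sketch (not from the original answer) is to validate them against a whitelist and interpolate only validated names; the allowed sets here are hypothetical:
allowed_tables = {'DUMMY', 'DUMMY2'}
allowed_columns = {'ID', 'VALUE'}

def select_union(cur, pairs):
    # pairs is a list of (table, column) tuples chosen by the caller
    parts = []
    for table, column in pairs:
        if table not in allowed_tables or column not in allowed_columns:
            raise ValueError('unexpected identifier')
        parts.append('SELECT {} FROM {}'.format(column, table))
    cur.execute(' UNION '.join(parts))
    return cur.fetchall()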

Python error in SQL syntax while writing JSON to MySQL database

I want to receive JSON data from MQTT and store it in my database.
When executing my code I receive this error:
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '%s)' at line 1
Here is my code:
import mysql.connector
import json

mydb = mysql.connector.connect(
    host="localhost",
    user="***",
    passwd="***",
    database="database"
)
mycursor = mydb.cursor()

def DATA_REPARTITION(Topic, jsonData):
    if Topic == "test":
        #print("Start")
        INSERT_DEBIT(jsonData)

def INSERT_DEBIT(jsonData):
    # Read json from MQTT
    print("Start read data to insert")
    json_Dict = json.loads(jsonData)
    debit = json_Dict['debit']
    # Insert into DB Table
    sql = ("INSERT INTO debit (data_debit) VALUES (%s)")
    val = (debit)
    mycursor.execute(sql, val)
    mydb.commit()
    print(mycursor.rowcount, "record inserted.")
    mycursor.close()
    mydb.close()
Thanks for your help; I've been working on this problem for the last 2 days.
You've written your parameterized query properly for MySQL:
sql = ("INSERT INTO debit (data_debit) VALUES (%s)")
The problem is that you're passing in the arguments wrong:
val=(debit)
mycursor.execute(sql,val)
The parentheses don't make debit into a tuple of 1 value. They don't do anything at all; val is just the same value as debit.
But execute wants a sequence of separate values, not 1 value.
To fix this, you need to add a comma. Commas are what create tuples in Python:
val = debit,
If you're wondering why this raises a SQL error, instead of a Python error about val not being iterable… Most likely, val is a string, and strings are iterable: they iterate their characters. If val is, say, '$100', then you're passing the arguments '$', '1', '0', and '0' to a parameterized query with only one parameter.
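A minimal sketch of the fix, using the table and connection from the question:
debit = json_Dict['debit']
val = (debit,)  # the trailing comma makes this a one-element tuple
mycursor.execute("INSERT INTO debit (data_debit) VALUES (%s)", val)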

use row as variable with python and sql

I am trying to update some values in a database. The user can give the row that should be changed. The input from the user, however, is a string. When I try to pass this into the MySQL connector with Python, it gives an error because of the apostrophes. The code I have so far is:
import mysql.connector

conn = mysql.connector.connect(user=dbUser, password=dbPasswd, host=dbHost, database=dbName)
cursor = conn.cursor()
cursor.execute("""UPDATE Search SET %s = %s WHERE searchID = %s""", ('maxPrice', 300, 10,))
I get this error
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''maxPrice' = 300 WHERE searchID = 10' at line 1
How do I get rid of the apostrophes? I think they are causing the problem.
As noted, you can't prepare it using a field.
Perhaps the safest way is to allow only those fields that are expected, e.g.
#!/usr/bin/python
import os
import mysql.connector

conn = mysql.connector.connect(user=os.environ.get('USER'),
                               host='localhost',
                               database='sandbox',
                               unix_socket='/var/run/mysqld/mysqld.sock')
cur = conn.cursor(dictionary=True)
query = """SELECT column_name
           FROM information_schema.columns
           WHERE table_schema = DATABASE()
             AND table_name = 'Search'
        """
cur.execute(query)
fields = [x['column_name'] for x in cur.fetchall()]
user_input = ['maxPrice', 300, 10]
if user_input[0] in fields:
    cur.execute("""UPDATE Search SET {0} = {1} WHERE id = {1}""".format(user_input[0], '%s'),
                tuple(user_input[1:]))
    print(cur.statement)
Prints:
UPDATE Search SET maxPrice = 300 WHERE id = 10
Where:
mysql> show create table Search\G
*************************** 1. row ***************************
       Table: Search
Create Table: CREATE TABLE `Search` (
  `id` int(11) DEFAULT NULL,
  `maxPrice` float DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
A column name is not a parameter. Put the column name maxPrice directly into your SQL.
cursor.execute("""UPDATE Search SET maxPrice = %s WHERE searchID = %s""", (300, 10))
If you want to use the same code with different column names, you would have to modify the string itself.
sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format('maxPrice')
cursor.execute(sql, (300,10))
But bear in mind that this is not safe from injection the way parameters are, so make sure your column name is not a user-input string or anything like that.
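As a sketch of that precaution (the set of allowed names here is hypothetical):
ALLOWED_COLUMNS = {'maxPrice', 'minPrice'}  # hypothetical whitelist

def update_search(cursor, column, value, search_id):
    if column not in ALLOWED_COLUMNS:
        raise ValueError('unexpected column: %r' % column)
    sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format(column)
    cursor.execute(sql, (value, search_id))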
You cannot do it like that. You need to place the column name in the string before you call cursor.execute; column names cannot be substituted as parameters by cursor.execute.
Something like this would work:
sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format('maxPrice')
cursor.execute(sql, (300, 10,))
You cannot dynamically bind object (e.g., column) names, only values. If that's the logic you're trying to achieve, you'd have to resort to string manipulation/formatting (with all the risks of SQL-injection attacks that come with it). E.g.:
sql = """UPDATE Search SET {} = %s WHERE searchID = %s""".format('maxPrice')
cursor.execute(sql, (300, 10,))

specify table name for SQL statement by using user specified name, all of this in a python script [duplicate]

I have a problem designing a good algorithm which uses the features of the psycopg2 library described here.
I want to build a dynamic query equal to this string :
SELECT ST_GeomFromText('POLYGON((0.0 0.0,20.0 0.0,20.0 20.0,0.0 20.0,0.0 0.0))');
As you can see, my POLYGON object contains multiple points, read from a simple csv file some.csv, which contains:
0.0;0.0
20.0;0.0
20.0;20.0
0.0;20.0
0.0;0.0
So I build the query dynamically, based on the number of lines in the csv.
Here is my program to generate the SQL query string to execute:
import psycopg2
import csv

# list of points
lXy = []
DSN = "dbname='testS' user='postgres' password='postgres' host='localhost'"
conn = psycopg2.connect(DSN)
curs = conn.cursor()

def genPointText(curs, x, y):
    generatedPoint = "%s %s" % (x, y)
    return generatedPoint

# Read the csv file
polygonFile = open('some.csv', 'rb')
readerCSV = csv.reader(polygonFile, delimiter=';')
for coordinates in readerCSV:
    lXy.append(genPointText(curs, float(coordinates[0]), float(coordinates[1])))

# concatenate a list into a string with a separator
def convert(myList, separator):
    return separator.join([str(i) for i in myList])

# construct simple query with psycopg
def genPolygonText(curs, l):
    # http://initd.org/psycopg/docs/usage.html#python-types-adaptation
    generatedPolygon = "POLYGON((%s))" % convert(l, ",")
    return generatedPolygon

def executeWKT(curs, geomObject, srid):
    try:
        # geometry ST_GeomFromText(text WKT, integer srid);
        finalWKT = "SELECT ST_GeomFromText('%s');" % (geomObject)
        print finalWKT
        curs.execute(finalWKT)
    except psycopg2.ProgrammingError, err:
        print "ERROR = ", err

polygonQuery = genPolygonText(curs, lXy)
executeWKT(curs, polygonQuery, 4326)
As you can see, that works, but this way is not correct because of conversion problems between Python objects and PostgreSQL SQL objects.
In the documentation, I only see examples that feed and convert data for a static query. Do you know an "elegant" way to create a correct string with correct types when building a query dynamically?
UPDATE 1:
When I use psycopg's type adaptation on this simple example, I get an error like this:
query = "ST_GeomFromText('POLYGON(( 52.146542 19.050557, 52.148430 19.045527, 52.149525 19.045831, 52.147400 19.050780, 52.147400 19.050780, 52.146542 19.050557))',4326)"
name = "my_table"
try:
curs.execute('INSERT INTO %s(name, url, id, point_geom, poly_geom) VALUES (%s);', (name,query))
except psycopg2.ProgrammingError,err:
print "ERROR = " , err
The error is:
ERROR = ERREUR: erreur de syntaxe sur ou près de « E'my_table' »
LINE 1: INSERT INTO E'my_table'(name, poly_geom) VALUES (E'ST_GeomFr...
UPDATE 2:
Final code which works, thanks to Stack Overflow users!
# info lib: http://www.initd.org/psycopg/docs/
import psycopg2
# info lib: http://docs.python.org/2/library/csv.html
import csv

# list of points
lXy = []
DSN = "dbname='testS' user='postgres' password='postgres' host='localhost'"
print "Opening connection using dns:", DSN
conn = psycopg2.connect(DSN)
curs = conn.cursor()

def genPointText(curs, x, y):
    generatedPoint = "%s %s" % (x, y)
    return generatedPoint

# Read the csv file
polygonFile = open('some.csv', 'rb')
readerCSV = csv.reader(polygonFile, delimiter=';')
for coordinates in readerCSV:
    lXy.append(genPointText(curs, float(coordinates[0]), float(coordinates[1])))

# concatenate a list into a string with a separator
def convert(myList, separator):
    return separator.join([str(i) for i in myList])

# construct simple query with psycopg
def genPolygonText(l):
    # http://initd.org/psycopg/docs/usage.html#python-types-adaptation
    generatedPolygon = "POLYGON((%s))" % convert(l, ",")
    return generatedPolygon

def generateInsert(curs, tableName, name, geomObject):
    curs.execute('INSERT INTO binome1(name,geom) VALUES (%s, %s);', (name, geomObject))

def create_db_binome(conn, name):
    curs = conn.cursor()
    SQL = (
        "CREATE TABLE %s"
        " ("
        "   polyname character varying(15),"
        "   geom geometry,"
        "   id serial NOT NULL,"
        "   CONSTRAINT id_key PRIMARY KEY (id)"
        " )"
        " WITH ("
        "   OIDS=FALSE"
        " );"
        " ALTER TABLE %s OWNER TO postgres;"
    ) % (name, name)
    try:
        #print SQL
        curs.execute(SQL)
    except psycopg2.ProgrammingError, err:
        conn.rollback()
        dropQuery = "ALTER TABLE %s DROP CONSTRAINT id_key; DROP TABLE %s;" % (name, name)
        curs.execute(dropQuery)
        curs.execute(SQL)
    conn.commit()

def insert_geometry(polyname, tablename, geometry):
    escaped_name = tablename.replace('"', '""')
    try:
        test = 'INSERT INTO "%s"(polyname, geom) VALUES(%%s, ST_GeomFromText(%%s,%%s))' % (escaped_name)
        curs.execute(test, (polyname, geometry, 4326))
        conn.commit()
    except psycopg2.ProgrammingError, err:
        print "ERROR = ", err

################
# PROGRAM MAIN #
################
polygonQuery = genPolygonText(lXy)
srid = 4326
table = "binome1"
create_db_binome(conn, table)
insert_geometry("Berlin", table, polygonQuery)
insert_geometry("Paris", table, polygonQuery)
polygonFile.close()
conn.close()
You are trying to pass a table name as a parameter. You probably could've seen this immediately if you'd just looked at the PostgreSQL error log.
The table name you're trying to pass through psycopg2 as a parameter is being escaped, producing a query like:
INSERT INTO E'my_table'(name, url, id, point_geom, poly_geom) VALUES (E'ST_GeomFromText(''POLYGON(( 52.146542 19.050557, 52.148430 19.045527, 52.149525 19.045831, 52.147400 19.050780, 52.147400 19.050780, 52.146542 19.050557))'',4326)');'
This isn't what you intended and won't work; you can't escape a table name like a literal. You must use normal Python string interpolation to construct dynamic SQL; parameterized statement placeholders can only be used for actual literal values.
params = ('POLYGON(( 52.146542 19.050557, 52.148430 19.045527, 52.149525 19.045831, 52.147400 19.050780, 52.147400 19.050780, 52.146542 19.050557))', 4326)
escaped_name = name.replace('"', '""')
curs.execute('INSERT INTO "%s"(name, url, id, point_geom, poly_geom) VALUES (ST_GeomFromText(%%s,%%s));' % escaped_name, params)
See how I've interpolated the name directly to produce the query string:
INSERT INTO "my_table"(name, url, id, point_geom, poly_geom) VALUES (ST_GeomFromText(%s,%s));
(%% gets converted to plain % by % substitution). Then I'm using that query with the string defining the POLYGON and the other argument to ST_GeomFromText as query parameters.
I haven't tested this, but it should give you the right idea and help explain what's wrong.
BE EXTREMELY CAREFUL when doing string interpolation like this; it's an easy avenue for SQL injection. I've done very crude quoting in the code shown above, but I'd want to use a proper identifier quoting function if your client library offers one.
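For what it's worth, psycopg2 2.7+ does ship such a function, psycopg2.extensions.quote_ident, which takes the raw name plus a connection or cursor for context. A sketch, trimmed to a single column (curs and params as above):
from psycopg2.extensions import quote_ident

table_sql = quote_ident(name, curs)  # my_table -> "my_table"
curs.execute(
    'INSERT INTO %s(poly_geom) VALUES (ST_GeomFromText(%%s,%%s));' % table_sql,
    params,
)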
Now that 2.7 is on PyPI, here is an example of a dynamic query.
In this example I'll assume the polygon comes in as a dictionary from your csv file. Keys could be name, url, id, point_geom, poly_geom as mentioned above, but they won't really matter as long as the table structure contains the same keys.
There's probably a way to shorten this, but I hope it clarifies the use of the sql functions, namely sql.SQL, sql.Identifier, and sql.Placeholder, and how to concatenate a list of strings with sql.SQL(', ').join(list()).
from psycopg2 import sql

table = 'my_table'
polygon = Polygon.from_file()  # or something

column_list = list()
value_list = list()

# Convert the dictionary to lists
for column, value in polygon.items():
    column_list.append(sql.Identifier(column))  # convert to identifiers
    value_list.append(value)

# Build the query; values will be bound at execute time
query = sql.SQL("INSERT INTO {} ({}) VALUES ({}) ON CONFLICT DO NOTHING").format(
    sql.Identifier(table),
    sql.SQL(', ').join(column_list),  # already sql.Identifier
    sql.SQL(', ').join([sql.Placeholder()] * len(value_list)))

# Execute the cursor
with postgres.cursor() as p_cursor:
    # execute requires a tuple, not a list
    p_cursor.execute(query, tuple(value_list))
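While debugging, the composed statement can be inspected by rendering it with as_string, which also needs a connection or cursor for context:
print(query.as_string(p_cursor))  # shows the query with quoted identifiers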
Reference: http://initd.org/psycopg/docs/sql.html
The proper way is to use psycopg2 2.7's new sql module which includes an Identifier object. This allows you to dynamically specify SQL identifiers in a safe way.
Unfortunately 2.7 is not on PyPi yet (2.6.2 as of writing).
Until then, psycopg2 covers this under the heading "How can I pass field/table names to a query?":
http://initd.org/psycopg/docs/faq.html#problems-with-type-conversions
You can pass SQL identifiers in along with data values to the execute function by using the AsIs function.
Note: this provides NO security. It is as good as using a format string, which is not recommended.
The only real advantage of this is you encourage future code to follow the execute + data style. You can also easily search for AsIs in future.
from psycopg2.extensions import AsIs

<snip>

with transaction() as cur:
    # WARNING: not secure
    cur.execute('SELECT * from %(table)s', {'table': AsIs('mytable')})
