I have the following SQL being sent from pyodbc using bound parameters.
IF NOT EXISTS (
SELECT *
FROM dbo.tApplicationCMSNegativeFactors2
WHERE N'transientkey' = N'67'
)
INSERT dbo.tApplicationCMSNegativeFactors2 (
N'transientkey, nfid, active, applicationnum, modstamp, source_database_tablename_for_kafka_connector, source_identifier_for_kafka_connector'
)
VALUES (
N'''67'', ''5'', ''1'', ''52'', ''2022-10-01 03:28:25.000372'', ''tapplicationcmsnegativefactors'', ''transientkey'''
)
ELSE
UPDATE dbo.tApplicationCMSNegativeFactors2
SET N'transientkey = ''67'', nfid = ''5'', active = ''1'', applicationnum = ''52'', modstamp = ''2022-10-01 03:28:25.000372'', source_database_tablename_for_kafka_connector = ''tapplicationcmsnegativefactors'', source_identifier_for_kafka_connector = ''transientkey'''
WHERE N'transientkey' = N'67';
I am unsure why, but when I try to execute this code, SSMS shows an error next to the SET clause. What can I do to make this SQL execute successfully while still retaining the N prefix, so that it works with pyodbc?
I was expecting this code to execute successfully, seeing as removing the N prefixes allows the code to execute.
I've included the Python code below.
import pyodbc
# Auth.
server = ""
database = ""
username = ""
password = ""
# Set up the database connection
cnxn = pyodbc.connect(f'DRIVER={"SQL Server"};SERVER={server};DATABASE={database};UID={username};PWD={password}')
cursor = cnxn.cursor()
ls = [
"transientkey",
67,
"transientkey, nfid, active, applicationnum, modstamp, source_database_tablename_for_kafka_connector, source_identifier_for_kafka_connector",
"'67', '5', '1', '52', '2022-10-01 03:28:25.000372', 'tapplicationcmsnegativefactors', 'transientkey'",
"transientkey = '67', nfid = '5', active = '1', applicationnum = '52', modstamp = '2022-10-01 03:28:25.000372', source_database_tablename_for_kafka_connector = 'tapplicationcmsnegativefactors', source_identifier_for_kafka_connector = 'transientkey'",
]
# Execute SQL
def exec_sql(kv, join_kv, col_inst, val_inst, val_upd):
    cursor.execute(
        "IF NOT EXISTS (SELECT * FROM dbo.tApplicationCMSNegativeFactors2 WHERE ? = ?) INSERT tApplicationCMSNegativeFactors2 (?) VALUES (?) ELSE UPDATE dbo.tApplicationCMSNegativeFactors2 SET ? WHERE ? = ?",
        kv, join_kv, col_inst, val_inst, val_upd, kv, join_kv
    )

exec_sql(str(ls[0]), str(ls[1]), str(ls[2]), str(ls[3]), str(ls[4]))
You're mixing up parameters and dynamic SQL. You can't change the structure of the SQL with parameters, so this
tApplicationCMSNegativeFactors2 (?) VALUES (?)
needs to be built with string interpolation (accounting for SQL injection vulnerabilities) before the string with parameter markers (?) is sent to the cursor.
The reason I'm using ? here is to prevent SQL injection attacks from occurring via the Python code.
You just can't do that. If you want to avoid dynamic SQL, you can have a number of static SQL queries with parameter markers for the data, like:
... INSERT tApplicationCMSNegativeFactors2 (transientkey, nfid, active, applicationnum, modstamp, source_database_tablename_for_kafka_connector, source_identifier_for_kafka_connector) VALUES (?,?,?,?,?,?,?) ...
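For the statement in the question, a minimal sketch of that static approach might look like this (assuming the cnxn/cursor setup from the question's code; only the data values travel as bound parameters, and pyodbc sends Python str values as Unicode anyway, so no N prefix is needed in the SQL text at all):
UPSERT_SQL = """
IF NOT EXISTS (SELECT * FROM dbo.tApplicationCMSNegativeFactors2 WHERE transientkey = ?)
    INSERT dbo.tApplicationCMSNegativeFactors2
        (transientkey, nfid, active, applicationnum, modstamp,
         source_database_tablename_for_kafka_connector,
         source_identifier_for_kafka_connector)
    VALUES (?, ?, ?, ?, ?, ?, ?)
ELSE
    UPDATE dbo.tApplicationCMSNegativeFactors2
    SET nfid = ?, active = ?, applicationnum = ?, modstamp = ?,
        source_database_tablename_for_kafka_connector = ?,
        source_identifier_for_kafka_connector = ?
    WHERE transientkey = ?;
"""

row = ('67', '5', '1', '52', '2022-10-01 03:28:25.000372',
       'tapplicationcmsnegativefactors', 'transientkey')
key = row[0]
# Parameter order must match marker order: 1 for the EXISTS check,
# 7 for the INSERT, then the 6 non-key values plus the key for the UPDATE.
cursor.execute(UPSERT_SQL, (key,) + row + row[1:] + (key,))
cnxn.commit()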
Related
I have created a database and I am trying to fetch data from it. I have a class Query, and inside the class I have a function that queries a table called forecasts. The function is as follows:
def forecast(self, provider: str, zone: str='Mainland',):
    self.date_start = date_start
    self.date_end = date_end
    self.df_forecasts = pd.DataFrame()
    fquery = """
    SELECT dp.name AS provider_name, lf.datetime_from AS date, fr.name AS run_name, lf.value AS value
    FROM load_forecasts lf
    INNER JOIN bidding_zones bz ON lf.zone_id = bz.zone_id
    INNER JOIN data_providers dp ON lf.provider_id = dp.provider_id
    INNER JOIN forecast_runs fr ON lf.run_id = fr.run_id
    WHERE bz.name = '{zone}'
    AND dp.name = '{provider}'
    AND date(lf.datetime_from) BETWEEN '{self.date_start}' AND '{self.date_end}'
    """
    df_forecasts = pd.read_sql_query(fquery, self.connection)
    return df_forecasts
In the scripts that I run, I call the Query class, giving it my inputs:
query = Query(date_start, date_end)
And the function
forecast_df = query.forecast(provider='Meteologica')
I run my script on the command line in the classic way:
python myscript.py '2022-11-10' '2022-11-18'
My script shows the error
sqlalchemy.exc.DataError: (psycopg2.errors.InvalidDatetimeFormat) invalid input syntax for type date: "{self.date_start}"
LINE 9: AND date(lf.datetime_from) BETWEEN '{self.date_start...
when I use this syntax, but when I manually input the strings for date_start and date_end, it works.
I cannot find a way to solve the problem with sqlalchemy, so I opened a cursor with psycopg2.
# Returns the datetime, value and provider name and issue date of the forecasts in the load_forecasts table
# The dates range is specified by the user when the class is called
def forecast(self, provider: str, zone: str='Mainland',):
    # Opens a cursor to get the data
    cursor = self.connection.cursor()
    # Query to run
    query = """
    SELECT dp.name, lf.datetime_from, fr.name, lf.value, lf.issue_date
    FROM load_forecasts lf
    INNER JOIN bidding_zones bz ON lf.zone_id = bz.zone_id
    INNER JOIN data_providers dp ON lf.provider_id = dp.provider_id
    INNER JOIN forecast_runs fr ON lf.run_id = fr.run_id
    WHERE bz.name = %s
    AND dp.name = %s
    AND date(lf.datetime_from) BETWEEN %s AND %s
    """
    # Execute the query, bring the data and close the cursor
    cursor.execute(query, (zone, provider, self.date_start, self.date_end))
    self.df_forecasts = cursor.fetchall()
    cursor.close()
    return self.df_forecasts
If anyone finds the answer with sqlalchemy, I would love to see it!
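For the record, a minimal sketch of the same thing through SQLAlchemy (assuming self.connection is a SQLAlchemy connectable) would wrap the query in text() and use named bind parameters instead of f-string placeholders:
from sqlalchemy import text

fquery = text("""
    SELECT dp.name AS provider_name, lf.datetime_from AS date, fr.name AS run_name, lf.value AS value
    FROM load_forecasts lf
    INNER JOIN bidding_zones bz ON lf.zone_id = bz.zone_id
    INNER JOIN data_providers dp ON lf.provider_id = dp.provider_id
    INNER JOIN forecast_runs fr ON lf.run_id = fr.run_id
    WHERE bz.name = :zone
      AND dp.name = :provider
      AND date(lf.datetime_from) BETWEEN :date_start AND :date_end
""")
# pandas passes the dict through to SQLAlchemy, which binds the named parameters
df_forecasts = pd.read_sql_query(
    fquery,
    self.connection,
    params={"zone": zone, "provider": provider,
            "date_start": self.date_start, "date_end": self.date_end},
)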
I am trying to execute a PL/SQL script that I am constructing at run time, but I am getting
cx_Oracle.DatabaseError: ORA-00922: missing or invalid option
It looks like some formatting issue with the script, as it is showing as a STRING, but I am still not sure how to resolve it.
Below is the code that I am trying:
script = '''Set serveroutput on;
DECLARE
V_req pls_integer;
BEGIN
V_req := infomediary_nse.request(
p_inApp_id => 100,
p_inPayload => XMLTYPE(
'<tag>hello</tag>'
)
);
END;
/'''
dbconnection = cx_Oracle.connect(ConnectionString)
str, err = dbconnection.cursor().execute(script)
set serveroutput on
is not a PL/SQL command, but a SQL*Plus one, so you can only use it in SQL*Plus.
Even the final / should be removed, because it also is SQL*Plus specific.
This should work:
script = '''DECLARE
V_req pls_integer;
BEGIN
V_req := infomediary_nse.request(
p_inApp_id => 100,
p_inPayload => XMLTYPE(
'<tag>hello</tag>'
)
);
END;'''
If you used set serveroutput on to get the results of DBMS_OUTPUT calls, you can have a look at the approach below.
For example, this:
import cx_Oracle
conn = cx_Oracle.connect(..., ..., ...)
c = conn.cursor()
vSql = '''begin
    dbms_output.put_line('Hello!');
end;
'''
c.callproc("dbms_output.enable")
c.execute(vSql)
statusVar = c.var(cx_Oracle.NUMBER)
lineVar = c.var(cx_Oracle.STRING)
while True:
    c.callproc("dbms_output.get_line", (lineVar, statusVar))
    if statusVar.getvalue() != 0:
        break
    print(lineVar.getvalue())
conn.close()
gives:
E:\Python>python testOracle.py
Hello!
Looking for some help with a specific error when I write out from a pyodbc connection. How do I fix the error:
('ODBC SQL type -360 is not yet supported. column-index=1 type=-360', 'HY106') error from pyodbc
Here is my code:
import pyodbc
import pandas as pd
import sqlparse
import textwrap  # used by create_query_string below

## Function created to read SQL Query
def create_query_string(sql_full_path):
    with open(sql_full_path, 'r') as f_in:
        lines = f_in.read()
        # remove any common leading whitespace from every line
        query_string = textwrap.dedent("""{}""".format(lines))
        ## remove comments from SQL Code
        query_string = sqlparse.format(query_string, strip_comments=True)
        return query_string

query_string = create_query_string("Bad Code from R.sql")

## initializes the connection string
curs = conn.cursor()
df = pd.read_sql(query_string, conn)
df.to_csv("TestSql.csv", index=None)
We are using the following SQL code in query string:
SELECT loss_yr_qtr_cd,
CASE
WHEN loss_qtr_cd <= 2 THEN loss_yr_num
ELSE loss_yr_num + 1
END AS LOSS_YR_ENDING,
snap_yr_qtr_cd,
CASE
WHEN snap_qtr_cd <= 2 THEN snap_yr_num
ELSE snap_yr_num + 1
END AS CAL_YR_ENDING,
cur_ctstrph_loss_ind,
clm_symb_grp_cd,
adbfdb_pol_form_nm,
risk_st_nm,
wrt_co_nm,
wrt_co_part_cd,
src_of_bus_cd,
rt_zip_dlv_ofc_cd,
cur_rst_rt_terr_cd,
Sum(xtra_cntrc_py_amt) AS XTRA_CNTRC_PY_AMT
FROM (SELECT DT.loss_yr_qtr_cd,
DT.loss_qtr_cd,
DT.loss_yr_num,
SNAP.snap_yr_qtr_cd,
SNAP.snap_qtr_cd,
SNAP.snap_yr_num,
CLM.cur_ctstrph_loss_ind,
CLM.clm_symb_grp_cd,
POL_SLCT.adbfdb_pol_form_nm,
POL_SLCT.adbfdb_pol_form_cd,
CVR.bsic_cvr_ind,
POL_SLCT.priv_pass_ind,
POL_SLCT.risk_st_nm,
POL_SLCT.wrt_co_nm,
POL_SLCT.wrt_co_part_cd,
POL_SLCT.src_of_bus_cd,
TERR.rt_zip_dlv_ofc_cd,
TERR.cur_rst_rt_terr_cd,
LOSS.xtra_cntrc_py_amt
FROM ahshdm1d.vmaloss_day_dt_dim DT,
ahshdm1d.vmasnap_yr_mo_dim SNAP,
ahshdm1d.tmaaclm_dim CLM,
ahshdm1d.tmaapol_slct_dim POL_SLCT,
ahshdm1d.tmaacvr_dim CVR,
ahshdm1d.tmaart_terr_dim TERR,
ahshdm1d.tmaaloss_fct LOSS,
ahshdm1d.tmaaprod_bus_dim BUS
WHERE SNAP.snap_yr_qtr_cd BETWEEN '20083' AND '20182'
AND TRIM(POL_SLCT.adbfdb_lob_cd) = 'A'
AND CVR.bsic_cvr_ind = 'Y'
AND POL_SLCT.priv_pass_ind = 'Y'
AND POL_SLCT.adbfdb_pol_form_cd = 'V'
AND POL_SLCT.src_of_bus_cd NOT IN ( 'ITC', 'INV' )
AND LOSS.xtra_cntrc_py_amt > 0
AND LOSS.loss_day_dt_id = DT.loss_day_dt_dim_id
AND LOSS.cvr_dim_id = CVR.cvr_dim_id
AND LOSS.pol_slct_dim_id = POL_SLCT.pol_slct_dim_id
AND LOSS.rt_terr_dim_id = TERR.rt_terr_dim_id
AND LOSS.prod_bus_dim_id = BUS.prod_bus_dim_id
AND LOSS.clm_dim_id = CLM.clm_dim_id
AND LOSS.snap_yr_mo_dt_id = SNAP.snap_yr_mo_dt_id) AS TABLE1
GROUP BY loss_yr_qtr_cd,
loss_qtr_cd,
loss_yr_num,
snap_yr_qtr_cd,
snap_qtr_cd,
snap_yr_num,
cur_ctstrph_loss_ind,
clm_symb_grp_cd,
adbfdb_pol_form_nm,
risk_st_nm,
wrt_co_nm,
wrt_co_part_cd,
src_of_bus_cd,
rt_zip_dlv_ofc_cd,
cur_rst_rt_terr_cd
FOR FETCH only
Just looking for how to properly write out the data.
Thanks,
Justin
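For what it's worth, ODBC type -360 is DB2's DECFLOAT, which pyodbc has no built-in mapping for. Two workarounds are commonly suggested: CAST the offending column (here most likely the SUM) to DECIMAL or CHAR in the SQL itself, or register an output converter for that type code on the connection. A sketch of the latter, assuming the driver hands the value back as a byte string (check what your driver actually returns):
import decimal

# -360 is DB2's DECFLOAT type code, which pyodbc reports as unsupported (HY106).
# This assumes the driver returns the column as a UTF-8 encoded byte string;
# adjust the decoding if your driver behaves differently.
def handle_decfloat(value):
    if value is None:
        return None
    return decimal.Decimal(value.decode("utf-8"))

conn.add_output_converter(-360, handle_decfloat)
df = pd.read_sql(query_string, conn)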
I have a weird behaviour with Postgres + SQLAlchemy.
I call a function that inserts into a table, but when it is called from SQLAlchemy it rolls back at the end, and when it is called from psql it succeeds.
Logs when called by SQLAlchemy:
Jan 21 13:17:28 intersec.local postgres[3466]: [18-9] STATEMENT: SELECT name, suffix
Jan 21 13:17:28 intersec.local postgres[3466]: [18-10] FROM doc_codes('195536d95bd155b9ea412154b3e920761495681a')
Jan 21 13:17:28 intersec.local postgres[3466]: [19-9] STATEMENT: ROLLBACK
Jan 21 13:17:28 intersec.local postgres[3465]: [13-9] STATEMENT: COMMIT
If using psql:
Jan 21 13:28:47 intersec.local postgres[3561]: [20-9] STATEMENT: SELECT name, suffix FROM doc_codes('195536d95bd155b9ea412154b3e920761495681a');
Note: no transaction stuff at all.
This is my python code:
def getCon(self):
    conStr = "postgresql+psycopg2://%(USER)s:%(PASSWORD)s@%(HOST)s/%(NAME)s"
    config = settings.DATABASES['default']
    #print conStr % config
    con = sq.create_engine(
        conStr % config,
        echo=ECHO
    )
    event.listen(con, 'checkout', self.set_path)
    self.con = con
    self.meta.bind = con
    return con

def getDocPrefixes(self, deviceId):
    f = sq.sql.func.doc_codes(deviceId, type_=types.String)
    columns = [
        sq.Column('name', types.String),
        sq.Column('suffix', types.String)
    ]
    return [dict(x.items()) for x in self.con.execute
        (
            select(columns).
            select_from(f)
        ).fetchall()]

sync = dbSync('malab')
for k in sync.getDocPrefixes('195536d95bd155b9ea412154b3e920761495681a'):
    print k['name'], '=', k['suffix']
What could trigger the ROLLBACK?
P.S.: My DB functions:
CREATE OR REPLACE FUNCTION next_letter (
table_name TEXT,
OUT RETURNS TEXT
)
AS
$$
DECLARE
result TEXT = 'A';
nextLetter TEXT;
num INTEGER;
BEGIN
SELECT INTO num nextval('letters');
nextLetter := chr(num);
result := nextLetter;
WHILE true LOOP
--RAISE NOTICE '%', result;
IF EXISTS(SELECT 1 FROM DocPrefix WHERE Name=result AND TableName=table_name) THEN
SELECT max(SUBSTRING(name FROM '\d+'))
FROM DocPrefix WHERE Name=result AND TableName=table_name
INTO num;
result := nextLetter || (coalesce(num,0) + 1);
ELSE
EXIT;
END IF;
END LOOP;
RETURNS = result;
END;
$$
LANGUAGE 'plpgsql';
-- Returns the unique prefix for the table/device.
CREATE OR REPLACE FUNCTION prefix_fordevice (
table_name TEXT,
device_id TEXT,
OUT RETURNS TEXT
)
AS
$$
DECLARE
result TEXT = NULL;
row RECORD;
BEGIN
IF NOT(EXISTS(SELECT 1 FROM DocPrefix WHERE MachineId=device_id AND TableName=table_name)) THEN
INSERT INTO DocPrefix
(Name, MachineId, TableName)
VALUES
(next_letter(table_name), device_id, table_name);
END IF;
SELECT name FROM DocPrefix WHERE
MachineId=device_id AND TableName=table_name
INTO result;
RETURNS = result;
END;
$$
LANGUAGE 'plpgsql';
-- Return the exclusive prefixes for the device ID
CREATE OR REPLACE FUNCTION doc_codes(device_id TEXT) RETURNS TABLE("name" TEXT, "suffix" TEXT) AS $$
SELECT name, prefix_fordevice(name, device_id) AS suffix FROM doccode;
$$ LANGUAGE SQL;
The antipattern here is that you're confusing a SQLAlchemy Engine with a connection when you do something like this:
con = sq.create_engine(<url>)
result = con.execute(statement)
the Engine is associated with a connection pool as a source of connections. When you call the execute() method on the Engine, it checks out a connection from the pool, runs the statement, and returns the results; when the result set is exhausted, it returns the connection to the pool. At that stage, the pool will either close the connection fully or re-pool it. Returning the connection to the pool means any remaining transactional state must be cleared (note that DBAPI connections are always implicitly in a transaction while they are used), so it emits a ROLLBACK.
Your program should create a single Engine per URL at the module level, and when it needs a connection, it should call engine.connect().
The document Working with Engines and Connections explains all of this.
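A minimal sketch of that pattern, reusing the names from the question (in the older SQLAlchemy idiom matching the code above):
# One Engine per URL, created once at module level.
engine = sq.create_engine(conStr % config, echo=ECHO)

def getDocPrefixes(deviceId):
    f = sq.sql.func.doc_codes(deviceId, type_=types.String)
    columns = [
        sq.Column('name', types.String),
        sq.Column('suffix', types.String)
    ]
    # engine.begin() checks out one connection, runs the block in a
    # transaction, and COMMITs (instead of rolling back) on success.
    with engine.begin() as con:
        rows = con.execute(select(columns).select_from(f)).fetchall()
    return [dict(x.items()) for x in rows]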
I finally found the answer here:
Make SQLAlchemy COMMIT instead of ROLLBACK after a SELECT query
def getDocPrefixes(self, deviceId):
    f = sq.sql.func.doc_codes(deviceId, type_=types.String)
    columns = [
        sq.Column('name', types.String),
        sq.Column('suffix', types.String)
    ]
    with self.con.begin():
        return [dict(x.items()) for x in self.con.execute
            (
                select(columns).
                select_from(f)
            ).fetchall()]
The thing is, the function can insert data and also return a SELECT, so SQLAlchemy thinks this is a plain SELECT when in fact the function also changes data and needs a commit.
Here's a question for you MySQL + Python folks out there.
Why does this sequence of MySQL commands not work when I execute it through Python, but does when I execute it via the mysql CLI?
#!/usr/bin/env python
import oursql as mysql
import sys, traceback as tb
import logging
# some other stuff...
class MySqlAuth(object):
    def __init__(self, host = None, db = None, user = None, pw = None, port = None):
        self.host = 'localhost' if host is None else host
        self.db = 'mysql' if db is None else db
        self.user = 'root' if user is None else user
        self.pw = pw
        self.port = 3306 if port is None else port

    @property
    def cursor(self):
        auth_dict = dict()
        auth_dict['host'] = self.host
        auth_dict['user'] = self.user
        auth_dict['passwd'] = self.pw
        auth_dict['db'] = self.db
        auth_dict['port'] = self.port
        conn = mysql.connect(**auth_dict)
        cur = conn.cursor(mysql.DictCursor)
        return cur

def ExecuteNonQuery(auth, sql):
    try:
        cur = auth.cursor
        log.debug('SQL: ' + sql)
        cur.execute(sql)
        cur.connection.commit()
        return cur.rowcount
    except:
        cur.connection.rollback()
        log.error("".join(tb.format_exception(*sys.exc_info())))
    finally:
        cur.connection.close()

def CreateTable(auth, table_name):
    CREATE_TABLE = """
        CREATE TABLE IF NOT EXISTS %(table)s (
            uid VARCHAR(120) PRIMARY KEY
            , k VARCHAR(1000) NOT NULL
            , v BLOB
            , create_ts TIMESTAMP NOT NULL
            , mod_ts TIMESTAMP NOT NULL
            , UNIQUE(k)
            , INDEX USING BTREE(k)
            , INDEX USING BTREE(mod_ts) );
        """
    ExecuteNonQuery(auth, CREATE_TABLE % { 'table' : table_name })

    CREATE_BEFORE_INSERT_TRIGGER = """
        DELIMITER //
        CREATE TRIGGER %(table)s_before_insert BEFORE INSERT ON %(table)s
        FOR EACH ROW
        BEGIN
            SET NEW.create_ts = NOW();
            SET NEW.mod_ts = NOW();
            SET NEW.uid = UUID();
        END;// DELIMIETER ;
        """
    ExecuteNonQuery(auth, CREATE_BEFORE_INSERT_TRIGGER % { 'table' : table_name })

    CREATE_BEFORE_UPDATE_TRIGGER = """
        DELIMITER //
        CREATE TRIGGER %(table)s_before_update BEFORE UPDATE ON %(table)s
        FOR EACH ROW
        BEGIN
            SET NEW.mod_ts = NOW();
        END;// DELIMIETER ;
        """
    ExecuteNonQuery(auth, CREATE_BEFORE_UPDATE_TRIGGER % { 'table' : table_name })
# some other stuff
The error that I get when I run the python is this:
2012-01-15 11:53:00,138 [4214 MainThread mynosql.py] DEBUG SQL:
DELIMITER //
CREATE TRIGGER nosql_before_insert BEFORE INSERT ON nosql
FOR EACH ROW
BEGIN
SET NEW.create_ts = NOW();
SET NEW.mod_ts = NOW();
SET NEW.uid = UUID();
END;// DELIMIETER ;
2012-01-15 11:53:00,140 [4214 MainThread mynosql.py] ERROR Traceback (most recent call last):
File "./mynosql.py", line 39, in ExecuteNonQuery
cur.execute(sql)
File "cursor.pyx", line 120, in oursql.Cursor.execute (oursqlx/oursql.c:15856)
File "cursor.pyx", line 111, in oursql.execute (oursqlx/oursql.c:15728)
File "statement.pyx", line 157, in oursql._Statement.prepare (oursqlx/oursql.c:7750)
File "statement.pyx", line 127, in oursql._Statement._raise_error (oursqlx/oursql.c:7360)
ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER //\n CREATE TRIGGER nosql_before_insert BEFORE INSERT ON nosql\n F' at line 1", None)
Although the error you are getting seems to be generated by the first DELIMITER // statement, you have a typo at the last mention of DELIMITER - you wrote it as DELIMIETER ; - try changing that and see if it solves your issue.
Update
You have the same DELIMIETER ; typo twice - I believe you are getting the error just after the interpreter finds the first one:
DELIMITER //
CREATE TRIGGER %(table)s_before_insert BEFORE INSERT ON %(table)s
FOR EACH ROW
BEGIN
SET NEW.create_ts = NOW();
SET NEW.mod_ts = NOW();
SET NEW.uid = UUID();
END;// DELIMIETER ; <-- this one is wrong, it should be DELIMITER
You can only pass queries to MySQL one at a time; it's up to the client to ensure that the query text is just one valid statement.
The mysql CLI does this by tokenizing the entered query and looking for statement separators. In the case of a trigger definition this doesn't work, because the definition can contain semicolons (the default statement separator), so you have to tell the CLI to separate statements in another way, using the DELIMITER command.
The MySQLdb (and other) Python APIs require no such statement separation; the programmer is obligated to pass statements one at a time to query.
Try removing the DELIMITER statements altogether from your queries (when passed through the python api).
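For example, the insert trigger from the question might be sent like this (a sketch using the names from the question's code; one statement per execute() call, with no DELIMITER line and no trailing //, since the Python API does not split on semicolons the way the mysql CLI does):
CREATE_BEFORE_INSERT_TRIGGER = """
CREATE TRIGGER %(table)s_before_insert BEFORE INSERT ON %(table)s
FOR EACH ROW
BEGIN
    SET NEW.create_ts = NOW();
    SET NEW.mod_ts = NOW();
    SET NEW.uid = UUID();
END
"""
ExecuteNonQuery(auth, CREATE_BEFORE_INSERT_TRIGGER % { 'table' : table_name })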