I am having some trouble getting pyodbc to work with a stored procedure in Oracle. Below are the code and the output.
import pyodbc

db = pyodbc.connect('DSN=TEST;UID=cantsay;PWD=cantsay')
print('-' * 20)
try:
    c = db.cursor()
    rs = c.execute("select * from v$version where banner like 'Oracle%'")
    for txt in c.fetchall():
        print('%s' % (txt[0]))
    test = ""
    row = c.execute("call DKB_test.TESTPROC('7894','789465','789465')").fetchall()
finally:
    db.close()
OUTPUT

C:\Documents and Settings\dex\Desktop>orctest.py
--------------------
Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
Traceback (most recent call last):
  File "C:\Documents and Settings\dex\Desktop\orctest.py", line 31, in <module>
    row = c.execute("{call DKB_test.TESTPROC(12354,78946,1234)}").fetchall()
pyodbc.Error: ('HY000', "[HY000] [Oracle][ODBC][Ora]ORA-06550: line 1, column 7:\nPLS-00221: 'TESTPROC' is not a procedure or is undefined\nORA-06550: line 1, column 7:\nPL/SQL: Statement ignored\n (6550) (SQLExecDirectW)")
I can see this procedure, and calling it from C# works, but this project requires Python for now.
I did some Google searches and nothing came up that helps.
Anything will be greatly appreciated.
Not 100% sure, but is the procedure name Get_SC_From_Comp_Ven_Job or GET_SC_FROM_COMP_VEN_JOB?
Check that the schema (user) you are connecting as is the correct one.
Also check the case-sensitivity of the name: if you create a procedure as Get_SC_From_Comp_Ven_Job, Oracle actually stores it as GET_SC_FROM_COMP_VEN_JOB, but if you create it as "Get_SC_From_Comp_Ven_Job" (quoted), then it is stored case-sensitively and must be referenced with exactly that quoting.
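As a quick check (this is my own sketch, not part of the original answer; the names come from the question and may need adjusting), you can ask the data dictionary how the procedure is actually registered, using the same pyodbc connection:

# Sketch: list matching entries in the data dictionary. DKB_test may be a
# package (OBJECT_NAME) or a schema (OWNER), so match on both; the names
# here are assumptions taken from the question.
c = db.cursor()
c.execute("""
    select owner, object_name, procedure_name
    from all_procedures
    where procedure_name = 'TESTPROC'
       or object_name = 'DKB_TEST'
""")
for row in c.fetchall():
    print(row)

If the procedure only shows up under another owner or inside a package, qualify the call accordingly (for example "{call OWNER.DKB_TEST.TESTPROC(?, ?, ?)}") and pass the arguments as bind parameters.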
Related
I have been working hard all day attempting to get a boolean value from a PL/SQL function using cx_Oracle. I've seen posts talking about using some other data type like char or integer to store the return value, but when I attempt to use such solutions, I get an incorrect data type error. First, let me show the code.
def lives_on_campus(self):
    cursor = conn.cursor()
    ret = cursor.callfunc('students_api.lives_on_campus', bool, [self.pidm])
    return ret
If I use the 11.2.0.4 database client, I get the following error.
File "student-extracts.py", line 134, in <module>
if student.lives_on_campus():
File "student-extracts.py", line 58, in lives_on_campus
ret = cursor.callfunc('students_api.lives_on_campus', bool, [self.pidm])
cx_Oracle.DatabaseError: DPI-1050: Oracle Client library is at version 11.2 but version 12.1 or higher is needed
If I use the 12.1.0.2 database client or later, I get this error.
Traceback (most recent call last):
  File "student-extracts.py", line 134, in <module>
    if student.lives_on_campus():
  File "student-extracts.py", line 58, in lives_on_campus
    ret = cursor.callfunc('students_api.lives_on_campus', bool, [self.pidm])
cx_Oracle.DatabaseError: ORA-03115: unsupported network datatype or representation
Basically, it errors out no matter which version of the SQL client I use. Now, I know the above code will work if the database version is 12c R2. Unfortunately, we only have that version in our TEST environment, and PROD uses only the 11g database. Is there any way I can make that function work with an 11g database? There must be a workaround.
~ Bob
Try a wrapper anonymous block like:
with connection.cursor() as cursor:
    outVal = cursor.var(int)
    sql = """
        begin
            :outVal := sys.diutil.bool_to_int(students_api.lives_on_campus(:pidm));
        end;
    """
    cursor.execute(sql, outVal=outVal, pidm='123456')
    print(outVal.getvalue())
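Folded back into the method from the question, the wrapper might look like this (a sketch on my part; conn and self.pidm are the same objects used above):

def lives_on_campus(self):
    # Wrap the BOOLEAN-returning function in an anonymous block and convert the
    # result to an integer on the database side, which an 11g client can handle.
    with conn.cursor() as cursor:
        out_val = cursor.var(int)
        cursor.execute("""
            begin
                :out_val := sys.diutil.bool_to_int(students_api.lives_on_campus(:pidm));
            end;
        """, out_val=out_val, pidm=self.pidm)
        return bool(out_val.getvalue())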
Using Python 3.7, I execute a multi-statement query against a MySQL database with get_warnings enabled:
import mysql.connector

cnx = mysql.connector.connect(host='xxx',
                              user='xxx',
                              password='xxx',
                              database='xxx',
                              use_pure=False,
                              get_warnings=True)
# Test 1, works:
cur = cnx.cursor()
cur.execute('SELECT "a"+1')
for row in cur:
    print(row)
print(cur.fetchwarnings())
cur.close()

# Test 2, InterfaceError:
cur = cnx.cursor()
for rs in cur.execute('SELECT "a"+1; SELECT 2', multi=True):
    for row in rs:
        print(row)
    print(rs.fetchwarnings())
The first test executes a single statement, iterates over the cursor, fetches data, and finally prints warnings. Output as expected:
(1.0,)
[('Warning', 1292, "Truncated incorrect DOUBLE value: 'a'")]
The second test (you can remove the first test altogether) will execute print(row) once, and then an exception happens. Output:
Traceback (most recent call last):
  File "C:\Program Files\Python37\lib\site-packages\mysql\connector\connection_cext.py", line 472, in cmd_query
    raw_as_string=raw_as_string)
_mysql_connector.MySQLInterfaceError: Commands out of sync; you can't run this command now

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Python37\lib\site-packages\mysql\connector\cursor_cext.py", line 138, in _fetch_warnings
    _ = self._cnx.cmd_query("SHOW WARNINGS")
  File "C:\Program Files\Python37\lib\site-packages\mysql\connector\connection_cext.py", line 475, in cmd_query
    sqlstate=exc.sqlstate)
mysql.connector.errors.DatabaseError: 2014 (HY000): Commands out of sync; you can't run this command now

During handling of the above exception, another exception occurred:

....etc....
Did anyone encounter the same problem? How did you solve it? What am I doing wrong? Could this be a bug in the connector?
Other things I've tried:
If you set get_warnings to False, no error happens and fetchwarnings() returns None
If you remove the problem from the SQL code, no error happens and fetchwarnings() returns None
use_pure can be True or False, the only difference is a slightly different traceback
Using fetchall() instead of for row in rs gives the same result
Many other variations give the same error.
System:
Connector version is mysql-connector-python-8.0.17 but 8.0.16 has the same issue.
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] on win32
MySQL 5.7
The "Commands out of sync" is because MySQL client interface calls are performed in a wrong order. This is not a bug in the connector. This is expected behavior.
Executing that first SELECT returns a MySQL resultset.
Before the client issues another statement that returns a MySQL resultset, we have to do something with the resultset that is already returned. That is, there needs to be calls to either mysql_use_result and mysql_free_result, or a call to mysql_store_result. Once the client does that, then the client can execute another SQL statement that returns a result.
(Note that the execution of the MySQL SHOW WARNINGS statement returns a MySQL resultset.)
Again, this is expected behavior, as documented here:
https://dev.mysql.com/doc/refman/8.0/en/commands-out-of-sync.html
The references to mysql_free_result, mysql_store_result and mysql_use_result aren't specific to a Python interface; these reference the underlying library routines in the MySQL client code. e.g. https://dev.mysql.com/doc/refman/8.0/en/mysql-use-result.html
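To make that ordering rule concrete in Python terms, here is a small sketch of my own (reusing the cnx connection from the question): while a multi=True execute still has unread result sets pending, any other statement sent on the same connection, which is exactly what fetchwarnings() does internally via SHOW WARNINGS, triggers the 2014 error.

# Sketch: reproduce "Commands out of sync" deliberately.
cur = cnx.cursor()
results = cur.execute('SELECT 1; SELECT 2', multi=True)
first = next(results)
first.fetchall()                  # the first result set is fully consumed...
# ...but the second statement's result set is still pending on the connection,
# so issuing another statement now fails:
cnx.cmd_query('SHOW WARNINGS')    # raises 2014: Commands out of sync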
FOLLOWUP
I suspect the author of the MySQL Python connector didn't anticipate this use case, or if it was anticipated, the observed behavior was judged to be correct.
As far as avoiding the problem goes, I would avoid using multi=True and do a separate execute for each SQL statement. Following the same pattern as in Test 1, we can add an outer loop to iterate over the SQL statements:
# Test 1.2
sqls = ['SELECT "a"+1', 'SELECT 2']
for sql in sqls:
    cur = cnx.cursor()
    cur.execute(sql)
    for row in cur:
        print(row)
    print(cur.fetchwarnings())
    cur.close()
Another option would be to avoid the call to fetchwarnings altogether. That call is what causes the SHOW WARNINGS statement to be executed (and only after the connector first verifies that the warning count is greater than zero). We can issue a SHOW WARNINGS statement separately and loop through its results as if it were the return from a SELECT:
# Test 1.3
cur = cnx.cursor()
for rs in cur.execute('SELECT "a"+1; SHOW WARNINGS; SELECT 2; SHOW WARNINGS', multi=True):
    for row in rs:
        print(row)
cur.close()
I am trying to COPY a CSV file from a folder into a Postgres table using Python and psycopg2, and I get the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
psycopg2.ProgrammingError: must be superuser to COPY to or from a file
HINT: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
I also tried to run it through the python environment as:
constr = "dbname='db_name' user='user' host='localhost' password='pass'"
conn = psycopg2.connect(constr)
cur = conn.cursor()
sqlstr = "COPY test_2 FROM '/tmp/tmpJopiUG/downloaded_xls.csv' DELIMITER ',' CSV;"
cur.execute(sqlstr)
I still get the above error. I tried the \copy command, but it works only in psql. What is the alternative that would let me do this from my Python script?
EDITED
After having a look at the link provided by @Ilja Everilä, I tried this:
cur.copy_from('/tmp/tmpJopiUG/downloaded_xls.csv', 'test_copy')
I get an error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: argument 1 must have both .read() and .readline() methods
How do I give these methods?
Try using cursor.copy_expert():
constr = "dbname='db_name' user='user' host='localhost' password='pass'"
conn = psycopg2.connect(constr)
cur = conn.cursor()
sqlstr = "COPY test_2 FROM STDIN DELIMITER ',' CSV"
with open('/tmp/tmpJopiUG/downloaded_xls.csv') as f:
cur.copy_expert(sqlstr, f)
conn.commit()
You have to open the file in Python and pass it to psycopg2, which then forwards it to Postgres's stdin. Since you're using the CSV option of COPY, you have to use the expert version, in which you pass the COPY statement yourself.
You can also use copy_from. See the code below:

with open('/tmp/tmpJopiUG/downloaded_xls.csv') as f:
    cur.copy_from(f, 'test_2', sep=',')
conn.commit()
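One caveat worth adding (my note, not the answerer's): copy_from drives COPY in the plain text format, so it does not handle quoted fields or a header row. If the exported CSV might contain either, copy_expert with explicit CSV options is safer, for example (assuming the file has a header line):

with open('/tmp/tmpJopiUG/downloaded_xls.csv') as f:
    # FORMAT csv handles quoting; HEADER skips the first line (assumed present)
    cur.copy_expert("COPY test_2 FROM STDIN WITH (FORMAT csv, HEADER true)", f)
conn.commit()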
I am getting the below error while trying to import a ^-delimited file into a DB2 database using Python 2.4.3.
Error:
Traceback (most recent call last):
  File "C:\Python25\Usefulscripts\order.py", line 89, in <module>
    load_order_stack()
  File "C:\Python25\Usefulscripts\order.py", line 75, in load_order_stack
    conn2.execute(importTmp)
ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0104N An unexpected token "orders_extract" was found following "import from ".
Code:
import pyodbc

def load_order_stack():
    try:
        conn2 = pyodbc.connect('DSN=db2Database;UID=ueserid;PWD=password')
        importTmp = ("import from orders_extract of del modified by coldel0x5E"
                     "insert_update into test.ORDERS_Table (ORDER_ID,item,price);")
        conn2.execute(importTmp)
        conn2.commit()
IMPORT is not an SQL statement. It is a DB2 Command Line Processor (CLP) command and as such can only be run by the said CLP.
There is an SQL interface to some CLP commands via calls to the ADMIN_CMD() stored procedure, please check the manual: IMPORT using ADMIN_CMD
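A sketch of what that could look like from pyodbc (my own illustration; the server-side path and column list are assumptions, and the input file has to be readable by the DB2 server, not by the client):

import pyodbc

conn2 = pyodbc.connect('DSN=db2Database;UID=ueserid;PWD=password')
cursor = conn2.cursor()
# ADMIN_CMD runs the CLP-style IMPORT on the server side.
import_cmd = ("IMPORT FROM /path/on/server/orders_extract OF DEL "
              "MODIFIED BY COLDEL0x5E "
              "INSERT_UPDATE INTO test.ORDERS_Table (ORDER_ID, item, price)")
cursor.execute("CALL SYSPROC.ADMIN_CMD(?)", import_cmd)
conn2.commit()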
You also have the option of reading the file line by line and inserting into your database. This will definitely be slower than any native import operation. Assuming your delimited file is named input.txt and is structured like this:
ORDER_ID^item^price
1^'bat'^50.00
2^'ball'^25.00
Code:
import csv
import pyodbc

connection = pyodbc.connect('DSN=db2Database;UID=ueserid;PWD=password')
cursor = connection.cursor()

with open('input.txt', 'rb') as f:
    rows = csv.reader(f, delimiter='^')
    # get column names from the header in the first line
    columns = ','.join(next(rows))
    for row in rows:
        # build sql with placeholders for the insert
        placeholders = ','.join('?' * len(row))
        sql = 'insert into test.ORDERS_Table ({}) values ({})'.format(columns, placeholders)
        # execute parameterized database insert
        cursor.execute(sql, row)
cursor.commit()
Play around with commit() placement; you probably want to commit in batches to improve performance.
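For example (a sketch under the same assumptions as the code above, with an arbitrary batch size), committing every 1000 rows rather than only once after the whole file:

BATCH_SIZE = 1000
for i, row in enumerate(rows, 1):
    placeholders = ','.join('?' * len(row))
    sql = 'insert into test.ORDERS_Table ({}) values ({})'.format(columns, placeholders)
    cursor.execute(sql, row)
    if i % BATCH_SIZE == 0:
        connection.commit()
connection.commit()  # commit whatever is left in the final partial batch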
I am trying to share a psycopg2 connection between multiple threads. As mentioned in the docs, I am doing that by creating new cursor objects from the shared connection whenever I use it in a new thread.
import thread

def delete(conn):
    while True:
        conn.commit()

def test(conn):
    cur = conn.cursor()
    thread.start_new_thread(delete, (conn,))
    i = 1
    while True:
        cur.execute("INSERT INTO mas(taru,s) values (2,%s)", (i,))
        print i
        i = i + 1
        conn.commit()
After running, I get output like,
1
2
...
98
99
Traceback (most recent call last):
  File "postgres_test_send.py", line 44, in <module>
    cur.execute("INSERT INTO mas(taru,s) values (2,%s)",(i,))
psycopg2.InternalError: SET TRANSACTION ISOLATION LEVEL must be called before any query
What's going on here?
This bug is not present in the most recent psycopg2 versions: it was probably fixed in 2.4.2.
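If upgrading psycopg2 is not an option, a common way to sidestep the problem (my own sketch, not from the answer; the table comes from the question and the DSN is an assumption) is to give each thread its own connection instead of sharing one:

import threading
import psycopg2

DSN = "dbname=test user=postgres"  # assumed connection parameters

def writer(n_rows):
    # Each thread opens, uses and commits on its own connection.
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    for i in range(n_rows):
        cur.execute("INSERT INTO mas(taru, s) VALUES (2, %s)", (i,))
        conn.commit()
    conn.close()

threads = [threading.Thread(target=writer, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()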