I'm using cx_Oracle to update record data in Oracle from Python. It's just a simple update, but it takes forever to run and eventually times out. If I run the same statement directly in Oracle, it works perfectly. Does anyone know why this happens? Thanks!
my code:
con = cx_Oracle.connect()
cur = con.cursor()
stmt = "UPDATE table SET rank = 4 WHERE id like 'SAP_1000141471' and rank = 2"
cur.execute(stmt)
con.commit()
result = cur.fetchall()
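For context (my own note, not part of the question): an UPDATE that hangs from cx_Oracle but runs instantly in a SQL client is very often blocked by an uncommitted transaction in another session that holds a lock on the same row - for example, the SQL client session where the statement was tested. That is an assumption about this setup, not something stated above; a quick way to check for a blocking session from a privileged account:
# Hypothetical diagnostic sketch, assuming access to v$session (monitoring/DBA rights)
# and placeholder credentials. Lists sessions that are currently blocked and by whom.
import cx_Oracle

con = cx_Oracle.connect("system", "password", "localhost/orclpdb")
cur = con.cursor()
cur.execute("""
    SELECT sid, serial#, username, blocking_session
    FROM v$session
    WHERE blocking_session IS NOT NULL
""")
for sid, serial, username, blocker in cur:
    print("session %s,%s (%s) is blocked by session %s" % (sid, serial, username, blocker))
(Also note that cur.fetchall() after an UPDATE will raise an error, since an UPDATE does not return a result set.)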
Related
I have an issue that is related to the connection pool, but I don't understand it.
Below is my code, and this is the behavior:
Starting with an empty table, I do a SELECT query for a non-existing value (no results).
Then I do an INSERT query; it successfully inserts the value.
HOWEVER, after inserting the new value, if I try to do more SELECT statements it only works 2 out of 3 times - it always fails on exactly every 3rd try (with pool size = 3; with pool size = 10 it works exactly 9 out of 10 times).
Finally, if I restart the script with the initial SELECT commented out (but with the value already in the table before the script runs), I get the inserted value and it works every time.
Why does this code seem to 'get stuck returning an empty result for the connection that had no result' until the script is restarted?
(Note that it keeps opening and closing connections from the connection pool because this is taken from a web application where each connect/close is a different web request. Here I have cut the whole 'web' aspect out of it.)
#!/usr/bin/python
import mysql.connector

dbvars = {'host': 'h', 'user': 'u', 'passwd': 'p', 'db': 'd'}

# db has 1 empty table 'test' with one varchar field 'id'
con = mysql.connector.connect(pool_name="mypool", pool_size=3, pool_reset_session=False, **dbvars)
cur = con.cursor()
cur.execute("SELECT id FROM test WHERE id = '123';")
result = cur.fetchall()
cur.close()
con.close()

con = mysql.connector.connect(pool_name="mypool")
cur = con.cursor()
cur.execute("INSERT INTO test VALUES ('123');")
con.commit()
cur.close()
con.close()

for i in range(12):
    con = mysql.connector.connect(pool_name="mypool")
    cur = con.cursor()
    cur.execute("SELECT id FROM test WHERE id = '123';")
    result = cur.fetchall()
    cur.close()
    con.close()
    print result
The output of the above is:
[(u'123',)]
[]
[(u'123',)]
[(u'123',)]
[]
[(u'123',)]
[(u'123',)]
[]
[(u'123',)]
[(u'123',)]
[]
[(u'123',)]
Again, if I don't do the initial SELECT before the INSERT, then all of them return 123 (if it's already in the db). It seems the initial SELECT 'corrupts' one of the connections in the connection pool. Further, if I do 2 SELECTs with empty results before the INSERT, then 2 of the 3 connections are 'corrupt'. Finally, if I do 3 SELECTs before the INSERT, it still works 1 out of 3 times, because the INSERT seems to 'fix' its connection (presumably by having 'results').
Ubuntu 18.04
Python 2.7.17 (released Oct 2019)
mysql-connector-python 8.0.21 (June 2020)
MySql server 5.6.10
It seems to be a rather severe bug in the Python driver for MySQL. Perhaps some configuration incompatibility, but clearly a bug, since no error is shown yet wrong query results are returned.
I filed a bug report with the MySQL team and its status is currently 'Verified'.
https://bugs.mysql.com/bug.php?id=102053
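As a quick cross-check (my own sketch, not part of the bug report): running the final SELECT loop with plain, non-pooled connections should return the row on every iteration; if it does, that points squarely at the pooled-connection reuse path rather than the server.
# Cross-check sketch: the same 12 SELECTs as above, but each on a fresh,
# non-pooled connection. Credentials are the same placeholders as in the question.
import mysql.connector

dbvars = {'host': 'h', 'user': 'u', 'passwd': 'p', 'db': 'd'}

for i in range(12):
    con = mysql.connector.connect(**dbvars)   # no pool_name, so no pooling
    cur = con.cursor()
    cur.execute("SELECT id FROM test WHERE id = '123';")
    print(cur.fetchall())                     # expected: [(u'123',)] every time
    cur.close()
    con.close()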
I am new to this and this is my first question. I hope you guys will help.
If my question format is wrong, feel free to comment on that as well.
The code is pretty simple. I have a DB connection and 2 functions - one for printing and another for choosing how many SQL queries to execute and entering those queries.
The idea is to enter a number (int) of SQL queries - for example, 2 - and then on the following lines the user must enter 2 SQL queries.
After that, the call_table function will print out the current table data.
For example, the user wants to print the table data to the console (the table has 2 columns, [name] and [college], both varchar):
Insert a number of SQL queries you want to execute: 1
Insert SQL statement:
select * from student
('ivan', 'ino')
('nena', 'fer')
('tomislav', 'ino')
('marko', 'fer')
('tomislav', 'ino')
('marko', 'fer')
When I try to insert some values into the same table, nothing happens to the table; the data is not inserted.
The query is 100% correct since I tested it in Workbench. I've also tried creating another table from this program: that query executed normally and the table was created.
I receive no errors.
Code is below:
import pymysql

db = pymysql.connect(host='localhost', user='root', passwd='123456', database='test')
mycursor = db.cursor()

def call_table(data_print):
    for i in data_print:
        print(i)

def sql_inputs(cursor):
    container = []
    no = int(input("Insert a number of SQL queries you want to execute: "))
    for i in range(no):
        container = [input("Insert SQL statement: \n").upper()]
    for y in container:
        cursor.execute(y)

sql_inputs(mycursor)
call_table(mycursor)
What am I doing wrong?
I've tried even more complicated SQL queries, but inserting into the table is not working.
Thank you
Everything is good with the code, you're just missing a commit.
By default autocommit is disabled in pymysql, so insert queries are not persisted until you commit on the connection:
cursor.execute(y)
db.commit()
and when you're done with the queries:
db.close()
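Alternatively (a sketch of my own, not part of the original answer), you can enable autocommit on the connection so that every successful statement is committed immediately and no explicit db.commit() is needed:
# Same credentials as in the question; autocommit=True is a standard pymysql
# connection option that commits each statement as it is executed.
db = pymysql.connect(host='localhost', user='root', passwd='123456',
                     database='test', autocommit=True)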
You should append the queries to the container variable
import pymysql

db = pymysql.connect(host='localhost', user='root', passwd='123456', database='test')
mycursor = db.cursor()

def call_table(data_print):
    for i in data_print:
        print(i)

def sql_inputs(cursor):
    container = []
    no = int(input("Insert a number of SQL queries you want to execute: "))
    for i in range(no):
        container.append(input("Insert SQL statement: \n").upper())
    for y in container:
        cursor.execute(y)

sql_inputs(mycursor)
call_table(mycursor)
At the end of the program, I've added db.commit(), and everything works fine now.
import pymysql

db = pymysql.connect(host='localhost', user='root', passwd='45fa6cb2',
                     database='ivan')
mycursor = db.cursor()

def call_table(data_print):
    for i in data_print:
        print(i)

def sql_inputs(cursor):
    container = []
    no = int(input("Insert a number of SQL queries you want to execute: "))
    for i in range(no):
        container.append(input("Insert SQL statement: \n").upper())
    for y in container:
        cursor.execute(y)

sql_inputs(mycursor)
db.commit()
call_table(mycursor)
So, after coding with pyodbc for a couple of days now, I seem to have run into a roadblock. My SQL UPDATE will not work, even after putting autocommit=True in the connection statement. Nothing changes in the database at all. All my code is provided below. Please help. (I am using the 2016 version of MS Access, the code runs with no errors, and both Python and Access are 32-bit.)
import pyodbc

# Connect to the Microsoft Access database
conn_str = (
    r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
    r'DBQ=C:\Users\User_Name\Desktop\Databse\CPLM.accdb'
)
cnxn = pyodbc.connect(conn_str, autocommit=True)
crsr = cnxn.cursor()
crsr2 = cnxn.cursor()

# SQL code used for the for statement
SQL = "SELECT NameProject, Type, Date, Amount, ID FROM InvoiceData WHERE Type=? OR Type=? OR Type IS NULL AND ID > ?"

# Defining variables
date = ""
projectNumber = 12.04
numberDate = []

# Main code: for each row in the SQL query, update the table
for row in crsr.execute(SQL, "Invoice", "Deposit", "1"):
    print(projectNumber)
    if row.NameProject is not None:
        crsr2.execute("UPDATE Cimt SET LastInvoice='%s' WHERE Num='%s'" % (date, projectNumber))
        cnxn.commit()
        # Just used to find where to input certain data.
        # I also know all the code in this if statement completes due to outside testing
        projectNumber = row.NameProject[:5]
        numberDate.append([projectNumber, date])
    else:
        date = row.Date

print(numberDate)
crsr.commit()
cnxn.commit()
cnxn.close()
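As a side note (my own sketch, not part of the question): the UPDATE is built with Python string formatting, which wraps both values in quotes. pyodbc lets you pass them as parameters instead, which avoids any quoting mismatch against the Num and LastInvoice columns named above:
# Parameterized form of the same UPDATE; Cimt, LastInvoice and Num are the
# table/column names from the question, date and projectNumber the same variables.
crsr2.execute("UPDATE Cimt SET LastInvoice=? WHERE Num=?", date, projectNumber)
cnxn.commit()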
I want to insert a record into mytable (in a DB2 database) and get back the id generated by that insert. I'm trying to do this with Python 2.7. Here is what I did:
import sqlalchemy
from sqlalchemy import *
import ibm_db_sa

db2 = sqlalchemy.create_engine('ibm_db_sa://user:pswd@localhost:50001/mydatabase')
sql = "select REPORT_ID from FINAL TABLE(insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'));"
result = db2.execute(sql)
for item in result:
    id = item[0]
    print id
When I execute the code above, it gives me this output:
10 // or an increasing number
Now when I check the database, nothing has been inserted! I tried running the same SQL statement on the command line and it worked just fine. Any clue why I can't insert it with Python using SQLAlchemy?
Did you try a commit? @Lennart is right. It might solve your problem.
Your code does not commit the changes you have made, and thus they are rolled back.
If your database is InnoDB, it is transactional and thus needs a commit.
According to this, you also have to connect to your engine, so in your instance it would look like:
db2 = sqlalchemy.create_engine('ibm_db_sa://user:pswd@localhost:50001/mydatabase')
conn = db2.connect()
trans = conn.begin()
try:
    sql = "select REPORT_ID from FINAL TABLE(insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'));"
    result = conn.execute(sql)
    for item in result:
        id = item[0]
        print id
    trans.commit()
except:
    trans.rollback()
    raise
I do hope this helps.
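For completeness (a sketch of my own, assuming the 1.x-style SQLAlchemy usage shown above): Engine.begin() yields a connection inside a transaction that commits automatically when the block exits without an exception, so the commit cannot be forgotten:
# Same statement as above, run in an auto-committing transaction block.
with db2.begin() as conn:
    result = conn.execute(sql)
    for item in result:
        print item[0]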
I am currently connecting to a Sybase 15.7 server using sybpydb. It seems to connect fine:
import sys
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib')
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/lib')
import sybpydb
conn = sybpydb.connect(user='usr', password='pass', servername='serv')
This is working fine; changing any of my connection details results in a connection error.
I then select a database:
curr = conn.cursor()
curr.execute('use db_1')
However, now when I try to run queries, it always returns None:
print curr.execute('select * from table_1')
I have tried running the use and select queries in the same execute call, I have tried including go commands after each, and I have tried using curr.connection.commit() after each, all with no success. I have confirmed, using DBArtisan and isql, that the same queries I am using return entries.
Why am I not getting results from my queries in python?
EDIT:
Just some additional info. In order to get the sybpydb import to work, I had to change two environment variables. I added the lib paths (the same ones that I added to sys.path) to $LD_LIBRARY_PATH, i.e.:
setenv LD_LIBRARY_PATH "$LD_LIBRARY_PATH":dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib:/dba/sybase/ase/15.7/OCS-15_0/lib
and I had to change the SYBASE path from 12.5 to 15.7. All this was done in csh.
If I print conn.error() after every curr.execute(), I get:
("Server message: number(5701) severity(10) state(2) line(0)\n\tChanged database context to 'master'.\n\n", 5701)
I completely understand why you might be confused by the documentation. It doesn't seem to be on par with other DB extensions (e.g. psycopg2).
When connecting with most standard DB extensions you can specify a database. Then, when you want to get the data back from a SELECT query, you either use fetch calls (an OK way to do it) or the cursor iterator (the more Pythonic way to do it).
import sybpydb as sybase

conn = sybase.connect(user='usr', password='pass', servername='serv')
cur = conn.cursor()
cur.execute("use db_1")
cur.execute("SELECT * FROM table_1")
print "Query Returned %d row(s)" % cur.rowcount
for row in cur:
    print row

# Alternate, less Pythonic way to read query results
# for row in cur.fetchall():
#     print row
Give that a try and let us know if it works.
Python 3.x working solution:
import sybpydb

try:
    conn = sybpydb.connect(dsn="Servername=serv;Username=usr;Password=pass")
    cur = conn.cursor()
    cur.execute('select * from db_1..table_1')

    # table header
    header = tuple(col[0] for col in cur.description)
    print('\t'.join(header))
    print('-' * 60)

    res = cur.fetchall()
    for row in res:
        line = '\t'.join(str(col) for col in row)
        print(line)

    cur.close()
    conn.close()
except sybpydb.Error:
    for err in cur.connection.messages:
        print(f'Error {err[0]}, Value {err[1]}')