select from insert into not working with sqlalchemy - python

I want to insert a record into mytable (in a DB2 database) and get the id generated by that insert. I'm trying to do that with Python 2.7. Here is what I did:
import sqlalchemy
from sqlalchemy import *
import ibm_db_sa

db2 = sqlalchemy.create_engine('ibm_db_sa://user:pswd@localhost:50001/mydatabase')
sql = "select REPORT_ID from FINAL TABLE(insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'));"
result = db2.execute(sql)
for item in result:
    id = item[0]
    print id
When I execute the code above, it gives me this output:
10 (or an increasing number)
Now when I check in the database, nothing has been inserted! I tried running the same SQL statement on the command line and it worked just fine. Any clue why I can't insert it with Python using SQLAlchemy?

Did you try a commit? @Lennart is right, it might solve your problem.
Your code does not commit the changes you have made, so they are rolled back.
If your database is InnoDB, it is transactional and thus needs a commit.
According to this, you also have to connect to your engine, so in your case it would look like:
db2 = sqlalchemy.create_engine('ibm_db_sa://user:pswd@localhost:50001/mydatabase')
conn = db2.connect()
trans = conn.begin()
try:
    sql = "select REPORT_ID from FINAL TABLE(insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'));"
    result = conn.execute(sql)
    for item in result:
        id = item[0]
        print id
    trans.commit()
except:
    trans.rollback()
    raise
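Alternatively, here is a minimal sketch assuming SQLAlchemy 0.7.6 or later, where engine.begin() is available: the context manager commits on success and rolls back if an exception is raised, so you don't have to manage the transaction by hand.
# Sketch only: engine.begin() yields a connection inside a transaction that is
# committed automatically when the block exits without an error.
sql = "select REPORT_ID from FINAL TABLE(insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'));"
with db2.begin() as conn:
    result = conn.execute(sql)
    for item in result:
        print item[0]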
I do hope this helps.

"Insert into table values" does not send data

I am new to this and this is my first question. I hope you guys will help.
If my question format is wrong, feel free to comment on that as well.
The code is pretty simple. I have a DB connection and 2 functions: one for printing and another for choosing how many SQL queries I want to execute and for reading in those queries.
The idea is to enter a number (INT) of SQL queries, for example 2, and then on the following lines the user must enter those 2 SQL queries.
After that, the call_table function will print out the current table data.
For example, the user wants to print the table data to the console (the table has 2 columns, [name][college], both varchar):
Insert a number of SQL queries you want to execute: 1
Insert SQL statement:
select * from student
('ivan', 'ino')
('nena', 'fer')
('tomislav', 'ino')
('marko', 'fer')
('tomislav', 'ino')
('marko', 'fer')
When I try to insert some values into the same table, nothing happens; the data is not entered.
The query is 100% correct since I tested it in Workbench. I've also tried creating another table from this program, and that query executed normally and the table was created.
I receive no errors.
Code is below:
import pymysql

db = pymysql.connect(host='localhost', user='root', passwd='123456', database='test')
mycursor = db.cursor()

def call_table(data_print):
    for i in data_print:
        print(i)

def sql_inputs(cursor):
    container = []
    no = int(input("Insert a number of SQL queries you want to execute: "))
    for i in range(no):
        container = [input("Insert SQL statement: \n").upper()]
    for y in container:
        cursor.execute(y)

sql_inputs(mycursor)
call_table(mycursor)
What am I doing wrong?
I tried even more complicated SQL queries, but inserting into the table is not working.
Thank you
Everything is good with the code, you're just missing a commit on the connection.
By default, autocommit is disabled in pymysql, so INSERT queries are not persisted until the connection commits them.
cursor.execute(y)
cursor.connection.commit()  # commit via the cursor's connection
and once you're done with the queries:
db.close()
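As an alternative, pymysql's connect() also accepts an autocommit flag. A minimal sketch, assuming the same test database and student table from the question (the inserted row is just an example value):
# Sketch only: with autocommit=True every execute() is committed immediately,
# so no explicit commit is needed for the INSERTs.
db = pymysql.connect(host='localhost', user='root', passwd='123456',
                     database='test', autocommit=True)
mycursor = db.cursor()
mycursor.execute("insert into student values ('ana', 'fer')")  # example row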
You should append the queries to the container variable
import pymysql

db = pymysql.connect(host='localhost', user='root', passwd='123456', database='test')
mycursor = db.cursor()

def call_table(data_print):
    for i in data_print:
        print(i)

def sql_inputs(cursor):
    container = []
    no = int(input("Insert a number of SQL queries you want to execute: "))
    for i in range(no):
        container.append(input("Insert SQL statement: \n").upper())
    for y in container:
        cursor.execute(y)

sql_inputs(mycursor)
call_table(mycursor)
At the end of the program, I've added db.commit(), and everything works fine now.
import pymysql

db = pymysql.connect(host='localhost', user='root', passwd='45fa6cb2',
                     database='ivan')
mycursor = db.cursor()

def call_table(data_print):
    for i in data_print:
        print(i)

def sql_inputs(cursor):
    container = []
    no = int(input("Insert a number of SQL queries you want to execute: "))
    for i in range(no):
        container.append(input("Insert SQL statement: \n").upper())
    for y in container:
        cursor.execute(y)

sql_inputs(mycursor)
db.commit()
call_table(mycursor)

update row in oracle from python timeout/take forever

I'm using cx_Oracle to update record data in Oracle from Python. It's just a simple update, but it takes forever to run and times out in the end. If I run the same statement directly in Oracle, it works perfectly. Does anyone know why this happens? Thanks!
my code:
con = cx_Oracle.connect()
cur = con.cursor()
stmt = "UPDATE table SET rank = 4 WHERE id like 'SAP_1000141471' and rank = 2"
cur.execute(stmt)
con.commit()
result =cur.fetchall()
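Two things are worth checking here. An UPDATE that hangs from Python but runs instantly in a SQL client is often blocked by an uncommitted transaction holding a lock on the same rows in that client session. Also, fetchall() only works for statements that return rows, so calling it after an UPDATE raises an error, and the underscores in LIKE 'SAP_1000141471' are single-character wildcards. A minimal sketch with a bind variable and a rowcount check instead (an illustration, not a confirmed fix):
# Sketch only: bind the id instead of embedding it, check rowcount after the
# UPDATE (there is no result set to fetch), and commit once it succeeds.
stmt = "UPDATE table SET rank = 4 WHERE id = :id AND rank = 2"
cur.execute(stmt, id='SAP_1000141471')
print(cur.rowcount)  # number of rows the UPDATE touched
con.commit()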

"No results. Previous SQL was not a query" when trying to query DeltaDNA with Python

I'm currently trying to query a DeltaDNA database. Their Direct SQL Access guide states that any PostgreSQL ODBC-compliant tool should be able to connect without issue. Using the guide, I set up an ODBC data source in Windows.
I have tried adding SET NOCOUNT ON, changing various formats for the connection string, and changing the table name to (account).(system).(tablename), all to no avail. The simple query works in Excel, and I have cross-referenced how Excel formats everything as well, so it is all the more strange that I get the "not a query" problem.
import pyodbc
conn_str = 'DSN=name'
query1 = 'select eventName from table_name limit 5'
conn = pyodbc.connect(conn_str)
conn.setdecoding(pyodbc.SQL_CHAR,encoding='utf-8')
query1_cursor = conn.cursor().execute(query1)
row = query1_cursor.fetchone()
print(row)
Result is ProgrammingError: No results. Previous SQL was not a query.
Try it like this:
import pyodbc
conn_str = 'DSN=name'
query1 = 'select eventName from table_name limit 5'
conn = pyodbc.connect(conn_str)
conn.setdecoding(pyodbc.SQL_CHAR,encoding='utf-8')
query1_cursor = conn.cursor()
query1_cursor.execute(query1)
row = query1_cursor.fetchone()
print(row)
You can't declare the cursor and execute the query on the same line; otherwise your query1_cursor variable may end up pointing to a cursor object that hasn't executed any query.
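Unrelated to the error itself, but worth mentioning: for PostgreSQL-flavoured ODBC sources, pyodbc's own notes suggest configuring the decoding for both SQL_CHAR and SQL_WCHAR plus the outgoing encoding, not SQL_CHAR alone. A minimal sketch (an assumption about the DeltaDNA driver's needs, not something stated in the question):
# Sketch only: typical text setup for a PostgreSQL-style ODBC source.
conn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
conn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
conn.setencoding(encoding='utf-8')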

insert into python mysql not updating in DB

I tried inserting values into the DB through Python. I do not get any error, but I do not see the data in the DB either. Please advise.
#!/usr/bin/python
import MySQLdb

val = MySQLdb.connect(host='localhost', user='root', passwd='root123',
                      db='expenses')

def access_db(val):
    access = val.cursor()
    sql = """Insert into monthly values (2,'Food',1000)"""
    access.execute(sql)
    val.commit()
    val.close()
Output from DB after the script execution:
MariaDB[expenses]> select * from monthly;
SL_no  Type  Amount
1      Fuel  500
I do not find the second entry in the DB.
I don't think you are calling the access_db() function anywhere.
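For completeness, a minimal sketch of the missing call (keeping the rest of the script as posted):
# Sketch only: actually invoke the function so the INSERT and the commit run.
access_db(val)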

Sybase sybpydb queries not returning anything

I am currently connecting to a Sybase 15.7 server using sybpydb. It seems to connect fine:
import sys
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib')
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/lib')
import sybpydb
conn = sybpydb.connect(user='usr', password='pass', servername='serv')
This works fine; changing any of my connection details results in a connection error.
I then select a database:
curr = conn.cursor()
curr.execute('use db_1')
However, when I now try to run queries, execute() always returns None:
print curr.execute('select * from table_1')
I have tried running the use and select queries in the same execute(), including go commands after each, and calling curr.connection.commit() after each, all with no success. I have confirmed, using DBArtisan and isql, that the same queries return rows.
Why am I not getting results from my queries in python?
EDIT:
Just some additional info. In order to get the sybpydb import to work, I had to change two environment variables. I added the lib paths (the same ones that I added to sys.path) to $LD_LIBRARY_PATH, i.e.:
setenv LD_LIBRARY_PATH "$LD_LIBRARY_PATH":dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib:/dba/sybase/ase/15.7/OCS-15_0/lib
and I had to change the SYBASE path from 12.5 to 15.7. All this was done in csh.
If I print conn.error(), after every curr.execute(), I get:
("Server message: number(5701) severity(10) state(2) line(0)\n\tChanged database context to 'master'.\n\n", 5701)
I completely understand where you might be confused by the documentation. It doesn't seem to be on par with other DB extensions (e.g. psycopg2).
When connecting with most standard DB extensions you can specify a database. Then, when you want to get the data back from a SELECT query, you either use the fetch methods (an OK way to do it) or iterate over the cursor (the more Pythonic way to do it).
import sybpydb as sybase

conn = sybase.connect(user='usr', password='pass', servername='serv')
cur = conn.cursor()
cur.execute("use db_1")
cur.execute("SELECT * FROM table_1")
print "Query Returned %d row(s)" % cur.rowcount
for row in cur:
    print row

# Alternate less-pythonic way to read query results
# for row in cur.fetchall():
#     print row
Give that a try and let us know if it works.
Python 3.x working solution:
import sybpydb

try:
    conn = sybpydb.connect(dsn="Servername=serv;Username=usr;Password=pass")
    cur = conn.cursor()
    cur.execute('select * from db_1..table_1')
    # table header
    header = tuple(col[0] for col in cur.description)
    print('\t'.join(header))
    print('-' * 60)
    res = cur.fetchall()
    for row in res:
        line = '\t'.join(str(col) for col in row)
        print(line)
    cur.close()
    conn.close()
except sybpydb.Error:
    for err in cur.connection.messages:
        print(f'Error {err[0]}, Value {err[1]}')
