Table Insertion Failing in a Function Call in SQLAlchemy 1.4 - python

I'm using SQLAlchemy 1.4 to execute the commands below. When I run the table insertion directly, it works fine as it is.
# Execute table insertion
with engine.connect() as conn:
    insert_stmt = table.insert().values(**data)
    conn.execute(insert_stmt)
    conn.commit()
However, when I run the same commands inside a function (load, in this case), I get the error "This method is not implemented for SQLAlchemy 2.0".
# Execute table insertion inside the load function
def load(conn, table, data):
    insert_stmt = table.insert().values(**data)
    conn.execute(insert_stmt)
    conn.commit()

with engine.connect() as conn:
    load(engine, table, data)
Why is this happening?
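For what it's worth: in the second snippet the call is load(engine, table, data), so inside load the name conn actually holds the Engine, and calling execute()/commit() on a 1.4 "future"-style Engine is what raises the "not implemented for SQLAlchemy 2.0" message. A minimal sketch of the likely fix, reusing the engine, table, and data objects from the question:
# Sketch of the likely fix: pass the Connection, not the Engine, into load()
def load(conn, table, data):
    insert_stmt = table.insert().values(**data)
    conn.execute(insert_stmt)
    conn.commit()

with engine.connect() as conn:
    load(conn, table, data)  # was load(engine, table, data)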

Related

Error while trying to execute the query in Denodo using Python SQLAlchemy

I'm trying to get a table from Denodo using Python and the sqlalchemy library. This is my code:
from sqlalchemy import create_engine
import os
sql = """SELECT * FROM test_table LIMIT 10 """
engine = create_engine('mssql+pyodbc://DenodoODBC', encoding='utf-8')
con = engine.connect().connection
cursor = con.cursor()
cursor.execute(sql)
df = cursor.fetchall()
cursor.close()
con.close()
When I'm trying to run it for the first time I get the following error.
DBAPIError: (pyodbc.Error) (' \x10#', "[ \x10#] ERROR: Function 'schema_name' with arity 0 not found\njava.sql.SQLException: Function 'schema_name' with arity 0 not found;\nError while executing the query (7) (SQLExecDirectW)")
[SQL: SELECT schema_name()]
I think the problem might be with create_engine, because when I run the code a second time without creating the engine again, everything is fine.
I hope somebody can explain to me what is going on. Thanks :)
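One possible reading: the mssql+pyodbc dialect runs SELECT schema_name() while it initializes on the first connection, and Denodo has no such function; on the second run the dialect is already initialized, so that probe is skipped. Since the code above ends up on a raw DB-API cursor anyway, a sketch (an untested guess, with only the DSN name taken from the question's URL) could skip the SQL Server dialect and use pyodbc directly:
# Sketch: talk to the Denodo ODBC DSN directly, avoiding the mssql dialect
import pyodbc

sql = "SELECT * FROM test_table LIMIT 10"

con = pyodbc.connect("DSN=DenodoODBC")  # assumes an ODBC DSN named DenodoODBC
cursor = con.cursor()
cursor.execute(sql)
rows = cursor.fetchall()
cursor.close()
con.close()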

Is commit used on the connect or cursor?

The book named "Practical Programming: 2nd Edition" has conflicting code. This is the start of my code:
import sqlite3
con = sqlite3.connect('stackoverflow.db')
cur = conn.cursor()
To commit, would I use con.commit(), cur.commit(), or are there different times to use each? The book shows examples with both con.commit() and cur.commit(), while the documentation shows con.commit().
I took unutbu's advice and tried it myself.
Sample code:
import sqlite3
con = sqlite3.connect('db.db')
cur = con.cursor()
data = [('data', 3), ('data2', 69)]
cur.execute('CREATE TABLE Density(Name TEXT, Number INTEGER)')
for i in data:
    cur.execute('INSERT INTO Density VALUES (?, ?)', (i[0], i[1]))
cur.commit()
PyCharm Run:
Traceback (most recent call last):
  File "/Users/User/Library/Preferences/PyCharmCE2018.1/scratches/scratch_2.py", line 13, in <module>
    cur.commit()
AttributeError: 'sqlite3.Cursor' object has no attribute 'commit'
Error in textbook. cur.commit() does not exist.
Thanks unutbu and s3n0
con.commit() and conn.commit() are the same thing: both names refer to a Connection object, just named differently in each example. What matters is the .commit() call, not the variable name the programmer chose.
The connection and cursor objects can be given any names you like (con and cur, as in your question) when calling their methods. You can also use different names in your own code, for example:
db = sqlite3.connect('/tmp/filename.db')
cursor = db.cursor()
cursor.execute("CREATE TABLE ....
.... some DB-API 2.0 commands ....
")
db.commit()
Please check the webpage https://docs.python.org/3/library/sqlite3.html again.
You forgot to copy these two lines from it:
import sqlite3
conn = sqlite3.connect('example.db')
And then continuing the code (just copied it):
c = conn.cursor()
# Create table
c.execute('''CREATE TABLE stocks
             (date text, trans text, symbol text, qty real, price real)''')
# Insert a row of data
c.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
# Save (commit) the changes
conn.commit()
# We can also close the connection if we are done with it.
# Just be sure any changes have been committed or they will be lost.
conn.close()
I think if you want to commit changes through a specific cursor, in your case it should be cur.connection.commit().
You can always commit through the connection object at the end of your code, whether it's named db, con, or conn.
But when your code gets more complicated, you'll have different functions performing different operations on the database. If you only commit through the connection and there is a bug, you'll have a hard time finding which function failed. If you create a specific cursor for each operation and commit through it, the traceback will show you which cursor went wrong.
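A minimal sketch of that pattern, reusing the con/cur names and Density table from the question: every sqlite3 cursor keeps a reference to the connection that created it, so cur.connection.commit() and con.commit() end up on the same object.
import sqlite3

con = sqlite3.connect('stackoverflow.db')
cur = con.cursor()

cur.execute('CREATE TABLE IF NOT EXISTS Density (Name TEXT, Number INTEGER)')
cur.execute('INSERT INTO Density VALUES (?, ?)', ('data', 3))

# cur.connection is the Connection that created this cursor,
# so this is the same call as con.commit()
cur.connection.commit()
con.close()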
To @s3n0's and @DanielYu's point, commits can be handled two different ways. I had to list these out to better understand the overlap (see the sketch after these lists):
Connection Objects
backup
close
commit
create_aggregate
create_collation
create_function
cursor
enable_load_extension
execute
executemany
executescript
in_transaction
interrupt
isolation_level
iterdump
load_extension
rollback
row_factory
set_authorizer
set_progress_handler
set_trace_callback
text_factory
total_changes
Cursor objects
arraysize
close
connection
description
execute
executemany
executescript
fetchall
fetchmany
fetchone
lastrowid
rowcount
setinputsizes
setoutputsize
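One thing the overlap shows: execute, executemany, and executescript exist on both objects (the Connection versions are documented shortcuts that create a cursor for you), while commit and rollback exist only on the Connection. A small sketch, using an in-memory database, just to illustrate:
import sqlite3

con = sqlite3.connect(':memory:')

# Connection.execute is a shortcut: it creates a cursor, runs the SQL on it,
# and returns that cursor.
con.execute('CREATE TABLE t (x INTEGER)')
con.execute('INSERT INTO t VALUES (?)', (1,))
con.commit()  # only the Connection can commit or roll back

print(con.execute('SELECT x FROM t').fetchall())  # [(1,)]
con.close()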

Can't get MySQL Connector/Python to Return Dictionary

I have a Python application, in which I'm calling a MySQL stored procedure from my view, like so:
import mysql.connector
proc = 'audit_report'
parms = [data['schoolid'], dateToISO(data['startdatedefault'],'from'), dateToISO(data['enddatedefault'],'to'), joinIntList(data['studypgms'], joinWith), joinIntList(data['fedpgms'], joinWith), joinIntList(data['statuses'], joinWith), data['fullssndefault']]
conn = mysql.connector.connect(user='usr', database='db', password='pwd')
cursor = conn.cursor(dictionary=True)
cursor.callproc(proc, parms)
for result in cursor.stored_results():
    print(result.fetchall())
I am getting the data returned as a list of tuples, the standard output. Since I'm using connector version 2.1.7, the docs say adding
dictionary=True
to the cursor declaration should cause the rowset to be returned as a list of dictionaries, with the column name as the key of each dictionary. The main difference between my application and the example in the docs is that I'm using cursor.callproc(), whereas the examples use cursor.execute() with actual SQL code.
I tried
print(cursor.column_names)
to see if I could get the column names that way, but all I get is
('#_audit_report_arg1', '#_audit_report_arg2', '#_audit_report_arg3', '#_audit_report_arg4', '#_audit_report_arg5', '#_audit_report_arg6', '#_audit_report_arg7')
which looks more like the input parameters to the stored procedure.
Is there any way to actually get the column names of the returned data? The procedure is somewhat complex and contains crosstab-type manipulation, but calling the same stored procedure from MySQL Workbench happily supplies the column names.
Normally, knowing what the output is supposed to be, I could hard-code column names, except this procedure crosstabs the data for the last few columns, and it is unpredictable what they will be until after the query runs.
Thanks...
You can use pymysql in Python 3 and it should work fine:
import pymysql.cursors

connection = pymysql.connect(host='',
                             user='',
                             password='',
                             db='test',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
try:
    with connection.cursor() as cursor:
        # Read a single record
        sql = "query"
        cursor.execute(sql)
        result = cursor.fetchone()
        num_fields = len(cursor.description)
        field_names = [i[0] for i in cursor.description]
        print(field_names)
finally:
    connection.close()
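If you'd rather stay with mysql.connector, another option (a sketch, not tested against the question's procedure, reusing the conn/proc/parms names from the question) is to build the dictionaries yourself from each stored result's description, which carries the column names of that result set:
# Sketch: zip each stored result's column names with its rows
cursor = conn.cursor()
cursor.callproc(proc, parms)

for result in cursor.stored_results():
    columns = [col[0] for col in result.description]
    rows = [dict(zip(columns, row)) for row in result.fetchall()]
    print(rows)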

select from insert into not working with sqlalchemy

I want to insert a record into my table (in a DB2 database) and get the id generated by that insert. I'm trying to do that with Python 2.7. Here is what I did:
import sqlalchemy
from sqlalchemy import *
import ibm_db_sa

db2 = sqlalchemy.create_engine('ibm_db_sa://user:pswd@localhost:50001/mydatabase')
sql = "select REPORT_ID from FINAL TABLE(insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'));"
result = db2.execute(sql)
for item in result:
    id = item[0]
    print id
When I execute the code above it gives me this output:
10 //or a increasing number
Now when I check in the database, nothing has been inserted! I tried to run the same SQL request on the command line and it worked just fine. Any clue why I can't insert it with Python using sqlalchemy?
Did you try a commit? @Lennart is right. It might solve your problem.
Your code does not commit the changes you have made, so they are rolled back.
If your database is transactional (as InnoDB is, for example), it needs a commit.
According to this, you also have to connect to your engine, so in your instance it would look like:
db2 = sqlalchemy.create_engine('ibm_db_sa://user:pswd@localhost:50001/mydatabase')
conn = db2.connect()
trans = conn.begin()
try:
    sql = "select REPORT_ID from FINAL TABLE(insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'));"
    result = conn.execute(sql)
    for item in result:
        id = item[0]
        print id
    trans.commit()
except:
    trans.rollback()
    raise
I do hope this helps.
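As a variant on the same idea, SQLAlchemy's engine.begin() context manager opens a connection, starts a transaction, and commits it for you when the block exits without an exception (and rolls back otherwise). A sketch, reusing db2 and the query from the question:
# Sketch: engine.begin() commits on success, rolls back on error
sql = ("select REPORT_ID from FINAL TABLE("
       "insert into MY_TABLE values(DEFAULT,CURRENT TIMESTAMP,EMPTY_BLOB(),10,'success'))")

with db2.begin() as conn:
    result = conn.execute(sql)
    for item in result:
        print(item[0])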

Enable executing multiple statements while execution via sqlalchemy

I have a DDL object (create_function_foo) that contains a CREATE FUNCTION statement. In its first line I put DROP FUNCTION IF EXISTS foo;, but engine.execute(create_function_foo) returns:
sqlalchemy.exc.InterfaceError: (InterfaceError) Use multi=True when executing multiple statements
I passed multi=True as a parameter to create_engine, engine.execute_options, and engine.execute, but it doesn't work.
NOTE: engine is my instance of create_engine
NOTE: I'm using Python 3.2 + mysql.connector 1.0.12 + sqlalchemy 0.8.2
create_function_foo = DDL("""\
DROP FUNCTION IF EXISTS foo;
CREATE FUNCTION `foo`(
SID INT
) RETURNS double
READS SQL DATA
BEGIN
...
END
""")
Where should I put it?
multi=True is a requirement of the MySQL connector. You cannot set this flag by passing it to SQLAlchemy methods. Do this:
conn = session.connection().connection
cursor = conn.cursor() # get mysql db-api cursor
cursor.execute(sql, multi=True)
More info here: http://www.mail-archive.com/sqlalchemy@googlegroups.com/msg30129.html
Yeah... This seems like a bummer to me. I don't want to use the ORM so the accepted answer didn't work for me.
I did this instead:
with open('sql_statements_file.sql') as sql_file:
    for statement in sql_file.read().split(';'):
        if len(statement.strip()) > 0:
            connection.execute(statement + ';')
And then this failed for a CREATE function (splitting on ';' also splits on the semicolons inside the function body).... YMMV.
There are some cases where SQLAlchemy does not provide a generic way of accessing some DBAPI functions, such as dealing with multiple result sets. In these cases, you should deal with the raw DBAPI connection directly.
From SQLAlchemy documentation:
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.execute("select * from table1; select * from table2")
    results_one = cursor.fetchall()
    cursor.nextset()
    results_two = cursor.fetchall()
    cursor.close()
finally:
    connection.close()
You can also do the same using mysql connector as seen here:
operation = 'SELECT 1; INSERT INTO t1 VALUES (); SELECT 2'
for result in cursor.execute(operation, multi=True):
    if result.with_rows:
        print("Rows produced by statement '{}':".format(
            result.statement))
        print(result.fetchall())
    else:
        print("Number of rows affected by statement '{}': {}".format(
            result.statement, result.rowcount))
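Tying that back to the original DDL: with mysql.connector, execute(..., multi=True) returns an iterator and the statements are only sent as that iterator is consumed, so a sketch for the question's case could look like this (ddl_sql is assumed to hold the same SQL text as create_function_foo; engine is the one from the question):
# Sketch: drive the multi-statement DDL through the raw DB-API connection
raw = engine.raw_connection()
try:
    cursor = raw.cursor()
    # multi=True returns an iterator; consuming it sends each statement
    for _ in cursor.execute(ddl_sql, multi=True):
        pass
    cursor.close()
    raw.commit()
finally:
    raw.close()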
