I'm trying to figure out how to use the MySQLdb library in Python (I am a novice at best with both of them).
I'm following the code here, specifically:
cursor = conn.cursor ()
cursor.execute ("DROP TABLE IF EXISTS animal")
cursor.execute ("""
CREATE TABLE animal
(
name CHAR(40),
category CHAR(40)
)
""")
cursor.execute ("""
INSERT INTO animal (name, category)
VALUES
('snake', 'reptile'),
('frog', 'amphibian'),
('tuna', 'fish'),
('racoon', 'mammal')
""")
print "Number of rows inserted: %d" % cursor.rowcount
cursor.close ()
conn.close ()
I can change this code to create or drop tables, but I can't get it to actually commit the INSERT. It returns the cursor.rowcount value as expected (even when I change the number of rows being inserted, the count changes to what I expect it to be).
Every time I look into the database with PHPMyAdmin there are no inserts made. How do I commit the INSERT to the database?
You forgot to commit the data changes; autocommit is disabled by default:
cursor.close ()
conn.commit ()
conn.close ()
Quoting the Writing MySQL Scripts with Python DB-API documentation:
"The connection object commit() method commits any outstanding changes
in the current transaction to make them permanent in the database. In
DB-API, connections begin with autocommit mode disabled, so you must
call commit() before disconnecting or changes may be lost."
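In the context of the question's script, the complete flow would look like this (connection parameters are placeholders):
import MySQLdb

# Placeholder credentials; substitute your own
conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="test")
cursor = conn.cursor()
cursor.execute("INSERT INTO animal (name, category) VALUES ('snake', 'reptile')")
cursor.close()
conn.commit()  # without this, the INSERT is rolled back when the connection closes
conn.close()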
Background
I maintain a Python application that automatically applies SQL schema migrations (adding/removing tables and columns, adjusting the data, etc) to our database (SQL2016). Each migration is executed via PyODBC within a transaction so that it can be rolled back if something goes wrong. Sometimes a migration requires one or more batch statements (GO) to execute correctly. Since GO is not actually a T-SQL command but rather a special keyword in SSMS, I've been splitting each SQL migration on GO and executing each SQL fragment separately within the same transaction.
import pyodbc
import re
conn_args = {
'driver': '{ODBC Driver 17 for SQL Server}',
'hostname': 'MyServer',
'port': 1298,
'server': r'MyServer\MyInstance',
'database': 'MyDatabase',
'user': 'MyUser',
'password': '********',
'autocommit': False,
}
connection = pyodbc.connect(**conn_args)
cursor = connection.cursor()
sql = '''
ALTER TABLE MyTable ADD NewForeignKeyID INT NULL FOREIGN KEY REFERENCES MyParentTable(ID)
GO
UPDATE MyTable
SET NewForeignKeyID = 1
'''
sql_fragments = re.split(r'^\s*GO;?\s*$', sql, flags=re.IGNORECASE|re.MULTILINE)
for sql_frag in sql_fragments:
    cursor.execute(sql_frag)
    # Wait for the command to complete. This is necessary for some database
    # system commands (backup, restore, etc.). Probably not necessary for
    # schema migrations, but included for completeness.
    while cursor.nextset():
        pass
connection.commit()
Problem
SQL statement batches aren't being executed like I expected. When the above schema migration is executed in SSMS, it succeeds. When executed in Python, the first batch (adding the foreign key) executes just fine, but the second batch (setting the foreign key value) fails because it isn't aware of the new foreign key.
('42S22', "[42S22] [FreeTDS][SQL Server]Invalid column name 'NewForeignKeyID'. (207) (SQLExecDirectW)")
Goal
Execute a hierarchy of SQL statement batches (i.e. where each statement batch depends upon the previous batch) within a single transaction in PyODBC.
What I've Tried
Searching the PyODBC documentation for information on how PyODBC supports or doesn't support batch statements / the GO command. No references found.
Searching StackOverflow & Google for how to batch statements within PyODBC.
Introducing a small sleep between SQL fragment executions just in case there's some sort of race condition. Seemed unlikely to be a solution, and didn't change the behavior.
I've considered separating each batch of statements out into a separate transaction that is committed before the next batch is executed, but that would reduce/eliminate our ability to automatically roll back a schema migration that fails.
EDIT: I just found this question, which is pretty much exactly what I want to do. When I first tested (in SSMS) the answer that recommends using EXEC, I thought the second EXEC command (setting the value) failed because it wasn't aware of the new foreign key; I was testing incorrectly, and it actually does succeed. This solution might work, but it isn't ideal, since EXEC isn't compatible with parameters. It also won't work if variables are used across fragments.
BEGIN TRAN
EXEC('ALTER TABLE MyTable ADD NewForeignKeyID INT NULL FOREIGN KEY REFERENCES MyParentTable(ID)')
EXEC('UPDATE MyTable SET NewForeignKeyID = 1')
ROLLBACK TRAN
Invalid column name 'FK_TestID'.
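For completeness, the EXEC approach can be driven from the question's existing splitting code; a rough sketch (the quote escaping here is naive, and as noted above, parameters cannot be passed into the EXEC string):
# Sketch: wrap each GO-separated fragment in EXEC() so that each batch is
# compiled only after the previous one has run, still inside one transaction.
for sql_frag in sql_fragments:
    escaped = sql_frag.replace("'", "''")  # naive quote escaping
    cursor.execute("EXEC('" + escaped + "')")
connection.commit()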
If you are reading the SQL statements from a text file (such as one produced by scripting objects in SSMS) then you could just use Python's subprocess module to run the sqlcmd utility with that file as the input (-i). In its simplest form that would look like
server = "localhost"
port = 49242
uid = "scott"
pwd = "tiger^5HHH"
database = "myDb"
script_file = r"C:\__tmp\batch_test.sql"
"""contents of the above file:
DROP TABLE IF EXISTS so69020084;
CREATE TABLE so69020084 (src varchar(10), var_value varchar(10));
INSERT INTO so69020084 (src, var_value) VALUES ('1st batch', 'foo');
GO
INSERT INTO so69020084 (src, var_value) VALUES ('2nd batch', 'bar');
GO
"""
import subprocess
cmd = [
"sqlcmd",
"-S", f"{server},{port}",
"-U", uid,
"-P", pwd,
"-d", database,
"-i", script_file,
]
subprocess.run(cmd)
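If you want a failed script to abort the migration, consider adding sqlcmd's -b switch (exit with a non-zero code when a batch errors) and letting subprocess raise on failure:
cmd.append("-b")                 # abort the batch and return an error code on failure
subprocess.run(cmd, check=True)  # raises subprocess.CalledProcessError if sqlcmd fails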
I have a stored procedure.
Calling it via MySQL Workbench as follows works:
CALL `lobdcapi`.`escalatelobalarm`('A0001');
But not from the Python program (it doesn't throw any exception; the process finishes execution silently). If I make an error in a column name, I do get an error in Python, so it is calling my stored procedure, but it isn't working as expected. (It is an UPDATE query, and it needs safe updates disabled.)
Why doesn't this update any records when run through Python SQLAlchemy?
CREATE DEFINER=`lob`#`%` PROCEDURE `escalatelobalarm`(IN client_id varchar(50))
BEGIN
SET SQL_SAFE_UPDATES = 0;
update lobdcapi.alarms
set lobalarmescalated=1
where id in (
  SELECT al.id
  from (select id, alarmoccurredhistoryid
        from lobdcapi.alarms
        where lobalarmpriorityid=1 and lobalarmescalated=0
          and clientid=client_id and alarmstatenumber='02') as al
  inner join lobdcapi.`alarmhistory` as hi
    on hi.id = al.alarmoccurredhistoryid
   and hi.datetimestamp <= current_timestamp()
);
SET SQL_SAFE_UPDATES = 1;
END
I call it like;
from sqlalchemy import and_, func,text
db.session.execute(text("CALL escalatelobalarm(:param)"), {'param': clientid})
I suspect the param I pass via code isn't getting bound properly?
I haven't called stored procs from SQLAlchemy, but it seems possible that this could be within a transaction because you're using the session. Perhaps calling db.session.commit() at the end would help?
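For example (a minimal sketch, assuming the same Flask-SQLAlchemy db object as in the question):
from sqlalchemy import text

# The CALL runs inside the session's transaction; the UPDATE inside the
# procedure is only persisted once the session commits.
db.session.execute(text("CALL escalatelobalarm(:param)"), {'param': clientid})
db.session.commit()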
If that fails, the SQLAlchemy docs call out calling stored procs here. Perhaps try their method of using callproc on a raw DBAPI connection. Adapting to your use-case, something like:
connection = db.engine.raw_connection()  # raw DBAPI connection, not the Session
try:
    cursor = connection.cursor()
    cursor.callproc("escalatelobalarm", [clientid])
    results = list(cursor.fetchall())
    cursor.close()
    connection.commit()
finally:
    connection.close()
I can't seem to correctly connect to and pull from a test PostgreSQL database in Python. I installed PostgreSQL using Homebrew. Here's how I have been accessing the database table and value from the terminal:
xxx-macbook:~ xxx$ psql
psql (9.4.0)
Type "help" for help.
xxx=# \dn
List of schemas
Name | Owner
--------+---------
public | xxx
(1 row)
xxx=# \connect postgres
You are now connected to database "postgres" as user "xxx".
postgres=# SELECT * from test.test;
coltest
-----------
It works!
(1 row)
But when trying to access it from python, using the code below, it doesn't work. Any suggestions?
########################################################################################
# Importing variables from PostgreSQL database via SQL commands
import psycopg2

db_conn = psycopg2.connect(database='postgres',
                           user='xxx')
cursor = db_conn.cursor()
#querying the database
result = cursor.execute("""
Select * From test.test
""")
print "Result: ", result
>>> Result: None
It should say: Result: It works!
You need to fetch the results.
From the docs:
The execute() method returns None. If a query was executed, the returned values can be retrieved using fetch*() methods.
Example:
result = cursor.fetchall()
For reference:
http://initd.org/psycopg/docs/cursor.html#execute
http://initd.org/psycopg/docs/cursor.html#fetch
Note that (unlike psql) psycopg2 wraps everything in transactions. So if you intend to make persistent changes to the database (INSERT, UPDATE, DELETE, ...) you need to commit them explicitly; otherwise changes will be rolled back automatically when the connection object is destroyed. Read more on that topic here:
http://initd.org/psycopg/docs/usage.html
http://initd.org/psycopg/docs/usage.html#transactions-control
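Putting it together, a minimal sketch using the same connection parameters as the question:
import psycopg2

db_conn = psycopg2.connect(database='postgres', user='xxx')
cursor = db_conn.cursor()
cursor.execute("SELECT * FROM test.test")
result = cursor.fetchall()  # execute() returns None; the rows come from fetch*()
print "Result: ", result    # e.g. [('It works!',)]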
The documentation I've run across researching this indicates that the way to do it for other databases is to use multiple statements in your query, a la:
>>> cursor = connection.cursor()
>>> cursor.execute("""set session transaction isolation level read uncommitted;
... select stuff from table;
... set session transaction isolation level repeatable read;""")
Unfortunately, doing that yields no results, as apparently the Python DB API (or maybe just this implementation of it?) doesn't support multiple recordsets within a single query.
Has anyone else had success with this in the past?
I don't think this works for the MySQLdb driver; you'll have to issue separate queries:
cur = conn.cursor()
cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED")
cur.execute("SELECT ##session.tx_isolation")
print cur.fetchall()[0]
cur.execute("SELECT * FROM bar")
print cur.fetchall()
cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ")
cur.execute("SELECT ##session.tx_isolation")
print cur.fetchall()[0]
# output
('READ-UNCOMMITTED',)
(('foo',), ('bar',))
('REPEATABLE-READ',)
The MySQLdb cursor's execute() method only sees the first query up to the semicolon:
cur.execute("SELECT * FROM bar WHERE thing = 'bar'; SELECT * FROM bar")
print cur.fetchall()
# output
(('bar',),)
Note that cur.executemany() won't help here either: it runs a single parameterized statement once per parameter tuple in a sequence, not multiple ;-separated statements.
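If you genuinely need several ;-separated statements in one call, MySQLdb can do it when the connection is opened with the MULTI_STATEMENTS client flag; a sketch (credentials are placeholders):
import MySQLdb
from MySQLdb.constants import CLIENT

conn = MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname",
                       client_flag=CLIENT.MULTI_STATEMENTS)
cur = conn.cursor()
cur.execute("SELECT * FROM bar WHERE thing = 'bar'; SELECT * FROM bar")
print cur.fetchall()          # first result set
while cur.nextset() is not None:
    print cur.fetchall()      # subsequent result sets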
I'm having some trouble updating a row in a MySQL database. Here is the code I'm trying to run:
import MySQLdb
conn=MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor=conn.cursor()
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
cursor.execute("SELECT Co_num FROM compinfo WHERE ID=100")
results = cursor.fetchall()
for row in results:
print row[0]
print "Number of rows updated: %d" % cursor.rowcount
cursor.close()
conn.close()
The output I get when I run this program is:
4
Number of rows updated: 1
It seems like it's working but if I query the database from the MySQL command line interface (CLI) I find that it was not updated at all. However, if from the CLI I enter UPDATE compinfo SET Co_num=4 WHERE ID=100; the database is updated as expected.
What is my problem? I'm running Python 2.5.2 with MySQL 5.1.30 on a Windows box.
I am not certain, but I am going to guess that you are using an InnoDB table and you haven't done a commit. I believe MySQLdb enables transactions automatically.
Call conn.commit() before calling close().
From the FAQ: Starting with 1.2.0, MySQLdb disables autocommit by default
MySQLdb has autocommit off by default, which may be confusing at first. Your connection exists in its own transaction and you will not be able to see the changes you make from other connections until you commit that transaction.
You can either do conn.commit() after the update statement as others have pointed out, or disable this functionality altogether by setting conn.autocommit(True) right after you create the connection object.
You need to commit changes manually or turn auto-commit on.
The reason SELECT returns the modified (but not persisted) data is that the connection is still in the same transaction.
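Concretely, the question's script is only missing one line after the UPDATE:
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
conn.commit()  # persist the change so the CLI (a separate connection) can see it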
I've found that Python's connector automatically turns autocommit off, and there doesn't appear to be any way to change this behaviour. Of course you can turn it back on, but then looking at the query logs, it stupidly does two pointless queries after connect to turn autocommit off then back on.
Connector/Python Connection Arguments
Turning on autocommit can be done directly when you connect to a database:
import mysql.connector as db
conn = db.connect(host="localhost", user="root", passwd="pass", db="dbname", autocommit=True)
MySQLConnection.autocommit Property
Or separately:
import MySQLdb
conn = MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor = conn.cursor()
conn.get_autocommit()  # returns False
conn.autocommit(True)  # turn autocommit on
conn.get_autocommit()  # should return True now
cursor = conn.cursor()
Explicitly committing the changes is done with
conn.commit()
I had to execute SET autocommit = true in my MySQL Workbench script.