For the life of me I can't figure out why the module below won't add new rows to my DB. I can add them using the command-line interface, and I can also add them by other means (e.g. writing the commands to a script file and running it via os.system('...')), but if I use cursor.execute(), no rows are added (even though the table is created). Here is a minimal script for your viewing pleasure. Note that I get no errors or warnings when I run this script.
#!/usr/bin/env python
import MySQLdb
if __name__ == '__main__':
    db = MySQLdb.connect(host="localhost", user="user", passwd="passwd", db="db")
    cursor = db.cursor()
    cursor.execute(
        """
        CREATE TABLE IF NOT EXISTS god_i_really_hate_this_stupid_library
        (
            id INT NOT NULL auto_increment,
            username VARCHAR(32) NOT NULL UNIQUE,
            PRIMARY KEY(id)
        ) engine=innodb;
        """
    )
    cursor.execute(
        """
        INSERT INTO god_i_really_hate_this_stupid_library
        ( username )
        VALUES
        ( 'Booberry' );
        """
    )
    cursor.close()
You need to call commit on your connection; otherwise all the changes you made will be rolled back automatically.
From the FAQ of MySQLdb:
Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you'll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database.
Conversely, you can also use connection.rollback() to throw away any changes you've made since the last commit.
Important note: Some SQL statements -- specifically DDL statements like CREATE TABLE -- are non-transactional, so they can't be rolled back, and they cause pending transactions to commit.
You can call db.autocommit(True) to turn autocommit on for the connection or just call db.commit() manually whenever you deem it necessary.
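Applied to the script above, the fix is a single line before closing (a minimal sketch reusing the question's names and credentials; the table was already created by the script in the question):

#!/usr/bin/env python
import MySQLdb

if __name__ == '__main__':
    db = MySQLdb.connect(host="localhost", user="user", passwd="passwd", db="db")
    cursor = db.cursor()
    cursor.execute(
        """
        INSERT INTO god_i_really_hate_this_stupid_library ( username )
        VALUES ( 'Booberry' );
        """
    )
    db.commit()  # persist the INSERT; without this, InnoDB rolls it back
    cursor.close()
    db.close()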
I'm using an SQLite3 database in my Python application and query it using parameter substitution.
For example:
cursor.execute('SELECT * FROM table WHERE id > ?', (10,))
Some queries do not return results properly, and I would like to log them and try querying SQLite manually.
How can I log these queries with the parameter values in place of the question marks?
Python 3.3 has sqlite3.Connection.set_trace_callback:
import sqlite3
connection = sqlite3.connect(':memory:')
connection.set_trace_callback(print)
The function you provide as argument gets called for every SQL statement that is executed through that particular Connection object. Instead of print, you may want to use a function from the logging module.
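For example, a small sketch that routes the trace through logging instead of print:

import logging
import sqlite3

logging.basicConfig(level=logging.DEBUG)

connection = sqlite3.connect(':memory:')
# every statement executed through this connection is passed to logging.debug
connection.set_trace_callback(logging.debug)

connection.execute('CREATE TABLE points (x INTEGER, y INTEGER)')
connection.execute('INSERT INTO points VALUES (?, ?)', (1, 2))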
Assuming that you have a log function, you could call it first:
query, param = 'SELECT * FROM table WHERE id > ?', (10,)
log(query.replace('?', '%s') % param)
cursor.execute(query, param)
This way you don't modify your query at all.
Moreover, this approach is not SQLite-specific.
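If you want both steps in one place, you could wrap them in a small helper (log_and_execute is a hypothetical name, not part of sqlite3):

def log_and_execute(cursor, query, params=()):
    # naive substitution, for debugging output only;
    # the real execution below still uses proper parameter binding
    print(query.replace('?', '%r') % params)
    cursor.execute(query, params)

log_and_execute(cursor, 'SELECT * FROM table WHERE id > ?', (10,))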
I use an SQLAlchemy Engine to create some functions and triggers. I did not want to mix Python and SQL, so I created a separate file for my SQL statements; I read its contents and pass them to engine.execute(). It throws no errors, yet the functions are not created in the database. If I run the same SQL file through pgAdmin, everything works fine.
My SQL file:
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'plpython3u') THEN
        CREATE EXTENSION plpython3u;
    END IF;
END;
$$;

DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname = 'my_func') THEN
        CREATE FUNCTION public.my_func() RETURNS TRIGGER LANGUAGE 'plpython3u' NOT LEAKPROOF AS $BODY$
            -- definition
        $BODY$;
        GRANT EXECUTE ON FUNCTION my_func() TO public;
    END IF;
END;
$$;

DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname = 'my_func2') THEN
        CREATE FUNCTION public.my_func2() RETURNS TRIGGER LANGUAGE 'plpython3u' NOT LEAKPROOF AS $BODY$
            -- definition
        $BODY$;
        GRANT EXECUTE ON FUNCTION my_func2() TO public;
    END IF;
END;
$$;
And I run this as follows:
def execute_sql_file(engine, path):
    try:
        with open(path) as file:
            engine.execute(file.read())
    except ProgrammingError:
        raise MyCustomError
    except FileNotFoundError:
        raise MyCustomError
If I run this without superuser privileges, it throws ProgrammingError, as expected. In my understanding, END; commits the transaction, so if this code really runs, the functions should be available to the public; however, they are not even created. Any ideas are welcome, thanks!
I believe you may have mixed up the BEGIN SQL command (a PostgreSQL extension) and a PL/pgSQL block. The SQL command DO executes an anonymous code block, as if it were an anonymous function with no parameters and returning void. In other words, in
DO $$
BEGIN
    ...
END;
$$;
the BEGIN / END; pair denotes the code block, not a transaction. It is worth noting that starting from PostgreSQL 11 it is possible to manage transactions in a DO block, provided it is not executed inside a transaction block, but the commands for that are COMMIT and ROLLBACK, not the keyword END.
The problem then is that your changes are not committed, though your commands clearly are executed, as proven by the error when not running with suitable privileges. This issue is caused by how SQLAlchemy's autocommit feature works. In short, it inspects your statement and tries to determine whether it is a data-changing operation or a DDL statement. This works for basic operations such as INSERT, DELETE, UPDATE, and the like, but it is not perfect; in fact, it is impossible for it to always correctly determine whether a statement changes data. For example, SELECT my_mutating_procedure() is such a statement. So it needs some help when doing more complex operations. One way is to instruct the autocommit machinery that it should commit by wrapping the SQL string in a text() construct and using execution_options():
from sqlalchemy import text

engine.execute(
    text("SELECT my_mutating_procedure()")
    .execution_options(autocommit=True))
It is also possible to explicitly instruct SQLAlchemy that the command is a literal DDL statement using the DDL construct:
from sqlalchemy.exc import ProgrammingError
from sqlalchemy.schema import DDL

def execute_sql_file(engine, path):
    try:
        with open(path) as file:
            stmt = file.read()
            # Not strictly DDL, but a series of DO commands that execute DDL
            ddl_stmt = DDL(stmt)
            engine.execute(ddl_stmt)
    except ProgrammingError:
        raise MyCustomError
    except FileNotFoundError:
        raise MyCustomError
As for why it works through pgAdmin: pgAdmin probably commits by default if no error was raised.
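Another option, sketched here under the assumption of SQLAlchemy 1.x as in the code above, is to sidestep the autocommit detection entirely and run the file in an explicit transaction:

def execute_sql_file(engine, path):
    with open(path) as file:
        stmt = file.read()
    # engine.begin() opens a transaction that commits when the block
    # exits normally and rolls back if it raises
    with engine.begin() as connection:
        connection.execute(stmt)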
I need some help with Python and MySQL.
I have the following code, which executes in an infinite loop:
import MySQLdb

db = MySQLdb.connect("127.0.0.1", "user", "password", "dbname")

while True:
    cursor = db.cursor()
    cursor.execute("SELECT * FROM requests WHERE status <> 'Finished'")
    all_pending_requests = cursor.fetchall()
    cursor.close()
And that works fine the first time the loop runs. But then I go to a tool like MySQL Workbench, or type the statements myself in a terminal, and update some rows, setting their status to something that is not "Finished". The next time the loop executes I should get those rows as a result, but I get nothing. Do you know why this is happening?
Thanks for the help.
I am not certain, but I would assume that you are using the InnoDB storage engine in MySQL and MySQLdb version >= 1.2.0. You need to commit before the changes are reflected. As of version 1.2.0, MySQLdb disables auto-commit by default; confirmation of this is here. Try adding db.commit() as the last line in the loop.
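A sketch of the loop with that change:

import MySQLdb

db = MySQLdb.connect("127.0.0.1", "user", "password", "dbname")

while True:
    cursor = db.cursor()
    cursor.execute("SELECT * FROM requests WHERE status <> 'Finished'")
    all_pending_requests = cursor.fetchall()
    cursor.close()
    db.commit()  # end the transaction so the next SELECT sees fresh data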
I've created a database with the Python package sqlite3.
import sqlite3
conn=sqlite3.connect('foo.sqlite')
c=conn.cursor()
c.execute('CREATE TABLE foo (bar1 int, bar2 int)')
conn.commit()
conn.close
Then, for statistical purposes, I try to read this database with R (using the R package RSQLite):
library('RSQLite')
drv=dbDriver('SQLite')
foo=dbConnect(drv,'foo.sqlite')
If I want to list the table I've just created with Python:
dbListTables(foo)
R says that the database is empty:
character(0)
Am I doing something wrong, or can R not read a database created with Python?
Thanks for your help.
Try closing your database connection in Python by actually calling the close method, rather than just referencing it without the call:
conn.close()
Spot the difference? Then it all works for me.
> dbListTables(foo)
[1] "foo"
Although it all works for me even if I don't close the connection, and even if I haven't quit Python after the commit. So, umm...
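For reference, the tail of the corrected Python script (same names as in the question):

conn.commit()
conn.close()  # the parentheses make this an actual call; conn.close alone does nothing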
Here's my code:
import cx_Oracle
conn = cx_Oracle.connect(usr, pwd, url)
cursor = conn.cursor()
cursor.execute("UPDATE SO SET STATUS='PE' WHERE ID='100'")
conn.commit()
If I remove the conn.commit(), the table isn't updated. But for SELECT statements I don't need conn.commit(). I'm curious why.
The DB-API spec requires that connecting to the database begins a new transaction, by default. You must commit to confirm any changes you make, or rollback to discard them.
Note that if the database supports an auto-commit feature, this must be initially off.
Pure SELECT statements never make any changes to the database, so there is nothing for them to commit.
Others have explained why a commit is not necessary on a SELECT statement. I just wanted to point out you could utilize the autocommit property of the Connection object to avoid having to manually execute commit yourself:
import cx_Oracle

with cx_Oracle.connect(usr, pwd, url) as conn:
    conn.autocommit = True
    cursor = conn.cursor()
    cursor.execute("UPDATE SO SET STATUS='PE' WHERE ID='100'")
    cursor.close()
This is especially useful when you have multiple INSERT, UPDATE, and DELETE statements within the same connection.
commit is used to tell the database to save all the changes made in the current transaction.
SELECT does not change any data, so there is nothing to save and thus nothing to commit.
See the Wikipedia article on database transactions.