I need some help with Python and MySQL.
I have the following code, which executes in an infinite loop:
db = MySQLdb.connect("127.0.0.1", "user", "password", "dbname")
while True:
    cursor = db.cursor()
    cursor.execute("SELECT * FROM requests WHERE status <> 'Finished'")
    all_pending_requests = cursor.fetchall()
    cursor.close()
And that works fine the first time I run it. But then I go to a tool like MySQL Workbench, or type it myself in the terminal, and update some rows, setting their status to something that is not "Finished". By doing that, the next time the loop executes I should get those rows as a result, but I get nothing. Does anyone know why this is happening?
Thanks for the help.
I am not certain, but I assume you are using the InnoDB storage engine in MySQL and MySQLdb version >= 1.2.0. You need to commit before the changes are reflected: as of version 1.2.0, MySQLdb disables auto-commit by default. Confirmation of the same is here. Try adding db.commit() as the last line in the loop.
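Without a MySQL server handy, the same effect can be sketched with sqlite3 in WAL mode (a stand-in; InnoDB's REPEATABLE READ behaves analogously): a reader inside an open transaction keeps seeing its old snapshot until it commits, which is why a db.commit() in the loop makes new rows visible.

```python
import os
import sqlite3
import tempfile

# Stand-in demo: sqlite3 in WAL mode, so a reader's open transaction
# pins a snapshot while another connection commits new rows.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE requests (id int, status text)")

reader = sqlite3.connect(path)
reader.execute("BEGIN")  # open a transaction: reads now come from a snapshot
assert reader.execute("SELECT count(*) FROM requests").fetchall()[0][0] == 0

writer.execute("INSERT INTO requests VALUES (1, 'Pending')")
writer.commit()

# Still inside the old snapshot: the committed row is invisible to the reader
assert reader.execute("SELECT count(*) FROM requests").fetchall()[0][0] == 0

reader.commit()  # end the transaction, like db.commit() at the end of the loop
assert reader.execute("SELECT count(*) FROM requests").fetchall()[0][0] == 1
```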
Related
Sometimes I have a need to execute a query from psycopg2 that is not in a transaction block.
For example:
cursor.execute('create index concurrently on my_table (some_column)')
Doesn't work:
InternalError: CREATE INDEX CONCURRENTLY cannot run inside a transaction block
I don't see any easy way to do this with psycopg2. What am I missing?
I can probably call os.system('psql -c "create index concurrently"') or something similar to get it to run from my Python code; however, it would be much nicer to be able to do it inside Python and not rely on psql actually being in the container.
Yes, I have to use the concurrently option for this particular use case.
Another time I've explored this and not found an obvious answer is when I have a set of SQL commands that I'd like to call with a single execute(), where the first one briefly locks a resource. When I do this, that resource remains locked for the entire duration of the execute(), rather than just while the first statement in the SQL string is running, because they all run together in one big happy transaction.
In that case I could break the query up into a series of execute() statements - each became its own transaction, which was ok.
It seems like there should be a way, but I seem to be missing it. Hopefully this is an easy answer for someone.
EDIT: Add code sample:
#!/usr/bin/env python3.10
import psycopg2 as pg2
# -- set the standard psql environment variables to specify which database this should connect to.
# We have to set these to 'None' explicitly to get psycopg2 to use the env variables
connDetails = {'database': None, 'host': None, 'port': None, 'user': None, 'password': None}
with (pg2.connect(**connDetails) as conn, conn.cursor() as curs):
    conn.set_session(autocommit=True)
    curs.execute("""
        create index concurrently if not exists my_new_index on my_table (my_column);
    """)
Throws:
psycopg2.errors.ActiveSqlTransaction: CREATE INDEX CONCURRENTLY cannot run inside a transaction block
Per psycopg2 documentation:
It is possible to set the connection in autocommit mode: this way all the commands executed will be immediately committed and no rollback is possible. A few commands (e.g. CREATE DATABASE, VACUUM, CALL on stored procedures using transaction control…) require to be run outside any transaction: in order to be able to run these commands from Psycopg, the connection must be in autocommit mode: you can use the autocommit property.
Hence on the connection:
conn.set_session(autocommit=True)
Further resources from psycopg2 documentation:
transactions-control
connection.autocommit
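The same idea can be sketched with sqlite3 as a stand-in (an assumption of convenience, since CREATE INDEX CONCURRENTLY is PostgreSQL-only): with autocommit on, each statement runs immediately, outside any enclosing transaction block, which is what commands like this require.

```python
import sqlite3

# Stand-in sketch: isolation_level=None puts the sqlite3 connection in
# autocommit mode, the same idea as conn.set_session(autocommit=True)
# (or conn.autocommit = True) in psycopg2.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE my_table (my_column int)")
conn.execute("CREATE INDEX my_new_index ON my_table (my_column)")

index_names = [r[0] for r in
               conn.execute("SELECT name FROM sqlite_master WHERE type='index'")]
in_txn = conn.in_transaction  # False: no transaction block was ever opened
conn.close()
```

With psycopg2 itself, the key point is to enable autocommit before the first statement runs, not from inside an already-open transaction.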
I'm trying to use an SQLite insert operation in a Python script. It works when I execute it manually on the command line, but when I try to access it from the web it won't insert into the database. Here is my function:
def insertdb(unique_id,number_of_days):
    conn = sqlite3.connect('database.db')
    print "Opened database successfully";
    conn.execute("INSERT INTO IDENT (ID_NUM,DAYS_LEFT) VALUES (?,?)",(unique_id,number_of_days));
    conn.commit()
    print "Records created successfully";
    conn.close()
When it is executed on the web, it only shows the output "Opened database successfully" but does not seem to insert the value into the database. What am I missing? Is this a server configuration issue? I have checked the database permissions on writing and they are correctly set.
The problem is almost certainly that you're trying to create or open a database named database.db in whatever happens to be the current working directory, and one of the following is true:
The database exists and you don't have permission to write to it. So, everything works until you try to do something that requires write access (like committing an INSERT).
The database exists, and you have permission to write to it, but you don't have permission to create new files in the directory. So, everything works until sqlite needs to create a temporary file (which it almost always will when executing an INSERT).
Meanwhile, you don't mention what web server/container/etc. you're using, but apparently you have it configured to just swallow all errors silently, which is a really, really bad idea for any debugging. Configure it to report the errors in some way. Otherwise, you will never figure out what's going on with anything that goes wrong.
If you don't have control over the server configuration, you can at least wrap all your code in a try/except and manually log exceptions to some file you have write access to (ideally via the logging module, or just open and write if worst comes to worst).
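A minimal sketch of that fallback (file names here are illustrative, not from the question):

```python
import logging
import sqlite3

# Record the real exception somewhere you can read, instead of letting the
# server swallow it. The log file name is an illustrative assumption.
logging.basicConfig(filename="insert_errors.log", level=logging.ERROR)

def insertdb(unique_id, number_of_days, db_path="database.db"):
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("INSERT INTO IDENT (ID_NUM, DAYS_LEFT) VALUES (?, ?)",
                     (unique_id, number_of_days))
        conn.commit()
    except Exception:
        logging.exception("insert failed")  # writes the full traceback to the log
    finally:
        conn.close()
```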
Or, you can just do that with dumb print statements, as you're already doing:
def insertdb(unique_id,number_of_days):
    conn = sqlite3.connect('database.db')
    print "Opened database successfully";
    try:
        conn.execute("INSERT INTO IDENT (ID_NUM,DAYS_LEFT) VALUES (?,?)",(unique_id,number_of_days));
        conn.commit()
        print "Records created successfully";
    except Exception as e:
        print e # or, better, traceback.print_exc()
    conn.close()
I've created a database with the python package sqlite3.
import sqlite3
conn=sqlite3.connect('foo.sqlite')
c=conn.cursor()
c.execute('CREATE TABLE foo (bar1 int, bar2 int)')
conn.commit()
conn.close
Then for statistical purposes I try to read this database with R (I use the R package RSQLite)
library('RSQLite')
drv=dbDriver('SQLite')
foo=dbConnect(drv,'foo.sqlite')
If I want to list the table I've just created with Python
dbListTables(foo)
R says that the database is empty :
character(0)
Am I doing something wrong, or can R not read a database created with Python?
Thanks for your help.
Try closing your database connection in Python, rather than just referencing the close method without calling it:
conn.close()
Spot the difference? Then it all works for me.
> dbListTables(foo)
[1] "foo"
although it all works for me even if I don't close the connection, and even if I've not quit python after the commit. So, umm...
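The difference is easy to demonstrate from Python itself: conn.close merely looks the bound method up (and discards it), while conn.close() actually calls it and closes the connection.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (bar1 int, bar2 int)")

conn.close    # attribute lookup only: the connection is still open
conn.execute("SELECT * FROM foo")   # still works fine

conn.close()  # actually closes; any further use raises ProgrammingError
try:
    conn.execute("SELECT * FROM foo")
    closed = False
except sqlite3.ProgrammingError:
    closed = True
```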
I had a procedure that was not working.
If I tried to run "BEGIN proc_name; END;" in SQL Developer or via script, I got the same error.
I've fixed the procedure, and now when I run that same command in SQL Developer it's fine, but the script still returns an error.
When I try:
...
sql = """EXEC proc_name"""
con = connection.cursor()
con.execute(sql)
...
I get DatabaseError: ORA-00900: invalid SQL statement, but that is probably because of this: Problem with execute procedure in PL/SQL Developer, and I'm not really worried about it.
What is really making me curious is when I try:
...
sql = """BEGIN proc_name;END;"""
con = connection.cursor()
con.execute(sql)
...
I get the same error that I had before fixing the procedure.
Do you have any idea what is going on?
PS: This is a python script using cx_Oracle and I'm using Oracle 10g.
Try using the callproc() or callfunc() method on the cursor, instead of execute(). They are not exactly Py DB API compatible, but should do the job for cx_Oracle...
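For a no-argument procedure, callproc("proc_name") has the server run an anonymous block along the lines of BEGIN proc_name; END;. A small illustrative helper (my own sketch, not part of cx_Oracle) makes the shape explicit without needing an Oracle server:

```python
# Illustrative helper (not a cx_Oracle API): build the anonymous PL/SQL
# block that cursor.callproc(name, args) effectively runs on the server,
# using positional bind placeholders :1, :2, ...
def plsql_block(proc_name, n_args=0):
    binds = ", ".join(":%d" % (i + 1) for i in range(n_args))
    call = "%s(%s)" % (proc_name, binds) if n_args else proc_name
    return "BEGIN %s; END;" % call

# With a real cx_Oracle cursor, these are then roughly equivalent:
#   cursor.callproc("proc_name")
#   cursor.execute(plsql_block("proc_name"))   # "BEGIN proc_name; END;"
```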
Here's my code:
import cx_Oracle
conn = cx_Oracle.connect(usr, pwd, url)
cursor = conn.cursor()
cursor.execute("UPDATE SO SET STATUS='PE' WHERE ID='100'")
conn.commit()
If I remove the conn.commit(), the table isn't updated. But for select statements, I don't need that conn.commit(). I'm curious why?
The DB-API spec requires that connecting to the database begins a new transaction, by default. You must commit to confirm any changes you make, or rollback to discard them.
Note that if the database supports an auto-commit feature, this must be initially off.
Pure SELECT statements never make any changes to the database, so there is nothing for them to commit.
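The rule is easy to see with sqlite3 as a stand-in (table and values mirror the question's UPDATE; no Oracle server needed): uncommitted changes can be rolled back, committed ones stick, and SELECTs need no commit at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE so (id text, status text)")
conn.execute("INSERT INTO so VALUES ('100', 'OP')")
conn.commit()

conn.execute("UPDATE so SET status = 'PE' WHERE id = '100'")
conn.rollback()  # discard: the UPDATE was never committed
assert conn.execute("SELECT status FROM so").fetchall() == [('OP',)]

conn.execute("UPDATE so SET status = 'PE' WHERE id = '100'")
conn.commit()    # now the change is saved
assert conn.execute("SELECT status FROM so").fetchall() == [('PE',)]
```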
Others have explained why a commit is not necessary on a SELECT statement. I just wanted to point out you could utilize the autocommit property of the Connection object to avoid having to manually execute commit yourself:
import cx_Oracle
with cx_Oracle.connect(usr, pwd, url) as conn:
    conn.autocommit = True
    cursor = conn.cursor()
    cursor.execute("UPDATE SO SET STATUS='PE' WHERE ID='100'")
    cursor.close()
This is especially useful when you have multiple INSERT, UPDATE, and DELETE statements within the same connection.
commit is used to tell the database to save all the changes in the current transaction.
A SELECT does not change any data, so there is nothing to save and thus nothing to commit.
See Wikipedia for more on transactions.