I have a Python script which needs to update a MySQL database. So far I have:
dbb = MySQLdb.connect(host="localhost",
                      user="user",
                      passwd="pass",
                      db="database")
try:
    curb = dbb.cursor()
    curb.execute("UPDATE RadioGroups SET CurrentState=1 WHERE RadioID=11")
    print "Row(s) were updated :" + str(curb.rowcount)
    curb.close()
except MySQLdb.Error, e:
    print "query failed<br/>"
    print e
The script prints Row(s) were updated : with the correct number of rows which have a RadioID of 11. If I change the RadioID to a number not present in the table, it says Row(s) were updated :0. However, the database doesn't actually update; the CurrentState field stays the same. If I copy and paste the SQL statement into phpMyAdmin it works fine.
Use
dbb.commit()
after
curb.execute("UPDATE RadioGroups SET CurrentState=1 WHERE RadioID=11")
to commit the changes that you 'loaded' into the MySQL server.
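The effect of the missing commit() can be reproduced with the stdlib sqlite3 module, which follows the same DB-API pattern as MySQLdb (sqlite3 here is only a stand-in; the table mirrors the question's):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Set up a table like the one in the question
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE RadioGroups (RadioID INTEGER, CurrentState INTEGER)")
conn.execute("INSERT INTO RadioGroups VALUES (11, 0)")
conn.commit()

# UPDATE without commit: rowcount reports that the row was matched...
cur = conn.cursor()
cur.execute("UPDATE RadioGroups SET CurrentState=1 WHERE RadioID=11")
print("rows updated:", cur.rowcount)   # 1
conn.close()                           # closing without commit discards the change

# ...but nothing was persisted
conn = sqlite3.connect(path)
state = conn.execute(
    "SELECT CurrentState FROM RadioGroups WHERE RadioID=11").fetchone()[0]
print("state after reconnect:", state)  # still 0

# With an explicit commit the update sticks
conn.execute("UPDATE RadioGroups SET CurrentState=1 WHERE RadioID=11")
conn.commit()
conn.close()

conn = sqlite3.connect(path)
final = conn.execute(
    "SELECT CurrentState FROM RadioGroups WHERE RadioID=11").fetchone()[0]
print("state after commit:", final)     # 1
conn.close()
```

This is exactly the asker's symptom: rowcount looks right, but the data never changes until commit() is called.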
As @Lazykiddy pointed out, you have to commit your changes after you send them to MySQL.
You could also enable the autocommit setting, right after the MySQL connection initialization:
dbb.autocommit(True)
Then it will automatically commit the changes you make during your code execution.
The two answers are correct. However, you can also do this:
dbb = MySQLdb.connect(host="localhost",
                      user="user",
                      passwd="pass",
                      db="database",
                      autocommit=True)
i.e. add autocommit=True to the connect() call.
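For comparison, autocommit behavior can also be demonstrated with stdlib sqlite3, where passing isolation_level=None plays the role of autocommit=True (the file and table names here are made up):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts sqlite3 into autocommit mode, analogous to
# autocommit=True / dbb.autocommit(True) with the MySQL drivers above.
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.close()  # note: no commit() anywhere

conn = sqlite3.connect(path)
n = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(n)  # 1 -- the insert persisted without an explicit commit
conn.close()
```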
Related
I'm trying to use psycopg2 in Python to drop an index in PostgreSQL:
connection = psycopg2.connect(host=hostname, user=username, password=password, dbname=database)
cur = connection.cursor()
statement = "DROP INDEX IF EXISTS idx_my_id"
cur.execute(statement)
connection.commit()
The same statement completes in pgAdmin4 in one second, but in Python the execution never finishes.
"pg_stat_activity" shows wait_event_type is Lock and wait_event is relation.
What went wrong?
This won't fit in a comment, so I'll put it here: maybe it's because you don't commit your other connections? An uncommitted transaction in another session keeps a lock on the relation, which matches what pg_stat_activity is showing.
Add this to your code, close all other connections, and try again:
connection.set_session(autocommit=True)
I have code like this:
import mysql.connector as mysql
from generate_records import generateRecords

devicesQuery = "CALL iot.sp_sensors_overview()"
try:
    db = mysql.connect(
        user="username",
        password="password",
        host="hostname",
        database="iot"
    )
    cursor = db.cursor(dictionary=True, buffered=True)
    cursor.execute(devicesQuery)
    for sensor in cursor:
        generateRecords(sensor, db)
    cursor.close()
except mysql.Error as error:  # with the alias above it is mysql.Error, not mysql.connector.Error
    print("Error:")
    print(error)
else:
    db.close()
The purpose of the generateRecords function is, obviously, to generate records and run INSERT queries against a different table.
It seems like I'm doing something wrong, because no matter what I try, I get different errors here, like mysql.connector.errors.OperationalError: MySQL Connection not available..
(upd) I also tried to change the code as suggested (see the example below), with no luck; I still receive the MySQL connection not available. error.
rows = cursor.fetchall()
cursor.close()
for sensor in rows:
    cursor2 = db.cursor()
    generateRecords(sensor, cursor2)
So, should I create a new connection within generateRecords function, or pass something different within it, or use some kind of different approach here?
Thank you!
Finally I found what was wrong. I was using a plain query to call the stored procedure. Using cursor.callproc("sp_sensors_overview") instead fixed my issue, and now I'm able to create the next cursor without errors.
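Independent of callproc(), the "drain the first cursor before issuing writes on the same connection" pattern from the update above can be sketched with stdlib sqlite3 (the table names and the generate_records stand-in are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE readings (sensor_id INTEGER, value REAL)")
conn.executemany("INSERT INTO sensors VALUES (?, ?)", [(1, "a"), (2, "b")])
conn.commit()

cur = conn.cursor()
cur.execute("SELECT id, name FROM sensors")
rows = cur.fetchall()  # consume the whole result set first
cur.close()            # then release the cursor...

def generate_records(sensor, conn):
    # hypothetical stand-in for generateRecords(): insert one reading
    conn.execute("INSERT INTO readings VALUES (?, ?)", (sensor[0], 0.0))

# ...before reusing the connection for writes
for sensor in rows:
    generate_records(sensor, conn)
conn.commit()

n = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(n)  # 2
conn.close()
```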
I have looked at similar questions but nothing has worked for me so far
So here it is. I want to update my table through a Python script, using the cx_Oracle module. I can execute a SELECT query, but whenever I try to execute an UPDATE query my program just hangs (freezes). I realize that I need to call con.commit() after cursor.execute() when updating a table, but my code never gets past the commit. I have added a code snippet below that I am using to debug.
Any suggestions??
Code
import cx_Oracle

def getConnection():
    ip = '127.0.0.1'
    port = 1521
    service_name = 'ORCLCDB.localdomain'
    username = 'username'
    password = 'password'
    dsn = cx_Oracle.makedsn(ip, port, service_name=service_name)  # (CONNECT_DATA=(SERVICE_NAME=ORCLCDB.localdomain)))
    return cx_Oracle.connect(username, password, dsn)  # connection

def debugging():
    con = getConnection()
    print(con)
    cur = con.cursor()
    print('Updating')
    cur.execute('UPDATE EMPLOYEE SET LATITUDE = 53.540943 WHERE EMPLOYEEID = 1')
    print('committing')
    con.commit()
    con.close()
    print('done')

debugging()
Here is the corresponding output:
<cx_Oracle.Connection to username#(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCLCDB.localdomain)))>
Updating
Solution
After a bit of poking around, I found the underlying cause! I had made changes to the table using Oracle SQL Developer but had not committed them; when the Python script tried to update the same rows, it blocked on the lock held by that uncommitted transaction. To avoid the freeze, I committed my changes in Oracle SQL Developer before running the Python script, and it worked fine!
Do you have any option to look in the database? I mean, in order to understand whether it is a problem with the Python program or not, we need to check v$session in the database to see whether something is blocked:
select sid, event, last_call_et, status from v$session where sid = xxx
where xxx is the SID of the session connected from Python.
By the way, I would commit explicitly after the cursor execute:
cur.execute('UPDATE EMPLOYEE SET LATITUDE = 53.540943 WHERE EMPLOYEEID = 1')
con.commit()
Hope it helps
Best
I periodically query a MySQL table and check data in the same row.
I use MySQLdb to do the job, querying the same table and row every 15 seconds.
The row's data actually changes every 3 seconds, but the cursor always returns the same value.
The strange thing is: after I close the MySQL connection and reconnect, using a new cursor to execute the same SELECT, the new value is returned.
The code that I suspect to be wrong begins after the comment:
config = SafeConfigParser()
config.read("../test/settings_test.conf")
settings = {}
settings["mysql_host"] = config.get("mysql","mysql_host")
settings["mysql_port"] = int(config.get("mysql","mysql_port"))
settings["mysql_user"] = config.get("mysql","mysql_user")
settings["mysql_password"] = config.get("mysql","mysql_password")
settings["mysql_charset"] = config.get("mysql","mysql_charset")
#suspected wrong code
conn = mysql_from_settings(settings)
cur = conn.cursor()
cur.execute('use database_a;')
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()[0]
print result
#during these 15 seconds, I manually update the row and commit from MySQL Workbench
time.sleep(15)
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()
print result
conn.close()
The output is:
94
94
If I change the code so that it closes the connection and re-connects, it returns the latest value instead of repeating the same value:
conn = mysql_from_settings(settings)
cur = conn.cursor()
cur.execute('use database_a;')
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()[0]
print result
conn.close()
time.sleep(15)
#during that period, I manually update the row and commit from MySQL Workbench
conn = mysql_from_settings(settings)
cur = conn.cursor()
cur.execute('use database_a;')
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()[0]
print result
conn.close()
The output is:
94
104
Why this difference in behavior?
Here is the definition of mysql_from_settings:
def mysql_from_settings(settings):
    try:
        host = settings.get('mysql_host')
        port = settings.get('mysql_port')
        user = settings.get('mysql_user')
        password = settings.get('mysql_password')
        charset = settings.get('mysql_charset')
        conn = MySQLdb.connect(host=host, user=user, passwd=password, port=port,
                               charset=charset)
        return conn
    except MySQLdb.Error, e:
        print "Mysql Error %d: %s" % (e.args[0], e.args[1])
This is almost certainly the result of transaction isolation. I'm going to assume, since you haven't stated otherwise, that you're using the default storage engine (InnoDB) and isolation level (REPEATABLE READ):
REPEATABLE READ
The default isolation level for InnoDB. It prevents any rows that are queried from being changed by other
transactions, thus blocking non-repeatable reads but not
phantom reads. It uses a moderately strict locking strategy so that all queries within a transaction see data from the
same snapshot, that is, the data as it was at the time the transaction
started.
For more details, see Consistent Nonlocking Reads in the MySQL docs.
In plain English, this means that when you SELECT from a table within a transaction, the values you read from the table will not change for the duration of the transaction; you'll continue to see the state of the table at the time the transaction opened, plus any changes made in the same transaction.
In your case, the changes every 3 seconds are being made in some other session and transaction. In order to "see" these changes, you need to leave the transaction that began when you issued the first SELECT and start a new transaction, which will then "see" a new snapshot of the table.
You can manage transactions explicitly with START TRANSACTION, COMMIT and ROLLBACK in SQL or by calling Connection.commit() and Connection.rollback(). An even better approach here might be to take advantage of context managers; for example:
conn = mysql_from_settings(settings)
with conn as cur:
    cur.execute('use database_a;')
    cur.execute('select pages from database_a_monitor where id=1;')
    result = cur.fetchone()[0]
    print result
    #during 15 seconds, I manually update the row and commit from MySQL Workbench
    time.sleep(15)
    cur.execute('select pages from database_a_monitor where id=1;')
    result = cur.fetchone()
    print result
conn.close()
The with statement, when used with MySQLdb's Connection object, gives you back a cursor. When you leave the with block, Connection.__exit__ is called:
def __exit__(self, exc, value, tb):
    if exc:
        self.rollback()
    else:
        self.commit()
Since all you've done is read data, there's nothing to roll back or commit; when writing data, remember that leaving the block via an exception will cause your changes to be rolled back, while leaving normally will cause your changes to be committed.
Note that this didn't close the cursor, it only managed the transaction context. I go into more detail on this subject in my answer to When to close cursors using MySQLdb but the short story is, you don't generally have to worry about closing cursors when using MySQLdb.
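The commit-on-success / rollback-on-exception behavior of __exit__ is easy to verify with stdlib sqlite3 (whose connection context manager differs slightly from MySQLdb's in that you keep using the connection rather than getting a cursor back, but the transaction handling is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# Leaving the block normally commits...
with conn:
    conn.execute("INSERT INTO t VALUES (1)")

# ...while leaving via an exception rolls back
try:
    with conn:
        conn.execute("INSERT INTO t VALUES (2)")
        raise RuntimeError("boom")
except RuntimeError:
    pass

rows = [r[0] for r in conn.execute("SELECT x FROM t")]
print(rows)  # [1] -- the second insert was rolled back
conn.close()
```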
You can also make your life a little easier by passing the database as a parameter to MySQLdb.connect instead of issuing a USE statement.
This answer to a very similar question offers two other approaches—you could change the isolation level to READ COMMITTED, or turn on autocommit.
The problem you face is not related to Python but to a MySQL setting; changing the isolation level as below will fix it.
Log in to mysql as root:
mysql> set global transaction isolation level read committed;
Query OK, 0 rows affected (0.00 sec)
Note that SET GLOBAL only lasts until the server restarts; to make the change permanent, set transaction-isolation = READ-COMMITTED in your MySQL configuration file. You can verify the setting with:
mysql> show session variables like '%isolation%';
+-----------------------+----------------+
| Variable_name | Value |
+-----------------------+----------------+
| transaction_isolation | READ-COMMITTED |
+-----------------------+----------------+
1 row in set (0.01 sec)
I have a connection with:
url = urlparse.urlparse(os.environ["DATABASE_URL"])
self.conn = psycopg2.connect(
    database=url.path[1:],
    user=url.username,
    password=url.password,
    host=url.hostname,
    port=url.port
)
When I run the insert, self.cursor.execute("INSERT INTO Divers (email, hashpass) VALUES (%s, %s);", [email, password]) returns successfully.
However, when I check my data the registration isn't there. When I run the same query through the CLI it inserts fine, but the auto-incremented id has jumped, as if values had been inserted. Please help.
The documentation shows a commit() method. So you probably want to call it after your INSERT:
self.conn.commit()