Suppose I have 50 records that I want to insert into a database, and I call commit() once, after the 50th record has been executed (a bulk insert). While inserting the 45th record I get an exception, which I catch. But when I look at the data in the database, it only shows the 46th to 50th records; all the data before the 45th record was not inserted because of the error at the 45th record. How can I solve this issue?
def insert_data(self, query):
    cursor = database.cursor()
    try:
        cursor.execute(query)
    except Exception:
        print("Exception")
    else:
        cursor.close()

def process(self):
    database = DB.conn()  # get connection
    for x in range(50):
        self.insert_data("insert.......")
    database.commit()  # commit after the for loop ends
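One way to make the batch all-or-nothing is to run every insert inside a single try block and roll back on failure, so that an error at the 45th record leaves nothing half-applied. A minimal sketch, assuming DB.conn() returns a standard DB-API connection as in the question:

def process(self):
    database = DB.conn()  # get connection, as in the question
    cursor = database.cursor()
    try:
        for x in range(50):
            cursor.execute("insert.......")  # placeholder query from the question
        database.commit()    # all 50 rows become visible together
    except Exception:
        database.rollback()  # undo every insert made in this transaction
        raise
    finally:
        cursor.close()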
I'm trying to update the database to set records to false if they are in the past.
I can run the query and it works perfectly using the query tool in pgAdmin.
I get no errors when I run the script, and I get a message that the query has run successfully, but the database won't update.
This is my code:
def connect_db(self):
    event_over = datetime.now()
    try:
        # Creating a cursor object using the cursor() method
        cursor = self.connection().cursor()
        postgres_insert_query = """UPDATE dashboard_calendarevent SET calendar_event_live=false WHERE
        calendar_dates_end <= (%s); """
        cursor.execute(postgres_insert_query, (event_over,))
        self.connection().commit()
        count = cursor.rowcount
        print(count, "Records affected in Calendar table")
    except (Exception, psycopg2.Error) as error:
        print("Failed to update record into Calendar table because ", error)
    finally:
        # closing database connection.
        if self.connection():
            cursor.close()
            self.connection().close()
            print("PostgreSQL connection is closed")
This is the query that is run:
UPDATE dashboard_calendarevent SET calendar_event_live=false WHERE calendar_dates_end <= '2022-08-22 16:05:05.426078';
The message returned:
14 Records affected in Calendar table
PostgreSQL connection is closed
There are 14 records to be changed but they are not updating.
This is a sample datetime in the database: 2022-08-19 15:30:00+00
And this is the time created when the function is called: 2022-08-22 16:05:05.426078
There is no error logged in the files.
I am connected to the correct database, and it is clearly reading it, since I know there are 14 records to be updated.
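If self.connection() opens a new connection on each call (an assumption, but its name suggests it), the UPDATE runs on one session and the commit() on another, so the change is never committed. A minimal sketch that calls it once and reuses the same connection:

def connect_db(self):
    event_over = datetime.now()
    conn = self.connection()  # call once; reuse for cursor, commit and close
    cursor = conn.cursor()
    try:
        cursor.execute(
            """UPDATE dashboard_calendarevent SET calendar_event_live=false
               WHERE calendar_dates_end <= (%s);""",
            (event_over,),
        )
        conn.commit()  # commits on the same connection that ran the UPDATE
        print(cursor.rowcount, "Records affected in Calendar table")
    except (Exception, psycopg2.Error) as error:
        print("Failed to update record into Calendar table because ", error)
    finally:
        cursor.close()
        conn.close()
        print("PostgreSQL connection is closed")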
I want to write a Python script that repeatedly checks the MySQL database.
The problem is that if the script finds no data on its first check, it keeps saying there is no data even after I add the data to the database.
But if the data is already in the database, it fetches it immediately and continues.
What can I do so that the script keeps checking the database, and once it finds the data it stops checking and continues?
I have tried running it with no data in the database and then adding the data afterwards.
import time

import mysql.connector

mydb = mysql.connector.connect(
    host='localhost',
    user='root',
    passwd='',
    database='local_data'
)
mycursor = mydb.cursor()

node_id = 123
node_id = str(node_id)
print('getting code...')
a = 0
while True:
    try:
        time.sleep(1)
        a = a + 1
        print(a)
        sql = "SELECT code_name FROM node_code where node_id = '" + node_id + "'"
        mycursor.execute(sql)
        myresult = mycursor.fetchone()
        for x in myresult:
            filename = x
            print(filename)
        break
    except Exception as error:
        print("Nothing in database\n")
        print(error)
I am expecting an output where it keeps checking the database until it finds the data, after which it continues.
These are the results I get; by the time the second iteration ran, the data had been inserted into the database:
getting code...
1
Nothing in database
'NoneType' object is not iterable
2
Nothing in database
'NoneType' object is not iterable
3
Nothing in database
If the data is already in the database before I run the script, I get this:
getting code...
1
server.py
This is because the transaction your first SELECT opened has not ended, so every later SELECT keeps reading the same snapshot; the transaction does not end until you commit.
Try ending the transaction by calling mydb.commit() on the connection at the end of each loop iteration.
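A sketch of the polling loop with that change, also switching to a parameterized query (which avoids SQL injection) and testing fetchone() for None instead of relying on an exception:

while True:
    time.sleep(1)
    mycursor.execute(
        "SELECT code_name FROM node_code WHERE node_id = %s", (node_id,)
    )
    row = mycursor.fetchone()
    if row is not None:
        filename = row[0]
        print(filename)
        break
    mydb.commit()  # end the read transaction so the next SELECT sees new rows
    print("Nothing in database\n")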
I'm using Flask-MySQL.
I'm getting the database context (the mysql variable) just fine, and I can query the database and get results. It's only the insert that is not working: it doesn't complain (no exceptions are thrown), and the insert method returns True.
The record should be inserted when it commits, but for some reason, as I watch the MySQL database with MySQL Workbench, nothing is inserted into the table (and no exceptions are thrown from the insert method):
I'm passing in this to insertCmd:
"INSERT into user(username, password) VALUES ('test1','somepassword');"
I've checked the length of the column in the database, and copied the command into MySQL Workbench (where it successfully inserts the row into the table).
I'm at a loss. The examples I've seen all seem to follow this format, and I have a good database context. You can see other things I've tried in the comments.
def insert(mysql, insertCmd):
    try:
        #connection = mysql.get_db()
        cursor = mysql.connect().cursor()
        cursor.execute(insertCmd)
        mysql.connect().commit()
        #mysql.connect().commit
        #connection.commit()
        return True
    except Exception as e:
        print("Problem inserting into db: " + str(e))
        return False
You need to keep a handle to the connection; you keep overriding it in your loop.
Here is a simplified example:
con = mysql.connect()
cursor = con.cursor()

def insert(mysql, insertCmd):
    try:
        cursor.execute(insertCmd)
        con.commit()
        return True
    except Exception as e:
        print("Problem inserting into db: " + str(e))
        return False
If mysql is your connection, then you can just commit on that, directly:
def insert(mysql, insertCmd):
    try:
        cursor = mysql.cursor()
        cursor.execute(insertCmd)
        mysql.commit()
        return True
    except Exception as e:
        print("Problem inserting into db: " + str(e))
        return False
Apparently, you MUST separate the connect and cursor calls, or it won't work.
To get the cursor, this will work: cursor = mysql.connect().cursor()
However, as Burhan Khalid so adeptly pointed out, any attempt after that to create a new connection object in order to commit will wipe out the work you did using the cursor.
So you have to do the following (no shortcuts):
connection = mysql.connect()
cursor = connection.cursor()
cursor.execute(insertCmd)
connection.commit()
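Folded back into the insert() helper from the question, that pattern looks like this (a sketch):

def insert(mysql, insertCmd):
    try:
        connection = mysql.connect()  # one connection object...
        cursor = connection.cursor()  # ...supplies the cursor
        cursor.execute(insertCmd)
        connection.commit()           # ...and receives the commit
        return True
    except Exception as e:
        print("Problem inserting into db: " + str(e))
        return False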
I periodically query a MySQL table and check data in the same row.
I use MySQLdb to do the job, querying the same table and row every 15 seconds.
Actually, the row data changes every 3 seconds, but the cursor always returns the same value.
The strange thing is that after I close the MySQL connection and reconnect, a new cursor executing the same SELECT returns the new value.
The code that I suspect to be wrong begins after the comment:
import time

import MySQLdb
from ConfigParser import SafeConfigParser

config = SafeConfigParser()
config.read("../test/settings_test.conf")
settings = {}
settings["mysql_host"] = config.get("mysql","mysql_host")
settings["mysql_port"] = int(config.get("mysql","mysql_port"))
settings["mysql_user"] = config.get("mysql","mysql_user")
settings["mysql_password"] = config.get("mysql","mysql_password")
settings["mysql_charset"] = config.get("mysql","mysql_charset")

#suspected wrong code
conn = mysql_from_settings(settings)
cur = conn.cursor()
cur.execute('use database_a;')
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()[0]
print result

#during the 15 seconds, I manually update the row and commit from MySQL Workbench
time.sleep(15)
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()
print result
conn.close()
The output is:
94
94
If I change the code so that it closes the connection and re-connects, it returns the latest value instead of repeating the same value:
conn = mysql_from_settings(settings)
cur = conn.cursor()
cur.execute('use database_a;')
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()[0]
print result
conn.close()
time.sleep(15)
#during that period, I manually update the row and commit from mysql workbench
conn = mysql_from_settings(settings)
cur = conn.cursor()
cur.execute('use database_a;')
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()[0]
print result
conn.close()
The output is:
94
104
Why this difference in behavior?
Here is the definition of mysql_from_settings:
def mysql_from_settings(settings):
    try:
        host = settings.get('mysql_host')
        port = settings.get('mysql_port')
        user = settings.get('mysql_user')
        password = settings.get('mysql_password')
        charset = settings.get('mysql_charset')
        conn = MySQLdb.connect(host=host, user=user, passwd=password, port=port,
                               charset=charset)
        return conn
    except MySQLdb.Error, e:
        print "Mysql Error %d: %s" % (e.args[0], e.args[1])
This is almost certainly the result of transaction isolation. I'm going to assume, since you haven't stated otherwise, that you're using the default storage engine (InnoDB) and isolation level (REPEATABLE READ):
REPEATABLE READ
The default isolation level for InnoDB. It prevents any rows that are queried from being changed by other transactions, thus blocking non-repeatable reads but not phantom reads. It uses a moderately strict locking strategy so that all queries within a transaction see data from the same snapshot, that is, the data as it was at the time the transaction started.
For more details, see Consistent Nonlocking Reads in the MySQL docs.
In plain English, this means that when you SELECT from a table within a transaction, the values you read from the table will not change for the duration of the transaction; you'll continue to see the state of the table at the time the transaction opened, plus any changes made in the same transaction.
In your case, the changes every 3 seconds are being made in some other session and transaction. In order to "see" these changes, you need to leave the transaction that began when you issued the first SELECT and start a new transaction, which will then "see" a new snapshot of the table.
You can manage transactions explicitly with START TRANSACTION, COMMIT and ROLLBACK in SQL or by calling Connection.commit() and Connection.rollback(). An even better approach here might be to take advantage of context managers; for example:
conn = mysql_from_settings(settings)
with conn as cur:
    cur.execute('use database_a;')
    cur.execute('select pages from database_a_monitor where id=1;')
    result = cur.fetchone()[0]
    print result

#during the 15 seconds, I manually update the row and commit from MySQL Workbench;
#leaving the with block above ended the first transaction, so the next
#execute() starts a fresh one and sees the new value
time.sleep(15)
cur.execute('select pages from database_a_monitor where id=1;')
result = cur.fetchone()
print result
conn.close()
The with statement, when used with MySQLdb's Connection object, gives you back a cursor. When you leave the with block, Connection.__exit__ is called:
def __exit__(self, exc, value, tb):
    if exc:
        self.rollback()
    else:
        self.commit()
Since all you've done is read data, there's nothing to roll back or commit; when writing data, remember that leaving the block via an exception will cause your changes to be rolled back, while leaving normally will cause your changes to be committed.
Note that this didn't close the cursor, it only managed the transaction context. I go into more detail on this subject in my answer to When to close cursors using MySQLdb but the short story is, you don't generally have to worry about closing cursors when using MySQLdb.
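For a write, the same pattern commits for you; a sketch, reusing the monitor table from the question:

conn = mysql_from_settings(settings)
with conn as cur:
    cur.execute('use database_a;')
    cur.execute('update database_a_monitor set pages = pages + 1 where id=1;')
# leaving the block normally committed the UPDATE; an exception inside
# the block would have rolled it back instead
conn.close()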
You can also make your life a little easier by passing the database as a parameter to MySQLdb.connect instead of issuing a USE statement.
This answer to a very similar question offers two other approaches—you could change the isolation level to READ COMMITTED, or turn on autocommit.
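For reference, both of those can be set per session from MySQLdb, without touching the server configuration (a sketch):

conn = mysql_from_settings(settings)
cur = conn.cursor()
# Option 1: each SELECT reads the latest committed snapshot.
cur.execute('set session transaction isolation level read committed;')
# Option 2: commit implicitly after every statement.
conn.autocommit(True)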
The problem you face is not related to Python but to a MySQL setting; changing the isolation level as below should fix it.
Log in to mysql as root:
mysql> set global transaction isolation level read committed;
Query OK, 0 rows affected (0.00 sec)
Note that SET GLOBAL lasts only until the server restarts; to make the change permanent, set transaction-isolation = READ-COMMITTED in my.cnf (or use SET PERSIST on MySQL 8.0+). New sessions then report:
mysql> show session variables like '%isolation%';
+-----------------------+----------------+
| Variable_name | Value |
+-----------------------+----------------+
| transaction_isolation | READ-COMMITTED |
+-----------------------+----------------+
1 row in set (0.01 sec)
I've seen some answers around here that open a new MySQL cursor before each query, then close it.
Is that slow? Shouldn't I be recycling a cursor by passing it in as a parameter?
I have a program that runs in an infinite loop, so eventually the connection will time out after the default 8 hours.
Edit:
As requested, this is the relevant code that handles the SQL query:
def fetch_data(query):
    try:
        cursor = db.cursor()
        cursor.execute(query)
        return cursor.fetchall()
    except OperationalError as e:
        db = fetchDb()
        db.autocommit(True)
        print 'reconnecting and trying again...'
        return fetch_data(query)
Of course, re-connecting thousands of times will take much more time. You'd better make the connection and cursor attributes of your class, like this:
class YourClass(object):
    def __init__(self):
        self.db = fetchDb()            # connection helper from the question
        self.db.autocommit(True)
        self.cursor = self.db.cursor()

    def fetch_data(self, query):
        try:
            self.cursor.execute(query)
            return self.cursor.fetchall()
        except OperationalError as e:
            # the connection timed out; reconnect and rebuild the cursor
            self.db = fetchDb()
            self.db.autocommit(True)
            self.cursor = self.db.cursor()
            print 'reconnecting and trying again...'
            return self.fetch_data(query)
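Usage would then look something like this (the query itself is just an illustrative example):

client = YourClass()
rows = client.fetch_data('select pages from database_a_monitor where id=1')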