I am using PyMySQL to save data to a MySQL database. I have a continuous flow of tick data from a financial market, and I use cur.executemany to insert 10 rows at a time. It works fine for the first 20-30 rows, then it stops writing and doesn't throw any exception.
self.queue.append((timestamp, side, size, price))
if len(self.queue) >= 10:
    try:
        self.logger.info("Writing 10 lines to sql..")
        conn = pymysql.connect(host='localhost', port=3306, user='root',
                               passwd='*****', db='sys')
        cur = conn.cursor()
        sqlQ = """INSERT INTO den_trades (date2, side, size, price) VALUES (%s, %s, %s, %s)"""
        cur.executemany(sqlQ, self.queue)
        conn.commit()
        conn.close()
        self.queue = []
    except Exception as e:
        self.logger.warning("Exception while cur.executemany... sys.exc_info()[0]: {}".format(sys.exc_info()[0]))
        self.logger.warning("e: {}".format(e))
        template = "An exception of type {0} occurred. Arguments:\n{1!r}"
        message = template.format(type(e).__name__, e.args)
        self.logger.warning(message)
        conn.rollback()
I am trying to catch exceptions, but not a single warning is ever logged.
The strange thing is that when the problem appears, "Writing 10 lines to sql.." is still logged for every tick, and self.queue keeps growing, so self.queue=[] never happens. How can that be? The first statement of the try block still runs, but the last ones stop executing; if so, there should be an exception... right?
One more thing: I have another script running fine on the same machine that saves 1000 lines at a time through PyMySQL.
Could that be the problem?
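For reference, here is a minimal, self-contained sketch of the batching pattern described above, assuming the same den_trades table and columns; the flush_queue helper and its return convention are my own naming, not part of the original code. It uses logger.exception so that a failure always leaves a full traceback in the log:

import logging
import pymysql

logger = logging.getLogger(__name__)

def flush_queue(queue):
    # Insert all buffered ticks in one executemany call.
    # Returns [] on success, or the unchanged queue on failure so the
    # rows can be retried on the next flush.
    if not queue:
        return queue
    conn = pymysql.connect(host='localhost', port=3306, user='root',
                           passwd='*****', db='sys')
    try:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO den_trades (date2, side, size, price) "
                "VALUES (%s, %s, %s, %s)",
                queue)
        conn.commit()
        logger.info("Wrote %d rows", len(queue))
        return []
    except Exception:
        logger.exception("executemany failed")  # logs the full traceback
        conn.rollback()
        return queue
    finally:
        conn.close()

In the handler above, self.queue = flush_queue(self.queue) would then replace the body of the if block.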
I'm desperately trying to get this code to work from about 70 threads, where it won't all run at exactly the same time, but pretty close. All I really want is a way of saying: try to insert this, and if you can't, back off for a while and try again, just do it without breaking the database. I'm using no options when creating the database, except for the filename. The only problem is I'm getting lots of disk I/O errors and "database disk image is malformed". I'm trying to run this in a transaction, so if anything goes wrong it should roll back. I've tried the isolation_level=None option on the connection, which didn't really help. I'm using the Python sqlite3 module.
Here's the code:
from random import uniform
from time import sleep

import create_tables  # module providing create_connection()

update_simulations_end_time_sql = """update simulations set end_time=?, completion_status=? where id=?;"""

def __set_time(sql_command, data):
    retries = 0
    while retries < 5:
        try:
            with create_tables.create_connection() as conn:
                cur = conn.cursor()
                cur.execute("begin")
                cur.execute(sql_command, data)
                return
        except Exception as e:
            print(f"__set_time has failed with {sql_command}")
            print(e)
            sleep_time = uniform(0.1, 4)
            print(f"Sleeping for {sleep_time}")
            sleep(sleep_time)
            retries += 1
    raise Exception(f"__set_time failed after {retries} retries")
Here are the options SQLite was compiled with:
sqlite> SELECT * FROM pragma_compile_options;
COMPILER=gcc-9.4.0
ENABLE_COLUMN_METADATA
ENABLE_DBSTAT_VTAB
ENABLE_FTS3
ENABLE_FTS3_PARENTHESIS
ENABLE_FTS3_TOKENIZER
ENABLE_FTS4
ENABLE_FTS5
ENABLE_JSON1
ENABLE_LOAD_EXTENSION
ENABLE_PREUPDATE_HOOK
ENABLE_RTREE
ENABLE_SESSION
ENABLE_STMTVTAB
ENABLE_UNKNOWN_SQL_FUNCTION
ENABLE_UNLOCK_NOTIFY
ENABLE_UPDATE_DELETE_LIMIT
HAVE_ISNAN
LIKE_DOESNT_MATCH_BLOBS
MAX_SCHEMA_RETRY=25
MAX_VARIABLE_NUMBER=250000
OMIT_LOOKASIDE
SECURE_DELETE
SOUNDEX
THREADSAFE=1
USE_URI
If anyone has any ideas on how to solve this, I would be amazingly grateful.
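Not an answer to the corruption itself, but one configuration that is often suggested for many-writer SQLite workloads is to open every connection with a generous busy timeout and WAL journaling. A sketch, assuming create_tables.create_connection wraps sqlite3.connect on the shared database file (the file name here is made up):

import sqlite3

def create_connection(path="simulations.db"):  # hypothetical file name
    # timeout makes writers wait up to 30 s for the lock instead of failing
    # immediately; WAL lets readers and a single writer work concurrently.
    conn = sqlite3.connect(path, timeout=30)
    conn.execute("PRAGMA journal_mode=WAL;")
    conn.execute("PRAGMA synchronous=NORMAL;")
    return conn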
In Python 3.11 you will be able to use the sqlite3 module with options like (link):
import sqlite3

if sqlite3.threadsafety == 3:
    check_same_thread = False
else:
    check_same_thread = True

conn = sqlite3.connect(":memory:", check_same_thread=check_same_thread)
I have the code below to read the chat.db iMessage database on a Mac. I am pretty sure this is correct, but when I print the results I get an empty list.
import sqlite3

try:
    messages = sqlite3.connect("<path>\\chat.db")
except Exception as e:
    print(e)

cur = messages.cursor()
results = cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(results.fetchall())
returns []
I read that /tmp should not be full. I removed about 10% of it (down to 90% full), but it is still not working.
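One thing worth ruling out (an assumption, not a confirmed diagnosis): sqlite3.connect silently creates a new, empty database if the path does not point at an existing file, which would also produce an empty sqlite_master. Opening the database read-only through a URI makes a wrong path fail loudly instead; the path below is the usual iMessage location on macOS:

import os
import sqlite3

db_path = os.path.expanduser("~/Library/Messages/chat.db")
print(os.path.exists(db_path), db_path)

# mode=ro raises sqlite3.OperationalError ("unable to open database file")
# instead of silently creating an empty database at a wrong path.
conn = sqlite3.connect("file:{}?mode=ro".format(db_path), uri=True)
tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)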
I have an issue which is related to the connection pool, but I don't understand it.
Below is my code; this is the behavior:
Starting with an empty table, I do a SELECT query for a non-existing value (no results).
Then I do an INSERT query; it successfully inserts the value.
HOWEVER, after inserting the new value, further SELECT statements only work 2 out of 3 times: they fail exactly every 3rd try (with pool size = 3; with pool size = 10 they work exactly 9 out of 10 times).
Finally, if I restart the script with the initial SELECT commented out (but with the value already in the table before the script runs), I get the inserted value and it works every time.
Why does this code seem to 'get stuck returning an empty result for the connection that had no result' until the script is restarted?
(Note that it keeps opening and closing connections from the connection pool because this is taken from a web application where each connect/close is a different web request. Here I cut the whole 'web' aspect out of it.)
#!/usr/bin/python
import mysql.connector

dbvars = {'host': 'h', 'user': 'u', 'passwd': 'p', 'db': 'd'}

# db has 1 empty table 'test' with one varchar field 'id'
con = mysql.connector.connect(pool_name="mypool", pool_size=3, pool_reset_session=False, **dbvars)
cur = con.cursor()
cur.execute("SELECT id FROM test WHERE id = '123';")
result = cur.fetchall()
cur.close()
con.close()

con = mysql.connector.connect(pool_name="mypool")
cur = con.cursor()
cur.execute("INSERT INTO test VALUES ('123');")
con.commit()
cur.close()
con.close()

for i in range(12):
    con = mysql.connector.connect(pool_name="mypool")
    cur = con.cursor()
    cur.execute("SELECT id FROM test WHERE id = '123';")
    result = cur.fetchall()
    cur.close()
    con.close()
    print result
The output of the above is:
[(u'123',)]
[]
[(u'123',)]
[(u'123',)]
[]
[(u'123',)]
[(u'123',)]
[]
[(u'123',)]
[(u'123',)]
[]
[(u'123',)]
Again, if I don't do the initial SELECT before the INSERT, then all of them return 123 (if it's already in the db). It seems the initial SELECT 'corrupts' one of the connections in the connection pool. Further, if I do 2 SELECTs that return empty results before the INSERT, then 2 of the 3 connections are 'corrupted'. Finally, if I do 3 SELECTs before the INSERT, it still works 1 out of 3 times, because the INSERT seems to 'fix' a connection (presumably by producing 'results').
Ubuntu 18.04
Python 2.7.17 (released Oct 2019)
mysql-connector-python 8.0.21 (June 2020)
MySql server 5.6.10
It seems to be a rather severe bug in the Python driver for MySQL. Perhaps it is some configuration incompatibility, but it is clearly a bug, since no error is shown yet wrong query results are returned.
I filed a bug report with the MySQL team and its status is currently 'verified'.
https://bugs.mysql.com/bug.php?id=102053
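Until the driver bug is fixed, a commonly suggested mitigation (not verified as a fix for this exact bug) is to use buffered cursors, which read the entire result set as soon as execute() returns so nothing is left pending on the connection when it goes back into the pool, and to leave pool_reset_session at its default of True:

import mysql.connector

dbvars = {'host': 'h', 'user': 'u', 'passwd': 'p', 'db': 'd'}

# pool_reset_session defaults to True; a buffered cursor consumes the whole
# result set before the connection is returned to the pool.
con = mysql.connector.connect(pool_name="mypool", pool_size=3, **dbvars)
cur = con.cursor(buffered=True)
cur.execute("SELECT id FROM test WHERE id = '123'")
print(cur.fetchall())
cur.close()
con.close()  # returns the connection to the pool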
I am trying to fetch data from AWS MariaDB:
cursor = self._cnx.cursor()
stmt = ('SELECT * FROM flights')
cursor.execute(stmt)
print(cursor.rowcount)
# prints 2

for z in cursor:
    print(z)
# Does not iterate

row = cursor.fetchone()
# row is None

rows = cursor.fetchall()
# throws 'No result set to fetch from.'
I can verify that the table contains data using MySQL Workbench. Am I missing a step?
EDIT (re the two answers):
res = cursor.execute(stmt)
# res is None
EDIT:
I created a new Python project with a single file:
import mysql.connector

try:
    cnx = mysql.connector.connect(
        host='foobar.rds.amazonaws.com',
        user='devuser',
        password='devpasswd',
        database='devdb'
    )
    cursor = cnx.cursor()
    # cursor = cnx.cursor(buffered=True)
    cursor.execute('SELECT * FROM flights')
    print(cursor.rowcount)
    rows = cursor.fetchall()
except Exception as exc:
    print(exc)
If I run this code with a simple cursor, fetchall raises "No result set to fetch from". If I run it with a buffered cursor, I can see that the _rows property of the cursor contains my data, but fetchall() returns an empty list.
Your issue is that cursor.execute(stmt) returns an object with results and you're not storing that.
results = cursor.execute(stmt)
print(results.fetchone()) # Prints out and pops first row
For future Googlers with the same problem, I found a workaround which may help in some cases:
I didn't find the source of the problem, but I found a solution which worked for me.
In my case .fetchone() also returned None no matter what I did on my local database (on my own computer). I tried the exact same code with the database on our company's server and somehow it worked. So I copied the complete server database onto my local database (using database dumps), just to get the server settings, and afterwards I could also get data from my local SQL server with the code that didn't work before.
I am a SQL newbie, but maybe some odd setting on my local SQL server prevented me from fetching data. Maybe a more experienced SQL user knows this setting and can explain.
Suppose I have a modifying statement:
cursor = conn.cursor()
# some code
affected_rows1 = cursor.execute(update_statement1, params1)
# some code
conn.commit()
cursor.close()
Should I wrap the block of code in a try ... except and explicitly roll back the transaction when an exception is raised, and which MySQLdb exceptions should I catch to roll back? I used to catch any StandardError in this case, but now I am not sure this block of code even needs an explicit rollback at all.
The following example is slightly more difficult, and I understand that it does require an explicit rollback if the first update statement succeeded. Still, which exceptions should I catch in this case:
cursor = conn.cursor()
# some code
affected_rows1 = cursor.execute(update_statement1, params1)
# some code
affected_rows2 = cursor.execute(update_statement2, params2)
#some code
conn.commit()
cursor.close()
This link shows the various types of Errors that you can catch. MySQLdb.Error is the standard base class from which all other MySQL Errors are derived.
I usually use MySQLdb.Error because it lets you focus on errors relating to MySQLdb itself. By contrast StandardError will catch almost all the exceptions (not something you want if you want better debugging capability). Plus the use of MySQLdb.Error allows you to display the exact error message (MySQL error number and all) so that you can debug it faster.
Coming to the first part of the question: in the case of database statements it is (usually) necessary to roll back transactions (if they are supported) in case of error.
The methodology that I follow is to wrap each execute statement in a try/except clause (catching MySQLdb.Error), roll back if there is an error, and then print the error message and exit.
However, there is a catch. In MySQLdb the changes that you make to the DB are not actually written to the database until you explicitly call commit. So, logically, rollback is not necessary.
As an example,
conn = MySQLdb.connection(db=, host=, passwd=, user=)
cur = conn.cursor()
#Say you have a table X with one entry id = 1 and total = 50
cur.execute("update X set total = 70 where id = 1")
#Actual DB has not yet changed
cur.execute("update X set total = 80 where id = 1")
#Actual DB has still not changed
If you exit the program without committing, the value in the DB will still be 50 because you never called commit().
This is how you would ideally do it:
conn = MySQLdb.connection(db=, host=, passwd=, user=)
cur = conn.cursor()
# Say you have a table X with one entry id = 1 and total = 50
try:
    cur.execute("update X set total = 70 where id = 1")
except MySQLdb.Error as e:
    print e[0], e[1]
    conn.rollback()
    cur.close()
    conn.close()
    # print lengthy error description!!
    sys.exit(2)

# Note: value in table is still 50
# If you do conn.commit() here, value becomes 70 in table too!!

try:
    cur.execute("update X set total = 80 where id = 1")
except MySQLdb.Error as e:
    print e[0], e[1]
    conn.rollback()
    cur.close()
    conn.close()
    # print lengthy error description!!
    sys.exit(2)

# Value in DB will be
# a) 50 if you didn't commit anywhere
# b) 70 if you committed after the first execute statement

conn.commit()
# Now value in DB is 80!!
cur.close()
conn.close()
IMHO, you should roll back transactions if you continue to use the same connection. Otherwise everything before the error will get committed when you finish the transaction.
For the exception to catch, I always use MySQLdb.Error but I'm not sure that's correct.
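A minimal sketch of that pattern (two updates in one transaction, rolled back together on MySQLdb.Error, with the connection kept usable afterwards); the table, statements, and credentials are placeholders:

import MySQLdb

conn = MySQLdb.connect(db="d", host="h", user="u", passwd="p")  # placeholder credentials
cur = conn.cursor()
try:
    cur.execute("update X set total = 70 where id = 1")
    cur.execute("update X set total = 80 where id = 1")
except MySQLdb.Error as e:
    conn.rollback()   # undo both statements, keep the connection usable
    print("MySQL error: %s" % (e,))
else:
    conn.commit()     # both statements are kept, or neither
cur.close()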
It's advisable to wrap execute() in a subroutine. This is how I do it.
def executeSQL(self, stmt):
    cursor = self.dbHand.cursor()
    if not stmt.endswith(";"):
        stmt += ';'
    try:
        cursor.execute(stmt)
    except MySQLdb.Error as e:
        self.logger.error("Caught MYSQL exception :%s: while executing stmt :%s:.\n" % (e, stmt))
        return False
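A small usage sketch for that wrapper (the statement is illustrative); note that as written it only returns False on failure, so success shows up as None:

ok = self.executeSQL("update X set total = 80 where id = 1")
if ok is False:
    self.logger.error("Update failed, not committing.")
else:
    self.dbHand.commit()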