I don't know what I'm doing wrong when connecting to a MySQL database...
class DaoImportantInfo:

    def __init__(self, db=None):
        self.db = db
        self.config = ConfigConnection()
        self.tabelas = TabelasConfig()

    def getAll(self):
        self.db.connect()
        cursor = self.db.cursor()
        strComando = f"""SELECT tetosPrevId, dataValidade, valor FROM {self.tabelas.tblTetosPrev} ORDER BY dataValidade DESC"""
        try:
            cursor.execute(strComando)
            logPrioridade(f'SELECT<getAllTetos>____________{self.tabelas.tblTetosPrev};', TipoEdicao.select, Prioridade.saidaComun)
            return cursor.fetchall()
        except:
            raise Warning(f'Erro SQL - getAllTetos({self.config.banco}) <SELECT {self.tabelas.tblTetosPrev}>')
        finally:
            self.disconectBD(cursor)

    def disconectBD(self, cursor):
        cursor.close()
        self.db.close()
I have a DaoImportantInfo class, which holds the connection and some configuration properties, a getAll method that returns all the rows in the tblTetosPrev table, and a disconectBD method that should close the connection.
The weird thing is that every time I call getAll it takes longer to fetch the data. The first time it takes less than one second; the second time it takes 2 seconds, then 19 seconds...
I read that it is important to close the connection after executing a script, so I don't know what the problem is. Can anybody help me?
I'm using Python 3.8 on Pop!_OS Linux.
I have written a function for connecting to a database using pymysql. Here is my code:
import pymysql

def SQLreadrep(sql):
    connection = pymysql.connect(host=############,
                                 user=#######,
                                 password=########,
                                 db=#########)
    with connection.cursor() as cursor:
        cursor.execute(sql)
        rows = cursor.fetchall()
    connection.commit()
    connection.close()
    return rows
I pass the SQL into this function and it returns the rows. However, I am making many quick queries to the database (something like "SELECT sku WHERE object='2J4423K'").
What is a way to avoid so many connections?
Should I be avoiding this many connections to begin with?
Could I crash a server using this many connections and queries?
Let me answer your last question first. Your function acquires a connection but closes it before returning, so unless you were multithreading or multiprocessing, I see no reason why you would ever use more than one connection at a time, and you should not be crashing the server.
The way to avoid the overhead of creating and closing so many connections is to "cache" the connection. One way to do that is to replace your function with a class:
import pymysql

class DB(object):
    def __init__(self, datasource, db_user, db_password):
        self.conn = pymysql.connect(db=datasource, user=db_user, password=db_password)

    def __del__(self):
        self.conn.close()

    def query(self, sql):
        with self.conn.cursor() as cursor:
            cursor.execute(sql)
            self.conn.commit()
            return cursor.fetchall()
Then you instantiate an instance of the DB class and invoke its query method. When the DB instance is garbage collected, the connection will be closed automatically.
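A minimal usage sketch (the database name, credentials, and parts table here are placeholders, not from the question):
# hypothetical usage of the DB class above; all names are placeholders
db = DB("mydatabase", "myuser", "mypassword")
rows = db.query("SELECT sku FROM parts WHERE object = '2J4423K'")
print(rows)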
I have a main Python script which connects to a MySQL database and pulls a few records from it. Based on the result returned, it starts as many threads (class instances) as there are records. Each thread should go back to the database and update another table by setting a status flag to a different state ("process started").
To achieve this I tried to:
1.) Pass the database connection to all threads
2.) Open a new database connection from each thread
but neither of them worked.
In both cases the update ran without any issue inside try/except, but the MySQL table was not updated, and no error was raised. I used commit in both cases.
My question would be how to handle MySQL connection(s) in such a case?
Update based on the first few comments:
MAIN SCRIPT
-----------
# Connecting to DB
db = MySQLdb.connect(host = db_host,
                     db = db_db,
                     port = db_port,
                     user = db_user,
                     passwd = db_password,
                     charset = 'utf8')

# Initiating database cursor
cur = db.cursor()

# Fetching records for which I need to initiate a class instance
cur.execute('SELECT ...')
for row in cur.fetchall():
    # Initiating a new instance, appending it to a list and
    # starting all of them
CLASS WHICH IS INSTANTIATED
---------------------------
# Connecting to DB again. I also tried to pass the connection
# which was opened in the main script, but it did not
# work either.
db = MySQLdb.connect(host = db_host,
                     db = db_db,
                     port = db_port,
                     user = db_user,
                     passwd = db_password,
                     charset = 'utf8')

# Initiating database cursor
cur_class = db.cursor()
cur_class.execute('UPDATE ...')
db.commit()
Here is an example using multithreading to work with MySQL in Python. I don't know your table and data, so you may need to adapt the code:
import threading
import MySQLdb

Num_Of_threads = 5

class myThread(threading.Thread):
    def __init__(self, conn, cur, data_to_deal):
        threading.Thread.__init__(self)
        self.conn = conn
        self.cur = cur
        self.data_to_deal = data_to_deal

    def run(self):
        # add your sql
        sql = 'insert into your_table (id) values ({0});'
        for i in self.data_to_deal:
            self.cur.execute(sql.format(i))
        self.conn.commit()

threads = []
data_list = [1, 2, 3, 4, 5]

# each thread gets its own connection; a MySQLdb connection
# should not be shared across threads
for i in range(Num_Of_threads):
    conn = MySQLdb.connect(host='localhost', user='root', passwd='', db='')
    cur = conn.cursor()
    threads.append(myThread(conn, cur, [data_list[i]]))

for th in threads:
    th.start()
for t in threads:
    t.join()
It seems the problem is not with my code but with my MySQL version. I'm using the MySQL standard community edition, and based on the official documentation found here:
The thread pool plugin is a commercial feature. It is not included in MySQL community distributions.
I'm about to upgrade to MariaDB to solve this issue.
Looks like MySQL 5.7 does support multithreading.
As you tried previously, absolutely make sure to create the connection within the worker function; defining the connection globally was my mistake.
Here's sample code that prints 10 records via 5 threads, 5 times each:
import MySQLdb
import threading

def write_good_proxies():
    local_db = MySQLdb.connect("localhost", "username", "PassW", "DB", port=3306)
    local_cursor = local_db.cursor(MySQLdb.cursors.DictCursor)
    sql_select = 'select http from zproxies where update_time is null order by rand() limit 10'
    local_cursor.execute(sql_select)
    records = local_cursor.fetchall()
    id_list = [f['http'] for f in records]
    print id_list

def worker():
    x = 0
    while x < 5:
        x = x + 1
        write_good_proxies()

threads = []
for i in range(5):
    print i
    t = threading.Thread(target=worker)
    threads.append(t)
    t.start()
I've seen some answers around here that open a new MySQL cursor before each query, then close it.
Is that slow? Shouldn't I recycle a cursor by passing it in as a parameter?
Also, I have a program that runs an infinite loop, so eventually the connection will time out after the default 8 hours.
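For reference, the 8-hour figure comes from the server's wait_timeout setting, which defaults to 28800 seconds; a quick sketch to inspect it, assuming db is an open connection:
# sketch: check the server-side idle timeout that drops the connection;
# wait_timeout defaults to 28800 seconds (8 hours)
cursor = db.cursor()
cursor.execute("SHOW VARIABLES LIKE 'wait_timeout'")
print(cursor.fetchall())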
Edit:
As requested, this is the relevant code that handles the SQL query:
def fetch_data(query):
    try:
        cursor = db.cursor()
        cursor.execute(query)
        return cursor.fetchall()
    except OperationalError as e:
        db = fetchDb()
        db.autocommit(True)
        print 'reconnecting and trying again...'
        return fetch_data(query)
Of course, re-connecting thousands of times will take much more time. You'd better store the connection as a property of your class, like this:
class yourClass(object):
    def __init__(self):
        self.db = fetchDb()  # your own connection helper, as in the question
        self.db.autocommit(True)
        self.cursor = self.db.cursor()
        # do something

    def fetch_data(self, query):
        try:
            self.cursor.execute(query)
            return self.cursor.fetchall()
        except OperationalError as e:
            self.db = fetchDb()
            self.db.autocommit(True)
            self.cursor = self.db.cursor()
            print 'reconnecting and trying again...'
            return self.fetch_data(query)
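A hypothetical usage sketch, assuming fetchDb() returns a live MySQLdb connection as in the question:
# hypothetical usage; fetchDb() comes from the question's own code
handler = yourClass()
rows = handler.fetch_data('SELECT * FROM foo')
print rows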
I'm running a few apps on the Tornado web server, which all connect to a MySQL DB using mysqldb. When I spin up the server, it instantiates a DB class (below) which opens a connection to the DB. All transactions are made using this same connection, which I'm not sure is a good idea.
class RDSdb(object):

    def __init__(self):
        self.connect()

    def connect(self):
        self.connection = MySQLdb.connect(cursorclass=MySQLdb.cursors.SSDictCursor,
                                          host=self.RDS_HOST, user=self.RDS_USER,
                                          passwd=self.RDS_PASS, db=self.RDS_DB)

    def get_cursor(self):
        try:
            cursor = self.connection.cursor()
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            cursor = self.connection.cursor()
        return cursor

    def fetch_by_query(self, query):
        cursor = self.get_cursor()
        cursor.execute(query)
        result = cursor.fetchall()
        cursor.close()
        return result
I'm pretty sure I shouldn't open/close a new connection for every transaction, but then, when should I?
I noticed something else that's a bit off, which I'm certain is related: when I need to update one of my DB tables' schemas (e.g. ALTER TABLE), the whole table in question gets locked and unresponsive until I kill my 3 apps with open connections to the DB. I realize that one of those connections was holding up the update.
Best practices when it comes to this? Ideas?
Thanks.
I came across the MySQL C API way of doing the trick:
my_bool reconnect = 1;
mysql_options(&mysql, MYSQL_OPT_RECONNECT, &reconnect);
but no luck with MySQLdb (python-mysql).
Can anybody please give a clue? Thanks.
I solved this problem by creating a function that wraps the cursor.execute() method since that's what was throwing the MySQLdb.OperationalError exception. The other example above implies that it is the conn.cursor() method that throws this exception.
import MySQLdb

class DB:
    conn = None

    def connect(self):
        self.conn = MySQLdb.connect()

    def query(self, sql):
        try:
            cursor = self.conn.cursor()
            cursor.execute(sql)
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            cursor = self.conn.cursor()
            cursor.execute(sql)
        return cursor

db = DB()
sql = "SELECT * FROM foo"
cur = db.query(sql)
# wait a long time for the MySQL connection to time out
cur = db.query(sql)
# still works
I had problems with the proposed solution because it didn't catch the exception. I am not sure why.
I solved the problem with ping(True), which I think is neater:
import MySQLdb
con = MySQLdb.Connect()
# ping(True) turns on the client library's automatic reconnect
con.ping(True)
cur = con.cursor()
Got it from here: http://www.neotitans.com/resources/python/mysql-python-connection-error-2006.html
If you are using Ubuntu Linux, there was a patch added to the python-mysql package that added the ability to set that same MYSQL_OPT_RECONNECT option (see here). I have not tried it, though.
Unfortunately, the patch was later removed due to a conflict with autoconnect and transactions (described here).
The comments from that page say:
1.2.2-7 Published in intrepid-release on 2008-06-19
python-mysqldb (1.2.2-7) unstable; urgency=low
[ Sandro Tosi ]
* debian/control
- list items lines in description starts with 2 space, to avoid reformat
on webpages (Closes: #480341)
[ Bernd Zeimetz ]
* debian/patches/02_reconnect.dpatch:
- Dropping patch:
Comment in Storm which explains the problem:
# Here is another sad story about bad transactional behavior. MySQL
# offers a feature to automatically reconnect dropped connections.
# What sounds like a dream, is actually a nightmare for anyone who
# is dealing with transactions. When a reconnection happens, the
# currently running transaction is transparently rolled back, and
# everything that was being done is lost, without notice. Not only
# that, but the connection may be put back in AUTOCOMMIT mode, even
# when that's not the default MySQLdb behavior. The MySQL developers
# quickly understood that this is a terrible idea, and removed the
# behavior in MySQL 5.0.3. Unfortunately, Debian and Ubuntu still
# have a patch right now which *reenables* that behavior by default
# even past version 5.0.3.
I needed a solution that works similarly to Garret's, but for cursor.execute(), as I want to let MySQLdb handle all escaping duties for me. The wrapper module ended up looking like this (usage below):
#!/usr/bin/env python
import MySQLdb

class DisconnectSafeCursor(object):
    db = None
    cursor = None

    def __init__(self, db, cursor):
        self.db = db
        self.cursor = cursor

    def close(self):
        self.cursor.close()

    def execute(self, *args, **kwargs):
        try:
            return self.cursor.execute(*args, **kwargs)
        except MySQLdb.OperationalError:
            self.db.reconnect()
            self.cursor = self.db.cursor()
            return self.cursor.execute(*args, **kwargs)

    def fetchone(self):
        return self.cursor.fetchone()

    def fetchall(self):
        return self.cursor.fetchall()

class DisconnectSafeConnection(object):
    connect_args = None
    connect_kwargs = None
    conn = None

    def __init__(self, *args, **kwargs):
        self.connect_args = args
        self.connect_kwargs = kwargs
        self.reconnect()

    def reconnect(self):
        self.conn = MySQLdb.connect(*self.connect_args, **self.connect_kwargs)

    def cursor(self, *args, **kwargs):
        cur = self.conn.cursor(*args, **kwargs)
        return DisconnectSafeCursor(self, cur)

    def commit(self):
        self.conn.commit()

    def rollback(self):
        self.conn.rollback()

disconnectSafeConnect = DisconnectSafeConnection
Using it is trivial; only the initial connect differs. Extend the classes with wrapper methods as per your MySQLdb needs.
import mydb
db = mydb.disconnectSafeConnect()
# ... use as a regular MySQLdb.connections.Connection object
cursor = db.cursor()
# no more "2006: MySQL server has gone away" exceptions now
cursor.execute("SELECT * FROM foo WHERE bar=%s", ("baz",))
You can separate the commit and the close for the connection... it's not pretty, but it does the job.
import MySQLdb as Sql  # assuming Sql refers to the MySQLdb module

class SqlManager(object):
    """
    Class that handles the database operations
    """
    def __init__(self, server, database, username, pswd):
        self.server = server
        self.dataBase = database
        self.userID = username
        self.password = pswd
        self.conn = Sql.connect(host=self.server, db=self.dataBase,
                                user=self.userID, passwd=self.password)

    def Close_Transation(self):
        """
        Commit the SQL query
        """
        try:
            self.conn.commit()
        except Sql.Error, e:
            print "-- reading SQL Error %d: %s" % (e.args[0], e.args[1])

    def Close_db(self):
        try:
            self.conn.close()
        except Sql.Error, e:
            print "-- reading SQL Error %d: %s" % (e.args[0], e.args[1])

    def __del__(self):
        print "close connection with database.."
        self.conn.close()
I had a similar problem with MySQL and Python, and the solution that worked for me was to upgrade MySQL to 5.0.27 (on Fedora Core 6; your system may work fine with a different version).
I tried a lot of other things, including patching the Python libraries, but upgrading the database was a lot easier and (I think) a better decision.
In addition to Liviu Chircu's solution, add the following method to DisconnectSafeCursor:
def __getattr__(self, name):
    return getattr(self.cursor, name)
and the original cursor properties like "lastrowid" will keep working.
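For instance, a hypothetical sketch (reusing the mydb wrapper from above; the foo table is a placeholder):
import mydb

db = mydb.disconnectSafeConnect()
cursor = db.cursor()
cursor.execute("INSERT INTO foo (bar) VALUES (%s)", ("baz",))
db.commit()
# lastrowid is not defined on DisconnectSafeCursor itself;
# __getattr__ forwards it to the wrapped MySQLdb cursor
print(cursor.lastrowid)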
Your other bet is to work around dropped connections yourself in code.
One way to do it would be the following:
import MySQLdb

class DB:
    conn = None

    def connect(self):
        self.conn = MySQLdb.connect()

    def cursor(self):
        try:
            return self.conn.cursor()
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            return self.conn.cursor()

db = DB()
cur = db.cursor()
# wait a long time for the MySQL connection to time out
cur = db.cursor()
# still works