I am using the MariaDB Connector for Python, and I have a singleton database class that is responsible for creating a pool and performing database operations on that pool. I have made every effort to close the pooled connection after every access. Still, after a while the pool becomes unusable and gets stuck, never to be freed. This might be a bug in the connector or a bug in my code. Once the pool is exhausted, I create and return a plain connection, which is not efficient for every database access.
Here's my database module code:
import mariadb
import configparser
import sys

from classes.logger import AppLogger

logger = AppLogger(__name__)

connections = 0

class Db:
    """
    Main database for the application
    """
    config = configparser.ConfigParser()
    config.read('/app/config/conf.ini')
    db_config = config['db']
    try:
        conn_pool = mariadb.ConnectionPool(
            user=db_config['user'],
            password=db_config['password'],
            host=db_config['host'],
            port=int(db_config['port']),
            pool_name=db_config['pool_name'],
            pool_size=int(db_config['pool_size']),
            database=db_config['database'],
        )
    except mariadb.PoolError as e:
        print(f'Error creating connection pool: {e}')
        logger.error(f'Error creating connection pool: {e}')
        sys.exit(1)

    def get_pool(self):
        return self.conn_pool if self.conn_pool is not None else self.create_pool()

    def __get_connection__(self):
        """
        Returns a db connection
        """
        global connections
        try:
            pconn = self.conn_pool.get_connection()
            pconn.autocommit = True
            print(f"Receiving connection. Auto commit: {pconn.autocommit}")
            connections += 1
            print(f"New Connection. Open Connections: {connections}")
            logger.debug(f"New Connection. Open Connections: {connections}")
        except mariadb.PoolError as e:
            print(f"Error getting pool connection: {e}")
            logger.error(f'Error getting pool connection: {e}')
            # exit(1)
            pconn = self.__create_connection__()
            pconn.autocommit = True
            connections += 1
            logger.debug(f'Created normal connection following failed pool access. Connections: {connections}')
        return pconn

    def __create_connection__(self):
        """
        Creates a new connection. Use this when getting a
        pool connection fails
        """
        db_config = self.db_config
        return mariadb.connect(
            user=db_config['user'],
            password=db_config['password'],
            host=db_config['host'],
            port=int(db_config['port']),
            database=db_config['database'],
        )

    def exec_sql(self, sql, values=None):
        global connections
        pconn = self.__get_connection__()
        try:
            cur = pconn.cursor()
            print(f'Sql: {sql}')
            print(f'values: {values}')
            cur.execute(sql, values)
            # pconn.commit()
            # Is this a select operation?
            if sql.startswith('SELECT') or sql.startswith('Select') or sql.startswith('select'):
                result = cur.fetchall()  # Return a result set for select operations
            else:
                result = True
            pconn.close()
            connections -= 1
            print(f'connection closed: connections: {connections}')
            logger.debug(f'connection closed: connections: {connections}')
            # return True  # Return true for insert, update, and delete operations
            return result
        except mariadb.Error as e:
            print(f"Error performing database operations: {e}")
            # pconn.rollback()
            pconn.close()
            connections -= 1
            print(f'connection closed: connections: {connections}')
            return False
To use the class in a module, I import it, instantiate an object, and run SQL queries on it:
db = Db()
users = db.exec_sql("SELECT * FROM users")
Any ideas why the pool gets exhausted after a while (maybe days) and never recovers?
Perhaps an exception other than mariadb.Error is sometimes raised and the connection is never closed. I believe the best practice would be to use a finally block to guarantee that the connection is always closed, like this:
pconn = None
try:
    pconn = self.__get_connection__()
    # ...
except mariadb.Error as e:
    # ...
    pass
finally:
    if pconn:
        try:
            pconn.close()
        except Exception:
            # Not really expected, but if this ever happens it should not alter
            # whatever happened in the try or except sections above.
            pass
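For instance, exec_sql from the question could be restructured along these lines (just a sketch based on the posted code; it assumes __get_connection__ behaves exactly as posted):

def exec_sql(self, sql, values=None):
    global connections
    pconn = None
    try:
        pconn = self.__get_connection__()
        cur = pconn.cursor()
        cur.execute(sql, values)
        # Rows for SELECT statements, True for everything else
        if sql.lstrip().upper().startswith('SELECT'):
            return cur.fetchall()
        return True
    except mariadb.Error as e:
        logger.error(f'Error performing database operations: {e}')
        return False
    finally:
        # Runs on success, on mariadb.Error, and on any other exception,
        # so the connection always goes back to the pool
        if pconn:
            try:
                pconn.close()
                connections -= 1
            except Exception:
                pass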
I'm new to Python so this could be a simple fix.
I am using Flask and sockets for this Python project. I am starting the socket on another thread so I can actively listen for new messages. I have an array variable called 'SocketConnections' inside my UdpComms class. The variable gets a new 'Connection' appended to it when a new socket connection is made. That works correctly. My issue is that when I try to read 'SocketConnections' from outside the thread, it is an empty array.
server.py

from flask import Flask, jsonify
import threading
import time

import UdpComms as U

app = Flask(__name__)

@app.route('/api/talk', methods=['POST'])
def talk():
    global global_server_socket
    apples = global_server_socket.SocketConnections
    return jsonify(message=apples)

global_server_socket = None

def start_server():
    global global_server_socket
    sock = U.UdpComms(udpIP="127.0.0.1", portTX=8000, portRX=8001, enableRX=True, suppressWarnings=True)
    i = 0
    global_server_socket = sock
    while True:
        i += 1
        data = sock.ReadReceivedData()  # read data
        if data != None:  # if NEW data has been received since last ReadReceivedData function call
            print(data)  # print new received data
        time.sleep(1)

if __name__ == '__main__':
    server_thread = threading.Thread(target=start_server)
    server_thread.start()
    app.run(debug=True, host='192.168.0.25')
UdpComms.py

import json
import uuid

class UdpComms():
    def __init__(self, udpIP, portTX, portRX, enableRX=False, suppressWarnings=True):
        self.SocketConnections = []

        import socket

        self.udpIP = udpIP
        self.udpSendPort = portTX
        self.udpRcvPort = portRX
        self.enableRX = enableRX
        self.suppressWarnings = suppressWarnings  # when true warnings are suppressed
        self.isDataReceived = False
        self.dataRX = None

        # Connect via UDP
        self.udpSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # internet protocol, udp (DGRAM) socket
        self.udpSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allows the address/port to be reused immediately instead of it being stuck in the TIME_WAIT state waiting for late packets to arrive
        self.udpSock.bind((udpIP, portRX))

        # Create Receiving thread if required
        if enableRX:
            import threading
            self.rxThread = threading.Thread(target=self.ReadUdpThreadFunc, daemon=True)
            self.rxThread.start()

    def __del__(self):
        self.CloseSocket()

    def CloseSocket(self):
        # Function to close socket
        self.udpSock.close()

    def SendData(self, strToSend):
        # Use this function to send string to C#
        self.udpSock.sendto(bytes(strToSend, 'utf-8'), (self.udpIP, self.udpSendPort))

    def SendDataAddress(self, strToSend, guid):
        # Use this function to send string to C#
        print('finding connection: ' + guid)
        if self.SocketConnections:
            connection = self.GetConnectionByGUID(guid)
            print('found connection: ' + guid)
            if connection is not None:
                self.udpSock.sendto(bytes(strToSend, 'utf-8'), connection.Address)

    def ReceiveData(self):
        if not self.enableRX:  # if RX is not enabled, raise error
            raise ValueError("Attempting to receive data without enabling this setting. Ensure this is enabled from the constructor")
        data = None
        try:
            data, _ = self.udpSock.recvfrom(1024)
            print('Socket data received from: ', _)
            if self.IsNewConnection(_) == True:
                print('New socket')
                self.SendDataAddress("INIT:" + self.SocketConnections[-1].GUID, self.SocketConnections[-1].GUID)
            data = data.decode('utf-8')
        except WindowsError as e:
            if e.winerror == 10054:  # An error occurs if you try to receive before connecting to other application
                if not self.suppressWarnings:
                    print("Are You connected to the other application? Connect to it!")
                else:
                    pass
            else:
                raise ValueError("Unexpected Error. Are you sure that the received data can be converted to a string")
        return data

    def ReadUdpThreadFunc(self):  # Should be called from thread
        self.isDataReceived = False  # Initially nothing received
        while True:
            data = self.ReceiveData()  # Blocks (in thread) until data is returned (OR MAYBE UNTIL SOME TIMEOUT AS WELL)
            self.dataRX = data  # Populate AFTER new data is received
            self.isDataReceived = True
            # When it reaches here, data received is available

    def ReadReceivedData(self):
        data = None
        if self.isDataReceived:  # if data has been received
            self.isDataReceived = False
            data = self.dataRX
            self.dataRX = None  # Empty receive buffer
        if data != None and data.startswith('DIALOG:'):  # send it info
            split = data.split(':')[1]
        return data

    class Connection:
        def __init__(self, gUID, address) -> None:
            self.GUID = gUID
            self.Address = address

    def IsNewConnection(self, address):
        for connection in self.SocketConnections:
            if connection.Address == address:
                return False
        print('Appending new connection...')
        connection = self.Connection(str(uuid.uuid4()), address)
        self.SocketConnections.append(connection)
        return True

    def GetConnectionByGUID(self, guid):
        for connection in self.SocketConnections:
            if connection.GUID == guid:
                return connection
        return None
As mentioned above, when IsNewConnection() is called in UdpComms it does append a new object to SocketConnections. It is only when I try to view SocketConnections from the app.route that it is empty. My plan is to be able to send socket messages from the app routes.
For interprocess communication you may try to use something like shared memory, e.g. the multiprocessing module's shared Array documented here.
Instead of declaring self.SocketConnections as a plain list ([]),
you'd use self.SocketConnections = Array('i', range(10)) (you are then limited to remembering only 10 connections, though).
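A minimal sketch of how such a shared Array behaves across processes (the worker function and values here are illustrative, not part of the original code):

from multiprocessing import Process, Array

def worker(shared_connections):
    # A child process writes into the shared array; the parent sees the change
    shared_connections[0] = 42

if __name__ == '__main__':
    shared_connections = Array('i', range(10))  # 'i' = signed int, fixed size of 10
    p = Process(target=worker, args=(shared_connections,))
    p.start()
    p.join()
    print(shared_connections[:])  # the first slot now holds 42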
I need to perform a postgres bulk update using SQLAlchemy
Right now I am using this code to update, which is inefficient and very slow. I wanted to know if there is a better way to perform the action below instead of looping over the image names one by one:

for image in list_of_image_names:
    result_update = connection.execute(f"UPDATE database.tablename SET image_downloaded = 'I' WHERE image_name = '{image}' AND image_downloaded = 'N';")
Code for establishing a connection:

def getconn() -> pg8000.dbapi.Connection:
    conn: pg8000.dbapi.Connection = connector.connect(
        db_secrets['connection_name'],
        "pg8000",
        user='some_user',
        password='some_password',
        db='some_dbname',
    )
    return conn

engine = sqlalchemy.create_engine(
    "{}://".format("postgresql+pg8000"),
    creator=getconn,
)
engine.dialect.description_encoding = None

try:
    conne = engine.connect()
    print("DB connection successful")
except Exception as e:
    print(e)
    raise

return conne
This pet project is about creating a miniature ChatServer and Client. The assumptions go like this:
First the client tries to log in and then validates their details; this part is written using tkinter. The second frame that interacts with the user is the chat window. This completes the client side.
It is presumed that the details of each user are stored in a SQLite db.
The server, on the other hand, first creates (spawns) a socket, assigns it to each user, and keeps them logged in.
When the second user is logged in and tries to chat with the first, the server validates the second user and pairs the sockets of both users.
I have written code here for the server to do the above activity, but it fails for some reason. I get the following error:
runfile('C:/Users/CGDELL23/ChatServer/ChatServer.py', wdir='C:/Users/CGDELL23/ChatServer')
Traceback (most recent call last):
File "C:\Users\CGDELL23\ChatServer\ChatServer.py", line 18, in <module>
__main__
NameError: name '__main__' is not defined
I think it is failing in the parseUserData function. Can you please help?
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 14 10:00:54 2020

#author: Sathya Devarakonda
#Module: ChatServer.py
"""
#import networkX
import socket
import sqlite3
#from sqlite3 import error
import sys

__main__

#Network Variables
cSocket,sSocket = None
#Client socket Info
(cStrBuffer,cAncData,cflags,cAddr) = None
#Boolean Variables
validUser = None
#Arrays
newUser = []
#Variables
user,password,activityType,rUser,cUser, nullUser,nullPassword,x,y = None
#DB Connection con
con = None  # Connection

class ChatServer:
    def spawnSocket():
        sSocket.listen()
        cSocket = sSocket.accept()
        cSocket.connect()  #Connection Established
        return cSocket;

    def closeSocket(cSocket):
        cSocket.close

    def parseUserData(cStrBuffer):
        return cStrBuffer.split(',')

    def disconnectUser(cUser):
        DBCalls.deleteUser(cUser)

    def validateUser(cSocket):
        (cStrBuffer, cAncdata, cFlags, cAddr) = cSocket.recvmsg(1024)
        (activityType,user,password,cUser) = parseUserData(cStrBuffer)
        if (activityType == 'login'):
            return(cUser)

    def connectrUser(cSocket,rUser):
        try:
            rSocket = DBCalls.getUser(rUser)
            #(rStrBuffer,rAncdata, rFlags, rAddr) = rSocket.recvmsg(1024)
            socket.socketpair(cSocket,rSocket)
        except socket.err as err:
            print ('Creation of socket Failed')
            sys.settrace()
        except sqlite3.DatabaseError as error:
            print ('Db query Failed')
            sys.settrace()

    def chatUser(cUser):
        (cStrBuffer, cAncdata, cFlags, cAddr) = cSocket.recvmsg(1024)
        (activityType,user,passsword,rUser) = parseUserData(cStrBuffer)
        if(activityType == 'chat'):
            connectrUser(cSocket,rUser)
        elif (activityType == 'logoff'):
            disconnectUser(rUser)

    def validateLoop():
        try:
            #Creating a Server Socket and binding it to localhost
            sSocket = socket.socket(-1,-1,-1,None)
            sSocket.bind("127.0.0.1")
            while(True):
                cSocket = spawnSocket()
                cUser = validateUser(cSocket)
                if (cUser):
                    chatUser(cUser)
        except socket.err as err:
            print ('Creation of socket Failed')
            sys.settrace()

class DBCalls:
    def createConn():
        try:
            con = sqlite3.connect('SockDetails')
        except socket.error as error:
            print ('Creation of connection failed')
            sys.settrace()

    def closeConn():
        try:
            con = sqlite3.close()
        except socket.error as error:
            print ('Closing connection failed')
            sys.settrace()

    def insertUser(cUser,cSocket,cAddr):
        try:
            userCursor = con.cursor()
            userCursor.execute('insert into SockDetails (cUser,cSocket,cAddr)')
            con.commit
        except socket.error as error:
            print ('Error adding socket details')
            sys.settrace()
        finally:
            return (True)

    def checkUser(rUser):
        userIn = False  #Boolean Flag
        try:
            userCursor = con.cursor()
            userCursor.execute('select * from SockDetails where user=cUser')
            con.commit
        except socket.error as error:
            print ('Error checking socket details')
            sys.settrace()
        return (userIn)

    def getUser(cUser):
        rSocket = False
        try:
            userCursor = con.cursor()
            userCursor.execute('select socket from SockDetails where user=cUser')
            con.commit
        except socket.error as error:
            print ('Error adding socket details')
            sys.settrace()
        finally:
            return (rSocket)

    def deleteUser(cUser):
        try:
            userCursor = con.cursor()
            userCursor.execute('delete * from SockDetails where user=rUser')
            con.commit
        except socket.error as error:
            print ('Error delete socket details')
            sys.settrace()
        finally:
            return (True)
Instead of:

import sys

__main__

try to use:

import sys

if __name__ == "__main__":  # this is how the main entry point is written in Python
    "Rest of Code"
I am a newbie in Python, so this is basically my first project in the language.
Every time I run my script, I get different responses from the MySQL server.
The most frequent answer is OperationalError: (2006, 'MySQL server has gone away')
Sometimes I get the output Thread: 11 commited (see code below).
And sometimes an emergency stop (translated; I get Russian output in the console).
Even when the output is full of commited messages, the records in the table stay the same.
import MySQLdb
import pyping
import socket, struct
from threading import Thread

def ip2int(addr):
    """Convert ip to integer"""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def int2ip(addr):
    """Convert integer to ip"""
    return socket.inet_ntoa(struct.pack("!I", addr))

def ping(ip):
    """Pinging client"""
    request = pyping.ping(ip, timeout=100, count=1)
    return int(request.max_rtt)

class UpdateThread(Thread):
    def __init__(self, records, name):
        Thread.__init__(self)
        self.database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
        self.cursor = database.cursor()
        self.name = name
        self.records = records

    def run(self):
        print(self.name)
        for r in self.records:
            #latency = ping(int2ip(r[1])) what the hell :x
            #ip = str(int2ip(r[1]))
            id = str(r[0])
            self.cursor.execute("""update clients set has_subn=%s where id=%s""" % (id, id))
            self.database.commit()
        print(self.name + " commited")

#start
database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
cursor = database.cursor()
cursor.execute("""select * from clients""")
data = cursor.fetchall()  #All records from DataBase
count = len(data)
threads_counter = 10  #We are creating 10 threads for all records
th_count = count / threads_counter  #Count of records for each thread
last_thread = count % threads_counter  #Last records
threads = []
i = 0
while i < (count - last_thread):
    temp_list = data[i:(i+th_count)]
    #print(temp_list)
    threads.append(UpdateThread(records = temp_list, name = "Thread: " + str((i/3) + 1)).start())
    i += th_count

threads.append(UpdateThread(records = data[i: count], name = "Thread: 11").start())
P.S.
Other answers I found here did not help me.
UPD:
I found that some thread (a different one every time) prints
OperationalError: (2013, 'Lost connection to MySQL server during query') and then all subsequent threads print OperationalError: (2013, 'Lost connection to MySQL server during query')
You need to close your DB connections when you're done with them or else the DB server will become overwhelmed and make your connections expire. For your program, I would change your code so that you have only one DB connection. You can pass a reference to it to your UpdateThread instances and close it when you're done.
database.close()
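A sketch of that suggestion applied to the posted code; the lock is an added assumption, since a single MySQLdb connection should not be used from several threads at once without serializing access, and record_chunks stands for the slices built above:

import threading
import MySQLdb
from threading import Thread

class UpdateThread(Thread):
    def __init__(self, records, name, database, lock):
        Thread.__init__(self)
        self.database = database  # the one shared connection, passed in
        self.lock = lock          # serializes access to the shared connection
        self.name = name
        self.records = records

    def run(self):
        print(self.name)
        for r in self.records:
            with self.lock:
                cursor = self.database.cursor()
                cursor.execute("update clients set has_subn=%s where id=%s", (str(r[0]), str(r[0])))
                self.database.commit()
                cursor.close()
        print(self.name + " commited")

database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
lock = threading.Lock()
workers = [UpdateThread(records=chunk, name="Thread: %d" % n, database=database, lock=lock)
           for n, chunk in enumerate(record_chunks, start=1)]
for w in workers:
    w.start()
for w in workers:
    w.join()
database.close()  # close the single connection once every thread has finished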
I am making a project that reads RFID tags using Python on a Raspberry Pi with an RDM880 reader.
My idea is to record the time in and time out, to check whether staff arrive at work on time.
I try to add card_ID, time_in, and time_out to both a local MySQL database and a remote MySQL database (IP: 192.168.137.1) using Python.
The remote and local MySQL have the same table.
If the remote MySQL is broken, I want to add only to the local MySQL.
Here is my code:
import serial
import time
import RPi.GPIO as GPIO
import MySQLdb
from datetime import datetime
from binascii import hexlify

serial = serial.Serial("/dev/ttyAMA0",
                       baudrate=9600,
                       parity=serial.PARITY_NONE,
                       stopbits=serial.STOPBITS_ONE,
                       bytesize=serial.EIGHTBITS,
                       timeout=0.1)

db_local = MySQLdb.connect("localhost","root","root","luan_van")  #connect local
db = MySQLdb.connect("192.168.137.1", "root_a","","luan_van")  #connect remote

ID_rong = 128187  # reader response if no card
chuoi = "\xAA\x00\x03\x25\x26\x00\x00\xBB"

def RFID(str):  #function read RFID via uart
    serial.write(chuoi)
    data = serial.readline()
    tach_5 = data[5]
    tach_6 = data[6]
    hex_5 = hexlify(tach_5)
    hex_6 = hexlify(tach_6)
    num_5 = int(hex_5,16)
    num_6 = int(hex_6,16)
    num_a = num_5 * 1000 + num_6
    if(num_a != ID_rong):
        tach_7 = data[7]
        tach_8 = data[7]
        hex_7 = hexlify(tach_7)
        hex_8 = hexlify(tach_8)
        num_7 = int(hex_7,16)
        num_8 = int(hex_8,16)
        num = num_8 + num_7 * 1000 + num_6 * 1000000 + num_5 * 1000000000
    else:
        num = num_5 * 1000 + num_6
    return num

def add_database():  # add card_ID and time_in to remote mysql
    with db:
        cur = db.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES ('%d',NOW()) " %num)
    return

def add_database_local():  # add card_ID and time_in to local mysql
    with db_local:
        cur = db_local.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES ('%d',NOW()) " %num)
    return

def have_ID(int):  #check ID in table tt_control
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("SELECT * FROM tt_control WHERE Card_ID = '%d'" %num)
        rows = cur.fetchall()
        ID = ""
        for row in rows:
            ID = row['Card_ID']
        return ID

def add_time_out():  #add time out to remote mysql
    with db:
        cur = db.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Time_out = NOW() WHERE Card_ID = '%d'" %num)
    return

def add_time_out_local():  #add time out to local mysql
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Time_out = NOW() WHERE Card_ID = '%d'" %num)
    return

def add_OUT():  #increase Card_ID to distinguish second check
    with db:
        cur = db.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Card_ID = Card_ID + 1 WHERE Card_ID = '%d'" %num)
    return

def add_OUT_local():  #increase Card_ID to distinguish second check
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Card_ID = Card_ID + 1 WHERE Card_ID = '%d'" %num)
    return

while 1:
    num = RFID(chuoi)
    time.sleep(1)
    Have_ID = have_ID(num)
    if(num != ID_rong):
        if(Have_ID == ""):
            add_database()  #---> it will error if remote broken, how can i fix it?
            add_database_local()
        else:
            add_time_out()  #---> it will error if remote broken, how can i fix it? I think connection alive can fix, but I don't know
            add_time_out_local()
            add_OUT()
            add_OUT_local()  #---> it will error if remote broken, how can i fix it?
You have a couple choices:
(not as good) Ping the server regularly to keep the connection alive.
(best) Handle the MySQLdb exception when calling cur.execute by re-establishing your connection and trying the call again. Here's an excellent and concise answer for how to do just that. From that answer, you handle the exception yourself:
def __execute_sql(self, sql, cursor):
    try:
        cursor.execute(sql)
        return 1
    except MySQLdb.OperationalError, e:
        if e[0] == 2006:
            self.logger.do_logging('info', 'DB', "%s : Restarting db" % (e))
            self.start_database()
        return 0
(lastly) Establish a new database connection just before you actually make the database calls. In this case, move the db and db_local definitions into a function which you call just before creating your cursor. If you're making thousands of queries, this isn't the best. However, if it's only a few database queries, it's probably fine.
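A sketch of that last option against the posted RFID code (the error handling around the remote insert is an illustrative assumption, so the local insert can still run when the remote server is unreachable):

import MySQLdb

def add_database(num):
    # Open a fresh remote connection just for this insert
    try:
        db = MySQLdb.connect("192.168.137.1", "root_a", "", "luan_van")
    except MySQLdb.Error as e:
        print('remote connect failed:', e)
        return
    try:
        cur = db.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID, Time_in) VALUES (%s, NOW())", (num,))
        db.commit()
    except MySQLdb.Error as e:
        print('remote insert failed:', e)
    finally:
        db.close()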
I use the following method:
def checkConn(self):
    sq = "SELECT NOW()"
    try:
        self.cur.execute(sq)
    except pymysql.Error as e:
        if e.errno == 2006:
            return self.connect()
        else:
            print("No connection with database.")
            return False
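Used roughly like this (a sketch; self.connect() and self.cur are assumed to exist on the same class as in the snippet above, and the helper name is illustrative):

def safe_execute(self, sql, params=None):
    # Re-check (and if needed re-establish) the connection before querying;
    # checkConn() returns False only when no connection could be obtained
    if self.checkConn() is False:
        return None
    self.cur.execute(sql, params or ())
    return self.cur.fetchall()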
I used a simple technique. Initially, I connected to the DB using:
conect = mysql.connector.connect(host=DB_HOST, user=DB_USER, password=DB_PASS, database=DB_NAME)
Whenever I need to check if the DB is still connected, I use the line:
conect.ping(reconnect=True, attempts=3, delay=2)
This will check if the DB connection is still alive. If not, it will restart the connection which solves the problem.
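For example (a sketch with mysql.connector; the run_query helper and its query are placeholders, and DB_HOST etc. are the same placeholders as above):

import mysql.connector

conect = mysql.connector.connect(host=DB_HOST, user=DB_USER, password=DB_PASS, database=DB_NAME)

def run_query(sql, params=None):
    # Reconnect automatically if the connection has dropped, then execute
    conect.ping(reconnect=True, attempts=3, delay=2)
    cur = conect.cursor()
    cur.execute(sql, params or ())
    rows = cur.fetchall()
    cur.close()
    return rows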
It makes more sense not to use a status-checker function before executing SQL. Best practice is to handle the exception afterwards and reconnect to the server.
Since the client library is always on the client side, there is no way to know the server status (the connection status does depend on the server status, of course) unless we ping it or connect to it.
Even if you ping the server, confirm the connection is fine, and let the code run down to the following line, the connection could theoretically still drop within that glimpse of time. So it is still not guaranteed that you will have a good connection right after you check the connection status.
On the other hand, a ping is about as expensive as most operations. If your operation fails because of a bad connection, that is as good as using the ping to check the status.
Considering this, why bother with ping or any other status-checking function, built-in or not? Just execute your command as if the connection is up, then handle the exception in case it is down. This might be the reason the mysqlclient library does not provide a built-in status checker in the first place.
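As a rough sketch of that approach with MySQLdb/mysqlclient (the connection factory and retry count are illustrative assumptions, not part of any library API):

import MySQLdb

def execute_with_retry(connect, sql, params=None, retries=1):
    """Run the query assuming the connection is up; reconnect and retry only
    if it turns out to be down. `connect` is any zero-argument factory that
    returns a fresh connection."""
    conn = connect()
    for attempt in range(retries + 1):
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            conn.commit()
            return cur.fetchall()
        except MySQLdb.OperationalError:
            if attempt == retries:
                raise
            conn = connect()  # server went away: reconnect and try once more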