I am a newbie in Python, so this is more or less my first project in the language.
Every time I run my script I get a different response from the MySQL server.
The most frequent one is OperationalError: (2006, 'MySQL server has gone away').
Sometimes I get the output Thread: 11 commited (see code below).
And sometimes the script crashes outright (translated; my console output is in Russian).
Even when the output is full of "commited" messages, the records in the table remain unchanged.
import MySQLdb
import pyping
import socket, struct
from threading import Thread

def ip2int(addr):
    """Convert ip to integer"""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def int2ip(addr):
    """Convert integer to ip"""
    return socket.inet_ntoa(struct.pack("!I", addr))

def ping(ip):
    """Pinging client"""
    request = pyping.ping(ip, timeout=100, count=1)
    return int(request.max_rtt)

class UpdateThread(Thread):
    def __init__(self, records, name):
        Thread.__init__(self)
        self.database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
        self.cursor = database.cursor()
        self.name = name
        self.records = records

    def run(self):
        print(self.name)
        for r in self.records:
            #latency = ping(int2ip(r[1])) what the hell :x
            #ip = str(int2ip(r[1]))
            id = str(r[0])
            self.cursor.execute("""update clients set has_subn=%s where id=%s""" % (id, id))
        self.database.commit()
        print(self.name + " commited")

#start
database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
cursor = database.cursor()
cursor.execute("""select * from clients""")
data = cursor.fetchall() #All records from DataBase
count = len(data)
threads_counter = 10 #We are creating 10 threads for all records
th_count = count / threads_counter #Count of records for each thread
last_thread = count % threads_counter #Last records

threads = []
i = 0
while i < (count - last_thread):
    temp_list = data[i:(i+th_count)]
    #print(temp_list)
    threads.append(UpdateThread(records = temp_list, name = "Thread: " + str((i/3) + 1)).start())
    i += th_count
threads.append(UpdateThread(records = data[i: count], name = "Thread: 11").start())
P.S.
Other answers I found here have not helped me.
UPD:
I found that some thread (a different one every time) prints
OperationalError: (2013, 'Lost connection to MySQL server during query'), and then every subsequent thread prints the same OperationalError: (2013, 'Lost connection to MySQL server during query').
You need to close your DB connections when you're done with them, or the DB server will become overwhelmed and expire your connections. For your program, I would change the code so that you have only one DB connection. You can pass a reference to it to your UpdateThread instances and close it when you're done:
database.close()
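A minimal sketch of that approach (untested; `chunks` stands for the record slices built in the question, and since a MySQLdb connection is not thread-safe, access to the shared connection is serialized with a lock):

import MySQLdb
from threading import Thread, Lock

db_lock = Lock()  # a MySQLdb connection is not thread-safe; serialize access to it

class UpdateThread(Thread):
    def __init__(self, database, records, name):
        Thread.__init__(self)
        self.database = database  # shared connection passed in; no per-thread connect
        self.records = records
        self.name = name

    def run(self):
        for r in self.records:
            with db_lock:
                cursor = self.database.cursor()
                # parameterized query instead of string interpolation
                cursor.execute("update clients set has_subn=%s where id=%s", (r[0], r[0]))
                cursor.close()
        with db_lock:
            self.database.commit()
        print(self.name + " commited")

database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
# chunks: the slices of data built in the question's while loop
threads = [UpdateThread(database, chunk, "Thread: %d" % (n + 1))
           for n, chunk in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every thread before closing the connection
database.close()

Note that the sketch keeps the Thread objects themselves in the list (the question appended the return value of start(), which is None) so they can be joined before the connection is closed.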
Related
I'm trying to read data from a MySQL database into an OPC UA server. I tested it with the following code and a sample database, and it works. However, I'm not sure it will hold up in a real-time environment, since the database has 40+ tables with 30+ columns each, recording data at 1-minute intervals. Can someone please suggest the optimal way to do this?
from opcua import ua, uamethod, Server
from time import sleep
import logging
import mysql.connector

mydb = mysql.connector.connect(
    host="127.0.0.1",
    port=3306,
    user="root",
    password="root",
    database="classicmodels")

mycursor = mydb.cursor(buffered=True, dictionary=True)

sql = "SELECT * FROM classicmodels.customers"
mycursor.execute(sql)
myresult = mycursor.fetchone()

sql1 = "SELECT * FROM classicmodels.employees"
mycursor.execute(sql1)
myresult1 = mycursor.fetchone()

if __name__ == "__main__":
    """
    OPC-UA-Server Setup
    """
    server = Server()
    endpoint = "opc.tcp://127.0.0.1:4848"
    server.set_endpoint(endpoint)
    servername = "Python-OPC-UA-Server"
    server.set_server_name(servername)

    """
    OPC-UA-Modeling
    """
    root_node = server.get_root_node()
    object_node = server.get_objects_node()
    idx = server.register_namespace("OPCUA_SERVER")
    myobj = object_node.add_object(idx, "DA_UA")
    myobj1 = object_node.add_object(idx, "D_U")

    """
    OPC-UA-Server Add Variable
    """
    for key, value in myresult.items():
        myobj.add_variable(idx, key, str(value))
    for key, value in myresult1.items():
        myobj1.add_variable(idx, key, str(value))

    """
    OPC-UA-Server Start
    """
    server.start()
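Note that the code above reads each table once at startup, so the OPC UA variables never change afterwards. A minimal sketch of one direction for a live setup (assuming the same python-opcua API as above; the refresh loop and node bookkeeping are illustrative, not from the question):

    # Sketch: keep references to the variable nodes so their values can be
    # refreshed once a minute, matching the recording interval.
    nodes = {}
    for key, value in myresult.items():
        nodes[key] = myobj.add_variable(idx, key, str(value))

    server.start()
    try:
        while True:
            sleep(60)  # the tables record 1-minute data
            mycursor.execute(sql)
            row = mycursor.fetchone()
            for key, value in row.items():
                nodes[key].set_value(str(value))  # update the existing node
    finally:
        server.stop()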
I am using the MariaDB Database Connector for Python, and I have a singleton database class that is responsible for creating a pool and performing database operations on that pool. I have made every effort to close the pool after every access, but after a while the pool still becomes unusable and gets stuck, never to be freed. This might be a bug in the connector or a bug in my code. Once the pool is exhausted, I create and return a normal connection, which is not efficient for every database access.
Here's my database module code:
import mariadb
import configparser
import sys
from classes.logger import AppLogger

logger = AppLogger(__name__)
connections = 0

class Db:
    """
    Main database for the application
    """
    config = configparser.ConfigParser()
    config.read('/app/config/conf.ini')
    db_config = config['db']
    try:
        conn_pool = mariadb.ConnectionPool(
            user = db_config['user'],
            password = db_config['password'],
            host = db_config['host'],
            port = int(db_config['port']),
            pool_name = db_config['pool_name'],
            pool_size = int(db_config['pool_size']),
            database = db_config['database'],
        )
    except mariadb.PoolError as e:
        print(f'Error creating connection pool: {e}')
        logger.error(f'Error creating connection pool: {e}')
        sys.exit(1)

    def get_pool(self):
        return self.conn_pool if self.conn_pool != None else self.create_pool()

    def __get_connection__(self):
        """
        Returns a db connection
        """
        global connections
        try:
            pconn = self.conn_pool.get_connection()
            pconn.autocommit = True
            print(f"Receiving connection. Auto commit: {pconn.autocommit}")
            connections += 1
            print(f"New Connection. Open Connections: {connections}")
            logger.debug(f"New Connection. Open Connections: {connections}")
        except mariadb.PoolError as e:
            print(f"Error getting pool connection: {e}")
            logger.error(f'Error getting pool connection: {e}')
            # exit(1)
            pconn = self.__create_connection__()
            pconn.autocommit = True
            connections += 1
            logger.debug(f'Created normal connection following failed pool access. Connections: {connections}')
        return pconn

    def __create_connection__(self):
        """
        Creates a new connection. Use this when getting a
        pool connection fails
        """
        db_config = self.db_config
        return mariadb.connect(
            user = db_config['user'],
            password = db_config['password'],
            host = db_config['host'],
            port = int(db_config['port']),
            database = db_config['database'],
        )

    def exec_sql(self, sql, values = None):
        global connections
        pconn = self.__get_connection__()
        try:
            cur = pconn.cursor()
            print(f'Sql: {sql}')
            print(f'values: {values}')
            cur.execute(sql, values)
            # pconn.commit()
            # Is this a select operation?
            if sql.startswith('SELECT') or sql.startswith('Select') or sql.startswith('select'):
                result = cur.fetchall()  #Return a result set for select operations
            else:
                result = True
            pconn.close()
            connections -= 1
            print(f'connection closed: connections: {connections}')
            logger.debug(f'connection closed: connections: {connections}')
            # return True #Return true for insert, update, and delete operations
            return result
        except mariadb.Error as e:
            print(f"Error performing database operations: {e}")
            # pconn.rollback()
            pconn.close()
            connections -= 1
            print(f'connection closed: connections: {connections}')
            return False
To use the class in a module, I import it, instantiate an object from it, and run SQL queries on it:
db = Db()
users = db.exec_sql("SELECT * FROM users")
Any ideas why the pool gets exhausted after a while (maybe days) and never recovers?
Maybe an exception other than mariadb.Error is sometimes raised, so the connection is never closed. I believe the best practice would be to use a finally section to guarantee that the connection is always closed, like this:
pconn = None
try:
    pconn = self.__get_connection__()
    # ...
except mariadb.Error as e:
    # ...
finally:
    if pconn:
        try:
            pconn.close()
        except:
            # Not really expected, but if this ever happens it should not alter
            # whatever happened in the try or except sections above.
            pass
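Equivalently (a sketch, not from the original answer), contextlib.closing gives the same guarantee with less nesting, since it calls close() on exit no matter which exception was raised:

from contextlib import closing

# closing() calls pconn.close() on exit, whatever exception was raised inside
with closing(self.__get_connection__()) as pconn:
    cur = pconn.cursor()
    cur.execute(sql, values)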
import sshtunnel
import time
import logging
import mysql.connector

class SelectCommand():
    def __init__(self, dbcmd, value = None, mul = False):
        self.dbcmd = dbcmd
        self.value = value
        self.mul = mul

    def execute(self):
        try:
            print("try1")
            connection = mysql.connector.connect(
                user='**myuser**', password='**pass**',
                host='127.0.0.1', port=server.local_bind_port,
                database='**myuser$test**', autocommit = True
            )
            print("try2")
            connection.autocommit = True
            mycursor = connection.cursor()
            sql = self.dbcmd
            val = self.value
            mycursor.execute(sql, val)
            myresult = mycursor.fetchone()
            mycursor.close()
            connection.close()
            if myresult == None or self.mul == True:
                return myresult
            return myresult[0]
        except Exception as e:
            print(e)
            return "server disconnect "

sshtunnel.SSH_TIMEOUT = 5.0
sshtunnel.TUNNEL_TIMEOUT = 5.0

def get_server():
    #sshtunnel.DEFAULT_LOGLEVEL = logging.DEBUG
    server = sshtunnel.SSHTunnelForwarder(
        ('ssh.pythonanywhere.com'),
        ssh_username='**myuser**', ssh_password='**mypass**',
        remote_bind_address=('**myuser.mysql.pythonanywhere-services.com**', 3306))
    return server

server = get_server()
server.start()
while True:
    if(server.is_active):
        print("alive... " + (time.ctime()))
        print(SelectCommand("SELECT * FROM A_table WHERE id = %s", (1,), mul = True).execute())
    else:
        print("reconnecting... " + time.ctime())
        server.stop()
        server = get_server()
        server.start()
    time.sleep(8)
I want to use sshtunnel to connect to my PythonAnywhere database, and I want to check whether the tunnel is connected: if it is, run the select command; otherwise, wait for a new connection. I tried the approach from Python: Automatically reconnect ssh tunnel after remote server gone down, but my problem is that when I turn off my WiFi while querying the database,
my console shows the message (Could not establish connection from ('127.0.0.1', 54466) to remote side of the tunnel) and Socket exception: An existing connection was forcibly closed by the remote host (10054), and then my program stops. How can I fix this?
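One direction (a sketch only, reusing the names from the code above, not a verified fix) is to wrap the loop body in try/except so that a dropped tunnel triggers a rebuild instead of killing the program:

while True:
    try:
        if server.is_active:
            print("alive... " + time.ctime())
            print(SelectCommand("SELECT * FROM A_table WHERE id = %s", (1,), mul=True).execute())
        else:
            raise RuntimeError("tunnel inactive")
    except Exception as e:
        # covers the 10054 socket error as well as an inactive tunnel:
        # tear the old tunnel down and build a fresh one
        print("reconnecting... " + time.ctime() + " (" + str(e) + ")")
        try:
            server.stop()
        except Exception:
            pass  # the tunnel may already be gone
        server = get_server()
        server.start()
    time.sleep(8)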
#!/usr/bin/env python
import pika

def doQuery( conn, i ) :
    cur = conn.cursor()
    cur.execute("SELECT * FROM table OFFSET %s LIMIT 100000", (i,))
    return cur.fetchall()

print "Using psycopg2"
import psycopg2
myConnection = psycopg2.connect( host=hostname, user=username,
                                 password=password, dbname=database )

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue2')

endloop = False
i = 1
while True:
    results = doQuery( myConnection, i )
    j = 0
    while j < 10000:
        try:
            results[j][-1]
        except:
            endloop = True
            break
        message = str(results[j][-1]).encode("hex")
        channel.basic_publish(exchange='',
                              routing_key='task_queue2',
                              body=message
                              #properties=pika.BasicProperties(
                              #delivery_mode = 2, # make message persistent
                              )#)
        j = j + 1
    # if i % 10000 == 0:
    #     print i
    if endloop:  # stop once a batch comes back short
        break
    i = i + 10000
The SQL query takes too long to execute once i reaches 100,000,000, and I have about two billion entries to put into the queue. Does anyone know a more efficient SQL query I can run to get all two billion entries into the queue faster?
psycopg2 supports server-side cursors, that is, cursors managed on the database server rather than in the client. The full result set is not transferred to the client all at once; instead it is fed to the client as required via the cursor interface.
This lets you perform the query without paging (which is what LIMIT/OFFSET implements) and simplifies your code. To use a server-side cursor, pass the name parameter when creating the cursor.
import pika
import psycopg2

with psycopg2.connect(host=hostname, user=username, password=password, dbname=database) as conn:
    with conn.cursor(name='my_cursor') as cur:  # create a named server-side cursor
        cur.execute('select * from table')

        connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
        channel = connection.channel()
        channel.queue_declare(queue='task_queue2')

        for row in cur:
            message = str(row[-1]).encode('hex')
            channel.basic_publish(exchange='', routing_key='task_queue2', body=message)
You might want to tweak cur.itersize to improve performance if necessary.
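For example (psycopg2's documented default is 2000 rows per round trip), set it before iterating over the cursor:

cur.itersize = 20000  # rows fetched per server round trip; psycopg2's default is 2000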
I'm building a project that reads RFID tags in Python on a Raspberry Pi, using an RDM880 reader.
My idea is to record time in and time out so I can check whether staff arrive at work on time.
I'm trying to add card_ID, time_in, and time_out to both a local MySQL server and a remote MySQL server (IP: 192.168.137.1) using Python.
The remote and local MySQL servers have identical tables.
If the remote MySQL server is down, I want to write only to the local one.
Here is my code:
import serial
import time
import RPi.GPIO as GPIO
import MySQLdb
from datetime import datetime
from binascii import hexlify

serial = serial.Serial("/dev/ttyAMA0",
                       baudrate=9600,
                       parity=serial.PARITY_NONE,
                       stopbits=serial.STOPBITS_ONE,
                       bytesize=serial.EIGHTBITS,
                       timeout=0.1)
db_local = MySQLdb.connect("localhost","root","root","luan_van") #connect local
db = MySQLdb.connect("192.168.137.1", "root_a","","luan_van") #connect remote
ID_rong = 128187 # reader response if no card
chuoi = "\xAA\x00\x03\x25\x26\x00\x00\xBB"

def RFID(str): #function read RFID via uart
    serial.write(chuoi)
    data = serial.readline()
    tach_5 = data[5]
    tach_6 = data[6]
    hex_5 = hexlify(tach_5)
    hex_6 = hexlify(tach_6)
    num_5 = int(hex_5,16)
    num_6 = int(hex_6,16)
    num_a = num_5 * 1000 + num_6
    if(num_a != ID_rong):
        tach_7 = data[7]
        tach_8 = data[8]
        hex_7 = hexlify(tach_7)
        hex_8 = hexlify(tach_8)
        num_7 = int(hex_7,16)
        num_8 = int(hex_8,16)
        num = num_8 + num_7 * 1000 + num_6 * 1000000 + num_5 * 1000000000
    else:
        num = num_5 * 1000 + num_6
    return num

def add_database(): # add card_ID and time_in to remote mysql
    with db:
        cur = db.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES ('%d',NOW()) " %num)
    return

def add_database_local(): # add card_ID and time_in to local mysql
    with db_local:
        cur = db_local.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES ('%d',NOW()) " %num)
    return

def have_ID(int): #check ID in table tt_control
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("SELECT * FROM tt_control WHERE Card_ID = '%d'" %num)
        rows = cur.fetchall()
        ID = ""
        for row in rows:
            ID = row['Card_ID']
        return ID

def add_time_out(): #add time out to remote mysql
    with db:
        cur = db.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Time_out = NOW() WHERE Card_ID = '%d'" %num)
    return

def add_time_out_local(): #add time out to local mysql
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Time_out = NOW() WHERE Card_ID = '%d'" %num)
    return

def add_OUT(): #increase Card_ID to distinguish second check
    with db:
        cur = db.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Card_ID = Card_ID + 1 WHERE Card_ID = '%d'" %num)
    return

def add_OUT_local(): #increase Card_ID to distinguish second check
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Card_ID = Card_ID + 1 WHERE Card_ID = '%d'" %num)
    return

while 1:
    num = RFID(chuoi)
    time.sleep(1)
    Have_ID = have_ID(num)
    if(num != ID_rong):
        if(Have_ID == ""):
            add_database() #---> it will error if remote broken, how can I fix it?
            add_database_local()
        else:
            add_time_out() #---> it will error if remote broken, how can I fix it? I think a keep-alive connection could fix it, but I don't know
            add_time_out_local()
            add_OUT()
            add_OUT_local() #---> it will error if remote broken, how can I fix it?
You have a couple of choices:
(not as good) Ping the server regularly to keep the connection alive.
(best) Handle the MySQLdb exception when calling cur.execute by re-establishing your connection and retrying the call. Here's an excellent and concise answer on how to do just that. From that answer, you handle the exception yourself:
def __execute_sql(self,sql,cursor):
    try:
        cursor.execute(sql)
        return 1
    except MySQLdb.OperationalError, e:
        if e[0] == 2006:
            self.logger.do_logging('info','DB', "%s : Restarting db" %(e))
            self.start_database()
        return 0
(lastly) Establish a new database connection just before you actually make the database calls. In this case, move the db and db_local definitions into a function that you call just before creating your cursor. If you're making thousands of queries this isn't the best approach, but for only a few database queries it's probably fine. A sketch of this applied to the question's code follows below.
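As a sketch of how that could look for the code in the question (the helper name is hypothetical; the fallback behavior of writing only locally when the remote server is down follows the question's requirement):

def remote_execute(sql):
    """Try the remote server; reconnect once on failure, otherwise skip it
    so the local write still happens."""
    global db
    for attempt in range(2):
        try:
            with db:
                cur = db.cursor()
                cur.execute(sql)
            return True
        except MySQLdb.OperationalError:
            try:
                db = MySQLdb.connect("192.168.137.1", "root_a", "", "luan_van")
            except MySQLdb.OperationalError:
                break  # remote is down; give up and continue locally
    return False

# Usage in the main loop: the local insert always runs,
# the remote insert is attempted but allowed to fail.
remote_execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES ('%d',NOW())" % num)
add_database_local()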
I use the following method:
def checkConn(self):
    sq = "SELECT NOW()"
    try:
        self.cur.execute( sq )
    except pymysql.Error as e:
        if e.args[0] == 2006:  # pymysql reports the error code in args[0]
            return self.connect()
        else:
            print ( "No connection with database." )
            return False
I used a simple technique. Initially, I connected to the DB using:
conn = mysql.connector.connect(host=DB_HOST, user=DB_USER, password=DB_PASS, database=DB_NAME)
Whenever I need to check whether the DB is still connected, I use the line:
conn.ping(reconnect=True, attempts=3, delay=2)
This checks whether the DB connection is still alive. If not, it restarts the connection, which solves the problem.
It makes sense not to call a status-checker function before executing SQL. Best practice is to handle the exception afterward and reconnect to the server.
Since the client library lives on the client side, there is no way to know the server's status (and the connection status does depend on the server's status, of course) unless we ping it or connect to it.
Even if you ping the server and confirm the connection is fine, the connection could theoretically still drop in the sliver of time before the next line executes. So checking the status does not guarantee a good connection when you actually run the query.
On the other hand, a ping is about as expensive as most operations. If your operation fails because of a bad connection, that failure tells you exactly what a ping would have.
Considering all this, why bother with ping or any other status checker, built in or not? Just execute your command as if the connection is up, then handle the exception in case it is down. This may well be why the mysqlclient library does not provide a built-in status checker in the first place.
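A minimal sketch of that pattern (assuming mysqlclient/MySQLdb; error codes 2006 and 2013 are the 'gone away' and 'lost connection' errors seen above):

import MySQLdb

def execute_with_retry(conn_factory, conn, sql, params=None, retries=1):
    """Run the query optimistically; reconnect and retry only if the
    server reports a dropped connection."""
    for attempt in range(retries + 1):
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            return conn, cur.fetchall()
        except MySQLdb.OperationalError as e:
            if e.args[0] not in (2006, 2013) or attempt == retries:
                raise  # a different error, or out of retries
            conn = conn_factory()  # reconnect and try again

# Usage: the (possibly new) connection is returned alongside the rows.
factory = lambda: MySQLdb.connect(host="localhost", user="root", passwd="***", db="dns")
conn = factory()
conn, rows = execute_with_retry(factory, conn, "select * from clients")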