I am building a project that reads RFID tags using Python on a Raspberry Pi with an RDM880 reader.
The idea is to record each staff member's time in and time out so I can check whether they arrive at work on time.
I am trying to add card_ID, time_in, and time_out to both a local MySQL database and a remote one (IP: 192.168.137.1) using Python.
Both the remote and local MySQL servers have the same table.
If the remote MySQL server is broken, I want to add only to the local MySQL database.
Here is my code:
import serial
import time
import RPi.GPIO as GPIO
import MySQLdb
from datetime import datetime
from binascii import hexlify

ser = serial.Serial("/dev/ttyAMA0",   # renamed from `serial` so it no longer shadows the module
                    baudrate=9600,
                    parity=serial.PARITY_NONE,
                    stopbits=serial.STOPBITS_ONE,
                    bytesize=serial.EIGHTBITS,
                    timeout=0.1)

db_local = MySQLdb.connect("localhost", "root", "root", "luan_van")  # connect local
db = MySQLdb.connect("192.168.137.1", "root_a", "", "luan_van")      # connect remote

ID_rong = 128187  # reader response when no card is present
chuoi = "\xAA\x00\x03\x25\x26\x00\x00\xBB"

def RFID(cmd):  # read the RFID tag via UART
    ser.write(cmd)
    data = ser.readline()
    tach_5 = data[5]
    tach_6 = data[6]
    hex_5 = hexlify(tach_5)
    hex_6 = hexlify(tach_6)
    num_5 = int(hex_5, 16)
    num_6 = int(hex_6, 16)
    num_a = num_5 * 1000 + num_6
    if num_a != ID_rong:
        tach_7 = data[7]
        tach_8 = data[8]  # was data[7], which read byte 7 twice
        hex_7 = hexlify(tach_7)
        hex_8 = hexlify(tach_8)
        num_7 = int(hex_7, 16)
        num_8 = int(hex_8, 16)
        num = num_8 + num_7 * 1000 + num_6 * 1000000 + num_5 * 1000000000
    else:
        num = num_5 * 1000 + num_6
    return num

def add_database():  # add card_ID and time_in to remote MySQL
    with db:
        cur = db.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES ('%d',NOW())" % num)

def add_database_local():  # add card_ID and time_in to local MySQL
    with db_local:
        cur = db_local.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES ('%d',NOW())" % num)

def have_ID(num):  # check whether the ID is already in table tt_control
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("SELECT * FROM tt_control WHERE Card_ID = '%d'" % num)
        rows = cur.fetchall()
        ID = ""
        for row in rows:
            ID = row['Card_ID']
        return ID

def add_time_out():  # add time out to remote MySQL
    with db:
        cur = db.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Time_out = NOW() WHERE Card_ID = '%d'" % num)

def add_time_out_local():  # add time out to local MySQL
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Time_out = NOW() WHERE Card_ID = '%d'" % num)

def add_OUT():  # increase Card_ID on the remote to distinguish a second check
    with db:
        cur = db.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Card_ID = Card_ID + 1 WHERE Card_ID = '%d'" % num)

def add_OUT_local():  # increase Card_ID locally to distinguish a second check
    with db_local:
        cur = db_local.cursor(MySQLdb.cursors.DictCursor)
        cur.execute("UPDATE tt_control SET Card_ID = Card_ID + 1 WHERE Card_ID = '%d'" % num)

while True:
    num = RFID(chuoi)
    time.sleep(1)
    Have_ID = have_ID(num)
    if num != ID_rong:
        if Have_ID == "":
            add_database()        # ---> this errors if the remote is down; how can I fix it?
            add_database_local()
        else:
            add_time_out()        # ---> this errors if the remote is down; I think a keep-alive might fix it, but I don't know how
            add_time_out_local()
            add_OUT()             # ---> this errors if the remote is down; how can I fix it?
            add_OUT_local()
You have a couple of choices:
(not as good) Ping the server regularly to keep the connection alive.
(best) Handle the MySQLdb exception when calling cur.execute by re-establishing your connection and trying the call again. Here's an excellent and concise answer on how to do just that. From that answer, you handle the exception yourself:
def __execute_sql(self, sql, cursor):
    try:
        cursor.execute(sql)
        return 1
    except MySQLdb.OperationalError, e:
        if e[0] == 2006:  # error 2006: "MySQL server has gone away"
            self.logger.do_logging('info', 'DB', "%s : Restarting db" % (e))
            self.start_database()
            return 0
(lastly) Establish a new database connection just before you actually make the database calls. In this case, move the db and db_local definitions into a function that you call just before creating your cursor. If you're making thousands of queries this isn't the best approach, but for only a few database queries it's probably fine; see the sketch below.
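For the code in the question, that last option might look something like this. This is a minimal sketch, not tested against the RDM880 setup: the host and credentials come from the question, and returning None so the caller can fall back to the local database is my assumption about the desired behavior:

import MySQLdb

def get_remote_db():
    """Open a fresh remote connection, or return None if the server is down."""
    try:
        return MySQLdb.connect("192.168.137.1", "root_a", "", "luan_van")
    except MySQLdb.OperationalError:
        return None  # remote unreachable; caller should fall back to local only

def add_database(num):  # hypothetical reworking of the asker's add_database()
    db = get_remote_db()
    if db is None:
        return False  # skip the remote insert; the local insert still happens
    try:
        cur = db.cursor()
        cur.execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES (%s, NOW())", (num,))
        db.commit()
        return True
    finally:
        db.close()

The same guard would apply to add_time_out() and add_OUT(); the local variants keep using the long-lived db_local connection.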
I use the following method:
def checkConn(self):
    sq = "SELECT NOW()"
    try:
        self.cur.execute(sq)
    except pymysql.Error as e:
        if e.args[0] == 2006:  # pymysql carries the error code in args[0]; 2006 = "server has gone away"
            return self.connect()
        else:
            print("No connection with database.")
            return False
I used a simple technique. Initially, I connected to the DB using:

import mysql.connector

conn = mysql.connector.connect(host=DB_HOST, user=DB_USER, password=DB_PASS, database=DB_NAME)

Whenever I need to check whether the DB is still connected, I use the line:

conn.ping(reconnect=True, attempts=3, delay=2)

This checks whether the connection is still alive. If not, it restarts the connection, which solves the problem.
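Applied to the question's remote inserts, that could look like the following sketch (mysql.connector stands in for MySQLdb here, as in this answer; safe_execute is a hypothetical helper, not a library function):

import mysql.connector

conn = mysql.connector.connect(host="192.168.137.1", user="root_a", password="", database="luan_van")

def safe_execute(sql, params=None):
    conn.ping(reconnect=True, attempts=3, delay=2)  # re-establish the session if it has dropped
    cur = conn.cursor()
    cur.execute(sql, params)
    conn.commit()

safe_execute("INSERT INTO tt_control(Card_ID,Time_in) VALUES (%s, NOW())", (num,))  # num as read in the main loop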
It makes sense not to call a status-checker function before executing SQL. Best practice is to handle the exception after the fact and reconnect to the server then.
Since the client library lives on the client side, there is no way to know the server's status (and the connection status does depend on the server's status) unless we ping it or connect to it.
Even if you ping the server, confirm the connection is fine, and let the code run on to the next line, the connection could in theory still drop within that sliver of time. So checking the status first still does not guarantee a good connection when you actually execute.
On the other hand, a ping is about as expensive as most operations. If your operation fails because of a bad connection, that failure tells you as much as the ping would have.
Considering all this, why bother with a ping or any other status check, built-in or not? Just execute your command as if the connection is up, then handle the exception in case it is down. This may be why the mysqlclient library does not provide a built-in status checker in the first place.
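Applied to the question's setup, the execute-first pattern might look like this minimal sketch (run_query and the single-retry policy are illustrative assumptions, not part of MySQLdb):

import MySQLdb

def run_query(db, sql, params=None):
    """Execute optimistically; reconnect and retry once if the server went away."""
    try:
        cur = db.cursor()
        cur.execute(sql, params)
        db.commit()
        return db
    except MySQLdb.OperationalError as e:
        if e.args[0] not in (2006, 2013):  # 2006/2013: server gone away / lost connection
            raise
        db = MySQLdb.connect("192.168.137.1", "root_a", "", "luan_van")  # fresh connection
        cur = db.cursor()
        cur.execute(sql, params)
        db.commit()
        return db

The function returns the (possibly new) connection so the caller can keep using it for the next query.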
Related
I am using the MariaDB database connector for Python, and I have a singleton database class that is responsible for creating a pool and performing database operations on that pool. I have made every effort to close each connection after every access. But still, after a while the pool becomes unusable and gets stuck, never to be freed. This might be a bug in the connector or a bug in my code. Once the pool is exhausted, I create and return a normal connection, which is not efficient for every database access.
Here's my database module code:
import mariadb
import configparser
import sys
from classes.logger import AppLogger

logger = AppLogger(__name__)
connections = 0

class Db:
    """
    Main database for the application
    """
    config = configparser.ConfigParser()
    config.read('/app/config/conf.ini')
    db_config = config['db']
    try:
        conn_pool = mariadb.ConnectionPool(
            user=db_config['user'],
            password=db_config['password'],
            host=db_config['host'],
            port=int(db_config['port']),
            pool_name=db_config['pool_name'],
            pool_size=int(db_config['pool_size']),
            database=db_config['database'],
        )
    except mariadb.PoolError as e:
        print(f'Error creating connection pool: {e}')
        logger.error(f'Error creating connection pool: {e}')
        sys.exit(1)

    def get_pool(self):
        return self.conn_pool if self.conn_pool is not None else self.create_pool()

    def __get_connection__(self):
        """
        Returns a db connection
        """
        global connections
        try:
            pconn = self.conn_pool.get_connection()
            pconn.autocommit = True
            print(f"Receiving connection. Auto commit: {pconn.autocommit}")
            connections += 1
            print(f"New Connection. Open Connections: {connections}")
            logger.debug(f"New Connection. Open Connections: {connections}")
        except mariadb.PoolError as e:
            print(f"Error getting pool connection: {e}")
            logger.error(f'Error getting pool connection: {e}')
            # exit(1)
            pconn = self.__create_connection__()
            pconn.autocommit = True
            connections += 1
            logger.debug(f'Created normal connection following failed pool access. Connections: {connections}')
        return pconn

    def __create_connection__(self):
        """
        Creates a new connection. Use this when getting a
        pool connection fails
        """
        db_config = self.db_config
        return mariadb.connect(
            user=db_config['user'],
            password=db_config['password'],
            host=db_config['host'],
            port=int(db_config['port']),
            database=db_config['database'],
        )

    def exec_sql(self, sql, values=None):
        global connections
        pconn = self.__get_connection__()
        try:
            cur = pconn.cursor()
            print(f'Sql: {sql}')
            print(f'values: {values}')
            cur.execute(sql, values)
            # pconn.commit()
            # Is this a select operation?
            if sql.lower().startswith('select'):
                result = cur.fetchall()  # return a result set for select operations
            else:
                result = True  # for insert, update, and delete operations
            pconn.close()
            connections -= 1
            print(f'connection closed: connections: {connections}')
            logger.debug(f'connection closed: connections: {connections}')
            return result
        except mariadb.Error as e:
            print(f"Error performing database operations: {e}")
            # pconn.rollback()
            pconn.close()
            connections -= 1
            print(f'connection closed: connections: {connections}')
            return False
To use the class in a module, I import it, instantiate an object, and run SQL queries on it:

db = Db()
users = db.exec_sql("SELECT * FROM users")

Any ideas why the pool gets exhausted after a while (maybe days) and never recovers?
Maybe an exception other than mariadb.Error is raised sometimes, so the connection is never closed. I believe the best practice is to use a finally section to guarantee that the connection is always closed, like this:
pconn = None
try:
    pconn = self.__get_connection__()
    # ...
except mariadb.Error as e:
    # ...
    pass
finally:
    if pconn:
        try:
            pconn.close()
        except Exception:
            # Not really expected, but if this ever happens it should not alter
            # whatever happened in the try or except sections above.
            pass
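An equivalent shape using contextlib.closing, in case it reads cleaner. This is a sketch against the exec_sql method above; it assumes closing the connection is the only cleanup needed, and it propagates errors instead of returning False:

from contextlib import closing

def exec_sql(self, sql, values=None):
    with closing(self.__get_connection__()) as pconn:  # close() runs even if execute raises
        cur = pconn.cursor()
        cur.execute(sql, values)
        if sql.lstrip().lower().startswith('select'):
            return cur.fetchall()
        return True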
I'm getting threading errors when I try to use or create a DB cursor in my process_id function. Each thread has to use the database to process data for its passed id.
I can't use a cursor in the thread/process_id at all (I get threading errors and the DB never updates), and I've coded it a lot of different ways. The code works when I don't use threads.
I have very specific requirements for how this code is to be written; slow and stable is fine. I also cut out a lot of error handling/logging before posting. The daemon/infinite loop is required.
How do I spin up a new cursor in each thread?
import threading
import time
from datetime import datetime
import os
import jaydebeapi, sys

# Enter the values for your database connection
database = "REMOVED"
hostname = "REMOVED"
port = "REMOVED"
uid = "REMOVED"
pwd = "REMOVED"

connection_string = 'jdbc:db2://' + hostname + ':' + port + '/' + database

if sys.version_info >= (3, 0):
    conn = jaydebeapi.connect("com.ibm.db2.jcc.DB2Driver", connection_string, [uid, pwd], jars="REMOVED")
else:
    conn = jaydebeapi.connect("com.ibm.db2.jcc.DB2Driver", [connection_string, uid, pwd])

# Thread pool variables
max_threads = 5
used_threads = 0

# define main cursor
cus = conn.cursor()

def process_id(id):
    # create a cursor for a thread
    cus_id = conn.cursor()
    cus_id.execute("SOME QUERY;")
    cus_id.close()
    global used_threads
    used_threads = used_threads - 1
    return 0

def daemon():
    global num_threads, used_threads
    print("Daemon running...")
    while True:
        # ids to process are loaded into a list...
        for id in ids_to_process:
            if used_threads < max_threads:
                t = threading.Thread(target=process_id, args=(int(id),))
                t.start()
                used_threads += 1
    return 0

daemon()
#!/usr/bin/env python
import pika

def doQuery(conn, i):
    cur = conn.cursor()
    cur.execute("SELECT * FROM table OFFSET %s LIMIT 100000", (i,))
    return cur.fetchall()

print "Using psycopg2"
import psycopg2
myConnection = psycopg2.connect(host=hostname, user=username,
                                password=password, dbname=database)

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue2')

endloop = False
i = 1
while True:
    results = doQuery(myConnection, i)
    j = 0
    while j < 10000:
        try:
            results[j][-1]
        except:
            endloop = True
            break
        message = str(results[j][-1]).encode("hex")
        channel.basic_publish(exchange='',
                              routing_key='task_queue2',
                              body=message
                              #properties=pika.BasicProperties(
                              #    delivery_mode = 2, # make message persistent
                              )#)
        j = j + 1
    # if i % 10000 == 0:
    #     print i
    if endloop == False:
        break
    i = i + 10000
The SQL query takes too long to execute once i gets to 100,000,000, but I have about two billion entries I need to put into the queue. Does anyone know a more efficient SQL query I can run, so I can get all two billion into the queue faster?
psycopg2 supports server-side cursors, that is, a cursor that is managed on the database server rather than in the client. The full result set is not transferred all at once to the client; rather, it is fed to it as required via the cursor interface.
This will allow you to perform the query without paging (as LIMIT/OFFSET implements), and it will simplify your code. To use a server-side cursor, pass the name parameter when creating the cursor.
import pika
import psycopg2

with psycopg2.connect(host=hostname, user=username, password=password, dbname=database) as conn:
    with conn.cursor(name='my_cursor') as cur:  # create a named server-side cursor
        cur.execute('select * from table')
        connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
        channel = connection.channel()
        channel.queue_declare(queue='task_queue2')
        for row in cur:
            message = str(row[-1]).encode('hex')
            channel.basic_publish(exchange='', routing_key='task_queue2', body=message)
You might want to tweak cur.itersize to improve performance if necessary.
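For instance (psycopg2's default itersize is 2000; the value below is only an illustration, and the right number depends on row size and network latency):

cur.itersize = 10000  # rows fetched from the server per network round trip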
I am new to Python and thought I would practice what I have been learning by completing a little task. Essentially, I am inserting a Cognimatic camera's data into a database from a .csv file that I pulled from the web. Sadly, I have had to omit all the connection details, as the system can only be accessed from my work computer, which means the script cannot be run as posted.
To the problem!
I have a for loop that iterates through the cameras in the system, running this script:
#!/usr/bin/python
import pymssql
import urllib2
import sys
import getpass
import csv
import os

attempts = 0  # connection attempt counter
# check db connection with tsql -H cabernet.ad.uow.edu.au -p 1433 -U ADUOW\\mbeavis -P mb1987 -D library_gate_counts
server = "*****"  # server address
#myUser = 'ADUOW\\' + raw_input("User: ")  # user and password for server. Will this be needed when the script runs on the server? # Ask David
#passw = getpass.getpass("Password: ")

while attempts < 3:  # attempt to connect 3 times
    try:  # try connection
        conn = pymssql.connect(server=server, user='****', password='****', database="***",
                               port='1433', timeout=15, login_timeout=15)
        break
    except pymssql.Error as e:  # if connection fails, print error information
        attempts += 1
        print type(e)
        print e.args

camCursor = conn.cursor()  # creates a cursor on the database
camCursor.execute("SELECT * FROM dbo.CAMERAS")  # selects the camera names and connection details

for rows in camCursor:
    print rows
Everything is fine and the loop runs as it should. However, when I actually try to do anything with the data, the loop runs once and ends. This is the full script:
#!/usr/bin/python
import pymssql
import urllib2
import sys
import getpass
import csv
import os

attempts = 0  # connection attempt counter
# check db connection with tsql -H cabernet.ad.uow.edu.au -p 1433 -U ADUOW\\mbeavis -P mb1987 -D library_gate_counts
server = "*****"  # server address
#myUser = 'ADUOW\\' + raw_input("User: ")  # user and password for server. Will this be needed when the script runs on the server? # Ask David
#passw = getpass.getpass("Password: ")

while attempts < 3:  # attempt to connect 3 times
    try:  # try connection
        conn = pymssql.connect(server=server, user='****', password='****', database="***",
                               port='1433', timeout=15, login_timeout=15)
        break
    except pymssql.Error as e:  # if connection fails, print error information
        attempts += 1
        print type(e)
        print e.args

camCursor = conn.cursor()  # creates a cursor on the database
camCursor.execute("SELECT * FROM dbo.CAMERAS")  # selects the camera names and connection details

for rows in camCursor:
    print rows
    cameraName = str(rows[0])         # converts UNICODE camera name to string
    connectionDetails = str(rows[1])  # converts UNICODE connection details to string
    try:  # try connection
        # connect to webpage; this will be changed to loop through the entire range of cameras, which will
        # have their names and connection details stored in a separate database table
        prefix = "***"
        suffix = "**suffix"
        response = urllib2.urlopen(prefix + connectionDetails + suffix, timeout=5)
        content = response.read()  # read the data for the csv page into content
        f = open("/tmp/test.csv", 'w')  # open a file for writing (test phase only)
        f.write(content)  # write the data stored in content to file
        f.close()  # close file
        print content  # prints out content
        with open("/tmp/test.csv", 'rb') as csvFile:  # opens the .csv file previously created
            reader = csv.DictReader(csvFile)  # the first row provides the dictionary keys for the following rows
            for row in reader:  # loop through each row
                start = row['Interval start']
                end = row['Interval stop']
                camName = row['Counter name']
                pplIn = int(row['Pedestrians coming in'])
                pplOut = int(row['Pedestrians going out'])
                insertCursor = conn.cursor()
                insert = "INSERT INTO dbo.COUNTS VALUES (%s, %s, %d, %d)"
                insertCursor.execute(insert, (camName, start, pplIn, pplOut))
                conn.commit()
    except urllib2.URLError as e:  # catch URL errors
        print type(e)
        print e.args
    except urllib2.HTTPError as e:  # catch HTTP errors
        print type(e)
        print e.code
I have been scratching my head, as I cannot see why there is a problem; maybe I just need some fresh eyes on it. Any help would be great, cheers!
Have you tried doing something like this?

queryResult = camCursor.execute("SELECT * FROM dbo.CAMERAS")
for rows in queryResult:
    ...

I guess this might solve the problem, which is probably that you're trying to iterate over the cursor instead of the results.
You might find this way interesting as well:
camCursor.execute("SELECT * FROM dbo.CAMERAS")
for rows in camCursor.fetchall():
    ...
Source: https://docs.python.org/2/library/sqlite3.html
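In the posted script, the fetchall variant would look like the sketch below. Materializing the camera rows first is plausible here because the inner INSERTs and conn.commit() run on the same pymssql connection, which can invalidate a cursor that still has rows pending (my reading of the symptom, not something confirmed in the question):

camCursor.execute("SELECT * FROM dbo.CAMERAS")
cameras = camCursor.fetchall()  # pull all camera rows before issuing inserts on this connection
for rows in cameras:
    cameraName = str(rows[0])
    connectionDetails = str(rows[1])
    # ... fetch the CSV and run the INSERTs exactly as before ...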
I am a newbie in Python, so this is my first project in the language.
Every time I try to run my script, I get a different answer from the MySQL server.
The most frequent answer is OperationalError: (2006, 'MySQL server has gone away').
Sometimes I get the output Thread: 11 commited (see code below).
And sometimes an emergency stop (translated; I have Russian output in the console).
Even when the output is full of commited, the records in the table stay the same.
import MySQLdb
import pyping
import socket, struct
from threading import Thread

def ip2int(addr):
    """Convert ip to integer"""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def int2ip(addr):
    """Convert integer to ip"""
    return socket.inet_ntoa(struct.pack("!I", addr))

def ping(ip):
    """Pinging client"""
    request = pyping.ping(ip, timeout=100, count=1)
    return int(request.max_rtt)

class UpdateThread(Thread):
    def __init__(self, records, name):
        Thread.__init__(self)
        self.database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
        self.cursor = database.cursor()
        self.name = name
        self.records = records

    def run(self):
        print(self.name)
        for r in self.records:
            #latency = ping(int2ip(r[1])) what the hell :x
            #ip = str(int2ip(r[1]))
            id = str(r[0])
            self.cursor.execute("""update clients set has_subn=%s where id=%s""" % (id, id))
            self.database.commit()
        print(self.name + " commited")

#start
database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
cursor = database.cursor()
cursor.execute("""select * from clients""")
data = cursor.fetchall()  # all records from the database
count = len(data)

threads_counter = 10  # we are creating 10 threads for all records
th_count = count / threads_counter  # count of records for each thread
last_thread = count % threads_counter  # last records

threads = []
i = 0
while i < (count - last_thread):
    temp_list = data[i:(i + th_count)]
    #print(temp_list)
    threads.append(UpdateThread(records=temp_list, name="Thread: " + str((i / 3) + 1)).start())
    i += th_count
threads.append(UpdateThread(records=data[i:count], name="Thread: 11").start())
P.S.
The other answers I found here did not help me.
UPD:
I found that some thread (a different one every time) prints
OperationalError: (2013, 'Lost connection to MySQL server during query'), and then all subsequent threads print the same error.
You need to close your DB connections when you're done with them, or the DB server will become overwhelmed and expire your connections. For your program, I would change the code so that you have only one DB connection. Pass a reference to it into your UpdateThread instances, and close it when you're done:
database.close()
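A minimal sketch of that suggestion applied to the question's class (the Lock is my addition: a single MySQLdb connection is not safe to share across threads without serializing access to it):

import MySQLdb
from threading import Thread, Lock

db_lock = Lock()

class UpdateThread(Thread):
    def __init__(self, records, name, database):
        Thread.__init__(self)
        self.database = database  # one shared connection, passed in
        self.name = name
        self.records = records

    def run(self):
        for r in self.records:
            with db_lock:  # one query at a time on the shared connection
                cursor = self.database.cursor()
                cursor.execute("update clients set has_subn=%s where id=%s", (str(r[0]), str(r[0])))
                self.database.commit()
        print(self.name + " commited")

database = MySQLdb.connect(host="***", port=3306, user="root", passwd="***", db="dns")
# ... create the threads with `database` passed in, start them, and join them ...
database.close()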