python close mysql database connection - python

I have a Python script that uses a MySQL database connection. I want the database connection to be closed when the instance of the class no longer exists, so in my class I implemented a disconnect method as follows:
def disconnect(self):
    '''Disconnect from the MySQL server'''
    if self.conn is not None:
        self.log.info('Closing connection')
        self.conn.close()
        self.conn = None
        self.curs = None
Now I was thinking of calling this method as follows:
def __del__(self):
    self.disconnect()
However, I've read that you cannot assume that the __del__ method will ever be called. If that is the case, what is the correct way? Where and when should I call the disconnect() method?
An important side note is that my script runs as a Unix daemon and is instantiated as follows:
if __name__ == '__main__':
    daemon = MyDaemon(PIDFILE)
    daemonizer.daemonizerCLI(daemon, 'mydaemon', sys.argv[0], sys.argv[1], PIDFILE)
The above takes the class MyDaemon and creates a Unix daemon by executing a double fork.
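A minimal sketch of one possible approach (for illustration only, not taken from the answers below), assuming the daemonizer lets the interpreter shut down normally rather than calling os._exit(): register the cleanup with atexit so disconnect() runs at interpreter exit.
import atexit

class MyDaemon(object):
    def __init__(self, pidfile):
        self.conn = None
        self.curs = None
        # run disconnect() on normal interpreter shutdown; this is bypassed
        # if the process is killed with SIGKILL or exits via os._exit()
        atexit.register(self.disconnect)

    def disconnect(self):
        '''Disconnect from the MySQL server'''
        if self.conn is not None:
            self.conn.close()
            self.conn = None
            self.curs = None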

Maybe you could use a with statement and implement the __exit__ method (plus an __enter__ that returns self, so that with Foo() as foo binds the instance). For example:
class Foo(object):
    def __init__(self):
        # placeholder attributes; in the real class these would hold the
        # MySQL connection, cursor and logger
        self.conn = None
        self.curs = None

    def disconnect(self):
        '''Disconnect from the MySQL server'''
        if self.conn is not None:
            self.log.info('Closing connection')
            self.conn.close()
            self.conn = None
            self.curs = None
        print "disconnect..."

    def __enter__(self):
        return self

    def __exit__(self, *err):
        self.disconnect()

if __name__ == '__main__':
    with Foo() as foo:
        print foo, foo.curs
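A related option, in case rewriting the class as a context manager is not practical: contextlib.closing wraps any object that exposes a close() method, so the connection itself can be used in a with block. A minimal sketch, assuming the MySQLdb driver and placeholder credentials:
from contextlib import closing
import MySQLdb  # assumed driver; any DB-API connection with close() works

with closing(MySQLdb.connect(host="localhost", user="user",
                             passwd="pass", db="database")) as conn:
    curs = conn.cursor()
    curs.execute("SELECT 1")
    print curs.fetchone()
# conn.close() has been called at this point, even if an exception was raised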

Related

SQLite3 context manager with file locking in Python 2.7

Is this a good way to do file locking with re-entrance in a context manager under Python 2.7? I just want to be sure the RLock is going to be effective, so that a multi-threaded application can use a single database file.
import sqlite3
import threading
import os

class ConnectionHolder:
    def __init__(self, connection):
        self.path = connection
        self.lock = threading.RLock()

    def __enter__(self):
        self.lock.acquire()
        self.connection = sqlite3.connect(self.path)
        self.cursor = self.connection.cursor()
        return self.cursor

    def __exit__(self, exc_class, exc, traceback):
        self.connection.commit()
        self.connection.close()
        self.lock.release()

conn_holder = ConnectionHolder(os.path.join(os.path.dirname(__file__), 'data/db/database.db'))

if __name__ == '__main__':
    with conn_holder as c:
        c.execute("SELECT * FROM 'sample_table'")
        result = c.fetchall()
        print result
I finally found a place in my code where I was running a long loop (about 2 minutes) before committing. I corrected this and, as @DB suggested, increased the busy timeout to 30 seconds. The problem seems to be resolved. Thanks, guys!
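For reference, the busy timeout mentioned above can be set directly when opening the connection; sqlite3.connect accepts a timeout in seconds. A sketch using the 30 seconds and the database path from the code above:
import sqlite3

# Wait up to 30 seconds for another connection's lock to be released
# before raising "database is locked".
connection = sqlite3.connect('data/db/database.db', timeout=30)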

sqlalchemy engine.connect() stalled

I use SQLAlchemy to connect to a remote database, but I do not know its type (it can be PostgreSQL, MariaDB, etc.). I try the drivers in a loop and keep the first one that works:
for driver in drivers:
    try:
        uri = get_uri_from_driver(driver)
        engine = create_engine(uri, echo=False)
        print('Try connection')
        con = engine.engine.connect()
        # Try to get some lines
        return engine
    except Exception:
        continue
return None
In some cases con = engine.engine.connect() never returns; this happens, for example, when you use the MySQL driver to connect to something that is not MySQL (such as Oracle).
Questions:
How can I set a timeout on this?
If I cannot, is there any other way to achieve this? (I could, for example, base the test order on the default port, but I would still like to be able to kill the connect() after a few seconds.)
EDIT:
This code runs inside Django, so I cannot use signal/alarm because of multi-threading.
This can be done with a generic timeout solution like in:
What should I do if socket.setdefaulttimeout() is not working?
import signal

class Timeout():
    """Timeout class using ALARM signal"""
    class TimeoutException(Exception): pass

    def __init__(self, sec):
        self.sec = sec

    def __enter__(self):
        signal.signal(signal.SIGALRM, self.raise_timeout)
        signal.alarm(self.sec)

    def __exit__(self, *args):
        signal.alarm(0)  # disable alarm

    def raise_timeout(self, *args):
        raise Timeout.TimeoutException()

# In your example
try:
    uri = get_uri_from_driver(driver)
    engine = create_engine(uri, echo=False)
    print('Try connection')
    with Timeout(10):
        con = engine.engine.connect()
    # Try to get some lines
    return engine
except Exception:
    continue
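Since the question notes that signal/alarm cannot be used under Django's multi-threading, a driver-level connect timeout is another option. This is only a sketch: connect_args is forwarded by SQLAlchemy to the DB-API driver's connect() call, and the connect_timeout keyword is an assumption that holds for drivers such as MySQLdb/PyMySQL and psycopg2 but not necessarily for others.
from sqlalchemy import create_engine

# uri as built by get_uri_from_driver(driver) in the loop above
uri = get_uri_from_driver(driver)
# 'connect_timeout' (in seconds) is passed through to the driver's connect();
# verify the keyword name for the driver actually in use
engine = create_engine(uri, echo=False, connect_args={'connect_timeout': 10})
con = engine.connect()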

python cx_Oracle.Connection proxying to share between processes - I get an error when creating a cursor on a shared connection object

I am trying to write a load app on Oracle using Python and I need some concurrency.
I am doing this by sharing a connection pool with child processes, but before going into that I tried to share a simple Connection object from a manager process with a child.
The connection object is shared properly using the proxy object, but when I try to create a cursor on this connection I get something like <cx_Oracle.Cursor on <NULL>>, and the cursor is not usable.
Here is my code:
import cx_Oracle
from multiprocessing import managers
from multiprocessing import current_process
from multiprocessing import Process
import time

# function to set up the connection object in the manager process
def setupConnection(user, password, dsn):
    conn = cx_Oracle.connect(user=user, password=password, dsn=dsn)
    return conn

# proxy object for my connection
class connectionProxy(managers.BaseProxy):
    def close(self):
        return self._callmethod('close', args=())
    def ping(self):
        return self._callmethod('ping', args=())
    def cursor(self):
        return self._callmethod('cursor', args=())

# connection manager
class connectionManager(managers.BaseManager): pass

# child process work function
def child(conn_proxy):
    print(str(current_process().name) + "Working on connection : " + str(conn_proxy))
    cur = conn_proxy.cursor()
    print(cur)
    cur.execute('select 1 from dual');

if __name__ == '__main__':
    # db details
    user = 'N974783'
    password = '12345'
    dsn = '192.168.56.6:1521/orcl'
    # set up manager process and open the connection
    manager = connectionManager()
    manager.register('set_conn', setupConnection, proxytype=connectionProxy, exposed=('close', 'ping', 'cursor'))
    manager.start()
    # pass the connection to the child process
    conn_proxy = manager.set_conn(user=user, password=password, dsn=dsn)
    p = Process(target=child, args=(conn_proxy,), name='oraWorker')
    p.start()
    p.join()
I get the following output:
oraWorker Working on connection : <cx_Oracle.Connection to N974783#192.168.56.6:1521/orcl>
<cx_Oracle.Cursor on <NULL>> ..
cur.execute('select 1 from dual');
cx_Oracle.InterfaceError: not open
Can someone give me an idea of how I should get past this?
Thanks,
Ionut
The problem is that cursors cannot be passed across the boundary between processes, so you need to wrap the execute method instead. Something like the following; you would need to expand it to handle bind variables and the like, of course.
import cx_Oracle
from multiprocessing import managers
from multiprocessing import current_process
from multiprocessing import Process
import time

class Connection(cx_Oracle.Connection):
    def execute(self, sql):
        cursor = self.cursor()
        cursor.execute(sql)
        return list(cursor)

# function to set up the connection object in the manager process
def setupConnection(user, password, dsn):
    conn = Connection(user=user, password=password, dsn=dsn)
    return conn

# proxy object for my connection
class connectionProxy(managers.BaseProxy):
    def close(self):
        return self._callmethod('close', args=())
    def ping(self):
        return self._callmethod('ping', args=())
    def execute(self, sql):
        return self._callmethod('execute', args=(sql,))

# connection manager
class connectionManager(managers.BaseManager):
    pass

# child process work function
def child(conn_proxy):
    print(str(current_process().name) + "Working on connection : " + str(conn_proxy), id(conn_proxy))
    result = conn_proxy.execute('select 1 from dual')
    print("Result:", result)

if __name__ == '__main__':
    # db details
    user = 'user'
    password = 'pwd'
    dsn = 'tnsentry'
    # set up manager process and open the connection
    manager = connectionManager()
    manager.register('set_conn', setupConnection, proxytype=connectionProxy, exposed=('close', 'ping', 'execute'))
    manager.start()
    # pass the connection to the child process
    conn_proxy = manager.set_conn(user=user, password=password, dsn=dsn)
    p = Process(target=child, args=(conn_proxy,), name='oraWorker')
    p.start()
    p.join()

Accessing a MySQL connection pool from Python multiprocessing

I'm trying to set up a MySQL connection pool and have my worker processes access the already established pool instead of setting up a new connection each time.
I'm confused about whether I should pass the database cursor to each process, or whether there's some other way to do this. Shouldn't mysql.connector do the pooling automatically? When I check my log files, many, many connections are opened and closed... one for each process.
My code looks something like this:
PATH = "/tmp"
class DB(object):
def __init__(self):
connected = False
while not connected:
try:
cnxpool = mysql.connector.pooling.MySQLConnectionPool(pool_name = "pool1",
**config.dbconfig)
self.__cnx = cnxpool.get_connection()
except mysql.connector.errors.PoolError:
print("Sleeping.. (Pool Error)")
sleep(5)
except mysql.connector.errors.DatabaseError:
print("Sleeping.. (Database Error)")
sleep(5)
self.__cur = self.__cnx.cursor(cursor_class=MySQLCursorDict)
def execute(self, query):
return self.__cur.execute(query)
def isValidFile(self, name):
return True
def readfile(self, fname):
d = DB()
d.execute("""INSERT INTO users (first_name) VALUES ('michael')""")
def main():
queue = multiprocessing.Queue()
pool = multiprocessing.Pool(None, init, [queue])
for dirpath, dirnames, filenames in os.walk(PATH):
full_path_fnames = map(lambda fn: os.path.join(dirpath, fn),
filenames)
full_path_fnames = filter(is_valid_file, full_path_fnames)
pool.map(readFile, full_path_fnames)
if __name__ == '__main__':
sys.exit(main())
First, you're creating a different connection pool for each instance of your DB class. The pools having the same name doesn't make them the same pool.
From the documentation:
It is not an error for multiple pools to have the same name. An application that must distinguish pools by their pool_name property should create each pool with a distinct name.
Besides that, sharing a database connection (or connection pool) between different processes would be a bad idea (and I highly doubt it would even work correctly), so each process using its own connections is actually what you should aim for.
You could just initialize the pool in your init initializer as a global variable and use that instead.
Very simple example:
from multiprocessing import Pool
from mysql.connector.pooling import MySQLConnectionPool
from mysql.connector import connect
import os

pool = None

def init():
    global pool
    print("PID %d: initializing pool..." % os.getpid())
    pool = MySQLConnectionPool(...)

def do_work(q):
    con = pool.get_connection()
    print("PID %d: using connection %s" % (os.getpid(), con))
    c = con.cursor()
    c.execute(q)
    res = c.fetchall()
    con.close()
    return res

def main():
    p = Pool(initializer=init)
    for res in p.map(do_work, ['select * from test']*8):
        print(res)
    p.close()
    p.join()

if __name__ == '__main__':
    main()
Or just use a simple connection instead of a connection pool, as only one connection will be active in each process at a time anyway.
The number of concurrently used connections is implicitly limited by the size of the multiprocessing.Pool.
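A minimal sketch of that simpler variant, with one plain connection per worker process created in the initializer (the connection parameters are placeholders):
from multiprocessing import Pool
import os

import mysql.connector

dbconfig = {"host": "127.0.0.1", "user": "root",
            "password": "123456", "database": "test"}  # placeholder credentials
con = None

def init():
    global con
    print("PID %d: opening connection..." % os.getpid())
    con = mysql.connector.connect(**dbconfig)  # one connection per worker process

def do_work(q):
    c = con.cursor()
    c.execute(q)
    res = c.fetchall()
    c.close()
    return res

if __name__ == '__main__':
    p = Pool(initializer=init)
    for res in p.map(do_work, ['select * from test'] * 8):
        print(res)
    p.close()
    p.join()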
#!/usr/bin/python
# -*- coding: utf-8 -*-

import time
from multiprocessing import Pool

import mysql.connector.pooling

dbconfig = {
    "host": "127.0.0.1",
    "port": "3306",
    "user": "root",
    "password": "123456",
    "database": "test",
}

class MySQLPool(object):
    """
    Create a pool when connecting to MySQL, which will decrease the time spent
    requesting a connection, creating a connection and closing a connection.
    """
    def __init__(self, host="127.0.0.1", port="3306", user="root",
                 password="123456", database="test", pool_name="mypool",
                 pool_size=3):
        res = {}
        self._host = host
        self._port = port
        self._user = user
        self._password = password
        self._database = database
        res["host"] = self._host
        res["port"] = self._port
        res["user"] = self._user
        res["password"] = self._password
        res["database"] = self._database
        self.dbconfig = res
        self.pool = self.create_pool(pool_name=pool_name, pool_size=pool_size)

    def create_pool(self, pool_name="mypool", pool_size=3):
        """
        Create a connection pool; once created, a request to connect to
        MySQL can get a connection from this pool instead of creating a
        new connection.
        :param pool_name: the name of the pool, default is "mypool"
        :param pool_size: the size of the pool, default is 3
        :return: connection pool
        """
        pool = mysql.connector.pooling.MySQLConnectionPool(
            pool_name=pool_name,
            pool_size=pool_size,
            pool_reset_session=True,
            **self.dbconfig)
        return pool

    def close(self, conn, cursor):
        """
        A method used to close a MySQL connection.
        :param conn:
        :param cursor:
        :return:
        """
        cursor.close()
        conn.close()

    def execute(self, sql, args=None, commit=False):
        """
        Execute a sql statement, with or without args. The usage is similar
        to the execute() function in the pymysql module.
        :param sql: sql clause
        :param args: args needed by the sql clause
        :param commit: whether to commit
        :return: if commit, return None, else return the result
        """
        # get a connection from the connection pool instead of creating one
        conn = self.pool.get_connection()
        cursor = conn.cursor()
        if args:
            cursor.execute(sql, args)
        else:
            cursor.execute(sql)
        if commit is True:
            conn.commit()
            self.close(conn, cursor)
            return None
        else:
            res = cursor.fetchall()
            self.close(conn, cursor)
            return res

    def executemany(self, sql, args, commit=False):
        """
        Execute with many args. Similar to the executemany() function in pymysql.
        args should be a sequence.
        :param sql: sql clause
        :param args: args
        :param commit: commit or not.
        :return: if commit, return None, else return the result
        """
        # get a connection from the connection pool instead of creating one
        conn = self.pool.get_connection()
        cursor = conn.cursor()
        cursor.executemany(sql, args)
        if commit is True:
            conn.commit()
            self.close(conn, cursor)
            return None
        else:
            res = cursor.fetchall()
            self.close(conn, cursor)
            return res

if __name__ == "__main__":
    mysql_pool = MySQLPool(**dbconfig)
    sql = "select * from store WHERE create_time < '2017-06-02'"
    p = Pool()
    for i in range(5):
        p.apply_async(mysql_pool.execute, args=(sql,))
The code above creates a connection pool at the beginning and gets connections from it in execute(). Since the pool is created only once and then kept around, it saves the time of requesting a new connection every time you want to talk to MySQL.
Hope it helps!
You created multiple DB object instances. In mysql.connector's pooling.py, pool_name is only an attribute that lets you tell pools apart; there is no name-to-pool mapping inside mysql.connector.
So, because you create a new DB instance in readfile(), you end up with several connection pools.
A Singleton is useful in this case.
(I spent several hours finding this out. In the Tornado framework, each HTTP GET creates a new handler, which leads to a new connection being made.)
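A minimal sketch of such a singleton, here as a module-level pool that is created once per process and reused afterwards (the pool name and size are placeholders):
import mysql.connector.pooling

_pool = None

def get_pool(**dbconfig):
    """Create the connection pool on the first call; reuse it on later calls."""
    global _pool
    if _pool is None:
        _pool = mysql.connector.pooling.MySQLConnectionPool(
            pool_name="pool1", pool_size=5, **dbconfig)
    return _pool

# Usage inside readfile()/DB instead of building a new pool each time:
# cnx = get_pool(**config.dbconfig).get_connection()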
There may be synchronization issues if you're going to reuse MySQLConnection instances maintained by a pool, but just sharing a MySQLConnectionPool instance between worker processes and using connections retrieved by calling the method get_connection() would be okay, because a dedicated socket would be created for each MySQLConnection instance.
import multiprocessing
from mysql.connector import pooling

def f(cnxpool: pooling.MySQLConnectionPool) -> None:
    # Dedicate a connection instance for each worker process.
    cnx = cnxpool.get_connection()
    ...

if __name__ == '__main__':
    cnxpool = pooling.MySQLConnectionPool(
        pool_name='pool',
        pool_size=2,
    )
    p0 = multiprocessing.Process(target=f, args=(cnxpool,))
    p1 = multiprocessing.Process(target=f, args=(cnxpool,))
    p0.start()
    p1.start()

With python socketserver how can I pass a variable to the constructor of the handler class

I would like to pass my database connection to the EchoHandler class; however, I can't figure out how to do that, or how to access the EchoHandler class at all.
class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        print self.client_address, 'connected'

if __name__ == '__main__':
    conn = MySQLdb.connect(host="10.0.0.5", user="user", passwd="pass", db="database")
    SocketServer.ForkingTCPServer.allow_reuse_address = 1
    server = SocketServer.ForkingTCPServer(('10.0.0.6', 4242), EchoHandler)
    print "Server listening on localhost:4242..."
    try:
        server.allow_reuse_address
        server.serve_forever()
    except KeyboardInterrupt:
        print "\nbailing..."
Unfortunately, there really isn't an easy way to access the handlers directly from outside the server.
You have two options to get the information to the EchoHandler instances:
Store the connection as a property of the server (add server.conn = conn before calling serve_forever()) and then access that property in EchoHandler.handle through self.server.conn.
You can overwrite the server's finish_request and assign the value there (you would have to pass it to the constructor of EchoHandler and overwrite EchoHandler.__init__). That is a far messier solution and it pretty much requires you to store the connection on the server anyway.
My opinion on your best bet:
class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # I have no idea why you would print this but this is an example
        print self.server.conn
        print self.client_address, 'connected'

if __name__ == '__main__':
    SocketServer.ForkingTCPServer.allow_reuse_address = 1
    server = SocketServer.ForkingTCPServer(('10.0.0.6', 4242), EchoHandler)
    server.conn = MySQLdb.connect(host="10.0.0.5",
                                  user="user", passwd="pass", db="database")
    # continue as normal
Mark T has the following to say on the python list archive
In the handler class, self.server refers to the server object, so subclass
the server and override __init__ to take any additional server parameters
and store them as instance variables.
import SocketServer

class MyServer(SocketServer.ThreadingTCPServer):
    def __init__(self, server_address, RequestHandlerClass, arg1, arg2):
        SocketServer.ThreadingTCPServer.__init__(self,
                                                 server_address,
                                                 RequestHandlerClass)
        self.arg1 = arg1
        self.arg2 = arg2

class MyHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        print self.server.arg1
        print self.server.arg2
Another way, which I believe is more Pythonic, is to do the following:
class EchoHandler(SocketServer.StreamRequestHandler):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __call__(self, request, client_address, server):
        h = EchoHandler(self.a, self.b)
        SocketServer.StreamRequestHandler.__init__(h, request, client_address, server)
You can now give an instance of your handler to the TCPServer:
SocketServer.ForkingTCPServer(('10.0.0.6', 4242), EchoHandler("aaa", "bbb"))
The TCPServer normally creates a new instance of EchoHandler per request but in this case, the __call__ method will be called instead of the constructor (it is already an instance.)
In the __call__ method, I explicitly make a copy of the current EchoHandler and pass it to the super constructor to conform to the original logic of "one handler instance per request".
It is worth having a look at the SocketServer module to understand what happens here: https://github.com/python/cpython/blob/2.7/Lib/SocketServer.py
I was solving the same problem, but I used a slightly different solution that I feel is slightly nicer and more general (inspired by @aramaki).
In EchoHandler you just need to override __init__ and provide a custom Creator method.
class EchoHandler(SocketServer.StreamRequestHandler):
    def __init__(self, request, client_address, server, a, b):
        self.a = a
        self.b = b
        # super().__init__() must be called at the end
        # because it's immediately calling the handle method
        super().__init__(request, client_address, server)

    @classmethod
    def Creator(cls, *args, **kwargs):
        def _HandlerCreator(request, client_address, server):
            cls(request, client_address, server, *args, **kwargs)
        return _HandlerCreator
Then you can just call the Creator method and pass anything you need.
SocketServer.ForkingTCPServer(('10.0.0.6', 4242), EchoHandler.Creator(0, "foo"))
The main benefit is that this way you are not creating any more instances than necessary, and you are extending the class in a more manageable way: you don't need to change the Creator method ever again.
It seems that you can't use a ForkingServer to share variables, because copy-on-write happens when a process tries to modify a shared variable.
Change it to ThreadingServer and you'll be able to share global variables.
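A minimal sketch of the threading variant, keeping the attribute-on-the-server approach from the earlier answer (note that a single MySQL connection is not safe to use from several threads at once, so a lock or per-thread connections may still be needed):
import SocketServer
import MySQLdb

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # All handler threads run in the same process, so they see the
        # same connection object stored on the server below.
        print self.server.conn
        print self.client_address, 'connected'

if __name__ == '__main__':
    SocketServer.ThreadingTCPServer.allow_reuse_address = 1
    server = SocketServer.ThreadingTCPServer(('10.0.0.6', 4242), EchoHandler)
    server.conn = MySQLdb.connect(host="10.0.0.5", user="user",
                                  passwd="pass", db="database")
    server.serve_forever()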
