I'm currently writing a script that uses sqlite3. I recently ran into a problem with the database being in use by another program due to my code exiting early with an error.
With similar problems, one usually uses:
conn = sqlite3.connect(...)
try:
    # Do stuff
finally:
    conn.close()
But this won't work in my case. In a nutshell, this is my code:
import sqlite3

class Thingamadoodle:
    def __init__(self, ...):
        self.conn = sqlite3.connect(...)
        ...

    # Methods and stuff

    def __del__(self):
        self.conn.close()
poop = Thingamadoodle(...)
poop.do_stuff(...)
poop.throw_irritating_exception_that_you_cant_track_down(irritatingness=11)
After the program exits without closing the connection, I get errors when I try to modify the database.
Is there a way to safely close the connection, even on an unclean exit?
To be honest, I don't understand the question much, but why not just wrap poop.do_stuff() in a try/finally block?
try:
    poop.do_stuff()
finally:
    # finally runs on both clean and unclean exits,
    # so the connection is closed exactly once
    poop.conn.close()
Or, to be a bit cleaner, use a context manager:
class Thingamadoodle:
    def __init__(self, ...):
        ...

    # Methods and stuff

    def __enter__(self):
        self.conn = sqlite3.connect(...)
        return self

    def __exit__(self, errorType, errorValue, errorTrace):
        self.conn.close()
And just execute it as:
with Thingamadoodle(args) as poop:
    # do things
After all the code in the block is done, or after an exception is raised inside the with statement, __exit__ will be executed, and the connection can be closed safely.
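As a side note, if you'd rather not write __enter__/__exit__ yourself, contextlib.closing wraps any object that has a close() method. A sketch using an in-memory sqlite3 database; note that using the connection itself as a with target only commits or rolls back a transaction, it does not close the connection:

```python
import sqlite3
from contextlib import closing

# closing() guarantees conn.close() runs, even on an unclean exit
with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()
# the connection is closed here; further use raises ProgrammingError
```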
Hope this helps!
Most likely I don't understand this fully, but I have a class declared like:
class db:
    def __init__(self, name):
        pass
        # self._conn = get_conn(name)

    def __enter__(self):
        print('enter')
        # self._conn.begin_transaction()

    def __exit__(self, a, b, c):
        print('exit')
        # self._conn.commit()

    def close(self):
        print('close')
        # self._conn.close()
When I use it like:
with db('bname') as db:
    print('do something')
I get the expected output like:
enter
do something
exit
but when I use contextlib.closing, those methods don't get called at all:
from contextlib import closing

with closing(db('bname')) as db:
    print('do something')
I only get:
do something
close
My understanding was that contextlib.closing can be used with context manager classes to always call close(), so what am I missing?
The closing class implements its own version of __exit__. This calls close().
Since you're passing an instance of closing to the with block, the __exit__ method of the closing instance will get called and not yours.
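For reference, closing behaves roughly like this (a simplified sketch, not the actual stdlib source):

```python
class simplified_closing:
    """Roughly what contextlib.closing does."""
    def __init__(self, thing):
        self.thing = thing

    def __enter__(self):
        # returns the wrapped object; never calls thing.__enter__
        return self.thing

    def __exit__(self, *exc_info):
        # only close() is called on exit; thing.__exit__ is ignored
        self.thing.close()
```

Since the with statement only consults the object it is given, your db.__enter__ and db.__exit__ are never looked up.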
Subclass the closing class for your __exit__ to get called. Here's an example:
from contextlib import closing

class Db(closing):
    def __init__(self):
        super().__init__(self)
        print('initializing')

    def close(self):
        print('closing')

    def __enter__(self):
        print('entering')
        return super().__enter__()

    def __exit__(self, *exc_info):
        print('exiting')
        return super().__exit__(*exc_info)

with Db() as db:
    print('running')
Output
initializing
entering
running
exiting
closing
It is the with statement that executes the __enter__ and __exit__ methods. In your second example, you are applying it to the closing wrapper, not to your db class. That's why those methods don't get triggered.
First of all, sorry for the wording of the question, I can't express it in a more compact form.
Let's say I have a code like this in Python:
something_happened = False

def main():
    # 'procs' is a list of procedures
    for proc in procs:
        try:
            # Any of these can set the 'something_happened'
            # global var to True
            proc()
        except Exception as e:
            handle_unexpected_exception(e)
            continue

    # If some procedure found some problem,
    # print a reminder to check the logging files
    if something_happened:
        print('Check the logfile, just in case.')
Any of the involved procedures may encounter a problem, but execution MUST continue; the problem is properly logged, and that's the ONLY handling needed. The problems that may arise while running the procedures shouldn't stop the program, so this shouldn't involve raising an exception and stopping execution.
The reason why the logfile should be checked is that some of the problems may need further human action, but the program can't do anything about them, other than logging them and keep running (long story).
Right now the only way of achieving this that I can think of is to make each procedure set something_happened = True after logging a potential problem, either through a global variable which may be set from any of the procedures, or by returning a status code from the procedures.
And yes, I know I can raise an exception from the procedures instead of setting a global or returning an error code, but that would only work because I'm running them in a loop, and this may change in the future (and then raising an exception would jump out of the try block), so that's my last resort.
Can anyone suggest a better way of dealing with this situation? Yes, I know, this is a very particular use case, but that's the reason why I'm not raising an exception in the first place, and I'm just curious because I didn't find anything after googling for hours...
Thanks in advance :)
You have a variable that may be set to True by any of the procs. It looks like a common OOP schema:
class A:
    """Don't do that"""
    def __init__(self, logger):
        self._logger = logger
        self._something_happened = False

    def proc1(self):
        try:
            ...
        except KeyError as e:
            self._something_happened = True
            self._logger.log(...)

    def proc2(self):
        ...

    def execute(self):
        for proc in [self.proc1, self.proc2, ...]:
            try:
                proc()
            except Exception as e:
                self._handle_unexpected_exception(e)
                continue
        if self._something_happened:
            print('Check the logfile, just in case.')
But that's a very bad idea, because you're violating the Single Responsibility Principle: your class has to know about proc1, proc2, ... You have to reverse the idea:
class Context:
    def __init__(self):
        self.something_happened = False

def main():
    ctx = Context()
    for proc in procs:
        try:
            proc(ctx)  # proc may set ctx.something_happened to True
        except Exception as e:
            handle_unexpected_exception(e)
            continue
    if ctx.something_happened:
        print('Check the logfile, just in case.')
Creating a void class like that is not attractive. You can take the idea further:
class Context:
    def __init__(self, logger):
        self._logger = logger
        self._something_happened = False

    def handle_err(self, e):
        self._something_happened = True
        self._logger.log(...)

    def handle_unexpected_exception(self, e):
        ...
        self._logger.log(...)

    def after(self):
        if self._something_happened:
            print('Check the logfile, just in case.')

def proc1(ctx):
    try:
        ...
    except KeyError as e:
        ctx.handle_err(e)  # you delegate the error handling to ctx

def proc2(ctx):
    ...

def main():
    ctx = Context(logging.getLogger("main"))
    for proc in procs:
        try:
            proc(ctx)
        except Exception as e:
            ctx.handle_unexpected_exception(e)
    ctx.after()
The main benefit here is that you can use another Context if you want:
class StrictContext:
    def handle_err(self, e):
        raise e

    def handle_unexpected_exception(self, e):
        raise e

    def after(self):
        pass
Or
class LooseContext:
    def handle_err(self, e):
        pass

    def handle_unexpected_exception(self, e):
        pass

    def after(self):
        pass
Or whatever you need.
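A minimal runnable sketch of the delegation idea above (all names are illustrative, with the logging stripped out):

```python
class Context:
    """Collects problems; procedures delegate error handling here."""
    def __init__(self):
        self.something_happened = False

    def handle_err(self, e):
        self.something_happened = True  # a real version would also log e

    def after(self):
        if self.something_happened:
            print('Check the logfile, just in case.')

def proc_ok(ctx):
    pass  # nothing to report

def proc_warn(ctx):
    try:
        {}['missing']  # some recoverable problem
    except KeyError as e:
        ctx.handle_err(e)

ctx = Context()
for proc in (proc_ok, proc_warn):
    proc(ctx)
ctx.after()  # prints the reminder, since proc_warn reported a problem
```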
Looks like the cleaner solution is to raise an exception, and I will change the code accordingly. The only problem is what will happen if in the future the loop goes away, but I suppose I'll cross that bridge when I get to it ;) and then I'll use another solution or I'll try to change the main code myself.
@cglacet, @Phydeaux, thanks for your help and suggestions.
I'm writing a Python class to manage a Postgres database connection using psycopg2.
I'd like the class to set up a connection to the database upon initialisation (I feel like this might be a terrible idea, but I can't think of a good reason why). I'm trying to make this work with a property, which I've never used before. In other words, I want the getter to be called from within the __init__ method.
My class looks something like this:
import psycopg2

class MyDatabase:
    connection_string = "host='<host_ip>' db_name='<db_name>'"

    def __init__(self):
        # *
        self._connection = self.connection

    @property
    def connection(self):
        # Check for an existing connection
        if self._connection:
            self._connection.close()
        self._connection = psycopg2.connect(self.connection_string)
        return self._connection

    ...
In this version, the check for an existing connection throws AttributeError: Elefriends instance has no attribute '_connection', which makes sense. I can get around this by simply adding a line that says self._connection = None at the place I've marked with # *, but this feels clunky. Is this the price I pay for the convenience? Am I just being fussy? Or is there a better way?
Thanks!
Instead of the if ... statement, you could use:
try:
    self._connection.close()
except AttributeError:
    pass
self._connection = psycopg2.connect(self.connection_string)
return self._connection
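Another option is getattr with a default, which avoids both the sentinel assignment and the try/except. A sketch; make_connection is a hypothetical stand-in for psycopg2.connect so the example runs on its own:

```python
def make_connection():
    """Hypothetical stand-in for psycopg2.connect(...)."""
    class FakeConnection:
        def close(self):
            pass
    return FakeConnection()

class MyDatabase:
    @property
    def connection(self):
        # getattr returns None instead of raising AttributeError
        # when _connection has never been assigned
        existing = getattr(self, '_connection', None)
        if existing is not None:
            existing.close()
        self._connection = make_connection()
        return self._connection

db = MyDatabase()
conn = db.connection  # no AttributeError on first access
```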
So I have written a wrapper, WebApiSession, for a web API. When an instance is created, a login etc. is done and a session is created. The session needs to be kept alive, so the constructor launches a separate process to handle this. The method close() logs out of the session and stops the process. Now, ideally I would not want to have to call close(). Instead, I want this to happen when the instance is no longer needed, i.e. I would like to be able to remove the session.close() call below. Is this possible?
import time
from multiprocessing import Process

class WebApiSession:
    def __init__(self):
        # start session, login etc
        # ...
        # start touch loop
        self.touchLoop = Process(target=self.runTouchLoop)
        self.touchLoop.start()

    def runTouchLoop(self):
        # loop instead of recursing, so the keep-alive doesn't
        # eventually hit Python's recursion limit
        while True:
            self.touch()
            time.sleep(1)

    def touch(self):
        # touch session
        pass

    def close(self):
        # logout etc
        # ...
        self.touchLoop.terminate()

    def doSomething(self):
        pass

if __name__ == '__main__':
    session = WebApiSession()
    session.doSomething()
    session.close()
It sounds like you could benefit from implementing WebApiSession as a context manager. You can then treat your session like any other "context" that has special methods that must be called when it's opened and closed, like a file or other connection. It would also give you the added bonus of neatly wrapping up errors and so on.
class WebApiSession(object):
    def __init__(self):
        pass  # other init stuff here, but don't connect yet.

    def __enter__(self):  # entering the context.
        # start session, login, start touch loop
        self.touchLoop = Process(target=self.runTouchLoop)
        self.touchLoop.start()
        return self

    def __exit__(self, exc_type, exc_val, traceback):  # leaving the context.
        # Bonus feature: handle exception info here as needed!
        self.close()

if __name__ == '__main__':
    with WebApiSession() as session:
        session.doSomething()
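The same shape can also be written as a generator with contextlib.contextmanager. A sketch; FakeSession is a stand-in for WebApiSession so the example runs without a real API:

```python
from contextlib import contextmanager

class FakeSession:
    """Stand-in for WebApiSession, so the sketch is runnable."""
    def __init__(self):
        self.closed = False
    def doSomething(self):
        pass
    def close(self):
        self.closed = True

@contextmanager
def api_session():
    session = FakeSession()  # real version: login, start touch loop
    try:
        yield session
    finally:
        session.close()      # always runs, even if the body raises

with api_session() as session:
    session.doSomething()
# session.close() has been called here
```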
In Ruby I can say:
def get_connection
  db = connect_to_db()
  yield
  db.close()
end
and then call it
get_connection do
  # action1.....
  # action2.....
  # action3.....
end
In Python I have to say
def get_connection(code_block):
    db = connect_to_db()
    code_block()
    db.close()

def method1():
    # action1.....
    # action2.....
    # action3.....

get_connection(method1)
It's not convenient, since I have to create an extra method1. Notice that method1 can be big. Is there any way to emulate Ruby's anonymous blocks in Python?
Yes. Use the 'with' statement:
Using classes
class get_connection(object):
    def __enter__(self):
        self.connect_to_db()
        return self

    def __exit__(self, *args, **kwargs):
        self.close()

    def some_db_method(self, ...):
        ...
And use it like this:
with get_connection() as db:
    db.some_db_method(...)
This does the following:
self.connect_to_db()
db.some_db_method(...)
self.close()
Have a look here: http://docs.python.org/release/2.5/whatsnew/pep-343.html . You can use the arguments taken by __exit__ to handle exceptions within the with statement, etc.
Using functions
from contextlib import contextmanager

@contextmanager
def db_connection():
    db = connect_to_db()
    yield db
    db.close()
and use this:
with db_connection() as db:
    db.some_db_method()
(Perhaps this is closer to your ruby equivalent. Also, see here for more details: http://preshing.com/20110920/the-python-with-statement-by-example)
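One caveat with the generator version: an exception raised inside the with block propagates at the yield, so db.close() would be skipped. Wrapping the yield in try/finally guards against that. A sketch, with a dummy object standing in for connect_to_db:

```python
from contextlib import contextmanager

class DummyDb:
    """Stand-in for a real connection object."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

@contextmanager
def db_connection():
    db = DummyDb()    # connect_to_db() in the original
    try:
        yield db
    finally:
        db.close()    # runs even if the block raises

try:
    with db_connection() as db:
        raise RuntimeError("boom")
except RuntimeError:
    pass
# db.close() still ran despite the exception
```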
Hope this helps