PyQt - How to close database connection on class destruction - python

If my class uses a database quite a few times (a lot of functions/properties use data from the DB), what's the best practice: create a DB connection once at the start of the class, use it as many times as needed, and then close it on exit (using a shared/global variable); or create/use/close a DB connection in every property (using local variables)?
If it's better to start a connection once and close it on class destruction, how can I do this?
def __del__(self):
    self.connection.close()
doesn't work.
Thank you.

The __del__ method is only called when the object is destroyed, which happens when nothing references it anymore and garbage collection occurs.
Either find out what is still referencing your object when you let it go, or implement an explicit shutdown method on your class.
It can be dangerous to rely on the __del__ method to release resources, because the object may not be destroyed when you think it is.
From the Python documentation:
Some objects contain references to “external” resources such as open files or windows. It is understood that these resources are freed when the object is garbage-collected, but since garbage collection is not guaranteed to happen, such objects also provide an explicit way to release the external resource, usually a close() method. Programs are strongly recommended to explicitly close such objects. The ‘try...finally‘ statement provides a convenient way to do this.
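For illustration, a rough sketch of that advice, assuming sqlite3 and a made-up wrapper class and table: the connection is opened once, reused by the methods, and released by an explicit close() that the caller invokes in a finally block.

import sqlite3

class Database:
    def __init__(self, path):
        # Open the connection once when the object is created.
        self.connection = sqlite3.connect(path)

    def count_users(self):
        # Example query reusing the single connection.
        return self.connection.execute("SELECT COUNT(*) FROM users").fetchone()[0]

    def close(self):
        # Explicit shutdown method; do not rely on __del__ for this.
        self.connection.close()

db = Database("app.db")
try:
    print(db.count_users())
finally:
    db.close()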

If other classes will also use database connections, you can create a class that includes methods for creating the DB, connecting to/closing it, retrieving information from it, etc., and then inherit from this class.
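A rough sketch of that idea, assuming sqlite3 and made-up class, method, and table names:

import sqlite3

class DatabaseMixin:
    def connect_db(self, path):
        # Open and remember a single connection for the instance.
        self.connection = sqlite3.connect(path)

    def close_db(self):
        self.connection.close()

    def fetch_all(self, query, params=()):
        return self.connection.execute(query, params).fetchall()

class UserStore(DatabaseMixin):
    def usernames(self):
        # Reuses the inherited connection and query helper.
        return [row[0] for row in self.fetch_all("SELECT name FROM users")]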

Create a database close-request function that can be accessed by the main window class.
You can then call this within the window's closeEvent, and perhaps take different actions depending on the function's return value.
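As a sketch of that approach with PyQt5 (the close_database() method and the way the database object is passed in are assumptions, not part of the original answer):

from PyQt5.QtWidgets import QMainWindow

class MainWindow(QMainWindow):
    def __init__(self, db):
        super().__init__()
        self.db = db  # assumed to expose a close_database() -> bool method

    def closeEvent(self, event):
        # Ask the database layer to shut down before the window closes.
        if self.db.close_database():
            event.accept()   # connection closed cleanly, allow the window to close
        else:
            event.ignore()   # keep the window open, e.g. to warn about pending work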

Related

COM object keeps returning a stub even if I start the server application

I have a program with
repo = win32com.client.Dispatch("EA.App").Repository
and that works fine all the time. However, once I called that while the EA.App instance was not running, I permanently get rubbish
<COMObject <unknown>>
in return - until I reboot Windoze. What's that? And more importantly: how do I get around the reboot?
P.S. Right after writing I found that
repo = win32com.client.Dispatch("EA.App")
returns
<COMObject EA.App>
So, may I assume this is something that EA.App is not doing right?
win32com.client.Dispatch("EA.App") creates a new instance of the EA.App COM class.
I'm assuming that connecting the COM object to the "EA.app instance" or whatever (presumably via an IPC channel) happens at creation time, so if it wasn't running, you end up with a dummy object that remains such.
Maybe the COM class has some method that would force it to rediscover the server application without having to recreate the COM object. But since recreating is easy, it's very possible that the developer thought such a function wouldn't add enough value to the product to justify the expense.
Some COM classes implement some kind of the singleton pattern: calling Dispatch multiple times will use a "cached" result from the first one internally and actually return objects that are somehow "the same". If this is the case, just calling Dispatch again wouldn't help. In this case, check with the COM class' documentation and/or its vendor how you force the rediscovery. (E.g. there may be a way to explicitly delete the "cached" underlying object so the next Dispatch creates a new one.)

Is __del__ really a destructor?

I do things mostly in C++, where the destructor method is really meant for destruction of an acquired resource. Recently I started with Python (which is really fun and fantastic), and I learned that it has GC like Java.
Thus, there is no heavy emphasis on object ownership (construction and destruction).
As far as I've learned, the __init__() method makes more sense to me in Python than it does in Ruby, but as for the __del__() method, do we really need to implement this built-in method in our classes? Will my class lack something if I omit __del__()? The one scenario where I can see __del__() being useful is if I want to log something when an object is destroyed. Is there anything other than this?
In the Python 3 docs the developers have now made clear that destructor is in fact not the appropriate name for the method __del__.
object.__del__(self)
Called when the instance is about to be destroyed. This is also called a finalizer or (improperly) a destructor.
Note that the OLD Python 3 docs used to suggest that 'destructor' was the proper name:
object.__del__(self)
Called when the instance is about to be destroyed. This is also called a destructor. If a base class has a __del__() method, the derived class’s __del__() method, if any, must explicitly call it to ensure proper deletion of the base class part of the instance.
From other answers but also from the Wikipedia:
In a language with an automatic garbage collection mechanism, it would be difficult to deterministically ensure the invocation of a destructor, and hence these languages are generally considered unsuitable for RAII [Resource Acquisition Is Initialization]
So you should almost never implement __del__, but it gives you the opportunity to do so in some (rare?) use cases.
As the other answers have already pointed out, you probably shouldn't implement __del__ in Python. If you find yourself in the situation thinking you'd really need a destructor (for example if your class wraps a resource that needs to be explicitly closed) then the Pythonic way to go is using context managers.
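For example, a generator-based context manager built with contextlib can guarantee the resource is released; sqlite3 and the file name here are purely illustrative:

import sqlite3
from contextlib import contextmanager

@contextmanager
def open_db(path):
    conn = sqlite3.connect(path)
    try:
        yield conn     # the caller uses the connection inside the with block
    finally:
        conn.close()   # always runs, even if the block raises

with open_db("example.db") as conn:
    conn.execute("SELECT 1")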
Is __del__ really a destructor?
No, the __del__ method is not a destructor; it is just a normal method you can call whenever you want to perform any operation, but it is always called before the garbage collector destroys the object.
Think of it as a cleanup or last-will method.
It is so uncommon that I only learned about it today (and I've been into Python for a long time).
Memory is deallocated, files are closed, and so on by the GC. But you may need to perform some task with effects outside of the class.
My use case is implementing a sort of RAII for some temporary directories. I'd like them to be removed no matter what.
Instead of removing them after the processing (which, after some change, was no longer run), I moved the removal into the __del__ method, and it works as expected.
This is a very specific case, where we don't really care about when the method is called, as long as it's called before the program exits. So, use with care.
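A minimal sketch of that temporary-directory use case (the class name is made up), with the caveat from the answers above that __del__ may run late or, at interpreter shutdown, possibly not at all:

import shutil
import tempfile

class ScratchDir:
    def __init__(self):
        # Create a private temporary directory when the object is constructed.
        self.path = tempfile.mkdtemp()

    def __del__(self):
        # Best-effort cleanup; timing is up to the garbage collector.
        shutil.rmtree(self.path, ignore_errors=True)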

How to specify clean up behavior of an object when it is garbage collected in Python?

Say I have some class that manages a database connection. The user is supposed to call close() on instances of this class so that the db connection is terminated cleanly.
Is there any way in python to get this object to call close() if the interpreter is closed or the object is otherwise picked up by the garbage collector?
Edit: This question assumes the user of the object failed to instantiate it within a with block, either because he forgot or isn't concerned about closing connections.
The only way to ensure such a method is called if you don't trust users is using __del__ (docs). From the docs:
Called when the instance is about to be destroyed.
Note that there are lots of issues that make using __del__ tricky. For example, at the moment it is called, the interpreter may be shutting down already - meaning other objects and modules may have been destroyed already. See the notes and warnings for details.
If you really cannot rely on users to be consenting adults, I would prevent them from implicitly avoiding close - don't give them a public open in the first place. Only supply the methods to support with. If anybody explicitly digs into your code to do otherwise, they probably have a good reason for it.
Define __enter__ and __exit__ methods on your class and then use it with the with statement:
with MyClass() as c:
    # Do stuff
When the with block ends your __exit__() method will be called automatically.
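A minimal class-based sketch, assuming the managed resource is a sqlite3 connection and the database file name is made up:

import sqlite3

class MyClass:
    def __enter__(self):
        # Acquire the resource when the with block is entered.
        self.connection = sqlite3.connect("example.db")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs however the with block exits, including on exceptions.
        self.connection.close()

with MyClass() as c:
    c.connection.execute("SELECT 1")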

DB-Connections Class as a Singleton in Python

So there has been a lot of hating on singletons in Python. I generally see that having a singleton is usually no good, but what about stuff that has side effects, like using/querying a database? Why would I make a new instance for every simple query, when I could reuse an already established connection? What would be a Pythonic approach/alternative to this?
Thank you!
Normally, you have some kind of object representing the thing that uses a database (e.g., an instance of MyWebServer), and you make the database connection a member of that object.
If you instead have all your logic inside some kind of function, make the connection local to that function. (This isn't too common in many other languages, but in Python, there are often good ways to wrap up multi-stage stateful work in a single generator function.)
If you have all the database stuff spread out all over the place, then just use a global variable instead of a singleton. Yes, globals are bad, but singletons are just as bad, and more complicated. There are a few cases where they're useful, but very rare. (That's not necessarily true for other languages, but it is for Python.) And the way to get rid of the global is to rethink your design. There's a good chance you're effectively using a module as a (singleton) object, and if you think it through, you can probably come up with a good class or function to wrap it up in.
Obviously, just moving all of your globals into class attributes and @classmethods is just giving you globals under a different namespace. But moving them into instance attributes and methods is a different story. That gives you an object you can pass around and, if necessary, an object you can have 2 of (or maybe even 0 under some circumstances), attach a lock to, serialize, etc.
In many types of applications, you're still going to end up with a single instance of something—every Qt GUI app has exactly one MyQApplication, nearly every web server has exactly one MyWebServer, etc. No matter what you call it, that's effectively a singleton or global. And if you want to, you can just move everything into attributes of that god object.
But just because you can do so doesn't mean you should. You've still got function parameters, local variables, globals in each module, other (non-megalithic) classes with their own instance attributes, etc., and you should use whatever is appropriate for each value.
For example, say your MyWebServer creates a new ClientConnection instance for each new client that connects to you. You could have the connections call MyWebServer.instance.db.execute whenever they want to execute a SQL query, but you could also just pass self.db to the ClientConnection constructor, and each connection then just does self.db.execute. So, which one is better? Well, if you do it the latter way, it makes your code a lot easier to extend and refactor. If you want to load-balance across 4 databases, you only need to change code in one place (where the MyWebServer initializes each ClientConnection) instead of 100 (every time the ClientConnection accesses the database). If you want to convert your monolithic web app into a WSGI container, you don't have to change any of the ClientConnection code except maybe the constructor. And so on.
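A rough sketch of that second style (all class and method names here are illustrative, and the db object is assumed to expose an execute() method):

class ClientConnection:
    def __init__(self, db):
        self.db = db   # injected dependency, not a global singleton

    def load_user(self, user_id):
        return self.db.execute("SELECT * FROM users WHERE id = ?", (user_id,))

class MyWebServer:
    def __init__(self, db):
        self.db = db

    def handle_new_client(self):
        # The server decides which database each connection gets.
        return ClientConnection(self.db)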
If you're using an object oriented approach, then abamet's suggestion of attaching the database connection parameters as class attributes makes sense to me. The class can then establish a single database connection which all methods of the class refer to as self.db_connection, for example.
If you're not using an object oriented approach, a separate database connection module can provide a functional-style equivalent. Devote a module to establishing a database connection, and simply import that module everywhere you want to use it. Your code can then refer to the connection as db.connection, for example. Since modules are effectively singletons, and the module code is only run on the first import, you will be re-using the same database connection each time.
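A sketch of the module-based approach, assuming sqlite3 and a hypothetical db.py module:

# db.py -- runs once, on first import
import sqlite3
connection = sqlite3.connect("app.db")

# elsewhere.py -- every importer shares the same connection object
import db
rows = db.connection.execute("SELECT 1").fetchall()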

Guaranteeing a file close

I have a class where I create a file object in the constructor. This class also implements a finish() method as part of its interface and in this method I close the file object. The problem is that if I get an exception before this point, the file will not be closed. The class in question has a number of other methods that use the file object. Do I need to wrap all of these in a try finally clause or is there a better approach?
Thanks,
Barry
You could make your class a context-manager, and then wrap object creation and use of that class in a with-statement. See PEP 343 for details.
To make your class a context-manager, it has to implement the methods __enter__() and __exit__(). __enter__() is called when you enter the with-statement, and __exit__() is guaranteed to be called when you leave it, no matter how.
You could then use your class like this:
with MyClass() as foo:
    # use foo here
If you acquire your resources in the constructor, you can make __enter__() simply return self without doing anything. __exit__() should just call your finish()-method.
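Applied to the question, that might look roughly like this (the class and file names are made up; finish() is the method from the question):

class Writer:
    def __init__(self, path):
        self.f = open(path, "w")   # resource acquired in the constructor

    def finish(self):
        self.f.close()

    def __enter__(self):
        return self                # nothing extra to do; the resource is already open

    def __exit__(self, exc_type, exc_value, traceback):
        self.finish()              # guaranteed to run, even on exceptions

with Writer("output.txt") as w:
    w.f.write("hello\n")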
For short-lived file objects, a try/finally pair or the more succinct with-statement is recommended as a clean way to make sure the files are flushed and the related resources are released.
For long-lived file objects, you can register an explicit close with the atexit module or just rely on the interpreter cleaning up before it exits.
At the interactive prompt, most people don't bother for simple experiments where there isn't much of a downside to leaving files unclosed or relying on refcounting or GC to close for you.
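For the long-lived case, registering the close with the atexit module might look like this (the file name is illustrative):

import atexit

log_file = open("long_lived.log", "a")
atexit.register(log_file.close)   # closed automatically when the interpreter exits normally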
Closing your files is considered good practice. In reality, though, not explicitly closing files rarely has any noticeable effect.
You can either have a try...finally pair, or make your class a context manager suitable for use in the with statement.
