I do things mostly in C++, where the destructor method is really meant for destruction of an acquired resource. Recently I started with Python (which is really fun and fantastic), and I came to learn that it has garbage collection like Java.
Thus, there is no heavy emphasis on object ownership (construction and destruction).
As far as I've learned, the __init__() method makes more sense to me in Python than it does in Ruby, but the __del__() method: do we really need to implement this built-in method in our classes? Will my class lack something if I omit __del__()? The one scenario where I can see __del__() being useful is if I want to log something when an object is destroyed. Is there anything other than this?
In the Python 3 docs the developers have now made clear that destructor is in fact not the appropriate name for the method __del__.
object.__del__(self)
Called when the instance is about to be destroyed. This is also called a finalizer or (improperly) a destructor.
Note that the OLD Python 3 docs used to suggest that 'destructor' was the proper name:
object.__del__(self)
Called when the instance is about to be destroyed. This is also called a destructor. If a base class has a __del__() method, the derived class’s __del__() method, if any, must explicitly call it to ensure proper deletion of the base class part of the instance.
From other answers, but also from Wikipedia:
In a language with an automatic garbage collection mechanism, it would be difficult to deterministically ensure the invocation of a destructor, and hence these languages are generally considered unsuitable for RAII [Resource Acquisition Is Initialization]
So you should almost never implement __del__, but it gives you the opportunity to do so in some (rare?) use cases.
As the other answers have already pointed out, you probably shouldn't implement __del__ in Python. If you find yourself thinking you really need a destructor (for example if your class wraps a resource that needs to be explicitly closed), then the Pythonic way to go is using context managers.
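For instance, here is a minimal sketch of that approach, assuming a socket-like resource that must be closed explicitly (the Connection name and its details are illustrative, not from the question):

import socket

class Connection:
    def __init__(self, host, port):
        # Acquire the resource up front
        self.sock = socket.create_connection((host, port))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.sock.close()   # runs even if the with-body raised
        return False        # don't suppress exceptions

# Usage: the resource is released deterministically, no __del__ needed
# with Connection('example.com', 80) as conn:
#     conn.sock.sendall(b'GET / HTTP/1.0\r\n\r\n')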
Is __del__ really a destructor?
No, the __del__ method is not a destructor. It's just a normal method you can call whenever you want to perform any operation, but it is also always called just before the garbage collector destroys the object.
Think of it as a cleanup or last-will method.
It is so uncommon that I only learned about it today (and I've been using Python for a long time).
Memory deallocation, file closing, and so on are handled by the GC. But you might need to perform some task with effects outside of the class.
My use case is about implementing some sort of RAII for temporary directories. I'd like them to be removed no matter what.
Instead of removing the directory after the processing (which, after some change, was no longer being run), I moved the removal into the __del__ method, and it works as expected.
This is a very specific case, where we don't really care about when the method is called, as long as it's called before leaving the program. So, use with care.
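As a sketch of that use case (the class name and details here are mine, not the original code):

import shutil
import tempfile

class TempWorkspace:
    def __init__(self):
        # Acquire: create a throwaway directory
        self.path = tempfile.mkdtemp()

    def __del__(self):
        # Best-effort removal: we don't care exactly when this runs,
        # only that it runs before the program exits
        shutil.rmtree(self.path, ignore_errors=True)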
Related
I’m reading Think Python: How to Think Like a Computer Scientist. The author uses “invoke” with methods and “call” with functions.
Is it a convention? And, if so, why is this distinction made? Why are functions said to be called, but methods are said to be invoked?
Not really; maybe it is easier for new readers to make an explicit distinction in order to understand that their invocation is slightly different. At least, that's why I suspect the author might have chosen different wording for each.
There doesn't seem to be a convention that dictates this in the Reference Manual for the Python language. What they seem to do is choose "invoke" when the call made to a function is implicit rather than explicit.
For example, in the Callables section of the Standard Type Hierarchy you see:
[..] When an instance method object is called, the underlying function (__func__) is called, inserting the class instance (__self__) in front of the argument list. [...]
(Emphasis mine) Explicit call
Further down in Basic Customization and specifically for __new__ you can see:
Called to create a new instance of class cls. __new__() is a static method [...]
(Emphasis mine) Explicit call
While just a couple of sentences later you'll see how invoked is used because __new__ implicitly calls __init__:
If __new__() does not return an instance of cls, then the new instance’s __init__() method will not be invoked.
(Emphasis mine) Implicitly called
So no, no convention seems to be used, at least by the creators of the language. Simple is better than complex, I guess :-).
One good source for this would be the Python documentation. A simple text search through the section on Classes reveals the word "call" being used many times in reference to "calling methods", and the word "invoke" being used only once.
In my experience, the same is true: I regularly hear "call" used in reference to methods and functions, while I rarely hear "invoke" for either. However, I assume this is mainly a matter of personal preference and/or context (is the setting informal?, academic?, etc.).
You will also see places in the documentation where the word "invoke" is used in reference to functions:
void Py_FatalError(const char *message)
Print a fatal error message and kill the process. No cleanup is performed. This function should only be invoked when a condition is detected that would make it dangerous to continue using the Python interpreter; e.g., when the object administration appears to be corrupted. On Unix, the standard C library function abort() is called which will attempt to produce a core file.
And from here:
void Py_DECREF(PyObject *o)
Decrement the reference count for object o. The object must not be NULL; if you aren't sure that it isn't NULL, use Py_XDECREF(). If the reference count reaches zero, the object's type's deallocation function (which must not be NULL) is invoked.
Although both these references are from the Python C API, so that may be significant.
To summarize:
I think it is safe to use either "invoke" or "call" in the context of functions or methods without sounding either like a noob or a showoff.
Note that I speak only of Python, and what I know from my own experience. I cannot speak to the difference between these terms in other languages.
I'm modifying a legacy library that uses the singleton pattern through the metaclass approach.
The Singleton class, inheriting from type, defines the __call__ function.
Right now, my singleton objects using this library are never deleted. I defined the __del__ method in the singleton classes, but that function is never called.
Clarification: I have implemented one (meta)class named Singleton, that is used by several classes, using Singleton as __metaclass__.
For example, I have class A(object), that has __metaclass__ = Singleton. The A class has several members that I want to be destroyed when my program ends and the A object (the only one that can exist) is destroyed.
I tried defining a __del__ method in the A class, but it doesn't work.
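For reference, here is a minimal reconstruction of the setup being described, in Python 3 syntax (the bodies are assumed; the question's actual code is not shown):

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Create the instance on first call, then always return it
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class A(metaclass=Singleton):   # Python 2 spelling: __metaclass__ = Singleton
    def __del__(self):
        print('A destroyed')    # never printed: see the answer below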
Point 1: __del__() may not be called at process exit
The first thing to say is that
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
From the python data model docs. Therefore you should not be relying on it to tidy up state that you need to tidy up at exit, and at the highest level, that's why your __del__() may not be being called. That's what atexit is for.
Point 2: predictable object lifetimes is an implementation detail in python
The next thing to say is that while CPython uses reference counting to detect that an object can be released without having to use the garbage collector (leading to more predictable CPU impact and potentially more efficient applications), it only takes one circular reference, one uncleared exception, one forgotten closure or one different Python implementation to break that. So you should think really, really hard about whether you want to rely on __del__() being called at a particular point.
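A quick demonstration of how a single cycle changes the timing (the behaviour shown is a CPython implementation detail):

import gc

class Node:
    def __del__(self):
        print('finalized')

n = Node()
del n            # refcount hits zero: 'finalized' prints immediately (CPython)

a, b = Node(), Node()
a.other, b.other = b, a   # a reference cycle
del a, b                  # nothing printed: refcounts never reach zero
gc.collect()              # the cycle collector runs __del__ now (Python 3.4+)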
Point 3: Singleton implementations generally maintain a global reference to the singleton instance
By the sound of it, I would guess your singleton metaclass (itself a singleton...) is retaining your singleton instance the first time __call__() is called. Since the metaclass belongs to the module, which is itself retained by sys.modules, that reference is not going to go away by the time the program terminates. So even with a guaranteed prompt tidy-up of all external references to the singleton, your __del__() is not going to get called.
What you could try
Add an atexit handler when you create your singleton instance to do your necessary tidy-up at process exit (see the sketch after this list).
Also do that tidy up in the __del__() method if you want. E.g, you may decide for neatness / future extensibility (e.g. pluralizing the singleton) that you would like the singleton instance to tidy up after itself when it is no longer being used.
And if you implement a __del__() method expecting to want tidy up to be done during normal program execution, you will probably want to remove the atexit handler too.
If you would like your singleton to be cleaned up when no one is using it anymore, consider storing it on your metaclass using weakref so that you don't retain it yourself.
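Putting those suggestions together, a sketch (names like close() are assumed stand-ins for your own tidy-up):

import atexit
import weakref

class Singleton(type):
    _instances = {}   # class -> weakref.ref to its sole instance

    def __call__(cls, *args, **kwargs):
        ref = cls._instances.get(cls)
        instance = ref() if ref is not None else None
        if instance is None:
            instance = super().__call__(*args, **kwargs)
            # Hold the instance only weakly, so the metaclass alone
            # doesn't keep it alive
            instance_ref = weakref.ref(instance)
            cls._instances[cls] = instance_ref

            def _tidy_up(ref=instance_ref):
                # Default-argument weakref avoids a closure that would
                # keep the instance alive just for the handler's sake
                obj = ref()
                if obj is not None:
                    obj.close()   # hypothetical tidy-up method
            atexit.register(_tidy_up)
        return instance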
In contextlib.py, I see the ExitStack class calling the __enter__() method via the type object (type(cm)) instead of making a direct method call on the given object (cm).
I wonder why or why not.
e.g.,
does it give better exception traces when an error occurs?
is it just specific to some module author's coding style?
does it have any performance benefits?
does it avoid some artifacts/side-effects with complicated type hierarchies?
First of all, this is what happens whenever you use a with statement on something; it's not just contextlib that looks up special methods on the type. Also, it's worth noting that the same happens with other special methods too: e.g. a + b results in type(a).__add__(a, b).
But why does it happen? This is a question that is often fired up on the python-dev and python-ideas mailing lists. And when I say "often", I mean "very often".
The most recent ones were these: Missing Core Feature: + - * / | & do not call getattr and Eliminating special method lookup.
Here are some interesting points:
The current behaviour is by design - special methods are looked up as slots on the object's class, not as instance attributes. This allows the interpreter to bypass several steps in the normal instance attribute lookup process.
(Source)
It is worth noting that the behavior is even more magical than this. Even when looked up on the class, implicit special method lookup bypasses __getattr__ and __getattribute__ of the metaclass. So the special method lookup is not just an ordinary lookup that happens to start on the class instead of the instance; it is a fully magic lookup that does not engage the usual attribute-access-customization hooks at any level.
(Source)
This behavior is also documented on the reference documentation: Special method lookup, which says:
Bypassing the __getattribute__() machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods (the special method must be set on the class object itself in order to be consistently invoked by the interpreter).
In short, performance is the main concern. But let's take a closer look at this.
What's the difference between type(obj).__enter__() and obj.__enter__()?
When you write obj.attr, type(obj).__getattribute__(obj, 'attr') gets called. The default implementation of __getattribute__() looks for attr in the instance dictionary (i.e. obj.__dict__) and in the class namespace and, failing that, calls type(obj).__getattr__(obj, 'attr').
Now, this was a quick explanation and I have omitted some details, but it should give you an idea of how complicated an attribute lookup can be, and how slow it can become. Short-circuiting special method lookup clearly provides performance improvements, as looking up obj.__enter__() in the "classical" way may be too slow.
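You can see the difference directly: a special method attached to the instance is ignored by the with statement, because the interpreter goes through type(obj). A small demonstration (not from the thread above):

class CM:
    def __enter__(self):
        return 'class __enter__'
    def __exit__(self, exc_type, exc_value, traceback):
        return False

cm = CM()
cm.__enter__ = lambda: 'instance __enter__'   # ignored by `with`

with cm as value:
    print(value)          # class __enter__  (looked up on type(cm))

print(cm.__enter__())     # instance __enter__  (ordinary attribute lookup)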
What are people's opinions on using __call__? I've only very rarely seen it used, but I think it's a very handy tool when you know that a class is going to be used for some default behaviour.
I think your intuition is about right.
Historically, callable objects (or what I've sometimes heard called "functors") have been used in the OO world to simulate closures. In C++ they're frequently indispensable.
However, __call__ has quite a bit of competition in the Python world:
A regular named method, whose behavior can sometimes be a lot more easily deduced from the name. Can convert to a bound method, which can be called like a function.
A closure, obtained by returning a function that's defined in a nested block.
A lambda, which is a limited but quick way of making a closure.
Generators and coroutines, whose bodies hold accumulated state much like a functor can.
I'd say the time to use __call__ is when you're not better served by one of the options above. Check the following criteria, perhaps:
Your object has state.
There is a clear "primary" behavior for your class that's kind of silly to name. E.g. if you find yourself writing run() or doStuff() or go() or the ever-popular and ever-redundant doRun(), you may have a candidate.
Your object has state that exceeds what would be expected of a generator function.
Your object wraps, emulates, or abstracts the concept of a function.
Your object has other auxiliary methods that conceptually belong with your primary behavior.
One example I like is UI command objects. They're designed so that their primary task is to execute the command, but with extra methods to control their display as a menu item, for example. That seems to me to be the sort of thing you'd still want a callable object for.
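A sketch of that idea (all the names here are invented for illustration):

class SaveCommand:
    def __init__(self, document):
        self.document = document

    def __call__(self):
        # The primary behaviour: no run()/doStuff() naming needed
        self.document.save()

    def menu_label(self):
        return 'Save'

    def is_enabled(self):
        # Auxiliary behaviour that conceptually belongs with the command
        return self.document.is_dirty()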
Use it if you need your objects to be callable; that's what it's there for.
I'm not sure what you mean by default behaviour
One place I have found it particularly useful is when using a wrapper or somesuch where the object is called deep inside some framework/library.
More generally, Python has a lot of double-underscore methods. They're there for a reason: they are the Python way of overloading operators. For instance, if you want a new class in which addition, I don't know, prints "foo", you define the __add__ and __radd__ methods. There's nothing inherently good or bad about this, any more than there's anything good or bad about using for loops.
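For example, a toy version of that (printing "foo" on addition) might look like this:

class Noisy:
    def __add__(self, other):
        print('foo')
        return self

    __radd__ = __add__   # handles other + self when other doesn't know how

n = Noisy()
n + 1   # prints: foo
1 + n   # prints: foo (int gives up, so __radd__ runs)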
In fact, using __call__ is often the more Pythonic approach, because it encourages clarity of code. You could replace MyCalculator.calculateValues( foo ) with MyCalculator( foo ), say.
It's usually used when a class is used as a function with some instance context, like some DecoratorClass which would be used as @DecoratorClass('some param'), so 'some param' would be stored in the instance's namespace and the instance then called as the actual decorator.
It is not very useful when your class provides several different methods, since it's usually not obvious what the call would do, and explicit is better than implicit in these cases.
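A sketch of that decorator pattern (the DecoratorClass name comes from the answer above; the body is assumed):

import functools

class DecoratorClass:
    def __init__(self, param):
        self.param = param            # 'some param' stored on the instance

    def __call__(self, func):         # the instance acts as the decorator
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print('decorating with', self.param)
            return func(*args, **kwargs)
        return wrapper

@DecoratorClass('some param')
def greet(name):
    return 'hello ' + name

greet('world')   # prints: decorating with some param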
I have been thinking about how I write classes in Python. More specifically how the constructor is implemented and how the object should be destroyed. I don't want to rely on CPython's reference counting to do object cleanup. This basically tells me I should use with statements to manage my object life times and that I need an explicit close/dispose method (this method could be called from __exit__ if the object is also a context manager).
class Foo(object):
    def __init__(self):
        pass    # acquire resources here

    def close(self):
        pass    # release resources here; could be called from __exit__
Now, if all my objects behave in this way and all my code uses with statements or explicit calls to close() (or dispose()), I don't really see the need to put any code in __del__. Should we really use __del__ to dispose of our objects?
Short answer : No.
Long answer: Using __del__ is tricky, mainly because it's not guaranteed to be called. That means you can't do things there that absolutely have to be done. This in turn means that __del__ basically can only be used for cleanups that would happen sooner or later anyway, like cleaning up resources that would be cleaned up when the process exits, so it doesn't matter if __del__ doesn't get called. Of course, those are also generally the same things Python will do for you. So that kinda makes __del__ useless.
Also, __del__ gets called when Python garbage collects, and you said you didn't want to wait for Python's garbage collection, which means you can't use __del__ anyway.
So, don't use __del__. Use __enter__/__exit__ instead.
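If writing __enter__/__exit__ by hand feels heavy, contextlib offers a generator-based shortcut; here is a sketch assuming the Foo/close() convention from the question (the standard library's contextlib.closing does the same thing generically):

from contextlib import contextmanager

@contextmanager
def closing_foo():
    foo = Foo()          # Foo and close() as defined in the question
    try:
        yield foo
    finally:
        foo.close()      # always runs, exception or not

# with closing_foo() as foo:
#     ...  # use foo; cleanup runs even if this block raises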
FYI: Here is an example of a non-circular situation where the destructor did not get called:
class A(object):
    def __init__(self):
        print('Constructing A')

    def __del__(self):
        print('Destructing A')

class B(object):
    a = A()   # class attribute: B (and thus the module) keeps it alive
OK, so it's a class attribute. Evidently that's a special case. But it just goes to show that making sure __del__ gets called isn't straightforward. I'm pretty sure I've seen more non-circular situations where __del__ isn't called.
Not necessarily. You'll encounter problems when you have cyclic references. Eli Bendersky does a good job of explaining this in his blog post:
Safely using destructors in Python
If you are sure you will not create cyclic references, then using __del__ in that way is OK: as soon as the reference count goes to zero, the CPython VM will call that method and destroy the object.
If you plan to use cyclic references, please think it through very thoroughly, and check whether weak references may help; in many cases, cyclic references are a first symptom of bad design.
If you have no control on the way your object is going to be used, then using __del__ may not be safe.
If you plan to use Jython or IronPython, __del__ is not reliable at all, because final object destruction will happen at garbage collection, and that's something you cannot control.
In sum, in my opinion, __del__ is usually perfectly safe and good; however, in many situations it could be better to take a step back and look at the problem from a different perspective; a good use of try/except and of with contexts may be a more Pythonic solution.