I'm having trouble with Python (2.7) inheritance. I'm trying to refer from derived classes to parents and back, which is easy enough if you hard-code the classes, but that seems like an ugly approach to me. Is it? Anyway, here we go:
class Alpha(object):
    def fie(self):
        pass

class Beta(Alpha):
    def fie(self):
        super(self.__class__, self).fie()

class Gamma(Beta):
    pass

Alpha().fie()
Beta().fie()
Gamma().fie()
The last one calls fie as defined on Beta, but since it's called on a Gamma instance, self.__class__ is Gamma, so super resolves to Beta again. Beta.fie therefore calls itself, starting an infinite recursion.
Is there a way to reference the class for which the function is initially defined? Or the class highest up the chain (besides object)? Or possibly an even better way to accomplish this without hard-coding class names?
Nope - you just have to write it as:
class Beta(Alpha):
    def fie(self):
        super(Beta, self).fie()
See: http://yergler.net/blog/2011/07/04/super-self/ - and quoted from there (as it explains it better than I could!):
According to the Python 2.7.2 standard library documentation, super “return[s] a proxy object that delegates method calls to a parent or sibling class of type.” So in the case of single inheritance, it delegates access to the super class, it does not return an instance of the super class. In the example above, this means that when you instantiate B, the following happens:
1. enter B.__init__()
2. call super on B and call __init__ on the proxy object
3. enter A.__init__()
4. call super on self.__class__ and call __init__ on the proxy object
The problem is that when we get to step four, self still refers to our instance of B, so calling super points back to A again. In technical terms: Ka-bloom.
And within that article is a link to a blog by Raymond Hettinger (and they're always worth reading): http://rhettinger.wordpress.com/2011/05/26/super-considered-super/
NB: read the comment where a user suggests using type(self) (equivalent to your self.__class__) and why it doesn't work
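Regarding that last note, here is a minimal sketch (reusing the classes from the question) of why type(self) hits exactly the same recursion as self.__class__:
def hack_demo():
    pass  # placeholder so the sketch below stands alone

class Alpha(object):
    def fie(self):
        print("Alpha.fie")

class Beta(Alpha):
    def fie(self):
        # On a Gamma instance, type(self) is Gamma, so this super() call
        # resolves to Beta again and fie keeps calling itself.
        super(type(self), self).fie()

class Gamma(Beta):
    pass

Alpha().fie()    # fine
Beta().fie()     # fine: type(self) is Beta, so super resolves to Alpha
# Gamma().fie()  # RuntimeError: maximum recursion depth exceeded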
I already asked about something related to the game I am developing. The problem occurred during development, but it actually has nothing to do with the game itself.
I have a method ('resize' in a subclass) in my code which calls the equivalent method in its superclass ('resize' of the superclass).
Expected behaviour: Super-resize calls Super-do_rotozoom
What happened: Super-resize called Sub-do_rotozoom
Here is a code example:
Subclass:
    def do_rotozoom(self):
        # do rotozoom stuff of subclass
        pass

    def resize(self, factor):
        super().resize(factor)
        self.do_rotozoom()

Superclass:
    def do_rotozoom(self):
        # do rotozoom stuff of superclass
        pass

    def resize(self, factor):
        self.factor = factor
        self.do_rotozoom()
I found a workaround which involved calling super().do_rotozoom() in the subclass's do_rotozoom() method, which is then called by super().resize(). I also found out that in this case I could remove the line self.do_rotozoom().
In this case it was a pretty easy fix, but what would I do in a more complex scenario? For example, what if I need to call do_rotozoom() with different variables in the superclass than in the subclass, or need another specific implementation? In other words, how am I able to select which method I want to use in a specific context?
Normally you can only reach the super-methods from the subclass, but from the superclass you cannot reach its own methods again once they are overridden (I don't mean its superclass's methods, but its own).
I have not found a better title... :D
Developers tend to prefer composition over inheritance; it's much more manageable.
What I advise you to do is to include an instance of your superclass in your subclass and use it whenever you want to, for example:
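Here is a rough sketch of that idea, using hypothetical Sprite/ScaledSprite names rather than the classes from your code:

class Sprite(object):
    def do_rotozoom(self):
        print("Sprite rotozoom")

    def resize(self, factor):
        self.factor = factor
        self.do_rotozoom()

class ScaledSprite(object):        # note: no inheritance
    def __init__(self):
        self.base = Sprite()       # hold an instance of the other class instead

    def do_rotozoom(self):
        print("ScaledSprite rotozoom")

    def resize(self, factor):
        self.base.resize(factor)   # explicitly pick the base implementation
        self.do_rotozoom()         # explicitly pick this class's implementation

Because the two objects are separate, there is no implicit dispatch through self: every call names exactly the implementation it wants.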
The very definition of a subclass is that it inherits everything from the superclass except the methods and attributes it overrides.
A subclass can refer to its superclass and its method implementations with super(), like you already do in your example.
Either don't override do_rotozoom, or refer to the superclass method with super().do_rotozoom() where that's the behavior you require.
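For example, here is a minimal sketch of the second option, with hypothetical Base/Child names and the Python 3 super() syntax from your question:

class Base:
    def do_rotozoom(self):
        print("Base rotozoom")

    def resize(self, factor):
        self.factor = factor
        self.do_rotozoom()          # dispatches on the runtime type of self

class Child(Base):
    def do_rotozoom(self):
        super().do_rotozoom()       # explicitly run the Base version as well
        print("Child rotozoom")

Child().resize(2)                   # prints "Base rotozoom", then "Child rotozoom"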
This question is in relation to the posts at What does 'super' do in Python?, How do I initialize the base (super) class?, and Python: How do I make a subclass from a superclass?, which describe two ways to initialize a SuperClass from within a SubClass:
class SuperClass:
    def __init__(self):
        return

    def superMethod(self):
        return

## One version of Initiation
class SubClass(SuperClass):
    def __init__(self):
        SuperClass.__init__(self)

    def subMethod(self):
        return
or
class SuperClass:
    def __init__(self):
        return

    def superMethod(self):
        return

## Another version of Initiation
class SubClass(SuperClass):
    def __init__(self):
        super(SubClass, self).__init__()

    def subMethod(self):
        return
So I'm a little confused about needing to explicitly pass self as a parameter in
SuperClass.__init__(self)
and
super(SubClass, self).__init__().
(In fact if I call SuperClass.__init__() I get the error
TypeError: __init__() missing 1 required positional argument: 'self'
). But when calling constructors or any other class method (i.e.:
## Calling class constructor / initiation
c = SuperClass()
k = SubClass()
## Calling class methods
c.superMethod()
k.superMethod()
k.subMethod()
), the self parameter is passed implicitly.
My understanding of the self keyword is that it is not unlike the this pointer in C++, in that it provides a reference to the class instance. Is this correct?
If there would always be a current instance (in this case SubClass), then why does self need to be explicitly included in the call to SuperClass.__init__(self)?
Thanks
This is simply method binding, and has very little to do with super. When you call x.method(*args), Python checks the type of x for a method named method. If it finds one, it "binds" the function to x, so that when you call it, x will be passed as the first parameter, before the rest of the arguments.
When you call a (normal) method via its class, no such binding occurs. If the method expects its first argument to be an instance (e.g. self), you need to pass it in yourself.
The actual implementation of this binding behavior is pretty neat. Python objects are "descriptors" if they have a __get__ method (and/or __set__ or __delete__ methods, but those don't matter for methods). When you look up an attribute like a.b, Python checks the class of a to see if it has an attribute b that is a descriptor. If it does, it translates a.b into type(a).b.__get__(a, type(a)). If b is a function, it will have a __get__ method that implements the binding behavior I described above. Other kinds of descriptors can have different behaviors. For instance, the classmethod decorator replaces a method with a special descriptor that binds the function to the class, rather than the instance.
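A small sketch of that translation done by hand (the Example class here is just for illustration):

class Example(object):
    def method(self):
        return self

e = Example()

bound = e.method                                              # normal attribute lookup
also_bound = Example.__dict__['method'].__get__(e, Example)   # the same thing, spelled out

print(bound() is e)        # True
print(also_bound() is e)   # True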
Python's super creates special objects that handle attribute lookups differently than normal objects, but the details don't matter too much for this issue. The binding behavior of methods called through super is just like what I described in the first paragraph, so self gets passed automatically to the bound method when it is called. The only thing special about super is that it may bind a different function than you'd get by looking up the same method name on self (that's the whole point of using it).
The following example might elucidate things:
class Example:
    def method(self):
        pass
>>> print(Example.method)
<unbound method Example.method>
>>> print(Example().method)
<bound method Example.method of <__main__.Example instance at 0x01EDCDF0>>
When a method is bound, the instance is passed implicitly. When a method is unbound, the instance needs to be passed explicitly.
The other answers will definitely offer some more detail on the binding process, but I think it's worth showing the above snippet.
The answer is non-trivial and would probably warrant a good article. A very good explanation of how super() works is brilliantly given by Raymond Hettinger in a Pycon 2015 talk, available here and a related article.
I will attempt a short answer and if it is not sufficient I (and hopefully the community) will expand on it.
The answer has two key pieces:
1. Python's super() needs an object on which to call the overridden method, so the instance is passed explicitly as self. This is not the only possible implementation, and in fact, in Python 3, you no longer need to pass the class or the instance to super() yourself.
2. Python's super() is not like super in Java or other compiled languages. Python's implementation is designed to support the multiple collaborative inheritance paradigm, as explained in Hettinger's talk.
This has an interesting consequence in Python: the method resolution in super() depends not only on the parent class, but on the child classes as well (a consequence of multiple inheritance). Note that Hettinger is using Python 3.
The official Python 2.7 documentation on super is also a good source of information (better understood after watching the talk, in my opinion).
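For reference, a minimal Python 3 sketch of the zero-argument form mentioned in the first point:

class SuperClass:
    def __init__(self):
        print("SuperClass.__init__")

class SubClass(SuperClass):
    def __init__(self):
        super().__init__()   # Python 3: no explicit class or instance required

SubClass()                   # prints "SuperClass.__init__"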
Because in SuperClass.__init__(self), you're calling the method on the class, not on an instance, so the instance cannot be passed implicitly. Similarly, you cannot just call SubClass.subMethod(), but you can call SubClass.subMethod(k), and it'll be equivalent to k.subMethod(). Likewise, if self refers to a SubClass instance, then self.__init__() means SubClass.__init__(self), so if you want to call SuperClass.__init__ you have to call it explicitly.
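A quick sketch of those equivalences, reusing SuperClass and SubClass from the question:

k = SubClass()

k.subMethod()             # bound call: k is passed implicitly as self
SubClass.subMethod(k)     # the same call, with the instance passed explicitly

SuperClass.__init__(k)    # calling through the class: you supply self yourself
# SuperClass.__init__()   # TypeError: missing 1 required positional argument: 'self'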
I have somewhat of a strange question here. Let's say I'm making a simple, basic class as follows:
class MyClass(object):
    def __init__(self):
        super(MyClass, self).__init__()
Is there any purpose in calling super()? My class only has the default object parent class. The reason why I'm asking this is because my IDE automagically gives me this snippet when I create a new class. I usually remove the super() call because I don't see any purpose in leaving it. But maybe I'm missing something?
You're not obliged to call object.__init__ (via super or otherwise). It does nothing.
However, the purpose of writing that snippet in that way in an __init__ function (or any function that calls the superclass) is to give you some flexibility to change the superclass without modifying that code.
So it doesn't buy you much, but it does buy you the ability to change the superclass of MyClass to a different class whose __init__ likewise accepts no arguments, but which perhaps does do something and does need to be called by subclass __init__ methods. All without modifying your MyClass.__init__.
Your call whether that's worth having.
Also, in this particular example you can leave out MyClass.__init__ entirely, because yours does nothing either.
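A sketch of that flexibility argument, with a hypothetical Base class standing in for the new superclass:

class Base(object):
    def __init__(self):
        self.log = []        # hypothetical setup that subclasses rely on

class MyClass(Base):         # only the base class changed, from object to Base
    def __init__(self):
        super(MyClass, self).__init__()   # unchanged, and still does the right thing

print(MyClass().log)         # [] -- Base.__init__ ran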
I have looked at other questions here regarding Python's super() method but I am still finding it difficult to understand the whole concept.
I am also looking at the example in the book Pro Python.
The example referenced there is:
class A(object):
    def test(self):
        return 'A'

class B(A):
    def test(self):
        return 'B-->' + super(B, self).test()

class C(A):
    def test(self):
        return 'C'

class D(B, C):
    pass
>>> A().test()
'A'
>>> B().test()
'B-->A'
>>> C().test()
'C'
>>> D().test()
'B-->C'
>>> A.__mro__
(__main__.A, object)
>>> B.__mro__
(__main__.B, __main__.A, object)
>>> C.__mro__
(__main__.C, __main__.A, object)
>>> D.__mro__
(__main__.D, __main__.B, __main__.C, __main__.A, object)
Why, when doing D().test(), do we get the output 'B-->C' instead of 'B-->A'?
The explanation in the book is
In the most common case, which includes the usage shown here, super() takes two arguments: a class and an instance of that class. As our example here has shown, the instance object determines which MRO will be used to resolve any attributes on the resulting object. The provided class determines a subset of that MRO, because super() only uses those entries in the MRO that occur after the class provided.
I still find the explanation a bit difficult to understand. This might be a possible duplicate, and questions similar to this have been asked many times, but if I get an understanding of this I might be able to understand the rest of the other questions better.
Understanding Python super() with __init__() methods
What does 'super' do in Python?
python, inheritance, super() method
[python]: confused by super()
If you want to know why Python chose this specific MRO algorithm, the discussion is in the mailing list archives, and briefly summarized in The Python 2.3 Method Resolution Order.
But really, it comes down to this: Python 2.2's method resolution was broken when dealing with multiple inheritance, and the first thing anyone suggested to fix it was to borrow the C3 algorithm from Dylan, and nobody had any problem with it or suggested anything better, and therefore Python uses C3.
If you're more interested in the general advantages (and disadvantages) of C3 against other algorithms…
BrenBarn's and florquake's answers give the basics to this question. Python's super() considered super! from Raymond Hettinger's blog is a much longer and more detailed discussion in the same vein, and definitely worth reading.
A Monotonic Superclass Linearization for Dylan is the original paper describing the design. Of course Dylan is a very different language from Python, and this is an academic paper, but the rationale is still pretty nice.
Finally, The Python 2.3 Method Resolution Order (the same docs linked above) has some discussion on the benefits.
And you'd need to learn a lot about the alternatives, and about how they are and aren't appropriate to Python, to go any farther. Or, if you want deeper information on SO, you'll need to ask more specific questions.
Finally, if you're asking the "how" question:
When you call D().test(), it's obviously calling the code you defined in B's test method. And B.__mro__ is (__main__.B, __main__.A, object). So, how can that super(B, self).test() possibly call C's test method instead of A's?
The key here is that the MRO is based on the type of self, not based on the type B where the test method was defined. If you were to print(type(self)) inside the test functions, you'd see that it's D, not B.
So, super(B, self) actually gets self.__class__.__mro__ (in this case, D.__mro__), finds B in the list, and returns the next thing after it. Pretty simple.
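Roughly, a sketch of that lookup using the classes from the question:

d = D()
mro = type(d).__mro__                  # (D, B, C, A, object)
next_after_B = mro[mro.index(B) + 1]   # C, not A
print(next_after_B)                    # <class '__main__.C'>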
But that doesn't explain how the MRO works, just what it does. How does D().test() call the method from B, but with a self that's a D?
First, notice that D().test, D.test and B.test are not the same function, because they're not functions at all; they're methods. (I'm assuming Python 2.x here. Things are a little different—mainly simpler—in 3.x.)
A method is basically an object with im_func, im_class, and im_self members. When you call a method, all you're doing is calling its im_func, with its im_self (if not None) crammed in as an extra argument at the start.
So, our three examples all have the same im_func, which actually is the function you defined inside B. But the first two have D rather than B for im_class, and the first also has a D instance instead of None for im_self. So, that's how calling it ends up passing the D instance as self.
So, how does D().test end up with that im_self and im_class? Where does that get created? That's the fun part. For a full description, read the Descriptor HowTo Guide, but briefly:
Whenever you write foo.bar, what actually happens is equivalent to a call to getattr(foo, 'bar'), which does something like this (ignoring instance attributes, __getattr__, __getattribute__, slots, builtins, etc.):
def getattr(obj, name):
    for cls in obj.__class__.__mro__:
        try:
            desc = cls.__dict__[name]
        except KeyError:
            pass
        else:
            return desc.__get__(obj, obj.__class__)
    raise AttributeError(name)
That .__get__() at the end is the magic bit. If you look at a function (say, B.test.im_func), you'll see that it actually has a __get__ method. And what it does is create a bound method, with im_func as itself, im_class as the class obj.__class__, and im_self as the object obj.
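A Python 2 sketch of that __get__ call producing a bound method, using the D class from the question:

d = D()
func = B.__dict__['test']           # the plain function defined in class B
method = func.__get__(d, type(d))   # what attribute lookup does behind the scenes

print(method.im_func is func)       # True
print(method.im_self is d)          # True
print(method.im_class is D)         # True
print(method())                     # B-->C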
The short answer is that the method resolution order is roughly "breadth first". That is, it goes through all the base classes at a given level of ancestry before going to any of their superclasses. So if D inherits from B and C, which both inherit from A, the MRO always has B and C before A.
Another way to think about it is that if the order went B->A, then A.test would be called before C.test, even though C is a subclass of A. You generally want a subclass implementation to be invoked before the superclass one (because the subclass one might want to totally override the superclass and not invoke it at all).
A longer explanation can be found here. You can also find useful information by googling or searching Stackoverflow for question about "Python method resolution order" or "Python MRO".
super() is basically how you tell Python "Do what this object's other classes say."
When each of your classes has only one parent (single inheritance), super() will simply refer you to the parent class. (I guess you've already understood this part.)
But when you use multiple base classes, as you did in your example, things start to get a little more complicated. In this case, Python ensures that if you call super() everywhere, every class's method gets called.
A (somewhat nonsensical) example:
class Animal(object):
    def make_sounds(self):
        pass

class Duck(Animal):
    def make_sounds(self):
        print 'quack!'
        super(Duck, self).make_sounds()

class Pig(Animal):
    def make_sounds(self):
        print 'oink!'
        super(Pig, self).make_sounds()

# Let's try crossing some animals
class DuckPig(Duck, Pig):
    pass

my_duck_pig = DuckPig()
my_duck_pig.make_sounds()
# quack!
# oink!
You would want your DuckPig to say quack! and oink!, after all, it's a pig and a duck, right? Well, that's what super() is for.
I've been hacking classes in Python like this:
def hack(f, aClass):
    class MyClass(aClass):
        def f(self):
            f()
    return MyClass

A = hack(afunc, A)
Which looks pretty clean to me. It takes a class, A, creates a new class derived from it that has an extra method, calling f, and then reassigns the new class to A.
How does this differ from metaclass hacking in Python? What are the advantages of using a metaclass over this?
The definition of a class in Python is an instance of type (or an instance of a subclass of type). In other words, the class definition itself is an object. With metaclasses, you have the ability to control the type instance that becomes the class definition.
When a metaclass is invoked, you have the ability to completely re-write the class definition. You have access to all the proposed attributes of the class, its ancestors, etc. More than just injecting a method or removing a method, you can radically alter the inheritance tree, the type, and pretty much any other aspect. You can also chain metaclasses together for a very dynamic and totally convoluted experience.
I suppose the real benefit, though, is that the class's type remains the class's type. In your example, typing:
a_inst = A()
type(a_inst)
will show that it is an instance of MyClass. Yes, isinstance(a_inst, aClass) would return True, but you've introduced a subclass, rather than a dynamically re-defined class. The distinction there is probably the key.
As rjh points out, the anonymous inner class also has performance and extensibility implications. A metaclass is processed only once, at the moment the class is defined, and never again. Users of your API can also extend your metaclass because it is not enclosed within a function, so you gain a certain degree of extensibility.
This slightly old article actually has a good explanation that compares exactly the "function decoration" approach you used in the example with metaclasses, and shows the history of the Python metaclass evolution in that context: http://www.ibm.com/developerworks/linux/library/l-pymeta.html
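For comparison, here is a minimal metaclass sketch (Python 2 syntax, hypothetical names) that injects the extra method at class-creation time instead of wrapping the class afterwards:

def afunc():
    print("hacked in")

class HackMeta(type):
    def __new__(mcs, name, bases, namespace):
        namespace['f'] = lambda self: afunc()   # rewrite the class body before the class exists
        return super(HackMeta, mcs).__new__(mcs, name, bases, namespace)

class A(object):
    __metaclass__ = HackMeta

a = A()
a.f()            # prints "hacked in"
print(type(a))   # <class '__main__.A'> -- no extra anonymous subclass involved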
You can use the type callable as well.
def hack(f, aClass):
    newfunc = lambda self: f()
    return type('MyClass', (aClass,), {'f': newfunc})
I find using type the easiest way to get into the metaclass world.
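A quick usage sketch of that hack function, with a hypothetical afunc and base class:

def afunc():
    print("hello from afunc")

class Base(object):
    pass

Hacked = hack(afunc, Base)
Hacked().f()              # prints "hello from afunc"
print(Hacked.__name__)    # MyClass -- the name passed to type()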
A metaclass is the class of a class. IMO, the bloke here covered it quite serviceably, including some use-cases. See Stack Overflow question "MetaClass", "new", "cls" and "super" - what is the mechanism exactly?.