Python - Initializing parent classes

I am trying to figure out how to use super() to initialize the parent classes one by one, based on a condition.
class A:
    def __init__(self, foo):
        self.foo = foo

class B:
    def __init__(self, bar):
        self.bar = bar

class C(A, B):
    def __init__(self):
        # Initialize class A first.
        # Do some calculation and then initialize class B.
How do I use super() in class C so that it initializes class A first, lets me do some calculation, and then calls super() again to initialize class B?

You cannot do what you ask for in C.__init__, as super doesn't give you any control over which specific inherited methods get called, only the order in which they are called, and that is controlled entirely by the order in which the parent classes are listed.
If you use super, you need to use it consistently in all the classes. (That's why it's called cooperative inheritance.) Note this means that C cannot inject any code between the calls to A.__init__ and B.__init__.
__init__ is particularly tricky to implement correctly when using super, because a rule of cooperative super calls is that you have to expect arbitrary arguments to be passed, yet object.__init__() doesn't take any arguments. You need each additional argument to be "owned" by a particular root class that is responsible for removing it from the argument list.
class A:
    def __init__(self, foo, **kwargs):
        # A "owns" foo; pass everything else on
        super().__init__(**kwargs)
        self.foo = foo

class B:
    def __init__(self, bar, **kwargs):
        # B "owns" bar; pass everything else on
        super().__init__(**kwargs)
        self.bar = bar

class C(A, B):
    def __init__(self):
        # Must pass arguments expected by A and B
        super().__init__(foo=3, bar=9)
The MRO for C is [A, B, object], so the call tree looks something like this:
C.__init__ is called with no arguments
super() resolves to A, so A.__init__ is called with foo=3 and bar=9.
In A.__init__, super() resolves to B, so B.__init__ is called with bar=9.
In B.__init__, super() resolves to object, so object.__init__ is called with no arguments (kwargs being empty)
Once object.__init__ returns, self.bar is set to bar
Once B.__init__ returns, self.foo is set to foo
Once A.__init__ returns, C.__init__ finishes up
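Repeating the classes so it runs standalone, a short trace confirms this order (the `order` list is my own addition for illustration):

```python
order = []

class A:
    def __init__(self, foo, **kwargs):
        order.append("A enter")
        super().__init__(**kwargs)   # resolves to B in C's MRO
        self.foo = foo
        order.append("A exit")

class B:
    def __init__(self, bar, **kwargs):
        order.append("B enter")
        super().__init__(**kwargs)   # resolves to object
        self.bar = bar
        order.append("B exit")

class C(A, B):
    def __init__(self):
        super().__init__(foo=3, bar=9)

c = C()
assert (c.foo, c.bar) == (3, 9)
assert order == ["A enter", "B enter", "B exit", "A exit"]
```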
OK, the first sentence isn't entirely true. Since neither A nor B, as currently written, use super, you might be able to assume that an appropriate use of super will simply call one parent function and immediately return.
class A:
    def __init__(self, foo):
        self.foo = foo

class B:
    def __init__(self, bar):
        self.bar = bar

class C(A, B):
    def __init__(self):
        super().__init__(foo=3)          # next class after C in the MRO is A
        # Do some calculation
        super(A, self).__init__(bar=9)   # next class after A in the MRO is B
I'm not entirely certain, though, that this doesn't introduce some hard-to-predict bugs that could manifest with other subclasses of A, B, and/or C that attempt to use super properly.
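Keep in mind that two-argument super starts the attribute search after the named class in the MRO of type(self). A quick way to see where each call lands (an illustrative sketch; who() is a hypothetical method):

```python
class A:
    def who(self):
        return "A"

class B:
    def who(self):
        return "B"

class C(A, B):
    def who(self):
        return "C"

c = C()
# C's MRO is [C, A, B, object]; super(X, c) searches *after* X in that list
assert super(C, c).who() == "A"
assert super(A, c).who() == "B"
```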

You can actually refer to the base classes explicitly:
class A:
    def __init__(self, foo):
        self.foo = foo

class B:
    def __init__(self, bar):
        self.bar = bar

class C(A, B):
    def __init__(self):
        A.__init__(self, 'foovalue')
        # Do some calculation
        B.__init__(self, 'barvalue')
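Explicit base calls work here because A and B share no base besides object. With a diamond hierarchy, the shared base runs twice, which is exactly the problem cooperative super avoids (a sketch with illustrative class names):

```python
calls = []

class Base:
    def __init__(self):
        calls.append("Base")

class A(Base):
    def __init__(self):
        Base.__init__(self)
        calls.append("A")

class B(Base):
    def __init__(self):
        Base.__init__(self)
        calls.append("B")

class C(A, B):
    def __init__(self):
        A.__init__(self)   # runs Base.__init__ once...
        B.__init__(self)   # ...and again here

C()
assert calls.count("Base") == 2  # the shared base is initialized twice
```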

Related

how to add values in "__init__" to the list of VALUES with the meta class

How can I add self.value1 and self.value2 to the VALUES dict inside the metaclass?
class Meta(type):
    VALUES = dict()
    def __new__(mcs, name, bases, attrs):
        for k, v in attrs.items():
            print(k, v)
        return super(Meta, mcs).__new__(mcs, name, bases, attrs)

class A(metaclass=Meta):
    def __init__(self):
        self.value1 = "Class A"

class B(metaclass=Meta):
    def __init__(self):
        self.value2 = "Class B"

class Main(A, B):
    def __init__(self):
        super(Main, self).__init__()

m = Main()
print(m.__dict__)
The output I want:
{'value1': 'Class A', 'value2': 'Class B'}
super is meant to be used cooperatively, by all classes in the hierarchy. (Ignoring the metaclass, since it is irrelevant to the requested result.)
class A:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.value1 = "Class A"

class B:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.value2 = "Class B"

class Main(A, B):
    pass

m = Main()
Since Main.__init__ isn't defined, the first __init__ method found in the MRO is called first, namely A.__init__. A.__init__ then uses super() to call B.__init__, which uses super() to call object.__init__, which is the last method in the chain, as object.__init__ itself does not use super.
Note that both A and B use super without knowing which class's __init__ method will be used next. That is determined by the type of the object passed as self, not the class.
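A small sketch of that point: A's super() call reaches B only because the instance is a Main (the class and method names here are illustrative):

```python
class A:
    def tag(self, acc):
        acc.append("A")
        super().tag(acc)   # which method this reaches depends on type(self)

class B:
    def tag(self, acc):
        acc.append("B")    # end of the chain: object has no tag()

class Main(A, B):
    pass

acc = []
Main().tag(acc)
# A.tag's super() resolved to B.tag because Main's MRO is [Main, A, B, object]
assert acc == ["A", "B"]
```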
This is because super() doesn't call every inherited version of the method, only the first one it finds in the method resolution order. Because your class definition inherits from A before B, the first class to be searched is A. The program finds the __init__ function inside class A, executes it, and returns. If you were instead to write class Main(B, A):, the output would be {'value2': 'Class B'}, because then class B is higher in the search order than A.
To solve the problem you have, you would have to run all of the inherited class's init functions, for example like this:
class Main(A, B):
    def __init__(self):
        for cls in Main.__bases__:
            cls.__init__(self)
This takes a reference to each parent class from the __bases__ attribute and calls its __init__ directly, meaning that it is called for every parent class, not just the highest-priority one. I'm not sure if this is best practice or even sensible at all, but it should solve the problem you described.
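Leaving out the metaclass (it doesn't affect the result), the __bases__ loop does produce the requested dict, though note it would call a shared base twice in a diamond hierarchy:

```python
class A:
    def __init__(self):
        self.value1 = "Class A"

class B:
    def __init__(self):
        self.value2 = "Class B"

class Main(A, B):
    def __init__(self):
        # call every direct base's __init__, not just the first one in the MRO
        for cls in Main.__bases__:
            cls.__init__(self)

m = Main()
assert m.__dict__ == {"value1": "Class A", "value2": "Class B"}
```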

Python base class' implicit super() call

Currently I am starting to revise my python's OOP knowledge. I stumbled upon super() definition, which suggests, that it provides a derived class with a set of instance variables and methods from a base class.
So I have this piece of code:
class foo:
    bar = 5
    def __init__(self, a):
        self.x = a
    def spam(self):
        print(self.x)

class baz(foo):
    pass

b = baz(5)
b.spam()
And this executed with no super() calls, no errors, and printed out 5.
Now when I add an __init__ method to the derived class, like this:
class foo:
    bar = 5
    def __init__(self, a):
        self.x = a
    def spam(self):
        print(self.x)

class baz(foo):
    def __init__(self, a):
        self.b = a

b = baz(5)
b.spam()
the script gives me an error: AttributeError: 'baz' object has no attribute 'x'.
So this would suggest that if my class uses the default __init__, there is an implicit super() call. I couldn't actually find any info confirming this, so I just wanted to ask if I am correct.
The problem is that when you define the method __init__ in your subclass baz, you are no longer using the one from the parent class foo; it is overridden, not merged. Then, when you call b.spam(), x does not exist, because x is only set in the __init__ method of the parent class.
You can use the following to fix this if what you want is to call the __init__ method of the parent class and also add your own logic:
class baz(foo):
    def __init__(self, a):
        super().__init__(10)  # you can pass any value you want to assign to x
        self.b = a

>>> b = baz(5)
>>> b.spam()
10
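If the intent is for x to receive the same value the subclass was given, you can pass a straight through instead (a variant sketch of the same fix):

```python
class foo:
    bar = 5
    def __init__(self, a):
        self.x = a
    def spam(self):
        print(self.x)

class baz(foo):
    def __init__(self, a):
        super().__init__(a)  # initialize the inherited x first
        self.b = a           # then add the subclass's own attribute

b = baz(5)
b.spam()  # prints 5
assert (b.x, b.b) == (5, 5)
```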

Why do we pass self when we call a constructor of super class using class name but not when using super [duplicate]

This question already has answers here:
When calling super() in a derived class, can I pass in self.__class__? [duplicate]
(2 answers)
Closed 9 years ago.
Here is the code I was trying to write:
class A(object):
    def bind_foo(self):
        old_foo = self.foo
        def new_foo():
            old_foo()
            #super().foo()
            super(self.__class__, self).foo()
        self.foo = new_foo
    def __init__(self):
        print("A __init__")
    def foo(self):
        print("A foo")

class B(A):
    def __init__(self):
        print("B __init__")
        super().__init__()
    def foo(self):
        print("B foo")
        super().foo()

class C(A):
    def __init__(self):
        print("C __init__")
        super().__init__()
        super().bind_foo()
    def foo(self):
        print("C foo")

b = B()
b.foo()
c = C()
c.foo()
Classes B and A show the expected behavior: when I call b.foo(), it calls A.foo() as well via super(). Class C tries to mimic the B/A child-parent behavior, but this time I don't want to put super().foo() explicitly in the child class while still having the parent's foo() called. It works as expected.
However, what I don't quite get is that, under A.bind_foo, I have to use super(self.__class__, self).foo() rather than super().foo(). super().foo() gives a
"SystemError: super(): no arguments".
Can someone explain why that is so?
You should not use self.__class__ or type(self) when calling super().
In Python 3, a call to super() without arguments is equivalent to super(B, self) (within methods on class B); note the explicit naming of the class. The Python compiler adds a __class__ closure cell to methods that use super() without arguments (see Why is Python 3.x's super() magic?) that references the current class being defined.
If you use super(self.__class__, self) or super(type(self), self), you will hit an infinite recursion exception when a subclass tries to call that method; at that time self.__class__ is the derived class, not the original. See When calling super() in a derived class, can I pass in self.__class__?
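A minimal demonstration of that recursion (illustrative classes, not from the question):

```python
class A:
    def greet(self):
        return "A"

class B(A):
    def greet(self):
        # anti-pattern: hard-coding type(self) instead of the defining class
        return "B>" + super(self.__class__, self).greet()

class C(B):
    pass

assert B().greet() == "B>A"   # works only while type(self) is B

try:
    C().greet()               # type(self) is C; super(C, self) finds B.greet again
    raised = False
except RecursionError:
    raised = True
assert raised
```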
So, to summarize, in Python 3:
class B(A):
    def __init__(self):
        print("B __init__")
        super().__init__()
    def foo(self):
        print("B foo")
        super().foo()
is equal to:
class B(A):
    def __init__(self):
        print("B __init__")
        super(B, self).__init__()
    def foo(self):
        print("B foo")
        super(B, self).foo()
but you should use the former, as it saves you repeating yourself.
In Python 2, you are stuck with the second form only.
For your bind_foo() method, you'll have to pass in an explicit class from which to search the MRO from, as the Python compiler cannot determine here what class is used when you bind the new replacement foo:
def bind_foo(self, klass=None):
    old_foo = self.foo
    if klass is None:
        klass = type(self)
    def new_foo():
        old_foo()
        super(klass, self).foo()
    self.foo = new_foo
You could use __class__ (no self) to have Python provide you with the closure cell, but that'd be a reference to A, not C here. When you are binding the new foo, you want the search for overridden methods in the MRO to start searching at C instead.
Note that if you now create a class D, subclassing from C, things will go wrong again, because now you are calling bind_foo() and in turn call super() with D, not C, as the starting point. Your best bet then is to call bind_foo() with an explicit class reference. Here __class__ (no self.) will do nicely:
class C(A):
    def __init__(self):
        print("C __init__")
        super().__init__()
        self.bind_foo(__class__)
Now you have the same behaviour as using super() without arguments: a reference to the current class (the one in which you are defining the method __init__) is passed to super(), making new_foo() behave as if it were defined directly in the class definition of C.
Note that there is no point in calling bind_foo() on super() here; you didn't override it here, so you can just call self.bind_foo() instead.

Maintaining readability when using super() for direct multiple inheritance

For the case of the most basic multiple inheritance:
class A:
    def __init__(self, a):
        self.a = a

class B:
    def __init__(self, b):
        self.b = b

class C(A, B):
    def __init__(self, a, b):
        A.__init__(self, a)
        B.__init__(self, b)
I do not see why super() should be used. I suppose you could implement it with kwargs, but that is surely less readable than the above method. I have yet to find any answers on Stack Overflow that are in favour of this method, yet surely for this case it is the most satisfactory?
There are a lot of questions marked as duplicates on this topic, but no satisfactory answers for this exact case. This question addresses multiple inheritance and the use of super() for a diamond inheritance. In this case there is no diamond inheritance and neither parent class has any knowledge of the other, so they shouldn't need to call super() like this suggests.
This answer deals with the use of super in this scenario but without passing arguments to __init__ like is done here, and this answer deals with passing arguments but is again a diamond inheritance.
One correct way to use super here would be
class A:
    def __init__(self, a, **kwargs):
        super().__init__(**kwargs)
        self.a = a

class B:
    def __init__(self, b, **kwargs):
        super().__init__(**kwargs)
        self.b = b

class C1(A, B):
    pass

class C2(A, B):
    def __init__(self, a, b, **kwargs):
        super().__init__(a=a, b=b, **kwargs)

c1 = C1(a="foo", b="bar")
c2 = C2(a="foo", b="bar")
The method resolution order for C is [C, A, B, object]. Each time super() is called, it returns a proxy for the next class in the MRO, based on where super() is called at the time.
You have two options when defining C, depending on whether you want C.__init__ to have a signature that explicitly mentions the two arguments A and B require for initialization. With C1, C1.__init__ is not defined, so A.__init__ (the next __init__ in the MRO) will be called instead. With C2, you need to explicitly call the next __init__ method in the chain.
C, knowing that it is a subclass of A and B, has to at least provide the expected arguments for the known upstream __init__ methods.
A.__init__ will pass on everything except a to the next class's __init__ method.
B.__init__ will pass on everything it receives except b.
object.__init__ will finally be called, and assuming all previous classes correctly removed the keyword arguments they introduced, will receive no additional arguments.
Changing the order in which the various __init__s are called means changing the MRO, which means altering the order of the base classes. If you want more control than that, then cooperative multiple inheritance is not for you.
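To see that the base-class order alone controls this, here is a small sketch that records when each __init__ body runs; since each body runs after its super() call returns (matching the pattern above), the innermost class records first:

```python
class A:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.order.append("A")   # runs after the rest of the chain finishes

class B:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.order.append("B")

class AB(A, B):
    order = []   # class-level list shared by all AB instances

class BA(B, A):
    order = []

AB()   # MRO [AB, A, B, object]: B's body completes before A's
BA()   # MRO [BA, B, A, object]: A's body completes before B's
assert AB.order == ["B", "A"]
assert BA.order == ["A", "B"]
```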
class A(object):
    def __init__(self, *args, **kwargs):
        super(A, self).__init__(*args, **kwargs)
        self.a = kwargs['a']

class B(object):
    def __init__(self, *args, **kwargs):
        super(B, self).__init__()
        self.b = kwargs['b']

class C(A, B):
    def __init__(self, *args, **kwargs):
        super(C, self).__init__(*args, **kwargs)

>>> z = C(a=1, b=2)
>>> z.b
2

Change signature of function called in __init__ of base class

In A.__init__ I call self.func(argument):
class A(object):
    def __init__(self, argument, key=0):
        self.func(argument)
    def func(self, argument):
        # some code here
I want to change the signature of A.func in B. B.func gets called in B.__init__ through A.__init__:
class B(A):
    def __init__(self, argument1, argument2, key=0):
        super(B, self).__init__(argument1, key)  # calls A.__init__
    def func(self, argument1, argument2):
        # some code here
Clearly, this doesn't work because the signature of B.func expects two arguments while A.__init__ calls it with one argument. How do I work around this? Or is there something incorrect with the way I have designed my classes?
key is a default argument to A.__init__. argument2 is not intended for key. argument2 is an extra argument that B takes but A does not. B also takes key and has default value for it.
Another constraint is that I would like not to change the signature of A.__init__. key will usually be 0. So I want to allow users to be able to write A(arg) rather than A(arg, key=0).
Generally speaking, changing the signature of a method between subclasses breaks the expectation that the methods on subclasses implement the same API as those on the parent.
However, you could re-tool your A.__init__ to allow for arbitrary extra arguments, passing those on to self.func():
class A(object):
    def __init__(self, argument, *extra, **kwargs):
        key = kwargs.get('key', 0)
        self.func(argument, *extra)
        # ...

class B(A):
    def __init__(self, argument1, argument2, key=0):
        super(B, self).__init__(argument1, argument2, key=key)
        # ...
The second argument passed to super(B, self).__init__() is then captured in the extra tuple, and applied to self.func() in addition to argument.
In Python 2, to make it possible to use extra however, you need to switch to using **kwargs, otherwise key is always going to capture the second positional argument. Make sure to pass on key from B with key=key.
In Python 3, you are not bound by this restriction; put *args before key=0 and only ever use key as a keyword argument in calls:
class A(object):
    def __init__(self, argument, *extra, key=0):
        self.func(argument, *extra)
I'd give func() an *extra parameter too, so that its interface essentially remains unchanged between A and B; it just ignores anything beyond the first parameter passed in for A, and beyond the first two for B:
class A(object):
    # ...
    def func(self, argument, *extra):
        # ...

class B(A):
    # ...
    def func(self, argument1, argument2, *extra):
        # ...
Python 2 demo:
>>> class A(object):
...     def __init__(self, argument, *extra, **kwargs):
...         key = kwargs.get('key', 0)
...         self.func(argument, *extra)
...     def func(self, argument, *extra):
...         print('func({!r}, *{!r}) called'.format(argument, extra))
...
>>> class B(A):
...     def __init__(self, argument1, argument2, key=0):
...         super(B, self).__init__(argument1, argument2, key=key)
...     def func(self, argument1, argument2, *extra):
...         print('func({!r}, {!r}, *{!r}) called'.format(argument1, argument2, extra))
...
>>> A('foo')
func('foo', *()) called
<__main__.A object at 0x105f602d0>
>>> B('foo', 'bar')
func('foo', 'bar', *()) called
<__main__.B object at 0x105f4fa50>
It seems that there is a problem in your design. The following might fix your particular case but seems to perpetuate bad design even further. Notice Parent.method being called directly.
>>> class Parent:
        def __init__(self, a, b=None):
            Parent.method(self, a)
            self.b = b
        def method(self, a):
            self.location = id(a)

>>> class Child(Parent):
        def __init__(self, a):
            super().__init__(a, object())
        def method(self, a, b):
            self.location = id(a), id(b)

>>> test = Child(object())
Please consider adding a default argument to the second parameter of the method you are overriding. Otherwise, design your class and call structure differently. Reorganization might eliminate the problem.
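A minimal sketch of the default-argument suggestion, assuming B stashes its extra value so the overridden method can fall back to it (the `argument2` stash attribute and `seen` attribute are my own additions for illustration):

```python
class A:
    def __init__(self, argument, key=0):
        self.func(argument)          # A only ever passes one argument

    def func(self, argument):
        self.seen = (argument,)

class B(A):
    def __init__(self, argument1, argument2, key=0):
        self.argument2 = argument2   # stash before A.__init__ triggers func
        super().__init__(argument1, key)

    def func(self, argument1, argument2=None):
        if argument2 is None:
            argument2 = self.argument2  # the default falls back to the stash
        self.seen = (argument1, argument2)

b = B(1, 2)
assert b.seen == (1, 2)
```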
Actually, I would resort to putting an extra boolean argument in A's __init__ to control the call to func, and just pass False from B's __init__:
class A(object):
    def __init__(self, argument, key=0, call_func=True):
        if call_func:
            self.func(argument)

class B(A):
    def __init__(self, argument):
        argument1, argument2 = argument, 'something else'
        super(B, self).__init__(argument1, call_func=False)
        self.func(argument1, argument2)  # call the two-argument version directly
