I have a Parent and a Child class; both should execute their own fct in __init__, but Child has to execute the Parent fct first:
class Parent(object):
    def __init__(self):
        self.fct()
    def fct(self):
        # do some basic stuff

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.fct()
    def fct(self):
        # add other stuff
Problem is that super().__init__() calls the Child fct and not the Parent one, as I would like. Of course I could rename the Child function to fct2, but I was wondering if I can do what I want without changing names (because fct and fct2 do the same thing conceptually speaking, they just apply to different things). It would be nice if I could call super().__init__() as if it were a Parent object.
The idea of subclassing is this: if you ever need the parent class's behavior for a method, simply do not redefine that method in the child class.
Otherwise, in a hierarchy with complicated classes and mixins, where you really need the methods to have the same name, there is the name-mangling mechanism, triggered by Python when a method or attribute name is prefixed with two underscores (__):
class Parent(object):
    def __init__(self):
        self.__fct()
    def __fct(self):
        # do some basic stuff

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.__fct()
    def __fct(self):
        # add other stuff
Using the __ prefix makes Python change the name of this method, both in its declaration and wherever it is used, when the class is created (at the time the class statement with its block is itself executed). Both methods then work as if they were named differently, each one only accessible, in an ordinary way, from code in its own class.
Some documentation, mainly older docs, will sometimes refer to this as Python's mechanism for creating "private methods". It is not the same thing, although it can serve the same purpose in some use cases (like this one). The __fct method above will be renamed to Parent._Parent__fct and Child._Child__fct respectively when the code is executed.
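For illustration, here is a runnable version of the same idea (my own sketch, with the placeholder comments replaced by print calls); the mangled names can then be inspected directly:

class Parent(object):
    def __init__(self):
        self.__fct()               # actually calls self._Parent__fct()
    def __fct(self):
        print("Parent.__fct: basic stuff")

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.__fct()               # actually calls self._Child__fct()
    def __fct(self):
        print("Child.__fct: other stuff")

c = Child()                        # runs Parent.__fct first, then Child.__fct
print([n for n in dir(c) if "fct" in n])
# ['_Child__fct', '_Parent__fct']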
Second way, without name mangling:
Without resorting to this name mangling mechanism, it is possible to retrieve attributes from the class where a piece of code is declared by using the __class__ special name (not self.__class__, just __class__) - it is part of the same mechanism Python uses to make argumentless super() work:
class Parent(object):
    def __init__(self):
        __class__.fct(self)  # <- retrieved from the current class (Parent)
    def fct(self):
        # do some basic stuff

class Child(Parent):
    def __init__(self):
        super().__init__()
        __class__.fct(self)
    def fct(self):
        # add other stuff
This will also work - just note that since the methods are retrieved from the class object, not from an instance, the instance has to be explicitly passed as an argument when calling them.
The name __class__ is inserted automatically into any method that uses it, and it will always refer to the class whose body contains the code where it appears - the class itself will be created "in the future", when the whole class body has been processed and the class statement itself is resolved.
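A small check of that behavior (my own sketch; the method name is made up): even when called on a Child instance, __class__ inside a Parent method still refers to Parent:

class Parent:
    def where_am_i(self):
        return __class__          # cell filled in when the Parent class body is executed

class Child(Parent):
    pass

print(Child().where_am_i())       # <class '__main__.Parent'>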
class A:
    def amethod(self): print("Base1")

class B():
    def amethod(self): print("Base3")

class Derived(A, B):
    pass

instance = Derived()
instance.amethod()
# Now I want to call B's amethod(). Please let me know the way.
Python multiple inheritance: calling the second base class's method when both base classes define a method with the same name.
Try to use composition.
- Avoid multiple inheritance at all costs, as it's too complex to be reliable. If you're stuck with it, then be prepared to know the class hierarchy and spend time finding where everything is coming from.
- Use composition to package code into modules that are used in many different unrelated places and situations.
- Use inheritance only when there are clearly related reusable pieces of code that fit under a single common concept or if you have to because of something you're using.
class A:
    def amethod(self): print("Base1")

class B:
    def amethod(self): print("Base3")

class Derived2:
    def __init__(self):
        self.a = A()
        self.b = B()
    def amethodBase1(self):
        self.a.amethod()
    def amethodBase3(self):
        self.b.amethod()

instance2 = Derived2()
instance2.amethodBase1()
instance2.amethodBase3()
galaxyan's answer suggesting composition is probably the best one. Multiple inheritance is often complicated to design and debug, and unless you know what you're doing, it can be difficult to get right. But if you really do want it, here's an answer explaining how you can make it work:
For multiple inheritance to work properly, the base classes will often need to cooperate with their children. Python's super function makes this not too difficult to set up. You often will need a common base for the classes involved in the inheritance (to stop the inheritance chain):
class CommonBase:
    def amethod(self):
        print("CommonBase")
        # don't call `super` here, we're the end of the inheritance chain

class Base1(CommonBase):
    def amethod(self):
        print("Base1")
        super().amethod()

class Base2(CommonBase):
    def amethod(self):
        print("Base2")
        super().amethod()

class Derived(Base1, Base2):
    def amethod(self):
        print("Derived")
        super().amethod()
Now calling Derived().amethod() will print Derived, Base1, Base2, and finally CommonBase. The trick is that super passes each call on to the next class in the MRO of self, even if that class isn't in the current class's own inheritance hierarchy. So Base1.amethod ends up calling Base2.amethod via super, since they're being run on an instance of Derived.
If you don't need any behavior in the common base class, its method body can just be pass. And of course, the Derived class can simply inherit the method without writing its own version that calls super to get the rest.
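A quick way to see the order super will follow here is to inspect the MRO directly (a small check of the classes above, my own addition):

print([cls.__name__ for cls in Derived.__mro__])
# ['Derived', 'Base1', 'Base2', 'CommonBase', 'object']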
Why did the Python designers decide that subclasses' __init__() methods don't automatically call the __init__() methods of their superclasses, as in some other languages? Is the Pythonic and recommended idiom really like the following?
class Superclass(object):
    def __init__(self):
        print 'Do something'

class Subclass(Superclass):
    def __init__(self):
        super(Subclass, self).__init__()
        print 'Do something else'
The crucial distinction between Python's __init__ and those other languages' constructors is that __init__ is not a constructor: it's an initializer (the actual constructor (if any, but, see later;-) is __new__ and works completely differently again). While constructing all superclasses (and, no doubt, doing so "before" you continue constructing downwards) is obviously part of saying you're constructing a subclass's instance, that is clearly not the case for initializing, since there are many use cases in which superclasses' initialization needs to be skipped, altered, controlled -- happening, if at all, "in the middle" of the subclass initialization, and so forth.
Basically, super-class delegation of the initializer is not automatic in Python for exactly the same reasons such delegation is also not automatic for any other methods -- and note that those "other languages" don't do automatic super-class delegation for any other method either... just for the constructor (and if applicable, destructor), which, as I mentioned, is not what Python's __init__ is. (Behavior of __new__ is also quite peculiar, though really not directly related to your question, since __new__ is such a peculiar constructor that it doesn't actually necessarily need to construct anything -- could perfectly well return an existing instance, or even a non-instance... clearly Python offers you a lot more control of the mechanics than the "other languages" you have in mind, which also includes having no automatic delegation in __new__ itself!-).
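As a small illustration of that last point (my own sketch, not part of the answer): __new__ may hand back an already-existing instance instead of constructing a new one:

class Cached:
    _instance = None                      # a single cached instance, for illustration

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance              # may return the existing object

a = Cached()
b = Cached()
print(a is b)   # True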
I'm somewhat embarrassed when people parrot the "Zen of Python", as if it's a justification for anything. It's a design philosophy; particular design decisions can always be explained in more specific terms--and they must be, or else the "Zen of Python" becomes an excuse for doing anything.
The reason is simple: you don't necessarily construct a derived class in a way similar at all to how you construct the base class. You may have more parameters, fewer, they may be in a different order or not related at all.
class myFile(object):
    def __init__(self, filename, mode):
        self.f = open(filename, mode)

class readFile(myFile):
    def __init__(self, filename):
        super(readFile, self).__init__(filename, "r")

class tempFile(myFile):
    def __init__(self, mode):
        super(tempFile, self).__init__("/tmp/file", mode)

class wordsFile(myFile):
    def __init__(self, language):
        super(wordsFile, self).__init__("/usr/share/dict/%s" % language, "r")
This applies to all derived methods, not just __init__.
Java and C++ require that a base class constructor is called because of memory layout.
If you have a class BaseClass with a member field1, and you create a new class SubClass that adds a member field2, then an instance of SubClass contains space for field1 and field2. You need a constructor of BaseClass to fill in field1, unless you require all inheriting classes to repeat BaseClass's initialization in their own constructors. And if field1 is private, then inheriting classes can't initialise field1.
Python is not Java or C++. All instances of all user-defined classes have the same 'shape'. They're basically just dictionaries in which attributes can be inserted. Before any initialisation has been done, all instances of all user-defined classes are almost exactly the same; they're just places to store attributes that aren't storing any yet.
So it makes perfect sense for a Python subclass not to call its base class constructor. It could just add the attributes itself if it wanted to. There's no space reserved for a given number of fields for each class in the hierarchy, and there's no difference between an attribute added by code from a BaseClass method and an attribute added by code from a SubClass method.
If, as is common, SubClass actually does want to have all of BaseClass's invariants set up before it goes on to do its own customisation, then yes you can just call BaseClass.__init__() (or use super, but that's complicated and has its own problems sometimes). But you don't have to. And you can do it before, or after, or with different arguments. Hell, if you wanted you could call the BaseClass.__init__ from another method entirely than __init__; maybe you have some bizarre lazy initialization thing going.
Python achieves this flexibility by keeping things simple. You initialise objects by writing an __init__ method that sets attributes on self. That's it. It behaves exactly like a method, because it is exactly a method. There are no other strange and unintuitive rules about things having to be done first, or things that will automatically happen if you don't do other things. The only purpose it needs to serve is to be a hook to execute during object initialisation to set initial attribute values, and it does just that. If you want it to do something else, you explicitly write that in your code.
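A minimal illustration of that point (class and attribute names are made up): an instance is just an attribute dictionary, so skipping the base initializer leaves nothing structurally broken, only attributes that haven't been set yet:

class BaseClass:
    def __init__(self):
        self.field1 = 1

class SubClass(BaseClass):
    def __init__(self):
        self.field2 = 2        # deliberately not calling BaseClass.__init__

s = SubClass()
print(s.__dict__)              # {'field2': 2} -- no field1 until some code sets it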
To avoid confusion, it is useful to know that you can invoke the base class's __init__() method if the child class does not have an __init__() method of its own.
Example:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def me(self):
        pass

p = child(5, 4)
q = child(7)
z = child()
print p.a # prints 5
print q.b # prints 0
print z.a # prints 1
In fact, the MRO in Python will find __init__() in the parent class when it cannot find it in the child class. You need to invoke the parent class constructor explicitly if the child class already has an __init__() method of its own.
For example, the following code will raise errors:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def __init__(self):
        pass
    def me(self):
        pass

p = child(5, 4) # Error: __init__() takes 1 argument, 3 are provided.
q = child(7)    # Error: __init__() takes 1 argument, 2 are provided.
z = child()
print z.a       # Error: no attribute named a can be found.
"Explicit is better than implicit." It's the same reasoning that indicates we should explicitly write 'self'.
I think in the end it is a benefit -- can you recite all of the rules Java has regarding calling superclasses' constructors?
Right now, we have a rather long page describing the method resolution order in case of multiple inheritance: http://www.python.org/download/releases/2.3/mro/
If constructors were called automatically, you'd need another page of at least the same length explaining the order of that happening. That would be hell...
Often the subclass has extra parameters which can't be passed to the superclass.
Maybe __init__ is the method that the subclass needs to override. Sometimes subclasses need the parent's function to run before they add class-specific code, and other times they need to set up instance variables before calling the parent's function. Since there's no way Python could possibly know when it would be most appropriate to call those functions, it shouldn't guess.
If those don't sway you, consider that __init__ is Just Another Function. If the function in question were dostuff instead, would you still want Python to automatically call the corresponding function in the parent class?
I believe the one very important consideration here is that with an automatic call to super.__init__(), you prescribe, by design, when that initialization method is called, and with what arguments. Eschewing the automatic call, and requiring the programmer to make it explicitly, entails a lot of flexibility.
After all, just because class B is derived from class A does not mean A.__init__() can or should be called with the same arguments as B.__init__(). Making the call explicit means a programmer can, for example, define B.__init__() with completely different parameters, do some computation with that data, call A.__init__() with arguments as appropriate for that method, and then do some postprocessing. This kind of flexibility would be awkward to attain if A.__init__() were called from B.__init__() implicitly, either before B.__init__() executes or right after it.
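A small sketch of that pattern (class and parameter names are invented for illustration):

class A:
    def __init__(self, size):
        self.size = size

class B(A):
    def __init__(self, width, height):
        area = width * height            # compute something from B's own parameters first
        super().__init__(area)           # call A.__init__ when and how it suits B
        self.aspect = width / height     # then do some post-processing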
As Sergey Orshanskiy pointed out in the comments, it is also convenient to write a decorator to inherit the __init__ method.
You can write a decorator to inherit the __init__ method, and even perhaps automatically search for subclasses and decorate them. – Sergey Orshanskiy Jun 9 '15 at 23:17
Part 1/3: The implementation
Note: this is actually only useful if you want to run both the base and the derived class's __init__, since __init__ is inherited automatically when the child class doesn't define one. See the previous answers for this question.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Outputs:
Base: 42
Part 2/3: A warning
Warning: this doesn't work if base itself called super(type(self), self).
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        '''Warning: recursive calls.'''
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

class child2(child):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child2(42)
RecursionError: maximum recursion depth exceeded while calling a Python object.
Part 3/3: Why not just use plain super()?
But why not just use the safe, plain super()? Because it doesn't work: the newly rebound __init__ comes from outside the class, so the zero-argument form has no __class__ cell to rely on, and super(type(self), self) is required.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Errors:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-6f580b3839cd> in <module>
13 pass
14
---> 15 child(42)
<ipython-input-9-6f580b3839cd> in wrapper(self, *args, **kwargs)
1 def default_init(func):
2 def wrapper(self, *args, **kwargs) -> None:
----> 3 super().__init__(*args, **kwargs)
4 return wrapper
5
RuntimeError: super(): __class__ cell not found
Background - We CAN AUTO init a parent AND child class!
A lot of answers here say "This is not the Python way, use super().__init__() from the subclass". The question is not asking for the Pythonic way; it's comparing the expected behavior from other languages to Python's obviously different one.
The MRO document is pretty and colorful but it's really a TLDR situation and still doesn't quite answer the question, as is often the case in these types of comparisons - "Do it the Python way, because.".
Inherited methods can be overloaded by later declarations in subclasses. A pattern building on keyvanrm's answer (https://stackoverflow.com/a/46943772/1112676) solves the case where I want to AUTOMATICALLY init a parent class as part of calling a class, without explicitly calling super().__init__() in every child class.
In my case, a new team member might be asked to use a boilerplate module template (for making extensions to our application without touching the core application source) which we want to keep as bare and easy to adopt as possible, without them needing to know or understand the underlying machinery - they only need to know of and use what is provided by the application's base interface, which is well documented.
For those who will say "Explicit is better than implicit.": I generally agree. However, when coming from many other popular languages, automatic inherited initialization is the expected behavior, and it is very useful if it can be leveraged for projects where some people work on a core application and others work on extending it.
This technique can even pass args/keyword args for init which means pretty much any object can be pushed to the parent and used by the parent class or its relatives.
Example:
class Parent:
    def __init__(self, *args, **kwargs):
        self.somevar = "test"
        self.anothervar = "anothertest"
        # important part, call the init surrogate and pass through args:
        self._init(*args, **kwargs)

    # important part, a placeholder init surrogate:
    def _init(self, *args, **kwargs):
        print("Parent class _init; ", self, args, kwargs)

    def some_base_method(self):
        print("some base method in Parent")
        self.a_new_dict = {}

class Child1(Parent):
    # when omitted, the parent class's __init__() is run
    #def __init__(self):
    #    pass

    # overloading the parent class's _init() surrogate
    def _init(self, *args, **kwargs):
        print(f"Child1 class _init() overload; ", self, args, kwargs)
        self.a_var_set_from_child = "This is a new var!"

class Child2(Parent):
    def __init__(self, onevar, twovar, akeyword):
        print(f"Child2 class __init__() overload; ", self)
        # call some_base_method from parent
        self.some_base_method()
        # the parent's base method set a_new_dict
        print(self.a_new_dict)

class Child3(Parent):
    pass

print("\nRunning Parent()")
Parent()
Parent("a string", "something else", akeyword="a kwarg")

print("\nRunning Child1(), keep Parent.__init__(), overload surrogate Parent._init()")
Child1()
Child1("a string", "something else", akeyword="a kwarg")

print("\nRunning Child2(), overload Parent.__init__()")
#Child2() # __init__() requires arguments
Child2("a string", "something else", akeyword="a kwarg")

print("\nRunning Child3(), empty class, inherits everything")
Child3().some_base_method()
Output:
Running Parent()
Parent class _init; <__main__.Parent object at 0x7f84a721fdc0> () {}
Parent class _init; <__main__.Parent object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}
Running Child1(), keep Parent.__init__(), overload surrogate Parent._init()
Child1 class _init() overload; <__main__.Child1 object at 0x7f84a721fdc0> () {}
Child1 class _init() overload; <__main__.Child1 object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}
Running Child2(), overload Parent.__init__()
Child2 class __init__() overload; <__main__.Child2 object at 0x7f84a721fdc0>
some base method in Parent
{}
Running Child3(), empty class, inherits everything, access things set by other children
Parent class _init; <__main__.Child3 object at 0x7f84a721fdc0> () {}
some base method in Parent
As one can see, the overloaded definitions take the place of those declared in the Parent class but can still be called BY the Parent class, thereby allowing one to emulate the classical implicit inheritance initialization behavior: Parent and Child classes both initialize without needing to explicitly invoke the Parent's init() from the Child class.
Personally, I call the surrogate _init() method main() because it makes sense to me when switching between C++ and Python for example since it is a function that will be automatically run for any subclass of Parent (the last declared definition of main(), that is).
I want to be able to create an instance of a parent class X, with a string "Q" as an extra argument.
This string is to be a name being an identifier for a subclass Q of the parent class X.
I want the instance of the parent class to become (or be replaced with) an instance of the subclass.
I am aware that this is probably a classic problem (error?). After some searching I haven't found a suitable solution though.
I came up with the following solution myself;
I added a dictionary of possible identifiers as keys for their baseclass-instances to the init-method of the parent class.
Then I assigned the class attribute of the corresponding subclass to the current instance's class attribute.
I required the argument of the init-method not to be the default value to prevent infinite looping.
Following is an example of what the code looks like in practice:
class SpecialRule:
    """"""
    name = "Special Rule"
    description = "This is a Special Rule."

    def __init__(self, name=None):
        """"""
        print "SpecialInit"
        if name != None:
            SPECIAL_RULES = {
                "Fly": FlyRule(),
                "Skirmish": SkirmishRule()
            }  # dictionary coupling names to SpecialRule classes
            self.__class__ = SPECIAL_RULES[name].__class__

    def __str__(self):
        """"""
        return self.name

class FlyRule(SpecialRule):
    """"""
    name = "Fly"
    description = "Flies."

    def __init__(self):
        """"""
        print "FlyInit" + self.name
        SpecialRule.__init__(self)

    def addtocontainer(self, container):
        """this instance messes with the attributes of its containing class when added to some sort of list"""

class SkirmishRule(SpecialRule):
    """"""
    name = "Skirmish"
    description = "Skirmishes."

    def __init__(self):
        """"""
        SpecialRule.__init__(self)

    def addtocontainer(self, container):
        """this instance messes with the attributes of its containing class when added to some sort of list"""

test = SpecialRule("Fly")
print "evaluating resulting class"
print test.description
print test.__class__
Output:
SpecialInit
FlyInitFly
SpecialInit
evaluating resulting class
Flies.
__main__.FlyRule
Is there a more pythonic solution and are there foresee-able problems with mine?
(And am I mistaken that it's good programming practice to explicitly call the parent class's .__init__(self) in the .__init__ of the subclass?)
My solution feels a bit ... wrong ...
Quick recap so far;
Thanks for the quick answers
# Mark Tolonen's solution
I've been looking into the __new__-method, but when I try to make A, B and C in Mark Tolonen's example subclasses of Z, I get the error that class Z isn't defined yet. Also I'm not sure if instantiating class A the normal way ( with variable=A() outside of Z's scope ) is possible, unless you already have an instance of a subclass made and call the class as an attribute of an instance of a subclass of Z ... which doesn't seem very straightforward. __new__ is quite interesting so I'll fool around with it a bit more, your example is easier to grasp than what I got from the pythondocs.
# Greg Hewgill's solution
I tried the staticmethod solution and it seems to work fine. I looked into using a separate function as a factory before, but I guessed it would get hard to manage a large program with a list of loose strands of constructor code in the main block, so I'm very happy to integrate it in the class.
I did experiment a bit seeing if I could turn the create-method into a decorated .__call__() but it got quite messy so I'll leave it at that.
I would solve this by using a function that encapsulates the choice of object:
class SpecialRule:
    """"""
    name = "Special Rule"
    description = "This is a Special Rule."

    @staticmethod
    def create(name=None):
        """"""
        print "SpecialCreate"
        if name != None:
            SPECIAL_RULES = {
                "Fly": FlyRule,
                "Skirmish": SkirmishRule
            }  # dictionary coupling names to SpecialRule classes
            return SPECIAL_RULES[name]()
        else:
            return SpecialRule()
I have used the @staticmethod decorator to allow you to call the create() method without already having an instance of the object. You would call this like:
SpecialRule.create("Fly")
Look up the __new__ method. It is the correct way to override how a class is created vs. initialized.
Here's a quick hack:
class Z(object):
    class A(object):
        def name(self):
            return "I'm A!"
    class B(object):
        def name(self):
            return "I'm B!"
    class C(object):
        def name(self):
            return "I'm C!"
    D = {'A': A, 'B': B, 'C': C}
    def __new__(cls, t):
        return cls.D[t]()
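A quick usage sketch of the hack (my own addition, in Python 3 syntax):

z = Z('A')
print(z.name())   # I'm A!
print(type(z))    # <class '__main__.Z.A'>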
I was reading 'Dive Into Python' and in the chapter on classes it gives this example:
class FileInfo(UserDict):
    "store file metadata"
    def __init__(self, filename=None):
        UserDict.__init__(self)
        self["name"] = filename
The author then says that if you want to override the __init__ method, you must explicitly call the parent __init__ with the correct parameters.
What if that FileInfo class had more than one ancestor class?
Do I have to explicitly call all of the ancestor classes' __init__ methods?
Also, do I have to do this to any other method I want to override?
The book is a bit dated with respect to subclass-superclass calling. It's also a little dated with respect to subclassing built-in classes.
It looks like this nowadays:
class FileInfo(dict):
    """store file metadata"""
    def __init__(self, filename=None):
        super(FileInfo, self).__init__()
        self["name"] = filename
Note the following:
We can directly subclass built-in classes, like dict, list, tuple, etc.
The super function handles tracking down this class's superclasses and calling functions in them appropriately.
In each class that you need to inherit from, you can run a loop over each class that needs to be initialized upon initiation of the child class... an example that can be copied might be easier to understand...
class Female_Grandparent:
    def __init__(self):
        self.grandma_name = 'Grandma'

class Male_Grandparent:
    def __init__(self):
        self.grandpa_name = 'Grandpa'

class Parent(Female_Grandparent, Male_Grandparent):
    def __init__(self):
        Female_Grandparent.__init__(self)
        Male_Grandparent.__init__(self)
        self.parent_name = 'Parent Class'

class Child(Parent):
    def __init__(self):
        Parent.__init__(self)
        #-------------------------------------------------------------#
        for cls in Parent.__bases__:  # This block grabs the classes of the child
            cls.__init__(self)        # class (which is named 'Parent' in this case),
                                      # and iterates through them, initiating each one.
                                      # The result is that each parent, of each child,
                                      # is automatically handled upon initiation of the
                                      # dependent class. WOOT WOOT! :D
        #-------------------------------------------------------------#

g = Female_Grandparent()
print g.grandma_name

p = Parent()
print p.grandma_name

child = Child()
print child.grandma_name
You don't really have to call the __init__ methods of the base class(es), but you usually want to do it because the base classes will do some important initialization there that is needed for the rest of the class's methods to work.
For other methods it depends on your intentions. If you just want to add something to the base class's behavior, you will want to call the base class's method in addition to your own code. If you want to fundamentally change the behavior, you might not call the base class's method and instead implement all the functionality directly in the derived class.
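A small sketch of those two options (class names here are made up):

class Logger:
    def log(self, msg):
        print("LOG: " + msg)

class TimestampLogger(Logger):
    def log(self, msg):
        # extend: call the base class's method, then add behavior of our own
        super().log(msg)
        print("(also recorded a timestamp)")

class SilentLogger(Logger):
    def log(self, msg):
        # replace: don't call the base class's method at all
        pass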
If the FileInfo class has more than one ancestor class then you should definitely call all of their __init__() functions. You should also do the same for the __del__() function, which is a destructor.
Yes, you must call __init__ for each parent class. The same goes for functions, if you are overriding a function that exists in both parents.