Calling a parent class when the child class is in another module - Python

I am trying to figure out how to have a child class reside in another module. Currently it is more convenient for me to store the parent and child classes in different modules due to their size. I need the super method, since I want to inherit not just all the functions, but the variables in self as well. My current solution is as follows:
Parent Module (parent.py):
class A:
    def __init__(self, *args, **kwargs):
        super(A, self).__init__(*args, **kwargs)
Child Module (child.py):
from parent import A

class B(A):
    def __init__(self, *args, **kwargs):
        super(B, self).__init__(*args, **kwargs)

B()
When I run the child module I get the following error:
TypeError: super(type, obj): obj must be an instance or subtype of type
I understand that this is due to the module reloading, which causes data to be lost, but I am not sure if there is a workaround.

First, on your code:
It's not necessary to always call the parent constructor; in particular, calling object's constructor as you do in parent.A is not needed.
In Python 3, you can use the much simpler super().__init__ form of the call for single inheritance.
The import should usually be relative: from .parent import A
Now, to your actual problem:
When you reload parent in this case, you essentially generate a new class object for A that is not identical to the one that your compiled B knows of. You can check this by comparing id(B.__base__) to id(A) after the reload. This is not a problem for the super() form, as that doesn't use the name A explicitly (which points to the new class) but instead uses the actual base class. So it will construct fine, but with the "old" A implementation.
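To see the mismatch concretely, here is a minimal sketch (assuming the parent.py and child.py files from the question) that compares the class identities after a reload:

import importlib

import parent
import child

importlib.reload(parent)  # rebinds parent.A to a brand-new class object

# child.B was compiled against the *old* A, so the two no longer match:
print(child.B.__base__ is parent.A)        # False after the reload
print(id(child.B.__base__), id(parent.A))  # two distinct class objects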
P.S.:
It is essential that your question includes information on what you are actually trying to do, in this case reloading a module, which is not a "standard" operation in Python (that's why it's so cumbersome to do).

Related

Initializing a class derived from a base type

I am creating a class called Environment, which subclasses a dictionary. It looks something like this:
class Env(dict):
    "An environment dict, containing the parent Env (or None) where created."
    def __init__(self, parent=None):
        self.parent = parent
        # super().__init__() <-- not included
Pylint complains that:
super-init-not-called: __init__ method from base class 'dict' is not called.
What does doing super() on a dict type do? Is this something that is required to be done, and if so, why is it necessary?
After playing around with this a bit, I'm not so sure it does anything (or maybe it automatically does the super call behind the scenes anyway). Here's an example:
class Env1(dict):
    def __init__(self, parent=None):
        self.parent = parent
        super().__init__()

class Env2(dict):
    def __init__(self, parent=None):
        self.parent = parent

dir(Env1()) == dir(Env2()), len(dir(Env1))
(True, 48)
Pylint doesn't know what dict.__init__ does. It can't be sure if there's some important setup logic in that method or not. That's why it's warning you, so that you can either decide to call super().__init__ to be safe, or to silence the warning if you're confident you don't need the call.
I'm pretty sure you don't need to call dict.__init__ when you want to initialize your instances as empty dictionaries. But that may be dependent on the implementation details of the dict class you're inheriting from (which does all of its setup in the C-API equivalent __new__). Another Python implementation might do more of the setup work for its dictionaries in __init__ and then your code wouldn't work correctly.
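As a quick sanity check of that claim (CPython-specific, and only a sketch): an instance allocated by dict.__new__ is already a working empty mapping before __init__ ever runs:

# CPython: allocation in __new__ already yields a usable empty dict,
# so skipping dict.__init__ loses nothing for the empty case.
d = dict.__new__(dict)
d["key"] = "value"  # works even though dict.__init__ never ran
print(d)            # {'key': 'value'}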
To be safe, it's generally a good idea to call your parent class's __init__ method. This is such broad advice that it's baked into Pylint. You can ignore those warnings, and even add comments to your code that will suppress the ones that don't apply to certain parts of your code (so they don't distract you from real issues). But most of the warnings are generally good to obey, even if they don't reflect a serious bug in your current code.
Calling super() is not required, but makes sense if you want to follow OOP, specifically, the Liskov substitution principle.
From Wikipedia, the Liskov substitution principle says:
If S is a subtype of T, then objects of type T may be replaced with objects of type S without altering any of the desirable properties of the program.
In plain words, let S be a subclass of T. If T has a method or attribute, then S also has it. Moreover, if T.some_method(arg1, arg2, ..., argn) is proper syntax, then S.some_method(arg1, arg2, ..., argn) is also proper syntax and the output is identical. (There is more to it, but I skip it for simplicity.)
What does this theory mean for our case? If dict set up any attributes during its __init__, they would be lost (since it is never called), and the Liskov substitution principle would be violated. Check the following example.
class T:
    def __init__(self):
        self.t = 1

class S(T):
    def __init__(self, parent=None):
        self.parent = parent

s = S()
s.t
raises an AttributeError, because instances of S never get the attribute t.
Why is there no error in our case? Because no attributes are created inside __init__ in the parent class dict. Therefore, the extension works well and does not violate OOP.
To fix PyLint issue, change the code as follows:
class Env(dict):
    def __init__(self, parent=None):
        super().__init__()    # get all of the parent's __init__ setup
        self.parent = parent  # add your attributes
It does just what the documentation teaches us: it calls the __init__ method of the parent class. This does all of the initialization behind the attributes you supposedly want to inherit from the parent.
In general, if you do not call super().__init__(), then your object has only the added parent field, plus access to methods and class attributes of the parent. This will work just fine (except for the warning) for any class that does not use initialization arguments -- or, in particular, one that does not initialize any fields on the fly.
Python built-in types do what you expect (or want), so your given use is okay.
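A quick check of that claim, reusing the Env from the question (the version that does not call super().__init__()): the instance still behaves as an empty dict, plus the extra attribute:

class Env(dict):
    def __init__(self, parent=None):
        self.parent = parent  # no super().__init__() call

e = Env()
e["x"] = 1          # still works as a dict
print(e, e.parent)  # {'x': 1} None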
In contrast, consider the case of extending your Env class to one called Context:
class Context(Env):
    def __init__(self, upper, lower):
        self.upper = upper
        self.lower = lower

ctx = Context(7, 0)
print(ctx.upper)
print(ctx.parent)
At this last statement, you'll get a run-time fault: ctx has no attribute parent, since super().__init__() was never called in Context.__init__.

Singleton with __new__ raises "Was __classcell__ propagated to type.__new__?" using Python 3.8

While converting a Python 2 singleton metaclass to Python 3, __new__ raises:
[ ERROR ] Error in file Importing test library 'C:\Users\TestTabs.py' failed: __class__ not set defining 'BrowserDriver' as <class 'BrowserDriver.BrowserDriver'>. Was __classcell__ propagated to type.__new__?
CODE:
class Singleton(type):
    _instance = None

    def __new__(cls, *args, **kwargs):
        print('Newtest')
        if cls._instance is None:
            Singleton._instance = type.__new__(cls, *args, **kwargs)
        return Singleton._instance
It is used when declaring the class:
class BrowserDriver(metaclass=Singleton):
First: you should not be using a metaclass to get a singleton.
Second: your "singleton" code is broken, quite apart from this error:
By luck, it crossed paths with a new mechanism used in class creation, which requires type.__new__ to receive the "class cell" when creating a new class, and this was detected.
So, the mysterious __class__ cell will exist if any method in your class uses a call to super(). Python will create a rather magic __class__ variable that receives a reference to the class being created when the class body execution ends. At that point, the metaclass's __new__ is called. When the call to metaclass.__new__ returns, the Python runtime expects that the __class__ magic variable for that class is now "filled in" with a reference to the class itself.
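A minimal sketch of that mechanism (not the asker's code): on Python 3.8+, a metaclass that fails to pass __classcell__ through to type.__new__ reproduces exactly this error:

class DroppingMeta(type):
    def __new__(mcls, name, bases, ns):
        ns = dict(ns)
        ns.pop('__classcell__', None)  # fail to propagate the class cell
        return super().__new__(mcls, name, bases, ns)

class C(metaclass=DroppingMeta):
    def method(self):
        return super().__repr__()  # super() forces a __class__ cell

# RuntimeError: __class__ not set defining 'C' as <class 'C'>.
# Was __classcell__ propagated to type.__new__?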
This is for a working class creation - now we come to the bug in your code:
I don't know where you got this "singleton metaclass code" from, but it is broken: even if it worked, it would create ONE SINGLE CLASS for all classes using this metaclass - and not, as was probably desired, allow one single instance of each class using this metaclass. (As the new class body does not have its __class__ attribute set, you get the error you described under Python 3.8.)
In other words: any class past the first one using this metaclass is simply ignored, and not used by the program at all.
The (overkill) idea of using a metaclass to create singleton-enforcing classes is, yes, to allow a single instance of a class, but the cache for the single instance should be kept in the class itself, not on the metaclass - or else in an attribute of the metaclass that holds one instance for each class created, like a dictionary. A simple class attribute on the metaclass, as featured in this code, just makes classes past the first be ignored.
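A sketch of that dictionary idea (it uses the __call__ placement explained just below; the metaclass name is mine, not from the question):

class SingletonDictMeta(type):
    _instances = {}  # one entry per class created with this metaclass

    def __call__(cls, *args, **kw):
        if cls not in SingletonDictMeta._instances:
            SingletonDictMeta._instances[cls] = super().__call__(*args, **kw)
        return SingletonDictMeta._instances[cls]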
So, to fix that using metaclasses, the cache logic should be in the metaclass __call__ method, not in its __new__ method -
This is the expressly not recommended, but working, metaclass to enforce singletons:
class SingletonEnforcingmeta(type):
    def __call__(cls, *args, **kw):
        # check the "__dict__" entry instead of using "hasattr" - this
        # allows inheritance, with one instance per subclass
        if "_instance" not in cls.__dict__:
            cls._instance = super().__call__(*args, **kw)
        return cls._instance
But, as I wrote above, it is overkill to have a metaclass if you just want a singleton - the instantiation mechanism in __new__ itself is enough for keeping a single-instance cache.
But before doing that, one should think: is a "singleton-enforcing class" really necessary? This is Python - the flexible structure and "consenting adults" mindset of the language let you simply create an instance of your class in the same namespace where you created the class itself, and just use that single instance from that point on.
Actually, if your single instance has the same name as the class, one can't even create a new instance by accident, as the class itself becomes reachable only indirectly. That is:
The nice thing to do: if you need a singleton, create a singleton, not a "singleton-enforcing class".
class BrowserDriver(...):
    # normal code for the class here
    ...

BrowserDriver = BrowserDriver()
That is all there is to it. All you have now is a single instance of the BrowserDriver class that can be used from any place in your code.
Now, if you really need a singleton-enforcing class, one that upon any attempt to create an instance beyond the first will silently suppress the attempt rather than raise an error, and just return the first instance ever created, then the code you need in the __new__ method of the class is like the code you were trying to use as the metaclass's __new__. It records the single instance in the class itself.
If really needed: a singleton-enforcing class using __new__:
class SingletonBase:
    def __new__(cls, *args, **kw):
        if "_instance" not in cls.__dict__:
            cls._instance = super().__new__(cls, *args, **kw)
        return cls._instance
And then just inherit your "I must be a singleton" classes from this base.
Note, however, that __init__ will be called on the single instance at each instantiation attempt - so these singletons should do their setup in __new__ (calling super().__new__ as appropriate) instead of having an __init__ method, or else have an idempotent __init__ (i.e. one that can be called more than once, with the extra calls having no effect).
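A sketch of that idempotent-__init__ variant, building on the SingletonBase above (the AppConfig name is just an example):

class AppConfig(SingletonBase):
    def __init__(self):
        if getattr(self, "_initialized", False):
            return  # repeated calls are no-ops
        self._initialized = True
        self.settings = {}

a = AppConfig()
b = AppConfig()
assert a is b and a.settings is b.settings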

Unit testing a Python 3 metaclass

I have a metaclass that sets a class property my_new_property when it loads the class. The file is named my_meta and the code is this:
def remote_function():
    # Get some data from a request to another site
    return 'remote_response'

class MyMeta(type):
    def __new__(cls, *args, **kwargs):
        print("It is in")
        obj = super().__new__(cls, *args, **kwargs)
        new_value = remote_function()
        setattr(obj, 'my_new_property', new_value)
        return obj
The functionality to set the property works fine, however when writing the test file tests.py with only one code line:
from my_meta import MyMeta
The meta code is executed. As a consequence, it executes the real method remote_function.
The question is... as the meta code is executed only by using the import from the test file, how could I mock the method remote_function?
Importing the file as you show us won't trigger execution of the metaclass code.
However, importing any file (including the one where the metaclass is) where there is a class that makes use of this metaclass will run the code in the metaclass's __new__ method - as parsing a class body defined with the class statement does just that: it calls the metaclass to create a new class instance.
So the recommendation is: do not have your metaclass's __new__ or __init__ methods trigger side effects, like accessing remote stuff, unless that can be done in a seamless and innocuous way. Not only testing, but importing modules of your app in a Python shell, will also trigger the behavior.
You could have a method on the metaclass to initialize the remote value, and when you are about to actually use it, explicitly call such a "remote_init" - like in:
class MyMeta(type):
    def __new__(cls, *args, **kwargs):
        print("It is in")
        # note: no remote call here, so class creation stays side-effect free
        obj = super().__new__(cls, *args, **kwargs)
        return obj

    def remote_init(cls):
        if hasattr(cls, "my_new_property"):
            return
        cls.my_new_property = remote_function()
The remote_init method, being placed in the metaclass, will behave just like a class method for the instantiated classes, but won't be visible (to dir or attribute retrieval) from the class instances.
This is the safest thing to do.
If you want to avoid the explicit step, which is understandable, you could use a setting in a configuration file, plus a check inside remote_function on whether to trigger the actual networking code or just return a local, dummy value. You then make the configuration differ for testing/staging/production.
And, finally, you could move remote_function into another module, import that first, patch it out with unittest.mock.patch, and then import the metaclass-containing module - when it runs and calls the function, it will get the patched version. This will work for your tests, but won't fix the problem of triggering side effects on other occasions (like in other tests that load this module).
Of course, for that to work you have to import the module containing the metaclass, and any classes defined with it, inside your test function, in a region where mock.patch is active - not at the top of the file. There is no problem with importing things inside test methods to get control over the importing process itself.
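A sketch of that last approach, assuming remote_function has been moved to a hypothetical module named remote, and that my_meta does from remote import remote_function at its top:

import unittest
from unittest import mock

class MyMetaTests(unittest.TestCase):
    def test_metaclass_with_mocked_remote(self):
        # Patch before my_meta is imported, so the name it binds with
        # "from remote import remote_function" is already the mock.
        with mock.patch("remote.remote_function", return_value="dummy"):
            from my_meta import MyMeta

            class Probe(metaclass=MyMeta):  # runs MyMeta.__new__
                pass

            Probe.remote_init()
            self.assertEqual(Probe.my_new_property, "dummy")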

Python GTK: Instantiating a subclass of gtk.Bin

I am trying to write a GTK widget in Python that is a subclass of gtk.Bin and am not sure how to go about instantiating it. The first few lines of my class look like:
class Completer(gtk.Bin):
    def __init__(self, exts):
        gtk.Container.__init__(self)
        child = gtk.VBox(spacing=15)
        self.add(child)
I'm not sure how to set the child attribute, hence the code for that. But it hangs up on the line gtk.Container.__init__(self) with the message:
File "C:\Users\462974\Documents\Local Sandbox\tools\python\packages\GUI\tools\SNCompleter.py", line 133, in __init__
gtk.Container.__init__(self)
TypeError: cannot create instance of abstract (non-instantiable) type `GtkBin'
It also happens if I call gtk.Bin.__init__. I'm not sure how to initialize this subclass, but there is presumably a way since GTK does have usable subclasses of gtk.Bin.
You need to register a new gtype for your widget; otherwise it will use the same one as the superclass, and since that is an abstract class, you won't be able to instantiate it (as the exception indicates).
There are two ways of registering a new gtype:
Using gobject.type_register.
Setting the __gtype_name__ class variable in your class.
Here's an example using the second option (since I believe it is more straightforward):
class Completer(gtk.Bin):
    __gtype_name__ = "Completer"

    def __init__(self, exts, *args, **kwargs):
        super(Completer, self).__init__(*args, **kwargs)
        child = gtk.VBox(spacing=15)
        self.add(child)
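For completeness, a rough sketch of the first option (assuming PyGTK 2, where gobject.type_register is available):

import gtk
import gobject

class Completer(gtk.Bin):
    def __init__(self, exts):
        gtk.Bin.__init__(self)
        self.add(gtk.VBox(spacing=15))

# Register a distinct GType before the first instantiation, so instances
# are no longer of the abstract GtkBin type.
gobject.type_register(Completer)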

Why aren't superclass __init__ methods automatically invoked?

Why did the Python designers decide that subclasses' __init__() methods don't automatically call the __init__() methods of their superclasses, as in some other languages? Is the Pythonic and recommended idiom really like the following?
class Superclass(object):
    def __init__(self):
        print 'Do something'

class Subclass(Superclass):
    def __init__(self):
        super(Subclass, self).__init__()
        print 'Do something else'
The crucial distinction between Python's __init__ and those other languages' constructors is that __init__ is not a constructor: it's an initializer (the actual constructor (if any, but see later ;-) is __new__ and works completely differently again). While constructing all superclasses (and, no doubt, doing so "before" you continue constructing downwards) is obviously part of saying you're constructing a subclass's instance, that is clearly not the case for initializing, since there are many use cases in which superclasses' initialization needs to be skipped, altered, controlled -- happening, if at all, "in the middle" of the subclass initialization, and so forth.
Basically, super-class delegation of the initializer is not automatic in Python for exactly the same reasons such delegation is also not automatic for any other methods -- and note that those "other languages" don't do automatic super-class delegation for any other method either... just for the constructor (and if applicable, destructor), which, as I mentioned, is not what Python's __init__ is. (Behavior of __new__ is also quite peculiar, though really not directly related to your question, since __new__ is such a peculiar constructor that it doesn't actually necessarily need to construct anything -- could perfectly well return an existing instance, or even a non-instance... clearly Python offers you a lot more control of the mechanics than the "other languages" you have in mind, which also includes having no automatic delegation in __new__ itself!-).
I'm somewhat embarrassed when people parrot the "Zen of Python", as if it's a justification for anything. It's a design philosophy; particular design decisions can always be explained in more specific terms--and they must be, or else the "Zen of Python" becomes an excuse for doing anything.
The reason is simple: you don't necessarily construct a derived class in a way similar at all to how you construct the base class. You may have more parameters, fewer, they may be in a different order or not related at all.
class myFile(object):
    def __init__(self, filename, mode):
        self.f = open(filename, mode)

class readFile(myFile):
    def __init__(self, filename):
        super(readFile, self).__init__(filename, "r")

class tempFile(myFile):
    def __init__(self, mode):
        super(tempFile, self).__init__("/tmp/file", mode)

class wordsFile(myFile):
    def __init__(self, language):
        super(wordsFile, self).__init__("/usr/share/dict/%s" % language, "r")
This applies to all derived methods, not just __init__.
Java and C++ require that a base class constructor is called because of memory layout.
If you have a class BaseClass with a member field1, and you create a new class SubClass that adds a member field2, then an instance of SubClass contains space for field1 and field2. You need a constructor of BaseClass to fill in field1, unless you require all inheriting classes to repeat BaseClass's initialization in their own constructors. And if field1 is private, then inheriting classes can't initialise field1.
Python is not Java or C++. All instances of all user-defined classes have the same 'shape'. They're basically just dictionaries in which attributes can be inserted. Before any initialisation has been done, all instances of all user-defined classes are almost exactly the same; they're just places to store attributes that aren't storing any yet.
So it makes perfect sense for a Python subclass not to call its base class constructor. It could just add the attributes itself if it wanted to. There's no space reserved for a given number of fields for each class in the hierarchy, and there's no difference between an attribute added by code from a BaseClass method and an attribute added by code from a SubClass method.
If, as is common, SubClass actually does want to have all of BaseClass's invariants set up before it goes on to do its own customisation, then yes you can just call BaseClass.__init__() (or use super, but that's complicated and has its own problems sometimes). But you don't have to. And you can do it before, or after, or with different arguments. Hell, if you wanted you could call the BaseClass.__init__ from another method entirely than __init__; maybe you have some bizarre lazy initialization thing going.
Python achieves this flexibility by keeping things simple. You initialise objects by writing an __init__ method that sets attributes on self. That's it. It behaves exactly like a method, because it is exactly a method. There are no other strange and unintuitive rules about things having to be done first, or things that will automatically happen if you don't do other things. The only purpose it needs to serve is to be a hook to execute during object initialisation to set initial attribute values, and it does just that. If you want it to do something else, you explicitly write that in your code.
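A minimal illustration of that "same shape" point, using the BaseClass/SubClass names from above:

class BaseClass:
    def __init__(self):
        self.field1 = 1

class SubClass(BaseClass):
    def __init__(self):
        self.field2 = 2  # deliberately skips BaseClass.__init__

obj = SubClass()
print(obj.__dict__)  # {'field2': 2} - just a dict of attributes
obj.field1 = 1       # nothing stops us from adding field1 later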
To avoid confusion, it is useful to know that you can invoke the base class's __init__() method if the child class does not define an __init__() method of its own.
Example:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def me(self):
        pass

p = child(5, 4)
q = child(7)
z = child()
print p.a # prints 5
print q.b # prints 0
print z.a # prints 1
In fact, the MRO in Python will look for __init__() in the parent class when it cannot find it in the child class. You need to invoke the parent class's constructor directly if the child class already defines its own __init__() method.
For example the following code will return an error:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def __init__(self):
        pass
    def me(self):
        pass

p = child(5, 4) # Error: constructor takes 1 argument, 3 were provided.
q = child(7)    # Error: constructor takes 1 argument, 2 were provided.
z = child()
print z.a       # Error: no attribute named a can be found.
"Explicit is better than implicit." It's the same reasoning that indicates we should explicitly write 'self'.
I think in the end it is a benefit -- can you recite all of the rules Java has regarding calling superclasses' constructors?
Right now, we have a rather long page describing the method resolution order in case of multiple inheritance: http://www.python.org/download/releases/2.3/mro/
If constructors were called automatically, you'd need another page of at least the same length explaining the order of that happening. That would be hell...
Often the subclass has extra parameters which can't be passed to the superclass.
Maybe __init__ is the method that the subclass needs to override. Sometimes subclasses need the parent's function to run before they add class-specific code, and other times they need to set up instance variables before calling the parent's function. Since there's no way Python could possibly know when it would be most appropriate to call those functions, it shouldn't guess.
If those don't sway you, consider that __init__ is Just Another Function. If the function in question were dostuff instead, would you still want Python to automatically call the corresponding function in the parent class?
I believe the one very important consideration here is that with an automatic call to super.__init__(), you proscribe, by design, when that initialization method is called, and with what arguments. Eschewing the automatic call, and requiring the programmer to make it explicitly, entails a lot of flexibility.
After all, just because class B is derived from class A does not mean A.__init__() can or should be called with the same arguments as B.__init__(). Making the call explicit means a programmer can, for example, define B.__init__() with completely different parameters, do some computation with that data, call A.__init__() with arguments appropriate for that method, and then do some postprocessing. This kind of flexibility would be awkward to attain if A.__init__() were called from B.__init__() implicitly, either before B.__init__() executes or right after it.
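A small sketch of that pattern (the class names are purely illustrative):

class A:
    def __init__(self, size):
        self.size = size

class B(A):
    def __init__(self, width, height):
        area = width * height         # computation before delegating
        A.__init__(self, area)        # delegate with derived arguments
        self.aspect = width / height  # postprocessing afterwards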
As Sergey Orshanskiy pointed out in the comments, it is also convenient to write a decorator to inherit the __init__ method.
You can write a decorator to inherit the __init__ method, and even perhaps automatically search for subclasses and decorate them. – Sergey Orshanskiy Jun 9 '15 at 23:17
Part 1/3: The implementation
Note: actually this is only useful if you want to call both the base and the derived class's __init__, since __init__ by itself is inherited automatically. See the previous answers for this question.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Outputs:
Base: 42
Part 2/3: A warning
Warning: this doesn't work if base itself called super(type(self), self).
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        '''Warning: recursive calls.'''
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

class child2(child):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child2(42)
RecursionError: maximum recursion depth exceeded while calling a Python object.
Part 3/3: Why not just use plain super()?
But why not just use the safe, plain super()? Because it doesn't work here: the newly bound __init__ is defined outside the class, so super(type(self), self) is required.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Errors:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-6f580b3839cd> in <module>
13 pass
14
---> 15 child(42)
<ipython-input-9-6f580b3839cd> in wrapper(self, *args, **kwargs)
1 def default_init(func):
2 def wrapper(self, *args, **kwargs) -> None:
----> 3 super().__init__(*args, **kwargs)
4 return wrapper
5
RuntimeError: super(): __class__ cell not found
Background - We CAN AUTO init a parent AND child class!
A lot of answers here say "This is not the Python way, use super().__init__() from the subclass". The question is not asking for the Pythonic way; it's comparing the expected behavior of other languages to Python's obviously different one.
The MRO document is pretty and colorful, but it's really a TL;DR situation and still doesn't quite answer the question, as is often the case in these types of comparisons - "Do it the Python way, because."
Inherited methods can be overloaded by later declarations in subclasses. A pattern building on keyvanrm's answer (https://stackoverflow.com/a/46943772/1112676) solves the case where I want to AUTOMATICALLY init a parent class as part of calling a class without explicitly calling super().__init__() in every child class.
In my case, a new team member might be asked to use a boilerplate module template (for making extensions to our application without touching the core application source), which we want to make as bare and easy to adopt as possible, without them needing to know or understand the underlying machinery - they only need to know of and use what is provided by the application's base interface, which is well documented.
For those who will say "Explicit is better than implicit": I generally agree. However, when coming from many other popular languages, automatic inherited initialization is the expected behavior, and it is very useful if it can be leveraged for projects where some people work on a core application and others work on extending it.
This technique can even pass args/keyword args for init which means pretty much any object can be pushed to the parent and used by the parent class or its relatives.
Example:
class Parent:
    def __init__(self, *args, **kwargs):
        self.somevar = "test"
        self.anothervar = "anothertest"
        # important part, call the init surrogate, pass through args:
        self._init(*args, **kwargs)

    # important part, a placeholder init surrogate:
    def _init(self, *args, **kwargs):
        print("Parent class _init; ", self, args, kwargs)

    def some_base_method(self):
        print("some base method in Parent")
        self.a_new_dict = {}

class Child1(Parent):
    # when omitted, the parent class's __init__() is run
    # def __init__(self):
    #     pass

    # overloading the parent class's _init() surrogate
    def _init(self, *args, **kwargs):
        print(f"Child1 class _init() overload; ", self, args, kwargs)
        self.a_var_set_from_child = "This is a new var!"

class Child2(Parent):
    def __init__(self, onevar, twovar, akeyword):
        print(f"Child2 class __init__() overload; ", self)
        # call some_base_method from parent
        self.some_base_method()
        # the parent's base method set a_new_dict
        print(self.a_new_dict)

class Child3(Parent):
    pass

print("\nRunning Parent()")
Parent()
Parent("a string", "something else", akeyword="a kwarg")

print("\nRunning Child1(), keep Parent.__init__(), overload surrogate Parent._init()")
Child1()
Child1("a string", "something else", akeyword="a kwarg")

print("\nRunning Child2(), overload Parent.__init__()")
# Child2()  # __init__() requires arguments
Child2("a string", "something else", akeyword="a kwarg")

print("\nRunning Child3(), empty class, inherits everything")
Child3().some_base_method()
Output:
Running Parent()
Parent class _init; <__main__.Parent object at 0x7f84a721fdc0> () {}
Parent class _init; <__main__.Parent object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}
Running Child1(), keep Parent.__init__(), overload surrogate Parent._init()
Child1 class _init() overload; <__main__.Child1 object at 0x7f84a721fdc0> () {}
Child1 class _init() overload; <__main__.Child1 object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}
Running Child2(), overload Parent.__init__()
Child2 class __init__() overload; <__main__.Child2 object at 0x7f84a721fdc0>
some base method in Parent
{}
Running Child3(), empty class, inherits everything, access things set by other children
Parent class _init; <__main__.Child3 object at 0x7f84a721fdc0> () {}
some base method in Parent
As one can see, the overloaded definitions take the place of those declared in the Parent class, but can still be called BY the Parent class, thereby emulating the classical implicit inheritance initialization behavior: Parent and Child classes both initialize without needing to explicitly invoke the Parent's init from the Child class.
Personally, I call the surrogate _init() method main(), because it makes sense to me when switching between C++ and Python, for example, since it is a function that will be automatically run for any subclass of Parent (the last declared definition of main(), that is).
