I'm just trying to streamline one of my classes and have introduced some functionality in the same style as the flyweight design pattern.
However, I'm a bit confused as to why __init__ is always called after __new__. I wasn't expecting this. Can anyone tell me why this is happening and how I can implement this functionality otherwise? (Apart from putting the implementation into the __new__ which feels quite hacky.)
Here's an example:
class A(object):
    _dict = dict()

    def __new__(cls):
        if 'key' in A._dict:
            print "EXISTS"
            return A._dict['key']
        else:
            print "NEW"
            return super(A, cls).__new__(cls)

    def __init__(self):
        print "INIT"
        A._dict['key'] = self
        print ""

a1 = A()
a2 = A()
a3 = A()
Outputs:
NEW
INIT
EXISTS
INIT
EXISTS
INIT
Why?
Use __new__ when you need to control the creation of a new instance. Use __init__ when you need to control initialization of a new instance.
__new__ is the first step of instance creation. It's called first, and is responsible for returning a new instance of your class. In contrast, __init__ doesn't return anything; it's only responsible for initializing the instance after it's been created.
In general, you shouldn't need to override __new__ unless you're subclassing an immutable type like str, int, unicode or tuple.
From April 2008 post: When to use __new__ vs. __init__? on mail.python.org.
You should consider that what you are trying to do is usually done with a factory, and that's the best way to do it. Using __new__ is not a clean solution, so please consider using a factory instead. Here's a good example: the ActiveState Factory pattern recipe.
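For illustration, here is a minimal sketch of that idea (the Flyweight and get_flyweight names are placeholders, not taken from the linked recipe): the cache lives in a factory function, so the class keeps a plain __init__ and needs no __new__ tricks.
_cache = {}

class Flyweight(object):
    def __init__(self, key):
        self.key = key

def get_flyweight(key):
    # return the shared instance for this key, creating it on first use
    if key not in _cache:
        _cache[key] = Flyweight(key)
    return _cache[key]

f1 = get_flyweight('key')
f2 = get_flyweight('key')
assert f1 is f2  # same instance; __init__ ran exactly once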
__new__ is a static method (it receives the class as its first argument), while __init__ is an instance method.
__new__ has to create the instance first, so __init__ can initialize it. Note that __init__ takes self as a parameter. Until you create the instance, there is no self.
Now, I gather that you're trying to implement the singleton pattern in Python. There are a few ways to do that.
As of Python 2.6, you can also use class decorators:
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
In most well-known OO languages, an expression like SomeClass(arg1, arg2) will allocate a new instance, initialise the instance's attributes, and then return it.
In most well-known OO languages, the "initialise the instance's attributes" part can be customised for each class by defining a constructor, which is basically just a block of code that operates on the new instance (using the arguments provided to the constructor expression) to set up whatever initial conditions are desired. In Python, this corresponds to the class' __init__ method.
Python's __new__ is nothing more and nothing less than similar per-class customisation of the "allocate a new instance" part. This of course allows you to do unusual things such as returning an existing instance rather than allocating a new one. So in Python, we shouldn't really think of this part as necessarily involving allocation; all that we require is that __new__ comes up with a suitable instance from somewhere.
But it's still only half of the job, and there's no way for the Python system to know that sometimes you want to run the other half of the job (__init__) afterwards and sometimes you don't. If you want that behavior, you have to say so explicitly.
Often, you can refactor so you only need __new__, or so you don't need __new__, or so that __init__ behaves differently on an already-initialised object. But if you really want to, Python does actually allow you to redefine "the job", so that SomeClass(arg1, arg2) doesn't necessarily call __new__ followed by __init__. To do this, you need to create a metaclass, and define its __call__ method.
A metaclass is just the class of a class. And a class' __call__ method controls what happens when you call instances of the class. So a metaclass' __call__ method controls what happens when you call a class; i.e. it allows you to redefine the instance-creation mechanism from start to finish. This is the level at which you can most elegantly implement a completely non-standard instance creation process such as the singleton pattern. In fact, with less than 10 lines of code you can implement a Singleton metaclass that then doesn't even require you to futz with __new__ at all, and can turn any otherwise-normal class into a singleton by simply adding __metaclass__ = Singleton!
class Singleton(type):
    def __init__(self, *args, **kwargs):
        super(Singleton, self).__init__(*args, **kwargs)
        self.__instance = None

    def __call__(self, *args, **kwargs):
        if self.__instance is None:
            self.__instance = super(Singleton, self).__call__(*args, **kwargs)
        return self.__instance
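For completeness, here is a sketch of what that opt-in looks like (Logger is just an illustrative name; the answer's code is Python 2 style, and the Python 3 spelling is shown in the comment):
# Python 2: opt in with the __metaclass__ attribute
class Logger(object):
    __metaclass__ = Singleton

# Python 3 equivalent: pass the metaclass as a keyword instead
# class Logger(metaclass=Singleton):
#     pass

log1 = Logger()
log2 = Logger()
assert log1 is log2  # the metaclass __call__ returned the same instance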
However this is probably deeper magic than is really warranted for this situation!
To quote the documentation:
Typical implementations create a new instance of the class by invoking the superclass's __new__() method using "super(currentclass, cls).__new__(cls[, ...])" with appropriate arguments and then modifying the newly-created instance as necessary before returning it.
...
If __new__() does not return an instance of cls, then the new instance's __init__() method will not be invoked.
__new__() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation.
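To illustrate that last point, here is a small sketch (Point is an illustrative name) of customising creation of an immutable type: a tuple's contents are already fixed by the time __init__ runs, so the adjustment has to happen in __new__.
class Point(tuple):
    def __new__(cls, x, y):
        # the tuple's contents are fixed here, at creation time
        return super(Point, cls).__new__(cls, (x, y))

    @property
    def x(self):
        return self[0]

    @property
    def y(self):
        return self[1]

p = Point(2, 3)
print(p)  # (2, 3)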
I realize that this question is quite old but I had a similar issue.
The following did what I wanted:
class Agent(object):
    _agents = dict()

    def __new__(cls, *p):
        number = p[0]
        if number not in cls._agents:
            cls._agents[number] = object.__new__(cls)
        return cls._agents[number]

    def __init__(self, number):
        self.number = number

    def __eq__(self, rhs):
        return self.number == rhs.number

Agent("a") is Agent("a")  # True
I used this page as a resource http://infohost.nmt.edu/tcc/help/pubs/python/web/new-new-method.html
When __new__ returns an instance of the same class, __init__ is run afterwards on the returned object. That is, you can NOT use __new__ to prevent __init__ from being run. Even if you return a previously created object from __new__, it will be double- (triple-, etc.) initialized by __init__ again and again.
Here is a generic approach to the Singleton pattern which extends vartec's answer above and fixes it:
def SingletonClass(cls):
    class Single(cls):
        __doc__ = cls.__doc__
        _initialized = False
        _instance = None

        def __new__(cls, *args, **kwargs):
            if not cls._instance:
                cls._instance = super(Single, cls).__new__(cls, *args, **kwargs)
            return cls._instance

        def __init__(self, *args, **kwargs):
            if self._initialized:
                return
            super(Single, self).__init__(*args, **kwargs)
            self.__class__._initialized = True  # It's crucial to set this variable on the class!
    return Single
Full story is here.
Another approach, which in fact involves __new__, is to use classmethods:
class Singleton(object):
    __initialized = False

    def __new__(cls, *args, **kwargs):
        if not cls.__initialized:
            cls.__init__(*args, **kwargs)
            cls.__initialized = True
        return cls

class MyClass(Singleton):
    @classmethod
    def __init__(cls, x, y):
        print "init is here"

    @classmethod
    def do(cls):
        print "doing stuff"
Please pay attention that with this approach you need to decorate ALL of your methods with @classmethod, because you'll never use any real instance of MyClass.
I think the simple answer to this question is that if __new__ returns a value that is an instance of the class, then __init__ executes; otherwise it won't. In this case your code returns A._dict['key'], which is an instance of cls, so __init__ will be executed.
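A tiny sketch of that rule (A and B are illustrative names): when __new__ returns something that is not an instance of cls, __init__ is silently skipped.
class B(object):
    pass

class A(object):
    def __new__(cls):
        return B()          # not an instance of A (or a subclass) ...

    def __init__(self):
        print('INIT')       # ... so this never runs

obj = A()                   # prints nothing
print(type(obj))            # shows __main__.B, not A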
class M(type):
    _dict = {}

    def __call__(cls, key):
        if key in cls._dict:
            print 'EXISTS'
            return cls._dict[key]
        else:
            print 'NEW'
            instance = super(M, cls).__call__(key)
            cls._dict[key] = instance
            return instance

class A(object):
    __metaclass__ = M

    def __init__(self, key):
        print 'INIT'
        self.key = key
        print

a1 = A('aaa')
a2 = A('bbb')
a3 = A('aaa')
outputs:
NEW
INIT
NEW
INIT
EXISTS
NB: As a side effect, the M._dict attribute automatically becomes accessible from A as A._dict, so take care not to overwrite it accidentally.
An update to @AntonyHatchkins' answer: you probably want a separate dictionary of instances for each class of the metatype, meaning that you should have an __init__ method in the metaclass to initialize your class object with that dictionary, instead of making it global across all the classes.
class MetaQuasiSingleton(type):
    def __init__(cls, name, bases, attributes):
        cls._dict = {}

    def __call__(cls, key):
        if key in cls._dict:
            print('EXISTS')
            instance = cls._dict[key]
        else:
            print('NEW')
            instance = super().__call__(key)
            cls._dict[key] = instance
        return instance

class A(metaclass=MetaQuasiSingleton):
    def __init__(self, key):
        print('INIT')
        self.key = key
        print()
I have gone ahead and updated the original code with an __init__ method and changed the syntax to Python 3 notation (no-arg call to super and metaclass in the class arguments instead of as an attribute).
Either way, the important point here is that the metaclass's __call__ method will not execute either __new__ or __init__ if the key is found. This is much cleaner than using __new__, which requires you to mark the object if you want to skip the default __init__ step.
__new__ should return a new, blank instance of a class. __init__ is then called to initialise that instance. You're not calling __init__ in the "NEW" case of __new__, so it's being called for you. The code that is calling __new__ doesn't keep track of whether __init__ has been called on a particular instance or not, nor should it, because you're doing something very unusual here.
You could add an attribute to the object in the __init__ function to indicate that it's been initialised. Check for the existence of that attribute as the first thing in __init__ and don't proceed any further if it has been.
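A minimal sketch of that guard, applied to the example from the question (the _initialized flag name is just illustrative):
class A(object):
    _dict = dict()

    def __new__(cls):
        if 'key' in A._dict:
            return A._dict['key']
        return super(A, cls).__new__(cls)

    def __init__(self):
        if getattr(self, '_initialized', False):
            return                  # this instance was already set up
        self._initialized = True
        A._dict['key'] = self

a1 = A()
a2 = A()
assert a1 is a2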
Digging a little deeper into that!
The type of a generic class in CPython is type, and its base class is object (unless you explicitly define another base class or a metaclass). The sequence of low-level calls can be found here. The first method called is type_call, which then calls tp_new and then tp_init.
The interesting part here is that tp_new will call the base class object's new method, object_new, which does a tp_alloc (PyType_GenericAlloc) that allocates the memory for the object :)
At that point the object is created in memory, and then the __init__ method gets called. If __init__ is not implemented in your class, then object_init gets called, and it does nothing :)
Then type_call just returns the object which binds to your variable.
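Very roughly, and under the big assumption that many details of the C code are ignored, the logic of type_call corresponds to a Python sketch like this:
# A heavily simplified Python-level sketch of what type_call does
def type_call(cls, *args, **kwargs):
    obj = cls.__new__(cls, *args, **kwargs)       # tp_new
    if isinstance(obj, cls):                      # __init__ only runs for cls instances
        type(obj).__init__(obj, *args, **kwargs)  # tp_init
    return obj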
One should look at __init__ as a simple constructor in traditional OO languages. For example, if you are familiar with Java or C++, the constructor is passed a pointer to its own instance implicitly. In the case of Java, it is the this variable. If one were to inspect the bytecode generated for Java, one would notice two calls. The first call is to a "new" method, and the next call is to the init method (which is the actual call to the user-defined constructor). This two-step process enables creation of the actual instance before calling the constructor method of the class, which is just another method of that instance.
Now, in the case of Python, __new__ is an added facility that is accessible to the user. Java does not provide that flexibility, due to its typed nature. If a language provides that facility, then the implementor of __new__ can do many things in that method before returning the instance, including creating a totally new instance of an unrelated object in some cases. And this approach also works out especially well for immutable types in the case of Python.
However, I'm a bit confused as to why __init__ is always called after __new__.
I think the C++ analogy would be useful here:
__new__ simply allocates memory for the object. The instance variables of an object need memory to hold them, and this is what __new__ does.
__init__ initializes the internal variables of the object to specific values (which could be defaults).
Referring to this doc:
When subclassing immutable built-in types like numbers and strings, and occasionally in other situations, the static method __new__ comes in handy. __new__ is the first step in instance construction, invoked before __init__.
The __new__ method is called with the class as its first argument; its responsibility is to return a new instance of that class.
Compare this to __init__: __init__ is called with an instance as its first argument, and it doesn't return anything; its responsibility is to initialize the instance.
There are situations where a new instance is created without calling __init__ (for example when the instance is loaded from a pickle). There is no way to create a new instance without calling __new__ (although in some cases you can get away with calling a base class's __new__).
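A small sketch of that point (C is just an illustrative name): an instance can exist without __init__ ever running, which is essentially what unpickling does.
class C(object):
    def __init__(self):
        self.ready = True

c = C.__new__(C)             # instance exists, __init__ was skipped
print(hasattr(c, 'ready'))   # False
c.__init__()                 # initialize explicitly, if and when you want
print(c.ready)               # True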
Regarding what you wish to achieve, the same doc also has info about the Singleton pattern:
class Singleton(object):
    def __new__(cls, *args, **kwds):
        it = cls.__dict__.get("__it__")
        if it is not None:
            return it
        cls.__it__ = it = object.__new__(cls)
        it.init(*args, **kwds)
        return it

    def init(self, *args, **kwds):
        pass
You may also use this implementation from PEP 318, using a decorator:
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
Now I've got the same problem, and for various reasons I decided to avoid decorators, factories and metaclasses. I did it like this:
Main file
def _alt(func):
    import functools

    @functools.wraps(func)
    def init(self, *p, **k):
        if hasattr(self, "parent_initialized"):
            return
        else:
            self.parent_initialized = True
            func(self, *p, **k)
    return init

class Parent:
    # Empty dictionary, shouldn't ever be filled with anything else
    parent_cache = {}

    def __new__(cls, n, *args, **kwargs):
        # Checks if object with this ID (n) has been created
        if n in cls.parent_cache:
            # It was, return it
            return cls.parent_cache[n]
        else:
            # Check if it was modified by this function
            if not hasattr(cls, "parent_modified"):
                # Add the attribute
                cls.parent_modified = True
                cls.parent_cache = {}
                # Apply it
                cls.__init__ = _alt(cls.__init__)
            # Get the instance
            obj = super().__new__(cls)
            # Push it to cache
            cls.parent_cache[n] = obj
            # Return it
            return obj
Example classes
class A(Parent):
    def __init__(self, n):
        print("A.__init__", n)

class B(Parent):
    def __init__(self, n):
        print("B.__init__", n)
In use
>>> A(1)
A.__init__ 1 # First A(1) initialized
<__main__.A object at 0x000001A73A4A2E48>
>>> A(1) # Returned previous A(1)
<__main__.A object at 0x000001A73A4A2E48>
>>> A(2)
A.__init__ 2 # First A(2) initialized
<__main__.A object at 0x000001A7395D9C88>
>>> B(2)
B.__init__ 2 # B class doesn't collide with A, thanks to separate cache
<__main__.B object at 0x000001A73951B080>
Warning: You shouldn't instantiate Parent directly; it will collide with the other classes - unless you define a separate cache in each of the children, which is not what we want.
Warning: It seems that a class with Parent as a grandparent behaves weirdly. [Unverified]
Try it online!
The __init__ is called after __new__ so that when you override it in a subclass, your added code will still get called.
If you are trying to subclass a class that already has a __new__, someone unaware of this might start by adapting the __init__ and forwarding the call down to the subclass __init__. This convention of calling __init__ after __new__ helps that work as expected.
The __init__ still needs to allow for any parameters the superclass __new__ needed, but failing to do so will usually create a clear runtime error. And the __new__ should probably explicitly allow for *args and **kw, to make it clear that extension is OK.
It is generally bad form to have both __new__ and __init__ in the same class at the same level of inheritance, because of the behavior the original poster described.
However, I'm a bit confused as to why __init__ is always called after __new__.
Not much of a reason other than that it just is done that way. __new__ doesn't have the responsibility of initializing the instance; some other method does (__call__, possibly; I don't know for sure).
I wasn't expecting this. Can anyone tell me why this is happening and how I can implement this functionality otherwise? (Apart from putting the implementation into the __new__, which feels quite hacky.)
You could have __init__ do nothing if it's already been initialized, or you could write a new metaclass with a new __call__ that only calls __init__ on new instances, and otherwise just returns __new__(...).
The simple reason is that __new__ is used for creating an instance, while __init__ is used for initializing the instance. Before initializing, the instance has to be created first. That's why __new__ is called before __init__.
When instantiating a class, first, __new__() is called to create the instance of a class, then __init__() is called to initialize the instance.
__new__():
Called to create a new instance of class cls. ...
If __new__() is invoked during object construction and it returns an instance of cls, then the new instance's __init__() method will be invoked like __init__(self[, ...]), ...
__init__():
Called after the instance has been created (by __new__()), ...
Because __new__() and __init__() work together in constructing objects (__new__() to create it, and __init__() to customize it), ...
For example, when instantiating Teacher class, first, __new__() is called to create the instance of Teacher class, then __init__() is called to initialize the instance as shown below:
class Teacher:
    def __init__(self, name):
        self.name = name

class Student:
    def __init__(self, name):
        self.name = name

obj = Teacher("John")  # Instantiation
print(type(obj))
print(obj.name)
This is the output:
<class '__main__.Teacher'>
John
And, using __new__() on the instance of the Teacher class, we can create an instance of the Student class as shown below:
# ...
obj = Teacher("John")
print(type(obj))
print(obj.name)
obj = obj.__new__(Student) # Creates the instance of "Student" class
print(type(obj))
Now, the instance of Student class is created as shown below:
<class '__main__.Teacher'>
<__main__.Teacher object at 0x7f4e3950bf10>
<class '__main__.Student'> # Here
Next, if we try to get the value of the name variable from the instance of Student class as shown below:
obj = Teacher("John")
print(type(obj))
print(obj.name)
obj = obj.__new__(Student)
print(type(obj))
print(obj.name) # Tries to get the value of "name" variable
The error below occurs because the instance of Student class has not been initialized by __init__() yet:
AttributeError: 'Student' object has no attribute 'name'
So, we initialize the instance of Student class as shown below:
obj = Teacher("John")
print(type(obj))
print(obj.name)
obj = obj.__new__(Student)
print(type(obj))
obj.__init__("Tom") # Initializes the instance of "Student" class
print(obj.name)
Then, we can get the value of name variable from the instance of Student class as shown below:
<class '__main__.Teacher'>
John
<class '__main__.Student'>
Tom # Here
People have already detailed the question, and the answers use examples like the singleton pattern. See the code below:
class Singleton(object):  # illustrative enclosing class
    __instance = None

    def __new__(cls):
        if cls.__instance is None:
            cls.__instance = object.__new__(cls)
        return cls.__instance
I got the above code from this link; it has a detailed overview of __new__ vs. __init__. Worth reading!
This question may look silly (since I am new to Python), but can you tell me what the difference is between self and the class name when binding?
class OnlyOne(object):
    class __OnlyOne:
        def __init__(self):
            self.val = None

        def __str__(self):
            return repr(self) + self.val

    instance = None

    def __new__(cls):  # __new__ always a classmethod
        if not OnlyOne.instance:
            OnlyOne.instance = OnlyOne.__OnlyOne()
        return OnlyOne.instance

    def __getattr__(self, name):
        return getattr(self.instance, name)

    def __setattr__(self, name, value):
        return setattr(self.instance, name, value)
Here, I usually use instance as self... What is the difference between using self and OnlyOne? My intuition tells me that it is a global variable... if it is a global variable, it does not make sense at all (I will edit this if it's a global variable). Thanks!
Ok, I think I've got a handle on your code ... The way it works is that when the constructor is called:
a = OnlyOne() #call constructor. This implicitly calls __new__
At this point, __new__ checks the class to see if an instance has been created (instance isn't None). If it hasn't been created, it creates an instance and puts it in the instance class attribute. Then the instance class attribute is returned which is then passed into your methods as self.
I think that if you actually need a singleton, then there's something fishy (lazy) about your program design. Singletons allow information to propagate throughout your program in strange ways (imagine functions foo and bar, both of which create an instance of OnlyOne; changes you make in foo show up when you call bar) -- it's somewhat akin to monkey patching.
If, after rethinking your design for a few months, you decide that you really do need a singleton, you can create some sort of factory class which is a lot more transparent...
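For instance, a factory along these lines (a sketch; Browser and BrowserFactory are illustrative names) keeps the sharing explicit instead of hiding it behind the class:
class Browser(object):
    """Stands in for whatever expensive resource was being made a singleton."""
    pass

class BrowserFactory(object):
    def __init__(self):
        self._shared = None

    def get(self):
        # sharing is explicit and local to whoever holds this factory,
        # rather than hidden global state
        if self._shared is None:
            self._shared = Browser()
        return self._shared

factory = BrowserFactory()
assert factory.get() is factory.get()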
I'm writing a test suite for Firefox 5.1 and Selenium WebDriver v2 on OS X 10.6 with Python 2.7.
Everything is working fine except for the creation of a singleton class, which should guarantee only one instance of Firefox:
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class Fire(object):
    def __init__(self):
        self.driver = webdriver.Firefox()

    def getdriver(self):
        return self.driver

    def close_(self):
        self.driver.close()

    def get(self, url):
        self.driver.get(url)
        return self.driver.page_source
f = Fire()
f.close_()
At this point, if I call f = Fire() again, nothing happens; no new instance will be created.
My question is: why do I see that behavior?
How do I do that right?
My second question, if I type:
isinstance(f, Fire)
I get this error:
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
This is strange to me ... from my understanding it should return True
A final question:
when I have a singleton class I should be able to do:
f = Fire()
f2 = Fire()
f2.get('http://www.google.com')
up to here it works, but if I then say f.close_(), I get:
URLError: urlopen error [Errno 61] Connection refused
I can't understand this.
Your decorator seems to work OK for me as far as creating a single instance of a class, so I don't see your issue #1. It isn't doing quite what you think it is: each time you use the decorator there's a fresh instances dictionary, and there's only ever one item in it, so there's not actually any reason to use a dictionary there -- you need a mutable container so you can modify it, but I'd use a list, or, in Python 3, perhaps a nonlocal variable. However, it does perform its intended function of making sure there's only one instance of the decorated class.
If you're asking why you can't create a new instance of the object after closing it, well, you didn't write any code to allow another instance to be created in that situation, and Python is incapable of guessing that you want that to happen. A singleton means there is only ever a single instance of the class. You have created that instance; you can't create another.
As for #2, your @singleton decorator returns a function, which instantiates (or returns a previously-created instance of) the class. Therefore Fire is a function, not a class, once decorated, which is why your isinstance() doesn't work.
The most straightforward approach to singletons, in my opinion, is to put the smarts in a class rather than in a decorator, then inherit from that class. This even makes sense from an inheritance point of view, since a singleton is a kind of object.
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = object.__new__(cls, *args, **kwargs)
        return cls._instance

class Fire(Singleton):
    pass

f1 = Fire()
f2 = Fire()
f1 is f2              # True
isinstance(f1, Fire)  # True
If you still want to do it with a decorator, the simplest approach there would be to create an intermediate class in the decorator and return that:
def singleton(D):
    class C(D):
        _instance = None

        def __new__(cls, *args, **kwargs):
            if not cls._instance:
                cls._instance = D.__new__(cls, *args, **kwargs)
            return cls._instance
    C.__name__ = D.__name__
    return C

@singleton
class Fire(object):
    pass
You could inject the desired behavior into the existing class object, but this is, in my opinion, needlessly complex, as it requires (in Python 2.x) creating a method wrapper, and you also have to deal with the situation in which the class being decorated already has a __new__() method yourself.
You might think that you could write a __del__() method to allow a new singleton to be created when there are no references to the existing instance. This won't work because there is always a class-internal reference to the instance (e.g., Fire._instance) so __del__() is never called. Once you have a singleton, it's there to stay. If you want a new singleton after you close the old one, you probably don't actually want a singleton but rather something else. A context manager might be a possibility.
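A sketch of that context-manager idea (Driver and firefox are illustrative names; the real version would wrap webdriver.Firefox() and the close_() method from the question):
from contextlib import contextmanager

class Driver(object):
    """Illustrative stand-in for the real webdriver.Firefox() wrapper."""
    def get(self, url):
        return 'page source for %s' % url
    def close(self):
        print('closed')

@contextmanager
def firefox():
    d = Driver()      # in the real code this would create the WebDriver
    try:
        yield d
    finally:
        d.close()     # always cleaned up; the next `with` block gets a fresh one

with firefox() as d:
    print(d.get('http://www.google.com'))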
A "singleton" that can be re-instantiated under certain circumstances would be, to me, really weird and unexpected behavior, and I would advise against it. Still, if that's what you really want, you could do self.__class__._instance = None in your close_() method. Or you could write a separate method to do this. It looks ugly, which is fitting because it is ugly. :-)
I think your third question also arises from the fact that you expect the singleton to somehow go away after you call close_() on it, when you have not programmed that behavior.
The issue is your use of that singleton class as a decorator. It isn't a decorator at all, so using it like one doesn't make sense.
A decorator needs to actually return the decorated object - usually a function, but in your case, the class. You're just returning a function. So obviously, when you try and use it in isinstance, Fire no longer refers to a class.
Why did the Python designers decide that subclasses' __init__() methods don't automatically call the __init__() methods of their superclasses, as in some other languages? Is the Pythonic and recommended idiom really like the following?
class Superclass(object):
    def __init__(self):
        print 'Do something'

class Subclass(Superclass):
    def __init__(self):
        super(Subclass, self).__init__()
        print 'Do something else'
The crucial distinction between Python's __init__ and those other languages' constructors is that __init__ is not a constructor: it's an initializer (the actual constructor (if any, but, see later;-) is __new__ and works completely differently again). While constructing all superclasses (and, no doubt, doing so "before" you continue constructing downwards) is obviously part of saying you're constructing a subclass's instance, that is clearly not the case for initializing, since there are many use cases in which superclasses' initialization needs to be skipped, altered, controlled -- happening, if at all, "in the middle" of the subclass initialization, and so forth.
Basically, super-class delegation of the initializer is not automatic in Python for exactly the same reasons such delegation is also not automatic for any other methods -- and note that those "other languages" don't do automatic super-class delegation for any other method either... just for the constructor (and if applicable, destructor), which, as I mentioned, is not what Python's __init__ is. (Behavior of __new__ is also quite peculiar, though really not directly related to your question, since __new__ is such a peculiar constructor that it doesn't actually necessarily need to construct anything -- could perfectly well return an existing instance, or even a non-instance... clearly Python offers you a lot more control of the mechanics than the "other languages" you have in mind, which also includes having no automatic delegation in __new__ itself!-).
I'm somewhat embarrassed when people parrot the "Zen of Python", as if it's a justification for anything. It's a design philosophy; particular design decisions can always be explained in more specific terms--and they must be, or else the "Zen of Python" becomes an excuse for doing anything.
The reason is simple: you don't necessarily construct a derived class in a way similar at all to how you construct the base class. You may have more parameters, fewer, they may be in a different order or not related at all.
class myFile(object):
    def __init__(self, filename, mode):
        self.f = open(filename, mode)

class readFile(myFile):
    def __init__(self, filename):
        super(readFile, self).__init__(filename, "r")

class tempFile(myFile):
    def __init__(self, mode):
        super(tempFile, self).__init__("/tmp/file", mode)

class wordsFile(myFile):
    def __init__(self, language):
        super(wordsFile, self).__init__("/usr/share/dict/%s" % language, "r")
This applies to all derived methods, not just __init__.
Java and C++ require that a base class constructor is called because of memory layout.
If you have a class BaseClass with a member field1, and you create a new class SubClass that adds a member field2, then an instance of SubClass contains space for field1 and field2. You need a constructor of BaseClass to fill in field1, unless you require all inheriting classes to repeat BaseClass's initialization in their own constructors. And if field1 is private, then inheriting classes can't initialise field1.
Python is not Java or C++. All instances of all user-defined classes have the same 'shape'. They're basically just dictionaries in which attributes can be inserted. Before any initialisation has been done, all instances of all user-defined classes are almost exactly the same; they're just places to store attributes that aren't storing any yet.
So it makes perfect sense for a Python subclass not to call its base class constructor. It could just add the attributes itself if it wanted to. There's no space reserved for a given number of fields for each class in the hierarchy, and there's no difference between an attribute added by code from a BaseClass method and an attribute added by code from a SubClass method.
If, as is common, SubClass actually does want to have all of BaseClass's invariants set up before it goes on to do its own customisation, then yes you can just call BaseClass.__init__() (or use super, but that's complicated and has its own problems sometimes). But you don't have to. And you can do it before, or after, or with different arguments. Hell, if you wanted you could call BaseClass.__init__ from a method other than __init__ entirely; maybe you have some bizarre lazy initialization thing going.
Python achieves this flexibility by keeping things simple. You initialise objects by writing an __init__ method that sets attributes on self. That's it. It behaves exactly like a method, because it is exactly a method. There are no other strange and unintuitive rules about things having to be done first, or things that will automatically happen if you don't do other things. The only purpose it needs to serve is to be a hook to execute during object initialisation to set initial attribute values, and it does just that. If you want it to do something else, you explicitly write that in your code.
To avoid confusion, it is useful to know that you can invoke the base class's __init__() method if the child class does not have an __init__() method.
Example:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def me(self):
        pass

p = child(5, 4)
q = child(7)
z = child()

print p.a  # prints 5
print q.b  # prints 0
print z.a  # prints 1
In fact, the MRO in Python will look for __init__() in the parent class when it cannot find it in the child class. You need to invoke the parent class constructor directly if you already have an __init__() method in the child class.
For example, the following code will raise an error:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def __init__(self):
        pass

    def me(self):
        pass

p = child(5, 4)  # Error: constructor takes one argument, 3 were provided.
q = child(7)     # Error: constructor takes one argument, 2 were provided.
z = child()

print z.a  # Error: no attribute named a can be found.
"Explicit is better than implicit." It's the same reasoning that indicates we should explicitly write 'self'.
I think in the end it is a benefit -- can you recite all of the rules Java has regarding calling superclasses' constructors?
Right now, we have a rather long page describing the method resolution order in case of multiple inheritance: http://www.python.org/download/releases/2.3/mro/
If constructors were called automatically, you'd need another page of at least the same length explaining the order of that happening. That would be hell...
Often the subclass has extra parameters which can't be passed to the superclass.
Maybe __init__ is the method that the subclass needs to override. Sometimes subclasses need the parent's function to run before they add class-specific code, and other times they need to set up instance variables before calling the parent's function. Since there's no way Python could possibly know when it would be most appropriate to call those functions, it shouldn't guess.
If those don't sway you, consider that __init__ is Just Another Function. If the function in question were dostuff instead, would you still want Python to automatically call the corresponding function in the parent class?
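A small sketch of those two orderings from the point above (names are illustrative); only the author of each subclass can decide which order is right:
class Base(object):
    def __init__(self):
        self.items = []

class AfterParent(Base):
    def __init__(self):
        super(AfterParent, self).__init__()   # parent first ...
        self.items.append('child')            # ... then child-specific setup

class BeforeParent(Base):
    def __init__(self):
        self.prepared = True                  # set up what the child needs first ...
        super(BeforeParent, self).__init__()  # ... then let the parent run

print(AfterParent().items)      # ['child']
print(BeforeParent().prepared)  # True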
I believe the one very important consideration here is that with an automatic call to super.__init__(), you prescribe, by design, when that initialization method is called, and with what arguments. Eschewing automatically calling it, and requiring the programmer to explicitly make that call, entails a lot of flexibility.
After all, just because class B is derived from class A does not mean A.__init__() can or should be called with the same arguments as B.__init__(). Making the call explicit means a programmer can, for example, define B.__init__() with completely different parameters, do some computation with that data, call A.__init__() with arguments as appropriate for that method, and then do some postprocessing. This kind of flexibility would be awkward to attain if A.__init__() were called from B.__init__() implicitly, either before B.__init__() executes or right after it.
As Sergey Orshanskiy pointed out in the comments, it is also convenient to write a decorator to inherit the __init__ method.
You can write a decorator to inherit the __init__ method, and even perhaps automatically search for subclasses and decorate them. – Sergey Orshanskiy Jun 9 '15 at 23:17
Part 1/3: The implementation
Note: actually this is only useful if you want to call both the base and the derived class's __init__ since __init__ is inherited automatically. See the previous answers for this question.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Outputs:
Base: 42
Part 2/3: A warning
Warning: this doesn't work if base itself called super(type(self), self).
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        '''Warning: recursive calls.'''
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

class child2(child):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child2(42)
RecursionError: maximum recursion depth exceeded while calling a Python object.
Part 3/3: Why not just use plain super()?
But why not just use the safe, plain super()? Because it doesn't work: the newly rebound __init__ is defined outside the class, so super(type(self), self) is required.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Errors:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-6f580b3839cd> in <module>
13 pass
14
---> 15 child(42)
<ipython-input-9-6f580b3839cd> in wrapper(self, *args, **kwargs)
1 def default_init(func):
2 def wrapper(self, *args, **kwargs) -> None:
----> 3 super().__init__(*args, **kwargs)
4 return wrapper
5
RuntimeError: super(): __class__ cell not found
Background - We CAN AUTO init a parent AND child class!
A lot of answers here say "This is not the Python way, use super().__init__() from the subclass". The question is not asking for the Pythonic way; it's comparing the expected behavior from other languages to Python's obviously different one.
The MRO document is pretty and colorful, but it's really a TL;DR situation and still doesn't quite answer the question, as is often the case in these types of comparisons - "Do it the Python way, because.".
Inherited objects can be overloaded by later declarations in subclasses. A pattern building on @keyvanrm's answer (https://stackoverflow.com/a/46943772/1112676) solves the case where I want to AUTOMATICALLY init a parent class as part of calling a class, without explicitly calling super().__init__() in every child class.
In my case, a new team member might be asked to use a boilerplate module template (for making extensions to our application without touching the core application source) which we want to make as bare and as easy to adopt as possible, without them needing to know or understand the underlying machinery - they only need to know of and use what is provided by the application's base interface, which is well documented.
For those who will say "Explicit is better than implicit." I generally agree; however, when coming from many other popular languages, inherited automatic initialization is the expected behavior, and it is very useful if it can be leveraged for projects where some work on a core application and others work on extending it.
This technique can even pass args/keyword args for init, which means pretty much any object can be pushed to the parent and used by the parent class or its relatives.
Example:
class Parent:
    def __init__(self, *args, **kwargs):
        self.somevar = "test"
        self.anothervar = "anothertest"
        # important part, call the init surrogate pass through args:
        self._init(*args, **kwargs)

    # important part, a placeholder init surrogate:
    def _init(self, *args, **kwargs):
        print("Parent class _init; ", self, args, kwargs)

    def some_base_method(self):
        print("some base method in Parent")
        self.a_new_dict = {}

class Child1(Parent):
    # when omitted, the parent class's __init__() is run
    # def __init__(self):
    #     pass

    # overloading the parent class's _init() surrogate
    def _init(self, *args, **kwargs):
        print(f"Child1 class _init() overload; ", self, args, kwargs)
        self.a_var_set_from_child = "This is a new var!"

class Child2(Parent):
    def __init__(self, onevar, twovar, akeyword):
        print(f"Child2 class __init__() overload; ", self)
        # call some_base_method from parent
        self.some_base_method()
        # the parent's base method set a_new_dict
        print(self.a_new_dict)

class Child3(Parent):
    pass

print("\nRunning Parent()")
Parent()
Parent("a string", "something else", akeyword="a kwarg")

print("\nRunning Child1(), keep Parent.__init__(), overload surrogate Parent._init()")
Child1()
Child1("a string", "something else", akeyword="a kwarg")

print("\nRunning Child2(), overload Parent.__init__()")
# Child2()  # __init__() requires arguments
Child2("a string", "something else", akeyword="a kwarg")

print("\nRunning Child3(), empty class, inherits everything")
Child3().some_base_method()
Output:
Running Parent()
Parent class _init; <__main__.Parent object at 0x7f84a721fdc0> () {}
Parent class _init; <__main__.Parent object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}
Running Child1(), keep Parent.__init__(), overload surrogate Parent._init()
Child1 class _init() overload; <__main__.Child1 object at 0x7f84a721fdc0> () {}
Child1 class _init() overload; <__main__.Child1 object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}
Running Child2(), overload Parent.__init__()
Child2 class __init__() overload; <__main__.Child2 object at 0x7f84a721fdc0>
some base method in Parent
{}
Running Child3(), empty class, inherits everything, access things set by other children
Parent class _init; <__main__.Child3 object at 0x7f84a721fdc0> () {}
some base method in Parent
As one can see, the overloaded definition(s) take the place of those declared in the Parent class but can still be called BY the Parent class, thereby allowing one to emulate the classical implicit inheritance initialization behavior: Parent and Child classes both initialize without needing to explicitly invoke the Parent's init() from the Child class.
Personally, I call the surrogate _init() method main() because it makes sense to me when switching between C++ and Python, for example, since it is a function that will be automatically run for any subclass of Parent (the last declared definition of main(), that is).