I'm trying to understand the self keyword in Python classes, so I made up a simple class:
class Question(object):
    """A class for answering my OO questions about python.
    attributes: a_number"""

    def __init__():
        print "You've initialized a Question!"

    def myself_add(self, new_number):
        self.a_number += new_number
And in a __main__ function below the class definition I'm running the code
my_q = Question
print my_q
my_q.a_number = 12
print 'A number:',my_q.a_number
my_q.myself_add(3)
print 'A number:',my_q.a_number
The result I'm getting (including error) is
<class '__main__.Question'>
A number: 12
Traceback (most recent call last):
File "question.py", line 21, in <module>
my_q.myself_add(3)
TypeError: unbound method myself_add() must be called with Question instance as first argument (got int instance instead)
I'm trying to understand why the method myself_add is considered unbound. What does that mean? And why doesn't it know that I'm calling it on an instance of the Question class? Also, why doesn't my __init__ print happen?
In order to create an instance you need to write Question(), not Question.
__init__ should take at least the self argument.
You may also want to initialize self.a_number in __init__ or put it into the class body; otherwise the call to myself_add will fail if you do not execute my_q.a_number = 12 first.
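Putting those fixes together, a minimal corrected sketch of the class might look like this (the attribute name a_number is kept from the question; single-argument print() is used so it also runs on Python 3):

class Question(object):
    """A class for answering my OO questions about Python."""

    def __init__(self):
        # self is the instance being initialized
        self.a_number = 0
        print("You've initialized a Question!")

    def myself_add(self, new_number):
        self.a_number += new_number


my_q = Question()        # note the parentheses: this creates an instance
my_q.a_number = 12
my_q.myself_add(3)
print(my_q.a_number)     # 15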
The "self" argument can be anything, but is called self by convention.
It's passed in for you by Python by default.
You can try running this simple code snippet to see what I mean:
class A(object):
    def __init__(self):
        pass

    def check(self, checked):
        print self is checked
and then from the command line:
a = A()
a.check(a)
True
You can take this further like this:
In [74]: b=A()
In [75]: a.check(b)
False
So here I've created two instances of A, one called a and one called b. Only when comparing a's self to a itself is the statement True.
Hopefully that makes sense.
Specifically in your case, your class Question needs to be instantiated (Question()) and your __init__ needs an argument (self by convention).
On when a function is bound, you might find this answer helpful:
What is the difference between a function, an unbound method and a bound method? (but it might be a bit advanced)
First of all, when you create an instance of a class you use opening and closing parentheses, which calls the __init__ method.
my_q = Question()
However, in your case your __init__ method is not set up to receive the instance (self), so if you run it as is it will give an error:
my_q = Question()
TypeError: __init__() takes no arguments (1 given)
>>>
You need to define __init__ with self so that the instance is passed to it implicitly.
def __init__(self):
Now the program runs smoothly.
The first print in the main code now shows the my_q object itself:
>>>
You've initialized a Question!
<__main__.Question object at 0x028ED2F0>
A number: 12
A number: 15
From Python 3's documentation, super() "returns a proxy object that delegates method calls to a parent or sibling class of type." What does that mean?
Suppose I have the following code:
class SuperClass():
    def __init__(self):
        print("__init__ from SuperClass.")
        print("self object id from SuperClass: " + str(id(self)))

class SubClass(SuperClass):
    def __init__(self):
        print("__init__ from SubClass.")
        print("self object id from SubClass: " + str(id(self)))
        super().__init__()

sc = SubClass()
The output I get from this is:
__init__ from SubClass.
self object id from SubClass: 140690611849200
__init__ from SuperClass.
self object id from SuperClass: 140690611849200
This means that in the line super().__init__(), super() is returning the current object which is then implicitly passed to the superclass' __init__() method. Is this accurate or am I missing something here?
To put it simply, I want to understand the following:
When super().__init__() is run,
What exactly is being passed to __init__() and how? We are calling it on super() so whatever this is returning should be getting passed to the __init__() method from what I understand about Python so far.
Why don't we have to pass in self to super().__init__()?
returns a proxy object that delegates method calls to a parent or sibling class of type.
This proxy is an object that acts as the method-calling portion of the parent class. It is not the class itself; rather, it's just enough information so that you can use it to call the parent class methods.
If you call __init__(), you get your own, local, sub-class __init__ function. When you call super(), you get that proxy object, which will redirect you to the parent-class methods. Thus, when you call super().__init__(), that proxy redirects the call to the parent-class __init__ method.
Similarly, if you were to call super().foo, you would get the foo method from the parent class -- again, re-routed by that proxy.
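As a quick sketch of that re-routing (the Parent and Child class names here are invented for illustration, Python 3):

class Parent:
    def foo(self):
        return "Parent.foo"

class Child(Parent):
    def foo(self):
        # super() gives the proxy; attribute access on it is
        # redirected to the next class in the MRO (Parent here)
        return "Child saw: " + super().foo()

print(Child().foo())   # Child saw: Parent.foo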
Is that clear to you?
Responses to OP comments
But that must mean that this proxy object is being passed to
__init__() when running super().__init__() right?
Wrong. The proxy object is like a package name, such as calling math.sqrt(). You're not passing math to sqrt, you're using it to denote which sqrt you're using. If you wanted to pass the proxy to __init__, the call would be __init__(super()). That call would be semantically ridiculous, of course.
When we have to actually pass in self which is the sc object in my example.
No, you are not passing in sc; that is the result of the object creation call (internal method __new__), which includes an invocation of __init__. For __init__, the self object is a new item created for you by the Python run-time system. For most methods of a class, that first argument (called self by convention, this in other languages) is the object that invoked the method.
This means that in the line super().__init__(), super() is returning the current object which is then implicitly passed to the superclass' __init__() method. Is this accurate or am I missing something here?
>>> help(super)
super() -> same as super(__class__, <first argument>)
A super call returns a proxy/wrapper object which remembers:
The instance invoking super()
The class of the calling object
The class that's invoking super()
This is perfectly sound. super always fetches the attribute of the next class in the hierarchy (really, the MRO) that has the attribute you're looking for. So it's not returning the current object; rather, and more accurately, it returns an object that remembers enough information to search for attributes higher in the class hierarchy.
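A small sketch of that "next class in the MRO" behaviour (the diamond hierarchy below is invented for illustration):

class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B -> " + super().who()

class C(A):
    def who(self):
        return "C -> " + super().who()

class D(B, C):
    def who(self):
        return "D -> " + super().who()

print(D.__mro__)    # D, B, C, A, object
print(D().who())    # D -> B -> C -> A

Note how the super() call in B lands on the sibling class C rather than on A, because the lookup follows D's MRO; that is exactly the "parent or sibling class" wording from the docs.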
What exactly is being passed to __init__() and how? We are calling it on super() so whatever this is returning should be getting passed to the __init__() method from what I understand about Python so far.
You're almost right, but super loves to play tricks on us. The super class defines __getattribute__, the method responsible for attribute search. When you do something like super().y(), super.__getattribute__ gets called searching for y. Once it finds y, it passes the instance that's invoking the super call to y. Also, super has a __get__ method, which makes it a descriptor; I'll omit the details of descriptors here, refer to the documentation to learn more. This answers your second question as well, as to why self isn't passed explicitly.
*Note: super is a little bit different and relies on some magic. For almost all other classes, the behavior is the same. That is:
a = A() # A is a class
a.y() # same as A.y(a), self is a
But super is different:
class A:
    def y(self):
        return self

class B(A):
    def y(self):
        return super().y()  # equivalent to: A.y(self)

b = B()
b.y() is b  # True: returns b, not super(); self is b, not super()
I wrote a simple test to investigate what CPython does for super:
class A:
    pass

class B(A):
    def f(self):
        return super()

    @classmethod
    def g(cls):
        return super()

    def h(selfish):
        selfish = B()
        return super()

class C(B):
    pass

c = C()
for method in 'fgh':
    super_object = getattr(c, method)()
    print(super_object, super_object.__self__, super_object.__self_class__, super_object.__thisclass__)
    # (These attributes were found using dir.)
The zero-argument super call returns an object that stores three things:
__self__ stores the object whose name matches the first parameter of the method—even if that name has been reassigned.
__self_class__ stores its type, or itself in the case of a class method.
__thisclass__ stores the class in which the method is defined.
(It is unfortunate that __thisclass__ was implemented this way rather than fetching an attribute on the method because it makes it impossible to use the zero-argument form of super with meta-programming.)
The object returned by super implements __getattribute__, which forwards method calls to the type found in the __mro__ of __self_class__ one step after __thisclass__.
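In other words, inside a method of a subclass the zero-argument form is roughly a shorthand for the explicit two-argument form. A small sketch (class names invented for illustration):

class A:
    def hello(self):
        return "A.hello"

class B(A):
    def hello(self):
        # zero-argument form: here super() behaves like super(B, self),
        # i.e. __thisclass__ is B and __self__ is self
        implicit = super().hello()
        explicit = super(B, self).hello()
        assert implicit == explicit
        return implicit

print(B().hello())   # A.hello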
I have a piece of code that I am trying to understand, and even with the existing answers I really couldn't understand the purpose of the following code. Can someone please help me understand it?
I have already looked at various relevant questions (about __get__()) here and I couldn't find specific answers. I understand that the class below is trying to create a method on the fly (possibly we get to this code from a __getattr__() method which fails to find an attribute) and return the method to the caller. I have commented right above the lines of code I need help understanding.
class MethodGen(object):
    def __getattr__(self, name):
        method = self.method_gen(name)
        if method:
            return self.method_gen(name)

    def method_gen(self, name):
        def method(*args, **kwargs):
            print("Creating a method here")
        # Below are the two lines of code I need help understanding with
        method.__name__ = name
        setattr(self, name, method.__get__(self))
        return method
If I am not wrong, the method() function's __name__ attribute has been set, but in the setattr() call, what is the attribute named name on the MethodGen object being set to?
This question really intrigued me. The two answers provided didn't seem to tell the whole story. What bothered me was the fact that in this line:
setattr(self, name, method.__get__(self))
the code is not setting things up so that method.__get__ will be called at some point. Rather, method.__get__ is actually being called right there! But isn't the idea that this __get__ method will be called when a particular attribute of an object, an instance of MethodGen in this case, is actually referenced? If you read the docs, this is the impression you get: that an attribute is linked to a descriptor that implements __get__, and that implementation determines what gets returned when that attribute is referenced. But again, that's not what's going on here. This is all happening before that point. So what IS really going on here?
The answer lies in Python's descriptor documentation. The key language is this:
To support method calls, functions include the __get__() method for
binding methods during attribute access. This means that all functions
are non-data descriptors which return bound methods when they are
invoked from an object.
method.__get__(self) is exactly what's being described here. So what method.__get__(self) is actually doing is returning a reference to the "method" function that is bound to self. Since in our case, self is an instance of MethodGen, this call is returning a reference to the "method" function that is bound to an instance of MethodGen. In this case, the __get__ method has nothing to do with the act of referencing an attribute. Rather, this call is turning a function reference into a method reference!
So now we have a reference to a method we've created on the fly. But how do we set it up so it gets called at the right time, when an attribute with the right name is referenced on the instance it is bound to? That's where the setattr(self, name, X) part comes in. This call takes our new method and binds it to the attribute with name name on our instance.
All of the above then is why:
setattr(self, name, method.__get__(self))
is adding a new method to self, the instance of the MethodGen class on which method_gen has been called.
The method.__name__ = name part is not all that important. Executing just the line of code discussed above gives you all the behavior you really want. This extra step just attaches a name to our new method so that code that asks for the name of the method, like code that uses introspection to write documentation, will get the right name. It is the instance attribute's name...the name passed to setattr...that really matters, and really "names" the method.
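A stripped-down sketch of that binding step, outside of the MethodGen class (the Widget and greet names are invented for the example):

class Widget:
    pass

def greet(self):
    # a plain function; 'self' is just its first parameter
    return "hello from {!r}".format(self)

w = Widget()

bound = greet.__get__(w)      # functions are descriptors: this returns a
print(bound.__self__ is w)    # bound method whose __self__ is w -> True

setattr(w, "greet", bound)    # attach it under a name on this one instance
print(w.greet())              # hello from <__main__.Widget object at 0x...>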
Interesting, never seen this done before, seems tough to maintain (probably will make some fellow developers want to hang you).
I changed some code so you can see a little more of what is happening.
class MethodGen(object):
    def method_gen(self, name):
        print("Creating a method here")

        def method(*args, **kwargs):
            print("Calling method")
            print(args)  # so we can see what is actually being passed in

        # Below are the two lines of code in question
        method.__name__ = name  # This sets the method's __name__ equal to name (i.e. we can refer to the method this way)
        # The following adds the new method to the current instance.
        setattr(self, name, method.__get__(self))  # Adds the method to this instance
        # I would do: setattr(self, name, method) though, and remove the __get__
        return method  # Returns the method
m = MethodGen()
test = m.method_gen("my_method") # I created a method in MethodGen class called my_method
test("test") # It returned a pointer to the method that I can use
m.my_method("test") # Or I can now call that method in the class.
m.method_gen("method_2")
m.method_2("test2")
Consider the class below:
class Foo:
    def bar(self):
        print("hi")

f = Foo()
f.bar()
bar is a class attribute that has a function as its value. Because functions implement the descriptor protocol, however, accessing it as Foo.bar or f.bar does not immediately return the function itself; it causes the function's __get__ method to be invoked, and that returns either the original function (as in Foo.bar) or a new bound method object (as in f.bar). f.bar() is evaluated as Foo.bar.__get__(f, Foo)().
method_gen takes the function named method, and attaches an actual method retrieved by calling the function's __get__ method to an object. The intent is so that something like this works:
>>> m = MethodGen()
>>> n = MethodGen()
>>> m.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'MethodGen' object has no attribute 'foo'
>>> n.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'MethodGen' object has no attribute 'foo'
>>> m.method_gen('foo')
<function foo at 0x10465c758>
>>> m.foo()
Creating a method here
>>> n.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'MethodGen' object has no attribute 'foo'
Initially, MethodGen does not have any methods other than method_gen. You can see the exception raised when attempting to invoke a method named foo on either of two instances. Calling method_gen, however, attaches a new method to just that particular instance. After calling m.method_gen("foo"), m.foo() calls the method defined by method_gen. That call does not affect other instances of MethodGen like n.
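For completeness, the equivalence described above can be checked directly (a quick sketch, assuming Python 3):

class Foo:
    def bar(self):
        print("hi")

f = Foo()

f.bar()                                       # hi
Foo.__dict__['bar'].__get__(f, Foo)()         # hi -- the same call, spelled out
print(f.bar.__self__ is f)                    # True: f.bar is bound to f
print(f.bar.__func__ is Foo.__dict__['bar'])  # True: same underlying function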
In Python 3.x what can I do with bar() that I cannot do with foo()?
class A:
    def foo():
        print("some code")

    @staticmethod
    def bar():
        print("some code")
Note: I initially forgot to specify self as an argument to foo(), but I'm leaving the mistake there, since the answers address that.
A staticmethod is a method that does not receive the object as its first parameter. This means it is useful to the class itself and to all instantiations of it, rather than just to instances of it (objects initialized as A()).
What this means in practical terms, is that Python does not implicitly send the object itself as a parameter. Your first method will break once you call it:
>>> a.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: foo() takes 0 positional arguments but 1 was given
This is because Python supplies object methods with the object itself as a first parameter. Hence the ubiquitous self argument:
def foo(self): #Proper signature
On the other hand,
A.bar()
will work just fine, and so will
a.bar()
The object is not supplied as a first argument. Use staticmethods for methods that are supposed to be helpful to the class and its instances, but do not require knowledge of either. Usually I use these as utility functions.
Note there is a third version, a classmethod, which is similar to a regular method in that it accepts a first parameter by default - the class of the caller. In this case the minimal signature is
@classmethod
def operateOnClass(cls):
Use this to make changes that affect all instances, such as changing class variables.
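A small sketch of that classmethod pattern (the Counter class and its instances_created attribute are invented for the example):

class Counter:
    instances_created = 0        # class variable shared by all instances

    def __init__(self):
        # bump the class-level counter via the classmethod below
        type(self).bump()

    @classmethod
    def bump(cls):
        cls.instances_created += 1

Counter()
Counter()
print(Counter.instances_created)   # 2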
bar() can be called on the class itself, without creating an instance. foo() needs to be fed self as an argument, and as such can only be called on an object already created as an instance of class A.
What is the difference between the following class methods?
Is it that one is static and the other is not?
class Test(object):
    def method_one(self):
        print "Called method_one"

    def method_two():
        print "Called method_two"
a_test = Test()
a_test.method_one()
a_test.method_two()
In Python, there is a distinction between bound and unbound methods.
Basically, a call to a member function (like method_one) through an instance, i.e. the bound call
a_test.method_one()
is translated to
Test.method_one(a_test)
i.e. a call to an unbound method. Because of that, a call to your version of method_two will fail with a TypeError
>>> a_test = Test()
>>> a_test.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
You can change the behavior of a method using a decorator
class Test(object):
    def method_one(self):
        print "Called method_one"

    @staticmethod
    def method_two():
        print "Called method_two"
The decorator tells the built-in default metaclass type (the class of a class, cf. this question) to not create bound methods for method_two.
Now you can invoke the static method both on an instance and on the class directly:
>>> a_test = Test()
>>> a_test.method_one()
Called method_one
>>> a_test.method_two()
Called method_two
>>> Test.method_two()
Called method_two
Methods in Python are a very, very simple thing once you understand the basics of the descriptor system. Imagine the following class:
class C(object):
    def foo(self):
        pass
Now let's have a look at that class in the shell:
>>> C.foo
<unbound method C.foo>
>>> C.__dict__['foo']
<function foo at 0x17d05b0>
As you can see if you access the foo attribute on the class you get back an unbound method, however inside the class storage (the dict) there is a function. Why's that? The reason for this is that the class of your class implements a __getattribute__ that resolves descriptors. Sounds complex, but is not. C.foo is roughly equivalent to this code in that special case:
>>> C.__dict__['foo'].__get__(None, C)
<unbound method C.foo>
That's because functions have a __get__ method, which makes them descriptors. If you have an instance of the class it's nearly the same, except that the None argument is replaced by the class instance:
>>> c = C()
>>> C.__dict__['foo'].__get__(c, C)
<bound method C.foo of <__main__.C object at 0x17bd4d0>>
Now why does Python do that? Because the method object binds the first parameter of a function to the instance of the class. That's where self comes from. Now sometimes you don't want your class to make a function a method, that's where staticmethod comes into play:
class C(object):
    @staticmethod
    def foo():
        pass
The staticmethod decorator wraps your function and implements a dummy __get__ that returns the wrapped function as a function and not as a method:
>>> C.__dict__['foo'].__get__(None, C)
<function foo at 0x17d0c30>
Hope that explains it.
When you call a class member, Python automatically uses a reference to the object as the first parameter. The variable self actually means nothing, it's just a coding convention. You could call it gargaloo if you wanted. That said, the call to method_two would raise a TypeError, because Python is automatically trying to pass a parameter (the reference to its parent object) to a method that was defined as having no parameters.
To actually make it work, you could append this to your class definition:
method_two = staticmethod(method_two)
or you could use the @staticmethod function decorator.
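Both spellings do the same thing; a short sketch (names taken from the question, with a third method added for comparison, single-argument print() so it runs on Python 2 and 3):

class Test(object):
    def method_one(self):
        print("Called method_one")

    def method_two():
        print("Called method_two")
    # explicit wrapping, equivalent to using the decorator
    method_two = staticmethod(method_two)

    @staticmethod
    def method_three():
        print("Called method_three")

a_test = Test()
a_test.method_two()     # Called method_two
a_test.method_three()   # Called method_three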
>>> class Class(object):
...     def __init__(self):
...         self.i = 0
...     def instance_method(self):
...         self.i += 1
...         print self.i
...     c = 0
...     @classmethod
...     def class_method(cls):
...         cls.c += 1
...         print cls.c
...     @staticmethod
...     def static_method(s):
...         s += 1
...         print s
...
>>> a = Class()
>>> a.class_method()
1
>>> Class.class_method() # The class shares this value across instances
2
>>> a.instance_method()
1
>>> Class.instance_method() # The class cannot use an instance method
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method instance_method() must be called with Class instance as first argument (got nothing instead)
>>> Class.instance_method(a)
2
>>> b = 0
>>> a.static_method(b)
1
>>> a.static_method(a.c) # Static method does not have direct access to
>>> # class or instance properties.
3
>>> Class.c # a.c above was passed by value and not by reference.
2
>>> a.c
2
>>> a.c = 5 # The connection between the instance
>>> Class.c # and its class is weak as seen here.
2
>>> Class.class_method()
3
>>> a.c
5
method_two won't work because you're defining a member function but not telling it what the function is a member of. If you execute the last line you'll get:
>>> a_test.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
If you're defining member functions for a class the first argument must always be 'self'.
Accurate explanation from Armin Ronacher above; expanding on his answer so that beginners like me understand it well:
The difference between the methods defined in a class, whether static or instance methods (there is yet another type, class methods, not discussed here), lies in whether they are somehow bound to the class instance or not, i.e. whether the method receives a reference to the class instance at runtime.
class C:
    a = []
    def foo(self):
        pass

C       # this is the class object
C.a     # is a list object (class property object)
C.foo   # is a function object (class property object)
c = C()
c       # this is the class instance
The __dict__ dictionary property of the class object holds references to all the properties and methods of the class object, and thus
>>> C.__dict__['foo']
<function foo at 0x17d05b0>
the method foo is accessible as above. An important point to note here is that everything in Python is an object, and so the references in the dictionary above themselves point to other objects. Let me call them Class Property Objects, or CPOs, within the scope of my answer for brevity.
If a CPO is a descriptor, then the Python interpreter calls the __get__() method of the CPO to access the value it contains.
In order to determine if a CPO is a descriptor, the Python interpreter checks if it implements the descriptor protocol. To implement the descriptor protocol is to implement these 3 methods:
def __get__(self, instance, owner)
def __set__(self, instance, value)
def __delete__(self, instance)
For example:
>>> C.__dict__['foo'].__get__(c, C)
where
self is the CPO (it could be an instance of list, str, function, etc.) and is supplied by the runtime
instance is the instance of the class where this CPO is defined (the object c above) and needs to be explicitly supplied by us
owner is the class where this CPO is defined (the class object C above) and needs to be supplied by us. However, this is only because we are calling __get__ directly on the CPO; when we call it through the instance, we don't need to supply this, since the runtime can supply the instance and its class
value is the intended value for the CPO (used by __set__) and needs to be supplied by us
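To make the protocol concrete, here is a minimal hand-written data descriptor (the Positive/Account names are invented for illustration and are not part of the answer above; __set_name__ needs Python 3.6+):

class Positive:
    """A data descriptor that rejects negative values."""

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:            # accessed on the class itself
            return self
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if value < 0:
            raise ValueError("must be non-negative")
        instance.__dict__[self.name] = value

class Account:
    balance = Positive()

acct = Account()
acct.balance = 10       # goes through Positive.__set__
print(acct.balance)     # 10, fetched through Positive.__get__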
Not all CPOs are descriptors. For example:
>>> C.__dict__['foo'].__get__(None, C)
<function C.foo at 0x10a72f510>
>>> C.__dict__['a'].__get__(None, C)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute '__get__'
This is because the list class doesn't implement the descriptor protocol.
Thus the argument self in c.foo() is required because the call is actually C.__dict__['foo'].__get__(c, C) under the hood (as explained above, C does not need to be supplied explicitly because it can be found from the instance).
And this is also why you get a TypeError if you don't pass that required instance argument.
If you notice, the method is still referenced via the class object C, and the binding with the class instance is achieved by passing a context, in the form of the instance object, into this function.
This is pretty awesome since, if you choose to keep no context or no binding to the instance, all that is needed is to write a class that wraps the descriptor CPO and overrides its __get__() method to require no context.
This new class is what we call a decorator, and it is applied via @staticmethod:
class C(object):
    @staticmethod
    def foo():
        pass
The absence of context in the new wrapped CPO foo doesn't throw an error and can be verified as follows:
>>> C.__dict__['foo'].__get__(None, C)
<function foo at 0x17d0c30>
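For illustration, a hand-rolled wrapper with roughly the same behaviour could look like this (a sketch only; the real staticmethod is a built-in implemented in C):

class my_staticmethod:
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner=None):
        # ignore the instance/class context entirely and
        # hand back the plain, unbound function
        return self.func

class C(object):
    @my_staticmethod
    def foo():
        return "no context needed"

print(C.foo())      # no context needed
print(C().foo())    # works through an instance too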
The use case of a static method is more one of namespacing and code maintainability (taking it out of a class and making it available throughout the module, etc.).
It may be better to write static methods rather than instance methods whenever possible, unless of course you need to contextualise the methods (e.g. access instance variables, class variables, etc.). One reason is to ease garbage collection by not keeping unwanted references to objects.
That is an error.
First of all, the first line should be like this (be careful of capitals):
class Test(object):
Whenever you call a method on an instance, the instance itself is passed as the first argument (hence the name self), and so method_two gives this error:
>>> a.method_two()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
The second one won't work because when you call it like that, Python internally tries to call it with the a_test instance as the first argument, but your method_two doesn't accept any arguments, so you'll get a runtime error.
If you want the equivalent of a static method you can use a class method.
There's much less need for class methods in Python than for static methods in languages like Java or C#. Most often the best solution is to use a function in the module, outside a class definition; those work more efficiently than class methods.
The call to method_two will throw an exception for not accepting the self parameter that the Python runtime automatically passes to it.
If you want to create a static method in a Python class, decorate it with the staticmethod decorator.
class Test(object):
    @staticmethod
    def method_two():
        print "Called method_two"

Test.method_two()
Please read this doc from Guido, First-Class Everything; it clearly explains how unbound and bound methods are born.
Bound method = instance method
Unbound method = static method.
The definition of method_two is invalid. When you call method_two, you'll get TypeError: method_two() takes 0 positional arguments but 1 was given from the interpreter.
An instance method is a bound function when you call it like a_test.method_two(). It automatically accepts self, which points to an instance of Test, as its first parameter. Through the self parameter, an instance method can freely access attributes and modify them on the same object.
Unbound Methods
Unbound methods are methods that are not bound to any particular class instance yet.
Bound Methods
Bound methods are the ones which are bound to a specific instance of a class.
As documented, self can refer to different things depending on whether the function is bound, unbound or static.
Take a look at the following example:
class MyClass:
    def some_method(self):
        return self  # For the sake of the example
>>> MyClass().some_method()
<__main__.MyClass object at 0x10e8e43a0>

# This can also be written as:
>>> obj = MyClass()
>>> obj.some_method()
<__main__.MyClass object at 0x10ea12bb0>
# Bound method call:
>>> obj.some_method(10)
TypeError: some_method() takes 1 positional argument but 2 were given
# WHY DIDN'T IT WORK?
# obj.some_method(10), the bound call, is translated to
# MyClass.some_method(obj, 10), an unbound call, and it takes 2
# arguments now instead of 1
# ----- USING THE UNBOUND METHOD ------
>>> MyClass.some_method(10)
10
Since we did not use the class instance — obj — on the last call, we can kinda say it looks like a static method.
If so, what is the difference between the MyClass.some_method(10) call and a call to a static function decorated with the @staticmethod decorator?
By using the decorator, we explicitly make it clear that the method will be used without creating an instance for it first. Normally one would not expect class member methods to be used without the instance, and accessing them that way can cause possible errors depending on the structure of the method.
Also, by adding the @staticmethod decorator, we are making it possible for the method to be reached through an instance as well.
class MyClass:
    def some_method(self):
        return self

    @staticmethod
    def some_static_method(number):
        return number
>>> MyClass.some_static_method(10) # without an instance
10
>>> MyClass().some_static_method(10) # Calling through an instance
10
You can’t do the above example with the instance methods. You may survive the first one (as we did before) but the second one will be translated into an unbound call MyClass.some_method(obj, 10) which will raise a TypeError since the instance method takes one argument and you unintentionally tried to pass two.
Then, you might say, “if I can call static methods through both an instance and a class, MyClass.some_static_method and MyClass().some_static_method should be the same methods.” Yes!
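That identity can be checked directly (a quick check using the class above):

>>> MyClass.some_static_method is MyClass().some_static_method
True
>>> MyClass.some_static_method(10) == MyClass().some_static_method(10)
True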
I find that when the __init__ method lacks the self parameter, the compiler won't complain as long as there is no instantiation of the class:
$ cat test.py
#!/usr/bin/python
class A():
    def __init__():
        print("A()")
$ ./test.py
But if there is an instantiation, a runtime error occurs:
$ cat test.py
#!/usr/bin/python
class A():
    def __init__():
        print("A()")

A()
$ ./test.py
Traceback (most recent call last):
File "./test.py", line 6, in <module>
A()
TypeError: __init__() takes 0 positional arguments but 1 was given
Per my understanding, this __init__() seems useless because it can't be used to make any instance. So why doesn't the Python compiler enforce a limitation such that, when there is no self parameter in the __init__ function, it shows an error message? Is that reasonable? Or am I missing something?
__init__ is not a special name at compilation time.
It is possible to do things with the symbol A.__init__ other than invoke it by trying to construct an A, and I think it is possible to replace the unusable __init__ function with a different function at runtime prior to trying to construct an A.
The check for a correct number of arguments is made when __init__ is called, as it is for any other method.
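A short sketch of that point: the broken __init__ only becomes a problem at call time, and it can even be replaced before the first instance is made (assuming Python 3):

class A:
    def __init__():              # wrong signature, but no error yet
        print("A()")

print(A.__init__)                # just a plain function object

# patch it with a usable function before the first instantiation
A.__init__ = lambda self: print("patched A()")

a = A()                          # prints: patched A()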
There must be at least one argument, which we usually call self, so your Python code needs a small change:
#!/usr/bin/python
class A(object):
    def __init__(self):
        print("A()")

A()
__init__ is the initialization of an already created instance of a class; therefore it needs self, just like every other instance method. If you want to handle the creation of the instance itself, that is done by the __new__ method.
Don't let the fact that Python doesn't complain unless the method is called confuse you.
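A minimal sketch of that division of labour between __new__ and __init__ (assuming Python 3):

class Widget:
    def __new__(cls):
        print("__new__: creating the instance")
        return super().__new__(cls)

    def __init__(self):
        print("__init__: initializing the already-created instance")
        self.ready = True

w = Widget()
# __new__: creating the instance
# __init__: initializing the already-created instance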