Why are Python static/class methods not callable?

Why are Python instance methods callable, but static methods and class methods not?
I did the following:
class Test():
    class_var = 42

    @classmethod
    def class_method(cls):
        pass

    @staticmethod
    def static_method():
        pass

    def instance_method(self):
        pass

for attr, val in vars(Test).items():
    if not attr.startswith("__"):
        print(attr, "is %s callable" % ("" if callable(val) else "NOT"))
The result is:
static_method is NOT callable
instance_method is callable
class_method is NOT callable
class_var is NOT callable
Technically this may be because the instance method object has some particular attribute set (or not set) in a particular way (possibly __call__). Why such asymmetry, and what purpose does it serve?
I came across this while learning Python introspection tools.
Additional remarks from comments:
The SO answer linked in the comments says that static/class methods are descriptors, which are not callable. Now I am curious: why are descriptors made not callable, since descriptors are classes with particular methods (one of __get__, __set__, __delete__) defined?

Why are descriptors not callable? Basically because they don't need to be. Not every descriptor represents a callable either.
As you correctly note, the descriptor protocol consists of __get__, __set__ and __delete__. Note that there is no __call__; that is the technical reason why it's not callable. The actual callable is the return value of your static_method.__get__(...).
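To make this concrete, here is a small check against the Test class from the question (note that staticmethod objects did become directly callable in newer Python versions, so the first result depends on your interpreter):
raw = vars(Test)["static_method"]        # the staticmethod descriptor itself
print(callable(raw))                     # False on older Pythons (newer versions made staticmethod callable)

unwrapped = raw.__get__(None, Test)      # what attribute access actually returns
print(callable(unwrapped))               # True -- this is the plain underlying function
print(unwrapped is Test.static_method)   # True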
As for the philosophical reason, let's look at the class. The contents of the __dict__, or in your case the results of vars(), are basically the locals() of the class block. If you define a function, it gets stored as a plain function. If you use a decorator such as @staticmethod, it's equivalent to something like:
def _this_is_not_stored_anywhere():
    pass

static_method = staticmethod(_this_is_not_stored_anywhere)
I.e., static_method is assigned the return value of the staticmethod() call.
Now, function objects actually implement the descriptor protocol - every function has a __get__ method on it. This is where the special self and the bound-method behavior come from. See:
def xyz(what):
    print(what)

repr(xyz)                   # '<function xyz at 0x7f8f924bdea0>'
repr(xyz.__get__("hello"))  # "<bound method str.xyz of 'hello'>"
xyz.__get__("hello")()      # prints "hello"
Because of how the class calls __get__, your instance_method binds to the instance and gets it pre-filled as its first argument.
But the whole point of @classmethod and @staticmethod is that they do something special to avoid the default bound-method behavior! So they can't return a plain function. Instead they return a descriptor object with a custom __get__ implementation.
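For illustration, here is a rough pure-Python sketch of such descriptors (my_staticmethod and my_classmethod are made-up names; the real built-ins are implemented in C, and the Descriptor HowTo guide has more faithful versions):
class my_staticmethod(object):
    """Simplified stand-in for the built-in staticmethod (illustrative only)."""
    def __init__(self, func):
        self.__func__ = func

    def __get__(self, obj, objtype=None):
        # Ignore both the instance and the class: hand back the raw function.
        return self.__func__


class my_classmethod(object):
    """Simplified stand-in for the built-in classmethod (illustrative only)."""
    def __init__(self, func):
        self.__func__ = func

    def __get__(self, obj, objtype=None):
        if objtype is None:
            objtype = type(obj)
        func = self.__func__
        def bound_to_class(*args, **kwargs):
            # Always pass the class, never the instance, as the first argument.
            return func(objtype, *args, **kwargs)
        return bound_to_class


class Demo(object):
    @my_classmethod
    def which(cls):
        return cls.__name__

    @my_staticmethod
    def answer():
        return 42

print(Demo.which())     # 'Demo'
print(Demo().which())   # 'Demo'
print(Demo().answer())  # 42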
Of course, you could put a __call__ method on this descriptor object, but why? It's code that you don't need in practice; you almost never touch the descriptor object itself. If you do (in code similar to yours), you still need special handling for descriptors, because a general descriptor doesn't have to be (or behave like) a callable - properties are descriptors too. So you don't want __call__ in the descriptor protocol: if a third party "forgot" to implement __call__ on something you consider a "callable", your code would miss it.
Also, the object is a descriptor, not a function. Putting a __call__ method on it would be masking its true nature :) I mean, it's not wrong per se, it's just ... something that you should never need for anything.
BTW, in the case of classmethod/staticmethod, you can get back the original function from their __func__ attribute.
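For example, with the Test class from the question:
cm = vars(Test)["class_method"]    # the classmethod descriptor
sm = vars(Test)["static_method"]   # the staticmethod descriptor

print(cm.__func__)                 # <function Test.class_method at 0x...>
print(sm.__func__)                 # <function Test.static_method at 0x...>
print(callable(cm.__func__), callable(sm.__func__))   # True True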

Related

Meaning of staticmethod object's description?

I understand the @staticmethod decorator in practice. But a bug in mocking a static method led me down the Python semantics rabbit hole. This description in "The standard type hierarchy" section is confusing me:
Static method objects provide a way of defeating the transformation of function objects to method objects described above. A static method object is a wrapper around any other object, usually a user-defined method object. When a static method object is retrieved from a class or a class instance, the object actually returned is the wrapped object, which is not subject to any further transformation. Static method objects are not themselves callable, although the objects they wrap usually are. Static method objects are created by the built-in staticmethod() constructor.
The staticmethod() constructor takes a function object as sole argument. How can it wrap any other object than a function object? Even if this doesn't fail, how does it make any sense?
How is it usually a wrapper around a user-defined method object instead of a function object? User-defined method objects, when called, add the object they're called on to the start of the argument list, then call the function object stored on the class (ignoring all the various special cases).
How is it that static method objects are not themselves callable? How do calls to these work, then?
You can see that staticmethod can take any argument:
>>> x = staticmethod(3)
and that it is, indeed, not callable:
>>> x()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'staticmethod' object is not callable
staticmethod doesn't do much more than store a reference to its argument. The "magic" happens when you access a staticmethod object as an attribute of a class or of an instance of a class. When you do, you get the result of the staticmethod object's __get__ method, which is... the thing you originally wrapped.
>>> x.__get__(x)
3
Don't worry about why we passed x as an argument; suffice it to say, staticmethod.__get__ mostly ignores its argument(s).
When you wrap a function in a class statement, the staticmethod saves a reference to the function, to be called later when you ask for it.
>>> class Foo(object):
...     @staticmethod
...     def x():
...         pass
...
>>> type(Foo.__dict__['x'])
<type 'staticmethod'>
>>> type(Foo.x)
<type 'function'>
Instance methods work the way they do because function.__get__ returns an instance of method, which is in some sense just the original function partially applied to the instance that invokes it. You may have seen that x.foo() is the same as type(x).foo(x). The reason that is true is that x.foo first resolves to type(x).foo, which itself evaluates to type(x).__dict__['foo'].__get__(x, type(x)). The return value of function.__get__ is basically a wrapper around the function foo, but with x already supplied as the first argument.
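A quick check of that equivalence (Widget is a made-up example class):
class Widget(object):
    def foo(self):
        return ("foo called on", self)

x = Widget()

# x.foo resolves the raw function in the class dict, then calls its __get__:
bound = type(x).__dict__['foo'].__get__(x, type(x))

print(x.foo())         # ('foo called on', <__main__.Widget object at 0x...>)
print(bound())         # same result: x is already supplied as the first argument
print(x.foo == bound)  # True -- both wrap the same function and the same instance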
staticmethod's main purpose is to provide a different __get__ method.
Incidentally, classmethod serves the same purpose. classmethod.__get__ returns something that calls the wrapped function with the class as the first argument, whether you invoke the class method from an instance of the class or the class itself.
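A small illustration of that (Greeter is a made-up example class):
class Greeter(object):
    @classmethod
    def hello(cls):
        return "hello from " + cls.__name__

print(Greeter.hello())     # 'hello from Greeter' -- invoked on the class
print(Greeter().hello())   # 'hello from Greeter' -- invoked on an instance; cls is still the class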
How can it wrap any other object than a function object?
Pretty easily.
class Example(object):
    example = staticmethod(5)

print(Example.example)  # prints 5
You can pass anything you want to the staticmethod constructor.
Even if this doesn't fail, how does it make any sense?
It usually doesn't, but staticmethod doesn't check.
How is it usually a wrapper around a user-defined method object instead of a function object?
It's not. That part's just wrong.
How is it that static method objects are not themselves callable? How do calls to these work, then?
The descriptor protocol. Static method objects have a __get__ method that returns whatever object they wrap. Attribute access invokes this __get__ method and returns what __get__ returns.
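Concretely, continuing the Example class above (Python 3 repr shown):
raw = Example.__dict__["example"]   # the staticmethod wrapper itself
print(type(raw))                    # <class 'staticmethod'>
print(raw.__get__(None, Example))   # 5 -- __get__ hands back the wrapped object
print(Example.example)              # 5 -- normal attribute access does the same thing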

Why does setting a descriptor on a class overwrite the descriptor?

Simple repro:
class VocalDescriptor(object):
    def __get__(self, obj, objtype):
        print('__get__, obj={}, objtype={}'.format(obj, objtype))
    def __set__(self, obj, val):
        print('__set__')

class B(object):
    v = VocalDescriptor()

B.v      # prints "__get__, obj=None, objtype=<class '__main__.B'>"
B.v = 3  # does not print "__set__", evidently does not trigger the descriptor
B.v      # does not print anything, we overwrote the descriptor
This question has an effective duplicate, but the duplicate was not answered, and I dug a bit more into the CPython source as a learning exercise. Warning: I went into the weeds. I'm really hoping I can get help from a captain who knows those waters. I tried to be as explicit as possible in tracing the calls I was looking at, for my own future benefit and the benefit of future readers.
I've seen a lot of ink spilled over the behavior of __getattribute__ applied to descriptors, e.g. lookup precedence. The Python snippet in "Invoking Descriptors" just below For classes, the machinery is in type.__getattribute__()... roughly agrees in my mind with what I believe is the corresponding CPython source in type_getattro, which I tracked down by looking at "tp_slots" then where tp_getattro is populated. And the fact that B.v initially prints __get__, obj=None, objtype=<class '__main__.B'> makes sense to me.
What I don't understand is, why does the assignment B.v = 3 blindly overwrite the descriptor, rather than triggering v.__set__? I tried to trace the CPython call, starting once more from "tp_slots", then looking at where tp_setattro is populated, then looking at type_setattro. type_setattro appears to be a thin wrapper around _PyObject_GenericSetAttrWithDict. And there's the crux of my confusion: _PyObject_GenericSetAttrWithDict appears to have logic that gives precedence to a descriptor's __set__ method!! With this in mind, I can't figure out why B.v = 3 blindly overwrites v rather than triggering v.__set__.
Disclaimer 1: I did not rebuild Python from source with printfs, so I'm not completely sure type_setattro is what's being called during B.v = 3.
Disclaimer 2: VocalDescriptor is not intended to exemplify "typical" or "recommended" descriptor definition. It's a verbose no-op to tell me when the methods are being called.
You are correct that B.v = 3 simply overwrites the descriptor with an integer (as it should). In the descriptor protocol, __get__ is designed to be called as instance attribute or class attribute, but __set__ is designed to be called only as instance attribute.
For B.v = 3 to invoke a descriptor, the descriptor should have been defined on the metaclass, i.e. on type(B).
>>> class BMeta(type):
...     v = VocalDescriptor()
...
>>> class B(metaclass=BMeta):
...     pass
...
>>> B.v = 3
__set__
To invoke the descriptor on B, you would use an instance: B().v = 3 will do it.
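For example, re-creating B (so the descriptor has not been overwritten yet):
class B(object):
    v = VocalDescriptor()   # VocalDescriptor as defined in the question

b = B()
b.v = 3   # prints "__set__": instance attribute assignment goes through the descriptor
b.v       # prints "__get__, obj=<__main__.B object at 0x...>, objtype=<class '__main__.B'>"

B.v = 3   # prints nothing: class attribute assignment replaces the descriptor in B.__dict__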
The reason B.v also invokes the getter is to allow the user to customize what B.v does, independently of whatever B().v does. A common pattern is to allow direct access to the descriptor instance, by returning the descriptor itself when class attribute access was used:
class VocalDescriptor(object):
    def __get__(self, obj, objtype):
        if obj is None:
            return self
        print('__get__, obj={}, objtype={}'.format(obj, objtype))
    def __set__(self, obj, val):
        print('__set__')
Now B.v would return some instance like <mymodule.VocalDescriptor object at 0xdeadbeef> which you can interact with. It is literally the descriptor object, defined as a class attribute, and its state B.v.__dict__ is shared between all instances of B.
Of course it is up to the user's code to define exactly what they want B.v to do; returning self is just the common pattern. A classmethod is an example of a descriptor that does something different here; see the Descriptor HowTo Guide for a pure-Python implementation of classmethod.
Unlike __get__, which can be used to customize B().v and B.v independently, __set__ is not invoked unless the attribute access is on an instance. I would suppose that the goal of customizing B().v = other and B.v = other using the same descriptor v is not common or useful enough to complicate the descriptor protocol further, especially since the latter is still possible with a metaclass descriptor anyway, as shown in BMeta.v above.
Barring any overrides, B.v is equivalent to type.__getattribute__(B, "v"), while b = B(); b.v is equivalent to object.__getattribute__(b, "v"). Both definitions invoke the __get__ method of the result if defined.
Note, though, that the call to __get__ differs in each case. B.v passes None as the first argument, while B().v passes the instance itself. In both cases B is passed as the second argument.
B.v = 3, on the other hand, is equivalent to type.__setattr__(B, "v", 3), which does not invoke __set__.
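Those equivalences can be checked directly with the dunder calls spelled out (using a fresh B with the VocalDescriptor from the question):
class B(object):
    v = VocalDescriptor()

type.__getattribute__(B, "v")    # same as B.v: prints "__get__, obj=None, objtype=<class '__main__.B'>"

b = B()
object.__getattribute__(b, "v")  # same as b.v: prints "__get__" with obj set to the instance

type.__setattr__(B, "v", 3)      # same as B.v = 3: no "__set__"; the class dict entry is simply replaced
print(B.__dict__["v"])           # 3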
I think that none of the current answers actually answer your question.
Why does setting a descriptor on a class overwrite the descriptor?
Setting or deleting an attribute on a class (or a subclass of it) that owns a descriptor (e.g. cls.descr = 3 or del cls.descr) overrides that descriptor because otherwise it would be impossible to replace a faulty descriptor (one whose descr.__set__(cls, 3) or descr.__delete__(cls) raises an exception), since a class dictionary (cls.__dict__) is a read-only types.MappingProxyType. If you do want to intercept setting or deleting an attribute on a class, you can always define a descriptor on the metaclass, since the class is an instance of that metaclass. This is why __set__ and __delete__ are always passed an instance of the class owning the descriptor, and why they do not have an owner parameter.
Getting an attribute on a class (or a subclass of it) that owns a descriptor (e.g. cls.descr) does not override that descriptor, because a faulty descriptor can still be replaced even so (a descr.__get__(None, cls) that raises an exception does not prevent you from assigning a new value to the attribute). This is why __get__ is passed either an instance of the class owning the descriptor or None, together with the class (or subclass) itself, i.e. why it has an owner parameter.
More information in this answer.

When does __getattr__ get triggered?

I have a class as follows:
class Lz:
    def __init__(self, b):
        self.b = b

    def __getattr__(self, item):
        return self.b.__getattribute__(item)
And I create an instance and print it:
a = Lz('abc')
print(a)
Result is: abc
I set a breakpoint at the line return self.b.__getattribute__(item), and item shows '__str__'.
I don't know why __getattr__ gets called, or why item is '__str__', when I just print the instance.
print calls __str__ (see this question for details), but as Lz does not have a __str__ method, a lookup for an attribute named '__str__' takes place using __getattr__.
So if you add a __str__ method, __getattr__ should not be called anymore when printing objects of the Lz class.
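For instance, a small sketch of that fix (same Lz, with a __str__ added; the delegation detail is illustrative):
class Lz:
    def __init__(self, b):
        self.b = b

    def __getattr__(self, item):
        print('__getattr__ called for ' + item)
        return getattr(self.b, item)

    def __str__(self):
        # The lookup for __str__ now succeeds normally, so __getattr__
        # is no longer consulted when the object is printed.
        return 'Lz wrapping ' + repr(self.b)

a = Lz('abc')
print(a)           # Lz wrapping 'abc' -- no "__getattr__ called" line
print(a.upper())   # __getattr__ called for upper, then ABC (delegated to the wrapped str)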
print(obj) invokes str(obj) (to get a printable representation), which in turn tries to invoke obj.__str__() (and falls back to something else if this fails, but that's not the point here).
You defined Lz as an old-style class, so it doesn't have a __str__ method by default (new-style classes inherit this from object), but you did define a __getattr__() method, so this is what gets invoked in the end (__getattr__() is the last thing the attribute lookup will invoke when everything else has failed).
NB: in case you don't already know, since everything in Python is an object - including classes, functions, methods, etc. - Python doesn't distinguish between "data" attributes and "method" attributes - those are all attributes, period.
NB2: directly accessing __magic__ names is considered bad practice. Those names are implementation support for operators or operator-like generic functions (i.e. len(), type(), etc.), and you are supposed to use the operator or generic function instead. IOW, this:
return self.b.__getattribute__(item)
should be written as
return getattr(self.b, item)
(getattr() is the generic function version of the "dot" attribute lookup operator (.))

How does assignment of a function as a class attribute become a method in Python?

>>> class A(object): pass
>>> def func(cls): pass
>>> A.func = func
>>> A.func
<unbound method A.func>
How does this assignment create a method? It seems unintuitive that assignment does the following for classes:
Turn functions into unbound instance methods
Turn functions wrapped in classmethod() into class methods (actually, this is pretty intuitive)
Turn functions wrapped in staticmethod() into functions
It seems that for the first, there should be an instancemethod(), and for the last one, there shouldn't be a wrapper function at all. I understand that these are for uses within a class block, but why should they apply outside of it?
But more importantly, how exactly does assignment of the function into a class work? What magic happens that resolves those 3 things?
Even more confusing is this:
>>> A.func
<unbound method A.func>
>>> A.__dict__['func']
<function func at 0x...>
But I think this is something to do with descriptors, when retrieving attributes. I don't think it has much to do with the setting of attributes here.
You're right that this has something to do with the descriptor protocol. Descriptors are how passing the receiver object as the first parameter of a method is implemented in Python. You can read more detail about Python attribute lookup here. The following shows, at a slightly lower level, what is happening when you do A.func = func; A.func:
# A.func = func
A.__dict__['func'] = func # This just sets the attribute
# A.func
# The __getattribute__ method of a type object calls the __get__ method with
# None as the first parameter and the type as the second.
A.__dict__['func'].__get__(None, A) # The __get__ method of a function object
# returns an unbound method object if the
# first parameter is None.
a = A()
# a.func()
# The __getattribute__ method of object finds an attribute on the type object
# and calls the __get__ method of it with the instance as its first parameter.
a.__class__.__dict__['func'].__get__(a, a.__class__)
# This returns a bound method object that is actually just a proxy for
# inserting the object as the first parameter to the function call.
So it's the looking up of the function on a class or an instance that turns it into a method, not assigning it to a class attribute.
classmethod and staticmethod are just slightly different descriptors: classmethod returns a method object bound to the type object, and staticmethod just returns the original function.
Descriptors are the magic1 that turns an ordinary function into a bound or unbound method when you retrieve it from an instance or class, since they’re all just functions that need different binding strategies. The classmethod and staticmethod decorators implement other binding strategies, and staticmethod actually just returns the raw function, which is the same behavior you get from a non-function callable object.
See “User-defined methods” for some gory details, but note this:
Also notice that this transformation only happens for user-defined functions; other callable objects (and all non-callable objects) are retrieved without transformation.
So if you wanted this transformation for your own callable object, you could just wrap it in a function, but you could also write a descriptor to implement your own binding strategy.
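As a sketch of that idea (bind_class_and_instance and MyClass are made-up names, not a standard recipe), here is a descriptor implementing a custom binding strategy that prepends both the owning class and the instance to every call:
import functools

class bind_class_and_instance(object):
    """Illustrative descriptor: prepends (class, instance) to every call."""
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        if objtype is None:
            objtype = type(obj)
        # functools.partial does the "binding": (cls, instance) come first.
        return functools.partial(self.func, objtype, obj)


class MyClass(object):
    @bind_class_and_instance
    def report(cls, instance, extra):
        return (cls.__name__, instance, extra)


a = MyClass()
print(a.report("hi"))        # ('MyClass', <__main__.MyClass object at 0x...>, 'hi')
print(MyClass.report("hi"))  # ('MyClass', None, 'hi') -- accessed on the class, so the instance slot is None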
Here’s the staticmethod decorator in action, returning the underlying function when it’s accessed.
>>> @staticmethod
... def f(): pass
>>> class A(object): pass
>>> A.f = f
>>> A.f
<function f at 0x100479398>
>>> f
<staticmethod object at 0x100492750>
Whereas a normal object with a __call__ method doesn’t get transformed:
>>> class C(object):
...     def __call__(self): pass
>>> c = C()
>>> A.c = c
>>> A.c
<__main__.C object at 0x10048b890>
>>> c
<__main__.C object at 0x10048b890>
1 The specific function is func_descr_get in Objects/funcobject.c.
What you have to consider is that in Python everything is an object. Establishing that makes it easier to understand what is happening. If you have a function def foo(bar): print bar, you can do spam = foo and call spam(1), getting, of course, 1.
Objects in Python keep their instance attributes in a dictionary called __dict__ with a "pointer" to other objects. As functions in Python are objects as well, they can be assigned and manipulated as simple variables, passed around to other functions, etc. Python's implementation of object orientation takes advantage of this, and treats methods as attributes, as functions that are in the __dict__ of the object.
Instance methods' first parameter is always the instance object itself, generally called self (but it could be called this or banana). When a method is accessed directly on the class, it is not bound to any instance, so you have to give it an instance object as the first parameter (A.func(A())). When you call a bound method (A().func()), the first parameter of the method, self, is implicit, but behind the curtains Python does exactly the same as calling the unbound function directly and passing the instance object as the first parameter.
If this is understood, the fact that assigning A.func = func (which behind the curtains is doing A.__dict__["func"] = func) leaves you with an unbound method, is unsurprising.
In your example, the cls parameter of def func(cls): pass will actually receive the instance (self) of type A when the method is called on an instance. The classmethod and staticmethod decorators do nothing more than take the first argument obtained during the call of the function/method and transform it into something else before calling the function.
classmethod takes the first argument, gets the class object of the instance, and passes that as the first argument to the function call, while staticmethod simply discards the first parameter and calls the function without it.
Point 1: The function func you defined exists as a First-Class Object in Python.
Point 2: Classes in Python store their attributes in their __dict__.
So what happens when you pass a function as the value of a class attribute in Python? That function is stored in the class's __dict__, making it a method of that class, accessed via the attribute name you assigned it to.
Relating to MTsoul's comment to Gabriel Hurley's answer:
What is different is that func has a __call__() method, making it "callable", i.e. you can apply the () operator to it. Check out the Python docs (search for __call__ on that page).
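A minimal illustration of that difference (NotCallable and WithCall are made-up names):
def func():
    return "I am a function"

class NotCallable(object):
    pass

class WithCall(object):
    def __call__(self):
        return "callable because of __call__"

print(callable(func))            # True  -- functions implement __call__
print(callable(NotCallable()))   # False -- no __call__ defined
print(callable(WithCall()))      # True
print(WithCall()())              # 'callable because of __call__'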
