python super() question

I have a syntax question about python's super() and multiple inheritance. Say I have classes A and B, both of which have a method hello(). I have a class C that inherits from both A and B, in that order.
How do I call the hello() method of B explicitly from C? Seems simple enough, but I can't seem to find the syntax for it.

To call B's hello method explicitly from C:
B.hello(self,...)

>>> class A(object):
...     def hello(self):
...         print "hello from A"
...
>>> class B(object):
...     def hello(self):
...         print "hello from B"
...
>>> class C(A, B):
...     def hello(self):
...         print "hello from C"
...
>>> c = C()
>>> B.hello(c)
hello from B
>>> # alternately if you want to call it from the class method itself..
>>> class C(A, B):
...     def hello(self):
...         B.hello(self)  # actually calling B
...
>>> c = C()
>>> c.hello()
hello from B

You might want to consider using super() instead of the hard-coded B.hello(), as explained in Python's super() considered super. In this approach, C.hello() uses super() and automatically calls A.hello(), which in turn uses super() and automatically calls B.hello(), with no hard-coding of class names.
Otherwise, B.hello() is indeed the normal way to do what you want.
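For illustration, here is a minimal sketch of that cooperative chain, reusing the class names above but giving each hello() a super() call (Python 2 syntax to match the question):
>>> class A(object):
...     def hello(self):
...         print "hello from A"
...         super(A, self).hello()   # continues along the MRO, where B comes next
...
>>> class B(object):
...     def hello(self):
...         print "hello from B"     # last hello() before object, so no super() call here
...
>>> class C(A, B):
...     def hello(self):
...         print "hello from C"
...         super(C, self).hello()   # continues along the MRO to A
...
>>> C().hello()
hello from C
hello from A
hello from B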

Remember that Python resolves methods from left to right along the MRO, so with class C(A, B) a lookup finds A.hello() before B.hello().

Related

What are functions outside of a class called?

I couldn't figure out how to phrase this question for a search, so I could not find an answer.
We have instance methods, static methods, and class methods.
What are functions called when they don't belong to a class?
They're just called functions.
In python, "function" refers to a type of callable procedure/block of code with its own localized namespace.
In contrast, "method" refers specifically to a kind of function that is bound to a class. We use "instance methods", "static methods", and "class methods" to differentiate between how those functions are bound to their respective classes, but in any case we call them methods because they are bound to their class.
So, we just call them functions, unless we need something more specific. If you must use a qualifier, "unbound function" (alluding to the fact that it's not bound to any class), "module function" (alluding to the fact that it's defined in a module, though a module is not a class), "free function", or even "static function" (though this can be confusing for people who don't know the difference between functions and methods) will probably work.
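As a quick illustration (the name greet is just a made-up example), a plain module-level def produces an object Python itself classifies as a function:
>>> import types
>>> def greet():
...     print("hello")
...
>>> type(greet)
<class 'function'>
>>> isinstance(greet, types.FunctionType)
True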
According to the section "Callable types" in the Python docs on the data model:
A user-defined function object is created by a function definition
So I guess one could say that everything that begins with def is a function.
In general, I think it depends a lot on the context which term you want to use. For example, even though you write @staticmethod to define a "static method", it is called a "function" rather than a "method" in the context of the types module:
>>> class A:
...     def f(self):
...         print("Hello from method `f`")
...     @staticmethod
...     def g():
...         print("Hello from function `g`")
...
>>> a = A()
>>> a.f()
Hello from method `f`
>>> a.g()
Hello from function `g`
>>> type(a.f)
<class 'method'>
>>> type(a.g)
<class 'function'>
Furthermore, looking at the docs of Python's types module reveals:
types.MethodType - The type of methods of user-defined class instances.
So methods are only found in instances. A method b.f of an instance b of a class B refers to the function B.f of the class:
>>> class B:
...     def f(self):
...         pass
...
>>> b1 = B()
>>> type(b1.f)
<class 'method'>
>>> type(B.f)
<class 'function'>
The methods are different objects for each instance:
>>> b2 = B()
>>> b1.f is b2.f
False
However, the methods refer to the same function B.f:
>>> b1.f.__func__
<function B.f at 0x7f166e31b2f0>
>>> b1.f.__func__ is b2.f.__func__
True
I imagine this can be either useful or a pitfall, so it makes sense to know about it.
Here is an example, using a class C with function C.f and cache, an argument with a mutable default value:
>>> class C:
...     def f(self, cache=[]):
...         cache.append(cache[-1] + 1 if cache else 1)
...         print(cache)
...
>>> c1 = C()
>>> c2 = C()
>>> c1.f()
[1]
>>> c2.f()
[1, 2]
>>> c3 = C()
>>> c3.f()
[1, 2, 3]
As you can see, all instances c1, c2 and c3 of class C share the same underlying function C.f with its argument cache.
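If that sharing is unwanted, the usual fix is the None-default idiom; here is a minimal sketch (class D is a made-up variant of C above, not part of the original answer):
>>> class D:
...     def f(self, cache=None):
...         if cache is None:   # build a fresh list on every call instead of sharing one default
...             cache = []
...         cache.append(cache[-1] + 1 if cache else 1)
...         print(cache)
...
>>> d1 = D()
>>> d2 = D()
>>> d1.f()
[1]
>>> d2.f()
[1]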

How do I know how python core is working?

How do I know how core Python will behave? For example, see the code below:
class A(object):
    def a(self):
        print 111111

    def b(self):
        self.a()

class B(A):
    def a(self):
        print 222222
When I do
a = B()
a.b()
it is printing
222222
What could be the reason it did not print "111111"? If you say that 'self' is an object of class B, so it's calling its own method, then where is it defined that 'self' is an object of class B?
Where do I see the internal logic that makes it call the 'a' method from class B?
Every time I come across this logic I forget it and assume that it prints '111111', so is there any way I can check this internal Python behavior?
You are assuming that self.a refers to A.a just because it occurs in the definition of A.b. That's not true; the behavior of self.a is determined by the runtime type of self.
Since a is an instance of B, a.b() is the same as type(a).b(a), i.e. B.b(a). Since B.b is not defined, the attribute lookup falls back to A.b. Inside A.b, self is a, so self.a() is equivalent to type(a).a(a), i.e. B.a(a). As a result, A.a never gets called.
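If you want to verify this at the interactive prompt, a short sketch using the classes from the question (Python 2 syntax):
>>> a = B()
>>> type(a)                  # this is the type used for every self.<attr> lookup inside A.b
<class '__main__.B'>
>>> B.__mro__                # attribute lookup order: B first, then A
(<class '__main__.B'>, <class '__main__.A'>, <type 'object'>)
>>> a.b()                    # A.b runs, but self.a() resolves to B.a
222222
>>> A.a(a)                   # calling A.a explicitly still works
111111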

super confusing python multiple inheritance super()

I was playing around with multiple inheritance in python and I came across a situation whose behavior I can't understand.
Here is the inheritance layout:
  A   F
 / \  |
B   C |
 \  | /
  \ | /
    D
The ABCD diamond that everyone is familiar with, plus an extra class F that I threw in for fun.
Here is the code:
class A(object):
    def foo(self, call_from):
        print "foo from A, call from %s" % call_from
        super(A, self).foo("A")

class B(A):
    def foo(self, call_from):
        print "foo from B, call from %s" % call_from
        super(B, self).foo("B")

class C(A):
    def foo(self, call_from):
        print "foo from C, call from %s" % call_from
        super(C, self).foo("C")

class F(object):
    def foo(self, call_from):
        print "foo from F, call from %s" % call_from

class D(B, C, F):
    def foo(self):
        print "foo from D"
        super(D, self).foo("D")
output:
>>> d = D()
>>> d.foo()
foo from D
foo from B, call from D
foo from C, call from B
foo from A, call from C
foo from F, call from A
The method resolution order:
>>> D.__mro__
(<class '__main__.D'>, <class '__main__.B'>, <class '__main__.C'>, <class '__main__.A'>, <class '__main__.F'>, <type 'object'>)
Getting "foo from C, call from B" instead of "foo from C, call from D" surprised me, and "foo from F, call from A" just throws me off...
It seems like the super() calls are chained up according to the method resolution order and ignore the relationships between the classes, but I'm not sure.
Can someone point me in the right direction to understand this behavior?
Please keep in mind that I'm trying to understand the language itself, not solve a practical problem, so I don't have a use case for this. But it would be nice if someone could point out a use case :)
UPDATE:
To summarize: super() simply tells you what to call next based on the MRO, which is not necessarily the parent. While the MRO is built from the inheritance hierarchy, the MRO itself is not the inheritance hierarchy.
The whole point of super() is to follow the method resolution order. That's why you tell it your own class, not your parent class. It's hard for the programmer to predict which class will be invoked next, so you let super() take care of it.
You already had B called from D, so how could you then get C called from D? D.foo() can only call one other foo(), because you only have one function call there. It's going to be a linear chain of calls, so the classes have to be linearized, that's what the method resolution order does.
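You can see that linearization directly with the two-argument form of super(); a small sketch using the D instance from above (the string "manual" is just an arbitrary call_from value):
>>> d = D()
>>> super(B, d).foo("manual")   # looks up foo *after* B in D.__mro__, so it finds C.foo, not A.foo
foo from C, call from manual
foo from A, call from C
foo from F, call from A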
Occasionally I find it useful to call super() on a parent class. For example:
class TmpClass0(object):
    def tmp_method(self):
        print 'TmpClass0 tmp_method'

class TmpClass1(TmpClass0):
    def tmp_method(self):
        print 'TmpClass1 tmp_method'
Now I want to use TmpClass0's tmp_method from an instance of TmpClass2.
class TmpClass2(TmpClass1):
    def tmp_method(self):
        super(TmpClass1, self).tmp_method()
Result:
In [107]: tmp_class2 = TmpClass2()
In [108]: tmp_class2.tmp_method()
TmpClass0 tmp_method

Apply a method to an object of another class

Given two unrelated classes A and B, how do I call A.method with an object of B as self?
class A:
    def __init__(self, x):
        self.x = x

    def print_x(self):
        print self.x

class B:
    def __init__(self, x):
        self.x = x

a = A('spam')
b = B('eggs')

a.print_x()            # <-- spam
<magic>(A.print_x, b)  # <-- 'eggs'
In Python 3.x you can simply do what you want:
A.print_x(b) #<-- 'eggs'
If you only have an instance of 'A', then get the class first:
a.__class__.print_x(b) #<-- 'eggs'
In Python 2.x (which the OP uses) this doesn't work, as noted by the OP and explained by Amber in the comments:
This is a difference between Python 2.x and Python 3.x - methods in
3.x don't enforce being passed the same class.
More details (OP edit)
In python 2, A.print_x returns an "unbound method", which cannot be directly applied to other classes' objects:
When an unbound user-defined method object is called, the underlying function (im_func) is called, with the restriction that the first argument must be an instance of the proper class (im_class) or of a derived class thereof. >> http://docs.python.org/reference/datamodel.html
To work around this restriction, we first have to obtain the "raw" function from the method, via im_func or __func__ (2.6+), which can then be called with an arbitrary object as its first argument. This works on both classes and instances:
# python 2.5-
A.print_x.im_func(b)
a.print_x.im_func(b)
# python 2.6+
A.print_x.__func__(b)
a.print_x.__func__(b)
In python 3 there's no such thing as an unbound method anymore.
Unbound methods are gone for good. ClassObject.method returns an
ordinary function object, instance.method still returns a bound
method object. >> http://www.python.org/getit/releases/3.0/NEWS.txt
Hence, in python 3, A.print_x is just a function and can be called right away, while a.print_x is still a bound method and has to be unwrapped via __func__:
# python 3.0+
A.print_x(b)
a.print_x.__func__(b)
You don't (well, it's not that you can't throw enough magic at it to make it work, it's that you just shouldn't). If the function is supposed to work with more than one type, make it... a function.
# behold, the magic and power of duck typing
def fun(obj):
    print obj.x

class A:
    x = 42

class B:
    x = 69

fun(A())
fun(B())
I don't know why you would really want to do this, but it is possible:
>>> class A(object):
...     def foo(self):
...         print self.a
...
>>> class B(object):
...     def __init__(self):
...         self.a = "b"
...
>>> x = A()
>>> y = B()
>>> x.foo.im_func(y)
b
>>> A.foo.im_func(y)
b
An instance method (a class instance's bound method) has an attribute called im_func which refers to the actual function being called, without the instance/class binding. The class object's version of the method also has this attribute.
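If you actually want a reusable bound method on y, types.MethodType can build one by hand; a sketch under Python 2, where MethodType takes (function, instance, class):
>>> import types
>>> bound = types.MethodType(A.foo.im_func, y, B)   # bind A's underlying function to the B instance
>>> bound()
b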

Convert partial function to method in python

Consider the following (broken) code:
import functools

class Foo(object):
    def __init__(self):
        def f(a, self, b):
            print a + b
        self.g = functools.partial(f, 1)

x = Foo()
x.g(2)
What I want to do is take the function f and partially apply it, resulting in a function g(self,b). I would like to use this function as a method, however this does not currently work and instead I get the error
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    x.g(2)
TypeError: f() takes exactly 3 arguments (2 given)
Doing x.g(x, 2) works, however, so it seems the issue is that g is treated as a "normal" function instead of a method of the class. Is there a way to get x.g to behave like a method (i.e. implicitly pass the self parameter) instead of a plain function?
There are two issues at hand here. First, for a function to be turned into a method it must be stored on the class, not the instance. A demonstration:
class Foo(object):
    def a(*args):
        print 'a', args

def b(*args):
    print 'b', args
Foo.b = b

x = Foo()

def c(*args):
    print 'c', args
x.c = c
So a is a function defined in the class definition, b is a function assigned to the class afterwards, and c is a function assigned to the instance. Take a look at what happens when we call them:
>>> x.a('a will have "self"')
a (<__main__.Foo object at 0x100425ed0>, 'a will have "self"')
>>> x.b('as will b')
b (<__main__.Foo object at 0x100425ed0>, 'as will b')
>>> x.c('c will only receive this string')
c ('c will only receive this string',)
As you can see there is little difference between a function defined along with the class, and one assigned to it later. I believe there is actually no difference as long as there is no metaclass involved, but that is for another time.
The second problem comes from how a function is actually turned into a method in the first place: the function type implements the descriptor protocol. (See the docs for details.) In a nutshell, the function type has a special __get__ method which is invoked when attribute lookup finds the function on the class. Instead of handing you the function object itself, Python calls that function's __get__ method, which returns a bound method object (and the bound method is what supplies the self argument).
Why is this a problem? Because the functools.partial object is not a descriptor!
>>> import functools
>>> def f(*args):
...     print 'f', args
...
>>> g = functools.partial(f, 1, 2, 3)
>>> g
<functools.partial object at 0x10042f2b8>
>>> g.__get__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'functools.partial' object has no attribute '__get__'
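For contrast, a plain function does have __get__, and calling it by hand returns the same bound method object that normal attribute lookup would give you; a sketch (Python 2, with made-up names h and Bar):
>>> def h(*args):
...     print 'h', args
...
>>> class Bar(object):
...     pass
...
>>> bar = Bar()
>>> h.__get__(bar, Bar)         # the descriptor call that attribute lookup performs for you
<bound method Bar.h of <__main__.Bar object at 0x...>>
>>> h.__get__(bar, Bar)('hi')   # the bound method supplies self automatically
h (<__main__.Bar object at 0x...>, 'hi')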
There are a number of options you have at this point. You can explicitly supply the self argument to the partial:
import functools

class Foo(object):
    def __init__(self):
        def f(self, a, b):
            print a + b
        self.g = functools.partial(f, self, 1)

x = Foo()
x.g(2)
...or you could embed self and the value of a in a closure:
class Foo(object):
    def __init__(self):
        a = 1
        def f(b):
            print a + b
        self.g = f

x = Foo()
x.g(2)
These solutions are of course assuming that there is some as yet unspecified reason for assigning the method to the instance in the constructor like this, as you can very easily just define a method directly on the class to do what you are doing here.
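For completeness, a minimal sketch of the "define it directly on the class" route mentioned above, assuming the only goal is to fix a = 1 (the default-argument trick here is my assumption about what the question ultimately needs):
class Foo(object):
    def g(self, b, a=1):   # a is fixed by a default value instead of a partial
        print a + b

x = Foo()
x.g(2)   # prints 3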
Edit: Here is an idea for a solution assuming the functions may be created for the class, instead of the instance:
class Foo(object):
    pass

def make_binding(name):
    def f(self, *args):
        print 'Do %s with %s given %r.' % (name, self, args)
    return f

for name in 'foo', 'bar', 'baz':
    setattr(Foo, name, make_binding(name))

f = Foo()
f.foo(1, 2, 3)
f.bar('some input')
f.baz()
Gives you:
Do foo with <__main__.Foo object at 0x10053e3d0> given (1, 2, 3).
Do bar with <__main__.Foo object at 0x10053e3d0> given ('some input',).
Do baz with <__main__.Foo object at 0x10053e3d0> given ().
This will work, but I'm not sure if it is what you are looking for:
class Foo(object):
    def __init__(self):
        def f(a, self, b):
            print a + b
        self.g = functools.partial(f, 1, self)  # <= passing `self` also

x = Foo()
x.g(2)
This is simply a concrete example of what I believe is the most correct (and therefore most Pythonic) way to solve this, since the best solution (definition on a class!) was never spelled out; @MikeBoers's explanations are otherwise solid.
I've used this pattern quite a bit (recently for a proxied API), and it has survived untold production hours without the slightest irregularity.
from functools import update_wrapper
from functools import partial
from types import MethodType
class Basic(object):
    def add(self, **kwds):
        print sum(kwds.values())

Basic.add_to_one = MethodType(
    update_wrapper(partial(Basic.add, a=1), Basic.add),
    None,
    Basic,
)
x = Basic()
x.add(a=1, b=9)
x.add_to_one(b=9)
...yields:
10
10
...the key take-home point here is MethodType(func, inst, cls), which creates an unbound method from another callable (you can even use this to chain/bind instance methods to unrelated classes... when the result is bound and called, the original instance method will receive BOTH self objects!)
Note the exclusive use of keyword arguments! While there might be a better way to handle them, positional args are generally a PITA because the placement of self becomes less predictable. Also, in my experience anyway, using *args and **kwds in the bottom-most function has proven very useful later on.
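A tiny sketch of the "both self objects" remark above (classes P and Q are made up for illustration; Python 2, since MethodType takes three arguments there):
>>> from types import MethodType
>>> class P(object):
...     def who(self, other):
...         print self, other
...
>>> class Q(object):
...     pass
...
>>> p, q = P(), Q()
>>> Q.borrowed = MethodType(p.who, None, Q)   # attach P's *bound* method to Q as an unbound method
>>> q.borrowed()                              # p arrives as the original self, q as the next argument
<__main__.P object at 0x...> <__main__.Q object at 0x...>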
functools.partialmethod() has been available since Python 3.4 for this purpose. Note that, like a plain function, it only produces a bound method when it is stored on the class itself (it works through the descriptor protocol), so the example has to be reworked slightly; a Python 3 version:
import functools

class Foo(object):
    def f(self, a, b):
        print(a + b)

    # partialmethod must live on the class; self is still passed implicitly
    g = functools.partialmethod(f, 1)

x = Foo()
x.g(2)  # prints 3
