Python Parent/Child class method call

Python 2.7.6 on Linux.
I'm using a test class that inherits from a parent. The parent class holds a number of fields that are common to many child classes, and I need to call the parent setUp method to initialize the fields. Is calling ParentClass.setUp(self) the correct way to do this? Here's a simple example:
class RESTTest(unittest.TestCase):
    def setUp(self):
        self.host = host
        self.port = port
        self.protocol = protocol
        self.context = context

class HistoryTest(RESTTest):
    def setUp(self):
        RESTTest.setUp(self)
        self.endpoint = history_endpoint
        self.url = "%s://%s:%s/%s/%s" % (self.protocol, self.host, self.port, self.context, self.endpoint)

    def testMe(self):
        self.assertTrue(True)

if __name__ == '__main__':
    unittest.main()
Is this correct? It seems to work.

You would use super for that:
super(ChildClass, self).method(args)
class HistoryTest(RESTTest):
    def setUp(self):
        super(HistoryTest, self).setUp()
        ...
In Python 3 you may write:
class HistoryTest(RESTTest):
    def setUp(self):
        super().setUp()
        ...
which is simpler.
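For completeness, here is a minimal runnable Python 3 version of the example above (the host/port/protocol/context values are placeholders I invented, since the original leaves them undefined):
import unittest

class RESTTest(unittest.TestCase):
    def setUp(self):
        self.host = "localhost"    # placeholder value
        self.port = 8080           # placeholder value
        self.protocol = "http"     # placeholder value
        self.context = "api"       # placeholder value

class HistoryTest(RESTTest):
    def setUp(self):
        super().setUp()            # run the parent's setUp first
        self.endpoint = "history"  # placeholder value
        self.url = "%s://%s:%s/%s/%s" % (
            self.protocol, self.host, self.port, self.context, self.endpoint)

    def testMe(self):
        self.assertEqual(self.url, "http://localhost:8080/api/history")

if __name__ == '__main__':
    unittest.main()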
See this answer:
super() lets you avoid referring to the base class explicitly, which can be nice. But the main advantage comes with multiple inheritance, where all sorts of fun stuff can happen. See the standard docs on super if you haven't already.
Multiple inheritance
To (try to) answer the question in your comment:
How do you specify which super method you want to call?
From what I understand of the philosophy of multiple inheritance (in Python), you don't. super, along with the Method Resolution Order (MRO), should do things right and select the appropriate methods. (Yes, methods, plural: see below.)
There are a lot of blog posts / SO answers about this you can find with keywords "multiple inheritance", "diamond", "MRO", "super", etc. This article provides a Python 3 example I found surprising and didn't find in other sources:
class A:
    def m(self):
        print("m of A called")

class B(A):
    def m(self):
        print("m of B called")
        super().m()

class C(A):
    def m(self):
        print("m of C called")
        super().m()

class D(B, C):
    def m(self):
        print("m of D called")
        super().m()

D().m()
m of D called
m of B called
m of C called
m of A called
See? Both B.m() and C.m() are called thanks to super, which seems like the right thing to do considering D inherits from both B and C.
I suggest you play with this example as I just did. Adding a few prints, you'll see that, when calling D().m(), the super().m() statement in class B itself calls C.m(). Whereas, of course, if you call B().m() (on a B instance, not a D instance), the super().m() in B only reaches A.m(). In other words, super().m() in B is aware of the class of the instance it is dealing with and behaves accordingly.
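A quick way to see this for yourself (my addition, assuming the A/B/C/D classes above are defined):
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print([cls.__name__ for cls in B.__mro__])  # ['B', 'A', 'object']

D().m()  # m of D, m of B, m of C, m of A -- follows D's MRO
B().m()  # m of B, m of A -- C is never involved for a plain B instance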
Using super everywhere sounds like a silver bullet, but you need to make sure all classes in the inheritance hierarchy are cooperative (another keyword to dig for) and don't break the chain, for instance when child classes expect additional parameters; one common way to stay cooperative is sketched below.
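Here is a rough sketch of that cooperative style (class and parameter names are mine, for illustration): each __init__ takes **kwargs, consumes what it needs, and forwards the rest, so the chain survives multiple inheritance:
class Root:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # keeps the chain intact

class Host(Root):
    def __init__(self, host, **kwargs):
        self.host = host
        super().__init__(**kwargs)

class Port(Root):
    def __init__(self, port, **kwargs):
        self.port = port
        super().__init__(**kwargs)

class Server(Host, Port):
    def __init__(self, name, **kwargs):
        self.name = name
        super().__init__(**kwargs)

s = Server(name="history", host="localhost", port=8080)
print(s.name, s.host, s.port)  # history localhost 8080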

Related

How does calling super() with the 2-argument form differ from directly referencing the method and passing in "self" manually?

A bit of an odd question; consider the following code:
class A:
    def mymethod(self):
        return "mymethod from class A"

class B(A):
    def mymethod(self):
        return "mymethod from class B"

class C(B):
    # In this class I want "mymethod" to return the same string returned
    # from mymethod in the A class, with 2 exclamation points (!!) appended.
    # Obviously this is a simplified example with dumb logic.
    def mymethod(self):
        ss = super(B, self).mymethod()
        # OR
        ss = A.mymethod(self)
        return ss + " !!"
I am fairly inexperienced when it comes to OOP so excuse me if the answer is extremely obvious.
As far as I can see, there is no difference between these 2 approaches, and as such I fail to understand how super() can be useful inside class methods.
Using super(B, self) removes a level of dependency. If the definition of B is changed to use a different superclass, you'll automatically call the method from that class. If you hard-code A.mymethod(self), you'll need to update that when B is redefined.
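To make that concrete, a small sketch (my example, reusing class A from above): if B is later rebased onto a different parent, the super call follows automatically while the hard-coded call does not:
class NewA(A):
    def mymethod(self):
        return "mymethod from class NewA"

class B2(NewA):  # imagine B was redefined to inherit from NewA
    def mymethod(self):
        return "mymethod from class B2"

class C2(B2):
    def mymethod(self):
        return super(B2, self).mymethod() + " !!"

print(C2().mymethod())  # "mymethod from class NewA !!"
# A hard-coded A.mymethod(self) would still return "mymethod from class A !!"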

Can I call a base class method in my setUp function?

I am writing a Python script for the first time. Here is a basic question.
class TestLolSupv(TestSetTxFreqPL4App, TestCaseAppForceReset):
    def setUp(self):
        super().setUp()
        super().test_TX_SET_FREQ_PL4P2_A001()
Can I call a base class method directly in this way in my setUp function?
Also, if I inherit from more than one base class, does super().setUp() execute both setUp methods? If so, which one first?
super().setup() will search the parent classes for a setup method, from left to right, and execute the first one it finds. For example:
class A(object):
    def setup(self):
        print("A")

class B(object):
    def setup(self):
        print("B")

class C(A, B):
    def setup(self):
        super().setup()

c = C()
c.setup()
will print "A".
If the parent classes themselves inherit from other classes, those are searched in the same (MRO) order. For example:
class A(object):
    def setup(self):
        pass

class B(object):
    def setup(self):
        pass

class C(A):
    pass

class D(B):
    pass

class E(C, D):
    def setup(self):
        super().setup()

class F(D, C):
    def setup(self):
        super().setup()
Now for E the setup in A will be executed, and for F the setup in B will be executed.
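You can confirm the search order directly (my addition, assuming the classes above are defined):
print([cls.__name__ for cls in E.mro()])
# ['E', 'C', 'A', 'D', 'B', 'object'] -- A's setup is reached first
print([cls.__name__ for cls in F.mro()])
# ['F', 'D', 'B', 'C', 'A', 'object'] -- B's setup is reached first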
Yes, doing super().setUp() is the right way of calling the parent class's method.
Now, regarding multiple inheritance and method calling: Python will actually call only one class's method. With multiple inheritance, Python computes a Method Resolution Order (MRO), which is the order in which it checks the parent classes for the method. Once it finds the method in a parent, it stops looking (inheritance hierarchies can be big, so this matters).
In your example, the resolution order would be:
[<class '__main__.TestLolSupv'>, <class '__main__.TestSetTxFreqPL4App'>, <class '__main__.TestCaseAppForceReset'>, <class 'object'>]
You can check it by running TestLolSupv.mro().
If you want control over the order in which methods are called, I would suggest using delegation instead, as sketched below. Also, if you are using inheritance merely to get access to a parent class's methods, delegation would be better. Normally, inheritance pays off when the parent class calls a child class method, because that means you are building an abstraction.
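As a rough sketch of that delegation idea (all names here are invented for illustration), each helper is held as an attribute and called explicitly, so the order is under your control and no MRO is involved:
class FreqSetup:
    def setup(self):
        print("frequency configured")

class ResetSetup:
    def setup(self):
        print("device reset")

class TestLolSupv:
    def setup(self):
        self.freq = FreqSetup()
        self.reset = ResetSetup()
        self.freq.setup()   # explicit, unambiguous order
        self.reset.setup()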
Here is some info about multiple inheritance in Python.

Calling base class method after child class __init__ from base class __init__?

This is a feature I miss in several languages and wonder if anyone has any idea how it can be done in Python.
The idea is that I have a base class:
class Base(object):
    def __init__(self):
        self.my_data = 0

    def my_rebind_function(self):
        pass
and a derived class:
class Child(Base):
    def __init__(self):
        super().__init__()
        # Do some stuff here
        self.my_rebind_function()  # <==== This is the line I want to get rid of

    def my_rebind_function(self):
        # Do stuff with self.my_data
        pass
As can be seen above, I have a rebind function which I want called after Child.__init__ has done its job. And I want this for all inherited classes, so it would be great if it were performed by the base class, meaning I would not have to retype that line in every child class.
It would be nice if the language had a function like __finally__, operating similarly to how it works with exceptions: it would run after all __init__ functions (of all derived classes) have run. The call order would be something like:
Base1.__init__()
...
BaseN.__init__()
LeafChild.__init__()
LeafChild.__finally__()
BaseN.__finally__()
...
Base1.__finally__()
And then object construction is finished. This is also kind of similar to unit testing with setup, run and teardown functions.
You can do this with a metaclass, like this:
class Meta(type):
    def __call__(cls, *args, **kwargs):
        print("start Meta.__call__")
        instance = super().__call__(*args, **kwargs)
        instance.my_rebind_function()
        print("end Meta.__call__\n")
        return instance

class Base(metaclass=Meta):
    def __init__(self):
        print("Base.__init__()")
        self.my_data = 0

    def my_rebind_function(self):
        pass

class Child(Base):
    def __init__(self):
        super().__init__()
        print("Child.__init__()")

    def my_rebind_function(self):
        print("Child.my_rebind_function")
        # Do stuff with self.my_data
        self.my_data = 999

if __name__ == '__main__':
    c = Child()
    print(c.my_data)
By overriding Meta.__call__ you can hook in after all __init__ (and __new__) methods of the class tree have run, and before the instance is returned. This is the place to call your rebind function. To make the call order visible I added some print statements. The output will look like this:
start Meta.__call__
Base.__init__()
Child.__init__()
Child.my_rebind_function
end Meta.__call__
999
If you want to read on and get deeper into the details, I can recommend the following article: https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/
I may still not fully understand, but this seems to do what (I think) you want:
class Base(object):
    def __init__(self):
        print("Base.__init__() called")
        self.my_data = 0
        self.other_stuff()
        self.my_rebind_function()

    def other_stuff(self):
        """ empty """

    def my_rebind_function(self):
        """ empty """

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()

    def other_stuff(self):
        print("In Child.other_stuff() doing other stuff I want done in Child class")

    def my_rebind_function(self):
        print("In Child.my_rebind_function() doing stuff with self.my_data")

child = Child()
Output:
Base.__init__() called
In Child.other_stuff() doing other stuff I want done in Child class
In Child.my_rebind_function() doing stuff with self.my_data
If you want a "rebind" function to be invoked after each instantiation of a type that inherits from Base, then I would say this "rebind" function can live outside the Base class (or any class inheriting from it).
You can have a factory function that gives you the object you need when you invoke it (for example give_me_a_processed_child_object()). This factory function basically instantiates an object and does something to it before returning it to you.
Putting logic in __init__ is not a good idea because it obscures logic and intention. When you write kid = Child(), you don't expect many things to happen in the background, especially things that act on the instance of Child that you just created. What you expect is a fresh instance of Child.
A factory function, however, transparently does something to an object and returns it to you. This way you know you're getting an already-processed instance.
Finally, you wanted to avoid adding "rebind" methods to your Child classes, which you now can, since all of that logic can live in the factory function, as sketched below.
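A minimal sketch of such a factory (make_child is an invented name; it assumes the Base/Child classes from the question are defined):
def make_child():
    kid = Child()               # plain construction, nothing hidden
    kid.my_rebind_function()    # the post-__init__ step, done openly
    return kid

kid = make_child()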

In Python can one implement mixin behavior without using inheritance?

Is there a reasonable way in Python to implement mixin behavior similar to that found in Ruby -- that is, without using inheritance?
class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Foo(object):
    # Somehow mix in the behavior of the Mixin class,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
I had a vague idea to do this with a class decorator, but my attempts led to confusion. Most of my searches on the topic have led in the direction of using inheritance (or in more complex scenarios, multiple inheritance) to achieve mixin behavior.
def mixer(*args):
    """Decorator for mixing mixins"""
    def inner(cls):
        for a, k in ((a, k) for a in args for k, v in vars(a).items() if callable(v)):
            setattr(cls, k, getattr(a, k).im_func)
        return cls
    return inner

class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Mixin2(object):
    def d(self): print "d()"
    def e(self): print "e()"

@mixer(Mixin, Mixin2)
class Foo(object):
    # Mix in the behavior of the Mixin classes,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
f.d()
f.e()
print issubclass(Foo, Mixin)
Output:
a()
b()
c()
d()
e()
False
You can add the methods as functions:
Foo.b = Mixin.b.im_func
Foo.c = Mixin.c.im_func
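Note that im_func is Python 2 only. In Python 3, attributes looked up on a class are plain functions already, so the same trick (a sketch, my addition) is direct assignment:
Foo.b = Mixin.b
Foo.c = Mixin.c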
I am not that familiar with Python, but from what I know about Python metaprogramming, you could actually do it pretty much the same way it is done in Ruby.
In Ruby, a module basically consists of two things: a pointer to a method dictionary and a pointer to a constant dictionary. A class consists of three things: a pointer to a method dictionary, a pointer to a constant dictionary and a pointer to the superclass.
When you mix in a module M into a class C, the following happens:
an anonymous class α is created (this is called an include class)
α's method dictionary and constant dictionary pointers are set equal to M's
α's superclass pointer is set equal to C's
C's superclass pointer is set to α
In other words: a fake class which shares its behavior with the mixin is injected into the inheritance hierarchy. So, Ruby actually does use inheritance for mixin composition.
I left out a couple of subtleties above: first off, the module doesn't actually get inserted as C's superclass; it gets inserted as the superclass of C's singleton class. And secondly, if the mixin itself has mixed in other mixins, those also get wrapped into fake classes which are inserted directly above α, and this process is applied recursively, in case the mixed-in mixins in turn have mixins.
Basically, the whole mixin hierarchy gets flattened into a straight line and spliced into the inheritance chain.
AFAIK, Python actually allows you to change a class's superclass(es) after the fact (something Ruby does not allow), and it also gives you access to a class's __dict__ (again, something impossible in Ruby), so you should be able to implement this yourself, roughly as sketched below.
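A sketch of that Ruby-style splice in Python 3 (my addition; class names invented). One CPython quirk: a class whose only base is object cannot have __bases__ reassigned, hence the dummy Root base.
class Root:
    pass

class Mixin(Root):
    def b(self):
        print("b()")

class Foo(Root):
    def a(self):
        print("a()")

# Build a "fake class" sharing Mixin's method dictionary, then splice it
# into Foo's inheritance chain, as Ruby's include class is.
Fake = type('FooIncludeMixin', (Root,), dict(Mixin.__dict__))
Foo.__bases__ = (Fake,)

f = Foo()
f.a()                           # a()
f.b()                           # b() -- found via the spliced fake class
print(issubclass(Foo, Mixin))   # False: Foo inherits the fake, not Mixin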
EDIT: Fixed what could (and probably should) be construed as a bug. Now it builds a new dict and then updates that from the class's dict. This prevents mixins from overwriting methods that are defined directly on the class. The code was untested at first; it turned out to work fine except for a syntax error, now fixed. In retrospect, I decided that I don't like this approach (even after my further improvements) and much prefer my other solution, even if it is more complicated. The test code for that one applies here as well, but I won't duplicate it.
You could use a metaclass factory:
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}
    for mixin in reversed(inspect.getmro(Dummy)):
        d.update(mixin.__dict__)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            d.update(classdict)
            return super(WithMixins, meta).__new__(meta, classname, bases, d)

    return WithMixins
then use it like:
class Foo(object):
    __metaclass__ = add_mixins(Mixin1, Mixin2)
    # rest of the stuff
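(A side note of mine: __metaclass__ is Python 2 syntax. Under Python 3 the factory would be passed as a keyword instead, roughly as below, though the factory itself is written for Python 2 and may need tweaks.)
class Foo(metaclass=add_mixins(Mixin1, Mixin2)):
    pass  # rest of the stuff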
This one is based on the way it's done in Ruby, as explained by Jörg W Mittag. All of the code after if __name__ == '__main__' is test/demo code; there are actually only 13 lines of real code here.
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}

    # Now get all the class attributes. Use reversed so that conflicts
    # are resolved with the proper priority. This rules out the possibility
    # of the mixins calling methods from their base classes that get
    # overridden using super, but it is necessary for the subclass check
    # to fail. If that wasn't a requirement, we would just use Dummy above
    # (or use MI directly and forget all the metaclass stuff).
    for base in reversed(inspect.getmro(Dummy)):
        d.update(base.__dict__)

    # Create the mixin class. This should be equivalent to creating the
    # anonymous class in Ruby.
    Mixin = type('Mixin', (object,), d)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            # The check below prevents an inheritance cycle from forming,
            # which leads to a TypeError when trying to inherit from the
            # resulting class.
            if not any(issubclass(base, Mixin) for base in bases):
                # This should be the equivalent of setting the superclass
                # pointers in Ruby.
                bases = (Mixin,) + bases
            return super(WithMixins, meta).__new__(meta, classname, bases,
                                                   classdict)

    return WithMixins
if __name__ == '__main__':

    class Mixin1(object):
        def b(self): print "b()"
        def c(self): print "c()"

    class Mixin2(object):
        def d(self): print "d()"
        def e(self): print "e()"

    class Mixin3Base(object):
        def f(self): print "f()"

    class Mixin3(Mixin3Base): pass

    class Foo(object):
        __metaclass__ = add_mixins(Mixin1, Mixin2, Mixin3)
        def a(self): print "a()"

    class Bar(Foo):
        def f(self): print "Bar.f()"

    def test_class(cls):
        print "Testing {0}".format(cls.__name__)
        f = cls()
        f.a()
        f.b()
        f.c()
        f.d()
        f.e()
        f.f()
        print (issubclass(cls, Mixin1) or
               issubclass(cls, Mixin2) or
               issubclass(cls, Mixin3))

    test_class(Foo)
    test_class(Bar)
You could override the class's __getattr__ to check the mixin. The problem is that all methods of the mixin would always require an object of the mixin's type as their first parameter, so you would have to decorate __init__ as well to create a mixin object. I believe you could achieve this using a class decorator.
from functools import partial

class Mixin(object):
    @staticmethod
    def b(self): print "b()"
    @staticmethod
    def c(self): print "c()"

class Foo(object):
    def __init__(self, mixin_cls):
        self.delegate_cls = mixin_cls

    def __getattr__(self, attr):
        if hasattr(self.delegate_cls, attr):
            return partial(getattr(self.delegate_cls, attr), self)

    def a(self): print "a()"

f = Foo(Mixin)
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
This basically uses the Mixin class as a container for ad-hoc functions (not methods) that behave like methods by taking an object instance (self) as their first argument. __getattr__ redirects missing attribute lookups to these method-like functions.
This passes your simple tests, as shown below. But I cannot guarantee it will do all the things you want; make more thorough tests to be sure.
$ python mixin.py
a()
b()
c()
False
Composition? It seems like that would be the simplest way to handle this: either wrap your object in a decorator, or just import the methods as an object into your class definition itself. This is what I usually do: I put the methods that I want to share between classes in a file and then import the file. If I want to override some behavior, I import a modified file with the same method names under the same object name. It's a little sloppy, but it works.
For example, if I want the init_covers behavior from this file (bedg.py)
import cove as cov

def init_covers(n):
    n.covers.append(cov.Cover(set([n.id])))
    id_list = []
    for a in n.neighbors:
        id_list.append(a.id)
    n.covers.append(cov.Cover(set(id_list)))

def update_degree(n):
    for a in n.covers:
        a.degree = 0
        for b in n.covers:
            if a != b:
                a.degree += len(a.node_list.intersection(b.node_list))
In my bar class file I would do: import bedg as foo
Then, if I want to change my foo behaviors in another class that inherits from bar, I write:
import bild as foo
Like I say, it is sloppy.

Chain-calling parent initialisers in Python [duplicate]

Consider this - a base class A, class B inheriting from A, class C inheriting from B. What is a generic way to call a parent class initialiser in an initialiser? If this still sounds too vague, here's some code.
class A(object):
    def __init__(self):
        print "Initialiser A was called"

class B(A):
    def __init__(self):
        super(B, self).__init__()
        print "Initialiser B was called"

class C(B):
    def __init__(self):
        super(C, self).__init__()
        print "Initialiser C was called"

c = C()
This is how I do it now. But it still seems a bit too non-generic: you still must pass the correct type by hand.
Now, I've tried using self.__class__ as the first argument to super(), but obviously it doesn't work. If you put it in the initialiser for C, fair enough: B's initialiser gets called. But if you do the same in B, self still points to an instance of C, so you end up calling B's initialiser again (this ends in infinite recursion).
There is no need to think about diamond inheritance for now, I am just interested in solving this specific problem.
Python 3 includes an improved super() which allows use like this:
super().__init__(args)
The way you are doing it is indeed the recommended one (for Python 2.x).
The issue of whether the class is passed explicitly to super is a matter of style rather than functionality. Passing the class to super fits in with Python's philosophy of "explicit is better than implicit".
You can simply write:
class A(object):
    def __init__(self):
        print "Initialiser A was called"

class B(A):
    def __init__(self):
        A.__init__(self)
        # A.__init__(self, <parameters>) if you want to call with parameters
        print "Initialiser B was called"

class C(B):
    def __init__(self):
        # A.__init__(self)  # if you want to call the most-super class...
        B.__init__(self)
        print "Initialiser C was called"
