Use of super() with a non-immediate parent - Python

Is this a legal use of super()?
class A(object):
    def method(self, arg):
        pass

class B(A):
    def method(self, arg):
        super(B, self).method(arg)

class C(B):
    def method(self, arg):
        super(B, self).method(arg)
Thank you.

It will work, but it will probably confuse anyone trying to read your code (including you, unless you remember it specifically). Don't forget that if you want to call a method from a particular parent class, you can just do:
A.method(self, arg)

Well, "legal" is a questionable term here. The code will end up calling A.method, since the type given to super is excluded from the search. I would consider this usage of super flaky to say the least, since it skips a member of the inheritance hierarchy (seemingly haphazardly), which is inconsistent with what I would expect as a developer. Since users of super are already encouraged to be consistent, I'd recommend against this practice.
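To make the skipping concrete, here is a small sketch (the print calls are only illustrative):
class A(object):
    def method(self, arg):
        print("A.method")

class B(A):
    def method(self, arg):
        print("B.method")
        super(B, self).method(arg)

class C(B):
    def method(self, arg):
        # super(B, self) starts the MRO search *after* B, so B.method is skipped
        super(B, self).method(arg)

C().method(None)  # prints only "A.method"; B.method never runs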


super().__init__(...) and default values

Preamble
This is a rather basic question, I realize, but I haven't been able to find a sturdy reference for it, which would likely be a mixture of technical details and best practices for well-behaved classes.
Question
When a parent and child class both define the same initialization parameter, but with default values, what's the best way to get sane behavior when the child class is created?
My assumptions are:
Classes only accept named parameters; I don't need to deal with positional arguments. That simplifies many things, both theoretically in reasoning about situations and practically in taking arguments from external config files, etc.
__init__ methods may be more sophisticated than just setting self.foo = foo for their arguments - they may transform an argument before storing it, use it to set other parameters, etc., and I'd like to be as respectful of that as possible.
Subclasses never break the interfaces of their parents, both for __init__ parameters and for attributes. Having a different default value is not considered "breaking".
Classes should never have to be aware of their subclasses, they should just do things in "reasonable ways" and it's up to subclasses to ensure everything still works properly. However, it's sometimes necessary to modify a superclass to be "more reasonable" if it's doing things that aren't amenable to being subclassed - this can form a set of principles that help everyone get along well.
Examples
In general, my idea of a "best practice" template for a derived class looks like this:
class Child(Parent):
    def __init__(self, arg=1, **kwargs):
        self.arg = arg
        super().__init__(**kwargs)
That works well in most situations - deal with our stuff, then delegate all the rest to our superclass.
However, it doesn't work well if arg is shared by both Child and Parent - neither the caller's argument nor the Child default is respected:
class Parent:
    def __init__(self, arg=0):
        self.arg = arg

class Child(Parent):
    def __init__(self, arg=1, **kwargs):
        self.arg = arg
        super().__init__(**kwargs)

print(Child(arg=6).arg)
# Prints `0` - bad
A better approach is probably for Child to acknowledge that the argument is shared:
class Parent:
    def __init__(self, arg=0):
        self.arg = arg

class Child(Parent):
    def __init__(self, arg=1, **kwargs):
        super().__init__(arg=arg, **kwargs)

print(Child(arg=6).arg)
# Prints `6` - good
print(Child().arg)
# Prints `1` - good
That successfully gets the defaults working according to expectations. What I'm not sure of is whether this plays well with the expectations of Parent. So I think my questions are:
If Parent.__init__ does some Fancy Stuff with arg and/or self.arg, how should Child be set up to respect that?
In general does this require knowing Too Much about the internals of Parent and how self.arg is used? Or are there reasonable practices that everyone can follow to draw that part of the interface contract in a clean way?
Are there any specific gotchas to keep in mind?
Parent.__init__ only expects that the caller may choose to omit an argument for the arg parameter. It doesn't matter if any particular caller (Child.__init__, in this case) always provides an argument, nor does it matter how the caller produces the value it passes.
Your third example is what I would write, with the addition that Parent.__init__ itself also uses super().__init__: it doesn't assume that it's the end of whatever MRO is in force for its self argument.
class Parent:
    def __init__(self, arg=0, **kwargs):
        super().__init__(**kwargs)
        self.arg = arg

class Child(Parent):
    def __init__(self, arg=1, **kwargs):
        super().__init__(arg=arg, **kwargs)
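As a quick sanity check of that cooperative version (assuming nothing else in the MRO consumes arg):
print(Child(arg=6).arg)  # 6 - the caller's value wins
print(Child().arg)       # 1 - Child's default wins
print(Parent().arg)      # 0 - Parent's default still applies on its own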

Python 2.7: infinite loop when super __init__ creates an instance of its own subclass

I have the sense that this must be kind of a dumb question - noob here. So I'm open to an answer of the sort "This is ass-backwards, don't do it, please try this: [proper way]".
I'm using Python 2.7.5.
General Form of the Problem
This causes an infinite loop unless Thesaurus (an app-wide singleton) skips the call to Baseclass.__init__():
class Baseclass():
    def __init__(self):
        thes = Thesaurus()
        # do stuff

class Thesaurus(Baseclass):
    def __init__(self):
        Baseclass.__init__(self)
        # do stuff
My Specific Case
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps it should just be an interface). This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (e.g. treating 'yep' and 'ok' as synonyms for 'yes').
But since the subclass calls the superclass's __init__(), which in turn creates another instance of the subclass, loops ensue. Not calling the superclass's __init__() works just fine, but I'm concerned that's merely a lucky coincidence, and that my Thesaurus class may eventually be modified to require its parent's __init__().
Advice?
Well, I'm going to set your code aside and just base my answer on what you say:
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps should just be an interface).
This would be ThesaurusBase in the code below.
This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (ie. {'yes':'yep', 'ok'}).
That would be ThesaurusSingleton, which you can give a better name and make actually useful.
class ThesaurusBase():
    def __init__(self, singleton=None):
        self.singleton = singleton
    def mymethod1(self):
        raise NotImplementedError
    def mymethod2(self):
        raise NotImplementedError

class ThesaurusSingleton(ThesaurusBase):
    def mymethod1(self):
        return "meaw!"

class Thesaurus(ThesaurusBase):
    def __init__(self, singleton=None):
        ThesaurusBase.__init__(self, singleton)
    def mymethod1(self):
        return "quack!"
    def mymethod2(self):
        return "\\_o<"
now you can create your objects as follows:
singleton = ThesaurusSingleton()
thesaurus = Thesaurus(singleton)
edit:
Basically, what I've done here is build a "Base" class that is just an interface defining the expected behavior for all its child classes. The class ThesaurusSingleton (I know that's a terrible name) also implements that interface, because you said it had to, and I did not want to question your design; you may well have good reasons for weird constraints.
And finally, do you really need to instantiate your singleton inside the class that is defining the singleton object? Though there may be some hackish way to do so, there's often a better design that avoids the "hackish" part.
However you create your singleton, it's better to do it explicitly. That's in the "Zen of Python": explicit is better than implicit. Why? Because then people reading your code (and that might be you in six months) will be able to understand what's happening and what you were thinking when you wrote it. If you make things more implicit (like using sophisticated metaclasses and weird self-inheritance), you may be wondering what that code does in less than three weeks!
I'm not telling you to avoid those kinds of options, just to reach for the sophisticated stuff only when you're out of simple options!
Based on what you said, I think the solution I gave can be a starting point. But since you focus on some obscure, not very useful hackish detail instead of talking about your design, I can't be sure whether my example is really appropriate, or give you hints on the design itself.
edit2:
There's an another way to achieve what you say you want (but be sure that's really the design you want). You may want to use a class method that will act on the class itself (instead of the instances) and thus enable you to store a class-wide instance of itself:
>>> class ThesaurusBase:
...     @classmethod
...     def initClassWide(cls):
...         cls._shared = cls()
...
>>> class T(ThesaurusBase):
...     def foo(self):
...         print self._shared
...
>>> ThesaurusBase.initClassWide()
>>> t = T()
>>> t.foo()
<__main__.ThesaurusBase instance at 0x7ff299a7def0>
You can call the initClassWide method at module level in the module where you declare ThesaurusBase, so whenever you import that module, the singleton will be loaded (the import mechanism ensures that Python modules are run only once).
The short answer is:
do not instantiate an instance of a subclass from the superclass's constructor
Longer answer:
If your motive for doing this is that Thesaurus is a singleton, you'll be better off exposing the singleton through a static (or class) method on Thesaurus and calling that method whenever you need the singleton, as sketched below.
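A minimal sketch of that idea, using a classmethod (the shared() name is my own choice, and Thesaurus is deliberately kept outside the Baseclass hierarchy so that constructing it cannot re-enter Baseclass.__init__):
class Thesaurus(object):
    _shared = None

    @classmethod
    def shared(cls):
        # lazily create one instance and hand the same one back every time
        if cls._shared is None:
            cls._shared = cls()
        return cls._shared

class Baseclass(object):
    def __init__(self):
        self.thes = Thesaurus.shared()
        # do stuff with self.thes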

Calling a parent class with multiple inheritance in Python

I'm apologizing in advance if this question was already answered, I just couldn't find it.
When using multiple inheritance, how can I use a method of a specific parent?
Let's say I have something like this
class Ancestor:
    def gene(self):
        ...

class Dad(Ancestor):
    def gene(self):
        ...

class Mom(Ancestor):
    def gene(self):
        ...

class Child(Dad, Mom):
    def gene(self):
        if dad_is_dominant:
            ...  # call Dad's gene
        else:
            ...  # call Mom's gene
How can I do that? super() doesn't have an option to specify which parent to use.
Thanks!
Edit: Forgot to mention an extremely important detail - the methods are of the same name and are overridden. Sorry, and thanks again!
That's not what super is for. super is just meant to call the next item in the inheritance hierarchy, whatever it is - in other words, it's supposed to be used when you don't know or care what that hierarchy is.
For your case, you probably just want to call the method directly. But note that you don't actually need to deal with ancestors at all, because methodA and methodB are not overridden anyway: so you can just call them on self:
if whatever:
    self.methodA()
else:
    self.methodB()
If you are in the situation where you have overridden methods, you will need to specify the ancestors:
class C(A, B):
    def methodA(self):
        if whatever:
            A.methodA(self)
        else:
            B.methodA(self)
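Applied to the gene example from the question, that might look like the following (the dad_is_dominant flag is just a stand-in for whatever condition you actually use):
class Ancestor(object):
    def gene(self):
        print("ancestor gene")

class Dad(Ancestor):
    def gene(self):
        print("dad gene")

class Mom(Ancestor):
    def gene(self):
        print("mom gene")

class Child(Dad, Mom):
    def __init__(self, dad_is_dominant=True):
        self.dad_is_dominant = dad_is_dominant
    def gene(self):
        if self.dad_is_dominant:
            Dad.gene(self)
        else:
            Mom.gene(self)

Child(dad_is_dominant=False).gene()  # prints "mom gene"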

Creating Python classes with arbitrarily substituted attribute names

I apologize for not giving this question a better title; the reason that I am posting it is that I don't even have the correct terminology to know what I am looking for.
I have defined a class with an attribute 'spam':
class SpamClass(object):
    def __init__(self, arg):
        self.spam = arg
    def __str__(self):
        return self.spam
I want to create a (sub/sibling?)class that has exactly the same functionality, but with an attribute named 'eggs' instead of 'spam':
class EggsClass(object):
    def __init__(self, arg):
        self.eggs = arg
    def __str__(self):
        return self.eggs
To generalize, how do I create functionally-identical classes with arbitrary attribute names? When the class has complicated behavior, it seems silly to duplicate code.
Update: I agree that this smells like bad design. To clarify, I'm not trying to solve a particular problem in this stupid way. I just want to know how to arbitrarily name the (non-magic) contents of an object's __dict__ while preserving functionality. Consider something like the keys() method for dict-like objects. People create various classes with keys() methods that behave according to convention, and the naming convention is a Good Thing. But the name is arbitrary. How can I make a class with a spam() method that exactly replaces keys() without manually substituting /keys/spam/ in the source?
Overloading __getattr__ and friends to reference the generic attribute seems inelegant and brittle to me. If a subclass reimplements these methods, it must accommodate this behavior. I would rather have it appear to the user that there is simply a base class with a named attribute that can be accessed naively.
Actually, I can think of a plausible use case. Suppose that you want a mixin class that confers a special attribute and some closely related methods that manipulate or depend upon this attribute. A user may want to name this special attribute differently for different classes (to match names in the real-world problem domain or to avoid name collisions) while reusing the underlying behavior.
Here is a way to get the effect I think you want.
Define a generic class with a generic attribute name. Then, in each subclass, follow the advice in http://docs.python.org/reference/datamodel.html#customizing-attribute-access to make the attribute look, externally, like it has whatever name you want.
Your description of what you do feels like it has a "code smell" to me, I'd suggest reviewing your design very carefully to see whether this is really what you want to do. But you can make it work with my suggestion.
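A minimal sketch of that suggestion, using __getattr__ from the customizing-attribute-access machinery (the _GenericBase and public_name names are placeholders of my own):
class _GenericBase(object):
    # subclasses choose the public-facing name; the storage name stays fixed
    public_name = 'value'

    def __init__(self, arg):
        self._value = arg

    def __getattr__(self, name):
        # only invoked when normal lookup fails, so _value itself is unaffected
        if name == self.public_name:
            return self._value
        raise AttributeError(name)

    def __str__(self):
        return self._value

class SpamClass(_GenericBase):
    public_name = 'spam'

class EggsClass(_GenericBase):
    public_name = 'eggs'

print(SpamClass('lovely').spam)  # lovely
print(EggsClass('fried').eggs)   # fried
Writes through the public name would need a matching __setattr__, which is omitted here.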
You can also create a super-class with all common stuff and then sub-classes with specific attributes.
Or even:
class SuperClass(object):
    specific_attribute = 'unset'
    def __init__(self, arg):
        setattr(self, self.specific_attribute, arg)
    def __str__(self):
        return getattr(self, self.specific_attribute)

class EggClass(SuperClass):
    specific_attribute = 'eggs'
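A quick check of that version:
print(EggClass('fried').eggs)  # fried
print(EggClass('fried'))       # fried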
Have you considered not overcomplicating things and just creating one class (since they are identical anyway)?
class FoodClass(object):
    def __init__(self, foodname, arg):
        self.attrs = {foodname: arg}
        self.foodname = foodname
    def __str__(self):
        return self.attrs[self.foodname]
If you want some nice constructors, just create them separately:
def make_eggs(arg):
    return FoodClass('eggs', arg)

def make_spam(arg):
    return FoodClass('spam', arg)
To create attributes at runtime, just add them with self.__dict__['foo'] = "I'm foo" in the class code.

What are the elegant ways to do MixIns in Python?

I need to find an elegant way to do 2 kinds of MixIns.
First:
class A(object):
    def method1(self):
        do_something()
Now, a MixInClass should make method1 do this: do_other() -> A.method1() -> do_smth_else() - i.e. basically "wrap" the original function. I'm pretty sure there must be a good solution to this.
Second:
class B(object):
    def method1(self):
        do_something()
        do_more()
In this case, I want MixInClass2 to be able to inject itself between do_something() and do_more(), i.e.: do_something() -> MixIn.method1 -> do_more(). I understand that probably this would require modifying class B - that's ok, just looking for simplest ways to achieve this.
These are pretty trivial problems and I actually solved them, but my solution is tainted.
The first one by using self._old_method1 = self.method1; self.method1 = self._new_method1 and writing a _new_method1() that calls _old_method1().
Problem: multiple MixIns will all rebind _old_method1, and it is inelegant.
The second one was solved by creating a dummy method call_mixin(self): pass, injecting a call to it between do_something() and do_more(), and overriding self.call_mixin(). Again inelegant, and it will break with multiple MixIns.
Any ideas?
Thanks to Boldewyn, I've found an elegant solution to the first one (I'd forgotten you can create decorators on the fly, without modifying the original code):
class MixIn_for_1(object):
    def __init__(self):
        self.method1 = self.wrap1(self.method1)
        super(MixIn_for_1, self).__init__()
    def wrap1(self, old):
        def method1():
            print "do_other()"
            old()
            print "do_smth_else()"
        return method1
Still searching for ideas for the second one (this idea won't fit, since I need to inject inside the old method, not around it like in this case).
The solution for the second one is below, replacing "pass_func" with lambda: 0.
I think that can be handled in quite a Pythonic way using decorators (PEP 318, too).
Here is another way to implement MixInClass1 and MixInClass2:
Decorators are useful when you need to wrap many functions. Since MixInClass1 needs to wrap only one function, I think it is clearer to monkey-patch:
Using double underscores for __old_method1 and __method1 plays a useful role in MixInClass1. Because of Python's name-mangling convention, the double underscores localize these attributes to MixInClass1 and allow you to use the very same attribute names in other mix-in classes without causing unwanted name collisions.
class MixInClass1(object):
    def __init__(self):
        self.__old_method1, self.method1 = self.method1, self.__method1
        super(MixInClass1, self).__init__()
    def __method1(self):
        print "pre1()"
        self.__old_method1()
        print "post1()"

class MixInClass2(object):
    def __init__(self):
        super(MixInClass2, self).__init__()
    def method1_hook(self):
        print('MixIn method1')

class Foo(MixInClass2, MixInClass1):
    def method1(self):
        print "do_something()"
        getattr(self, 'method1_hook', lambda *args, **kw: None)()
        print "do_more()"

foo = Foo()
foo.method1()
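If I'm reading the MRO correctly, running that last snippet should print:
pre1()
do_something()
MixIn method1
do_more()
post1()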
