Multiple function forms in the template pattern - Python

I wanted to ask what the best way is to implement the template pattern when the template method can take multiple forms (I guess it wouldn't strictly be the template pattern then).
Let's say I have an abstract class with one abstract method and a few concrete methods:
from abc import ABC, abstractmethod
from typing import Any

class TemplateClass(ABC):
    def __init__(self, my_client):
        self.client = my_client

    def run(self) -> Any:
        self._step1()
        self._step2()
        self._step3()
        self._execute_specific_logic()

    @abstractmethod
    def _execute_specific_logic(self) -> Any:
        raise NotImplementedError

    def _step1(self):
        pass

    def _step2(self):
        pass

    def _step3(self):
        pass
And I want to create about 10 classes that will inherit TemplateClass, but:
7 of them should have a run method with all steps
2 of them should have a run method with only _step2 and _step3
1 of them should have a run method with only _step1
I was wondering about different ways to implement such logic:
implementing a different run method for every case - run, run_without_step1, run_without_step2_and_step3
adding flag arguments to the run method such as is_step_x_required, with True as the default, and passing False in the subclass method when needed
overriding the run method in subclasses when needed
using some kind of mixin class?
I would really appreciate any advice on this issue.

All the techniques you list are reasonable. My first inclination was towards the "least" amount of boilerplate:
from abc import ABC

class Stepper(ABC):
    def __init__(self, run_steps):
        self._run_steps = run_steps
        self._steps = [self._step1, self._step2, self._step3]

    def run(self):
        for step in self._run_steps:
            self._steps[step - 1]()  # steps are numbered from 1

    # Step implementations (or stubs), as in TemplateClass above.
    def _step1(self): ...
    def _step2(self): ...
    def _step3(self): ...

class Only23Stepper(Stepper):
    def __init__(self):
        super().__init__(run_steps=[2, 3])
For more flexibility, but more boilerplate, one may override run() to explicitly specify the steps.
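As a rough sketch of that option (the subclass name OnlyStep1Stepper is just illustrative, building on the Stepper class above):

class OnlyStep1Stepper(Stepper):
    def __init__(self):
        super().__init__(run_steps=[])  # unused; run() is overridden below

    def run(self):
        # The control flow is spelled out here instead of being
        # driven by the run_steps list.
        self._step1()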
However, we may want to express this in a different way than with classes. It is well known that inheritance is evil. It is possible that the reason we're running into design issues and potential inflexibility is that this could be expressed more simply using plain old functions.
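For instance, a minimal sketch of the plain-function alternative (the step names are placeholders):

def step1(): ...
def step2(): ...
def step3(): ...

def run(steps):
    # Run an explicit sequence of step functions.
    for step in steps:
        step()

# Each "class" collapses into a plain list of the steps it needs.
run([step1, step2, step3])  # the full pipeline
run([step2, step3])         # skip step 1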

Related

Python inheritance - add argument to parent method

I have a base class with a run function. For example:
from abc import abstractmethod

class A:
    @abstractmethod
    def run(self, steps):
        ...
It is possible to define class B with more arguments to the run method.
class B(A):
def run(self, steps, save):
...
Working with typing, I can specify whether a function takes either A or B as an argument. By specifying that the function takes A, I say that I only need the basic interface of run, while specifying B says I need the extended one.
The purpose of this design is to declare a base interface that all the children share, while each one can have an extended API.
This is impossible in many other languages. Hence I wonder: is it an anti-pattern? Is it something legitimate to do?
In Python you can do something like the following.
class A:
    def run(self, steps):
        print("Using class A's run.")
        print(f"steps are {steps}")

class B(A):
    def run(self, steps, other_arg=None):
        if other_arg:
            print("Using class B's override.")
            print(f"steps are {steps}")
        else:
            # Use parent's run logic instead.
            super().run(steps)

x = B()
x.run(100)
x.run(30, other_arg="something")
# Using class A's run.
# steps are 100
# Using class B's override.
# steps are 30
Now, should you do this? There is a time and a place. You can get into trouble as well. Imagine you break the interface of the core object you're inheriting from, so the core object loses its abstraction value. You'd have been better off having two objects, or rewriting your abstraction to be more robust to the differences in the objects you wish to represent.
Edit: Note that the original question changed to make the base run method abstract. The solution posted here is mostly invalidated by that.

Call specific method from parent class in multiple inheritance - Python

I have one class with multiple inheritance. I would like to concatenate the output from some parents' methods that share the same name. Ideally, I would be able to do this without going through all parent classes, but by explicitly selecting the ones I want.
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return <my_class1.common_method()> + <my_class2.common_method()>
Since you don't want to use the language mechanisms to call super-methods (which are designed to go through all the methods in the superclasses, even ones that are not known at the time the code is written), just call the methods explicitly on the classes you want - by using the class name.
The only thing that has to be done differently is that you have to call the method from the class, not from the instance, and then insert the instance manually as the first parameter. Python's automatic self reference is only good when calling the method on the most derived subclass (from which point, in a more common design, it would use super to run its counterparts in the superclasses).
For your example to work, you simply have to write it like this:
class my_class1:
    def common_method(self): return ['dependency_1']

class my_class2:
    def common_method(self): return ['dependency_2']

class my_class3:
    def whatever(self): return 'ANYTHING'

class composite(my_class1, my_class2, my_class3):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)
Note, however, that if any of the common_methods were to call super().common_method in a common ancestor base, that super-method would run once for each explicit invocation of a subclass's .common_method.
If you wanted to specialize that, it would be tough to do.
In other words: a "super" counterpart that would allow you to specify which superclasses to visit when calling the method, and ensure any super-method called by those runs only once - that is feasible, but complicated and error prone. If you can use explicit classes like in this example, it is 100 times simpler.
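To make the pitfall concrete, here is a minimal sketch (the Base class is illustrative, not from the question) where both siblings cooperate with super(); each explicit call restarts the MRO walk:

class Base:
    def common_method(self):
        print("Base ran")  # ideally this would run only once
        return []

class my_class1(Base):
    def common_method(self):
        return super().common_method() + ['dependency_1']

class my_class2(Base):
    def common_method(self):
        return super().common_method() + ['dependency_2']

class composite(my_class1, my_class2):
    def do_something_important(self):
        return my_class1.common_method(self) + my_class2.common_method(self)

print(composite().do_something_important())
# Base ran
# Base ran
# ['dependency_2', 'dependency_1', 'dependency_2']

Note that Base runs twice, and my_class1's super() call even drags in my_class2 first, because super() follows composite's MRO rather than my_class1's own bases.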

pythonic way to expose user override hooks

Note: although my particular use is Flask related, I think the question is more general.
I am building a Flask web application meant to be customized by the user. For example, the user is expected to provide a concrete subclass of a DatabaseInterface and may add to the list of certain ModelObjects that the application knows how to handle.
What is the best way to expose the various hooks to users, and indicate required and optional status? 'Best' here primarily means most 'pythonic', or "easiest for python users to grasp", but other criteria like not causing headaches down the road are certainly worth mentioning.
Some approaches I've considered:
Rely solely on documentation
Create a template file with documented overrides, much like default config files for many servers. E.g.
app = mycode.get_app()
##Add your list of extra foo classes here
#app.extra_foos = []
Create a UserOverrides class with an attr/method for each of the hooks; possibly split into RequiredOverrides and OptionalOverrides
Create an empty class with unimplemented methods that the user must subclass into a concrete instance
One method is to use abstract base classes (the abc module). For example, you can define an ABC with abstract methods that must be overridden by child classes, like this:
from abc import ABC, abstractmethod

class MyClass(ABC):  # inherit from ABC
    def __init__(self):
        pass

    @abstractmethod
    def some_method(self, args):
        # must be overridden by child class
        pass
You would then implement a child class like:
class MyChild(MyClass):
    # uses parent's __init__ by default
    def some_method(self, args):
        # overrides the abstract method
        ...
You can specify what everything needs to do in the overridden methods with documentation. There are also decorators for abstract properties, class methods, and static methods. Attempting to instantiate an ABC that does not have all of its abstract methods/properties overridden will result in an error.
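For instance, a minimal sketch of those combinations (the class and method names are illustrative); since Python 3.3, abstractmethod stacks under the property, classmethod, and staticmethod decorators:

from abc import ABC, abstractmethod

class Hooks(ABC):
    @property
    @abstractmethod
    def name(self):
        # subclasses must expose a name property
        ...

    @classmethod
    @abstractmethod
    def from_config(cls, config):
        # subclasses must provide this alternate constructor
        ...

    @staticmethod
    @abstractmethod
    def validate(value):
        # subclasses must provide this validator
        ...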
Inheritance. Is. Bad.
This is especially true in Python, which gives you a nice precedent to avoid the issue. Consider the following code:
len({1,2,3}) # set with length 3
len([1,2,3]) # list with length 3
len((1,2,3)) # tuple with length 3
Which is cool and all for the built-in data structures, but what if you want to make your own data structure and have it work with Python's len? Simple:
class Duple(object):
    def __init__(self, fst, snd):
        super(Duple, self).__init__()
        self.fst = fst
        self.snd = snd

    def __len__(self):
        return 2
A Duple is a two-element (only) data structure (calling it with more or fewer arguments raises) and now works with len:
len(Duple(1,2)) # 2
Which is exactly how you should do this:
def foo(arg):
    return arg.__foo__()
Any class that wants to work with your foo function just implements the __foo__ magic method, which is how len works under the hood.
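To make that concrete, a minimal sketch (Widget and __foo__ are hypothetical names, not a real protocol):

class Widget:
    def __foo__(self):
        return "widget-specific foo behaviour"

def foo(arg):
    return arg.__foo__()

print(foo(Widget()))  # widget-specific foo behaviour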

How to make a file-like class work with "isinstance(cls, io.IOBase)"?

It seems that checking isinstance(..., io.IOBase) is the 'correct' way to determine if an object is 'file-like'.
However, when defining my own file-like class, it doesn't seem to work:
import io

class file_like():
    def __init__(self):
        pass

    def write(self, line):
        print("Written:", line)

    def close(self):
        pass

    def flush(self):
        pass

print(isinstance(file_like(), io.IOBase))
# Prints 'False'
How can I make it work?
isinstance(obj, some_class) just walks up obj's inheritance chain, looking for some_class. Thus isinstance(file_like(), io.IOBase) will be False, as your file_like class doesn't have io.IOBase in its ancestry. file_like doesn't designate an explicit parent, hence it implicitly inherits only from object. That's the only class - besides file_like itself - that will test positive for a file_like instance with isinstance().
What you are doing in file_like is defining the methods expected on a file-like object while not inheriting from any particular "file-like" class. This approach is called duck-typing, and it has many merits in dynamic languages, although it's more popular in others (e.g. Ruby) than Python. Still, if whatever you're providing your file_like instance to follows duck-typing, it should work, provided your file_like does in fact "quack like a file", i.e. behaves sufficiently like a file to not cause errors upon usage at the receiving end.
Of course, if the receiving end is not following duck-typing, for example tries to check types by isinstance() as you do here, this approach will fail.
Finally, a small stylistic nit: don't put empty parens on a class if it doesn't inherit anything explicitly. They are redundant.
Checking isinstance(something, io.IOBase) only checks whether something is an instance of io.IOBase or of a class derived from it, so I don't understand where you got the mistaken idea that it's the "correct" way to determine if an object is "file-like".
A different way to do it is with an Abstract Base Class. Python has a number of built-in ones, but currently doesn't have one for "file-like" that could be used with isinstance(). However, you can define your own using the abc module, as outlined in PEP 3119.
The Python Module of the Week website has a good explanation of using the abc module to do things like this. And this highly rated answer to the question Correct way to detect sequence parameter? shows a similar way of defining your own ABC.
To illustrate applying it to your case, you could define an ABC like this with all its methods abstract — thereby forcing derived classes to define all of them in order to be instantiated:
from abc import ABCMeta, abstractmethod

class ABCFileLike(metaclass=ABCMeta):
    @abstractmethod
    def __init__(self): pass

    @abstractmethod
    def write(self, line): pass

    @abstractmethod
    def close(self): pass

    @abstractmethod
    def flush(self): pass
You could then derive your own concrete classes from it, making sure to supply implementations of all the abstract methods. (If you don't define them all, then a TypeError will be raised on any attempt to instantiate it.)
class FileLike(ABCFileLike):
    """ Concrete implementation of a file-like class.
        (Meaning all the abstract methods have an implementation.)
    """
    def __init__(self):
        pass

    def write(self, line):
        print("Written:", line)

    def close(self):
        pass

    def flush(self):
        pass
print(isinstance(FileLike(), ABCFileLike)) # -> True
You can even add existing classes as virtual subclasses by registering them with the new ABC:
import io
print(isinstance(io.IOBase(), ABCFileLike)) # -> False
ABCFileLike.register(io.IOBase)
print(isinstance(io.IOBase(), ABCFileLike)) # -> True

Python 2.7: infinite loop when super __init__ creates an instance of its own subclass

I have the sense that this must be kind of a dumb question - noob here. So I'm open to an answer of the sort "This is ass-backwards, don't do it, please try this: [proper way]".
I'm using Python 2.7.5.
General Form of the Problem
This causes an infinite loop unless Thesaurus (an app-wide singleton) skips calling Baseclass.__init__():
class Baseclass():
    def __init__(self):
        thes = Thesaurus()
        # do stuff

class Thesaurus(Baseclass):
    def __init__(self):
        Baseclass.__init__(self)
        # do stuff
My Specific Case
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps it should just be an interface). This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (i.e. {'yes': ['yep', 'ok']}).
But since the subclass calls the superclass's __init__(), which in turn creates another instance of the subclass, an infinite loop ensues. Not calling the superclass's __init__() works just fine, but I'm concerned that's merely a lucky coincidence, and that my Thesaurus class may eventually be modified to require its parent's __init__().
Advice?
Well, I'm going to stop looking at your code and just base my answer on what you say:
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps should just be an interface).
this would be ThesaurusBase in the code below
This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (ie. {'yes':'yep', 'ok'}).
That would be ThesaurusSingleton, which you can give a better name and make actually useful.
class ThesaurusBase():
    def __init__(self, singleton=None):
        self.singleton = singleton

    def mymethod1(self):
        raise NotImplementedError

    def mymethod2(self):
        raise NotImplementedError

class ThesaurusSingleton(ThesaurusBase):
    def mymethod1(self):
        return "meaw!"

class Thesaurus(ThesaurusBase):
    def __init__(self, singleton=None):
        ThesaurusBase.__init__(self, singleton)

    def mymethod1(self):
        return "quack!"

    def mymethod2(self):
        return "\\_o<"
now you can create your objects as follows:
singleton = ThesaurusSingleton()
thesaurus = Thesaurus(singleton)
edit:
Basically, what I've done here is build a "Base" class that is just an interface defining the expected behavior for all its child classes. The ThesaurusSingleton class (I know that's a terrible name) also implements that interface, because you said it had to, and I did not want to discuss your design; you may well have good reasons for weird constraints.
And finally, do you really need to instantiate your singleton inside the class that is defining the singleton object? Though there may be some hackish way to do so, there's often a better design that avoids the "hackish" part.
What I think is that however you create your singleton, you should better do it explicitly. That's in the "Zen of python": explicit is better than implicit. Why? because then people reading your code (and that might be you in six months) will be able to understand what's happening and what you were thinking when you wrote that code. If you try to make things more implicit (like using sophisticated meta classes and weird self-inheritance) you may wonder what this code does in less than three weeks!
I'm not telling you to avoid those kinds of options, but to use the sophisticated stuff only when you've run out of simple ones!
Based on what you said, I think the solution I gave can be a starting point. But as you focus on some obscure, not very useful hackish stuff instead of talking about your design, I can't be sure my example is appropriate, or give you hints on the design.
edit2:
There's an another way to achieve what you say you want (but be sure that's really the design you want). You may want to use a class method that will act on the class itself (instead of the instances) and thus enable you to store a class-wide instance of itself:
>>> class ThesaurusBase:
...     @classmethod
...     def initClassWide(cls):
...         cls._shared = cls()
...
>>> class T(ThesaurusBase):
...     def foo(self):
...         print self._shared
...
>>> ThesaurusBase.initClassWide()
>>> t = T()
>>> t.foo()
<__main__.ThesaurusBase instance at 0x7ff299a7def0>
and you can call the initClassWide method at the module level of where you declare ThesaurusBase, so whenever you import that module, it will have the singleton loaded (the import mechanism ensuring that python modules are run only once).
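For example, a sketch of that module-level initialization (assuming a module named thesaurus.py that holds the classes above):

# thesaurus.py
class ThesaurusBase:
    @classmethod
    def initClassWide(cls):
        cls._shared = cls()

# ... subclass definitions go here ...

# Runs once at import time; subsequent "import thesaurus" statements
# reuse the already-initialized module, so the singleton is created
# exactly once.
ThesaurusBase.initClassWide()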
The short answer is:
do not instantiate an instance of a subclass from the superclass's constructor.
The longer answer:
if your motive for doing this is that Thesaurus is a singleton, then you'll be better off exposing the singleton through a static or class method on Thesaurus, and calling that method whenever you need the singleton.
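A minimal sketch of that idea, using a classmethod accessor (the name get_instance is illustrative); note that the singleton class itself must not call the accessor from its own __init__, or the original recursion comes back:

class Thesaurus(object):
    _instance = None

    @classmethod
    def get_instance(cls):
        # Lazily create the shared instance on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

class Baseclass(object):
    def __init__(self):
        # Ask for the shared Thesaurus instead of constructing a new
        # one, so no recursive __init__ chain can start.
        self.thes = Thesaurus.get_instance()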
