Python inheritance - add argument to parent method

I have a base class with a run method. For example:
from abc import abstractmethod
class A:
    @abstractmethod
    def run(self, steps):
        ...
In Python it is possible to define a class B whose run method takes additional arguments:
class B(A):
    def run(self, steps, save):
        ...
Working with type hints, I can specify whether a function takes an A or a B as an argument. Annotating the parameter as A says I only need the basic interface of run, while annotating it as B says I need the extended one.
The purpose of this design is to declare a base interface that all the children share, while each child can expose an extended API.
This cannot be done in many other languages. Hence I wonder: is it an anti-pattern? Is it something legitimate to do?
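For illustration, a minimal sketch of such annotated callers (these function names are hypothetical, not part of the question):
def needs_basic_interface(obj: A) -> None:
    # Only the base signature of run is required here.
    obj.run(10)
def needs_extended_interface(obj: B) -> None:
    # The extended signature of run is required here.
    obj.run(10, save=True)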

In Python you can do something like the following.
class A:
    def run(self, steps):
        print("Using class A's run.")
        print(f"steps are {steps}")

class B(A):
    def run(self, steps, other_arg=None):
        if other_arg:
            print("Using class B's override.")
            print(f"steps are {steps}")
        else:
            # Use parent's run logic instead.
            super().run(steps)

x = B()
x.run(100)
x.run(30, other_arg="something")
# Using class A's run.
# steps are 100
# Using class B's override.
# steps are 30
Now, should you do this? There is a time and a place, but you can also get into trouble. If you break the interface of the core object you're inheriting from, the core object loses its abstraction value. In that case you'd have been better off having two separate classes, or rewriting your abstraction to be more robust to the differences between the objects you wish to represent.
Edit: Note that the original question changed to make the base run method abstract. The solution posted here is mostly invalidated by that.

Related

Multiple function forms in template pattern

I wanted to ask what the best way is to implement the template pattern when the template method can take multiple forms (I guess it wouldn't be the template pattern then).
Let's say I have an abstract class with one abstract method and a few concrete methods:
from abc import ABC, abstractmethod
from typing import Any
class TemplateClass(ABC):
    def __init__(self, my_client):
        self.client = my_client
    def run(self) -> Any:
        self._step1()
        self._step2()
        self._step3()
        self._execute_specific_logic()
    @abstractmethod
    def _execute_specific_logic(self) -> Any:
        raise NotImplementedError
    def _step1(self):
        pass
    def _step2(self):
        pass
    def _step3(self):
        pass
And I want to create about 10 classes that will inherit TemplateClass, but:
7 of them should have run method with all steps
2 of them should have run method only with _step2 and _step3
1 of them should have run method only with _step1
I was wondering about different ways to implement such logic:
implementing a different run method for every case - run, run_without_step1, run_without_step2_and_step3
adding flag arguments to the run method, such as is_step_x_required, with True as the default and passing False in subclass methods when needed
overriding the run method in subclasses when needed
using some kind of mixin class?
I would really appreciate any advice on this issue.
All the techniques you list are reasonable. My first inclination was towards the "least" amount of boilerplate:
class Stepper(ABC):
    def __init__(self, run_steps):
        # run_steps lists the step numbers to execute, e.g. [2, 3].
        self._run_steps = run_steps
        self._steps = [self._step1, self._step2, self._step3]
    def run(self):
        for step in self._run_steps:
            self._steps[step - 1]()
    # Placeholder steps, as in the question's TemplateClass:
    def _step1(self):
        pass
    def _step2(self):
        pass
    def _step3(self):
        pass
class Only23Stepper(Stepper):
    def __init__(self):
        super().__init__(run_steps=[2, 3])
For more flexibility, but more boilerplate, one may override run() to explicitly specify the steps.
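For example, a hypothetical subclass along these lines:
class Only1Stepper(Stepper):
    def __init__(self):
        super().__init__(run_steps=[1])
    def run(self):
        # Name the steps explicitly instead of driving them from run_steps.
        self._step1()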
However, we may want to express this in a different way than with classes. It is well known that inheritance is evil. It is possible that the reason we're running into design issues and potential inflexibility is that this could be expressed in a simpler way using plain old functions.
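As a rough sketch of that function-based direction (all names here are illustrative):
def step1(): ...
def step2(): ...
def step3(): ...
def make_runner(*steps):
    # Compose the selected steps into a single callable.
    def run():
        for step in steps:
            step()
    return run
run_all_steps = make_runner(step1, step2, step3)
run_steps_2_and_3 = make_runner(step2, step3)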

Python multiple inheritance, calling second base class method, if both base classes holding same method

class A:
    def amethod(self): print("Base1")
class B:
    def amethod(self): print("Base3")
class Derived(A, B):
    pass
instance = Derived()
instance.amethod()
Now I want to call B's amethod() - please let me know the way.
Try to use composition.
Avoid multiple inheritance at all costs, as it's too complex to be reliable. If you're stuck with it, then be prepared to know the class hierarchy and spend time finding where everything is coming from.
Use composition to package code into modules that are used in many different unrelated places and situations.
Use inheritance only when there are clearly related reusable pieces of code that fit under a single common concept, or if you have to because of something you're using.
class A:
    def amethod(self): print("Base1")
class B:
    def amethod(self): print("Base3")
class Derived2:
    def __init__(self):
        self.a = A()
        self.b = B()
    def amethodBase1(self):
        self.a.amethod()
    def amethodBase3(self):
        self.b.amethod()
instance2 = Derived2()
instance2.amethodBase1()
instance2.amethodBase3()
galaxyan's answer suggesting composition is probably the best one. Multiple inheritance is often complicated to design and debug, and unless you know what you're doing, it can be difficult to get right. But if you really do want it, here's an answer explaining how you can make it work:
For multiple inheritance to work properly, the base classes will often need to cooperate with their children. Python's super function makes this not too difficult to set up. You will often need a common base for the classes involved in the inheritance (to stop the chain of super calls):
class CommonBase:
    def amethod(self):
        print("CommonBase")
        # don't call `super` here, we're the end of the inheritance chain

class Base1(CommonBase):
    def amethod(self):
        print("Base1")
        super().amethod()

class Base2(CommonBase):
    def amethod(self):
        print("Base2")
        super().amethod()

class Derived(Base1, Base2):
    def amethod(self):
        print("Derived")
        super().amethod()
Now calling Derived().amethod() will print Derived, Base1, Base2, and finally CommonBase. The trick is that super passes each call on to the next class in the MRO of self, even if that class is not in the current class's inheritance hierarchy. So Base1.amethod ends up calling Base2.amethod via super, since they're being run on an instance of Derived.
If you don't need any behavior in the common base class, its method body can just be pass. And of course, the Derived class can simply inherit the method rather than writing its own version that calls super.
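You can see the order super will follow by inspecting the MRO directly:
print([cls.__name__ for cls in Derived.__mro__])
# ['Derived', 'Base1', 'Base2', 'CommonBase', 'object']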

Sharing base object with inheritance

I have class Base. I'd like to extend its functionality in a class Derived. I was planning to write:
class Derived(Base):
def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
super().__init__(base_arg1, base_arg2)
# ...
def derived_method1(self):
# ...
Sometimes I already have a Base instance, and I want to create a Derived instance based on it, i.e., a Derived instance that shares the Base object (doesn't re-create it from scratch). I thought I could write a static method to do that:
b = Base(arg1, arg2) # very large object, expensive to create or copy
d = Derived.from_base(b, derived_arg1, derived_arg2) # reuses existing b object
but it seems impossible. Either I'm missing a way to make this work, or (more likely) I'm missing a very big reason why it can't be allowed to work. Can someone explain which one it is?
[Of course, if I used composition rather than inheritance, this would all be easy to do. But I was hoping to avoid the delegation of all the Base methods to Derived through __getattr__.]
You can rely on what your Base class does with base_arg1 and base_arg2:
class Base:
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2
    ...

class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        ...
    @classmethod
    def from_base(cls, b, da1, da2):
        return cls(b.base_arg1, b.base_arg2, da1, da2)
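A quick usage sketch (values are illustrative); note that from_base rebuilds the Base state from b's attributes rather than sharing the b object itself:
b = Base(1, 2)
d = Derived.from_base(b, "x", "y")
print(d.base_arg1, d.base_arg2)  # 1 2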
The alternative approach to Alexey's answer (my +1) is to pass the base object in the base_arg1 argument and to check whether that argument was used to pass a base object (i.e. whether it is an instance of the base class). The other arguments can be made technically optional (say, defaulting to None) and checked explicitly inside the code.
The difference is that the argument type alone decides which of the two possible ways of creation is used. This is necessary if the creation of the object cannot be explicitly captured in the source code (e.g. some structure contains a mix of argument tuples, some holding initial values, some holding references to existing objects). Then you would probably need to pass the remaining arguments as keyword arguments:
d = Derived(b, derived_arg1=derived_arg1, derived_arg2=derived_arg2)
Update: Sharing internal structures with the initial object is possible with both approaches. However, you must be aware that if one of the objects modifies the shared data, the usual funny things can happen.
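A minimal sketch of that argument-type check, assuming the Base from the answer above (the signature is illustrative):
class Derived(Base):
    def __init__(self, base_arg1, base_arg2=None, derived_arg1=None, derived_arg2=None):
        if isinstance(base_arg1, Base):
            # The first argument carries an existing Base instance: reuse its state.
            super().__init__(base_arg1.base_arg1, base_arg1.base_arg2)
        else:
            super().__init__(base_arg1, base_arg2)
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2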
To be clear here, I'll make an answer with code. pepr talks about this solution, but code is always clearer than English. In this case Base should not be subclassed, but it should be a member of Derived:
class Base:
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived:
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2
    def derived_method1(self):
        return self.base.base_arg1 * self.derived_arg1
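A quick check of how the pieces fit together (values are illustrative):
b = Base(2, 3)
d = Derived(b, 5, 7)
print(d.derived_method1())  # 10, i.e. b.base_arg1 * d.derived_arg1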

Dynamic sub-classing in Python

I have a number of atomic classes (Components/Mixins, not really sure what to call them) in a library I'm developing, which are meant to be subclassed by applications. This atomicity was created so that applications can only use the features that they need, and combine the components through multiple inheritance.
However, sometimes this atomicity cannot be ensured, because some components may depend on others. For example, imagine I have a component that gives a graphical representation to an object, and another component that uses this graphical representation to perform collision checking. The first is purely atomic, but the latter requires that the current object already subclasses the graphical-representation component, so that its methods are available. This is a problem, because we have to somehow tell the users of this library that in order to use a certain component, they also have to subclass this other one. We could make the collision component subclass the visual component, but if the user also subclasses the visual component, it wouldn't work, because the class is not on the same level (unlike a simple diamond relationship, which is desired), and it would give cryptic metaclass errors that are hard for the programmer to understand.
Therefore, I would like to know if there is any cool way, through metaclass redefinition or class decorators, to mark these unatomic components so that when they are subclassed, the additional dependency is injected into the current object if it's not yet available. Example:
class AtomicComponent(object):
    pass
@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent(object):
    pass
class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass
class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
Can someone give me a hint on how I can do this? Or whether it is even possible...
edit:
Since it is debatable whether the metaclass solution is the best one, I'll leave this unaccepted for 2 days.
Other solutions might be to improve the error messages; for example, doing something like UserClass2 would give an error saying that UnAtomicComponent already extends this component. This however creates the problem that it is impossible to use two UnAtomicComponents, given that they would subclass object on different levels.
"Metaclasses"
This is what they are for! At time of class creation, the class parameters run through the
metaclass code, where you can check the bases and change then, for example.
This runs without error - though it does not preserve the order of needed classes
marked with the "depends" decorator:
class AutoSubclass(type):
    def __new__(metacls, name, bases, dct):
        new_bases = set()
        for base in bases:
            if hasattr(base, "_depends"):
                for dependence in base._depends:
                    if dependence not in bases:
                        new_bases.add(dependence)
        bases = bases + tuple(new_bases)
        return type.__new__(metacls, name, bases, dct)

def depends(*args):
    def decorator(cls):
        cls._depends = args
        return cls
    return decorator

class AtomicComponent(metaclass=AutoSubclass):
    pass
@depends(AtomicComponent)  # <- something like this?
class UnAtomicComponent(metaclass=AutoSubclass):
    pass
class UserClass(UnAtomicComponent):  # automatically includes AtomicComponent
    pass
class UserClass2(AtomicComponent, UnAtomicComponent):  # also works without problem
    pass
(In Python 3 the metaclass is given with the metaclass= keyword in the class statement. Subclasses inherit it, so only the component base classes need to declare it explicitly; in Python 2 the same effect could be had with a module-level __metaclass__ variable, which no longer works in Python 3.)
-- edit --
Without metaclasses, the way to go is to have your classes properly inherit from their dependencies. They will no longer be that "atomic", but, since they could not work while being that atomic, it may not matter.
In the example below, classes C and D would be your user classes:
>>> class A(object): pass
...
>>> class B(A, object): pass
...
>>> class C(B): pass
...
>>> class D(B, A): pass
...

Best way to test instance methods without running __init__

I've got a simple class that gets most of its arguments via __init__, which also runs a variety of private methods that do most of the work. Output is available either through access to object variables or through public methods.
Here's the problem - I'd like my unittest framework to directly call the private methods called by __init__ with different data, without going through __init__.
What's the best way to do this?
So far, I've been refactoring these classes so that __init__ does less and data is passed in separately. This makes testing easy, but I think the usability of the class suffers a little.
EDIT: Example solution based on Ignacio's answer:
class C(object):
    def __init__(self, number):
        new_number = self._foo(number)
        self._bar(new_number)
    def _foo(self, number):
        return number * 2
    def _bar(self, number):
        print(number * 10)
# --- normal execution - should print 160: -------
MyC = C(8)
# --- testing execution - should print 80 --------
MyC = object.__new__(C)
MyC._bar(8)
For new-style classes, call object.__new__(), passing the class as a parameter. For old-style classes (Python 2 only), call types.InstanceType(), passing the class as a parameter.
import types
class C(object):
    def __init__(self):
        print('init')
class OldC:  # an old-style class in Python 2
    def __init__(self):
        print('initOld')
c = object.__new__(C)
print(c)
oc = types.InstanceType(OldC)  # Python 2 only; removed in Python 3
print(oc)
Why does the usability of the class have to suffer? If all the __init__ is doing is precomputing things so you can expose values as simple variables, change those variables into properties and do the computation (potentially cached/memoized) in the getter. That way your __init__ method is back to doing initialization only and testability is improved.
The downside to this approach is that it might be less performant, but probably not to a significant degree.
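A rough sketch of that property-based refactoring (the class and names here are illustrative, echoing the example above):
class C(object):
    def __init__(self, number):
        # __init__ only stores state; no computation happens here.
        self.number = number
        self._doubled = None
    @property
    def doubled(self):
        # Computed lazily on first access, then memoized on the instance.
        if self._doubled is None:
            self._doubled = self.number * 2
        return self._doubled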
