What are the elegant ways to do MixIns in Python? - python

I need to find an elegant way to do 2 kinds of MixIns.
First:
class A(object):
    def method1(self):
        do_something()
Now, a MixInClass should make method1 do this: do_other() -> A.method1() -> do_smth_else(), i.e. basically "wrap" the original function. I'm pretty sure a good solution to this must exist.
Second:
class B(object):
    def method1(self):
        do_something()
        do_more()
In this case, I want MixInClass2 to be able to inject itself between do_something() and do_more(), i.e.: do_something() -> MixIn.method1 -> do_more(). I understand this would probably require modifying class B; that's OK, I'm just looking for the simplest way to achieve it.
These are pretty trivial problems and I actually solved them, but my solution is tainted.
The first one I solved by using self._old_method1 = self.method1; self.method1 = self._new_method1; and writing a _new_method1() that calls _old_method1().
Problem: multiple mix-ins will all rebind _old_method1, and it is inelegant.
The second one was solved by creating a dummy method call_mixin(self): pass, calling it between do_something() and do_more(), and having the mix-in redefine self.call_mixin() (sketched below). Again inelegant, and it will break with multiple mix-ins.
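For reference, a minimal sketch of that dummy-hook approach (call_mixin is the placeholder name from the description above; print calls stand in for the real do_something()/do_more()):
class B(object):
    def method1(self):
        print("do_something()")
        self.call_mixin()          # dummy hook injected between the two calls
        print("do_more()")

    def call_mixin(self):          # no-op unless a mix-in rebinds it
        pass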
Any ideas?
Thanks to Boldewyn, I've found an elegant solution to the first one (I'd forgotten you can create decorators on the fly, without modifying the original code):
class MixIn_for_1(object):
    def __init__(self):
        self.method1 = self.wrap1(self.method1)
        super(MixIn_for_1, self).__init__()

    def wrap1(self, old):
        def method1():
            print("do_other()")
            old()
            print("do_smth_else()")
        return method1
Still searching for ideas for the second one (this approach won't fit, since I need to inject inside the old method, not wrap around it as here).
A solution for the second one is below, replacing "pass_func" with lambda: 0.

I think that can be handled in quite a Pythonic way using decorators (see also PEP 318).
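For example, a wrapping decorator along those lines might look like this (a sketch only; wrap_method1 is a made-up name, and the print calls stand in for do_other()/do_smth_else()):
import functools

def wrap_method1(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        print("do_other()")                   # runs before the original method
        result = func(self, *args, **kwargs)
        print("do_smth_else()")               # runs after the original method
        return result
    return wrapper

class A(object):
    @wrap_method1
    def method1(self):
        print("do_something()")

A().method1()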

Here is another way to implement MixInClass1 and MixInClass2:
Decorators are useful when you need to wrap many functions. Since MixInClass1 needs to wrap only one function, I think it is clearer to monkey-patch.
Using double underscores for __old_method1 and __method1 plays a useful role in MixInClass1. Because of Python's name-mangling convention, the double underscores localize these attributes to MixInClass1 and allow you to use the very same attribute names in other mix-in classes without causing unwanted name collisions.
class MixInClass1(object):
    def __init__(self):
        self.__old_method1, self.method1 = self.method1, self.__method1
        super(MixInClass1, self).__init__()

    def __method1(self):
        print("pre1()")
        self.__old_method1()
        print("post1()")

class MixInClass2(object):
    def __init__(self):
        super(MixInClass2, self).__init__()

    def method1_hook(self):
        print('MixIn method1')

class Foo(MixInClass2, MixInClass1):
    def method1(self):
        print("do_something()")
        getattr(self, 'method1_hook', lambda *args, **kw: None)()
        print("do_more()")

foo = Foo()
foo.method1()

Related

Multiple function forms in template pattern

I wanted to ask what the best way is to implement the template pattern when the template method can have multiple forms (I guess it wouldn't be the template pattern then).
Let's say I have an abstract class with one abstract method and a few concrete methods:
from abc import ABC, abstractmethod
from typing import Any

class TemplateClass(ABC):
    def __init__(self, my_client):
        self.client = my_client

    def run(self) -> Any:
        self._step1()
        self._step2()
        self._step3()
        self._execute_specific_logic()

    @abstractmethod
    def _execute_specific_logic(self) -> Any:
        raise NotImplementedError

    def _step1(self):
        pass

    def _step2(self):
        pass

    def _step3(self):
        pass
And I want to create about 10 classes that will inherit TemplateClass, but:
7 of them should have run method with all steps
2 of them should have run method only with _step2 and _step3
1 of them should have run method only with _step1
I was wondering about different ways to implement such logic:
implementing different run methods for every case - run, run_without_step1, run_without_step2_and_step3
adding a flag argument such as is_step_x_required to the run method, with True as the default and passing False in the subclass when needed
overriding the run method in subclasses when needed
using some kind of mixin class?
I would really appreciate any advice on this issue.
All the techniques you list are reasonable. My first inclination was towards the "least" amount of boilerplate:
class Stepper(ABC):
    def __init__(self, run_steps):
        self._run_steps = run_steps
        self._steps = [self._step1, self._step2, self._step3]

    def run(self):
        for step in self._run_steps:
            # run_steps holds 1-based step numbers
            self._steps[step - 1]()

    # concrete steps as in TemplateClass above (stubs here for completeness)
    def _step1(self): pass
    def _step2(self): pass
    def _step3(self): pass

class Only23Stepper(Stepper):
    def __init__(self):
        super().__init__(run_steps=[2, 3])
For more flexibility, but more boilerplate, one may override run() to explicitly specify the steps.
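For example, a hypothetical subclass overriding run() (building on the Stepper sketch above; the class name is made up):
class OnlyStep1Stepper(Stepper):
    def __init__(self):
        super().__init__(run_steps=[1])   # kept for compatibility, unused by run() below

    def run(self):
        # More boilerplate, but the steps are spelled out explicitly.
        self._step1()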
However, we may want to express this in a different way than with classes. It is well known that inheritance is evil. It is possible that the reason we're running into design issues and potential inflexibility is that this could be expressed in a simpler way using plain old functions.
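A rough sketch of the plain-function version (names are illustrative, not from the original post):
def step1():
    print("step 1")

def step2():
    print("step 2")

def step3():
    print("step 3")

def run(steps):
    # Each variant is just a different list of callables.
    for step in steps:
        step()

run([step1, step2, step3])   # all steps
run([step2, step3])          # only steps 2 and 3
run([step1])                 # only step 1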

How to avoid parameter type in function's name?

I have a function foo that takes a parameter stuff.
Stuff can be something in a database, and I'd like to create a function that takes a stuff_id, gets the stuff from the db, and executes foo.
Here's my attempt to solve it:
1/ Create a second function with suffix from_stuff_id
def foo(stuff):
    ...  # do something

def foo_from_stuff_id(stuff_id):
    stuff = get_stuff(stuff_id)
    foo(stuff)
2/ Modify the first function
def foo(stuff=None, stuff_id=None):
    if stuff_id:
        stuff = get_stuff(stuff_id)
    ...  # do something
I don't like either way.
What's the most Pythonic way to do it?
Assuming foo is the main component of your application, go with your first way. Each function should have a different purpose. The moment you combine multiple purposes into a single function, you can easily get lost in long streams of code.
If, however, some other function can also provide stuff, then go with the second.
The only thing I would add is make sure you add docstrings (PEP-257) to each function to explain in words the role of the function. If necessary, you can also add comments to your code.
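For instance, a docstring on the second function might read (illustrative wording only):
def foo_from_stuff_id(stuff_id):
    """Fetch the stuff identified by stuff_id from the database and run foo on it."""
    stuff = get_stuff(stuff_id)
    foo(stuff)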
I'm not a big fan of type overloading in Python, but this is one of the cases where I might go for it if there's really a need:
def foo(stuff):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
With type annotations it would look like this:
from typing import Union

def foo(stuff: Union[int, Stuff]):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
It basically depends on how you've defined all these functions. If you're importing get_stuff from another module, the second approach is more Pythonic, because from an OOP perspective you create functions for one particular purpose, and since get_stuff is already defined you don't need to wrap the call in another function.
If get_stuff is not defined in another module, then it depends on whether you are using classes or not. If you're using a class and you want to use all these pieces together, you can use a method for accessing or connecting to the database and use that method within other methods like foo.
Example:
from some_module import get_stuff

class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        self.stuff_id = kwargs['stuff_id']

    def foo(self):
        stuff = get_stuff(self.stuff_id)
        # do stuff
Or, if the functionality of foo depends on the existence of stuff, you can fetch stuff once in __init__ and simply check its validity:
class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        _stuff_id = kwargs['stuff_id']
        self.stuff = get_stuff(_stuff_id)  # can return None

    def foo(self):
        if self.stuff:
            ...  # do stuff
        else:
            ...  # do other stuff
Or another neat design pattern for such situations might be using a dispatcher function (or method in class) that delegates the execution to different functions based on the state of stuff.
def delegator(stuff, stuff_id):
    if stuff:  # or other condition
        foo(stuff)
    else:
        get_stuff(stuff_id)

Jupyter - Split Classes in multiple Cells

I wonder if there is a possibility to split Jupyter classes into different cells? Let's say:
#first cell:
class foo(object):
    def __init__(self, var):
        self.var = var

#second cell
    def print_var(self):
        print(self.var)
For more complex classes it's really annoying to write them into one cell.
I would like to put each method in a different cell.
Someone made this last year, but I wonder if there is something built in so I don't need external scripts/imports.
And if not, I would like to know if there is a reason not to give the opportunity to split your code and document/debug it more easily.
Thanks in advance
Two solutions were provided to this problem on the GitHub issue "Define a Python class across multiple cells #1243", which can be found here: https://github.com/jupyter/notebook/issues/1243
One solution is using a magic function from a package developed for this specific case called jdc, or Jupyter dynamic classes. The documentation on how to install and use it can be found at the package URL: https://alexhagen.github.io/jdc/
The second solution was provided by Doug Blank and just works in regular Python, without resorting to any extra magic, as follows:
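As far as I recall, jdc exposes an %%add_to cell magic, so the usage would look roughly like this (treat the magic name as an assumption and check the jdc docs):
Cell 1:
import jdc

class Foo(object):
    def __init__(self, var):
        self.var = var
Cell 2 (the magic has to be the first line of the cell):
%%add_to Foo
def print_var(self):
    print(self.var)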
Cell 1:
class MyClass():
    def method1(self):
        print("method1")
Cell 2:
class MyClass(MyClass):
    def method2(self):
        print("method2")
Cell 3:
instance = MyClass()
instance.method1()
instance.method2()
I tested the second solution myself in both Jupyter Notebook and VS Code, and it worked fine in both environments, except that in VS Code I got a pylint error, [pylint] E0102: class already defined line 5, which is kind of expected but still runs fine. Moreover, VS Code was not the target environment anyway.
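If that warning bothers you, it can be suppressed on the offending line; to the best of my knowledge, E0102 corresponds to pylint's function-redefined check:
class MyClass(MyClass):  # pylint: disable=function-redefined
    def method2(self):
        print("method2")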
I don't feel like this whole thing is an issue or a good idea... But maybe the following will work for you:
# First cell
class Foo(object):
    pass

# Other cell
def __init__(self, var):
    self.var = var

Foo.__init__ = __init__

# Yet another cell
def print_var(self):
    print(self.var)

Foo.print_var = print_var
I don't expect it to be extremely robust, but... it should work for regular classes.
EDIT: I believe that there are a couple of situations where this may break. I am not sure if that will resist code inspection, given that the method lives "far" from the class. But you are using a notebook, so code inspection should not be an issue (?), although keep that in mind if debugging.
Another possible issue is related to the use of metaclasses. If you try to use metaclasses (or derive from a class which uses a metaclass), that may break it, because metaclasses typically expect to know all the methods of the class, and by dynamically adding methods to a class we are bending the rules on the flow of class creation.
Without metaclasses or some "quite-strange" use cases, the approach should be safe-ish.
For "simple" classes, it is a perfectly valid approach. But... it is not exactly an expected feature, so (ab)using it may give some additional problems which I may not
Here's a decorator which lets you add members to a class:
import functools

def update_class(
    main_class=None, exclude=("__module__", "__name__", "__dict__", "__weakref__")
):
    """Class decorator. Adds all methods and members from the wrapped class to main_class

    Args:
    - main_class: class to which to append members. Defaults to the class with the same name as the wrapped class
    - exclude: black-list of members which should not be copied
    """
    def decorates(main_class, exclude, appended_class):
        if main_class is None:
            main_class = globals()[appended_class.__name__]
        for k, v in appended_class.__dict__.items():
            if k not in exclude:
                setattr(main_class, k, v)
        return main_class
    return functools.partial(decorates, main_class, exclude)
Use it like this:
#%% Cell 1
class MyClass:
    def method1(self):
        print("method1")

me = MyClass()

#%% Cell 2
@update_class()
class MyClass:
    def method2(self):
        print("method2")

me.method1()
me.method2()
This solution has the following benefits:
pure python
Doesn't change the inheritance order
Affects existing instances
There is no way to split a single class definition across cells.
You could, however, add methods dynamically to an instance of it:
CELL #1
import types
class A:
    def __init__(self, var):
        self.var = var

a = A(42)  # any value for var
And in a different cell:
CELL #2
def print_var(self):
    print(self.var)

a.print_var = types.MethodType(print_var, a)
Now, this should work:
CELL #3
a.print_var()
Medhat Omr's answer provides some good options; another one I found that I thought someone might find useful is to dynamically assign methods to a class using a decorator function. For example, we can create a higher-order function like the one below, which takes some arbitrary function, gets its name as a string, and assigns it as a class method.
def classMethod(func):
    setattr(MyClass, func.__name__, func)
    return func
We can then use the decorator's syntactic sugar above each method that should be bound to the class:
import numpy as np

@classMethod
def get_numpy(self):
    return np.array(self.data)
This way, each method can be stored in a different Jupyter notebook cell and the class will be updated with the new function each time the cell is run.
I should also note that, since this defines the methods as functions in the global scope, it might be a good idea to prefix them with an underscore or a letter to avoid name conflicts (then replace func.__name__ with func.__name__[1:], or however many characters at the beginning of each name you want to omit). The method will still have the "mangled" name, since it is the same object, so be wary of this if you need to access the method name programmatically somewhere else in your program.
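A sketch of that variation, reusing the illustrative classMethod decorator and MyClass names from above and stripping one leading underscore (numpy assumed imported as np):
def classMethod(func):
    # Bind under the name minus its leading underscore; func itself keeps
    # its original "_"-prefixed __name__.
    setattr(MyClass, func.__name__[1:], func)
    return func

@classMethod
def _get_numpy(self):
    return np.array(self.data)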
Thanks @Medhat Omr, it works for me for @classmethod as well.
Base class in the first cell
class Employee:
    # define two class variables
    num_empl = 0
    raise_amt = 1.05

    def __init__(self, first, last, pay):
        self.first = first
        self.last = last
        self.pay = pay

    ...
    ...
@classmethod in another cell:
class Employee(Employee):
    @classmethod
    def set_raise_amt(cls, amount):
        cls.raise_amt = amount

empl = Employee("Jahn", "Smith", 65000)
Employee.set_raise_amt(1.04)
print(empl.full_name() + " is getting " + str(empl.apply_raise()))

Idiomatic way of processing instances of derived classes?

Although I have some years of experience programming in Python, every time I encounter a problem like this I use the built-in isinstance function. However, I'm not sure whether this is the idiomatic way of doing this kind of thing in Python.
So, I have a base class that most of my instances will be based on.
class Base():
    def a(self):
        return 1
I also have a slightly different class that looks like this:
class Extended(Base):
    def b(self):
        return 2
Now, there is a third class that might have additional functionality depending on the received argument, which will be an instance of one of the previous classes.
class User():
    def __init__(self, arg):
        ...  # do some common work
        if isinstance(arg, Extended):
            ...
            # define more functionality which will call method 'b'
            # at some point during runtime (as event handler or smth)
Is this really the way to go with Python in this trivial example, or should I consider changing the interface of Base to something like:
class Base2():
    supports_more_func = False

    def a(self):
        return 1

    def b(self):
        pass

class Extended2(Base2):
    supports_more_func = True

    def b(self):
        return 2

class User():
    def __init__(self, arg):
        ...  # do some common work
        if arg.supports_more_func:
            ...
            # define more functionality which will call method 'b'
            # at some point during runtime (as event handler or smth)
Which one is the better approach according to you guys, and why?
Generally speaking, when doing object oriented programming, using isinstance is rarely the way to go, especially when you're in charge of designing the classes you use, because that would be breaking S.O.L.I.D. principles.
Instead, you should simply design your class to have a common, well-defined interface and just use it. So testing for a type or for a member is rarely the way to go.
The way I'd go would be:
class Base2():
    def a(self):
        return 1

    def b(self):
        pass

class Extended2(Base2):
    def b(self):
        # all that extra functionality that was in User.__init__()
        return 2

class User():
    def __init__(self, arg):
        ...  # do some common work
        arg.b()
now I guess that the part with:
# define more functionality which will call method 'b'
# at some point during runtime (as event handler or smth)
has some data and processing tightly coupled with User and not with Extended2, but I'm pretty sure there's an elegant way to give that data to arg.b() as an argument.
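For instance, User could simply hand its data over when it calls b() (a rough sketch with made-up names):
class Extended2(Base2):
    def b(self, user_data=None):
        # use whatever User passes in; user_data is a made-up parameter name
        return 2

class User():
    def __init__(self, arg):
        self.events = []               # illustrative state owned by User
        arg.b(user_data=self.events)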
Basically, I'd say that 99% of the time when you need to use isinstance() to do something, it means you have a design issue and there's a better way to do the same thing.
Here's some web literature on the topic:
http://canonical.org/~kragen/isinstance/
https://www.quora.com/When-is-it-acceptable-to-use-isinstance-in-Python
https://www.lynda.com/Programming-Languages-tutorials/Avoiding-isinstance/471978/502199-4.html

do's and don'ts of __init__ method

I was just wondering if it's considered wildly inappropriate, just messy, or unconventional at all to use the __init__ method to set variables by calling, one after another, the rest of the methods within a class. I have done things like self.age = ch_age(), where ch_age is a function within the same class, and set more variables the same way, like self.name = ch_name(), etc. Also, what about prompting for user input within __init__ specifically to get the arguments with which to call ch_age? The latter feels a little wrong, I must say. Any advice, suggestions, admonishments welcome!
I always favor being lazy: if you NEED to initialize everything in the constructor, you should. In a lot of cases, I put a general "reset" method in my class; then you can call that method in __init__ and re-initialize the class instance easily.
But if you don't need those variables initially, I feel it's better to wait to initialize things until you actually need them.
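For example, deferring an expensive attribute until it is first used might look like this (a sketch using functools.cached_property, available since Python 3.8; not from the original question):
import functools

class Person:
    def __init__(self, name):
        self.name = name               # cheap, set immediately

    @functools.cached_property
    def age(self):
        print("computing age...")      # runs only on the first access
        return 42                      # stand-in for an expensive lookup

p = Person("Ozzy")
print(p.age)   # computes and caches
print(p.age)   # returns the cached value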
For your specific case
class Blah1(object):
    def __init__(self):
        self.name = self.ch_name()

    def ch_name(self):
        return 'Ozzy'
you might as well use the property decorator. The following will have the same effect:
class Blah2(object):
    def __init__(self):
        pass

    @property
    def name(self):
        return 'Ozzy'
In both of the implementations above, the following code should not issue any exceptions:
>>> b1 = Blah1()
>>> b2 = Blah2()
>>> assert b1.name == 'Ozzy'
>>> assert b2.name == 'Ozzy'
If you wanted to provide a reset method, it might look something like this:
class Blah3(object):
    def __init__(self, name):
        self.reset(name)

    def reset(self, name):
        self.name = name
