Tricky method for overriding a method in several sibling classes required - python

Imagine a situation in which a large set of animal classes, which cannot be modified, all inherit from the same parent class "Animal". Each implements a method called "make_noise" with a slightly different signature, but all share the parameter volume:
class Cat(Animal):
    def make_noise(self, volume, duration):
        ...  # some code here

class Mouse(Animal):
    def make_noise(self, volume, pitch):
        ...  # some different code here
A different "controller" class, which also cannot be modified, is instructing a list of these animal instances (a list which I have populated) to make sounds at particular volumes (and duration/pitch/etc as appropriate). However, I need to get between the controller and animal classes to modify the behaviour of "make_noise" in all animal classes, so that I can reduce the value of volume before the sound is made.
One option would be to do something like:
def animal_monkeypatcher(animal_class, volume_reduction_factor):
    class QuietAnimal(animal_class):
        def make_noise(self, volume, **kwargs):
            volume = volume * volume_reduction_factor
            return super(QuietAnimal, self).make_noise(volume, **kwargs)
    return QuietAnimal
However, I also need to pickle these objects, and that doesn't work with this approach. The next approach I thought about was a class which had an instance of the animal like so...
class QuietAnimal():
    def __init__(self, animal_class, init_kwargs, volume_reduction_factor):
        self.animal = animal_class(**init_kwargs)
        self.volume_reduction_factor = volume_reduction_factor
    def make_noise(self, volume, **kwargs):
        volume = volume * self.volume_reduction_factor
        return self.animal.make_noise(volume, **kwargs)
    # ... lots of other functions, all delegating to self.animal ...
However, this is also not suitable because the controller sometimes needs to create new instances of animals. It does this by getting the class of an animal (which is QuietAnimal, instead of say Mouse) and then using the same set of init_kwargs to create it, which does not match the signature of QuietAnimal, so again we're stuck...
At the moment I have a horrible hack, which basically forks the __init__ depending on whether an animal_class has been passed in, and records some info in some class variables. It's frankly dreadful, and not useful if I need to create more than one type of animal (which in my use case I don't, but still...). It's also rubbish because I have to include all of the methods from all of the animals.
What is the appropriate way to wrap/proxy/whatever this set of classes to achieve the above? Some sample code would be greatly appreciated. I am a Python novice.

It turns out, what I needed was standard "monkey patching". Not a great idea either, but slightly cleaner than creating classes on the fly.
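For the record, a minimal sketch of that monkey-patching approach (the class list and reduction factor are placeholders, and it assumes the animal classes can be imported and modified at runtime):

ANIMAL_CLASSES = [Cat, Mouse]   # placeholder list of the classes to patch
VOLUME_FACTOR = 0.5             # placeholder reduction factor

def quieten(animal_class, factor):
    original = animal_class.make_noise

    def quieter_make_noise(self, volume, **kwargs):
        # Scale the volume, then delegate to the original method.
        return original(self, volume * factor, **kwargs)

    animal_class.make_noise = quieter_make_noise

for cls in ANIMAL_CLASSES:
    quieten(cls, VOLUME_FACTOR)

Since the instances keep their original classes, pickling should keep working, as long as the patch is applied again in the process that unpickles them.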


Undoing a decade of singleton pattern and class-level configuration

Overview
I need to duplicate a whole inheritance tree of classes. Simply deep-copying the class objects does not work; a proper factory pattern involves a huge amount of code changes; I'm not sure how to use metaclasses to accomplish this.
Background
The software I work on implements support for specialized external hardware, connected to the host computer via USB. Many years ago, it was assumed that there would only ever be one type of hardware in use at a time. Consequently, the hardware object is used as a singleton. Over the years, secondary classes came to be configured based on the currently active hardware class.
At the moment, it is impossible to use this library with two types of hardware at the same time, since the class objects cannot be configured for both hardware types at the same time.
In recent years, we have avoided this issue by creating one Python process for each hardware type, but this is becoming untenable.
Here is an extremely simplified example of the architecture:
# ----------
# Hardware classes
class HwBase():
    def customizeComponent(self, compDict):
        compDict['ComponentBase'].hardware = self

class HwA(HwBase):
    def customizeComponent(self, compDict):
        super().customizeComponent(compDict)
        compDict['AnotherComponent'].prop.configure(1, 2, 3)

class HwB(HwBase):
    def customizeComponent(self, compDict):
        super().customizeComponent(compDict)
        compDict['AnotherComponent'].prop.configure(4, 5, 6)

# ----------
# Property classes
class SpecialProperty(property):
    def __init__(self, fvalidate):
        self.fvalidate = fvalidate
        # handle fset, fget, etc. here.
        # super().__init__()

# ----------
# Component classes
class ComponentBase():
    hardware = None
    def validateProp(self, val):
        return val < self.maxVal
    prop = SpecialProperty(fvalidate=validateProp)

class SomeComponent(ComponentBase):
    """Users directly instantiate and use this component via an interactive shell.
    This component does complex operations with the hardware attribute."""
    def validateThing(self, val):
        return isinstance(val, ComponentBase)
    thing = SpecialProperty(fvalidate=validateThing)

class AnotherComponent(ComponentBase):
    """Users directly instantiate and use this component via an interactive shell.
    This component does complex operations with the hardware attribute."""
    maxVal = 15

# ----------
# Initialization
def initialize():
    """This is only called once per Python instance."""
    # activeCls = HwA
    activeCls = HwB
    allComponents = {
        'ComponentBase': ComponentBase,
        'SomeComponent': SomeComponent,
        'AnotherComponent': AnotherComponent
    }
    hwInstance = activeCls()
    hwInstance.customizeComponent(allComponents)
    return allComponents

components = initialize()

# ----------
# User code goes here
someInstance1 = components['SomeComponent']()
someInstance2 = components['SomeComponent']()
someInstance1.prop = 10
someInstance2.prop = 10
The overarching goal would be to interact with both HwA and HwB at the same time. Since most interactions are done via components instead of the Hw objects themselves, I believe the solution involves having multiple versions of the components, e.g.: two separate inheritance trees, for a total of 6 final components, one tree/set configured for each hardware. This is what I need help with.
Potential solutions
Consider that I have around ten different hardware types to configure for. Furthermore, there are hundreds of different leaf component classes, with many extra base and mixin classes.
Move all configuration steps into the component's __init__ method
Not possible due to the use of properties; these need to be set on the class.
Deepcopy the classobjects
Copy all classobjects, swap in the appropriate __bases__. Mutable class variables need to be carefully handled. However, I'm not sure how to deal with properties for this, since classbody references within the property objects (such as fvalidate) need to be updated to that of the copied class.
This requires a significant amount of manual intervention to work. Not impossible, but prone to breaking in the long term.
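For illustration, a minimal sketch of this class-cloning idea (the helper name and strategy are my assumptions, not project code). Rebuilding a class object with type() reuses the original namespace, which is exactly why the property cross-references remain a problem:

def clone_class(cls, new_bases):
    # Rebuild a class object with (almost) the same namespace but different bases.
    namespace = {k: v for k, v in vars(cls).items()
                 if k not in ('__dict__', '__weakref__')}
    clone = type(cls.__name__, new_bases, namespace)
    # Caveat: any property objects in the namespace still reference functions
    # defined in the original class body (e.g. fvalidate), so validators and
    # configuration are shared between the clone and the original class.
    return clone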
Factory pattern
Wrap all component definition in a factory function:
def ComponentBaseFactory(hw):
    class SomeComponent(cache[hw].ComponentBase):
        pass
    return SomeComponent
and have some sort of component cache which would handle creating all classobjects during initialize()
This is what I consider the most architecturally-correct option available. Since the class body is re-executed
on every factory call, the attributes of the properties will reference the appropriate class object.
Downside: huge code footprint. I am familiar with doing codebase-wide changes via sed or python scripts, but this would be quite a lot.
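As a rough, self-contained illustration of the factory-plus-cache idea (the names and cache layout below are my assumptions, not existing project code):

# Hypothetical sketch: wrapping class bodies in a factory re-executes them per
# hardware, so each hardware gets its own class objects and configuration.
_cache = {}

def component_tree(hw_name, max_val):
    if hw_name not in _cache:
        class ComponentBase:
            hardware = hw_name      # fresh class attribute per tree

        class AnotherComponent(ComponentBase):
            maxVal = max_val        # per-hardware configuration

        _cache[hw_name] = {'ComponentBase': ComponentBase,
                           'AnotherComponent': AnotherComponent}
    return _cache[hw_name]

# Two hardware types can now coexist in the same process.
tree_a = component_tree('HwA', 15)
tree_b = component_tree('HwB', 99)
assert tree_a['AnotherComponent'].maxVal != tree_b['AnotherComponent'].maxVal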
Add metaclasses on components
I am not sure how to proceed for this. Based on the python data model (py3.7), the following happens at class creation (which happens right after the class definition indentation ends):
MRO entries are resolved;
the appropriate metaclass is determined;
the class namespace is prepared;
the class body is executed;
the class object is created.
I would need to redo these steps after the class has been defined (like a factory function!), but I'm not sure how to redo step 4. Specifically, the Python documentation states in section 3.3.3.5 that the class body is executed with a "special?" form of the exec() builtin. How can I re-exec the class body with a different set of locals/globals? Even if I access the class body's code with inspect shenanigans, I'm not sure I'll be able to reproduce the module environment properly.
Even if I mess with __prepare__ and __new__, I don't see how I can fix the cross-references introduced in the class code block regarding the property instantiation.
Components as metaclasses
A metaclass is a class factory, just like a class is an object factory. SomeComponent and AnotherComponent could be declared as metaclasses, then get instantiated with the Hw object during initialize():
SomeComponent = SomeComponentMeta(hw)
This is similar to the factory pattern, but would also require quite a few code changes: a lot of class code would have to be moved to the metaclass __init__.
I'd have to spend a lot more time here to properly understand what you need, but if your "TL;DR" of executing the class body with different global/nonlocal variables is the bottom line, the factory approach is a very clean and readable way to do it, as you had considered.
At first glance, I don't think a metaclass would be a good approach here - although one could be used to customize your special properties (on my first read, I could not figure out what they actually do, and how they should differ between your final classes). If a function acting as a class factory can specialize your properties, it would work nonetheless.
If what you need is for the properties to be independent for HwA and HwB - for example, accessing a different list object in HwA than in HwB - then yes, a metaclass could take care of that by automatically recreating any properties when creating a subclass (so that the property objects themselves are not shared with the super-classes or across the hierarchy).
If that is what you need, leave a comment and I can write some proof-of-concept code.
Anyway, it is possible to create a metaclass that, upon creating a subclass, will look through the hierarchy for all SpecialProperty attributes and create new instances of those for the subclass - so that a base value set on a superclass remains valid for the subclasses, but when configuration runs, each class has an independent configuration. (As it turns out, no metaclass is needed: __init_subclass__ covers it.)
Another thing to take care of is that subclasses of property cannot simply be copied with Python's copy.copy (tested empirically), so we need a way to create reliable copies of those. I include one function below, but it might need to be improved to work with the actual SpecialProperty class.
from copy import copy

def copy_property(prop):
    cls = prop.__class__
    new_prop = cls.__new__(cls)
    # Initialize the attributes that can't be set from Python code, in place:
    property.__init__(new_prop, prop.fget, prop.fset, prop.fdel)
    if hasattr(prop, "__dict__"):  # only exists for subclasses of property
        # Possible adaptation needed: it may be that for some attributes of
        # SpecialProperty, a deepcopy would be needed.
        # But for the given example attribute of "fvalidate" a simple copy is better:
        new_prop.__dict__ = copy(prop.__dict__)
    return new_prop

# Python 3.6 introduced `__init_subclass__`, which is called at subclass _creation_
# time. With it, the logic can be inserted in ComponentBase and there is no need for
# a metaclass.
class ComponentBase():
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for attrname in dir(cls):
            attr = getattr(cls, attrname)
            if not isinstance(attr, SpecialProperty):
                continue
            new_prop = copy_property(attr)
            setattr(cls, attrname, new_prop)

    hardware = None
    ...
As you can see, there are some workarounds that had to be done because your project opted to subclass property. I am leaving this remark here as a reminder that unless property fits one's exact needs, it is cleaner to write a new class implementing the descriptor protocol - just implement __set__, __get__ and __delete__ directly.
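For reference, a minimal sketch of a plain descriptor along those lines (a generic illustration, not the project's actual SpecialProperty):

class ValidatedAttribute:
    """A plain descriptor: no property subclassing, no copying workarounds."""
    def __init__(self, fvalidate):
        self.fvalidate = fvalidate

    def __set_name__(self, owner, name):
        self.name = name  # attribute name on the owner class

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if not self.fvalidate(instance, value):
            raise ValueError(f"invalid value for {self.name}: {value!r}")
        instance.__dict__[self.name] = value

    def __delete__(self, instance):
        del instance.__dict__[self.name]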

Is this a valid use of metaclasses

I've been watching some videos on decorators and metaclasses and I think I understand them better now. One maxim I took away was "don't use metaclasses if you can do it more simply without using them". Some time ago I wrote a metaclass without really understanding what I was doing and I went back and reviewed it. I'm pretty certain that I've done something sensible here but I thought I'd check ....
PS: I'm mildly concerned that the Colour class is used in the metaclass definition; I feel it ought to be used at the class level, but that would complicate the code.
import webcolors

# This is a holding class for demo purposes
# actual code allows much more, eg 0.3*c0 + 0.7*c1
class Colour(list):
    def __init__(self, *arg):
        super().__init__(arg)

# define a metaclass to implement Class.name
class MetaColour(type):
    def __getattr__(cls, name):
        try:
            _ = webcolors.name_to_rgb(name)
            return Colour(_.blue, _.green, _.red)
        except ValueError as e:
            raise ValueError(f"{name} is not a valid name for a colour ({e})")
        return f"name({name})"

# a class based on the metaclass MetaColour
class Colours(metaclass=MetaColour):
    pass

print("blue = ", Colours.blue)
print("green = ", Colours.green)
print("lime = ", Colours.lime)
print("orange = ", Colours.orange)
print()
print("lilac = ", Colours.lilac)
Edit: I realise I could have written the Colour class so that Colour("red") was equivalent to Colours.red but felt at the time that using Colours.red was more elegant and added the implication that the Colour 'red' was a constant, not something that has to be looked up and can vary.
If you really need Colours to be a class, then this metaclass just does its job - and seems fine. There is no problem at all in making use of Colour inside it - there is no such rule as "metaclass code cannot make use of any 'ordinary' class"; it is Python code as usual.
The remark I'd make is that maybe you don't need to use Colours as a class at all: instead, write an ordinary class with all the functionality you need and create a single instance of it. The remainder of the code then uses this instance instead of the Colours class.
Yes, a single instance is the "singleton pattern" - but unlike some complicated code you can find around on how to make your class "be a singleton" (including some widely spread bad-practice about needing a metaclass to have a singleton in Python), you can just create the instance, assign it to a name, and be done with it. Just like in your example you have the "webcolors" object you are using.
For an extra-singleton bonus, you can make your single instance of Colours be named Colours, and shadow the class, preventing any accidental use of the class instead of the instance.
(And, although it might be obvious, for sake of completeness: in the "use Colours as an instance" case there is no need for this metaclass at all - the same __getattr__ method goes into the class body)
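A minimal sketch of that instance-based variant, reusing the webcolors lookup from the question (the name-shadowing trick is the point being illustrated):

import webcolors

class Colour(list):
    def __init__(self, *arg):
        super().__init__(arg)

class Colours:
    def __getattr__(self, name):
        try:
            rgb = webcolors.name_to_rgb(name)
        except ValueError as e:
            raise ValueError(f"{name} is not a valid name for a colour ({e})")
        return Colour(rgb.blue, rgb.green, rgb.red)

# Shadow the class with its single instance, as suggested above;
# user code only ever sees the instance.
Colours = Colours()

print("blue = ", Colours.blue)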
Of course, again, if you have uses for Colours as a class, there is no problem with this design.

How to make new instance creation more generic in sqlalchemy (python)?

I have a class called House()
To make a new instance of House I can pass data like
house = House(roof = roof)
To standardize how Houses get created (and make use of type annotations) House has a .new() static method that looks like:
class House():
    @staticmethod
    def new(roof: Roof):
        house = House(roof=roof)
        # do other stuff for the new method, i.e. may add to session, etc.
        return house
However, this is kind of annoying because if house has say 10 attributes, it means there's a lot of copy and paste. For example here, to use keyword args (which is preferred), 'roof' is repeated 3 times.
The docs state that __init__ is not called when SQLAlchemy recreates objects (e.g. when loading from the database), but I feel a bit strange overriding it in favour of a .new() method - or is this correct?
Also I feel like __init__ doesn't really solve the generic concern. I'm looking for the best of both worlds, where the existing defined properties are available on init, but I can also define logic that's different for each class.
I'm thinking along the lines of setting attributes from kwargs, maybe, but that's not exactly it?
An assumption is that there will always be some attributes that are not needed. Part of this is enforcing that say a house always needs a roof, but an attribute like owner doesn't need to be populated when it's first created. But if a new developer joins the team and calls House.new() they should be able to see that definition of what attributes are needed.
For example, I can pass roof = Column(..., default=1).
Is there a way in Column or similar to say something like required?
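For illustration, a minimal sketch of the pattern being described, assuming SQLAlchemy 1.4+ declarative mapping; the table, columns, and classmethod below are placeholder assumptions rather than an established recipe:

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class House(Base):
    __tablename__ = "houses"
    id = Column(Integer, primary_key=True)
    roof = Column(String, nullable=False)   # required at the database level
    owner = Column(String, nullable=True)   # optional on creation

    @classmethod
    def new(cls, roof: str) -> "House":
        # The signature documents which attributes are required up front;
        # optional attributes can be filled in later.
        return cls(roof=roof)

house = House.new(roof="tile")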

Inheritance - proper way to create a new instance of a class with a class method

I'm God (or evolution, or whatever you believe in). I am trying to create all living things with Python.
I have defined a class that is able to "reproduce", i.e. is able to create a new instance of itself (ignore the fact that it looks more like cloning than reproduction; this is for a beta version of Earth):
class Animal:
    def __init__(self, **kwargs):
        self.characteristics = kwargs

    def reproduce(self):
        return Animal(**self.characteristics)
This works fine in the case of a base class, but what happens when I create a class that inherits from Animal?
class Fox(Animal):
    def __init__(self, **kwargs):
        self.color = 'red'
        super().__init__(color=self.color, **kwargs)
If a Fox tries to reproduce, I will have an instance of type Animal rather than Fox (even though it still has the color 'red').
I could override the method so a Fox is able to reproduce:
def reproduce(self):
    return Fox(**self.characteristics)
However, I would have to do that for every new creature I define!
How can I create a class from which I could make all my creatures inherit so when they reproduce an object of the same class is created? So that I could be sure that:
parent = Fox()
child = parent.reproduce()
assert type(parent) == type(child)
I know I can use type to make reproduce return type(self)(**self.characteristics) or self.__class__(**self.characteristics), but it does not seem very Pythonic to me. Is there a more proper way to do this?
Note: you changed your question from one where your subclasses took different numbers of arguments. If you stick to such a design, then you have no choice here but to override reproduce(), because there is no consistent API to create a new instance of the 'current class'.
If you were to standardise your class API, you can then also standardise creating new instances, at which point you can write a reproduce() method that just takes type(self) to reference the current class and then proceed to make a new instance of this class.
Note that having to write a new reproduce() method for each subclass is a good option too, because it delegates the creation of specialised instances to the subclasses themselves. You give each subclass the responsibility of handling the details of reproduction.
But if you don't want to do that, then you take away that responsibility from the subclass and put it in the base class, at which point the base design of how you create instances also is the responsibility of that base class.
There are middle grounds between those two options, of course, but all of them come down to some form of delegation. You could have the base classes provide some kind of structure that details what attributes should be copied across when creating an instance, you could have subclasses implement the __copy__ or __deepcopy__ hooks to handle 'reproduction' via copy.copy() or copy.deepcopy() calls, etc.
Your updated question structure is just another example of that delegation; you added a characteristics dictionary, so subclasses are responsible for keeping that dictionary updated so that the base class can implement reproduction as:
def reproduce(self):
    return type(self)(**self.characteristics)
That's perfectly Pythonic, but more because this is a decent OO design where you have made choices to minimise what subclasses are responsible for and have the base class do as much of the reproducing as possible.
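Putting that together, a minimal runnable sketch of the standardised design (condensed from the code above; the kwargs.setdefault detail is my own simplification):

class Animal:
    def __init__(self, **kwargs):
        self.characteristics = kwargs

    def reproduce(self):
        # type(self) is the concrete subclass, so a Fox reproduces as a Fox.
        return type(self)(**self.characteristics)

class Fox(Animal):
    def __init__(self, **kwargs):
        kwargs.setdefault('color', 'red')
        super().__init__(**kwargs)

parent = Fox()
child = parent.reproduce()
assert type(parent) == type(child)
assert child.characteristics['color'] == 'red'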

How can I combine (or make operations) between attributes of different classes (without specifying instances yet)?

First of all, I am a total newbie. Thanks for your patience.
I am designing a piece of software to calculate insulation materials and amounts on different houses.
I have a class House(), which holds attributes like roof_area and wall_area.
I have a class Insulator(), which holds attributes like thickness and area (the area the packaged material covers)
Now I want to know how many packages of the insulator I should buy in order to cover the whole roof area.
So, the operation would be:
House.roof_area / Insulator.area = insulator_packages_needed_for_roof
The thing is I can't do that operation:
AttributeError: type object 'House' has no attribute 'roof_area'.
Of course I could do it at the instance scope, but I don't want to specify an instance yet, as this operation should work for any instance of the class that gets built in the future. Should I use inheritance? My feeling is that, given that Insulator and House are totally different things, they shouldn't be mixed by inheritance, but I am just a beginner.
It doesn't make any sense to try to compute the number of insulation packages you need to cover the roof of a house, without using any instances of your House or Insulator classes. It only makes sense if you have one instance of each.
You can, however, write the code to do the calculation before you've created the instances. Just put it in a function that takes the instances as arguments:
def roof_insulation_packages(house, insulator): # args are instances of House and Insulator
return house.roof_area / insulator.area # do the calculation and return it
It might make more sense for the function to be a method of one of the classes. I'd even suggest that Insulator instances might be good candidates to be instance attributes of the House. That would look something like this:
class House():
    def __init__(self, roof_area, roof_insulator, wall_area, wall_insulator):
        self.roof_area = roof_area
        self.roof_insulator = roof_insulator
        self.wall_area = wall_area
        self.wall_insulator = wall_insulator

    def calculate_roof_insulation_packages(self):
        return self.roof_area / self.roof_insulator.area

    def calculate_wall_insulation_packages(self):
        return self.wall_area / self.wall_insulator.area
You'd create the house instance with something like this (I'm making up the arguments to the Insulator class, so don't pay too much attention to that part):
good_roof_insulation = Insulator(4, 5) # nonsense args
cheap_wall_insulation = Insulator(5, 12)
my_house = House(100, good_roof_insulation, 87, cheap_wall_insulation)
If you want to use attributes without creating an instance you should use class attributes.
class House(object):
    roof_area = 10  # Or whatever you see fit.
    wall_area = 20

class Insulator(object):
    thickness = 10  # Or whatever you see fit.
    area = 20
This way you can access the attributes by typing 'House.roof_area' for example.
Although, I don't see why you cannot create an instance. It would prevent hardcoding values in class attributes and would, in your case, be much easier.
Also, your operation is not valid syntax, but maybe you just showed pseudo-code. Proper syntax would be:
insulator_packages_needed_for_roof = House.roof_area / Insulator.area
