Is it possible to invert inheritance using a Python metaclass? - python

Out of curiosity, I'm interested in whether it's possible to write a metaclass that causes methods of parent classes to take precedence over methods of subclasses. I'd like to play around with it for a while. Overriding methods would no longer be possible; the base class would have to call the subclass's method explicitly, for example through a reference such as self.subclass in the sketch below.
class Base(metaclass=InvertedInheritance):
    def apply(self, param):
        print('Validate parameter')
        result = self.subclass.apply(param)
        print('Validate result')
        return result

class Child(Base):
    def apply(self, param):
        print('Compute')
        result = 42 * param
        return result

child = Child()
child.apply(2)
With the output:
Validate parameter
Compute
Validate result

If you only care about making lookups on instances go in reverse order (not classes), you don't even need a metaclass. You can just override __getattribute__:
class ReverseLookup:
    def __getattribute__(self, attr):
        if attr.startswith('__'):
            return super().__getattribute__(attr)
        cls = self.__class__
        if attr in self.__dict__:
            return self.__dict__[attr]
        # Using [-3::-1] skips the topmost two base classes, which will be
        # ReverseLookup and object
        for base in cls.__mro__[-3::-1]:
            if attr in base.__dict__:
                value = base.__dict__[attr]
                # handle descriptors
                if hasattr(value, '__get__'):
                    return value.__get__(self, cls)
                else:
                    return value
        raise AttributeError("Attribute {} not found".format(attr))

class Base(ReverseLookup):
    def apply(self, param):
        print('Validate parameter')
        result = self.__class__.apply(self, param)
        print('Validate result')
        return result

class Child(Base):
    def apply(self, param):
        print('Compute')
        result = 42 * param
        return result
>>> Child().apply(2)
Validate parameter
Compute
Validate result
84
This mechanism is relatively simple because lookups on the class aren't in reverse:
>>> Child.apply
<function Child.apply at 0x0000000002E06048>
This makes it easy to get a "normal" lookup just by doing it on a class instead of an instance. However, it could cause confusion in other cases, for example if a base class method tries to access a different method on the subclass, but that method doesn't actually exist on that subclass; in this case lookup will proceed in the normal direction and possibly find the method on a higher class. In other words, when doing this you have to be sure that you don't look any methods up on a class unless you're sure they're defined on that specific class.
There may well be other corner cases where this approach doesn't work. In particular you can see that I jury-rigged descriptor handling; I wouldn't be surprised if it does something weird for descriptors with a __set__, or for more complicated descriptors that make more intense use of the class/object parameters passed to __get__. Also, this implementation falls back on the default behavior for any attributes beginning with two underscores; changing this would require careful thought about how it's going to work with magic methods like __init__ and __repr__.
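To make that pitfall concrete, here is a minimal illustration of my own (reusing the ReverseLookup class from above; validate is a hypothetical method name the base class expects the subclass to provide):

class Base(ReverseLookup):
    def validate(self, param):
        print('Base validation (probably not what was intended)')

    def apply(self, param):
        # Class-level lookup is not reversed, so this is intended to reach
        # a subclass's validate; if the subclass doesn't define one, normal
        # lookup quietly walks up and finds Base.validate instead.
        self.__class__.validate(self, param)
        return 42 * param

class Child(Base):
    pass  # forgot to define validate

Child().apply(2)  # prints 'Base validation (probably not what was intended)'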

Related

Wrapper class for method chaining in python & handling dunder methods

I'm hoping to write a wrapper class that would make method calls of a given object return self instead of None to allow for method chaining.
TLDR: I have figured out the above goal for regular methods but can't figure out how to handle dunder methods (e.g., __add__, etc) since __getattr__ doesn't get called with dunder methods.
Given that __getattr__ only receives the attribute name and not the call arguments, I've so far made a class called Chain that allows method chaining for regular methods:
class Chain:
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, attr):
        if hasattr(self.obj, attr):
            obj_attr = getattr(self.obj, attr)
            if callable(obj_attr):
                return lambda *args, **kwargs: \
                    self.newMethod(obj_attr, *args, **kwargs)
            else:
                return obj_attr
        else:
            raise AttributeError(
                f"Chain: Object of type {type(self.obj)} "
                f"has no attribute {attr}"
            )

    def newMethod(self, method, *args, **kwargs):
        output = method(*args, **kwargs)
        if output is None:
            return self
        else:
            return output
A trivial example of how this could work:
class A:
    def __init__(self):
        self.num = 0

    def incrementNum(self, num):
        self.num += num

# original code
obj = A()
obj.incrementNum(1)
obj.incrementNum(2)
obj.incrementNum(3)
print("without Chain:", obj.num)

# with wrapper
new_obj = Chain(A())
new_obj.incrementNum(1) \
    .incrementNum(2) \
    .incrementNum(3)
print("with Chain:", new_obj.num)
Obviously, for user-defined classes, the Chain class is superfluous since you can just change the class definition to always return self where applicable. I'm making Chain for use with tkinter classes like ttk.Menu etc since I don't want to mess with the tkinter source code.
Note that in the Chain class, it's not just the object that's returned, but instead it's the instance of Chain that wraps said object, so it can be method-chained more than once.
However, say that I'm hoping to Chain-wrap a class that implements __add__ or some other class-defined operators. This is where I'm having issues, since I cannot intercept the output of __add__ using __getattr__. Yes, I could of course implement __add__ in Chain, but if I wanted to be really thorough, I'd also have to implement every other operator that could show up in a wrapped class. I thought that maybe __getattribute__ would be the way to do it, but it doesn't catch dunder methods either. Any ideas how I can work around this?
Yes, I know I'm doing a lot of acrobatics just to implement method chaining. It might not be worth it, but it would be cool to augment a given class so that it behaves exactly the same as before except that its methods return the wrapper. However, I suppose it's also possible that __getattr__ doesn't catch dunder methods for a reason I don't know about.
Side comment: I should probably also implement __setattr__ and __delattr__ in Chain so that its instances can be treated just like the objects they're wrapping. I've left that out here for brevity.
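The reason __getattr__ (and even __getattribute__) never sees these calls is that implicit special-method lookups bypass the instance and go straight to the type. One workaround, sketched below with my own names (make_chain, and a deliberately abbreviated dunder list), is to generate a Chain subclass per wrapped type that forwards whichever dunders the wrapped object's class actually defines:

def make_chain(obj):
    forwarded = {}
    # Abbreviated list for illustration; a thorough version would enumerate
    # all the operator dunders you care about
    for name in ('__add__', '__sub__', '__mul__', '__len__', '__getitem__'):
        if hasattr(type(obj), name):
            def forward(self, *args, _name=name, **kwargs):
                # Delegate to the wrapped object's dunder; keep chaining on None
                result = getattr(self.obj, _name)(*args, **kwargs)
                return self if result is None else result
            forwarded[name] = forward
    return type('Chain_' + type(obj).__name__, (Chain,), forwarded)(obj)

nums = make_chain([1, 2])
print(len(nums))   # 2, via the generated __len__
print(nums + [3])  # [1, 2, 3], via the generated __add__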

Inheritance from metaclasses in Python

I've got simple metaclass, that turns methods of classes starting with "get_" to properties:
class PropertyConvertMetaclass(type):
    def __new__(mcs, future_class_name, future_class_parents, future_class_attr):
        new_attr = {}
        for name, val in future_class_attr.items():
            if not name.startswith('__'):
                if name.startswith('get_'):
                    new_attr[name[4:]] = property(val)
                else:
                    new_attr[name] = val
        return type.__new__(mcs, future_class_name, future_class_parents, new_attr)
Imagine I have TestClass:
class TestClass():
    def __init__(self, x: int):
        self._x = x

    def get_x(self):
        print("this is property")
        return self._x
I want it to work like this: I create some new class that kinda inherits from them both
class NewTestClass(TestClass, PropertyConvertMetaclass):
    pass
and I could reuse their both methods like this:
obj = NewTestClass(8)
obj.get_x() # 8
obj.x # 8
As I take it, I should create a new class, lets name it PropertyConvert and make NewTestClass inherit from It:
class PropertyConvert(metaclass=PropertyConvertMetaclass):
    pass

class NewTestClass(TestClass, PropertyConvert):
    pass
But it doesn't help; I still can't use the new property methods with NewTestClass. How can I make PropertyConvert inherit all the methods from its sibling, without doing anything inside NewTestClass and changing only PropertyConvertMetaclass or PropertyConvert? I'm new to metaclasses, so I'm sorry if this question seems silly.
When you write class TestClass():, the body of the class is run in a namespace which becomes the class __dict__. The metaclass just informs the construction of that namespace via __new__ and __init__. In this case, you have set up the metaclass of TestClass to be type.
When you inherit from TestClass, e.g. with class NewTestClass(TestClass, PropertyConvert):, the version of PropertyConvertMetaclass you wrote operates on the __dict__ of NewTestClass only. TestClass has already been created at that point, with no properties, because its metaclass was type, and the child class is empty, so you see no properties.
There are a couple of possible solutions here. The simpler one, but out of reach because of your assignment, is to do class TestClass(metaclass=PropertyConvertMetaclass):. All children of TestClass will have PropertyConvertMetaclass and so all getters will be converted to properties.
The alternative is to look carefully at the arguments of PropertyConvertMetaclass.__new__. Under normal circumstances, you only operate on the future_class_attr argument. However, you have access to future_class_parents as well. If you want to upgrade the immediate siblings of PropertyConvert, that's all you need:
class PropertyConvertMetaclass(type):
    def __new__(mcs, future_class_name, future_class_parents, future_class_attr):
        # The loop is the same for each base __dict__ as for future_class_attr,
        # so factor it out into a function
        def update(d):
            # Iterate over a list copy: we may add properties to
            # future_class_attr while walking it
            for name, value in list(d.items()):
                # Don't check for dunders: a dunder can't start with `get_`
                if name.startswith('get_') and callable(value):
                    prop = name[4:]
                    # Getter and setter can't be defined in separate classes
                    if 'set_' + prop in d and callable(d['set_' + prop]):
                        setter = d['set_' + prop]
                    else:
                        setter = None
                    if 'del_' + prop in d and callable(d['del_' + prop]):
                        deleter = d['del_' + prop]
                    else:
                        deleter = None
                    future_class_attr[prop] = property(value, setter, deleter)
        update(future_class_attr)
        for base in future_class_parents:
            # Won't work well with __slots__ or custom __getattr__
            update(base.__dict__)
        return super().__new__(mcs, future_class_name, future_class_parents, future_class_attr)
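As a quick sanity check (my own usage example, assuming the TestClass from the question):

class PropertyConvert(metaclass=PropertyConvertMetaclass):
    pass

class NewTestClass(TestClass, PropertyConvert):
    pass

obj = NewTestClass(8)
print(obj.get_x())  # this is property / 8
print(obj.x)        # this is property / 8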
This is probably adequate for your assignment, but lacks a certain amount of finesse. Specifically, there are two deficiencies that I can see:
There is no lookup beyond the immediate base classes.
You can't define a getter in one class and a setter in another.
To address the first issue, you will have to traverse the MRO of the class. As @jsbueno suggests, this is easier to do on the fully constructed class using __init__ rather than on the pre-class dictionary. I would solve the second issue by making a table of available getters and setters before making any properties. You could also make the properties respect the MRO by doing this. The only complication with using __init__ is that you have to call setattr on the class rather than simply updating its future __dict__.
class PropertyConvertMetaclass(type):
    def __init__(cls, class_name, class_parents, class_attr):
        super().__init__(class_name, class_parents, class_attr)
        getters = set()
        setters = set()
        deleters = set()
        for base in cls.__mro__:
            for name, value in base.__dict__.items():
                if name.startswith('get_') and callable(value):
                    getters.add(name[4:])
                if name.startswith('set_') and callable(value):
                    setters.add(name[4:])
                if name.startswith('del_') and callable(value):
                    deleters.add(name[4:])
        for name in getters:
            # Bind name as a default argument; otherwise every property
            # would close over the same loop variable
            def getter(self, *args, name=name, **kwargs):
                return getattr(super(cls, self), 'get_' + name)(*args, **kwargs)
            if name in setters:
                def setter(self, *args, name=name, **kwargs):
                    return getattr(super(cls, self), 'set_' + name)(*args, **kwargs)
            else:
                setter = None
            if name in deleters:
                def deleter(self, *args, name=name, **kwargs):
                    return getattr(super(cls, self), 'del_' + name)(*args, **kwargs)
            else:
                deleter = None
            setattr(cls, name, property(getter, setter, deleter))
Anything that you do in the __init__ of a metaclass can just as easily be done with a class decorator. The main difference is that the metaclass will apply to all child classes, while a decorator only applies where it is used.
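For illustration, a rough decorator equivalent of the metaclass above might look like this (my sketch, not part of the original answer); it scans the finished class's MRO and installs properties on the decorated class only:

def convert_properties(cls):
    getters, setters, deleters = {}, {}, {}
    # Walk the MRO base-first so more derived definitions win
    for base in reversed(cls.__mro__):
        for name, value in base.__dict__.items():
            if not callable(value):
                continue
            if name.startswith('get_'):
                getters[name[4:]] = value
            elif name.startswith('set_'):
                setters[name[4:]] = value
            elif name.startswith('del_'):
                deleters[name[4:]] = value
    for prop, getter in getters.items():
        setattr(cls, prop, property(getter, setters.get(prop), deleters.get(prop)))
    return cls

@convert_properties
class DecoratedTestClass(TestClass):
    pass

print(DecoratedTestClass(8).x)  # this is property / 8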
There is nothing "impossible" there.
It is a problem that, however unusual, can be solved with metaclasses.
Your approach is good - the problem you got is that when you look into future_class_attr (also known as the namespace in the class body), it only contains the methods and attributes for the class currently being defined. In your example, NewTestClass is empty, and so is future_class_attr.
The way to overcome that is to check instead on all base classes, looking for the methods that match the pattern you are looking for, and then creating the appropriate property.
Doing this correctly before creating the target class would be tricky - for one, you would have to do attribute searching in the correct MRO (method resolution order) of all superclasses - and there can be a lot of corner cases. (But note it is not "impossible", nonetheless.)
But nothing prevents you from doing that after creating the new class. For that, you can just assign the return value of super().__new__(mcls, ...) to a variable (by the way, prefer super().__new__ over hardcoding type.__new__: this allows your metaclass to be collaborative and be combined with, say, abc.ABC or enum.Enum). That variable is then your completed class, and you can use dir on it to check for all attribute and method names, already consolidating all superclasses - then just create your new properties and assign them to the newly created class with setattr(cls_variable, property_name, property_object).
Better yet, write the metaclass __init__ instead of its __new__ method: you receive the new class already created, and can proceed to introspect it with dir and add the properties immediately. (Don't forget to call super().__init__(...) even though your class doesn't need it.)
Also, note that since Python 3.6, the same results can be achieved with no metaclass at all, if one just implements the needed logic in the __init_subclass__ method of a base class.
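For completeness, a minimal sketch of that __init_subclass__ route (my own code, again assuming the TestClass from the question):

class PropertyConvert:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # dir() consolidates all superclasses, as suggested above
        for name in dir(cls):
            if name.startswith('get_'):
                getter = getattr(cls, name)
                if callable(getter):
                    setattr(cls, name[4:], property(getter))

class NewTestClass(TestClass, PropertyConvert):
    pass

print(NewTestClass(8).x)  # this is property / 8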
One solution to my problem is parsing the parents' dicts in PropertyConvertMetaclass:
class PropertyConvertMetaclass(type):
    def __new__(mcs, future_class_name, future_class_parents, future_class_attr):
        new_attr = {}
        for parent in future_class_parents:
            for name, val in parent.__dict__.items():
                if not name.startswith('__'):
                    if name.startswith('get_'):
                        # .get() so a getter without a matching setter
                        # doesn't raise KeyError
                        new_attr[name[4:]] = property(val, parent.__dict__.get('set_' + name[4:]))
                    new_attr[name] = val
        for name, val in future_class_attr.items():
            if not name.startswith('__'):
                if name.startswith('get_'):
                    new_attr[name[4:]] = property(val, future_class_attr.get('set_' + name[4:]))
                new_attr[name] = val
        return type.__new__(mcs, future_class_name, future_class_parents, new_attr)

dynamic inheritance in Python through a decorator

I found this post where a function is used to inherit from a class:
def get_my_code(base):
    class MyCode(base):
        def initialize(self):
            ...
    return MyCode

my_code = get_my_code(ParentA)
I would like to do something similar, but with a decorator, something like:
@decorator(base)
class MyClass(base):
    ...
Is this possible?
UPDATE
Say you have a class Analysis that is used throughout your code. Then you realize that you want to use a wrapper class Transient that is just a time loop on top of the analysis class. If in the code I replace the analysis class with Transient(Analysis), everything breaks because an analysis class is expected, and thus all its attributes. The problem is that I can't just define class Transient(Analysis) once and for all, because there are plenty of analysis classes. I thought the best way to do this would be some sort of dynamic inheritance. Right now I use aggregation to redirect the functionality to the analysis class inside Transient.
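For what it's worth, the aggregation can also be replaced by a small factory that builds the subclass at runtime (a sketch with hypothetical names; StaticAnalysis and the three-step loop are placeholders):

class StaticAnalysis:  # stand-in for one of the many analysis classes
    def run(self):
        print('running analysis')

def make_transient(analysis_cls):
    class Transient(analysis_cls):
        def run(self):
            for _step in range(3):  # placeholder for the real time loop
                super().run()
    Transient.__name__ = 'Transient' + analysis_cls.__name__
    return Transient

TransientStatic = make_transient(StaticAnalysis)
TransientStatic().run()  # runs the analysis once per time step

Since the result is a genuine subclass, isinstance checks against the analysis class keep working, which addresses the "an analysis class is expected" problem.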
A class decorator actually gets the class already built and instantiated (as a class object). It can perform changes on its dict, and even wrap its methods with other decorators.
However, it means the class already has its bases set, and these can't ordinarily be changed. That implies you have to somehow rebuild the class inside the decorator code.
However, if the class's methods make use of parameterless super or the __class__ cell variable, those are already set in the member functions (which in Python 3 are the same as unbound methods), so you can't just create a new class and set those methods as members on the new one.
So, there might be a way, but it will be non-trivial. And as I pointed out in the comment above, I'd like to understand what you'd like to achieve with this, since one could just put the base class on the class declaration itself, instead of using it in the decorator configuration.
I've crafted a function that, as described above, creates a new class, "cloning" the original, and can rebuild all methods that use __class__ or super: it returns a new class which is functionally identical to the original one, but with the bases exchanged. If used in a decorator as requested (decorator code included), it will simply change the class bases. It can't handle decorated methods (other than classmethod and staticmethod), and doesn't take care of naming details such as qualnames or reprs for the methods.
from types import FunctionType

def change_bases(cls, bases, metaclass=type):
    class Changeling(*bases, metaclass=metaclass):
        def breeder(self):
            __class__  # noQA

    cell = Changeling.breeder.__closure__
    del Changeling.breeder
    Changeling.__name__ = cls.__name__
    for attr_name, attr_value in cls.__dict__.items():
        if attr_name in ('__dict__', '__weakref__'):
            # these descriptors are created with the class and can't be copied
            continue
        if isinstance(attr_value, (FunctionType, classmethod, staticmethod)):
            if isinstance(attr_value, staticmethod):
                func = getattr(cls, attr_name)
            elif isinstance(attr_value, classmethod):
                func = attr_value.__func__
            else:
                func = attr_value
            # TODO: check if func is wrapped in decorators and recreate the inner
            # function. Although reapplying arbitrary decorators is not actually
            # possible, it is possible to have a "prepare_for_changeling"
            # innermost decorator which could be made to point to the new function.
            if func.__closure__ and func.__closure__[0].cell_contents is cls:
                franken_func = FunctionType(
                    func.__code__,
                    func.__globals__,
                    func.__name__,
                    func.__defaults__,
                    cell
                )
                if isinstance(attr_value, staticmethod):
                    func = staticmethod(franken_func)
                elif isinstance(attr_value, classmethod):
                    func = classmethod(franken_func)
                else:
                    func = franken_func
            setattr(Changeling, attr_name, func)
            continue
        setattr(Changeling, attr_name, attr_value)
    return Changeling

def decorator(bases):
    if not isinstance(bases, tuple):
        bases = (bases,)
    def stage2(cls):
        return change_bases(cls, bases)
    return stage2
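A quick illustration of the decorator in use (my own example, not from the original answer):

class LoudBase:
    def greet(self):
        print('Hello from LoudBase')

@decorator(LoudBase)
class Greeter:
    def hello(self):
        super().greet()  # parameterless super survives the base swap

Greeter().hello()  # -> Hello from LoudBase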

How to replace/bypass a class property?

I would like to have a class with an attribute attr that, when accessed for the first time, runs a function and returns a value, and then becomes this value (its type changes, etc.).
A similar behavior can be obtained with:
class MyClass(object):
    @property
    def attr(self):
        try:
            return self._cached_result
        except AttributeError:
            result = ...
            self._cached_result = result
            return result

obj = MyClass()
print(obj.attr)  # First calculation
print(obj.attr)  # Cached result is used
However, .attr does not become the initial result when doing this. It would be more efficient if it did.
A difficulty is that after obj.attr is set to a property, it cannot be set easily to something else, because infinite loops appear naturally. Thus, in the code above, the obj.attr property has no setter so it cannot be directly modified. If a setter is defined, then replacing obj.attr in this setter creates an infinite loop (the setter is accessed from within the setter). I also thought of first deleting the setter so as to be able to do a regular self.attr = …, with del self.attr, but this calls the property deleter (if any), which recreates the infinite loop problem (modifications of self.attr anywhere generally tend to go through the property rules).
So, is there a way to bypass the property mechanism and replace the bound property obj.attr by anything, from within MyClass.attr.__getter__?
This looks a bit like premature optimization: you want to skip a method call by making a descriptor change itself.
It's perfectly possible, but it would have to be justified.
To modify the descriptor from your property, you'd have to be editing your class, which is probably not what you want.
I think a better way to implement this would be to:
not define obj.attr at all
override __getattr__: if the argument is "attr", set obj.attr = new_value and return it; otherwise raise AttributeError
As soon as obj.attr is set, __getattr__ will not be called any more, as it is only called when the attribute does not exist. (__getattribute__ is the one that would get called all the time.)
The main difference from your initial proposal is that the first attribute access is slower, because of the method call overhead of __getattr__, but then it will be as fast as a regular __dict__ lookup.
Example:
class MyClass(object):
    def __getattr__(self, name):
        if name == 'attr':
            self.attr = ...
            return self.attr
        raise AttributeError(name)

obj = MyClass()
print(obj.attr)  # First calculation
print(obj.attr)  # Cached result is used
EDIT: Please see the other answer, especially if you use Python 3.6 or later.
For new-style classes, which utilize the descriptor protocol, you could do this by creating your own custom descriptor class whose __get__() method will be called at most one time. When that happens, the result is then cached by creating an instance attribute with the same name the class method has.
Here's what I mean.
from __future__ import print_function

class cached_property(object):
    """Descriptor class for making class methods lazily evaluated and caching the result."""
    def __init__(self, func):
        self.func = func

    def __get__(self, inst, cls):
        if inst is None:
            return self
        else:
            value = self.func(inst)
            setattr(inst, self.func.__name__, value)
            return value

class MyClass(object):
    @cached_property
    def attr(self):
        print('doing long calculation...', end='')
        result = 42
        return result
obj = MyClass()
print(obj.attr) # -> doing long calculation...42
print(obj.attr) # -> 42
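Since Python 3.8 the standard library ships essentially this pattern as functools.cached_property; like the hand-rolled descriptor above, it stores the computed value in the instance __dict__ so later lookups bypass the descriptor entirely:

from functools import cached_property

class MyClass:
    @cached_property
    def attr(self):
        print('doing long calculation...', end='')
        return 42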

Mimic Python's NoneType

I'm creating several classes to use them as state flags (this is more of an exercise, though I'm going to use them in a real project), just like we use None in Python, i.e.
... some_var is None ...
NoneType has several special properties; most importantly it's a singleton, that is, there can't be more than one NoneType instance during any interpreter session, and its instances (None objects) are immutable. I've come up with two possible ways to implement somewhat similar behaviour in pure Python, and I'm eager to know which one looks better from the architectural standpoint.
1. Don't use instances at all.
The idea is to have a metaclass that produces immutable classes. The classes are prohibited from having instances.
class FlagMetaClass(type):
    def __setattr__(self, *args, **kwargs):
        raise TypeError("{} class is immutable".format(self))

    def __delattr__(self, *args, **kwargs):
        self.__setattr__()

    def __repr__(self):
        return self.__name__

class BaseFlag(object):
    __metaclass__ = FlagMetaClass

    def __init__(self):
        raise TypeError("Can't create {} instances".format(type(self)))

    def __repr__(self):
        return str(type(self))

class SomeFlag(BaseFlag):
    pass
And we get the desired behaviour
a = BaseFlag
a is BaseFlag # -> True
a is SomeFlag # -> False
Obviously any attempt to set attributes on these classes will fail (of course there are several hacks to overcome this, but the direct way is closed). And the classes themselves are unique objects loaded in a namespace.
2. A proper singleton class
class FlagMetaClass(type):
    _instances = {}

    def __call__(cls):
        if cls not in cls._instances:
            cls._instances[cls] = super(FlagMetaClass, cls).__call__()
        return cls._instances[cls]
        # This may be slightly modified to raise an error instead of
        # returning the same object, e.g.
        # def __call__(cls):
        #     if cls in cls._instances:
        #         raise TypeError("Can't have more than one {} instance".format(cls))
        #     cls._instances[cls] = super(FlagMetaClass, cls).__call__()
        #     return cls._instances[cls]

    def __setattr__(self, *args, **kwargs):
        raise TypeError("{} class is immutable".format(self))

    def __delattr__(self, *args, **kwargs):
        self.__setattr__()

    def __repr__(self):
        return self.__name__

class BaseFlag(object):
    __metaclass__ = FlagMetaClass
    __slots__ = []

    def __repr__(self):
        return str(type(self))

class SomeFlag(BaseFlag):
    pass
Here the Flag classes are real singletons. This particular implementation doesn't raise an error when we try to create another instance, but returns the same object (though it's easy to alter this behaviour). Both classes and instances can't be directly modified. The point is to create an instance of each class upon import like it's done with None.
Both approaches give me somewhat immutable unique objects that can be used for comparison just like None. To me the second one looks more NoneType-like, since None is an instance, but I'm not sure it's worth the increase in ideological complexity. Looking forward to hearing from you.
Theoretically, it's an interesting exercise. But when you say "though I'm going to use them in a real project" then you lose me.
If the real project is highly unPythonic (using traits or some other package to emulate static typing, using __slots__ to keep people from falling on sharp objects, etc.) -- well, I've got nothing for you, because I've got no use for that, but others do.
If the real project is Pythonic, then do the simplest thing possible.
Your "not use instances at all" answer is the correct one here, but you don't need to do a lot of class definition, either.
For example, if you have a function that could accept None as a real parameter, and you want to tell if the parameter has been defaulted, then just do this:
class NoParameterGiven:
    pass

def my_function(my_parameter=NoParameterGiven):
    if my_parameter is NoParameterGiven:
        <do all my default stuff>
That class is so cheap, there's no reason even to share it between files. Just create it where you need it.
Your state classes are a different story, and you might want to use something like the enum module that @Dunes mentioned; it has some nice features.
OTOH, if you want to keep it really simple, you could just do something like this:
class MyStates:
    class State1: pass
    class State2: pass
    class State3: pass
You don't need to instantiate anything, and you can refer to them like this: MyStates.State1.
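If you do reach for the enum module mentioned above (Python 3.4+), a rough equivalent of those state flags might be (my sketch):

from enum import Enum

class State(Enum):
    STATE1 = 1
    STATE2 = 2
    STATE3 = 3

current = State.STATE1
print(current is State.STATE1)  # True; members are singletons
print(current is State.STATE2)  # False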
