Metaclass and syntax in Python

I'm trying to make something like this:
class oObject(object):
    def __init__(self, x=0, y=0, z=0):
        self.x = x
        self.y = y
        self.z = z

    def asString(self, value):
        return str(value)

vector = oObject(5, 5, 5)

# So I can do
asString(vector.x)

# But I want this kind of syntax
vector.x.asString()
It's just an example, I don't really want to convert an integer into a string. It's more about a class inside a class.

You could either write a custom method for your oObject class that returns the string of the given key, or maybe you could write a custom Variant class and wrap your values:
class oObject(object):
    def __init__(self, x=0, y=0, z=0):
        self.x = Variant(x)
        self.y = Variant(y)
        self.z = Variant(z)

class Variant(object):
    def __init__(self, obj):
        self._obj = obj

    def __repr__(self):
        return '<%s: %s>' % (self.__class__.__name__, self.asString())

    def __str__(self):
        return self.asString()

    def asString(self):
        return str(self._obj)

    def value(self):
        return self._obj
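With the wrapper in place, the syntax from the question works; a quick session for illustration (assuming both classes above have been defined):
>>> vector = oObject(5, 5, 5)
>>> vector.x.asString()
'5'
>>> vector.x
<Variant: 5>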
Check out this reference as to how PyQt4 does it with the QVariant class, which is actually from Qt. Normally Python wouldn't need this type, but it was necessary for C++ to represent multiple types.

You can, but shouldn't, do this kind of thing in Python.
What you can do, however, is implement the standard __str__ method in the class; that is the code that will be used when converting an instance to a string with str(instance).
Technically you can play a lot of tricks in Python, bending the syntax to whatever you are used to, but this is a bad idea: a lot of effort has been put into making Python readable, and you would basically be destroying that work.
In Python, conversion to string is done by str(x), not by calling a method named asString. Using __str__ you can already customize what str returns, so why add a method? If you need a way to do a custom string conversion, then just define a function dispatching on the object type instead of trying to inject new methods into existing classes:
converters = dict()

def add_converter(klass, f):
    converters[klass] = f

def default_converter(x):
    return "<%s: %s>" % (x.__class__.__name__, str(x))

def mystr(x):
    return converters.get(x.__class__, default_converter)(x)
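For instance, a minimal sketch using the registry above (the Point class is made up for illustration):

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Register a custom conversion for Point; everything else falls
# back to default_converter.
add_converter(Point, lambda p: "Point(%s, %s)" % (p.x, p.y))

print(mystr(Point(1, 2)))  # -> Point(1, 2)
print(mystr(42))           # -> <int: 42>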
With this approach there is no "magic" (i.e. surprising) behavior, and you are not wrapping things (another approach that may surprise whoever reads the code).
In the above example I'm not handling converter inheritance, but you can do that with a more sophisticated lookup if you really need it (I'm not sure it makes sense to inherit a to-string conversion function; it would silently lose information).
Also, if you don't understand what a metaclass is for, just leave that concept alone; most probably you don't really need it. Metaclasses are a powerful but somewhat complex tool that isn't really needed all that often.
I think this article is a good general explanation of what metaclasses are and what you can do with them. Note that some gory details are missing, and you should use the official documentation to dig into them.

To have exactly what you are asking for is tricky in Python.
That is because, when you write instance.x.method, Python first retrieves the attribute x from instance, and then tries to find method as an attribute on the x object itself, with no reference back to the instance that originally held x (short of frame introspection, instance cannot be recovered from inside method).
I said that it "could be done" because it can be made to work for most types of x, but it could eventually fail, or have collateral effects, depending on the type of the attribute x.
You can write a __setattr__ method on your class that, for each attribute set on the instance, creates a dynamic subclass of that attribute's type, which enables the desired methods on the new object. The drawback is that not all types of objects can be subclassed, and not all subclassed objects behave exactly like their parents (if x is a function, for example). But it works for most cases:
class Base(object):
    def __setattr__(self, name, attr):
        type_ = type(attr)
        new_dict = {}
        for meth_name in dir(self.__class__):
            function = getattr(self.__class__, meth_name)
            # Assume any methods on the class have the desired behavior and would
            # accept the attribute as their second parameter (the first being self).
            # This could be made more robust with a simple method decorator
            # that marks the methods one wishes to be applicable to attributes,
            # instead of picking all non-"_"-prefixed methods like here:
            if not callable(function) or meth_name in new_dict or meth_name.startswith("_"):
                continue
            def pinner(f):
                def auto_meth(se, *args, **kw):
                    return f(se._container, se, *args, **kw)
                return auto_meth
            new_dict[meth_name] = pinner(function)
        # This could be improved with a class-based cache of derived types,
        # so that setting an attribute only creates a new type once for
        # each distinct type being set.
        new_type = type(type_.__name__, (type_,), new_dict)
        try:
            attr.__class__ = new_type
        except TypeError:
            # Here is the main problem with this approach:
            # if the type being stored can't have its `__class__` dynamically
            # changed, we have to build a new instance of it.
            # If the constructor can't take just the base value as its
            # building parameter, it won't work; worse, if creating another
            # instance has side effects, we are subject to those.
            attr = new_type(attr)
        attr._container = self
        super(Base, self).__setattr__(name, attr)

class oObject(Base):
    def __init__(self, x=0, y=0, z=0):
        self.x = x
        self.y = y
        self.z = z

    def asString(self, attr):
        return str(attr)
And after loading these in an interactive session:
>>> v = oObject(1,2,3)
>>> v.x.asString()
'1'
>>> v.w = [1,2,3]
>>> v.w.append(3)
>>> v.w.asString()
'[1, 2, 3, 4]'
>>>
As you can see, this can be done with normal class inheritance; no need for metaclasses.
Another, more reliable approach for any attribute type would be to use a separator within the attribute name to encode the method; then you can write a much simpler __getattr__ method on a base class that dynamically checks for the requested method and calls it on the attribute. This approach requires no dynamic subclassing, and is about two orders of magnitude simpler. The price is that you write something like vector.x__asString instead of using the dot separator. This is actually the approach taken in the tried and tested SQLAlchemy ORM for Python.
# Second approach:
class Base(object):
    separator = "__"

    def __getattr__(self, attr_name):
        if self.__class__.separator in attr_name:
            attr_name, method_name = attr_name.split(self.__class__.separator, 1)
            method = getattr(self, method_name)
            return method(getattr(self, attr_name))
        raise AttributeError(attr_name)
And now:
>>> class oObject(Base):
...     def __init__(self, x=0, y=0, z=0):
...         self.x = x
...         self.y = y
...         self.z = z
...
...     def asString(self, attr):
...         return str(attr)
...
>>> v = oObject(1, 2, 3)
>>> v.x__asString
'1'
(Some more code is required if you want more parameters to be passed to the called method, but I think this is enough to get the idea).
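As one hedged sketch of that extension: have __getattr__ return a callable instead of the computed value, so the caller's parentheses can carry extra arguments:

class Base(object):
    separator = "__"

    def __getattr__(self, attr_name):
        if self.__class__.separator in attr_name:
            attr_name, method_name = attr_name.split(self.__class__.separator, 1)
            method = getattr(self, method_name)
            value = getattr(self, attr_name)
            # Defer the call so the caller can pass extra arguments:
            return lambda *args, **kw: method(value, *args, **kw)
        raise AttributeError(attr_name)

With this variant you would write v.x__asString() (with parentheses), and any extra arguments are forwarded to the method after the attribute value.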

Related

How to clone a python class object? (not the instance but the class itself)

Imagine you have the following code:
class A:
    pass

NewA = ...  # copy A
NewA.__init__ = decorator(A.__init__)  # but don't change A's init function, just NewA's
I am looking for a way to change some of the attributes/methods in the cloned class while the rest stay similar to the base class object (preferably even through MappingProxyType, so that when A changes, the unchanged logic of NewA reflects those changes as well).
I came across this ancient thread, where there are some suggestions which don't fully work:
Repurposing inheritance: class NewA(A): pass, which doesn't exactly result in what I am looking for
Dynamically generating a new class using type, and somehow keeping an eye out for the tons of cases that might arise (mutable attributes/descriptors/calls to globals ...)
Using copy.deepcopy, which is totally wrong (since the class object's internal data representation is a MappingProxyType, which we cannot copy/deepcopy)
Is there a way to still achieve this without manually handling every corner case, especially considering that the base class we intend to copy could be anything (with metaclasses and custom __init_subclass__ parents, a mixture of mutable attributes and what not, and potentially with __slots__)?
Here is a humble attempt to get you started. I've tested it out with a class with slots and it seems to work. I am not very sure about that aspect of it though.
import types
import copy

def clone_class(klass):
    def _exec_body(ns):
        # don't add in slots descriptors, it will fail!
        ns_no_slots = {
            k: v for k, v in vars(klass).items()
            if not isinstance(v, types.MemberDescriptorType)
        }
        ns |= copy.deepcopy(ns_no_slots)
        return ns

    return types.new_class(
        name=klass.__name__,
        bases=klass.__bases__,
        kwds={"metaclass": type(klass)},
        exec_body=_exec_body,
    )
Now, this seems to work with classes that have __slots__. The one thing that might trip things up is if the metaclass has slots (which must be empty). But that would be really weird.
Here is a test script:
import types
import copy
def clone_class(klass):
def _exec_body(ns):
ns_no_slots = {
k:v for k,v in vars(klass).items()
if not isinstance(v, types.MemberDescriptorType)
}
ns |= copy.deepcopy(ns_no_slots)
return ns
return types.new_class(
name=klass.__name__,
bases=klass.__bases__,
kwds={"metaclass": type(klass)},
exec_body=_exec_body,
)
class Meta(type):
def meta_magic(cls):
print("magical!")
class Foo(metaclass=Meta):
__slots__ = ('x','y')
#property
def size(self):
return 42
class Bar(Foo):
state = []
__slots__ = ('z',)
def __init__(self, x=1, y=2, z=3):
self.x = x
self.y = y
self.z = z
#property
def closed(self):
return False
BarClone = clone_class(Bar)
bar = BarClone()
BarClone.state.append('foo')
print(BarClone.state, Bar.state)
BarClone.meta_magic()
This prints:
['foo'] []
magical!
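Back to the original goal, the clone can then be modified without touching the source class. A hedged sketch using clone_class above (decorator is a made-up stand-in for whatever wrapper you have; A is given __slots__ to match the tested scenario, since the __dict__/__weakref__ descriptors of a plain class may not survive deepcopy):

def decorator(init):
    # hypothetical wrapper, purely for illustration
    def wrapped(self, *args, **kwargs):
        print("NewA.__init__ called")
        return init(self, *args, **kwargs)
    return wrapped

class A:
    __slots__ = ('x',)
    def __init__(self):
        self.x = 1

NewA = clone_class(A)
NewA.__init__ = decorator(NewA.__init__)

NewA()   # prints "NewA.__init__ called"
A()      # A's __init__ is untouched: prints nothing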

Python: Dynamically add properties to class instance, properties return function value with inputs

I've been going through all the Stackoverflow answers on dynamic property setting, but for whatever reason I can't seem to get this to work.
I have a class, Evolution_Base, that in its __init__ creates an instance of Value_Differences. Value_Differences should dynamically create properties, based on the list I pass, that return the value of _get_df_change:
from pandas import DataFrame
from dataclasses import dataclass
import pandas as pd

class Evolution_Base():
    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df=res_date_0,
                                res_date_1_df=res_date_1)

        property_list = ['abc', 'xyz']
        self.difference = Value_Differences(parent=self, property_list=property_list)

    # Shared Functions
    def _get_df_change(self, df_name, operator='-'):
        df_0 = getattr(self.res.res_date_0_df, df_name.lower())
        df_1 = getattr(self.res.res_date_1_df, df_name.lower())
        return self._df_change(df_1, df_0, operator=operator)

    def _df_change(self, df_1 : pd.DataFrame, df_0 : pd.DataFrame, operator='-') -> pd.DataFrame:
        """
        Returns df_1 <operator | default = -> df_0
        """
        # is_numeric mask
        m_1 = df_1.select_dtypes('number')
        m_0 = df_0.select_dtypes('number')

        def label_me(x):
            x.columns = ['t_1', 't_0']
            return x

        if operator == '-':
            return label_me(df_1[m_1] - df_0[m_0])
        elif operator == '+':
            return label_me(df_1[m_1] + df_0[m_0])

class Value_Differences():
    def __init__(self, parent : Evolution_Base, property_list = []):
        self._parent = parent
        for name in property_list:
            def func(self, prop_name):
                return self._parent._get_df_change(name)

            # I've tried the following...
            setattr(self, name, property(fget=lambda cls_self: func(cls_self, name)))
            setattr(self, name, property(func(self, name)))
            setattr(self, name, property(func))
It's driving me nuts... Any help appreciated!
My desired outcome is for:
evolution = Evolution_Base(df_1, df_2)
evolution.difference.abc == evolution._df_change('abc')
evolution.difference.xyz == evolution._df_change('xyz')
EDIT: The simple question is really, how do I setattr for a property function?
As asked
how do I setattr for a property function?
To be usable as a property, the accessor function needs to be wrapped as a property and then assigned as an attribute of the class, not the instance.
That function, meanwhile, needs to have a single unbound parameter - which will be an instance of the class, but is not necessarily the current self. Its logic needs to use the current value of name, but late binding will be an issue because of the desire to create lambdas in a loop.
A clear and simple way to work around this is to define a helper function accepting the Value_Differences instance and the name to use, and then bind the name value eagerly.
Naively:
from functools import partial

def _get_from_parent(name, instance):
    return instance._parent._get_df_change(name)

class Value_Differences:
    def __init__(self, parent: Evolution_Base, property_list = []):
        self._parent = parent
        for name in property_list:
            setattr(Value_Differences, name, property(
                fget=partial(_get_from_parent, name)
            ))
However, this of course has the issue that every instance of Value_Differences will set properties on the class, thus modifying what properties are available for each other instance. Further, in the case where there are many instances that should have the same properties, the setup work will be repeated at each instance creation.
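To make that leakage concrete, a hypothetical illustration (P is a made-up stand-in parent that defines the _get_df_change hook):

class P:
    def _get_df_change(self, name):
        return f"change of {name}"

vd1 = Value_Differences(P(), ['abc'])
vd2 = Value_Differences(P(), ['xyz'])
print(vd1.xyz)  # -> "change of xyz": vd2's setup leaked onto vd1's class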
The apparent goal
It seems that what is really sought, is the ability to create classes dynamically, such that a list of property names is provided and a corresponding class pops into existence, with code filled in for the properties implementing a certain logic.
There are multiple approaches to this.
Factory A: Adding properties to an instantiated template
Just like how functions can be nested within each other and the inner function will be an object that can be modified and returned (as is common when creating a decorator), a class body can appear within a function and a new class object (with the same name) is created every time the function runs. (The code in the OP already does this, for the Results_Data dataclass.)
def example():
    class Template:
        pass
    return Template
>>> TemplateA, TemplateB = example(), example()
>>> TemplateA is TemplateB
False
>>> isinstance(TemplateA(), TemplateB)
False
>>> isinstance(TemplateB(), TemplateA)
False
So, a "factory" for value-difference classes could look like
from functools import partial

def _make_value_comparer(property_names, access_func):
    class ValueDifferences:
        def __init__(self, parent):
            self._parent = parent

    for name in property_names:
        setattr(ValueDifferences, name, property(
            fget=partial(access_func, name)
        ))
    return ValueDifferences
Notice that instead of hard-coding a helper, this factory expects to be provided with a function that implements the access logic. That function takes two parameters: a property name, and the ValueDifferences instance. (They're in that order because it's more convenient for functools.partial usage.)
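A quick sketch of the factory in use (access_func here is a made-up stand-in for the real lookup logic):

def access_func(name, instance):
    return f"value for {name}"

MyVD = _make_value_comparer(['abc', 'xyz'], access_func)
vd = MyVD(parent=None)
print(vd.abc)  # -> value for abc
print(vd.xyz)  # -> value for xyz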
Factory B: Using the type constructor directly
The built-in type in Python has two entirely separate functions.
With one argument, it reports the type of an object.
With three arguments, it creates a new type. The class syntax is in fact syntactic sugar for a call to this builtin. The arguments are:
a string name (will be set as the __name__ attribute)
a tuple of classes to use as superclasses (will be set as __bases__)
a dict mapping attribute names to their values (including methods and properties; this will become the __dict__, roughly)
In this style, the same factory could look something like:
from functools import partial

def _make_value_comparer(property_names, access_func):
    methods = {
        name: property(fget=partial(access_func, name))
        for name in property_names
    }
    methods['__init__'] = lambda self, parent: setattr(self, '_parent', parent)
    return type('ValueDifferences', (), methods)
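For reference, the class statement and the three-argument type() call build equivalent classes; a minimal illustration:

class Greeter:
    def hello(self):
        return "hi"

# The same class, built dynamically:
GreeterDyn = type('Greeter', (), {'hello': lambda self: "hi"})

assert Greeter().hello() == GreeterDyn().hello() == "hi"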
Using the factory
In either of the above cases, EvolutionBase would be modified in the same way.
Presumably, every EvolutionBase should use the same ValueDifferences class (i.e., the one that specifically defines abc and xyz properties), so the EvolutionBase class can cache that class as a class attribute, and use it later:
class Evolution_Base():
    def _get_from_parent(name, mvd):
        # mvd._parent will be an instance of Evolution_Base.
        return mvd._parent._get_df_change(name)

    _MyValueDifferences = _make_value_comparer(['abc', 'xyz'], _get_from_parent)

    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df=res_date_0,
                                res_date_1_df=res_date_1)

        self.difference = self._MyValueDifferences(parent=self)
Notice that the cached _MyValueDifferences class no longer requires a list of property names to be constructed. That's because it was already provided when the class was created. The actual thing that varies per instance of _MyValueDifferences, is the parent, so that's all that gets passed.
Simpler approaches
It seems that the goal is to have a class whose instances are tightly associated with instances of Evolution_Base, providing properties specifically named abc and xyz that are computed using the Evolution_Base's data.
That could just be hard-coded as a nested class:
class Evolution_Base:
    class EBValueDifferences:
        def __init__(self, parent):
            self._parent = parent

        @property
        def abc(self):
            return self._parent._get_df_change('abc')

        @property
        def xyz(self):
            return self._parent._get_df_change('xyz')

    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df=res_date_0,
                                res_date_1_df=res_date_1)
        self.difference = self.EBValueDifferences(self)

    # _get_df_change etc. as before
Even simpler, provide corresponding properties directly on Evolution_Base:
class Evolution_Base:
    @property
    def abc_difference(self):
        return self._get_df_change('abc')

    @property
    def xyz_difference(self):
        return self._get_df_change('xyz')

    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df=res_date_0,
                                res_date_1_df=res_date_1)

    # _get_df_change etc. as before

# client code now calls my_evolution_base.abc_difference
# instead of my_evolution_base.difference.abc
If there are a lot of such properties, they could be attached using a much simpler dynamic approach (that would still be reusable for other classes that define a _get_df_change):
def add_df_change_property(name, cls):
    setattr(
        cls, f'{name}_difference',
        property(fget=lambda instance: instance._get_df_change(name))
    )
    return cls  # returning the class lets this double as a class decorator
which can also be adapted for use as a decorator:
from functools import partial

def exposes_df_change(name):
    return partial(add_df_change_property, name)

@exposes_df_change('abc')
@exposes_df_change('def')
class Evolution_Base:
    # `self.difference` can be removed; no other changes needed
    ...
This is quite the rabbit hole. "Impossible" is a big call, but I will say this: they don't intend you to do this. The Pythonic way of achieving your example use case is the __getattr__ method. You could also override the __dir__ method to insert your custom attributes for discoverability.
This is the code for that:
class Value_Differences():
    def __init__(self, parent : Evolution_Base, property_list = []):
        self._parent = parent
        self._property_list = property_list

    def __dir__(self):
        return sorted(set(
            dir(super(Value_Differences, self)) +
            list(self.__dict__.keys()) + self._property_list))

    def __getattr__(self, __name: str):
        if __name in self._property_list:
            return self._parent._get_df_change(__name)
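In use, that looks something like the following sketch (FakeParent is a made-up stand-in for the Evolution_Base instance):

class FakeParent:
    def _get_df_change(self, name):
        return f"df change of {name}"

vd = Value_Differences(parent=FakeParent(), property_list=['abc', 'xyz'])
print(vd.abc)            # -> df change of abc (via __getattr__)
print('xyz' in dir(vd))  # -> True (via the __dir__ override)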
But that wasn't the question, and respect for a really, really interesting question. This is one of those things that you look at and say "hmm, should be possible", and you can get almost to a solution. I initially thought what you asked for was technically possible, just very hacky to achieve. But it turns out that it would be very, very weird hackery if it were possible.
Two small foundational things to start with:
Remind ourselves of the hierarchy of Python objects that the runtime is working with when defining and instantiating classes:
The metaclass (defaulting to type), which is used to build classes. I'm going to refer to this as the Metaclass Type Object (MTO).
The class definition, which is used to build objects. I'm going to refer to this as the Class Type Object (CTO).
And the class instance or object, which I'll refer to as the Class Instance Object (CIO).
MTOs are subclasses of type. CTOs are subclasses of object. CIOs are instances of CTOs, but instantiated by MTOs.
Python runs code inside class definitions as if it were running a function:
class Class1:
    print("1")
    def __init__(self, v1):
        print("4")
    print("2")

print("3")
c1 = Class1("x")
print("5")
gives 1, 2, 3, 4, 5
Put these two things together with:
class Class1:
    def attr1_get(self):
        return 'attr1 value'
    attr1 = property(attr1_get)
we are defining a function attr1_get as part of the class definition. We are then running an inline piece of code that creates an object of type property. Note that this is just the name of the object's type - it isn't a property as you would describe it. Just an object with some attributes, being references to various functions. We then assign that object to an attribute in the class we are defining.
In the terms I used above, once that code is run we have a CTO instantiated as an object in memory that contains an attribute attr1 of type property (an object subclass, containing a bunch of attributes itself - one of which is a reference to the function attr1_get).
That can be used to instantiate an object, the CIO.
This is where the MTO comes in. You instantiate the property object while defining the CTO so that when the runtime applies the MTO to create the CIO from the CTO, an attribute on the CIO will be formed with a custom getter function for that attribute rather than the 'standard' getter function the runtime would use. The property object means something to the type object when it is building a new object.
So when we run:
c1 = Class1()
we don't get a CIO c1 with an attribute attr1 that is an object of type property. The metaclass of type type formed a set of references against the attribute's internal state to all the functions we stored in the property object. Note that this is happening inside the runtime, and you can't call this directly from your code - you just tell the type metaclass to do it by using the property wrapper object.
So if you directly assign a property() result to an attribute of a CIO, you have a Pythonic object assigned that references some functions, but the internal state for the runtime to use to reference the getter, setter, etc. is not set up. The getter of an attribute that contains a property object is the standard getter, and so it returns the object instance, not the result of the functions it wraps.
This next bit of code demonstrates how this flows:
print("Let's begin")
class MetaClass1(type):
print("Starting to define MetaClass1")
def __new__(cls, name, bases, dct):
x = super().__new__(cls, name, bases, dct)
print("Metaclass1 __new__({})".format(str(cls)))
return x
print("__new__ of MetaClass1 is defined")
def __init__(cls, name, bases, dct):
print("Metaclass1 __init__({})".format(str(cls)))
print("__init__ of MetaClass1 is defined")
print("Metaclass is defined")
class Class1(object,metaclass=MetaClass1):
print("Starting to define Class1")
def __new__(cls, *args, **kwargs):
print("Class1 __new__({})".format(str(cls)))
return super(Class1, cls).__new__(cls, *args, **kwargs)
print("__new__ of Class1 is defined")
def __init__(self):
print("Class1 __init__({})".format(str(self)))
print("__init__ of Class1 is defined")
def g1(self):
return 'attr1 value'
print("g1 of Class1 is defined")
attr1 = property(g1)
print("Class1.attr1 = ", attr1)
print("attr1 of Class1 is defined")
def addProperty(self, name, getter):
setattr(self, name, property(getter))
print("self.", name, " = ", getattr(self, name))
print("addProperty of Class1 is defined")
print("Class is defined")
c1 = Class1()
print("Instance is created")
print(c1.attr1)
def g2(cls):
return 'attr2 value'
c1.addProperty('attr2', g2)
print(c1.attr2)
I have put all those print statements there to demonstrate the order in which things happen very clearly.
In the middle, you see:
g1 of Class1 is defined
Class1.attr1 = <property object at 0x105115c10>
attr1 of Class1 is defined
We have created an object of type property and assigned it to a class attribute.
Continuing:
addProperty of Class1 is defined
Metaclass1 __new__(<class '__main__.MetaClass1'>)
Metaclass1 __init__(<class '__main__.Class1'>)
Class is defined
The metaclass got instantiated, being passed first itself (__new__) and then the class it will work on (__init__). This happened right as we stepped out of the class definition. I have only included the metaclass to show what will happen with the type metaclass by default.
Then:
Class1 __new__(<class '__main__.Class1'>)
Class1 __init__(<__main__.Class1 object at 0x105124c10>)
Instance is created
attr1 value
self. attr2 = <property object at 0x105115cb0>
<property object at 0x105115cb0>
Class1 is instantiated, providing first its type to __new__ and then its instance to __init__.
We see that attr1 is instantiated properly, but attr2 is not. That is because setattr is called once the class instance is already constructed: it merely stores a property object in the attr2 instance attribute, rather than defining attr2 as an actual runtime property.
This is made clearer if we run:
print(c1.attr2.fget(c1))
print(c1.attr1.fget(c1))
attr2 (a property object) isn't aware of the class or instance of the containing attribute's parent. The function it wraps still needs to be given the instance to work on.
attr1 doesn't know what to do with that, because as far as it is concerned it is a string object, and has no concept of how the runtime is mapping its getter.
The fundamental reason why what you tried doesn't work is that a property, a use case of a descriptor, by design must be stored as a class variable, not as an instance attribute.
Excerpt from the documentation of descriptor:
To use the descriptor, it must be stored as a class variable in
another class:
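A minimal illustration of that rule (sketch):

class C:
    p = property(lambda self: 42)   # stored as a class variable: works

c = C()
print(c.p)                          # -> 42

c.q = property(lambda self: 42)     # stored on the instance: inert
print(c.q)                          # -> <property object at 0x...>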
To create a class with dynamically named properties that has access to a parent class, one elegant approach is to create the class within a method of the main class, and to use setattr to create class attributes with dynamic names and property objects. A class created in the closure of a method automatically has access to the self object of the parent instance, avoiding the need to manage a clunky _parent attribute as in your attempt:
class Evolution_Base:
    def __init__(self, property_list):
        self.property_list = property_list
        self._difference = None

    @property
    def difference(self):
        if not self._difference:
            class Value_Differences:
                pass
            for name in self.property_list:
                # use a default value to bind the value of name in each iteration
                def func(obj, prop_name=name):
                    return self._get_df_change(prop_name)  # access self via closure
                setattr(Value_Differences, name, property(func))
            self._difference = Value_Differences()
        return self._difference

    def _get_df_change(self, df_name):
        return f'df change of {df_name}'  # simplified return value for demo purposes
so that:
evolution = Evolution_Base(['abc', 'xyz'])
print(evolution.difference.abc)
print(evolution.difference.xyz)
would output:
df change of abc
df change of xyz
Demo: https://replit.com/@blhsing/ExtralargeNaturalCoordinate
Responding directly to your question, you can create a class:
class FooBar:
    def __init__(self, props):
        def make_prop(name):
            return property(lambda accessor_self: self._prop_impl(name))

        self.accessor = type(
            'Accessor',
            tuple(),
            {p: make_prop(p) for p in props}
        )()

    def _prop_impl(self, arg):
        return arg

o = FooBar(['foo', 'bar'])
assert o.accessor.foo == o._prop_impl('foo')
assert o.accessor.bar == o._prop_impl('bar')
Further, it would be beneficial to cache the created class, to make equivalent objects more alike and to eliminate potential issues with equality comparison.
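One hedged sketch of that caching idea, keying the generated class on the sorted property names (the _owner back-reference attribute is made up; instances would need it set by their parent object):

_accessor_cache = {}

def make_accessor_class(props):
    key = tuple(sorted(props))
    if key not in _accessor_cache:
        def make_prop(name):
            # look the parent up through the instance instead of a closure,
            # so the generated class can be shared between parents
            return property(lambda self: self._owner._prop_impl(name))
        _accessor_cache[key] = type(
            'Accessor', (), {p: make_prop(p) for p in props})
    return _accessor_cache[key]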
That said, I am not sure this is desirable. There is little benefit to replacing method-call syntax (o.f('a')) with property access (o.a). I believe it can be detrimental on multiple accounts: dynamic properties are confusing and harder to document, and, while none of this is strictly guaranteed in the crazy world of dynamic Python, they communicate the wrong message: that the access is cheap and involves no computation, and that perhaps you could even write to it.
I think that when you define the function func in the loop, it closes over the name variable itself, so every property sees the value name has when the property is accessed (the last item of the loop), not the value it had when the property was defined. To fix this, you can use a lambda with a default argument to capture the value of name at definition time.
class Value_Differences():
    def __init__(self, parent : Evolution_Base, property_list = []):
        self._parent = parent
        for name in property_list:
            # note: a property only takes effect when stored on the class,
            # not on the instance
            setattr(type(self), name, property(
                fget=lambda self, name=name: self._parent._get_df_change(name)))
Does this help you?
The simple question is really, how do I setattr for a property function?
In Python we can set dynamic attributes like this:
class DynamicProperties():
    def __init__(self, property_list):
        self.property_list = property_list

    def add_properties(self):
        for name in self.property_list:
            setattr(self.__class__, name, property(fget=lambda self: 1))

dync = DynamicProperties(['a', 'b'])
dync.add_properties()
print(dync.a)  # prints 1
print(dync.b)  # prints 1
Correct me if I am wrong, but from reviewing your code, you want to create dynamic attributes and set their value to a specific function call within the same class, where the passed-in data comes from attributes set in the constructor __init__. That is achievable; an example:
class DynamicProperties():
    def __init__(self, property_list, data1, data2):
        self.property_list = property_list
        self.data1 = data1
        self.data2 = data2

    def add_properties(self):
        for name in self.property_list:
            setattr(self.__class__, name, property(
                fget=lambda self: self.change(self.data1, self.data2)))

    def change(self, data1, data2):
        return data1 - data2

dync = DynamicProperties(['a', 'b'], 1, 2)
dync.add_properties()
print(dync.a == dync.change(1, 2))  # prints True
print(dync.b == dync.change(1, 2))  # prints True
You just have to add more complexity to the member: __getattr__ / __setattr__ gives you the string, so it can be interpreted as needed. The biggest "problem" with doing this is that the return might not be consistent, and piping it back to a library that expects an object to have a specific behavior can cause soft errors.
This example is not the same as yours, but it demonstrates the same concept: manipulating columns with members. To get a copy with changes, a setter is not needed; with copy, modify and return, the new instance can be created with whatever is needed.
For example, the __getattr__ in this line will:
Check and interpret the string xyz_mull_0
Validate that the members and the operand exists
Make a copy of data_a
Modify the copy and return it
var = data_a.xyz_mull_0()
This looks more complex than it actually is; with the same instance members it's clear what it is doing. But the _of modifier needs a callback: because __getattr__ can only take one parameter, it needs to save the attr and return a callback to be called with the other instance, which then calls back into __getattr__ and completes the rest of the function.
import re

class FlexibleFrame:
    operand_mod = {
        'sub': lambda a, b: a - b,
        'add': lambda a, b: a + b,
        'div': lambda a, b: a / b,
        'mod': lambda a, b: a % b,
        'mull': lambda a, b: a * b,
    }

    @staticmethod
    def add_operand(name, func):
        if name not in FlexibleFrame.operand_mod.keys():
            FlexibleFrame.operand_mod[name] = func

    # This makes this class subscriptable
    def __getitem__(self, item):
        return self.__dict__[item]

    # Uses:
    # -> object.value
    # -> object.member()
    # -> object.<name>_<operand>_<name|int>()
    # -> object.<name>_<operand>_<name|int>_<flow>()
    def __getattr__(self, attr):
        if re.match(r'^[a-zA-Z]+_[a-zA-Z]+_[a-zA-Z0-9]+(_of)?$', attr):
            seg = attr.split('_')
            var_a, operand, var_b = seg[0:3]

            # If there is a _of: the second operand is from the other
            # instance, the _of is removed and a callback is returned
            if len(seg) == 4:
                self.__attr_ref = '_'.join(seg[0:3])
                return self.__getattr_of

            # Checks if this was a _of attribute and resets it
            if self.__back_ref is not None:
                other = self.__back_ref
                self.__back_ref = None
                self.__attr_ref = None
            else:
                other = self

            if var_a not in self.__dict__:
                raise AttributeError(
                    f'No match of {var_a} in (primary) {__class__.__name__}'
                )
            if operand not in FlexibleFrame.operand_mod.keys():
                raise AttributeError(
                    f'No match of operand {operand}'
                )

            # The return is a copy of self; if not, the instance
            # gets modified, making x = a.b() useless
            ret = FlexibleFrame(**self.__dict__)

            # Checks if the second operand is an int
            if re.match(r'^\d+$', var_b):
                ref_b_num = int(var_b)
                for i in range(len(self[var_a])):
                    ret[var_a][i] = FlexibleFrame.operand_mod[operand](
                        self[var_a][i], ref_b_num
                    )
            elif var_b in other.__dict__:
                for i in range(len(self[var_a])):
                    # out_index = operand[type](in_a_index, in_b_index)
                    ret[var_a][i] = FlexibleFrame.operand_mod[operand](
                        self[var_a][i], other[var_b][i]
                    )
            else:
                raise AttributeError(
                    f'No match of {var_b} in (secondary) {__class__.__name__}'
                )

            # This swaps the .member to a .member()
            # it also adds an extra () in __getattr_of
            return lambda: ret
            # return ret

        if attr in self.__dict__:
            return self[attr]

        raise AttributeError(
            f'No match of {attr} in {__class__.__name__}'
        )

    def __getattr_of(self, other):
        self.__back_ref = other
        return self.__getattr__(self.__attr_ref)()

    def __init__(self, **kwargs):
        self.__back_ref = None
        self.__attr_ref = None
        # TODO: Check if data columns match in size
        # if not, implement column_<name>_filler=<default>
        for i in kwargs:
            self.__dict__[i] = kwargs[i]

if __name__ == '__main__':
    data_a = FlexibleFrame(**{
        'abc': [i for i in range(10)],
        'nmv': [i for i in range(10)],
        'xyz': [i for i in range(10)],
    })
    data_b = FlexibleFrame(**{
        'fee': [i + 10 for i in range(10)],
        'foo': [i + 10 for i in range(10)],
    })

    FlexibleFrame.add_operand('set', lambda a, b: b)
    var = data_a.xyz_mull_0()
    var = var.abc_set_xyz()
    var = var.xyz_add_fee_of(data_b)
As an extra note: lambdas in Python close over variables, not values, which can make them tricky to use when the captured variable changes.
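A minimal illustration of that pitfall:

fns = [lambda: i for i in range(3)]
print([f() for f in fns])                # [2, 2, 2] - all see the final i

fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])                # [0, 1, 2] - defaults bind eagerly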
It seems you're bending the language to do weird things. I'd take that as a smell that your code is probably getting convoluted, but I'm not saying there would never be a use case for it, so here is a minimal example of how to do it:
class Obj:
    def _df_change(self, arg):
        print('change', arg)

class DynAttributes(Obj):
    def __getattr__(self, name):
        return self._df_change(name)

class Something:
    difference = DynAttributes()

a = Something()
b = Obj()

assert a.difference.hello == b._df_change('hello')
When calling setattr, use self.__class__ instead of self.
Code sample:
from typing import List

class A:
    def __init__(self, names : List[str]):
        for name in names:
            setattr(self.__class__, name, property(fget=self.__create_getter(name)))

    def __create_getter(self, name: str):
        def inner(self):
            print(f"invoking {name}")
            return 10
        return inner

a = A(['x', 'y'])
print(a.x + 1)
print(a.y + 2)

Dynamically add method to class from property function?

I think a code sample will better speak for itself:
class SomeClass:
    example = create_get_method()
Yes, that's all – ideally.
In that case, create_get_method would add a get_example() method to SomeClass in a way that it can be accessed via an instance of SomeClass:
obj = SomeClass()
obj.get_example()  # <- returns the value of self.example
(Of course, the idea is to implement a complex version of get_contact, that's why I want to do that in a non-repetitive way, and this is a simplified version that represents well the issue.)
I don't know if that's possible, because it requires access to the property name (example) and the class (SomeClass), since these cannot be guessed in advance (that function will be used on many and various classes).
I know it's something possible, because that's kind of what SQLAlchemy does with their relationship() function on a class:
class Model(BaseModel):
    id = ...
    contact_id = db.Integer(db.ForeignKey..)
    contact = relationship('contact')  # <-- This !
How can this be done?
Objects bound to class-level variables can have a __set_name__ method that will be called immediately after the class object has been created. It will be called with two arguments, the class object, and the name of the variable the object is saved as in the class.
You could use this to create your extra getter method, though I'm not sure why exactly you want to (you could make the object a descriptor instead, which would probably be better than adding a separate getter function to the parent class).
class create_get_method:
    def __set_name__(self, owner, name):
        def getter(self):
            return getattr(self, name)

        getter_name = f"get_{name}"
        getter.__name__ = getter_name
        setattr(owner, getter_name, getter)

    # you might also want a __get__ method here to give a default value (like None)
Here's how that would work:
>>> class Test:
...     example = create_get_method()
...
>>> t = Test()
>>> print(t.get_example())
<__main__.create_get_method at 0x000001E0B4D41400>
>>> t.example = "foo"
>>> print(t.get_example())
foo
You could change the value returned by default (in the first print call), so that the create_get_method object isn't as exposed. Just add a __get__ method to the create_get_method class.
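A hedged sketch of what that could look like, extending the class above (the _name bookkeeping attribute is added here; it is not part of the original answer):

class create_get_method:
    def __set_name__(self, owner, name):
        self._name = name
        def getter(obj):
            return getattr(obj, name)
        getter_name = f"get_{name}"
        getter.__name__ = getter_name
        setattr(owner, getter_name, getter)

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        # default to None until a value is assigned on the instance
        return instance.__dict__.get(self._name)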
You can do this with a custom non-data descriptor, like a property, except that you don't need a __set__ method:
class ComplicatedDescriptor:
    def __init__(self, name):
        self.name = name

    def __get__(self, owner, type):
        # Here, `owner` is the instance of `SomeClass` that contains this descriptor
        # Use `owner` to do some complicated stuff, like DB lookup or whatever
        name = f'_{self.name}'
        # These two lines for demo only
        value = owner.__dict__.get(name, 0)
        value += 1
        setattr(owner, name, value)
        return value
Now you can have any number of classes that use this descriptor:
class SomeClass:
    example = ComplicatedDescriptor('example')
Now you can do something like:
>>> inst0 = SomeClass()
>>> inst1 = SomeClass()
>>> inst0.example
1
>>> inst1.example
1
>>> inst1.example
2
>>> inst0.example
2
The line name = f'_{self.name}' is necessary because the descriptor here is a non-data descriptor: it has no __set__ method, so if you create inst0.__dict__['example'], the lookup will no longer happen: inst0.example will return inst0.__dict__['example'] instead of calling SomeClass.example.__get__(inst0, type(inst0)). One workaround is to store the value under the attribute name _example. The other is to make your descriptor into a data descriptor:
class ComplicatedDescriptor_v2:
    def __init__(self, name):
        self.name = name

    def __get__(self, owner, type):
        # Here, `owner` is the instance of `SomeClass` that contains this descriptor
        # Use `owner` to do some complicated stuff, like DB lookup or whatever
        # These two lines for demo only
        value = owner.__dict__.get(self.name, 0)
        value += 1
        owner.__dict__[self.name] = value
        return value

    def __set__(self, *args):
        raise AttributeError(f'{self.name} is a read-only attribute')
The usage is generally identical:
class SomeClass:
    example = ComplicatedDescriptor_v2('example')
Except that now you can't accidentally override your attribute:
>>> inst = SomeClass()
>>> inst.example
1
>>> inst.example
2
>>> inst.example = 0
AttributeError: example is a read-only attribute
Descriptors are a fairly idiomatic way to get and set values in Python. They are preferred over getters and setters in almost all cases. The simplest cases are handled by the built-in property. That being said, if you wanted to explicitly have a getter method, I would recommend doing something very similar, but returning a method instead of calling __get__ directly.
For example:
def __get__(self, owner, type):
    def enclosed():
        # Use `owner` to do some complicated stuff, like DB lookup or whatever
        name = f'_{self.name}'
        # These two lines for demo only
        value = owner.__dict__.get(name, 0)
        value += 1
        setattr(owner, name, value)
        return value
    return enclosed
There is really no point in doing something like this unless you really just want to be able to call inst.example().

OO design: an object that can be exported to a "row", while accessing header names, without repeating myself

Sorry, badly worded title. I hope a simple example will make it clear. Here's the easiest way to do what I want to do:
import csv

class Lemon(object):
    headers = ['ripeness', 'colour', 'juiciness', 'seeds?']

    def to_row(self):
        return [self.ripeness, self.colour, self.juiciness, self.seeds > 0]

def save_lemons(lemonset):
    f = open('lemons.csv', 'w')
    out = csv.writer(f)
    out.writerow(Lemon.headers)
    for lemon in lemonset:
        out.writerow(lemon.to_row())
This works alright for this small example, but I feel like I'm "repeating myself" in the Lemon class. And in the actual code I'm trying to write (where the number of variables I'm exporting is ~50 rather than 4, and where to_row calls a number of private methods that do a bunch of weird calculations), it becomes awkward.
As I write the code to generate a row, I need to constantly refer to the "headers" variable to make sure I'm building my list in the correct order. If I want to change the variables being outputted, I need to make sure to_row and headers are being changed in parallel (exactly the kind of thing that DRY is meant to prevent, right?).
Is there a better way I could design this code? I've been playing with function decorators, but nothing has stuck. Ideally I should still be able to get at the headers without having a particular lemon instance (i.e. it should be a class variable or class method), and I don't want to have a separate method for each variable.
In this case, getattr() is your friend: it allows you to get an attribute based on its string name. For example:
def to_row(self):
    return [getattr(self, head) for head in self.headers]
EDIT: to properly use the header seeds?, you would need to set the attribute seeds? on the object: setattr(self, 'seeds?', self.seeds > 0) right above the return statement.
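Putting both points together, a minimal sketch of the resulting class:

class Lemon(object):
    headers = ['ripeness', 'colour', 'juiciness', 'seeds?']

    def to_row(self):
        # derive the computed column, then read every value by header name
        setattr(self, 'seeds?', self.seeds > 0)
        return [getattr(self, head) for head in self.headers]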
We could use some metaclass shenanigans to do this...
In Python 2, attributes are passed to the metaclass in a dict, without preserving order. We'll also want a base class to work with, so we can distinguish the class attributes that should be mapped into the row. In Python 3, we could dispense with just about all of this base descriptor class.
import itertools
import functools

@functools.total_ordering
class DryDescriptor(object):
    _order_gen = itertools.count()

    def __init__(self, alias=None):
        self.alias = alias
        self.order = next(self._order_gen)

    def __lt__(self, other):
        return self.order < other.order
We will want a Python descriptor for every attribute we wish to map into the row. __slots__ are a nice way to get data descriptors without much work. One caveat, though: we'll have to manually remove the helper instance to make the real slot descriptor visible.
class slot(DryDescriptor):
    def annotate(self, attr, attrs):
        del attrs[attr]
        self.attr = attr
        attrs.setdefault('__slots__', []).append(attr)

    def annotate_class(self, cls):
        if self.alias is not None:
            setattr(cls, self.alias, getattr(cls, self.attr))
For computed fields, we can memoize results. Memoizing off of the annotated instance is tricky without a memory leak; we need weakref. Alternatively, we could have arranged for another slot just to store the cached value. This also isn't quite thread safe, but pretty close.
import weakref

class memo(DryDescriptor):
    _memo = None

    def __call__(self, method):
        self.getter = method
        return self

    def annotate(self, attr, attrs):
        if self.alias is not None:
            attrs[self.alias] = self

    def annotate_class(self, cls):
        pass

    def __get__(self, instance, owner):
        if instance is None:
            return self
        if self._memo is None:
            self._memo = weakref.WeakKeyDictionary()
        try:
            return self._memo[instance]
        except KeyError:
            return self._memo.setdefault(instance, self.getter(instance))
On the metaclass, all of the descriptors we created above are found, sorted by
creation order, and instructed to annotate the new, created class. This does
not correctly treat derived classes and could use some other conveniences like
an __init__ for all the slots.
class DryMeta(type):
    def __new__(mcls, name, bases, attrs):
        descriptors = sorted((value, key)
                             for key, value
                             in attrs.iteritems()
                             if isinstance(value, DryDescriptor))
        for descriptor, attr in descriptors:
            descriptor.annotate(attr, attrs)
        cls = type.__new__(mcls, name, bases, attrs)
        for descriptor, attr in descriptors:
            descriptor.annotate_class(cls)
        cls._header_descriptors = [getattr(cls, attr) for descriptor, attr in descriptors]
        return cls
Finally, we want a base class to inherit from so that we can have a to_row method. This just invokes the __get__ of each of the respective descriptors, in order.
class DryBase(object):
    __metaclass__ = DryMeta

    def to_row(self):
        cls = type(self)
        return [desc.__get__(self, cls) for desc in cls._header_descriptors]
Assuming all of that is tucked away, out of sight, the definition of a class that uses this feature is mostly free of repetition. The only shortcoming is that, to be practical, every field needs a Python-friendly name; thus we had the alias key to associate 'seeds?' with has_seeds:
class ADryRow(DryBase):
    __slots__ = ['seeds']

    ripeness = slot()
    colour = slot()
    juiciness = slot()

    @memo(alias='seeds?')
    def has_seeds(self):
        print "Expensive!!!"
        return self.seeds > 0
>>> my_row = ADryRow()
>>> my_row.ripeness = "tart"
>>> my_row.colour = "#8C2"
>>> my_row.juiciness = 0.3479
>>> my_row.seeds = 19
>>>
>>> print my_row.to_row()
Expensive!!!
['tart', '#8C2', 0.3479, True]
>>> print my_row.to_row()
['tart', '#8C2', 0.3479, True]

Can you easily create a list-like object in python that uses something like a descriptor for its items?

I'm trying to write an interface that abstracts another interface somewhat.
The bottom interface is somewhat inconsistent about what it requires: sometimes id's, and sometimes names. I'm trying to hide details like these.
I want to create a list-like object that will allow you to add names to it, but internally store id's associated with those names.
Preferably, I'd like to use something like descriptors for class attributes, except that they work on list items instead. That is, a function (like __get__) is called for everything added to the list to convert it to the id's I want to store internally, and another function (like __set__) to return objects (that provide convenience methods) instead of the actual id's when trying to retrieve items from the list.
So that I can do something like this:
def get_thing_id_from_name(name):
    # assume that this is more complicated
    return other_api.get_id_from_name_or_whatever(name)

class Thing(object):
    def __init__(self, thing_id):
        self.id = thing_id
        self.name = other_api.get_name_somehow(thing_id)

    def __eq__(self, other):
        if isinstance(other, basestring):
            return self.name == other
        if isinstance(other, Thing):
            return self.id == other.id
        return NotImplemented

tl = ThingList()
tl.append('thing_one')
tl.append('thing_two')
tl[1] = 'thing_three'
print tl[0].id
print tl[0] == 'thing_one'
print tl[1] == Thing(3)
The documentation recommends defining 17 methods (not including a constructor) for an object that acts like a mutable sequence. I don't think subclassing list is going to help me out at all. It feels like I ought to be able to achieve this just defining a getter and setter somewhere.
UserList is apparently deprecated (although it is in Python 3? I'm using 2.7, though).
Is there a way to achieve this, or something similar, without having to redefine so much functionality?
You don't need to override all the list methods: __setitem__, __init__ and append should be enough; you may want insert and some others as well. You could write __setitem__ and __getitem__ to call __set__ and __get__ methods on a special "Thing" class, exactly as descriptors do.
Here is a short example - maybe something like what you want:
class Thing(object):
    def __init__(self, thing):
        self.value = thing
        self.name = str(thing)

    id = property(lambda s: id(s))
    # ...

    def __repr__(self):
        return "I am a %s" % self.name

class ThingList(list):
    def __init__(self, items):
        for item in items:
            self.append(item)

    def append(self, value):
        list.append(self, Thing(value))

    def __setitem__(self, index, value):
        list.__setitem__(self, index, Thing(value))
Example:
>>> a = ThingList(range(3))
>>> a.append("three")
>>> a
[I am a 0, I am a 1, I am a 2, I am a three]
>>> a[0].id
35242896
>>>
-- edit --
The O.P. commented: "I was really hoping that there would be a way to have all the functionality from list - addition, extending, slices etc. and only have to redefine the get/set item behaviour."
So mote it be: one really has to override all relevant methods in this way. But if what we want to avoid is just a lot of boilerplate code, with a lot of functions doing almost the same thing, the new, overridden methods can be generated dynamically; all we need is a decorator to turn ordinary objects into Things for all operations that set values:
class Thing(object):
    # Prevents duplicating the wrapping of objects:
    def __new__(cls, thing):
        if isinstance(thing, cls):
            return thing
        return object.__new__(cls, thing)

    def __init__(self, thing):
        self.value = thing
        self.name = str(thing)

    id = property(lambda s: id(s))
    # ...

    def __repr__(self):
        return "I am a %s" % self.name

def converter(func, cardinality=1):
    def new_func(*args):
        # Pick the last item in the argument list, which
        # for all item setter methods on a list is the one
        # which actually contains the values
        if cardinality == 1:
            args = args[:-1] + (Thing(args[-1]),)
        else:
            args = args[:-1] + ([Thing(item) for item in args[-1]],)
        return func(*args)
    new_func.func_name = func.__name__
    return new_func

my_list_dict = {}

for single_setter in ("__setitem__", "append", "insert"):
    my_list_dict[single_setter] = converter(getattr(list, single_setter), cardinality=1)

for many_setter in ("__setslice__", "__add__", "__iadd__", "__init__", "extend"):
    my_list_dict[many_setter] = converter(getattr(list, many_setter), cardinality="many")

MyList = type("MyList", (list,), my_list_dict)
And it works thus:
>>> a = MyList()
>>> a
[]
>>> a.append(5)
>>> a
[I am a 5]
>>> a + [2,3,4]
[I am a 5, I am a 2, I am a 3, I am a 4]
>>> a.extend(range(4))
>>> a
[I am a 5, I am a 0, I am a 1, I am a 2, I am a 3]
>>> a[1:2] = range(10,12)
>>> a
[I am a 5, I am a 10, I am a 11, I am a 1, I am a 2, I am a 3]
>>>
