Wagtail - more class arguments than parameters? What's going on? - python

Hi Stack Overflow community,
I have been trying to understand how Django (and Wagtail's StreamField) work under the hood. Along the way I learned about metaclasses and believe I have a handle on the principle. There is, however, one piece of code that confuses me (see below).
When we follow the StreamField definitions, this code seems to pass a list of tuples to a class constructor that doesn't appear to accept a list of this kind. How can this work?
Any advice would be highly appreciated. Here is the code:
models.py
This is where we define the StreamField. It takes a list of tuples of the format ('title', blockType). Our objective is to follow the StreamField call:
class BlogPage(Page):
    blogElement = StreamField([
        ('heading', blocks.CharBlock(classname="full title")),
        ('paragraph', blocks.TextBlock()),
        ('picture', ImageChooserBlock()),
    ], default=[])
fields.py > StreamField
When we follow the call to StreamField we arrive at the following class constructor. Here the call to StreamBlock(block_types) is where it gets confusing:
class StreamField(models.Field):
    def __init__(self, block_types, **kwargs):
        if isinstance(block_types, Block):
            self.stream_block = block_types
        elif isinstance(block_types, type):
            self.stream_block = block_types()
        else:
            self.stream_block = StreamBlock(block_types)
        super(StreamField, self).__init__(**kwargs)
StreamBlock
While the call StreamBlock(block_types) passes block_types as an argument (it contains the list of three tuples defined in models.py), the receiving class is declared with a call to six.with_metaclass as its base class (code included below):
class StreamBlock(six.with_metaclass(DeclarativeSubBlocksMetaclass, BaseStreamBlock)):
    pass
MY QUESTION
How is this possible? six.with_metaclass is itself a call to a function which takes two arguments, meta and *bases, both classes in their own right (code included below).
Shouldn't StreamBlock be declared so that it can receive arguments matching block_types, which in turn contains the list of three tuples defined in models.py? I am sure I am missing something here but just can't see it. Any advice would be highly appreciated.
Z
For Context
I included the other code pieces for context below. I figured out how six.with_metaclass works in this post: How does six.with_metaclass() work? But I am struggling with the question above.
six.with_metaclass
def with_metaclass(meta, *bases):
    """Create a base class with a metaclass."""
    # This requires a bit of explanation: the basic idea is to make a dummy
    # metaclass for one level of class instantiation that replaces itself with
    # the actual metaclass.
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            print("In with_metaclass: %s" % d)
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})
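To make the trick concrete, here is a self-contained sketch (mine, not Wagtail's code). The temporary class disappears during class creation: the final class gets the real metaclass and the real bases, so calling it dispatches straight to the real base's __init__, which is how StreamBlock(block_types) can hand the list to BaseStreamBlock:

def with_metaclass(meta, *bases):
    # same shape as the six helper above, minus the print
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})

class Meta(type):
    pass

class Base:
    def __init__(self, local_items=None, **kwargs):
        # stands in for BaseStreamBlock.__init__
        self.local_items = local_items or []

class Stream(with_metaclass(Meta, Base)):
    pass

assert type(Stream) is Meta          # the real metaclass, not the dummy
assert Stream.__bases__ == (Base,)   # the real base, not temporary_class
s = Stream([('a', 1)])               # arguments flow to Base.__init__
assert s.local_items == [('a', 1)]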
DeclarativeSubBlocksMetaclass
class DeclarativeSubBlocksMetaclass(BaseBlock):
    """
    Metaclass that collects sub-blocks declared on the base classes.
    (cheerfully stolen from https://github.com/django/django/blob/master/django/forms/forms.py)
    """
    def __new__(mcs, name, bases, attrs):
        # Collect sub-blocks declared on the current class.
        # These are available on the class as `declared_blocks`.
        current_blocks = []
        for key, value in list(attrs.items()):
            if isinstance(value, Block):
                current_blocks.append((key, value))
                value.set_name(key)
                attrs.pop(key)
        current_blocks.sort(key=lambda x: x[1].creation_counter)
        attrs['declared_blocks'] = collections.OrderedDict(current_blocks)
        new_class = (super(DeclarativeSubBlocksMetaclass, mcs).__new__(mcs, name, bases, attrs))
        # Walk through the MRO, collecting all inherited sub-blocks, to make
        # the combined `base_blocks`.
        base_blocks = collections.OrderedDict()
        for base in reversed(new_class.__mro__):
            # Collect sub-blocks from base class.
            if hasattr(base, 'declared_blocks'):
                base_blocks.update(base.declared_blocks)
            # Field shadowing.
            for attr, value in base.__dict__.items():
                if value is None and attr in base_blocks:
                    base_blocks.pop(attr)
        new_class.base_blocks = base_blocks
        return new_class
BaseStreamBlock
class BaseStreamBlock(Block):
    def __init__(self, local_blocks=None, **kwargs):
        self._constructor_kwargs = kwargs
        super(BaseStreamBlock, self).__init__(**kwargs)
        # Create a local (shallow) copy of base_blocks so that it can be
        # supplemented by local_blocks.
        self.child_blocks = self.base_blocks.copy()
        if local_blocks:
            for name, block in local_blocks:
                block.set_name(name)
                self.child_blocks[name] = block
        self.dependencies = self.child_blocks.values()

Related

Python: Dynamically add properties to class instance, properties return function value with inputs

I've been going through all the Stack Overflow answers on dynamic property setting, but for whatever reason I can't seem to get this to work.
I have a class, Evolution_Base, that in its __init__ creates an instance of Value_Differences. Value_Differences should dynamically create properties, based on the list I pass, that return the function value from _get_df_change:
from pandas import DataFrame
from dataclasses import dataclass
import pandas as pd

class Evolution_Base():
    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df=res_date_0,
                                res_date_1_df=res_date_1)

        property_list = ['abc', 'xyz']
        self.difference = Value_Differences(parent=self, property_list=property_list)

    # Shared Functions
    def _get_df_change(self, df_name, operator='-'):
        df_0 = getattr(self.res.res_date_0_df, df_name.lower())
        df_1 = getattr(self.res.res_date_1_df, df_name.lower())
        return self._df_change(df_1, df_0, operator=operator)

    def _df_change(self, df_1 : pd.DataFrame, df_0 : pd.DataFrame, operator='-') -> pd.DataFrame:
        """
        Returns df_1 <operator | default = -> df_0
        """
        # is_numeric mask
        m_1 = df_1.select_dtypes('number')
        m_0 = df_0.select_dtypes('number')

        def label_me(x):
            x.columns = ['t_1', 't_0']
            return x

        if operator == '-':
            return label_me(df_1[m_1] - df_0[m_0])
        elif operator == '+':
            return label_me(df_1[m_1] + df_0[m_0])

class Value_Differences():
    def __init__(self, parent : Evolution_Base, property_list = []):
        self._parent = parent
        for name in property_list:
            def func(self, prop_name):
                return self._parent._get_df_change(name)

            # I've tried the following...
            setattr(self, name, property(fget = lambda cls_self: func(cls_self, name)))
            setattr(self, name, property(func(self, name)))
            setattr(self, name, property(func))
It's driving me nuts... Any help appreciated!
My desired outcome is for:
evolution = Evolution_Base(df_1, df_2)
evolution.difference.abc == evolution._get_df_change('abc')
evolution.difference.xyz == evolution._get_df_change('xyz')
EDIT: The simple question is really, how do I setattr for a property function?
As asked
how do I setattr for a property function?
To be usable as a property, the accessor function needs to be wrapped as a property and then assigned as an attribute of the class, not the instance.
That function, meanwhile, needs to have a single unbound parameter - which will be an instance of the class, but is not necessarily the current self. Its logic needs to use the current value of name, but late binding will be an issue because of the desire to create lambdas in a loop.
A clear and simple way to work around this is to define a helper function accepting the Value_Differences instance and the name to use, and then bind the name value eagerly.
Naively:
from functools import partial

def _get_from_parent(name, instance):
    return instance._parent._get_df_change(name)

class Value_Differences:
    def __init__(self, parent: Evolution_Base, property_list = []):
        self._parent = parent
        for name in property_list:
            setattr(Value_Differences, name, property(
                fget = partial(_get_from_parent, name)
            ))
However, this of course has the issue that every instance of Value_Differences will set properties on the class, thus modifying what properties are available for each other instance. Further, in the case where there are many instances that should have the same properties, the setup work will be repeated at each instance creation.
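To see the issue concretely, here is a quick sketch (hypothetical; None stands in for a real parent) of one instance's setup leaking onto another:

vd1 = Value_Differences(None, ['abc'])
vd2 = Value_Differences(None, [])          # asked for no properties...
print(hasattr(type(vd2), 'abc'))           # True: 'abc' leaked onto the class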
The apparent goal
It seems that what is really sought, is the ability to create classes dynamically, such that a list of property names is provided and a corresponding class pops into existence, with code filled in for the properties implementing a certain logic.
There are multiple approaches to this.
Factory A: Adding properties to an instantiated template
Just like how functions can be nested within each other and the inner function will be an object that can be modified and returned (as is common when creating a decorator), a class body can appear within a function and a new class object (with the same name) is created every time the function runs. (The code in the OP already does this, for the Results_Data dataclass.)
def example():
    class Template:
        pass
    return Template

>>> TemplateA, TemplateB = example(), example()
>>> TemplateA is TemplateB
False
>>> isinstance(TemplateA(), TemplateB)
False
>>> isinstance(TemplateB(), TemplateA)
False
So, a "factory" for value-difference classes could look like
from functools import partial

def _make_value_comparer(property_names, access_func):
    class ValueDifferences:
        def __init__(self, parent):
            self._parent = parent
    for name in property_names:
        # note: the setattr targets the newly created ValueDifferences class
        setattr(ValueDifferences, name, property(
            fget = partial(access_func, name)
        ))
    return ValueDifferences
Notice that instead of hard-coding a helper, this factory expects to be provided with a function that implements the access logic. That function takes two parameters: a property name, and the ValueDifferences instance. (They're in that order because it's more convenient for functools.partial usage.)
Factory B: Using the type constructor directly
The built-in type in Python has two entirely separate functions.
With one argument, it discloses the type of an object.
With three arguments, it creates a new type. The class syntax is in fact syntactic sugar for a call to this builtin. The arguments are:
a string name (will be set as the __name__ attribute)
a tuple of classes to use as superclasses (will be set as __bases__)
a dict mapping attribute names to their values (including methods and properties - will become the __dict__, roughly)
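As a quick illustration of that equivalence (my example, not from the original answer):

class A:
    x = 1
    def double(self):
        return self.x * 2

# The three-argument type() call builds an equivalent class.
B = type('B', (), {'x': 1, 'double': lambda self: self.x * 2})

assert A().double() == B().double() == 2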
In this style, the same factory could look something like:
from functools import partial

def _make_value_comparer(property_names, access_func):
    methods = {
        name: property(fget = partial(access_func, name))
        for name in property_names
    }
    methods['__init__'] = lambda self, parent: setattr(self, '_parent', parent)
    return type('ValueDifferences', (), methods)
Using the factory
In either of the above cases, EvolutionBase would be modified in the same way.
Presumably, every EvolutionBase should use the same ValueDifferences class (i.e., the one that specifically defines abc and xyz properties), so the EvolutionBase class can cache that class as a class attribute, and use it later:
class Evolution_Base():
    def _get_from_parent(name, mvd):
        # mvd._parent will be an instance of Evolution_Base.
        return mvd._parent._get_df_change(name)

    _MyValueDifferences = _make_value_comparer(['abc', 'xyz'], _get_from_parent)

    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df=res_date_0,
                                res_date_1_df=res_date_1)
        self.difference = self._MyValueDifferences(parent=self)
Notice that the cached _MyValueDifferences class no longer requires a list of property names to be constructed. That's because it was already provided when the class was created. The actual thing that varies per instance of _MyValueDifferences, is the parent, so that's all that gets passed.
Simpler approaches
It seems that the goal is to have a class whose instances are tightly associated with instances of Evolution_Base, providing properties specifically named abc and xyz that are computed using the Evolution_Base's data.
That could just be hard-coded as a nested class:
class Evolution_Base:
    class EBValueDifferences:
        def __init__(self, parent):
            self._parent = parent
        @property
        def abc(self):
            return self._parent._get_df_change('abc')
        @property
        def xyz(self):
            return self._parent._get_df_change('xyz')

    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df = res_date_0,
                                res_date_1_df = res_date_1)
        self.difference = self.EBValueDifferences(self)

    # _get_df_change etc. as before
Even simpler, provide corresponding properties directly on Evolution_Base:
class Evolution_Base:
    @property
    def abc_difference(self):
        return self._get_df_change('abc')
    @property
    def xyz_difference(self):
        return self._get_df_change('xyz')

    def __init__(self, res_date_0 : DataFrame, res_date_1 : DataFrame):
        @dataclass
        class Results_Data():
            res_date_0_df : DataFrame
            res_date_1_df : DataFrame

        self.res = Results_Data(res_date_0_df = res_date_0,
                                res_date_1_df = res_date_1)

    # _get_df_change etc. as before

# client code now calls my_evolution_base.abc_difference
# instead of my_evolution_base.difference.abc
If there are a lot of such properties, they could be attached using a much simpler dynamic approach (that would still be reusable for other classes that define a _get_df_change):
def add_df_change_property(name, cls):
    setattr(
        cls, f'{name}_difference',
        property(fget = lambda instance: instance._get_df_change(name))
    )
    return cls  # returning the class lets this double as a decorator
which can also be adapted for use as a decorator:
from functools import partial

def exposes_df_change(name):
    return partial(add_df_change_property, name)

@exposes_df_change('abc')
@exposes_df_change('def')
class Evolution_Base:
    ...  # `self.difference` can be removed, no other changes needed
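Hypothetical usage, assuming the full Evolution_Base from the question decorated as above (df_1 and df_2 are the question's dataframes):

e = Evolution_Base(df_1, df_2)
print(e.abc_difference)   # equivalent to e._get_df_change('abc')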
This is quite the rabbit hole. Impossible is a big call, but I will say this: they don't intend you to do this. The 'Pythonic' way of achieving your example use case is the __getattr__ method. You could also override the __dir__ method to insert your custom attributes for discoverability.
This is the code for that:
class Value_Differences():
    def __init__(self, parent : Evolution_Base, property_list = []):
        self._parent = parent
        self._property_list = property_list

    def __dir__(self):
        return sorted(set(
            dir(super(Value_Differences, self)) +
            list(self.__dict__.keys()) + self._property_list))

    def __getattr__(self, __name: str):
        if __name in self._property_list:
            return self._parent._get_df_change(__name)
But that wasn't the question, and respect for a really, really interesting question. This is one of those things that you look at and say 'hmm, should be possible', and you can get almost to a solution. I initially thought what you asked for was technically possible, just very hacky to achieve. But it turns out it would be very, very weird hackery if it were possible.
Two small foundational things to start with:
Remind ourselves of the hierarchy of Python objects that the runtime is working with when defining and instantiating classes:
The metaclass (defaulting to type), which is used to build classes. I'm going to refer to this as the Metaclass Type Object (MTO).
The class definition, which is used to build objects. I'm going to refer to this as the Class Type Object (CTO).
And the class instance or object, which I'll refer to as the Class Instance Object (CIO).
MTOs are subclasses of type. CTOs are subclasses of object. CIOs are instances of CTOs, but instantiated by MTOs.
Python runs code inside class definitions as if it was running a function:
class Class1:
    print("1")
    def __init__(self, v1):
        print("4")
    print("2")

print("3")
c1 = Class1("x")
print("5")
gives 1, 2, 3, 4, 5
Put these two things together with:
class Class1:
    def attr1_get(self):
        return 'attr1 value'
    attr1 = property(attr1_get)
we are defining a function attr1_get as part of the class definition. We are then running an inline piece of code that creates an object of type property. Note that this is just the name of the object's type - it isn't a property as you would describe it. Just an object with some attributes, being references to various functions. We then assign that object to an attribute in the class we are defining.
In the terms I used above, once that code is run we have a CTO instantiated as an object in memory that contains an attribute attr1 of type property (an object subclass, containing a bunch of attributes itself - one of which is a reference to the function attr1_get).
That can be used to instantiate an object, the CIO.
This is where the MTO comes in. You instantiate the property object while defining the CTO so that when the runtime applies the MTO to create the CIO from the CTO, an attribute on the CIO will be formed with a custom getter function for that attribute rather than the 'standard' getter function the runtime would use. The property object means something to the type object when it is building a new object.
So when we run:
c1 = Class1()
we don't get a CIO c1 with an attribute attr1 that is an object of type property. The metaclass of type type formed a set of references against the attribute's internal state to all the functions we stored in the property object. Note that this is happening inside the runtime, and you can't call this directly from your code - you just tell the type metaclass to do it by using the property wrapper object.
So if you directly assign a property() result to an attribute of a CIO, you have a Pythonic object assigned that references some functions, but the internal state for the runtime to use to reference the getter, setter, etc. is not set up. The getter of an attribute that contains a property object is the standard getter, and so returns the object instance, not the result of the functions it wraps.
This next bit of code demonstrates how this flows:
print("Let's begin")
class MetaClass1(type):
print("Starting to define MetaClass1")
def __new__(cls, name, bases, dct):
x = super().__new__(cls, name, bases, dct)
print("Metaclass1 __new__({})".format(str(cls)))
return x
print("__new__ of MetaClass1 is defined")
def __init__(cls, name, bases, dct):
print("Metaclass1 __init__({})".format(str(cls)))
print("__init__ of MetaClass1 is defined")
print("Metaclass is defined")
class Class1(object,metaclass=MetaClass1):
print("Starting to define Class1")
def __new__(cls, *args, **kwargs):
print("Class1 __new__({})".format(str(cls)))
return super(Class1, cls).__new__(cls, *args, **kwargs)
print("__new__ of Class1 is defined")
def __init__(self):
print("Class1 __init__({})".format(str(self)))
print("__init__ of Class1 is defined")
def g1(self):
return 'attr1 value'
print("g1 of Class1 is defined")
attr1 = property(g1)
print("Class1.attr1 = ", attr1)
print("attr1 of Class1 is defined")
def addProperty(self, name, getter):
setattr(self, name, property(getter))
print("self.", name, " = ", getattr(self, name))
print("addProperty of Class1 is defined")
print("Class is defined")
c1 = Class1()
print("Instance is created")
print(c1.attr1)
def g2(cls):
return 'attr2 value'
c1.addProperty('attr2', g2)
print(c1.attr2)
I have put all those print statements there to demonstrate the order in which things happen very clearly.
In the middle, you see:
g1 of Class1 is defined
Class1.attr1 = <property object at 0x105115c10>
attr1 of Class1 is defined
We have created an object of type property and assigned it to a class attribute.
Continuing:
addProperty of Class1 is defined
Metaclass1 __new__(<class '__main__.MetaClass1'>)
Metaclass1 __init__(<class '__main__.Class1'>)
Class is defined
The metaclass got instantiated, being passed first itself (__new__) and then the class it will work on (__init__). This happened right as we stepped out of the class definition. I have only included the metaclass to show what will happen with the type metaclass by default.
Then:
Class1 __new__(<class '__main__.Class1'>)
Class1 __init__(<__main__.Class1 object at 0x105124c10>)
Instance is created
attr1 value
self. attr2 = <property object at 0x105115cb0>
<property object at 0x105115cb0>
Class1 is instantiated, providing first its type to __new__ and then its instance to __init__.
We see that attr1 is instantiated properly, but attr2 is not. That is because setattr is being called once the class instance is already constructed: it merely makes attr2 an attribute whose value happens to be a property instance, rather than defining attr2 as an actual runtime property.
Which is made clearer if we run:
print(c1.attr2.fget(c1))
print(c1.attr1.fget(c1))
attr2 (a property object) isn't aware of the class or instance of the containing attribute's parent. The function it wraps still needs to be given the instance to work on.
attr1 doesn't know what to do with that, because as far as it is concerned it is a string object, and has no concept of how the runtime is mapping its getter.
The fundamental reason why what you tried doesn't work is that a property, a use case of a descriptor, by design must be stored as a class variable, not as an instance attribute.
Excerpt from the documentation of descriptor:
To use the descriptor, it must be stored as a class variable in another class:
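A minimal sketch of that rule in action (my example, not from the documentation):

class C:
    pass

p = property(lambda self: 'computed')
C.on_class = p               # stored on the class: descriptor protocol fires
c = C()
c.__dict__['on_instance'] = p  # stored on the instance: plain attribute

print(c.on_class)     # 'computed'
print(c.on_instance)  # <property object at 0x...>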
To create a class with dynamically named properties that has access to a parent class, one elegant approach is to create the class within a method of the main class, and to use setattr to create class attributes with dynamic names and property objects. A class created in the closure of a method automatically has access to the self object of the parent instance, avoiding the need to manage a clunky _parent attribute as in your attempt:
class Evolution_Base:
    def __init__(self, property_list):
        self.property_list = property_list
        self._difference = None

    @property
    def difference(self):
        if not self._difference:
            class Value_Differences:
                pass
            for name in self.property_list:
                # use a default value to bind the value of name in each iteration
                def func(obj, prop_name=name):
                    return self._get_df_change(prop_name)  # access self via closure
                setattr(Value_Differences, name, property(func))
            self._difference = Value_Differences()
        return self._difference

    def _get_df_change(self, df_name):
        return f'df change of {df_name}'  # simplified return value for demo purposes
so that:
evolution = Evolution_Base(['abc', 'xyz'])
print(evolution.difference.abc)
print(evolution.difference.xyz)
would output:
df change of abc
df change of xyz
Demo: https://replit.com/@blhsing/ExtralargeNaturalCoordinate
Responding directly to your question, you can create a class:
class FooBar:
    def __init__(self, props):
        def make_prop(name):
            return property(lambda accessor_self: self._prop_impl(name))

        self.accessor = type(
            'Accessor',
            tuple(),
            {p: make_prop(p) for p in props}
        )()

    def _prop_impl(self, arg):
        return arg

o = FooBar(['foo', 'bar'])
assert o.accessor.foo == o._prop_impl('foo')
assert o.accessor.bar == o._prop_impl('bar')
Further, it would be beneficial to cache the created class, to make equivalent objects more similar and to eliminate potential issues with equality comparison.
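Here is a hedged sketch of that caching idea (all names are mine): memoize the generated accessor class by its tuple of property names, so equivalent configurations share a single class.

from functools import lru_cache

@lru_cache(maxsize=None)
def _accessor_class(props):
    # props must be a hashable tuple of names
    def make_prop(name):
        return property(lambda acc: acc._owner._prop_impl(name))
    ns = {p: make_prop(p) for p in props}
    ns['__init__'] = lambda acc, owner: setattr(acc, '_owner', owner)
    return type('Accessor', (), ns)

a = _accessor_class(('foo', 'bar'))
b = _accessor_class(('foo', 'bar'))
assert a is b   # the same class is reused across equivalent configurations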
That said, I am not sure this is desired. There is little benefit in replacing method-call syntax (o.f('a')) with property access (o.a). I believe it can be detrimental on multiple accounts: dynamic properties are confusing and harder to document; and, while none of this is strictly guaranteed in the crazy world of dynamic Python, they communicate the wrong message: that the access is cheap and involves no computation, and that perhaps you can attempt to write to it.
I think the problem is that when you define the function func in the loop, it closes over the name variable itself, not over its current value; by the time the property is accessed, name holds the last value from the loop. To fix this, you can use a lambda with a default argument (name=name) that captures the value of name at the time the property is defined.
class Value_Differences():
    def __init__(self, parent : Evolution_Base, property_list = []):
        self._parent = parent
        for name in property_list:
            setattr(self, name, property(fget = lambda self, name=name: self._parent._get_df_change(name)))
Does this help you?
The simple question is really: how do I setattr for a property function?
In Python we can set dynamic attributes like this:
class DynamicProperties():
    def __init__(self, property_list):
        self.property_list = property_list

    def add_properties(self):
        for name in self.property_list:
            setattr(self.__class__, name, property(fget=lambda self: 1))

dync = DynamicProperties(['a', 'b'])
dync.add_properties()
print(dync.a)  # prints 1
print(dync.b)  # prints 1
Correct me if I am wrong, but from reviewing your code it seems you want to create dynamic attributes and set their value to a specific function call within the same class, where the data passed in comes through the constructor __init__. This is achievable; for example:
class DynamicProperties():
    def __init__(self, property_list, data1, data2):
        self.property_list = property_list
        self.data1 = data1
        self.data2 = data2

    def add_properties(self):
        for name in self.property_list:
            setattr(self.__class__, name, property(fget=lambda self: self.change(self.data1, self.data2)))

    def change(self, data1, data2):
        return data1 - data2

dync = DynamicProperties(['a', 'b'], 1, 2)
dync.add_properties()
print(dync.a == dync.change(1, 2))  # prints True
print(dync.b == dync.change(1, 2))  # prints True
You just have to add more complexity to the member; __getattr__ / __setattr__ gives you the attribute name as a string, so it can be interpreted as needed. The biggest "problem" with doing this is that the return type might not be consistent, and piping it back into a library that expects an object to have a specific behavior can cause soft errors.
This example is not the same as yours, but it demonstrates the same concept: manipulating columns with attribute access. To get a modified copy, no setter is needed: make a copy, modify it, and return it; the new instance can be created with whatever is needed.
For example, the __getattr__ in this line will:
Check and interpret the string xyz_mull_0
Validate that the members and the operand exists
Make a copy of data_a
Modify the copy and return it
var = data_a.xyz_mull_0()
This looks more complex than it actually is; with the same instance's members it is clear what it is doing. The _of modifier, however, needs a callback: __getattr__ can only take one parameter, so it has to save the attribute name and return a callback that will be called with the other instance, which then calls back into __getattr__ to complete the rest of the function.
import re

class FlexibleFrame:
    operand_mod = {
        'sub': lambda a, b: a - b,
        'add': lambda a, b: a + b,
        'div': lambda a, b: a / b,
        'mod': lambda a, b: a % b,
        'mull': lambda a, b: a * b,
    }

    @staticmethod
    def add_operand(name, func):
        if name not in FlexibleFrame.operand_mod.keys():
            FlexibleFrame.operand_mod[name] = func

    # This makes this class subscriptable
    def __getitem__(self, item):
        return self.__dict__[item]

    # Uses:
    # -> object.value
    # -> object.member()
    # -> object.<name>_<operand>_<name|int>()
    # -> object.<name>_<operand>_<name|int>_<flow>()
    def __getattr__(self, attr):
        if re.match(r'^[a-zA-Z]+_[a-zA-Z]+_[a-zA-Z0-9]+(_of)?$', attr):
            seg = attr.split('_')
            var_a, operand, var_b = seg[0:3]

            # If there is a _of: the second operand is from the other
            # instance, the _of is removed and a callback is returned
            if len(seg) == 4:
                self.__attr_ref = '_'.join(seg[0:3])
                return self.__getattr_of

            # Checks if this was a _of attribute and resets it
            if self.__back_ref is not None:
                other = self.__back_ref
                self.__back_ref = None
                self.__attr_ref = None
            else:
                other = self

            if var_a not in self.__dict__:
                raise AttributeError(
                    f'No match of {var_a} in (primary) {__class__.__name__}'
                )
            if operand not in FlexibleFrame.operand_mod.keys():
                raise AttributeError(
                    f'No match of operand {operand}'
                )

            # The return is a copy of self; if not, the instance
            # gets modified, making x = a.b() useless
            ret = FlexibleFrame(**self.__dict__)

            # Checks if the second operand is an int
            if re.match(r'^\d+$', var_b):
                ref_b_num = int(var_b)
                for i in range(len(self[var_a])):
                    ret[var_a][i] = FlexibleFrame.operand_mod[operand](
                        self[var_a][i], ref_b_num
                    )
            elif var_b in other.__dict__:
                for i in range(len(self[var_a])):
                    # out_index = operand[type](in_a_index, in_b_index)
                    ret[var_a][i] = FlexibleFrame.operand_mod[operand](
                        self[var_a][i], other[var_b][i]
                    )
            else:
                raise AttributeError(
                    f'No match of {var_b} in (secondary) {__class__.__name__}'
                )

            # This swaps the .member to a .member()
            # it also adds an extra () in __getattr_of
            return lambda: ret
            # return ret

        if attr in self.__dict__:
            return self[attr]

        raise AttributeError(
            f'No match of {attr} in {__class__.__name__}'
        )

    def __getattr_of(self, other):
        self.__back_ref = other
        return self.__getattr__(self.__attr_ref)()

    def __init__(self, **kwargs):
        self.__back_ref = None
        self.__attr_ref = None
        # TODO: Check if data columns match in size
        # if not, implement column_<name>_filler=<default>
        for i in kwargs:
            self.__dict__[i] = kwargs[i]

if __name__ == '__main__':
    data_a = FlexibleFrame(**{
        'abc': [i for i in range(10)],
        'nmv': [i for i in range(10)],
        'xyz': [i for i in range(10)],
    })
    data_b = FlexibleFrame(**{
        'fee': [i + 10 for i in range(10)],
        'foo': [i + 10 for i in range(10)],
    })

    FlexibleFrame.add_operand('set', lambda a, b: b)
    var = data_a.xyz_mull_0()
    var = var.abc_set_xyz()
    var = var.xyz_add_fee_of(data_b)
As an extra note, lambdas in Python close over variables rather than their values (late binding), which can make them tricky to use when what a name refers to changes.
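For instance, the classic late-binding pitfall (a generic sketch, unrelated to FlexibleFrame itself):

fns = [lambda: i for i in range(3)]
print([f() for f in fns])        # [2, 2, 2]: every closure sees the final i

fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])        # [0, 1, 2]: defaults bind the value early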
It seems you're bending the language to do weird things. I'd take it as a smell that your code is probably getting convoluted, but I'm not saying there would never be a use case for it, so here is a minimal example of how to do it:
class Obj:
    def _df_change(self, arg):
        print('change', arg)

class DynAttributes(Obj):
    def __getattr__(self, name):
        return self._df_change(name)

class Something:
    difference = DynAttributes()

a = Something()
b = Obj()

assert a.difference.hello == b._df_change('hello')
When calling setattr, use self.__class__ instead of self.
Code sample:
from typing import List

class A:
    def __init__(self, names : List[str]):
        for name in names:
            setattr(self.__class__, name, property(fget=self.__create_getter(name)))

    def __create_getter(self, name: str):
        def inner(self):
            print(f"invoking {name}")
            return 10
        return inner

a = A(['x', 'y'])
print(a.x + 1)
print(a.y + 2)

Mix type(), and custom __init__() using super().__init__()

What I've managed to do so far:
I've made an elem class to represent html elements (div, html, span, body, etc.).
I'm able to derive from this class like this to make subclasses for each element:
class elem:
    def __init__(self, content="", tag="div", attr={}, tag_type="double"):
        """Builds the element."""
        self.tag = tag
        self.attr = attr
        self.content = content
        self.tag_type = tag_type

class head(elem):
    """A head html element."""
    def __init__(self, content=None, **kwargs):
        super().__init__(tag="head", content=content, **kwargs)
And it works pretty well.
But I have to write this for each subclass declaration, and that's pretty repetitive and redundant if I want to do every HTML tag type.
So I was trying to make a make_elem() function that would make my class by taking the corresponding tag name as a string parameter.
So instead of the previous class definition, I would simply have something like this:
head = make_elem_class("head")
Where I'm stuck
This function should create a class. And the __init__() method from this class should call the __init__() method from the class it inherits from.
I tried to make this make_elem_class() function and it looked like this :
def make_elem_class(name):
    """Dynamically creates the class with a type() call."""
    def init(self, content=None, **kwargs):
        super().__init__(tag=name, content=None, **kwargs)
    return type(name, (elem,), {"__init__" : init})
But when running html = make_elem_class('html'), then html("html element") I get the following error:
Traceback (most recent call last):
File "elements.py", line 118, in <module>
html("html element")
File "elements.py", line 20, in init
super().__init__(tag=name, content=None, **kwargs)
TypeError: object.__init__() takes no parameters
I guess that it has something to do with the empty super() call, so I tried with super(elem, self) instead. But it obviously doesn't work better.
How could I achieve this?
NB: If I remove the "__init__": init from the dictionary in the type() call, it works fine but the tag isn't correctly set in my elem. I've also tried to directly pass {"tag": name} to type() but it didn't work either.
You can't use the no-argument form of super() here, as there is no class statement here to provide the context that that function normally needs.
Or rather, you can't unless you provide that context yourself; you need to set the name __class__ as a closure here:
def make_elem_class(name):
    """Dynamically creates the class with a type() call."""
    def init(self, content=None, **kwargs):
        super().__init__(tag=name, content=content, **kwargs)
    __class__ = type(name, (elem,), {"__init__" : init})
    return __class__
super() automatically will take the __class__ value from the closure. Note that I pass on the value for content, not None, to the elem.__init__ method; you wouldn't want to lose that value.
If that is too magical for you, explicitly name the class and self when calling super(); again, the class is going to be taken from the closure:
def make_elem_class(name):
    """Dynamically creates the class with a type() call."""
    def init(self, content=None, **kwargs):
        super(elemcls, self).__init__(tag=name, content=content, **kwargs)
    elemcls = type(name, (elem,), {"__init__" : init})
    return elemcls
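A hypothetical usage check, assuming the elem base class from the question:

html = make_elem_class('html')
page = html("hello")
assert page.tag == 'html' and page.content == "hello"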
What about a more straightforward solution, like inferring the tag from the class __name__?
class elem:
    def __init__(self, content="", tag=None, attr={}, tag_type="double"):
        """Builds the element."""
        self.tag = tag or self.__class__.__name__
        ...
And then:
class div(elem): pass
class head(elem): "Optional docstring for <head>"
...
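A quick sanity check of the idea (a sketch; it assumes elem.__init__ above also stores content and the other fields as in the question):

d = div()
assert d.tag == 'div'      # falls back to the class name
h = head(tag='HEAD')
assert h.tag == 'HEAD'     # an explicit tag still wins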
A bit less magic (controversial), and a bit more explicit. :-)
I think this is a little bit of an XY problem, in that you've asked how to use super in a dynamically created class, but what you really want is a less verbose way to set various class variables and defaults for your subclasses.
Since you expect all instances of the same tag class to share the same tag name, you might as well set it as a class variable rather than an instance variable, e.g.
from abc import ABC, abstractmethod

class Elem(ABC):
    tag_type = "double"  # the default tag type

    def __init__(self, content="", attr=None, tag_type=None):
        """Builds the element."""
        self.attr = attr if attr is not None else {}
        self.content = content
        if tag_type is not None:
            self.tag_type = tag_type

    @property
    @abstractmethod
    def tag(self):
        """All base classes should identify the tag they represent"""
        raise TypeError("undefined tag for {}".format(type(self)))

class Head(Elem):
    tag = "head"
    tag_type = "text"

class Div(Elem):
    tag = "div"

h = Head()
d = Div()
h1 = Head(tag_type="int")

assert h.tag == "head"
assert d.tag == "div"
assert h1.tag == "head"

assert h.tag_type == "text"
assert d.tag_type == "double"
assert h1.tag_type == "int"
You can now write very short child classes and still have your classes explicitly declared. You'll note that I changed a couple of the defaults to None. For attr, this is because mutable default arguments don't work the way you might expect: the default behaves more like a shared class variable. Instead, have the default be None; if attr has not been specified, create a new dict for each instance. The second (tag_type) is so that if tag_type is specified, the instance will have its own tag_type set, while all other instances rely on the class for the default value.
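The mutable-default pitfall mentioned above, shown in isolation (a generic sketch):

def f(attr={}):
    attr['n'] = attr.get('n', 0) + 1
    return attr

print(f())  # {'n': 1}
print(f())  # {'n': 2}: the same dict is shared across calls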

Ruby like DSL in Python

I'm currently writing my first bigger project in Python, and I'm wondering how to define a class method so that you can execute it in the class body of a subclass of that class.
First, to give some more context, a stripped-down (I removed everything non-essential for this question) example of how I'd do this in Ruby:
If I define a class Item like this:
class Item
  def initialize(data={})
    @data = data
  end

  def self.define_field(name)
    define_method("#{name}") { instance_variable_get("@data")[name.to_s] }
    define_method("#{name}=") do |value|
      instance_variable_get("@data")[name.to_s] = value
    end
  end
end
I can use it like this:
class MyItem < Item
  define_field("name")
end

item = MyItem.new
item.name = "World"
puts "Hello #{item.name}!"
Now so far I tried achieving something similar in Python, but I'm not happy with the result I've got so far:
class ItemField(object):
    def __init__(self, name):
        self.name = name

    def __get__(self, item, owner=None):
        return item.values[self.name]

    def __set__(self, item, value):
        item.values[self.name] = value

    def __delete__(self, item):
        del item.values[self.name]

class Item(object):
    def __init__(self, data=None):
        if data is None: data = {}
        self.values = data
        for field in type(self).fields:
            self.values[field.name] = None
            setattr(self, field.name, field)

    @classmethod
    def define_field(cls, name):
        if not hasattr(cls, "fields"): cls.fields = []
        cls.fields.append(ItemField(name))
Now I don't know how I can call define_field from within a subclass's body. This is what I wish were possible:
class MyItem(Item):
    define_field("name")

item = MyItem({"name": "World"})
print("Hello {}!".format(item.name))
item.name = "reader"
print("Hello {}!".format(item.name))
There's this similar question, but none of the answers are really satisfying. Somebody recommends calling the function with __func__(), but I guess I can't do that, because I can't get a reference to the class from within its anonymous body (please correct me if I'm wrong about this).
Somebody else pointed out that it's better to use a module-level function for doing this, which I also think would be the easiest way; however, the main intention of me doing this is to make the implementation of subclasses clean, and having to load that module function wouldn't be too nice either. (Also, I'd have to do the function call outside the class body, and I think that's messy.)
So basically I think my approach is wrong, because Python wasn't designed to allow this kind of thing to be done. What would be the best way to achieve something as in the Ruby example with Python?
(If there's no better way I've already thought about just having a method in the subclass which returns an array of the parameters for the define_field method.)
Perhaps calling a class method isn't the right route here. I'm not quite up to speed on exactly how and when Python creates classes, but my guess is that the class object doesn't yet exist when you'd call the class method to create an attribute.
It looks like you want to create something like a record. First, note that Python allows you to add attributes to your user-created classes after creation:
class Foo(object):
    pass

>>> foo = Foo()
>>> foo.x = 42
>>> foo.x
42
Maybe you want to constrain which attributes the user can set. Here's one way.
class Item(object):
    def __init__(self):
        if type(self) is Item:
            raise NotImplementedError("Item must be subclassed.")

    def __setattr__(self, name, value):
        if name not in self.fields:
            raise AttributeError("Invalid attribute name.")
        else:
            self.__dict__[name] = value

class MyItem(Item):
    fields = ("foo", "bar", "baz")
So that:
>>> m = MyItem()
>>> m.foo = 42 # works
>>> m.bar = "hello" # works
>>> m.test = 12 # raises AttributeError
Lastly, the above allows the user to subclass Item without defining fields, like so:
class MyItem(Item):
    pass
This will result in a cryptic attribute error saying that the attribute fields could not be found. You can require that the fields attribute be defined at the time of class creation by using metaclasses. Furthermore, you can abstract away the need for the user to specify the metaclass by inheriting from a superclass that you've written to use the metaclass:
class ItemMetaclass(type):
    def __new__(cls, clsname, bases, dct):
        if "fields" not in dct:
            raise TypeError("Subclass must define 'fields'.")
        return type.__new__(cls, clsname, bases, dct)

class Item(object, metaclass=ItemMetaclass):  # Python 3 metaclass syntax
    fields = None

    def __init__(self):
        if type(self) == Item:
            raise NotImplementedError("Must subclass Item.")

    def __setattr__(self, name, value):
        if name in self.fields:
            self.__dict__[name] = value
        else:
            raise AttributeError("The item has no such attribute.")

class MyItem(Item):
    fields = ("one", "two", "three")
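A small sketch of the effect (given the metaclass as written above): a subclass that omits fields now fails at class-definition time rather than at first use.

try:
    class BadItem(Item):
        pass
except TypeError as e:
    print(e)  # Subclass must define 'fields'.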
You're almost there! If I understand you correctly:
class Item(object):
    def __init__(self, data=None):
        data = data or {}
        for field, value in data.items():
            if hasattr(self, field):
                setattr(self, field, value)

    @classmethod
    def define_field(cls, name):
        setattr(cls, name, None)
EDIT: As far as I know, it's not possible to access the class being defined while defining it. You can, however, call the method in the __init__ method:
class Something(Item):
    def __init__(self):
        type(self).define_field("name")
But then you're just reinventing the wheel.
When defining a class, you cannot reference the class itself inside its own definition block. So you have to call define_field(...) on MyItem after its definition. E.g.,
class MyItem(Item):
    pass

MyItem.define_field("name")

item = MyItem({"name": "World"})
print("Hello {}!".format(item.name))
item.name = "reader"
print("Hello {}!".format(item.name))

Python: showing attributes assigned to a class object in the class code

One of my classes does a lot of aggregate calculating on a collection of objects, then assigns an attribute and value appropriate to each specific object, e.g.:
class Team(object):
    def __init__(self, name):  # updated for typo in code, added self
        self.name = name

class LeagueDetails(object):
    def __init__(self):  # added for clarity, corrected another typo
        self.team_list = [Team('name'), ...]
        self.calculate_league_standings()  # added for clarity

    def calculate_league_standings(self):
        # calculate standings as a team_place_dict
        for team in self.team_list:
            team.place = team_place_dict[team.name]  # a new team attribute
I know that, as long as calculate_league_standings has been run, every team has team.place. What I would like to be able to do is scan the code for class Team(object) and read all the attributes, both those created by class methods and those created by external methods which operate on Team objects. I am getting a little sick of typing for p in dir(team): print p just to see what the attribute names are. I could define a bunch of blank attributes in the Team __init__, e.g.
class Team(object):
    def __init__(self, name):  # updated for typo in code, added self
        self.name = name
        self.place = None  # dummy attribute, but recognizable when the code is scanned
It seems redundant to have calculate_league_standings return team._place and then add
@property
def place(self): return self._place
I know I could comment a list of attributes at the top of class Team, which is the obvious solution, but I feel like there has to be a best practice here, something pythonic and elegant.
If I half understand your question, you want to keep track of which attributes of an instance have been added after initialization. If this is the case, you could use something like this:
#! /usr/bin/python3.2

def trackable(cls):
    cls._tracked = {}
    oSetter = cls.__setattr__
    def setter(self, k, v):
        try: self.initialized
        except: return oSetter(self, k, v)
        try: getattr(self, k)  # only track attributes that do not exist yet
        except:
            if not self in self.__class__._tracked:
                self.__class__._tracked[self] = []
            self.__class__._tracked[self].append(k)
        return oSetter(self, k, v)
    cls.__setattr__ = setter

    oInit = cls.__init__
    def init(self, *args, **kwargs):
        o = oInit(self, *args, **kwargs)
        self.initialized = 42
        return o
    cls.__init__ = init

    oGetter = cls.__getattribute__
    def getter(self, k):
        if k == 'tracked': return self.__class__._tracked[self]
        return oGetter(self, k)
    cls.__getattribute__ = getter
    return cls

@trackable
class Team:
    def __init__(self, name, region):
        self.name = name
        self.region = region

# set name and region during initialization
t = Team('A', 'EU')

# set rank and ELO outside (hence trackable)
# in your "aggregate" functions
t.rank = 4  # a new team attribute
t.ELO = 14  # a new team attribute

# see which attributes have been created after initialization
print(t.tracked)
If I did not understand the question, please do specify which part I got wrong.
Due to Python's dynamic nature, I don't believe there is a general answer to your question. An attribute of an instance can be set in many ways, including plain assignment, setattr(), and writes to __dict__. Writing a tool to statically analyze Python code and correctly determine all possible attributes of a class by covering all these methods would be very difficult.
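For illustration (a generic sketch), all three of these produce an ordinary instance attribute:

class T: pass

t = T()
t.a = 1                  # plain assignment
setattr(t, 'b', 2)       # setattr()
t.__dict__['c'] = 3      # direct write to __dict__
assert (t.a, t.b, t.c) == (1, 2, 3)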
In your specific case, as the programmer you know that class Team will have a place attribute in many instances, so you can decide to be explicit and write its constructor like so:
class Team(object):
    def __init__(self, name, place=None):
        self.name = name
        self.place = place
I would say there is no need to define a property for a simple attribute, unless you want side effects or derived values to happen at read or write time.

Does Python support something like literal objects?

In Scala I could define an abstract class and implement it with an object:
abstract class Base {
  def doSomething(x: Int): Int
}

object MySingletonAndLiteralObject extends Base {
  override def doSomething(x: Int) = x * x
}
My concrete example in Python:
class Book(Resource):
    path = "/book/{id}"

    def get(request):
        return aBook
Inheritance wouldn't make sense here, since no two classes could have the same path. And only one instance is needed, so the class doesn't act as a blueprint for objects. In other words: no class is needed here for a Resource (Book in my example), but a base class is needed to provide common functionality.
I'd like to have:
object Book(Resource):
    path = "/book/{id}"

    def get(request):
        return aBook
What would be the Python 3 way to do it?
Use a decorator to convert the inherited class to an object at creation time
I believe that the concept of such an object is not a typical way of coding in Python, but if you must then the decorator class_to_object below for immediate initialisation will do the trick. Note that any parameters for object initialisation must be passed through the decorator:
def class_to_object(*args):
    def c2obj(cls):
        return cls(*args)
    return c2obj
using this decorator we get
>>> @class_to_object(42)
... class K(object):
...     def __init__(self, value):
...         self.value = value
...
>>> K
<__main__.K object at 0x38f510>
>>> K.value
42
The end result is that you have an object K similar to your Scala object, and there is no class in the namespace to initialise other objects from.
Note: To be pedantic, the class of the object K can be retrieved as K.__class__, and hence other objects may be initialised if somebody really wants to. In Python there is almost always a way around things if you really want.
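In code, that caveat looks like this (a sketch):

K2 = K.__class__(99)   # the class is still reachable via __class__
assert K2.value == 99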
Use an abc (Abstract Base Class):
import abc

class Resource(metaclass=abc.ABCMeta):
    @abc.abstractproperty
    def path(self):
        ...
        return p
Then anything inheriting from Resource is required to implement path. Notice that path is actually implemented in the ABC; you can access this implementation with super.
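A hypothetical concrete subclass, to show the contract in use (Book and its path value are my stand-ins):

class Book(Resource):
    @property
    def path(self):
        return "/book/{id}"

b = Book()                       # instantiable because path is implemented
assert b.path == "/book/{id}"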
If you can instantiate Resource directly you just do that and stick the path and get method on directly.
from types import MethodType

book = Resource()

def get(self):
    return aBook

book.get = MethodType(get, book)
book.path = path
This assumes, though, that path and get are not used in the __init__ method of Resource, and that path is not used by any class methods, which it shouldn't be, given your concerns.
If your primary concern is making sure that nothing inherits from the Book non-class, then you could just use this metaclass
class Terminal(type):
    classes = []

    def __new__(meta, classname, bases, classdict):
        if [cls for cls in meta.classes if cls in bases]:
            raise TypeError("Can't Touch This")
        cls = super(Terminal, meta).__new__(meta, classname, bases, classdict)
        meta.classes.append(cls)
        return cls

class Book(object, metaclass=Terminal):  # Python 3 metaclass syntax
    pass

class PaperBackBook(Book):
    pass
You might want to replace the exception thrown with something more appropriate. This would really only make sense if you find yourself instantiating a lot of one offs.
And if that's not good enough for you and you're using CPython, you could always try some of this hackery:
class Resource(object):
    def __init__(self, value, location=1):
        self.value = value
        self.location = location

with Object('book', Resource, 1, location=2):
    path = '/books/{id}'
    def get(self):
        aBook = 'abook'
        return aBook

print(book.path)
print(book.get())
made possible by my very first context manager.
import copy
import sys

class Object(object):
    def __init__(self, name, cls, *args, **kwargs):
        self.cls = cls
        self.name = name
        self.args = args
        self.kwargs = kwargs

    def __enter__(self):
        self.f_locals = copy.copy(sys._getframe(1).f_locals)

    def __exit__(self, exc_type, exc_val, exc_tb):
        class cls(self.cls):
            pass
        f_locals = sys._getframe(1).f_locals
        new_items = [item for item in f_locals if item not in self.f_locals]
        for item in new_items:
            setattr(cls, item, f_locals[item])
            del f_locals[item]  # Keyser Soze the new names from the enclosing namespace
        obj = cls(*self.args, **self.kwargs)
        f_locals[self.name] = obj  # and insert the new object
Of course I encourage you to use one of my above two solutions, or Katrielalex's suggestion of ABCs.
