I'm working on a class describing an object that can be expressed in several "units", I'll say, to keep things simple. Let's say we're talking about length. (It's actually something more complicated.) What I would like is for the user to be able to input 1 and "inch", for example, and automatically get member variables in feet, meters, furlongs, what have you as well. I want the user to be able to input any of the units I am dealing in, and get member variables in all the other units. My thought was to do something like this:
class length:
    @classmethod
    def inch_to_foot(cls, inch):
        # etc.
    @classmethod
    def inch_to_meter(cls, inch):
        # etc.
I guess you get the idea. Then I would define a dictionary in the class:
from_to = {'inch': {'foot': inch_to_foot, 'meter': inch_to_meter, ...},
           'furlong': {'foot': furlong_to_foot, ...},
           # etc.
           }
So then I think I can write an __init__ method
def __init__(self, num, unit):
    cls = self.__class__
    setattr(self, unit, num)
    for k in cls.from_to[unit].keys():
        setattr(self, k, cls.from_to[unit][k](num))
But no go. I get the error "class method not callable". Any ideas how I can make this work? Any ideas for scrapping the whole thing and trying a different approach? Thanks.
If you move the from_to variable into __init__ and modify it to something like:
cls.from_to = {'inch': {'foot': cls.inch_to_foot, 'meter': cls.inch_to_meter}}
then I think it works as you expect.
Unfortunately I can't fully explain why, because I haven't used classmethods much myself, but I think it has something to do with bound vs. unbound methods. Anyway, if you print the functions stored in from_to in your code vs. the ones with my modification, you'll see they are different (mine are bound methods, yours are classmethod objects).
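To see the difference concretely, here is a minimal sketch (Python 3 syntax; Length is a stand-in for your class):

class Length:
    @classmethod
    def inch_to_foot(cls, inch):
        return inch / 12.0

    # evaluated while the class body runs: this stores the raw
    # classmethod descriptor, which is not callable by itself
    from_to = {'inch': {'foot': inch_to_foot}}

raw = Length.from_to['inch']['foot']
print(type(raw))                     # <class 'classmethod'>
bound = raw.__get__(None, Length)    # the descriptor protocol does the binding
print(bound(6))                      # 0.5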
Hope that helps somewhat!
EDIT: I've thought about it a bit more, and I think the problem is that you are storing a reference to the functions before they have been bound to the class (not surprising, since the binding happens once the rest of the class has been parsed). My advice would be to forget about storing a dictionary of function references, and instead store (in some representation of your choice) strings that indicate the units you can convert between. For instance you might choose a similar format, such as:
from_to = {'inch':['foot','meter']}
and then look up the functions during __init__ using getattr
For example:
class length:
    from_to = {'inch': ['foot', 'meter']}

    def __init__(self, num, unit):
        if unit not in self.from_to:
            raise RuntimeError('unit %s not supported' % unit)
        cls = self.__class__
        setattr(self, unit, num)
        for k in cls.from_to[unit]:
            f = getattr(cls, '%s_to_%s' % (unit, k))
            setattr(self, k, f(num))

    @classmethod
    def inch_to_foot(cls, inch):
        return inch / 12.0

    @classmethod
    def inch_to_meter(cls, inch):
        return inch * 2.54 / 100

a = length(3, 'inch')
print a.meter
print a.foot
print length.inch_to_foot(3)
I don't think doing this with an __init__() method is a good idea. I once saw an interesting way to do it in the "Overriding the __new__ method" section of the classic document Unifying types and classes in Python 2.2 by Guido van Rossum.
Here are some examples:
class inch_to_foot(float):
    "Convert from inch to feet"
    def __new__(cls, arg=0.0):
        return float.__new__(cls, float(arg) / 12)

class inch_to_meter(float):
    "Convert from inch to meter"
    def __new__(cls, arg=0.0):
        return float.__new__(cls, arg * 0.0254)
print inch_to_meter(5) # 0.127
Here's a completely different answer that uses a metaclass and requires the conversion functions to be staticmethods rather than classmethods -- it turns them into properties based on the target unit's name. It searches for the names of any conversion functions itself, eliminating the need to manually define from_to-style tables.
One thing about this approach is that the conversion functions aren't even called unless references are made to the units associated with them. Another is that they're dynamic, in the sense that the results returned will reflect the current value of the instance (unlike instances of three_pineapples' length class, which stores the results of calling them on the numeric value of the instance when it's initially constructed).
You've never said what version of Python you're using, so the following code is for Python 2.2 - 2.x.
import re

class MetaUnit(type):
    def __new__(metaclass, classname, bases, classdict):
        cls = type.__new__(metaclass, classname, bases, classdict)
        # add a constructor
        setattr(cls, '__init__',
                lambda self, value=0: setattr(self, '_value', value))
        # add a property for getting and setting the underlying value
        setattr(cls, 'value',
                property(lambda self: self._value,
                         lambda self, value: setattr(self, '_value', value)))
        # add an identity property that just returns the value unchanged
        unitname = classname.lower()  # lowercase classname becomes name of unit
        setattr(cls, unitname, property(lambda self: self._value))
        # find conversion methods and create properties that use them
        matcher = re.compile(unitname + r'''_to_(?P<target_unitname>\w+)''')
        for name in cls.__dict__.keys():
            match = matcher.match(name)
            if match:
                target_unitname = match.group('target_unitname').lower()
                # the default argument binds each method at definition time,
                # so every property keeps its own conversion function
                fget = (lambda self, conversion_method=getattr(cls, name):
                        conversion_method(self._value))
                setattr(cls, target_unitname, property(fget))
        return cls
Sample usage:
scalar_conversion_staticmethod = (
    lambda scale_factor: staticmethod(lambda value: value * scale_factor))

class Inch(object):
    __metaclass__ = MetaUnit
    inch_to_foot = scalar_conversion_staticmethod(1./12.)
    inch_to_meter = scalar_conversion_staticmethod(0.0254)

a = Inch(3)
print a.inch   # 3
print a.meter  # 0.0762
print a.foot   # 0.25

a.value = 6
print a.inch   # 6
print a.meter  # 0.1524
print a.foot   # 0.5
I am trying to modify an already defined class by changing an attribute's value. Importantly, I want this change to propagate internally.
For example, consider this class:
class Base:
    x = 1
    y = 2 * x
    # Other attributes and methods might follow

assert Base.x == 1
assert Base.y == 2
I would like to change x to 2, making it equivalent to this.
class Base:
    x = 2
    y = 2 * x

assert Base.x == 2
assert Base.y == 4
But I would like to make it in the following way:
Base = injector(Base, x=2)
Is there a way to achieve this WITHOUT recompiling the original class source code?
The effect you want to achieve belongs to the realm of "reactive programming" - a programming paradigm (from which the now ubiquitous JavaScript library got its name as an inspiration).
While Python has a lot of mechanisms to allow that, one needs to write code that actually makes use of those mechanisms.
By default, plain Python code like the one in your example uses the imperative paradigm, which is eager: whenever an expression is encountered, it is executed, and the result of that expression is used (in this case, the result is stored in the class attribute).
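A short demonstration of that eagerness:

class Base:
    x = 1
    y = 2 * x   # evaluated right now; only the number 2 is stored

Base.x = 5
print(Base.y)   # still 2: the expression itself is gone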
Python's flexibility can also make it so that once you write a codebase that allows some reactive code to take place, users of that codebase don't have to be aware of it, and things work more or less "magically".
But, as stated above, that is not free. For the case of being able to redefine y when x changes in
class Base:
    x = 1
    y = 2 * x
There are a couple of paths that can be followed - the most important point is that, at the time the "*" operator is executed (and that happens while Python is parsing the class body), at least one side of the operation must no longer be a plain number, but a special object which implements a custom __mul__ (or __rmul__) method. Then, instead of storing a resulting number in y, the expression itself is stored somewhere, and when y is retrieved as a class attribute, other mechanisms force the expression to resolve.
If you want this at instance level, rather than at class level, it would be easier to implement. But keep in mind that you'd have to define each operator on your special "source" class for primitive values.
Also, both this and the easier, instance-descriptor approach using property are "lazily evaluated": that means the value for y is calculated when it is to be used (it can be cached if it will be used more than once). If you want to evaluate it whenever x is assigned (and not when y is consumed), that will require other mechanisms; although caching the lazy approach can mitigate the need for eager evaluation to the point it should not be needed.
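To make the idea concrete, here is a minimal instance-level sketch of such an expression-recording object (the names Lazy and value are mine, purely illustrative):

class Lazy:
    def __init__(self, compute):
        self.compute = compute        # zero-argument callable producing the value

    def __mul__(self, other):
        other_fn = other.compute if isinstance(other, Lazy) else (lambda: other)
        # store the expression instead of executing it
        return Lazy(lambda: self.compute() * other_fn())

    __rmul__ = __mul__

    @property
    def value(self):
        return self.compute()         # resolved only when asked for

source = {'x': 1}
x = Lazy(lambda: source['x'])
y = x * 2
print(y.value)    # 2
source['x'] = 3
print(y.value)    # 6 -- reflects the current value of x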
1 - Before digging there
Python's easiest way to do code like this is simply to write the expressions to be calculated as functions, and use the property built-in as a descriptor to retrieve these values. The drawback is small: you just have to wrap your expressions in a function (and then wrap that function in something that will add the descriptor properties to it, such as property). The gain is huge: you are free to use any Python code inside your expression, including function calls, object instantiation, I/O, and the like. (Note that the other approach requires wiring up each desired operator just to get started.)
The plain "101" approach to have what you want working for instances of Base is:
class Base:
    x = 1

    @property
    def y(self):
        return self.x * 2

b = Base()
b.y
-> 2
Base.x = 3
b.y
-> 6
The work of property can be rewritten so that retrieving y from the class, instead of an instance, achieves the effect as well (this is still easier than the other approach).
If this will work for you somehow, I'd recommend doing it. If you need to cache y's value until x actually changes, that can be done with normal coding.
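For instance, caching plus invalidation needs nothing beyond property (a sketch, assuming x is only ever written through its setter):

class Base:
    def __init__(self):
        self._x = 1
        self._y_cache = None

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value
        self._y_cache = None        # invalidate on every write to x

    @property
    def y(self):
        if self._y_cache is None:   # recompute only after invalidation
            self._y_cache = 2 * self.x
        return self._y_cache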
2 - Exactly what you asked for, with a metaclass
As stated above, Python would need to know about the special status of your y attribute when calculating its expression 2 * x. At assignment time, it would already be too late.
Fortunately, Python 3 allows class bodies to run in a custom namespace for the attribute assignment, by implementing the __prepare__ method in a metaclass, and then recording everything that takes place and replacing primitive attributes of interest with specially crafted objects implementing __mul__ and other special methods.
Going this way could even allow values to be eagerly calculated, so they can work as plain Python objects, but register information so that a special injector function could recreate the class redoing all the attributes that depend on expressions. It could also implement lazy evaluation, somewhat as described above.
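Before the full implementation, here is the __prepare__ mechanism in isolation, stripped down to a sketch:

class LoggingDict(dict):
    def __setitem__(self, key, value):
        print('class body assigned', key)
        super().__setitem__(key, value)

class Meta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        return LoggingDict()    # the namespace the class body will run in

    def __new__(mcls, name, bases, ns, **kwargs):
        return super().__new__(mcls, name, bases, dict(ns))

class Demo(metaclass=Meta):
    a = 1   # prints "class body assigned a" (plus __module__ and __qualname__)

The code below combines that namespace hook with descriptor objects that record expressions instead of evaluating them: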
from collections import UserDict
import operator

class Reactive:
    def __init__(self, value):
        self._initial_value = value
        self.values = {}

    def __set_name__(self, owner, name):
        self.name = name
        self.values[owner] = self._initial_value

    def __get__(self, instance, owner):
        return self.values[owner]

    def __set__(self, instance, value):
        raise AttributeError("value can't be set directly - call 'injector' to change this value")

    def value(self, cls=None):
        return self.values.get(cls, self._initial_value)

    op1 = value

    @property
    def result(self):
        return self.value

    # dynamically populate magic methods for operation overloading:
    for name in "mul add sub truediv pow contains".split():
        op = getattr(operator, name)
        locals()[f"__{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(self, other, operator)))(op)
        locals()[f"__r{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(other, self, operator)))(op)

class ReactiveExpr(Reactive):
    def __init__(self, value, op2, operator):
        self.op2 = op2
        self.operator = operator
        super().__init__(value)

    def result(self, cls):
        op1, op2 = self.op1(cls), self.op2
        if isinstance(op1, Reactive):
            op1 = op1.result(cls)
        if isinstance(op2, Reactive):
            op2 = op2.result(cls)
        return self.operator(op1, op2)

    def __get__(self, instance, owner):
        return self.result(owner)

class AuxDict(UserDict):
    def __init__(self, *args, _parent, **kwargs):
        self.parent = _parent
        super().__init__(*args, **kwargs)

    def __setitem__(self, item, value):
        if isinstance(value, self.parent.reacttypes) and not item.startswith("_"):
            value = Reactive(value)
        super().__setitem__(item, value)

class MetaReact(type):
    reacttypes = (int, float, str, bytes, list, tuple, dict)

    def __prepare__(*args, **kwargs):
        return AuxDict(_parent=__class__)

    def __new__(mcls, name, bases, ns, **kwargs):
        pre_registry = {}
        cls = super().__new__(mcls, name, bases, ns.data, **kwargs)
        #for name, obj in ns.items():
            #if isinstance(obj, ReactiveExpr):
                #pre_registry[name] = obj
                #setattr(cls, name, obj.result())
        for name, reactive in pre_registry.items():
            _registry[cls, name] = reactive
        return cls

def injector(cls, inplace=False, **kwargs):
    original = cls
    if not inplace:
        cls = type(cls.__name__, (cls.__bases__), dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        if isinstance(attr, Reactive):
            if isinstance(attr, ReactiveExpr) and name in kwargs:
                raise AttributeError("Expression attributes can't be modified by injector")
            attr.values[cls] = kwargs.get(name, attr.values[original])
    return cls

class Base(metaclass=MetaReact):
    x = 1
    y = 2 * x
And, after pasting the snippet above in a REPL, here is the result of using injector:
In [97]: Base2 = injector(Base, x=5)
In [98]: Base2.y
Out[98]: 10
The idea is complicated by the fact that the Base class is declared with dependent, dynamically evaluated attributes. While we can inspect a class's static attributes, I think there's no way of getting the dynamic expression except for parsing the class's source code, finding and replacing the "injected" attribute name with its value, and exec/eval-ing the definition again. But that's the way you wanted to avoid (moreover, you expected injector to be unified for all classes).
If you want to rely on dynamically evaluated attributes, define the dependent attribute as a lambda function.
class Base:
    x = 1
    y = lambda: 2 * Base.x

Base.x = 2
print(Base.y())  # 4
I have a problem with my base class. I started writing it after finding an answer on this site about more informative __repr__() methods. I added to it after finding a different answer on this site about using pprint() with my own classes. I tinkered with it a little more after finding a third answer on this site about making my classes unpackable with a ** operator.
I modified it again after seeing in yet another answer on this site that there was a distinction between merely giving it __getitem__(), __iter__(), and __len__() methods on the one hand, and actually making it a fully-qualified mapping by subclassing collections.abc.Mapping on the other. Further, I saw that doing so would remove the need for writing my own keys() method, as the Mapping would take care of that.
So I got rid of keys(), and a class method broke.
The problem
I have a method that iterates through my class' keys and values to produce one big string formatted as I'd like it. That class looks like this.
class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))
Yes, that doesn't have the base class in it, but the MWE later on will account for that. The nut of it is that (key for key in self.keys()) part. When I have a keys() method written, I get the output I want.
def keys(self):
    """Get object attribute names."""
    return self.__dict__.keys()
When I remove that to go with the keys() method supplied by collections.abc.Mapping, I get no space between key and value.
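My reading of why, based on the MWE below, offered as a guess rather than gospel: Mapping.keys() returns a KeysView that iterates the mapping itself, i.e. it calls __iter__. The __iter__ in BaseMap yields (key, value) tuples rather than bare keys, so max(..., key=len) compares 2-tuples, longest comes out as 2, and key_width becomes 3 -- exactly the width of 'foo' and 'bar', hence no space. A Mapping-conforming base would yield keys only and let Mapping derive items():

import collections.abc

class BaseMap(collections.abc.Mapping):
    def __getitem__(self, attr):
        return self.__dict__[attr]

    def __iter__(self):
        return iter(self.__dict__)      # yield keys only, per the Mapping contract

    def __len__(self):
        return len(self.__dict__)

with the_problem_method() then iterating self.items() instead of self.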
The question
I can get the output I want by restoring the keys() method (and maybe adding values() and items() while I'm at it), but is that the best approach? Would it be better to go with the Mapping one and modify my class method to suit it? If so, how? Should I leave Mapping well enough alone until I know I need it?
This is my base class to be copied all over creation and subclassed out the wazoo. I want to Get. It. Right.
There are already several considerations I can think of and many more of which I am wholly ignorant.
I use Python 3.9 and greater. I'll abandon 3.9 when conda does.
I want to keep my more-informative __repr__() methods.
I want pprint() to work, via the _dispatch table method with _format_dict_items().
I want to allow for duck typing my classes reliably.
I have not yet used type hinting, but I want to allow for using best practices there if I start.
Everything else I know nothing about.
The MWE
This has my problem class at the top and output stuff at the bottom. There are two series of classes building upon the previous ones.
The first is a set of ever-more-inclusive base classes, and it is here that the difference between the instance with the keys() method and the one without is shown. The first class, BaseMap, subclasses Mapping and has the __getitem__(), __iter__(), and __len__() methods. The next class up the chain, BaseMapKeys, subclasses that and adds the keys() method.
The second group, MapObj and MapKeysObj, are subclasses of the problem class that also subclass those different base classes respectively.
OK, maybe the WE isn't so M, but lots of things got me to this point and I don't want to neglect any.
import collections.abc
from pprint import pprint, PrettyPrinter

TAB_WIDTH = 3

class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))

class Base(object):
    """Base class with more informative __repr__."""
    def __repr__(self):
        """Object representation."""
        params = (f'{key}={repr(value)}'
                  for key, value in self.__dict__.items())
        return f'{repr(self.__class__)}({", ".join(params)})'

class BaseMap(Base, collections.abc.Mapping):
    """Enable class to be pprint-able, unpacked with **."""
    def __getitem__(self, attr):
        """Get object attribute values."""
        return getattr(self.__dict__, attr)

    def __iter__(self):
        """Make object iterable."""
        for attr in self.__dict__.keys():
            yield attr, getattr(self, attr)

    def __len__(self):
        """Get length of object."""
        return len(self.__dict__)

class BaseMapKeys(BaseMap):
    """Overwrite KeysView output with what I thought it would be."""
    def keys(self):
        """Get object attribute names."""
        return self.__dict__.keys()

class MapObj(BaseMap, MyObj):
    """Problem class with collections.abc.Mapping."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

class MapKeysObj(BaseMapKeys, MyObj):
    """Problem class with collections.abc.Mapping and keys method."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

if isinstance(getattr(PrettyPrinter, '_dispatch'), dict):
    # assume the dispatch table method still works
    def pprint_basemap(printer, object, stream, indent, allowance, context,
                       level):
        """Implement pprint for subclasses of BaseMap class."""
        write = stream.write
        write(f'{object.__class__}(\n {indent * " "}')
        printer._format_dict_items(object, stream, indent, allowance + 1,
                                   context, level)
        write(f'\n{indent * " "})')

    map_classes = [MapObj, MapKeysObj]
    for map_class in map_classes:
        PrettyPrinter._dispatch[map_class.__repr__] = pprint_basemap

def print_stuff(map_obj):
    print('pprint object:')
    pprint(map_obj)
    print()
    print('print keys():')
    print(map_obj.keys())
    print()
    print('print list(keys()):')
    print(list(map_obj.keys()))
    print()
    print('print the problem method:')
    print(map_obj.the_problem_method())
    print('\n\n')

params = ['This is a really long line to force new line in pprint output', 2]
baz = MapObj(*params)
print_stuff(baz)

scoggs = MapKeysObj(*params)
print_stuff(scoggs)
I ran into an interesting situation while working on a project:
I am constructing a class, which we can call ValueContainer, which will always store a single value under a value attribute. I want ValueContainer to have custom functionality, keep other metadata, etc.; however, I would like it to inherit all the magic/dunder methods (e.g. __add__, __sub__, __repr__) from value. The obvious solution is to implement all magic methods by hand and point each operation at the value attribute.
Example definition:
class ValueContainer:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, ValueContainer):
            other = other.value
        return self.value.__add__(other)
Example behavior:
vc1 = ValueContainer(1)
assert vc1 + 2 == 3
vc2 = ValueContainer(2)
assert vc1 + vc2 == 3
However, there are two issues here.
I want to inherit ALL magic methods from type(self.value), which would end up being likely 20+ different functions, all with the same core functionality (calling the corresponding magic method of value). This makes every ounce of my body shiver and shout "DRY! DRY! DRY!"
value can be any type. At the VERY least I need to support numeric types (int, float) and strings. The sets of magic methods and their behaviors for numerics and strings are already different enough to make this a sticky situation to handle. Add in the fact that I would like the ability to store custom types in value, and it becomes somewhat unimaginable to implement manually.
With these two things in mind, I spent a long time trying different approaches to get this working. The tough part comes from the fact that dunder methods are class properties(?), but value gets assigned to an instance.
Attempt 1: After value is assigned, we look up all the methods that start with __ on the class type(self.value), and assign the class dunder methods on ValueContainer to be these functions. This seemed to be a good solution at first, before realizing that doing this would reassign the dunder methods of ValueContainer for all instances.
This means when we instantiate:
valc_int = ValueContainer(1)
it will apply all dunder methods from int to the ValueContainer class. Great!
...but if we then instantiate:
valc_str = ValueContainer('a string')
all the dunder methods for str will be set on the class ValueContainer, meaning valc_int would now try to use the dunder methods from str, potentially causing issue when there's overlap.
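A compressed reconstruction of that failure mode (my sketch, not the asker's actual code):

class VC:
    def __init__(self, value):
        self.value = value

def adopt_dunders(instance):
    # attempt 1, reduced to a single method: copy the value type's
    # __add__ onto the *class*, which every instance shares
    VC.__add__ = lambda self, other: type(instance.value).__add__(self.value, other)

v_int = VC(1)
adopt_dunders(v_int)
v_str = VC('a')
adopt_dunders(v_str)    # clobbers the method v_int was relying on

print(v_str + 'b')      # ab
print(v_int + 2)        # TypeError: descriptor '__add__' requires a 'str' object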
Attempt 2: This is the solution I am currently using, which achieves the majority of the functionality that I'm after.
Welcome, Metaclasses.
import functools

def _magic_function(valc, method_name, *args, **kwargs):
    if hasattr(valc.value, method_name):
        # Get valc.value's magic method
        func = getattr(valc.value, method_name)
        # If comparing to another ValueContainer, need to compare to its .value
        new_args = [arg.value if isinstance(arg, ValueContainer)
                    else arg for arg in args]
        return func(*new_args, **kwargs)

class ValueContainerMeta(type):
    blacklist = [
        '__new__',
        '__init__',
        '__getattribute__',
        '__getnewargs__',
        '__doc__',
    ]

    # Filter magic methods. The lambdas run lazily, when the filters are
    # consumed in __new__ below, and must reference the blacklist through
    # ValueContainerMeta (a lambda cannot see the class-body scope, and
    # ValueContainer itself does not exist yet at that point).
    methods = {*int.__dict__, *str.__dict__}
    methods = filter(lambda m: m.startswith('__'), methods)
    methods = filter(lambda m: m not in ValueContainerMeta.blacklist, methods)

    def __new__(cls, name, bases, attr):
        new = super().__new__(cls, name, bases, attr)
        # Set all specified magic methods to our _magic_function
        for method_name in ValueContainerMeta.methods:
            setattr(new, method_name,
                    functools.partialmethod(_magic_function, method_name))
        return new

class ValueContainer(metaclass=ValueContainerMeta):
    def __init__(self, value):
        self.value = value
Explanation:
By using the ValueContainerMeta metaclass, we intercept the creation of ValueContainer, and override the specific magic methods that we collect on the ValueContainerMeta.methods class attribute. The magic here comes from the combination of our _magic_function function and functools.partialmethod. Just like a dunder method, _magic_function takes the ValueContainer instance it is being called on as the first parameter. We'll come back to this in a second. The next argument, method_name, is the string name of the magic method we want to call ('__add__' for example). The remaining *args and **kwargs will be the arguments that would be passed to the original magic method (generally no arguments or just other, but sometimes more).
In the ValueContainerMeta metaclass, we collect a list of magic methods to override, and use partialmethod to inject the method name to call without actually calling _magic_function itself. Initially I thought just using functools.partial would serve the purpose since dunder methods live on the class, but apparently magic methods are somehow also bound to instances even though they are class attributes? I still don't fully understand the implementation, but using functools.partialmethod solves this issue by injecting the ValueContainer instance being called as the first argument to _magic_function (valc).
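The distinction is visible in a toy example: functools.partial produces a plain object with no __get__, while functools.partialmethod implements the descriptor protocol and therefore binds the instance on attribute access:

import functools

class Demo:
    def tagged(self, label):
        return (type(self).__name__, label)

    # accessed on an instance, partialmethod binds self and injects 'x'
    tagged_x = functools.partialmethod(tagged, 'x')

print(Demo().tagged_x())   # ('Demo', 'x')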
Output:
def test_magic_methods():
    v1 = ValueContainer(1.0)
    eq_(v1 + 4, 5.0)
    eq_(4 + v1, 5.0)
    eq_(v1 - 3.5, -2.5)
    eq_(3.5 - v1, 2.5)
    eq_(v1 * 10, 10)
    eq_(v1 / 10, 0.1)

    v2 = ValueContainer(2.0)
    eq_(v1 + v2, 3.0)
    eq_(v1 - v2, -1.0)
    eq_(v1 * v2, 2.0)
    eq_(v1 / v2, 0.5)

    v3 = ValueContainer(3.3325)
    eq_(round(v3), 3)
    eq_(round(v3, 2), 3.33)

    v4 = ValueContainer('magic')
    v5 = ValueContainer('-works')
    eq_(v4 + v4, 'magicmagic')
    eq_(v4 * 2, 'magicmagic')
    eq_(v4 + v5, 'magic-works')

    # Float magic methods still work even though
    # we instantiated a str ValueContainer
    eq_(v1 + v2, 3.0)
    eq_(v1 - v2, -1.0)
    eq_(v1 * v2, 2.0)
    eq_(v1 / v2, 0.5)
Overall, I am happy with this solution, EXCEPT for the fact that you must specify which method names to inherit explicitly in ValueContainerMeta. As you can see, for now I've taken the superset of str and int magic methods. If possible, I would love a way to dynamically populate the list of method names based on the type of value, but since this is happening before its instantiation, I don't believe that would be possible with this approach. If there are magic methods on a type that aren't contained within the superset of int and str right now, this solution would not work with those.
Although this solution is 95% of what I am looking for, it was such an interesting problem that I wanted to know if anyone else could come up with a better solution, that accomplishes dynamic choosing of magic methods from the type of value, or has optimizations/tricks for improving other aspects, or if someone could explain more of the internals of how magic methods work.
As you've correctly identified,
magic methods are discovered on the class, not on the instance, and
you have no access to the wrapped value prior to class creation.
With that in mind, I think it's impossible to force instances of the same class to overload operators differently depending on the wrapped value type.
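The first point can be shown in a couple of lines: implicit special-method lookup goes through the type and ignores instance attributes:

class C:
    pass

c = C()
c.__add__ = lambda other: 42    # instance attribute; the interpreter never sees it
try:
    c + 1
except TypeError as e:
    print(e)    # unsupported operand type(s) for +: 'C' and 'int'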
One workaround is to dynamically create and cache ValueContainer subclasses. For example,
import inspect

blacklist = frozenset([
    '__new__',
    '__init__',
    '__getattribute__',
    '__getnewargs__',
    '__doc__',
    '__setattr__',
    '__str__',
    '__repr__',
])

# container type superclass
class ValueContainer:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return '{}({!r})'.format(self.__class__.__name__, self.value)

# produce method wrappers
def method_factory(method_name):
    def method(self, other):
        if isinstance(other, ValueContainer):
            other = other.value
        return getattr(self.value, method_name)(other)
    return method

# create and cache container types (subclasses of ValueContainer)
type_container_cache = {}

def type_container(type_, blacklist=blacklist):
    try:
        return type_container_cache[type_]
    except KeyError:
        pass
    # e.g. IntContainer, StrContainer
    name = f'{type_.__name__.title()}Container'
    bases = ValueContainer,
    method_names = {
        method_name for method_name, _
        in inspect.getmembers(type_, inspect.ismethoddescriptor)
        if method_name.startswith('__') and method_name not in blacklist
    }
    result = type_container_cache[type_] = type(name, bases, {
        n: method_factory(n) for n in method_names})
    return result

# create or lookup an appropriate ValueContainer
def value_container(value):
    cls = type_container(type(value))
    return cls(value)
You can then use the value_container factory.
i2 = value_container(2)
i3 = value_container(3)
assert 2 + i2 == 4 == i2 + 2
assert repr(i2) == 'IntContainer(2)'
assert type(i2) is type(i3)
s = value_container('a')
assert s + 'b' == 'ab'
assert repr(s) == "StrContainer('a')"
Igor provided a very nice piece of code. You will probably want to enhance the method factory to support non-binary operations but, apart from that, in my opinion the use of a blacklist is not ideal in terms of maintenance. You have to carefully review all possible special methods now, and check them again for possibly new ones with each new release of Python.
Building on top of Igor's code, I suggest another way making use of multiple inheritance. Inheriting from both the wrapped type and the value container lets the containers be almost perfectly compatible with the wrapped types while including the common services from the generic container. As a bonus, this approach makes the code even simpler (and even better with Igor's tip about lru_cache).
import functools

# container type superclass
class ValueDecorator:
    def wrapped_type(self):
        return type(self).__bases__[1]

    def custom_operation(self):
        print('hey! i am a', self.wrapped_type(), 'and a', type(self))

    def __repr__(self):
        return '{}({})'.format(self.__class__.__name__, super().__repr__())

# create and cache container types (e.g. IntContainer, StrContainer)
@functools.lru_cache(maxsize=16)
def type_container(type_):
    name = f'{type_.__name__.title()}Container'
    bases = (ValueDecorator, type_)
    return type(name, bases, {})

# create or lookup an appropriate container
def value_container(value):
    cls = type_container(type(value))
    return cls(value)
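Usage then looks like this (a quick sketch of what the classes above produce):

i = value_container(2)
print(repr(i))          # IntContainer(2)
print(i + 3)            # 5 -- plain int arithmetic from the int base
i.custom_operation()    # prints the wrapped type (int) and the container type

s = value_container('ab')
print(s.upper())        # AB -- str methods come along for free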
Please note that unlike Sam and Igor's methods, which reference the input object in the container, this method creates a new subclassed object initialized with the input object. It is fine for basic values but may cause undesirable effects for other types, depending on how their constructor deals with copying.
I have a class which would be a container for a number of variables of different types. The collection is finite and not very large so I didn't use a dictionary. Is there a way to automate, or shorten the creation of variables based on whether or not they are requested (specified as True/False) in the constructor?
Here is what I have for example:
class test:
    def __init__(self, a=False, b=False, c=False):
        if a: self.a = {}
        if b: self.b = 34
        if c: self.c = "generic string"
For any of a,b,c that are true in the constructor they will be created in the object.
I have a collection of standard variables (a,b,c,d..) that some objects will have and some objects won't. The number of combinations is too large to create separate classes, but the number of variables isn't enough to have a dictionary for them in each class.
Is there any way in python to do something like this:
class test:
    def __init__(self, *args):
        default_values = {a: {}, b: 34, c: "generic string"}
        for item in args:
            if item: self.arg = default_values[arg]
Maybe there is a whole other way to do this?
EDIT:
To clarify: this is a class which represents different types of bounding boxes on a 2D surface. Depending on the function of the box, it can have any of frame coordinates, internal cross coordinates, id, population statistics (attached to that box), and some other cached values for easy calculation.
I don't want to have each object as a dictionary because there are methods attached to it which allow it to export and modify its internal data and interact with other objects of the same type (similar to how strings interact with + - .join, etc.). I also don't want to have a dictionary inside each object because the call to that variable is inelegant:
print foo.info["a"]
versus
print foo.a
Thanks to ballsdotball I've come up with a solution:
class test:
    def __init__(self, a=False, b=False, c=False):
        default_values = {"a": {}, "b": 34, "c": "generic string"}
        for k, v in default_values.iteritems():
            if eval(k): setattr(self, k, v)
Maybe something like:
def __init__(self, *args, **kwargs):
    default_values = {"a": {}, "b": 34, "c": "generic string"}
    for k, v in kwargs.iteritems():
        try:
            if v is not False:
                setattr(self, k, default_values[k])
        except Exception, e:
            print "Argument has no default value.", e
But to be honest I would just put the default values in with the init arguments instead of having to test for them like that.
*Edited a couple times for syntax.
You can subclass dict (if you aren't using positional arguments):
class Test(dict):
    def your_method(self):
        return self['foo'] * 4
You can also override __getattr__ and __setattr__ if the self['foo'] syntax bothers you:
class Test(dict):
    def __getattr__(self, key):
        return dict.__getitem__(self, key)

    def __setattr__(self, key, value):
        return dict.__setitem__(self, key, value)

    def your_method(self):
        return self.foo * 4
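With the item-access delegation above, usage would look like:

t = Test(foo=3)
print(t.your_method())   # 12
t.bar = 5                # stored as a dict entry
print(t['bar'])          # 5
print(t.bar)             # 5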
I am looking for a way to create a basic python "object" which I can externally assign attributes to.
Currently I am doing it the following way:
I define an empty class with
class C(object):
    pass
and then I instantiate an object and assign attributes like this:
c = C()
c.attr = 2
Coming to my question
Is there a way to instantiate an empty class object, which I can then assign attributes like shown above without defining a class C?
Is there maybe an other better way to accomplish what I am after?
It looks like you are looking for a flexible container that has no methods and can take attributes with arbitrary names. That's a dict.
d = dict()
d['myattr'] = 42
If you prefer the attribute syntax that you get with a class (c.myattr = 42), then use a class just as per the code in your question.
Is there a way to instantiate an empty class object, which I can then assign attributes like shown above without defining a class C?
Yes:
>>> C = type("C", (object,), {})
>>> c = C()
>>> c.attr = 2
But as you can see, it's not much of an improvement, and the end result is the same -- it's just another way of creating the same class C.
Addendum:
You can make it prettier by "hiding" it in a function:
def attr_holder(cls=type("C", (object,), {})):
    return cls()

c = attr_holder()
c.attr = 2
Though this is just reinventing the wheel -- replace the two line function with
class attr_holder(object):
    pass
and it'll work exactly the same, and we've come full circle. So: go with what David or Reorx suggests.
I had come to the same question long ago, and then created this class to use in many of my projects:
class DotDict(dict):
    """
    retrieve value of dict in dot style
    """
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError('has no attribute %s' % key)

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        try:
            del self[key]
        except KeyError:
            raise AttributeError(key)

    def __str__(self):
        return '<DotDict %s >' % self.__to_dict()

    def __to_dict(self):
        return dict(self)
When I want an object to store data, or want to retrieve values easily from a dict, I always use this class. Additionally, it can help me serialize the attributes that I set in the object, and conversely get the original dict back when needed.
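For instance (a small usage sketch):

d = DotDict({'name': 'widget'})
d.price = 9.99          # stored as an ordinary dict entry
print(d.name)           # widget
print(d['price'])       # 9.99
print(dict(d))          # {'name': 'widget', 'price': 9.99}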
So I think this may be a good solution in many situations; the other tricks may look simpler, but they aren't as helpful beyond the basics.