Dynamically inherit all Python magic methods from an instance attribute

I ran into an interesting situation while working on a project:
I am constructing a class, which we can call ValueContainer, that always stores a single value under a value attribute. ValueContainer has custom functionality, keeps other metadata, etc.; however, I would like it to inherit all the magic/dunder methods (e.g. __add__, __sub__, __repr__) from value. The obvious solution is to implement every magic method by hand and point each operation at the value attribute.
Example definition:
class ValueContainer:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, ValueContainer):
            other = other.value
        return self.value.__add__(other)
Example behavior:
vc1 = ValueContainer(1)
assert vc1 + 2 == 3
vc2 = ValueContainer(2)
assert vc1 + vc2 == 3
However, there are two issues here.
I want to inherit ALL magic methods from type(self.value), which would likely end up being 20+ different functions, all with the same core functionality (calling the corresponding magic method of value). This makes every ounce of my body shiver and shout "DRY! DRY! DRY!"
value can be any type. At the very least I need to support numeric types (int, float) and strings. The sets of magic methods for numerics and strings, and their behaviors, already differ enough to make this a sticky situation to handle. Add in the fact that I would like the ability to store custom types in value, and implementing this by hand becomes somewhat unimaginable.
With these two things in mind, I spent a long time trying different approaches to get this working. The tough part comes from the fact that dunder methods live on the class, but value gets assigned on an instance.
Attempt 1: After value is assigned, we look up all the methods that start with __ on the class type(self.value), and assign the class dunder methods on ValueContainer to be these functions. This seemed like a good solution at first, before I realized that doing this reassigns the dunder methods of ValueContainer for all instances.
This means when we instantiate:
valc_int = ValueContainer(1)
it will apply all dunder methods from int to the ValueContainer class. Great!
...but if we then instantiate:
valc_str = ValueContainer('a string')
all the dunder methods for str will be set on the class ValueContainer, meaning valc_int would now try to use the dunder methods from str, potentially causing issues where the two overlap.
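Here is a sketch of what Attempt 1 might have looked like (a hypothetical reconstruction for illustration; the original code isn't shown):

def _delegate(method_name):
    def method(self, *args, **kwargs):
        args = [a.value if isinstance(a, ValueContainer) else a for a in args]
        return getattr(self.value, method_name)(*args, **kwargs)
    return method

class ValueContainer:
    def __init__(self, value):
        self.value = value
        skip = ('__new__', '__init__', '__getattribute__',
                '__getnewargs__', '__doc__')
        for name in type(value).__dict__:
            if name.startswith('__') and name not in skip:
                # Class-level assignment: rebinding here affects EVERY
                # existing and future instance of ValueContainer.
                setattr(ValueContainer, name, _delegate(name))

valc_int = ValueContainer(1)         # installs int's dunder set on the class
valc_str = ValueContainer('a str')   # replaces it with str's dunder set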
Attempt 2: This is the solution I am currently using, which achieves the majority of the functionality that I'm after.
Welcome, Metaclasses.
import functools

def _magic_function(valc, method_name, *args, **kwargs):
    if hasattr(valc.value, method_name):
        # Get valc.value's magic method
        func = getattr(valc.value, method_name)
        # If comparing to another ValueContainer, need to compare to its .value
        new_args = [arg.value if isinstance(arg, ValueContainer)
                    else arg for arg in args]
        return func(*new_args, **kwargs)

class ValueContainerMeta(type):
    blacklist = [
        '__new__',
        '__init__',
        '__getattribute__',
        '__getnewargs__',
        '__doc__',
    ]

    # Filter magic methods (the filters are lazy, so the lambda only looks
    # up ValueContainerMeta.blacklist once __new__ iterates over them)
    methods = {*int.__dict__, *str.__dict__}
    methods = filter(lambda m: m.startswith('__'), methods)
    methods = filter(lambda m: m not in ValueContainerMeta.blacklist, methods)

    def __new__(cls, name, bases, attr):
        new = super(ValueContainerMeta, cls).__new__(cls, name, bases, attr)
        # Set all specified magic methods to our _magic_function
        for method_name in ValueContainerMeta.methods:
            setattr(new, method_name,
                    functools.partialmethod(_magic_function, method_name))
        return new

class ValueContainer(metaclass=ValueContainerMeta):
    def __init__(self, value):
        self.value = value
Explanation:
By using the ValueContainerMeta metaclass, we intercept the creation of ValueContainer, and override the specific magic methods that we collect on the ValueContainerMeta.methods class attribute. The magic here comes from the combination of our _magic_function function and functools.partialmethod. Just like a dunder method, _magic_function takes the ValueContainer instance it is being called on as the first parameter. We'll come back to this in a second. The next argument, method_name, is the string name of the magic method we want to call ('__add__' for example). The remaining *args and **kwargs will be the arguments that would be passed to the original magic method (generally no arguments or just other, but sometimes more).
In the ValueContainerMeta metaclass, we collect a list of magic methods to override, and use partialmethod to inject the method name to call without actually calling _magic_function itself. Initially I thought just using functools.partial would serve the purpose, since dunder methods live on the class, but apparently magic methods are somehow also bound to instances even though they are defined on the class? I still don't fully understand the implementation, but using functools.partialmethod solves this issue by injecting the ValueContainer instance it is called on as the first argument of _magic_function (valc).
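The binding difference is easy to see outside of dunder methods entirely: functools.partial objects are not descriptors, so they never receive the instance, while functools.partialmethod participates in the descriptor protocol and binds it. A tiny illustration (the names here are made up for the demo):

import functools

def report(self, tag):
    return (type(self).__name__, tag)

class WithPartial:
    method = functools.partial(report, tag='x')    # not a descriptor: self never bound

class WithPartialMethod:
    method = functools.partialmethod(report, 'x')  # descriptor: self bound on access

try:
    WithPartial().method()
except TypeError as e:
    print(e)  # report() missing 1 required positional argument: 'self'

print(WithPartialMethod().method())  # ('WithPartialMethod', 'x')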
Output:
def test_magic_methods():
    v1 = ValueContainer(1.0)
    eq_(v1 + 4, 5.0)
    eq_(4 + v1, 5.0)
    eq_(v1 - 3.5, -2.5)
    eq_(3.5 - v1, 2.5)
    eq_(v1 * 10, 10)
    eq_(v1 / 10, 0.1)

    v2 = ValueContainer(2.0)
    eq_(v1 + v2, 3.0)
    eq_(v1 - v2, -1.0)
    eq_(v1 * v2, 2.0)
    eq_(v1 / v2, 0.5)

    v3 = ValueContainer(3.3325)
    eq_(round(v3), 3)
    eq_(round(v3, 2), 3.33)

    v4 = ValueContainer('magic')
    v5 = ValueContainer('-works')
    eq_(v4 + v4, 'magicmagic')
    eq_(v4 * 2, 'magicmagic')
    eq_(v4 + v5, 'magic-works')

    # Float magic methods still work even though
    # we instantiated a str ValueContainer
    eq_(v1 + v2, 3.0)
    eq_(v1 - v2, -1.0)
    eq_(v1 * v2, 2.0)
    eq_(v1 / v2, 0.5)
Overall, I am happy with this solution, EXCEPT for the fact that you must specify which method names to inherit explicitly in ValueContainerMeta. As you can see, for now I've taken the superset of str and int magic methods. If possible, I would love a way to dynamically populate the list of method names based on the type of value, but since this is happening before its instantiation, I don't believe that would be possible with this approach. If there are magic methods on a type that aren't contained within the superset of int and str right now, this solution would not work with those.
Although this solution is 95% of what I am looking for, it was such an interesting problem that I wanted to see if anyone else could come up with a better solution that accomplishes dynamically choosing magic methods from the type of value, has optimizations/tricks for improving other aspects, or can explain more of the internals of how magic methods work.

As you've correctly identified, magic methods are discovered on the class, not on the instance, and you have no access to the wrapped value prior to class creation. With that in mind, I think it's impossible to force instances of the same class to overload operators differently depending on the wrapped value type.
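This is easy to demonstrate: implicit special method lookup goes through the type and ignores instance attributes.

class C:
    pass

c = C()
c.__add__ = lambda other: 42   # instance attribute, ignored by the + operator
try:
    c + 1
except TypeError as e:
    print(e)                   # unsupported operand type(s) for +: 'C' and 'int'

C.__add__ = lambda self, other: 42  # on the class, it works
print(c + 1)                        # 42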
One workaround is to dynamically create and cache ValueContainer subclasses. For example,
import inspect

blacklist = frozenset([
    '__new__',
    '__init__',
    '__getattribute__',
    '__getnewargs__',
    '__doc__',
    '__setattr__',
    '__str__',
    '__repr__',
])

# container type superclass
class ValueContainer:
    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return '{}({!r})'.format(self.__class__.__name__, self.value)

# produce method wrappers
def method_factory(method_name):
    def method(self, other):
        if isinstance(other, ValueContainer):
            other = other.value
        return getattr(self.value, method_name)(other)
    return method

# create and cache container types (subclasses of ValueContainer)
type_container_cache = {}

def type_container(type_, blacklist=blacklist):
    try:
        return type_container_cache[type_]
    except KeyError:
        pass
    # e.g. IntContainer, StrContainer
    name = f'{type_.__name__.title()}Container'
    bases = ValueContainer,
    method_names = {
        method_name for method_name, _ in
        inspect.getmembers(type_, inspect.ismethoddescriptor)
        if method_name.startswith('__') and method_name not in blacklist
    }
    result = type_container_cache[type_] = type(name, bases, {
        n: method_factory(n) for n in method_names})
    return result

# create or look up an appropriate ValueContainer subclass
def value_container(value):
    cls = type_container(type(value))
    return cls(value)
You can then use the value_container factory.
i2 = value_container(2)
i3 = value_container(3)
assert 2 + i2 == 4 == i2 + 2
assert repr(i2) == 'IntContainer(2)'
assert type(i2) is type(i3)
s = value_container('a')
assert s + 'b' == 'ab'
assert repr(s) == "StrContainer('a')"

Igor provided a very nice piece of code. You will probably want to enhance the method factory to support non-binary operations but, apart from that, in my opinion the use of a blacklist is not ideal in terms of maintenance: you have to carefully review all possible special methods now, and check them again for possibly new ones with each new release of Python.
Building on top of Igor's code, I suggest another way, making use of multiple inheritance. Inheriting from both the wrapped type and the value container makes the containers almost perfectly compatible with the wrapped types, while still including the common services from the generic container. As a bonus, this approach makes the code even simpler (and even better with Igor's tip about lru_cache).
import functools

# container type superclass
class ValueDecorator:
    def wrapped_type(self):
        return type(self).__bases__[1]

    def custom_operation(self):
        print('hey! i am a', self.wrapped_type(), 'and a', type(self))

    def __repr__(self):
        return '{}({})'.format(self.__class__.__name__, super().__repr__())

# create and cache container types (e.g. IntContainer, StrContainer)
@functools.lru_cache(maxsize=16)
def type_container(type_):
    name = f'{type_.__name__.title()}Container'
    bases = (ValueDecorator, type_)
    return type(name, bases, {})

# create or look up an appropriate container
def value_container(value):
    cls = type_container(type(value))
    return cls(value)
Please note that unlike Sam's and Igor's approaches, which hold a reference to the input object in the container, this one creates a new subclassed object initialized from the input object. That is fine for basic values but may cause undesirable effects for other types, depending on how their constructors deal with copying.
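For completeness, here is a usage sketch of the multiple-inheritance version (assuming the code above has been run as-is):

i = value_container(2)
assert isinstance(i, int)            # the container really IS an int
assert i + 3 == 5                    # all of int's operators come for free
assert repr(i) == 'IntContainer(2)'
i.custom_operation()                 # prints the wrapped type and the container type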

Related

Modify an attribute of an already defined class in Python (and run its definition again)

I am trying to modify an already defined class by changing an attribute's value. Importantly, I want this change to propagate internally.
For example, consider this class:
class Base:
    x = 1
    y = 2 * x
    # Other attributes and methods might follow

assert Base.x == 1
assert Base.y == 2
I would like to change x to 2, making it equivalent to this.
class Base:
    x = 2
    y = 2 * x

assert Base.x == 2
assert Base.y == 4
But I would like to make it in the following way:
Base = injector(Base, x=2)
Is there a way to achieve this WITHOUT recompiling the original class source code?
The effect you want to achieve belongs to the realm of "reactive programming" - a programming paradigm (from which the now ubiquitous JavaScript library took its name).
While Python has a lot of mechanisms to allow that, one needs to write code that actually makes use of those mechanisms.
By default, plain Python code like the one in your example uses the imperative paradigm, which is eager: whenever an expression is encountered, it is executed, and its result is used (in this case, the result is stored in the class attribute).
Python's advantages also can make it so that once you write a codebase that will allow some reactive code to take place, users of your codebase don't have to be aware of that, and things work more or less "magically".
But, as stated above, that is not free. For the case of being able to redefine y when x changes in
class Base:
    x = 1
    y = 2 * x
There are a couple of paths that can be followed - the most important point is that, at the time the "*" operator is executed (which happens when Python is parsing the class body), at least one side of the operation must no longer be a plain number, but a special object which implements a custom __mul__ (or __rmul__) method. Then, instead of storing a resulting number in y, the expression itself is stored somewhere, and when y is retrieved as a class attribute, other mechanisms force the expression to resolve.
If you want this at instance level, rather than at class level, it would be easier to implement. But keep in mind that you'd have to define each operator on your special "source" class for primitive values.
Also, both this and the easier, instance-descriptor approach using property are "lazily evaluated": that is, the value for y is calculated when it is about to be used (and it can be cached if it will be used more than once). If you want to evaluate it whenever x is assigned (and not when y is consumed), that will require other mechanisms, although caching the lazy approach can mitigate the need for eager evaluation to the point where it should not be needed.
1 - Before digging there
Python's easiest way to do code like this is simply to write the expressions to be calculated as functions, and use the property built-in as a descriptor to retrieve these values. The drawback is small: you just have to wrap your expressions in a function (and then wrap that function in something that adds the descriptor behavior to it, such as property). The gain is huge: you are free to use any Python code inside your expression, including function calls, object instantiation, I/O, and the like. (Note that the other approach requires wiring up each desired operator, just to get started.)
The plain "101" approach to have what you want working for instances of Base is:
class Base:
    x = 1

    @property
    def y(self):
        return self.x * 2

b = Base()
b.y        # -> 2
Base.x = 3
b.y        # -> 6
The work of property can be rewritten so that retrieving y from the class, instead of an instance, achieves the effect as well (this is still easier than the other approach).
If this will work for you somehow, I'd recommend doing it. If you need to cache y's value until x actually changes, that can be done with normal coding.
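For the class-level variant, one way to do that (a minimal sketch) is to move the property onto a metaclass, since attribute lookup on a class consults its metaclass:

class MetaBase(type):
    @property
    def y(cls):
        # computed on every class-level access of Base.y
        return cls.x * 2

class Base(metaclass=MetaBase):
    x = 1

assert Base.y == 2
Base.x = 3
assert Base.y == 6
# note: y then exists on the class only; instances would need a
# separate property to see it.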
2 - Exactly what you asked for, with a metaclass
As stated above, Python would need to know about the special status of your y attribute when calculating its expression 2 * x. At assignment time, it is already too late.
Fortunately, Python 3 allows class bodies to run in a custom namespace for attribute assignment, by implementing the __prepare__ method in a metaclass, then recording everything that takes place and replacing primitive attributes of interest with specially crafted objects implementing __mul__ and other special methods.
Going this way could even allow values to be eagerly calculated, so they can work as plain Python objects, but register information so that a special injector function could recreate the class redoing all the attributes that depend on expressions. It could also implement lazy evaluation, somewhat as described above.
from collections import UserDict
import operator

class Reactive:
    def __init__(self, value):
        self._initial_value = value
        self.values = {}

    def __set_name__(self, owner, name):
        self.name = name
        self.values[owner] = self._initial_value

    def __get__(self, instance, owner):
        return self.values[owner]

    def __set__(self, instance, value):
        raise AttributeError("value can't be set directly - call 'injector' to change this value")

    def value(self, cls=None):
        return self.values.get(cls, self._initial_value)

    op1 = value

    @property
    def result(self):
        return self.value

    # dynamically populate magic methods for operation overloading:
    for name in "mul add sub truediv pow contains".split():
        op = getattr(operator, name)
        locals()[f"__{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(self, other, operator)))(op)
        locals()[f"__r{name}__"] = (lambda operator: (lambda self, other: ReactiveExpr(other, self, operator)))(op)

class ReactiveExpr(Reactive):
    def __init__(self, value, op2, operator):
        self.op2 = op2
        self.operator = operator
        super().__init__(value)

    def result(self, cls):
        op1, op2 = self.op1(cls), self.op2
        if isinstance(op1, Reactive):
            op1 = op1.result(cls)
        if isinstance(op2, Reactive):
            op2 = op2.result(cls)
        return self.operator(op1, op2)

    def __get__(self, instance, owner):
        return self.result(owner)

class AuxDict(UserDict):
    def __init__(self, *args, _parent, **kwargs):
        self.parent = _parent
        super().__init__(*args, **kwargs)

    def __setitem__(self, item, value):
        if isinstance(value, self.parent.reacttypes) and not item.startswith("_"):
            value = Reactive(value)
        super().__setitem__(item, value)

class MetaReact(type):
    reacttypes = (int, float, str, bytes, list, tuple, dict)

    def __prepare__(*args, **kwargs):
        return AuxDict(_parent=__class__)

    def __new__(mcls, name, bases, ns, **kwargs):
        pre_registry = {}
        cls = super().__new__(mcls, name, bases, ns.data, **kwargs)
        #for name, obj in ns.items():
            #if isinstance(obj, ReactiveExpr):
                #pre_registry[name] = obj
                #setattr(cls, name, obj.result()
        for name, reactive in pre_registry.items():
            _registry[cls, name] = reactive
        return cls

def injector(cls, inplace=False, **kwargs):
    original = cls
    if not inplace:
        cls = type(cls.__name__, (cls.__bases__), dict(cls.__dict__))
    for name, attr in cls.__dict__.items():
        if isinstance(attr, Reactive):
            if isinstance(attr, ReactiveExpr) and name in kwargs:
                raise AttributeError("Expression attributes can't be modified by injector")
            attr.values[cls] = kwargs.get(name, attr.values[original])
    return cls

class Base(metaclass=MetaReact):
    x = 1
    y = 2 * x
And, after pasting the snippet above in a REPL, here is the
result of using injector:
In [97]: Base2 = injector(Base, x=5)
In [98]: Base2.y
Out[98]: 10
The idea is complicated by the fact that the Base class is declared with dependent, dynamically evaluated attributes. While we can inspect a class's static attributes, I think there's no way of getting the dynamic expression short of parsing the class's source code, finding and replacing the "injected" attribute name with its value, and exec/eval-ing the definition again. But that's the way you wanted to avoid (moreover, you expected injector to be unified for all classes).
If you want to proceed to rely on dynamically evaluated attributes, define the dependent attribute as a lambda function.
class Base:
    x = 1
    y = lambda: 2 * Base.x

Base.x = 2
print(Base.y())  # 4

Python classes, mappings, pprint, KeysView vs. dict_keys; to keys() or not to keys()?

I have a problem with my base class. I started writing it after finding an answer on this site about more informative __repr__() methods. I added to it after finding a different answer on this site about using pprint() with my own classes. I tinkered with it a little more after finding a third answer on this site about making my classes unpackable with a ** operator.
I modified it again after seeing in yet another answer on this site that there was a distinction between merely giving it __getitem__(), __iter__(), and __len__() methods on the one hand, and actually making it a fully-qualified mapping by subclassing collections.abc.Mapping on the other. Further, I saw that doing so would remove the need for writing my own keys() method, as the Mapping would take care of that.
So I got rid of keys(), and a class method broke.
The problem
I have a method that iterates through my class' keys and values to produce one big string formatted as I'd like it. That class looks like this.
class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))
Yes, that doesn't have the base class in it, but the MWE later on will account for that. The nut of it is that (key for key in self.keys()) part. When I have a keys() method written, I get the output I want.
def keys(self):
    """Get object attribute names."""
    return self.__dict__.keys()
When I remove that and go with the keys() method supplied by collections.abc.Mapping, I get no space between key and value.
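I suspect (my reading of the stdlib, not a confirmed answer) that the divergence comes from how Mapping builds its views: collections.abc.Mapping.keys() returns a KeysView that iterates the mapping's own __iter__, so an __iter__ that yields (key, value) pairs makes the "keys" two-element tuples, and len(max(..., key=len)) then measures tuple lengths (always 2) instead of attribute-name lengths. A minimal sketch:

import collections.abc

class Demo(collections.abc.Mapping):
    def __init__(self):
        self.foo = 1
        self.barbaz = 2
    def __getitem__(self, attr):
        return self.__dict__[attr]
    def __iter__(self):  # yields pairs, like BaseMap in the MWE below
        for attr in self.__dict__:
            yield attr, getattr(self, attr)
    def __len__(self):
        return len(self.__dict__)

print(list(Demo().keys()))  # [('foo', 1), ('barbaz', 2)], not ['foo', 'barbaz']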
The question
I can get the output I want by restoring the keys() method (and maybe adding values() and items() while I'm at it), but is that the best approach? Would it be better to go with the Mapping one and modify my class method to suit it? If so, how? Should I leave Mapping well enough alone until I know I need it?
This is my base class, to be copied all over creation and subclassed out the wazoo. I want to Get. It. Right.
There are already several considerations I can think of and many more of which I am wholly ignorant.
I use Python 3.9 and greater. I'll abandon 3.9 when conda does.
I want to keep my more-informative __repr__() methods.
I want pprint() to work, via the _dispatch table method with _format_dict_items().
I want to allow for duck typing my classes reliably.
I have not yet used type hinting, but I want to allow for using best practices there if I start.
Everything else I know nothing about.
The MWE
This has my problem class at the top and output stuff at the bottom. There are two series of classes building upon the previous ones.
The first are ever-more-inclusive base classes, and it is here that the difference between the instance with the keys() method and the one without is shown. The first class, BaseMap, subclasses Mapping and has the __getitem__(), __iter__(), and __len__() methods. The next class up the chain, BaseMapKeys, subclasses that and adds the keys() method.
The second group, MapObj and MapKeysObj, are subclasses of the problem class that also subclass those different base classes respectively.
OK, maybe the WE isn't so M, but lots of things got me to this point and I don't want to neglect any.
import collections.abc
from pprint import pprint, PrettyPrinter

TAB_WIDTH = 3

class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))

class Base(object):
    """Base class with more informative __repr__."""
    def __repr__(self):
        """Object representation."""
        params = (f'{key}={repr(value)}'
                  for key, value in self.__dict__.items())
        return f'{repr(self.__class__)}({", ".join(params)})'

class BaseMap(Base, collections.abc.Mapping):
    """Enable class to be pprint-able, unpacked with **."""
    def __getitem__(self, attr):
        """Get object attribute values."""
        return getattr(self.__dict__, attr)

    def __iter__(self):
        """Make object iterable."""
        for attr in self.__dict__.keys():
            yield attr, getattr(self, attr)

    def __len__(self):
        """Get length of object."""
        return len(self.__dict__)

class BaseMapKeys(BaseMap):
    """Overwrite KeysView output with what I thought it would be."""
    def keys(self):
        """Get object attribute names."""
        return self.__dict__.keys()

class MapObj(BaseMap, MyObj):
    """Problem class with collections.abc.Mapping."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

class MapKeysObj(BaseMapKeys, MyObj):
    """Problem class with collections.abc.Mapping and keys method."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

if isinstance(getattr(PrettyPrinter, '_dispatch'), dict):
    # assume the dispatch table method still works
    def pprint_basemap(printer, object, stream, indent, allowance, context,
                       level):
        """Implement pprint for subclasses of BaseMap class."""
        write = stream.write
        write(f'{object.__class__}(\n {indent * " "}')
        printer._format_dict_items(object, stream, indent, allowance + 1,
                                   context, level)
        write(f'\n{indent * " "})')

    map_classes = [MapObj, MapKeysObj]
    for map_class in map_classes:
        PrettyPrinter._dispatch[map_class.__repr__] = pprint_basemap

def print_stuff(map_obj):
    print('pprint object:')
    pprint(map_obj)
    print()
    print('print keys():')
    print(map_obj.keys())
    print()
    print('print list(keys()):')
    print(list(map_obj.keys()))
    print()
    print('print the problem method:')
    print(map_obj.the_problem_method())
    print('\n\n')

params = ['This is a really long line to force new line in pprint output', 2]
baz = MapObj(*params)
print_stuff(baz)

scoggs = MapKeysObj(*params)
print_stuff(scoggs)

Special method like __str__ that returns a number representation of an object

Say I have a Python class as follows:
class TestClass():
    value = 20

    def __str__(self):
        return str(self.value)
The __str__ method will automatically be called any time I try to use an instance of TestClass as a string, like in print. Is there any equivalent for treating it as a number? For example, in
an_object = TestClass()
if an_object > 30:
    ...
where some hypothetical __num__ function would be automatically called to interpret the object as a number. How could this be easily done?
Ideally I'd like to avoid overloading every normal mathematical operator.
You can provide __float__(), __int__(), and/or __complex__() methods to convert objects to numbers. There is also a __round__() method you can provide for custom rounding. Documentation here. The __bool__() method technically fits here too, since Booleans are a subclass of integers in Python.
While Python does implicitly convert objects to strings for e.g. print(), it never converts objects to numbers without you saying to. Thus, Foo() + 42 isn't valid just because Foo has an __int__ method. You have to explicitly use int() or float() or complex() on them. At least that way, you know what you're getting just by reading the code.
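A tiny illustration of that rule (Foo here is just a throwaway example class):

class Foo:
    def __int__(self):
        return 42

f = Foo()
print(int(f))   # 42 -- explicit conversion goes through __int__
try:
    f + 1       # but no implicit numeric conversion is attempted
except TypeError as e:
    print(e)    # unsupported operand type(s) for +: 'Foo' and 'int'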
To get classes to actually behave like numbers, you have to implement all the special methods for the operations that numbers participate in, including arithmetic and comparisons. As you note, this gets annoying. You can, however, write a mixin class so that at least you only have to write it once. Such as:
class NumberMixin(object):
    def __eq__(self, other): return self.__num__() == self.__getval__(other)
    # other comparison methods

    def __add__(self, other): return self.__num__() + self.__getval__(other)
    def __radd__(self, other): return self.__getval__(other) + self.__num__()
    # etc., I'm not going to write them all out, are you crazy?
This class expects two special methods on the class it's mixed in with.
__num__() - converts self to a number. Usually this will be an alias for the conversion method for the most precise type supported by the object. For example, your class might have __int__() and __float__() methods, but __int__() will truncate the number, so you assign __num__ = __float__ in your class definition. On the other hand, if your class has a natural integral value, you might want to provide __float__ so it can also be converted to a float, but you'd use __num__ = __int__ since it should behave like an integer.
__getval__() - a static method that obtains the numeric value from another object. This is useful when you want to be able to support operations with objects other than numeric types. For example, when comparing, you might want to be able to compare to objects of your own type, as well as to traditional numeric types. You can write __getval__() to fish out the right attribute or call the right method of those other objects. Of course with your own instances you can just rely on float() to do the right thing, but __getval__() lets you be as flexible as you like in what you accept.
A simple example class using this mixin:
class FauxFloat(NumberMixin):
    def __init__(self, value): self.value = float(value)
    def __int__(self): return int(self.value)
    def __float__(self): return float(self.value)
    def __round__(self, digits=0): return round(self.value, digits)
    def __str__(self): return str(self.value)
    __repr__ = __str__
    __num__ = __float__

    @staticmethod
    def __getval__(obj):
        if isinstance(obj, FauxFloat):
            return float(obj)
        if hasattr(type(obj), "__num__") and callable(type(obj).__num__):
            return type(obj).__num__(obj)  # don't call dunder method on instance
        try:
            return float(obj)
        except TypeError:
            return int(obj)
ff = FauxFloat(42)
print(ff + 13) # 55.0
For extra credit, you could register your class so it'll be seen as a subclass of an appropriate abstract base class:
import numbers
numbers.Real.register(FauxFloat)
issubclass(FauxFloat, numbers.Real) # True
For extra extra credit, you might also create a global num() function that calls __num__() on objects that have it, otherwise falling back to the older methods.
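Such a num() function might look like this (a sketch; num() and __num__ are this answer's own convention, not a standard protocol):

def num(obj):
    cls = type(obj)
    if hasattr(cls, '__num__') and callable(cls.__num__):
        return cls.__num__(obj)          # don't call the dunder on the instance
    for conv in (float, complex, int):   # fall back to the standard conversions
        try:
            return conv(obj)
        except (TypeError, ValueError):
            continue
    raise TypeError(f'cannot interpret {cls.__name__!r} as a number')

print(num(FauxFloat(42)))  # 42.0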
In the case of numbers it's a bit more complicated, but it's possible! You have to override your class's operators to fit your needs.
operator.__lt__(a, b)  # less than
operator.__le__(a, b)  # less than or equal
operator.__eq__(a, b)  # equal
operator.__ne__(a, b)  # not equal
operator.__ge__(a, b)  # greater than or equal
operator.__gt__(a, b)  # greater than
Python Operators
Looks like you need the __gt__ method.
class A:
    val = 0

    def __gt__(self, other):
        return self.val > other

a = A()
a.val = 12
a > 10
If you just want to cast an object to int, you should define the __int__ method (or __float__).

Calling classmethods through a dictionary

I'm working on a class describing an object that can be expressed in several "units", I'll say, to keep things simple. Let's say we're talking about length. (It's actually something more complicated.) What I would like is for the user to be able to input 1 and "inch", for example, and automatically get member variables in feet, meters, furlongs, what have you as well. I want the user to be able to input any of the units I am dealing in, and get member variables in all the other units. My thought was to do something like this:
class length:
    @classmethod
    def inch_to_foot(cls, inch):
        # etc.

    @classmethod
    def inch_to_meter(cls, inch):
        # etc.
I guess you get the idea. Then I would define a dictionary in the class:
from_to = {'inch': {'foot': inch_to_foot, 'meter': inch_to_meter, ...},
           'furlong': {'foot': furlong_to_foot, ...},
           # etc
           }
So then I think I can write an __init__ method
def __init__(self, num, unit):
    cls = self.__class__
    setattr(self, unit, num)
    for k in cls.from_to[unit].keys():
        setattr(self, k, cls.from_to[unit][k](num))
But no go. I get the error "class method not callable". Any ideas how I can make this work? Any ideas for scrapping the whole thing and trying a different approach? Thanks.
If you move the from_to variable into __init__ and modify it to something like:
cls.from_to={'inch':{'foot':cls.inch_to_foot,'meter':cls.inch_to_meter, }}
then I think it works as you expect.
Unfortunately I can't answer why, because I haven't used classmethods much myself, but I think it is something to do with bound vs unbound methods. Anyway, if you print the functions stored in from_to in your code vs the ones with my modification, you'll see they are different (mine are bound, yours are classmethod objects).
Hope that helps somewhat!
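For clarity, here's a minimal runnable version of that suggestion (my reconstruction, using the conversion methods from the question):

class length:
    @classmethod
    def inch_to_foot(cls, inch):
        return inch / 12.0

    @classmethod
    def inch_to_meter(cls, inch):
        return inch * 2.54 / 100

    def __init__(self, num, unit):
        cls = self.__class__
        # built inside __init__, so the classmethods are already bound
        cls.from_to = {'inch': {'foot': cls.inch_to_foot,
                                'meter': cls.inch_to_meter}}
        setattr(self, unit, num)
        for k in cls.from_to[unit]:
            setattr(self, k, cls.from_to[unit][k](num))

a = length(3, 'inch')
print(a.foot)   # 0.25
print(a.meter)  # 0.0762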
EDIT: I've thought about it a bit more; I think the problem is that you are storing a reference to the functions before they have been bound to the class (not surprising, as the binding happens once the rest of the class has been parsed). My advice would be to forget about storing a dictionary of function references, and instead store (in some representation of your choice) strings that indicate the units you can convert between. For instance you might choose a similar format, such as:
from_to = {'inch':['foot','meter']}
and then look up the functions during __init__ using getattr
E.G.:
class length:
    from_to = {'inch': ['foot', 'meter']}

    def __init__(self, num, unit):
        if unit not in self.from_to:
            raise RuntimeError('unit %s not supported' % unit)
        cls = self.__class__
        setattr(self, unit, num)
        for k in cls.from_to[unit]:
            f = getattr(cls, '%s_to_%s' % (unit, k))
            setattr(self, k, f(num))

    @classmethod
    def inch_to_foot(cls, inch):
        return inch/12.0

    @classmethod
    def inch_to_meter(cls, inch):
        return inch*2.54/100

a = length(3, 'inch')  # note: must match the 'inch' key in from_to
print a.meter
print a.foot
print length.inch_to_foot(3)
I don't think doing it with an __init__() method would be a good idea. I once saw an interesting way to do it in the Overriding the __new__ method section of the classic document titled Unifying types and classes in Python 2.2 by Guido van Rossum.
Here are some examples:
class inch_to_foot(float):
    "Convert from inch to feet"
    def __new__(cls, arg=0.0):
        return float.__new__(cls, float(arg)/12)

class inch_to_meter(float):
    "Convert from inch to meter"
    def __new__(cls, arg=0.0):
        return float.__new__(cls, arg*0.0254)

print inch_to_meter(5)  # 0.127
Here's a completely different answer that uses a metaclass and requires the conversion functions to be staticmethods rather than classmethods -- which it turns into properties based on the target unit's name. It searches for the names of any conversion functions itself, eliminating the need to manually define from_to-style tables.
One thing about this approach is that the conversion functions aren't even called unless indirect references are made to the units associated with them. Another is that they're dynamic, in the sense that the results returned will reflect the current value of the instance (unlike instances of three_pineapples' length class, which stores the results of calling them on the numeric value of the instance when it is initially constructed).
You've never said what version of Python you're using, so the following code is for Python 2.2 - 2.x.
import re

class MetaUnit(type):
    def __new__(metaclass, classname, bases, classdict):
        cls = type.__new__(metaclass, classname, bases, classdict)
        # add a constructor
        setattr(cls, '__init__',
                lambda self, value=0: setattr(self, '_value', value))
        # add a property for getting and setting the underlying value
        setattr(cls, 'value',
                property(lambda self: self._value,
                         lambda self, value: setattr(self, '_value', value)))
        # add an identity property that just returns the value unchanged
        unitname = classname.lower()  # lowercase classname becomes name of unit
        setattr(cls, unitname, property(lambda self: self._value))
        # find conversion methods and create properties that use them
        matcher = re.compile(unitname + r'''_to_(?P<target_unitname>\w+)''')
        for name in cls.__dict__.keys():
            match = matcher.match(name)
            if match:
                target_unitname = match.group('target_unitname').lower()
                fget = (lambda self, conversion_method=getattr(cls, name):
                        conversion_method(self._value))
                setattr(cls, target_unitname, property(fget))
        return cls
return cls
Sample usage:
scalar_conversion_staticmethod = (
    lambda scale_factor: staticmethod(lambda value: value * scale_factor))

class Inch(object):
    __metaclass__ = MetaUnit
    inch_to_foot = scalar_conversion_staticmethod(1./12.)
    inch_to_meter = scalar_conversion_staticmethod(0.0254)

a = Inch(3)
print a.inch   # 3
print a.meter  # 0.0762
print a.foot   # 0.25

a.value = 6
print a.inch   # 6
print a.meter  # 0.1524
print a.foot   # 0.5

Reuse existing objects for immutable objects?

In Python, how is it possible to reuse existing equal immutable objects (like is done for str)? Can this be done just by defining a __hash__ method, or does it require more complicated measures?
If you want to create via the class constructor and have it return a previously created object, then you will need to provide a __new__ method (because by the time you get to __init__, the object has already been created).
Here is a simple example - if the value used to initialise has been seen before then a previously created object is returned rather than a new one created:
class Cached(object):
    """Simple example of immutable object reuse."""
    def __init__(self, i):
        self.i = i

    def __new__(cls, i, _cache={}):
        try:
            return _cache[i]
        except KeyError:
            # you must call __new__ on the base class
            x = super(Cached, cls).__new__(cls)
            x.__init__(i)
            _cache[i] = x
            return x
Note that for this example you can use anything to initialise as long as it's hashable. And just to show that objects really are being reused:
>>> a = Cached(100)
>>> b = Cached(200)
>>> c = Cached(100)
>>> a is b
False
>>> a is c
True
There are two 'software engineering' solutions to this that don't require any low-level knowledge of Python. They apply in the following scenarios:
First scenario: Objects of your class are 'equal' if they are constructed with the same constructor parameters, and equality won't change over time after construction. Solution: use a factory that hashes the constructor parameters:
class MyClass:
    def __init__(self, someint, someotherint):
        self.a = someint
        self.b = someotherint

cachedict = {}

def construct_myobject(someint, someotherint):
    if (someint, someotherint) not in cachedict:
        cachedict[(someint, someotherint)] = MyClass(someint, someotherint)
    return cachedict[(someint, someotherint)]
This approach essentially limits the instances of your class to one unique object per distinct input pair. There are obvious drawbacks as well: not all types are easily hashable and so on.
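As an aside, functools.lru_cache can stand in for the hand-rolled cache dict as a drop-in alternative to construct_myobject above (under the same assumption that the constructor parameters are hashable):

import functools

@functools.lru_cache(maxsize=None)
def construct_myobject(someint, someotherint):
    return MyClass(someint, someotherint)

assert construct_myobject(1, 2) is construct_myobject(1, 2)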
Second Scenario: Objects of your class are mutable and their 'equality' may change over time. Solution: define a class-level registry of equal instances:
class MyClass:
    registry = {}

    def __init__(self, someint, someotherint, third):
        MyClass.registry[id(self)] = (someint, someotherint)
        self.someint = someint
        self.someotherint = someotherint
        self.third = third

    def __eq__(self, other):
        return MyClass.registry[id(self)] == MyClass.registry[id(other)]

    def update(self, someint, someotherint):
        MyClass.registry[id(self)] = (someint, someotherint)
In this example, objects with the same someint, someotherint pair are equal, while the third parameter does not factor in. The trick is to keep the parameters in registry in sync. As an alternative to update, you could override __getattr__ and __setattr__ for your class instead; this would ensure that any assignment foo.someint = y is kept in sync with your class-level dictionary. See an example here.
I believe you would have to keep a dict {args: object} of instances already created, then override the class' __new__ method to check in that dictionary, and return the relevant object if it already existed. Note that I haven't implemented or tested this idea. Of course, strings are handled at the C level.
