I have a situation where I need to restrict the user to one of a fixed set of functions, passed in as an argument to another function.
I really want to achieve something like the following:
from enum import Enum

# Trivial function 1
def functionA():
    pass

# Trivial function 2
def functionB():
    pass

# This is not allowed (as far as I can tell, the values should be integers)
# But it is pseudocode for what I am after
class AvailableFunctions(Enum):
    OptionA = functionA
    OptionB = functionB
So the following can be executed:
def myUserFunction(theFunction=AvailableFunctions.OptionA):
    # Type check
    assert isinstance(theFunction, AvailableFunctions)
    # Execute the actual function held as value in the enum (or equivalent)
    return theFunction.value()
Your assumption is wrong: values can be arbitrary; they are not limited to integers. From the documentation:
The examples above use integers for enumeration values. Using integers
is short and handy (and provided by default by the Functional API),
but not strictly enforced. In the vast majority of use-cases, one
doesn’t care what the actual value of an enumeration is. But if the
value is important, enumerations can have arbitrary values.
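For instance, plain string values work with no special handling (a quick illustration; the names here are arbitrary):

from enum import Enum

class Color(Enum):
    RED = "red"
    GREEN = "green"

print(Color.RED.value)  # red
print(list(Color))      # [<Color.RED: 'red'>, <Color.GREEN: 'green'>]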
However, the issue with functions is that they are considered method definitions instead of attributes!
In [1]: from enum import Enum

In [2]: def f(self, *args):
   ...:     pass
   ...:

In [3]: class MyEnum(Enum):
   ...:     a = f
   ...:     def b(self, *args):
   ...:         print(self, args)
   ...:

In [4]: list(MyEnum)  # it has no values
Out[4]: []

In [5]: MyEnum.a
Out[5]: <function __main__.f>

In [6]: MyEnum.b
Out[6]: <function __main__.MyEnum.b>
You can work around this by using a wrapper class, or just functools.partial, or (in Python 2 only) staticmethod:
from functools import partial

class MyEnum(Enum):
    OptionA = partial(functionA)
    OptionB = staticmethod(functionB)
Sample run:
In [7]: from functools import partial

In [8]: class MyEnum2(Enum):
   ...:     a = partial(f)
   ...:     def b(self, *args):
   ...:         print(self, args)
   ...:

In [9]: list(MyEnum2)
Out[9]: [<MyEnum2.a: functools.partial(<function f at 0x7f4130f9aae8>)>]

In [10]: MyEnum2.a
Out[10]: <MyEnum2.a: functools.partial(<function f at 0x7f4130f9aae8>)>
Or using a wrapper class:
In [13]: class Wrapper:
    ...:     def __init__(self, f):
    ...:         self.f = f
    ...:     def __call__(self, *args, **kwargs):
    ...:         return self.f(*args, **kwargs)
    ...:

In [14]: class MyEnum3(Enum):
    ...:     a = Wrapper(f)
    ...:

In [15]: list(MyEnum3)
Out[15]: [<MyEnum3.a: <__main__.Wrapper object at 0x7f413075b358>>]
Also note that, if you want, you can define the __call__ method in your enumeration class to make the values callable:
In [1]: from enum import Enum

In [2]: from functools import partial

In [3]: def f(*args):
   ...:     print(args)
   ...:

In [4]: class MyEnum(Enum):
   ...:     a = partial(f)
   ...:     def __call__(self, *args):
   ...:         self.value(*args)
   ...:

In [5]: MyEnum.a(1, 2, 3)  # no need for MyEnum.a.value(1, 2, 3)
(1, 2, 3)
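Putting the pieces together for the question's original use case, a minimal sketch (the functionA/functionB stubs are adapted to return values so the calls show output):

from enum import Enum
from functools import partial

def functionA():
    return "A"

def functionB():
    return "B"

class AvailableFunctions(Enum):
    OptionA = partial(functionA)
    OptionB = partial(functionB)

    def __call__(self, *args, **kwargs):
        # Delegate to the function stored as the member's value
        return self.value(*args, **kwargs)

def myUserFunction(theFunction=AvailableFunctions.OptionA):
    assert isinstance(theFunction, AvailableFunctions)
    return theFunction()

print(myUserFunction())                            # A
print(myUserFunction(AvailableFunctions.OptionB))  # B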
Since Python 3.11 there is a much more concise and understandable way: the member and nonmember functions were added to enum among other improvements, so you can now do the following:
from enum import Enum, member

def fn(x):
    print(x)

class MyEnum(Enum):
    meth = fn
    mem = member(fn)

    @classmethod
    def this_is_a_method(cls):
        print('No, still not a member')

    def this_is_just_function():
        print('No, not a member')

    @member
    def this_is_a_member(x):
        print('Now a member!', x)
And now:
>>> list(MyEnum)
[<MyEnum.mem: <function fn at ...>>, <MyEnum.this_is_a_member: <function MyEnum.this_is_a_member at ...>>]
>>> MyEnum.meth(1)
1
>>> MyEnum.mem(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'MyEnum' object is not callable
>>> MyEnum.mem.value(1)
1
>>> MyEnum.this_is_a_method()
No, still not a member
>>> MyEnum.this_is_just_function()
No, not a member
>>> MyEnum.this_is_a_member()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'MyEnum' object is not callable
>>> MyEnum.this_is_a_member.value(1)
Now a member! 1
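If you are on 3.11+ and want the members themselves to be callable, you can combine member with a __call__ on the enum class, in the same spirit as the earlier partial example. A minimal sketch under that assumption (fn doubles its argument here just so the call shows a result):

from enum import Enum, member

def fn(x):
    return x * 2

class MyEnum(Enum):
    mem = member(fn)

    def __call__(self, *args, **kwargs):
        # Forward calls on the member to the function stored as its value
        return self.value(*args, **kwargs)

print(MyEnum.mem(3))  # 6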
Another less clunky solution is to put the functions in a tuple. As Bakuriu mentioned, you may want to make the enum callable.
from enum import Enum

def functionA():
    pass

def functionB():
    pass

class AvailableFunctions(Enum):
    OptionA = (functionA,)
    OptionB = (functionB,)

    def __call__(self, *args, **kwargs):
        return self.value[0](*args, **kwargs)
Now you can use it like this:
AvailableFunctions.OptionA() # calls functionA
In addition to Bakuriu's answer: if you use the wrapper approach as above, you lose information about the original function, such as __name__, __repr__, and so on, after wrapping it. This will cause problems, for example, if you want to use Sphinx to generate source code documentation. Therefore, add the following to your wrapper class.
import functools

class wrapper:
    def __init__(self, function):
        self.function = function
        functools.update_wrapper(self, function)

    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)

    def __repr__(self):
        return self.function.__repr__()
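A quick check that the metadata now survives (assuming the wrapper class just shown):

def documented():
    """This docstring survives wrapping."""

w = wrapper(documented)
print(w.__name__)  # documented
print(w.__doc__)   # This docstring survives wrapping.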
Building on top of @Bakuriu's approach, I just want to highlight that we can also use dictionaries of multiple functions as values and get broader polymorphism, similar to enums in Java. Here is a fictitious example to show what I mean:
from enum import Enum, unique

@unique
class MyEnum(Enum):
    test = {'execute': lambda o: o.test()}
    prod = {'execute': lambda o: o.prod()}

    def __getattr__(self, name):
        if name in self.__dict__:
            return self.__dict__[name]
        elif not name.startswith("_"):
            value = self.__dict__['_value_']
            return value[name]
        raise AttributeError(name)
class Executor:
    def __init__(self, mode: MyEnum):
        self.mode = mode

    def test(self):
        print('test run')

    def prod(self):
        print('prod run')

    def execute(self):
        self.mode.execute(self)

Executor(MyEnum.test).execute()
Executor(MyEnum.prod).execute()
Obviously, the dictionary approach provides no additional benefit when there is only a single function, so use it when there are multiple functions. Ensure that the keys are uniform across all values, as otherwise the usage won't be polymorphic.
The __getattr__ method is optional; it is only there for syntactic sugar (i.e., without it, mode.execute() would become mode.value['execute']()).
Since dictionaries can't be made read-only, using a namedtuple would be better and requires only minor changes to the above.
from enum import Enum, unique
from collections import namedtuple

EnumType = namedtuple("EnumType", "execute")

@unique
class MyEnum(Enum):
    test = EnumType(lambda o: o.test())
    prod = EnumType(lambda o: o.prod())

    def __getattr__(self, name):
        if name in self.__dict__:
            return self.__dict__[name]
        elif not name.startswith("_"):
            value = self.__dict__['_value_']
            return getattr(value, name)
        raise AttributeError(name)
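Usage stays the same as with the dictionary version, so the Executor class above works unchanged; the practical difference is that the value is now read-only:

Executor(MyEnum.test).execute()   # test run
Executor(MyEnum.prod).execute()   # prod run
# MyEnum.test.value.execute = ... would raise AttributeError, unlike the dict version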
Problem
Given the following code (an arbitrary decorator which just assigns data to some callable):
from typing import Callable

def decorator() -> Callable:
    def wrapper(f: Callable) -> Callable:
        f.foo = "bar"
        return f
    return wrapper

@decorator()
def func(a: int, b: int) -> int:
    return a + b
If I attempt the following, I get the expected behaviour.
>>> func.foo
'bar'
>>> func(1, 2)
3
If however I want to use functools.partial, the f.foo = "bar" relationship is "broken"
>>> from functools import partial
>>> p = partial(func, a=1)
>>> p(b=2)
3
# As expected, p.foo raises an attribute error
>>> p.foo
AttributeError: 'partial' object has no attribute 'foo'
I realise I can access this via p.func.foo. But I was curious if there was a better way to re-establish the relationship when using partial.
Attempted solution
The following workaround seems to "work", but I was curious if there is a better approach. Overriding __new__ is odd, so I opted to override __init__:
class new_partial(partial):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        for k, v in self.func.__dict__.items():
            setattr(self, k, v)
Given the above, the following now works:
>>> p = new_partial(func, a=1)
>>> p(b=2)
3
>>> p.foo
'bar'
If you plan ahead, you can decorate the partial'd version of the function separately:
def func(a: int, b: int) -> int:
    return a + b

p = decorator()(partial(func, 1))
func = decorator()(func)
You could also just decorate the partial anyway, if you don't mind the existence of a separate p.foo and func.foo.
For decorators that create a new function (rather than setting attributes on the original), you can use functools.wraps so that the new function remembers its original in the __wrapped__ attribute, and then partial and re-decorate that:
from functools import partial, wraps

def get_name(func):
    try:
        return func.__name__
    except AttributeError:  # handle a partial
        return 'a partial application of ' + get_name(func.func)

def decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"this is a decorated version of {get_name(func)}")
        return func(*args, **kwargs)
    return wrapper

@decorator
def decorated_sum(x, y):
    return x + y

increment = decorator(partial(decorated_sum.__wrapped__, 1))
You could potentially even make a helper function to look for a __wrapped__ attribute, but you'd need to build your own way to figure out which decorators to apply.
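A minimal sketch of such a helper, under the assumptions that the decorator used functools.wraps (so __wrapped__ exists) and that you pass the decorator to apply explicitly; the name repartial is made up for illustration:

from functools import partial

def repartial(decorator, func, *args, **kwargs):
    # Recover the undecorated function if there is one, partially apply it,
    # and run the result back through the given decorator.
    original = getattr(func, "__wrapped__", func)
    return decorator(partial(original, *args, **kwargs))

increment = repartial(decorator, decorated_sum, 1)  # decorator/decorated_sum from above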
I have a class with a __getitem__() method, which makes it subscriptable like a dictionary. However, when I try to unpack it into str.format(), I get a TypeError. How can I use a class in Python with the format() function?
>>> class C(object):
...     id = int()
...     name = str()
...     def __init__(self, id, name):
...         self.id = id
...         self.name = name
...     def __getitem__(self, key):
...         return getattr(self, key)
...
>>> d = dict(id=1, name='xyz')
>>> c = C(id=1, name='xyz')
>>>
>>> # Subscription works for both objects
>>> print(d['id'])
1
>>> print(c['id'])
1
>>>
>>> s = '{id} {name}'
>>> # format() only works on dict()
>>> print(s.format(**d))
1 xyz
>>> print(s.format(**c))
Traceback (most recent call last):
  File "<pyshell#13>", line 1, in <module>
    print(s.format(**c))
TypeError: format() argument after ** must be a mapping, not C
As some of the comments mention, you could inherit from dict. The reason it doesn't work as-is is that:
If the syntax **expression appears in the function call, the expression must evaluate to a mapping, the contents of which are treated as additional keyword arguments. In the case of a keyword appearing in both expression and as an explicit keyword argument, a TypeError exception is raised.
For it to work you need to implement the Mapping ABC. Something along the lines of this:
from collections.abc import Mapping

class C(Mapping):
    id = int()
    name = str()

    def __init__(self, id, name):
        self.id = id
        self.name = name

    def __iter__(self):
        for x in self.__dict__.keys():
            yield x

    def __len__(self):
        return len(self.__dict__)

    def __getitem__(self, key):
        return self.__dict__[key]
This way you should be able to use s = '{id}{name}'.format(**c) rather than s = '{id}{name}'.format(**c.__dict__).
You can also use MutableMapping from the collections.abc module if you want to be able to change your class's attributes like in a dictionary. MutableMapping additionally requires implementations of __setitem__ and __delitem__.
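A sketch of that mutable variant (the class name MutableC is mine):

from collections.abc import MutableMapping

class MutableC(MutableMapping):
    def __init__(self, id, name):
        self.id = id
        self.name = name

    def __iter__(self):
        return iter(self.__dict__)

    def __len__(self):
        return len(self.__dict__)

    def __getitem__(self, key):
        return self.__dict__[key]

    def __setitem__(self, key, value):
        self.__dict__[key] = value

    def __delitem__(self, key):
        del self.__dict__[key]

c = MutableC(id=1, name='xyz')
c['name'] = 'abc'                 # dict-style assignment now works
print('{id} {name}'.format(**c))  # 1 abc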
Let's say I have an Entity class:
class Entity(dict):
    def save(self):
        ...
I can wrap a dict object with Entity(dict_obj).
But is it possible to create a class that can wrap any type of object, e.g. int, list etc.?
PS: I have come up with the following workaround. It doesn't work on more complex objects, but it seems to work with basic ones. I'm completely unsure if there are any gotchas, and it might get penalised on efficiency by creating the class every time; please let me know:
class EntityMixin(object):
    def save(self):
        ...

def get_entity(obj):
    class Entity(obj.__class__, EntityMixin):
        pass
    return Entity(obj)
Usage:
>>> a = get_entity(1)
>>> a + 1
2
>>> b = get_entity('b')
>>> b.upper()
'B'
>>> c = get_entity([1,2])
>>> len(c)
2
>>> d = get_entity({'a':1})
>>> d['a']
1
>>> d = get_entity(map(lambda x : x, [1,2]))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/jlin/projects/django-rest-framework-queryset/rest_framework_queryset/entity.py", line 11, in get_entity
    return Entity(obj)
TypeError: map() must have at least two arguments.
Improve efficiency:
EntityClsCache = {}

class EntityMixin(object):
    def save(self):
        ...

def _get_entity_cls(obj):
    class Entity(obj.__class__, EntityMixin):
        pass
    return Entity

def get_entity(obj):
    cls = None
    try:
        cls = EntityClsCache[obj.__class__]
    except KeyError:  # a missing cache entry raises KeyError, not AttributeError
        cls = _get_entity_cls(obj)
        EntityClsCache[obj.__class__] = cls
    return cls(obj)
The solution you propose looks elegant, but it lacks caching: you'll construct a unique class every time get_entity() is called, even if the types are all the same.
Python has metaclasses, which act as class factories. Given that a metaclass's methods override those of the class, not the instance, we can implement class caching:
class EntityMixin(object):
    pass

class CachingFactory(type):
    __registry__ = {}

    # Instead of declaring an inner class,
    # we can also return type("Wrapper", (type_, EntityMixin), {}) right away,
    # which, however, looks more obscure
    def __makeclass(cls, type_):
        class Wrapper(type_, EntityMixin):
            pass
        return Wrapper

    # This is the simplest form of caching; for a more realistic and less
    # error-prone example, better use a more unique/complex key, for example,
    # the tuple of `value`'s ancestors -- you can obtain them via type(value).__mro__
    def __call__(cls, value):
        t = type(value)
        typename = t.__name__
        if typename not in cls.__registry__:
            cls.__registry__[typename] = cls.__makeclass(t)
        return cls.__registry__[typename](value)

class Factory(object):
    __metaclass__ = CachingFactory
This way, Factory(1) performs Factory.__call__(1), which is CachingFactory.__call__(1) (without metaclass, that'd be a constructor call instead, which would result in a class instance -- but we want to make a class first and only then instantiate it).
We can ensure that the objects created by Factory are the instances of the same class, which is crafted specifically for them at the first time:
>>> type(Factory(map(lambda x: x, [1, 2]))) is type(Factory([1]))
True
>>> type(Factory("a")) is type(Factory("abc"))
True
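Note that __metaclass__ = CachingFactory is the Python 2 spelling; under Python 3 that attribute is silently ignored. In Python 3 the same idea is written with the metaclass keyword (a sketch, reusing CachingFactory from above):

# Python 3 spelling -- the metaclass keyword replaces __metaclass__
class Factory(metaclass=CachingFactory):
    pass

print(type(Factory([1, 2])) is type(Factory([3])))  # True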
In [1]: class Foo():
   ...:     pass
   ...:

In [2]: class Qux():
   ...:     def __init__(self):
   ...:         item = Foo()
   ...:

In [3]: a = Foo()

In [4]: setattr(a, 'superpower', 'strength')

In [5]: a.superpower
Out[5]: 'strength'

In [6]: b = Qux()

In [7]: b.item = a

In [8]: b.superpower
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-8-cf0e287006f1> in <module>()
----> 1 b.superpower

AttributeError: Qux instance has no attribute 'superpower'
What I would like is to define some way of calling any attribute on Qux and have it return getattr(Qux.item, <attributename>). In other words, to have b.superpower work without explicitly defining:
@property
def superpower(self):
    return getattr(self.item, 'superpower')
I don't want to lose access to any properties defined on Qux itself as well, but rather to expose properties defined on Foo if they are not also on Qux.
Define a __getattr__:
class Qux(Foo):
    def __init__(self):
        self.item = Foo()

    def __getattr__(self, attr):
        return getattr(self.item, attr)
__getattr__ gets called whenever someone tries to look up an attribute of the object, but fails through normal means.
It has an evil twin called __getattribute__, which always gets called and must be used with extreme caution.
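To see exactly when __getattr__ fires, a tiny illustration (class and attribute names are arbitrary):

class Demo:
    x = 1

    def __getattr__(self, name):
        # Reached only when normal attribute lookup fails
        return "fallback for " + name

d = Demo()
print(d.x)  # 1 -- found normally, __getattr__ never runs
print(d.y)  # fallback for y -- normal lookup failed, so __getattr__ was called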
You do that by defining __getattr__, not a property. For any attribute that cannot be found through the standard protocol, Python will call the __getattr__ method of the class.
Moreover, to store the item you have to assign it to self.item, otherwise it is thrown away at the end of Qux.__init__.
Finally, inheriting from Foo seems unnecessary in that case.
class Foo:
    def __init__(self, superpower):
        self.superpower = superpower

class Qux:
    def __init__(self, foo_item):
        self.item = foo_item

    def __getattr__(self, name):
        return getattr(self.item, name)
Example
f = Foo('strength')
q = Qux(f)
print(q.superpower)  # 'strength'
Inheritance
It seems you half-tried to implement this with inheritance, though. If your intent was to extend Qux's behaviour with Foo, then inheritance would be the way to go.
class Foo:
    def __init__(self, superpower):
        self.superpower = superpower

class Qux(Foo):
    def __getattr__(self, name):
        return getattr(self.item, name)
Example
q = Qux('strength')
print(q.superpower)  # 'strength'
I have this method in a class:
def do_exit(self):
    # some task
I want to assign a bunch of other methods to do_exit, so currently I'm doing this:
do_quit = do_exit
do_stop = do_exit
do_finish = do_exit
do_complete = do_exit
do_leave = do_exit
This works fine but I'm wondering if there's a better way, especially if I'm going to be doing this a lot.
You might consider making a dictionary to hold your methods. With a defaultdict, you can ensure that do_exit is called whenever nothing else was slotted in for a particular function name. On the other hand, this is not very safe or validated against, e.g., spelling errors:
from collections import defaultdict

method_dict = defaultdict(lambda: do_exit)

# Try this
method_dict["do_quit"]()
Within a class, you could also override __getattr__ if you'd like. Say, just guessing, that all of these kinds of methods begin with do or else maybe the condition is that they end with some synonym of complete. You could give the class a class attribute that holds the appropriate convention items and checks for them, and looks them up in method_dict as needed.
from collections import defaultdict

class Foo(object):
    QUIT_WORDS = ['exit', 'quit', 'stop', 'finish', 'complete', 'leave']

    def __init__(self):
        self.method_dict = defaultdict(lambda: self.do_exit)

    def __getattr__(self, attr):
        if any(attr.endswith("_{}".format(x)) for x in self.QUIT_WORDS):
            return self.method_dict[attr]
        else:
            return super(Foo, self).__getattribute__(attr)

    def do_exit(self):
        print("Exit!")
For example:
In [88]: f = Foo()

In [89]: f.do_quit()
Exit!

In [90]: f.do_exit()
Exit!

In [91]: f.do_go_bye_bye()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-91-2584940dee36> in <module>()
----> 1 f.do_go_bye_bye()

<ipython-input-87-3b0db0bf6a47> in __getattr__(self, attr)
     11             return self.method_dict[attr]
     12         else:
---> 13             return super(Foo, self).__getattribute__(attr)

AttributeError: 'Foo' object has no attribute 'do_go_bye_bye'
Your code is actually dangerous: if you subclass your initial class, you'll get unexpected behavior. Consider the following:
class Foo(object):
    def m1(self):
        print("Hello!")
    m2 = m1

class Bar(Foo):
    def m1(self):
        print("World!")

# prints "World!", as expected
Bar().m1()
# prints "Hello!", because Bar.m2 is Foo.m2 is Foo.m1, *not* Bar.m1
Bar().m2()
Unfortunately, the only simple solution to your use case that doesn't break inheritance is to define each method manually, with a def foo(self): return self.bar() type of construct.
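A sketch of that inheritance-safe pattern, reusing the Foo/Bar names from above:

class Foo(object):
    def m1(self):
        print("Hello!")

    def m2(self):
        # Explicit delegation: the lookup happens on self, so overrides win
        return self.m1()

class Bar(Foo):
    def m1(self):
        print("World!")

Bar().m1()  # World!
Bar().m2()  # World! -- now the subclass override is honored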
You can do:
do_quit = do_stop = do_finish = do_complete = do_leave = do_exit