Example:
class Planet(Enum):
    MERCURY = (mass: 3.303e+23, radius: 2.4397e6)   # desired syntax -- not actually valid Python
    def __init__(self, mass, radius):
        self.mass = mass        # in kilograms
        self.radius = radius    # in meters
Ref: https://docs.python.org/3/library/enum.html#planet
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
While you can't use named arguments the way you describe with enums, you can get a similar effect with a namedtuple mixin:
from collections import namedtuple
from enum import Enum

Body = namedtuple("Body", ["mass", "radius"])

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS   = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH   = Body(mass=5.976e+24, radius=3.3972e6)
    # ... etc.
... which to my mind is cleaner, since you don't have to write an __init__ method.
Example use:
>>> Planet.MERCURY
<Planet.MERCURY: Body(mass=3.303e+23, radius=2439700.0)>
>>> Planet.EARTH.mass
5.976e+24
>>> Planet.VENUS.radius
6051800.0
Note that, as per the docs, "mix-in types must appear before Enum itself in the sequence of bases".
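Getting the order wrong fails loudly; a minimal sketch of the failure mode (the exact error message varies between Python versions, but it is a TypeError raised at class-creation time):

class Planet(Enum, Body):  # wrong order: raises TypeError when the class is created
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)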
The accepted answer by @zero-piraeus can be slightly extended to allow default arguments as well. This is very handy when you have a large enum where most entries have the same value for one element.
from collections import namedtuple
from enum import Enum

class Body(namedtuple('Body', "mass radius moons")):
    def __new__(cls, mass, radius, moons=0):
        return super().__new__(cls, mass, radius, moons)

    def __getnewargs__(self):
        return (self.mass, self.radius, self.moons)

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS   = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH   = Body(5.976e+24, 3.3972e6, moons=1)
Beware: pickling will not work without the __getnewargs__ override.
class Foo:
    def __init__(self):
        self.planet = Planet.EARTH  # pickle error in deepcopy

from copy import deepcopy

f1 = Foo()
f2 = deepcopy(f1)  # pickle error here
For Python 3.6.1+ the typing.NamedTuple can be used, which also allows setting default values, leading to prettier code. The example by @shao.lo then looks like this:
from enum import Enum
from typing import NamedTuple

class Body(NamedTuple):
    mass: float
    radius: float
    moons: int = 0

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)
This also supports pickling. The typing.Any can be used if you don't want to specify the type.
Credit to @monk-time, whose answer here inspired this solution.
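A quick check of the pickling claim (a minimal sketch; Enum members pickle by name, so the round-trip yields the identical member):

import pickle

assert pickle.loads(pickle.dumps(Planet.EARTH)) is Planet.EARTH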
If you're going beyond the namedtuple mix-in, check out the aenum library¹. Besides having a few extra bells and whistles for Enum, it also supports NamedConstant and a metaclass-based NamedTuple.
Using aenum.Enum the above code could look like:
from aenum import Enum, enum, _reduce_ex_by_name

class Planet(Enum, init='mass radius'):
    MERCURY = enum(mass=3.303e+23, radius=2.4397e6)
    VENUS   = enum(mass=4.869e+24, radius=6.0518e6)
    EARTH   = enum(mass=5.976e+24, radius=3.3972e6)

    # replace __reduce_ex__ so pickling works
    __reduce_ex__ = _reduce_ex_by_name
and in use:
--> for p in Planet:
...     print(repr(p))
<Planet.MERCURY: enum(radius=2439700.0, mass=3.3030000000000001e+23)>
<Planet.EARTH: enum(radius=3397200.0, mass=5.9760000000000004e+24)>
<Planet.VENUS: enum(radius=6051800.0, mass=4.8690000000000001e+24)>
--> print(Planet.VENUS.mass)
4.869e+24
¹ Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
I'm building an API that deals with serializing data, and I'd prefer to add support for as much static analysis as possible. I'm inspired by Django's Meta pattern for declaring metadata on a class, and use an internal library similar to pydantic for inspecting type annotations for serialization.
I'd like the API to work something like:
class Base:
    Data: type

    def __init__(self):
        self.data = self.Data()

class Derived(Base):
    class Data:
        # Actually, more like a pydantic model.
        # Used for serialization.
        attr: str = None

obj = Derived()
type(obj.data)  # Derived.Data
obj.data.attr = 'content'
This works well and is readable; however, it doesn't seem to support static analysis at all. How do I annotate self.data in Base so that I have proper type information on obj?
reveal_type(obj.data) # Derived.Data
reveal_type(obj.data.attr) # str
obj.data.attr = 7 # Should be static error
obj.data.other = 7 # Should be static error
I might write self.data: typing.Self.Data but this obviously doesn't work.
I was able to get something close with typing.Generic and forward references:
import typing

T = typing.TypeVar('T')

class Base(typing.Generic[T]):
    Data: type[T]

    def __init__(self):
        self.data: T = self.Data()

class Derived(Base['Derived.Data']):
    class Data:
        attr: str = None
But it's not DRY and it doesn't enforce that the annotation and runtime type actually match. For example:
class Derived(Base[SomeOtherType]):
    class Data:  # Should be static error
        attr: str = None

type(obj.data)         # Derived.Data
reveal_type(obj.data)  # SomeOtherType
I could also require the derived class provide an annotation for data, but this suffers similar issues as typing.Generic.
class Derived(Base):
    data: SomeOtherClass  # should be 'Data'

    class Data:  # should be a static error
        attr: str = None
To attempt to fix this I tried writing some validation logic in __init_subclass__ to ensure T matches cls.data; however this is brittle and doesn't work in all cases. It also forbids creating any abstract derived class which doesn't define Data.
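For illustration, the kind of __init_subclass__ validation described above might look roughly like this (a hypothetical sketch, not the question's actual code; note how it rejects abstract subclasses that define no Data):

import typing

T = typing.TypeVar('T')

class Base(typing.Generic[T]):
    Data: type

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Brittle: an abstract subclass without its own Data is rejected,
        # and the type argument T is not reliably inspectable here.
        if not isinstance(cls.__dict__.get('Data'), type):
            raise TypeError(f'{cls.__name__} must define an inner Data class')

    def __init__(self):
        self.data: T = self.Data()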
This is actually non-trivial because you run into the classic problem of wanting to dynamically create types, while simultaneously having static type checkers understand them. An obvious contradiction in terms.
Quick Pydantic digression
Since you mentioned Pydantic, I'll pick up on it. The way they solve it, greatly simplified, is by never actually instantiating the inner Config class. Instead, the __config__ attribute is set on your class, whenever you subclass BaseModel and this attribute holds itself a class (meaning an instance of type).
That class referenced by __config__ inherits from BaseConfig and is dynamically created by the ModelMetaclass constructor. In the process it inherits all the attributes set by the model's base classes and overrides them with whatever you set in the inner Config.
You can see the consequences in this example:
from pydantic import BaseConfig, BaseModel

class Model(BaseModel):
    class Config:
        frozen = True

a = BaseModel()
b = Model()

a_conf = a.__config__
b_conf = b.__config__

assert isinstance(a_conf, type) and issubclass(a_conf, BaseConfig)
assert isinstance(b_conf, type) and issubclass(b_conf, BaseConfig)
assert not a_conf.frozen
assert b_conf.frozen
By the way, this is why you should not refer to the inner Config directly in your code. It will only have the attributes you set on that one class explicitly and nothing inherited, not even the defaults from BaseConfig. The documented way to access the full model config is via __config__.
This is also why there is no such thing as model instance config. Change an attribute of __config__ and you'll change it for the entire class/model:
from pydantic import BaseModel

foo = BaseModel()
bar = BaseModel()

assert not foo.__config__.frozen
bar.__config__.frozen = True
assert foo.__config__.frozen
Possible solutions
An important constraint of this approach is that it only really makes sense, when you have some fixed type that all these dynamically created classes can inherit from. In the case of Pydantic it is BaseConfig and the __config__ attribute is annotated accordingly, namely with type[BaseConfig], which allows a static type checker to infer the interface of that __config__ class.
You could of course go the opposite way and allow literally any inner class to be defined for Data on your classes, but this probably defeats the purpose of your design. It would work fine though and you could hook into class creation via the meta class to enforce that Data is set and a class. You could even enforce that specific attributes on that inner class are set, but at that point you might as well have a common base class for that.
If you wanted to replicate the Pydantic approach, I can give you a very crude example of how this can be accomplished, with the basic ideas shamelessly stolen from (or inspired by) the Pydantic code.
You can set up a BaseData class and fully define its attributes for the annotations and type inferences down the line. Then you set up your custom meta class. In its __new__ method you perform the inheritance loop to dynamically build the new BaseData subclass and assign the result to the __data__ attribute of the new outer class:
from __future__ import annotations
from typing import ClassVar, cast

class BaseData:
    foo: str = "abc"
    bar: int = 1

class CustomMeta(type):
    def __new__(
        mcs,
        name: str,
        bases: tuple[type],
        namespace: dict[str, object],
        **kwargs: object,
    ) -> CustomMeta:
        data = BaseData
        for base in reversed(bases):
            if issubclass(base, Base):
                data = inherit_data(base.__data__, data)
        own_data = cast(type[BaseData], namespace.get('Data'))
        data = inherit_data(own_data, data)
        namespace["__data__"] = data
        cls = super().__new__(mcs, name, bases, namespace, **kwargs)
        return cls

def inherit_data(
    own_data: type[BaseData] | None,
    parent_data: type[BaseData],
) -> type[BaseData]:
    if own_data is None:
        base_classes: tuple[type[BaseData], ...] = (parent_data,)
    elif own_data == parent_data:
        base_classes = (own_data,)
    else:
        base_classes = own_data, parent_data
    return type('Data', base_classes, {})

...  # more code below...
With this you can now define your Base class, annotate __data__ in its namespace with type[BaseData], and assign BaseData to its Data attribute. The inner Data classes on all derived classes can now define just those attributes that are different from their parents' Data. To demonstrate that this works, try this:
...  # Code from above

class Base(metaclass=CustomMeta):
    __data__: ClassVar[type[BaseData]]
    Data = BaseData

class Derived1(Base):
    class Data:
        foo = "xyz"

class Derived2(Derived1):
    class Data:
        bar = 42

if __name__ == "__main__":
    obj0 = Base()
    obj1 = Derived1()
    obj2 = Derived2()
    print(obj0.__data__.foo, obj0.__data__.bar)  # abc 1
    print(obj1.__data__.foo, obj1.__data__.bar)  # xyz 1
    print(obj2.__data__.foo, obj2.__data__.bar)  # xyz 42
Static type checkers will of course also know what to expect from the __data__ attribute and IDEs should give proper auto-suggestions for it. If you add reveal_type(obj2.__data__.foo) and reveal_type(obj2.__data__.bar) at the bottom and run mypy over the code, it will output that the revealed types are str and int respectively.
Caveat
An important drawback of this approach is that the inheritance is abstracted away in such a way that the inner Data class is treated as its own class unrelated to BaseData in any way by a static type checker, which makes sense because that is what it is; it just inherits from object.
Thus, you will not get any suggestions about the attributes you can override on Data by your IDE. This is the same deal with Pydantic, which is one of the reasons they roll their own custom plugins for mypy and PyCharm for example. The latter allows PyCharm to suggest you the BaseConfig attributes, when you are writing the inner Data class on any derived class.
I know I already provided one answer, but after the little back-and-forth, I thought of another possible solution involving an entirely different design from what I proposed earlier. I think it improves readability to post it as a second answer.
No inner classes; just a single type argument
See here for the details about how you can access the type argument provided during subclassing.
from typing import Generic, TypeVar, get_args, get_origin

D = TypeVar("D", bound="BaseData")

class BaseData:
    foo: str = "abc"
    bar: int = 1

class Base(Generic[D]):
    __data__: type[D]

    @classmethod
    def __init_subclass__(cls, **kwargs: object) -> None:
        super().__init_subclass__(**kwargs)
        for base in cls.__orig_bases__:  # type: ignore[attr-defined]
            origin = get_origin(base)
            if origin is None or not issubclass(origin, Base):
                continue
            type_arg = get_args(base)[0]
            # Do not set the attribute for GENERIC subclasses!
            if not isinstance(type_arg, TypeVar):
                cls.__data__ = type_arg
                return
Usage:
class Derived1Data(BaseData):
    foo = "xyz"

class Derived1(Base[Derived1Data]):
    pass

class Derived2Data(Derived1Data):
    bar = 42
    baz = True

class Derived2(Base[Derived2Data]):
    pass

if __name__ == "__main__":
    obj1 = Derived1()
    obj2 = Derived2()
    assert "xyz" == obj1.__data__.foo == obj2.__data__.foo
    assert 42 == obj2.__data__.bar
    assert not hasattr(obj1.__data__, "baz")
    assert obj2.__data__.baz
Adding reveal_type(obj1.__data__) and reveal_type(obj2.__data__) for mypy will show type[Derived1Data] and type[Derived2Data] respectively.
The downside is obvious: it is not the "inner class" design you had in mind.
The upside is that it is entirely type safe while requiring minimal code. The user merely needs to provide their own BaseData subclass as a type argument when subclassing Base.
Adding the instance (optional)
If you want to have __data__ be an instance attribute and actual instance of the specified BaseData subclass, this is also easily accomplished. Here is a crude but working example:
from typing import Generic, TypeVar, get_args, get_origin

D = TypeVar("D", bound="BaseData")

class BaseData:
    foo: str = "abc"
    bar: int = 1

    def __init__(self, **kwargs: object) -> None:
        self.__dict__.update(kwargs)

class Base(Generic[D]):
    __data_cls__: type[D]
    __data__: D

    @classmethod
    def __init_subclass__(cls, **kwargs: object) -> None:
        super().__init_subclass__(**kwargs)
        for base in cls.__orig_bases__:  # type: ignore[attr-defined]
            origin = get_origin(base)
            if origin is None or not issubclass(origin, Base):
                continue
            type_arg = get_args(base)[0]
            # Do not set the attribute for GENERIC subclasses!
            if not isinstance(type_arg, TypeVar):
                cls.__data_cls__ = type_arg
                return

    def __init__(self, **data_kwargs: object) -> None:
        self.__data__ = self.__data_cls__(**data_kwargs)
Usage:
class DerivedData(BaseData):
    foo = "xyz"
    baz = True

class Derived(Base[DerivedData]):
    pass

if __name__ == "__main__":
    obj = Derived(baz=False)
    print(obj.__data__.foo)  # xyz
    print(obj.__data__.bar)  # 1
    print(obj.__data__.baz)  # False
Again, a static type checker will know that __data__ is of the DerivedData type.
Though, I suppose at that point you might as well just have the user provide their own instance of a BaseData subclass during initialization of Derived. Maybe this is a cleaner and more intuitive design anyway. A sketch of that variant follows.
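That variant might look like this (a sketch reusing the BaseData/DerivedData names from above; the generic parameter alone now carries all the type information):

from typing import Generic, TypeVar

class BaseData:
    foo: str = "abc"
    bar: int = 1

    def __init__(self, **kwargs: object) -> None:
        self.__dict__.update(kwargs)

D = TypeVar("D", bound=BaseData)

class Base(Generic[D]):
    def __init__(self, data: D) -> None:
        # The caller supplies the instance; no runtime introspection needed.
        self.__data__: D = data

class DerivedData(BaseData):
    foo = "xyz"
    baz = True

class Derived(Base[DerivedData]):
    pass

obj = Derived(DerivedData(baz=False))
print(obj.__data__.foo)  # xyz; statically known to be DerivedData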
I think your initial idea will only work if you roll your own plugins for static type checkers.
It is not completely DRY, but given advice from @daniil-fajnberg I think this is probably preferable. Explicit is better than implicit, right?
The idea is to require derived classes to specify a type annotation for data; type checkers will be happy since the derived classes all annotate with the correct type, and the base class only needs to inspect that single annotation to determine the runtime type.
from typing import ClassVar, TypeVar, get_type_hints

class Base:
    __data_cls__: ClassVar[type]

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        hints = get_type_hints(cls)
        if 'data' in hints:
            if isinstance(hints['data'], TypeVar):
                raise TypeError('Cannot infer __data_cls__ from TypeVar.')
            cls.__data_cls__ = hints['data']

    def __init__(self):
        self.data = self.__data_cls__()
Usage looks like this. Note the name of the data type and the data attribute are no longer coupled.
class Derived1(Base):
    class TheDataType:
        foo: str = ''
        bar: int = 77

    data: TheDataType

print('Derived1:')
obj1 = Derived1()
reveal_type(obj1.data)      # Derived1.TheDataType
reveal_type(obj1.data.foo)  # str
reveal_type(obj1.data.bar)  # int
And that decoupling means you are not required to use an inner type.
class Derived2(Base):
    data: Derived1.TheDataType

print('Derived2:')
obj2 = Derived2()
reveal_type(obj2.data)      # Derived1.TheDataType
reveal_type(obj2.data.foo)  # str
reveal_type(obj2.data.bar)  # int
I don't think it's possible to support generic subclasses in this solution. It might be possible to adapt the code in https://stackoverflow.com/a/74788026/4672189 to fetch the runtime type in certain situations.
We have a number of dataclasses representing various results, with a common ancestor Result. Each result then provides its data using its own subclass of ResultData. But we are having trouble annotating the case properly.
We came up with the following solution:
from dataclasses import dataclass
from typing import ClassVar, Generic, Optional, Sequence, Type, TypeVar

class ResultData:
    ...

T = TypeVar('T', bound=ResultData)

@dataclass
class Result(Generic[T]):
    _data_cls: ClassVar[Type[T]]
    data: Sequence[T]

    @classmethod
    def parse(cls, ...) -> T:
        self = cls()
        self.data = [self._data_cls.parse(...)]
        return self

class FooResultData(ResultData):
    ...

class FooResult(Result):
    _data_cls = FooResultData
but it recently stopped working with the mypy error ClassVar cannot contain type variables [misc]. It is also against PEP 526 (see https://www.python.org/dev/peps/pep-0526/#class-and-instance-variable-annotations), which we missed earlier.
Is there a way to annotate this case properly?
As hinted in the comments, the _data_cls attribute could be removed, assuming that it's being used for type hinting purposes. The correct way to annotate a Generic class defined like class MyClass(Generic[T]) is to use MyClass[MyType] in the type annotations.
For example, hopefully the below works in mypy. I only tested in PyCharm, and it seems to infer the type well enough at least.
from dataclasses import dataclass
from functools import cached_property
from typing import Generic, Sequence, TypeVar, Any, Type

T = TypeVar('T', bound='ResultData')

class ResultData:
    ...

@dataclass
class Result(Generic[T]):
    data: Sequence[T]

    @cached_property
    def data_cls(self) -> Type[T]:
        """Get generic type arg to Generic[T] using `__orig_class__` attribute"""
        # noinspection PyUnresolvedReferences
        return self.__orig_class__.__args__[0]

    def parse(self):
        print(self.data_cls)

@dataclass
class FooResultData(ResultData):
    # can be removed
    this_is_a_test: Any = 'testing'

class AnotherResultData(ResultData): ...

# indicates `data` is a list of `FooResultData` objects
FooResult = Result[FooResultData]
# indicates `data` is a list of `AnotherResultData` objects
AnotherResult = Result[AnotherResultData]

f: FooResult = FooResult([FooResultData()])
f.parse()
_ = f.data[0].this_is_a_test  # no warnings

f: AnotherResult = AnotherResult([AnotherResultData()])
f.parse()
Output:
<class '__main__.FooResultData'>
<class '__main__.AnotherResultData'>
In the end I just replaced the type variable in the _data_cls annotation with the base class, and fixed the annotation of the subclasses as noted by @rv.kvetch in his answer.
The downside is the need to specify the data class twice in every subclass, but in my opinion it is more legible than extracting the class in a property.
The complete solution:
from dataclasses import dataclass
from typing import ClassVar, Generic, Optional, Sequence, Type, TypeVar

class ResultData:
    ...

T = TypeVar('T', bound=ResultData)

@dataclass
class Result(Generic[T]):
    _data_cls: ClassVar[Type[ResultData]]  # Fixed annotation here
    data: Sequence[T]

    @classmethod
    def parse(cls, ...) -> T:
        self = cls()
        self.data = [self._data_cls.parse(...)]
        return self

class FooResultData(ResultData):
    ...

class FooResult(Result[FooResultData]):  # Fixed annotation here
    _data_cls = FooResultData
I know I can do this:
import typing

T = typing.TypeVar("T")

class MyGenericClass(typing.Generic[T]):
    def a_method(self):
        print(self.__orig_class__)

MyGenericClass[SomeBaseClass]().a_method()
to print SomeBaseClass. Probably, I will just stick with that ability to achieve what I am ultimately trying to do (modify functionality based on T), but I am now stuck wondering how all of this even works.
Originally, I wanted to access the base type information (the value of T) from inside the class at the time the object is being instantiated, or soon thereafter, rather than later in its lifecycle.
As a concrete example, in the code below, I wanted something to replace any of ?n? so I could get the value SomeOtherBaseClass early in the object's lifecycle. Maybe there's some code that needs to go above one of those lines, as well.
import typing

T = typing.TypeVar("T")

class MyOtherGenericClass(typing.Generic[T]):
    def __init__(self, *args, **kwargs):
        print(?1?)

    def __new__(klass, *args, **kwargs):
        print(?2?)

MyOtherGenericClass[SomeOtherBaseClass]()
I was trying to set some instance variables at the time of instantiation (or, somehow, soon after it) based on the value of T. I'm rethinking my approach given that the typing module and, specifically, this stuff with generics, still seems to be in an unstable period of development.
So… Is that possible? A user pointed out that, at least in 3.8, __orig_class__ gets set during typing._GenericAlias.__call__, but how does that __call__ method get invoked? When does that happen?
Related reading:
Generic[T] base class - how to get type of T from within instance?
How to access the type arguments of typing.Generic?
It sounds like you want a Self type, which is a thing in PEP 673 in the upcoming Python 3.11.
The current workaround for the not-yet-implemented Self type is:
from typing import TypeVar

TShape = TypeVar("TShape", bound="Shape")

class Shape:
    def set_scale(self: TShape, scale: float) -> TShape:
        self.scale = scale
        return self

class Circle(Shape):
    def set_radius(self, radius: float) -> "Circle":
        self.radius = radius
        return self

Circle().set_scale(0.5).set_radius(2.7)  # => Circle
Here the self parameter and the return type are both annotated with the upper-bounded type variable TShape instead of the Self type.
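On Python 3.11+ the same chaining works without the explicit type variable; a sketch using typing.Self from PEP 673:

from typing import Self  # Python 3.11+

class Shape:
    def set_scale(self, scale: float) -> Self:
        self.scale = scale
        return self

class Circle(Shape):
    def set_radius(self, radius: float) -> Self:
        self.radius = radius
        return self

Circle().set_scale(0.5).set_radius(2.7)  # inferred as Circle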
You have two/three options, from dirtiest to dirty:
Hack typing.Generic mechanisms and inspect calling frames:
import inspect
import types
import typing
from typing import Generic
from typing import Type
from typing import TypeVar

T = TypeVar('T')

class A(Generic[T]):
    _t: Type[T]

    def __class_getitem__(cls, key_t) -> type:
        g = typing._GenericAlias(cls, key_t)  # <-- undocumented
        return g

    def __new__(cls):
        prevfr = inspect.currentframe().f_back
        t = inspect.getargvalues(prevfr).locals['self'].__args__[0]
        o = super().__new__(cls)
        o._t = t
        return o

    def __init__(self) -> None:
        print(f"{self._t=}")

print(A[str])    # >>> "<types.A[str] at 0x11b52c550>"
print(A[str]())  # >>> "self._t=<class 'str'>"
Reify your generic class
import types
from typing import Generic, Type, TypeVar

T = TypeVar('T')

class A(Generic[T]):
    __concrete__ = {}
    _t: Type[T]

    def __class_getitem__(cls, key_t: type):
        cache = cls.__concrete__
        if (c := cache.get(key_t, None)):
            return c
        cache[key_t] = c = types.new_class(
            f"{cls.__name__}[{key_t.__name__}]", (cls,), {},
            lambda ns: ns.update(_t=key_t))
        return c

    def __init__(self) -> None:
        print(f"{self._t=}")

A[str]()
Hide your shameful privates with metaclasses
import types
from typing import Generic, Type, TypeVar

T = TypeVar('T')

class AMeta(type):
    __concrete__ = {}

    def __getitem__(cls, key_t: type):
        cache = cls.__concrete__
        if (c := cache.get(key_t, None)):
            return c
        cache[key_t] = c = types.new_class(
            f"{cls.__name__}[{key_t.__name__}]", (cls,), {},
            lambda ns: ns.update(_t=key_t))
        return c

class A(Generic[T], metaclass=AMeta):
    _t: Type[T]

    def __init__(self) -> None:
        print(f"{self._t=}")

A[str]()
Note that this adds substantial instantiation overhead. Neither solution is elegant. The Python sages wisely (IMHO; TypeScript, looking at ya') decided to implement typing support using existing Python constructs, creating as few new ones as possible. They are adding runtime support slowly, the focus being static typing. Thus:
Using __class_getitem__() on any class for purposes other than type hinting is discouraged.
But the inherently dynamic nature of Python will triumph in the end, and we'll get full runtime introspection in time.
Tested in Python 3.10.8
I have a class, for example Circle, which has dependent attributes, radius and circumference. It makes sense to use a dataclass here because of the boilerplate for __init__, __eq__, __repr__ and the ordering methods (__lt__, ...).
I choose one of the attributes to be dependent on the other, e.g. the circumference is computed from the radius. Since the class should support initialization with either of the attributes (+ have them included in __repr__ as well as dataclasses.asdict) I annotate both:
from dataclasses import dataclass
import math

@dataclass
class Circle:
    radius: float = None
    circumference: float = None

    @property
    def circumference(self):
        return 2 * math.pi * self.radius

    @circumference.setter
    def circumference(self, val):
        if val is not type(self).circumference:  # <-- awkward check
            self.radius = val / (2 * math.pi)
This requires me to add the somewhat awkward check for if val is not type(self).circumference because this is what the setter will receive if no value is provided to __init__.
Then if I wanted to make the class hashable by declaring frozen=True I need to change self.radius = ... to object.__setattr__(self, 'radius', ...) because otherwise this would attempt to assign to a field of a frozen instance.
So my question is if this is a sane way of using dataclasses together with properties or if potential (non-obvious) obstacles lie ahead and I should refrain from using dataclasses in such cases? Or maybe there is even a better way of achieving this goal?
For starters, you could set the attributes in the __init__ method as follows:
from dataclasses import dataclass, InitVar
import math

@dataclass(frozen=True, order=True)
class CircleWithFrozenDataclass:
    radius: float = 0
    circumference: float = 0

    def __init__(self, radius=0, circumference=0):
        super().__init__()
        if circumference:
            object.__setattr__(self, 'circumference', circumference)
            object.__setattr__(self, 'radius', circumference / (2 * math.pi))
        if radius:
            object.__setattr__(self, 'radius', radius)
            object.__setattr__(self, 'circumference', 2 * math.pi * radius)
This will still provide you with all the helpful __eq__, __repr__, __hash__, and ordering method injections. While object.__setattr__ looks ugly, note that the CPython implementation itself uses object.__setattr__ to set attributes when injecting the generated __init__ method for a frozen dataclass.
If you really want to get rid of object.__setattr__, you can set frozen=False (the default) and override the __setattr__ method yourself. This is copying how the frozen feature of dataclasses is implemented in CPython. Note that you will also have to turn on unsafe_hash=True as __hash__ is no longer injected since frozen=False.
@dataclass(unsafe_hash=True, order=True)
class CircleUsingDataclass:
    radius: float = 0
    circumference: float = 0
    _initialized: InitVar[bool] = False

    def __init__(self, radius=0, circumference=0):
        super().__init__()
        if circumference:
            self.circumference = circumference
            self.radius = circumference / (2 * math.pi)
        if radius:
            self.radius = radius
            self.circumference = 2 * math.pi * radius
        self._initialized = True

    def __setattr__(self, name, value):
        if self._initialized and \
                (type(self) is __class__ or name in ['radius', 'circumference']):
            raise AttributeError(f"cannot assign to field {name!r}")
        super().__setattr__(name, value)

    def __delattr__(self, name):
        if self._initialized and \
                (type(self) is __class__ or name in ['radius', 'circumference']):
            raise AttributeError(f"cannot delete field {name!r}")
        super().__delattr__(name)
In my opinion, freezing should only happen after the __init__ by default, but for now I will probably use the first approach.
Is it possible to have an enum of enums in Python? For example, I'd like to have
enumA
    enumB
        elementA
        elementB
    enumC
        elementC
        elementD
And for me to be able to refer to elementA as enumA.enumB.elementA, or to refer to elementD as enumA.enumC.elementD.
Is this possible? If so, how?
EDIT: When implemented in the naive way:
from enum import Enum

class EnumA(Enum):
    class EnumB(Enum):
        member = 0

print(EnumA)
print(EnumA.EnumB.member)
It gives:
<enum 'EnumA'>
Traceback (most recent call last):
File "Maps.py", line 15, in <module>
print(EnumA.EnumB.member)
AttributeError: 'EnumA' object has no attribute 'member'
You can't do this with the enum stdlib module. If you try it:
class A(Enum):
    class B(Enum):
        a = 1
        b = 2

    class C(Enum):
        c = 1
        d = 2

A.B.a
… you'll just get an exception like:
AttributeError: 'A' object has no attribute 'a'
This is because the enumeration values of A act like instances of A, not like instances of their value type. Just like a normal enum holding int values doesn't have int methods on the values, the B won't have Enum methods. Compare:
class D(Enum):
    a = 1
    b = 2

D.a.bit_length()
You can, of course, access the underlying value (the int, or the B class) explicitly:
D.a.value.bit_length()
A.B.value.a
… but I doubt that's what you want here.
So, could you use the same trick that IntEnum uses, of subclassing both Enum and int so that its enumeration values are int values, as described in the Others section of the docs?
No, because what type would you subclass? Not Enum; that's already your type. You can't use type (the type of arbitrary classes). There's nothing that works.
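For contrast, the IntEnum trick referred to above is just mixing a concrete value type into the enum (a brief sketch):

from enum import Enum

class E(int, Enum):  # essentially what IntEnum does
    a = 1
    b = 2

E.a.bit_length()  # 1 -- the members really are ints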
So, you'd have to use a different Enum implementation with a different design to make this work. Fortunately, there are about 69105 different ones on PyPI and ActiveState to choose from.
For example, when I was looking at building something similar to Swift enumerations (which are closer to ML ADTs than Python/Java/etc. enumerations), someone recommended I look at makeobj. I forgot to do so, but now I just did, and:
import makeobj

class A(makeobj.Obj):
    class B(makeobj.Obj):
        a, b = makeobj.keys(2)

    class C(makeobj.Obj):
        c, d = makeobj.keys(2)

print(A.B, A.B.b, A.B.b.name, A.B.b.value)
This gives you:
<Object: B -> [a:0, b:1]> <Value: B.b = 1> b 1
It might be nice if it looked at its __qualname__ instead of its __name__ for creating the str/repr values, but otherwise it looks like it does everything you want. And it has some other cool features (not exactly what I was looking for, but interesting…).
Note: the below is interesting, and may be useful, but as @abarnert noted the resulting A Enum doesn't have Enum members -- i.e. list(A) returns an empty list.
Without commenting on whether an Enum of Enums is a good idea (I haven't yet decided ;) , this can be done... and with only a small amount of magic.
You can either use the Constant class from this answer:
class Constant:
    def __init__(self, value):
        self.value = value

    def __get__(self, *args):
        return self.value

    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, self.value)
Or you can use the new aenum library and its built-in skip descriptor decorator (which is what I will show).
At any rate, by wrapping the subEnum classes in a descriptor they are sheltered from becoming members themselves.
Your example then looks like:
from aenum import Enum, skip

class enumA(Enum):
    @skip
    class enumB(Enum):
        elementA = 'a'
        elementB = 'b'

    @skip
    class enumC(Enum):
        elementC = 'c'
        elementD = 'd'
and you can then access them as:
print(enumA)
print(enumA.enumB)
print(enumA.enumC.elementD)
which gives you:
<enum 'enumA'>
<enum 'enumB'>
enumC.elementD
The difference between using Constant and skip is esoteric: in enumA's __dict__ 'enumB' will return a Constant object (if Constant was used) or <enum 'enumB'> if skip was used; normal access will always return <enum 'enumB'>.
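To make that esoteric difference concrete (hypothetical REPL output, assuming the Constant variant from above was used):

>>> enumA.__dict__['enumB']   # raw class-dict access sees the descriptor itself
Constant(<enum 'enumB'>)
>>> enumA.enumB               # normal access goes through __get__
<enum 'enumB'>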
In Python 3.5+ you can even (un)pickle the nested Enums:
import pickle

print(pickle.loads(pickle.dumps(enumA.enumC.elementD)) is enumA.enumC.elementD)
# True
Do note that the subEnum doesn't include the parent Enum in its display; if that's important I would suggest enhancing EnumMeta to recognize the Constant descriptor and modify its contained class' __repr__ -- but I'll leave that as an exercise for the reader. ;)
I made an enum of enums by implementing __getattr__ in the base enum, like this:

def __getattr__(self, item):
    if item != '_value_':
        return getattr(self.value, item).value
    raise AttributeError(item)

In my case I have an enum of enums of enums:

class enumBase(Enum):
    class innerEnum(Enum):
        class innerInnerEnum(Enum):
            A = 0

And

enumBase.innerEnum.innerInnerEnum.A

works.
You can use namedtuples to do something like this:
>>> from collections import namedtuple
>>> Foo = namedtuple('Foo', ['bar', 'barz'])
>>> Bar = namedtuple('Bar', ['element_a', 'element_b'])
>>> Barz = namedtuple('Barz', ['element_c', 'element_d'])
>>> bar = Bar('a', 'b')
>>> barz = Barz('c', 'd')
>>> foo = Foo(bar, barz)
>>> foo
Foo(bar=Bar(element_a='a', element_b='b'), barz=Barz(element_c='c', element_d='d'))
>>> foo.bar.element_a
'a'
>>> foo.barz.element_d
'd'
This is not an enum, but maybe it solves your problem.
If you don't care about inheritance, here's a solution I've used before:
import enum

class Animal:
    class Cat(enum.Enum):
        TIGER = "TIGER"
        CHEETAH = "CHEETAH"
        LION = "LION"

    class Dog(enum.Enum):
        WOLF = "WOLF"
        FOX = "FOX"

    def __new__(cls, name):
        # Search the nested enums for a member with this value.
        for member in cls.__dict__.values():
            if isinstance(member, enum.EnumMeta) and name in member.__members__:
                return member(name)
        raise ValueError(f"'{name}' is not a valid {cls.__name__}")
It works by overriding the __new__ method of Animal to find the appropriate sub-enum and return an instance of that.
Usage:
Animal.Dog.WOLF #=> <Dog.WOLF: 'WOLF'>
Animal("WOLF") #=> <Dog.WOLF: 'WOLF'>
Animal("WOLF") is Animal.Dog.WOLF #=> True
Animal("WOLF") is Animal.Dog.FOX #=> False
Animal("WOLF") in Animal.Dog #=> True
Animal("WOLF") in Animal.Cat #=> False
Animal("OWL") #=> ValueError: 'OWL' is not a valid Animal
However, notably:
isinstance(Animal.Dog, Animal) #=> False
As long as you don't care about that, this solution works nicely. Unfortunately there seems to be no way to refer to the outer class inside the definition of an inner class, so there's no easy way to make Dog extend Animal.
A solution based on attrs. This also allows you to implement attribute validators and other attrs goodies:
import enum
import attr

class CoilsTypes(enum.Enum):
    heating: str = "heating"

class FansTypes(enum.Enum):
    plug: str = "plug"

class HrsTypes(enum.Enum):
    plate: str = "plate"
    rotory_wheel: str = "rotory wheel"

class FiltersTypes(enum.Enum):
    bag: str = "bag"
    pleated: str = "pleated"

@attr.dataclass(frozen=True)
class ComponentTypes:
    coils: CoilsTypes = CoilsTypes
    fans: FansTypes = FansTypes
    hrs: HrsTypes = HrsTypes
    filter: FiltersTypes = FiltersTypes

cmp = ComponentTypes()
res = cmp.hrs.plate
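The validators mentioned above might look like this (a sketch; attr.ib with attr.validators.instance_of is standard attrs API, but the class and field names here are made up):

import attr

@attr.s(auto_attribs=True, frozen=True)
class Duct:
    # Hypothetical field: rejects anything that is not a float at init time.
    diameter: float = attr.ib(validator=attr.validators.instance_of(float))

Duct(0.2)    # fine
Duct("big")  # raises TypeError from the validator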
Try this:
# python 3.7
import enum

class A(enum.Enum):
    def __get__(self, instance, owner):
        return self.value

    class B(enum.IntEnum):
        a = 1
        b = 2

    class C(enum.IntEnum):
        c = 1
        d = 2

    # this is optional (it just adds 'A.' before the B and C enum names)
    B.__name__ = B.__qualname__
    C.__name__ = C.__qualname__

print(A.C.d)        # prints: A.C.d
print(A.B.b.value)  # prints: 2