When defining a class or module with annotated fields, how can I retrieve the annotations, the way I can for functions?
class Test:
    def __init__(self):
        self.x: int

t = Test()
Now I need 'int' from getattr(t,'x')
With baseline Python, there is no option to do what you want without changing the definition of Test. The minimalist change would be to annotate the attribute at class level:
class Test:
    x: int

    def __init__(self):
        # define self.x or not, but it needn't be annotated again
        ...
This is actually perfectly fine: by default, annotations at class scope are assumed to refer to instance attributes, not class attributes. (Assigning a value at class scope creates a class attribute, but merely annotating the name does not.) You have to explicitly use typing.ClassVar to indicate that the annotated name is intended to be a class attribute only. PEP 526's section on class and instance variable annotations defines these behaviors; they're something you can rely on, not just an accident of implementation.
Once you've done this, typing.get_type_hints will return {'x': int} for both Test and t in your example case.
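A quick runtime check of that claim, using the same names as the example above:

```python
import typing

class Test:
    x: int  # annotated at class scope; treated as an instance attribute

    def __init__(self):
        self.x = 0

t = Test()

# Both the class and the instance resolve to the same hints
print(typing.get_type_hints(Test))  # {'x': <class 'int'>}
print(typing.get_type_hints(t))     # {'x': <class 'int'>}
```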
While that's enough on its own, I'll note that in many such cases nowadays, as long as you're annotating anyway, you can simplify your code with the dataclasses module, getting the annotations and basic functionality defined for you with minimal typing. Simple replacement code for your case would be:
import dataclasses

@dataclasses.dataclass
class Test:
    x: int
While your case doesn't showcase the full feature set (it's basically just replacing __init__ with the decorator), it's still doing more than meets the eye. In addition to defining __init__ for you (it expects to receive an x argument, annotated as int), as well as a suitable __repr__ and __eq__, it lets you define defaults easily (just assign the default at the point of annotation, or, for more complex or mutable cases, assign a dataclasses.field instead), and you can pass arguments to dataclass to make it produce sortable or immutable instances.
In your case, the main advantage is removing redundancy; x is annotated and referenced exactly once, rather than being annotated once at class level, then used (and optionally, annotated again) during initialization.
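As a sketch of those extras (the Point class and its fields are invented for illustration, not from the question):

```python
import dataclasses

@dataclasses.dataclass(order=True)
class Point:
    x: int
    y: int = 0  # simple default, assigned at the point of annotation
    tags: list = dataclasses.field(default_factory=list)  # mutable default via field()

p = Point(1)
print(p)                       # Point(x=1, y=0, tags=[])
print(Point(1) == Point(1))    # True: generated __eq__ compares field by field
print(Point(0, 5) < Point(1))  # True: order=True generates comparison methods
```

Passing frozen=True instead would make instances immutable (assigning to a field then raises dataclasses.FrozenInstanceError).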
I am not sure you can get the annotations of self.x easily.
Assuming your code:
class Test:
    def __init__(self):
        self.x: int = None
t = Test()
I tried looking for __annotations__ in Test and t (where I would expect it to be), without much luck.
However, what you could do is this workaround:
class Test:
    x: int

    def __init__(self):
        # annotation from here seems to be unreachable from `__annotations__`
        self.x: str
t = Test()
print(Test.__annotations__)
# {'x': <class 'int'>}
print(t.__annotations__)
# {'x': <class 'int'>}
EDIT
If you want to be able to inspect the type of self.x with mypy, check the answer from @ruohola.
EDIT 2
Note that mypy (at least v0.560) does get confused when x is annotated both at class level and in __init__; it looks like the annotation of self.x is simply ignored:
import sys

class Test:
    x: str = "0"

    def __init__(self):
        self.x: int = 1

t = Test()

print(Test.x, t.x)
# 0 1
print(Test.x is t.x)
# False

if "mypy" in sys.modules:
    reveal_type(t.x)
    # from mypy: annotated_self.py:14: error: Revealed type is 'builtins.str'
    reveal_type(Test.x)
    # from mypy: annotated_self.py:15: error: Revealed type is 'builtins.str'

Test.x = 2
# from mypy: annotated_self.py:17: error: Incompatible types in assignment (expression has type "int", variable has type "str")
t.x = "3"
# no complaint from `mypy`
t.x = 4
# from mypy: annotated_self.py:19: error: Incompatible types in assignment (expression has type "int", variable has type "str")
print(Test.x, t.x)
# 2 4
If you're using mypy, you can use reveal_type() to check the type annotation of any expression. Note that this function is only usable when running mypy, and not at normal runtime.
I also use typing.TYPE_CHECKING to avoid an error when running the file normally, since this special constant is assumed to be True only by third-party type checkers.
test.py:
from typing import Dict, Optional, TYPE_CHECKING

class Test:
    def __init__(self) -> None:
        self.x: Optional[Dict[str, int]]

test = Test()

if TYPE_CHECKING:
    reveal_type(test.x)
else:
    print("not running with mypy")
Example when running mypy on it:
$ mypy test.py
test.py:10: error: Revealed type is 'Union[builtins.dict[builtins.str, builtins.int], None]'
And when running it normally:
$ python3 test.py
not running with mypy
Related
I am trying to add a type annotation to a function input argument that is a dataclass with attributes that overlap with another dataclass, which actually gets passed in as an input argument.
Consider the following code:
from dataclasses import dataclass
from typing import TypeVar

@dataclass
class Foo:
    a: str
    zar: str

@dataclass
class Car(Foo):
    b: str

@dataclass
class CarInterface:
    a: str
    b: str

mar = TypeVar("mar", bound=CarInterface)

def blah(x: mar):
    print(x.a)

car_instance = Car(a="blah blah", zar="11", b="bb")
blah(car_instance)
In this example, I'm trying to create my own type annotation mar which is bound by CarInterface. I want to check that whatever class is passed into blah() at least has a and b attributes (don't care if the class has other attributes such as zar). I want to do it this way because class Car (which actually gets passed in) is one of many classes that will be written in the future and passed into this function.
I also want it to be very easy to define a new Car, so I would like to avoid abstract classes as I don't think the added complexity is worth mypy being happy.
So I'm trying to create mar which uses duck typing to say that Car satisfies the interface of CarInterface.
However, I get two mypy errors.
The first is on the mar annotation in def blah
TypeVar "mar" appears only once in generic function signature (Pylance: reportInvalidTypeVarUse)
And the other is where I pass car_instance into blah()
Argument of type "Car" cannot be assigned to parameter "x" of type "bar@blah" in function "blah"
  Type "Car" cannot be assigned to type "CarInterface"
    "Car" is incompatible with "CarInterface" (Pylance: reportGeneralTypeIssues)
Use a Protocol to define CarInterface rather than a dataclass:
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Foo:
    a: str
    zar: str

@dataclass
class Car(Foo):
    b: str

class CarInterface(Protocol):
    a: str
    b: str

def blah(x: CarInterface):
    print(x.a)

car_instance = Car(a="blah blah", zar="11", b="bb")
blah(car_instance)
The above code will typecheck fine, but if you try to pass blah a Foo instead of a Car you'll get a mypy error like this:
test.py:22: error: Argument 1 to "blah" has incompatible type "Foo"; expected "CarInterface"
test.py:22: note: "Foo" is missing following "CarInterface" protocol member:
test.py:22: note: b
Found 1 error in 1 file (checked 1 source file)
A Protocol can be used as the bound for a TypeVar, but it's only necessary to use a TypeVar if you want to indicate that two variables not only implement the protocol but are also the same specific type (e.g. to indicate that a function takes any object implementing CarInterface and returns the same exact type of object rather than some other arbitrary CarInterface implementation).
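As a sketch of that distinction (tag_and_return is an invented name for illustration):

```python
from dataclasses import dataclass
from typing import Protocol, TypeVar

class CarInterface(Protocol):
    a: str
    b: str

C = TypeVar("C", bound=CarInterface)

def tag_and_return(x: C) -> C:
    # Bound TypeVar: the caller gets back the same concrete type it passed in,
    # not just "some arbitrary CarInterface implementation".
    print(x.a)
    return x

@dataclass
class Car:
    a: str
    b: str

car = tag_and_return(Car(a="blah", b="bb"))  # inferred as Car, not CarInterface
```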
I have a class structure that looks something like this:
class Base:
    class Nested:
        pass

    def __init__(self):
        self.nestedInstance = self.Nested()
where subclasses of Base each have their own Nested class extending the original, like this:
class Sub(Base):
    class Nested(Base.Nested):
        pass
This works perfectly, and instances of Sub have their nestedInstance attributes set to instances of Sub.Nested.
However, in my IDE the nestedInstance attribute is always treated as an instance of Base.Nested, not the inherited Sub.Nested. How can I make it so that nestedInstance will be inferred to be Sub.Nested rather than Base.Nested? (Without having to add extra code to every subclass; preferably, this would all be done in Base.)
(By the way, I'm aware that this is an odd structure, and I can go into detail about why I chose it if necessary, but I think it's pretty elegant for my situation, and I'm hoping there's a solution to this problem.)
I don't agree with the statement that you were trying to violate the Liskov substitution principle. You were merely looking for a way to let a static type checker infer the type of nested_instance for classes inheriting from Base to be their respective Nested class. Obviously this wasn't possible with the code you had; otherwise there would be no question.
There actually is a way to minimize repetition and accomplish what you want.
Generics to the rescue!
You can define your Base class as generic over a type variable with the upper bound of Base.Nested. When you define Sub as a subclass of Base, you provide a reference to Sub.Nested as the concrete type argument. Here is the setup:
from typing import Generic, TypeVar, cast

N = TypeVar("N", bound="Base.Nested")

class Base(Generic[N]):
    nested_instance: N

    class Nested:
        pass

    def __init__(self) -> None:
        self.nested_instance = cast(N, self.Nested())

class Sub(Base["Sub.Nested"]):
    class Nested(Base.Nested):
        pass
This is actually all you need. For more info about generics I recommend the relevant section of PEP 484. A few things to note:
Why do we need the bound?
If we were to just use N = TypeVar("N"), the type checker would have no problem if we wanted to define a subclass like this:

class Broken(Base[int]):
    class Nested(Base.Nested):
        pass
But this would be a problem, since now the nested_instance attribute would be expected to be of the type int, which is not what we want. The upper bound on N prevents this, causing mypy to complain:
error: Type argument "int" of "Base" must be a subtype of "Nested" [type-var]
Why explicitly declare nested_instance?
The whole point of making a class generic is to bind some type variable (like N) to it and then indicate that some associated type inside that class is in fact N (or even multiple). We essentially tell the type checker to expect nested_instance to always be of the type N, which must be provided, whenever Base is used to annotate something.
However, now the type checker will complain if we ever omit the type argument for Base and try an annotation like x: Base. Again, mypy would tell us:
error: Missing type parameters for generic type "Base" [type-arg]
This may be the only "downside" to the use of generics in this fashion.
Why cast?
The problem is that inside Base, the nested_instance attribute is declared as a generic type N, whereas in Base.__init__, we assign an instance of the specific type Base.Nested. Even though it may seem like this should work, it does not. Omitting the cast call results in the following mypy error:
error: Incompatible types in assignment (expression has type "Nested", variable has type "N") [assignment]
Are the quotes necessary?
Yes, and from __future__ import annotations does not help here. I am not entirely sure why, but I believe that in the case of the Base[...] usage, the reason is that __class_getitem__ is actually called, and you cannot pass Sub.Nested to it because it is not even defined at that point.
Full working example
from typing import Generic, TypeVar, cast

N = TypeVar("N", bound="Base.Nested")

class Base(Generic[N]):
    nested_instance: N

    class Nested:
        pass

    def __init__(self) -> None:
        self.nested_instance = cast(N, self.Nested())

class Sub(Base["Sub.Nested"]):
    class Nested(Base.Nested):
        pass

def get_nested(obj: Base[N]) -> N:
    return obj.nested_instance

def assert_instance_of_nested(nested_obj: N, cls: type[Base[N]]) -> None:
    assert isinstance(nested_obj, cls.Nested)

if __name__ == '__main__':
    sub = Sub()
    nested = get_nested(sub)
    assert_instance_of_nested(nested, Sub)
This script works "as is" and mypy is perfectly happy with it.
The two functions are just for demonstration purposes, so that you see how you could leverage the generic Base.
Additional sanity checks
To assure you even more, you can for example add reveal_type(sub.nested_instance) at the bottom and mypy will tell you:
note: Revealed type is "[...].Sub.Nested"
This is what we wanted.
If we declare a new subclass
class AnotherSub(Base["AnotherSub.Nested"]):
    class Nested(Base.Nested):
        pass
and try this
a: AnotherSub.Nested
a = Sub().nested_instance
we are again correctly reprimanded by mypy:
error: Incompatible types in assignment (expression has type "[...].Sub.Nested", variable has type "[...].AnotherSub.Nested") [assignment]
Hope this helps.
PS
To be clear, you can still inherit from Base without specifying the type argument. This has no runtime implications either way. It's just that a strict type checker will complain about it because it is generic, just as it would complain if you annotate something with list without specifying the type argument. (Yes, list is generic.)
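To illustrate, reusing the Base from the example above (Lax is an invented name):

```python
from typing import Generic, TypeVar, cast

N = TypeVar("N", bound="Base.Nested")

class Base(Generic[N]):
    nested_instance: N

    class Nested:
        pass

    def __init__(self) -> None:
        self.nested_instance = cast(N, self.Nested())

class Lax(Base):  # no type argument: fine at runtime, flagged by strict type checkers
    class Nested(Base.Nested):
        pass

print(type(Lax().nested_instance) is Lax.Nested)  # True
```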
Also, whether or not your IDE actually infers this correctly of course depends on how consistent their internal type checker is with the typing rules in Python. PyCharm for example seems to deal with this setup as expected.
It gets typed as Base.Nested because that's what's in scope at initialisation. If you want to declare that nestedInstance is actually something else, then you'll probably need to actually type-hint it.
class Sub(Base):
    class Nested(Base.Nested):
        pass

    nestedInstance: Nested
Context
Say we want to define a custom generic (base) class that inherits from typing.Generic.
For the sake of simplicity, we want it to be parameterized by a single type variable T. So the class definition starts like this:
from typing import Generic, TypeVar

T = TypeVar("T")

class GenericBase(Generic[T]):
    ...
Question
Is there a way to access the type argument T in any specific subclass of GenericBase?
The solution should be universal enough to work in a subclass with additional bases besides GenericBase and be independent of instantiation (i.e. work on the class level).
The desired outcome is a class-method like this:
class GenericBase(Generic[T]):
    @classmethod
    def get_type_arg(cls) -> Type[T]:
        ...
Usage
class Foo:
    pass

class Bar:
    pass

class Specific(Foo, GenericBase[str], Bar):
    pass

print(Specific.get_type_arg())
The output should be <class 'str'>.
Bonus
It would be nice if all relevant type annotations were made such that static type checkers could correctly infer the specific class returned by get_type_arg.
Related questions
Generic[T] base class - how to get type of T from within instance? - This question focuses on direct instances of the custom generic class itself, not on specified subclasses.
How can I access T from a Generic[T] instance early in its lifecycle? - This is a variation on the previous one.
How to access the type arguments of typing.Generic? - This is very close, but does not cover the possibility of other base classes.
TL;DR
Grab the GenericBase from the subclass' __orig_bases__ tuple, pass it to typing.get_args, grab the first element from the tuple it returns, and make sure what you have is a concrete type.
1) Starting with get_args
As pointed out in this post, the typing module for Python 3.8+ provides the get_args function. It is convenient because given a specialization of a generic type, get_args returns its type arguments (as a tuple).
Demonstration:
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class GenericBase(Generic[T]):
    pass

print(get_args(GenericBase[int]))
Output:
(<class 'int'>,)
This means that once we have access to a specialized GenericBase type, we can easily extract its type argument.
2) Continuing with __orig_bases__
As further pointed out in the aforementioned post, there is this handy little class attribute __orig_bases__ that is set by the type metaclass when a new class is created. It is mentioned in PEP 560, but is otherwise hardly documented.
This attribute contains (as the name suggests) the original bases as they were passed to the metaclass constructor in the form of a tuple. This distinguishes it from __bases__, which contains the already resolved bases as returned by types.resolve_bases.
Demonstration:
from typing import Generic, TypeVar

T = TypeVar("T")

class GenericBase(Generic[T]):
    pass

class Specific(GenericBase[int]):
    pass

print(Specific.__bases__)
print(Specific.__orig_bases__)
Output:
(<class '__main__.GenericBase'>,)
(__main__.GenericBase[int],)
We are interested in the original base because that is the specialization of our generic class, meaning it is the one that "knows" about the type argument (int in this example), whereas the resolved base class is just an instance of type.
3) Simplistic solution
If we put these two together, we can quickly construct a simplistic solution like this:
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class GenericBase(Generic[T]):
    @classmethod
    def get_type_arg_simple(cls):
        return get_args(cls.__orig_bases__[0])[0]

class Specific(GenericBase[int]):
    pass

print(Specific.get_type_arg_simple())
Output:
<class 'int'>
But this will break as soon as we introduce another base class on top of our GenericBase.
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class GenericBase(Generic[T]):
    @classmethod
    def get_type_arg_simple(cls):
        return get_args(cls.__orig_bases__[0])[0]

class Mixin:
    pass

class Specific(Mixin, GenericBase[int]):
    pass

print(Specific.get_type_arg_simple())
Output:
Traceback (most recent call last):
...
return get_args(cls.__orig_bases__[0])[0]
IndexError: tuple index out of range
This is because cls.__orig_bases__[0] is now Mixin, which is not a parameterized type, so get_args returns an empty tuple ().
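You can confirm that directly (same class names as above):

```python
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class GenericBase(Generic[T]):
    pass

class Mixin:
    pass

print(get_args(Mixin))             # (): a plain class has no type arguments
print(get_args(GenericBase[int]))  # (<class 'int'>,)
```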
So what we need is a way to unambiguously identify the GenericBase from the __orig_bases__ tuple.
4) Identifying with get_origin
Just like typing.get_args gives us the type arguments for a generic type, typing.get_origin gives us the unspecified version of a generic type.
Demonstration:
from typing import Generic, TypeVar, get_origin

T = TypeVar("T")

class GenericBase(Generic[T]):
    pass

print(get_origin(GenericBase[int]))
print(get_origin(GenericBase[str]) is GenericBase)
Output:
<class '__main__.GenericBase'>
True
5) Putting them together
With these components, we can now write a function get_type_arg that takes a class as an argument and -- if that class is a specialized form of our GenericBase -- returns its type argument:
from typing import Generic, TypeVar, get_origin, get_args

T = TypeVar("T")

class GenericBase(Generic[T]):
    pass

class Specific(GenericBase[int]):
    pass

def get_type_arg(cls):
    for base in cls.__orig_bases__:
        origin = get_origin(base)
        if origin is None or not issubclass(origin, GenericBase):
            continue
        return get_args(base)[0]

print(get_type_arg(Specific))
Output:
<class 'int'>
Now all that is left to do is embed this directly as a class-method of GenericBase, optimize it a little bit and fix the type annotations.
One thing we can do to optimize this is to run the algorithm only once for any given subclass of GenericBase, namely when it is defined, and then save the type in a class attribute. Since the type argument presumably never changes for a specific class, there is no need to compute it every time we want to access it. To accomplish this, we can hook into __init_subclass__ and do our loop there.
We should also define a proper response for when get_type_arg is called on an (unspecified) generic class. An AttributeError seems appropriate.
6) Full working example
from typing import Any, Generic, Optional, Type, TypeVar, get_args, get_origin

# The `GenericBase` must be parameterized with exactly one type variable.
T = TypeVar("T")

class GenericBase(Generic[T]):
    _type_arg: Optional[Type[T]] = None  # set in specified subclasses

    @classmethod
    def __init_subclass__(cls, **kwargs: Any) -> None:
        """
        Initializes a subclass of `GenericBase`.

        Identifies the specified `GenericBase` among all base classes and
        saves the provided type argument in the `_type_arg` class attribute.
        """
        super().__init_subclass__(**kwargs)
        for base in cls.__orig_bases__:  # type: ignore[attr-defined]
            origin = get_origin(base)
            if origin is None or not issubclass(origin, GenericBase):
                continue
            type_arg = get_args(base)[0]
            # Do not set the attribute for GENERIC subclasses!
            if not isinstance(type_arg, TypeVar):
                cls._type_arg = type_arg
            return

    @classmethod
    def get_type_arg(cls) -> Type[T]:
        if cls._type_arg is None:
            raise AttributeError(
                f"{cls.__name__} is generic; type argument unspecified"
            )
        return cls._type_arg

def demo_a() -> None:
    class SpecificA(GenericBase[int]):
        pass
    print(SpecificA.get_type_arg())

def demo_b() -> None:
    class Foo:
        pass

    class Bar:
        pass

    class GenericSubclass(GenericBase[T]):
        pass

    class SpecificB(Foo, GenericSubclass[str], Bar):
        pass

    type_b = SpecificB.get_type_arg()
    print(type_b)
    e = type_b.lower("E")  # static type checkers correctly infer `str` type
    assert e == "e"

if __name__ == '__main__':
    demo_a()
    demo_b()
Output:
<class 'int'>
<class 'str'>
An IDE like PyCharm even provides the correct auto-suggestions for whatever type is returned by get_type_arg, which is really nice. 🎉
7) Caveats
The __orig_bases__ attribute is not well documented. I am not sure it should be considered entirely stable. Although it doesn't appear to be "just an implementation detail" either. I would suggest keeping an eye on that.
mypy seems to agree with this caution and reports a "no attribute" error ([attr-defined]) at the point where __orig_bases__ is accessed; thus a type: ignore comment was placed on that line.
The entire setup is for one single type parameter for our generic class. It can be adapted relatively easily to multiple parameters, though annotations for type checkers might become more tricky.
This method does not work when called directly from a specialized GenericBase class, i.e. GenericBase[str].get_type_arg(). But for that one just needs to call typing.get_args on it as shown in the very beginning.
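For completeness, that direct usage looks like this:

```python
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class GenericBase(Generic[T]):
    pass

alias = GenericBase[str]   # a specialized alias, not a subclass
print(get_args(alias)[0])  # <class 'str'>
```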
I have an inherited class member that is an optional callable and I want to type hint it.
import typing

class BytesDecoder(typing.Protocol):
    def __call__(self, data: bytes) -> None: ...

class BaseClass:
    _decodeBytes: typing.Optional[BytesDecoder] = None

    @typing.final
    def decode(self, data: bytes):
        if self._decodeBytes is not None:
            self._decodeBytes(data)

class DerivedClass(BaseClass):
    def _decodeBytes(self, data: bytes) -> None:
        ...
Mypy complains about the derived _decodeBytes method:
Signature of "_decodeBytes" incompatible with supertype "BaseClass"
I've also tried defining BytesDecoder like BytesDecoder=typing.Callable[[bytes],None] but that does the same thing.
I stumbled upon one solution, though it's not ideal. In the base class I can create a dummy _decodeBytes() function and then forcibly smash it back to None.
class BaseClass:
    def _decodeBytes(self, data: bytes) -> None: ...
    _decodeBytes = None  # type: ignore
It's weird and non-intuitive, but it does seem to make type checking work as expected for derived classes.
A better solution may be to make _decodeBytes required, but provide a dummy implementation that behaves the same as not calling it at all.
import typing

class BaseClass:
    def _decodeBytes(self, data: bytes) -> None:
        pass

    @typing.final
    def decode(self, data: bytes):
        self._decodeBytes(data)

class DerivedClass(BaseClass):
    def _decodeBytes(self, data: bytes) -> None:
        ...
The original problem was that mypy couldn't verify something like the following was safe:
decoders: list[BaseClass] = [BaseClass(), DerivedClass()]
for x in decoders:
    x._decodeBytes = None
The assignment is legal according to the static type of decoders and for the runtime type of decoders[0], but illegal for the runtime type of decoders[1]. Getting rid of the Optional status of _decodeBytes takes away the possibility of assigning None to any object's _decodeBytes attribute.
I am writing a CustomEnum class in which I want to add some helper methods, that would then be available by the classes subclassing my CustomEnum. One of the methods is to return a random enum value, and this is where I am stuck. The function works as expected, but on the type-hinting side, I cannot figure out a way of saying "the return type is the same type of cls".
I am fairly sure there's some TypeVar or similar magic involved, but since I never had to use them I never took the time to figure them out.
import random
from enum import Enum

class CustomEnum(Enum):
    @classmethod
    def random(cls) -> ???:
        return random.choice(list(cls))

class SubclassingEnum(CustomEnum):
    A = "a"
    B = "b"

random_subclassing_enum: SubclassingEnum
random_subclassing_enum = SubclassingEnum.random()  # Incompatible types in assignment (expression has type "CustomEnum", variable has type "SubclassingEnum")
Can somebody help me or give me a hint on how to proceed?
Thanks!
The syntax here is kind of horrible, but I don't think there's a cleaner way to do this. The following passes MyPy:
from typing import TypeVar
from enum import Enum
import random

T = TypeVar("T", bound="CustomEnum")

class CustomEnum(Enum):
    @classmethod
    def random(cls: type[T]) -> T:
        return random.choice(list(cls))
(In Python versions <= 3.8, you have to use typing.Type rather than the builtin type if you want to parameterise it.)
What's going on here?
T is defined at the top as being a type variable that is "bound" to the CustomEnum class. This means that a variable annotated with T can only be an instance of CustomEnum or an instance of a class inheriting from CustomEnum.
In the classmethod above, we're actually using this type-variable to define the type of the cls parameter with respect to the return type. Usually we do the opposite — we usually define a function's return types with respect to the types of that function's input parameters. So it's understandable if this feels a little mind-bending!
We're saying: this method leads to instances of a class — we don't know what the class will be, but we know it will either be CustomEnum or a class inheriting from CustomEnum. We also know that whatever class is returned, we can guarantee that the type of the cls parameter in the function will be "one level up" in the type hierarchy from the type of the return value.
In a lot of situations, we might know that type[cls] will always be a fixed value. In those situations, it would be possible to hardcode that into the type annotations. However, it's best not to do so, and instead to use this method, which clearly shows the relationship between the type of the input and the return type (even if it uses horrible syntax to do so!).
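Putting the pieces together with the question's SubclassingEnum (a sketch; type[T] requires Python 3.9+):

```python
import random
from enum import Enum
from typing import TypeVar

T = TypeVar("T", bound="CustomEnum")

class CustomEnum(Enum):
    @classmethod
    def random(cls: type[T]) -> T:
        return random.choice(list(cls))

class SubclassingEnum(CustomEnum):
    A = "a"
    B = "b"

# mypy now infers SubclassingEnum as the return type, so this typechecks
random_subclassing_enum: SubclassingEnum = SubclassingEnum.random()
print(random_subclassing_enum in SubclassingEnum)  # True
```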
Further reading: the MyPy documentation on the type of class objects.
Further explanation and examples
For the vast majority of classes (not with Enums, they use metaclasses, but let's leave that aside for the moment), the following will hold true:
Example 1
class A:
    pass

instance_of_a = A()

type(instance_of_a) == A  # True
type(A) == type  # True
Example 2
class B:
    pass

instance_of_b = B()

type(instance_of_b) == B  # True
type(B) == type  # True
For the cls parameter of your CustomEnum.random() method, we're annotating the equivalent of A rather than instance_of_a in my Example 1 above.
The type of instance_of_a is A.
But the type of A is not A — A is a class, not an instance of a class.
Classes are not instances of classes; they are either instances of type or instances of custom metaclasses that inherit from type.
No metaclasses are being used here; ergo, the type of A is type.
The rule is as follows:
The type of all python class instances will be the class they're an instance of.
The type of all python classes will be either type or (if you're being too clever for your own good) a custom metaclass that inherits from type.
With your CustomEnum class, we could annotate the cls parameter with the metaclass that the enum module uses (enum.EnumMeta, aliased as enum.EnumType since Python 3.11, if you want to know). But, as I say — best not to. The solution I've suggested illustrates the relationship between the input type and the return type more clearly.
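A quick runtime check of that metaclass claim (Color is an invented example enum):

```python
import enum

class Color(enum.Enum):
    RED = 1

# Enum classes are instances of enum's metaclass, which itself inherits from type
print(type(Color) is enum.EnumMeta)  # True
print(isinstance(Color, type))       # True
```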
Starting in Python 3.11, the correct return annotation for this code is Self:
import random
from enum import Enum
from typing import Self

class CustomEnum(Enum):
    @classmethod
    def random(cls) -> Self:
        return random.choice(list(cls))
Quoting from the PEP:
This PEP introduces a simple and intuitive way to annotate methods that return an instance of their class. This behaves the same as the TypeVar-based approach specified in PEP 484 but is more concise and easier to follow.
The current workaround for this is unintuitive and error-prone:
Self = TypeVar("Self", bound="Shape")

class Shape:
    @classmethod
    def from_config(cls: type[Self], config: dict[str, float]) -> Self:
        return cls(config["scale"])
We propose using Self directly:
from typing import Self

class Shape:
    @classmethod
    def from_config(cls, config: dict[str, float]) -> Self:
        return cls(config["scale"])
This avoids the complicated cls: type[Self] annotation and the TypeVar declaration with a bound. Once again, the latter code behaves equivalently to the former code.