Annotating "Instance of a subclass derived from specific base class" - python

I have a Python method with the following signature:
def basic_sizer(self, ctrl):
where ctrl can be any wxPython control derived from wx.Control. Is there a specific Python stock annotation to indicate this other than either
def basic_sizer(self, ctrl: wx.Control):
or
def basic_sizer(self, ctrl: Union[wx.SpinCtrl, wx.BitmapButton, <other possible controls>]):
I have tried
def basic_sizer(self, ctrl: Type[wx.Control]):
as suggested here. This approach is also presented in the official documentation, but PyCharm does not accept it, flagging a type mismatch. I do not want to use some PyCharm-specific hack, even if one is available. Rather, I am interested in whether the Python typing module provides a generic approach for this situation.

Abstraction
You have some base class SomeBase. You want to write and annotate a function foo that takes an argument arg. That argument arg can be an instance of SomeBase or of any subclass of SomeBase. This is how you write that:
def foo(arg: SomeBase):
    ...
Say now there are classes DerivedA and DerivedB that both inherit from SomeBase and you realize that arg should actually only ever be an instance of any of those subclasses and not be of the type SomeBase (directly). Here is how you write that:
def foo(arg: DerivedA | DerivedB):
    ...
Or in Python <3.10:
from typing import Union

def foo(arg: Union[DerivedA, DerivedB]):
    ...
To my knowledge, there is currently no way to annotate that arg should be an instance of any subclass of SomeBase but not of the class SomeBase itself.
Concrete
I am not familiar with wxPython, but you stated that you want the argument ctrl to
be any wxPython control derived from wx.Control.
According to the documentation, wx.Control is in fact a class. Your statement is still ambiguous as to whether the ctrl argument should be assumed to be any instance of wx.Control. But if so, this is how you write it:
def basic_sizer(self, ctrl: wx.Control):
    ...
If you want to restrict it to specific subclasses, you use the Union.
But this is wrong:
def basic_sizer(self, ctrl: Type[wx.Control]):
    ...
That would state that ctrl must be a class (as opposed to an instance of a class), namely wx.Control or any subclass of it. Unless of course that is in fact what you want... Again, your statement is ambiguous.
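To make the difference concrete, here is a minimal sketch using the SomeBase/DerivedA names from the abstraction above rather than wxPython:
from typing import Type

class SomeBase: ...
class DerivedA(SomeBase): ...

def takes_instance(arg: SomeBase) -> None: ...
def takes_class(arg: Type[SomeBase]) -> None: ...

takes_instance(DerivedA())  # OK: an instance of a subclass of SomeBase
takes_class(DerivedA)       # OK: the class object itself
takes_class(DerivedA())     # a type checker flags this: expected a class, got an instance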
Mismatched types
Possible reasons for PyCharm complaining about "mismatched types" include:
You are calling the method basic_sizer providing an argument for ctrl that is not actually an instance of wx.Control.
wxPython messed up big time in their typing.
PyCharm has a bug in its static type checker.
If you provide the code that produces the PyCharm complaint and the specific message by PyCharm, we can sort this out.
PS:
If the PyCharm complaint arises in some other place because you assume that ctrl has certain attributes that it may not have, that would probably indicate that you actually need it to be an instance of specific subclasses. There are multiple ways to handle this, depending on the situation.
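For example, one of those ways is to keep the broad wx.Control annotation and narrow the type with an isinstance check before using subclass-specific functionality. A minimal sketch (the SetRange call is just an illustrative wx.SpinCtrl method; I am not familiar enough with wxPython to know what you actually need):
import wx

def configure(ctrl: wx.Control) -> None:
    if isinstance(ctrl, wx.SpinCtrl):
        # After the isinstance check, the type checker treats ctrl as a
        # wx.SpinCtrl, so SpinCtrl-specific calls no longer get flagged.
        ctrl.SetRange(0, 100)
    ctrl.Enable()  # methods defined on wx.Control / wx.Window remain available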

Related

Is it possible to set a Metaclass globally so it applies to all classes created by default?

I get that a metaclass can be substituted for type and define how a newly created class behaves.
ex:
class NoMixedCase(type):
    def __new__(cls, clsname, base, clsdict):
        for name in clsdict:
            if name.lower() != name:
                raise TypeError("Bad name.Don't mix case!")
        return super().__new__(cls, clsname, base, clsdict)

class Root(metaclass=NoMixedCase):
    pass

class B(Root):
    def Foo(self):  # type error
        pass
However, is there a way of setting NoMixedCase globally, so that any time a new class is created, its behavior is defined by NoMixedCase by default, without having to inherit from Root?
So if you did...
class B:
    def Foo(self):
        pass
...it would still check case on method names.
As for your question: no, it is not ordinarily possible - and probably no extraordinary trick will work for this either - because a lot of CPython's internals are tied to the type class and hardcoded to it.
What you could try, without crashing the interpreter right away, would be to write a wrapper for type.__new__ and use ctypes to replace it directly in the type.__new__ slot (an ordinary assignment won't do it). You would probably still crash things.
So, in real life, if you decide not to go via a linter program with a plug-in and commit hooks as I suggested in the comment above, the way to go is to have a Base class that uses your metaclass, and get everyone in your project to inherit from that Base.
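For illustration, a minimal sketch of that approach, reusing the NoMixedCase metaclass from the question; because the metaclass is inherited, every class deriving from Base (directly or indirectly) gets checked automatically:
class Base(metaclass=NoMixedCase):
    pass

class Good(Base):
    def do_work(self):  # all-lowercase name, accepted
        pass

class Bad(Base):  # raises TypeError at class creation time
    def DoWork(self):
        pass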

Is it possible to type hint exclusively a class object but exclude subclass objects?

I would like to exclusively type hint an argument to a specific class but exclude any subclasses.
class A:
    pass

class B(A):
    pass

def foo(obj: A):
    pass

foo(B())  # I'd like the type checker to warn me here that it expects A, not B
Is this possible? and if so, how?
(bonus points if you can tell me what I would call this. Googling wasn't helpful, but I'm afraid I'm using the wrong terminology to describe this)
No, this is not possible to do.
Fundamentally, the Python typing ecosystem assumes that you are following the Liskov substitution principle -- that is, it assumes it is always safe to substitute a subclass in places designed to handle the parent.
The fact that it permits you to pass in instances of B in addition to instances of A in your code snippet is just one example of this principle in play.
So if your subclass B is designed not to follow the Liskov substitution principle, then it probably was never really a "kind of" A to begin with and shouldn't be subclassing it.
You could fix this either by adjusting your code so B does properly follow Liskov, or by making B stop subclassing A and use composition rather than inheritance as the mechanism for code reuse. That is, make B keep an instance of A as a field and use it as appropriate, as in the sketch below.
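A minimal sketch of the composition approach, with a hypothetical do_something method standing in for whatever behavior B wants to reuse from A:
class A:
    def do_something(self) -> str:
        return "done by A"

class B:
    # B is no longer a subclass of A; it holds an A instance instead.
    def __init__(self) -> None:
        self._a = A()

    def do_something(self) -> str:
        # Delegate to the wrapped A where the behavior is the same.
        return self._a.do_something()

def foo(obj: A) -> None:
    pass

foo(A())  # OK
foo(B())  # now flagged by the type checker: B is not an A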
And if you run into a rare case where it's legitimately not possible to ever subclass A without breaking Liskov, something you could do to prevent people from accidentally subclassing it would be to explicitly mark A as being final:
from typing import final
# If you want to support Python 3.7 or earlier, pip-install 'typing_extensions'
# and do 'from typing_extensions import final' instead

@final
class A: pass

class B(A): pass
This should make your type checker report a "B cannot subclass A" error on the definition of B. And if you fix that error by changing/deleting B, the call to foo(B()) should also naturally fail to type-check.

Automatic type inferring conventions in PyCharm

My python 2.7 project has a class called WorldTables. I pass instances of it to many methods, always under the variable name world_tables.
Unless I specify the PyCharm type hint for every method, I don't get code completion for this class (which I want to have).
However, PyCharm does seem to have a solution for this. If the class name was one word (for example, Player) it would automatically assume that every variable called player is an instance of this class.
After playing around with it a bit, I noticed that this would happen for WorldTables if I passed it under the name of worldtables (instead of world_tables which I currently use). However, that is not how the naming conventions work AFAIK.
Is there a solution to this that doesn't involve adding (hundreds of) type hints or breaking my naming conventions? Something like:
A) Telling PyCharm to automatically assume that class_name is ClassName, rather than only that classname is of ClassName
B) Giving PyCharm a one-off type hint ("every time you see a variable called class_name, assume it is of class ClassName")
C) Any other creative idea that would address this issue.
Thanks!
If you specify the type in your docstrings, PyCharm picks up on it.
For example:
def some_func(world_tables):
    """The description of the function.

    :param `WorldTables` world_tables: An instance of the `WorldTables` class.
    """
    pass
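If you prefer not to rely on docstrings, PEP 484 type comments are another option that works on Python 2.7 and that PyCharm also understands; a sketch, assuming WorldTables is importable in the module where the function is defined:
def some_func(world_tables):
    # type: (WorldTables) -> None
    # PyCharm now treats world_tables as a WorldTables instance
    # and offers code completion for its attributes.
    pass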

Type hint for 'other' in magic methods?

class Interval(object):
    def __sub__(self, other: Interval):
        pass
As it is, I get NameError: name 'Interval' is not defined. Can someone tell me which type would be correct here?
The class doesn't exist until after Python finishes executing all the code inside the class block, including your method definitions.
Just use a string literal instead, as suggested in PEP 484:
class Interval(object):
    def __sub__(self, other: 'Interval'):
        pass
Note that it's perfectly fine to refer to the class inside a method:
class Interval(object):
    def __sub__(self, other: 'Interval'):
        Interval.do_something()
This is only a problem if you need to use it in a method signature or directly inside the class block.
As of Python 3.7, the above workaround is no longer necessary (but still perfectly valid, so you can and should continue using it if you need to support older versions of Python). Instead, put the following at the top of each module to enable forward references within that module:
from __future__ import annotations
This will cause all annotations to be lazily evaluated on lookup, so that you can refer to classes within their definitions, or even before their definitions. In short, it makes every "reasonable" annotation mean what you (probably) wanted it to mean.
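For example, with the future import in place, the Interval annotation from the question can be written without quotes (a minimal sketch):
from __future__ import annotations

class Interval(object):
    def __sub__(self, other: Interval) -> Interval:  # no quotes, no NameError
        pass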
Finally, if you need to inspect type annotations, be sure you are using typing.get_type_hints() instead of the .__annotations__ special attribute. This function correctly handles both the old and the new way of doing things, as well as a few other nuances of the type hinting rules such as automatic Optional[T] when the default value is None. It is the comfortable and future-proof way to examine the type hints attached to an arbitrary Python object.
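A short sketch of the difference, assuming the string-annotated Interval class is defined at module level:
from typing import get_type_hints

class Interval(object):
    def __sub__(self, other: 'Interval') -> 'Interval':
        pass

print(Interval.__sub__.__annotations__)
# {'other': 'Interval', 'return': 'Interval'} -- still the raw strings

print(get_type_hints(Interval.__sub__))
# {'other': <class '__main__.Interval'>, 'return': <class '__main__.Interval'>}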

Python constructors convention

I wonder if there is any convention regarding constructors in Python. If I have a constructor that does nothing, I can simply not write it and everything will work just fine.
However, when I'm using PyCharm, it recommends (as a warning) that I write an empty constructor:
def __init__(self):
    pass
I have not found anything regarding this in PEP 8. I am wondering whether PyCharm just came up with a new convention or whether there is a reason behind it?
Thanks.
I think it's opinion based, but I will share rules that I try to follow:
1. Declare all instance variables in the constructor, even if they are not set yet:
def __init__(self, name):
    self.name = name
    self.lname = None
2. Do not do any logic in the constructor. You will benefit from this when you try to write unit tests (a small sketch follows this answer).
3. And of course, if it's not necessary, don't add it.
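To illustrate the second point, here is a minimal sketch with a hypothetical User class: __init__ only assigns attributes, and the actual work is moved to a factory method that is easy to unit test.
class User:
    def __init__(self, name, lname=None):
        # Plain attribute assignment only; no parsing, I/O or validation here.
        self.name = name
        self.lname = lname

    @classmethod
    def from_full_name(cls, full_name):
        # The "logic" lives in a small factory method that is trivial to test.
        name, _, lname = full_name.partition(" ")
        return cls(name, lname or None)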
I agree with the sentiment of not writing unnecessary code. The warning is probably there to help speed up development, since most classes probably have an __init__, and this reminds you to write it ahead of time.
It is possible to customize or suppress this warning in Settings -> Editor -> Inspections -> Python -> "Class has no __init__ method"
Don't add a constructor if it doesn't do anything. Some editors like to warn you about silly things. Eclipse, for example, warns you when variables are initialized but not used later on, or when classes don't have a serializable ID. But that's Java. If your program will run without it, then remove the constructor.
I keep seeing this when I make a tiny, ad hoc class. I use this warning as a cue:
I forgot to derive the class from something sensible.
Exception Object
For example, an exception class that I would use to raise CustomException.
No:
class CustomException:
    pass
Yes:
class CustomException(Exception):
    pass
Because Exception defines an __init__ method, this change makes the warning go away.
New Style Object
Another example, a tiny class that doesn't need any member variables. The new style object (explicit in Python 2, implicit in Python 3) is derived from class object.
class SettClass(object):
    pass
Similarly, the warning goes away because object has an __init__ method.
Old Style Object
Or just disable the warning:
# noinspection PyClassHasNoInit
class StubbornClass:
    pass
