Consider the following code:
class parent_print():
    def print_word(self):
        self.print_hello()

class child_print(parent_print):
    def print_hello(self):
        print('Hello')

basic_print = child_print()
basic_print.print_word()
Here I am assuming that print_hello() is a virtual function in the parent class, and that all children of parent_print will implement a method named print_hello. That way, all children can call the single function print_word, and the appropriate binding to print_hello is done based on the child's implementation.
But in languages like C++ we have to explicitly mark a function as virtual in the parent class with the virtual keyword (and typically also give the class a virtual destructor, written with the ~ symbol).
How is this possible in Python, when nothing in the parent class marks print_hello() as a virtual function?
I am assuming that the concept Python uses here is that of virtual functions; if I am wrong, please correct me and explain the concept.
Rather than thinking of this as being a matter of "virtual functions", it might be more useful to think of this as an example of "duck typing". In this expression:
self.print_hello()
we simply access the print_hello attribute of self (whatever it might be), and then call it. If it doesn't have such an attribute, an AttributeError is raised at runtime. If the attribute isn't callable, a TypeError is raised. That's all there is to it. No assumptions are made about the type of self -- we simply ask it to "quack like a duck" and see if it can.
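To make the metaphor concrete, here is a minimal, self-contained sketch of duck typing; Duck, Person, and make_it_quack are hypothetical names, not from the question:
class Duck:
    def quack(self):
        print("Quack!")

class Person:
    def quack(self):
        print("I can quack like a duck!")

def make_it_quack(thing):
    # No type check, no declared interface: we just look up the
    # `quack` attribute on `thing` and call it.
    thing.quack()

make_it_quack(Duck())    # Quack!
make_it_quack(Person())  # I can quack like a duck!
make_it_quack(42)        # AttributeError: 'int' object has no attribute 'quack'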
The danger of duck typing is that it's very easy to accidentally ask something to quack that does not in fact know how to quack -- if you instantiate a parent_print and call print_word on it, it will fail:
abstract_print = parent_print()
abstract_print.print_word()
# raises AttributeError: 'parent_print' object has no attribute 'print_hello'
Python does have support for static type declarations and the concept of abstract classes, which can help you avoid mistakes. For example, if we run a static type checker (mypy) on your code as-is, we'll get an error:
test.py:4: error: "parent_print" has no attribute "print_hello"
which is exactly correct -- from a static typing perspective, it's not valid to call print_hello since we haven't established that all parent_print instances have such an attribute.
To fix this, we can declare print_hello as an abstract method (I'll also clean up the names to match standard Python conventions, in the interest of building good habits):
from abc import ABC, abstractmethod

class ParentPrint(ABC):
    @abstractmethod
    def print_hello(self) -> None: ...

    def print_word(self) -> None:
        self.print_hello()

class ChildPrint(ParentPrint):
    def print_hello(self) -> None:
        print('Hello')

basic_print = ChildPrint()
basic_print.print_word()
Now the code typechecks with no issues.
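For reference, running the checker again looks something like this (assuming the snippet is still saved as test.py, as in the earlier error message):
$ mypy test.py
Success: no issues found in 1 source file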
The @abstractmethod decorator also indicates that the class is abstract ("pure virtual") and can't be instantiated. If we attempt to create a ParentPrint, or any subclass of it that doesn't provide an implementation of the abstract method, we get an error, both statically from mypy and at runtime in the form of a TypeError that is raised as soon as you try to instantiate the object (before you even try to call the abstract method):
abstract_print = ParentPrint()
# raises error: Cannot instantiate abstract class "ParentPrint" with abstract attribute "print_hello"
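The same check catches a subclass that forgets to override the abstract method; BadPrint is a hypothetical example, not from the question:
class BadPrint(ParentPrint):  # forgot to implement print_hello
    pass

bad = BadPrint()
# Rejected by mypy, and at runtime raises a TypeError such as:
# Can't instantiate abstract class BadPrint with abstract method print_hello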
Related
I get that a metaclass can be substituted for type and define how a newly created class behaves.
ex:
class NoMixedCase(type):
    def __new__(cls, clsname, bases, clsdict):
        for name in clsdict:
            if name.lower() != name:
                raise TypeError("Bad name. Don't mix case!")
        return super().__new__(cls, clsname, bases, clsdict)

class Root(metaclass=NoMixedCase):
    pass

class B(Root):
    def Foo(self):  # TypeError: Bad name. Don't mix case!
        pass
However, is there a way of setting NoMixedCase globally, so that any time a new class is created its behavior is defined by NoMixedCase by default, without having to inherit from Root?
So if you did...
class B:
    def Foo(self):
        pass
...it would still check case on method names.
As for your question: no, it is not ordinarily possible - and possibly not even one of those extraordinary things that can be made to work - since a lot of CPython's internals are tied to the type class and hardcoded to it.
What one could try, without crashing the interpreter right away, is to write a wrapper for type.__new__ and use ctypes to place it directly in the type.__new__ slot (an ordinary assignment won't do it). You'd probably still crash things.
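You can see the "ordinary assignment" part fail immediately; guarded_new here is a hypothetical wrapper, shown only to make the point:
def guarded_new(mcls, clsname, bases, clsdict):
    # hypothetical: run naming checks here, then delegate to the
    # original type.__new__ saved beforehand
    ...

# Built-in types are immutable from Python code, so the assignment
# itself raises TypeError (exact wording varies by version, e.g.
# "can't set attributes of built-in/extension type 'type'"):
type.__new__ = guarded_new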
So, in real life, if you decide not to go with a linter program with a plug-in and commit hooks, as I suggested in the comment above, the way to go is to have a Base class that uses your metaclass, and get everyone in your project to inherit from that Base.
I have an abstract class in python and want to call non-abstract methods in it. Is it possible to do it?
from abc import ABC, abstractmethod

class MyAbstract(ABC):
    # Can I call get_current() from get_state()?
    def get_state():
        get_current()  # gives me an error?

    def get_current():
        pass

    @abstractmethod
    def get_time():
        pass
I have another Python file, Temp.py, that implements this interface.
In Temp.py, when I call get_state using MyAbstract.get_state(), I get an error stating that get_current() is not defined.
I am not sure why.
Any help is appreciated.
In general, all methods have a namespace which is the class or object they're attached to. If you have an instance of a class floating around (e.g. self, most of the time), you can call methods on that instance that automatically pass the instance itself as the first parameter - the instance acts as the namespace for an instance method.
If you're using a class method or a static method, then the namespace is almost always going to be the class they're attached to. If you don't specify a namespace, then python assumes that whatever function you're trying to call is in the global namespace, and if it isn't, then you get a NameError.
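A tiny demonstration of that lookup rule (C and helper are hypothetical names):
class C:
    def helper():
        print("helper")

    def method(self):
        # No namespace given, so Python looks for `helper` in the
        # module's global namespace, not in class C:
        helper()

C().method()  # NameError: name 'helper' is not defined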
In this case, the following should work for you:
from abc import ABC, abstractmethod

class MyAbstract(ABC):
    def get_current():
        print("current")

    def get_state():
        MyAbstract.get_current()

    @abstractmethod
    def get_time():
        pass
You can just imagine that you have a little invisible @staticmethod decorator hanging above get_current() that marks it as such. The problem with this is that now you don't get to change the behavior of get_current() in subclasses to effect change in get_state(). The solution to this is to make get_state() a class method:
@classmethod
def get_state(cls):
    cls.get_current()
Calling a static method uses identical syntax to calling a class method (in both cases you would do MyAbstract.get_state()), but the latter passes the class you're calling it on as the first argument. You can then use this class as a namespace to find the method get_current() for whatever subclass has most recently defined it, which is how you implement polymorphism with methods that would otherwise be static.
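Here is a minimal sketch of that, with get_current() marked as an explicit @staticmethod and a hypothetical subclass Temp overriding it:
from abc import ABC, abstractmethod

class MyAbstract(ABC):
    @staticmethod
    def get_current():
        print("current")

    @classmethod
    def get_state(cls):
        # looked up on cls, so a subclass's override is found
        cls.get_current()

    @abstractmethod
    def get_time(self):
        pass

class Temp(MyAbstract):
    @staticmethod
    def get_current():
        print("Temp's current")

    def get_time(self):
        return 0

MyAbstract.get_state()  # prints: current
Temp.get_state()        # prints: Temp's current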
While writing unit tests that check whether a concrete subclass of an abstract base class really does raise a TypeError upon instantiation if one of the required methods is not implemented, I stumbled upon something that made me wonder when the check for the required methods is actually performed.
Until now I would have said: upon instantiation of the object, since that is when the exception is actually raised when the program runs.
But look at this snippet:
import abc

class MyABC(abc.ABC):
    @abc.abstractmethod
    def foo(self): pass

class MyConcreteSubclass(MyABC):
    pass
As expected, trying to instantiate MyConcreteSubclass raises a TypeError:
>>> MyConcreteSubclass()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-39-fbfc0708afa6> in <module>()
----> 1 t = MyConcreteSubclass()

TypeError: Can't instantiate abstract class MyConcreteSubclass with abstract methods foo
But what happened when I first declared a valid subclass and then deleted the method surprised me:
>>> class MyConcreteSubclass(MyABC):
...     def foo(self):
...         print("bar")
...
>>> MyConcreteSubclass.foo
<function __main__.MyConcreteSubclass.foo(self)>
>>> t = MyConcreteSubclass()
>>> t.foo()
bar
>>> del MyConcreteSubclass.foo
>>> MyConcreteSubclass.foo
<function __main__.MyABC.foo(self)>
>>> t = MyConcreteSubclass()
>>> print(t.foo())
None
This is certainly not what I expected. When inspecting MyConcreteSubclass.foo after the deletion, we see that, through the method resolution order, the abstract method of the base class is retrieved - the same behaviour as if we had never implemented foo in the concrete subclass in the first place.
But upon instantiation, no TypeError is raised.
So I wonder: are the checks for whether the required methods are implemented already performed when the body of the concrete subclass is evaluated by the interpreter?
If so, why is the TypeError only raised when someone tries to instantiate the subclass?
The tests shown above were performed with Python 3.6.5.
It happens at class creation time. In Python 3.7, it's in C, in compute_abstract_methods in Modules/_abc.c, which is called as part of ABCMeta.__new__.
Incidentally, the docs do mention that
Dynamically adding abstract methods to a class, or attempting to modify the abstraction status of a method or class once it is created, are not supported.
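You can observe this directly through the __abstractmethods__ attribute that ABCMeta.__new__ computes and that object.__new__ consults at instantiation time (continuing the session from the question):
>>> MyABC.__abstractmethods__
frozenset({'foo'})
>>> MyConcreteSubclass.__abstractmethods__   # computed at class creation
frozenset()
>>> del MyConcreteSubclass.foo
>>> MyConcreteSubclass.__abstractmethods__   # not recomputed on deletion
frozenset()
>>> MyConcreteSubclass()                     # so instantiation still succeeds
<__main__.MyConcreteSubclass object at 0x...>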
user2357112's answer covers the main question here, but there's a secondary question:
why are the TypeErrors only raised when someone tries to instantiate the subclass?
If a TypeError were raised earlier, at class creation time, it would be impossible to create hierarchies of ABCs:
class MyABC(abc.ABC):
    @abc.abstractmethod
    def foo(self): pass

class MySecondABC(MyABC):
    @abc.abstractmethod
    def bar(self): pass
You don't want that to raise a TypeError because MySecondABC doesn't define foo, unless someone tries to instantiate MySecondABC.
What if it were legal only for classes that added new abstract methods? Then it would be possible to create ABC hierarchies, but it would be impossible to create intermediate helper classes:
class MyABCHelper(MySecondABC):
    def foo(self):
        return self.bar() * 2
(For a more realistic example, see the classes in collections.abc that allow you to implement the full MutableSequence interface by defining only the five abstract methods - __getitem__, __setitem__, __delitem__, __len__, and insert - and inheriting the rest.)
You wouldn't want a rule that made such definitions illegal.
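For instance, here is a minimal sketch of such a helper class in action (MyList is a hypothetical wrapper around a plain list):
from collections.abc import MutableSequence

class MyList(MutableSequence):
    def __init__(self, *items):
        self._items = list(items)

    # The five abstract methods:
    def __getitem__(self, index):
        return self._items[index]

    def __setitem__(self, index, value):
        self._items[index] = value

    def __delitem__(self, index):
        del self._items[index]

    def __len__(self):
        return len(self._items)

    def insert(self, index, value):
        self._items.insert(index, value)

m = MyList(1, 2, 3)
m.append(4)                # mixin method, built on top of insert()
print(m.index(3), 4 in m)  # index() and __contains__ come for free -> 2 True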
I wonder if there is any convention regarding the constructor in Python. If I have a constructor that does nothing, I can simply not write it and everything will work just fine.
However, when I'm using PyCharm, it recommends (with a warning) that I write an empty constructor:
def __init__(self):
    pass
I have not found anything regarding this in PEP 8. I am wondering whether PyCharm just came up with a new convention, or whether there is a reason behind it?
Thanks.
I think this is opinion-based, but I will share the rules I try to follow:
1. Declare all instance variables in the constructor, even if they are not set yet:
def __init__(self, name):
    self.name = name
    self.lname = None
2. Do not put any logic in the constructor; you will benefit from this when you try to write unit tests.
3. And of course, if the constructor isn't necessary, don't add it.
I agree with the sentiment of not writing unnecessary code. The warning is probably there to help speed up development: most classes have an __init__, so the warning reminds you to write it ahead of time.
It is possible to customize or suppress this warning in Settings -> Editor -> Inspections -> Python -> "Class has no __init__ method"
Don't add a constructor if it doesn't do anything. Some editors like to warn you about silly things. Eclipse, for example, warns you when variables are initialized but not used later on, or when classes don't have a serializable id - but that's Java. If your program will run without the constructor, remove it.
I keep seeing this when I make a tiny, ad hoc class. I use this warning as a cue:
I forgot to derive the class from something sensible.
Exception Object
For example, an exception class that I would use to raise CustomException.
No:
class CustomException:
    pass
Yes:
class CustomException(Exception):
    pass
Because Exception defines an __init__ method, this change makes the warning go away.
New Style Object
Another example, a tiny class that doesn't need any member variables. The new style object (explicit in Python 2, implicit in Python 3) is derived from class object.
class SettClass(object):
    pass
Similarly, the warning goes away because object has an __init__ method.
Old Style Object
For an old-style class (one that does not derive from object), there is nothing sensible to inherit __init__ from, so just disable the warning:
# noinspection PyClassHasNoInit
class StubbornClass:
    pass
I have a specific problem closely related to PyCharm (Community 3.1.1). The following simple example illustrates it. I will use a screenshot of PyCharm rather than typing out the code, for reasons that will become clear shortly.
As you can see, the call to self.say_hello() is highlighted in yellow by PyCharm, and presumably this is because say_hello() is not implemented in the Base class. The fact that say_hello() is not implemented in the base class is intentional on my part, because I want a kind of "abstract" effect, so that an instance of Base cannot call say_hello() (and therefore shouldn't call hello()), but that an instance of Child can call hello() (implemented in the Base class). How do I get this "abstract" effect without PyCharm complaining?
As I learned from here, I could use the abc module. But that, to me, would be rather cumbersome and somewhat not pythonic. What are your recommendations?
I would implement say_hello() as a stub:
class Base(object):
    # ...as above...
    def say_hello(self):
        raise NotImplementedError
Alternatively, put only pass in the body of say_hello().
This also signals to the user of your Base class that say_hello() should be implemented, rather than leaving her with a bare AttributeError when she calls obj.hello().
Whether to raise an exception or to pass depends on whether doing nothing is a sensible default behaviour. If you require the user to supply her own method, raise the exception.
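To round this off, here is a minimal sketch assuming (as the question describes) that hello() on Base simply delegates to self.say_hello():
class Base(object):
    def hello(self):
        self.say_hello()  # assumed from the question's description

    def say_hello(self):
        raise NotImplementedError

class Child(Base):
    def say_hello(self):
        print("Hello!")

Child().hello()  # prints: Hello!
Base().hello()   # raises NotImplementedError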