Let's say I have the following two classes
class A:
    def own_method(self):
        pass

    def descendant_method(self):
        pass

class B(A):
    pass
and I want descendant_method to be callable from instances of B, but not of A, and own_method to be callable from everywhere.
I can think of several solutions, all unsatisfactory:
Check some field and manually raise NotImplementedError:
class A:
    def __init__(self):
        self.some_field = None

    def own_method(self):
        pass

    def descendant_method(self):
        if self.some_field is None:
            raise NotImplementedError

class B(A):
    def __init__(self):
        super(B, self).__init__()
        self.some_field = 'B'
But this modifies the method's runtime behaviour, which I don't want to do.
Use a mixin:
class A:
    def own_method(self):
        pass

class AA:
    def descendant_method(self):
        pass

class B(AA, A):
    pass
This is nice as long as descendant_method doesn't use much from A; otherwise we'd have to make AA inherit from A, which defeats the whole point.
Make the method private in A and redefine it in a metaclass:
class A:
    def own_method(self):
        pass

    def __descendant_method(self):
        pass

class AMeta(type):
    def __new__(mcs, name, parents, dct):
        par = parents[0]
        desc_method_private_name = '_{}__descendant_method'.format(par.__name__)
        if desc_method_private_name in par.__dict__:
            dct['descendant_method'] = par.__dict__[desc_method_private_name]
        return super(AMeta, mcs).__new__(mcs, name, parents, dct)

class B(A, metaclass=AMeta):
    def __init__(self):
        super(B, self).__init__()
This works, but obviously looks dirty, just like writing self.descendant_method = self._A__descendant_method in B itself.
What would be the right "pythonic" way of achieving this behaviour?
UPD: putting the method directly in B would work, of course, but I expect that A will have many descendants that will use this method and do not want to define it in every subclass.
What is so bad about making AA inherit from A? It's basically an abstract base class that adds additional functionality that isn't meant to be available in A. If you really don't want AA to ever be instantiated then the pythonic answer is not to worry about it, and just document that the user isn't meant to do that. Though if you're really insistent you can define __new__ to throw an error if the user tries to instantiate AA.
class A:
    def f(self):
        pass

class AA(A):
    def g(self):
        pass

    def __new__(cls, *args, **kwargs):
        if cls is AA:
            raise TypeError("AA is not meant to be instantiated")
        return super().__new__(cls)

class B(AA):
    pass
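A quick check of that behaviour, assuming the classes above:

b = B()          # fine, B is meant to be instantiated

try:
    AA()
except TypeError as e:
    print(e)     # AA is not meant to be instantiated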
Another alternative might be to make AA an Abstract Base Class. For this to work you will need to define at least one method as being abstract -- __init__ could do if there are no other methods you want to say are abstract.
from abc import ABCMeta, abstractmethod

class A:
    def __init__(self, val):
        self.val = val

    def f(self):
        pass

class AA(A, metaclass=ABCMeta):
    @abstractmethod
    def __init__(self, val):
        super().__init__(val)

    def g(self):
        pass

class B(AA):
    def __init__(self, val):
        super().__init__(val)
Finally, what's so bad about having the descendant method available on A but just not using it? You are writing the code for A, so just don't use the method... You could even document that the method isn't meant to be used by A directly, but is rather meant to be available to child classes. That way future developers will know your intentions.
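If you go the documentation route, here is a small sketch of what that might look like (purely illustrative):

class A:
    def own_method(self):
        pass

    def descendant_method(self):
        """Shared helper for subclasses of A.

        Not intended to be called on A itself; subclasses such as B are
        the intended callers.
        """
        pass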
As far as I can tell, this may be the most Pythonic way of accomplishing what you want:
class A:
    def own_method(self):
        pass

    def descendant_method(self):
        raise NotImplementedError

class B(A):
    def descendant_method(self):
        ...
Another option could be the following:
class A:
    def own_method(self):
        pass

    def _descendant_method(self):
        pass

class B(A):
    def descendant_method(self):
        return self._descendant_method()
They're both Pythonic because they're explicit, readable, clear and concise.
They're explicit because they don't do any unnecessary magic.
They're readable because one can tell at first glance precisely what you're doing and what your intention was.
They're clear because the leading single underscore is a widely used convention in the Python community for private (non-magic) methods: any developer who uses it should know to tread with caution.
Choosing between these approaches depends on your use case. A more concrete example in your question would be helpful.
Try checking the class name using __class__.__name__:
class A(object):
    def descendant_method(self):
        if self.__class__.__name__ == A.__name__:
            raise NotImplementedError
        print('From descendant')

class B(A):
    pass

b = B()
b.descendant_method()  # prints 'From descendant'

a = A()
a.descendant_method()  # raises NotImplementedError
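A slightly less fragile variation of the same idea (a sketch, not part of the original suggestion) compares the class object itself rather than its name, which avoids surprises if two classes happen to share a name:

class A(object):
    def descendant_method(self):
        if type(self) is A:  # compare the class itself, not its name
            raise NotImplementedError
        print('From descendant')

class B(A):
    pass

B().descendant_method()  # prints 'From descendant'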
I have a parent class A with a few classes inheriting from it (B, C, ...).
I have some function is_class_A that is implemented in A, and I want to override it in B and C in the same way.
Let say:
class A:
    def is_class_A():
        print("We are in class A")

class B(A):
    def is_class_A():
        print("Nope")

class C(A):
    def is_class_A():
        print("Nope")
Bear in mind that the implementations in class A and in the other classes are long and more complicated than shown here.
Two solutions came to my mind to avoid the duplication in the B and C implementations:
Using, in class C:

def is_class_A():
    return B.is_class_A()

I am not sure how well this would work, because an object of C is not an object of B. And I don't pass any self here, so I don't see how this will work; maybe something similar could?
The next solution is to use another level of inheritance:

class C(B):

This won't always work, since it's not always possible (for example, if they have different attributes), and it might not fit the design purpose.
My best attempt so far is to use another class:
class BC(A):
    def is_class_A():
        print("Nope")

class B(BC):
    pass

class C(BC):
    pass
What do you think? Any more ideas? Maybe something more technical that doesn't interfere with the program design?
Thanks.
One option is to define the alternate method once at the global scope, then do direct class attribute assignment.
class A:
    def is_class_A(self):
        print("We are in class A")

def alt_is_class_A(self):
    print("Nope")

class B(A):
    is_class_A = alt_is_class_A

class C(A):
    is_class_A = alt_is_class_A

class D(A):
    pass  # No override
The assignment could also be handled by a decorator:
def mod_it(cls):
    cls.is_class_A = alt_is_class_A
    return cls  # a class decorator must return the class

@mod_it
class B(A):
    pass

@mod_it
class C(A):
    pass

# No override
class D(A):
    pass
or via A.__init_subclass__:
class A:
    def is_class_A(self):
        print("We are in class A")

    def __init_subclass__(cls, is_class_A=None):
        super().__init_subclass__()
        if is_class_A is not None:
            cls.is_class_A = is_class_A

def alt_is_class_A(self):
    print("Nope")

class B(A, is_class_A=alt_is_class_A):
    pass

class C(A, is_class_A=alt_is_class_A):
    pass

# No override
class D(A):
    pass
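A quick usage check, assuming the definitions above (all three variants behave the same way):

A().is_class_A()  # We are in class A
B().is_class_A()  # Nope
C().is_class_A()  # Nope
D().is_class_A()  # We are in class A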
class abc():
    def xyz(self):
        print("Class abc")

class foo(abc):
    def xyz(self):
        print("class foo")

x = foo()
I want to call xyz() of the parent class, something like:
x.super().xyz()
With single inheritance like this it's easiest in my opinion to call the method through the class, and pass self explicitly:
abc.xyz(x)
Using super to be more generic, this would become (though I cannot think of a good use case):
super(type(x), x).xyz()
Which returns a super object that can be thought of as the parent class but with the child as self.
If you want something exactly like your syntax, just provide a super method for your class (your abc class, so everyone inheriting will have it):
def super(self):
    return super(type(self), self)
and now x.super().xyz() will work. It will break though if you make a class inheriting from foo, since you will only be able to go one level up (i.e. back to foo).
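Put together, that looks roughly like this (a sketch reusing the abc/foo classes from the question):

class abc():
    def xyz(self):
        print("Class abc")

    def super(self):
        # With explicit arguments, this still resolves to the builtin super.
        return super(type(self), self)

class foo(abc):
    def xyz(self):
        print("class foo")

x = foo()
x.super().xyz()  # Class abc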
There is no "through the object" way I know of to access hidden methods.
Just for kicks, here is a more robust version that allows chaining super calls, using a dedicated class that keeps track of them:
class Super:
    def __init__(self, obj, counter=0):
        self.obj = obj
        self.counter = counter

    def super(self):
        return Super(self.obj, self.counter + 1)

    def __getattr__(self, att):
        return getattr(super(type(self.obj).mro()[self.counter], self.obj), att)

class abc():
    def xyz(self):
        print("Class abc", type(self))

    def super(self):
        return Super(self)

class foo(abc):
    def xyz(self):
        print("class foo")

class buzz(foo):
    def xyz(self):
        print("class buzz")

buzz().super().xyz()
buzz().super().super().xyz()
results in

class foo
Class abc <class '__main__.buzz'>
I have googled around for some time, but what I got is all about INSTANCE property rather than CLASS property.
For example, this is the most-voted answer for a related question on Stack Overflow:
from abc import ABC, abstractmethod

class C(ABC):
    @property
    @abstractmethod
    def my_abstract_property(self):
        return 'someValue'

class D(C):
    def my_abstract_property(self):
        return 'aValue'

class E(C):
    # I expect the subclass should have this assignment,
    # but how to enforce this?
    my_abstract_property = 'aValue'
However, that is the INSTANCE PROPERTY case, not my CLASS PROPERTY case. In other words, calling
D.my_abstract_property will return something like <unbound method D.my_abstract_property>. Returning 'aValue' is what I expected, like class E.
Based on your example and your comment on my previous reply, I've structured the following, which works with ABC:
from abc import ABC

class C(ABC):
    _myprop = None

    def __init__(self):
        assert self._myprop, "class._myprop should be set"

    @property
    def myprop(self):
        return self._myprop

class D(C):
    _myprop = None

    def __init__(self):
        super().__init__()

class E(C):
    _myprop = 'e'

    def __init__(self):
        super().__init__()

e = E()
print(e.myprop)  # e

d = D()          # raises AssertionError: class._myprop should be set
print(d.myprop)
You are correct that there is no Python pre-scan that will detect that another developer has not assigned a value to a class variable before initializing. The initializer will take care of notifying them pretty quickly in usage.
You can use the @classmethod decorator.
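Presumably that means something along these lines: expose the value through a classmethod so it can be read from the class itself. A minimal sketch (the names mirror the question; note this does not enforce the override at class-creation time):

class C(object):
    @classmethod
    def my_abstract_property(cls):
        raise NotImplementedError

class E(C):
    @classmethod
    def my_abstract_property(cls):
        return 'aValue'

print(E.my_abstract_property())  # aValue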
I came up with a tricky workaround.
class C(object):
    myProp = None

    def __init__(self):
        assert self.myProp, 'you should set class property "myProp"'

class D(C):
    def __init__(self):
        C.__init__(self)

class E(C):
    myProp = 'e'

    def __init__(self):
        C.__init__(self)

print(D.myProp)  # None
print(E.myProp)  # e
But it still has some problems:
D.myProp will not raise any exception to warn the developer about the constraint (assigning myProp as a class property) until the developer initializes an instance of the class.
The abc module cannot work with this solution, which means losing lots of that module's useful features.
I have the following class structure:
class Base:
    def z(self):
        raise NotImplementedError()

class A(Base):
    def z(self):
        self._x()
        return self._z()

    def _x(self):
        # do stuff
        pass

    def _z(self):
        raise NotImplementedError()

class B(Base):
    def z(self):
        self._x()
        return self._z()

    def _x(self):
        # do stuff
        pass

    def _z(self):
        raise NotImplementedError()

class C(A):
    def _z(self):
        print(5)

class D(B):
    def _z(self):
        print(5)
The implementation of C(A) and D(B) is exactly the same and does not really care which class it inherits from. The conceptual difference is only in A and B (and these need to be kept as separate classes). Instead of writing separate definitions for C and D, I want to be able to dynamically inherit from A or B based on an argument provided at time of creating an instance of C/D (eventually C and D must be the same name).
It seems that metaclasses might work, but I am not sure how to pass an __init__ argument to the metaclass __new__ (and whether this will actually work). I would really prefer a solution which resolves the problem inside the class.
Have you considered using composition instead of inheritance? It seems like it is much more suitable for this use case. See the bottom of the answer for details.
Anyway, writing class C(A): ... and then class C(B): ... will not do what you want; it simply results in only class C(B) being defined, because the second definition rebinds the name C.
I'm not sure a metaclass will be able to help you here. I believe the best way would be to use type but I'd love to be corrected.
A solution using type (and probably misusing locals(), but that's not the point here):
class A:
    def __init__(self):
        print('Inherited from A')

class B:
    def __init__(self):
        print('Inherited from B')

class_to_inherit = input()  # 'A' or 'B'
C = type('C', (locals()[class_to_inherit],), {})
C()
'A' or 'B'
>> A
Inherited from A
'A' or 'B'
>> B
Inherited from B
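For comparison, the same dynamic choice can be sketched with a small factory function instead of type() and locals() (an illustrative variation; make_c is a made-up name):

def make_c(base):
    # Build a class C on the fly, inheriting from whichever base was chosen.
    class C(base):
        pass
    return C

C = make_c(B)
C()  # Inherited from B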
Composition
Going back to the question at the beginning of my answer: you state yourself that the implementation of both "C(A)" and "C(B)" is identical and they don't actually care about A or B. It seems more correct to me to use composition. Then you can do something along the lines of:
class A: pass
class B: pass

class C:
    def __init__(self, obj):  # obj is either an A or B instance, or A or B themselves
        self.obj = obj        # or self.obj = obj() if obj is A or B themselves

c = C(A())  # or c = C(A)
In case C should expose the same API as A or B, C can overwrite __getattr__:
class A:
    def foo(self):
        print('foo')

class C:
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, item):
        return getattr(self.obj, item)

C(A()).foo()
# foo
I need to refactor existing code by collapsing a method that's copy-and-pasted between various classes that inherit from one another into a single method.
So I produced the following code:
class A(object):
    def rec(self):
        return 1

class B(A):
    def rec(self):
        return self.rec_gen(B)

    def rec_gen(self, rec_class):
        return super(rec_class, self).rec() + 1

class C(B):
    def rec(self):
        return self.rec_gen(C)

if __name__ == '__main__':
    b = B(); c = C()
    print c.rec()
    print b.rec()
And the output:
3
2
What still bothers me is that in the 'rec' method I need to tell 'rec_gen' the context of the class in which it's running. Is there a way for 'rec_gen' to figure it out by itself in runtime?
This capability has been added to Python 3 - see PEP 3135. In a nutshell:
class B(A):
    def rec(self):
        return super().rec() + 1
I think you've created the convoluted rec()/rec_gen() setup because you couldn't automatically find the class, but in case you want that anyway the following should work:
class A(object):
    def rec(self):
        return 1

class B(A):
    def rec(self):
        # __class__ is a cell that is only created if super() is in the method
        super()
        return self.rec_gen(__class__)

    def rec_gen(self, rec_class):
        return super(rec_class, self).rec() + 1

class C(B):
    def rec(self):
        # __class__ is a cell that is only created if super() is in the method
        super()
        return self.rec_gen(__class__)
The simplest solution in Python 2 is to use a private member to hold the super object:
class B(A):
    def __init__(self):
        self.__super = super(B, self)

    def rec(self):
        return self.__super.rec() + 1
But that still suffers from the need to specify the actual class in one place, and if you happen to have two identically-named classes in the class hierarchy (e.g. from different modules) this method will break.
There were a couple of us who made recipes for automatic resolution for Python 2 prior to the existence of PEP 3135 - my method is at self.super on ActiveState. Basically, it allows the following:
class B(A, autosuper):
    def rec(self):
        return self.super().rec() + 1
or in the case that you're calling a parent method with the same name (the most common case):
class B(A, autosuper):
    def rec(self):
        return self.super() + 1
Caveats to this method:
It's quite slow. I have a version sitting around somewhere that does bytecode manipulation to improve the speed a lot.
It's not consistent with PEP 3135 (although it was a proposal for the Python 3 super at one stage).
It's quite complex.
It's a mix-in base class.
I don't know if the above would enable you to meet your requirements. With a small change to the recipe though you could find out what class you're in and pass that to rec_gen() - basically extract the class-finding code out of _getSuper() into its own method.
An alternative solution for python 2.x would be to use a metaclass to automatically define the rec method in all your subclasses:
class RecGen(type):
    def __new__(cls, name, bases, dct):
        new_cls = super(RecGen, cls).__new__(cls, name, bases, dct)
        if bases != (object,):
            def rec(self):
                return super(new_cls, self).rec() + 1
            new_cls.rec = rec
        return new_cls

class A(object):
    __metaclass__ = RecGen

    def rec(self):
        return 1

class B(A):
    pass

class C(B):
    pass
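A quick check of what the metaclass version produces, assuming the classes above (Python 2, matching the rest of the answer):

b = B(); c = C()
print b.rec()  # 2
print c.rec()  # 3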
Note that if you're just trying to get something like the number of parent classes, it would be easier to use self.__class__.__mro__ directly:
class A(object):
    def rec(self):
        return len(self.__class__.__mro__) - 1

class B(A):
    pass

class C(B):
    pass
I'm not sure exactly what you're trying to achieve, but if it is just to have a method that returns a different constant value for each class then use class attributes to store the value. It isn't clear at all from your example that you need to go anywhere near super().
class A(object):
    REC = 1

    def rec(self):
        return self.REC

class B(A):
    REC = 2

class C(B):
    REC = 3

if __name__ == '__main__':
    b = B(); c = C()
    print c.rec()
    print b.rec()