How to dynamically inherit at initialization time? - python

I have the following class structure:
class Base:
    def z(self):
        raise NotImplementedError()

class A(Base):
    def z(self):
        self._x()
        return self._z()
    def _x(self):
        # do stuff
        pass
    def _z(self):
        raise NotImplementedError()

class B(Base):
    def z(self):
        self._x()
        return self._z()
    def _x(self):
        # do stuff
        pass
    def _z(self):
        raise NotImplementedError()

class C(A):
    def _z(self):
        print(5)

class D(B):
    def _z(self):
        print(5)
The implementations of C(A) and D(B) are exactly the same and do not really care which class they inherit from. The conceptual difference is only in A and B (and these need to be kept as separate classes). Instead of writing separate definitions for C and D, I want to inherit dynamically from either A or B based on an argument provided at the time an instance of C/D is created (eventually C and D should become a single class with one name).
It seems that metaclasses might work, but I am not sure how to pass an __init__ argument through to the metaclass __new__ (or whether this will actually work). I would really prefer a solution that resolves the problem inside the class itself.

Have you considered using composition instead of inheritance? It seems like it is much more suitable for this use case. See the bottom of the answer for details.
Anyway, writing class C(A): ... followed by class C(B): ... does not give you both classes: the second class statement simply rebinds the name C, so only class C(B) ends up defined.
I'm not sure a metaclass will be able to help you here. I believe the best way would be to use type but I'd love to be corrected.
A solution using type (and probably misusing locals(), but that's not the point here):
class A:
    def __init__(self):
        print('Inherited from A')

class B:
    def __init__(self):
        print('Inherited from B')

class_to_inherit = input()  # 'A' or 'B'
C = type('C', (locals()[class_to_inherit],), {})
C()
'A' or 'B'
>> A
Inherited from A
'A' or 'B'
>> B
Inherited from B
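If C should also carry the shared _z implementation from the question, the same type() call can receive it in the attributes dict. A minimal sketch, assuming the question's A and B are in scope:
def _z(self):
    print(5)

class_to_inherit = input()  # 'A' or 'B'
C = type('C', (locals()[class_to_inherit],), {'_z': _z})
C().z()  # runs A's or B's z(), which ends up calling the shared _z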
Composition
Coming back to the suggestion at the beginning of my answer: you state yourself that the implementations of "C(A)" and "C(B)" are identical and do not actually care about A or B. It seems more correct to me to use composition. Then you can do something along the lines of:
class A: pass
class B: pass

class C:
    def __init__(self, obj):  # obj is either an A or B instance, or the A or B class itself
        self.obj = obj        # or self.obj = obj() if obj is the class itself

c = C(A())  # or c = C(A)
In case C should expose the same API as A or B, C can override __getattr__:
class A:
    def foo(self):
        print('foo')

class C:
    def __init__(self, obj):
        self.obj = obj
    def __getattr__(self, item):
        return getattr(self.obj, item)

C(A()).foo()
# foo

Related

Correct way of returning new class object (which could also be extended)

I am trying to find a good way of returning a new class object from a method, in a way that still works when the class is extended.
I have a class (classA) which has, among other methods, a method that returns a new classA object after some processing:
class classA:
    def __init__(self, params): ...
    def methodX(self, **kwargs):
        # process data
        return classA(new_params)
Now, I am extending this class to another classB. I need methodX to do the same, but return classB this time, instead of classA
class classB(classA):
    def __init__(self, params):
        super().__init__(params)
        self.newParams = XYZ
    def methodX(self, **kwargs):
        ???
This may be something trivial but I simply cannot figure it out. In the end I don't want to rewrite methodX each time the class gets extended.
Thank you for your time.
Use the __class__ attribute like this:
class A:
    def __init__(self, **kwargs):
        self.kwargs = kwargs
    def methodX(self, **kwargs):
        # do stuff with kwargs
        return self.__class__(**kwargs)
    def __repr__(self):
        return f'{self.__class__}({self.kwargs})'

class B(A):
    pass

a = A(foo='bar')
ax = a.methodX(gee='whiz')
b = B(yee='haw')
bx = b.methodX(cool='beans')

print(a)
print(ax)
print(b)
print(bx)
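For reference, running that snippet should print something along these lines, the point being that bx reports class B even though methodX is only defined on A:
<class '__main__.A'>({'foo': 'bar'})
<class '__main__.A'>({'gee': 'whiz'})
<class '__main__.B'>({'yee': 'haw'})
<class '__main__.B'>({'cool': 'beans'})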
class classA:
    def __init__(self, x):
        self.x = x
    def createNew(self, y):
        t = type(self)
        return t(y)

class classB(classA):
    def __init__(self, params):
        super().__init__(params)

a = classA(1)
newA = a.createNew(2)
b = classB(1)
newB = b.createNew(2)
print(type(newB))
# <class '__main__.classB'>
I want to propose what I think is the cleanest approach, albeit similar to existing answers. The problem feels like a good fit for a class method:
class A:
    @classmethod
    def method_x(cls, **kwargs):
        return cls(<init params>)
Using the @classmethod decorator ensures that the first input (traditionally named cls) will refer to the class to which the method belongs, rather than to an instance.
(Usually we call the first method input self, and it refers to the instance on which the method was called.)
Because cls refers to A, rather than an instance of A, we can call cls() as we would call A().
However, in a class that inherits from A, cls will instead refer to the child class, as required:
class A:
    def __init__(self, x):
        self.x = x

    @classmethod
    def make_new(cls, **kwargs):
        y = kwargs["y"]
        return cls(y)  # returns A(y) here

class B(A):
    def __init__(self, x):
        super().__init__(x)
        self.z = 3 * x

inst = B(1).make_new(y=7)
print(inst.x, inst.z)
And now you can expect that print statement to produce 7 21.
That inst.z exists should confirm for you that the make_new call (which was only defined on A and inherited unaltered by B) has indeed made an instance of B.
However, there's something I must point out. Inheriting the unaltered make_new method only works because the __init__ method on B has the same call signature as the method on A. If this weren't the case then the call to cls might have had to be altered.
This can be circumvented by allowing **kwargs on the __init__ method and passing generic **kwargs into cls() in the parent class:
class A:
    def __init__(self, **kwargs):
        self.x = kwargs["x"]

    @classmethod
    def make_new(cls, **kwargs):
        return cls(**kwargs)

class B(A):
    def __init__(self, x, w):
        super().__init__(x=x)
        self.w = w

inst = B(1, 2).make_new(x="spam", w="spam")
print(inst.x, inst.w)
Here we were able to give B a different (more restrictive!) signature.
This illustrates a general principle, which is that parent classes will typically be more abstract/less specific than their children.
It follows that, if you want two classes that substantially share behaviour but which do quite specific different things, it will be better to create three classes: one rather abstract one that defines the behaviour-in-common, and two children that give you the specific behaviours you want.
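To make that last point concrete, here is a minimal sketch (with hypothetical names) of the three-class layout: a fairly abstract parent carrying the shared make_new, and two children providing the specifics:
class Shape:
    def __init__(self, **kwargs):
        self.size = kwargs["size"]

    @classmethod
    def make_new(cls, **kwargs):
        return cls(**kwargs)  # cls is whichever subclass the call went through

class Circle(Shape):
    pass

class Square(Shape):
    pass

print(type(Circle(size=1).make_new(size=2)))  # <class '__main__.Circle'>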

How do I initialize the parent class using a class method instead of calling the constructor?

I have a class A which I want to inherit from; this class has a classmethod that can initialize a new instance from some data. I don't have access to the code of from_data and can't change the implementation of A.
I want to initialize new instances of class B using the same data I would pass to the A's from_data method. In the solution I came up with I create a new instance of A in __new__(...) and change the __class__ to B. __init__(...) can then further initialize the "new instance of B" as normal. It seems to work but I'm not sure this will have some sort of side effects.
So will this work reliably? Is there a proper way of achieving this?
class A:
    def __init__(self, alpha, beta):
        self.alpha = alpha
        self.beta = beta

    @classmethod
    def from_data(cls, data):
        obj = cls(*data)
        return obj

class B(A):
    def __new__(cls, data):
        a = A.from_data(data)
        a.__class__ = cls
        return a

    def __init__(self, data):
        pass

b = B((5, 3))
print(b.alpha, b.beta)
print(type(b))
print(isinstance(b, B))
Output:
5 3
<class '__main__.B'>
True
It could be that your use-case is more abstract than I am understanding, but testing out in a REPL, it seems that calling the parent class A constructor via super()
class A:
    ...  # same A as above

class B(A):
    def __init__(self, data):
        super().__init__(*data)

b = B((5, 3))
print(b.alpha, b.beta)
print(type(b))
print(isinstance(b, B))
also results in
5 3
<class '__main__.B'>
True
Is there a reason you don't want to call super() to instantiate a new instance of your child class?
Edit:
So, in case you need to use the from_data constructor... you could do something like
# ... class A

class B(A):
    def __init__(self, data):
        a_obj = A.from_data(data)
        for attr in a_obj.__dict__:
            setattr(self, attr, getattr(a_obj, attr))
That is really hacky though, and not guaranteed to copy every attribute of the A object, especially if attribute storage has been customized (e.g. via __slots__ or an overridden __dict__).
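One more observation: because from_data already calls cls(*data), the inherited classmethod returns a B by itself whenever B keeps A's constructor signature, since cls is then B. A minimal sketch:
class B(A):
    pass  # keeps A's __init__(alpha, beta)

b = B.from_data((5, 3))
print(type(b), b.alpha, b.beta)  # <class '__main__.B'> 5 3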

How to override in multiple classes without code duplication (Python)

I have a parent class A with a few classes inheriting from it (B, C, ...).
I have some function is_class_A that is implemented in A, and I want to override it in B and C in the same way.
Let's say:
class A:
    def is_class_A(self):
        print("We are in class A")

class B(A):
    def is_class_A(self):
        print("Nope")

class C(A):
    def is_class_A(self):
        print("Nope")
Bear in mind that the implementations, both in class A and in the other classes, are long and more complicated than this.
Two solutions came to mind to avoid the duplication in the B and C implementations:
Using in class C:
    def is_class_A():
        return B.is_class_A()
I am not sure how functional this would be, because an object of C is not an object of B. And I don't pass any self here, so I don't see how this can work; maybe something similar could?
The next solution is to use another level of inheritance:
    class C(B)
This won't always work, since it is not always possible (if the classes have different attributes) and it might not fit the design.
My best attempt so far is to use another class:
class BC(A):
    def is_class_A(self):
        print("Nope")

class B(BC):
    pass

class C(BC):
    pass
What do you think? Some more ideas? Maybe something more technical that won't involve with the program design?
Thanks.
One option is to define the alternate method once at the global scope, then do direct class attribute assignment.
class A:
    def is_class_A(self):
        print("We are in class A")

def alt_is_class_A(self):
    print("Nope")

class B(A):
    is_class_A = alt_is_class_A

class C(A):
    is_class_A = alt_is_class_A

class D(A):
    pass  # No override
The assignment could also be handled by a decorator:
def mod_it(cls):
    cls.is_class_A = alt_is_class_A
    return cls  # return the class so the decorated name is not rebound to None

@mod_it
class B(A):
    pass

@mod_it
class C(A):
    pass

# No override
class D(A):
    pass
or via A.__init_subclass__:
class A:
    def is_class_A(self):
        print("We are in class A")

    def __init_subclass__(cls, is_class_A=None):
        super().__init_subclass__()
        if is_class_A is not None:
            cls.is_class_A = is_class_A

def alt_is_class_A(self):
    print("Nope")

class B(A, is_class_A=alt_is_class_A):
    pass

class C(A, is_class_A=alt_is_class_A):
    pass

# No override
class D(A):
    pass
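Whichever variant you pick, the behaviour should come out the same; a quick check, assuming the classes above:
B().is_class_A()  # Nope
C().is_class_A()  # Nope
D().is_class_A()  # We are in class A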

Method accessible only from class descendants in python

Let's say I have the following two classes
class A:
    def own_method(self):
        pass

    def descendant_method(self):
        pass

class B(A):
    pass
and I want descendant_method to be callable from instances of B, but not of A, and own_method to be callable from everywhere.
I can think of several solutions, all unsatisfactory:
Check some field and manually raise NotImplementedError:
class A:
    def __init__(self):
        self.some_field = None

    def own_method(self):
        pass

    def descendant_method(self):
        if self.some_field is None:
            raise NotImplementedError

class B(A):
    def __init__(self):
        super(B, self).__init__()
        self.some_field = 'B'
But this is modifying the method's runtime behaviour, which I don't want to do
Use a mixin:
class A:
    def own_method(self):
        pass

class AA:
    def descendant_method(self):
        pass

class B(AA, A):
    pass
This is nice as long as descendant_method doesn't use much from A, or else we'll have to inherit AA(A) and this defies the whole point
make method private in A and redefine it in a metaclass:
class A:
    def own_method(self):
        pass

    def __descendant_method(self):
        pass

class AMeta(type):
    def __new__(mcs, name, parents, dct):
        par = parents[0]
        desc_method_private_name = '_{}__descendant_method'.format(par.__name__)
        if desc_method_private_name in par.__dict__:
            dct['descendant_method'] = par.__dict__[desc_method_private_name]
        return super(AMeta, mcs).__new__(mcs, name, parents, dct)

class B(A, metaclass=AMeta):
    def __init__(self):
        super(B, self).__init__()
This works, but obviously looks dirty, just like writing self.descendant_method = self._A__descendant_method in B itself.
What would be the right "pythonic" way of achieving this behaviour?
UPD: putting the method directly in B would work, of course, but I expect that A will have many descendants that will use this method and do not want to define it in every subclass.
What is so bad about making AA inherit from A? It's basically an abstract base class that adds additional functionality that isn't meant to be available in A. If you really don't want AA to ever be instantiated then the pythonic answer is not to worry about it, and just document that the user isn't meant to do that. Though if you're really insistent you can define __new__ to throw an error if the user tries to instantiate AA.
class A:
    def f(self):
        pass

class AA(A):
    def g(self):
        pass

    def __new__(cls, *args, **kwargs):
        if cls is AA:
            raise TypeError("AA is not meant to be instantiated")
        return super().__new__(cls)

class B(AA):
    pass
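A quick check of how that guard behaves, assuming the classes above:
B().g()  # fine: B inherits g() from AA
A()      # fine: plain A is still instantiable
AA()     # TypeError: AA is not meant to be instantiated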
Another alternative might be to make AA an Abstract Base Class. For this to work you will need to define at least one method as being abstract -- __init__ could do if there are no other methods you want to say are abstract.
from abc import ABCMeta, abstractmethod

class A:
    def __init__(self, val):
        self.val = val

    def f(self):
        pass

class AA(A, metaclass=ABCMeta):
    @abstractmethod
    def __init__(self, val):
        super().__init__(val)

    def g(self):
        pass

class B(AA):
    def __init__(self, val):
        super().__init__(val)
Finally, what's so bad about having the descendant method available on A but just not using it? You are writing the code for A, so just don't use the method. You could even document that it isn't meant to be used by A directly, but is rather meant to be available to child classes. That way future developers will know your intentions.
As far as I can tell, this may be the most Pythonic way of accomplishing what you want:
class A:
    def own_method(self):
        pass

    def descendant_method(self):
        raise NotImplementedError

class B(A):
    def descendant_method(self):
        ...
Another option could be the following:
class A:
    def own_method(self):
        pass

    def _descendant_method(self):
        pass

class B(A):
    def descendant_method(self):
        return self._descendant_method()
They're both Pythonic because they're explicit, readable, clear and concise.
It's explicit because it's not doing any unnecessary magic.
It's readable because one can tell precisely what you're doing, and what your intention was, at first glance.
It's clear because the leading single underscore is a widely used convention in the Python community for private (non-magic) methods; any developer who sees it should know to tread with caution.
Choosing between the two approaches will depend on your use case. A more concrete example in your question would be helpful.
Try checking the class name using __class__.__name__:
class A(object):
    def descendant_method(self):
        if self.__class__.__name__ == A.__name__:
            raise NotImplementedError
        print('From descendant')

class B(A):
    pass

b = B()
b.descendant_method()

a = A()
a.descendant_method()
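Roughly what to expect from that snippet: the call on b prints the message, while the call on a raises:
From descendant
Traceback (most recent call last):
  ...
NotImplementedError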

Making a class method recognize which class context it's running in

I need to refactor existing code by collapsing a method that's copy-and-pasted between various classes that inherit from one another into a single method.
So I produced the following code:
class A(object):
    def rec(self):
        return 1

class B(A):
    def rec(self):
        return self.rec_gen(B)

    def rec_gen(self, rec_class):
        return super(rec_class, self).rec() + 1

class C(B):
    def rec(self):
        return self.rec_gen(C)

if __name__ == '__main__':
    b = B(); c = C()
    print c.rec()
    print b.rec()
And the output:
3
2
What still bothers me is that in the 'rec' method I need to tell 'rec_gen' the context of the class in which it's running. Is there a way for 'rec_gen' to figure it out by itself in runtime?
This capability has been added to Python 3 - see PEP 3135. In a nutshell:
class B(A):
    def rec(self):
        return super().rec() + 1
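With zero-argument super(), C is written the same way and no class has to name itself; a sketch of how the original example could look on Python 3:
class A:
    def rec(self):
        return 1

class B(A):
    def rec(self):
        return super().rec() + 1

class C(B):
    def rec(self):
        return super().rec() + 1

print(C().rec())  # 3
print(B().rec())  # 2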
I think you've created the convoluted rec()/rec_gen() setup because you couldn't automatically find the class, but in case you want that anyway the following should work:
class A(object):
    def rec(self):
        return 1

class B(A):
    def rec(self):
        # __class__ is a cell that is only created if super() is in the method
        super()
        return self.rec_gen(__class__)

    def rec_gen(self, rec_class):
        return super(rec_class, self).rec() + 1

class C(B):
    def rec(self):
        # __class__ is a cell that is only created if super() is in the method
        super()
        return self.rec_gen(__class__)
The simplest solution in Python 2 is to use a private member to hold the super object:
class B(A):
    def __init__(self):
        self.__super = super(B, self)  # bind the super object to this instance

    def rec(self):
        return self.__super.rec() + 1
But that still suffers from the need to specify the actual class in one place, and if you happen to have two identically-named classes in the class hierarchy (e.g. from different modules) this method will break.
There were a couple of us who made recipes for automatic resolution for Python 2 prior to the existence of PEP 3135 - my method is at self.super on ActiveState. Basically, it allows the following:
class B(A, autosuper):
    def rec(self):
        return self.super().rec() + 1
or in the case that you're calling a parent method with the same name (the most common case):
class B(A, autosuper):
    def rec(self):
        return self.super() + 1
Caveats to this method:
It's quite slow. I have a version sitting around somewhere that does bytecode manipulation to improve the speed a lot.
It's not consistent with PEP 3135 (although it was a proposal for the Python 3 super at one stage).
It's quite complex.
It's a mix-in base class.
I don't know if the above would enable you to meet your requirements. With a small change to the recipe though you could find out what class you're in and pass that to rec_gen() - basically extract the class-finding code out of _getSuper() into its own method.
An alternative solution for python 2.x would be to use a metaclass to automatically define the rec method in all your subclasses:
class RecGen(type):
    def __new__(cls, name, bases, dct):
        new_cls = super(RecGen, cls).__new__(cls, name, bases, dct)
        if bases != (object,):
            def rec(self):
                return super(new_cls, self).rec() + 1
            new_cls.rec = rec
        return new_cls

class A(object):
    __metaclass__ = RecGen

    def rec(self):
        return 1

class B(A):
    pass

class C(B):
    pass
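Assuming that metaclass version (Python 2, hence __metaclass__), instances should behave like the original example:
b = B(); c = C()
print c.rec()  # 3
print b.rec()  # 2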
Note that if you're just trying to get something like the number of parent classes, it would be easier to use self.__class__.__mro__ directly:
class A(object):
    def rec(self):
        return len(self.__class__.__mro__) - 1

class B(A):
    pass

class C(B):
    pass
I'm not sure exactly what you're trying to achieve, but if it is just to have a method that returns a different constant value for each class then use class attributes to store the value. It isn't clear at all from your example that you need to go anywhere near super().
class A(object):
    REC = 1

    def rec(self):
        return self.REC

class B(A):
    REC = 2

class C(B):
    REC = 3

if __name__ == '__main__':
    b = B(); c = C()
    print c.rec()
    print b.rec()
