Handling custom __new__() and __del__() with inheritance - python

When I derive a class in Python, I need to call Base.__init__(self) from the derived class's __init__ method, like this:
class Base(object):
    def __init__(self):
        pass

class Der(Base):
    def __init__(self):
        Base.__init__(self)
Do I need to do the same for the __new__() and __del__() methods, like this:
class Base(object):
    def __new__(self):
        pass
    def __init__(self):
        pass
    def __del__(self):
        pass

class Der(Base):
    def __new__(self):
        Base.__new__(self)
    def __init__(self):
        Base.__init__(self)
    def __del__(self):
        Base.__del__(self)
I am wondering because nothing seems to go wrong if I don't do that.
I am sure the Python garbage collector will take care of things, but is there anything I should be worried about if I don't call them from the derived class?

The usage of __init__() and __del__() is straightforward, but not that of __new__().
Since __new__() returns the instance of the class, we need to call super().__new__() all the way up the chain to object, as shown below:
class Base(object):
    def __new__(cls):
        return super().__new__(cls)

class Derive(Base):
    def __new__(cls):
        return super().__new__(cls)
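To see why returning the instance matters, here is a small illustration (my addition, not part of the original answer): if __new__ creates the object but does not return it, the constructor call evaluates to None and __init__ never runs.

class Broken(object):
    # Illustration only: __new__ builds the instance but forgets to return it.
    def __init__(self):
        print("never reached")
    def __new__(cls):
        super().__new__(cls)   # instance created but not returned

print(Broken())   # None -- and "never reached" is not printed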
__del__() can be used in the same way as __init__(): calling Base.__del__() is not mandatory unless some cleanup is required in the base class's __del__().
The correct way to write the code asked about in the question is:
class Base(object):
    def __new__(cls):
        return super().__new__(cls)
    def __init__(self):
        self.Value = 200
    def __del__(self):
        pass

class Derive(Base):
    def __new__(cls):
        return super().__new__(cls)
    def __init__(self):
        Base.__init__(self)
    def __del__(self):
        Base.__del__(self)
It's always better not to implement __new__() and __del__() unless we really need them.
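One common case where overriding __new__ is genuinely needed is subclassing an immutable type, since the data has to be supplied at construction time; a minimal sketch (my illustration, not from the original answer):

class Point(tuple):
    # Illustration: tuple is immutable, so the values must be passed to __new__;
    # by the time __init__ runs, the tuple contents are already fixed.
    def __new__(cls, x, y):
        return super().__new__(cls, (x, y))

p = Point(1, 2)
print(p)   # (1, 2)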

Related

I want to call a parent class method which is overridden in the child class, through a child class object, in Python

class abc():
    def xyz(self):
        print("Class abc")

class foo(abc):
    def xyz(self):
        print("class foo")

x = foo()
I want to call xyz() of the parent class, something like:
x.super().xyz()
With single inheritance like this it's easiest in my opinion to call the method through the class, and pass self explicitly:
abc.xyz(x)
Using super to be more generic this would become (though I cannot think of a good use case):
super(type(x), x).xyz()
Which returns a super object that can be thought of as the parent class but with the child as self.
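With the abc and foo classes from the question, both spellings reach the parent implementation (my illustration; outputs shown as comments):

x = foo()
abc.xyz(x)                # Class abc
super(type(x), x).xyz()   # Class abc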
If you want something exactly like your syntax, just provide a super method for your class (your abc class, so everyone inheriting will have it):
def super(self):
    # defined inside abc; the name 'super' in the body still refers to the builtin
    return super(type(self), self)
and now x.super().xyz() will work. It will break though if you make a class inheriting from foo, since you will only be able to go one level up (i.e. back to foo).
There is no "through the object" way I know of to access hidden methods.
Just for kicks, here is a more robust version allowing chained super calls, using a dedicated class that keeps track of how many levels up we have gone:
class Super:
    def __init__(self, obj, counter=0):
        self.obj = obj
        self.counter = counter
    def super(self):
        return Super(self.obj, self.counter + 1)
    def __getattr__(self, att):
        return getattr(super(type(self.obj).mro()[self.counter], self.obj), att)

class abc():
    def xyz(self):
        print("Class abc", type(self))
    def super(self):
        return Super(self)

class foo(abc):
    def xyz(self):
        print("class foo")

class buzz(foo):
    def xyz(self):
        print("class buzz")

buzz().super().xyz()
buzz().super().super().xyz()
results in
class foo
Class abc <class '__main__.buzz'>

How to force a python class to have a CLASS property? (not an INSTANCE property!!!)

I have googled around for some time, but everything I found is about INSTANCE properties rather than CLASS properties.
For example, this is the most-voted answer for a question on Stack Overflow:
class C(ABC):
    @property
    @abstractmethod
    def my_abstract_property(self):
        return 'someValue'

class D(C):
    def my_abstract_property(self):
        return 'aValue'

class E(C):
    # I expect the subclass should have this assignment,
    # but how to enforce this?
    my_abstract_property = 'aValue'
However, that is the INSTANCE PROPERTY case, not my CLASS PROPERTY case. In other words, calling
D.my_abstract_property will return something like <unbound method D.my_abstract_property>. Returning 'aValue', as class E does, is what I expect.
Based on your example and your comment on my previous reply, I've structured the following, which works with ABC:
from abc import ABC

class C(ABC):
    _myprop = None

    def __init__(self):
        assert self._myprop, "class._myprop should be set"

    @property
    def myprop(self):
        return self._myprop

class D(C):
    _myprop = None

    def __init__(self):
        super().__init__()

class E(C):
    _myprop = 'e'

    def __init__(self):
        super().__init__()

e = E()
print(e.myprop)   # e
d = D()           # raises AssertionError, since D never set _myprop
print(d.myprop)
You are correct that there is no Python pre-scan that will detect that another developer has not assigned a value to the class variable before instantiation. The initializer will take care of notifying them pretty quickly in usage.
You can use the @classmethod decorator.
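That answer is terse; a minimal sketch of one way to read it (my interpretation, names are illustrative and not from the original): expose the required class-level value through a classmethod so it can be checked and read without creating an instance.

class C(object):
    _myprop = None

    @classmethod
    def myprop(cls):
        # complain if a subclass forgot to assign the class attribute
        if cls._myprop is None:
            raise NotImplementedError('subclasses must set _myprop')
        return cls._myprop

class E(C):
    _myprop = 'e'

print(E.myprop())   # e
# C.myprop() would raise NotImplementedError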
I came up with a tricky workaround.
class C(object):
    myProp = None

    def __init__(self):
        assert self.myProp, 'you should set class property "myProp"'

class D(C):
    def __init__(self):
        C.__init__(self)

class E(C):
    myProp = 'e'

    def __init__(self):
        C.__init__(self)

print(D.myProp)   # None -- no warning until D() is instantiated
print(E.myProp)   # e
But it still has some problems:
D.myProp will not raise any exception to warn the developer about the constraint (assigning myProp as a class property) until the developer instantiates the class.
The abc module cannot be combined with this solution, which means losing a lot of that module's useful features.

@staticmethod, can anyone explain the following Python code?

class A:
    def __init__(self):
        pass
    @staticmethod
    def a():
        return "a"

class B1(A):
    def __init__(self):
        super().__init__()
    @staticmethod
    def a():
        return "b"

class B2(A):
    def __init__(self):
        super().__init__()

class C1(B1):
    def __init__(self):
        super().__init__()
    @staticmethod
    def a():
        return super(C1, C1).a()

class C2(B1):
    def __init__(self):
        super().__init__()
    @staticmethod
    def a():
        return super(B1, B1).a()
So here's a tricky thing I'm having trouble understanding.
B2().a() returns a, even though B2 doesn't have a method called a().
How come?
Also, I don't quite understand how staticmethod differs from the other methods.
Every class here inherits (directly or indirectly) from A (that's what class B2(A): is telling you).
Since they inherit from A, they have access to A's methods. All the @staticmethod decorator does is suppress the implicit passing of self when the method is called on an instance, so that A.a() and A().a() work the same; similarly, B2.a() and B2().a() work the same way, both invoking A.a().
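To make the lookup concrete, here is what the classes in the question return when called (my illustration, assuming the definitions above; results shown as comments):

print(B2().a())   # a -- B2 defines no a(), so the lookup falls through to A.a
print(C1().a())   # b -- super(C1, C1) starts the search after C1 and finds B1.a
print(C2().a())   # a -- super(B1, B1) starts the search after B1 and finds A.a
print(B2.a())     # a -- same result without an instance, thanks to @staticmethod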

Python: Inheriting methods of a parent class without being a child of the class

I can't seem to find information regarding what I'm trying to do, so I'm afraid the answer is "you can't do it" or "that's bad practice." But, here it goes:
Given the following:
class A(object):
    def __init__(self):
        pass
    def methoda(self):
        return 1

class C(object):
    def __init__(self):
        pass
    def methodc(self):
        return 2

import A, C

class B(object):
    def __init__(self, classC):
        A.__init__(self)
        if classC:
            C.__init__(self)
    def methodb(self):
        return 2
Obviously, running:
b = B(True)
b.methoda()
is going to crash with an error:
Unbound method __init__() must be called with A instance as first argument (got B instance instead)
However, I am basically looking for a way to make this work. My motivation:
There are classes (maybe in the future) that will duplicate a certain group of methods (say some fancy conversions). In an effort to reduce code, I'd like for the future classes to inherit the methods; but for legacy reasons, I don't want B to inherit C.
A couple solutions:
Don't use special methods directly:
class A(object):
    def __init__(self):
        self._init(self)
    @staticmethod
    def _init(self):
        pass  # actual initialization code here
    def methoda(self):
        return 1

class C(object):
    def __init__(self):
        self._init(self)
    @staticmethod
    def _init(self):
        pass  # actual initialization code here
    def methodc(self):
        return 2

import A, C

class B(object):
    def __init__(self, classC):
        A._init(self)
        if classC:
            C._init(self)
    def methodb(self):
        return 2
Or silly hacks involving copying from initialized objects:
import A, C

class B(object):
    def __init__(self, classC):
        vars(self).update(vars(A()))
        if classC:
            vars(self).update(vars(C()))
    def methodb(self):
        return 2
Note that none of these solutions will give access to methods from A or C on instances of B. That's just ugly. If you really need inheritance, use inheritance, don't do terrible things trying to simulate it poorly.
I ended up just inheriting from multiple classes. It's not exactly the way I wanted it done, but it's cleaner and easier for the IDE to follow:
class A(object):
    def __init__(self):
        super(A, self).__init__()
    def methoda(self):
        return 1

class C(object):
    def __init__(self):
        super(C, self).__init__()
    def _C_init(self):
        # some init stuff
        pass
    def methodc(self):
        return 1

class B(A, C):
    def __init__(self, use_C):
        super(B, self).__init__()
        if use_C:
            self._C_init()
    def methodb(self):
        return 2
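A quick usage sketch of the final design (my addition): the C-specific initialisation only runs when requested, but the inherited methods are available either way.

b1 = B(use_C=True)    # runs _C_init as part of construction
print(b1.methoda())   # 1
print(b1.methodc())   # 1
b2 = B(use_C=False)   # skips _C_init, but methodc is still inherited
print(b2.methodb())   # 2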

Method accessible only from class descendants in python

Let's say I have the following two classes
class A:
    def own_method(self):
        pass
    def descendant_method(self):
        pass

class B(A):
    pass
and I want descendant_method to be callable from instances of B, but not of A, and own_method to be callable from everywhere.
I can think of several solutions, all unsatisfactory:
Check some field and manually raise NotImplementedError:
class A:
    def __init__(self):
        self.some_field = None
    def own_method(self):
        pass
    def descendant_method(self):
        if self.some_field is None:
            raise NotImplementedError

class B(A):
    def __init__(self):
        super(B, self).__init__()
        self.some_field = 'B'
But this is modifying the method's runtime behaviour, which I don't want to do
Use a mixin:
class A:
    def own_method(self):
        pass

class AA:
    def descendant_method(self):
        pass

class B(AA, A):
    pass
This is nice as long as descendant_method doesn't use much from A; otherwise we'd have to make AA inherit from A, and that defeats the whole point.
Make the method private in A and redefine it in a metaclass:
class A:
    def own_method(self):
        pass
    def __descendant_method(self):
        pass

class AMeta(type):
    def __new__(mcs, name, parents, dct):
        par = parents[0]
        desc_method_private_name = '_{}__descendant_method'.format(par.__name__)
        if desc_method_private_name in par.__dict__:
            dct['descendant_method'] = par.__dict__[desc_method_private_name]
        return super(AMeta, mcs).__new__(mcs, name, parents, dct)

class B(A, metaclass=AMeta):
    def __init__(self):
        super(B, self).__init__()
This works, but obviously looks dirty, just like writing self.descendant_method = self._A__descendant_method in B itself.
What would be the right "pythonic" way of achieving this behaviour?
UPD: putting the method directly in B would work, of course, but I expect that A will have many descendants that will use this method and do not want to define it in every subclass.
What is so bad about making AA inherit from A? It's basically an abstract base class that adds additional functionality that isn't meant to be available in A. If you really don't want AA to ever be instantiated then the pythonic answer is not to worry about it, and just document that the user isn't meant to do that. Though if you're really insistent you can define __new__ to throw an error if the user tries to instantiate AA.
class A:
    def f(self):
        pass

class AA(A):
    def g(self):
        pass
    def __new__(cls, *args, **kwargs):
        if cls is AA:
            raise TypeError("AA is not meant to be instantiated")
        return super().__new__(cls)

class B(AA):
    pass
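A quick check of the guard above (my addition):

b = B()     # fine: B is a concrete subclass of AA
aa = AA()   # raises TypeError: AA is not meant to be instantiated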
Another alternative might be to make AA an Abstract Base Class. For this to work you will need to define at least one method as being abstract -- __init__ could do if there are no other methods you want to say are abstract.
from abc import ABCMeta, abstractmethod

class A:
    def __init__(self, val):
        self.val = val
    def f(self):
        pass

class AA(A, metaclass=ABCMeta):
    @abstractmethod
    def __init__(self, val):
        super().__init__(val)
    def g(self):
        pass

class B(AA):
    def __init__(self, val):
        super().__init__(val)
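And the corresponding check for the ABC version (my addition):

b = B(5)     # fine: B overrides the abstract __init__
aa = AA(5)   # raises TypeError: can't instantiate abstract class AA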
Very finally, what's so bad about having the descendant method available on A but just not using it? You are writing the code for A, so just don't use the method... You could even document that the method isn't meant to be used by A directly, but is rather meant to be available to child classes. That way future developers will know your intentions.
As far as I can tell, this may be the most Pythonic way of accomplishing what you want:
class A:
    def own_method(self):
        pass
    def descendant_method(self):
        raise NotImplementedError

class B(A):
    def descendant_method(self):
        ...
Another option could be the following:
class A:
    def own_method(self):
        pass
    def _descendant_method(self):
        pass

class B(A):
    def descendant_method(self):
        return self._descendant_method()
They're both Pythonic because they're explicit, readable, clear and concise.
They're explicit because they don't do any unnecessary magic.
They're readable because one can tell precisely what you're doing, and what your intention was, at first glance.
They're clear because the leading single underscore is a widely used convention in the Python community for private (non-magic) methods: any developer who sees it should know to tread with caution.
Choosing between these approaches will depend on your use case. A more concrete example in your question would be helpful.
Try checking the class name using __class__.__name__:
class A(object):
    def descendant_method(self):
        if self.__class__.__name__ == A.__name__:
            raise NotImplementedError
        print('From descendant')

class B(A):
    pass

b = B()
b.descendant_method()   # prints: From descendant
a = A()
a.descendant_method()   # raises NotImplementedError
