classmethod as constructor and inheritance - python

The problem is quite simple: a class B inherits from a class A and wants to override a classmethod that is used as a constructor (I guess you would call that a "factory method"). B's classmethod wants to reuse A's, but then it would have to create an instance of class A even though it is being called on the subclass B - since, as a classmethod, it has no self. That doesn't seem like the right way to design it.
I made the example trivial; in reality I do more complicated stuff, reading numpy arrays, etc. But I think no information is lost here.
class A:
    def __init__(self, a):
        self.el1 = a

    @classmethod
    def from_csv(cls, csv_file):
        a = read_csv(csv_file)
        return cls(a)

    @classmethod
    def from_hdf5 ...
class B(A):
    def __init__(self, a, b):
        A.__init__(self, a)
        self.el2 = b

    @classmethod
    def from_csv(cls, csv_file):
        a_ = A.from_csv(csv_file)  # instance of A created inside B's factory
        b = [el * 2 for el in a_.el1]
        return cls(a_.el1, b)
Is there a pythonic way to deal with that?

After some different trials, my conclusion is that you should override a classmethod without reusing the code inside it. So the best way I found, for my particular problem, is to keep the classmethod as simple as possible and put the code I want to reuse in another method - a static method, in my case, since the classmethod is a constructor.

One easy solution would be to give class B's __init__ method a default value for its b parameter. This lets the cls(a) call made by A.from_csv work when it is inherited. If the default is used, __init__ can compute the value to store from a (as B.from_csv does now).
class B(A):
    def __init__(self, a, b=None):
        super().__init__(a)  # use super(B, self).__init__(a) if you're in Python 2
        self.el2 = b if b is not None else [i * 2 for i in a]

    # don't override from_csv; the inherited B.from_csv will already return a B instance!
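A runnable sketch of this approach; read_csv here is a stand-in stub, since the original reads real CSV files and numpy arrays:

```python
def read_csv(csv_file):
    # stand-in for the real CSV/numpy reading code
    return [1, 2, 3]

class A:
    def __init__(self, a):
        self.el1 = a

    @classmethod
    def from_csv(cls, csv_file):
        return cls(read_csv(csv_file))

class B(A):
    def __init__(self, a, b=None):
        super().__init__(a)
        self.el2 = b if b is not None else [i * 2 for i in a]

b = B.from_csv("data.csv")  # cls is B here, so a B instance comes back
print(type(b).__name__, b.el1, b.el2)  # B [1, 2, 3] [2, 4, 6]
```

The inherited classmethod does the right thing because cls is bound to whichever class it is called on.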

Related

Inherit from dataclass without specifying everything in subclass' __init__?

I'm building a hierarchy of dataclasses.dataclass (Python 3.10).
E.g.
import dataclasses

@dataclasses.dataclass
class A:
    a: int

@dataclasses.dataclass
class B(A):
    b: int
I now have a bunch of As, which I want to additionally specify as B without adding all of A's properties to the constructor. Something like this:
a = A(1)
b = B(a, 1)
I know I could use dataclasses.asdict
b = B(**dataclasses.asdict(a), b=1)
Is this an acceptable solution in the sense of best practice? It looks a bit inefficient and less readable than it could be.
I tried overriding B.__init__ (and B.__new__), but that seems to need too much code. And with an overridden __init__, it's no longer possible to make the dataclass frozen.
You can define a classmethod on B to make your approach more ergonomic.
@dataclasses.dataclass
class B(A):
    b: int

    @classmethod
    def from_A(cls, a, **kwargs):
        return cls(**dataclasses.asdict(a), **kwargs)

b = B.from_A(a, b=1)
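Since this approach doesn't override __init__, it also composes with frozen=True - a quick self-contained sketch:

```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class A:
    a: int

@dataclasses.dataclass(frozen=True)
class B(A):
    b: int

    @classmethod
    def from_A(cls, a, **kwargs):
        # asdict flattens A's fields; the extra fields come in via kwargs
        return cls(**dataclasses.asdict(a), **kwargs)

b = B.from_A(A(1), b=2)
print(b)  # B(a=1, b=2)
```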

Value Changed in Class B calls function in Class A to update object of class A

I want to call a function from a class A inside another class B. However, it should be called on an object of A. I mean, if I have something like this:
class A:
    def __init__(self, ....):
        self.valuechanged = False
        # do something
        objectfromb = B()
        self.somearray.append(objectfromb)

    def updateevent(self):
        self.valuechanged = True
        # do some things if update event triggered

class B:
    def __init__(self, ...):
        self.somevalue = 0
        self.someothervalue = 1
        # do something

    def updatesomevalue(self, somenewvalue):
        self.somevalue = somenewvalue
        # !!! HERE SHOULD BE A CALL TO CLASS A FUNCTION updateevent
And in my code I use the classes like this:
a=A()
Then I would have a list somearray in a (a.somearray) which contains an object of B. So if I want to update this object of B with:
a.somearray[0].updatesomevalue(10)
then not only should a.somearray[0].somevalue get a new value, but the updateevent method of class A should also fire, changing a. How can I do that?
There are two ways I can think of to achieve this without invoking any special magic.
The first is to have objects of type B know which object A they belong to, so that they can call updateevent on it. I'm generally not a fan of this, as there's extra admin work to do when moving instances of B between instances of A and such. If that's not a concern, it may be the best way. You'd do it something like this (with a convenience method on A that creates a B and sets the correct parent):
class A:
    valuechanged = False
    somearray = []

    def add_b(self):
        b = B(self)
        self.somearray.append(b)
        return b

    def updateevent(self):
        self.valuechanged = True

class B:
    somevalue = 0
    someothervalue = 1

    def __init__(self, parent):
        self.parent = parent

    def updatesomevalue(self, somenewvalue):
        self.somevalue = somenewvalue
        self.parent.updateevent()
The second is to provide a method on A that does both tasks. This is only suitable if 1) you know A will always contain instances of B and only B, and 2) B's interface is relatively small (to avoid providing lots of delegating methods of this type on A). You would implement it as something like:
class A:
    valuechanged = False
    somearray = []

    def updatesomevalue(self, index, new_value):
        self.somearray[index].updatesomevalue(new_value)
        self.updateevent()

    def updateevent(self):
        self.valuechanged = True

class B:
    somevalue = 0
    someothervalue = 1

    def updatesomevalue(self, somenewvalue):
        self.somevalue = somenewvalue
Something I haven't addressed is that somearray, somevalue, etc. are all created as class attributes in your example (i.e. they will be shared among all instances instead of each instance having its own). This is likely not what you wanted.
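A self-contained sketch of the first approach, using instance attributes (per the caveat about class attributes) rather than class attributes:

```python
class A:
    def __init__(self):
        self.valuechanged = False
        self.somearray = []

    def add_b(self):
        # create a B that knows its parent, and track it
        b = B(self)
        self.somearray.append(b)
        return b

    def updateevent(self):
        self.valuechanged = True

class B:
    def __init__(self, parent):
        self.parent = parent
        self.somevalue = 0

    def updatesomevalue(self, somenewvalue):
        self.somevalue = somenewvalue
        self.parent.updateevent()  # notify the owning A

a = A()
a.add_b()
a.somearray[0].updatesomevalue(10)
print(a.valuechanged, a.somearray[0].somevalue)  # True 10
```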

Dynamically replace a given class in the inheritance tree with another one and initialize it

I am currently working on a library.
I would like to write a wrapper that replaces a class (Mean in the example) in the inheritance tree with a new class (say WindowedMean), and I would like to initialize that class (for example with k=10).
The Mean class can be anywhere in the inheritance tree; this is just one example.
This link shows the example
I know it's not advisable.
Do you have an elegant way to do this?
I imagine using the wrapper like this:
metric = Wrapper(MyClass, k=10)
update
Although the solution below will work exactly as you described, it came to my attention that with multiple inheritance, what you are asking for can happen naturally.
Just inherit from the class you want to modify and use the normal inheritance mechanisms to override the inherited behavior: set a k=10 class attribute, and hardcode super-calls to Mean's parent instead of using super, if needed.
Then just create a new child of MyClass and add the overridden child of Mean to the inheritance tree. Note that this subclass of MyClass does not need a single statement in its body, and will behave exactly like MyClass, except for the modified Mean now sitting in the proper place in the MRO.
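A minimal, self-contained sketch of that idea, with the deep tree collapsed to a couple of levels (MeanMod stands in for the configured WindowedMean):

```python
class Mean:
    def compute(self):
        return "plain mean"

class Parent1(Mean):
    pass

class MyClass(Parent1):
    pass

# Override the inherited behavior in a subclass of Mean...
class MeanMod(Mean):
    k = 10
    def compute(self):
        return f"windowed mean, k={self.k}"

# ...and splice it in with an empty subclass of MyClass.
class MyClassMod(MyClass, MeanMod):
    pass

print(MyClassMod().compute())  # windowed mean, k=10
```

The MRO of MyClassMod places MeanMod before Mean, so the overridden method wins without touching MyClass itself.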
Directly in the interpreter (I had to resort to exec to be able to type the whole class hierarchy in a single input):
In [623]: exec("class GreatGrandParent1: pass\nclass GreatGrandParent2: pass\n"
     ...:      "class GrandParent1(GreatGrandParent1, GreatGrandParent2): pass\n"
     ...:      "class Mean: pass\nclass Parent1(GrandParent1, Mean): pass\n"
     ...:      "class Parent2: pass\nclass MyClass(Parent1, Parent2): pass")
In [624]: MyClass.__mro__
Out[624]:
(__main__.MyClass,
__main__.Parent1,
__main__.GrandParent1,
__main__.GreatGrandParent1,
__main__.GreatGrandParent2,
__main__.Mean,
__main__.Parent2,
object)
In [625]: class MeanMod(Mean):
     ...:     k = 10
     ...:
In [626]: class MyClassMod(MyClass, MeanMod): pass
In [627]: MyClassMod.__mro__
Out[627]:
(__main__.MyClassMod,
__main__.MyClass,
__main__.Parent1,
__main__.GrandParent1,
__main__.GreatGrandParent1,
__main__.GreatGrandParent2,
__main__.MeanMod,
__main__.Mean,
__main__.Parent2,
object)
Note there is one case where that won't work straightforwardly: if a super call in Mean was supposed to reach a method on Parent2 in your example. In that case, either resort to the original solution, or use some clever __mro__ manipulation to skip the method in Mean.
original answer
It looks like this would work with a class decorator (which can also be used with the "wrapper" syntax you want).
There are surely corner cases and things that might go wrong, but if your tree is somewhat well behaved, we have to recursively take all the bases of the class you want to wrap and make the substitution in those bases. We can't simply take the final class, unroll all the ancestors in its __mro__ and just replace the desired class there: the inheritance line would be broken below it.
That is, suppose you have classes A, B(A), C(B), D(C), and you want a clone D2 of D, replacing B with B2(A). D.__bases__ is (C,) and D.__mro__ is (D, C, B, A, object). If we tried to create a new D2 forcing the __mro__ to be (D, C, B2, A, object), class C would break, as it would no longer see B. (And the code in the previous version of this answer would leave both B and B2 in the inheritance line, leading to further breakage.)
The solution below therefore recreates not only a new B but also a new C.
Just beware that if B2 did not itself inherit from A in this same example, A itself would be removed from the __mro__, since the replacement B2 does not need it. If D has any functionality that depends on A, it will break. This is not easily fixable, beyond adding a checking mechanism that ensures the replacing class has the same ancestors the replaced class had, and raising an error otherwise - that would be easy to do. But figuring out how to pick and include the "now gone missing" ancestors is not easy, as it is impossible to know whether they are needed at all.
As for the configuration part, without an example of how your RollingMean class is "configured" it is hard to give a concrete example - but let's make a subclass of it whose namespace is updated with the passed parameters; that should do for any configuration needed.
from types import new_class

def norm_name(config):
    return "_" + "__".join(f"{key}_{value}" for key, value in config.items())

def ancestor_replace(oldclass: type, newclass: type, config: dict):
    substitutions = {}
    configured_newclass = type(newclass.__name__ + norm_name(config), (newclass,), config)

    def replaced_factory(cls):
        if cls in substitutions:
            return substitutions[cls]
        bases = cls.__bases__
        new_bases = []
        for base in bases:
            if base is oldclass:
                new_bases.append(configured_newclass)
            else:
                new_bases.append(replaced_factory(base))
        if tuple(new_bases) != bases:  # compare as tuples; a list never equals a tuple
            new_cls = new_class(
                cls.__name__,
                tuple(new_bases),
                exec_body=lambda ns: ns.update(cls.__dict__.copy()),
            )
            substitutions[cls] = new_cls
            return new_cls
        return cls

    return replaced_factory
# can be used as:
# MyNewClass = ancestor_replace(Mean, RollingMean, {"ks": 10})(MyClass)
I take some care there to derive the proper metaclass - if no class in your inheritance tree uses a metaclass other than type (ABCs and ORM models usually have differing metaclasses), you could use a plain type call instead of types.new_class.
And finally, an example of using this in the interactive prompt:
In [152]: class A:
     ...:     pass
     ...:
     ...: class B(A):
     ...:     pass
     ...:
     ...: class B2(A):
     ...:     pass
     ...:
     ...: class C(B):
     ...:     pass
     ...:
     ...: class D(B):
     ...:     pass
     ...:
     ...: class E(D):
     ...:     pass
     ...:
In [153]: E2 = ancestor_replace(B, B2, {"k": 20})(E)
In [154]: E.__mro__
Out[154]: (__main__.E, __main__.D, __main__.B, __main__.A, object)
In [155]: E2.__mro__
Out[155]:
(__main__.E_modified,
__main__.D_modified,
__main__.B2_k_20,
__main__.B2,
__main__.A,
object)
In [156]: E2.k
Out[156]: 20
Maybe something like this?
class Parent1(object):
    def __init__(self):
        super(Parent1, self).__init__()

    def speak(self):
        print('I am parent 1')

class Parent2(object):
    def __init__(self):
        super(Parent2, self).__init__()

    def speak(self):
        print('I am parent 2')

class Child():
    """docstring for Child."""
    def __init__(self):
        pass

    def child_method(self):
        print('child method')

def make_child(inhert):
    class ChildInhert(Child, inhert):
        def __init__(self):
            inhert.__init__(self)
    return ChildInhert()

if __name__ == '__main__':
    child = make_child(Parent1)
    child.speak()
    child.child_method()

How to change the base class in Python

Say I have the following:
class A(object):
    # base functions

class B(A):
    # some useful functions

class C(object):
    # required base functions
Now I want to create a class which has all the functions from B, but refers to the functions from C instead of those from A. Something like:
class D(B, C)
and when B calls super, it should look in C instead of A.
The way I am achieving this now is by copy-pasting the whole class B and just inheriting from C instead of A.
Is there a better way to solve this problem?
Composition could surely solve the problem, but class B is already in heavy use, so I don't want to change it.
Change the __bases__ attribute of class B.
Well, it's ugly but it works:
class A(object):
    def f(self):
        print("A.f()")

class B(A):
    def g(self):
        print("B.g()", end=": ")
        super(B, self).f()

class C(object):
    def f(self):
        print("C.f()")

B.__bases__ = (C,)

b = B()
b.g()
You get:
B.g(): C.f()
You can use fudge.patch to do it during your unit test.
Note that your question is a duplicate of How to dynamically change base class of instances at runtime?

Most pythonic way dealing with combinations of parameters for instantiating a class?

Let's say I have a class Foo:
class Foo(object):
    @staticmethod
    def get_a(b, c):
        if not b or not c:
            raise ValueError("Invalid params!")
        return b + c

    def __init__(self, a=None, b=None, c=None):
        if not a:
            a = Foo.get_a(b, c)
        self.a = a
The user can use the class with either a or both b and c. If a is provided, b and c are ignored.
What is better: erroring when all three parameters are provided (making sure the programmer is conscious of which one is being used) or putting it into the docs that b and c will be ignored if a is provided?
On one hand, erroring is more explicit, which is pythonic (Explicit is better than implicit). On the other hand, accepting whatever works is more practical (Although practicality beats purity).
I'd give the class a separate classmethod factory instead:
class Foo(object):
    def __init__(self, a):
        self.a = a

    @classmethod
    def from_b_and_c(cls, b, c):
        return cls(b + c)
This is the real Explicit option; you either create Foo(a), or you use Foo.from_b_and_c(b, c) to produce an instance with very different arguments. This immediately documents how the parameters are separate; either you create an instance from just a, or you create an instance from both b and c together.
This is a common pattern; if you have more than one way to produce an instance, provide additional factory methods in the form of class methods.
For example, you can produce a datetime.date() instance:
with the standard year, month and day: date(2014, 10, 23)
from your system date: date.today()
from a POSIX timestamp: date.fromtimestamp(1414018800.0)
from an ordinal (days since 0001-01-01): date.fromordinal(735529)
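A quick check that two of these factory methods really produce the same instance (date.fromtimestamp is left out because its result depends on the local timezone):

```python
from datetime import date

d1 = date(2014, 10, 23)        # year, month, day
d2 = date.fromordinal(735529)  # days since 0001-01-01
print(d1 == d2)  # True
print(date.today())  # whatever today happens to be
```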