I am designing a class in Python whose properties have a mesh of interdependencies.
Say it has a property A. When A is set to True, properties B and C can be used; otherwise they can't be used. Properties B and C may be of any type: a boolean, an int, a string, or any custom class.
Also say that if B is enabled, then we can have either property D or E or F (a checkbox-like behaviour).
How do I design such dependencies in a Python class?
I may also have similar classes with such dependencies, so I am thinking of a metaclass, base class, or template-like design where the user specifies the dependencies and the code is generated internally.
Any design inputs on how to proceed?
I'm not sure exactly what you mean by "designing dependencies", but I'd just give the class some methods decorated with @property that check self.a and self.b before returning a value.
class Foo:
    def __init__(self, a):
        self.a = a

    @property
    def b(self):
        return "b" if self.a else None

    @property
    def c(self):
        return "c" if self.a else None

    @property
    def d(self):
        return "d" if self.b else None

    @property
    def e(self):
        return "e" if self.b else None

    @property
    def f(self):
        return "f" if self.b else None
I have a Python class in which a few arguments are sent to the constructor, as below.
class Test(object):
    def __init__(self, a, b):
        self.a = a
        if b < 10:
            self.a = a * 2
I know that constructors are just meant to initialize variables and that there should be no logic inside a constructor. But if not this way, how can I set the value of the "a" variable based on logic involving the "b" variable? I tried to use a property; the following is my code:
class Test(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        if self.b < 10:
            self._a = value * 2
        else:
            self._a = value
But the problem is that the setter is not called as expected when initializing with the constructor. So how can I solve this problem of modifying the default setting of a few variables inside a constructor?
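One pattern that is sometimes used for this (just a sketch, not from the original post): assign b before a in __init__, so that by the time the a setter runs it can read self.b as a plain attribute.

class Test(object):
    def __init__(self, a, b):
        self.b = b   # set b first so the setter below can read it
        self.a = a   # this assignment goes through the property setter

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        self._a = value * 2 if self.b < 10 else value

t = Test(3, 5)
print(t.a)  # 6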
I have the following class structure:
class Base:
    def z(self):
        raise NotImplementedError()

class A(Base):
    def z(self):
        self._x()
        return self._z()

    def _x(self):
        # do stuff
        pass

    def _z(self):
        raise NotImplementedError()

class B(Base):
    def z(self):
        self._x()
        return self._z()

    def _x(self):
        # do stuff
        pass

    def _z(self):
        raise NotImplementedError()

class C(A):
    def _z(self):
        print(5)

class D(B):
    def _z(self):
        print(5)
The implementations of C(A) and D(B) are exactly the same and do not really care which class they inherit from. The conceptual difference is only in A and B (and these need to be kept as separate classes). Instead of writing separate definitions for C and D, I want to be able to dynamically inherit from either A or B based on an argument provided at the time of creating an instance of C/D (eventually C and D should be the same name).
It seems that metaclasses might work, but I am not sure how to pass an __init__ argument to the metaclass's __new__ (or whether this will actually work). I would really prefer a solution that resolves the problem inside the class.
Have you considered using composition instead of inheritance? It seems like it is much more suitable for this use case. See the bottom of the answer for details.
Anyway,
writing class C(A): ... and then class C(B): ... won't give you both; it results in only class C(B) getting defined.
I'm not sure a metaclass will be able to help you here. I believe the best way would be to use type, but I'd love to be corrected.
A solution using type (and probably misusing locals(), but that's not the point here):
class A:
    def __init__(self):
        print('Inherited from A')

class B:
    def __init__(self):
        print('Inherited from B')

class_to_inherit = input()  # 'A' or 'B'
C = type('C', (locals()[class_to_inherit],), {})
C()
'A' or 'B'
>> A
Inherited from A
'A' or 'B'
>> B
Inherited from B
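If you would rather not go through input()/locals(), roughly the same idea can be wrapped in a small factory function (the make_c name is mine, not from the question):

class A:
    def __init__(self):
        print('Inherited from A')

class B:
    def __init__(self):
        print('Inherited from B')

def make_c(base):
    # build a class named C on top of whichever base is passed in
    return type('C', (base,), {})

make_c(A)()  # Inherited from A
make_c(B)()  # Inherited from B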
Composition
Tracking back to the question in the beginning of my answer, you state yourself that the implementation of both "C(A)" and "C(B)" is identical and they don't actually care about A or B. It seems more correct to me to use composition. Then you can do something along the lines of:
class A: pass
class B: pass

class C:
    def __init__(self, obj):  # obj is either an A or B instance, or A or B themselves
        self.obj = obj        # or self.obj = obj() if obj is A or B themselves

c = C(A())  # or c = C(A)
In case C should expose the same API as A or B, C can override __getattr__:
class A:
    def foo(self):
        print('foo')

class C:
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, item):
        return getattr(self.obj, item)

C(A()).foo()
# foo
Suppose I have a class with 3 instance attributes, 'a', 'b' and 'c', which are each initialized with property setters. Now, my property 'b' assignment should use the value of the instance variable 'a'. So for 'b' to be initialized, 'a' has to be initialized beforehand.
Following the code below, does Python set attribute 'a' first, then go to 'b', and then finally to 'c', or may the initialisation occur in any random order, which might destroy the possibility of successfully initialising the variables?
class Foo(object):
    def __init__(self):
        self.a = None
        self.b = None
        self.c = None

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        self._a = value

    @property
    def b(self):
        return self._b

    @b.setter
    def b(self, value):
        value = self.a
        self._b = value

    @property
    def c(self):
        return self._c

    @c.setter
    def c(self, value):
        value = self.b
        self._c = value
I am asking this question in a simplified version because I am having difficulties in a real case. In that case, I used logs to view the execution of the initialisation, and it appears to me that it starts by executing the last property ('c' in this case), instead of the desired first one, 'a'.
__init__ is like any other method in Python; the statements in it are executed in the order given, so in your example code, a is set before b, which is set before c, always.
The Python language spec in general provides stronger ordering guarantees than languages like C/C++ (e.g. a, b, c = d, e, f guarantees that d is read first, then e, then f, and a is set first, then b, then c).
It does not matter if they are properties, plain attributes, or whatever; assignment might do funky things, but those things occur in the order the statements occur.
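As a quick sanity check (a throwaway sketch, not part of the question's code), printing from the setters shows the left-to-right order:

class Foo(object):
    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        print('setting a')
        self._a = value

    @property
    def b(self):
        return self._b

    @b.setter
    def b(self, value):
        print('setting b, a is', self.a)
        self._b = value

f = Foo()
f.a, f.b = 1, 2
# setting a
# setting b, a is 1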
I have a class A. During the __init__ method of an instance of A,
I create the following two instances of classes B and C:
b = B()
c = C()
Once all's set, I need to call, within a method of B, a method from C.
Example:
Triggered:
b.call_c()
Does:
def call_c(self):
    parent.c.a_method_of_c()
What do I need to do to achieve this structure?
You need to pass either self or c to B() so that it can know about the other object.
Here is how this looks if you pass the A object to both B and C as a parent/container object:
class A(object):
    def __init__(self):
        self.b = B(self)
        self.c = C(self)

class B(object):
    def __init__(self, parent):
        self.parent = parent

    def call_c(self):
        self.parent.c.a_method_of_c()

class C(object):
    def __init__(self, parent):
        self.parent = parent
    # whatever...
Or, you can just pass the C instance to B's initializer like this:
class A(object):
    def __init__(self):
        self.c = C()
        self.b = B(self.c)

class B(object):
    def __init__(self, c):
        self.cobj = c

    def call_c(self):
        self.cobj.a_method_of_c()

class C(object):
    pass  # whatever...
I like the second approach better, since it cuts out the dependencies of B and C on A, and the necessity of A to implement b and c attributes.
If B and C have to call methods on each other, you can still use A to make these associations, but keep B and C ignorant of A:
class A(object):
    def __init__(self):
        self.b = B()
        self.c = C()
        self.b.cobj = self.c
        self.c.bobj = self.b

class B(object):
    def __init__(self):
        self.cobj = None

    def call_c(self):
        if self.cobj is not None:
            self.cobj.a_method_of_c()
        else:
            raise Exception("B instance not fully initialized")

class C(object):
    pass  # similar to B
In general, your goal is to try to avoid or at least minimize these dependencies - have a parent know about a child, but a child be ignorant of the parent. Or a container knows its contained objects, but the contained objects do not know their container. Once you add circular references (back references to a parent or container object), things can get ugly in all kinds of surprising ways. A relationship can get corrupted when one of the links gets cleared but not the reflecting link. Or garbage-collection in circular relations can get tricky (handled in Python itself, but may not be handled if these objects and relations are persisted or replicated in a framework).
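If you do end up needing a back reference, one common mitigation (not shown in the snippets above, and only a sketch) is to hold it weakly so the child does not keep the parent alive:

import weakref

class Child(object):
    def __init__(self, parent):
        # keep only a weak reference to the parent
        self._parent = weakref.ref(parent)

    @property
    def parent(self):
        # returns None once the parent has been garbage-collected
        return self._parent()

class Parent(object):
    def __init__(self):
        self.child = Child(self)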
I need to call, within a method of B, a method from C.
Basically, if the method is not a class method or a static method, then calling it always means that you have access to an instance (c) of the C class.
Have a look at the example:
#!python3

class B:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return 'class B object with the value ' + str(self.value)

class C:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return 'class C object with the value ' + str(self.value)

class A:
    def __init__(self, value):
        self.value = value
        self.b = B(value * 2)
        self.c = C(value * 3)

    def __str__(self):
        lst = ['class A object with the value ' + str(self.value),
               ' containing the ' + self.b.__str__(),
               ' containing also the ' + str(self.c),
               ]
        return '\n'.join(lst)

a = A(1)
print(a)
print(a.b)
print(a.c)
The self.b.__str__() call is an example of calling a method of the B-class object from a method of the A-class object. The str(self.c) does the same thing, only called indirectly via the str() function.
The following is displayed:
class A object with the value 1
containing the class B object with the value 2
containing also the class C object with the value 3
class B object with the value 2
class C object with the value 3
I need to refactor existing code by collapsing a method that's copy-and-pasted between various classes that inherit from one another into a single method.
So I produced the following code:
class A(object):
    def rec(self):
        return 1

class B(A):
    def rec(self):
        return self.rec_gen(B)

    def rec_gen(self, rec_class):
        return super(rec_class, self).rec() + 1

class C(B):
    def rec(self):
        return self.rec_gen(C)

if __name__ == '__main__':
    b = B(); c = C()
    print c.rec()
    print b.rec()
And the output:
3
2
What still bothers me is that in the 'rec' method I need to tell 'rec_gen' the context of the class in which it's running. Is there a way for 'rec_gen' to figure it out by itself at runtime?
This capability has been added to Python 3 - see PEP 3135. In a nutshell:
class B(A):
    def rec(self):
        return super().rec() + 1
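Applied to the original example, the whole hierarchy collapses to something like this under Python 3 (the prints are mine, just to show the result matches the question's output):

class A(object):
    def rec(self):
        return 1

class B(A):
    def rec(self):
        return super().rec() + 1

class C(B):
    def rec(self):
        return super().rec() + 1

print(C().rec())  # 3
print(B().rec())  # 2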
I think you've created the convoluted rec()/rec_gen() setup because you couldn't automatically find the class, but in case you want that anyway the following should work:
class A(object):
    def rec(self):
        return 1

class B(A):
    def rec(self):
        # __class__ is a cell that is only created if super() is in the method
        super()
        return self.rec_gen(__class__)

    def rec_gen(self, rec_class):
        return super(rec_class, self).rec() + 1

class C(B):
    def rec(self):
        # __class__ is a cell that is only created if super() is in the method
        super()
        return self.rec_gen(__class__)
The simplest solution in Python 2 is to use a private member to hold the super object:
class B(A):
    def __init__(self):
        self.__super = super(B)

    def rec(self):
        return self.__super.rec() + 1
But that still suffers from the need to specify the actual class in one place, and if you happen to have two identically-named classes in the class hierarchy (e.g. from different modules) this method will break.
There were a couple of us who made recipes for automatic resolution for Python 2 prior to the existence of PEP 3135 - my method is at self.super on ActiveState. Basically, it allows the following:
class B(A, autosuper):
    def rec(self):
        return self.super().rec() + 1
or in the case that you're calling a parent method with the same name (the most common case):
class B(A, autosuper):
    def rec(self):
        return self.super() + 1
Caveats to this method:
It's quite slow. I have a version sitting around somewhere that does bytecode manipulation to improve the speed a lot.
It's not consistent with PEP 3135 (although it was a proposal for the Python 3 super at one stage).
It's quite complex.
It's a mix-in base class.
I don't know if the above would enable you to meet your requirements. With a small change to the recipe though you could find out what class you're in and pass that to rec_gen() - basically extract the class-finding code out of _getSuper() into its own method.
An alternative solution for Python 2.x would be to use a metaclass to automatically define the rec method in all your subclasses:
class RecGen(type):
    def __new__(cls, name, bases, dct):
        new_cls = super(RecGen, cls).__new__(cls, name, bases, dct)
        if bases != (object,):
            def rec(self):
                return super(new_cls, self).rec() + 1
            new_cls.rec = rec
        return new_cls

class A(object):
    __metaclass__ = RecGen

    def rec(self):
        return 1

class B(A):
    pass

class C(B):
    pass
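For what it's worth, with that metaclass in place the classes should behave like the hand-written versions (a quick check, assuming Python 2 since __metaclass__ is used):

if __name__ == '__main__':
    b = B(); c = C()
    print c.rec()  # 3
    print b.rec()  # 2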
Note that if you're just trying to get something like the number of parent classes, it would be easier to use self.__class__.__mro__ directly:
class A(object):
    def rec(self):
        return len(self.__class__.__mro__) - 1

class B(A):
    pass

class C(B):
    pass
I'm not sure exactly what you're trying to achieve, but if it is just to have a method that returns a different constant value for each class then use class attributes to store the value. It isn't clear at all from your example that you need to go anywhere near super().
class A(object):
    REC = 1

    def rec(self):
        return self.REC

class B(A):
    REC = 2

class C(B):
    REC = 3

if __name__ == '__main__':
    b = B(); c = C()
    print c.rec()
    print b.rec()