I want a Python class that has a nested class, where the inner class can access the members of the outer class. I understand that normal nesting doesn't even require an instance of the outer class. I have some code that seems to generate the results I desire, and I want feedback on style and unforeseen complications.
Code:
class A():
    def __init__(self, x):
        self.x = x
        self.B = self.classBdef()

    def classBdef(self):
        parent = self
        class B():
            def out(self):
                print parent.x
        return B
Output:
>>> a = A(5)
>>> b = a.B()
>>> b.out()
5
>>> a.x = 7
>>> b.out()
7
So, A has an inner class B, which can only be created from an instance of A. Then B has access to all the members of A through the parent variable.
This doesn't look very good to me. classBdef is a class factory method. Class factories are used only rarely, to create customized classes, e.g. a class with a custom superclass:
def class_factory(superclass):
    class CustomClass(superclass):
        def custom_method(self):
            pass
    return CustomClass
But your construct doesn't make use of any customization. In fact it pushes state of A into B and couples them tightly. If B needs to know about some A variable, either pass it in as a method parameter or instantiate the B object with a reference to the A object.
Unless there is a specific reason or problem you need to solve, it would be much easier and clearer to give A a normal factory method that returns a B object, instead of constructs like b = a.B():
class B(object):
    def __init__(self, a):
        self.a = a
    def out(self):
        print self.a.x

class A(object):
    def __init__(self, x):
        self.x = x
    def create_b(self):
        return B(self)

a = A(5)
b = a.create_b()
b.out()
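For completeness, the other option mentioned above (passing the needed value in as a method parameter instead of holding a reference) could look like this minimal sketch; the show method is just an illustrative name:

class B(object):
    def out(self, x):
        print x               # B only ever sees the value it is handed

class A(object):
    def __init__(self, x):
        self.x = x
        self.b = B()
    def show(self):
        self.b.out(self.x)    # A passes its own state explicitly

a = A(5)
a.show()  # 5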
I don't think what you're trying to do is a very good idea. "Inner" classes in Python have absolutely no special relationship with their "outer" class if you define one inside the other. It is exactly the same to say:
class A(object):
    class B(object):
        pass
as it is to say:
class B(object): pass
class A(object): pass
A.B = B
del B
That said, it is possible to accomplish something like what you're describing, by making your "inner" class into a descriptor, by defining __get__() on its metaclass. I recommend against doing this -- it's too complicated and yields little benefit.
class ParentBindingType(type):
    def __get__(cls, inst, instcls):
        return type(cls.__name__, (cls,), {'parent': inst})
    def __repr__(cls):
        return "<class '%s.%s' parent=%r>" % (cls.__module__,
            cls.__name__, getattr(cls, 'parent', None))

class B(object):
    __metaclass__ = ParentBindingType
    def out(self):
        print self.parent.x

class A(object):
    _B = B
    def __init__(self, x):
        self.x = x
        self.B = self._B

a = A(5)
print a.B
b = a.B()
b.out()
a.x = 7
b.out()
printing:
<class '__main__.B' parent=<__main__.A object at 0x85c90>>
5
7
Related
I have a class A which I want to inherit from; this class has a classmethod that can initialize a new instance from some data. I don't have access to the code for from_data and can't change the implementation of A.
I want to initialize new instances of class B using the same data I would pass to A's from_data method. In the solution I came up with, I create a new instance of A in __new__(...) and change its __class__ to B. __init__(...) can then further initialize the "new instance of B" as normal. It seems to work, but I'm not sure whether it has side effects.
So will this work reliably? Is there a proper way of achieving this?
class A:
    def __init__(self, alpha, beta):
        self.alpha = alpha
        self.beta = beta

    @classmethod
    def from_data(cls, data):
        obj = cls(*data)
        return obj

class B(A):
    def __new__(cls, data):
        a = A.from_data(data)
        a.__class__ = cls
        return a

    def __init__(self, data):
        pass

b = B((5, 3))
print(b.alpha, b.beta)
print(type(b))
print(isinstance(b, B))
Output:
5 3
<class '__main__.B'>
True
It could be that your use case is more abstract than I am understanding, but testing in a REPL, it seems that calling the parent class A's constructor via super()
class A:
    # ...

class B(A):
    def __init__(self, data):
        super().__init__(*data)

b = B((5, 3))
print(b.alpha, b.beta)
print(type(b))
print(isinstance(b, B))
also results in
5 3
<class '__main__.B'>
True
Is there a reason you don't want to call super() to instantiate a new instance of your child class?
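It is also worth noting that, because from_data is a classmethod that calls cls(*data), it is inherited by B; a minimal sketch, assuming B can keep A's constructor signature:

class A:
    def __init__(self, alpha, beta):
        self.alpha = alpha
        self.beta = beta

    @classmethod
    def from_data(cls, data):
        return cls(*data)

class B(A):
    pass  # same constructor signature as A

b = B.from_data((5, 3))  # cls is B here, so a B instance comes back
print(type(b), b.alpha, b.beta)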
Edit:
So, in case you need to use the from_data constructor... you could do something like
#... class A

class B(A):
    def __init__(self, data):
        a_obj = A.from_data(data)
        for attr in a_obj.__dict__:
            setattr(self, attr, getattr(a_obj, attr))
That is really hacky though... and not guaranteed to work for all attributes of an A object, especially if attribute storage has been customized (e.g. __dict__ overridden or __slots__ used).
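A slightly more compact variant of the same idea, with the same caveats (assuming plain __dict__-based attribute storage on A instances):

class B(A):
    def __init__(self, data):
        a_obj = A.from_data(data)
        self.__dict__.update(vars(a_obj))  # copy every instance attribute over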
I have the following class structure:
class Base:
    def z(self):
        raise NotImplementedError()

class A(Base):
    def z(self):
        self._x()
        return self._z()
    def _x(self):
        pass  # do stuff
    def _z(self):
        raise NotImplementedError()

class B(Base):
    def z(self):
        self._x()
        return self._z()
    def _x(self):
        pass  # do stuff
    def _z(self):
        raise NotImplementedError()

class C(A):
    def _z(self):
        print(5)

class D(B):
    def _z(self):
        print(5)
The implementation of C(A) and D(B) is exactly the same and does not really care which class it inherits from. The conceptual difference is only in A and B (and these need to be kept as separate classes). Instead of writing separate definitions for C and D, I want to be able to dynamically inherit from either A or B based on an argument provided at the time of creating an instance of C/D (ultimately C and D should be one and the same class name).
It seems that metaclasses might work, but I am not sure how to pass an __init__ argument to the metaclass __new__ (and whether this will actually work). I would really prefer a solution which resolves the problem inside the class.
Have you considered using composition instead of inheritance? It seems like it is much more suitable for this use case. See the bottom of the answer for details.
Anyway, writing class C(A): ... followed by class C(B): ... does not give you a choice of base class; the second definition simply rebinds the name C, so only class C(B) ends up defined.
I'm not sure a metaclass will be able to help you here. I believe the best way would be to use type but I'd love to be corrected.
A solution using type (and probably misusing locals() but that's not the point here)
class A:
    def __init__(self):
        print('Inherited from A')

class B:
    def __init__(self):
        print('Inherited from B')

class_to_inherit = input()  # 'A' or 'B'

C = type('C', (locals()[class_to_inherit],), {})
C()
'A' or 'B'
>> A
Inherited from A
'A' or 'B'
>> B
Inherited from B
Composition
Coming back to the question at the beginning of my answer: you state yourself that the implementation of both "C(A)" and "C(B)" is identical and does not actually care about A or B. It seems more correct to me to use composition. Then you can do something along the lines of:
class A: pass
class B: pass

class C:
    def __init__(self, obj):  # obj is either an A or B instance, or A or B themselves
        self.obj = obj        # or self.obj = obj() if obj is A or B themselves

c = C(A())  # or c = C(A)
In case C should expose the same API as A or B, C can overwrite __getattr__:
class A:
    def foo(self):
        print('foo')

class C:
    def __init__(self, obj):
        self.obj = obj
    def __getattr__(self, item):
        return getattr(self.obj, item)

C(A()).foo()
# foo
I am trying to make a Python decorator that adds attributes to methods of a class, so that I can access and modify those attributes from within the method itself. The decorator code is:
from types import MethodType

class attribute(object):
    def __init__(self, **attributes):
        self.attributes = attributes
    def __call__(self, function):
        class override(object):
            def __init__(self, function, attributes):
                self.__function = function
                for att in attributes:
                    setattr(self, att, attributes[att])
            def __call__(self, *args, **kwargs):
                return self.__function(*args, **kwargs)
            def __get__(self, instance, owner):
                return MethodType(self, instance, owner)
        retval = override(function, self.attributes)
        return retval
I tried this decorator on the toy example that follows.
class bar(object):
    @attribute(a=2)
    def foo(self):
        print self.foo.a
        self.foo.a = 1
Though I am able to access the value of attribute 'a' from within foo(), I can't set it to another value. Indeed, when I call bar().foo(), I get the following AttributeError.
AttributeError: 'instancemethod' object has no attribute 'a'
Why is this? More importantly how can I achieve my goal?
Edit
Just to be more specific, I am trying to find a simple way to implement static variables that live inside class methods. Continuing from the example above, I would like to instantiate b = bar(), call both the foo() and doo() methods, and then access b.foo.a and b.doo.a later on.
class bar(object):
    @attribute(a=2)
    def foo(self):
        self.foo.a = 1

    @attribute(a=4)
    def doo(self):
        self.doo.a = 3
The best way to do this is to not do it at all.
First of all, there is no need for an attribute decorator; you can just assign it yourself:
class bar(object):
    def foo(self):
        print self.foo.a
        self.foo.a = 1

    foo.a = 2
However, this still encounters the same errors. You need to do:
self.foo.__dict__['a'] = 1
You can instead use a metaclass...but that gets messy quickly.
On the other hand, there are cleaner alternatives.
You can use defaults:
def foo(self, a=[1]):  # the mutable default acts as per-function storage
    print a[0]
    a[0] = 2

# the default list can also be swapped out from outside:
foo.func_defaults = foo.func_defaults[:-1] + ([2],)
Of course, my preferred way is to avoid this altogether and use a callable class ("functor" in C++ words):
class bar(object):
    def __init__(self):
        self.foo = self.foo_method(self)

    class foo_method(object):
        def __init__(self, bar):
            self.bar = bar
            self.a = 2
        def __call__(self):
            print self.a
            self.a = 1
Or just use classic class attributes:
class bar(object):
    def __init__(self):
        self.a = 1
    def foo(self):
        print self.a
        self.a = 2
If it's that you want to hide a from derived classes, use whatever private attributes are called in Python terminology:
class bar(object):
    def __init__(self):
        self.__a = 1  # this will be implicitly mangled to _bar__a
    def foo(self):
        print self.__a
        self.__a = 2
EDIT: You want static attributes?
class bar(object):
    a = 1
    def foo(self):
        print self.a
        self.a = 2
EDIT 2: If you want static attributes visible to only the current function, you can use PyExt's modify_function:
import pyext

def wrap_mod(*args, **kw):
    def inner(f):
        return pyext.modify_function(f, *args, **kw)
    return inner

class bar(object):
    @wrap_mod(globals={'a': [1]})
    def foo(self):
        print a[0]
        a[0] = 2
It's slightly ugly and hackish. But it works.
My recommendation would be just to use double underscores:
class bar(object):
    __a = 1
    def foo(self):
        print self.__a
        self.__a = 2
Although this is visible to the other functions, it's invisible to anything else (actually, it's there, but it's mangled).
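To make the mangling concrete, here is a minimal sketch of what outside code actually sees (names chosen only for illustration):

class bar(object):
    __a = 1

b = bar()
# b.__a             # AttributeError: the name was mangled inside the class body
print bar._bar__a   # 1 -- still reachable under its mangled name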
FINAL EDIT: Use this:
import pyext

def wrap_mod(*args, **kw):
    def inner(f):
        return pyext.modify_function(f, *args, **kw)
    return inner

class bar(object):
    @wrap_mod(globals={'a': [1]})
    def foo(self):
        print a[0]
        a[0] = 2

    foo.a = foo.func_globals['a']

b = bar()
b.foo()  # prints 1
b.foo()  # prints 2

# external access
b.foo.a[0] = 77
b.foo()  # prints 77
While you can accomplish your goal by replacing self.foo.a = 1 with self.foo.__dict__['a'] = 1, it is generally not recommended.
If you are using Python 2 (and not Python 3), whenever you retrieve a method from an instance, a new instancemethod object is created, which is a wrapper around the original function defined in the class body.
The instance method is a rather transparent proxy to the function: you can retrieve the function's attributes through it, but not set them. That is why setting an item in self.foo.__dict__ works.
Alternatively, you can reach the function object itself using self.foo.im_func: the im_func attribute of an instance method points to the underlying function.
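As a minimal Python 2 sketch of both routes (the attribute name a and its initial value are just for illustration):

class bar(object):
    def foo(self):
        print self.foo.a             # reads are proxied through to the function
        self.foo.im_func.a += 1      # writes must target the function itself
        # equivalent: self.foo.__dict__['a'] += 1

    foo.a = 2                        # foo is still a plain function at this point

b = bar()
b.foo()  # 2
b.foo()  # 3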
Based on other contributors' answers, I came up with the following workaround. First, wrap a dictionary in a class that resolves non-existent attributes to the wrapped dictionary, as in the following code.
class DictWrapper(object):
    def __init__(self, d):
        self.d = d
    def __getattr__(self, key):
        return self.d[key]
Credits to Lucas Jones for this code.
Then implement an addstatic decorator with a statics attribute that will store the static attributes.
class addstatic(object):
    def __init__(self, **statics):
        self.statics = statics
    def __call__(self, function):
        class override(object):
            def __init__(self, function, statics):
                self.__function = function
                self.statics = DictWrapper(statics)
            def __call__(self, *args, **kwargs):
                return self.__function(*args, **kwargs)
            def __get__(self, instance, objtype):
                from types import MethodType
                return MethodType(self, instance)
        retval = override(function, self.statics)
        return retval
The following code is an example of how the addstatic decorator can be used on methods.
class bar(object):
    @addstatic(a=2, b=3)
    def foo(self):
        self.foo.statics.a += 1
        self.foo.statics.b += 2
Then, playing with an instance of the bar class yields:
>>> b = bar()
>>> b.foo.statics.a
2
>>> b.foo.statics.b
3
>>> b.foo()
>>> b.foo.statics.a
3
>>> b.foo.statics.b
5
The reason for using this statics dictionary follows jsbueno's answer, which suggests that what I want would require overloading the dot operator of the instance method wrapping the foo function, which I am not sure is possible. Of course, the method's attributes could be set in self.foo.__dict__, but since that is not recommended (as suggested by brainovergrow), I came up with this workaround. I am not certain this would be recommended either, and I guess it is up for comments.
I have two singleton subclasses A and B whose data needs to be maintained separately. When subclass A makes use of the superclass, some of the information is stored in a list and a dict in the superclass. But when I then invoke the superclass from the other subclass B, the information stored by subclass A causes issues.
Is it possible to have a separate instance of the superclass for both the A and B subclasses?
(I used self for the superclass members.)
Data members of base classes are not shared in Python
class Base(object):
    def __init__(self, x):
        self.x = x

class Derived1(Base):
    def __init__(self, x, y):
        Base.__init__(self, x)
        self.y = y

class Derived2(Base):
    def __init__(self, x, z):
        Base.__init__(self, x)
        self.z = z

d1 = Derived1(10, 20)
d2 = Derived2(30, 40)

print d1.x  # Will show 10
print d2.x  # Will show 30
The problem you're observing is probably related to something else
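One classic way to get the symptom you describe (state from one subclass showing up in another) is a mutable class attribute on the base class; a minimal sketch of that pitfall:

class Base(object):
    shared = []               # class attribute: one list shared by every subclass

class Derived1(Base): pass
class Derived2(Base): pass

Derived1().shared.append(1)
print Derived2().shared       # [1] -- the very same list object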
I'm not sure I completely understand your problem, partly because you haven't included any code in your question. So, based solely on my interpretation of the description you wrote, I think the following will address the issue of the two singleton subclasses interfering with each other.
A generic Singleton metaclass is used to define the base singleton class. The metaclass ensures that a separate single instance is created for each class that uses it, i.e. your MySingletonBase and each class derived from it, which is exactly what I think you want. The two subclasses derived from it, A and B, will inherit this metaclass from the base class and thereby get singleton instances that are independent of each other.
The code below is based on one of my answers to the question:
How to initialize Singleton-derived object once?
# singleton metaclass
class Singleton(type):
    _instances = {}
    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class MySingletonBase(object):
    __metaclass__ = Singleton
    def __init__(self):
        self.values_list = []
    def test_method(self, value):
        self.values_list.append(value)
    def total(self):
        return sum(self.values_list)

class A(MySingletonBase): pass
class B(MySingletonBase): pass

a1 = A()
b1 = B()
a2 = A()

a1.test_method(42)
a2.test_method(13)

print '{}, {}'.format(a1.values_list, a1.total())  # [42, 13], 55
print '{}, {}'.format(a2.values_list, a2.total())  # [42, 13], 55
print '{}, {}'.format(b1.values_list, b1.total())  # [], 0
The output illustrates that instances of subclass A are indeed singletons, and are separate from any instances created from subclass B, yet both inherit MySingletonBase's methods.
Thanks for your response. The mistake I made was declaring the base class member list1 in the wrong place:
class BaseClass(metaclass=Singleton):
    list1 = []
    def __init__(self, a, b, c):
        self.list1.extend([a, b, c])
    def printList(self):
        print(self.list1)

class SubClass1(BaseClass):
    def __init__(self):
        super(SubClass1, self).__init__('a', 'b', 'c')

class SubClass2(BaseClass):
    def __init__(self):
        super(SubClass2, self).__init__('x', 'y', 'z')

if '__main__' == __name__:
    s1 = SubClass1()
    s2 = SubClass2()
    s1.printList()
    s2.printList()
The problem was solved once I declared list1 inside the __init__ method:
class BaseClass(metaclass=Singleton):
    def __init__(self, a, b, c):
        self.list1 = []
        self.list1.extend([a, b, c])
I have a class A. During the __init__ method of an instance of A, I create the following two instances of classes B and C:
I create these following two instances of classes B and C:
b = B()
c = C()
Once all's set, I need to call, within a method of B, a method from C.
Example:
Triggered:
b.call_c()
Does:
def call_c(self):
    parent.c.a_method_of_c()
What do I need to do to achieve this structure?
You need to pass either self or c to B() so that it can know about the other object.
Here is how this looks if you pass the A object to both B and C as a parent/container object:
class A(object):
    def __init__(self):
        self.b = B(self)
        self.c = C(self)

class B(object):
    def __init__(self, parent):
        self.parent = parent
    def call_c(self):
        self.parent.c.a_method_of_c()

class C(object):
    def __init__(self, parent):
        self.parent = parent
    # whatever...
Or, you can just pass the C instance to B's initializer like this:
class A(object):
    def __init__(self):
        self.c = C()
        self.b = B(self.c)

class B(object):
    def __init__(self, c):
        self.cobj = c
    def call_c(self):
        self.cobj.a_method_of_c()

class C(object):
    pass  # whatever...
I like the second approach better, since it cuts out the dependencies of B and C on A, and the necessity of A to implement b and c attributes.
If B and C have to call methods on each other, you can still use A to make these associations, but keep B and C ignorant of A:
class A(object):
    def __init__(self):
        self.b = B()
        self.c = C()
        self.b.cobj = self.c
        self.c.bobj = self.b

class B(object):
    def __init__(self):
        self.cobj = None
    def call_c(self):
        if self.cobj is not None:
            self.cobj.a_method_of_c()
        else:
            raise Exception("B instance not fully initialized")

class C(object):
    pass  # similar to B
In general, your goal is to try to avoid or at least minimize these dependencies - have a parent know about a child, but a child be ignorant of the parent. Or a container knows its contained objects, but the contained objects do not know their container. Once you add circular references (back references to a parent or container object), things can get ugly in all kinds of surprising ways. A relationship can get corrupted when one of the links gets cleared but not the reflecting link. Or garbage-collection in circular relations can get tricky (handled in Python itself, but may not be handled if these objects and relations are persisted or replicated in a framework).
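One common way to soften the circular-reference concern is to keep the back reference weak; a minimal sketch, assuming the same A/B/C shape as above and using the standard weakref module:

import weakref

class C(object):
    def a_method_of_c(self):
        print("hello from C")

class B(object):
    def __init__(self, parent):
        # hold only a weak reference back to the container,
        # so B does not keep A alive on its own
        self._parent_ref = weakref.ref(parent)
    def call_c(self):
        parent = self._parent_ref()   # may be None if A was garbage-collected
        if parent is not None:
            parent.c.a_method_of_c()

class A(object):
    def __init__(self):
        self.c = C()
        self.b = B(self)

a = A()
a.b.call_c()  # hello from C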
I need to call, within a method of B, a method from C.
Basically, if the method is not a classmethod or a staticmethod, then calling it always means that you have access to an instance (c) of the C class.
Have a look at the example:
#!python3

class B:
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return 'class B object with the value ' + str(self.value)

class C:
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return 'class C object with the value ' + str(self.value)

class A:
    def __init__(self, value):
        self.value = value
        self.b = B(value * 2)
        self.c = C(value * 3)
    def __str__(self):
        lst = ['class A object with the value ' + str(self.value),
               ' containing the ' + self.b.__str__(),
               ' containing also the ' + str(self.c),
              ]
        return '\n'.join(lst)

a = A(1)
print(a)
print(a.b)
print(a.c)
The self.b.__str__() call is an example of calling a method of the B object from a method of the A object. The str(self.c) call is the same thing, only invoked indirectly via the str() built-in function.
The following is displayed:
class A object with the value 1
containing the class B object with the value 2
containing also the class C object with the value 3
class B object with the value 2
class C object with the value 3