Let's say I have 2 classes, A and B, where B inherits from A. B overrides some of A's methods, and B has a couple of extra attributes. Once I have created an object b of type B, is it possible to convert it into type A, and only A? The goal is to get back the original behaviour of the overridden methods.
I don't know how safe it is, but you can reassign the __class__ attribute of the object.
class A:
    def f(self):
        print("A")

class B(A):
    def f(self):
        print("B")

b = B()
b.f()            # prints B
b.__class__ = A
b.f()            # prints A
This only changes the class of the object; it doesn't update any of its attributes. In Python, attributes are added dynamically to objects, and nothing intrinsically ties them to a specific class, so there is no way to automatically update the attributes when you change the class.
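For example, here is a minimal sketch (the x and y attributes are my own addition, not part of the snippet above) showing that instance attributes survive the switch, including ones that only B ever set:

import builtins

class A:
    def __init__(self):
        self.x = 1
    def f(self):
        print("A")

class B(A):
    def __init__(self):
        super().__init__()
        self.y = 2              # B-only attribute
    def f(self):
        print("B")

b = B()
b.__class__ = A
b.f()                       # prints A
print(b.y)                  # still 2 -- the attribute was not removed
print(isinstance(b, B))     # False -- the object is now "just" an A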
I asked about this yesterday, but I botched the write-up so badly that by the time I realized what I had typed, all the replies were solutions to a different, mis-worded problem I didn't have. Sorry for the sloppy write-up last time.
I have two classes, and I want them to be able to share a common list without having to pass it as a parameter. I also want to create a method that will scramble that list, and I want the list to be the same newly scrambled list in both classes.
I figured this was a case for inheritance, so I made a parent class, set the list as a class attribute, and wrote a method to scramble it, but oddly enough the list is now being treated as an instance variable of the children.
import random

class A:
    lst = []
    target = 0

    def generateNewLst(self, randomRange, listSize):
        self.lst = [random.randint(*randomRange) for i in range(listSize)]

class B(A):
    pass

class C(A):
    pass
My inherited method works just fine:
a = B()
a.generateNewLst((0, 10), 3)
a.lst # => [2,5,7]
but when I create another B:
b = B()
b.lst # => [] not shared when I want it to be
This CANNOT be solved with a class attribute on B, because that won't solve the more important issue below...
c = C()
c.lst # => [] not shared when I want it to be
TL;DR: I want a class attribute that is shared between every instance of both classes. I want a.lst == b.lst == c.lst every time I run generateNewLst on ONE of any of those instances.
How should I reorganize my setup to work the way I want it to?
You need a class-level ("static") variable. To do so, make the method generateNewLst static and have it update the class attribute lst, not a member variable lst that would belong to the instance of the class rather than to the class itself.
import random

class A:
    lst = []

    @staticmethod
    def generateNewLst(randomRange, listSize):
        A.lst = [random.randint(*randomRange) for i in range(listSize)]

class B(A):
    pass

class C(A):
    pass
Then once you generate the lst you will have it for all classes.
a = B()
B.generateNewLst((0, 10), 3)
# the same list is available for all classes
print(A.lst)
print(B.lst)
print(C.lst)
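A quick check, assuming the classes above: reading lst through an instance still finds the shared class attribute, because the instances have no lst of their own. The original problem came from assigning to self.lst, which creates a per-instance attribute that shadows the shared one.

b = B()
c = C()
print(b.lst is A.lst)   # True -- lookup falls through to the class attribute
print(c.lst is A.lst)   # True
# b.lst = [...] would create an instance attribute shadowing the shared list,
# which is exactly what the original generateNewLst did via self.lst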
I have a class A
class A(object):
    a = 1

    def __init__(self):
        self.b = 10

    def foo(self):
        print type(self).a
        print self.b
Then I want to create a class B that is equivalent to A but with a different name and a different value of the class member a.
This is what I have tried:
class A(object):
    a = 1

    def __init__(self):
        self.b = 10

    def foo(self):
        print type(self).a
        print self.b

A_dummy = type('A_dummy', (object,), {})
A_attrs = {attr: getattr(A, attr) for attr in dir(A) if attr not in dir(A_dummy)}
B = type('B', (object,), A_attrs)
B.a = 2

a = A()
a.foo()
b = B()
b.foo()
However, I got an error:
File "test.py", line 31, in main
b.foo()
TypeError: unbound method foo() must be called with A instance as first argument (got nothing instead)
So how can I cope with this sort of job (creating a copy of an existing class)? Maybe a metaclass is needed? But what I would prefer is just a function FooCopyClass, such that:
B = FooCopyClass('B',A)
A.a = 10
B.a = 100
print A.a # get 10 as output
print B.a # get 100 as output
In this case, modifying a class member of B won't influence A, and vice versa.
The problem you're encountering is that looking up a method attribute on a Python 2 class creates an unbound method; it doesn't return the underlying raw function (on Python 3, unbound methods are abolished, and what you're attempting would work just fine). You need to bypass the descriptor protocol machinery that converts a function into an unbound method. The easiest way is to use vars to grab the class's attribute dictionary directly:
# Make copy of A's attributes
Bvars = vars(A).copy()
# Modify the desired attribute
Bvars['a'] = 2
# Construct the new class from it
B = type('B', (object,), Bvars)
Equivalently, you could copy and initialize B in one step, then reassign B.a after:
# Still need to copy; you can't initialize directly from the mappingproxy
# that vars(SOMECLASS) returns (it exists to protect the class internals)
B = type('B', (object,), vars(A).copy())
B.a = 2
Or for slightly non-idiomatic one-liner fun:
B = type('B', (object,), dict(vars(A), a=2))
Either way, when you're done:
B().foo()
will output:
2
10
as expected.
You may be trying to do one of two things. (1) Create copies of classes for some real app:
in that case, try using copy.deepcopy - it includes the mechanisms to copy classes. Just change the copy's __name__ attribute afterwards if needed. This works in both Python 2 and Python 3.
(2) Trying to learn and understand Python's internal class organization: in that case, there is no reason to fight with Python 2, as some wrinkles there were fixed in Python 3.
In any case, if you try using dir to fetch a class's attributes, you will end up with more than you want, as dir also retrieves the methods and attributes of all superclasses. So even if your method is made to work (in Python 2 that means getting the .im_func attribute of the retrieved unbound methods, to use them as raw functions when creating the new class), your class would have more methods than the original one.
Actually, in both Python 2 and Python 3, copying the class __dict__ will suffice. If you want mutable objects that are class attributes not to be shared, you should again resort to deepcopy. In Python 3:
class A(object):
    b = []

    def foo(self):
        print(self.b)

from copy import deepcopy

def copy_class(cls, new_name):
    # Deep-copy the class dict so mutable class attributes (like the list b)
    # are not shared with the original class. The __dict__/__weakref__ slot
    # descriptors cannot be deep-copied and are recreated by type() anyway,
    # so they are skipped.
    new_dict = {key: deepcopy(value) for key, value in cls.__dict__.items()
                if key not in ('__dict__', '__weakref__')}
    return type(new_name, cls.__bases__, new_dict)
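A quick usage sketch, assuming the copy_class function above:

B = copy_class(A, 'B')
B.b.append(1)   # mutate the copy's class-level list
A().foo()       # prints [] -- A's list is untouched
B().foo()       # prints [1] -- B got its own deep-copied list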
In Python 2, it would work almost the same, but there is no convenient way to get the explicit bases of an existing class (i.e. __bases__ is not set). You can use __mro__ for the same effect. The only thing is that all ancestor classes are passed in a hardcoded order as bases of the new class, and in a complex hierarchy you could have differences between the behaviors of B descendants and A descendants if multiple-inheritance is used.
I am learning all about Python classes and I have a lot of ground to cover.
I came across an example that got me a bit confused.
These are the parent classes
Class X
Class Y
Class Z
Child classes are:
Class A (X,Y)
Class B (Y,Z)
Grandchild class is:
Class M (A,B,Z)
Doesn't Class M already inherit Class Z through inheriting from Class B, or what would the reason be for this type of structure? Would Class M just ignore the second time Class Z is inherited, or would it inherit the Class Z attributes twice (redundantly), or am I missing something?
No, there are no "duplicated" attributes; Python performs a linearization they call the Method Resolution Order (MRO), as is, for instance, explained here. You are however correct that adding Z to the list here does not change anything.
Python first constructs the MROs of the parents, so:
MRO(X) = (X,object)
MRO(Y) = (Y,object)
MRO(Z) = (Z,object)
MRO(A) = (A,X,Y,object)
MRO(B) = (B,Y,Z,object)
and then it constructs the MRO for M by merging these (together with the list of direct bases):
MRO(M) = (M,) + merge((A,X,Y,object), (B,Y,Z,object), (Z,object), (A,B,Z))
       = (M,A,X,B,Y,Z,object)
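You can verify this linearization directly; a small sketch of the hierarchy from the question:

class X: pass
class Y: pass
class Z: pass
class A(X, Y): pass
class B(Y, Z): pass
class M(A, B, Z): pass

print([c.__name__ for c in M.__mro__])
# ['M', 'A', 'X', 'B', 'Y', 'Z', 'object']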
Now each time you look up an attribute, Python will first check whether the attribute is in the internal dictionary self.__dict__ of that object. If not, Python will walk through the MRO and attempt to find an attribute with that name; from the moment it finds one, it stops searching.
Finally, super() is a proxy object that performs the same resolution, but starts in the MRO just after the class in which it is used. So in this case, if you have:
class B(Y, Z):
    def foo(self):
        super().bar()
and you construct an object m = M() and call m.foo(), then - given that the foo() of B is called - super().bar will first attempt to find bar in Y; if that fails, it will look for bar in Z, and finally in object.
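A minimal sketch of that behaviour (the bar methods on Y and Z are my own addition, just to make the resolution visible):

class X: pass
class Y:
    def bar(self):
        print("Y.bar")
class Z:
    def bar(self):
        print("Z.bar")
class A(X, Y): pass
class B(Y, Z):
    def foo(self):
        super().bar()   # resolution starts after B in type(self)'s MRO
class M(A, B, Z): pass

M().foo()   # prints "Y.bar": Y is the first class after B in M's MRO
B().foo()   # also prints "Y.bar", since B's own MRO is (B, Y, Z, object)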
Attributes are not inherited twice. If you add an attribute like:
self.qux = 1425
then it is simply added to the internal self.__dict__ dictionary of that object.
Stating Z explicitly can however be beneficial if the designer of B is not sure whether Z is a real requirement: in that case you know for sure that Z will still be in the MRO even if B is altered.
Apart from what @Willem has mentioned, I would like to add that you are talking about a multiple inheritance problem. In Python, object instantiation is a bit different compared to other languages like Java. Here, object instantiation is divided into two parts: object creation (using the __new__ method) and object initialization (using the __init__ method). Moreover, it's not necessary that a child class will always have the parent class's attributes. A child class gets the parent class's instance attributes only if the parent class's constructor is invoked from the child class (explicitly, when the child defines its own __init__).
>>> class A(object):
...     def __init__(self):
...         self.a = 23
...
>>> class B(A):
...     def __init__(self):
...         self.b = 33
...
>>> class C(A):
...     def __init__(self):
...         self.c = 44
...         super(C, self).__init__()
...
>>> a = A()
>>> b = B()
>>> c = C()
>>> print(a.a)
23
>>> print(b.a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'B' object has no attribute 'a'
>>> print(c.a)
23
In the above code snippet, B does not invoke A's __init__ method, and so it doesn't have a as a member variable, despite the fact that it inherits from class A. The same is not true for a language like Java, where a class has a fixed template of attributes. This is one way Python differs from other languages.
The attributes an object has are stored in the __dict__ member of the object, and it is the __getattribute__ magic method of the object class that implements attribute lookup according to the MRO, as Willem described. You can use vars() and dir() for introspection of an instance.
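For example, a small sketch reusing the C class above: vars(c) shows only what is stored on the instance itself, while dir(c) also lists names reachable through the class hierarchy.

c = C()
print(vars(c))                 # {'c': 44, 'a': 23} -- the instance __dict__ only
print('__init__' in dir(c))    # True -- dir also walks the class and its MRO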
Say I have the following:
class A(object):
    ...  # base functions

class B(A):
    ...  # some useful functions

class C(object):
    ...  # required base functions
Now I want to create a class which has all the functions from B but instead of functions from A refers to functions from C.
Something like
Class D(B,C)
and when B calls super it should look in C instead of A
The way I am achieving it now is by copy-pasting the whole class B and just inheriting from C instead of A.
Is there a better way to solve this problem?
Composition can surely solve the problem, but Class B is already in heavy use so I don't want to change it.
Change the __bases__ attribute of the B class.
Well, it's ugly but it works:
class A(object):
    def f(self):
        print("A.f()")

class B(A):
    def g(self):
        print("B.g()", end=": ")
        super(B, self).f()

class C(object):
    def f(self):
        print("C.f()")

B.__bases__ = (C,)

b = B()
b.g()
You get:
B.g(): C.f()
You can use fudge.patch to do it during your unit test.
Note that your question is a duplicate of How to dynamically change base class of instances at runtime?
The problem is quite simple. A class B inherits from a class A and wants to override a classmethod that is used as a constructor (I guess you would call that a "factory method"). The problem is that B's classmethod will want to reuse A's classmethod, but then it ends up creating an instance of class A from inside the subclass B - since, as a classmethod, it has no self. That doesn't seem like the right way to design it.
I made the example trivial; I do more complicated stuff reading numpy arrays, etc., but I think there is no loss of information here.
class A:
    def __init__(self, a):
        self.el1 = a

    @classmethod
    def from_csv(cls, csv_file):
        a = read_csv(csv_file)
        return cls(a)

    # @classmethod
    # def from_hdf5(...): ...

class B(A):
    def __init__(self, a, b):
        A.__init__(self, a)
        self.el2 = b

    @classmethod
    def from_csv(cls, csv_file):
        a_ = A.from_csv(csv_file)  # instance of A created inside B(A)
        b = [x * 2 for x in a_.el1]
        return cls(a_.el1, b)
Is there a pythonic way to deal with that?
After running some different trials, my conclusion is that you should override a classmethod without trying to reuse the code inside it. The best way I found, for my particular problem, is to make the classmethod as simple as possible and put the code I want to reuse in another method - a static method in my case, since the classmethod is a constructor.
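A sketch of that layout, assuming the same (undefined here) read_csv helper as in the question; the _load_csv name is hypothetical:

class A:
    def __init__(self, a):
        self.el1 = a

    @staticmethod
    def _load_csv(csv_file):
        # the shared parsing code lives here, so subclasses can reuse it
        return read_csv(csv_file)

    @classmethod
    def from_csv(cls, csv_file):
        return cls(A._load_csv(csv_file))

class B(A):
    def __init__(self, a, b):
        A.__init__(self, a)
        self.el2 = b

    @classmethod
    def from_csv(cls, csv_file):
        a = A._load_csv(csv_file)       # reuse the parsing, not A's constructor
        return cls(a, [x * 2 for x in a])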
One easy solution would be to give class B's __init__ method a default value for its b parameter. This lets the cls(a) call made by A.from_csv work when it is inherited. If the default is used, the __init__ method can calculate the value to store from a (as you do in B.from_csv now).
class B(A):
    def __init__(self, a, b=None):
        super().__init__(a)  # use super(B, self).__init__(a) if you're in Python 2
        self.el2 = b if b is not None else [i * 2 for i in a]

# don't override from_csv; B.from_csv will already return a B instance!