Python inheritance - can you have a default value in a parent?

If I had a parent class attribute that all of the child classes are going to inherit, can I set a default so that when the object is created, it automatically takes the default from the parent class and no argument has to be given when creating it?
class F1(object):
    def __init__(self, sick="flu"):
        self.sick = sick

class F2(F1):
    def __init__(self, sick, cure):
        super(F2, self).__init__(sick)
        self.cure = cure

a = F2("bed rest")
print(a.sick)
print(a.cure)
This is just a sample bit of code to show what I mean. I want every child to inherit "sick" from the parent so I do not have to pass that argument in when creating the object. Is this possible? Is there a different way of doing the same thing? Would it be better to make "sick" a class attribute?

The problem with your code is that you are declaring F2.__init__ with two required arguments, even though you only want to pass one.
If you want to be able to optionally override the constructor argument of F1, you need to handle that yourself (see F3):
class F1(object):
    def __init__(self, sick="flu"):
        self.sick = sick

class F2(F1):
    def __init__(self, cure):
        super(F2, self).__init__()
        self.cure = cure

class F3(F1):
    def __init__(self, cure, sick=None):
        if sick is None:
            super(F3, self).__init__()
        else:
            super(F3, self).__init__(sick)
        self.cure = cure

a = F2("bed rest")
print("%s -> %s" % (a.sick, a.cure))
b = F3("inhale")
print("%s -> %s" % (b.sick, b.cure))
c = F3(sick="curiosity", cure="none")
print("%s -> %s" % (c.sick, c.cure))

Using super is the standard way of doing this in Python. If you want to override, just override...
class F2(F1):
    def __init__(self, sick, cure):
        super(F2, self).__init__(sick)
        self.cure = cure
        self.sick = sick + 1
Adding a class attribute could be an option, depending on your needs. From your description it sounds like the better fit, because the default value of sick never changes, and that's probably what you need.
Using a class attribute does not interfere with overriding, because assigning an attribute on an instance does not touch the class attribute. An example:
>>> class F:
...     a = 1
...
>>> f1, f2 = F(), F()
>>> f2.a = 2
>>> f1.a
1
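Applied to the original F1/F2 case, a minimal sketch of that class-attribute approach (using the names from the question) could look like this:

class F1(object):
    sick = "flu"  # class attribute acts as the shared default

class F2(F1):
    def __init__(self, cure):
        self.cure = cure

a = F2("bed rest")
print(a.sick)    # "flu" -- looked up on the class, no argument needed
print(a.cure)    # "bed rest"
a.sick = "cold"  # assignment creates an instance attribute; F1.sick is untouched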

Inheritance method overwrite in some conditions [duplicate]

When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:
package Foo;

sub frotz {
    return "Bamf";
}

package Bar;
@ISA = qw(Foo);

sub frotz {
    my $str = SUPER::frotz();
    return uc($str);
}
In Python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz().
This doesn't seem right since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created.
Is this an actual limitation in Python, a gap in my understanding, or both?
Use the super() function:
class Foo(Bar):
    def baz(self, **kwargs):
        return super().baz(**kwargs)
For Python < 3, you must explicitly opt in to using new-style classes and use:
class Foo(Bar):
    def baz(self, arg):
        return super(Foo, self).baz(arg)
Python has super as well:
super(type[, object-or-type])
Return a proxy object that delegates method calls to a parent or sibling class of type.
This is useful for accessing inherited methods that have been overridden in a class.
The search order is the same as that used by getattr() except that the type itself is skipped.
Example:
class A(object):  # deriving from 'object' declares A as a 'new-style-class'
    def foo(self):
        print "foo"

class B(A):
    def foo(self):
        super(B, self).foo()  # calls 'A.foo()'

myB = B()
myB.foo()
ImmediateParentClass.frotz(self)
will be just fine, whether the immediate parent class defined frotz itself or inherited it. super is only needed for proper support of multiple inheritance (and then it only works if every class uses it properly). In general, AnyClass.whatever is going to look up whatever in AnyClass's ancestors if AnyClass doesn't define/override it, and this holds true for "child class calling parent's method" as for any other occurrence!
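For example, a minimal sketch of that direct, explicit style (single inheritance only; the Foo/Bar/frotz names mirror the Perl example from the question):

class Foo(object):
    def frotz(self):
        return "Bamf"

class Bar(Foo):
    def frotz(self):
        # call the immediate parent's implementation explicitly by name
        return Foo.frotz(self).upper()

print(Bar().frotz())  # BAMF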
Python 3 has a different and simpler syntax for calling a parent method.
If class Foo inherits from Bar, then Bar.__init__ can be invoked from Foo via super().__init__():
class Foo(Bar):
    def __init__(self, *args, **kwargs):
        # invoke Bar.__init__
        super().__init__(*args, **kwargs)
Many answers have explained how to call a method from the parent which has been overridden in the child.
However
"how do you call a parent class's method from child class?"
could also just mean:
"how do you call inherited methods?"
You can call methods inherited from a parent class just as if they were methods of the child class, as long as they haven't been overridden.
e.g. in Python 3:
class A():
    def bar(self, string):
        print("Hi, I'm bar, inherited from A" + string)

class B(A):
    def baz(self):
        self.bar(" - called by baz in B")

B().baz()  # prints out "Hi, I'm bar, inherited from A - called by baz in B"
Yes, this may be fairly obvious, but I feel that without pointing it out, people may leave this thread with the impression that you have to jump through ridiculous hoops just to access inherited methods in Python. Especially since this question ranks highly in searches for "how to access a parent class's method in Python", and the OP is writing from the perspective of someone new to Python.
I found:
https://docs.python.org/3/tutorial/classes.html#inheritance
to be useful in understanding how you access inherited methods.
Here is an example of using super():
# New-style classes inherit from object, or from another new-style class
class Dog(object):
    name = ''
    moves = []

    def __init__(self, name):
        self.name = name

    def moves_setup(self):
        self.moves.append('walk')
        self.moves.append('run')

    def get_moves(self):
        return self.moves

class Superdog(Dog):
    # Let's try to append new fly ability to our Superdog
    def moves_setup(self):
        # Set default moves by calling method of parent class
        super(Superdog, self).moves_setup()
        self.moves.append('fly')

dog = Superdog('Freddy')
print dog.name        # Freddy
dog.moves_setup()
print dog.get_moves()  # ['walk', 'run', 'fly']
# As you can see our Superdog has all moves defined in the base Dog class
There's a super() in Python too. It's a bit wonky because of Python's old- and new-style classes, but it is quite commonly used, e.g. in constructors:
class Foo(Bar):
    def __init__(self):
        super(Foo, self).__init__()
        self.baz = 5
I would recommend using CLASS.__bases__, something like this:
class A:
    def __init__(self):
        print "I am Class %s" % self.__class__.__name__
        for parentClass in self.__class__.__bases__:
            print "    I am inherited from:", parentClass.__name__
            # parentClass.foo(self) <- call parent's function with self as first param

class B(A): pass
class C(B): pass

a, b, c = A(), B(), C()
If you don't know how many arguments you might get, and want to pass them all through to the parent as well:
class Foo(Bar):
    def baz(self, arg, *args, **kwargs):
        # ... Do your thing
        return super(Foo, self).baz(arg, *args, **kwargs)
(From: Python - Cleanest way to override __init__ where an optional kwarg must be used after the super() call?)
There is a super() in Python also.
An example of how a superclass method is called from a subclass method:
class Dog(object):
    name = ''
    moves = []

    def __init__(self, name):
        self.name = name

    def moves_setup(self, x):
        self.moves.append('walk')
        self.moves.append('run')
        self.moves.append(x)

    def get_moves(self):
        return self.moves

class Superdog(Dog):
    # Let's try to append new fly ability to our Superdog
    def moves_setup(self):
        # Set default moves by calling method of parent class
        super().moves_setup("hello world")
        self.moves.append('fly')

dog = Superdog('Freddy')
print(dog.name)
dog.moves_setup()
print(dog.get_moves())
This example is similar to the one explained above. However, there is one difference: super doesn't have any arguments passed to it. This code is executable in Python 3.4.
In this example cafec_param is a base class (parent class) and abc is a child class. abc calls the AWC method in the base class.
class cafec_param:
    def __init__(self, precip, pe, awc, nmonths):
        self.precip = precip
        self.pe = pe
        self.awc = awc
        self.nmonths = nmonths

    def AWC(self):
        if self.awc < 254:
            Ss = self.awc
            Su = 0
            self.Ss = Ss
        else:
            Ss = 254
            Su = self.awc - 254
            self.Ss = Ss + Su
        AWC = Ss + Su
        return self.Ss

    def test(self):
        return self.Ss
        # return self.Ss*4

class abc(cafec_param):
    def rr(self):
        return self.AWC()

ee = cafec_param('re', 34, 56, 2)
dd = abc('re', 34, 56, 2)
print(dd.rr())
print(ee.AWC())
print(ee.test())
Output
56
56
56
In Python 2, I didn't have a lot of luck with super(). I used the answer from jimifiki on this SO thread, how to refer to a parent method in python?.
Then I added my own little twist to it, which I think is an improvement in usability (especially if you have long class names).
Define the base class in one module:
# myA.py
class A():
    def foo(self):
        print "foo"
Then import the class into another module as the parent:
# myB.py
from myA import A as parent

class B(parent):
    def foo(self):
        parent.foo(self)  # calls 'A.foo()'
class department:
    campus_name = "attock"

    def printer(self):
        print(self.campus_name)

class CS_dept(department):
    def overr_CS(self):
        department.printer(self)
        print("i am child class1")

c = CS_dept()
c.overr_CS()
If you want to call the method of any class, you can simply call Class.method on any instance of the class. If your inheritance is relatively clean, this will work on instances of a child class too:
class Foo:
    def __init__(self, var):
        self.var = var

    def baz(self):
        return self.var

class Bar(Foo):
    pass

bar = Bar(1)
assert Foo.baz(bar) == 1
class a(object):
    def my_hello(self):
        print "hello ravi"

class b(a):
    def my_hello(self):
        super(b, self).my_hello()
        print "hi"

obj = b()
obj.my_hello()
This is a more abstract method:
super(self.__class__,self).baz(arg)
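One caution worth adding here: passing self.__class__ to super breaks as soon as someone subclasses further, because the lookup then restarts from the subclass every time. A minimal sketch of the failure (illustrative names only):

class A(object):
    def baz(self):
        return "A"

class B(A):
    def baz(self):
        # looks harmless, but hard-codes "start the search after type(self)"
        return super(self.__class__, self).baz()

class C(B):
    pass

B().baz()  # fine: returns "A"
C().baz()  # RecursionError: super(C, self) finds B.baz again and again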

Fields in all subclasses of a class

Let's say we have a class A, a class B that inherits from A and classes C, D and E that inherit from B.
We want all of those classes to have an attribute _f initialized with a default value, and we want that attribute to be mutable and to have a separate value for each instance of the class, i.e. it should not be a static, constant value of A used by all subclasses.
One way to do this is to define _f in A's __init__ method, and then rely on this method in the subclasses:
class A:
    def __init__(self):
        self._f = 'default_value'

class B(A):
    def __init__(self):
        super(B, self).__init__()

class C(B):
    def __init__(self):
        super(C, self).__init__()
Is there any nice Pythonic way to avoid this, and possibly avoid using metaclasses?
If your goal is to simplify subclass constructors by eliminating the need to call the base class constructor, but still be able to override the default value in subclasses, there's a common paradigm of exploiting the fact that Python will return the class's value for an attribute if it doesn't exist on the instance.
Using a slightly more concrete example, instead of doing...
class Human(object):
    def __init__(self):
        self._fingers = 10

    def __repr__(self):
        return 'I am a %s with %d fingers' % (self.__class__.__name__, self._fingers)

class MutatedHuman(Human):
    def __init__(self, fingers):
        super(MutatedHuman, self).__init__()
        self._fingers = fingers

print MutatedHuman(fingers=11)
print Human()
...you can use...
class Human(object):
    _fingers = 10

    def __repr__(self):
        return 'I am a %s with %d fingers' % (self.__class__.__name__, self._fingers)

class MutatedHuman(Human):
    def __init__(self, fingers):
        self._fingers = fingers

print MutatedHuman(fingers=11)
print Human()
...both of which output...
I am a MutatedHuman with 11 fingers
I am a Human with 10 fingers
The important point is that the line self._fingers = fingers in the second example doesn't overwrite the default value set on class Human, but merely hides it when referenced as self._fingers.
It's slightly hairy when the variable refers to a mutable type, such as a list. You have to be careful not to perform an operation on the default value which will modify it, although it's still safe to do a self.name = value.
What's neat about this approach is it tends to lead to fewer lines of code than other approaches, which is usually a Good Thing (tm).
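To make the mutable-type caveat concrete, here is a small illustrative sketch (the _limbs attribute is hypothetical):

class Human(object):
    _limbs = ['arm', 'arm', 'leg', 'leg']  # one list shared by every instance

h1, h2 = Human(), Human()
h1._limbs.append('tail')      # mutates the shared class-level list
print(h2._limbs)              # ['arm', 'arm', 'leg', 'leg', 'tail'] -- oops

h1._limbs = ['wing', 'wing']  # rebinding is safe: creates an instance attribute
print(Human._limbs)           # class default still carries the 'tail' from above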

python class factory inherit random parent

I have some code like this:
class Person(object):
    def drive(self, f, t):
        raise NotImplementedError

class John(Person):
    def drive(self, f, t):
        print "John drove from %s to %s" % (f, t)

class Kyle(Person):
    def drive(self, f, t):
        print "Kyle drove from %s to %s" % (f, t)

class RandomPerson(Person):
    # instantiate either John or Kyle, and inherit it.
    pass

class Vehicle(object):
    pass

class Driver(Person, Vehicle):
    def __init__(self):
        # instantiate and inherit a RandomPerson somehow
        pass

d1 = Driver()
d1.drive('New York', 'Boston')
>>> "John drove from New York to Boston"

d2 = Driver()
d2.drive('New Jersey', 'Boston')
>>> "Kyle drove from New Jersey to Boston"
How could I implement RandomPerson, given the following requirements:
calling person = RandomPerson() must return a RandomPerson object.
RandomPerson should subclass either John or Kyle randomly.
In my original answer (which I deleted because it was just plain wrong) I said I would consider doing it like this:
class RandomPerson(Person):
    def __init__(self):
        rand_person = random.choice((John, Kyle))()
        self.__dict__ = rand_person.__dict__
This way is an adaptation of the Python Borg idiom; the idea was that everything that matters about an object is contained in its __dict__.
However, this only works when overwriting objects of the same class (which is what you are doing in the Borg idiom); the object __dict__ only contains state information pertaining to the object instance, not to its class.
It is possible to switch out the class of an object like so:
class RandomPerson(Person):
    def __init__(self):
        rand_person = random.choice((John, Kyle))
        self.__class__ = rand_person
However, doing it this way would mean that the call to RandomPerson would then not return an instance of RandomPerson per your requirement, but of Kyle or of John. So this is a no go.
Here is a way to get a RandomPerson object that acts like Kyle or John, but isn't:
class RandomPerson(Person):
    def __new__(cls):
        new = super().__new__(cls)
        new.__dict__.update(random.choice((Kyle, John)).__dict__)
        return new
This one is very similar to the Borg idiom, except that it works on classes instead of instance objects, and we only copy the current version of the chosen class's dict. It is really pretty evil: we have lobotomized the RandomPerson class and (randomly) stuck the brains of a Kyle or John class in its place. And there is, unfortunately, no indication that this happened:
>>> rperson = RandomPerson()
>>> assert isinstance(rperson,Kyle) or isinstance(rperson,John)
AssertionError
So we still haven't really subclassed Kyle or John. Also, this is really really evil. So please don't do it unless you have a really good reason.
Now, assuming you do in fact have a good reason, the above solution should be good enough if all you are after is making sure you can use any class state information (methods and class attributes) from Kyle or John with RandomPerson. However, as illustrated prior, RandomPerson still isn't a true subclass of either.
Near as I can tell there is no way to actually randomly subclass an object's class at instance creation AND to have the class maintain state across multiple instance creations. You're going to have to fake it.
One way to fake it is to allow RandomPerson to be considered a subclass of John and Kyle using the abc (abstract base class) module and __subclasshook__, adding that hook to your Person class. This looks like a good solution since the Person class is an interface and isn't going to be used directly, anyway.
Here's a way to do that:
class Person(object):
    __metaclass__ = abc.ABCMeta

    def drive(self, f, t):
        raise NotImplementedError

    @classmethod
    def __subclasshook__(cls, C):
        if C.identity is cls:
            return True
        return NotImplemented

class John(Person):
    def drive(self, f, t):
        print "John drove from %s to %s" % (f, t)

class Kyle(Person):
    def drive(self, f, t):
        print "Kyle drove from %s to %s" % (f, t)

class RandomPerson(Person):
    identity = None

    def __new__(cls):
        cls.identity = random.choice((John, Kyle))
        new = super().__new__(cls)
        new.__dict__.update(cls.identity.__dict__)
        return new
>>> type(RandomPerson())
<class '__main__.RandomPerson'>
>>> rperson = RandomPerson()
>>> isinstance(rperson,John) or isinstance(rperson,Kyle)
True
Now RandomPerson - though it technically is not a subclass - is considered to be a subclass of Kyle or John, and it also shares the state of Kyle or John. In fact, it will switch back and forth between the two, randomly, every time a new instance is created (or when RandomPerson.identity is changed). Another effect of doing things this way: if you have multiple RandomPerson instances, they all share the state of whatever RandomPerson happens to be in that moment -- i.e., rperson1 might start out being Kyle, and then when rperson2 is instantiated, both rperson2 AND rperson1 could be John (or they could both be Kyle and then switch to John when rperson3 is created).
Needless to say, this is pretty weird behavior. In fact it is so weird, my suspicion is that your design needs a complete overhaul. I really don't think there is a very good reason to EVER do this (other than maybe playing a bad joke on someone).
If you don't want to mix this behavior into your Person class, you could also do it separately:
class Person(object):
    def drive(self, f, t):
        raise NotImplementedError

class RandomPersonABC():
    __metaclass__ = abc.ABCMeta

    @classmethod
    def __subclasshook__(cls, C):
        if C.identity is cls:
            return True
        return NotImplemented

class John(Person, RandomPersonABC):
    def drive(self, f, t):
        print "John drove from %s to %s" % (f, t)

class Kyle(Person, RandomPersonABC):
    def drive(self, f, t):
        print "Kyle drove from %s to %s" % (f, t)

class RandomPerson(Person):
    identity = None

    def __new__(cls):
        cls.identity = random.choice((John, Kyle))
        new = super().__new__(cls)
        new.__dict__.update(cls.identity.__dict__)
        return new
You could just implement the RandomPerson class to have a member called _my_person or whatever else you want. You would just call its drive method from the RandomPerson.drive method. It could look something like this:
class RandomPerson(Person):
    # instantiate either John or Kyle, and inherit it.
    def __init__(self):
        self._my_person = John() if random.random() > 0.50 else Kyle()

    def drive(self, f, t):
        self._my_person.drive(f, t)
Alternatively, if you want to be more strict about making sure that the class has exactly the same methods as Kyle or John, you could set the method in the constructor like the following:
class RandomPerson(Person):
    # instantiate either John or Kyle, and inherit it.
    def __init__(self):
        self._my_person = John() if random.random() > 0.50 else Kyle()
        self.drive = self._my_person.drive
In your most recent comment on my other answer you said:
I'm gonna go with changing the class of the object, like you pointed out: rand_person = random.choice((John, Kyle)) and self.__class__ = rand_person. I've moved the methods of RandomPerson back into Person, and RandomPerson works now like a factory-ish generating class.
If I may say so, this is an odd choice. First of all, swapping out the class after object creation doesn't seem very pythonic (it's overly complicated). It would be better to generate the object instance randomly instead of assigning the class after the fact:
class RandomPerson():  # Doing it this way, you also don't need to inherit from Person
    def __new__(cls):
        return random.choice((Kyle, John))()
Secondly, if the code has been refactored so that you no longer require a RandomPerson object, why have one at all? Just use a factory function:
def RandomPerson():
    return random.choice((Kyle, John))()
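A quick usage sketch of that factory-function version, assuming the John and Kyle classes from the question and an import random at the top of the module:

p = RandomPerson()
p.drive('New York', 'Boston')       # either John or Kyle does the driving
print(isinstance(p, (John, Kyle)))  # True -- the returned object simply *is* one of them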

Python: showing attributes assigned to a class object in the class code

One of my classes does a lot of aggregate calculation on a collection of objects, then assigns an attribute and value appropriate to the specific object, i.e.:
class Team(object):
    def __init__(self, name):  # updated for typo in code, added self
        self.name = name

class LeagueDetails(object):
    def __init__(self):  # added for clarity, corrected another typo
        self.team_list = [Team('name'), ...]
        self.calculate_league_standings()  # added for clarity

    def calculate_league_standings(self):
        # calculate standings as a team_place_dict
        for team in self.team_list:
            team.place = team_place_dict[team.name]  # a new team attribute
I know, as long as the calculate_league_standings has been run, every team has team.place. What I would like to be able to do is to scan the code for class Team(object) and read all the attributes, both created by class methods and also created by external methods which operate on class objects. I am getting a little sick of typing for p in dir(team): print p just to see what the attribute names are. I could define a bunch of blank attributes in the Team __init__. E.g.
class Team(object):
    def __init__(self, name):  # updated for typo in code, added self
        self.name = name
        self.place = None  # dummy attribute, but recognizable when the code is scanned
It seems redundant to have calculate_league_standings return team._place and then add

@property
def place(self):
    return self._place

I know I could put a comment listing the attributes at the top of class Team, which is the obvious solution, but I feel like there has to be a best practice here, something Pythonic and elegant.
If I half understand your question, you want to keep track of which attributes of an instance have been added after initialization. If this is the case, you could use something like this:
#! /usr/bin/python3.2

def trackable(cls):
    cls._tracked = {}
    oSetter = cls.__setattr__

    def setter(self, k, v):
        try:
            self.initialized
        except:
            return oSetter(self, k, v)
        try:
            getattr(self, k)
        except:
            if self not in self.__class__._tracked:
                self.__class__._tracked[self] = []
            self.__class__._tracked[self].append(k)
        return oSetter(self, k, v)
    cls.__setattr__ = setter

    oInit = cls.__init__

    def init(self, *args, **kwargs):
        o = oInit(self, *args, **kwargs)
        self.initialized = 42
        return o
    cls.__init__ = init

    oGetter = cls.__getattribute__

    def getter(self, k):
        if k == 'tracked':
            return self.__class__._tracked[self]
        return oGetter(self, k)
    cls.__getattribute__ = getter
    return cls

@trackable
class Team:
    def __init__(self, name, region):
        self.name = name
        self.region = region

# set name and region during initialization
t = Team('A', 'EU')

# set rank and ELO outside (hence trackable)
# in your "aggregate" functions
t.rank = 4  # a new team attribute
t.ELO = 14  # a new team attribute

# see which attributes have been created after initialization
print(t.tracked)
If I did not understand the question, please do specify which part I got wrong.
Due to Python's dynamic nature, I don't believe there is a general answer to your question. An attribute of an instance can be set in many ways, including plain assignment, setattr(), and writes to __dict__. Writing a tool to statically analyze Python code and correctly determine all possible attributes of a class by analyzing all these methods would be very difficult.
In your specific case, as the programmer you know that class Team will have a place attribute in many instances, so you can decide to be explicit and write its constructor like so:
class Team(object):
    def __init__(self, name, place=None):
        self.name = name
        self.place = place
I would say there is no need to define a property of a simple attribute, unless you wanted side effects or derivations to happen at read or write time.
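For completeness, here is a small hypothetical sketch of what such a property with a write-time side effect might look like (the validation rule is made up purely for illustration):

class Team(object):
    def __init__(self, name, place=None):
        self.name = name
        self._place = place

    @property
    def place(self):
        return self._place

    @place.setter
    def place(self, value):
        # side effect at write time: validation, logging, cache invalidation, etc.
        if value is not None and value < 1:
            raise ValueError("place must be a positive ranking")
        self._place = value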

Is it safe to replace a self object by another object of the same type in a method?

I would like to replace an object instance by another instance inside a method like this:
class A:
    def method1(self):
        self = func(self)
The object is retrieved from a database.
It is unlikely that replacing the self variable will accomplish whatever you're trying to do that couldn't just be accomplished by storing the result of func(self) in a different variable. self is effectively a local variable, only defined for the duration of the method call, used to pass in the instance of the class which is being operated upon. Replacing self will not actually replace references to the original instance of the class held by other objects, nor will it create a lasting reference to the new instance which was assigned to it.
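A tiny sketch illustrating that point (names are illustrative only):

class A:
    def method1(self):
        self = A()       # rebinds the local name only
        self.marker = 1  # set on the *new* object, which is then discarded

a = A()
a.method1()
print(hasattr(a, 'marker'))  # False -- the original instance is unchanged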
As far as I understand, if you are trying to replace the current object with another object of the same type (assuming func won't change the object type) from a member function, I think this will achieve that:
class A:
    def method1(self):
        newObj = func(self)
        self.__dict__.update(newObj.__dict__)
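As a rough usage sketch, assuming func is a stand-in for the database lookup and returns a fresh A, the update copies the new state onto the existing object while every outside reference keeps pointing at the same instance:

class A:
    def __init__(self, x=0):
        self.x = x

    def method1(self):
        newObj = func(self)
        self.__dict__.update(newObj.__dict__)

def func(obj):             # hypothetical stand-in for the database lookup
    return A(x=obj.x + 1)

a = A()
keep = a                   # another reference to the same instance
a.method1()
print(a.x, keep is a)      # 1 True -- same object, refreshed state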
It is not a direct answer to the question, but in the posts below there's a solution for what amirouche tried to do:
Python object conversion
Can I dynamically convert an instance of one class to another?
And here's a working code sample (Python 3.2.5).
class Men:
    def __init__(self, name):
        self.name = name

    def who_are_you(self):
        print("I'm a men! My name is " + self.name)

    def cast_to(self, sex, name):
        self.__class__ = sex
        self.name = name

    def method_unique_to_men(self):
        print('I made The Matrix')


class Women:
    def __init__(self, name):
        self.name = name

    def who_are_you(self):
        print("I'm a women! My name is " + self.name)

    def cast_to(self, sex, name):
        self.__class__ = sex
        self.name = name

    def method_unique_to_women(self):
        print('I made Cloud Atlas')


men = Men('Larry')
men.who_are_you()
#>>> I'm a men! My name is Larry
men.method_unique_to_men()
#>>> I made The Matrix

men.cast_to(Women, 'Lana')
men.who_are_you()
#>>> I'm a women! My name is Lana
men.method_unique_to_women()
#>>> I made Cloud Atlas
Note the assignment to self.__class__ and not to self.__class__.__name__. I.e. this technique doesn't just change the class name, it actually converts the instance to the other class (at least both objects have the same id()). Also: 1) I don't know whether it is "safe to replace a self object by another object of the same type in [an object's own] method"; 2) it works with objects of different types, not only ones of the same type; 3) it does not work exactly as amirouche wanted: you can't init the class like Class(args), only Class() (I'm not a pro and can't answer why that is).
Yes, all that will happen is that you won't be able to reference the current instance of your class A (unless you set another variable to self before you change it.) I wouldn't recommend it though, it makes for less readable code.
Note that you're only changing a variable, just like any other. Doing self = 123 is the same as doing abc = 123. self is only a reference to the current instance within the method. You can't change your instance by setting self.
What func(self) should do is to change the variables of your instance:
def func(obj):
    obj.var_a = 123
    obj.var_b = 'abc'
Then do this:
class A:
    def method1(self):
        func(self)  # No need to assign self here
In many cases, a good way to achieve what you want is to call __init__ again. For example:
class MyList(list):
    def trim(self, n):
        self.__init__(self[:-n])

x = MyList([1, 2, 3, 4])
x.trim(2)
assert type(x) == MyList
assert x == [1, 2]
Note that this comes with a few assumptions, such as that everything you want to change about the object is set in __init__. Also beware that this could cause problems with inheriting classes that redefine __init__ in an incompatible manner.
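As a contrived sketch of that second caveat, a subclass whose __init__ takes different arguments would break trim (the NamedList class is made up for illustration):

class NamedList(MyList):
    def __init__(self, name, items):
        self.name = name
        super().__init__(items)

y = NamedList("scores", [1, 2, 3, 4])
y.trim(2)  # TypeError: __init__ is re-invoked with one argument instead of two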
Yes, there is nothing wrong with this. Haters gonna hate. (Looking at you, PyCharm, with your "in most cases imaginable, there's no point in such reassignment and it indicates an error".)
A situation where you could do this is:
def some_method(self, ...):
    ...
    if some_condition:
        self = self.some_other_method()
    ...
    return ...
Sure, you could start the method body by reassigning self to some other variable, but if you wouldn't normally do that with other parameters, why do it with self?
One can use self-assignment in a method to change the class of the instance to a derived class.
Of course one could assign the result to a new variable, but then the use of the new object would ripple through the rest of the code in the method. Reassigning it to self leaves the rest of the method untouched.
class aclass:
    def methodA(self):
        ...
        if condition:
            self = replace_by_derived(self)
            # self is now referencing an instance of a derived class
            # with probably the same values for its data attributes
        # all code here remains untouched
        ...
        self.methodB()  # calls the methodB of derivedclass if condition is True
        ...

    def methodB(self):
        # methodB of class aclass
        ...

class derivedclass(aclass):
    def methodB(self):
        # methodB of class derivedclass
        ...
But apart from such a special use case, I don't see any advantages to replace self.
You can make the instance a singleton element of the class and mark the methods with @classmethod.
from enum import IntEnum
from collections import namedtuple

class kind(IntEnum):
    circle = 1
    square = 2

def attr(y):
    return [getattr(y, x) for x in 'k l b u r'.split()]

class Shape(namedtuple('Shape', 'k,l,b,u,r')):
    self = None

    @classmethod
    def __repr__(cls):
        return "<Shape({},{},{},{},{}) object at {}>".format(
            *(attr(cls.self) + [id(cls.self)]))

    @classmethod
    def transform(cls, func):
        cls.self = cls.self._replace(**func(cls.self))

Shape.self = Shape(k=1, l=2, b=3, u=4, r=5)
s = Shape.self

def nextkind(self):
    return {'k': self.k + 1}

print(repr(s))  # <Shape(1,2,3,4,5) object at 139766656561792>
s.transform(nextkind)
print(repr(s))  # <Shape(2,2,3,4,5) object at 139766656561888>
