When I call a recursive method of the base class from the derived class, the recursive call dispatches to the derived class's method instead of the base class's method. How can I avoid that without modifying the base class implementation (class A in the example)?
Here is an example:
class A(object):
    # recursive method
    def f(self, x):
        print x,
        if x < 0:
            self.f(x+1)
        if x > 0:
            self.f(x-1)
        if x == 0:
            print ""

class B(A):
    # Override method
    def f(self):
        # do some pretty cool stuff
        super(B, self).f(25)

if __name__ == "__main__":
    A().f(5)
    B().f()
I've got this output:
5 4 3 2 1 0
25
Traceback (most recent call last):
File "./test.py", line 19, in <module>
B().f()
File "./test.py", line 15, in f
super(B, self).f(25)
File "./test.py", line 9, in f
self.f(x-1)
TypeError: f() takes exactly 1 argument (2 given)
Name mangling is the tool for this job. In your case it would look like this:
class A(object):
    # recursive method
    def f(self, x):
        print x,
        if x < 0:
            self.__f(x+1)
        if x > 0:
            self.__f(x-1)
        if x == 0:
            print ""

    __f = f

class B(A):
    # Override method
    def f(self):
        # do some pretty cool stuff
        super(B, self).f(25)
Explanation from the linked documentation:
Any identifier of the form __spam (at least two leading underscores,
at most one trailing underscore) is textually replaced with
_classname__spam, where classname is the current class name with
leading underscore(s) stripped.
In your second example, your problem is that the self you're passing along is an instance of B, not an instance of A, so when you attempt to call self.f you're calling B.f.
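Purely for illustration (this is a simplified sketch, not the code from the post), here is what the mangling buys you: the recursive calls go through the mangled name, which B's f never touches:

class A(object):
    def f(self, x):
        if x > 0:
            return self.__f(x - 1)  # compiled as self._A__f(x - 1)
        return x
    __f = f                         # keep a copy of A.f under the mangled name

class B(A):
    def f(self):                    # different signature; _A__f is unaffected
        return super(B, self).f(3)

print(B().f())   # 0 -- the recursion never dispatches to B.f
print(A._A__f)   # the mangled name is an ordinary class attribute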
Unfortunately, the behavior you're seeing is really sort of how OO programming should work. Anything you do to work around this is going to be a bit of a hack around the OO paradigm. Another option which might be more explicit than using mangling, but is not necessarily "real recursion", would be to pass along the function you want to recurse on:
class A(object):
    # recursive method
    def f(self, x, func=None):
        if func is None:
            func = A.f
        print x,
        if x < 0:
            func(self, x+1, func)
        if x > 0:
            func(self, x-1, func)
        if x == 0:
            print ""

class B(A):
    # Override method
    def f(self):
        # do some pretty cool stuff
        super(B, self).f(25)

if __name__ == "__main__":
    A().f(5)
    B().f()
This probably isn't the best way this could be written, but I think it gets the idea across. You could alternately try passing A.f in from your call in B.f.
I would suggest renaming the base class's f method to a private method called _f and having that recurse. You can then introduce a new f method on the base class which just calls _f. You're then free to change f in the subclass.
However, it may not be considered good practice to change a method's signature in a subclass.
class A(object):
    def f(self, x):
        return self._f(x)

    # recursive method
    def _f(self, x):
        print x,
        if x < 0:
            self._f(x+1)
        if x > 0:
            self._f(x-1)
        if x == 0:
            print ""

class B(A):
    # Override method
    def f(self):
        # do some pretty cool stuff
        super(B, self).f(25)

if __name__ == "__main__":
    A().f(5)
    B().f()
If you can't modify A's implementation, you can take advantage of the difference in function signatures.
class B(A):
    def f(self, x=None):
        if x is None:
            # do some pretty cool stuff
            self.f(25)
        else:
            super(B, self).f(x)
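As a quick check (added for illustration, not part of the original answer), this keeps class A from the question untouched and still produces the expected countdowns; the recursive self.f(x-1) calls inside A.f land on B.f with a non-None x and immediately delegate back up:

if __name__ == "__main__":
    A().f(5)   # 5 4 3 2 1 0
    B().f()    # 25 24 23 ... 1 0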
Related
I have the following error:
Traceback (most recent call last): line 25, line 14, line 3
AttributeError: 'str' object has no attribute 'Teamname'
Here is my python code:
class Team:
    def __init__(self, Name="Name", Origin="india"):
        self.Teamname = Name
        self.Teamorigin = Origin
    def DefTeamname(self, Name):
        self.Teamname = Name
    def defTeamorigin(self, Origin):
        self.Teamorigin = Origin

class Player(Team):
    def __init__(self, Pname, Ppoints, Tname, Torigin):
        Team.__init__(Tname, Torigin)
        self.Playername = Pname
        self.Playerpoints = Ppoints
    def Scoredpoint(self):
        self.Playerpoints += 1
    def __str__(self):
        return self.Playername + "has scored" + str(self.Playerpoints) + "points"

Player1 = Player('Sid', 0, 'Gokulam', 'Kochi')
print(Player1)
What am I doing wrong?
Your error is thrown because the first positional argument you pass to Team.__init__ is interpreted as self, so __init__ then tries to set attributes on that string. It only gets that far at all because Team.__init__ defines default parameter values, so Origin simply falls back to its default.
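A minimal sketch (added for illustration) of the argument shift and the two usual ways to fix the call; the Team class is copied from the question:

class Team:
    def __init__(self, Name="Name", Origin="india"):
        self.Teamname = Name
        self.Teamorigin = Origin

# Broken: 'Gokulam' is bound to self, 'Kochi' to Name, and Origin keeps its
# default, so __init__ tries to set Teamname on the string 'Gokulam':
# Team.__init__('Gokulam', 'Kochi')   # AttributeError

class Player(Team):
    def __init__(self, Pname, Ppoints, Tname, Torigin):
        # Either pass the instance explicitly when calling on the class...
        Team.__init__(self, Tname, Torigin)
        # ...or, preferably, let super() bind it for you:
        # super().__init__(Tname, Torigin)
        self.Playername = Pname
        self.Playerpoints = Ppoints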
Note for completeness that you can call the method directly on the target class (or any other class, for that matter), but then you must pass self yourself and you lose the advantage of super() always pointing at the next class in the MRO. This works:
class A:
    def __init__(self, a):
        self._a = a

class B:
    def __init__(self, a, b):
        self._b = b
        A.__init__(self, a)

b = B(6, 7)
assert b._a == 6
Incidentally, this shows that __init__ is just a function which takes a mutable first argument (self by convention) and mutates that argument.
You really should use super(), however. What happens if I redefine A?
class newA:
    def __init__(self):
        self._other = True

class A(newA):
    ...
If you have used super() all the way through, everything will work fine:
class NewA:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

class A(NewA):
    def __init__(self, a=None, **kwargs):
        self._a = a
        super().__init__(**kwargs)
Note the use of keyword arguments to pass up the chain without worrying about the semantics of every class's init.
Further Reading
Python's super considered super
Python's super considered harmful for warnings about how things can go wrong if you don't keep your semantics compatible.
class Player(Team):
    def __init__(self, Pname, Ppoints, Tname, Torigin):
        super().__init__(Tname, Torigin)
        ...
You shouldn't call __init__ directly on the class without passing the instance: either write Team.__init__(self, Tname, Torigin) explicitly or, better, use the super() notation shown above, which binds the self parameter for you.
Being new to OOP, I wanted to know whether there is any way of inheriting from one of multiple classes based on how the child class is called in Python. The reason I am trying to do this is that I have methods with the same name in three parent classes, each with different functionality, and the corresponding parent has to be chosen based on certain conditions at the time of object creation.
For example, I tried to make class C inherit from A or B based on whether any arguments were passed at instantiation time, but in vain. Can anyone suggest a better way to do this?
class A:
    def __init__(self, a):
        self.num = a
    def print_output(self):
        print('Class A is the parent class, the number is 7', self.num)

class B:
    def __init__(self):
        self.digits = []
    def print_output(self):
        print('Class B is the parent class, no number given')

class C(A if kwargs else B):
    def __init__(self, **kwargs):
        if kwargs:
            super().__init__(kwargs['a'])
        else:
            super().__init__()

temp1 = C(a=7)
temp2 = C()
temp1.print_output()
temp2.print_output()
The required output would be 'Class A is the parent class, the number is 7' followed by 'Class B is the parent class, no number given'.
Whether you're just starting out with OOP or have been doing it for a while, I would suggest you get a good book on design patterns. A classic is Design Patterns by Gamma, Helm, Johnson, and Vlissides.
Instead of using inheritance, you can use composition with delegation. For example:
class A:
    def do_something(self):
        # some implementation
        pass

class B:
    def do_something(self):
        # some implementation
        pass

class C:
    def __init__(self, use_A):
        # assign an instance of A or B depending on whether argument use_A is True
        self.instance = A() if use_A else B()
    def do_something(self):
        # delegate to the A or B instance:
        self.instance.do_something()
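A quick usage sketch (added for illustration; use_A is the constructor parameter from the answer above):

c1 = C(use_A=True)    # delegates do_something to an A instance
c2 = C(use_A=False)   # delegates do_something to a B instance
c1.do_something()
c2.do_something()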
Update
In response to a comment made by Lev Barenboim, the following demonstrates how you can make composition with delegation look more like regular inheritance. If class C has assigned an instance of class A to self.instance, then an attribute of A such as x can be accessed internally as self.x as well as self.instance.x (assuming class C does not define attribute x itself); likewise, given an instance of C named c, you can refer to that attribute as c.x, as if class C had inherited from class A.
The basis for doing this lies with the special methods __getattr__ and __getattribute__. __getattr__ can be defined on a class and is called whenever an attribute is referenced but not found by normal lookup. __getattribute__ can be called on an object to retrieve an attribute by name.
Note that in the following example, class C no longer even has to define method do_something if all it does is delegate to self.instance:
class A:
    def __init__(self, x):
        self.x = x
    def do_something(self):
        print('I am A')

class B:
    def __init__(self, x):
        self.x = x
    def do_something(self):
        print('I am B')

class C:
    def __init__(self, use_A, x):
        # assign an instance of A or B depending on whether argument use_A is True
        self.instance = A(x) if use_A else B(x)

    # called when an attribute is not found:
    def __getattr__(self, name):
        # assume it is implemented by self.instance
        return self.instance.__getattribute__(name)

    # something unique to class C:
    def foo(self):
        print('foo called: x =', self.x)

c = C(True, 7)
print(c.x)
c.foo()
c.do_something()
# This will throw an exception:
print(c.y)
Prints:
7
foo called: x = 7
I am A
Traceback (most recent call last):
File "C:\Ron\test\test.py", line 34, in <module>
print(c.y)
File "C:\Ron\test\test.py", line 23, in __getattr__
return self.instance.__getattribute__(name)
AttributeError: 'A' object has no attribute 'y'
I don't think you can choose the base class from inside the class itself based on values passed at instantiation.
Rather, you can define a factory function like this:
class A:
    def sayClass(self):
        print("Class A")

class B:
    def sayClass(self):
        print("Class B")

def make_C_from_A_or_B(make_A):
    class C(A if make_A else B):
        def sayClass(self):
            super().sayClass()
            print("Class C")
    return C()

make_C_from_A_or_B(True).sayClass()
which outputs:
Class A
Class C
Note: You can find information about the factory pattern, with an example I found good enough, in this article (about a parser factory).
I made these two classes:
class A:
    @staticmethod
    def f(x):
        print("x is", x)

class B:
    def f(x):
        print("x is", x)
And used them like this:
>>> A.f(1)
x is 1
>>> B.f(1)
x is 1
It looks like f became a static method on B even without the decorator. Why would I need the decorator?
It used to matter more back in Python 2, where the instance-ness of instance methods was enforced more strongly:
>>> class B:
...     def f(x):
...         print("x is", x)
...
>>> B.f(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unbound method f() must be called with B instance as first argument (got int instance instead)
You had to mark static methods with @staticmethod back then.
These days, @staticmethod still makes it clear that the method is static, which helps with readability and documentation generation, and it lets you call the method on an instance without Python trying to bind the instance as the first argument.
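To make that last point concrete, here is a small Python 3 sketch (added for illustration, not part of the original answer):

class B:
    def f(x):
        print("x is", x)

class C:
    @staticmethod
    def f(x):
        print("x is", x)

B.f(1)     # works: x is 1
C.f(1)     # works: x is 1
C().f(1)   # works: the decorator prevents the instance from being bound
B().f(1)   # TypeError: f() takes 1 positional argument but 2 were given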
Try these two classes, both having a cry method: one a regular instance method, and the other a staticmethod that still has self in its signature.
class Cat:
    def __init__(self):
        self.sound = "meow"
    def cry(self):
        print(self.sound)

x = Cat()
x.cry()
meow
and with another class
class Dog:
    def __init__(self):
        self.sound = "ruff-ruff"
    @staticmethod
    def cry(self):
        print(self.sound)

x = Dog()
x.cry()
TypeError: cry() missing 1 required positional argument: 'self'
and we can see that the @staticmethod decorator stops the instance from being passed in automatically, so cry() is left without an argument for its (now ordinary) self parameter.
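For completeness (added here, not part of the original answer), a static method written the usual way takes no self at all and works the same whether called on the class or on an instance:

class Dog:
    @staticmethod
    def cry():
        print("ruff-ruff")

Dog.cry()     # ruff-ruff
Dog().cry()   # ruff-ruff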
I would like to wrap a number of class methods in Python with the same wrapper.
Conceptually it would look something like this in the simplest scenario:
x = 0  # some arbitrary context

class Base(object):
    def a(self):
        print "a x: %s" % x
    def b(self):
        print "b x: %s" % x

class MixinWithX(Base):
    """Wrap"""
    def a(self):
        global x
        x = 1
        super(MixinWithX, self).a()
        x = 0
    def b(self):
        global x
        x = 1
        super(MixinWithX, self).b()
        x = 0
Of course, when there are more methods than a and b, this becomes a mess. It seems like there ought to be something simpler. Obviously x could be modified in a decorator but one still ends up having a long list of garbage, which instead of the above looks like:
from functools import wraps

def withx(f):
    @wraps(f)  # good practice
    def wrapped(*args, **kwargs):
        global x
        x = 1
        f(*args, **kwargs)
        x = 0
    return wrapped

class MixinWithX(Base):
    """Wrap"""
    @withx
    def a(self):
        super(MixinWithX, self).a()
    @withx
    def b(self):
        super(MixinWithX, self).b()
I thought about using __getattr__ in the mixin, but of course since methods such as a and b are already defined this is never called.
I also thought about using __getattribute__, but it returns the attribute rather than wrapping the call. I suppose __getattribute__ could return a closure (example below), but I am not sure how sound that design is. Here is an example:
class MixinWithX(Base):
    # a list of the methods of our parent class (Base) that are wrapped
    wrapped = ['a', 'b']

    # application of the wrapper around the methods specified
    def __getattribute__(self, name):
        original = object.__getattribute__(self, name)
        if name in MixinWithX.wrapped:
            def wrapper(*args, **kwargs):
                global x
                x = 1  # in this example, a context manager would be handy.
                ret = original(*args, **kwargs)
                x = 0
                return ret
            return wrapper
        return original
It has occurred to me that there may be something built into Python that may alleviate the need to manually reproduce every method of the parent class that is to be wrapped. Or maybe a closure in __getattribute__ is the proper way to do this. I would be grateful for thoughts.
Here's my attempt, which allows for a more terse syntax...
x = 0  # some arbitrary context

# Define a simple function to return a wrapped class
def wrap_class(base, towrap):
    class ClassWrapper(base):
        def __getattribute__(self, name):
            original = base.__getattribute__(self, name)
            if name in towrap:
                def func_wrapper(*args, **kwargs):
                    global x
                    x = 1
                    try:
                        return original(*args, **kwargs)
                    finally:
                        x = 0
                return func_wrapper
            return original
    return ClassWrapper

# Our existing base class
class Base(object):
    def a(self):
        print "a x: %s" % x
    def b(self):
        print "b x: %s" % x

# Create a wrapped class in one line, without needing to define a new class
# for each class you want to wrap.
Wrapped = wrap_class(Base, ('a',))

# Now use it
m = Wrapped()
m.a()
m.b()

# ...or do it in one line...
m = wrap_class(Base, ('a',))()
...which outputs...
a x: 1
b x: 0
You can do this using decorators and inspect:
from functools import wraps
import inspect

def withx(f):
    @wraps(f)
    def wrapped(*args, **kwargs):
        print "decorator"
        x = 1
        f(*args, **kwargs)
        x = 0
    return wrapped

class MyDecoratingBaseClass(object):
    def __init__(self, *args, **kwargs):
        for member in inspect.getmembers(self, predicate=inspect.ismethod):
            if member[0] in self.wrapped_methods:
                setattr(self, member[0], withx(member[1]))

class MyDecoratedSubClass(MyDecoratingBaseClass):
    wrapped_methods = ['a', 'b']

    def a(self):
        print 'a'

    def b(self):
        print 'b'

    def c(self):
        print 'c'

if __name__ == '__main__':
    my_instance = MyDecoratedSubClass()
    my_instance.a()
    my_instance.b()
    my_instance.c()
Output:
decorator
a
decorator
b
c
There are two general directions I can think of which are useful in your case.
One is using a class decorator: write a function which takes a class and returns a class with the same set of methods, decorated (either by creating a new class by calling type(...), or by changing the input class in place); a sketch of this direction follows below.
EDIT: (the actual wrapping/inspecting code I had in mind is similar to what @girasquid has in his answer, but the connecting is done using decoration instead of mixin/inheritance, which I think is more flexible and robust.)
Which brings me to the second option, which is to use a metaclass. This may be cleaner, yet trickier if you're not used to working with metaclasses. If you don't have access to the definition of the original class, or don't want to change the original definition, you can subclass the original class and set the metaclass on the derived class.
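A minimal sketch of the class-decorator direction (the names wrap_methods and with_context are made up for this illustration, and Base is copied from the question; this is one way it could look, not the only one):

from functools import wraps

x = 0  # the module-level context from the question

def with_context(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        global x
        x = 1
        try:
            return f(*args, **kwargs)
        finally:
            x = 0
    return wrapper

def wrap_methods(*names):
    """Class decorator: wrap the named methods of the decorated class."""
    def decorate(cls):
        for name in names:
            setattr(cls, name, with_context(getattr(cls, name)))
        return cls
    return decorate

class Base(object):
    def a(self):
        print("a x: %s" % x)
    def b(self):
        print("b x: %s" % x)

@wrap_methods('a')
class Wrapped(Base):
    pass

Wrapped().a()   # a x: 1
Wrapped().b()   # b x: 0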
There is a solution, and it's called a decorator. Google "python decorators" for lots of information.
The basic concept is that a decorator is a function which takes a function as a parameter, and returns a function:
def decorate_with_x(f):
    def inner(self):
        self.x = 1  # you must always use self to refer to member variables, even if you're not decorating
        f(self)
        self.x = 0
    return inner

class Foo(object):
    # The @-syntax passes the function defined on the next line to the named
    # decorator, so it is equivalent to: foo_func = decorate_with_x(foo_func)
    @decorate_with_x
    def foo_func(self):
        pass
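A quick usage check (added for illustration):

f = Foo()
f.foo_func()
print(f.x)   # 0 -- the wrapper set self.x to 1 around the call, then back to 0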
This code throws AttributeError: "wtf!" because A.foo() ends up calling B.foo1(). Shouldn't it call A.foo1()? How can I force it to call A.foo1() (and, more generally, make any method call inside A.foo() resolve to A's own methods)?
class A(object):
    def foo(self):
        print self.foo1()
    def foo1(self):
        return "foo"

class B(A):
    def foo1(self):
        raise AttributeError, "wtf!"
    def foo(self):
        raise AttributeError, "wtf!"
    def foo2(self):
        super(B, self).foo()

myB = B()
myB.foo2()
In class A, instead of calling methods through self you need to call A's methods directly and pass in self manually.
This is not the normal way of doing things -- you should have a really good reason for doing it like this.
class A(object):
    def foo(self):
        print A.foo1(self)
    def foo1(self):
        return "foo"

class B(A):
    def foo1(self):
        raise AttributeError, "wtf!"
    def foo(self):
        raise AttributeError, "wtf!"
    def foo2(self):
        super(B, self).foo()

myB = B()
myB.foo2()
In the code:
def foo2(self):
    super(B, self).foo()
self is an instance of B.
When a method inherited from A is called on an instance of B, attribute lookup starts in B's namespace, and only if the name is not found there (i.e. not overridden by B) is A's implementation used, but always with self referring to the B instance. At no point does self become a plain A instance.
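A tiny sketch (added for illustration) showing that self keeps its B identity even while A's code is running, which is why the lookup lands on B.foo1:

class A(object):
    def foo(self):
        print(type(self).__name__)   # prints "B" when reached via a B instance
        print(self.foo1())           # so this lookup finds B.foo1 first
    def foo1(self):
        return "A.foo1"

class B(A):
    def foo1(self):
        return "B.foo1"

B().foo()   # prints: B, then B.foo1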
It is working as intended; this is how method overriding works in virtually every object-oriented language: a subclass override replaces the parent's method for all calls made through the instance, including calls the parent class makes on self.
However, if you really do want to call A.foo1() specifically, you can do it like this, although bypassing dynamic dispatch this way goes against the usual principles of good object-oriented design.
class A(object):
    def foo(self):
        A.foo1(self)
One can see what Python is doing here, but the manner of overriding is a bit extreme. Take the case where class A defines 100 attributes and class B inherits them and adds one more. We want B's __init__() to call A's __init__() and then define only its single extra attribute. Similarly, if we define a reset() method in A that sets all attributes to zero, the corresponding reset() method for B should be able to call A's reset() and then zero out the single B attribute, instead of duplicating all of A's code. Python makes difficult what is supposed to be a major advantage of object-oriented programming, namely the reuse of code. The best option here is to avoid overriding methods that we really want to reuse. If you want to get a sense of the complications with Python here, try this code:
class X(object):
    def __init__(self):
        print "X"
        self.x = 'x'
        self.reset()
        print "back to X"
    def reset(self):
        print "reset X"
        self.xx = 'xx'

class Y(X):
    def __init__(self):
        print "Y"
        super(Y, self).__init__()
        self.y = 'y'
        self.reset()
        print "back to Y"
    def reset(self):
        print "reset Y"
        super(Y, self).reset()
        print "back to reset Y"
        self.yy = 'yy'

aY = Y()
(To make this work properly, remove the self.reset() call in __init__() for class Y.)
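One way to follow that advice (added here as an illustration, not part of the original post) is to keep the code you want to reuse in an internal method that subclasses are not expected to override, similar to the renamed _f answer earlier in this thread:

class X(object):
    def __init__(self):
        self.x = 'x'
        self._reset_x()            # internal helper, never overridden

    def reset(self):               # public hook, safe to override
        self._reset_x()

    def _reset_x(self):
        self.xx = 'xx'

class Y(X):
    def __init__(self):
        super(Y, self).__init__()  # only runs X's own reset logic
        self.y = 'y'
        self._reset_y()

    def reset(self):
        super(Y, self).reset()
        self._reset_y()

    def _reset_y(self):
        self.yy = 'yy'

aY = Y()
assert aY.xx == 'xx' and aY.yy == 'yy'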