Why is it called operator overloading and not overriding in Python?

If I want to change the behavior of an inherited method, I would do something like this:
class a:
    def changeMe(self):
        print('called a')

class b(a):
    def changeMe(self):
        print('called b')
I believe this is an example of overriding in Python.
However, if I want to overload an operator, I do something very similar:
class c:
    aNumber = 0

    def __add__(self, operand):
        print("words")
        return self.aNumber + operand.aNumber

a = c()
b = c()
a.aNumber += 1
b.aNumber += 2
print(a + b)  # prints "words\n3"
I thought that maybe the operator methods are really overridden in python since we overload using optional parameters and we just call it operator overloading out of convention.
But it also couldn't be an override, since '__add__' in object.__dict__.keys() is False; a method needs to be a member of the parent class in order to be overridden (and all classes inherit from object when created).
Where is the gap in my understanding?

I guess since the original question specifically asked about the gap in my own understanding, I am best-positioned to answer it. Go figure.
What I failed to understand was that whereas overriding depends on inheritance, overloading does not. Rather, Python matches methods for overloading based on name only.
For a subclass to override a method, the method does indeed need to exist in the parent class. Therefore, the def __add__ portion is not an example of overriding.
(In this case, I also did not fully understand that if the interpreter sees a + operator, it will look to the class of the operands for a definition of the __add__ magic method.)
Because the + operator is essentially an alias for __add__(), the same name is being used. Operator overloading is in fact an example of overloading because we are changing the behavior of the name (+ or __add__) when it is called with novel parameters (in my example, objects of class c).
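To make that dispatch concrete, here is a minimal sketch (reusing the class c from the question, so the names are only illustrative) showing that the + operator resolves to the __add__ method defined on the operands' class:

class c:
    aNumber = 0

    def __add__(self, operand):
        return self.aNumber + operand.aNumber

a, b = c(), c()
a.aNumber, b.aNumber = 1, 2

# The operator form and the explicit special-method call do the same thing:
print(a + b)                   # 3
print(type(a).__add__(a, b))   # 3, looked up on the class, not the instance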

Overloading means two methods with the same name but different signatures (parameter lists and/or return types). Overriding means two methods with the same name and the same parameters, where the method in the subclass provides different functionality. The main difference is that with overloading you can reuse the same function name with different parameters for different tasks within one class, while with overriding you redefine a base-class function, with the same name and the same parameters, in the derived class; this is also a form of code reuse in the program.


Method overriding about inheritance Python

I am reading python 3.7 documentation. And I am very confused about the following sentences:
"Derived classes may override methods of their base classes. Because methods have no special privileges when calling other methods of the same object, a method of a base class that calls another method defined in the same base class may end up calling a method of a derived class that overrides it. (For C++ programmers: all methods in Python are effectively virtual.)"
Can you show me an example code that "a base class that calls another method defined in the same base class may end up calling a method of a derived class that overrides it."?
And here is my understanding:
class A:
    def me(self):
        print("This is A")

    def idet(self):
        self.me()

class B(A):
    def me(self):
        print("this is B")

a = A()
b = B()
b.me()
b.idet()
the result is
this is B
this is B
I am not sure if it is the case.
And the last question is what does "all methods in Python are effectively virtual" mean? (I am familiar with Java but not C++)
The example shows this principle exactly. b, which is an instance of B, calls the method idet defined in class A, which in turn calls me. Since B overrides me, B's method is called, and you get this is B printed out.
In C++, methods can't be overridden by default; you have to declare them as virtual to get this behavior. In Python (and in Java, which you mentioned you are familiar with) this is the default behavior. In Java you can mark a method so it can't be overridden by declaring it final.
When you derive a class, all the methods of the superclass are inherited by the subclass. When, while defining the subclass, you redefine a method that already exists in the superclass, it is called overriding. Overriding a method from the superclass in the subclass effectively cuts the connection to the superclass's version of that method.
In your program, you're calling b.me() first, which is an overridden method; so it executes me() from class B. Then you've got b.idet(), which is inherited unchanged from the base class A. But when you look closely at the body of idet(), what it does is call the me() method on the instance it was invoked on. In this case, since that instance is of class B, it executes the me() method from class B.
idet() contains self.me(); self refers to the instance the method was called on, so the lookup of me starts at that instance's class.
I think the Python3 documentation could have also mentioned that when an instance of class B, 'b', refers to a method that is pulled from class A, it passes itself into that method as the first argument (self). Therefore any reference to "self" inside any method called by 'b' (including its inherited methods) will first search class B functions before class A functions, even if the method called by 'b' was derived from class A.
b.idet() is equivalent to A.idet(b) which calls b.me()
"a method of a base class that calls another method defined in the
same base class may end up calling a method of a derived class that overrides it"
I can see how this seems misleading, and I think it's because the method idet(self) of base class A only calls another method defined in the same base class (A) when self refers to an instance of A; in the scenario above, when self refers to an instance of subclass B, idet(self) doesn't really call another method defined in A, it calls the method defined in B that overrides it.
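To make that contrast explicit, here is a small sketch (reusing the A and B classes from the question): which me gets called depends only on the class of the instance that self refers to.

class A:
    def me(self):
        print("This is A")

    def idet(self):
        self.me()   # dispatched on type(self), not on the class where idet is defined

class B(A):
    def me(self):
        print("this is B")

A().idet()   # prints "This is A"  (self is an A instance)
B().idet()   # prints "this is B"  (self is a B instance, B.me overrides A.me)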

Late (runtime) addition of additional parent class possible?

This is about multiple inheritance. Parent class A provides a few methods, and parent class B a few additional ones. By creating a class inheriting from both A and B, I could instantiate an object having both method sets.
Now my problem is that I only detect, after having instantiated A, that the methods from B would be helpful too (or, more strictly stated, that my object is also of class B).
While
aInstance.bMethod = types.MethodType(localFunction, aInstance)
works in principle, it has to be repeated for every bMethod and looks unnecessarily complicated. It also requires stand-alone (local) functions instead of a conceptually cleaner class B. Is there a more streamlined approach?
Update:
I tried abstract base class with some success, but there only the methods of one additional class could be added.
What I finally achieved is a little routine, which adds all top-level procedures of a given module:
from types import MethodType
from inspect import ismodule, isfunction, getmembers

# adds all functions found in module as methods to given obj
def classMagic(obj, module):
    assert ismodule(module)
    for name, fn in getmembers(module, isfunction):
        if not name.startswith("__"):
            setattr(obj, name, MethodType(fn, obj))
Functionally this is sufficient, and I'm also pleased with the automatism that all functions are processed; I don't have separate places for defining a function and adding it as a method, so maintenance is easy. The only remaining issue is reflected by the startswith line, as an example of a necessary naming convention if selected functions shall not be added.
If I understand correctly, you want to add mixins to your class at run time. A very common way of adding mixins in Python is through decorators (rather than inheritance), so we can borrow this idea to do something runtime to the object (instead to the class).
I used functools.partial to freeze the self parameter, to emulate the process of binding a function to an object (i.e. turn a function into a method).
from functools import partial

class SimpleObject():
    pass

def MixinA(obj):
    def funcA1(self):
        print('A1 - propertyA is equal to %s' % self.propertyA)
    def funcA2(self):
        print('A2 - propertyA is equal to %s' % self.propertyA)
    obj.propertyA = 0
    obj.funcA1 = partial(funcA1, self=obj)
    obj.funcA2 = partial(funcA2, self=obj)
    return obj

def MixinB(obj):
    def funcB1(self):
        print('B1')
    obj.funcB1 = partial(funcB1, self=obj)
    return obj

o = SimpleObject()
# need A characteristics?
o = MixinA(o)
# need B characteristics?
o = MixinB(o)
Instead of functools.partial, you can also use types.MethodType as you did in your question; I think that is a better/cleaner solution.
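For completeness, here is a minimal sketch of the MethodType variant: it keeps B as an ordinary class and binds all of B's public methods onto an already-existing A instance at runtime. The class and method names below are purely illustrative, not from the original question.

from types import MethodType
from inspect import isfunction

class A:
    def methodA(self):
        return "from A"

class B:
    def methodB(self):
        return "from B, value=%s" % getattr(self, "value", None)

def adopt_methods(obj, cls):
    # bind every public function defined on cls to obj as a method
    for name, fn in vars(cls).items():
        if isfunction(fn) and not name.startswith("__"):
            setattr(obj, name, MethodType(fn, obj))

a = A()
a.value = 42
adopt_methods(a, B)
print(a.methodA())  # "from A"
print(a.methodB())  # "from B, value=42"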

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4

class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by the trick of accepting a self argument that it never uses.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just down to the Zen of Python, which states that "explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
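A small sketch of the difference described above (the class and function names are just illustrative): the staticmethod can be called on the class or on an instance, the self-ignoring method needs an instance, and the standalone function lives outside the class entirely.

def add_one(value):               # standalone function: no class needed at all
    return value + 1

class Counter:
    @staticmethod
    def add_one_static(value):    # callable on the class or on an instance
        return value + 1

    def add_one_method(self, value):   # self is required but never used
        return value + 1

print(add_one(1))                   # 2
print(Counter.add_one_static(1))    # 2, no instance needed
print(Counter().add_one_method(1))  # 2, but an instance had to be created
# Counter.add_one_method(1) would raise TypeError: missing 'value'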
Today I suddenly found a benefit of using @staticmethod.
If you create a staticmethod within a class, you don't need to create an instance of the class before using it.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        ..parsing works..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        ..parsing works..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be useful in other places. If you want to reuse it via File1, you must create an instance of File1 before calling parse. With the staticmethod in class File2, you can call the method directly with File2.parse(path).
This makes your work more convenient and natural.
I will add something the other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class, you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
1. Put it outside the class. But we just decided against this.
2. Do nothing new: while unused, still keep the self parameter.
3. Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with a small optimization, not having to partially apply the function to self). In that case, you pick option 3 and add @staticmethod on top of your function, as sketched below.
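A minimal sketch of option 3 (the class names C and D and the method f are purely illustrative): the base class declares f as a staticmethod because it needs no instance state, other methods still call it through self.f, and a subclass remains free to override f with a regular method that does use instance state.

class C:
    @staticmethod
    def f(x):
        # no instance state needed here
        return x * 2

    def run(self, x):
        # calling through self keeps the override hook open
        return self.f(x)

class D(C):
    def __init__(self, factor):
        self.factor = factor

    def f(self, x):
        # the override does depend on instance state
        return x * self.factor

print(C().run(3))    # 6
print(D(10).run(3))  # 30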
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
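As a rough illustration of that Django-style pattern (the Article class and the make_slug helper here are hypothetical stand-ins, not taken from the original project):

import re

class Article:
    def __init__(self, title):
        self.title = title
        self.slug = self.make_slug(title)

    @staticmethod
    def make_slug(title):
        # lowercase, strip non-alphanumerics, join words with hyphens
        words = re.findall(r'[a-z0-9]+', title.lower())
        return '-'.join(words)

# usable through an instance or directly on the class
print(Article("Hello, World!").slug)             # hello-world
print(Article.make_slug("Why @staticmethod?"))   # why-staticmethod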

Python : how to make subclasses 'closed' under methods inherited from its superclass

I know I should have come up with a better title, but anyway...
Say I make a class inherited from int in python:
class Foo(int):
    def is_even(self):
        return self % 2 == 0
and do something like this
a = Foo(3)
b = Foo(5)
print(type(a + b))  #=> <class 'int'>
I understand this behaviour is not surprising at all, as the __add__ called here is defined to return int instances. But I would like to create a class so that a + b returns Foo(8). In other words, I'd like the result of a + b to have the is_even method.
Is there any way I can achieve this conveniently? Or do I have to override __add__ and everything else?
Background information: I'm trying to write an interpreter for an esoteric programming language called Grass. In that attempt, I want to have a class that behaves like a 'callable int' (actually, numpy.uint8), whose __call__ would be like:
def __call__(self, other):
    if self == other:
        return lambda x: lambda y: x
    else:
        return lambda x: lambda y: y
There are tricks that you could do with metaclasses (__metaclass__ class variable) or the __getattribute__ special method. But the documentation states:
Bypassing the getattribute() machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods (the special method must be set on the class object itself in order to be consistently invoked by the interpreter)
Which means that if you want to make sure that the parent class is never handled directly, you need to intercept everything. And for int, that is described as emulating numeric types (i.e.: implementing all those methods).
That said, I believe you could implement all those methods in your class quite easily by creating a lambda or generic method that takes two parameters and just calls super on them. And then assign that method to all the specific methods that you need to implement. So you implement once and reuse it.
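A rough sketch of that approach, under the assumption that simply re-wrapping the inherited int result into Foo is acceptable (the list of operator names below is deliberately short and illustrative, not exhaustive):

class Foo(int):
    def is_even(self):
        return self % 2 == 0

def _wrap(name):
    def method(self, *args):
        result = getattr(super(Foo, self), name)(*args)
        # re-wrap plain ints so the result stays "closed" under Foo
        return Foo(result) if isinstance(result, int) else result
    return method

# write the wrapper once, assign it to every operator you care about
for _name in ('__add__', '__radd__', '__sub__', '__mul__', '__neg__'):
    setattr(Foo, _name, _wrap(_name))

a, b = Foo(3), Foo(5)
print(type(a + b))        # <class '__main__.Foo'>
print((a + b).is_even())  # True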

is it ever useful to define a class method with a reference to self not called 'self' in Python?

I'm teaching myself Python and I see the following in Dive into Python section 5.3:
By convention, the first argument of any Python class method (the reference to the current instance) is called self. This argument fills the role of the reserved word this in C++ or Java, but self is not a reserved word in Python, merely a naming convention. Nonetheless, please don't call it anything but self; this is a very strong convention.
Considering that self is not a Python keyword, I'm guessing that it can sometimes be useful to use something else. Are there any such cases? If not, why is it not a keyword?
No, unless you want to confuse every other programmer that looks at your code after you write it. self is not a keyword because it is an identifier. It could have been a keyword and the fact that it isn't one was a design decision.
As a side observation, note that Pilgrim is committing a common misuse of terms here: a class method is quite a different thing from an instance method, which is what he's talking about here. As wikipedia puts it, "a method is a subroutine that is exclusively associated either with a class (in which case it is called a class method or a static method) or with an object (in which case it is an instance method).". Python's built-ins include a staticmethod type, to make static methods, and a classmethod type, to make class methods, each generally used as a decorator; if you don't use either, a def in a class body makes an instance method. E.g.:
>>> class X(object):
...     def noclass(self): print self
...     @classmethod
...     def withclass(cls): print cls
...
>>> x = X()
>>> x.noclass()
<__main__.X object at 0x698d0>
>>> x.withclass()
<class '__main__.X'>
>>>
As you see, the instance method noclass gets the instance as its argument, but the class method withclass gets the class instead.
So it would be extremely confusing and misleading to use self as the name of the first parameter of a class method: the convention in this case is instead to use cls, as in my example above. While this IS just a convention, there is no real good reason for violating it -- any more than there would be, say, for naming a variable number_of_cats if the purpose of the variable is counting dogs!-)
The only case of this I've seen is when you define a function outside of a class definition, and then assign it to the class, e.g.:
class Foo(object):
    def bar(self):
        # Do something with 'self'
        pass

def baz(inst):
    return inst.bar()

Foo.baz = baz
In this case, self is a little strange to use, because the function could be applied to many classes. Most often I've seen inst or cls used instead.
I once had some code like (and I apologize for lack of creativity in the example):
class Animal:
    def __init__(self, volume=1):
        self.volume = volume
        self.description = "Animal"

    def Sound(self):
        pass

    def GetADog(self, newvolume):
        class Dog(Animal):
            def Sound(this):
                return self.description + ": " + ("woof" * this.volume)
        return Dog(newvolume)
Then we have output like:
>>> a = Animal(3)
>>> d = a.GetADog(2)
>>> d.Sound()
'Animal: woofwoof'
I wasn't sure whether self within the Dog class would shadow self within the Animal class, so I opted to make Dog's reference the word "this" instead. In my opinion, for that particular application, that was clearer.
Because it is a convention, not language syntax. There is a Python style guide that people who program in Python follow. This way libraries have a familiar look and feel. Python places a lot of emphasis on readability, and consistency is an important part of this.
I think that the main reason self is used by convention rather than being a Python keyword is because it's simpler to have all methods/functions take arguments the same way rather than having to put together different argument forms for functions, class methods, instance methods, etc.
Note that if you have an actual class method (i.e. one defined using the classmethod decorator), the convention is to use "cls" instead of "self".
