Everything in Python is an object, so why are operators not? - python

Everything in Python is an object
We all know this sentence, and all Pythonistas (including me) love it. In that regard, it is interesting to look at operators. They do not seem to be objects, e.g.
>>> type(*) # or /, +, -, < ...
returns SyntaxError: invalid syntax.
However, there might be situations where it would be useful to have them treated as objects. Consider for example a function like
def operation(operand1, operand2, operator):
    """
    This function returns the operation of two operands defined by the operator as parameter
    """
    # The following line is invalid Python code and should only describe the function
    return operand1 <operator> operand2
So operation(1, 2, +) would return 3, operation(1, 2, *) would return 2, operation(1, 2, <) would return True, etc...
Why is this not implemented in Python? Or is it, and if so, how?
Remark: I do know the operator module, but it would not be applicable in the example function above either. I am also aware that one could work around it, e.g. with operations(operand1, operand2, '>'), and find the desired operation via the string representation of the corresponding operator. However, I am asking for the reason for the non-existence of operator objects that can be passed as parameters to functions like every other Python object.

Every value is an object. Operators are not values; they are syntax. However, they are implemented by functions, which are values. The operator module provides access to those functions.
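To make that concrete, here is a sketch of the questioner's operation function written with the operator module (the parameter name op is my choice):

```python
import operator

def operation(operand1, operand2, op):
    # op is any two-argument callable, e.g. a function from the operator module
    return op(operand1, operand2)

print(operation(1, 2, operator.add))  # 3
print(operation(1, 2, operator.mul))  # 2
print(operation(1, 2, operator.lt))   # True
```

Since operator.add, operator.mul, etc. are ordinary function objects, they can be passed around exactly as the questioner wanted to pass + and *.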
Not at all applicable to Python, though suggestive, is that a language could provide additional syntax to convert an operator into a "name". For example, in Haskell, you can use an infix operator like + as if it were a name using parentheses. Where you wanted to write operation(3, 5, +) in Python, Haskell allows operation 3 5 (+).
There's no technical reason why something similar couldn't be added to Python, but there's also no compelling design reason to add it. The operator module is sufficient and "fits" better with the language design as a whole.

Operators tell the interpreter which underlying method to call on the objects provided, so they are more like functions, which are still objects in a sense; you just need the appropriate reference to call type on. For instance, say you have Foo.some_method and you want to look up its type. You need the proper reference: type(Foo.some_method) instead of just type(some_method); the first returns <class 'function'>, the latter raises a NameError.
That said, you can certainly implement something like this without the operator module:
def operation(operand1, operand2, operator):
    return getattr(operand1, operator)(operand2)

operation(1, 2, '__add__')
# 3
That said, the easiest way to understand your issue is that operators are part of the syntax Python uses to interpret your code, not actual objects. So when the interpreter sees *, +, ~, etc., it expects two operands, fetches the aliased method, and executes it. The method itself is an object. The syntax, not so much.

Here is an example of an excellent question.
Technically speaking, to answer this question properly one could use the lexical analyser (tokenizer) approach, in terms of token categories, setting aside encoding declarations, comments and blank lines.
Besides operators, the following are also not objects: i) NEWLINE, INDENT and DEDENT, ii) keywords, and iii) delimiters. Everything else is an object in Python.
So next time they tell you "Everything in Python is an object" you should answer:
"Everything in a logical line that is not NEWLINE, INDENT, DEDENT, a space character, an operator, a keyword or a delimiter is an object in Python."
An attempt at the rationale: when Python's designers devised the language, they figured that treating these token categories as objects would be pretty much useless in any situation that could not otherwise be solved by other means.
Cheers.

You can think of an operator as a kind of syntactic sugar. For example, 3 + 4 is just syntactic sugar for (3).__add__(4). And type(int.__add__) is a perfectly valid expression, but type(+) raises a SyntaxError.

You say:
# The following line is invalid python code and should only describe the function
return operand1 <operator> operand2
That's not true; that code isn't invalid. And with a fitting operator, it does work as intended:
def operation(operand1, operand2, operator):
    """
    This function returns the operation of two operands defined by the operator as parameter
    """
    return operand1 <operator> operand2

class Subtraction:
    def __gt__(self, operand):
        try:
            return self.operand - operand
        except AttributeError:
            # first call: 13 < Subtraction() falls back to the reflected
            # __gt__, which stashes the left operand and returns self
            self.operand = operand
            return self

print(operation(13, 8, Subtraction()))
Prints 5 as expected.

Related

Is "Changing the behavior of an operator like + so it works with a programmer-defined type," a good definition of operator overloading?

In "Think Python: How to Think Like a Computer Scientist", the author defines operator overloading as:
Changing the behavior of an operator like + so it works with a
programmer-defined type.
Is this an accurate definition of it (in programming in general, and in Python specifically)? Isn't it rather: "The ability to use the same operator for different operations"? For example, in Python, we can use + to add two numbers, or to concatenate two sequences. Isn't this operator overloading? Isn't the + operator overloaded here?
Does the author mean by "the behavior of an operator" raising a TypeError because it's not implemented for the given class? Because the operator still has its behavior with other types (e.g. strings).
Is the definition the author wrote, a specific type of operator overloading?
The definition given in "How to think..." is correct.
It isn't specific for Python, C++ has the same concept.
The programmer can give an operator a new meaning, e.g. adding two vectors with a + instead of two scalar numbers.
The mere fact that an operator can be used on multiple datatypes natively doesn't have a specific name. In almost any language + can be used to add integers or floats. There's no special word for this, and many programmers aren't even aware of the difference.
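As an illustration of the book's definition, here is a minimal programmer-defined type (the class name is my choice) that gives + a new meaning, component-wise vector addition:

```python
class Vector2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # + on two Vector2 instances now means component-wise addition
        return Vector2(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return f"Vector2({self.x}, {self.y})"

v = Vector2(1, 2) + Vector2(3, 4)
print(v)  # Vector2(4, 6)
```

This is the sense in which the author says the operator's behavior is "changed": + previously raised a TypeError for this class, and now it does something meaningful.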

Does this qualify as polymorphism?

I'm trying to get a grasp on the concept of polymorphism and I kind of understand what it is in the context of OOP, but I've read that it can also apply to procedural programming and I don't really understand how.
I know that there's a bunch of methods and operators in Python, for example, which already use polymorphism, but what function that I write would qualify as polymorphic? For example, would the function what_type below be considered polymorphic, because by definition it has different behaviour based on the data type of the parameter it takes in, or is this just a regular example of a function with conditional statements in it?
def what_type(input_var):
    'Returns a statement of the data type of the input variable'
    # If the input variable is a string
    if type(input_var) is str:
        this_type = "This variable is a string!"
    # If it's an integer
    elif type(input_var) is int:
        this_type = "This variable is an integer!"
    # If it's a float
    elif type(input_var) is float:
        this_type = "This variable is a float!"
    # If it's any other type
    else:
        this_type = "I don't know this type!"
    return this_type
Perhaps it could be considered a rather uninteresting example of ad-hoc polymorphism, but it isn't the sort of thing most people have in mind when they talk about polymorphism.
In addition to the OOP-style polymorphism which you seem to be somewhat comfortable with, Python has an interesting protocol-based polymorphism. For example, iterators are objects which implement a __next__() method. You can write polymorphic functions which work with any iterator. For example, if you wanted to wrap two iterators into one which alternates between the two you could do something like this:
def alternate(source1, source2):
    for x, y in zip(source1, source2):
        yield x
        yield y
This could be applied to strings:
>>> for item in alternate("cat","dog"): print(item)
c
d
a
o
t
g
but it could be equally applied to e.g. two files that are open for reading.
The point of such polymorphic code is you really don't care what type of iterators are passed to it. As long as they are iterators, the code will work as expected.
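To show that without touching the filesystem, here is the same generator applied to io.StringIO objects, which iterate line by line just like files opened for reading (alternate is repeated so the snippet is self-contained):

```python
import io

def alternate(source1, source2):
    for x, y in zip(source1, source2):
        yield x
        yield y

# io.StringIO iterates over lines, just like an open text file
log_a = io.StringIO("a1\na2\n")
log_b = io.StringIO("b1\nb2\n")
print(list(alternate(log_a, log_b)))  # ['a1\n', 'b1\n', 'a2\n', 'b2\n']
```

The function body never mentions strings, files, or StringIO; it only relies on the iterator protocol, which is exactly the polymorphism being described.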
what_type is checking the type of the argument and deciding how to handle it by itself.
It's not polymorphism, nor OOP.
In OOP, what_type should not concern itself with the concrete type of the argument; the argument itself should carry the type-specific behavior.
So, what_type should be written like:
def what_type(input_var):
    input_var.print_whats_myself()
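A minimal sketch of that object-oriented version; the class names and the method's return value are illustrative, not from the original answer:

```python
class MyString:
    def whats_myself(self):
        return "This variable is a string!"

class MyInteger:
    def whats_myself(self):
        return "This variable is an integer!"

def what_type(input_var):
    # what_type no longer inspects types; each object answers for itself
    return input_var.whats_myself()

print(what_type(MyString()))   # This variable is a string!
print(what_type(MyInteger()))  # This variable is an integer!
```

Adding a new type now means writing a new class, not editing the if/elif chain inside what_type.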

Python custom character/symbol/expression map to methods/functions [duplicate]

I would like to define my own operator. Does python support such a thing?
While technically you cannot define new operators in Python, this clever hack works around the limitation. It allows you to define infix operators like this:
# simple multiplication
x = Infix(lambda x, y: x * y)
print(2 |x| 4)
# => 8

# class checking
isa = Infix(lambda x, y: x.__class__ == y.__class__)
print([1, 2, 3] |isa| [])
print([1, 2, 3] <<isa>> [])
# => True
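The answer refers to the hack without showing it. Here is a sketch of how such an Infix wrapper can be implemented (adapted from the well-known recipe; the exact details here are my reconstruction):

```python
class Infix:
    """Wraps a two-argument function so it can be used as a pseudo-infix
    operator via | ... | (or << ... >>), by overloading those operators."""
    def __init__(self, func):
        self.func = func

    def __ror__(self, left):           # handles: left | self
        return Infix(lambda right: self.func(left, right))

    def __or__(self, right):           # handles: (partially applied) | right
        return self.func(right)

    # the same trick with the shift operators: left << self >> right
    __rlshift__ = __ror__
    __rshift__ = __or__

x = Infix(lambda a, b: a * b)
print(2 |x| 4)    # 8
print(2 <<x>> 4)  # 8
```

2 |x| 4 parses as (2 | x) | 4: the first | partially applies the function to 2, the second supplies 4 and calls it.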
No, Python comes with a predefined, yet overridable, set of operators.
No, you can't create new operators. However, if you are just evaluating expressions, you could process the string yourself and calculate the results of the new operators.
Sage provides this functionality, essentially using the "clever hack" described by #Ayman Hourieh, but incorporated into a module as a decorator to give a cleaner appearance and additional functionality – you can choose the operator to overload and therefore the order of evaluation.
from sage.misc.decorators import infix_operator

@infix_operator('multiply')
def dot(a, b):
    return a.dot_product(b)

u = vector([1, 2, 3])
v = vector([5, 4, 3])
print(u *dot* v)
# => 22

@infix_operator('or')
def plus(x, y):
    return x + y

print(2 |plus| 4)
# => 6
See the Sage documentation and this enhancement tracking ticket for more information.
Python 3.5 introduces the symbol @ as an extra operator.
PEP 465 introduced this new operator for matrix multiplication, to simplify the notation of much numerical code. The operator is not implemented for all types, but just for array-like objects.
You can support the operator for your classes/objects by implementing __matmul__().
The PEP leaves room for different usages of the operator for non-array-like objects.
Of course you can implement with @ any sort of operation other than matrix multiplication, even for array-like objects, but the user experience will suffer, because everybody will expect your data type to behave in the established way.
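A minimal sketch of supporting @ via __matmul__() (the class name Vec and the dot-product semantics are my choices, in the spirit the PEP expects):

```python
class Vec:
    def __init__(self, data):
        self.data = data

    def __matmul__(self, other):
        # @ computes the dot product of two vectors (requires Python 3.5+)
        return sum(a * b for a, b in zip(self.data, other.data))

print(Vec([1, 2, 3]) @ Vec([4, 5, 6]))  # 32
```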
If you intend to apply the operation on a particular class of objects, you could just override the operator that matches your function the closest... for instance, overriding __eq__() will override the == operator to return whatever you want. This works for almost all the operators.
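For example, a small sketch of overriding __eq__() so that == does something custom (the class and its semantics are invented for illustration):

```python
class CaseInsensitive:
    def __init__(self, text):
        self.text = text

    def __eq__(self, other):
        # == now compares while ignoring case
        return self.text.lower() == str(other).lower()

print(CaseInsensitive("Hello") == "HELLO")  # True
```

As the answer says, the same pattern works for almost all operators via their corresponding special methods.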

Difference between operators and methods

Is there any substantial difference between operators and methods?
The only difference I see is the way the are called, do they have other differences?
For example in Python concatenation, slicing, indexing are defined as operators, while (referring to strings) upper(), replace(), strip() and so on are methods.
If I understand the question correctly...
In a nutshell, everything is a method of an object. You can find the "expression operator" methods among Python's magic class methods.
So why does Python have "sexy" things like [x:y], [x], +, -? Because these are common to most developers, even to people unfamiliar with development, so math symbols like + and - catch the human eye, and the reader knows what happens. It is similar with indexing: it is common syntax in many languages.
But there is no special way to express the upper, replace, or strip methods, so there are no "expression operators" for them.
So, as for what is different between "expression operators" and methods, I'd say just the way they look.
Your question is rather broad. For your examples, concatenation, slicing, and indexing are defined on strings and lists using special syntax (e.g., []). But other types may do things differently.
In fact, the behavior of most (I think all) of the operators is controlled by magic methods, so really when you write something like x + y a method is called under the hood.
From a practical perspective, one of the main differences is that the set of available syntactic operators is fixed and new ones cannot be added by your Python code. You can't write your own code to define a new operator called $ and then have x $ y work. On the other hand, you can define as many methods as you want. This means that you should choose carefully what behavior (if any) you assign to operators; since there are only a limited number of operators, you want to be sure that you don't "waste" them on uncommon operations.
Is there any substantial difference between operators and
methods?
Practically speaking, there is no difference because each operator is mapped to a specific Python special method. Moreover, whenever Python encounters the use of an operator, it calls its associated special method implicitly. For example:
1 + 2
implicitly calls int.__add__, which makes the above expression equivalent1 to:
(1).__add__(2)
Below is a demonstration:
>>> class Foo:
... def __add__(self, other):
... print("Foo.__add__ was called")
... return other + 10
...
>>> f = Foo()
>>> f + 1
Foo.__add__ was called
11
>>> f.__add__(1)
Foo.__add__ was called
11
>>>
Of course, actually using (1).__add__(2) in place of 1 + 2 would be inefficient (and ugly!) because it involves an unnecessary name lookup with the . operator.
That said, I do not see a problem with generally regarding the operator symbols (+, -, *, etc.) as simply shorthands for their associated method names (__add__, __sub__, __mul__, etc.). After all, they each end up doing the same thing by calling the same method.
1Well, roughly equivalent. As documented here, there is a set of special methods prefixed with the letter r that handle reflected operands. For example, the following expression:
A + B
may actually be equivalent to:
B.__radd__(A)
if A does not implement __add__ but B implements __radd__.
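A small demonstration of the reflected case, with illustrative class names:

```python
class A:
    pass  # no __add__ defined

class B:
    def __radd__(self, other):
        # called when the left operand's __add__ is missing
        # or returns NotImplemented
        return "B.__radd__ handled it"

print(A() + B())  # B.__radd__ handled it
```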

override builtin "and"

Can the primitive and be overridden?
For instance, if I try to do something like this:
class foo():
    def __and__(self, other):
        return "bar"
this is the output:
>>> foo() and 4
4
>>> foo().__and__(4)
'bar'
My intuition is that the built-in and cannot be overridden, and shouldn't be, to avoid confusion.
I guess the and operator shouldn't be changed because it doesn't need to be, since it behaves like the following pseudocode (and is a keyword, so this is not literally valid as a function name):
def and_(self, other):
    if not bool(self):
        return self
    else:
        return other
__and__ overrides the bitwise operator &, not the logical operator.
To give your class boolean logic handling, define a __bool__ method (called __nonzero__ in Python 2): http://docs.python.org/reference/datamodel.html#object.__nonzero__
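A sketch (the class is my invention) showing that and itself is untouched; it merely consults the object's truth value:

```python
class Box:
    def __init__(self, items):
        self.items = items

    def __bool__(self):  # __nonzero__ in Python 2
        return len(self.items) > 0

empty = Box([])
full = Box([1, 2])

# `and` returns the first operand if it is falsy, otherwise the second
print(empty and 4)  # prints the (falsy) empty Box instance itself
print(full and 4)   # 4
```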
The and operator cannot be overridden, since it doesn't always evaluate both of its operands. There is no way to provide a function that can do what it does, arguments to functions are always fully evaluated before the function is invoked. The same goes for or.
Boolean and and or can't be overridden in Python. It is true that they have special behavior (short-circuiting, such that both operands are not always evaluated) and this has something to do with why they are not overrideable. However, this is not (IMHO) the whole story.
In fact, Python could provide a facility that supports short-circuiting while still allowing the and and or operators to be overridden; it simply does not do so. This may be a deliberate choice to keep things simple, or it may be because there are other things the Python developers think would be more useful to developers and are working on them instead.
However, it is a choice on the part of Python's designers, not a law of nature.
