Is there any substantial difference between operators and methods?
The only difference I see is the way they are called; do they have other differences?
For example, in Python, concatenation, slicing, and indexing are defined as operators, while (for strings) upper(), replace(), strip(), and so on are methods.
If I understand the question correctly...
In a nutshell, everything is a method of an object. You can find the methods behind these "expression operators" among Python's magic (dunder) class methods; see also the operator module.
So why does Python have "sexy" things like [x:y], [x], +, and -? Because they are familiar to most developers, and even to people who are unfamiliar with programming: math notation like + and - catches the eye, and the reader immediately knows what happens. The same goes for indexing, which is common syntax in many languages.
But there is no widely recognized notation for upper, replace, or strip, so there are no expression operators for them.
So, as for the difference between "expression operators" and methods, I'd say it's just the way they look.
Your question is rather broad. For your examples, concatenation, slicing, and indexing are defined on strings and lists using special syntax (e.g., []). But other types may do things differently.
In fact, the behavior of most (I think all) of the operators is controlled by magic methods, so really when you write something like x + y, a method is called under the hood.
From a practical perspective, one of the main differences is that the set of available syntactic operators is fixed and new ones cannot be added by your Python code. You can't write your own code to define a new operator called $ and then have x $ y work. On the other hand, you can define as many methods as you want. This means that you should choose carefully what behavior (if any) you assign to operators; since there are only a limited number of operators, you want to be sure that you don't "waste" them on uncommon operations.
Is there any substantial difference between operators and methods?
Practically speaking, there is no difference because each operator is mapped to a specific Python special method. Moreover, whenever Python encounters the use of an operator, it calls its associated special method implicitly. For example:
1 + 2
implicitly calls int.__add__, which makes the above expression equivalent¹ to:
(1).__add__(2)
Below is a demonstration:
>>> class Foo:
...     def __add__(self, other):
...         print("Foo.__add__ was called")
...         return other + 10
...
>>> f = Foo()
>>> f + 1
Foo.__add__ was called
11
>>> f.__add__(1)
Foo.__add__ was called
11
>>>
Of course, actually using (1).__add__(2) in place of 1 + 2 would be inefficient (and ugly!) because it involves an unnecessary name lookup with the . operator.
That said, I do not see a problem with generally regarding the operator symbols (+, -, *, etc.) as simply shorthands for their associated method names (__add__, __sub__, __mul__, etc.). After all, they each end up doing the same thing by calling the same method.
¹ Well, roughly equivalent. As documented here, there is a set of special methods prefixed with the letter r that handle reflected operands. For example, the following expression:
A + B
may actually be equivalent to:
B.__radd__(A)
if A does not implement __add__ but B implements __radd__.
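As a quick demonstration of reflected operands (a minimal sketch; the Money class is made up for the example):

class Money:
    def __init__(self, amount):
        self.amount = amount
    def __radd__(self, other):
        # called for `other + self` once type(other).__add__ returns NotImplemented
        print("Money.__radd__ was called")
        return Money(other + self.amount)

m = 1 + Money(2)  # int.__add__ can't handle Money, so Python falls back to Money.__radd__
print(m.amount)   # 3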
Everything in Python is an object
We all know this sentence, and all Pythonistas (including me) love it. In that regard, it is interesting to look at operators. They seem not to be objects; e.g.
>>> type(*) # or /, +, -, < ...
returns SyntaxError: invalid syntax.
However, there might be situations where it would be useful to have them treated as objects. Consider for example a function like
def operation(operand1, operand2, operator):
    """
    This function returns the operation of two operands defined by the operator as parameter
    """
    # The following line is invalid python code and should only describe the function
    return operand1 <operator> operand2
So operation(1, 2, +) would return 3, operation(1, 2, *) would return 2, operation(1, 2, <) would return True, etc...
Why is this not implemented in python? Or is it and if, how?
Remark: I do know the operator module, which also wouldn't be applicable in the example function above. Also, I am aware that one could work around it, e.g. operations(operand1, operand2, '>'), and find the desired operation via the string representation of the corresponding operator. However, I am asking for the reason for the non-existence of operator objects that can be passed as parameters to functions like every other Python object.
Every value is an object. Operators are not values; they are syntax. However, they are implemented by functions, which are values. The operator module provides access to those functions.
Not at all applicable to Python, though suggestive, is that a language could provide additional syntax to convert an operator into a "name". For example, in Haskell, you can use an infix operator like + as if it were a name using parentheses. Where you wanted to write operation(3, 5, +) in Python, Haskell allows operation 3 5 (+).
There's no technical reason why something similar couldn't be added to Python, but there's also no compelling design reason to add it. The operator module is sufficient and "fits" better with the language design as a whole.
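For what it's worth, the closest Python spelling of the questioner's operation function simply passes the operator module's functions around as ordinary values (a minimal sketch):

import operator

def operation(operand1, operand2, op):
    # op is any two-argument callable, e.g. a function from the operator module
    return op(operand1, operand2)

print(operation(1, 2, operator.add))  # 3
print(operation(1, 2, operator.mul))  # 2
print(operation(1, 2, operator.lt))   # True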
Operators tell the interpreter which underlying method to invoke on the objects provided, so they are more like functions, which are still objects in a sense; you just need the appropriate reference to call type on. For instance, say you have Foo.some_method and you want to look up its type. You need the proper reference: type(Foo.some_method) instead of just type(some_method); the first returns <class 'function'>, the latter raises a NameError.
That said, you can certainly implement something like this without the operator module:
def operation(operand1, operand2, operator):
    return getattr(operand1, operator)(operand2)

operation(1, 2, '__add__')
# 3
That said, the easiest way to understand your issue is that operators are part of the syntax Python uses to interpret your code, not actual objects. So when the interpreter sees *, +, ~, etc., it fetches the aliased method of the operand(s) and executes it. The method itself is an object. The syntax, not so much.
This is an excellent question.
Technically speaking, to answer it properly one could take the lexical analyser (tokenizer) approach, in terms of token categories (setting aside encoding declarations, comments, and blank lines).
Besides operators, the following are not objects either: i) NEWLINE, INDENT, and DEDENT, ii) keywords, and iii) delimiters. Everything else in Python is an object.
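You can see these token categories directly with the standard library's tokenize module; a quick sketch:

import io
import tokenize

# tokenize a one-line program and print each token's category and text
src = "x = a + b\n"
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# NAME 'x', OP '=', NAME 'a', OP '+', NAME 'b', NEWLINE '\n', ENDMARKER ''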
So next time they tell you "Everything in Python is an object" you should answer:
"Everything in a logical line that is not NEWLINE, INDENT, DEDENT, Space bar Character, Operator, Keyword or Delimiter is an object in Python."
An attempt at the rationale: when Python's designers devised the language, they figured that these token categories were pretty much useless to treat as objects in any situation that could not otherwise be solved by other means.
Cheers.
You can think of an operator as a kind of syntactic sugar. For example, 3 + 4 is just sugar for int.__add__(3, 4). And type(int.__add__) returns a perfectly ordinary type, but type(+) raises a SyntaxError.
You say:
# The following line is invalid python code and should only describe the function
return operand1 <operator> operand2
That's not true, that code isn't invalid. And with a fitting operator, it does work as intended:
def operation(operand1, operand2, operator):
    """
    This function returns the operation of two operands defined by the operator as parameter
    """
    return operand1 <operator> operand2

class Subtraction:
    def __gt__(self, operand):
        try:
            return self.operand - operand
        except AttributeError:
            # first call: no stored operand yet, so remember it and
            # return self so the second comparison can finish the job
            self.operand = operand
            return self

print(operation(13, 8, Subtraction()))
Prints 5 as expected. The trick is that operand1 <operator> operand2 parses as the chained comparison operand1 < operator and operator > operand2. Since int doesn't know how to compare itself to Subtraction, the first comparison falls back to the reflected Subtraction.__gt__, which stashes 13 and returns self (which is truthy); the second comparison then calls __gt__ again and computes 13 - 8.
Can the primitive and be overridden?
For instance, if I try to do something like this
class foo():
    def __and__(self, other):
        return "bar"
this is what gets output:
>>> foo() and 4
4
>>> foo().__and__(4)
'bar'
My intuition is that the built-in and cannot be overridden, and shouldn't be overridden, to avoid confusion.
I guess the and operator doesn't need to be changed, since it behaves like
# illustrative only: `and` is a keyword, so a function can't really be named this
def and(self, other):
    if not bool(self):
        return self
    else:
        return other
__and__ overrides the bitwise operator &, not the logic operator.
To give your class boolean logic handling, define a __nonzero__ method (in Python 3 it is called __bool__): http://docs.python.org/reference/datamodel.html#object.__nonzero__
The and operator cannot be overridden, since it doesn't always evaluate both of its operands. There is no way to provide a function that can do what it does, arguments to functions are always fully evaluated before the function is invoked. The same goes for or.
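A quick illustration of the eager-evaluation point (my_and is a hypothetical stand-in, not a real built-in):

def my_and(a, b):
    # by the time this runs, both a and b have already been evaluated
    return a if not a else b

x = []
print(x and x[0])      # short-circuits: prints [] without ever touching x[0]
try:
    my_and(x, x[0])    # x[0] is evaluated eagerly, before the call
except IndexError:
    print("eager evaluation raised IndexError")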
Boolean and and or can't be overridden in Python. It is true that they have special behavior (short-circuiting, such that both operands are not always evaluated) and this has something to do with why they are not overrideable. However, this is not (IMHO) the whole story.
In fact, Python could provide a facility that supports short-circuiting while still allowing the and and or operators to be overridden; it simply does not do so. This may be a deliberate choice to keep things simple, or it may be because there are other things the Python developers think would be more useful to developers and are working on them instead.
However, it is a choice on the part of Python's designers, not a law of nature.
I'm fairly new to actual programming languages, and Python is my first one. I know my way around Linux a bit, enough to get a summer job with it (I'm still in high school), and on the job, I have a lot of free time which I'm using to learn Python.
One thing's been getting me though. What exactly is different in Python when you have expressions such as
x.__add__(y) <==> x+y
x.__getattribute__('foo') <==> x.foo
I know what methods are and I get what they do, but my question is: how are those double-underscore methods above different from their simpler-looking equivalents?
P.S., I don't mind being lectured on programming history, in fact, I find it very useful to know :) If these are mainly historical aspects of Python, feel free to start rambling.
Here is the creator of Python explaining it:
... rather than devising a new syntax for special kinds of class methods (such as initializers and destructors), I decided that these features could be handled by simply requiring the user to implement methods with special names such as __init__, __del__, and so forth. This naming convention was taken from C where identifiers starting with underscores are reserved by the compiler and often have special meaning (e.g., macros such as __FILE__ in the C preprocessor).

...

I also used this technique to allow user classes to redefine the behavior of Python's operators. As previously noted, Python is implemented in C and uses tables of function pointers to implement various capabilities of built-in objects (e.g., "get attribute", "add" and "call"). To allow these capabilities to be defined in user-defined classes, I mapped the various function pointers to special method names such as __getattr__, __add__, and __call__. There is a direct correspondence between these names and the tables of function pointers one has to define when implementing new Python objects in C.
When you start a method with two underscores (and no trailing underscores), Python's name mangling rules are applied. This is a way to loosely simulate the private keyword from other OO languages such as C++ and Java. (Even so, the method is still technically not private in the way that Java and C++ methods are private, but it is "harder to get at" from outside the instance.)
Methods with two leading and two trailing underscores are considered to be "built-in" methods, that is, they're used by the interpreter and are generally the concrete implementations of overloaded operators or other built-in functionality.
Well, power for the programmer is good, so there should be a way to customize behaviour. Like operator overloading (__add__, __div__, __ge__, ...), attribute access (__getattribute__, __getattr__ (those two are different), __delattr__, ...) etc. In many cases, like operators, the usual syntax maps 1:1 to the respective method. In other cases, there is a special procedure which at some point involves calling the respective method - for example, __getattr__ is only called when normal attribute lookup (which goes through __getattribute__) fails with AttributeError. And some of them are really advanced topics that get you deep into the object system's guts and are rarely needed. So there's no need to learn them all; just consult the reference when you need/want to know. Speaking of the reference, here it is.
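To illustrate the __getattr__ fallback, a minimal sketch (the Proxy class is made up for the example):

class Proxy:
    def __getattr__(self, name):
        # only reached when normal attribute lookup fails
        return f"fallback for {name!r}"

p = Proxy()
p.x = 1
print(p.x)  # 1 -- found by normal lookup, __getattr__ never called
print(p.y)  # "fallback for 'y'" -- normal lookup failed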
They are names that the Python interpreter looks up and calls in specific situations.
E.g., the __add__ method allows the + operator to work for custom classes. Otherwise you would get a TypeError when attempting to add.
From an historical perspective, leading underscores have often been used as a method for indicating to the programmer that the names are to be considered internal to the package/module/library that defines them. In languages which do not provide good support for private namespaces, using underscores is a convention to emulate that. In Python, when you define a method named '__foo__', the maintenance programmer knows from the name that something special is going on which is not happening with a method named 'foo'. If Python had chosen to use 'add' as the internal method to overload '+', then you could never have a class with a method 'add' without causing much confusion. The underscores serve as a cue that some magic will happen.
A number of other questions are now marked as duplicates of this question, and at least two of them ask what either the __spam__ methods are called, or what the convention is called, and none of the existing answers cover that, so:
There actually is no official name for either.
Many developers unofficially call them "dunder methods", for "Double UNDERscore".
Some people use the term "magic methods", but that's somewhat ambiguous between meaning dunder methods, special methods (see below), or something somewhere between the two.
There is an official term "special attributes", which overlaps closely but not completely with dunder methods. The Data Model chapter in the reference never quite explains what a special attribute is, but the basic idea is that it's at least one of the following:
An attribute that's provided by the interpreter itself or its builtin code, like __name__ on a function.
An attribute that's part of a protocol implemented by the interpreter itself, like __add__ for the + operator, or __getitem__ for indexing and slicing.
An attribute that the interpreter is allowed to look up specially, by ignoring the instance and going right to the class, like __add__ again.
Most special attributes are methods, but not all (e.g., __name__ isn't). And most use the "dunder" convention, but not all (e.g., the next method on iterators in Python 2.x).
And meanwhile, most dunder methods are special attributes, but not all—in particular, it's not that uncommon for stdlib or external libraries to want to define their own protocols that work the same way, like the pickle protocol.
[Speculation] Python was influenced by Algol68; Guido possibly used Algol68 at the University of Amsterdam, where Algol68 has a similar "stropping regime" called "quote stropping". In Algol68 the operators, types, and keywords can appear in a different typeface (usually **bold**, or __underlined__); in source-code files this typeface is achieved with quotes, e.g. 'abs' (quoting similar to quoting in 'wikitext').
Algol68 ⇒ Python (Operators mapped to member functions)
'and' ⇒ __and__
'or' ⇒ __or__
'not' ⇒ not
'entier' ⇒ __trunc__
'shl' ⇒ __lshift__
'shr' ⇒ __rshift__
'upb' ⇒ __sizeof__
'long' ⇒ __long__
'int' ⇒ __int__
'real' ⇒ __float__
'format' ⇒ __format__
'repr' ⇒ __repr__
'abs' ⇒ __abs__
'minus' ⇒ __neg__
'minus' ⇒ __sub__
'plus' ⇒ __add__
'times' ⇒ __mul__
'mod' ⇒ __mod__
'div' ⇒ __truediv__
'over' ⇒ __div__
'up' ⇒ __pow__
'im' ⇒ imag
're' ⇒ real
'conj' ⇒ conjugate
In Algol68 these were referred to as bold names, e.g. abs, but "under-under-abs" __abs__ in Python.
My 2 cents: ¢ So sometimes — like a ghost — when you cut and paste python classes into a wiki you will magically reincarnate Algol68's bold keywords. ¢
I'm a bit surprised by Python's extensive use of 'magic methods'.
For example, in order for a class to declare that instances have a "length", it implements a __len__ method, which is called when you write len(obj). Why not just define a len method which is called directly as a member of the object, e.g. obj.len()?
See also: Why does Python code use len() function instead of a length method?
AFAIK, len is special in this respect and has historical roots.
Here's a quote from the FAQ:
Why does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))?

The major reason is history. Functions were used for those operations that were generic for a group of types and which were intended to work even for objects that didn't have methods at all (e.g. tuples). It is also convenient to have a function that can readily be applied to an amorphous collection of objects when you use the functional features of Python (map(), apply() et al).

In fact, implementing len(), max(), min() as a built-in function is actually less code than implementing them as methods for each type. One can quibble about individual cases but it's a part of Python, and it's too late to make such fundamental changes now. The functions have to remain to avoid massive code breakage.
The other "magical methods" (actually called special method in the Python folklore) make lots of sense, and similar functionality exists in other languages. They're mostly used for code that gets called implicitly when special syntax is used.
For example:
overloaded operators (exist in C++ and others)
constructor/destructor
hooks for accessing attributes
tools for metaprogramming
and so on...
From the Zen of Python:
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
This is one of the reasons: with custom methods, developers would be free to choose a different method name, like getLength(), length(), getlength() or whatever. Python enforces strict naming so that the common function len() can be used.
All operations that are common for many types of objects are put into magic methods, like __nonzero__, __len__ or __repr__. They are mostly optional, though.
Operator overloading is also done with magic methods (e.g. __le__), so it makes sense to use them for other common operations, too.
Python uses the term "magic methods" because those methods really perform magic for your program. One of the biggest advantages of using Python's magic methods is that they provide a simple way to make objects behave like built-in types. That means you can avoid ugly, counter-intuitive, and nonstandard ways of performing basic operations.
Consider the following example:
dict1 = {1 : "ABC"}
dict2 = {2 : "EFG"}
dict1 + dict2
Traceback (most recent call last):
File "python", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dict' and 'dict'
This gives an error, because the dictionary type doesn't support addition. Now, let's extend the dictionary class and add the __add__ magic method:
class AddableDict(dict):
    def __add__(self, otherObj):
        self.update(otherObj)
        return AddableDict(self)

dict1 = AddableDict({1 : "ABC"})
dict2 = AddableDict({2 : "EFG"})
print(dict1 + dict2)
Now it gives the following output:
{1: 'ABC', 2: 'EFG'}
Thus, by adding this method, magic suddenly happens and the error you were getting earlier goes away.
I hope this makes things clearer. For more information, refer to:
A Guide to Python's Magic Methods (Rafe Kettler, 2012)
Some of these functions do more than a single method would be able to implement (without abstract methods on a superclass). For instance bool() acts kind of like this:
def bool(obj):
    # illustrative sketch; in Python 2, __nonzero__ plays the role of Python 3's __bool__
    if hasattr(obj, '__nonzero__'):
        return obj.__nonzero__() != 0
    elif hasattr(obj, '__len__'):
        if obj.__len__():
            return True
        else:
            return False
    return True
You can also be 100% sure that bool() will always return True or False; if you relied on a method you couldn't be entirely sure what you'd get back.
Some other functions that have relatively complicated implementations (more complicated than the underlying magic methods are likely to be) are iter() and cmp(), and all the attribute methods (getattr, setattr and delattr). Things like int also access magic methods when doing coercion (you can implement __int__), but do double duty as types. len(obj) is actually the one case where I don't believe it's ever different from obj.__len__().
They are not really "magic names". It's just the interface an object has to implement to provide a given service. In this sense, they are no more magic than any predefined interface definition you have to reimplement.
While the reason is mostly historic, there are some peculiarities in Python's len that make the use of a function instead of a method appropriate.
Some operations in Python are implemented as methods, for example list.index and dict.get, while others are implemented as callables and magic methods, for example str and iter and reversed. The two groups differ enough so the different approach is justified:
They are common.
str, int and friends are types. It makes more sense to call the constructor.
The implementation differs from the function call. For example, iter might call __getitem__ if __iter__ isn't available, and supports additional arguments that don't fit in a method call. For the same reason it.next() has been changed to next(it) in recent versions of Python - it makes more sense.
Some of these are close relatives of operators. There's syntax for calling __iter__ and __next__ - it's called the for loop. For consistency, a function is better. And it makes it better for certain optimisations.
Some of the functions are simply way too similar to the rest in some way - repr acts like str does. Having str(x) versus x.repr() would be confusing.
Some of them rarely use the actual implementation method, for example isinstance.
Some of them are actual operators, getattr(x, 'a') is another way of doing x.a and getattr shares many of the aforementioned qualities.
I personally call the first group method-like and the second group operator-like. It's not a very good distinction, but I hope it helps somehow.
Having said this, len doesn't exactly fit in the second group. It's closer to the operations in the first one, with the only difference that it's far more common than almost any of them. But the only thing it does is call __len__, and it's very close to L.index. However, there are some differences. For example, __len__ might be called in the implementation of other features, such as bool; if the method were called len, you might break bool(x) with a custom len method that does something completely different.
In short, you have a set of very common features that classes might implement that might be accessed through an operator, through a special function (that usually does more than the implementation, as an operator would), during object construction, and all of them share some common traits. All the rest is a method. And len is somewhat of an exception to that rule.
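A small sketch of that bool/__len__ interplay (the Bag class is made up for the example):

class Bag:
    def __init__(self, items):
        self._items = list(items)
    def __len__(self):
        return len(self._items)

b = Bag([])
print(len(b))   # 0, via Bag.__len__
print(bool(b))  # False: with no __bool__ defined, bool() falls back to __len__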
There is not a lot to add to the above two posts, but all the "magic" functions are not really magic at all. They are part of the __builtins__ module which is implicitly/automatically imported when the interpreter starts. I.e.:
from __builtins__ import *
happens every time before your program starts.
I always thought it would be more correct if Python only did this for the interactive shell, and required scripts to import the various parts from builtins that they needed. Also, different __main__ handling would probably be nice in scripts vs. the interactive shell. Anyway, check out all the functions, and see what it is like without them:
dir(__builtins__)
...
del __builtins__
Perhaps you have noticed that it is possible to use certain built-in functions (e.g. len(my_list_or_my_string)) and syntaxes (e.g. my_list_or_my_string[:3], my_fancy_dict['some_key']) on some native types such as list and dict. Maybe you have been curious as to why it is not possible (yet) to use these same syntaxes on some of the classes you have written.
Variables of native types (list, dict, int, str) have unique behaviours and respond to certain syntaxes because they have some special methods defined in their respective classes — these methods are called Magic Methods.
A few magic methods include: __len__, __gt__, __eq__, etc.
Read more here: https://tomisin.dev/blog/supercharging-python-classes-with-magic-methods
I am a Python newbie, and I am not sure why Python implemented len(obj), max(obj), and min(obj) as static-like functions (I am coming from Java) rather than obj.len(), obj.max(), and obj.min().
What are the advantages and disadvantages (other than the obvious inconsistency) of having len() etc. over the method calls?
Why did Guido choose this over method calls? (This could have been changed in Python 3 if needed, but it wasn't, so there must be good reasons... I hope.)
Thanks!!
The big advantage is that built-in functions (and operators) can apply extra logic when appropriate, beyond simply calling the special methods. For example, min can look at several arguments and apply the appropriate inequality checks, or it can accept a single iterable argument and proceed similarly; abs, when called on an object without a special method __abs__, could try comparing said object with 0 and using the object's change-sign method if needed (though it currently doesn't); and so forth.
So, for consistency, all operations with wide applicability must always go through built-ins and/or operators, and it's those built-ins' responsibility to look up and apply the appropriate special methods (on one or more of the arguments), use alternate logic where applicable, and so forth.
An example where this principle wasn't correctly applied (but the inconsistency was fixed in Python 3) is "step an iterator forward": in 2.5 and earlier, you needed to define and call the non-specially-named next method on the iterator. In 2.6 and later you can do it the right way: the iterator object defines the special method (next in 2.6, renamed __next__ in 3.x), and the new next built-in can call it and apply extra logic, for example to supply a default value (in 2.6 you can still do it the bad old way, for backwards compatibility, though in 3.* you can't any more).
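For instance, the default-value logic lives in the built-in, not in the special method:

it = iter([])
print(next(it, "empty"))  # the built-in supplies the fallback instead of letting StopIteration escape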
Another example: consider the expression x + y. In a traditional object-oriented language (able to dispatch only on the type of the leftmost argument -- like Python, Ruby, Java, C++, C#, &c) if x is of some built-in type and y is of your own fancy new type, you're sadly out of luck if the language insists on delegating all the logic to the method of type(x) that implements addition (assuming the language allows operator overloading;-).
In Python, the + operator (and similarly of course the builtin operator.add, if that's what you prefer) tries x's type's __add__, and if that one doesn't know what to do with y, then tries y's type's __radd__. So you can define your types that know how to add themselves to integers, floats, complex, etc etc, as well as ones that know how to add such built-in numeric types to themselves (i.e., you can code it so that x + y and y + x both work fine, when y is an instance of your fancy new type and x is an instance of some builtin numeric type).
"Generic functions" (as in PEAK) are a more elegant approach (allowing any overriding based on a combination of types, never with the crazy monomaniac focus on the leftmost arguments that OOP encourages!-), but (a) they were unfortunately not accepted for Python 3, and (b) they do of course require the generic function to be expressed as free-standing (it would be absolutely crazy to have to consider the function as "belonging" to any single type, where the whole POINT is that can be differently overridden/overloaded based on arbitrary combination of its several arguments' types!-). Anybody who's ever programmed in Common Lisp, Dylan, or PEAK, knows what I'm talking about;-).
So, free-standing functions and operators are just THE right, consistent way to go (even though the lack of generic functions, in bare-bones Python, does remove some fraction of the inherent elegance, it's still a reasonable mix of elegance and practicality!-).
It emphasizes the capabilities of an object, not its methods or type. Capabilities are declared by "helper" methods such as __iter__ and __len__, but they don't make up the interface. The interface is in the built-in functions, and beside this also in the built-in operators like + and [] for indexing and slicing.
Sometimes, it is not a one-to-one correspondence: for example, iter(obj) returns an iterator for an object, and will work even if __iter__ is not defined. If it is not defined, iter() goes on to look whether the object defines __getitem__, and if so returns an iterator that accesses the object index-wise (like an array).
This goes together with Python's Duck Typing, we care only about what we can do with an object, not that it is of a particular type.
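A minimal sketch of that fallback (the Squares class is made up for the example):

class Squares:
    # no __iter__: iter() falls back to repeated __getitem__ calls,
    # starting at index 0 and stopping at the first IndexError
    def __getitem__(self, index):
        if index >= 5:
            raise IndexError(index)
        return index * index

print(list(iter(Squares())))  # [0, 1, 4, 9, 16]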
Actually, those aren't "static" methods in the way you are thinking about them. They are built-in functions that really just alias to certain methods on python objects that implement them.
>>> class Foo(object):
...     def __len__(self):
...         return 42
...
>>> f = Foo()
>>> len(f)
42
These are always available to call, whether or not the object implements them. The point is consistency: instead of one class having a method called length() and another called size(), the convention is to implement __len__ and let callers always use the more readable len(obj) instead of obj.methodThatDoesSomethingCommon.
I thought the reason was so these basic operations could be done on iterators with the same interface as containers. However, it actually doesn't work with len:
def foo():
    for i in range(10):
        yield i

print len(foo())
... fails with TypeError. len() won't consume and count an iterator; it only works with objects that have a __len__ method.
So, as far as I'm concerned, len() shouldn't exist. It's much more natural to say obj.len than len(obj), and much more consistent with the rest of the language and the standard library. We don't say append(lst, 1); we say lst.append(1). Having a separate global method for length is an odd, inconsistent special case, and eats a very obvious name in the global namespace, which is a very bad habit of Python.
This is unrelated to duck typing; you can say getattr(obj, "len") to decide whether you can use len on an object just as easily--and much more consistently--than you can use getattr(obj, "__len__").
All that said, as language warts go--for those who consider this a wart--this is a very easy one to live with.
On the other hand, min and max do work on iterators, which gives them a use apart from any particular object. This is straightforward, so I'll just give an example:
import random

def foo():
    for i in range(10):
        yield random.randint(0, 100)

print max(foo())
However, there are no __min__ or __max__ methods to override its behavior, so there's no consistent way to provide efficient searching for sorted containers. If a container is sorted on the same key that you're searching, min/max are O(1) operations instead of O(n), and the only way to expose that is by a different, inconsistent method. (This could be fixed in the language relatively easily, of course.)
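To illustrate, a hypothetical sorted container (the SortedList class and its smallest method are made up for the example):

class SortedList:
    def __init__(self, items):
        self._items = sorted(items)
    def __iter__(self):
        return iter(self._items)
    def smallest(self):
        # O(1) because the items are kept sorted -- but since there is
        # no __min__ hook, the built-in min() below cannot take advantage
        return self._items[0]

s = SortedList([5, 2, 9])
print(min(s))        # 2, found by an O(n) scan over the iterator
print(s.smallest())  # 2, O(1), but spelled inconsistently with min()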
To follow up with another issue with this: it prevents use of Python's method binding. As a simple, contrived example, you can do this to supply a function to add values to a list:
def add(f):
    f(1)
    f(2)
    f(3)

lst = []
add(lst.append)
print lst
and this works with any bound method. You can't do that with min, max, or len, though, since they're not methods of the object they operate on. Instead, you have to resort to functools.partial, a clumsy second-class workaround common in other languages.
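For completeness, the functools.partial workaround looks like this:

from functools import partial

lst = [1, 2, 3]
get_len = partial(len, lst)  # pre-bind len's argument, since len isn't a bound method
print(get_len())             # 3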
Of course, this is an uncommon case; but it's the uncommon cases that tell us about a language's consistency.