I'm a bit surprised by Python's extensive use of 'magic methods'.
For example, in order for a class to declare that instances have a "length", it implements a __len__ method, which is called when you write len(obj). Why not just define a len method that is called directly as a member of the object, e.g. obj.len()?
See also: Why does Python code use len() function instead of a length method?
AFAIK, len is special in this respect and has historical roots.
Here's a quote from the FAQ:
Why does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))?
The major reason is history. Functions were used for those operations that were generic for a group of types and which were intended to work even for objects that didn't have methods at all (e.g. tuples). It is also convenient to have a function that can readily be applied to an amorphous collection of objects when you use the functional features of Python (map(), apply() et al).
In fact, implementing len(), max(), min() as a built-in function is actually less code than implementing them as methods for each type. One can quibble about individual cases but it's a part of Python, and it's too late to make such fundamental changes now. The functions have to remain to avoid massive code breakage.
The other "magic methods" (actually called special methods in Python parlance) make lots of sense, and similar functionality exists in other languages. They're mostly used for code that gets called implicitly when special syntax is used.
For example:
overloaded operators (exist in C++ and others)
constructor/destructor
hooks for accessing attributes
tools for metaprogramming
and so on...
From the Zen of Python:
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
This is one of the reasons - with custom methods, developers would be free to choose a different method name, like getLength(), length(), getlength() or whatever. Python enforces strict naming so that the common function len() can be used.
All operations that are common for many types of objects are put into magic methods, like __nonzero__, __len__ or __repr__. They are mostly optional, though.
Operator overloading is also done with magic methods (e.g. __le__), so it makes sense to use them for other common operations, too.
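For instance, here is a minimal sketch (the Version class and its fields are invented for illustration) of how defining __le__ makes the <= operator work on your own objects:

class Version:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def __le__(self, other):
        # Called implicitly when the <= operator is used
        return (self.major, self.minor) <= (other.major, other.minor)

print(Version(1, 4) <= Version(2, 0))   # True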
Python uses the term "magic methods" because those methods really do perform magic for your program. One of the biggest advantages of using Python's magic methods is that they provide a simple way to make objects behave like built-in types. That means you can avoid ugly, counter-intuitive, and nonstandard ways of performing basic operations.
Consider the following example:
dict1 = {1 : "ABC"}
dict2 = {2 : "EFG"}
dict1 + dict2
Traceback (most recent call last):
  File "python", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dict' and 'dict'
This gives an error because the dictionary type doesn't support addition. Now, let's extend the dictionary class and add the "__add__" magic method:
class AddableDict(dict):
    def __add__(self, otherObj):
        self.update(otherObj)
        return AddableDict(self)
dict1 = AddableDict({1 : "ABC"})
dict2 = AddableDict({2 : "EFG"})
print (dict1 + dict2)
Now it gives the following output:
{1: 'ABC', 2: 'EFG'}
Thus, by adding this method, the magic suddenly happens and the error you were getting earlier goes away.
I hope this makes things clearer. For more information, refer to:
A Guide to Python's Magic Methods (Rafe Kettler, 2012)
Some of these functions do more than a single method would be able to implement (without abstract methods on a superclass). For instance bool() acts kind of like this:
def bool(obj):
    if hasattr(obj, '__nonzero__'):
        return bool(obj.__nonzero__())
    elif hasattr(obj, '__len__'):
        if obj.__len__():
            return True
        else:
            return False
    return True
You can also be 100% sure that bool() will always return True or False; if you relied on a method you couldn't be entirely sure what you'd get back.
Some other functions that have relatively complicated implementations (more complicated than the underlying magic methods are likely to be) are iter() and cmp(), and all the attribute methods (getattr, setattr and delattr). Things like int also access magic methods when doing coercion (you can implement __int__), but do double duty as types. len(obj) is actually the one case where I don't believe it's ever different from obj.__len__().
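As a small illustration of that coercion hook (the Ratio class here is made up for the example):

class Ratio:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __int__(self):
        # int(obj) uses this hook when coercing
        return self.num // self.den

print(int(Ratio(7, 2)))   # 3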
They are not really "magic names". It's just the interface an object has to implement to provide a given service. In this sense, they are not more magic than any predefined interface definition you have to reimplement.
While the reason is mostly historic, there are some peculiarities in Python's len that make the use of a function instead of a method appropriate.
Some operations in Python are implemented as methods, for example list.index and dict.update, while others are implemented as callables and magic methods, for example str and iter and reversed. The two groups differ enough that the different approach is justified:
They are common.
str, int and friends are types. It makes more sense to call the constructor.
The implementation differs from the function call. For example, iter might call __getitem__ if __iter__ isn't available, and supports additional arguments that don't fit in a method call. For the same reason it.next() has been changed to next(it) in recent versions of Python - it makes more sense.
Some of these are close relatives of operators. There's syntax for calling __iter__ and __next__ - it's called the for loop. For consistency, a function is better. And it makes it better for certain optimisations.
Some of the functions are simply way too similar to the rest in some way - repr acts like str does. Having str(x) versus x.repr() would be confusing.
Some of them rarely use the actual implementation method, for example isinstance.
Some of them are actual operators, getattr(x, 'a') is another way of doing x.a and getattr shares many of the aforementioned qualities.
I personally call the first group method-like and the second group operator-like. It's not a very good distinction, but I hope it helps somehow.
Having said this, len doesn't exactly fit in the second group. It's closer to the operations in the first one, with the only difference that it's way more common than almost any of them. But the only thing it does is call __len__, and it's very close to L.index. However, there are some differences. For example, __len__ might be called as part of the implementation of other features, such as bool; if the method were called len, you might break bool(x) with a custom len method that does something completely different.
In short, you have a set of very common features that classes might implement that might be accessed through an operator, through a special function (that usually does more than the implementation, as an operator would), during object construction, and all of them share some common traits. All the rest is a method. And len is somewhat of an exception to that rule.
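To make that last point concrete, here is a small sketch (the Box class is invented for illustration) of one special method serving two built-ins at once:

class Box:
    def __init__(self, items):
        self.items = list(items)

    def __len__(self):
        # Feeds both len(box) and truth testing via bool(box)
        return len(self.items)

print(len(Box([])), bool(Box([])))     # 0 False
print(len(Box([1])), bool(Box([1])))   # 1 True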
There is not a lot to add to the above two posts, but all the "magic" functions are not really magic at all. They are part of the __builtins__ module which is implicitly/automatically imported when the interpreter starts. I.e.:
from __builtins__ import *
happens every time before your program starts.
I always thought it would be more correct if Python only did this for the interactive shell, and required scripts to import the various parts from builtins that they needed. Also, different __main__ handling in scripts vs. interactive sessions would probably be nice. Anyway, check out all the functions, and see what it is like without them:
dir(__builtins__)
...
del __builtins__
Perhaps you have noticed that it is possible to use certain built-in functions (e.g. len(my_list_or_my_string)) and syntaxes (e.g. my_list_or_my_string[:3], my_fancy_dict['some_key']) on some native types such as list and dict. Maybe you have been curious as to why it is not possible (yet) to use these same syntaxes on some of the classes you have written.
Variables of native types (list, dict, int, str) have unique behaviours and respond to certain syntaxes because they have some special methods defined in their respective classes — these methods are called Magic Methods.
A few magic methods include: __len__, __gt__, __eq__, etc.
Read more here: https://tomisin.dev/blog/supercharging-python-classes-with-magic-methods
Is saying:
if not callable(output.write):
    raise ValueError("Output class must have a write() method")
The same as saying:
if type(output.write) != types.MethodType:
    raise exceptions.ValueError("Output class must have a write() method")
I would rather not use the types module if I can avoid it.
No, they are not the same.
callable(output.write) just checks whether output.write is callable. Things that are callable include:
Bound method objects (whose type is types.MethodType).
Plain-old functions (whose type is types.FunctionType)
partial instances wrapping bound method objects (whose type is functools.partial)
Instances of your own custom callable class with a __call__ method that are designed to be indistinguishable from bound method objects (whose type is your class).
Instances of a subclass of the bound method type (whose type is that subclass).
…
type(output.write) == types.MethodType accepts only the first of these. Nothing else, not even subclasses of MethodType, will pass. (If you want to allow subclasses, use isinstance(output.write, types.MethodType).)
The former is almost certainly what you want. If I've monkeypatched an object to replace the write method with something that acts just like a write method when called, but isn't implemented as a bound method, why would your code want to reject my object?
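For example, here is a rough sketch of such a monkeypatched object (the Sink class is hypothetical) that the strict type check would wrongly reject:

import functools
import types

class Sink:
    def write(self, text):
        pass

out = Sink()
# Replace the bound method with a functools.partial: it behaves the same when called...
out.write = functools.partial(Sink.write, out)

print(callable(out.write))                   # True  -- the callable() check accepts it
print(type(out.write) == types.MethodType)   # False -- the strict type check rejects it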
As for your side question in the comments:
I do want to know if the exceptions.ValueError is necessary
No, it's not.
In Python 2.7, the builtin exceptions are also available in the exceptions module:
>>> ValueError is exceptions.ValueError
True
In Python 3, they were moved to builtins along with all the other builtins:
>>> ValueError is builtins.ValueError
True
But either way, the only reason you'd ever need to refer to its module is if you hid ValueError with a global of the same name in your own module.
One last thing:
As user2357112 points out in a comment, your check doesn't really ensure anything useful.
The most common problem is almost certainly going to be output.write not existing at all. In which case you're going to get an AttributeError rather than the ValueError you wanted. (If this is acceptable, you don't need to check anything—just call the method and you'll get an AttributeError if it doesn't exist, and a TypeError if it does but isn't callable.) You could solve that by using getattr(output, 'write', None) instead of output.write, because None is not callable.
The next most common problem is probably going to be output.write existing, and being callable, but with the wrong signature. Which means you'll still get the same TypeError you were trying to avoid when you try to call it. You could solve that by, e.g., using the inspect module.
But if you really want to do all of this, you should probably be factoring it all out into an ABC. ABCs only have built-in support for checking that abstract methods exist as attributes; it doesn't check whether they're callable, or callable with the right signature. But it's not that hard to extend that support. (Or, maybe better, just grabbing one of the interface/protocol modules off PyPI.) And I think something like isinstance(output, StringWriteable) would declare your intention a lot better than a bunch of lines involving getattr or hasattr, type checking, and inspect grubbing.
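A rough sketch of what such an ABC could look like (StringWriteable and its __subclasshook__ logic are my own naming here, not an existing API):

from abc import ABC, abstractmethod

class StringWriteable(ABC):
    @abstractmethod
    def write(self, text): ...

    @classmethod
    def __subclasshook__(cls, subclass):
        # Structural check: anything with a callable write attribute counts
        return callable(getattr(subclass, "write", None)) or NotImplemented

class FileLike:                       # note: does not inherit from the ABC
    def write(self, text):
        print(text)

print(isinstance(FileLike(), StringWriteable))   # True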
In Python 2, it was possible to convert arbitrary callables to methods of a class. Importantly, if the callable was a CPython built-in implemented in C, you could use this to make methods of user-defined classes that were C layer themselves, invoking no byte code when called.
This is occasionally useful if you're relying on the GIL to provide "lock-free" synchronization; since the GIL can only be swapped out between op codes, if all the steps in a particular part of your code can be pushed to C, you can make it behave atomically.
In Python 2, you could do something like this:
import types
from operator import attrgetter
class Foo(object):
    # ... This class maintains a member named length storing the length ...
    def __len__(self):
        return self.length  # We don't want this, because we're trying to push all work to C

# Instead, we explicitly make an unbound method that uses attrgetter to achieve the
# same result as __len__ above, but with no byte code invoked to satisfy it
Foo.__len__ = types.MethodType(attrgetter('length'), None, Foo)
In Python 3, there is no longer an unbound method type, and types.MethodType only takes two arguments and creates only bound methods (which is not useful for Python special methods like __len__, __hash__, etc., since special methods are often looked up directly on the type, not the instance).
Is there some way of accomplishing this in Py3 that I'm missing?
Things I've looked at:
functools.partialmethod (appears to not have a C implementation, so it fails the requirements, and between the Python implementation and being much more general purpose than I need, it's slow, taking about 5 us in my tests, vs. ~200-300 ns for direct Python definitions or attrgetter in Py2, a roughly 20x increase in overhead)
Trying to make attrgetter or the like follow the non-data descriptor protocol (not possible AFAICT, can't monkey-patch in a __get__ or the like)
Trying to find a way to subclass attrgetter to give it a __get__, but of course, the __get__ needs to be delegated to C layer somehow, and now we're back where we started
(Specific to attrgetter use case) Using __slots__ to make the member a descriptor in the first place, then trying to somehow convert from the resulting descriptor for the data into something that skips the final step of binding and acquiring the real value to something that makes it callable so the real value retrieval is deferred
I can't swear I didn't miss something for any of those options though. Anyone have any solutions? Total hackery is allowed; I recognize I'm doing pathological things here. Ideally it would be flexible (to let you make something that behaves like an unbound method out of a class, a Python built-in function like hex, len, etc., or any other callable object not defined at the Python layer). Importantly, it needs to attach to the class, not each instance (both to reduce per-instance overhead, and to work correctly for dunder special methods, which bypass instance lookup in most cases).
Found a (probably CPython only) solution to this recently. It's a little ugly, being a ctypes hack to directly invoke CPython APIs, but it works, and gets the desired performance:
import ctypes
from operator import attrgetter
make_instance_method = ctypes.pythonapi.PyInstanceMethod_New
make_instance_method.argtypes = (ctypes.py_object,)
make_instance_method.restype = ctypes.py_object
class Foo:
    # ... This class maintains a member named length storing the length ...
    # Defines a __len__ method that, at the C level, fetches self.length
    __len__ = make_instance_method(attrgetter('length'))
It's an improvement over the Python 2 version in one way: since it doesn't need the class to already exist before making an unbound method for it, you can define it in the class body by simple assignment (whereas the Python 2 version must explicitly reference Foo twice in Foo.__len__ = types.MethodType(attrgetter('length'), None, Foo), and only after class Foo has finished being defined).
On the other hand, it doesn't actually provide a performance benefit on CPython 3.7, AFAICT, at least not for the simple case here where it replaces def __len__(self): return self.length. In fact, for __len__ accessed via len(instance) on an instance of Foo, ipython %%timeit microbenchmarks show len(instance) is ~10% slower when __len__ is defined via __len__ = make_instance_method(attrgetter('length')). This is likely an artifact of attrgetter itself having slightly higher overhead, because CPython has not moved it to the "FastCall" protocol (called "Vectorcall" in 3.8, when it was made semi-public for provisional third-party use), while user-defined functions already benefit from it in 3.7; having to dynamically choose between dotted and undotted attribute lookup, and between single and multiple attribute lookup, on each call (something Vectorcall might avoid by choosing a __call__ implementation appropriate to the gets being performed at construction time) adds further overhead that the plain method avoids. It should win for more complicated cases (say, if the attribute to be retrieved is a nested attribute like self.contained.length), since attrgetter's overhead is largely fixed, while nested attribute lookup in Python means more byte code, but right now, it's not useful very often.
If they ever get around to optimizing operator.attrgetter for Vectorcall, I'll rebenchmark and update this answer.
So, I'm just beginning to learn Python (using Codecademy), and I'm a bit confused.
Why are there some methods that take an argument, and others use the dot notation?
len() takes an argument, but won't work with the dot notation:
>>> len("Help")
4
>>> "help".len()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'str' object has no attribute 'len'
And likewise:
>>> "help".upper()
'HELP'
>>> upper("help")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'upper' is not defined
The key word here is method. There is a slight difference between a function and a method.
Method
Is a function that is defined in the class of the given object. For example:
class Dog:
    def bark(self):
        print('Woof woof!')

rufus = Dog()
rufus.bark()  # called from the object
Function
A function is a globally defined procedure:
def bark():
    print('Woof woof!')
As for your question regarding the len function, the globally defined function calls the object's __len__ special method. So in this scenario, it is an issue of readability.
Otherwise, methods are better when they apply only to certain objects, while functions are better when they apply to many kinds of objects. For example, how would you uppercase a number? You wouldn't define that as a global function; you'd define it as a method only on the string class.
What you call "dot notation" are methods, and they only work for classes whose implementer has defined them. len is a built-in function that takes one argument and returns the size of that object. A class may implement a method called len if it wants to, but most don't. The built-in len function has a rule that says if a class has a method called __len__, it will use it, so this works:
>>> class C(object):
...     def __len__(self):
...         return 100
...
>>> len(C())
100
"help".upper is the opposite. The string class defines a method called upper, but that doesn't mean there has to be a function called upper also. It turns out that there is an upper function in the string module, but generally you don't have to implement an extra function just because you implemented a class method.
This is the difference between a function and a method. If you are only just learning the basics, maybe simply accept that this difference exists, and that you will eventually understand it.
Still here? It's not even hard, actually. In object-oriented programming, methods are preferred over functions for many things, because that means one type of object can override its version of the method without affecting the rest of the system.
For example, let's pretend you had a new kind of string where accented characters should lose their accent when you call .upper(). Instances of this type can subclass str and behave exactly the same in every other aspect, basically for free; all they need to redefine is the upper method (and even then, probably call the method of the base class and only change the logic when you handle an accented lowercase character). And software which expects to work on strings will just continue to work and not even know the difference if you pass in an object of this new type where a standard str is expected.
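As a rough sketch of that idea (the class name and the unicodedata-based accent stripping are my own choices here, not something from the standard library):

import unicodedata

class PlainUpperStr(str):
    def upper(self):
        # Decompose accented characters, drop the combining marks, then defer to str.upper
        decomposed = unicodedata.normalize("NFKD", self)
        stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
        return str.upper(stripped)

print(PlainUpperStr("café").upper())   # CAFE

Code that expects a plain str keeps working, because PlainUpperStr is still a str.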
A design principle in Python is that everything is an object. This means you can create your own replacements even for basic fundamental objects like object, class, and type, i.e. extend or override the basic language for your application or platform.
In fact, this happened in Python 2 when unicode strings were introduced to the language. A lot of application software continued to work exactly as before, but now with unicode instances where previously the code had been written to handle str instances. (This difference no longer exists in Python 3; or rather, the type which was called str and was used almost everywhere is now called bytes and is only used when you specifically want to handle data which is not text.)
Going back to our new upper method, think about the opposite case; if upper was just a function in the standard library, how would you even think about modifying software which needs upper to behave differently? What if tomorrow your boss wants you to do the same for lower? It would be a huge undertaking, and the changes you would have to make all over the code base would easily tend towards a spaghetti structure, as well as probably introduce subtle new bugs.
This is one of the cornerstones of object-oriented programming, but it probably only really makes sense once you learn the other two or three principles in a more structured introduction. For now, perhaps the quick and dirty summary is "methods make the implementation modular and extensible."
I know this may sound like a stupid question, especially to someone who knows python's nature, but I was just wondering, is there a way to know if an object "implements an interface" so as to say?
To give an example of what I want to say:
let's say I have this function:
def get_counts(sequence):
    counts = {}
    for x in sequence:
        if x in counts:
            counts[x] += 1
        else:
            counts[x] = 1
    return counts
My question is: Is there a way to make sure that the object passed to the function is iterable? I know that in Java or C# I could do this by having the method accept any object that implements a specific interface, let's say (for example) iIterable like this: void get_counts(iIterable sequence)
My guess is that in Python I would have to employ preemptive introspection checks (in a decorator, perhaps?) and throw a custom exception if the object doesn't have an __iter__ attribute. But is there a more Pythonic way to do this?
Use polymorphism and duck-typing before isinstance() or interfaces
You generally define what you want to do with your objects, then either use polymorphism to adjust how each object responds to what you want to do, or you use duck typing: test whether the object at hand can do the thing you want to do in the first place. This is the invocation-versus-introspection trade-off; conventional wisdom states that invocation is preferable to introspection, but in Python, duck typing is preferred over isinstance testing.
So you need to work out why you want to filter on whether or not something is iterable in the first place; why do you need to know this? Just use a try: iter(object) / except TypeError: block to test.
Or perhaps you just need to throw an exception if whatever that was passed was not an iterable, as that would signal an error.
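For the get_counts function from the question, a sketch of that might look like this (the error message is just an example):

def get_counts(sequence):
    try:
        iterator = iter(sequence)          # ask forgiveness: let iter() do the checking
    except TypeError:
        raise TypeError("get_counts() expects an iterable")
    counts = {}
    for x in iterator:
        counts[x] = counts.get(x, 0) + 1
    return counts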
ABCs
With duck typing, you may find that you have to test for multiple methods, and so an isinstance() test may look like a better option. In such cases, using an Abstract Base Class (ABC) could also be an option; an ABC lets you 'paint' several different classes as being the right type for a given operation, for example. Using an ABC lets you focus on the tasks that need to be performed rather than the specific implementations used; you can have a Paintable ABC, a Printable ABC, etc.
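A minimal sketch of 'painting' an existing class with a made-up Paintable ABC:

from abc import ABC

class Paintable(ABC):
    pass

class Circle:                              # an existing class, left untouched
    pass

Paintable.register(Circle)                 # declare Circle to be Paintable after the fact
print(isinstance(Circle(), Paintable))     # True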
Zope interfaces and component architecture
If you find your application is using an awful lot of ABCs or you keep having to add polymorphic methods to your classes to deal with various different situations, the next step is to consider using a full-blown component architecture, such as the Zope Component Architecture (ZCA).
zope.interface interfaces are ABCs on steroids, especially when combined with the ZCA adapters. Interfaces document expected behaviour of a class:
if IFrobnarIterable.providedBy(yourobject):
    # it'll support iteration and yield Frobnars.
but interfaces also let you look up adapters; instead of putting all the behaviours for every use of shapes in your classes, you implement adapters to provide polymorphic behaviours for specific use cases. You can adapt your objects to be printable, or iterable, or exportable to XML:
class FrobnarsXMLExport(object):
    adapts(IFrobnarIterable)
    provides(IXMLExport)

    def __init__(self, frobnariterator):
        self.frobnars = frobnariterator

    def export(self):
        entries = []
        for frobnar in self.frobnars:
            entries.append(
                u'<frobnar><width>{0}</width><height>{1}</height></frobnar>'.format(
                    frobnar.width, frobnar.height))
        return u''.join(entries)
and your code merely has to look up adapters for each shape:
for obj in setofobjects:
    self.result.append(IXMLExport(obj).export())
Python (since 2.6) has abstract base classes (also known as virtual interfaces), which are more flexible than Java or C# interfaces. To check whether an object is iterable, use collections.Iterable (collections.abc.Iterable in Python 3):
if isinstance(obj, collections.Iterable):
...
However, if your else block would just raise an exception, then the most Python answer is: don't check! It's up to your caller to pass in an appropriate type; you just need to document that you're expecting an iterable object.
The Pythonic way is to use duck typing and, "ask forgiveness, not permission". This usually means performing an operation in a try block assuming it behaves the way you expect it to, then handling other cases in an except block.
I think this is the way that the community would recommend you do it:
import sys

def do_something(foo):
    try:
        for i in foo:
            process(i)
    except:
        t, ex, tb = sys.exc_info()
        if "is not iterable" in ex.message:
            print "Is not iterable"

do_something(True)
Or, you could use something like zope.interface.
I am a Python newbie, and I am not sure why Python implemented len(obj), max(obj), and min(obj) as static-like functions (I am coming from Java) rather than obj.len(), obj.max(), and obj.min().
What are the advantages and disadvantages (other than the obvious inconsistency) of having len()... over the method calls?
Why did Guido choose this over the method calls? (This could have been changed in Python 3 if needed, but it wasn't, so there have to be good reasons... I hope.)
Thanks!
The big advantage is that built-in functions (and operators) can apply extra logic when appropriate, beyond simply calling the special methods. For example, min can look at several arguments and apply the appropriate inequality checks, or it can accept a single iterable argument and proceed similarly; abs, when called on an object without an __abs__ special method, could try comparing said object with 0 and using the object's change-sign method if needed (though it currently doesn't); and so forth.
So, for consistency, all operations with wide applicability must always go through built-ins and/or operators, and it's those built-ins' responsibility to look up and apply the appropriate special methods (on one or more of the arguments), use alternate logic where applicable, and so forth.
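A couple of quick illustrations of that extra logic in the built-in min (the default keyword needs Python 3.4+):

print(min(3, 1, 2))         # several arguments
print(min([3, 1, 2]))       # a single iterable argument -- same built-in, different dispatch
print(min([], default=0))   # extra behaviour no per-type __min__ method would provide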
An example where this principle wasn't correctly applied (but the inconsistency was fixed in Python 3) is "stepping an iterator forward": in 2.5 and earlier, you needed to define and call the non-specially-named next method on the iterator. In 2.6 and later you can do it the right way: the new next built-in calls the iterator's method (spelled __next__ in 3.x) and can apply extra logic, for example to supply a default value instead of raising StopIteration (in 2.6 you can still call it.next() the bad old way, for backwards compatibility, though in 3.* you can't any more).
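For example (Python 3 spelling):

it = iter([10, 20])
print(next(it))            # 10 -- the built-in dispatches to the iterator's __next__
print(next(iter([]), 0))   # 0  -- extra logic: a default value instead of StopIteration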
Another example: consider the expression x + y. In a traditional object-oriented language (able to dispatch only on the type of the leftmost argument -- like Python, Ruby, Java, C++, C#, &c) if x is of some built-in type and y is of your own fancy new type, you're sadly out of luck if the language insists on delegating all the logic to the method of type(x) that implements addition (assuming the language allows operator overloading;-).
In Python, the + operator (and similarly of course the builtin operator.add, if that's what you prefer) tries x's type's __add__, and if that one doesn't know what to do with y, then tries y's type's __radd__. So you can define your types that know how to add themselves to integers, floats, complex, etc etc, as well as ones that know how to add such built-in numeric types to themselves (i.e., you can code it so that x + y and y + x both work fine, when y is an instance of your fancy new type and x is an instance of some builtin numeric type).
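A small sketch of that double dispatch (the Meters class is invented for illustration):

class Meters:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):       # handles Meters(2) + 3
        if isinstance(other, (int, float)):
            return Meters(self.value + other)
        return NotImplemented

    def __radd__(self, other):      # handles 3 + Meters(2), after int.__add__ gives up
        return self.__add__(other)

    def __repr__(self):
        return "Meters(%r)" % self.value

print(Meters(2) + 3)   # Meters(5)
print(3 + Meters(2))   # Meters(5)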
"Generic functions" (as in PEAK) are a more elegant approach (allowing any overriding based on a combination of types, never with the crazy monomaniac focus on the leftmost arguments that OOP encourages!-), but (a) they were unfortunately not accepted for Python 3, and (b) they do of course require the generic function to be expressed as free-standing (it would be absolutely crazy to have to consider the function as "belonging" to any single type, where the whole POINT is that can be differently overridden/overloaded based on arbitrary combination of its several arguments' types!-). Anybody who's ever programmed in Common Lisp, Dylan, or PEAK, knows what I'm talking about;-).
So, free-standing functions and operators are just THE right, consistent way to go (even though the lack of generic functions, in bare-bones Python, does remove some fraction of the inherent elegance, it's still a reasonable mix of elegance and practicality!-).
It emphasizes the capabilities of an object, not its methods or type. Capabilities are declared by "helper" functions such as __iter__ and __len__, but they don't make up the interface. The interface is in the built-in functions, and besides those, also in the built-in operators like + and [] for indexing and slicing.
Sometimes it is not a one-to-one correspondence: for example, iter(obj) returns an iterator for an object, and will work even if __iter__ is not defined. If it's not defined, iter goes on to check whether the object defines __getitem__, and if so returns an iterator that accesses the object index-wise (like an array).
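A quick sketch of that fallback (the Squares class is made up for the example):

class Squares:
    # No __iter__ defined: iter() falls back to the old __getitem__ protocol
    def __getitem__(self, index):
        if index >= 5:
            raise IndexError           # signals the fallback iterator to stop
        return index * index

print(list(iter(Squares())))           # [0, 1, 4, 9, 16]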
This goes together with Python's duck typing: we care only about what we can do with an object, not whether it is of a particular type.
Actually, those aren't "static" methods in the way you are thinking about them. They are built-in functions that really just alias to certain methods on Python objects that implement them.
>>> class Foo(object):
...     def __len__(self):
...         return 42
...
>>> f = Foo()
>>> len(f)
42
These built-ins are always available to call, whether or not a given object implements the corresponding special method. The point is to have some consistency. Instead of one class having a method called length() and another called size(), the convention is to implement __len__ and let callers always access it via the more readable len(obj) instead of obj.methodThatDoesSomethingCommon().
I thought the reason was so these basic operations could be done on iterators with the same interface as containers. However, it actually doesn't work with len:
def foo():
    for i in range(10):
        yield i

print len(foo())
... fails with TypeError. len() won't consume and count an iterator; it only works with objects that have a __len__ method.
So, as far as I'm concerned, len() shouldn't exist. It's much more natural to say obj.len than len(obj), and much more consistent with the rest of the language and the standard library. We don't say append(lst, 1); we say lst.append(1). Having a separate global function for length is an odd, inconsistent special case, and it eats a very obvious name in the global namespace, which is a very bad habit of Python.
This is unrelated to duck typing; you can say getattr(obj, "len") to decide whether you can use len on an object just as easily--and much more consistently--than you can use getattr(obj, "__len__").
All that said, as language warts go--for those who consider this a wart--this is a very easy one to live with.
On the other hand, min and max do work on iterators, which gives them a use apart from any particular object. This is straightforward, so I'll just give an example:
import random

def foo():
    for i in range(10):
        yield random.randint(0, 100)

print max(foo())
However, there are no __min__ or __max__ methods to override their behavior, so there's no consistent way to provide efficient searching for sorted containers. If a container is sorted on the same key you're searching by, min/max are O(1) operations instead of O(n), and the only way to expose that is with a different, inconsistent method. (This could be fixed in the language relatively easily, of course.)
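For example, a sketch of what such a container might look like (SortedBag and its method names are invented here):

import bisect

class SortedBag:
    def __init__(self, items=()):
        self._items = sorted(items)

    def add(self, item):
        bisect.insort(self._items, item)

    def __iter__(self):
        return iter(self._items)

    def smallest(self):    # O(1), but callers must know this non-standard name exists
        return self._items[0]

    def largest(self):     # O(1)
        return self._items[-1]

bag = SortedBag([7, 3, 9])
print(min(bag), bag.smallest())   # both print 3, but min() walks the whole bag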
To follow up with another issue with this: it prevents use of Python's method binding. As a simple, contrived example, you can do this to supply a function to add values to a list:
def add(f):
    f(1)
    f(2)
    f(3)

lst = []
add(lst.append)
print lst
and this works on all member functions. You can't do that with min, max or len, though, since they're not methods of the object they operate on. Instead, you have to resort to functools.partial, a clumsy second-class workaround common in other languages.
Of course, this is an uncommon case; but it's the uncommon cases that tell us about a language's consistency.