Class objects have a __bases__ (and a __base__) attribute:
>>> class Foo(object):
...     pass
...
>>> Foo.__bases__
(<class 'object'>,)
Sadly, these attributes aren't accessible in the class body, which would be very convenient for accessing parent class attributes without having to hard-code the name:
class Foo:
    cls_attr = 3

class Bar(Foo):
    cls_attr = __base__.cls_attr + 2
    # throws NameError: name '__base__' is not defined
Is there a reason why __bases__ and __base__ can't be accessed in the class body?
(To be clear, I'm asking if this is a conscious design decision. I'm not asking about the implementation; I know that __bases__ is a descriptor in type and that this descriptor can't be accessed until a class object has been created. I want to know why python doesn't create __bases__ as a local variable in the class body.)
I want to know why python doesn't create __bases__ as a local variable in the class body
As you know, the class statement is mostly a shortcut for a type() call - when the runtime hits a class statement, it executes all statements at the top level of the class body, collects all resulting bindings in a dedicated namespace dict, calls the concrete metaclass (type by default) with the class name, the base classes and the namespace dict, and binds the resulting class object to the class name in the enclosing scope (usually but not necessarily the module's top-level namespace).
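A rough sketch of that equivalence (the class and attribute names here are made up for illustration):

class Base:
    attr = 1

# This class statement...
class Foo(Base):
    attr = 2

# ...boils down to executing the body in a fresh namespace dict
# and handing everything to the metaclass (plain `type` here):
namespace = {'attr': 2}
FooEquivalent = type('Foo', (Base,), namespace)

assert FooEquivalent.attr == 2
assert FooEquivalent.__bases__ == (Base,)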
The important point here is that it's the metaclass's responsibility to build the class object, and to allow for customisation of class object creation, the metaclass must be free to do whatever it wants with its arguments. Most often a custom metaclass will mainly work on the attrs dict, but it must also be able to mess with the bases argument. Now since the metaclass is only invoked AFTER the class body statements have been executed, there's no way the runtime can reliably expose the bases in the class body scope, since those bases could be modified afterward by the metaclass.
There are also some more philosophical considerations here, notably with regard to explicit vs implicit, and as shx2 mentions, Python designers try to avoid magic variables popping out of the blue. There are indeed a couple of implementation variables (__module__ and, in py3, __qualname__) that are "automagically" defined in the class body namespace, but those are just names, mostly intended as additional debugging / inspection information for developers, and they have absolutely no impact on the class object's creation nor on its properties, behaviour and whatnot.
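You can check which names are (and aren't) pre-bound in the class body namespace; a quick experiment, assuming CPython 3:

class Demo:
    # __module__ (and, in py3, __qualname__) are bound in the class
    # body namespace by the compiler before the body runs...
    print(__module__, __qualname__)   # e.g. "__main__ Demo"

    # ...whereas __bases__ / __base__ simply are not:
    try:
        __bases__
    except NameError as exc:
        print(exc)   # name '__bases__' is not defined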
As always with Python, you have to consider the whole context (the execution model, the object model, how the different parts work together etc) to really understand the design choices. Whether you agree with the whole design and philosophy is another debate (and one that doesn't belong here), but you can be sure that yes, those choices are "conscious design decisions".
I am not answering as to why it was decided to be implemented the way it was, I'm answering why it wasn't implemented as a "local variable in the class body":
Simply because nothing in Python is ever magically defined as a local variable in the class body. Python doesn't like names magically appearing out of nowhere.
It's because it simply is not yet created.
Consider the following:
>>> class baselessmeta(type):
...     def __new__(metaclass, class_name, bases, classdict):
...         return type.__new__(
...             metaclass,
...             class_name,
...             (),  # I can just ignore all the bases
...             {}
...         )
...
>>> class Baseless(int, metaclass=baselessmeta):
...     # imaginary print(__bases__, __base__)
...     ...
...
>>> Baseless.__bases__
(<class 'object'>,)
>>> Baseless.__base__
<class 'object'>
>>>
What should the imaginary print result in?
Every Python class is created via the type metaclass one way or another.
You pass int in the bases argument for type(), yet you do not know what the return value is going to be: the metaclass may use it directly as a base, or it may return a class with entirely different bases, as in the code above.
Just realized your "to be clear" part and now my answer is useless, haha. Oh welp.
Related
I've been learning about metaclasses, and I was wondering if it's possible to add a new attribute to every class that's defined in python, not just those which inherit explicitly from a custom metaclass.
I can add a new attribute explicitly using a custom metaclass like this
class NewAttrMeta(type):
    def __new__(cls, name, bases, attrs):
        attrs['new_attr'] = 'new_thing'
        return super().__new__(cls, name, bases, attrs)

class A(metaclass=NewAttrMeta):
    ...
print(A.new_attr)
$ 'new_thing'
But is it possible to force a change like this on every class that's defined, not just the ones which explicitly inherit from your custom metaclass?
I thought maybe as all classes are of type type, if I overwrote type itself with my custom metaclass, then all new classes might then inherit from it. And as metaclasses are subclasses of type, then all classes defined that way would still be valid...
class NewAttrMeta(type):
    def __new__(cls, name, bases, attrs):
        attrs['new_attr'] = 'new_thing'
        return super().__new__(cls, name, bases, attrs)

type = NewAttrMeta
But this only works if type is passed in explicitly again:
class A(type):
    ...

print(A.new_attr)
$ 'new_thing'

class B():
    ...

print(B.new_attr)
$ AttributeError: type object 'B' has no attribute 'new_attr'
Why on earth am I trying to do this? I wanted to see if I could locally implement a version of the rejected PEP 472: "Support for indexing with keyword arguments" by overriding the __getitem__ method of every class which defined any version of __getitem__. I'm only doing this for fun, so I would be interested in any insights or alternative ways to do that (the hackier the better!).
Python does not allow modification of built-in types. That means that dictionaries, lists, and classes defined in extensions, like numpy.ndarray, are "frozen" from Python code.
Even if that were possible, changing the metaclass for all classes would not change classes already defined. So list, etc. would not be affected. You could arrange your program so that it "installs" your class creation hooks before importing any other modules with class definitions, though - so it could affect classes written in Python code. (Classes created in extensions are defined in C code and do not go through the metaclass class-creation process anyway.)
type is referenced as the metaclass for object, so even if you change type in the builtins - which is possible - that won't automatically be used as the metaclass for anything. It is used by default because it is what is returned by type(object).
All that said, it is possible to create something that would seek through all existing classes in a running Python program, and, whenever the class is defined in Python, to decorate a __getitem__ method if it exists, to accept keyword parameters.
But then:
The support for indexing with keyword arguments as proposed in PEP 472 requires changes to the parser and to the language specification - simply accepting keyword arguments in __getitem__ won't make a[b=1] work; it remains a syntax error. One would still have to write a.__getitem__(b=1).
An index name in __getitem__ is something very specific to a particular kind of object. There is no way it would make sense for any class designed without that in mind. If a is a list, what would a[fish='golden'] mean? And what if a is a dict?
All in all, you'd already have a very cool class if you came up with something for which it makes sense to have a name passed in the index - and then you could just have any method to retrieve it and use the regular parentheses notation for that, a.get(fish="gold"), or even, if you write the __call__ method, a(fish="gold") - see the sketch below.
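To make those points concrete, here's a hypothetical class (all names made up for the example) whose __getitem__ accepts keyword arguments; without parser support you still can't reach them through the bracket syntax, so you end up calling the dunder explicitly or falling back on __call__:

class KeywordIndexable:
    def __init__(self, data):
        self.data = data

    def __getitem__(self, key=None, **kwargs):
        # Accepting keyword arguments here is perfectly legal, but the
        # parser never generates such a call from bracket syntax.
        if kwargs:
            return {name: self.data[value] for name, value in kwargs.items()}
        return self.data[key]

    def __call__(self, **kwargs):
        # A plain call is the usual workaround for "named indexing".
        return self.__getitem__(**kwargs)


a = KeywordIndexable({'golden': 'a golden fish'})
print(a['golden'])                   # normal indexing works
print(a.__getitem__(fish='golden'))  # explicit dunder call works
print(a(fish='golden'))              # the __call__ workaround works
# a[fish='golden']                   # SyntaxError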
From other languages I am used to coding a class property and afterwards accessing it without having it in the constructor, like:
class MyClass:
    def __init__(self):
        self._value = 0

    @property
    def my_property(self):
        print('I got the value: ' + str(self._value))
In almost every example I worked through, the variable used by the property (self._value) was set in the constructor, like this:
class MyClass:
    def __init__(self, value=0):
        self._value = value
To me this makes no sense, since you want to set it in the property. Could anyone explain to me what the use of placing the value variable in the constructor is?
Python objects are not struct-based (like C++ or Java), they are dict-based (like JavaScript). This means that instance attributes are dynamic (you can add new attributes or delete existing ones at runtime), are not defined at the class level but at the instance level, and are defined quite simply by assigning to them. While they can technically be defined anywhere in the code (even outside the class), the convention (and good practice) is to define them (possibly with default values) in the initializer (the __init__ method - the real constructor is named __new__ but there are very few reasons to override it) to make clear which attributes an instance of a given class is supposed to have.
Note the use of the term "attribute" here - in Python, we don't talk about "member variables" or "member functions" but about "attributes" and "methods". Actually, since Python classes are objects too (instances of the type class or a subclass of it), they have attributes too, so we have instance attributes (which are per-instance) and class attributes (which belong to the class object itself and are shared amongst instances). A class attribute can be looked up on an instance, as long as it's not shadowed by an instance attribute of the same name.
Also, since Python functions are objects too (hint: in Python, everything - everything you can put on the RHS of an assignment, that is - is an object), there are no distinct namespaces for "data" attributes and "function" attributes, and Python's "methods" are actually functions defined on the class itself - IOW they are class attributes that happen to be instances of the function type. Since methods need to access the instance to be able to work on it, there's a special mechanism that allows "customizing" attribute access, so that a given object - if it implements the proper interface - can return something other than itself when it's looked up on an instance but resolved on the class. This mechanism is used by functions so they turn themselves into methods (callable objects that wrap the function and the instance together so you don't have to pass the instance to the function), but also, more generally, as the support for computed attributes.
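Here's a small sketch of that mechanism at work (the class and names are made up for the example; the key point is the __get__ call, which is the descriptor protocol):

class Greeter:
    def hello(self, name):
        return 'hello, ' + name


g = Greeter()

# The plain function lives in the class namespace...
print(type(Greeter.__dict__['hello']))        # <class 'function'>

# ...and attribute lookup on an instance invokes its __get__
# (the descriptor protocol), producing a bound method:
bound = Greeter.__dict__['hello'].__get__(g, Greeter)
print(bound('world'))                         # hello, world
print(g.hello('world'))                       # same thing, via normal lookup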
The property class is a generic implementation of computed attributes that wraps a getter function (and optionally a setter and a deleter) - so in Python "property" has a very specific meaning (the property class itself or an instance of it). Also, the @decorator syntax is nothing magical (and isn't specific to properties), it's just syntactic sugar, so given a "decorator" function:
def decorator(func):
    return something
this:
@decorator
def foo():
    # code here
    ...
is just a shortcut for:
def foo():
    # code here
    ...

foo = decorator(foo)
Here I defined decorator as a function, but just about any callable object (a "callable" object is an instance of a class that defines the __call__ magic method) can be used instead - and Python classes are callables (it's actually by calling a class that you instantiate it).
So back to your code:
# in py2, you want to inherit from `object` for
# descriptors and other fancy things to work.
# this is useless in py3 but doesn't break anything either...
class MyClass(object):

    # the `__init__` function will become an attribute
    # of the `MyClass` class object
    def __init__(self, value=0):
        # defines the instance attribute named `_value`
        # the leading underscore denotes an "implementation attribute"
        # - something that is not part of the class public interface
        # and should not be accessed externally (IOW a protected attribute)
        self._value = value

    # this first defines the `my_property` function, then
    # passes it to `property()`, and rebinds the `my_property` name
    # to the newly created `property` instance. The `my_property` function
    # will then become the property's getter (its `fget` instance attribute)
    # and will be called when the `my_property` name is resolved on a `MyClass` instance
    @property
    def my_property(self):
        print('I got the value: {}'.format(self._value))
        # let's at least return something
        return self._value
You may then want to inspect both the class and an instance of it:
>>> print(MyClass.__dict__)
{'__module__': 'oop', '__init__': <function MyClass.__init__ at 0x7f477fc4a158>, 'my_property': <property object at 0x7f477fc639a8>, '__dict__': <attribute '__dict__' of 'MyClass' objects>, '__weakref__': <attribute '__weakref__' of 'MyClass' objects>, '__doc__': None}
>>> print(MyClass.my_property)
<property object at 0x7f477fc639a8>
>>> print(MyClass.my_property.fget)
<function MyClass.my_property at 0x7f477fc4a1e0>
>>> m = MyClass(42)
>>> print(m.__dict__)
{'_value': 42}
>>> print(m.my_property)
I got the value: 42
42
>>>
As a conclusion: if you hope to do anything useful with a language, you have to learn that language - you cannot just expect it to work like the other languages you know. While some features are based on common concepts (e.g. functions, classes etc), they can actually be implemented in a totally different way (Python's object model has almost nothing in common with Java's), so just trying to write Java (or C or C++ etc) in Python will not work (just like trying to write Python in Java, FWIW).
NB: just for the sake of completeness: Python objects can actually be made "struct-based" by using __slots__ - but the aim here is not to prevent dynamically adding attributes (that's only a side effect) but to make instances of those classes "lighter" in size (which is useful when you know you're going to have thousands or more instances of them at a given time).
Because @property is not a decorator for a variable; it is a decorator for a function that allows the function to behave like a property. You still need to create the underlying variable used by a function decorated with @property:
The @property decorator turns the voltage() method into a “getter” for a read-only attribute with the same name, and it sets the docstring for voltage to “Get the current voltage.”
A property object has getter, setter, and deleter methods usable as decorators that create a copy of the property with the corresponding accessor function set to the decorated function. This is best explained with an example:
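The docs' example is along these lines (reproduced from memory, so treat it as a sketch):

class C:
    def __init__(self):
        self._x = None

    @property
    def x(self):
        """I'm the 'x' property."""
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x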
I'm guessing you're coming from a language like C++ or Java where it is common to make attributes private and then write explicit getters and setters for them? In Python there is no such thing as private other than by convention, and there is no need to write getters and setters for a variable that you only need to write and read as-is. @property and the corresponding setter decorators can be used if you want to add additional behaviour (e.g. logging access) or if you want to have pseudo-properties that you can access just like real ones, e.g. you might have a Circle class that is defined by its radius, but you could define a @property for the diameter so you can still write circle.diameter.
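A minimal sketch of that Circle idea (class and attribute names are just illustrative):

class Circle:
    def __init__(self, radius):
        self.radius = radius      # plain attribute, no getter/setter needed

    @property
    def diameter(self):
        # computed on the fly, but read like a plain attribute
        return self.radius * 2

    @diameter.setter
    def diameter(self, value):
        # optional: keep radius consistent when the diameter is set
        self.radius = value / 2


c = Circle(3)
print(c.diameter)   # 6
c.diameter = 10
print(c.radius)     # 5.0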
More specifically to your question: You want to have the property as an argument of the initializer if you want to set the property at the time when you create the object. You wouldn't want to create an empty object and then immediately fill it with properties as that would create a lot of noise and make the code less readable.
Just an aside: __init__ isn't actually a constructor. The constructor for Python objects is __new__, and you almost never override it.
What is the difference between class and instance variables in Python?
class Complex:
    a = 1
and
class Complex:
    def __init__(self):
        self.a = 1
Using the call: x = Complex().a in both cases assigns x to 1.
A more in-depth answer about __init__() and self will be appreciated.
When you write a class block, you create class attributes (or class variables). All the names you assign in the class block, including methods you define with def become class attributes.
After a class instance is created, anything with a reference to the instance can create instance attributes on it. Inside methods, the "current" instance is almost always bound to the name self, which is why you are thinking of these as "self variables". Usually in object-oriented design, the code attached to a class is supposed to have control over the attributes of instances of that class, so almost all instance attribute assignment is done inside methods, using the reference to the instance received in the self parameter of the method.
Class attributes are often compared to static variables (or methods) as found in languages like Java, C#, or C++. However, if you want to aim for deeper understanding I would avoid thinking of class attributes as "the same" as static variables. While they are often used for the same purposes, the underlying concept is quite different. More on this in the "advanced" section below the line.
An example!
class SomeClass:
    def __init__(self):
        self.foo = 'I am an instance attribute called foo'
        self.foo_list = []

    bar = 'I am a class attribute called bar'
    bar_list = []
After executing this block, there is a class SomeClass, with 3 class attributes: __init__, bar, and bar_list.
Then we'll create an instance:
instance = SomeClass()
When this happens, SomeClass's __init__ method is executed, receiving the new instance in its self parameter. This method creates two instance attributes: foo and foo_list. Then this instance is assigned into the instance variable, so it's bound to a thing with those two instance attributes: foo and foo_list.
But:
print instance.bar
gives:
I am a class attribute called bar
How did this happen? When we try to retrieve an attribute through the dot syntax, and the attribute doesn't exist, Python goes through a bunch of steps to try and fulfill your request anyway. The next thing it will try is to look at the class attributes of the class of your instance. In this case, it found an attribute bar in SomeClass, so it returned that.
That's also how method calls work by the way. When you call mylist.append(5), for example, mylist doesn't have an attribute named append. But the class of mylist does, and it's bound to a method object. That method object is returned by the mylist.append bit, and then the (5) bit calls the method with the argument 5.
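You can see the two steps separately (Python 3 print syntax here, unlike the py2-style session above):

mylist = []
bound = mylist.append    # lookup: not on the instance, found on the class,
                         # returned as a method bound to mylist
print(bound)             # <built-in method append of list object at 0x...>
bound(5)                 # the call step; same effect as mylist.append(5)
print(mylist)            # [5]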
The way this is useful is that all instances of SomeClass will have access to the same bar attribute. We could create a million instances, but we only need to store that one string in memory, because they can all find it.
But you have to be a bit careful. Have a look at the following operations:
sc1 = SomeClass()
sc1.foo_list.append(1)
sc1.bar_list.append(2)
sc2 = SomeClass()
sc2.foo_list.append(10)
sc2.bar_list.append(20)
print sc1.foo_list
print sc1.bar_list
print sc2.foo_list
print sc2.bar_list
What do you think this prints?
[1]
[2, 20]
[10]
[2, 20]
This is because each instance has its own copy of foo_list, so they were appended to separately. But all instances share access to the same bar_list. So when we did sc1.bar_list.append(2) it affected sc2, even though sc2 didn't exist yet! And likewise sc2.bar_list.append(20) affected the bar_list retrieved through sc1. This is often not what you want.
Advanced study follows. :)
To really grok Python, coming from traditional statically typed OO-languages like Java and C#, you have to learn to rethink classes a little bit.
In Java, a class isn't really a thing in its own right. When you write a class you're more declaring a bunch of things that all instances of that class have in common. At runtime, there's only instances (and static methods/variables, but those are really just global variables and functions in a namespace associated with a class, nothing to do with OO really). Classes are the way you write down in your source code what the instances will be like at runtime; they only "exist" in your source code, not in the running program.
In Python, a class is nothing special. It's an object just like anything else. So "class attributes" are in fact exactly the same thing as "instance attributes"; in reality there's just "attributes". The only reason for drawing a distinction is that we tend to use objects which are classes differently from objects which are not classes. The underlying machinery is all the same. This is why I say it would be a mistake to think of class attributes as static variables from other languages.
But the thing that really makes Python classes different from Java-style classes is that just like any other object each class is an instance of some class!
In Python, most classes are instances of a builtin class called type. It is this class that controls the common behaviour of classes, and makes all the OO stuff the way it does. The default OO way of having instances of classes that have their own attributes, and have common methods/attributes defined by their class, is just a protocol in Python. You can change most aspects of it if you want. If you've ever heard of using a metaclass, all that is is defining a class that is an instance of a different class than type.
The only really "special" thing about classes (aside from all the builtin machinery to make them work they way they do by default), is the class block syntax, to make it easier for you to create instances of type. This:
class Foo(BaseFoo):
    def __init__(self, foo):
        self.foo = foo

    z = 28
is roughly equivalent to the following:
def __init__(self, foo):
    self.foo = foo

classdict = {'__init__': __init__, 'z': 28}
Foo = type('Foo', (BaseFoo,), classdict)
And it will arrange for all the contents of classdict to become attributes of the object that gets created.
So then it becomes almost trivial to see that you can access a class attribute by Class.attribute just as easily as i = Class(); i.attribute. Both i and Class are objects, and objects have attributes. This also makes it easy to understand how you can modify a class after it's been created; just assign its attributes the same way you would with any other object!
In fact, instances have no particular special relationship with the class used to create them. The way Python knows which class to search for attributes that aren't found in the instance is by the hidden __class__ attribute. Which you can read to find out what class this is an instance of, just as with any other attribute: c = some_instance.__class__. Now you have a variable c bound to a class, even though it probably doesn't have the same name as the class. You can use this to access class attributes, or even call it to create more instances of it (even though you don't know what class it is!).
And you can even assign to i.__class__ to change what class it is an instance of! If you do this, nothing in particular happens immediately. It's not earth-shattering. All that it means is that when you look up attributes that don't exist in the instance, Python will go look at the new contents of __class__. Since that includes most methods, and methods usually expect the instance they're operating on to be in certain states, this usually results in errors if you do it at random, and it's very confusing, but it can be done. If you're very careful, the thing you store in __class__ doesn't even have to be a class object; all Python's going to do with it is look up attributes under certain circumstances, so all you need is an object that has the right kind of attributes (some caveats aside where Python does get picky about things being classes or instances of a particular class).
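A tiny illustration of both points - reading __class__ like any other attribute, and (carefully) reassigning it. This works here because both classes are plain, layout-compatible classes; it's a sketch, not a recommendation:

class A:
    def who(self):
        return 'A'

class B:
    def who(self):
        return 'B'

a = A()
c = a.__class__      # read the class through the instance
print(c is A)        # True
print(c().who())     # 'A' -- and you can call it to make more instances

a.__class__ = B      # swap the class out from under the instance
print(a.who())       # 'B' -- lookups now go through B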
That's probably enough for now. Hopefully (if you've even read this far) I haven't confused you too much. Python is neat when you learn how it works. :)
What you're calling an "instance" variable isn't actually an instance variable; it's a class variable. See the language reference about classes.
In your example, a appears to be an instance variable because it is immutable. Its nature as a class variable can be seen when you assign a mutable object:
>>> class Complex:
...     a = []
...
>>> b = Complex()
>>> c = Complex()
>>>
>>> # What do they look like?
>>> b.a
[]
>>> c.a
[]
>>>
>>> # Change b...
>>> b.a.append('Hello')
>>> b.a
['Hello']
>>> # What does c look like?
>>> c.a
['Hello']
If you used self, then it would be a true instance variable, and thus each instance would have its own unique a. An object's __init__ function is called when a new instance is created, and self is a reference to that instance.
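For contrast, here is the self-based version, where each instance gets its own list in __init__:

>>> class Complex:
...     def __init__(self):
...         self.a = []
...
>>> b = Complex()
>>> c = Complex()
>>> b.a.append('Hello')
>>> b.a
['Hello']
>>> c.a   # unaffected: each instance built its own list in __init__
[]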
Looking at the documentation of the super type in Python 3.5, it notes that super(…) is the same as super(__class__, «first argument to function»). To my surprise, I wrote a method that returned __class__ – and it actually worked:
>>> class c:
...     def meth(self): return __class__
...
>>> c().meth()
<class '__main__.c'>
Apparently, __class__ is a free variable assigned by the closure of the function:
>>> c.meth.__code__.co_freevars
('__class__',)
>>> c.meth.__closure__
(<cell at 0x7f6346a91048: type object at 0x55823b17f3a8>,)
I'd like to know under what circumstances that free variable is associated in the closure. I know that if I assign a function to a variable as part of creating a class it doesn't happen.
>>> def meth2(self): return __class__
...
>>> meth2.__code__.co_freevars
()
Even if I create a new class and as part of that creation assign some attribute to meth2, meth2 doesn't somehow magically gain a free variable that gets filled in.
That's unsurprising, because part of this appears to depend on the lexical state of the compiler at the time that the code is compiled.
I'd like to confirm that the conditions necessary for __class__ to be treated as a free variable are simply:
A reference to __class__ in the code block; and
The def containing the __class__ reference is lexically within a class declaration block.
I'd further like to understand what the conditions necessary for that variable getting filled in correctly are. It appears – at least from the Python 3.6 documentation – that something like type.__new__(…) is involved somehow. I haven't been able to understand for sure how type comes into play and how this all interacts with metaclasses that do not ultimately call type.__new__(…).
I'm particularly confused because I didn't think the class object existed yet at the time the namespace's __setattr__ method was used to assign the method function to its attribute name (as it exists on the ultimately-constructed class object). I know that this namespace object exists because it was either constructed implicitly by the use of the class statement, or explicitly by the metaclass's __prepare__ method - but as best I can tell, the metaclass constructs the class object that populates __class__ after the function object is set as a value within the class namespace.
In the docs for Python’s data model, § 3.3.3.6 – “Creating the class object” – you will find the following:
[The] class object is the one that will be referenced by the
zero-argument form of super(). __class__ is an implicit closure
reference created by the compiler if any methods in a class body refer
to either __class__ or super. This allows the zero argument form
of super() to correctly identify the class being defined based on
lexical scoping, while the class or instance that was used to make
the current call is identified based on the first argument passed to
the method.
…emphasis is mine. This confirms your two putative criteria for a __class__ closure happening: a “__class__” reference in the method def, which itself is defined inside a class statement.
But then, the next ¶ in “Creating the class object” goes on to say:
CPython implementation detail: In CPython 3.6 and later, the __class__ cell is passed to the metaclass as a __classcell__ entry
in the class namespace. If present, this must be propagated up to the
type.__new__ call in order for the class to be initialized
correctly. Failing to do so will result in a RuntimeError in Python
3.8.
… emphasis is theirs. This means that if you are employing a metaclass with a __new__ method – in order to dictate the terms by which classes so designated are created – for example:
class Meta(type):

    def __new__(metacls, name, bases, attributes, **kwargs):
        # Or whatever:
        if '__slots__' not in attributes:
            attributes['__slots__'] = tuple()
        # Call up, creating and returning the new class:
        return super().__new__(metacls, name,
                                        bases,
                                        attributes,
                                        **kwargs)
… that last super(…).__new__(…) call is effectively calling type.__new__(…). In real life, there might be some other ancestral “__new__(…)” methods that get called between here and there, if your metaclass inherits from other metaclasses (like, e.g. abc.ABCMeta). Effectively, though, inside your Meta.__new__(…) method, between the method entry point, the super(…).__new__(…) call, and return-ing the new class object, you can inspect or set the value of the eventual __class__ cell variable through attributes['__classcell__']†.
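For instance, a metaclass along those lines could peek at the cell before passing it up (a sketch with made-up names; the prints are only there to show when the entry exists):

class PeekingMeta(type):
    def __new__(metacls, name, bases, attributes, **kwargs):
        # Present only when some method in the class body refers to
        # __class__ or uses the zero-argument form of super():
        cell = attributes.get('__classcell__')
        print(name, '->', cell)
        # It must still be propagated up to type.__new__:
        return super().__new__(metacls, name, bases, attributes, **kwargs)


class WithSuper(metaclass=PeekingMeta):
    def method(self):
        return super().__repr__()    # forces the __class__ closure


class Without(metaclass=PeekingMeta):
    def method(self):
        return 'no closure here'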
Now as for whether this is at all useful: I don’t know. I have been programming in python for ten years; I totally use metaclasses‡, like, absolutely all the time (for better or for worse); and in the course of doing so I have never done any of the following things:
reassigned a __class__ attribute;
inspected the __class__ cell variable of anything; nor
messed around with this supposed __classcell__ namespace entry, in like any capacity
… Naturally, your programming experience will be different from mine, who knows what one does. It is not that any one of those aforementioned stratagems is de facto problematic, necessarily. But I am no stranger to bending Python's type systems and metaprogramming facilities to my whim, and these particular things have never presented themselves as particularly useful, especially once you are working within the general context of metaclasses and what they do.
By which I suppose I mean, tl;dr: you are on the cusp of figuring out the basics of metaclasses and what they can do – do press on and experiment, but do investigate the topic with depth as well as breadth. Indeed!
† – In reading through code examples of this sort, you’ll often find what my snippet here calls the attributes dictionary referred to as namespace or ns, or similar. It’s all the same stuff.
‡ – …and ABCs and mixins and class decorators and __init_subclass__(…) and the abuse of __mro_entries__(…) for personal gain; et cetera, ad nauseam
In Python, class variables can be accessed through an instance of that class:
>>> class A(object):
...     x = 4
...
>>> a = A()
>>> a.x
4
It's easy to show that a.x is really resolved to A.x, not copied to an instance during construction:
>>> A.x = 5
>>> a.x
5
Despite the fact that this behavior is well known and widely used, I couldn't find any definitive documentation covering it. The closest I could find in Python docs was the section on classes:
class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'
[snip]
... By definition, all attributes of a class that are function objects define corresponding methods of its instances. So in our example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not, since MyClass.i is not. ...
However, this part talks specifically about methods so it's probably not relevant to the general case.
My question is, is this documented? Can I rely on this behavior?
See the "Classes" and "Class instances" sections in the Python data model documentation:
A class has a namespace implemented by a dictionary object. Class
attribute references are translated to lookups in this dictionary,
e.g., C.x is translated to C.__dict__["x"] (although for new-style classes in particular there are a number of hooks which allow for other means of locating attributes).
...
A class instance is created by calling a class object (see above). A
class instance has a namespace implemented as a dictionary which is
the first place in which attribute references are searched. When an
attribute is not found there, and the instance’s class has an
attribute by that name, the search continues with the class
attributes.
Generally, this usage is fine, except for the special cases mentioned: "for new-style classes in particular there are a number of hooks which allow for other means of locating attributes".
Not only can you rely on this behavior, you constantly do.
Think about methods. A method is merely a function that has been made a class attribute. You then look it up on the instance.
>>> def foo(self, x):
...     print "foo:", self, x
...
>>> class C(object):
...     method = foo  # What a weird way to write this! But perhaps illustrative?
...
>>> C().method("hello")
foo: <__main__.C object at 0xadad50> hello
In the case of objects like functions, this isn't a plain lookup, but some magic occurs to pass self automatically. You may have used other objects that are meant to be stored as class attributes and looked up on the instance; properties are an example (check out the property builtin if you're not familiar with it.)
As okm notes, the way this works is described in the data model reference (including information about, and links to further details on, the magic that makes methods and properties work). The Data Model page is by far the most useful part of the Language Reference; among other things, it also documents almost all the __foo__ methods and names.