Was just thinking about Python's dict "function" and starting to realize that dict isn't really a function at all. For example, if we do dir(dict), we get all sorts of methods that aren't included in the usual namespace of a user-defined function. Extending that thought, it's similar for dir(list) and dir(len). They aren't functions, but really types. But then I'm confused about the documentation page, http://docs.python.org/2/library/functions.html, which clearly says functions. (I guess it should really just say builtin callables.)
So what gives? (It's starting to seem that the distinction between classes and functions is trivial.)
It's a callable, as classes in general are. Calling dict() effectively calls the dict constructor. It is like when you define your own class (C, say) and you call C() to instantiate it.
One way that dict is special, compared to, say, sum, is that though both are callable, and both are implemented in C (in CPython, anyway), dict is a type; that is, isinstance(dict, type) == True. This means that you can use dict as the base class for other types; you can write:
class MyDictSubclass(dict):
    pass
but not
class MySumSubclass(sum):
    pass
This can be useful to make classes that behave almost like a builtin object, but with some enhancements. For instance, you can define a subclass of tuple that implements + as vector addition instead of concatenation:
class Vector(tuple):
    def __add__(self, other):
        return Vector(x + y for x, y in zip(self, other))
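A quick usage sketch of that subclass (repeating its definition so the snippet is self-contained):

```python
class Vector(tuple):
    def __add__(self, other):
        # Element-wise addition instead of tuple concatenation
        return Vector(x + y for x, y in zip(self, other))

v = Vector((1, 2)) + Vector((3, 4))
print(v)  # (4, 6)
assert isinstance(v, Vector)  # the result keeps the Vector type
```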
Which brings up another interesting point. type is also implemented in C. It's also callable. Like dict (and unlike sum) it's an instance of type; isinstance(type, type) == True. Because of this weird, seemingly impossible cycle, type can be used to make new classes of classes (called metaclasses). You can write:
class MyTypeSubclass(type):
    pass

class MyClass(object):
    __metaclass__ = MyTypeSubclass
or, in Python 3:
class MyClass(metaclass=MyTypeSubclass):
    pass
Which gives the interesting result that isinstance(MyClass, MyTypeSubclass) == True. How this is useful is a bit beyond the scope of this answer, though.
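A minimal Python 3 sketch of that relationship:

```python
class MyTypeSubclass(type):
    pass

class MyClass(metaclass=MyTypeSubclass):
    pass

# The class itself is an instance of the metaclass (and hence of type):
assert isinstance(MyClass, MyTypeSubclass)
assert isinstance(MyClass, type)
# Instances of MyClass are ordinary objects, not types:
assert not isinstance(MyClass(), type)
```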
dict() is a constructor for a dict instance. When you do dir(dict) you're looking at the attributes of class dict. When you write a = dict() you're setting a to a new instance of type dict.
I'm assuming here that dict() is what you're referring to as the "dict function". Or are you calling an indexed access on a dict instance, e.g. a['my_key'], a function?
Note that calling dir on the constructor dict.__init__
dir(dict.__init__)
gives you what you would expect, including the same stuff as you'd get for any other function. Since a call to the dict() constructor results in a call to dict.__init__(instance), that explains where those function attributes went. (Of course there's a little extra behind-the-scenes work in any constructor, but that's the same for dicts as for any object.)
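These distinctions are easy to check interactively; a small sketch:

```python
# dict is a type (a class), while sum is a plain builtin function:
assert isinstance(dict, type)
assert not isinstance(sum, type)
# ...but both are callables:
assert callable(dict) and callable(sum)

# dict.__init__ carries the usual function-like attributes:
assert '__call__' in dir(dict.__init__)
```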
Related: What is the purpose of the `self` parameter? Why is it needed?
When defining a method on a class in Python, it looks something like this:
class MyClass(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
But in some other languages, such as C#, you have a reference to the object that the method is bound to with the "this" keyword without declaring it as an argument in the method prototype.
Was this an intentional language design decision in Python or are there some implementation details that require the passing of "self" as an argument?
I like to quote Tim Peters' Zen of Python: "Explicit is better than implicit."
In Java and C++, 'this.' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't.
Python elects to make things like this explicit rather than based on a rule.
Additionally, since nothing is implied or assumed, parts of the implementation are exposed. self.__class__, self.__dict__ and other "internal" structures are available in an obvious way.
It's to minimize the difference between methods and functions. It allows you to easily generate methods in metaclasses, or add methods at runtime to pre-existing classes.
e.g.
>>> class C:
...     def foo(self):
...         print("Hi!")
...
>>>
>>> def bar(self):
...     print("Bork bork bork!")
...
>>>
>>> c = C()
>>> C.bar = bar
>>> c.bar()
Bork bork bork!
>>> c.foo()
Hi!
>>>
It also (as far as I know) makes the implementation of the Python runtime easier.
I suggest that one should read Guido van Rossum's blog on this topic - Why explicit self has to stay.
When a method definition is decorated, we don't know whether to automatically give it a 'self' parameter or not: the decorator could turn the function into a static method (which has no 'self'), or a class method (which has a funny kind of self that refers to a class instead of an instance), or it could do something completely different (it's trivial to write a decorator that implements '@classmethod' or '@staticmethod' in pure Python). There's no way, without knowing what the decorator does, to decide whether to endow the method being defined with an implicit 'self' argument or not.
I reject hacks like special-casing '@classmethod' and '@staticmethod'.
Python doesn't force you to use "self". You can give it whatever name you want; you just have to remember that the first argument in a method definition header is a reference to the object.
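For example (using this instead of self, which works but is frowned upon):

```python
class Greeter(object):
    def greet(this, name):
        # "this" plays the role normally given to "self"
        return "Hello, {}!".format(name)

assert Greeter().greet("world") == "Hello, world!"
```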
It also allows you to do things like this (in short, invoking Outer(3).create_inner_class(4)().weird_sum_with_closure_scope(5) will return 12, but will do so in the craziest of ways):
class Outer(object):
    def __init__(self, outer_num):
        self.outer_num = outer_num

    def create_inner_class(outer_self, inner_arg):
        class Inner(object):
            # A distinct attribute name is needed here: "inner_arg = inner_arg"
            # would raise NameError, because a name assigned in a class body is
            # not looked up in the enclosing function scope.
            inner_arg_value = inner_arg

            def weird_sum_with_closure_scope(inner_self, num):
                return num + outer_self.outer_num + inner_arg
        return Inner

print(Outer(3).create_inner_class(4)().weird_sum_with_closure_scope(5))  # 12
Of course, this is harder to imagine in languages like Java and C#. By making the self reference explicit, you're free to refer to any object by that self reference. Also, such a way of playing with classes at runtime is harder to do in the more static languages - not that it's necessarily good or bad. It's just that the explicit self allows all this craziness to exist.
Moreover, imagine this: We'd like to customize the behavior of methods (for profiling, or some crazy black magic). This can lead us to think: what if we had a class Method whose behavior we could override or control?
Well here it is:
from functools import partial

class MagicMethod(object):
    """Does black magic when called"""
    def __get__(self, obj, obj_type):
        # This binds the <other> class instance to the <innocent_self> parameter
        # of the method MagicMethod.invoke
        return partial(self.invoke, obj)

    def invoke(magic_self, innocent_self, *args, **kwargs):
        # do black magic here
        ...
        print(magic_self, innocent_self, args, kwargs)

class InnocentClass(object):
    magic_method = MagicMethod()
And now InnocentClass().magic_method() will act as expected. The method will be bound with the innocent_self parameter to the InnocentClass instance, and with magic_self to the MagicMethod instance. Weird, huh? It's like having two keywords, this1 and this2, in languages like Java and C#. Magic like this allows frameworks to do stuff that would otherwise be much more verbose.
Again, I don't want to comment on the ethics of this stuff. I just wanted to show things that would be harder to do without an explicit self reference.
I think it has to do with PEP 227:
Names in class scope are not accessible. Names are resolved in the
innermost enclosing function scope. If a class definition occurs in a
chain of nested scopes, the resolution process skips class
definitions. This rule prevents odd interactions between class
attributes and local variable access. If a name binding operation
occurs in a class definition, it creates an attribute on the resulting
class object. To access this variable in a method, or in a function
nested within a method, an attribute reference must be used, either
via self or via the class name.
I think the real reason, besides "The Zen of Python", is that functions are first-class citizens in Python.
That essentially makes them objects. Now the fundamental issue is: if your functions are objects as well, then, in the object-oriented paradigm, how would you send messages to objects when the messages themselves are objects?
It looks like a chicken-and-egg problem. To reduce this paradox, the only possible way is to either pass a context of execution to methods or detect it. But since Python can have nested functions, it would be impossible to detect, as the context of execution would change for inner functions.
This means the only possible solution is to explicitly pass 'self' (the context of execution).
So I believe it is an implementation problem; the Zen came much later.
As explained in self in Python, Demystified
anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit). This is the reason the first parameter of a function in a class must be the object itself.
class Point(object):
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def distance(self):
        """Find distance from origin"""
        return (self.x ** 2 + self.y ** 2) ** 0.5
Invocations:
>>> p1 = Point(6,8)
>>> p1.distance()
10.0
__init__() defines three parameters but we just passed two (6 and 8). Similarly, distance() requires one, but zero arguments were passed.
Why is Python not complaining about this argument number mismatch?
Generally, when we call a method with some arguments, the corresponding class function is called by placing the method's object before the first argument. So, anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit).
This is the reason the first parameter of a function in a class must be the object itself. Writing this parameter as self is merely a convention. It is not a keyword and has no special meaning in Python. We could use other names (like this), but I strongly suggest that you don't. Using names other than self is frowned upon by most developers and degrades the readability of the code ("Readability counts").
...
In the first example, self.x is an instance attribute whereas x is a local variable. They are not the same, and they lie in different namespaces.
Self Is Here To Stay
Many have proposed to make self a keyword in Python, like this in C++ and Java. This would eliminate the redundant use of explicit self from the formal parameter list in methods. While this idea seems promising, it's not going to happen. At least not in the near future. The main reason is backward compatibility. Here is a blog from the creator of Python himself explaining why the explicit self has to stay.
The 'self' parameter holds a reference to the current calling object.
class class_name:
    class_variable = 0

    def method_name(self, arg):
        self.var = arg

obj = class_name()
obj.method_name(10)
Here, the self argument holds the object obj. Hence, the statement self.var denotes obj.var.
There is also another very simple answer: according to the Zen of Python, "explicit is better than implicit".
Consider an implementation of filterNot (basically the opposite of filter):
def filterNot(f, sequence):
    return filter(lambda x: not f(x), sequence)
The parameter f can be a "function" or a "method" or a lambda -- or even an object whose class defines __call__.
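A quick check that all of these work interchangeably (note that in Python 3 filter returns an iterator, hence the list() calls):

```python
def filterNot(f, sequence):
    return filter(lambda x: not f(x), sequence)

def is_even(x):
    return x % 2 == 0

class IsPositive(object):
    def __call__(self, x):
        return x > 0

# A named function, a lambda, and a callable instance all work as f:
assert list(filterNot(is_even, [1, 2, 3, 4])) == [1, 3]
assert list(filterNot(lambda x: x > 2, [1, 2, 3, 4])) == [1, 2]
assert list(filterNot(IsPositive(), [-2, -1, 0, 1])) == [-2, -1, 0]
```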
Now consider a line of docstring for this parameter:
:param ??? f: Should return True for each element to be abandoned
Now, what should go in place of ??? -- how should the type of parameter f be referred to in a docstring? callable is the obvious choice (and what I would dictate if I were calling the shots :P), but is there an established convention?
Yes, the term callable is the one to use here.
The abstract base class Callable exists in collections.abc. Abstract base classes can best be thought of as interfaces (although more like the dynamic ones in Go than those in Java, for example): they define an interface, and any class that has the given methods is treated as inheriting from that abstract base class (whether it did so explicitly or not). This means anything you could usefully pass into a function like this is a subclass of Callable, making the use of the term completely correct here, just as you might say Iterable.
It is definitely the term used by most people when talking informally about Python code, and anyone reading your code should understand what you mean.
The callable() built-in (that got removed for a while in 3.x, then added back) does the check for function-like objects, and this further reinforces the name as the best choice where you are looking for function-like objects.
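The ABC-based check can be sketched as follows (Python 3, where the class lives in collections.abc):

```python
from collections.abc import Callable

def f(x):
    return x

class WithCall(object):
    def __call__(self):
        return 42

# Functions, lambdas, classes, and instances defining __call__ all register
# as Callable via the ABC's subclass hook, without explicit inheritance:
assert isinstance(f, Callable)
assert isinstance(lambda x: x, Callable)
assert isinstance(WithCall, Callable)   # classes themselves are callable
assert isinstance(WithCall(), Callable)
# The callable() builtin agrees:
assert all(callable(o) for o in (f, WithCall, WithCall()))
```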
I know I should have come up with a better title, but anyway...
Say I make a class inherited from int in python:
class Foo(int):
    def is_even(self):
        return self % 2 == 0
and do something like this
a = Foo(3)
b = Foo(5)
print(type(a+b)) #=> <class 'int'>
I understand this behaviour is not surprising at all, as __add__ called here is defined to return int instances. But I would like to create a class so that a+b returns Foo(8). In other words, I'd like the result of a+b to have the is_even method.
Is there any way I can achieve this conveniently? Or do I have to overwrite __add__ and everything?
Background information: I'm trying to write an interpreter for an esoteric programming language called Grass. In that attempt, I want to have a class that behaves like a 'callable int' (actually, numpy.uint8), whose __call__ would be like
def __call__(self, other):
    if self == other:
        return lambda x: lambda y: x
    else:
        return lambda x: lambda y: y
There are tricks that you could do with metaclasses (__metaclass__ class variable) or the __getattribute__ special method. But the documentation states:
Bypassing the __getattribute__() machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods (the special method must be set on the class object itself in order to be consistently invoked by the interpreter)
Which means that if you want to make sure that the parent class is never handled directly, you need to intercept everything. And for int, that is described as emulating numeric types (i.e.: implementing all those methods).
That said, I believe you could implement all those methods in your class quite easily by creating a lambda or generic method that takes two parameters and just calls super on them. And then assign that method to all the specific methods that you need to implement. So you implement once and reuse it.
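A minimal sketch of that idea; the helper name _wrap and the list of wrapped methods are my own choices, and the list is deliberately incomplete:

```python
class Foo(int):
    def is_even(self):
        return self % 2 == 0

def _wrap(name):
    def method(self, *args):
        # Call the inherited int implementation...
        result = getattr(super(Foo, self), name)(*args)
        # ...and re-wrap plain int results so they keep Foo's extra methods.
        return Foo(result) if isinstance(result, int) else result
    return method

# Assign the same generic wrapper to each arithmetic method we care about
# (extend this tuple as needed):
for _name in ('__add__', '__radd__', '__sub__', '__mul__', '__neg__'):
    setattr(Foo, _name, _wrap(_name))

a, b = Foo(3), Foo(5)
assert type(a + b) is Foo
assert (a + b).is_even()  # Foo(8) has the extra method
```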
As the Python 2 documentation on __repr__ states:
If at all possible, this (i.e. __repr__) should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment).
So how come builtin __repr__ for classes does not act accordingly to that guideline?
Example
>>> class A(object):
...     pass
>>> repr(A)
"<class 'A'>"
To meet the guideline, the default __repr__ should return "A", i.e. generally A.__name__. Why is it acting differently? It would be extra-easy to implement, I believe.
Edit: The scope of 'reproduction'
I can see in the answers that it is not clear in the discussion what repr should return. The way I see it, the repr function should return a string that allows you to reproduce the object:
in an arbitrary context and
automatically (i.e. not manually).
Ad.1. Take a look at a built-in class case (taken from this SO question):
>>> from datetime import date
>>>
>>> repr(date.today()) # calls date.today().__repr__()
'datetime.date(2009, 1, 16)'
Apparently, the assumed context is as if you use the basic form of import, i.e. import datetime, because if you would try eval(repr(date.today())), datetime would not be recognized. So the point is that __repr__ doesn't need to represent the object from scratch. It's enough if it is unambiguous in a context the community agreed upon, e.g. using direct module's types and functions. Sounds reasonable, right?
Ad.2. Giving an impression of how the object could be reconstructed is not enough for repr, I believe. Helpfulness in debugging is the purpose of str.
Conclusion
So what I expect from repr is allowing me to do eval on the result. And in the case of a class, I would not like to get the whole code that would reconstruct the class from scratch. Instead, I would like to have an unambiguous reference to a class visible in my scope. "Module.Class" would suffice. No offence, Python, but "<class 'Module.Class'>" just doesn't cut it.
Consider a slightly more complicated class:
class B(object):
    def __init__(self):
        self.foo = 3
repr would need to return something like
type("B", (object,), { "__init__": lambda self: setattr(self, "foo", 3) })
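And indeed, for this simple case the expression does evaluate to a working class:

```python
# The three arguments to type() are the class name, the bases, and the
# namespace dict -- the same ingredients a class statement provides.
B = type("B", (object,), {"__init__": lambda self: setattr(self, "foo", 3)})
assert B().foo == 3
```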
Notice one difficulty already: not all functions defined by the def statement can be translated into a single lambda expression. Change B slightly:
class B(object):
    def __init__(self, x, y=2, **kwargs):
        print "in B.__init__"
How do you write an expression that defines B.__init__? You can't use
lambda self: print "in B.__init__"
because lambda expressions cannot contain statements. For this simple class, it is already impossible to write a single expression that defines the class completely.
Because the default __repr__ cannot know what statements were used to create the class.
The documentation you quote starts with If at all possible. Since it is not possible to represent custom classes in a way that lets you recreate them, a different format is used, which follows the default for all things not easily recreated.
If repr(A) were to just return 'A', that'd be meaningless. You are not recreating A, you'd just be referencing it then. "type('A', (object,), {})" would be closer to reflecting the class constructor, but that'd be a) confusing for people not familiar with the fact that Python classes are instances of type, and b) never able to reflect methods and attributes accurately.
Compare the output to that of repr(type) or repr(int) instead, these follow the same pattern.
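In Python 3 the pattern is uniform across builtins and user-defined classes alike:

```python
class A(object):
    pass

# Builtin and user-defined classes share the same default repr pattern:
assert repr(int) == "<class 'int'>"
assert repr(type) == "<class 'type'>"
assert repr(A).startswith("<class '") and repr(A).endswith("A'>")
```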
I know this is an older question, but I found a way to do it.
The only way I know to do this is with a metaclass like so:
class A(object):
    secret = 'a'

    class _metaA(type):
        @classmethod
        def __repr__(cls):
            return "<Repr for A: secret:{}>".format(A.secret)

    __metaclass__ = _metaA
outputs:
>>> A
<Repr for A: secret:a>
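That snippet relies on the Python 2 __metaclass__ hook; a Python 3 version of the same idea might look like:

```python
class MetaA(type):
    def __repr__(cls):
        # cls is the class being represented (A below)
        return "<Repr for A: secret:{}>".format(cls.secret)

class A(metaclass=MetaA):
    secret = 'a'

assert repr(A) == "<Repr for A: secret:a>"
```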
Since neither "<class 'A'>" nor "A" can be used to re-create the class when its definition is not available, I think the question is moot.
I've been hacking classes in Python like this:
def hack(f, aClass):
    class MyClass(aClass):
        def f(self):
            f()
    return MyClass
A = hack(afunc,A)
Which looks pretty clean to me. It takes a class, A, creates a new class derived from it that has an extra method, calling f, and then reassigns the new class to A.
How does this differ from metaclass hacking in Python? What are the advantages of using a metaclass over this?
The definition of a class in Python is an instance of type (or an instance of a subclass of type). In other words, the class definition itself is an object. With metaclasses, you have the ability to control the type instance that becomes the class definition.
When a metaclass is invoked, you have the ability to completely re-write the class definition. You have access to all the proposed attributes of the class, its ancestors, etc. More than just injecting a method or removing a method, you can radically alter the inheritance tree, the type, and pretty much any other aspect. You can also chain metaclasses together for a very dynamic and totally convoluted experience.
I suppose the real benefit, though is that the class's type remains the class's type. In your example, typing:
a_inst = A()
type(a_inst)
will show that it is an instance of MyClass. Yes, isinstance(a_inst, aClass) would return True, but you've introduced a subclass, rather than a dynamically re-defined class. The distinction there is probably the key.
As rjh points out, the anonymous inner class also has performance and extensibility implications. A metaclass is processed only once, at the moment that the class is defined, and never again. Users of your API can also extend your metaclass because it is not enclosed within a function, so you gain a certain degree of extensibility.
This slightly old article actually has a good explanation that compares exactly the "function decoration" approach you used in the example with metaclasses, and shows the history of the Python metaclass evolution in that context: http://www.ibm.com/developerworks/linux/library/l-pymeta.html
You can use the type callable as well.
def hack(f, aClass):
    newfunc = lambda self: f()
    return type('MyClass', (aClass,), {'f': newfunc})
I find using type the easiest way to get into the metaclass world.
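A usage sketch of that version (Base, greet, and Hacked are illustrative names of my own):

```python
def hack(f, aClass):
    newfunc = lambda self: f()
    return type('MyClass', (aClass,), {'f': newfunc})

class Base(object):
    pass

def greet():
    return "hi"

Hacked = hack(greet, Base)
assert Hacked().f() == "hi"       # the injected method delegates to greet
assert issubclass(Hacked, Base)   # the new class still derives from Base
assert Hacked.__name__ == 'MyClass'
```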
A metaclass is the class of a class. IMO, the bloke here covered it quite serviceably, including some use-cases. See Stack Overflow question "MetaClass", "__new__", "cls" and "super" - what is the mechanism exactly?