All the Python built-ins are subclasses of object and I come across many user-defined classes which are too. Why? What is the purpose of the class object? It's just an empty class, right?
Note: new-style classes are the default in Python 3. Subclassing object is unnecessary there. Read below for more information on usage with Python 2.
In short, it sets free magical ponies.
In long, before Python 2.2 Python had only "old style classes". They were a particular implementation of classes, and they had a few limitations (for example, you couldn't subclass built-in types). The fix for this was to create a new style of class. But doing this involved some backwards-incompatible changes. So, to make sure that code written for old style classes would still work, the object class was created to act as the superclass for all new-style classes.
So, in Python 2.x, class Foo: pass creates an old-style class and class Foo(object): pass creates a new-style class.
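For instance, here is a minimal sketch (assuming CPython 2.7) of how the two declarations differ at runtime:

class OldStyle:                 # no base class: old-style (classic) class
    pass

class NewStyle(object):         # inherits from object: new-style class
    pass

print(type(OldStyle()))              # <type 'instance'> -- shared by every old-style instance
print(type(NewStyle()))              # <class '__main__.NewStyle'>
print(issubclass(NewStyle, object))  # True
print(issubclass(OldStyle, object))  # False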
In longer, see Guido's Unifying types and classes in Python 2.2.
And, in general, it's a good idea to get into the habit of making all your classes new-style, because some things (the @property decorator is one that comes to mind) won't work with old-style classes.
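To illustrate (a minimal sketch, assuming Python 2.x; the class and attribute names are made up): on an old-style class the property getter runs, but assignment silently bypasses the setter:

class OldTemperature:                      # old-style: no object base
    def __init__(self):
        self._celsius = 0.0
    def get_celsius(self):
        return self._celsius
    def set_celsius(self, value):
        self._celsius = float(value)
    celsius = property(get_celsius, set_celsius)

t = OldTemperature()
print(t.celsius)       # 0.0 -- reading does go through the getter
t.celsius = 100        # the setter is NOT called; 100 is stored straight in t.__dict__
print(t._celsius)      # still 0.0
print(t.celsius)       # 100 -- the instance attribute now shadows the property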
Short answer: subclassing object effectively makes it a new-style class (note that this is unnecessary in Python 3.x, where it is automatic).
For the difference between new style classes and old style classes: see this stackoverflow question. For the complete story: see this nice writeup on Python Types and Objects.
It has to do with the "new-style" of classes. You can read more about it here: http://docs.python.org/tutorial/classes.html#multiple-inheritance and also here: http://docs.python.org/reference/datamodel.html#new-style-and-classic-classes
Using new-style classes will allow you to use "Python's newer, versatile features like __slots__, descriptors, properties, and __getattribute__()."
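As a small sketch of one of those features (hypothetical class; Python 2.x or 3.x): __slots__ is only honoured on new-style classes, where it replaces the per-instance __dict__ with fixed storage for the named attributes:

class Point(object):
    __slots__ = ("x", "y")      # instances get exactly these attributes and no __dict__
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
p.x = 10                        # fine
# p.z = 3                       # would raise AttributeError: 'Point' object has no attribute 'z'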
Right, but it marks the class as a new-style class. Newly developed classes should use the object base because it costs little and future-proofs your code.
The short version is that classic classes, which didn't need a superclass, had limitations that couldn't be worked around without breaking a lot of old code. So they created the concept of new-style classes which subclass from object, and now you can do cool things like define properties, and subclassing dict is no longer an exercise in pain and strange bugs.
The details are in section 3.3 of the Python docs: New-style and classic classes.
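As an illustration of the dict point, a minimal sketch (the class name is made up; __missing__ needs Python 2.5 or later):

class DefaultingDict(dict):          # subclassing a built-in type directly
    def __missing__(self, key):      # hook consulted by dict.__getitem__ on a miss
        return "n/a"

d = DefaultingDict(a=1)
print(d["a"])          # 1
print(d["missing"])    # n/a -- no KeyError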
Python 2.2 introduced "new style classes", which had a number of additional features relative to the old style classes that did not subclass object. Subclassing object was the chosen way to indicate that your class should be a new style class, not an old style one.
What is the difference between:
class ClassName(object):
pass
and
class ClassName:
pass
When I call the help function on the module containing those classes, you can read __builtin__.object for the first case just under the CLASS title of help. For the second case it just shows the class name. Is there any functional difference between those classes and/or their possible methods?
(I know that class Classname(ParentClassName) has a functional use)
In Python 2.x, when you inherit from object your class is a "new style" class - these were implemented back in Python 2.2 (around 2001). Not inheriting from object creates an "old style" class, which was kept only for backwards compatibility.
The great benefit of new-style classes is the unification of types across Python - prior to that, one could not properly subclass built-in types such as int, list, or dict. A "descriptor protocol" was also specified, describing how attributes are retrieved and set on an object, which gives the language a lot of flexibility. (It is most visible when one uses a Python property in a class.)
What actually makes the difference is not "inheriting from object" itself but the class of the class: since classes in Python are also objects, each class has a class of its own (known as its "metaclass"). So if you set the metaclass to type, you don't need to inherit from object to get a new-style class.
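A minimal sketch of that point (Python 2.x only; the class name is made up):

class Foo:
    __metaclass__ = type         # makes Foo a new-style class with no explicit bases

print(type(Foo))                 # <type 'type'> rather than <type 'classobj'>
print(issubclass(Foo, object))   # True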
It is strongly recommended that in Python 2.x all your classes are new style - old style classes may work for some simple, straightforward cases, but they can generate a lot of subtle, hard-to-find errors when you try to use properties, pickle, descriptors, and other advanced features. Above all, when you check the type of an object, it will be the same (type "instance") for all objects of old style classes, even if they are instances of different user-defined classes.
In Python versions 3.x all classes are new style - no need to set the metaclass.
Python's "Data model" documentation is the "book of law" where the behavior of both class types is defined in detail (enough to allow one to reimplement the language):
http://docs.python.org/reference/datamodel.html
This blog post from Guido talks about the motivations behind new style classes in a lighter language:
http://python-history.blogspot.com.br/2010/06/new-style-classes.html
ClassName(object) creates a new-style class: http://docs.python.org/release/2.5.2/ref/node33.html
The second example declares an old-style class.
In Python 3, new-style classes are the default, so you no longer need to subclass object.
What is the current state of affairs with new-style and old-style classes in Python 2.7?
I don't work with Python often, but I vaguely remember the issue. The documentation doesn't seem to mention the issue at all: The Python Tutorial: Classes. Do I still need to worry about this? In general, should I declare my classes like the following?
class MyClass:
pass
or?
class MyClass(object):
pass
Always subclass object; those are new-style classes.
You are ready for Python 3 that way.
Things like super() work properly that way, should you need them.
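For example, a minimal sketch (Python 2.x; the class names are made up) of why the object base matters for super():

class Base(object):
    def greet(self):
        return "hello from Base"

class Child(Base):
    def greet(self):
        # cooperative call to the parent implementation
        return super(Child, self).greet() + ", extended by Child"

print(Child().greet())    # hello from Base, extended by Child

# If Base were an old-style class (no object), the super() call would raise:
# TypeError: super() argument 1 must be type, not classobj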
You should always use new style classes. New-style classes are part of an effort to unify built-in types and user-defined classes in the Python programming language.
New style classes have several things to offer such as:
Properties: attributes that are defined by get/set methods
Static methods and class methods
The new __getattribute__ hook, which, unlike __getattr__, is called for every attribute access, not just when the attribute can't be found in the instance
Descriptors: a protocol to define the behavior of attribute access through objects
Overriding the constructor __new__
Metaclasses
Source.
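A minimal sketch of the descriptor point (hypothetical names; works on Python 2.x new-style classes and on Python 3):

class Positive(object):
    """Data descriptor that stores a value on the instance and rejects non-positive numbers."""
    def __init__(self, name):
        self.storage_name = "_" + name
    def __get__(self, obj, objtype=None):
        return getattr(obj, self.storage_name)
    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError("must be positive")
        setattr(obj, self.storage_name, value)

class Account(object):
    balance = Positive("balance")    # attribute access is routed through the descriptor
    def __init__(self, balance):
        self.balance = balance       # goes through Positive.__set__

a = Account(10)
a.balance = 25
print(a.balance)        # 25
# a.balance = -5        # would raise ValueError: must be positive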
One of the recommended principles of object-oriented programming is the Liskov substitution principle: a subclass should behave in the same way as its base class(es) (warning: this is actually not a correct description of the Liskov principle: see the PS).
Is it recommended that it also apply to constructors? I mostly have Python in mind, and its __init__() methods, but this question applies to any object-oriented language with inheritance.
I am asking this question because it is sometimes useful to have a subclass inherit from one or more classes that provide some nice default behavior (like inheriting from a dictionary, in Python, so that obj['key'] works for objects of the new class). However, it is not always natural or simple to allow the subclass to be used exactly like a dictionary: it would be sometimes nicer that the constructor parameters only relate to the specific user subclass (for instance, a class that represents a set of serial ports might want to behave like a dictionary with ports['usb1'] being USB port #1, etc.). What is the recommended approach to such a situation? having subclass constructors that are fully compatible with that of their base classes, and generating instances through an object factory function that takes simple, user-friendly parameters? or simply writing a class constructor whose set of parameters cannot be directly given to the constructor of its base classes, but which is more logical from the user perspective?
PS: I misinterpreted the Liskov principle, above: Sven's comment below points out the fact that objects of a subclass should behave like objects of the superclass (the subclass itself does not have to behave like the superclass; in particular, their constructors do not have to have the same parameters [signature]).
As requested, I post as an answer what previously has been a comment.
The principle as defined in the linked Wikipedia article reads "if S is a subtype of T, then objects of type T may be replaced with objects of type S". It does not read "a subclass should behave in the same way as its base class(es)". The difference is important when thinking about constructors: the Wikipedia version only talks about objects of a subtype, not the type itself. For an object, the constructor has already been called, so the principle doesn't apply to constructors. This is also how I apply it, and the way it seems to be applied in the standard library (e.g. defaultdict and dict).
Constructors in multiple inheritance probably can't be discussed in a language-agnostic way. In Python, there are two approaches. If your inheritance diagram includes diamond patterns and you need to make sure all constructors are called exactly once, you should use super() and follow the pattern described in the section "Practical advice" of Raymond Hettinger's article Python's super() considered super. If you don't have diamonds (except for the ones including object), you can also use explicit base class calls for all base class constructors.
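A minimal sketch of that pattern (class names and arguments are made up; the super(Cls, self) spelling is used for Python 2 compatibility), where each constructor peels off its own keyword arguments and forwards the rest:

class Base(object):
    def __init__(self, **kwargs):
        super(Base, self).__init__(**kwargs)

class Left(Base):
    def __init__(self, left_arg, **kwargs):
        self.left_arg = left_arg
        super(Left, self).__init__(**kwargs)

class Right(Base):
    def __init__(self, right_arg, **kwargs):
        self.right_arg = right_arg
        super(Right, self).__init__(**kwargs)

class Child(Left, Right):                      # diamond: Child -> Left/Right -> Base
    def __init__(self, child_arg, **kwargs):
        self.child_arg = child_arg
        super(Child, self).__init__(**kwargs)

c = Child(child_arg=1, left_arg=2, right_arg=3)
print((c.child_arg, c.left_arg, c.right_arg))  # (1, 2, 3); each __init__ ran exactly once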
I'm learning about metaclasses in Python. I think it is a very powerful technique, and I'm looking for good uses for them. I'd like some feedback on good, useful real-world examples of using metaclasses. I'm not looking for example code on how to write a metaclass (there are plenty of examples of useless metaclasses out there), but for real examples where you have applied the technique and it was really the appropriate solution. The rule is: no theoretical possibilities, but metaclasses at work in a real application.
I'll start with the one example I know:
Django models, for declarative programming, where the base class Model uses a metaclass to fill the model objects of useful ORM functionality from the attribute definitions.
Looking forward to your contributions.
In Python 2.6 and 3.1, the Python standard library provides abc.ABCMeta, a metaclass for Abstract Base Classes ("ABCs"). Classes that use this metaclass can mark methods and properties with @abstractmethod and @abstractproperty. The metaclass then ensures that a derived class cannot be instantiated unless it overrides the abstract methods and properties.
Also, classes that implement the ABC without actually inheriting from it can register as implementing the interface, so that issubclass and isinstance will work.
For example, the collections module defines the Sequence ABC. It also calls Sequence.register(tuple) to register the built-in tuple type as a Sequence, even though tuple does not actually inherit from Sequence.
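A minimal sketch of both mechanisms - abstract methods and virtual subclass registration (Python 2.6+ spelling of the metaclass; the classes are made up):

import abc

class Serializer(object):
    __metaclass__ = abc.ABCMeta          # Python 3 would use: class Serializer(metaclass=abc.ABCMeta)

    @abc.abstractmethod
    def dumps(self, obj):
        """Return a string representation of obj."""

class JsonSerializer(Serializer):        # concrete: overrides the abstract method
    def dumps(self, obj):
        import json
        return json.dumps(obj)

class LegacySerializer(object):          # does not inherit from Serializer at all
    def dumps(self, obj):
        return repr(obj)

Serializer.register(LegacySerializer)    # registered as a "virtual" subclass

print(isinstance(JsonSerializer(), Serializer))   # True
print(issubclass(LegacySerializer, Serializer))   # True, thanks to register()
# Serializer() would raise TypeError: Can't instantiate abstract class Serializer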
The Python implementation of Protocol Buffers uses metaclasses to generate the Python bindings that represent your data format. From the tutorial:
The important line in each class is __metaclass__ = reflection.GeneratedProtocolMessageType. While the details of how Python metaclasses work is beyond the scope of this tutorial, you can think of them as like a template for creating classes. At load time, the GeneratedProtocolMessageType metaclass uses the specified descriptors to create all the Python methods you need to work with each message type and adds them to the relevant classes. You can then use the fully-populated classes in your code.
FormEncode validators and Turbogears / Tosca widgets.
You might also be interested in class decorators: they can be written with the latest releases, and cover many use cases that were previously handled with metaclasses.
SQLAlchemy also uses them for declarative database models.
Sorry my answer isn't very different from your example, but if you're looking for example code, I found declarative to be pretty readable.
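For reference, the declarative pattern mentioned above looks roughly like this (a sketch; the table and columns are made up). The Base class produced by declarative_base() carries a metaclass that turns the Column attributes into a mapped table:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()                # classes deriving from Base are collected by its metaclass

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)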
The only time I used a metaclass so far was to write a deprecation warning mechanism. It was something along the following lines - the syntax may be approximate, but the code will illustrate my point more easily than a complicated sentence:
import warnings

class New(object):
    pass

class Old(object):
    def __new__(cls):
        # Warn callers that still instantiate Old, then hand them a New instance instead.
        warnings.warn("Old class is no longer supported, use New instead",
                      DeprecationWarning, stacklevel=2)
        return New()