Let's say I define a base class that provides some methods which will likely be overridden. It's not required, though, as the base class provides naive default implementations. How can I best highlight those methods?
In C++ I would just use virtual, but Python is a dynamic language where methods can always be overridden without marking them. I am rather looking for a hint here. Ideally, it should be faster to spot than a textual explanation in the docstring and easier for others to understand than a custom decorator.
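For concreteness, here is a minimal sketch of what I mean (the class and method names are made up):

class Exporter:
    """Base exporter. Subclasses will likely override serialize() and
    compress(); the naive defaults below work but are unoptimized."""

    def serialize(self, data):
        # Naive default implementation; meant to be overridden.
        return repr(data)

    def compress(self, payload):
        # Naive default: no compression at all.
        return payload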
I first learned polymorphism in C++, where every variable has a type. So we used polymorphism to get a single pointer that could point to objects of different types, and that was very convenient.
But I don't get polymorphism and abstract classes in Python. Here every variable can be anything: an iterator, a list, a single variable, or a function. Anything. So what drives a programmer to use an abstract class or polymorphism here?
In C++ we used inheritance in many ways. But in Python, is it just used to reuse another class's methods or attributes? Am I right? What's the point?
You don't seem to understand what polymorphism is (I mean OO polymorphic dispatch). Polymorphism is the ability to have objects of different types understand the same message, so you can use those objects the same way without worrying about their concrete type.
C++ actually uses the same concept (class) to denote two slightly different semantics: the abstract type (interface), which is the set of messages an object of this type understands, and the concrete type (implementation), which defines how this type reacts to those messages.
Java clearly distinguishes between abstract type (interface) and concrete type (class).
Python, being dynamically typed, relies mostly on "duck typing" (if it walks like a duck and quacks like a duck, then it's a duck - or at least it's "kind-of-a-duck" enough). You'll often find terms like "file-like" or "dict-like" in Python docs, meaning "anything that has the same interface as a file (or dict)", and quite a few "interfaces" are (or at least have long been) more or less implicit.
The issue with those implicit interfaces is that they are seldom fully documented, and one sometimes has to read a function's source code to find out exactly what the object passed in needs to support. That's one of the reasons the abc module was introduced in Python 2 and improved in Python 3: as a way to better document those implicit interfaces by creating an abstract base type that clearly defines the interface.
Another reason for abstract base classes (whether using the abc module or not) is to provide a common base implementation for a set of concrete subclasses. This is especially useful for frameworks, e.g. Django's models.Model (ORM) or forms.Form (user input collection and validation) classes - in both cases, just defining the database or form fields is enough to have something working.
Inheritance in C++ suffers from the same issue as classes: it serves to define both the interface and the implementation. This adds to the confusion... Java had the good idea (IMHO) to separate abstract type from implementation, but failed to go all the way and restrict typing to interfaces - you can use either classes or interfaces for type declarations, so it still doesn't make the distinction clear.
In Python, since we don't have static typing, inheritance is indeed mostly about implementation reuse. The abc module allows you to register totally unrelated classes (no inheritance relationship) as subtypes of a defined abstract base class, but the point there is mostly to document that your class implements the same interface (and that it's not an accident...).
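As a quick illustration of both uses of abc (the class names here are invented):

from abc import ABC, abstractmethod

class Quacker(ABC):
    # Documents the implicit "duck-like" interface explicitly.
    @abstractmethod
    def quack(self):
        ...

class Robot:
    # No inheritance relationship with Quacker at all.
    def quack(self):
        return "beep"

# Register the unrelated class as a virtual subclass: this documents
# that Robot implements the Quacker interface on purpose.
Quacker.register(Robot)

print(isinstance(Robot(), Quacker))  # True
print(issubclass(Robot, Quacker))    # True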
I once read (I think on a page from Microsoft) that it's good practice to use static classes when you don't NEED two or more instances of a class.
I'm writing a program in Python. Is it bad style if I use @classmethod for every method of a class?
Generally, usage like this is better done by just using functions in a module, without a class at all.
It's terrible style, unless you actually need to access the class.
A static method [...] does not translate to a Python classmethod. Oh sure, it results in more or less the same effect, but the goal of a classmethod is actually to do something that's usually not even possible [...] (like inheriting a non-default constructor). The idiomatic translation of a [...] static method is usually a module-level function, not a classmethod or staticmethod.
source
In my experience creating a class is a very good solution, for a number of reasons. One is that you wind up using the class as a 'normal' class (esp. making more than just one instance) more often than you might think. It's also a reasonable style choice to stick with classes for everything; this can make it easier for others who read/maintain your code, esp. if they are very OO - they will be comfortable with classes. As noted in other replies, it's also reasonable to just use 'bare' functions for the implementation. You may wish to start with a class and make it a singleton/Borg pattern (lots of examples if you google for these); it gives you the flexibility to (re)use the class to meet other needs. I would recommend against the 'static class' approach as being non-conventional and non-Pythonic, which makes it harder to read and maintain.
There are a few approaches you might take for this. As others have mentioned, you could just use module-level functions. In this case, the module itself is the namespace that holds them together. Another option, which can be useful if you need to keep track of state, is to define a class with normal methods (taking self), and then define a single global instance of it, and copy its instance methods to the module namespace. This is the approach taken by the standard library "random" module -- take a look at lib/python2.5/random.py in your python directory. At the bottom, it has something like this:
# Create one instance, seeded from current time, and export its methods
# as module-level functions. [...]
_inst = Random()
seed = _inst.seed
random = _inst.random
uniform = _inst.uniform
...
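A self-contained sketch of the same pattern (the _Counter class here is made up):

# mymodule.py - hypothetical module using the shared-instance pattern
class _Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

    def reset(self):
        self.count = 0

# Create one instance and export its bound methods as module-level functions.
_inst = _Counter()
increment = _inst.increment
reset = _inst.reset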
Or you can take the basic approach you described (though I would recommend using @staticmethod rather than @classmethod in most cases).
You might actually want a singleton class rather than a static class:
Making a singleton class in python
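For reference, a common minimal version of that pattern (just a sketch; there are several singleton idioms):

class Singleton:
    _instance = None

    def __new__(cls):
        # Create the single instance lazily, on first use.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True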
How do you decide between using decorators and inheritance when both are possible?
E.g., this problem has two solutions.
I'm particularly interested in Python.
Decorators...:
...should be used if what you are trying to do is "wrapping". Wrapping consists of taking something, modifying it (or registering it with something), and/or returning a proxy object that behaves "almost exactly" like the original.
...are okay for applying mixin-like behavior, as long as you aren't creating a large stack of proxy objects.
...have an implied "stack" abstraction:
e.g.
@decoA
@decoB
@decoC
def myFunc(...): ...
...
Is equivalent to:
def myFunc(...): ...
...
myFunc = decoA(decoB(decoC(myFunc))) #note the *ordering*
Multiple inheritance...:
... is best for adding methods to classes; you cannot use it to decorate functions easily. In this context, it can be used to achieve mixin-like behavior if all you need is a set of "duck-typing style" extra methods (see the sketch after this list).
... may be a bit unwieldy if your problem is not a good match for it, with issues around superclass constructors, etc. For example, a superclass's __init__ method will not be called automatically; it must be called explicitly (via super() and the method-resolution-order protocol)!
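A minimal sketch of the mixin case (all names invented):

import json

class JSONMixin:
    # Adds a duck-typed to_json() method to any class with a __dict__.
    def to_json(self):
        return json.dumps(self.__dict__)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class JSONPoint(JSONMixin, Point):
    pass

print(JSONPoint(1, 2).to_json())  # {"x": 1, "y": 2}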
To sum up, I would use decorators for mixin-like behavior as long as they don't return proxy objects. Some examples would include any decorator which returns the original function, slightly modified (or after registering it somewhere or adding it to some collection).
Things you will often find decorators for (like memoization) are also good candidates, but should be used in moderation if they return proxy objects; the order they are applied in matters. And stacking too many decorators on top of one another is using them in a way they aren't intended to be used.
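For example, a registration-style decorator that returns the original function unchanged (a sketch; the registry name is made up):

HANDLERS = {}  # hypothetical registry

def register(name):
    def decorator(func):
        HANDLERS[name] = func
        return func  # no proxy object: the original function is returned
    return decorator

@register("greet")
def greet(who):
    return "hello, " + who

print(HANDLERS["greet"]("world"))  # hello, world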
I would consider using inheritance if it was a "classic inheritance problem", or if all I needed for the mixin behavior were methods. A classic inheritance problem is one where you can use the child wherever you could use the parent.
In general, I try to write code where it is not necessary to enhance arbitrary things.
The problem you reference is not about deciding between decorators and inheritance. It uses decorators either way; you merely have the option of using either:
a decorator, which returns a class
a decorator, which returns a function
A decorator is just a fancy name for the "wrapper" pattern, i.e. replacing something with something else. The implementation is up to you (class or function).
When deciding between them, it's completely a matter of personal preference. You can do everything you can do in one with the other.
if decorating a function, you may prefer decorators which return proxy functions
if decorating a class, you may prefer decorators which return proxy classes
(Why is it a good idea? There may be assumptions that a decorated function is still a function, and a decorated class is still a class.)
Even better in both cases would be to use a decorator which just returns the original, modified somehow.
edit: After better understanding your question, I have posted another solution at Python functools.wraps equivalent for classes
The other answers are quite great, but I wanted to give a succinct list of pros and cons.
The main advantage of mixins is that the type can be checked at runtime using isinstance, and it can be checked statically with type checkers like mypy. Like all inheritance, mixins should be used when you have an is-a relationship. For example, dataclass should probably have been a mixin in order to expose dataclass-specific introspection variables like the list of dataclass fields.
Decorators should be preferred when you don't have an is-a relationship. For example, a decorator that propagates documentation from another class, or registers a class in some collection.
Decoration typically only affects the class it decorates, not classes that inherit from the decorated class:

@decorator
class A:
    ...  # Can be affected by the decorator.

class B(A):
    ...  # Not affected by the decorator in most cases.
Now that Python has __init_subclass__, everything that decorators can do can be done with mixins, and they typically do affect child subclasses:
class A(Mixin):
    ...  # Is affected by Mixin.__init_subclass__.

class B(A):
    ...  # Is affected by Mixin.__init_subclass__.
Mixins have another advantage: they can provide empty base class methods. Child classes can override these methods with some "augmenting" behavior and then call super(). A decorator cannot easily provide such base class methods. This is another way in which mixins are more flexible.
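A sketch of both points, with an invented Mixin that registers subclasses via __init_subclass__ and exposes an overridable hook:

class Mixin:
    registry = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Runs for every subclass, direct or indirect.
        Mixin.registry.append(cls)

    def on_save(self):
        # Empty base method: children may override it and call super().
        pass

class A(Mixin):
    def on_save(self):
        print("A-specific behavior")
        super().on_save()

class B(A):
    pass

print(Mixin.registry)  # [A, B]: both subclasses were registered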
In summary, the questions you should ask when deciding between a mixin and decoration are:
Is there an is-a pattern?
Would you ever call isinstance?
Would you use the mixin in a type annotation?
Do you want the behavior to affect child classes?
Do you need augmenting methods?
In general, lean towards inheritance.
If both are equivalent, I would prefer decorators, since you can apply the same decorator to many classes, while inheriting applies to only one specific class.
Personally, I would think in terms of code reuse. A decorator is sometimes more flexible than inheritance.
Take caching as an example. If you want to add a caching facility to two classes in your system, A and B, then with inheritance you'll probably wind up having ACached and BCached, and by overriding some of the methods in these classes you'll probably duplicate a lot of code for the same caching logic. But if you use a decorator in this case, you only need to define one decorator to decorate both classes.
So, when deciding which one to use, you may first want to check whether the extended functionality is specific to that one class, or whether the same extended functionality can be reused in other parts of your system. If it cannot be reused, then inheritance should probably do the job. Otherwise, you can think about using a decorator.
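A sketch of the decorator version of the caching example (it memoizes one hypothetically-named method, compute, for brevity):

import functools

def cached_compute(cls):
    # Class decorator: wrap the class's compute() method with a cache.
    cls.compute = functools.lru_cache(maxsize=None)(cls.compute)
    return cls

@cached_compute
class A:
    def compute(self, n):
        print("computing in A...")
        return n * n

@cached_compute
class B:
    def compute(self, n):
        print("computing in B...")
        return n + n

a = A()
a.compute(3)  # prints "computing in A..."
a.compute(3)  # served from the cache: nothing printed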
I just encountered a problem when subclassing the dict type. I overrode the __iter__ method and expected it to affect other methods like iterkeys, keys, etc., because I believed they call __iter__ to get their values, but it seems they are implemented independently and I have to override all of them.
Is this a bug, or is it intentional that they don't make use of the other methods and retrieve values separately?
I didn't find any description in the standard Python documentation of the call dependencies between methods of the standard classes. It would be handy for subclassing work, and for knowing which methods must be overridden for proper behaviour. Is there some supplemental documentation about the internals of Python's base types/classes?
Subclass Mapping or MutableMapping from the collections.abc module instead of dict and you get all those methods for free.
Here is an example of a minimal mapping and some of the methods you get for free:
import collections.abc

class MinimalMapping(collections.abc.Mapping):
    def __init__(self, *items):
        self.elements = dict(items)

    def __getitem__(self, key):
        return self.elements[key]

    def __len__(self):
        return len(self.elements)

    def __iter__(self):
        return iter(self.elements)

t = MinimalMapping(("a", 1))
print(t.items, t.keys, t.values, t.get)  # all inherited from Mapping
To subclass any of the built-in containers you should always use the appropriate base class from the collections.abc module.
If it's not specified in the documentation, it is implementation-specific. Implementations other than CPython might reuse __iter__ to implement iterkeys and the others. I would not consider this a bug, but simply a bit of freedom for the implementors.
I suspect there is a performance factor in implementing the methods independently, especially as dictionaries are so widely used in Python.
So basically, you should implement them.
You know the saying: "You know what happens when you assume." :-)
They don't officially document that stuff because they may decide to change it in the future. Any unofficial documentation you may find would simply document the current behavior of one Python implementation, and relying on it would result in your code being very, very fragile.
When there is official documentation of special methods, it tends to describe the behavior of the interpreter with respect to your own classes, such as using __len__() when __nonzero__() isn't implemented, or only needing __lt__() for sorting.
Since Python uses duck typing, you usually don't actually need to inherit from a built-in class to make your own class act like one. So you might reconsider whether subclassing dict is really what you want to do. You might choose a different class, such as something from the collections module, or to encapsulate rather than inheriting. (The UserString class uses encapsulation.) Or just start from scratch.
Instead of subclassing dict, you could just make your own class that has exactly the properties you want, without too much trouble. Here's a blog post with an example of how to do this. The __str__() method in it isn't the greatest, but that's easily corrected; the rest provides the functionality you seek.
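A sketch of the encapsulation approach mentioned above (the class name is invented): a plain class wraps a dict and forwards only the methods you care about, so keys() and __iter__ can't silently disagree.

class LoggingDict:
    # Wraps a dict instead of inheriting from it.
    def __init__(self, *args, **kwargs):
        self._data = dict(*args, **kwargs)

    def __getitem__(self, key):
        print("reading", repr(key))
        return self._data[key]

    def __setitem__(self, key, value):
        print("writing", repr(key))
        self._data[key] = value

    def __iter__(self):
        return iter(self._data)

    def keys(self):
        # Forwarded explicitly, consistent with __iter__ by construction.
        return self._data.keys()

d = LoggingDict(a=1)
print(d["a"])  # prints "reading 'a'" and then 1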
I'm learning about metaclasses in Python. I think it is a very powerful technique, and I'm looking for good uses of them. I'd like some feedback on good, useful, real-world examples of using metaclasses. I'm not looking for example code on how to write a metaclass (there are plenty of examples of useless metaclasses out there), but for real examples where you have applied the technique and it was genuinely the appropriate solution. The rule is: no theoretical possibilities, but metaclasses at work in a real application.
I'll start with the one example I know:
Django models, for declarative programming, where the base class Model uses a metaclass to fill the model objects with useful ORM functionality derived from the attribute definitions.
Looking forward to your contributions.
In Python 2.6 and 3.1, the Python standard library provides abc.ABCMeta, a metaclass for Abstract Base Classes ("ABCs"). Classes that use the metaclass can use @abstractmethod and @abstractproperty to define abstract methods and properties. The metaclass ensures that derived classes cannot be instantiated until they override the abstract methods and properties.
Also, classes that implement the ABC without actually inheriting from it can register as implementing the interface, so that issubclass and isinstance will work.
For example, the collections module defines the Sequence ABC. It also calls Sequence.register(tuple) to register the built-in tuple type as a Sequence, even though tuple does not actually inherit from Sequence.
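You can verify this directly (in modern Python these ABCs live in collections.abc):

from collections.abc import Sequence

# tuple does not inherit from Sequence, but it is registered as one.
print(issubclass(tuple, Sequence))      # True
print(isinstance((1, 2, 3), Sequence))  # True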
The Python implementation of Protocol Buffers uses metaclasses to generate the Python bindings that represent your data format. From the tutorial:
The important line in each class is __metaclass__ = reflection.GeneratedProtocolMessageType. While the details of how Python metaclasses work is beyond the scope of this tutorial, you can think of them as like a template for creating classes. At load time, the GeneratedProtocolMessageType metaclass uses the specified descriptors to create all the Python methods you need to work with each message type and adds them to the relevant classes. You can then use the fully-populated classes in your code.
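For intuition only, here is a tiny made-up metaclass in the same spirit (this is not protobuf's actual code): it generates accessor methods from a class-level field list at class-creation time.

class AutoAccessors(type):
    # Generates a get_<name> method for each entry in `fields`.
    def __new__(mcls, name, bases, namespace):
        for field in namespace.get("fields", ()):
            def getter(self, _field=field):
                return self.__dict__.get(_field)
            namespace["get_" + field] = getter
        return super().__new__(mcls, name, bases, namespace)

class Person(metaclass=AutoAccessors):
    fields = ["name", "email"]

p = Person()
p.name = "Ada"
print(p.get_name())  # Ada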
FormEncode validators and TurboGears / ToscaWidgets widgets.
You might also be interested in class decorators: they can be written with the latest releases, and cover many use cases that were previously handled with metaclasses.
SQLAlchemy also uses them for declarative database models.
Sorry my answer isn't very different from your example, but if you're looking for example code, I found the declarative module to be pretty readable.
The only time I have used a metaclass so far was to write a deprecation warning mechanism. It was something along the following lines - the syntax may be approximate, but the code illustrates my point more easily than a complicated sentence would:
import warnings

class New(object):
    pass

class Old(object):
    def __new__(cls):
        warnings.warn("the Old class is no longer supported; use New instead",
                      DeprecationWarning, stacklevel=2)
        return New()