Dynamically select which subclass to inherit methods from? - Python

Suppose I have different classes providing access to different subsystems but with a common interface. They all provide the same set of methods, but each class implements them in a different way (think of using foo.write() to write to a file or to send data via a socket, etc.).
Since the interface is the same, I wanted to make a single class that picks the correct implementation based only on the constructor/initializer parameters.
In code, it would look like this:
class Foo(object):
    def who_am_i(self):
        print "Foo"

class Bar(object):
    def who_am_i(self):
        print "Bar"

# Class that decides which one to use and adds some methods that are common to both
class SomeClass(Foo, Bar):
    def __init__(self, use_foo):
        # How to inherit methods from Foo -OR- Bar?
How can SomeClass inherit methods from Foo or Bar depending on the __init__ and/or __new__ arguments? The goal would be something like:
>>> some_object = SomeClass(use_foo=True)
>>> some_object.who_am_i()
Foo
>>> another_object = SomeClass(use_foo=False)
>>> another_object.who_am_i()
Bar
Is there some clean "pythonic" way to achieve this? I didn't want to use a function to dynamically define SomeClass, but I'm not finding another way to do this.
Thanks!

As mentioned in the comments, this can be done with a factory function (a function that pretends to be a class):
def SomeClass(use_foo):
    if use_foo:
        return Foo()
    else:
        return Bar()
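One caveat of the factory-function approach worth noting: SomeClass is now a function, so there is no SomeClass type to test against:

some_object = SomeClass(use_foo=True)
some_object.who_am_i()              # prints "Foo"
isinstance(some_object, Foo)        # True
isinstance(some_object, SomeClass)  # TypeError: arg 2 must be a type, not a function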

As far as I can see, you have your inheritance completely backwards; instead of the multiple inheritance you're proposing:
   Foo              Bar
 - foo code       - bar code
        \          /
      SomeClass(Foo, Bar)
       - common code
you could use a much simpler single inheritance model:
           SomeClass
         - common code
          /         \
Foo(SomeClass)    Bar(SomeClass)
 - foo code        - bar code
This then makes your problem one of choosing which subclass to instantiate (a decision that only needs to be made once) rather than which superclass method to call (which potentially needs to be made on every method call). This could be solved with as little as:
thing = Foo() if use_foo else Bar()
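A minimal sketch of that single-inheritance layout (the greet method here is just a stand-in for whatever common code you actually have):

class SomeClass(object):
    # common code lives in the base class
    def greet(self):
        print("Hello from %s" % type(self).__name__)

class Foo(SomeClass):
    def who_am_i(self):
        print("Foo")

class Bar(SomeClass):
    def who_am_i(self):
        print("Bar")

use_foo = True
thing = Foo() if use_foo else Bar()
thing.who_am_i()  # prints "Foo"
thing.greet()     # prints "Hello from Foo"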

A class factory can be used here. Note the use of a dictionary (a mutable default argument acting as a cache) to make sure that the same subclass is reused for each base class.
def makeclass(baseclass, classes={}):
    # 'classes' is a mutable default argument, deliberately used as a cache
    if baseclass not in classes:
        class Class(baseclass):
            pass  # define your methods here
        classes[baseclass] = Class
    return classes[baseclass]

obj1 = makeclass(Foo)(...)
obj2 = makeclass(Bar)(...)

isinstance(obj1, makeclass(Foo))  # True
isinstance(obj1, Foo)             # True
issubclass(makeclass(Foo), Foo)   # True
issubclass(type(obj1), Foo)       # True
You could also make a dict subclass with a __missing__ method to do essentially the same thing; it makes it more explicit that you've got a container that stores classes, but creates them on demand:
class ClassDict(dict):
    def __missing__(self, baseclass):
        class Class(baseclass):
            pass  # define your methods here
        self[baseclass] = Class
        return Class

subclasses = ClassDict()
obj1 = subclasses[Foo](...)
obj2 = subclasses[Bar](...)

Judging by the lack of agreement on an answer, maybe the problem is the question itself. jonrsharpe's comment gave an interesting insight into the problem: this should not be solved via inheritance.
Consider SomeClass defined as follows:
# Class that uses Foo or Bar depending on the environment.
# Notice it subclasses neither Foo nor Bar.
class SomeClass(object):
    def __init__(self, use_foo):
        if use_foo:
            self.handler = Foo()
        else:
            self.handler = Bar()

    # Makes more sense to ask 'Who implements?' instead of 'Who am I?'
    def who_implements(self):
        return self.handler

    # Explicitly expose methods from the handler
    def some_handler_method(self, *args, **kwargs):
        return self.handler.some_handler_method(*args, **kwargs)

    def another_handler_method(self, *args, **kwargs):
        return self.handler.another_handler_method(*args, **kwargs)
Should we need details about the handler implementation, we can just get the handler attribute. Classes that subclass SomeClass won't even deal with the handler directly, which actually makes sense.
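If explicitly re-exposing every handler method becomes tedious, a possible variation (my sketch, not part of the original answer) is to forward unknown attribute lookups to the handler with __getattr__:

class SomeClass(object):
    def __init__(self, use_foo):
        self.handler = Foo() if use_foo else Bar()

    def __getattr__(self, name):
        # only invoked when normal lookup fails, so SomeClass's
        # own methods still take priority over the handler's
        return getattr(self.handler, name)

With this, some_object.who_am_i() resolves on the handler without SomeClass wrapping each method by hand, at the cost of a less explicit interface.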

One could use a __new__ method for this purpose:
_foos = {}
_bars = {}

class SomeClass(object):
    def __new__(cls, use_foo, *args, **kwargs):
        if use_foo:
            if cls not in _foos:
                class specialized(cls, Foo): pass
                _foos[cls] = specialized
            else:
                specialized = _foos[cls]
        else:
            if cls not in _bars:
                class specialized(cls, Bar): pass
                _bars[cls] = specialized
            else:
                specialized = _bars[cls]
        specialized.__name__ = cls.__name__
        # object.__new__ itself takes no extra arguments
        return object.__new__(specialized)

    # common methods to both go here
    pass
The advantage of this over a factory function is that isinstance(SomeClass(True),SomeClass) works, and that SomeClass can be subclassed.
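For illustration, using the Foo and Bar from the question:

obj = SomeClass(True)
print(isinstance(obj, SomeClass))  # True
print(isinstance(obj, Foo))        # True
print(isinstance(obj, Bar))        # False
obj.who_am_i()                     # prints "Foo"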

Related

Extending a class in Python inside a decorator

I am using a decorator to extend certain classes and add some functionality to them, something like the following:
def useful_stuff(cls):
    class LocalClass(cls):
        def better_foo(self):
            print('better foo')
    return LocalClass

@useful_stuff
class MyClass:
    def foo(self):
        print('foo')
Unfortunately, MyClass is no longer pickleable due to the non-global LocalClass:
AttributeError: Can't pickle local object 'useful_stuff.<locals>.LocalClass'
I need to pickle my classes. Can you recommend a better design?
Considering that there can be multiple decorators on a class, would switching to multiple inheritance by having MyClass inherit all the functionality be a better option?
You need to set the metadata so the subclass looks like the original:
def deco(cls):
    class SubClass(cls):
        ...
    SubClass.__name__ = cls.__name__
    SubClass.__qualname__ = cls.__qualname__
    SubClass.__module__ = cls.__module__
    return SubClass
Classes are pickled by using their module and qualname to record where to find the class. Your class needs to be found in the same location the original class would have been if it hadn't been decorated, so pickle needs to see the same module and qualname. This is similar to what functools.wraps does for decorated functions.
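A quick sanity check of the fix (my sketch; the decorated class must still be defined at module level for pickle to find it):

import pickle

@deco
class MyClass:
    def foo(self):
        print('foo')

data = pickle.dumps(MyClass())  # no AttributeError now
restored = pickle.loads(data)
restored.foo()                  # prints "foo"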
However, it would probably be simpler and less bug-prone to instead add the new methods directly to the original class instead of creating a subclass:
def better_foo(self):
    print('better_foo')

def useful_stuff(cls):
    cls.better_foo = better_foo
    return cls

Is there a way to apply a decorator to a Python method that needs information about the class?

When you decorate a method, it is not yet bound to the class, and therefore doesn't have the im_class attribute yet. I'm looking for a way to get the information about the class inside the decorator. I tried this:
import types

def decorator(method):
    def set_signal(self, name, value):
        print name
        if name == 'im_class':
            print "I got the class"
    method.__setattr__ = types.MethodType(set_signal, method)
    return method

class Test(object):
    @decorator
    def bar(self, foo):
        print foo
But it doesn't print anything.
I can imagine doing this:
class Test(object):
    @decorator(klass=Test)
    def bar(self, foo):
        print foo
But if I can avoid it, it would make my day.
__setattr__ is only called on explicit object.attribute = value assignments; building a class does not use attribute assignment but instead builds a dictionary (Test.__dict__).
To access the class you have a few different options though:
Use a class decorator instead; it'll be passed the completed class after it has been built, and you can decorate individual methods on that class by replacing them (decorated) in the class. You could use a combination of a function decorator and a class decorator to mark which methods are to be decorated:
def methoddecoratormarker(func):
    func._decorate_me = True
    return func

def realmethoddecorator(func):
    # do something with func.
    # Note: it is still an unbound function here, not a method!
    return func

def classdecorator(klass):
    for name, item in list(klass.__dict__.items()):
        if getattr(item, '_decorate_me', False):
            # use setattr: a class __dict__ is a read-only proxy
            setattr(klass, name, realmethoddecorator(item))
    return klass
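Usage would then look something like this (an illustrative sketch):

@classdecorator
class Test(object):
    @methoddecoratormarker
    def bar(self, foo):
        print(foo)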
You could use a metaclass instead of a class decorator to achieve the same, of course.
Cheat, and use sys._getframe() to retrieve the class from the calling frame:
import sys

def methoddecorator(func):
    callingframe = sys._getframe(1)
    classname = callingframe.f_code.co_name
Note that all you can retrieve is the name of the class; the class itself is still being built at this time. You can add items to callingframe.f_locals (a mapping) and they'll be made part of the new class object.
Access self whenever the method is called. self is a reference to the instance after all, and self.__class__ is going to be, at the very least, a sub-class of the original class the function was defined in.
My strict answer would be: It's not possible, because the class does not yet exist when the decorator is executed.
The longer answer would depend on your exact requirements. As I wrote, you cannot access the class if it does not yet exist. One solution would be to mark the decorated method to be "transformed" later, then use a metaclass or class decorator to apply your modifications after the class has been created.
Another option involves some magic. Look at the implementation of the implements method in zope.interfaces. It has some access to information about the class that is currently being parsed. I don't know if it will be enough for your use case.
You might want to take a look at descriptors. They let you implement a __get__ that is used when an attribute is accessed, and can return different things depending on the object and its type.
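For example, a minimal descriptor (my sketch, not from the original answer) that sees both the instance and the class at attribute-access time:

class ClassAwareMethod(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        # 'owner' is the class the attribute was looked up on
        print("accessed via class %s" % owner.__name__)
        return self.func.__get__(instance, owner)

class Test(object):
    @ClassAwareMethod
    def bar(self, foo):
        print(foo)

Test().bar("hi")  # prints "accessed via class Test", then "hi"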
Use method decorators to add some marker attributes to the interesting methods, and use a metaclass which iterates over the methods, finds the marker attributes, and does the logic. The metaclass code is run when the class is created, so it has a reference to the newly created class.
class MyMeta(type):
    def __new__(meta, name, bases, dct):
        cls = super(MyMeta, meta).__new__(meta, name, bases, dct)
        # iterate over dir(cls), find methods having .is_decorated, act on them
        for attr in dir(cls):
            if getattr(getattr(cls, attr, None), 'is_decorated', False):
                pass  # act on the decorated method here
        return cls

def decorator(f):
    f.is_decorated = True
    return f

class MyBase(object):
    __metaclass__ = MyMeta

class MyClass(MyBase):
    @decorator
    def bar(self, foo):
        print foo
If you worry that the programmer of MyClass may forget to use MyBase, you can forcibly set the metaclass in decorator by examining the globals dictionary of the caller's stack frame (sys._getframe()).

How to dynamically change base class of instances at runtime?

This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of classes from which it inherits. Ok, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question, is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ (Method Resolution Order) attribute of the class which defines that object. Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a read-only attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person
    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()          # works! But:
    oldperson.hello()  # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it down to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server, where different applications needed slightly different variations of some of the code).
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()                 # -> Creating instance of A
B()                 # -> Creating instance of B
C.__bases__ = (A,)
C()                 # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I can't vouch for the consequences, but this code does what you want on py2.7.2:
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done, then apparently there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute of new-style classes.
You can define a class Object:
class Object(object): pass
which is a trivial subclass of the built-in object. That's it; now your new-style classes can derive from Object instead of object and modify their __bases__ without any problem.
In my tests this actually worked very well: all existing (pre-change) instances of it and its derived classes felt the effect of the change, including their mro getting updated.
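In code, the hack described above looks like this (a sketch; the same caveats about fiddling with __bases__ apply):

class Object(object): pass

class Friendly(Object):
    def hello(self):
        print('Hello')

class Person(Object): pass

p = Person()                   # created before the hierarchy change
Person.__bases__ = (Friendly,)
p.hello()                      # prints "Hello" -- even the old instance sees it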
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct '__dict__' attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
    namespace=globals(),
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
    namespace=foo.__dict__,
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
            foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
            self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.

Python: thinking of a module and its variables as a singleton — Clean approach?

I'd like to implement some sort of singleton pattern in my Python program. I was thinking of doing it without using classes; that is, I'd like to put all the singleton-related functions and variables within a module and consider it an actual singleton.
For example, say this is to be in the file 'singleton_module.py':
# singleton_module.py

# Singleton-related variables
foo = 'blah'
bar = 'stuff'

# Functions that process the above variables
def work(some_parameter):
    global foo, bar
    if some_parameter:
        bar = ...
    else:
        foo = ...
Then, the rest of the program (i.e., other modules) would use this singleton like so:
# another_module.py

import singleton_module

# process the singleton variables,
# which changes them across the entire program
singleton_module.work(...)

# freely access the singleton variables
# (at least for reading)
print singleton_module.foo
This seemed to be a pretty good idea to me, because it looks pretty clean in the modules that use the singleton.
However, all these tedious 'global' statements in the singleton module are ugly. They occur in every function that processes the singleton-related variables. That's not much in this particular example, but when you have 10+ variables to manage across several functions, it's not pretty.
Also, this is pretty error-prone if you happen to forget the global statements: variables local to the function will be created, and the module's variables won't be changed, which is not what you want!
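For example:

def broken_work():
    foo = 'new value'  # oops: forgot 'global foo', so this just creates
                       # a local variable; the module-level foo is untouched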
So, would this be considered to be clean? Is there an approach similar to mine that manages to do away with the 'global' mess?
Or is this simply not the way to go?
A common alternative to using a module as a singleton is Alex Martelli's Borg pattern:
class Borg:
    __shared_state = {}
    def __init__(self):
        self.__dict__ = self.__shared_state
    # and whatever else you want in your class -- that's all!
There can be multiple instances of this class, but they all share the same state.
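For example:

a = Borg()
b = Borg()
a.foo = 'blah'
print(b.foo)   # 'blah' -- the attribute state is shared
print(a is b)  # False  -- but they are still distinct instances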
Maybe you can put all the variables in a global dict, and then use the dict in your functions directly, without "global":
# Singleton-related variables
my_globals = {'foo': 'blah', 'bar': 'stuff'}

# Functions that process the above variables
def work(some_parameter):
    if some_parameter:
        my_globals['bar'] = ...
    else:
        my_globals['foo'] = ...
The reason you can do it like this is explained by Python's scoping rules (see the Python Scopes and Namespaces documentation): only assignment to a name creates a local variable; mutating a global object does not.
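A quick illustration of that distinction:

my_globals = {'foo': 'blah'}

def mutate():
    my_globals['foo'] = 'changed'    # mutates the dict in place: no 'global' needed

def rebind():
    my_globals = {'foo': 'changed'}  # creates a new local name instead;
                                     # rebinding would require 'global my_globals'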
One approach to implementing a singleton pattern with Python is to have the singleton's __init__() method raise an exception if an instance of the class already exists. More precisely, the class has a member __single; if this member is different from None, the exception is raised:
class Singleton:
    __single = None
    def __init__(self):
        if Singleton.__single:
            raise Singleton.__single
        Singleton.__single = self
It could be argued that handling the singleton instance creation with exceptions is not very clean either. We may hide the implementation details with a method Handle(), as in:
def Handle(x=Singleton):
    try:
        single = x()
    except Singleton, s:
        single = s
    return single
This Handle() method is very similar to what a C++ implementation of the Singleton pattern would be. In C++, the Singleton class could have the Handle():
Singleton& Singleton::Handle() {
    if( !psingle ) {
        psingle = new Singleton;
    }
    return *psingle;
}
returning either a new Singleton instance or a reference to the existing unique instance of class Singleton.
Handling the whole hierarchy
If classes Single1 and Single2 derive from Singleton, only a single instance of Singleton exists through either derived class. This can be verified with:
>>> child = S2( 'singlething' )
>>> junior = Handle( S1)
>>> junior.name()
'singlething'
Similar to Sven's "Borg pattern" suggestion, you could just keep all your state data in a class, without creating any instances of the class. This method utilizes new-style classes, I believe.
This method could even be adapted into the Borg pattern, with the caveat that modifying the state members from the instances of the class would require accessing the __class__ attribute of the instance (instance.__class__.foo = 'z' rather than instance.foo = 'z', though you could also just do stateclass.foo = 'z').
class State:  # in some versions of Python, may need to be "class State():" or "class State(object):"
    __slots__ = []  # prevents additional attributes from being added to instances and same-named attributes from shadowing the class's attributes
    foo = 'x'
    bar = 'y'

    @classmethod
    def work(cls, spam):
        print(cls.foo, spam, cls.bar)
Note that modifications to the class's attributes will be reflected in instances of the class even after instantiation. This includes adding new attributes and removing existing ones, which could have some interesting, possibly useful effects (though I can also see how that might actually cause problems in some cases). Try it out yourself.
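For example (illustrative, using the State class above):

State.work('spam')  # x spam y
State.foo = 'z'     # rebind the class attribute...
s = State()
s.work('spam')      # z spam y -- the instance sees the change immediately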
Building off of WillYang's answer and taking it a step further for cleanliness: define a simple class to hold your global dictionary to make it easier to reference:
class struct(dict):
    def __init__(self, **kwargs):
        dict.__init__(self, kwargs)
        self.__dict__ = self

g = struct(var1=None, var2=None)

def func():
    g.var1 = dict()
    g.var3 = 10
    g["var4"] = [1, 2]
    print(g["var3"])
    print(g.var4)
Just like before, you can put anything you want in g, but now it's super clean. :)
For a legitimate Singleton:
class SingletonMeta(type):
    __classes = {}  # protect against defining a class with the same name

    def __new__(cls, cls_name, cls_ancestors, cls_dict):
        if cls_name in cls.__classes:
            return cls.__classes[cls_name]
        # pass 'type' instead of 'cls' if you don't want SingletonMeta's
        # attributes reflected in the class
        type_instance = super(SingletonMeta, cls).__new__(
            cls, cls_name, cls_ancestors, cls_dict)
        instance = type_instance()          # call __init__
        cls.__classes[cls_name] = instance  # cache it so the check above fires
        return instance

class Singleton:
    __metaclass__ = SingletonMeta

    # define __init__ however you want

    def __call__(self, *args, **kwargs):
        print 'hi!'
To see that it truly is a singleton, try to instantiate this class, or any class that inherits from it.
singleton = Singleton() # prints "hi!"

In Python can one implement mixin behavior without using inheritance?

Is there a reasonable way in Python to implement mixin behavior similar to that found in Ruby -- that is, without using inheritance?
class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Foo(object):
    # Somehow mix in the behavior of the Mixin class,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
I had a vague idea to do this with a class decorator, but my attempts led to confusion. Most of my searches on the topic have led in the direction of using inheritance (or in more complex scenarios, multiple inheritance) to achieve mixin behavior.
def mixer(*args):
    """Decorator for mixing mixins"""
    def inner(cls):
        for a, k in ((a, k) for a in args for k, v in vars(a).items() if callable(v)):
            setattr(cls, k, getattr(a, k).im_func)
        return cls
    return inner

class Mixin(object):
    def b(self): print "b()"
    def c(self): print "c()"

class Mixin2(object):
    def d(self): print "d()"
    def e(self): print "e()"

@mixer(Mixin, Mixin2)
class Foo(object):
    # Somehow mix in the behavior of the Mixin class,
    # so that all of the methods below will run and
    # the issubclass() test will be False.
    def a(self): print "a()"

f = Foo()
f.a()
f.b()
f.c()
f.d()
f.e()
print issubclass(Foo, Mixin)
output:
a()
b()
c()
d()
e()
False
You can add the methods as functions:
Foo.b = Mixin.b.im_func
Foo.c = Mixin.c.im_func
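(On Python 3, where unbound methods are gone, plain attribute access already gives you the underlying function, so the im_func lookup is unnecessary:)

Foo.b = Mixin.b
Foo.c = Mixin.c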
I am not that familiar with Python, but from what I know about Python metaprogramming, you could actually do it pretty much the same way it is done in Ruby.
In Ruby, a module basically consists of two things: a pointer to a method dictionary and a pointer to a constant dictionary. A class consists of three things: a pointer to a method dictionary, a pointer to a constant dictionary and a pointer to the superclass.
When you mix in a module M into a class C, the following happens:
an anonymous class α is created (this is called an include class)
α's method dictionary and constant dictionary pointers are set equal to M's
α's superclass pointer is set equal to C's
C's superclass pointer is set to α
In other words: a fake class which shares its behavior with the mixin is injected into the inheritance hierarchy. So, Ruby actually does use inheritance for mixin composition.
I left out a couple of subtleties above: first off, the module doesn't actually get inserted as C's superclass; it gets inserted as the superclass of C's singleton class. And secondly, if the mixin itself has mixed in other mixins, then those also get wrapped into fake classes which get inserted directly above α, and this process is applied recursively, in case the mixed-in mixins in turn have mixins.
Basically, the whole mixin hierarchy gets flattened into a straight line and spliced into the inheritance chain.
AFAIK, Python actually allows you to change a class's superclass(es) after the fact (something which Ruby does not allow you to do), and it also gives you access to a class's dict (again, something that is impossible in Ruby), so you should be able to implement this yourself.
EDIT: Fixed what could (and probably should) be construed as a bug. Now it builds a new dict and then updates that from the class's dict. This prevents mixins from overwriting methods that are defined directly on the class. The code is still untested but should work. I'm busy ATM, so I'll test it later. It worked fine except for a syntax error. In retrospect, I decided that I don't like it (even after my further improvements) and much prefer my other solution, even if it is more complicated. The test code for that one applies here as well, but I won't duplicate it.
You could use a metaclass factory:
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}

    for mixin in reversed(inspect.getmro(Dummy)):
        d.update(mixin.__dict__)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            d.update(classdict)
            return super(WithMixins, meta).__new__(meta, classname, bases, d)

    return WithMixins
then use it like:
class Foo(object):
    __metaclass__ = add_mixins(Mixin1, Mixin2)
    # rest of the stuff
This one is based on the way it's done in ruby as explained by Jörg W Mittag. All of the wall of code after if __name__=='__main__' is test/demo code. There's actually only 13 lines of real code to it.
import inspect

def add_mixins(*mixins):
    Dummy = type('Dummy', mixins, {})
    d = {}

    # Now get all the class attributes. Use reversed so that conflicts
    # are resolved with the proper priority. This rules out the possibility
    # of the mixins calling methods from their base classes that get overridden
    # using super but is necessary for the subclass check to fail. If that wasn't a
    # requirement, we would just use Dummy above (or use MI directly and
    # forget all the metaclass stuff).
    for base in reversed(inspect.getmro(Dummy)):
        d.update(base.__dict__)

    # Create the mixin class. This should be equivalent to creating the
    # anonymous class in Ruby.
    Mixin = type('Mixin', (object,), d)

    class WithMixins(type):
        def __new__(meta, classname, bases, classdict):
            # The check below prevents an inheritance cycle from forming which
            # leads to a TypeError when trying to inherit from the resulting
            # class.
            if not any(issubclass(base, Mixin) for base in bases):
                # This should be the equivalent of setting the superclass
                # pointers in Ruby.
                bases = (Mixin,) + bases
            return super(WithMixins, meta).__new__(meta, classname, bases,
                                                   classdict)

    return WithMixins
if __name__ == '__main__':

    class Mixin1(object):
        def b(self): print "b()"
        def c(self): print "c()"

    class Mixin2(object):
        def d(self): print "d()"
        def e(self): print "e()"

    class Mixin3Base(object):
        def f(self): print "f()"

    class Mixin3(Mixin3Base): pass

    class Foo(object):
        __metaclass__ = add_mixins(Mixin1, Mixin2, Mixin3)
        def a(self): print "a()"

    class Bar(Foo):
        def f(self): print "Bar.f()"

    def test_class(cls):
        print "Testing {0}".format(cls.__name__)
        f = cls()
        f.a()
        f.b()
        f.c()
        f.d()
        f.e()
        f.f()
        print (issubclass(cls, Mixin1) or
               issubclass(cls, Mixin2) or
               issubclass(cls, Mixin3))

    test_class(Foo)
    test_class(Bar)
You could decorate the class's __getattr__ to check the mixin. The problem is that all methods of the mixin would always require an object of the mixin's type as their first parameter, so you would have to decorate __init__ as well to create a mixin object. I believe you could achieve this using a class decorator.
from functools import partial

class Mixin(object):
    @staticmethod
    def b(self): print "b()"
    @staticmethod
    def c(self): print "c()"

class Foo(object):
    def __init__(self, mixin_cls):
        self.delegate_cls = mixin_cls

    def __getattr__(self, attr):
        if hasattr(self.delegate_cls, attr):
            return partial(getattr(self.delegate_cls, attr), self)

    def a(self): print "a()"

f = Foo(Mixin)
f.a()
f.b()
f.c()
print issubclass(Foo, Mixin)
This basically uses the Mixin class as a container to hold ad-hoc functions (not methods) that behave like methods by taking an object instance (self) as the first argument. __getattr__ will redirect missing calls to these method-like functions.
This passes your simple tests as shown below. But I cannot guarantee it will do all the things you want. Make more thorough tests to make sure.
$ python mixin.py
a()
b()
c()
False
Composition? It seems like that would be the simplest way to handle this: either wrap your object in a decorator or just import the methods as an object into your class definition itself. This is what I usually do: put the methods that I want to share between classes in a file and then import the file. If I want to override some behavior I import a modified file with the same method names as the same object name. It's a little sloppy, but it works.
For example, if I want the init_covers behavior from this file (bedg.py)
import cove as cov

def init_covers(n):
    n.covers.append(cov.Cover((set([n.id]))))
    id_list = []
    for a in n.neighbors:
        id_list.append(a.id)
    n.covers.append(cov.Cover((set(id_list))))

def update_degree(n):
    for a in n.covers:
        a.degree = 0
        for b in n.covers:
            if a != b:
                a.degree += len(a.node_list.intersection(b.node_list))
In my bar class file I would do: import bedg as foo
and then if I want to change my foo behaviors in another class that inherited bar, I write
import bild as foo
Like I say, it is sloppy.
