I'm trying to intercept calls to Python's double-underscore magic methods in new-style classes. This is a trivial example, but it shows the intent:
class ShowMeList(object):
def __init__(self, it):
self._data = list(it)
def __getattr__(self, name):
attr = object.__getattribute__(self._data, name)
if callable(attr):
def wrapper(*a, **kw):
print "before the call"
result = attr(*a, **kw)
print "after the call"
return result
return wrapper
return attr
If I use that proxy object around list I get the expected behavior for non-magic methods but my wrapper function is never called for magic methods.
>>> l = ShowMeList(range(8))
>>> l #call to __repr__
<__main__.ShowMeList object at 0x9640eac>
>>> l.append(9)
before the call
after the call
>>> len(l._data)
9
If I don't inherit from object (first line class ShowMeList:) everything works as expected:
>>> l = ShowMeList(range(8))
>>> l #call to __repr__
before the call
after the call
[0, 1, 2, 3, 4, 5, 6, 7]
>>> l.append(9)
before the call
after the call
>>> len(l._data)
9
How do I accomplish this intercept with new style classes?
For performance reasons, Python always looks in the class (and parent classes') __dict__ for magic methods and does not use the normal attribute lookup mechanism. A workaround is to use a metaclass to automatically add proxies for magic methods at the time of class creation; I've used this technique to avoid having to write boilerplate call-through methods for wrapper classes, for example.
class Wrapper(object):
"""Wrapper class that provides proxy access to some internal instance."""
__wraps__ = None
__ignore__ = "class mro new init setattr getattr getattribute"
def __init__(self, obj):
if self.__wraps__ is None:
raise TypeError("base class Wrapper may not be instantiated")
elif isinstance(obj, self.__wraps__):
self._obj = obj
else:
raise ValueError("wrapped object must be of %s" % self.__wraps__)
# provide proxy access to regular attributes of wrapped object
def __getattr__(self, name):
return getattr(self._obj, name)
# create proxies for wrapped object's double-underscore attributes
class __metaclass__(type):
def __init__(cls, name, bases, dct):
def make_proxy(name):
def proxy(self, *args):
return getattr(self._obj, name)
return proxy
type.__init__(cls, name, bases, dct)
if cls.__wraps__:
ignore = set("__%s__" % n for n in cls.__ignore__.split())
for name in dir(cls.__wraps__):
if name.startswith("__"):
if name not in ignore and name not in dct:
setattr(cls, name, property(make_proxy(name)))
Usage:
class DictWrapper(Wrapper):
__wraps__ = dict
wrapped_dict = DictWrapper(dict(a=1, b=2, c=3))
# make sure it worked....
assert "b" in wrapped_dict # __contains__
assert wrapped_dict == dict(a=1, b=2, c=3) # __eq__
assert "'a': 1" in str(wrapped_dict) # __str__
assert wrapped_dict.__doc__.startswith("dict()") # __doc__
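With the metaclass proxies in place, implicit special-method invocations also reach the wrapped dict. A few more checks along the same lines (my own additions to the example above):
# implicit invocations now resolve through the metaclass-generated proxies
assert len(wrapped_dict) == 3                     # __len__
assert sorted(wrapped_dict) == ["a", "b", "c"]    # __iter__
assert wrapped_dict["a"] == 1                     # __getitem__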
Using __getattr__ and __getattribute__ is a class's last resort for responding to an attribute lookup.
Consider the following:
>>> class C:
...     x = 1
...     def __init__(self):
...         self.y = 2
...     def __getattr__(self, attr):
...         print(attr)
...
>>> c = C()
>>> c.x
1
>>> c.y
2
>>> c.z
z
The __getattr__ method is only called when nothing else works (it will not work on operators; you can read about that here).
In your example, __repr__ and many other magic methods are already defined in the object class.
One thing that can be done, though, is to define those magic methods and make them call the __getattr__ method. Check this other question of mine and its answers (link) to see some code doing that.
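For instance, a minimal sketch of that idea (my own illustration, not the code from the linked question): define the special method explicitly on the class and route it through __getattr__, so the wrapping logic still runs:
class ShowMeList(object):
    def __init__(self, it):
        self._data = list(it)
    def __getattr__(self, name):
        attr = getattr(self._data, name)
        if callable(attr):
            def wrapper(*a, **kw):
                print("before the call")
                result = attr(*a, **kw)
                print("after the call")
                return result
            return wrapper
        return attr
    # defined on the class, so new-style lookup finds it;
    # it delegates to __getattr__ to pick up the wrapping
    def __repr__(self):
        return self.__getattr__('__repr__')()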
As per the answers to Asymmetric behavior for __getattr__, newstyle vs oldstyle classes (see also the Python docs), modifying access to "magic" methods with __getattr__ or __getattribute__ is simply not possible with new-style classes. This restriction makes the interpreter much faster.
Quoting from the documentation:
For old-style classes, special methods are always looked up in exactly the same way as any other method or attribute.
For new-style classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary.
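A quick demonstration of that second rule (my own example; the memory address is elided):
>>> class C(object):
...     pass
...
>>> c = C()
>>> c.__repr__ = lambda: "instance repr"
>>> repr(c)        # implicit invocation ignores the instance dictionary
'<__main__.C object at 0x...>'
>>> c.__repr__()   # explicit lookup still finds it
'instance repr'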
Related
I'd like to do something like this:
class X:
@classmethod
def id(cls):
return cls.__name__
def id(self):
return self.__class__.__name__
And now call id() for either the class or an instance of it:
>>> X.id()
'X'
>>> X().id()
'X'
Obviously, this exact code doesn't work, but is there a similar way to make it work?
Or any other workarounds to get such behavior without too much "hacky" stuff?
Class and instance methods live in the same namespace and you cannot reuse names like that; the last definition of id will win in that case.
The class method will continue to work on instances however, there is no need to create a separate instance method; just use:
class X:
@classmethod
def id(cls):
return cls.__name__
because the method continues to be bound to the class:
>>> class X:
... @classmethod
... def id(cls):
... return cls.__name__
...
>>> X.id()
'X'
>>> X().id()
'X'
This is explicitly documented:
It can be called either on the class (such as C.f()) or on an instance (such as C().f()). The instance is ignored except for its class.
If you do need to distinguish between binding to the class and an instance
If you need a method to work differently depending on where it is used (bound to the class when accessed on the class, bound to the instance when accessed on the instance), you'll need to create a custom descriptor object.
The descriptor API is how Python causes functions to be bound as methods, and bind classmethod objects to the class; see the descriptor howto.
You can provide your own descriptor for methods by creating an object that has a __get__ method. Here is a simple one that switches what the method is bound to based on context, if the first argument to __get__ is None, then the descriptor is being bound to a class, otherwise it is being bound to an instance:
class class_or_instancemethod(classmethod):
def __get__(self, instance, type_):
descr_get = super().__get__ if instance is None else self.__func__.__get__
return descr_get(instance, type_)
This re-uses classmethod and only re-defines how it handles binding, delegating the original implementation for instance is None, and to the standard function __get__ implementation otherwise.
Note that in the method itself, you may then have to test what it is bound to; isinstance(firstargument, type) is a good test for this:
>>> class X:
... @class_or_instancemethod
... def foo(self_or_cls):
... if isinstance(self_or_cls, type):
... return f"bound to the class, {self_or_cls}"
... else:
... return f"bound to the instance, {self_or_cls}"
...
>>> X.foo()
"bound to the class, <class '__main__.X'>"
>>> X().foo()
'bound to the instance, <__main__.X object at 0x10ac7d580>'
An alternative implementation could use two functions, one for when bound to a class, the other when bound to an instance:
class hybridmethod:
def __init__(self, fclass, finstance=None, doc=None):
self.fclass = fclass
self.finstance = finstance
self.__doc__ = doc or fclass.__doc__
# support use on abstract base classes
self.__isabstractmethod__ = bool(
getattr(fclass, '__isabstractmethod__', False)
)
def classmethod(self, fclass):
return type(self)(fclass, self.finstance, None)
def instancemethod(self, finstance):
return type(self)(self.fclass, finstance, self.__doc__)
def __get__(self, instance, cls):
if instance is None or self.finstance is None:
# either bound to the class, or no instance method available
return self.fclass.__get__(cls, None)
return self.finstance.__get__(instance, cls)
This then is a classmethod with an optional instance method. Use it like you'd use a property object: decorate the instance method with @<name>.instancemethod:
>>> class X:
... @hybridmethod
... def bar(cls):
... return f"bound to the class, {cls}"
... @bar.instancemethod
... def bar(self):
... return f"bound to the instance, {self}"
...
>>> X.bar()
"bound to the class, <class '__main__.X'>"
>>> X().bar()
'bound to the instance, <__main__.X object at 0x10a010f70>'
Personally, my advice is to be cautious about using this; the exact same method altering behaviour based on the context can be confusing to use. However, there are use-cases for this, such as SQLAlchemy's differentiation between SQL objects and SQL values, where column objects in a model switch behaviour like this; see their Hybrid Attributes documentation. The implementation for this follows the exact same pattern as my hybridmethod class above.
I have no idea what your actual use case is, but you can do something like this using a descriptor:
class Desc(object):
def __get__(self, ins, typ):
if ins is None:
print 'Called by a class.'
return lambda : typ.__name__
else:
print 'Called by an instance.'
return lambda : ins.__class__.__name__
class X(object):
id = Desc()
x = X()
print x.id()
print X.id()
Output
Called by an instance.
X
Called by a class.
X
It can be done, quite succinctly, by binding the instance-bound version of your method explicitly to the instance (rather than to the class). Python will invoke the instance attribute found in Class().__dict__ when Class().foo() is called (because it searches the instance's __dict__ before the class'), and the class-bound method found in Class.__dict__ when Class.foo() is called.
This has a number of potential use cases, though whether they are anti-patterns is open for debate:
class Test:
def __init__(self):
self.check = self.__check
@staticmethod
def check():
print('Called as class')
def __check(self):
print('Called as instance, probably')
>>> Test.check()
Called as class
>>> Test().check()
Called as instance, probably
Or... let's say we want to be able to abuse stuff like map():
class Str(str):
def __init__(self, *args):
self.split = self.__split
@staticmethod
def split(sep=None, maxsplit=-1):
return lambda string: string.split(sep, maxsplit)
def __split(self, sep=None, maxsplit=-1):
return super().split(sep, maxsplit)
>>> s = Str('w-o-w')
>>> s.split('-')
['w', 'o', 'w']
>>> Str.split('-')(s)
['w', 'o', 'w']
>>> list(map(Str.split('-'), [s]*3))
[['w', 'o', 'w'], ['w', 'o', 'w'], ['w', 'o', 'w']]
"types" provides something quite interesting since Python 3.4: DynamicClassAttribute
It is not doing 100% of what you had in mind, but it seems to be closely related, and you might need to tweak a bit my metaclass but, rougly, you can have this;
from types import DynamicClassAttribute
class XMeta(type):
def __getattr__(self, value):
if value == 'id':
return XMeta.id  # You may want to change that line a bit.
@property
def id(self):
return "Class {}".format(self.__name__)
That would define your class attribute. For the instance attribute:
class X(metaclass=XMeta):
@DynamicClassAttribute
def id(self):
return "Instance {}".format(self.__class__.__name__)
It might be a bit of an overkill, especially if you want to stay away from metaclasses. It's a trick I'd like to explore on my side, so I just wanted to share this hidden jewel, in case you can polish it and make it shine!
>>> X().id
'Instance X'
>>> X.id
'Class X'
Voila...
In your example, you could simply delete the second method entirely, since both the class method and the instance method do the same thing.
If you wanted them to do different things:
class X:
    def id(self=None):
        if self is None:
            # It's being called as a static method
            pass
        else:
            # It's being called as an instance method
            pass
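A filled-in version of that sketch (Python 3; the bodies are hypothetical placeholders of my own) reproduces the behaviour the question asked for:
class X:
    def id(self=None):
        if self is None:
            # called via the class: no instance was passed
            return X.__name__
        # called via an instance
        return self.__class__.__name__

print(X.id())    # prints: X
print(X().id())  # prints: X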
(Python 3 only) Elaborating on the idea of a pure-Python implementation of @classmethod, we can declare a @class_or_instance_method as a decorator, which is actually a class implementing the attribute descriptor protocol:
import inspect
class class_or_instance_method(object):
def __init__(self, f):
self.f = f
def __get__(self, instance, owner):
if instance is not None:
class_or_instance = instance
else:
class_or_instance = owner
def newfunc(*args, **kwargs):
return self.f(class_or_instance, *args, **kwargs)
return newfunc
class A:
@class_or_instance_method
def foo(self_or_cls, a, b, c=None):
if inspect.isclass(self_or_cls):
print("Called as a class method")
else:
print("Called as an instance method")
It bugs me that the default __repr__() for a class is so uninformative:
>>> class Opaque(object): pass
...
>>> Opaque()
<__main__.Opaque object at 0x7f3ac50eba90>
... so I've been thinking about how to improve it. After a little consideration, I came up with this abstract base class which leverages the pickle protocol's __getnewargs__() method:
from abc import abstractmethod
class Repro(object):
"""Abstract base class for objects with informative ``repr()`` behaviour."""
@abstractmethod
def __getnewargs__(self):
raise NotImplementedError
def __repr__(self):
signature = ", ".join(repr(arg) for arg in self.__getnewargs__())
return "%s(%s)" % (self.__class__.__name__, signature)
Here's a trivial example of its usage:
class Transparent(Repro):
"""An example of a ``Repro`` subclass."""
def __init__(self, *args):
self.args = args
def __getnewargs__(self):
return self.args
... and the resulting repr() behaviour:
>>> Transparent("an absurd signature", [1, 2, 3], str)
Transparent('an absurd signature', [1, 2, 3], <type 'str'>)
>>>
Now, I can see one reason Python doesn't do this by default straight away - requiring every class to define a __getnewargs__() method would be more burdensome than expecting (but not requiring) that it defines a __repr__() method.
What I'd like to know is: how dangerous and/or fragile is it? Off-hand, I can't think of anything that could go terribly wrong, except that if a Repro instance contained itself, you'd get infinite recursion ... but that's solvable, at the cost of making the code above uglier.
What else have I missed?
If you're into this sort of thing, why not have the arguments automatically picked up from __init__ by using a decorator? Then you don't need to burden the user with manually handling them, and you can transparently handle normal method signatures with multiple arguments. Here's a quick version I came up with:
def deco(f):
def newFunc(self, *args, **kwargs):
self._args = args
self._kwargs = kwargs
f(self, *args, **kwargs)
return newFunc
class AutoRepr(object):
def __repr__(self):
args = ', '.join(repr(arg) for arg in self._args)
kwargs = ', '.join('{0}={1}'.format(k, repr(v)) for k, v in self._kwargs.iteritems())
allArgs = ', '.join([args, kwargs]).strip(', ')
return '{0}({1})'.format(self.__class__.__name__, allArgs)
Now you can define subclasses of AutoRepr normally, with normal __init__ signatures:
class Thingy(AutoRepr):
@deco
def __init__(self, foo, bar=88):
self.foo = foo
self.bar = bar
And the __repr__ automatically works:
>>> Thingy(1, 2)
Thingy(1, 2)
>>> Thingy(10)
Thingy(10)
>>> Thingy(1, bar=2)
Thingy(1, bar=2)
>>> Thingy(bar=1, foo=2)
Thingy(foo=2, bar=1)
>>> Thingy([1, 2, 3], "Some junk")
Thingy([1, 2, 3], 'Some junk')
Putting @deco on your __init__ is much easier than writing a whole __getnewargs__. And if you don't even want to have to do that, you could write a metaclass that automatically decorates the __init__ method in this way.
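A minimal sketch of such a metaclass (my own, untested; Python 2 syntax to match the answer, and AutoReprMeta is a hypothetical name):
class AutoReprMeta(type):
    def __new__(mcls, name, bases, namespace):
        # wrap any __init__ defined on the class with the deco decorator above
        if '__init__' in namespace:
            namespace['__init__'] = deco(namespace['__init__'])
        return super(AutoReprMeta, mcls).__new__(mcls, name, bases, namespace)

class Thingy(AutoRepr):
    __metaclass__ = AutoReprMeta
    def __init__(self, foo, bar=88):
        self.foo = foo
        self.bar = bar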
One problem with this whole idea is that there can be some kinds of objects whose state is not fully dependent on the arguments given to their constructor. For a trivial case, consider a class with random state:
import random
class A(object):
def __init__(self):
self.state = random.random()
There's no way for this class to correctly implement __getnewargs__, and so your implementation of __repr__ is also impossible. It may be that a class like the one above is not well designed. But pickle can handle it with no problems (I assume using the __reduce__ method inherited from object, but my pickle-fu is not enough to say so with certainty).
This is why it is nice that __repr__ can be coded to do whatever you want. If you want the internal state to be visible, you can make your class's __repr__ do that. If the object should be opaque, you can do that too. For the class above, I'd probably implement __repr__ like this:
def __repr__(self):
return "<A object with state=%f>" % self.state
I am using the python mock framework for testing (http://www.voidspace.org.uk/python/mock/) and I want to mock out a superclass and focus on testing the subclasses' added behavior.
(For those interested I have extended pymongo.collection.Collection and I want to only test my added behavior. I do not want to have to run mongodb as another process for testing purposes.)
For this discussion, A is the superclass and B is the subclass. Furthermore, I define direct and indirect superclass calls as shown below:
class A(object):
def method(self):
...
def another_method(self):
...
class B(A):
def direct_superclass_call(self):
...
A.method(self)
def indirect_superclass_call(self):
...
super(B, self).another_method()
Approach #1
Define a mock class for A called MockA and use mock.patch to substitute it for the test at runtime. This handles direct superclass calls. Then manipulate B.__bases__ to handle indirect superclass calls. (see below)
The issue that arises is that I have to write MockA and in some cases (as in the case for pymongo.collection.Collection) this can involve a lot of work to unravel all of the internal calls to mock out.
Approach #2
The desired approach is to somehow use a mock.Mock() class to handle calls on the mock just in time, as well as define return_value or side_effect in place in the test. In this manner, I have to do less work by avoiding the definition of MockA.
The issue that I am having is that I cannot figure out how to alter B.__bases__ so that an instance of mock.Mock() can be put in place as a superclass (I must need to somehow do some direct binding here). Thus far I have determined that super() examines the MRO and then calls the first class that defines the method in question. I cannot figure out how to get a superclass to handle the check and succeed if it comes across a mock class. __getattr__ does not seem to be used in this case. I want super() to think that the method is defined at this point and then use the mock.Mock() functionality as usual.
How does super() discover what attributes are defined within the class in the MRO sequence? And is there a way for me to interject here and to somehow get it to utilize a mock.Mock() on the fly?
import mock
class A(object):
def __init__(self, value):
self.value = value
def get_value_direct(self):
return self.value
def get_value_indirect(self):
return self.value
class B(A):
def __init__(self, value):
A.__init__(self, value)
def get_value_direct(self):
return A.get_value_direct(self)
def get_value_indirect(self):
return super(B, self).get_value_indirect()
# approach 1 - use a defined MockA
class MockA(object):
def __init__(self, value):
pass
def get_value_direct(self):
return 0
def get_value_indirect(self):
return 0
B.__bases__ = (MockA, ) # - mock superclass
with mock.patch('__main__.A', MockA):
b2 = B(7)
print '\nApproach 1'
print 'expected result = 0'
print 'direct =', b2.get_value_direct()
print 'indirect =', b2.get_value_indirect()
B.__bases__ = (A, ) # - original superclass
# approach 2 - use mock module to mock out superclass
# what does XXX need to be below to use mock.Mock()?
#B.__bases__ = (XXX, )
with mock.patch('__main__.A') as mymock:
b3 = B(7)
mymock.get_value_direct.return_value = 0
mymock.get_value_indirect.return_value = 0
print '\nApproach 2'
print 'expected result = 0'
print 'direct =', b3.get_value_direct()
print 'indirect =', b3.get_value_indirect() # FAILS HERE as the old superclass is called
#B.__bases__ = (A, ) # - original superclass
is there a way for me to interject here and to somehow get it to utilize a mock.Mock() on the fly?
There may be better approaches, but you can always write your own super() and inject it into the module that contains the class you're mocking. Have it return whatever it should based on what's calling it.
You can either just define super() in the current namespace (in which case the redefinition only applies to the current module after the definition), or you can import __builtin__ and apply the redefinition to __builtin__.super, in which case it will apply globally in the Python session.
You can capture the original super function (if you need to call it from your implementation) using a default argument:
def super(type, obj=None, super=super):
    # inside the function, `super` refers to the built-in
    pass  # your replacement logic goes here
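A hedged sketch of how that could look for the mocking problem (Python 2, reusing the question's class B; fake_super and the MagicMock stand-in are my own names, not part of the mock API):
import __builtin__
import mock

def fake_super(typ, obj=None, _super=super):
    # route B's superclass lookups to a mock; fall back to the real super
    if typ is B:
        return mock.MagicMock()
    return _super(typ, obj) if obj is not None else _super(typ)

__builtin__.super = fake_super  # applies globally in this Python session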
I played around with mocking out super() as suggested by kindall. Unfortunately, after a great deal of effort it became quite complicated to handle complex inheritance cases.
After some work I realized that super() accesses the __dict__ of classes directly when resolving attributes through the MRO (it does not do a getattr type of call). The solution is to extend a mock.MagicMock() object and wrap it with a class to accomplish this. The wrapped class can then be placed in the __bases__ variable of a subclass.
The wrapped object reflects all defined attributes of the target class to the __dict__ of the wrapping class so that super() calls resolve to the properly patched in attributes within the internal MagicMock().
The following code is the solution that I have found to work thus far. Note that I actually implement this within a context handler. Also, care has to be taken to patch in the proper namespaces if importing from other modules.
This is a simple example illustrating the approach:
from mock import MagicMock
import inspect
class _WrappedMagicMock(MagicMock):
def __init__(self, *args, **kwds):
object.__setattr__(self, '_mockclass_wrapper', None)
super(_WrappedMagicMock, self).__init__(*args, **kwds)
def wrap(self, cls):
# get the defined attributes of the spec class that need to be preset
base_attrs = dir(type('Dummy', (object,), {}))
attrs = inspect.getmembers(self._spec_class)
new_attrs = [a[0] for a in attrs if a[0] not in base_attrs]
# preset mocks for attributes in the target mock class
for name in new_attrs:
setattr(cls, name, getattr(self, name))
# eat up any attempts to initialize the target mock class
setattr(cls, '__init__', lambda *args, **kwds: None)
object.__setattr__(self, '_mockclass_wrapper', cls)
def unwrap(self):
object.__setattr__(self, '_mockclass_wrapper', None)
def __setattr__(self, name, value):
super(_WrappedMagicMock, self).__setattr__(name, value)
# be sure to reflect changes to the wrapper class if it is activated
if self._mockclass_wrapper is not None:
setattr(self._mockclass_wrapper, name, value)
def _get_child_mock(self, **kwds):
# when created children mocks need only be MagicMocks
return MagicMock(**kwds)
class A(object):
x = 1
def __init__(self, value):
self.value = value
def get_value_direct(self):
return self.value
def get_value_indirect(self):
return self.value
class B(A):
def __init__(self, value):
super(B, self).__init__(value)
def f(self):
return 2
def get_value_direct(self):
return A.get_value_direct(self)
def get_value_indirect(self):
return super(B, self).get_value_indirect()
# nominal behavior
b = B(3)
assert b.get_value_direct() == 3
assert b.get_value_indirect() == 3
assert b.f() == 2
assert b.x == 1
# using mock class
MockClass = type('MockClassWrapper', (), {})
mock = _WrappedMagicMock(A)
mock.wrap(MockClass)
# patch the mock in
B.__bases__ = (MockClass, )
A = MockClass
# set values within the mock
mock.x = 0
mock.get_value_direct.return_value = 0
mock.get_value_indirect.return_value = 0
# mocked behavior
b = B(7)
assert b.get_value_direct() == 0
assert b.get_value_indirect() == 0
assert b.f() == 2
assert b.x == 0
There seem to be many ways to define singletons in Python. Is there a consensus opinion on Stack Overflow?
I don't really see the need, as a module with functions (and not a class) would serve well as a singleton. All its variables would be bound to the module, which could not be instantiated repeatedly anyway.
If you do wish to use a class, there is no way of creating private classes or private constructors in Python, so you can't protect against multiple instantiations, other than just via convention in use of your API. I would still just put methods in a module, and consider the module as the singleton.
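For example, a module-level singleton could be as simple as this (the module name config.py is hypothetical):
# config.py -- the module object itself is the singleton
_settings = {}

def set_option(name, value):
    _settings[name] = value

def get_option(name, default=None):
    return _settings.get(name, default)
Every importer gets the same module object, so import config anywhere in the program yields the same shared state.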
Here's my own implementation of singletons. All you have to do is decorate the class; to get the singleton, you then have to use the instance() method. Here's an example:
@Singleton
class Foo:
def __init__(self):
print 'Foo created'
f = Foo() # Error, this isn't how you get the instance of a singleton
f = Foo.instance() # Good. Being explicit is in line with the Python Zen
g = Foo.instance() # Returns already created instance
print f is g # True
And here's the code:
class Singleton:
"""
A non-thread-safe helper class to ease implementing singletons.
This should be used as a decorator -- not a metaclass -- to the
class that should be a singleton.
The decorated class can define one `__init__` function that
takes only the `self` argument. Also, the decorated class cannot be
inherited from. Other than that, there are no restrictions that apply
to the decorated class.
To get the singleton instance, use the `instance` method. Trying
to use `__call__` will result in a `TypeError` being raised.
"""
def __init__(self, decorated):
self._decorated = decorated
def instance(self):
"""
Returns the singleton instance. Upon its first call, it creates a
new instance of the decorated class and calls its `__init__` method.
On all subsequent calls, the already created instance is returned.
"""
try:
return self._instance
except AttributeError:
self._instance = self._decorated()
return self._instance
def __call__(self):
raise TypeError('Singletons must be accessed through `instance()`.')
def __instancecheck__(self, inst):
return isinstance(inst, self._decorated)
You can override the __new__ method like this:
class Singleton(object):
_instance = None
def __new__(cls, *args, **kwargs):
if not cls._instance:
cls._instance = super(Singleton, cls).__new__(
cls, *args, **kwargs)
return cls._instance
if __name__ == '__main__':
s1 = Singleton()
s2 = Singleton()
if (id(s1) == id(s2)):
print "Same"
else:
print "Different"
A slightly different approach to implement the singleton in Python is the Borg pattern by Alex Martelli (Google employee and Python genius).
class Borg:
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
So instead of forcing all instances to have the same identity, they share state.
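A quick demonstration of the difference (my own example):
>>> b1 = Borg()
>>> b2 = Borg()
>>> b1 is b2        # distinct identities...
False
>>> b1.x = 42
>>> b2.x            # ...but shared state
42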
The module approach works well. If I absolutely need a singleton I prefer the metaclass approach.
class Singleton(type):
def __init__(cls, name, bases, dict):
super(Singleton, cls).__init__(name, bases, dict)
cls.instance = None
def __call__(cls,*args,**kw):
if cls.instance is None:
cls.instance = super(Singleton, cls).__call__(*args, **kw)
return cls.instance
class MyClass(object):
__metaclass__ = Singleton
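A quick check (Python 2 syntax, matching the __metaclass__ attribute above):
a = MyClass()
b = MyClass()
assert a is b  # both names refer to the single cached instance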
See this implementation from PEP318, implementing the singleton pattern with a decorator:
def singleton(cls):
instances = {}
def getinstance():
if cls not in instances:
instances[cls] = cls()
return instances[cls]
return getinstance
@singleton
class MyClass:
...
The Python documentation does cover this:
class Singleton(object):
def __new__(cls, *args, **kwds):
it = cls.__dict__.get("__it__")
if it is not None:
return it
cls.__it__ = it = object.__new__(cls)
it.init(*args, **kwds)
return it
def init(self, *args, **kwds):
pass
I would probably rewrite it to look more like this:
class Singleton(object):
"""Use to create a singleton"""
def __new__(cls, *args, **kwds):
"""
>>> s = Singleton()
>>> p = Singleton()
>>> id(s) == id(p)
True
"""
it_id = "__it__"
# getattr will dip into base classes, so __dict__ must be used
it = cls.__dict__.get(it_id, None)
if it is not None:
return it
it = object.__new__(cls)
setattr(cls, it_id, it)
it.init(*args, **kwds)
return it
def init(self, *args, **kwds):
pass
class A(Singleton):
pass
class B(Singleton):
pass
class C(A):
pass
assert A() is A()
assert B() is B()
assert C() is C()
assert A() is not B()
assert C() is not B()
assert C() is not A()
It should be relatively clean to extend this:
class Bus(Singleton):
def init(self, label=None, *args, **kwds):
self.label = label
self.channels = [Channel("system"), Channel("app")]
...
As the accepted answer says, the most idiomatic way is to just use a module.
With that in mind, here's a proof of concept:
def singleton(cls):
obj = cls()
# Always return the same object
cls.__new__ = staticmethod(lambda cls: obj)
# Disable __init__
try:
del cls.__init__
except AttributeError:
pass
return cls
See the Python data model for more details on __new__.
Example:
@singleton
class Duck(object):
pass
if Duck() is Duck():
print "It works!"
else:
print "It doesn't work!"
Notes:
You have to use new-style classes (derive from object) for this.
The singleton is initialized when it is defined, rather than the first time it's used.
This is just a toy example. I've never actually used this in production code, and don't plan to.
I'm very unsure about this, but my project uses 'convention singletons' (not enforced singletons), that is, if I have a class called DataController, I define this in the same module:
_data_controller = None
def GetDataController():
global _data_controller
if _data_controller is None:
_data_controller = DataController()
return _data_controller
It is not elegant, since it's a full six lines. But all my singletons use this pattern, and it's at least very explicit (which is pythonic).
The one time I wrote a singleton in Python I used a class where all the member functions had the classmethod decorator.
class Foo:
x = 1
@classmethod
def increment(cls, y=1):
cls.x += y
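Since all the state lives on the class itself, there is nothing to instantiate:
Foo.increment()     # cls.x += 1
Foo.increment(5)    # cls.x += 5
assert Foo.x == 7   # 1 + 1 + 5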
Creating a singleton decorator (aka an annotation) is an elegant way if you want to decorate (annotate) classes going forward. Then you just put #singleton before your class definition.
def singleton(cls):
instances = {}
def getinstance():
if cls not in instances:
instances[cls] = cls()
return instances[cls]
return getinstance
@singleton
class MyClass:
...
There are also some interesting articles on the Google Testing blog, discussing why singleton are/may be bad and are an anti-pattern:
Singletons are Pathological Liars
Where Have All the Singletons Gone?
Root Cause of Singletons
I think that forcing a class or an instance to be a singleton is overkill. Personally, I like to define a normal instantiable class, a semi-private reference, and a simple factory function.
class NothingSpecial:
pass
_the_one_and_only = None
def TheOneAndOnly():
global _the_one_and_only
if not _the_one_and_only:
_the_one_and_only = NothingSpecial()
return _the_one_and_only
Or if there is no issue with instantiating when the module is first imported:
class NothingSpecial:
pass
THE_ONE_AND_ONLY = NothingSpecial()
That way you can write tests against fresh instances without side effects, and there is no need for sprinkling the module with global statements, and if needed you can derive variants in the future.
The Singleton Pattern implemented with Python courtesy of ActiveState.
It looks like the trick is to put the class that's supposed to only have one instance inside of another class.
class Singleton(object):
    staticVar1 = None
    staticVar2 = None
    def __init__(self):
        if self.__class__.staticVar1 is None:
            # first instantiation: create the shared state and assign it
            # to the class static variables (placeholder values)
            self.__class__.staticVar1 = 'some state'
            self.__class__.staticVar2 = 0
        # assign class static variable values to class instance variables
        self.var1 = self.staticVar1
        self.var2 = self.staticVar2
class Singeltone(type):
instances = dict()
def __call__(cls, *args, **kwargs):
if cls.__name__ not in Singeltone.instances:
Singeltone.instances[cls.__name__] = type.__call__(cls, *args, **kwargs)
return Singeltone.instances[cls.__name__]
class Test(object):
__metaclass__ = Singeltone
inst0 = Test()
inst1 = Test()
print(id(inst1) == id(inst0))
OK, singletons could be good or evil, I know. This is my implementation; I simply extend a classic approach to introduce an internal cache, and produce many instances of different types, or many instances of the same type but with different arguments.
I called it Singleton_group, because it groups similar instances together and prevents the creation of more than one object of the same class with the same arguments:
# Peppelinux's cached singleton
class Singleton_group(object):
__instances_args_dict = {}
def __new__(cls, *args, **kwargs):
if not cls.__instances_args_dict.get((cls.__name__, args, str(kwargs))):
cls.__instances_args_dict[(cls.__name__, args, str(kwargs))] = super(Singleton_group, cls).__new__(cls, *args, **kwargs)
return cls.__instances_args_dict.get((cls.__name__, args, str(kwargs)))
# A dummy real-world usage example:
class test(Singleton_group):
def __init__(self, salute):
self.salute = salute
a = test('bye')
b = test('hi')
c = test('bye')
d = test('hi')
e = test('goodbye')
f = test('goodbye')
id(a)
3070148780L
id(b)
3070148908L
id(c)
3070148780L
b == d
True
b._Singleton_group__instances_args_dict
{('test', ('bye',), '{}'): <__main__.test object at 0xb6fec0ac>,
('test', ('goodbye',), '{}'): <__main__.test object at 0xb6fec32c>,
('test', ('hi',), '{}'): <__main__.test object at 0xb6fec12c>}
Every object carries the singleton cache... This could be evil, but it works great for some :)
My simple solution, which is based on the default value of function parameters:
def getSystemContext(contextObjList=[]):
    if len(contextObjList) == 0:
        contextObjList.append(Context())
    return contextObjList[0]

class Context(object):
    # Anything you want here
    pass
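The mutable default argument is created once, when the function is defined, so every call returns the same object:
assert getSystemContext() is getSystemContext()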
Being relatively new to Python I'm not sure what the most common idiom is, but the simplest thing I can think of is just using a module instead of a class. What would have been instance methods on your class become just functions in the module and any data just becomes variables in the module instead of members of the class. I suspect this is the pythonic approach to solving the type of problem that people use singletons for.
If you really want a singleton class, there's a reasonable implementation described on the first hit on Google for "Python singleton", specifically:
class Singleton:
__single = None
def __init__( self ):
if Singleton.__single:
raise Singleton.__single
Singleton.__single = self
That seems to do the trick.
Singleton's half brother
I completely agree with staale, and I leave here a sample of creating a singleton half-brother:
class void: pass
a = void()
a.__class__ = Singleton
a will now report as being of the same class as Singleton, even though it does not look like it. So singletons using complicated classes end up depending on us not messing with them too much.
That being so, we can get the same effect with simpler things, such as a variable or a module. Still, if we want to use classes for clarity: in Python a class is itself an object, so we already have our object (not an instance, but it will do just the same).
class Singleton:
def __new__(cls): raise AssertionError # Singletons can't have instances
There we have a nice assertion error if we try to create an instance, and we can store static members on derivations and make changes to them at runtime (I love Python). This object is as good as the others regarding half-brothers (you can still create them if you wish), but it will tend to run faster due to its simplicity.
In cases where you don't want the metaclass-based solution above, and you don't like the simple function decorator-based approach (e.g. because in that case static methods on the singleton class won't work), this compromise works:
class singleton(object):
"""Singleton decorator."""
def __init__(self, cls):
self.__dict__['cls'] = cls
instances = {}
def __call__(self):
if self.cls not in self.instances:
self.instances[self.cls] = self.cls()
return self.instances[self.cls]
def __getattr__(self, attr):
return getattr(self.__dict__['cls'], attr)
def __setattr__(self, attr, value):
return setattr(self.__dict__['cls'], attr, value)
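Hypothetical usage of that compromise decorator (the class name Config and its attribute are my own):
@singleton
class Config(object):
    setting = "default"

c1 = Config()
c2 = Config()
assert c1 is c2                      # instances are cached
assert Config.setting == "default"   # attribute reads forward to the class
Config.setting = "changed"           # attribute writes forward as well
assert c1.setting == "changed"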