I'm looking to create a dynamic wrapper class that exposes the API calls from a provided object using data in the object.
Statically it looks like this:
class Concrete:
    def __init__(self, data):
        self.data = data

    def print_data(self):
        print(self.data)

class Wrapper:
    '''
    One day this will wrap a variety of objects. But today
    it can only handle Concrete objects.
    '''
    def wrap_it(self, concrete):
        self.cco = concrete  # concrete object = cco

    def print_data(self):
        self.cco.print_data()

cco = Concrete(5)
wcco = Wrapper()
wcco.wrap_it(cco)
wcco.print_data()
Produces
5
I'd like to figure out how to do the same thing but make
wrap_it dynamic. It should search the concrete object,
find its functions, and create functions of the same name
that call the corresponding function on the concrete object.
I imagine that the solution involves inspect.signature or
at least some use of *args and **kwargs, but I've not seen
an example on how to put all this together.
You can use the __getattr__ magic method to hook getting undefined attributes, and forward them to the concrete object:
class DynamicWrapper:
    def wrap_it(self, concrete):
        self.cco = concrete

    def __getattr__(self, k):
        def wrapper(*args, **kwargs):
            print(f'DynamicWrapper calling {k} with args {args} {kwargs}')
            return getattr(self.cco, k)(*args, **kwargs)
        if hasattr(self.cco, k):
            return wrapper
        else:
            raise AttributeError(f'No such field/method: {k}')
cco = Concrete(5)
dwcco = DynamicWrapper()
dwcco.wrap_it(cco)
dwcco.print_data()
Use the dir() function to get the attributes of the given object, check if they are callable and assign them to your wrapper, like this:
class Wrapper:
    def wrap_it(self, objToWrap):
        for attr in dir(objToWrap):
            if not attr.startswith('__') and callable(getattr(objToWrap, attr)):
                exec('self.%s = objToWrap.%s' % (attr, attr))
And now, for testing.
>>> cco = Concrete(5)
>>> wcco = Wrapper()
>>> wcco.wrap_it(cco)
>>> wcco.print_data()
5
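As a variant on the same idea (my sketch, not part of the original answer), setattr can replace exec, which avoids building source strings at runtime:

class Wrapper:
    def wrap_it(self, objToWrap):
        for attr in dir(objToWrap):
            # Copy every public callable from the wrapped object onto self.
            if not attr.startswith('__') and callable(getattr(objToWrap, attr)):
                setattr(self, attr, getattr(objToWrap, attr))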
I have a test framework that requires test cases to be defined using the following class patterns:
class TestBase:
    def __init__(self, params):
        self.name = str(self.__class__)
        print('initializing test: {} with params: {}'.format(self.name, params))

class TestCase1(TestBase):
    def run(self):
        print('running test: ' + self.name)
When I create and run a test, I get the following:
>>> test1 = TestCase1('test 1 params')
initializing test: <class '__main__.TestCase1'> with params: test 1 params
>>> test1.run()
running test: <class '__main__.TestCase1'>
The test framework searches for and loads all TestCase classes it can find, instantiates each one, then calls the run method for each test.
load_test(TestCase1(test_params1))
load_test(TestCase2(test_params2))
...
load_test(TestCaseN(test_params3))
...
for test in loaded_tests:
    test.run()
However, I now have some test cases for which I don't want the __init__ method called until the time that the run method is called, but I have little control over the framework structure or methods. How can I delay the call to __init__ without redefining the __init__ or run methods?
Update
The speculations that this originated as an XY problem are correct. A coworker asked me this question a while back when I was maintaining said test framework. I inquired further about what he was really trying to achieve and we figured out a simpler workaround that didn't involve changing the framework or introducing metaclasses, etc.
However, I still think this is a question worth investigating: if I wanted to create new objects with "lazy" initialization ("lazy" as in lazy evaluation generators such as range, etc.) what would be the best way of accomplishing it? My best attempt so far is listed below, I'm interested in knowing if there's anything simpler or less verbose.
First solution: use property, the elegant Python way of writing setters/getters.
class Bars(object):
    def __init__(self):
        self._foo = None

    @property
    def foo(self):
        if not self._foo:
            print("lazy initialization")
            self._foo = [1, 2, 3]
        return self._foo

if __name__ == "__main__":
    f = Bars()
    print(f.foo)
    print(f.foo)
Second solution: the proxy solution, usually implemented with a decorator.
In short, a proxy is a wrapper around the object you need. A proxy can provide additional functionality to the object it wraps without changing that object's code; it's a surrogate that provides the ability to control access to an object. The code below comes from user Cyclone.
class LazyProperty:
    def __init__(self, method):
        self.method = method
        self.method_name = method.__name__

    def __get__(self, obj, cls):
        if obj is None:  # accessed through the class, not an instance
            return None
        value = self.method(obj)
        print('value {}'.format(value))
        setattr(obj, self.method_name, value)
        return value

class test:
    def __init__(self):
        self._resource = None

    @LazyProperty
    def resource(self):
        print("lazy")
        self._resource = tuple(range(5))
        return self._resource

if __name__ == '__main__':
    t = test()
    print(t.resource)
    print(t.resource)
    print(t.resource)
This is to be used for true one-time calculated lazy properties. I like it because it avoids sticking extra attributes on objects, and once activated it does not waste time checking for attribute presence.
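To see the non-data-descriptor mechanics at work, note that after the first access the computed value lands in the instance's __dict__ and shadows the descriptor. A quick check (my addition, assuming the test class above):

t = test()
t.resource                       # first access triggers the lazy computation
print('resource' in t.__dict__)  # True: later lookups hit the plain attribute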
Metaclass option
You can intercept the call to __init__ using a metaclass. Create the object with __new__ and overwrite the __getattribute__ method to check if __init__ has been called or not and call it if it hasn't.
class DelayInit(type):
    def __call__(cls, *args, **kwargs):
        def init_before_get(obj, attr):
            if not object.__getattribute__(obj, '_initialized'):
                # Mark as initialized first, so the obj.__init__ lookup
                # below doesn't re-enter this hook and recurse.
                obj._initialized = True
                obj.__init__(*args, **kwargs)
            return object.__getattribute__(obj, attr)
        cls.__getattribute__ = init_before_get
        new_obj = cls.__new__(cls, *args, **kwargs)
        new_obj._initialized = False
        return new_obj

class TestDelayed(TestCase1, metaclass=DelayInit):
    pass
In the example below, you'll see that the init print won't occur until the run method is executed.
>>> new_test = TestDelayed('delayed test params')
>>> new_test.run()
initializing test: <class '__main__.TestDelayed'> with params: delayed test params
running test: <class '__main__.TestDelayed'>
Decorator option
You could also use a decorator that has a similar pattern to the metaclass above:
def delayinit(cls):
    def init_before_get(obj, attr):
        if not object.__getattribute__(obj, '_initialized'):
            # Mark as initialized first, so the attribute lookups
            # below don't re-enter this hook and recurse.
            obj._initialized = True
            obj.__init__(*obj._init_args, **obj._init_kwargs)
        return object.__getattribute__(obj, attr)
    cls.__getattribute__ = init_before_get

    def construct(*args, **kwargs):
        obj = cls.__new__(cls, *args, **kwargs)
        obj._init_args = args
        obj._init_kwargs = kwargs
        obj._initialized = False
        return obj
    return construct

@delayinit
class TestDelayed(TestCase1):
    pass
This will behave identically to the example above.
In Python, there is no way that you can avoid calling __init__ when you instantiate a class cls. If calling cls(args) returns an instance of cls, then the language guarantees that cls.__init__ will have been called.
So the only way to achieve something similar to what you are asking is to introduce another class that will postpone the calling of __init__ in the original class until an attribute of the instantiated class is being accessed.
Here is one way:
def delay_init(cls):
    class Delay(cls):
        def __init__(self, *arg, **kwarg):
            self._arg = arg
            self._kwarg = kwarg

        def __getattribute__(self, name):
            self.__class__ = cls
            arg = self._arg
            kwarg = self._kwarg
            del self._arg
            del self._kwarg
            self.__init__(*arg, **kwarg)
            return getattr(self, name)
    return Delay
This wrapper function works by catching any attempt to access an attribute of the instantiated class. When such an attempt is made, it changes the instance's __class__ to the original class, calls the original __init__ method with the arguments that were used when the instance was created, and then returns the proper attribute. This function can be used as decorator for your TestCase1 class:
class TestBase:
    def __init__(self, params):
        self.name = str(self.__class__)
        print('initializing test: {} with params: {}'.format(self.name, params))

class TestCase1(TestBase):
    def run(self):
        print('running test: ' + self.name)
>>> t1 = TestCase1("No delay")
initializing test: <class '__main__.TestCase1'> with params: No delay
>>> t2 = delay_init(TestCase1)("Delayed init")
>>> t1.run()
running test: <class '__main__.TestCase1'>
>>> t2.run()
initializing test: <class '__main__.TestCase1'> with params: Delayed init
running test: <class '__main__.TestCase1'>
Be careful where you apply this function though. If you decorate TestBase with delay_init, it will not work, because it will turn the TestCase1 instances into TestBase instances.
In my answer I'd like to focus on cases when one wants to instantiate a class whose initialiser (dunder init) has side effects. For instance, pysftp.Connection creates an SSH connection, which may be undesired until it's actually used.
In a great blog series about conceiving the wrapt package (a nit-picky decorator implementation), the author describes the Transparent object proxy. This code can be customised for the subject in question.
class LazyObject:
    _factory = None
    '''Callable responsible for creation of target object'''

    _object = None
    '''Target object created lazily'''

    def __init__(self, factory):
        self._factory = factory

    def __getattr__(self, name):
        if self._object is None:  # "not self._object" would re-create falsy targets
            self._object = self._factory()
        return getattr(self._object, name)
Then it can be used as:
obj = LazyObject(lambda: dict(foo='bar'))
obj.keys()  # dict_keys(['foo'])
But len(obj), obj['foo'] and other language constructs which invoke Python object protocols (dunder methods like __len__ and __getitem__) will not work. However, for many cases that are limited to regular methods, this is a solution.
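A quick illustration of that limitation (assuming the LazyObject and obj above):

obj.keys()    # works: resolved through __getattr__
# len(obj)    # TypeError: __len__ is looked up on the LazyObject class itself
# obj['foo']  # TypeError: same for __getitem__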
To proxy object protocol implementations, neither __getattr__ nor __getattribute__ will do (at least not in a generic way). The latter's documentation notes:
This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.
Since a complete solution is demanded, there are manual implementations like werkzeug's LocalProxy and django's SimpleLazyObject. However, a clever workaround is possible.
Luckily there's a dedicated package (based on wrapt) for the exact use case, lazy-object-proxy which is described in this blog post.
from lazy_object_proxy import Proxy

obj = Proxy(lambda: dict(foo='bar'))
obj.keys()  # dict_keys(['foo'])
len(obj)    # 1
obj['foo']  # 'bar'
One alternative would be to write a wrapper that takes a class as input and returns a class with delayed initialization until any member is accessed. This could for example be done as this:
def lazy_init(cls):
    class LazyInit(cls):
        def __init__(self, *args, **kwargs):
            self.args = args
            self.kwargs = kwargs
            self._initialized = False

        def __getattr__(self, attr):
            if not self.__dict__['_initialized']:
                cls.__init__(self,
                             *self.__dict__['args'], **self.__dict__['kwargs'])
                self._initialized = True
            return self.__dict__[attr]
    return LazyInit
This could then be used as such
load_test(lazy_init(TestCase1)(test_params1))
load_test(lazy_init(TestCase2)(test_params2))
...
load_test(lazy_init(TestCaseN)(test_params3))
...
for test in loaded_tests:
    test.run()
Answering your original question (and the problem I think you are actually trying to solve), "How can I delay the init call until an attribute is accessed?": don't call init until you access the attribute.
Said another way: you can make the class initialization simultaneous with the attribute call. What you seem to actually want is 1) create a collection of TestCase# classes along with their associated parameters; 2) run each test case.
Probably your original problem came from thinking you had to initialize all your TestCase classes in order to create a list of them that you could iterate over. But in fact you can store class objects in lists, dicts etc. That means you can do whatever method you have for finding all TestCase classes and store those class objects in a dict with their relevant parameters. Then just iterate that dict and call each class with its run() method.
It might look like:
tests = {TestCase1: 'test 1 params', TestCase2: 'test 2 params', TestCase3: 'test 3 params'}
for test_case, param in tests.items():
    test_case(param).run()
Overriding __new__
You could do this by overriding the __new__ method and replacing the __init__ method with a custom function.
def init(cls, real_init):
    def wrapped(self, *args, **kwargs):
        # This will run during the first call to `__init__`
        # made after `__new__`. Here we re-assign the original
        # __init__ back to the class and assign a custom function
        # to `instance.__init__`.
        cls.__init__ = real_init

        def new_init():
            if new_init.called is False:
                real_init(self, *args, **kwargs)
                new_init.called = True

        new_init.called = False
        self.__init__ = new_init
    return wrapped

class DelayInitMixin(object):
    def __new__(cls, *args, **kwargs):
        cls.__init__ = init(cls, cls.__init__)
        return object.__new__(cls)

class A(DelayInitMixin):
    def __init__(self, a, b):
        print('inside __init__')
        self.a = sum(a)
        self.b = sum(b)

    def __getattribute__(self, attr):
        init = object.__getattribute__(self, '__init__')
        if not init.called:
            init()
        return object.__getattribute__(self, attr)

    def run(self):
        pass

    def fun(self):
        pass
Demo:
>>> a = A(range(1000), range(10000))
>>> a.run()
inside __init__
>>> a.a, a.b
(499500, 49995000)
>>> a.run(), a.__init__()
(None, None)
>>> b = A(range(100), range(10000))
>>> b.a, b.b
inside __init__
(4950, 49995000)
>>> b.run(), b.__init__()
(None, None)
Using cached properties
The idea is to do the heavy calculation only once by caching results. This approach will lead to much more readable code if the whole point of delaying initialization is improving performance.
Django comes with a nice decorator called @cached_property. I tend to use it a lot in both code and unit tests for caching the results of heavy properties.
A cached_property is a non-data descriptor. Hence, once the key is set in the instance's dictionary, access to the property will always get the value from there.
class cached_property(object):
    """
    Decorator that converts a method with a single self argument into a
    property cached on the instance.

    Optional ``name`` argument allows you to make cached properties of other
    methods. (e.g. url = cached_property(get_absolute_url, name='url'))
    """
    def __init__(self, func, name=None):
        self.func = func
        self.__doc__ = getattr(func, '__doc__')
        self.name = name or func.__name__

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        res = instance.__dict__[self.name] = self.func(instance)
        return res
Usage:
class A:
    @cached_property
    def a(self):
        print('calculating a')
        return sum(range(1000))

    @cached_property
    def b(self):
        print('calculating b')
        return sum(range(10000))
Demo:
>>> a = A()
>>> a.a
calculating a
499500
>>> a.b
calculating b
49995000
>>> a.a, a.b
(499500, 49995000)
I think you can use a wrapper class to hold the real class you want to instantiate, and call __init__ yourself in your code, like this (Python 3 code):
class Wrapper:
    def __init__(self, cls):
        self.cls = cls
        self.instance = None

    def your_method(self, *args, **kwargs):
        if not self.instance:
            self.instance = self.cls()
        return self.instance.your_method(*args, **kwargs)

class YourClass:
    def __init__(self):
        print("calling __init__")
It's a clumsy way, but it works without any tricks.
I can't find a definitive answer for this. As far as I know, you can't have multiple __init__ functions in a Python class. So how do I solve this problem?
Suppose I have a class called Cheese with the number_of_holes property. How can I have two ways of creating cheese objects...
One that takes a number of holes like this: parmesan = Cheese(num_holes = 15).
And one that takes no arguments and just randomizes the number_of_holes property: gouda = Cheese().
I can think of only one way to do this, but this seems clunky:
class Cheese():
    def __init__(self, num_holes=0):
        if num_holes == 0:
            self.number_of_holes = random_holes()  # randomize number_of_holes
        else:
            self.number_of_holes = num_holes
What do you say? Is there another way?
Actually None is much better for "magic" values:
class Cheese():
    def __init__(self, num_holes=None):
        if num_holes is None:
            ...
Now if you want complete freedom of adding more parameters:
class Cheese():
    def __init__(self, *args, **kwargs):
        # args -- tuple of anonymous arguments
        # kwargs -- dictionary of named arguments
        self.num_holes = kwargs.get('num_holes', random_holes())
To better explain the concept of *args and **kwargs (you can actually change these names):
def f(*args, **kwargs):
    print('args:', args, 'kwargs:', kwargs)

>>> f('a')
args: ('a',) kwargs: {}
>>> f(ar='a')
args: () kwargs: {'ar': 'a'}
>>> f(1, 2, param=3)
args: (1, 2) kwargs: {'param': 3}
http://docs.python.org/reference/expressions.html#calls
Using num_holes=None as the default is fine if you are going to have just __init__.
If you want multiple, independent "constructors", you can provide these as class methods. These are usually called factory methods. In this case you could have the default for num_holes be 0.
from random import randint

class Cheese(object):
    def __init__(self, num_holes=0):
        "defaults to a solid cheese"
        self.number_of_holes = num_holes

    @classmethod
    def random(cls):
        return cls(randint(0, 100))

    @classmethod
    def slightly_holey(cls):
        return cls(randint(0, 33))

    @classmethod
    def very_holey(cls):
        return cls(randint(66, 100))
Now create objects like this:
gouda = Cheese()
emmentaler = Cheese.random()
leerdammer = Cheese.slightly_holey()
One should definitely prefer the solutions already posted, but since no one mentioned this solution yet, I think it is worth mentioning for completeness.
The #classmethod approach can be modified to provide an alternative constructor which does not invoke the default constructor (__init__). Instead, an instance is created using __new__.
This could be used if the type of initialization cannot be selected based on the type of the constructor argument, and the constructors do not share code.
Example:
class MyClass(set):
    def __init__(self, filename):
        self._value = load_from_file(filename)

    @classmethod
    def from_somewhere(cls, somename):
        obj = cls.__new__(cls)  # does not call __init__
        super(MyClass, obj).__init__()  # don't forget to call any polymorphic base class initializers
        obj._value = load_from_somewhere(somename)
        return obj
All of these answers are excellent if you want to use optional parameters, but another Pythonic possibility is to use a classmethod to generate a factory-style pseudo-constructor:
def __init__(self, num_holes):
    self.num_holes = num_holes  # do stuff with the number

@classmethod
def fromRandom(cls):
    return cls(randint(0, 100))  # some random number
Why do you think your solution is "clunky"? Personally I would prefer one constructor with default values over multiple overloaded constructors in situations like yours (Python does not support method overloading anyway):
def __init__(self, num_holes=None):
    if num_holes is None:
        pass  # construct a gouda
    else:
        pass  # custom cheese
    # common initialization
For really complex cases with lots of different constructors, it might be cleaner to use different factory functions instead:
@classmethod
def create_gouda(cls):
    c = Cheese()
    # ...
    return c

@classmethod
def create_cheddar(cls):
    ...
In your cheese example you might want to use a Gouda subclass of Cheese though...
Those are good ideas for your implementation, but if you are presenting a cheese-making interface to a user, they don't care how many holes the cheese has or what internals go into making cheese. The user of your code just wants "gouda" or "parmesan", right?
So why not do this:
# cheese_user.py
from cheeses import make_gouda, make_parmesan

gouda = make_gouda()
parmesan = make_parmesan()
And then you can use any of the methods above to actually implement the functions:
# cheeses.py
class Cheese(object):
    def __init__(self, *args, **kwargs):
        # args -- tuple of anonymous arguments
        # kwargs -- dictionary of named arguments
        self.num_holes = kwargs.get('num_holes', random_holes())

def make_gouda():
    return Cheese()

def make_parmesan():
    return Cheese(num_holes=15)
This is a good encapsulation technique, and I think it is more Pythonic. To me this way of doing things fits more in line with duck typing. You are simply asking for a gouda object and you don't really care what class it is.
Overview
For the specific cheese example, I agree with many of the other answers about using default values to signal random initialization or to use a static factory method. However, there may also be related scenarios that you had in mind where there is value in having alternative, concise ways of calling the constructor without hurting the quality of parameter names or type information.
Since Python 3.8, functools.singledispatchmethod can help accomplish this in many cases (and the more flexible multimethod can apply in even more scenarios). (This related post describes how one could accomplish the same in Python 3.4 without a library.) I haven't seen examples in the documentation for either of these that specifically show overloading __init__ as you ask about, but it appears that the same principles for overloading any member method apply (as shown below).
"Single dispatch" (available in the standard library) requires that there be at least one positional parameter and that the type of the first argument be sufficient to distinguish among the possible overloaded options. For the specific Cheese example, this doesn't hold since you wanted random holes when no parameters were given, but multidispatch does support the very same syntax and can be used as long as each method version can be distinguish based on the number and type of all arguments together.
Example
Here is an example of how to use either method (some of the details are in order to please mypy which was my goal when I first put this together):
from functools import singledispatchmethod as overload
# or the following more flexible method after `pip install multimethod`
# from multimethod import multidispatch as overload

class MyClass:
    @overload  # type: ignore[misc]
    def __init__(self, a: int = 0, b: str = 'default'):
        self.a = a
        self.b = b

    @__init__.register
    def _from_str(self, b: str, a: int = 0):
        self.__init__(a, b)  # type: ignore[misc]

    def __repr__(self) -> str:
        return f"({self.a}, {self.b})"

print([
    MyClass(1, "test"),
    MyClass("test", 1),
    MyClass("test"),
    MyClass(1, b="test"),
    MyClass("test", a=1),
    MyClass("test"),
    MyClass(1),
    # MyClass(),  # `multidispatch` version handles these 3, too.
    # MyClass(a=1, b="test"),
    # MyClass(b="test", a=1),
])
Output:
[(1, test), (1, test), (0, test), (1, test), (1, test), (0, test), (1, default)]
Notes:
I wouldn't usually make the alias called overload, but it helped make the diff between using the two methods just a matter of which import you use.
The # type: ignore[misc] comments are not necessary to run, but I put them in there to please mypy which doesn't like decorating __init__ nor calling __init__ directly.
If you are new to the decorator syntax, realize that putting @overload before the definition of __init__ is just sugar for __init__ = overload(the original definition of __init__). In this case, overload is a class, so the resulting __init__ is an object that has a __call__ method, making it look like a function, but that also has a .register method which is called later to add another overloaded version of __init__. This is a bit messy, but it pleases mypy because there are no method names being defined twice. If you don't care about mypy and are planning to use the external library anyway, multimethod also has simpler alternative ways of specifying overloaded versions.
Defining __repr__ is simply there to make the printed output meaningful (you don't need it in general).
Notice that multidispatch is able to handle three additional input combinations that don't have any positional parameters.
Use num_holes=None as a default, instead. Then check for whether num_holes is None, and if so, randomize. That's what I generally see, anyway.
More radically different construction methods may warrant a classmethod that returns an instance of cls.
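A minimal sketch of that suggestion (my own example, not from the answer above):

from random import randint

class Cheese:
    def __init__(self, num_holes=None):
        if num_holes is None:
            num_holes = randint(0, 100)  # randomize when not given
        self.num_holes = num_holes

    @classmethod
    def from_swiss_recipe(cls, holes_per_kg, kg):
        # Radically different inputs, funneled into the regular constructor.
        return cls(num_holes=holes_per_kg * kg)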
The best answer is the one above about default arguments, but I had fun writing this, and it certainly does fit the bill for "multiple constructors". Use at your own risk.
What about the __new__ method?
"Typical implementations create a new instance of the class by invoking the superclass's __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it."
So you can have the __new__ method modify your class definition by attaching the appropriate constructor method.
class Cheese(object):
    def __new__(cls, *args, **kwargs):
        obj = super(Cheese, cls).__new__(cls)
        num_holes = kwargs.get('num_holes', random_holes())
        if num_holes == 0:
            cls.__init__ = cls.foomethod
        else:
            cls.__init__ = cls.barmethod
        return obj

    def foomethod(self, *args, **kwargs):
        print("foomethod called as __init__ for Cheese")

    def barmethod(self, *args, **kwargs):
        print("barmethod called as __init__ for Cheese")

if __name__ == "__main__":
    parm = Cheese(num_holes=5)
I'd use inheritance, especially if there are going to be more differences than the number of holes, and especially if Gouda needs to have a different set of members than Parmesan.
class Gouda(Cheese):
    def __init__(self):
        super().__init__(num_holes=10)

class Parmesan(Cheese):
    def __init__(self):
        super().__init__(num_holes=15)
Since my initial answer was criticised on the basis that my special-purpose constructors did not call the (unique) default constructor, I post here a modified version that honours the wishes that all constructors shall call the default one:
class Cheese:
    def __init__(self, *args, _initialiser="_default_init", **kwargs):
        """A multi-initialiser.
        """
        getattr(self, _initialiser)(*args, **kwargs)

    def _default_init(self, *args, **kwargs):
        """A user-friendly smart or general-purpose initialiser.
        """
        ...

    def _init_parmesan(self, *args, **kwargs):
        """A special initialiser for Parmesan cheese.
        """
        ...

    def _init_gouda(self, *args, **kwargs):
        """A special initialiser for Gouda cheese.
        """
        ...

    @classmethod
    def make_parmesan(cls, *args, **kwargs):
        return cls(*args, **kwargs, _initialiser="_init_parmesan")

    @classmethod
    def make_gouda(cls, *args, **kwargs):
        return cls(*args, **kwargs, _initialiser="_init_gouda")
This is how I solved it for a YearQuarter class I had to create. I created an __init__ which is very tolerant of a wide variety of inputs.
You use it like this:
>>> from datetime import date
>>> temp1 = YearQuarter(year=2017, month=12)
>>> print(temp1)
2017-Q4
>>> temp2 = YearQuarter(temp1)
>>> print(temp2)
2017-Q4
>>> temp3 = YearQuarter((2017, 6))
>>> print(temp3)
2017-Q2
>>> temp4 = YearQuarter(date(2017, 1, 18))
>>> print(temp4)
2017-Q1
>>> temp5 = YearQuarter(year=2017, quarter=3)
>>> print(temp5)
2017-Q3
And this is how the __init__ and the rest of the class look:
import datetime

class YearQuarter:
    def __init__(self, *args, **kwargs):
        if len(args) == 1:
            [x] = args
            if isinstance(x, datetime.date):
                self._year = int(x.year)
                self._quarter = (int(x.month) + 2) // 3
            elif isinstance(x, tuple):
                year, month = x
                self._year = int(year)
                month = int(month)
                if 1 <= month <= 12:
                    self._quarter = (month + 2) // 3
                else:
                    raise ValueError
            elif isinstance(x, YearQuarter):
                self._year = x._year
                self._quarter = x._quarter
        elif len(args) == 2:
            year, month = args
            self._year = int(year)
            month = int(month)
            if 1 <= month <= 12:
                self._quarter = (month + 2) // 3
            else:
                raise ValueError
        elif kwargs:
            self._year = int(kwargs["year"])
            if "quarter" in kwargs:
                quarter = int(kwargs["quarter"])
                if 1 <= quarter <= 4:
                    self._quarter = quarter
                else:
                    raise ValueError
            elif "month" in kwargs:
                month = int(kwargs["month"])
                if 1 <= month <= 12:
                    self._quarter = (month + 2) // 3
                else:
                    raise ValueError

    def __str__(self):
        return '{0}-Q{1}'.format(self._year, self._quarter)
class Cheese:
    def __init__(self, *args, **kwargs):
        """A user-friendly initialiser for the general-purpose constructor.
        """
        ...

    def _init_parmesan(self, *args, **kwargs):
        """A special initialiser for Parmesan cheese.
        """
        ...

    def _init_gouda(self, *args, **kwargs):
        """A special initialiser for Gouda cheese.
        """
        ...

    @classmethod
    def make_parmesan(cls, *args, **kwargs):
        new = cls.__new__(cls)
        new._init_parmesan(*args, **kwargs)
        return new

    @classmethod
    def make_gouda(cls, *args, **kwargs):
        new = cls.__new__(cls)
        new._init_gouda(*args, **kwargs)
        return new
I do not see a straightforward answer with an example yet. The idea is simple:
use __init__ as the "basic" constructor, as Python only allows one __init__ method
use @classmethod to create any other constructors that call the basic constructor
Here is a new try.
from datetime import date

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    @classmethod
    def fromBirthYear(cls, name, birthYear):
        return cls(name, date.today().year - birthYear)
Usage:
p = Person('tim', age=18)
p = Person.fromBirthYear('tim', birthYear=2004)
Here (drawing on this earlier answer, the pure Python version of classmethod in the docs, and as suggested by this comment) is a decorator that can be used to create multiple constructors.
from types import MethodType
from functools import wraps

class constructor:
    def __init__(self, func):
        @wraps(func)
        def wrapped(cls, *args, **kwargs):
            obj = cls.__new__(cls)  # Create new instance but don't init
            super(cls, obj).__init__()  # Init any classes it inherits from
            func(obj, *args, **kwargs)  # Run the constructor with obj as self
            return obj
        self.wrapped = wrapped

    def __get__(self, _, cls):
        return MethodType(self.wrapped, cls)  # Bind this constructor to the class

class Test:
    def __init__(self, data_sequence):
        """ Default constructor, initiates with data sequence """
        self.data = [item ** 2 for item in data_sequence]

    @constructor
    def zeros(self, size):
        """ Initiates with zeros """
        self.data = [0 for _ in range(size)]

a = Test([1, 2, 3])
b = Test.zeros(100)
This seems the cleanest way in some cases (see e.g. multiple dataframe constructors in Pandas), where providing multiple optional arguments to a single constructor would be inconvenient: for example cases where it would require too many parameters, be unreadable, be slower or use more memory than needed. However, as earlier comments have argued, in most cases it is probably more Pythonic to route through a single constructor with optional parameters, adding class methods where needed.
I am instantiating a class A (which I am importing from somebody
else, so I can't modify it) into my class X.
Is there a way I can intercept or wrap calls to methods in A?
I.e., in the code below can I call
x.a.p1()
and get the output
X.pre
A.p1
X.post
Many TIA!
class A:
    # in my real application, this is an imported class
    # that I cannot modify
    def p1(self):
        print('A.p1')

class X:
    def __init__(self):
        self.a = A()

    def pre(self):
        print('X.pre')

    def post(self):
        print('X.post')

x = X()
x.a.p1()
Here is the solution I and my colleagues came up with:
from types import MethodType

class PrePostCaller:
    def __init__(self, other):
        self.other = other

    def pre(self):
        print('pre')

    def post(self):
        print('post')

    def __getattr__(self, name):
        if hasattr(self.other, name):
            func = getattr(self.other, name)
            return lambda *args, **kwargs: self._wrap(func, args, kwargs)
        raise AttributeError(name)

    def _wrap(self, func, args, kwargs):
        self.pre()
        if type(func) == MethodType:
            result = func(*args, **kwargs)
        else:
            result = func(self.other, *args, **kwargs)
        self.post()
        return result

# Examples of use
class Foo:
    def stuff(self):
        print('stuff')

a = PrePostCaller(Foo())
a.stuff()

a = PrePostCaller([1, 2, 3])
print(a.count())
Gives:
pre
stuff
post
pre
post
0
So when creating an instance of your object, wrap it with the PrePostCaller object. After that you continue using the object as if it was an instance of the wrapped object. With this solution you can do the wrapping on a per instance basis.
You could just modify the A instance and replace the p1 function with a wrapper function:
def wrapped(pre, post, f):
    def wrapper(*args, **kwargs):
        pre()
        retval = f(*args, **kwargs)
        post()
        return retval
    return wrapper

class Y:
    def __init__(self):
        self.a = A()
        self.a.p1 = wrapped(self.pre, self.post, self.a.p1)

    def pre(self):
        print('X.pre')

    def post(self):
        print('X.post')
The no-whistles-or-bells solution would be to write a wrapper class for class A that does just that.
As others have mentioned, the wrapper/decorator solution is probably the easiest one. I don't recommend modifying the wrapped class itself, for the same reasons that you point out.
If you have many external classes you can write a code generator to generate the wrapper classes for you. Since you are doing this in Python you can probably even implement the generator as a part of the program, generating the wrappers at startup, or something.
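A minimal sketch of that generator idea (my illustration, with hypothetical names): build each wrapper class at runtime with type(), wrapping every public callable of the target class.

def make_wrapper_class(target_cls, pre, post):
    def make_method(name):
        def method(self, *args, **kwargs):
            pre()
            result = getattr(self._wrapped, name)(*args, **kwargs)
            post()
            return result
        return method

    def __init__(self, *args, **kwargs):
        self._wrapped = target_cls(*args, **kwargs)

    attrs = {'__init__': __init__}
    for name in dir(target_cls):
        # Wrap every public callable of the target class.
        if not name.startswith('__') and callable(getattr(target_cls, name)):
            attrs[name] = make_method(name)
    return type('Wrapped' + target_cls.__name__, (object,), attrs)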
I've just recently read about decorators in Python. I'm not understanding them yet, but it seems to me that they can be a solution to your problem. See Bruce Eckel's intro to decorators at:
http://www.artima.com/weblogs/viewpost.jsp?thread=240808
He has a few more posts on that topic there.
Edit: Three days later I stumbled upon this article, which shows how to do a similar task without decorators, explains the problems with that approach, and then introduces decorators and develops a fairly complete solution:
http://wordaligned.org/articles/echo
Here's what I've received from Steven D'Aprano on comp.lang.python.
# Define two decorator factories.
def precall(pre):
    def decorator(f):
        def newf(*args, **kwargs):
            pre()
            return f(*args, **kwargs)
        return newf
    return decorator

def postcall(post):
    def decorator(f):
        def newf(*args, **kwargs):
            x = f(*args, **kwargs)
            post()
            return x
        return newf
    return decorator
Now you can monkey patch class A if you want. It's probably not a great idea to do this in production code, as it will affect class A everywhere.
[this is ok for my application, as it is basically a protocol converter and there's exactly one instance of each class being processed.]
class A:
    # in my real application, this is an imported class
    # that I cannot modify
    def p1(self):
        print('A.p1')

class X:
    def __init__(self):
        self.a = A()
        A.p1 = precall(self.pre)(postcall(self.post)(A.p1))

    def pre(self):
        print('X.pre')

    def post(self):
        print('X.post')

x = X()
x.a.p1()
Gives the desired result.
X.pre
A.p1
X.post
There seem to be many ways to define singletons in Python. Is there a consensus opinion on Stack Overflow?
I don't really see the need, as a module with functions (and not a class) would serve well as a singleton. All its variables would be bound to the module, which could not be instantiated repeatedly anyway.
If you do wish to use a class, there is no way of creating private classes or private constructors in Python, so you can't protect against multiple instantiations other than by convention in the use of your API. I would still just put methods in a module and consider the module as the singleton.
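A minimal sketch of the module-as-singleton idea (hypothetical module name):

# config.py -- the module itself acts as the singleton
_settings = {}

def set_option(key, value):
    _settings[key] = value

def get_option(key, default=None):
    return _settings.get(key, default)

Every import of config yields the same module object, so all callers share the same _settings.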
Here's my own implementation of singletons. All you have to do is decorate the class; to get the singleton, you then have to use the instance method. Here's an example:
@Singleton
class Foo:
    def __init__(self):
        print('Foo created')

f = Foo()  # Error, this isn't how you get the instance of a singleton
f = Foo.instance()  # Good. Being explicit is in line with the Python Zen
g = Foo.instance()  # Returns already created instance

print(f is g)  # True
And here's the code:
class Singleton:
    """
    A non-thread-safe helper class to ease implementing singletons.
    This should be used as a decorator -- not a metaclass -- to the
    class that should be a singleton.

    The decorated class can define one `__init__` function that
    takes only the `self` argument. Also, the decorated class cannot be
    inherited from. Other than that, there are no restrictions that apply
    to the decorated class.

    To get the singleton instance, use the `instance` method. Trying
    to use `__call__` will result in a `TypeError` being raised.
    """

    def __init__(self, decorated):
        self._decorated = decorated

    def instance(self):
        """
        Returns the singleton instance. Upon its first call, it creates a
        new instance of the decorated class and calls its `__init__` method.
        On all subsequent calls, the already created instance is returned.
        """
        try:
            return self._instance
        except AttributeError:
            self._instance = self._decorated()
            return self._instance

    def __call__(self):
        raise TypeError('Singletons must be accessed through `instance()`.')

    def __instancecheck__(self, inst):
        return isinstance(inst, self._decorated)
You can override the __new__ method like this:
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            # note: object.__new__ takes no extra arguments in Python 3
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

if __name__ == '__main__':
    s1 = Singleton()
    s2 = Singleton()

    if id(s1) == id(s2):
        print("Same")
    else:
        print("Different")
A slightly different approach to implement the singleton in Python is the borg pattern by Alex Martelli (Google employee and Python genius).
class Borg:
    __shared_state = {}

    def __init__(self):
        self.__dict__ = self.__shared_state
So instead of forcing all instances to have the same identity, they share state.
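For example, two Borg instances share one __dict__ even though they are distinct objects (a quick demo, assuming the class above):

a = Borg()
b = Borg()
a.x = 42       # set an attribute on one instance
print(b.x)     # 42 -- visible through the other instance too
print(a is b)  # False -- different identities, shared state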
The module approach works well. If I absolutely need a singleton, I prefer the metaclass approach.
class Singleton(type):
    def __init__(cls, name, bases, dict):
        super(Singleton, cls).__init__(name, bases, dict)
        cls.instance = None

    def __call__(cls, *args, **kw):
        if cls.instance is None:
            cls.instance = super(Singleton, cls).__call__(*args, **kw)
        return cls.instance

class MyClass(metaclass=Singleton):  # Python 3 syntax for __metaclass__ = Singleton
    pass
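A quick check of the metaclass version (my addition, assuming the classes above):

a = MyClass()
b = MyClass()
print(a is b)  # True -- __call__ returns the cached instance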
See this implementation from PEP318, implementing the singleton pattern with a decorator:
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
The Python documentation does cover this:
class Singleton(object):
    def __new__(cls, *args, **kwds):
        it = cls.__dict__.get("__it__")
        if it is not None:
            return it
        cls.__it__ = it = object.__new__(cls)
        it.init(*args, **kwds)
        return it

    def init(self, *args, **kwds):
        pass
I would probably rewrite it to look more like this:
class Singleton(object):
    """Use to create a singleton"""
    def __new__(cls, *args, **kwds):
        """
        >>> s = Singleton()
        >>> p = Singleton()
        >>> id(s) == id(p)
        True
        """
        it_id = "__it__"
        # getattr will dip into base classes, so __dict__ must be used
        it = cls.__dict__.get(it_id, None)
        if it is not None:
            return it
        it = object.__new__(cls)
        setattr(cls, it_id, it)
        it.init(*args, **kwds)
        return it

    def init(self, *args, **kwds):
        pass

class A(Singleton):
    pass

class B(Singleton):
    pass

class C(A):
    pass

assert A() is A()
assert B() is B()
assert C() is C()
assert A() is not B()
assert C() is not B()
assert C() is not A()
It should be relatively clean to extend this:
class Bus(Singleton):
    def init(self, label=None, *args, **kwds):
        self.label = label
        self.channels = [Channel("system"), Channel("app")]
        ...
As the accepted answer says, the most idiomatic way is to just use a module.
With that in mind, here's a proof of concept:
def singleton(cls):
    obj = cls()
    # Always return the same object
    cls.__new__ = staticmethod(lambda cls: obj)
    # Disable __init__
    try:
        del cls.__init__
    except AttributeError:
        pass
    return cls
See the Python data model for more details on __new__.
Example:
@singleton
class Duck(object):
    pass

if Duck() is Duck():
    print("It works!")
else:
    print("It doesn't work!")
Notes:
You have to use new-style classes (derive from object) for this.
The singleton is initialized when it is defined, rather than the first time it's used.
This is just a toy example. I've never actually used this in production code, and don't plan to.
I'm very unsure about this, but my project uses 'convention singletons' (not enforced singletons), that is, if I have a class called DataController, I define this in the same module:
_data_controller = None

def GetDataController():
    global _data_controller
    if _data_controller is None:
        _data_controller = DataController()
    return _data_controller
It is not elegant, since it's a full six lines. But all my singletons use this pattern, and it's at least very explicit (which is pythonic).
The one time I wrote a singleton in Python I used a class where all the member functions had the classmethod decorator.
class Foo:
    x = 1

    @classmethod
    def increment(cls, y=1):
        cls.x += y
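Used that way, all state lives on the class object and you never instantiate it (a small sketch, assuming the class above):

Foo.increment()   # Foo.x == 2
Foo.increment(5)  # Foo.x == 7
print(Foo.x)      # 7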
Creating a singleton decorator (aka an annotation) is an elegant way if you want to decorate (annotate) classes going forward. Then you just put @singleton before your class definition.
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
There are also some interesting articles on the Google Testing blog, discussing why singleton are/may be bad and are an anti-pattern:
Singletons are Pathological Liars
Where Have All the Singletons Gone?
Root Cause of Singletons
I think that forcing a class or an instance to be a singleton is overkill. Personally, I like to define a normal instantiable class, a semi-private reference, and a simple factory function.
class NothingSpecial:
    pass

_the_one_and_only = None

def TheOneAndOnly():
    global _the_one_and_only
    if not _the_one_and_only:
        _the_one_and_only = NothingSpecial()
    return _the_one_and_only
Or if there is no issue with instantiating when the module is first imported:
class NothingSpecial:
    pass

THE_ONE_AND_ONLY = NothingSpecial()
That way you can write tests against fresh instances without side effects, there is no need to sprinkle the module with global statements, and if needed you can derive variants in the future.
The Singleton Pattern implemented with Python courtesy of ActiveState.
It looks like the trick is to put the class that's supposed to only have one instance inside of another class.
class Singleton(object):
    staticVar1 = None
    staticVar2 = None

    def __init__(self):
        if self.__class__.staticVar1 is None:
            # create class instance variable for instantiation of class
            # assign class instance variable values to class static variables
            pass
        else:
            # assign class static variable values to class instance variables
            pass
class Singeltone(type):
    instances = dict()

    def __call__(cls, *args, **kwargs):
        if cls.__name__ not in Singeltone.instances:
            Singeltone.instances[cls.__name__] = type.__call__(cls, *args, **kwargs)
        return Singeltone.instances[cls.__name__]

class Test(metaclass=Singeltone):  # Python 3 syntax for __metaclass__
    pass

inst0 = Test()
inst1 = Test()
print(id(inst1) == id(inst0))
OK, singletons can be good or evil, I know. This is my implementation: I simply extend a classic approach to introduce an internal cache, so it can produce many instances of different types, or many instances of the same type but with different arguments.
I called it Singleton_group, because it groups similar instances together and prevents the creation of an object of the same class with the same arguments:
# Peppelinux's cached singleton
class Singleton_group(object):
    __instances_args_dict = {}

    def __new__(cls, *args, **kwargs):
        if not cls.__instances_args_dict.get((cls.__name__, args, str(kwargs))):
            # note: object.__new__ takes no extra arguments in Python 3
            cls.__instances_args_dict[(cls.__name__, args, str(kwargs))] = super(Singleton_group, cls).__new__(cls)
        return cls.__instances_args_dict.get((cls.__name__, args, str(kwargs)))

# A dummy real-world usage example:
class test(Singleton_group):
    def __init__(self, salute):
        self.salute = salute

a = test('bye')
b = test('hi')
c = test('bye')
d = test('hi')
e = test('goodbye')
f = test('goodbye')
>>> id(a)
3070148780L
>>> id(b)
3070148908L
>>> id(c)
3070148780L
>>> b == d
True
>>> b._Singleton_group__instances_args_dict
{('test', ('bye',), '{}'): <__main__.test object at 0xb6fec0ac>,
 ('test', ('goodbye',), '{}'): <__main__.test object at 0xb6fec32c>,
 ('test', ('hi',), '{}'): <__main__.test object at 0xb6fec12c>}
Every object carries the singleton cache... This could be evil, but it works great for some :)
My simple solution, which is based on the default value of a function parameter:
def getSystemContext(contextObjList=[]):
    if len(contextObjList) == 0:
        contextObjList.append(Context())
    return contextObjList[0]

class Context(object):
    # Anything you want here
    pass
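This works because the mutable default list is created once, at function definition time, and then shared by every call (a quick check, assuming the code above):

print(getSystemContext() is getSystemContext())  # True -- one cached Context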
Being relatively new to Python I'm not sure what the most common idiom is, but the simplest thing I can think of is just using a module instead of a class. What would have been instance methods on your class become just functions in the module and any data just becomes variables in the module instead of members of the class. I suspect this is the pythonic approach to solving the type of problem that people use singletons for.
If you really want a singleton class, there's a reasonable implementation described on the first hit on Google for "Python singleton", specifically:
class Singleton:
    __single = None

    def __init__(self):
        if Singleton.__single:
            # the original recipe raised the existing instance itself,
            # which Python 3 no longer allows
            raise RuntimeError('A Singleton instance already exists')
        Singleton.__single = self
That seems to do the trick.
Singleton's half brother
I completely agree with staale and I leave here a sample of creating a singleton half brother:
class void: pass
a = void()
a.__class__ = Singleton
a will now report as being of the same class as Singleton even though it does not look like it. So singletons using complicated classes end up depending on us not messing much with them.
Being so, we can get the same effect using simpler things like a variable or a module. Still, if we want to use classes for clarity, and because in Python a class is an object, we already have the object (not an instance, but it will do just the same).
class Singleton:
    def __new__(cls):
        raise AssertionError  # Singletons can't have instances
There we have a nice assertion error if we try to create an instance, and we can store static members on derivations and make changes to them at runtime (I love Python). This object is about as good as the other half brothers (you can still create them if you wish), but it will tend to run faster due to its simplicity.
In cases where you don't want the metaclass-based solution above, and you don't like the simple function decorator-based approach (e.g. because in that case static methods on the singleton class won't work), this compromise works:
class singleton(object):
    """Singleton decorator."""

    def __init__(self, cls):
        self.__dict__['cls'] = cls

    instances = {}

    def __call__(self):
        if self.cls not in self.instances:
            self.instances[self.cls] = self.cls()
        return self.instances[self.cls]

    def __getattr__(self, attr):
        return getattr(self.__dict__['cls'], attr)

    def __setattr__(self, attr, value):
        return setattr(self.__dict__['cls'], attr, value)