Overload a method with a function at runtime - python

OK, I'll admit upfront this is a mega kludge and that I could definitely implement this better. It's only morbid curiosity that's driving me to find out how I could do this.
class SomeClass(object):
    def __init__(self):
        def __(self, arg):
            self.doStuff(arg)
        self.overLoaded = __
    def doStuff(self, string):
        print string

SomeClass().overLoaded("test string")
This returns a parameter error because I'm only supplying overLoaded() with one argument instead of two. Is there some magic to tell the interpreter that it's now a method of the class? (I tried decorating it with @classmethod; I always understood that to be its purpose?)

Don't worry about the self parameter; the function already has it from local scope.
class SomeClass(object):
    def __init__(self):
        def __(arg):
            self.bar(arg)
        self.foo = __
    def foo(self, arg):
        print "foo", arg
    def bar(self, arg):
        print "bar", arg

SomeClass().foo("thing") # prints "bar thing"
Python binds methods automagically to supply the instance as the first argument, but this happens at attribute lookup (through the descriptor protocol), not at instance creation. If you're adding a plain function as an instance attribute later, you need to supply the instance manually. As you are defining the function with self already in scope, you don't need to pass it again.
Python's new module is not a solution, as it has been deprecated since 2.6. If you want to create a "real" instance method, do it with functools.partial like this:
import functools

class SomeClass(object):
    def __init__(self):
        def __(self, arg):
            self.bar(arg)
        self.foo = functools.partial(__, self)
    def foo(self, arg):
        print "foo", arg
    def bar(self, arg):
        print "bar", arg

SomeClass().foo("thing") # prints "bar thing"

The issue is that you are trying to add a new instance method (not class method) and it is not binding properly. Python has a module function to manually bind functions to instances.
import new
self.method = new.instancemethod(func, self, SomeClass)  # third argument is the class
Edit: Apparently the new module is deprecated. Use the types module instead for metamagic.
import types
self.method = types.MethodType(func, self, SomeClass)  # Python 2 signature
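Note that in Python 3 the class argument is gone: types.MethodType(func, self). A minimal self-contained sketch of the Python 3 binding (the names func and obj are illustrative):

import types

class SomeClass(object):
    def doStuff(self, arg):
        print('do stuff:', arg)

def func(self, arg):
    self.doStuff(arg)

obj = SomeClass()
obj.method = types.MethodType(func, obj)  # bind func to this one instance
obj.method("test string")                 # prints: do stuff: test string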

sj26's solution is a good one. Another alternative, if you want to set up a method that can be overloaded with any user-supplied function or with another of the object's methods, is to build a custom descriptor. This descriptor can be used as a decorator (analogous to @classmethod or @staticmethod); it allows you to store a function in an instance's dictionary and return it as a method:
import types

class overloadable(object):
    def __init__(self, func):
        self._default_func = func
        self._name = func.__name__
    def __get__(self, obj, type=None):
        func = obj.__dict__.get(self._name, self._default_func)
        return types.MethodType(func, obj, type)
    def __set__(self, obj, value):
        if hasattr(value, 'im_func'): value = value.im_func
        obj.__dict__[self._name] = value
    def __delete__(self, obj):
        del obj.__dict__[self._name]
Now we can just decorate a function with @overloadable:
class SomeClass(object):
    def doStuff(self, string):
        print 'do stuff:', string
    @overloadable
    def overLoaded(self, arg):
        print 'default behavior:', arg
And it'll just do the right thing when we overload it for a given instance:
>>> sc = SomeClass()
>>> sc.overLoaded("test string") # Before customization
default behavior: test string
>>> sc.overLoaded = sc.doStuff # Customize
>>> sc.overLoaded("test string")
do stuff: test string
>>> del sc.overLoaded # Revert to default behavior
>>> sc.overLoaded("test string")
default behavior: test string


python getter and setter in dict style of static class

I have a class like:
class MyClass:
    Foo = 1
    Bar = 2
Whenever MyClass.Foo or MyClass.Bar is invoked, I need a custom method to be invoked before the value is returned. Is it possible in Python? I know it is possible if I create an instance of the class and define my own __getattr__ method. But my scenario involves using this class as such, without creating any instance of it.
Also I need a custom __str__ method to be invoked when str(MyClass.Foo) is invoked. Does Python provide such an option?
__getattr__() and __str__() for an object are found on its class, so if you want to customize those things for a class, you need the class-of-a-class. A metaclass.
class FooType(type):
    def _foo_func(cls):
        return 'foo!'

    def _bar_func(cls):
        return 'bar!'

    def __getattr__(cls, key):
        if key == 'Foo':
            return cls._foo_func()
        elif key == 'Bar':
            return cls._bar_func()
        raise AttributeError(key)

    def __str__(cls):
        return 'custom str for %s' % (cls.__name__,)

class MyClass(metaclass=FooType):
    pass

# in Python 2:
# class MyClass:
#     __metaclass__ = FooType
print(MyClass.Foo)
print(MyClass.Bar)
print(str(MyClass))
printing:
foo!
bar!
custom str for MyClass
And no, an object can't intercept a request for stringifying one of its attributes. The object returned for the attribute must define its own __str__() behavior.
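For example, a minimal sketch (FooValue is an illustrative name, not from the question): have the property return an object whose own __str__() produces the custom text.

class FooValue:
    def __str__(self):
        return 'custom str for Foo'

class FooType(type):
    @property
    def Foo(cls):
        return FooValue()

class MyClass(metaclass=FooType):
    pass

print(str(MyClass.Foo))  # custom str for Foo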
Updated 2023-02-20 for Python 3.x default implementation (python 2 as a comment).
(I know this is an old question, but since all the other answers use a metaclass...)
You can use the following simple classproperty descriptor:
class classproperty(object):
    """@classmethod + @property"""
    def __init__(self, f):
        self.f = classmethod(f)
    def __get__(self, *a):
        return self.f.__get__(*a)()
Use it like:
class MyClass(object):
    @classproperty
    def Foo(cls):
        do_something()
        return 1

    @classproperty
    def Bar(cls):
        do_something_else()
        return 2
For the first, you'll need to create a metaclass, and define __getattr__() on that.
class MyMetaclass(type):
    def __getattr__(self, name):
        return '%s result' % name

class MyClass(object):
    __metaclass__ = MyMetaclass

print MyClass.Foo
For the second, no. Calling str(MyClass.Foo) invokes MyClass.Foo.__str__(), so you'll need to return an appropriate type for MyClass.Foo.
Surprised no one pointed this one out:
class FooType(type):
    @property
    def Foo(cls):
        return "foo!"

    @property
    def Bar(cls):
        return "bar!"

class MyClass(metaclass=FooType):
    pass
Works:
>>> MyClass.Foo
'foo!'
>>> MyClass.Bar
'bar!'
(For Python 2.x, change the definition of MyClass to:

class MyClass(object):
    __metaclass__ = FooType

)
What the other answers say about str holds true for this solution: It must be implemented on the type actually returned.
Depending on the case, I use this pattern:

class _TheRealClass:
    def __getattr__(self, attr):
        pass

LooksLikeAClass = _TheRealClass()
Then you import and use it.
from foo import LooksLikeAClass
LooksLikeAClass.some_attribute
This avoids the use of a metaclass and handles some use cases.

How can I delay the __init__ call until an attribute is accessed?

I have a test framework that requires test cases to be defined using the following class patterns:
class TestBase:
    def __init__(self, params):
        self.name = str(self.__class__)
        print('initializing test: {} with params: {}'.format(self.name, params))

class TestCase1(TestBase):
    def run(self):
        print('running test: ' + self.name)
When I create and run a test, I get the following:
>>> test1 = TestCase1('test 1 params')
initializing test: <class '__main__.TestCase1'> with params: test 1 params
>>> test1.run()
running test: <class '__main__.TestCase1'>
The test framework searches for and loads all TestCase classes it can find, instantiates each one, then calls the run method for each test.
load_test(TestCase1(test_params1))
load_test(TestCase2(test_params2))
...
load_test(TestCaseN(test_params3))
...
for test in loaded_tests:
    test.run()
However, I now have some test cases for which I don't want the __init__ method called until the time that the run method is called, but I have little control over the framework structure or methods. How can I delay the call to __init__ without redefining the __init__ or run methods?
Update
The speculations that this originated as an XY problem are correct. A coworker asked me this question a while back when I was maintaining said test framework. I inquired further about what he was really trying to achieve and we figured out a simpler workaround that didn't involve changing the framework or introducing metaclasses, etc.
However, I still think this is a question worth investigating: if I wanted to create new objects with "lazy" initialization ("lazy" as in lazily evaluated objects such as range), what would be the best way of accomplishing it? My best attempt so far is listed below; I'm interested in knowing if there's anything simpler or less verbose.
First solution: use property, the elegant way to write setters/getters in Python.
class Bars(object):
    def __init__(self):
        self._foo = None

    @property
    def foo(self):
        if not self._foo:
            print("lazy initialization")
            self._foo = [1, 2, 3]
        return self._foo

if __name__ == "__main__":
    f = Bars()
    print(f.foo)
    print(f.foo)
Second solution: the proxy solution, often implemented with a descriptor.
In short, a proxy is a wrapper around the object you need. A proxy can provide additional functionality to the object it wraps without changing the object's code; it is a surrogate that provides the ability to control access to an object. The code below comes from user Cyclone.
class LazyProperty:
    def __init__(self, method):
        self.method = method
        self.method_name = method.__name__

    def __get__(self, obj, cls):
        if obj is None:
            return None
        value = self.method(obj)
        print('value {}'.format(value))
        # Cache the computed value on the instance, shadowing this
        # non-data descriptor for future lookups.
        setattr(obj, self.method_name, value)
        return value

class test:
    def __init__(self):
        self._resource = None

    @LazyProperty
    def resource(self):
        print("lazy")
        self._resource = tuple(range(5))
        return self._resource

if __name__ == '__main__':
    t = test()
    print(t.resource)
    print(t.resource)
    print(t.resource)
This is best used for true one-time calculated lazy properties. I like it because it avoids sticking extra attributes on objects, and once activated it does not waste time checking for attribute presence.
Metaclass option
You can intercept the call to __init__ using a metaclass. Create the object with __new__ and overwrite the __getattribute__ method to check if __init__ has been called or not and call it if it hasn't.
class DelayInit(type):
    def __call__(cls, *args, **kwargs):
        def init_before_get(obj, attr):
            if not object.__getattribute__(obj, '_initialized'):
                obj.__init__(*args, **kwargs)
                obj._initialized = True
            return object.__getattribute__(obj, attr)
        cls.__getattribute__ = init_before_get
        new_obj = cls.__new__(cls, *args, **kwargs)
        new_obj._initialized = False
        return new_obj

class TestDelayed(TestCase1, metaclass=DelayInit):
    pass
In the example below, you'll see that the init print won't occur until the run method is executed.
>>> new_test = TestDelayed('delayed test params')
>>> new_test.run()
initializing test: <class '__main__.TestDelayed'> with params: delayed test params
running test: <class '__main__.TestDelayed'>
Decorator option
You could also use a decorator that has a similar pattern to the metaclass above:
def delayinit(cls):
    def init_before_get(obj, attr):
        if not object.__getattribute__(obj, '_initialized'):
            obj.__init__(*obj._init_args, **obj._init_kwargs)
            obj._initialized = True
        return object.__getattribute__(obj, attr)
    cls.__getattribute__ = init_before_get

    def construct(*args, **kwargs):
        obj = cls.__new__(cls, *args, **kwargs)
        obj._init_args = args
        obj._init_kwargs = kwargs
        obj._initialized = False
        return obj
    return construct

@delayinit
class TestDelayed(TestCase1):
    pass
This will behave identically to the example above.
In Python, with the default metaclass, there is no way to avoid calling __init__ when you instantiate a class cls: if calling cls(args) returns an instance of cls, the language guarantees that cls.__init__ will have been called on it.
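(A quick illustration of that precondition, with a made-up class name: if __new__ returns something that is not an instance of cls, __init__ is skipped.)

class Evasive:
    def __new__(cls, *args, **kwargs):
        return object()  # not an instance of Evasive

    def __init__(self, *args, **kwargs):
        print('never called')

e = Evasive()            # no output: __init__ was skipped
print(type(e).__name__)  # object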
So the only way to achieve something similar to what you are asking is to introduce another class that will postpone the calling of __init__ in the original class until an attribute of the instantiated class is being accessed.
Here is one way:
def delay_init(cls):
    class Delay(cls):
        def __init__(self, *arg, **kwarg):
            self._arg = arg
            self._kwarg = kwarg

        def __getattribute__(self, name):
            self.__class__ = cls
            arg = self._arg
            kwarg = self._kwarg
            del self._arg
            del self._kwarg
            self.__init__(*arg, **kwarg)
            return getattr(self, name)
    return Delay
This wrapper function works by catching any attempt to access an attribute of the instantiated class. When such an attempt is made, it changes the instance's __class__ to the original class, calls the original __init__ method with the arguments that were used when the instance was created, and then returns the proper attribute. This function can be used as decorator for your TestCase1 class:
class TestBase:
    def __init__(self, params):
        self.name = str(self.__class__)
        print('initializing test: {} with params: {}'.format(self.name, params))

class TestCase1(TestBase):
    def run(self):
        print('running test: ' + self.name)
>>> t1 = TestCase1("No delay")
initializing test: <class '__main__.TestCase1'> with params: No delay
>>> t2 = delay_init(TestCase1)("Delayed init")
>>> t1.run()
running test: <class '__main__.TestCase1'>
>>> t2.run()
initializing test: <class '__main__.TestCase1'> with params: Delayed init
running test: <class '__main__.TestCase1'>
Be careful where you apply this function though. If you decorate TestBase with delay_init, it will not work, because it will turn the TestCase1 instances into TestBase instances.
In my answer I'd like to focus on cases when one wants to instantiate a class whose initialiser (dunder init) has side effects. For instance, pysftp.Connection creates an SSH connection, which may be undesired until it's actually used.
In a great blog series about conceiving the wrapt package (a nit-picky decorator implementation), the author describes a transparent object proxy. This code can be customised for the subject in question.
class LazyObject:
    _factory = None
    '''Callable responsible for creation of target object'''

    _object = None
    '''Target object created lazily'''

    def __init__(self, factory):
        self._factory = factory

    def __getattr__(self, name):
        if not self._object:
            self._object = self._factory()
        return getattr(self._object, name)
Then it can be used as:
obj = LazyObject(lambda: dict(foo='bar'))
obj.keys()  # dict_keys(['foo'])
But len(obj), obj['foo'] and other language constructs which invoke Python object protocols (dunder methods, like __len__ and __getitem__) will not work. However, for many cases, which are limited to regular methods, this is a solution.
To proxy object protocol implementations, it is not possible to use __getattr__ or __getattribute__ (at least not in a generic way). The latter's documentation notes:
This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.
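For instance, here is that bypass in action (a minimal sketch): the dunder is looked up on the type, so instance-level interception never fires for len().

class P:
    def __getattr__(self, name):
        # Called for ordinary attribute misses only.
        return lambda *args: 'proxied: %s' % name

p = P()
print(p.anything())  # proxied: anything
# len(p)             # TypeError: object of type 'P' has no len()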
When a complete solution is demanded, there are examples of manual implementations, like werkzeug's LocalProxy and django's SimpleLazyObject. However, a clever workaround is possible.
Luckily there's a dedicated package (based on wrapt) for the exact use case, lazy-object-proxy which is described in this blog post.
from lazy_object_proxy import Proxy

obj = Proxy(lambda: dict(foo='bar'))
obj.keys()  # dict_keys(['foo'])
len(obj)    # 1
obj['foo']  # 'bar'
One alternative would be to write a wrapper that takes a class as input and returns a class with delayed initialization until any member is accessed. This could for example be done as this:
def lazy_init(cls):
    class LazyInit(cls):
        def __init__(self, *args, **kwargs):
            self.args = args
            self.kwargs = kwargs
            self._initialized = False

        def __getattr__(self, attr):
            if not self.__dict__['_initialized']:
                cls.__init__(self,
                             *self.__dict__['args'], **self.__dict__['kwargs'])
                self._initialized = True
            return self.__dict__[attr]
    return LazyInit
This could then be used as such
load_test(lazy_init(TestCase1)(test_params1))
load_test(lazy_init(TestCase2)(test_params2))
...
load_test(lazy_init(TestCaseN)(test_params3))
...
for test in loaded_tests:
    test.run()
Answering your original question (and the problem I think you are actually trying to solve), "How can I delay the init call until an attribute is accessed?": don't call init until you access the attribute.
Said another way: you can make the class initialization simultaneous with the attribute call. What you seem to actually want is 1) create a collection of TestCase# classes along with their associated parameters; 2) run each test case.
Probably your original problem came from thinking you had to initialize all your TestCase classes in order to create a list of them that you could iterate over. But in fact you can store class objects in lists, dicts etc. That means you can do whatever method you have for finding all TestCase classes and store those class objects in a dict with their relevant parameters. Then just iterate that dict and call each class with its run() method.
It might look like:
tests = {TestCase1: 'test 1 params', TestCase2: 'test 2 params', TestCase3: 'test 3 params'}
for test_case, param in tests.items():
    test_case(param).run()
Overriding __new__
You could do this by overriding the __new__ method and replacing the __init__ method with a custom function.
def init(cls, real_init):
    def wrapped(self, *args, **kwargs):
        # This will run during the first call to `__init__`
        # made after `__new__`. Here we re-assign the original
        # __init__ back to the class and assign a custom function
        # to `instance.__init__`.
        cls.__init__ = real_init
        def new_init():
            if new_init.called is False:
                real_init(self, *args, **kwargs)
                new_init.called = True
        new_init.called = False
        self.__init__ = new_init
    return wrapped

class DelayInitMixin(object):
    def __new__(cls, *args, **kwargs):
        cls.__init__ = init(cls, cls.__init__)
        return object.__new__(cls)

class A(DelayInitMixin):
    def __init__(self, a, b):
        print('inside __init__')
        self.a = sum(a)
        self.b = sum(b)

    def __getattribute__(self, attr):
        init = object.__getattribute__(self, '__init__')
        if not init.called:
            init()
        return object.__getattribute__(self, attr)

    def run(self):
        pass

    def fun(self):
        pass
Demo:
>>> a = A(range(1000), range(10000))
>>> a.run()
inside __init__
>>> a.a, a.b
(499500, 49995000)
>>> a.run(), a.__init__()
(None, None)
>>> b = A(range(100), range(10000))
>>> b.a, b.b
inside __init__
(4950, 49995000)
>>> b.run(), b.__init__()
(None, None)
Using cached properties
The idea is to do the heavy calculation only once by caching results. This approach will lead to much more readable code if the whole point of delaying initialization is improving performance.
Django comes with a nice decorator called @cached_property. I tend to use it a lot in both code and unit tests for caching the results of heavy properties.
A cached_property is a non-data descriptor. Hence, once the key is set in the instance's dictionary, attribute access will always get the value from there.
class cached_property(object):
    """
    Decorator that converts a method with a single self argument into a
    property cached on the instance.

    Optional ``name`` argument allows you to make cached properties of other
    methods. (e.g. url = cached_property(get_absolute_url, name='url'))
    """
    def __init__(self, func, name=None):
        self.func = func
        self.__doc__ = getattr(func, '__doc__')
        self.name = name or func.__name__

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        res = instance.__dict__[self.name] = self.func(instance)
        return res
Usage:
class A:
    @cached_property
    def a(self):
        print('calculating a')
        return sum(range(1000))

    @cached_property
    def b(self):
        print('calculating b')
        return sum(range(10000))
Demo:
>>> a = A()
>>> a.a
calculating a
499500
>>> a.b
calculating b
49995000
>>> a.a, a.b
(499500, 49995000)
I think you can use a wrapper class to hold the real class you want to instantiate, and call __init__ yourself in your code, like this (Python 3 code):
class Wrapper:
    def __init__(self, cls):
        self.cls = cls
        self.instance = None

    def your_method(self, *args, **kwargs):
        if not self.instance:
            self.instance = self.cls()  # __init__ runs here, on first use
        return self.instance.your_method(*args, **kwargs)

class YourClass:
    def __init__(self):
        print("calling __init__")

    def your_method(self, *args, **kwargs):  # the method being delegated to
        pass
It's a clumsy way, but it works without any tricks.

python pass self to referenced function

I have a function reference that is being called by an object. What I'd like is for that object to pass itself as a parameter to the function as it would normally do for its own functions.
That is, I would like to use self.myFoo() instead of self.myFoo(self) in the following sample.
Code Sample:
def foo(self):
    print(self.toString())

class Node:
    def __init__(self, myFoo):
        self.myFoo = myFoo

    def run(self):
        self.myFoo()

    def toString(self):
        return "Hello World"

n = Node(foo)
n.run()
The problem is that you assigned myFoo to an unbound function and are calling it as though it were bound. If you want to be able to use self.myFoo() you will need to curry the object into the first arg yourself.
from functools import partial

def foo(self):
    print(self.toString())

class Node:
    def __init__(self, myFoo):
        self.myFoo = partial(myFoo, self)

    def run(self):
        self.myFoo()

    def toString(self):
        return "Hello World"

n = Node(foo)
n.run()
Alternatively you could use
self.myFoo = types.MethodType(myFoo, self)
in your __init__(self, myFoo) method, but using partial is more commonly done, and more versatile since you can use it to curry arguments for any sort of function, not just methods.
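For example, a small sketch of that versatility, reusing the Node instance n from above (greet and the bound arguments are made up for illustration): partial can pre-bind more than just the instance.

from functools import partial

def greet(self, greeting, punctuation):
    return "{}, {}{}".format(greeting, self.toString(), punctuation)

# Bind the instance and a fixed greeting, leaving punctuation open:
n.myGreet = partial(greet, n, "Hello")
print(n.myGreet("!"))  # Hello, Hello World!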
This thread looks like what you are looking for. To bind a method to an object (but not to its class) at runtime you can do:
import types
n.foo = types.MethodType(foo, n)
foo is a simple function. To make it callable via self.myFoo() you can attach it to the class, not the object. Even though Node.myFoo is a plain function, you can call it via self.myFoo() within your class, because looking it up on an instance binds it.
def __init__(self, myFoo):
    Node.myFoo = myFoo
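Note that this attaches the function to the class itself, so every Node instance shares it; a minimal sketch of the consequence (other is a made-up second function):

def foo(self):
    print(self.toString())

def other(self):
    print("something else")

n1 = Node(foo)
n2 = Node(other)  # rebinding Node.myFoo affects n1 as well
n1.myFoo()        # prints "something else", not "Hello World"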

How do I directly mock a superclass with python mock?

I am using the python mock framework for testing (http://www.voidspace.org.uk/python/mock/) and I want to mock out a superclass and focus on testing the subclasses' added behavior.
(For those interested I have extended pymongo.collection.Collection and I want to only test my added behavior. I do not want to have to run mongodb as another process for testing purposes.)
For this discussion, A is the superclass and B is the subclass. Furthermore, I define direct and indirect superclass calls as shown below:
class A(object):
    def method(self):
        ...
    def another_method(self):
        ...

class B(A):
    def direct_superclass_call(self):
        ...
        A.method(self)

    def indirect_superclass_call(self):
        ...
        super(B, self).another_method()
Approach #1
Define a mock class for A called MockA and use mock.patch to substitute it for the test at runtime. This handles direct superclass calls. Then manipulate B.__bases__ to handle indirect superclass calls. (see below)
The issue that arises is that I have to write MockA and in some cases (as in the case for pymongo.collection.Collection) this can involve a lot of work to unravel all of the internal calls to mock out.
Approach #2
The desired approach is to somehow use a mock.Mock() class to handle calls on the mock just in time, as well as define return_value or side_effect in place in the test. In this manner, I have to do less work by avoiding the definition of MockA.
The issue that I am having is that I cannot figure out how to alter B.__bases__ so that an instance of mock.Mock() can be put in place as a superclass (I presumably need to do some direct binding here). Thus far I have determined that super() examines the MRO and then calls the first class that defines the method in question. I cannot figure out how to get the superclass lookup to succeed when it comes across a mock class. __getattr__ does not seem to be used in this case. I want super to think that the method is defined at this point and then use the mock.Mock() functionality as usual.
How does super() discover what attributes are defined within the class in the MRO sequence? And is there a way for me to interject here and to somehow get it to utilize a mock.Mock() on the fly?
import mock

class A(object):
    def __init__(self, value):
        self.value = value
    def get_value_direct(self):
        return self.value
    def get_value_indirect(self):
        return self.value

class B(A):
    def __init__(self, value):
        A.__init__(self, value)
    def get_value_direct(self):
        return A.get_value_direct(self)
    def get_value_indirect(self):
        return super(B, self).get_value_indirect()

# approach 1 - use a defined MockA
class MockA(object):
    def __init__(self, value):
        pass
    def get_value_direct(self):
        return 0
    def get_value_indirect(self):
        return 0

B.__bases__ = (MockA, )  # mock superclass
with mock.patch('__main__.A', MockA):
    b2 = B(7)
    print '\nApproach 1'
    print 'expected result = 0'
    print 'direct =', b2.get_value_direct()
    print 'indirect =', b2.get_value_indirect()
B.__bases__ = (A, )  # original superclass

# approach 2 - use mock module to mock out superclass
# what does XXX need to be below to use mock.Mock()?
#B.__bases__ = (XXX, )
with mock.patch('__main__.A') as mymock:
    b3 = B(7)
    mymock.get_value_direct.return_value = 0
    mymock.get_value_indirect.return_value = 0
    print '\nApproach 2'
    print 'expected result = 0'
    print 'direct =', b3.get_value_direct()
    print 'indirect =', b3.get_value_indirect()  # FAILS HERE as the old superclass is called
#B.__bases__ = (A, )  # original superclass
is there a way for me to interject here and to somehow get it to utilize a mock.Mock() on the fly?
There may be better approaches, but you can always write your own super() and inject it into the module that contains the class you're mocking. Have it return whatever it should based on what's calling it.
You can either just define super() in the current namespace (in which case the redefinition only applies to the current module after the definition), or you can import __builtin__ and apply the redefinition to __builtin__.super, in which case it will apply globally in the Python session.
You can capture the original super function (if you need to call it from your implementation) using a default argument:
def super(type, obj=None, super=super):
    # inside the function, super refers to the built-in
    ...
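For instance, a minimal sketch of such a replacement (the routing condition and the MagicMock return are illustrative; the real logic depends on your hierarchy):

import mock

_builtin_super = super  # capture the built-in before shadowing it

def super(type, obj=None):
    # Route super() calls made from B to a mock instead of A.
    if type is B and obj is not None:
        return mock.MagicMock()
    if obj is None:
        return _builtin_super(type)
    return _builtin_super(type, obj)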
I played around with mocking out super() as suggested by kindall. Unfortunately, after a great deal of effort it became quite complicated to handle complex inheritance cases.
After some work I realized that super() accesses the __dict__ of classes directly when resolving attributes through the MRO (it does not do a getattr type of call). The solution is to extend a mock.MagicMock() object and wrap it with a class to accomplish this. The wrapped class can then be placed in the __bases__ variable of a subclass.
The wrapped object reflects all defined attributes of the target class to the __dict__ of the wrapping class so that super() calls resolve to the properly patched in attributes within the internal MagicMock().
The following code is the solution that I have found to work thus far. Note that I actually implement this within a context handler. Also, care has to be taken to patch in the proper namespaces if importing from other modules.
This is a simple example illustrating the approach:
from mock import MagicMock
import inspect

class _WrappedMagicMock(MagicMock):
    def __init__(self, *args, **kwds):
        object.__setattr__(self, '_mockclass_wrapper', None)
        super(_WrappedMagicMock, self).__init__(*args, **kwds)

    def wrap(self, cls):
        # get defined attributes of the spec class that need to be preset
        base_attrs = dir(type('Dummy', (object,), {}))
        attrs = inspect.getmembers(self._spec_class)
        new_attrs = [a[0] for a in attrs if a[0] not in base_attrs]
        # preset mocks for attributes in the target mock class
        for name in new_attrs:
            setattr(cls, name, getattr(self, name))
        # eat up any attempts to initialize the target mock class
        setattr(cls, '__init__', lambda *args, **kwds: None)
        object.__setattr__(self, '_mockclass_wrapper', cls)

    def unwrap(self):
        object.__setattr__(self, '_mockclass_wrapper', None)

    def __setattr__(self, name, value):
        super(_WrappedMagicMock, self).__setattr__(name, value)
        # be sure to reflect changes to the wrapper class if activated
        if self._mockclass_wrapper is not None:
            setattr(self._mockclass_wrapper, name, value)

    def _get_child_mock(self, **kwds):
        # child mocks created along the way need only be MagicMocks
        return MagicMock(**kwds)
class A(object):
    x = 1
    def __init__(self, value):
        self.value = value
    def get_value_direct(self):
        return self.value
    def get_value_indirect(self):
        return self.value

class B(A):
    def __init__(self, value):
        super(B, self).__init__(value)
    def f(self):
        return 2
    def get_value_direct(self):
        return A.get_value_direct(self)
    def get_value_indirect(self):
        return super(B, self).get_value_indirect()

# nominal behavior
b = B(3)
assert b.get_value_direct() == 3
assert b.get_value_indirect() == 3
assert b.f() == 2
assert b.x == 1

# using mock class
MockClass = type('MockClassWrapper', (), {})
mock = _WrappedMagicMock(A)
mock.wrap(MockClass)

# patch the mock in
B.__bases__ = (MockClass, )
A = MockClass

# set values within the mock
mock.x = 0
mock.get_value_direct.return_value = 0
mock.get_value_indirect.return_value = 0

# mocked behavior
b = B(7)
assert b.get_value_direct() == 0
assert b.get_value_indirect() == 0
assert b.f() == 2
assert b.x == 0

Namespaces inside class in Python3

I am new to Python and I wonder if there is any way to aggregate methods into 'subspaces'. I mean something similar to this syntax:
smth = Something()
smth.subspace.do_smth()
smth.another_subspace.do_smth_else()
I am writing an API wrapper and I'm going to have a lot of very similar methods (differing only in URI), so I thought it would be good to place them in a few subspaces that correspond to the API request categories. In other words, I want to create namespaces inside a class. I don't know if this is even possible in Python and have no idea what to look for in Google.
I will appreciate any help.
One way to do this is by defining subspace and another_subspace as properties that return objects that provide do_smth and do_smth_else respectively:
class Something:
    @property
    def subspace(self):
        class SubSpaceClass:
            def do_smth(other_self):
                print('do_smth')
        return SubSpaceClass()

    @property
    def another_subspace(self):
        class AnotherSubSpaceClass:
            def do_smth_else(other_self):
                print('do_smth_else')
        return AnotherSubSpaceClass()
Which does what you want:
>>> smth = Something()
>>> smth.subspace.do_smth()
do_smth
>>> smth.another_subspace.do_smth_else()
do_smth_else
Depending on what you intend to use the methods for, you may want to make SubSpaceClass a singleton, but I doubt the performance gain is worth it.
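For instance, if you did want to build the subspace object only once per instance, here is a minimal sketch using functools.cached_property (Python 3.8+; SubSpace is an illustrative name):

import functools

class SubSpace:
    def __init__(self, owner):
        self._owner = owner

    def do_smth(self):
        print('do_smth')

class Something:
    @functools.cached_property
    def subspace(self):
        # Built on first access, then cached in the instance __dict__.
        return SubSpace(self)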
I had this need a couple years ago and came up with this:
from functools import partial

class Registry:
    """Namespace within a class."""
    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        else:
            return InstanceRegistry(self, obj)

    def __call__(self, name=None):
        def decorator(f):
            use_name = name or f.__name__
            if hasattr(self, use_name):
                raise ValueError("%s is already registered" % use_name)
            setattr(self, use_name, f)
            return f
        return decorator

class InstanceRegistry:
    """
    Helper for accessing a namespace from an instance of the class.

    Used internally by :class:`Registry`. Returns a partial that will pass
    the instance as the first parameter.
    """
    def __init__(self, registry, obj):
        self.__registry = registry
        self.__obj = obj

    def __getattr__(self, attr):
        return partial(getattr(self.__registry, attr), self.__obj)

# Usage:
class Something:
    subspace = Registry()
    another_subspace = Registry()

    @subspace()
    def do_smth(self):
        # `self` will be an instance of Something
        pass

    @another_subspace('do_smth_else')
    def this_can_be_called_anything_and_take_any_parameter_name(obj, other):
        # Call it `obj` or whatever else if `self` outside a class is unsettling
        pass
At runtime:
>>> smth = Something()
>>> smth.subspace.do_smth()
>>> smth.another_subspace.do_smth_else('other')
This is compatible with Py2 and Py3. Some performance optimizations are possible in Py3 because __set_name__ tells us what the namespace is called and allows caching the instance registry.
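A sketch of that optimization (hypothetical, not from the original): since Registry only defines __get__, it is a non-data descriptor, so caching the InstanceRegistry in the instance's __dict__ under the name reported by __set_name__ shadows the descriptor on subsequent lookups.

class CachingRegistry(Registry):
    def __set_name__(self, owner, name):
        # Py3 reports the attribute name the descriptor is bound to.
        self._name = name

    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        registry = InstanceRegistry(self, obj)
        # Cache on the instance; later lookups bypass __get__ entirely.
        obj.__dict__[self._name] = registry
        return registry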
