I'm trying to create a proxy (wrapper) object so I can modify the behaviour of an already instantiated object. The attributes of the wrapper class, along with the attributes of the underlying object, are copied onto a newly generated class (using type). It's done this way because Python __magic__ methods are only looked up on the class, not on the instance. So cls is the wrapper class and client is the underlying object:
def __new__(cls, client, *args, **kwargs):
    ns = {}
    for i, attr in inspect.getmembers(client):
        if i in ('__init__', '__new__', '__getattribute__', '__dict__'):
            continue
        ns[i] = attr
    for i in cls.__dict__:
        if i in ('__new__',):
            continue
        elif i == '__init__':
            ns['_init_'] = getattr(cls, i)
            continue
        attr = getattr(cls, i)
        ns[i] = attr
    P = type(cls.__name__ + "." + client.__class__.__name__,
             (Proxy2.BaseProxy,), ns)
    P._client_ = client
    return P(*args, **kwargs)
The problem comes from @staticmethod/@classmethod in the wrapper class: I cannot call a static method on an instance because self is passed to it. I've tried to use __get__ but without success. Here's a minimal example which fails:
class SuperA:
    @staticmethod
    def static():
        return 'static?'

class A:
    def __getattribute__(self, attr):
        v = object.__getattribute__(self, attr)
        if hasattr(v, '__get__'):
            v2 = v.__get__(None, self)
            return v2
        return v

A.static = getattr(SuperA, "static")
print(A.static())    # success
print(A().static())  # fail
Instead of getattr, which invokes the descriptor mechanism, inspect.getattr_static can be used like this:
A.static = inspect.getattr_static(SuperA, "static")
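For illustration, here is a minimal runnable sketch (reusing the classes from the failing example above) showing that copying the attribute with inspect.getattr_static keeps the staticmethod wrapper intact, so the call works both on the class and on an instance:

import inspect

class SuperA:
    @staticmethod
    def static():
        return 'static?'

class A:
    def __getattribute__(self, attr):
        v = object.__getattribute__(self, attr)
        if hasattr(v, '__get__'):
            return v.__get__(None, self)
        return v

# getattr_static returns the raw staticmethod object, so the descriptor
# protocol still strips `self` on later lookups
A.static = inspect.getattr_static(SuperA, "static")

print(A.static())    # static?
print(A().static())  # static? -- no longer fails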
My question is: how do I create a class like slice?
slice (the built-in type) doesn't have a __dict__ attribute, even though the metaclass of slice is type.
It is not using __slots__, all of its attributes are read-only, and it's not overriding __setattr__ (I'm not sure about this last part, but look at my code and see if I'm right).
Check this code:
# how slice is removing the __dict__ from the class object
# and the metaclass is type!!
class sliceS(object):
    pass

class sliceS0(object):
    def __setattr__(self, name, value):
        pass

# this means that both have the same
# metaclass type.
print type(slice) == type(sliceS)  # prints True

# from what i understand the metaclass is the one
# that is responsible for making the class object
sliceS2 = type('sliceS2', (object,), {})
# which is the same as
# sliceS2 = type.__new__(type, 'sliceS2', (object,), {})
print type(sliceS2)  # prints type

# but when i check the list of attributes using dir
print '__dict__' in dir(slice)   # prints False
print '__dict__' in dir(sliceS)  # prints True

# now when i try to set an attribute on slice
obj_slice = slice(10)
# there is no __dict__ here
print '__dict__' in dir(obj_slice)  # prints False
obj_sliceS = sliceS()
try:
    obj_slice.x = 1
except AttributeError as e:
    # you get AttributeError
    # meaning you cannot add new attributes
    print "'slice' object has no attribute 'x'"
obj_sliceS.x = 1  # Ok: x is added to __dict__ of obj_sliceS
print 'x' in obj_sliceS.__dict__  # prints True

# and slice is not using __slots__ because as you see it's not here
print '__slots__' in dir(slice)  # prints False
# and this is why i'm saying it's not overriding __setattr__
print id(obj_slice.__setattr__) == id(obj_sliceS.__setattr__)   # True: it's the same object
obj_sliceS0 = sliceS0()
print id(obj_slice.__setattr__) == id(obj_sliceS0.__setattr__)  # False: sliceS0 overrides __setattr__, slice apparently doesn't

# so slice has only start, stop, step, all read-only attributes, and it's not overriding __setattr__
# what technique is it using?!!!!
How do you make this kind of first-class object, where all of its attributes are read-only and you cannot add new attributes?
The thing is that Python's built-in slice class is programmed in C. When you code using the CPython C API, you can create the equivalent of attributes accessible through __slots__ without using any mechanism visible from the Python side. (You can even have "real" private attributes, which are virtually impossible with Python-only code.)
The mechanism pure-Python code uses to prevent a __dict__ on a class' instances, and with it the ability to set arbitrary attributes, is exactly the __slots__ attribute.
However, unlike magic dunder methods, which have to be present when the class is actually used, the information in __slots__ is consumed when the class is created, and only then. So, if what concerns you is having a visible __slots__ on your final class, you can just remove it from the class before exposing it:
In [8]: class A:
   ...:     __slots__ = "b"
   ...:

In [9]: del A.__slots__

In [10]: a = A()

In [11]: a.b = 5

In [12]: a.c = 5
---------------------------------------------------------------------------
AttributeError
...

In [13]: A.__slots__
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-13-68a69c802e74> in <module>()
----> 1 A.__slots__

AttributeError: type object 'A' has no attribute '__slots__'
If you don't want a del MyClass.__slots__ line to be visible wherever you declare a class, it is a one-line class decorator:
def slotless(cls):
    del cls.__slots__
    return cls

@slotless
class MyClass:
    __slots__ = "x y".split()
Or, you could use a metaclass to auto-create and auto-destroy the Python-visible __slots__, so that you can declare your descriptors and attributes in the class body and have the class protected against extra attributes:
class AttrOnly(type):
    def __new__(metacls, name, bases, namespace, **kw):
        # Use the names declared in the class body (minus dunders) as slots.
        # They have to be removed from the namespace as well, otherwise type()
        # raises "ValueError: ... in __slots__ conflicts with class variable".
        attrs = [key for key in namespace if not key.startswith("__")]
        for key in attrs:
            del namespace[key]
        namespace["__slots__"] = attrs
        cls = super().__new__(metacls, name, bases, namespace, **kw)
        del cls.__slots__
        return cls

class MyClass(metaclass=AttrOnly):
    x = int
    y = int
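A quick usage sketch (the attribute values above are just placeholders) of how instances of such a class behave:

m = MyClass()
m.x = 10                              # fine: "x" became a slot behind the scenes
m.y = 20
# m.z = 3                             # AttributeError: 'MyClass' object has no attribute 'z'
print(hasattr(MyClass, '__slots__'))  # False -- the attribute was removed after class creation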
If you want pure-Python read-only attributes which do not have a visible counterpart in the instance itself (like a ._x that is used by a property descriptor to keep the value of an x attribute), the straightforward way is to customize __setattr__. Another approach is to have your metaclass auto-add a read-only property for each attribute at class creation time. The metaclass below does that, using the __slots__ class attribute to create the desired descriptors:
class ReadOnlyAttrs(type):
    def __new__(metacls, name, bases, namespace, **kw):

        def get_setter(attr):
            def setter(self, value):
                if getattr(self, "_initialized", False):
                    raise ValueError("Can't set " + attr)
                setattr(self, "_" + attr, value)
            return setter

        slots = namespace.get("__slots__", [])
        slots.append("initialized")

        def __new__(cls, *args, **kw):
            self = object.__new__(cls)  # for production code that could have an arbitrary hierarchy, this needs to be done more carefully
            for attr, value in kw.items():
                setattr(self, attr, value)
            self.initialized = True
            return self

        namespace["__new__"] = __new__

        real_slots = []
        for attr in slots:
            real_slots.append("_" + attr)
            namespace[attr] = property(
                (lambda attr: lambda self: getattr(self, "_" + attr))(attr),  # getter; the extra lambda creates a closure holding each attr
                get_setter(attr)
            )
        namespace["__slots__"] = real_slots
        cls = super().__new__(metacls, name, bases, namespace, **kw)
        del cls.__slots__

        return cls
Keep in mind that you can also customize the class' __dir__ method so that the _x shadow attributes are not visible, if you want to.
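For illustration, a quick usage sketch of the metaclass above (the Car class and its attribute names are hypothetical), including such a __dir__ override:

class Car(metaclass=ReadOnlyAttrs):
    __slots__ = ["model", "year"]

    def __dir__(self):
        # hide the "_model"/"_year" shadow slots from dir()
        return [name for name in super().__dir__() if not name.startswith("_")]

car = Car(model="Anglia", year=1960)
print(car.model, car.year)   # Anglia 1960
# car.year = 1977            # ValueError: Can't set year
# car.color = "red"          # AttributeError: no such slot and no __dict__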
I have a test framework that requires test cases to be defined using the following class patterns:
class TestBase:
    def __init__(self, params):
        self.name = str(self.__class__)
        print('initializing test: {} with params: {}'.format(self.name, params))

class TestCase1(TestBase):
    def run(self):
        print('running test: ' + self.name)
When I create and run a test, I get the following:
>>> test1 = TestCase1('test 1 params')
initializing test: <class '__main__.TestCase1'> with params: test 1 params
>>> test1.run()
running test: <class '__main__.TestCase1'>
The test framework searches for and loads all TestCase classes it can find, instantiates each one, then calls the run method for each test.
load_test(TestCase1(test_params1))
load_test(TestCase2(test_params2))
...
load_test(TestCaseN(test_params3))
...
for test in loaded_tests:
    test.run()
However, I now have some test cases for which I don't want the __init__ method called until the time that the run method is called, but I have little control over the framework structure or methods. How can I delay the call to __init__ without redefining the __init__ or run methods?
Update
The speculations that this originated as an XY problem are correct. A coworker asked me this question a while back when I was maintaining said test framework. I inquired further about what he was really trying to achieve and we figured out a simpler workaround that didn't involve changing the framework or introducing metaclasses, etc.
However, I still think this is a question worth investigating: if I wanted to create new objects with "lazy" initialization ("lazy" as in lazy-evaluation generators such as range, etc.), what would be the best way of accomplishing it? My best attempt so far is listed below; I'm interested in knowing if there's anything simpler or less verbose.
First solution: use property, the elegant way of writing setters/getters in Python.
class Bars(object):
    def __init__(self):
        self._foo = None

    @property
    def foo(self):
        if not self._foo:
            print("lazy initialization")
            self._foo = [1, 2, 3]
        return self._foo

if __name__ == "__main__":
    f = Bars()
    print(f.foo)
    print(f.foo)
Second solution: the proxy solution, usually implemented with a decorator.
In short, a proxy is a wrapper around the object you need. A proxy can provide additional functionality to the object it wraps without changing the object's code; it is a surrogate that provides the ability to control access to an object. The code below comes from user Cyclone.
class LazyProperty:
    def __init__(self, method):
        self.method = method
        self.method_name = method.__name__

    def __get__(self, obj, cls):
        if not obj:
            return None
        value = self.method(obj)
        print('value {}'.format(value))
        setattr(obj, self.method_name, value)
        return value

class test:
    def __init__(self):
        self._resource = None

    @LazyProperty
    def resource(self):
        print("lazy")
        self._resource = tuple(range(5))
        return self._resource

if __name__ == '__main__':
    t = test()
    print(t.resource)
    print(t.resource)
    print(t.resource)
This is to be used for true one-time-calculated lazy properties. I like it because it avoids sticking extra attributes on objects, and once activated it does not waste time checking for attribute presence.
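A small sketch using the test class above to make the caching visible: after the first access the computed value lives in the instance's __dict__, so the (non-data) descriptor is no longer consulted:

t = test()
t.resource                     # the descriptor prints "lazy" and "value ..." and computes once
print('resource' in vars(t))   # True -- the value now shadows the descriptor
t.resource                     # silent: served straight from the instance dict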
Metaclass option
You can intercept the call to __init__ using a metaclass. Create the object with __new__ and overwrite the __getattribute__ method to check if __init__ has been called or not and call it if it hasn't.
class DelayInit(type):
    def __call__(cls, *args, **kwargs):
        def init_before_get(obj, attr):
            if not object.__getattribute__(obj, '_initialized'):
                obj.__init__(*args, **kwargs)
                obj._initialized = True
            return object.__getattribute__(obj, attr)
        cls.__getattribute__ = init_before_get
        new_obj = cls.__new__(cls, *args, **kwargs)
        new_obj._initialized = False
        return new_obj

class TestDelayed(TestCase1, metaclass=DelayInit):
    pass
In the example below, you'll see that the init print won't occur until the run method is executed.
>>> new_test = TestDelayed('delayed test params')
>>> new_test.run()
initializing test: <class '__main__.TestDelayed'> with params: delayed test params
running test: <class '__main__.TestDelayed'>
Decorator option
You could also use a decorator that has a similar pattern to the metaclass above:
def delayinit(cls):
    def init_before_get(obj, attr):
        if not object.__getattribute__(obj, '_initialized'):
            obj.__init__(*obj._init_args, **obj._init_kwargs)
            obj._initialized = True
        return object.__getattribute__(obj, attr)
    cls.__getattribute__ = init_before_get

    def construct(*args, **kwargs):
        obj = cls.__new__(cls, *args, **kwargs)
        obj._init_args = args
        obj._init_kwargs = kwargs
        obj._initialized = False
        return obj
    return construct

@delayinit
class TestDelayed(TestCase1):
    pass
This will behave identically to the example above.
In Python, there is no way that you can avoid calling __init__ when you instantiate a class cls. If calling cls(args) returns an instance of cls, then the language guarantees that cls.__init__ will have been called.
So the only way to achieve something similar to what you are asking is to introduce another class that will postpone the calling of __init__ in the original class until an attribute of the instantiated class is being accessed.
Here is one way:
def delay_init(cls):
    class Delay(cls):
        def __init__(self, *arg, **kwarg):
            self._arg = arg
            self._kwarg = kwarg

        def __getattribute__(self, name):
            self.__class__ = cls
            arg = self._arg
            kwarg = self._kwarg
            del self._arg
            del self._kwarg
            self.__init__(*arg, **kwarg)
            return getattr(self, name)
    return Delay
This wrapper function works by catching any attempt to access an attribute of the instantiated class. When such an attempt is made, it changes the instance's __class__ to the original class, calls the original __init__ method with the arguments that were used when the instance was created, and then returns the proper attribute. This function can be used as decorator for your TestCase1 class:
class TestBase:
    def __init__(self, params):
        self.name = str(self.__class__)
        print('initializing test: {} with params: {}'.format(self.name, params))

class TestCase1(TestBase):
    def run(self):
        print('running test: ' + self.name)
>>> t1 = TestCase1("No delay")
initializing test: <class '__main__.TestCase1'> with params: No delay
>>> t2 = delay_init(TestCase1)("Delayed init")
>>> t1.run()
running test: <class '__main__.TestCase1'>
>>> t2.run()
initializing test: <class '__main__.TestCase1'> with params: Delayed init
running test: <class '__main__.TestCase1'>
>>>
Be careful where you apply this function though. If you decorate TestBase with delay_init, it will not work, because it will turn the TestCase1 instances into TestBase instances.
In my answer I'd like to focus on cases where one wants to instantiate a class whose initialiser (dunder init) has side effects. For instance, pysftp.Connection creates an SSH connection, which may be undesired until it's actually used.
In a great blog series about the design of the wrapt package (a nit-picky decorator implementation), the author describes the transparent object proxy. That code can be customised for the subject in question.
class LazyObject:
    _factory = None
    '''Callable responsible for creation of target object'''

    _object = None
    '''Target object created lazily'''

    def __init__(self, factory):
        self._factory = factory

    def __getattr__(self, name):
        if not self._object:
            self._object = self._factory()
        return getattr(self._object, name)
Then it can be used as:
obj = LazyObject(lambda: dict(foo = 'bar'))
obj.keys() # dict_keys(['foo'])
But len(obj), obj['foo'] and other language constructs which invoke Python's object protocols (dunder methods, like __len__ and __getitem__) will not work. However, for many cases that only need regular methods, this is a solution.
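A small sketch of that limitation with the LazyObject above:

obj = LazyObject(lambda: dict(foo='bar'))

obj.keys()     # fine: plain attribute access goes through __getattr__
# len(obj)     # TypeError: __len__ is looked up on LazyObject itself, not on the target dict
# obj['foo']   # TypeError: likewise, LazyObject is not subscriptable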
To proxy the object-protocol implementations in a generic way, neither __getattr__ nor __getattribute__ will do. The latter's documentation notes:
This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.
As a complete solution is demanded, there are examples of manual implementations like werkzeug's LocalProxy and django's SimpleLazyObject. However a clever workaround is possible.
Luckily there's a dedicated package (based on wrapt) for the exact use case, lazy-object-proxy which is described in this blog post.
from lazy_object_proxy import Proxy

obj = Proxy(lambda: dict(foo = 'bar'))
obj.keys()  # dict_keys(['foo'])
len(obj)    # 1
obj['foo']  # 'bar'
One alternative would be to write a wrapper that takes a class as input and returns a class whose initialization is delayed until any member is accessed. This could, for example, be done like this:
def lazy_init(cls):
    class LazyInit(cls):
        def __init__(self, *args, **kwargs):
            self.args = args
            self.kwargs = kwargs
            self._initialized = False

        def __getattr__(self, attr):
            if not self.__dict__['_initialized']:
                cls.__init__(self,
                             *self.__dict__['args'], **self.__dict__['kwargs'])
                self._initialized = True
            return self.__dict__[attr]
    return LazyInit
This could then be used as such
load_test(lazy_init(TestCase1)(test_params1))
load_test(lazy_init(TestCase2)(test_params2))
...
load_test(lazy_init(TestCaseN)(test_params3))
...
for test in loaded_tests:
    test.run()
Answering your original question (and the problem I think you are actually trying to solve), "How can I delay the init call until an attribute is accessed?": don't call init until you access the attribute.
Said another way: you can make the class initialization simultaneous with the attribute call. What you seem to actually want is 1) create a collection of TestCase# classes along with their associated parameters; 2) run each test case.
Probably your original problem came from thinking you had to initialize all your TestCase classes in order to create a list of them that you could iterate over. But in fact you can store class objects in lists, dicts etc. That means you can do whatever method you have for finding all TestCase classes and store those class objects in a dict with their relevant parameters. Then just iterate that dict and call each class with its run() method.
It might look like:
tests = {TestCase1: 'test 1 params', TestCase2: 'test 2 params', TestCase3: 'test 3 params'}
for test_case, param in tests.items():
    test_case(param).run()
Overriding __new__
You could do this by overriding __new__ method and replacing __init__ method with a custom function.
def init(cls, real_init):
    def wrapped(self, *args, **kwargs):
        # This will run during the first call to `__init__`
        # made after `__new__`. Here we re-assign the original
        # __init__ back to the class and assign a custom function
        # to the instance's `__init__`.
        cls.__init__ = real_init

        def new_init():
            if new_init.called is False:
                real_init(self, *args, **kwargs)
                new_init.called = True
        new_init.called = False

        self.__init__ = new_init
    return wrapped
class DelayInitMixin(object):
    def __new__(cls, *args, **kwargs):
        cls.__init__ = init(cls, cls.__init__)
        return object.__new__(cls)

class A(DelayInitMixin):
    def __init__(self, a, b):
        print('inside __init__')
        self.a = sum(a)
        self.b = sum(b)

    def __getattribute__(self, attr):
        init = object.__getattribute__(self, '__init__')
        if not init.called:
            init()
        return object.__getattribute__(self, attr)

    def run(self):
        pass

    def fun(self):
        pass
Demo:
>>> a = A(range(1000), range(10000))
>>> a.run()
inside __init__
>>> a.a, a.b
(499500, 49995000)
>>> a.run(), a.__init__()
(None, None)
>>> b = A(range(100), range(10000))
>>> b.a, b.b
inside __init__
(4950, 49995000)
>>> b.run(), b.__init__()
(None, None)
Using cached properties
The idea is to do the heavy calculation only once by caching results. This approach will lead to much more readable code if the whole point of delaying initialization is improving performance.
Django comes with a nice decorator called @cached_property. I tend to use it a lot in both code and unit tests for caching the results of heavy properties.
A cached_property is a non-data descriptor. Hence, once the key is set in the instance's dictionary, attribute access will always get the value from there.
class cached_property(object):
    """
    Decorator that converts a method with a single self argument into a
    property cached on the instance.

    Optional ``name`` argument allows you to make cached properties of other
    methods. (e.g. url = cached_property(get_absolute_url, name='url'))
    """
    def __init__(self, func, name=None):
        self.func = func
        self.__doc__ = getattr(func, '__doc__')
        self.name = name or func.__name__

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        res = instance.__dict__[self.name] = self.func(instance)
        return res
Usage:
class A:
    @cached_property
    def a(self):
        print('calculating a')
        return sum(range(1000))

    @cached_property
    def b(self):
        print('calculating b')
        return sum(range(10000))
Demo:
>>> a = A()
>>> a.a
calculating a
499500
>>> a.b
calculating b
49995000
>>> a.a, a.b
(499500, 49995000)
I think you can use a wrapper class to hold the real class you want to instantiate, and call __init__ yourself in your code, like this (Python 3 code):
class Wrapper:
    def __init__(self, cls):
        self.cls = cls
        self.instance = None

    def your_method(self, *args, **kwargs):
        if not self.instance:
            self.instance = self.cls()   # the real __init__ runs here, on first use
        return self.instance.your_method(*args, **kwargs)

class YourClass:
    def __init__(self):
        print("calling __init__")
It's a dumb way, but it works without any tricks.
I was working on a decorator that decorates a class. It works fine for instance methods but gives a TypeError for a class method. The code is as below:
def deco_method(fn):
    def wrapper(*arg, **kwarg):
        """
        Function: Wrapper
        """
        print "Calling function {}".format(fn.__name__)
        print arg, kwarg
        ret_val = fn(*arg, **kwarg)
        print "Executed function {}".format(fn.__name__)
        return ret_val
    return wrapper

def clsdeco(cls):
    attributes = cls.__dict__.keys()
    for attribute in attributes:
        # Do not decorate private methods
        if '__' in attribute:
            continue

        # Get the method
        value = getattr(cls, attribute)
        if not hasattr(value, '__call__'):
            continue

        # Check if method is a class method or a normal method and decorate accordingly
        if value.im_self is None:  # non class method
            setattr(cls, attribute, deco_method(value))
        elif value.im_self is cls:  # check if the method is a class method
            setattr(cls, attribute, classmethod(deco_method(value)))
        else:
            assert False

    return cls  # return decorated class

@clsdeco
class Person:
    message = "Hi Man"

    def __init__(self, first_name, last_name):
        self.fname = first_name
        self.lname = last_name
        self.age = None

    def get_name(self):
        print "Name is '{} {}'".format(self.fname, self.lname)

    @classmethod
    def greet_person(cls):
        print cls.message

p = Person('John', 'snow')
p.greet_person()
It gives an error:
TypeError: greet_person() takes exactly 1 argument (2 given)
If I remove @clsdeco, it works perfectly fine.
Any idea what I am missing here?
If you add the line shown below, it will work. This is because the @classmethod decorator applied in the class definition changes what getattr(cls, attribute) returns: you get a method already bound to the class, which inserts the cls argument and then calls the real function.
What you need to do is retrieve the "raw" value of the attribute which is just a regular function and then turn it back into a class method by explicitly calling classmethod. This needed "raw" value is stored in the class dictionary __dict__ associated with the same attribute name, hence the need for adding the value = cls.__dict__[attribute].__func__ line.
Something similar will also be required to handle static methods properly. How to do this for all the different types of methods is described in this answer to the question Decorating a method that's already a classmethod? Some of the other answers also describe what's going on in more detail than I have here.
def clsdeco(cls):
    attributes = cls.__dict__.keys()
    for attribute in attributes:
        # Do not decorate private methods
        if '__' in attribute:
            continue

        # Get the method
        value = getattr(cls, attribute)
        if not hasattr(value, '__call__'):
            continue

        # Check if method is a class method or a normal method and decorate accordingly
        if value.im_self is None:  # non class method
            setattr(cls, attribute, deco_method(value))
        elif value.im_self is cls:  # check if the method is a class method
            value = cls.__dict__[attribute].__func__  # ADDED
            setattr(cls, attribute, classmethod(deco_method(value)))
        else:
            assert False

    return cls  # return decorated class
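With that one-line change in place, the original example runs instead of raising TypeError (a quick sketch of the expected behaviour):

p = Person('John', 'snow')
p.greet_person()   # the wrapper's "Calling/Executed" messages, then "Hi Man"
p.get_name()       # instance methods are still wrapped as before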
I have the following code; it is from Learning Python, published by O'Reilly Media. Why doesn't line 3 (self._name = name) trigger __getattribute__? Is it because __setattr__ overrides it?
class Person:                                       # Portable: 2.X or 3.X
    def __init__(self, name):                       # On [Person()]
        self._name = name                           # Triggers __setattr__!

    def __getattribute__(self, attr):               # On [obj.any]
        print('get: ' + attr)
        if attr == 'name':                          # Intercept all names
            attr = '_name'                          # Map to internal name
        return object.__getattribute__(self, attr)  # Avoid looping here

    def __setattr__(self, attr, value):             # On [obj.any = value]
        print('set: ' + attr)
        if attr == 'name':
            attr = '_name'                          # Set internal name
        self.__dict__[attr] = value                 # Avoid looping here
You are setting an attribute. Assignment to an attribute always uses __setattr__.
__getattr__ and __getattribute__ are only consulted when looking up the value of a specific attribute; when setting you are not retrieving a value.
This is not an override; even if __setattr__ was not defined, the __getattribute__ method would not be consulted.
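A tiny sketch (with a hypothetical Demo class) makes the split visible: assignment goes through __setattr__ only, while reading goes through __getattribute__:

class Demo:
    def __getattribute__(self, attr):
        print('get: ' + attr)
        return object.__getattribute__(self, attr)

    def __setattr__(self, attr, value):
        print('set: ' + attr)
        object.__setattr__(self, attr, value)

d = Demo()
d.x = 1   # prints only "set: x" -- __getattribute__ is not consulted
d.x       # prints "get: x"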
Usually Python descriptors are defined as class attributes. But in my case, I want every object instance to have a different set of descriptors that depends on the input. For example:
class MyClass(object):
    def __init__(self, **kwargs):
        for attr, val in kwargs.items():
            self.__dict__[attr] = MyDescriptor(val)
Each object has a different set of attributes that are decided at instantiation time. Since these are one-off objects, it is not convenient to create a subclass for each of them first.
tv = MyClass(type="tv", size="30")
smartphone = MyClass(type="phone", os="android")
tv.size # do something smart with the descriptor
Assigning the descriptor to the object does not seem to work. If I try to access the attribute, I get something like:
<property at 0x4067cf0>
Do you know why this is not working? Is there any workaround?
This is not working because you have to assign the descriptor to the class of the object.
class Descriptor:
    def __get__(...):
        # this is called when the value is read
    def __set__(...):
        ...
    def __delete__(...):
        ...

if you write

obj.attr

=> type(obj).__getattribute__(obj, 'attr') is called
=> obj.__dict__['attr'] is returned if it is there, else:
=> type(obj).__dict__['attr'] is looked up

if this contains a descriptor object then it is used.
So it does not work because the type's dictionary is searched for descriptors, not the object's dictionary.
There are possible work-arounds:
1. Put the descriptor into the class and make it use e.g. obj.xxxattr to store the value. If there is only one descriptor behaviour, this works (see the sketch after this list).
2. Overwrite __setattr__, __getattr__ and __delattr__ to respond to descriptors.
3. Put a descriptor into the class that responds to descriptors stored in the object dictionary.
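As a sketch of the first work-around (the class and attribute names here are illustrative): the descriptor lives on the class, but stores its value on each instance under a shadow name, so every instance still gets its own value:

class InstanceValue(object):
    """Descriptor defined on the class; the data lives on each instance."""
    def __init__(self, name):
        self.storage = '_' + name
    def __get__(self, obj, cls):
        if obj is None:
            return self
        return getattr(obj, self.storage)
    def __set__(self, obj, value):
        setattr(obj, self.storage, value)

class MyClass(object):
    size = InstanceValue('size')   # one descriptor per attribute, defined on the class

tv = MyClass()
tv.size = "30"
print(tv.size)   # 30 -- goes through the descriptor's __get__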
You are using descriptors in the wrong way.
Descriptors don't make sense on an instance level. After all the __get__/__set__
methods give you access to the instance of the class.
Without knowing what exactly you want to do, I'd suggest you put the per-instance logic inside the __set__ method, checking who the "caller/instance" is and acting accordingly.
Otherwise tell us what you are trying to achieve, so that we can propose alternative solutions.
I dynamically create instances by execing a made-up class. This may suit your use case.
def make_myclass(**kwargs):
    class MyDescriptor(object):
        def __init__(self, val):
            self.val = val
        def __get__(self, obj, cls):
            return self.val
        def __set__(self, obj, val):
            self.val = val

    cls = 'class MyClass(object):\n{}'.format(
        '\n'.join(' {0} = MyDescriptor({0})'.format(k) for k in kwargs))

    # check if names in kwargs collide with local names
    for key in kwargs:
        if key in locals():
            raise Exception('name "{}" collides with local name'.format(key))

    kwargs.update(locals())
    exec(cls, kwargs, locals())
    return MyClass()
Test:
In [577]: tv = make_myclass(type="tv", size="30")
In [578]: tv.type
Out[578]: 'tv'
In [579]: tv.size
Out[579]: '30'
In [580]: tv.__dict__
Out[580]: {}
But the instances are of different class.
In [581]: phone = make_myclass(type='phone')
In [582]: phone.type
Out[582]: 'phone'
In [583]: tv.type
Out[583]: 'tv'
In [584]: isinstance(tv,type(phone))
Out[584]: False
In [585]: isinstance(phone,type(tv))
Out[585]: False
In [586]: type(tv)
Out[586]: MyClass
In [587]: type(phone)
Out[587]: MyClass
In [588]: type(phone) is type(tv)
Out[588]: False
This looks like a use-case for named tuples
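For example, a minimal sketch with collections.namedtuple (field names chosen to match the question): the attributes are decided at creation time, are read-only, and no new attributes can be added:

from collections import namedtuple

TV = namedtuple('TV', ['type', 'size'])
tv = TV(type='tv', size='30')

print(tv.size)    # 30
# tv.size = '40'  # AttributeError: can't set attribute
# tv.extra = 1    # AttributeError: 'TV' object has no attribute 'extra'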
The reason it is not working is because Python only checks for descriptors when looking up attributes on the class, not on the instance; the methods in question are:
__getattribute__
__setattr__
__delattr__
It is possible to override those methods on your class in order to implement the descriptor protocol on instances as well as classes:
# do not use in production, example code only, needs more checks
class ClassAllowingInstanceDescriptors(object):
    def __delattr__(self, name):
        res = self.__dict__.get(name)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, use it
                res = res.__delete__(name)
                break
        else:
            res = object.__delattr__(self, name)
        return res

    def __getattribute__(self, *args):
        res = object.__getattribute__(self, *args)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, call it
                res = res.__get__(self, self.__class__)
        return res

    def __setattr__(self, name, val):
        # check if object already exists
        res = self.__dict__.get(name)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, use it
                res = res.__set__(self, val)
                break
        else:
            res = object.__setattr__(self, name, val)
        return res

    @property
    def world(self):
        return 'hello!'
When the above class is used as below:
huh = ClassAllowingInstanceDescriptors()
print(huh.world)
huh.uni = 'BIG'
print(huh.uni)
huh.huh = property(lambda *a: 'really?')
print(huh.huh)
print('*' * 50)
try:
    del huh.world
except Exception, e:
    print(e)
print(huh.world)
print('*' * 50)
try:
    del huh.huh
except Exception, e:
    print(e)
print(huh.huh)
The results are:
hello!
BIG
really?
can't delete attribute
hello!
can't delete attribute
really?