__getattr__ of metaclass not being called - python

As the title says: it seems that no matter what I do, __getattr__ on my metaclass is never called. I also tried defining it on the instance (absurd, I know), with predictably no response. It is as if __getattr__ were banned in metaclasses.
I'd appreciate any pointer to documentation about this.
The code:
class PreinsertMeta(type):
    def resolvedField(self):
        if isinstance(self.field, basestring):
            tbl, fld = self.field.split(".")
            self.field = (tbl, fld)
        return self.field
    Field = property(resolvedField)

    def __getattr__(self, attrname):
        if attrname == "field":
            if isinstance(self.field, basestring):
                tbl, fld = self.field.split(".")
                self.field = (tbl, fld)
            return self.field
        else:
            return super(PreinsertMeta, self).__getattr__(attrname)

    def __setattr__(self, attrname, value):
        super(PreinsertMeta, self).__setattr__(attrname, value)


class Test(object):
    __metaclass__ = PreinsertMeta
    field = "test.field"


print Test.field              # Should already print the tuple
Test.field = "another.field"  # __setattr__ gets called nicely
print Test.field              # Again with the string?
print Test.Field              # note the capital 'F', this actually calls resolvedField() and prints the tuple
Thanks to BrenBarn, here's the final working implementation:
class PreinsertMeta(type):
    def __getattribute__(self, attrname):
        if attrname == "field" and isinstance(object.__getattribute__(self, attrname), basestring):
            tbl, fld = object.__getattribute__(self, attrname).split(".")
            self.field = (tbl, fld)
        return object.__getattribute__(self, attrname)

As documented, __getattr__ is only called if the attribute does not exist. Since your class has a field attribute, that blocks __getattr__. You can use __getattribute__ if you really want to intercept all attribute access, although it's not clear from your example why you need to do this. Note that this has nothing to do with metaclasses; you would see the same behavior if you created an instance of an ordinary class and gave it some attribute.
Even assuming you used __getattribute__, so it was called when the attribute exists, your implementation doesn't make much sense. Inside __getattr__ you try to get a value for self.field. But if __getattribute__ was called in the first place, it will be called again for this access, creating an infinite recursion: in order to get self.field, it has to call __getattribute__, which again tries to get self.field, which again calls __getattribute__, etc. See the documentation for __getattribute__ for how to get around this.
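For reference, here is one minimal Python 3 sketch of that pattern (it uses str rather than Python 2's basestring, and caches the parsed tuple); every raw read goes through super().__getattribute__, so the hook never re-enters itself:

class PreinsertMeta(type):
    def __getattribute__(cls, attrname):
        value = super().__getattribute__(attrname)   # raw lookup, no recursion
        if attrname == "field" and isinstance(value, str):
            tbl, fld = value.split(".")
            value = (tbl, fld)
            super().__setattr__(attrname, value)     # cache the parsed tuple
        return value

class Test(metaclass=PreinsertMeta):
    field = "test.field"

print(Test.field)   # ('test', 'field')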

Related

Replacing the object from one of its methods

I am using Python and have an object, and that object has a method. I am looking for a simple way to replace the entire object from within that method.
E.g.:

class a():
    def b(self):
        self = other_object
How can you do that?
Thanks
You use a proxy/facade object to hold a reference to the actual object (the self, if you wish), and that proxy is what the rest of your codebase sees. Any attribute or method access is forwarded on to the actual object, which can be swapped out at will. (Proxy is probably a better term than Facade, but I'm not changing my code now.)
The code below should give you a rough idea. Note that you need to be careful about recursion around __theinstance, which is why I am assigning to __dict__ directly. It's a bit messy, since it's been a while since I've written code that wraps getattr and setattr entirely.
class Facade:
    def __init__(self, instance):
        self.set_obj(instance)

    def set_obj(self, instance):
        self.__dict__["__theinstance"] = instance

    def __getattr__(self, attrname):
        if attrname == "__theinstance":
            return self.__dict__["__theinstance"]
        return getattr(self.__dict__["__theinstance"], attrname)

    def __setattr__(self, attrname, value):
        if attrname == "__theinstance":
            self.set_obj(value)
        return setattr(self.__dict__["__theinstance"], attrname, value)


class Test:
    def __init__(self, name, cntr):
        self.name = name
        self.cntr = cntr

    def __repr__(self):
        return "%s[%s]" % (self.__class__.__name__, self.__dict__)


obj1 = Test("first object", 1)
obj2 = Test("second", 2)
obj2.message = "greetings"


def pretend_client_code(facade):
    print(id(facade), facade.name, facade.cntr, getattr(facade, "value", None))


facade = Facade(obj1)
pretend_client_code(facade)

facade.set_obj(obj2)
pretend_client_code(facade)

facade.value = 3
pretend_client_code(facade)

facade.set_obj(obj1)
pretend_client_code(facade)
output:
4467187104 first object 1 None
4467187104 second 2 None
4467187104 second 2 3
4467187104 first object 1 None
So basically, the "client code" always sees the same facade object, but what it actually accesses depends on what your equivalent of def b has done.
Facade has a specific meaning in Design Patterns terminology and it may not be really applicable here, but close enough. Maybe Proxy would have been better.
Note that if you want to change the class of the same object, that is a different thing, done by assigning to self.__class__. For example, say an RPG game has an EnemyClass that gets swapped to DeadEnemyClass once killed: self.__class__ = DeadEnemyClass (sketched below).
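A minimal sketch of that self.__class__ swap (EnemyClass and DeadEnemyClass are just the hypothetical names from the example above):

class EnemyClass:
    hp = 10

    def take_hit(self, damage):
        self.hp -= damage
        if self.hp <= 0:
            self.__class__ = DeadEnemyClass   # same object, new behaviour

    def describe(self):
        return "an enemy"


class DeadEnemyClass(EnemyClass):
    def describe(self):
        return "a corpse"


e = EnemyClass()
e.take_hit(20)
print(e.describe())   # "a corpse" -- the instance itself was never replaced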
You can't directly do that. What you can do is save it as an instance variable.
class A():
    def __init__(self, instance=None):
        self.instance = instance or self
        # yes, you can make it a property as well.

    def set_val(self, obj):
        self.instance = obj

    def get_val(self):
        return self.instance
It is unlikely that replacing the 'self' variable will accomplish whatever you're trying to do, that couldn't just be accomplished by storing the result of func(self) in a different variable. 'self' is effectively a local variable only defined for the duration of the method call, used to pass in the instance of the class which is being operated upon. Replacing self will not actually replace references to the original instance of the class held by other objects, nor will it create a lasting reference to the new instance which was assigned to it.
Original source: Is it safe to replace a self object by another object of the same type in a method?
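A tiny demonstration of the quoted point, that rebinding self only changes a local name:

class A:
    def b(self):
        self = A()           # rebinds the local name only
        self.marker = True   # affects the new, immediately discarded object

a = A()
a.b()
print(hasattr(a, "marker"))  # False -- `a` still refers to the original object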

Overriding getters and setters for attributes from a list of strings

The aim is to provide some strings in a list as attributes of a class. The class shall have not only the attributes, but also the respective getter and setter methods. In another class inheriting from it, some of those setters need to be overridden.
To this end I came up with the following. Using setattr in a loop over the list of strings, an attribute and the respective methods are created. Concerning this first part, the code works as expected.
However I am not able to override the setters in an inheriting class.
class Base():
    attributes = ["attr{}".format(i) for i in range(100)]

    def __init__(self):
        _get = lambda a: lambda: getattr(self, a)
        _set = lambda a: lambda v: setattr(self, a, v)
        for attr in self.attributes:
            setattr(self, attr, None)
            setattr(self, "get_" + attr, _get(attr))
            setattr(self, "set_" + attr, _set(attr))


class Child(Base):
    def __init__(self):
        super().__init__()
        # setattr(self, "set_attr4", set_attr4)

    # Here I want to override one of the setters to perform typechecking
    def set_attr4(self, v):
        print("This being printed would probably solve the problem.")
        if type(v) == bool:
            super().set_attr4(v)
        else:
            raise ValueError("attr4 must be a boolean")


if __name__ == "__main__":
    b = Base()
    b.attr2 = 5
    print(b.get_attr2())
    b.set_attr3(55)
    print(b.get_attr3())
    c = Child()
    c.set_attr4("SomeString")
    print(c.get_attr4())
The output here is
5
55
SomeString
The expected output would however be
5
55
This being printed would probably solve the problem.
ValueError("attr4 must be a boolean")
So somehow the set_attr4 method is never called; which I guess is expected, because __init__ is called after the class structure is read in. But I am at a loss on how else to override those methods. I tried to add setattr(self, "set_attr4", set_attr4) (the commented line in the code above) but to no avail.
Or more generally, there is property, which is usually used for creating getters and setters. But I don't think I understand how to apply it in a case where the getters and setters are created dynamically and need to be overridden by a child.
Is there any solution to this?
Update due to comments: It was pointed out by several people that using getters/setters in python may not be a good style and that they may usually not be needed. While this is definitely something to keep in mind, the background of this question is that I'm extending an old existing code which uses getters/setters throughout. I hence do not wish to change the style and let the user (this project only has some 20 users in total, but still...) suddenly change the way they access properties within the API.
However any future reader of this may consider that the getter/setter approach is at least questionable.
Metaclasses to the rescue!
class Meta(type):
    def __init__(cls, name, bases, dct):
        for attr in cls.attributes:
            if not hasattr(cls, attr):
                setattr(cls, attr, None)
                setattr(cls, f'get_{attr}', cls._get(attr))
                setattr(cls, f'set_{attr}', cls._set(attr))


class Base(metaclass=Meta):
    attributes = ["attr{}".format(i) for i in range(100)]
    _get = lambda a: lambda self: getattr(self, a)
    _set = lambda a: lambda self, v: setattr(self, a, v)

    # the rest of your code goes here
This is pretty self-explanatory: make attributes, _get, _set class variables (so that you can access them without class instantiation), then let the metaclass set everything up for you.
Your original __init__ is executed after the subclass is created, so it overrides what was specified there. A quick check of the metaclass version is sketched below.
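For instance, keeping the question's Child.set_attr4 unchanged, and assuming Base no longer re-creates the accessors per instance in __init__ (that would shadow the class-level methods again), the override now takes effect:

class Child(Base):
    def set_attr4(self, v):
        print("This being printed would probably solve the problem.")
        if type(v) == bool:
            super().set_attr4(v)
        else:
            raise ValueError("attr4 must be a boolean")

c = Child()
c.set_attr4(True)           # prints the message and stores the value
print(c.get_attr4())        # True
c.set_attr4("SomeString")   # prints the message, then raises ValueError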
The minimal change needed to fix the problem is to check whether the attribute has already been set:
class Base():
    attributes = ["attr{}".format(i) for i in range(100)]

    def __init__(self):
        _get = lambda a: lambda: getattr(self, a)
        _set = lambda a: lambda v: setattr(self, a, v)
        for attr in self.attributes:
            setattr(self, attr, None)
            if not hasattr(self, "get_" + attr):
                setattr(self, "get_" + attr, _get(attr))
            if not hasattr(self, "set_" + attr):
                setattr(self, "set_" + attr, _set(attr))
However, I do not see the point in doing it this way. It creates a new getter and setter for each instance of Base. I would instead create them on the class. That can be done with a class decorator, with a metaclass, in the body of the class itself, or in some other way.
For example, this is ugly, but simple:
class Base():
    attributes = ["attr{}".format(i) for i in range(100)]
    for attr in attributes:
        exec(f"get_{attr} = lambda self: self.{attr}")
        exec(f"set_{attr} = lambda self, value: setattr(self, '{attr}', value)")
    del attr
This is better:
class Base:
    pass


attributes = ["attr{}".format(i) for i in range(100)]
for attr in attributes:
    # bind attr as a default argument; otherwise every lambda would close over
    # the loop variable and end up using the last attribute name
    setattr(Base, f"get_{attr}", lambda self, attr=attr: getattr(self, attr))
    setattr(Base, f"set_{attr}", lambda self, value, attr=attr: setattr(self, attr, value))
You're right about the problem. The creation of your Base instance happens after the Child class defines set_attr4. Since Base is creating its getters/setters dynamically, it just blasts over Child's version upon creation.
One alternative way (in addition to the other answers) is to create the Child's getters/setters dynamically too. The idea here is to go for "convention over configuration" and just prefix methods you want to override with override_. Here's an example:
class Child(Base):
    def __init__(self):
        super().__init__()
        overrides = [override for override in dir(self) if override.startswith("override_")]
        for override in overrides:
            base_name = override.split("override_")[-1]
            setattr(self, base_name, getattr(self, override))

    # Here I want to override one of the setters to perform typechecking
    def override_set_attr4(self, v):
        print("This being printed would probably solve the problem.")
        if type(v) == bool:
            super().set_attr4(v)
        else:
            raise ValueError("attr4 must be a boolean")  # Added "raise" to this, otherwise we just return None...
which outputs:
5
55
This being printed would probably solve the problem.
Traceback (most recent call last):
File ".\stack.py", line 39, in <module>
c.set_attr4("SomeString")
File ".\stack.py", line 29, in override_set_attr4
raise ValueError("attr4 must be a boolean") # Added "raise" to this, overwise we just return None...
ValueError: attr4 must be a boolean
Advantages here are that the Base doesn't have to know about the Child class. In the other answers, there's very subtle Base/Child coupling going on. It also might not be desirable to touch the Base class at all (violation of the Open/Closed principle).
Disadvantages are that "convention over configuration" to avoid a true inheritance mechanism is a bit clunky and unintuitive. The override_ function is also still hanging around on the Child instance (which you may or may not care about).
I think the real problem here is that you're trying to define getters and setters in such a fashion. We usually don't even want getters/setters in Python. This definitely feels like an X/Y problem, but maybe it isn't. You have a lot of rep, so I'm not going to give you some pedantic spiel about it. Even so, maybe take a step back and think about what you're really trying to do and consider alternatives.
The problem here is that you're creating the "methods" on the instance of the Base class (__init__ only runs on the instance).
Inheritance happens before you instantiate your class, and won't look into instances.
In other words, when you try to override the method, it hadn't even been created in the first place.
A solution is to create them on the class, not on the self instance inside __init__:
def _create_getter(attr):
    def _get(self):
        return getattr(self, attr)
    return _get


def _create_setter(attr):
    def _set(self, value):
        return setattr(self, attr, value)
    return _set


class Base():
    attributes = ["attr{}".format(i) for i in range(100)]


for attr in Base.attributes:
    setattr(Base, 'get_' + attr, _create_getter(attr))
    setattr(Base, 'set_' + attr, _create_setter(attr))
Then inheriting will work normally:
class Child(Base):
    def set_attr4(self, v):
        print("This being printed would probably solve the problem.")
        if type(v) == bool:
            super().set_attr4(v)
        else:
            raise ValueError("attr4 must be a boolean")


if __name__ == "__main__":
    b = Base()
    b.attr2 = 5
    print(b.get_attr2())
    b.set_attr3(55)
    print(b.get_attr3())
    c = Child()
    c.set_attr4("SomeString")
    print(c.get_attr4())
You could also just not do it - make your Base class as normal, and make setters only for the attributes you want, in the child class:
class Base:
    pass


class Child(Base):
    @property
    def attr4(self):
        return self._attr4

    @attr4.setter
    def attr4(self, new_v):
        if not isinstance(new_v, bool):
            raise TypeError('Not bool')
        self._attr4 = new_v
Testing:
c = Child()
c.attr3 = 2      # works fine even without any setter
c.attr4 = True   # works fine, runs the setter
c.attr4 = 3      # type error

Ruby like DSL in Python

I'm currently writing my first bigger project in Python, and I'm now wondering how to define a class method so that you can execute it in the class body of a subclass of the class.
First, to give some more context, here is a stripped-down example (I removed everything non-essential for this question) of how I'd do the thing I'm trying to do in Ruby:
If I define a class Item like this:
class Item
  def initialize(data={})
    @data = data
  end

  def self.define_field(name)
    define_method("#{name}") { instance_variable_get("@data")[name.to_s] }
    define_method("#{name}=") do |value|
      instance_variable_get("@data")[name.to_s] = value
    end
  end
end
I can use it like this:
class MyItem < Item
  define_field("name")
end

item = MyItem.new
item.name = "World"
puts "Hello #{item.name}!"
Now so far I tried achieving something similar in Python, but I'm not happy with the result I've got so far:
class ItemField(object):
    def __init__(self, name):
        self.name = name

    def __get__(self, item, owner=None):
        return item.values[self.name]

    def __set__(self, item, value):
        item.values[self.name] = value

    def __delete__(self, item):
        del item.values[self.name]


class Item(object):
    def __init__(self, data=None):
        if data == None: data = {}
        self.values = data
        for field in type(self).fields:
            self.values[field.name] = None
            setattr(self, field.name, field)

    @classmethod
    def define_field(cls, name):
        if not hasattr(cls, "fields"): cls.fields = []
        cls.fields.append(ItemField(name))
Now I don't know how I can call define_field from within a subclass's body. This is what I wish were possible:
class MyItem(Item):
    define_field("name")

item = MyItem({"name": "World"})
print("Hello {}!".format(item.name))
item.name = "reader"
print("Hello {}!".format(item.name))
There's this similar question but none of the answers are really satisfying. Somebody recommends calling the function with __func__(), but I guess I can't do that, because I can't get a reference to the class from within its anonymous body (please correct me if I'm wrong about this).
Somebody else pointed out that it's better to use a module-level function for doing this, which I also think would be the easiest way; however, the main intention of me doing this is to make the implementation of subclasses clean, and having to load that module function wouldn't be too nice either. (Also, I'd have to do the function call outside the class body, and I think that is messy.)
So basically I think my approach is wrong, because Python wasn't designed to allow this kind of thing to be done. What would be the best way to achieve something as in the Ruby example with Python?
(If there's no better way I've already thought about just having a method in the subclass which returns an array of the parameters for the define_field method.)
Perhaps calling a class method isn't the right route here. I'm not quite up to speed on exactly how and when Python creates classes, but my guess is that the class object doesn't yet exist when you'd call the class method to create an attribute.
It looks like you want to create something like a record. First, note that Python allows you to add attributes to your user-created classes after creation:
>>> class Foo(object):
...     pass
...
>>> foo = Foo()
>>> foo.x = 42
>>> foo.x
42
Maybe you want to constrain which attributes the user can set. Here's one way.
class Item(object):
    def __init__(self):
        if type(self) is Item:
            raise NotImplementedError("Item must be subclassed.")

    def __setattr__(self, name, value):
        if name not in self.fields:
            raise AttributeError("Invalid attribute name.")
        else:
            self.__dict__[name] = value


class MyItem(Item):
    fields = ("foo", "bar", "baz")
So that:
>>> m = MyItem()
>>> m.foo = 42 # works
>>> m.bar = "hello" # works
>>> m.test = 12 # raises AttributeError
Lastly, the above allows the user to subclass Item without defining fields, like so:
class MyItem(Item):
    pass
This will result in a cryptic attribute error saying that the attribute fields could not be found. You can require that the fields attribute be defined at the time of class creation by using metaclasses. Furthermore, you can abstract away the need for the user to specify the metaclass by inheriting from a superclass that you've written to use the metaclass:
class ItemMetaclass(type):
    def __new__(cls, clsname, bases, dct):
        if "fields" not in dct:
            raise TypeError("Subclass must define 'fields'.")
        return type.__new__(cls, clsname, bases, dct)


class Item(object):
    __metaclass__ = ItemMetaclass
    fields = None

    def __init__(self):
        if type(self) == Item:
            raise NotImplementedError("Must subclass Type.")

    def __setattr__(self, name, value):
        if name in self.fields:
            self.__dict__[name] = value
        else:
            raise AttributeError("The item has no such attribute.")


class MyItem(Item):
    fields = ("one", "two", "three")
You're almost there! If I understand you correctly:
class Item(object):
    def __init__(self, data=None):
        fields = data or {}
        for field, value in fields.items():
            if hasattr(self, field):
                setattr(self, field, value)

    @classmethod
    def define_field(cls, name):
        setattr(cls, name, None)
EDIT: As far as I know, it's not possible to access the class being defined while defining it. You can, however, call the method from the __init__ method:
class Something(Item):
    def __init__(self):
        type(self).define_field("name")
But then you're just reinventing the wheel.
When defining a class, you cannot reference the class itself inside its own definition block. So you have to call define_field(...) on MyItem after its definition. E.g.,
class MyItem(Item):
    pass

MyItem.define_field("name")

item = MyItem({"name": "World"})
print("Hello {}!".format(item.name))
item.name = "reader"
print("Hello {}!".format(item.name))

Create per-instance property descriptor?

Usually Python descriptors are defined as class attributes. But in my case, I want every object instance to have a different set of descriptors that depends on the input. For example:
class MyClass(object):
    def __init__(self, **kwargs):
        for attr, val in kwargs.items():
            self.__dict__[attr] = MyDescriptor(val)
Each object has a different set of attributes that are decided at instantiation time. Since these are one-off objects, it is not convenient to subclass them first.
tv = MyClass(type="tv", size="30")
smartphone = MyClass(type="phone", os="android")

tv.size  # do something smart with the descriptor
Assigning the descriptor to the object does not seem to work. If I try to access the attribute, I get something like
<property at 0x4067cf0>
Do you know why is this not working? Is there any work around?
This is not working because you have to assign the descriptor to the class of the object.
class Descriptor:
    def __get__(...):
        # this is called when the value is read
    def __set__(...):
        ...
    def __delete__(...):
        ...

If you write

obj.attr

then type(obj).__getattribute__(obj, 'attr') is called, which roughly does the following:

- obj.__dict__['attr'] is returned if it is there, else
- type(obj).__dict__['attr'] is looked up; if it contains a descriptor object, then that is used.

So it does not work because only the type dictionary is searched for descriptors, not the object dictionary.
There are possible workarounds:

- Put the descriptor into the class and make it use e.g. obj.xxxattr to store the value. If there is only one descriptor behaviour, this works (see the sketch after this list).
- Overwrite __setattr__, __getattr__ and __delattr__ to respond to descriptors.
- Put a descriptor into the class that responds to descriptors stored in the object dictionary.
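A rough sketch of the first workaround, assuming a single shared descriptor behaviour: the descriptor lives on the class, but reads and writes a per-instance shadow attribute (the "obj.xxxattr" idea), so every instance still gets its own value. The _stored_ prefix is just a hypothetical naming convention:

class Stored(object):
    def __init__(self, name):
        self.slot = "_stored_" + name        # shadow attribute on the instance
    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        return getattr(obj, self.slot)       # plain instance attribute, no recursion
    def __set__(self, obj, value):
        setattr(obj, self.slot, value)

class MyClass(object):
    type = Stored("type")                    # descriptors must sit on the class
    size = Stored("size")
    def __init__(self, **kwargs):
        for attr, val in kwargs.items():
            setattr(self, attr, val)

tv = MyClass(type="tv", size="30")
print(tv.size)   # '30', served through the class-level descriptor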
You are using descriptors in the wrong way. Descriptors don't make sense on an instance level. After all, the __get__/__set__ methods give you access to the instance of the class.
Without knowing what exactly you want to do, I'd suggest you put the per-instance logic inside the __set__ method, by checking who the "caller/instance" is and acting accordingly.
Otherwise, tell us what you are trying to achieve, so that we can propose alternative solutions.
I dynamically create instances by execing a made-up class. This may suit your use case.
def make_myclass(**kwargs):
    class MyDescriptor(object):
        def __init__(self, val):
            self.val = val

        def __get__(self, obj, cls):
            return self.val

        def __set__(self, obj, val):
            self.val = val

    cls = 'class MyClass(object):\n{}'.format(
        '\n'.join('    {0} = MyDescriptor({0})'.format(k) for k in kwargs))

    # check if names in kwargs collide with local names
    for key in kwargs:
        if key in locals():
            raise Exception('name "{}" collides with local name'.format(key))

    kwargs.update(locals())
    exec(cls, kwargs, locals())
    return MyClass()
Test:
In [577]: tv = make_myclass(type="tv", size="30")
In [578]: tv.type
Out[578]: 'tv'
In [579]: tv.size
Out[579]: '30'
In [580]: tv.__dict__
Out[580]: {}
But the instances are of different classes.
In [581]: phone = make_myclass(type='phone')
In [582]: phone.type
Out[582]: 'phone'
In [583]: tv.type
Out[583]: 'tv'
In [584]: isinstance(tv,type(phone))
Out[584]: False
In [585]: isinstance(phone,type(tv))
Out[585]: False
In [586]: type(tv)
Out[586]: MyClass
In [587]: type(phone)
Out[587]: MyClass
In [588]: type(phone) is type(tv)
Out[588]: False
This looks like a use-case for named tuples.
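For example, a sketch of that suggestion using the objects from the question; each call builds a small throwaway type whose fields match the keywords passed in (note that namedtuple instances are immutable):

from collections import namedtuple

def make_record(**kwargs):
    return namedtuple("Record", kwargs.keys())(**kwargs)

tv = make_record(type="tv", size="30")
smartphone = make_record(type="phone", os="android")
print(tv.size)        # '30'
print(smartphone.os)  # 'android'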
The reason it is not working is because Python only checks for descriptors when looking up attributes on the class, not on the instance; the methods in question are:
__getattribute__
__setattr__
__delattr__
It is possible to override those methods on your class in order to implement the descriptor protocol on instances as well as classes:
# do not use in production, example code only, needs more checks
class ClassAllowingInstanceDescriptors(object):
    def __delattr__(self, name):
        res = self.__dict__.get(name)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, use it
                res = res.__delete__(name)
                break
        else:
            res = object.__delattr__(self, name)
        return res

    def __getattribute__(self, *args):
        res = object.__getattribute__(self, *args)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, call it
                res = res.__get__(self, self.__class__)
        return res

    def __setattr__(self, name, val):
        # check if object already exists
        res = self.__dict__.get(name)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, use it
                res = res.__set__(self, val)
                break
        else:
            res = object.__setattr__(self, name, val)
        return res

    @property
    def world(self):
        return 'hello!'
When the above class is used as below:
huh = ClassAllowingInstanceDescriptors()
print(huh.world)

huh.uni = 'BIG'
print(huh.uni)

huh.huh = property(lambda *a: 'really?')
print(huh.huh)

print('*' * 50)
try:
    del huh.world
except Exception, e:
    print(e)
print(huh.world)

print('*' * 50)
try:
    del huh.huh
except Exception, e:
    print(e)
print(huh.huh)
The results are:
hello!
BIG
really?
can't delete attribute
hello!
can't delete attribute
really?

Track changes of attributes in instance. Python

I want to implement a function which takes any object as an argument and tracks changes to the value of a specific attribute, saving the old value of attribute name in an old_name attribute.
For example:
class MyObject(object):
    attr_one = None
    attr_two = 1
Let's name my magic function magic_function().
So that I can do this:
obj = MyObject()
obj = magic_function(obj)

obj.attr_one = 'new value'
obj.attr_two = 2
and it saves the old values, so I can do this:
print obj.old_attr_one
None

print obj.attr_one
'new value'

and

print obj.old_attr_two
1

print obj.attr_two
2
Something like this. I wonder, how can I do this without touching the class of the instance?
This is a start:
class MagicWrapper(object):
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, attr):
        return getattr(self._wrapped, attr)

    def __setattr__(self, attr, val):
        if attr == '_wrapped':
            super(MagicWrapper, self).__setattr__('_wrapped', val)
        else:
            setattr(self._wrapped, 'old_' + attr, getattr(self._wrapped, attr))
            setattr(self._wrapped, attr, val)


class MyObject(object):
    def __init__(self):
        self.attr_one = None
        self.attr_two = 1


obj = MyObject()
obj = MagicWrapper(obj)
obj.attr_one = 'new value'
obj.attr_two = 2

print obj.old_attr_one
print obj.attr_one
print obj.old_attr_two
print obj.attr_two
This isn't bullet-proof when you're trying to wrap weird objects (very little in Python is), but it should work for "normal" classes. You could write a lot more code to get a little bit closer to fully cloning the behaviour of the wrapped object, but it's probably impossible to do perfectly. The main thing to be aware of here is that many special methods will not be redirected to the wrapped object.
If you want to do this without wrapping obj in some way, it's going to get messy. Here's an option:
def add_old_setattr_to_class(cls):
    def __setattr__(self, attr, val):
        super_setattr = super(self.__class__, self).__setattr__
        if attr.startswith('old_'):
            super_setattr(attr, val)
        else:
            super_setattr('old_' + attr, getattr(self, attr))
            super_setattr(attr, val)
    cls.__setattr__ = __setattr__


class MyObject(object):
    def __init__(self):
        self.attr_one = None
        self.attr_two = 1


obj = MyObject()
add_old_setattr_to_class(obj.__class__)
obj.attr_one = 'new value'
obj.attr_two = 2

print obj.old_attr_one
print obj.attr_one
print obj.old_attr_two
print obj.attr_two
Note that this is extremely invasive if you're using it on externally provided objects. It globally modifies the class of the object you're applying the magic to, not just that one instance. This is because like several other special methods, __setattr__ is not looked up in the instance's attribute dictionary; the lookup skips straight to the class, so there's no way to just override __setattr__ on the instance. I would characterise this sort of code as a bizarre hack if I encountered it in the wild (it's "nifty cleverness" if I write it myself, of course ;) ).
This version may or may not play nicely with objects that already play tricks with __setattr__ and __getattr__/__getattribute__. If you end up modifying the same class several times, I think this still works, but you end up with an ever-increasing number of wrapped __setattr__ definitions. You should probably try to avoid that; maybe by setting a "secret flag" on the class and checking for it in add_old_setattr_to_class before modifying cls. You should probably also use a more-unlikely prefix than just old_, since you're essentially trying to create a whole separate namespace.
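One way to implement that "secret flag" idea, sketched here with a hypothetical flag name; the guard ensures the class is only patched once, and this variant also skips saving old_ values for attributes that do not exist yet:

def add_old_setattr_to_class(cls):
    if getattr(cls, "_old_tracking_installed", False):
        return                                 # already patched, do nothing
    cls._old_tracking_installed = True

    original_setattr = cls.__setattr__         # usually object.__setattr__

    def __setattr__(self, attr, val):
        if not attr.startswith("old_") and hasattr(self, attr):
            original_setattr(self, "old_" + attr, getattr(self, attr))
        original_setattr(self, attr, val)

    cls.__setattr__ = __setattr__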
You can substitute all attributes with custom properties at runtime. What are you trying to achieve though? Maybe migrating to completely immutable types would be a better choice?
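A sketch of that property-based route: build a throwaway subclass whose properties remember the previous value, and swap the instance's __class__ to it; the original class itself is left untouched. MyObject here is the class from the question:

def magic_function(obj):
    def make_property(name, initial):
        def getter(self):
            return self.__dict__.get(name, initial)
        def setter(self, value):
            self.__dict__["old_" + name] = getter(self)   # remember previous value
            self.__dict__[name] = value
        return property(getter, setter)

    cls = obj.__class__
    # track every non-dunder, non-callable attribute visible on the instance
    names = [n for n in dir(obj)
             if not n.startswith("__") and not callable(getattr(obj, n))]
    namespace = {n: make_property(n, getattr(obj, n)) for n in names}
    obj.__class__ = type("Tracked" + cls.__name__, (cls,), namespace)
    return obj

obj = magic_function(MyObject())
obj.attr_one = "new value"
print(obj.old_attr_one)   # None
print(obj.attr_one)       # 'new value'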
