Python: why does accessing an existing attribute trigger __getattr__?

I know that Python's __getattr__ is triggered when accessing a non-existent attribute.
But in my example below, inside c1's __init__ I create an instance attribute called name. When I access it, both access paths trigger __getattr__ and therefore print "None".
This is weird to me. I suppose either my understanding or my code has an issue?
$ cat n.py
class c1(object):
    def __init__(s):
        print 'init c1'
        s.name='abc'
    def __getattr__(s,name):
        print "__getattr__:"+name
        return None
    def __setattr__(s,name,value):
        print "__setattr__:"+value
    def __get__(s,inst,owner):
        print "__get__"

class d:
    def __init__(s):
        s.c=c1()

c=c1()
print c.name
o=d()
print o.c.name
$ python n.py
init c1
__setattr__:abc
__getattr__:name
None
init c1
__setattr__:abc
__getattr__:name
None
You can see I've set s.name='abc' inside __init__, but it is not found when I access it.

You have also implemented __setattr__, and it is always called when setting an attribute. Your version only prints the value:
def __setattr__(s,name,value):
    print "__setattr__:"+value
and nothing else, which means the attribute is never actually stored. __setattr__ is called for the s.name='abc' expression, the name attribute is never set, and so any later access to name is sent to __getattr__ again.
Have __setattr__ set the value in __dict__ directly:
def __setattr__(self, name, value):
    print "__setattr__:" + value
    self.__dict__[name] = value
or make your class a new-style class (inherit from object), and you can re-use the base implementation with super(c1, self).__setattr__(name, value).
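For instance, a minimal sketch of the super()-based version (trimmed to the methods relevant here):
class c1(object):
    def __init__(s):
        s.name = 'abc'

    def __setattr__(s, name, value):
        print("__setattr__:" + value)
        # delegate to the default machinery so the attribute is really stored
        super(c1, s).__setattr__(name, value)

c = c1()          # __setattr__:abc
print(c.name)     # abc -- found normally, __getattr__ is not consulted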
As a side note: you implemented c1.__get__, presumably in an attempt to make the class a descriptor object. However, the descriptor protocol only applies to class attributes, not to instance attributes, and then only on new-style classes. Your class d is not a new-style class, and you used an instance attribute c to store the c1 instance.
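To illustrate that side note, a small sketch (names made up for the example) showing that __get__ only runs for class attributes on new-style classes:
class Verbose(object):
    def __get__(self, inst, owner):
        print("__get__")
        return 42

class D(object):            # new-style class
    c = Verbose()           # class attribute -> descriptor protocol applies

class E(object):
    def __init__(self):
        self.c = Verbose()  # instance attribute -> __get__ never runs

print(D().c)   # prints "__get__", then 42
print(E().c)   # prints the Verbose instance itself; no "__get__"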

Related

Is it right to say that setter function for property in python is like overloading the assignment operator?

I just started to study the subject of OOP in Python and I got into a bit of trouble with the subject of decorators/properties and all the "private" method tricks in Python.
Is it okay to say that using @property is like using an attribute, but behind the scenes there is a function that does something (for example, checks the input)?
In addition, is using @is_barking.setter like overloading the assignment operator in other languages (let's say C++)? Because I can also check the input and things like that.
This is the code:
class Dog():
    def __init__(self, name):
        self.name = name

    @property
    def is_barking(self):
        try:
            return self._is_barking
        except AttributeError as error:
            self._is_barking = False
            return self._is_barking

    @is_barking.setter
    def is_barking(self, value):
        self._is_barking = value

def main():
    rexi = Dog("rexi")
    print(rexi.is_barking)
    rexi.is_barking = True
    print(rexi.is_barking)

main()
>> False
>> True
Thank you very much!
Yes, that's sort of correct. The setter lets you intercept assignment to that attribute name alone.
It doesn't overload assignment; it is handled by the descriptor protocol at runtime, and property objects are data descriptor objects; they get to intercept all attribute access (getting, setting and deleting) on instances of the class they are a member of, and so can veto any of those operations. This is different from C++ assignment overloading, which is resolved at compile time (IIRC) and operates on whole instances, not just attributes on instances.
What really happens is that attribute assignment is handled by the class of the instance the attribute is being assigned to, via the object.__setattr__ special method. The class checks whether the given attribute name is covered by an object on the class that is a descriptor object with a __set__ or __delete__ method.
So general attribute assignment can be hooked into via the __setattr__ special method; Python delegates attribute setting to the type of the instance:
# foo.attr = bar -> Python essentially calls __setattr__ on the class
type(foo).__setattr__(foo, "attr", bar)
and in the regular, simple case that then becomes
foo.__dict__["attr"] = bar
but in reality, there is a search for a data descriptor first, on type(foo) and its parent classes. If such an object exists, it is tasked with handling the attribute setting:
obj = None
# search type(foo) and its bases for "attr", in MRO order (most derived first)
for cls in type(foo).__mro__:
    if "attr" in cls.__dict__:
        obj = cls.__dict__["attr"]
        break
if hasattr(obj, "__set__") or hasattr(obj, "__delete__"):  # data descriptor?
    try:
        obj.__set__(foo, bar)
    except AttributeError:
        raise AttributeError("can't set attribute")
else:
    foo.__dict__["attr"] = bar
property objects implement __set__, and this implementation will call your setter (if one is set).
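To make that concrete, a short sketch (assuming the Dog class from the question is in scope):
rexi = Dog("rexi")
prop = type(rexi).__dict__['is_barking']   # the property object on the class
print(hasattr(prop, '__set__'))            # True -> property is a data descriptor
prop.__set__(rexi, True)                   # what rexi.is_barking = True ends up doing
print(rexi.is_barking)                     # True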

Substitute a mock object for the metaclass of a class

How can I override the metaclass of a Python class, with a unittest.mock.MagicMock instance instead?
I have a function whose job involves working with the metaclass of an argument:
# lorem.py

class Foo(object):
    pass


def quux(existing_class):
    …
    metaclass = type(existing_class)
    new_class = metaclass(…)
The unit tests for this function will need to assert that the calls to
the metaclass go as expected, without actually calling a real class
object.
Note: The test case does not care about the metaclass's behaviour; it cares that quux retrieves that metaclass (using type(existing_class)) and calls the metaclass with the correct arguments.
So to write a unit test for this function, I want to pass a class object whose metaclass is a mock object instead. This will allow, for example, making assertions about how the metaclass was called, and ensuring no unwanted side effects.
# test_lorem.py

import unittest
import unittest.mock

import lorem


class stub_metaclass(type):
    def __new__(metaclass, name, bases, namespace):
        return super().__new__(metaclass, name, bases, namespace)


class quux_TestCase(unittest.TestCase):

    @unittest.mock.patch.object(
            lorem.Foo, '__class__', side_effect=stub_metaclass)
    def test_calls_expected_metaclass_with_class_name(
            self,
            mock_foo_metaclass,
    ):
        expected_name = 'Foo'
        expected_bases = …
        expected_namespace = …

        lorem.quux(lorem.Foo)

        mock_foo_metaclass.assert_called_with(
                expected_name, expected_bases, expected_namespace)
When I try to mock the __class__ attribute of an existing class, though, I get this error:
File "/usr/lib/python3/dist-packages/mock/mock.py", line 1500, in start
result = self.__enter__()
File "/usr/lib/python3/dist-packages/mock/mock.py", line 1460, in __enter__
setattr(self.target, self.attribute, new_attr)
TypeError: __class__ must be set to a class, not 'MagicMock' object
This is telling me that unittest.mock.patch is attempting to set the __class__ attribute temporarily to a MagicMock instance, as I want; but Python is refusing that with a TypeError.
But placing a mock object as the metaclass is exactly what I'm trying to do: put a unittest.mock.MagicMock instance in the __class__ attribute in order that the mock object will do all that it does: record calls, pretend valid behaviour, etc.
How can I set a mock object in place of the Foo class's __class__ attribute, in order to instrument Foo and test that my code uses Foo's metaclass correctly?
You can't do exactly what you want. As you can see, an object's __class__ attribute is very special in Python, and even for ordinary instances there are checks at runtime to verify that it is assigned to a proper type.
When you get down to a class's __class__, that is even more strict.
Possible approach:
One thing you can do there is not pass a class to your test, but an object that is an instance of a crafted ordinary class, which has an artificial __class__ attribute. Even then, you will have to change your code from calling type(existing_class) to reading existing_class.__class__ directly. For an instance object to "falsify" its __class__ this way, you have to implement __class__ as a property on its class (or override __getattribute__); the class itself will still report its true metaclass, but an instance will return whatever is coded in the __class__ property.
class Foo:
    @property
    def __class__(self):
        return stub_metaclass
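In use, the fake looks like this (assuming the stub_metaclass from your test module is in scope):
foo = Foo()
print(foo.__class__)   # stub_metaclass -- the value faked by the property
print(type(foo))       # <class 'Foo'> -- type() still reports the real class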
Actual suggestion:
But then, since you are at it, maybe the simplest thing is to mock type itself in the target module where quux is defined.
class MockType:
    def __init__(self):
        self.mock = unittest.mock.Mock()

    def __call__(self, *args):
        return self.mock

...

class ...:
    ...
    def test_calls_expected_metaclass_with_class_name(
            self,
    ):
        try:
            new_type = MockType()
            # This creates "type" in the "lorem" module namespace as a
            # global variable. It will then override the built-in "type".
            lorem.type = new_type
            lorem.quux(lorem.Foo)
        finally:
            del lorem.type  # un-shadows the built-in type on the module

        new_type.mock.assert_called_with(
            'Foo', unittest.mock.ANY, unittest.mock.ANY)
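As a variation on the same idea, unittest.mock.patch can install and remove that module-level shadow for you. A sketch, assuming quux looks type up by its bare name so the module global takes precedence (create=True is needed because the lorem module defines no type attribute of its own):
import unittest
import unittest.mock

import lorem


class quux_TestCase(unittest.TestCase):
    def test_calls_expected_metaclass_with_class_name(self):
        with unittest.mock.patch('lorem.type', create=True) as mock_type:
            lorem.quux(lorem.Foo)
        # quux did: metaclass = type(existing_class)
        mock_type.assert_called_once_with(lorem.Foo)
        # ...and then called that metaclass; the exact arguments depend on quux
        self.assertTrue(mock_type.return_value.called)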
Still another approach
Another thing that can be done is to craft a full "MockMetaclass" in the old fashion: without unittest.mock.MagicMock at all, but with instrumented __new__ and other relevant methods that record the call parameters and that function as a true metaclass for the class you pass in as a parameter.
Considerations on what is being done
People reaching here, please note that one should not test the class-creation (and metaclass) mechanisms themselves. One can just assume the Python runtime has these working and tested already.

Python: when I use '__slots__'

Recently I've been studying Python, but I have a question about __slots__. In my opinion, it is for limiting attributes in a class, but does it also limit the methods in a class?
For example:
from types import MethodType

class Student(object):
    __slots__ = ('name', 'age')
When I run the code:
def set_age(self, age):
    self.age = age

stu = Student()
stu.set_age = MethodType(set_age, stu, Student)
print stu.age
An error has occurred:
stu.set_age=MethodType(set_age,stu,Student)
AttributeError: 'Student' object has no attribute 'set_age'
I want to know: why can't set_age be set on this instance?
Using __slots__ means you don't get a __dict__ with each class instance, and so each instance is more lightweight. The downside is that you cannot modify the methods and cannot add attributes. And you cannot do what you attempted to do, which is to add methods (which would be adding attributes).
Also, the pythonic approach is not to instantiate a MethodType, but to simply create the function in the class namespace. If you're attempting to add or modify the function on the fly, as in monkey-patching, then you simply assign the function to the class, as in:
Student.set_age = set_age
Assigning it to the instance, of course, is what you can't do when the class uses __slots__.
Here's the __slots__ docs:
https://docs.python.org/2/reference/datamodel.html#slots
In new-style classes, methods are not instance attributes. Instead, they are class attributes that follow the descriptor protocol by defining a __get__ method. The method call obj.some_method(arg) is equivalent to obj.__class__.some_method.__get__(obj)(arg), which is in turn equivalent to obj.__class__.some_method(obj, arg). The __get__ implementation does the instance binding (sticking obj in as the first argument to some_method when it is called).
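A quick sketch of that equivalence (class and method names are made up):
class Greeter(object):
    def hello(self, arg):
        print(arg)

g = Greeter()
g.hello(1)                          # normal bound-method call
g.__class__.hello.__get__(g)(1)     # what the descriptor protocol does
g.__class__.hello(g, 1)             # the unbound equivalent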
In your example code, you're instead trying to put a hand-bound method as an instance variable of the already-existing instance. This doesn't work because your __slots__ declaration prevents you from adding new instance attributes. However, if you wrote to the class instead, you'd have no problem:
class Foo(object):
    __slots__ = ()  # no instance variables!

def some_method(self, arg):
    print(arg)

Foo.some_method = some_method  # this works!

f = Foo()
f.some_method('spam')  # so does this
This code would also work if you created the instance before adding the method to its class.
Your instance indeed doesn't have an attribute set_age, since you didn't create a slot for it. What did you expect?
Also, it should be __slots__ not __slots (I imagine this is right in your actual code, otherwise you wouldn't be getting the error you're getting).
Why aren't you just using:
class Student(object):
    __slots__ = ('name', 'age')

    def set_age(self, age):
        self.age = age
where set_age is a method of the Student class, rather than adding the function as a method to an instance of the Student class?
Instead of __slots__, I'm using the following method. It allows the use of only a predefined set of attributes:
class A(object):
    def __init__(self):
        self.__dict__['a'] = ''
        self.__dict__['b'] = ''

    def __getattr__(self, name):
        d = getattr(self, '__dict__')
        if name in d:
            return d[name]
        else:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        d = getattr(self, '__dict__')
        if name in d:
            d[name] = value
        else:
            raise AttributeError(name)
The use of getattr(..) is to avoid recursion.
There are some merits to using __slots__ vs __dict__ in terms of memory and perhaps speed, but this is easy to implement and read.
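In use, it behaves like this:
a = A()
a.a = 'hello'    # fine: 'a' is one of the predefined attributes
print(a.a)       # hello
a.c = 'nope'     # raises AttributeError: 'c' was never declared in __init__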

How to use default property descriptors and successfully assign from __init__()?

What's the correct idiom for this please?
I want to define an object containing properties which can (optionally) be initialized from a dict (the dict comes from JSON; it may be incomplete). Later on I may modify the properties via setters.
There are actually 13+ properties, and I want to be able to use default getters and setters, but that doesn't seem to work for this case:
But I don't want to have to write explicit descriptors for all of prop1... propn
Also, I'd like to move the default assignments out of __init__() and into the accessors... but then I'd need explicit descriptors.
What's the most elegant solution? (other than move all the setter calls out of __init__() and into a method/classmethod _make()?)
[DELETED COMMENT: The code for badprop using a default descriptor was prompted by a comment from a previous SO user, which gave the impression it gives you a default setter. But it doesn't - the setter is undefined and it necessarily throws AttributeError.]
class DubiousPropertyExample(object):
    def __init__(self, dct=None):
        self.prop1 = 'some default'
        self.prop2 = 'other default'
        #self.badprop = 'This throws AttributeError: can\'t set attribute'
        if dct is None: dct = dict()  # or use defaultdict
        for prop, val in dct.items():
            self.__setattr__(prop, val)

    # How do I do default property descriptors? this is wrong
    #@property
    #def badprop(self): pass

    # Explicit descriptors for all properties - yukk
    @property
    def prop1(self): return self._prop1
    @prop1.setter
    def prop1(self, value): self._prop1 = value

    @property
    def prop2(self): return self._prop2
    @prop2.setter
    def prop2(self, value): self._prop2 = value

dub = DubiousPropertyExample({'prop2':'crashandburn'})
print dub.__dict__
# {'_prop2': 'crashandburn', '_prop1': 'some default'}
If you run this with line 5 self.badprop = ... uncommented, it fails:
self.badprop = 'This throws AttributeError: can\'t set attribute'
AttributeError: can't set attribute
[As ever, I read the SO posts on descriptors, implicit descriptors, calling them from init]
I think you're slightly misunderstanding how properties work. There is no "default setter". It throws an AttributeError on setting badprop not because it doesn't yet know that badprop is a property rather than a normal attribute (if that were the case it would just set the attribute with no error, because that's how normal attributes behave), but because you haven't provided a setter for badprop, only a getter.
Have a look at this:
>>> class Foo(object):
        @property
        def foo(self):
            return self._foo
        def __init__(self):
            self._foo = 1

>>> f = Foo()
>>> f.foo = 2

Traceback (most recent call last):
  File "<pyshell#12>", line 1, in <module>
    f.foo = 2
AttributeError: can't set attribute
You can't set such an attribute even from outside of __init__, after the instance is constructed. If you just use @property, then what you have is a read-only property (effectively a method call that looks like an attribute read).
If all you're doing in your getters and setters is redirecting read/write access to an attribute of the same name but with an underscore prepended, then by far the simplest thing to do is get rid of the properties altogether and just use normal attributes. Python isn't Java (and even in Java I'm not convinced of the virtue of private fields with the obvious public getter/setter anyway). An attribute that is directly accessible to the outside world is a perfectly reasonable part of your "public" interface. If you later discover that you need to run some code whenever an attribute is read/written you can make it a property then without changing your interface (this is actually what descriptors were originally intended for, not so that we could start writing Java style getters/setters for every single attribute).
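For instance, a rough sketch (hypothetical Thing class) of promoting a plain attribute to a property later without touching callers:
class Thing(object):
    """'width' used to be a plain attribute (self.width = 10); callers
    still write thing.width, but now a property validates the value."""
    def __init__(self):
        self.width = 10          # goes through the setter below

    @property
    def width(self):
        return self._width

    @width.setter
    def width(self, value):
        if value < 0:
            raise ValueError("width must be non-negative")
        self._width = value

t = Thing()
t.width = 20    # same syntax callers always used, now validated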
If you're actually doing something in the properties other than changing the name of the attribute, and you do want your attributes to be readonly, then your best bet is probably to treat the initialisation in __init__ as directly setting the underlying data attributes with the underscore prepended. Then your class can be straightforwardly initialised without AttributeErrors, and thereafter the properties will do their thing as the attributes are read.
If you're actually doing something in the properties other than changing the name of the attribute, and you want your attributes to be readable and writable, then you'll need to actually specify what happens when you get/set them. If each attribute has independent custom behaviour, then there is no more efficient way to do this than explicitly providing a getter and a setter for each attribute.
If you're running exactly the same (or very similar) code in every single getter/setter (and it's not just adding an underscore to the real attribute name), and that's why you object to writing them all out (rightly so!), then you may be better served by implementing some of __getattr__, __getattribute__, and __setattr__. These allow you to redirect attribute reading/writing to the same code each time (with the name of the attribute as a parameter), rather than to two functions for each attribute (getting/setting).
It seems like the easiest way to go about this is to just implement __getattr__ and __setattr__ such that they will access any key in your parsed JSON dict, which you should set as an instance member. Alternatively, you could call update() on self.__dict__ with your parsed JSON, but that's not really the best way to go about things, as it means your input dict could potentially trample members of your instance.
As to your setters and getters, you should only be creating them if they actually do something special other than directly set or retrieve the value in question. Python isn't Java (or C++ or anything else), you shouldn't try to mimic the private/set/get paradigm that is common in those languages.
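A minimal sketch of that dict-backed approach (the Config name and the key restriction are made up for illustration):
import json

class Config(object):
    """Expose the keys of a parsed JSON dict as attributes."""
    _defaults = {'prop1': 'some default', 'prop2': 'other default'}

    def __init__(self, dct=None):
        # write through __dict__ so our own __setattr__ is not triggered
        self.__dict__['_data'] = dict(self._defaults, **(dct or {}))

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name not in self._data:
            raise AttributeError("unknown property: %s" % name)
        self._data[name] = value

cfg = Config(json.loads('{"prop2": "crashandburn"}'))
print(cfg.prop1)   # some default
print(cfg.prop2)   # crashandburn
cfg.prop1 = 'x'    # fine; cfg.prop99 = 'y' would raise AttributeError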
I simply keep the dict on the instance and get/set my properties from it.
class test(object):
    def __init__(self, **kwargs):
        self.kwargs = kwargs
        #self.value = 20  # assigning from __init__ is possible

    @property
    def value(self):
        if self.kwargs.get('value') is None:
            self.kwargs.update(value=0)  # default
        return self.kwargs.get('value')

    @value.setter
    def value(self, v):
        print(v)  # do something with v
        self.kwargs.update(value=v)

x = test()
print(x.value)
x.value = 10
x.value = 5
Output
0
10
5

How do I wrangle python lookups: make.up.a.dot.separated.name.and.use.it.until.destroyed = 777

I'm a Python newbie with a very particular itch to experiment with Python's dot-name-lookup process. How do I code either a class or a function in "make.py" so that these assignment statements work successfully?
import make
make.a.dot.separated.name = 666
make.something.else.up = 123
make.anything.i.want = 777
#!/usr/bin/env python
class Make:
    def __getattr__(self, name):
        self.__dict__[name] = Make()
        return self.__dict__[name]

make = Make()
make.a.dot.separated.name = 666
make.anything.i.want = 777
print make.a.dot.separated.name
print make.anything.i.want
The special __getattr__ method is called when a named value isn't found. The line make.anything.i.want ends up doing the equivalent of:
m1 = make.anything # calls make.__getattr__("anything")
m2 = m1.i # calls m1.__getattr__("i")
m2.want = 777
The above implementation uses these calls to __getattr__ to create a chain of Make objects each time an unknown property is accessed. This allows the dot accesses to be nested arbitrarily deep until the final assignment at which point a real value is assigned.
Python documentation - customizing attribute access:
object.__getattr__(self, name)
Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. This method should return the (computed) attribute value or raise an AttributeError exception.
Note that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__().) This is done both for efficiency reasons and because otherwise __getattr__() would have no way to access other attributes of the instance. Note that at least for instance variables, you can fake total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the __getattribute__() method below for a way to actually get total control in new-style classes.
object.__setattr__(self, name, value)
Called when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it.
If __setattr__() wants to assign to an instance attribute, it should not simply execute self.name = value — this would cause a recursive call to itself. Instead, it should insert the value in the dictionary of instance attributes, e.g., self.__dict__[name] = value. For new-style classes, rather than accessing the instance dictionary, it should call the base class method with the same name, for example, object.__setattr__(self, name, value).
