How to know which attribute will be requested next in Python

I have a class with a custom getter, so there are situations where I need to use my custom getter and situations where I need to use the default one.
Consider the following.
If I access an attribute of object c like this:
c.somePyClassProp
In that case I need to call the custom getter, and the getter should return an int value, not a Python object.
But if I access it like this:
c.somePyClassProp.getAttributes()
In this case I need the default behaviour: the first access should return the Python object, and then the getAttributes method of that returned object (from c.somePyClassProp) should be called.
Note that somePyClassProp is actually a property of the class whose value is another Python class instance.
So, is there any way in Python to know whether some other method will be called after the first attribute access?

No. c.someMethod is a self-contained expression; its evaluation cannot be influenced by the context in which the result will be used. If it were possible to achieve what you want, this would be the result:
x = c.someMethod
c.someMethod.getAttributes() # Works!
x.getAttributes() # AttributeError!
This would be confusing as hell.
Don't try to make c.someMethod behave differently depending on what will be done with it, and if possible, don't make c.someMethod a method call at all. People will expect c.someMethod to return a bound method object that can then be called to execute the method; just define the method the usual way and call it with c.someMethod().
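For illustration, a minimal sketch of the conventional layout this answer recommends (all names here are invented): expose the int and the wrapped object under two different names, so neither access site is ambiguous.
class Wrapped(object):
    def getInt(self):
        return 42

    def getAttributes(self):
        return 'a b c'

class C(object):
    def __init__(self, wrapped):
        self._wrapped = wrapped

    @property
    def some_value(self):
        # plain int, used as c.some_value
        return self._wrapped.getInt()

    @property
    def some_obj(self):
        # the wrapped object, used as c.some_obj.getAttributes()
        return self._wrapped

c = C(Wrapped())
print(c.some_value)                 # 42
print(c.some_obj.getAttributes())   # 'a b c'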

You don't want to return different values based on which attribute is accessed next, you want to return an int-like object that also has the required attribute on it. To do this, we create a subclass of int that has a getAttributes() method. An instance of this class, of course, needs to know what object it is "bound" to, that is, what object its getAttributes() method should refer to, so we'll add this to the constructor.
class bound_int(int):
    def __new__(cls, value, obj):
        val = int.__new__(cls, value)
        val.obj = obj
        return val

    def getAttributes(self):
        return self.obj.somePyClassProp
Now in your getter for c.somePyClassProp, instead of returning an integer, you return a bound_int and pass it a reference to the object its getAttributes() method needs to know about (here I'll just have it refer to self, the object it's being returned from):
@property
def somePyClassProp(self):
    return bound_int(42, self)
This way, if you use c.somePyClassProp as an int, it acts just like any other int, because it is one, but if you want to further call getAttributes() on it, you can do that, too. It's the same value in both cases; it just has been built to fulfill both purposes. This approach can be adapted to pretty much any problem of this type.
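For example, a brief usage sketch (assuming the property above is defined on the owner class, called C here just for the example):
c = C()
n = c.somePyClassProp      # a bound_int
print(n + 1)               # 43 -- behaves like any other int
print(n.getAttributes())   # also works, via the reference to c stored on the bound_int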

It looks like you want two ways to get the property depending on what you want to do with it. I don't think there's any inherent Pythonic way to implement this, and you therefore need to store a variable or property name for each case. Maybe:
c.somePyClassProp
can be used in the __get__ and
c.somePyClassProp__getAttributes()
can be implemented in a more custom way inside the __getattribute__ function.
One way I've used (which is probably not the best) is to check for that exact variable name:
def __getattribute__(self, var_name):
    if '__' in var_name and not var_name.startswith('__'):
        # e.g. 'somePyClassProp__getAttributes' -> ('somePyClassProp', 'getAttributes')
        var_name, method = var_name.split('__', 1)
        return object.__getattribute__(self, var_name).__getattribute__(method)
    return object.__getattribute__(self, var_name)
Using object.__getattribute__(self, var_name) uses the base object class's machinery to fetch the attribute directly.
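For illustration, a rough end-to-end sketch of this naming-convention approach (the class and attribute names are invented; it assumes the stored object exposes the method named after the double underscore):
class Inner(object):
    def getAttributes(self):
        return 'a b c'

class C(object):
    def __init__(self, wrapped):
        self.somePyClassProp = wrapped

    def __getattribute__(self, var_name):
        # 'somePyClassProp__getAttributes' -> fetch somePyClassProp,
        # then return its getAttributes member (a bound method)
        if '__' in var_name and not var_name.startswith('__'):
            var_name, method = var_name.split('__', 1)
            return getattr(object.__getattribute__(self, var_name), method)
        return object.__getattribute__(self, var_name)

c = C(Inner())
print(c.somePyClassProp__getAttributes())   # 'a b c', via the wrapped object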

You can store the contained Python object as a variable and then create getters via the @property decorator for whatever values you want. When you want to read the int, reference the property. When you want the contained object, use its variable name instead.
class SomePyClass(object):
    def getInt(self):
        return 1

    def getAttributes(self):
        return 'a b c'

class MyClass(object):
    def __init__(self, py_class):
        self._py_class = py_class

    @property
    def some_property(self):
        return self._py_class.getInt()

x = MyClass(SomePyClass())
y = x.some_property
x._py_class.getAttributes()

Related

What happens when we edit(append, remove...) a list and can we execute actions each time a list is edited

I would like to know if there is a way to create a list that will execute some actions each time I use the method append (or another similar method).
I know that I could create a class that inherits from list and override append, remove, and all the other methods that change the content of the list, but I would like to know if there is another way.
By comparison, if I want to print 'edited' each time I edit an attribute of an object, I will not add print("edited") to every method of that object's class. Instead, I will only override __setattr__.
I tried to create my own type which inherits from list and overrides __setattr__, but that doesn't work: when I use myList.append, __setattr__ isn't called. I would like to know what really happens when I use myList.append. Are there some magic methods called that I could override?
I know that the question has already been asked here: What happens when you call `append` on a list?. The answer given is basically that there is no answer... I hope that's a mistake.
I don't know if there is an answer to my request, so I will also explain why I am confronted with this problem; maybe I can search in another direction to do what I want. I have a class with several attributes. When an attribute is edited, I want to execute some actions. As I explained before, I usually override __setattr__ to do this. That works fine for most attributes. The problem is lists. If the attribute is used like this: myClass.myListAttr.append(something), __setattr__ isn't called even though the value of the attribute has changed.
The problem would be the same with dictionaries. Methods like pop don't call __setattr__.
If I understand correctly, you would want something like Notify_list that would call some method (argument to the constructor in my implementation) every time a mutating method is called, so you could do something like this:
class Test:
    def __init__(self):
        self.list = Notify_list(self.list_changed)

    def list_changed(self, method):
        print("self.list.{} was called!".format(method))
>>> x = Test()
>>> x.list.append(5)
self.list.append was called!
>>> x.list.extend([1,2,3,4])
self.list.extend was called!
>>> x.list[1] = 6
self.list.__setitem__ was called!
>>> x.list
[5, 6, 2, 3, 4]
The simplest implementation of this would be to create a subclass and override every mutating method:
class Notifying_list(list):
    __slots__ = ("notify",)

    def __init__(self, notifying_method, *args, **kw):
        self.notify = notifying_method
        list.__init__(self, *args, **kw)

    def append(self, *args, **kw):
        self.notify("append")
        return list.append(self, *args, **kw)

    # etc.
This is obviously not very practical: writing the entire definition would be tedious and repetitive, so we can instead create the new subclass dynamically for any given class with functions like the following:
import functools
import types

def notify_wrapper(name, method):
    """wraps a method to call self.notify(name) when called

    used by notifying_type"""
    @functools.wraps(method)
    def wrapper(*args, **kw):
        self = args[0]
        # use object.__getattribute__ instead of self.notify in
        # case __getattribute__ is one of the notifying methods,
        # in which case self.notify would raise a RecursionError
        notify = object.__getattribute__(self, "_Notify__notify")
        # knowing which method was called seems useful;
        # you may want to change the arguments to the notify method
        notify(name)
        return method(*args, **kw)
    return wrapper

def notifying_type(cls, notifying_methods="all"):
    """creates a subclass of cls that adds an extra function call when calling certain methods

    The constructor of the subclass will take a callable as the first argument
    and arguments for the original class constructor after that.
    The callable will be called every time any of the methods specified in notifying_methods
    is called on the object; it is passed the name of the method as the only argument.

    If notifying_methods is left at the special value 'all', then this uses the function
    get_all_possible_method_names to create wrappers for nearly all methods."""
    if notifying_methods == "all":
        notifying_methods = get_all_possible_method_names(cls)

    def init_for_new_cls(self, notify_method, *args, **kw):
        self._Notify__notify = notify_method
        cls.__init__(self, *args, **kw)  # forward the remaining arguments to the original constructor

    namespace = {"__init__": init_for_new_cls,
                 "__slots__": ("_Notify__notify",)}
    for name in notifying_methods:
        method = getattr(cls, name)  # if this raises an error you are trying to wrap a method that doesn't exist
        namespace[name] = notify_wrapper(name, method)
    # using the type() constructor is easier than writing a metaclass here
    return type("Notify_" + cls.__name__, (cls,), namespace)
unbound_method_or_descriptor = (types.FunctionType,
                                type(list.append),   # method_descriptor, not in types
                                type(list.__add__),  # method_wrapper, also not in types
                                )
def get_all_possible_method_names(cls):
    """generates the names of nearly all methods the given class defines

    three methods are blacklisted: __init__, __new__, and __getattribute__, for these reasons:
      __init__ conflicts with the one defined in notifying_type
      __new__ will not be called with an initialized instance, so there will not be a notify method to use
      __getattribute__ is fine to override, just really annoying in most cases

    Note that this function may not work correctly in all cases;
    it was only tested with very simple classes and the builtin list."""
    blacklist = ("__init__", "__new__", "__getattribute__")
    for name, attr in vars(cls).items():
        if (name not in blacklist and
                isinstance(attr, unbound_method_or_descriptor)):
            yield name
Once we have notifying_type, creating Notify_list or Notify_dict is as simple as:
import collections.abc

mutating_list_methods = set(dir(collections.abc.MutableSequence)) - set(dir(collections.abc.Sequence))
Notify_list = notifying_type(list, mutating_list_methods)

mutating_dict_methods = set(dir(collections.abc.MutableMapping)) - set(dir(collections.abc.Mapping))
Notify_dict = notifying_type(dict, mutating_dict_methods)
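For example, a quick sketch of how the dict version could be used (following the same pattern as the interactive list demo above):
d = Notify_dict(lambda name: print("dict was changed via", name))
d["a"] = 1        # prints: dict was changed via __setitem__
d.update(b=2)     # prints: dict was changed via update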
I have not tested this extensively and it quite possibly contains bugs / unhandled corner cases but I do know it worked correctly with list!

Python descriptors (__get__, __set__) on function parameters

Normally a descriptor is used on a class attribute like so:
class Owner(object):
    attr = Attr()
When getting Owner.attr, Attr.__get__(self, instance, owner) is called where self = Owner.attr, instance = None and owner = Owner.
When Owner is instantiated instance will be the instance of Owner.
Now I would like to apply this concept to method parameters instead of class attributes.
How it would look in practice (let's assume that the functionality of Attr is to wrap a string with a given string):
class Example(object):
    def funct(self, param=Attr('t')):
        return param == 'test'  # <-- param calls the descriptor here

e = Example()
e.funct('es')  # <-- is True because 'es' wrapped with 't' becomes 'test'
When accessing param, Attr.__get__(self, instance, owner) will be called with self = funct.param, instance = funct and owner = funct (although it doesn't make sense to have owner and instance the same, might be None?).
But since funct is not a class, this will not work. How can I get something similar to work?
A decorator on the function will be processing the parameters, so this might be part of the solution, I think.
The decorator must, for example, be able to change the wrapper string.
Functions actually are first class objects in Python, but you are correct in saying that the syntax you describe would not work as you want. You could potentially do something like this with a decorator that inspects the passed attributes for characteristics that would enable this sort of functionality though. However, you'd probably be better off implementing a callable object, then attaching descriptors to that and creating instances of the callable rather than functions.
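To make the decorator idea concrete, here is a rough, hypothetical sketch (Attr and process_attrs are made-up names, not an existing API): the decorator binds the call arguments, and any parameter whose default is an Attr gets its passed-in value transformed by that Attr.
import functools
import inspect

class Attr(object):
    """Hypothetical transformer: wraps a string with a given string."""
    def __init__(self, wrapper):
        self.wrapper = wrapper

    def transform(self, value):
        return self.wrapper + value + self.wrapper

def process_attrs(func):
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, param in sig.parameters.items():
            # only touch parameters whose default is an Attr and that were actually passed
            if isinstance(param.default, Attr) and name in bound.arguments:
                bound.arguments[name] = param.default.transform(bound.arguments[name])
        return func(*bound.args, **bound.kwargs)
    return wrapper

class Example(object):
    @process_attrs
    def funct(self, param=Attr('t')):
        return param == 'test'

e = Example()
print(e.funct('es'))  # True: 'es' becomes 't' + 'es' + 't' == 'test'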

How to use default property descriptors and successfully assign from __init__()?

What's the correct idiom for this please?
I want to define an object containing properties which can (optionally) be initialized from a dict (the dict comes from JSON; it may be incomplete). Later on I may modify the properties via setters.
There are actually 13+ properties, and I want to be able to use default getters and setters, but that doesn't seem to work for this case:
But I don't want to have to write explicit descriptors for all of prop1... propn
Also, I'd like to move the default assignments out of __init__() and into the accessors... but then I'd need explicit descriptors.
What's the most elegant solution? (other than move all the setter calls out of __init__() and into a method/classmethod _make()?)
[DELETED COMMENT The code for badprop using default descriptor was due to comment by a previous SO user, who gave the impression it gives you a default setter. But it doesn't - the setter is undefined and it necessarily throws AttributeError.]
class DubiousPropertyExample(object):
    def __init__(self, dct=None):
        self.prop1 = 'some default'
        self.prop2 = 'other default'
        #self.badprop = 'This throws AttributeError: can\'t set attribute'
        if dct is None: dct = dict()  # or use defaultdict
        for prop, val in dct.items():
            self.__setattr__(prop, val)

    # How do I do default property descriptors? this is wrong
    #@property
    #def badprop(self): pass

    # Explicit descriptors for all properties - yukk
    @property
    def prop1(self): return self._prop1
    @prop1.setter
    def prop1(self, value): self._prop1 = value

    @property
    def prop2(self): return self._prop2
    @prop2.setter
    def prop2(self, value): self._prop2 = value

dub = DubiousPropertyExample({'prop2':'crashandburn'})
print dub.__dict__
# {'_prop2': 'crashandburn', '_prop1': 'some default'}
If you run this with line 5 self.badprop = ... uncommented, it fails:
self.badprop = 'This throws AttributeError: can\'t set attribute'
AttributeError: can't set attribute
[As ever, I read the SO posts on descriptors, implicit descriptors, calling them from init]
I think you're slightly misunderstanding how properties work. There is no "default setter". It throws an AttributeError on setting badprop not because it doesn't yet know that badprop is a property rather than a normal attribute (if that were the case it would just set the attribute with no error, because that's how normal attributes behave), but because you haven't provided a setter for badprop, only a getter.
Have a look at this:
>>> class Foo(object):
        @property
        def foo(self):
            return self._foo
        def __init__(self):
            self._foo = 1

>>> f = Foo()
>>> f.foo = 2

Traceback (most recent call last):
  File "<pyshell#12>", line 1, in <module>
    f.foo = 2
AttributeError: can't set attribute
You can't set such an attribute even from outside of __init__, after the instance is constructed. If you just use @property, then what you have is a read-only property (effectively a method call that looks like an attribute read).
If all you're doing in your getters and setters is redirecting read/write access to an attribute of the same name but with an underscore prepended, then by far the simplest thing to do is get rid of the properties altogether and just use normal attributes. Python isn't Java (and even in Java I'm not convinced of the virtue of private fields with the obvious public getter/setter anyway). An attribute that is directly accessible to the outside world is a perfectly reasonable part of your "public" interface. If you later discover that you need to run some code whenever an attribute is read/written you can make it a property then without changing your interface (this is actually what descriptors were originally intended for, not so that we could start writing Java style getters/setters for every single attribute).
If you're actually doing something in the properties other than changing the name of the attribute, and you do want your attributes to be readonly, then your best bet is probably to treat the initialisation in __init__ as directly setting the underlying data attributes with the underscore prepended. Then your class can be straightforwardly initialised without AttributeErrors, and thereafter the properties will do their thing as the attributes are read.
If you're actually doing something in the properties other than changing the name of the attribute, and you want your attributes to be readable and writable, then you'll need to actually specify what happens when you get/set them. If each attribute has independent custom behaviour, then there is no more efficient way to do this than explicitly providing a getter and a setter for each attribute.
If you're running exactly the same (or very similar) code in every single getter/setter (and it's not just adding an underscore to the real attribute name), and that's why you object to writing them all out (rightly so!), then you may be better served by implementing some of __getattr__, __getattribute__, and __setattr__. These allow you to redirect attribute reading/writing to the same code each time (with the name of the attribute as a parameter), rather than to two functions for each attribute (getting/setting).
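A minimal sketch of that last idea, assuming the only behaviour shared by every property is redirecting reads and writes to an underscore-prefixed attribute (one shared code path instead of thirteen getter/setter pairs); the class and attribute names are invented:
class Redirected(object):
    _defaults = {'prop1': 'some default', 'prop2': 'other default'}

    def __init__(self, dct=None):
        for name, value in {**self._defaults, **(dct or {})}.items():
            setattr(self, name, value)

    def __getattr__(self, name):
        # only called when normal lookup fails, i.e. for the public names
        try:
            return self.__dict__['_' + name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # one shared code path for every property write
        self.__dict__['_' + name] = value

r = Redirected({'prop2': 'from JSON'})
print(r.prop1, r.prop2)   # some default from JSON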
It seems like the easiest way to go about this is to just implement __getattr__ and __setattr__ such that they will access any key in your parsed JSON dict, which you should set as an instance member. Alternatively, you could call update() on self.__dict__ with your parsed JSON, but that's not really the best way to go about things, as it means your input dict could potentially trample members of your instance.
As to your setters and getters, you should only be creating them if they actually do something special other than directly set or retrieve the value in question. Python isn't Java (or C++ or anything else), you shouldn't try to mimic the private/set/get paradigm that is common in those languages.
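For illustration only, a rough sketch of the dict-backed variant (names are made up; the parsed JSON is kept in one instance member rather than spread over __dict__):
class JsonBacked(object):
    def __init__(self, data=None):
        # bypass our own __setattr__ so '_data' itself is a real attribute
        object.__setattr__(self, '_data', dict(data or {}))

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._data[name] = value

obj = JsonBacked({'prop1': 'from JSON'})
obj.prop2 = 'set later'
print(obj.prop1, obj.prop2)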
I simply keep the dict on the instance and get/set my properties through it.
class test(object):
    def __init__(self, **kwargs):
        self.kwargs = kwargs
        # self.value = 20  # assigning from __init__ is possible

    @property
    def value(self):
        if self.kwargs.get('value') is None:
            self.kwargs.update(value=0)  # default
        return self.kwargs.get('value')

    @value.setter
    def value(self, v):
        print(v)  # do something with v
        self.kwargs.update(value=v)

x = test()
print(x.value)
x.value = 10
x.value = 5
Output
0
10
5

How to fake type with Python

I recently developed a class named DocumentWrapper around some ORM document object in Python to transparently add some features to it without changing its interface in any way.
I just have one issue with this. Let's say I have some User object wrapped in it. Calling isinstance(some_var, User) will return False because some_var indeed is an instance of DocumentWrapper.
Is there any way to fake the type of an object in Python to have the same call return True?
You can use the __instancecheck__ magic method to override the default isinstance behaviour:
@classmethod
def __instancecheck__(cls, instance):
    return isinstance(instance, User)
This is only if you want your object to be a transparent wrapper; that is, if you want a DocumentWrapper to behave like a User. Otherwise, just expose the wrapped class as an attribute.
__instancecheck__ arrived with abstract base classes (PEP 3119), so it is available from Python 2.6 onward as well as in Python 3.
Override __class__ in your wrapper class DocumentWrapper:
class DocumentWrapper(object):
    @property
    def __class__(self):
        return User

>>> isinstance(DocumentWrapper(), User)
True
This way no modifications to the wrapped class User are needed.
Python Mock does the same (see mock.py:612 in mock-2.0.0, couldn't find sources online to link to, sorry).
Testing the type of an object is usually an antipattern in python. In some cases it makes sense to test the "duck type" of the object, something like:
hasattr(some_var, "username")
But even that's undesirable; for instance, there are reasons why that expression might return False even when a wrapper uses some __getattribute__ magic to correctly proxy the attribute.
It's usually preferred to allow variables to take only a single abstract type, and possibly None. Different behaviours based on different inputs should be achieved by passing the optionally typed data in different variables. You want to do something like this:
def dosomething(some_user=None, some_otherthing=None):
    if some_user is not None:
        # do the "User" type action
        ...
    elif some_otherthing is not None:
        # etc...
        ...
    else:
        raise ValueError("not enough arguments")
Of course, this all assumes you have some level of control over the code that is doing the type checking. Suppose it isn't. For isinstance() to return True, the class must appear in the instance's bases, or the class must have an __instancecheck__. Since you don't control either of those things for the class, you have to resort to some shenanigans on the instance. Do something like this:
def wrap_user(instance):
    class wrapped_user(type(instance)):
        __metaclass__ = type

        def __init__(self):
            pass

        def __getattribute__(self, attr):
            self_dict = object.__getattribute__(type(self), '__dict__')
            if attr in self_dict:
                return self_dict[attr]
            return getattr(instance, attr)

        def extra_feature(self, foo):
            return instance.username + foo  # or whatever

    return wrapped_user()
What we're doing is creating a new class dynamically at the time we need to wrap the instance, and actually inherit from the wrapped object's __class__. We also go to the extra trouble of overriding the __metaclass__, in case the original had some extra behaviors we don't actually want to encounter (like looking for a database table with a certain class name). A nice convenience of this style is that we never have to create any instance attributes on the wrapper class, there is no self.wrapped_object, since that value is present at class creation time.
Edit: As pointed out in comments, the above only works for some simple types, if you need to proxy more elaborate attributes on the target object, (say, methods), then see the following answer: Python - Faking Type Continued
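A rough usage sketch of wrap_user (the User class here is a stand-in for the ORM class in the question):
class User(object):              # stand-in for the real ORM class
    username = 'alice'

user = User()
proxy = wrap_user(user)
print(isinstance(proxy, User))   # True: the dynamic class inherits from type(user)
print(proxy.username)            # 'alice', forwarded to the wrapped instance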
Here is a solution using a metaclass, but you need to modify the wrapped classes:
>>> import abc
>>> class DocumentWrapper:
        def __init__(self, wrapped_obj):
            self.wrapped_obj = wrapped_obj

>>> class MetaWrapper(abc.ABCMeta):
        def __instancecheck__(self, instance):
            try:
                return isinstance(instance.wrapped_obj, self)
            except AttributeError:
                return isinstance(instance, self)

>>> class User(metaclass=MetaWrapper):
        pass

>>> user = DocumentWrapper(User())
>>> isinstance(user, User)
True
>>> class User2:
        pass

>>> user2 = DocumentWrapper(User2())
>>> isinstance(user2, User2)
False
It sounds like you want to test the type of the object your DocumentWrapper wraps, not the type of the DocumentWrapper itself. If that's right, then the interface to DocumentWrapper needs to expose that type. You might add a method to your DocumentWrapper class that returns the type of the wrapped object, for instance. But I don't think that making the call to isinstance ambiguous, by making it return True when it's not, is the right way to solve this.
The best way is to inherit DocumentWrapper from User itself, or to use a mix-in pattern and do multiple inheritance from several classes:
class DocumentWrapper(User, object)
You can also fake isinstance() results by manipulating obj.__class__ but this is deep level magic and should not be done.

Dynamically adding @property in Python

I know that I can dynamically add an instance method to an object by doing something like:
import types

def my_method(self):
    # logic of method
    # ...
    pass

# instance is some instance of some class
instance.my_method = types.MethodType(my_method, instance)
Later on I can call instance.my_method() and self will be bound correctly and everything works.
Now, my question: how to do the exact same thing to obtain the behavior that decorating the new method with @property would give?
I would guess something like:
instance.my_method = types.MethodType(my_method, instance)
instance.my_method = property(instance.my_method)
But after doing that, instance.my_method returns a property object.
The property descriptor object needs to live in the class, not in the instance, to have the effect you desire. If you don't want to alter the existing class in order to avoid altering the behavior of other instances, you'll need to make a "per-instance class", e.g.:
def addprop(inst, name, method):
    cls = type(inst)
    if not hasattr(cls, '__perinstance'):
        cls = type(cls.__name__, (cls,), {})
        cls.__perinstance = True
        inst.__class__ = cls
    setattr(cls, name, property(method))
I'm marking these special "per-instance" classes with an attribute to avoid needlessly making multiple ones if you're doing several addprop calls on the same instance.
Note that, like for other uses of property, you need the class in play to be new-style (typically obtained by inheriting directly or indirectly from object), not the ancient legacy style (dropped in Python 3) that's assigned by default to a class without bases.
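A brief usage sketch (the Widget class and get_total function are made up for illustration):
class Widget(object):
    def __init__(self, items):
        self.items = items

def get_total(self):
    # ordinary function taking self, used as the property's getter
    return sum(self.items)

w = Widget([1, 2, 3])
addprop(w, 'total', get_total)
print(w.total)                    # 6
other = Widget([4])
print(hasattr(other, 'total'))    # False: the property lives only on w's per-instance class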
Since this question isn't asking about adding to only a specific instance,
the following method can be used to add a property to the class; this will expose the property to all instances of the class, YMMV.
cls = type(my_instance)
cls.my_prop = property(lambda self: "hello world")
print(my_instance.my_prop)
# >>> hello world
Note: Adding another answer because I think @Alex Martelli, while correct, achieves the desired result by creating a new class that holds the property. This answer is intended to be more direct/straightforward, without abstracting what's going on into its own method.
