Python properties as instance attributes

I am trying to write a class with dynamic properties. Consider the following class with two read-only properties:
class Monster(object):
    def __init__(self, color, has_fur):
        self._color = color
        self._has_fur = has_fur

    @property
    def color(self): return self._color

    @property
    def has_fur(self): return self._has_fur
I want to generalize this so that __init__ can take an arbitrary dictionary and create read-only properties from each item in the dictionary. I could do that like this:
class Monster2(object):
    def __init__(self, traits):
        self._traits = traits
        for key, value in traits.iteritems():
            setattr(self.__class__, key, property(lambda self, key=key: self._traits[key]))
However, this has a serious drawback: every time I create a new instance of Monster, I am actually modifying the Monster class. Instead of creating properties for my new Monster instance, I am effectively adding properties to all instances of Monster. To see this:
>>> hasattr(Monster2,"height")
False
>>> hasattr(Monster2,"has_claws")
False
>>> blue_monster = Monster2({"height":4.3,"color":"blue"})
>>> hasattr(Monster2,"height")
True
>>> hasattr(Monster2,"has_claws")
False
>>> red_monster = Monster2({"color":"red","has_claws":True})
>>> hasattr(Monster2,"height")
True
>>> hasattr(Monster2,"has_claws")
True
This of course makes sense, since I explicitly added the properties as class attributes with setattr(self.__class__,key,property(lambda self,key=key: self._traits[key])). What I need here instead are properties that can be added to the instance. (i.e. "instance properties"). Unfortunately, according to everything I have read and tried, properties are always class attributes, not instance attributes. For example, this doesn't work:
class Monster3(object):
    def __init__(self, traits):
        self._traits = traits
        for key, value in traits.iteritems():
            self.__dict__[key] = property(lambda self, key=key: self._traits[key])
>>> green_monster = Monster3({"color":"green"})
>>> green_monster.color
<property object at 0x028FDAB0>
So my question is this: do "instance properties" exist? If not, what is the reason? I have been able to find lots about how properties are used in Python, but precious little about how they are implemented. If "instance properties" don't make sense, I would like to understand why.

No, there is no such thing as per-instance properties; like all descriptors, properties are always looked up on the class. See the descriptor HOWTO for exactly how that works.
You can implement dynamic attributes using a __getattr__ hook instead, which can check for instance attributes dynamically:
class Monster(object):
    def __init__(self, traits):
        self._traits = traits

    def __getattr__(self, name):
        if name in self._traits:
            return self._traits[name]
        raise AttributeError(name)
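For illustration, hypothetical usage of this class:

m = Monster({"color": "blue", "has_fur": True})
print(m.color)    # blue: __getattr__ fires because 'color' is not a real attribute
print(m.has_fur)  # True
m.height          # raises AttributeError: height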
These attributes are not really dynamic though; you could just set these directly on the instance:
class Monster(object):
    def __init__(self, traits):
        self.__dict__.update(traits)

So my question is this: do "instance properties" exist?
No.
If not, what is the reason?
Because properties are implemented as descriptors. And the magic of descriptors is that they do different things when found in an object's type's dictionary than when found in the object's dictionary.
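A minimal sketch of that asymmetry, with illustrative names:

class WithClassProp(object):
    answer = property(lambda self: 42)

class Plain(object):
    pass

obj = WithClassProp()
print(obj.answer)  # 42: the descriptor found on the type fires

plain = Plain()
plain.answer = property(lambda self: 42)
print(plain.answer)  # <property object at ...>: in the instance dict it is inert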
I have been able to find lots about how properties are used in Python, but precious little about how they are implemented.
Read the Descriptor HowTo Guide linked above.
So, is there a way you could do this?
Well, yes, if you're willing to rethink the design a little.
For your case, all you want to do is use _traits in place of __dict__, and you're generating useless getter functions dynamically, so you could replace the whole thing with a couple of lines of code, as in Martijn Pieters's answer.
Or, if you want to redirect .foo to ._foo iff foo is in a list (or, better, set) of _traits, that's just as easy:
def __getattr__(self, name):
    if name in self._traits:
        return getattr(self, '_' + name)
    raise AttributeError
But let's say you actually had some kind of use for getter functions—each attribute actually needs some code to generate the value, which you've wrapped up in a function, and stored in _traits. In that case:
def __getattr__(self, name):
    getter = self._traits.get(name)
    if getter:
        return getter()
    raise AttributeError
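For instance, assuming this __getattr__ lives on the Monster class from the first snippet, with _traits now mapping names to zero-argument callables:

m = Monster({"color": lambda: "blue"})
print(m.color)  # blue: the stored callable recomputes the value on every lookup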

What I need here instead are properties that can be added to the instance.
A property() is a descriptor and those only work when stored in classes, not when stored in instances.
An easy way to achieve the effect of an instance property is to define __getattr__. That lets you control the behavior of attribute lookups.

If you don't need the properties to be read-only, you can just update the object's __dict__ with kwargs:
class Monster(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
Then you can create instances of that class like this:
m0 = Monster(name='X')
m1 = Monster(name='godzilla', behaviour='godzilla behaviour')

Another way of doing what you want could be to dynamically create monster classes. e.g.
def make_monster_class(traits):
    class DynamicMonster(object):
        pass
    for key, val in traits.items():
        setattr(DynamicMonster, key, property(lambda self, val=val: val))
    return DynamicMonster()

blue_monster = make_monster_class({"height": 4.3, "color": "blue"})
red_monster = make_monster_class({"color": "red", "has_claws": True})
for check in ("height", "color", "has_claws"):
    print "blue", check, getattr(blue_monster, check, "N/A")
    print "red ", check, getattr(red_monster, check, "N/A")
Output:
blue height 4.3
red height N/A
blue color blue
red color red
blue has_claws N/A
red has_claws True

I don't necessarily recommend this (the __getattr__ solution is generally preferable) but you could write your class so that all instances made from it have their own class (well, a subclass of it). This is a quick hacky implementation of that idea:
class MyClass(object):
    def __new__(Class):
        Class = type(Class.__name__, (Class,), {})
        Class.__new__ = object.__new__  # to prevent infinite recursion
        return Class()
m1 = MyClass()
m2 = MyClass()
assert type(m1) is not type(m2)
Now you can set properties on type(self) with aplomb since each instance has its own class object.
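For example, a hypothetical helper built on the class above:

def add_readonly(instance, name, value):
    # Safe here only because every instance got its own private class.
    setattr(type(instance), name, property(lambda self, value=value: value))

add_readonly(m1, 'color', 'blue')
m1.color              # 'blue'
hasattr(m2, 'color')  # False: m2's class was never touched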
@Claudiu's answer is the same kind of thing, just implemented with a function instead of integrated into the instance-making machinery.

Related

Modifying class __dict__ when shadowed by a property

I am attempting to modify a value in a class __dict__ directly using something like X.__dict__['x'] += 1. It is impossible to do the modification like that because a class __dict__ is actually a mappingproxy object that does not allow direct modification of values. The reason for attempting direct modification or equivalent is that I am trying to hide the class attribute behind a property defined on the metaclass with the same name. Here is an example:
class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['x'] = 0
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['x']

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        __class__.__dict__['x'] += 1
This example shows a scheme for creating an auto-incremented ID for each instance of Class. The line __class__.__dict__['x'] += 1 cannot be replaced by setattr(__class__, 'x', __class__.x + 1) because x is a property with no setter in Meta. It would just change a TypeError from mappingproxy into an AttributeError from property.
I have tried messing with __prepare__, but that has no effect. The implementation in type already returns a mutable dict for the namespace. The immutable mappingproxy seems to get set in type.__new__, which I don't know how to avoid.
I have also attempted to rebind the entire __dict__ reference to a mutable version, but that failed as well: https://ideone.com/w3HqNf, implying that perhaps the mappingproxy is not created in type.__new__.
How can I modify a class dict value directly, even when shadowed by a metaclass property? While it may be effectively impossible, setattr is able to do it somehow, so I would expect that there is a solution.
My main requirement is to have a class attribute that appears to be read only and does not use additional names anywhere. I am not absolutely hung up on the idea of using a metaclass property with an eponymous class dict entry, but that is usually how I hide read only values in regular instances.
EDIT
I finally figured out where the class __dict__ becomes immutable. It is described in the last paragraph of the "Creating the Class Object" section of the Data Model reference:
When a new class is created by type.__new__, the object provided as the namespace parameter is copied to a new ordered mapping and the original object is discarded. The new copy is wrapped in a read-only proxy, which becomes the __dict__ attribute of the class object.
Probably the best way: just pick another name. Call the property x and the dict key '_x', so you can access it the normal way.
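A minimal sketch of that rename, keeping the structure of the question's example (the '_x' key is the only change):

class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['_x'] = 0
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['_x']

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        __class__._x += 1  # ordinary assignment works: no metaclass property named '_x'

Instances get ids 0, 1, 2, ..., and Class.x still appears read-only.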
Alternative way: add another layer of indirection:
class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['x'] = [0]
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['x'][0]

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        __class__.__dict__['x'][0] += 1
That way you don't have to modify the actual entry in the class dict.
Super-hacky way that might outright segfault your Python: access the underlying dict through the gc module.
import gc

class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['x'] = 0
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['x']

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        gc.get_referents(__class__.__dict__)[0]['x'] += 1
This bypasses critical work type.__setattr__ does to maintain internal invariants, particularly in things like CPython's type attribute cache. It is a terrible idea, and I'm only mentioning it so I can put this warning here, because if someone else comes up with it, they might not know that messing with the underlying dict is legitimately dangerous.
It is very easy to end up with dangling references doing this, and I have segfaulted Python quite a few times experimenting with this. Here's one simple case that crashed on Ideone:
import gc

class Foo(object):
    x = []

Foo().x
gc.get_referents(Foo.__dict__)[0]['x'] = []
print(Foo().x)
Output:
*** Error in `python3': double free or corruption (fasttop): 0x000055d69f59b110 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x70bcb)[0x2b32d5977bcb]
/lib/x86_64-linux-gnu/libc.so.6(+0x76f96)[0x2b32d597df96]
/lib/x86_64-linux-gnu/libc.so.6(+0x7778e)[0x2b32d597e78e]
python3(+0x2011f5)[0x55d69f02d1f5]
python3(+0x6be7a)[0x55d69ee97e7a]
python3(PyCFunction_Call+0xd1)[0x55d69efec761]
python3(PyObject_Call+0x47)[0x55d69f035647]
... [it continues like that for a while]
And here's a case with wrong results and no noisy error message to alert you to the fact that something has gone wrong:
import gc

class Foo(object):
    x = 'foo'

print(Foo().x)
gc.get_referents(Foo.__dict__)[0]['x'] = 'bar'
print(Foo().x)
Output:
foo
foo
I make absolutely no guarantees as to any safe way to use this, and even if things happen to work out on one Python version, they may not work on future versions. It can be fun to fiddle with, but it's not something to actually use. Seriously, don't do it. Do you want to explain to your boss that your website went down or your published data analysis will need to be retracted because you took this bad idea and used it?
This probably counts as an "additional name" you don't want, but I've implemented this using a dictionary in the metaclass where the keys are the classes. The __next__ method on the metaclass lets you call next() on the class itself to get the next ID. The dunder method also keeps the method from being available through the instances. The dictionary storing the next id has a name starting with a double underscore, so it's not easily discoverable from any of the classes that use it. The incrementing ID functionality is thus entirely contained in the metaclass.
I tucked the assignment of the id into a __new__ method on a base class, so you don't have to worry about it in __init__. This also allows you to del Meta so all the machinery is a little harder to get to.
class Meta(type):
    __ids = {}

    @property
    def id(cls):
        return __class__.__ids.setdefault(cls, 0)

    def __next__(cls):
        id = __class__.__ids.setdefault(cls, 0)
        __class__.__ids[cls] += 1
        return id

class Base(metaclass=Meta):
    def __new__(cls, *args, **kwargs):
        self = object.__new__(cls)
        self.id = next(cls)
        return self

del Meta

class Class(Base):
    pass

class Brass(Base):
    pass
c0 = Class()
c1 = Class()
b0 = Brass()
b1 = Brass()
assert (b0.id, b1.id, c0.id, c1.id) == (0, 1, 0, 1)
assert (Class.id, Brass.id) == (2, 2)
assert not hasattr(Class, "__ids")
assert not hasattr(Brass, "__ids")
Note that I've used the same name for the attribute on both the class and the object. That way Class.id is the number of instances you've created, while c1.id is the ID of that specific instance.
My main requirement is to have a class attribute that appears to be read only and does not use additional names anywhere. I am not absolutely hung up on the idea of using a metaclass property with an eponymous class dict entry, but that is usually how I hide read only values in regular instances.
What you are asking for is a contradiction: If your example worked, then __class__.__dict__['x'] would be an "additional name" for the attribute. So clearly we need a more specific definition of "additional name." But to come up with that definition, we need to know what you are trying to accomplish (NB: The following goals are not mutually exclusive, so you may want to do all of these things):
You want to make the value completely untouchable, except within the Class.__init__() method (and the same method of any subclasses): This is unPythonic and quite impossible. If __init__() can modify the value, then so can anyone else. You might be able to accomplish something like this if the modifying code lives in Class.__new__(), which the metaclass dynamically creates in Meta.__new__(), but that's extremely ugly and hard to understand.
You want the code that manipulates the value to be "nicely encapsulated": Write a method in the metaclass that increments the private value (or does whatever other modification you need), and provide a read-only metaclass property that accesses it under the public name.
You are concerned about a subclass accidentally clashing names with the private name: Prefix the private name with a double underscore to invoke automatic name mangling. While this is usually seen as a bit unPythonic, it is appropriate for cases where name collisions may be less obvious to subclass authors, such as the internal names of a metaclass colliding with the internal names of a regular class instantiated from it. A sketch combining this point and the previous one follows this list.
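Here is one possible sketch of the second and third points together; every name in it is illustrative rather than taken from the question:

class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['_Meta__x'] = 0  # storage under the mangled private name
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__x  # mangles to cls._Meta__x

    def bump(cls):
        cls.__x = cls.__x + 1  # encapsulated modification; no property blocks the mangled name

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        __class__.bump()

Instances of Class get ids 0, 1, 2, ..., while Class.x remains read-only from the outside because the metaclass property has no setter.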

How to use default property descriptors and successfully assign from __init__()?

What's the correct idiom for this please?
I want to define an object containing properties which can (optionally) be initialized from a dict (the dict comes from JSON; it may be incomplete). Later on I may modify the properties via setters.
There are actually 13+ properties, and I want to be able to use default getters and setters, but that doesn't seem to work for this case:
But I don't want to have to write explicit descriptors for all of prop1... propn
Also, I'd like to move the default assignments out of __init__() and into the accessors... but then I'd need explicit descriptors.
What's the most elegant solution? (other than move all the setter calls out of __init__() and into a method/classmethod _make()?)
[DELETED COMMENT The code for badprop using a default descriptor was due to a comment by a previous SO user, who gave the impression it gives you a default setter. But it doesn't - the setter is undefined and it necessarily throws AttributeError.]
class DubiousPropertyExample(object):
    def __init__(self, dct=None):
        self.prop1 = 'some default'
        self.prop2 = 'other default'
        #self.badprop = 'This throws AttributeError: can\'t set attribute'
        if dct is None: dct = dict()  # or use defaultdict
        for prop, val in dct.items():
            self.__setattr__(prop, val)

    # How do I do default property descriptors? this is wrong
    #@property
    #def badprop(self): pass

    # Explicit descriptors for all properties - yukk
    @property
    def prop1(self): return self._prop1
    @prop1.setter
    def prop1(self, value): self._prop1 = value

    @property
    def prop2(self): return self._prop2
    @prop2.setter
    def prop2(self, value): self._prop2 = value

dub = DubiousPropertyExample({'prop2': 'crashandburn'})
print dub.__dict__
# {'_prop2': 'crashandburn', '_prop1': 'some default'}
If you run this with the self.badprop = ... line uncommented, it fails:
self.badprop = 'This throws AttributeError: can\'t set attribute'
AttributeError: can't set attribute
[As ever, I read the SO posts on descriptors, implicit descriptors, calling them from init]
I think you're slightly misunderstanding how properties work. There is no "default setter". It throws an AttributeError on setting badprop not because it doesn't yet know that badprop is a property rather than a normal attribute (if that were the case it would just set the attribute with no error, because that's how normal attributes behave), but because you haven't provided a setter for badprop, only a getter.
Have a look at this:
>>> class Foo(object):
        @property
        def foo(self):
            return self._foo
        def __init__(self):
            self._foo = 1

>>> f = Foo()
>>> f.foo = 2
Traceback (most recent call last):
  File "<pyshell#12>", line 1, in <module>
    f.foo = 2
AttributeError: can't set attribute
You can't set such an attribute even from outside of __init__, after the instance is constructed. If you just use @property, then what you have is a read-only property (effectively a method call that looks like an attribute read).
If all you're doing in your getters and setters is redirecting read/write access to an attribute of the same name but with an underscore prepended, then by far the simplest thing to do is get rid of the properties altogether and just use normal attributes. Python isn't Java (and even in Java I'm not convinced of the virtue of private fields with the obvious public getter/setter anyway). An attribute that is directly accessible to the outside world is a perfectly reasonable part of your "public" interface. If you later discover that you need to run some code whenever an attribute is read/written you can make it a property then without changing your interface (this is actually what descriptors were originally intended for, not so that we could start writing Java style getters/setters for every single attribute).
If you're actually doing something in the properties other than changing the name of the attribute, and you do want your attributes to be readonly, then your best bet is probably to treat the initialisation in __init__ as directly setting the underlying data attributes with the underscore prepended. Then your class can be straightforwardly initialised without AttributeErrors, and thereafter the properties will do their thing as the attributes are read.
If you're actually doing something in the properties other than changing the name of the attribute, and you want your attributes to be readable and writable, then you'll need to actually specify what happens when you get/set them. If each attribute has independent custom behaviour, then there is no more efficient way to do this than explicitly providing a getter and a setter for each attribute.
If you're running exactly the same (or very similar) code in every single getter/setter (and it's not just adding an underscore to the real attribute name), and that's why you object to writing them all out (rightly so!), then you may be better served by implementing some of __getattr__, __getattribute__, and __setattr__. These allow you to redirect attribute reading/writing to the same code each time (with the name of the attribute as a parameter), rather than to two functions for each attribute (getting/setting).
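A minimal sketch of that shared redirection, with illustrative names:

class Record(object):
    _fields = ('prop1', 'prop2')

    def __getattr__(self, name):
        # Called only when normal lookup fails: one read path for every field.
        if name in self._fields:
            return self.__dict__.get('_' + name)
        raise AttributeError(name)

    def __setattr__(self, name, value):
        if name in self._fields:
            # Shared validation/logging for all fields would go here.
            self.__dict__['_' + name] = value
        else:
            object.__setattr__(self, name, value)

However many fields there are, reads and writes each funnel through a single code path.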
It seems like the easiest way to go about this is to just implement __getattr__ and __setattr__ such that they will access any key in your parsed JSON dict, which you should set as an instance member. Alternatively, you could call update() on self.__dict__ with your parsed JSON, but that's not really the best way to go about things, as it means your input dict could potentially trample members of your instance.
As to your setters and getters, you should only be creating them if they actually do something special other than directly set or retrieve the value in question. Python isn't Java (or C++ or anything else), you shouldn't try to mimic the private/set/get paradigm that is common in those languages.
I simply keep the dict on the instance and get/set my properties through it.
class test(object):
    def __init__(self, **kwargs):
        self.kwargs = kwargs
        #self.value = 20  # assigning from __init__ is possible

    @property
    def value(self):
        if self.kwargs.get('value') is None:
            self.kwargs.update(value=0)  # default
        return self.kwargs.get('value')

    @value.setter
    def value(self, v):
        print(v)  # do something with v
        self.kwargs.update(value=v)

x = test()
print(x.value)
x.value = 10
x.value = 5
Output
0
10
5

How to dynamically change base class of instances at runtime?

This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of base classes. Ok, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question, is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a readonly attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person
    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()
    # works! But:
    oldperson.hello()
    # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()  # -> Creating instance of A
B()  # -> Creating instance of B
C.__bases__ = (A,)
C()  # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I can't vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute of new-style classes.
You can define an intermediate class:
class Object(object): pass
and derive your classes from it instead of deriving them directly from the built-in object.
That's it: now your new-style classes can have their __bases__ modified without any problem.
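A sketch of that trick applied to the question's example (Python 2; this rests on the answer's claim rather than anything in the docs):

class Object(object): pass

class Friendly(Object):
    def hello(self):
        print 'Hello'

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)  # no deallocator error: neither base is object itself
p.hello()                       # prints "Hello"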
In my tests this actually worked very well as all existing (before changing the inheritance) instances of it and its derived classes felt the effect of the change including their mro getting updated.
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
        namespace=globals(),
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
        namespace=foo.__dict__,
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
                foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
                self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.

How to fake type with Python

I recently developed a class named DocumentWrapper around some ORM document object in Python to transparently add some features to it without changing its interface in any way.
I just have one issue with this. Let's say I have some User object wrapped in it. Calling isinstance(some_var, User) will return False because some_var indeed is an instance of DocumentWrapper.
Is there any way to fake the type of an object in Python to have the same call return True?
You can use the __instancecheck__ magic method to override the default isinstance behaviour:
@classmethod
def __instancecheck__(cls, instance):
    return isinstance(instance, User)
This is only if you want your object to be a transparent wrapper; that is, if you want a DocumentWrapper to behave like a User. Otherwise, just expose the wrapped class as an attribute.
This hook came in with abstract base classes (PEP 3119), so it is available from Python 2.6 onward. Note that isinstance() looks __instancecheck__ up on the metaclass of the class being checked against, so for it to take effect you define it on a metaclass (or rely on abc.ABCMeta), as the metaclass-based answer below does.
Override __class__ in your wrapper class DocumentWrapper:
class DocumentWrapper(object):
    @property
    def __class__(self):
        return User

>>> isinstance(DocumentWrapper(), User)
True
This way no modifications to the wrapped class User are needed.
Python Mock does the same (see mock.py:612 in mock-2.0.0, couldn't find sources online to link to, sorry).
Testing the type of an object is usually an antipattern in Python. In some cases it makes sense to test the "duck type" of the object, something like:
hasattr(some_var, "username")
But even that's undesirable; for instance, there are reasons why that expression might return False even though a wrapper uses some magic with __getattribute__ to correctly proxy the attribute.
It's usually preferred to allow variables only take a single abstract type, and possibly None. Different behaviours based on different inputs should be achieved by passing the optionally typed data in different variables. You want to do something like this:
def dosomething(some_user=None, some_otherthing=None):
    if some_user is not None:
        pass  # do the "User" type action
    elif some_otherthing is not None:
        pass  # etc...
    else:
        raise ValueError("not enough arguments")
Of course, this all assumes you have some level of control of the code that is doing the type checking. Suppose you don't. For isinstance() to return True, the class must appear in the instance's bases, or the class must have an __instancecheck__. Since you don't control either of those things for the class, you have to resort to some shenanigans on the instance. Do something like this:
def wrap_user(instance):
    class wrapped_user(type(instance)):
        __metaclass__ = type
        def __init__(self):
            pass
        def __getattribute__(self, attr):
            self_dict = object.__getattribute__(type(self), '__dict__')
            if attr in self_dict:
                return self_dict[attr]
            return getattr(instance, attr)
        def extra_feature(self, foo):
            return instance.username + foo  # or whatever
    return wrapped_user()
What we're doing is creating a new class dynamically at the time we need to wrap the instance, and actually inherit from the wrapped object's __class__. We also go to the extra trouble of overriding the __metaclass__, in case the original had some extra behaviors we don't actually want to encounter (like looking for a database table with a certain class name). A nice convenience of this style is that we never have to create any instance attributes on the wrapper class, there is no self.wrapped_object, since that value is present at class creation time.
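Hypothetical usage, with User and username standing in for the question's ORM names:

u = User()                  # the real ORM object being wrapped
w = wrap_user(u)
print(isinstance(w, User))  # True: wrapped_user subclasses type(u)
print(w.username)           # not defined on the wrapper class, so proxied to u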
Edit: As pointed out in comments, the above only works for some simple types, if you need to proxy more elaborate attributes on the target object, (say, methods), then see the following answer: Python - Faking Type Continued
Here is a solution by using metaclass, but you need to modify the wrapped classes:
>>> class DocumentWrapper:
        def __init__(self, wrapped_obj):
            self.wrapped_obj = wrapped_obj

>>> class MetaWrapper(abc.ABCMeta):
        def __instancecheck__(self, instance):
            try:
                return isinstance(instance.wrapped_obj, self)
            except:
                return isinstance(instance, self)

>>> class User(metaclass=MetaWrapper):
        pass

>>> user = DocumentWrapper(User())
>>> isinstance(user, User)
True
>>> class User2:
        pass

>>> user2 = DocumentWrapper(User2())
>>> isinstance(user2, User2)
False
It sounds like you want to test the type of the object your DocumentWrapper wraps, not the type of the DocumentWrapper itself. If that's right, then the interface to DocumentWrapper needs to expose that type. You might add a method to your DocumentWrapper class that returns the type of the wrapped object, for instance. But I don't think that making the call to isinstance ambiguous, by making it return True when it's not, is the right way to solve this.
The best way is to inherit DocumentWrapper from User itself, or use the mix-in pattern and do multiple inheritance from many classes:
class DocumentWrapper(User, object)
You can also fake isinstance() results by manipulating obj.__class__ but this is deep level magic and should not be done.

Dynamically adding #property in python

I know that I can dynamically add an instance method to an object by doing something like:
import types

def my_method(self):
    # logic of method
    # ...
    pass

# instance is some instance of some class
instance.my_method = types.MethodType(my_method, instance)
Later on I can call instance.my_method() and self will be bound correctly and everything works.
Now, my question: how to do the exact same thing to obtain the behavior that decorating the new method with @property would give?
I would guess something like:
instance.my_method = types.MethodType(my_method, instance)
instance.my_method = property(instance.my_method)
But, doing that instance.my_method returns a property object.
The property descriptor objects needs to live in the class, not in the instance, to have the effect you desire. If you don't want to alter the existing class in order to avoid altering the behavior of other instances, you'll need to make a "per-instance class", e.g.:
def addprop(inst, name, method):
    cls = type(inst)
    if not hasattr(cls, '__perinstance'):
        cls = type(cls.__name__, (cls,), {})
        cls.__perinstance = True
        inst.__class__ = cls
    setattr(cls, name, property(method))
I'm marking these special "per-instance" classes with an attribute to avoid needlessly making multiple ones if you're doing several addprop calls on the same instance.
Note that, like for other uses of property, you need the class in play to be new-style (typically obtained by inheriting directly or indirectly from object), not the ancient legacy style (dropped in Python 3) that's assigned by default to a class without bases.
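Hypothetical usage of addprop, reusing the Monster theme from the question:

class Monster(object):
    pass

m1 = Monster()
m2 = Monster()
addprop(m1, 'color', lambda self: 'blue')
print(m1.color)              # blue
print(hasattr(m2, 'color'))  # False: only m1 got the per-instance class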
Since this question isn't asking only about adding to a specific instance, the following method can be used to add a property to the class; this will expose the property to all instances of the class. YMMV.
cls = type(my_instance)
cls.my_prop = property(lambda self: "hello world")
print(my_instance.my_prop)
# >>> hello world
Note: Adding another answer because I think @Alex Martelli, while correct, is achieving the desired result by creating a new class that holds the property. This answer is intended to be more direct/straightforward, without abstracting what's going on into its own method.
