Dynamically adding @property in Python

I know that I can dynamically add an instance method to an object by doing something like:
import types

def my_method(self):
    # logic of method
    # ...

# instance is some instance of some class
instance.my_method = types.MethodType(my_method, instance)
Later on I can call instance.my_method() and self will be bound correctly and everything works.
Now, my question: how to do the exact same thing to obtain the behavior that decorating the new method with @property would give?
I would guess something like:
instance.my_method = types.MethodType(my_method, instance)
instance.my_method = property(instance.my_method)
But, having done that, instance.my_method returns a property object.

The property descriptor object needs to live in the class, not in the instance, to have the effect you desire. If you don't want to alter the existing class in order to avoid altering the behavior of other instances, you'll need to make a "per-instance class", e.g.:
def addprop(inst, name, method):
    cls = type(inst)
    if not hasattr(cls, '__perinstance'):
        cls = type(cls.__name__, (cls,), {})
        cls.__perinstance = True
        inst.__class__ = cls
    setattr(cls, name, property(method))
I'm marking these special "per-instance" classes with an attribute to avoid needlessly making multiple ones if you're doing several addprop calls on the same instance.
Note that, like for other uses of property, you need the class in play to be new-style (typically obtained by inheriting directly or indirectly from object), not the ancient legacy style (dropped in Python 3) that's assigned by default to a class without bases.
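For instance, a quick usage sketch (Foo and the lambda body are placeholder names for illustration):

class Foo(object):
    pass

f = Foo()
g = Foo()
addprop(f, 'answer', lambda self: 42)
print(f.answer)              # 42 -- computed through the property
print(hasattr(g, 'answer'))  # False: only f's per-instance class was altered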

Since this question isn't only asking about adding to a specific instance, the following approach can be used to add a property to the class. This will expose the property to all instances of the class, YMMV.
cls = type(my_instance)
cls.my_prop = property(lambda self: "hello world")
print(my_instance.my_prop)
# >>> hello world
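To see that side effect concretely, here is a small sketch (Thing and greeting are made-up names): the property lands on the class, so every instance, old or new, picks it up:

class Thing(object):
    pass

a = Thing()
b = Thing()
type(a).greeting = property(lambda self: "hi from " + type(self).__name__)
print(a.greeting)  # hi from Thing
print(b.greeting)  # b sees it too -- the property lives on the class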
Note: Adding another answer because I think @Alex Martelli, while correct, is achieving the desired result by creating a new class that holds the property. This answer is intended to be more direct/straightforward, without abstracting what's going on into its own method.

Related

Modifying class __dict__ when shadowed by a property

I am attempting to modify a value in a class __dict__ directly using something like X.__dict__['x'] += 1. It is impossible to do the modification like that because a class __dict__ is actually a mappingproxy object that does not allow direct modification of values. The reason for attempting direct modification or equivalent is that I am trying to hide the class attribute behind a property defined on the metaclass with the same name. Here is an example:
class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['x'] = 0
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['x']

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        __class__.__dict__['x'] += 1
This example shows a scheme for creating an auto-incremented ID for each instance of Class. The line __class__.__dict__['x'] += 1 cannot be replaced by setattr(__class__, 'x', __class__.x + 1) because x is a property with no setter in Meta. That would just change a TypeError from mappingproxy into an AttributeError from property.
I have tried messing with __prepare__, but that has no effect. The implementation in type already returns a mutable dict for the namespace. The immutable mappingproxy seems to get set in type.__new__, which I don't know how to avoid.
I have also attempted to rebind the entire __dict__ reference to a mutable version, but that failed as well: https://ideone.com/w3HqNf, implying that perhaps the mappingproxy is not created in type.__new__.
How can I modify a class dict value directly, even when shadowed by a metaclass property? While it may be effectively impossible, setattr is able to do it somehow, so I would expect that there is a solution.
My main requirement is to have a class attribute that appears to be read only and does not use additional names anywhere. I am not absolutely hung up on the idea of using a metaclass property with an eponymous class dict entry, but that is usually how I hide read only values in regular instances.
EDIT
I finally figured out where the class __dict__ becomes immutable. It is described in the last paragraph of the "Creating the Class Object" section of the Data Model reference:
When a new class is created by type.__new__, the object provided as the namespace parameter is copied to a new ordered mapping and the original object is discarded. The new copy is wrapped in a read-only proxy, which becomes the __dict__ attribute of the class object.
Probably the best way: just pick another name. Call the property x and the dict key '_x', so you can access it the normal way.
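A minimal sketch of that rename approach, keeping the auto-increment goal from the question (the '_x' key is the only assumption):

class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['_x'] = 0
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['_x']

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        # setattr works here: '_x' is a plain class attribute,
        # not shadowed by the metaclass property named 'x'
        setattr(__class__, '_x', __class__.x + 1)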
Alternative way: add another layer of indirection:
class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['x'] = [0]
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['x'][0]

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        __class__.__dict__['x'][0] += 1
That way you don't have to modify the actual entry in the class dict.
Super-hacky way that might outright segfault your Python: access the underlying dict through the gc module.
import gc

class Meta(type):
    def __new__(cls, name, bases, attrs, **kwargs):
        attrs['x'] = 0
        return super().__new__(cls, name, bases, attrs)

    @property
    def x(cls):
        return cls.__dict__['x']

class Class(metaclass=Meta):
    def __init__(self):
        self.id = __class__.x
        gc.get_referents(__class__.__dict__)[0]['x'] += 1
This bypasses critical work type.__setattr__ does to maintain internal invariants, particularly in things like CPython's type attribute cache. It is a terrible idea, and I'm only mentioning it so I can put this warning here, because if someone else comes up with it, they might not know that messing with the underlying dict is legitimately dangerous.
It is very easy to end up with dangling references doing this, and I have segfaulted Python quite a few times experimenting with this. Here's one simple case that crashed on Ideone:
import gc

class Foo(object):
    x = []

Foo().x
gc.get_referents(Foo.__dict__)[0]['x'] = []
print(Foo().x)
Output:
*** Error in `python3': double free or corruption (fasttop): 0x000055d69f59b110 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x70bcb)[0x2b32d5977bcb]
/lib/x86_64-linux-gnu/libc.so.6(+0x76f96)[0x2b32d597df96]
/lib/x86_64-linux-gnu/libc.so.6(+0x7778e)[0x2b32d597e78e]
python3(+0x2011f5)[0x55d69f02d1f5]
python3(+0x6be7a)[0x55d69ee97e7a]
python3(PyCFunction_Call+0xd1)[0x55d69efec761]
python3(PyObject_Call+0x47)[0x55d69f035647]
... [it continues like that for a while]
And here's a case with wrong results and no noisy error message to alert you to the fact that something has gone wrong:
import gc

class Foo(object):
    x = 'foo'

print(Foo().x)
gc.get_referents(Foo.__dict__)[0]['x'] = 'bar'
print(Foo().x)
Output:
foo
foo
I make absolutely no guarantees as to any safe way to use this, and even if things happen to work out on one Python version, they may not work on future versions. It can be fun to fiddle with, but it's not something to actually use. Seriously, don't do it. Do you want to explain to your boss that your website went down or your published data analysis will need to be retracted because you took this bad idea and used it?
This probably counts as an "additional name" you don't want, but I've implemented this using a dictionary in the metaclass where the keys are the classes. The __next__ method on the metaclass makes the class itself iterable, such that you can just do next() to get the next ID. The dunder method also keeps the method from being available through the instances. The dictionary storing the next id has a name starting with a double underscore, so it's not easily discoverable from any of the classes that use it. The incrementing ID functionality is thus entirely contained in the metaclass.
I tucked the assignment of the id into a __new__ method on a base class, so you don't have to worry about it in __init__. This also allows you to del Meta so all the machinery is a little harder to get to.
class Meta(type):
    __ids = {}

    @property
    def id(cls):
        return __class__.__ids.setdefault(cls, 0)

    def __next__(cls):
        id = __class__.__ids.setdefault(cls, 0)
        __class__.__ids[cls] += 1
        return id

class Base(metaclass=Meta):
    def __new__(cls, *args, **kwargs):
        self = object.__new__(cls)
        self.id = next(cls)
        return self

del Meta

class Class(Base):
    pass

class Brass(Base):
    pass

c0 = Class()
c1 = Class()
b0 = Brass()
b1 = Brass()

assert (b0.id, b1.id, c0.id, c1.id) == (0, 1, 0, 1)
assert (Class.id, Brass.id) == (2, 2)
assert not hasattr(Class, "__ids")
assert not hasattr(Brass, "__ids")
Note that I've used the same name for the attribute on both the class and the object. That way Class.id is the number of instances you've created, while c1.id is the ID of that specific instance.
My main requirement is to have a class attribute that appears to be read only and does not use additional names anywhere. I am not absolutely hung up on the idea of using a metaclass property with an eponymous class dict entry, but that is usually how I hide read only values in regular instances.
What you are asking for is a contradiction: If your example worked, then __class__.__dict__['x'] would be an "additional name" for the attribute. So clearly we need a more specific definition of "additional name." But to come up with that definition, we need to know what you are trying to accomplish (NB: The following goals are not mutually exclusive, so you may want to do all of these things):
You want to make the value completely untouchable, except within the Class.__init__() method (and the same method of any subclasses): This is unPythonic and quite impossible. If __init__() can modify the value, then so can anyone else. You might be able to accomplish something like this if the modifying code lives in Class.__new__(), which the metaclass dynamically creates in Meta.__new__(), but that's extremely ugly and hard to understand.
You want the code that manipulates the value to be "nicely encapsulated": Write a method in the metaclass that increments the private value (or does whatever other modification you need), and provide a read-only metaclass property that accesses it under the public name; see the sketch after this list.
You are concerned about a subclass accidentally clashing names with the private name: Prefix the private name with a double underscore to invoke automatic name mangling. While this is usually seen as a bit unPythonic, it is appropriate for cases where name collisions may be less obvious to subclass authors, such as the internal names of a metaclass colliding with the internal names of a regular class instantiated from it.
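A hedged sketch combining the last two points (the _increment name is made up; the double-underscore name is mangled to _Meta__x, which keeps it out of subclasses' way):

class Meta(type):
    def __new__(mcls, name, bases, attrs, **kwargs):
        cls = super().__new__(mcls, name, bases, attrs)
        cls.__x = 0            # stored as _Meta__x on the new class
        return cls

    @property
    def x(cls):                # read-only public name
        return cls.__x

    def _increment(cls):       # the encapsulated mutator
        cls.__x += 1

class Class(metaclass=Meta):
    def __init__(self):
        self.id = type(self).x
        type(self)._increment()

a, b = Class(), Class()
assert (a.id, b.id, Class.x) == (0, 1, 2)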

Accessing the parameters of a constructor from a metaclass

TL;DR -
I have a class that uses a metaclass.
I would like to access the parameters of the object's constructor from the metaclass, just before the initialization process, but I couldn't find a way to access those parameters.
How can I access the constructor's parameters from the metaclass function __new__?
In order to practice the use of metaclasses in Python, I would like to create a class that would act as the supercomputer "Deep Thought" from the book "The Hitchhiker's Guide to the Galaxy".
The purpose of my class would be to store the various queries the supercomputer gets from users.
At the bottom line, it would just get some arguments and store them.
If one of the given arguments is the number 42 or the string "The answer to life, the universe, and everything", I don't want to create a new object but rather return a pointer to an existing object.
The idea behind this is that those objects would be the exact same so when using the is operator to compare those two, the result would be true.
In order to be able to use the is operator and get True as an answer, I would need to make sure those variables point to the same object. So, in order to return a pointer to an existing object, I need to intervene in the middle of the initialization process of the object. I cannot check the given arguments at the constructor itself and modify the object's inner-variables accordingly because it would be too late: If I check the given parameters only as part of the __init__ function, those two objects would be allocated on different portions of the memory (they might be equal but won't return True when using the is operator).
I thought of doing something like that:
class SuperComputer(type):
    answer = 42

    def __new__(meta, name, bases, attributes):
        # Check if args contains the number 42
        # or the string "The answer to life, the universe, and everything".
        # If so, just return a pointer to an existing object:
        return SuperComputer.answer
        # Else, just create the object as it is:
        return super(SuperComputer, meta).__new__(meta, name, bases, attributes)

class Query(object):
    __metaclass__ = SuperComputer

    def __init__(self, *args, **kwargs):
        self.args = args
        for key, value in kwargs.items():
            setattr(self, key, value)

def main():
    number = Query(42)
    string = Query("The answer to life, the universe, and everything")
    other = Query("Sunny", "Sunday", 123)
    num2 = Query(45)

    print number is string  # Should print True
    print other is string   # Should print False
    print number is num2    # Should print False

if __name__ == '__main__':
    main()
But I'm stuck on getting the parameters from the constructor.
I saw that the __new__ method gets only four arguments:
The metaclass instance itself, the name of the class, its bases, and its attributes.
How can I send the parameters from the constructor to the metaclass?
What can I do in order to achieve my goal?
You don't need a metaclass for that.
The fact is, __init__ is not the "constructor" of an object in Python; rather, it is commonly called the "initializer". __new__ is closer to the role of a "constructor" in other languages, and it is not available only on metaclasses - all classes have a __new__ method. If it is not explicitly implemented, object.__new__ is called directly.
And actually, it is object.__new__ which creates a new object in Python. From pure Python code, there is no other possible way to create an object: it will always go through there. That means that if you implement the __new__ method on your own class, you have the option of not creating a new instance, and instead return another pre-existing instance of the same class (or any other object).
You only have to keep in mind that: if __new__ returns an instance of the same class, then the default behavior is that __init__ is called on the same instance. Otherwise, __init__ is not called.
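A tiny sketch of that rule (Demo is a made-up name):

class Demo:
    def __new__(cls, flag):
        if flag:
            return super().__new__(cls)  # instance of Demo: __init__ will run
        return "something else"          # not a Demo: __init__ is skipped

    def __init__(self, flag):
        print("__init__ called")

d = Demo(True)    # prints "__init__ called"
s = Demo(False)   # prints nothing; s is the plain string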
It is also worth noting that in recent years a recipe for creating "singletons" in Python using metaclasses became popular - it is actually an overkill approach, as overriding __new__ is also a preferable way to create singletons.
In your case, you just need to have a dictionary with the parameters you want to track as your keys, and check if you create a new instance or "recycle" one whenever __new__ runs. The dictionary may be a class attribute, or a global variable at module level - that is your pick:
class Recycler:
    _instances = {}

    def __new__(cls, parameter1, *args, **kwargs):
        if parameter1 in cls._instances:
            return cls._instances[parameter1]
        self = super().__new__(cls)  # don't pass the remaining parameters to object.__new__
        cls._instances[parameter1] = self
        return self
If you'd have any code in __init__ besides that, move it to __new__ as well.
You can have a baseclass with this behavior and have a class hierarchy without needing to re-implement __new__ for every class.
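Applied to the question's Query class, a hedged sketch might look like this (the _SPECIAL set, the "answer" cache key, and the argument scan are assumptions about what should count as "the same" query):

class Query:
    _instances = {}
    _SPECIAL = {42, "The answer to life, the universe, and everything"}

    def __new__(cls, *args, **kwargs):
        if any(a in cls._SPECIAL for a in args):
            key = "answer"
            if key not in cls._instances:
                cls._instances[key] = super().__new__(cls)
            return cls._instances[key]
        return super().__new__(cls)

    def __init__(self, *args, **kwargs):
        # note: per the rule above, __init__ still runs on a recycled instance
        self.args = args
        for key, value in kwargs.items():
            setattr(self, key, value)

number = Query(42)
string = Query("The answer to life, the universe, and everything")
assert number is string
assert Query("Sunny", "Sunday", 123) is not string
assert Query(45) is not number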
As for a metaclass: apart from its __call__ method (which is what ends up invoking __new__ and __init__), its methods are not involved when instances of the classes it creates are made. A metaclass would only be of use here to automatically insert this behavior, by decorating or creating a fresh __new__ method on the classes created with it. Since this behavior is easier to track, maintain, and combine with other classes when done with ordinary inheritance, there is no need for a metaclass at all.

Why does "self" outside a function's parameters give a "not defined" error?

Look at this code:
class MyClass():
    # Why does this give me "NameError: name 'self' is not defined":
    mySelf = self

    # But this does not?
    def myFunction(self):
        mySelf2 = self
Basically I want a way for a class to refer to itself without needing to name itself specifically, hence I want self to work for the class, not just methods/functions. How can I achieve this?
EDIT: The point of this is that I'm trying to refer to the class name from inside the class itself with something like self.__class__.__name__ so that the class name isn't hardcoded anywhere in the class's code, and thus it's easier to re-use the code.
EDIT 2: From what I've learned from the answers below, what I'm trying to do is impossible. I'll have to find a different way. Mission abandoned.
EDIT 3: Here is specifically what I'm trying to do:
class simpleObject(object):
    def __init__(self, request):
        self.request = request

@view_defaults(renderer='string')
class Test(simpleObject):
    # this line throws an error because of self
    myClassName = self.__class__.__name__

    @view_config(route_name=myClassName)
    def activateTheView(self):
        db = self.request.db
        foo = 'bar'
        return foo
Note that self is not defined at the time when you want the class to refer to itself for the assignment to work. This is because (in addition to being named arbitrarily), self refers to instances and not classes. At the time that the suspect line of code attempts to run, there is as of yet no class for it to refer to. Not that it would refer to the class if there was.
In a method, you can always use type(self). That will get the subclass of MyClass that created the current instance. If you want to hard-code to MyClass, that name will be available in the global scope of the methods. This will allow you to do everything that your example would allow if it actually worked. E.g., you can just do MyClass.some_attribute inside your methods.
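A small sketch of both options (the names are illustrative):

class MyClass(object):
    some_attribute = 'spam'

    def show(self):
        # type(self) is the instance's actual class, possibly a subclass
        print('{0} {1}'.format(type(self).__name__, MyClass.some_attribute))

class Sub(MyClass):
    pass

Sub().show()  # prints "Sub spam"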
You probably want to modify the class attributes after class creation. This can be done with decorators or on an ad-hoc basis. Metaclasses may be a better fit. Without knowing what you actually want to do though, it's impossible to say.
UPDATE:
Here's some code to do what you want. It uses a metaclass AutoViewConfigMeta and a new decorator to mark the methods that you want view_config applied to. I spoofed the view_config decorator. It prints out the class name when it's called though to prove that it has access to it. The metaclass __new__ just loops through the class dictionary and looks for methods that were marked by the auto_view_config decorator. It cleans off the mark and applies the view_config decorator with the appropriate class name.
Here's the code.
# This just spoofs the view_config decorator.
def view_config(route=''):
    def dec(f):
        def wrapper(*args, **kwargs):
            print "route={0}".format(route)
            return f(*args, **kwargs)
        return wrapper
    return dec

# Apply this decorator to methods for which you want to call view_config with
# the class name. It will tag them. The metaclass will apply view_config once it
# has the class name.
def auto_view_config(f):
    f.auto_view_config = True
    return f

class AutoViewConfigMeta(type):
    def __new__(mcls, name, bases, dict_):
        # This is called during class creation. dict_ is the namespace of the class and
        # name is its name. So the idea is to pull out the methods that need
        # view_config applied to them and manually apply it with the class name.
        # We'll recognize them because they will have the auto_view_config attribute
        # set on them by the `auto_view_config` decorator. Then use type to create
        # the class and return it.
        for item in dict_:
            if hasattr(dict_[item], 'auto_view_config'):
                method = dict_[item]
                del method.auto_view_config  # Clean up after ourselves.
                # The next line is the manual form of applying a decorator.
                dict_[item] = view_config(route=name)(method)
        # Call out to type to actually create the class with the modified dict.
        return type.__new__(mcls, name, bases, dict_)

class simpleObject(object):
    __metaclass__ = AutoViewConfigMeta

class Test(simpleObject):
    @auto_view_config
    def activateTheView(self):
        foo = 'bar'
        print foo

if __name__ == '__main__':
    t = Test()
    t.activateTheView()
Let me know if you have any questions.
Python has an "explicit is better than implicit" design philosophy.
Many languages have an implicit pointer or variable in the scope of a method (e.g. this in C++) that refers to the object through which the method was invoked. Python does not have this. Here, all bound methods have an extra first argument that is the object through which the method was invoked. You can call it anything you want (self is not a keyword like this in C++); the name self is convention rather than a syntactic rule.
Your method myFunction defines the variable self as a parameter, so it works. There's no such variable at the class level, so it's erroring out.
So much for the explanation. I'm not aware of a straightforward way to do what you want, and I've never seen such a requirement in Python. Can you detail why you want to do such a thing? Perhaps there's an assumption that you're making which can be handled in another way using Python.
self is just a name; your self in this case is a class variable, not the implicit reference to the object a method was called on.
At class level, self is treated as a normal variable, and it is not defined there, whereas the self in the function comes from the object used for calling.
You want to treat the object reference in self as a class variable, which is not possible.
self isn't a keyword, it's just a convention. The methods are attributes of the class object (not the instance), but they receive the instance as their first argument. You could rename the argument to xyzzy if you wanted and it would still work the same way.
But (as should be obvious) you can't refer to a method argument outside the body of the method. Inside a class block but outside of any method, self is undefined. And the concept wouldn't even make sense -- at the time the class block is being evaluated, no instance of the class can possibly exist yet.
Because the name self is explicitly defined as part of the arguments to myFunction. The first argument to a method is the instance that the method was called on; in the class body, there isn't an "instance we're dealing with", because the class body deals with every possible instance of the class (including ones that don't necessarily exist yet) - so, there isn't a particular object that could be called self.
If you want to refer to the class itself, rather than some instance of it, this is spelled self.__class__ (or, for new-style classes in Py2 and all classes in Py3, type(self)) anywhere self exists. If you want to be able to deal with this in situations where self doesn't exist, then you may want to look at class methods which aren't associated with any particular instance, and so take the class itself in place of self. If you really need to do this in the class body (and, you probably don't), you'll just have to call it by name.
You can't refer to the class itself within the class body because the class doesn't exist at the time that the class body is executed. (If the previous sentence is confusing, reading up about metaclasses will either clear this up or make you more confused.)
Within an instance method, you can refer to the class of the instance with self.__class__, but be careful here. This will be the instance's actual class, which through the power of inheritance might not be the class in which the method was defined.
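For example (a minimal sketch):

class Base(object):
    def whoami(self):
        return self.__class__.__name__

class Child(Base):
    pass

print(Base().whoami())   # "Base"
print(Child().whoami())  # "Child" -- not the class that defined the method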
Within a class method, the class is passed in as the first argument, much like instances are the first argument to instance methods:
class MyClass(object):
    @classmethod
    def foo(cls):
        print cls.__name__

MyClass.foo()  # Should print "MyClass"
As with instance methods, the actual class might differ due to inheritance.
class OtherClass(MyClass):
    pass

OtherClass.foo()  # Should print "OtherClass"
If you really need to refer to MyClass within a method of MyClass, you're pretty much going to have to refer to it as MyClass unless you use magic. This sort of magic is more trouble than it is worth.

How to dynamically change base class of instances at runtime?

This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of base classes. Ok, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level, but rather this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question, is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realize that this is obscure code, and that in real production code tricks like this tend to border on unreadable. This is purely a thought experiment, and for funzies, to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a read-only attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person

    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()
    # works! But:
    oldperson.hello()
    # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()                 # -> Creating instance of A
B()                 # -> Creating instance of B
C.__bases__ = (A,)
C()                 # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I can't vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done, then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute of new-style classes.
You can define a class Object as a trivial subclass of the built-in type object:

class Object(object): pass

and derive your classes from it instead of from object directly. That's it: now your new-style classes can have their __bases__ modified without any problem.
In my tests this actually worked very well: all existing instances (created before the inheritance change) of the class and its derived classes felt the effect of the change, including their MRO getting updated.
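A sketch of the trick, assuming both classes are routed through the intermediate Object base:

class Object(object): pass

class Friendly(Object):
    def hello(self):
        print('Hello')

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)  # no deallocator error this time
p.hello()                       # the pre-existing instance prints "Hello"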
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
        namespace=globals(),
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
        namespace=foo.__dict__,
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
                foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
                self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.

How to fake type with Python

I recently developed a class named DocumentWrapper around some ORM document object in Python to transparently add some features to it without changing its interface in any way.
I just have one issue with this. Let's say I have some User object wrapped in it. Calling isinstance(some_var, User) will return False because some_var indeed is an instance of DocumentWrapper.
Is there any way to fake the type of an object in Python to have the same call return True?
You can use the __instancecheck__ magic method to override the default isinstance behaviour:
@classmethod
def __instancecheck__(cls, instance):
    return isinstance(instance, User)
This is only if you want your object to be a transparent wrapper; that is, if you want a DocumentWrapper to behave like a User. Otherwise, just expose the wrapped class as an attribute.
Note that isinstance() looks __instancecheck__ up on the type of the class being checked against - that is, on its metaclass - so it needs to be defined there to take effect. The hook came with abstract base classes (PEP 3119) and is honored from Python 2.6 onward.
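A minimal sketch of where the hook has to live (UserMeta and wrapped_obj are assumed names; type.__instancecheck__ is used for the real check to avoid recursing into the hook):

class UserMeta(type):
    def __instancecheck__(cls, instance):
        wrapped = getattr(instance, 'wrapped_obj', None)
        return (type.__instancecheck__(cls, instance)
                or type.__instancecheck__(cls, wrapped))

class User(metaclass=UserMeta):
    pass

class DocumentWrapper(object):
    def __init__(self, wrapped_obj):
        self.wrapped_obj = wrapped_obj

print(isinstance(DocumentWrapper(User()), User))  # True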
Override __class__ in your wrapper class DocumentWrapper:
class DocumentWrapper(object):
    @property
    def __class__(self):
        return User

>>> isinstance(DocumentWrapper(), User)
True
This way no modifications to the wrapped class User are needed.
Python Mock does the same (see mock.py:612 in mock-2.0.0, couldn't find sources online to link to, sorry).
Testing the type of an object is usually an antipattern in Python. In some cases it makes sense to test the "duck type" of the object, something like:
hasattr(some_var, "username")
But even that's undesirable; for instance, there are reasons why that expression might return False even though a wrapper uses some magic with __getattribute__ to correctly proxy the attribute.
It's usually preferred to allow variables to take only a single abstract type, and possibly None. Different behaviours based on different inputs should be achieved by passing the optionally typed data in different variables. You want to do something like this:
def dosomething(some_user=None, some_otherthing=None):
    if some_user is not None:
        pass  # do the "User" type action
    elif some_otherthing is not None:
        pass  # etc...
    else:
        raise ValueError("not enough arguments")
Of course, this all assumes you have some level of control of the code that is doing the type checking. Suppose it isn't. For isinstance() to return True, the class must appear in the bases of the instance's type, or the class must have an __instancecheck__. Since you don't control either of those things for the class, you have to resort to some shenanigans on the instance. Do something like this:
def wrap_user(instance):
    class wrapped_user(type(instance)):
        __metaclass__ = type

        def __init__(self):
            pass

        def __getattribute__(self, attr):
            self_dict = object.__getattribute__(type(self), '__dict__')
            if attr in self_dict:
                return self_dict[attr]
            return getattr(instance, attr)

        def extra_feature(self, foo):
            return instance.username + foo  # or whatever

    return wrapped_user()
What we're doing is creating a new class dynamically at the time we need to wrap the instance, and actually inheriting from the wrapped object's __class__. We also go to the extra trouble of overriding the __metaclass__, in case the original had some extra behaviors we don't actually want to encounter (like looking for a database table with a certain class name). A nice convenience of this style is that we never have to create any instance attributes on the wrapper class; there is no self.wrapped_object, since that value is present at class creation time.
Edit: As pointed out in comments, the above only works for some simple types, if you need to proxy more elaborate attributes on the target object, (say, methods), then see the following answer: Python - Faking Type Continued
Here is a solution using a metaclass, but you need to modify the wrapped classes:
>>> import abc
>>> class DocumentWrapper:
...     def __init__(self, wrapped_obj):
...         self.wrapped_obj = wrapped_obj
...
>>> class MetaWrapper(abc.ABCMeta):
...     def __instancecheck__(self, instance):
...         try:
...             return isinstance(instance.wrapped_obj, self)
...         except AttributeError:
...             return isinstance(instance, self)
...
>>> class User(metaclass=MetaWrapper):
...     pass
...
>>> user = DocumentWrapper(User())
>>> isinstance(user, User)
True
>>> class User2:
...     pass
...
>>> user2 = DocumentWrapper(User2())
>>> isinstance(user2, User2)
False
It sounds like you want to test the type of the object your DocumentWrapper wraps, not the type of the DocumentWrapper itself. If that's right, then the interface to DocumentWrapper needs to expose that type. You might add a method to your DocumentWrapper class that returns the type of the wrapped object, for instance. But I don't think that making the call to isinstance ambiguous, by making it return True when it's not, is the right way to solve this.
The best way is to inherit DocumentWrapper from User itself, or to use the mix-in pattern and do multiple inheritance from several classes:
class DocumentWrapper(User, object)
You can also fake isinstance() results by manipulating obj.__class__ but this is deep level magic and should not be done.
