How to fake type with Python

I recently developed a class named DocumentWrapper around some ORM document object in Python to transparently add some features to it without changing its interface in any way.
I just have one issue with this. Let's say I have some User object wrapped in a DocumentWrapper. Calling isinstance(some_var, User) will return False, because some_var is in fact an instance of DocumentWrapper.
Is there any way to fake the type of an object in Python to have the same call return True?

You can use the __instancecheck__ magic method to override the default isinstance behaviour:
@classmethod
def __instancecheck__(cls, instance):
    return isinstance(instance, User)
This is only if you want your object to be a transparent wrapper; that is, if you want a DocumentWrapper to behave like a User. Otherwise, just expose the wrapped class as an attribute.
This hook came in with abstract base classes (PEP 3119), so it is available in Python 2.6+ as well as Python 3; it doesn't exist in Python 2.5 and earlier.
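For reference, a minimal runnable sketch of that idea. Note that CPython looks __instancecheck__ up on the metaclass, so in practice that is where it has to be defined; the wrapped_obj attribute name is an assumption for illustration, not from the question's code:
class UserMeta(type):
    def __instancecheck__(cls, instance):
        # unwrap a DocumentWrapper before running the default check
        wrapped = getattr(instance, 'wrapped_obj', instance)
        return type.__instancecheck__(cls, wrapped)

class User(metaclass=UserMeta):
    pass

class DocumentWrapper:
    def __init__(self, wrapped_obj):
        self.wrapped_obj = wrapped_obj

print(isinstance(DocumentWrapper(User()), User))  # True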

Override __class__ in your wrapper class DocumentWrapper:
class DocumentWrapper(object):
    @property
    def __class__(self):
        return User
>>> isinstance(DocumentWrapper(), User)
True
This way no modifications to the wrapped class User are needed.
Python Mock does the same (see mock.py:612 in mock-2.0.0, couldn't find sources online to link to, sorry).

Testing the type of an object is usually an antipattern in Python. In some cases it makes sense to test the "duck type" of the object, with something like:
hasattr(some_var, "username")
But even that's undesirable; for instance, there are reasons why that expression might return False even though the wrapper uses some magic with __getattribute__ to correctly proxy the attribute.
It's usually preferred to let a variable take only a single abstract type, and possibly None. Different behaviours based on different inputs should be achieved by passing the optionally typed data in different variables. You want to do something like this:
def dosomething(some_user=None, some_otherthing=None):
    if some_user is not None:
        pass  # do the "User" type action
    elif some_otherthing is not None:
        pass  # etc...
    else:
        raise ValueError("not enough arguments")
Of course, this all assumes you have some level of control over the code that is doing the type checking. Suppose you don't. For isinstance() to return True, the class must appear in the instance's bases, or the class must have an __instancecheck__. Since you don't control either of those things for the class, you have to resort to some shenanigans on the instance. Do something like this:
def wrap_user(instance):
    class wrapped_user(type(instance)):
        __metaclass__ = type
        def __init__(self):
            pass
        def __getattribute__(self, attr):
            self_dict = object.__getattribute__(type(self), '__dict__')
            if attr in self_dict:
                return self_dict[attr]
            return getattr(instance, attr)
        def extra_feature(self, foo):
            return instance.username + foo  # or whatever
    return wrapped_user()
What we're doing is creating a new class dynamically at the time we need to wrap the instance, and actually inheriting from the wrapped object's __class__. We also go to the extra trouble of overriding the __metaclass__, in case the original had some extra behaviors we don't actually want to encounter (like looking for a database table with a certain class name). A nice convenience of this style is that we never have to create any instance attributes on the wrapper class; there is no self.wrapped_object, since that value is present at class creation time.
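For example, a quick usage sketch, assuming a hypothetical User class with a username attribute:
class User(object):
    def __init__(self, username):
        self.username = username

u = wrap_user(User("alice"))
print(isinstance(u, User))  # True -- the wrapper genuinely subclasses User
print(u.username)           # "alice", proxied through to the wrapped instance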
Edit: As pointed out in comments, the above only works for some simple types, if you need to proxy more elaborate attributes on the target object, (say, methods), then see the following answer: Python - Faking Type Continued

Here is a solution using a metaclass, but you need to modify the wrapped classes:
>>> import abc
>>> class DocumentWrapper:
        def __init__(self, wrapped_obj):
            self.wrapped_obj = wrapped_obj
>>> class MetaWrapper(abc.ABCMeta):
        def __instancecheck__(self, instance):
            # unwrap a DocumentWrapper if possible, then run the normal check
            try:
                instance = instance.wrapped_obj
            except AttributeError:
                pass
            return super().__instancecheck__(instance)
>>> class User(metaclass=MetaWrapper):
        pass
>>> user = DocumentWrapper(User())
>>> isinstance(user, User)
True
>>> class User2:
        pass
>>> user2 = DocumentWrapper(User2())
>>> isinstance(user2, User2)
False

It sounds like you want to test the type of the object your DocumentWrapper wraps, not the type of the DocumentWrapper itself. If that's right, then the interface of DocumentWrapper needs to expose that type. You might add a method to your DocumentWrapper class that returns the type of the wrapped object, for instance. But I don't think that making the call to isinstance ambiguous, by making it return True when the object is not actually a User, is the right way to solve this.
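As a sketch of that suggestion (wrapped_obj and wrapped_type are illustrative names, not from the question's code):
class DocumentWrapper(object):
    def __init__(self, wrapped_obj):
        self.wrapped_obj = wrapped_obj

    def wrapped_type(self):
        # let callers ask explicitly about the wrapped object
        return type(self.wrapped_obj)

# callers can then test: issubclass(some_var.wrapped_type(), User)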

The best way is to make DocumentWrapper inherit from User itself, or to use the mix-in pattern and do multiple inheritance from several classes:
class DocumentWrapper(User, object):
    ...
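A minimal sketch of the payoff (User here is just a stand-in for the real class):
class User(object):
    pass

class DocumentWrapper(User):
    pass  # add the extra features here

print(isinstance(DocumentWrapper(), User))  # True, with no isinstance trickery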
You can also fake isinstance() results by manipulating obj.__class__ but this is deep level magic and should not be done.

Related

Mimic Python's NoneType

I'm creating several classes to use as state flags (this is more of an exercise, though I'm going to use them in a real project), just like we use None in Python, i.e.
... some_var is None ...
NoneType has several special properties. Most importantly, it's a singleton: there can't be more than one NoneType instance during any interpreter session, and its instances (None objects) are immutable. I've come up with two possible ways to implement somewhat similar behaviour in pure Python, and I'm eager to know which one looks better from an architectural standpoint.
1. Don't use instances at all.
The idea is to have a metaclass that produces immutable classes. The classes are prohibited from having instances.
class FlagMetaClass(type):
    def __setattr__(self, *args, **kwargs):
        raise TypeError("{} class is immutable".format(self))
    def __delattr__(self, *args, **kwargs):
        self.__setattr__()
    def __repr__(self):
        return self.__name__

class BaseFlag(object):
    __metaclass__ = FlagMetaClass
    def __init__(self):
        raise TypeError("Can't create {} instances".format(type(self)))
    def __repr__(self):
        return str(type(self))

class SomeFlag(BaseFlag):
    pass
And we get the desired behaviour
a = BaseFlag
a is BaseFlag # -> True
a is SomeFlag # -> False
Obviously any attempt to set attributes on these classes will fail (of course there are several hacks to overcome this, but the direct way is closed). And the classes themselves are unique objects loaded in a namespace.
2. A proper singleton class
class FlagMetaClass(type):
    _instances = {}
    def __call__(cls):
        if cls not in cls._instances:
            cls._instances[cls] = super(FlagMetaClass, cls).__call__()
        return cls._instances[cls]
        # This may be slightly modified to raise an error instead of
        # returning the same object, e.g.:
        # def __call__(cls):
        #     if cls in cls._instances:
        #         raise TypeError("Can't have more than one {} instance".format(cls))
        #     cls._instances[cls] = super(FlagMetaClass, cls).__call__()
        #     return cls._instances[cls]
    def __setattr__(self, *args, **kwargs):
        raise TypeError("{} class is immutable".format(self))
    def __delattr__(self, *args, **kwargs):
        self.__setattr__()
    def __repr__(self):
        return self.__name__

class BaseFlag(object):
    __metaclass__ = FlagMetaClass
    __slots__ = []
    def __repr__(self):
        return str(type(self))

class SomeFlag(BaseFlag):
    pass
Here the Flag classes are real singletons. This particular implementation doesn't raise an error when we try to create another instance, but returns the same object (though it's easy to alter this behaviour). Both classes and instances can't be directly modified. The point is to create an instance of each class upon import like it's done with None.
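The import-time instantiation would look something like this (a sketch; the flags module name is made up):
# flags.py -- create the single instance of each flag once, at import time
some_flag = SomeFlag()
assert SomeFlag() is some_flag  # the metaclass hands back the cached instance

# client code:
# from flags import some_flag
# if state is some_flag: ...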
Both approaches give me somewhat immutable unique objects that can be used for comparison just like None. To me the second one looks more NoneType-like, since None is an instance, but I'm not sure it's worth the increase in ideological complexity. Looking forward to hearing from you.
Theoretically, it's an interesting exercise. But when you say "though I'm going to use them in a real project" then you lose me.
If the real project is highly unPythonic (using traits or some other package to emulate static typing, using __slots__ to keep people from falling on sharp objects, etc.) -- well, I've got nothing for you, because I've got no use for that, but others do.
If the real project is Pythonic, then do the simplest thing possible.
Your "not use instances at all" answer is the correct one here, but you don't need to do a lot of class definition, either.
For example, if you have a function that could accept None as a real parameter, and you want to tell if the parameter has been defaulted, then just do this:
class NoParameterGiven:
    pass

def my_function(my_parameter=NoParameterGiven):
    if my_parameter is NoParameterGiven:
        pass  # <do all my default stuff>
That class is so cheap, there's no reason even to share it between files. Just create it where you need it.
Your state classes are a different story, and you might want to use something like the enum module that @Dunes mentioned -- it has some nice features.
OTOH, if you want to keep it really simple, you could just do something like this:
class MyStates:
    class State1: pass
    class State2: pass
    class State3: pass
You don't need to instantiate anything, and you can refer to them like this: MyStates.State1.

Python descriptors (__get__, __set__) on function parameters

Normally a descriptor is used on a class attribute like so:
class Owner(object):
    attr = Attr()
When getting Owner.attr, Attr.__get__(self, instance, owner) is called where self = Owner.attr, instance = None and owner = Owner.
When Owner is instantiated instance will be the instance of Owner.
Now I would like to apply this concept to method parameters instead of class attributes.
How it would look in practice (let's assume that the functionality of Attr is to wrap a string with a given string):
class Example(object):
    def funct(self, param=Attr('t')):
        return param == 'test'  # <-- param calls the descriptor here

e = Example()
e.funct('es')  # <-- is True because 'es' wrapped with 't' becomes 'test'
When accessing param, Attr.__get__(self, instance, owner) would be called with self = funct.param, instance = funct and owner = funct (although it doesn't make sense for owner and instance to be the same; maybe one of them would be None?).
But since funct is not a class, this will not work. How can I get something similar to work?
A decorator on the function would be processing the parameters, so this might be part of the solution, I think.
The decorator must, for example, be able to change the wrapper string.
Functions actually are first-class objects in Python, but you are correct that the syntax you describe would not work as you want. You could potentially do something like this with a decorator that inspects the passed arguments for characteristics that would enable this sort of functionality. However, you'd probably be better off implementing a callable object, then attaching descriptors to that and creating instances of the callable rather than functions.
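As a sketch of the decorator idea, under a couple of assumptions: Attr simply stores a string and wraps values with it on both sides, and arguments are passed by keyword (Attr, apply_attrs and Example are illustrative names, not an existing API):
import functools

class Attr(object):
    def __init__(self, text):
        self.text = text
    def wrap(self, value):
        # 't' + 'es' + 't' == 'test', matching the question's example
        return self.text + value + self.text

def apply_attrs(**attrs):
    """Decorator: run each named keyword argument through its Attr first."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for name, attr in attrs.items():
                if name in kwargs:
                    kwargs[name] = attr.wrap(kwargs[name])
            return func(*args, **kwargs)
        return wrapper
    return decorator

class Example(object):
    @apply_attrs(param=Attr('t'))
    def funct(self, param):
        return param == 'test'

e = Example()
print(e.funct(param='es'))  # True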

Python -- dynamic multiple inheritance

I'm seeking advice about design of my code.
Introduction
I have several classes, each represents one file type, eg: MediaImageFile, MediaAudioFile and generic (and also base class) MediaGenericFile.
Each file has two variants, Master and Version, so I created these classes to define their specific behaviour. EDIT: Version represents a resized/cropped/trimmed/etc. variant of the Master file. It's used mainly for previews.
EDIT: The reason why I want to do it dynamically is that this app should be reusable (it's a Django app), and therefore it should be easy to implement another MediaGenericFile subclass without modifying the original code.
What I want to do
First of all, user should be able to register own MediaGenericFile subclasses without affecting original code.
Whether a file is a version or a master is easily recognizable (one regexp) from the filename.
/path/to/master.jpg -- master
/path/to/.versions/master_version.jpg -- version
Master/Version classes use some methods/properties of MediaGenericFile, like filename (you need to know filename to generate new version).
MediaGenericFile extends LazyFile, which is just lazy File object.
Now I need to put it together…
Used design
Before I started coding the 'versions' feature, I had a factory class MediaFile, which returns the appropriate file type class according to the extension:
>>> MediaFile('path/to/image.jpg')
<<< <MediaImageFile 'path/to/image.jpg'>
Classes Master and Version define new methods which use methods and attributes of MediaGenericFile, and so on.
Approach 1
One approach is to dynamically create a new type which inherits from Master (or Version) and MediaGenericFile (or a subclass).
class MediaFile(object):
    def __new__(cls, *args, **kwargs):
        ...  # decision about klass
        if version:
            bases = (Version, klass)
            class_name = '{0}Version'.format(klass.__name__)
        else:
            bases = (Master, klass)
            class_name = '{0}Master'.format(klass.__name__)
        new_class = type(class_name, bases, {})
        ...
        return new_class(*args, **kwargs)
Approach 2
The second approach is to create a method contribute_to_instance in Master/Version and call it after creating new_class, but that's trickier than I thought:
import types

class Master(object):
    @classmethod
    def contribute_to_instance(cls, instance):
        methods = (...)
        for m in methods:
            setattr(instance, m, types.MethodType(getattr(cls, m), instance))
class MediaFile(object):
    def __new__(cls, *args, **kwargs):
        ...  # decision about new_class
        obj = new_class(*args, **kwargs)
        if version:
            version_class = Version
        else:
            version_class = Master
        version_class.contribute_to_instance(obj)
        ...
        return obj
However, this doesn't work. There are still problems with calling Master/Version's methods.
Questions
What would be a good way to implement this multiple inheritance?
What is this problem called? :) I was trying to find some solutions, but I simply don't know how to name this problem.
Thanks in advance!
Note to answers
ad larsmans
Comparison and instance checks wouldn't be a problem in my case, because:
Comparisons are redefined anyway
class MediaGenericFile(object):
    def __eq__(self, other):
        return self.name == other.name
I never need to check isinstance(instance, MediaGenericFileVersion). I'm using isinstance(instance, MediaGenericFile) and isinstance(instance, Version), and both work as expected.
Nevertheless, creating a new type per instance sounds to me like a considerable defect.
Well, I could create both variations dynamically in metaclass and then use them, something like:
>>> MediaGenericFile.version_class
<<< <class MediaGenericFileVersion>
>>> MediaGenericFile.master_class
<<< <class MediaGenericFileMaster>
And then:
class MediaFile(object):
    def __new__(cls, *args, **kwargs):
        ...  # decision about klass
        if version:
            attr_name = 'version_class'
        else:
            attr_name = 'master_class'
        new_class = getattr(klass, attr_name)
        ...
        return new_class(*args, **kwargs)
Final solution
Finally, the design pattern is a factory class. MediaGenericFile subclasses are statically typed; users can implement and register their own. Master/Version variants are created dynamically (glued together from several mixins) in the metaclass and stored in a 'cache' to avoid the perils mentioned by larsmans.
Thanks everyone for the suggestions. Finally I understand the metaclass concept. Well, at least I think I do. Push origin master…
I'd certainly advise against the first approach of constructing classes in __new__. The problem with it is that you create a new type per instance, which causes overhead and, worse, causes type comparisons to fail:
>>> Ham1 = type("Ham", (object,), {})
>>> Ham2 = type("Ham", (object,), {})
>>> Ham1 == Ham2
False
>>> isinstance(Ham1(), Ham2)
False
>>> isinstance(Ham2(), Ham1)
False
This violates the principle of least surprise because the classes may seem entirely identical:
>>> Ham1
<class '__main__.Ham'>
>>> Ham2
<class '__main__.Ham'>
You can get approach 1 to work properly, though, if you construct the classes at the module level, outside of MediaFile:
classes = {}
for klass in [MediaImageFile, MediaAudioFile]:
    for variant in [Master, Version]:
        # I'd actually do this the other way around,
        # making Master and Version mixins
        bases = (variant, klass)
        name = klass.__name__ + variant.__name__
        classes[name] = type(name, bases, {})
then, in MediaFile.__new__, look the required class up by name in classes. (Alternatively, set the newly constructed classes on the module instead of in a dict.)
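That lookup might be sketched like this (class_for_extension and is_version are hypothetical helpers standing in for the extension and filename checks described in the question):
class MediaFile(object):
    def __new__(cls, path, *args, **kwargs):
        klass = class_for_extension(path)              # e.g. MediaImageFile
        variant = Version if is_version(path) else Master
        new_class = classes[klass.__name__ + variant.__name__]
        # construct the precomputed class directly, as in Approach 1
        return new_class(path, *args, **kwargs)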
I'm not sure how dynamic you want it to be, but using a "factory pattern" (here a class factory) is fairly readable and understandable and may do what you want. This could serve as a base... MediaFactory could be smarter, and you could register multiple other classes, instead of hard-coding MediaFactoryMaster etc.
class MediaFactory(object):
    __items = {}

    @classmethod
    def make(cls, item):
        return cls.__items[item]

    @classmethod
    def register(cls, item):
        def func(kls):
            cls.__items[item] = kls
            return kls
        return func

class MediaFactoryMaster(MediaFactory, Master): pass
class MediaFactoryVersion(MediaFactory, Version): pass

class MediaFile(object):
    pass

@MediaFactoryMaster.register('jpg')  # adapt to take ['jpg', 'gif', 'png'] ?
class MediaFileImage(MediaFile):
    pass

@MediaFactoryVersion.register('mp3')  # adapt to take ['mp3', 'ogg', 'm4a'] ?
class MediaFileAudio(MediaFile):
    pass
Another possible MediaFactory.make:
@classmethod
def make(cls, fname):
    name, ext = somefunc(fname)
    kls = cls.__items[ext]
    other = Version if version else Master  # 'version' flag derived from fname
    return type('{}{}'.format(kls.__name__, other.__name__), (kls, other), {})
How come you're not using inheritance but are playing around with __new__?
class GenericFile(File):
    """Base class"""

class Master(object):
    """Master mixin"""

class Versioned(object):
    """Versioning mixin"""

class ImageFile(GenericFile):
    """Image files"""

class MasterImage(ImageFile, Master):
    """Whatever"""

class VersionedImage(ImageFile, Versioned):
    """Blah blah blah"""

...
It's not clear why you're doing this though. I think there's a weird code smell here. I'd recommend fewer classes with a consistent interface (duck-typing) rather than a dozen classes and isinstance checks throughout the code to make it all work.
Perhaps you can update your question with what you'd like to do in your code and folks can help either identify the real pattern or a suggest a more idiomatic solution.
You do not have to create a new class for each instance. Don't create the new classes in __new__; create them in the metaclass. Define a metaclass in the base class or in the base module. The two "variant" subclasses are easily saved as class attributes of their generic parent, and then __new__ just looks at the filename according to its own rules and decides which subclass to return.
Watch out for __new__ returning a class other than the one "nominated" during the constructor call. You may have to take steps to invoke __init__ from within __new__.
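A sketch of that metaclass arrangement, reusing the question's Master/Version mixins (the wiring here is illustrative, not the questioner's actual code):
class MediaMeta(type):
    def __init__(cls, name, bases, namespace):
        super(MediaMeta, cls).__init__(name, bases, namespace)
        # build both variants once per class and cache them as attributes,
        # but don't build variants of the variants themselves
        if not issubclass(cls, (Master, Version)):
            cls.master_class = type(name + 'Master', (Master, cls), {})
            cls.version_class = type(name + 'Version', (Version, cls), {})

class MediaGenericFile(object):
    __metaclass__ = MediaMeta  # Python 2 spelling, as in the question

# every statically defined subclass now carries both cached variants:
# MediaImageFile.version_class -> <class 'MediaImageFileVersion'>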
Subclasses will either have to:
"register" themselves with a factory or parent to be found
be imported and then have the parent or factory find them through a recursive search of cls.__subclasses__() (this might have to happen once per creation, but that's probably not a problem for file handling)
be found through the use of "setuptools" entry_points-type tools, but that requires more effort and coordination by the user
The OOD question you should be asking is "do the various classes of my proposed inheritance share any properties at all?"
The purpose of inheritance is to share common data or methods that the instances naturally have in common. Aside from both being files, what do Image files and Audio files have in common? If you really want to stretch your metaphors, you could conceivably have AudioFile.view() which could present — for example — a visualization of the power spectra of the audio data, but ImageFile.listen() makes even less sense.
I think your question side-steps this language independent conceptual issue in favor of the Python dependent mechanics of an object factory. I don't think you have a proper case of inheritance here, or you've failed to explain what common features your Media objects need to share.

How to dynamically change base class of instances at runtime?

This article has a snippet showing the use of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of base classes. OK, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old-style classes with regard to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question is: is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code, and that in real production code tricks like this tend to border on the unreadable. This is purely a thought experiment, and for funzies, to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a readonly attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person
    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works! But:
    oldperson.hello()  # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server, where different applications needed slightly different variations of some of the code).
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()  # -> Creating instance of A
B()  # -> Creating instance of B
C.__bases__ = (A,)
C()  # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I cannot vouch for the consequences, but this code does what you want on py2.7.2:
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done, then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define a class Object
class Object(object): pass
which derives from the built-in object via the built-in metaclass type. That's it; now your new-style classes can derive from Object and modify their __bases__ without any problem.
In my tests this actually worked very well: all existing instances (created before changing the inheritance) of it and its derived classes felt the effect of the change, including their MRO getting updated.
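For the record, a short sketch of the trick, along the lines of the T example above (same pattern, names from this thread; Python 2.7):
class Object(object): pass

class Friendly(Object):
    def hello(self):
        print 'Hello'

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)  # no TypeError: neither class has object as its direct base
p.hello()  # prints "Hello"; the existing instance sees the new base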
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
    namespace=globals(),
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
    namespace=foo.__dict__,
    class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
    base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
            foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
            self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)
        def __missing__(self, key):
            return self._default
    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object
s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.

Dynamically adding @property in Python

I know that I can dynamically add an instance method to an object by doing something like:
import types

def my_method(self):
    # logic of method
    # ...
    pass

# instance is some instance of some class
instance.my_method = types.MethodType(my_method, instance)
Later on I can call instance.my_method() and self will be bound correctly and everything works.
Now, my question: how do I do the exact same thing to obtain the behaviour that decorating the new method with @property would give?
I would guess something like:
instance.my_method = types.MethodType(my_method, instance)
instance.my_method = property(instance.my_method)
But, doing that, instance.my_method returns a property object.
The property descriptor objects needs to live in the class, not in the instance, to have the effect you desire. If you don't want to alter the existing class in order to avoid altering the behavior of other instances, you'll need to make a "per-instance class", e.g.:
def addprop(inst, name, method):
    cls = type(inst)
    if not hasattr(cls, '__perinstance'):
        cls = type(cls.__name__, (cls,), {})
        cls.__perinstance = True
        inst.__class__ = cls
    setattr(cls, name, property(method))
I'm marking these special "per-instance" classes with an attribute to avoid needlessly making multiple ones if you're doing several addprop calls on the same instance.
Note that, like for other uses of property, you need the class in play to be new-style (typically obtained by inheriting directly or indirectly from object), not the ancient legacy style (dropped in Python 3) that's assigned by default to a class without bases.
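A quick usage sketch of addprop (Thing is a made-up stand-in class):
class Thing(object):
    pass

t = Thing()
addprop(t, 'answer', lambda self: 42)
print(t.answer)                    # 42
print(hasattr(Thing(), 'answer'))  # False: other instances are untouched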
Since this question isn't asking about adding to only a specific instance, the following method can be used to add a property to the class; this will expose the property to all instances of the class. YMMV.
cls = type(my_instance)
cls.my_prop = property(lambda self: "hello world")
print(my_instance.my_prop)
# >>> hello world
Note: I'm adding another answer because I think @Alex Martelli, while correct, is achieving the desired result by creating a new class that holds the property. This answer is intended to be more direct/straightforward, without abstracting what's going on into its own method.
