I found this post where a function is used to create a class that inherits from a base passed in as a parameter:
def get_my_code(base):
    class MyCode(base):
        def initialize(self):
            ...
    return MyCode

my_code = get_my_code(ParentA)
I would like to do something similar, but with a decorator, something like:
@decorator(base)
class MyClass(base):
    ...
Is this possible?
UPDATE
Say you have a class Analysis that is used throughout your code. Then you realize that you want a wrapper class Transient that is just a time loop on top of the analysis class. If I replace the analysis class in the code with Transient(Analysis), everything breaks, because an analysis class, with all its attributes, is expected. The problem is that I can't simply define class Transient(Analysis) once and for all, because there are plenty of analysis classes. I thought the best way to handle this would be some sort of dynamic inheritance. Right now I use aggregation to redirect the functionality to the analysis class inside Transient, as sketched below.
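For reference, a minimal sketch of the aggregation approach described above, assuming a hypothetical Analysis class with a run() method (all names are illustrative, not from the actual code):

class Analysis:
    def run(self):
        print("running analysis")

class Transient:
    """Wraps an analysis object, adding a time loop on top of it."""
    def __init__(self, analysis, steps):
        self._analysis = analysis  # aggregation instead of inheritance
        self._steps = steps

    def run(self):
        # the time loop, delegating each step to the wrapped analysis
        for _ in range(self._steps):
            self._analysis.run()

    def __getattr__(self, name):
        # redirect everything else to the aggregated analysis object,
        # so Transient looks like an analysis to the rest of the code
        return getattr(self._analysis, name)

Transient(Analysis(), steps=3).run()  # prints "running analysis" three times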
A class decorator actually receives the class already built and instantiated (as a class object). It can perform changes on its dict, and even wrap its methods with other decorators.
However, that means the class already has its bases set, and these can't ordinarily be changed. That implies you have to rebuild the class, in some way, inside the decorator code.
Moreover, if the class's methods make use of parameterless super or the __class__ cell variable, those cells are already set in the member functions (which in Python 3 are the same as unbound methods), so you can't just create a new class and set those methods as members on the new one.
So, there might be a way, but it will be non-trivial. And as I pointed out in the comment above, I'd like to understand what you want to achieve with this, since one could just put the base class on the class declaration itself, instead of using it in the decorator configuration.
I've crafted a function that, as described above, creates a new class, "cloning" the original, and can rebuild all methods that use __class__ or super: it returns a new class which is functionally identical to the original one, but with the bases exchanged. If used in a decorator as requested (decorator code included), it will simply change the class bases. It can't handle decorated methods (other than classmethod and staticmethod), and doesn't take care of naming details such as qualnames or reprs for the methods.
from types import FunctionType

def change_bases(cls, bases, metaclass=type):
    class Changeling(*bases, metaclass=metaclass):
        def breeder(self):
            __class__  # noQA
    cell = Changeling.breeder.__closure__
    del Changeling.breeder
    Changeling.__name__ = cls.__name__
    for attr_name, attr_value in cls.__dict__.items():
        if isinstance(attr_value, (FunctionType, classmethod, staticmethod)):
            if isinstance(attr_value, staticmethod):
                func = getattr(cls, attr_name)
            elif isinstance(attr_value, classmethod):
                func = attr_value.__func__
            else:
                func = attr_value
            # TODO: check if func is wrapped in decorators and recreate inner function.
            # Although reapplying arbitrary decorators is not actually possible -
            # it is possible to have a "prepare_for_changeling" innermost decorator
            # which could be made to point to the new function.
            if func.__closure__ and func.__closure__[0].cell_contents is cls:
                franken_func = FunctionType(
                    func.__code__,
                    func.__globals__,
                    func.__name__,
                    func.__defaults__,
                    cell
                )
                if isinstance(attr_value, staticmethod):
                    func = staticmethod(franken_func)
                elif isinstance(attr_value, classmethod):
                    func = classmethod(franken_func)
                else:
                    func = franken_func
            setattr(Changeling, attr_name, func)
            continue
        setattr(Changeling, attr_name, attr_value)
    return Changeling
def decorator(bases):
    if not isinstance(bases, tuple):
        bases = (bases,)
    def stage2(cls):
        return change_bases(cls, bases)
    return stage2
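A quick sketch of the decorator in use, with hypothetical ParentA and ParentB bases (illustrative names only):

class ParentA:
    def greet(self):
        return "A"

class ParentB:
    def greet(self):
        return "B"

@decorator(ParentB)
class MyClass(ParentA):
    def greet_loudly(self):
        # parameterless super still works after the base swap,
        # because change_bases rebuilt the __class__ cell
        return super().greet().upper()

print(MyClass().greet_loudly())  # "B": the base is now ParentB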
I'm just trying to streamline one of my classes and have introduced some functionality in the same style as the flyweight design pattern.
However, I'm a bit confused as to why __init__ is always called after __new__. I wasn't expecting this. Can anyone tell me why this is happening, and how I can implement this functionality otherwise? (Apart from putting the implementation into __new__, which feels quite hacky.)
Here's an example:
class A(object):
    _dict = dict()

    def __new__(cls):
        if 'key' in A._dict:
            print "EXISTS"
            return A._dict['key']
        else:
            print "NEW"
            return super(A, cls).__new__(cls)

    def __init__(self):
        print "INIT"
        A._dict['key'] = self
        print ""

a1 = A()
a2 = A()
a3 = A()
Outputs:
NEW
INIT
EXISTS
INIT
EXISTS
INIT
Why?
Use __new__ when you need to control the creation of a new instance. Use __init__ when you need to control initialization of a new instance.
__new__ is the first step of instance creation. It's called first, and is responsible for returning a new instance of your class. In contrast, __init__ doesn't return anything; it's only responsible for initializing the instance after it's been created.
In general, you shouldn't need to override __new__ unless you're subclassing an immutable type like str, int, unicode or tuple.
From the April 2008 post "When to use __new__ vs. __init__?" on mail.python.org.
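To illustrate that last point, here is a minimal sketch of overriding __new__ on an immutable type (the Centimeters class is purely illustrative):

class Centimeters(float):
    # float is immutable, so the converted value must be supplied in
    # __new__; by the time __init__ runs it is too late to change it
    def __new__(cls, inches):
        return super(Centimeters, cls).__new__(cls, inches * 2.54)

print(Centimeters(10))  # 25.4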
You should consider that what you are trying to do is usually done with a factory, and that's the best way to do it. Using __new__ is not a good clean solution, so please consider using a factory. Here's a good example: the ActiveState Factory pattern recipe.
__new__ is a static class method, while __init__ is an instance method. __new__ has to create the instance first, so that __init__ can initialize it. Note that __init__ takes self as a parameter; until you create the instance, there is no self.
Now, I gather that you're trying to implement the Singleton pattern in Python. There are a few ways to do that.
Also, as of Python 2.6, you can use class decorators.
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
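A quick check of the decorator's behavior (illustrative only):

a = MyClass()
b = MyClass()
assert a is b  # both names refer to the single cached instance

One caveat of this approach: after decoration, the name MyClass is bound to the getinstance function rather than the class, so things like isinstance(a, MyClass) stop working.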
In most well-known OO languages, an expression like SomeClass(arg1, arg2) will allocate a new instance, initialise the instance's attributes, and then return it.
In most well-known OO languages, the "initialise the instance's attributes" part can be customised for each class by defining a constructor, which is basically just a block of code that operates on the new instance (using the arguments provided to the constructor expression) to set up whatever initial conditions are desired. In Python, this corresponds to the class' __init__ method.
Python's __new__ is nothing more and nothing less than similar per-class customisation of the "allocate a new instance" part. This of course allows you to do unusual things such as returning an existing instance rather than allocating a new one. So in Python, we shouldn't really think of this part as necessarily involving allocation; all that we require is that __new__ comes up with a suitable instance from somewhere.
But it's still only half of the job, and there's no way for the Python system to know that sometimes you want to run the other half of the job (__init__) afterwards and sometimes you don't. If you want that behavior, you have to say so explicitly.
Often, you can refactor so you only need __new__, or so you don't need __new__, or so that __init__ behaves differently on an already-initialised object. But if you really want to, Python does actually allow you to redefine "the job", so that SomeClass(arg1, arg2) doesn't necessarily call __new__ followed by __init__. To do this, you need to create a metaclass, and define its __call__ method.
A metaclass is just the class of a class. And a class' __call__ method controls what happens when you call instances of the class. So a metaclass' __call__ method controls what happens when you call a class; i.e. it allows you to redefine the instance-creation mechanism from start to finish. This is the level at which you can most elegantly implement a completely non-standard instance creation process such as the singleton pattern. In fact, with less than 10 lines of code you can implement a Singleton metaclass that then doesn't even require you to futz with __new__ at all, and can turn any otherwise-normal class into a singleton by simply adding __metaclass__ = Singleton!
class Singleton(type):
    def __init__(self, *args, **kwargs):
        super(Singleton, self).__init__(*args, **kwargs)
        self.__instance = None

    def __call__(self, *args, **kwargs):
        if self.__instance is None:
            self.__instance = super(Singleton, self).__call__(*args, **kwargs)
        return self.__instance
However this is probably deeper magic than is really warranted for this situation!
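For completeness, a small illustrative sketch of the Python 2 usage the answer alludes to (the Logger name is made up):

class Logger(object):
    __metaclass__ = Singleton  # Python 3: class Logger(metaclass=Singleton)
    def __init__(self):
        print "creating the one and only Logger"

a = Logger()  # prints the message: __new__ and __init__ run once
b = Logger()  # returns the cached instance: __init__ is NOT re-run
assert a is b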
To quote the documentation:
Typical implementations create a new instance of the class by invoking the superclass's __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it.
...
If __new__() does not return an instance of cls, then the new instance's __init__() method will not be invoked.
__new__() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation.
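A tiny sketch demonstrating that rule (class names are illustrative):

class Other(object):
    pass

class Skipper(object):
    def __new__(cls):
        return Other()  # not an instance of cls...

    def __init__(self):
        print("never reached")  # ...so __init__ is not invoked

obj = Skipper()
print(type(obj))  # <class '__main__.Other'>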
I realize that this question is quite old but I had a similar issue.
The following did what I wanted:
class Agent(object):
    _agents = dict()

    def __new__(cls, *p):
        number = p[0]
        if number not in cls._agents:
            cls._agents[number] = object.__new__(cls)
        return cls._agents[number]

    def __init__(self, number):
        self.number = number

    def __eq__(self, rhs):
        return self.number == rhs.number

Agent("a") is Agent("a")  # True
I used this page as a resource http://infohost.nmt.edu/tcc/help/pubs/python/web/new-new-method.html
When __new__ returns an instance of the same class, __init__ is afterwards run on the returned object. That is, you can NOT use __new__ to prevent __init__ from being run: even if you return a previously created object from __new__, it will be initialized again and again (double, triple, etc.) by __init__.
Here is a generic approach to the Singleton pattern which extends vartec's answer above and fixes it:
def SingletonClass(cls):
    class Single(cls):
        __doc__ = cls.__doc__
        _initialized = False
        _instance = None

        def __new__(cls, *args, **kwargs):
            if not cls._instance:
                # object.__new__ takes no extra arguments in Python 3
                cls._instance = super(Single, cls).__new__(cls)
            return cls._instance

        def __init__(self, *args, **kwargs):
            if self._initialized:
                return
            super(Single, self).__init__(*args, **kwargs)
            self.__class__._initialized = True  # It's crucial to set this on the class!
    return Single
Full story is here.
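A short usage sketch of the decorator above (the Database name is illustrative):

@SingletonClass
class Database(object):
    def __init__(self):
        print("connecting")  # runs only once

a = Database()  # prints "connecting"
b = Database()  # __new__ returns the cached instance; __init__ bails out early
assert a is b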
Another approach, which in fact involves __new__, is to use classmethods:

class Singleton(object):
    __initialized = False

    def __new__(cls, *args, **kwargs):
        if not cls.__initialized:
            cls.__init__(*args, **kwargs)
            cls.__initialized = True
        return cls

class MyClass(Singleton):
    @classmethod
    def __init__(cls, x, y):
        print "init is here"

    @classmethod
    def do(cls):
        print "doing stuff"

Please pay attention that with this approach you need to decorate ALL of your methods with @classmethod, because you'll never use any real instance of MyClass.
I think the simple answer to this question is: if __new__ returns a value that is of the same type as the class, the __init__ function executes; otherwise it won't. In this case your code returns A._dict['key'], which is of the same class as cls, so __init__ will be executed.
class M(type):
    _dict = {}

    def __call__(cls, key):
        if key in cls._dict:
            print 'EXISTS'
            return cls._dict[key]
        else:
            print 'NEW'
            instance = super(M, cls).__call__(key)
            cls._dict[key] = instance
            return instance

class A(object):
    __metaclass__ = M

    def __init__(self, key):
        print 'INIT'
        self.key = key
        print

a1 = A('aaa')
a2 = A('bbb')
a3 = A('aaa')
outputs:
NEW
INIT
NEW
INIT
EXISTS
NB As a side effect, the M._dict attribute automatically becomes accessible from A as A._dict, so take care not to overwrite it accidentally.
An update to @AntonyHatchkins' answer: you probably want a separate dictionary of instances for each class of the metatype, which means that you should have an __init__ method in the metaclass to initialize your class object with that dictionary, instead of making it global across all the classes.
class MetaQuasiSingleton(type):
    def __init__(cls, name, bases, attributes):
        cls._dict = {}

    def __call__(cls, key):
        if key in cls._dict:
            print('EXISTS')
            instance = cls._dict[key]
        else:
            print('NEW')
            instance = super().__call__(key)
            cls._dict[key] = instance
        return instance

class A(metaclass=MetaQuasiSingleton):
    def __init__(self, key):
        print('INIT')
        self.key = key
        print()
I have gone ahead and updated the original code with an __init__ method and changed the syntax to Python 3 notation (a no-arg call to super, and the metaclass given in the class arguments instead of as an attribute).
Either way, the important point here is that your class initializer (__call__ method) will not execute either __new__ or __init__ if the key is found. This is much cleaner than using __new__, which requires you to mark the object if you want to skip the default __init__ step.
__new__ should return a new, blank instance of a class. __init__ is then called to initialise that instance. You're not calling __init__ in the "NEW" case of __new__, so it's being called for you. The code that calls __new__ doesn't keep track of whether __init__ has been called on a particular instance, nor should it, because you're doing something very unusual here.
You could add an attribute to the object in the __init__ function to indicate that it's been initialised. Check for the existence of that attribute as the first thing in __init__ and don't proceed any further if it has been.
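A minimal sketch of that guard, building on the A class from the question (the _initialized flag name is arbitrary):

class A(object):
    _dict = dict()

    def __new__(cls):
        if 'key' in A._dict:
            return A._dict['key']
        return super(A, cls).__new__(cls)

    def __init__(self):
        if hasattr(self, '_initialized'):
            return  # already initialized once; don't run again
        self._initialized = True
        A._dict['key'] = self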
Digging a little deeper into that!
The type of a generic class in CPython is type, and its base class is object (unless you explicitly define another base class or metaclass). The sequence of low-level calls can be found here. The first method called is type_call, which then calls tp_new and then tp_init.
The interesting part here is that tp_new will call the base class object's __new__ method, object_new, which does a tp_alloc (PyType_GenericAlloc) that allocates the memory for the object :)
At that point the object is created in memory and then the __init__ method gets called. If __init__ is not implemented in your class then the object_init gets called and it does nothing :)
Then type_call just returns the object which binds to your variable.
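Roughly, the Python-level equivalent of that type_call sequence looks like this (a simplified sketch that ignores error handling and some corner cases):

class ShowCall(type):
    # a simplified Python rendering of CPython's type_call
    def __call__(cls, *args, **kwargs):
        obj = cls.__new__(cls, *args, **kwargs)  # tp_new
        if isinstance(obj, cls):
            obj.__init__(*args, **kwargs)        # tp_init (skipped otherwise)
        return obj

class Point(object, metaclass=ShowCall):
    def __init__(self, x):
        self.x = x

p = Point(3)
print(p.x)  # 3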
One should look at __init__ as a simple constructor in traditional OO languages. For example, if you are familiar with Java or C++, the constructor is passed a pointer to its own instance implicitly. In the case of Java, it is the this variable. If one were to inspect the byte code generated for Java, one would notice two calls. The first call is to a "new" method, and then the next call is to the init method (which is the actual call to the user-defined constructor). This two-step process enables creation of the actual instance before calling the constructor method of the class, which is just another method of that instance.
Now, in the case of Python, __new__ is an added facility that is accessible to the user. Java does not provide that flexibility, due to its typed nature. If a language provides that facility, then the implementor of __new__ can do many things in that method before returning the instance, including in some cases creating a totally new instance of an unrelated object. This approach also works out especially well for immutable types in Python.
However, I'm a bit confused as to why __init__ is always called after __new__.
I think the C++ analogy would be useful here:
__new__ simply allocates memory for the object. The instance variables of an object need memory to hold them, and this is what __new__ does.
__init__ initializes the internal variables of the object to specific values (which could be defaults).
Referring to this doc:
When subclassing immutable built-in types like numbers and strings, and occasionally in other situations, the static method __new__ comes in handy. __new__ is the first step in instance construction, invoked before __init__.
The __new__ method is called with the class as its first argument; its responsibility is to return a new instance of that class.
Compare this to __init__: __init__ is called with an instance as its first argument, and it doesn't return anything; its responsibility is to initialize the instance.
There are situations where a new instance is created without calling __init__ (for example when the instance is loaded from a pickle). There is no way to create a new instance without calling __new__ (although in some cases you can get away with calling a base class's __new__).
Regarding what you wish to achieve, the same doc also has info about the Singleton pattern:
class Singleton(object):
    def __new__(cls, *args, **kwds):
        it = cls.__dict__.get("__it__")
        if it is not None:
            return it
        cls.__it__ = it = object.__new__(cls)
        it.init(*args, **kwds)
        return it

    def init(self, *args, **kwds):
        pass
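Subclasses then override init instead of __init__, so re-initialization is avoided. A brief illustrative sketch (the Config name is made up):

class Config(Singleton):
    def init(self, path='app.cfg'):
        print('loading %s' % path)  # runs only on first construction
        self.path = path

a = Config()  # prints "loading app.cfg"
b = Config()  # __new__ finds __it__ and returns it; init is not re-run
assert a is b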
you may also use this implementation from PEP 318, using a decorator
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
Now I've got the same problem, and for some reasons I decided to avoid decorators, factories and metaclasses. I did it like this:
Main file
def _alt(func):
    import functools
    @functools.wraps(func)
    def init(self, *p, **k):
        if hasattr(self, "parent_initialized"):
            return
        else:
            self.parent_initialized = True
            func(self, *p, **k)
    return init

class Parent:
    # Empty dictionary, shouldn't ever be filled with anything else
    parent_cache = {}

    def __new__(cls, n, *args, **kwargs):
        # Checks if object with this ID (n) has been created
        if n in cls.parent_cache:
            # It was, return it
            return cls.parent_cache[n]
        else:
            # Check if it was modified by this function
            if not hasattr(cls, "parent_modified"):
                # Add the attribute
                cls.parent_modified = True
                cls.parent_cache = {}
                # Apply it
                cls.__init__ = _alt(cls.__init__)
            # Get the instance
            obj = super().__new__(cls)
            # Push it to cache
            cls.parent_cache[n] = obj
            # Return it
            return obj
Example classes
class A(Parent):
    def __init__(self, n):
        print("A.__init__", n)

class B(Parent):
    def __init__(self, n):
        print("B.__init__", n)
In use
>>> A(1)
A.__init__ 1 # First A(1) initialized
<__main__.A object at 0x000001A73A4A2E48>
>>> A(1) # Returned previous A(1)
<__main__.A object at 0x000001A73A4A2E48>
>>> A(2)
A.__init__ 2 # First A(2) initialized
<__main__.A object at 0x000001A7395D9C88>
>>> B(2)
B.__init__ 2 # B class doesn't collide with A, thanks to separate cache
<__main__.B object at 0x000001A73951B080>
Warning: You shouldn't instantiate Parent directly; it will collide with other classes, unless you define a separate cache in each of the children, and that's not what we want.
Warning: It seems a class with Parent as grandparent behaves weird. [Unverified]
The __init__ is called after __new__ so that when you override it in a subclass, your added code will still get called.
If you are trying to subclass a class that already has a __new__, someone unaware of this might start by adapting __init__ and forwarding the call down to the subclass's __init__. This convention of calling __init__ after __new__ helps that work as expected.
The __init__ still needs to allow for any parameters the superclass's __new__ needed, but failing to do so will usually create a clear runtime error. And the __new__ should probably explicitly allow for *args and **kw, to make it clear that extension is OK.
It is generally bad form to have both __new__ and __init__ in the same class at the same level of inheritance, because of the behavior the original poster described.
However, I'm a bit confused as to why __init__ is always called after __new__.
Not much of a reason other than that it just is done that way. __new__ doesn't have the responsibility of initializing the class; some other method does (type.__call__ is what invokes __new__ and then __init__).
I wasn't expecting this. Can anyone tell me why this is happening and how I can implement this functionality otherwise? (Apart from putting the implementation into __new__, which feels quite hacky.)
You could have __init__ do nothing if it's already been initialized, or you could write a new metaclass with a new __call__ that only calls __init__ on new instances, and otherwise just returns __new__(...).
The simple reason is that __new__ is used for creating an instance, while __init__ is used for initializing it. Before initializing, the instance must be created first. That's why __new__ is called before __init__.
When instantiating a class, first, __new__() is called to create the instance of a class, then __init__() is called to initialize the instance.
__new__():
Called to create a new instance of class cls. ... If __new__() is invoked during object construction and it returns an instance of cls, then the new instance's __init__() method will be invoked like __init__(self[, ...]), ...
__init__():
Called after the instance has been created (by __new__()), ... Because __new__() and __init__() work together in constructing objects (__new__() to create it, and __init__() to customize it), ...
For example, when instantiating Teacher class, first, __new__() is called to create the instance of Teacher class, then __init__() is called to initialize the instance as shown below:
class Teacher:
    def __init__(self, name):
        self.name = name

class Student:
    def __init__(self, name):
        self.name = name

obj = Teacher("John")  # Instantiation
print(type(obj))
print(obj.name)
This is the output:
<class '__main__.Teacher'>
John
And, using __new__() of the instance of Teacher class, we can create the instance of Student class as shown below:
# ...
obj = Teacher("John")
print(type(obj))
print(obj.name)
obj = obj.__new__(Student) # Creates the instance of "Student" class
print(type(obj))
Now, the instance of Student class is created as shown below:
<class '__main__.Teacher'>
John
<class '__main__.Student'> # Here
Next, if we try to get the value of the name variable from the instance of Student class as shown below:
obj = Teacher("John")
print(type(obj))
print(obj.name)
obj = obj.__new__(Student)
print(type(obj))
print(obj.name) # Tries to get the value of "name" variable
The error below occurs because the instance of Student class has not been initialized by __init__() yet:
AttributeError: 'Student' object has no attribute 'name'
So, we initialize the instance of Student class as shown below:
obj = Teacher("John")
print(type(obj))
print(obj.name)
obj = obj.__new__(Student)
print(type(obj))
obj.__init__("Tom") # Initializes the instance of "Student" class
print(obj.name)
Then, we can get the value of name variable from the instance of Student class as shown below:
<class '__main__.Teacher'>
John
<class '__main__.Student'>
Tom # Here
People have already detailed the question, and the answers use some examples like the singleton etc. See the code below:
class Singleton(object):  # class statement restored around the original snippet (name assumed)
    __instance = None

    def __new__(cls):
        if cls.__instance is None:
            cls.__instance = object.__new__(cls)
        return cls.__instance
I got the above code from this link; it has a detailed overview of __new__ vs __init__. Worth reading!
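A one-line check of the snippet's behavior:

assert Singleton() is Singleton()  # every call returns the same cached instance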
Preamble: I have objects, some of which could be created by the default constructor and left without modifications, so such objects can be considered "empty". Sometimes I need to verify whether some object is "empty" or not. It can be done in the following way (the magic methods are implemented in the base class Animal):
>>> a = Bird()
>>> b = Bird()
>>> a == b
True
>>> a == Bird()
True
So the question: is it possible (and if yes then how) to achieve such syntax:
>>> a == Bird.default
True
At least this one (but the previous is sweeter):
>>> a == a.default
True
But: with the implementation of default in the base class Animal (so as not to repeat it in all derived classes):

class Animal(object):
    ... tech stuff ...
    - obj comparison
    - obj representation
    - etc

class Bird(Animal):
    ... all about birds ...

class Fish(Animal):
    ... all about fishes ...
Of course I don't need solutions that involve calling Bird() in the Animal class :)
I'd like to have a kind of templating implemented in the base class which will stamp out the derived class's default instance and store its unique copy in the derived class or an instance property. I think it could be achieved by playing with metaclasses or so, but I don't know how.
A class's default instance can be considered to be any object instantiated by its class's __init__() (without further object modification, of course).
UPDATE
The system is flooded with objects, and I just want a way to separate the circulation of freshly (by default) created objects (which are useless to display, for example) from already-modified ones. I do it by:
if a == Bird():
    . . .
I don't want the creation of a new object just for comparison; intuitively, I'd like to have one instance copy as a reference (an etalon) for instances of this class to compare with. The objects are JSON-like and contain only properties (besides the implicit __str__, __call__, __eq__ methods), so I'd like to keep this style of using built-in Python features and avoid explicitly defined methods like is_empty(), for example. It's like entering an object in the interactive shell and having it printed out via __str__; it is implicit, but fun.
To achieve the first solution you should use a metaclass.
For example:
def add_default_meta(name, bases, attrs):
    cls = type(name, bases, attrs)
    cls.default = cls()
    return cls
And use it as (assuming Python 3; in Python 2, set the __metaclass__ attribute in the class body):
class Animal(object, metaclass=add_default_meta):
    # stuff

class NameClass(Animal, metaclass=add_default_meta):
    # stuff

Note that you have to repeat metaclass=... for every subclass of Animal.
If instead of a function you use a class and its __new__ method to implement the metaclass, it can be inherited, i.e.:
class AddDefaultMeta(type):
    def __new__(cls, name, bases, attrs):
        cls = super(AddDefaultMeta, cls).__new__(cls, name, bases, attrs)
        cls.default = cls()
        return cls
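A small sketch showing the inheritance (Python 3 syntax; only the base class needs to name the metaclass):

class Animal(metaclass=AddDefaultMeta):
    pass

class Bird(Animal):  # inherits AddDefaultMeta, no metaclass= needed
    pass

print(isinstance(Bird.default, Bird))      # True: each class gets its own default
print(isinstance(Animal.default, Animal))  # True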
A different way to achieve the same effect is to use a class decorator:
def add_default(cls):
    cls.default = cls()
    return cls

@add_default
class Bird(Animal):
    # stuff
Again, you must use the decorator for every subclass.
If you want to achieve the second solution, i.e. to check a == a.default, then you can simply reimplement Animal.__new__:
class Animal(object):
    def __new__(cls, *args, **kwargs):
        if not (args or kwargs) and not hasattr(cls, 'default'):
            cls.default = object.__new__(cls)
            return cls.default
        else:
            return object.__new__(cls)

This creates the empty instance the first time the class is instantiated without arguments, and stores it in the default attribute.
This means that you can do both:
a == a.default
and
a == Bird.default
But accessing Bird.default gives AttributeError if you didn't create any Bird instance.
Style note: Bird.Default looks very bad to me. Default is an instance of Bird not a type, hence you should use lowercase_with_underscore according to PEP 8.
In fact the whole thing looks fishy to me. You could simply have an is_empty() method. It's pretty easy to implement:
class Animal(object):
    def __init__(self, *args, **kwargs):
        # might require a more complex condition
        self._is_empty = not (bool(args) or bool(kwargs))

    def is_empty(self):
        return self._is_empty
Then when the subclasses create an empty instance that doesn't pass any arguments to the base class, the _is_empty attribute will be True, and the inherited method will return True accordingly; in the other cases some argument is passed to the base class, which sets _is_empty to False.
You can play around with this in order to obtain a more robust condition that works better with your subclasses.
Another possible metaclass:
class DefaultType(type):
    def __new__(cls, name, bases, attrs):
        new_cls = super(DefaultType, cls).__new__(cls, name, bases, attrs)
        new_cls.default = new_cls()
        return new_cls
You only need to set the metaclass attribute for the Animal class, as all derived classes will inherit it:
class Animal(object):
    __metaclass__ = DefaultType
    # ...

class Bird(Animal):
    # ...
This allows you to use both:
a == Bird.default
and:
a == a.default
I'm defining several classes intended to be used for multiple inheritance, e.g.:
class A:
    def __init__(self, bacon=None, **kwargs):
        self.bacon = bacon
        if bacon is None:
            self.bacon = 100
        super().__init__(**kwargs)

class Bacon(A):
    def __init__(self, **kwargs):
        """Optional: bacon"""
        super().__init__(**kwargs)

class Eggs(A):
    def __init__(self, **kwargs):
        """Optional: bacon"""
        super().__init__(**kwargs)

class Spam(Eggs, Bacon):
    def __init__(self, **kwargs):
        """Optional: bacon"""
        super().__init__(**kwargs)
However, I have multiple classes (e.g. possibly Bacon, A, and Spam, but not Eggs) that care about when their property bacon is changed. They don't need to modify the value, only to know what the new value is, like an event. Because of the multiple-inheritance setup I have, this would mean having to notify the superclass about the change (if it cares).
I know that it might be possible if I pass the class name to the method decorator, or if I use a class decorator. I don't want to have all the direct self-class referencing, having to create lots of decorators above each class, or forcing the methods to be the same name, as none of these sound very pythonic.
I was hoping to get syntax that looks something like this:
@on_change(bacon)
def on_bacon_change(self, bacon):
    # read from old/new bacon
    make_eggs(how_much=bacon)
I don't care about the previous value of bacon, so that bacon argument isn't necessary, if this is called after bacon is set.
Is it possible to check whether a superclass has a method with this decorator?
If this isn't feasible, are there alternatives for passing events like this up through the multiple-inheritance chain?
EDIT:
The actual calling of the function in Spam would be done in A, by using a @property and @bacon.setter, as that would be the upper-most class that initializes bacon. Once it knows what function to call on self, the problem only lies in propagating the call up the MI chain.
EDIT 2:
If I override the attribute with a @bacon.setter, would it be possible to determine whether the super() class has a setter for bacon?
What you call for would probably fit nicely with a more complete framework of signals and so on; it may even invite Aspect Oriented Programming.
Without going deep into that, however, a metaclass and a decorator can do just what you are asking for. I came up with these; I hope they work for you.
If you'd like to evolve this into something robust and usable, write me - if nothing like this exists out there, it would be worth keeping a utility package on PyPI for it.
def setattr_wrapper(cls):
    def watcher_setattr(self, attr, val):
        super(cls, self).__setattr__(attr, val)
        watched = cls.__dict__["_watched_attrs"]
        if attr in watched:
            for method in watched[attr]:
                getattr(self, method)(attr, val)
    return watcher_setattr

class AttrNotifier(type):
    def __new__(metacls, name, bases, dct):
        dct["_watched_attrs"] = {}
        for key, value in dct.items():
            if hasattr(value, "_watched_attrs"):
                for attr in getattr(value, "_watched_attrs"):
                    if attr not in dct["_watched_attrs"]:
                        dct["_watched_attrs"][attr] = set()
                    dct["_watched_attrs"][attr].add(key)
        cls = type.__new__(metacls, name, bases, dct)
        cls.__setattr__ = setattr_wrapper(cls)
        return cls

def on_change(*args):
    def decorator(meth):
        our_args = args
        # ensure that this decorator is stackable
        if hasattr(meth, "_watched_attrs"):
            our_args = getattr(meth, "_watched_attrs") + our_args
        setattr(meth, "_watched_attrs", our_args)
        return meth
    return decorator

# from here on, example of use:
class A(metaclass=AttrNotifier):
    @on_change("bacon")
    def bacon_changed(self, attr, val):
        print("%s changed in %s to %s" % (attr, self.__class__.__name__, val))

class Spam(A):
    @on_change("bacon", "pepper")
    def changed(self, attr, val):
        print("%s changed in %s to %s" % (attr, self.__class__.__name__, val))

a = A()
a.bacon = 5
b = Spam()
b.pepper = 10
b.bacon = 20
(tested in Python 3.2 and Python 2.6 - changing the declaration of the A class to the Python 2 metaclass syntax)
edit - some words on what is being done
Here is what happens:
The metaclass picks all methods marked with the on_change decorator and registers them in a dictionary on the class - this dictionary is named _watched_attrs and it can be accessed as a normal class attribute.
The other thing the metaclass does is to override the __setattr__ method for the class once it is created. This new __setattr__ just sets the attribute, and then checks the _watched_attrs dictionary for any methods on that class registered to be called when the attribute that was just modified changes - if so, it calls them.
The extra indirection level around watcher_setattr (which is the function that becomes each class's __setattr__) is there so that you can register different attributes to be watched on each class in the inheritance chain - all the classes have independently accessible _watched_attrs dictionaries. Without it, only the most specialized class's _watched_attrs on the inheritance chain would be respected.
You are looking for python properties:
http://docs.python.org/library/functions.html#property
Googling for "override superclass property setter" turned up this Stack Overflow question:
Overriding inherited properties’ getters and setters in Python
For putting methods of various classes into a global registry, I'm using a decorator with a metaclass. The decorator tags the methods; the metaclass puts them in the registry:
class ExposedMethod(object):
    def __init__(self, decoratedFunction):
        self._decoratedFunction = decoratedFunction

    def __call__(__self, *__args, **__kw):
        return __self._decoratedFunction(*__args, **__kw)

class ExposedMethodDecoratorMetaclass(type):
    def __new__(mcs, name, bases, dct):
        for obj_name, obj in dct.iteritems():
            if isinstance(obj, ExposedMethod):
                WorkerFunctionRegistry.addWorkerToWorkerFunction(obj_name, name)
        return type.__new__(mcs, name, bases, dct)

class MyClass(object):
    __metaclass__ = ExposedMethodDecoratorMetaclass

    @ExposedMethod
    def myCoolExposedMethod(self):
        pass
I've now come to the point where two function registries are needed. My first thought was to subclass the metaclass and put the other registry in; for that, the __new__ method simply has to be rewritten.
Since rewriting means redundant code, this is not what I really want. So, it would be nice if anyone could name a way to put an attribute inside the metaclass which can be read when __new__ is executed. With that, the right registry could be put in without having to rewrite __new__.
Your ExposedMethod instances do not behave as normal instance methods but rather like static methods -- the fact that you're giving one of them a self argument hints that you're not aware of that. You may need to add a __get__ method to the ExposedMethod class to make it a descriptor, just like function objects are -- see here for more on descriptors.
But there is a much simpler way, since functions can have attributes...:
def ExposedMethod(registry=None):
    def decorate(f):
        f.registry = registry
        return f
    return decorate
and in a class decorator (simpler than a metaclass! requires Python 2.6 or better -- in 2.5 or earlier you'll need to stick w/the metaclass or explicitly call this after the class statement, though the first part of the answer and the functionality of the code below are still perfectly fine):
def RegisterExposedMethods(cls):
    for name, f in vars(cls).iteritems():
        if not hasattr(f, 'registry'):
            continue
        registry = f.registry
        if registry is None:
            registry = cls.registry
        registry.register(name, cls.__name__)
    return cls
So you can do:
@RegisterExposedMethods
class MyClass(object):
    @ExposedMethod(WorkerFunctionRegistry)
    def myCoolExposedMethod(self):
        pass
and the like. This is easily extended to allowing an exposed method to have several registries, get the default registry elsewhere than from the class (it could be in the class decorator, for example, if that works better for you) and avoids getting enmeshed with metaclasses without losing any functionality. Indeed that's exactly why class decorators were introduced in Python 2.6: they can take the place of 90% or so of practical uses of metaclasses and are much simpler than custom metaclasses.
You can use a class attribute to point to the registry you want to use in the specialized metaclasses, e.g. :
class ExposedMethodDecoratorMetaclassBase(type):
    registry = None

    def __new__(mcs, name, bases, dct):
        for obj_name, obj in dct.items():
            if isinstance(obj, ExposedMethod):
                mcs.registry.register(obj_name, name)
        return type.__new__(mcs, name, bases, dct)

class WorkerExposedMethodDecoratorMetaclass(ExposedMethodDecoratorMetaclassBase):
    registry = WorkerFunctionRegistry

class RetiredExposedMethodDecoratorMetaclass(ExposedMethodDecoratorMetaclassBase):
    registry = RetiredFunctionRegistry
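A usage sketch in the question's Python 2 style (WorkerFunctionRegistry and the ExposedMethod decorator class are assumed to exist as defined in the question):

class WorkerClass(object):
    __metaclass__ = WorkerExposedMethodDecoratorMetaclass

    @ExposedMethod
    def do_work(self):
        pass  # registered in WorkerFunctionRegistry when the class is built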
Thank you both for your answers. Both helped a lot in finding a proper way to fulfil my request.
My final solution to the problem is the following:
def ExposedMethod(decoratedFunction):
    decoratedFunction.isExposed = True
    return decoratedFunction

class RegisterExposedMethods(object):
    def __init__(self, decoratedClass, registry):
        self._decoratedClass = decoratedClass
        for name, f in vars(self._decoratedClass).iteritems():
            if hasattr(f, "isExposed"):
                registry.addComponentClassToComponentFunction(name, self._decoratedClass.__name__)
        # cloak us as the original class
        self.__class__.__name__ = decoratedClass.__name__

    def __call__(self, *__args, **__kw):
        return self._decoratedClass(*__args, **__kw)

    def __getattr__(self, name):
        return getattr(self._decoratedClass, name)
On a Class I wish to expose methods from I do the following:
@RegisterExposedMethods
class MyClass(object):
    @ExposedMethod
    def myCoolExposedMethod(self):
        pass
The class decorator is now very easy to subclass. Here is an example:
class DiscoveryRegisterExposedMethods(RegisterExposedMethods):
    def __init__(self, decoratedClass):
        RegisterExposedMethods.__init__(self,
                                        decoratedClass,
                                        DiscoveryFunctionRegistry())
With that, Alex's comment
"Your ExposedMethod instances do not behave as normal instance methods ..."
is no longer true, since the method is simply tagged and not wrapped.