I have a class that is derived from a base class and has many, many lines of code,
e.g.
class AutoComplete(TextCtrl):
    .....
What I want to do is change the baseclass so that it works like
class AutoComplete(PriceCtrl):
    .....
I have use for both types of AutoComplete, and I may want to add more base classes later, so how can I do it dynamically?
Composition would have been a solution, but I do not want to modify the code much.
Any simple solutions?
You could have a factory for your classes:
def completefactory(baseclass):
    class AutoComplete(baseclass):
        pass
    return AutoComplete
And then use:
TextAutoComplete = completefactory(TextCtrl)
PriceAutoComplete = completefactory(PriceCtrl)
On the other hand, depending on what you want to achieve and how your classes look, maybe AutoComplete is meant to be a mixin, so that you would define TextAutoComplete with:
class TextAutocomplete(TextCtrl, AutoComplete):
    pass
You could use multiple inheritance for this:
class AutoCompleteBase(object):
    # code for your class
    # remember to call the base implementation with super:
    # super(AutoCompleteBase, self).some_method()
    pass

class TextAutoComplete(AutoCompleteBase, TextCtrl):
    pass

class PriceAutoComplete(AutoCompleteBase, PriceCtrl):
    pass
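To make the super() chain concrete, here is a hedged sketch using a hypothetical on_text_changed method (the real wx method names may differ); the subclass definitions above stay the same:

class AutoCompleteBase(object):
    def on_text_changed(self, value):
        # shared autocomplete logic goes here, then hand off to the next
        # class in the MRO (TextCtrl or PriceCtrl, depending on the subclass)
        return super(AutoCompleteBase, self).on_text_changed(value)

Because TextAutoComplete's MRO is (TextAutoComplete, AutoCompleteBase, TextCtrl, ...), the super() call forwards to TextCtrl's implementation.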
Also, there's the option of a metaclass:
class BasesToSeparateClassesMeta(type):
    """Metaclass to create a separate child class for each base.
    NB: doesn't create a class but a list of classes."""
    def __new__(self, name, bases, dct):
        classes = []
        for base in bases:
            cls = type.__new__(self, name, (base,), dct)
            # Need to init explicitly because we're not returning a single class
            type.__init__(cls, name, (base,), dct)
            classes.append(cls)
        return classes

class autocompletes(TextCtrl, PriceCtrl):
    __metaclass__ = BasesToSeparateClassesMeta
    # Rest of the code

TextAutoComplete, PriceAutoComplete = autocompletes
But I'd still suggest the class factory approach already proposed; one level of indentation really isn't that big of a deal.
You could modify the __bases__ tuple. For example, you could add another base class:
AutoComplete.__bases__ += (PriceCtrl,)
But in general I would try to avoid such hacks; they quickly create a terrible mess.
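As a toy illustration with hypothetical classes (not the wx ones), swapping a base at runtime looks like this; note that CPython only allows the assignment when the old and new bases are layout-compatible:

class Base(object):
    def greet(self):
        return "Base"

class AltBase(Base):
    def greet(self):
        return "AltBase"

class Child(Base):
    pass

print Child().greet()         # prints "Base"
Child.__bases__ = (AltBase,)  # rebind the bases at runtime
print Child().greet()         # now prints "AltBase"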
When you create a class, one of the inputs is a tuple of base classes to inherit from.
For example,
class K(str, list, dict):
    pass
...or...
Klass = type("Klass", (Class1, Class2, Class3), dict())
I want a class to inherit from all of the same classes that another class does. inspect.getmro() returns something far more complicated than a simple tuple where each element is a class.
Suppose that we wanted the same method resolution order.
def mro2tuple(_mro):
    # MAGIC HAPPENS HERE
    return tuppy
How do we get a tuple of base classes from the method resolution order? I was wondering if I could write something like the following:
import inspect

Klass1_mro = inspect.getmro(Klass1)
bases = mro2tuple(Klass1_mro)
Klass2 = type("Klass2", bases, dict())
You shouldn't try to inherit from the MRO; just inheriting from the bases of the other class should be enough.
new_class = type('Newclass', old_class.__bases__, {})
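As a quick sketch with made-up classes, this is what reusing another class's direct bases looks like:

class Mixin(object):
    def hello(self):
        return "hello"

class Klass1(Mixin, dict):
    pass

# Build Klass2 from Klass1's direct bases, not its full MRO.
Klass2 = type("Klass2", Klass1.__bases__, {})

print Klass2.__bases__  # (<class '__main__.Mixin'>, <type 'dict'>)
print Klass2().hello()  # prints "hello"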
I have a setup like this:
class Meta(type):
    @property
    def test(self):
        return "Meta"

class Test(object):
    __metaclass__ = Meta
    test = "Test"

class TestSub(object):
    test = "TestSub"

print(Test.test, TestSub.test)
Which yields the following output:
('Meta', 'TestSub')
What I would have expected would be:
('Test', 'TestSub')
I know why that happens: test is assigned on Test before the metaclass Meta is executed. But I have no idea about how to implement a clean way of changing this behavior. I know I could hack around in __init__ and __new__ of the Meta class, but that seems dirty because I'd have to modify it for each new property. So is there a clean way (like writing a new decorator) to get this?
I also don't like the idea of creating an intermediate class just to work around this, but would accept it as a last resort.
In fact, your Test class's test attribute is not overwritten. It's still there:
>>> Test.__dict__['test']
'Test'
However, doing Test.test doesn't access it, because, according to the documentation:
If an instance’s dictionary has an entry with the same name as a data descriptor, the data descriptor takes precedence.
property creates a data descriptor. So by using a property in the metaclass, you block access to the ordinary class variable on the class.
How best to solve this is not clear, because it's not clear what you're trying to accomplish with this structure. Given the code you posted, it's not clear why you're using a metaclass at all. If you just want to override class attributes on subclasses, you can do it with simple inheritance, as described in unutbu's answer.
If that's what you want, don't use a metaclass. Use inheritance:
class Base(object):
    @property
    def test(self):
        return "Meta"

class Test(Base):
    test = "Test"

class TestSub(object):
    test = "TestSub"

print(Test.test, TestSub.test)
yields
('Test', 'TestSub')
I'm not sure if this is what you need:
class Meta(type):
    @property
    def test(self):
        return 'Meta'

    def __new__(cls, name, bases, dct):
        if 'test' not in dct:
            dct['test'] = Meta.test
        return super(Meta, cls).__new__(cls, name, bases, dct)

class Test(object):
    __metaclass__ = Meta
    test = "Test"

class TestSub(object):
    test = "TestSub"

class TestWithoutTest(object):
    __metaclass__ = Meta

print(Test.test, TestSub.test, TestWithoutTest.test)
How can you access class values from within the top-level class scope? What I mean by that is, how do you do something like:
class FooClass(object):
    zeroith_base = __bases__[0]
    .
    .
    .
What I'm specifically trying to do in this case is derive the metaclasses of all base classes in order to dynamically generate a metaclass that subclasses all the base classes' metaclasses, to get past metaclass conflict problems. I found http://code.activestate.com/recipes/204197-solving-the-metaclass-conflict/, and while all the concepts make sense to me, the actual code of the recipe is just beyond my ability to follow. I don't want to use code I can't understand, though, so instead I tried to implement my own, more rudimentary system, but I'm stuck at square one trying to inspect the class object during creation.
You cannot inspect a class prior to its creation, and it has not been created until the suite of statements that forms the class body has finished executing. The first time you have access to this information is in the __new__ method of the metaclass creating the class in question, or more generally in whatever callable is creating it, which technically need not be a metaclass or a class at all (as in the example below).
Here is a very rough prototype that probably does not work in all cases, but works in the simple case, and is probably easier to follow than the recipe.
def meta_class_synthesize(name, bases, attrmap):
    seen = set()
    seen_add = seen.add
    metas = [type(base) for base in bases]
    metas = tuple([
        meta for meta in metas
        if meta is not type and meta not in seen and not seen_add(meta)])
    if not metas:
        return type(name, bases, attrmap)
    elif len(metas) == 1:
        return metas[0](name, bases, attrmap)
    newmeta_name = "__".join(meta.__name__ for meta in metas)
    newmeta = type(newmeta_name, metas, {})
    return newmeta(name, bases, attrmap)
class M_A(type):
    pass

class M_B(type):
    pass

class A:
    __metaclass__ = M_A

class B:
    __metaclass__ = M_B

class C(A, B):
    __metaclass__ = meta_class_synthesize

print type(C)  # prints "<class '__main__.M_A__M_B'>"
You'll find that __bases__ is not part of the class namespace. The class namespace is passed to the metaclass as the third parameter; the bases are passed as the second parameter. They are totally separate until the class is created.
So what you'll need to do is write a metaclass that synthesizes the metaclass you want, then uses that to create the class. I have no idea if that'll actually work, but I can't see any reason why it wouldn't.
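For what it's worth, a metaclass version of the same idea might look roughly like this; it reuses M_A, M_B, A and B from the previous answer and is only a sketch, not tested against all the corner cases the ActiveState recipe handles:

class SynthesizeMeta(type):
    """Combine the metaclasses of all bases, then delegate class creation."""
    def __new__(mcs, name, bases, dct):
        metas = []
        for base in bases:
            meta = type(base)
            if meta is not type and meta not in metas:
                metas.append(meta)
        if not metas:
            return type(name, bases, dct)
        if len(metas) == 1:
            return metas[0](name, bases, dct)
        combined = type("__".join(m.__name__ for m in metas), tuple(metas), {})
        return combined(name, bases, dct)

class D(A, B):
    __metaclass__ = SynthesizeMeta

print type(D)  # prints "<class '__main__.M_A__M_B'>"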
For putting methods of various classes into a global registry I'm using a decorator with a metaclass. The decorator tags the methods, and the metaclass puts them into the registry:
class ExposedMethod(object):
    def __init__(self, decoratedFunction):
        self._decoratedFunction = decoratedFunction

    def __call__(__self, *__args, **__kw):
        return __self._decoratedFunction(*__args, **__kw)

class ExposedMethodDecoratorMetaclass(type):
    def __new__(mcs, name, bases, dct):
        for obj_name, obj in dct.iteritems():
            if isinstance(obj, ExposedMethod):
                WorkerFunctionRegistry.addWorkerToWorkerFunction(obj_name, name)
        return type.__new__(mcs, name, bases, dct)

class MyClass(object):
    __metaclass__ = DiscoveryExposedMethodDecoratorMetaclass

    @ExposeDiscoveryMethod
    def myCoolExposedMethod(self):
        pass
I've now come to the point where two function registries are needed. My first thought was to subclass the metaclass and put the other registry in; for that, only the __new__ method would have to be rewritten.
Since rewriting means redundant code, this is not what I really want. So it would be nice if anyone could name a way to put an attribute inside the metaclass that can be read when __new__ is executed. With that, the right registry could be used without having to rewrite __new__.
Your ExposedMethod instances do not behave as normal instance methods but rather like static methods -- the fact that you're giving one of them a self argument hints that you're not aware of that. You may need to add a __get__ method to the ExposedMethod class to make it a descriptor, just like function objects are -- see here for more on descriptors.
But there is a much simpler way, since functions can have attributes...:
def ExposedMethod(registry=None):
    def decorate(f):
        f.registry = registry
        return f
    return decorate
and in a class decorator (simpler than a metaclass! -- requires Python 2.6 or better; in 2.5 or earlier you'll need to stick with the metaclass or explicitly call this after the class statement, though the first part of the answer and the functionality of the code below are still perfectly fine):
def RegisterExposedMethods(cls):
    for name, f in vars(cls).iteritems():
        if not hasattr(f, 'registry'):
            continue
        registry = f.registry
        if registry is None:
            registry = cls.registry
        registry.register(name, cls.__name__)
    return cls
So you can do:
@RegisterExposedMethods
class MyClass(object):
    @ExposedMethod(WorkerFunctionRegistry)
    def myCoolExposedMethod(self):
        pass
and the like. This is easily extended to allow an exposed method to have several registries, or to get the default registry from somewhere other than the class (it could live in the class decorator, for example, if that works better for you), and it avoids getting enmeshed with metaclasses without losing any functionality. Indeed that's exactly why class decorators were introduced in Python 2.6: they can take the place of 90% or so of the practical uses of metaclasses and are much simpler than custom metaclasses.
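For example, the fallback to a class-level default registry could look like this hypothetical sketch (WorkerFunctionRegistry and its register method are assumed to exist, as elsewhere in this answer):

@RegisterExposedMethods
class MyOtherClass(object):
    registry = WorkerFunctionRegistry  # class-level default registry

    @ExposedMethod()  # no registry passed, so the class default is used
    def anotherExposedMethod(self):
        pass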
You can use a class attribute to point to the registry you want to use in the specialized metaclasses, e.g.:
class ExposedMethodDecoratorMetaclassBase(type):
    registry = None

    def __new__(mcs, name, bases, dct):
        for obj_name, obj in dct.items():
            if isinstance(obj, ExposedMethod):
                mcs.registry.register(obj_name, name)
        return type.__new__(mcs, name, bases, dct)

class WorkerExposedMethodDecoratorMetaclass(ExposedMethodDecoratorMetaclassBase):
    registry = WorkerFunctionRegistry

class RetiredExposedMethodDecoratorMetaclass(ExposedMethodDecoratorMetaclassBase):
    registry = RetiredFunctionRegistry
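Hypothetical usage, assuming the ExposedMethod tag class from the question and a registry object with a register method as above:

class MyWorkerClass(object):
    __metaclass__ = WorkerExposedMethodDecoratorMetaclass

    @ExposedMethod
    def myCoolExposedMethod(self):
        pass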
Thank you both for your answers. Both helped a lot in finding a proper way to handle my request.
My final solution to the problem is the following:
def ExposedMethod(decoratedFunction):
    decoratedFunction.isExposed = True
    return decoratedFunction

class RegisterExposedMethods(object):
    def __init__(self, decoratedClass, registry):
        self._decoratedClass = decoratedClass
        for name, f in vars(self._decoratedClass).iteritems():
            if hasattr(f, "isExposed"):
                registry.addComponentClassToComponentFunction(name, self._decoratedClass.__name__)
        # cloak us as the original class
        self.__class__.__name__ = decoratedClass.__name__

    def __call__(self, *__args, **__kw):
        return self._decoratedClass(*__args, **__kw)

    def __getattr__(self, name):
        return getattr(self._decoratedClass, name)
On a class I wish to expose methods from, I do the following:
@RegisterExposedMethods
class MyClass(object):
    @ExposedMethod
    def myCoolExposedMethod(self):
        pass
The class decorator is now very easy to subclass. Here is an example:
class DiscoveryRegisterExposedMethods(RegisterExposedMethods):
    def __init__(self, decoratedClass):
        RegisterExposedMethods.__init__(self,
                                        decoratedClass,
                                        DiscoveryFunctionRegistry())
With that, Alex's comment
Your ExposedMethod instances do not behave as normal instance methods ...
is no longer true, since the method is simply tagged and not wrapped.
I want to add class attributes to a superclass dynamically. Furthermore, I want to create classes that inherit from this superclass dynamically, and the names of those subclasses should depend on user input.
There is a superclass "Unit", to which I can add attributes at runtime. This already works.
def add_attr(cls, name, value):
    setattr(cls, name, value)

class Unit(object):
    pass

class Archer(Unit):
    pass

myArcher = Archer()
add_attr(Unit, 'strength', 5)
print "Strength of myArcher: " + str(myArcher.strength)

Unit.strength = 2
print "Strength of myArcher: " + str(myArcher.strength)
This leads to the desired output:
Strength of myArcher: 5
Strength of myArcher: 2
But now I don't want to predefine the subclass Archer; rather, I'd like to let the user decide what to call the subclass. I've tried something like this:
class Meta(type, subclassname):
    def __new__(cls, subclassname, bases, dct):
        return type.__new__(cls, subclassname, Unit, dct)

factory = Meta()
factory.__new__("Soldier")
but no luck. I guess I haven't really understood what __new__ does here.
What I want as a result here is
class Soldier(Unit):
    pass
being created by the factory. And if I call the factory with the argument "Knight", I'd like a class Knight, subclass of Unit, to be created.
Any ideas? Many thanks in advance!
Bye
-Sano
To create a class from a name, use the class statement and assign the name. Observe:
def meta(name):
    class cls(Unit):
        pass
    cls.__name__ = name
    return cls
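Hypothetical usage, with the Unit class from the question:

Soldier = meta("Soldier")
print Soldier.__name__           # prints "Soldier"
print issubclass(Soldier, Unit)  # prints "True"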
Now I suppose I should explain myself, and so on. When you create a class using the class statement, it is done dynamically -- it is equivalent to calling type().
For example, the following two snippets do the same thing:
class X(object): pass
X = type("X", (object,), {})
The name of a class -- the first argument to type -- is assigned to __name__, and that's basically the end of that (the only time __name__ is itself used is probably in the default __repr__() implementation). To create a class with a dynamic name, you can in fact call type like that, or you can just change the class name afterward. The class syntax exists for a reason, though -- it's convenient, and it's easy to add to and change things later. If you wanted to add methods, for example, it would be
class X(object):
    def foo(self): print "foo"

versus

def foo(self): print "foo"
X = type("X", (object,), {'foo': foo})
and so on. So I would advise using the class statement -- if you had known you could do so from the beginning, you likely would have done so. Dealing with type and so on is a mess.
(You should not, by the way, call type.__new__() by hand, only type())
Have a look at the type() builtin function.
knight_class = type('Knight', (Unit,), {})
First parameter: name of the new class
Second parameter: tuple of parent classes
Third parameter: dictionary of class attributes
But in your case, if the subclasses don't implement a different behaviour, maybe giving the Unit class a name attribute is sufficient.
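A minimal sketch of that simpler alternative, if the subclasses would only differ by name:

class Unit(object):
    def __init__(self, name):
        self.name = name

knight = Unit("Knight")
print knight.name  # prints "Knight"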