Python: Class factory using user input as class names

I want to add class attributes to a superclass dynamically. Furthermore, I want to create classes that inherit from this superclass dynamically, and the name of those subclasses should depend on user input.
There is a superclass "Unit", to which I can add attributes at runtime. This already works.
def add_attr(cls, name, value):
    setattr(cls, name, value)

class Unit(object):
    pass

class Archer(Unit):
    pass

myArcher = Archer()
add_attr(Unit, 'strength', 5)
print "Strength of myArcher: " + str(myArcher.strength)
Unit.strength = 2
print "Strength of myArcher: " + str(myArcher.strength)
This leads to the desired output:
Strength of myArcher: 5
Strength of myArcher: 2
But now I don't want to predefine the subclass Archer, but I'd rather let the user decide how to call this subclass. I've tried something like this:
class Meta(type, subclassname):
    def __new__(cls, subclassname, bases, dct):
        return type.__new__(cls, subclassname, Unit, dct)

factory = Meta()
factory.__new__("Soldier")
but no luck. I guess I haven't really understood what __new__ does here.
What I want as a result here is
class Soldier(Unit):
    pass
being created by the factory. And if I call the factory with the argument "Knight", I'd like a class Knight, subclass of Unit, to be created.
Any ideas? Many thanks in advance!
Bye
-Sano

To create a class from a name, use the class statement and assign the name. Observe:
def meta(name):
    class cls(Unit):
        pass
    cls.__name__ = name
    return cls
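A usage sketch (assuming the Unit class from the question is already defined; the Soldier/Knight names here just stand in for user input):
subclass_name = "Soldier"          # e.g. read from raw_input()
Soldier = meta(subclass_name)
print Soldier.__name__             # prints "Soldier"
print issubclass(Soldier, Unit)    # prints True
knight = meta("Knight")()
print isinstance(knight, Unit)     # prints True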
Now I suppose I should explain myself, and so on. When you create a class using the class statement, it is done dynamically-- it is the equivalent of calling type().
For example, the following two snippets do the same thing:
class X(object): pass
X = type("X", (object,), {})
The name of a class-- the first argument to type-- is assigned to __name__, and that's basically the end of that (the only time __name__ is itself used is probably in the default __repr__() implementation). To create a class with a dynamic name, you can in fact call type like so, or you can just change the class name afterward. The class syntax exists for a reason, though-- it's convenient, and it's easy to add to and change things later. If you wanted to add methods, for example, it would be
class X(object):
    def foo(self): print "foo"
vs.
def foo(self): print "foo"
X = type("X", (object,), {'foo': foo})
and so on. So I would advise using the class statement-- if you had known you could do so from the beginning, you likely would have done so. Dealing with type and so on is a mess.
(You should not, by the way, call type.__new__() by hand, only type())

Have a look at the type() builtin function.
knight_class = type('Knight', (Unit,), {})
First parameter: name of the new class
Second parameter: tuple of parent classes
Third parameter: dictionary of class attributes
But in your case, if the subclasses don't implement a different behaviour, maybe giving the Unit class a name attribute is sufficient.
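For example, a minimal sketch of that name-attribute alternative (a standalone Unit here; the strength default is made up purely for illustration):
class Unit(object):
    def __init__(self, name, strength=5):
        self.name = name
        self.strength = strength

soldier = Unit("Soldier")
knight = Unit("Knight", strength=7)
print soldier.name, soldier.strength   # Soldier 5
print knight.name, knight.strength     # Knight 7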

How to access "__" (double underscore) variables in methods added to a class

Background
I wish to use a meta class in order to add helper methods based on the original class. If the method I wish to add uses self.__attributeName I get an AttributeError (because of name mangling) but for an existing identical method this isn't a problem.
Code example
Here is a simplified example
# Function to be added as a method of Test
def newfunction2(self):
    """Function identical to newfunction"""
    print self.mouse
    print self._dog
    print self.__cat

class MetaTest(type):
    """Metaclass to process the original class and
    add new methods based on the original class
    """
    def __new__(meta, name, base, dct):
        newclass = super(MetaTest, meta).__new__(
            meta, name, base, dct
        )
        # Condition for adding newfunction2
        if "newfunction" in dct:
            print "Found newfunction!"
            print "Add newfunction2!"
            setattr(newclass, "newfunction2", newfunction2)
        return newclass

# Class to be modified by MetaTest
class Test(object):
    __metaclass__ = MetaTest

    def __init__(self):
        self.__cat = "cat"
        self._dog = "dog"
        self.mouse = "mouse"

    def newfunction(self):
        """Function identical to newfunction2"""
        print self.mouse
        print self._dog
        print self.__cat

T = Test()
T.newfunction()
T.newfunction2()  # AttributeError: 'Test' object has no attribute '__cat'
Question
Is there a way of adding newfunction2 that could use self.__cat?
(Without renaming self.__cat to self._cat.)
And maybe something more fundamental: why isn't self.__cat being treated the same way in both cases, given that newfunction2 is now part of Test?
Name mangling happens when the methods in a class are compiled. Attribute names like __foo are turned into _ClassName__foo, where ClassName is the name of the class the method is defined in. Note that you can use name mangling for attributes of other objects!
In your code, the name mangling in newfunction2 doesn't work because when the function is compiled, it's not part of the class. Thus the lookups of __cat don't get turned into _Test__cat the way they did in Test.__init__. You could explicitly look up the mangled version of the attribute name if you want, but it sounds like you want newfunction2 to be generic, and able to be added to multiple classes. Unfortunately, that doesn't work with name mangling.
Indeed, preventing code not defined in your class from accessing your attributes is the whole reason to use name mangling. Usually it's only worth bothering with if you're writing a proxy or mixin type and you don't want your internal-use attributes to collide with the attributes of the class you're proxying or mixing in with (which you won't know in advance).
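If you are willing to tie the helper to one specific class, a sketch of that explicit lookup could look like this (hard-coding the _Test__cat name that Test.__init__ actually stores, so it is not generic):
def newfunction2(self):
    """Like newfunction, but spells out the mangled name directly."""
    print self.mouse
    print self._dog
    print self._Test__cat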
To answer both of your questions:
Inside newfunction2 you will need to refer to self._Test__cat instead of self.__cat, thanks to the name mangling rule.
Python documentation:
This mangling is done without regard to the syntactic position of the
identifier, as long as it occurs within the definition of a class.
Let me break it down for you: it doesn't matter where the interpreter is when it encounters a mangled name. The name will only be mangled if it occurs in the definition of a class, which in your case it does not, since newfunction2 isn't defined directly "under" a class definition. So when it reads self.__cat there, it keeps it as self.__cat rather than textually replacing it with self._Test__cat, because the function isn't defined inside the Test class.
You can use <Test instance>._Test__cat to access the __cat attribute from the Test class. (where <Test instance> is replaced by self or any other instance of the Test class)
You can learn more in the Python documentation:
class B:
    def __init__(self):
        self.__private = 0

    def __private_method(self):
        '''A private method via inheritance'''
        return ('{!r}'.format(self))

    def internal_method(self):
        return ('{!s}'.format(self))

class C(B):
    def __init__(self):
        super().__init__()
        self.__private = 1

    def __private_method(self):
        return 'Am from class C'

c = C()
print(c.__dict__)
b = B()
print(b.__dict__)
print(b._B__private)
print(c._C__private_method())
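For reference, when run under Python 3 (note the no-argument super() call), this snippet should print something like:
{'_B__private': 0, '_C__private': 1}
{'_B__private': 0}
0
Am from class C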

How to let inheritance take precedence over class properties when using a metaclass?

I have posted a similar question before but I interpreted my problem wrong, so I flagged it for deletion and came forward with the new and correct problem.
My general goal is the following: I want to have a property on a class, so I implement it on a metaclass using a property, as suggested in my question Implementing a class property that preserves the docstring. Additionally, I want users to be able to subclass the base class and override this property with static values. The thing here is that if the user does not provide an attribute, I want to calculate a sensible default, and since all configuration is done at the class level, I need a property at the class level.
A basic example will show this:
class Meta(type):
    @property
    def test(self):
        return "Meta"

class Test(object):
    __metaclass__ = Meta

class TestSub(Test):
    test = "TestSub"

class TestSubWithout(Test):
    pass

print(TestSub.test, TestSubWithout.test)
Here is what it prints:
('Meta', 'Meta')
And what I want it to print:
('TestSub', 'Meta')
Essentially, on TestSub the user overrides the test attribute himself. Now, TestSub is the correct output, since the user provided it. In the case of TestSubWithout, the user instead did not provide the attribute and so the default should be calculated (of course, the real calculation has more than just a static return, this is just to show it).
I know what happens here: First the attribute test is assigned to TestSub and then the metaclass overrides it. But I don't know how to prevent it in a clean and pythonic way.
A property is a "data descriptor", which means that it takes precedence in the attribute search path over values stored in an instance dictionary (or for a class, the instance dictionaries of the other classes in its MRO).
Instead, write your own non-data descriptor that works like property:
class NonDataProperty(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, type):
        if obj:
            return self.fget(obj)
        else:
            return self

    # don't define a __set__ method!
Here's a test of it:
class MyMeta(type):
    @NonDataProperty
    def x(self):
        return "foo"

class Foo(metaclass=MyMeta):
    pass  # does not override x

class Bar(metaclass=MyMeta):
    x = "bar"  # does override x

class Baz:
    x = "baz"

class Baz_with_meta(Baz, metaclass=MyMeta):
    pass  # inherited x value will take precedence over non-data descriptor

print(Foo.x)           # prints "foo"
print(Bar.x)           # prints "bar"
print(Baz_with_meta.x) # prints "baz"
The cleanest way I could come up with is creating a subclass of property that handles this case:
class meta_property(property):
    def __init__(self, fget, fset=None, fdel=None, doc=None):
        self.key = fget.__name__
        super(meta_property, self).__init__(fget, fset, fdel, doc)

    def __get__(self, obj, type_):
        if self.key in obj.__dict__:
            return obj.__dict__[self.key]
        else:
            return super(meta_property, self).__get__(obj, type_)
This handles the case by storing the name of the function and returning the overridden value if it is present. This seems like an okayish solution but I am open to more advanced suggestions.
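Wired into the example from the question, a usage sketch (the only change is swapping the plain property on the metaclass for meta_property):
class Meta(type):
    @meta_property
    def test(self):
        return "Meta"

class Test(object):
    __metaclass__ = Meta

class TestSub(Test):
    test = "TestSub"

class TestSubWithout(Test):
    pass

print(TestSub.test, TestSubWithout.test)   # ('TestSub', 'Meta')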

Is there a way to apply a decorator to a Python method that needs information about the class?

When you decorate a method, it is not yet bound to the class, and therefore doesn't have the im_class attribute yet. I am looking for a way to get the information about the class inside the decorator. I tried this:
import types

def decorator(method):
    def set_signal(self, name, value):
        print name
        if name == 'im_class':
            print "I got the class"
    method.__setattr__ = types.MethodType(set_signal, method)
    return method

class Test(object):
    @decorator
    def bar(self, foo):
        print foo
But it doesn't print anything.
I can imagine doing this:
class Test(object):
    @decorator(klass=Test)
    def bar(self, foo):
        print foo
But if I can avoid it, it would make my day.
__setattr__ is only called on explicit object.attribute = assignments; building a class does not use attribute assignment but builds a dictionary (Test.__dict__) instead.
To access the class you have a few different options though:
Use a class decorator instead; it'll be passed the completed class after it has been built, and you can decorate individual methods on that class by replacing them (decorated) in the class. You could use a combination of a function decorator and a class decorator to mark which methods are to be decorated:
def methoddecoratormarker(func):
    func._decorate_me = True
    return func

def realmethoddecorator(func):
    # do something with func.
    # Note: it is still an unbound function here, not a method!
    return func

def classdecorator(klass):
    # klass.__dict__ is a read-only proxy, so replace entries via setattr()
    for name, item in list(klass.__dict__.items()):
        if getattr(item, '_decorate_me', False):
            setattr(klass, name, realmethoddecorator(item))
    return klass
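For completeness, a usage sketch of how the two pieces could be combined (class decorators require Python 2.6 or later; the Test class here is just an example):
@classdecorator
class Test(object):
    @methoddecoratormarker
    def bar(self, foo):
        print foo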
You could use a metaclass instead of a class decorator to achieve the same, of course.
Cheat, and use sys._getframe() to retrieve the class from the calling frame:
import sys

def methoddecorator(func):
    callingframe = sys._getframe(1)
    classname = callingframe.f_code.co_name
Note that all you can retrieve is the name of the class; the class itself is still being built at this time. You can add items to callingframe.f_locals (a mapping) and they'll be made part of the new class object.
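A small sketch extending the snippet above (the _defined_in attribute name is made up here; only the class name, not the class object, is available at this point):
import sys

def methoddecorator(func):
    # The caller's frame is the class body that is currently executing.
    callingframe = sys._getframe(1)
    func._defined_in = callingframe.f_code.co_name
    return func

class Test(object):
    @methoddecorator
    def bar(self, foo):
        print foo

print Test.bar._defined_in   # the string 'Test'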
Access self whenever the method is called. self is a reference to the instance after all, and self.__class__ is going to be, at the very least, a sub-class of the original class the function was defined in.
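For instance, a sketch that defers the class lookup until call time (the wrapper shown here is illustrative, not part of the original question):
import functools

def decorator(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        print "called on an instance of %r" % self.__class__
        return method(self, *args, **kwargs)
    return wrapper

class Test(object):
    @decorator
    def bar(self, foo):
        print foo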
My strict answer would be: It's not possible, because the class does not yet exist when the decorator is executed.
The longer answer would depend on your very exact requirements. As I wrote, you cannot access the class if it does not yet exist. One solution would be to mark the decorated method to be "transformed" later. Then use a metaclass or class decorator to apply your modifications after the class has been created.
Another option involves some magic: look at the implementation of implements() in zope.interface. It has some access to information about the class that is just being parsed. I don't know if it will be enough for your use case.
You might want to take a look at descriptors. They let you implement a __get__ that is used when an attribute is accessed, and can return different things depending on the object and its type.
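A minimal sketch of such a descriptor (the ClassAwareMethod name is made up for illustration):
class ClassAwareMethod(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # objtype is the class the attribute was looked up on
        print "accessed via %r" % objtype
        if obj is None:
            return self.func
        return self.func.__get__(obj, objtype)

class Test(object):
    @ClassAwareMethod
    def bar(self, foo):
        print foo

t = Test()
t.bar("hi")   # first reports the class, then prints "hi"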
Use method decorators to add some marker attributes to the interesting methods, and use a metaclass which iterates over the methods, finds the marker attributes, and does the logic. The metaclass code is run when the class is created, so it has a reference to the newly created class.
class MyMeta(type):
    def __new__(mcs, name, bases, dct):
        cls = super(MyMeta, mcs).__new__(mcs, name, bases, dct)
        # iterate over dir(cls), find methods having .is_decorated, act on them
        return cls

def decorator(f):
    f.is_decorated = True
    return f

class MyBase(object):
    __metaclass__ = MyMeta

class MyClass(MyBase):
    @decorator
    def bar(self, foo):
        print foo
If you are worried that the programmer of MyClass might forget to use MyBase, you can forcibly set the metaclass in the decorator by examining the globals dictionary of the caller's stack frame (sys._getframe()).

“Can't instantiate abstract class … with abstract methods” on class that shouldn't have any abstract method

Take the following minimal example:
import abc

class FooClass(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def FooMethod(self):
        raise NotImplementedError()

def main():
    derived_type = type('Derived', (FooClass,), {})

    def BarOverride(self):
        print 'Hello, world!'

    derived_type.FooMethod = BarOverride
    instance = derived_type()
Running main() gets you:
TypeError: Can't instantiate abstract class Derived with abstract methods FooMethod
(The exception occurs on the instance = derived_type() line.)
But FooMethod shouldn't be abstract: I've overridden it with BarOverride. So, why is this raising exceptions?
Disclaimer: Yes, I could use the explicit class syntax, and accomplish the exact same thing. (And even better, I can make it work!) But this is a minimal test case, and the larger example is dynamically creating classes. :-) And I'm curious as to why this doesn't work.
Edit: And to prevent the other obvious non-answer: I don't want to pass BarOverride in the third argument to type: In the real example, BarOverride needs to have derived_type bound to it. It is easier to do this if I can define BarOverride after the creation of derived_type. (If I can't do this, then why?)
Because the docs say so:
Dynamically adding abstract methods to a class, or attempting to
modify the abstraction status of a method or class once it is created,
are not supported. The abstractmethod() only affects subclasses
derived using regular inheritance; “virtual subclasses” registered
with the ABC’s register() method are not affected.
A metaclass is only called when a class is defined. When abstractmethod has marked a class as abstract that status won't change later.
Jochen is right; the abstract methods are recorded at class creation and won't be modified just because you reassign an attribute.
You can manually remove it from the list of abstract methods by doing
DerivedType.__abstractmethods__ = frozenset()
or
DerivedType.__abstractmethods__ = frozenset(
    elem for elem in DerivedType.__abstractmethods__ if elem != 'FooMethod')
in addition to the setattr call, so that the class no longer thinks FooMethod is abstract.
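Putting those steps together with the question's example, a sketch might look like this (assuming FooClass from the question is already defined):
derived_type = type('Derived', (FooClass,), {})

def BarOverride(self):
    print 'Hello, world!'

derived_type.FooMethod = BarOverride
derived_type.__abstractmethods__ = frozenset()   # nothing left to consider abstract

instance = derived_type()
instance.FooMethod()   # prints 'Hello, world!'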
I know this topic is really old but... That is really a nice question.
It doesn't work because abc can only check for abstract methods while the type is being created, that is, when type('Derived', (FooClass,), {}) is running. Any setattr done after that is not seen by abc.
So setattr won't work, but...
Your problem of addressing the name of a class that wasn't previously declared or defined looks solvable:
I wrote a little metaclass that lets you use a placeholder "clazz" for accessing any class that will eventually get the method you are writing outside a class definition.
That way you won't get a TypeError from abc anymore, since you can now define your method BEFORE creating your type, and then pass it to type in the dict argument. Then abc will see it as a proper method override.
Aaand, with the new metaclass you can refer to the class object during that method.
And this is super, because now you can use super! =P
I can guess you were worried about that too...
Take a look:
import abc
import inspect

clazz = type('clazz', (object,), {})()

def clazzRef(func_obj):
    func_obj.__hasclazzref__ = True
    return func_obj

class MetaClazzRef(type):
    """Makes the clazz placeholder work.

    Checks which of your functions or methods use the decorator clazzRef
    and swaps its global reference so that "clazz" resolves to the
    desired class, that is, the one where the method is set or defined.
    """
    methods = {}

    def __new__(mcs, name, bases, dict):
        ret = super(MetaClazzRef, mcs).__new__(mcs, name, bases, dict)
        for (k, f) in dict.items():
            if getattr(f, '__hasclazzref__', False):
                if inspect.ismethod(f):
                    f = f.im_func
                if inspect.isfunction(f):
                    for (var, value) in f.func_globals.items():
                        if value is clazz:
                            f.func_globals[var] = ret
        return ret

class MetaMix(abc.ABCMeta, MetaClazzRef):
    pass

class FooClass(object):
    __metaclass__ = MetaMix

    @abc.abstractmethod
    def FooMethod(self):
        print 'Ooops...'
        # raise NotImplementedError()

def main():
    @clazzRef
    def BarOverride(self):
        print "Hello, world! I'm a %s but this method is from class %s!" % (type(self), clazz)
        super(clazz, self).FooMethod()  # Now I have SUPER!!!

    derived_type = type('Derived', (FooClass,), {'FooMethod': BarOverride})
    instance = derived_type()
    instance.FooMethod()

    class derivedDerived(derived_type):
        def FooMethod(self):
            print 'I inherit from derived.'
            super(derivedDerived, self).FooMethod()

    instance = derivedDerived()
    instance.FooMethod()

main()
The output is:
Hello, world! I'm a <class 'clazz.Derived'> but this method is from class <class 'clazz.Derived'>!
Ooops...
I inherit from derived.
Hello, world! I'm a <class 'clazz.derivedDerived'> but this method is from class <class 'clazz.Derived'>!
Ooops...
Well, if you must do it this way, then you could just pass a dummy dict {'FooMethod':None} as the third argument to type. This allows derived_type to satisfy ABCMeta's requirement that all abstract methods be overridden. Later on you can supply the real FooMethod:
def main():
    derived_type = type('Derived', (FooClass,), {'FooMethod': None})

    def BarOverride(self):
        print 'Hello, world!'

    setattr(derived_type, 'FooMethod', BarOverride)
    instance = derived_type()

How to change baseclass

I have a class which is derived from a base class, and it has many, many lines of code,
e.g.
class AutoComplete(TextCtrl):
    .....
What I want to do is change the baseclass so that it works like
class AutoComplete(PriceCtrl):
    .....
I have a use for both types of AutoComplete and maybe would like to add more base classes, so how can I do it dynamically?
Composition would have been a solution, but I do not want to modify the code a lot.
Any simple solutions?
You could have a factory for your classes:
def completefactory(baseclass):
    class AutoComplete(baseclass):
        pass
    return AutoComplete
And then use:
TextAutoComplete = completefactory(TextCtrl)
PriceAutoComplete = completefactory(PriceCtrl)
On the other hand depending on what you want to achieve and how your classes look, maybe AutoComplete is meant to be a mixin, so that you would define TextAutoComplete with:
class TextAutocomplete(TextCtrl, AutoComplete):
    pass
You could use multiple inheritance for this:
class AutoCompleteBase(object):
    # code for your class
    # remember to call base implementation with super:
    # super(AutoCompleteBase, self).some_method()
    pass

class TextAutoComplete(AutoCompleteBase, TextCtrl):
    pass

class PriceAutoComplete(AutoCompleteBase, PriceCtrl):
    pass
Also, there's the option of a metaclass:
class BasesToSeparateClassesMeta(type):
    """Metaclass to create a separate childclass for each base.

    NB: doesn't create a class but a list of classes."""
    def __new__(self, name, bases, dct):
        classes = []
        for base in bases:
            cls = type.__new__(self, name, (base,), dct)
            # Need to init explicitly because not returning a class
            type.__init__(cls, name, (base,), dct)
            classes.append(cls)
        return classes

class autocompletes(TextCtrl, PriceCtrl):
    __metaclass__ = BasesToSeparateClassesMeta
    # Rest of the code

TextAutoComplete, PriceAutoComplete = autocompletes
But I'd still suggest the class factory approach already suggested, one level of indentation really isn't that big of a deal.
You could modify the __bases__ tuple. For example you could add another baseclass:
AutoComplete.__bases__ += (PriceCtrl,)
But in general I would try to avoid such hacks, it quickly creates a terrible mess.
