I have the following:
class X(object):
    def __init__(self, *args, **kwargs):
        type(self).label_type = xyzzy(self.__class__.__name__)

class Y(X):
    def __init__(self, *args, **kwargs):
        super(Y, self).__init__(*args, **kwargs)  # super() already binds self
When I create a new instance of Y, a class variable called label_type is created using Y, not X. This is good and works fine.
But it burns me that I have to wait until there's an instance of Y before the class variable is created. How can I set label_type when class Y is compiled, not when it is instantiated?
EDIT - I have numerous subclasses that are derived from X. I want to push as much of the work into X as possible.
You can use a metaclass to do this kind of thing. Here's a trivial example to demonstrate:
class NameLabelMeta(type):
    def __new__(meta, name, bases, dct):
        """Create a new class and label it with its name."""
        cls = super(NameLabelMeta, meta).__new__(meta, name, bases, dct)
        cls.label_type = name  # or xyzzy(name), or whatever
        return cls
In use:
>>> class X(object):
...     __metaclass__ = NameLabelMeta  # 2.x syntax
...
>>> class Y(X):
...     pass
...
>>> Y.label_type
'Y'
Note that Y transparently inherits the metaclass from X, so doesn't need to implement anything at all for the correct behaviour to occur. It's also much more efficient, as it doesn't happen again every time you create a new instance of the class.
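For reference, the same metaclass works in Python 3 as well, where the `metaclass=` keyword argument replaces `__metaclass__`. A minimal sketch, using `name` directly in place of `xyzzy(name)`:

```python
class NameLabelMeta(type):
    def __new__(meta, name, bases, dct):
        # Create the class, then label it with its own name
        cls = super().__new__(meta, name, bases, dct)
        cls.label_type = name  # substitute xyzzy(name) as needed
        return cls

class X(metaclass=NameLabelMeta):
    pass

class Y(X):
    pass

print(X.label_type)  # prints X
print(Y.label_type)  # prints Y
```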
Do you need to set the label_type dynamically? Why not use a "static variable" such as
class X(object):
    LABEL_TYPE = 'X'

class Y(X):
    pass

print Y().LABEL_TYPE  # Prints 'X'
I am trying to find a good way of having a method return a new instance of its class, in a way that still works when the class is extended.
I have a class (classA) which, among other methods, has a method that returns a new classA object after some processing:
class classA:
    def __init__(self, params): ...

    def methodX(self, **kwargs):
        # ... process data ...
        return classA(new_params)
Now, I am extending this class to another classB. I need methodX to do the same, but return classB this time, instead of classA
class classB(classA):
    def __init__(self, params):
        super().__init__(params)
        self.newParams = XYZ

    def methodX(self, **kwargs):
        ???
This may be something trivial but I simply cannot figure it out. In the end I don't want to rewrite methodX each time the class gets extended.
Thank you for your time.
Use the __class__ attribute like this:
class A:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def methodX(self, **kwargs):
        # do stuff with kwargs
        return self.__class__(**kwargs)

    def __repr__(self):
        return f'{self.__class__}({self.kwargs})'

class B(A):
    pass
a = A(foo='bar')
ax = a.methodX(gee='whiz')
b = B(yee='haw')
bx = b.methodX(cool='beans')
print(a)
print(ax)
print(b)
print(bx)
class classA:
    def __init__(self, x):
        self.x = x

    def createNew(self, y):
        t = type(self)
        return t(y)

class classB(classA):
    def __init__(self, params):
        super().__init__(params)
a = classA(1)
newA = a.createNew(2)
b = classB(1)
newB = b.createNew(2)
print(type(newB))
# <class '__main__.classB'>
I want to propose what I think is the cleanest approach, albeit similar to existing answers. The problem feels like a good fit for a class method:
class A:
    @classmethod
    def method_x(cls, **kwargs):
        return cls(<init params>)
Using the @classmethod decorator ensures that the first input (traditionally named cls) will refer to the class to which the method belongs, rather than the instance.
(Usually we name the first method input self, and it refers to the instance on which the method was called.)
Because cls refers to A, rather than an instance of A, we can call cls() as we would call A().
However, in a class that inherits from A, cls will instead refer to the child class, as required:
class A:
    def __init__(self, x):
        self.x = x

    @classmethod
    def make_new(cls, **kwargs):
        y = kwargs["y"]
        return cls(y)  # returns A(y) when called on A

class B(A):
    def __init__(self, x):
        super().__init__(x)
        self.z = 3 * x
inst = B(1).make_new(y=7)
print(inst.x, inst.z)
And now you can expect that print statement to produce 7 21.
That inst.z exists should confirm for you that the make_new call (which was only defined on A and inherited unaltered by B) has indeed made an instance of B.
However, there's something I must point out. Inheriting the unaltered make_new method only works because the __init__ method on B has the same call signature as the method on A. If this weren't the case then the call to cls might have had to be altered.
This can be circumvented by allowing **kwargs on the __init__ method and passing generic **kwargs into cls() in the parent class:
class A:
    def __init__(self, **kwargs):
        self.x = kwargs["x"]

    @classmethod
    def make_new(cls, **kwargs):
        return cls(**kwargs)

class B(A):
    def __init__(self, x, w):
        super().__init__(x=x)
        self.w = w
inst = B(1, 2).make_new(x="spam", w="spam")
print(inst.x, inst.w)
Here we were able to give B a different (more restrictive!) signature.
This illustrates a general principle, which is that parent classes will typically be more abstract/less specific than their children.
It follows that, if you want two classes that substantially share behaviour but which do quite specific different things, it will be better to create three classes: one rather abstract one that defines the behaviour-in-common, and two children that give you the specific behaviours you want.
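As a hypothetical sketch of that three-class layout (the class names here are invented for illustration), combined with the class-preserving construction discussed above:

```python
class Shape:
    """Abstract parent: holds the behaviour shared by all shapes."""
    def __init__(self, scale):
        self.scale = scale

    def scaled(self, factor):
        # type(self) is the *calling* class, so subclasses are preserved
        return type(self)(self.scale * factor)

class Circle(Shape):
    pass

class Square(Shape):
    pass

big = Circle(2).scaled(10)
print(type(big).__name__, big.scale)  # Circle 20
```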
I'm wondering how to create a metaclass in Python that can create other classes that:
Store their instances in an array automatically.
Have a special instance, NonMetaClass.all, whose properties:
- When set, set the same key on every one of the class's instances to the given value (e.g., Foo.all.num = 3 makes all instances of Foo have a num of 3).
- When accessed (get), return an array of the key's value on each of the class's instances (e.g., Foo.all.num returns [5, 3, 2]).
- Cannot be deleted.
- When called (if the attribute is a function), call that method on all the instances of the class.
In Python terms, I would like to turn a class that is like this:
class Foo(object):
    BAR = 23

    def __init__(self):
        self.a = 5

    def pointless(self):
        print 'pointless.'

    def change_a(self):
        self.a = 52
Into this:
class Foo(object):
    BAR = 23
    instances = []
    all = # Some black magic to create the special "all" instance

    def __init__(self):
        self.a = 5
        Foo.instances.append(self)

    def pointless(self):
        print 'pointless.'

    def change_a(self):
        self.a = 52
And be able to use it like this:
>>> Foo()
>>> Foo.instances[0]
<__main__.Foo instance at 0x102ff5758>
>>> Foo()
>>> len(Foo.instances)
2
>>> Foo.all.a = 78
>>> Foo.all.a
[78, 78]
>>> Foo.all.change_a()
>>> Foo.all.a
[52, 52]
>>>
The only things a metaclass is actually needed for here are quite easy:
creating the instances and all attributes.
All it has to do is insert those into the class namespace. It will also have to wrap the class's __new__ method so that new instances are appended to the instances list.
The interesting part is the behaviour wanted from all, which can be implemented using the descriptor protocol and attribute-access control, so we have to craft a couple of special classes that will return the appropriate objects when requested after the ".".
"All" is the class that will be instantiated as "all" - it just needs a __get__ method to return another special object, from the AllAttr class, already bound to the owner class.
"AllAttr" is a special object that, on any attribute access, performs your requirements on the members of the owner class's "instances" attribute.
And "CallAllList" is a special list subclass that is callable, and calls all its members in turn. It is used by AllAttr when the requested attribute on the owner class is itself callable.
class CallAllList(list):
    def __call__(self, *args, **kwargs):
        return [instance(*args, **kwargs) for instance in self]

class AllAttr(object):
    def __init__(self, owner):
        self._owner = owner

    def __getattr__(self, attr):
        method = getattr(self._owner, attr, None)
        cls = CallAllList if callable(method) else list
        return cls(getattr(obj, attr) for obj in self._owner.instances)

    def __setattr__(self, attr, value):
        if attr == "_owner":
            return super(AllAttr, self).__setattr__(attr, value)
        for obj in self._owner.instances:
            setattr(obj, attr, value)

class All(object):
    def __get__(self, instance, owner):
        return AllAttr(owner)

    def __repr__(self):
        return "Representation of all instances of '{}'".format(self.__class__.__name__)

class MetaAll(type):
    def __new__(metacls, name, bases, namespace):
        namespace["all"] = All()
        namespace["instances"] = []
        cls = super(MetaAll, metacls).__new__(metacls, name, bases, namespace)
        original_new = getattr(cls, "__new__")

        def __new__(cls, *args, **kwargs):
            instance = original_new(cls, *args, **kwargs)
            cls.instances.append(instance)
            return instance

        cls.__new__ = __new__
        return cls

class Foo(metaclass=MetaAll):
    pass
The code above is written to be both Python 3 and Python 2 compatible, since you appear to still be using Python 2, given your print example.
The only thing that cannot be written compatibly for both is the metaclass declaration itself - just declare __metaclass__ = MetaAll inside the body of your Foo class if you are using Python 2. But you should not really be using Python 2; change to Python 3 as soon as you can.
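For reference, here is a condensed, runnable Python 3 version of the approach above with a small usage demo. (Foo's __init__ is kept argument-free here, so the wrapped object.__new__ call never receives extra arguments.)

```python
class CallAllList(list):
    def __call__(self, *args, **kwargs):
        return [instance(*args, **kwargs) for instance in self]

class AllAttr(object):
    def __init__(self, owner):
        self._owner = owner

    def __getattr__(self, attr):
        method = getattr(self._owner, attr, None)
        cls = CallAllList if callable(method) else list
        return cls(getattr(obj, attr) for obj in self._owner.instances)

    def __setattr__(self, attr, value):
        if attr == "_owner":
            return super().__setattr__(attr, value)
        for obj in self._owner.instances:
            setattr(obj, attr, value)

class All(object):
    def __get__(self, instance, owner):
        return AllAttr(owner)

class MetaAll(type):
    def __new__(metacls, name, bases, namespace):
        namespace["all"] = All()
        namespace["instances"] = []
        cls = super().__new__(metacls, name, bases, namespace)
        original_new = cls.__new__

        def __new__(cls, *args, **kwargs):
            instance = original_new(cls)  # object.__new__ takes no extra args
            cls.instances.append(instance)
            return instance

        cls.__new__ = __new__
        return cls

class Foo(metaclass=MetaAll):
    def __init__(self):
        self.a = 5

    def change_a(self):
        self.a = 52

f1, f2 = Foo(), Foo()
print(len(Foo.instances))  # 2
Foo.all.a = 78
print(Foo.all.a)           # [78, 78]
Foo.all.change_a()
print(Foo.all.a)           # [52, 52]
```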
Update
It turns out that Python 2 has "unbound methods", and the special-casing of __new__ does not work as in Python 3: you can't simply assign a function named __new__ to the class. To get the correct __new__ method from the superclasses, the easiest way is to create a disposable class so its __mro__ can be searched linearly; otherwise one would have to reimplement the MRO algorithm to find the proper __new__ method.
So, for Python 2, the metaclass should be this:
class MetaAll(type):
    def __new__(metacls, name, bases, namespace):
        namespace["all"] = All()
        namespace["instances"] = []
        if "__new__" in namespace:
            original_new = namespace["__new__"]

            def __new__(cls, *args, **kwargs):
                instance = original_new(cls, *args, **kwargs)
                cls.instances.append(instance)
                return instance
        else:
            # We create a disposable class just to get the '__mro__'
            stub_cls = super(MetaAll, metacls).__new__(metacls, name, bases, {})
            for parent in stub_cls.__mro__[1:]:
                if "__new__" in parent.__dict__:
                    original_new = parent.__dict__["__new__"]
                    break

            def __new__(cls, *args, **kwargs):
                instance = original_new(cls, *args, **kwargs)
                cls.instances.append(instance)
                return instance
        namespace["__new__"] = __new__
        final_cls = super(MetaAll, metacls).__new__(metacls, name, bases, namespace)
        return final_cls

class Foo(object):
    __metaclass__ = MetaAll
(Again, this thing is ancient. Just settle on Python 3.6+.)
OK, I figured out how to do this for Python 2.7 on my own. This is what I believe to be the best solution, though it may not be the only one. It allows you to get, set, and call attributes of Class.all. I've named the metaclass InstanceUnifier, but please comment if you can think of a better (shorter, more descriptive) name.
class InstanceUnifier(type):
    '''
    What we want: a metaclass that can give a class an array of instances
    and provide a static Class.all object that, when a method is called on
    it, calls the same method on every instance of the class.
    '''
    def __new__(cls, name, base_classes, dct):
        dct['all'] = None
        dct['instances'] = []
        return type.__new__(cls, name, base_classes, dct)

    def __init__(cls, name, base_classes, dct):
        class Accessor(object):
            def __getattribute__(self, name):
                array = [getattr(inst, name) for inst in cls.instances]
                if all(callable(item) for item in array):
                    def proxy_func(*args, **kwargs):
                        for func in array:
                            func(*args, **kwargs)
                    return proxy_func
                elif all(not callable(item) for item in array):
                    return array
                else:
                    raise RuntimeError('Some objects in class instance array for key "' + name + '" are callable, some are not.')

            def __setattr__(self, name, value):
                for inst in cls.instances:
                    setattr(inst, name, value)

            def __delattr__(self, name):
                for inst in cls.instances:
                    delattr(inst, name)

        cls.all = Accessor()
        return type.__init__(cls, name, base_classes, dct)

    def __call__(cls, *args, **kwargs):
        inst = type.__call__(cls, *args, **kwargs)
        cls.instances.append(inst)
        return inst
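A condensed Python 3 adaptation of the same idea, with a small demo (simplified: the mixed callable/non-callable error and __delattr__ are omitted, and the class and attribute names here are invented for illustration):

```python
class InstanceUnifier(type):
    def __new__(cls, name, bases, dct):
        dct['all'] = None
        dct['instances'] = []
        return type.__new__(cls, name, bases, dct)

    def __init__(cls, name, bases, dct):
        class Accessor(object):
            def __getattribute__(self, name):
                # Collect the attribute from every instance of the owner class
                array = [getattr(inst, name) for inst in cls.instances]
                if array and all(callable(item) for item in array):
                    def proxy_func(*args, **kwargs):
                        for func in array:
                            func(*args, **kwargs)
                    return proxy_func
                return array

            def __setattr__(self, name, value):
                for inst in cls.instances:
                    setattr(inst, name, value)

        cls.all = Accessor()
        type.__init__(cls, name, bases, dct)

    def __call__(cls, *args, **kwargs):
        # Track every new instance
        inst = type.__call__(cls, *args, **kwargs)
        cls.instances.append(inst)
        return inst

class Dog(metaclass=InstanceUnifier):
    def __init__(self, name):
        self.name = name

rex, fido = Dog('rex'), Dog('fido')
print(Dog.all.name)   # ['rex', 'fido']
Dog.all.name = 'spot'
print(Dog.all.name)   # ['spot', 'spot']
```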
I have a class B that inherits from A:
class A():
    def do_something(self, x):
        """Prints x."""
        print(x)

class B(A):
    def something_else(self, x):
        print("This isn't the same.")
I'd like to achieve a few things:
I'd like for B.do_something to inherit the docstring from A.do_something. I think functools.wraps is the recommended solution: is that right?
Let's say there are some methods of A that return an instance of A. If I call those methods from B, I'd like them to return an instance of B. So far, I'm overloading each function manually.
def method_of_A(self, *args, **kwargs):
    return A(super(self.__class__, self).method_of_A(*args, **kwargs))
There's likely a better way, especially given that I have to do this for a large number of classes. Is there some way to check if a function is defined in B and, if not but available in A, have it decorated/wrapped to return an instance of B? EDIT - I can't make changes to A's codebase.
Are there solutions that are Py2 and Py3 compatible ?
Thanks very much for any suggestions.
Yes, you can use functools.wraps to copy the function name and docstring. You can return an instance of the current class using self.__class__:
import functools

class A(object):
    def func(self):
        return self.__class__()

class B(A):
    @functools.wraps(A.func)
    def func(self):
        return super(B, self).func()

>>> b = B()
>>> obj = b.func()
>>> print type(obj)
<class '__main__.B'>
Is there some way to check if a function is defined in B and, if not but available in A, have it decorated/wrapped to return an instance of B?
You may be able to do this using metaclasses, assuming A isn't already using a custom metaclass that you're not able to inherit from (for instance, if it is only defined in C and hasn't been exposed to Python). The way you use metaclasses is slightly different in Python 2 and 3.
import functools
import types

class MetaB(type):
    def __new__(cls, name, bases, attrs):
        if A in bases:
            for attr, value in A.__dict__.items():
                if isinstance(value, types.FunctionType) and attr not in attrs:
                    new_func = MetaB.make_wrapper_func(value)
                    attrs[attr] = new_func
        return super(MetaB, cls).__new__(cls, name, bases, attrs)

    @staticmethod
    def make_wrapper_func(func):
        @functools.wraps(func)
        def _func(self, *args, **kwargs):
            value = func(self, *args, **kwargs)
            if isinstance(value, A):
                value = self.__class__(value)
            return value
        return _func

class B(A):
    __metaclass__ = MetaB
    ...
In Python 3, metaclasses are used a little differently:
class B(A, metaclass=MetaB):
    ...
This assumes you can create an object of the B() type just by passing an instance of A() to the constructor (i.e. return self.__class__(value)). That was just a guess; I'd have to know a little more about your object to know how to translate an A object to a B object, but the general method would be the same. This solution also only works on regular class methods. It's not going to work on other things like classmethods, staticmethods, or other types of descriptor objects. You certainly could make it work for all those; your metaclass would just need to be a little more complex.
Let's say there are some methods of A that return an instance of A. If I call those methods from B, I'd like them to return an instance of B. So far, I'm overloading each function manually.
Use a classmethod.
class A(object):
    @classmethod
    def f(cls):
        return cls
When f is called on b (an instance of B), it will return B.
I'm working on an application with classes and subclasses. For each class, both super and sub, there is a class variable called label. I would like the label variable for the super class to default to the class name. For example:
class Super():
    label = 'Super'

class Sub(Super):
    label = 'Sub'
Rather than manually type out the variable for each class, is it possible to derive the variable from the class name in the super class and have it automatically populated for the subclasses?
class Super():
    label = # Code to get class name

class Sub(Super):
    pass

# When inherited, Sub.label == 'Sub'.
The reason for this is that this will be the default behavior. I'm also hoping that if I can get the default behavior, I can override it later by specifying an alternate label.
class SecondSub(Super):
    label = 'Pie'  # Override the default of SecondSub.label == 'SecondSub'
I've tried using __name__, but that's not working and just gives me '__main__'.
I would like to use the class variable label in #classmethod methods. So I would like to be able to reference the value without having to actually create a Super() or Sub() object, like below:
class Super():
    label = # Magic

    @classmethod
    def do_something_with_label(cls):
        print(cls.label)
You can return self.__class__.__name__ in label as a property:
class Super:
    @property
    def label(self):
        return self.__class__.__name__

class Sub(Super):
    pass

print(Sub().label)
Alternatively, you could set it in the __init__ method:
def __init__(self):
    self.label = self.__class__.__name__
This will obviously only work on instances, not on the class itself.
To access the class name inside a classmethod, just use __name__ on cls:
class XYZ:
    @classmethod
    def my_label(cls):
        return cls.__name__

print(XYZ.my_label())
This solution might work too (snagged from https://stackoverflow.com/a/13624858/541038):
class classproperty(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, owner_self, owner_cls):
        return self.fget(owner_cls)

class Super(object):
    @classproperty
    def label(cls):
        return cls.__name__

class Sub(Super):
    pass

print(Sub.label)    # works on the class
print(Sub().label)  # also works on an instance

class Sub2(Sub):
    @classmethod
    def some_classmethod(cls):
        print(cls.label)

Sub2.some_classmethod()
You can use a descriptor:
class ClassNameDescriptor(object):
    def __get__(self, obj, type_):
        return type_.__name__

class Super(object):
    label = ClassNameDescriptor()

class Sub(Super):
    pass

class SecondSub(Super):
    label = 'Foo'
Demo:
>>> Super.label
'Super'
>>> Sub.label
'Sub'
>>> SecondSub.label
'Foo'
>>> Sub().label
'Sub'
>>> SecondSub().label
'Foo'
If class ThirdSub(SecondSub) should have ThirdSub.label == 'ThirdSub' instead of ThirdSub.label == 'Foo', you can do that with a bit more work. Assigning label at the class level will be inherited, unless you use a metaclass (which is a lot more hassle than it's worth for this), but we can have the label descriptor look for a _label attribute instead:
class ClassNameDescriptor(object):
    def __get__(self, obj, type_):
        try:
            return type_.__dict__['_label']
        except KeyError:
            return type_.__name__
Demo:
>>> class SecondSub(Super):
...     _label = 'Foo'
...
>>> class ThirdSub(SecondSub):
...     pass
...
>>> SecondSub.label
'Foo'
>>> ThirdSub.label
'ThirdSub'
A metaclass might be useful here.
class Labeller(type):
    def __new__(meta, name, bases, dct):
        dct.setdefault('label', name)
        return super(Labeller, meta).__new__(meta, name, bases, dct)

# Python 2
# class Super(object):
#     __metaclass__ = Labeller

class Super(metaclass=Labeller):
    pass

class Sub(Super):
    pass

class SecondSub(Super):
    label = 'Pie'

class ThirdSub(SecondSub):
    pass
Disclaimer: when providing a custom metaclass for your class, you need to make sure it is compatible with whatever metaclass(es) are used by any class in its ancestry. Generally, this means making sure your metaclass inherits from all the other metaclasses, but it can be nontrivial to do so. In practice, metaclasses aren't so commonly used, so it's usually just a matter of subclassing type, but it's something to be aware of.
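A hypothetical sketch of that conflict and the usual fix, a combined metaclass that inherits from both:

```python
class MetaA(type):
    pass

class MetaB(type):
    pass

class A(metaclass=MetaA):
    pass

class B(metaclass=MetaB):
    pass

# class C(A, B): pass       # TypeError: metaclass conflict

class MetaAB(MetaA, MetaB):  # inherits from all metaclasses in the ancestry
    pass

class C(A, B, metaclass=MetaAB):
    pass

print(type(C).__name__)  # MetaAB
```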
As of Python 3.6, the cleanest way to achieve this is with the __init_subclass__ hook introduced in PEP 487. It is much simpler (and easier to manage with respect to inheritance) than using a metaclass.
class Base:
    @classmethod
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Check if label has been set in the class itself, i.e. not inherited
        # from any of its superclasses; if not, default to the class's __name__.
        if 'label' not in cls.__dict__:
            cls.label = cls.__name__

class Sub1(Base):
    pass

class Sub2(Base):
    label = 'Custom'

class SubSub(Sub2):
    pass

print(Sub1.label)    # Sub1
print(Sub2.label)    # Custom
print(SubSub.label)  # SubSub
Is it possible to access the 'owner' class inside a descriptor during the __init__ function of that descriptor, without passing it in manually as in this example?
class FooDescriptor(object):
    def __init__(self, owner):
        # do things to owner here
        setattr(owner, 'bar_attribute', 'bar_value')

class BarClass(object):
    foo_attribute = FooDescriptor(owner=BarClass)
One way to do something like that is with a metaclass. Just make sure it's really what you want, and don't just copy blindly if you don't understand how it works.
class Descriptor(object):
    pass

class Meta(type):
    def __new__(cls, name, bases, attrs):
        obj = type.__new__(cls, name, bases, attrs)
        # obj is now a type instance
        # this loop looks for Descriptor subclasses
        # and instantiates them, passing the type as the first argument
        for name, attr in attrs.iteritems():
            if isinstance(attr, type) and issubclass(attr, Descriptor):
                setattr(obj, name, attr(obj))
        return obj

class FooDescriptor(Descriptor):
    def __init__(self, owner):
        owner.foo = 42

class BarClass(object):
    __metaclass__ = Meta
    foo_attribute = FooDescriptor  # will be instantiated by the metaclass

print BarClass.foo
If you need to pass additional arguments, you could use e.g. a tuple of (class, args) in place of the class, or make FooDescriptor a decorator that returns a class taking only one argument in its constructor.
Since Python 3.6, you can use the __set_name__ special method:
class FooDescriptor(object):
    def __set_name__(self, owner, name):
        owner.foo = 42

class BarClass(object):
    foo_attribute = FooDescriptor()

# foo_attribute.__set_name__(BarClass, "foo_attribute") is called right after the class definition
__set_name__ is automatically called on all descriptors in a class immediately after the class is created.
See PEP 487 for more details.
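A minimal runnable sketch (the stored name attribute is an addition here, just to show what __set_name__ receives):

```python
class FooDescriptor(object):
    def __set_name__(self, owner, name):
        # owner is the class being defined; name is the attribute name
        self.name = name
        owner.foo = 42

class BarClass(object):
    foo_attribute = FooDescriptor()

print(BarClass.foo)                 # 42
print(BarClass.foo_attribute.name)  # foo_attribute
```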