I have a number of classes that inherit from a common one. I need the parent class to keep track of a set of dependencies/relationships that are defined at the class level. Something like:
class Meta(type):
    ALLDEPENDENCIES = {}

    def __new__(meta, name, bases, attrs):
        if "DEPENDENCIES" in attrs:
            meta.ALLDEPENDENCIES.update(attrs["DEPENDENCIES"])
        return type.__new__(meta, name, bases, attrs)

class DataTable(DataFrameWrapper, metaclass=Meta):
    pass

class Foo(DataTable):
    DEPENDENCIES = {"a": 1}

class Bar(DataTable):
    DEPENDENCIES = {"b": 2}
So essentially, as I create new classes (Foo, Bar, Baz...), each of them has a dictionary, and I need to merge the info from each dictionary. So I'm using the metaclass, as shown above. Each class has a DEPENDENCIES attribute, and I'm gathering all of those into the ALLDEPENDENCIES attribute defined in the metaclass.
If I do this, it seems to work alright:
from mymodule import Foo, Bar  # or however Foo and Bar are imported
print(Foo.ALLDEPENDENCIES)
>> {"a":1, "b":2}
print(Bar.ALLDEPENDENCIES)
>> {"a":1, "b":2}
However, when working with object instances, the ALLDEPENDENCIES attribute is missing:
f = Foo()
b = Bar()
print(f.ALLDEPENDENCIES)
print(b.ALLDEPENDENCIES)
This raises an AttributeError - there is no ALLDEPENDENCIES.
I thought that the class attribute defined in the metaclass would be accessible from self.myattribute in the instances, just like DEPENDENCIES is. What am I doing wrong?
A metaclass describes how to create a class, not what that class will be.
Meta != parent with inherited attributes
So you have to pass the proper attributes into the new class:
class Meta(type):
    _a = {}

    def __new__(meta, name, bases, attrs):
        if "d" in attrs:
            meta._a.update(attrs["d"])
        attrs["a"] = meta._a
        return type.__new__(meta, name, bases, attrs)

class Data:
    pass

class DataTable(Data, metaclass=Meta):
    pass

class Foo(DataTable):
    d = {"a": 1}

class Bar(DataTable):
    d = {"b": 2}
f = Foo()
print(Foo.a)
print(f.a)

Output:
{'a': 1, 'b': 2}
{'a': 1, 'b': 2}
Instance attribute lookup does not go into the metaclass - only up to the class and its bases. The metaclass could set ALLDEPENDENCIES in each new class with a single line in its __new__, but if you want cleaner code, in the sense that the dictionary is not aliased everywhere, you can just access the attribute through the class.
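The rule is easy to see in isolation (a minimal sketch, separate from the question's code): an attribute defined only on the metaclass is reachable through the class, but not through an instance.

class LookupMeta(type):
    marker = "from metaclass"

class C(metaclass=LookupMeta):
    pass

print(C.marker)                           # 'from metaclass' - class lookup falls back to the metaclass
print(getattr(C(), "marker", "missing"))  # 'missing' - instance lookup stops at the class and its bases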
Using your code, as is:
Foo().__class__.ALLDEPENDENCIES
will work from anywhere (just as type(Foo()).ALLDEPENDENCIES will).
In order to set the attribute so that it is visible in the newly created classes, an option is:
from types import MappingProxyType

class Meta(type):
    ALLDEPENDENCIES = {}
    ALLDEPSVIEW = MappingProxyType(ALLDEPENDENCIES)

    def __new__(meta, name, bases, attrs):
        if "DEPENDENCIES" in attrs:
            meta.ALLDEPENDENCIES.update(attrs["DEPENDENCIES"])
        new_cls = super().__new__(meta, name, bases, attrs)
        new_cls.ALLDEPENDENCIES = meta.ALLDEPSVIEW
        return new_cls
(Inserting the new attribute into attrs before calling type.__new__ will also work.)
Here I do two other extras: (1) call super().__new__ instead of hardcoding a call to type.__new__: this allows your metaclass to be composable with other metaclasses, which might be needed if one of your classes crosses with another metaclass (for example, if you are using abstract base classes from abc or collections.abc). And (2) use a MappingProxyType, which is a "read only" dictionary view and will stop accidental direct updates of the dict through classes or instances.
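To see what the proxy buys you (a standalone sketch, not part of the answer's code):

from types import MappingProxyType

deps = {"a": 1}
view = MappingProxyType(deps)
print(view["a"])   # 1 - reads pass through to the underlying dict
deps["b"] = 2      # updates to the underlying dict stay visible...
print(view["b"])   # 2
try:
    view["c"] = 3  # ...but writes through the proxy are rejected
except TypeError as exc:
    print(exc)     # 'mappingproxy' object does not support item assignment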
Suppose we have a base class, A, that contains some class variables. This class also has a class method foo that does something with those variables. Since this behavior shouldn't be hard-coded (e.g. we don't want to have to modify foo when adding new class variables), foo reads cls.__dict__ instead of directly referencing the variables.
Now we introduce a derived class: B extends A with some more class variables, as well as inherits foo. Code example:
class A:
    x = 0
    y = 1

    @classmethod
    def foo(cls):
        print([name for name, prop in cls.__dict__.items() if type(prop) is int])

class B(A):
    z = 3

print(B.x)  # prints "0"
A.foo()     # prints "['x', 'y']"
B.foo()     # prints "['z']" -- why not "['x', 'y', 'z']"?
Therefore, my question is: why does B.__dict__ not contain the variables inherited from A, and if they are not there, then where are they?
This is not a duplicate of Accessing class attributes from parents in instance methods, because I don't just want to query specific variables that happen to be in the base class - I want to list them without knowing their names. The MRO-related answers given in that question might happen to apply here as well, but the original problem is, in my view, different.
I solved this with a metaclass, using it to redefine the way derived classes are created. In this approach, all class variables that we are interested in get copied from the base classes to the derived one:
class M(type):
    def __new__(cls, name, bases, dct):
        for base in bases:
            # use a distinct loop variable so the "name" parameter
            # passed to super().__new__ below is not clobbered
            for attr_name, prop in base.__dict__.items():
                if type(prop) is int:
                    dct[attr_name] = prop
        return super(M, cls).__new__(cls, name, bases, dct)

class A(metaclass=M):
    x = 0
    y = 1

    @classmethod
    def foo(cls):
        print([name for name, prop in cls.__dict__.items() if type(prop) is int])

class B(A):
    z = 3

A.foo()  # prints "['x', 'y']"
B.foo()  # prints all three names, e.g. "['z', 'x', 'y']" (order may vary across versions)
Besides, the metaclass can be defined in such a way that foo does not even have to query __dict__: the variables of interest could be collected into a list instead of modifying __dict__, should that for some reason have to be avoided.
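A minimal sketch of that list-based variant (the _int_names attribute name is mine, for illustration):

class M(type):
    def __new__(mcs, name, bases, dct):
        # collect this class's own int variables, then append the bases' lists
        collected = [k for k, v in dct.items() if type(v) is int]
        for base in bases:
            collected.extend(getattr(base, '_int_names', []))
        dct['_int_names'] = collected
        return super().__new__(mcs, name, bases, dct)

class A(metaclass=M):
    x = 0
    y = 1

    @classmethod
    def foo(cls):
        print(cls._int_names)  # no __dict__ scan needed

class B(A):
    z = 3

A.foo()  # ['x', 'y']
B.foo()  # ['z', 'x', 'y']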
I am setting a class attribute fields using a metaclass:
class MyMeta(type):
    def __new__(mcs, name, bases, clsdict):
        clsdict['fields'] = {k: v
                             for k, v in clsdict.items()
                             if <my_condition>}
        return super(MyMeta, mcs).__new__(mcs, name, bases, clsdict)

class MyBaseClass(metaclass=MyMeta):
    fields = {}
The following instantiation leads to the expected result:

class SubClass(MyBaseClass):
    param1 = 1  # meets <my_condition>

>>> SubClass.fields
{'param1': 1}
But if I now subclass SubClass, fields is empty:
class SubSubClass(SubClass):
    pass
>>> SubSubClass.fields
{}
How would I be able to update the classdict of all classes in the inheritance hierarchy so that the fields variable is updated from the base classes?
You need to somehow keep the fields of the superclasses, for example by iterating over the bases and using their fields as a starting point:
class MyMeta(type):
    def __new__(mcs, name, bases, clsdict):
        if 'fields' not in clsdict:
            clsdict['fields'] = {}
        # Initialize "fields" from the base classes
        for base in bases:
            try:
                clsdict['fields'].update(base.fields)
            except AttributeError:
                pass
        # Fill in the new fields (I included a "trivial" condition here; just use yours instead.)
        clsdict['fields'].update({k: v for k, v in clsdict.items() if k.startswith('param')})
        return super(MyMeta, mcs).__new__(mcs, name, bases, clsdict)
And it works for SubClass and SubSubClass:
>>> SubClass.fields
{'param1': 1}
>>> SubSubClass.fields
{'param1': 1}
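And if SubSubClass adds its own matching attribute (my extension, not part of the original example), the inherited and new fields merge as expected:

class SubSubClass(SubClass):
    param2 = 2

>>> SubSubClass.fields
{'param1': 1, 'param2': 2}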
I suggest turning fields into a property descriptor on the metaclass which fetches the combined contents of _fields from the parent classes. This way you can also more easily customize what happens when there are name conflicts, etc.
class MyMeta(type):
    def __new__(mcs, name, bases, clsdict):
        # change fields to _fields
        clsdict['_fields'] = {k: v
                              for k, v in clsdict.items()
                              if <my_condition>}
        return super(MyMeta, mcs).__new__(mcs, name, bases, clsdict)

    @property
    def fields(cls):
        # reversed makes the most recent key value override parent values
        return {k: v
                for c in reversed(cls.__mro__)
                for k, v in getattr(c, '_fields', {}).items()}
Usage:

class MyBaseClass(metaclass=MyMeta):
    fields = {}

class SubClass(MyBaseClass):
    param1 = 1

>>> SubClass.fields
{'param1': 1}

class SubSubClass(SubClass):
    pass

>>> SubSubClass.fields
{'param1': 1}  # success
Now, usage of SomeChildClass.fields always refers to the metaclass property. The third argument to getattr allows classes with no _fields attribute (such as object) to fail silently.
Using a descriptor also has the advantage of preventing a child class from accidentally overriding the fields attribute:
>>> SubSubClass.fields = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
You could also create a setter, if desired, and use it in the __init__ method (i.e., go back to using fields instead of _fields) so that the rest of the class is implementation agnostic:
# defined inside MyMeta, below the fields property
@fields.setter
def fields(cls, mapping):
    try:
        cls._fields.update(**mapping)
    except AttributeError:
        cls._fields = dict(**mapping)
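For example, a usage sketch (mine, assuming the k.startswith('param') condition from the previous answer stands in for <my_condition>):

class SubClass(MyBaseClass):
    param1 = 1

SubClass.fields = {'param2': 2}  # routed through the metaclass setter
>>> SubClass.fields
{'param1': 1, 'param2': 2}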
Class Foo is defined with a metaclass Meta. The metaclass loops over the class attributes and prints them to the screen.
Class Bar subclasses Foo. However, the metaclass does not print the attributes Bar inherits from Foo.
Why doesn't the metaclass have access to the attributes Bar inherits from Foo? What am I not understanding about Python's metaclass system?
Here is the sample code in 2.7:
class Meta(type):
    def __init__(cls, name, bases, attrs):
        print "bases = {}".format(bases)
        items = {k: v for k, v in attrs.iteritems() if not k.startswith('__')}
        for k, v in items.iteritems():
            print k, v

class Foo(object):
    __metaclass__ = Meta
    hi = 1

# This prints:
# bases = (<type 'object'>,)
# hi 1

class Bar(Foo):
    pass

# This prints:
# bases = (<class '__main__.Foo'>,)

Foo.hi
# prints 1
Bar.hi
# prints 1
The attrs parameter to __init__ only contains the attributes for that class, not for its bases.
The class Bar does not itself have an attribute hi. Instead, when you ask for Bar.hi, the lookup starts at Bar, finds that it doesn't have hi, and then looks in the base Foo to find it.
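You can see this from the class dictionaries themselves (a quick check to illustrate, in the same 2.7 style):

print 'hi' in Bar.__dict__  # False - Bar's own dict has no 'hi'
print 'hi' in Foo.__dict__  # True - it lives in Foo's dict
print Bar.hi                # 1 - found on Foo during the lookup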
As @orlp says, attrs contains only the class dictionary for the class being created. You still have access to hi, however, because it's in the __dict__ of one of Bar's bases. That is, you could do something similar to what you have, but recurse through the base classes and print out the entries in each base class dictionary, as sketched below.
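For example (my sketch of that recursive idea, walking cls.__mro__ rather than recursing by hand):

class Meta(type):
    def __init__(cls, name, bases, attrs):
        print "bases = {}".format(bases)
        for klass in cls.__mro__:  # the class itself plus all its bases
            for k, v in vars(klass).items():
                if not k.startswith('__'):
                    print k, v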
Another approach is to use dir(), which should roughly return a list of all attributes a class has. I say roughly because a class can implement __getattr__ or __getattribute__ to return attributes "on the fly", meaning that the class may not have a well-defined set of attributes for dir() to return -- see the full disclaimer here. But in many common cases, something like the following will work:
class Meta(type):
    def __init__(cls, name, bases, attrs):
        print "bases = {}".format(bases)
        for attr in dir(cls):
            if not attr.startswith('_'):
                print attr, getattr(cls, attr)

class Foo(object):
    __metaclass__ = Meta
    hi = 1

class Bar(Foo):
    pass
Which prints:
bases = (<type 'object'>,)
hi 1
bases = (<class '__main__.Foo'>,)
hi 1
I'd like to be able to use __delitem__ with a class-level variable.
My use case can be found here (the answer that uses _reg_funcs) but it basically involves a decorator class keeping a list of all the functions it has decorated. Is there a way I can get the class object to support __delitem__? I know I could keep an instance around specially for this purpose but I'd rather not have to do that.
class Foo(object):
    _instances = {}

    def __init__(self, my_str):
        n = len(self._instances) + 1
        self._instances[my_str] = n
        print "Now up to {} instances".format(n)

    @classmethod
    def __delitem__(cls, my_str):
        del cls._instances[my_str]

abcd = Foo('abcd')
defg = Foo('defg')

print "Deleting via instance..."
del abcd['abcd']
print "Done!\n"

print "Deleting via class object..."
del Foo['defg']
print "You'll never get here because of a TypeError: 'type' object does not support item deletion"
When you write del obj[key], Python calls the __delitem__ method of the class of obj, not of obj. So del obj[key] results in type(obj).__delitem__(obj, key).
In your case, that means type(Foo).__delitem__(Foo, 'defg'). type(Foo) is type, and type.__delitem__ is not defined. You can't modify type itself; you'll need to change the type of Foo to something that does define it.
You do that by defining a new metaclass, which is simply a subclass of type, then instructing Python to use your new metaclass to create the Foo class (not instances of Foo, but Foo itself).
class ClassMapping(type):
    def __new__(cls, name, bases, dct):
        t = type.__new__(cls, name, bases, dct)
        t._instances = {}
        return t

    def __delitem__(cls, my_str):
        del cls._instances[my_str]

class Foo(object):
    __metaclass__ = ClassMapping

    def __init__(self, my_str):
        n = len(Foo._instances) + 1
        Foo._instances[my_str] = n
        print "Now up to {} instances".format(n)
Changing the metaclass of Foo from type to ClassMapping provides Foo with:
- a class variable _instances that refers to a dictionary, and
- a __delitem__ method that removes arguments from _instances.
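With the metaclass in place, deletion through the class works (my quick check, in the same 2.7 style; note that del abcd['abcd'] would now fail instead, since Foo itself no longer defines __delitem__):

abcd = Foo('abcd')    # Now up to 1 instances
defg = Foo('defg')    # Now up to 2 instances
del Foo['defg']       # dispatched to ClassMapping.__delitem__
print Foo._instances  # {'abcd': 1}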
I have some code in Python where I'll have a bunch of classes, each of which will have an attribute _internal_attribute. I would like to be able to generate a mapping of those attributes to the original class. Essentially I would like to be able to do this:
class A(object):
    _internal_attribute = 'A attribute'

class B(object):
    _internal_attribute = 'B attribute'

a_instance = magic_reverse_mapping['A attribute']()
b_instance = magic_reverse_mapping['B attribute']()
What I'm missing here is how to generate magic_reverse_mapping dict. I have a gut feeling that having a metaclass generate A and B is the correct way to go about this; does that seem right?
You can use a metaclass to automatically register your classes in magic_reverse_mapping:

magic_reverse_mapping = {}

class MagicRegister(type):
    def __new__(meta, name, bases, dct):
        cls = type.__new__(meta, name, bases, dct)
        magic_reverse_mapping[dct['_internal_attribute']] = cls
        return cls

class A(object):
    __metaclass__ = MagicRegister
    _internal_attribute = 'A attribute'

afoo = magic_reverse_mapping['A attribute']()
Alternatively you can use a decorator on your classes to register them. I think this is more readable and easier to understand:
magic_reverse_mapping = {}

def magic_register(cls):
    magic_reverse_mapping[cls._internal_attribute] = cls
    return cls

@magic_register
class A(object):
    _internal_attribute = 'A attribute'

afoo = magic_reverse_mapping['A attribute']()
Or you could even do it by hand. It's not that much more work without using any magic:
reverse_mapping = {}
class A(object):
    _internal_attribute = 'A attribute'

reverse_mapping[A._internal_attribute] = A
Looking at the different variants I think the decorator version would be the most pleasant to use.
You need some data structure to store the set of applicable classes, but you don't have to build it up yourself as the classes are defined. You can read the classes from globals() instead. This naturally assumes that your classes extend object, as they do in your first post.
def magic_reverse_mapping(attribute_name, attribute_value):
    # isinstance(val, type) picks out (new-style) classes; testing
    # isinstance(val, object) would match every global value
    classobjects = [val for val in globals().values() if isinstance(val, type)]
    attrobjects = [cls for cls in classobjects if hasattr(cls, attribute_name)]
    resultobjects = [cls for cls in attrobjects if object.__getattribute__(cls, attribute_name) == attribute_value]
    return resultobjects

magic_reverse_mapping('_internal_attribute', 'A attribute')
# output: [<class '__main__.A'>]
Note that this returns a list of classes with that attribute value, because there may be more than one. If you wanted to instantiate the first one:
magic_reverse_mapping('_internal_attribute', 'A attribute')[0]()
# output: <__main__.A object at 0xb7ce486c>
Unlike in sth's answer, you don't have to add a decorator to your classes (a neat solution, though). However, there's no way to exclude classes that happen to be in the global namespace.