I'm looking into MongoEngine, and I wanted to make a class an "EmbeddedDocument" dynamically, so I do this:
def custom(cls):
    cls = type(cls.__name__, (EmbeddedDocument,), cls.__dict__.copy())
    cls.a = FloatField(required=True)
    cls.b = FloatField(required=True)
    return cls

A = custom(A)
I tried it on some classes, but it's not running some of the base class's __init__ or something. In BaseDocument:
def __init__(self, **values):
    self._data = {}
    # Assign initial values to instance
    for attr_name, attr_value in self._fields.items():
        if attr_name in values:
            setattr(self, attr_name, values.pop(attr_name))
        else:
            # Use default value if present
            value = getattr(self, attr_name, None)
            setattr(self, attr_name, value)
but this never gets called, so ._data is never set, and I get errors. How do I do this?
Update:
I'm playing with it more, and it seems to have an issue with classes that define their own __init__ methods. Maybe I need to call it explicitly?
The class you are creating isn't a subclass of cls. You can mix in EmbeddedDocument, but you still need to subclass the original to get the parent's methods (like __init__):
cls = type(cls.__name__, (cls, EmbeddedDocument),
           {'a': FloatField(required=True), 'b': FloatField(required=True)})
EDIT: you can put the 'a' and 'b' attributes right in the attribute dict passed to type()
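Putting that together, a minimal sketch of the corrected decorator might look like this (assuming MongoEngine's EmbeddedDocument and FloatField, and keeping the base ordering from the snippet above):

from mongoengine import EmbeddedDocument, FloatField

def custom(cls):
    # Subclass the original class as well, so its own methods survive
    # alongside the EmbeddedDocument machinery.
    return type(cls.__name__, (cls, EmbeddedDocument),
                {'a': FloatField(required=True),
                 'b': FloatField(required=True)})

class A(object):
    pass

A = custom(A)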
I'm trying to modify a third-party dict class to make it immutable after a certain point.
With most classes, I can assign to method slots to modify behavior.
However, this doesn't seem possible with all methods in all classes. In particular for dict, I can reassign update, but not __setitem__.
Why? How are they different?
For example:
class Freezable(object):
    def _not_modifiable(self, *args, **kw):
        raise NotImplementedError()

    def freeze(self):
        """
        Disallow mutating methods from now on.
        """
        print "FREEZE"
        self.__setitem__ = self._not_modifiable
        self.update = self._not_modifiable
        # ... others
        return self

class MyDict(dict, Freezable):
    pass

d = MyDict()
d.freeze()
print d.__setitem__  # <bound method MyDict._not_modifiable of {}>
d[2] = 3             # no error -- this is incorrect.
d.update({4:5})      # raise NotImplementedError
Python looks up special methods like __setitem__ on the type, not on the instance, which is why assigning to self.__setitem__ has no effect on d[2] = 3, while reassigning update (an ordinary method, found in the instance dict first) works. Note that you can define the class __setitem__, e.g.:
def __setitem__(self, key, value):
    # Compare the underlying functions: after freeze(), self.update is a
    # bound copy of _not_modifiable stored on the instance.
    if getattr(self.update, '__func__', None) is Freezable._not_modifiable.__func__:
        raise TypeError('{} has been frozen'.format(id(self)))
    dict.__setitem__(self, key, value)
(This method is a bit clumsy; there are other options. But it's one way to make it work even though Python calls the class's __setitem__ directly.)
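For what it's worth, a less clumsy sketch (my own variant, not from the answer above) keeps an explicit _frozen flag and defines the mutating methods on the class, which is where Python actually looks them up:

class Freezable(object):
    _frozen = False

    def freeze(self):
        self._frozen = True
        return self

    def _check_frozen(self):
        if self._frozen:
            raise TypeError('{} has been frozen'.format(id(self)))

class FreezableDict(dict, Freezable):
    # Special methods must live on the class: Python bypasses the
    # instance dict when it evaluates d[key] = value.
    def __setitem__(self, key, value):
        self._check_frozen()
        dict.__setitem__(self, key, value)

    def update(self, *args, **kw):
        self._check_frozen()
        dict.update(self, *args, **kw)

d = FreezableDict()
d[1] = 2    # fine
d.freeze()
d[2] = 3    # raises TypeError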
The aim is to provide some strings in a list as attributes of a class. The class shall have not only the attributes, but also the respective getter and setter methods. In a class inheriting from it, some of those setters need to be overridden.
To this end I came up with the following. Using setattr in a loop over the list of strings, an attribute and the respective methods are created. This first part works as expected.
However, I am not able to override the setters in an inheriting class.
class Base():
    attributes = ["attr{}".format(i) for i in range(100)]

    def __init__(self):
        _get = lambda a: lambda: getattr(self, a)
        _set = lambda a: lambda v: setattr(self, a, v)
        for attr in self.attributes:
            setattr(self, attr, None)
            setattr(self, "get_" + attr, _get(attr))
            setattr(self, "set_" + attr, _set(attr))

class Child(Base):
    def __init__(self):
        super().__init__()
        #setattr(self, "set_attr4", set_attr4)

    # Here I want to override one of the setters to perform typechecking
    def set_attr4(self, v):
        print("This being printed would probably solve the problem.")
        if type(v) == bool:
            super().set_attr4(v)
        else:
            raise ValueError("attr4 must be a boolean")

if __name__ == "__main__":
    b = Base()
    b.attr2 = 5
    print(b.get_attr2())
    b.set_attr3(55)
    print(b.get_attr3())
    c = Child()
    c.set_attr4("SomeString")
    print(c.get_attr4())
The output here is
5
55
SomeString
The expected output would however be
5
55
This being printed would probably solve the problem.
ValueError("attr4 must be a boolean")
So somehow the set_attr4 method is never called, which I guess is expected, because __init__ runs only after the class structure has been read in. But I am at a loss on how else to override those methods. I tried to add setattr(self, "set_attr4", set_attr4) (the commented line in the code above), but to no avail.
Or more generally, there is property, which is usually used for creating getters and setters. But I don't think I understand how to apply it in a case where the getters and setters are created dynamically and need to be overridden by a child.
Is there any solution to this?
Update due to comments: It was pointed out by several people that using getters/setters in Python may not be good style and that they are usually not needed. While this is definitely something to keep in mind, the background of this question is that I'm extending an old existing code base which uses getters/setters throughout. I hence do not wish to change the style and make the users (this project only has some 20 users in total, but still...) suddenly change the way they access properties within the API.
However any future reader of this may consider that the getter/setter approach is at least questionable.
Metaclasses to the rescue!
class Meta(type):
    def __init__(cls, name, bases, dct):
        for attr in cls.attributes:
            if not hasattr(cls, attr):
                setattr(cls, attr, None)
                setattr(cls, f'get_{attr}', cls._get(attr))
                setattr(cls, f'set_{attr}', cls._set(attr))

class Base(metaclass=Meta):
    attributes = ["attr{}".format(i) for i in range(100)]
    _get = lambda a: lambda self: getattr(self, a)
    _set = lambda a: lambda self, v: setattr(self, a, v)
    # the rest of your code goes here
This is pretty self-explanatory: make attributes, _get, _set class variables (so that you can access them without class instantiation), then let the metaclass set everything up for you.
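A quick check of the override against this Base (hypothetical usage, mirroring the question's Child class): because Base already provides attr4, the hasattr() guard skips Child and the hand-written setter survives.

class Child(Base):
    def set_attr4(self, v):
        if not isinstance(v, bool):
            raise ValueError("attr4 must be a boolean")
        super().set_attr4(v)

c = Child()
c.set_attr4(True)          # typecheck passes, then Base's generated setter runs
print(c.get_attr4())       # True
c.set_attr4("SomeString")  # raises ValueError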
Base.__init__ is executed after the subclass has been created, so it overwrites whatever the subclass specified.
The minimal change needed to fix the problem is to check whether the attribute has already been set:
class Base():
    attributes = ["attr{}".format(i) for i in range(100)]

    def __init__(self):
        _get = lambda a: lambda: getattr(self, a)
        _set = lambda a: lambda v: setattr(self, a, v)
        for attr in self.attributes:
            setattr(self, attr, None)
            if not hasattr(self, "get_" + attr):
                setattr(self, "get_" + attr, _get(attr))
            if not hasattr(self, "set_" + attr):
                setattr(self, "set_" + attr, _set(attr))
However, I do not see the point in doing it this way. This creates a new getter and setter for each instance of Base. I would instead create them on the class. That can be done with a class decorator (sketched at the end of this answer), with a metaclass, in the body of the class itself, or in some other way.
For example, this is ugly, but simple:
class Base():
    attributes = ["attr{}".format(i) for i in range(100)]

    for attr in attributes:
        exec(f"get_{attr} = lambda self: self.{attr}")
        exec(f"set_{attr} = lambda self, value: setattr(self, '{attr}', value)")
    del attr
This is better:
class Base:
    pass

attributes = ["attr{}".format(i) for i in range(100)]
for attr in attributes:
    # attr=attr binds the current name; a plain closure would only ever
    # see the last attr once the loop has finished.
    setattr(Base, f"get_{attr}", lambda self, attr=attr: getattr(self, attr))
    setattr(Base, f"set_{attr}", lambda self, value, attr=attr: setattr(self, attr, value))
You're right about the problem. The creation of your Base instance happens after the Child class defines set_attr4. Since Base creates its getters/setters dynamically, it just blasts over Child's version upon creation.
One alternative way (in addition to the other answers) is to create the Child's getters/setters dynamically too. The idea here is to go for "convention over configuration" and just prefix methods you want to override with override_. Here's an example:
class Child(Base):
    def __init__(self):
        super().__init__()
        overrides = [override for override in dir(self) if override.startswith("override_")]
        for override in overrides:
            base_name = override.split("override_")[-1]
            setattr(self, base_name, getattr(self, override))

    # Here I want to override one of the setters to perform typechecking
    def override_set_attr4(self, v):
        print("This being printed would probably solve the problem.")
        if type(v) == bool:
            super().set_attr4(v)
        else:
            raise ValueError("attr4 must be a boolean")  # Added "raise" to this, otherwise we just return None...
which outputs:
5
55
This being printed would probably solve the problem.
Traceback (most recent call last):
  File ".\stack.py", line 39, in <module>
    c.set_attr4("SomeString")
  File ".\stack.py", line 29, in override_set_attr4
    raise ValueError("attr4 must be a boolean")  # Added "raise" to this, otherwise we just return None...
ValueError: attr4 must be a boolean
Advantages here are that the Base doesn't have to know about the Child class. In the other answers, there's very subtle Base/Child coupling going on. It also might not be desirable to touch the Base class at all (violation of the Open/Closed principle).
Disadvantages are that "convention over configuration" to avoid a true inheritance mechanism is a bit clunky and unintuitive. The override_ function is also still hanging around on the Child instance (which you may or may not care about).
I think the real problem here is that you're trying to define getters and setters in such a fashion. We usually don't even want getters/setters in Python. This definitely feels like an X/Y problem, but maybe it isn't. You have a lot of rep, so I'm not going to give you some pedantic spiel about it. Even so, maybe take a step back and think about what you're really trying to do and consider alternatives.
The problem here is that you're creating the "methods" on the instance of the Base class (__init__ only runs on the instance).
Inheritance happens before you instantiate your class, and it doesn't look into instances.
In other words, when you try to override the method, it hasn't even been created in the first place.
A solution is to create them on the class, not on the instance inside __init__:
def _create_getter(attr):
    def _get(self):
        return getattr(self, attr)
    return _get

def _create_setter(attr):
    def _set(self, value):
        return setattr(self, attr, value)
    return _set

class Base():
    attributes = ["attr{}".format(i) for i in range(100)]

for attr in Base.attributes:
    setattr(Base, 'get_' + attr, _create_getter(attr))
    setattr(Base, 'set_' + attr, _create_setter(attr))
Then inheriting will work normally:
class Child(Base):
    def set_attr4(self, v):
        print("This being printed would probably solve the problem.")
        if type(v) == bool:
            super().set_attr4(v)
        else:
            raise ValueError("attr4 must be a boolean")

if __name__ == "__main__":
    b = Base()
    b.attr2 = 5
    print(b.get_attr2())
    b.set_attr3(55)
    print(b.get_attr3())
    c = Child()
    c.set_attr4("SomeString")
    print(c.get_attr4())
You could also just not do it - make your Base class as normal, and make setters only for the attributes you want, in the child class:
class Base:
    pass

class Child(Base):
    @property
    def attr4(self):
        return self._attr4

    @attr4.setter
    def attr4(self, new_v):
        if not isinstance(new_v, bool):
            raise TypeError('Not bool')
        self._attr4 = new_v
Testing:
c = Child()
c.attr3 = 2     # works fine even without any setter
c.attr4 = True  # works fine, runs the setter
c.attr4 = 3     # TypeError
In my object's __init__, I would like to create object properties from an iterable. For example:
class MyClass(object):
    def __init__(self, parameters):
        attributes = ['name',
                      'memory',
                      'regressors',
                      'use_const']
        for attr_name in attributes:
            try:
                attr_val = parameters[attr_name]
            except KeyError:
                raise ValueError("parameters must contain {}".format(attr_name))
            setattr(self, attr_name, attr_val)
This lets me get the attributes that I want. However, what I lose compared to defining
@property
def name(self):
    """str: This class' name"""
    return self._name
is that I don't get the docstrings for the properties now.
I'd like to have the docstrings for each property (for my auto-generated documentation), but I'd also like to use an iterable instead of having to define each property separately. For example, can I turn attributes into a dict with the docstring as a value, and set the attribute's docstring dynamically?
Can I have my cake and eat it too?
You can only set property objects on the class. You can do this in a loop, but this has to be done when building the class, not instances.
Simply produce property objects:
def set_property(cls, name, attr, docstring):
    def getter(self):
        return getattr(self, attr)
    prop = property(getter, None, None, docstring)
    setattr(cls, name, prop)

for name in attributes:
    attr = '_' + name
    docstring = "str: This class' {}".format(name)
    set_property(SomeClass, name, attr, docstring)
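Tying this back to the question, a small sketch that drives the set_property helper defined above from a dict mapping attribute names to docstrings (the class and the docstring texts here are made up for illustration):

class MyClass(object):
    def __init__(self, parameters):
        for attr_name in ('name', 'memory', 'regressors', 'use_const'):
            try:
                # Store under a leading underscore; the property exposes it read-only.
                setattr(self, '_' + attr_name, parameters[attr_name])
            except KeyError:
                raise ValueError("parameters must contain {}".format(attr_name))

docstrings = {
    'name': "str: This class' name",
    'memory': "int: This class' memory setting",
    'regressors': "list: This class' regressors",
    'use_const': "bool: Whether a constant term is used",
}

for name, docstring in docstrings.items():
    set_property(MyClass, name, '_' + name, docstring)

obj = MyClass({'name': 'demo', 'memory': 1, 'regressors': [], 'use_const': True})
print(obj.name)                # 'demo'
print(type(obj).name.__doc__)  # "str: This class' name"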
As the title says: it seems no matter what I do, __getattr__ on my metaclass will not be called. I also tried it on an instance (absurd, I know), with predictably no response. As if __getattr__ were banned in metaclasses.
I'd appreciate any pointer to documentation about this.
The code:
class PreinsertMeta(type):
    def resolvedField(self):
        if isinstance(self.field, basestring):
            tbl, fld = self.field.split(".")
            self.field = (tbl, fld)
        return self.field
    Field = property(resolvedField)

    def __getattr__(self, attrname):
        if attrname == "field":
            if isinstance(self.field, basestring):
                tbl, fld = self.field.split(".")
                self.field = (tbl, fld)
            return self.field
        else:
            return super(PreinsertMeta, self).__getattr__(attrname)

    def __setattr__(self, attrname, value):
        super(PreinsertMeta, self).__setattr__(attrname, value)

class Test(object):
    __metaclass__ = PreinsertMeta
    field = "test.field"

print Test.field  # Should already print the tuple
Test.field = "another.field"  # __setattr__ gets called nicely
print Test.field  # Again with the string?
print Test.Field  # note the capital 'F', this actually calls resolvedField() and prints the tuple
Thanks to BrenBarn, here's the final working implementation:
class PreinsertMeta(type):
    def __getattribute__(self, attrname):
        if attrname == "field" and isinstance(object.__getattribute__(self, attrname), basestring):
            tbl, fld = object.__getattribute__(self, attrname).split(".")
            self.field = (tbl, fld)
        return object.__getattribute__(self, attrname)
As documented, __getattr__ is only called if the attribute does not exist. Since your class has a field attribute, that blocks __getattr__. You can use __getattribute__ if you really want to intercept all attribute access, although it's not clear from your example why you need to do this. Note that this has nothing to do with metaclasses; you would see the same behavior if you created an instance of an ordinary class and gave it some attribute.
Even assuming you used __getattribute__, so it was called when the attribute exists, your implementation doesn't make much sense. Inside __getattr__ you try to get a value for self.field. But if __getattribute__ was called in the first place, it will be called again for this access, creating an infinite recursion: in order to get self.field, it has to call __getattribute__, which again tries to get self.field, which again calls __getattribute__, etc. See the documentation for __getattribute__ for how to get around this.
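A tiny illustration of that lookup rule (shown on an ordinary class; the same applies when the "instance" is itself a class and the lookup goes through its metaclass):

class Demo(object):
    existing = 1

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails.
        return "missing: " + name

d = Demo()
print(d.existing)  # 1 -- __getattr__ is never called
print(d.other)     # "missing: other" -- __getattr__ fills the gap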
What I would like to do here is declare class variables, but actually use them as instance variables. I have a class Field and a class Thing, like this:
class Field(object):
    def __set__(self, instance, value):
        for key, v in vars(instance.__class__).items():
            if v is self:
                instance.__dict__.update({key: value})

    def __get__(self, instance, owner):
        for key, v in vars(instance.__class__).items():
            if v is self:
                try:
                    return instance.__dict__[key]
                except:
                    return None

class Thing(object):
    foo = Field()
So when I instantiate a Thing and set the attribute foo, it is added to the instance, not the class; the class variable itself is never actually re-set.
new = Thing()
new.foo = 'bar'
# (foo : 'bar') is stored in new.__dict__
This works so far, but the above code for Field is rather awkward. It has to look for the Field object instance in the class's attributes, since otherwise there seems to be no way of knowing the name of the property (foo) in __set__ and __get__. Is there another, more straightforward way to accomplish this?
Every instance of Field (effectively) has a name. Its name is the attribute name (or key) which references it in Thing. Instead of having to look up the key dynamically, you could instantiate Fields with the name at the time the class attribute is set in Thing:
class Field(object):
    def __init__(self, name):
        self.name = name

    def __set__(self, instance, value):
        instance.__dict__.update({self.name: value})

    def __get__(self, instance, owner):
        if instance is None:
            return self
        try:
            return instance.__dict__[self.name]
        except KeyError:
            return None

def make_field(*args):
    def wrapper(cls):
        for arg in args:
            setattr(cls, arg, Field(arg))
        return cls
    return wrapper

@make_field('foo')
class Thing(object):
    pass
And it can be used like this:
new = Thing()
Before new.foo is set, new.foo returns None:
print(new.foo)
# None
After new.foo is set, 'foo' is an instance attribute of new:
new.foo = 'bar'
print(new.__dict__)
# {'foo': 'bar'}
You can access the descriptor (the Field instance itself) with Thing.foo:
print(Thing.foo)
# <__main__.Field object at 0xb76cedec>
PS. I'm assuming you have a good reason why
class Thing(object):
    foo = None
does not suffice.
Reread your question and realized I had it wrong:
You don't need to override the default python behavior to do this. For example, you could do the following:
class Thing(object):
    foo = 5
>>> r = Thing()
>>> r.foo = 10
>>> s = Thing()
>>> print Thing.foo
5
>>> print r.foo
10
>>> print s.foo
5
If you want the default to be 'None' for a particular variable, you could just set the class-wide value to be None. That said, you would have to declare it specifically for each variable.
The easiest way would be to call the attribute something other than the name of the descriptor variable, preferably starting with _ to signal that it's an implementation detail. That way, you end up with:
def __set__(self, instance, value):
    instance._foo = value

def __get__(self, instance, owner):
    return getattr(instance, '_foo', None)
The only drawback of this is that you can't determine the name of the key from the one used for the descriptor. If that increased coupling isn't a problem compared to the loop, you could just use a property:
class Thing:
    @property
    def foo(self):
        return getattr(self, '_foo', None)

    @foo.setter
    def foo(self, value):
        self._foo = value
otherwise, you could pass the name of the variable into the descriptor's __init__, so that you have:
class Thing:
    foo = Field('_foo')
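A minimal sketch of such a Field, assuming it simply receives the storage attribute name in __init__ (it mirrors the name-taking descriptor from the earlier answer):

class Field(object):
    def __init__(self, storage_name):
        self.storage_name = storage_name  # e.g. '_foo'

    def __set__(self, instance, value):
        setattr(instance, self.storage_name, value)

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return getattr(instance, self.storage_name, None)

class Thing(object):
    foo = Field('_foo')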
Of course, all this assumes that the simplest and most Pythonic way - use a real variable Thing().foo that you set to None in Thing.__init__ - isn't an option for some reason. If that way will work for you, you should prefer it.