Python - how to create an immutable class member that is configurable during __init__

I would like to create a class member that can be assigned a user-specified value by the constructor, but cannot be changed afterwards. Is there a way to do this?
So far I have gotten the following code, which mostly works but is not "idiot proof".
def constantify(f):
    def fset(self, value):
        raise SyntaxError('Not allowed to change value')
    def fget(self):
        return f(self)
    return property(fget, fset)
class dummy(object):
    def __init__(self, constval):
        self.iamvar = None
        self._CONST = constval

    @constantify
    def SOMECONST(self):
        return self._CONST
dum = dummy(42)
print 'Original Val:', dum.SOMECONST
This prints "Original Val: 42".
dum.SOMECONST = 24
This raises the SyntaxError, as intended.
But, enter an idiot,
dum._CONST = 0
print 'Current Val:', dum.SOMECONST
gives "Current Val: 0"
Is there a better idiot-proof way of achieving this?
Or is it the case that a class member that is initializable but remains constant afterwards is somehow not "pythonic"? (I'm still a newbie learning the pythonic way.)
In that case, what would be a pythonic way of creating a class for which each instance is "configurable" at the time of instantiation only?
Kalpit
Update:
I don't want to create a class for which all the members are immutable. I only want some members to be constant, and others variable at any time.

The simplest way I can think of is to override __setattr__ and raise an error whenever that particular attribute is set, like this:
class dummy(object):
    def __init__(self, arg):
        super(dummy, self).__setattr__("data", arg)

    def __setattr__(self, name, value):
        if name == "data":
            raise AttributeError("Can't modify data")
        else:
            super(dummy, self).__setattr__(name, value)

a = dummy(5)
print a.data
# 5
a.data = "1"
# AttributeError: Can't modify data

One nice thing about collections.namedtuple is that you can derive another class from the class it returns:
from collections import namedtuple

class Foo(namedtuple('Foo', ['a', 'b'])):
    def __new__(cls, a, b, *args, **kwargs):
        return super(Foo, cls).__new__(cls, a, b)

    def __init__(self, a, b, c):
        # a & b are immutable and handled by __new__
        self.c = c
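A quick usage sketch of this class (the values 1, 2, 3 are just illustrative): a and b come from the namedtuple and cannot be reassigned, while c behaves like an ordinary instance attribute.
f = Foo(1, 2, 3)
print(f.c)    # 3
f.c = 30      # fine; c is an ordinary instance attribute
f.a = 10      # AttributeError: can't set attribute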


Not creating an object when conditions are not met in python?

Is it possible to not create an object if certain conditions are not met in the constructor of a class?
E.g.:
class ABC:
    def __init__(self, a):
        if a > 5:
            self.a = a
        else:
            return None

a = ABC(3)
print(a)
This should print None (since it should not create an object but return None in this case), but it currently prints the object...
You can use a classmethod as an alternate constructor and return what you want:
class ABC:
    def __init__(self, a):
        self.a = a

    @classmethod
    def with_validation(cls, a):
        if a > 5:
            return cls(a)
        return None

>>> a = ABC.with_validation(10)
>>> a
<__main__.ABC at 0x10ceec288>
>>> a = ABC.with_validation(4)
>>> a
>>> type(a)
NoneType
This code seems to show that an exception raised in an __init__() gives you the effect you want:
class Obj:
    def __init__(self):
        raise Exception("invalid condition")

class E:
    def __call__(self):
        raise Exception("raise")

def create(aType):
    return aType()

def catchEx():
    e = E()
    funcs = [Obj, int, e]
    for func in funcs:
        try:
            func()
            print('No exception:', func)
        except Exception as e:
            print(e)

catchEx()
Output:
invalid condition
No exception: <class 'int'>
raise
I think this shows the principle. Note that returning None is not returning a new object, because None is a singleton in Python, but of course it is still an object. Note also that __init__ will not be called when __new__ returns None, because None is not an instance of the A class below.
class A():
    def __new__(cls, condition):
        if condition:
            obj = super().__new__(cls)
            return obj

a = A(True)
print(a)
a1 = A(False)
print(a1)
This outputs:
<__main__.A object at 0x7f64e65c62e8>
None
Edit:
I tried to directly address your question by showing the __new__ behaviour. But I think all the answers and comments here are good contributions.
So the proper answer is more about how you should do this kind of thing.
I recommend, more or less in this order depending on your taste and context:
"do the sane thing" by #Matt Messersmith. Test condition outside the class in client code and create the object only when appropriate.
"If the check is complicated and I want to make it easier for the user, it is better placed inside the class." by MrCarnivore. Maybe, maybe not. You can group validation code in functions inside a module that you import and call from the outside, still like in 1) mostly because validation rules can be repetitive or even apply to different kinds of objects. This also hides validation complexity from the client code.
raise an exception and use a try block in client code, by #Farhan.K. This is probably the more pythonic way if you test inside the class. You can still invoke an external data validation function inside the class for this.
define a classmethod in the class that acts as an alternate constructor by #salparadise. This is a good option.
go with a condition inside __new__ but do not pass it as an arg, or you have to use varargs to deal with that calls and __init__ calls. Then if you need varargs for other reasons, does not look a good option.
So I end up recommending several answers and options, except my own example. But I was only illustrating the main point of the question anyway.
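For illustration, a minimal sketch of option 3 (raise in __init__, handle it in client code); the ValueError type and the threshold of 5 are just taken from the question's example:
class ABC:
    def __init__(self, a):
        if a <= 5:
            raise ValueError("a must be greater than 5")
        self.a = a

try:
    obj = ABC(3)
except ValueError as exc:
    obj = None
    print(exc)   # a must be greater than 5

print(obj)       # None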
With the help of @progmatico and a little trial and error, I managed to come to this solution:
class ABC:
    def __new__(cls, *args, **kwargs):
        if len(args) > 0:
            arg = args[0]
        else:
            arg = kwargs['a']
        if arg <= 5:
            return None
        return object.__new__(cls)

    def __init__(self, a):
        self.a = a

    def __str__(self):
        return str(self.a)

a = ABC(a=3)
print(a)
b = ABC(a=7)
print(b)

Dynamically creating @attribute.setter methods for all properties in class (Python)

I have code that someone else wrote like this:
class MyClass(object):
    def __init__(self, data):
        self.data = data

    @property
    def attribute1(self):
        return self.data.another_name1

    @property
    def attribute2(self):
        return self.data.another_name2
and I want to automatically create the corresponding property setters at run time so I don't have to modify the other person's code. The property setters should look like this:
@attribute1.setter
def attribute1(self, val):
    self.data.another_name1 = val

@attribute2.setter
def attribute2(self, val):
    self.data.another_name2 = val
How do I dynamically add these setter methods to the class?
You can write a custom Descriptor like this:
from operator import attrgetter

class CustomProperty(object):
    def __init__(self, attr):
        self.attr = attr

    def __get__(self, ins, type):
        print 'inside __get__'
        if ins is None:
            return self
        else:
            return attrgetter(self.attr)(ins)

    def __set__(self, ins, value):
        print 'inside __set__'
        head, tail = self.attr.rsplit('.', 1)
        obj = attrgetter(head)(ins)
        setattr(obj, tail, value)

class MyClass(object):
    def __init__(self, data):
        self.data = data

    attribute1 = CustomProperty('data.another_name1')
    attribute2 = CustomProperty('data.another_name2')
Demo:
>>> class Foo():
... pass
...
>>> bar = MyClass(Foo())
>>>
>>> bar.attribute1 = 10
inside __set__
>>> bar.attribute2 = 20
inside __set__
>>> bar.attribute1
inside __get__
10
>>> bar.attribute2
inside __get__
20
>>> bar.data.another_name1
10
>>> bar.data.another_name2
20
This is the author of the question. I found a very jerry-rigged solution, but I don't know another way to do it. (I am using Python 3.4, by the way.)
I'll start with the problems I ran into.
First, I thought about overwriting the property entirely, something like this:
Given this class
class A(object):
    def __init__(self):
        self._value = 42

    @property
    def value(self):
        return self._value
and you can overwrite the property entirely by doing something like this:
a = A()
A.value = 31  # This just redirects A.value from the @property to the int 31
a.value       # Returns 31
The problem is that this is done at the class level and not at the instance level, so if I make a new instance of A then this happens:
a2 = A()
a2.value  # Returns 31, because the class itself was modified in the previous code block.
I want that to return a2._value because a2 is a totally new instance of A() and therefore shouldn't be influenced by what I did to a.
The solution to this was to overwrite A.value with a new property rather than whatever I wanted to assign the instance _value to. I learned that you can create a new property that instantiates itself from the old property using the special getter, setter, and deleter methods (see here). So I can overwrite A's value property and make a setter for it by doing this:
def make_setter(name):
    def value_setter(self, val):
        setattr(self, name, val)
    return value_setter

my_setter = make_setter('_value')
A.value = A.value.setter(my_setter)  # This takes the property defined in the above class and overwrites the setter with my_setter
setattr(A, 'value', getattr(A, 'value').setter(my_setter))  # This does the same thing as the line above I think, so you only need one of them
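A quick check of the result (continuing from the class A above; the value 10 is arbitrary):
a = A()
a.value = 10     # now routed through my_setter, which sets a._value
print(a.value)   # 10
print(a._value)  # 10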
This is all well and good as long as the original class has something extremely simple in the original class's property definition (in this case it was just return self._value). However, as soon as you get more complicated, to something like return self.data._value like I have, things get nasty -- like @BrenBarn said in his comment on my post. I used the inspect.getsourcelines(A.value.fget) function to get the source code line that contains the return value and parsed that. If I failed to parse the string, I raised an exception. The result looks something like this:
def make_setter(name, attrname=None):
    def setter(self, val):
        try:
            split_name = name.split('.')
            child_attr = getattr(self, split_name[0])
            for i in range(len(split_name) - 2):
                child_attr = getattr(child_attr, split_name[i + 1])
            setattr(child_attr, split_name[-1], val)
        except:
            raise Exception("Failed to set property attribute {0}".format(name))
    return setter  # return the closure so it can be passed to the property's .setter()
It seems to work but there are probably bugs.
Now the question is, what to do if the thing failed? That's up to you and sort of off track from this question. Personally, I did a bit of nasty stuff that involves creating a new class that inherits from A (let's call this class B). Then if the setter worked for A, it will work for the instance of B because A is a base class. However, if it didn't work (because the return value defined in A was something nasty), I ran a setattr(B, name, val) on the class B. This would normally change all other instances that were created from B (like in the 2nd code block in this post), but I dynamically create B using type('B', (A,), {}) and only use it once ever, so changing the class itself has no effect on anything else.
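A minimal sketch of that last idea (using the class A from above; the attribute name 'value' and the value 31 are just examples): the dynamically created subclass B is modified, while A and its other instances are left alone.
B = type('B', (A,), {})   # one-off subclass of A, used for a single instance
b = B()
setattr(B, 'value', 31)   # shadows A's property on B only

print(b.value)            # 31
print(A().value)          # 42 -- instances of A are unaffected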
There is a lot of black-magic type stuff going on here I think, but it's pretty cool and quite versatile in the day or so I've been using it. None of this is copy-pastable code, but if you understand it then you can write your modifications.
I really hope/wish there is a better way, but I do not know of one. Maybe metaclasses or descriptors created from classes can do some nice magic for you, but I do not know enough about them yet to be sure.
Comments appreciated!

Ruby like DSL in Python

I'm currently writing my first bigger project in Python, and I'm now wondering how to define a class method so that you can execute it in the class body of a subclass of the class.
First, to give some more context, here is a stripped-down example (I removed everything non-essential for this question) of how I'd do the thing I'm trying to do in Ruby:
If I define a class Item like this:
class Item
  def initialize(data={})
    @data = data
  end

  def self.define_field(name)
    define_method("#{name}") { instance_variable_get("@data")[name.to_s] }
    define_method("#{name}=") do |value|
      instance_variable_get("@data")[name.to_s] = value
    end
  end
end
I can use it like this:
class MyItem < Item
  define_field("name")
end

item = MyItem.new
item.name = "World"
puts "Hello #{item.name}!"
Now so far I tried achieving something similar in Python, but I'm not happy with the result I've got so far:
class ItemField(object):
    def __init__(self, name):
        self.name = name

    def __get__(self, item, owner=None):
        return item.values[self.name]

    def __set__(self, item, value):
        item.values[self.name] = value

    def __delete__(self, item):
        del item.values[self.name]

class Item(object):
    def __init__(self, data=None):
        if data == None: data = {}
        self.values = data
        for field in type(self).fields:
            self.values[field.name] = None
            setattr(self, field.name, field)

    @classmethod
    def define_field(cls, name):
        if not hasattr(cls, "fields"): cls.fields = []
        cls.fields.append(ItemField(name))
Now I don't know how I can call define_field from within a subclass's body. This is what I wish were possible:
class MyItem(Item):
    define_field("name")

item = MyItem({"name": "World"})
print("Hello {}!".format(item.name))
item.name = "reader"
print("Hello {}!".format(item.name))
There's this similar question but none of the answers are really satisfying; somebody recommends calling the function with __func__(), but I guess I can't do that, because I can't get a reference to the class from within its anonymous body (please correct me if I'm wrong about this).
Somebody else pointed out that it's better to use a module-level function for doing this, which I also think would be the easiest way; however, the main intention of me doing this is to make the implementation of subclasses clean, and having to load that module function wouldn't be too nice either. (Also, I'd have to do the function call outside the class body, and I think this is messy.)
So basically I think my approach is wrong, because Python wasn't designed to allow this kind of thing to be done. What would be the best way to achieve something as in the Ruby example with Python?
(If there's no better way I've already thought about just having a method in the subclass which returns an array of the parameters for the define_field method.)
Perhaps calling a class method isn't the right route here. I'm not quite up to speed on exactly how and when Python creates classes, but my guess is that the class object doesn't yet exist when you'd call the class method to create an attribute.
It looks like you want to create something like a record. First, note that Python allows you to add attributes to your user-created classes after creation:
class Foo(object):
    pass
>>> foo = Foo()
>>> foo.x = 42
>>> foo.x
42
Maybe you want to constrain which attributes the user can set. Here's one way.
class Item(object):
    def __init__(self):
        if type(self) is Item:
            raise NotImplementedError("Item must be subclassed.")

    def __setattr__(self, name, value):
        if name not in self.fields:
            raise AttributeError("Invalid attribute name.")
        else:
            self.__dict__[name] = value

class MyItem(Item):
    fields = ("foo", "bar", "baz")
So that:
>>> m = MyItem()
>>> m.foo = 42 # works
>>> m.bar = "hello" # works
>>> m.test = 12 # raises AttributeError
Lastly, the above allows the user to subclass Item without defining fields, like so:
class MyItem(Item):
    pass
This will result in a cryptic attribute error saying that the attribute fields could not be found. You can require that the fields attribute be defined at the time of class creation by using metaclasses. Furthermore, you can abstract away the need for the user to specify the metaclass by inheriting from a superclass that you've written to use the metaclass:
class ItemMetaclass(type):
    def __new__(cls, clsname, bases, dct):
        if "fields" not in dct:
            raise TypeError("Subclass must define 'fields'.")
        return type.__new__(cls, clsname, bases, dct)

class Item(object):
    __metaclass__ = ItemMetaclass
    fields = None

    def __init__(self):
        if type(self) == Item:
            raise NotImplementedError("Must subclass Type.")

    def __setattr__(self, name, value):
        if name in self.fields:
            self.__dict__[name] = value
        else:
            raise AttributeError("The item has no such attribute.")

class MyItem(Item):
    fields = ("one", "two", "three")
You're almost there! If I understand you correctly:
class Item(object):
    def __init__(self, data=None):
        fields = data or {}
        for field, value in fields.items():
            if hasattr(self, field):
                setattr(self, field, value)

    @classmethod
    def define_field(cls, name):
        setattr(cls, name, None)
EDIT: As far as I know, it's not possible to access the class being defined while defining it. You can, however, call the method from the subclass's __init__ method:
class Something(Item):
    def __init__(self):
        type(self).define_field("name")
But then you're just reinventing the wheel.
When defining a class, you cannot reference the class itself inside its own definition block. So you have to call define_field(...) on MyItem after its definition. E.g.,
class MyItem(Item):
    pass

MyItem.define_field("name")
item = MyItem({"name": "World"})
print("Hello {}!".format(item.name))
item.name = "reader"
print("Hello {}!".format(item.name))

Use class variables as instance vars?

What I would like to do here is declare class variables, but actually use them as variables of the instance. I have a class Field and a class Thing, like this:
class Field(object):
    def __set__(self, instance, value):
        for key, v in vars(instance.__class__).items():
            if v is self:
                instance.__dict__.update({key: value})

    def __get__(self, instance, owner):
        for key, v in vars(instance.__class__).items():
            if v is self:
                try:
                    return instance.__dict__[key]
                except:
                    return None

class Thing(object):
    foo = Field()
So when I instantiate a Thing and set the attribute foo, it will be added to the instance, not the class; the class variable is never actually re-set.
new = Thing()
new.foo = 'bar'
# (foo : 'bar') is stored in new.__dict__
This works so far, but the above code for Field is rather awkward. It has to look for the Field object instance in the class's attributes; otherwise there seems to be no way of knowing the name of the property (foo) in __set__ and __get__. Is there another, more straightforward way to accomplish this?
Every instance of Field (effectively) has a name. Its name is the attribute name (or key) which references it in Thing. Instead of having to look up the key dynamically, you could instantiate Fields with the name at the time the class attribute is set in Thing:
class Field(object):
    def __init__(self, name):
        self.name = name

    def __set__(self, instance, value):
        instance.__dict__.update({self.name: value})

    def __get__(self, instance, owner):
        if instance is None:
            return self
        try:
            return instance.__dict__[self.name]
        except KeyError:
            return None

def make_field(*args):
    def wrapper(cls):
        for arg in args:
            setattr(cls, arg, Field(arg))
        return cls
    return wrapper

@make_field('foo')
class Thing(object):
    pass
And it can be used like this:
new = Thing()
Before new.foo is set, new.foo returns None:
print(new.foo)
# None
After new.foo is set, 'foo' is an instance attribute of new:
new.foo = 'bar'
print(new.__dict__)
# {'foo': 'bar'}
You can access the descriptor (the Field instance itself) with Thing.foo:
print(Thing.foo)
# <__main__.Field object at 0xb76cedec>
PS. I'm assuming you have a good reason why
class Thing(object):
    foo = None
does not suffice.
Reread your question and realized I had it wrong:
You don't need to override the default python behavior to do this. For example, you could do the following:
class Thing(object):
    foo = 5
>>> r = Thing()
>>> r.foo = 10
>>> s = Thing()
>>> print Thing.foo
5
>>> print r.foo
10
>>> print s.foo
5
If you want the default to be 'None' for a particular variable, you could just set the class-wide value to be None. That said, you would have to declare it specifically for each variable.
The easiest way would be to call the attribute something other than the name of the descriptor variable - preferably starting with _ to signal it's an implementation detail. That way, you end up with:
def __set__(self, instance, value):
    instance._foo = value

def __get__(self, instance, owner):
    return getattr(instance, '_foo', None)
The only drawback of this is that you can't determine the name of the key from the one used for the descriptor. If that increased coupling isn't a problem compared to the loop, you could just use a property:
class Thing:
    @property
    def foo(self):
        return getattr(self, '_foo', None)

    @foo.setter
    def foo(self, value):
        self._foo = value
otherwise, you could pass the name of the variable into the descriptor's __init__, so that you have:
class Thing:
    foo = Field('_foo')
Of course, all this assumes that the simplest and most Pythonic way - use a real variable Thing().foo that you set to None in Thing.__init__ - isn't an option for some reason. If that way will work for you, you should prefer it.
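For completeness, a minimal sketch of that plain-attribute approach (no descriptors involved):
class Thing(object):
    def __init__(self):
        self.foo = None   # ordinary instance attribute with a default of None

new = Thing()
print(new.foo)   # None
new.foo = 'bar'
print(new.foo)   # bar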

Create per-instance property descriptor?

Usually Python descriptors are defined as class attributes. But in my case, I want every object instance to have a different set of descriptors that depends on the input. For example:
class MyClass(object):
    def __init__(self, **kwargs):
        for attr, val in kwargs.items():
            self.__dict__[attr] = MyDescriptor(val)
Each object has a different set of attributes, decided at instantiation time. Since these are one-off objects, it is not convenient to subclass them first.
tv = MyClass(type="tv", size="30")
smartphone = MyClass(type="phone", os="android")
tv.size # do something smart with the descriptor
Assigning the descriptor to the object does not seem to work. If I try to access the attribute, I get something like
<property at 0x4067cf0>
Do you know why is this not working? Is there any work around?
This is not working because you have to assign the descriptor to the class of the object.
class Descriptor:
    def __get__(...):
        # this is called when the value is got
    def __set__(...
    def __delete__(...
If you write
obj.attr
=> type(obj).__getattribute__(obj, 'attr') is called
=> obj.__dict__['attr'] is returned if it is there, else:
=> type(obj).__dict__['attr'] is looked up
If this contains a descriptor object, then the descriptor is used.
So it does not work because the type's dictionary is searched for descriptors, not the object's dictionary.
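A small sketch of that difference (the class and attribute names are just illustrative):
class Desc(object):
    def __get__(self, obj, owner):
        return 'via descriptor protocol'

class C(object):
    on_class = Desc()        # descriptor assigned on the class

c = C()
c.on_instance = Desc()       # descriptor object merely stored in the instance dict

print(c.on_class)            # via descriptor protocol
print(c.on_instance)         # <__main__.Desc object at 0x...>  (protocol not triggered)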
There are possible workarounds:
1. Put the descriptor into the class and make it use e.g. obj.xxxattr to store the value. If there is only one descriptor behaviour, this works (see the sketch after this list).
2. Overwrite __setattr__, __getattribute__ and __delattr__ to respond to descriptors.
3. Put a descriptor into the class that responds to descriptors stored in the object dictionary.
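A minimal sketch of the first workaround (the attribute name xxxattr is taken from the wording above; the class names are illustrative): the descriptor lives on the class but stores each instance's value under a fixed attribute on that instance.
class SharedDescriptor(object):
    def __get__(self, obj, owner):
        if obj is None:
            return self
        return getattr(obj, 'xxxattr', None)   # per-instance storage

    def __set__(self, obj, value):
        obj.xxxattr = value

class MyClass(object):
    size = SharedDescriptor()   # one descriptor on the class serves all instances

tv = MyClass()
tv.size = "30"
print(tv.size)   # 30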
You are using descriptors in the wrong way.
Descriptors don't make sense on an instance level. After all, the __get__/__set__ methods give you access to the instance of the class.
Without knowing what exactly you want to do, I'd suggest you put the per-instance logic inside the __set__ method, checking who the "caller/instance" is and acting accordingly.
Otherwise tell us what you are trying to achieve, so that we can propose alternative solutions.
I dynamically create instances by execing a made-up class. This may suit your use case.
def make_myclass(**kwargs):
    class MyDescriptor(object):
        def __init__(self, val):
            self.val = val
        def __get__(self, obj, cls):
            return self.val
        def __set__(self, obj, val):
            self.val = val
    cls = 'class MyClass(object):\n{}'.format('\n'.join(' {0} = MyDescriptor({0})'.format(k) for k in kwargs))
    # check if names in kwargs collide with local names
    for key in kwargs:
        if key in locals():
            raise Exception('name "{}" collides with local name'.format(key))
    kwargs.update(locals())
    exec(cls, kwargs, locals())
    return MyClass()
Test:
In [577]: tv = make_myclass(type="tv", size="30")
In [578]: tv.type
Out[578]: 'tv'
In [579]: tv.size
Out[579]: '30'
In [580]: tv.__dict__
Out[580]: {}
But the instances are of different class.
In [581]: phone = make_myclass(type='phone')
In [582]: phone.type
Out[582]: 'phone'
In [583]: tv.type
Out[583]: 'tv'
In [584]: isinstance(tv,type(phone))
Out[584]: False
In [585]: isinstance(phone,type(tv))
Out[585]: False
In [586]: type(tv)
Out[586]: MyClass
In [587]: type(phone)
Out[587]: MyClass
In [588]: type(phone) is type(tv)
Out[588]: False
This looks like a use case for named tuples.
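For instance, a brief sketch of that suggestion (field names taken from the question's example):
from collections import namedtuple

TV = namedtuple('TV', ['type', 'size'])
Smartphone = namedtuple('Smartphone', ['type', 'os'])

tv = TV(type="tv", size="30")
smartphone = Smartphone(type="phone", os="android")
print(tv.size)        # 30
print(smartphone.os)  # android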
The reason it is not working is because Python only checks for descriptors when looking up attributes on the class, not on the instance; the methods in question are:
__getattribute__
__setattr__
__delattr__
It is possible to override those methods on your class in order to implement the descriptor protocol on instances as well as classes:
# do not use in production, example code only, needs more checks
class ClassAllowingInstanceDescriptors(object):
    def __delattr__(self, name):
        res = self.__dict__.get(name)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, use it
                res = res.__delete__(name)
                break
        else:
            res = object.__delattr__(self, name)
        return res

    def __getattribute__(self, *args):
        res = object.__getattribute__(self, *args)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, call it
                res = res.__get__(self, self.__class__)
        return res

    def __setattr__(self, name, val):
        # check if object already exists
        res = self.__dict__.get(name)
        for method in ('__get__', '__set__', '__delete__'):
            if hasattr(res, method):
                # we have a descriptor, use it
                res = res.__set__(self, val)
                break
        else:
            res = object.__setattr__(self, name, val)
        return res

    @property
    def world(self):
        return 'hello!'
When the above class is used as below:
huh = ClassAllowingInstanceDescriptors()
print(huh.world)
huh.uni = 'BIG'
print(huh.uni)
huh.huh = property(lambda *a: 'really?')
print(huh.huh)
print('*' * 50)
try:
    del huh.world
except Exception, e:
    print(e)
print(huh.world)
print('*' * 50)
try:
    del huh.huh
except Exception, e:
    print(e)
print(huh.huh)
The results are:
hello!
BIG
really?
can't delete attribute
hello!
can't delete attribute
really?
