How to forbid creation of new class attributes in Python?

This may appear as a very basic question, but I couldn't find anything helpful on SO or elsewhere...
If you take built-in classes, such as int or list, there is no way to create additional class attributes for them (which is obviously a desirable behavior):
>>> int.x = 0
Traceback (most recent call last):
  File "<pyshell#16>", line 1, in <module>
    int.x = 0
TypeError: can't set attributes of built-in/extension type 'int'
but if you create your own custom class, this restriction is not active by default, so anybody may create additional class attributes on it:
class foo(object):
    a = 1
    b = 2

>>> foo.c = 3
>>> print(foo.a, foo.b, foo.c)
1 2 3
I know that the __slots__ class attribute is one solution (among others) to forbid creation of unwanted instance attributes, but what is the process to forbid unwanted class attributes, as done in the built-in classes?

I think you should play with metaclasses. A metaclass can define the behavior of the class itself, instead of its instances.
The comment from Patrick Haugh refers to another SO answer with the following code snippet:
class FrozenMeta(type):
    def __new__(cls, name, bases, dct):
        inst = super().__new__(cls, name, bases, {"_FrozenMeta__frozen": False, **dct})
        inst.__frozen = True
        return inst

    def __setattr__(self, key, value):
        if self.__frozen and not hasattr(self, key):
            raise TypeError("I am frozen")
        super().__setattr__(key, value)

class A(metaclass=FrozenMeta):
    a = 1
    b = 2

A.a = 2
A.c = 1  # TypeError: I am frozen
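Note that with this snippet subclasses end up frozen as well, which is what the tweak below addresses. A quick check of my own, not part of the linked answer:
class B(A):
    pass

B.x = 1  # TypeError: I am frozen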

@AlexisBRENON's answer works, but if you want to emulate the behavior of a built-in class, where subclasses are allowed to override attributes, you can set the __frozen attribute to True only when the bases argument is empty:
class FrozenMeta(type):
    def __new__(cls, name, bases, dct):
        inst = super().__new__(cls, name, bases, {"_FrozenMeta__frozen": False, **dct})
        inst.__frozen = not bases
        return inst

    def __setattr__(self, key, value):
        if self.__frozen and not hasattr(self, key):
            raise TypeError("I am frozen")
        super().__setattr__(key, value)

class A(metaclass=FrozenMeta):
    a = 1
    b = 2

class B(A):
    pass

B.a = 2
B.c = 1  # this is OK
A.c = 1  # TypeError: I am frozen
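Either way, the metaclass only guards assignment on the class object itself; instances can still grow new attributes unless you also declare __slots__. A small sketch combining the two, my own addition rather than part of the answers:
class Sealed(metaclass=FrozenMeta):
    __slots__ = ()
    a = 1
    b = 2

Sealed.c = 1    # TypeError: I am frozen
Sealed().d = 2  # AttributeError: 'Sealed' object has no attribute 'd'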

Whenever you see built-in/extension type you are dealing with an object that was not created in Python. The built-in types of CPython were created with C, for example, and so the extra behavior of assigning new attributes was simply not written in.
You see similar behavior with __slots__:
>>> class Huh:
...     __slots__ = ('a', 'b')
...
>>> class Hah(Huh):
...     pass
...
>>> Huh().c = 5  # traceback
>>> Hah().c = 5  # works
As far as making Python classes immutable, or at least unable to have new attributes defined, a metaclass is the route to go -- although anything written in pure Python will be modifiable, it's just a matter of how much effort it will take:
>>> class A(metaclass=FrozenMeta):
...     a = 1
...     b = 2
...
>>> type.__setattr__(A, 'c', 9)
>>> A.c
9
A more complete metaclass:
class Locked(type):
    "support various levels of immutability"
    #
    def __new__(metacls, cls_name, bases, clsdict, create=False, change=False, delete=False):
        cls = super().__new__(metacls, cls_name, bases, {
                "_Locked__create": True,
                "_Locked__change": True,
                "_Locked__delete": True,
                **clsdict,
                })
        cls.__create = create
        cls.__delete = delete
        # set __change last so the guard in __setattr__ stays open while the flags are initialised
        cls.__change = change
        return cls
    #
    def __setattr__(cls, name, value):
        if hasattr(cls, name):
            if cls.__change:
                super().__setattr__(name, value)
            else:
                raise TypeError('%s: cannot modify %r' % (cls.__name__, name))
        elif cls.__create:
            super().__setattr__(name, value)
        else:
            raise TypeError('%s: cannot create %r' % (cls.__name__, name))
    #
    def __delattr__(cls, name):
        if not hasattr(cls, name):
            raise AttributeError('%s: %r does not exist' % (cls.__name__, name))
        if not cls.__delete or name in (
                '_Locked__create', '_Locked__change', '_Locked__delete',
                ):
            raise TypeError('%s: cannot delete %r' % (cls.__name__, name))
        super().__delattr__(name)
and in use:
>>> class Changable(metaclass=Locked, change=True):
...     a = 1
...     b = 2
...
>>> Changable.a = 9
>>> Changable.c = 7
Traceback (most recent call last):
  ...
TypeError: Changable: cannot create 'c'
>>> del Changable.b
Traceback (most recent call last):
  ...
TypeError: Changable: cannot delete 'b'
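With the defaults (create, change and delete all False) you get a fully locked class. This follow-up demo is my own inference from the metaclass above, not part of the original answer:
>>> class Frozen(metaclass=Locked):
...     a = 1
...
>>> Frozen.a = 2
Traceback (most recent call last):
  ...
TypeError: Frozen: cannot modify 'a'
>>> Frozen.b = 3
Traceback (most recent call last):
  ...
TypeError: Frozen: cannot create 'b'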

Related

How to make a specific instance of a class immutable?

Having an instance c of a class C,
I would like to make c immutable, but other instances of C don't have to be.
Is there an easy way to achieve this in Python?
You can't make Python classes fully immutable. You can however imitate it:
class C:
    _immutable = False

    def __setattr__(self, name, value):
        if self._immutable:
            raise TypeError(f"Can't set attribute, {self!r} is immutable.")
        super().__setattr__(name, value)
Example:
>>> c = C()
>>> c.hello = 123
>>> c.hello
123
>>> c._immutable = True
>>> c.hello = 456
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __setattr__
TypeError: Can't set attribute, <__main__.C object at 0x000002087C679D20> is immutable.
If you wish to set it at initialization, you can add an __init__ like so:
class C:
    _immutable = False

    def __init__(self, immutable=False):
        self._immutable = immutable

    def __setattr__(self, name, value):
        if self._immutable:
            raise TypeError(f"Can't set attribute, {self!r} is immutable.")
        super().__setattr__(name, value)
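For example (my own quick demo of how that behaves, not taken from the answer):
>>> c = C(immutable=True)
>>> c.hello = 123
Traceback (most recent call last):
  ...
TypeError: Can't set attribute, <__main__.C object at 0x...> is immutable.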
Keep in mind you can still bypass it by accessing and modifying the __dict__ of the instance directly:
>>> c = C(immutable=True)
>>> c.__dict__["hello"] = 123
>>> c.hello
123
You may attempt to block it like so:
class C:
    _immutable = False

    def __init__(self, immutable=False):
        self._immutable = immutable

    def __getattribute__(self, name):
        if name == "__dict__":
            raise TypeError("Can't access class dict.")
        return super().__getattribute__(name)

    def __setattr__(self, name, value):
        if self._immutable:
            raise TypeError(f"Can't set attribute, {self!r} is immutable.")
        super().__setattr__(name, value)
But even then it's possible to bypass:
>>> c = C(immutable=True)
>>> object.__getattribute__(c, "__dict__")["hello"] = 123
>>> c.hello
123

TypeError for class instance when checking attributes as suggested by @jusbueno

I am referring to the question asked in How to force/ensure class attributes are a specific type? (shown below).
The type checking works as suggested. However, instantiating the class raises an error. Namely, when I instantiate the class as follows and call __dict__ on it, the error comes up.
excel_parser.py:
one_foo = Foo()
one_foo.__dict__
results in:
Traceback (most recent call last):
  File "C:/Users/fiona/PycharmProjects/data_processing/excel_parser.py", line 80, in <module>
    Foo.__dict__
TypeError: descriptor '__dict__' for 'Foo' objects doesn't apply to a 'Foo' object
How can I prevent this from happening? Thanks
def getter_setter_gen(name, type_):
    def getter(self):
        return getattr(self, "__" + name)
    def setter(self, value):
        if not isinstance(value, type_):
            raise TypeError(f"{name} attribute must be set to an instance of {type_}")
        setattr(self, "__" + name, value)
    return property(getter, setter)

def auto_attr_check(cls):
    new_dct = {}
    for key, value in cls.__dict__.items():
        if isinstance(value, type):
            value = getter_setter_gen(key, value)
        new_dct[key] = value
    # Creates a new class, using the modified dictionary as the class dict:
    return type(cls)(cls.__name__, cls.__bases__, new_dct)

@auto_attr_check
class Foo(object):
    bar = int
    baz = str
    bam = float
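The question itself is left open above; one plausible fix (my suggestion, not taken from the thread) is to have auto_attr_check skip the __dict__ and __weakref__ descriptors when rebuilding the class dict, since the copied descriptors still belong to the original class object and that mismatch is exactly what raises the TypeError:
def auto_attr_check(cls):
    new_dct = {}
    for key, value in cls.__dict__.items():
        if key in ('__dict__', '__weakref__'):
            # These slot descriptors are bound to the original class object;
            # copying them into the rebuilt class is what triggers the
            # "descriptor '__dict__' ... doesn't apply" error.
            continue
        if isinstance(value, type):
            value = getter_setter_gen(key, value)
        new_dct[key] = value
    return type(cls)(cls.__name__, cls.__bases__, new_dct)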

Can you have constraints on Python3 NamedTuple attributes?

I have a simple NamedTuple that I want to enforce a constraint on. Is it possible?
Take the following example:
from typing import NamedTuple
class Person(NamedTuple):
first_name: str
last_name: str
If I had a desired maximum length for the name fields (e.g. 50 characters), how can I ensure that you cannot make a Person object with a name longer than that?
Normally, if this were just a class, not a NamedTuple, I'd handle this with a @property, an @attr.setter, and an overridden __init__ method. But NamedTuples can't have an __init__, and I can't see a way of having just a setter for one of the attributes (and if I could, I don't know if, upon construction, the NamedTuple would even use it).
So, is this possible?
Note: I specifically want to use a NamedTuple (rather than trying to make a class immutable via my own methods/magic)
So I coded something that basically does what I wanted. I forgot to post it here, so it's evolved slightly from my original question, but I thought I'd best post here so that others can make use of it if they want.
import inspect
from collections import namedtuple

class TypedTuple:
    _coerce_types = True

    def __new__(cls, *args, **kwargs):
        # Get the specified public attributes on the class definition
        typed_attrs = cls._get_typed_attrs()

        # For each positional argument, get the typed attribute, and check its validity
        new_args = []
        for i, attr_value in enumerate(args):
            typed_attr = typed_attrs[i]
            new_value = cls.__parse_attribute(typed_attr, attr_value)
            # Build a new args list to construct the namedtuple with
            new_args.append(new_value)

        # For each keyword argument, get the typed attribute, and check its validity
        new_kwargs = {}
        for attr_name, attr_value in kwargs.items():
            typed_attr = (attr_name, getattr(cls, attr_name))
            new_value = cls.__parse_attribute(typed_attr, attr_value)
            # Build a new kwargs object to construct the namedtuple with
            new_kwargs[attr_name] = new_value

        # Return a constructed named tuple using the typed attributes and the supplied arguments
        return namedtuple(cls.__name__, [attr[0] for attr in typed_attrs])(*new_args, **new_kwargs)

    @classmethod
    def __parse_attribute(cls, typed_attr, attr_value):
        # Try to find a function defined on the class to do checks on the supplied value
        check_func = getattr(cls, f'_parse_{typed_attr[0]}', None)
        if inspect.isroutine(check_func):
            attr_value = check_func(attr_value)
        else:
            # If the supplied value is not the correct type, attempt to coerce it if _coerce_types is True
            if not isinstance(attr_value, typed_attr[1]):
                if cls._coerce_types:
                    # Coerce the value to the type, and assign back to attr_value for further validation
                    attr_value = typed_attr[1](attr_value)
                else:
                    raise TypeError(f'{typed_attr[0]} is not of type {typed_attr[1]}')
        # Return the (possibly coerced) value
        return attr_value

    @classmethod
    def _get_typed_attrs(cls) -> list:
        all_items = cls.__dict__.items()
        public_items = filter(lambda attr: not attr[0].startswith('_') and not attr[0].endswith('_'), all_items)
        public_attrs = filter(lambda attr: not inspect.isroutine(attr[1]), public_items)
        return [attr for attr in public_attrs if isinstance(attr[1], type)]
This is my TypedTuple class, it basically behaves like a NamedTuple, except that you get type checking. It has the following basic usage:
>>> class Person(TypedTuple):
...     """ Note, syntax is var=type, not annotation-style var: type
...     """
...     name=str
...     age=int
...
>>> Person('Dave', 21)
Person(name='Dave', age=21)
>>>
>>> # Like NamedTuple, argument order matters
>>> Person(21, 'dave')
Traceback (most recent call last):
  ...
ValueError: invalid literal for int() with base 10: 'dave'
>>>
>>> # Can use named arguments
>>> Person(age=21, name='Dave')
Person(name='Dave', age=21)
So now you have a named tuple, which behaves in basically the same way, but it will type check the arguments you supply.
By default, the TypedTuple will also attempt to coerce the data you give it, into the types you say that it should be:
>>> dave = Person('Dave', '21')
>>> type(dave.age)
<class 'int'>
This behaviour can be turned off:
>>> class Person(TypedTuple):
...     _coerce_types = False
...     name=str
...     age=int
...
>>> Person('Dave', '21')
Traceback (most recent call last):
  ...
TypeError: age is not of type <class 'int'>
Finally, you can also specify special parse methods, that can do any specific checking or coercing you want to do. These methods have the naming convention _parse_ATTR:
>>> class Person(TypedTuple):
...     name=str
...     age=int
...
...     def _parse_age(value):
...         if value < 0:
...             raise ValueError('Age cannot be less than 0')
...         return value
...
>>> Person('dave', -3)
Traceback (most recent call last):
  ...
ValueError: Age cannot be less than 0
I hope someone else finds this useful.
(Please note, this code will only work in Python3)
You are going to have to override the __new__ method that constructs the subclass.
Here is an example that defines a name checking function inside of __new__ and checks each of the arguments.
from collections import namedtuple

# create the named tuple
BasePerson = namedtuple('person', 'first_name last_name')

# subclass the named tuple, overload new
class Person(BasePerson):
    def __new__(cls, *args, **kwargs):
        def name_check(name):
            assert len(name) < 50, 'Length of input name "{}" is too long'.format(name)

        # check the arguments
        for a in args + tuple(kwargs.values()):
            name_check(a)

        self = super().__new__(cls, *args, **kwargs)
        return self
Now we can test a few inputs...
Person('hello','world')
# returns:
Person(first_name='hello', last_name='world')
Person('hello','world'*10)
# raises:
AssertionError Traceback (most recent call last)
<ipython-input-42-1ee8a8154e81> in <module>()
----> 1 Person('hello','world'*10)
<ipython-input-40-d0fa9033c890> in __new__(cls, *args, **kwargs)
12 # check the arguments
13 for a in args + tuple(kwargs.values()):
---> 14 name_check(a)
15
16 self = super().__new__(cls, *args, **kwargs)
<ipython-input-40-d0fa9033c890> in name_check(name)
8 def __new__(cls, *args, **kwargs):
9 def name_check(name):
---> 10 assert len(name)<50, 'Length of input name "{}" is too long'.format(name)
11
12 # check the arguments
AssertionError: Length of input name "worldworldworldworldworldworldworldworldworldworld" is too long

Python: Copy properties with their functions (fget, fset, fdel) from one class to another

I know that questions about copying properties or creating properties dynamically have already been posted and answered (here, here and here). You can also find an excellent description of how the property function works here.
But I think my question is a bit more specific. I do not only want to copy the property from one class to another. No, I also want the specific getter, setter and deleter functions to be copied to the destination class. After a whole day of searching for an answer, I decided to create a new post for this question.
So let me go into a bit more detail. I have an attribute class, which is more of a class group, that stores property classes:
class AttrContainer():
    class a():
        ATTR=1
        @property
        def a(self):
            return self.ATTR
        @a.setter
        def a(self, n):
            self.ATTR = n + 3.021

    class b():
        ATTR=None
        @property
        def b(self):
            return "Something"

    class c():
        ATTR=None
        @property
        def c(self):
            return 3
        @c.setter
        def c(self, n):
            self.ATTR = n - 8.5201
As you can see, I have different getter and setter (not in the example: deleter) definitions for each property.
I want to use those properties with my item "wrapper" objects. But not all item objects need all properties, that's why I want to copy them dynamically into my wrapper classes.
So, this is how my item "wrapper" classes looks like:
class Item01Object():
    properties = ["a","c"]
    ATTR = None
    #[...]

class Item02Object():
    properties = ["b","c"]
    ATTR = None
    #[...]

#[...]
Because I can't set the properties dynamically while the item class is being instantiated, I have to set them before I instantiate the class:
def SetProperties( ItemObject ):
    for propName, cls in AttrContainer.__dict__.iteritems():
        if propName in ItemObject.properties:
            prop = cls.__dict__[propName]
            fget = prop.fget if prop.fget else None
            fset = prop.fset if prop.fset else None
            fdel = prop.fdel if prop.fdel else None
            ItemObject.__dict__[propName] = property(fget, fset, fdel)
    return ItemObject()
In the end, i would instance my ItemObjects like this:
item = SetProperties(Item01Object)
I would expect that this would work...
>>> print item
<__builtin__.Item01Object instance at 0x0000000003270F88>
>>> print item.a
None
This result is right, because I did not update my property ATTR yet.
Lets change the property:
>>> item.a = 20
>>> print item.a
20
But this result is wrong: it should be 23.021, NOT 20. It looks like my properties are not using the setter functions from their classes.
Why? What do I wrong in my code?
Edit: Sorry, I forgot to remove the inherited object from the ItemObject classes. Now the code works.
For properties with setters and deleters to work properly, your classes need to inherit from object: Why does #foo.setter in Python not work for me?
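The linked problem boils down to Python 2 old-style classes ignoring property setters on assignment. A minimal Python 2 sketch of that failure mode (my own illustration, not code from the answer):
# Python 2: with an old-style class the property setter is silently bypassed
class Old:                       # no 'object' base -> old-style class
    def _get_a(self):
        return self._a
    def _set_a(self, n):
        self._a = n + 3.021
    a = property(_get_a, _set_a)

o = Old()
o.a = 20      # simply rebinds 'a' in the instance dict; _set_a never runs
print o.a     # prints 20, not 23.021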
You can just copy the property object itself over to the new class. It'll hold references to the getter, setter and deleter functions and there is no need to copy those across.
For new-style classes, your code is not working; you cannot assign to a class __dict__ attribute:
>>> item = SetProperties(Item01Object)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in SetProperties
TypeError: 'dictproxy' object does not support item assignment
Use setattr() instead to set attributes on new-style classes:
def SetProperties( ItemObject ):
    for propName, cls in AttrContainer.__dict__.iteritems():
        if propName in ItemObject.properties:
            setattr(ItemObject, propName, cls.__dict__[propName])
    return ItemObject()
Note that the property object is copied across wholesale.
Demo:
>>> class Item01Object(object):
...     properties = ["a","c"]
...     ATTR = None
...
>>> def SetProperties( ItemObject ):
...     for propName, cls in AttrContainer.__dict__.iteritems():
...         if propName in ItemObject.properties:
...             setattr(ItemObject, propName, cls.__dict__[propName])
...     return ItemObject()
...
>>> item = SetProperties(Item01Object)
>>> item
<__main__.Item01Object object at 0x108205850>
>>> item.a
>>> item.a = 20
>>> item.a
23.021
You only have to copy across property objects to the target class once though; that your function returns an instance implies you are planning to use it for all instances created.
I'd make it a decorator instead:
def set_properties(cls):
    for name, propcls in vars(AttrContainer).iteritems():
        if name in cls.properties:
            setattr(cls, name, vars(propcls)[name])
    return cls
then use this on each of your Item*Object classes:
@set_properties
class Item01Object(object):
    properties = ["a","c"]
    ATTR = None

@set_properties
class Item02Object(object):
    properties = ["b","c"]
    ATTR = None
Demo:
>>> def set_properties(cls):
...     for name, propcls in vars(AttrContainer).iteritems():
...         if name in cls.properties:
...             setattr(cls, name, vars(propcls)[name])
...     return cls
...
>>> @set_properties
... class Item01Object(object):
...     properties = ["a","c"]
...     ATTR = None
...
>>> @set_properties
... class Item02Object(object):
...     properties = ["b","c"]
...     ATTR = None
...
>>> item01 = Item01Object()
>>> item01.c = 20
>>> item01.c
3
>>> item02 = Item02Object()
>>> item02.b = 42
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
>>> item02.b
'Something'

What is the easiest, most concise way to make selected attributes in an instance be readonly?

In Python, I want to make selected instance attributes of a class be readonly to code outside of the class. I want there to be no way outside code can alter the attribute, except indirectly by invoking methods on the instance. I want the syntax to be concise. What is the best way? (I give my current best answer below...)
You should use the @property decorator.
>>> class a(object):
...     def __init__(self, x):
...         self.x = x
...     @property
...     def xval(self):
...         return self.x
...
>>> b = a(5)
>>> b.xval
5
>>> b.xval = 6
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
class C(object):
    def __init__(self):
        self.fullaccess = 0
        self.__readonly = 22  # almost invisible to outside code...

    # define a publicly visible, read-only version of '__readonly':
    readonly = property(lambda self: self.__readonly)

    def inc_readonly( self ):
        self.__readonly += 1

c = C()

# prove regular attribute is RW...
print "c.fullaccess = %s" % c.fullaccess
c.fullaccess = 1234
print "c.fullaccess = %s" % c.fullaccess

# prove 'readonly' is a read-only attribute
print "c.readonly = %s" % c.readonly
try:
    c.readonly = 3
except AttributeError:
    print "Can't change c.readonly"
print "c.readonly = %s" % c.readonly

# change 'readonly' indirectly...
c.inc_readonly()
print "c.readonly = %s" % c.readonly
This outputs:
$ python ./p.py
c.fullaccess = 0
c.fullaccess = 1234
c.readonly = 22
Can't change c.readonly
c.readonly = 22
c.readonly = 23
My fingers itch to be able to say
@readonly
self.readonly = 22
i.e., use a decorator on an attribute. It would be so clean...
Here's how:
class whatever(object):
    def __init__(self, a, b, c, ...):
        self.__foobar = 1
        self.__blahblah = 2

    foobar = property(lambda self: self.__foobar)
    blahblah = property(lambda self: self.__blahblah)
(Assuming foobar and blahblah are the attributes you want to be read-only.) Prepending two underscores to an attribute name effectively hides it from outside the class, so the internal versions won't be accessible from the outside. This only works for new-style classes inheriting from object since it depends on property.
On the other hand... this is a pretty silly thing to do. Keeping variables private seems to be an obsession that comes from C++ and Java. Your users should use the public interface to your class because it's well-designed, not because you force them to.
Edit: Looks like Kevin already posted a similar version.
There is no real way to do this. There are ways to make it more 'difficult', but there's no concept of completely hidden, inaccessible class attributes.
If the person using your class can't be trusted to follow the API docs, then that's their own problem. Protecting people from doing stupid stuff just means that they will do far more elaborate, complicated, and damaging stupid stuff to try to do whatever they shouldn't have been doing in the first place.
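For instance, even the property-based C class from the earlier answer can be changed from outside through its name-mangled attribute. A quick illustration of that point, my own, not from the answer:
>>> c = C()
>>> c.readonly
22
>>> c._C__readonly = 99   # name mangling only renames the attribute, it does not hide it
>>> c.readonly
99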
You could use a metaclass that auto-wraps methods (or class attributes) that follow a naming convention into properties (shamelessly taken from Unifying Types and Classes in Python 2.2):
class autoprop(type):
    def __init__(cls, name, bases, dict):
        super(autoprop, cls).__init__(name, bases, dict)
        props = {}
        for name in dict.keys():
            if name.startswith("_get_") or name.startswith("_set_"):
                props[name[5:]] = 1
        for name in props.keys():
            fget = getattr(cls, "_get_%s" % name, None)
            fset = getattr(cls, "_set_%s" % name, None)
            setattr(cls, name, property(fget, fset))
This allows you to use:
class A:
    __metaclass__ = autoprop
    def _get_readonly(self):
        return self.__x
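Note that __metaclass__ is Python 2 syntax; under Python 3 the same metaclass would be hooked in with the metaclass keyword. My adaptation, not part of the original answer:
class A(metaclass=autoprop):
    def __init__(self, x):
        self.__x = x

    def _get_readonly(self):
        return self.__x

a = A(5)
print(a.readonly)   # 5
a.readonly = 9      # AttributeError: the generated property has no setter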
I am aware that William Keller's answer is the cleanest solution by far, but here's something I came up with:
class readonly(object):
    def __init__(self, attribute_name):
        self.attribute_name = attribute_name

    def __get__(self, instance, instance_type):
        if instance != None:
            return getattr(instance, self.attribute_name)
        else:
            raise AttributeError("class %s has no attribute %s" %
                                 (instance_type.__name__, self.attribute_name))

    def __set__(self, instance, value):
        raise AttributeError("attribute %s is readonly" %
                             self.attribute_name)
And here's the usage example
class a(object):
    def __init__(self, x):
        self.x = x

    xval = readonly("x")
Unfortunately this solution can't handle private variables (__ named variables).
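If you do need it to work with a double-underscore attribute, one workaround (my suggestion, not part of the original answer) is to pass the already-mangled name to the descriptor:
class b(object):
    def __init__(self, x):
        self.__x = x              # stored as _b__x after name mangling

    xval = readonly("_b__x")      # refer to the mangled name explicitly

item = b(5)
print(item.xval)   # 5
item.xval = 9      # AttributeError: attribute _b__x is readonly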
