I have inherited a project with many large classes consisting of nothing but class attributes (integers, strings, etc). I'd like to be able to check if an attribute is present without needing to define a list of attributes manually.
Is it possible to make a python class iterable itself using the standard syntax? That is, I'd like to be able to iterate over all of a class's attributes using for attr in Foo: (or even if attr in Foo) without needing to create an instance of the class first. I think I can do this by defining __iter__, but so far I haven't quite managed what I'm looking for.
I've achieved some of what I want by adding an __iter__ method like so:
class Foo:
    bar = "bar"
    baz = 1

    @staticmethod
    def __iter__():
        return iter([attr for attr in dir(Foo) if attr[:2] != "__"])
However, this does not quite accomplish what I'm looking for:
>>> for x in Foo:
... print(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'classobj' object is not iterable
Even so, this works:
>>> for x in Foo.__iter__():
... print(x)
bar
baz
Add the __iter__ to the metaclass instead of the class itself (assuming Python 2.x):
class Foo(object):
    bar = "bar"
    baz = 1

    class __metaclass__(type):
        def __iter__(self):
            for attr in dir(self):
                if not attr.startswith("__"):
                    yield attr
For Python 3.x, use:
class MetaFoo(type):
    def __iter__(self):
        for attr in dir(self):
            if not attr.startswith("__"):
                yield attr

class Foo(metaclass=MetaFoo):
    bar = "bar"
    baz = 1
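With the metaclass version, the for attr in Foo and if attr in Foo syntax from the question works directly on the class; a quick check against the Python 3 definition above:
>>> for attr in Foo:
...     print(attr)
...
bar
baz
>>> "bar" in Foo   # membership falls back to the metaclass __iter__
True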
This is how we make a class object iterable: provide the class with an __iter__ and a next() method, and then you can iterate over class attributes or their values. You can leave out the next() method if you want to, or you can define next() and raise StopIteration on some condition.
e.g:
class Book(object):
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def __iter__(self):
        for each in self.__dict__.values():
            yield each
>>> book = Book('The Mill on the Floss','George Eliot')
>>> for each in book: each
...
'George Eliot'
'The Mill on the Floss'
This iterates over the attribute values of a Book instance.
A class object can also be made iterable by providing it with a __getitem__ method.
e.g:
class BenTen(object):
    def __init__(self, bentenlist):
        self.bentenlist = bentenlist

    def __getitem__(self, index):
        if index < 5:
            return self.bentenlist[index]
        else:
            raise IndexError('this is high enough')
>>> bt_obj = BenTen([x for x in range(15)])
>>> for each in bt_obj: each
...
0
1
2
3
4
Now, when an object of the BenTen class is used in a for-in loop, __getitem__ is called with successively higher index values until it raises IndexError.
You can iterate over the class's unhidden attributes with for attr in (elem for elem in dir(Foo) if elem[:2] != '__').
A less horrible way to spell that is:
def class_iter(Class):
    return (elem for elem in dir(Class) if elem[:2] != '__')
then
for attr in class_iter(Foo):
    pass
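Membership tests from the question work against that generator as well, e.g. with the Foo class defined earlier:
>>> 'bar' in class_iter(Foo)
True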
class MetaItetaror(type):
    def __iter__(cls):
        return iter(
            filter(
                lambda k: not k[0].startswith('__'),
                cls.__dict__.iteritems()
            )
        )

class Klass:
    __metaclass__ = MetaItetaror

    iterable_attr_names = {'x', 'y', 'z'}
    x = 5
    y = 6
    z = 7

for v in Klass:
    print v
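A hedged Python 3 adaptation of the same idea (dict.iteritems() and the __metaclass__ attribute no longer exist, so use .items() and the metaclass keyword instead):
class MetaIterator(type):
    def __iter__(cls):
        # yield (name, value) pairs for non-dunder class attributes
        return iter(
            (name, value)
            for name, value in cls.__dict__.items()
            if not name.startswith('__')
        )

class Klass(metaclass=MetaIterator):
    x = 5
    y = 6
    z = 7

for name, value in Klass:
    print(name, value)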
A class derived from enum.Enum happens to be iterable, and while it is not a general solution, it is a reasonable option for some use cases:
from enum import Enum

class Foo(Enum):
    bar = "qux"
    baz = 123
>>> print(*Foo)
Foo.bar Foo.baz
>>> names = [m.name for m in Foo]
>>> print(*names)
bar baz
>>> values = [m.value for m in Foo]
>>> print(*values)
qux 123
As with .__dict__, the order of iteration using this Enum based approach is the same as the order of definition.
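Since the original goal was checking whether an attribute is present, Enum also exposes that directly through its documented __members__ mapping:
>>> "bar" in Foo.__members__
True
>>> "missing" in Foo.__members__
False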
You can make class members iterable within just a single line.
Despite the easy and compact code, two major features are included as well:
Type checking allows the class to have additional members that are not iterated.
The technique also works if (public) class methods are defined. The proposals above that filter on the "__" string prefix would probably fail in such cases.
# How to make class members iterable in a single line within Python (O. Simon, 14.4.2022)
# Includes type checking to allow additional class members not to be iterated

class SampleVector():
    def __init__(self, x, y, name):
        self.x = x
        self.y = y
        self.name = name

    def __iter__(self):
        return [value for value in self.__dict__.values() if isinstance(value, int) or isinstance(value, float)].__iter__()

if __name__ == '__main__':
    v = SampleVector(4, 5, "myVector")
    print(f"The content of sample vector '{v.name}' is:\n")

    for m in v:
        print(m)
This solution is fairly close to, and inspired by, answer 12 from Hans Ginzel and Vijay Shanker.
Let's say I have an Entity class:
class Entity(dict):
    def save(self):
        ...
I can wrap a dict object with Entity(dict_obj).
But is it possible to create a class that can wrap any type of object, e.g. int, list, etc.?
PS: I have come up with the following workaround. It doesn't work on more complex objects, but it seems to work with basic ones. I'm completely unsure whether there are any gotchas, and it might be penalised on efficiency by creating the class every time; please let me know:
class EntityMixin(object):
    def save(self):
        ...

def get_entity(obj):
    class Entity(obj.__class__, EntityMixin):
        pass
    return Entity(obj)
Usage:
>>> a = get_entity(1)
>>> a + 1
2
>>> b = get_entity('b')
>>> b.upper()
'B'
>>> c = get_entity([1,2])
>>> len(c)
2
>>> d = get_entity({'a':1})
>>> d['a']
1
>>> d = get_entity(map(lambda x : x, [1,2]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jlin/projects/django-rest-framework-queryset/rest_framework_queryset/entity.py", line 11, in get_entity
return Entity(obj)
TypeError: map() must have at least two arguments.
Improve efficiency:
EntityClsCache = {}

class EntityMixin(object):
    def save(self):
        ...

def _get_entity_cls(obj):
    class Entity(obj.__class__, EntityMixin):
        pass
    return Entity

def get_entity(obj):
    cls = None
    try:
        cls = EntityClsCache[obj.__class__]
    except KeyError:  # a cache miss raises KeyError, not AttributeError
        cls = _get_entity_cls(obj)
        EntityClsCache[obj.__class__] = cls
    return cls(obj)
The solution you propose looks elegant, but it lacks caching, as in, you'll construct a unique class every time get_entity() is called, even if types are all the same.
Python has metaclasses, which act as class factories. Given that a metaclass's methods override those of the class, not the instance, we can implement class caching:
class EntityMixin(object):
    pass

class CachingFactory(type):
    __registry__ = {}

    # Instead of declaring an inner class,
    # we can also return type("Wrapper", (type_, EntityMixin), {}) right away,
    # which, however, looks more obscure
    def __makeclass(cls, type_):
        class Wrapper(type_, EntityMixin):
            pass
        return Wrapper

    # This is the simplest form of caching; for more realistic and less error-prone example,
    # better use a more unique/complex key, for example, tuple of `value`'s ancestors --
    # you can obtain them via type(value).__mro__
    def __call__(cls, value):
        t = type(value)
        typename = t.__name__
        if typename not in cls.__registry__:
            cls.__registry__[typename] = cls.__makeclass(t)
        return cls.__registry__[typename](value)

class Factory(object):
    __metaclass__ = CachingFactory
This way, Factory(1) performs Factory.__call__(1), which is CachingFactory.__call__(1) (without metaclass, that'd be a constructor call instead, which would result in a class instance -- but we want to make a class first and only then instantiate it).
We can ensure that the objects created by Factory are the instances of the same class, which is crafted specifically for them at the first time:
>>> type(Factory(map(lambda x: x, [1, 2]))) is type(Factory([1]))
True
>>> type(Factory("a")) is type(Factory("abc"))
True
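The answer above attaches the metaclass with the Python 2 __metaclass__ hook; a minimal Python 3 sketch of the same caching factory (reusing the CachingFactory definition above) only changes how the metaclass is declared:
class Factory(metaclass=CachingFactory):
    pass

# The cached wrapper class is reused for values of the same type:
print(type(Factory(1)) is type(Factory(2)))   # True
print(Factory("abc").upper())                 # 'ABC' -- behaves like the wrapped str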
Let's say I have a class like this:
class Test(object):
    prop = property(lambda self: "property")
The descriptor takes priority whenever I try to access Test().prop. So that will return 'property'. If I want to access the object's instance storage, I can do:
x = Test()
x.__dict__["prop"] = 12
print(x.__dict__["prop"])
However if I change my class to:
class Test(object):
    __slots__ = ("prop",)
    prop = property(lambda self: "property")
How do I do the same, and access the internal storage of x, to write 12 and read it back, since x.__dict__ no longer exist?
I am fairly new with Python, but I understand the Python philosophy is to give complete control, so why is an implementation detail preventing me from doing that?
Isn't Python missing a built-in function that could read from an instance internal storage, something like:
instance_vars(x)["prop"] = 12
print(instance_vars(x)["prop"])
which would work like vars, except it also works with __slots__, and with built-in types that don't have a __dict__?
Short answer: you can't.
The problem is that slots are themselves implemented in terms of descriptors. Given:
class Test(object):
    __slots__ = ("prop",)

t = Test()
the phrase:
t.prop
is translated, approximately, to:
Test.prop.__get__(t, Test)
where Test.prop is a <type 'member_descriptor'> crafted by the run-time specifically to load prop values out of Test instances from their reserved space.
If you add another descriptor to the class body definition, it masks out the member_descriptor that would let you get to the slotted attribute; there's no way to ask for it, it's just not there anymore. It's effectively like saying:
class Test(object):
    @property
    def prop(self):
        return self.__dict__['prop']

    @property
    def prop(self):
        return "property"
You've defined it twice; there's no way to "get at" the first prop definition.
but:
Long answer: you can't in a general way, but in this case you can.
You can still abuse the python type system to get at it using another class definition. You can change the type of a python object, so long as it has the exact same class layout, which roughly means that it has all of the same slots:
>>> class Test1(object):
... __slots__ = ["prop"]
... prop = property(lambda self: "property")
...
>>> class Test2(object):
... __slots__ = ["prop"]
...
>>> t = Test1()
>>> t.prop
'property'
>>> t.__class__ = Test2
>>> t.prop = 5
>>> t.prop
5
>>> t.__class__ = Test1
>>> t.prop
'property'
But there's no general way to introspect an instance to work out its class layout; you just have to know from context. You could look at its __slots__ class attribute, but that won't tell you about the slots provided in the superclass (if any), nor will it give you any hint if that attribute has changed for some reason after the class was defined.
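If you do control the classes involved, one hedged way to approximate that introspection is to walk the MRO and collect every declared __slots__ entry; this is illustrative only and still cannot see anything beyond what __slots__ declares:
def declared_slots(cls):
    """Gather slot names declared anywhere in cls's MRO.

    Only reports what each class body put in __slots__; it cannot detect
    later changes or implementation-level layout details.
    """
    names = []
    for klass in cls.__mro__:
        slots = klass.__dict__.get("__slots__", ())
        if isinstance(slots, str):  # a bare string declares a single slot
            slots = (slots,)
        names.extend(slots)
    return names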
I don't quite understand what and why you want to do this, but does this help you?
>>> class Test(object):
...     __slots__ = ("prop",)
...     prop = property(lambda self: "property")
...
>>> a = Test()
>>> b = Test()
>>> a.prop
'property'
>>> tmp = Test.prop
>>> Test.prop = 23
>>> a.prop
23
>>> Test.prop = tmp; del tmp
>>> b.prop
'property'
of course, you cannot overwrite the property on a per-instance basis, that's the whole point of slotted descriptors.
Note that subclasses of a class with __slots__ do have a __dict__ unless you also define __slots__ in the subclass, so you can do:
>>> class Test2(Test):pass
>>> t = Test2()
>>> t.prop
'property'
>>> t.__dict__['prop'] = 5
>>> t.__dict__['prop']
5
>>> Test2.prop
<property object at 0x00000000032C4278>
but still:
>>> t.prop
'property'
and that's not because of __slots__, it's the way descriptors work.
Your __dict__ is bypassed on attribute lookup; you are just abusing it as a data structure that happens to be there for storing state.
It is equivalent to doing this:
>>> class Test(object):
...     __slots__ = ("prop", "state")
...     prop = property(lambda self: "property")
...     state = {"prop": prop}
...
>>> t.prop
'property'
>>> t.state["prop"] = 5
>>> t.state["prop"]
5
>>> t.prop
'property'
If you really ever want to do something like that, and you REALLY, REALLY need something like that, you can always override __getattribute__ and __setattribute__, but it's just as stupid... This is just to prove it to you:
class Test(object):
    __slots__ = ("prop",)
    prop = property(lambda self: "property")
    __internal__ = {}

    def __getattribute__(self, k):
        if k == "__dict__":
            return self.__internal__
        else:
            try:
                return object.__getattribute__(self, k)
            except AttributeError, e:
                try:
                    return self.__internal__[k]
                except KeyError:
                    raise e

    # note: __setattribute__ is not a special method Python ever calls
    # (the real hook is __setattr__), which is why the assignments below still fail
    def __setattribute__(self, k, v):
        self.__internal__[k] = v
        object.__setattribute__(self, k, v)
t = Test()
print t.prop
t.__dict__["prop"] = "test"
print "from dict", t.__dict__["prop"]
print "from getattr", t.prop

import traceback

# These won't work: raise AttributeError
try:
    t.prop2 = "something"
except AttributeError:
    print "see? I told you!"
    traceback.print_exc()

try:
    print t.prop2
except AttributeError:
    print "Haha! Again!"
    traceback.print_exc()
(Tried it on Python 2.7)
It's exactly what you expect I guess. Don't do this, it's useless.
I'm looking for a decorator for Python class that would convert any element access to attribute access, something like this:
@DictAccess
class foo(bar):
    x = 1
    y = 2

myfoo = foo()
print myfoo.x # gives 1
print myfoo['y'] # gives 2
myfoo['z'] = 3
print myfoo.z # gives 3
Does such a decorator exist somewhere already? If not, what is the proper way to implement it? Should I wrap __new__ on class foo and add __getitem__ and __setitem__ properties to the instance? How do I make these properly bound to the new class then? I understand that DictMixin can help me support all dict capabilities, but I still have to get the basic methods into the classes somehow.
The decorator needs to add a __getitem__ method and a __setitem__ method:
def DictAccess(kls):
    kls.__getitem__ = lambda self, attr: getattr(self, attr)
    kls.__setitem__ = lambda self, attr, value: setattr(self, attr, value)
    return kls
That will work fine with your example code:
class bar:
    pass

@DictAccess
class foo(bar):
    x = 1
    y = 2

myfoo = foo()
print myfoo.x # gives 1
print myfoo['y'] # gives 2
myfoo['z'] = 3
print myfoo.z # gives 3
That test code produces the expected values:
1
2
3
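If you also want del myfoo['z'] and 'x' in myfoo to mirror attribute access, the decorator extends naturally; this sketch is an addition, not part of the original answer:
def DictAccess(kls):
    kls.__getitem__ = lambda self, attr: getattr(self, attr)
    kls.__setitem__ = lambda self, attr, value: setattr(self, attr, value)
    kls.__delitem__ = lambda self, attr: delattr(self, attr)
    kls.__contains__ = lambda self, attr: hasattr(self, attr)
    return kls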
Why does Python not support a record type natively? It's a matter of having a mutable version of namedtuple.
I could use namedtuple._replace. But I need to have these records in a collection and since namedtuple._replace creates another instance, I also need to modify the collection which becomes messy quickly.
Background:
I have a device whose attributes I need to get by polling it over TCP/IP. i.e. its representation is a mutable object.
Edit:
I have a set of devices for whom I need to poll.
Edit:
I need to iterate through the object displaying its attributes using PyQt. I know I can add special methods like __getitem__ and __iter__, but I want to know if there is an easier way.
Edit:
I would prefer a type whose attribute are fixed (just like they are in my device), but are mutable.
Python <3.3
You mean something like this?
class Record(object):
    __slots__ = "attribute1", "attribute2", "attribute3",

    def items(self):
        "dict style items"
        return [
            (field_name, getattr(self, field_name))
            for field_name in self.__slots__]

    def __iter__(self):
        "iterate over fields tuple/list style"
        for field_name in self.__slots__:
            yield getattr(self, field_name)

    def __getitem__(self, index):
        "tuple/list style getitem"
        return getattr(self, self.__slots__[index])
>>> r= Record()
>>> r.attribute1= "hello"
>>> r.attribute2= "there"
>>> r.attribute3= 3.14
>>> print r.items()
[('attribute1', 'hello'), ('attribute2', 'there'), ('attribute3', 3.1400000000000001)]
>>> print tuple(r)
('hello', 'there', 3.1400000000000001)
Note that the methods provided are just a sample of possible methods.
Python ≥3.3 update
You can use types.SimpleNamespace:
>>> import types
>>> r= types.SimpleNamespace()
>>> r.attribute1= "hello"
>>> r.attribute2= "there"
>>> r.attribute3= 3.14
dir(r) would provide you with the attribute names (filtering out all .startswith("__"), of course).
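With SimpleNamespace you can also skip the dir() filtering and read the instance's own attributes straight from vars(), for example:
>>> vars(r)
{'attribute1': 'hello', 'attribute2': 'there', 'attribute3': 3.14}
>>> for name, value in vars(r).items():
...     print(name, value)
...
attribute1 hello
attribute2 there
attribute3 3.14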
Is there any reason you can't use a regular dictionary? It seems like the attributes don't have a specific ordering in your particular situation.
Alternatively, you could also use a class instance (which has nice attribute access syntax). You could use __slots__ if you wish to avoid having a __dict__ created for each instance.
I've also just found a recipe for "records", which are described as mutable named-tuples. They are implemented using classes.
Update:
Since you say order is important for your scenario (and you want to iterate through all the attributes) an OrderedDict seems to be the way to go. This is part of the standard collections module as of Python 2.7; there are other implementations floating around the internet for Python < 2.7.
To add attribute-style access, you can subclass it like so:
from collections import OrderedDict

class MutableNamedTuple(OrderedDict):
    def __init__(self, *args, **kwargs):
        super(MutableNamedTuple, self).__init__(*args, **kwargs)
        self._initialized = True

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if hasattr(self, '_initialized'):
            super(MutableNamedTuple, self).__setitem__(name, value)
        else:
            super(MutableNamedTuple, self).__setattr__(name, value)
Then you can do:
>>> t = MutableNamedTuple()
>>> t.foo = u'Crazy camels!'
>>> t.bar = u'Yay, attribute access'
>>> t.foo
u'Crazy camels!'
>>> t.values()
[u'Crazy camels!', u'Yay, attribute access']
This can be done using an empty class and instances of it, like this:
>>> class a(): pass
...
>>> ainstance = a()
>>> ainstance.b = 'We want Moshiach Now'
>>> ainstance.b
'We want Moshiach Now'
>>>
There's a library similar to namedtuple, but mutable, called recordtype.
Package home: http://pypi.python.org/pypi/recordtype
Simple example:
from recordtype import recordtype
Person = recordtype('Person', 'first_name last_name phone_number')
person1 = Person('Trent', 'Steele', '637-3049')
person1.last_name = 'Terrence'
print person1
# Person(first_name=Trent, last_name=Terrence, phone_number=637-3049)
Simple default value example:
Basis = recordtype('Basis', [('x', 1), ('y', 0)])
Iterate through the fields of person1 in order:
map(person1.__getattribute__, Person._fields)
This question is old, but just for the sake of completeness, Python 3.7 has dataclasses which are pretty much records.
>>> from dataclasses import dataclass
>>>
>>> @dataclass
... class MyRecord:
... name: str
... age: int = -1
...
>>> rec = MyRecord('me')
>>> rec.age = 127
>>> print(rec)
MyRecord(name='me', age=127)
The attrs third-party library provides more functionality for both Python 2 and Python 3. There is also nothing wrong with vendoring the dependency if the constraint is about what you can install rather than strictly using only the stdlib; dephell has a nice helper for doing that.
There is a mutable alternative to collections.namedtuple: recordclass.
It has the same API and a minimal memory footprint (it is actually also faster), and it supports assignment. For example:
from recordclass import recordclass
Point = recordclass('Point', 'x y')
>>> p = Point(1, 2)
>>> p
Point(x=1, y=2)
>>> print(p.x, p.y)
1 2
>>> p.x += 2; p.y += 3; print(p)
Point(x=3, y=5)
There is a more complete example available (it also includes performance comparisons).
In the closely related question Existence of mutable named tuple in Python?, 13 tests are used to compare 6 mutable alternatives to namedtuple.
The latest namedlist 1.7 passes all of these tests with both Python 2.7 and Python 3.5 as of Jan 11, 2016. It is a pure python implementation.
The second best candidate according to these tests is the recordclass which is a C extension. Of course, it depends on your requirements whether a C extension is preferred or not.
For further details, especially for the tests, see Existence of mutable named tuple in Python?
Based on several useful tricks gathered over time, this "frozenclass" decorator does pretty much everything needed: http://pastebin.com/fsuVyM45
Since that code is over 70% documentation and tests, I won't say more here.
Here is a complete mutable namedtuple I made, which behaves like a list and is totally compatible with it.
class AbstractNamedArray():
    """a mutable collections.namedtuple"""
    def __new__(cls, *args, **kwargs):
        inst = object.__new__(cls)  # to rename the class
        inst._list = len(cls._fields) * [None]
        inst._mapping = {}
        for i, field in enumerate(cls._fields):
            inst._mapping[field] = i
        return inst

    def __init__(self, *args, **kwargs):
        if len(kwargs) == 0 and len(args) != 0:
            assert len(args) == len(self._fields), 'bad number of arguments'
            self._list = list(args)
        elif len(args) == 0 and len(kwargs) != 0:
            for field, value in kwargs.items():
                assert field in self._fields, "field {} doesn't exist".format(field)
                self._list[self._mapping[field]] = value
        else:
            raise ValueError("you can't mix args and kwargs")

    def __getattr__(self, x):
        return object.__getattribute__(self, '_list')[object.__getattribute__(self, '_mapping')[x]]

    def __setattr__(self, x, y):
        if x in self._fields:
            self._list[self._mapping[x]] = y
        else:
            object.__setattr__(self, x, y)

    def __repr__(self):
        fields = []
        for field, value in zip(self._fields, map(self.__getattr__, self._fields)):
            fields.append('{}={}'.format(field, repr(value)))
        return '{}({})'.format(self._name, ', '.join(fields))

    def __iter__(self):
        yield from self._list

    def __list__(self):
        return self._list[:]

    def __len__(self):
        return len(self._fields)

    def __getitem__(self, x):
        return self._list[x]

    def __setitem__(self, x, y):
        self._list[x] = y

    def __contains__(self, x):
        return x in self._list

    def reverse(self):
        self._list.reverse()

    def copy(self):
        return self._list.copy()


def namedarray(name, fields):
    """used to construct a named array (fixed-length list with named fields)"""
    return type(name, (AbstractNamedArray,), {'_name': name, '_fields': fields})
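A brief usage sketch of the factory above (the Point example is purely illustrative):
Point = namedarray('Point', ['x', 'y'])

p = Point(1, 2)
print(p)          # Point(x=1, y=2)
p.x = 10          # fields are mutable
print(p[0], p.y)  # 10 2
print(list(p))    # [10, 2]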
You could do something like this dict subclass, which is its own __dict__. The basic concept is the same as that of the ActiveState AttrDict recipe, but the implementation is simpler. The result is something more mutable than you need, since both an instance's attributes and their values are changeable. Although the attributes aren't ordered, you can iterate through the current ones and/or their values.
class Record(dict):
    def __init__(self, *args, **kwargs):
        super(Record, self).__init__(*args, **kwargs)
        self.__dict__ = self
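A short usage sketch of that Record class (the field names are just illustrative):
r = Record(name='device-1', status='online')
print(r.name)          # 'device-1' -- attribute access
r.status = 'offline'   # also visible as r['status'], since __dict__ is the dict itself
print(r['status'])     # 'offline'
r['polled'] = True     # new keys become attributes too
print(r.polled)        # True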
As tzot stated, since Python ≥3.3, Python does have a mutable version of namedtuple: types.SimpleNamespace.
These things are very similar to the new C# 9 Records.
Here are some usage examples:
Positional constructor arguments
>>> import types
>>>
>>> class Location(types.SimpleNamespace):
... def __init__(self, lat=0, long=0):
... super().__init__(lat=lat, long=long)
...
>>> loc_1 = Location(49.4, 8.7)
Pretty repr
>>> loc_1
Location(lat=49.4, long=8.7)
Mutable
>>> loc_2 = Location()
>>> loc_2
Location(lat=0, long=0)
>>> loc_2.lat = 49.4
>>> loc_2
Location(lat=49.4, long=0)
Value semantics for equality
>>> loc_2 == loc_1
False
>>> loc_2.long = 8.7
>>> loc_2 == loc_1
True
Can add attributes at runtime
>>> loc_2.city = 'Heidelberg'
>>> loc_2