In Python, is there a way to access a dict's items the way attributes are accessed on an object? The scenario: I wanted to create a class with only attributes and no methods, and after reading a few discussions on this (e.g. "Python: Should I use a class or dictionary?") I decided to go with a dict instead of a class.
Now I would just like to understand: is there a way I can access the elements of a dict similar to attribute access on an object?
mydict = {'a': 100, 'b': 20.5, 'c': 'Hello'}
Instead of,
mydict['a']
mydict['a'] = 200
Something like,
mydict.a
mydict.a = 200
namedtuple solves one part: I can initialize it and read values. But namedtuples are immutable, so I cannot set/write values afterwards.
This is easily done by implementing __getattr__ and __setattr__ in a subclass:
class MyDict(dict):
    def __getattr__(self, attr):
        return self[attr]

    def __setattr__(self, attr, value):
        self[attr] = value
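A quick usage sketch of that subclass. One caveat (my addition, not in the original answer): raising AttributeError for missing keys keeps hasattr() and getattr() defaults working, whereas the bare self[attr] would leak a KeyError.

```python
class MyDict(dict):
    def __getattr__(self, attr):
        try:
            return self[attr]
        except KeyError:
            # Raise AttributeError so hasattr()/getattr(..., default) behave normally
            raise AttributeError(attr)

    def __setattr__(self, attr, value):
        self[attr] = value

mydict = MyDict({'a': 100, 'b': 20.5, 'c': 'Hello'})
print(mydict.a)                     # 100 -- same as mydict['a']
mydict.a = 200                      # same as mydict['a'] = 200
print(mydict['a'])                  # 200
print(hasattr(mydict, 'missing'))   # False, thanks to the AttributeError
```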
Related
Suppose I have a dataclass with a set method. How do I extend __repr__ so that it also updates whenever the set method is called?
from dataclasses import dataclass

@dataclass
class State:
    A: int = 1
    B: int = 2

    def set(self, var, val):
        setattr(self, var, val)
Ex:
In [2]: x = State()
In [3]: x
Out[3]: State(A=1, B=2)
In [4]: x.set("C", 3)
In [5]: x
Out[5]: State(A=1, B=2)
In [6]: x.C
Out[6]: 3
The outcome I would like
In [7]: x
Out[7]: State(A=1, B=2, C=3)
The dataclass decorator lets you quickly and easily build classes that have specific fields that are predetermined when you define the class. The way you're intending to use your class, however, doesn't match up very well with what dataclasses are good for. You want to be able to dynamically add new fields after the class already exists, and have them work with various methods (like __init__, __repr__ and presumably __eq__). That removes almost all of the benefits of using dataclass. You should instead just write your own class that does what you want it to do.
Here's a quick and dirty version:
class State:
    _defaults = {"A": 1, "B": 2}

    def __init__(self, **kwargs):
        self.__dict__.update(self._defaults)
        self.__dict__.update(kwargs)

    def __eq__(self, other):
        return self.__dict__ == other.__dict__  # you might want to add some type checking here

    def __repr__(self):
        kws = [f"{key}={value!r}" for key, value in self.__dict__.items()]
        return "{}({})".format(type(self).__name__, ", ".join(kws))
This is pretty similar to what you get from types.SimpleNamespace, so you might just be able to use that instead (it doesn't do default values though).
You could add your set method to this framework, though it seems to me like needless duplication of the builtin setattr function you're already using to implement it. If the caller needs to set an attribute dynamically, they can call setattr themselves. If the attribute name is constant, they can use normal attribute assignment syntax instead: s.foo = "bar".
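For example, with the hand-written class above (repeated here so the snippet runs standalone), plain attribute assignment is enough, and both the repr and equality pick up the new field:

```python
class State:
    _defaults = {"A": 1, "B": 2}

    def __init__(self, **kwargs):
        self.__dict__.update(self._defaults)
        self.__dict__.update(kwargs)

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __repr__(self):
        kws = [f"{key}={value!r}" for key, value in self.__dict__.items()]
        return "{}({})".format(type(self).__name__, ", ".join(kws))

x = State()
x.C = 3                   # plain attribute assignment, no set() needed
print(repr(x))            # State(A=1, B=2, C=3)
print(x == State(C=3))    # True: defaults plus the extra field
```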
Here is the problem: I have an immutable dictionary with a huge number of items. The key and value types contained within this dict are themselves immutable. I would like to be able to mutate this dict (adding/removing/replacing key-value pairs) without having to make a full copy of it.
I am imagining some wrapper class for the immutable dict which adheres to the dict contract, and defaults to the immutable dict for values that have not been updated. I see the post How to “perfectly” override a dict? which I plan to leverage to make this wrapper.
Before I embark on implementing this design I just wanted to ask- is this construct already provided by the language? Or how else can I achieve the desired effect? I am on the latest version of Python (3.7) so I can use all language features available. Thanks!
Take a look at collections.ChainMap. It's a wrapper around multiple dictionaries: all writes go to the first dictionary, and lookups are searched in order of the maps. So I think you could just do something like:
modified_map = {}
mutable_map = collections.ChainMap(modified_map, huge_immutable_map)
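A quick sketch of how that behaves (huge_immutable_map here is just an ordinary dict standing in for the real data). One caveat: deletions also only touch the first map, so removing a key that lives in the base map raises a KeyError unless you adopt a tombstone convention.

```python
from collections import ChainMap

huge_immutable_map = {'a': 1, 'b': 2}   # stand-in for the real frozen data
modified_map = {}
mutable_map = ChainMap(modified_map, huge_immutable_map)

mutable_map['a'] = 100         # the write lands in modified_map only
print(mutable_map['a'])        # 100 -- the overlay wins
print(mutable_map['b'])        # 2   -- falls through to the base map
print(huge_immutable_map)      # {'a': 1, 'b': 2} -- untouched
print(modified_map)            # {'a': 100}
```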
Let's say you used a frozendict implementation like this one:
import collections.abc

class frozendict(collections.abc.Mapping):
    """
    An immutable wrapper around dictionaries that implements the complete
    :py:class:`collections.abc.Mapping` interface. It can be used as a drop-in
    replacement for dictionaries where immutability is desired.
    """

    dict_cls = dict

    def __init__(self, *args, **kwargs):
        self._dict = self.dict_cls(*args, **kwargs)
        self._hash = None

    def __getitem__(self, key):
        return self._dict[key]

    def __contains__(self, key):
        return key in self._dict

    def copy(self, **add_or_replace):
        return self.__class__(self, **add_or_replace)

    def __iter__(self):
        return iter(self._dict)

    def __len__(self):
        return len(self._dict)

    def __repr__(self):
        return '<%s %r>' % (self.__class__.__name__, self._dict)

    def __hash__(self):
        if self._hash is None:
            h = 0
            for key, value in self._dict.items():
                h ^= hash((key, value))
            self._hash = h
        return self._hash
If you wanted to mutate it, you could just reach in and mutate self._dict:
d = frozendict({'a': 1, 'b': 2})
d['a'] = 3 # This fails
mutable_dict = d._dict
mutable_dict['a'] = 3 # This works
print(d['a'])
It's a little yucky reaching into a class's protected members, but I'd say that's okay, because what you're trying to do is a little yucky. If you want a mutable dictionary, just use a dict. If you never want to mutate it, use a frozendict implementation like the one above. A hybrid of mutable and immutable makes no sense. All frozendict does is omit the mutation dunder methods (__setitem__, __delitem__, etc.); under the hood it is still backed by a regular, mutable dict.
The above approach is superior, in my view, to the one you linked. Composition (frozendict holding a _dict attribute) is much easier to reason about than inheritance (subclassing dict) in many cases.
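If you want the "mutable overlay over a frozen base" effect without reaching into protected members, composition also works with the standard library alone: types.MappingProxyType gives a read-only dict view, and collections.ChainMap (as suggested in the earlier answer) layers a writable dict in front of it. A sketch:

```python
from collections import ChainMap
from types import MappingProxyType

frozen = MappingProxyType({'a': 1, 'b': 2})   # read-only view: frozen['a'] = 3 raises TypeError
overlay = {}
view = ChainMap(overlay, frozen)

view['a'] = 3                  # goes into the overlay; the base stays frozen
print(view['a'], view['b'])    # 3 2
print(dict(frozen))            # {'a': 1, 'b': 2} -- unchanged
```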
I'd like to use a SimpleNamespace which can also act as a mapping, so that it can be used with ** unpacking.
Here is what I've done:
class MySimpleNameSpace(object):
    # my initial attempt subclassed SimpleNameSpace and Mapping, with the
    # possibility to use MySimpleNameSpace as a dict as well as a normal SimpleNameSpace.

    def __init__(self, **kw):
        self.__dict__.update(kw)

    def __getitem__(self, item):
        return getattr(self, item)

    def keys(self):
        return self.__dict__.keys()
So far so good:
def f(**kw):
print(kw)
ns = MySimpleNameSpace(a=42)
f(**ns)
Gives: {'a': 42}
More tricky:
ns.__getitem__ = "what"
ns.__iter__ = "da"
f(**ns)
Now gives:
{'a': 42, '__getitem__': 'what', '__iter__': 'da'}
But:
ns.keys = "douh"
f(**ns)
Obviously gives:
TypeError: attribute of type 'str' is not callable
Any idea whether it is feasible to have such a custom mapping class, but still be able to use keys as a normal attribute?
I realize that subclassing (Mutable)Mapping makes this harder, if it is possible at all. I think that's because the functionality apparently requires the given object to have a keys method, which is unfortunate if we can't find a workaround for it.
As far as I know, iterating a dict (__iter__) yields its keys, and __getitem__ gives the value associated with a given key. As far as I know, that would be enough to implement the functionality?
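Those two methods are conceptually enough, and in CPython there is a practical escape hatch: ** unpacking takes a fast path for dict subclasses that reads the underlying storage directly and never calls keys(), so a dict subclass that maps attributes to items can carry "keys" as an ordinary entry. Note that this fast path is a CPython implementation detail (the documented protocol for non-dict mappings does call keys()), so treat this as a sketch:

```python
class NS(dict):
    # attribute access delegates to the underlying dict storage
    def __getattr__(self, item):
        try:
            return self[item]
        except KeyError:
            raise AttributeError(item)

    def __setattr__(self, item, value):
        self[item] = value

def f(**kw):
    return kw

ns = NS(a=42)
ns.keys = "douh"     # stored as a plain item; dict.keys() is untouched at the C level
print(f(**ns))       # {'a': 42, 'keys': 'douh'} -- no TypeError
```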
I am looking for a way to create a basic python "object" which I can externally assign attributes to.
Currently I am doing it the following way:
I define an empty class with
class C(object):
pass
and then I instantiate an object and assign attributes like this:
c = C()
c.attr = 2
Coming to my question
Is there a way to instantiate an empty class object, which I can then assign attributes like shown above without defining a class C?
Is there maybe another, better way to accomplish what I am after?
It looks like you are looking for a flexible container that has no methods and can take attributes with arbitrary names. That's a dict.
d = dict()
d['myattr'] = 42
If you prefer the attribute syntax that you get with a class (c.myattr = 42), then use a class just as per the code in your question.
Is there a way to instantiate an empty class object, which I can then assign attributes like shown above without defining a class C?
Yes:
>>> C = type("C", (object,), {})
>>> c = C()
>>> c.attr = 2
But as you can see, it's not much of an improvement, and the end result is the same -- it's just another way of creating the same class C.
Addendum:
You can make it prettier by "hiding" it in a function:
def attr_holder(cls=type("C", (object,), {})):
    return cls()

c = attr_holder()
c.attr = 2
Though this is just reinventing the wheel -- replace the two-line function with
class attr_holder(object):
pass
and it'll work exactly the same, and we've come full circle. So: go with what David or Reorx suggests.
I came to the same question long ago, and created this class to use in many of my projects:
class DotDict(dict):
    """
    Retrieve values from a dict in dot style.
    """

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError('has no attribute %s' % key)

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        try:
            del self[key]
        except KeyError:
            raise AttributeError(key)

    def __str__(self):
        return '<DotDict %s >' % self.__to_dict()

    def __to_dict(self):
        return dict(self)
When I want an object to store data, or want to retrieve values easily from a dict, I always use this class.
Additionally, it helps me serialize the attributes I set on the object, and conversely recover the original dict when needed.
So I think this is a good solution in many situations: the other tricks may look simpler, but they are less helpful beyond the basics.
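A short round-trip sketch with a condensed version of the class above:

```python
class DotDict(dict):
    """Retrieve and set dict values in dot style."""

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError('has no attribute %s' % key)

    def __setattr__(self, key, value):
        self[key] = value

    def __delattr__(self, key):
        try:
            del self[key]
        except KeyError:
            raise AttributeError(key)

d = DotDict({'name': 'alice'})
d.age = 30                 # attribute write becomes a dict item
print(d.name, d['age'])    # alice 30
print(dict(d))             # {'name': 'alice', 'age': 30} -- plain dict for serialization
del d.age
print('age' in d)          # False
```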
I have a class that inherits from a dictionary in order to add some custom behavior - in this case it passes each key and value to a function for validation. In the example below, the 'validation' simply prints a message.
Assignment to the dictionary works as expected, printing messages whenever items are added to the dict. But when I try to use the custom dictionary type as the __dict__ attribute of a class, attribute assignment (which in turn puts keys/values into my custom dictionary) somehow manages to insert values into the dictionary while completely bypassing __setitem__ (and the other methods I've defined that may add keys).
The custom dictionary:
from collections.abc import MutableMapping

class ValidatedDict(dict):
    """A dictionary that passes each key and value it ends up storing
    through a given validator function.
    """

    def __init__(self, validator, *args, **kwargs):
        self.__validator = validator
        self.update(*args, **kwargs)

    def __setitem__(self, key, value):
        self.__validator(value)
        self.__validator(key)
        dict.__setitem__(self, key, value)

    def copy(self): pass  # snipped
    def fromkeys(validator, seq, v=None): pass  # snipped

    setdefault = MutableMapping.setdefault
    update = MutableMapping.update

def Validator(i): print("Validating:", i)
Using it as the __dict__ attribute of a class yields behavior I don't understand.
>>> d = ValidatedDict(Validator)
>>> d["key"] = "value"
Validating: value
Validating: key
>>> class Foo(object): pass
...
>>> foo = Foo()
>>> foo.__dict__ = ValidatedDict(Validator)
>>> type(foo.__dict__)
<class '__main__.ValidatedDict'>
>>> foo.bar = 100 # Yields no message!
>>> foo.__dict__['odd'] = 99
Validating: 99
Validating: odd
>>> foo.__dict__
{'odd': 99, 'bar': 100}
Can someone explain why it doesn't behave the way I expect? Can it or can't it work the way I'm attempting?
This is an optimization. To support metamethods on __dict__, every single instance attribute assignment would need to check for the existence of those metamethods. Attribute lookup and assignment are fundamental operations, so the extra couple of branches needed for the check would become overhead for the whole language, for something that's more or less redundant with obj.__getattr__ and obj.__setattr__.
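The supported hooks live on the class, not on the __dict__ object. A minimal sketch of the same validation idea using __setattr__ (the class and names here are my own, not from the question):

```python
class Validated:
    def __setattr__(self, name, value):
        # the 'validation' -- here it just reports what was assigned
        print("Validating:", name, "=", value)
        super().__setattr__(name, value)

foo = Validated()
foo.bar = 100              # prints: Validating: bar = 100
print(foo.__dict__)        # {'bar': 100}
```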