How to remove a key/value pair from yaml dump, in Python?

Suppose I have a naive class definition:
import yaml

class A:
    def __init__(self):
        self.abc = 1
        self.hidden = 100
        self.xyz = 2

    def __repr__(self):
        return yaml.dump(self)

A()
which prints:
!!python/object:__main__.A
abc: 1
hidden: 100
xyz: 2
Is there a clean way to remove a line containing hidden: 100 from yaml dump's printed output? The key name hidden is known in advance, but its numeric value may change.
Desired output:
!!python/object:__main__.A
abc: 1
xyz: 2
FYI: This dump is for display only and will not be loaded.
I suppose one could suppress the key/value pair whose key is hidden by using a custom yaml representer. Another way would be to find hidden: [number] in the string output with a regex.
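For reference, a minimal sketch of that representer idea, using PyYAML's yaml.add_representer (the long-form tag below is what PyYAML abbreviates to !!python/object:__main__.A):

import yaml

class A:
    def __init__(self):
        self.abc = 1
        self.hidden = 100
        self.xyz = 2

def represent_a(dumper, obj):
    # Copy the instance dict, dropping the key we want to suppress.
    state = {k: v for k, v in vars(obj).items() if k != 'hidden'}
    return dumper.represent_mapping(
        'tag:yaml.org,2002:python/object:__main__.A', state)

yaml.add_representer(A, represent_a)

print(yaml.dump(A()))
# !!python/object:__main__.A
# abc: 1
# xyz: 2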

I looked at the documentation for pyyaml and did not find a way to achieve your objective. A work-around would be to delete the attribute hidden, call yaml.dump, then add it back in:
def __repr__(self):
    hidden = self.hidden
    del self.hidden
    try:
        return yaml.dump(self)
    finally:
        # restore the attribute even if dump fails
        self.hidden = hidden
Taking a step back, why do you want to use yaml for __repr__? Can you just roll your own instead of relying on yaml?

json is a mature solution and (at the time of writing) has much better docs than pyyaml; I'd use it instead as long as pyyaml's docs remain hard to fully understand. As a bonus, YAML is (almost) a superset of JSON, so you'll be able to read your data as YAML without converting it. However, to make full use of YAML's features you will probably have to convert the data to YAML anyway.
The json module is unable to serialize custom objects by default, but it can be easily extended:
import json

def default(o):
    if isinstance(o, A):
        result = vars(o).copy()
        del result['hidden']
        result['__class__'] = o.__class__.__name__
        return result
    else:
        # returning o unchanged would make json recurse forever;
        # signal unsupported types the way the json module expects
        raise TypeError('%r is not JSON serializable' % o)

json.dumps(A(), default=default)  # => '{"__class__": "A", "xyz": 2, "abc": 1}'
If you don't want to write default=default every time you call dumps, you can create a custom serializer:
dumper = json.JSONEncoder(default=default)
dumper.encode(A()) # => '{"__class__": "A", "xyz": 2, "abc": 1}'
Or, to be able to easily extend it even further via subclassing:
class Dumper(json.JSONEncoder):
    __slots__ = ()

    def default(self, o):
        if isinstance(o, A):
            result = vars(o).copy()
            del result['hidden']
            result['__class__'] = o.__class__.__name__
            return result
        else:
            return super().default(o)

dumper = Dumper()
dumper.encode(A())  # => '{"__class__": "A", "xyz": 2, "abc": 1}'
Note that fields in JSON are unordered.
Also, if you want to use this, I'd advise you not to serialize dicts that themselves contain a __class__ key, because they might be hard to distinguish from serialized objects.

Related

Python serialize namedtuple key in dictionary to json [duplicate]

What is the recommended way of serializing a namedtuple to json with the field names retained?
Serializing a namedtuple to json results in only the values being serialized and the field names being lost in translation. I would like the fields also to be retained when json-ized and hence did the following:
class foobar(namedtuple('f', 'foo, bar')):
    __slots__ = ()
    def __iter__(self):
        yield self._asdict()
The above serializes to json as I expect and behaves like a namedtuple everywhere else I use it (attribute access etc.), except that iterating it yields a non-tuple-like result (which is fine for my use case).
What is the "correct way" of converting to json with the field names retained?
If it's just one namedtuple you're looking to serialize, using its _asdict() method will work (with Python >= 2.7)
>>> from collections import namedtuple
>>> import json
>>> FB = namedtuple("FB", ("foo", "bar"))
>>> fb = FB(123, 456)
>>> json.dumps(fb._asdict())
'{"foo": 123, "bar": 456}'
This is pretty tricky, since namedtuple() is a factory which returns a new type derived from tuple. One approach would be to have your class also inherit from UserDict.DictMixin, but tuple.__getitem__ is already defined and expects an integer denoting the position of the element, not the name of its attribute:
>>> f = foobar('a', 1)
>>> f[0]
'a'
At its heart the namedtuple is an odd fit for JSON, since it is really a custom-built type whose key names are fixed as part of the type definition, unlike a dictionary where key names are stored inside the instance. This prevents you from "round-tripping" a namedtuple, e.g. you cannot decode a dictionary back into a namedtuple without some other piece of information, like an app-specific type marker in the dict {'a': 1, '#_type': 'foobar'}, which is a bit hacky.
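For illustration, a minimal sketch of that marker idea (the '#_type' key and the hook below are app-specific conventions, not a standard API):

import json
from collections import namedtuple

foobar = namedtuple('foobar', 'foo, bar')

def decode_hook(d):
    # Rebuild the namedtuple when our app-specific marker is present.
    if d.get('#_type') == 'foobar':
        d = dict(d)
        d.pop('#_type')
        return foobar(**d)
    return d

doc = '{"foo": "a", "bar": 1, "#_type": "foobar"}'
json.loads(doc, object_hook=decode_hook)  # foobar(foo='a', bar=1)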
This is not ideal, but if you only need to encode namedtuples into dictionaries, another approach is to extend or modify your JSON encoder to special-case these types. Here is an example of subclassing the Python json.JSONEncoder. This tackles the problem of ensuring that nested namedtuples are properly converted to dictionaries:
from collections import namedtuple
from json import JSONEncoder

class MyEncoder(JSONEncoder):
    def _iterencode(self, obj, markers=None):
        if isinstance(obj, tuple) and hasattr(obj, '_asdict'):
            gen = self._iterencode_dict(obj._asdict(), markers)
        else:
            gen = JSONEncoder._iterencode(self, obj, markers)
        for chunk in gen:
            yield chunk

class foobar(namedtuple('f', 'foo, bar')):
    pass

enc = MyEncoder()
for obj in (foobar('a', 1), ('a', 1), {'outer': foobar('x', 'y')}):
    print enc.encode(obj)
{"foo": "a", "bar": 1}
["a", 1]
{"outer": {"foo": "x", "bar": "y"}}
It looks like you used to be able to subclass simplejson.JSONEncoder to make this work, but with the latest simplejson code, that is no longer the case: you have to actually modify the project code. I see no reason why simplejson should not support namedtuples, so I forked the project, added namedtuple support, and I'm currently waiting for my branch to be pulled back into the main project. If you need the fixes now, just pull from my fork.
EDIT: Looks like the latest versions of simplejson now natively support this with the namedtuple_as_object option, which defaults to True.
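For illustration, with a recent simplejson this works out of the box, since namedtuple_as_object defaults to True:

import simplejson
from collections import namedtuple

Point = namedtuple('Point', 'x y')
simplejson.dumps(Point(1, 2))                              # '{"x": 1, "y": 2}'
simplejson.dumps(Point(1, 2), namedtuple_as_object=False)  # '[1, 2]'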
I wrote a library for doing this: https://github.com/ltworf/typedload
It can convert namedtuples to plain data and back.
It supports quite complicated nested structures, with lists, sets, enums, unions, default values. It should cover most common cases.
edit: The library also supports dataclass and attr classes.
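A minimal sketch of the round-trip, assuming typedload's load/dump entry points:

from typing import NamedTuple
import typedload

class Point(NamedTuple):
    x: int
    y: int

typedload.dump(Point(1, 2))              # {'x': 1, 'y': 2}
typedload.load({'x': 1, 'y': 2}, Point)  # Point(x=1, y=2)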
It's impossible to serialize namedtuples correctly with the native python json library. It will always see tuples as lists, and it is impossible to override the default serializer to change this behaviour. It's worse if objects are nested.
Better to use a more robust library like orjson:
import orjson
from typing import NamedTuple

class Rectangle(NamedTuple):
    width: int
    height: int

def default(obj):
    if hasattr(obj, '_asdict'):
        return obj._asdict()

rectangle = Rectangle(width=10, height=20)
print(orjson.dumps(rectangle, default=default))
# => b'{"width":10,"height":20}'
A more convenient solution is to use a decorator (it relies on the protected field _fields).
Python 2.7+:
import json
from collections import namedtuple, OrderedDict

def json_serializable(cls):
    def as_dict(self):
        yield OrderedDict(
            (name, value) for name, value in zip(
                self._fields,
                iter(super(cls, self).__iter__())))
    cls.__iter__ = as_dict
    return cls

# Usage:
C = json_serializable(namedtuple('C', 'a b c'))
print json.dumps(C('abc', True, 3.14))

# or
@json_serializable
class D(namedtuple('D', 'a b c')):
    pass

print json.dumps(D('abc', True, 3.14))
Python 3.6.6+:
import json
from typing import NamedTuple

def json_serializable(cls):
    def as_dict(self):
        yield {name: value for name, value in zip(
            self._fields,
            iter(super(cls, self).__iter__()))}
    cls.__iter__ = as_dict
    return cls

# Usage:
@json_serializable
class C(NamedTuple):
    a: str
    b: bool
    c: float

print(json.dumps(C('abc', True, 3.14)))
This recursively converts the namedtuple data to a json-ready dict.
print(m1)
## Message(id=2, agent=Agent(id=1, first_name='asd', last_name='asd', mail='2@mai.com'), customer=Customer(id=1, first_name='asd', last_name='asd', mail='2@mai.com', phone_number=123123), type='image', content='text', media_url='h.com', la=123123, ls=4512313)
def recursive_to_json(obj):
    _json = {}
    if isinstance(obj, tuple):
        datas = obj._asdict()
        for data in datas:
            if isinstance(datas[data], tuple):
                _json[data] = recursive_to_json(datas[data])
            else:
                _json[data] = datas[data]
    return _json

data = recursive_to_json(m1)
print(data)
{'agent': {'first_name': 'asd',
           'last_name': 'asd',
           'mail': '2@mai.com',
           'id': 1},
 'content': 'text',
 'customer': {'first_name': 'asd',
              'last_name': 'asd',
              'mail': '2@mai.com',
              'phone_number': 123123,
              'id': 1},
 'id': 2,
 'la': 123123,
 'ls': 4512313,
 'media_url': 'h.com',
 'type': 'image'}
The jsonplus library provides a serializer for NamedTuple instances. Use its compatibility mode to output simple objects if needed, but prefer the default as it is helpful for decoding back.
This is an old question. However:
A suggestion for all those with the same question: think carefully about using any of the private or internal features of a NamedTuple, because they have changed before and will change again over time.
For example, if your NamedTuple is a flat value object and you're only interested in serializing it, and not in cases where it is nested into another object, you could avoid the troubles that would come up with __dict__ being removed or _asdict() changing, and just do something like this (and yes, this is Python 3, because this answer is for the present):
import json
from typing import NamedTuple

class ApiListRequest(NamedTuple):
    group: str = "default"
    filter: str = "*"

    def to_dict(self):
        return {
            'group': self.group,
            'filter': self.filter,
        }

    def to_json(self):
        return json.dumps(self.to_dict())
I tried to use the default callable kwarg to dumps in order to do the to_dict() call if available, but that didn't get called as the NamedTuple is convertible to a list.
Here is my take on the problem. It serializes the NamedTuple and takes care of nested NamedTuples, and of lists inside of them:
from typing import Any

def recursive_to_dict(obj: Any) -> dict:
    _dict = {}
    if isinstance(obj, tuple):
        node = obj._asdict()
        for item in node:
            if isinstance(node[item], list):  # Process as a list
                _dict[item] = [recursive_to_dict(x) for x in node[item]]
            elif getattr(node[item], "_asdict", False):  # Process as a NamedTuple
                _dict[item] = recursive_to_dict(node[item])
            else:  # Process as a regular element
                _dict[item] = node[item]
    return _dict
simplejson.dump() instead of json.dump does the job. It may be slower though.

How to override JSONDecoder to handle certain string matches differently

I want to change certain values in a json file (nested dicts and arrays). I thought a handy way to do that would be to take advantage of the JSONDecoder.
However, it's not working as I'd expect it to. I've done this exact same approach for getting JSONEncoder to convert np.arrays to lists so it wouldn't break the encoder.
After not getting it to do what I wanted, I thought I'd try the Decoder instead. Same issue: it never seems to call default when handling strings. Maybe default is never called for strings, just for other types of objects?
# key, val are arguments passed in, e.g. ("bar", "2.0rc1")
# Replace the value "2.0rc1" everywhere the "bar" key is found
class StringReplaceDecoder(json.JSONDecoder):
    def default(self, obj):
        if isinstance(obj, str):
            print("Handling obj str: {}".format(obj))
            if obj == key:
                return val
        return json.JSONEncoder.default(self, obj)

json_dump = json.dumps(dict)
json_load = json.loads(json_dump, cls=StringReplaceDecoder)
# Example input
{a:{foo:"", bar:"1.3"}, b:{d:{foo:""}, z:{bar:"1.5"}}}
# Example desired output:
{a:{foo:"", bar:"2.0rc1"}, b:{d:{foo:""}, z:{bar:"2.0rc1"}}}
After finding a solution that worked, I found a far superior one-liner that didn't come up in previous Google searches.
The correct answer for my problem is: from nested_lookup import nested_update
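A short sketch of that one-liner in context; this assumes nested_update(document, key, value) as described in the nested-lookup links in the references below:

from nested_lookup import nested_update

doc = {'a': {'foo': '', 'bar': '1.3'},
       'b': {'d': {'foo': ''}, 'z': {'bar': '1.5'}}}

# Replace the value everywhere the 'bar' key is found.
nested_update(doc, key='bar', value='2.0rc1')
# {'a': {'foo': '', 'bar': '2.0rc1'}, 'b': {'d': {'foo': ''}, 'z': {'bar': '2.0rc1'}}}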
For what it's worth, I also found that object_hook did exactly what I wanted (it is the decoder's extension point; JSONDecoder has no default method):
def val_hook(obj):
    return_d = {}
    if isinstance(obj, dict):
        for k in obj:
            if in_key == k:
                return_d[k] = in_val
            else:
                return_d[k] = obj[k]
        return return_d
    else:
        return obj

json_dump = json.dumps(in_dict)
json_load = json.loads(json_dump, object_hook=val_hook)
References
https://gist.github.com/douglasmiranda/5127251
https://github.com/russellballestrini/nested-lookup
https://pypi.org/project/nested-lookup/

Adding duplicate keys to JSON with python

Is there a way to add duplicate keys to json with python?
From my understanding, you can't have duplicate keys in Python dictionaries. Usually, how I go about creating JSON is to create the dictionary and then json.dumps it. However, I need duplicate keys within the JSON for testing purposes, and I can't create them because I can't add duplicate keys to a Python dictionary. I am trying to do this in Python 3.
You could always construct such a string value by hand.
On the other hand, one can make the CPython json module encode duplicate keys. This is very tricky in Python 2, because the json module does not respect duck-typing at all.
The straightforward solution would be to inherit from collections.Mapping - well, you can't, since "MyMapping is not JSON serializable."
Next one tries to subclass a dict - well, but if json.dumps notices that the type is dict, it skips calling __len__ and sees the underlying dict directly - if it is empty, {} is output directly, so clearly if we fake the methods, the underlying dictionary must not be empty.
The next source of joy is that actually __iter__ is called, which iterates keys; and for each key, __getitem__ is called, so we need to remember the corresponding value to return for the given key... thus we arrive at a very ugly solution for Python 2:
class FakeDict(dict):
    def __init__(self, items):
        # need to have something in the dictionary
        self['something'] = 'something'
        self._items = items
    def __getitem__(self, key):
        return self.last_val
    def __iter__(self):
        def generator():
            for key, value in self._items:
                self.last_val = value
                yield key
        return generator()
In CPython 3.3+ it is slightly easier... no, collections.abc.Mapping does not work, yes, you need to subclass a dict, yes, you need to fake that your dictionary has content... but the internal JSON encoder calls items instead of __iter__ and __getitem__!
Thus on Python 3:
import json

class FakeDict(dict):
    def __init__(self, items):
        self['something'] = 'something'
        self._items = items
    def items(self):
        return self._items

print(json.dumps(FakeDict([('a', 1), ('a', 2)])))
prints out
{"a": 1, "a": 2}
Thanks a lot Antti Haapala, I figured out you can even use this to convert an array of tuples into a FakeDict:
def function():
    array_of_tuples = []
    array_of_tuples.append(("key", "value1"))
    array_of_tuples.append(("key", "value2"))
    return FakeDict(array_of_tuples)

print(json.dumps(function()))
Output:
{"key": "value1", "key": "value2"}
And if you change the FakeDict class to this, empty dictionaries will be correctly serialized:
class FakeDict(dict):
    def __init__(self, items):
        if items != []:
            self['something'] = 'something'
        self._items = items
    def items(self):
        return self._items

def test():
    array_of_tuples = []
    return FakeDict(array_of_tuples)

print(json.dumps(test()))
Output:
"{}"
Actually, it's very easy:
$> python -c "import json; print json.dumps({1: 'a', '1': 'b'})"
{"1": "b", "1": "a"}

What do I do when I need a self referential dictionary?

I'm new to Python, and am sort of surprised I cannot do this.
dictionary = {
    'a' : '123',
    'b' : dictionary['a'] + '456'
}
I'm wondering what the Pythonic way to correctly do this in my script, because I feel like I'm not the only one that has tried to do this.
EDIT: Enough people were wondering what I'm doing with this, so here are more details for my use cases. Let's say I want to keep dictionary objects to hold file system paths. The paths are relative to other values in the dictionary. For example, this is what one of my dictionaries may look like.
dictionary = {
    'user': 'sholsapp',
    'home': '/home/' + dictionary['user']
}
It is important that at any point in time I may change dictionary['user'] and have all of the dictionaries values reflect the change. Again, this is an example of what I'm using it for, so I hope that it conveys my goal.
From my own research I think I will need to implement a class to do this.
No fear of creating new classes - you can take advantage of Python's string formatting capabilities and simply do:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item) % self

dictionary = MyDict({
    'user' : 'gnucom',
    'home' : '/home/%(user)s',
    'bin' : '%(home)s/bin'
})

print dictionary["home"]
print dictionary["bin"]
The nearest I came up with without writing a class:
dictionary = {
    'user' : 'gnucom',
    'home' : lambda: '/home/' + dictionary['user']
}

print dictionary['home']()
dictionary['user'] = 'tony'
print dictionary['home']()
>>> dictionary = {
... 'a':'123'
... }
>>> dictionary['b'] = dictionary['a'] + '456'
>>> dictionary
{'a': '123', 'b': '123456'}
It works fine when done in two steps like this, but in a literal the dictionary hasn't been defined yet at the time you try to use it (the whole dictionary literal has to be evaluated first).
But be careful, because this assigns to the key 'b' the value referenced by the key 'a' at the time of assignment; it is not going to do the lookup every time. If that is what you are looking for, it's possible, but it takes more work.
What you're describing in your edit is how an INI config file works. Python does have a built-in library called ConfigParser (configparser in Python 3) which should work for what you're describing.
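For illustration, a minimal sketch with the standard library configparser, whose default BasicInterpolation expands %(user)s-style references at lookup time:

from configparser import ConfigParser

cfg = ConfigParser()  # BasicInterpolation is the default
cfg.read_string("""
[paths]
user = sholsapp
home = /home/%(user)s
""")

print(cfg['paths']['home'])  # /home/sholsapp
cfg['paths']['user'] = 'tony'
print(cfg['paths']['home'])  # /home/tony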
This is an interesting problem. It seems like Greg has a good solution. But that's no fun ;)
jsbueno has a very elegant solution, but that only applies to strings (as you requested).
The trick to a 'general' self-referential dictionary is to use a surrogate object. It takes a few (understatement) lines of code to pull off, but the usage is about what you want:
S = SurrogateDict(AdditionSurrogateDictEntry)
d = S.resolve({'user': 'gnucom',
               'home': '/home/' + S['user'],
               'config': [S['home'] + '/.emacs', S['home'] + '/.bashrc']})
The code to make that happen is not nearly so short. It lives in three classes:
import abc

class SurrogateDictEntry(object):
    __metaclass__ = abc.ABCMeta

    def __init__(self, key):
        """record the key on the real dictionary that this will resolve to a
        value for
        """
        self.key = key

    def resolve(self, d):
        """return the actual value"""
        if hasattr(self, 'op'):
            # any operation done on self will store its name in self.op.
            # if this is set, resolve it by calling the appropriate method
            # now that we can get self.value out of d
            self.value = d[self.key]
            return getattr(self, self.op + 'resolve__')()
        else:
            return d[self.key]

    @staticmethod
    def make_op(opname):
        """A convenience helper. This will be the form of all op hooks for
        subclasses. The actual logic for the op is in __op__resolve__
        (e.g. __add__resolve__).
        """
        def op(self, other):
            self.stored_value = other
            self.op = opname
            return self
        op.__name__ = opname
        return op
Next comes the concrete class. Simple enough.
class AdditionSurrogateDictEntry(SurrogateDictEntry):
    __add__ = SurrogateDictEntry.make_op('__add__')
    __radd__ = SurrogateDictEntry.make_op('__radd__')

    def __add__resolve__(self):
        return self.value + self.stored_value

    def __radd__resolve__(self):
        return self.stored_value + self.value
Here's the final class
class SurrogateDict(object):
    def __init__(self, EntryClass):
        self.EntryClass = EntryClass

    def __getitem__(self, key):
        """record the key and return"""
        return self.EntryClass(key)

    @staticmethod
    def resolve(d):
        """resolve self references"""
        stack = [d]
        while stack:
            cur = stack.pop()
            # This just tries to set it to an appropriate iterable
            it = xrange(len(cur)) if not hasattr(cur, 'keys') else cur.keys()
            for key in it:
                # just register your class with SurrogateDictEntry and you
                # can pass whatever
                while isinstance(cur[key], SurrogateDictEntry):
                    cur[key] = cur[key].resolve(d)
                # I'm just going to check for __iter__ but you can add other
                # checks here for items that we should loop over.
                if hasattr(cur[key], '__iter__'):
                    stack.append(cur[key])
        return d
In response to gnucom's question about why I named the classes the way that I did:
The word surrogate is generally associated with standing in for something else, so it seemed appropriate, because that's what the SurrogateDict class does: an instance replaces the 'self' references in a dictionary literal. That being said (other than just being straight-up stupid sometimes), naming is probably one of the hardest things for me about coding. If you (or anyone else) can suggest a better name, I'm all ears.
I'll provide a brief explanation. Throughout, S refers to an instance of SurrogateDict and d is the real dictionary.
1. A reference S[key] triggers S.__getitem__, which returns a SurrogateDictEntry(key).
2. When SurrogateDictEntry(key) is constructed, it stores key. This will be the key into d for the value that this entry is acting as a surrogate for.
3. After S[key] is returned, it is either entered into d, or has some operation(s) performed on it. If an operation is performed on it, it triggers the relevant __op__ method, which simply stores the value that the operation is performed with and the name of the operation, then returns itself. We can't actually resolve the operation because d hasn't been constructed yet.
4. After d is constructed, it is passed to S.resolve. This method loops through d, finding any instances of SurrogateDictEntry and replacing them with the result of calling the resolve method on the instance.
5. The SurrogateDictEntry.resolve method receives the now-constructed d as an argument and can use the value of key that it stored at construction time to get the value that it is acting as a surrogate for. If an operation was performed on it after creation, the op attribute will have been set to the name of that operation. If the class has a __op__ method, then it has a __op__resolve__ method with the actual logic that would normally be in the __op__ method. So now we have the logic (self.__op__resolve__) and all the necessary values (self.value, self.stored_value) to finally get the real value of d[key]. We return that, and step 4 places it in the dictionary.
6. Finally, the SurrogateDict.resolve method returns d with all references resolved.
That's a rough sketch. If you have any more questions, feel free to ask.
If you, just like me, were wondering how to make jsbueno's snippet work with {}-style substitutions, below is the example code (which is probably not very efficient, though):
import string

class MyDict(dict):
    def __init__(self, *args, **kw):
        super(MyDict, self).__init__(*args, **kw)
        self.itemlist = super(MyDict, self).keys()
        self.fmt = string.Formatter()

    def __getitem__(self, item):
        return self.fmt.vformat(dict.__getitem__(self, item), {}, self)

xs = MyDict({
    'user' : 'gnucom',
    'home' : '/home/{user}',
    'bin' : '{home}/bin'
})
>>> xs["home"]
'/home/gnucom'
>>> xs["bin"]
'/home/gnucom/bin'
I tried to make it work with the simple replacement of % self with .format(**self), but it turns out it wouldn't work for nested expressions (like 'bin' in the listing above, which references 'home', which has its own reference to 'user') because of the evaluation order (the ** expansion is done before the actual format call and is not delayed like in the original % version).
Write a class, maybe something with properties:
class PathInfo(object):
    def __init__(self, user):
        self.user = user

    @property
    def home(self):
        return '/home/' + self.user

p = PathInfo('thc')
print p.home  # /home/thc
As sort of an extended version of Tony's answer, you could build a dictionary subclass that calls its values if they are callables:
class CallingDict(dict):
    """Returns the result rather than the value of referenced callables.

    >>> cd = CallingDict({1: "One", 2: "Two", 'fsh': "Fish",
    ...                   "rhyme": lambda d: ' '.join((d[1], d['fsh'],
    ...                                                d[2], d['fsh']))})
    >>> cd["rhyme"]
    'One Fish Two Fish'
    >>> cd[1] = 'Red'
    >>> cd[2] = 'Blue'
    >>> cd["rhyme"]
    'Red Fish Blue Fish'
    """
    def __getitem__(self, item):
        it = super(CallingDict, self).__getitem__(item)
        if callable(it):
            return it(self)
        else:
            return it
Of course this would only be usable if you're not actually going to store callables as values. If you need to be able to do that, you could wrap the lambda declaration in a function that adds some attribute to the resulting lambda and check for it in CallingDict.__getitem__ (as in the sketch below), but at that point it's getting complex, and long-winded, enough that it might just be easier to use a class for your data in the first place.
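A minimal sketch of that marking idea (the computed helper and is_computed attribute are hypothetical names):

def computed(fn):
    # Flag this callable as a value to evaluate on access.
    fn.is_computed = True
    return fn

class MarkedCallingDict(dict):
    def __getitem__(self, item):
        it = super(MarkedCallingDict, self).__getitem__(item)
        if getattr(it, 'is_computed', False):
            return it(self)
        return it

cd = MarkedCallingDict({
    'user': 'gnucom',
    'home': computed(lambda d: '/home/' + d['user']),  # evaluated on access
    'func': lambda d: None,                            # stored as a plain value
})
print(cd['home'])  # /home/gnucom
print(cd['func'])  # <function <lambda> ...>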
This is very easy in a lazily evaluated language (Haskell).
Since Python is strictly evaluated, we can do a little trick to turn things lazy:
Y = lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))

d1 = lambda self: lambda: {
    'a': lambda: 3,
    'b': lambda: self()['a']()
}

# fix d1, then evaluate it
d2 = Y(d1)()

d2['a']()  # to get a => 3
d2['b']()  # to get b => 3
Syntax-wise this is not very nice. That's because we need to explicitly construct lazy expressions with lambda: ... and explicitly evaluate them with ...(). It's the opposite of the problem in lazy languages, which need strictness annotations; here in Python we end up needing lazy annotations.
I think with some more meta-programming and some more tricks, the above could be made easier to use.
Note that this is basically how let-rec works in some functional languages.
The jsbueno answer in Python 3:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item).format(self)

dictionary = MyDict({
    'user' : 'gnucom',
    'home' : '/home/{0[user]}',
    'bin' : '{0[home]}/bin'
})

print(dictionary["home"])
print(dictionary["bin"])
Here we use the Python 3 string formatting with curly braces {} and the .format() method.
Documentation : https://docs.python.org/3/library/string.html
