Python hashable dicts

As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an ALGOL-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (EDIT: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)
The problem is that Python dicts cannot appear as keys of other dicts. Even using a tuple (as I'd be doing anyway) doesn't help.
>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
I guess it has to be tuples all the way down. The Python standard library provides approximately what I'd need: collections.namedtuple has a very different syntax, but it can be used as a key. Continuing from the above session:
>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
OK, but I have to make a class for each possible combination of keys in the rule I want to use. That isn't so bad, because each parse rule knows exactly what parameters it uses, so the class can be defined at the same time as the function that parses the rule.
Edit: An additional problem with namedtuples is that they are strictly positional. Two tuples that look like they should be different can in fact be the same:
>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
tl;dr: How do I get dicts that can be used as keys in other dicts?
Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling dict.__setitem__(instance, key, value), but we're all adults here.
class hashdict(dict):
    """
    hashable dict implementation, suitable for use as a key into
    other dicts.

    >>> h1 = hashdict({"apples": 1, "bananas":2})
    >>> h2 = hashdict({"bananas": 3, "mangoes": 5})
    >>> h1+h2
    hashdict(apples=1, bananas=3, mangoes=5)
    >>> d1 = {}
    >>> d1[h1] = "salad"
    >>> d1[h1]
    'salad'
    >>> d1[h2]
    Traceback (most recent call last):
    ...
    KeyError: hashdict(bananas=3, mangoes=5)

    based on answers from
    http://stackoverflow.com/questions/1151658/python-hashable-dicts
    """
    def __key(self):
        return tuple(sorted(self.items()))

    def __repr__(self):
        return "{0}({1})".format(self.__class__.__name__,
            ", ".join("{0}={1}".format(
                str(i[0]), repr(i[1])) for i in self.__key()))

    def __hash__(self):
        return hash(self.__key())

    def __setitem__(self, key, value):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def __delitem__(self, key):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def clear(self):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def pop(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def popitem(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def setdefault(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def update(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    # update is not ok because it mutates the object
    # __add__ is ok because it creates a new object
    # while the new object is under construction, it's ok to mutate it
    def __add__(self, right):
        result = hashdict(self)
        dict.update(result, right)
        return result

if __name__ == "__main__":
    import doctest
    doctest.testmod()

Here is the easy way to make a hashable dictionary. Just remember not to mutate it after embedding it in another dictionary, for obvious reasons.
class hashabledict(dict):
    def __hash__(self):
        return hash(tuple(sorted(self.items())))
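For example (a quick sketch; the values must themselves be hashable, and the keys mutually sortable for sorted() to work):

>>> d = {}
>>> d[hashabledict(a=1, b=2)] = 'value'
>>> d[hashabledict(b=2, a=1)]
'value'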

Hashables should be immutable -- not enforcing this but TRUSTING you not to mutate a dict after its first use as a key, the following approach would work:
class hashabledict(dict):
    def __key(self):
        return tuple((k, self[k]) for k in sorted(self))
    def __hash__(self):
        return hash(self.__key())
    def __eq__(self, other):
        return self.__key() == other.__key()
If you DO need to mutate your dicts and STILL want to use them as keys, complexity explodes hundredfolds -- not to say it can't be done, but I'll wait until a VERY specific indication before I get into THAT incredible morass!-)

All that is needed to make dictionaries usable for your purpose is to add a __hash__ method:
class Hashabledict(dict):
    def __hash__(self):
        return hash(frozenset(self))
Note that the frozenset conversion will work for all dictionaries (i.e. it doesn't require the keys to be sortable). Likewise, there is no restriction on the dictionary values.
If there are many dictionaries with identical keys but with distinct values, it is necessary to have the hash take the values into account. The fastest way to do that is:
class Hashabledict(dict):
    def __hash__(self):
        return hash((frozenset(self), frozenset(self.itervalues())))
This is quicker than frozenset(self.iteritems()) for two reasons. First, the frozenset(self) step reuses the hash values stored in the dictionary, saving unnecessary calls to hash(key). Second, using itervalues accesses the values directly and avoids the many memory-allocator calls used by items to form new key/value tuples in memory every time you do a lookup.
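On Python 3, where itervalues() no longer exists, the same idea would read (a sketch):

class Hashabledict(dict):
    def __hash__(self):
        # values() replaces itervalues(); keys still hash via frozenset(self)
        return hash((frozenset(self), frozenset(self.values())))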

The given answers are okay, but they could be improved by using frozenset(...) instead of tuple(sorted(...)) to generate the hash:
>>> import timeit
>>> timeit.timeit('hash(tuple(sorted(d.iteritems())))', "d = dict(a=3, b='4', c=2345, asdfsdkjfew=0.23424, x='sadfsadfadfsaf')")
4.7758948802947998
>>> timeit.timeit('hash(frozenset(d.iteritems()))', "d = dict(a=3, b='4', c=2345, asdfsdkjfew=0.23424, x='sadfsadfadfsaf')")
1.8153600692749023
The performance advantage depends on the content of the dictionary, but in most cases I've tested, hashing with frozenset is at least 2 times faster (mainly because it does not need to sort).
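On Python 3 the same comparison can be run with items() in place of iteritems() (a sketch; absolute timings vary by machine and version):

import timeit
setup = "d = dict(a=3, b='4', c=2345, asdfsdkjfew=0.23424, x='sadfsadfadfsaf')"
print(timeit.timeit('hash(tuple(sorted(d.items())))', setup))
print(timeit.timeit('hash(frozenset(d.items()))', setup))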

A reasonably clean, straightforward implementation is
import collections

class FrozenDict(collections.Mapping):
    """Don't forget the docstrings!!"""
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)
    def __getitem__(self, key):
        return self._d[key]
    def __hash__(self):
        return hash(tuple(sorted(self._d.iteritems())))
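For instance, two equal FrozenDicts then hash and compare equal, so they can serve as cache keys (a quick sketch; the code above is Python 2 - on Python 3 use collections.abc.Mapping and items()):

>>> fd = FrozenDict(a=1, b=2)
>>> cache = {fd: "result"}
>>> cache[FrozenDict(a=1, b=2)]
'result'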

I keep coming back to this topic... Here's another variation. I'm uneasy with subclassing dict to add a __hash__ method; there's virtually no escape from the problem that dicts are mutable, and trusting that they won't change seems like a weak idea. So I've instead looked at building a mapping based on a builtin type that is itself immutable. Although tuple is an obvious choice, accessing values in it implies a sort and a bisect; not a problem, but it doesn't seem to be leveraging much of the power of the type it's built on.
What if you jam key, value pairs into a frozenset? What would that require, how would it work?
Part 1: you need a way of encoding the 'items' in such a way that a frozenset will treat them mainly by their keys; I'll make a little subclass for that.
import collections

class pair(collections.namedtuple('pair_base', 'key value')):
    def __hash__(self):
        return hash((self.key, None))
    def __eq__(self, other):
        if type(self) != type(other):
            return NotImplemented
        return self.key == other.key
    def __repr__(self):
        return repr((self.key, self.value))
That alone puts you in spitting distance of an immutable mapping:
>>> frozenset(pair(k, v) for k, v in enumerate('abcd'))
frozenset([(0, 'a'), (2, 'c'), (1, 'b'), (3, 'd')])
>>> pairs = frozenset(pair(k, v) for k, v in enumerate('abcd'))
>>> pair(2, None) in pairs
True
>>> pair(5, None) in pairs
False
>>> goal = frozenset((pair(2, None),))
>>> pairs & goal
frozenset([(2, None)])
D'oh! Unfortunately, when you use the set operators on elements that are equal but not the same object, which one ends up in the return value is undefined; we'll have to go through some more gyrations.
>>> pairs - (pairs - goal)
frozenset([(2, 'c')])
>>> iter(pairs - (pairs - goal)).next().value
'c'
However, looking up values this way is cumbersome and, worse, creates lots of intermediate sets; that won't do! We'll create a 'fake' key-value pair to get around it:
class Thief(object):
    def __init__(self, key):
        self.key = key
    def __hash__(self):
        return hash(pair(self.key, None))
    def __eq__(self, other):
        self.value = other.value
        return pair(self.key, None) == other
Which results in the less problematic:
>>> thief = Thief(2)
>>> thief in pairs
True
>>> thief.value
'c'
That's all the deep magic; the rest is wrapping it all up into something that has an interface like a dict. Since we're subclassing frozenset, which has a very different interface, there are quite a lot of methods; we get a little help from collections.Mapping, but most of the work is overriding the frozenset methods with versions that work like dicts instead:
class FrozenDict(frozenset, collections.Mapping):
    def __new__(cls, seq=()):
        return frozenset.__new__(cls, (pair(k, v) for k, v in seq))
    def __getitem__(self, key):
        thief = Thief(key)
        if frozenset.__contains__(self, thief):
            return thief.value
        raise KeyError(key)
    def __eq__(self, other):
        if not isinstance(other, FrozenDict):
            return dict(self.iteritems()) == other
        if len(self) != len(other):
            return False
        for key, value in self.iteritems():
            try:
                if value != other[key]:
                    return False
            except KeyError:
                return False
        return True
    def __hash__(self):
        return hash(frozenset(self.iteritems()))
    def get(self, key, default=None):
        thief = Thief(key)
        if frozenset.__contains__(self, thief):
            return thief.value
        return default
    def __iter__(self):
        for item in frozenset.__iter__(self):
            yield item.key
    def iteritems(self):
        for item in frozenset.__iter__(self):
            yield (item.key, item.value)
    def iterkeys(self):
        for item in frozenset.__iter__(self):
            yield item.key
    def itervalues(self):
        for item in frozenset.__iter__(self):
            yield item.value
    def __contains__(self, key):
        return frozenset.__contains__(self, pair(key, None))
    has_key = __contains__
    def __repr__(self):
        return type(self).__name__ + (', '.join(repr(item) for item in self.iteritems())).join('()')
    @classmethod
    def fromkeys(cls, keys, value=None):
        return cls((key, value) for key in keys)
which, ultimately, does answer my own question:
>>> myDict = {}
>>> myDict[FrozenDict(enumerate('ab'))] = 5
>>> FrozenDict(enumerate('ab')) in myDict
True
>>> FrozenDict(enumerate('bc')) in myDict
False
>>> FrozenDict(enumerate('ab', 3)) in myDict
False
>>> myDict[FrozenDict(enumerate('ab'))]
5

The accepted answer by @Unknown, as well as the answer by @AlexMartelli, work perfectly fine, but only under the following constraints:
1. The dictionary's values must be hashable. For example, hash(hashabledict({'a':[1,2]})) will raise TypeError.
2. Keys must support comparison. For example, hash(hashabledict({'a':'a', 1:1})) will raise TypeError.
3. The comparison operator on keys must impose a total ordering. For example, if the two keys in a dictionary are frozenset((1,2,3)) and frozenset((4,5,6)), they compare unequal in both directions. Therefore, sorting the items of a dictionary with such keys can result in an arbitrary order, and will violate the rule that equal objects must have the same hash value.
The much faster answer by @ObenSonne lifts constraints 2 and 3, but is still bound by constraint 1 (values must be hashable).
The faster yet answer by @RaymondHettinger lifts all 3 constraints because it does not include .values() in the hash calculation. However, its performance is good only if:
Most of the (non-equal) dictionaries that need to be hashed do not have identical .keys().
If this condition isn't satisfied, the hash function will still be valid, but may cause too many collisions. For example, in the extreme case where all the dictionaries are generated from a website template (field names as keys, user input as values), the keys will always be the same, and the hash function will return the same value for all the inputs. As a result, a hashtable that relies on such a hash function will become as slow as a list when retrieving an item (O(N) instead of O(1)).
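A quick sketch of that degenerate case (KeysOnlyDict is a hypothetical name, for illustration only):

class KeysOnlyDict(dict):
    def __hash__(self):
        return hash(frozenset(self))  # keys only; values ignored

d1 = KeysOnlyDict(name='alice', age=30)
d2 = KeysOnlyDict(name='bob', age=25)
assert hash(d1) == hash(d2)  # same keys -> same hash, so every lookup collides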
I think the following solution will work reasonably well even if all 4 constraints I listed above are violated. It has an additional advantage that it can hash not only dictionaries, but any containers, even if they have nested mutable containers.
I'd much appreciate any feedback on this, since I only tested this lightly so far.
# python 3.4
import collections
import operator
import sys
import itertools
import reprlib

# a wrapper to make an object hashable, while preserving equality
class AutoHash:
    # for each known container type, we can optionally provide a tuple
    # specifying: type, transform, aggregator
    # even immutable types need to be included, since their items
    # may make them unhashable
    # transformation may be used to enforce the desired iteration
    # the result of a transformation must be an iterable
    # default: no change; for dictionaries, we use .items() to see values
    # usually transformation choice only affects efficiency, not correctness
    # aggregator is the function that combines all items into one object
    # default: frozenset; for ordered containers, we can use tuple
    # aggregator choice affects both efficiency and correctness
    # e.g., using a tuple aggregator for a set is incorrect,
    # since identical sets may end up with different hash values
    # frozenset is safe since at worst it just causes more collisions
    # unfortunately, no collections.ABC class is available that helps
    # distinguish ordered from unordered containers
    # so we need to just list them out manually as needed
    type_info = collections.namedtuple(
        'type_info',
        'type transformation aggregator')

    ident = lambda x: x
    # order matters; first match is used to handle a datatype
    known_types = (
        # dict also handles defaultdict
        type_info(dict, lambda d: d.items(), frozenset),
        # no need to include set and frozenset, since they are fine with defaults
        type_info(collections.OrderedDict, ident, tuple),
        type_info(list, ident, tuple),
        type_info(tuple, ident, tuple),
        type_info(collections.deque, ident, tuple),
        type_info(collections.Iterable, ident, frozenset)  # other iterables
    )

    # hash_func can be set to replace the built-in hash function
    # cache can be turned on; if it is, cycles will be detected,
    # otherwise cycles in a data structure will cause failure
    def __init__(self, data, hash_func=hash, cache=False, verbose=False):
        self._data = data
        self.hash_func = hash_func
        self.verbose = verbose
        self.cache = cache
        # cache objects' hashes for performance and to deal with cycles
        if self.cache:
            self.seen = {}

    def hash_ex(self, o):
        # note: isinstance(o, Hashable) won't check inner types
        try:
            if self.verbose:
                print(type(o),
                      reprlib.repr(o),
                      self.hash_func(o),
                      file=sys.stderr)
            return self.hash_func(o)
        except TypeError:
            pass

        # we let built-in hash decide if the hash value is worth caching
        # so we don't cache the built-in hash results
        if self.cache and id(o) in self.seen:
            return self.seen[id(o)][0]  # found in cache

        # check if o can be handled by decomposing it into components
        for typ, transformation, aggregator in AutoHash.known_types:
            if isinstance(o, typ):
                # another option is:
                # result = reduce(operator.xor, map(_hash_ex, handler(o)))
                # but collisions are more likely with xor than with frozenset
                # e.g. hash_ex([1,2,3,4])==0 with xor
                try:
                    # try to frozenset the actual components, it's faster
                    h = self.hash_func(aggregator(transformation(o)))
                except TypeError:
                    # components not hashable with built-in;
                    # apply our extended hash function to them
                    h = self.hash_func(aggregator(map(self.hash_ex, transformation(o))))
                if self.cache:
                    # storing the object too, otherwise memory location will be reused
                    self.seen[id(o)] = (h, o)
                if self.verbose:
                    print(type(o), reprlib.repr(o), h, file=sys.stderr)
                return h

        raise TypeError('Object {} of type {} not hashable'.format(repr(o), type(o)))

    def __hash__(self):
        return self.hash_ex(self._data)

    def __eq__(self, other):
        # short circuit to save time
        if self is other:
            return True

        # 1) type(self) a proper subclass of type(other) => self.__eq__ will be called first
        # 2) any other situation => lhs.__eq__ will be called first

        # case 1. one side is a subclass of the other, and AutoHash.__eq__ is not overridden in either
        # => the subclass instance's __eq__ is called first, and we should compare self._data and other._data
        # case 2. neither side is a subclass of the other; self is lhs
        # => we can't compare to another type; we should let the other side decide what to do, return NotImplemented
        # case 3. neither side is a subclass of the other; self is rhs
        # => we can't compare to another type, and the other side already tried and failed;
        # we should return False, but NotImplemented will have the same effect
        # any other case: we won't reach the __eq__ code in this class, no need to worry about it
        if isinstance(self, type(other)):  # identifies case 1
            return self._data == other._data
        else:  # identifies cases 2 and 3
            return NotImplemented

d1 = {'a':[1,2], 2:{3:4}}
print(hash(AutoHash(d1, cache=True, verbose=True)))

d = AutoHash(dict(a=1, b=2, c=3, d=[4,5,6,7], e='a string of chars'), cache=True, verbose=True)
print(hash(d))

You might also want to add these two methods to get the v2 pickling protocol to work with hashdict instances. Otherwise cPickle will try to use hashdict.__setitem__, resulting in a TypeError. Interestingly, with the other two versions of the protocol your code works just fine.
def __setstate__(self, objstate):
    for k, v in objstate.items():
        dict.__setitem__(self, k, v)

def __reduce__(self):
    return (hashdict, (), dict(self),)
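With those two methods added, a protocol-2 round trip should work (a sketch, assuming the hashdict class from the accepted answer above):

import pickle
h = hashdict(apples=1, bananas=2)
h2 = pickle.loads(pickle.dumps(h, protocol=2))
assert h2 == h and isinstance(h2, hashdict)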

Serialize the dict as a string with the json package:

import json
d = {'a': 1, 'b': 2}
s = json.dumps(d)

Restore the dict when you need it:

d2 = json.loads(s)
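One caveat (a sketch): two dicts with equal contents can serialize to different strings, since key order follows insertion order (and is arbitrary on older Pythons); pass sort_keys=True if the string is meant to be a canonical cache key. JSON also turns all keys into strings.

import json
d1 = {'a': 1, 'b': 2}
d2 = {'b': 2, 'a': 1}
cache = {json.dumps(d1, sort_keys=True): 'value'}
assert cache[json.dumps(d2, sort_keys=True)] == 'value'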

If you don't put numbers in the dictionary and you never lose the variables containing your dictionaries, you can do this:
cache[id(rule)] = "whatever"
since id() is unique for every dictionary
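A sketch of the main caveat: dicts with equal contents still have distinct ids, so a cache hit requires the very same object:

rule1 = {'foo': 'bar'}
rule2 = {'foo': 'bar'}
assert rule1 == rule2           # equal contents...
assert id(rule1) != id(rule2)   # ...but different ids: rule2 misses rule1's cache entry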
EDIT:
Oh sorry, yeah in that case what the other guys said would be better. I think you could also serialize your dictionaries as a string, like
cache['foo:bar'] = 'baz'
If you need to recover your dictionaries from the keys though, then you'd have to do something uglier like
cache['foo:bar'] = ({'foo': 'bar'}, 'baz')
I guess the advantage of this is that you wouldn't have to write as much code.

Related

Is There a dict with User Provided Key Function in Python? [duplicate]

How can I make as "perfect" a subclass of dict as possible? The end goal is to have a simple dict in which the keys are lowercase.
It would seem that there should be some tiny set of primitives I can override to make this work, but according to all my research and attempts it seems like this isn't the case:
If I override __getitem__/__setitem__, then get/set don't work. How can I make them work? Surely I don't need to implement them individually?
Am I preventing pickling from working, and do I need to implement __setstate__ etc?
Do I need repr, update and __init__?
Should I just use mutablemapping (it seems one shouldn't use UserDict
or DictMixin)? If so, how? The docs aren't exactly enlightening.
Here is my first go at it, get() doesn't work and no doubt there are many other minor problems:
class arbitrary_dict(dict):
    """A dictionary that applies an arbitrary key-altering function
    before accessing the keys."""
    def __keytransform__(self, key):
        return key

    # Overridden methods. List from
    # https://stackoverflow.com/questions/2390827/how-to-properly-subclass-dict
    def __init__(self, *args, **kwargs):
        self.update(*args, **kwargs)

    # Note: I'm using dict directly, since super(dict, self) doesn't work.
    # I'm not sure why, perhaps dict is not a new-style class.
    def __getitem__(self, key):
        return dict.__getitem__(self, self.__keytransform__(key))
    def __setitem__(self, key, value):
        return dict.__setitem__(self, self.__keytransform__(key), value)
    def __delitem__(self, key):
        return dict.__delitem__(self, self.__keytransform__(key))
    def __contains__(self, key):
        return dict.__contains__(self, self.__keytransform__(key))

class lcdict(arbitrary_dict):
    def __keytransform__(self, key):
        return str(key).lower()
You can write an object that behaves like a dict quite easily with ABCs (Abstract Base Classes) from the collections.abc module. It even tells you if you missed a method, so below is the minimal version that shuts the ABC up.
from collections.abc import MutableMapping

class TransformedDict(MutableMapping):
    """A dictionary that applies an arbitrary key-altering
    function before accessing the keys"""

    def __init__(self, *args, **kwargs):
        self.store = dict()
        self.update(dict(*args, **kwargs))  # use the free update to set keys

    def __getitem__(self, key):
        return self.store[self._keytransform(key)]

    def __setitem__(self, key, value):
        self.store[self._keytransform(key)] = value

    def __delitem__(self, key):
        del self.store[self._keytransform(key)]

    def __iter__(self):
        return iter(self.store)

    def __len__(self):
        return len(self.store)

    def _keytransform(self, key):
        return key
You get a few free methods from the ABC:
class MyTransformedDict(TransformedDict):
    def _keytransform(self, key):
        return key.lower()

s = MyTransformedDict([('Test', 'test')])

assert s.get('TEST') is s['test']  # free get
assert 'TeSt' in s                 # free __contains__
# free setdefault, __eq__, and so on

import pickle
# works too since we just use a normal dict
assert pickle.loads(pickle.dumps(s)) == s
I wouldn't subclass dict (or other builtins) directly. It often makes no sense, because what you actually want to do is implement the interface of a dict. And that is exactly what ABCs are for.
How can I make as "perfect" a subclass of dict as possible?
The end goal is to have a simple dict in which the keys are lowercase.
If I override __getitem__/__setitem__, then get/set don't work. How
do I make them work? Surely I don't need to implement them
individually?
Am I preventing pickling from working, and do I need to implement
__setstate__ etc?
Do I need repr, update and __init__?
Should I just use mutablemapping (it seems one shouldn't use UserDict
or DictMixin)? If so, how? The docs aren't exactly enlightening.
The accepted answer would be my first approach, but since it has some issues,
and since no one has addressed the alternative, actually subclassing a dict, I'm going to do that here.
What's wrong with the accepted answer?
This seems like a rather simple request to me:
How can I make as "perfect" a subclass of dict as possible?
The end goal is to have a simple dict in which the keys are lowercase.
The accepted answer doesn't actually subclass dict, and a test for this fails:
>>> isinstance(MyTransformedDict([('Test', 'test')]), dict)
False
Ideally, any type-checking code would be testing for the interface we expect, or an abstract base class, but if our data objects are being passed into functions that test for dict - and we can't "fix" those functions - this code will fail.
Other quibbles one might make:
The accepted answer is also missing the classmethod: fromkeys.
The accepted answer also has a redundant __dict__ - therefore taking up more space in memory:
>>> s.foo = 'bar'
>>> s.__dict__
{'foo': 'bar', 'store': {'test': 'test'}}
Actually subclassing dict
We can reuse the dict methods through inheritance. All we need to do is create an interface layer that ensures keys are passed into the dict in lowercase form if they are strings.
If I override __getitem__/__setitem__, then get/set don't work. How do I make them work? Surely I don't need to implement them individually?
Well, implementing them each individually is the downside to this approach and the upside to using MutableMapping (see the accepted answer), but it's really not that much more work.
First, let's factor out the difference between Python 2 and 3, create a singleton (_RaiseKeyError) to make sure we know if we actually get an argument to dict.pop, and create a function to ensure our string keys are lowercase:
from itertools import chain

try:  # Python 2
    str_base = basestring
    items = 'iteritems'
except NameError:  # Python 3
    str_base = str, bytes, bytearray
    items = 'items'

_RaiseKeyError = object()  # singleton for no-default behavior

def ensure_lower(maybe_str):
    """dict keys can be any hashable object - only call lower if str"""
    return maybe_str.lower() if isinstance(maybe_str, str_base) else maybe_str
Now we implement - I'm using super with the full arguments so that this code works for Python 2 and 3:
class LowerDict(dict):  # dicts take a mapping or iterable as their optional first argument
    __slots__ = ()  # no __dict__ - that would be redundant
    @staticmethod  # because this doesn't make sense as a global function.
    def _process_args(mapping=(), **kwargs):
        if hasattr(mapping, items):
            mapping = getattr(mapping, items)()
        return ((ensure_lower(k), v) for k, v in chain(mapping, getattr(kwargs, items)()))
    def __init__(self, mapping=(), **kwargs):
        super(LowerDict, self).__init__(self._process_args(mapping, **kwargs))
    def __getitem__(self, k):
        return super(LowerDict, self).__getitem__(ensure_lower(k))
    def __setitem__(self, k, v):
        return super(LowerDict, self).__setitem__(ensure_lower(k), v)
    def __delitem__(self, k):
        return super(LowerDict, self).__delitem__(ensure_lower(k))
    def get(self, k, default=None):
        return super(LowerDict, self).get(ensure_lower(k), default)
    def setdefault(self, k, default=None):
        return super(LowerDict, self).setdefault(ensure_lower(k), default)
    def pop(self, k, v=_RaiseKeyError):
        if v is _RaiseKeyError:
            return super(LowerDict, self).pop(ensure_lower(k))
        return super(LowerDict, self).pop(ensure_lower(k), v)
    def update(self, mapping=(), **kwargs):
        super(LowerDict, self).update(self._process_args(mapping, **kwargs))
    def __contains__(self, k):
        return super(LowerDict, self).__contains__(ensure_lower(k))
    def copy(self):  # don't delegate w/ super - dict.copy() -> dict :(
        return type(self)(self)
    @classmethod
    def fromkeys(cls, keys, v=None):
        return super(LowerDict, cls).fromkeys((ensure_lower(k) for k in keys), v)
    def __repr__(self):
        return '{0}({1})'.format(type(self).__name__, super(LowerDict, self).__repr__())
We use an almost boiler-plate approach for any method or special method that references a key, but otherwise, by inheritance, we get methods: len, clear, items, keys, popitem, and values for free. While this required some careful thought to get right, it is trivial to see that this works.
(Note that has_key was deprecated in Python 2 and removed in Python 3.)
Here's some usage:
>>> ld = LowerDict(dict(foo='bar'))
>>> ld['FOO']
'bar'
>>> ld['foo']
'bar'
>>> ld.pop('FoO')
'bar'
>>> ld.setdefault('Foo')
>>> ld
{'foo': None}
>>> ld.get('Bar')
>>> ld.setdefault('Bar')
>>> ld
{'bar': None, 'foo': None}
>>> ld.popitem()
('bar', None)
Am I preventing pickling from working, and do I need to implement
__setstate__ etc?
pickling
And the dict subclass pickles just fine:
>>> import pickle
>>> pickle.dumps(ld)
b'\x80\x03c__main__\nLowerDict\nq\x00)\x81q\x01X\x03\x00\x00\x00fooq\x02Ns.'
>>> pickle.loads(pickle.dumps(ld))
{'foo': None}
>>> type(pickle.loads(pickle.dumps(ld)))
<class '__main__.LowerDict'>
__repr__
Do I need repr, update and __init__?
We defined update and __init__, but you have a beautiful __repr__ by default:
>>> ld # without __repr__ defined for the class, we get this
{'foo': None}
However, it's good to write a __repr__ to improve the debuggability of your code. The ideal test is eval(repr(obj)) == obj. If it's easy to do for your code, I strongly recommend it:
>>> ld = LowerDict({})
>>> eval(repr(ld)) == ld
True
>>> ld = LowerDict(dict(a=1, b=2, c=3))
>>> eval(repr(ld)) == ld
True
You see, it's exactly what we need to recreate an equivalent object - this is something that might show up in our logs or in backtraces:
>>> ld
LowerDict({'a': 1, 'c': 3, 'b': 2})
Conclusion
Should I just use mutablemapping (it seems one shouldn't use UserDict
or DictMixin)? If so, how? The docs aren't exactly enlightening.
Yeah, these are a few more lines of code, but they're intended to be comprehensive. My first inclination would be to use the accepted answer,
and if there were issues with it, I'd then look at my answer - as it's a little more complicated, and there's no ABC to help me get my interface right.
Premature optimization is going for greater complexity in search of performance.
MutableMapping is simpler - so it gets an immediate edge, all else being equal. Nevertheless, to lay out all the differences, let's compare and contrast.
I should add that there was a push to put a similar dictionary into the collections module, but it was rejected. You should probably just do this instead:
my_dict[transform(key)]
It should be far more easily debuggable.
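That is, keep a plain dict and normalize keys explicitly at each call site (a minimal sketch of the suggestion; transform is whatever normalization you need):

def transform(key):
    return key.lower()

my_dict = {}
my_dict[transform('FOO')] = 1
assert my_dict[transform('foo')] == 1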
Compare and contrast
There are 6 interface functions implemented with the MutableMapping (which is missing fromkeys) and 11 with the dict subclass. I don't need to implement __iter__ or __len__, but instead I have to implement get, setdefault, pop, update, copy, __contains__, and fromkeys - but these are fairly trivial, since I can use inheritance for most of those implementations.
The MutableMapping implements some things in Python that dict implements in C - so I would expect a dict subclass to be more performant in some cases.
We get a free __eq__ in both approaches - both of which assume equality only if another dict is all lowercase - but again, I think the dict subclass will compare more quickly.
Summary:
subclassing MutableMapping is simpler with fewer opportunities for bugs, but slower, takes more memory (see redundant dict), and fails isinstance(x, dict)
subclassing dict is faster, uses less memory, and passes isinstance(x, dict), but it has greater complexity to implement.
Which is more perfect? That depends on your definition of perfect.
After trying out both of the top two suggestions, I've settled on a shady-looking middle route for Python 2.7. Maybe 3 is saner, but for me:
class MyDict(MutableMapping):
    # ... the few __methods__ that MutableMapping requires
    # and then this monstrosity
    @property
    def __class__(self):
        return dict
which I really hate, but seems to fit my needs, which are:
can override **my_dict
if you inherit from dict, this bypasses your code. try it out.
this makes #2 unacceptable for me at all times, as this is quite common in python code
masquerades as isinstance(my_dict, dict)
rules out MutableMapping alone, so #1 is not enough
I heartily recommend #1 if you don't need this, it's simple and predictable
fully controllable behavior
so I cannot inherit from dict
If you need to tell yourself apart from others, personally I use something like this (though I'd recommend better names):
def __am_i_me(self):
    return True

@classmethod
def __is_it_me(cls, other):
    try:
        return other.__am_i_me()
    except Exception:
        return False
As long as you only need to recognize yourself internally, this way it's harder to accidentally call __am_i_me, due to Python's name-mangling (it is renamed to _MyDict__am_i_me from anything calling outside this class). Slightly more private than _methods, both in practice and culturally.
So far I have no complaints, aside from the seriously-shady-looking __class__ override. I'd be thrilled to hear of any problems that others encounter with this though, I don't fully understand the consequences. But so far I've had no problems whatsoever, and this allowed me to migrate a lot of middling-quality code in lots of locations without needing any changes.
As evidence: https://repl.it/repls/TraumaticToughCockatoo
Basically: copy the current #2 option, add print 'method_name' lines to every method, and then try this and watch the output:
d = LowerDict() # prints "init", or whatever your print statement said
print '------'
splatted = dict(**d) # note that there are no prints here
You'll see similar behavior for other scenarios. Say your fake-dict is a wrapper around some other datatype, so there's no reasonable way to store the data in the backing-dict; **your_dict will be empty, regardless of what every other method does.
This works correctly for MutableMapping, but as soon as you inherit from dict it becomes uncontrollable.
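A minimal sketch of that splatting problem (behavior observed in CPython): with a dict subclass, ** reads the underlying dict storage directly and never calls your overridden methods, while a MutableMapping is driven through keys()/__getitem__:

class Noisy(dict):
    def keys(self):
        print('keys called')
        return dict.keys(self)
    def __getitem__(self, key):
        print('getitem called')
        return dict.__getitem__(self, key)

d = Noisy(a=1)
splatted = dict(**d)  # no output: Noisy's methods are bypassed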
Edit: as an update, this has been running without a single issue for almost two years now, on several hundred thousand (eh, might be a couple million) lines of complicated, legacy-ridden python. So I'm pretty happy with it :)
Edit 2: apparently I mis-copied this or something long ago. @classmethod __class__ does not work for isinstance checks - @property __class__ does: https://repl.it/repls/UnitedScientificSequence
My requirements were a bit stricter:
I had to retain case info (the strings are paths to files displayed to the user, but it's a Windows app, so internally all operations must be case insensitive)
I needed keys to be as small as possible (it did make a difference in memory performance, chopping 110 MB off of 370). This meant that caching a lowercase version of the keys was not an option.
I needed creation of the data structures to be as fast as possible (again, it made a difference in performance, speed this time). I had to go with a builtin.
My initial thought was to substitute our clunky Path class for a case-insensitive unicode subclass - but:
it proved hard to get that right - see: A case insensitive string class in python
it turns out that explicit dict key handling makes code verbose and messy - and error prone (structures are passed hither and thither, and it is not clear if they have CIstr instances as keys/elements; easy to forget, plus some_dict[CIstr(path)] is ugly)
So I finally had to write that case-insensitive dict. Thanks to code by @AaronHall that was made 10 times easier.
from itertools import chain  # needed by LowerDict._process_args below

class CIstr(unicode):
    """See https://stackoverflow.com/a/43122305/281545, especially for inlines"""
    __slots__ = ()  # does make a difference in memory performance

    #--Hash/Compare
    def __hash__(self):
        return hash(self.lower())
    def __eq__(self, other):
        if isinstance(other, CIstr):
            return self.lower() == other.lower()
        return NotImplemented
    def __ne__(self, other):
        if isinstance(other, CIstr):
            return self.lower() != other.lower()
        return NotImplemented
    def __lt__(self, other):
        if isinstance(other, CIstr):
            return self.lower() < other.lower()
        return NotImplemented
    def __ge__(self, other):
        if isinstance(other, CIstr):
            return self.lower() >= other.lower()
        return NotImplemented
    def __gt__(self, other):
        if isinstance(other, CIstr):
            return self.lower() > other.lower()
        return NotImplemented
    def __le__(self, other):
        if isinstance(other, CIstr):
            return self.lower() <= other.lower()
        return NotImplemented

    #--repr
    def __repr__(self):
        return '{0}({1})'.format(type(self).__name__,
                                 super(CIstr, self).__repr__())

def _ci_str(maybe_str):
    """dict keys can be any hashable object - only wrap in CIstr if str"""
    return CIstr(maybe_str) if isinstance(maybe_str, basestring) else maybe_str

class LowerDict(dict):
    """Dictionary that transforms its keys to CIstr instances.
    Adapted from: https://stackoverflow.com/a/39375731/281545
    """
    __slots__ = ()  # no __dict__ - that would be redundant

    @staticmethod  # because this doesn't make sense as a global function.
    def _process_args(mapping=(), **kwargs):
        if hasattr(mapping, 'iteritems'):
            mapping = getattr(mapping, 'iteritems')()
        return ((_ci_str(k), v) for k, v in
                chain(mapping, getattr(kwargs, 'iteritems')()))
    def __init__(self, mapping=(), **kwargs):
        # dicts take a mapping or iterable as their optional first argument
        super(LowerDict, self).__init__(self._process_args(mapping, **kwargs))
    def __getitem__(self, k):
        return super(LowerDict, self).__getitem__(_ci_str(k))
    def __setitem__(self, k, v):
        return super(LowerDict, self).__setitem__(_ci_str(k), v)
    def __delitem__(self, k):
        return super(LowerDict, self).__delitem__(_ci_str(k))
    def copy(self):  # don't delegate w/ super - dict.copy() -> dict :(
        return type(self)(self)
    def get(self, k, default=None):
        return super(LowerDict, self).get(_ci_str(k), default)
    def setdefault(self, k, default=None):
        return super(LowerDict, self).setdefault(_ci_str(k), default)
    __no_default = object()
    def pop(self, k, v=__no_default):
        if v is LowerDict.__no_default:
            # super will raise KeyError if no default and key does not exist
            return super(LowerDict, self).pop(_ci_str(k))
        return super(LowerDict, self).pop(_ci_str(k), v)
    def update(self, mapping=(), **kwargs):
        super(LowerDict, self).update(self._process_args(mapping, **kwargs))
    def __contains__(self, k):
        return super(LowerDict, self).__contains__(_ci_str(k))
    @classmethod
    def fromkeys(cls, keys, v=None):
        return super(LowerDict, cls).fromkeys((_ci_str(k) for k in keys), v)
    def __repr__(self):
        return '{0}({1})'.format(type(self).__name__,
                                 super(LowerDict, self).__repr__())
Implicit vs explicit is still a problem, but once the dust settles, renaming attributes/variables to start with ci (plus a big fat doc comment explaining that ci stands for case insensitive) is, I think, a perfect solution - as readers of the code must be fully aware that we are dealing with case-insensitive underlying data structures.
This will hopefully fix some hard to reproduce bugs, which I suspect boil down to case sensitivity.
Comments/corrections welcome :)
All you will have to do is
class BatchCollection(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
OR
class BatchCollection(dict):
    def __init__(self, inpt={}):
        super(BatchCollection, self).__init__(inpt)
A sample usage for my personal use
### EXAMPLE
import collections

class BatchCollection(dict):
    def __init__(self, inpt={}):
        super(BatchCollection, self).__init__(inpt)
    def __setitem__(self, key, item):
        if (isinstance(key, tuple) and len(key) == 2
                and isinstance(item, collections.Iterable)):
            # self.__dict__[key] = item
            super(BatchCollection, self).__setitem__(key, item)
        else:
            raise Exception(
                "Valid key should be a tuple (database_name, table_name) "
                "and value should be iterable")
Note: tested only in Python 3
collections.UserDict is often the simplest option when you need a custom dict.
As shown in the other answers, it's very tricky to override dict correctly, while UserDict makes it easy. To answer the original question, you can get a dict with lowercase keys:
import collections

class LowercaseDict(collections.UserDict):
    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def __setitem__(self, key, value):
        return super().__setitem__(key.lower(), value)

    def __delitem__(self, key):
        return super().__delitem__(key.lower())

    # Unfortunately, __contains__ is required currently due to
    # https://github.com/python/cpython/issues/91784
    def __contains__(self, key):
        return key.lower() in self.data

d = LowercaseDict(MY_KEY=0)  # Keys normalized in .__init__
d.update({'OTHER_KEY': 1})   # Keys normalized in .update
d['Hello'] = d['other_KEY']
assert 'HELLO' in d
print(d)  # All keys normalized: {'my_key': 0, 'other_key': 1, 'hello': 1}
And contrary to collections.abc.MutableMapping, you don't need __iter__, __len__, __init__,...
Subclassing UserDict is much easier.
However, UserDict is a MutableMapping, not a dict, so:
assert not isinstance(collections.UserDict(), dict)
assert isinstance(collections.UserDict(), collections.abc.MutableMapping)

Python 3.x: How to compare two lists containing dictionaries where order doesn't matter

I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal.
The order of the list is not uniform. Typically, I would turn the list into a set, but it is not possible since there are values that are also dictionaries.
a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8},]}
test_a = [a, b, c]
test_b = [b, c, a]
print(test_a == test_b) # False
print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict'
Is there a good way to approach this to show that test_a has the same contents as test_b?
You can use a simple loop to check that each element of one list is in the other:
def areEqual(a, b):
    if len(a) != len(b):
        return False
    for d in a:
        if d not in b:
            return False
    return True
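One caveat (using a and b from the question): since this only checks membership, lists with the same elements but different multiplicities still compare equal:

print(areEqual([a, a, b], [a, b, b]))  # True, although the lists differ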
I suggest writing a function that turns any Python object into something orderable, with its contents, if it has any, in sorted order. If we call it canonicalize, we can compare nested objects with:
canonicalize(test_a) == canonicalize(test_b)
Here's my attempt at writing a canonicalize function:
import collections.abc

def canonicalize(x):
    if isinstance(x, dict):
        x = sorted((canonicalize(k), canonicalize(v)) for k, v in x.items())
    elif isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
        x = sorted(map(canonicalize, x))
    else:
        try:
            bool(x < x)  # test for unorderable types like complex
        except TypeError:
            x = repr(x)  # replace with something orderable
    return x
This should work for most Python objects. It won't work for lists of heterogeneous items, containers that contain themselves (which will cause the function to hit the recursion limit), nor float('nan') (which has bizarre comparison behavior, and so may mess up the sorting of any container it's in).
It's possible that this code will do the wrong thing for non-iterable, unorderable objects, if they don't have a repr function that describes all the data that makes up their value (e.g. what is tested by ==). I picked repr as it will work on any kind of object and might get it right (it works for complex, for example). It should also work as desired for classes that have a repr that looks like a constructor call. For classes that have inherited object.__repr__ and so have repr output like <Foo object at 0xXXXXXXXX> it at least won't crash, though the objects will be compared by identity rather than value. I don't think there's any truly universal solution, and you can add some special cases for classes you expect to find in your data if they don't work with repr.
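Applied to the data from the question, a quick check (assuming the canonicalize function above has been defined):

a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8}]}
print(canonicalize([a, b, c]) == canonicalize([b, c, a]))  # True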
If the elements in both lists are shallow, the idea of sorting them and then comparing with equality can work. The problem with @Alex's solution is that he is only using "id" - but if, instead of id, one uses a function that will sort dictionaries properly, things should just work:
def sortkey(element):
    if isinstance(element, dict):
        element = sorted(element.items())
    return repr(element)

sorted(test_a, key=sortkey) == sorted(test_b, key=sortkey)
(I wrap the key in repr because it will cast all elements to string before comparison, which avoids a TypeError if different elements are of unorderable types - which would almost certainly happen if you are using Python 3.x.)
Just to be clear, if your dictionaries and lists have nested dictionaries themselves, you should use the answer by @m_callens. If your inner lists are also unordered, you can fix this to work by just sorting them inside the key function as well.
In this case they are the same dicts so you can compare ids (docs). Note that if you introduced a new dict whose values were identical it would still be treated differently. I.e. d = {'color': 'red'} would be treated as not equal to a.
sorted(map(id, test_a)) == sorted(map(id, test_b))
As @jsbueno points out, you can do this with the kwarg key.
sorted(test_a, key=id) == sorted(test_b, key=id)
An elegant and relatively fast solution:
class QuasiUnorderedList(list):
    def __eq__(self, other):
        """This method isn't as inefficient as you think! It runs in
        O(1 + 2 + 3 + ... + n) time, possibly better than recursively
        freezing/checking all the elements."""
        if len(self) != len(other):  # added: otherwise a strict subset of other would compare equal
            return False
        for item in self:
            for otheritem in other:
                if otheritem == item:
                    break
            else:
                # no break was reached, item not found.
                return False
        return True
This runs in O(1 + 2 + 3 + ... + n) = O(n^2) time flat. While slow for dictionaries of low depth, this is faster for dictionaries of high depth.
Here's a considerably longer snippet which is faster for dictionaries where depth is low and length is high.
class FrozenDict(collections.Mapping, collections.Hashable):  # collections.Hashable = portability
    """Adapted from http://stackoverflow.com/a/2704866/1459669"""
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
        self._hash = None
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)
    def __getitem__(self, key):
        return self._d[key]
    def __hash__(self):
        # It would have been simpler and maybe more obvious to
        # use hash(tuple(sorted(self._d.iteritems()))) from this discussion
        # so far, but this solution is O(n). I don't know what kind of
        # n we are going to run into, but sometimes it's hard to resist the
        # urge to optimize when it will gain improved algorithmic performance.
        # Now thread safe by CrazyPython
        if self._hash is None:
            _hash = 0
            for pair in self.iteritems():
                _hash ^= hash(pair)
            self._hash = _hash
        return self._hash

def freeze(obj):
    if type(obj) in (str, int, ...):  # other immutable atoms you store in your data structure
        return obj
    elif issubclass(type(obj), list):  # ugly but needed
        # frozenset rather than set, so the frozen result is itself hashable
        return frozenset(freeze(item) for item in obj)
    elif issubclass(type(obj), dict):  # for defaultdict, etc.
        return FrozenDict({key: freeze(value) for key, value in obj.items()})
    else:
        raise NotImplementedError("freeze() doesn't know how to freeze " + type(obj).__name__ + " objects!")
class FreezableList(list, collections.Hashable):
    _stored_freeze = None
    _hashed_self = None
    def __eq__(self, other):
        if self._stored_freeze and (self._hashed_self == self):
            frozen = self._stored_freeze
        else:
            frozen = freeze(self)
        if frozen is not self._stored_freeze:
            self._stored_freeze = frozen       # fixed: was self._stored_hash, an attribute never read
            self._hashed_self = list(self)     # remember the contents this freeze corresponds to
        return frozen == freeze(other)
    def __hash__(self):
        if self._stored_freeze and (self._hashed_self == self):
            frozen = self._stored_freeze
        else:
            frozen = freeze(self)
        if frozen is not self._stored_freeze:
            self._stored_freeze = frozen       # fixed: was self._stored_hash
            self._hashed_self = list(self)
        return hash(frozen)

class UncachedFreezableList(list, collections.Hashable):
    def __eq__(self, other):
        """No caching version of __eq__. May be faster.
        Don't forget to get rid of the declarations at the top of the class!
        Considerably more elegant."""
        return freeze(self) == freeze(other)
    def __hash__(self):
        """No caching version of __hash__. See the notes in the docstring of __eq__."""
        return hash(freeze(self))
Test all three (QuasiUnorderedList, FreezableList, and UncachedFreezableList) and see which one is faster in your situation. I'll betcha it's faster than the other solutions.

Overriding __eq__ and __hash__ to compare a dict attribute of two instances

I'm struggling to understand how to correctly compare objects based on an underlying dict attribute that each instance possesses.
Since I'm overriding __eq__, do I need to override __hash__ as well? I haven't a firm grasp on when/where to do so and could really use some help.
I created a simple example below to illustrate the maximum recursion exception that I've run into. A RegionalCustomerCollection organizes account IDs by geographical region. RegionalCustomerCollection objects are said to be equal if the regions and their respective accountids are. Essentially, all items() should be equal in content.
from collections import defaultdict

class RegionalCustomerCollection(object):
    def __init__(self):
        self.region_accountids = defaultdict(set)

    def get_region_accountid(self, region_name=None):
        return self.region_accountids.get(region_name, None)

    def set_region_accountid(self, region_name, accountid):
        self.region_accountids[region_name].add(accountid)

    def __eq__(self, other):
        if (other == self):
            return True
        if isinstance(other, RegionalCustomerCollection):
            return self.region_accountids == other.region_accountids
        return False

    def __repr__(self):
        return ', '.join(["{0}: {1}".format(region, acctids)
                          for region, acctids
                          in self.region_accountids.items()])
Let's create two object instances and populate them with some sample data:
>>> a = RegionalCustomerCollection()
>>> b = RegionalCustomerCollection()
>>> a.set_region_accountid('northeast',1)
>>> a.set_region_accountid('northeast',2)
>>> a.set_region_accountid('northeast',3)
>>> a.set_region_accountid('southwest',4)
>>> a.set_region_accountid('southwest',5)
>>> b.set_region_accountid('northeast',1)
>>> b.set_region_accountid('northeast',2)
>>> b.set_region_accountid('northeast',3)
>>> b.set_region_accountid('southwest',4)
>>> b.set_region_accountid('southwest',5)
Now let's try to compare the two instances and generate the recursion exception:
>>> a == b
...
RuntimeError: maximum recursion depth exceeded while calling a Python object
Your object shouldn't return a hash because it's mutable. If you put this object into a dictionary or set and then change it afterward, you may never be able to find it again.
In order to make an object unhashable, you need to do the following:
class MyClass(object):
    __hash__ = None
This will ensure that the object is unhashable.
>>> m = MyClass()
>>> hash(m)
TypeError: unhashable type: 'MyClass'
Does this answer your question? I'm suspecting not because you were explicitly looking for a hash function.
As far as the RuntimeError you're receiving, it's because of the following line:
if self == other:
    return True
That gets you into an infinite recursion loop. Try the following instead:
if self is other:
    return True
You don't need to override __hash__ to compare two objects (you'll need to if you want custom hashing, i.e. to improve performance when inserting into sets or dictionaries).
Also, you have infinite recursion here:
def __eq__(self, other):
    if (other == self):
        return True
    if isinstance(other, RegionalCustomerCollection):
        return self.region_accountids == other.region_accountids
    return False
If both objects are of type RegionalCustomerCollection then you'll have infinite recursion since == calls __eq__.

How to remove duplicates in set for objects?

I have a set of objects:
import random

class Test(object):
    def __init__(self):
        self.i = random.randint(1, 10)

res = set()
for i in range(0, 1000):
    res.add(Test())
print len(res)  # prints 1000
How do I remove duplicates from a set of objects?
Thanks for the answers, this works:
class Test(object):
    def __init__(self, i):
        self.i = i
        # self.i = random.randint(1,10)
        # self.j = random.randint(1,20)
    def __keys(self):
        t = ()
        for key in self.__dict__:
            t = t + (self.__dict__[key],)
        return t
    def __eq__(self, other):
        return isinstance(other, Test) and self.__keys() == other.__keys()
    def __hash__(self):
        return hash(self.__keys())

res = set()
res.add(Test(2))
...
res.add(Test(8))
result: [2,8,3,4,5,6,7]
But how do I preserve order? Sets don't support order. Can I use a list instead of a set, for example?
Your objects must be hashable (i.e. must have __eq__() and __hash__() defined) for sets to work properly with them:
class Test(object):
    def __init__(self):
        self.i = random.randint(1, 10)
    def __eq__(self, other):
        return self.i == other.i
    def __hash__(self):
        return self.i
An object is hashable if it has a hash value which never changes during its lifetime (it needs a __hash__() method), and can be compared to other objects (it needs an __eq__() or __cmp__() method). Hashable objects which compare equal must have the same hash value.
Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash value internally.
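With those two methods defined, duplicates now collapse (a quick check):

import random
res = set()
for i in range(0, 1000):
    res.add(Test())
print(len(res))  # at most 10, since equal i means equal objects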
If you have several attributes, hash and compare a tuple of them (thanks, delnan):
class Test(object):
    def __init__(self):
        self.i = random.randint(1, 10)
        self.k = random.randint(1, 10)
        self.j = random.randint(1, 10)
    def __eq__(self, other):
        return (self.i, self.k, self.j) == (other.i, other.k, other.j)
    def __hash__(self):
        return hash((self.i, self.k, self.j))
Your first question is already answered by Pavel Anossov.
But you have another question:
But how do I preserve order? Sets don't support order. Can I use a list instead of a set, for example?
You can use a list, but there are a few downsides:
You get the wrong interface.
You don't get automatic handling of duplicates. You have to explicitly write if foo not in res: res.append(foo). Obviously, you can wrap this up in a function instead of writing it repeatedly, but it's still extra work.
It's going to be a lot less efficient if the collection can get large. Basically, adding a new element, checking whether an element already exists, etc. are all going to be O(N) instead of O(1).
What you want is something that works like an ordered set. Or, equivalently, like a list that doesn't allow duplicates.
If you do all your adds first, and then all your lookups, and you don't need lookups to be fast, you can get around this by first building a list, then using unique_everseen from the itertools recipes to remove duplicates.
Or you could just keep a set plus a list of elements in insertion order (or a list plus a set of elements seen so far). But that can get a bit complicated, so you might want to wrap it up.
Ideally, you want to wrap it up in a type that has exactly the same API as set. Something like an OrderedSet akin to collections.OrderedDict.
Fortunately, if you scroll to the bottom of that docs page, you'll see that exactly what you want already exists; there's a link to an OrderedSet recipe at ActiveState.
So, copy it, paste it into your code, then just change res = set() to res = OrderedSet(), and you're done.
I think you can easily do what you want with a list, as you asked in your first post, since you defined the __eq__ operator:
l = []
if Test(0) not in l:
    l.append(Test(0))
My 2 cts ...
Pavel Anossov's answer is great for allowing your class to be used in a set with the semantics you want. However, if you want to preserve the order of your items, you'll need a bit more. Here's a function that de-duplicates a list, as long as the list items are hashable:
def dedupe(lst):
    seen = set()
    results = []
    for item in lst:
        if item not in seen:
            seen.add(item)
            results.append(item)
    return results
A slightly more idiomatic version would be a generator, rather than a function that returns a list. This gets rid of the results variable, using yield rather than appending the unique values to it. I've also renamed the lst parameter to iterable, since it will work just as well on any iterable object (such as another generator).
def dedupe(iterable):
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item
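Usage is the same for either version; here's a quick check with plain ints, which are already hashable:

>>> list(dedupe([3, 1, 3, 2, 1]))
[3, 1, 2]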

How to "perfectly" override a dict?

How can I make as "perfect" a subclass of dict as possible? The end goal is to have a simple dict in which the keys are lowercase.
It would seem that there should be some tiny set of primitives I can override to make this work, but according to all my research and attempts it seem like this isn't the case:
If I override __getitem__/__setitem__, then get/set don't work. How can I make them work? Surely I don't need to implement them individually?
Am I preventing pickling from working, and do I need to implement __setstate__ etc?
Do I need repr, update and __init__?
Should I just use mutablemapping (it seems one shouldn't use UserDict
or DictMixin)? If so, how? The docs aren't exactly enlightening.
Here is my first go at it, get() doesn't work and no doubt there are many other minor problems:
class arbitrary_dict(dict):
"""A dictionary that applies an arbitrary key-altering function
before accessing the keys."""
def __keytransform__(self, key):
return key
# Overridden methods. List from
# https://stackoverflow.com/questions/2390827/how-to-properly-subclass-dict
def __init__(self, *args, **kwargs):
self.update(*args, **kwargs)
# Note: I'm using dict directly, since super(dict, self) doesn't work.
# I'm not sure why, perhaps dict is not a new-style class.
def __getitem__(self, key):
return dict.__getitem__(self, self.__keytransform__(key))
def __setitem__(self, key, value):
return dict.__setitem__(self, self.__keytransform__(key), value)
def __delitem__(self, key):
return dict.__delitem__(self, self.__keytransform__(key))
def __contains__(self, key):
return dict.__contains__(self, self.__keytransform__(key))
class lcdict(arbitrary_dict):
def __keytransform__(self, key):
return str(key).lower()
You can write an object that behaves like a dict quite easily with ABCs (Abstract Base Classes) from the collections.abc module. It even tells you if you missed a method, so below is the minimal version that shuts the ABC up.
from collections.abc import MutableMapping

class TransformedDict(MutableMapping):
    """A dictionary that applies an arbitrary key-altering
    function before accessing the keys"""
    def __init__(self, *args, **kwargs):
        self.store = dict()
        self.update(dict(*args, **kwargs))  # use the free update to set keys
    def __getitem__(self, key):
        return self.store[self._keytransform(key)]
    def __setitem__(self, key, value):
        self.store[self._keytransform(key)] = value
    def __delitem__(self, key):
        del self.store[self._keytransform(key)]
    def __iter__(self):
        return iter(self.store)
    def __len__(self):
        return len(self.store)
    def _keytransform(self, key):
        return key
You get a few free methods from the ABC:
class MyTransformedDict(TransformedDict):
    def _keytransform(self, key):
        return key.lower()

s = MyTransformedDict([('Test', 'test')])
assert s.get('TEST') is s['test']  # free get
assert 'TeSt' in s                 # free __contains__
# free setdefault, __eq__, and so on

import pickle
# works too since we just use a normal dict
assert pickle.loads(pickle.dumps(s)) == s
I wouldn't subclass dict (or other builtins) directly. It often makes no sense, because what you actually want to do is implement the interface of a dict. And that is exactly what ABCs are for.
How can I make as "perfect" a subclass of dict as possible?
The end goal is to have a simple dict in which the keys are lowercase.
If I override __getitem__/__setitem__, then get/set don't work. How
do I make them work? Surely I don't need to implement them
individually?
Am I preventing pickling from working, and do I need to implement
__setstate__ etc?
Do I need repr, update and __init__?
Should I just use mutablemapping (it seems one shouldn't use UserDict
or DictMixin)? If so, how? The docs aren't exactly enlightening.
The accepted answer would be my first approach, but since it has some issues,
and since no one has addressed the alternative, actually subclassing a dict, I'm going to do that here.
What's wrong with the accepted answer?
This seems like a rather simple request to me:
How can I make as "perfect" a subclass of dict as possible?
The end goal is to have a simple dict in which the keys are lowercase.
The accepted answer doesn't actually subclass dict, and a test for this fails:
>>> isinstance(MyTransformedDict([('Test', 'test')]), dict)
False
Ideally, any type-checking code would test for the interface we expect, or for an abstract base class, but if our data objects are being passed into functions that test for dict - and we can't "fix" those functions - this code will fail.
Other quibbles one might make:
The accepted answer is also missing the classmethod: fromkeys.
The accepted answer also has a redundant __dict__ - therefore taking up more space in memory:
>>> s.foo = 'bar'
>>> s.__dict__
{'foo': 'bar', 'store': {'test': 'test'}}
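To see the fromkeys quibble concretely (MutableMapping supplies no fromkeys, so on the accepted answer's class the attribute lookup simply fails):
>>> MyTransformedDict.fromkeys(['A', 'B'])
Traceback (most recent call last):
  ...
AttributeError: type object 'MyTransformedDict' has no attribute 'fromkeys'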
Actually subclassing dict
We can reuse the dict methods through inheritance. All we need to do is create an interface layer that ensures keys are passed into the dict in lowercase form if they are strings.
If I override __getitem__/__setitem__, then get/set don't work. How do I make them work? Surely I don't need to implement them individually?
Well, implementing them each individually is the downside to this approach and the upside to using MutableMapping (see the accepted answer), but it's really not that much more work.
First, let's factor out the difference between Python 2 and 3, create a singleton (_RaiseKeyError) to make sure we know if we actually get an argument to dict.pop, and create a function to ensure our string keys are lowercase:
from itertools import chain
try:  # Python 2
    str_base = basestring
    items = 'iteritems'
except NameError:  # Python 3
    str_base = str, bytes, bytearray
    items = 'items'
_RaiseKeyError = object()  # singleton for no-default behavior

def ensure_lower(maybe_str):
    """dict keys can be any hashable object - only call lower if str"""
    return maybe_str.lower() if isinstance(maybe_str, str_base) else maybe_str
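A quick sanity check of the helper:
>>> ensure_lower('ABC')
'abc'
>>> ensure_lower(2.0)
2.0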
Now we implement - I'm using super with the full arguments so that this code works for Python 2 and 3:
class LowerDict(dict):  # dicts take a mapping or iterable as their optional first argument
    __slots__ = ()  # no __dict__ - that would be redundant
    @staticmethod  # because this doesn't make sense as a global function.
    def _process_args(mapping=(), **kwargs):
        if hasattr(mapping, items):
            mapping = getattr(mapping, items)()
        return ((ensure_lower(k), v) for k, v in chain(mapping, getattr(kwargs, items)()))
    def __init__(self, mapping=(), **kwargs):
        super(LowerDict, self).__init__(self._process_args(mapping, **kwargs))
    def __getitem__(self, k):
        return super(LowerDict, self).__getitem__(ensure_lower(k))
    def __setitem__(self, k, v):
        return super(LowerDict, self).__setitem__(ensure_lower(k), v)
    def __delitem__(self, k):
        return super(LowerDict, self).__delitem__(ensure_lower(k))
    def get(self, k, default=None):
        return super(LowerDict, self).get(ensure_lower(k), default)
    def setdefault(self, k, default=None):
        return super(LowerDict, self).setdefault(ensure_lower(k), default)
    def pop(self, k, v=_RaiseKeyError):
        if v is _RaiseKeyError:
            return super(LowerDict, self).pop(ensure_lower(k))
        return super(LowerDict, self).pop(ensure_lower(k), v)
    def update(self, mapping=(), **kwargs):
        super(LowerDict, self).update(self._process_args(mapping, **kwargs))
    def __contains__(self, k):
        return super(LowerDict, self).__contains__(ensure_lower(k))
    def copy(self):  # don't delegate w/ super - dict.copy() -> dict :(
        return type(self)(self)
    @classmethod
    def fromkeys(cls, keys, v=None):
        return super(LowerDict, cls).fromkeys((ensure_lower(k) for k in keys), v)
    def __repr__(self):
        return '{0}({1})'.format(type(self).__name__, super(LowerDict, self).__repr__())
We use an almost boiler-plate approach for any method or special method that references a key, but otherwise, by inheritance, we get methods: len, clear, items, keys, popitem, and values for free. While this required some careful thought to get right, it is trivial to see that this works.
(Note that has_key was deprecated in Python 2 and removed in Python 3.)
Here's some usage:
>>> ld = LowerDict(dict(foo='bar'))
>>> ld['FOO']
'bar'
>>> ld['foo']
'bar'
>>> ld.pop('FoO')
'bar'
>>> ld.setdefault('Foo')
>>> ld
{'foo': None}
>>> ld.get('Bar')
>>> ld.setdefault('Bar')
>>> ld
{'bar': None, 'foo': None}
>>> ld.popitem()
('bar', None)
Am I preventing pickling from working, and do I need to implement
__setstate__ etc?
pickling
And the dict subclass pickles just fine:
>>> import pickle
>>> pickle.dumps(ld)
b'\x80\x03c__main__\nLowerDict\nq\x00)\x81q\x01X\x03\x00\x00\x00fooq\x02Ns.'
>>> pickle.loads(pickle.dumps(ld))
{'foo': None}
>>> type(pickle.loads(pickle.dumps(ld)))
<class '__main__.LowerDict'>
__repr__
Do I need repr, update and __init__?
We defined update and __init__, but you have a beautiful __repr__ by default:
>>> ld # without __repr__ defined for the class, we get this
{'foo': None}
However, it's good to write a __repr__ to improve the debuggability of your code. The ideal test is eval(repr(obj)) == obj. If it's easy to do for your code, I strongly recommend it:
>>> ld = LowerDict({})
>>> eval(repr(ld)) == ld
True
>>> ld = LowerDict(dict(a=1, b=2, c=3))
>>> eval(repr(ld)) == ld
True
You see, it's exactly what we need to recreate an equivalent object - this is something that might show up in our logs or in backtraces:
>>> ld
LowerDict({'a': 1, 'c': 3, 'b': 2})
Conclusion
Should I just use mutablemapping (it seems one shouldn't use UserDict
or DictMixin)? If so, how? The docs aren't exactly enlightening.
Yeah, these are a few more lines of code, but they're intended to be comprehensive. My first inclination would be to use the accepted answer,
and if there were issues with it, I'd then look at my answer - as it's a little more complicated, and there's no ABC to help me get my interface right.
Premature optimization is going for greater complexity in search of performance.
MutableMapping is simpler - so it gets an immediate edge, all else being equal. Nevertheless, to lay out all the differences, let's compare and contrast.
I should add that there was a push to put a similar dictionary into the collections module, but it was rejected. You should probably just do this instead:
my_dict[transform(key)]
It should be far easier to debug.
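That is, keep a plain dict and normalize keys explicitly at every call site - a minimal sketch, where transform is a hypothetical helper name, not anything from the standard library:
def transform(key):
    # hypothetical helper: lowercase string keys, pass everything else through
    return key.lower() if isinstance(key, str) else key

my_dict = {}
my_dict[transform('FOO')] = 1
assert my_dict[transform('foo')] == 1  # the normalization is visible and greppable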
Compare and contrast
There are 6 interface functions implemented with the MutableMapping (which is missing fromkeys) and 11 with the dict subclass. I don't need to implement __iter__ or __len__, but instead I have to implement get, setdefault, pop, update, copy, __contains__, and fromkeys - but these are fairly trivial, since I can use inheritance for most of those implementations.
The MutableMapping implements some things in Python that dict implements in C - so I would expect a dict subclass to be more performant in some cases.
We get a free __eq__ in both approaches - both of which assume equality only if another dict is all lowercase - but again, I think the dict subclass will compare more quickly.
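For example, with the LowerDict subclass above (the MutableMapping version behaves the same way), equality is checked against the stored, already-lowercased keys:
>>> LowerDict(dict(A=1)) == {'a': 1}
True
>>> LowerDict(dict(A=1)) == {'A': 1}
False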
Summary:
- subclassing MutableMapping is simpler, with fewer opportunities for bugs, but it is slower, takes more memory (see the redundant __dict__), and fails isinstance(x, dict)
- subclassing dict is faster, uses less memory, and passes isinstance(x, dict), but it is more complex to implement
Which is more perfect? That depends on your definition of perfect.
After trying out both of the top two suggestions, I've settled on a shady-looking middle route for Python 2.7. Maybe 3 is saner, but for me:
class MyDict(MutableMapping):
    # ... the few __methods__ that MutableMapping requires
    # and then this monstrosity
    @property
    def __class__(self):
        return dict
which I really hate, but which seems to fit my needs. Those are:
- can override **my_dict
  - if you inherit from dict, this bypasses your code. Try it out.
  - this makes #2 (subclassing dict) unacceptable for me at all times, as this is quite common in python code
- masquerades as isinstance(my_dict, dict)
  - this rules out MutableMapping alone, so #1 is not enough
  - I heartily recommend #1 if you don't need this; it's simple and predictable
- fully controllable behavior
  - so I cannot inherit from dict
If you need to tell yourself apart from others, personally I use something like this (though I'd recommend better names):
def __am_i_me(self):
    return True

@classmethod
def __is_it_me(cls, other):
    try:
        return other.__am_i_me()
    except Exception:
        return False
As long as you only need to recognize yourself internally, this way it's harder to accidentally call __am_i_me, thanks to Python's name-mangling (from anything calling outside this class, the name is rewritten to _MyDict__am_i_me). It's slightly more private than _methods, both in practice and culturally.
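For example (a quick sketch, assuming these methods live on a concrete class named MyDict):
>>> d = MyDict()
>>> MyDict._MyDict__is_it_me(d)   # the mangled spelling an outsider would need
True
>>> MyDict._MyDict__is_it_me({})  # a plain dict has no __am_i_me, so False
False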
So far I have no complaints, aside from the seriously-shady-looking __class__ override. I'd be thrilled to hear of any problems that others encounter with this, though - I don't fully understand the consequences. In the meantime, it has let me migrate a lot of middling-quality code in lots of locations without needing any changes.
As evidence: https://repl.it/repls/TraumaticToughCockatoo
Basically: copy the current #2 option, add print 'method_name' lines to every method, and then try this and watch the output:
d = LowerDict() # prints "init", or whatever your print statement said
print '------'
splatted = dict(**d) # note that there are no prints here
You'll see similar behavior for other scenarios. Say your fake-dict is a wrapper around some other datatype, so there's no reasonable way to store the data in the backing-dict; **your_dict will be empty, regardless of what every other method does.
This works correctly for MutableMapping, but as soon as you inherit from dict it becomes uncontrollable.
Edit: as an update, this has been running without a single issue for almost two years now, on several hundred thousand (eh, might be a couple million) lines of complicated, legacy-ridden python. So I'm pretty happy with it :)
Edit 2: apparently I mis-copied this or something long ago. @classmethod __class__ does not work for isinstance checks - @property __class__ does: https://repl.it/repls/UnitedScientificSequence
My requirements were a bit stricter:
- I had to retain case info (the strings are paths to files displayed to the user, but it's a Windows app, so internally all operations must be case insensitive)
- I needed keys to be as small as possible (it did make a difference in memory performance; it chopped 110 MB off of 370). This meant that caching a lowercase version of the keys was not an option.
- I needed creation of the data structures to be as fast as possible (again, a difference in performance - speed, this time). So I had to go with a builtin.
My initial thought was to substitute our clunky Path class with a case insensitive unicode subclass - but:
- it proved hard to get that right - see: A case insensitive string class in python
- it turns out that explicit dict key handling makes code verbose and messy - and error prone (structures are passed hither and thither, and it is not clear whether they have CIstr instances as keys/elements; it's easy to forget, plus some_dict[CIstr(path)] is ugly)
So I finally had to write that case insensitive dict. Thanks to the code by @AaronHall above, that was made 10 times easier.
from itertools import chain

class CIstr(unicode):
    """See https://stackoverflow.com/a/43122305/281545, especially for inlines"""
    __slots__ = ()  # does make a difference in memory performance
    #--Hash/Compare
    def __hash__(self):
        return hash(self.lower())
    def __eq__(self, other):
        if isinstance(other, CIstr):
            return self.lower() == other.lower()
        return NotImplemented
    def __ne__(self, other):
        if isinstance(other, CIstr):
            return self.lower() != other.lower()
        return NotImplemented
    def __lt__(self, other):
        if isinstance(other, CIstr):
            return self.lower() < other.lower()
        return NotImplemented
    def __ge__(self, other):
        if isinstance(other, CIstr):
            return self.lower() >= other.lower()
        return NotImplemented
    def __gt__(self, other):
        if isinstance(other, CIstr):
            return self.lower() > other.lower()
        return NotImplemented
    def __le__(self, other):
        if isinstance(other, CIstr):
            return self.lower() <= other.lower()
        return NotImplemented
    #--repr
    def __repr__(self):
        return '{0}({1})'.format(type(self).__name__,
                                 super(CIstr, self).__repr__())

def _ci_str(maybe_str):
    """dict keys can be any hashable object - only wrap in CIstr if str"""
    return CIstr(maybe_str) if isinstance(maybe_str, basestring) else maybe_str

class LowerDict(dict):
    """Dictionary that transforms its keys to CIstr instances.
    Adapted from: https://stackoverflow.com/a/39375731/281545
    """
    __slots__ = ()  # no __dict__ - that would be redundant
    @staticmethod  # because this doesn't make sense as a global function.
    def _process_args(mapping=(), **kwargs):
        if hasattr(mapping, 'iteritems'):
            mapping = getattr(mapping, 'iteritems')()
        return ((_ci_str(k), v) for k, v in
                chain(mapping, getattr(kwargs, 'iteritems')()))
    def __init__(self, mapping=(), **kwargs):
        # dicts take a mapping or iterable as their optional first argument
        super(LowerDict, self).__init__(self._process_args(mapping, **kwargs))
    def __getitem__(self, k):
        return super(LowerDict, self).__getitem__(_ci_str(k))
    def __setitem__(self, k, v):
        return super(LowerDict, self).__setitem__(_ci_str(k), v)
    def __delitem__(self, k):
        return super(LowerDict, self).__delitem__(_ci_str(k))
    def copy(self):  # don't delegate w/ super - dict.copy() -> dict :(
        return type(self)(self)
    def get(self, k, default=None):
        return super(LowerDict, self).get(_ci_str(k), default)
    def setdefault(self, k, default=None):
        return super(LowerDict, self).setdefault(_ci_str(k), default)
    __no_default = object()
    def pop(self, k, v=__no_default):
        if v is LowerDict.__no_default:
            # super will raise KeyError if no default and key does not exist
            return super(LowerDict, self).pop(_ci_str(k))
        return super(LowerDict, self).pop(_ci_str(k), v)
    def update(self, mapping=(), **kwargs):
        super(LowerDict, self).update(self._process_args(mapping, **kwargs))
    def __contains__(self, k):
        return super(LowerDict, self).__contains__(_ci_str(k))
    @classmethod
    def fromkeys(cls, keys, v=None):
        return super(LowerDict, cls).fromkeys((_ci_str(k) for k in keys), v)
    def __repr__(self):
        return '{0}({1})'.format(type(self).__name__,
                                 super(LowerDict, self).__repr__())
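A quick usage sketch (Python 2, since the code above relies on unicode/basestring/iteritems); lookups ignore case, but the stored keys keep their original case:
>>> d = LowerDict({u'Foo/Bar.txt': 1})
>>> d[u'FOO/bar.TXT']
1
>>> u'foo/BAR.txt' in d
True
>>> list(d)  # original case is preserved in the stored CIstr keys
[CIstr(u'Foo/Bar.txt')]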
Implicit vs explicit is still a problem, but once the dust settles, I think renaming attributes/variables to start with ci (plus a big fat doc comment explaining that ci stands for case insensitive) is a perfect solution - because readers of the code are then fully aware that we are dealing with case insensitive underlying data structures.
This will hopefully fix some hard to reproduce bugs, which I suspect boil down to case sensitivity.
Comments/corrections welcome :)
All you have to do is
class BatchCollection(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
OR
class BatchCollection(dict):
    def __init__(self, inpt=()):  # avoid a mutable default argument
        super(BatchCollection, self).__init__(inpt)
A sample usage for my personal use
### EXAMPLE
import collections.abc

class BatchCollection(dict):
    def __init__(self, inpt=()):  # avoid a mutable default argument
        dict.__init__(self, inpt)
    def __setitem__(self, key, item):
        if (isinstance(key, tuple) and len(key) == 2
                and isinstance(item, collections.abc.Iterable)):
            # self.__dict__[key] = item
            super(BatchCollection, self).__setitem__(key, item)
        else:
            raise Exception(
                "Valid key should be a tuple (database_name, table_name) "
                "and value should be iterable")
Note: tested only in Python 3.
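A quick usage sketch (the database and table names are made up):
>>> bc = BatchCollection()
>>> bc[('clients_db', 'orders')] = ['row1', 'row2']
>>> bc[('clients_db', 'orders')]
['row1', 'row2']
>>> bc['clients_db'] = ['row1']
Traceback (most recent call last):
  ...
Exception: Valid key should be a tuple (database_name, table_name) and value should be iterable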
collections.UserDict is often the simplest option when you need a custom dict.
As shown in the other answers, it's very tricky to override dict correctly, while UserDict makes it easy. To answer the original question, here's how to get a dict with lowercase keys:
import collections

class LowercaseDict(collections.UserDict):
    def __getitem__(self, key):
        return super().__getitem__(key.lower())
    def __setitem__(self, key, value):
        return super().__setitem__(key.lower(), value)
    def __delitem__(self, key):
        return super().__delitem__(key.lower())
    # Unfortunately, __contains__ is currently required due to
    # https://github.com/python/cpython/issues/91784
    def __contains__(self, key):
        return key.lower() in self.data

d = LowercaseDict(MY_KEY=0)   # keys normalized in __init__
d.update({'OTHER_KEY': 1})    # keys normalized in update
d['Hello'] = d['other_KEY']
assert 'HELLO' in d
print(d)  # all keys normalized: {'my_key': 0, 'other_key': 1, 'hello': 1}
And unlike with collections.abc.MutableMapping, you don't need __iter__, __len__, __init__, ...
Subclassing UserDict is much easier.
However, UserDict is a MutableMapping, not a dict, so:
assert not isinstance(collections.UserDict(), dict)
assert isinstance(collections.UserDict(), collections.abc.MutableMapping)
