Key-ordered dict in Python

I am looking for a solid implementation of an ordered associative array, that is, an ordered dictionary. I want the ordering in terms of keys, not of insertion order.
More precisely, I am looking for a space-efficient implementation of an int-to-float (or string-to-float for another use case) mapping structure for which:
Ordered iteration is O(n)
Random access is O(1)
The best I came up with was gluing a dict and a list of keys, keeping the last one ordered with bisect and insert.
Any better ideas?

"Random access O(1)" is an extremely exacting requirement which basically imposes an underlying hash table -- and I hope you do mean random READS only, because I think it can be mathematically proven than it's impossible in the general case to have O(1) writes as well as O(N) ordered iteration.
I don't think you will find a pre-packaged container suited to your needs because they are so extreme -- O(log N) access would of course make all the difference in the world. To get the big-O behavior you want for reads and iterations you'll need to glue two data structures, essentially a dict and a heap (or sorted list or tree), and keep them in sync. Although you don't specify, I think you'll only get amortized behavior of the kind you want - unless you're truly willing to pay any performance hits for inserts and deletes, which is the literal implication of the specs you express but does seem a pretty unlikely real-life requirement.
For O(1) read and amortized O(N) ordered iteration, just keep a list of all keys on the side of a dict. E.g.:
class Crazy(object):
    def __init__(self):
        self.d = {}
        self.L = []
        self.sorted = True
    def __getitem__(self, k):
        return self.d[k]
    def __setitem__(self, k, v):
        if k not in self.d:
            self.L.append(k)
            self.sorted = False
        self.d[k] = v
    def __delitem__(self, k):
        del self.d[k]
        self.L.remove(k)
    def __iter__(self):
        if not self.sorted:
            self.L.sort()
            self.sorted = True
        return iter(self.L)
If you don't like the "amortized O(N) order" you can remove self.sorted and just repeat self.L.sort() in __setitem__ itself. That makes writes O(N log N), of course (whereas the approach in the code above keeps writes at O(1)). Either approach is viable, and it's hard to think of one as intrinsically superior to the other. If you tend to do a bunch of writes and then a bunch of iterations, the approach in the code above is best; if it's typically one write, one iteration, another write, another iteration, then it's just about a wash.
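For concreteness, here is a minimal sketch of that eager-sorting variant, assuming the Crazy class above (the subclass name is mine):
class EagerCrazy(Crazy):
    def __setitem__(self, k, v):
        if k not in self.d:
            self.L.append(k)
            self.L.sort()  # the list is already mostly sorted, so this re-sort is cheap
        self.d[k] = v
With this variant, __iter__ no longer needs the self.sorted flag at all.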
BTW, this takes shameless advantage of the unusual (and wonderful;-) performance characteristics of Python's sort (aka "timsort"): among them, sorting a list that's mostly sorted but with a few extra items tacked on at the end is basically O(N) (if the tacked-on items are few enough compared to the sorted prefix part). I hear Java's gaining this sort soon, as Josh Bloch was so impressed by a tech talk on Python's sort that he started coding it for the JVM on his laptop then and there. Most systems (including, I believe, Jython as of today and IronPython too) basically have sorting as an O(N log N) operation, not taking advantage of "mostly ordered" inputs; "natural mergesort", which Tim Peters fashioned into Python's timsort of today, is a wonder in this respect.

The sortedcontainers module provides a SortedDict type that meets your requirements. It basically glues a SortedList and a dict together. The dict provides O(1) lookup and the SortedList provides O(N) iteration (it's extremely fast). The whole module is pure Python and has benchmark graphs to back up the performance claims (fast-as-C implementations). SortedDict is also fully tested with 100% coverage and hours of stress testing. It's compatible with Python 2.6 through 3.4.
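A short usage sketch, assuming the third-party sortedcontainers package is installed (pip install sortedcontainers):
from sortedcontainers import SortedDict

sd = SortedDict()
sd[3] = 3.3
sd[1] = 1.0
sd[2] = 2.2
print(sd[2])       # 2.2 -- dict-backed lookup
print(list(sd))    # [1, 2, 3] -- iteration is in key order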

Here is my own implementation:
import bisect

class KeyOrderedDict(object):
    __slots__ = ['d', 'l']
    def __init__(self, *args, **kwargs):
        self.l = sorted(kwargs)
        self.d = kwargs
    def __setitem__(self, k, v):
        if k not in self.d:
            idx = bisect.bisect(self.l, k)
            self.l.insert(idx, k)
        self.d[k] = v
    def __getitem__(self, k):
        return self.d[k]
    def __delitem__(self, k):
        idx = bisect.bisect_left(self.l, k)
        del self.l[idx]
        del self.d[k]
    def __iter__(self):
        return iter(self.l)
    def __contains__(self, k):
        return k in self.d
The use of bisect keeps self.l ordered, and insertion is O(n) (because of the insert, but not a killer in my case, because I append far more often than I truly insert, so the usual case is amortized O(1)). Access is O(1), and iteration O(n). But maybe someone has invented (in C) something with a cleverer structure?

An ordered tree is usually better for these cases, but random access is going to be O(log n). You should also take insertion and removal costs into account...

You could build a dict that allows traversal by storing a pair (value, next_key) in each position.
Random access:
my_dict[k][0] # for a key k
Traversal:
k = start_key  # stored somewhere
while k is not None:  # next_key is None at the end of the list
    v, k = my_dict[k]
    yield v
Keep a pointer to start and end and you'll have efficient update for those cases where you just need to add onto the end of the list.
Inserting in the middle is obviously O(n). Possibly you could build a skip list on top of it if you need more speed.
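A rough sketch of that idea, restricted to appending keys at the end of the traversal order (class and method names are mine; I store [value, next_key] lists rather than tuples so the next_key link of the last node can be patched in O(1)):
class LinkedDict(object):
    def __init__(self):
        self.d = {}           # key -> [value, next_key]
        self.start = None
        self.end = None

    def append(self, key, value):
        # O(1); for key-ordered traversal, keys must be appended in increasing order
        self.d[key] = [value, None]
        if self.start is None:
            self.start = key
        else:
            self.d[self.end][1] = key
        self.end = key

    def __getitem__(self, key):
        return self.d[key][0]   # O(1) random access

    def itervalues(self):
        k = self.start
        while k is not None:    # next_key is None at the end of the list
            v, k = self.d[k]
            yield v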

I'm not sure which Python version you are working with, but in case you'd like to experiment, Python 3.1 includes an official implementation of ordered dictionaries:
http://www.python.org/dev/peps/pep-0372/
http://docs.python.org/3.1/whatsnew/3.1.html#pep-372-ordered-dictionaries

The ordereddict package ( http://anthon.home.xs4all.nl/Python/ordereddict/ ) that I implemented back in 2007 includes sorteddict. sorteddict is a KSO (Key Sorted Order) dictionary. It is implemented in C, is very space efficient, and is several times faster than a pure Python implementation. The downside is that it only works with CPython.
>>> from _ordereddict import sorteddict
>>> x = sorteddict()
>>> x[1] = 1.0
>>> x[3] = 3.3
>>> x[2] = 2.2
>>> print x
sorteddict([(1, 1.0), (2, 2.2), (3, 3.3)])
>>> for i in x:
... print i, x[i]
...
1 1.0
2 2.2
3 3.3
>>>
Sorry for the late reply, maybe this answer can help others find that library.

Here's a paste: I had a need for something similar. Note however that this specific implementation is immutable; there are no inserts once the instance is created. The exact performance doesn't quite match what you're asking for, however: lookup is O(log n) and full scan is O(n). This works using the bisect module on a tuple of key/value (tuple) pairs. Even if you can't use this precisely, you might have some success adapting it to your needs.
import bisect

class dictuple(object):
    """
    >>> h0 = dictuple()
    >>> h1 = dictuple({"apples": 1, "bananas":2})
    >>> h2 = dictuple({"bananas": 3, "mangoes": 5})
    >>> h1+h2
    ('apples':1, 'bananas':3, 'mangoes':5)
    >>> h1 > h2
    False
    >>> h1 > 6
    False
    >>> 'apples' in h1
    True
    >>> 'apples' in h2
    False
    >>> d1 = {}
    >>> d1[h1] = "salad"
    >>> d1[h1]
    'salad'
    >>> d1[h2]
    Traceback (most recent call last):
    ...
    KeyError: ('bananas':3, 'mangoes':5)
    """
    def __new__(cls, *args, **kwargs):
        initial = {}
        args = [] if args is None else args
        for arg in args:
            initial.update(arg)
        initial.update(kwargs)
        instance = object.__new__(cls)
        instance.__items = tuple(sorted(initial.items(), key=lambda i: i[0]))
        return instance

    def __init__(self, *args, **kwargs):
        pass

    def __find(self, key):
        return bisect.bisect(self.__items, (key,))

    def __getitem__(self, key):
        ind = self.__find(key)
        if ind < len(self.__items) and self.__items[ind][0] == key:
            return self.__items[ind][1]
        raise KeyError(key)

    def __repr__(self):
        return "({0})".format(", ".join(
            "{0}:{1}".format(repr(item[0]), repr(item[1]))
            for item in self.__items))

    def __contains__(self, key):
        ind = self.__find(key)
        return ind < len(self.__items) and self.__items[ind][0] == key

    def __cmp__(self, other):
        return cmp(self.__class__.__name__, other.__class__.__name__
                   ) or cmp(self.__items, other.__items)

    def __eq__(self, other):
        return self.__items == other.__items

    def __format__(self, key):
        pass

    #def __ge__(self, key):
    #    pass
    #def __getattribute__(self, key):
    #    pass
    #def __gt__(self, key):
    #    pass

    __seed = hash("dictuple")

    def __hash__(self):
        return dictuple.__seed ^ hash(self.__items)

    def __iter__(self):
        return self.iterkeys()

    def __len__(self):
        return len(self.__items)

    #def __reduce__(self, key):
    #    pass
    #def __reduce_ex__(self, key):
    #    pass
    #def __sizeof__(self, key):
    #    pass

    @classmethod
    def fromkeys(cls, key, v=None):
        return cls(dict.fromkeys(key, v))

    def get(self, key, default):
        ind = self.__find(key)
        return self.__items[ind][1] if ind < len(self.__items) and self.__items[ind][0] == key else default

    def has_key(self, key):
        ind = self.__find(key)
        return ind < len(self.__items) and self.__items[ind][0] == key

    def items(self):
        return list(self.iteritems())

    def iteritems(self):
        return iter(self.__items)

    def iterkeys(self):
        return (i[0] for i in self.__items)

    def itervalues(self):
        return (i[1] for i in self.__items)

    def keys(self):
        return list(self.iterkeys())

    def values(self):
        return list(self.itervalues())

    def __add__(self, other):
        _sum = dict(self.__items)
        _sum.update(other.__items)
        return self.__class__(_sum)

if __name__ == "__main__":
    import doctest
    doctest.testmod()

For the "string to float" problem you can use a trie - it provides O(1) access time and O(n) sorted iteration. By "sorted" I mean "sorted alphabetically by key", which seems to be what the question implies. (A minimal pure-Python sketch of the idea follows the package list below.)
Some implementations (each with its own strong and weak points):
https://github.com/biopython/biopython has a Bio.trie module with a full-featured Trie; other Trie packages are more memory-efficient;
https://github.com/kmike/datrie - random insertions could be slow, keys alphabet must be known in advance;
https://github.com/kmike/hat-trie - all operations are fast, but many dict methods are not implemented; underlying C library supports sorted iteration, but it is not implemented in a wrapper;
https://github.com/kmike/marisa-trie - very memory efficient, but doesn't support insertions; iteration is not sorted by default but can be made sorted (there is an example in docs);
https://github.com/kmike/DAWG - can be seen as a minimized Trie; very fast and memory efficient, but doesn't support insertions; has size limits (several GB of data)
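To make the trie idea concrete, here is a minimal pure-Python sketch (illustrative only; the packages above are far more memory-efficient): a string-to-float mapping with lookup cost proportional to key length and alphabetically sorted iteration. All class names here are mine.
class _TrieNode(object):
    __slots__ = ('children', 'value', 'has_value')
    def __init__(self):
        self.children = {}        # char -> _TrieNode
        self.value = None
        self.has_value = False

class Trie(object):
    def __init__(self):
        self.root = _TrieNode()

    def __setitem__(self, key, value):
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, _TrieNode())
        node.value, node.has_value = value, True

    def __getitem__(self, key):
        node = self.root
        for ch in key:
            node = node.children[ch]
        if not node.has_value:
            raise KeyError(key)
        return node.value

    def items(self):
        # depth-first walk, visiting children in character order, so keys come out sorted
        def walk(node, prefix):
            if node.has_value:
                yield prefix, node.value
            for ch in sorted(node.children):
                for pair in walk(node.children[ch], prefix + ch):
                    yield pair
        return walk(self.root, '')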

Here's one option that has not been mentioned in other answers, I think:
Use a binary search tree (Treap/AVL/RB) to keep the mapping.
Also use a hashmap (aka dictionary) to keep the same mapping (again).
This will provide O(n) ordered traversal (via the tree), O(1) random access (via the hashmap) and O(log n) insertion/deletion (because you need to update both the tree and the hash).
The drawback is the need to keep all the data twice, however the alternatives which suggest keeping a list of keys alongside a hashmap are not much better in this sense.
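A rough sketch of this layout, using the third-party sortedcontainers.SortedList to stand in for the balanced tree (class and method names are mine): the SortedList gives roughly O(log n) insert/remove plus ordered iteration, while the plain dict gives O(1) random access.
from sortedcontainers import SortedList

class TreeBackedDict(object):
    def __init__(self):
        self._map = {}
        self._keys = SortedList()
    def __setitem__(self, key, value):
        if key not in self._map:
            self._keys.add(key)       # ~O(log n)
        self._map[key] = value        # O(1)
    def __getitem__(self, key):
        return self._map[key]         # O(1) random access
    def __delitem__(self, key):
        del self._map[key]
        self._keys.remove(key)        # ~O(log n)
    def __iter__(self):
        return iter(self._keys)       # O(n) ordered traversal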

Related

Python 3.x: How to compare two lists containing dictionaries where order doesn't matter

I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal.
The order of the list is not uniform. Typically, I would turn the list into a set, but it is not possible since there are values that are also dictionaries.
a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8},]}
test_a = [a, b, c]
test_b = [b, c, a]
print(test_a == test_b) # False
print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict'
Is there a good way to approach this to show that test_a has the same contents as test_b?
You can use a simple loop to check that each element of one list is also in the other:
def areEqual(a, b):
    if len(a) != len(b):
        return False
    for d in a:
        if d not in b:
            return False
    return True
I suggest writing a function that turns any Python object into something orderable, with its contents, if it has any, in sorted order. If we call it canonicalize, we can compare nested objects with:
canonicalize(test_a) == canonicalize(test_b)
Here's my attempt at writing a canonicalize function:
import collections.abc

def canonicalize(x):
    if isinstance(x, dict):
        x = sorted((canonicalize(k), canonicalize(v)) for k, v in x.items())
    elif isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
        x = sorted(map(canonicalize, x))
    else:
        try:
            bool(x < x)  # test for unorderable types like complex
        except TypeError:
            x = repr(x)  # replace with something orderable
    return x
This should work for most Python objects. It won't work for lists of heterogeneous items, containers that contain themselves (which will cause the function to hit the recursion limit), nor float('nan') (which has bizarre comparison behavior, and so may mess up the sorting of any container it's in).
It's possible that this code will do the wrong thing for non-iterable, unorderable objects, if they don't have a repr function that describes all the data that makes up their value (e.g. what is tested by ==). I picked repr as it will work on any kind of object and might get it right (it works for complex, for example). It should also work as desired for classes that have a repr that looks like a constructor call. For classes that have inherited object.__repr__ and so have repr output like <Foo object at 0xXXXXXXXX> it at least won't crash, though the objects will be compared by identity rather than value. I don't think there's any truly universal solution, and you can add some special cases for classes you expect to find in your data if they don't work with repr.
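Assuming the canonicalize function above is in scope, a quick check against the data from the question:
a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8}]}
test_a = [a, b, c]
test_b = [b, c, a]
print(canonicalize(test_a) == canonicalize(test_b))   # True
print(canonicalize(test_a) == canonicalize([a, b]))   # False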
If the elements in both lists are shallow, the idea of sorting them and then comparing with equality can work. The problem with @Alex's solution is that it only uses "id" - but if, instead of id, one uses a function that will sort dictionaries properly, things should just work:
def sortkey(element):
    if isinstance(element, dict):
        element = sorted(element.items())
    return repr(element)

sorted(test_a, key=sortkey) == sorted(test_b, key=sortkey)
(I wrap the key in repr because it casts all elements to strings before comparison, which avoids a TypeError if different elements are of unorderable types - which would almost certainly happen if you are using Python 3.x.)
Just to be clear, if your dictionaries and lists have nested dictionaries themselves, you should use the answer by @m_callens. If your inner lists are also unordered, you can fix this to work by just sorting them inside the key function as well.
In this case they are the same dicts so you can compare ids (docs). Note that if you introduced a new dict whose values were identical it would still be treated differently. I.e. d = {'color': 'red'} would be treated as not equal to a.
sorted(map(id, test_a)) == sorted(map(id, test_b))
As @jsbueno points out, you can do this with the kwarg key.
sorted(test_a, key=id) == sorted(test_b, key=id)
An elegant and relatively fast solution:
class QuasiUnorderedList(list):
    def __eq__(self, other):
        """This method isn't as inefficient as you think! It runs in
        O(1 + 2 + 3 + ... + n) time, possibly better than recursively
        freezing/checking all the elements."""
        for item in self:
            for otheritem in other:
                if otheritem == item:
                    break
            else:
                # no break was reached, item not found.
                return False
        return True
This runs in O(1 + 2 + 3 + ... + n) flat. While slow for dictionaries of low depth, this is faster for dictionaries of high depth.
Here's a considerably longer snippet which is faster for dictionaries where depth is low and length is high.
import collections

class FrozenDict(collections.Mapping, collections.Hashable):  # collections.Hashable = portability
    """Adapted from http://stackoverflow.com/a/2704866/1459669"""
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
        self._hash = None

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

    def __getitem__(self, key):
        return self._d[key]

    def __hash__(self):
        # It would have been simpler and maybe more obvious to
        # use hash(tuple(sorted(self._d.iteritems()))) from this discussion
        # so far, but this solution is O(n). I don't know what kind of
        # n we are going to run into, but sometimes it's hard to resist the
        # urge to optimize when it will gain improved algorithmic performance.
        # Now thread safe by CrazyPython
        if self._hash is None:
            _hash = 0
            for pair in self.iteritems():
                _hash ^= hash(pair)
            self._hash = _hash
        return self._hash


def freeze(obj):
    if type(obj) in (str, int, ...):  # other immutable atoms you store in your data structure
        return obj
    elif issubclass(type(obj), list):  # ugly but needed
        return set(freeze(item) for item in obj)
    elif issubclass(type(obj), dict):  # for defaultdict, etc.
        return FrozenDict({key: freeze(value) for key, value in obj.items()})
    else:
        raise NotImplementedError("freeze() doesn't know how to freeze " + type(obj).__name__ + " objects!")


class FreezableList(list, collections.Hashable):
    _stored_freeze = None
    _hashed_self = None

    def __eq__(self, other):
        if self._stored_freeze and (self._hashed_self == self):
            frozen = self._stored_freeze
        else:
            frozen = freeze(self)
        if frozen is not self._stored_freeze:
            self._stored_freeze = frozen
        return frozen == freeze(other)

    def __hash__(self):
        if self._stored_freeze and (self._hashed_self == self):
            frozen = self._stored_freeze
        else:
            frozen = freeze(self)
        if frozen is not self._stored_freeze:
            self._stored_freeze = frozen
        return hash(frozen)


class UncachedFreezableList(list, collections.Hashable):
    def __eq__(self, other):
        """No-caching version of __eq__. May be faster.
        Don't forget to get rid of the declarations at the top of the class!
        Considerably more elegant."""
        return freeze(self) == freeze(other)

    def __hash__(self):
        """No-caching version of __hash__. See the notes in the docstring of __eq__."""
        return hash(freeze(self))
Test all three (QuasiUnorderedList, FreezableList, and UncachedFreezableList) and see which one is faster in your situation. I'll betcha it's faster than the other solutions.

Efficient list manipulation in python

I have a large list and regularly need to find an item satisfying a rather complex condition (not equality), i.e. I am forced to check every item in the list until I find one. The conditions change, but some items match more often than others. So I would like to bring the matching item to the front of the list each time I find one, so frequently matching items are found more quickly.
Is there an efficient, pythonic way to do this?
Sequences ([]) are backed by an array, so removing an item somewhere in the middle and prepending it to the array means moving every previous item. That's in O(n) time, not good.
In C you could build a linked list and move the item on your own when found. In Python there is a deque, but afaik you cannot reference the node objects nor have access to .next pointers.
And a self-made linked list is very slow in Python. (In fact it's slower than ordinary linear search without moving any item.)
Sadly, a dict or set finds items based on value equality and thus doesn't fit my problem.
As an illustration, here's the condition:
u, v, w = n.value  # list item
if v in g[u] and w in g[v] and u not in g[w]:
    ...
Consider instead a Pythonic approach. As Ed Post once put it, "The determined Real Programmer can write FORTRAN programs in any language" -- and this generalizes... you're trying to write C in Python and it isn't working well for you:-)
Rather, think of putting an auxiliary dict cache next to the list -- caching the indices where items are found (needs to be invalidated only on "deep" changes to the list's structure). Much simpler and faster...
Probably best done by having list and dict in a small class:
class Seeker(object):
    def __init__(self, *a, **k):
        self.l = list(*a, **k)
        self.d = {}

    def find(self, value):
        where = self.d.get(value)
        if where is None:
            self.d[value] = where = self.l.index(value)  # lists have index(), not find()
        return where

    def __setitem__(self, index, value):
        if value in self.d: del self.d[value]
        self.l[index] = value

    # and so on for other mutators that invalidate self.d; then,
    def __getattr__(self, name):
        # delegate everything else to the list
        return getattr(self.l, name)
You need only define the mutators you actually need to use -- e.g, if you won't do insert, sort, __delitem__, &c, no need to define those, you can just delegate them to the list.
Added: in Python 3.2 or better, functools.lru_cache can actually do most of the work for you -- use it to decorate find and you'll get a better implementation of caching, with the ability to limit cache size if you so desire. To clear the cache, you'll need to call self.find.cache_clear() at the appropriate spots (where I above use self.d = {}) -- unfortunately, that crucial functionality is not (yet!-) documented (the volunteers updating the docs are not the same ones updating the code...!-)... but, trust me, it's not going to disappear on you:-).
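A minimal sketch of that lru_cache variant (Python 3.2+; the class name and unbounded maxsize here are mine, and note that because the cache lives on the class-level function it is shared across instances and keeps references to them):
import functools

class CachedSeeker(object):
    def __init__(self, *a, **k):
        self.l = list(*a, **k)

    @functools.lru_cache(maxsize=None)
    def find(self, value):
        return self.l.index(value)

    def __setitem__(self, index, value):
        self.l[index] = value
        self.find.cache_clear()   # invalidate cached indices on any mutation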
Added: the OP edited the Q to clarify that he's not after "value equality", but rather some more complex set of conditions, exemplified by a predicate such as:
def good_for_g(g, n):
    # for some container `g` and item value `n`:
    u, v, w = n.value
    return v in g[u] and w in g[v] and u not in g[w]
Presumably, then, the desire to bring "good" items towards the front is in turn predicated on their "goodness" being "sticky", i.e., g staying pretty much the same for a while. In this case, one can use the predicate itself as the feature extraction and checking function, and it forms the key into the dictionary -- so for example:
class FancySeeker(object):
    def __init__(self, *a, **k):
        self.l = list(*a, **k)
        self.d = {}

    def _find_in_list(self, predicate):
        for i, n in enumerate(self.l):
            if predicate(n):
                return i
        return -1

    def find(self, predicate):
        where = self.d.get(predicate)
        if where is None:
            where = self._find_in_list(predicate)
            self.d[predicate] = where
        return where
and so forth.
So the remaining difficulty is to put predicate in a form suitable for effective indexing into a dict. If predicate is just a function, no problem. But if predicate is a function with parameters, as formed e.g. by functools.partial or as a bound method of some instance, that requires a bit of further processing/wrapping to make the indexing work.
Two calls to functools.partial with the same bound argument(s) and function, for example, do not return equal objects -- one has, rather, to inspect the .args and .func of the returned objects to ensure, so to speak, a "singleton" is returned for any given (func, args) pair.
Moreover, if some of the bound arguments are mutable, one needs to use their id in lieu of their hash (or else the raw functools.partial object would not be hashable). It gets even hairier for bound methods, though they can similarly be wrapped into e.g. a hashable, "equality adjusted" Predicate class.
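A hedged sketch of such an "equality adjusted" wrapper (all names here are mine, not from the answer): it keys on the underlying function plus its bound arguments, falling back to id() for unhashable arguments, so two wrappers built from the same (func, args) pair index the same cache slot.
class Predicate(object):
    def __init__(self, func, *args):
        self.func = func
        self.args = args
        self._key = (func, tuple(self._freeze(a) for a in args))

    @staticmethod
    def _freeze(arg):
        try:
            hash(arg)
            return arg        # hashable bound argument: use it directly
        except TypeError:
            return id(arg)    # mutable bound argument: fall back to identity

    def __call__(self, item):
        return self.func(*(self.args + (item,)))

    def __hash__(self):
        return hash(self._key)

    def __eq__(self, other):
        return isinstance(other, Predicate) and self._key == other._key
With this, FancySeeker.find(Predicate(good_for_g, g)) and a later call built from the same g hit the same dictionary entry.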
Lastly, if these gyrations prove too cumbersome and you really want a fast implementation of a linked list instead, look at https://pypi.python.org/pypi/llist/0.4 -- it's a C-coded implementation of singly and doubly linked lists for Python (for each kind, it implements three types: the list itself, the list node, and the list's iterator).
You can do exactly what you want using deque.rotate.
from collections import deque
class Collection:
    "Linked List collection that moves searched for items to the front of the collection"
    def __init__(self, seq):
        self._deque = deque(seq)

    def __contains__(self, target):
        for i, item in enumerate(self._deque):
            if item == target:
                self._deque.rotate(i)
                self._deque.popleft()
                self._deque.rotate(-i+1)
                self._deque.appendleft(item)
                return True
        return False

    def __str__(self):
        return "Collection({})".format(str(self._deque))
c = Collection(range(10))
print(c)
print("5 in c:", 5 in c)
print(c)
Gives the following output:
Collection(deque([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]))
5 in c: True
Collection(deque([5, 0, 1, 2, 3, 4, 6, 7, 8, 9]))

Python set with the ability to pop a random element

I am in need of a Python (2.7) object that functions like a set (fast insertion, deletion, and membership checking) but has the ability to return a random value. Previous questions asked on stackoverflow have answers that are things like:
import random
random.sample(mySet, 1)
But this is quite slow for large sets (it runs in O(n) time).
Other solutions aren't random enough (they depend on the internal representation of python sets, which produces some results which are very non-random):
for e in mySet:
    break
# e is now an element from mySet
I coded my own rudimentary class which has constant time lookup, deletion, and random values.
class randomSet:
    def __init__(self):
        self.dict = {}
        self.list = []

    def add(self, item):
        if item not in self.dict:
            self.dict[item] = len(self.list)
            self.list.append(item)

    def addIterable(self, item):
        for a in item:
            self.add(a)

    def delete(self, item):
        if item in self.dict:
            index = self.dict[item]
            if index == len(self.list)-1:
                del self.dict[self.list[index]]
                del self.list[index]
            else:
                self.list[index] = self.list.pop()
                self.dict[self.list[index]] = index
                del self.dict[item]

    def getRandom(self):
        if self.list:
            return self.list[random.randint(0, len(self.list)-1)]

    def popRandom(self):
        if self.list:
            index = random.randint(0, len(self.list)-1)
            if index == len(self.list)-1:
                del self.dict[self.list[index]]
                return self.list.pop()
            returnValue = self.list[index]
            self.list[index] = self.list.pop()
            self.dict[self.list[index]] = index
            del self.dict[returnValue]
            return returnValue
Are there any better implementations for this, or any big improvements to be made to this code?
I think the best way to do this would be to use the MutableSet abstract base class in collections. Inherit from MutableSet, and then define add, discard, __len__, __iter__, and __contains__; also rewrite __init__ to optionally accept a sequence, just like the set constructor does. MutableSet provides built-in definitions of all other set methods based on those methods. That way you get the full set interface cheaply. (And if you do this, addIterable is defined for you, under the name extend.)
discard in the standard set interface appears to be what you have called delete here. So rename delete to discard. Also, instead of having a separate popRandom method, you could just define popRandom like so:
def popRandom(self):
    item = self.getRandom()
    self.discard(item)
    return item
That way you don't have to maintain two separate item removal methods.
Finally, in your item removal method (delete now, discard according to the standard set interface), you don't need an if statement. Instead of testing whether index == len(self.list) - 1, simply swap the final item in the list with the item at the index of the list to be popped, and make the necessary change to the reverse-indexing dictionary. Then pop the last item from the list and remove it from the dictionary. This works whether index == len(self.list) - 1 or not:
def discard(self, item):
    if item in self.dict:
        index = self.dict[item]
        self.list[index], self.list[-1] = self.list[-1], self.list[index]
        self.dict[self.list[index]] = index
        del self.list[-1]      # or in one line:
        del self.dict[item]    # del self.dict[self.list.pop()]
One approach you could take is to derive a new class from set which salts itself with random objects of a type derived from int.
You can then use pop to select a random element, and if it is not of the salt type, reinsert and return it, but if it is of the salt type, insert a new, randomly-generated salt object (and pop to select a new object).
This will tend to alter the order in which objects are selected. On average, the number of attempts will depend on the proportion of salting elements, i.e. amortised O(k) performance.
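This is my reading of that scheme, sketched out (names are mine; pop() on a set returns an arbitrary rather than uniformly random element, the salt merely perturbs which element comes out first, and the sketch assumes at least one non-salt element is present):
import random

class _Salt(int):
    """Marker type for salt elements."""
    pass

class SaltedSet(set):
    def __init__(self, iterable=(), n_salt=16, salt_range=1 << 30):
        set.__init__(self, iterable)
        self._salt_range = salt_range
        for _ in range(n_salt):
            self.add(_Salt(random.randrange(salt_range)))

    def get_random(self):
        while True:
            item = self.pop()                 # arbitrary element
            if isinstance(item, _Salt):
                # consumed a salt element: replace it and try again
                self.add(_Salt(random.randrange(self._salt_range)))
            else:
                self.add(item)                # reinsert: this is a peek, not a removal
                return item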
Can't we implement a new class inheriting from set with some (hackish) modifications that enable us to retrieve a random element from the list with O(1) lookup time? Btw, on Python 2.x you should inherit from object, i.e. use class randomSet(object). Also PEP8 is something to consider for you :-)
Edit:
For getting some ideas of what hackish solutions might be capable of, this thread is worth reading:
http://python.6.n6.nabble.com/Get-item-from-set-td1530758.html
Here's a solution from scratch, which adds and pops in constant time. I also included some extra set functions for demonstrative purposes.
from random import randint
class RandomSet(object):
    """
    Implements a set in which elements can be
    added and drawn uniformly and randomly in
    constant time.
    """
    def __init__(self, seq=None):
        self.dict = {}
        self.list = []
        if seq is not None:
            for x in seq:
                self.add(x)

    def add(self, x):
        if x not in self.dict:
            self.dict[x] = len(self.list)
            self.list.append(x)

    def pop(self, x=None):
        if x is None:
            i = randint(0, len(self.list)-1)
            x = self.list[i]
        else:
            i = self.dict[x]
        self.list[i] = self.list[-1]
        self.dict[self.list[-1]] = i
        self.list.pop()
        self.dict.pop(x)
        return x

    def __contains__(self, x):
        return x in self.dict

    def __iter__(self):
        return iter(self.list)

    def __repr__(self):
        return "{" + ", ".join(str(x) for x in self.list) + "}"

    def __len__(self):
        return len(self.list)
Yes, I'd implement an "ordered set" in much the same way you did - and use a list as the internal data structure. However, I'd inherit straight from set and just keep track of the added items in an internal list (as you did) - and leave the methods I don't use alone.
Maybe add a "sync" method to update the internal list whenever the set is updated by set-specific operations, like the *_update methods.
That is, if using an "ordered dict" does not cover your use cases. (I just found that trying to cast ordered dict keys to a regular set is not optimized, so if you need set operations on your data that is not an option.)
If you don't mind only supporting comparable elements, then you could use blist.sortedset.

Python hashable dicts

As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea about how hygienic macros would work in an algol-like language (as opposed to the syntax-free lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (EDIT: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)
The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.
>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
I guess it has to be tuples all the way down. Now the Python standard library provides approximately what I'd need: collections.namedtuple has a very different syntax, but can be used as a key. Continuing from the session above:
>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule.
Edit: An additional problem with namedtuples is that they are strictly positional. Two tuples that look like they should be different can in fact be the same:
>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
tl;dr: How do I get dicts that can be used as keys to other dicts?
Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling dict.__setitem__(instance, key, value) but we're all adults here.
class hashdict(dict):
    """
    hashable dict implementation, suitable for use as a key into
    other dicts.

    >>> h1 = hashdict({"apples": 1, "bananas":2})
    >>> h2 = hashdict({"bananas": 3, "mangoes": 5})
    >>> h1+h2
    hashdict(apples=1, bananas=3, mangoes=5)
    >>> d1 = {}
    >>> d1[h1] = "salad"
    >>> d1[h1]
    'salad'
    >>> d1[h2]
    Traceback (most recent call last):
    ...
    KeyError: hashdict(bananas=3, mangoes=5)

    based on answers from
    http://stackoverflow.com/questions/1151658/python-hashable-dicts
    """
    def __key(self):
        return tuple(sorted(self.items()))

    def __repr__(self):
        return "{0}({1})".format(self.__class__.__name__,
            ", ".join("{0}={1}".format(
                str(i[0]), repr(i[1])) for i in self.__key()))

    def __hash__(self):
        return hash(self.__key())

    def __setitem__(self, key, value):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def __delitem__(self, key):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def clear(self):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def pop(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def popitem(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def setdefault(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    def update(self, *args, **kwargs):
        raise TypeError("{0} does not support item assignment"
                        .format(self.__class__.__name__))

    # update is not ok because it mutates the object
    # __add__ is ok because it creates a new object
    # while the new object is under construction, it's ok to mutate it
    def __add__(self, right):
        result = hashdict(self)
        dict.update(result, right)
        return result

if __name__ == "__main__":
    import doctest
    doctest.testmod()
Here is the easy way to make a hashable dictionary. Just remember not to mutate them after embedding in another dictionary for obvious reasons.
class hashabledict(dict):
    def __hash__(self):
        return hash(tuple(sorted(self.items())))
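To tie this back to the question's cache example, a hashabledict built from equal contents hashes and compares equal, so it works as part of a composite key (a small usage sketch, assuming the class just above):
cache = {}
rule = {"foo": "bar"}
cache[(hashabledict(rule), "baz")] = "quux"
print(cache[(hashabledict({"foo": "bar"}), "baz")])   # 'quux'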
Hashables should be immutable -- not enforcing this but TRUSTING you not to mutate a dict after its first use as a key, the following approach would work:
class hashabledict(dict):
    def __key(self):
        return tuple((k, self[k]) for k in sorted(self))
    def __hash__(self):
        return hash(self.__key())
    def __eq__(self, other):
        return self.__key() == other.__key()
If you DO need to mutate your dicts and STILL want to use them as keys, complexity explodes hundredfolds -- not to say it can't be done, but I'll wait until a VERY specific indication before I get into THAT incredible morass!-)
All that is needed to make dictionaries usable for your purpose is to add a __hash__ method:
class Hashabledict(dict):
    def __hash__(self):
        return hash(frozenset(self))
Note, the frozenset conversion will work for all dictionaries (i.e. it doesn't require the keys to be sortable). Likewise, there is no restriction on the dictionary values.
If there are many dictionaries with identical keys but with distinct values, it is necessary to have the hash take the values into account. The fastest way to do that is:
class Hashabledict(dict):
    def __hash__(self):
        return hash((frozenset(self), frozenset(self.itervalues())))
This is quicker than frozenset(self.iteritems()) for two reasons. First, the frozenset(self) step reuses the hash values stored in the dictionary, saving unnecessary calls to hash(key). Second, using itervalues will access the values directly and avoid the many memory allocator calls that items would make to form new key/value tuples in memory every time you do a lookup.
The given answers are okay, but they could be improved by using frozenset(...) instead of tuple(sorted(...)) to generate the hash:
>>> import timeit
>>> timeit.timeit('hash(tuple(sorted(d.iteritems())))', "d = dict(a=3, b='4', c=2345, asdfsdkjfew=0.23424, x='sadfsadfadfsaf')")
4.7758948802947998
>>> timeit.timeit('hash(frozenset(d.iteritems()))', "d = dict(a=3, b='4', c=2345, asdfsdkjfew=0.23424, x='sadfsadfadfsaf')")
1.8153600692749023
The performance advantage depends on the content of the dictionary, but in most cases I've tested, hashing with frozenset is at least 2 times faster (mainly because it does not need to sort).
A reasonably clean, straightforward implementation is
import collections

class FrozenDict(collections.Mapping):
    """Don't forget the docstrings!!"""
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)
    def __getitem__(self, key):
        return self._d[key]
    def __hash__(self):
        return hash(tuple(sorted(self._d.iteritems())))
I keep coming back to this topic... Here's another variation. I'm uneasy with subclassing dict to add a __hash__ method; there's virtually no escape from the problem that dicts are mutable, and trusting that they won't change seems like a weak idea. So I've instead looked at building a mapping based on a builtin type that is itself immutable. Although tuple is an obvious choice, accessing values in it implies a sort and a bisect; not a problem, but it doesn't seem to be leveraging much of the power of the type it's built on.
What if you jam key, value pairs into a frozenset? What would that require, how would it work?
Part 1: you need a way of encoding the items such that a frozenset will treat them mainly by their keys; I'll make a little subclass for that.
import collections
class pair(collections.namedtuple('pair_base', 'key value')):
    def __hash__(self):
        return hash((self.key, None))

    def __eq__(self, other):
        if type(self) != type(other):
            return NotImplemented
        return self.key == other.key

    def __repr__(self):
        return repr((self.key, self.value))
That alone puts you in spitting distance of an immutable mapping:
>>> frozenset(pair(k, v) for k, v in enumerate('abcd'))
frozenset([(0, 'a'), (2, 'c'), (1, 'b'), (3, 'd')])
>>> pairs = frozenset(pair(k, v) for k, v in enumerate('abcd'))
>>> pair(2, None) in pairs
True
>>> pair(5, None) in pairs
False
>>> goal = frozenset((pair(2, None),))
>>> pairs & goal
frozenset([(2, None)])
D'oh! Unfortunately, when you use the set operators and the elements are equal but not the same object, which one ends up in the return value is undefined; we'll have to go through some more gyrations.
>>> pairs - (pairs - goal)
frozenset([(2, 'c')])
>>> iter(pairs - (pairs - goal)).next().value
'c'
However, looking values up in this way is cumbersome, and worse, creates lots of intermediate sets; that won't do! We'll create a 'fake' key-value pair to get around it:
class Thief(object):
    def __init__(self, key):
        self.key = key
    def __hash__(self):
        return hash(pair(self.key, None))
    def __eq__(self, other):
        self.value = other.value
        return pair(self.key, None) == other
Which results in the less problematic:
>>> thief = Thief(2)
>>> thief in pairs
True
>>> thief.value
'c'
That's all the deep magic; the rest is wrapping it all up into something that has an interface like a dict. Since we're subclassing from frozenset, which has a very different interface, there's quite a lot of methods; we get a little help from collections.Mapping, but most of the work is overriding the frozenset methods for versions that work like dicts, instead:
class FrozenDict(frozenset, collections.Mapping):
    def __new__(cls, seq=()):
        return frozenset.__new__(cls, (pair(k, v) for k, v in seq))

    def __getitem__(self, key):
        thief = Thief(key)
        if frozenset.__contains__(self, thief):
            return thief.value
        raise KeyError(key)

    def __eq__(self, other):
        if not isinstance(other, FrozenDict):
            return dict(self.iteritems()) == other
        if len(self) != len(other):
            return False
        for key, value in self.iteritems():
            try:
                if value != other[key]:
                    return False
            except KeyError:
                return False
        return True

    def __hash__(self):
        return hash(frozenset(self.iteritems()))

    def get(self, key, default=None):
        thief = Thief(key)
        if frozenset.__contains__(self, thief):
            return thief.value
        return default

    def __iter__(self):
        for item in frozenset.__iter__(self):
            yield item.key

    def iteritems(self):
        for item in frozenset.__iter__(self):
            yield (item.key, item.value)

    def iterkeys(self):
        for item in frozenset.__iter__(self):
            yield item.key

    def itervalues(self):
        for item in frozenset.__iter__(self):
            yield item.value

    def __contains__(self, key):
        return frozenset.__contains__(self, pair(key, None))

    has_key = __contains__

    def __repr__(self):
        return type(self).__name__ + (', '.join(repr(item) for item in self.iteritems())).join('()')

    @classmethod
    def fromkeys(cls, keys, value=None):
        return cls((key, value) for key in keys)
which, ultimately, does answer my own question:
>>> myDict = {}
>>> myDict[FrozenDict(enumerate('ab'))] = 5
>>> FrozenDict(enumerate('ab')) in myDict
True
>>> FrozenDict(enumerate('bc')) in myDict
False
>>> FrozenDict(enumerate('ab', 3)) in myDict
False
>>> myDict[FrozenDict(enumerate('ab'))]
5
The accepted answer by @Unknown, as well as the answer by @AlexMartelli, work perfectly fine, but only under the following constraints:
The dictionary's values must be hashable. For example, hash(hashabledict({'a':[1,2]})) will raise TypeError.
Keys must support comparison operation. For example, hash(hashabledict({'a':'a', 1:1})) will raise TypeError.
The comparison operator on keys imposes a total ordering. For example, if the two keys in a dictionary are frozenset((1,2,3)) and frozenset((4,5,6)), neither compares less than the other (frozenset comparison is subset-based). Therefore, sorting the items of a dictionary with such keys can result in an arbitrary order, and therefore will violate the rule that equal objects must have the same hash value.
The much faster answer by @ObenSonne lifts constraints 2 and 3, but is still bound by constraint 1 (values must be hashable).
The faster yet answer by @RaymondHettinger lifts all 3 constraints because it does not include .values() in the hash calculation. However, its performance is good only if:
Most of the (non-equal) dictionaries that need to be hashed do not have identical .keys().
If this condition isn't satisfied, the hash function will still be valid, but may cause too many collisions. For example, in the extreme case where all the dictionaries are generated from a website template (field names as keys, user input as values), the keys will always be the same, and the hash function will return the same value for all the inputs. As a result, a hashtable that relies on such a hash function will become as slow as a list when retrieving an item (O(N) instead of O(1)).
I think the following solution will work reasonably well even if all 4 constraints I listed above are violated. It has an additional advantage that it can hash not only dictionaries, but any containers, even if they have nested mutable containers.
I'd much appreciate any feedback on this, since I only tested this lightly so far.
# python 3.4
import collections
import operator
import sys
import itertools
import reprlib

# a wrapper to make an object hashable, while preserving equality
class AutoHash:
    # for each known container type, we can optionally provide a tuple
    # specifying: type, transform, aggregator
    # even immutable types need to be included, since their items
    # may make them unhashable

    # transformation may be used to enforce the desired iteration
    # the result of a transformation must be an iterable
    # default: no change; for dictionaries, we use .items() to see values
    # usually transformation choice only affects efficiency, not correctness

    # aggregator is the function that combines all items into one object
    # default: frozenset; for ordered containers, we can use tuple
    # aggregator choice affects both efficiency and correctness
    # e.g., using a tuple aggregator for a set is incorrect,
    # since identical sets may end up with different hash values
    # frozenset is safe since at worst it just causes more collisions

    # unfortunately, no collections.ABC class is available that helps
    # distinguish ordered from unordered containers
    # so we need to just list them out manually as needed
    type_info = collections.namedtuple(
        'type_info',
        'type transformation aggregator')

    ident = lambda x: x
    # order matters; first match is used to handle a datatype
    known_types = (
        # dict also handles defaultdict
        type_info(dict, lambda d: d.items(), frozenset),
        # no need to include set and frozenset, since they are fine with defaults
        type_info(collections.OrderedDict, ident, tuple),
        type_info(list, ident, tuple),
        type_info(tuple, ident, tuple),
        type_info(collections.deque, ident, tuple),
        type_info(collections.Iterable, ident, frozenset)  # other iterables
    )

    # hash_func can be set to replace the built-in hash function
    # cache can be turned on; if it is, cycles will be detected,
    # otherwise cycles in a data structure will cause failure
    def __init__(self, data, hash_func=hash, cache=False, verbose=False):
        self._data = data
        self.hash_func = hash_func
        self.verbose = verbose
        self.cache = cache
        # cache objects' hashes for performance and to deal with cycles
        if self.cache:
            self.seen = {}

    def hash_ex(self, o):
        # note: isinstance(o, Hashable) won't check inner types
        try:
            if self.verbose:
                print(type(o),
                      reprlib.repr(o),
                      self.hash_func(o),
                      file=sys.stderr)
            return self.hash_func(o)
        except TypeError:
            pass

        # we let built-in hash decide if the hash value is worth caching
        # so we don't cache the built-in hash results
        if self.cache and id(o) in self.seen:
            return self.seen[id(o)][0]  # found in cache

        # check if o can be handled by decomposing it into components
        for typ, transformation, aggregator in AutoHash.known_types:
            if isinstance(o, typ):
                # another option is:
                # result = reduce(operator.xor, map(_hash_ex, handler(o)))
                # but collisions are more likely with xor than with frozenset
                # e.g. hash_ex([1,2,3,4])==0 with xor
                try:
                    # try to frozenset the actual components, it's faster
                    h = self.hash_func(aggregator(transformation(o)))
                except TypeError:
                    # components not hashable with built-in;
                    # apply our extended hash function to them
                    h = self.hash_func(aggregator(map(self.hash_ex, transformation(o))))
                if self.cache:
                    # storing the object too, otherwise memory location will be reused
                    self.seen[id(o)] = (h, o)
                if self.verbose:
                    print(type(o), reprlib.repr(o), h, file=sys.stderr)
                return h

        raise TypeError('Object {} of type {} not hashable'.format(repr(o), type(o)))

    def __hash__(self):
        return self.hash_ex(self._data)

    def __eq__(self, other):
        # short circuit to save time
        if self is other:
            return True

        # 1) type(self) a proper subclass of type(other) => self.__eq__ will be called first
        # 2) any other situation => lhs.__eq__ will be called first

        # case 1. one side is a subclass of the other, and AutoHash.__eq__ is not overridden in either
        # => the subclass instance's __eq__ is called first, and we should compare self._data and other._data
        # case 2. neither side is a subclass of the other; self is lhs
        # => we can't compare to another type; we should let the other side decide what to do, return NotImplemented
        # case 3. neither side is a subclass of the other; self is rhs
        # => we can't compare to another type, and the other side already tried and failed;
        # we should return False, but NotImplemented will have the same effect
        # any other case: we won't reach the __eq__ code in this class, no need to worry about it
        if isinstance(self, type(other)):  # identifies case 1
            return self._data == other._data
        else:  # identifies cases 2 and 3
            return NotImplemented

d1 = {'a':[1,2], 2:{3:4}}
print(hash(AutoHash(d1, cache=True, verbose=True)))

d = AutoHash(dict(a=1, b=2, c=3, d=[4,5,6,7], e='a string of chars'), cache=True, verbose=True)
print(hash(d))
You might also want to add these two methods to get the v2 pickling protocol to work with hashdict instances. Otherwise cPickle will try to use hashdict.__setitem__, resulting in a TypeError. Interestingly, with the other two versions of the protocol your code works just fine.
def __setstate__(self, objstate):
    for k, v in objstate.items():
        dict.__setitem__(self, k, v)

def __reduce__(self):
    return (hashdict, (), dict(self),)
Serialize the dict as a string with the json package:
import json

d = {'a': 1, 'b': 2}
s = json.dumps(d)
Restore the dict when you need it:
d2 = json.loads(s)
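One caveat if the string is meant to serve as a dictionary key: dicts with equal contents should serialize to the same string, so passing sort_keys=True makes the representation canonical regardless of insertion order. For example:
import json

cache = {}
rule = {"foo": "bar", "spam": "eggs"}
cache[(json.dumps(rule, sort_keys=True), "baz")] = "quux"
same_rule = {"spam": "eggs", "foo": "bar"}
print(cache[(json.dumps(same_rule, sort_keys=True), "baz")])   # 'quux'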
If you don't put numbers in the dictionary and you never lose the variables containing your dictionaries, you can do this:
cache[id(rule)] = "whatever"
since id() is unique for every dictionary
EDIT:
Oh sorry, yeah in that case what the other guys said would be better. I think you could also serialize your dictionaries as a string, like
cache[ 'foo:bar' ] = 'baz'
If you need to recover your dictionaries from the keys though, then you'd have to do something uglier like
cache[ 'foo:bar' ] = ( {'foo':'bar'}, 'baz' )
I guess the advantage of this is that you wouldn't have to write as much code.

Is there code out there to subclass set in Python for big xranges?

I'm trying to write some Python code that includes union/intersection of sets that potentially can be very large. Much of the time, these sets will be essentially set(xrange(1<<32)) or something of the kind, but often there will be ranges of values that do not belong in the set (say, 'bit 5 cannot be clear'), or extra values thrown in. For the most part, the set contents can be expressed algorithmically.
I can go in and do the dirty work to subclass set and create something, but I feel like this must be something that's been done before, and I don't want to spend days on wheel reinvention.
Oh, and just to make it harder, once I've created the set, I need to be able to iterate over it in random order. Quickly. Even if the set has a billion entries. (And that billion-entry set had better not actually take up gigabytes, because I'm going to have a lot of them.)
Is there code out there? Anyone have neat tricks? Am I asking for the moon?
You say:
For the most part, the set contents can be expressed algorithmically.
How about writing a class which presents the entire set API but determines set inclusion algorithmically, plus a number of classes which wrap other sets to perform the union and intersection algorithmically?
For example, if you had a set a and set b which are instances of these pseudo sets:
>>> u = Union(a, b)
And then you use u with the full set API, which will turn around and query a and b using the correct logic. All the set methods could be designed to return these pseudo unions/intersections automatically so the whole process is transparent.
Edit: Quick example with a very limited API:
class Base(object):
    def union(self, other):
        return Union(self, other)
    def intersection(self, other):
        return Intersection(self, other)

class RangeSet(Base):
    def __init__(self, low, high):
        self.low = low
        self.high = high
    def __contains__(self, value):
        return value >= self.low and value < self.high

class Union(Base):
    def __init__(self, *sets):
        self.sets = sets
    def __contains__(self, value):
        return any(value in x for x in self.sets)

class Intersection(Base):
    def __init__(self, *sets):
        self.sets = sets
    def __contains__(self, value):
        return all(value in x for x in self.sets)
a = RangeSet(0, 10)
b = RangeSet(5, 15)
u = a.union(b)
i = a.intersection(b)
print 3 in u
print 7 in u
print 12 in u
print 3 in i
print 7 in i
print 12 in i
Running gives you:
True
True
True
False
True
False
You are trying to make a set containing all the integer values from 0 to 4,294,967,295. A byte is 8 bits, which gets you to 255. 99.9999940628% of your values are over one byte in size. A crude minimum size for your set, even if you are able to overcome the syntactic issues, is 4 billion bytes, or 4 GB.
You are never going to be able to hold an instance of that set in less than a GB of memory. Even with compression, it's likely to be a tough squeeze. You are going to have to get much more clever with your math. You may be able to take advantage of some properties of the set. After all, it's a very special set. What are you trying to do?
If you are using Python 3.0, you can subclass collections.Set.
This sounds like it might overlap with linear programming. In linear programming you are trying to find some optimal case where you add constraints to a set of values (typically integers) which initially can be very large. There are various libraries listed at http://wiki.python.org/moin/NumericAndScientific/Libraries that mention integer and linear programming, but nothing jumps out as being obviously what you want.
I would avoid subclassing set, since clearly you can usefully reuse no part of set's implementation. I would even avoid subclassing collections.Set, since the latter requires you to supply a __len__ -- a functionality which you appear not to need otherwise, and which just can't be done effectively in the general case (it's going to be O(N), which, with the kind of size you're talking about, is far too slow). You're unlikely to find some existing implementation that matches your use case well enough to be worth reusing, because your requirements are very specific and even peculiar -- the concept of "random iterating and an occasional duplicate is OK", for example, is a really unusual one.
If your specs are complete (you only need union, intersection, and random iteration, plus occasional additions and removals of single items), implementing a special purpose class that fills those specs is not a crazy undertaking. If you have more specs that you have not explicitly mentioned, it will be trickier, but it's hard to guess without hearing all the specs. So for example, something like:
import random

class AbSet(object):
    def __init__(self, predicate, maxitem=1<<32):
        # set of all ints, >=0 and <maxitem, satisfying the predicate
        self.maxitem = maxitem
        self.predicate = predicate
        self.added = set()
        self.removed = set()

    def copy(self):
        x = type(self)(self.predicate, self.maxitem)
        x.added = set(self.added)
        x.removed = set(self.removed)
        return x

    def __contains__(self, item):
        if item in self.removed: return False
        if item in self.added: return True
        return (0 <= item < self.maxitem) and self.predicate(item)

    def __iter__(self):
        # random endless iteration
        while True:
            x = random.randrange(self.maxitem)
            if x in self: yield x

    def add(self, item):
        if item < 0 or item >= self.maxitem: raise ValueError
        if item not in self:
            self.removed.discard(item)
            self.added.add(item)

    def discard(self, item):
        if item < 0 or item >= self.maxitem: raise ValueError
        if item in self:
            self.removed.add(item)
            self.added.discard(item)

    def union(self, o):
        pred = lambda v: self.predicate(v) or o.predicate(v)
        x = type(self)(pred, max(self.maxitem, o.maxitem))
        toadd = [v for v in (self.added | o.added) if not pred(v)]
        torem = [v for v in (self.removed | o.removed) if pred(v)]
        x.added = set(toadd)
        x.removed = set(torem)
        return x

    def intersection(self, o):
        pred = lambda v: self.predicate(v) and o.predicate(v)
        x = type(self)(pred, min(self.maxitem, o.maxitem))
        toadd = [v for v in (self.added & o.added) if not pred(v)]
        torem = [v for v in (self.removed & o.removed) if pred(v)]
        x.added = set(toadd)
        x.removed = set(torem)
        return x
I'm not entirely certain about the logic determining added and removed upon union and intersection, but I hope this is a good base for you to work from.
