Python: maintain key sort order on dictionary

I want to iteratively add elements to a dictionary with an integer key for which I would like to keep a key-ordering. Modern Python (3.7+) keeps an insertion order on dict, but I need a key ordering.
Example use-case:
from collections import defaultdict
import numpy as np
my_dict = defaultdict(list)
for i in range(10):
    idx = i + np.random.randint(10)
    my_dict[idx].append(i)
# Do something with my_dict
...
print(my_dict)
Example output:
>> defaultdict(<class 'list'>, {9: [0, 4, 9], 10: [1], 2: [2], 6: [3, 5], 7: [6, 7], 16: [8]})
Desired output:
print(defaultdict(list, sorted(my_dict.items())))
>> defaultdict(<class 'list'>, {2: [2], 6: [3, 5], 7: [6, 7], 9: [0, 4, 9], 10: [1], 16: [8]})
Of course, this is a very simple sort, but the index shifts (computed above as i + np.random.randint(10)) can become arbitrarily large, and I need a low time-complexity solution. Also note that I am removing items from my_dict inside the loop (e.g., keys less than or equal to i).
What kinds of objects/data structures does Python provide to achieve this? I've looked at PriorityQueue (heapq), which preserves the ordering I need, but only that. I need the get and pop methods of a conventional dictionary plus the key ordering of, e.g., a PriorityQueue, without having to expensively sort at every iteration.
Edit: The best solution I have found so far is SortedDict from the sortedcontainers library. Unfortunately, it trades the O(1) time complexity of dict.pop for O(log N), but in exchange the dictionary is kept in key order at O(log N) per insertion, rather than re-sorting at O(N log N).
I am still open to alternative solutions that preserve the characteristics of SortedDict but provide O(1) time complexity for SortedDict.pop. Note that pop is always called on the smallest key, just like a queue's dequeue operation.
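For illustration, here is a minimal sketch of that SortedDict approach (assuming sortedcontainers is installed, e.g. via pip install sortedcontainers); popitem(0) pops the entry with the smallest key, matching the queue-like usage described above:
import numpy as np
from sortedcontainers import SortedDict

my_dict = SortedDict()
for i in range(10):
    idx = i + np.random.randint(10)
    my_dict.setdefault(idx, []).append(i)  # O(log N) for new keys

print(list(my_dict.keys()))  # keys always iterate in sorted order

smallest_key, values = my_dict.popitem(0)  # pop smallest key: O(log N)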

Here is a function I made to sort the dictionary keys.
def dict_sort(d):
    return {x: d[x] for x in sorted(d)}
Basically, the function iterates over the sorted() keys and maps them, in order, into a new dictionary, which it returns. Note that this rebuilds the whole dictionary, so each call costs O(N log N).
Call like this:
a = dict_sort(my_dict)
print(a)  # this prints the sorted dictionary
Hope this helps.

Related

Time complexity of dict.fromkeys()

I'm trying to get an ordered set in Python 3.8. According to this answer, I'm using the dict.fromkeys() method to get the unique items from a list while preserving insertion order. What's the time complexity of this method? Since I use this frequently in my codebase, is it the most efficient way, or is there a better way to get an ordered set?
>>> lst = [4,2,4,5,6,2]
>>> dict.fromkeys(lst)
{4: None, 2: None, 5: None, 6: None}
>>> list(dict.fromkeys(lst))
[4, 2, 5, 6]
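For what it's worth, dict.fromkeys(lst) is a single pass over lst with one average-O(1) insertion per element, so this ordered-set idiom is O(n) on average. A minimal sketch wrapping it as a helper (ordered_unique is just a name made up here for illustration):
def ordered_unique(items):
    # One pass; each dict insert is O(1) on average, so O(n) overall.
    return list(dict.fromkeys(items))

print(ordered_unique([4, 2, 4, 5, 6, 2]))  # [4, 2, 5, 6]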

Time complexity in sorting a list by converting it to a set and back into a list

I recently watched Raymond Hettinger's talk about Python dictionaries (and by extension sets...), in which he mentioned that integers hash to themselves and that adding integers to a dict (or set...) will insert them in order, and that as long as you don't delete items the order will be preserved in Python 3.6 (and probably above?). In the answer to this question it is stated that dictionaries preserve insertion order, but for sets it seems like integers are ordered according to their value.
Now: according to the time-complexity section of python.org, and in more detail here, the average time complexity of adding an element to a set is O(1). This means that if you have an unsorted list of integers, it should be possible to sort them by simply doing:
sorted_list = list(set(unsorted_list))
This seems to be the case as far as I have tested it (I tried it a few thousand times with random sequences).
My question is now: does this mean it is possible to sort integers in Python in O(n) time?
To me it would seem so, as it takes O(n) to build the set and O(n) to convert the set back to a list. Or am I missing something here?
No, not in general. You must've tried it with special cases, for example where the unsorted input list contains all numbers from 0 to n, each once.
Here's a simple case that fails:
>>> list(set([8, 1]))
[8, 1]
Done with CPython 3.8.1 32-bit.
No. Sets cannot be used to sort integers. While the hashes of integers are well-defined, the iteration order of sets is arbitrary.
The order of sets may vary by implementation, process and instance.
# CPython 3.7.4
>>> list({1, 8})
[8, 1]
>>> list({8, 1})
[8, 1]
# PyPy 3.6.9 (PyPy 7.3.0)
>>> list({1, 8})
[1, 8]
>>> list({8, 1})
[8, 1]
# CPython 2.7.10
>>> list({1, 8})
[8, 1]
>>> list({8, 1})
[8, 1]
# Jython 2.7.1 (java13.0.2)
>>> list({1, 8})
[1, 8]
>>> list({8, 1})
[1, 8]
The order of sets may also depend on the history of an instance.
# CPython 3.7.4
>>> a = {1, 3, 4, 8}
>>> list(a)
[8, 1, 3, 4]
>>> a.add(2)
>>> list(a)
[1, 2, 3, 4, 8]
>>> a.discard(2)
>>> list(a)
[1, 3, 4, 8]
Generally, O(n) sorting is possible for integers, strings and many other types of data. O(n log n) is the best you can do with sorting algorithms that only use comparisons (>, <, ==) to determine the order of items, but for many types, you're not limited to such algorithms. In particular, see Radix sort for sorting integers.
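To make that last point concrete, here is a minimal LSD (least-significant-digit) radix sort sketch for non-negative integers; it runs in O(n * d) time, where d is the number of digits of the largest value:
def radix_sort(nums, base=10):
    # LSD radix sort for non-negative integers: O(n * d).
    if not nums:
        return []
    result = list(nums)
    exp = 1
    while max(nums) // exp > 0:
        # Bucket by the current digit, then concatenate the buckets.
        buckets = [[] for _ in range(base)]
        for n in result:
            buckets[(n // exp) % base].append(n)
        result = [n for bucket in buckets for n in bucket]
        exp *= base
    return result

print(radix_sort([8, 1, 170, 45, 75, 90, 2]))  # [1, 2, 8, 45, 75, 90, 170]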

Extract index of Non duplicate elements in python list

I have a list:
input = ['a','b','c','a','b','d','e','d','g','g']
I want the index of every element, keeping only the first occurrence of each duplicate:
output = [0,1,2,5,6,8]
You should iterate over the enumerated list, adding each element to a set of "seen" elements, and append the index to the output list if the element hasn't already been seen (i.e., is not in the "seen" set).
Note: the name input shadows the built-in input() function, so I renamed it input_list.
output = []
seen = set()
for i, e in enumerate(input_list):
    if e not in seen:
        output.append(i)
        seen.add(e)
which gives output as [0, 1, 2, 5, 6, 8].
why use a set?
You could be thinking, why use a set when you could do something like:
[i for i,e in enumerate(input_list) if input_list.index(e) == i]
which would work because .index returns the index of the first element in the list with that value; so if you check an element's index against this, you can assert that it is the first occurrence of that element and filter out those which aren't.
However, this is not as efficient as using a set, because list.index requires Python to iterate over the list until it finds the element (or doesn't). This operation is O(n) complexity and since we are calling it for every element in input_list, the whole solution would be O(n^2).
On the other hand, using a set, as in the first solution, yields an O(n) solution, because checking whether an element is in a set is O(1) on average. This follows from how sets are implemented: roughly, each element is stored at a position derived from its hash, so membership can be checked by computing the hash and looking at that position, rather than iterating over everything (an oversimplification, but it captures the idea).
Thus, since each check for membership is O(1), and we do this for each element, we get an O(n) solution which is much better than an O(n^2) solution.
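If you want to see the difference empirically, here is a rough timeit sketch (absolute numbers will vary by machine) comparing the O(n^2) index-based version with the O(n) set-based version:
import timeit

setup = "xs = list(range(1000)) * 2"

quadratic = "[i for i, e in enumerate(xs) if xs.index(e) == i]"
linear = (
    "seen = set()\n"
    "out = []\n"
    "for i, e in enumerate(xs):\n"
    "    if e not in seen:\n"
    "        out.append(i)\n"
    "        seen.add(e)\n"
)

# The set-based version should be faster by orders of magnitude.
print(timeit.timeit(quadratic, setup, number=10))
print(timeit.timeit(linear, setup, number=10))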
You could do something like this, checking counts and prior occurrences (although this is computation-heavy):
indexes = []
for i, x in enumerate(inputlist):
    if (inputlist.count(x) == 1
            or x not in inputlist[:i]):
        indexes.append(i)
This adds the index i when either:
the item appears only once in the list, or
the item hasn't appeared earlier in the list up to this point (i.e., this is its first occurrence)
In case you don't mind indexes of the last occurrences of duplicates instead and are using Python 3.6+, here's an alternative solution:
list(dict(map(reversed, enumerate(input))).values())
This returns:
[3, 4, 2, 7, 6, 9]
Here is a one-liner using zip and reversed
>>> input = ['a','b','c','a','b','d','e','d','g','g']
>>> sorted(dict(zip(reversed(input), range(len(input)-1, -1, -1))).values())
[0, 1, 2, 5, 6, 8]
This question is missing a pandas solution. 😉
>>> import pandas as pd
>>> inp = ['a','b','c','a','b','d','e','d','g','g']
>>>
>>> pd.DataFrame(list(enumerate(inp))).groupby(1).first()[0].tolist()
[0, 1, 2, 5, 6, 8]
Yet another version, using a side effect in a list comprehension.
>>> xs=['a','b','c','a','b','d','e','d','g','g']
>>> seen = set()
>>> [i for i, v in enumerate(xs) if v not in seen and not seen.add(v)]
[0, 1, 2, 5, 6, 8]
The list comprehension filters indices of values that have not been seen already.
The trick is that not seen.add(v) is always true because seen.add(v) returns None.
Because of short circuit evaluation, seen.add(v) is performed if and only if v is not in seen, adding new values to seen on the fly.
At the end, seen contains all the values of the input list.
>>> seen
{'a', 'c', 'g', 'b', 'd', 'e'}
Note: it is usually a bad idea to use side effects in list comprehension,
but you might see this trick sometimes.

Optimized method of cutting/slicing sorted lists

Is there any pre-made optimized tool/library in Python to cut/slice lists for values "less than" something?
Here's the issue: Let's say I have a list like:
a=[1,3,5,7,9]
and I want to delete all the numbers which are <= 6, so the resulting list would be
[7,9]
6 is not in the list, so I can't use the built-in index(6) method of the list. I can do things like:
#!/usr/bin/env python
a = [1, 3, 5, 7, 9]
cut = 6
for i in range(len(a)-1, -2, -1):
    if a[i] <= cut:
        break
b = a[i+1:]
print "Cut list: %s" % b
which is a fairly quick method if the index to cut from is close to the end of the list, but will be inefficient if the item is close to the beginning (say I want to delete all the items which are > 2; there would be a lot of iterations).
I can also implement my own find method using binary search or such, but I was wondering if there's a more... wide-scope built-in library to handle this type of thing that I could reuse in other cases (for instance, if I need to delete all the numbers which are >= 6).
Thank you in advance.
You can use the bisect module to perform a sorted search:
>>> import bisect
>>> a[bisect.bisect_left(a, 6):]
[7, 9]
bisect.bisect_left is what you are looking for, I guess.
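For the ">= 6" variant mentioned at the end of the question, the left/right distinction is what matters: bisect_right cuts after any existing 6s, while bisect_left cuts before them. A small sketch with 6 actually in the list:
import bisect

a = [1, 3, 5, 6, 7, 9]

print(a[bisect.bisect_right(a, 6):])  # keep values > 6  -> [7, 9]
print(a[bisect.bisect_left(a, 6):])   # keep values >= 6 -> [6, 7, 9]
print(a[:bisect.bisect_left(a, 6)])   # keep values < 6  -> [1, 3, 5]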
If you just want to filter the list for all elements that fulfil a certain criterion, then the most straightforward way is to use the built-in filter function.
Here is an example (note that in Python 3, filter returns a lazy iterator, hence the list() call):
a_list = [10, 2, 3, 8, 1, 9]
# filter all elements smaller than 6:
filtered_list = list(filter(lambda x: x < 6, a_list))
filtered_list will then contain:
[2, 3, 1]
Note: This method does not rely on the ordering of the list, so for very large lists a method optimised for ordered searching (such as bisect) may perform better in terms of speed.
Bisect left and right helper function
#!/usr/bin/env python3
import bisect
def get_slice(list_, left, right):
    return list_[
        bisect.bisect_left(list_, left):
        bisect.bisect_left(list_, right)
    ]

assert get_slice([0, 1, 1, 3, 4, 4, 5, 6], 1, 5) == [1, 1, 3, 4, 4]
Tested in Ubuntu 16.04, Python 3.5.2.
Adding to Jon's answer, if you need to actually delete the elements less than or equal to 6 and want to keep the same reference to the list, rather than returning a new one:
del a[:bisect.bisect_right(a,6)]
You should note as well that bisect will only work on a sorted list.

Python "set" with duplicate/repeated elements

Is there a standard way to represent a "set" that can contain duplicate elements?
As I understand it, a set has exactly one or zero of each element. I want functionality to allow any number.
I am currently using a dictionary with elements as keys, and quantity as values, but this seems wrong for many reasons.
Motivation:
I believe there are many applications for such a collection. For example, a survey of favourite colours could be represented by:
survey = ['blue', 'red', 'blue', 'green']
Here, I do not care about the order, but I do care about quantities. I want to do things like:
survey.add('blue')
# would give survey == ['blue', 'red', 'blue', 'green', 'blue']
...and maybe even
survey.remove('blue')
# would give survey == ['blue', 'red', 'green']
Notes:
Yes, set is not the correct term for this kind of collection. Is there a more correct one?
A list of course would work, but the collection required is unordered. Not to mention that the method naming for sets seems to me to be more appropriate.
You are looking for a multiset.
Python's closest datatype is collections.Counter:
A Counter is a dict subclass for counting hashable objects. It is an
unordered collection where elements are stored as dictionary keys and
their counts are stored as dictionary values. Counts are allowed to be
any integer value including zero or negative counts. The Counter class
is similar to bags or multisets in other languages.
For an actual implementation of a multiset, use the bag class from the data-structures package on pypi. Note that this is for Python 3 only. If you need Python 2, here is a recipe for a bag written for Python 2.4.
Your approach with a dict of element/count pairs seems fine to me; you probably just need some more functionality. Have a look at collections.Counter:
O(1) test whether an element is present and current count retrieval (faster than with element in list and list.count(element))
counter.elements() looks like a list with all duplicates
easy manipulation union/difference with other Counters
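For example, a quick sketch of the survey use case from the question using Counter:
from collections import Counter

survey = Counter(['blue', 'red', 'blue', 'green'])

survey['blue'] += 1              # add another 'blue'
print(survey['blue'])            # 3 -- O(1) count lookup
survey['blue'] -= 1              # remove one 'blue'
print(list(survey.elements()))   # ['blue', 'blue', 'red', 'green']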
Python "set" with duplicate/repeated elements
This depends on how you define a set. One may assume that to the OP
order does not matter (whether ordered or unordered)
replicates/repeated elements (a.k.a. multiplicities) are permitted
Given these assumptions, the options reduce to two abstract types: a list or a multiset. In Python, these types usually translate to a list and a Counter respectively. See the Details for some subtleties to observe.
Given
import random
import collections as ct
random.seed(123)
elems = [random.randint(1, 11) for _ in range(10)]
elems
# [1, 5, 2, 7, 5, 2, 1, 7, 9, 9]
Code
A list of replicate elements:
list(elems)
# [1, 5, 2, 7, 5, 2, 1, 7, 9, 9]
A "multiset" of replicate elements:
ct.Counter(elems)
# Counter({1: 2, 5: 2, 2: 2, 7: 2, 9: 2})
Details
On Data Structures
We have a mix of terms here that easily get confused. To clarify, here are some basic mathematical data structures compared to ones in Python.
Type |Abbr|Order|Replicates| Math* | Python | Implementation
------------|----|-----|----------|-----------|-------------|----------------
Set |Set | n | n | {2 3 1} | {2, 3, 1} | set(el)
Ordered Set |Oset| y | n | {1, 2, 3} | - | list(dict.fromkeys(el))
Multiset |Mset| n | y | [2 1 2] | - | <see `mset` below>
List |List| y | y | [1, 2, 2] | [1, 2, 2] | list(el)
From the table, one can deduce the definition of each type. Example: a set is a container that ignores order and rejects replicate elements. In contrast, a list is a container that preserves order and permits replicate elements.
Also from the table, we can see:
Both an ordered set and a multiset are not explicitly implemented in Python
"Order" is a contrary term to a random arrangement of elements, e.g. sorted or insertion order
Sets and multisets are not strictly ordered. They can be ordered, but order does not matter.
Multisets permit replicates, thus they are not strict sets (the term "set" is indeed confusing).
On Multisets
Some may argue that collections.Counter is a multiset. You are safe in many cases to treat it as such, but be aware that Counter is simply a dict (a mapping) of key-multiplicity pairs. It is a map of multiplicities. See an example of elements in a flattened multiset:
mset = [x for k, v in ct.Counter(elems).items() for x in [k]*v]
mset
# [1, 1, 5, 5, 2, 2, 7, 7, 9, 9]
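Incidentally, Counter provides this flattening as a built-in: Counter.elements() repeats each key by its count, in the order keys were first encountered.
list(ct.Counter(elems).elements())
# [1, 1, 5, 5, 2, 2, 7, 7, 9, 9]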
Notice there is some residual ordering, which may be surprising if you expect disordered results. However, disorder does not preclude order. Thus while you can generate a multiset from a Counter, be aware of the following provisos on residual ordering in Python:
replicates get grouped together in the mapping, introducing some degree of order
in CPython 3.6 (as an implementation detail) and Python 3.7+ (as a language guarantee), dicts preserve insertion order
Summary
In Python, a multiset can be translated to a map of multiplicities, i.e. a Counter, which is not randomly unordered like a pure set. There can be some residual ordering, which in most cases is ok since order does not generally matter in multisets.
See Also
collections-extended - a package on extra data types in collections
N. Wildberger's lectures on mathematical data structures
*Mathematically (following N. Wildberger), we write braces {} to imply a set and brackets [] to imply a list, as in Python. Unlike Python, commas are used to imply order.
You can use a plain list and use list.count(element) whenever you want to access the "number" of elements.
my_list = [1, 1, 2, 3, 3, 3]
my_list.count(1) # will return 2
An alternative Python multiset implementation uses a sorted list data structure. There are a couple of implementations on PyPI. One option is the sortedcontainers module, which implements a SortedList data type that efficiently supports set-like methods such as add, remove, and contains. The sortedcontainers module is implemented in pure Python, is as fast as C implementations (sometimes faster), has 100% unit test coverage, and has had hours of stress testing.
Installation is easy from PyPI:
pip install sortedcontainers
If you can't pip install then simply pull the sortedlist.py file down from the open-source repository.
Use it as you would a set:
from sortedcontainers import SortedList
survey = SortedList(['blue', 'red', 'blue', 'green'])
survey.add('blue')
print(survey.count('blue'))  # 3
survey.remove('blue')
The sortedcontainers module also maintains a performance comparison with other popular implementations.
What you're looking for is indeed a multiset (or bag), a collection of not necessarily distinct elements (whereas a set does not contain duplicates).
There's an implementation of multisets here: https://github.com/mlenzen/collections-extended (the collections-extended package, also on PyPI).
The data structure for multisets there is called bag. A bag is a subclass of the Set abstract base class from the collections module, with an extra dictionary to keep track of the multiplicities of elements.
class _basebag(Set):
    """
    Base class for bag and frozenbag. Is not mutable and not hashable, so there's
    no reason to use this instead of either bag or frozenbag.
    """
    # Basic object methods
    def __init__(self, iterable=None):
        """Create a new basebag.

        If iterable isn't given, is None or is empty then the bag starts empty.
        Otherwise each element from iterable will be added to the bag
        however many times it appears.

        This runs in O(len(iterable))
        """
        self._dict = dict()
        self._size = 0
        if iterable:
            if isinstance(iterable, _basebag):
                for elem, count in iterable._dict.items():
                    self._inc(elem, count)
            else:
                for value in iterable:
                    self._inc(value)
A nice method of bag is nlargest (similar to Counter.most_common), which returns the multiplicities of all elements blazingly fast, since the number of occurrences of each element is kept up to date in the bag's dictionary:
>>> b=bag(random.choice(string.ascii_letters) for x in xrange(10))
>>> b.nlargest()
[('p', 2), ('A', 1), ('d', 1), ('m', 1), ('J', 1), ('M', 1), ('l', 1), ('n', 1), ('W', 1)]
>>> Counter(b)
Counter({'p': 2, 'A': 1, 'd': 1, 'm': 1, 'J': 1, 'M': 1, 'l': 1, 'n': 1, 'W': 1})
You can use collections.Counter to implement a multiset, as already mentioned.
Another way to implement a multiset is by using defaultdict, which would work by counting occurrences, like collections.Counter.
Here's a snippet from the python docs:
Setting the default_factory to int makes the defaultdict useful for counting (like a bag or multiset in other languages):
>>> s = 'mississippi'
>>> d = defaultdict(int)
>>> for k in s:
...     d[k] += 1
...
>>> d.items()
[('i', 4), ('p', 2), ('s', 4), ('m', 1)]
If you need duplicates, use a list, and transform it to a set when you need to operate on it as a set.
