Create dict from list of lists [duplicate] - python

I want to combine these:
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
Into a single dictionary:
{'name': 'Monty', 'age': 42, 'food': 'spam'}

Like this:
keys = ['a', 'b', 'c']
values = [1, 2, 3]
dictionary = dict(zip(keys, values))
print(dictionary) # {'a': 1, 'b': 2, 'c': 3}
Voila :-) The pairwise dict constructor and zip function are awesomely useful.

Imagine that you have:
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
What is the simplest way to produce the following dictionary?
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
Most performant, dict constructor with zip
new_dict = dict(zip(keys, values))
In Python 3, zip returns a lazy iterator, and this is now the most performant approach.
dict(zip(keys, values)) does require the one-time global lookup each for dict and zip, but it doesn't form any unnecessary intermediate data-structures or have to deal with local lookups in function application.
Runner-up, dict comprehension:
A close runner-up to using the dict constructor is to use the native syntax of a dict comprehension (not a list comprehension, as others have mistakenly put it):
new_dict = {k: v for k, v in zip(keys, values)}
Choose this when you need to map or filter based on the keys or value.
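For example, a minimal sketch (reusing the keys and values from above) that uppercases each key and drops falsy values:
new_dict = {k.upper(): v for k, v in zip(keys, values) if v}
# {'NAME': 'Monty', 'AGE': 42, 'FOOD': 'spam'}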
In Python 2, zip returns a list. To avoid creating an unnecessary list, use izip instead (aliasing it to zip can reduce code changes when you move to Python 3).
from itertools import izip as zip
So that is still (2.7):
new_dict = {k: v for k, v in zip(keys, values)}
Python 2, ideal for <= 2.6
izip from itertools becomes zip in Python 3. izip is better than zip for Python 2 (because it avoids the unnecessary list creation), and ideal for 2.6 or below:
from itertools import izip
new_dict = dict(izip(keys, values))
Result in all cases:
>>> new_dict
{'age': 42, 'name': 'Monty', 'food': 'spam'}
Explanation:
If we look at the help on dict we see that it takes a variety of forms of arguments:
>>> help(dict)
class dict(object)
| dict() -> new empty dictionary
| dict(mapping) -> new dictionary initialized from a mapping object's
| (key, value) pairs
| dict(iterable) -> new dictionary initialized as if via:
| d = {}
| for k, v in iterable:
| d[k] = v
| dict(**kwargs) -> new dictionary initialized with the name=value pairs
| in the keyword argument list. For example: dict(one=1, two=2)
The optimal approach is to use an iterable while avoiding creating unnecessary data structures. In Python 2, zip creates an unnecessary list:
>>> zip(keys, values)
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
In Python 3, the equivalent would be:
>>> list(zip(keys, values))
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
and Python 3's zip merely creates an iterable object:
>>> zip(keys, values)
<zip object at 0x7f0e2ad029c8>
Since we want to avoid creating unnecessary data structures, we usually want to avoid Python 2's zip (since it creates an unnecessary list).
Less performant alternatives:
This is a generator expression being passed to the dict constructor:
generator_expression = ((k, v) for k, v in zip(keys, values))
dict(generator_expression)
or equivalently:
dict((k, v) for k, v in zip(keys, values))
And this is a list comprehension being passed to the dict constructor:
dict([(k, v) for k, v in zip(keys, values)])
In the first two cases, an extra layer of non-operative (thus unnecessary) computation is placed over the zip iterable, and in the case of the list comprehension, an extra list is unnecessarily created. I would expect all of them to be less performant, and certainly not more-so.
Performance review:
In 64 bit Python 3.8.2 provided by Nix, on Ubuntu 16.04, ordered from fastest to slowest:
>>> min(timeit.repeat(lambda: dict(zip(keys, values))))
0.6695233230129816
>>> min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)}))
0.6941362579818815
>>> min(timeit.repeat(lambda: {keys[i]: values[i] for i in range(len(keys))}))
0.8782548159942962
>>>
>>> min(timeit.repeat(lambda: dict([(k, v) for k, v in zip(keys, values)])))
1.077607496001292
>>> min(timeit.repeat(lambda: dict((k, v) for k, v in zip(keys, values))))
1.1840861019445583
dict(zip(keys, values)) wins even with small sets of keys and values, but for larger sets, the differences in performance will become greater.
A commenter said:
min seems like a bad way to compare performance. Surely mean and/or max would be much more useful indicators for real usage.
We use min because these algorithms are deterministic. We want to know the performance of the algorithms under the best conditions possible.
If the operating system hangs for any reason, it has nothing to do with what we're trying to compare, so we need to exclude those kinds of results from our analysis.
If we used mean, those kinds of events would skew our results greatly, and if we used max we will only get the most extreme result - the one most likely affected by such an event.
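For reference, a minimal sketch of how such a measurement can be reproduced with timeit.repeat (absolute numbers will differ by machine):
import timeit

keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')

# repeat() returns one timing per run; min() picks the least-disturbed run
print(min(timeit.repeat(lambda: dict(zip(keys, values)))))
print(min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)})))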
A commenter also says:
In python 3.6.8, using mean values, the dict comprehension is indeed still faster, by about 30% for these small lists. For larger lists (10k random numbers), the dict call is about 10% faster.
I presume we mean dict(zip(... with 10k random numbers. That does sound like a fairly unusual use case. It does make sense that the most direct calls would dominate in large datasets, and I wouldn't be surprised if OS hangs are dominating given how long it would take to run that test, further skewing your numbers. And if you use mean or max I would consider your results meaningless.
Let's use a more realistic size on our top examples:
import numpy
import timeit
l1 = list(numpy.random.random(100))
l2 = list(numpy.random.random(100))
And we see here that dict(zip(... does indeed run faster for larger datasets by about 20%.
>>> min(timeit.repeat(lambda: {k: v for k, v in zip(l1, l2)}))
9.698965263989521
>>> min(timeit.repeat(lambda: dict(zip(l1, l2))))
7.9965161079890095

Try this:
>>> import itertools
>>> keys = ('name', 'age', 'food')
>>> values = ('Monty', 42, 'spam')
>>> adict = dict(itertools.izip(keys,values))
>>> adict
{'food': 'spam', 'age': 42, 'name': 'Monty'}
In Python 2, it's also more economical in memory consumption compared to zip.

keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
out = dict(zip(keys, values))
Output:
{'food': 'spam', 'age': 42, 'name': 'Monty'}

You can also use dictionary comprehensions in Python ≥ 2.7:
>>> keys = ('name', 'age', 'food')
>>> values = ('Monty', 42, 'spam')
>>> {k: v for k, v in zip(keys, values)}
{'food': 'spam', 'age': 42, 'name': 'Monty'}

A more natural way is to use a dictionary comprehension:
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
dict = {keys[i]: values[i] for i in range(len(keys))}

If you need to transform keys or values before creating a dictionary then a generator expression could be used. Example:
>>> adict = dict((str(k), v) for k, v in zip(['a', 1, 'b'], [2, 'c', 3]))
Take a look at Code Like a Pythonista: Idiomatic Python.

With Python 3.x, go for dict comprehensions:
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
dic = {k:v for k,v in zip(keys, values)}
print(dic)
More on dict comprehensions here; an example:
>>> print {i : chr(65+i) for i in range(4)}
{0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}

For those who need simple code and aren’t familiar with zip:
List1 = ['This', 'is', 'a', 'list']
List2 = ['Put', 'this', 'into', 'dictionary']
This can be done by one line of code:
d = {List1[n]: List2[n] for n in range(len(List1))}

You can use the code below:
dict(zip(['name', 'age', 'food'], ['Monty', 42, 'spam']))
But make sure that the lengths of the lists are the same; if they are not, zip truncates the longer one.
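A minimal sketch of that truncation, plus one way to keep the extra keys instead (itertools.zip_longest, an alternative not mentioned in the original answer):
keys = ['name', 'age', 'food', 'extra']
values = ['Monty', 42, 'spam']
print(dict(zip(keys, values)))  # {'name': 'Monty', 'age': 42, 'food': 'spam'} -- 'extra' is dropped

from itertools import zip_longest
print(dict(zip_longest(keys, values)))  # {'name': 'Monty', 'age': 42, 'food': 'spam', 'extra': None}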

2018-04-18
The best solution is still:
In [92]: keys = ('name', 'age', 'food')
...: values = ('Monty', 42, 'spam')
...:
In [93]: dt = dict(zip(keys, values))
In [94]: dt
Out[94]: {'age': 42, 'food': 'spam', 'name': 'Monty'}
Transpose it:
lst = [('name', 'Monty'), ('age', 42), ('food', 'spam')]
keys, values = zip(*lst)
In [101]: keys
Out[101]: ('name', 'age', 'food')
In [102]: values
Out[102]: ('Monty', 42, 'spam')

Here is also an example of adding list values to your dictionary:
list1 = ["Name", "Surname", "Age"]
list2 = [["Cyd", "JEDD", "JESS"], ["DEY", "AUDIJE", "PONGARON"], [21, 32, 47]]
dic = dict(zip(list1, list2))
print(dic)
Always make sure that your "Key" (list1) is always the first parameter.
{'Name': ['Cyd', 'JEDD', 'JESS'], 'Surname': ['DEY', 'AUDIJE', 'PONGARON'], 'Age': [21, 32, 47]}

I had this doubt while I was trying to solve a graph-related problem. The issue was that I needed to define an empty adjacency list and wanted to initialize all the nodes with an empty list; that's when I wondered whether it would be fast enough, i.e. whether a zip operation would be worth doing rather than a simple key-value assignment. After all, most of the time, the time factor is an important ice breaker. So I performed a timeit comparison for both approaches.
import timeit

def dictionary_creation(n_nodes):
    dummy_dict = dict()
    for node in range(n_nodes):
        dummy_dict[node] = []
    return dummy_dict

def dictionary_creation_1(n_nodes):
    keys = list(range(n_nodes))
    values = [[] for i in range(n_nodes)]
    graph = dict(zip(keys, values))
    return graph

def wrapper(func, *args, **kwargs):
    def wrapped():
        return func(*args, **kwargs)
    return wrapped

n_nodes = 10_000_000
iteration = wrapper(dictionary_creation, n_nodes)
shorthand = wrapper(dictionary_creation_1, n_nodes)

for trial in range(1, 8):
    print(f'Iteration: {timeit.timeit(iteration, number=trial)}\nShorthand: {timeit.timeit(shorthand, number=trial)}')
For n_nodes = 10,000,000
I get,
Iteration: 2.825081646999024
Shorthand: 3.535717916001886
Iteration: 5.051560923002398
Shorthand: 6.255070794999483
Iteration: 6.52859034499852
Shorthand: 8.221581164998497
Iteration: 8.683652416999394
Shorthand: 12.599181543999293
Iteration: 11.587241565001023
Shorthand: 15.27298851100204
Iteration: 14.816342867001367
Shorthand: 17.162912737003353
Iteration: 16.645022411001264
Shorthand: 19.976680120998935
You can clearly see that after a certain point, the iteration approach at the n-th step overtakes the time taken by the shorthand approach at the (n-1)-th step.

It can be done in the following way.
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
dict = {}
for i in range(len(keys)):
    dict[keys[i]] = values[i]
print(dict)
{'name': 'Monty', 'age': 42, 'food': 'spam'}

All the answers summed up:
l = [1, 5, 8, 9]
ll = [3, 7, 10, 11]
zip:
dict(zip(l,ll)) # {1: 3, 5: 7, 8: 10, 9: 11}
#if you want to play with key or value #recommended
{k:v*10 for k, v in zip(l, ll)} #{1: 30, 5: 70, 8: 100, 9: 110}
counter:
d = {}
c=0
for k in l:
    d[k] = ll[c] #setting up keys from the second list values
    c += 1
print(d)
{1: 3, 5: 7, 8: 10, 9: 11}
enumerate:
d = {}
for i,k in enumerate(l):
    d[k] = ll[i]
print(d)
{1: 3, 5: 7, 8: 10, 9: 11}

Solution as dictionary comprehension with enumerate:
dict = {item : values[index] for index, item in enumerate(keys)}
Solution as for loop with enumerate:
dict = {}
for index, item in enumerate(keys):
    dict[item] = values[index]

If you are working with more than 1 set of values and wish to have a list of dicts you can use this:
def as_dict_list(data: list, columns: list):
    return [dict(zip(columns, row)) for row in data]
A real-life example would be a list of tuples from a db query paired with a tuple of column names from the same query; the other answers only cover the 1-to-1 case.
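For example, a small sketch with made-up rows standing in for a database result (the column names and rows below are illustrative only):
columns = ['name', 'age', 'food']
rows = [('Monty', 42, 'spam'), ('Brian', 33, 'eggs')]
print(as_dict_list(rows, columns))
# [{'name': 'Monty', 'age': 42, 'food': 'spam'}, {'name': 'Brian', 'age': 33, 'food': 'eggs'}]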

keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
dic = {}
c = 0
for i in keys:
    dic[i] = values[c]
    c += 1
print(dic)
{'name': 'Monty', 'age': 42, 'food': 'spam'}

import pprint
p = ['A', 'B', 'C']
q = [5, 2, 7]
r = ["M", "F", "M"]
s = ['Sovabazaar','Shyambazaar','Bagbazaar','Hatkhola']
def makeDictUsingAlternateLists1(**rest):
    print("*rest.keys() : ", *rest.keys())
    print("rest.keys() : ", rest.keys())
    print("*rest.values() : ", *rest.values())
    print("**rest.keys() : ", rest.keys())
    print("**rest.values() : ", rest.values())
    [print(a) for a in zip(*rest.values())]
    [print(dict(zip(rest.keys(), a))) for a in zip(*rest.values())]
    print("...")
    finalRes = [dict(zip(rest.keys(), a)) for a in zip(*rest.values())]
    return finalRes
l = makeDictUsingAlternateLists1(p=p,q=q,r=r,s=s)
pprint.pprint(l)
"""
*rest.keys() : p q r s
rest.keys() : dict_keys(['p', 'q', 'r', 's'])
*rest.values() : ['A', 'B', 'C'] [5, 2, 7] ['M', 'F', 'M'] ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']
**rest.keys() : dict_keys(['p', 'q', 'r', 's'])
**rest.values() : dict_values([['A', 'B', 'C'], [5, 2, 7], ['M', 'F', 'M'], ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']])
('A', 5, 'M', 'Sovabazaar')
('B', 2, 'F', 'Shyambazaar')
('C', 7, 'M', 'Bagbazaar')
{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'}
{'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'}
{'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}
...
[{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'},
{'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'},
{'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}]
"""

Method without the zip function:
l1 = [1,2,3,4,5]
l2 = ['a','b','c','d','e']
d1 = {}
for l1_ in l1:
    for l2_ in l2:
        d1[l1_] = l2_
        l2.remove(l2_)
        break
print(d1)
{1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}

Although there are multiple ways of doing this, I think the most fundamental way of approaching it is to create a loop and a dictionary, and store values into that dictionary. In the recursive approach the idea is still the same, but instead of using a loop, the function calls itself until it reaches the end. Of course there are other approaches, like using dict(zip(key, value)), and so on. These aren't the most effective solutions.
y = [1,2,3,4]
x = ["a","b","c","d"]
# This below is a brute force method
obj = {}
for i in range(len(y)):
    obj[y[i]] = x[i]
print(obj)

# Recursive approach
obj = {}
def map_two_lists(a, b, j=0):
    if j < len(a):
        obj[b[j]] = a[j]
        j += 1
        map_two_lists(a, b, j)
    return obj

res = map_two_lists(x, y)
print(res)
Both the results should print
{1: 'a', 2: 'b', 3: 'c', 4: 'd'}


Could anyone explain this syntax? [duplicate]

Can I use list comprehension syntax to create a dictionary?
For example, by iterating over pairs of keys and values:
d = {... for k, v in zip(keys, values)}
Use a dict comprehension (Python 2.7 and later):
{key: value for (key, value) in iterable}
Alternatively for simpler cases or earlier version of Python, use the dict constructor, e.g.:
pairs = [('a', 1), ('b', 2)]
dict(pairs) #=> {'a': 1, 'b': 2}
dict([(k, v+1) for k, v in pairs]) #=> {'a': 2, 'b': 3}
Given separate arrays of keys and values, use the dict constructor with zip:
keys = ['a', 'b']
values = [1, 2]
dict(zip(keys, values)) #=> {'a': 1, 'b': 2}
2) "zip'ped" from two separate iterables of keys/vals
dict(zip(list_of_keys, list_of_values))
In Python 3 and Python 2.7+, dictionary comprehensions look like the below:
d = {k:v for k, v in iterable}
For Python 2.6 or earlier, see fortran's answer.
In fact, you don't even need to iterate over the iterable if it already comprises key-value pairs; the dict constructor handles it graciously for you:
>>> ts = [(1, 2), (3, 4), (5, 6)]
>>> dict(ts)
{1: 2, 3: 4, 5: 6}
>>> gen = ((i, i+1) for i in range(1, 6, 2))
>>> gen
<generator object <genexpr> at 0xb7201c5c>
>>> dict(gen)
{1: 2, 3: 4, 5: 6}
Create a dictionary with list comprehension in Python
I like the Python list comprehension syntax. Can it be used to create dictionaries too? For example, by iterating over pairs of keys and values:
mydict = {(k,v) for (k,v) in blah blah blah}
You're looking for the phrase "dict comprehension" - it's actually:
mydict = {k: v for k, v in iterable}
Assuming blah blah blah is an iterable of two-tuples - you're so close. Let's create some "blahs" like that:
blahs = [('blah0', 'blah'), ('blah1', 'blah'), ('blah2', 'blah'), ('blah3', 'blah')]
Dict comprehension syntax:
Now the syntax here is the mapping part. What makes this a dict comprehension instead of a set comprehension (which is what your pseudo-code approximates) is the colon, : like below:
mydict = {k: v for k, v in blahs}
And we see that it worked, and should retain insertion order as-of Python 3.7:
>>> mydict
{'blah0': 'blah', 'blah1': 'blah', 'blah2': 'blah', 'blah3': 'blah'}
In Python 2 and up to 3.6, order was not guaranteed:
>>> mydict
{'blah0': 'blah', 'blah1': 'blah', 'blah3': 'blah', 'blah2': 'blah'}
Adding a Filter:
All comprehensions feature a mapping component and a filtering component that you can provide with arbitrary expressions.
So you can add a filter part to the end:
>>> mydict = {k: v for k, v in blahs if not int(k[-1]) % 2}
>>> mydict
{'blah0': 'blah', 'blah2': 'blah'}
Here we are just testing for if the last character is divisible by 2 to filter out data before mapping the keys and values.
In Python 2.7, it goes like:
>>> list1, list2 = ['a', 'b', 'c'], [1,2,3]
>>> dict( zip( list1, list2))
{'a': 1, 'c': 3, 'b': 2}
Zip them!
Python version >= 2.7, do the below:
d = {i: True for i in [1,2,3]}
Python version < 2.7(RIP, 3 July 2010 - 31 December 2019), do the below:
d = dict((i,True) for i in [1,2,3])
To add onto #fortran's answer, if you want to iterate over a list of keys key_list as well as a list of values value_list:
d = dict((key, value) for (key, value) in zip(key_list, value_list))
or
d = {key: value for (key, value) in zip(key_list, value_list)}
Just to throw in another example. Imagine you have the following list:
nums = [4,2,2,1,3]
and you want to turn it into a dict where the key is the index and value is the element in the list. You can do so with the following line of code:
{index:nums[index] for index in range(0,len(nums))}
Here is another example of dictionary creation using dict comprehension:
What I am trying to do here is to create an alphabet dictionary where each pair is an English letter and its corresponding position in the English alphabet.
>>> import string
>>> dict1 = {value: (int(key) + 1) for key, value in enumerate(list(string.ascii_lowercase))}
>>> dict1
{'a': 1, 'c': 3, 'b': 2, 'e': 5, 'd': 4, 'g': 7, 'f': 6, 'i': 9, 'h': 8,
'k': 11, 'j': 10, 'm': 13, 'l': 12, 'o': 15, 'n': 14, 'q': 17, 'p': 16, 's':
19, 'r': 18, 'u': 21, 't': 20, 'w': 23, 'v': 22, 'y': 25, 'x': 24, 'z': 26}
>>>
Notice the use of enumerate here to get the letters and their indexes, and the swapping of the letters and indices to generate the key-value pairs for the dictionary.
Hope it gives you a good idea of dict comprehensions and encourages you to use them more often to make your code compact.
This code will create a dictionary using a dict comprehension from multiple lists with different values, which can then be used for pd.DataFrame():
import pandas as pd

#Multiple lists
model=['A', 'B', 'C', 'D']
launched=[1983,1984,1984,1984]
discontinued=[1986, 1985, 1984, 1986]
#Dictionary with list comprehension
keys=['model','launched','discontinued']
vals=[model, launched,discontinued]
data = {key:vals[n] for n, key in enumerate(keys)}
#Convert dict to dataframe
df=pd.DataFrame(data)
display(df)
enumerate will pass n to vals to match each key with its list
Try this:
def get_dic_from_two_lists(keys, values):
    return {keys[i]: values[i] for i in range(len(keys))}
Assume we have two lists, country and capital:
country = ['India', 'Pakistan', 'China']
capital = ['New Delhi', 'Islamabad', 'Beijing']
Then create a dictionary from the two lists:
print(get_dic_from_two_lists(country, capital))
The output is like this:
{'Pakistan': 'Islamabad', 'China': 'Beijing', 'India': 'New Delhi'}
Adding to #Ekhtiar's answer, if you want to make a lookup dict from a list, you can use this:
names = ['a', 'b', 'd', 'f', 'c']
names_to_id = {v:k for k, v in enumerate(names)}
# {'a': 0, 'b': 1, 'd': 2, 'f': 3, 'c': 4}
Or, in the rare case that you want to filter duplicates, use set first (this works best with a list of numbers):
names = ['a', 'b', 'd', 'f', 'd', 'c']
sorted_list = list(set(names))
sorted_list.sort()
names_to_id = {v:k for k, v in enumerate(sorted_list)}
# {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'f': 4}
names = [1,2,5,5,6,2,1]
names_to_id = {v:k for k, v in enumerate(set(names))}
# {1: 0, 2: 1, 5: 2, 6: 3}
>>> import string
>>> {k: v**3 for (k, v) in zip(string.ascii_lowercase, range(26))}
Python supports dict comprehensions, which allow you to express the creation of dictionaries at runtime using a similarly concise syntax.
A dictionary comprehension takes the form {key: value for (key, value) in iterable}. This syntax was introduced in Python 3 and backported as far as Python 2.7, so you should be able to use it regardless of which version of Python you have installed.
A canonical example is taking two lists and creating a dictionary where the item at each position in the first list becomes a key and the item at the corresponding position in the second list becomes the value.
The zip function used inside this comprehension returns an iterator of tuples, where each element in the tuple is taken from the same position in each of the input iterables. In the example above, the returned iterator contains the tuples ('a', 0), ('b', 1), etc.
Output:
{'i': 512, 'e': 64, 'o': 2744, 'h': 343, 'l': 1331, 's': 5832, 'b': 1, 'w': 10648, 'c': 8, 'x': 12167, 'y': 13824, 't': 6859, 'p': 3375, 'd': 27, 'j': 729, 'a': 0, 'z': 15625, 'f': 125, 'q': 4096, 'u': 8000, 'n': 2197, 'm': 1728, 'r': 4913, 'k': 1000, 'g': 216, 'v': 9261}
Yes, it's possible. In Python, comprehensions can be used with lists, sets, dictionaries, etc.
You can write it this way:
mydict = {k:v for (k,v) in blah}
Another detailed example of a dict comprehension with a conditional expression and a loop:
parents = [father, mother]
parents = {parent:1 - P["mutation"] if parent in two_genes else 0.5 if parent in one_gene else P["mutation"] for parent in parents}
You can create a new dict for each pair and merge it with the previous dict:
reduce(lambda p, q: {**p, **{q[0]: q[1]}}, bla bla bla, {})
Obviously this approach requires reduce from functools.
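A runnable sketch of that reduce approach, with a made-up pairs list standing in for "bla bla bla":
from functools import reduce

pairs = [('a', 1), ('b', 2), ('c', 3)]
d = reduce(lambda p, q: {**p, **{q[0]: q[1]}}, pairs, {})
print(d)  # {'a': 1, 'b': 2, 'c': 3}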
Assuming blah blah blah is a list of two-tuples:
Let's see two methods:
# method 1
>>> lst = [('a', 2), ('b', 4), ('c', 6)]
>>> dict(lst)
{'a': 2, 'b': 4, 'c': 6}
# method 2
>>> lst = [('a', 2), ('b', 4), ('c', 6)]
>>> d = {k:v for k, v in lst}
>>> d
{'a': 2, 'b': 4, 'c': 6}
This approach iterates over the given data using a for loop.
Syntax: {key: value for (key, value) in data}
Eg:
# a list of (country, code) tuples:
Country_code = [('China', 86), ('USA', 1),
                ('Ghana', 233), ('Uk', 44)]
# build the dict from the (key, value) pairs:
{key: value for (key, value) in Country_code}

Deleting dictionary keys from a provided list in Python [duplicate]

I know how to remove an entry, 'key' from my dictionary d, safely. You do:
if d.has_key('key'):
    del d['key']
However, I need to remove multiple entries from a dictionary safely. I was thinking of defining the entries in a tuple as I will need to do this more than once.
entities_to_remove = ('a', 'b', 'c')
for x in entities_to_remove:
    if x in d:
        del d[x]
However, I was wondering if there is a smarter way to do this?
Using dict.pop:
d = {'some': 'data'}
entries_to_remove = ('any', 'iterable')
for k in entries_to_remove:
    d.pop(k, None)
Using Dict Comprehensions
final_dict = {key: value for key, value in d.items() if key not in [key1, key2]}
where key1 and key2 are to be removed.
In the example below, keys "b" and "c" are to be removed, and they are kept in a keys list.
>>> a
{'a': 1, 'c': 3, 'b': 2, 'd': 4}
>>> keys = ["b", "c"]
>>> print {key: a[key] for key in a if key not in keys}
{'a': 1, 'd': 4}
>>>
Why not like this:
entries = ('a', 'b', 'c')
the_dict = {'b': 'foo'}
def entries_to_remove(entries, the_dict):
    for key in entries:
        if key in the_dict:
            del the_dict[key]
A more compact version was provided by mattbornski using dict.pop()
A solution is using the map and filter functions:
python 2
d={"a":1,"b":2,"c":3}
l=("a","b","d")
map(d.__delitem__, filter(d.__contains__,l))
print(d)
python 3
d={"a":1,"b":2,"c":3}
l=("a","b","d")
list(map(d.__delitem__, filter(d.__contains__,l)))
print(d)
you get:
{'c': 3}
If you also need to retrieve the values for the keys you are removing, this would be a pretty good way to do it:
values_removed = [d.pop(k, None) for k in entities_to_remove]
You could of course still do this just for the removal of the keys from d, but you would be unnecessarily creating the list of values with the list comprehension. It is also a little unclear to use a list comprehension just for the function's side effect.
I found a solution with pop and map:
d = {'a': 'valueA', 'b': 'valueB', 'c': 'valueC', 'd': 'valueD'}
keys = ['a', 'b', 'c']
list(map(d.pop, keys))
print(d)
The output of this:
{'d': 'valueD'}
I have answered this question so late only because I think it might help anyone searching for the same thing in the future.
Update
The above code will throw an error if a key does not exist in the dict.
DICTIONARY = {'a': 'valueA', 'b': 'valueB', 'c': 'valueC', 'd': 'valueD'}
keys = ['a', 'l', 'c']
def remove_key(key):
    DICTIONARY.pop(key, None)
list(map(remove_key, keys))
print(DICTIONARY)
output:
DICTIONARY = {'b': 'valueB', 'd': 'valueD'}
Some timing tests for CPython 3 show that a simple for loop is the fastest way, and it's quite readable. Adding in a function doesn't cause much overhead either:
timeit results (10k iterations):
all(x.pop(v) for v in r) # 0.85
all(map(x.pop, r)) # 0.60
list(map(x.pop, r)) # 0.70
all(map(x.__delitem__, r)) # 0.44
del_all(x, r) # 0.40
<inline for loop>(x, r) # 0.35
def del_all(mapping, to_remove):
    """Remove list of elements from mapping."""
    for key in to_remove:
        del mapping[key]
For small iterations, doing that 'inline' was a bit faster, because of the overhead of the function call. But del_all is lint-safe, reusable, and faster than all the python comprehension and mapping constructs.
I have no problem with any of the existing answers, but I was surprised to not find this solution:
keys_to_remove = ['a', 'b', 'c']
my_dict = {k: v for k, v in zip("a b c d e f g".split(' '), [0, 1, 2, 3, 4, 5, 6])}
for k in keys_to_remove:
    try:
        del my_dict[k]
    except KeyError:
        pass
assert my_dict == {'d': 3, 'e': 4, 'f': 5, 'g': 6}
Note: I stumbled across this question coming from here. And my answer is related to this answer.
I have tested the performance of three methods:
# Method 1: `del`
for key in remove_keys:
    if key in d:
        del d[key]
# Method 2: `pop()`
for key in remove_keys:
    d.pop(key, None)
# Method 3: comprehension
{key: v for key, v in d.items() if key not in remove_keys}
Here are the results of 1M iterations:
del: 2.03s 2.0 ns/iter (100%)
pop(): 2.38s 2.4 ns/iter (117%)
comprehension: 4.11s 4.1 ns/iter (202%)
So both del and pop() are the fastest. Comprehensions are 2x slower.
But anyway, we speak nanoseconds here :) Dicts in Python are ridiculously fast.
Why not:
entriestoremove = (2,5,1)
for e in entriestoremove:
    if d.has_key(e):
        del d[e]
I don't know what you mean by "smarter way". Surely there are other ways, maybe with dictionary comprehensions:
entriestoremove = (2,5,1)
newdict = {x: d[x] for x in d if x not in entriestoremove}
inline
import functools
#: not key(c) in d
d = {"a": "avalue", "b": "bvalue", "d": "dvalue"}
entitiesToREmove = ('a', 'b', 'c')
#: python2
map(lambda x: functools.partial(d.pop, x, None)(), entitiesToREmove)
#: python3
list(map(lambda x: functools.partial(d.pop, x, None)(), entitiesToREmove))
print(d)
# output: {'d': 'dvalue'}
I think using the fact that the keys can be treated as a set is the nicest way if you're on python 3:
def remove_keys(d, keys):
    to_remove = set(keys)
    filtered_keys = d.keys() - to_remove
    filtered_values = map(d.get, filtered_keys)
    return dict(zip(filtered_keys, filtered_values))
Example:
>>> remove_keys({'k1': 1, 'k3': 3}, ['k1', 'k2'])
{'k3': 3}
It would be nice to have full support for set methods for dictionaries (and not the unholy mess we're getting with Python 3.9) so that you could simply "remove" a set of keys. However, as long as that's not the case, and you have a large dictionary with potentially a large number of keys to remove, you might want to know about the performance. So, I've created some code that creates something large enough for meaningful comparisons: a 100,000 x 1,000 matrix, so roughly 100,000,000 items in total.
from itertools import product
from time import perf_counter
# make a complete worksheet 100000 * 1000
start = perf_counter()
prod = product(range(1, 100000), range(1, 1000))
cells = {(x,y):x for x,y in prod}
print(len(cells))
print(f"Create time {perf_counter()-start:.2f}s")
clock = perf_counter()
# remove everything above row 50,000
keys = product(range(50000, 100000), range(1, 100))
# for x,y in keys:
#     del cells[x, y]
for n in map(cells.pop, keys):
    pass
print(len(cells))
stop = perf_counter()
print(f"Removal time {stop-clock:.2f}s")
10 million items or more is not unusual in some settings. Comparing the two methods on my local machine I see a slight improvement when using map and pop, presumably because of fewer function calls, but both take around 2.5s on my machine. But this pales in comparison to the time required to create the dictionary in the first place (55s), or to including checks within the loop. If this is likely then it's best to create a set that is an intersection of the dictionary keys and your filter:
keys = cells.keys() & keys
In summary: del is already heavily optimised, so don't worry about using it.
Another map() way to remove a list of keys from a dictionary and avoid raising a KeyError exception:
dic = {
    'key1': 1,
    'key2': 2,
    'key3': 3,
    'key4': 4,
    'key5': 5,
}
keys_to_remove = ['key_not_exist', 'key1', 'key2', 'key3']
k = list(map(dic.pop, keys_to_remove, keys_to_remove))
print('k=', k)
print('dic after = \n', dic)
This will produce the output:
k= ['key_not_exist', 1, 2, 3]
dic after = {'key4': 4, 'key5': 5}
The duplicated keys_to_remove is artificial; it is needed to supply default values to the dict.pop() calls. You can use any array here whose length equals len(keys_to_remove).
For example
import numpy as np

dic = {
    'key1': 1,
    'key2': 2,
    'key3': 3,
    'key4': 4,
    'key5': 5,
}
keys_to_remove = ['key_not_exist', 'key1', 'key2', 'key3']
k = list(map(dic.pop, keys_to_remove, np.zeros(len(keys_to_remove))))
print('k=', k)
print('dic after = ', dic)
This will produce the output:
k= [0.0, 1, 2, 3]
dic after = {'key4': 4, 'key5': 5}
def delete_keys_from_dict(dictionary, keys):
    """
    Deletes the unwanted keys in the dictionary
    :param dictionary: dict
    :param keys: list of keys
    :return: dict (modified)
    """
    from collections.abc import MutableMapping
    keys_set = set(keys)
    modified_dict = {}
    for key, value in dictionary.items():
        if key not in keys_set:
            if isinstance(value, list):
                modified_dict[key] = list()
                for x in value:
                    if isinstance(x, MutableMapping):
                        modified_dict[key].append(delete_keys_from_dict(x, keys_set))
                    else:
                        modified_dict[key].append(x)
            elif isinstance(value, MutableMapping):
                modified_dict[key] = delete_keys_from_dict(value, keys_set)
            else:
                modified_dict[key] = value
    return modified_dict

_d = {'a': 1245, 'b': 1234325, 'c': {'a': 1245, 'b': 1234325}, 'd': 98765,
      'e': [{'a': 1245, 'b': 1234325},
            {'a': 1245, 'b': 1234325},
            {'t': 767}]}
_output = delete_keys_from_dict(_d, ['a', 'b'])
_expected = {'c': {}, 'd': 98765, 'e': [{}, {}, {'t': 767}]}
print(_expected)
print(_output)
I'm late to this discussion, but for anyone else: a solution may be to create a list of keys as such.
k = ['a','b','c','d']
Then use pop() in a list comprehension, or for loop, to iterate over the keys and pop one at a time as such.
new_dictionary = [dictionary.pop(x, 'n/a') for x in k]
The 'n/a' is the default value to be returned in case the key does not exist.

Comparing values in a dictionary and grouping them again in Python [duplicate]

Given a dictionary like so:
my_map = {'a': 1, 'b': 2}
How can one invert this map to get:
inv_map = {1: 'a', 2: 'b'}
Python 3+:
inv_map = {v: k for k, v in my_map.items()}
Python 2:
inv_map = {v: k for k, v in my_map.iteritems()}
Assuming that the values in the dict are unique:
Python 3:
dict((v, k) for k, v in my_map.items())
Python 2:
dict((v, k) for k, v in my_map.iteritems())
If the values in my_map aren't unique:
Python 3:
inv_map = {}
for k, v in my_map.items():
    inv_map[v] = inv_map.get(v, []) + [k]
Python 2:
inv_map = {}
for k, v in my_map.iteritems():
    inv_map[v] = inv_map.get(v, []) + [k]
To do this while preserving the type of your mapping (assuming that it is a dict or a dict subclass):
def inverse_mapping(f):
    return f.__class__(map(reversed, f.items()))
Try this:
inv_map = dict(zip(my_map.values(), my_map.keys()))
(Note that the Python docs on dictionary views explicitly guarantee that .keys() and .values() have their elements in the same order, which allows the approach above to work.)
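A quick, informal way to convince yourself of that alignment (not part of the original answer):
my_map = {'a': 1, 'b': 2}
assert list(zip(my_map.values(), my_map.keys())) == [(v, k) for k, v in my_map.items()]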
Alternatively:
inv_map = dict((my_map[k], k) for k in my_map)
or using python 3.0's dict comprehensions
inv_map = {my_map[k] : k for k in my_map}
Another, more functional, way:
my_map = { 'a': 1, 'b':2 }
dict(map(reversed, my_map.items()))
We can also reverse a dictionary with duplicate keys using defaultdict:
from collections import Counter, defaultdict
def invert_dict(d):
    d_inv = defaultdict(list)
    for k, v in d.items():
        d_inv[v].append(k)
    return d_inv
text = 'aaa bbb ccc ddd aaa bbb ccc aaa'
c = Counter(text.split()) # Counter({'aaa': 3, 'bbb': 2, 'ccc': 2, 'ddd': 1})
dict(invert_dict(c)) # {1: ['ddd'], 2: ['bbb', 'ccc'], 3: ['aaa']}
See here:
This technique is simpler and faster than an equivalent technique using dict.setdefault().
This expands upon the answer by Robert, applying to when the values in the dict aren't unique.
class ReversibleDict(dict):
    # Ref: https://stackoverflow.com/a/13057382/
    def reversed(self):
        """
        Return a reversed dict, with common values in the original dict
        grouped into a list in the returned dict.

        Example:
        >>> d = ReversibleDict({'a': 3, 'c': 2, 'b': 2, 'e': 3, 'd': 1, 'f': 2})
        >>> d.reversed()
        {1: ['d'], 2: ['c', 'b', 'f'], 3: ['a', 'e']}
        """
        revdict = {}
        for k, v in self.items():
            revdict.setdefault(v, []).append(k)
        return revdict
The implementation is limited in that you cannot use reversed twice and get the original back. It is not symmetric as such. It is tested with Python 2.6. Here is a use case of how I am using it to print the resultant dict.
If you'd rather use a set than a list, and there could exist unordered applications for which this makes sense, instead of setdefault(v, []).append(k), use setdefault(v, set()).add(k).
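A minimal, standalone sketch of that set-based variant (the sample data is hypothetical):
d = {'a': 3, 'c': 2, 'b': 2, 'e': 3}
revdict = {}
for k, v in d.items():
    revdict.setdefault(v, set()).add(k)
print(revdict)  # e.g. {3: {'a', 'e'}, 2: {'b', 'c'}} -- set element order may vary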
A combination of list and dictionary comprehensions; it can handle duplicate keys:
{v:[i for i in d.keys() if d[i] == v ] for k,v in d.items()}
A case where the dictionary values are a set. Like:
some_dict = {"1": {"a", "b", "c"},
             "2": {"d", "e", "f"},
             "3": {"g", "h", "i"}}
The inverse would look like:
some_dict = {vi: k for k, v in some_dict.items() for vi in v}
The output is like this:
{'c': '1',
'b': '1',
'a': '1',
'f': '2',
'd': '2',
'e': '2',
'g': '3',
'h': '3',
'i': '3'}
For instance, you have the following dictionary:
my_dict = {'a': 'fire', 'b': 'ice', 'c': 'fire', 'd': 'water'}
And you want to get it in such an inverted form:
inverted_dict = {'fire': ['a', 'c'], 'ice': ['b'], 'water': ['d']}
First Solution. For inverting key-value pairs in your dictionary use a for-loop approach:
# Use this code to invert dictionaries that have non-unique values
inverted_dict = dict()
for key, value in my_dict.items():
inverted_dict.setdefault(value, list()).append(key)
Second Solution. Use a dictionary comprehension approach for inversion:
# Use this code to invert dictionaries that have unique values
inverted_dict = {value: key for key, value in my_dict.items()}
Third Solution. Use reverting the inversion approach (relies on the second solution):
# Use this code to invert dictionaries that have lists of values
my_dict = {value: key for key in inverted_dict for value in inverted_dict[key]}
There are a lot of answers, but I didn't find anything clean for the case of a dictionary with non-unique values.
A solution would be:
from collections import defaultdict
inv_map = defaultdict(list)
for k, v in my_map.items():
    inv_map[v].append(k)
Example:
If initial dict my_map = {'c': 1, 'd': 5, 'a': 5, 'b': 10}
then, running the code above will give:
{5: ['a', 'd'], 1: ['c'], 10: ['b']}
I found that this version is more than 10% faster than the accepted version for a dictionary with 10,000 keys.
d = {i: str(i) for i in range(10000)}
new_d = dict(zip(d.values(), d.keys()))
In addition to the other functions suggested above, if you like lambdas:
invert = lambda mydict: {v:k for k, v in mydict.items()}
Or, you could do it this way too:
invert = lambda mydict: dict( zip(mydict.values(), mydict.keys()) )
I think the best way to do this is to define a class. Here is an implementation of a "symmetric dictionary":
class SymDict:
    def __init__(self):
        self.aToB = {}
        self.bToA = {}

    def assocAB(self, a, b):
        # Stores and returns a tuple (a,b) of overwritten bindings
        currB = None
        if a in self.aToB: currB = self.bToA[a]
        currA = None
        if b in self.bToA: currA = self.aToB[b]
        self.aToB[a] = b
        self.bToA[b] = a
        return (currA, currB)

    def lookupA(self, a):
        if a in self.aToB:
            return self.aToB[a]
        return None

    def lookupB(self, b):
        if b in self.bToA:
            return self.bToA[b]
        return None
Deletion and iteration methods are easy enough to implement if they're needed.
This implementation is way more efficient than inverting an entire dictionary (which seems to be the most popular solution on this page). Not to mention, you can add or remove values from your SymDict as much as you want, and your inverse-dictionary will always stay valid -- this isn't true if you simply reverse the entire dictionary once.
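Following up on "easy enough to implement", a minimal sketch of what a deletion method could look like inside the SymDict class above (removeA is a hypothetical name, not part of the original answer):
    def removeA(self, a):
        # Drop both directions of the binding, if it exists
        b = self.aToB.pop(a, None)
        if b is not None:
            self.bToA.pop(b, None)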
If the values aren't unique, and you're a little hardcore:
inv_map = dict(
    (v, [k for (k, xx) in filter(lambda (key, value): value == v, my_map.items())])
    for v in set(my_map.values())
)
Especially for a large dict, note that this solution is far less efficient than the answer Python reverse / invert a mapping because it loops over items() multiple times.
This handles non-unique values and retains much of the look of the unique case.
inv_map = {v:[k for k in my_map if my_map[k] == v] for v in my_map.itervalues()}
For Python 3.x, replace itervalues with values.
I am aware that this question already has many good answers, but I wanted to share this very neat solution that also takes care of duplicate values:
def dict_reverser(d):
    seen = set()
    return {v: k for k, v in d.items() if v not in seen or seen.add(v)}
This relies on the fact that set.add always returns None in Python.
Here is another way to do it.
my_map = {'a': 1, 'b': 2}
inv_map= {}
for key in my_map.keys():
    val = my_map[key]
    inv_map[val] = key
dict([(value, key) for key, value in d.items()])
The function is symmetric for values of type list; tuples are converted to lists when performing reverse_dict(reverse_dict(dictionary)).
def reverse_dict(dictionary):
    reverse_dict = {}
    for key, value in dictionary.iteritems():
        if not isinstance(value, (list, tuple)):
            value = [value]
        for val in value:
            reverse_dict[val] = reverse_dict.get(val, [])
            reverse_dict[val].append(key)
    for key, value in reverse_dict.iteritems():
        if len(value) == 1:
            reverse_dict[key] = value[0]
    return reverse_dict
Since dictionaries require unique keys (unlike values), we have to append the reversed values into lists so they can be included under the new keys.
def r_maping(dictionary):
    List_z = []
    Map = {}
    for z, x in dictionary.iteritems():  # iterate through the keys and values
        # setdefault returns the value already stored for the key, or inserts
        # the provided default and returns it; we then append the original key
        # to that list for the specific value.
        Map.setdefault(x, List_z).append(z)
    return Map
Fast functional solution for non-bijective maps (values not unique):
from itertools import imap, groupby

def fst(s):
    return s[0]

def snd(s):
    return s[1]

def inverseDict(d):
    """
    input d: a -> b
    output : b -> set(a)
    """
    return {
        v: set(imap(fst, kv_iter))
        for (v, kv_iter) in groupby(
            sorted(d.iteritems(), key=snd),
            key=snd
        )
    }
In theory this should be faster than adding to the set (or appending to the list) one by one like in the imperative solution.
Unfortunately the values have to be sortable, the sorting is required by groupby.
Try this for Python 2.7/3.x:
inv_map = {}
for i in my_map:
    inv_map[my_map[i]] = i
print(inv_map)
def invertDictionary(d):
    myDict = {}
    for i in d:
        value = d.get(i)
        myDict.setdefault(value, []).append(i)
    return myDict

print(invertDictionary({'a': 1, 'b': 2, 'c': 3, 'd': 1}))
This will provide the output: {1: ['a', 'd'], 2: ['b'], 3: ['c']}
A lambda solution for current python 3.x versions:
d1 = dict(alice='apples', bob='bananas')
d2 = dict(map(lambda key: (d1[key], key), d1.keys()))
print(d2)
Result:
{'apples': 'alice', 'bananas': 'bob'}
This solution does not check for duplicates.
Some remarks:
The lambda construct can access d1 from the outer scope, so we only pass in the current key. It returns a tuple.
The dict() constructor accepts a list of tuples. It also accepts the result of a map, so we can skip the conversion to a list.
This solution has no explicit for loop. It also avoids using a list comprehension for those who are bad at math ;-)
Taking up the highly voted answer starting If the values in my_map aren't unique:, I had a problem where not only were the values not unique, but in addition, they were a list, with each item in the list consisting of a tuple of three elements: a string value, a number, and another number.
Example:
mymap['key1'] gives you:
[('xyz', 1, 2),
('abc', 5, 4)]
I wanted to switch only the string value with the key, keeping the two number elements at the same place. You simply need another nested for loop then:
inv_map = {}
for k, v in my_map.items():
    for x in v:
        # with x[1:3] same as (x[1], x[2]):
        inv_map[x[0]] = inv_map.get(x[0], []) + [(k,) + x[1:3]]
Example:
inv_map['abc'] now gives you:
[('key1', 1, 2),
('key1', 5, 4)]
This works even if you have non-unique values in the original dictionary.
def dict_invert(d):
    '''
    d: dict
    Returns an inverted dictionary
    '''
    inv_d = {}
    for k, v in d.items():
        if v not in inv_d.keys():
            inv_d[v] = [k]
        else:
            inv_d[v].append(k)
            inv_d[v].sort()
            print(f"{inv_d[v]} are the values")
    return inv_d
I would do it this way in Python 2:
inv_map = {my_map[x] : x for x in my_map}
Not something completely different, just a slightly rewritten recipe from the Cookbook. It's furthermore optimized by retaining the setdefault method, instead of getting it through the instance each time:
def inverse(mapping):
    '''
    A function to inverse mapping, collecting keys with similar values
    in a list. Careful to retain original type and to be fast.
    >> d = dict(a=1, b=2, c=1, d=3, e=2, f=1, g=5, h=2)
    >> inverse(d)
    {1: ['f', 'c', 'a'], 2: ['h', 'b', 'e'], 3: ['d'], 5: ['g']}
    '''
    res = {}
    setdef = res.setdefault
    for key, value in mapping.items():
        setdef(value, []).append(key)
    return res if mapping.__class__ == dict else mapping.__class__(res)
Designed to be run under CPython 3.x, for 2.x replace mapping.items() with mapping.iteritems()
On my machine it runs a bit faster than the other examples here.


