I am trying to create a function that takes in a dictionary and returns a reversed version of it while taking care of repeated values. That is, if the original dictionary is
original_dict = {'first': ['a'], 'second': ['b', 'c'], 'third': ['d'], 'fourth': ['d']}
the function should return
{'a': ['first'], 'b': ['second'], 'c': ['second'], 'd': ['third', 'fourth']}
I've written
def reversed_dict(d):
    new_dict = {}
    for keys, values in d.items():
        new_dict[values] = keys
but when I try it out with the original dictionary, the function raises the error "unhashable type: 'list'". Any hints as to what might be causing it?
You have to iterate over the values in the list as well:
def reversed_dict(d):
    new_dict = {}
    for keys, values in d.items():
        for val in values:
            new_dict.setdefault(val, []).append(keys)
    return new_dict
You have to iterate over the values and add them as keys. You also have to take into account the possibility that you may have already added a value as a key.
def reversed_dict(d):
    new_dict = {}
    for keys, values in d.items():
        for v in values:
            if v in new_dict:
                new_dict[v].append(keys)
            else:
                new_dict[v] = [keys]
    return new_dict
Use collections.defaultdict:
from collections import defaultdict

def reversed_dict(d):
    new_dict = defaultdict(list)
    for key, values in d.items():
        for value in values:
            new_dict[value].append(key)
    return new_dict
The problem with your approach is you're using the entire list as the key of the dictionary. Instead you need to iterate over the list (i.e. for value in values: in the code above.)
defaultdict just makes it simpler to read.
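As a quick check (a usage sketch, reusing the question's original_dict with the defaultdict version above):
original_dict = {'first': ['a'], 'second': ['b', 'c'], 'third': ['d'], 'fourth': ['d']}
print(dict(reversed_dict(original_dict)))
# {'a': ['first'], 'b': ['second'], 'c': ['second'], 'd': ['third', 'fourth']}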
You are getting this error because the values in your original_dict are lists, which are mutable and, as the error suggests, unhashable, and therefore not valid candidates for keys in the reversed dict.
You can work around this by type-checking and casting mutable types into an immutable equivalent, e.g. a list into a tuple
(also, I find a dict comprehension a more elegant and concise approach):
def reversed_dict(d):
    return {v if not isinstance(v, list) else tuple(v): k for k, v in d.items()}
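For the question's original_dict (a usage sketch, not part of the answer above), this variant keeps each value list whole as a tuple key, and a repeated value keeps only the last key:
original_dict = {'first': ['a'], 'second': ['b', 'c'], 'third': ['d'], 'fourth': ['d']}
print(reversed_dict(original_dict))
# {('a',): 'first', ('b', 'c'): 'second', ('d',): 'fourth'}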
I have multiple dicts (or sequences of key-value pairs) like this:
d1 = {key1: x1, key2: y1}
d2 = {key1: x2, key2: y2}
How can I efficiently get a result like this, as a new dict?
d = {key1: (x1, x2), key2: (y1, y2)}
See also: How can one make a dictionary with duplicate keys in Python?.
Here's a general solution that will handle an arbitrary number of dictionaries, including cases where keys appear in only some of the dictionaries:
from collections import defaultdict

d1 = {1: 2, 3: 4}
d2 = {1: 6, 3: 7}
dd = defaultdict(list)

for d in (d1, d2):  # you can list as many input dicts as you want here
    for key, value in d.items():
        dd[key].append(value)

print(dd)  # result: defaultdict(<class 'list'>, {1: [2, 6], 3: [4, 7]})
Assuming all keys are always present in all dicts:
ds = [d1, d2]
d = {}
for k in d1.iterkeys():
    d[k] = tuple(d[k] for d in ds)
Note: in Python 3.x, use the code below:
ds = [d1, d2]
d = {}
for k in d1.keys():
    d[k] = tuple(d[k] for d in ds)
and if the dicts contain NumPy arrays:
import numpy as np

ds = [d1, d2]
d = {}
for k in d1.keys():
    d[k] = np.concatenate(list(d[k] for d in ds))
This function merges two dicts even if the keys in the two dictionaries are different:
def combine_dict(d1, d2):
    return {
        k: tuple(d[k] for d in (d1, d2) if k in d)
        for k in set(d1.keys()) | set(d2.keys())
    }
Example:
d1 = {
'a': 1,
'b': 2,
}
d2 = {
'b': 'boat',
'c': 'car',
}
combine_dict(d1, d2)
# Returns: {
# 'a': (1,),
# 'b': (2, 'boat'),
# 'c': ('car',)
# }
dict1 = {'m': 2, 'n': 4}
dict2 = {'n': 3, 'm': 1}
Making sure that the keys are in the same order:
dict2_sorted = {i:dict2[i] for i in dict1.keys()}
keys = dict1.keys()
values = zip(dict1.values(), dict2_sorted.values())
dictionary = dict(zip(keys, values))
gives:
{'m': (2, 1), 'n': (4, 3)}
If you only have d1 and d2:
from collections import defaultdict

d = defaultdict(list)
for a, b in list(d1.items()) + list(d2.items()):
    d[a].append(b)
(In Python 2, d1.items() + d2.items() works directly; in Python 3, items() returns views, so convert them to lists before concatenating.)
Here is one approach you can use which works even if the two dictionaries don't have the same keys:
d1 = {'a': 'test', 'b': 'btest', 'd': 'dreg'}
d2 = {'a': 'cool', 'b': 'main', 'c': 'clear'}

d = {}
for key in set(list(d1.keys()) + list(d2.keys())):
    try:
        d.setdefault(key, []).append(d1[key])
    except KeyError:
        pass
    try:
        d.setdefault(key, []).append(d2[key])
    except KeyError:
        pass

print(d)
This generates the output below:
{'a': ['test', 'cool'], 'c': ['clear'], 'b': ['btest', 'main'], 'd': ['dreg']}
Using precomputed keys
def merge(dicts):
    # First, figure out which keys are present.
    keys = set().union(*dicts)
    # Build a dict with those keys, using a list comprehension to
    # pull the values from the source dicts.
    return {
        k: [d[k] for d in dicts if k in d]
        for k in keys
    }
This is essentially Flux's answer, generalized for a list of input dicts.
The set().union trick works by making a set union of the keys in all the source dictionaries. The union method on a set (we start with an empty one) can accept an arbitrary number of arguments, and make a union of each input with the original set; and it can accept other iterables (it does not require other sets for the arguments) - it will iterate over them and look for all unique elements. Since iterating over a dict yields its keys, they can be passed directly to the union method.
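A quick illustration of that (not part of the answer's merge code): passing dicts straight to union collects all of their keys.
d1 = {'a': 1, 'b': 2}
d2 = {'b': 3, 'c': 4}
print(set().union(d1, d2))  # {'a', 'b', 'c'} (a set, so the order may vary)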
In the case where the keys of all inputs are known to be the same, this can be simplified: the keys can be hard-coded (or inferred from one of the inputs), and the if check in the list comprehension becomes unnecessary:
def merge(dicts):
    return {
        k: [d[k] for d in dicts]
        for k in dicts[0].keys()
    }
This is analogous to blubb's answer, but using a dict comprehension rather than an explicit loop to build the final result.
We could also try something like Mahdi Ghelichi's answer:
def merge(dicts):
    values = zip(*(d.values() for d in dicts))
    return dict(zip(dicts[0].keys(), values))
This should work in Python 3.5 and below: dicts with identical keys will store them in the same order, during the same run of the program (if you run the program again, you may get a different ordering, but still a consistent one).
In 3.6 and above, dictionaries preserve their insertion order (though they are only guaranteed to do so by the specification in 3.7 and above). Thus, input dicts could have the same keys in a different order, which would cause the first zip to combine the wrong values.
We can work around this by "sorting" the input dicts (re-creating them with keys in a consistent order, like [{k: d[k] for k in dicts[0].keys()} for d in dicts]). (In older versions, this would be extra work with no net effect.) However, this adds complexity, and this double-zip approach really doesn't offer any advantages over the previous one using a dict comprehension.
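For instance, a sketch of that normalization applied to the zip-based merge (assuming, as above, that all inputs share the same keys):
def merge(dicts):
    # Re-create each dict with the key order of the first one.
    normalized = [{k: d[k] for k in dicts[0].keys()} for d in dicts]
    values = zip(*(d.values() for d in normalized))
    return dict(zip(dicts[0].keys(), values))

d1 = {'x': 1, 'y': 2}
d2 = {'y': 20, 'x': 10}  # same keys, different order
print(merge([d1, d2]))   # {'x': (1, 10), 'y': (2, 20)}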
Building the result explicitly, discovering keys on the fly
As in Eli Bendersky's answer, but as a function:
from collections import defaultdict

def merge(dicts):
    result = defaultdict(list)
    for d in dicts:
        for key, value in d.items():
            result[key].append(value)
    return result
This will produce a defaultdict, a subclass of dict defined by the standard library. The equivalent code using only built-in dicts might look like:
def merge(dicts):
    result = {}
    for d in dicts:
        for key, value in d.items():
            result.setdefault(key, []).append(value)
    return result
Using other container types besides lists
The precomputed-key approach will work fine to make tuples; replace the list comprehension [d[k] for d in dicts if k in d] with tuple(d[k] for d in dicts if k in d). This passes a generator expression to the tuple constructor. (There is no "tuple comprehension".)
Since tuples are immutable and don't have an append method, the explicit loop approach should be modified by replacing .append(value) with += (value,). However, this may perform poorly if there is a lot of key duplication, since it must create a new tuple each time. It might be better to produce lists first and then convert the final result with something like {k: tuple(v) for (k, v) in merged.items()}.
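A concrete sketch of the tuple variant (merge_tuples is just an illustrative name; it assumes the same list-of-dicts input as above):
def merge_tuples(dicts):
    keys = set().union(*dicts)
    # Pass a generator expression to tuple() instead of using a list comprehension.
    return {k: tuple(d[k] for d in dicts if k in d) for k in keys}

d1 = {1: 2, 3: 4}
d2 = {1: 6, 3: 7}
print(merge_tuples([d1, d2]))  # {1: (2, 6), 3: (4, 7)} (key order may vary)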
Similar modifications can be made to get sets (although there is a set comprehension, using {}), Numpy arrays etc. For example, we can generalize both approaches with a container type like so:
def merge(dicts, value_type=list):
    # First, figure out which keys are present.
    keys = set().union(*dicts)
    # Build a dict with those keys, passing a generator expression to the
    # container type to pull the values from the source dicts.
    return {
        k: value_type(d[k] for d in dicts if k in d)
        for k in keys
    }
and
from collections import defaultdict

def merge(dicts, value_type=list):
    # We stick with hard-coded `list` for the first part,
    # because even other mutable types will offer different interfaces.
    result = defaultdict(list)
    for d in dicts:
        for key, value in d.items():
            result[key].append(value)
    # This is redundant for the default case, of course.
    return {k: value_type(v) for (k, v) in result.items()}
If the input values are already sequences
Rather than wrapping the values from the source in a new list, often people want to take inputs where the values are all already lists, and concatenate those lists in the output (or concatenate tuples or 1-dimensional Numpy arrays, combine sets, etc.).
This is still a trivial modification. For precomputed keys, use a nested list comprehension, ordered to get a flat result:
def merge(dicts):
    keys = set().union(*dicts)
    return {
        k: [v for d in dicts if k in d for v in d[k]]
        # Alternately:
        # k: [v for d in dicts for v in d.get(k, [])]
        for k in keys
    }
One might instead think of using sum to concatenate results from the original list comprehension. Don't do this - it will perform poorly when there are a lot of duplicate keys. The built-in sum isn't optimized for sequences (and will explicitly disallow "summing" strings) and will try to create a new list with each addition internally.
With the explicit loop approach, use .extend instead of .append:
from collections import defaultdict

def merge(dicts):
    result = defaultdict(list)
    for d in dicts:
        for key, value in d.items():
            result[key].extend(value)
    return result
The extend method of lists accepts any iterable, so this will work with inputs that have tuples for the values - of course, it still uses lists in the output; and of course, those can be converted back as shown previously.
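For instance (a quick usage sketch, assuming the extend-based merge just above), mixing list and tuple values works because extend only needs an iterable:
d1 = {'a': [1, 2], 'b': (3,)}
d2 = {'a': (4,), 'c': [5]}
print(dict(merge([d1, d2])))  # {'a': [1, 2, 4], 'b': [3], 'c': [5]}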
If the inputs have one item each
A common version of this problem involves input dicts that each have a single key-value pair. Alternately, the input might be (key, value) tuples (or lists).
The above approaches will still work, of course. For tuple inputs, converting them to dicts first, like [{k: v} for (k, v) in tuples], allows for using the approaches above directly. Alternately, the explicit iteration approach can be modified to accept the tuples directly, like in Victoria Stuart's answer:
from collections import defaultdict

def merge(pairs):
    result = defaultdict(list)
    for key, value in pairs:
        result[key].extend(value)
    return result
(The code was simplified because there is no need to iterate over key-value pairs when there is only one of them and it has been provided directly.)
However, for these single-item cases it may work better to sort the values by key and then use itertools.groupby. In this case, it will be easier to work with the tuples. That looks like:
from itertools import groupby

def merge(tuples):
    # groupby only groups consecutive items, so sort by the key first.
    pairs = sorted(tuples, key=lambda t: t[0])
    grouped = groupby(pairs, key=lambda t: t[0])
    return {k: [kv[1] for kv in ts] for k, ts in grouped}
Here, t is used as a name for one of the tuples from the input. The grouped iterator will provide pairs of a "key" value k (the first element that was common to the tuples being grouped) and an iterator ts over the tuples in that group. Then we extract the values from the key-value pairs kv in the ts, make a list from those, and use that as the value for the k key in the resulting dict.
To merge one-item dicts this way, of course, convert them to tuples first. One simple way to do this, for a list of one-item dicts, is [next(iter(d.items())) for d in dicts].
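Putting those two pieces together (a usage sketch assuming the groupby-based merge above):
dicts = [{'a': 1}, {'b': 2}, {'a': 3}]
pairs = [next(iter(d.items())) for d in dicts]  # [('a', 1), ('b', 2), ('a', 3)]
print(merge(pairs))  # {'a': [1, 3], 'b': [2]}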
Assuming there are two dictionaries with exactly the same keys, below is the most succinct way of doing it (Python 3 should be used for both solutions).
d1 = {'a': 1, 'b': 2, 'c':3}
d2 = {'a': 5, 'b': 6, 'c':7}
# get keys from one of the dictionaries
ks = [k for k in d1.keys()]
print(ks)
['a', 'b', 'c']
# call values from each dictionary on available keys
d_merged = {k: (d1[k], d2[k]) for k in ks}
print(d_merged)
{'a': (1, 5), 'b': (2, 6), 'c': (3, 7)}
# to merge values as list
d_merged = {k: [d1[k], d2[k]] for k in ks}
print(d_merged)
{'a': [1, 5], 'b': [2, 6], 'c': [3, 7]}
If there are two dictionaries with some common keys, but a few different keys, a list of all the keys should be prepared.
d1 = {'a': 1, 'b': 2, 'c':3, 'd': 9}
d2 = {'a': 5, 'b': 6, 'c':7, 'e': 4}
# get keys from both dictionaries
d1_ks = [k for k in d1.keys()]
d2_ks = [k for k in d2.keys()]
all_ks = set(d1_ks + d2_ks)
print(all_ks)
{'a', 'b', 'c', 'd', 'e'}  (printed as a set, so the element order may vary)
# call values from each dictionary on available keys
d_merged = {k: [d1.get(k), d2.get(k)] for k in all_ks}
print(d_merged)
{'d': [9, None], 'a': [1, 5], 'b': [2, 6], 'c': [3, 7], 'e': [None, 4]}
There is a great library, funcy, that does what you need in just one short line.
from funcy import join_with
from pprint import pprint
d1 = {"key1": "x1", "key2": "y1"}
d2 = {"key1": "x2", "key2": "y2"}
list_of_dicts = [d1, d2]
merged_dict = join_with(tuple, list_of_dicts)
pprint(merged_dict)
Output:
{'key1': ('x1', 'x2'), 'key2': ('y1', 'y2')}
More info here: funcy -> join_with.
def merge(d1, d2, merge):
    result = dict(d1)
    for k, v in d2.items():
        if k in result:
            result[k] = merge(result[k], v)
        else:
            result[k] = v
    return result

d1 = {'a': 1, 'b': 2}
d2 = {'a': 1, 'b': 3, 'c': 2}
print(merge(d1, d2, lambda x, y: (x, y)))
{'a': (1, 1), 'c': 2, 'b': (2, 3)}
If keys are nested:
d1 = { 'key1': { 'nkey1': 'x1' }, 'key2': { 'nkey2': 'y1' } }
d2 = { 'key1': { 'nkey1': 'x2' }, 'key2': { 'nkey2': 'y2' } }
ds = [d1, d2]
d = {}
for k in d1.keys():
    for k2 in d1[k].keys():
        d.setdefault(k, {})
        d[k].setdefault(k2, [])
        d[k][k2] = tuple(d[k][k2] for d in ds)
yields:
{'key1': {'nkey1': ('x1', 'x2')}, 'key2': {'nkey2': ('y1', 'y2')}}
Modifying this answer to create a dictionary of tuples (what the OP asked for), instead of a dictionary of lists:
from collections import defaultdict

d1 = {1: 2, 3: 4}
d2 = {1: 6, 3: 7}
dd = defaultdict(tuple)

for d in (d1, d2):  # you can list as many input dicts as you want here
    for key, value in d.items():
        dd[key] += (value,)

print(dd)
The above prints the following:
defaultdict(<class 'tuple'>, {1: (2, 6), 3: (4, 7)})
d1 = {'B': 10, 'C ': 7, 'A': 20}
d2 = {'B': 101, 'Y ': 7, 'X': 8}
d3 = {'A': 201, 'Y ': 77, 'Z': 8}

def CreateNewDictionaryAssemblingAllValues1(d1, d2, d3):
    aa = {
        k: [d[k] for d in (d1, d2, d3) if k in d]
        for k in set(d1.keys() | d2.keys() | d3.keys())
    }
    print(aa)
    return aa

CreateNewDictionaryAssemblingAllValues1(d1, d2, d3)
"""
Output :
{'X': [8], 'C ': [7], 'Y ': [7, 77], 'Z': [8], 'B': [10, 101], 'A': [20, 201]}
"""
Building on blubb's answer, you can also directly form the tuple using values from each dict:
ds = [d1, d2]
d = {}
for k in d1.keys():
    d[k] = (d1[k], d2[k])
This might be useful if you need a specific ordering for your tuples:
ds = [d1, d2, d3, d4]
d = {}
for k in d1.keys():
    d[k] = (d3[k], d1[k], d4[k], d2[k])  # if you wanted the tuple in the order d3, d1, d4, d2
The method below merges two dictionaries that have the same keys.
def update_dict(dict1: dict, dict2: dict) -> dict:
    output_dict = {}
    for key in dict1.keys():
        output_dict.update({key: []})
        if type(dict1[key]) != str:
            for value in dict1[key]:
                output_dict[key].append(value)
        else:
            output_dict[key].append(dict1[key])
        if type(dict2[key]) != str:
            for value in dict2[key]:
                output_dict[key].append(value)
        else:
            output_dict[key].append(dict2[key])
    return output_dict
Input: d1 = {key1: x1, key2: y1} d2 = {key1: x2, key2: y2}
Output: {'key1': ['x1', 'x2'], 'key2': ['y1', 'y2']}
dicts = [dict1,dict2,dict3]
out = dict(zip(dicts[0].keys(),[[dic[list(dic.keys())[key]] for dic in dicts] for key in range(0,len(dicts[0]))]))
A compact possibility
d1={'a':1,'b':2}
d2={'c':3,'d':4}
context={**d1, **d2}
context
{'b': 2, 'c': 3, 'd': 4, 'a': 1}
Given a dictionary like so:
my_map = {'a': 1, 'b': 2}
How can one invert this map to get:
inv_map = {1: 'a', 2: 'b'}
Python 3+:
inv_map = {v: k for k, v in my_map.items()}
Python 2:
inv_map = {v: k for k, v in my_map.iteritems()}
Assuming that the values in the dict are unique:
Python 3:
dict((v, k) for k, v in my_map.items())
Python 2:
dict((v, k) for k, v in my_map.iteritems())
If the values in my_map aren't unique:
Python 3:
inv_map = {}
for k, v in my_map.items():
inv_map[v] = inv_map.get(v, []) + [k]
Python 2:
inv_map = {}
for k, v in my_map.iteritems():
inv_map[v] = inv_map.get(v, []) + [k]
To do this while preserving the type of your mapping (assuming that it is a dict or a dict subclass):
def inverse_mapping(f):
return f.__class__(map(reversed, f.items()))
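For example (a usage sketch; OrderedDict is just one convenient dict subclass to demonstrate with), the mapping type is preserved:
from collections import OrderedDict

om = OrderedDict([('a', 1), ('b', 2)])
print(inverse_mapping(om))  # an OrderedDict mapping 1 -> 'a' and 2 -> 'b'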
Try this:
inv_map = dict(zip(my_map.values(), my_map.keys()))
(Note that the Python docs on dictionary views explicitly guarantee that .keys() and .values() have their elements in the same order, which allows the approach above to work.)
Alternatively:
inv_map = dict((my_map[k], k) for k in my_map)
or using a dict comprehension (Python 2.7+ and 3.x):
inv_map = {my_map[k] : k for k in my_map}
Another, more functional, way:
my_map = { 'a': 1, 'b':2 }
dict(map(reversed, my_map.items()))
We can also reverse a dictionary with duplicate keys using defaultdict:
from collections import Counter, defaultdict
def invert_dict(d):
d_inv = defaultdict(list)
for k, v in d.items():
d_inv[v].append(k)
return d_inv
text = 'aaa bbb ccc ddd aaa bbb ccc aaa'
c = Counter(text.split()) # Counter({'aaa': 3, 'bbb': 2, 'ccc': 2, 'ddd': 1})
dict(invert_dict(c)) # {1: ['ddd'], 2: ['bbb', 'ccc'], 3: ['aaa']}
See here:
This technique is simpler and faster than an equivalent technique using dict.setdefault().
This expands upon the answer by Robert, applying to when the values in the dict aren't unique.
class ReversibleDict(dict):
    # Ref: https://stackoverflow.com/a/13057382/
    def reversed(self):
        """
        Return a reversed dict, with common values in the original dict
        grouped into a list in the returned dict.

        Example:
        >>> d = ReversibleDict({'a': 3, 'c': 2, 'b': 2, 'e': 3, 'd': 1, 'f': 2})
        >>> d.reversed()
        {1: ['d'], 2: ['c', 'b', 'f'], 3: ['a', 'e']}
        """
        revdict = {}
        for k, v in self.items():
            revdict.setdefault(v, []).append(k)
        return revdict
The implementation is limited in that you cannot apply reversed twice and get the original back; it is not symmetric as such. It was tested with Python 2.6. The docstring above shows a use case, printing the resulting dict.
If you'd rather use a set than a list, and there could exist unordered applications for which this makes sense, instead of setdefault(v, []).append(k), use setdefault(v, set()).add(k).
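A minimal sketch of that set-based variant (reversed_sets is just an illustrative name; it is the same idea as reversed() above, with sets as the grouped values):
def reversed_sets(d):
    revdict = {}
    for k, v in d.items():
        revdict.setdefault(v, set()).add(k)
    return revdict

print(reversed_sets({'a': 3, 'c': 2, 'b': 2}))  # {3: {'a'}, 2: {'c', 'b'}} (set order may vary)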
A combination of list and dictionary comprehensions. It can handle duplicate values:
{v: [i for i in d.keys() if d[i] == v] for k, v in d.items()}
A case where the dictionary values are sets. Like:
some_dict = {"1":{"a","b","c"},
"2":{"d","e","f"},
"3":{"g","h","i"}}
The inverse would look like:
some_dict = {vi: k for k, v in some_dict.items() for vi in v}
The output is like this:
{'c': '1',
'b': '1',
'a': '1',
'f': '2',
'd': '2',
'e': '2',
'g': '3',
'h': '3',
'i': '3'}
For instance, you have the following dictionary:
my_dict = {'a': 'fire', 'b': 'ice', 'c': 'fire', 'd': 'water'}
And you want to get it in this inverted form:
inverted_dict = {'fire': ['a', 'c'], 'ice': ['b'], 'water': ['d']}
First Solution. For inverting key-value pairs in your dictionary use a for-loop approach:
# Use this code to invert dictionaries that have non-unique values
inverted_dict = dict()
for key, value in my_dict.items():
inverted_dict.setdefault(value, list()).append(key)
Second Solution. Use a dictionary comprehension approach for inversion:
# Use this code to invert dictionaries that have unique values
inverted_dict = {value: key for key, value in my_dict.items()}
Third Solution. Revert the inversion (this relies on the second solution):
# Use this code to recover the original dict from an inverted dict whose values are lists of keys
my_dict = {value: key for key in inverted_dict for value in inverted_dict[key]}
Lots of answers, but I didn't find anything clean for the case of a dictionary with non-unique values.
A solution would be:
from collections import defaultdict
inv_map = defaultdict(list)
for k, v in my_map.items():
inv_map[v].append(k)
Example:
If initial dict my_map = {'c': 1, 'd': 5, 'a': 5, 'b': 10}
then, running the code above will give:
{5: ['a', 'd'], 1: ['c'], 10: ['b']}
I found that this version is more than 10% faster than the accepted answer for a dictionary with 10000 keys.
d = {i: str(i) for i in range(10000)}
new_d = dict(zip(d.values(), d.keys()))
In addition to the other functions suggested above, if you like lambdas:
invert = lambda mydict: {v:k for k, v in mydict.items()}
Or, you could do it this way too:
invert = lambda mydict: dict( zip(mydict.values(), mydict.keys()) )
I think the best way to do this is to define a class. Here is an implementation of a "symmetric dictionary":
class SymDict:
    def __init__(self):
        self.aToB = {}
        self.bToA = {}

    def assocAB(self, a, b):
        # Stores and returns a tuple (a, b) of overwritten bindings
        currB = None
        if a in self.aToB: currB = self.aToB[a]
        currA = None
        if b in self.bToA: currA = self.bToA[b]
        self.aToB[a] = b
        self.bToA[b] = a
        return (currA, currB)

    def lookupA(self, a):
        if a in self.aToB:
            return self.aToB[a]
        return None

    def lookupB(self, b):
        if b in self.bToA:
            return self.bToA[b]
        return None
Deletion and iteration methods are easy enough to implement if they're needed.
This implementation is way more efficient than inverting an entire dictionary (which seems to be the most popular solution on this page). Not to mention, you can add or remove values from your SymDict as much as you want, and your inverse-dictionary will always stay valid -- this isn't true if you simply reverse the entire dictionary once.
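A brief usage sketch of the SymDict class above (the values are just illustrative):
s = SymDict()
s.assocAB('alice', 1)
s.assocAB('bob', 2)
print(s.lookupA('alice'))  # 1
print(s.lookupB(2))        # 'bob'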
If the values aren't unique, and you're a little hardcore:
inv_map = dict(
    (v, [k for (k, xx) in filter(lambda item: item[1] == v, my_map.items())])
    for v in set(my_map.values())
)
(The original used Python 2's tuple-parameter lambda, lambda (key, value): value == v, which is no longer valid syntax in Python 3.)
Especially for a large dict, note that this solution is far less efficient than the answer Python reverse / invert a mapping because it loops over items() multiple times.
This handles non-unique values and retains much of the look of the unique case.
inv_map = {v:[k for k in my_map if my_map[k] == v] for v in my_map.itervalues()}
For Python 3.x, replace itervalues with values.
I am aware that this question already has many good answers, but I wanted to share this very neat solution that also takes care of duplicate values:
def dict_reverser(d):
seen = set()
return {v: k for k, v in d.items() if v not in seen or seen.add(v)}
This relies on the fact that set.add always returns None in Python.
Here is another way to do it.
my_map = {'a': 1, 'b': 2}

inv_map = {}
for key in my_map.keys():
    val = my_map[key]
    inv_map[val] = key
dict([(value, key) for key, value in d.items()])
This function is symmetric for values of type list; tuples are converted to lists when performing reverse_dict(reverse_dict(dictionary)).
def reverse_dict(dictionary):
    reverse_dict = {}
    for key, value in dictionary.items():
        if not isinstance(value, (list, tuple)):
            value = [value]
        for val in value:
            reverse_dict[val] = reverse_dict.get(val, [])
            reverse_dict[val].append(key)
    for key, value in reverse_dict.items():
        if len(value) == 1:
            reverse_dict[key] = value[0]
    return reverse_dict
Since dictionaries require unique keys (unlike values), the reversed values have to be appended to lists so that they can be grouped under the new keys.
def r_maping(dictionary):
    Map = {}
    for z, x in dictionary.items():  # iterate through the keys and values
        # setdefault returns the list already stored for x, or inserts a new
        # empty list if x is not a key yet; the original key is then appended.
        Map.setdefault(x, []).append(z)
    return Map
Fast functional solution for non-bijective maps (values not unique):
from itertools import imap, groupby

def fst(s):
    return s[0]

def snd(s):
    return s[1]

def inverseDict(d):
    """
    input d: a -> b
    output : b -> set(a)
    """
    return {
        v: set(imap(fst, kv_iter))
        for (v, kv_iter) in groupby(
            sorted(d.iteritems(), key=snd),
            key=snd
        )
    }
(This is Python 2 code: in Python 3, use the built-in map instead of imap and d.items() instead of d.iteritems().)
In theory this should be faster than adding to the set (or appending to the list) one by one like in the imperative solution.
Unfortunately the values have to be sortable, the sorting is required by groupby.
Try this for Python 2.7/3.x:
inv_map = {}
for i in my_map:
    inv_map[my_map[i]] = i
print(inv_map)
def invertDictionary(d):
    myDict = {}
    for i in d:
        value = d.get(i)
        myDict.setdefault(value, []).append(i)
    return myDict

print(invertDictionary({'a': 1, 'b': 2, 'c': 3, 'd': 1}))
This will produce the output: {1: ['a', 'd'], 2: ['b'], 3: ['c']}
A lambda solution for current python 3.x versions:
d1 = dict(alice='apples', bob='bananas')
d2 = dict(map(lambda key: (d1[key], key), d1.keys()))
print(d2)
Result:
{'apples': 'alice', 'bananas': 'bob'}
This solution does not check for duplicates.
Some remarks:
The lambda construct can access d1 from the outer scope, so we only pass in the current key. It returns a tuple.
The dict() constructor accepts a list of tuples. It also accepts the result of a map, so we can skip the conversion to a list.
This solution has no explicit for loop. It also avoids using a list comprehension for those who are bad at math ;-)
Taking up the highly voted answer starting "If the values in my_map aren't unique:", I had a problem where not only were the values not unique, but, in addition, they were a list, with each item in the list being a tuple of three elements: a string value, a number, and another number.
Example:
mymap['key1'] gives you:
[('xyz', 1, 2),
('abc', 5, 4)]
I wanted to switch only the string value with the key, keeping the two number elements at the same place. You simply need another nested for loop then:
inv_map = {}
for k, v in my_map.items():
    for x in v:
        # x[1:3] is the same as (x[1], x[2]):
        inv_map[x[0]] = inv_map.get(x[0], []) + [(k,) + x[1:3]]
Example:
inv_map['abc'] now gives you:
[('key1', 1, 2),
('key1', 5, 4)]
This works even if you have non-unique values in the original dictionary.
def dict_invert(d):
    '''
    d: dict
    Returns an inverted dictionary
    '''
    inv_d = {}
    for k, v in d.items():
        if v not in inv_d.keys():
            inv_d[v] = [k]
        else:
            inv_d[v].append(k)
        inv_d[v].sort()
        print(f"{inv_d[v]} are the values")
    return inv_d
I would do it this way in Python 2:
inv_map = {my_map[x] : x for x in my_map}
Not something completely different, just a slightly rewritten recipe from the Cookbook. It's furthermore optimized by retaining the setdefault method, instead of getting it through the instance each time:
def inverse(mapping):
    '''
    A function to inverse a mapping, collecting keys with similar values
    in a list. Careful to retain the original type and to be fast.
    >> d = dict(a=1, b=2, c=1, d=3, e=2, f=1, g=5, h=2)
    >> inverse(d)
    {1: ['f', 'c', 'a'], 2: ['h', 'b', 'e'], 3: ['d'], 5: ['g']}
    '''
    res = {}
    setdef = res.setdefault
    for key, value in mapping.items():
        setdef(value, []).append(key)
    return res if mapping.__class__ == dict else mapping.__class__(res)
Designed to be run under CPython 3.x; for 2.x, replace mapping.items() with mapping.iteritems().
On my machine it runs a bit faster than the other examples here.