Question
According to this answer, in Python 3.5 or greater, it is possible to merge two dictionaries x and y by unpacking them:
z = {**x, **y}
Is it possible to unpack a variadic list of dictionaries? Something like
def merge(*dicts):
    return {***dicts}  # this fails, of course. What should I use here?
For instance, I would expect that
list_of_dicts = [{'a': 1, 'b': 2}, {'c': 3}, {'d': 4}]
{***list_of_dicts} == {'a': 1, 'b': 2, 'c': 3, 'd': 4}
Note that this question is not about how to merge lists of dictionaries since the link above provides an answer to this. The question here is: is it possible, and how, to unpack lists of dictionaries?
Edit
As stated in the comments, this question is very similar to this one. However, unpacking a list of dictionaries is different from simply merging them. Supposing that there was an operator *** designed to unpack lists of dictionaries, and given
def print_values(a, b, c, d):
    print('a =', a)
    print('b =', b)
    print('c =', c)
    print('d =', d)
list_of_dicts = [{'a': 1, 'b': 2}, {'c': 3}, {'d': 4}]
it would be possible to write
print_values(***list_of_dicts)
instead of
print_values(**merge(*list_of_dicts))
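In the meantime, the effect of the hypothetical print_values(***list_of_dicts) can be approximated by flattening the list into a single dict first and then using the existing ** unpacking (a sketch; later dicts win on duplicate keys):
list_of_dicts = [{'a': 1, 'b': 2}, {'c': 3}, {'d': 4}]
merged = {k: v for d in list_of_dicts for k, v in d.items()}
print_values(**merged)
# a = 1
# b = 2
# c = 3
# d = 4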
Another solution is to use collections.ChainMap:
from collections import ChainMap
dict(ChainMap(*list_of_dicts[::-1]))
# {'a': 1, 'b': 2, 'c': 3, 'd': 4}
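The [::-1] matters: ChainMap gives priority to the first mapping it receives, so without reversing, earlier dicts in the list would win on duplicate keys. A quick check:
from collections import ChainMap
dicts = [{'a': 1}, {'a': 2}]
dict(ChainMap(*dicts))        # {'a': 1} -- first dict wins
dict(ChainMap(*dicts[::-1]))  # {'a': 2} -- last dict wins, matching {**x, **y}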
You could just iterate over the list and use update:
lst = [{'a': 1, 'b': 2}, {'c': 3}, {'d': 4}]
dct = {}
for item in lst:
    dct.update(item)
print(dct)
# {'a': 1, 'b': 2, 'c': 3, 'd': 4}
There's no syntax for that, but you can use itertools.chain to concatenate the key/value tuples from each dict into a single stream that dict can consume.
from itertools import chain
def merge(*dicts):
    return dict(chain.from_iterable(d.items() for d in dicts))
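For example, with the list from the question (note the list itself has to be unpacked into merge, since it accepts *dicts):
list_of_dicts = [{'a': 1, 'b': 2}, {'c': 3}, {'d': 4}]
merge(*list_of_dicts)
# {'a': 1, 'b': 2, 'c': 3, 'd': 4}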
You can also build a list of item views with a list comprehension and unpack it into chain:
from itertools import chain

def merge(*dicts):
    return dict(chain(*[d.items() for d in dicts]))
To merge multiple dictionaries you can use the function reduce:
from functools import reduce
lst = [{'a': 1, 'b': 2}, {'c': 3}, {'d': 4}]
reduce(lambda x, y: dict(**x, **y), lst)
# {'a': 1, 'b': 2, 'c': 3, 'd': 4}
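Note that dict(**x, **y) only accepts string keys; if the keys may not be strings, the same reduce with a dict display should do the job (Python 3.5+):
from functools import reduce
lst = [{1: 'a'}, {2: 'b'}]
reduce(lambda x, y: {**x, **y}, lst)
# {1: 'a', 2: 'b'}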
You could use a list comprehension to build a flat list of key/value pairs and pass that list to dict:
def merge(*dicts):
    pairs = [pair for d in dicts for pair in d.items()]
    return dict(pairs)
You can just use a generator expression to iterate over all the dicts in the list, then iterate over each of those dicts' items, and finally feed the resulting key/value pairs to dict:
>>> lst = [{'a':1}, {'b':2}, {'c':1}, {'d':2}]
>>> dict(kv for d in lst for kv in d.items())
{'a': 1, 'b': 2, 'c': 1, 'd': 2}
You can use reduce to merge two dicts at a time using dict.update:
>>> from functools import reduce
>>> lst = [{'a':1}, {'b':2}, {'c':1}, {'d':2}]
>>> reduce(lambda d1, d2: d1.update(d2) or d1, lst, {})
{'a': 1, 'b': 2, 'c': 1, 'd': 2}
When you pass the list itself to merge, *dicts packs the arguments into a tuple, so the list ends up at dicts[0]; you can then flatten it with this comprehension, which also handles non-uniform keys:
list_of_dicts = [{'a': 1, 'b': 2}, {'c': 3}, {'d': 4}]
def merge(*dicts):
    return dict(j for i in dicts[0] for j in i.items())
print(merge(list_of_dicts))
{'a': 1, 'b': 2, 'c': 3, 'd': 4}
Related
Imagine that you have to sort a list of dicts by the value of a particular key. Note that the key might be missing from some of the dicts, in which case the value for that key defaults to 0.
sample input
input = [{'a': 1, 'b': 2}, {'a': 10, 'b': 3}, {'b': 5}]
sample output (sorted by value of key 'a')
[{'b': 5}, {'a': 1, 'b': 2}, {'a': 10, 'b': 3}]
note that {'b': 5} is first in the sort-order because it has the lowest value for 'a' (0)
I would've used input.sort(key=operator.itemgetter('a')), if all the dicts were guaranteed to have the key 'a'. Or I could convert the input dicts to collections.defaultdict and then sort.
Is there a way to do this in-place without having to create new dicts or update the existing dicts? Can operator.itemgetter handle missing keys?
>>> items = [{'a': 1, 'b': 2}, {'a': 10, 'b': 3}, {'b': 5}]
>>> sorted(items, key=lambda d: d.get('a', 0))
[{'b': 5}, {'a': 1, 'b': 2}, {'a': 10, 'b': 3}]
Or, to sort the existing list in-place:
items.sort(key=lambda d: d.get('a', 0))
Or, with a conditional expression as the key in sorted:
>>> items = [{'a': 1, 'b': 2}, {'a': 10, 'b': 3}, {'b': 5}]
>>> sorted(items, key=lambda x: x['a'] if 'a' in x else 0)
[{'b': 5}, {'a': 1, 'b': 2}, {'a': 10, 'b': 3}]
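As for whether operator.itemgetter can handle missing keys: it cannot; it simply indexes the dict and raises KeyError, which is why the key function above falls back to dict.get (or a conditional expression):
>>> import operator
>>> operator.itemgetter('a')({'b': 5})
Traceback (most recent call last):
  ...
KeyError: 'a'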
Given a list of dictionaries, e.g.,
[{'a': 1, 'b': 2}, {'c': 3, 'd': 4}]
... I would like to move each value in each dictionary into a list:
[{'a': [1], 'b': [2]}, {'c': [3], 'd': [4]}]
in a one-liner if practical.
Given a list l of dicts:
[{k:[d[k]] for k in d} for d in l]
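A quick demo with the input from the question:
>>> l = [{'a': 1, 'b': 2}, {'c': 3, 'd': 4}]
>>> [{k: [d[k]] for k in d} for d in l]
[{'a': [1], 'b': [2]}, {'c': [3], 'd': [4]}]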
Let's say I have a list of dictionaries:
>>> d = [{'a': 2, 'b': 3, 'c': 4}, {'a': 5, 'b': 6, 'c': 7}]
And I want to perform a map operation where I change just one value in each dictionary. One possible way to do that is to create a new dictionary which simply contains the original values along with the changed ones:
>>> map(lambda x: {'a': x['a'], 'b': x['b'] + 1, 'c': x['c']}, d)
[{'a': 2, 'c': 4, 'b': 4}, {'a': 5, 'c': 7, 'b': 7}]
This can get unruly if the dictionaries have many items.
Another way might be to define a function which copies the original dictionary and only changes the desired values:
>>> def change_b(x):
... new_x = x.copy()
... new_x['b'] = x['b'] + 1
... return new_x
...
>>> map(change_b, d)
[{'a': 2, 'c': 4, 'b': 4}, {'a': 5, 'c': 7, 'b': 7}]
This, however, requires writing a separate function and loses the elegance of a lambda expression.
Is there a better way?
This works (and is compatible with Python 2 and Python 3¹):
>>> map(lambda x: dict(x, b=x['b']+1), d)
[{'a': 2, 'c': 4, 'b': 4}, {'a': 5, 'c': 7, 'b': 7}]
With that said, I think that more often than not, lambda-based solutions are less elegant than their non-lambda counterparts... The rationale behind this statement is that I can immediately look at the non-lambda solution you proposed and know exactly what it does, whereas the lambda-based solution I just wrote takes a bit of thinking to parse and then more thinking to actually understand...
¹ Though map will give you an iterable object on Python 3.x that isn't a list...
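If you do need a list on Python 3.x, wrapping the map in list() (or switching to a list comprehension) materializes it:
>>> list(map(lambda x: dict(x, b=x['b'] + 1), d))
[{'a': 2, 'b': 4, 'c': 4}, {'a': 5, 'b': 7, 'c': 7}]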
First, writing a function doesn't seem that inelegant to me in the first place. That said, welcome to the brave new world of Python 3.5 and PEP 448:
>>> d = [{'a': 2, 'b': 3, 'c': 4}, {'a': 5, 'b': 6, 'c': 7}]
>>> d
[{'b': 3, 'a': 2, 'c': 4}, {'b': 6, 'a': 5, 'c': 7}]
>>> [{**x, 'b': x['b']+1} for x in d]
[{'b': 4, 'a': 2, 'c': 4}, {'b': 7, 'a': 5, 'c': 7}]
From how your map is behaving, it's clear you're using Python 2, but that's easy enough to fix. :-)
You can loop over the dicts and call update on each. Here is a hacky one-liner version:
dcts = [{'a': 2, 'b': 3, 'c': 4}, {'a': 5, 'b': 6, 'c': 7}]
dcts = [d.update({'b': d['b']+1}) or d for d in dcts]
Edit: To preserve original dicts:
from copy import copy
dcts = [d.update({'b': d['b']+1}) or d for d in map(copy, dcts)]
I want to convert a list of lists to a list of dictionaries. I have a way to do it but I suspect there's a better way:
t = [[1,2,3], [4,5,6]]
keys = ['a', 'b', 'c']
[{keys[0]:l[0], keys[1]:l[1], keys[2]:l[2]} for l in t]
with output
[{'a': 1, 'c': 3, 'b': 2}, {'a': 4, 'c': 6, 'b': 5}]
This could be done with a loop, but I bet there's a function to do it even easier. From this answer I'm guessing there's a way to do it with the map command, but I'm not quite sure how.
You can use list comprehension with the dict() constructor and zip:
[dict(zip(keys, l)) for l in t]
Demo
>>> d = [dict(zip(keys, l)) for l in t]
>>> d
[{'a': 1, 'c': 3, 'b': 2}, {'a': 4, 'c': 6, 'b': 5}]
It can also be solved with a dictionary comprehension, this way:
>>> [{k:v for k,v in zip(keys, l)} for l in t]
[{'c': 3, 'b': 2, 'a': 1}, {'c': 6, 'b': 5, 'a': 4}]
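Since the question mentions map: the same zip idea can be written with map as well; on Python 3 you need list() to materialize the result:
>>> list(map(lambda l: dict(zip(keys, l)), t))
[{'a': 1, 'b': 2, 'c': 3}, {'a': 4, 'b': 5, 'c': 6}]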
I have a list of dicts:
d =[{'a': 4}, {'b': 20}, {'c': 5}, {'d': 3}]
I want to remove the curly braces and convert d to a single dict which looks like:
d ={'a': 4, 'b': 20, 'c': 5, 'd': 3}
If you don't mind duplicate keys replacing earlier keys you can use:
from functools import reduce # Python 3 compatibility
d = reduce(lambda a, b: dict(a, **b), d)
This merges the first two dictionaries then merges each following dictionary into the result built so far.
Demo:
>>> d =[{'a': 4}, {'b': 20}, {'c': 5}, {'d': 3}]
>>> reduce(lambda a, b: dict(a, **b), d)
{'a': 4, 'c': 5, 'b': 20, 'd': 3}
Or, if you need this to work for arbitrary (non-string) keys (and you are using Python 3.5 or greater):
>>> d =[{4: 4}, {20: 20}, {5: 5}, {3: 3}]
>>> reduce(lambda a, b: dict(a, **b), d)  # This won't work
TypeError: keywords must be strings
>>> reduce(lambda a, b: {**a, **b}, d) # Use this instead
{4: 4, 20: 20, 5: 5, 3: 3}
The first solution hacks the behaviour of keyword arguments to the dict function. The second solution uses the more general ** unpacking introduced in Python 3.5.
You just need to iterate over d and merge each element into a new dict, e.g. newD, with update().
d =[{'a': 4}, {'b': 20}, {'c': 5}, {'d': 3}]
newD = {}
for entry in d:
    newD.update(entry)
>>> newD
{'c': 5, 'b': 20, 'a': 4, 'd': 3}
Note: If there are duplicate keys in d, the value from the last one will appear in newD.
Overwriting the values of existing keys, a blunt, beginner-style solution is
nd = {}
for el in d:
    for k, v in el.items():
        nd[k] = v
or, written as a dictionary comprehension:
d = {k:v for el in d for k,v in el.items()}
a = [{'a': 4}, {'b': 20}, {'c': 5}, {'d': 3}]
b = {}
[b.update(c) for c in a]
# b == {'a': 4, 'b': 20, 'c': 5, 'd': 3}
If order is important:
from collections import OrderedDict
a = [{'a': 4}, {'b': 20}, {'c': 5}, {'d': 3}]
newD = OrderedDict()
[newD.update(c) for c in a]
out = dict(newD)
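On Python 3.7+ plain dicts preserve insertion order, so the OrderedDict detour is only needed on older versions; the same update loop over a regular dict keeps the order:
a = [{'a': 4}, {'b': 20}, {'c': 5}, {'d': 3}]
out = {}
for c in a:
    out.update(c)
# out == {'a': 4, 'b': 20, 'c': 5, 'd': 3}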