Are dict comprehensions evaluated incrementally in Python? - python

I'd have assumed the results of purge and purge2 would be the same in the following code (remove duplicate elements, keeping the first occurrences and their order):
def purge(a):
    l = []
    return (l := [x for x in a if x not in l])

def purge2(a):
    d = {}
    return list(d := {x: None for x in a if x not in d})
t = [2,5,3,7,2,6,2,5,2,1,7]
print(purge(t), purge2(t))
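# prints: [2, 5, 3, 7, 2, 6, 2, 5, 2, 1, 7] [2, 5, 3, 7, 6, 1]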
But it looks as if with dict comprehensions, unlike with lists, the value of d is built incrementally. Is this what's actually happening? Do I correctly infer the semantics of dict comprehensions from this sample code, and their difference from list comprehensions? Does it work only with comprehensions, or also with other right-hand sides that refer to the dictionary being assigned to (e.g. comprehensions nested inside other expressions, something involving iterators, comprehensions of types other than dict)? Where is this specified, so that the full semantics can be consulted? Or is it just undocumented behaviour of the implementation, not to be relied upon?

There's nothing "incremental" going on here. The walrus operator doesn't assign to the variable until the dictionary comprehension has completed, so the check if x not in d refers to the original empty dictionary, not to the dictionary being built by the comprehension, just as the list version's x not in l refers to the original l. (This is the assignment-expression semantics specified in PEP 572; nothing there makes the target visible part-way through evaluating the right-hand side.)
The reason the duplicates are filtered out is simply that dictionary keys are always unique: writing a duplicate key doesn't add a second entry, it just overwrites the value stored under the existing key, which with None values everywhere is indistinguishable from ignoring the duplicate. It's the same as if you'd written
return {2: None, 2: None}
where you'd just get {2: None}.
So your function can be simplified to
def purge2(a):
    return list({x: None for x in a})
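As a quick sanity check, the two functions really do print different results. And since dicts officially preserve insertion order as of Python 3.7, dict.fromkeys gives you the same order-preserving dedupe without a comprehension; a minimal sketch:
t = [2, 5, 3, 7, 2, 6, 2, 5, 2, 1, 7]
print(purge(t))   # [2, 5, 3, 7, 2, 6, 2, 5, 2, 1, 7]  (l stays empty, nothing is filtered)
print(purge2(t))  # [2, 5, 3, 7, 6, 1]  (keys dedupe, first occurrences kept)
print(list(dict.fromkeys(t)))  # [2, 5, 3, 7, 6, 1]  (the idiomatic spelling)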

Related

how to remove duplicate list in the values of a dictionary

i have a dictionary
dictionary = {
    1: [[1,2],[3,4],[5,6],[7,8],[1,2]],
    2: [[5,6],[7,8],[1,2]],
    3: [[3,4],[5,6],[3,4]]
}
How can i remove duplicate list in each value of the dictionary?
output = {
    1: [[3,4],[5,6],[7,8],[1,2]],
    2: [[5,6],[7,8],[1,2]],
    3: [[3,4],[5,6]]
}
How can i remove all duplicates?
output = [[1,2],[3,4],[5,6],[7,8]]
i have tried doing for loops, like so:
for i in dictionary.values():
    for j in i:
        for k in i:
            if j == k:
                i.remove(k)
but i'm just a beginner so i'm not getting any results...
The usual way to do this is to leverage a set, which is like a dictionary that has only keys and no values. Dictionaries (and sets) rely on their keys being "hashable," which means that you can feed the key through some hash function and get the same result every time. In Python you can call this hash function with hash(some_object), which internally invokes some_object.__hash__().
The problem with this approach is that lists are not hashable. No mutable objects (things you can change with methods like list.append, set.add or dict.update) are. This means you must either check equality by hand, or convert each list into some form that is hashable, use the set, and then convert it back. I think the latter is probably your best bet.
To that end, let's use a tuple. Tuples are just like lists except that they are not idiomatically homogeneous (so mixing types is common, not just technically allowed) and their order carries semantic meaning. Consider an ordered pair on a plane: it would matter deeply if the order flipped, since (1, 4) is not the same point as (4, 1). They are, however, immutable and hashable.
d = {1: [[1,2],[3,4],[5,6],[7,8],[1,2]],
     2: [[5,6],[7,8],[1,2]],
     3: [[3,4],[5,6],[3,4]]}
# we'll use a set comprehension here because it's concise
uniques = {tuple(sublst) for lst in d.values() for sublst in lst}
result = [list(tup) for tup in uniques] # then just change them back to lists
Note that the conversion to a set and back loses all ordering. If ordering is important, you'll have to iterate through every sublist, convert it to a tuple, check whether it has already been seen, and if not, add it to the seen set and append the sublist to your final list:
d = {1: [[1,2],[3,4],[5,6],[7,8],[1,2]],
     2: [[5,6],[7,8],[1,2]],
     3: [[3,4],[5,6],[3,4]]}

seen = set()
result = []
for lst in d.values():
    for sublst in lst:
        tup = tuple(sublst)
        if tup not in seen:
            seen.add(tup)
            result.append(sublst)
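The question also asks for de-duplication within each value separately; a minimal sketch reusing the same seen-set idea per key (note this keeps first occurrences, whereas the sample output for key 1 appears to keep last ones):
def dedupe(lst):
    # keep the first occurrence of each sublist, preserving order
    seen = set()
    out = []
    for sublst in lst:
        tup = tuple(sublst)
        if tup not in seen:
            seen.add(tup)
            out.append(sublst)
    return out

output = {k: dedupe(v) for k, v in d.items()}
# {1: [[1, 2], [3, 4], [5, 6], [7, 8]], 2: [[5, 6], [7, 8], [1, 2]], 3: [[3, 4], [5, 6]]}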

python dictionary: how does appending of items work? [duplicate]

This question already has answers here:
How do I initialize a dictionary of empty lists in Python?
(7 answers)
Closed 2 years ago.
I came across this behavior that surprised me in Python 2.6 and 3.2:
>>> xs = dict.fromkeys(range(2), [])
>>> xs
{0: [], 1: []}
>>> xs[0].append(1)
>>> xs
{0: [1], 1: [1]}
However, dict comprehensions in 3.2 show a more polite demeanor:
>>> xs = {i:[] for i in range(2)}
>>> xs
{0: [], 1: []}
>>> xs[0].append(1)
>>> xs
{0: [1], 1: []}
>>>
Why does fromkeys behave like that?
Your Python 2.6 example is equivalent to the following, which may help to clarify:
>>> a = []
>>> xs = dict.fromkeys(range(2), a)
Each entry in the resulting dictionary will have a reference to the same object. The effects of mutating that object will be visible through every dict entry, as you've seen, because it's one object.
>>> xs[0] is a and xs[1] is a
True
Use a dict comprehension, or if you're stuck on Python 2.6 or older and you don't have dictionary comprehensions, you can get the dict comprehension behavior by using dict() with a generator expression:
xs = dict((i, []) for i in range(2))
In the first version, you use the same empty list object as the value for both keys, so if you change one, you change the other, too.
Look at this:
>>> empty = []
>>> d = dict.fromkeys(range(2), empty)
>>> d
{0: [], 1: []}
>>> empty.append(1) # same as d[0].append(1) because d[0] references empty!
>>> d
{0: [1], 1: [1]}
In the second version, a new empty list object is created in every iteration of the dict comprehension, so both are independent from each other.
As to "why" fromkeys() works like that - well, it would be surprising if it didn't work like that. fromkeys(iterable, value) constructs a new dict with keys from iterable that all have the value value. If that value is a mutable object, and you change that object, what else could you reasonably expect to happen?
To answer the actual question being asked: fromkeys behaves like that because there is no other reasonable choice. It is not reasonable (or even possible) to have fromkeys decide whether or not your argument is mutable and make new copies every time. In some cases it doesn't make sense, and in others it's just impossible.
The second argument you pass in is therefore just a reference, and is copied as such. An assignment of [] in Python means "a single reference to a new list", not "make a new list every time I access this variable". The alternative would be to pass in a function that generates new instances, which is the functionality that dict comprehensions supply for you.
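There is no stdlib variant of fromkeys that takes such a factory, but one is trivial to write; a hypothetical helper for illustration (the name fromkeys_factory is mine, not a real API):
def fromkeys_factory(keys, factory):
    # call the factory once per key, so every value is a fresh object
    return {k: factory() for k in keys}

xs = fromkeys_factory(range(2), list)
xs[0].append(1)  # only xs[0] changes: {0: [1], 1: []}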
Here are some options for creating multiple actual copies of a mutable container:
As you mention in the question, dict comprehensions let you evaluate an arbitrary expression for each element:
d = {k: [] for k in range(2)}
The important thing here is that this is equivalent to putting the assignment d[k] = [] inside a for loop. Each iteration creates a new list and assigns it as a value.
Use the form of the dict constructor suggested by @Andrew Clark:
d = dict((k, []) for k in range(2))
This creates a generator expression that again builds a new list for each key-value pair as it is consumed.
Use a collections.defaultdict instead of a regular dict:
import collections

d = collections.defaultdict(list)
This option is a little different from the others. Instead of creating the new list references up front, defaultdict calls list every time you access a key that isn't already there. You can therefore add keys as lazily as you want, which can be very convenient sometimes:
for k in range(2):
    d[k].append(42)
Since you've set up the factory for new elements, this will actually behave exactly as you expected fromkeys to behave in the original question.
Use dict.setdefault when you access potentially new keys. This does something similar to what defaultdict does, but it is more controlled, in the sense that only the accesses you intend to create new keys actually create them:
d = {}
for k in range(2):
    d.setdefault(k, []).append(42)
The disadvantage is that a new empty list object gets created every time you call setdefault, even if it never gets used as a value. This is not a huge problem, but it could add up if you call it frequently and/or your container is not as cheap to construct as a list.
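To make that concrete: arguments are evaluated before the call, so the default list below is constructed even though the key already exists and the fresh list is immediately discarded:
d = {0: [1]}
d.setdefault(0, []).append(2)  # the [] is built, then discarded: key 0 already exists
print(d)  # {0: [1, 2]}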

python: union keys from multiple dictionary?

I have 5 dictionaries and I want a union of their keys.
alldict = [dict1, dict2, dict3, dict4, dict5]
I tried
allkey = reduce(lambda x, y: set(x.keys()).union(y.keys()), alldict)
but it gave me an error
AttributeError: 'set' object has no attribute 'keys'
Am I doing it wrong? I'm using a normal for loop instead, but I wonder why the above code didn't work.
I think @chuck already answered the question of why it doesn't work, but a simpler way to do this would be to remember that the union method can take multiple arguments:
allkey = set().union(*alldict)
does what you want without any loops or lambdas.
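For example, with a couple of throwaway dicts (hypothetical sample data):
dict1 = {'a': 1, 'b': 2}
dict2 = {'b': 3, 'c': 4}
alldict = [dict1, dict2]
allkey = set().union(*alldict)  # iterating a dict yields its keys
print(allkey)  # {'a', 'b', 'c'} (in some order)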
Your solution works for the first two elements in the list, but then dict1 and dict2 get reduced into a set, and that set is passed into your lambda as x. So now x no longer has a keys() method.
The solution is to make x a set from the very beginning, by initializing the reduction with an empty set (which happens to be the neutral element of the union).
Try it with an initializer:
allkey = reduce(lambda x, y: x.union(y.keys()), alldict, set())
An alternative without any lambdas would be:
allkey = reduce(set.union, map(set, map(dict.keys, alldict)))
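Note that in Python 3, reduce is no longer a builtin and has to be imported first:
from functools import reduce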
A simple strategy for non-functional neurons (pun intended):
allkey = []
for dictio in alldict:
    for key in dictio:
        allkey.append(key)
allkey = set(allkey)
We can convert this code to a much shorter form using a set comprehension:
allkey = {key for dictio in alldict for key in dictio}
This one-liner is still very readable in comparison with the conventional for loop.
The key to converting a nested loop into a list or set comprehension is to write the inner loop (the one that varies fastest) as the last for clause (that is, for key in dictio).
set().union(dict1.keys(),dict2.keys()...)
I tried the list and it didn't work, so just putting it up here for anyone.
Just one more way, 'cause what the hay:
a={}; [ a.update(b) for b in alldict ] and a.keys()
or the slightly-more-mysterious (dict.update returns None, so the or falls through to a)
reduce(lambda a, b: a.update(b) or a, alldict, {}).keys()
(I'm bummed that there's no built-in function equivalent to
def f(a, b):
    r = {}
    r.update(a)
    r.update(b)
    return r
is there?)
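Since Python 3.9 there is: PEP 584 added a merge operator for dicts, so the function above can be spelled as an operator:
# Python 3.9+ (PEP 584): new dict with dict2's values winning on key clashes,
# equivalent to f(dict1, dict2) above
merged = dict1 | dict2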
If you only want to union the keys of 2 dicts, you can use the | operator on their key views.
Quote from docs:
Return a new set with elements from the set and all others.
Example:
all_keys = (dict1.keys() | dict2.keys())
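Key views are set-like, so this extends to more dicts by chaining (assuming a third dict3 for illustration):
all_keys = dict1.keys() | dict2.keys() | dict3.keys()  # a set of all keys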
