I have two very large lists, and looping through one of them once takes at least a second; I need to do it 200,000 times. What's the fastest way to remove duplicates across two lists to form one?
This is the fastest way I can think of:
import itertools
output_list = list(set(itertools.chain(first_list, second_list)))
Slight update: As jcd points out, depending on your application, you probably don't need to convert the result back to a list. Since a set is iterable by itself, you might be able to just use it directly:
output_set = set(itertools.chain(first_list, second_list))
for item in output_set:
    # do something
Beware though that any solution involving the use of set() will probably reorder the elements in your list, so there's no guarantee that elements will be in any particular order. That said, since you're combining two lists, it's hard to come up with a good reason why you would need a particular ordering over them anyway, so this is probably not something you need to worry about.
I'd recommend something like this:
def combine_lists(list1, list2):
    s = set(list1)
    s.update(list2)
    return list(s)
This eliminates the problem of creating a monster list of the concatenation of the first two.
Depending on what you're doing with the output, you may not need to convert back to a list at all. If ordering is important, you might need some sort of decorate/sort/undecorate shenanigans around this.
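If the order of first appearance does matter, one order-preserving alternative (a sketch, not part of the answer above) is to track seen items while chaining the two lists:
import itertools

def combine_lists_ordered(list1, list2):
    # Keep the first occurrence of each element, preserving input order.
    seen = set()
    combined = []
    for item in itertools.chain(list1, list2):
        if item not in seen:
            seen.add(item)
            combined.append(item)
    return combined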
As Daniel states, a set cannot contain duplicate entries - so concatenate the lists:
list1 + list2
Then convert the new list to a set:
set(list1 + list2)
Then back to a list:
list(set(list1 + list2))
result = list(set(list1).union(set(list2)))
That's how I'd do it. I am not so sure about performance, though, but it is certainly better than doing it by hand.
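As a side note (not stated in the answer, but documented behaviour): set.union() accepts any iterable, so the second conversion isn't strictly needed:
result = list(set(list1).union(list2))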
According to my research there are two easy ways to remove duplicates from a list:
a = list(dict.fromkeys(a))
and
a = list(set(a))
Is one of them more efficient than the other?
Definitely the second one is more efficient: sets are designed for exactly this purpose, and you skip the overhead of creating a dict, which is heavier.
Performance-wise, though, it definitely depends on what the payload actually is.
import timeit
import random
input_data = [random.choice(range(100)) for i in range(1000)]
from_keys = timeit.timeit('list(dict.fromkeys(input_data))', number=10000, globals={'input_data': input_data})
from_set = timeit.timeit('list(set(input_data))', number=10000, globals={'input_data': input_data})
print(f"From keys performance: {from_keys:.3f}")
print(f"From set performance: {from_set:.3f}")
Prints:
From keys performance: 0.230
From set performance: 0.140
That doesn't mean set() will always be nearly twice as fast; the absolute difference here is small and depends on the data. Try it for yourself with different random data.
The second answer is better not only because it's faster, but because it shows the programmer's intent more clearly. set() is designed specifically to model mathematical sets, in which elements cannot be duplicated, so it fits this purpose and the intent is clear to the reader. dict(), on the other hand, is for storing key-value pairs and says nothing about the intent.
In case we have a list such as a = [1,16,2,3,4,5,6,8,10,3,9,15,7] and we use a = list(set(a)), set() will drop the duplicates but may also reorder our list; with CPython's hashing of small integers the new list typically comes out as [1,2,3,4,5,6,7,8,9,10,15,16]. If we use a = list(dict.fromkeys(a)) instead, dict.fromkeys() drops the duplicates and keeps the elements in their original order: [1,16,2,3,4,5,6,8,10,9,15,7].
To sum things up: if you're looking for a way to drop duplicates from a list without caring about order, set() is what you're looking for; but if keeping the order of the list is required, use dict.fromkeys().
CAUTION: since Python 3.7 the keys of a dict are ordered.
So the first form that uses
list(dict.fromkeys(a)) # preserves order!!
preserves the order while using the set will potentially (and probably) change the order of the elements of the list 'a'.
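A quick way to see the difference, using the example list from above (the ordering that set() produces is not guaranteed; it depends on the hash values):
a = [1, 16, 2, 3, 4, 5, 6, 8, 10, 3, 9, 15, 7]

print(list(dict.fromkeys(a)))  # [1, 16, 2, 3, 4, 5, 6, 8, 10, 9, 15, 7] -- input order kept
print(list(set(a)))            # element order depends on the set's internal hashing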
I quite often use set() to remove duplicates from lists. After doing so, I always directly change it back to a list.
a = [0,0,0,1,2,3,4,5]
b = list(set(a))
Why does set() return a set object, instead of simply a list?
type(set(a)) == set # is true
Is there a use for set items that I have failed to understand?
Yes, sets have many uses. They have lots of nice operations documented here which lists don't have. One very useful difference is that membership testing (x in a) can be much faster than for a list.
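For a rough sense of that difference, here is a small timeit sketch (the sizes and numbers are arbitrary; exact results depend on your machine):
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Worst case for the list: the element we look for is at the very end.
print(timeit.timeit('99_999 in data_list', number=1_000, globals={'data_list': data_list}))
print(timeit.timeit('99_999 in data_set', number=1_000, globals={'data_set': data_set}))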
Okay, by doubles you mean duplicates? set() will always return a set because it is a data structure in Python, just like lists; when you call set() you are creating a set object.
The rest of the information about sets can be found here:
https://docs.python.org/2/library/sets.html
As already mentioned, I won't go into why set does not return a list, but as you stated:
I quite often use set() to remove doubles from lists. After doing so, I always directly change it back to a list.
You could use OrderedDict if you really hate converting back to a list:
source_list = [0,0,0,1,2,3,4,5]
from collections import OrderedDict
print(OrderedDict((x, True) for x in source_list).keys())
OUTPUT:
odict_keys([0, 1, 2, 3, 4, 5])
As said before, for certain operations a set is faster than a list. The Python wiki has a page, TimeComplexity, which lists the speed of operations for the various data types. Note that if you have only a few elements in your list or set, you will most likely not notice a difference, but with more elements it becomes more important.
Notice, for example, that in-place removal from a list is O(n), meaning that a list 10 times longer needs roughly 10 times more time, whereas for sets, s.difference_update(t) (where s is a set and t is a set with one element to be removed from s) runs in O(1), i.e. independently of the number of elements in s.
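A small sketch of the two operations side by side (the names here are illustrative):
l = list(range(10))
s = set(range(10))

l.remove(5)               # list removal is O(n): later elements must be shifted
s.difference_update({5})  # set removal cost does not depend on len(s)

print(l)  # [0, 1, 2, 3, 4, 6, 7, 8, 9]
print(s)  # 5 removed; element order within a set is arbitrary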
Is there an operator to remove elements from a List based on the content of a Set?
What I want to do is already possible by doing this:
words = ["hello", "you", "how", "are", "you", "today", "hello"]
my_set = {"you", "are"}
new_list = [w for w in words if w not in my_set]
# ["hello", "how", "today", "hello"]
What bothers me with this list comprehension is that for huge collections, it looks less effective to me than the - operator that can be used between two sets. Because in the list comprehension, the iteration happens in Python, whereas with the operator, the iteration happens in C and is more low-level, hence faster.
So is there some way of computing a difference between a List and a Set in a shorter/cleaner/more efficient way than using a list comprehension, like for example:
# I know this is not possible, but does something approaching exist?
new_list = words - my_set
TL;DR
I'm looking for a way to remove all element presents in a Set from a List, that is either:
cleaner (with a built-in perhaps)
and/or more efficient
than what I know can be done with list comprehensions.
Unfortunately, the only answer for this is: No, there is no built-in way, implemented in native code, for this kind of operation.
What bothers me with this list comprehension is that for huge collections, it looks less effective to me than the - operator that can be used between two sets.
I think what’s important here is the “looks” part. Yes, list comprehensions run more within Python than a set difference, but I assume that most of your application actually runs within Python (otherwise you should probably be programming in C instead). So you should consider whether it really matters much. Iterating over a list is fast in Python, and a membership test on a set is also super fast (constant time, and implemented in native code). And if you look at list comprehensions, they are also very fast. So it probably won’t matter much.
Because in the list comprehension, the iteration happens in Python, whereas with the operator, the iteration happens in C and is more low-level, hence faster.
It is true that native operations are faster, but they are also more specialized, limited and allow for less flexibility. For sets, a difference is pretty easy. The set difference is a mathematical concept and is very clearly defined.
But when talking about a “list difference” or a “list and set difference” (or more generalized, a “list and iterable difference”?) it becomes a lot more unclear. There are a lot of open questions:
How are duplicates handled? If there are two X in the original list and only one X in the subtrahend, should both X disappear from the list? Should only one disappear? If so, which one, and why?
How is order handled? Should the order be kept as in the original list? Does the order of the elements in the subtrahend have any impact?
What if we want to subtract members based on some other condition than equality? For sets, it’s clear that they always work on the equality (and hash value) of the members. Lists don’t, so lists are by design a lot more flexible. With list comprehensions, we can easily have any kind of condition to remove elements from a list; with a “list difference” we would be restricted to equality, and that might actually be a rare situation if you think about it.
It may be more natural to use a set in the first place if you need to calculate differences (or even some kind of ordered set). And for filtering lists, it might also be a rare case that you want to end up with a filtered list; it might be more common to use a generator expression (or the Python 3 filter() function) and work with that later, without having to create the filtered list in memory.
What I’m trying to say is that the use case for a list difference is not as clear as for a set difference, and if there is a use case, it is probably a rare one. In general, I don’t think it’s worth adding complexity to the Python implementation for this, especially when the in-Python alternative, e.g. a list comprehension, is as fast as it already is.
First things first: are you prematurely worrying about an optimisation problem that isn't really an issue? I have to have lists with at least 10,000,000 elements before this operation even gets into the range of taking tenths of a second.
If you're working with large data sets then you may find it advantageous to move to using numpy.
import random
import timeit
r = range(10000000)
setup = """
import numpy as np
l = list({!r})
s = set(l)
to_remove = {!r}
n = np.array(l)
n_remove = np.array(list(to_remove))
""".format(r, set(random.sample(r, 3)))
list_filter = "[x for x in l if x not in to_remove]"
set_filter = "s - to_remove"
np_filter = "n[np.in1d(n, n_remove, invert=True)]"
n = 1
l_time = timeit.timeit(list_filter, setup, number=n)
print("lists:", l_time)
s_time = timeit.timeit(set_filter, setup, number=n)
print("sets:", s_time)
n_time = timeit.timeit(np_filter, setup, number=n)
print("numpy:", n_time)
returns the following results -- with numpy noticeably faster than sets and an order of magnitude faster than the list comprehension.
lists: 0.8743789765043315
sets: 0.20703006886620656
numpy: 0.06197169088128707
I agree with poke. Here is my reasoning:
The easiest way to do it would be using a filter:
words = ["hello", "you", "how", "are", "you", "today", "hello"]
my_set = {"you", "are"}
new_list = filter(lambda w: w not in my_set, words)
And using Dunes' solution, I get these times:
lists: 0.87401028
sets: 0.55103887
numpy: 0.16134396
filter: 0.00000886 WOW, beats numpy by several orders of magnitude!!!
But wait, we are making a flawed comparison because we are comparing the time of making a list strictly (comprehension and set difference) vs. lazily (numpy and filter).
If I run Dunes' solution but produce the actual lists, I get:
lists: 0.86804159
sets: 0.56945663
numpy: 1.19315723
filter: 1.68792561
Now numpy is slightly more efficient than a simple filter, but neither beats the list comprehension, which was the first and most intuitive solution.
I would definitely use a filter over the comprehension, except if I need to use the filtered list more than once (although I could tee it).
Given a list of objects, where each has a property named x, I want to remove from the list all the objects whose x property has value v.
One way to do it is to use a list comprehension: [item for item in mylist if item.x != v]. Another way is to iterate through the list in a loop and check every single item. My list is small, though (usually fewer than 10 elements).
Is there a third way that is equally fast or even faster?
You can also use a generator or the filter function. Choose what you find the most readable; efficiency doesn't really matter at this point (especially not if you're dealing with just a few elements).
Create a new list using list comprehension syntax. I don't think you can do anything faster than that. It doesn't matter that your list is small; that's even better.
I have a Python list of objects that could be pretty long. At particular times, I'm interested in all of the elements in the list that have a certain attribute, say flag, that evaluates to False. To do so, I've been using a list comprehension, like this:
objList = list()
# ... populate list
[x for x in objList if not x.flag]
Which seems to work well. After forming the sublist, I have a few different operations that I might need to do:
1. Subscript the sublist to get the element at index ind.
2. Calculate the length of the sublist (i.e. the number of elements that have flag == False).
3. Search the sublist for the first instance of a particular object (i.e. using the list's .index() method).
I've implemented these using the naive approach of just forming the sublist and then using its methods to get at the data I want. I'm wondering if there are more efficient ways to go about these. #1 and #3 at least seem like they could be optimized, because in #1 I only need the first ind + 1 matching elements of the sublist, not necessarily the entire result set, and in #3 I only need to search through the sublist until I find a matching element.
Is there a good Pythonic way to do this? I'm guessing I might be able to use the () syntax in some way to get a generator instead of creating the entire list, but I haven't happened upon the right way yet. I obviously could write loops manually, but I'm looking for something as elegant as the comprehension-based method.
If you need to do any of these operations a couple of times, the overhead of the other methods will be higher; the list is the best way. It's also probably the clearest, so if memory isn't a problem, then I'd recommend just going with it.
If memory/speed is a problem, then there are alternatives - note that speed-wise, these might actually be slower, depending on the common case for your software.
For your scenarios:
from itertools import dropwhile, islice

# nth() recipe from the itertools docs: return the nth item of an iterable, or a default value
def nth(iterable, n, default=None):
    return next(islice(iterable, n, None), default)

#value = sublist[n]
value = nth((x for x in objList if not x.flag), n)
#value = len(sublist)
value = sum(not x.flag for x in objList)
#value = sublist.index(target)
value = next(dropwhile(lambda x: x != target, (x for x in objList if not x.flag)))
Using itertools.dropwhile() and the nth() recipe from the itertools docs.
I'm going to assume you might do any of these three things, and you might do them more than once.
In that case, what you want is basically to write a lazily evaluated list class. It would keep two pieces of data, a real list cache of evaluated items, and a generator of the rest. You could then do ll[10] and it would evaluate up to the 10th item, ll.index('spam') and it would evaluate until it finds 'spam', and then len(ll) and it would evaluate the rest of the list, all the while caching in the real list what it sees so nothing is done more than once.
Constructing it would look like this:
LazyList(x for x in obj_list if not x.flag)
But nothing would actually be computed until you actually start using it as above.
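A minimal sketch of what such a class could look like (only the three operations from the question are covered; the details are my own, not from the answer):
class LazyList:
    def __init__(self, iterable):
        self._cache = []               # items evaluated so far
        self._source = iter(iterable)  # the not-yet-evaluated remainder

    def _fill_to(self, n):
        # Pull items from the source until the cache holds at least n + 1 of them.
        while len(self._cache) <= n:
            try:
                self._cache.append(next(self._source))
            except StopIteration:
                raise IndexError("LazyList index out of range") from None

    def __getitem__(self, index):
        self._fill_to(index)
        return self._cache[index]

    def index(self, value):
        # Check what has already been evaluated, then keep evaluating until found.
        for i, item in enumerate(self._cache):
            if item == value:
                return i
        for item in self._source:
            self._cache.append(item)
            if item == value:
                return len(self._cache) - 1
        raise ValueError("value is not in LazyList")

    def __len__(self):
        # Taking the length forces evaluation of everything that is left.
        self._cache.extend(self._source)
        return len(self._cache)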
Since you commented that your objList can change, if you don't also need to index or search objList itself, then you might be better off just storing two different lists, one with .flag = True and one with .flag = False. Then you can use the second list directly instead of constructing it with a list comprehension each time.
If this works in your situation, it is likely the most efficient way to do it.
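A minimal sketch of that two-list idea, assuming objList, ind and target mean the same as in the question (the names flagged/unflagged are illustrative):
flagged, unflagged = [], []
for obj in objList:
    (flagged if obj.flag else unflagged).append(obj)

# The three operations from the question then work on unflagged directly:
value = unflagged[ind]
count = len(unflagged)
position = unflagged.index(target)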