I have two lists:
l1: 38510 entries
l2: 6384 entries
I want to extract only the values that are present in both lists.
So far that was my approach:
equals = []
for quote in l2:
    for quote2 in l1:
        if quote == quote2:
            equals.append(quote)
len(equals)       # 4999
len(set(equals))  # 4452
First of all, I have the feeling this approach is pretty inefficient, because I am checking every value in l1 several times.
Furthermore, it seems that I still get duplicates. Is this due to the inner loop over l1?
Thank you!!
You can use list comprehension and the in operator.
a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
b = [2, 4, 6, 8, 0]
[x for x in a if x in b]
#[2, 4, 6, 8]
You were on the right track by using sets. One of set's coolest features is that you can get the intersection of two sets, which is another way of saying the values that occur in both. You can read more about it in the docs.
Here is my example:
l1_set = set(l1)
l2_set = set(l2)
equals = l1_set & l2_set
#If you really want it as a list
equals = list(equals)
print(equals)
The & operator tells Python to return a new set containing only the values present in both sets. At the end, I converted equals back to a list because that's what your original example used; you can omit that step if you don't need it.
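For example, with two small hypothetical lists (not the data from the question):
l1 = [1, 2, 3, 4, 5]
l2 = [4, 5, 6, 7]
print(set(l1) & set(l2))        # {4, 5}
print(list(set(l1) & set(l2)))  # [4, 5] (a set does not guarantee any particular order)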
1. This is the simplest method; it doesn't use set() or any other built-in helpers.
# Two lists in most simple way of showing the intersection
def intersection(list_one, list_two):
    temp_list = [value for value in list_one if value in list_two]
    return temp_list
# Illustrate the intersection
list_one = [4, 9, 1, 17, 11, 26, 28, 54, 69]
list_two = [9, 9, 74, 21, 45, 11, 63, 28, 26]
print(intersection(list_one, list_two))
# [9, 11, 26, 28]
2. You can use the python set() method.
# Two lists using set() method
def intersection(list_one, list_two):
    return list(set(list_one) & set(list_two))
# Illustrate the intersection
list_one = [15, 13, 123, 23, 31, 10, 3, 311, 738, 25, 124, 19]
list_two = [12, 14, 1, 15, 36, 123, 23, 3, 315, 87]
print(intersection(list_one, list_two))
# [123, 3, 23, 15]
3. In this technique, we use the built-in set method intersection() to compute the intersected list.
First, convert the larger list to a set, then compute the intersection.
# Two lists using set() and intersection()
def intersection_list(list_one, list_two):
    return list(set(list_one).intersection(list_two))
# Illustrate the intersection
list_one = [15, 13, 123, 23, 31, 10, 3, 311, 738, 25, 124, 19]
list_two = [12, 14, 1, 15, 36, 123, 23, 3, 315, 87, 978, 4, 13, 19, 20, 11]
if len(list_one) < len(list_two):
    list_one, list_two = list_two, list_one
print(intersection_list(list_one, list_two))
# [3, 13, 15, 19, 23, 123]
Additionally, you can follow the tutorials below:
Geeksforgeeks
docs.python.org
LearnCodingFast
Let's assume that all the entries in both of your lists are integers. If so, computing the intersection of the two lists with sets is more efficient than using a list comprehension:
import timeit
l1 = [i for i in range(0, 38510)]
l2 = [i for i in range(0, 6384)]
st1 = timeit.default_timer()
# Using list comprehension
l3 = [i for i in l1 if i in l2]
ed1 = timeit.default_timer()
# Using set
st2 = timeit.default_timer()
l4 = list(set(l1) & set(l2))
ed2 = timeit.default_timer()
print(ed1-st1) # 5.7621682 secs
print(ed2-st2) # 0.004478600000000554 secs
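A third variant worth timing (it is not part of the comparison above): convert l2 to a set once, then keep the list comprehension. Membership tests against the set are O(1) instead of O(n), and unlike the pure set version this preserves the order of l1:
# Using list comprehension with a set for the membership test
st3 = timeit.default_timer()
l2_set = set(l2)
l5 = [i for i in l1 if i in l2_set]
ed3 = timeit.default_timer()
print(ed3-st3)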
As you have such long lists, you might want to use numpy, which specializes in efficient array processing for Python.
For your case, you can use numpy.intersect1d() to get the sorted, unique values that are in both of the input arrays, as follows:
import numpy as np
l1 = [1, 3, 5, 10, 11, 12]
l2 = [2, 3, 4, 10, 12, 14, 16, 18]
l_uniques = np.intersect1d(l1, l2)
print(l_uniques)
[ 3 10 12]
You can keep the result as a numpy array for further fast processing, or convert it back to a Python list with:
l_uniques2 = l_uniques.tolist()
I'm trying to create a list of all the prime numbers less than or equal to a given number. I did that successfully using for loops. I was trying to achieve the same using list comprehension using python. But my output has some unexpected values.
Here is my code:
pr=[2]
pr+=[i for i in xrange(3,num+1) if not [x for x in pr if i%x==0]]
where num is the number I had taken as input from user.
The output of the above code for
num=20 is this: [2, 3, 5, 7, 9, 11, 13, 15, 17, 19]
I'm puzzled as to why 9 and 15 are there in the output. What am I doing wrong here?
It simply doesn’t work that way. List comprehensions are evaluated separately, so you can imagine it like this:
pr = [2]
tmp = [i for i in xrange(3,num+1) if not [x for x in pr if i%x==0]]
pr += tmp
By the time tmp is evaluated, pr only contains 2, so you only ever check whether a number is divisible by 2 (i.e. whether it's even). That's why you get all the odd numbers.
You simply can’t solve this nicely† using list comprehensions.
† Not nicely, but you can do it in an ugly, very hackish way, by abusing the fact that you can call functions inside a list comprehension:
pr = [2]
[pr.append(i) for i in xrange(3,num+1) if not [x for x in pr if i%x==0]]
print(pr) # [2, 3, 5, 7, 11, 13, 17, 19]
This abuses list comprehensions and basically collects a None value for each prime number you add to pr. So it’s essentially like your normal for loop except that we unnecessarily collect None values in a list… so you should rather allow yourself to use a line break and just use a normal loop.
Your list pr doesn't update until after your entire list comprehension is done. This means the list only contains 2 while the comprehension runs, so only numbers divisible by 2 get filtered out (as you can see). You should update the list whenever you find a new prime number.
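A minimal sketch of that fix (keeping your divisor test, but appending to pr inside a plain loop so the list grows as you go; num = 20 for illustration):
num = 20
pr = [2]
for i in xrange(3, num + 1):
    # pr already contains 3 by the time we reach 9, so 9 gets rejected
    if not [x for x in pr if i % x == 0]:
        pr.append(i)
print(pr)  # [2, 3, 5, 7, 11, 13, 17, 19]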
This is because pr += [...] is evaluated approximately like this:
pr = [2]
tmp = [i for i in xrange(3,num+1) if not [x for x in pr if i%x==0]]
pr.extend(tmp)
So while tmp is being generated, the contents of pr remain the same ([2]).
I would go with a function like this:
>>> import itertools
>>> def primes():
...     results = []
...     for i in itertools.count(2):
...         if all(i%x != 0 for x in results):
...             results.append(i)
...             yield i
...
# And then you can fetch first 10 primes
>>> list(itertools.islice(primes(), 10))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
# Or get primes smaller than X
>>> list(itertools.takewhile(lambda x: x < 50, primes()))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
Note that using all() is more efficient than building a list and then testing whether it's empty, because all() short-circuits as soon as a divisor is found.
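As a small illustration (not from the answer above), here is the same divisibility test written both ways; both give the same answer, but only the all() version stops early:
pr = [2, 3, 5, 7]
i = 15
print(not [x for x in pr if i % x == 0])  # False; builds the whole divisor list [3, 5] before testing
print(all(i % x != 0 for x in pr))        # False; stops as soon as it sees 15 % 3 == 0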
This question already has answers here:
How do I remove duplicates from a list, while preserving order?
For example:
>>> x = [1, 1, 2, 'a', 'a', 3]
>>> unique(x)
[1, 2, 'a', 3]
Assume list elements are hashable.
Clarification: The result should keep the first occurrence of each duplicated value. For example, [1, 2, 3, 2, 3, 1] becomes [1, 2, 3].
def unique(items):
    found = set()
    keep = []
    for item in items:
        if item not in found:
            found.add(item)
            keep.append(item)
    return keep
print unique([1, 1, 2, 'a', 'a', 3])
Using:
lst = [8, 8, 9, 9, 7, 15, 15, 2, 20, 13, 2, 24, 6, 11, 7, 12, 4, 10, 18, 13, 23, 11, 3, 11, 12, 10, 4, 5, 4, 22, 6, 3, 19, 14, 21, 11, 1, 5, 14, 8, 0, 1, 16, 5, 10, 13, 17, 1, 16, 17, 12, 6, 10, 0, 3, 9, 9, 3, 7, 7, 6, 6, 7, 5, 14, 18, 12, 19, 2, 8, 9, 0, 8, 4, 5]
And using the timeit module:
$ python -m timeit -s 'import uniquetest' 'uniquetest.etchasketch(uniquetest.lst)'
And so on for the various other functions (which I named after their posters). I have the following results (on my first-generation Intel MacBook Pro):
Allen: 14.6 µs per loop [1]
Terhorst: 26.6 µs per loop
Tarle: 44.7 µs per loop
ctcherry: 44.8 µs per loop
Etchasketch 1 (short): 64.6 µs per loop
Schinckel: 65.0 µs per loop
Etchasketch 2: 71.6 µs per loop
Little: 89.4 µs per loop
Tyler: 179.0 µs per loop
[1] Note that Allen modifies the list in place – I believe this has skewed the time, in that the timeit module runs the code 100000 times and 99999 of them are with the dupe-less list.
Summary: Straight-forward implementation with sets wins over confusing one-liners :-)
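To avoid that skew, one option is to pass a fresh copy of the list on every run, e.g. (assuming the in-place function is exposed as uniquetest.allen; the actual attribute name in the test module is a guess):
$ python -m timeit -s 'import uniquetest' 'uniquetest.allen(list(uniquetest.lst))'
The cost of the copy then gets included in the timing, but at least every run starts from the same input.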
Update: on Python 3.7+:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
old answer:
Here is the fastest solution so far (for the following input):
def del_dups(seq):
    seen = {}
    pos = 0
    for item in seq:
        if item not in seen:
            seen[item] = True
            seq[pos] = item
            pos += 1
    del seq[pos:]
lst = [8, 8, 9, 9, 7, 15, 15, 2, 20, 13, 2, 24, 6, 11, 7, 12, 4, 10, 18,
13, 23, 11, 3, 11, 12, 10, 4, 5, 4, 22, 6, 3, 19, 14, 21, 11, 1,
5, 14, 8, 0, 1, 16, 5, 10, 13, 17, 1, 16, 17, 12, 6, 10, 0, 3, 9,
9, 3, 7, 7, 6, 6, 7, 5, 14, 18, 12, 19, 2, 8, 9, 0, 8, 4, 5]
del_dups(lst)
print(lst)
# -> [8, 9, 7, 15, 2, 20, 13, 24, 6, 11, 12, 4, 10, 18, 23, 3, 5, 22, 19, 14,
# 21, 1, 0, 16, 17]
Dictionary lookup is slightly faster than set lookup in Python 3.
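If you want to check that on your own machine, a rough micro-benchmark could look like the following (no numbers claimed here; they vary by Python version and hardware):
import timeit

setup = "d = dict.fromkeys(range(10000)); s = set(range(10000))"
print(timeit.timeit("9999 in d", setup=setup))  # membership test against a dict
print(timeit.timeit("9999 in s", setup=setup))  # membership test against a set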
What's going to be fastest depends on what percentage of your list is duplicates. If it's nearly all duplicates, with few unique items, creating a new list will probably be faster. If it's mostly unique items, removing them from the original list (or a copy) will be faster.
Here's one for modifying the list in place:
def unique(items):
    seen = set()
    for i in xrange(len(items)-1, -1, -1):
        it = items[i]
        if it in seen:
            del items[i]
        else:
            seen.add(it)
Iterating backwards over the indices ensures that removing items doesn't affect the rest of the iteration. Note that because it scans from the end, this keeps the last occurrence of each value rather than the first.
This is the fastest in-place method I've found (assuming a large proportion of duplicates):
def unique(l):
    s = set(); n = 0
    for x in l:
        if x not in s: s.add(x); l[n] = x; n += 1
    del l[n:]
This is 10% faster than Allen's implementation, on which it is based (timed with timeit.repeat, JIT compiled by psyco). It keeps the first instance of any duplicate.
repton-infinity: I'd be interested if you could confirm my timings.
Obligatory generator-based variation:
def unique(seq):
    seen = set()
    for x in seq:
        if x not in seen:
            seen.add(x)
            yield x
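Since it's a generator, wrap it in list() when you need an actual list:
print(list(unique([1, 1, 2, 'a', 'a', 3])))  # [1, 2, 'a', 3]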
This may be the simplest way:
list(OrderedDict.fromkeys(iterable))
As of Python 3.5, OrderedDict is implemented in C, so this is now the shortest, cleanest, and fastest option.
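For example (the import is required; the sample list is the one from the question):
from collections import OrderedDict

x = [1, 1, 2, 'a', 'a', 3]
print(list(OrderedDict.fromkeys(x)))  # [1, 2, 'a', 3]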
Taken from http://www.peterbe.com/plog/uniqifiers-benchmark
def f5(seq, idfun=None):
    # order preserving
    if idfun is None:
        def idfun(x): return x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        # in old Python versions:
        # if seen.has_key(marker)
        # but in new ones:
        if marker in seen: continue
        seen[marker] = 1
        result.append(item)
    return result
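The idfun hook lets you de-duplicate by a derived key, for example case-insensitively (a hypothetical usage, not from the original post):
print(f5(['Foo', 'foo', 'Bar', 'FOO', 'baz']))             # ['Foo', 'foo', 'Bar', 'FOO', 'baz']
print(f5(['Foo', 'foo', 'Bar', 'FOO', 'baz'], str.lower))  # ['Foo', 'Bar', 'baz']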
One-liner (on Python 3 you will also need from functools import reduce):
new_list = reduce(lambda x,y: x+[y][:1-int(y in x)], my_list, [])
A one-liner for this (note that it builds a new list rather than modifying x in place):
>>> x = [1, 1, 2, 'a', 'a', 3]
>>> [ item for pos,item in enumerate(x) if x.index(item)==pos ]
[1, 2, 'a', 3]
This is the fastest one, comparing all the stuff from this lengthy discussion and the other answers given here, referring to this benchmark. It's another 25% faster than the fastest function from that discussion, f8. Thanks to David Kirby for the idea.
def uniquify(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if x not in seen and not seen_add(x)]
Some time comparison:
$ python uniqifiers_benchmark.py
* f8_original 3.76
* uniquify 3.0
* terhorst 5.44
* terhorst_localref 4.08
* del_dups 4.76
You can actually do something really cool in Python to solve this: you can create a list comprehension that references itself as it is being built, as follows:
# remove duplicates...
def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]'].__self__]
Edit: I removed the "self", and it works on Mac OS X, Python 2.5.1.
The _[1] is Python's "secret" reference to the new list (a CPython 2 implementation detail; it no longer exists in Python 3). The above, of course, is a little messy, but you can adapt it to fit your needs as necessary. For example, you can actually write a function that returns a reference to the comprehension; it would look more like:
return [x for x in my_list if x not in this_list()]
Do the duplicates necessarily need to be in the list in the first place? There's no overhead for looking elements up, but there is a little more overhead in adding elements (though the overhead should still be O(1)).
>>> x = []
>>> y = set()
>>> def add_to_x(val):
...     if val not in y:
...         x.append(val)
...         y.add(val)
...     print x
...     print y
...
>>> add_to_x(1)
[1]
set([1])
>>> add_to_x(1)
[1]
set([1])
>>> add_to_x(1)
[1]
set([1])
>>>
Remove duplicates and preserve order:
This is a fast 2-liner that leverages built-in functionality of list comprehensions and dicts.
x = [1, 1, 2, 'a', 'a', 3]
tmpUniq = {} # temp variable used below
results = [tmpUniq.setdefault(i,i) for i in x if i not in tmpUniq]
print results
[1, 2, 'a', 3]
The dict.setdefault() method returns the value as well as adding it to the temp dict, directly inside the list comprehension. Using the built-in methods and the dict's hashing maximizes the efficiency of the process.
O(n) if the dict is a hash table, O(n log n) if it is a tree; simple and fixed. Thanks to Matthew for the suggestion. Sorry, I don't know the underlying types.
def unique(x):
    output = []
    y = {}
    for item in x:
        y[item] = ""
    for item in x:
        if item in y:
            output.append(item)
            del y[item]  # drop the key so later duplicates are skipped
    return output
has_key in Python is O(1). Insertion and retrieval from a hash are also O(1). This loops through the n items twice, so it's O(n).
def unique(list):
    s = {}
    output = []
    for x in list:
        count = 1
        if s.has_key(x):
            count = s[x] + 1
        s[x] = count
    for x in list:
        count = s[x]
        if count > 0:
            s[x] = 0
            output.append(x)
    return output
There are some great, efficient solutions here. However, for anyone not concerned with the absolute most efficient O(n) solution, I'd go with the simple one-liner O(n^2*log(n)) solution:
def unique(xs):
    return sorted(set(xs), key=lambda x: xs.index(x))
or the more efficient two-liner O(n*log(n)) solution:
def unique(xs):
    positions = dict((e,pos) for pos,e in reversed(list(enumerate(xs))))
    return sorted(set(xs), key=lambda x: positions[x])
Here are two recipes from the itertools documentation:
from itertools import groupby, ifilterfalse, imap
from operator import itemgetter

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in ifilterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element

def unique_justseen(iterable, key=None):
    "List unique elements, preserving order. Remember only the element just seen."
    # unique_justseen('AAAABBBCCDAABBB') --> A B C D A B
    # unique_justseen('ABBCcAD', str.lower) --> A B C A D
    return imap(next, imap(itemgetter(1), groupby(iterable, key)))
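Example usage (Python 2, given the ifilterfalse/imap imports above):
print(list(unique_everseen('AAAABBBCCDAABBB')))  # ['A', 'B', 'C', 'D']
print(list(unique_justseen('AAAABBBCCDAABBB')))  # ['A', 'B', 'C', 'D', 'A', 'B']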
I have no experience with Python, but an algorithm would be to sort the list, then remove duplicates (by comparing to previous items in the list), and finally put the remaining items back in their original order by comparing with the old list.
Longer answer: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
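A rough sketch of that idea (my own illustration; it assumes the elements can be sorted, which the mixed int/str example above cannot be in Python 3):
def unique_via_sort(items):
    # Sort a copy, drop adjacent duplicates, then restore the original order.
    ordered = sorted(items)
    deduped = []
    for item in ordered:
        if not deduped or item != deduped[-1]:
            deduped.append(item)
    # Put the surviving values back in the order of their first appearance.
    return sorted(deduped, key=items.index)

print(unique_via_sort([1, 2, 3, 2, 3, 1]))  # [1, 2, 3]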
>>> def unique(list):
...     y = []
...     for x in list:
...         if x not in y:
...             y.append(x)
...     return y
If you take out the empty list from the call to set() in Terhorst's answer, you get a little speed boost.
Change:
found = set([])
to:
found = set()
However, you don't need the set at all.
def unique(items):
    keep = []
    for item in items:
        if item not in keep:
            keep.append(item)
    return keep
Using timeit I got these results:
with set([]) -- 4.97210427363
with set() -- 4.65712377445
with no set -- 3.44865284975
x = []  # your list of items that includes duplicates
# Assuming that your list contains only immutable (hashable) items
dict_x = {item: item for item in x}
# Duplicate keys simply overwrite each other, and dicts keep insertion order (Python 3.7+),
# so this is a single O(n) pass with O(1) work per item.
x = list(dict_x.keys())  # if you want your output in list format
>>> x=[1,1,2,'a','a',3]
>>> y = [ _x for _x in x if not _x in locals()['_[1]'] ]
>>> y
[1, 2, 'a', 3]
"locals()['_[1]']" is the "secret name" of the list being created.
I don't know if this one is fast or not, but at least it is simple.
Simply convert it to a set first and then back to a list (note that this does not preserve the original order):
def unique(container):
    return list(set(container))
One pass (note that this only removes consecutive duplicates, and it empties the original list as it goes):
a = [1,1,'a','b','c','c']
new_list = []
prev = None
while 1:
    try:
        i = a.pop(0)
        if i != prev:
            new_list.append(i)
            prev = i
    except IndexError:
        break
I haven't done any tests, but one possible algorithm might be to create a second list, and iterate through the first list. If an item is not in the second list, add it to the second list.
x = [1, 1, 2, 'a', 'a', 3]
y = []
for each in x:
    if each not in y:
        y.append(each)
a = [1,2,3,4,5,7,7,8,8,9,9,3,45]

def unique(l):
    ids = {}
    for item in l:
        if not ids.has_key(item):
            ids[item] = item
    return ids.keys()

print a
print unique(a)
Inserting all the elements takes Θ(n).
Checking whether an element already exists takes constant time.
Testing all the items also takes Θ(n).
So we can see that this solution takes Θ(n).
Bear in mind that dictionaries in Python are implemented as hash tables.