Deleting multiple elements from a list - python

Is it possible to delete multiple elements from a list at the same time? If I want to delete elements at index 0 and 2, and try something like del somelist[0], followed by del somelist[2], the second statement will actually delete somelist[3].
I suppose I could always delete the higher numbered elements first but I'm hoping there is a better way.
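For illustration (a minimal sketch of the shifting problem, not part of the original question):
somelist = ['a', 'b', 'c', 'd']
del somelist[0]   # somelist is now ['b', 'c', 'd']
del somelist[2]   # removes 'd', which sat at index 3 before the first del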

For some reason I don't like any of the answers here.
Yes, they work, but strictly speaking most of them aren't deleting elements in a list, are they? (But making a copy and then replacing the original one with the edited copy).
Why not just delete the higher index first?
Is there a reason for this?
I would just do:
for i in sorted(indices, reverse=True):
    del somelist[i]
If you really don't want to delete items backwards, then I guess you should just decrement the index values that are greater than the last deleted index (you can't reuse the same indices since the list has changed), or use a copy of the list (which wouldn't be 'deleting' but replacing the original with an edited copy).
Am I missing something here, any reason to NOT delete in the reverse order?
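A quick usage sketch of the reverse-order deletion (the concrete list is just an example):
somelist = ['a', 'b', 'c', 'd', 'e']
indices = [0, 2]
for i in sorted(indices, reverse=True):
    del somelist[i]   # index 2 goes first, then index 0
print(somelist)       # ['b', 'd', 'e']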

You can use enumerate and remove the values whose index matches the indices you want to remove:
indices = 0, 2
somelist = [i for j, i in enumerate(somelist) if j not in indices]
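A quick usage sketch (the sample list is just for illustration):
somelist = ['a', 'b', 'c', 'd']
indices = 0, 2
somelist = [i for j, i in enumerate(somelist) if j not in indices]
print(somelist)  # ['b', 'd']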

If you're deleting multiple non-adjacent items, then what you describe is the best way (and yes, be sure to start from the highest index).
If your items are adjacent, you can use the slice assignment syntax:
a[2:10] = []
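For example (a small sketch; del a[2:10] does the same thing):
a = list(range(12))
a[2:10] = []
print(a)  # [0, 1, 10, 11]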

You can use numpy.delete as follows:
import numpy as np
a = ['a', 'l', 3.14, 42, 'u']
I = [0, 2]
np.delete(a, I).tolist()
# Returns: ['l', '42', 'u']
If you don't mind ending up with a numpy array at the end, you can leave out the .tolist(). You should see some pretty major speed improvements, too, making this a more scalable solution. I haven't benchmarked it, but numpy operations are compiled code written in either C or Fortran.

As a specialisation of Greg's answer, you can even use extended slice syntax. E.g. if you wanted to delete items 0 and 2:
>>> a = [0, 1, 2, 3, 4]
>>> del a[0:3:2]
>>> a
[1, 3, 4]
This doesn't cover any arbitrary selection, of course, but it can certainly work for deleting any two items.
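As a sketch of the general two-item case (my own example, assuming i < j), the step can be chosen so the slice hits exactly those two positions:
a = [0, 1, 2, 3, 4, 5]
i, j = 1, 4
del a[i:j + 1:j - i]   # deletes indices 1 and 4
print(a)               # [0, 2, 3, 5]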

As a function:
def multi_delete(list_, *args):
    indexes = sorted(args, reverse=True)
    for index in indexes:
        del list_[index]
    return list_
Runs in O(n log n) time, which should make it the fastest correct solution yet.
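A usage sketch (the sample list is illustrative only):
somelist = ['a', 'b', 'c', 'd', 'e']
multi_delete(somelist, 0, 2)   # deletes index 2 first, then index 0
print(somelist)                # ['b', 'd', 'e']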

So, you essentially want to delete multiple elements in one pass? In that case, the position of the next element to delete will be offset by however many were deleted previously.
Our goal is to delete all the vowels, which are precomputed to be indices 1, 4, and 7. Note that it's important that the to_delete indices are in ascending order; otherwise it won't work.
to_delete = [1, 4, 7]
target = list("hello world")
for offset, index in enumerate(to_delete):
    index -= offset
    del target[index]
It'd be more complicated if you wanted to delete the elements in any order. IMO, sorting to_delete might be easier than figuring out when you should or shouldn't subtract from index (see the sketch below).
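For instance (a small sketch, not from the original answer), if the indices arrive in arbitrary order, sorting them first makes the offset bookkeeping valid:
to_delete = [7, 1, 4]
target = list("hello world")
for offset, index in enumerate(sorted(to_delete)):
    del target[index - offset]
print(''.join(target))  # 'hll wrld'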

I'm a total beginner in Python, and my programming at the moment is crude and dirty to say the least, but my solution was to use a combination of the basic commands I learnt in early tutorials:
some_list = [1,2,3,4,5,6,7,8,10]
rem = [0,5,7]
for i in rem:
    some_list[i] = '!'  # mark for deletion
for i in range(some_list.count('!')):
    some_list.remove('!')  # remove
print(some_list)
Obviously, because of having to choose a "mark-for-deletion" character, this has its limitations.
As for the performance as the size of the list scales, I'm sure that my solution is sub-optimal. However, it's straightforward, which I hope appeals to other beginners, and will work in simple cases where some_list is of a well-known format, e.g., always numeric...
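One way around the mark-character limitation (my own sketch, not from the original answer) is to use a unique sentinel object that cannot already appear in the list:
some_list = [1, 2, 3, 4, 5, 6, 7, 8, 10]
rem = [0, 5, 7]
SENTINEL = object()            # a fresh object never equal to any existing element
for i in rem:
    some_list[i] = SENTINEL    # mark for deletion
some_list = [x for x in some_list if x is not SENTINEL]
print(some_list)               # [2, 3, 4, 5, 7, 10]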

Here is an alternative, that does not use enumerate() to create tuples (as in SilentGhost's original answer).
This seems more readable to me. (Maybe I'd feel differently if I was in the habit of using enumerate.) CAVEAT: I have not tested performance of the two approaches.
# Returns a new list. "lst" is not modified.
def delete_by_indices(lst, indices):
    indices_as_set = set(indices)
    return [ lst[i] for i in xrange(len(lst)) if i not in indices_as_set ]
NOTE: Python 2.7 syntax. For Python 3, xrange => range.
Usage:
lst = [ 11*x for x in xrange(10) ]
somelist = delete_by_indices( lst, [0, 4, 5])
somelist:
[11, 22, 33, 66, 77, 88, 99]
--- BONUS ---
Delete multiple values from a list. That is, we have the values we want to delete:
# Returns a new list. "lst" is not modified.
def delete_by_values(lst, values):
    values_as_set = set(values)
    return [ x for x in lst if x not in values_as_set ]
Usage:
somelist = delete_by_values( lst, [0, 44, 55] )
somelist:
[11, 22, 33, 66, 77, 88, 99]
This is the same answer as before, but this time we supplied the VALUES to be deleted [0, 44, 55].

An alternative list comprehension method that uses list index values:
stuff = ['a', 'b', 'c', 'd', 'e', 'f', 'woof']
index = [0, 3, 6]
new = [i for i in stuff if stuff.index(i) not in index]
This returns:
['b', 'c', 'e', 'f']

Here is another method which removes the elements in place. It is also faster if your list is really long.
>>> a = range(10)  # Python 2 (range returns a list); use list(range(10)) on Python 3
>>> remove = [0,4,5]
>>> from collections import deque
>>> deque((list.pop(a, i) for i in sorted(remove, reverse=True)), maxlen=0)
>>> timeit.timeit('[i for j, i in enumerate(a) if j not in remove]', setup='import random;remove=[random.randrange(100000) for i in range(100)]; a = range(100000)', number=1)
0.1704120635986328
>>> timeit.timeit('deque((list.pop(a, i) for i in sorted(remove, reverse=True)), maxlen=0)', setup='from collections import deque;import random;remove=[random.randrange(100000) for i in range(100)]; a = range(100000)', number=1)
0.004853963851928711

This has been mentioned, but somehow nobody managed to actually get it right.
An O(n) solution would be:
indices = {0, 2}
somelist = [i for j, i in enumerate(somelist) if j not in indices]
This is really close to SilentGhost's version, but uses a set literal (the two braces), so the index-membership test is O(1).

l = ['a','b','a','c','a','d']
to_remove = [1, 3]
[l[i] for i in range(len(l)) if i not in to_remove]
It's basically the same as the top voted answer, just a different way of writing it. Note that using l.index() is not a good idea, because it can't handle duplicated elements in a list.
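A quick illustration of the duplicate problem (my own example):
l = ['a', 'b', 'a']
to_remove = [2]                                              # drop the second 'a'
print([l[i] for i in range(len(l)) if i not in to_remove])   # ['a', 'b'] -- correct
print([x for x in l if l.index(x) not in to_remove])         # ['a', 'b', 'a'] -- wrong: index('a') is always 0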

The remove method will cause a lot of shifting of list elements. I think it is better to make a copy:
...
new_list = []
for el in obj.my_list:
    if condition_is_true(el):
        new_list.append(el)
del obj.my_list
obj.my_list = new_list
...

Technically, the answer is NO, it is not possible to delete two objects AT THE SAME TIME. However, it IS possible to delete two objects in one line of beautiful Python.
del (foo['bar'], foo['baz'])
will delete foo['bar'] first and then foo['baz'], left to right.
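The same one-liner works with list indices, but the targets are deleted left to right, so the shifting caveat from the question still applies (a small sketch, not from the original answer):
somelist = ['a', 'b', 'c', 'd']
del somelist[2], somelist[0]   # higher index first
print(somelist)                # ['b', 'd']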

We can do this with a for loop over the indices, after sorting the index list in descending order:
mylist = [66.25, 333, 1, 4, 6, 7, 8, 56, 8769, 65]
indexes = (4, 6)
indexes = sorted(indexes, reverse=True)
for i in indexes:
    mylist.pop(i)
print(mylist)

For the indices 0 and 2 from listA:
for x in (2,0): listA.pop(x)
For some random indices to remove from listA:
indices=(5,3,2,7,0)
for x in sorted(indices)[::-1]: listA.pop(x)

I wanted a way to compare the different solutions that made it easy to turn the knobs.
First I generated my data (note: this benchmark uses Python 2 syntax):
import random
N = 16 * 1024
x = range(N)
random.shuffle(x)
y = random.sample(range(N), N / 10)
Then I defined my functions:
def list_set(value_list, index_list):
    index_list = set(index_list)
    result = [value for index, value in enumerate(value_list) if index not in index_list]
    return result
def list_del(value_list, index_list):
    for index in sorted(index_list, reverse=True):
        del value_list[index]
def list_pop(value_list, index_list):
    for index in sorted(index_list, reverse=True):
        value_list.pop(index)
Then I used timeit to compare the solutions:
import timeit
from collections import OrderedDict
M = 1000
setup = 'from __main__ import x, y, list_set, list_del, list_pop'
statement_dict = OrderedDict([
    ('overhead', 'a = x[:]'),
    ('set', 'a = x[:]; list_set(a, y)'),
    ('del', 'a = x[:]; list_del(a, y)'),
    ('pop', 'a = x[:]; list_pop(a, y)'),
])
overhead = None
result_dict = OrderedDict()
for name, statement in statement_dict.iteritems():
    result = timeit.timeit(statement, number=M, setup=setup)
    if overhead is None:
        overhead = result
    else:
        result = result - overhead
        result_dict[name] = result
for name, result in result_dict.iteritems():
    print "%s = %7.3f" % (name, result)
Output
set = 1.711
del = 3.450
pop = 3.618
So the list comprehension with the indices in a set was the winner, and del is slightly faster than pop.

You can use this logic:
my_list = ['word','yes','no','nice']
c = [b for i, b in enumerate(my_list) if i not in (0, 2, 3)]
print(c)

Another implementation of the idea of removing from the highest index.
for i in range(len(yourlist) - 1, -1, -1):
    del yourlist[i]

You may want to simply use np.delete:
import numpy as np
list_indices = [0, 2]
original_list = [0, 1, 2, 3]
new_list = np.delete(original_list, list_indices)
Output
array([1, 3])
Here, the first argument is the original list, the second is the index or a list of indices you want to delete.
There is a third argument which you can use in the case of having ndarrays: axis (0 for rows and 1 for columns in case of ndarrays).
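A small sketch of the axis argument on a 2-D array (my own example):
import numpy as np
m = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
np.delete(m, [0, 2], axis=0)   # drop rows 0 and 2    -> array([[4, 5, 6]])
np.delete(m, [0, 2], axis=1)   # drop columns 0 and 2 -> array([[2], [5], [8]])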

I can actually think of two ways to do it:
slice the list (this deletes the 1st, 3rd and 8th elements):
somelist = somelist[1:2]+somelist[3:7]+somelist[8:]
do that in place, but one at a time:
somelist.pop(2)
somelist.pop(0)

You can do it that way with a dict, but not with a list. In a list, elements are stored in sequence, so deleting one shifts the positions of the rest. In a dict, entries are tied only to their keys.
Simple code just to explain it by doing:
>>> lst = ['a','b','c']
>>> dct = {0: 'a', 1: 'b', 2:'c'}
>>> lst[0]
'a'
>>> dct[0]
'a'
>>> del lst[0]
>>> del dct[0]
>>> lst[0]
'b'
>>> dct[0]
Traceback (most recent call last):
File "<pyshell#19>", line 1, in <module>
dct[0]
KeyError: 0
>>> dct[1]
'b'
>>> lst[1]
'c'
A way to "convert" a list in a dict is:
>>> dct = {}
>>> for i in xrange(0,len(lst)): dct[i] = lst[i]
The inverse is:
lst = [dct[i] for i in sorted(dct.keys())]
Anyway I think it's better to start deleting from the higher index as you said.
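As a side note (my own shorthand, not from the answer above), the same conversion can be written with dict(enumerate(...)):
lst = ['a', 'b', 'c']
dct = dict(enumerate(lst))                # {0: 'a', 1: 'b', 2: 'c'}
lst_back = [dct[i] for i in sorted(dct)]  # ['a', 'b', 'c']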

To generalize the comment from @sth: item deletion in any class that implements abc.MutableSequence, and in list in particular, is done via the __delitem__ magic method. This method works similarly to __getitem__, meaning it can accept either an integer or a slice. Here is an example:
class MyList(list):
    def __delitem__(self, item):
        if isinstance(item, slice):
            for i in range(*item.indices(len(self))):
                self[i] = 'null'
        else:
            self[item] = 'null'
l = MyList(range(10))
print(l)
del l[5:8]
print(l)
This will output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[0, 1, 2, 3, 4, 'null', 'null', 'null', 8, 9]

Importing it only for this reason might be overkill, but if you happen to be using pandas anyway, then the solution is simple and straightforward:
import pandas as pd
stuff = pd.Series(['a','b','a','c','a','d'])
less_stuff = stuff[stuff != 'a'] # define any condition here
# results ['b','c','d']

some_list.remove(some_list[max(i, j)])
This avoids the sorting cost and having to explicitly copy the list.

None of the answers offered so far performs the deletion in place in O(n) on the length of the list for an arbitrary number of indices to delete, so here's my version:
def multi_delete(the_list, indices):
    assert type(indices) in {set, frozenset}, "indices must be a set or frozenset"
    offset = 0
    for i in range(len(the_list)):
        if i in indices:
            offset += 1
        elif offset:
            the_list[i - offset] = the_list[i]
    if offset:
        del the_list[-offset:]
# Example:
a = [0, 1, 2, 3, 4, 5, 6, 7]
multi_delete(a, {1, 2, 4, 6, 7})
print(a) # prints [0, 3, 5]

I tested the suggested solutions with perfplot and found that NumPy's
np.delete(lst, remove_ids)
is the fastest solution if the list is longer than about 100 entries. Below that, all solutions take around 10^-5 seconds, and the list comprehension is simple enough:
out = [item for i, item in enumerate(lst) if i not in remove_ids]
Code to reproduce the plot:
import perfplot
import random
import numpy as np
import copy
def setup(n):
    lst = list(range(n))
    random.shuffle(lst)
    # //10 = 10%
    remove_ids = random.sample(range(n), n // 10)
    return lst, remove_ids
def if_comprehension(lst, remove_ids):
    return [item for i, item in enumerate(lst) if i not in remove_ids]
def del_list_inplace(lst, remove_ids):
    out = copy.deepcopy(lst)
    for i in sorted(remove_ids, reverse=True):
        del out[i]
    return out
def del_list_numpy(lst, remove_ids):
    return np.delete(lst, remove_ids)
b = perfplot.bench(
    setup=setup,
    kernels=[if_comprehension, del_list_numpy, del_list_inplace],
    n_range=[2**k for k in range(20)],
)
b.save("out.png")
b.show()

How about one of these (I'm very new to Python, but they seem ok):
ocean_basin = ['a', 'Atlantic', 'Pacific', 'Indian', 'a', 'a', 'a']
for i in range(1, ocean_basin.count('a') + 1):
    ocean_basin.remove('a')
print(ocean_basin)
['Atlantic', 'Pacific', 'Indian']
ob = ['a', 'b', 4, 5,'Atlantic', 'Pacific', 'Indian', 'a', 'a', 4, 'a']
remove = ('a', 'b', 4, 5)
ob = [i for i in ob if i not in (remove)]
print(ob)
['Atlantic', 'Pacific', 'Indian']

I put it all together into a list_diff function that simply takes two lists as inputs and returns their difference, while preserving the original order of the first list.
def list_diff(list_a, list_b, verbose=False):
    # returns the difference of list_a and list_b,
    # preserving the original order, unlike set-based solutions
    # get indices of elements to be excluded from list_a
    excl_ind = [i for i, x in enumerate(list_a) if x in list_b]
    if verbose:
        print(excl_ind)
    # filter out the excluded indices, producing a new list
    new_list = [i for i in list_a if list_a.index(i) not in excl_ind]
    if verbose:
        print(new_list)
    return new_list
Sample usage:
my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'woof']
# index = [0, 3, 6]
# define excluded names list
excl_names_list = ['woof', 'c']
list_diff(my_list, excl_names_list)
>> ['a', 'b', 'd', 'e', 'f']

Related

python prog to remove duplicates from a list [duplicate]

How can I check if a list has any duplicates and return a new list without duplicates?
The common approach to get a unique collection of items is to use a set. Sets are unordered collections of distinct objects. To create a set from any iterable, you can simply pass it to the built-in set() function. If you later need a real list again, you can similarly pass the set to the list() function.
The following example should cover whatever you are trying to do:
>>> t = [1, 2, 3, 1, 2, 3, 5, 6, 7, 8]
>>> list(set(t))
[1, 2, 3, 5, 6, 7, 8]
>>> s = [1, 2, 3]
>>> list(set(t) - set(s))
[8, 5, 6, 7]
As you can see from the example result, the original order is not maintained. As mentioned above, sets themselves are unordered collections, so the order is lost. When converting a set back to a list, an arbitrary order is created.
Maintaining order
If order is important to you, then you will have to use a different mechanism. A very common solution for this is to rely on OrderedDict to keep the order of keys during insertion:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Starting with Python 3.7, the built-in dictionary is guaranteed to maintain the insertion order as well, so you can also use that directly if you are on Python 3.7 or later (or CPython 3.6):
>>> list(dict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Note that this may have some overhead of creating a dictionary first, and then creating a list from it. If you don’t actually need to preserve the order, you’re often better off using a set, especially because it gives you a lot more operations to work with. Check out this question for more details and alternative ways to preserve the order when removing duplicates.
Finally note that both the set as well as the OrderedDict/dict solutions require your items to be hashable. This usually means that they have to be immutable. If you have to deal with items that are not hashable (e.g. list objects), then you will have to use a slow approach in which you will basically have to compare every item with every other item in a nested loop.
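A minimal sketch of that slow, comparison-based approach for unhashable items (my own illustration, not from the answer above):
data = [[1, 2], [3, 4], [1, 2]]   # lists are not hashable
unique = []
for item in data:
    if item not in unique:        # equality comparison against every kept item
        unique.append(item)
print(unique)                     # [[1, 2], [3, 4]]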
In Python 2.7, the new way of removing duplicates from an iterable while keeping it in the original order is:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.5, the OrderedDict has a C implementation. My timings show that this is now both the fastest and shortest of the various approaches for Python 3.5.
In Python 3.6, the regular dict became both ordered and compact. (This holds for CPython and PyPy but may not be present in other implementations.) That gives us a new fastest way of deduping while retaining order:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.7, the regular dict is guaranteed to be ordered across all implementations. So, the shortest and fastest solution is:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
It's a one-liner: list(set(source_list)) will do the trick.
A set is something that can't possibly have duplicates.
Update: an order-preserving approach is two lines:
from collections import OrderedDict
OrderedDict((x, True) for x in source_list).keys()
Here we use the fact that OrderedDict remembers the insertion order of keys, and does not change it when a value at a particular key is updated. We insert True as values, but we could insert anything, values are just not used. (set works a lot like a dict with ignored values, too.)
>>> t = [1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> t
[1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> s = []
>>> for i in t:
...     if i not in s:
...         s.append(i)
>>> s
[1, 2, 3, 5, 6, 7, 8]
If you don't care about the order, just do this:
def remove_duplicates(l):
    return list(set(l))
A set is guaranteed to not have duplicates.
To make a new list retaining the order of first elements of duplicates in L:
newlist = [ii for n,ii in enumerate(L) if ii not in L[:n]]
For example: if L = [1, 2, 2, 3, 4, 2, 4, 3, 5], then newlist will be [1, 2, 3, 4, 5]
This checks each new element has not appeared previously in the list before adding it.
Also it does not need imports.
There are also solutions using Pandas and Numpy. They both return numpy array so you have to use the function .tolist() if you want a list.
t=['a','a','b','b','b','c','c','c']
t2= ['c','c','b','b','b','a','a','a']
Pandas solution
Using Pandas function unique():
import pandas as pd
pd.unique(t).tolist()
>>>['a','b','c']
pd.unique(t2).tolist()
>>>['c','b','a']
Numpy solution
Using numpy function unique().
import numpy as np
np.unique(t).tolist()
>>>['a','b','c']
np.unique(t2).tolist()
>>>['a','b','c']
Note that numpy.unique() also sorts the values, so the list t2 is returned sorted. If you want the order preserved, use as in this answer:
_, idx = np.unique(t2, return_index=True)
np.array(t2)[np.sort(idx)].tolist()
>>>['c','b','a']
The solution is not so elegant compared to the others, however, compared to pandas.unique(), numpy.unique() allows you also to check if nested arrays are unique along one selected axis.
In this answer, there will be two sections: Two unique solutions, and a graph of speed for specific solutions.
Removing Duplicate Items
Most of these answers only remove duplicate items which are hashable, but this question doesn't require the items to be hashable, so I'll offer some solutions which don't require hashable items.
collections.Counter is a powerful tool in the standard library which could be perfect for this. There's only one other solution which even has Counter in it. However, that solution is also limited to hashable keys.
To allow unhashable keys in Counter, I made a Container class, which will try to get the object's default hash function, but if that fails, it will fall back to the identity function. It also defines __eq__ and __hash__ methods. This should be enough to allow unhashable items in our solution: unhashable objects will be treated as if they were hashable. However, this hash function uses identity for unhashable objects, meaning two equal objects that are both unhashable won't be collapsed. I suggest overriding this and changing it to use the hash of an equivalent immutable type (like using hash(tuple(my_list)) if my_list is a list).
I also made two solutions. Another solution which keeps the order of the items, using a subclass of both OrderedDict and Counter which is named 'OrderedCounter'. Now, here are the functions:
from collections import OrderedDict, Counter
class Container:
    def __init__(self, obj):
        self.obj = obj
    def __eq__(self, obj):
        return self.obj == obj
    def __hash__(self):
        try:
            return hash(self.obj)
        except:
            return id(self.obj)
class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first encountered'
    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))
    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)
def remd(sequence):
    cnt = Counter()
    for x in sequence:
        cnt[Container(x)] += 1
    return [item.obj for item in cnt]
def oremd(sequence):
    cnt = OrderedCounter()
    for x in sequence:
        cnt[Container(x)] += 1
    return [item.obj for item in cnt]
remd is the non-ordered variant, while oremd is the ordered one. You can probably tell which one is faster, but I'll explain anyway: the non-ordered variant is slightly faster, since it doesn't store the order of the items.
Now, I also wanted to show the speed comparisons of each answer. So, I'll do that now.
Which Function is the Fastest?
For removing duplicates, I gathered 10 functions from a few answers. I calculated the speed of each function and put it into a graph using matplotlib.pyplot.
I divided this into three rounds of graphing. A hashable is any object which can be hashed, an unhashable is any object which cannot be hashed. An ordered sequence is a sequence which preserves order, an unordered sequence does not preserve order. Now, here are a few more terms:
Unordered Hashable was for any method which removed duplicates, which didn't necessarily have to keep the order. It didn't have to work for unhashables, but it could.
Ordered Hashable was for any method which kept the order of the items in the list, but it didn't have to work for unhashables, but it could.
Ordered Unhashable was any method which kept the order of the items in the list, and worked for unhashables.
On the y-axis is the amount of seconds it took.
On the x-axis is the number the function was applied to.
I generated sequences for unordered hashables and ordered hashables with the following comprehension: [list(range(x)) + list(range(x)) for x in range(0, 1000, 10)]
For ordered unhashables: [[list(range(y)) + list(range(y)) for y in range(x)] for x in range(0, 1000, 10)]
Note there is a step in the range because, without it, this would have taken 10 times as long, and also because, in my opinion, it makes the graphs a little easier to read.
Also note the keys on the legend are what I tried to guess as the most vital parts of the implementation of the function. As for what function does the worst or best? The graph speaks for itself.
With that settled, here are the graphs.
[Graphs omitted: Unordered Hashables, Ordered Hashables, and Ordered Unhashables, each with a zoomed-in view.]
Very late answer.
If you don't care about the list order, you can use *arg expansion with set uniqueness to remove dupes, i.e.:
l = [*{*l}]
A colleague sent me the accepted answer as part of his code for a code review today.
While I certainly admire the elegance of the answer in question, I am not happy with the performance.
I tried this solution instead (I use a set to reduce lookup time):
def ordered_set(in_list):
    out_list = []
    added = set()
    for val in in_list:
        if val not in added:
            out_list.append(val)
            added.add(val)
    return out_list
To compare efficiency, I used a random sample of 100 integers - 62 were unique
from random import randint
x = [randint(0,100) for _ in xrange(100)]
In [131]: len(set(x))
Out[131]: 62
Here are the results of the measurements
In [129]: %timeit list(OrderedDict.fromkeys(x))
10000 loops, best of 3: 86.4 us per loop
In [130]: %timeit ordered_set(x)
100000 loops, best of 3: 15.1 us per loop
Well, what happens if set is removed from the solution?
def ordered_set(inlist):
    out_list = []
    for val in inlist:
        if val not in out_list:
            out_list.append(val)
    return out_list
The result is not as bad as with the OrderedDict, but still more than 3 times slower than the original set-assisted version:
In [136]: %timeit ordered_set(x)
10000 loops, best of 3: 52.6 us per loop
Another way of doing:
>>> seq = [1,2,3,'a', 'a', 1,2]
>>> dict.fromkeys(seq).keys()
['a', 1, 2, 3]
Simple and easy:
myList = [1, 2, 3, 1, 2, 5, 6, 7, 8]
cleanlist = []
[cleanlist.append(x) for x in myList if x not in cleanlist]
Output:
>>> cleanlist
[1, 2, 3, 5, 6, 7, 8]
I had a dict in my list, so I could not use the above approach. I got the error:
TypeError: unhashable type:
So if you care about order and/or some items are unhashable, then you might find this useful:
def make_unique(original_list):
    unique_list = []
    [unique_list.append(obj) for obj in original_list if obj not in unique_list]
    return unique_list
Some may consider list comprehension with a side effect to not be a good solution. Here's an alternative:
def make_unique(original_list):
    unique_list = []
    # note: map() is lazy in Python 3, so this variant only has its side effect on Python 2
    map(lambda x: unique_list.append(x) if (x not in unique_list) else False, original_list)
    return unique_list
All the order-preserving approaches I've seen here so far either use naive comparison (with O(n^2) time-complexity at best) or heavy-weight OrderedDicts/set+list combinations that are limited to hashable inputs. Here is a hash-independent O(n log n) solution:
Update added the key argument, documentation and Python 3 compatibility.
# from functools import reduce <-- add this import on Python 3
def uniq(iterable, key=lambda x: x):
    """
    Remove duplicates from an iterable. Preserves order.
    :type iterable: Iterable[Ord => A]
    :param iterable: an iterable of objects of any orderable type
    :type key: Callable[A] -> (Ord => B)
    :param key: optional argument; by default an item (A) is discarded
    if another item (B), such that A == B, has already been encountered and taken.
    If you provide a key, this condition changes to key(A) == key(B); the callable
    must return orderable objects.
    """
    # Enumerate the list to restore order later; reduce the sorted list; restore order
    def append_unique(acc, item):
        return acc if key(acc[-1][1]) == key(item[1]) else acc.append(item) or acc
    srt_enum = sorted(enumerate(iterable), key=lambda item: key(item[1]))
    return [item[1] for item in sorted(reduce(append_unique, srt_enum, [srt_enum[0]]))]
If you want to preserve the order, and not use any external modules here is an easy way to do this:
>>> t = [1, 9, 2, 3, 4, 5, 3, 6, 7, 5, 8, 9]
>>> list(dict.fromkeys(t))
[1, 9, 2, 3, 4, 5, 6, 7, 8]
Note: this method preserves the order of first appearance, so, as seen above, nine comes after one because that is where it first appeared. This, however, is the same result as you would get by doing
from collections import OrderedDict
ulist=list(OrderedDict.fromkeys(l))
but it is much shorter, and runs faster.
This works because each time fromkeys tries to create a key that already exists, it simply overwrites it. This won't affect the dictionary at all, however, as fromkeys creates a dictionary where all keys have the value None, so effectively it eliminates all duplicates this way.
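For illustration (an interactive sketch on Python 3.7+):
>>> dict.fromkeys([1, 9, 2, 9, 1])
{1: None, 9: None, 2: None}
>>> list(dict.fromkeys([1, 9, 2, 9, 1]))
[1, 9, 2]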
I've compared the various suggestions with perfplot. It turns out that, if the input array doesn't have duplicate elements, all methods are more or less equally fast, independently of whether the input data is a Python list or a NumPy array.
If the input array is large, but contains just one unique element, then the set, dict and np.unique methods are constant-time if the input data is a list. If it's a NumPy array, np.unique is about 10 times faster than the other alternatives.
It's somewhat surprising to me that those are not constant-time operations, too.
Code to reproduce the plots:
import perfplot
import numpy as np
import matplotlib.pyplot as plt
def setup_list(n):
    # return list(np.random.permutation(np.arange(n)))
    return [0] * n
def setup_np_array(n):
    # return np.random.permutation(np.arange(n))
    return np.zeros(n, dtype=int)
def list_set(data):
    return list(set(data))
def numpy_unique(data):
    return np.unique(data)
def list_dict(data):
    return list(dict.fromkeys(data))
b = perfplot.bench(
    setup=[
        setup_list,
        setup_list,
        setup_list,
        setup_np_array,
        setup_np_array,
        setup_np_array,
    ],
    kernels=[list_set, numpy_unique, list_dict, list_set, numpy_unique, list_dict],
    labels=[
        "list(set(lst))",
        "np.unique(lst)",
        "list(dict(lst))",
        "list(set(arr))",
        "np.unique(arr)",
        "list(dict(arr))",
    ],
    n_range=[2 ** k for k in range(23)],
    xlabel="len(array)",
    equality_check=None,
)
# plt.title("input array = [0, 1, 2,..., n]")
plt.title("input array = [0, 0,..., 0]")
b.save("out.png")
b.show()
You could also do this:
>>> t = [1, 2, 3, 3, 2, 4, 5, 6]
>>> s = [x for i, x in enumerate(t) if i == t.index(x)]
>>> s
[1, 2, 3, 4, 5, 6]
The reason the above works is that the index method returns only the first index of an element; duplicate occurrences have higher indices. Refer to the docs:
list.index(x[, start[, end]])
Return zero-based index in the list of
the first item whose value is x. Raises a ValueError if there is no
such item.
The best approach for removing duplicates from a list is to use the set() function, available in Python, and then convert that set back into a list:
In [2]: some_list = ['a','a','v','v','v','c','c','d']
In [3]: list(set(some_list))
Out[3]: ['a', 'c', 'd', 'v']
You can use set to remove duplicates:
mylist = list(set(mylist))
But note the results will be unordered. If that's an issue:
mylist.sort()
Try using sets:
import sets
t = sets.Set(['a', 'b', 'c', 'd'])
t1 = sets.Set(['a', 'b', 'c'])
print t | t1
print t - t1
One more approach could be:
import pandas as pd
myList = [1, 2, 3, 1, 2, 5, 6, 7, 8]
cleanList = pd.Series(myList).drop_duplicates().tolist()
print(cleanList)
#> [1, 2, 3, 5, 6, 7, 8]
and the order remains preserved.
This one cares about the order without too much hassle (OrderedDict & others). Probably not the most Pythonic nor the shortest way, but it does the trick:
def remove_duplicates(item_list):
    ''' Removes duplicate items from a list '''
    singles_list = []
    for element in item_list:
        if element not in singles_list:
            singles_list.append(element)
    return singles_list
Reduce variant with order preserved:
Assume that we have list:
l = [5, 6, 6, 1, 1, 2, 2, 3, 4]
Reduce variant (inefficient):
>>> reduce(lambda r, v: v in r and r or r + [v], l, [])
[5, 6, 1, 2, 3, 4]
5 x faster but more sophisticated
>>> reduce(lambda r, v: v in r[1] and r or (r[0].append(v) or r[1].add(v)) or r, l, ([], set()))[0]
[5, 6, 1, 2, 3, 4]
Explanation:
default = (list(), set())
# use the list to keep order
# use the set to make lookups faster
def reducer(result, item):
    if item not in result[1]:
        result[0].append(item)
        result[1].add(item)
    return result
reduce(reducer, l, default)[0]
There are many other answers suggesting different ways to do this, but they're all batch operations, and some of them throw away the original order. That might be okay depending on what you need, but if you want to iterate over the values in the order of the first instance of each value, and you want to remove the duplicates on-the-fly versus all at once, you could use this generator:
def uniqify(iterable):
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item
This returns a generator/iterator, so you can use it anywhere that you can use an iterator.
for unique_item in uniqify([1, 2, 3, 4, 3, 2, 4, 5, 6, 7, 6, 8, 8]):
    print(unique_item, end=' ')
print()
Output:
1 2 3 4 5 6 7 8
If you do want a list, you can do this:
unique_list = list(uniqify([1, 2, 3, 4, 3, 2, 4, 5, 6, 7, 6, 8, 8]))
print(unique_list)
Output:
[1, 2, 3, 4, 5, 6, 7, 8]
You can use the following function:
def rem_dupes(dup_list):
    yooneeks = []
    for elem in dup_list:
        if elem not in yooneeks:
            yooneeks.append(elem)
    return yooneeks
Example:
my_list = ['this','is','a','list','with','dupicates','in', 'the', 'list']
Usage:
rem_dupes(my_list)
['this', 'is', 'a', 'list', 'with', 'dupicates', 'in', 'the']
Using set :
a = [0,1,2,3,4,3,3,4]
a = list(set(a))
print a
Using unique :
import numpy as np
a = [0,1,2,3,4,3,3,4]
a = np.unique(a).tolist()
print a
Without using set
data = [1, 2, 3, 1, 2, 5, 6, 7, 8]
uni_data = []
for dat in data:
    if dat not in uni_data:
        uni_data.append(dat)
print(uni_data)
The Magic of Python Built-in Types
In Python, it is very easy to handle complicated cases like this using only Python's built-in types.
Let me show you how!
Method 1: General Case
The way (1 line of code) to remove duplicated elements from a list while still keeping the original order:
line = [1, 2, 3, 1, 2, 5, 6, 7, 8]
new_line = sorted(set(line), key=line.index) # remove duplicated element
print(new_line)
You will get the result
[1, 2, 3, 5, 6, 7, 8]
Method 2: Special Case
TypeError: unhashable type: 'list'
The special case, for unhashable elements (3 lines of code):
line=[['16.4966155686595', '-27.59776154691', '52.3786295521147']
,['16.4966155686595', '-27.59776154691', '52.3786295521147']
,['17.6508629295574', '-27.143305738671', '47.534955022564']
,['17.6508629295574', '-27.143305738671', '47.534955022564']
,['18.8051102904552', '-26.688849930432', '42.6912804930134']
,['18.8051102904552', '-26.688849930432', '42.6912804930134']
,['19.5504702331098', '-26.205884452727', '37.7709192714727']
,['19.5504702331098', '-26.205884452727', '37.7709192714727']
,['20.2929416861422', '-25.722717575124', '32.8500163147157']
,['20.2929416861422', '-25.722717575124', '32.8500163147157']]
tuple_line = [tuple(pt) for pt in line] # convert list of list into list of tuple
tuple_new_line = sorted(set(tuple_line),key=tuple_line.index) # remove duplicated element
new_line = [list(t) for t in tuple_new_line] # convert list of tuple into list of list
print (new_line)
You will get the result :
[
['16.4966155686595', '-27.59776154691', '52.3786295521147'],
['17.6508629295574', '-27.143305738671', '47.534955022564'],
['18.8051102904552', '-26.688849930432', '42.6912804930134'],
['19.5504702331098', '-26.205884452727', '37.7709192714727'],
['20.2929416861422', '-25.722717575124', '32.8500163147157']
]
This works because tuples are hashable, and you can convert data between list and tuple easily.
The code below is a simple way of removing duplicates from a list:
def remove_duplicates(x):
    a = []
    for i in x:
        if i not in a:
            a.append(i)
    return a
print(remove_duplicates([1, 2, 2, 3, 3, 4]))
It returns [1, 2, 3, 4].
Here's the fastest Pythonic solution compared to the others listed in the replies.
Using the implementation details of short-circuit evaluation allows us to use a list comprehension, which is fast enough. visited.add(item) always returns None, which is evaluated as False, so the right-hand side of or is always the result of such an expression.
Time it yourself
def deduplicate(sequence):
    visited = set()
    adder = visited.add  # get rid of qualification overhead
    out = [adder(item) or item for item in sequence if item not in visited]
    return out

How to write a function that gives me unique numbers under 100 [duplicate]


Python - finding value position in a list and outputting as a new list

Currently learning Python in my course, and I'm a little confused on this part; unfortunately my next prac isn't until next week, so I thought I'd ask here.
We're supposed to write a function called find() that takes a list, searches for a value in that list, and returns a new list of the positions at which the value was found. E.g.:
list = 2, 3, 7, 2, 3, 3, 5
number = 3
the output would be: 1, 4, 5
The question requires us to use a loop but not to use built-in functions, slice expressions, list methods, or string methods unless specified, with the only ones specified for this part being: .append() and range().
We're given a file we're not allowed to edit:
import test_lists
list_A = ['q', 'w', 'e', 'r', 't', 'y']
print(test_lists.find(list_A, 'r'))
print(test_lists.find(list_A, 'b'))
From what I've attempted in a file named test_lists, the output gives <function find at 0x0000000003B89D90>:
def find(list1, listValue):
    findList = 0
    for x in range(findList):
        findList.find(list1, value)
    return find
Anyone able to please push me in the right direction and explain it to me? We were told we could use .append(), but I don't understand how that would fit into this situation, as from what I am aware append only adds to the string. I feel like I'm pretty far off track.
Thanks
Using enumerate and list comprehension:
>>> lst = 2, 3, 7, 2, 3, 3, 5
>>> number = 3
>>> [i for i, x in enumerate(lst) if x == number]
[1, 4, 5]
>>> list_A = ['q', 'w', 'e', 'r', 't', 'y']
>>> [i for i, x in enumerate(list_A) if x == 'r']
[3]
BTW, don't use list as a variable name. It shadows builtin type/function list.
UPDATE
without using enumerate:
>>> lst = 2, 3, 7, 2, 3, 3, 5
>>> number = 3
>>> [i for i in range(len(lst)) if lst[i] == number]
[1, 4, 5]
Will this do ?
def find(list1, listValue):
    found = []
    for index in range(0, len(list1)):
        if list1[index] == listValue:
            found.append(index)
    return found
I guess you can't use enumerate ?
Then you have to do the same thing by yourself: iterate over the indexes of the list and, for each index, see if the element is the expected element. If it is, you add the index to your result list:
expected_element = 3
my_list = [...]
result_list = []
for i in range(len(my_list)):
    if my_list[i] == expected_element:
        result_list.append(i)
It's not very Pythonic, but if you have that many constraints, I think it's fine.
And if you can't use len(my_list), you can get the size by iterating over the list once first:
def size(l):
    s = 0
    for _ in l:
        s += 1
    return s
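Combining the ideas above, here is a minimal sketch (the function and variable names are my own) that avoids len() entirely by keeping a manual index counter, using only a loop and .append():
def find(list1, list_value):
    found = []
    index = 0                      # manual counter instead of range(len(...))
    for element in list1:
        if element == list_value:
            found.append(index)    # record the position of each match
        index += 1
    return found

print(find(['q', 'w', 'e', 'r', 't', 'y'], 'r'))   # [3]
print(find(['q', 'w', 'e', 'r', 't', 'y'], 'b'))   # []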
A list comprehension would be the most Pythonic way to do this. But writing it explicitly for instructional purposes, here is another way to do it.
def find(number, numList):
    indexMatches = []                      # initialize your index array
    for index in range(len(numList)):      # iterate over the list of values
        if number == numList[index]:       # compare the number against the current value
            indexMatches.append(index)     # if they match, add it to the index array
    return indexMatches                    # return the list of indexes where matches occurred
find(3, [2,3,7,2,3,3,5])
Output
[1, 4, 5]

Removing duplicates in lists

How can I check if a list has any duplicates and return a new list without duplicates?
The common approach to get a unique collection of items is to use a set. Sets are unordered collections of distinct objects. To create a set from any iterable, you can simply pass it to the built-in set() function. If you later need a real list again, you can similarly pass the set to the list() function.
The following example should cover whatever you are trying to do:
>>> t = [1, 2, 3, 1, 2, 3, 5, 6, 7, 8]
>>> list(set(t))
[1, 2, 3, 5, 6, 7, 8]
>>> s = [1, 2, 3]
>>> list(set(t) - set(s))
[8, 5, 6, 7]
As you can see from the example result, the original order is not maintained. As mentioned above, sets themselves are unordered collections, so the order is lost. When converting a set back to a list, an arbitrary order is created.
Maintaining order
If order is important to you, then you will have to use a different mechanism. A very common solution for this is to rely on OrderedDict to keep the order of keys during insertion:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Starting with Python 3.7, the built-in dictionary is guaranteed to maintain the insertion order as well, so you can also use that directly if you are on Python 3.7 or later (or CPython 3.6):
>>> list(dict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]
Note that this may have some overhead of creating a dictionary first, and then creating a list from it. If you don’t actually need to preserve the order, you’re often better off using a set, especially because it gives you a lot more operations to work with. Check out this question for more details and alternative ways to preserve the order when removing duplicates.
Finally note that both the set as well as the OrderedDict/dict solutions require your items to be hashable. This usually means that they have to be immutable. If you have to deal with items that are not hashable (e.g. list objects), then you will have to use a slow approach in which you will basically have to compare every item with every other item in a nested loop.
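For completeness, a minimal sketch of that slow, order-preserving approach for unhashable items (the function name is my own):
def unique_unhashable(items):
    result = []
    for item in items:
        if item not in result:   # linear scan of result, so O(n^2) overall
            result.append(item)
    return result

print(unique_unhashable([[1, 2], [3], [1, 2]]))   # [[1, 2], [3]]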
In Python 2.7, the new way of removing duplicates from an iterable while keeping it in the original order is:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.5, the OrderedDict has a C implementation. My timings show that this is now both the fastest and shortest of the various approaches for Python 3.5.
In Python 3.6, the regular dict became both ordered and compact. (This holds for CPython and PyPy but may not be present in other implementations.) That gives us a new fastest way of deduping while retaining order:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
In Python 3.7, the regular dict is guaranteed to be ordered across all implementations. So, the shortest and fastest solution is:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
It's a one-liner: list(set(source_list)) will do the trick.
A set is something that can't possibly have duplicates.
Update: an order-preserving approach is two lines:
from collections import OrderedDict
OrderedDict((x, True) for x in source_list).keys()
Here we use the fact that OrderedDict remembers the insertion order of keys, and does not change it when a value at a particular key is updated. We insert True as values, but we could insert anything, values are just not used. (set works a lot like a dict with ignored values, too.)
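As a small worked example (note that in Python 3, .keys() returns a view, so wrap it in list() if you need an actual list):
from collections import OrderedDict

source_list = [3, 1, 3, 2, 1]
unique = list(OrderedDict((x, True) for x in source_list).keys())
print(unique)   # [3, 1, 2]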
>>> t = [1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> t
[1, 2, 3, 1, 2, 5, 6, 7, 8]
>>> s = []
>>> for i in t:
...     if i not in s:
...         s.append(i)
...
>>> s
[1, 2, 3, 5, 6, 7, 8]
If you don't care about the order, just do this:
def remove_duplicates(l):
return list(set(l))
A set is guaranteed to not have duplicates.
To make a new list retaining the order of first elements of duplicates in L:
newlist = [ii for n,ii in enumerate(L) if ii not in L[:n]]
For example: if L = [1, 2, 2, 3, 4, 2, 4, 3, 5], then newlist will be [1, 2, 3, 4, 5]
This checks each new element has not appeared previously in the list before adding it.
Also it does not need imports.
There are also solutions using Pandas and NumPy. They both return a NumPy array, so you have to call .tolist() on the result if you want a list.
t=['a','a','b','b','b','c','c','c']
t2= ['c','c','b','b','b','a','a','a']
Pandas solution
Using Pandas function unique():
import pandas as pd
pd.unique(t).tolist()
>>>['a','b','c']
pd.unique(t2).tolist()
>>>['c','b','a']
Numpy solution
Using numpy function unique().
import numpy as np
np.unique(t).tolist()
>>>['a','b','c']
np.unique(t2).tolist()
>>>['a','b','c']
Note that numpy.unique() also sorts the values, so the list t2 is returned sorted. If you want the order preserved, do as in this answer:
_, idx = np.unique(t2, return_index=True)
np.array(t2)[np.sort(idx)].tolist()
>>>['c','b','a']
The solution is not as elegant as the others; however, compared to pandas.unique(), numpy.unique() also allows you to check whether nested arrays are unique along one selected axis.
In this answer, there will be two sections: Two unique solutions, and a graph of speed for specific solutions.
Removing Duplicate Items
Most of these answers only handle duplicate items which are hashable, but the question doesn't imply that only hashable items are needed, so I'll offer some solutions which don't require hashable items.
collections.Counter is a powerful tool in the standard library which could be perfect for this. There's only one other solution which even has Counter in it. However, that solution is also limited to hashable keys.
To allow unhashable keys in Counter, I made a Container class, which will try to get the object's default hash function, but if that fails, it will fall back to the object's identity. It also defines an __eq__ and a __hash__ method. This should be enough to allow unhashable items in our solution: unhashable objects will be treated as if they are hashable. However, this hash function uses identity for unhashable objects, meaning two equal objects that are both unhashable won't be treated as duplicates. I suggest you override this and change it to use the hash of an equivalent immutable type (like using hash(tuple(my_list)) if my_list is a list); a minimal sketch of such an override follows the code and its discussion below.
I made two solutions: one that doesn't preserve order, and another that keeps the order of the items using a subclass of both OrderedDict and Counter named 'OrderedCounter'. Now, here are the functions:
from collections import OrderedDict, Counter

class Container:
    def __init__(self, obj):
        self.obj = obj
    def __eq__(self, obj):
        return self.obj == obj
    def __hash__(self):
        try:
            return hash(self.obj)
        except:
            return id(self.obj)

class OrderedCounter(Counter, OrderedDict):
    'Counter that remembers the order elements are first encountered'
    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, OrderedDict(self))
    def __reduce__(self):
        return self.__class__, (OrderedDict(self),)

def remd(sequence):
    cnt = Counter()
    for x in sequence:
        cnt[Container(x)] += 1
    return [item.obj for item in cnt]

def oremd(sequence):
    cnt = OrderedCounter()
    for x in sequence:
        cnt[Container(x)] += 1
    return [item.obj for item in cnt]
remd is the non-ordered version, while oremd is the ordered one. You can probably tell which one is faster, but I'll explain anyway: the non-ordered version is slightly faster, since it doesn't store the order of the items.
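As suggested above, here is a sketch (my own, not part of the original answer) of overriding the hash so that two equal lists are detected as duplicates; use ListContainer(x) instead of Container(x) inside remd/oremd:
class ListContainer(Container):
    def __hash__(self):
        if isinstance(self.obj, list):
            # hash lists by their tuple form; this still fails if the list
            # itself contains unhashable elements, so it is only a sketch
            return hash(tuple(self.obj))
        return super().__hash__()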
Now, I also wanted to show the speed comparisons of each answer. So, I'll do that now.
Which Function is the Fastest?
For removing duplicates, I gathered 10 functions from a few answers. I calculated the speed of each function and put it into a graph using matplotlib.pyplot.
I divided this into three rounds of graphing. A hashable is any object which can be hashed, an unhashable is any object which cannot be hashed. An ordered sequence is a sequence which preserves order, an unordered sequence does not preserve order. Now, here are a few more terms:
Unordered Hashable was for any method which removed duplicates but didn't necessarily have to keep the order. It didn't have to work for unhashables, but it could.
Ordered Hashable was for any method which kept the order of the items in the list; it didn't have to work for unhashables, but it could.
Ordered Unhashable was for any method which kept the order of the items in the list and worked for unhashables.
On the y-axis is the amount of seconds it took.
On the x-axis is the number the function was applied to.
I generated sequences for unordered hashables and ordered hashables with the following comprehension: [list(range(x)) + list(range(x)) for x in range(0, 1000, 10)]
For ordered unhashables: [[list(range(y)) + list(range(y)) for y in range(x)] for x in range(0, 1000, 10)]
Note there is a step in the range because without it, this would've taken 10x as long. Also because in my personal opinion, I thought it might've looked a little easier to read.
Also note the keys on the legend are what I tried to guess as the most vital parts of the implementation of the function. As for what function does the worst or best? The graph speaks for itself.
With that settled, here are the graphs.
[Graphs not reproduced here: Unordered Hashables, Ordered Hashables, and Ordered Unhashables, each with a zoomed-in version.]
Very late answer.
If you don't care about the list order, you can use iterable unpacking with set uniqueness to remove dupes, i.e.:
l = [*{*l}]
Python3 Demo
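A quick illustration (the order of the result is arbitrary):
l = [1, 2, 2, 3, 1]
l = [*{*l}]        # build a set from l, then unpack it back into a list
print(l)           # e.g. [1, 2, 3]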
A colleague sent me the accepted answer as part of his code for a code review today.
While I certainly admire the elegance of the answer in question, I am not happy with its performance.
I tried this solution instead (I use a set to reduce lookup time):
def ordered_set(in_list):
    out_list = []
    added = set()
    for val in in_list:
        if val not in added:
            out_list.append(val)
            added.add(val)
    return out_list
To compare efficiency, I used a random sample of 100 integers - 62 were unique
from random import randint
x = [randint(0,100) for _ in xrange(100)]
In [131]: len(set(x))
Out[131]: 62
Here are the results of the measurements
In [129]: %timeit list(OrderedDict.fromkeys(x))
10000 loops, best of 3: 86.4 us per loop
In [130]: %timeit ordered_set(x)
100000 loops, best of 3: 15.1 us per loop
Well, what happens if set is removed from the solution?
def ordered_set(inlist):
    out_list = []
    for val in inlist:
        if val not in out_list:
            out_list.append(val)
    return out_list
The result is not as bad as with the OrderedDict, but still more than 3 times slower than the original solution:
In [136]: %timeit ordered_set(x)
10000 loops, best of 3: 52.6 us per loop
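If you are not in IPython, a rough equivalent of these measurements with the standard timeit module (assuming the ordered_set function defined above is in scope) looks like this:
import timeit
from collections import OrderedDict
from random import randint

x = [randint(0, 100) for _ in range(100)]

# time each approach over 10000 calls
print(timeit.timeit(lambda: list(OrderedDict.fromkeys(x)), number=10000))
print(timeit.timeit(lambda: ordered_set(x), number=10000))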
Another way of doing:
>>> seq = [1,2,3,'a', 'a', 1,2]
>>> dict.fromkeys(seq).keys()
['a', 1, 2, 3]
Simple and easy:
myList = [1, 2, 3, 1, 2, 5, 6, 7, 8]
cleanlist = []
[cleanlist.append(x) for x in myList if x not in cleanlist]
Output:
>>> cleanlist
[1, 2, 3, 5, 6, 7, 8]
I had a dict in my list, so I could not use the above approach. I got the error:
TypeError: unhashable type:
So if you care about order and/or some items are unhashable, then you might find this useful:
def make_unique(original_list):
    unique_list = []
    [unique_list.append(obj) for obj in original_list if obj not in unique_list]
    return unique_list
Some may consider list comprehension with a side effect to not be a good solution. Here's an alternative:
def make_unique(original_list):
    unique_list = []
    # Note: in Python 3, map() is lazy, so this only works if the map object is
    # consumed (e.g. wrapped in list()); as written, this is Python 2 style.
    map(lambda x: unique_list.append(x) if (x not in unique_list) else False, original_list)
    return unique_list
All the order-preserving approaches I've seen here so far either use naive comparison (with O(n^2) time complexity at best) or heavyweight OrderedDict/set+list combinations that are limited to hashable inputs. Here is a hash-independent O(n log n) solution:
Update added the key argument, documentation and Python 3 compatibility.
# from functools import reduce <-- add this import on Python 3
def uniq(iterable, key=lambda x: x):
    """
    Remove duplicates from an iterable. Preserves order.
    :type iterable: Iterable[Ord => A]
    :param iterable: an iterable of objects of any orderable type
    :type key: Callable[A] -> (Ord => B)
    :param key: optional argument; by default an item (A) is discarded
    if another item (B), such that A == B, has already been encountered and taken.
    If you provide a key, this condition changes to key(A) == key(B); the callable
    must return orderable objects.
    """
    # Enumerate the list to remember the original positions; reduce the sorted list; restore order
    def append_unique(acc, item):
        return acc if key(acc[-1][1]) == key(item[1]) else acc.append(item) or acc
    srt_enum = sorted(enumerate(iterable), key=lambda item: key(item[1]))
    return [item[1] for item in sorted(reduce(append_unique, srt_enum, [srt_enum[0]]))]
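An illustrative call (my own example; on Python 3, remember to enable the functools import noted in the code), deduplicating unhashable rows by a key while keeping the order of first occurrence:
rows = [[2, 'b'], [1, 'a'], [2, 'c'], [1, 'a']]
print(uniq(rows, key=lambda r: r[0]))   # [[2, 'b'], [1, 'a']]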
If you want to preserve the order, and not use any external modules here is an easy way to do this:
>>> t = [1, 9, 2, 3, 4, 5, 3, 6, 7, 5, 8, 9]
>>> list(dict.fromkeys(t))
[1, 9, 2, 3, 4, 5, 6, 7, 8]
Note: This method preserves the order of first appearance, so, as seen above, nine comes right after one because that is where it first appeared. This, however, is the same result as you would get with
from collections import OrderedDict
ulist = list(OrderedDict.fromkeys(t))
but it is much shorter and runs faster.
This works because each time fromkeys tries to create a key that already exists, it simply overwrites it. This won't affect the dictionary at all, however, as fromkeys creates a dictionary where all keys have the value None, so effectively it eliminates all duplicates this way.
I've compared the various suggestions with perfplot. It turns out that, if the input array doesn't have duplicate elements, all methods are more or less equally fast, independently of whether the input data is a Python list or a NumPy array.
If the input array is large, but contains just one unique element, then the set, dict and np.unique methods are constant-time if the input data is a list. If it's a NumPy array, np.unique is about 10 times faster than the other alternatives.
It's somewhat surprising to me that those are not constant-time operations, too.
Code to reproduce the plots:
import perfplot
import numpy as np
import matplotlib.pyplot as plt

def setup_list(n):
    # return list(np.random.permutation(np.arange(n)))
    return [0] * n

def setup_np_array(n):
    # return np.random.permutation(np.arange(n))
    return np.zeros(n, dtype=int)

def list_set(data):
    return list(set(data))

def numpy_unique(data):
    return np.unique(data)

def list_dict(data):
    return list(dict.fromkeys(data))

b = perfplot.bench(
    setup=[
        setup_list,
        setup_list,
        setup_list,
        setup_np_array,
        setup_np_array,
        setup_np_array,
    ],
    kernels=[list_set, numpy_unique, list_dict, list_set, numpy_unique, list_dict],
    labels=[
        "list(set(lst))",
        "np.unique(lst)",
        "list(dict(lst))",
        "list(set(arr))",
        "np.unique(arr)",
        "list(dict(arr))",
    ],
    n_range=[2 ** k for k in range(23)],
    xlabel="len(array)",
    equality_check=None,
)
# plt.title("input array = [0, 1, 2,..., n]")
plt.title("input array = [0, 0,..., 0]")
b.save("out.png")
b.show()
You could also do this:
>>> t = [1, 2, 3, 3, 2, 4, 5, 6]
>>> s = [x for i, x in enumerate(t) if i == t.index(x)]
>>> s
[1, 2, 3, 4, 5, 6]
The reason the above works is that the index method returns only the first index of an element; duplicate elements have higher indices. Refer to the documentation:
list.index(x[, start[, end]])
Return zero-based index in the list of
the first item whose value is x. Raises a ValueError if there is no
such item.
The best approach for removing duplicates from a list is to use the set() function, available in Python, and then convert that set back into a list:
In [2]: some_list = ['a','a','v','v','v','c','c','d']
In [3]: list(set(some_list))
Out[3]: ['a', 'c', 'd', 'v']
You can use set to remove duplicates:
mylist = list(set(mylist))
But note the results will be unordered. If that's an issue:
mylist.sort()
Try using sets:
import sets
t = sets.Set(['a', 'b', 'c', 'd'])
t1 = sets.Set(['a', 'b', 'c'])
print t | t1
print t - t1
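For reference, the same operations with the built-in set type (the sets module is deprecated and was removed in Python 3):
t = {'a', 'b', 'c', 'd'}
t1 = {'a', 'b', 'c'}
print(t | t1)   # union
print(t - t1)   # difference: {'d'}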
Another good approach could be:
import pandas as pd
myList = [1, 2, 3, 1, 2, 5, 6, 7, 8]
cleanList = pd.Series(myList).drop_duplicates().tolist()
print(cleanList)
#> [1, 2, 3, 5, 6, 7, 8]
and the order remains preserved.
This one cares about the order without too much hassle (OrderedDict & others). Probably not the most Pythonic way, nor the shortest way, but it does the trick:
def remove_duplicates(item_list):
    ''' Removes duplicate items from a list '''
    singles_list = []
    for element in item_list:
        if element not in singles_list:
            singles_list.append(element)
    return singles_list
Reduce variant with order preserved:
Assume that we have list:
l = [5, 6, 6, 1, 1, 2, 2, 3, 4]
Reduce variant (inefficient):
>>> reduce(lambda r, v: v in r and r or r + [v], l, [])
[5, 6, 1, 2, 3, 4]
5x faster but more sophisticated:
>>> reduce(lambda r, v: v in r[1] and r or (r[0].append(v) or r[1].add(v)) or r, l, ([], set()))[0]
[5, 6, 1, 2, 3, 4]
Explanation:
default = (list(), set())
# use the list to keep order
# use the set to make lookups faster

def reducer(result, item):
    if item not in result[1]:
        result[0].append(item)
        result[1].add(item)
    return result

reduce(reducer, l, default)[0]
There are many other answers suggesting different ways to do this, but they're all batch operations, and some of them throw away the original order. That might be okay depending on what you need, but if you want to iterate over the values in the order of the first instance of each value, and you want to remove the duplicates on-the-fly versus all at once, you could use this generator:
def uniqify(iterable):
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item
This returns a generator/iterator, so you can use it anywhere that you can use an iterator.
for unique_item in uniqify([1, 2, 3, 4, 3, 2, 4, 5, 6, 7, 6, 8, 8]):
    print(unique_item, end=' ')
print()
Output:
1 2 3 4 5 6 7 8
If you do want a list, you can do this:
unique_list = list(uniqify([1, 2, 3, 4, 3, 2, 4, 5, 6, 7, 6, 8, 8]))
print(unique_list)
Output:
[1, 2, 3, 4, 5, 6, 7, 8]
You can use the following function:
def rem_dupes(dup_list):
    yooneeks = []
    for elem in dup_list:
        if elem not in yooneeks:
            yooneeks.append(elem)
    return yooneeks
Example:
my_list = ['this','is','a','list','with','dupicates','in', 'the', 'list']
Usage:
rem_dupes(my_list)
['this', 'is', 'a', 'list', 'with', 'dupicates', 'in', 'the']
Using set :
a = [0,1,2,3,4,3,3,4]
a = list(set(a))
print a
Using unique :
import numpy as np
a = [0,1,2,3,4,3,3,4]
a = np.unique(a).tolist()
print a
Without using set
data=[1, 2, 3, 1, 2, 5, 6, 7, 8]
uni_data=[]
for dat in data:
    if dat not in uni_data:
        uni_data.append(dat)
print(uni_data)
The Magic of Python Built-in type
In Python, it is very easy to handle complicated cases like this using only Python's built-in types.
Let me show you how!
Method 1: General Case
The way (one line of code) to remove duplicated elements from a list while keeping the original order:
line = [1, 2, 3, 1, 2, 5, 6, 7, 8]
new_line = sorted(set(line), key=line.index) # remove duplicated element
print(new_line)
You will get the result
[1, 2, 3, 5, 6, 7, 8]
Method 2: Special Case
Method 1 raises TypeError: unhashable type: 'list' when the elements are themselves lists.
The special case for processing unhashable elements (three lines of code):
line=[['16.4966155686595', '-27.59776154691', '52.3786295521147']
,['16.4966155686595', '-27.59776154691', '52.3786295521147']
,['17.6508629295574', '-27.143305738671', '47.534955022564']
,['17.6508629295574', '-27.143305738671', '47.534955022564']
,['18.8051102904552', '-26.688849930432', '42.6912804930134']
,['18.8051102904552', '-26.688849930432', '42.6912804930134']
,['19.5504702331098', '-26.205884452727', '37.7709192714727']
,['19.5504702331098', '-26.205884452727', '37.7709192714727']
,['20.2929416861422', '-25.722717575124', '32.8500163147157']
,['20.2929416861422', '-25.722717575124', '32.8500163147157']]
tuple_line = [tuple(pt) for pt in line] # convert list of list into list of tuple
tuple_new_line = sorted(set(tuple_line),key=tuple_line.index) # remove duplicated element
new_line = [list(t) for t in tuple_new_line] # convert list of tuple into list of list
print (new_line)
You will get the result :
[
['16.4966155686595', '-27.59776154691', '52.3786295521147'],
['17.6508629295574', '-27.143305738671', '47.534955022564'],
['18.8051102904552', '-26.688849930432', '42.6912804930134'],
['19.5504702331098', '-26.205884452727', '37.7709192714727'],
['20.2929416861422', '-25.722717575124', '32.8500163147157']
]
This works because tuples are hashable and you can convert data between lists and tuples easily.
The code below is a simple way to remove duplicates from a list:
def remove_duplicates(x):
    a = []
    for i in x:
        if i not in a:
            a.append(i)
    return a

print(remove_duplicates([1, 2, 2, 3, 3, 4]))
It returns [1, 2, 3, 4].
Here's the fastest Pythonic solution compared to the others listed in the replies.
Using implementation details of short-circuit evaluation allows us to use a list comprehension, which is fast enough. visited.add(item) always returns None, which evaluates as False, so the right-hand side of or is always the result of such an expression.
Time it yourself
def deduplicate(sequence):
    visited = set()
    adder = visited.add  # get rid of qualification overhead
    out = [adder(item) or item for item in sequence if item not in visited]
    return out
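For instance, a rough timing harness (my own sketch, assuming deduplicate is defined in the same module):
import timeit

data = list(range(1000)) * 2
# time 1000 calls on a list where every element appears twice
print(timeit.timeit(lambda: deduplicate(data), number=1000))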

In Python, how do I take a list and reduce it to a list of duplicates?

I have a list of strings that should be unique. I want to be able to check for duplicates quickly. Specifically, I'd like to be able to take the original list and produce a new list containing any repeated items. I don't care how many times the items are repeated so it doesn't have to have a word twice if there are two duplicates.
Unfortunately, I can't think of a way to do this that wouldn't be clunky. Any suggestions?
EDIT:
Thanks for the answers; I thought I'd make a clarification. I'm not concerned with having a list of uniques for its own sake. I'm generating the list from text files, and I want to know what the duplicates are so I can go into the text files and remove them if any show up.
This code should work:
duplicates = set()
found = set()
for item in source:
    if item in found:
        duplicates.add(item)
    else:
        found.add(item)
groupby from itertools will probably be useful here:
from itertools import groupby
duplicated=[k for (k,g) in groupby(sorted(l)) if len(list(g)) > 1]
Basically you use it to find elements that appear more than once...
NB. the call to sorted is needed, as groupby only works properly if the input is sorted.
This will create the list in one line:
L = [1, 2, 3, 3, 4, 4, 4]
L_dup = set([i for i in L if L.count(i) > 1])
Definitely not the fastest way to do it, but it seems to solve the problem:
>>> lst = [23, 32, 23, None]
>>> set(i for i in lst if lst.count(i) > 1)
{23}
Here's a simple 1-liner:
>>> l = ['a', 'a', 3, 'r', 'r', 's', 's', 2, 3, 't', 'y', 'a', 'w', 'r']
>>> [v for i, v in enumerate(l) if l[i:].count(v) > 1 and l[:i].count(v) == 0]
['a', 3, 'r', 's']
enumerate returns index/value pairs, which we use to slice the input list, determining whether there are any duplicates ahead of the current index in the loop and whether we have already found a duplicate behind it.
If you don't care about the order of the duplicates:
a = [1, 2, 3, 4, 5, 4, 6, 4, 7, 8, 8]
b = sorted(a)
duplicates = set([x for x, y in zip(b[:-1], b[1:]) if x == y])
Personally, I think this is the simplest way to do it with performance O(n). Similar to vartec's solution but no import required and no Python version dependencies to worry about:
def getDuplicates(iterable):
    d = {}
    for i in iterable:
        d[i] = d.get(i, 0) + 1
    return [i for i in d if d[i] > 1]
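A roughly equivalent sketch using collections.Counter from the standard library, in case you prefer not to build the counts by hand:
from collections import Counter

def get_duplicates(iterable):
    # keep every item that occurs more than once; order follows first occurrence on Python 3.7+
    return [item for item, count in Counter(iterable).items() if count > 1]

print(get_duplicates([1, 2, 3, 4, 5, 4, 6, 4, 7, 8, 8]))   # [4, 8]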
The solutions based on set have a small drawback, namely that they only work for hashable objects.
The solution based on itertools.groupby, on the other hand, works for all comparable objects (e.g. dictionaries and lists).
EDIT: OK, this doesn't work since you want duplicates only.
With Python > 2.4:
You have set, just do:
my_filtered_list = list(set(mylist))
A set is a data structure that doesn't have duplicates by nature.
With older Python versions:
my_filtered_list = list(dict.fromkeys(mylist).keys())
Dictionaries map a unique key to a value. We use this "unique" characteristic to get rid of the duplicates.
