Method to get the max distance (step) between values in Python?

Given a list of integers, does there exist a built-in method to find the max distance between consecutive values?
So if I have this list:
[1, 3, 5, 9, 15, 30]
the max step between the values is 15. Does the list object have a method for doing that?

No, list objects have no standard "adjacent differences" method or the like. However, using the pairwise function mentioned in the itertools recipes:
from itertools import tee

def pairwise(iterable):
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)  # izip(a, b) on Python 2
...you can (concisely and efficiently) define
>>> max(b - a for (a, b) in pairwise([1, 3, 5, 9, 15, 30]))
15
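For what it's worth, on Python 3.10+ the recipe ships in the standard library as itertools.pairwise, so you don't need to define the helper yourself:
>>> from itertools import pairwise  # Python 3.10+
>>> max(b - a for a, b in pairwise([1, 3, 5, 9, 15, 30]))
15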

No, but it's trivial to code:
def max_step(data):
    last = data[0]
    dist = 0
    for i in data[1:]:
        dist = max(dist, i - last)
        last = i
    return dist

You can do:
>>> s = [1, 3, 5, 9, 15, 30]
>>> max(x[0] - x[1] for x in zip(s[1:], s))
15
This uses max and zip. It computes the difference between all consecutive elements and returns the max of those.

l = [1, 3, 5, 9, 15, 30]
max(j - i for i, j in zip(l[:-1], l[1:]))
That uses pure Python and gives you the desired output, 15.
If you like to work with numpy, you could do:
import numpy as np
max(np.diff(l))

The list object does not. However, it is pretty quick to write a function that does it:
def max_step(my_list):
    max_step = 0
    for ind in range(len(my_list) - 1):  # xrange on Python 2
        step = my_list[ind + 1] - my_list[ind]
        if step > max_step:
            max_step = step
    return max_step

>>> max_step([1, 3, 5, 9, 15, 30])
15
Or if you prefer it even shorter:
max_step = lambda l: max(l[i + 1] - l[i] for i in range(len(l) - 1))

>>> max_step([1, 3, 5, 9, 15, 30])
15

It is possible to use the reduce() function, but it is not that elegant, as you need some way to keep track of the previous value:
from functools import reduce  # not needed on Python 2

def step(maxStep, cur):
    if isinstance(maxStep, int):
        maxStep = (abs(maxStep - cur), cur)
    return (max(maxStep[0], abs(maxStep[1] - cur)), cur)

l = [1, 3, 5, 9, 15, 30]
print(reduce(step, l)[0])
The solution works by returning the previous value and the accumulated max calculation as a tuple for each iteration.
Also, what is the expected outcome for [10, 20, 30, 5]? Is it 10 or 25? If 25, then you need to add abs() to your calculation.
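To make that edge case concrete, here it is with and without abs(), using plain zip (no helper needed):
l = [10, 20, 30, 5]
max(b - a for a, b in zip(l, l[1:]))       # 10: largest increase
max(abs(b - a) for a, b in zip(l, l[1:]))  # 25: counts the drop from 30 to 5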


How do I find the mean average of a list in Python?
[1, 2, 3, 4] ⟶ 2.5
For Python 3.8+, use statistics.fmean for numerical stability with floats. (Fast.)
For Python 3.4+, use statistics.mean for numerical stability with floats. (Slower.)
xs = [15, 18, 2, 36, 12, 78, 5, 6, 9]
import statistics
statistics.mean(xs) # = 20.11111111111111
For older versions of Python 3, use
sum(xs) / len(xs)
For Python 2, convert len to a float to get float division:
sum(xs) / float(len(xs))
xs = [15, 18, 2, 36, 12, 78, 5, 6, 9]
sum(xs) / len(xs)
Use numpy.mean:
xs = [15, 18, 2, 36, 12, 78, 5, 6, 9]
import numpy as np
print(np.mean(xs))
For Python 3.4+, use mean() from the new statistics module to calculate the average:
from statistics import mean
xs = [15, 18, 2, 36, 12, 78, 5, 6, 9]
mean(xs)
Why would you use reduce() for this when Python has a perfectly cromulent sum() function?
print(sum(l) / float(len(l)))
(The float() is necessary in Python 2 to force Python to do a floating-point division.)
There is a statistics library if you are using Python >= 3.4:
https://docs.python.org/3/library/statistics.html
You may use its mean method like this. Let's say you have a list of numbers whose mean you want to find:
import statistics as s

data = [11, 13, 12, 15, 17]
s.mean(data)
It has other methods too, like stdev, variance, mode, harmonic mean, median, etc., which are very useful.
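A quick sketch of a couple of those on the same data (values in the comments are approximate):
import statistics as s

data = [11, 13, 12, 15, 17]
s.median(data)    # 13
s.stdev(data)     # ~2.408 (sample standard deviation)
s.variance(data)  # 5.8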
Instead of casting to float, you can add 0.0 to the sum:
def avg(l):
    return sum(l, 0.0) / len(l)
EDIT:
I added two other ways to get the average of a list (which are relevant only for Python 3.8+). Here is the comparison that I made:
import timeit
import statistics
import numpy as np
from functools import reduce
import pandas as pd
import math

LIST_RANGE = 10
NUMBERS_OF_TIMES_TO_TEST = 10000
l = list(range(LIST_RANGE))

def mean1():
    return statistics.mean(l)

def mean2():
    return sum(l) / len(l)

def mean3():
    return np.mean(l)

def mean4():
    return np.array(l).mean()

def mean5():
    return reduce(lambda x, y: x + y / float(len(l)), l, 0)

def mean6():
    return pd.Series(l).mean()

def mean7():
    return statistics.fmean(l)

def mean8():
    return math.fsum(l) / len(l)

for func in [mean1, mean2, mean3, mean4, mean5, mean6, mean7, mean8]:
    print(f"{func.__name__} took: ", timeit.timeit(stmt=func, number=NUMBERS_OF_TIMES_TO_TEST))
These are the results I got:
mean1 took: 0.09751558300000002
mean2 took: 0.005496791999999973
mean3 took: 0.07754683299999998
mean4 took: 0.055743208000000044
mean5 took: 0.018134082999999968
mean6 took: 0.6663848750000001
mean7 took: 0.004305374999999945
mean8 took: 0.003203333000000086
Interesting! It looks like math.fsum(l) / len(l) is the fastest way, then statistics.fmean(l), and only then sum(l) / len(l). Nice!
Thank you @Asclepius for showing me these two other ways!
OLD ANSWER:
In terms of efficiency and speed, these are the results that I got testing the other answers:
# test mean calculation
import timeit
import statistics
import numpy as np
from functools import reduce
import pandas as pd

LIST_RANGE = 10
NUMBERS_OF_TIMES_TO_TEST = 10000
l = list(range(LIST_RANGE))

def mean1():
    return statistics.mean(l)

def mean2():
    return sum(l) / len(l)

def mean3():
    return np.mean(l)

def mean4():
    return np.array(l).mean()

def mean5():
    return reduce(lambda x, y: x + y / float(len(l)), l, 0)

def mean6():
    return pd.Series(l).mean()

for func in [mean1, mean2, mean3, mean4, mean5, mean6]:
    print(f"{func.__name__} took: ", timeit.timeit(stmt=func, number=NUMBERS_OF_TIMES_TO_TEST))
and the results:
mean1 took: 0.17030245899968577
mean2 took: 0.002183011999932205
mean3 took: 0.09744236000005913
mean4 took: 0.07070840100004716
mean5 took: 0.022754742999950395
mean6 took: 1.6689282460001778
so clearly the winner is:
sum(l) / len(l)
sum(l) / float(len(l)) is the right answer, but just for completeness you can compute an average with a single reduce:
>>> reduce(lambda x, y: x + y / float(len(l)), l, 0)
20.111111111111114
Note that this can result in a slight rounding error:
>>> sum(l) / float(len(l))
20.111111111111111
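If that drift matters, math.fsum sums the floats exactly before the final division, avoiding the accumulation error of the reduce approach; a quick check with the same list:
>>> import math
>>> math.fsum(l) / len(l)
20.11111111111111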
I tried using the options above, but it didn't work for me.
Try this:
from statistics import mean

n = [11, 13, 15, 17, 19]
print(n)
print(mean(n))
This worked on Python 3.5.
Or use pandas' Series.mean method:
pd.Series(sequence).mean()
Demo:
>>> import pandas as pd
>>> l = [15, 18, 2, 36, 12, 78, 5, 6, 9]
>>> pd.Series(l).mean()
20.11111111111111
From the docs:
Series.mean(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
And here are the docs for this:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mean.html
And the whole documentation:
https://pandas.pydata.org/pandas-docs/stable/10min.html
I had a similar question to solve in one of Udacity's problems. Instead of a built-in function, I coded:
def list_mean(n):
    if n == []:
        return False
    summing = float(sum(n))
    count = float(len(n))
    return summing / count
Much longer than usual, but for a beginner it's quite challenging.
As a beginner, I just coded this:
L = [15, 18, 2, 36, 12, 78, 5, 6, 9]

def average(numbers):
    total = sum(numbers)
    total = float(total)
    return total / len(numbers)

print(average(L))
If you wanted to get more than just the mean (aka average) you might check out scipy stats:
from scipy import stats
l = [15, 18, 2, 36, 12, 78, 5, 6, 9]
print(stats.describe(l))
# DescribeResult(nobs=9, minmax=(2, 78), mean=20.11111111111111,
# variance=572.3611111111111, skewness=1.7791785448425341,
# kurtosis=1.9422716419666397)
In order to use reduce() for taking a running average, you'll need to track the total but also the total number of elements seen so far. Since that's not a trivial element in the list, you'll also have to pass reduce an extra argument to fold into.
>>> from functools import reduce  # not needed on Python 2
>>> l = [15, 18, 2, 36, 12, 78, 5, 6, 9]
>>> running_average = reduce(lambda aggr, elem: (aggr[0] + elem, aggr[1] + 1), l, (0.0, 0))
>>> running_average
(181.0, 9)
>>> running_average[0] / running_average[1]
20.11111111111111
Both can give you similar values for integers, agreeing to at least 10 decimal places here. But if you are really dealing with long floating-point accumulations, the two can differ slightly; which approach to use depends on what you want to achieve. (Python 2 session below; note that / on two integers truncates there.)
>>> l = [15, 18, 2, 36, 12, 78, 5, 6, 9]
>>> print reduce(lambda x, y: x + y, l) / len(l)
20
>>> sum(l)/len(l)
20
Floating values:
>>> print reduce(lambda x, y: x + y, l) / float(len(l))
20.1111111111
>>> print sum(l)/float(len(l))
20.1111111111
@Andrew Clark was correct in his statement.
Suppose that
x = [
    [-5.01, -5.43, 1.08, 0.86, -2.67, 4.94, -2.51, -2.25, 5.56, 1.03],
    [-8.12, -3.48, -5.52, -3.78, 0.63, 3.29, 2.09, -2.13, 2.86, -3.33],
    [-3.68, -3.54, 1.66, -4.11, 7.39, 2.08, -2.59, -6.94, -2.26, 4.33]
]
You can notice that x has dimension 3*10. If you need to get the mean for each row, you can type this:
theMean = np.mean(x, axis=1)
Don't forget to import numpy as np.
l = [15, 18, 2, 36, 12, 78, 5, 6, 9]
l = list(map(float, l))  # list() not needed on Python 2, where map returns a list
print('%.2f' % (sum(l) / len(l)))
Find the average of a list by using the following Python code:
l = [15, 18, 2, 36, 12, 78, 5, 6, 9]
print(sum(l) / len(l))
(Using // instead would be floor division and would throw away the fractional part of the mean.)
Try this, it's easy:
print(reduce(lambda x, y: x + y, l) / (len(l) * 1.0))
or like posted previously:
sum(l) / (len(l) * 1.0)
The 1.0 is to make sure you get a floating-point division.
Combining a couple of the above answers, I've come up with the following, which works with reduce and doesn't assume you have L available inside the reducing function:
from functools import reduce  # built in on Python 2
from operator import truediv

L = [15, 18, 2, 36, 12, 78, 5, 6, 9]

def sum_and_count(x, y):
    try:
        return (x[0] + y, x[1] + 1)
    except TypeError:
        return (x + y, 2)

truediv(*reduce(sum_and_count, L))
# -> 20.11111111111111
I want to add just another approach:
import itertools, operator

list(itertools.accumulate(l, operator.add))[-1] / len(l)
You can make a function for averages. Usage:
average(21, 343, 2983)  # you can pass as many arguments as you want
Here is the code:
def average(*args):
    total = 0
    for num in args:
        total += num
    return total / len(args)
*args allows for any number of arguments.
A simple solution is the avemedi-lib package:
pip install avemedi_lib
Then include it in your script:
from avemedi_lib.functions import average, get_median, get_median_custom

test_even_array = [12, 32, 23, 43, 14, 44, 123, 15]
test_odd_array = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Getting average value of list items
print(average(test_even_array))  # 38.25

# Getting median value for ordered or unordered numbers list
print(get_median(test_even_array))  # 27.5
print(get_median(test_odd_array))  # 5

# You can use your own sorted and your count functions
a = sorted(test_even_array)
n = len(a)
print(get_median_custom(a, n))  # 27.5
Enjoy.
numbers = [0, 1, 2, 3]
numbers[0] = input("Please enter a number")
numbers[1] = input("Please enter a second number")
numbers[2] = input("Please enter a third number")
numbers[3] = input("Please enter a fourth number")
print(numbers)
print("Finding the Average")
# parentheses are needed here; without them only the last term is divided by 4
average = (int(numbers[0]) + int(numbers[1]) + int(numbers[2]) + int(numbers[3])) / 4
print(average)

Is there a way to find the nᵗʰ entry in itertools.combinations() without converting the entire thing to a list?

I am using the itertools library module in Python.
I am interested in the different ways to choose 15 of the first 26000 positive integers. The function itertools.combinations(range(1, 26000), 15) enumerates all of these possible subsets, in lexicographic order.
The binomial coefficient 26000 choose 15 is a very large number, on the order of 10^54. However, Python has no problem running the code y = itertools.combinations(range(1, 26000), 15), since the iterator is built lazily.
If I try to do y[3] to find just the 3rd entry, I get a TypeError. This means I would need to convert it into a list first; the problem is that trying to convert it into a list gives a MemoryError.
Converting it into a list does work for smaller combinations, like 6 choose 3.
My question is:
Is there a way to access specific elements in itertools.combinations() without converting it into a list?
I want to be able to access, say, the first 10000 of these ~10^54 enumerated 15-element subsets.
Any help is appreciated. Thank you!
You can use a generator expression:
comb = itertools.combinations(range(1,26000), 15)
comb1000 = (next(comb) for i in range(1000))
To jump directly to the nth combination, here is an itertools recipe:
def nth_combination(iterable, r, index):
    """Equivalent to list(combinations(iterable, r))[index]"""
    pool = tuple(iterable)
    n = len(pool)
    if r < 0 or r > n:
        raise ValueError
    c = 1
    k = min(r, n - r)
    for i in range(1, k + 1):
        c = c * (n - k + i) // i
    if index < 0:
        index += c
    if index < 0 or index >= c:
        raise IndexError
    result = []
    while r:
        c, n, r = c * r // n, n - 1, r - 1
        while index >= c:
            index -= c
            c, n = c * (n - r) // n, n - 1
        result.append(pool[-1 - n])
    return tuple(result)
It's also available in more_itertools.nth_combination
>>> import more_itertools # pip install more-itertools
>>> more_itertools.nth_combination(range(1,26000), 15, 123456)
(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 18, 19541)
To instantly "fast-forward" a combinations instance to this position and continue iterating, you can set the state to the previously yielded state (note: 0-based state vector) and continue from there:
>>> comb = itertools.combinations(range(1,26000), 15)
>>> comb.__setstate__((0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 17, 19540))
>>> next(comb)
(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 18, 19542)
If you want to access the first few elements, it's pretty straightforward with islice:
import itertools
print(list(itertools.islice(itertools.combinations(range(1,26000), 15), 1000)))
Note that islice internally iterates the combinations up to the specified point, so it can't magically give you the middle elements without iterating all the way there. You'd have to go down the route of computing the elements you want combinatorially in that case.
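For example, you can combine the nth_combination recipe with the fast-forward trick from above to grab a window from deep inside the sequence without iterating to it (a sketch: it assumes nth_combination from the earlier answer is in scope, and __setstate__ is a CPython implementation detail):
import itertools

start = 123456
first = nth_combination(range(1, 26000), 15, start)  # jump straight to index `start`
comb = itertools.combinations(range(1, 26000), 15)
comb.__setstate__(tuple(v - 1 for v in first))       # 0-based positions of `first`
window = [first] + list(itertools.islice(comb, 2))   # combinations start..start+2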

Insert a value to a sorted list via list comprehension Python

I have an assignment to add a value to a sorted list using a list comprehension. I'm not allowed to import modules, only use list comprehensions, preferably as a one-liner. I'm also not allowed to create functions and use them.
I'm completely in the dark with this problem. Hopefully someone can help :)
Edit: I don't need to mutate the current list. In fact, I'm trying my solution right now with .pop; I need to create a new list with the element properly added, but I still can't figure out much.
try:
    sorted_a.insert(next(i for i, x in enumerate(sorted_a) if value <= x), value)
except StopIteration:
    sorted_a.append(value)
I guess ....
With your new problem statement:
[x for x in sorted_a if x <= value] + [value] + [y for y in sorted_a if y > value]
You could certainly improve the big-O complexity.
For bisecting the list, you may use bisect.bisect (for other readers referencing this answer in the future):
>>> from bisect import bisect
>>> my_list = [2, 4, 6, 9, 10, 15, 18, 20]
>>> num = 12
>>> index = bisect(my_list, num)
>>> my_list[:index] + [num] + my_list[index:]
[2, 4, 6, 9, 10, 12, 15, 18, 20]
But since you cannot import libraries, you may use sum and zip with a list comprehension expression (re-appending the final element, which the zip over adjacent pairs drops):
>>> my_list = [2, 4, 6, 9, 10, 15, 18, 20]
>>> num = 12
>>> sum([[i, num] if i < num < j else [i] for i, j in zip(my_list, my_list[1:])], []) + [my_list[-1]]
[2, 4, 6, 9, 10, 12, 15, 18, 20]
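Edge cases (a value below the first element or beyond the last) are easier to handle by computing the insertion point explicitly; a minimal sketch with no imports:
my_list = [2, 4, 6, 9, 10, 15, 18, 20]
num = 12
idx = len([x for x in my_list if x < num])  # insertion point via list comprehension
my_list[:idx] + [num] + my_list[idx:]
# [2, 4, 6, 9, 10, 12, 15, 18, 20]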

Building up a counting function

I need to build up a counting function starting from a dictionary. The dictionary is a classical bag of words and looks as follows:
D = {'the': 5, 'pow': 2, 'poo': 2, 'row': 2, 'bub': 1, 'bob': 1}
I need a function that, for a given integer, returns the number of words with at least that number of occurrences. In the example, F(2) = 4: all words but 'bub' and 'bob'.
First of all I build up the inverse dictionary of D:
ID = {5: 1, 2: 3, 1: 2}
I think I'm fine with that. Then here is the code:
values = list(ID.keys())
values.sort(reverse=True)

Lk = []
Nw = 0
for val in values:
    Nw = Nw + ID[val]
    Lk.append([Nw, val])
The code works fine, but I do not like it. The point is that I would prefer to use a list comprehension to build up Lk; also, I really hate the Nw variable I have used. It does not seem Pythonic at all.
You can create a sorted array of your word counts, then find the insertion point with np.searchsorted to get how many items lie on either side of it. np.searchsorted is very efficient and fast; if your dictionary doesn't change often, this call is basically free compared to other methods.
import numpy as np

def F(n, D):
    # creating the array each time would be slow; if the dictionary
    # doesn't change, move this outside the function
    arr = np.array(list(D.values()))  # list() needed on Python 3
    arr.sort()
    L = len(arr)
    return L - np.searchsorted(arr, n)  # this line does all the work...
What's going on:
First we take just the word counts (and convert them to a sorted array)...
D = {"I'm": 12, "pretty": 3, "sure": 12, "the": 45, "Donald": 12, "is": 3, "on": 90, "crack": 11}
vals = np.array(list(D.values()))
# vals = array([90, 12, 12, 3, 11, 12, 45, 3])  (dict ordering may differ)
vals.sort()
# vals = array([ 3,  3, 11, 12, 12, 12, 45, 90])
Then if we want to know how many values are greater than or equal to n, we simply find the length of the list beyond the first position where n could be inserted. We do this by determining the leftmost index where n would be inserted (the insertion point) and subtracting that from the total number of positions (len):
# how many are >= 10?
# insertion point for a value of 10:
#
#          | index: 2
#          v
# array([ 3,  3, 11, 12, 12, 12, 45, 90])
# find how many elements there are:
# len(arr) = 8
# subtract: 8 - 2 = 6 elements that are >= 10
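A quick check of the same numbers in a REPL:
>>> import numpy as np
>>> arr = np.array([3, 3, 11, 12, 12, 12, 45, 90])
>>> np.searchsorted(arr, 10)  # leftmost insertion point
2
>>> len(arr) - np.searchsorted(arr, 10)
6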
A fun little trick for counting things: True has a numerical value of 1 and False has a numerical value of 0. So we can do things like
sum(v >= k for v in D.values())
where k is the value you're comparing against.
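With the question's dictionary, that one-liner gives F(2) directly:
>>> D = {'the': 5, 'pow': 2, 'poo': 2, 'row': 2, 'bub': 1, 'bob': 1}
>>> sum(v >= 2 for v in D.values())
4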
collections.Counter() is an ideal choice for this. Use it on the dict.values() list. Also, you don't need to install it explicitly the way you do numpy. Sample example:
>>> from collections import Counter
>>> D = {'the': 5, 'pow': 2, 'poo': 2, 'row': 2, 'bub': 1, 'bob': 1}
>>> c = Counter(D.values())
>>> c
Counter({2: 3, 1: 2, 5: 1})
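That counter is exactly the inverse dictionary ID from the question. To finish the job and get F(n), sum the tallies for occurrence counts of at least n (a small sketch of mine on top of this answer):
>>> F = lambda n: sum(tally for occurrences, tally in c.items() if occurrences >= n)
>>> F(2)
4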

In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*? [duplicate]

This question already has answers here:
How do I remove duplicates from a list, while preserving order?
(31 answers)
Closed 8 years ago.
For example:
>>> x = [1, 1, 2, 'a', 'a', 3]
>>> unique(x)
[1, 2, 'a', 3]
Assume list elements are hashable.
Clarification: The result should keep the first duplicate in the list. For example, [1, 2, 3, 2, 3, 1] becomes [1, 2, 3].
def unique(items):
    found = set()
    keep = []
    for item in items:
        if item not in found:
            found.add(item)
            keep.append(item)
    return keep

print(unique([1, 1, 2, 'a', 'a', 3]))
Using:
lst = [8, 8, 9, 9, 7, 15, 15, 2, 20, 13, 2, 24, 6, 11, 7, 12, 4, 10, 18, 13, 23, 11, 3, 11, 12, 10, 4, 5, 4, 22, 6, 3, 19, 14, 21, 11, 1, 5, 14, 8, 0, 1, 16, 5, 10, 13, 17, 1, 16, 17, 12, 6, 10, 0, 3, 9, 9, 3, 7, 7, 6, 6, 7, 5, 14, 18, 12, 19, 2, 8, 9, 0, 8, 4, 5]
And using the timeit module:
$ python -m timeit -s 'import uniquetest' 'uniquetest.etchasketch(uniquetest.lst)'
And so on for the various other functions (which I named after their posters), I have the following results (on my first generation Intel MacBook Pro):
Allen: 14.6 µs per loop [1]
Terhorst: 26.6 µs per loop
Tarle: 44.7 µs per loop
ctcherry: 44.8 µs per loop
Etchasketch 1 (short): 64.6 µs per loop
Schinckel: 65.0 µs per loop
Etchasketch 2: 71.6 µs per loop
Little: 89.4 µs per loop
Tyler: 179.0 µs per loop
[1] Note that Allen modifies the list in place – I believe this has skewed the time, in that the timeit module runs the code 100000 times and 99999 of them are with the dupe-less list.
Summary: Straight-forward implementation with sets wins over confusing one-liners :-)
Update: on Python 3.7+:
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
Old answer:
Here is the fastest solution so far (for the following input):
def del_dups(seq):
    seen = {}
    pos = 0
    for item in seq:
        if item not in seen:
            seen[item] = True
            seq[pos] = item
            pos += 1
    del seq[pos:]

lst = [8, 8, 9, 9, 7, 15, 15, 2, 20, 13, 2, 24, 6, 11, 7, 12, 4, 10, 18,
       13, 23, 11, 3, 11, 12, 10, 4, 5, 4, 22, 6, 3, 19, 14, 21, 11, 1,
       5, 14, 8, 0, 1, 16, 5, 10, 13, 17, 1, 16, 17, 12, 6, 10, 0, 3, 9,
       9, 3, 7, 7, 6, 6, 7, 5, 14, 18, 12, 19, 2, 8, 9, 0, 8, 4, 5]
del_dups(lst)
print(lst)
# -> [8, 9, 7, 15, 2, 20, 13, 24, 6, 11, 12, 4, 10, 18, 23, 3, 5, 22, 19, 14,
#     21, 1, 0, 16, 17]
Dictionary lookup is slightly faster than the set's in Python 3.
What's going to be fastest depends on what percentage of your list is duplicates. If it's nearly all duplicates, with few unique items, creating a new list will probably be faster. If it's mostly unique items, removing them from the original list (or a copy) will be faster.
Here's one for modifying the list in place:
def unique(items):
    seen = set()
    for i in range(len(items) - 1, -1, -1):  # xrange on Python 2
        it = items[i]
        if it in seen:
            del items[i]
        else:
            seen.add(it)
Iterating backwards over the indices ensures that removing items doesn't affect the iteration.
This is the fastest in-place method I've found (assuming a large proportion of duplicates):
def unique(l):
    s = set()
    n = 0
    for x in l:
        if x not in s:
            s.add(x)
            l[n] = x
            n += 1
    del l[n:]
This is 10% faster than Allen's implementation, on which it is based (timed with timeit.repeat, JIT compiled by psyco). It keeps the first instance of any duplicate.
repton-infinity: I'd be interested if you could confirm my timings.
Obligatory generator-based variation:
def unique(seq):
    seen = set()
    for x in seq:
        if x not in seen:
            seen.add(x)
            yield x
This may be the simplest way:
list(OrderedDict.fromkeys(iterable))
As of Python 3.5, OrderedDict is implemented in C, so this is now the shortest, cleanest, and fastest.
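For example, with the list from the question:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys([1, 1, 2, 'a', 'a', 3]))
[1, 2, 'a', 3]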
Taken from http://www.peterbe.com/plog/uniqifiers-benchmark
def f5(seq, idfun=None):
# order preserving
if idfun is None:
def idfun(x): return x
seen = {}
result = []
for item in seq:
marker = idfun(item)
# in old Python versions:
# if seen.has_key(marker)
# but in new ones:
if marker in seen: continue
seen[marker] = 1
result.append(item)
return result
One-liner:
new_list = reduce(lambda x,y: x+[y][:1-int(y in x)], my_list, [])
An in-place one-liner for this:
>>> x = [1, 1, 2, 'a', 'a', 3]
>>> [ item for pos,item in enumerate(x) if x.index(item)==pos ]
[1, 2, 'a', 3]
This is the fastest one, comparing all the stuff from this lengthy discussion and the other answers given here, referring to this benchmark. It's another 25% faster than the fastest function from the discussion, f8. Thanks to David Kirby for the idea.
def uniquify(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if x not in seen and not seen_add(x)]
Some time comparison:
$ python uniqifiers_benchmark.py
* f8_original 3.76
* uniquify 3.0
* terhorst 5.44
* terhorst_localref 4.08
* del_dups 4.76
You can actually do something really cool in Python to solve this. You can create a list comprehension that references itself as it is being built (a CPython 2-only hack; Python 3 comprehensions no longer expose the list under construction). As follows:
# remove duplicates...
def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]'].__self__]
Edit: I removed the "self", and it works on Mac OS X, Python 2.5.1.
The _[1] is Python's "secret" reference to the new list. The above, of course, is a little messy, but you can adapt it to fit your needs as necessary. For example, you can actually write a function that returns a reference to the comprehension; it would look more like:
return [x for x in my_list if x not in this_list()]
Do the duplicates necessarily need to be in the list in the first place? There's no overhead as far as looking up elements, but there is a little bit more overhead in adding elements (though the overhead should be O(1)).
>>> x = []
>>> y = set()
>>> def add_to_x(val):
... if val not in y:
... x.append(val)
... y.add(val)
... print x
... print y
...
>>> add_to_x(1)
[1]
set([1])
>>> add_to_x(1)
[1]
set([1])
>>> add_to_x(1)
[1]
set([1])
>>>
Remove duplicates and preserve order:
This is a fast 2-liner that leverages built-in functionality of list comprehensions and dicts.
x = [1, 1, 2, 'a', 'a', 3]

tmpUniq = {}  # temp variable used below
results = [tmpUniq.setdefault(i, i) for i in x if i not in tmpUniq]
print(results)
# [1, 2, 'a', 3]
The dict.setdefault() function returns the value as well as adding it to the temp dict directly in the list comprehension. Using the built-in functions and the hashes of the dict will work to maximize efficiency for the process.
O(n) if dict is a hash, O(n log n) if dict is a tree; simple and fixed. Thanks to Matthew for the suggestion. Sorry I don't know the underlying types.
def unique(x):
    output = []
    y = {}
    for item in x:
        y[item] = ""
    for item in x:
        if item in y:
            output.append(item)
            del y[item]  # remove it so later duplicates are skipped
    return output
has_key in Python is O(1) (on Python 3, use the in operator instead). Insertion and retrieval from a hash are also O(1). This loops through n items twice, so it is O(n).
def unique(list):
    s = {}
    output = []
    for x in list:
        count = 1
        if x in s:
            count = s[x] + 1
        s[x] = count
    for x in list:
        count = s[x]
        if count > 0:
            s[x] = 0
            output.append(x)
    return output
There are some great, efficient solutions here. However, for anyone not concerned with the absolute most efficient O(n) solution, I'd go with the simple one-liner O(n^2*log(n)) solution:
def unique(xs):
    return sorted(set(xs), key=lambda x: xs.index(x))
or the more efficient two-liner O(n*log(n)) solution:
def unique(xs):
    positions = dict((e, pos) for pos, e in reversed(list(enumerate(xs))))
    return sorted(set(xs), key=lambda x: positions[x])
Here are two recipes from the itertools documentation (Python 2 vintage; on Python 3, use itertools.filterfalse and the built-in map in place of ifilterfalse and imap):
from itertools import groupby, ifilterfalse, imap
from operator import itemgetter

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in ifilterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element

def unique_justseen(iterable, key=None):
    "List unique elements, preserving order. Remember only the element just seen."
    # unique_justseen('AAAABBBCCDAABBB') --> A B C D A B
    # unique_justseen('ABBCcAD', str.lower) --> A B C A D
    return imap(next, imap(itemgetter(1), groupby(iterable, key)))
I have no experience with python, but an algorithm would be to sort the list, then remove duplicates (by comparing to previous items in the list), and finally find the position in the new list by comparing with the old list.
Longer answer: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
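A sketch of that sort-then-rescan idea in Python (my own illustration, not the poster's code; it assumes the elements are mutually comparable, so it won't work on the mixed int/str example under Python 3):
def unique_via_sort(items):
    deduped = []
    for item in sorted(items):
        if not deduped or item != deduped[-1]:  # skip runs of equal neighbours
            deduped.append(item)
    return sorted(deduped, key=items.index)  # restore original first-seen order

unique_via_sort([1, 2, 3, 2, 3, 1])  # [1, 2, 3]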
>>> def unique(list):
... y = []
... for x in list:
... if x not in y:
... y.append(x)
... return y
If you take out the empty list from the call to set() in Terhorst's answer, you get a little speed boost.
Change:
found = set([])
to:
found = set()
However, you don't need the set at all.
def unique(items):
    keep = []
    for item in items:
        if item not in keep:
            keep.append(item)
    return keep
Using timeit I got these results:
with set([]) -- 4.97210427363
with set() -- 4.65712377445
with no set -- 3.44865284975
x = []  # your list of items that includes duplicates
# Assuming that your list contains items of only immutable data types:
# dict keys are unique, so the comprehension dedups while keeping
# first-seen order (guaranteed on Python 3.7+)
dict_x = {item: item for item in x}
# Average t.c. = O(n) * O(1); the dict comprehension adds a certain
# efficiency and Pythonic feel to it.
x = list(dict_x)  # if you want your output in list format
>>> x = [1, 1, 2, 'a', 'a', 3]
>>> y = [_x for _x in x if _x not in locals()['_[1]']]
>>> y
[1, 2, 'a', 3]
"locals()['_[1]']" is the "secret name" of the list being created (a CPython 2-only trick).
I don't know if this one is fast or not, but at least it is simple.
Simply convert it first to a set and then again to a list (note that this does not preserve the original order):
def unique(container):
    return list(set(container))
One pass. (Note: this compares each item only with the previous one, so it removes consecutive duplicates, which is enough for this sample list.)
a = [1, 1, 'a', 'b', 'c', 'c']
new_list = []
prev = None

while 1:
    try:
        i = a.pop(0)
        if i != prev:
            new_list.append(i)
            prev = i
    except IndexError:
        break
I haven't done any tests, but one possible algorithm might be to create a second list, and iterate through the first list. If an item is not in the second list, add it to the second list.
x = [1, 1, 2, 'a', 'a', 3]
y = []
for each in x:
    if each not in y:
        y.append(each)
a = [1, 2, 3, 4, 5, 7, 7, 8, 8, 9, 9, 3, 45]

def unique(l):
    ids = {}
    for item in l:
        if item not in ids:  # ids.has_key(item) on Python 2
            ids[item] = item
    return list(ids.keys())

print(a)
print(unique(a))
Inserting the elements takes Θ(n). Checking whether an element exists takes constant time. Testing all the items also takes Θ(n), so we can see that this solution takes Θ(n) overall.
Bear in mind that dictionaries in Python are implemented as hash tables.
