Python for loop get index and index+1 - python

I am trying to write a function in Python. The function is based on an algorithm: it is a summation using the sides of polygons with n sides.
For each "loop" you add n[i] + n[i+1].
Can you do this with for loops in Python?
This is a very easy thing to do in languages like Java and C++, but the nature of Python for loops makes it less obvious. Can for loops accomplish this, or should while loops be used?

You can use zip with a list comprehension here:
>>> lis = range(10)
>>> [x+y for x, y in zip(lis, lis[1:])]
[1, 3, 5, 7, 9, 11, 13, 15, 17]
If the list is huge, you can use itertools.izip and itertools.tee (Python 2):
from itertools import izip, tee
it1, it2 = tee(lis) # creates two iterators from the list (or any iterable)
next(it2) #drop the first item
print [x+y for x, y in izip(it1, it2)]
#[1, 3, 5, 7, 9, 11, 13, 15, 17]
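Note that izip was removed in Python 3, where the built-in zip is already lazy. A minimal Python 3 sketch of the same idea (my adaptation, not part of the original answer):
from itertools import tee

lis = range(10)
it1, it2 = tee(lis)  # two independent iterators over the same data
next(it2)            # drop the first item from the second iterator
print([x + y for x, y in zip(it1, it2)])
# [1, 3, 5, 7, 9, 11, 13, 15, 17]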

for i in range(N - 1): # i = 0, 1, ... N-2, so i+1 never runs past the end
    val = n[i] + n[i+1]
if you want to 'wrap around', you can write
for i in range(N): # i = 0, 1, ... N-1
    val = n[i] + n[(i+1) % N]
.. or use the fact that n[-1] refers to the last element:
for i in range(N): # i = 0, 1, ... N-1
    val = n[i-1] + n[i] # [N-1]+[0], [0]+[1], ... [N-2]+[N-1]
This approach will likely be slower but may be easier to follow than zips and iterations.
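As a concrete illustration, here is a runnable sketch of the wrap-around version (the side lengths are made up for the example):
n = [3, 1, 4, 1, 5]  # hypothetical side lengths of a polygon
N = len(n)
sums = [n[i] + n[(i + 1) % N] for i in range(N)]
print(sums)  # [4, 5, 5, 6, 8]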

Related

How to cumsum all values at tuple index n, from a list of tuples?

In the following code I want to add up the second elements of list=[(0,3),(2,6),(1,10)] in a for loop. The first iteration should give 3+6=9, and the second iteration should add the output of the previous iteration (9) to 10, giving 9+10=19, so the final output is S=[9,19]. I am not sure how to do it. Should I add another loop to my code?
T=[(0,3),(2,6),(1,10)]
S=[]
for i in range(len(T)):
    b = T[0][i] + T[0][i+1]
    S.append(b)
Use itertools.accumulate:
from itertools import accumulate

spam = [(0,3),(2,6),(1,10)]
print(list(accumulate(item[-1] for item in spam))[1:])
output
[9, 19]
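For clarity, the same idea broken into explicit steps (my elaboration of the answer above):
from itertools import accumulate

spam = [(0, 3), (2, 6), (1, 10)]
seconds = [b for _, b in spam]       # [3, 6, 10]
running = list(accumulate(seconds))  # [3, 9, 19]
print(running[1:])                   # [9, 19]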
Use zip to combine the values from the tuples with the same index.
Use an assignment expression (from Python 3.8), in a list comprehension, to sum the values in the second tuple, T[1], of T.
T = [(0,3),(2,6),(1,10)]
T = list(zip(*T))
print(T)
[out]:
[(0, 2, 1), (3, 6, 10)]
# use an assignment expression to sum T[1]
total = T[1][0] # 3
S = [total := total + v for v in T[1][1:]]
print(S)
[out]:
[9, 19]
Just modify your code as below:
T=[(0,3),(2,6),(1,10)]
S=[]
b = T[0][1]
for i in range(1, len(T)):
    b += T[i][1]
    S.append(b)
This should help you:
T=[(0,3),(2,6),(1,10)]
lst = [T[i][1] for i in range(len(T))]
final_lst = []
for x in range(2, len(lst)+1):
    final_lst.append(sum(lst[:x]))
print(final_lst)
Output:
[9, 19]
If you prefer a list comprehension, then use this line instead of the last for loop:
[final_lst.append(sum(lst[:x])) for x in range(2, len(lst)+1)]
Output:
[9, 19]
Here is a solution with native recursion:
import operator

mylist = [(0,3),(2,6),(1,10)]

def accumulate(L, i, op):
    def iter(result, rest, out):
        if rest == []:
            return out
        else:
            r = op(result, rest[0][i-1])
            return iter(r, rest[1:], out + [r])
    return iter(L[0][i-1], L[1:], [])
print(accumulate(mylist, 2, operator.add))
print(accumulate(mylist, 1, operator.add))
print(accumulate(mylist, 2, operator.mul))
# ==>
# [9, 19]
# [2, 3]
# [18, 180]

Is there a way to find the nᵗʰ entry in itertools.combinations() without converting the entire thing to a list?

I am using the itertools library module in python.
I am interested in the different ways to choose 15 of the first 26,000 positive integers. The call itertools.combinations(range(1,26000), 15) enumerates all of these possible subsets in lexicographic order.
The binomial coefficient 26000 choose 15 is a very large number, on the order of 10^54. However, Python has no problem running y = itertools.combinations(range(1,26000), 15), since the result is a lazy iterator.
If I try y[3] to find just the 3rd entry, I get a TypeError, which suggests I need to convert it into a list first. The problem is that trying to convert it into a list gives a MemoryError.
Converting to a list does work for smaller combinations, like 6 choose 3.
My question is:
Is there a way to access specific elements in itertools.combinations() without converting it into a list?
I want to be able to access, say, the first 10000 of these ~10^54 enumerated 15-element subsets.
Any help is appreciated. Thank you!
You can use a generator expression:
comb = itertools.combinations(range(1,26000), 15)
comb1000 = (next(comb) for i in range(1000))
To jump directly to the nth combination, here is an itertools recipe:
def nth_combination(iterable, r, index):
    """Equivalent to list(combinations(iterable, r))[index]"""
    pool = tuple(iterable)
    n = len(pool)
    if r < 0 or r > n:
        raise ValueError
    c = 1
    k = min(r, n-r)
    for i in range(1, k+1):
        c = c * (n - k + i) // i
    if index < 0:
        index += c
    if index < 0 or index >= c:
        raise IndexError
    result = []
    while r:
        c, n, r = c*r//n, n-1, r-1
        while index >= c:
            index -= c
            c, n = c*(n-r)//n, n-1
        result.append(pool[-1-n])
    return tuple(result)
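A quick sanity check of the recipe against a small, fully enumerable case (my addition):
import itertools

pool = range(1, 7)
# index 4 of 6-choose-3 in lexicographic order is (1, 3, 4)
assert nth_combination(pool, 3, 4) == list(itertools.combinations(pool, 3))[4]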
It's also available as more_itertools.nth_combination:
>>> import more_itertools # pip install more-itertools
>>> more_itertools.nth_combination(range(1,26000), 15, 123456)
(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 18, 19541)
To instantly "fast-forward" a combinations instance to this position and continue iterating, you can set the state to the previously yielded state (note: 0-based state vector) and continue from there:
>>> comb = itertools.combinations(range(1,26000), 15)
>>> comb.__setstate__((0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 17, 19540))
>>> next(comb)
(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 18, 19542)
If you want to access the first few elements, it's pretty straightforward with islice:
import itertools
print(list(itertools.islice(itertools.combinations(range(1,26000), 15), 1000)))
Note that islice internally iterates the combinations up to the specified point, so it can't magically give you the middle elements without iterating all the way there. You'd have to go down the route of computing the elements you want combinatorially in that case.
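For example, pulling a window out of the middle still works with islice; it just iterates past everything before the start (my illustration):
import itertools

comb = itertools.combinations(range(1, 26000), 15)
# take combinations 500..509 (0-based); islice silently consumes the first 500
window = list(itertools.islice(comb, 500, 510))
print(len(window))  # 10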

How to add every nth entry in a python list to each other?

Let's say I have a python list:
[4,5,25,60,19,2]
How can I add every nth entry to each other?
e.g. I split the list into 3 pairs [ 4,5 / 25,60 / 19,2 ], then add these entries elementwise to get a new list:
[4+25+19, 5+60+2]
which gives me the sums:
[48, 67]
For a more complex example, let's say I have 2000 entries in my list. I want to add every 100th entry to the one before it, so I get 100 entries in the new list. Each entry would then be the sum of every 100th entry.
Iteratively extract your slices and sum them up.
>>> l = [4,5,25,60,19,2]
>>> [sum(l[i::2]) for i in range(len(l) // 3)]
[48, 67]
You may have to do a bit more to handle corner cases but this should be a good start for you.
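A generalized version of the same slicing idea, assuming the list length is an exact multiple of the group size k (my sketch):
def sum_every_nth(l, k):
    # sum the items whose indices are congruent modulo k
    return [sum(l[i::k]) for i in range(k)]

print(sum_every_nth([4, 5, 25, 60, 19, 2], 2))  # [48, 67]
For the 2000-entry case in the question, sum_every_nth(data, 100) would give the 100 sums.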
The itertools documentation has a recipe function called grouper; you can import it from more_itertools (needs a manual install) or copy-paste it.
It works like this:
>>> from more_itertools import grouper
>>> l = [4,5,25,60,19,2]
>>> list(grouper(2, l)) # 2 = len(l)/3
[(4, 5), (25, 60), (19, 2)]
You can transpose the output of grouper with zip and apply sum to each group.
>>> [sum(g) for g in zip(*grouper(2, l))]
[48, 67]
I prefer this to manually fiddling with indices. In addition, it works with any iterable, not just lists. A generic iterable may not support indexing or slicing, but it will always be able to produce a stream of values.
Using the chunks function taken from here, you could transpose the chunks with zip and write the following:
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

l = [4,5,25,60,19,2]
print([sum(e) for e in zip(*chunks(l, 2))])
# [48, 67]
There may be a smart sequence of list operations that you could use, but I couldn't think of any. So instead I just wrote a loop that goes from 0 to n-1 and, within the confines of the list, adds the elements stepping by n. So if n=3, you go 0, 3, 6, etc.; then 1, 4, 7, etc., and put the sums into the output list.
The code is attached below. Hope it helps.
list1 = [7, 6, -5.4, 6, -4, 55, -21, 45, 67, -9, -8, -7, 8, 9, 11, 110, -0.8, -9.8, 1.1]
n = 5
list2 = []
sum_elem = 0
for i in range(n):
    sum_elem = 0
    j = i
    while j < len(list1):
        sum_elem += list1[j]
        j += n
    list2.append(sum_elem)
print(list2)

Remove items from a list in Python based on previous items in the same list

Say I have a simple list of numbers, e.g.
simple_list = range(100)
I would like to shorten this list so that the gaps between consecutive values are greater than or equal to 5, for example, so it should look like
[0, 5, 10...]
FYI the actual list does not have regular increments but it is ordered
I'm trying to use list comprehension to do it but the below obviously returns an empty list:
simple_list2 = [x for x in simple_list if x-simple_list[max(0,x-1)] >= 5]
I could do it in a loop by appending to a list if the condition is met but I'm wondering specifically if there is a way to do it using list comprehension?
This is not a use case for a comprehension; you have to use a loop, as there could be any number of consecutive elements with less than five between them. You cannot just check the next element, or any fixed number of elements ahead, unless you knew the data had some very specific format:
simple_list = list(range(100)) # list(...) so the slice assignment below works in Python 3

def f(l):
    it = iter(l)
    i = next(it)
    for ele in it:
        if abs(ele - i) >= 5:
            yield i
            i = ele
    yield i

simple_list[:] = f(simple_list)
print(simple_list)
[0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95]
A better example to use would be:
l = [1, 2, 2, 2, 3, 3, 3, 10, 12, 13, 13, 18, 24]
l[:] = f(l)
print(l)
Which would return:
[1, 10, 18, 24]
If your data is always in ascending order, you can remove the abs and just use if ele - i >= 5.
If I understand your question correctly, which I'm not sure I do (please clarify), you can do this easily. Assume that a is the list you want to process.
[v for i,v in enumerate(a) if abs(a[i] - a[i - 1]) >= 5]
This gives all elements whose difference from the previous one (should it be the next?) is greater than or equal to 5. There are some variations of this, according to what you need. Should the first element not be compared and excluded? The previous implementation compares it with index -1 and includes it if the criterion is met; this one excludes it from the result:
[v for i,v in enumerate(a) if i != 0 and abs(a[i] - a[i - 1]) >= 5]
On the other hand, should it always be included? Then use this:
[v for i,v in enumerate(a) if (i != 0 and abs(a[i] - a[i - 1]) >= 5) or (i == 0)]

In Python, what is the fastest algorithm for removing duplicates from a list so that all elements are unique *while preserving order*? [duplicate]

This question already has answers here:
How do I remove duplicates from a list, while preserving order?
(31 answers)
Closed 8 years ago.
For example:
>>> x = [1, 1, 2, 'a', 'a', 3]
>>> unique(x)
[1, 2, 'a', 3]
Assume list elements are hashable.
Clarification: The result should keep the first duplicate in the list. For example, [1, 2, 3, 2, 3, 1] becomes [1, 2, 3].
def unique(items):
    found = set()
    keep = []
    for item in items:
        if item not in found:
            found.add(item)
            keep.append(item)
    return keep

print(unique([1, 1, 2, 'a', 'a', 3]))
Using:
lst = [8, 8, 9, 9, 7, 15, 15, 2, 20, 13, 2, 24, 6, 11, 7, 12, 4, 10, 18, 13, 23, 11, 3, 11, 12, 10, 4, 5, 4, 22, 6, 3, 19, 14, 21, 11, 1, 5, 14, 8, 0, 1, 16, 5, 10, 13, 17, 1, 16, 17, 12, 6, 10, 0, 3, 9, 9, 3, 7, 7, 6, 6, 7, 5, 14, 18, 12, 19, 2, 8, 9, 0, 8, 4, 5]
And using the timeit module:
$ python -m timeit -s 'import uniquetest' 'uniquetest.etchasketch(uniquetest.lst)'
And so on for the various other functions (which I named after their posters), I have the following results (on my first generation Intel MacBook Pro):
Allen: 14.6 µs per loop [1]
Terhorst: 26.6 µs per loop
Tarle: 44.7 µs per loop
ctcherry: 44.8 µs per loop
Etchasketch 1 (short): 64.6 µs per loop
Schinckel: 65.0 µs per loop
Etchasketch 2: 71.6 µs per loop
Little: 89.4 µs per loop
Tyler: 179.0 µs per loop
[1] Note that Allen modifies the list in place – I believe this has skewed the time, in that the timeit module runs the code 100000 times and 99999 of them are with the dupe-less list.
Summary: Straight-forward implementation with sets wins over confusing one-liners :-)
Update: on Python 3.7+ (where plain dicts preserve insertion order):
>>> list(dict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
old answer:
Here is the fastest solution so far (for the following input):
def del_dups(seq):
    seen = {}
    pos = 0
    for item in seq:
        if item not in seen:
            seen[item] = True
            seq[pos] = item
            pos += 1
    del seq[pos:]
lst = [8, 8, 9, 9, 7, 15, 15, 2, 20, 13, 2, 24, 6, 11, 7, 12, 4, 10, 18,
13, 23, 11, 3, 11, 12, 10, 4, 5, 4, 22, 6, 3, 19, 14, 21, 11, 1,
5, 14, 8, 0, 1, 16, 5, 10, 13, 17, 1, 16, 17, 12, 6, 10, 0, 3, 9,
9, 3, 7, 7, 6, 6, 7, 5, 14, 18, 12, 19, 2, 8, 9, 0, 8, 4, 5]
del_dups(lst)
print(lst)
# -> [8, 9, 7, 15, 2, 20, 13, 24, 6, 11, 12, 4, 10, 18, 23, 3, 5, 22, 19, 14,
# 21, 1, 0, 16, 17]
Dictionary lookup is slightly faster than set lookup in Python 3.
What's going to be fastest depends on what percentage of your list is duplicates. If it's nearly all duplicates, with few unique items, creating a new list will probably be faster. If it's mostly unique items, removing them from the original list (or a copy) will be faster.
Here's one for modifying the list in place:
def unique(items):
    seen = set()
    for i in range(len(items)-1, -1, -1):  # was xrange in the Python 2 original
        it = items[i]
        if it in seen:
            del items[i]
        else:
            seen.add(it)
Iterating backwards over the indices ensures that removing items doesn't affect the iteration.
This is the fastest in-place method I've found (assuming a large proportion of duplicates):
def unique(l):
    s = set()
    n = 0
    for x in l:
        if x not in s:
            s.add(x)
            l[n] = x
            n += 1
    del l[n:]
This is 10% faster than Allen's implementation, on which it is based (timed with timeit.repeat, JIT compiled by psyco). It keeps the first instance of any duplicate.
repton-infinity: I'd be interested if you could confirm my timings.
Obligatory generator-based variation:
def unique(seq):
    seen = set()
    for x in seq:
        if x not in seen:
            seen.add(x)
            yield x
This may be the simplest way:
from collections import OrderedDict
list(OrderedDict.fromkeys(iterable))
As of Python 3.5, OrderedDict is implemented in C, so this is now the shortest, cleanest, and fastest.
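For example (my usage sketch):
from collections import OrderedDict

print(list(OrderedDict.fromkeys([1, 1, 2, 'a', 'a', 3])))
# [1, 2, 'a', 3]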
Taken from http://www.peterbe.com/plog/uniqifiers-benchmark
def f5(seq, idfun=None):
    # order preserving
    if idfun is None:
        def idfun(x): return x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        # in old Python versions:
        # if seen.has_key(marker)
        # but in new ones:
        if marker in seen: continue
        seen[marker] = 1
        result.append(item)
    return result
One-liner (in Python 3, import reduce from functools first):
new_list = reduce(lambda x,y: x+[y][:1-int(y in x)], my_list, [])
An in-place one-liner for this:
>>> x = [1, 1, 2, 'a', 'a', 3]
>>> [ item for pos,item in enumerate(x) if x.index(item)==pos ]
[1, 2, 'a', 3]
This is the fastest one, comparing all the suggestions from this lengthy discussion and the other answers given here, referring to this benchmark. It's another 25% faster than the fastest function from the discussion, f8. Thanks to David Kirby for the idea.
def uniquify(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if x not in seen and not seen_add(x)]
Some time comparison:
$ python uniqifiers_benchmark.py
* f8_original 3.76
* uniquify 3.0
* terhorst 5.44
* terhorst_localref 4.08
* del_dups 4.76
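The seen_add trick works because set.add returns None, which is falsy, so not seen_add(x) is always true and merely records x as seen. A quick check (my example):
seen = set()
print(seen.add(1) is None)  # True -- add() only mutates the set
print(uniquify([1, 1, 2, 'a', 'a', 3]))  # [1, 2, 'a', 3]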
You can actually do something really cool in Python to solve this. You can create a list comprehension that references itself as it is being built. As follows:
# remove duplicates...
def unique(my_list):
    return [x for x in my_list if x not in locals()['_[1]'].__self__]
Edit: I removed the "self", and it works on Mac OS X, Python 2.5.1.
The _[1] is Python's "secret" reference to the new list. (Note: this relies on a CPython 2.x implementation detail and does not work in Python 3.) The above, of course, is a little messy, but you could adapt it to fit your needs as necessary. For example, you can actually write a function that returns a reference to the comprehension; it would look more like:
return [x for x in my_list if x not in this_list()]
Do the duplicates necessarily need to be in the list in the first place? There's no overhead as far as looking the elements up, but there is a little more overhead in adding elements (though the overhead should be O(1)).
>>> x = []
>>> y = set()
>>> def add_to_x(val):
... if val not in y:
... x.append(val)
... y.add(val)
... print x
... print y
...
>>> add_to_x(1)
[1]
set([1])
>>> add_to_x(1)
[1]
set([1])
>>> add_to_x(1)
[1]
set([1])
>>>
Remove duplicates and preserve order:
This is a fast 2-liner that leverages built-in functionality of list comprehensions and dicts.
x = [1, 1, 2, 'a', 'a', 3]
tmpUniq = {} # temp variable used below
results = [tmpUniq.setdefault(i,i) for i in x if i not in tmpUniq]
print(results)
[1, 2, 'a', 3]
The dict.setdefault() function returns the value as well as adding it to the temp dict, directly in the list comprehension. Using the built-in functions and the dict's hashing maximizes efficiency for the process.
O(n) if the dict is a hash table, O(n log n) if it is a tree; simple and fixed. Thanks to Matthew for the suggestion. Sorry I don't know the underlying types.
def unique(x):
    output = []
    y = {}
    for item in x:
        y[item] = ""
    for item in x:
        if item in y:
            output.append(item)
            del y[item]  # remove the key so later duplicates are skipped
    return output
has_key in Python is O(1). Insertion and retrieval from a hash are also O(1). This loops through n items twice, so it is O(n).
def unique(list):
    s = {}
    output = []
    for x in list:
        count = 1
        if s.has_key(x):  # Python 2; in Python 3 use: x in s
            count = s[x] + 1
        s[x] = count
    for x in list:
        count = s[x]
        if count > 0:
            s[x] = 0
            output.append(x)
    return output
There are some great, efficient solutions here. However, for anyone not concerned with the absolute most efficient O(n) solution, I'd go with the simple one-liner O(n^2*log(n)) solution:
def unique(xs):
    return sorted(set(xs), key=lambda x: xs.index(x))
or the more efficient two-liner O(n*log(n)) solution:
def unique(xs):
    positions = dict((e, pos) for pos, e in reversed(list(enumerate(xs))))
    return sorted(set(xs), key=lambda x: positions[x])
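A quick check of either version, using the example from the clarification above (my addition):
xs = [1, 2, 3, 2, 3, 1]
print(unique(xs))  # [1, 2, 3] -- first occurrences are kept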
Here are two recipes from the itertools documentation:
from itertools import groupby, ifilterfalse, imap  # Python 2 names; Python 3 has filterfalse, and map is built in
from operator import itemgetter

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in ifilterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element

def unique_justseen(iterable, key=None):
    "List unique elements, preserving order. Remember only the element just seen."
    # unique_justseen('AAAABBBCCDAABBB') --> A B C D A B
    # unique_justseen('ABBCcAD', str.lower) --> A B C A D
    return imap(next, imap(itemgetter(1), groupby(iterable, key)))
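Under Python 3 the first recipe works with the renamed import; here is a minimal sketch without the key argument (my adaptation):
from itertools import filterfalse

def unique_everseen_py3(iterable):
    seen = set()
    for element in filterfalse(seen.__contains__, iterable):
        seen.add(element)
        yield element

print(list(unique_everseen_py3('AAAABBBCCDAABBB')))  # ['A', 'B', 'C', 'D']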
I have no experience with python, but an algorithm would be to sort the list, then remove duplicates (by comparing to previous items in the list), and finally find the position in the new list by comparing with the old list.
Longer answer: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560
>>> def unique(list):
... y = []
... for x in list:
... if x not in y:
... y.append(x)
... return y
If you take out the empty list from the call to set() in Terhorst's answer, you get a little speed boost.
Change:
found = set([])
to:
found = set()
However, you don't need the set at all.
def unique(items):
    keep = []
    for item in items:
        if item not in keep:
            keep.append(item)
    return keep
Using timeit I got these results:
with set([]) -- 4.97210427363
with set() -- 4.65712377445
with no set -- 3.44865284975
x = []  # your list of items that includes duplicates
# assuming the list contains only immutable (hashable) items;
# dict keys are unique, and in Python 3.7+ they keep first-insertion order
dict_x = {item: item for item in x}
# average time complexity: O(n), thanks to the dict's hashing
x = list(dict_x)  # if you want your output in list format
>>> x=[1,1,2,'a','a',3]
>>> y = [ _x for _x in x if not _x in locals()['_[1]'] ]
>>> y
[1, 2, 'a', 3]
"locals()['_[1]']" is the "secret name" of the list being created.
I don't know if this one is fast or not, but at least it is simple.
Simply convert it first to a set and then back to a list (note: this does not preserve the original order):
def unique(container):
    return list(set(container))
One pass (note: this only removes consecutive duplicates, so it suits sorted input):
a = [1,1,'a','b','c','c']
new_list = []
prev = None
while 1:
    try:
        i = a.pop(0)
        if i != prev:
            new_list.append(i)
            prev = i
    except IndexError:
        break
I haven't done any tests, but one possible algorithm might be to create a second list, and iterate through the first list. If an item is not in the second list, add it to the second list.
x = [1, 1, 2, 'a', 'a', 3]
y = []
for each in x:
    if each not in y:
        y.append(each)
a=[1,2,3,4,5,7,7,8,8,9,9,3,45]

def unique(l):
    ids = {}
    for item in l:
        if not ids.has_key(item):  # Python 2; in Python 3 use: item not in ids
            ids[item] = item
    return ids.keys()

print a
print unique(a)
Inserting elements takes theta(n).
Checking whether an element exists takes constant time.
Testing all the items also takes theta(n).
So we can see that this solution takes theta(n) overall.
Bear in mind that dictionaries in Python are implemented as hash tables.
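In Python 3.7+ plain dict keys also preserve insertion order, so the same idea dedupes while keeping order; a minimal sketch (my adaptation):
a = [1, 2, 3, 4, 5, 7, 7, 8, 8, 9, 9, 3, 45]

def unique(l):
    # dict keys are unique and keep first-insertion order in Python 3.7+
    return list({item: None for item in l})

print(unique(a))  # [1, 2, 3, 4, 5, 7, 8, 9, 45]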
