Suppose I have the list A = [2, 32, 41, 2, 4, 73, 5, 9, 20]. I want to create a list B whose elements are the sums of the neighboring elements in A, with the boundary values 2 and 20 excluded. Here's the code I have:
A = [2, 32, 41, 2, 4, 73, 5, 9, 20]
B = []
for i in range(1, len(A) - 1):
    B.append(A[i-1] + A[i+1])
>>> B
[43, 34, 45, 75, 9, 82, 25]
I'm just wondering: is there a better way to generate the list B? This is the pattern I usually rely on for problems like this, but I really want to know if there's a better/easier way to use the elements of a list/array with the boundary points excluded (instead of using range, as I did here).
Many thanks for the help and suggestions!
You can use zip:
In [109]: A
Out[109]: [2, 32, 41, 2, 4, 73, 5, 9, 20]
In [110]: [a + b for a,b in zip(A, A[2:])]
Out[110]: [43, 34, 45, 75, 9, 82, 25]
Using zip and a list comprehension also gives slightly better performance. Measuring with %timeit, the OP's version is b and my version is a:
In [114]: %timeit a(A)
1.06 µs ± 7.63 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [115]: %timeit b(A)
1.63 µs ± 9.72 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
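The timed functions aren't shown in the transcript; presumably they were along these lines (a reconstruction for context, not the original code):
def a(lst):
    # zip-based list comprehension (this answer)
    return [x + y for x, y in zip(lst, lst[2:])]
def b(lst):
    # the OP's index-based loop
    out = []
    for i in range(1, len(lst) - 1):
        out.append(lst[i-1] + lst[i+1])
    return out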
This is one way to do it using a list comprehension:
a = [2, 32, 41, 2, 4, 73, 5, 9, 20]
b = [i + j for (i,j) in zip(a[:-2], a[2:])]
b
[43, 34, 45, 75, 9, 82, 25]
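More generally, zip can hand you each interior element together with both of its neighbors, which avoids index arithmetic entirely. A small sketch of that pattern:
A = [2, 32, 41, 2, 4, 73, 5, 9, 20]
# zip(A, A[1:], A[2:]) yields (left, middle, right) triples over the interior
B = [left + right for left, mid, right in zip(A, A[1:], A[2:])]
print(B)  # [43, 34, 45, 75, 9, 82, 25]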
time_interval = [4, 6, 12]
I want to sum up the numbers like [4, 4+6, 4+6+12] in order to get the list t = [4, 10, 22].
I tried the following:
t1 = time_interval[0]
t2 = time_interval[1] + t1
t3 = time_interval[2] + t2
print(t1, t2, t3) # -> 4 10 22
If you're doing much numerical work with arrays like this, I'd suggest numpy, which comes with a cumulative sum function cumsum:
import numpy as np
a = [4,6,12]
np.cumsum(a)
#array([4, 10, 22])
NumPy is often faster than pure Python for this kind of thing; compare it with @Ashwini's accumu:
In [136]: timeit list(accumu(range(1000)))
10000 loops, best of 3: 161 us per loop
In [137]: timeit list(accumu(xrange(1000)))
10000 loops, best of 3: 147 us per loop
In [138]: timeit np.cumsum(np.arange(1000))
100000 loops, best of 3: 10.1 us per loop
But of course, if this is the only place you'll use NumPy, it might not be worth having a dependency on it.
In Python 2 you can define your own generator function like this:
def accumu(lis):
    total = 0
    for x in lis:
        total += x
        yield total
In [4]: list(accumu([4,6,12]))
Out[4]: [4, 10, 22]
And in Python 3.2+ you can use itertools.accumulate():
In [1]: lis = [4,6,12]
In [2]: from itertools import accumulate
In [3]: list(accumulate(lis))
Out[3]: [4, 10, 22]
I benchmarked the top two answers with Python 3.4, and I found that itertools.accumulate is faster than numpy.cumsum under many circumstances, often much faster. However, as you can see from the comments, this may not always be the case, and it's difficult to exhaustively explore all the options. (Feel free to add a comment or edit this post if you have further benchmark results of interest.)
Some timings...
For short lists accumulate is about 4 times faster:
from timeit import timeit

def sum1(l):
    from itertools import accumulate
    return list(accumulate(l))

def sum2(l):
    from numpy import cumsum
    return list(cumsum(l))
l = [1, 2, 3, 4, 5]
timeit(lambda: sum1(l), number=100000)
# 0.4243644131347537
timeit(lambda: sum2(l), number=100000)
# 1.7077815784141421
For longer lists accumulate is about 3 times faster:
l = [1, 2, 3, 4, 5]*1000
timeit(lambda: sum1(l), number=100000)
# 19.174508565105498
timeit(lambda: sum2(l), number=100000)
# 61.871223849244416
If the NumPy array is not cast to a list, accumulate is still about 2 times faster:
from timeit import timeit

def sum1(l):
    from itertools import accumulate
    return list(accumulate(l))

def sum2(l):
    from numpy import cumsum
    return cumsum(l)
l = [1, 2, 3, 4, 5]*1000
print(timeit(lambda: sum1(l), number=100000))
# 19.18597290944308
print(timeit(lambda: sum2(l), number=100000))
# 37.759664884768426
If you put the imports outside of the two functions and still return a numpy array, accumulate is still nearly 2 times faster:
from timeit import timeit
from itertools import accumulate
from numpy import cumsum

def sum1(l):
    return list(accumulate(l))

def sum2(l):
    return cumsum(l)
l = [1, 2, 3, 4, 5]*1000
timeit(lambda: sum1(l), number=100000)
# 19.042188624851406
timeit(lambda: sum2(l), number=100000)
# 35.17324400227517
Try the itertools.accumulate() function.
import itertools
list(itertools.accumulate([1,2,3,4,5]))
# [1, 3, 6, 10, 15]
Behold:
a = [4, 6, 12]
reduce(lambda c, x: c + [c[-1] + x], a, [0])[1:]
Will output (as expected):
[4, 10, 22]
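Note that in Python 3 reduce is no longer a builtin, so the snippet needs an import to run there:
from functools import reduce  # reduce moved out of the builtins in Python 3
a = [4, 6, 12]
print(reduce(lambda c, x: c + [c[-1] + x], a, [0])[1:])  # [4, 10, 22]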
Assignment expressions from PEP 572 (new in Python 3.8) offer yet another way to solve this:
time_interval = [4, 6, 12]
total_time = 0
cum_time = [total_time := total_time + t for t in time_interval]
You can calculate the cumulative sum list in linear time with a simple for loop:
def csum(lst):
    s = lst.copy()
    for i in range(1, len(s)):
        s[i] += s[i-1]
    return s
time_interval = [4, 6, 12]
print(csum(time_interval)) # [4, 10, 22]
The standard library's itertools.accumulate may be a faster alternative (since it's implemented in C):
from itertools import accumulate
time_interval = [4, 6, 12]
print(list(accumulate(time_interval))) # [4, 10, 22]
Since Python 3.8 it's possible to use assignment expressions, so things like this become easier to implement:
nums = list(range(1, 10))
print(f'array: {nums}')
v = 0
cumsum = [v := v + n for n in nums]
print(f'cumsum: {cumsum}')
produces
array: [1, 2, 3, 4, 5, 6, 7, 8, 9]
cumsum: [1, 3, 6, 10, 15, 21, 28, 36, 45]
The same technique can be applied to find the cumulative product, mean, etc.:
p = 1
cumprod = [p := p * n for n in nums]
print(f'cumprod: {cumprod}')
s = 0
c = 0
cumavg = [(s := s + n) / (c := c + 1) for n in nums]
print(f'cumavg: {cumavg}')
results in
cumprod: [1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
cumavg: [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
First, you want a running list of subsequences:
subseqs = (seq[:i] for i in range(1, len(seq)+1))
Then you just call sum on each subsequence:
sums = [sum(subseq) for subseq in subseqs]
(This isn't the most efficient way to do it, because you're adding all of the prefixes repeatedly. But that probably won't matter for most use cases, and it's easier to understand if you don't have to think of the running totals.)
If you're using Python 3.2 or newer, you can use itertools.accumulate to do it for you:
sums = itertools.accumulate(seq)
And if you're using 3.1 or earlier, you can just copy the "equivalent to" source straight out of the docs (except for changing next(it) to it.next() for 2.5 and earlier).
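For reference, that pure-Python equivalent is roughly the following (paraphrased from the itertools docs; the real accumulate also accepts a func argument):
import operator

def accumulate(iterable, func=operator.add):
    # yields the running totals a0, a0+a1, a0+a1+a2, ...
    it = iter(iterable)
    try:
        total = next(it)
    except StopIteration:
        return
    yield total
    for element in it:
        total = func(total, element)
        yield total

print(list(accumulate([4, 6, 12])))  # [4, 10, 22]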
If you want a Pythonic way without numpy that works in 2.7, this would be my way of doing it:
l = [1,2,3,4]
_d={-1:0}
cumsum=[_d.setdefault(idx, _d[idx-1]+item) for idx,item in enumerate(l)]
Now let's try it and test it against all the other implementations:
import timeit, sys
import functools

L = list(range(10000))

if sys.version_info >= (3, 0):
    reduce = functools.reduce
    xrange = range

def sum1(l):
    cumsum = []
    total = 0
    for v in l:
        total += v
        cumsum.append(total)
    return cumsum

def sum2(l):
    import numpy as np
    return list(np.cumsum(l))

def sum3(l):
    return [sum(l[:i+1]) for i in xrange(len(l))]

def sum4(l):
    return reduce(lambda c, x: c + [c[-1] + x], l, [0])[1:]

def this_implementation(l):
    _d = {-1: 0}
    return [_d.setdefault(idx, _d[idx-1] + item) for idx, item in enumerate(l)]
# sanity check
sum1(L)==sum2(L)==sum3(L)==sum4(L)==this_implementation(L)
>>> True
# PERFORMANCE TEST
timeit.timeit('sum1(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.001018061637878418
timeit.timeit('sum2(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.000829620361328125
timeit.timeit('sum3(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.4606760001182556
timeit.timeit('sum4(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.18932826995849608
timeit.timeit('this_implementation(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.002348129749298096
There could be many answers to this depending on the length of the list and the desired performance. One very simple way, without worrying about performance, is this:
a = [1, 2, 3, 4]
a = [sum(a[0:x]) for x in range(1, len(a)+1)]
print(a)
[1, 3, 6, 10]
This uses a list comprehension and may work fairly well; it's just that it sums over the subarray many times, so you could improve on it and simplify it!
Cheers to your endeavor!
values = [4, 6, 12]
total = 0
sums = []
for v in values:
    total = total + v
    sums.append(total)
print('Values:', values)
print('Sums:', sums)
Running this code gives
Values: [4, 6, 12]
Sums: [4, 10, 22]
Try this:
result = []
acc = 0
for i in time_interval:
    acc += i
    result.append(acc)
l = [1, -1, 3]

def sum_list(input_list):
    cum_list = input_list.copy()  # work on a copy so the caller's list isn't mutated
    for index in range(1, len(cum_list)):
        cum_list[index] += cum_list[index-1]
    return cum_list

print(sum_list(l))  # [1, 0, 3]
In Python 3, to find the cumulative sum of a list where the ith element
is the sum of the first i+1 elements of the original list, you may do:
a = [4, 6, 12]
b = []
for i in range(len(a)):
    b.append(sum(a[:i+1]))
print(b)
Or you may use a list comprehension:
b = [sum(a[:x+1]) for x in range(len(a))]
Output
[4, 10, 22]
lst = [4, 6, 12]
[sum(lst[:i+1]) for i in xrange(len(lst))]
If you are looking for a more efficient solution (bigger lists?) a generator could be a good call (or just use numpy if you really care about performance).
def gen(lst):
    acu = 0
    for num in lst:
        yield num + acu
        acu += num

print list(gen([4, 6, 12]))
In [42]: a = [4, 6, 12]
In [43]: [sum(a[:i+1]) for i in xrange(len(a))]
Out[43]: [4, 10, 22]
This is slightly faster than the generator method above by @Ashwini for small lists:
In [48]: %timeit list(accumu([4,6,12]))
100000 loops, best of 3: 2.63 us per loop
In [49]: %timeit [sum(a[:i+1]) for i in xrange(len(a))]
100000 loops, best of 3: 2.46 us per loop
For larger lists, the generator is the way to go for sure...
In [50]: a = range(1000)
In [51]: %timeit [sum(a[:i+1]) for i in xrange(len(a))]
100 loops, best of 3: 6.04 ms per loop
In [52]: %timeit list(accumu(a))
10000 loops, best of 3: 162 us per loop
Somewhat hacky, but seems to work:
def cumulative_sum(l):
    y = [0]
    def inc(n):
        y[0] += n
        return y[0]
    return [inc(x) for x in l]
I did think that the inner function would be able to rebind a plain y declared in the outer lexical scope, but an assignment there just creates a new local name, so we play some nasty hacks with structure modification (mutating a one-element list) instead. It is probably more elegant to use a generator.
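For the record, in Python 3 the clean way to get that rebinding is a nonlocal declaration; a minimal sketch:
def cumulative_sum(lst):
    total = 0
    def inc(n):
        nonlocal total  # rebind the variable in the enclosing scope
        total += n
        return total
    return [inc(x) for x in lst]

print(cumulative_sum([4, 6, 12]))  # [4, 10, 22]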
Without having to use NumPy, you can loop directly over the list and accumulate the sum along the way. For example:
a = list(range(10))
i = 1
while 0 < i < 10:
    a[i] = a[i-1] + a[i]
    i = i + 1
print(a)
Results in:
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
A pure python oneliner for cumulative sum:
cumsum = lambda X: X[:1] + cumsum([X[0]+X[1]] + X[2:]) if X[1:] else X
This is a recursive version inspired by recursive cumulative sums. Some explanations:
The first term X[:1] is a list containing the first element and is almost the same as [X[0]] (which would complain for empty lists).
The recursive cumsum call in the second term processes the current element X[1] and the remaining list, whose length is reduced by one.
if X[1:] is shorthand for if len(X) > 1.
Test:
cumsum([4,6,12])
#[4, 10, 22]
cumsum([])
#[]
And similarly for the cumulative product:
cumprod = lambda X: X[:1] + cumprod([X[0]*X[1]] + X[2:]) if X[1:] else X
Test:
cumprod([4,6,12])
#[4, 24, 288]
Here's another fun solution. This takes advantage of the locals() dict of a comprehension, i.e. local variables generated inside the list comprehension scope:
>>> [locals().setdefault(i, (elem + locals().get(i-1, 0))) for i, elem in enumerate(time_interval)]
[4, 10, 22]
Here's what locals() looks like at each iteration:
>>> [[locals().setdefault(i, (elem + locals().get(i-1, 0))), locals().copy()][1]
...  for i, elem in enumerate(time_interval)]
[{'.0': <enumerate at 0x21f21f7fc80>, 'i': 0, 'elem': 4, 0: 4},
{'.0': <enumerate at 0x21f21f7fc80>, 'i': 1, 'elem': 6, 0: 4, 1: 10},
{'.0': <enumerate at 0x21f21f7fc80>, 'i': 2, 'elem': 12, 0: 4, 1: 10, 2: 22}]
Performance is not terrible for small lists:
>>> %timeit list(accumulate([4, 6, 12]))
387 ns ± 7.53 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
>>> %timeit np.cumsum([4, 6, 12])
5.31 µs ± 67.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit [locals().setdefault(i, (e + locals().get(i-1,0))) for i,e in enumerate(time_interval)]
1.57 µs ± 12 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
And obviously falls flat for larger lists.
>>> l = list(range(1_000_000))
>>> %timeit list(accumulate(l))
95.1 ms ± 5.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit np.cumsum(l)
79.3 ms ± 1.07 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit np.cumsum(l).tolist()
120 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit [locals().setdefault(i, (e + locals().get(i-1, 0))) for i, e in enumerate(l)]
660 ms ± 5.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Even though the method is ugly and not practical, it sure is fun.
I think the below code is the easiest:
a = [1, 1, 2, 1, 2]
b = [a[0]] + [sum(a[0:i]) for i in range(2, len(a)+1)]
def cumulative_sum(lst):
    result = []
    for i in range(len(lst)):
        result.append(sum(lst[:i+1]))
    return result

time_interval = [4, 6, 12]
print(cumulative_sum(time_interval))
Maybe a more beginner-friendly solution.
So you need to make a list of cumulative sums. You can do it with a for loop and the .append() method:
time_interval = [4, 6, 12]
cumulative_sum = []
new_sum = 0
for i in time_interval:
    new_sum += i
    cumulative_sum.append(new_sum)
print(cumulative_sum)
or, using numpy module
import numpy
time_interval = [4, 6, 12]
c_sum = numpy.cumsum(time_interval)
print(c_sum.tolist())
This would be Haskell-style:
def wrand(vtlg):
    def helpf(lalt, lneu):
        if lalt != []:
            return helpf(lalt[1:], [lalt[0] + lneu[0]] + lneu)
        else:
            lneu.reverse()
            return lneu[1:]
    return helpf(vtlg, [0])
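A quick sanity check of the recursion:
print(wrand([4, 6, 12]))  # [4, 10, 22]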
I have 3 NumPy arrays, and I want to create tuples of the i-th element of each array. These tuples represent keys for a dictionary I had previously defined.
Ex:
List 1: [1, 2, 3, 4, 5]
List 2: [6, 7, 8, 9, 10]
List 3: [11, 12, 13, 14, 15]
Desired output: [mydict[(1,6,11)],mydict[(2,7,12)],mydict[(3,8,13)],mydict[(4,9,14)],mydict[(5,10,15)]]
These tuples represent keys of a dictionary I have previously defined (essentially, as input variables to a previously calculated function). I had read that this is the best way to store function values for lookup.
My current method of doing this is as follows:
[mydict[x] for x in zip(l1, l2, l3)]
This works, but is obviously slow. Is there a way to vectorize this operation, or make it faster in any way? I'm open to changing the way I've stored the function values as well, if that is necessary.
EDIT: My apologies for the question being unclear. I do in fact, have NumPy arrays. My mistake for referring to them as lists and displaying them as such. They are of the same length.
Your question is a bit confusing, since you're calling these NumPy arrays, and asking for a way to vectorize things, but then showing lists, and labeling them as lists in your example, and using list in the title. I'm going to assume you do have arrays.
>>> l1 = np.array([1, 2, 3, 4, 5])
>>> l2 = np.array([6, 7, 8, 9, 10])
>>> l3 = np.array([11, 12, 13, 14, 15])
If so, you can stack these up in a 2D array:
>>> ll = np.stack((l1, l2, l3))
And then you can just transpose that:
>>> lt = ll.T
This is better than vectorized; it's constant-time. NumPy is just creating another view of the same data, with different striding so it reads in column order instead of row order.
>>> lt
array([[ 1, 6, 11],
[ 2, 7, 12],
[ 3, 8, 13],
[ 4, 9, 14],
[ 5, 10, 15]])
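You can verify that no data was copied; the transpose is a view whose base is the original array:
>>> lt.base is ll
True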
As miradulo points out, you can do both of these in one step with column_stack:
>>> lt = np.column_stack((l1, l2, l3))
But I suspect you're actually going to want ll as a value in its own right. (Although I admit I'm just guessing here at what you're trying to do…)
And of course if you want to loop over these rows as 1D arrays instead of doing further vectorized work, you can:
>>> for row in lt:
...: print(row)
[ 1 6 11]
[ 2 7 12]
[ 3 8 13]
[ 4 9 14]
[ 5 10 15]
Of course, you can convert them from 1D arrays to tuples just by calling tuple on each row. Or… whatever that mydict is supposed to be (it doesn't look like a dictionary—there are no key-value pairs, just values), you can do that.
>>> mydict = collections.namedtuple('mydict', list('abc'))
>>> tups = [mydict(*row) for row in lt]
>>> tups
[mydict(a=1, b=6, c=11),
mydict(a=2, b=7, c=12),
mydict(a=3, b=8, c=13),
mydict(a=4, b=9, c=14),
mydict(a=5, b=10, c=15)]
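For the dictionary-key use in the question, converting each row with plain tuple works directly (a small sketch reusing lt from above):
>>> [tuple(row) for row in lt]
[(1, 6, 11), (2, 7, 12), (3, 8, 13), (4, 9, 14), (5, 10, 15)]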
If you're worried about the time to look up a tuple of keys in a dict, itemgetter in the operator module has a C-accelerated version. If keys is a np.array, or a tuple, or whatever, you can do this:
for row in lt:
    myvals = operator.itemgetter(*row)(mydict)
    # do stuff with myvals
Meanwhile, I decided to slap together a C extension that should be as fast as possible (with no error handling, because I'm lazy, and it should be a tiny bit faster that way; this code will probably segfault if you give it anything but a dict and a tuple or list):
static PyObject *
itemget_itemget(PyObject *self, PyObject *args) {
    PyObject *d;
    PyObject *keys;
    PyArg_ParseTuple(args, "OO", &d, &keys);
    PyObject *seq = PySequence_Fast(keys, "keys must be an iterable");
    PyObject **arr = PySequence_Fast_ITEMS(seq);
    int seqlen = PySequence_Fast_GET_SIZE(seq);
    PyObject *result = PyTuple_New(seqlen);
    PyObject **resarr = PySequence_Fast_ITEMS(result);
    for (int i=0; i!=seqlen; ++i) {
        resarr[i] = PyDict_GetItem(d, arr[i]);
        Py_INCREF(resarr[i]);
    }
    return result;
}
Times for looking up 100 random keys out of a 10000-key dictionary on my laptop with python.org CPython 3.7 on macOS:
itemget.itemget: 1.6µs
operator.itemgetter: 1.8µs
comprehension: 3.4µs
pure-Python operator.itemgetter: 6.7µs
So, I'm pretty sure anything you do is going to be fast enough—that's only 34ns/key that we're trying to optimize. But if that really is too slow, operator.itemgetter does a good enough job moving the loop to C and cuts it roughly in half, which is pretty close to the best possible result you could expect. (It's hard to imagine looking up a bunch of boxed-value keys in a hash table in much less than 16ns/key, after all.)
Define your 3 lists. You mention 3 arrays, but show lists (and call them that as well):
In [112]: list1,list2,list3 = list(range(1,6)),list(range(6,11)),list(range(11,16))
Now create a dictionary with tuple keys:
In [114]: dd = {x:i for i,x in enumerate(zip(list1,list2,list3))}
In [115]: dd
Out[115]: {(1, 6, 11): 0, (2, 7, 12): 1, (3, 8, 13): 2, (4, 9, 14): 3, (5, 10, 15): 4}
Accessing elements from that dictionary with your code:
In [116]: [dd[x] for x in zip(list1,list2,list3)]
Out[116]: [0, 1, 2, 3, 4]
In [117]: timeit [dd[x] for x in zip(list1,list2,list3)]
1.62 µs ± 11.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Now for an array equivalent - turn the lists into a 2d array:
In [118]: arr = np.array((list1,list2,list3))
In [119]: arr
Out[119]:
array([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15]])
Access the same dictionary elements. If I'd used column_stack I could have omitted the .T, but that's slower (array transpose is fast):
In [120]: [dd[tuple(x)] for x in arr.T]
Out[120]: [0, 1, 2, 3, 4]
In [121]: timeit [dd[tuple(x)] for x in arr.T]
15.7 µs ± 21.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Notice that this is substantially slower. Iteration over an array is slower than iteration over a list. You can't access elements of a dictionary in any sort of numpy 'vectorized' fashion - you have to use a Python iteration.
I can improve on the array iteration by first turning it into a list:
In [124]: arr.T.tolist()
Out[124]: [[1, 6, 11], [2, 7, 12], [3, 8, 13], [4, 9, 14], [5, 10, 15]]
In [125]: timeit [dd[tuple(x)] for x in arr.T.tolist()]
3.21 µs ± 9.67 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Array construction times:
In [122]: timeit arr = np.array((list1,list2,list3))
3.54 µs ± 15.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [123]: timeit arr = np.column_stack((list1,list2,list3))
18.5 µs ± 11.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
With the pure-Python itemgetter (from v3.6.3) there are no savings:
In [149]: timeit operator.itemgetter(*[tuple(x) for x in arr.T.tolist()])(dd)
3.51 µs ± 16.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
and if I move the getter definition out of the time loop:
In [151]: %%timeit idx = operator.itemgetter(*[tuple(x) for x in arr.T.tolist()])
     ...: idx(dd)
     ...:
482 ns ± 1.85 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Suppose I want the first element, the 3rd through 200th elements, and the 201st element through the last element by step-size 3, from a list in Python.
One way to do it is with distinct indexing and concatenation:
new_list = old_list[0:1] + old_list[3:201] + old_list[201::3]
Is there a way to do this with just one index on old_list? I would like something like the following (I know this doesn't syntactically work since list indices cannot be lists and since Python unfortunately doesn't have slice literals; I'm just looking for something close):
new_list = old_list[[0, 3:201, 201::3]]
I can achieve some of this by switching to NumPy arrays, but I'm more interested in how to do it for native Python lists. I could also create a slice maker or something like that, and possibly strong arm that into giving me an equivalent slice object to represent the composition of all my desired slices.
But I'm looking for something that doesn't involve creating a new class to manage the slices. I want to just sort of concatenate the slice syntax and feed that to my list and have the list understand that it means to separately get the slices and concatenate their respective results in the end.
A slice maker object (e.g. SliceMaker from your other question, or np.s_) can accept multiple comma-separated slices; they are received as a tuple of slices or other objects:
from numpy import s_
s_[0, 3:5, 6::3]
Out[1]: (0, slice(3, 5, None), slice(6, None, 3))
NumPy uses this for multidimensional arrays, but you can use it for slice concatenation:
def xslice(arr, slices):
    if isinstance(slices, tuple):
        return sum((arr[s] if isinstance(s, slice) else [arr[s]] for s in slices), [])
    elif isinstance(slices, slice):
        return arr[slices]
    else:
        return [arr[slices]]
xslice(list(range(10)), s_[0, 3:5, 6::3])
Out[1]: [0, 3, 4, 6, 9]
xslice(list(range(10)), s_[1])
Out[2]: [1]
xslice(list(range(10)), s_[:])
Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
import numpy as np
a = list(range(15, 50, 3))
# %%timeit -n 10000 -> 41.1 µs ± 1.71 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
[a[index] for index in np.r_[1:3, 5:7, 9:11]]
---
[18, 21, 30, 33, 42, 45]
import numpy as np
a = np.arange(15, 50, 3).astype(np.int32)
# %%timeit -n 10000 -> 31.9 µs ± 5.68 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
a[np.r_[1:3, 5:7, 9:11]]
---
array([18, 21, 30, 33, 42, 45], dtype=int32)
import numpy as np
a = np.arange(15, 50, 3).astype(np.int32)
# %%timeit -n 10000 -> 7.17 µs ± 1.17 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
slices = np.s_[1:3, 5:7, 9:11]
np.concatenate([a[_slice] for _slice in slices])
---
array([18, 21, 30, 33, 42, 45], dtype=int32)
It seems using NumPy is the faster way.
This adds a NumPy branch to xslice from ecatmur's answer.
import numpy as np

def xslice(x, slices):
    """Extract slices from an array-like.

    Args:
        x: array-like
        slices: slice or tuple of slice objects
    """
    if isinstance(slices, tuple):
        if isinstance(x, np.ndarray):
            return np.concatenate([x[_slice] for _slice in slices])
        else:
            return sum((x[s] if isinstance(s, slice) else [x[s]] for s in slices), [])
    elif isinstance(slices, slice):
        return x[slices]
    else:
        return [x[slices]]
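A quick usage check, mirroring the earlier np.s_ example (note that a bare integer entry would index out a 0-d array here, so scalar indices would need np.atleast_1d before the concatenate):
a = np.arange(10)
print(xslice(a, np.s_[1:3, 5:7, 9:11]))  # [1 2 5 6 9]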
You're probably better off writing your own sequence type.
>>> L = range(20)
>>> L
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
>>> operator.itemgetter(*(range(1, 5) + range(10, 18, 3)))(L)
(1, 2, 3, 4, 10, 13, 16)
And to get you started on that:
>>> operator.itemgetter(*(range(*slice(1, 5).indices(len(L))) + range(*slice(10, 18, 3).indices(len(L)))))(L)
(1, 2, 3, 4, 10, 13, 16)
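(That transcript is Python 2, where range returns a list and + concatenates them. A sketch of the same idea on Python 3, with itertools.chain standing in for the list concatenation:)
>>> from operator import itemgetter
>>> from itertools import chain
>>> L = list(range(20))
>>> itemgetter(*chain(range(1, 5), range(10, 18, 3)))(L)
(1, 2, 3, 4, 10, 13, 16)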
Not sure if this is "better", but it works so why not...
[y for x in [old_list[slice(*a)] for a in ((0,1),(3,201),(201,None,3))] for y in x]
It's probably slow (especially compared to chain), but it's basic Python (3.5.2 used for testing).
Why don't you create a custom slicer for your purpose?
>>> from itertools import chain, islice
>>> it = range(50)
>>> def cslice(iterable, *selectors):
...     return chain(*(islice(iterable, *s) for s in selectors))
>>> list(cslice(it,(1,5),(10,15),(25,None,3)))
[1, 2, 3, 4, 10, 11, 12, 13, 14, 25, 28, 31, 34, 37, 40, 43, 46, 49]
You could extend list to allow multiple slices and indices:
class MultindexList(list):
    def __getitem__(self, key):
        if type(key) is tuple or type(key) is list:
            r = []
            for index in key:
                item = super().__getitem__(index)
                if type(index) is slice:
                    r += item
                else:
                    r.append(item)
            return r
        else:
            return super().__getitem__(key)
a = MultindexList(range(10))
print(a[1:3]) # [1, 2]
print(a[[1, 2]]) # [1, 2]
print(a[1, 1:3, 4:6]) # [1, 1, 2, 4, 5]