Is there multi arange in NumPy? - python

NumPy's arange accepts only scalar values for start/stop/step. Is there a multi version of this function that can accept array inputs for start/stop/step? E.g. given a 2D input array like:
[[1 5 1], # start/stop/step first
[3 8 2]] # start/stop/step second
it should create an array consisting of the concatenation of the aranges for every row of the input (each a start/stop/step triple); the input above should create the 1D array
1 2 3 4 3 5 7
i.e. we need to design a function that does the following:
print(np.multi_arange(np.array([[1,5,1],[3,8,2]])))
# prints:
# array([1, 2, 3, 4, 3, 5, 7])
And this function should be efficient (pure NumPy), i.e. it should process an input array of shape (10000, 3) very fast, without pure-Python looping.
Of course it is possible to write a pure-Python loop (or list comprehension) that creates an arange for each row and concatenates the results. But I have very many rows of start/stop/step triples and need efficient, fast code, hence I'm looking for a pure NumPy function.
Why do I need it? I needed this for several tasks. One of them is indexing: suppose I have a 1D array a and I need to extract many (possibly overlapping) subranges of it. If I had that multi version of arange I would just do:
values = a[np.multi_arange(starts_stops_steps)]
Maybe it is possible to create a multi-arange function using some combination of NumPy functions? Can you suggest one?
Also, maybe there are more efficient solutions for the specific case of extracting subranges of a 1D array (see the last line of code above) without creating all the indices via multi_arange?

Here's a vectorized one with cumsum that accounts for positive and negative stepsizes -
def multi_arange(a):
    steps = a[:,2]
    lens = ((a[:,1]-a[:,0]) + steps-np.sign(steps))//steps
    b = np.repeat(steps, lens)
    ends = (lens-1)*steps + a[:,0]
    b[0] = a[0,0]
    b[lens[:-1].cumsum()] = a[1:,0] - ends[:-1]
    return b.cumsum()
If you need to validate the ranges, i.e. (start < stop when step > 0) and (start > stop when step < 0), use a pre-processing step:
a = a[((a[:,1] > a[:,0]) & (a[:,2]>0) | (a[:,1] < a[:,0]) & (a[:,2]<0))]
Sample run -
In [17]: a
Out[17]:
array([[ 1, 5, 1],
[ 3, 8, 2],
[18, 6, -2]])
In [18]: multi_arange(a)
Out[18]: array([ 1, 2, 3, 4, 3, 5, 7, 18, 16, 14, 12, 10, 8])
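To see how the construction works, here are the intermediate arrays for the sample input above (computed by hand from the code, so treat this as an illustrative sketch rather than captured output):
# steps = [ 1,  2, -2]
# lens  = [ 4,  3,  6]      # number of elements each range contributes
# b     = [1, 1, 1, 1, 2, 2, 2, -2, -2, -2, -2, -2, -2]   # steps repeated lens times
# ends  = [ 4,  7,  8]      # last value of each range
# boundary fix-ups: b[0] = 1 and b[[4, 7]] = [3 - 4, 18 - 7] = [-1, 11]
# b     = [1, 1, 1, 1, -1, 2, 2, 11, -2, -2, -2, -2, -2]
# b.cumsum() = [1, 2, 3, 4, 3, 5, 7, 18, 16, 14, 12, 10, 8]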

In [1]: np.r_[1:5:1, 3:8:2]
Out[1]: array([1, 2, 3, 4, 3, 5, 7])
In [2]: np.hstack((np.arange(1,5,1),np.arange(3,8,2)))
Out[2]: array([1, 2, 3, 4, 3, 5, 7])
The r_ version is nice and compact, but not faster:
In [3]: timeit np.r_[1:5:1, 3:8:2]
23.9 µs ± 34.6 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [4]: timeit np.hstack((np.arange(1,5,1),np.arange(3,8,2)))
11.2 µs ± 19.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

I've just come up with my own solution using Numba. Still, I would prefer a NumPy-only solution, if we find a good one, so as not to carry the heavy Numba JIT compiler as a dependency.
I've also tested @Divakar's solution in my code.
The output of the code below is:
naive_multi_arange 0.76601 sec
arty_multi_arange 0.01801 sec 42.52 speedup
divakar_multi_arange 0.05504 sec 13.92 speedup
Meaning my Numba solution has a 42x speedup and @Divakar's NumPy solution has a 14x speedup.
The code below can also be run online here.
import time, random
import numpy as np, numba
@numba.jit(nopython = True)
def arty_multi_arange(a):
    starts, stops, steps = a[:, 0], a[:, 1], a[:, 2]
    pos = 0
    cnt = np.sum((stops - starts + steps - np.sign(steps)) // steps, dtype = np.int64)
    res = np.zeros((cnt,), dtype = np.int64)
    for i in range(starts.size):
        v, stop, step = starts[i], stops[i], steps[i]
        if step > 0:
            while v < stop:
                res[pos] = v
                pos += 1
                v += step
        elif step < 0:
            while v > stop:
                res[pos] = v
                pos += 1
                v += step
    assert pos == cnt
    return res

def divakar_multi_arange(a):
    steps = a[:,2]
    lens = ((a[:,1]-a[:,0]) + steps-np.sign(steps))//steps
    b = np.repeat(steps, lens)
    ends = (lens-1)*steps + a[:,0]
    b[0] = a[0,0]
    b[lens[:-1].cumsum()] = a[1:,0] - ends[:-1]
    return b.cumsum()

random.seed(0)
neg_prob = 0.5
N = 100000
minv, maxv, maxstep = -100, 300, 15
steps = [random.randrange(1, maxstep + 1) * ((1, -1)[random.random() < neg_prob]) for i in range(N)]
starts = [random.randrange(minv + 1, maxv) for i in range(N)]
stops = [random.randrange(*(((starts[i] + 1, maxv + 1), (minv, starts[i]))[steps[i] < 0])) for i in range(N)]
joined = np.array([starts, stops, steps], dtype = np.int64).T

tb = time.time()
aref = np.concatenate([np.arange(joined[i, 0], joined[i, 1], joined[i, 2], dtype = np.int64) for i in range(N)])
npt = time.time() - tb
print('naive_multi_arange', round(npt, 5), 'sec')

for func in ['arty_multi_arange', 'divakar_multi_arange']:
    globals()[func](joined)  # warm-up call so JIT compilation is not timed
    tb = time.time()
    a = globals()[func](joined)
    myt = time.time() - tb
    print(func, round(myt, 5), 'sec', round(npt / myt, 2), 'speedup')
    assert a.size == aref.size, (a.size, aref.size)
    assert np.all(a == aref), np.vstack((np.flatnonzero(a != aref)[:5], a[a != aref][:5], aref[a != aref][:5])).T


Best way (preferably numpythonic) to count the length before value changes occur, and number of transitions, in a numpy array?

Sorry if the title is a little confusing, but I'll explain more here. Say I have a large array with a small number of unique elements that looks like this:
arr = np.array([[0,0,1,1,1,1,1],
                [0,2,0,0,1,1,1],
                [0,2,0,0,1,1,1],
                [0,2,1,1,1,0,0],
                [0,3,2,2,0,2,1]])
In this case, the array is 5x7 for example purposes, but in reality, I could be working with something as large as a 10000x10000 array (still with a small number of unique elements).
I was wondering how to iterate through each row and 'count' the number of times the array element changes as you move along it, as well as the number of constant elements between transitions.
For example, in the above array, the first row has 1 transition, and lengths 2 and 5 for the values 0 and 1, respectively. In the second-to-last row, there are 3 transitions, with lengths 1, 1, 3, and 2, for the values 0, 2, 1, and 0, respectively.
Ideally, some function transition_count would take arr above and return something like:
row0: [1, (0,2), (1,5)]
row1: [3, (0,1), (2,1), (0,2), (1,3)]
row2: ...
and so forth.
My thinking for this is to iterate through each row of the array, arr[i,:], and analyze it separately (maybe as a list?). But even for just a single row, I'm not sure how to 'count' the number of transitions and obtain the length of each constant run.
Any help would be appreciated, thank you!
This works on a per-row basis. Not sure we can readily vectorize further given the jagged nature of the output.
for row in arr:
    d = np.diff(row) != 0
    idx = np.concatenate(([0], np.flatnonzero(d) + 1))
    c = np.diff(np.concatenate((idx, [len(row)])))
    print(len(c))
    print('v', row[idx])
    print('c', c)
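For reference, for the first row of the sample array this prints the number of runs followed by the run values and lengths, which matches the '2 and 5 for the values 0 and 1' description in the question (output written out by hand, so treat it as a sketch):
# for the first row [0, 0, 1, 1, 1, 1, 1] the loop prints:
# 2          <- number of runs (transitions + 1)
# v [0 1]    <- value of each run
# c [2 5]    <- length of each run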
Here is a fully vectorized solution, if you are willing to accept a slightly different output format:
d = np.diff(arr, axis=1) != 0
t = np.ones(shape=arr.shape, dtype=np.bool)
t[:, 1:] = d
e = np.ones(shape=arr.shape, dtype=np.bool)
e[:, :-1] = d
sr, sc = np.nonzero(t)
er, ec = np.nonzero(e)
v = arr[sr, sc]
print(sr)
print(sc)
print(v)
print(ec-sc + 1)
Note: you can group and split these outputs by sr to arrive at your originally stated format; but usually it is best to stay away from jagged arrays entirely if you can (and you almost always can!), also in any downstream processing.
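For completeness, one way to do that grouping (a minimal sketch of my own, assuming every row contains at least one run, which is always the case here) is to stack the per-run values and lengths and split them wherever sr changes:
runs = np.c_[v, ec - sc + 1]                    # (value, run length) for every run, in row-major order
split_points = np.flatnonzero(np.diff(sr)) + 1  # positions where the run's row index changes
per_row = np.split(runs, split_points)          # list with one (n_runs, 2) array per row of arr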
Here's a vectorized way to get all values and counts -
# Look for interval changes and pad with bool 1s on either sides to set the
# first interval for each row and for setting boundary wrt the next row
p = np.ones((len(a),1), dtype=bool)
m = np.hstack((p, a[:,:-1]!=a[:,1:], p))
# Look for interval change indices in flattened array version
intv = m.sum(1).cumsum()-1
# Get index and counts
idx = np.diff(np.flatnonzero(m.ravel()))
count = np.delete(idx, intv[:-1])
val = a[m[:,:-1]]
To get to the final per-row split, split based on rows -
# Get couples and setup offsetted interval change indices
grps = np.c_[val,count]
intvo = np.r_[0,intv-np.arange(len(intv))]
# Finally slice and get output
out = [grps[i:j] for (i,j) in zip(intvo[:-1], intvo[1:])]
Benchmarking
Solutions to get the counts and values, as functions:
# @Eelco Hoogendoorn's soln
def eh(arr):
    d = np.diff(arr, axis=1) != 0
    t = np.ones(shape=arr.shape, dtype=np.bool)
    t[:, 1:] = d
    e = np.ones(shape=arr.shape, dtype=np.bool)
    e[:, :-1] = d
    sr, sc = np.nonzero(t)
    er, ec = np.nonzero(e)
    v = arr[sr, sc]
    return ec-sc + 1,v
# Function form of proposed solution from this post
def grouped_info(a):
    p = np.ones((len(a),1), dtype=bool)
    m = np.hstack((p, a[:,:-1]!=a[:,1:], p))
    intv = m.sum(1).cumsum()-1
    idx = np.diff(np.flatnonzero(m.ravel()))
    count = np.delete(idx, intv[:-1])
    val = a[m[:,:-1]]
    return count,val
We will try to get closer to your actual use-case scenario of 10000x10000 by tiling the given sample along both axes, and time the proposed solutions.
In [48]: a
Out[48]:
array([[0, 0, 1, 1, 1, 1, 1],
[0, 2, 0, 0, 1, 1, 1],
[0, 2, 0, 0, 1, 1, 1],
[0, 2, 1, 1, 1, 0, 0],
[0, 3, 2, 2, 0, 2, 1]])
In [49]: a = np.repeat(np.repeat(a,1000,axis=0),1000,axis=1)
In [50]: %timeit grouped_info(a)
126 ms ± 7.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [52]: %timeit eh(a)
389 ms ± 41.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Stepping with multiple values while slicing an array in Python

I am trying to get m values while stepping through every n elements of an array. For example, for m = 2 and n = 5, and given
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
I want to retrieve
b = [1, 2, 6, 7]
Is there a way to do this using slicing? I can do this using a nested list comprehension, but I was wondering if there was a way to do this using the indices only. For reference, the list comprehension way is:
b = [k for j in [a[i:i+2] for i in range(0,len(a),5)] for k in j]
I agree with wim that you can't do it with just slicing. But you can do it with just one list comprehension:
>>> [x for i,x in enumerate(a) if i%n < m]
[1, 2, 6, 7]
No, that is not possible with slicing. Slicing only supports start, stop, and step - there is no way to represent stepping with "groups" of size larger than 1.
In short, no, you cannot. But you can use itertools to remove the need for intermediary lists:
from itertools import chain, islice
res = list(chain.from_iterable(islice(a, i, i+2) for i in range(0, len(a), 5)))
print(res)
[1, 2, 6, 7]
Borrowing @Kevin's logic, if you want a vectorised solution to avoid a for loop, you can use the 3rd-party library numpy:
import numpy as np
m, n = 2, 5
a = np.array(a) # convert to numpy array
res = a[np.where(np.arange(a.shape[0]) % n < m)]
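A small aside (my own observation, not part of the answer above): the modulo comparison already yields a boolean mask, so the np.where call can be dropped and the mask used directly for indexing:
res = a[np.arange(a.shape[0]) % n < m]   # same result via boolean-mask indexing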
There are other ways to do it, which all have advantages for some cases, but none are "just slicing".
The most general solution is probably to group your input, slice the groups, then flatten the slices back out. One advantage of this solution is that you can do it lazily, without building big intermediate lists, and you can do it to any iterable, including a lazy iterator, not just a list.
# from itertools recipes in the docs
import itertools

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)
groups = grouper(a, 5)
truncated = (group[:2] for group in groups)
b = [elem for group in truncated for elem in group]
And you can convert that into a pretty simple one-liner, although you still need the grouper function:
b = [elem for group in grouper(a, 5) for elem in group[:2]]
Another option is to build a list of indices, and use itemgetter to grab all the values. This might be more readable for a more complicated function than just "the first 2 of every 5", but it's probably less readable for something as simple as your use:
import operator

indices = [i for i in range(len(a)) if i%5 < 2]
b = operator.itemgetter(*indices)(a)
… which can be turned into a one-liner:
b = operator.itemgetter(*[i for i in range(len(a)) if i%5 < 2])(a)
And you can combine the advantages of the two approaches by writing your own version of itemgetter that takes a lazy index iterator—which I won't show, because you can go even better by writing one that takes an index filter function instead:
def indexfilter(pred, a):
    return [elem for i, elem in enumerate(a) if pred(i)]
b = indexfilter((lambda i: i%5<2), a)
(To make indexfilter lazy, just replace the brackets with parens.)
… or, as a one-liner:
b = [elem for i, elem in enumerate(a) if i%5<2]
I think this last one might be the most readable. And it works with any iterable rather than just lists, and it can be made lazy (again, just replace the brackets with parens). But I still don't think it's simpler than your original comprehension, and it's not just slicing.
The question says array, and if by that we mean NumPy arrays, we can use a few obvious NumPy tricks and a few not-so-obvious ones. We can certainly use slicing to get a 2D view into the input under certain conditions.
Now, based on the array length, let's call it l, and n, we have three scenarios:
Scenario #1: l is divisible by n
We can use slicing and reshaping to get a view into the input array and hence get constant runtime.
Verify the view concept :
In [108]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [109]: m = 2; n = 5
In [110]: a.reshape(-1,n)[:,:m]
Out[110]:
array([[1, 2],
[6, 7]])
In [111]: np.shares_memory(a, a.reshape(-1,n)[:,:m])
Out[111]: True
Check timings on a very large array, to back up the constant-runtime claim:
In [118]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [119]: %timeit a.reshape(-1,n)[:,:m]
1000000 loops, best of 3: 563 ns per loop
In [120]: a = np.arange(10000000)
In [121]: %timeit a.reshape(-1,n)[:,:m]
1000000 loops, best of 3: 564 ns per loop
To get a flattened version:
If we have to get a flattened array as output, we just need to use a flattening operation with .ravel(), like so -
In [127]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [128]: m = 2; n = 5
In [129]: a.reshape(-1,n)[:,:m].ravel()
Out[129]: array([1, 2, 6, 7])
Timings show that it's not too bad when compared with the other looping and vectorized numpy.where versions from other posts -
In [143]: a = np.arange(10000000)
# @Kevin's soln
In [145]: %timeit [x for i,x in enumerate(a) if i%n < m]
1 loop, best of 3: 1.23 s per loop
# @jpp's soln
In [147]: %timeit a[np.where(np.arange(a.shape[0]) % n < m)]
10 loops, best of 3: 145 ms per loop
In [144]: %timeit a.reshape(-1,n)[:,:m].ravel()
100 loops, best of 3: 16.4 ms per loop
Scenario #2: l is not divisible by n, but the groups end with a complete one at the end
We turn to the not-so-obvious NumPy method np.lib.stride_tricks.as_strided, which allows us to go beyond the memory block bounds (hence we need to be careful here not to write into those), to facilitate a solution using slicing. The implementation would look something like this -
def select_groups(a, m, n):
    a = np.asarray(a)
    strided = np.lib.stride_tricks.as_strided
    # Get params defining the lengths for slicing and output array shape
    nrows = len(a)//n
    add0 = len(a)%n
    s = a.strides[0]
    out_shape = nrows+int(add0!=0),m
    # Finally stride, flatten with reshape and slice
    return strided(a, shape=out_shape, strides=(s*n,s))
A sample run to verify that the output is a view -
In [151]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])
In [152]: m = 2; n = 5
In [153]: select_groups(a, m, n)
Out[153]:
array([[ 1, 2],
[ 6, 7],
[11, 12]])
In [154]: np.shares_memory(a, select_groups(a, m, n))
Out[154]: True
To get a flattened version, append .ravel().
Let's get some timings comparison -
In [158]: a = np.arange(10000003)
In [159]: m = 2; n = 5
# @Kevin's soln
In [161]: %timeit [x for i,x in enumerate(a) if i%n < m]
1 loop, best of 3: 1.24 s per loop
# @jpp's soln
In [162]: %timeit a[np.where(np.arange(a.shape[0]) % n < m)]
10 loops, best of 3: 148 ms per loop
In [160]: %timeit select_groups(a, m=m, n=n)
100000 loops, best of 3: 5.8 µs per loop
If we need a flattened version, it's still not too bad -
In [163]: %timeit select_groups(a, m=m, n=n).ravel()
100 loops, best of 3: 16.5 ms per loop
Scenario #3: l is not divisible by n, and the groups end with an incomplete one at the end
For this case, we would need an extra slicing at the end on top of what we had in the previous method, like so -
def select_groups_generic(a, m, n):
    a = np.asarray(a)
    strided = np.lib.stride_tricks.as_strided
    # Get params defining the lengths for slicing and output array shape
    nrows = len(a)//n
    add0 = len(a)%n
    lim = m*(nrows) + add0
    s = a.strides[0]
    out_shape = nrows+int(add0!=0),m
    # Finally stride, flatten with reshape and slice
    return strided(a, shape=out_shape, strides=(s*n,s)).reshape(-1)[:lim]
Sample run -
In [166]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
In [167]: m = 2; n = 5
In [168]: select_groups_generic(a, m, n)
Out[168]: array([ 1, 2, 6, 7, 11])
Timings -
In [170]: a = np.arange(10000001)
In [171]: m = 2; n = 5
# @Kevin's soln
In [172]: %timeit [x for i,x in enumerate(a) if i%n < m]
1 loop, best of 3: 1.23 s per loop
# @jpp's soln
In [173]: %timeit a[np.where(np.arange(a.shape[0]) % n < m)]
10 loops, best of 3: 145 ms per loop
In [174]: %timeit select_groups_generic(a, m, n)
100 loops, best of 3: 12.2 ms per loop
I realize that recursion isn't popular, but would something like this work? Also, I'm uncertain whether adding recursion to the mix counts as just using slices.
def get_elements(A, m, n):
    if len(A) < m:
        return A
    else:
        return A[:m] + get_elements(A[n:], m, n)
A is the array, and m and n are defined as in the question. The if covers the base case, where you have an array with length less than the number of elements you're trying to retrieve, and the else branch is the recursive case. I'm somewhat new to Python, so please forgive my poor understanding of the language if this doesn't work properly, though I tested it and it seems to work fine.
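A quick usage check with the question's inputs (assuming A is a plain Python list, since the slicing and + concatenation here are list operations):
A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(get_elements(A, 2, 5))   # [1, 2, 6, 7]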
With itertools you could get an iterator with:
from itertools import compress, cycle
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
n = 5
m = 2
it = compress(a, cycle([1, 1, 0, 0, 0]))
res = list(it)
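The selector pattern above is hardwired for m = 2 and n = 5; a generalization (my own adaptation, not part of the original answer) would build it from m and n:
from itertools import compress, cycle
it = compress(a, cycle([True] * m + [False] * (n - m)))
res = list(it)   # [1, 2, 6, 7] for the sample a with m = 2, n = 5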

How to access multi columns in the rolling operator?

I want to do a rolling-window calculation in pandas which needs to deal with two columns at the same time. I'll use a simple example to state the problem clearly:
import pandas as pd
df = pd.DataFrame({
    'x': [1, 2, 3, 2, 1, 5, 4, 6, 7, 9],
    'y': [4, 3, 4, 6, 5, 9, 1, 3, 1, 2]
})
windowSize = 4
result = []
for i in range(1, len(df)+1):
    if i < windowSize:
        result.append(None)
    else:
        x = df.x.iloc[i-windowSize:i]
        y = df.y.iloc[i-windowSize:i]
        m = y.mean()
        r = sum(x[y > m]) / sum(x[y <= m])
        result.append(r)
print(result)
Is there any way to solve this in pandas without a for loop? Any help is appreciated.
You can use the rolling window trick for numpy arrays and apply it to the array underlying the DataFrame.
import pandas as pd
import numpy as np
def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

df = pd.DataFrame({
    'x': [1, 2, 3, 2, 1, 5, 4, 6, 7, 9],
    'y': [4, 3, 4, 6, 5, 9, 1, 3, 1, 2]
})
windowSize = 4
rw = rolling_window(df.values.T, windowSize)
m = np.mean(rw[1], axis=-1, keepdims=True)
a = np.sum(rw[0] * (rw[1] > m), axis=-1)
b = np.sum(rw[0] * (rw[1] <= m), axis=-1)
result = a / b
The result lacks the leading None values, but they should be easy to append (in form of np.nan or after converting the result to a list).
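For example, one way to restore the leading placeholders (a minimal sketch, assuming NaN is an acceptable stand-in for None) is to pad the front of the result so it has one entry per DataFrame row:
import numpy as np
padded = np.concatenate([np.full(windowSize - 1, np.nan), result])   # prepend windowSize - 1 NaNs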
This is probably not what you are looking for if you want to stay within pandas, but it will get the job done without loops.
Here's one vectorized approach using NumPy tools -
windowSize = 4
a = df.values
X = strided_app(a[:,0],windowSize,1)
Y = strided_app(a[:,1],windowSize,1)
M = Y.mean(1)
mask = Y>M[:,None]
sums = np.einsum('ij,ij->i',X,mask)
rest_sums = X.sum(1) - sums
out = sums/rest_sums
strided_app is taken from here.
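The strided_app helper is not reproduced in this post; a common implementation of it (my reconstruction following the usual as_strided windowing pattern, so treat it as a sketch) looks roughly like this:
import numpy as np
def strided_app(a, L, S):
    # overlapping windows of length L taken every S elements of the 1D array a (a view, no copy)
    nrows = ((a.size - L) // S) + 1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S * n, n))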
Runtime test -
Approaches -
# @kazemakase's solution
def rolling_window_sum(df, windowSize=4):
    rw = rolling_window(df.values.T, windowSize)
    m = np.mean(rw[1], axis=-1, keepdims=True)
    a = np.sum(rw[0] * (rw[1] > m), axis=-1)
    b = np.sum(rw[0] * (rw[1] <= m), axis=-1)
    result = a / b
    return result
# Proposed in this post
def strided_einsum(df, windowSize=4):
    a = df.values
    X = strided_app(a[:,0],windowSize,1)
    Y = strided_app(a[:,1],windowSize,1)
    M = Y.mean(1)
    mask = Y>M[:,None]
    sums = np.einsum('ij,ij->i',X,mask)
    rest_sums = X.sum(1) - sums
    out = sums/rest_sums
    return out
Timings -
In [46]: df = pd.DataFrame(np.random.randint(0,9,(1000000,2)))
In [47]: %timeit rolling_window_sum(df)
10 loops, best of 3: 90.4 ms per loop
In [48]: %timeit strided_einsum(df)
10 loops, best of 3: 62.2 ms per loop
To squeeze out more performance, we can compute the Y.mean(1) part, which is basically a windowed average, with SciPy's 1D uniform filter. Thus, M could alternatively be computed for windowSize=4 as -
from scipy.ndimage.filters import uniform_filter1d as unif1d
M = unif1d(a[:,1].astype(float),windowSize)[2:-1]
The performance gains are significant -
In [65]: %timeit strided_einsum(df)
10 loops, best of 3: 61.5 ms per loop
In [66]: %timeit strided_einsum_unif_filter(df)
10 loops, best of 3: 49.4 ms per loop
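The strided_einsum_unif_filter variant being timed is not shown in the post; presumably it is strided_einsum with only the mean computation swapped out, roughly along these lines (a sketch, hardcoded to windowSize=4 like the snippet above):
from scipy.ndimage.filters import uniform_filter1d as unif1d
def strided_einsum_unif_filter(df, windowSize=4):
    a = df.values
    X = strided_app(a[:,0], windowSize, 1)
    Y = strided_app(a[:,1], windowSize, 1)
    # windowed mean via the uniform filter instead of Y.mean(1); the [2:-1] slice
    # aligns the filter output with the windows for windowSize=4
    M = unif1d(a[:,1].astype(float), windowSize)[2:-1]
    mask = Y > M[:,None]
    sums = np.einsum('ij,ij->i', X, mask)
    rest_sums = X.sum(1) - sums
    return sums / rest_sums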

Is there a faster way to add two 2-d numpy array

Let's say I have two large 2-D NumPy arrays of the same dimensions (say 2000x2000). I want to sum them element-wise. I was wondering if there is a faster way than np.add().
Edit: I am adding a similar example of what I am using now. Is there a way to speed up this?
# a and b are the two matrices I already have. Dimension is 2000x2000
# shift is also a list that is previously known
for j in range(100000):
    b = np.roll(b, shift[j], axis=0)
    a = np.add(a, b)
Approach #1 (Vectorized)
We can use modulus to simulate the circulating behavior of roll/circshift, and with broadcasted indices to cover all rows we get a fully vectorized approach, like so -
n = b.shape[0]
idx = n-1 - np.mod(shift.cumsum()[:,None]-1 - np.arange(n), n)
a += b[idx].sum(0)
Approach #2 (Loopy one)
b_ext = np.row_stack((b, b[:-1] ))
start_idx = n-1 - np.mod(shift.cumsum()-1,n)
for j in range(start_idx.size):
    a += b_ext[start_idx[j]:start_idx[j]+n]
Colon notation vs using indices for slicing
The idea here is to do minimal work once we are inside the loop. We pre-compute the start row index of each iteration before going into the loop. So, all we need to do once inside the loop is slice using colon notation, which gives a view into the array, and add it up. This should be much better than np.roll, which needs to compute all of those row indices and produces a copy, which is expensive.
Here's a bit more into the view and copy concepts when slicing with colon and indices -
In [11]: a = np.random.randint(0,9,(10))
In [12]: a
Out[12]: array([8, 0, 1, 7, 5, 0, 6, 1, 7, 0])
In [13]: a[3:8]
Out[13]: array([7, 5, 0, 6, 1])
In [14]: a[[3,4,5,6,7]]
Out[14]: array([7, 5, 0, 6, 1])
In [15]: np.may_share_memory(a, a[3:8])
Out[15]: True
In [16]: np.may_share_memory(a, a[[3,4,5,6,7]])
Out[16]: False
Runtime test
Function definitions -
def original_loopy_app(a,b):
    for j in range(shift.size):
        b = np.roll(b, shift[j], axis=0)
        a += b

def vectorized_app(a,b):
    n = b.shape[0]
    idx = n-1 - np.mod(shift.cumsum()[:,None]-1 - np.arange(n), n)
    a += b[idx].sum(0)

def modified_loopy_app(a,b):
    n = b.shape[0]
    b_ext = np.row_stack((b, b[:-1]))
    start_idx = n-1 - np.mod(shift.cumsum()-1,n)
    for j in range(start_idx.size):
        a += b_ext[start_idx[j]:start_idx[j]+n]
Case #1:
In [5]: # Setup input arrays
...: N = 200
...: M = 1000
...: a = np.random.randint(11,99,(N,N))
...: b = np.random.randint(11,99,(N,N))
...: shift = np.random.randint(0,N,M)
...:
In [6]: original_loopy_app(a1,b1)
...: vectorized_app(a2,b2)
...: modified_loopy_app(a3,b3)
...:
In [7]: np.allclose(a1, a2) # Verify results
Out[7]: True
In [8]: np.allclose(a1, a3) # Verify results
Out[8]: True
In [9]: %timeit original_loopy_app(a1,b1)
...: %timeit vectorized_app(a2,b2)
...: %timeit modified_loopy_app(a3,b3)
...:
10 loops, best of 3: 107 ms per loop
10 loops, best of 3: 137 ms per loop
10 loops, best of 3: 48.2 ms per loop
Case #2:
In [13]: # Setup input arrays (datasets are exactly 1/10th of original sizes)
...: N = 200
...: M = 10000
...: a = np.random.randint(11,99,(N,N))
...: b = np.random.randint(11,99,(N,N))
...: shift = np.random.randint(0,N,M)
...:
In [14]: %timeit original_loopy_app(a1,b1)
...: %timeit modified_loopy_app(a3,b3)
...:
1 loops, best of 3: 1.11 s per loop
1 loops, best of 3: 481 ms per loop
So, we are looking at 2x+ speedup there with the modified loopy approach!

get the index of the last negative value in a 2d array per column

I'm trying to get the index of the last negative value of an array per column (in order to slice it after).
A simple working example on a 1D vector is:
import numpy as np
A = np.arange(10) - 5
A[2] = 2
print A # [-5 -4 2 -2 -1 0 1 2 3 4]
idx = np.max(np.where(A <= 0)[0])
print idx # 5
A[:idx] = 0
print A # [0 0 0 0 0 0 1 2 3 4]
Now I want to do the same thing on each column of a 2D array:
A = np.arange(10) - 5
A[2] = 2
A2 = np.tile(A, 3).reshape((3, 10)) - np.array([0, 2, -1]).reshape((3, 1))
print A2
# [[-5 -4 2 -2 -1 0 1 2 3 4]
# [-7 -6 0 -4 -3 -2 -1 0 1 2]
# [-4 -3 3 -1 0 1 2 3 4 5]]
And I would like to obtain :
print A2
# [[0 0 0 0 0 0 1 2 3 4]
# [0 0 0 0 0 0 0 0 1 2]
# [0 0 0 0 0 1 2 3 4 5]]
but I can't manage to figure out how to translate the max/where statement to this 2D array...
You already have good answers, but I wanted to propose a potentially quicker variation using the function np.maximum.accumulate. Since your method for a 1D array uses max/where, you may also find this approach quite intuitive. (Edit: quicker Cython implementation added below).
The overall approach is very similar to the others; the mask is created with:
np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]
This line of code does the following:
(A2 < 0) creates a Boolean array, indicating whether a value is negative or not. The index [:, ::-1] flips this left-to-right.
np.maximum.accumulate is used to return the cumulative maximum along each row (i.e. axis=1). For example [False, True, False] would become [False, True, True].
The final indexing operation [:, ::-1] flips this new Boolean array left-to-right.
Then all that's left to do is to use the Boolean array as a mask to set the True values to zero.
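To make the steps concrete, here is what the intermediate arrays look like for the first row of A2 (a small hand-checked sketch, not output copied from the post):
import numpy as np
row = np.array([-5, -4, 2, -2, -1, 0, 1, 2, 3, 4])   # first row of A2
neg = row < 0                                  # [ T  T  F  T  T  F  F  F  F  F]
mask = np.maximum.accumulate(neg[::-1])[::-1]  # [ T  T  T  T  T  F  F  F  F  F]
row[mask] = 0                                  # [ 0  0  0  0  0  0  1  2  3  4]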
Borrowing the timing methodology and two functions from @Divakar's answer, here are the benchmarks for my proposed method:
# method using np.maximum.accumulate
def accumulate_based(A2):
    A2[np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]] = 0
    return A2

# large sample array
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
A2c2 = A2.copy()
The timings are:
In [47]: %timeit broadcasting_based(A2)
10 loops, best of 3: 61.7 ms per loop
In [48]: %timeit cumsum_based(A2c)
10 loops, best of 3: 127 ms per loop
In [49]: %timeit accumulate_based(A2c2) # quickest
10 loops, best of 3: 43.2 ms per loop
So using np.maximum.accumulate can be as much as 30% faster than the next fastest solution for arrays of this size and shape.
As #tom10 points out, each NumPy operation processes arrays in their entirety, which can be inefficient when multiple operations are needed to get a result. An iterative approach which works through the array just once may fare better.
Below is a naive function written in Cython which can be more than twice as fast as a pure NumPy approach.
This function may be able to be sped up further using memory views.
cimport cython
import numpy as np
cimport numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def cython_based(np.ndarray[long, ndim=2, mode="c"] array):
    cdef int rows, cols, i, j, seen_neg
    rows = array.shape[0]
    cols = array.shape[1]
    for i in range(rows):
        seen_neg = 0
        for j in range(cols-1, -1, -1):
            if seen_neg or array[i, j] < 0:
                seen_neg = 1
                array[i, j] = 0
    return array
This function works backwards through each row and starts setting values to zero once it has seen a negative value.
Testing it works:
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
np.array_equal(accumulate_based(A2), cython_based(A2c))
# True
Comparing the performance of the functions:
In [52]: %timeit accumulate_based(A2)
10 loops, best of 3: 49.8 ms per loop
In [53]: %timeit cython_based(A2c)
100 loops, best of 3: 18.6 ms per loop
Assuming that you are looking to set all elements in each row up to and including the last negative element to zero (as per the expected output listed in the question for a sample case), two approaches could be suggested here.
Approach #1
This one is based on np.cumsum to generate a mask of elements to be set to zeros as listed next -
# Get boolean mask with TRUEs for each row starting at the first element and
# ending at the last negative element
mask = (np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]
# Use mask to set all such TRUEs to zeros as per the expected output in OP
A2[mask] = 0
Sample run -
In [280]: A2 = np.random.randint(-4,10,(6,7)) # Random input 2D array
In [281]: A2
Out[281]:
array([[-2, 9, 8, -3, 2, 0, 5],
[-1, 9, 5, 1, -3, -3, -2],
[ 3, -3, 3, 5, 5, 2, 9],
[ 4, 6, -1, 6, 1, 2, 2],
[ 4, 4, 6, -3, 7, -3, -3],
[ 0, 2, -2, -3, 9, 4, 3]])
In [282]: A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0 # Use mask to set zeros
In [283]: A2
Out[283]:
array([[0, 0, 0, 0, 2, 0, 5],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 5, 5, 2, 9],
[0, 0, 0, 6, 1, 2, 2],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 9, 4, 3]])
Approach #2
This one starts with the idea of finding the last negative element indices from @tom10's answer and develops into a mask-finding method using broadcasting to get the desired output, similar to approach #1.
# Find last negative index for each row
last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
# Find the invalid indices (rows with no negative indices)
invalid_idx = A2[np.arange(A2.shape[0]),last_idx]>=0
# Set the indices for invalid ones to "-1"
last_idx[invalid_idx] = -1
# Boolean mask with each row starting with TRUE as the first element
# and ending at the last negative element
mask = np.arange(A2.shape[1]) < (last_idx[:,None] + 1)
# Set masked elements to zeros, for the desired output
A2[mask] = 0
Runtime tests -
Function definitions:
def broadcasting_based(A2):
    last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
    last_idx[A2[np.arange(A2.shape[0]),last_idx]>=0] = -1
    A2[np.arange(A2.shape[1]) < (last_idx[:,None] + 1)] = 0
    return A2

def cumsum_based(A2):
    A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0
    return A2
Runtimes:
In [379]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [380]: %timeit broadcasting_based(A2)
10 loops, best of 3: 106 ms per loop
In [381]: %timeit cumsum_based(A2c)
1 loops, best of 3: 167 ms per loop
Verify results -
In [384]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [385]: np.array_equal(broadcasting_based(A2),cumsum_based(A2c))
Out[385]: True
Finding the first is usually easier and faster than finding the last, so here I reverse the array and then find the first negative (using the OP's version of A2):
im = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
# [4 6 3] # which are the indices of the last negative in A2
Also, though, note that if you have large arrays with many negative numbers, it might actually be faster to use a non-numpy approach so you can short circuit the search. That is, numpy will do the calculation on the entire array, so if you have 10000 elements in a row but typically will hit a negative number in the first 10 elements (of a reverse search), a pure Python approach might end up being faster.
Overall, iterating the rows might be faster for subsequent operations as well. For example, if your next step is multiplication, it could be faster to just multiply the slices at the ends that are non-zeros, or maybe find that longest non-zero section and just deal with the truncated array.
This basically comes down to the number of negatives per row. If you have 1000 negatives per row, you'll on average have non-zero segments that are 1/1000th of your full row length, so you could get a 1000x speed-up by just looking at the ends. The short example given in the question is great for understanding and answering the basic question, but I wouldn't take timing tests too seriously when your end application is a very different use case; especially since the fractional time savings from using iteration improve in proportion to array size (assuming a constant ratio and random distribution of negative numbers).
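As an illustration of that short-circuiting idea (a plain-Python sketch of my own, not from the original answer), a reverse scan over a row can stop at the first negative value it meets:
def last_negative_index(row):
    # scan from the right and stop as soon as a negative value is found
    for j in range(len(row) - 1, -1, -1):
        if row[j] < 0:
            return j
    return -1   # no negative value in this row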
You can access individual rows:
A2[0] == array([-5, -4, 2, -2, -1, 0, 1, 2, 3, 4])
