Delete negative elements which are between positives only - python

a = [1, 3, 6, -2, 4, 5, 8, -3, 9,
2, -5, -7, -9, 3, 6, -7, -6, 2]
I want to get:
a = [1, 3, 6, 4, 5, 8, 9, 2,
-5, -7, -9, 3, 6, -7, -6, 2]
which deletes only the 4th and 8th elements, the single negative elements that sit between two positive elements.
import numpy as np
a = [1, 3, 6, -2, 4, 5, 8, -3, 9,
2, -5, -7, -9, 3, 6, -7, -6, 2]
for i in range(len(a)):
    if a[i] < 0 and a[i - 1] > 0 and a[i + 1] > 0:
        np.delete(a[i])
print(a)
This did not work. Can you tell me where I need to fix it?

Because you ask about numpy in the subject line and also attempt to use np.delete() in your code, I assume you intend for a to be a numpy array.
Here is a way to do what your question asks using vectorized operations in numpy:
import numpy as np
a = np.array([1,3,6,-2,4,5,8,-3,9,2,-5,-7,-9, 3, 6, -7, -6, 2])
b = np.concatenate([a[1:], [np.NaN]])
c = np.concatenate([[np.NaN], a[:-1]])
d = (a<0)&(b>0)&(c>0)
print(a[~d])
Output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
What we've done is to shift a one position to the left with NaN fill on the right (b) and one position to the right with NaN fill on the left (c). We then create a boolean mask d using the vectorized comparison and boolean operators <, > and &, which is True exactly where a single negative value is sandwiched between positives. Finally, we use the ~ operator to flip the mask and use it to filter the unwanted negative values out of a.
UPDATE: Based on benchmarking of several possible strategies for answering your question (see below), the conclusion is that the following solution appears to be the most performant (credit to @Kelly Bundy for suggesting this in a comment):
a = np.concatenate((a[:1], a[1:-1][(a[1:-1]>=0)|(a[2:]<=0)|(a[:-2]<=0)], a[-1:]))
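To make the logic of that one-liner easier to follow, here is the same expression unpacked into steps (a sketch; the intermediate names are just for illustration):
import numpy as np

a = np.array([1, 3, 6, -2, 4, 5, 8, -3, 9, 2, -5, -7, -9, 3, 6, -7, -6, 2])
inner = a[1:-1]                                     # every element that has two neighbors
keep = (inner >= 0) | (a[2:] <= 0) | (a[:-2] <= 0)  # keep unless negative with two positive neighbors
a = np.concatenate((a[:1], inner[keep], a[-1:]))    # the first and last elements are always kept
print(a)  # [ 1  3  6  4  5  8  9  2 -5 -7 -9  3  6 -7 -6  2]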
UPDATE: Here are some timeit() comparisons of several variations on answers given for this question using NumPy 1.22.2.
The fastest of the 8 strategies is:
a = np.concatenate([a[:1], a[1:-1][(a[1:-1]>=0)|(a[2:]<=0)|(a[:-2]<=0)], a[-1:]])
A close second is:
a = a[np.concatenate([[True], ~((a[1:-1]<0)&(a[2:]>0)&(a[:-2]>0)), [True]])]
The strategies using np.r_, either with np.delete() or with a boolean mask and [] indexing, are about twice as slow as the fastest.
The strategy using numpy.roll() is about 3 times as slow as the fastest. Note: As highlighted by @Kelly Bundy in a comment, the roll() strategy in the benchmark does not give a correct answer to this question in all cases (though for the particular input example it happens to). I have nevertheless included it in the benchmark because the performance of roll() relative to concatenate() and r_ may be of general interest beyond the narrow context of this question.
Results:
foo_1 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
foo_2 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
foo_3 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
foo_4 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
foo_5 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
foo_6 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
foo_7 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
foo_8 output:
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
Timeit results:
foo_1 ran in 1.2354546000715346e-05 seconds using 100000 iterations
foo_2 ran in 1.0962473000399769e-05 seconds using 100000 iterations
foo_3 ran in 7.733614000026136e-06 seconds using 100000 iterations
foo_4 ran in 7.751871000509709e-06 seconds using 100000 iterations
foo_5 ran in 5.856722998432815e-06 seconds using 100000 iterations
foo_6 ran in 7.5727709988132115e-06 seconds using 100000 iterations
foo_7 ran in 1.7790602000895887e-05 seconds using 100000 iterations
foo_8 ran in 5.435103999916464e-06 seconds using 100000 iterations
Code that generated the results:
import numpy as np
a = np.array([1,3,6,-2,4,5,8,-3,9,2,-5,-7,-9, 3, 6, -7, -6, 2])
from timeit import timeit
def foo_1(a):
    a = a if a.shape[0] < 2 else np.delete(a, np.r_[False, (a[1:-1] < 0) & (a[:-2] > 0) & (a[2:] > 0), False])
    return a

def foo_2(a):
    a = a if a.shape[0] < 2 else a[np.r_[True, ~((a[1:-1] < 0) & (a[:-2] > 0) & (a[2:] > 0)), True]]
    return a

def foo_3(a):
    b = np.concatenate([a[1:], [np.NaN]])
    c = np.concatenate([[np.NaN], a[:-1]])
    d = (a<0)&(b>0)&(c>0)
    a = a[~d]
    return a

def foo_4(a):
    a = a[~((a<0)&(np.concatenate([a[1:], [np.NaN]])>0)&(np.concatenate([[np.NaN], a[:-1]])>0))]
    return a

def foo_5(a):
    a = a if a.shape[0] < 2 else a[np.concatenate([[True], ~((a[1:-1]<0)&(a[2:]>0)&(a[:-2]>0)), [True]])]
    return a

def foo_6(a):
    a = a if a.shape[0] < 2 else np.delete(a, np.concatenate([[False], (a[1:-1]<0)&(a[2:]>0)&(a[:-2]>0), [False]]))
    return a

def foo_7(a):
    mask_bad = (
        (a < 0) &               # the value is < 0 AND
        (np.roll(a, 1) >= 0) &  # the value to the left is >= 0
        (np.roll(a, -1) >= 0)   # the value to the right is >= 0
    )
    mask_good = ~mask_bad
    a = a[mask_good]
    return a

def foo_8(a):
    a = np.concatenate([a[:1], a[1:-1][(a[1:-1]>=0)|(a[2:]<=0)|(a[:-2]<=0)], a[-1:]])
    return a

foo_count = 8
for foo in ['foo_' + str(i + 1) for i in range(foo_count)]:
    print(f'{foo} output:')
    print(eval(f"{foo}(a)"))

n = 100000
print(f'Timeit results:')
for foo in ['foo_' + str(i + 1) for i in range(foo_count)]:
    t = timeit(f"{foo}(a)", setup=f"from __main__ import a, {foo}", number=n) / n
    print(f'{foo} ran in {t} seconds using {n} iterations')

A solution that handles edges correctly and doesn't create an unholy number of temporary arrays:
a = np.delete(a, np.r_[False, (a[1:-1] < 0) & (a[:-2] > 0) & (a[2:] > 0), False])
Alternatively, you can create the positive rather than the negative mask
a = a[np.r_[True, (a[1:-1] >= 0) | (a[:-2] <= 0) | (a[2:] <= 0), True]]
Since np.concatenate is faster than np.r_, you could rephrase the masks as
np.concatenate(([False], (a[1:-1] < 0) & (a[:-2] > 0) & (a[2:] > 0), [False]))
and
np.concatenate(([True], (a[1:-1] >= 0) | (a[:-2] <= 0) | (a[2:] <= 0), [True]))
In some cases, you might get extra mileage out of applying np.where(...)[0] or np.flatnonzero to the mask. This sometimes helps because it avoids having to count the masked elements twice.
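For illustration, here is a minimal sketch of the np.flatnonzero variant, reusing the positive mask from above (the intermediate names are illustrative only):
import numpy as np

a = np.array([1, 3, 6, -2, 4, 5, 8, -3, 9, 2, -5, -7, -9, 3, 6, -7, -6, 2])
keep = np.concatenate(([True], (a[1:-1] >= 0) | (a[:-2] <= 0) | (a[2:] <= 0), [True]))
idx = np.flatnonzero(keep)  # integer indices of the elements to keep
print(a[idx])               # [ 1  3  6  4  5  8  9  2 -5 -7 -9  3  6 -7 -6  2]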

Your conditional logic
if a[i] < 0 and a[i - 1] > 0 and a[i + 1] > 0
seems sound and readable to me. But it would have issues with the boundary cases:
[1, 2, -3] -> IndexError: list index out of range
[-1, 2, 3] -> [2, 3]
Handling it properly could be as simple as skipping the first and last element of your list with
for i in range(1, len(a) - 1)
Test
import numpy as np

def del_neg_between_pos(a):
    delete_idx = []
    for i in range(1, len(a) - 1):
        if a[i] < 0 and a[i - 1] > 0 and a[i + 1] > 0:
            delete_idx.append(i)
    return np.delete(a, delete_idx)

if __name__ == "__main__":
    a1 = [1, 3, 6, -2, 4, 5, 8, -3, 9, 2, -5, -7, -9, 3, 6, -7, -6, 2]
    a2 = [1, 2, -3]
    a3 = [-1, 2, 3]
    for a in [a1, a2, a3]:
        print(del_neg_between_pos(a))
Output
[ 1 3 6 4 5 8 9 2 -5 -7 -9 3 6 -7 -6 2]
[ 1 2 -3]
[-1 2 3]

A one-liner is
a[1:-1] = [a[i] for i in range(1, len(a) - 1) if not (a[i] < 0 and a[i-1] > 0 and a[i+1] > 0)]
The above assigns to the slice a[1:-1] every element of a that is not a negative value preceded and followed by a positive one.
Output (from printing a)
[1, 3, 6, 4, 5, 8, 9, 2, -5, -7, -9, 3, 6, -7, -6, 2]
Timings
The below timings compare my approach above to the fastest approach in @constantstranger's answer:
a = np.concatenate([a[:1], a[1:-1][(a[1:-1]>=0)|(a[2:]<=0)|(a[:-2]<=0)], a[-1:]])
My suggested approach is obviously optimized for the case where you want both the input and output to be a list. However, even in suboptimal (for my approach) input/output configurations, for this input, my approach appears to be faster than the numpy approach.
Input/Output Configuration 1
Input is a list (as in your question).
Output is a numpy array.
In [3]: %%timeit
   ...: a = [1, 3, 6, -2, 4, 5, 8, -3, 9, 2, -5, -7, -9, 3, 6, -7, -6, 2]
   ...: a[1:-1] = [a[i] for i in range(1, len(a) - 1) if not (a[i] < 0 and a[i-1] > 0 and a[i+1] > 0)]
   ...: a = np.array(a)
   ...:
3.08 µs ± 17.2 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [5]: %%timeit
   ...: a = np.array([1, 3, 6, -2, 4, 5, 8, -3, 9, 2, -5, -7, -9, 3, 6, -7, -6, 2])
   ...: a = np.concatenate([a[:1], a[1:-1][(a[1:-1]>=0)|(a[2:]<=0)|(a[:-2]<=0)], a[-1:]])
   ...:
6.66 µs ± 16.1 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
Input/Output Configuration 2
Input and output is a numpy array (as assumed in other answers).
The input
a = np.array([1, 3, 6, -2, 4, 5, 8, -3, 9, 2, -5, -7, -9, 3, 6, -7, -6, 2])
Timings:
In [3]: %%timeit
   ...: b = a.tolist()
   ...: b[1:-1] = [b[i] for i in range(1, len(b) - 1) if not (b[i] < 0 and b[i-1] > 0 and b[i+1] > 0)]
   ...: b = np.array(b)
   ...:
3.1 µs ± 10.7 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [5]: %%timeit
   ...: b = np.concatenate([a[:1], a[1:-1][(a[1:-1]>=0)|(a[2:]<=0)|(a[:-2]<=0)], a[-1:]])
   ...:
4.8 µs ± 13.9 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
Remarks
The above holds for this specific input. A larger input size may have different results (particularly due to the conversion between types). I would be happy to provide timings that vary the input size (presented graphically). However, it would be useful to know whether you want the input or output to be a list or a numpy array.


Group a numpy array

I have an one-dimensional array A, such that 0 <= A[i] <= 11, and I want to map A to an array B such that
for i in range(len(A)):
    if 0 <= A[i] <= 2: B[i] = 0
    elif 3 <= A[i] <= 5: B[i] = 1
    elif 6 <= A[i] <= 8: B[i] = 2
    elif 9 <= A[i] <= 11: B[i] = 3
How can implement this efficiently in numpy?
You can use integer division by 3 (the // operator), and that is the most performant solution:
A = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
B = A // 3
print(A) # [0 1 2 3 4 5 6 7 8 9 10 11]
print(B) # [0 0 0 1 1 1 2 2 2 3 3 3]
I would do something like dividing the values A[i] by 3, since you're grouping them 3 by 3: 0-2 divided by 3 gives answer 0, 3-5 gives answer 1, 6-8 divided by 3 equals 2, and so on.
I built a little schema here:
A[i] --> 0-2: divided by 3 = 0, and what you want in array B[i] is 0, so it's OK.
A[i] --> 3-5: divided by 3 = 1, and so on. Just use a method that floors the value, so that it doesn't become a float type.
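A minimal sketch of that idea, equivalent to the integer division shown above (np.floor_divide keeps an integer dtype for integer input):
import numpy as np

A = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
B = np.floor_divide(A, 3)  # same as A // 3
print(B)  # [0 0 0 1 1 1 2 2 2 3 3 3]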
Answers provided by others are valid; however, I find this numpy function quite elegant, and it also lets you avoid a for loop, which could be quite inefficient for large arrays:
import numpy as np
bins = [3, 6, 9]  # right-open bin edges: [0,3) -> 0, [3,6) -> 1, [6,9) -> 2, [9,12) -> 3
B = np.digitize(A, bins)
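A quick end-to-end check of the digitize approach (a sketch using the mapping from the question):
import numpy as np

A = np.array([0, 2, 3, 5, 6, 8, 9, 11])
bins = [3, 6, 9]
B = np.digitize(A, bins)
print(B)  # [0 0 1 1 2 2 3 3]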
Something like this might work:
C = np.zeros(12, dtype=int)
C[3:6] = 1
C[6:9] = 2
C[9:12] = 3
B = C[A]
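For completeness, a small self-contained sketch of this lookup-table approach (the sample A is just for illustration):
import numpy as np

A = np.array([0, 1, 5, 7, 8, 9, 10, 10, 11])
C = np.zeros(12, dtype=int)
C[3:6] = 1
C[6:9] = 2
C[9:12] = 3
B = C[A]   # fancy indexing maps every value of A through the lookup table
print(B)   # [0 0 1 2 2 3 3 3 3]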
If you hope to expand this to a more complex example you can define a function with all your conditions:
def f(a):
    if 0 <= a and a <= 2:
        return 0
    elif 3 <= a and a <= 5:
        return 1
    elif 6 <= a and a <= 8:
        return 2
    elif 9 <= a and a <= 11:
        return 3
And call it on your array A:
A = np.array([0,1,5,7,8,9,10,10, 11])
B = np.array(list(map(f, A))) # array([0, 0, 1, 2, 2, 3, 3, 3, 3])

Is there multi arange in NumPy?

Numpy's arange accepts only single scalar values for start/stop/step. Is there a multi version of this function which can accept array inputs for start/stop/step? E.g. having a 2D input array like:
[[1 5 1], # start/stop/step first
[3 8 2]] # start/stop/step second
it should create an array consisting of the concatenation of aranges for every row of the input (each start/stop/step); the input above should create the 1D array
1 2 3 4 3 5 7
i.e. we need to design a function such that it does the following:
print(np.multi_arange(np.array([[1,5,1],[3,8,2]])))
# prints:
# array([1, 2, 3, 4, 3, 5, 7])
And this function should be efficient (pure numpy), i.e. it should process an input array of shape (10000, 3) very fast, without pure-Python looping.
Of course it is possible to write a pure-Python loop (or list comprehension) that creates an arange for each row and concatenates the results. But I have very many rows with start/stop/step triples and need efficient, fast code, hence I am looking for a pure numpy function.
Why do I need it? For several tasks. One of them is indexing: suppose I have a 1D array a and I need to extract many (possibly intersecting) subranges of it. If I had that multi version of arange I would just do:
values = a[np.multi_arange(starts_stops_steps)]
Maybe it is possible to create a multi-arange function using some combination of numpy functions? Can you suggest one?
Also, maybe there are some more efficient solutions for the specific case of extracting subranges of a 1D array (see the last line of code above) without creating all the indexes using multi_arange?
Here's a vectorized one with cumsum that accounts for positive and negative stepsizes -
def multi_arange(a):
    steps = a[:,2]
    # length of each arange
    lens = ((a[:,1]-a[:,0]) + steps-np.sign(steps))//steps
    # the per-element increments, repeated lens[i] times for each range
    b = np.repeat(steps, lens)
    # last value produced by each range
    ends = (lens-1)*steps + a[:,0]
    # seed with the first start, then at each range boundary jump from the
    # previous range's end to the next range's start; cumsum rebuilds the values
    b[0] = a[0,0]
    b[lens[:-1].cumsum()] = a[1:,0] - ends[:-1]
    return b.cumsum()
If you need to validate for valid ranges (start < stop when step > 0, and start > stop when step < 0), use a pre-processing step:
a = a[((a[:,1] > a[:,0]) & (a[:,2]>0) | (a[:,1] < a[:,0]) & (a[:,2]<0))]
Sample run -
In [17]: a
Out[17]:
array([[ 1, 5, 1],
[ 3, 8, 2],
[18, 6, -2]])
In [18]: multi_arange(a)
Out[18]: array([ 1, 2, 3, 4, 3, 5, 7, 18, 16, 14, 12, 10, 8])
In [1]: np.r_[1:5:1, 3:8:2]
Out[1]: array([1, 2, 3, 4, 3, 5, 7])
In [2]: np.hstack((np.arange(1,5,1),np.arange(3,8,2)))
Out[2]: array([1, 2, 3, 4, 3, 5, 7])
The r_ version is nice and compact, but not faster:
In [3]: timeit np.r_[1:5:1, 3:8:2]
23.9 µs ± 34.6 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [4]: timeit np.hstack((np.arange(1,5,1),np.arange(3,8,2)))
11.2 µs ± 19.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
I've just come up with my solution using numba. Still, I would prefer a numpy-only solution, if we can find the best one, so as not to carry the heavy numba JIT compiler.
I've also tested @Divakar's solution in my code.
The output of the code below is:
naive_multi_arange 0.76601 sec
arty_multi_arange 0.01801 sec 42.52 speedup
divakar_multi_arange 0.05504 sec 13.92 speedup
Meaning my numba solution has a 42x speedup and @Divakar's numpy solution has a 14x speedup.
The code below can also be run online here.
import time, random
import numpy as np, numba
@numba.jit(nopython = True)
def arty_multi_arange(a):
    starts, stops, steps = a[:, 0], a[:, 1], a[:, 2]
    pos = 0
    cnt = np.sum((stops - starts + steps - np.sign(steps)) // steps, dtype = np.int64)
    res = np.zeros((cnt,), dtype = np.int64)
    for i in range(starts.size):
        v, stop, step = starts[i], stops[i], steps[i]
        if step > 0:
            while v < stop:
                res[pos] = v
                pos += 1
                v += step
        elif step < 0:
            while v > stop:
                res[pos] = v
                pos += 1
                v += step
    assert pos == cnt
    return res

def divakar_multi_arange(a):
    steps = a[:,2]
    lens = ((a[:,1]-a[:,0]) + steps-np.sign(steps))//steps
    b = np.repeat(steps, lens)
    ends = (lens-1)*steps + a[:,0]
    b[0] = a[0,0]
    b[lens[:-1].cumsum()] = a[1:,0] - ends[:-1]
    return b.cumsum()

random.seed(0)
neg_prob = 0.5
N = 100000
minv, maxv, maxstep = -100, 300, 15
steps = [random.randrange(1, maxstep + 1) * ((1, -1)[random.random() < neg_prob]) for i in range(N)]
starts = [random.randrange(minv + 1, maxv) for i in range(N)]
stops = [random.randrange(*(((starts[i] + 1, maxv + 1), (minv, starts[i]))[steps[i] < 0])) for i in range(N)]
joined = np.array([starts, stops, steps], dtype = np.int64).T

tb = time.time()
aref = np.concatenate([np.arange(joined[i, 0], joined[i, 1], joined[i, 2], dtype = np.int64) for i in range(N)])
npt = time.time() - tb
print('naive_multi_arange', round(npt, 5), 'sec')

for func in ['arty_multi_arange', 'divakar_multi_arange']:
    globals()[func](joined)  # warm-up call so JIT compilation is not timed
    tb = time.time()
    a = globals()[func](joined)
    myt = time.time() - tb
    print(func, round(myt, 5), 'sec', round(npt / myt, 2), 'speedup')
    assert a.size == aref.size, (a.size, aref.size)
    assert np.all(a == aref), np.vstack((np.flatnonzero(a != aref)[:5], a[a != aref][:5], aref[a != aref][:5])).T

Max value per diagonal in 2d array

I have an array and need the max of the rolling difference with a dynamic window.
a = np.array([8, 18, 5,15,12])
print (a)
[ 8 18 5 15 12]
So first I create the differences of the array with itself:
b = a - a[:, None]
print (b)
[[ 0 10 -3 7 4]
[-10 0 -13 -3 -6]
[ 3 13 0 10 7]
[ -7 3 -10 0 -3]
[ -4 6 -7 3 0]]
Then I replace the upper triangle with 0:
c = np.tril(b)
print (c)
[[ 0 0 0 0 0]
[-10 0 0 0 0]
[ 3 13 0 0 0]
[ -7 3 -10 0 0]
[ -4 6 -7 3 0]]
Last, I need the max values per diagonal, which means:
max([0,0,0,0,0]) = 0
max([-10,13,-10,3]) = 13
max([3,3,-7]) = 3
max([-7,6]) = 6
max([-4]) = -4
So expected output is:
[0, 13, 3, 6, -4]
What is a nice vectorized solution? Or is there some other way to get the expected output?
Use ndarray.diagonal
v = [max(c.diagonal(-i)) for i in range(b.shape[0])]
print(v) # [0, 13, 3, 6, -4]
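Put together with the setup from the question, a minimal runnable version might look like this:
import numpy as np

a = np.array([8, 18, 5, 15, 12])
b = a - a[:, None]
c = np.tril(b)
v = [max(c.diagonal(-i)) for i in range(b.shape[0])]
print(v)  # [0, 13, 3, 6, -4]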
Not sure exactly how efficient this is considering the advanced indexing involved, but this is one way to do that:
import numpy as np
a = np.array([8, 18, 5, 15, 12])
b = a[:, None] - a
# Fill lower triangle with largest negative
b[np.tril_indices(len(a))] = np.iinfo(b.dtype).min # np.finfo for float
# Put diagonals as rows
s = b.strides[1]
diags = np.ndarray((len(a) - 1, len(a) - 1), b.dtype, b, offset=s, strides=(s, (len(a) + 1) * s))
# Get maximum from each row and add initial zero
c = np.r_[0, diags.max(1)]
print(c)
# [ 0 13 3 6 -4]
EDIT:
Another alternative, which may not be what you were looking for though, is just using Numba, for example like this:
import numpy as np
import numba as nb

def max_window_diffs_jdehesa(a):
    a = np.asarray(a)
    dtinf = np.iinfo(a.dtype) if np.issubdtype(a.dtype, np.integer) else np.finfo(a.dtype)
    out = np.full_like(a, dtinf.min)
    _pwise_diffs(a, out)
    return out

@nb.njit(parallel=True)
def _pwise_diffs(a, out):
    out[0] = 0
    for w in nb.prange(1, len(a)):
        for i in range(len(a) - w):
            out[w] = max(a[i] - a[i + w], out[w])

a = np.array([8, 18, 5, 15, 12])
print(max_window_diffs_jdehesa(a))
# [ 0 13  3  6 -4]
Comparing these methods to the original:
import numpy as np
import numba as nb
def max_window_diffs_orig(a):
    a = np.asarray(a)
    b = a - a[:, None]
    out = np.zeros(len(a), b.dtype)
    out[-1] = b[-1, 0]
    for i in range(1, len(a) - 1):
        out[i] = np.diag(b, -i).max()
    return out

def max_window_diffs_jdehesa_np(a):
    a = np.asarray(a)
    b = a[:, None] - a
    dtinf = np.iinfo(b.dtype) if np.issubdtype(b.dtype, np.integer) else np.finfo(b.dtype)
    b[np.tril_indices(len(a))] = dtinf.min
    s = b.strides[1]
    diags = np.ndarray((len(a) - 1, len(a) - 1), b.dtype, b, offset=s, strides=(s, (len(a) + 1) * s))
    return np.concatenate([[0], diags.max(1)])

def max_window_diffs_jdehesa_nb(a):
    a = np.asarray(a)
    dtinf = np.iinfo(a.dtype) if np.issubdtype(a.dtype, np.integer) else np.finfo(a.dtype)
    out = np.full_like(a, dtinf.min)
    _pwise_diffs(a, out)
    return out

@nb.njit(parallel=True)
def _pwise_diffs(a, out):
    out[0] = 0
    for w in nb.prange(1, len(a)):
        for i in range(len(a) - w):
            out[w] = max(a[i] - a[i + w], out[w])
np.random.seed(0)
a = np.random.randint(0, 100, size=100)
r = max_window_diffs_orig(a)
print((max_window_diffs_jdehesa_np(a) == r).all())
# True
print((max_window_diffs_jdehesa_nb(a) == r).all())
# True
%timeit max_window_diffs_orig(a)
# 348 µs ± 986 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit max_window_diffs_jdehesa_np(a)
# 91.7 µs ± 1.3 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit max_window_diffs_jdehesa_nb(a)
# 19.7 µs ± 88.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
np.random.seed(0)
a = np.random.randint(0, 100, size=10000)
%timeit max_window_diffs_orig(a)
# 651 ms ± 26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit max_window_diffs_jdehesa_np(a)
# 1.61 s ± 6.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit max_window_diffs_jdehesa_nb(a)
# 22 ms ± 967 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The first one may be a bit better for smaller arrays, but doesn't work well for bigger ones. Numba on the other hand is pretty good in all cases.
You can use numpy.diagonal:
a = np.array([8, 18, 5,15,12])
b = a - a[:, None]
c = np.tril(b)
for i in range(b.shape[0]):
    print(max(c.diagonal(-i)))
Output:
0
13
3
6
-4
Here's a vectorized solution with strides -
from skimage.util import view_as_windows
n = len(a)
z = np.zeros(n-1,dtype=a.dtype)
p = np.concatenate((a,z))
s = view_as_windows(p,n)
mask = np.tri(n,k=-1,dtype=bool)[:,::-1]
v = s[0]-s
out = np.where(mask,v.min()-1,v).max(1)
With one loop, for memory efficiency -
n = len(a)
out = [max(a[:-i+n]-a[i:]) for i in range(n)]
Use np.max in place of max for better use of array-memory.
You can abuse the fact that reshaping non-square arrays of shape (N+1, N) to (N, N+1) will make diagonals appear as columns
from scipy.linalg import toeplitz
a = toeplitz([1,2,3,4], [1,4,3])
# array([[1, 4, 3],
# [2, 1, 4],
# [3, 2, 1],
# [4, 3, 2]])
a.reshape(3, 4)
# array([[1, 4, 3, 2],
# [1, 4, 3, 2],
# [1, 4, 3, 2]])
Which you can then use like this (note that I've swapped the sign and filled the strict lower triangle with a very small value):
smallv = -10000 # replace this with np.nan if you have floats
a = np.array([8, 18, 5,15,12])
b = a[:, None] - a
b[np.tril_indices(len(b), -1)] = smallv
d = np.vstack((b, np.full(len(b), smallv)))
d.reshape(len(d) - 1, -1).max(0)[:-1]
# array([ 0, 13, 3, 6, -4])

How much data are there in an interval?

I have a list object, and I want to know how many numbers fall in a particular interval. The code is as follows:
a = [1, 7, 4, 7, 4, 8, 5, 2, 17, 8, 3, 12, 9, 6, 28]
interval = 3
a = list(map(lambda x:int(x/interval),a))
for i in range(min(a), max(a) + 1):
    print(i * interval, (i + 1) * interval, ':', a.count(i))
Output
0 3 : 2
3 6 : 4
6 9 : 5
9 12 : 1
12 15 : 1
15 18 : 1
18 21 : 0
21 24 : 0
24 27 : 0
27 30 : 1
Is there a simple way to get this information? The simpler the better.
Now that we're talking about performance, I'd like to offer my numpy solution using bincount:
import numpy as np
interval = 3
a = [1, 7, 4, 7, 4, 8, 5, 2, 17, 8, 3, 12, 9, 6, 28]
l = max(a) // interval + 1
b = np.bincount(a, minlength=l*interval).reshape((l,interval)).sum(axis=1)
(minlength is necessary just to be able to reshape if max(a) isn't a multiple of interval)
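For the sample a above, a sketch of the intermediate steps may make the trick clearer (the name counts is just for illustration):
counts = np.bincount(a, minlength=l * interval)   # one count per integer value 0..29
print(counts.reshape((l, interval)).sum(axis=1))  # [2 4 5 1 1 1 0 0 0 1]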
With the labels taken from Erfan's answer we get:
rnge = range(0, max(a) + interval + 1, interval)
labels = [f'[{i}-{j})' for i, j in zip(rnge[:-1], rnge[1:])]
for label, count in zip(labels, b):
    print(label, count)
[0-3) 2
[3-6) 4
[6-9) 5
[9-12) 1
[12-15) 1
[15-18) 1
[18-21) 0
[21-24) 0
[24-27) 0
[27-30) 1
This is much faster than the pandas solution.
Performance and scaling comparison
In order to assess the scaling capability, I just replaced a = [1, ..., 28] * n and timed the execution (without imports and printing) for n = 1, 10, 100, 1K, 10K and 100K:
(python 3.7.3 on win32 / pandas 0.24.2 / numpy 1.16.2)
Pandas solution with pd.cut and groupby
s = pd.Series(a)
bins = pd.cut(s, range(0, s.max() + interval, interval), right=False)
s.groupby(bins).count()
[0, 3) 2
[3, 6) 4
[6, 9) 5
[9, 12) 1
[12, 15) 1
[15, 18) 1
[18, 21) 0
[21, 24) 0
[24, 27) 0
[27, 30) 1
dtype: int64
To get cleaner bin labels, we can use this method from the linked answer:
s = pd.Series(a)
rnge = range(0, s.max() + interval, interval)
labels = [f'{i}-{j}' for i, j in zip(rnge[:-1], rnge[1:])]
bins = pd.cut(s, range(0, s.max() + interval, interval), right=False, labels=labels)
s.groupby(bins).count()
0-3 2
3-6 4
6-9 5
9-12 1
12-15 1
15-18 1
18-21 0
21-24 0
24-27 0
27-30 1
dtype: int64
You can do it in one line using a dictionary comprehension:
a = [1, 7, 4, 7, 4, 8, 5, 2, 17, 8, 3, 12, 9, 6, 28]
{"[{};{}[".format(x, x+3) : len( [y for y in a if y >= x and y < x+3] )
for x in range(min(a), max(a), 3)}
Output :
{'[1;4[': 3,
'[4;7[': 4,
'[7;10[': 5,
'[10;13[': 1,
'[13;16[': 0,
'[16;19[': 1,
'[19;22[': 0,
'[22;25[': 0,
'[25;28[': 0}
Performance comparison :
Pandas solution with pd.cut and groupby : 8.51 ms ± 32 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Dictionary comprehension : 19.7 µs ± 37.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Using np.bincount : 22.4 µs ± 263 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

get the index of the last negative value in a 2d array per column

I'm trying to get the index of the last negative value of an array per column (in order to slice it after).
A simple working example on a 1D vector is:
import numpy as np
A = np.arange(10) - 5
A[2] = 2
print A # [-5 -4 2 -2 -1 0 1 2 3 4]
idx = np.max(np.where(A <= 0)[0])
print idx # 5
A[:idx] = 0
print A # [0 0 0 0 0 0 1 2 3 4]
Now I want to do the same thing on each column of a 2D array:
A = np.arange(10) - 5
A[2] = 2
A2 = np.tile(A, 3).reshape((3, 10)) - np.array([0, 2, -1]).reshape((3, 1))
print A2
# [[-5 -4 2 -2 -1 0 1 2 3 4]
# [-7 -6 0 -4 -3 -2 -1 0 1 2]
# [-4 -3 3 -1 0 1 2 3 4 5]]
And I would like to obtain:
print A2
# [[0 0 0 0 0 0 1 2 3 4]
# [0 0 0 0 0 0 0 0 1 2]
# [0 0 0 0 0 1 2 3 4 5]]
but I can't manage to figure out how to translate the max/where statement to this 2D array...
You already have good answers, but I wanted to propose a potentially quicker variation using the function np.maximum.accumulate. Since your method for a 1D array uses max/where, you may also find this approach quite intuitive. (Edit: quicker Cython implementation added below).
The overall approach is very similar to the others; the mask is created with:
np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]
This line of code does the following:
(A2 < 0) creates a Boolean array, indicating whether a value is negative or not. The index [:, ::-1] flips this left-to-right.
np.maximum.accumulate is used to return the cumulative maximum along each row (i.e. axis=1). For example [False, True, False] would become [False, True, True].
The final indexing operation [:, ::-1] flips this new Boolean array left-to-right.
Then all that's left to do is to use the Boolean array as a mask to set the True values to zero.
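As a small illustration of those steps on the question's A2 (a sketch):
import numpy as np

A2 = np.array([[-5, -4,  2, -2, -1,  0,  1,  2,  3,  4],
               [-7, -6,  0, -4, -3, -2, -1,  0,  1,  2],
               [-4, -3,  3, -1,  0,  1,  2,  3,  4,  5]])
mask = np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]
A2[mask] = 0
print(A2)
# [[0 0 0 0 0 0 1 2 3 4]
#  [0 0 0 0 0 0 0 0 1 2]
#  [0 0 0 0 0 1 2 3 4 5]]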
Borrowing the timing methodology and two functions from @Divakar's answer, here are the benchmarks for my proposed method:
# method using np.maximum.accumulate
def accumulate_based(A2):
    A2[np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]] = 0
    return A2
# large sample array
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
A2c2 = A2.copy()
The timings are:
In [47]: %timeit broadcasting_based(A2)
10 loops, best of 3: 61.7 ms per loop
In [48]: %timeit cumsum_based(A2c)
10 loops, best of 3: 127 ms per loop
In [49]: %timeit accumulate_based(A2c2) # quickest
10 loops, best of 3: 43.2 ms per loop
So using np.maximum.accumulate can be as much as 30% faster than the next fastest solution for arrays of this size and shape.
As @tom10 points out, each NumPy operation processes arrays in their entirety, which can be inefficient when multiple operations are needed to get a result. An iterative approach which works through the array just once may fare better.
Below is a naive function written in Cython which can be more than twice as fast as a pure NumPy approach.
This function may be able to be sped up further using memory views.
cimport cython
import numpy as np
cimport numpy as np
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def cython_based(np.ndarray[long, ndim=2, mode="c"] array):
    cdef int rows, cols, i, j, seen_neg
    rows = array.shape[0]
    cols = array.shape[1]
    for i in range(rows):
        seen_neg = 0
        for j in range(cols-1, -1, -1):
            if seen_neg or array[i, j] < 0:
                seen_neg = 1
                array[i, j] = 0
    return array
This function works backwards through each row and starts setting values to zero once it has seen a negative value.
Testing it works:
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
np.array_equal(accumulate_based(A2), cython_based(A2c))
# True
Comparing the performance of the function:
In [52]: %timeit accumulate_based(A2)
10 loops, best of 3: 49.8 ms per loop
In [53]: %timeit cython_based(A2c)
100 loops, best of 3: 18.6 ms per loop
Assuming that you are looking to set all elements for each row until the last negative element to be set to zero (as per the expected output listed in the question for a sample case), two approaches could be suggested here.
Approach #1
This one is based on np.cumsum to generate a mask of elements to be set to zeros as listed next -
# Get boolean mask with TRUEs for each row starting at the first element and
# ending at the last negative element
mask = (np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]
# Use mask to set all such TRUEs to zeros as per the expected output in OP
A2[mask] = 0
Sample run -
In [280]: A2 = np.random.randint(-4,10,(6,7)) # Random input 2D array
In [281]: A2
Out[281]:
array([[-2, 9, 8, -3, 2, 0, 5],
[-1, 9, 5, 1, -3, -3, -2],
[ 3, -3, 3, 5, 5, 2, 9],
[ 4, 6, -1, 6, 1, 2, 2],
[ 4, 4, 6, -3, 7, -3, -3],
[ 0, 2, -2, -3, 9, 4, 3]])
In [282]: A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0 # Use mask to set zeros
In [283]: A2
Out[283]:
array([[0, 0, 0, 0, 2, 0, 5],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 5, 5, 2, 9],
[0, 0, 0, 6, 1, 2, 2],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 9, 4, 3]])
Approach #2
This one starts with the idea of finding the last negative element indices from @tom10's answer and develops into a mask finding method using broadcasting to get us the desired output, similar to approach #1.
# Find last negative index for each row
last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
# Find the invalid indices (rows with no negative indices)
invalid_idx = A2[np.arange(A2.shape[0]),last_idx]>=0
# Set the indices for invalid ones to "-1"
last_idx[invalid_idx] = -1
# Boolean mask with each row starting with TRUE as the first element
# and ending at the last negative element
mask = np.arange(A2.shape[1]) < (last_idx[:,None] + 1)
# Set masked elements to zeros, for the desired output
A2[mask] = 0
Runtime tests -
Function definitions:
def broadcasting_based(A2):
    last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
    last_idx[A2[np.arange(A2.shape[0]),last_idx]>=0] = -1
    A2[np.arange(A2.shape[1]) < (last_idx[:,None] + 1)] = 0
    return A2

def cumsum_based(A2):
    A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0
    return A2
Runtimes:
In [379]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [380]: %timeit broadcasting_based(A2)
10 loops, best of 3: 106 ms per loop
In [381]: %timeit cumsum_based(A2c)
1 loops, best of 3: 167 ms per loop
Verify results -
In [384]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [385]: np.array_equal(broadcasting_based(A2),cumsum_based(A2c))
Out[385]: True
Finding the first is usually easier and faster than finding the last, so here I reverse the array and then find the first negative (using the OP's version of A2):
im = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
# [4 6 3] # which are the indices of the last negative in A2
Also, though, note that if you have large arrays with many negative numbers, it might actually be faster to use a non-numpy approach so you can short circuit the search. That is, numpy will do the calculation on the entire array, so if you have 10000 elements in a row but typically will hit a negative number in the first 10 elements (of a reverse search), a pure Python approach might end up being faster.
Overall, iterating the rows might be faster for subsequent operations as well. For example, if your next step is multiplication, it could be faster to just multiply the slices at the ends that are non-zeros, or maybe find that longest non-zero section and just deal with the truncated array.
This basically comes down to number of negatives per row. If you have 1000 negatives per row you'll on average have non-zeros segments that are 1/1000th of your full row length, so you could get a 1000x speed-up by just looking at the ends. The short example given in the question is great for understanding and answering the basic question, but I wouldn't take timing tests too seriously when your end application is a very different use case; especially since your fractional time savings by using iteration improves in proportion to array size (assuming a constant ratio and random distribution of negative numbers).
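As an illustration of that short-circuiting idea, a rough per-row sketch (a hypothetical helper, not benchmarked here) could look like this:
import numpy as np

def zero_upto_last_negative(A2):
    # scan each row from the right; stop as soon as the last negative is found
    for row in A2:
        for j in range(len(row) - 1, -1, -1):
            if row[j] < 0:
                row[:j + 1] = 0  # zero everything up to and including the last negative
                break
    return A2

A2 = np.array([[-5, -4,  2, -2, -1,  0,  1,  2,  3,  4],
               [-7, -6,  0, -4, -3, -2, -1,  0,  1,  2],
               [-4, -3,  3, -1,  0,  1,  2,  3,  4,  5]])
print(zero_upto_last_negative(A2))
# [[0 0 0 0 0 0 1 2 3 4]
#  [0 0 0 0 0 0 0 0 1 2]
#  [0 0 0 0 0 1 2 3 4 5]]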
You can access individual rows:
A2[0] == array([-5, -4, 2, -2, -1, 0, 1, 2, 3, 4])
