Find index of max element in numpy array excluding few indexes - python

Say:
p = array([4, 0, 8, 2, 7])
I want to find the index of the max value, excluding a few indexes, say:
excptIndx = [2, 3]
Answer: 4, as 7 will be the max.
If excptIndx = [1, 3], the answer is 2, as 8 will be the max.

In numpy, you can mask all values at excptIndx and run argmax to obtain the index of the max element:
import numpy as np
p = np.array([4, 0, 8, 2, 7])
excptIndx = [2, 3]
m = np.zeros(p.size, dtype=bool)
m[excptIndx] = True
a = np.ma.array(p, mask=m)
print(np.argmax(a))
# 4

The setup:
In [153]: p = np.array([4,0,8,2,7])
In [154]: exceptions = [2,3]
Original indexes in p:
In [155]: idx = np.arange(p.shape[0])
delete exceptions from both:
In [156]: np.delete(p,exceptions)
Out[156]: array([4, 0, 7])
In [157]: np.delete(idx,exceptions)
Out[157]: array([0, 1, 4])
Find the argmax in the deleted array:
In [158]: np.argmax(np.delete(p,exceptions))
Out[158]: 2
Use that to find the max value (could just as well use np.max(_156)):
In [159]: _156[_158]
Out[159]: 7
Use the same index to find the index in the original p
In [160]: _157[_158]
Out[160]: 4
In [161]: p[_160] # another way to get the max value
Out[161]: 7
For this small example, the pure Python alternatives might well be faster. They often are in small cases. We need test cases with 1000 or more values to really see the advantages of numpy.
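For reference, a minimal pure-Python sketch of the same task (using the example values from above):
p = [4, 0, 8, 2, 7]
exceptions = {2, 3}
# index of the max value, skipping the excluded positions
best_idx = max((i for i in range(len(p)) if i not in exceptions), key=lambda i: p[i])
print(best_idx, p[best_idx])   # 4 7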
Another method
Set the exceptions to a small enough value, and take the argmax:
In [162]: p1 = p.copy(); p1[exceptions] = -1000
In [163]: np.argmax(p1)
Out[163]: 4
Here a small-enough value is easy to pick; more generally it may require some thought.
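For example (a sketch, assuming the data allows it), a sentinel guaranteed to be below every element can be derived from the array itself rather than hard-coded:
p1 = p.copy()
p1[exceptions] = p.min() - 1   # strictly smaller than any real value in p
np.argmax(p1)                  # 4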
Or taking advantage of the np.nan... functions:
In [164]: p1 = p.astype(float); p1[exceptions]=np.nan
In [165]: np.nanargmax(p1)
Out[165]: 4

A solution is
mask = np.isin(np.arange(len(p)), excptIndx, invert=True)  # True at the indexes we keep
subset_idx = np.argmax(p[mask])
parent_idx = np.arange(len(p))[mask][subset_idx]
See http://seanlaw.github.io/2015/09/10/numpy-argmin-with-a-condition/
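Put together as a runnable sketch on the example data:
import numpy as np

p = np.array([4, 0, 8, 2, 7])
excptIndx = [2, 3]
mask = np.isin(np.arange(len(p)), excptIndx, invert=True)  # True where the index is allowed
subset_idx = np.argmax(p[mask])                            # argmax within the kept subset
parent_idx = np.arange(len(p))[mask][subset_idx]           # map back to the original index
print(parent_idx)  # 4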

p = np.array([4,0,8,2,7]) # given
exceptions = [2,3] # given
idx = list( range(0,len(p)) ) # simple array of index
a1 = np.delete(idx, exceptions) # remove exceptions from idx (i.e., index)
a2 = np.argmax(np.delete(p, exceptions)) # get index of the max value after removing exceptions from actual p array
a1[a2] # as a1 and a2 are in sync, this will give the original index (as asked) of the max value

Related

iterating a filtered Numpy array whilst maintaining index information

I am attempting to pass filtered values from a Numpy array into a function.
I need to pass values only above a certain value, and their index position with the Numpy array.
I am attempting to avoid iterating over the entire array within Python by using Numpy's own filtering systems; the arrays I am dealing with have 20k values in them, with potentially only very few being relevant.
import numpy as np
somearray = np.array([1,2,3,4,5,6])
arrayindex = np.nonzero(somearray > 4)
for i in arrayindex:
    somefunction(arrayindex[0], somearray[arrayindex[0]])
This threw up errors about the logic not being able to handle multiple values, which led me to test it with print statements to see what was going on.
for cell in arrayindex:
    print(f"index {cell}")
    print(f"data {somearray[cell]}")
I expected an output of
index 4
data 5
index 5
data 6
But instead I get
index [4 5]
data [5 6]
I have looked through different methods to iterate through numpy arrays, such as nditer, but none seem to allow me to do the filtering of values outside of the for loop.
Is there a solution to my quandary?
Oh, I am aware that it is generally frowned upon to loop through a numpy array; however, the function that I am passing these values to is complex, triggering certain events and uploading data to a database depending on the data's location within the array.
Thanks.
import numpy as np
somearray = np.array([1,2,3,4,5,6])
arrayindex = [idx for idx, val in enumerate(somearray) if val > 4]
for i in range(0, len(arrayindex)):
    somefunction(arrayindex[i], somearray[arrayindex[i]])

for i in range(0, len(arrayindex)):
    print("index", arrayindex[i])
    print("data", somearray[arrayindex[i]])
You need to have a clear idea of what nonzero produces, and pay attention to the difference between indexing with a list and with a tuple.
===
In [110]: somearray = np.array([1,2,3,4,5,6])
...: arrayindex = np.nonzero(somearray > 4)
nonzero produces a tuple of arrays, one per dimension (this becomes more obvious with 2d arrays):
In [111]: arrayindex
Out[111]: (array([4, 5]),)
It can be used directly as an index:
In [113]: somearray[arrayindex]
Out[113]: array([5, 6])
In this 1d case you could take the array out of the tuple, and iterate on it:
In [114]: for i in arrayindex[0]:print(i, somearray[i])
4 5
5 6
argwhere does a 'transpose', which could also be used for iteration
In [115]: idxs = np.argwhere(somearray>4)
In [116]: idxs
Out[116]:
array([[4],
[5]])
In [117]: for i in idxs: print(i,somearray[i])
[4] [5]
[5] [6]
idxs is (2,1) shape, so i is a (1,) shape array, resulting in the brackets in the display. Occasionally it's useful, but nonzero is used more (often by its other name, np.where).
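A quick check of that equivalence (a minimal sketch):
np.where(somearray > 4)    # (array([4, 5]),) - same tuple as np.nonzero(somearray > 4)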
2d
argwhere has a 2d example:
In [119]: x=np.arange(6).reshape(2,3)
In [120]: np.argwhere(x>1)
Out[120]:
array([[0, 2],
[1, 0],
[1, 1],
[1, 2]])
In [121]: np.nonzero(x>1)
Out[121]: (array([0, 1, 1, 1]), array([2, 0, 1, 2]))
In [122]: x[np.nonzero(x>1)]
Out[122]: array([2, 3, 4, 5])
While nonzero can be used to index the array, argwhere elements can't.
In [123]: for ij in np.argwhere(x>1):
...: print(ij,x[ij])
...:
...
IndexError: index 2 is out of bounds for axis 0 with size 2
The problem is that ij is a list-like array, which is used to index along just one dimension. numpy distinguishes between lists and tuples when indexing. (Earlier versions fudged the difference, but current versions are taking a more rigorous approach.)
So we need to change the list into a tuple. One way is to unpack it:
In [124]: for i,j in np.argwhere(x>1):
...: print(i,j,x[i,j])
...:
...:
0 2 2
1 0 3
1 1 4
1 2 5
I could have used print(ij, x[tuple(ij)]) in [123].
I should have used unpacking in the [117] iteration:
In [125]: for i, in idxs: print(i,somearray[i])
4 5
5 6
or somearray[tuple(i)]

Selective deletion by value in numpy array

EDITED: Refined problem statement
I am still figuring out the fancy options which are offered by the numpy library. The following topic came across my desk:
Purpose:
In a multi-dimensional array I select one column. This slicing works fine. But after that, values stored in another list need to be filtered out of the column values.
Current status:
array1 = np.asarray([[0,1,2],[1,0,3],[2,3,0]])
print(array1)
array1woZero = np.nonzero(array1)
print(array1woZero)
toBeRemoved = []
toBeRemoved.append(1)
print(toBeRemoved)
column = array1[:,1]
result = np.delete(column,toBeRemoved)
The above-mentioned code does not produce the expected result. In fact, the np.delete() command just removes the value at index 1, but I would need the value 1 to be filtered out instead. What I also do not understand is the shape change when applying nonzero to array1: while array1 is (3,3), array1woZero turns out to be a tuple of 2 arrays with 6 values each.
(array([0, 0, 1, 1, 2, 2]), array([1, 2, 0, 2, 0, 1]))  # two int64 arrays of shape (6,)
My feeling is that I would require something like slicing with an exclusion operator. Do you have any hints for me to solve that? Is it necessary to use different data structures?
In [18]: arr = np.asarray([[0,1,2],[1,0,3],[2,3,0]])
In [19]: arr
Out[19]:
array([[0, 1, 2],
[1, 0, 3],
[2, 3, 0]])
nonzero gives the indices of all non-zero elements of its argument (arr):
In [20]: idx = np.nonzero(arr)
In [21]: idx
Out[21]: (array([0, 0, 1, 1, 2, 2]), array([1, 2, 0, 2, 0, 1]))
This is a tuple of arrays, one per dimension. That output can be confusing, but it is easily used to return all of those non-zero elements:
In [22]: arr[idx]
Out[22]: array([1, 2, 1, 3, 2, 3])
Indexing like this, with a pair of arrays, produces a 1d array. In your example there is just one 0 per row, but in general that's not the case.
This is the same indexing - with 2 lists of the same length:
In [24]: arr[[0,0,1,1,2,2], [1,2,0,2,0,1]]
Out[24]: array([1, 2, 1, 3, 2, 3])
idx[0] just selects one array of that tuple, the row indices. That probably isn't what you want. And I doubt if you want to apply np.delete to that tuple.
It's hard to tell from the description, and code, what you want. Maybe that's because you don't understand what nonzero is producing.
We can also select the nonzero elements with boolean masking:
In [25]: arr>0
Out[25]:
array([[False, True, True],
[ True, False, True],
[ True, True, False]])
In [26]: arr[ arr>0 ]
Out[26]: array([1, 2, 1, 3, 2, 3])
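If the goal is to drop particular values (rather than positions) from the selected column, the same boolean-masking idea applies. A minimal sketch, assuming toBeRemoved = [1] as in the question:
column = arr[:, 1]                      # array([1, 0, 3])
keep = ~np.isin(column, toBeRemoved)    # array([False,  True,  True])
result = column[keep]                   # array([0, 3])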
The hint with the boolean masking was very good and helped me to develop my own solution. The symbolic names in the following code snippets are different, but the idea should become clear anyway.
At the beginning, I have my overall searchSpace.
searchSpace = relativeDistances[currentNode,:]
Assume that its shape is (5,). My filter is defined on the indexes, i.e. the range 0..4. Then I define another numpy array "filter" of the same shape, filled with 1s, and I set the values to be filtered out to 0.
filter = np.full(shape=nodeCount, fill_value=1, dtype=np.int32)
filter[0] = 0
filter[3] = 0
searchSpace = searchSpace * filter
minValue = searchSpace[searchSpace > 0].min()
neighborNode = np.where(searchSpace==minValue)
The filter array provides me the flexibility to adjust the filter later on as part of a loop. Using the element-wise multiplication with 0 and subsequent boolean masking, I can create my reduced searchSpace for minimum search. Compared to a separate array or list, I still have the original shape, which is required to get the correct index in the where-statement.
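As a self-contained sketch of the same idea (the sample distances here are made up; relativeDistances and currentNode from the snippet above are replaced by a small hypothetical searchSpace):
import numpy as np

searchSpace = np.array([7.0, 3.0, 9.0, 2.0, 5.0])    # hypothetical distances
filter = np.full(shape=5, fill_value=1, dtype=np.int32)
filter[0] = 0                                        # exclude index 0
filter[3] = 0                                        # exclude index 3
searchSpace = searchSpace * filter                   # [0. 3. 9. 0. 5.]
minValue = searchSpace[searchSpace > 0].min()        # 3.0
neighborNode = np.where(searchSpace == minValue)     # (array([1]),)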

Python - remove elements from array

I have an array called a and another array b. The array a is the main array where I store float data, and b is an array which contain some indexes belonging to a.
Example:
a = [1.3, 1.7, 18.4, 56.2, 82.2, 18.1, 81.9, 56.9, -274.45]
b = [0, 1, 2, 3, 4, 5, 6, 7]
In this example b contains indexes of a from 0 to 7.
What I'm trying to do in Python is to remove "duplicates"; I mean, to mark all indexes in b whose value in a already has a similar value earlier in a. For example, notice that there is the pair 1.3 and 1.7. Also, there are 18.4 and 18.1, etc. I want to find all these values and write -1 in all places in array b which hold such a value.
Output should be the following:
b = [0, -1, 2, 3, 4, -1, -1, -1]
I think it is obvious what I am trying to achieve. Here index 1 is replaced with -1 because in a it represents 1.7, which has the "pair" 1.3. Also, the last 3 indexes represent 18.1, 81.9 and 56.9, which also have their "pairs" before, so they are replaced with -1.
Of course, I have a parameter x which represents how "similar" values are. So, here x = 2, which means that any 2 values which differ by less than 2 are similar.
What have I tried? I tried to use 2 nested for loops and a lot of unnecessary variables and my algorithm eats memory and performance. Is there an elegant np-ish way to achieve it?
Approach #1 : Here's a vectorized approach using broadcasting; it is a bit memory intensive -
x = 2 # threshold that decides similarity
a_b = a[b]
mask = np.triu(np.abs(a_b[:,None]-a_b)<x,1).any(0)
b[mask[:len(b)]] = -1
Sample run -
In [95]: a = np.array([1.3, 1.7, 18.4, 56.2, 82.2, 18.1, 81.9, 56.9, -274.45])
...: b = np.array([0, 1, 2, 3, 4, 5, 6, 7])
...:
# After code run ...
In [97]: b
Out[97]: array([ 0, -1, 2, 3, 4, -1, -1, -1])
Approach #2 : Less memory intensive approach
import pandas as pd
def set_mask(a, b, thresh):
    a_b = a[b]
    N = len(a_b)
    sidx = a_b.argsort()
    sorted_a_b = a_b[sidx]
    mask0 = sorted_a_b[1:] - sorted_a_b[:-1] < thresh
    id_arr = np.zeros(N, dtype=int)
    id_arr[np.flatnonzero(~mask0) + 1] = 1
    ids = id_arr.cumsum()
    d = np.column_stack((ids, sidx))
    df0 = pd.DataFrame(d, columns=(('ids', 'sidx')))
    pp = df0['sidx'].groupby([ids]).min()
    maskc = np.ones(N, dtype=bool)
    maskc[pp.values] = 0
    return maskc
Use this mask to replace the mask needed at the last step from previous approach.
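Usage would then look something like this (a sketch on the sample arrays from the question):
a = np.array([1.3, 1.7, 18.4, 56.2, 82.2, 18.1, 81.9, 56.9, -274.45])
b = np.array([0, 1, 2, 3, 4, 5, 6, 7])
b[set_mask(a, b, 2)] = -1
# b -> array([ 0, -1,  2,  3,  4, -1, -1, -1])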

Python: turn single array of sorted, repeat values into an array of arrays?

I have a sorted array with some repeated values. How can this array be turned into an array of arrays with the subarrays grouped by value (see below)? In actuality, my_first_array has ~8 million entries, so the solution would preferably be as time efficient as possible.
my_first_array = [1,1,1,3,5,5,9,9,9,9,9,10,23,23]
wanted_array = [ [1,1,1], [3], [5,5], [9,9,9,9,9], [10], [23,23] ]
itertools.groupby makes this trivial:
import itertools
wanted_array = [list(grp) for _, grp in itertools.groupby(my_first_array)]
With no key function, it just yields groups consisting of runs of identical values, so you list-ify each one in a list comprehension; easy-peasy. You can think of it as basically a within-Python API for doing the work of the GNU toolkit program, uniq, and related operations.
In CPython (the reference interpreter), groupby is implemented in C, and it operates lazily and linearly; the data must already appear in runs matching the key function, so sorting might make it too expensive, but for already sorted data like you have, there is nothing that will be more efficient.
Note: If the inputs might be value identical, but different objects, it may make sense for memory reasons to change list(grp) for _, grp to [k] * len(list(grp)) for k, grp. The former would retain the original (possibly value but not identity duplicate) objects in the final result, the latter would replicate the first object from each group instead, reducing the final cost per group to the cost of N references to a single object, instead of N references to between 1 and N objects.
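A sketch of that memory-oriented variant (same my_first_array and itertools import as above):
wanted_array = [[k] * len(list(grp)) for k, grp in itertools.groupby(my_first_array)]
# [[1, 1, 1], [3], [5, 5], [9, 9, 9, 9, 9], [10], [23, 23]]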
I am assuming that the input is a NumPy array and you are looking for a list of arrays as output. Now, you can split the input array with np.split at the indices where those shifts (the boundaries between groups of repeats) occur. To find such indices, there are two ways - using np.unique with its optional argument return_index set as True, and another with a combination of np.where and np.diff. Thus, we would have two approaches as listed next.
With np.unique -
import numpy as np
_,idx = np.unique(my_first_array, return_index=True)
out = np.split(my_first_array, idx)[1:]
With np.where and np.diff -
idx = np.where(np.diff(my_first_array)!=0)[0] + 1
out = np.split(my_first_array, idx)
Sample run -
In [28]: my_first_array
Out[28]: array([ 1, 1, 1, 3, 5, 5, 9, 9, 9, 9, 9, 10, 23, 23])
In [29]: _,idx = np.unique(my_first_array, return_index=True)
...: out = np.split(my_first_array, idx)[1:]
...:
In [30]: out
Out[30]:
[array([1, 1, 1]),
array([3]),
array([5, 5]),
array([9, 9, 9, 9, 9]),
array([10]),
array([23, 23])]
In [31]: idx = np.where(np.diff(my_first_array)!=0)[0] + 1
...: out = np.split(my_first_array, idx)
...:
In [32]: out
Out[32]:
[array([1, 1, 1]),
array([3]),
array([5, 5]),
array([9, 9, 9, 9, 9]),
array([10]),
array([23, 23])]
Here is a solution, although it might not be very efficient:
my_first_array = [1,1,1,3,5,5,9,9,9,9,9,10,23,23]
wanted_array = [ [1,1,1], [3], [5,5], [9,9,9,9,9], [10], [23,23] ]
new_array = [ [my_first_array[0]] ]
count = 0
for i in range(1, len(my_first_array)):
    a = my_first_array[i]
    if a == my_first_array[i - 1]:
        new_array[count].append(a)
    else:
        count += 1
        new_array.append([])
        new_array[count].append(a)
new_array == wanted_array
This is O(n):
a = [1,1,1,3,5,5,9,9,9,9,9,10,23,23,24]
res = []
s = 0
e = 0
length = len(a)
while s < length:
    b = []
    while e < length and a[s] == a[e]:
        b.append(a[s])
        e += 1
    res.append(b)
    s = e
print(res)

get the index of the last negative value in a 2d array per column

I'm trying to get the index of the last negative value of an array per column (in order to slice it after).
a simple working example on a 1d vector is :
import numpy as np
A = np.arange(10) - 5
A[2] = 2
print A # [-5 -4 2 -2 -1 0 1 2 3 4]
idx = np.max(np.where(A <= 0)[0])
print idx # 5
A[:idx] = 0
print A # [0 0 0 0 0 0 1 2 3 4]
Now I wanna do the same thing on each column of a 2D array :
A = np.arange(10) - 5
A[2] = 2
A2 = np.tile(A, 3).reshape((3, 10)) - np.array([0, 2, -1]).reshape((3, 1))
print A2
# [[-5 -4 2 -2 -1 0 1 2 3 4]
# [-7 -6 0 -4 -3 -2 -1 0 1 2]
# [-4 -3 3 -1 0 1 2 3 4 5]]
And I would like to obtain :
print A2
# [[0 0 0 0 0 0 1 2 3 4]
# [0 0 0 0 0 0 0 0 1 2]
# [0 0 0 0 0 1 2 3 4 5]]
but I can't manage to figure out how to translate the max/where statement to this 2d array...
You already have good answers, but I wanted to propose a potentially quicker variation using the function np.maximum.accumulate. Since your method for a 1D array uses max/where, you may also find this approach quite intuitive. (Edit: quicker Cython implementation added below).
The overall approach is very similar to the others; the mask is created with:
np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]
This line of code does the following:
(A2 < 0) creates a Boolean array, indicating whether a value is negative or not. The index [:, ::-1] flips this left-to-right.
np.maximum.accumulate is used to return the cumulative maximum along each row (i.e. axis=1). For example [False, True, False] would become [False, True, True].
The final indexing operation [:, ::-1] flips this new Boolean array left-to-right.
Then all that's left to do is to use the Boolean array as a mask to set the True values to zero.
Borrowing the timing methodology and two functions from @Divakar's answer, here are the benchmarks for my proposed method:
# method using np.maximum.accumulate
def accumulate_based(A2):
    A2[np.maximum.accumulate((A2 < 0)[:, ::-1], axis=1)[:, ::-1]] = 0
    return A2
# large sample array
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
A2c2 = A2.copy()
The timings are:
In [47]: %timeit broadcasting_based(A2)
10 loops, best of 3: 61.7 ms per loop
In [48]: %timeit cumsum_based(A2c)
10 loops, best of 3: 127 ms per loop
In [49]: %timeit accumulate_based(A2c2) # quickest
10 loops, best of 3: 43.2 ms per loop
So using np.maximum.accumulate can be as much as 30% faster than the next fastest solution for arrays of this size and shape.
As @tom10 points out, each NumPy operation processes arrays in their entirety, which can be inefficient when multiple operations are needed to get a result. An iterative approach which works through the array just once may fare better.
Below is a naive function written in Cython which could be more than twice as fast as a pure NumPy approach.
This function may be able to be sped up further using memory views.
cimport cython
import numpy as np
cimport numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
def cython_based(np.ndarray[long, ndim=2, mode="c"] array):
    cdef int rows, cols, i, j, seen_neg
    rows = array.shape[0]
    cols = array.shape[1]
    for i in range(rows):
        seen_neg = 0
        for j in range(cols-1, -1, -1):
            if seen_neg or array[i, j] < 0:
                seen_neg = 1
                array[i, j] = 0
    return array
This function works backwards through each row and starts setting values to zero once it has seen a negative value.
Testing it works:
A2 = np.random.randint(-4, 10, size=(100000, 100))
A2c = A2.copy()
np.array_equal(accumulate_based(A2), cython_based(A2c))
# True
Comparing the performance of the function:
In [52]: %timeit accumulate_based(A2)
10 loops, best of 3: 49.8 ms per loop
In [53]: %timeit cython_based(A2c)
100 loops, best of 3: 18.6 ms per loop
Assuming that you are looking to set all elements in each row up to the last negative element to zero (as per the expected output listed in the question for a sample case), two approaches could be suggested here.
Approach #1
This one is based on np.cumsum to generate a mask of elements to be set to zeros as listed next -
# Get boolean mask with TRUEs for each row starting at the first element and
# ending at the last negative element
mask = (np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]
# Use mask to set all such TRUEs to zeros as per the expected output in OP
A2[mask] = 0
Sample run -
In [280]: A2 = np.random.randint(-4,10,(6,7)) # Random input 2D array
In [281]: A2
Out[281]:
array([[-2, 9, 8, -3, 2, 0, 5],
[-1, 9, 5, 1, -3, -3, -2],
[ 3, -3, 3, 5, 5, 2, 9],
[ 4, 6, -1, 6, 1, 2, 2],
[ 4, 4, 6, -3, 7, -3, -3],
[ 0, 2, -2, -3, 9, 4, 3]])
In [282]: A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0 # Use mask to set zeros
In [283]: A2
Out[283]:
array([[0, 0, 0, 0, 2, 0, 5],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 5, 5, 2, 9],
[0, 0, 0, 6, 1, 2, 2],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 9, 4, 3]])
Approach #2
This one starts with the idea of finding the last negative element indices from @tom10's answer and develops into a mask-finding method using broadcasting to get us the desired output, similar to approach #1.
# Find last negative index for each row
last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
# Find the invalid indices (rows with no negative indices)
invalid_idx = A2[np.arange(A2.shape[0]),last_idx]>=0
# Set the indices for invalid ones to "-1"
last_idx[invalid_idx] = -1
# Boolean mask with each row starting with TRUE as the first element
# and ending at the last negative element
mask = np.arange(A2.shape[1]) < (last_idx[:,None] + 1)
# Set masked elements to zeros, for the desired output
A2[mask] = 0
Runtime tests -
Function definitions:
def broadcasting_based(A2):
    last_idx = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
    last_idx[A2[np.arange(A2.shape[0]),last_idx]>=0] = -1
    A2[np.arange(A2.shape[1]) < (last_idx[:,None] + 1)] = 0
    return A2

def cumsum_based(A2):
    A2[(np.cumsum(A2[:,::-1]<0,1)>0)[:,::-1]] = 0
    return A2
Runtimes:
In [379]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [380]: %timeit broadcasting_based(A2)
10 loops, best of 3: 106 ms per loop
In [381]: %timeit cumsum_based(A2c)
1 loops, best of 3: 167 ms per loop
Verify results -
In [384]: A2 = np.random.randint(-4,10,(100000,100))
...: A2c = A2.copy()
...:
In [385]: np.array_equal(broadcasting_based(A2),cumsum_based(A2c))
Out[385]: True
Finding the first is usually easier and faster than finding the last, so here I reverse the array and then find the first negative (using the OP's version of A2):
im = A2.shape[1] - 1 - np.argmax(A2[:,::-1]<0, axis=1)
# [4 6 3] # which are the indices of the last negative in A2
Also, though, note that if you have large arrays with many negative numbers, it might actually be faster to use a non-numpy approach so you can short circuit the search. That is, numpy will do the calculation on the entire array, so if you have 10000 elements in a row but typically will hit a negative number in the first 10 elements (of a reverse search), a pure Python approach might end up being faster.
Overall, iterating the rows might be faster for subsequent operations as well. For example, if your next step is multiplication, it could be faster to just multiply the slices at the ends that are non-zeros, or maybe find that longest non-zero section and just deal with the truncated array.
This basically comes down to number of negatives per row. If you have 1000 negatives per row you'll on average have non-zeros segments that are 1/1000th of your full row length, so you could get a 1000x speed-up by just looking at the ends. The short example given in the question is great for understanding and answering the basic question, but I wouldn't take timing tests too seriously when your end application is a very different use case; especially since your fractional time savings by using iteration improves in proportion to array size (assuming a constant ratio and random distribution of negative numbers).
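A sketch of that short-circuiting idea (pure Python loops over the rows of A2, stopping at the first negative found from the right):
for row in A2:                            # each row is a view into A2
    for j in range(len(row) - 1, -1, -1):
        if row[j] < 0:                    # last negative in this row
            row[:j + 1] = 0               # zero it and everything before it
            break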
You can access individual rows:
A2[0] == array([-5, -4, 2, -2, -1, 0, 1, 2, 3, 4])
