NumPy Array Indexing - python

Simple question here about indexing an array to get a subset of its values. Say I have a recarray which holds ages in one field and corresponding values in another. I also have an array which is my desired subset of ages. Here is what I mean:
ages = np.arange(100)
values = np.random.uniform(low=0, high=1, size=ages.shape)
data = np.core.rec.fromarrays([ages, values], names='ages,values')
desired_ages = np.array([1, 4, 16, 29, 80])
What I'm trying to do is something like this:
data.values[data.ages==desired_ages]
But, it's not working.

You want to create a subarray containing only the values whose indexes are in desired_ages.
Python doesn't have any syntax that directly corresponds to this, but list comprehensions can do a pretty nice job:
result = [value for index, value in enumerate(data.values) if index in desired_ages]
However, doing it this way makes Python scan through desired_ages for each element in data.values, which is slow. If you insert
desired_ages = set(desired_ages)
on the line before, performance improves. (You can determine whether a value is in a set in constant time, regardless of the set's size.)
Complete Example
import numpy as np
ages = np.arange(100)
values = np.random.uniform(low=0, high=1, size=ages.shape)
data = np.core.rec.fromarrays([ages, values], names='ages,values')
desired_ages = np.array([1, 4, 16, 29, 80])
result = [value for index, value in enumerate(data.values) if index in desired_ages]
print result
Output
[0.45852624094611272, 0.0099713014816563694, 0.26695859251958864, 0.10143425810157047, 0.93647796171383935]
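As a side note (my addition, not part of the original answer), here is a sketch of the complete example with the set conversion applied, which is the optimization suggested above:
import numpy as np

ages = np.arange(100)
values = np.random.uniform(low=0, high=1, size=ages.shape)
data = np.core.rec.fromarrays([ages, values], names='ages,values')

# Convert the desired ages to a set so each membership test is O(1)
# instead of scanning the whole array.
desired_ages = set(np.array([1, 4, 16, 29, 80]))

result = [value for index, value in enumerate(data.values) if index in desired_ages]
print(result)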

I changed your example a little, shuffling the order of ages:
import numpy as np
np.random.seed(0)
ages = np.arange(3,103)
np.random.shuffle(ages)
values = np.random.uniform(low=0, high=1, size=ages.shape)
data = np.core.rec.fromarrays([ages, values], names='ages,values')
desired_ages = np.array([4, 16, 29, 80])
If all the elements of desired_ages are in data.ages, you can sort data by the age field first, and then use searchsorted() to find all the indices quickly:
data.sort(order="ages") # sort by ages
print data.values[np.searchsorted(data.ages, desired_ages)]
or you can use np.in1d to get a boolean array and use it as an index:
print data.values[np.in1d(data.ages, desired_ages)]
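One detail worth adding (my note, not from the original answer): the two lines above can return the values in different orders. searchsorted() gives them in the order of desired_ages, while the in1d boolean mask keeps the order in which the ages appear in data.ages (here sorted ascending). A quick sketch, assuming the data and desired_ages defined above:
picked_by_request = data.values[np.searchsorted(data.ages, desired_ages)]  # order of desired_ages
picked_by_data = data.values[np.in1d(data.ages, desired_ages)]             # order of (sorted) data.ages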

This is a reasonable first approach:
>>> bool_indices = reduce(numpy.logical_or,
...                       (data.ages == x for x in desired_ages))
>>> data.values[bool_indices]
array([ 0.63143784, 0.93852927, 0.0026815 , 0.66263594, 0.2603184 ])
But that uses Python-level functions, so it's probably slower. We can translate it pretty easily into pure numpy, using ix_ to make the arrays broadcast against each other nicely (meshgrid with swapped arguments would work too, but would use more memory):
>>> bools_2d = numpy.equal(*numpy.ix_(desired_ages, data.ages))
>>> bool_indices = numpy.logical_or.reduce(bools_2d)
>>> data.ages[bool_indices]
array([ 1, 4, 16, 29, 80])
>>> data.values[bool_indices]
array([ 0.32324063, 0.65453647, 0.9300062 , 0.34534668, 0.12151951])
See also HYRY's answer for a potentially faster solution (using searchsorted) and a potentially more readable solution (using in1d).
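Side note (my addition, not from any of the answers above): newer NumPy versions also provide np.isin, the recommended successor to np.in1d, which reads a little more naturally here. A minimal sketch, assuming the same data and desired_ages as above:
mask = np.isin(data.ages, desired_ages)  # boolean mask, same semantics as np.in1d in this case
print(data.values[mask])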

Related

Fast algorithm to find indices where multiple arrays have the same value

I'm looking for ways to speed up (or replace) my algorithm for grouping data.
I have a list of numpy arrays. I want to generate a new numpy array, such that each element of this array is the same for each index where the original arrays are the same as well. (And different where this is not the case.)
This sounds kind of awkward, so have an example:
# Test values:
values = [
    np.array([10, 11, 10, 11, 10, 11, 10]),
    np.array([21, 21, 22, 22, 21, 22, 23]),
]
# Expected outcome: np.array([0, 1, 2, 3, 0, 3, 4])
#                             *           *   (marked: indices 0 and 4)
Note that the elements I marked (indices 0 and 4) of the expected outcome have the same value (0) because the original two arrays were also the same there (namely 10 and 21). Similarly for the elements with indices 3 and 5 (both 3).
The algorithm has to deal with an arbitrary number of (equally-sized) input arrays, and also return, for each resulting number, which values of the original arrays it corresponds to. (So for this example, "3" refers to (11, 22).)
Here is my current algorithm:
import numpy as np

def groupify(values):
    group = np.zeros((len(values[0]),), dtype=np.int64) - 1  # Magic number: -1 means ungrouped.
    group_meanings = {}
    next_hash = 0
    matching = np.ones((len(values[0]),), dtype=bool)
    while any(group == -1):
        this_combo = {}
        matching[:] = (group == -1)
        first_ungrouped_idx = np.where(matching)[0][0]
        for curr_id, value_array in enumerate(values):
            needed_value = value_array[first_ungrouped_idx]
            matching[matching] = value_array[matching] == needed_value
            this_combo[curr_id] = needed_value
        # Assign all of the found elements to a new group
        group[matching] = next_hash
        group_meanings[next_hash] = this_combo
        next_hash += 1
    return group, group_meanings
Note that the expression value_array[matching] == needed_value is evaluated many times for each individual index, which is where the slowness comes from.
I'm not sure if my algorithm can be sped up much more, but I'm also not sure if it's the optimal algorithm to begin with. Is there a better way of doing this?
Cracked it finally for a vectorized solution! It was an interesting problem. We have to tag each pair of values taken from the corresponding array elements of the list, and then tag each such pair based on its uniqueness among the other pairs. So, we can use np.unique with all its optional arguments and finally do some additional work to keep the order for the final output. Here's the implementation, basically done in three stages -
# Stack as a 2D array with each pair from values as a column.
# Convert to the linear index equivalent, treating each column as an indexing tuple.
arr = np.vstack(values)
idx = np.ravel_multi_index(arr, arr.max(1) + 1)
# Do the heavy work with np.unique to give us:
# 1. Starting indices of unique elems,
# 2. Array that has unique IDs for each element in idx, and
# 3. Group ID counts
_, unq_start_idx, unqID, count = np.unique(idx, return_index=True,
                                           return_inverse=True, return_counts=True)
# Best part happens here: use mask to ignore the repeated elems and re-tag
# each unqID using argsort() of masked elements from idx
mask = ~np.in1d(unqID, np.where(count > 1)[0])
mask[unq_start_idx] = 1
out = idx[mask].argsort()[unqID]
Runtime test
Let's compare the proposed vectorized approach against the original code. Since the proposed code gives us the group IDs only, for a fair benchmark let's trim off the parts of the original code that are not used to produce them. So, here are the function definitions -
def groupify(values):  # Original code
    group = np.zeros((len(values[0]),), dtype=np.int64) - 1
    next_hash = 0
    matching = np.ones((len(values[0]),), dtype=bool)
    while any(group == -1):
        matching[:] = (group == -1)
        first_ungrouped_idx = np.where(matching)[0][0]
        for curr_id, value_array in enumerate(values):
            needed_value = value_array[first_ungrouped_idx]
            matching[matching] = value_array[matching] == needed_value
        # Assign all of the found elements to a new group
        group[matching] = next_hash
        next_hash += 1
    return group

def groupify_vectorized(values):  # Proposed code
    arr = np.vstack(values)
    idx = np.ravel_multi_index(arr, arr.max(1) + 1)
    _, unq_start_idx, unqID, count = np.unique(idx, return_index=True,
                                               return_inverse=True, return_counts=True)
    mask = ~np.in1d(unqID, np.where(count > 1)[0])
    mask[unq_start_idx] = 1
    return idx[mask].argsort()[unqID]
Runtime results on a list with large arrays -
In [345]: # Input list with random elements
...: values = [item for item in np.random.randint(10,40,(10,10000))]
In [346]: np.allclose(groupify(values),groupify_vectorized(values))
Out[346]: True
In [347]: %timeit groupify(values)
1 loops, best of 3: 4.02 s per loop
In [348]: %timeit groupify_vectorized(values)
100 loops, best of 3: 3.74 ms per loop
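One caveat worth adding (my note, not part of the answer): np.ravel_multi_index expects non-negative integer coordinates, so the vectorized approach as written assumes the input arrays hold non-negative integers. If they may contain negative or non-integer values, each array can first be remapped to integer codes, for example (a sketch, with values being the list of input arrays):
codes = np.vstack([np.unique(v, return_inverse=True)[1] for v in values])  # 0..k-1 codes per array
idx = np.ravel_multi_index(codes, codes.max(1) + 1)                        # then proceed as above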
This should work, and should be considerably faster, since we're using broadcasting and numpy's inherently fast boolean comparisons:
import numpy as np
# Test values:
values = [
    np.array([10, 11, 10, 11, 10, 11, 10]),
    np.array([21, 21, 22, 22, 21, 22, 23]),
]
# Expected outcome: np.array([0, 1, 2, 3, 0, 3, 4])
# for every value in values, check where duplicate values occur
same_mask = [val[:,np.newaxis] == val[np.newaxis,:] for val in values]
# get the conjunction of all those tests
conjunction = np.logical_and.reduce(same_mask)
# ignore the diagonal
conjunction[np.diag_indices_from(conjunction)] = False
# initialize the labelled array with nans (used as flag)
labelled = np.empty(values[0].shape)
labelled.fill(np.nan)
# keep track of labelled value
val = 0
for k, row in enumerate(conjunction):
    if np.isnan(labelled[k]):  # this element has not been labelled yet
        labelled[k] = val      # so label it
        labelled[row] = val    # and label every element satisfying the test
        val += 1
print(labelled)
# outputs [ 0. 1. 2. 3. 0. 3. 4.]
It is about a factor of 1.5x faster than your version when dealing with the two arrays, but I suspect the speedup should be better for more arrays.
The numpy_indexed package (disclaimer: I am its author) contains generalized variants of the numpy array-set operations, which can be used to solve your problem in an elegant and efficient (vectorized) manner:
import numpy_indexed as npi
unique_values, labels = npi.unique(tuple(values), return_inverse=True)
The above will work for arbitrary type combinations, but alternatively, the below will be even more efficient if values is a list of many arrays of the same dtype:
unique_values, labels = npi.unique(np.asarray(values), axis=1, return_inverse=True)
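For reference, a hedged usage sketch (my addition, assuming numpy_indexed is installed and behaves as described above): labels plays the role of the group array from the question and unique_values the group meanings, although the label numbering may differ from the expected output because groups come back in sorted order.
import numpy as np
import numpy_indexed as npi

values = [np.array([10, 11, 10, 11, 10, 11, 10]),
          np.array([21, 21, 22, 22, 21, 22, 23])]
unique_values, labels = npi.unique(tuple(values), return_inverse=True)
# labels gives the same ID to positions where all input arrays agree;
# unique_values holds the value combination behind each ID.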
If I understand correctly, you are trying to hash values according to columns. It's better to convert the columns into arbitrary values by themselves, and then find the hashes from them.
So you actually want to hash on list(np.array(values).T).
This functionality is already built into Pandas. You don't need to write it. The only problem is that it takes a list of values with no further lists inside it. In this case, you can just convert the inner lists to strings with map(str, list(np.array(values).T)) and factorize that!
>>> import pandas as pd
>>> pd.factorize(map(str, list(np.array(values).T)))
(array([0, 1, 2, 3, 0, 3, 4]),
array(['[10 21]', '[11 21]', '[10 22]', '[11 22]', '[10 23]'], dtype=object))
I have converted your list of arrays into an array, and then into strings ...
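A small portability note I'd add: on Python 3, map returns an iterator, so factorize needs a concrete list. A minimal sketch of the same idea (assuming pandas is available):
import numpy as np
import pandas as pd

values = [np.array([10, 11, 10, 11, 10, 11, 10]),
          np.array([21, 21, 22, 22, 21, 22, 23])]
# Stringify each column pair, then factorize; labels come out in order of first appearance.
labels, uniques = pd.factorize([str(row) for row in np.array(values).T])
# labels -> array([0, 1, 2, 3, 0, 3, 4]); uniques holds the stringified pairs.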

How to sort a numpy array into a specific order, specified by a separate list, based on the relative sizes of the values in the array

Within a piece of code I have a numpy array already constructed, and I want to sort this 1st array given a specific order specified in a list. The result will be a 3rd array (new_values).
The 1st array has the values to be sorted.
values = numpy.array([10.0, 30.1, 50, 40, 20])
The list provides the order, given by the indices of the values in new_values, which should be in descending order, where 0 corresponds to the largest number.
order = [0, 3, 1, 4, 2]
So,
new_values[0] > new_values[3] > new_values[1] > new_values[4] > new_values[2]
I tried to find a specific sort function to accomplish this, such as argsort or sorting with a key, but I did not understand how to adapt these to this situation.
Is there a simple, quick method to accomplish this, as I will be doing it for many iterations? I am willing to change the 2nd array, order, to specify the indices in another way if that is advantageous for sorting with a better method.
Currently I am using the following for loop.
size = len(values)
new_values = np.zeros(size)
values = sorted(values, reverse=True)
for val in range(0, size):
    new_values[int(order[val])] = values[val]
Thank you in advance!
You can simply use indexing for that:
>>> import numpy as np
>>> values = np.array([10.0, 30.1, 50, 40, 20])
>>> order=[0, 3, 1, 4, 2]
>>> sorted_array=values[order]
>>> sorted_array
array([ 10. , 40. , 30.1, 20. , 50. ])
Also, as @Divakar mentioned in a comment, if you want the following condition:
new_values[0] > new_values[3] > new_values[1] > new_values[4] > new_values[2]
You can do :
>>> values[order]=np.sort(values)[::-1]
>>> values
array([ 50. , 30.1, 10. , 40. , 20. ])
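To make the intent concrete, here is a quick check (my addition) that the in-place assignment really produces the requested ordering:
new_values = np.array([10.0, 30.1, 50, 40, 20])
order = [0, 3, 1, 4, 2]
new_values[order] = np.sort(new_values)[::-1]
# new_values is now [ 50. , 30.1, 10. , 40. , 20. ], so
# new_values[0] > new_values[3] > new_values[1] > new_values[4] > new_values[2]
print(new_values[order])  # [ 50.   40.   30.1  20.   10. ] -- strictly descending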

How to efficiently index into a 1D numpy array via slice ranges

I have a big 1D array of data. I have a starts array of indexes into that data where important things happened. I want to get an array of ranges so that I get windows of length L, one for each starting point in starts. Bogus sample data:
data = np.linspace(0,10,50)
starts = np.array([0,10,21])
length = 5
I want to instinctively do something like
data[starts:starts+length]
But really, I need to turn starts into 2D array of range "windows." Coming from functional languages, I would think of it as a map from a list to a list of lists, like:
np.apply_along_axis(lambda i: np.arange(i,i+length), 0, starts)
But that won't work because apply_along_axis only allows scalar return values.
You can do this:
pairs = np.vstack([starts, starts + length]).T
ranges = np.apply_along_axis(lambda p: np.arange(*p), 1, pairs)
data[ranges]
Or you can do it with a list comprehension:
data[np.array([np.arange(i,i+length) for i in starts])]
Or you can do it iteratively. (Bleh.)
Is there a concise, idiomatic way to slice into an array at certain start points like this? (Pardon the numpy newbie-ness.)
data = np.linspace(0,10,50)
starts = np.array([0,10,21])
length = 5
For a NumPy-only way of doing this, you can use numpy.meshgrid() as described here
http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html
As hpaulj pointed out in the comments, meshgrid actually isn't needed for this problem as you can use array broadcasting.
http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
# indices = sum(np.meshgrid(np.arange(length), starts))
indices = np.arange(length) + starts[:, np.newaxis]
# array([[ 0, 1, 2, 3, 4],
# [10, 11, 12, 13, 14],
# [21, 22, 23, 24, 25]])
data[indices]
returns
array([[ 0. , 0.20408163, 0.40816327, 0.6122449 , 0.81632653],
[ 2.04081633, 2.24489796, 2.44897959, 2.65306122, 2.85714286],
[ 4.28571429, 4.48979592, 4.69387755, 4.89795918, 5.10204082]])
If you need to do this a lot of times, you can use as_strided() to create a sliding-window array of data
data = np.linspace(0,10,50000)
length = 5
starts = np.random.randint(0, len(data)-length, 10000)
from numpy.lib.stride_tricks import as_strided
sliding_window = as_strided(data, (len(data) - length + 1, length),
                            (data.itemsize, data.itemsize))
Then you can use:
sliding_window[starts]
to get what you want.
It's also faster than creating the index array.
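A related option worth mentioning (my addition, assuming NumPy 1.20 or newer): numpy.lib.stride_tricks.sliding_window_view builds the same windowed view without computing the strides by hand:
from numpy.lib.stride_tricks import sliding_window_view

windows = sliding_window_view(data, length)  # shape (len(data) - length + 1, length)
result = windows[starts]                     # one window per starting index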

Mask One 2D Numpy Array By Argmax Along Axis Of Another Array

I have a 2D numpy array that I need to take the max of along a specific axis. I then need to know which indexes were selected for this operation, to use as a mask for another operation that is performed only at those same indexes but on another array of the same shape.
Right now I'm doing it by using 2D array indexing, but it's slow and kind of convoluted, particularly the mgrid hack to generate the row indexes. It's just [0, 1] for this example, but I need the robustness to work with arbitrary shapes.
a = np.array([[0,0,5],[0,0,5]])
b = np.array([[1,1,1],[1,1,1]])
columnIndexes = np.argmax(a,axis=1)
rowIndexes = np.mgrid[0:a.shape[0],0:columnIndexes.size-1][0].flatten()
b[rowIndexes,columnIndexes] = b[rowIndexes,columnIndexes]+1
b should now be array([[1,1,2],[1,1,2]]) since it performed the operation on b only for the indexes of the max along the columns of a.
Anyone know a better way? Preferably using just boolean masking arrays so that I can port this code to run on a GPU without too much hassle. Thanks!
I will suggest an answer but with slightly different data.
c = np.array([[0,1,1],[2,1,0]]) # note that this data has dupes for max in row 1
d = np.array([[0,10,10],[20,10,0]]) # data to be chaged
c_argmax = np.argmax(c,axis=1)[:,np.newaxis]
b_map1 = c_argmax == np.arange(c.shape[1])
# now use the bool map as you described
d[b_map1] += 1
d
[out]
array([[ 0, 11, 10],
[21, 10, 0]])
Note that I created an original with a duplicate of the largest number. The above works with argmax as you requested, but you might have wanted to increment all max values, as in:
c_max = np.max(c,axis=1)[:,np.newaxis]
b_map2 = c_max == c
d[b_map2] += 1
d
[out]
array([[ 0, 12, 11],
[22, 10, 0]])

Pythonic way to get the first AND the last element of the sequence

What is the easiest and cleanest way to get the first AND the last elements of a sequence? E.g., I have a sequence [1, 2, 3, 4, 5], and I'd like to get [1, 5] via some kind of slicing magic. What I have come up with so far is:
l = len(s)
result = s[0:l:l-1]
I actually need this for a bit more complex task. I have a 3D numpy array, which is cubic (i.e. is of size NxNxN, where N may vary). I'd like an easy and fast way to get a 2x2x2 array containing the values from the vertices of the source array. The example above is an oversimplified, 1D version of my task.
Use this:
result = [s[0], s[-1]]
Since you're using a numpy array, you may want to use fancy indexing:
a = np.arange(27)
indices = [0, -1]
b = a[indices] # array([0, 26])
For the 3d case:
vertices = [(0,0,0),(0,0,-1),(0,-1,0),(0,-1,-1),(-1,-1,-1),(-1,-1,0),(-1,0,0),(-1,0,-1)]
indices = list(zip(*vertices)) #Can store this for later use.
a = np.arange(27).reshape((3,3,3)) #dummy array for testing. Can be any shape size :)
vertex_values = a[indices].reshape((2,2,2))
I first write down all the vertices (although I am willing to bet there is a clever way to do it using itertools which would let you scale this up to N dimensions ...). The order you specify the vertices is the order they will be in the output array. Then I "transpose" the list of vertices (using zip) so that all the x indices are together and all the y indices are together, etc. (that's how numpy likes it). At this point, you can save that index array and use it to index your array whenever you want the corners of your box. You can easily reshape the result into a 2x2x2 array (although the order I have it is probably not the order you want).
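The itertools idea hinted at above can be sketched like this (my addition, generalizing to N dimensions): itertools.product generates the 2**ndim corner index tuples, and transposing them gives the per-axis index arrays numpy expects.
import itertools
import numpy as np

a = np.arange(27).reshape((3, 3, 3))                        # dummy cubic array
corners = list(itertools.product([0, -1], repeat=a.ndim))   # (0,0,0), (0,0,-1), ..., (-1,-1,-1)
indices = tuple(zip(*corners))                              # transpose: one index tuple per axis
vertex_values = a[indices].reshape((2,) * a.ndim)           # 2x2x2 array of corner values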
This would give you a list of the first and last element in your sequence:
result = [s[0], s[-1]]
Alternatively, this would give you a tuple
result = s[0], s[-1]
With the particular case of a (N,N,N) ndarray X that you mention, would the following work for you?
s = slice(0,N,N-1)
X[s,s,s]
Example
>>> N = 3
>>> X = np.arange(N*N*N).reshape(N,N,N)
>>> s = slice(0,N,N-1)
>>> print X[s,s,s]
[[[ 0  2]
  [ 6  8]]

 [[18 20]
  [24 26]]]
>>> from operator import itemgetter
>>> first_and_last = itemgetter(0, -1)
>>> first_and_last([1, 2, 3, 4, 5])
(1, 5)
Why do you want to use a slice? Getting each element with
result = [s[0], s[-1]]
is better and more readable.
If you really need to use the slice, then your solution is the simplest working one that I can think of.
This also works for the 3D case you've mentioned.
