How to index a tensor using arrays?

Suppose I have a 3D array (tensor)
print(a.shape)
(100, 100, 100)
and want to index its first axis using one array:
print(i.shape)
(20,)
and its last axis using another array:
print(j.shape)
(30,)
My intent is to get a (20, 100, 30) array, for example, to be used in assignments like
a[?!, :, ?!] = b
However, I can't figure out how.
print(a[i.reshape(20, 1, 1), :, j.reshape(1, 1, 30)].shape)
(20, 1, 30, 100)
print(a[i.reshape(20, 1, 1), :, j].shape)
(20, 1, 30, 100)
print(a[i.reshape(20, 1), :, j].shape)
(20, 30, 100)
print(a[i.reshape(20, 1, 1), j.reshape(1, 1, 30)].shape)
(20, 1, 30, 100)
As I understand the "advanced indexing" rules, the first attempt should have worked, but I didn't even end up with a 3D array, and the full dimension (100) landed at the end instead of in the middle.

Approach #1 : You can use np.ix_ to build such an open mesh of indices; indexing into the input array with it gives the desired output directly. Thus, an implementation would be like so -
a[np.ix_(i,np.arange(a.shape[1]),j)]
Approach #2 : Simpler way using broadcasted indexing. Since the two advanced indices here are separated by a slice, NumPy places their broadcast dimensions at the front of the result (which is why the 100 kept ending up last in your attempts), so a final swapaxes is needed -
a[i[:,None],:,j].swapaxes(1,2)
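A quick shape check with the sizes from the question (a minimal sketch, assuming random index arrays) -
import numpy as np
a = np.random.rand(100, 100, 100)
i = np.random.randint(0, 100, 20)
j = np.random.randint(0, 100, 30)
print(a[np.ix_(i, np.arange(a.shape[1]), j)].shape)  # (20, 100, 30)
print(a[i[:, None], :, j].swapaxes(1, 2).shape)      # (20, 100, 30)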
Verify with a sample case -
In [24]: a = np.random.randint(0,99,(5,3,5))
In [25]: i = np.random.randint(0,5,(2))
In [26]: j = np.random.randint(0,5,(2))
In [27]: a[i[0],:,j[0]]
Out[27]: array([15, 7, 74])
In [28]: a[i[0],:,j[1]]
Out[28]: array([32, 19, 85])
In [29]: a[i[1],:,j[0]]
Out[29]: array([76, 65, 96])
In [30]: a[i[1],:,j[1]]
Out[30]: array([54, 65, 66])
In [31]: a[np.ix_(i,np.arange(a.shape[1]),j)]
Out[31]:
array([[[15, 32],
[ 7, 19],
[74, 85]],
[[76, 54],
[65, 65],
[96, 66]]])
In [50]: a[i[:,None],:,j].swapaxes(1,2)
Out[50]:
array([[[15, 32],
[ 7, 19],
[74, 85]],
[[76, 54],
[65, 65],
[96, 66]]])
Assigning values with the indexing
For approach #1, it's straightforward -
a[np.ix_(i,np.arange(a.shape[1]),j)] = b
For approach #2, if b is a scalar, it's straightforward too -
a[i[:,None],:,j] = b
For approach #2 again, if you are assigning from an ndarray b of shape (20,100,30), we need to swap the last two axes of b before assigning, like so -
a[i[:,None],:,j] = np.swapaxes(b,1,2)
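A round-trip sanity check for the assignment case (a minimal sketch; the index arrays are drawn without replacement so duplicate writes don't muddy the comparison, and b is just a hypothetical array of the right shape) -
import numpy as np
a = np.zeros((100, 100, 100))
i = np.random.choice(100, 20, replace=False)
j = np.random.choice(100, 30, replace=False)
b = np.random.rand(20, 100, 30)
a[np.ix_(i, np.arange(a.shape[1]), j)] = b
assert np.allclose(a[np.ix_(i, np.arange(a.shape[1]), j)], b)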

identifying sub-arrays in numpy

I have two two-dimensional arrays a and b (a has at most as many columns as b). I would like to find an efficient way of matching a row in array a to a contiguous part of a row in array b.
a = np.array([[ 25, 28],
[ 84, 97],
[105, 24],
[ 28, 900]])
b = np.array([[ 25, 28, 84, 97],
[ 22, 25, 28, 900],
[ 11, 12, 105, 24]])
The output should be np.array([[0,0], [0,1], [1,0], [2,2], [3,1]]). Row 0 in array a matches Row 0 in array b (first two positions). Row 1 in array a matches row 0 in array b (third and fourth positions).
We can leverage scikit-image's view_as_windows (built on np.lib.stride_tricks.as_strided) for efficient patch extraction, then compare those patches against each row of a, all of it in a vectorized manner. Then, get the matching indices with np.argwhere -
# a and b from posted question
In [325]: from skimage.util.shape import view_as_windows
In [428]: w = view_as_windows(b,(1,a.shape[1]))
In [429]: np.argwhere((w == a).all(-1).any(-2))[:,::-1]
Out[429]:
array([[0, 0],
[1, 0],
[0, 1],
[3, 1],
[2, 2]])
Alternatively, we could get the indices by the order of rows in a by pushing forward the first axis of a while performing broadcasted comparisons -
In [444]: np.argwhere((w[:,:,0] == a[:,None,None,:]).all(-1).any(-1))
Out[444]:
array([[0, 0],
[0, 1],
[1, 0],
[2, 2],
[3, 1]])
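If you'd rather avoid the scikit-image dependency, newer NumPy (>= 1.20) ships np.lib.stride_tricks.sliding_window_view, which yields the same windows; a minimal sketch with the posted a and b -
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
w = sliding_window_view(b, a.shape[1], axis=1)   # all length-2 windows per row of b
np.argwhere((w == a[:, None, None, :]).all(-1).any(-1))   # rows of [row_in_a, row_in_b]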
Another way I can think of is to loop over each row in a and perform a 2D correlation between b (which you can consider as a 2D signal) and that row.
Wherever a window of b equals the row, the correlation equals the sum of squares of the row's values. If we subtract this sum of squares from the correlation result, matches show up as zeros. Any row of b that gives a 0 result means the subarray was found in that row. If you are using floating-point numbers, you may want to compare against some small threshold that is just above 0 instead.
If you can use SciPy, the scipy.signal.correlate2d method is what I had in mind.
import numpy as np
from scipy.signal import correlate2d

a = np.array([[ 25,  28],
              [ 84,  97],
              [105,  24]])

b = np.array([[ 25,  28,  84,  97],
              [ 22,  25,  28, 900],
              [ 11,  12, 105,  24]])

EPS = 1e-8
result = []
for (i, row) in enumerate(a):
    out = correlate2d(b, row[None,:], mode='valid') - np.square(row).sum()
    locs = np.where(np.abs(out) <= EPS)[0]
    unique_rows = np.unique(locs)
    for res in unique_rows:
        result.append((i, res))
We get:
In [32]: result
Out[32]: [(0, 0), (0, 1), (1, 0), (2, 2)]
The time complexity of this could be better, especially since we're looping over each row of a to find any subarrays in b.

How can I create a numpy array from two different numpy arrays?

I want to create a numpy array from two different numpy arrays. For example:
Say I have 2 arrays a and b.
a = np.array([1,3,4])
b = np.array([[1,5,51,52],[2,6,61,62],[3,7,71,72],[4,8,81,82],[5,9,91,92]])
I want it to loop through each index in array a, find it in the first column of array b, and save that row of b into c. Like below:
c = np.array([[1,5,51,52],
[3,7,71,72],
[4,8,81,82]])
I have tried doing:
c = np.zeros(shape=(len(b), 4))
for i in b:
    c[i] = a[b[i][:]]
but I get this error: "arrays used as indices must be of integer (or boolean) type".
Approach #1
If a is sorted, we can use np.searchsorted, like so -
idx = np.searchsorted(a,b[:,0])
idx[idx==a.size] = 0    # guard out-of-range positions so a[idx] below stays valid
out = b[a[idx] == b[:,0]]
Sample run -
In [160]: a
Out[160]: array([1, 3, 4])
In [161]: b
Out[161]:
array([[ 1, 5, 51, 52],
[ 2, 6, 61, 62],
[ 3, 7, 71, 72],
[ 4, 8, 81, 82],
[ 5, 9, 91, 92]])
In [162]: out
Out[162]:
array([[ 1, 5, 51, 52],
[ 3, 7, 71, 72],
[ 4, 8, 81, 82]])
If a is not sorted, we need to use the sorter argument with searchsorted.
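A sketch of the unsorted case, using the same a and b -
sidx = a.argsort()
idx = np.searchsorted(a, b[:,0], sorter=sidx)
idx[idx == a.size] = 0
out = b[a[sidx[idx]] == b[:,0]]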
Approach #2
We can also use np.in1d -
b[np.in1d(b[:,0],a)]
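On newer NumPy versions, np.isin is the recommended spelling of the same membership test -
b[np.isin(b[:,0], a)]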

Multiply NumPy ndarray with every element in another binary ndarray of different size

I have two ndarrays :
a = [[30,40],
[60,90]]
b = [[0,0,1],
[1,0,1],
[1,1,1]]
please notice that a might be larger, but it is always a square array: (50,50), (100,100) and so on
The wanted result is :
Result = [[a*0, a*0, a*1],
          [a*1, a*0, a*1],
          [a*1, a*1, a*1]]
I managed to get the right answer with this code, but I think there should be a built-in function in numpy that accomplishes this task faster -
totalrows = []
for row in range(b.shape[0]):
    cells = []
    for column in range(b.shape[1]):
        print(row, column)
        cells.append(b[row, column]*a)
    totalrows.append(np.concatenate(cells, axis=1))
return np.concatenate(totalrows, axis=0)
Indeed there's a NumPy built-in np.kron for such block-based elementwise multiplication problems. To solve your case, it could be used like so -
np.kron(b,a)
Sample run -
In [50]: a
Out[50]:
array([[30, 40],
[60, 90]])
In [51]: b
Out[51]:
array([[0, 0, 1],
[1, 0, 1],
[1, 1, 1]])
In [52]: np.kron(b,a)
Out[52]:
array([[ 0, 0, 0, 0, 30, 40],
[ 0, 0, 0, 0, 60, 90],
[30, 40, 0, 0, 30, 40],
[60, 90, 0, 0, 60, 90],
[30, 40, 30, 40, 30, 40],
[60, 90, 60, 90, 60, 90]])
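As a sanity check, a minimal sketch comparing np.kron against the posted loop (rewritten inline for the sample a and b) -
out_loop = np.concatenate(
    [np.concatenate([bv*a for bv in row], axis=1) for row in b], axis=0)
assert (out_loop == np.kron(b, a)).all()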
3D array case
Now, let's say we are working with a as a 3D array of shape (m,n,p) and b of shape (q,r), and you want to perform such block-wise multiplication for each 2D slice along the last axis of a. To achieve that, we extend the dimensions of b by introducing a singleton dimension as its last axis, so the kron expansion happens only along the first two axes. The final output would be of shape (m*q, n*r, p). The implementation would be simply -
np.kron(b[...,None],a)
Shape check -
In [161]: a = np.random.randint(0,99,(4,5,2))
...: b = np.random.randint(0,99,(6,7))
...:
In [162]: np.kron(b[...,None],a).shape
Out[162]: (24, 35, 2)
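And a quick value check (sketch) that each slice along the last axis is the plain 2D kron -
out = np.kron(b[..., None], a)
for k in range(a.shape[-1]):
    assert np.array_equal(out[..., k], np.kron(b, a[..., k]))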

Multiply each row of one array with each element of another array in numpy

I have two arrays A and B in numpy. A holds cartesian coordinates, each row is one point in 3D space and has the shape (r, 3). B has the shape (r, n) and holds integers.
What I would like to do is multiply each element of B with each row in A, so that the resulting array has the shape (r, n, 3). So for example:
# r = 3
A = np.array([1,1,1, 2,2,2, 3,3,3]).reshape(3,3)
# n = 2
B = np.array([10, 20, 30, 40, 50, 60]).reshape(3,2)
# Result with shape (3, 2, 3):
# [[[10,10,10], [20,20,20]],
#  [[60,60,60], [80,80,80]],
#  [[150,150,150], [180,180,180]]]
I'm pretty sure this can be done with np.einsum, but I've been trying this for quite a while now and can't get it to work.
Use broadcasting -
A[:,None,:]*B[:,:,None]
Since np.einsum also supports broadcasting, you can use that as well (thanks to @ajcr for suggesting this concise version) -
np.einsum('ij,ik->ikj',A,B)
Sample run -
In [22]: A
Out[22]:
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
In [23]: B
Out[23]:
array([[10, 20],
[30, 40],
[50, 60]])
In [24]: A[:,None,:]*B[:,:,None]
Out[24]:
array([[[ 10, 10, 10],
[ 20, 20, 20]],
[[ 60, 60, 60],
[ 80, 80, 80]],
[[150, 150, 150],
[180, 180, 180]]])
In [25]: np.einsum('ijk,ij->ijk',A[:,None,:],B)
Out[25]:
array([[[ 10, 10, 10],
[ 20, 20, 20]],
[[ 60, 60, 60],
[ 80, 80, 80]],
[[150, 150, 150],
[180, 180, 180]]])
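Both approaches give identical results; a quick equivalence check (sketch, reusing the sample A and B) -
assert np.array_equal(A[:, None, :]*B[:, :, None],
                      np.einsum('ij,ik->ikj', A, B))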

Numpy: fast calculations considering items' neighbors and their position inside the array

I have 4 2D numpy arrays, called a, b, c, d, each of them made of n rows and m columns. What I need to do is give each element of b and d a value calculated as follows (pseudo-code):
min_coords = min_of_neighbors_coords(x, y)
b[x,y] = a[x,y] * a[min_coords];
d[x,y] = c[min_coords];
Where min_of_neighbors_coords is a function that, given the coordinates of an element of the array, returns the coordinates of the 'neighbor' element that has the lowest value. I.e., considering the array:
1, 2, 5
3, 7, 2
2, 3, 6
min_of_neighbors_coords(1, 1) will refer to the central element with the value of 7, and will return the tuple (0, 0): the coordinates of the number 1.
I managed to do this using for loops (element by element), but the algorithm is VERY slow and I'm searching for a way to improve it, avoiding loops and delegating the calculations to numpy.
Is it possible?
EDIT I have kept my original answer at the bottom. As Paul points out in the comments, the original answer didn't really answer the OP's question, and could be more easily achieved with an ndimage filter. The following much more cumbersome function should do the right thing. It takes two arrays, a and c, and returns the windowed minimum of a and the values in c at the positions of the windowed minimums in a:
import numpy as np
from numpy.lib.stride_tricks import as_strided

def neighbor_min(a, c):
    ac = np.concatenate((a[None], c[None]))
    rows, cols = ac.shape[1:]
    ret = np.empty_like(ac)
    # Fill in the center (fancy indices are passed as tuples: modern NumPy
    # no longer accepts a list of index arrays)
    win_ac = as_strided(ac, shape=(2, rows-2, cols, 3),
                        strides=ac.strides+ac.strides[1:2])
    win_ac = win_ac[tuple(np.ogrid[:2, :rows-2, :cols]) +
                    (np.argmin(win_ac[0], axis=2),)]
    win_ac = as_strided(win_ac, shape=(2, rows-2, cols-2, 3),
                        strides=win_ac.strides+win_ac.strides[2:3])
    ret[:, 1:-1, 1:-1] = win_ac[tuple(np.ogrid[:2, :rows-2, :cols-2]) +
                                (np.argmin(win_ac[0], axis=2),)]
    # Fill the top, bottom, left and right borders
    win_ac = as_strided(ac[:, :2, :], shape=(2, 2, cols-2, 3),
                        strides=ac.strides+ac.strides[2:3])
    win_ac = win_ac[tuple(np.ogrid[:2, :2, :cols-2]) +
                    (np.argmin(win_ac[0], axis=2),)]
    ret[:, 0, 1:-1] = win_ac[:, np.argmin(win_ac[0], axis=0),
                             np.ogrid[:cols-2]]
    win_ac = as_strided(ac[:, -2:, :], shape=(2, 2, cols-2, 3),
                        strides=ac.strides+ac.strides[2:3])
    win_ac = win_ac[tuple(np.ogrid[:2, :2, :cols-2]) +
                    (np.argmin(win_ac[0], axis=2),)]
    ret[:, -1, 1:-1] = win_ac[:, np.argmin(win_ac[0], axis=0),
                              np.ogrid[:cols-2]]
    win_ac = as_strided(ac[:, :, :2], shape=(2, rows-2, 2, 3),
                        strides=ac.strides+ac.strides[1:2])
    win_ac = win_ac[tuple(np.ogrid[:2, :rows-2, :2]) +
                    (np.argmin(win_ac[0], axis=2),)]
    ret[:, 1:-1, 0] = win_ac[:, np.ogrid[:rows-2],
                             np.argmin(win_ac[0], axis=1)]
    win_ac = as_strided(ac[:, :, -2:], shape=(2, rows-2, 2, 3),
                        strides=ac.strides+ac.strides[1:2])
    win_ac = win_ac[tuple(np.ogrid[:2, :rows-2, :2]) +
                    (np.argmin(win_ac[0], axis=2),)]
    ret[:, 1:-1, -1] = win_ac[:, np.ogrid[:rows-2],
                              np.argmin(win_ac[0], axis=1)]
    # Fill the corners
    win_ac = ac[:, :2, :2]
    win_ac = win_ac[:, np.ogrid[:2],
                    np.argmin(win_ac[0], axis=-1)]
    ret[:, 0, 0] = win_ac[:, np.argmin(win_ac[0], axis=-1)]
    win_ac = ac[:, :2, -2:]
    win_ac = win_ac[:, np.ogrid[:2],
                    np.argmin(win_ac[0], axis=-1)]
    ret[:, 0, -1] = win_ac[:, np.argmin(win_ac[0], axis=-1)]
    win_ac = ac[:, -2:, -2:]
    win_ac = win_ac[:, np.ogrid[:2],
                    np.argmin(win_ac[0], axis=-1)]
    ret[:, -1, -1] = win_ac[:, np.argmin(win_ac[0], axis=-1)]
    win_ac = ac[:, -2:, :2]
    win_ac = win_ac[:, np.ogrid[:2],
                    np.argmin(win_ac[0], axis=-1)]
    ret[:, -1, 0] = win_ac[:, np.argmin(win_ac[0], axis=-1)]
    return ret
The return is a (2, rows, cols) array that can be unpacked into the two arrays:
>>> a = np.random.randint(100, size=(5,5))
>>> c = np.random.randint(100, size=(5,5))
>>> a
array([[42, 54, 18, 88, 26],
[80, 65, 83, 31, 4],
[51, 52, 18, 88, 52],
[ 1, 70, 5, 0, 89],
[47, 34, 27, 67, 68]])
>>> c
array([[94, 94, 29, 6, 76],
[81, 47, 67, 21, 26],
[44, 92, 20, 32, 90],
[81, 25, 32, 68, 25],
[49, 43, 71, 79, 77]])
>>> neighbor_min(a, c)
array([[[42, 18, 18, 4, 4],
[42, 18, 18, 4, 4],
[ 1, 1, 0, 0, 0],
[ 1, 1, 0, 0, 0],
[ 1, 1, 0, 0, 0]],
[[94, 29, 29, 26, 26],
[94, 29, 29, 26, 26],
[81, 81, 68, 68, 68],
[81, 81, 68, 68, 68],
[81, 81, 68, 68, 68]]])
The OP's case could then be solved as:
def bd_from_ac(a, c):
    b, d = neighbor_min(a, c)
    return a*b, d
And while there is a serious performance hit, it is pretty fast still:
In [3]: a = np.random.rand(1000, 1000)
In [4]: c = np.random.rand(1000, 1000)
In [5]: %timeit bd_from_ac(a, c)
1 loops, best of 3: 570 ms per loop
You are not really using the coordinates of the minimum neighboring element for anything other than fetching it, so you may as well skip that part and create a min_neighbor function. If you don't want to resort to Cython for fast looping, you are going to have to go with rolling window views, such as outlined in Paul's link. This will typically convert your (m, n) array into a (m-2, n-2, 3, 3) view of the same data, and you would then apply np.min over the last two axes.
Unfortunately you have to apply it one axis at a time, so you will have to create a (m-2, n-2, 3) copy of your data. Fortunately, you can compute the minimum in two steps, first windowing and minimizing along one axis, then along the other, and obtain the same result. So at most you are going to need intermediate storage the size of your input. If needed, you could even reuse the output array as intermediate storage and avoid memory allocations, but that is left as an exercise...
The following function does that. It is kind of lengthy because it has to deal not only with the central area, but also with the special cases of the four edges and four corners. Other than that it is a pretty compact implementation:
def neighbor_min(a):
    rows, cols = a.shape
    ret = np.empty_like(a)
    # Fill in the center: window along rows first, then along columns
    win_a = as_strided(a, shape=(rows-2, cols, 3),
                       strides=a.strides+a.strides[:1])
    win_a = win_a.min(axis=2)
    win_a = as_strided(win_a, shape=(rows-2, cols-2, 3),
                       strides=win_a.strides+win_a.strides[1:])
    ret[1:-1, 1:-1] = win_a.min(axis=2)
    # Fill the top, bottom, left and right borders
    win_a = as_strided(a[:2, :], shape=(2, cols-2, 3),
                       strides=a.strides+a.strides[1:])
    ret[0, 1:-1] = win_a.min(axis=2).min(axis=0)
    win_a = as_strided(a[-2:, :], shape=(2, cols-2, 3),
                       strides=a.strides+a.strides[1:])
    ret[-1, 1:-1] = win_a.min(axis=2).min(axis=0)
    win_a = as_strided(a[:, :2], shape=(rows-2, 2, 3),
                       strides=a.strides+a.strides[:1])
    ret[1:-1, 0] = win_a.min(axis=2).min(axis=1)
    win_a = as_strided(a[:, -2:], shape=(rows-2, 2, 3),
                       strides=a.strides+a.strides[:1])
    ret[1:-1, -1] = win_a.min(axis=2).min(axis=1)
    # Fill the corners
    ret[0, 0] = a[:2, :2].min()
    ret[0, -1] = a[:2, -2:].min()
    ret[-1, -1] = a[-2:, -2:].min()
    ret[-1, 0] = a[-2:, :2].min()
    return ret
You can now do things like:
>>> a = np.random.randint(10, size=(5, 5))
>>> a
array([[0, 3, 1, 8, 9],
[7, 2, 7, 5, 7],
[4, 2, 6, 1, 9],
[2, 8, 1, 2, 3],
[7, 7, 6, 8, 0]])
>>> neighbor_min(a)
array([[0, 0, 1, 1, 5],
[0, 0, 1, 1, 1],
[2, 1, 1, 1, 1],
[2, 1, 1, 0, 0],
[2, 1, 1, 0, 0]])
And your original question can be solved as:
def bd_from_ac(a, c):
    return a*neighbor_min(a), neighbor_min(c)
As a performance benchmark:
In [2]: m, n = 1000, 1000
In [3]: a = np.random.rand(m, n)
In [4]: c = np.random.rand(m, n)
In [5]: %timeit bd_from_ac(a, c)
1 loops, best of 3: 123 ms per loop
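As a side note, on newer NumPy (>= 1.20) the two-pass minimum for the interior region can be written without manual strides via np.lib.stride_tricks.sliding_window_view; a minimal sketch (interior_neighbor_min is a hypothetical helper covering only the center, not the borders) -
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def interior_neighbor_min(a):
    # min over vertical 3-windows, then over horizontal 3-windows
    v = sliding_window_view(a, 3, axis=0).min(axis=-1)      # (rows-2, cols)
    return sliding_window_view(v, 3, axis=1).min(axis=-1)   # (rows-2, cols-2)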
Finding a[min_coords] is a rolling window operation. Several clever solutions are outlined in this post. You'll want to make the creation of the c[min_coords] array a side-effect of whichever solution you choose.
I hope this helps. I can post some sample code later when I have some time.
I have an interest in helping you, and I believe there are possibly better solutions outside the scope of your question, but in order to put my own time into writing code, I must have some feedback from you, because I am not 100% sure I understand what you need.
One thing to consider: if you are a C# developer, maybe a "brute-force" implementation in C# can outperform a clever implementation in Numpy, so you could consider at least testing these rather simple operations implemented in C#. GeoTIFF (which I suppose you are reading) has a relatively friendly specification, and I guess there might be .NET GeoTIFF libraries around.
But supposing you want to give Numpy a try (and I believe you should), let's take a look at what you're trying to achieve:
If you are going to run min_coords(array) on every element of arrays a and c, you might consider "stacking" nine copies of the same array, each copy rolled by some offset, using numpy.dstack() and numpy.roll(). Then, you apply numpy.argmin(stacked_array, axis=2) and you get back an array containing values between 0 and 8, where each value maps to a tuple containing the offset indexes.
Then, using this principle, your min_coords() function would be vectorized, operating on the whole array at once and giving back, for each element, an index into a lookup table of offsets. A rough sketch of this idea follows.
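(A rough sketch of the stack-and-roll idea; min_neighbor_offsets is a hypothetical name, the 3x3 neighborhood here includes the center cell, and since np.roll is periodic the borders see wrap-around neighbors, so real border handling would need padding.)
import numpy as np

def min_neighbor_offsets(arr):
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    # stacked[x, y, k] == arr[x + di_k, y + dj_k] (modulo the array edges)
    stacked = np.dstack([np.roll(np.roll(arr, -di, axis=0), -dj, axis=1)
                         for di, dj in offsets])
    # argmin gives a value 0..8 per cell; look it up in the offsets table
    return np.array(offsets)[np.argmin(stacked, axis=2)]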
If you have interest in elaborating this, please leave a comment.
Hope this helps!
