How is axis indexed in NumPy's array?

From NumPy's tutorial, an axis can be indexed with integers, like 0 is for columns and 1 is for rows, but I don't grasp why they are indexed this way. And how do I figure out each axis's index when dealing with a multidimensional array?

By definition, the axis number of the dimension is the index of that dimension within the array's shape. It is also the position used to access that dimension during indexing.
For example, if a 2D array a has shape (5,6), then you can access a[0,0] up to a[4,5]. Axis 0 is thus the first dimension (the "rows"), and axis 1 is the second dimension (the "columns"). In higher dimensions, where "row" and "column" stop really making sense, try to think of the axes in terms of the shapes and indices involved.
If you do .sum(axis=n), for example, then dimension n is collapsed and deleted, with each value in the new matrix equal to the sum of the corresponding collapsed values. For example, if b has shape (5,6,7,8), and you do c = b.sum(axis=2), then axis 2 (dimension with size 7) is collapsed, and the result has shape (5,6,8). Furthermore, c[x,y,z] is equal to the sum of all elements b[x,y,:,z].
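For instance, here is a minimal sketch (my addition, not part of the original answer) that checks both the resulting shape and the c[x,y,z] == b[x,y,:,z].sum() relationship:
>>> import numpy as np
>>> b = np.random.rand(5, 6, 7, 8)
>>> c = b.sum(axis=2)
>>> c.shape                                        # axis 2 (size 7) is collapsed
(5, 6, 8)
>>> np.allclose(c[1, 2, 3], b[1, 2, :, 3].sum())
True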

In case anyone needs a visual description of a shape=(3,5) array: axis 0 runs down its 3 rows and axis 1 runs across its 5 columns (the original answer illustrated this with a diagram, not reproduced here).

You can grasp axes this way:
>>> a = np.array([[[1,2,3],[2,2,3]],[[2,4,5],[1,3,6]],[[1,2,4],[2,3,4]],[[1,2,4],[1,2,6]]])
>>> a
array([[[1, 2, 3],
        [2, 2, 3]],

       [[2, 4, 5],
        [1, 3, 6]],

       [[1, 2, 4],
        [2, 3, 4]],

       [[1, 2, 4],
        [1, 2, 6]]])
>>> a.shape
(4, 2, 3)
I created an array of shape (4,2,3) with varied values so that you can tell the structure apart clearly. Each axis corresponds to a different 'layer' of nesting.
That is, axis = 0 indexes the first dimension of shape (4,2,3). It refers to the arrays inside the outermost []. There are 4 of them, so that dimension's size is 4:
array([[1, 2, 3],
       [2, 2, 3]])
array([[2, 4, 5],
       [1, 3, 6]])
array([[1, 2, 4],
       [2, 3, 4]])
array([[1, 2, 4],
       [1, 2, 6]])
axis = 1 indexes the second dimension of shape (4,2,3). There are 2 elements in each array of the axis = 0 layer, e.g. in the array
array([[1, 2, 3],
       [2, 2, 3]])
the two elements are:
array([1, 2, 3])
array([2, 2, 3])
And the third shape value means there are 3 elements in each array of the axis = 2 layer, e.g. there are 3 elements in array([1, 2, 3]). That is explicit.
Also, you can tell the number of axes/dimensions from the number of [ brackets at the beginning (or ] brackets at the end). In this case, the number is 3 ([[[), so you can choose among axis = 0, axis = 1 and axis = 2.
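As a quick check (my addition, not part of the original answer), summing over each axis of this (4,2,3) array collapses exactly that dimension:
>>> a.sum(axis=0).shape   # collapse the 4 outer blocks
(2, 3)
>>> a.sum(axis=1).shape   # collapse the 2 rows inside each block
(4, 3)
>>> a.sum(axis=2).shape   # collapse the 3 elements of each innermost row
(4, 2)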

In general, axis = 0 means working along the first dimension: the first index varies while every combination of values of the 2nd, 3rd, and later dimensions is held fixed, and so on for the other axes.
For example, a 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1).
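A minimal 2D illustration of this (my addition, not part of the original answer):
>>> m = np.arange(6).reshape(2, 3)
>>> m
array([[0, 1, 2],
       [3, 4, 5]])
>>> m.sum(axis=0)          # down the rows: column sums
array([3, 5, 7])
>>> m.sum(axis=1)          # across the columns: row sums
array([ 3, 12])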
For 3D it becomes harder to visualize, so let's spell it out with nested for loops:
>>> x = np.array([[[ 0,  1,  2],
                   [ 3,  4,  5],
                   [ 6,  7,  8]],
                  [[ 9, 10, 11],
                   [12, 13, 14],
                   [15, 16, 17]],
                  [[18, 19, 20],
                   [21, 22, 23],
                   [24, 25, 26]]])
>>> x.shape #(3, 3, 3)
#axis = 0
>>> for j in range(0, x.shape[1]):
...     for k in range(0, x.shape[2]):
...         print("element = ", (j, k), " ", [x[i, j, k] for i in range(0, x.shape[0])])
...
element = (0, 0) [0, 9, 18] #sum is 27
element = (0, 1) [1, 10, 19] #sum is 30
element = (0, 2) [2, 11, 20]
element = (1, 0) [3, 12, 21]
element = (1, 1) [4, 13, 22]
element = (1, 2) [5, 14, 23]
element = (2, 0) [6, 15, 24]
element = (2, 1) [7, 16, 25]
element = (2, 2) [8, 17, 26]
>>> x.sum(axis=0)
array([[27, 30, 33],
       [36, 39, 42],
       [45, 48, 51]])
#axis = 1
>>> for i in range(0, x.shape[0]):
...     for k in range(0, x.shape[2]):
...         print("element = ", (i, k), " ", [x[i, j, k] for j in range(0, x.shape[1])])
...
element = (0, 0) [0, 3, 6] #sum is 9
element = (0, 1) [1, 4, 7]
element = (0, 2) [2, 5, 8]
element = (1, 0) [9, 12, 15]
element = (1, 1) [10, 13, 16]
element = (1, 2) [11, 14, 17]
element = (2, 0) [18, 21, 24]
element = (2, 1) [19, 22, 25]
element = (2, 2) [20, 23, 26]
# for sum, axis is the first positional argument, so we may omit the keyword:
>>> x.sum(0), x.sum(1), x.sum(2)
(array([[27, 30, 33],
        [36, 39, 42],
        [45, 48, 51]]),
 array([[ 9, 12, 15],
        [36, 39, 42],
        [63, 66, 69]]),
 array([[ 3, 12, 21],
        [30, 39, 48],
        [57, 66, 75]]))
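As an aside (my addition, not part of the original answer), axis can also be a tuple, collapsing several dimensions at once:
>>> x.sum(axis=(1, 2))    # sum everything inside each of the 3 outer blocks
array([ 36, 117, 198])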

Related

How to get reverse diagonal from a certain point in a 2d numpy array

Let's say I have an n x m numpy array. For example:
array([[ 1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10],
       [11, 12, 13, 14, 15],
       [16, 17, 18, 19, 20]])
Now I want both diagonals that intersect at a certain point (for example (1,2), which is 8). I already know that I can get the diagonal from top to bottom like so:
row = 1
col = 2
a = np.array(
    [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20]]
)
diagonal_1 = a.diagonal(col - row)
Where the result of col - row gives the offset for the diagonal.
Now I want to also get the reverse diagonal (from bottom to top) intersecting the first diagonal at an arbitrary point (in this case (1,2), but it can be any point). For this example it would be:
[16, 12, 8, 4]
I already tried a bunch with rotating and flipping the matrix. But I can't get a hold on the offset which I should use after rotating or flipping the matrix.
You can use np.eye to create a diagonal line of 1's and use that as a mask:
x, y = np.nonzero(a == 8)
k = y[0] - a.shape[0] + x[0] + 1
nums = a[np.eye(*a.shape, k=k)[::-1].astype(bool)][::-1]
Output:
>>> nums
array([16, 12, 8, 4])
If you need to move the position of the line, increment/decrement the k parameter passed to np.eye.
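A small sketch (my addition, not from the original answer) that wraps this into a helper parameterized by the point, using the same formula for k:
def anti_diagonal(a, row, col):
    # pick k so the flipped eye passes through (row, col)
    k = col - a.shape[0] + row + 1
    return a[np.eye(*a.shape, k=k, dtype=bool)[::-1]][::-1]

anti_diagonal(a, 1, 2)
# array([16, 12,  8,  4])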
A simple solution using numpy.fliplr:
def get_diags(a, row=1, col=2):
    d1 = a.diagonal(col - row)
    h, w = a.shape
    d2 = np.fliplr(a).diagonal(w - col - 1 - row)
    return d1, d2[::-1]
get_diags(a, 1, 1)
# (array([ 1, 7, 13, 19]), array([11, 7, 3]))
get_diags(a, 1, 3)
# (array([ 3, 9, 15]), array([17, 13, 9, 5]))
get_diags(a, 2, 0)
# (array([11, 17]), array([11, 7, 3]))
One liner for the second diagonal:
np.fliplr(a).diagonal(a.shape[1]-col-1-row)[::-1]
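As a quick check (my addition), for the original point (row=1, col=2) this one-liner gives:
>>> np.fliplr(a).diagonal(a.shape[1] - col - 1 - row)[::-1]
array([16, 12,  8,  4])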

What is smart way to get batched gather?

I have two matrices, A and B, with shapes (n, m, k) and (n, m) respectively. n is the batch size, m is the amount of data in a batch, and k is the feature size.
Each element of B is an index less than m (specifically B = torch.randint(high=m, size=(n, m))).
I want to implement [A[i][B[i]] for i in range(n)] in a smarter way.
Is there a better way in pytorch to implement this without doing for loop?
You can use
a[torch.arange(n)[:, None], b]
An example:
>>> n, m, k = 3, 2, 5
>>> a = torch.arange(30).view(n, m, k)
>>> b = torch.randint(high=m, size=(n,m))
# first indexer (of shape (n, 1))
>>> torch.arange(n)[:, None]
tensor([[0],
        [1],
        [2]])
# second indexer
>>> b
tensor([[1, 0],
        [0, 1],
        [1, 1]])
The indexers have shapes (3, 1) and (3, 2) respectively, so they'll be broadcast to (3, 2), effectively giving
tensor([[0, 0],
        [1, 1],
        [2, 2]])
and
tensor([[1, 0],
        [0, 1],
        [1, 1]])
which says: for the first row of the output, take a[0]'s 1st (k,)-shaped row, then its 0th one. Each row of the indexers fills an (m, k) block of the output, and this is repeated for each of the n rows, to get
>>> a[torch.arange(n)[:, None], b]
tensor([[[ 5,  6,  7,  8,  9],
         [ 0,  1,  2,  3,  4]],

        [[10, 11, 12, 13, 14],
         [15, 16, 17, 18, 19]],

        [[25, 26, 27, 28, 29],
         [25, 26, 27, 28, 29]]])
Comparing with the list comprehension:
>>> [a[i][b[i]] for i in range(n)]
[tensor([[5, 6, 7, 8, 9],
         [0, 1, 2, 3, 4]]),
 tensor([[10, 11, 12, 13, 14],
         [15, 16, 17, 18, 19]]),
 tensor([[25, 26, 27, 28, 29],
         [25, 26, 27, 28, 29]])]
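An equivalent formulation (my addition, not from the original answer) uses torch.gather along dim 1, expanding b so the same row index is reused for every feature:
>>> idx = b.unsqueeze(-1).expand(n, m, k)       # shape (n, m, k)
>>> torch.equal(torch.gather(a, 1, idx), a[torch.arange(n)[:, None], b])
True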

Pytorch gather question (3D Computer Vision)

I have N groups of C-dimensional points. In each group there are M points, so there is a tensor of shape (N, M, C). Let's call it features.
I calculated the maximum element and its index along the M dimension, to find the maximum point for each C dimension (a max pooling operation), resulting in a max tensor (N, 1, C) and an index tensor (N, 1, C).
I have another tensor of shape (N, M, 3) storing the geometric coordinates of those N*M high-dimensional points. Now, I want to use the index of the maximum point in each C dimension to get the coordinates of all those maximum points.
For example, N=2, M=4, C=6.
The coordinate tensor, whose shape is (2, 4, 3):
[[[ 1,  2,  3],
  [ 4,  5,  6],
  [ 7,  8,  9],
  [ 8,  7,  6]],
 [[11, 12, 13],
  [14, 15, 16],
  [17, 18, 19],
  [18, 17, 16]]]
The indices tensor, whose shape is (2, 1, 6):
[[[0, 1, 2, 1, 2, 3]],
 [[1, 2, 3, 2, 1, 0]]]
For example, the first element in indices is 0, so I want to grab [1, 2, 3] from the coordinate tensor. For the second element (1), I want to grab [4, 5, 6]. For the third element of the next group (3), I want to grab [18, 17, 16].
The result tensor will look like:
[[[ 1,  2,  3],   # 0
  [ 4,  5,  6],   # 1
  [ 7,  8,  9],   # 2
  [ 4,  5,  6],   # 1
  [ 7,  8,  9],   # 2
  [ 8,  7,  6]],  # 3
 [[14, 15, 16],   # 1
  [17, 18, 19],   # 2
  [18, 17, 16],   # 3
  [17, 18, 19],   # 2
  [14, 15, 16],   # 1
  [11, 12, 13]]]  # 0
whose shape is (2, 6, 3).
I tried to use torch.gather but I could not get it to work. I wrote a naive algorithm enumerating all N groups, but it is slow, even with TorchScript's JIT. So, how do I write this efficiently in pytorch?
You can use integer array indexing combined with broadcasting semantics to get your result.
import torch
x = torch.tensor([
    [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9],
     [8, 7, 6]],
    [[11, 12, 13],
     [14, 15, 16],
     [17, 18, 19],
     [18, 17, 16]],
])
i = torch.tensor([[[0, 1, 2, 1, 2, 3]],
                  [[1, 2, 3, 2, 1, 0]]])
# rows is shape [2, 1], cols is shape [2, 6]
rows = torch.arange(x.shape[0]).type_as(i).unsqueeze(1)
cols = i.squeeze(1)
# y is [2, 6, ...]
y = x[rows, cols]
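Since the question mentions torch.gather specifically, here is an equivalent sketch (my addition, not part of the original answer) that expands the index over the coordinate dimension and gathers along dim 1:
# index of shape (N, C, 3): repeat each chosen row index for all 3 coordinates
idx = i.squeeze(1).unsqueeze(-1).expand(-1, -1, x.shape[-1])
y_gather = torch.gather(x, 1, idx)
# torch.equal(y_gather, y) -> True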

Tensor Entry Selection Logic Divergence in PyTorch & Numpy

Description
I'm setting up a torch.Tensor for masking purposes. When attempting to select entries by indices, it turns out that the behavior differs depending on whether the index data is held in a numpy.ndarray or a torch.Tensor. I would like to understand the design in both frameworks and find related documentation that explains the difference.
Steps to replicate
Environment
Pytorch 1.3 in container from official release: pytorch/pytorch:1.3-cuda10.1-cudnn7-devel
Example
Say I need to set up mask as a torch.Tensor object with shape [3,3,3] and set the values at entries (0,0,1) and (1,2,0) to 1. The code below illustrates the difference.
mask = torch.zeros([3,3,3])
indices = torch.tensor([[0, 1],
                        [0, 2],
                        [1, 0]])
mask[indices.numpy()] = 1 # Works
# mask[indices] = 1 # Incorrect result
I noticed that mask[indices.numpy()] returns a new torch.Tensor of shape [2], while mask[indices] returns a new torch.Tensor of shape [3, 2, 3, 3], which suggests a difference in the tensor indexing logic.
You get different results because that's how indexing is implemented in Pytorch. If you pass an array as index, then it gets "unpacked". For example:
indices = torch.tensor([[0, 1], [0, 2], [1, 0]])
mask = torch.arange(1, 28).reshape(3, 3, 3)
# tensor([[[ 1,  2,  3],
#          [ 4,  5,  6],
#          [ 7,  8,  9]],
#         [[10, 11, 12],
#          [13, 14, 15],
#          [16, 17, 18]],
#         [[19, 20, 21],
#          [22, 23, 24],
#          [25, 26, 27]]])
The mask[indices.numpy()] is equivalent to mask[[0, 1], [0, 2], [1, 0]], i.e. the elements of the i-th row of indices.numpy() are used to select elements of mask along i-th axis. So it returns tensor([mask[0,0,1], mask[1,2,0]]), i.e. tensor([2, 16]).
On the other hand, when passing a tensor as index (I don't know the exact reason for this differentiation between arrays and tensors for indexing), it is not "unpacked" like an array; instead, the elements of the indices tensor are used to select elements of mask along axis 0 only. That is, mask[indices] is equivalent to mask[[[0, 1], [0, 2], [1, 0]], :, :]
>>> mask[indices]
tensor([[[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],
         [[10, 11, 12],
          [13, 14, 15],
          [16, 17, 18]]],

        [[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],
         [[19, 20, 21],
          [22, 23, 24],
          [25, 26, 27]]],

        [[[10, 11, 12],
          [13, 14, 15],
          [16, 17, 18]],
         [[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]]]])
which is basically torch.stack([mask[[0, 1], :, :], mask[[0, 2], :, :], mask[[1, 0], :, :]]) and has shape indices.shape + mask[0, :, :].shape == (3, 2, 3, 3). So whole "sheets" of mask are selected and stacked into new dimensions. Note also that assigning mask[indices] = 1 writes into the selected locations of mask itself; with this particular indices, every axis-0 index (0, 1 and 2) appears, so all the elements of mask become 1.
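If the goal is the element-wise assignment from the original example while keeping the index as a tensor, one option (my suggestion, not part of the original answer) is to unpack its rows into a tuple, which reproduces the NumPy behavior:
mask = torch.zeros([3, 3, 3])
mask[tuple(indices)] = 1   # same as mask[indices[0], indices[1], indices[2]] = 1
# now mask[0, 0, 1] == 1 and mask[1, 2, 0] == 1, and nothing else is set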

sum groups rows of numpy matrix using list of lists of indices

Slice a numpy array using lists of row indices and apply a function to each group. Is it possible to vectorize this (or is there a non-vectorized way to do it)? Vectorized would be ideal for large matrices.
import numpy as np
index = [[1,3], [2,4,5]]
a = np.array(
    [[ 3,  4,  6,  3],
     [ 0,  1,  2,  3],
     [ 4,  5,  6,  7],
     [ 8,  9, 10, 11],
     [12, 13, 14, 15],
     [ 1,  1,  4,  5]])
Summing by the groups of row indices in index, giving:
np.array([[ 8, 10, 12, 14],
          [17, 19, 24, 27]])
Approach #1 : Here's an almost* vectorized approach -
def sumrowsby_index(a, index):
    index_arr = np.concatenate(index)
    lens = np.array([len(i) for i in index])
    cut_idx = np.concatenate(([0], lens[:-1].cumsum()))
    return np.add.reduceat(a[index_arr], cut_idx)
*Almost because of the step that computes lens with a loop-comprehension, but since we are simply getting the lengths and no computation is involved there, that step won't sway the timings in any big way.
Sample run -
In [716]: a
Out[716]:
array([[ 3,  4,  6,  3],
       [ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [ 1,  1,  4,  5]])

In [717]: index
Out[717]: [[1, 3], [2, 4, 5]]

In [718]: sumrowsby_index(a, index)
Out[718]:
array([[ 8, 10, 12, 14],
       [17, 19, 24, 27]])
Approach #2 : We could leverage fast matrix-multiplication with numpy.dot to perform those sum-reductions, giving us another method as listed below -
def sumrowsby_index_v2(a, index):
    lens = np.array([len(i) for i in index])
    id_ar = np.zeros((len(lens), a.shape[0]))
    c = np.concatenate(index)
    r = np.repeat(np.arange(len(index)), lens)
    id_ar[r, c] = 1
    return id_ar.dot(a)
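A quick usage check of this version on the sample data (my addition, not part of the original answer); note the float result, since id_ar is float64:
>>> sumrowsby_index_v2(a, index)
array([[ 8., 10., 12., 14.],
       [17., 19., 24., 27.]])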
Using a list comprehension...
For each index list in index, build a list of the corresponding rows of a. That gives a list of numpy arrays, which the built-in sum() adds together element-wise, yielding exactly what you want:
np.array([sum([a[r] for r in i]) for i in index])
giving:
array([[ 8, 10, 12, 14],
       [17, 19, 24, 27]])
