Tensor Entry Selection Logic Divergence in PyTorch & Numpy - python

Description
I'm setting up a torch.Tensor for masking purposes. When selecting entries by indices, I noticed that the behavior differs depending on whether the index data is held in a numpy.ndarray or a torch.Tensor. I would like to understand the indexing design in both frameworks and find the documentation that explains the difference.
Steps to replicate
Environment
PyTorch 1.3 in a container from the official release: pytorch/pytorch:1.3-cuda10.1-cudnn7-devel
Example
Say I need to set up mask as a torch.Tensor object with shape [3,3,3], and set the values at entries (0,0,1) and (1,2,0) to 1. The code below demonstrates the difference.
import torch

mask = torch.zeros([3, 3, 3])
indices = torch.tensor([[0, 1],
                        [0, 2],
                        [1, 0]])
mask[indices.numpy()] = 1  # Works
# mask[indices] = 1        # Incorrect result
I noticed that mask[indices.numpy()] returns a new torch.Tensor of shape [2], while mask[indices] returns a new torch.Tensor of shape [3, 2, 3, 3], which suggests a difference in the tensor indexing logic.

You get different results because that's how indexing is implemented in PyTorch. If you pass an array as an index, it gets "unpacked". For example:
indices = torch.tensor([[0, 1], [0, 2], [1, 0]])
mask = torch.arange(1, 28).reshape(3, 3, 3)
# tensor([[[ 1,  2,  3],
#          [ 4,  5,  6],
#          [ 7,  8,  9]],
#
#         [[10, 11, 12],
#          [13, 14, 15],
#          [16, 17, 18]],
#
#         [[19, 20, 21],
#          [22, 23, 24],
#          [25, 26, 27]]])
mask[indices.numpy()] is equivalent to mask[[0, 1], [0, 2], [1, 0]], i.e. the elements of the i-th row of indices.numpy() are used to select elements of mask along the i-th axis. So it returns tensor([mask[0,0,1], mask[1,2,0]]), i.e. tensor([2, 16]).
On the other hand, when a tensor is passed as the index (I don't know the exact reason for this differentiation between arrays and tensors for indexing), it is not "unpacked" like an array: the whole indices tensor is used to select elements of mask along axis 0. That is, mask[indices] is equivalent to mask[[[0, 1], [0, 2], [1, 0]], :, :]:
>>> mask[indices]
tensor([[[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],

         [[10, 11, 12],
          [13, 14, 15],
          [16, 17, 18]]],


        [[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],

         [[19, 20, 21],
          [22, 23, 24],
          [25, 26, 27]]],


        [[[10, 11, 12],
          [13, 14, 15],
          [16, 17, 18]],

         [[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]]]])
which is basically torch.stack([mask[[0,1], :, :], mask[[0,2], :, :], mask[[1,0], :, :]]) and has shape indices.shape + mask[0,:,:].shape == (3, 2, 3, 3). So whole "sheets" of mask are selected and stacked into new dimensions. Note that although mask[indices] itself is a copy rather than a view, the assignment mask[indices] = 1 still writes into mask. Since this particular indices contains every row index 0, 1 and 2, all the elements of mask become 1, which explains the "incorrect result" in the question.
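If the goal is the NumPy-style behaviour (setting exactly the entries (0,0,1) and (1,2,0)) while keeping the index data in a tensor, one option is to unpack the rows into a tuple of per-axis index tensors. A minimal sketch, assuming the same mask and indices as in the question:

import torch

mask = torch.zeros([3, 3, 3])
indices = torch.tensor([[0, 1],
                        [0, 2],
                        [1, 0]])
# tuple(indices) yields one 1-D tensor per axis, so this is equivalent to
# mask[torch.tensor([0, 1]), torch.tensor([0, 2]), torch.tensor([1, 0])] = 1,
# i.e. it sets exactly the entries (0, 0, 1) and (1, 2, 0).
mask[tuple(indices)] = 1
print(mask.nonzero())  # tensor([[0, 0, 1], [1, 2, 0]])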

Related

Slice of 2d numpy array with another array

I have a quite large 2d array, and I need to get both the index of the maximum value in axis 1, and the maximum value itself. I can retrieve these two values as follows:
import numpy as np
a = np.arange(27).reshape(9, 3)
idx = np.argmax(a, axis=1)
max_val = np.max(a, axis=1)
However, since I have already found the index of the maximum value, it feels like I should be able to construct the array of maximum values using idx without having to look up the value again.
I realise I can use np.choose(idx, a.T) but this involves transposing the matrix, which will be much more expensive than just using max. I can do something like np.array([a[i][idx[i]] for i in range(len(a))]) but this involves creating a list, which again seems more expensive than just calling np.max.
Is there any way to slice a with idx in numpy without restructuring the array?
Your a and argmax:
In [602]: a
Out[602]:
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11],
       [12, 13, 14],
       [15, 16, 17],
       [18, 19, 20],
       [21, 22, 23],
       [24, 25, 26]])
In [603]: idx
Out[603]: array([2, 2, 2, 2, 2, 2, 2, 2, 2], dtype=int64)
A common way of using that index array:
In [606]: a[np.arange(a.shape[0]),idx]
Out[606]: array([ 2, 5, 8, 11, 14, 17, 20, 23, 26])
A newer tool that may be easier to use (if you're not familiar with the first):
In [607]: np.take_along_axis(a,idx[:,None],1)
Out[607]:
array([[ 2],
       [ 5],
       [ 8],
       [11],
       [14],
       [17],
       [20],
       [23],
       [26]])
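If a flat 1-D result matching max_val is wanted, the trailing length-1 axis can simply be squeezed off (a small usage note, not part of the original answer):

np.take_along_axis(a, idx[:, None], 1).squeeze(axis=1)
# array([ 2,  5,  8, 11, 14, 17, 20, 23, 26])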

numpy ndarrays: Is it possible to access a row based on a member element?

Say I have a 2x3 ndarray:
[[0,1,1],
[1,1,1]]
I want to replace any row that has 0 at the first index with [0,0,0]:
[[0,0,0],
[1,1,1]]
Is it possible to do this with np.where?
Here's my attempt:
import numpy as np
arr = np.array([[0,1,1],[1,1,1]])
replacement = np.full(arr.shape,[0,0,0])
new = np.where(arr[:,0]==0,replacement,arr)
I'm met with the following error at the last line:
ValueError: operands could not be broadcast together with shapes (2,) (2,3) (2,3)
The error makes sense, but I don't know how to fix the code to accomplish my goal. Any advice would be greatly appreciated!
Edit:
I was trying to simplify a higher-dimensional case, but it turns out the simplification might not generalize.
If I have this ndarray:
[[[0,1,1],[1,1,1],[1,1,1]],
[[1,1,1],[1,1,1],[1,1,1]],
[[1,1,1],[1,1,1],[1,1,1]]]
how can I replace the first triplet with [0,0,0]?
Simple indexing/broadcasting will do:
a[a[:,0]==0] = [0,0,0]
output:
array([[0, 0, 0],
[1, 1, 1]])
explanation:
# get first column
a[:,0]
# array([0, 1])
# compare to 0 creating a boolean array
a[:,0]==0
# array([ True, False])
# select rows where the boolean is True
a[a[:,0]==0]
# array([[0, 1, 1]])
# replace those rows with new array
a[a[:,0]==0] = [0,0,0]
using np.where
this is less elegant in my opinion:
a[np.where(a[:,0]==0)[0]] = [0,0,0]
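For completeness, the asker's original np.where attempt fails only because the condition is 1-D; keeping it 2-D so it broadcasts against the (2, 3) arrays also works (a sketch using the names from the question):

import numpy as np

arr = np.array([[0, 1, 1], [1, 1, 1]])
replacement = np.full(arr.shape, [0, 0, 0])
# condition shaped (2, 1) broadcasts against the (2, 3) arrays
new = np.where((arr[:, 0] == 0)[:, None], replacement, arr)
# array([[0, 0, 0],
#        [1, 1, 1]])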
Edit: generalization
input:
a = np.arange(3**3).reshape((3,3,3))
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]]])
transformation:
a[a[...,0]==0] = [0,0,0]
array([[[ 0,  0,  0],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]]])
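The boolean mask a[..., 0] == 0 here is 2-D while a is 3-D, so each True entry selects a whole innermost triplet, which the assignment then overwrites. A quick check (my own illustration, using the a defined above before the transformation):

a[..., 0] == 0
# array([[ True, False, False],
#        [False, False, False],
#        [False, False, False]])
a[a[..., 0] == 0].shape
# (1, 3)  -> one selected triplet, replaced by [0, 0, 0]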

How to create a 2D array of ranges using numpy

I have an array of start and stop indices, like this:
[[0, 3], [4, 7], [15, 18]]
and i would like to construct a 2D numpy array where each row is a range from the corresponding pair of start and stop indices, as follows:
[[ 0,  1,  2],
 [ 4,  5,  6],
 [15, 16, 17]]
Currently, I am creating an empty array and filling it in a for loop:
import numpy

ranges = numpy.empty((3, 3))
a = [[0, 3], [4, 7], [15, 18]]
for i, r in enumerate(a):
    ranges[i] = numpy.arange(r[0], r[1])
Is there a more compact and (more importantly) faster way of doing this? Possibly something that doesn't involve using a loop?
One way is to use broadcasting to add the left-hand edges to a base arange:
In [11]: np.arange(3) + np.array([0, 4, 15])[:, None]
Out[11]:
array([[ 0,  1,  2],
       [ 4,  5,  6],
       [15, 16, 17]])
Note: this requires all ranges to be the same length.
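If a is already a NumPy array, the same broadcast can also be written directly from its first column (a small variation, not from the original answer):

a = np.array([[0, 3], [4, 7], [15, 18]])
np.arange(3) + a[:, :1]   # a[:, :1] keeps shape (3, 1) for broadcasting
# array([[ 0,  1,  2],
#        [ 4,  5,  6],
#        [15, 16, 17]])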
If the ranges were to result in different lengths, for a vectorized approach you could use n_ranges from the linked solution:
a = np.array([[0, 3], [4, 7], [15, 18]])
n_ranges(a[:,0], a[:,1], return_flat=False)
# [array([0, 1, 2]), array([4, 5, 6]), array([15, 16, 17])]
Which would also work with the following array:
a = np.array([[0, 3], [4, 9], [15, 18]])
n_ranges(*a.T, return_flat=False)
# [array([0, 1, 2]), array([4, 5, 6, 7, 8]), array([15, 16, 17])]
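The implementation of n_ranges itself is not reproduced here. As a rough idea of how such a helper can be vectorized (my own sketch, not necessarily the linked code), one flat arange can be built and each segment shifted to its start value:

import numpy as np

def n_ranges(starts, stops, return_flat=True):
    # Vectorized "multiple aranges": no Python-level loop over the rows.
    lengths = stops - starts
    boundaries = lengths.cumsum()
    # shift each segment of the flat arange so it begins at the right start
    shifts = starts - np.concatenate(([0], boundaries[:-1]))
    flat = np.arange(lengths.sum()) + np.repeat(shifts, lengths)
    return flat if return_flat else np.split(flat, boundaries[:-1])

a = np.array([[0, 3], [4, 9], [15, 18]])
n_ranges(*a.T, return_flat=False)
# [array([0, 1, 2]), array([4, 5, 6, 7, 8]), array([15, 16, 17])]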

numpy `take` along 2 axes

I have a 3D array a of data and a 2D array b of indices. I need to take a sub-array of a along the 3rd axis, using the indices from b. I can do it with take like this:
import numpy as np

a = np.arange(24).reshape((2, 3, 4))
b = np.array([0, 2, 1, 3]).reshape((2, 2))
np.array([np.take(a_, b_, axis=1) for (a_, b_) in zip(a, b)])
Can I do it without list comprehension, using some fancy indexing? I am worried about efficiency, so if fancy indexing is not more efficient in this case, I would like to know it.
EDIT: The first thing I tried was a[[0,1],:,b], but it doesn't give the sub-array I need.
In [317]: a
Out[317]:
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
In [318]: a = np.arange(24).reshape((2,3,4))
...: b = np.array([0,2,1,3]).reshape((2,2))
...: np.array([np.take(a_,b_,axis=1) for (a_,b_) in zip(a,b)])
...:
Out[318]:
array([[[ 0,  2],
        [ 4,  6],
        [ 8, 10]],

       [[13, 15],
        [17, 19],
        [21, 23]]])
So you want the 0 & 2 columns from the 1st block, and 1 & 3 from the second.
Make a c that matches b in shape, and embodies this observation
In [319]: c=np.array([[0,0],[1,1]])
In [320]: c
Out[320]:
array([[0, 0],
[1, 1]])
In [321]: b
Out[321]:
array([[0, 2],
[1, 3]])
In [322]: a[c,:,b]
Out[322]:
array([[[ 0,  4,  8],
        [ 2,  6, 10]],

       [[13, 17, 21],
        [15, 19, 23]]])
That's the right numbers, but not the right shape.
A column vector can be used instead of c.
In [323]: a[np.arange(2)[:,None],:,b] # or a[[[0],[1]],:,b]
Out[323]:
array([[[ 0,  4,  8],
        [ 2,  6, 10]],

       [[13, 17, 21],
        [15, 19, 23]]])
As for the shape, we can transpose the last two axes
In [324]: a[np.arange(2)[:,None],:,b].transpose(0,2,1)
Out[324]:
array([[[ 0,  2],
        [ 4,  6],
        [ 8, 10]],

       [[13, 15],
        [17, 19],
        [21, 23]]])
This transpose is required because we have a slice between two index arrays, a mix of basic and advanced indexing. It's documented, but nevertheless often puzzling. It puts the sliced dimension (length 3) last, and we have to transpose it back.
Nice little indexing puzzle!
The latest question and explanation of this advanced/basic transpose:
Indexing numpy multidimensional arrays depends on a slicing method
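As a side note (not part of the original answer), the same result can also be obtained with np.take_along_axis (available since NumPy 1.15), which keeps the (2, 3, 2) shape directly and avoids the transpose:

np.take_along_axis(a, b[:, None, :], axis=2)
# array([[[ 0,  2],
#         [ 4,  6],
#         [ 8, 10]],
#
#        [[13, 15],
#         [17, 19],
#         [21, 23]]])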
This is my first try. I will see if I can do better.
# concatenate the per-block column selections with np.r_, then reshape.
np.r_[a[0][:,b[0]],a[1][:,b[1]]].reshape(2,3,2)
Out[300]:
array([[[ 0,  2],
        [ 4,  6],
        [ 8, 10]],

       [[13, 15],
        [17, 19],
        [21, 23]]])
Second try:
# reshape a to 2-D, then select from every row only the columns determined by b (repeated once per row of each block).
a.reshape(6,4)[np.arange(6)[:,None],b.repeat(3,0)].reshape(2,3,2)
Out[429]:
array([[[ 0,  2],
        [ 4,  6],
        [ 8, 10]],

       [[13, 15],
        [17, 19],
        [21, 23]]])

Find the lowest non-masked point with numpy efficiently

The application here is finding the "cloud base", but the principles apply wherever. I have a numpy masked 3-D array (which we will say corresponds to a 3-D grid box with dimensions z, y, x), where I have masked out all points with a value of less than 0.1. What I want to find is, at every x,y point, what is the lowest z point index (not the lowest value in z, the smallest z coordinate) that is not masked out. I can think of a few trivial ways to do it, e.g.:
for x points:
    for y points:
        minz = -1
        for z points:
            if x, y, z is not masked:
                minz = z
                break
However, this seems really inefficient and I'm sure that there is a more efficient or more pythonic way to do this. What am I missing here?
Edit: I do not need to use masked arrays; it just seemed like the easiest way to ask the question. I can instead find the lowest point at or above a certain threshold without using masked arrays.
Edit 2: Idea for what I'm looking for (taking z=0 to be the lowest point):
input:
[[[0, 1],
  [1, 5]],
 [[3, 3],
  [2, 4]],
 [[2, 1],
  [4, 9]]]
threshold: val >=3
output:
[[1,1],
[2,0]]
Assuming A as the input array, you could do:
np.where((A < thresh).all(0),-1,(A >= thresh).argmax(0))
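Reading that expression piece by piece (my annotation, same logic as the one-liner above):

below = (A < thresh).all(0)        # True where the whole z-column stays below the threshold
first = (A >= thresh).argmax(0)    # index of the first z level that reaches the threshold
out = np.where(below, -1, first)   # -1 where no level ever reaches the threshold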
Sample runs
Run #1:
In [87]: A
Out[87]:
array([[[0, 1],
        [1, 5]],

       [[3, 3],
        [2, 4]],

       [[2, 1],
        [4, 9]]])
In [88]: thresh = 3
In [89]: np.where((A < thresh).all(0),-1,(A >= thresh).argmax(0))
Out[89]:
array([[1, 1],
[2, 0]])
Run #2:
In [82]: A
Out[82]:
array([[[17,  1,  2,  3],
        [ 5, 13, 11,  2],
        [ 9, 16, 11, 19],
        [11, 16,  6,  3],
        [15,  9, 14, 14]],

       [[18, 19,  5,  8],
        [13, 13, 17,  2],
        [17, 12, 16,  0],
        [19, 14, 12,  5],
        [ 7,  8,  4,  7]],

       [[10, 12, 11,  2],
        [10, 18,  6, 15],
        [ 4, 16,  0, 16],
        [16, 18,  2,  1],
        [10, 19,  9,  4]]])
In [83]: thresh = 10
In [84]: np.where((A < thresh).all(0),-1,(A >= thresh).argmax(0))
Out[84]:
array([[ 0,  1,  2, -1],
       [ 1,  0,  0,  2],
       [ 1,  0,  0,  0],
       [ 0,  0,  1, -1],
       [ 0,  2,  0,  0]])
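For reference (my own addition, not from the answer above), the same idea carries over to a masked array directly: argmax over the negated mask gives the first unmasked z index, with -1 where the whole column is masked. A minimal sketch, assuming the Run #1 data:

import numpy as np
import numpy.ma as ma

A = np.array([[[0, 1], [1, 5]],
              [[3, 3], [2, 4]],
              [[2, 1], [4, 9]]])
Am = ma.masked_less(A, 3)                              # mask out values below the threshold
valid = ~ma.getmaskarray(Am)                           # True where a point survives the mask
first_valid = valid.argmax(axis=0)                     # first unmasked z index per (y, x)
result = np.where(valid.any(axis=0), first_valid, -1)  # -1 where the whole column is masked
# array([[1, 1],
#        [2, 0]])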
