Difference between tensor.permute and tensor.view in PyTorch?

What is the difference between tensor.permute() and tensor.view()?
They seem to do the same thing.

Input
In [12]: aten = torch.tensor([[1, 2, 3], [4, 5, 6]])
In [13]: aten
Out[13]:
tensor([[1, 2, 3],
        [4, 5, 6]])
In [14]: aten.shape
Out[14]: torch.Size([2, 3])
tensor.view() reshapes the tensor to a different but compatible shape. For example, our input tensor aten has the shape (2, 3), so it can be viewed as tensors of shape (6, 1), (1, 6), etc. (A -1 passed to view tells PyTorch to infer that dimension's size from the remaining ones.)
# reshaping (or viewing) 2x3 matrix as a column vector of shape 6x1
In [15]: aten.view(6, -1)
Out[15]:
tensor([[1],
        [2],
        [3],
        [4],
        [5],
        [6]])
In [16]: aten.view(6, -1).shape
Out[16]: torch.Size([6, 1])
Alternatively, it can also be reshaped or viewed as a row vector of shape (1, 6) as in:
In [19]: aten.view(-1, 6)
Out[19]: tensor([[ 1, 2, 3, 4, 5, 6]])
In [20]: aten.view(-1, 6).shape
Out[20]: torch.Size([1, 6])
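One thing worth noting (a minimal sketch, using a fresh tensor b so the examples above stay valid): a view shares its underlying storage with the original tensor, so writing through the view is visible in the original as well.
b = torch.tensor([[1, 2, 3], [4, 5, 6]])
v = b.view(6, -1)
v[0, 0] = 100   # in-place write through the view
# b is now tensor([[100, 2, 3], [4, 5, 6]]): no data was copied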
tensor.permute(), on the other hand, only swaps (reorders) the axes. The example below should make things clear:
In [39]: aten
Out[39]:
tensor([[1, 2, 3],
        [4, 5, 6]])
In [40]: aten.shape
Out[40]: torch.Size([2, 3])
# swapping the axes/dimensions 0 and 1
In [41]: aten.permute(1, 0)
Out[41]:
tensor([[1, 4],
        [2, 5],
        [3, 6]])
# since we permute the axes/dims, the shape changed from (2, 3) => (3, 2)
In [42]: aten.permute(1, 0).shape
Out[42]: torch.Size([3, 2])
You can also use negative indexing to do the same thing as in:
In [45]: aten.permute(-1, 0)
Out[45]:
tensor([[1, 4],
        [2, 5],
        [3, 6]])
In [46]: aten.permute(-1, 0).shape
Out[46]: torch.Size([3, 2])
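To see why the two are not interchangeable even when they produce the same shape, compare reshaping to (3, 2) with permuting to (3, 2). A minimal sketch: view keeps the row-major element order, while permute transposes it.
aten.view(3, 2)
# tensor([[1, 2],
#         [3, 4],
#         [5, 6]])
aten.permute(1, 0)
# tensor([[1, 4],
#         [2, 5],
#         [3, 6]])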

view changes how the tensor is represented. For example, a tensor with 4 elements can be represented as 4x1, 2x2, or 1x4, while permute changes the order of the axes. Neither operation copies the underlying data: view reinterprets the same memory with a new shape, while permute changes the stride order so the elements are traversed differently (the result is typically non-contiguous).
The code examples below may help. a is a 2x2 tensor/matrix. With view you can read a as a column or a row vector (tensor), but you can't transpose it that way. Transposing requires permute, since a transpose is exactly a swap (permutation) of axes.
In [7]: import torch
In [8]: a = torch.tensor([[1,2],[3,4]])
In [9]: a
Out[9]:
tensor([[1, 2],
        [3, 4]])
In [11]: a.permute(1,0)
Out[11]:
tensor([[1, 3],
        [2, 4]])
In [12]: a.view(4,1)
Out[12]:
tensor([[1],
        [2],
        [3],
        [4]])
Bonus: See https://twitter.com/karpathy/status/1013322763790999552

tensor.permute() permutes the order of the axes of a tensor.
tensor.view() reshapes the tensor (analogous to numpy.reshape) by changing the size of each dimension; if one dimension grows, another must shrink so the total number of elements stays the same.
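Since the numpy analogy comes up, a rough sketch (the correspondence is approximate: view never copies, while np.reshape may return either a view or a copy):
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
arr.transpose(1, 0)   # analogous to tensor.permute(1, 0)
arr.reshape(6, 1)     # analogous to tensor.view(6, 1)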

To summarize the differences between view, reshape, and permute:
view only works on contiguous tensors.
reshape also works on non-contiguous tensors, copying the data when necessary.
permute returns a view of the original input tensor with its dimensions reordered. It is quite different from view and reshape.
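A minimal sketch of the contiguity point (assuming a recent PyTorch; the error comment paraphrases the actual message):
import torch

a = torch.arange(6).reshape(2, 3)
p = a.permute(1, 0)          # a view with swapped strides
p.is_contiguous()            # False
# p.view(6)                  # would raise a RuntimeError: p is non-contiguous
p.reshape(6)                 # works; copies because p is non-contiguous
p.contiguous().view(6)       # equivalent: make it contiguous first, then view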

Related

How to insert a value at a fixed position of a PyTorch tensor

I have a PyTorch tensor
x = [[1,2,3,4,5]]
Now I want to insert a value at a fixed position of the tensor x. For example, if I insert 11 at position 3, then x becomes
x = [[1,2,3,11,4,5]]
How can I perform this operation in PyTorch?
Dynamically extending a tensor to an arbitrary size along a non-singleton dimension, as in your example, is unsupported in PyTorch, mainly because the memory is pre-allocated during tensor construction and fixed in size based on the shape and data type. The only way to grow a non-singleton dimension is to create a new (empty/zero) tensor with the target shape, copy the old values over, and write the new value(s) at the desired position(s).
In [24]: z = torch.zeros(1, 6)
In [27]: t
Out[27]: tensor([[1, 2, 3, 4, 5]])
# copy the values before and after the insertion point
In [30]: z[:, :3] = t[:, :3]
In [33]: z[:, -2:] = t[:, -2:]
# fill the remaining slot (this z == 0 trick assumes t itself contains
# no zeros; also, z is float by default, hence the float output)
In [36]: z[z == 0] = 11
In [37]: z
Out[37]: tensor([[ 1.,  2.,  3., 11.,  4.,  5.]])
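Alternatively (a sketch, not part of the original answer), the same copy-and-insert can be written with torch.cat, which avoids the z == 0 trick and preserves the original dtype:
new_val = torch.tensor([[11]])
# split t at position 3 and concatenate the new value in between
x = torch.cat([t[:, :3], new_val, t[:, 3:]], dim=1)
# tensor([[ 1,  2,  3, 11,  4,  5]])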
However, if you instead want to expand the tensor along a singleton dimension, that is easy to achieve using tensor.expand(new_shape). In the example below, we expand the tensor t to length 3 along the 0th dimension, which is originally a singleton dimension.
# make a copy for in-place modification since `expand()` returns a view
In [64]: t_expd = t.expand(3, -1).clone()
In [65]: t_expd
Out[65]:
tensor([[1, 2, 3, 4, 5],
        [1, 2, 3, 4, 5],
        [1, 2, 3, 4, 5]])
# modify 2nd and 3rd rows
In [66]: t_expd[1:, ...] = 23
In [67]: t_expd
Out[67]:
tensor([[ 1,  2,  3,  4,  5],
        [23, 23, 23, 23, 23],
        [23, 23, 23, 23, 23]])

Why is there a difference between the output shapes of these two NumPy slice commands?

The two commands below give arrays of different shapes. I would appreciate an explanation of why, and a reference if there is one; I searched the internet but did not find a clear explanation.
data.shape
(11,2)
# outputs the values in column-0 as a 1-D array of shape (11,)
data[:,0]
array([-7.24070e-01, -2.40724e+00,  2.64837e+00,  3.60920e-01,
        6.73120e-01, -4.54600e-01,  2.20168e+00,  1.15605e+00,
        5.06940e-01, -8.59520e-01, -5.99700e-01])
# outputs the values in column-0 as an (11, 1) array
data[:,:-1]
array([[-7.24070e-01],
       [-2.40724e+00],
       [ 2.64837e+00],
       [ 3.60920e-01],
       [ 6.73120e-01],
       [-4.54600e-01],
       [ 2.20168e+00],
       [ 1.15605e+00],
       [ 5.06940e-01],
       [-8.59520e-01],
       [-5.99700e-01]])
I'll try to consolidate the comments into an answer.
First look at Python list indexing
In [92]: alist = [1,2,3]
selecting an item:
In [93]: alist[0]
Out[93]: 1
making a copy of the whole list:
In [94]: alist[:]
Out[94]: [1, 2, 3]
or a slice of length 2, or 1 or 0:
In [95]: alist[:2]
Out[95]: [1, 2]
In [96]: alist[:1]
Out[96]: [1]
In [97]: alist[:0]
Out[97]: []
Arrays follow the same basic rules
In [98]: x = np.arange(12).reshape(3,4)
In [99]: x
Out[99]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
Select a row:
In [100]: x[0]
Out[100]: array([0, 1, 2, 3])
or a column:
In [101]: x[:,0]
Out[101]: array([0, 4, 8])
x[0,1] selects a single element.
https://numpy.org/doc/stable/user/basics.indexing.html#single-element-indexing
Indexing with a slice returns multiple rows:
In [103]: x[0:2]
Out[103]:
array([[0, 1, 2, 3],
       [4, 5, 6, 7]])
In [104]: x[0:1] # it retains the dimensions, even if only 1 (or even 0)
Out[104]: array([[0, 1, 2, 3]])
Likewise for columns:
In [106]: x[:,0:1]
Out[106]:
array([[0],
       [4],
       [8]])
subslices on both dimensions:
In [107]: x[0:2,1:3]
Out[107]:
array([[1, 2],
       [5, 6]])
https://numpy.org/doc/stable/user/basics.indexing.html
x[[0]] also returns a 2d array, but that gets into "advanced" indexing (which doesn't have a list equivalent).
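A quick sketch of that last point, using the same x:
x[0]     # scalar index, drops a dimension:    array([0, 1, 2, 3])
x[0:1]   # slice, keeps the dimension:         array([[0, 1, 2, 3]])
x[[0]]   # list ("advanced") index, also 2d:   array([[0, 1, 2, 3]])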

Unexpected result from NumPy matrix insert: how does this work?

My goal was to insert a column to the right of a numpy matrix. However, I found that the code I was using inserts two columns rather than just one.
# This one results in a 4x1 matrix, as expected
np.insert(np.matrix([[0],[0]]), 1, np.matrix([[0],[0]]), 0)
>>>matrix([[0],
           [0],
           [0],
           [0]])
# I would expect this line to return a 2x2 matrix, but it returns a 2x3 matrix instead.
np.insert(np.matrix([[0],[0]]), 1, np.matrix([[0],[0]]), 1)
>>>matrix([[0, 0, 0],
           [0, 0, 0]])
Why do I get the above, in the second example, instead of [[0,0], [0,0]]?
While new use of np.matrix is discouraged, we get the same result with np.array:
In [41]: np.insert(np.array([[1],[2]]),1, np.array([[10],[20]]), 0)
Out[41]:
array([[ 1],
       [10],
       [20],
       [ 2]])
In [42]: np.insert(np.array([[1],[2]]),1, np.array([[10],[20]]), 1)
Out[42]:
array([[ 1, 10, 20],
       [ 2, 10, 20]])
In [44]: np.insert(np.array([[1],[2]]),1, np.array([10,20]), 1)
Out[44]:
array([[ 1, 10],
       [ 2, 20]])
Inserting at [1] instead:
In [46]: np.insert(np.array([[1],[2]]),[1], np.array([[10],[20]]), 1)
Out[46]:
array([[ 1, 10],
       [ 2, 20]])
In [47]: np.insert(np.array([[1],[2]]),[1], np.array([10,20]), 1)
Out[47]:
array([[ 1, 10, 20],
       [ 2, 10, 20]])
np.insert is a complex function written in Python, so we need to look at its code to see how values is mapped onto the target space.
The docs elaborate on the difference between inserting at 1 and at [1], but offhand I don't see an explanation of how the shape of values matters.
Difference between sequence and scalars:
>>> np.insert(a, [1], [[1],[2],[3]], axis=1)
array([[1, 1, 1],
       [2, 2, 2],
       [3, 3, 3]])
>>> np.array_equal(np.insert(a, 1, [1, 2, 3], axis=1),
... np.insert(a, [1], [[1],[2],[3]], axis=1))
True
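Reading the np.insert source suggests the mechanism (treat this as an assumption based on the current NumPy implementation, not documented behavior): with a scalar index, the first axis of values is moved to the insertion axis before the number of new slots is counted, mirroring the difference between a[:, 0, :] = v and a[:, [0], :] = v. A sketch:
import numpy as np

arr = np.array([[1], [2]])
vals = np.array([[10], [20]])      # shape (2, 1)
# scalar index: vals is treated as shape (1, 2) along axis 1,
# so TWO columns are inserted, each broadcast down the rows
np.insert(arr, 1, vals, axis=1)    # [[1, 10, 20], [2, 10, 20]]
# list index: vals broadcasts against the single (2, 1) insertion slot
np.insert(arr, [1], vals, axis=1)  # [[1, 10], [2, 20]]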
When adding an array at the end of another, I'd use concatenate (or one of its stack variants) rather than insert. None of these operate in-place.
In [48]: np.concatenate([np.array([[1],[2]]), np.array([[10],[20]])], axis=1)
Out[48]:
array([[ 1, 10],
       [ 2, 20]])

numpy: how to get a max from an argmax result

I have a numpy array of arbitrary shape, e.g.:
a = array([[[ 1,  2],
            [ 3,  4],
            [ 8,  6]],
           [[ 7,  8],
            [ 9,  8],
            [ 3, 12]]])
a.shape = (2, 3, 2)
and a result of argmax over the last axis:
np.argmax(a, axis=-1) = array([[1, 1, 0],
                               [1, 0, 1]])
I'd like to get max:
np.max(a, axis=-1) = array([[ 2,  4,  8],
                            [ 8,  9, 12]])
But without recalculating everything. I've tried:
a[np.arange(len(a)), np.argmax(a, axis=-1)]
But got:
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (2,) (2,3)
How to do it? Similar question for 2-d: numpy 2d array max/argmax
You can use advanced indexing -
In [17]: a
Out[17]:
array([[[ 1,  2],
        [ 3,  4],
        [ 8,  6]],
       [[ 7,  8],
        [ 9,  8],
        [ 3, 12]]])
In [18]: idx = a.argmax(axis=-1)
In [19]: m,n = a.shape[:2]
In [20]: a[np.arange(m)[:,None],np.arange(n),idx]
Out[20]:
array([[ 2,  4,  8],
       [ 8,  9, 12]])
For the generic ndarray case with any number of dimensions, as stated in the comments by @hpaulj, we could use np.ix_, like so -
shp = np.array(a.shape)
dim_idx = list(np.ix_(*[np.arange(i) for i in shp[:-1]]))
dim_idx.append(idx)
out = a[dim_idx]
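On NumPy 1.15 and newer, np.take_along_axis expresses the same gather directly (a sketch):
idx = a.argmax(axis=-1)
# add a trailing axis so idx has the same ndim as `a`, gather, then drop it
out = np.take_along_axis(a, idx[..., None], axis=-1).squeeze(-1)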
For an ndarray of arbitrary shape, you can flatten the argmax indices, then recover the correct shape, like so:
idx = np.argmax(a, axis=-1)
flat_idx = np.arange(a.size, step=a.shape[-1]) + idx.ravel()
maximum = a.ravel()[flat_idx].reshape(*a.shape[:-1])
For arbitrary-shape arrays, the following should work :)
a = np.arange(5 * 4 * 3).reshape((5,4,3))
# for last axis
argmax = a.argmax(axis=-1)
a[tuple(np.indices(a.shape[:-1])) + (argmax,)]
# for other axis (eg. axis=1)
argmax = a.argmax(axis=1)
idx = list(np.indices(a.shape[:1]+a.shape[2:]))
idx[1:1] = [argmax]
a[tuple(idx)]
or
a = np.arange(5 * 4 * 3).reshape((5,4,3))
argmax = a.argmax(axis=0)
np.choose(argmax, np.moveaxis(a, 0, 0))
argmax = a.argmax(axis=1)
np.choose(argmax, np.moveaxis(a, 1, 0))
argmax = a.argmax(axis=2)
np.choose(argmax, np.moveaxis(a, 2, 0))
argmax = a.argmax(axis=-1)
np.choose(argmax, np.moveaxis(a, -1, 0))
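One caveat with this approach: np.choose only accepts a small fixed number of choice arrays (32 in most NumPy versions), so it can fail when the reduced axis is long; the indexing-based approaches above have no such limit.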

Python/numpy issue with array/vector with empty second dimension

I have what seems to be an easy question.
Observe the code:
In : x=np.array([0, 6])
Out: array([0, 6])
In : x.shape
Out: (2L,)
Which shows that the array has no second dimension, and therefore x is no different from x.T.
How can I make x have dimension (2L, 1L)? The real motivation for this question is that I have an array y of shape [3L, 4L], and I want y.sum(1) to be a vector that can be transposed, etc.
While you can reshape arrays, and add dimensions with [:,np.newaxis], you should be familiar with the most basic nested brackets, or list, notation. Note how it matches the display.
In [230]: np.array([[0],[6]])
Out[230]:
array([[0],
       [6]])
In [231]: _.shape
Out[231]: (2, 1)
np.array also takes an ndmin parameter, though it adds the extra dimensions at the start (the default location for numpy).
In [232]: np.array([0,6],ndmin=2)
Out[232]: array([[0, 6]])
In [233]: _.shape
Out[233]: (1, 2)
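And as mentioned at the top, [:, np.newaxis] (or an equivalent reshape) puts the new dimension at the end instead, giving the (2, 1) shape directly; a quick sketch:
x = np.array([0, 6])
x[:, np.newaxis]    # shape (2, 1)
x[:, None]          # same thing; None is an alias for np.newaxis
x.reshape(-1, 1)    # same result via reshape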
A classic way of making something 2d - reshape:
In [234]: y=np.arange(12).reshape(3,4)
In [235]: y
Out[235]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
sum (and related functions) has a keepdims parameter. Read the docs.
In [236]: y.sum(axis=1,keepdims=True)
Out[236]:
array([[ 6],
       [22],
       [38]])
In [237]: _.shape
Out[237]: (3, 1)
"Empty" 2nd dimension isn't quite the right terminology; it's more like a nonexistent 2nd dimension.
A dimension can have 0 terms:
In [238]: np.ones((2,0))
Out[238]: array([], shape=(2, 0), dtype=float64)
If you are more familiar with MATLAB, which has a minimum of 2d, you might like the np.matrix subclass. It takes steps to ensure that most operations return another 2d matrix:
In [247]: ym=np.matrix(y)
In [248]: ym.sum(axis=1)
Out[248]:
matrix([[ 6],
        [22],
        [38]])
The matrix sum does:
np.ndarray.sum(self, axis, dtype, out, keepdims=True)._collapse(axis)
The _collapse bit lets it return a scalar for ym.sum().
Another way to keep the dimension info is to index with a slice instead of a scalar:
In [42]: X
Out[42]:
array([[0, 0],
       [0, 1],
       [1, 0],
       [1, 1]])
In [43]: X[1].shape
Out[43]: (2,)
In [44]: X[1:2].shape
Out[44]: (1, 2)
In [45]: X[1]
Out[45]: array([0, 1])
In [46]: X[1:2] # this way keeps the dimension
Out[46]: array([[0, 1]])
