How to insert a value at a fixed position of a PyTorch tensor - python

I have a PyTorch tensor
x = [[1,2,3,4,5]]
Now I want to insert a value at a fixed position of the tensor x. For example, if I insert 11 at position 3, then x will be
x = [[1,2,3,11,4,5]]
How can I perform this operation in PyTorch?

Dynamically extending tensors to arbitrary sizes along non-singleton dimensions, such as the one you mentioned, is unsupported in PyTorch, mainly because the memory is pre-allocated during tensor construction with a fixed size that depends on the shape and data type. The only way to grow a non-singleton dimension is to create a new (empty/zero) tensor with the target shape, copy the existing values over, and insert the new value(s) at the desired position(s).
In [24]: z = torch.zeros(1, 6)
In [27]: t
Out[27]: tensor([[1, 2, 3, 4, 5]])
In [30]: z[:, :3] = t[:, :3]    # copy the values before the insertion point
In [33]: z[:, -2:] = t[:, -2:]  # copy the values after the insertion point
In [36]: z[z == 0] = 11         # fill the gap (assumes t itself contains no zeros)
In [37]: z
Out[37]: tensor([[ 1.,  2.,  3., 11.,  4.,  5.]])
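As an aside, the same insertion can be written in one step by concatenating the slices around the target position with torch.cat. A minimal sketch (the names pos and val are just for illustration):

import torch

t = torch.tensor([[1, 2, 3, 4, 5]])
pos, val = 3, 11

# build the result from the slice before pos, the new value, and the slice from pos onward
out = torch.cat((t[:, :pos], torch.tensor([[val]]), t[:, pos:]), dim=1)
print(out)  # tensor([[ 1,  2,  3, 11,  4,  5]])

This avoids the z == 0 trick's assumption that t contains no zeros, and it keeps the integer dtype.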
However, if you instead wanted to expand the tensor along a singleton dimension, that's easy to achieve using tensor.expand(new_shape). In the example below, we expand the tensor t to length 3 along the 0th dimension, which is originally a singleton dimension.
# make a copy for in-place modification since `expand()` returns a view
In [64]: t_expd = t.expand(3, -1).clone()
In [65]: t_expd
Out[65]:
tensor([[1, 2, 3, 4, 5],
        [1, 2, 3, 4, 5],
        [1, 2, 3, 4, 5]])
# modify 2nd and 3rd rows
In [66]: t_expd[1:, ...] = 23
In [67]: t_expd
Out[67]:
tensor([[ 1,  2,  3,  4,  5],
        [23, 23, 23, 23, 23],
        [23, 23, 23, 23, 23]])
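If you'd rather skip the clone(), torch's repeat() materializes the copies up front (unlike expand(), which only creates a view). A small sketch:

import torch

t = torch.tensor([[1, 2, 3, 4, 5]])

# repeat() allocates real copies, so in-place edits are safe without clone()
t_rep = t.repeat(3, 1)
t_rep[1:, :] = 23
print(t_rep)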

Related

Numpy: for each element in one dimension, find coordinates of maximum of sub-array

I've seen variations of this question asked a few times but so far haven't seen any answers that get to the heart of this general case. I have an n-dimensional array of shape [a, b, c, ...]. For some dimension x, I want to look at each sub-array and find the coordinates of the maximum.
For example, say b = 2, and that's the dimension I'm interested in. I want the coordinates of the maximum of [:, 0, :, ...] and [:, 1, :, ...] in the form a_max = [a_max_b0, a_max_b1], c_max = [c_max_b0, c_max_b1], etc.
I've tried to do this by reshaping my input matrix to a 2d array [b, a*c*d*...], using argmax along axis 0, and unraveling the indices, but the output coordinates don't wind up giving the maxima in my dataset. In this case, n = 3 and I'm interested in axis 1.
shape = gains_3d.shape
idx = gains_3d.reshape(shape[1], -1)
idx = idx.argmax(axis = 1)
a1, a2 = np.unravel_index(idx, [shape[0], shape[2]])
Obviously I could use a loop, but that's not very pythonic.
For a concrete example, I randomly generated a 4x2x3 array. I'm interested in axis 1, so the output should be two arrays of length 2.
testarray = np.array([[[0.17028444, 0.38504759, 0.64852725],
                       [0.8344524 , 0.54964746, 0.86628204]],
                      [[0.77089997, 0.25876277, 0.45092835],
                       [0.6119848 , 0.10096425, 0.627054  ]],
                      [[0.8466859 , 0.82011746, 0.51123959],
                       [0.26681694, 0.12952723, 0.94956865]],
                      [[0.28123628, 0.30465068, 0.29498136],
                       [0.6624998 , 0.42748154, 0.83362323]]])
testarray[:,0,:] is
array([[0.17028444, 0.38504759, 0.64852725],
       [0.77089997, 0.25876277, 0.45092835],
       [0.8466859 , 0.82011746, 0.51123959],
       [0.28123628, 0.30465068, 0.29498136]])
so the first element of the first output array will be 2, and the first element of the other will be 0, pointing to 0.8466859. The second elements of the two arrays will be 2 and 2, pointing to 0.94956865 of testarray[:,1,:].
Let's first try to get a clear idea of what you are trying to do:
Sample 3d array:
In [136]: arr = np.random.randint(0,10,(2,3,4))
In [137]: arr
Out[137]:
array([[[1, 7, 6, 2],
        [1, 5, 7, 1],
        [2, 2, 5, *6*]],

       [[*9*, 1, 2, 9],
        [2, *9*, 3, 9],
        [0, 2, 0, 6]]])
(The asterisks mark the maximum over arr[:,i,:] for each middle index i.)
After fiddling around a bit I came up with this iteration, showing the coordinates for each middle dimension and the max value:
In [151]: [(i, np.unravel_index(np.argmax(arr[:,i,:]), (2,4)), np.max(arr[:,i,:])) for i in range(3)]
Out[151]: [(0, (1, 0), 9), (1, (1, 1), 9), (2, (0, 3), 6)]
I can move the unravel outside the iteration:
In [153]: np.unravel_index([np.argmax(arr[:,i,:]) for i in range(3)],(2,4))
Out[153]: (array([1, 1, 0]), array([0, 1, 3]))
Your reshape approach does avoid this loop:
In [154]: arr1 = arr.transpose(1,0,2) # move our axis first
In [155]: arr1 = arr1.reshape(3,-1)
In [156]: arr1
Out[156]:
array([[1, 7, 6, 2, 9, 1, 2, 9],
       [1, 5, 7, 1, 2, 9, 3, 9],
       [2, 2, 5, 6, 0, 2, 0, 6]])
In [158]: np.argmax(arr1,axis=1)
Out[158]: array([4, 5, 3])
In [159]: np.unravel_index(_,(2,4))
Out[159]: (array([1, 1, 0]), array([0, 1, 3]))
argmax takes only a single axis value, whereas you want the equivalent of an argmax over all but one axis. Some reductions (e.g. np.max) accept an axis tuple, but argmax does not. The transpose and reshape may be the only way.
In [163]: np.max(arr1,axis=1)
Out[163]: array([9, 9, 6])
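To package this recipe up, here is a sketch of a general helper (the name argmax_all_but is hypothetical, just for illustration), using np.moveaxis instead of transpose:

import numpy as np

def argmax_all_but(arr, axis):
    # move the axis of interest to the front and flatten the remaining axes
    moved = np.moveaxis(arr, axis, 0)
    flat = moved.reshape(moved.shape[0], -1)
    # unravel the flat argmax indices back into the shape of the remaining axes
    return np.unravel_index(flat.argmax(axis=1), moved.shape[1:])

arr = np.random.randint(0, 10, (2, 3, 4))
a_idx, c_idx = argmax_all_but(arr, axis=1)  # two arrays of length 3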

Predict memory layout of ufunc output

Using numpy ndarrays, most of the time we needn't worry our pretty little heads about memory layout, because results do not depend on it.
Except when they do. Consider, for example, this slightly overengineered way of setting the diagonal of a 3x2 matrix:
>>> a = np.zeros((3,2))
>>> a.reshape(2,3)[:,0] = 1
>>> a
array([[1., 0.],
       [0., 1.],
       [0., 0.]])
As long as we control the memory layout of a, this is fine. But if we don't, it is a bug, and to make matters worse, a nasty silent one:
>>> a = np.zeros((3,2),order='F')
>>> a.reshape(2,3)[:,0] = 1
>>> a
array([[0., 0.],
       [0., 0.],
       [0., 0.]])
This shall suffice to show that memory layout is not merely an implementation detail.
The first thing one might reasonably ask to get on top of array layout is: what do new arrays look like? The factories empty, ones, zeros, identity, etc. return C-contiguous layouts by default.
However, this rule does not extend to every new array allocated by numpy. For example:
>>> a = np.arange(8).reshape(2,2,2).transpose(1,0,2)
>>> aa = a*a
The product aa is a new array allocated by ufunc np.multiply. Is it C-contiguous? No:
>>> aa.strides
(16, 32, 8)
My guess is that this is the result of an optimization that recognizes that this operation can be done on a flat linear array which would explain why the output has the same memory layout as the inputs.
In fact this can even be useful, unlike the following nonsense function. It shows a handy idiom to implement an axis parameter while still keeping indexing simple.
>>> def symmetrize_along_axis(a, axis=0):
...     aux = a.swapaxes(0, axis)
...     out = aux + aux[::-1]
...     return out.swapaxes(0, axis)
The slightly surprising but clearly desirable thing is that this produces contiguous output as long as input is contiguous.
>>> a = np.arange(8).reshape(2,2,2)
>>> symmetrize_along_axis(a,1).flags.contiguous
True
This shall suffice to show that knowing what layouts are returned by ufuncs can be quite useful. Hence my question:
Given the layouts of ufunc arguments are there any rules or guarantees regarding the layout of the output?
In the a = np.zeros((3,2), order='F') case, a.reshape(2,3) creates a copy, not a view. That is why the assignment fails to stick; it is not the memory layout itself.
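You can check this directly, for example with np.shares_memory (a quick sketch):

import numpy as np

a = np.zeros((3, 2))             # C order
b = np.zeros((3, 2), order='F')  # F order

# reshape returns a view for the C-order array, but a copy for the F-order one
print(np.shares_memory(a, a.reshape(2, 3)))  # True  -> view, assignment sticks
print(np.shares_memory(b, b.reshape(2, 3)))  # False -> copy, assignment is lost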
Look at same shape array:
In [123]: a = np.arange(6).reshape(3,2)
In [124]: a
Out[124]:
array([[0, 1],
       [2, 3],
       [4, 5]])
In [125]: a.reshape(2,3)
Out[125]:
array([[0, 1, 2],
       [3, 4, 5]])
In [127]: a.reshape(2,3)[:,0]
Out[127]: array([0, 3])
In Out[125] the values still flow in C order.
And an order-F array:
In [128]: b = np.arange(6).reshape(3,2, order='F')
In [129]: b
Out[129]:
array([[0, 3],   # values flow in order F
       [1, 4],
       [2, 5]])
In [130]: b.reshape(2,3)
Out[130]:
array([[0, 3, 1],   # values are jumbled
       [4, 2, 5]])
In [131]: b.reshape(2,3)[:,0]
Out[131]: array([0, 4])
If I keep order F in the shape:
In [132]: b.reshape(2,3, order='F')
Out[132]:
array([[0, 2, 4],   # values still flow in order F
       [1, 3, 5]])
In [133]: b.reshape(2,3, order='F')[:,0]
Out[133]: array([0, 1])
Confirm with assignment:
In [135]: a.reshape(2,3)[:,0]=10
In [136]: a
Out[136]:
array([[10,  1],
       [ 2, 10],
       [ 4,  5]])
but here the assignment has no effect (it modified the copy, not b):
In [137]: b.reshape(2,3)[:,0]=10
In [138]: b
Out[138]:
array([[0, 3],
       [1, 4],
       [2, 5]])
but here assignment works:
In [139]: b.reshape(2,3, order='F')[:,0]=10
In [140]: b
Out[140]:
array([[10,  3],
       [10,  4],
       [ 2,  5]])
Or we can use order A to preserve order:
In [143]: b.reshape(2,3, order='A')[:,0]
Out[143]: array([10, 10])
In [144]: b.reshape(2,3, order='A')[:,0] = 20
In [145]: b
Out[145]:
array([[20,  3],
       [20,  4],
       [ 2,  5]])
ufunc order
Suspecting that ufuncs are (mostly) implemented with nditer (the C version), I checked the np.nditer docs: order can be specified in several places, and the tutorial demonstrates the effect of order on iteration.
I don't see order documented for ufuncs, but it is accepted via the kwargs.
In [171]: c = np.arange(8).reshape(2,2,2)
In [172]: d = c.transpose(1,0,2)
In [173]: d.strides
Out[173]: (16, 32, 8)
In [174]: np.multiply(d,d,order='K').strides
Out[174]: (16, 32, 8)
In [175]: np.multiply(d,d,order='C').strides
Out[175]: (32, 16, 8)
In [176]: np.multiply(d,d,order='F').strides
Out[176]: (8, 16, 32)
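So unless you pass an explicit order, the ufunc default of order='K' follows the layout of the inputs. If downstream code requires a particular layout regardless of what the ufunc chose, a simple sketch is to normalize afterwards:

import numpy as np

d = np.arange(8).reshape(2, 2, 2).transpose(1, 0, 2)
aa = d * d                          # layout follows the inputs (order='K' behaviour)
print(aa.flags['C_CONTIGUOUS'])     # False

# force a C-contiguous result (no copy is made if it is already contiguous)
aa_c = np.ascontiguousarray(aa)
print(aa_c.flags['C_CONTIGUOUS'])   # True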

Numpy indexing broadcasting introduces new dimension

I have an array I want to use for mapping. Let's call it my_map, type float, shape (m,c).
I have a second array with indexes, let's call it my_indexes, type int, shape (n,c); every value is between 0 and m.
Trying to index my_map doing my_ans = my_map[my_indexes] I get an array of shape (n,c,c), when I was expecting (n,c). What would be the proper way to do it?
Just to be clear, what I am trying to do is something equivalent to:
my_ans = np.empty_like(touch_probability)
for i in range(c):
    my_ans[:,i] = my_map[:,i][my_indexes[:,i]]
To illustrate and test your problem, define simple, real arrays:
In [44]: arr = np.arange(12).reshape(3,4)
In [45]: idx = np.array([[0,2,1,0],[2,2,1,0]])
In [46]: arr.shape
Out[46]: (3, 4)
In [47]: idx.shape
Out[47]: (2, 4)
Your desired calculation:
In [48]: res = np.zeros((2,4), int)
In [49]: for i in range(4):
    ...:     res[:,i] = arr[:,i][idx[:,i]]   # same as arr[idx[:,i], i]
    ...:
In [50]: res
Out[50]:
array([[0, 9, 6, 3],
       [8, 9, 6, 3]])
Doing the same with one indexing step:
In [51]: arr[idx, np.arange(4)]
Out[51]:
array([[0, 9, 6, 3],
       [8, 9, 6, 3]])
This is broadcasting the two indexing arrays against each other, and then picking points:
In [52]: np.broadcast_arrays(idx, np.arange(4))
Out[52]:
[array([[0, 2, 1, 0],
        [2, 2, 1, 0]]),
 array([[0, 1, 2, 3],
        [0, 1, 2, 3]])]
So we are indexing the (m,c) array with two (n,c) index arrays.
The following are the same:
arr[idx]
arr[idx, :]
It is using idx to select whole rows from arr, so the result's shape is the shape of idx plus the last dimension of arr. Whereas what you want is just the i-th element of row idx[j,i].
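As an aside, NumPy 1.15+ has np.take_along_axis, which expresses the same per-column selection without building the arange index by hand. A quick sketch:

import numpy as np

arr = np.arange(12).reshape(3, 4)
idx = np.array([[0, 2, 1, 0], [2, 2, 1, 0]])

# res[j, i] = arr[idx[j, i], i]
res = np.take_along_axis(arr, idx, axis=0)
print(res)
# [[0 9 6 3]
#  [8 9 6 3]]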

Difference between tensor.permute and tensor.view in PyTorch?

What is the difference between tensor.permute() and tensor.view()?
They seem to do the same thing.
Input
In [12]: aten = torch.tensor([[1, 2, 3], [4, 5, 6]])
In [13]: aten
Out[13]:
tensor([[1, 2, 3],
        [4, 5, 6]])
In [14]: aten.shape
Out[14]: torch.Size([2, 3])
torch.view() reshapes the tensor to a different but compatible shape. For example, our input tensor aten has the shape (2, 3). This can be viewed as tensors of shapes (6, 1), (1, 6), etc.
# reshaping (or viewing) 2x3 matrix as a column vector of shape 6x1
In [15]: aten.view(6, -1)
Out[15]:
tensor([[1],
        [2],
        [3],
        [4],
        [5],
        [6]])
In [16]: aten.view(6, -1).shape
Out[16]: torch.Size([6, 1])
Alternatively, it can also be reshaped or viewed as a row vector of shape (1, 6) as in:
In [19]: aten.view(-1, 6)
Out[19]: tensor([[ 1, 2, 3, 4, 5, 6]])
In [20]: aten.view(-1, 6).shape
Out[20]: torch.Size([1, 6])
Whereas tensor.permute() is only used to swap the axes. The below example will make things clear:
In [39]: aten
Out[39]:
tensor([[1, 2, 3],
        [4, 5, 6]])
In [40]: aten.shape
Out[40]: torch.Size([2, 3])
# swapping the axes/dimensions 0 and 1
In [41]: aten.permute(1, 0)
Out[41]:
tensor([[1, 4],
        [2, 5],
        [3, 6]])
# since we permute the axes/dims, the shape changed from (2, 3) => (3, 2)
In [42]: aten.permute(1, 0).shape
Out[42]: torch.Size([3, 2])
You can also use negative indexing to do the same thing as in:
In [45]: aten.permute(-1, 0)
Out[45]:
tensor([[1, 4],
        [2, 5],
        [3, 6]])
In [46]: aten.permute(-1, 0).shape
Out[46]: torch.Size([3, 2])
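A compact way to see the difference is to compare view and permute to the same target shape on the same tensor: the shapes match, but the element order does not. A quick sketch:

import torch

aten = torch.tensor([[1, 2, 3], [4, 5, 6]])

# view keeps the row-major traversal order; permute reorders the axes
print(aten.view(3, 2))     # tensor([[1, 2], [3, 4], [5, 6]])
print(aten.permute(1, 0))  # tensor([[1, 4], [2, 5], [3, 6]])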
view changes how the tensor is represented. For example, a tensor with 4 elements can be represented as 4x1, 2x2, or 1x4, whereas permute changes the axes. Note that neither actually copies the data: permute returns a view with rearranged strides, while view reinterprets the same underlying storage in a new shape.
The code examples below may help. a is a 2x2 tensor/matrix. With view you can read a as a column or row vector (tensor), but you can't transpose it that way. To transpose you need permute; transposition is achieved by swapping/permuting axes.
In [7]: import torch
In [8]: a = torch.tensor([[1,2],[3,4]])
In [9]: a
Out[9]:
tensor([[1, 2],
        [3, 4]])
In [11]: a.permute(1,0)
Out[11]:
tensor([[1, 3],
        [2, 4]])
In [12]: a.view(4,1)
Out[12]:
tensor([[1],
        [2],
        [3],
        [4]])
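A quick way to confirm that neither operation copies data is to compare storage pointers (a small sketch):

import torch

a = torch.tensor([[1, 2], [3, 4]])

# both permute and view share the original storage -- no data is moved
print(a.permute(1, 0).data_ptr() == a.data_ptr())  # True
print(a.view(4, 1).data_ptr() == a.data_ptr())     # True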
Bonus: See https://twitter.com/karpathy/status/1013322763790999552
tensor.permute() permutes the order of the axes of a tensor.
tensor.view() reshapes the tensor (analogous to numpy.reshape) by reducing/expanding the size of each dimension (if one increases, the others must decrease).
The link gives a clear explanation about view, reshape, and permute:
view works only on contiguous tensors.
reshape also works on non-contiguous tensors (it copies when necessary).
permute returns a view of the original tensor with its dimensions permuted. It is quite different from view and reshape.
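The contiguity point is easy to see in practice; a minimal sketch:

import torch

a = torch.tensor([[1, 2], [3, 4]])
p = a.permute(1, 0)              # a non-contiguous view with swapped strides

print(p.is_contiguous())         # False
# p.view(4)                      # would raise a RuntimeError here
print(p.reshape(4))              # works; copies if needed -> tensor([1, 3, 2, 4])
print(p.contiguous().view(4))    # equivalent explicit form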

Python/numpy issue with array/vector with empty second dimension

I have what seems to be an easy question.
Observe the code:
In : x=np.array([0, 6])
Out: array([0, 6])
In : x.shape
Out: (2L,)
Which shows that the array has no second dimension, and therefore x is no different from x.T.
How can I make x have dimension (2L,1L)? The real motivation for this question is that I have an array y of shape [3L,4L], and I want y.sum(1) to be a vector that can be transposed, etc.
While you can reshape arrays, and add dimensions with [:,np.newaxis], you should be familiar with the most basic nested brackets, or list, notation. Note how it matches the display.
In [230]: np.array([[0],[6]])
Out[230]:
array([[0],
       [6]])
In [231]: _.shape
Out[231]: (2, 1)
np.array also takes an ndmin parameter, though it adds the extra dimensions at the start (the default location for numpy).
In [232]: np.array([0,6],ndmin=2)
Out[232]: array([[0, 6]])
In [233]: _.shape
Out[233]: (1, 2)
A classic way of making something 2d - reshape:
In [234]: y=np.arange(12).reshape(3,4)
In [235]: y
Out[235]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
sum (and related functions) has a keepdims parameter. Read the docs.
In [236]: y.sum(axis=1,keepdims=True)
Out[236]:
array([[ 6],
       [22],
       [38]])
In [237]: _.shape
Out[237]: (3, 1)
"Empty 2nd dimension" isn't quite the right terminology. More like a nonexistent 2nd dimension.
A dimension can have 0 terms:
In [238]: np.ones((2,0))
Out[238]: array([], shape=(2, 0), dtype=float64)
If you are more familiar with MATLAB, which has a minimum of 2d, you might like the np.matrix subclass. It takes steps to ensure that most operations return another 2d matrix:
In [247]: ym=np.matrix(y)
In [248]: ym.sum(axis=1)
Out[248]:
matrix([[ 6],
        [22],
        [38]])
The matrix sum does:
np.ndarray.sum(self, axis, dtype, out, keepdims=True)._collapse(axis)
The _collapse bit lets it return a scalar for ym.sum().
Another point about keeping dimension info when indexing:
In [42]: X
Out[42]:
array([[0, 0],
       [0, 1],
       [1, 0],
       [1, 1]])
In [43]: X[1].shape
Out[43]: (2,)
In [44]: X[1:2].shape
Out[44]: (1, 2)
In [45]: X[1]
Out[45]: array([0, 1])
In [46]: X[1:2] # this way will keep dimension
Out[46]: array([[0, 1]])
