I have 6 tensors of shape (batch_size, S, S, 1) and I want to combine them into a single tensor of shape (batch_size, S*S, 6), so that every element of the input tensors ends up in the result.
Can this be achieved without using loops? What's an efficient way to solve it?
Let batch_size=10 and S=4 for the purpose of this example:
>>> x = [torch.rand(10, 4, 4, 1) for _ in range(6)]
Indeed, the first step is to concatenate the tensors along the last dimension (axis=3):
>>> y = torch.cat(x, -1)
>>> y.shape
torch.Size([10, 4, 4, 6])
Then reshape to flatten axis=1 and axis=2; you can do so with torch.flatten here since the two axes are adjacent:
>>> y = torch.cat(x, -1).flatten(1, 2)
>>> y.shape
torch.Size([10, 16, 6])
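As a quick sanity check (a sketch using the tensors above), element (b, i, j) of the k-th input lands at row i*S + j of the combined tensor:
>>> b, i, j, k = 2, 1, 3, 5
>>> bool(y[b, i * 4 + j, k] == x[k][b, i, j, 0])
True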
First things first: I'm relatively new to TensorFlow.
I'm trying to implement a custom layer in tensorflow.keras and I'm having a relatively hard time trying to achieve the following:
I've got 3 Tensors (x,y,z) of shape (?,49,3,3,32) [where ? is the batch size]
On each Tensor I compute the sum over the 3rd and 4th axes [thus I end up with 3 Tensors of shape (?,49,32)]
By taking an argmax (A) over the above 3 Tensors (?,49,32) I get a single (?,49,32) Tensor
Now I want to use this tensor to select slices from the initial x,y,z Tensors in the following form:
Each element in the last dimension of A corresponds to the selected Tensor.
(i.e., 0 = x, 1 = y, 2 = z)
The position within the last dimension of A corresponds to the slice I would like to extract from the selected Tensor's last dimension.
I've tried to achieve the above using tf.gather but I had no luck. Then I tried using a series of tf.map_fn, which is ugly and computationally costly.
To simplify the above:
let's say we've got an A array of shape (3,3,3,32). Then the numpy equivalent of what I try to achieve is this:
import numpy as np
x = np.random.rand(3,3,32)
y = np.random.rand(3,3,32)
z = np.random.rand(3,3,32)
x_sums = np.sum(np.sum(x, axis=0), 0)
y_sums = np.sum(np.sum(y, axis=0), 0)
z_sums = np.sum(np.sum(z, axis=0), 0)
max_sums = np.argmax([x_sums, y_sums, z_sums], 0)
A = np.array([x, y, z])
tmp = []
for i in range(len(max_sums)):
    tmp.append(A[max_sums[i], :, :, i])
output = np.transpose(np.stack(tmp))
Any suggestions?
PS: I tried tf.gather_nd but I had no luck
This is how you can do something like that with tf.gather_nd:
import tensorflow as tf
# Make example data
tf.random.set_seed(0)
b = 10 # Batch size
x = tf.random.uniform((b, 49, 3, 3, 32))
y = tf.random.uniform((b, 49, 3, 3, 32))
z = tf.random.uniform((b, 49, 3, 3, 32))
# Stack tensors together
data = tf.stack([x, y, z], axis=2)
# Put reduction axes last
data_t = tf.transpose(data, (0, 1, 5, 2, 3, 4))
# Reduce
s = tf.reduce_sum(data_t, axis=(4, 5))
# Find largest sums
idx = tf.argmax(s, 3)
# Make gather indices
data_shape = tf.shape(data_t, out_type=idx.dtype)
bb, ii, jj = tf.meshgrid(*(tf.range(data_shape[i]) for i in range(3)), indexing='ij')
# Gather result
output_t = tf.gather_nd(data_t, tf.stack([bb, ii, jj, idx], axis=-1))
# Reorder axes
output = tf.transpose(output_t, (0, 1, 3, 4, 2))
print(output.shape)
# TensorShape([10, 49, 3, 3, 32])
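Incidentally, the NumPy loop from the question can also be written without a loop. This is a sketch using advanced indexing, reusing A, max_sums, and output from the question's snippet:
ch = np.arange(A.shape[-1])
# Advanced indices (max_sums, ch) broadcast together; result is (32, 3, 3)
output_vec = np.transpose(A[max_sums, :, :, ch])  # back to (3, 3, 32)
assert np.allclose(output_vec, output)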
I was trying to use the built-in padding function, but it wasn't padding things for me for some reason. This is my reproducible code:
import torch
def padding_batched_embedding_seq():
## 3 sequences with embedding of size 5
a = torch.ones(1, 4, 5) # seq len 4
b = torch.ones(1, 3, 5) # seq len 3
c = torch.ones(1, 2, 5) # seq len 2
##
sequences = [a, b, c]
batch = torch.nn.utils.rnn.pad_sequence(sequences)
if __name__ == '__main__':
padding_batched_embedding_seq()
error message:
Traceback (most recent call last):
File "padding.py", line 51, in <module>
padding_batched_embedding_seq()
File "padding.py", line 40, in padding_batched_embedding_seq
batch = torch.nn.utils.rnn.pad_sequence(sequences)
File "/Users/rene/miniconda3/envs/automl/lib/python3.7/site-packages/torch/nn/utils/rnn.py", line 376, in pad_sequence
out_tensor[:length, i, ...] = tensor
RuntimeError: The expanded size of the tensor (4) must match the existing size (3) at non-singleton dimension 1. Target sizes: [1, 4, 5]. Tensor sizes: [3, 5]
any idea?
cross posted: https://discuss.pytorch.org/t/how-does-one-padd-a-tensor-of-3-dimensions/51097
You should have torch.ones(2, 5) instead, or torch.ones(2, ...) where ... are the same dimensions for each sample.
RuntimeError: The expanded size of the tensor (4) must match the existing size (3) at non-singleton dimension 1. Target sizes: [1, 4, 5]. Tensor sizes: [3, 5]
means that it expects every dimension other than the first (dim == 0) to be the same, because the first dimension is the variable sequence length and the remaining dimensions describe each item, which must be identical across sequences.
The example from the docs (https://pytorch.org/docs/stable/_modules/torch/nn/utils/rnn.html) is:
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300)
>>> b = torch.ones(22, 300)
>>> c = torch.ones(15, 300)
>>> pad_sequence([a, b, c]).size()
With output:
torch.Size([25, 3, 300])
The shape is (max_sequence_len, batch_size, *input_dims) because batch_first=False by default, but I prefer batch_first=True, which gives shape torch.Size([3, 25, 300]) instead.
Padding just means filling with zeros until the sequence matches the max sequence length. As input to an RNN you may prefer a packed sequence, which contains no zero inputs.
So in your example, if each item has more dimensions, it will look like:
a = torch.ones(4, 5, 10) # sequence of length 4, each item a 5x10 2D input
b = torch.ones(3, 5, 10)
c = torch.ones(2, 5, 10)
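Putting it together, a fixed version of the question's snippet might look like this (a minimal sketch; each sequence is shaped (seq_len, *item_dims)):
import torch

a = torch.ones(4, 5)  # sequence of length 4, items of size 5
b = torch.ones(3, 5)  # sequence of length 3
c = torch.ones(2, 5)  # sequence of length 2
batch = torch.nn.utils.rnn.pad_sequence([a, b, c], batch_first=True)
print(batch.shape)  # torch.Size([3, 4, 5])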
I have a tensor of shape (16, 4096, 3). I have another tensor of indices of shape (16, 32768, 3). I am trying to collect the values along dim=1. This was initially done in PyTorch using the gather function, as shown below:
# a.shape (16L, 4096L, 3L)
# idx.shape (16L, 32768L, 3L)
b = a.gather(1, idx)
# b.shape (16L, 32768L, 3L)
Please note that the size of output b is the same as that of idx. However, when I apply TensorFlow's gather function, I get a completely different output. The output dimensions do not match, as shown below:
b = tf.gather(a, idx, axis=1)
# b.shape (16, 16, 32768, 3, 3)
I also tried using tf.gather_nd, but to no avail. See below:
b = tf.gather_nd(a, idx)
# b.shape (16, 32768)
Why am I getting different shapes of tensors? I want to get the tensor of the same shape as calculated by pytorch.
In other words, I want to know the tensorflow equivalent of torch.gather.
For the 2D case, there is a method to do it:
# a.shape (16L, 10L)
# idx.shape (16L,1)
idx = tf.stack([tf.range(tf.shape(idx)[0]),idx[:,0]],axis=-1)
b = tf.gather_nd(a,idx)
However, for the ND case this method may become very complex.
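For example, here is a small sketch of the 2D trick above with concrete values (the values are arbitrary):
import tensorflow as tf

a = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])  # shape (2, 3)
idx = tf.constant([[2], [0]])    # one index per row, shape (2, 1)
rows = tf.stack([tf.range(tf.shape(idx)[0]), idx[:, 0]], axis=-1)
b = tf.gather_nd(a, rows)        # picks a[0, 2] and a[1, 0]
print(b)  # tf.Tensor([3. 4.], shape=(2,), dtype=float32)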
This "should" be a general solution using tf.gather_nd (I've only tested for rank 2 and 3 tensors along the last axis):
def torch_gather(x, indices, gather_axis):
# if pytorch gather indices are
# [[[0, 10, 20], [0, 10, 20], [0, 10, 20]],
# [[0, 10, 20], [0, 10, 20], [0, 10, 20]]]
# tf nd_gather needs to be
# [[0,0,0], [0,0,10], [0,0,20], [0,1,0], [0,1,10], [0,1,20], [0,2,0], [0,2,10], [0,2,20],
# [1,0,0], [1,0,10], [1,0,20], [1,1,0], [1,1,10], [1,1,20], [1,2,0], [1,2,10], [1,2,20]]
# create a tensor containing indices of each element
all_indices = tf.where(tf.fill(indices.shape, True))
gather_locations = tf.reshape(indices, [indices.shape.num_elements()])
# splice in our pytorch style index at the correct axis
gather_indices = []
for axis in range(len(indices.shape)):
if axis == gather_axis:
gather_indices.append(gather_locations)
else:
gather_indices.append(all_indices[:, axis])
gather_indices = tf.stack(gather_indices, axis=-1)
gathered = tf.gather_nd(x, gather_indices)
reshaped = tf.reshape(gathered, indices.shape)
return reshaped
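For instance, with the shapes from the question (a sketch, assuming eager TF2 so that indices.shape is fully defined):
import tensorflow as tf

a = tf.random.uniform((16, 4096, 3))
idx = tf.random.uniform((16, 32768, 3), maxval=4096, dtype=tf.int64)
b = torch_gather(a, idx, gather_axis=1)
print(b.shape)  # (16, 32768, 3), matching torch.gather(a, 1, idx)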
For last-axis gathering, we can use the 2D-reshape trick for general ND cases and then employ @LiShaoyuan's 2D code above:
# last-axis gathering only - use 2D-reshape-trick for Torch's style nD gathering
def torch_gather(param, id_tensor):
# 2d-gather torch equivalent from #LiShaoyuan above
def gather2d(target, id_tensor):
idx = tf.stack([tf.range(tf.shape(id_tensor)[0]),id_tensor[:,0]],axis=-1)
result = tf.gather_nd(target,idx)
return tf.expand_dims(result,axis=-1)
target = tf.reshape(param, (-1, param.shape[-1])) # reshape 2D
target_shape = id_tensor.shape
id_tensor = tf.reshape(id_tensor, (-1, 1)) # also 2D-index
result = gather2d(target, id_tensor)
return tf.reshape(result, target_shape)
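Hypothetical usage (a sketch): note that this version expects one index per innermost row of param, i.e. it matches torch.gather(param, -1, ids.unsqueeze(-1)).squeeze(-1):
import tensorflow as tf

param = tf.random.uniform((4, 7, 9))
ids = tf.random.uniform((4, 7), maxval=9, dtype=tf.int32)
out = torch_gather(param, ids)  # the last-axis version defined above
print(out.shape)  # (4, 7)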
I want to reshape a vector of shape (5,) into a matrix of shape (1, 5).
With numpy, I can do:
>>> import numpy as np
>>> a = np.array([1, 2, 3, 4, 5])
>>> a.shape
(5,)
>>> a = np.reshape(a, (1, 5))
>>> a.shape
(1, 5)
>>> a
array([[1, 2, 3, 4, 5]])
But how do I do this with PyTorch?
Use torch.unsqueeze(input, dim, out=None):
>>> import torch
>>> a = torch.Tensor([1, 2, 3, 4, 5])
>>> a
1
2
3
4
5
[torch.FloatTensor of size 5]
>>> a = a.unsqueeze(0)
>>> a
1 2 3 4 5
[torch.FloatTensor of size 1x5]
You might use:
a.view(1,5)
Out:
1 2 3 4 5
[torch.FloatTensor of size 1x5]
There are multiple ways of reshaping a PyTorch tensor. You can apply these methods on a tensor of any dimensionality.
Let's start with a 2-dimensional 2 x 3 tensor:
x = torch.Tensor(2, 3)
print(x.shape)
# torch.Size([2, 3])
To add some robustness to this problem, let's reshape the 2 x 3 tensor by adding a new dimension at the front and another dimension in the middle, producing a 1 x 2 x 1 x 3 tensor.
Approach 1: add dimension with None
Use NumPy-style insertion of None (aka np.newaxis) to add dimensions anywhere you want. See here.
print(x.shape)
# torch.Size([2, 3])
y = x[None, :, None, :] # Add new dimensions at positions 0 and 2.
print(y.shape)
# torch.Size([1, 2, 1, 3])
Approach 2: unsqueeze
Use torch.Tensor.unsqueeze(i) (a.k.a. torch.unsqueeze(tensor, i) or the in-place version unsqueeze_()) to add a new dimension at the i'th dimension. The returned tensor shares the same data as the original tensor. In this example, we can use unsqueeze() twice to add the two new dimensions.
print(x.shape)
# torch.Size([2, 3])
# Use unsqueeze twice.
y = x.unsqueeze(0) # Add new dimension at position 0
print(y.shape)
# torch.Size([1, 2, 3])
y = y.unsqueeze(2) # Add new dimension at position 2
print(y.shape)
# torch.Size([1, 2, 1, 3])
In practice with PyTorch, adding an extra dimension for the batch may be important, so you may often see unsqueeze(0).
Approach 3: view
Use torch.Tensor.view(*shape) to specify all the dimensions. The returned tensor shares the same data as the original tensor.
print(x.shape)
# torch.Size([2, 3])
y = x.view(1, 2, 1, 3)
print(y.shape)
# torch.Size([1, 2, 1, 3])
Approach 4: reshape
Use torch.Tensor.reshape(*shape) (aka torch.reshape(tensor, shapetuple)) to specify all the dimensions. If the original data is contiguous and has the same stride, the returned tensor will be a view of input (sharing the same data), otherwise it will be a copy. This function is similar to the NumPy reshape() function in that it lets you define all the dimensions and can return either a view or a copy.
print(x.shape)
# torch.Size([2, 3])
y = x.reshape(1, 2, 1, 3)
print(y.shape)
# torch.Size([1, 2, 1, 3])
Furthermore, from the O'Reilly 2019 book Programming PyTorch for Deep Learning, the author writes:
Now you might wonder what the difference is between view() and reshape(). The answer is that view() operates as a view on the original tensor, so if the underlying data is changed, the view will change too (and vice versa). However, view() can throw errors if the required view is not contiguous; that is, it doesn’t share the same block of memory it would occupy if a new tensor of the required shape was created from scratch. If this happens, you have to call tensor.contiguous() before you can use view(). However, reshape() does all that behind the scenes, so in general, I recommend using reshape() rather than view().
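A quick illustration of that contiguity caveat (a sketch):
t = torch.arange(6).reshape(2, 3).t()  # transpose makes t non-contiguous
# t.view(6) would raise a RuntimeError here
flat = t.contiguous().view(6)  # works after making a contiguous copy
flat2 = t.reshape(6)           # reshape copies behind the scenes if needed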
Approach 5: resize_
Use the in-place function torch.Tensor.resize_(*sizes) to modify the original tensor. The documentation states:
WARNING. This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_().
print(x.shape)
# torch.Size([2, 3])
x.resize_(1, 2, 1, 3)
print(x.shape)
# torch.Size([1, 2, 1, 3])
My observations
If you want to add just one dimension (e.g. to add a 0th dimension for the batch), then use unsqueeze(0). If you want to totally change the dimensionality, use reshape().
See also:
What's the difference between reshape and view in pytorch?
What is the difference between view() and unsqueeze()?
In PyTorch 0.4, is it recommended to use reshape than view when it is possible?
For in-place modification of the shape of the tensor, you should use
tensor.resize_():
In [23]: a = torch.Tensor([1, 2, 3, 4, 5])
In [24]: a.shape
Out[24]: torch.Size([5])
# tensor.resize_(new_shape)
In [25]: a.resize_((1,5))
Out[25]:
1 2 3 4 5
[torch.FloatTensor of size 1x5]
In [26]: a.shape
Out[26]: torch.Size([1, 5])
In PyTorch, if there's an underscore at the end of an operation (like tensor.resize_()) then that operation does in-place modification to the original tensor.
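For example (a tiny sketch of the underscore convention):
t = torch.ones(2)
t.add_(1)     # in place: t is now tensor([2., 2.])
u = t.add(1)  # out of place: t is unchanged, u is tensor([3., 3.])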
Also, you can simply use np.newaxis with a torch Tensor to increase the dimension. Here is an example:
In [34]: list_ = range(5)
In [35]: a = torch.Tensor(list_)
In [36]: a.shape
Out[36]: torch.Size([5])
In [37]: new_a = a[np.newaxis, :]
In [38]: new_a.shape
Out[38]: torch.Size([1, 5])
Or you can use view(1, -1); the -1 means the size of that dimension is inferred, so you don't have to specify the number of elements.
In [3]: a.view(1,-1)
Out[3]:
1 2 3 4 5
[torch.FloatTensor of size 1x5]
This question has been thoroughly answered already, but I want to add, for less experienced Python developers, that you might find the * operator helpful in conjunction with view().
For example if you have a particular tensor size that you want a different tensor of data to conform to, you might try:
import torch
from torch.autograd import Variable

img = Variable(torch.randn(20, 30, 3))  # tensor with goal shape
flat_size = 20 * 30 * 3
X = Variable(torch.randn(50, flat_size))  # data tensor
X = X.view(-1, *img.size())  # sweet maneuver
print(X.size())  # size is (50, 20, 30, 3)
This works with numpy shape too:
import numpy as np

img = np.random.randn(20, 30, 3)
flat_size = 20 * 30 * 3
X = Variable(torch.randn(50, flat_size))
X = X.view(-1, *img.shape)
print(X.size())  # size is (50, 20, 30, 3)
torch.reshape() is made to mirror the NumPy reshape method.
It came after view() and torch.resize_(), and it is available directly in the torch namespace (it appears in dir(torch)).
import torch
x=torch.arange(24)
print(x, x.shape)
x_view = x.view(1,2,3,4) # works on is_contiguous() tensor
print(x_view.shape)
x_reshaped = x.reshape(1,2,3,4) # works on any tensor
print(x_reshaped.shape)
x_reshaped2 = torch.reshape(x_reshaped, (-1,)) # part of torch package, while view() and resize_() are not
print(x_reshaped2.shape)
Out:
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23]) torch.Size([24])
torch.Size([1, 2, 3, 4])
torch.Size([1, 2, 3, 4])
torch.Size([24])
But did you know it can also work as a replacement for squeeze() and unsqueeze()?
x = torch.tensor([1, 2, 3, 4])
print(x.shape)
x1 = torch.unsqueeze(x, 0)
print(x1.shape)
x2 = torch.unsqueeze(x1, 1)
print(x2.shape)
x3=x.reshape(1,1,4)
print(x3.shape)
x4=x.reshape(4)
print(x4.shape)
x5=x3.squeeze()
print(x5.shape)
Out:
torch.Size([4])
torch.Size([1, 4])
torch.Size([1, 1, 4])
torch.Size([1, 1, 4])
torch.Size([4])
torch.Size([4])
As far as I know, the best way to reshape tensors is to use einops. It solves various reshape problems by providing a simple and elegant function. In your situation, the code could be written as
from einops import rearrange
ans = rearrange(tensor,'h -> 1 h')
I highly recommend you try it.
BTW, you can use it with pytorch/tensorflow/numpy and many other libraries.
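For instance (a sketch; the tensors and patterns below are arbitrary examples):
from einops import rearrange
import torch

v = torch.rand(5)
m = rearrange(v, 'h -> 1 h')              # (5,) -> (1, 5)
x = torch.rand(10, 4, 4, 6)
y = rearrange(x, 'b h w c -> b (h w) c')  # (10, 4, 4, 6) -> (10, 16, 6)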
>>> import torch
>>> a = torch.Tensor([1, 2, 3, 4, 5])
>>> a.size()
torch.Size([5])
>>> # use view to reshape
>>> b = a.view(1, a.shape[0])
>>> b
tensor([[1., 2., 3., 4., 5.]])
>>> b.size()
torch.Size([1, 5])
>>> b.type()
'torch.FloatTensor'
Assume the following code:
import torch
import numpy as np
a = torch.tensor([1, 2, 3, 4, 5])
The following three calls have the exact same effect:
res_1 = a.unsqueeze(0)
res_2 = a.view(1, 5)
res_3 = a[np.newaxis,:]
res_1.shape == res_2.shape == res_3.shape == (1,5) # Returns true
Notice that for any of the resulting tensors, if you modify the data in them, you are also modifying the data in a, because they don't have a copy of the data, but reference the original data in a.
res_1[0,0] = 2
a[0] == res_1[0,0] == 2 # Returns true
The other way of doing it would be using the resize_ in-place operation:
a.shape == res_1.shape # Returns false
a.resize_((1, 5))
a.shape == res_1.shape # Returns true
Be careful of using resize_ or other in-place operation with autograd. See the following discussion: https://pytorch.org/docs/stable/notes/autograd.html#in-place-operations-with-autograd
import torch
t = torch.ones((2, 3, 4))
t.size()
>>torch.Size([2, 3, 4])
a = t.view(-1, t.size()[1] * t.size()[2])
a.size()
>>torch.Size([2, 12])
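Equivalently, torch.flatten can collapse all dimensions from dim 1 onward (a quick sketch):
b = t.flatten(1)
b.size()
>>torch.Size([2, 12])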