How to Do NumPy-Like Index Selection in TensorFlow?

Let us use the following code
#!/usr/bin/env python3
# encoding: utf-8
import numpy as np, tensorflow as tf # tf.__version__==2.7.0
sample_array=np.random.uniform(size=(2**10, 120, 20))
to_select=[5, 6, 9, 4]
sample_tensor=tf.convert_to_tensor(value=sample_array)
sample_array[:, :, to_select] # Works okay
sample_tensor[:, :, to_select] # TypeError. How to do this in tensor?
tf.convert_to_tensor(value=sample_tensor.numpy()[:, :, to_select]) # Ugly workaround
Basically, how do I get those elements as a tensor of the appropriate shape, just as in NumPy? I tried tf.slice and tf.gather, but could not figure out the proper arguments to pass.
I can convert to NumPy and back, but I am not sure whether that sacrifices efficiency or whether it will work as part of a custom training loop.

The simplest solution would be to use tf.concat, although it is probably not so efficient:
import numpy as np
import tensorflow as tf
sample_array = np.random.uniform(size=(2, 2, 20))
to_select = [5, 6, 9, 4]
sample_tensor = tf.convert_to_tensor(value = sample_array)
numpy_way = sample_array[:, :, to_select]
# Slice the tensor along the last axis for each index, then concatenate
tf_way = tf.concat([tf.expand_dims(sample_tensor[:, :, s], axis=-1) for s in to_select], axis=-1)
print(numpy_way)
print(tf_way)
[[[0.81208086 0.03873406 0.89959868 0.97896671]
  [0.57569184 0.33659472 0.32566287 0.58383079]]

 [[0.59984846 0.43405048 0.42366314 0.25505199]
  [0.16180442 0.5903358  0.21302399 0.86569914]]]
tf.Tensor(
[[[0.81208086 0.03873406 0.89959868 0.97896671]
  [0.57569184 0.33659472 0.32566287 0.58383079]]

 [[0.59984846 0.43405048 0.42366314 0.25505199]
  [0.16180442 0.5903358  0.21302399 0.86569914]]], shape=(2, 2, 4), dtype=float64)
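As a side note (my addition, not part of the original answer), the same result can presumably be obtained more compactly with tf.stack, which inserts the new axis itself:
# Hedged equivalent: stack the per-index slices along a new last axis
tf_way_stack = tf.stack([sample_tensor[:, :, s] for s in to_select], axis=-1)  # shape (2, 2, 4)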
A more complicated but more efficient solution involves tf.meshgrid and tf.gather_nd (see the related posts on those operations). Here is an example based on your question:
to_select = tf.expand_dims(tf.constant([5, 6, 9, 4]), axis=0)
to_select_shape = tf.shape(to_select)
sample_tensor_shape = tf.shape(sample_tensor)
to_select = tf.expand_dims(tf.reshape(tf.tile(to_select, [1, to_select_shape[1]]), (sample_tensor_shape[0], sample_tensor_shape[0] * to_select_shape[1])), axis=-1)
ij = tf.stack(tf.meshgrid(
        tf.range(sample_tensor_shape[0], dtype=tf.int32),
        tf.range(sample_tensor_shape[1], dtype=tf.int32),
        indexing='ij'), axis=-1)
gather_indices = tf.concat([tf.repeat(ij, repeats=to_select_shape[1], axis=1), to_select], axis=-1)
gather_indices = tf.reshape(gather_indices, (to_select_shape[1], to_select_shape[1], 3))
result = tf.gather_nd(sample_tensor, gather_indices, batch_dims=0)
result = tf.reshape(result, (result.shape[0]//2, result.shape[0]//2, result.shape[1]))
tf.Tensor(
[[[0.81208086 0.03873406 0.89959868 0.97896671]
  [0.57569184 0.33659472 0.32566287 0.58383079]]

 [[0.59984846 0.43405048 0.42366314 0.25505199]
  [0.16180442 0.5903358  0.21302399 0.86569914]]], shape=(2, 2, 4), dtype=float64)
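For completeness (my addition, not one of the original answers), tf.gather also accepts an axis argument; assuming TF 2.x, gathering along the last axis should reproduce the NumPy fancy-indexing result directly:
# Hedged sketch: gather indices 5, 6, 9, 4 along the last axis
gathered = tf.gather(sample_tensor, indices=[5, 6, 9, 4], axis=-1)  # shape (2, 2, 4)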

Related

Create boolean tensor where values in one tensor are in another

I want to find all the places where values from one tensor show up in another tensor. The output should be a Boolean tensor the same shape as the main tensor.
import numpy as np
import tensorflow as tf
values = np.random.randint(0, 32, (3, 32))
check_vals = np.random.choice(values.flatten(), 2, False)
main_tensor = tf.convert_to_tensor(values, dtype=tf.int32)
check_tensor = tf.convert_to_tensor(check_vals, dtype=tf.int32)
#In numpy I can just do this
np_res = np.isin(values, check_vals)
#and verify it worked by doing:
print (check_vals)
print (values[np_res])
#How to do it in tensorflow??
res = tf.equal(main_tensor, check_tensor)
This isn't as straightforward as it should be, but this is what I got to work.
import numpy as np
import tensorflow as tf
values = np.random.randint(0, 32, (3, 32))
check_vals = np.random.choice(values.flatten(), 2, False)
main_tensor = tf.convert_to_tensor(values, dtype=tf.int32)
check_tensor = tf.convert_to_tensor(check_vals, dtype=tf.int32)
#In numpy I can just do this
np_res = np.isin(values, check_vals)
#Loop over the check_tensor using tf.map_fn. Have to set output since
#input and output dtypes are different
checks = tf.map_fn(lambda x: tf.equal(main_tensor, x), check_tensor, fn_output_signature=tf.bool)
#Now we do a reduce_any operation to combine the results from
#each entry in check_tensor
result = tf.reduce_any(checks, axis=0)
print (np.all(result.numpy() == np_res))
If your tensors are small enough, you can do this without tf.map_fn by using broadcasting. Memory use can easily explode for large inputs, so please be mindful.
values = tf.constant(np.random.randint(0, 32, (3, 5)))
check_vals = tf.constant(np.random.choice(values.numpy().flatten(), 2, False))
res = tf.reduce_any(values[:, :, tf.newaxis]==check_vals[tf.newaxis, tf.newaxis, :], axis=-1)
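As a quick sanity check (my hedged addition, not in the original answer), the broadcasted result can be compared against np.isin:
np_res = np.isin(values.numpy(), check_vals.numpy())
print(np.all(res.numpy() == np_res))  # expected: True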

Python numpy reshape from (n,) to (n, 1 , 1) [duplicate]

I'm starting off with a numpy array of an image.
In[1]:img = cv2.imread('test.jpg')
The shape is what you might expect for a 640x480 RGB image.
In[2]:img.shape
Out[2]: (480, 640, 3)
However, this image that I have is a frame of a video, which is 100 frames long. Ideally, I would like to have a single array that contains all the data from this video such that img.shape returns (480, 640, 3, 100).
What is the best way to add the next frame -- that is, the next set of image data, another 480 x 640 x 3 array -- to my initial array?
A dimension can be added to a numpy array as follows:
image = image[..., np.newaxis]
Alternatively to
image = image[..., np.newaxis]
in @dbliss' answer, you can also use numpy.expand_dims like
image = np.expand_dims(image, <your desired dimension>)
For example (taken from the link above):
x = np.array([1, 2])
print(x.shape) # prints (2,)
Then
y = np.expand_dims(x, axis=0)
yields
array([[1, 2]])
and
y.shape
gives
(1, 2)
You could just create an array of the correct size up-front and fill it:
frames = np.empty((480, 640, 3, 100))
for k in range(nframes):
    frames[:, :, :, k] = cv2.imread('frame_{}.jpg'.format(k))
if the frames were individual .jpg files named in some particular way (in the example, frame_0.jpg, frame_1.jpg, etc.).
Just a note: you might consider using a (nframes, 480, 640, 3)-shaped array instead, as sketched below.
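A minimal sketch of that frames-first layout (my addition; the frame_<k>.jpg filenames are hypothetical):
import numpy as np
import cv2

nframes = 100
frames = np.empty((nframes, 480, 640, 3), dtype=np.uint8)
for k in range(nframes):
    # hypothetical filenames frame_0.jpg, frame_1.jpg, ...
    frames[k] = cv2.imread('frame_{}.jpg'.format(k))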
Pythonic
X = X[:, :, None]
which is equivalent to
X = X[:, :, numpy.newaxis] and
X = numpy.expand_dims(X, axis=-1)
But as you are explicitly asking about stacking images,
I would recommend going for stacking the list of images np.stack([X1, X2, X3]) that you may have collected in a loop.
If you do not like the order of the dimensions, you can rearrange them with np.transpose(), as in the sketch below.
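A brief hedged sketch of that combination (img1..img3 are hypothetical frame arrays):
import numpy as np

img1 = img2 = img3 = np.zeros((480, 640, 3))  # hypothetical frames
movie = np.stack([img1, img2, img3])          # shape (3, 480, 640, 3)
movie = np.transpose(movie, (1, 2, 3, 0))     # shape (480, 640, 3, 3)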
You can use np.concatenate() with the axis parameter to specify the dimension along which to concatenate. If the arrays being concatenated do not have this dimension, you can use np.newaxis to indicate where the new dimension should be added:
import numpy as np
movie = np.concatenate((img1[..., np.newaxis], img2[..., np.newaxis]), axis=3)
If you are reading from many files:
import glob
movie = np.concatenate([cv2.imread(p)[..., np.newaxis] for p in glob.glob('*.jpg')], axis=3)
Consider Approach 1 (the reshape method) and Approach 2 (the np.newaxis method), which produce the same outcome:
# Let's suppose we have:
x = [1,2,3,4,5,6,7,8,9]
print('I. x',x)
xNpArr = np.array(x)
print('II. xNpArr',xNpArr)
print('III. xNpArr', xNpArr.shape)
xNpArr_3x3 = xNpArr.reshape((3,3))
print('IV. xNpArr_3x3.shape', xNpArr_3x3.shape)
print('V. xNpArr_3x3', xNpArr_3x3)
#Approach 1 with reshape method
xNpArrRs_1x3x3x1 = xNpArr_3x3.reshape((1,3,3,1))
print('VI. xNpArrRs_1x3x3x1.shape', xNpArrRs_1x3x3x1.shape)
print('VII. xNpArrRs_1x3x3x1', xNpArrRs_1x3x3x1)
#Approach 2 with np.newaxis method
xNpArrNa_1x3x3x1 = xNpArr_3x3[np.newaxis, ..., np.newaxis]
print('VIII. xNpArrNa_1x3x3x1.shape', xNpArrNa_1x3x3x1.shape)
print('IX. xNpArrNa_1x3x3x1', xNpArrNa_1x3x3x1)
We have as outcome:
I. x [1, 2, 3, 4, 5, 6, 7, 8, 9]
II. xNpArr [1 2 3 4 5 6 7 8 9]
III. xNpArr (9,)
IV. xNpArr_3x3.shape (3, 3)
V. xNpArr_3x3 [[1 2 3]
[4 5 6]
[7 8 9]]
VI. xNpArrRs_1x3x3x1.shape (1, 3, 3, 1)
VII. xNpArrRs_1x3x3x1 [[[[1]
[2]
[3]]
[[4]
[5]
[6]]
[[7]
[8]
[9]]]]
VIII. xNpArrNa_1x3x3x1.shape (1, 3, 3, 1)
IX. xNpArrNa_1x3x3x1 [[[[1]
[2]
[3]]
[[4]
[5]
[6]]
[[7]
[8]
[9]]]]
a = np.expand_dims(a, axis=-1)
or
a = a[:, np.newaxis]
or
a = a.reshape(a.shape + (1,))
There is no structure in numpy that allows you to append more data later.
Instead, numpy puts all of your data into a contiguous chunk of numbers (basically; a C array), and any resize requires allocating a new chunk of memory to hold it. Numpy's speed comes from being able to keep all the data in a numpy array in the same chunk of memory; e.g. mathematical operations can be parallelized for speed and you get less cache misses.
So you will have two kinds of solutions:
Pre-allocate the memory for the numpy array and fill in the values, like in JoshAdel's answer, or
Keep your data in a normal python list until it's actually needed to put them all together (see below)
images = []
for i in range(100):
    new_image = ...  # pull image from somewhere
    images.append(new_image)
images = np.stack(images, axis=3)
Note that there is no need to expand the dimensions of the individual image arrays first, nor do you need to know how many images you expect ahead of time.
You can use stack with the axis parameter:
img.shape # h,w,3
imgs = np.stack([img1,img2,img3,img4], axis=-1) # -1 = new axis is last
imgs.shape # h,w,3,nimages
For example: to convert grayscale to color:
>>> d = np.zeros((5,4), dtype=int) # 5x4
>>> d[2,3] = 1
>>> d3 = np.stack([d,d,d], axis=-1) # 5x4x3, -1 = new axis is last
>>> d3.shape
(5, 4, 3)
>>> d3[2,3]
array([1, 1, 1])
I followed this approach:
import numpy as np
import cv2
ls = []
for image_path in image_paths:
    ls.append(cv2.imread(image_path))
img_np = np.array(ls)              # shape (100, 480, 640, 3)
img_np = np.rollaxis(img_np, 0, 4) # shape (480, 640, 3, 100)
This worked for me:
image = image[..., None]
This will help you add an axis anywhere you want:
import numpy as np
signal = np.array([[0.3394572666491664, 0.3089068053925853, 0.3516359279582483], [0.33932706934615525, 0.3094755563319447, 0.3511973743219001], [0.3394407172182317, 0.30889042266755573, 0.35166886011421256], [0.3394407172182317, 0.30889042266755573, 0.35166886011421256]])
print(signal.shape)
#(4,3)
print(signal[..., np.newaxis].shape)   # or signal[..., None]
#(4, 3, 1)
print(signal[:, np.newaxis, :].shape)  # or signal[:, None, :]
#(4, 1, 3)
There are three ways to add new dimensions to an ndarray.
First: using np.newaxis (similar to @dbliss' answer)
np.newaxis is just an alias for None, provided to make the intent easier to understand. If you replace np.newaxis with None, it works the same way, but it is better to use np.newaxis to be more explicit.
import numpy as np
my_arr = np.array([2, 3])
new_arr = my_arr[..., np.newaxis]
print("old shape", my_arr.shape)
print("new shape", new_arr.shape)
>>> old shape (2,)
>>> new shape (2, 1)
Second: using np.expand_dims()
Specify the original ndarray as the first argument and the position at which to add the dimension as the second argument, axis.
my_arr = np.array([2, 3])
new_arr = np.expand_dims(my_arr, -1)
print("old shape", my_arr.shape)
print("new shape", new_arr.shape)
>>> old shape (2,)
>>> new shape (2, 1)
Third: using reshape()
my_arr = np.array([2, 3])
new_arr = my_arr.reshape(*my_arr.shape, 1)
print("old shape", my_arr.shape)
print("new shape", new_arr.shape)
>>> old shape (2,)
>>> new shape (2, 1)

how to average a tensor axis with specified mask in tensorflow

For example:
I have an input tensor (input), shaped (?, 10), dtype=float32; the first dimension is the batch size.
And a mask tensor (mask), shaped (?, 10). mask[sample_number] is like [True, True, False, ...], marking the masked positions.
And a label tensor (avg_label), shaped (?,), holding the correct mean value of the masked positions for each sample.
I want to train the model, but can't find a good way to compute the output.
The tf.reduce_... functions (e.g. tf.reduce_mean) don't seem to support an argument for masking.
I tried tf.boolean_mask, but it flattens the output into a single dimension, throwing away the sample_number dimension, so it cannot differentiate among the samples.
I considered tf.where, like:
masked=tf.where(mask,input,tf.zeros(tf.shape(input)))
avg_out=tf.reduce_mean(masked,axis=1)
loss=tf.pow(avg_out-avg_label,2)
But the code above certainly does not work, because setting the False positions to 0 changes the average. If I use np.nan, the result is always nan. I wonder whether there is a value that represents absence in reduce operations.
How can I do this?
You can use tf.ragged.boolean_mask to keep the dimensionality.
tf.reduce_mean(tf.ragged.boolean_mask(x, mask=mask), axis=1)
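A small worked sketch of this (my addition, assuming TF 2.x eager mode, where tf.reduce_mean dispatches on ragged tensors):
import tensorflow as tf

x = tf.constant([[1., 2., 3.], [4., 5., 6.]])
mask = tf.constant([[True, True, False], [False, True, True]])
ragged = tf.ragged.boolean_mask(x, mask)  # [[1., 2.], [5., 6.]]
means = tf.reduce_mean(ragged, axis=1)    # [1.5, 5.5]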
You can use tf.boolean_mask.
In [17]: tensor = tf.constant([[1, 2], [3, 4], [5, 6]])
In [18]: mask = np.array([[True, False], [False, True], [True, False]])
In [19]: masked = tf.boolean_mask(tensor, mask)
In [20]: masked.eval()
Out[20]: array([1, 4, 5], dtype=int32)
In [21]: tf.reduce_mean(masked).eval()
Out[21]: 3
For the False masked values you can use tf.logical_not to toggle the mask.
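For instance (a minimal sketch, not from the original answer):
not_masked = tf.boolean_mask(tensor, tf.logical_not(mask))  # elements where mask is False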
You can write your own mean function by just counting the non-vanishing entries in your mask
Why not just
import tensorflow as tf
import numpy as np

B, H, W, C = 5, 224, 224, 3
data = np.random.randn(B, H, W, C).astype(np.float32)
mask = np.random.randint(2, size=(B, H, W, C)).astype(np.float32)

expected = (data * mask).sum(axis=(1, 2, 3), keepdims=True)
expected = expected / mask.sum(axis=(1, 2, 3), keepdims=True)

data_op = tf.convert_to_tensor(data)
mask_op = tf.convert_to_tensor(mask)
actual_op = tf.reduce_sum(tf.multiply(data_op, mask_op), axis=[1, 2, 3], keepdims=True) / tf.reduce_sum(mask_op, axis=[1, 2, 3], keepdims=True)

with tf.Session() as sess:
    actual = sess.run(actual_op)

np.testing.assert_allclose(actual, expected)
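Applied to the original (?, 10) setup, a hedged TF 2.x-style sketch of the same counting trick might look like this (variable names are mine):
import tensorflow as tf

inputs = tf.random.uniform((2, 10))                           # hypothetical batch of 2 samples
mask = tf.cast(tf.random.uniform((2, 10)) > 0.5, tf.float32)  # 1.0 where the boolean mask is True
# assumes every sample has at least one masked-in position
avg_out = tf.reduce_sum(inputs * mask, axis=1) / tf.reduce_sum(mask, axis=1)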

PyTorch reshape tensor dimension

I want to reshape a vector of shape (5,) into a matrix of shape (1, 5).
With numpy, I can do:
>>> import numpy as np
>>> a = np.array([1, 2, 3, 4, 5])
>>> a.shape
(5,)
>>> a = np.reshape(a, (1, 5))
>>> a.shape
(1, 5)
>>> a
array([[1, 2, 3, 4, 5]])
But how do I do this with PyTorch?
Use torch.unsqueeze(input, dim, out=None):
>>> import torch
>>> a = torch.Tensor([1, 2, 3, 4, 5])
>>> a
1
2
3
4
5
[torch.FloatTensor of size 5]
>>> a = a.unsqueeze(0)
>>> a
1 2 3 4 5
[torch.FloatTensor of size 1x5]
you might use
a.view(1,5)
Out:
1 2 3 4 5
[torch.FloatTensor of size 1x5]
There are multiple ways of reshaping a PyTorch tensor. You can apply these methods on a tensor of any dimensionality.
Let's start with a 2-dimensional 2 x 3 tensor:
x = torch.Tensor(2, 3)
print(x.shape)
# torch.Size([2, 3])
To add some robustness to this problem, let's reshape the 2 x 3 tensor by adding a new dimension at the front and another dimension in the middle, producing a 1 x 2 x 1 x 3 tensor.
Approach 1: add dimension with None
Use NumPy-style insertion of None (aka np.newaxis) to add dimensions anywhere you want. See here.
print(x.shape)
# torch.Size([2, 3])
y = x[None, :, None, :] # Add new dimensions at positions 0 and 2.
print(y.shape)
# torch.Size([1, 2, 1, 3])
Approach 2: unsqueeze
Use torch.Tensor.unsqueeze(i) (a.k.a. torch.unsqueeze(tensor, i) or the in-place version unsqueeze_()) to add a new dimension at the i'th dimension. The returned tensor shares the same data as the original tensor. In this example, we can use unsqueeze() twice to add the two new dimensions.
print(x.shape)
# torch.Size([2, 3])
# Use unsqueeze twice.
y = x.unsqueeze(0) # Add new dimension at position 0
print(y.shape)
# torch.Size([1, 2, 3])
y = y.unsqueeze(2) # Add new dimension at position 2
print(y.shape)
# torch.Size([1, 2, 1, 3])
In practice with PyTorch, adding an extra dimension for the batch may be important, so you may often see unsqueeze(0).
Approach 3: view
Use torch.Tensor.view(*shape) to specify all the dimensions. The returned tensor shares the same data as the original tensor.
print(x.shape)
# torch.Size([2, 3])
y = x.view(1, 2, 1, 3)
print(y.shape)
# torch.Size([1, 2, 1, 3])
Approach 4: reshape
Use torch.Tensor.reshape(*shape) (aka torch.reshape(tensor, shapetuple)) to specify all the dimensions. If the original data is contiguous and has the same stride, the returned tensor will be a view of input (sharing the same data), otherwise it will be a copy. This function is similar to the NumPy reshape() function in that it lets you define all the dimensions and can return either a view or a copy.
print(x.shape)
# torch.Size([2, 3])
y = x.reshape(1, 2, 1, 3)
print(y.shape)
# torch.Size([1, 2, 1, 3])
Furthermore, from the O'Reilly 2019 book Programming PyTorch for Deep Learning, the author writes:
Now you might wonder what the difference is between view() and reshape(). The answer is that view() operates as a view on the original tensor, so if the underlying data is changed, the view will change too (and vice versa). However, view() can throw errors if the required view is not contiguous; that is, it doesn’t share the same block of memory it would occupy if a new tensor of the required shape was created from scratch. If this happens, you have to call tensor.contiguous() before you can use view(). However, reshape() does all that behind the scenes, so in general, I recommend using reshape() rather than view().
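To make that concrete, a small sketch (my addition, not from the book): a transposed tensor is typically non-contiguous, so view() refuses it while reshape() copies as needed:
t = torch.randn(2, 3).t()    # transpose -> non-contiguous
# t.view(6) would raise a RuntimeError here
y = t.reshape(6)             # works: reshape copies when a view is not possible
z = t.contiguous().view(6)   # also works after making the data contiguous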
Approach 5: resize_
Use the in-place function torch.Tensor.resize_(*sizes) to modify the original tensor. The documentation states:
WARNING. This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_().
print(x.shape)
# torch.Size([2, 3])
x.resize_(1, 2, 1, 3)
print(x.shape)
# torch.Size([1, 2, 1, 3])
My observations
If you want to add just one dimension (e.g. to add a 0th dimension for the batch), then use unsqueeze(0). If you want to totally change the dimensionality, use reshape().
See also:
What's the difference between reshape and view in pytorch?
What is the difference between view() and unsqueeze()?
In PyTorch 0.4, is it recommended to use reshape than view when it is possible?
For in-place modification of the shape of the tensor, you should use
tensor.resize_():
In [23]: a = torch.Tensor([1, 2, 3, 4, 5])
In [24]: a.shape
Out[24]: torch.Size([5])
# tensor.resize_((`new_shape`))
In [25]: a.resize_((1,5))
Out[25]:
1 2 3 4 5
[torch.FloatTensor of size 1x5]
In [26]: a.shape
Out[26]: torch.Size([1, 5])
In PyTorch, if there's an underscore at the end of an operation (like tensor.resize_()) then that operation does in-place modification to the original tensor.
Also, you can simply use np.newaxis in a torch Tensor to increase the dimension. Here is an example:
In [34]: list_ = range(5)
In [35]: a = torch.Tensor(list_)
In [36]: a.shape
Out[36]: torch.Size([5])
In [37]: new_a = a[np.newaxis, :]
In [38]: new_a.shape
Out[38]: torch.Size([1, 5])
Or you can use this; the '-1' means you don't have to specify the number of elements:
In [3]: a.view(1,-1)
Out[3]:
1 2 3 4 5
[torch.FloatTensor of size 1x5]
This question has been thoroughly answered already, but I want to add for the less experienced python developers that you might find the * operator helpful in conjunction with view().
For example if you have a particular tensor size that you want a different tensor of data to conform to, you might try:
import torch
import numpy as np
from torch.autograd import Variable

img = Variable(torch.randn(20, 30, 3))    # tensor with goal shape
flat_size = 20 * 30 * 3
X = Variable(torch.randn(50, flat_size))  # data tensor
X = X.view(-1, *img.size())               # sweet maneuver
print(X.size())                           # size is (50, 20, 30, 3)
This works with a numpy shape too:
img = np.random.randn(20, 30, 3)
flat_size = 20 * 30 * 3
X = Variable(torch.randn(50, flat_size))
X = X.view(-1, *img.shape)
print(X.size())                           # size is (50, 20, 30, 3)
torch.reshape() is made to mirror the NumPy reshape method.
It came after view() and torch.resize_(), and it lives in the torch package itself (it appears in dir(torch)), while view() and resize_() are tensor methods only.
import torch
x=torch.arange(24)
print(x, x.shape)
x_view = x.view(1,2,3,4) # works on is_contiguous() tensor
print(x_view.shape)
x_reshaped = x.reshape(1,2,3,4) # works on any tensor
print(x_reshaped.shape)
x_reshaped2 = torch.reshape(x_reshaped, (-1,)) # part of torch package, while view() and resize_() are not
print(x_reshaped2.shape)
Out:
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23]) torch.Size([24])
torch.Size([1, 2, 3, 4])
torch.Size([1, 2, 3, 4])
torch.Size([24])
But did you know that it can also work as a replacement for squeeze() and unsqueeze()?
x = torch.tensor([1, 2, 3, 4])
print(x.shape)
x1 = torch.unsqueeze(x, 0)
print(x1.shape)
x2 = torch.unsqueeze(x1, 1)
print(x2.shape)
x3=x.reshape(1,1,4)
print(x3.shape)
x4=x.reshape(4)
print(x4.shape)
x5=x3.squeeze()
print(x5.shape)
Out:
torch.Size([4])
torch.Size([1, 4])
torch.Size([1, 1, 4])
torch.Size([1, 1, 4])
torch.Size([4])
torch.Size([4])
As far as I know, the best way to reshape tensors is to use einops. It solves various reshape problems by providing a simple and elegant function. In your situation, the code could be written as
from einops import rearrange
ans = rearrange(tensor,'h -> 1 h')
I highly recommend you try it.
BTW, you can use it with pytorch/tensorflow/numpy and many other libraries.
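For instance (a small hedged illustration of the cross-library point), the same pattern works on a NumPy array:
import numpy as np
from einops import rearrange

a = np.array([1, 2, 3, 4, 5])
print(rearrange(a, 'h -> 1 h').shape)  # (1, 5)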
import torch
>>>a = torch.Tensor([1,2,3,4,5])
>>>a.size()
torch.Size([5])
#use view to reshape
>>>b = a.view(1,a.shape[0])
>>>b
tensor([[1., 2., 3., 4., 5.]])
>>>b.size()
torch.Size([1, 5])
>>>b.type()
'torch.FloatTensor'
Assume the following code:
import torch
import numpy as np
a = torch.tensor([1, 2, 3, 4, 5])
The following three calls have the exact same effect:
res_1 = a.unsqueeze(0)
res_2 = a.view(1, 5)
res_3 = a[np.newaxis,:]
res_1.shape == res_2.shape == res_3.shape == (1,5) # Returns true
Notice that for any of the resulting tensors, if you modify the data in them, you are also modifying the data in a, because they don't have a copy of the data, but reference the original data in a.
res_1[0,0] = 2
a[0] == res_1[0,0] == 2 # Returns true
The other way of doing it would be using the resize_ in-place operation:
a.shape == res_1.shape # Returns false
a.resize_((1, 5))
a.shape == res_1.shape # Returns true
Be careful of using resize_ or other in-place operation with autograd. See the following discussion: https://pytorch.org/docs/stable/notes/autograd.html#in-place-operations-with-autograd
import torch
t = torch.ones((2, 3, 4))
t.size()
>>torch.Size([2, 3, 4])
a = t.view(-1,t.size()[1]*t.size()[2])
a.size()
>>torch.Size([2, 12])
