pytorch: How to stack 2 tensors - python

I’m trying to stack 2 tensors, A.shape=(64,16,16) and B.shape=(64,16,16), into a tensor of shape C.shape=(1,128,16,16),
and none of the functions I’ve tried produce that shape:
torch.stack => C.shape=(2,64,16,16) and
torch.cat => C.shape=(128,16,16)
Can anyone help me?

Concatenate first, then use unsqueeze to add a singleton dimension at position 0:
torch.cat([A, B]).unsqueeze(0)
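
A quick check that this produces the requested shape (a minimal sketch, with random data standing in for A and B):

import torch

A = torch.randn(64, 16, 16)
B = torch.randn(64, 16, 16)

# cat along dim 0 gives (128, 16, 16); unsqueeze(0) prepends the extra axis
C = torch.cat([A, B]).unsqueeze(0)
print(C.shape)  # torch.Size([1, 128, 16, 16])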

Related

Making a tensor with the outside dimension ragged

Suppose you have a tf.data.Dataset of the following definition:
tf.data.Dataset.from_generator(gen, output_signature=(
    tf.TensorSpec(shape=(1, self.n_channels, self.height, self.width)),
    tf.RaggedTensorSpec(shape=(1, None, self.agent_dim), ragged_rank=1)
))
Those leading 1s do you no good, since calling .batch(batch_size) on this Dataset adds another dimension. Now you have two approaches: reshape the ragged tensors as soon as they reach keras.Model.train_step, squeezing out the excess dimensions, or drop the 1 in tf.RaggedTensorSpec(shape=(1, None, self.agent_dim), ragged_rank=1).
For the first approach: can one cast a RaggedTensor to a given shape?
For the second approach: can one create a RaggedTensor spec whose first dimension is None?
Thank you for your attention in advance
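
One hedged sketch of the second approach (the generator body and all values here are illustrative, not from the original post): have the generator yield single unbatched examples with no leading 1, declare the per-example agent tensor as a dense (None, agent_dim) spec, and let tf.data.experimental.dense_to_ragged_batch add the batch dimension, turning the variable-length agents component into a RaggedTensor:

import numpy as np
import tensorflow as tf

n_channels, height, width, agent_dim, batch_size = 3, 64, 64, 5, 2  # example values

def gen():
    # stand-in generator: a variable number of agents per example
    for n_agents in (2, 4, 3):
        yield (np.zeros((n_channels, height, width), np.float32),
               np.zeros((n_agents, agent_dim), np.float32))

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(n_channels, height, width), dtype=tf.float32),
        tf.TensorSpec(shape=(None, agent_dim), dtype=tf.float32),
    ),
)
# batch: components with varying shapes become RaggedTensors of
# shape (batch, None, agent_dim); uniform components stay dense
ds = ds.apply(tf.data.experimental.dense_to_ragged_batch(batch_size))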

How to concatenate 2d tensors with 2 different dimensions

I was wondering if it is possible to concatenate two different pytorch tensors with different shapes.
One tensor is of shape torch.Size([247, 247]) and the other is of shape torch.Size([10, 183]). Is it possible to concatenate these using torch.cat() on dim=1?
For torch.cat to work, the sizes must match along every dimension except the one you concatenate on. Concatenating on dim=1 therefore requires equal row counts, but here they are 247 and 10, so the tensors cannot be joined as-is; you would first have to bring the [10, 183] tensor up to 247 rows. Note that plain broadcasting cannot do this (only a size-1 dimension can broadcast, and 10 ≠ 1), so you would need padding or repetition instead.
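
A minimal sketch of that fix (the zero-padding choice is illustrative, not the only option):

import torch
import torch.nn.functional as F

a = torch.randn(247, 247)
b = torch.randn(10, 183)

# pad b with zero rows at the bottom so dim 0 matches a; the pad order is
# (left, right, top, bottom) for a 2D tensor
b_padded = F.pad(b, (0, 0, 0, a.shape[0] - b.shape[0]))  # -> (247, 183)

c = torch.cat([a, b_padded], dim=1)
print(c.shape)  # torch.Size([247, 430])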

How to randomly choose a tensor in the tensor list in TensorFlow

I have a list of tensors, and each element has a different shape. For example, there are two tensors in my list: the shape of the first one is 3*3*3 and the second one is 4*4. I want to randomly sample a tensor from them in TensorFlow, but I don't know how to do it.
My current approach is to reshape all the tensors to 1*N and use tf.concat to create a new tensor, then use tf.gather. But this is too slow. I want to choose the tensor by index directly.
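
One hedged sketch of picking by index directly: draw a random index and use tf.switch_case, which selects a single branch without reshaping or concatenating anything (the example shapes follow the question):

import tensorflow as tf

tensors = [tf.zeros((3, 3, 3)), tf.ones((4, 4))]

# pick a uniform random branch index, then return only that tensor
idx = tf.random.uniform([], maxval=len(tensors), dtype=tf.int32)
chosen = tf.switch_case(idx, [lambda t=t: t for t in tensors])
print(chosen.shape)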

Different NumPy syntaxes for reshaping to a 3D array

I'm looking at LSTM neural networks. I saw code like this below:
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
This code is meant to change a 2D array into a 3D array, but the syntax looks off to me, or at least I don't understand it. For example, I would have expected a 3D reshape to look like this:
np.reshape(rows, columns, dimensions)
Could someone explain what the syntax is and what it is trying to do?
The function numpy.reshape gives a new shape to an array without changing its data. First of all, it needs to know what to reshape, which is the first argument (in your case, X_train).
Then it needs to know the shape of your new array. This argument must be a single tuple: for a 2D reshape you pass (W, H), for three dimensions (W, H, D), for four (W, H, D, T), and so on.
You can also call reshape as a method on the array itself, X_train.reshape((W, H, D)); since reshape is then a method of the X_train object, you do not pass the array, only the new shape.
It is also worth mentioning that the total number of elements must stay the same under the new shape. For example, your 2D X_train has X_train.shape[0] x X_train.shape[1] elements, and this value must equal W x H x D.
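
A short demonstration of both call styles (X_train here is toy data):

import numpy as np

X_train = np.arange(12).reshape(4, 3)  # example 2D array: 4 rows x 3 columns

# function style and method style give the same result
X3 = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
X3_alt = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
print(X3.shape, X3_alt.shape)  # (4, 3, 1) (4, 3, 1)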

How to realise concatenation in TensorFlow without using 'tf.concat'?

I am using TensorFlow to implement a CNN model. In this model I need to concatenate two 4-D tensors: tensor A with shape [16,128,128,3] and tensor B with shape [16,128,128,3] (16 is the batch size, 128 is the image block size, and 3 is the number of channels). The concatenation result should be a tensor C with shape [16,128,128,6].
I know that we could use the 'tf.concat' function for this; however, that function copies tensor A and tensor B, and it uses a large amount of GPU memory.
How could I achieve the concatenation in TensorFlow without using 'tf.concat'?
Thanks in advance!
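
For reference, one tf.concat-free equivalent is to stack along a new axis and then merge it into the channel axis; a sketch follows. Note this still materializes a new tensor, so it is unlikely to save memory compared with tf.concat:

import tensorflow as tf

A = tf.zeros([16, 128, 128, 3])
B = tf.zeros([16, 128, 128, 3])

# stack at axis 3 -> (16, 128, 128, 2, 3); merging the last two axes puts
# A's channels before B's, matching tf.concat([A, B], axis=3)
C = tf.reshape(tf.stack([A, B], axis=3), [16, 128, 128, 6])
print(C.shape)  # (16, 128, 128, 6)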
