Given a 4D tensor x of shape (batch_size, batch_size, seq_len, feature_dim), I want to retrieve the matrices along the diagonal, i.e. I need a way to fetch every x[i, i, :, :] slice for i in range(batch_size), producing a tensor of shape (batch_size, seq_len, feature_dim). However, I cannot explicitly loop over range(batch_size), because batch_size may vary since I work in Keras. Does TensorFlow have functionality supporting such an operation?
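One possible approach (a sketch, not from the original thread): build the diagonal index pairs at run time and use tf.gather_nd, which works even when batch_size is only known dynamically:

import tensorflow as tf

x = tf.random.normal([4, 4, 10, 8])                   # (batch_size, batch_size, seq_len, feature_dim)
idx = tf.range(tf.shape(x)[0])                        # [0, 1, ..., batch_size - 1]
diag = tf.gather_nd(x, tf.stack([idx, idx], axis=1))  # fetches x[i, i, :, :] for every i
print(diag.shape)                                     # (4, 10, 8)

Because the indices are built from tf.shape(x) rather than a Python loop, this should also work inside a Keras Lambda layer with a variable batch size.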
Basically I am trying to multiply two tensors (one is simply of size [batch], i.e. a tensor with a single value for each batch element, and the other of size [batch, x, y, z]) such that each 3D tensor in the batch gets scaled by its corresponding value.
Currently I am attempting the following, but I've yet to find a way that works:
# enc_out1 has shape [batch, x, y, z], w_int has shape [batch]
enc_out1 = tf.math.multiply(enc_out1, w_int)  # fails: the shapes are not broadcast-compatible
Any and all help is much appreciated!
You could just reshape it. Here is an example:
import tensorflow as tf

a = tf.ones(shape=(4, 3, 3, 3), dtype=tf.float32)        # stands in for the [batch, x, y, z] tensor
b = tf.constant([[1], [2], [3], [4]], dtype=tf.float32)  # one scalar per batch element, shape (4, 1)
c = tf.math.multiply(tf.reshape(a, shape=(4, 27)), b)    # (4, 27) * (4, 1) broadcasts row-wise
c = tf.reshape(c, shape=(4, 3, 3, 3))                    # restore the [batch, x, y, z] shape
tf.print(c)
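An alternative sketch that skips the flatten/restore round trip: reshape the per-batch scalars to rank 4 and let broadcasting do the scaling (names mirror the question above; TF 2.x assumed):

import tensorflow as tf

enc_out1 = tf.ones(shape=(4, 3, 3, 3))                # stands in for the [batch, x, y, z] tensor
w_int = tf.constant([1., 2., 3., 4.])                 # one scalar per batch element, shape [batch]
scaled = enc_out1 * tf.reshape(w_int, [-1, 1, 1, 1])  # (4, 1, 1, 1) broadcasts over x, y, z
tf.print(tf.shape(scaled))                            # [4 3 3 3]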
I want to build a tensor placeholder features with dimensions, say, (10, a, a), such that features[i, :, :] can be an arbitrary square tensor. For instance, features[0, :, :] may be of dimension 5*5 while features[1, :, :] is of dimension 8*8 at the same time. How can we do this with TensorFlow?
I found ragged tensors for this purpose. But the problem is that to feed values into the ragged tensor, I have to use normal Python lists. In my case this feed list is very sparse, and there is no way to compress ragged lists.
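If building nested Python lists is the bottleneck, one possible sketch (with made-up sizes) constructs the ragged batch directly from tensors via tf.ragged.stack:

import tensorflow as tf

m0 = tf.ones([5, 5])                  # a 5*5 square feature matrix
m1 = tf.ones([8, 8])                  # an 8*8 square feature matrix
features = tf.ragged.stack([m0, m1])  # RaggedTensor of shape (2, None, None)
print(features.shape)                 # (2, None, None)
print(features.bounding_shape())      # [2 8 8], the tight bound over all rows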
In my problem, I want to convolve two tensors in my neural network model.
The shapes of the two tensors are [None, 2, 1] and [None, 3, 1] respectively; the axis with dimension None is the batch axis of the input. For each sample in the batch, I want to convolve the two tensors of shape [2, 1] and [3, 1].
However, tf.nn.conv1d in TensorFlow can only convolve the input with a fixed kernel. Is there any function that supports convolving two tensors along the batch axis, the way tf.multiply multiplies two tensors elementwise sample by sample?
The code I ran can be simplified as follows:
input_signal = Input(shape=(L, M), name='input_signal')
input_h = Input(shape=(N,), name='input_h')
faded = Lambda(lambda h: tf.nn.conv1d(input_signal, h))(input_h)  # conceptual only: conv1d does not accept a per-sample kernel
What I want is for each sample of input_signal to be convolved with the sample of input_h that has the same index. The snippet above only illustrates the idea and cannot actually run. My question is how I can modify the code so that one input tensor is convolved with another input tensor for every sample in the batch, as in the sketch below.
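One way to get a per-sample kernel (a sketch, not from the original thread; it assumes M = 1 channel and illustrative lengths) is to map a one-sample convolution over the batch with tf.map_fn:

import tensorflow as tf

L, N = 64, 3  # illustrative values for the signal length and the per-sample kernel length

input_signal = tf.keras.Input(shape=(L, 1), name='input_signal')
input_h = tf.keras.Input(shape=(N,), name='input_h')

def conv_one_sample(args):
    signal, kernel = args                      # signal: (L, 1), kernel: (N,)
    signal = tf.reshape(signal, [1, L, 1])     # tf.nn.conv1d expects (batch, width, channels)
    kernel = tf.reshape(kernel, [N, 1, 1])     # (filter_width, in_channels, out_channels)
    out = tf.nn.conv1d(signal, kernel, stride=1, padding='SAME')
    return tf.reshape(out, [L, 1])

faded = tf.keras.layers.Lambda(
    lambda x: tf.map_fn(conv_one_sample, x, fn_output_signature=tf.float32)
)([input_signal, input_h])

Note that tf.map_fn processes samples one at a time, so this trades speed for flexibility.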
According to the documentation of the kernel_size argument for the Conv1D layer (or any other convolutional layer), you cannot add multiple filters with different kernel sizes or strides in a single layer.
Also, convolutions with kernels of different sizes will produce outputs of different height and width.
The general formula for the output size, assuming a symmetric kernel, is
(X − K + 2P) / S + 1
where X is the input height/width, K is the kernel size, P is the zero-padding, and S is the stride length.
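As a quick sanity check (X = 10, P = 0, S = 1 are illustrative values, not taken from the question), kernel sizes 2 and 3 already give different output lengths:

X, P, S = 10, 0, 1
for K in (2, 3):
    print(K, (X - K + 2 * P) // S + 1)  # K=2 -> 9, K=3 -> 8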
So, assuming you keep the zero-padding and stride the same, you cannot have multiple kernels with different sizes in a single Conv1D layer.
You can, however, use the tf.keras Model API to apply Conv1D multiple times to the same input (or multiple Conv1D layers to different inputs, with the respective kernel sizes in your case) and then max-pool, crop, or zero-pad to match the dimensions of the different outputs before stacking them.
Example:
n_timesteps, n_features = 128, 8  # example values
inputs = tf.keras.Input(shape=(n_timesteps, n_features))
x1 = tf.keras.layers.Conv1D(filters=32, kernel_size=2)(inputs)  # output length: n_timesteps - 1
x2 = tf.keras.layers.Conv1D(filters=16, kernel_size=3)(inputs)  # output length: n_timesteps - 2
x2 = tf.keras.layers.ZeroPadding1D(padding=(0, 1))(x2)          # match the time dimension of x1
x3 = tf.keras.layers.Concatenate(axis=-1)([x1, x2])
You can use ZeroPadding1D, Cropping1D, or MaxPooling1D to match the dimensions.
I am doing some sentiment analysis with Tensorflow, but there is a problem I can't solve:
I have one tensor (input) shaped [?, 38] ([batch_size, max_word_length]) and one (prediction) shaped [?, 3] ([batch_size, predicted_label]).
My goal is to combine both tensors into a single tensor of shape [?, 38, 3].
This tensor is used as the input of my second stage.
Seems easy, but I can't find a way of doing it.
Can (and will) you tell me how to do this?
This is impossible. You have one tensor containing batch_size * max_word_length elements and one containing batch_size * predicted_label elements. Hence there are
batch_size * (max_word_length + predicted_label)
elements in total. Now you want to create a new tensor [batch_size, max_word_length, predicted_label] with
batch_size * max_word_length * predicted_label
elements. You don't have enough elements for this: with batch_size = 32, for example, 32 * (38 + 3) = 1312 elements are available, but 32 * 38 * 3 = 3648 would be needed.
Reading the Tensorflow MNIST tutorial, I stumbled over the line
x_image = tf.reshape(x, [-1,28,28,1])
The 28, 28 comes from the width and height, and the 1 from the number of channels. But why -1?
I guess this is related to mini-batch training, but I wonder why it is -1 and not 1 (which seems to give the same result in numpy).
(Probably related: Why does numpy's reshape give the same results for -1, -2, and 1?)
-1 means that the length in that dimension is inferred. This is done based on the constraint that the number of elements in an ndarray or Tensor when reshaped must remain the same. In the tutorial, each image is a row vector (784 elements) and there are lots of such rows (let it be n, so there are 784n elements). So, when you write
x_image = tf.reshape(x, [-1, 28, 28, 1])
TensorFlow can infer that -1 is n.
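A quick demonstration of the inference (the batch of 5 is arbitrary):

import tensorflow as tf

x = tf.zeros([5, 784])                    # 5 flattened 28x28 images
x_image = tf.reshape(x, [-1, 28, 28, 1])  # -1 is inferred as 5*784 / (28*28*1) = 5
print(x_image.shape)                      # (5, 28, 28, 1)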
In the MNIST tutorial that you are reading, the desired shape for your input layer is [batch_size, 28, 28, 1]:
x_image = tf.reshape(x, [-1,28,28,1])
Here the -1 for input x specifies that this dimension should be computed dynamically from the number of input values in x, holding the size of all other dimensions constant. This allows us to treat batch_size (the parameter given the value -1) as a hyperparameter that we can tune.
-1 indicates that the length along that axis is deduced automatically, following the rule that the total number of elements in the tensor must remain unchanged.