I am new to TensorFlow. I was experimenting with a DQN algorithm that contains the following section:
a = tf.placeholder(tf.int32, shape = [None],name='A')
q = tf.reduce_sum(critic_q * tf.one_hot(a,n_outputs),axis=1,keepdims=True,name='Q')#Q value for chosen action
y = tf.placeholder(tf.float32, shape = [None],name='Y')
learning_rate = 1e-4
cost = tf.reduce_mean(tf.square(y-q))#mean squared error
global_step = tf.Variable(0,trainable=False,name='global_step')
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(cost,global_step=global_step)
and initialized the input y with y_batch=np.zeros(nbatch). The network hardly trained at all.
Then, I switched to defining y as
y = tf.placeholder(tf.float32, shape = [None,1],name='Y')
and initialized the input with y_batch=np.zeros(nbatch).reshape(-1,1), which worked nicely.
What was happening in the first implementation?
Every tensor has a rank (the number of dimensions) and a shape (the size of each dimension).
A placeholder with shape [None] has rank 1, and its single dimension (index 0) has unknown size.
A placeholder with shape [None, 1] has rank 2, hence it has 2 dimensions. The first dimension (index 0) has unknown size (it will be resolved at runtime) while the second dimension (index 1) has the known size of 1.
In order to be compatible, tensors must have the same rank and matching dimensions; when they don't, TensorFlow may silently broadcast them instead of raising an error, which is exactly what happened in your first implementation.
You can read a more complete discussion of tensor shapes here: https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/#tensors-the-basic
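To make the mismatch concrete: q has shape [None, 1] (because of keepdims=True) while y has shape [None], so y - q does not fail but broadcasts. A minimal NumPy sketch of this (the batch size of 3 is just an example value):
import numpy as np

y = np.zeros(3)      # shape (3,)   -- rank 1, like the placeholder with shape [None]
q = np.ones((3, 1))  # shape (3, 1) -- rank 2, like the one with shape [None, 1]
diff = y - q         # broadcasts to shape (3, 3), not (3, 1)!
print(diff.shape)    # (3, 3)
The mean squared error is then averaged over all batch*batch pairings instead of the batch's true per-sample errors, which is why the network hardly trained.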
I have a Tensor as below:
y = tf.placeholder(tf.float32, [None, 3],name="output")
I want to multiply only the last of the tensor's 3 columns.
I have tried this:
outputs_with_multiplier = y
outputs_with_multiplier[-1] = tf.multiply(outputs_with_multiplier[-1],tf.constant(2.0))
I received the following error:
outputs_with_multiplier[-1] = tf.multiply(outputs_with_multiplier[-1],tf.constant(2.0))
TypeError: 'Tensor' object does not support item assignment
I have checked the following questions for reference, but I didn't find them helpful, maybe because I didn't understand them.
1) Tensorflow - matmul of input matrix with batch data
2) Tensor multiplication in Tensorflow
Kindly help me multiply the tensor's last dimension so that it works smoothly.
For example, if my y = [[1,2,3],[2,3,4],[3,4,5],[2,5,7],[8,9,10],[0,3,2]], then I want to get outputs_with_multiplier = [[1,2,6],[2,3,8],[3,4,10],[2,5,14],[8,9,20],[0,3,4]].
Please let me know if there is any solution to this.
You can't do an item assignment, but you can create a new Tensor. The key is to multiply the first 2 columns by 1 and the 3rd column by 2:
x = tf.placeholder(tf.float32, [None, 3], name="output")
y = tf.constant([[1.0, 1.0, 2.0]])  # per-column multipliers
z = tf.multiply(x, y)               # y is broadcast across the batch dimension
sess = tf.Session()
print(sess.run(z, feed_dict={x: [[1,2,3],[2,3,4],[3,4,5],[2,5,7],[8,9,10],[0,3,2]]}))
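This prints exactly the desired result, [[1., 2., 6.], [2., 3., 8.], [3., 4., 10.], [2., 5., 14.], [8., 9., 20.], [0., 3., 4.]]: because y has shape [1, 3], tf.multiply broadcasts it against the [None, 3] input, scaling each column independently.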
My question is about the dimensionality of the variables created by an elementary dynamic or static RNN.
nlu_input = tf.placeholder(tf.float32, shape=[4,1607,1])
cell = tf.nn.rnn_cell.BasicLSTMCell(80)
outts, states = tf.nn.dynamic_rnn(cell=cell, inputs=nlu_input, dtype=tf.float32)
Then tf.global_variables() returns the following list:
[<tf.Variable 'rnn/basic_lstm_cell/kernel:0' shape=(81, 320) dtype=float32_ref>,<tf.Variable 'rnn/basic_lstm_cell/bias:0' shape=(320,) dtype=float32_ref>]
I expected tf.Variable 'rnn/basic_lstm_cell/kernel:0' to have shape=(80, 320), because 320 = 4*80 and the unit count is 80.
Why is the first dimension of the kernel incremented?
According to the TensorFlow implementation (see the BasicLSTMCell source code), the shape of the kernel is [input_depth + h_depth, 4 * num_units], where input_depth is your input vector dimension and h_depth is your hidden unit count. So your kernel shape is [1 + 80, 4 * 80] = (81, 320).
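To see where the sum comes from, here is a minimal shape sketch (not the actual cell code; the batch size of 4 is an arbitrary example) of the single matmul an LSTM cell performs: it concatenates the current input with the previous hidden state and produces all four gate pre-activations at once.
import tensorflow as tf

input_depth, num_units, batch = 1, 80, 4   # matches the question's setup
x_t = tf.zeros([batch, input_depth])       # current input step
h_prev = tf.zeros([batch, num_units])      # previous hidden state
kernel = tf.zeros([input_depth + num_units, 4 * num_units])  # (81, 320)
concat = tf.concat([x_t, h_prev], axis=1)  # (4, 81)
gates = tf.matmul(concat, kernel)          # (4, 320): all four gates in one matmul
i, j, f, o = tf.split(gates, 4, axis=1)    # four (4, 80) gate pre-activations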
I want to slice a tensor in "None" dimension.
For example,
tensor = tf.placeholder(tf.float32, shape=[None, None, 10], name="seq_holder")
sliced_tensor = tensor[:,1:,:] # it works well!
but
# Assume that the tensor's shape will be [3, 10, 10]
tensor = tf.placeholder(tf.float32, shape=[None, None, 10], name="seq_holder")
sliced_seq = tf.slice(tensor, [0,1,0],[3, 9, 10]) # it doesn't work!
I get the same message when I use another placeholder to feed the size parameter to tf.slice().
The second method gives me the error "Input size (depth of inputs) must be accessible via shape inference".
I'd like to know what is different between the two methods and which is the more TensorFlow-ish way.
[Edited]
Whole code is below
import tensorflow as tf
import numpy as np
print("Tensorflow for tests!")
vec_dim = 5
num_hidden = 10
# method 1
input_seq1 = np.random.random([3,7,vec_dim])
# method 2
input_seq2 = np.random.random([5,10,vec_dim])
shape_seq2 = [5,9,vec_dim]
# seq: [batch, seq_len, vec_dim]
seq = tf.placeholder(tf.float32, shape=[None, None, vec_dim], name="seq_holder")
# Method 1
sliced_seq = seq[:,1:,:]
# Method 2
seq_shape = tf.placeholder(tf.int32, shape=[3])
sliced_seq = tf.slice(seq,[0,0,0], seq_shape)
cell = tf.contrib.rnn.GRUCell(num_units=num_hidden)
init_state = cell.zero_state(tf.shape(seq)[0], tf.float32)
outputs, last_state = tf.nn.dynamic_rnn(cell, sliced_seq, initial_state=init_state)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# method 1
# states = sess.run([sliced_seq], feed_dict={seq:input_seq1})
# print(states[0].shape)
# method 2
states = sess.run([sliced_seq], feed_dict={seq:input_seq2, seq_shape:shape_seq2})
print(states[0].shape)
Your problem is exactly described by issue #4590.
The problem is that tf.nn.dynamic_rnn needs to know the size of the last dimension in the input (the "depth"). Unfortunately, as the issue points out, currently tf.slice cannot infer any output size if any of the slice ranges are not fully known at graph construction time; therefore, sliced_seq ends up having a shape (?, ?, ?).
In your case, the first issue is that you are using a placeholder of three elements to determine the size of the slice; this is not the best approach, since the last dimension should never change (even if you later pass vec_dim, it could cause errors). The easiest solution would be to turn seq_shape into a placeholder of size 2 (or even two separate placeholders), and then do the slicing like:
sliced_seq = seq[:seq_shape[0], :seq_shape[1], :]
For some reason, the NumPy-style indexing seems to have better shape inference capabilities, and this will preserve the size of the last dimension in sliced_seq.
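Here is a minimal runnable sketch of that fix, reusing the names from the question (the fed sizes are the question's example values):
import numpy as np
import tensorflow as tf

vec_dim = 5
seq = tf.placeholder(tf.float32, shape=[None, None, vec_dim], name="seq_holder")
seq_shape = tf.placeholder(tf.int32, shape=[2])    # only batch and seq_len vary
sliced_seq = seq[:seq_shape[0], :seq_shape[1], :]  # last dimension stays vec_dim

with tf.Session() as sess:
    out = sess.run(sliced_seq,
                   feed_dict={seq: np.random.random([5, 10, vec_dim]),
                              seq_shape: [5, 9]})
    print(out.shape)  # (5, 9, 5) -- the depth is still known at graph time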
I am trying to use an LSTM with inputs that have different time steps (different numbers of frames). The input to rnn.static_rnn should be a sequence of Tensors (not a single Tensor!), so I need to convert my input into a sequence. I tried tf.unstack and tf.split, but both of them need to know the exact size of the input, while one dimension of my inputs (the time steps) changes from input to input. The following is part of my code:
n_input = 256*256 # data input (img shape: 256*256)
n_steps = None # timesteps
batch_size = 1
# tf Graph input
x = tf.placeholder("float", [ batch_size , n_input,n_steps])
y = tf.placeholder("float", [batch_size, n_classes])
# Move n_steps to the front: (batch_size, n_input, n_steps) -> (n_steps, batch_size, n_input)
x1 = tf.transpose(x, [2, 1, 0])
x1 = tf.transpose(x1, [0, 2, 1])
x3=tf.unstack(x1,axis=0)
#or x3 = tf.split(x2, ?, 0)
# Define a lstm cell with tensorflow
lstm_cell = rnn.BasicLSTMCell(num_units=n_hidden, forget_bias=1.0)
# Get lstm cell output
outputs, states = rnn.static_rnn(lstm_cell, x3, dtype=tf.float32,sequence_length=None)
I got the following error when I used tf.unstack:
ValueError: Cannot infer num from shape (?, 1, 65536)
Also, there are some discussions here and here, but none of them were useful for me. Any help is appreciated.
As explained here, tf.unstack does not work if the number of pieces along the unstacking axis is unspecified and non-inferable.
In your code, after the transpositions, x1 has the shape [n_steps, batch_size, n_input], and its size along axis=0 (n_steps) is None, so tf.unstack cannot infer num.
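One common workaround, shown here as a hedged sketch (the hidden size of 128 is an arbitrary example value, since n_hidden is not defined in the question), is to skip the unstacking entirely and feed the time-major tensor to tf.nn.dynamic_rnn, which accepts a single tensor with an unknown time dimension:
import tensorflow as tf

n_input = 256 * 256
batch_size = 1
n_hidden = 128  # example value; the question does not define n_hidden

x = tf.placeholder(tf.float32, [batch_size, n_input, None])
x1 = tf.transpose(x, [2, 0, 1])  # -> [n_steps, batch_size, n_input] in one step
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_hidden)
outputs, states = tf.nn.dynamic_rnn(cell, x1, time_major=True, dtype=tf.float32)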
I'm trying to implement this article:
http://ronan.collobert.com/pub/matos/2008_deep_icml.pdf
Specifically, equation (3) from section 2.
In short, I want to do a pairwise distance computation over the features of each mini-batch and add this loss to the overall network loss.
I only have the Tensor of the batch (16 samples), the batch's label tensor, and the batch feature Tensor.
After looking for quite a while, I still couldn't figure out the following:
1) How do I divide the batch into positive (i.e. same label) and negative pairs? Since Tensors are not iterable, I can't figure out which sample has which label, and hence how to divide my vector or which indices of the tensor belong to each class.
2) How can I do a pairwise distance calculation for some of the indices in the batch tensor?
3) I also need to define a different distance function for negative examples.
Overall, I need to find out which indices belong to which class, do a positive pairwise distance calculation for all positive pairs, and do another calculation for all negative pairs, then sum it all up and add it to the network loss.
Any help (with one or more of the 3 issues) would be highly appreciated.
1) You should do the pair sampling before feeding the data into a session. Give every pair a boolean label, say y = 1 for a matched pair and 0 otherwise.
2) 3) Just calculate both the positive and negative terms for every pair, and let the 0-1 label y choose which one to add to the loss.
First create the placeholders; y_ is for the boolean labels.
import numpy as np
import tensorflow as tf

dim = 64
x1_ = tf.placeholder('float32', shape=(None, dim))
x2_ = tf.placeholder('float32', shape=(None, dim))
y_ = tf.placeholder('uint8', shape=[None])  # uint8 for the boolean label
Then the loss tensor can be created by this function:
def loss(x1, x2, y):
    # Euclidean distance between x1 and x2
    l2diff = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(x1, x2)),
                                   axis=1))
    # you can try different margin parameters
    margin = tf.constant(1.)
    labels = tf.to_float(y)
    match_loss = tf.square(l2diff, 'match_term')
    mismatch_loss = tf.maximum(0., tf.subtract(margin, tf.square(l2diff)), 'mismatch_term')
    # if the label is 1, only match_loss counts, otherwise mismatch_loss
    loss = tf.add(tf.multiply(labels, match_loss),
                  tf.multiply((1 - labels), mismatch_loss), 'loss_add')
    loss_mean = tf.reduce_mean(loss)
    return loss_mean
loss_ = loss(x1_, x2_, y_)
Then feed your data (randomly generated here as an example):
batchsize = 4
x1 = np.random.rand(batchsize, dim)
x2 = np.random.rand(batchsize, dim)
y = np.array([0,1,1,0])
sess = tf.Session()
l = sess.run(loss_, feed_dict={x1_: x1, x2_: x2, y_: y})
Short answer
I think the simplest way to do that is to sample the pairs offline (i.e. outside of the TensorFlow graph).
You create tf.placeholders for a batch of pairs along with their labels (positive or negative, i.e. same class or different class), and then you compute the corresponding loss in TensorFlow.
With the code
You sample the pairs offline: you sample batch_size pairs of inputs and output the batch_size left elements of the pairs, with shape [batch_size, input_size] (and likewise for the right elements). You also output the labels of the pairs (either positive or negative), with shape [batch_size, 1]:
pairs_left = np.zeros((batch_size, input_size))
pairs_right = np.zeros((batch_size, input_size))
labels = np.zeros((batch_size, 1)) # ex: [[0.], [1.], [1.], [0.]] for batch_size=4
Then you create TensorFlow placeholders corresponding to these inputs. In your code, you will feed the previous arrays to these placeholders through the feed_dict argument of sess.run():
pairs_left_node = tf.placeholder(tf.float32, [batch_size, input_size])
pairs_right_node = tf.placeholder(tf.float32, [batch_size, input_size])
labels_node = tf.placeholder(tf.float32, [batch_size, 1])
Now we can perform a feedforward pass on the inputs (let's say your model is a linear model):
W = ... # shape [input_size, feature_size]
output_left = tf.matmul(pairs_left_node, W) # shape [batch_size, feature_size]
output_right = tf.matmul(pairs_right_node, W) # shape [batch_size, feature_size]
Finally we can compute the pairwise loss.
margin = 1.0  # margin hyperparameter (must be defined before use)
# keepdims=True keeps shape [batch_size, 1] so it matches labels_node (no silent broadcast)
l2_loss_pairs = tf.reduce_sum(tf.square(output_left - output_right), 1, keepdims=True)
positive_loss = l2_loss_pairs
negative_loss = tf.nn.relu(margin - l2_loss_pairs)
final_loss = tf.multiply(labels_node, positive_loss) + tf.multiply(1. - labels_node, negative_loss)
And that's it! You can now optimize on this loss, with good offline sampling.
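For completeness, a minimal sketch of the training step on top of the tensors above (the optimizer choice and learning rate are arbitrary example values, not part of the original answer):
total_loss = tf.reduce_mean(final_loss)
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(total_loss)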