TensorFlow Assign - python

I am trying to write a custom version of an RNN and would like to just store the state and last output of the cells in variables, but it is not working. My guess is that TensorFlow considers storing the values unnecessary and never executes the assignment. Here is a snippet that illustrates the problem.
For this example, I have five layers of "cells" that intentionally ignore the input and output the sum of the cell's biases and the previous output, which is initialized to zero. However, when we run this, the output of the network is always just the values of the biases in the final layer, and the value of last_output remains zero.
import tensorflow as tf
import numpy as np
def cell_function(cell_inputs, layer):
    last_output = tf.get_variable('last_output_{}'.format(layer), shape=(10, 1),
                                  initializer=tf.zeros_initializer, trainable=False)
    biases = tf.get_variable('biases_{}'.format(layer), shape=(10, 1),
                             initializer=tf.zeros_initializer)
    cell_output = last_output + biases
    last_output.assign(cell_output)
    return cell_output
def rnn_function(inputs):
    with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
        next_inputs = inputs
        for layer in range(num_layers):
            next_inputs = cell_function(next_inputs, layer)
    return next_inputs
num_layers = 5
data = np.random.uniform(0, 10, size=(1001, 10, 1))
x = tf.placeholder('float', shape=(10, 1))
y = tf.placeholder('float', shape=(10, 1))
predictions = rnn_function(x)
loss = tf.losses.mean_squared_error(predictions=predictions, labels=y)
optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss=loss)
with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
    last = tf.get_variable('last_output_4', shape=(10, 1),
                           initializer=tf.zeros_initializer, trainable=False)
    layer_biases = tf.get_variable('biases_4', shape=(10, 1),
                                   initializer=tf.zeros_initializer)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for t in range(1000):
        rnn_input = data[t]
        rnn_output = data[t+1]
        feed_dict = {x: rnn_input, y: rnn_output}
        fetches = [optimizer, predictions, loss, last, layer_biases]
        _, pred, mse, value, bias = sess.run(fetches, feed_dict=feed_dict)
        print('Predictions:')
        print(pred)
        print(last.name)
        print(value)
        print(layer_biases.name)
        print(bias)
If I change the line of cell_function just before the return to last_output = tf.assign(last_output, cell_output), return it along with cell_output, pass it out of rnn_function as well, and use that for the variable last, everything works. I think this is because we are forcing TensorFlow to compute that node in the graph.
Is there any way to make this work without passing last_output out of the cell? It would be much nicer if I didn't have to keep passing all this stuff out just to get the assignment operation executed.

Make an operation that will definitely be run depend on the assignment; in this example I'll use the cost function, but use whatever makes sense:
assign_op = tf.assign(last_output, cell_output)
with tf.control_dependencies([assign_op]):
    cost = tf.identity(cost)
Anything created inside the control_dependencies block can only execute after assign_op, so the assign operation is now required in order for cost to be computed, which should solve your problem. For any operation you request TensorFlow to compute with sess.run(some_op), TensorFlow works backwards through the dependency graph and only computes the minimum set of elements necessary to produce the requested output.
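Applied inside cell_function from the question, a minimal sketch of that idea could look like the following; returning tf.identity of the output ties the assignment into the normal data flow, so nothing extra has to be passed out:
def cell_function(cell_inputs, layer):
    last_output = tf.get_variable('last_output_{}'.format(layer), shape=(10, 1),
                                  initializer=tf.zeros_initializer, trainable=False)
    biases = tf.get_variable('biases_{}'.format(layer), shape=(10, 1),
                             initializer=tf.zeros_initializer)
    cell_output = last_output + biases
    # The returned tensor carries a control dependency on the assignment,
    # so fetching anything downstream (loss, optimizer, ...) runs the update too.
    with tf.control_dependencies([tf.assign(last_output, cell_output)]):
        return tf.identity(cell_output)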

Related

Why am I getting Nan after adding relu activation in LSTM?

I have a simple LSTM network that looks roughly like this:
lstm_activation = tf.nn.relu
cells_fw = [LSTMCell(num_units=100, activation=lstm_activation),
            LSTMCell(num_units=10, activation=lstm_activation)]
stacked_cells_fw = MultiRNNCell(cells_fw)
_, states = tf.nn.dynamic_rnn(cell=stacked_cells_fw,
                              inputs=embedding_layer,
                              sequence_length=features['length'],
                              dtype=tf.float32)
output_states = [s.h for s in states]
states = tf.concat(output_states, 1)
My question is: when I don't use an activation (activation=None) or use tanh, everything works, but when I switch to relu I keep getting "NaN loss during training". Why is that? It's 100% reproducible.
When you use the relu activation function inside the LSTM cell, it is guaranteed that all the outputs from the cell, as well as the cell state, will be non-negative (>= 0). Because the state can only grow, your gradients become extremely large and explode. For example, run the following code snippet and observe that the outputs are never < 0.
import numpy as np
import tensorflow as tf

X = np.random.rand(4, 3, 2)
lstm_cell = tf.nn.rnn_cell.LSTMCell(5, activation=tf.nn.relu)
hidden_states, _ = tf.nn.dynamic_rnn(cell=lstm_cell, inputs=X, dtype=tf.float64)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(hidden_states))
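For comparison, here is a quick hedged check (not part of the answer above) that the default tanh activation keeps the hidden state bounded in (-1, 1), which is why it does not blow up in the same way:
import numpy as np
import tensorflow as tf

X = np.random.rand(4, 3, 2)
# Default activation is tanh, so every hidden state lies in (-1, 1).
lstm_cell = tf.nn.rnn_cell.LSTMCell(5)
hidden_states, _ = tf.nn.dynamic_rnn(cell=lstm_cell, inputs=X, dtype=tf.float64)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    states = sess.run(hidden_states)
    print(states.min(), states.max())  # both within (-1, 1)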

Custom/Lambda layer with trainable weight

I am trying to use the Keras.backend ops to write a function that I will wrap as a Lambda to use in my model.
There are two tensors, X and Y. X is not trainable. Y is trainable.
The python function that is wrapped is:
import keras.backend as K
from keras.activations import softmax
def _attention(inputs):
    X, Y = inputs
    attention_weight = K.dot(X, K.expand_dims(Y))
    attention_weight = K.squeeze(attention_weight, axis=-1)
    attention_weight = softmax(attention_weight, axis=-1)
    return attention_weight
which I wanted to wrap as:
Y = K.random_normal_variable(shape=(200,), mean=0.0, scale=1.0)
attend = Lambda(_attention)
attention = attend((X,Y))
When I call:
model = Model(inputs=[input], outputs=[attention])
I receive the message
ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: Tensor("lambda_2/Softmax:0", shape=(?, ?), dtype=float32)
Do I really need to make a custom layer for the expand_dims, dot product, and squeeze method? I know I could always reshape Y from (dim,) -> (dim,1) but I am still stuck with the squeeze.
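For illustration only, one common way around this error is to let a custom layer own the trainable vector through add_weight instead of creating Y with K.random_normal_variable; the class name AttentionWithWeight and the 'random_normal' initializer below are assumptions, not something from the question:
import keras.backend as K
from keras.activations import softmax
from keras.layers import Layer

class AttentionWithWeight(Layer):
    """Computes softmax(X . Y) where Y is a trainable vector owned by the layer."""

    def build(self, input_shape):
        # input_shape is (batch, timesteps, dim); Y has shape (dim,)
        self.Y = self.add_weight(name='Y',
                                 shape=(int(input_shape[-1]),),
                                 initializer='random_normal',
                                 trainable=True)
        super(AttentionWithWeight, self).build(input_shape)

    def call(self, X):
        attention_weight = K.dot(X, K.expand_dims(self.Y))
        attention_weight = K.squeeze(attention_weight, axis=-1)
        return softmax(attention_weight, axis=-1)

    def compute_output_shape(self, input_shape):
        return input_shape[:-1]
With that, attention = AttentionWithWeight()(X) would replace both the Lambda and the separately created Y, and Model(inputs=[input], outputs=[attention]) then sees a proper layer output.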

How to change the value of a tensor which is not a tf.Variable in TensorFlow?

I know that there is a tf.assign function in TensorFlow, but that function only works on mutable tensors (tf.Variable). How can I modify the value of an ordinary tensor? For example, in the following code,
import numpy as np
import tensorflow as tf
X = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
conv1 = tf.layers.conv2d(X, filters=64, kernel_size=(3, 3), padding='same',name='conv1')
relu1 = tf.nn.relu(conv1)
conv2 = tf.layers.conv2d(relu1, filters=64, kernel_size=(3, 3), padding='same',name='conv2')
relu2 = tf.nn.relu(conv2)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
tensor = sess.graph.get_tensor_by_name(u'conv2/Conv2D:0')
feature_map = tf.reduce_mean(tensor[:,:,:,24])
image = np.random.uniform(size=(1,32,32,3))
sess.run([feature_map], feed_dict={X: image})
How can I modify the value of feature_map without affecting how it is differentiated?
More specifically, I want to change the value of feature_map while leaving its derivation process untouched.
For example, with y = a^2 and y' = 2a, I just need to change a = 1 to a = 2.
other_op = tf.gradients(feature_map, X)
A different feature_map would produce different values, but it should not destroy the graph structure of the operations.
In your example feature_map doesn't have a value as such, since it is the output of an operation. Therefore you can't change its value directly. What you can do is pass another value in as part of the feed_dict parameter of session.run.
So for example if your feature_map is followed by an operation like this:
other_op = tf.gradients(feature_map, X)
Then you can change the value passed in to that op (gradient in this case) via feed_dict like so:
session.run(other_op, feed_dict={feature_map: <new value>})
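To make that concrete, here is a tiny self-contained sketch (a toy graph, not the conv net from the question) showing that feeding an intermediate tensor overrides the value that downstream ops see:
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=())
feature = a * a              # plays the role of feature_map
downstream = feature + 1.0   # some op that consumes feature

with tf.Session() as sess:
    # Normal run: feature is computed from a.
    print(sess.run(downstream, feed_dict={a: 3.0}))        # 10.0
    # Overriding the intermediate tensor: the fed value replaces
    # whatever feature would have been computed as.
    print(sess.run(downstream, feed_dict={feature: 5.0}))  # 6.0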
That's not possible. A tensor is the output of a tf.Operation. From the documentation:
A Tensor is a symbolic handle to one of the outputs of an Operation. It does not hold the values of that operation's output, but instead provides a means of computing those values in a TensorFlow tf.Session.
So you can't change its value independently.
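Purely as an illustration of that distinction: a tf.Variable has state you can assign to, while a plain tensor is only the symbolic output of an operation:
import tensorflow as tf

v = tf.Variable(1.0)            # has mutable state, can be assigned
t = v + 1.0                     # plain tensor: symbolic output of the add op
assign_v = tf.assign(v, 5.0)    # valid for a Variable
# tf.assign(t, 5.0)             # would raise an error: t has no state to assign to

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(t))          # 2.0
    sess.run(assign_v)
    print(sess.run(t))          # 6.0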

MLP (ReLU) stops learning after a few iterations. TensorFlow

2-layer MLP (ReLU) + softmax.
After 20 iterations, TensorFlow just gives up and stops updating any weights or biases.
I initially thought that my ReLUs were dying, so I displayed histograms to make sure none of them were 0. And none of them are!
They just stop changing after a few iterations and the cross entropy is still high. ReLU, sigmoid and tanh give the same results. Tweaking the GradientDescentOptimizer learning rate from 0.01 to 0.5 also doesn't change much.
There has to be a bug somewhere. Like an actual bug in my code. I can't even overfit a small sample set!
Here are my histograms and here's my code; if anyone could check it out, that would be a major help.
We have 3000 samples with 6 values between 0 and 255
to classify into two classes: [1,0] or [0,1]
(I made sure to randomise the order)
def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    with tf.name_scope(layer_name):
        weights = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=1.0 / math.sqrt(float(6))))
        tf.summary.histogram('weights', weights)
        biases = tf.Variable(tf.constant(0.4, shape=[output_dim]))
        tf.summary.histogram('biases', biases)
        preactivate = tf.matmul(input_tensor, weights) + biases
        tf.summary.histogram('pre_activations', preactivate)
        #act=tf.nn.relu
        activations = act(preactivate, name='activation')
        tf.summary.histogram('activations', activations)
        return activations
#We have 3000 scalars with 6 values between 0 and 255 to classify in two classes
x = tf.placeholder(tf.float32, [None, 6])
y = tf.placeholder(tf.float32, [None, 2])
#After normalisation, input is between 0 and 1
normalised = tf.scalar_mul(1/255,x)
#Two layers
hidden1 = nn_layer(normalised, 6, 4, "hidden1")
hidden2 = nn_layer(hidden1, 4, 2, "hidden2")
#Finish by a softmax
softmax = tf.nn.softmax(hidden2)
#Defining loss, accuracy etc..
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=softmax))
tf.summary.scalar('cross_entropy', cross_entropy)
correct_prediction = tf.equal(tf.argmax(softmax, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('accuracy', accuracy)
#Init session and writers and misc
session = tf.Session()
train_writer = tf.summary.FileWriter('log', session.graph)
train_writer.add_graph(session.graph)
init= tf.global_variables_initializer()
session.run(init)
merged = tf.summary.merge_all()
#Train
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)
batch_x, batch_y = self.trainData
for _ in range(1000):
    session.run(train_step, {x: batch_x, y: batch_y})
    #Every 10 steps, add to the summary
    if _ % 10 == 0:
        s = session.run(merged, {x: batch_x, y: batch_y})
        train_writer.add_summary(s, _)
#Evaluate
evaluate_x, evaluate_y = self.evaluateData
print(session.run(accuracy, {x: batch_x, y: batch_y}))
print(session.run(accuracy, {x: evaluate_x, y: evaluate_y}))
Hidden Layer 1: the output isn't zero, so it's not a dying ReLU problem. But still, the weights are constant! TF didn't even try to modify them.
Same for Hidden Layer 2. TF tried tweaking them a bit and gave up pretty fast.
Cross entropy does decrease, but stays staggeringly high.
EDIT:
LOTS of mistakes in my code.
The first one is that 1/255 = 0 in Python (integer division)... I changed it to 1.0/255.0 and my code came to life.
So basically, my input was multiplied by 0 and the neural network was completely blind. It tried to get the best result it could while blind and then gave up, which totally explains its behaviour.
I was also applying softmax twice... Fixing that helped as well.
And by trying different learning rates and different numbers of epochs I finally found something that works.
Here is the final working code:
def runModel(self):

    def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
        with tf.name_scope(layer_name):
            #This is a standard weight initialisation for neural networks with ReLu.
            #I divide by math.sqrt(float(6)) because my input has 6 values
            weights = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=1.0 / math.sqrt(float(6))))
            tf.summary.histogram('weights', weights)
            #I chose this bias myself. It works. Not sure why.
            biases = tf.Variable(tf.constant(0.4, shape=[output_dim]))
            tf.summary.histogram('biases', biases)
            preactivate = tf.matmul(input_tensor, weights) + biases
            tf.summary.histogram('pre_activations', preactivate)
            #Some neurons will have ReLu as activation function
            #Some won't have any activation function
            if act == "None":
                activations = preactivate
            else:
                activations = act(preactivate, name='activation')
            tf.summary.histogram('activations', activations)
            return activations

    #We have 3000 samples with 6 values between 0 and 255 to classify in two classes
    x = tf.placeholder(tf.float32, [None, 6])
    y = tf.placeholder(tf.float32, [None, 2])

    #After normalisation, input is between 0 and 1
    #Normalising the input really helps. Nothing is doable without it
    #But my ERROR was to write 1/255. Because in Python
    #1/255 = 0 (integer division)
    #while 1.0/255.0 = 0.003921568 (float division)
    normalised = tf.scalar_mul(1.0/255.0, x)

    #Three layers total. The first one is just a matrix multiplication
    input = nn_layer(normalised, 6, 4, "input", act="None")
    #The second one has a ReLu after a matrix multiplication
    hidden1 = nn_layer(input, 4, 4, "hidden", act=tf.nn.relu)
    #The last one is also just a matrix multiplication
    #WARNING! No softmax here! Because later we call a function
    #that implicitly applies a softmax,
    #and it's bad practice to do two softmaxes one after the other
    output = nn_layer(hidden1, 4, 2, "output", act="None")

    #Tried different learning rates
    #A higher learning rate means we find a result faster,
    #but it could be a local minimum
    #A lower learning rate means we need many more epochs
    learning_rate = 0.03

    with tf.name_scope('learning_rate_'+str(learning_rate)):
        #Defining loss, accuracy etc..
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=output))
        tf.summary.scalar('cross_entropy', cross_entropy)
        correct_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        tf.summary.scalar('accuracy', accuracy)

    #Init session and writers and misc
    session = tf.Session()
    train_writer = tf.summary.FileWriter('log', session.graph)
    train_writer.add_graph(session.graph)
    init = tf.global_variables_initializer()
    session.run(init)
    merged = tf.summary.merge_all()

    #Train
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
    batch_x, batch_y = self.trainData
    for _ in range(1000):
        session.run(train_step, {x: batch_x, y: batch_y})
        #Every 10 steps, add to the summary
        if _ % 10 == 0:
            s = session.run(merged, {x: batch_x, y: batch_y})
            train_writer.add_summary(s, _)

    #Evaluate
    evaluate_x, evaluate_y = self.evaluateData
    print(session.run(accuracy, {x: batch_x, y: batch_y}))
    print(session.run(accuracy, {x: evaluate_x, y: evaluate_y}))
I'm afraid you have to reduce your learning rate. It's too high. A high learning rate usually leads you to a local minimum, not the global one.
Try 0.001, 0.0001 or even 0.00001, or make your learning rate flexible.
I did not check the code, so first try to tune the learning rate.
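As an illustrative sketch (not spelled out in the answer above), one way to make the learning rate flexible in TF 1.x is an exponential-decay schedule; this reuses the cross_entropy loss from the code above:
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(0.01, global_step,
                                           decay_steps=100, decay_rate=0.96,
                                           staircase=True)
# Passing global_step makes the optimizer increment it, which drives the decay.
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    cross_entropy, global_step=global_step)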
Just in case someone needs it in the future:
I had initialized my two-layer network's weights with np.random.randn, but the network refused to learn. Using He (for ReLU) and Xavier (for softmax) initialization totally worked.
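For reference, a hedged sketch of what that switch can look like with plain TensorFlow 1.x initializers (the layer shapes below are borrowed from the question's network purely for illustration):
import tensorflow as tf

# He initialization for the ReLU layer, Xavier (Glorot) for the output layer.
he_init = tf.variance_scaling_initializer(scale=2.0, mode='fan_in')
xavier_init = tf.glorot_uniform_initializer()

hidden_weights = tf.get_variable('hidden_w', shape=[6, 4], initializer=he_init)
output_weights = tf.get_variable('output_w', shape=[4, 2], initializer=xavier_init)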

Tensorflow: No gradients provided for any variable

I am new to tensorflow and I am building a network but failing to compute/apply the gradients for it. I get the error:
ValueError: No gradients provided for any variable: ((None, tensorflow.python.ops.variables.Variable object at 0x1025436d0), ... (None, tensorflow.python.ops.variables.Variable object at 0x10800b590))
I tried using a TensorBoard graph to see if there was something that made it impossible to trace the graph and get the gradients, but I could not see anything.
Here's part of the code:
sess = tf.Session()
X = tf.placeholder(type, [batch_size,feature_size])
W = tf.Variable(tf.random_normal([feature_size, elements_size * dictionary_size]), name="W")
target_probabilties = tf.placeholder(type, [batch_size * elements_size, dictionary_size])
lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_hidden_size)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm] * number_of_layers)
initial_state = state = stacked_lstm.zero_state(batch_size, type)
output, state = stacked_lstm(X, state)
pred = tf.matmul(output,W)
pred = tf.reshape(pred, (batch_size * elements_size, dictionary_size))
# instead of calculating this, I will calculate the difference between the target_W and the current W
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(target_probabilties, pred)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess.run(optimizer, feed_dict={X:my_input, target_probabilties:target_prob})
I will appreciate any help on figuring this out.
I always call tf.nn.softmax_cross_entropy_with_logits() so that the logits are the first argument and the labels the second. Can you try that?
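For what it's worth, with TF 1.x it is safest to pass both arguments by keyword so the ordering can't be mixed up; applied to the code above that would be:
# Keyword arguments make the labels/logits ordering explicit.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=target_probabilties,
                                                        logits=pred)
cost = tf.reduce_mean(cross_entropy)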
