Is it possible to use tf.TensorArray's for reading and writing inside of the body of a tf.while_loop, but not pass them all through loop_vars?
I want to use tf.while_loop as part of a graph for WaveNet sound generation, a sequential generation mechanism that generates the next amplitude value from a window of previously generated ones. I only want to use this during inference, so there is no need for gradients and I would call it with back_prop=False.
Besides this, the loop body must read and write intermediate values that need to be remembered across time steps.
It looks like tf.TensorArray is the only option for reading and writing values in this way, but I notice that tf.TensorArray.write() returns a new tf.TensorArray that is meant to be returned by the body and used in the loop_vars argument. Is this the best way to do this?
If I don't have a need for gradients, is there a simpler way to preserve state over the loop?
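Concretely, the pattern I'm describing looks roughly like this (a minimal toy sketch with made-up sizes, not my actual WaveNet code):

import tensorflow as tf

# Toy version of the pattern: the TensorArray is threaded through loop_vars,
# and the body returns the new TensorArray produced by write().
n = 16
ta = tf.TensorArray(dtype=tf.float32, size=n)

def body(i, ta):
    sample = tf.cast(i, tf.float32) * 0.1   # stand-in for the generated amplitude
    ta = ta.write(i, sample)                # write() returns a new TensorArray
    return i + 1, ta

_, ta_final = tf.while_loop(lambda i, ta: i < n, body, [tf.constant(0), ta],
                            back_prop=False)
samples = ta_final.stack()

with tf.Session() as sess:
    print(sess.run(samples))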
You can use tf.assign inside the tf.while_loop body to write values into a global tf.Variable.
EDIT: You cannot use tf.assign to assign to sliced indices of a tf.Tensor; it is, however, allowed for a tf.Variable. The arguments passed to the while body are of type tf.Tensor, not tf.Variable, so assigning to them directly won't work.
Here is some sample code.
import tensorflow as tf

x = tf.Variable([0, 0])
assign_op = tf.assign(x[1], 42)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(x))  # [0, 0]
    sess.run(assign_op)
    print(sess.run(x))  # [0, 42]
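Inside a tf.while_loop the same idea works if the variable is created outside the loop and the write is forced with a control dependency. A minimal sketch (the step count and the update rule are made up; back_prop=False since this is inference-only):

import tensorflow as tf

# Sketch: per-step state lives in a tf.Variable instead of being threaded
# through loop_vars. The step count and update rule here are placeholders.
steps = 16
state = tf.Variable(tf.zeros([steps]), trainable=False)

def body(i):
    new_value = tf.cast(i, tf.float32) * 0.1        # stand-in for the real update
    write = tf.assign(state[i], new_value)          # sliced assign on the Variable
    with tf.control_dependencies([write]):          # force the write each iteration
        return tf.identity(i + 1)

i_final = tf.while_loop(lambda i: i < steps, body, [tf.constant(0)],
                        back_prop=False, parallel_iterations=1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(i_final)
    print(sess.run(state))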
When storing and restoring variables in TensorFlow using tf.train.Saver, I get random values when the initializer is set to a tensor. Why?
I have two ways of creating a variable: one initializes with an initializer function, the other with a tensor:
# Way 1, works when restoring
T = tf.get_variable('T', shape=[3], initializer = tf.zeros_initializer)
# Way 2-a, doesn't work when restoring
T = tf.get_variable('T', initializer = [1,2,3]) # Doesn't restore
# Way 2-b
T = tf.Variable(name='T', initial_value=[1,2,3]) # Doesn't restore
# Way 2-c
T = tf.Variable(name='T', initial_value=tf.constant([1,2,3])) # Doesn't restore
When I try to restore the variable using the second way, the values seem randomly generated.
I created a Jupyter Notebook to see the problem in action (you don't need to do anything but press enter): https://colab.research.google.com/drive/1QoJ_YFYZQe3GSAi3Lr7wOnsnt2jCjPR7 .
Am I missing something? Why does this happen? Is it a bug?
It all seems counterintuitive to me.
Try using tf.constant_initializer:
T = tf.get_variable('T', shape=[3],
                    initializer=tf.constant_initializer([1, 2, 3]))
From the docs on tf.get_variable:
The initializer can also be a Tensor, in which case the variable is initialized to this value and shape.
So you may also be able to get by with setting it to an explicit Tensor, in which case you do not need to pass a shape. Something like:
T = tf.get_variable('T', initializer=tf.constant([1,2,3]))
I think the issue arises from passing a list instead of a tensor, although TF typically converts array-likes to Tensors automatically, so this behaviour seems a bit odd to me.
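If you want to convince yourself, a save/restore round trip with the tensor initializer looks roughly like this (hypothetical checkpoint path, float values for simplicity):

import tensorflow as tf

ckpt = '/tmp/T.ckpt'   # hypothetical checkpoint path

# Save
with tf.Graph().as_default(), tf.Session() as sess:
    T = tf.get_variable('T', initializer=tf.constant([1.0, 2.0, 3.0]))
    saver = tf.train.Saver()
    sess.run(tf.global_variables_initializer())
    saver.save(sess, ckpt)

# Restore into a fresh graph; the initializer is never run, the values
# come straight from the checkpoint.
with tf.Graph().as_default(), tf.Session() as sess:
    T = tf.get_variable('T', shape=[3])
    saver = tf.train.Saver()
    saver.restore(sess, ckpt)
    print(sess.run(T))   # [1. 2. 3.]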
So, I'm using a bunch of functions from OpenAI baselines for Reinforcement Learning. In those functions, policy nets are initialised using statements like:
with tf.variable_scope('deepq', reuse=True):
...
return output
The problem is that the pointer to the output of those networks gets returned while still inside the scope, which means that when accessing those functions from another .py file I am still inside those scopes.
Basically I want to run a first function train_policy(output_dir) that trains the net and dumps the checkpoint to disk using tf.Saver().
Next, I run a function run_policy(output_dir) that reinitializes the same tf Graph and loads its pretrained values from the checkpoint dir.
Right now, when I try this, I get a ValueError:
"Variable deepq/... already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?" because at the point of running the second function, I'm still in the scope defined by the first.. I checked the code from OpenAI baselines (very nested code, hard to see everything that's going on), and reuse is already set to True.
So I tried doing something like:
tf.get_default_session().close() followed by:
tf.reset_default_graph()
after the first function call. (I don't need the session to remain active since I'm dumping everything to disk)
But this gives me errors because I'm still inside a nested graph scope and so I can't reset the default graph... (see eg here)
Alternatively I tried things like:
tf.get_default_graph().as_graph_def().__exit__()
or
tf.name_scope('deepq').__exit__()
but __exit__() needs a whole bunch of arguments I don't know how to get (and I can't find good documentation on how to use it).
My current solution is to run these functions in separate subprocesses in Python (and let the garbage collector do all the work), but this doesn't feel like a satisfactory solution.
Any ideas on how to deal with this? Ideally I'd need something like: tf.clear_all_graphs_and_sessions()
Alright, one solution is indeed to reset the default graph:
I simply wrap every function call in a new default graph object like this:
with tf.Graph().as_default():
    train_policy(output_dir)

with tf.Graph().as_default():
    run_policy(output_dir)

...
This way the default graph simply gets reinitialised empty and you can load whatever is in the checkpoint file. (Inside every function I also close the default session before returning).
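For clarity, a rough sketch of what each wrapped call ends up looking like (the deepq/v variable and the checkpoint prefix are placeholders, not the actual baselines code):

import tensorflow as tf

def train_policy(output_dir):
    # Runs inside the `with tf.Graph().as_default():` block shown above,
    # so everything built here lands in that fresh graph.
    with tf.variable_scope('deepq'):
        v = tf.get_variable('v', shape=[])       # stand-in for the policy net
    saver = tf.train.Saver()
    with tf.Session() as sess:                   # session is closed on exit
        sess.run(tf.global_variables_initializer())
        # ... training loop would go here ...
        saver.save(sess, output_dir)             # output_dir used as checkpoint prefix

def run_policy(output_dir):
    with tf.variable_scope('deepq'):
        v = tf.get_variable('v', shape=[])       # same names as at training time
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, output_dir)
        # ... run the trained policy here ...

with tf.Graph().as_default():
    train_policy('/tmp/policy')                  # hypothetical output path
with tf.Graph().as_default():
    run_policy('/tmp/policy')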
You can try to do your work in another default graph:
with tf.get_default_graph().as_default():
    with tf.variable_scope('deepq', reuse=False):
        v = tf.get_variable('v', shape=[])
        print(v.name, v.graph)

with tf.Graph().as_default():
    v = tf.get_variable('v', shape=[])
    print(v.name, v.graph)
Output:
deepq/v:0 <tensorflow.python.framework.ops.Graph object at 0x7f61adaa6390>
v:0 <tensorflow.python.framework.ops.Graph object at 0x7f61460abbd0>
I'm trying to write my own multi-gpu on one node tensorflow example.
I read the code here: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py
The core of my code is:
vars = tf.trainable_variables()
grads = tf.gradients(cost, vars)
tower_grads.append(grads)
But when the program reaches the second GPU, tf.trainable_variables() returns both the variables from the first round of the loop and those from the second round.
By the way, I think the point of this task is that the two GPUs use the same variables but compute different gradients. Is that right?
The problem is that the variable is named aaa in the first round but aaa_1 in the second round, even if I set reuse = True.
The solution is to use tf.get_variable instead of tf.Variable.
I learned it from https://github.com/normanheckscher/mnist-multi-gpu/blob/master/mnist_multi_gpu_batching_train.py#L356
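To illustrate, here is a minimal sketch of the shared-variable tower pattern (made-up shapes and scope name, not the actual CIFAR-10 code): on the second tower, tf.get_variable returns the existing 'model/w' instead of creating 'model/w_1'.

import tensorflow as tf

def tower(x):
    w = tf.get_variable('w', shape=[3, 3])       # same 'w' on every tower
    return tf.reduce_sum(tf.matmul(x, w))

inputs = [tf.placeholder(tf.float32, [None, 3]) for _ in range(2)]
tower_grads = []

with tf.variable_scope('model'):
    for i in range(2):
        with tf.device('/gpu:%d' % i):
            cost = tower(inputs[i])
            tower_grads.append(tf.gradients(cost, tf.trainable_variables()))
        tf.get_variable_scope().reuse_variables()   # share weights from tower 2 onwards

print([v.name for v in tf.trainable_variables()])   # ['model/w:0'] -- no 'model/w_1'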
I'm trying to train an LSTM in Tensorflow using minibatches, but after training is complete I would like to use the model by submitting one example at a time to it. I can set up the graph within Tensorflow to train my LSTM network, but I can't use the trained result afterward in the way I want.
The setup code looks something like this:
# Build the LSTM model.
cellRaw = rnn_cell.BasicLSTMCell(LAYER_SIZE)
cellRaw = rnn_cell.MultiRNNCell([cellRaw] * NUM_LAYERS)
cell = rnn_cell.DropoutWrapper(cellRaw, output_keep_prob=0.25)

input_data = tf.placeholder(dtype=tf.float32, shape=[SEQ_LENGTH, None, 3])
target_data = tf.placeholder(dtype=tf.float32, shape=[SEQ_LENGTH, None])

initial_state = cell.zero_state(batch_size=BATCH_SIZE, dtype=tf.float32)

with tf.variable_scope('rnnlm'):
    output_w = tf.get_variable("output_w", [LAYER_SIZE, 6])
    output_b = tf.get_variable("output_b", [6])

input_list = tf.unpack(input_data)  # list of per-step tensors fed to the decoder
outputs, final_state = seq2seq.rnn_decoder(input_list, initial_state, cell, loop_function=None, scope='rnnlm')
output = tf.reshape(tf.concat(1, outputs), [-1, LAYER_SIZE])
output = tf.nn.xw_plus_b(output, output_w, output_b)
Note the two placeholders, input_data and target_data. I haven't bothered including the optimizer setup. After training is complete and the training session is closed, I would like to set up a new session that uses the trained LSTM network, with its input provided by a completely different placeholder, something like:
with tf.Session() as sess:
    with tf.variable_scope("simulation", reuse=None):
        cellSim = cellRaw
        input_data_sim = tf.placeholder(dtype=tf.float32, shape=[1, 1, 3])
        initial_state_sim = cell.zero_state(batch_size=1, dtype=tf.float32)
        input_list_sim = tf.unpack(input_data_sim)
        outputsSim, final_state_sim = seq2seq.rnn_decoder(input_list_sim, initial_state_sim, cellSim, loop_function=None, scope='rnnlm')
        outputSim = tf.reshape(tf.concat(1, outputsSim), [-1, LAYER_SIZE])

        with tf.variable_scope('rnnlm'):
            output_w = tf.get_variable("output_w", [LAYER_SIZE, nOut])
            output_b = tf.get_variable("output_b", [nOut])

        outputSim = tf.nn.xw_plus_b(outputSim, output_w, output_b)
This second part returns the following error:
tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Presumably this is because the graph I'm using still has the old training placeholders attached to the trained LSTM nodes. What's the right way to 'extract' the trained LSTM and put it into a new, different graph that has a different style of inputs? The variable scoping features that TensorFlow has seem to address something like this, but the examples in the documentation all talk about using variable scope as a way of managing variable names so that the same piece of code will generate similar subgraphs within the same graph. The 'reuse' feature seems to be close to what I want, but I don't find the TensorFlow documentation linked above to be clear at all on what it does. The cells themselves cannot be given a name (in other words,
cellRaw = rnn_cell.MultiRNNCell([cellRaw] * NUM_LAYERS, name="multicell")
is not valid), and while I can give a name to a seq2seq.rnn_decoder(), I presumably wouldn't be able to remove the rnn_cell.DropoutWrapper() if I used that node unchanged.
Questions:
What is the proper way to move trained LSTM weights from one graph to another?
Is it correct to say that starting a new session "releases resources", but doesn't erase the graph built in memory?
It seems to me like the 'reuse' feature allows Tensorflow to search outside of the current variable scope for variables with the same name (existing in a different scope), and use them in the current scope. Is this correct? If it is, what happens to all of the graph edges from the non-current scope that link to that variable? If it isn't, why does Tensorflow throw an error if you try to have the same variable name within two different scopes? It seems perfectly reasonable to define two variables with identical names in two different scopes, e.g. conv1/sum1 and conv2/sum1.
In my code I'm working within a new scope but the graph won't run without data to be fed into a placeholder from the initial, default scope. Is the default scope always 'in-scope' for some reason?
If graph edges can span different scopes, and names in different scopes can't be shared unless they refer to the exact same node, then that would seem to defeat the purpose of having different scopes in the first place. What am I misunderstanding here?
Thanks!
What is the proper way to move trained LSTM weights from one graph to another?
You can create your decoding graph first (with a saver object to save the parameters) and create a GraphDef object that you can import into your bigger training graph:
basegraph = tf.Graph()
with basegraph.as_default():
    ...  # your graph goes here

traingraph = tf.Graph()
with traingraph.as_default():
    tf.import_graph_def(basegraph.as_graph_def())
    ...  # your training graph goes here
Make sure you load your variables when you start a session for the new graph.
I don't have experience with this functionality, so you may have to look into it a bit more.
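As a rough sketch of that "load your variables" step (hypothetical checkpoint path and layer size, reusing the rnnlm names from the question): rebuild the graph with the same variable names used at training time, then restore from the checkpoint instead of running initializers.

import tensorflow as tf

LAYER_SIZE = 128   # hypothetical

with tf.Graph().as_default():
    with tf.variable_scope('rnnlm'):
        output_w = tf.get_variable('output_w', [LAYER_SIZE, 6])
        output_b = tf.get_variable('output_b', [6])
    saver = tf.train.Saver()          # matches variables to checkpoint entries by name
    with tf.Session() as sess:
        saver.restore(sess, '/tmp/lstm_model.ckpt')   # hypothetical checkpoint path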
Is it correct to say that starting a new session "releases resources", but doesn't erase the graph built in memory?
Yep, the graph object still holds it.
It seems to me like the 'reuse' feature allows Tensorflow to search outside of the current variable scope for variables with the same name (existing in a different scope), and use them in the current scope. Is this correct? If it is, what happens to all of the graph edges from the non-current scope that link to that variable? If it isn't, why does Tensorflow throw an error if you try to have the same variable name within two different scopes? It seems perfectly reasonable to define two variables with identical names in two different scopes, e.g. conv1/sum1 and conv2/sum1.
No, reuse determines the behaviour when you call get_variable for an existing name: when it is True, the existing variable is returned; otherwise a new one is created. Normally TensorFlow should not throw an error here. Are you sure you're using tf.get_variable and not just tf.Variable?
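A small sketch of what reuse actually controls, using the conv1/sum1 example from the question:

import tensorflow as tf

with tf.variable_scope('conv1'):
    a = tf.get_variable('sum1', shape=[])        # creates conv1/sum1

with tf.variable_scope('conv1', reuse=True):
    b = tf.get_variable('sum1', shape=[])        # returns the existing conv1/sum1

with tf.variable_scope('conv2'):
    c = tf.get_variable('sum1', shape=[])        # a different variable, conv2/sum1

print(a is b, a is c)   # True False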
In my code I'm working within a new scope but the graph won't run without data to be fed into a placeholder from the initial, default scope. Is the default scope always 'in-scope' for some reason?
I don't really see what you mean. Placeholders do not always have to be used: if a placeholder is not required for running an operation, you don't have to feed it.
If graph edges can span different scopes, and names in different scopes can't be shared unless they refer to the exact same node, then that would seem to defeat the purpose of having different scopes in the first place. What am I misunderstanding here?
I think your understanding or usage of scopes is flawed; see above.
I have a setup where I need to initialize an LSTM after the main initialization, which uses tf.initialize_all_variables(). That is, I want to call tf.initialize_variables([var_list]).
Is there a way to collect all the internal trainable variables for both:
rnn_cell.BasicLSTMCell
rnn_cell.MultiRNNCell
so that I can initialize JUST these parameters?
The main reason I want this is because I do not want to re-initialize some trained values from earlier.
The easiest way to solve your problem is to use variable scope. The names of the variables within a scope will be prefixed with its name. Here is a short snippet:
cell = rnn_cell.BasicLSTMCell(num_nodes)
with tf.variable_scope("LSTM") as vs:
    # Execute the LSTM cell here in any way, for example:
    for i in range(num_steps):
        output[i], state = cell(input_data[i], state)

    # Retrieve just the LSTM variables.
    lstm_variables = [v for v in tf.all_variables()
                      if v.name.startswith(vs.name)]

# [..]
# Initialize the LSTM variables.
tf.initialize_variables(lstm_variables)
It would work the same way with MultiRNNCell.
EDIT: changed tf.trainable_variables to tf.all_variables()
You can also use tf.get_collection():
cell = rnn_cell.BasicLSTMCell(num_nodes)
with tf.variable_scope("LSTM") as vs:
    # Execute the LSTM cell here in any way, for example:
    for i in range(num_steps):
        output[i], state = cell(input_data[i], state)

    lstm_variables = tf.get_collection(tf.GraphKeys.VARIABLES, scope=vs.name)
(partly copied from Rafal's answer)
Note that the last line is equivalent to the list comprehension in Rafal's code.
Basically, TensorFlow stores a global collection of variables, which can be fetched either by tf.all_variables() or by tf.get_collection(tf.GraphKeys.VARIABLES). If you specify scope (a scope name) in the tf.get_collection() call, you only fetch the tensors (variables in this case) whose scope falls under the specified scope.
EDIT:
You can also use tf.GraphKeys.TRAINABLE_VARIABLES to get trainable variables only. But since vanilla BasicLSTMCell does not initialize any non-trainable variable, both will be functionally equivalent. For a complete list of default graph collections, check this out.
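Putting it together, a short sketch (assuming the LSTM has been built under the "LSTM" scope as in the snippets above, and using the same legacy TF 1.x collection names as this answer):

import tensorflow as tf

# Either collection works for a plain BasicLSTMCell; both contain the same variables.
lstm_vars = tf.get_collection(tf.GraphKeys.VARIABLES, scope='LSTM')
lstm_trainable = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='LSTM')

init_lstm_op = tf.initialize_variables(lstm_vars)
with tf.Session() as sess:
    sess.run(init_lstm_op)   # initializes just the LSTM parameters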