Attempting to use uninitialized variable - tensorboard - python

I am just starting to play around with TensorBoard and want to create a simple example where a loop calls a function. Inside that function, a tensor variable gets incremented by one and then added to a summary.
I am getting a FailedPreconditionError: Attempting to use uninitialized value x_scalar
But I thought I was initializing x_scalar with lines 10 and 14. What is the proper way to initialize?
import tensorflow as tf

tf.reset_default_graph()  # To clear the defined variables and operations of the previous cell

# create the scalar variable
x_scalar = tf.get_variable('x_scalar', shape=[], initializer=tf.truncated_normal_initializer(mean=0, stddev=1))

# ____step 1:____ create the scalar summary
first_summary = tf.summary.scalar(name='My_first_scalar_summary', tensor=x_scalar)

step = 1

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    sess.run(x_scalar.assign(1))
    print(sess.run(x_scalar))

print("---------------------------")

def main():
    global init
    global first_summary
    global step
    # launch the graph in a session
    # with tf.Session() as sess:
    #     # ____step 2:____ creating the writer inside the session
    #     writer = tf.summary.FileWriter('./graphs', sess.graph)
    for s in range(100):
        func()

def func():
    global init
    global first_summary
    global step
    global x_scalar
    with tf.Session() as sess:
        # ____step 2:____ creating the writer inside the session
        # loop over several initializations of the variable
        sess.run(x_scalar.assign(x_scalar + 1))
        # ____step 3:____ evaluate the scalar summary
        summary = sess.run(first_summary)
        # ____step 4:____ add the summary to the writer (i.e. to the event file)
        writer.add_summary(summary, step)
        step = step + 1
    print('Done with writing the scalar summary')

if __name__ == '__main__':
    main()

You initialized your variable in a different tf.Session(). When you use tf.Session() as a context manager, the session closes automatically once the block ends, and a variable's value lives only in its session; every call to func() then opens a brand-new session in which x_scalar has never been initialized.
You could use a checkpoint and metagraph to save your graph plus weights and then load them into the newly created session.
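A minimal sketch of that route, applied to the variable from the question (the checkpoint path is just an illustration):

import tensorflow as tf

x_scalar = tf.get_variable('x_scalar', shape=[],
                           initializer=tf.truncated_normal_initializer(mean=0, stddev=1))
saver = tf.train.Saver()

# First session: initialize, then save the weights plus a .meta graph file.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(x_scalar.assign(1))
    saver.save(sess, '/tmp/x_scalar.ckpt')

# Later, a brand-new session: re-import the graph and restore the weights.
with tf.Graph().as_default(), tf.Session() as sess:
    restorer = tf.train.import_meta_graph('/tmp/x_scalar.ckpt.meta')
    restorer.restore(sess, '/tmp/x_scalar.ckpt')
    print(sess.run('x_scalar:0'))  # 1.0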
Or you can try passing a single session around instead of opening a new one per call:
sess = tf.Session()
sess.run([CODE])
sess.run([CODE])
sess.run([CODE])
sess.run([CODE])
sess.close()
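Applied to the code in the question, that might look like the sketch below: one session and one writer for the whole run, with the variable initialized in the same session that later uses it.

import tensorflow as tf

tf.reset_default_graph()
x_scalar = tf.get_variable('x_scalar', shape=[],
                           initializer=tf.truncated_normal_initializer(mean=0, stddev=1))
first_summary = tf.summary.scalar(name='My_first_scalar_summary', tensor=x_scalar)
increment = x_scalar.assign(x_scalar + 1)

sess = tf.Session()                           # one session for the whole run
sess.run(tf.global_variables_initializer())  # initialize in THIS session
writer = tf.summary.FileWriter('./graphs', sess.graph)
for step in range(100):
    sess.run(increment)
    summary = sess.run(first_summary)
    writer.add_summary(summary, step)
writer.close()
sess.close()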

Related

tensorflow reload model to session from graph def

I have an exported TensorFlow SavedModel which is used for serving.
I want to "reload" it from a GraphDef object, which I can broadcast for use with Spark.
I load it using:
sess = tf.Session()
tf.saved_model.loader.load(sess, ['serve'], folder)
sess.run('dense_1/Softmax:0', {'input_1:0': input_image}) # works
Then, to load it again to different session, I've tried:
graph_def = sess.graph.as_graph_def()
# then, to load
with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    sess.run('dense_1/Softmax:0', {'input_1:0': input_image})
I get the error:
FailedPreconditionError: Attempting to use uninitialized value dense_1/kernel
I've tried adding
sess.run(tf.global_variables_initializer())
But still the same error.
What am I missing?
You cannot copy variable values from one session to another through the GraphDef. Variable values live in the session; the graph definition only contains the structure of the graph. You need to "export" the variable values from one session and then restore them in the other. If you want to avoid checkpoints or similar tooling, a pair of functions like the following should work in most cases:
import tensorflow as tf

# Gets variable values as a list of (name, value) pairs
def get_variable_values(sess):
    # Find variable operations
    var_ops = [op for op in sess.graph.get_operations() if op.type == 'VariableV2']
    # Get the values, pairing each name with its value as we go, so that
    # skipped (uninitialized) variables cannot shift the pairing
    var_values = []
    for v in var_ops:
        try:
            var_values.append((v.name, sess.run(v.outputs[0])))
        except tf.errors.FailedPreconditionError:
            # Uninitialized variables are ignored
            pass
    # Return the pairs list
    return var_values

# Restore the variable values
def restore_var_values(sess, var_values):
    # Find the variable assignment operations
    assign_ops = [sess.graph.get_operation_by_name(v + '/Assign') for v, _ in var_values]
    # Run the assignment operations, feeding in the saved values
    sess.run(assign_ops, feed_dict={op.inputs[1]: val
                                    for op, (_, val) in zip(assign_ops, var_values)})

# Test
with tf.Graph().as_default(), tf.Session() as sess:
    v = tf.Variable(0., tf.float32, name='a')
    v.load(3., sess)
    var_values = get_variable_values(sess)
    graph_def = tf.get_default_graph().as_graph_def()
with tf.Graph().as_default(), tf.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    restore_var_values(sess, var_values)
    print(sess.run('a:0'))
    # 3.0

how to use eager execution to save and restore in TensorFlow?

We always use tf.train.Saver() to save and restore weights, as in this example.
But how do we save with eager execution? How should the following example change?
Another question: is it a good idea to use eager?
I found tf.contrib.eager.Saver here, but it says,
"Saver's name-based checkpointing strategy is fragile".
What does that mean?
# Create some variables.
v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer)
v2 = tf.get_variable("v2", shape=[5], initializer=tf.zeros_initializer)
inc_v1 = v1.assign(v1 + 1)
dec_v2 = v2.assign(v2 - 1)

# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, initialize the variables, do some work, and save the
# variables to disk.
with tf.Session() as sess:
    sess.run(init_op)
    # Do some work with the model.
    inc_v1.op.run()
    dec_v2.op.run()
    # Save the variables to disk.
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in path: %s" % save_path)

Tensorflow: Error initializing variables created within tf.data.Dataset.map()

I have created a function with TF operations that I invoke with tf.data.Dataset.map() to transform the input data to my model. Inside that function I create a tf.Variable and assign to it. When initializing the variables, TF complains that the variable's init operation is not an element of the graph, or that the variable does not belong to the same graph as the other variables. I would appreciate any help to solve this issue.
Here you can see some toy code to reproduce the issue (TF 1.12):
import tensorflow as tf

def fun(x):
    f = tf.Variable(tf.ones((1,), dtype=tf.int64), name='test')
    op = f.assign(x, name='test_assign')
    with tf.control_dependencies([op]):
        f = tf.identity(f)
    return f

def generator():
    while True:
        yield [2]

ds = tf.data.Dataset.from_generator(generator,
                                    output_shapes=tf.TensorShape([1,]), output_types=tf.int64)
ds = ds.map(fun)
iterator = ds.make_one_shot_iterator()
y = iterator.get_next()

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(5):
    print(sess.run(y))
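The usual diagnosis here is that Dataset.map() traces fun into its own internal function graph, so a tf.Variable created inside it does not belong to the outer graph that global_variables_initializer() runs in. One commonly suggested workaround, sketched and untested here, is to create the variable once outside the mapped function as a resource variable (which tf.data function graphs can capture by reference) and only assign to it inside, switching to an initializable iterator since one-shot iterators cannot capture stateful objects:

import tensorflow as tf

# The variable is created once, in the outer graph. use_resource=True lets
# the dataset's internal function graph capture it by reference (an
# assumption about TF 1.x tf.data behaviour, not verified here).
f = tf.get_variable('test', initializer=tf.ones((1,), dtype=tf.int64),
                    use_resource=True)

def fun(x):
    assign_op = f.assign(x, name='test_assign')
    with tf.control_dependencies([assign_op]):
        return f.read_value()

def generator():
    while True:
        yield [2]

ds = tf.data.Dataset.from_generator(generator,
                                    output_shapes=tf.TensorShape([1,]),
                                    output_types=tf.int64)
ds = ds.map(fun)
iterator = ds.make_initializable_iterator()
y = iterator.get_next()

sess = tf.Session()
sess.run(tf.global_variables_initializer())  # the variable is in this graph now
sess.run(iterator.initializer)
for _ in range(5):
    print(sess.run(y))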

Can tensorflow Saver be used in different graphs with the same structure

The network structure has already been loaded into the default global graph. I want to create another graph with the same structure and load checkpoints into it.
If the code is like this, it throws ValueError: No variables to save at the last line, yet the second line works fine. Why? Does the GraphDef returned by as_graph_def() contain variable definitions/names?
inference_graph_def = tf.get_default_graph().as_graph_def()
saver = tf.train.Saver()
with tf.Graph().as_default():
    tf.import_graph_def(inference_graph_def)
    saver1 = tf.train.Saver()
If the code is like this, it throws Cannot interpret feed_dict key as Tensor: The name 'save/Const:0' refers to a Tensor which does not exist at the last line. However, it works fine with the 3rd line removed.
inference_graph_def = tf.get_default_graph().as_graph_def()
saver = tf.train.Saver()
with tf.Graph().as_default():
    tf.import_graph_def(inference_graph_def)
    with session.Session() as sess:
        saver.restore(sess, checkpoint_path)
So, does this mean Saver cannot work in different graphs even though they have the same structure?
Any help would be appreciated~
Here's an example of using a MetaGraphDef, which unlike GraphDef saves variable collections, to initialize a new graph using a previously saved graph.
import tensorflow as tf

CHECKPOINT_PATH = "/tmp/first_graph_checkpoint"

with tf.Graph().as_default():
    some_variable = tf.get_variable(
        name="some_variable",
        shape=[2],
        dtype=tf.float32)
    init_op = tf.global_variables_initializer()
    first_meta_graph = tf.train.export_meta_graph()
    first_graph_saver = tf.train.Saver()
    with tf.Session() as session:
        init_op.run()
        print("Initialized value in first graph", some_variable.eval())
        first_graph_saver.save(
            sess=session,
            save_path=CHECKPOINT_PATH)

with tf.Graph().as_default():
    tf.train.import_meta_graph(first_meta_graph)
    second_graph_saver = tf.train.Saver()
    with tf.Session() as session:
        second_graph_saver.restore(
            sess=session,
            save_path=CHECKPOINT_PATH)
        print("Variable value after restore", tf.global_variables()[0].eval())
Prints something like:
Initialized value in first graph [-0.98926258 -0.09709156]
Variable value after restore [-0.98926258 -0.09709156]
Note that the checkpoint is still important! Loading the MetaGraph does not restore the values of Variables (it doesn't contain those values), just the bookkeeping which tracks their existence (collections). SavedModel format addresses this, bundling MetaGraphs with checkpoints and other metadata for running them.
Edit: By popular demand, here's an example of doing the same thing with a GraphDef. I don't recommend it. Since none of the collections are restored when the GraphDef is loaded, we have to manually specify the variables we want the Saver to restore. The "import/" default naming scheme is easy enough to avoid with a name='' argument to import_graph_def, but removing it doesn't help much, since you'd still need to fill in the variables collection by hand for the Saver to work "automatically". Instead I've chosen to specify a mapping manually when creating the Saver.
import tensorflow as tf

CHECKPOINT_PATH = "/tmp/first_graph_checkpoint"

with tf.Graph().as_default():
    some_variable = tf.get_variable(
        name="some_variable",
        shape=[2],
        dtype=tf.float32)
    init_op = tf.global_variables_initializer()
    first_graph_def = tf.get_default_graph().as_graph_def()
    first_graph_saver = tf.train.Saver()
    with tf.Session() as session:
        init_op.run()
        print("Initialized value in first graph", some_variable.eval())
        first_graph_saver.save(
            sess=session,
            save_path=CHECKPOINT_PATH)

with tf.Graph().as_default():
    tf.import_graph_def(first_graph_def)
    variable_to_restore = tf.get_default_graph().get_tensor_by_name(
        "import/some_variable:0")
    second_graph_saver = tf.train.Saver(var_list={
        "some_variable": variable_to_restore
    })
    with tf.Session() as session:
        second_graph_saver.restore(
            sess=session,
            save_path=CHECKPOINT_PATH)
        print("Variable value after restore", variable_to_restore.eval())

Tensorflow ValueError: No variables to save from

I have written a TensorFlow CNN and it is already trained. I wish to restore it to run on a few samples, but unfortunately it's spitting out:
ValueError: No variables to save
My eval code can be found here:
import tensorflow as tf
import main
import Process
import Input

eval_dir = "/Users/Zanhuang/Desktop/NNP/model.ckpt-30"
checkpoint_dir = "/Users/Zanhuang/Desktop/NNP/checkpoint"

init_op = tf.initialize_all_variables()
saver = tf.train.Saver()

def evaluate():
    with tf.Graph().as_default() as g:
        sess.run(init_op)
        ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
        saver.restore(sess, eval_dir)
        images, labels = Process.eval_inputs(eval_data=eval_data)
        forward_propgation_results = Process.forward_propagation(images)
        top_k_op = tf.nn.in_top_k(forward_propgation_results, labels, 1)
        print(top_k_op)

def main(argv=None):
    evaluate()

if __name__ == '__main__':
    tf.app.run()
The tf.train.Saver must be created after the variables that you want to restore (or save). Additionally it must be created in the same graph as those variables.
Assuming that Process.forward_propagation(…) also creates the variables in your model, adding the saver creation after this line should work:
forward_propgation_results = Process.forward_propagation(images)
In addition, you must pass the new tf.Graph that you created to the tf.Session constructor so you'll need to move the creation of sess inside that with block as well.
The resulting function will be something like:
def evaluate():
    with tf.Graph().as_default() as g:
        images, labels = Process.eval_inputs(eval_data=eval_data)
        forward_propgation_results = Process.forward_propagation(images)
        init_op = tf.initialize_all_variables()
        saver = tf.train.Saver()
        top_k_op = tf.nn.in_top_k(forward_propgation_results, labels, 1)
        with tf.Session(graph=g) as sess:
            sess.run(init_op)
            saver.restore(sess, eval_dir)
            print(sess.run(top_k_op))
Simply put, there should be at least one tf.Variable defined before you create your Saver object.
You can get the above code running by adding the following line of code before the Saver definition.
The added code sits between the two ### markers.
import tensorflow as tf
import main
import Process
import Input
eval_dir = "/Users/Zanhuang/Desktop/NNP/model.ckpt-30"
checkpoint_dir = "/Users/Zanhuang/Desktop/NNP/checkpoint"
init_op = tf.initialize_all_variables()
### Here Comes the fake variable that makes defining a saver object possible.
_ = tf.Variable(initial_value='fake_variable')
###
saver = tf.train.Saver()
...
Note that since TF 0.11 (a long time ago, yet after the currently accepted answer was written) tf.train.Saver has had a defer_build argument in its constructor that allows you to define variables after it has been constructed. However, you then need to call its build member function once all variables have been added, typically just before finalizing your graph.
saver = tf.train.Saver(defer_build=True)
# ... build your graph here ...
saver.build()
graph.finalize()
# now entering the training loop
