Understanding next step after saving TensorFlow model - python

I have a simple MNIST model which I've successfully saved; the code is the following:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
tf_save_file = './mnist-to-save-saved'
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
y = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_, logits = y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
saver.save(sess, tf_save_file)
for _ in range(1000):
    batch = mnist.train.next_batch(100)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver.save(sess, tf_save_file, global_step=1000)
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Then, the following files are generated:
checkpoint
mnist-to-save-saved-1000.data-00000-of-00001
mnist-to-save-saved-1000.index
mnist-to-save-saved-1000.meta
mnist-to-save-saved.data-00000-of-00001
mnist-to-save-saved.index
mnist-to-save-saved.meta
Now, in order to use it in production, I want to be able to run the trained model on any digit image to make a prediction (I mean, not deploying a server yet, but making the prediction "locally", with a fixed digit image in the same directory, so that using the model is like running an executable).
But, considering the (mid/low?) level of the API my code uses, I'm confused about what the easiest correct next step would be (restoring the checkpoint, using an Estimator, etc.), and how to do it.
Although I've read the official documentation, there seem to be many ways to do this, and some are a bit complex and "noisy" for a simple model like this.
Edit:
I've edited and re-run the MNIST file; the code is the same as above except for these lines:
...
x = tf.placeholder(tf.float32, shape=[None, 784], name='input')
...
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1), name='result')
...
Then, I try to run this other .py script (in the same directory as the above code) in order to pass it a local handwritten digit image ("mnist-input-image.png") located in the same directory:
import tensorflow as tf
from PIL import Image
import numpy as np
image_test = Image.open("mnist-input-image.png")
image = np.array(image_test)
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/Users/username/.meta')
    new = saver.restore(sess, tf.train.latest_checkpoint('/Users/username/'))
    graph = tf.get_default_graph()
    input_x = graph.get_tensor_by_name("input:0")
    result = graph.get_tensor_by_name("result:0")
    feed_dict = {input_x: image}
    predictions = result.eval(feed_dict=feed_dict)
    print(predictions)
Now, if I understand correctly, I have to pass the image as a numpy array. Then, my questions are:
1) Which exact files do these lines refer to (since I have no .meta file in my user folder)?
saver = tf.train.import_meta_graph('/Users/username/.meta')
new = saver.restore(sess, tf.train.latest_checkpoint('/Users/username/'))
I mean, which exact files (from my generated file list above) do those lines refer to?
2) Translated to my case, is this line correct for passing my numpy array into the feed dict?
feed_dict = {input_x: image}

A simple solution is to use your session object. When you have generated the checkpoint file, you can restore it with a Saver object.
By the way, do you know why most tutorials have their graph creation inside a function? One good reason is that you can then deserialize the graph quickly with your own inputs.
The correct method to start a session is with the following:
# Use your placeholders, variables, etc to create the entire graph.
# Usually you return the input placeholder,
# prediction and the loss/accuracy here.
# You don't need the accuracy.
x, y, _ = make_your_graph(test_X, test_y)
# This object is the interface for serialization in tf
saver = tf.train.Saver()
with tf.Session() as sess:
    # Restore the latest checkpoint; tf.train.latest_checkpoint() expects the
    # directory that contains your checkpoint files (here, "./checkpoint" is that directory).
    saver.restore(sess, tf.train.latest_checkpoint("./checkpoint"))
    prediction = sess.run(y)
Want to run more than 1 data point for your already-booted up session?
Then replace the last line with a feed dict:
while waiting_for_new_data():
    new_x = get_new_data_point()
    feed_dict = {x: [new_x]}
    another_prediction = sess.run(y, feed_dict)

First of all, give a value to the name parameter of each op you want to use later, so that you can refer to it later by its name:
change this :
x = tf.placeholder(tf.float32, shape=[None, 784])
to
x = tf.placeholder(tf.float32, shape=[None, 784], name='input')
and
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
to
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1), name='result')
Now run this small script to restore the model and make predictions:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)  # only needed because this example feeds MNIST test data

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/Users/dummy/.meta')
    new = saver.restore(sess, tf.train.latest_checkpoint('/Users/dummy/'))
    graph = tf.get_default_graph()
    input_x = graph.get_tensor_by_name("input:0")
    result = graph.get_tensor_by_name("result:0")
    feed_dict = {input_x: mnist.test.images}  # here you feed your new data; for example, I am feeding MNIST test images
    predictions = result.eval(feed_dict=feed_dict)
    print(predictions)
And you will get output.
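To map this onto the file names generated above: the .meta file to import is mnist-to-save-saved-1000.meta, and tf.train.latest_checkpoint() takes the directory that contains the checkpoint file (here, the script's own directory). Also note that the result tensor is tf.equal(...), which depends on the labels placeholder y_ as well, so for a pure prediction it is easier to name an argmax op at training time, e.g. prediction = tf.argmax(y, 1, name='prediction'), and fetch that instead. A minimal sketch under those assumptions (the 'prediction' name and the image preprocessing are illustrative, not part of the original code):
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the PNG, convert to grayscale, resize to 28x28 and flatten to shape (1, 784),
# scaling pixel values to [0, 1] so the input matches the MNIST training data.
# Depending on your image you may also need to invert it (MNIST digits are white on black).
img = Image.open("mnist-input-image.png").convert("L").resize((28, 28))
image = np.array(img, dtype=np.float32).reshape(1, 784) / 255.0

with tf.Session() as sess:
    # These files were produced by saver.save(sess, tf_save_file, global_step=1000)
    saver = tf.train.import_meta_graph('./mnist-to-save-saved-1000.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
    input_x = graph.get_tensor_by_name("input:0")
    # Assumes a tf.argmax(y, 1, name='prediction') op was added and named before saving
    prediction = graph.get_tensor_by_name("prediction:0")
    print(sess.run(prediction, feed_dict={input_x: image}))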

Related

What does this error InvalidArgumentError (see above for traceback): Expected dimension in the range [-1, 1), but got 1 mean?

I'm getting an error when trying to run the following code:
correct = tf.equal(tf.argmax(activation,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct,'float'))
print('Accuracy: ', sess.run(accuracy, feed_dict={x: test_x, y: test_y}))
The actual error is:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected dimension in the range [-1, 1), but got 1
[[Node: ArgMax_1 = ArgMax[T=DT_FLOAT, Tidx=DT_INT32, output_type=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_Placeholder_1_0_1, ArgMax_1/dimension)]]
The code is running a simple binary classification using logistic regression in TensorFlow.
I've been doing some research on the net but can't really find a satisfactory solution.
Thanks,
The problem is not in accuracy. The error clearly shows that the problem is in argmax.
Please check the dimensions of 'activation' and 'y': if either of them is 1-D, then remove the second operand of argmax (it then defaults to axis 0) and it will probably resolve your issue.
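A quick way to check this (a sketch; 'activation' and 'y' are the tensors from your own code):
# Print the static shapes TensorFlow has inferred
print(activation.get_shape())   # e.g. (?, 2) means rank 2, (?,) means rank 1
print(y.get_shape())
# For a rank-1 tensor the only valid argmax axes are -1 and 0, so
# tf.argmax(t, 1) fails with "Expected dimension in the range [-1, 1), but got 1".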
There is a full, running example on Github. Specifically, I was able to get the following code to run:
$ cat tf.py
from __future__ import print_function
import tensorflow as tf
assert tf.__version__ == '1.3.0'
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Parameters
learning_rate = 0.01
training_epochs = 0
batch_size = 100
display_step = 1
# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes
# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
sess = tf.InteractiveSession()
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
sess.run(init)
print('Accuracy: ', sess.run(accuracy, feed_dict={x: mnist.test.images,
                                                  y: mnist.test.labels}))
$ python tf.py
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
Accuracy: 0.098
This suggests that you might have an older version of Tensorflow. I would try installing 1.3.0 and seeing if that solves your problem.
In your code modify this
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
to this
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y))
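Note that tf.argmax(y) on a 1-D tensor returns a single scalar (the index of the largest element), which is usually not what you want when computing accuracy. If your y placeholder actually holds integer class labels of shape [None] rather than one-hot rows, a clearer fix is to compare against the labels directly; a sketch, assuming that label layout:
# Assuming pred has one column per class and y holds integer class labels of shape [None]
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.cast(y, tf.int64))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))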

How to load a trained tensorflow model

I am having all kinds of trouble loading a tensorflow model to test on some new data. When I trained the model, I used this:
save_model_file = 'my_saved_model'
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_file)
This seems to result in the following files being created:
my_saved_model.meta
checkpoint
my_saved_model.index
my_saved_model.data-00000-of-00001
I have no idea which of these files I am supposed to pay attention to.
Now the model is trained, and I can't seem to load it or use it without throwing an exception. Here is what I am doing:
def neural_net_data_input(data_shape):
    theshape = (None,) + tuple(data_shape)
    return tf.placeholder(tf.float32, shape=theshape, name='x')

def neural_net_label_input(n_out):
    return tf.placeholder(tf.float32, shape=(None, n_out), name='one_hot_labels')

def neural_net_keep_prob_input():
    return tf.placeholder(tf.float32, name='keep_prob')

def do_generate_network(x):
    #
    # here is where i generate the network layer by layer.
    # this code works fine so i am not showing it here
    #
    pass

#
# Now I want to restore the model
#
tf.reset_default_graph()
input_data_shape = (32, 32, 1)
final_num_outputs = 43

graph1 = tf.Graph()
with graph1.as_default():
    x = neural_net_data_input(input_data_shape)
    one_hot_labels = neural_net_label_input(final_num_outputs)
    keep_prob = neural_net_keep_prob_input()
    logits = do_generate_network(x)
    # Name logits Tensor, so that it can be loaded from disk after training
    logits = tf.identity(logits, name='logits')
    #
    # accuracy: we use this for validation testing
    #
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

################################
# Evaluate
################################
new_data = myutils.load_pickle_file(SOME_DATA_FILE_NAME)
new_features = new_data['features']
new_one_hot_labels = new_data['labels']
print('Evaluating on new data...')
with tf.Session(graph=graph1) as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, save_model_file)
    new_acc = sess.run(accuracy, feed_dict={x: new_features, one_hot_labels: new_one_hot_labels, keep_prob: 1.})
    print('Testing Accuracy For New Images: {}'.format(new_acc))
But when I do this, I get this:
TypeError: Cannot interpret feed_dict key as Tensor: The name 'save/Const:0' refers to a Tensor which does not exist. The operation, 'save/Const', does not exist in the graph.
So, I tried moving my graph inside the session, like this:
################################
# Evaluate
################################
print('Evaluating on web data...')
with tf.Session() as sess:
    x = neural_net_data_input(input_data_shape)
    one_hot_labels = neural_net_label_input(final_num_outputs)
    keep_prob = neural_net_keep_prob_input()
    logits = do_generate_network(x)
    # Name logits Tensor, so that it can be loaded from disk after training
    logits = tf.identity(logits, name='logits')
    #
    # accuracy: we use this for validation testing
    #
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

    sess.run(tf.global_variables_initializer())

    my_save_dir = "/home/carnd/CarND-Traffic-Sign-Classifier-Project"
    load_model_meta_file = os.path.join(my_save_dir, "my_saved_model.meta")
    load_model_path = os.path.join(my_save_dir, "my_saved_model")

    new_saver = tf.train.import_meta_graph(load_model_meta_file)
    new_saver.restore(sess, load_model_path)

    web_acc = sess.run(accuracy, feed_dict={x: web_features, one_hot_labels: web_one_hot_labels, keep_prob: 1.})
    print('Testing Accuracy For Web Images: {}'.format(web_acc))
Now it runs without throwing an error, but the accuracy result it prints is 0.02! I am feeding in the very same data that during training I was getting 95% accuracy on. So it appears I am somehow loading my model incorrectly.
What am I doing wrong?
Steps for loading the trained model:
Load the graph:
You can load the graph using tf.train.import_meta_graph(). An example code would be:
model_path = "my_saved_model"
inference_graph = tf.Graph()
with tf.Session(graph= inference_graph) as sess:
# Load the graph with the trained states
loader = tf.train.import_meta_graph(model_path+'.meta')
loader.restore(sess, model_path)
Get the tensors: Get the tensors needed for inference with get_tensor_by_name(). In your model, make sure you name the tensors, so that you can retrieve them during inference.
# Get the tensors by their variable name
_accuracy = inference_graph.get_tensor_by_name('accuracy:0')
_x = inference_graph.get_tensor_by_name('x:0')
_y = inference_graph.get_tensor_by_name('y:0')
Test: This can be done using the loaded tensors: sess.run(_accuracy, feed_dict={_x: ..., _y: ...})
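Putting those steps together for the files in the question, a sketch might look like the following (it assumes the training graph used the same names the question's helper functions assign, i.e. 'x', 'one_hot_labels', 'keep_prob' and 'accuracy', and that new_features / new_one_hot_labels are loaded from the pickle file as in the question):
import tensorflow as tf

model_path = "my_saved_model"
inference_graph = tf.Graph()
with tf.Session(graph=inference_graph) as sess:
    # Rebuild the graph from the .meta file instead of re-creating it in Python,
    # so the restored weights are the ones 'accuracy' actually uses.
    loader = tf.train.import_meta_graph(model_path + '.meta')
    loader.restore(sess, model_path)

    # Fetch the tensors by the names given at training time
    _x = inference_graph.get_tensor_by_name('x:0')
    _labels = inference_graph.get_tensor_by_name('one_hot_labels:0')
    _keep_prob = inference_graph.get_tensor_by_name('keep_prob:0')
    _accuracy = inference_graph.get_tensor_by_name('accuracy:0')

    new_acc = sess.run(_accuracy, feed_dict={_x: new_features,
                                             _labels: new_one_hot_labels,
                                             _keep_prob: 1.0})
    print('Testing Accuracy For New Images: {}'.format(new_acc))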

TensorFlow Model Restore ValueError - At least two variables have the same name

I have saved a model and now I am trying to restore it. After restoring, it works correctly the first time, but when I press the 'TEST' button in the same running program to test another image, it gives this error:
ValueError: At least two variables have the same name: Variable_2/Adam
def train_neural_network(x):
    prediction = neural_network_model(x)  # logits
    softMax = tf.nn.softmax_cross_entropy_with_logits(
        logits=prediction, labels=y)  # prediction and original comparison
    cost = tf.reduce_mean(softMax)  # total loss
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)  # learning_rate=0.01
    hm_epochs = 20
    new_saver = tf.train.Saver()
    with tf.Session() as sess:
        global s
        s = sess
        sess.run(tf.global_variables_initializer())
        new_saver = tf.train.import_meta_graph('../MY_MODELS/my_MNIST_Model_1.meta')
        new_saver.restore(s, tf.train.latest_checkpoint('../MY_MODELS'))
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval(
            {x: mnist.test.images, y: mnist.test.labels}))
The graph which you have loaded already has all the variables needed for inference. You need to load the tensors like accuracy from the saved graph. In your case, you have the same variables declared outside, which conflicts with the variables in the graph.
During training, if you named your tensor accuracy with name='accuracy', then you can load it from the graph using get_tensor_by_name('accuracy:0'). In your example, you also need to load the input tensors x and y from the graph. Your code should look like this:
def inference():
    loaded_graph = tf.Graph()
    with tf.Session(graph=loaded_graph) as sess:
        new_saver = tf.train.import_meta_graph('../MY_MODELS/my_MNIST_Model_1.meta')
        new_saver.restore(sess, tf.train.latest_checkpoint('../MY_MODELS'))

        # Get the tensors by their variable name
        # Note: the names of the following tensors have to be declared in your train graph for this to work. So just name them appropriately.
        _accuracy = loaded_graph.get_tensor_by_name('accuracy:0')
        _x = loaded_graph.get_tensor_by_name('x:0')
        _y = loaded_graph.get_tensor_by_name('y:0')

        print('Accuracy:', _accuracy.eval(
            {_x: mnist.test.images, _y: mnist.test.labels}))

Tensorflow: Print value when Session running

First of all, I'm very new to both Python and TensorFlow.
I'm trying the demo at this link: https://www.tensorflow.org/get_started/mnist/beginners
and it runs well.
However, I would like to debug (or log) the values of some placeholders and variables that change when I run Session.run().
Could you please show me a way to "debug" or log them while the Session runs in the loop?
Here is my code
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnist/", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y1 = tf.add(tf.matmul(x,W),b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
cross_entropy1 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y1, labels=y_))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy1)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.argmax(y,1), feed_dict={x: mnist.test.images, y_: mnist.test.labels})
In this script, I would like to log the value of y and tf.argmax(y, 1) for each test image processed.
Mrry answered it best in this stackoverflow answer: https://stackoverflow.com/a/33633839/6487788
Exactly what you are asking (printing during sess.run) would be this part of his answer:
To print the value of a tensor without returning it to your Python program, you can use the tf.Print() op, as And suggests in another answer. Note that you still need to run part of the graph to see the output of this op, which is printed to standard output. If you're running distributed TensorFlow, the tf.Print() op will print its output to the standard output of the task where that op runs.
For your argmax, that would look like this:
argmaxy = tf.Print(tf.argmax(y, 1), [tf.argmax(y, 1)], message='argmax(y) = ')
correct_prediction = tf.equal(argmaxy, tf.argmax(y_, 1))
Good luck!
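For this particular question, a simpler alternative (a sketch reusing the names from the script above, not part of the answer) is to fetch the tensors you care about in sess.run and print them from Python:
# Fetch both the softmax output and its argmax for the test set in one run call
y_vals, y_pred = sess.run([y, tf.argmax(y, 1)],
                          feed_dict={x: mnist.test.images, y_: mnist.test.labels})
for probs, pred in zip(y_vals, y_pred):
    print(probs, pred)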
While #rmeerten's answer is correct, you can also consider using TensorBoard, which can be a useful tool for debugging your models and seeing what's happening. For background, you can also check out the TensorBoard session from the TensorFlow Dev Summit.
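For reference, a minimal TensorBoard sketch along those lines (the './logs' directory is an arbitrary choice, and the tensors are the ones from the question's script):
# Record a histogram of the predicted probabilities and write it for TensorBoard
tf.summary.histogram('y', y)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('./logs', sess.graph)
summary = sess.run(merged, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
writer.add_summary(summary, global_step=0)
writer.close()
# Then start TensorBoard with: tensorboard --logdir=./logs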

How does one use the official Batch Normalization layer in TensorFlow?

I was trying to use batch normalization to train my Neural Networks using TensorFlow but it was unclear to me how to use the official layer implementation of Batch Normalization (note this is different from the one from the API).
After some painful digging through their GitHub issues, it seems that one needs a tf.cond to use it properly, and also a 'reuse=True' flag so that the BN shift and scale variables are properly reused. After figuring that out, I provided a small description of what I believe is the right way to use it here.
Now I have written a short script to test it (only a single layer and a ReLU, hard to make it smaller than this). However, I am not 100% sure how to test it. Right now my code runs with no error messages but returns NaNs unexpectedly, which lowers my confidence that the code I gave in the other post is right. Or maybe the network I have is weird. Either way, does someone know what's wrong? Here is the code:
import tensorflow as tf
# download and install the MNIST data automatically
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.layers.python.layers import batch_norm as batch_norm
def batch_norm_layer(x, train_phase, scope_bn):
    bn_train = batch_norm(x, decay=0.999, center=True, scale=True,
                          is_training=True,
                          reuse=None,  # is this right?
                          trainable=True,
                          scope=scope_bn)
    bn_inference = batch_norm(x, decay=0.999, center=True, scale=True,
                              is_training=False,
                              reuse=True,  # is this right?
                              trainable=True,
                              scope=scope_bn)
    z = tf.cond(train_phase, lambda: bn_train, lambda: bn_inference)
    return z

def get_NN_layer(x, input_dim, output_dim, scope, train_phase):
    with tf.name_scope(scope + 'vars'):
        W = tf.Variable(tf.truncated_normal(shape=[input_dim, output_dim], mean=0.0, stddev=0.1))
        b = tf.Variable(tf.constant(0.1, shape=[output_dim]))
    with tf.name_scope(scope + 'Z'):
        z = tf.matmul(x, W) + b
    with tf.name_scope(scope + 'BN'):
        if train_phase is not None:
            z = batch_norm_layer(z, train_phase, scope + 'BN_unit')
    with tf.name_scope(scope + 'A'):
        a = tf.nn.relu(z)  # (M x D1) = (M x D) * (D x D1)
    return a
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# placeholder for data
x = tf.placeholder(tf.float32, [None, 784])
# placeholder that turns BN during training or off during inference
train_phase = tf.placeholder(tf.bool, name='phase_train')
# variables for parameters
hidden_units = 25
layer1 = get_NN_layer(x, input_dim=784, output_dim=hidden_units, scope='layer1', train_phase=train_phase)
# create model
W_final = tf.Variable(tf.truncated_normal(shape=[hidden_units, 10], mean=0.0, stddev=0.1))
b_final = tf.Variable(tf.constant(0.1, shape=[10]))
y = tf.nn.softmax(tf.matmul(layer1, W_final) + b_final)
### training
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean( -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]) )
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    steps = 3000
    for iter_step in xrange(steps):
        #feed_dict_batch = get_batch_feed(X_train, Y_train, M, phase_train)
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # Collect model statistics
        if iter_step % 1000 == 0:
            batch_xtrain, batch_ytrain = batch_xs, batch_ys  # simulates train data
            batch_xcv, batch_ycv = mnist.test.next_batch(5000)  # simulates CV data
            batch_xtest, batch_ytest = mnist.test.next_batch(5000)  # simulates test data
            # do inference
            train_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xs, y_: batch_ys, train_phase: False})
            cv_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xcv, y_: batch_ycv, train_phase: False})
            test_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xtest, y_: batch_ytest, train_phase: False})

            def do_stuff_with_errors(*args):
                print args
            do_stuff_with_errors(train_error, cv_error, test_error)
        # Run Train Step
        sess.run(fetches=train_step, feed_dict={x: batch_xs, y_: batch_ys, train_phase: True})
    # list of booleans indicating correct predictions
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    # accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, train_phase: False}))
when I run it I get:
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
(2.3474066, 2.3498712, 2.3461707)
(0.49414295, 0.88536006, 0.91152304)
(0.51632041, 0.393666, nan)
0.9296
It used to be that all of the last values were NaN, and now only a few of them are. Is everything fine or am I being paranoid?
I am not sure if this will solve your problem; the documentation for BatchNorm is not very easy to use or informative, so here is a short recap of how to use simple BatchNorm:
First of all, you define your BatchNorm layer. If you want to use it after an affine/fully-connected layer, you do this (just an example, order can be different/as you desire):
...
inputs = tf.matmul(inputs, W) + b
inputs = tf.layers.batch_normalization(inputs, training=is_training)
inputs = tf.nn.relu(inputs)
...
The function tf.layers.batch_normalization maintains internal moving-average variables, and the ops that update them are added to the tf.GraphKeys.UPDATE_OPS collection rather than being run automatically. As such, you must define your optimizer step as follows (after all layers have been defined!), so those updates run together with training:
...
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    trainer = tf.train.AdamOptimizer()
    updateModel = trainer.minimize(loss, global_step=global_step)
...
You can read more about it here. I know it's a little late to answer your question, but it might help other people coming across BatchNorm problems in tensorflow! :)
training = tf.placeholder(tf.bool, name='training')
lr_holder = tf.placeholder(tf.float32, [], name='learning_rate')
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = tf.train.AdamOptimizer(learning_rate=lr_holder).minimize(cost)
When defining the layers, you need to use the placeholder 'training':
batchNormal_layer = tf.layers.batch_normalization(pre_batchNormal_layer, training=training)
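Putting both snippets together, a minimal end-to-end sketch might look like this (the variable names and the single hidden layer are illustrative, not taken from either answer):
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool, name='is_training')

# Hidden layer: affine -> batch norm -> ReLU
h = tf.layers.dense(x, 25)
h = tf.layers.batch_normalization(h, training=is_training)
h = tf.nn.relu(h)
logits = tf.layers.dense(h, 10)

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))

# The control dependency makes sure the moving mean/variance get updated
# every time the train step runs.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = tf.train.AdamOptimizer().minimize(cross_entropy)

correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # Training: batch statistics are used, moving averages are updated
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, is_training: True})
    # Evaluation: the accumulated moving averages are used instead
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels,
                                        is_training: False}))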
