tensorflow prediction on test dataset - python

I am having trouble printing out the predictions on my test data.
Can anyone help me fill in the input to sess.run at the output step? Thanks!
def nn_model(data):
    convnet = conv_2d(data, 32, 3, padding='same', activation='relu')
    convnet = max_pool_2d(convnet, 2)
    ...

logits = nn_model(next_element)
prediction = tf.argmax(logits, 1)

with tf.Session() as sess:
    sess.run(init_op)
    sess.run(training_init_op)
    for i in range(epochs):
        l, _, acc = sess.run([loss, optimizer, accuracy])
    output = sess.run(prediction, {logits: nn_model(test_data)})  # <-- what should the feed be here?
    output = np.argmax(output)
    print("The prediction for test is:", output)

On this line:
output = sess.run(prediction, {logits: nn_model(test_data)})
you seem to be trying to pass your test data (presumably a NumPy array) to logits. logits is traditionally the name of the output of your model, which makes this quite confusing. Your nn_model function should return both the logits (the output of the model) and placeholders for your input. Normally you have something like this:
x = tf.placeholder(tf.float32, shape=(None, 1024))
labels = tf.placeholder(tf.float32, shape=(None,))
Now your output looks something like:
output = sess.run(logits, feed_dict={x: test_data, labels: test_labels})
For test data you might not need to pass in labels at all, but if you want to compute accuracy you will need them; decide based on your needs.
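To make the pattern concrete, here is a minimal hedged sketch; the layer sizes, the tf.layers calls, and the dummy test_data are illustrative assumptions, not the poster's actual nn_model:

import numpy as np
import tensorflow as tf

# Placeholders are the graph's inputs; feed NumPy arrays into these, not into logits.
x = tf.placeholder(tf.float32, shape=(None, 1024))
labels = tf.placeholder(tf.int64, shape=(None,))

def nn_model(inputs):
    # Toy network standing in for the real model.
    hidden = tf.layers.dense(inputs, 128, activation=tf.nn.relu)
    return tf.layers.dense(hidden, 10)   # raw, unscaled logits

logits = nn_model(x)
prediction = tf.argmax(logits, 1)

test_data = np.random.rand(5, 1024).astype(np.float32)  # stand-in for your real test set

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feed the test data into the placeholder; TensorFlow computes the logits from it.
    output = sess.run(prediction, feed_dict={x: test_data})
    print("Predicted classes:", output)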
Here are some really nice examples you can follow:
https://github.com/aymericdamien/TensorFlow-Examples

Related

How to save colab tensorflow deep learning model in google drive

I have just begun working with Tensorflow and Colab.
I followed a tutorial online on how to build a simple image recognition model in Colab.
From the tutorial, I was able to build a simple model, without completely understanding every step at this point.
But what I would like to know is how I can now save the model I built for use elsewhere.
Here are the final bits of code used to build and test the model.
Placeholder:
# Initialize placeholders
x = tf.placeholder(dtype = tf.float32, shape = [None, 28, 28])
y = tf.placeholder(dtype = tf.int32, shape = [None])
# Flatten the input data
images_flat = tf.contrib.layers.flatten(x)
# Fully connected layer
logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu)
# Define a loss function
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
# Define an optimizer
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
# Convert logits to label indexes
correct_pred = tf.argmax(logits, 1)
# Define an accuracy metric
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
print("images_flat: ", images_flat)
print("logits: ", logits)
print("loss: ", loss)
print("predicted_labels: ", correct_pred)
Run in session:
tf.set_random_seed(1234)
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(201):
    print('EPOCH', i)
    _, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: images28, y: labels})
    if i % 10 == 0:
        print("Loss: ", loss)
    print('DONE WITH EPOCH')
Test on test data
# Import `skimage`
from skimage import transform
# Load the test data
test_images, test_labels = load_data(test_data_directory)
# Transform the images to 28 by 28 pixels
test_images28 = [transform.resize(image, (28, 28)) for image in test_images]
# Convert to grayscale
from skimage.color import rgb2gray
test_images28 = rgb2gray(np.array(test_images28))
# Run predictions against the full test set.
predicted = sess.run([correct_pred], feed_dict={x: test_images28})[0]
# Calculate correct matches
match_count = sum([int(y == y_) for y, y_ in zip(test_labels, predicted)])
# Calculate the accuracy
accuracy = match_count / len(test_labels)
# Print the accuracy
print("Accuracy: {:.3f}".format(accuracy))
From the above, can someone suggest a bit of code whereby I can save the model to Google Drive? To be honest, I'm not even sure which variable the model is stored in.
Thank you, and sorry for the beginner question.
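A hedged sketch of one common approach (folder name and paths are illustrative assumptions): in TensorFlow 1.x the trained weights live in the session's variables, a tf.train.Saver writes them to a checkpoint, and in Colab you can mount Google Drive and point the checkpoint path at it.

import os
import tensorflow as tf
from google.colab import drive

# Mount your Drive inside the Colab VM (prompts for authorization).
drive.mount('/content/drive')

ckpt_dir = '/content/drive/My Drive/my_tf_model'   # hypothetical folder in your Drive
os.makedirs(ckpt_dir, exist_ok=True)

# A Saver captures all variables in the current graph.
saver = tf.train.Saver()
save_path = saver.save(sess, os.path.join(ckpt_dir, 'model.ckpt'))
print("Checkpoint written to:", save_path)

The checkpoint can later be restored into an identical graph with saver.restore(sess, save_path).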

Unexpected output for tf.nn.sparse_softmax_cross_entropy_with_logits

The TensorFlow documentation for tf.nn.sparse_softmax_cross_entropy_with_logits explicitly declares that I should not apply softmax to the inputs of this op:
This op expects unscaled logits, since it performs a softmax on logits
internally for efficiency. Do not call this op with the output of
softmax, as it will produce incorrect results.
However, if I use the cross entropy without softmax it gives me unexpected results. According to the CS231n course, the expected initial loss value is around 2.3 for CIFAR-10:
For example, for CIFAR-10 with a Softmax classifier we would expect
the initial loss to be 2.302, because we expect a diffuse probability
of 0.1 for each class (since there are 10 classes), and Softmax loss
is the negative log probability of the correct class so: -ln(0.1) =
2.302.
However without softmax I get much bigger values, for example 108.91984.
What exactly am I doing wrong with sparse_softmax_cross_entropy_with_logits? The TF code is shown below.
import tensorflow as tf
import numpy as np
from tensorflow.python import keras

(_, _), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_test = np.reshape(x_test, [-1, 32, 32, 3])
y_test = np.reshape(y_test, (10000,))
y_test = y_test.astype(np.int32)

x = tf.placeholder(dtype=tf.float32, shape=(None, 32, 32, 3))
y = tf.placeholder(dtype=tf.int32, shape=(None,))

layer = tf.layers.Conv2D(filters=16, kernel_size=3)(x)
layer = tf.nn.relu(layer)
layer = tf.layers.Flatten()(layer)
layer = tf.layers.Dense(units=1000)(layer)
layer = tf.nn.relu(layer)
logits = tf.layers.Dense(units=10)(layer)

# If this line is uncommented I get expected value around 2.3
# logits = tf.nn.softmax(logits)

loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(loss, name='cross_entropy')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(loss, feed_dict={x: x_test[0:256], y: y_test[0:256]})
    print("loss: ", res)
    # Expected output is a value close to 2.3
    # Real outputs are 108.91984, 72.82324, etc.
The issue is not in the lines
# If this line is uncommented I get expected value around 2.3
# logits = tf.nn.softmax(logits)
Images in the CIFAR-10 dataset are RGB, so pixel values lie in the range [0, 256). Fed unnormalized into a randomly initialized network, they produce very large logits, the softmax saturates, and the negative log probability of the correct class blows up, which is why you see values like 108.9 instead of 2.3. If you divide your x_test by 255,
x_test = np.reshape(x_test, [-1, 32, 32, 3]).astype(np.float32) / 255
the values are rescaled to [0, 1] and tf.nn.sparse_softmax_cross_entropy_with_logits returns the expected values.
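As a quick sanity check of the expected number (plain arithmetic, not part of the original answer):

import numpy as np

# With 10 classes and a diffuse probability of 0.1 on each, the initial
# softmax cross-entropy should be about -ln(0.1).
print(-np.log(0.1))   # 2.3025850929940455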

Error when using tf.get_variable as alternativ for tf.Variable in Tensorflow

Hi, I'm new to neural networks and I'm currently working with TensorFlow.
First I did the MNIST tutorial, which worked quite well. Now I want to deepen my understanding by building my own network for CIFAR-10 in Google Colab. For this purpose I wrote the following code:
def conv2d(input, size, inputDim, outputCount):
    with tf.variable_scope("conv2d"):
        ## -> This area causes problems <- ##
        ########## variant1
        weight = tf.Variable(tf.truncated_normal([size, size, inputDim, outputCount], stddev=0.1), name="weight")
        bias = tf.Variable(tf.constant(0.1, shape=[outputCount]), name="bias")
        ########## variant2
        weight = tf.get_variable("weight", tf.truncated_normal([size, size, inputDim, outputCount], stddev=0.1))
        bias = tf.get_variable("bias", tf.constant(0.1, shape=[outputCount]))
        ##################
        conv = tf.nn.relu(tf.nn.conv2d(input, weight, strides=[1, 1, 1, 1], padding='SAME') + bias)
        return conv

def maxPool(conv2d): ...

def fullyConnect(input, inputSize, outputCount, relu):
    with tf.variable_scope("fullyConnect"):
        ## -> This area causes problems <- ##
        ########## variant1
        weight = tf.Variable(tf.truncated_normal([inputSize, outputCount], stddev=0.1), name="weight")
        bias = tf.Variable(tf.constant(0.1, shape=[outputCount]), name="bias")
        ########## variant2
        weight = tf.get_variable("weight", tf.truncated_normal([inputSize, outputCount], stddev=0.1))
        bias = tf.get_variable("bias", tf.constant(0.1, shape=[outputCount]))
        ##################
        fullyIn = tf.reshape(input, [-1, inputSize])
        fullyCon = fullyIn
        if relu:
            fullyCon = tf.nn.relu(tf.matmul(fullyIn, weight) + bias)
        return fullyCon
# Model definition
def getVGG16A(grafic, width, height, dim):
    with tf.name_scope("VGG16A"):
        img = tf.reshape(grafic, [-1, width, height, dim])
        with tf.name_scope("Layer1"):
            with tf.variable_scope("Layer1"):
                with tf.variable_scope("conv1"):
                    l1_c = conv2d(img, 3, dim, 64)
                with tf.variable_scope("mp1"):
                    l1_mp = maxPool(l1_c)  # 32 -> 16
        with tf.name_scope("Layer2"):
            with tf.variable_scope("Layer2"):
                with tf.variable_scope("conv1"):
                    l2_c = conv2d(l1_mp, 3, 64, 128)
                with tf.variable_scope("mp1"):
                    l2_mp = maxPool(l2_c)  # 16 -> 8
        with tf.name_scope("Layer6"):
            with tf.variable_scope("Layer6"):
                with tf.variable_scope("fully1"):
                    L6_fc1 = fullyConnect(l2_mp, 8 * 8 * 128, 1024, True)
                with tf.variable_scope("fully2"):
                    L6_fc2 = fullyConnect(L6_fc1, 1024, 1024, True)
                keep_prob = tf.placeholder(tf.float32)
                drop = tf.nn.dropout(L6_fc2, keep_prob)
                with tf.variable_scope("fully3"):
                    L6_fc3 = fullyConnect(drop, 1024, 3, False)
        return L6_fc3, keep_prob

x = tf.placeholder(tf.float32, [None, 3072])  # input
y_ = tf.placeholder(tf.float32, [None, 3])    # output

# Build the graph for the deep net
y_conv, keep_prob = getVGG16A(x, 32, 32, 3)  # create the model

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for batch in getBatchData(prep_filter_dataBatch1, 2):  # a self-written method for custom batch return
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.8})
    print('test accuracy %g' % accuracy.eval(feed_dict={
        x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
For the definition of the TensorFlow variables I first used variant 1 (tf.Variable).
This caused the graphics memory to overflow after repeated execution.
Then I used variant 2 (tf.get_variable). If I have understood the documentation correctly, this should reuse already existing variables if they exist.
But as soon as I do this I get the following error message:
TypeError: Tensor objects are not iterable when eager execution is not enabled. To iterate over this tensor use tf.map_fn.
I've been looking the whole day, but I haven't found an explanation for this.
Now I hope that there is someone here who can explain to me why this is not possible, or where I can find further information. The error message is getting me nowhere. I don't just want a ready-made solution, because I want to and have to understand this; I'm going to write my bachelor thesis in the field of CNNs.
Why can I use tf.Variable but not tf.get_variable, which should do the same?
Thanks for the help,
best regards, Pascal :)
I found my mistake: I forgot the initializer keyword.
The correct line looks like this:
weight = tf.get_variable("weight",initializer=tf.truncated_normal([size, size, inputDim, outputCount], stddev=anpassung))

How to save and restore a lstm trained model in Tensorflow using Saver?

I have saved a trained LSTM model and I want to restore the prediction part to use it in testing. I was trying to follow this post, but I am getting errors. Here is what I tried:
x = tf.placeholder(tf.float32, [None, input_vec_size, 1])
y = tf.placeholder(tf.float32)

def recurrent_neural_network(x):
    layer = {'weights': tf.Variable(tf.random_normal([n_hidden, n_classes])),
             'biases': tf.Variable(tf.random_normal([n_classes]))}
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, 1])
    x = tf.split(x, input_vec_size, 0)
    lstm_cell = rnn.BasicLSTMCell(n_hidden, state_is_tuple=True)
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
    output = tf.add(tf.matmul(outputs[-1], layer['weights']), layer['biases'])
    return output

def train_neural_network(x):
    prediction = recurrent_neural_network(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... training ...
        saver.save(sess, os.path.join(os.getcwd(), 'my_test_model'))
After that, in the testing phase, I am trying:
def test_neural_network(input_data):
    with tf.Session() as sess:
        # sess.run(tf.global_variables_initializer())
        new_saver = tf.train.import_meta_graph('my_test_model.meta')
        new_saver.restore(sess, tf.train.latest_checkpoint('./'))
        prediction = tf.get_default_graph().get_tensor_by_name("prediction:0")
        # ... calculate features from input_data ...
        result = sess.run(tf.argmax(prediction.eval(feed_dict={x: features}), 1))
But this throws the following error:
KeyError: "The name 'prediction:0' refers to a Tensor which does not exist. The operation, 'prediction', does not exist in the graph."
Then I tried adding
tf.add_to_collection('prediction', prediction)
before saving, and replacing the lookup with
prediction = tf.get_collection('prediction')[0]
after restoring. But this gives me the following error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype float and shape [?,34,1]
[[Node: Placeholder_2 = Placeholderdtype=DT_FLOAT, shape=[?,34,1], _device="/job:localhost/replica:0/task:0/cpu:0"]]
I know that for the first error I am supposed to assign a name in order to restore it, but prediction is not a TensorFlow variable. I went through a few previous posts and articles but was unable to come up with a working solution. So, my questions are:
Am I doing something conceptually wrong? If so, what?
If not, is there an implementation error? And how do I solve it?
Thanks.
I was finally able to save my trained model, so I am posting an answer in case anyone comes across this question. I did not find a solution for the exact problem, but I could build and save my model using tflearn. To train and store:
model = tflearn.DNN(lstm_model(n_classes, input_vec_size))
model.fit(train_x, train_y, validation_set=(test_x, test_y), n_epoch=20,
          show_metric=True, snapshot_epoch=True, run_id='lstm_model')
model.save("../Models/lstm_model")
And later, to restore:
model.load(filepath+"lstm_model")
This turned out to be a far easier way to work with the model, and it provides a compact way to do the same task I described in the question.
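For reference, the original KeyError can also be avoided in plain TensorFlow by giving the prediction op (and the input placeholder) explicit names before saving, then looking them up in the restored graph. A hedged sketch, with names chosen for illustration rather than taken from the post:

# At build/training time: name the tensors you will want back after restoring.
x = tf.placeholder(tf.float32, [None, input_vec_size, 1], name="x")
prediction = tf.identity(recurrent_neural_network(x), name="prediction")
saver = tf.train.Saver()
# ... train, then: saver.save(sess, os.path.join(os.getcwd(), 'my_test_model'))

# At test time: restore the graph and fetch the tensors by name.
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('my_test_model.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
    restored_x = graph.get_tensor_by_name("x:0")
    restored_prediction = graph.get_tensor_by_name("prediction:0")
    result = sess.run(tf.argmax(restored_prediction, 1), feed_dict={restored_x: features})

Feeding the placeholder that belongs to the restored graph, rather than a freshly created one, is likely also what the "You must feed a value for placeholder tensor" error was pointing at.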

How does one use the official Batch Normalization layer in TensorFlow?

I was trying to use batch normalization to train my neural networks with TensorFlow, but it was unclear to me how to use the official layer implementation of batch normalization (note this is different from the one in the API).
After some painful digging through their GitHub issues, it seems one needs a tf.cond to use it properly, and also a reuse=True flag so that the BN shift and scale variables are properly reused. After figuring that out I provided a small description of what I believe is the right way to use it here.
Now I have written a short script to test it (only a single layer and a ReLU, hard to make it smaller than this). However, I am not 100% sure how to test it. Right now my code runs with no error messages but returns NaNs unexpectedly, which lowers my confidence that the code I gave in the other post is right. Or maybe the network I have is weird. Either way, does someone know what's wrong? Here is the code:
import tensorflow as tf
# download and install the MNIST data automatically
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.layers.python.layers import batch_norm as batch_norm

def batch_norm_layer(x, train_phase, scope_bn):
    bn_train = batch_norm(x, decay=0.999, center=True, scale=True,
                          is_training=True,
                          reuse=None,  # is this right?
                          trainable=True,
                          scope=scope_bn)
    bn_inference = batch_norm(x, decay=0.999, center=True, scale=True,
                              is_training=False,
                              reuse=True,  # is this right?
                              trainable=True,
                              scope=scope_bn)
    z = tf.cond(train_phase, lambda: bn_train, lambda: bn_inference)
    return z
def get_NN_layer(x, input_dim, output_dim, scope, train_phase):
    with tf.name_scope(scope+'vars'):
        W = tf.Variable(tf.truncated_normal(shape=[input_dim, output_dim], mean=0.0, stddev=0.1))
        b = tf.Variable(tf.constant(0.1, shape=[output_dim]))
    with tf.name_scope(scope+'Z'):
        z = tf.matmul(x, W) + b
    with tf.name_scope(scope+'BN'):
        if train_phase is not None:
            z = batch_norm_layer(z, train_phase, scope+'BN_unit')
    with tf.name_scope(scope+'A'):
        a = tf.nn.relu(z)  # (M x D1) = (M x D) * (D x D1)
    return a

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# placeholder for data
x = tf.placeholder(tf.float32, [None, 784])
# placeholder that turns BN on during training and off during inference
train_phase = tf.placeholder(tf.bool, name='phase_train')
# variables for parameters
hiden_units = 25
layer1 = get_NN_layer(x, input_dim=784, output_dim=hiden_units, scope='layer1', train_phase=train_phase)
# create model
W_final = tf.Variable(tf.truncated_normal(shape=[hiden_units, 10], mean=0.0, stddev=0.1))
b_final = tf.Variable(tf.constant(0.1, shape=[10]))
y = tf.nn.softmax(tf.matmul(layer1, W_final) + b_final)

### training
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    steps = 3000
    for iter_step in xrange(steps):
        #feed_dict_batch = get_batch_feed(X_train, Y_train, M, phase_train)
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # Collect model statistics
        if iter_step % 1000 == 0:
            batch_xstrain, batch_ystrain = batch_xs, batch_ys  # simulates train data
            batch_xcv, batch_ycv = mnist.test.next_batch(5000)  # simulates CV data
            batch_xtest, batch_ytest = mnist.test.next_batch(5000)  # simulates test data
            # do inference
            train_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xs, y_: batch_ys, train_phase: False})
            cv_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xcv, y_: batch_ycv, train_phase: False})
            test_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xtest, y_: batch_ytest, train_phase: False})

            def do_stuff_with_errors(*args):
                print args

            do_stuff_with_errors(train_error, cv_error, test_error)
        # Run Train Step
        sess.run(fetches=train_step, feed_dict={x: batch_xs, y_: batch_ys, train_phase: True})

    # list of booleans indicating correct predictions
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    # accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, train_phase: False}))
when I run it I get:
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
(2.3474066, 2.3498712, 2.3461707)
(0.49414295, 0.88536006, 0.91152304)
(0.51632041, 0.393666, nan)
0.9296
It used to be that all of the last values were NaN; now only a few of them are. Is everything fine or am I being paranoid?
I am not sure if this will solve your problem; the documentation for BatchNorm is not very easy to use or informative, so here is a short recap of how to use simple BatchNorm:
First of all, you define your BatchNorm layer. If you want to use it after an affine/fully-connected layer, you do this (just an example; the order can be different, as you desire):
...
inputs = tf.matmul(inputs, W) + b
inputs = tf.layers.batch_normalization(inputs, training=is_training)
inputs = tf.nn.relu(inputs)
...
The function tf.layers.batch_normalization creates update ops for its moving mean and variance. These are internal variables, and their updates are collected in tf.GraphKeys.UPDATE_OPS rather than run automatically, so you must attach them to your optimizer step with a control dependency, as follows (after all layers have been defined!):
...
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    trainer = tf.train.AdamOptimizer()
    updateModel = trainer.minimize(loss, global_step=global_step)
...
You can read more about it here. I know it's a little late to answer your question, but it might help other people coming across BatchNorm problems in tensorflow! :)
training = tf.placeholder(tf.bool, name='training')
lr_holder = tf.placeholder(tf.float32, [], name='learning_rate')

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = tf.train.AdamOptimizer(learning_rate=lr_holder).minimize(cost)
When defining the layers, you need to use the training placeholder:
batchNormal_layer = tf.layers.batch_normalization(pre_batchNormal_layer, training=training)
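Putting the pieces together, here is a minimal hedged sketch of the whole pattern; the dense layers, loss, and dummy batch are illustrative assumptions, not code from either answer:

import numpy as np
import tensorflow as tf

# Placeholders: data, labels, and the boolean that switches BN between
# batch statistics (training) and moving averages (inference).
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.int64, [None])
training = tf.placeholder(tf.bool, name='training')

# A small network: affine -> batch norm -> ReLU -> affine.
h = tf.layers.dense(x, 128)
h = tf.layers.batch_normalization(h, training=training)
h = tf.nn.relu(h)
logits = tf.layers.dense(h, 10)

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_, logits=logits))

# The moving-average updates live in UPDATE_OPS; run them with every train step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Dummy batch, just to show the feed pattern.
    xb = np.random.rand(32, 784).astype(np.float32)
    yb = np.random.randint(0, 10, size=32)
    sess.run(train_step, feed_dict={x: xb, y_: yb, training: True})
    val_loss = sess.run(loss, feed_dict={x: xb, y_: yb, training: False})
    print("loss after one step:", val_loss)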
