Tensorflow MNIST tutorial code error - python

I've searched the web for answers but none helped (all of the issues encountered by others were due to syntax errors or an outdated TensorFlow version), so I decided to ask myself - here I am.
I'm trying to run code from Tensorflow MNIST tutorial:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y),
                                              reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                    y_: mnist.test.labels}))
And I'm getting this error as a result:
InvalidArgumentError: You must feed a value for placeholder tensor
'Placeholder_6' with dtype float and shape [?,784]
[[Node: Placeholder_6 = Placeholder[dtype=DT_FLOAT, shape=[?,784],
_device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
My TensorFlow version is 1.4.0, and the whole code seems to be exactly the same as the one in the tutorial.

It seems like it was some sort of variable conflict: after restarting my IDE the code finally ran with no errors - I leave this here for anyone having the same trouble.
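A likely explanation (my assumption, based on the error naming 'Placeholder_6' rather than 'Placeholder'): re-running the script in the same interactive session keeps adding new nodes to the default graph, so some ops end up wired to an old placeholder that never gets fed. A minimal sketch of how to guard against that without restarting the IDE:
import tensorflow as tf

# Drop any nodes left over from previous runs in this interactive session,
# so every placeholder and variable below is created exactly once.
tf.reset_default_graph()

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
# ... rebuild the rest of the model here, then create the session and train ...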

Related

Arguments to tensorflow session.run() - do you pass operations?

I'm following this tutorial for tensorflow:
I'm trying to understand the arguments to tf.session.run(). I understand that you have to run operations in a graph in a session.
Is train_step passed in because it encapsulates all the operations of the network in this particular example? I'm trying to understand why I don't need to pass any other variables to the session like cross_entropy.
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Here is the full code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(10):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
In a TensorFlow Session tf.Session, you want to run (or execute) the optimizer operation (in this case it is train_step). The optimizer minimizes your loss function (in this case cross_entropy), which is evaluated or computed using the model hypothesis y.
In the cascade approach, the cross_entropy loss function minimizes the error made when computing y, so it finds the values of the weights W that, when combined with x, most accurately approximate the true labels y_.
So, using a TensorFlow Session object tf.Session as sess, we run the optimizer train_step, which then evaluates the entire computational graph.
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Because the cascade approach ultimately calls cross_entropy, which makes use of the placeholders x and y_, you have to use the feed_dict to pass data to those placeholders.
As you mentioned, TensorFlow is used to build a graph of operations. Your train_step operation (i.e. "minimize by gradient descent") depends on the result of cross_entropy. cross_entropy itself relies on the results of y (the softmax operation) and y_ (the data assignment); and so on.
When you call sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}), you are basically asking TensorFlow to "run all the operations leading to train_step, and return its result (with x = batch_xs and y_ = batch_ys as input)". So yes, TensorFlow will itself go through your graph backward to figure out the operation/input dependencies of train_step, then execute all of these operations forward to return what you asked for.
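As a small aside (a sketch, not part of the tutorial itself): sess.run() also accepts a list of fetches, so if you do want the loss value back, you can ask for cross_entropy in the same call:
# Run the training op and fetch the current loss value in one call.
# train_step itself produces no value, so its slot comes back as None.
_, loss_value = sess.run([train_step, cross_entropy],
                         feed_dict={x: batch_xs, y_: batch_ys})
print("loss:", loss_value)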

Define variables to load MNIST images app

I'm learning ML with TensorFlow and I have a simple MNIST model.
This is the model code, following the official tutorial
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Now, I'd like to practice with this model by passing number images.
So, the question is: how can I define a variable (in Python) that loads an image (.bmp, .jpg, .png...)? The idea is to first practice with local files on my computer, and later be able to send image data from a client (presumably via JSON in a REST API fashion) to the model, to show the prediction of which number appears in the image.
Use the Python PIL package to load the image:
from PIL import Image
im = Image.open("bride.jpg")
Convert it to a NumPy array; everything you feed to TensorFlow placeholders has to be in NumPy (or NumPy-convertible) format:
import numpy as np
img3d = np.array(im)
These images need to end up shaped like MNIST inputs, i.e. [28, 28] or [28, 28, 1]; reshape them to a flat 784-element row:
img_flat = np.reshape(img3d, (1, 784))
You'll want to batch a number of these together; use np.vstack to combine them into a [batch, 784] array.
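Putting the pieces together, a minimal sketch (the file names and the load_digit helper are my own illustrative assumptions, and it assumes the images are already white-digit-on-black like MNIST):
from PIL import Image
import numpy as np

def load_digit(path):
    # Convert to grayscale, resize to 28x28, scale to [0, 1] like MNIST,
    # and flatten into a single row of 784 values.
    im = Image.open(path).convert("L").resize((28, 28))
    arr = np.asarray(im, dtype=np.float32) / 255.0
    return arr.reshape(1, 784)

batch = np.vstack([load_digit("three.png"), load_digit("seven.png")])
predictions = sess.run(y, feed_dict={x: batch})  # shape [2, 10]
print(np.argmax(predictions, axis=1))            # predicted digit for each image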

modifying softmax function in tensorflow

I started using TensorFlow about a week ago, so I'm not sure which API I can use.
Currently I'm using basic mnist number recognition code.
I want to test how the recognition accuracy of this code changes if I modify the softmax function from floating-point calculation to fixed-point calculation.
At first I tried to modify the library, but it was too complicated. So I think I have to read the tensors, modify (recalculate) them in array form, and turn them back into tensors, using something like tf.Session().eval().
Which function should I use?
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
#temp = tf.Variable(tf.zeros([784, 10]))
temp = tf.Variable(tf.matmul(x, W) + b)
#temp = tf.add(tf.matmul(x, W),b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
#print(temp[500])
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
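One possible approach, purely as a sketch under my own assumptions (the fixed_point helper and the 8-fractional-bit scheme are illustrative, not part of the tutorial): fetch the logits as a NumPy array with sess.run, quantize them and compute the softmax in NumPy, then feed the result back through a placeholder to measure accuracy:
import numpy as np

logits = tf.matmul(x, W) + b                      # pre-softmax values
y_fixed = tf.placeholder(tf.float32, [None, 10])  # softmax computed outside the graph
correct_fixed = tf.equal(tf.argmax(y_fixed, 1), tf.argmax(y_, 1))
accuracy_fixed = tf.reduce_mean(tf.cast(correct_fixed, tf.float32))

def fixed_point(a, frac_bits=8):
    # Round to the nearest multiple of 2**-frac_bits (illustrative quantization).
    scale = 2.0 ** frac_bits
    return np.round(a * scale) / scale

# sess.run returns a plain NumPy array that can be modified freely.
logits_np = fixed_point(sess.run(logits, feed_dict={x: mnist.test.images}))
exp = np.exp(logits_np)
softmax_np = fixed_point(exp / np.sum(exp, axis=1, keepdims=True))
print(sess.run(accuracy_fixed, feed_dict={y_fixed: softmax_np,
                                          y_: mnist.test.labels}))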

Tensorflow. Kernel died when training. Window Anaconda

# Import data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)
# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
k = tf.matmul(x, W) + b
y = tf.nn.softmax(k)
i = 0
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
learning_rate = 0.5
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=k))
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
print ("Training")
sess = tf.Session()
init = tf.global_variables_initializer() #.run()
sess.run(init)
for _ in range(1000):
    print(i)
    batch_xs, batch_ys = mnist.train.next_batch(100)
    print(i)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    print(i)
    i = i + 1
print ('b is ',sess.run(b))
print('W is',sess.run(W))
Explanation:
This is MNIST code using softmax.
The problem appears at
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
in the for loop.
The kernel just dies and restarts, without any error message.
The code itself is probably not the problem, because it works fine for another person.
I'm using Windows 10 Anaconda.
What is the problem?
I came across a similar problem to yours. It is likely that you installed CUDA and cuDNN and are running the code on tensorflow-gpu.
In my case, I first installed CUDA 8.0 and cuDNN v6.0 for CUDA 8.0, and got the kernel-died problem.
Then I changed the cuDNN version to cuDNN v5.1 for CUDA 8.0, which solved the problem. My environment now works fine.
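If you want to confirm whether TensorFlow is actually running on the GPU before reinstalling anything, a quick check (just a sketch; it lists the devices TensorFlow has registered) is:
from tensorflow.python.client import device_lib

# With a broken CUDA/cuDNN setup you will often see only the CPU listed here.
print(device_lib.list_local_devices())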

computation of test data in tensorflow tutorial

I was going through the tutorial of tensorflow-
https://www.tensorflow.org/versions/r0.9/tutorials/mnist/beginners/index.html
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10])) #weights
b = tf.Variable(tf.zeros([10])) #bias
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Towards the very end, we pass test data to the placeholders. y_ is the matrix containing the true values, and y is the matrix with the predicted values. My question is: when is y computed for the test data? The W matrix has been trained by backpropagation, but this trained matrix must be multiplied with the new input x (the test data) to give the prediction y. Where does this happen?
Normally I have seen sequential execution of code, and in the last few lines y isn't called explicitly.
accuracy depends on correct_prediction which depends on y.
So when you call sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}), y is computed before accuracy is computed. All of this happens inside the TensorFlow graph.
The TensorFlow graph is the same for train and test. The only difference is the data you feed to the placeholders x and y_.
y is computed here:
y = tf.nn.softmax(tf.matmul(x, W) + b) # Line 7
Specifically, what you are looking for is within that line:
tf.matmul(x, W) + b
the output of which is put through the softmax function to identify the class.
This is computed in each of the 1000 passes through the graph; each time, the variables W and b are updated by gradient descent, and y is computed and compared against y_ to determine the loss.
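If you want to see that step happen explicitly rather than implicitly inside the accuracy op, you can fetch y itself (a small sketch reusing the session and placeholders defined above):
import numpy as np

# Ask the session for the predictions directly; the trained W and b are
# multiplied with the test images inside the graph when this runs.
test_predictions = sess.run(y, feed_dict={x: mnist.test.images})
predicted_digits = np.argmax(test_predictions, axis=1)
true_digits = np.argmax(mnist.test.labels, axis=1)
print("accuracy:", np.mean(predicted_digits == true_digits))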
