TensorFlow - Testing an MNIST neural net with my own images - Python

I'm trying to write a script that will allow me to draw an image of a digit and then determine what digit it is with a model trained on MNIST.
Here is my code:
import numpy as np
import scipy.ndimage
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets( "MNIST_data/", one_hot=True )
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(1000)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
print("done with training")
data = np.ndarray.flatten(scipy.ndimage.imread("im_01.jpg", flatten=True))
result = sess.run(tf.argmax(y, 1), feed_dict={x: [data]})
print(' '.join(map(str, result)))
For some reason the results are always wrong, even though the model reaches about 92% accuracy when I use the standard testing method.
I think the problem might be how I encoded the image:
data = np.ndarray.flatten(scipy.ndimage.imread("im_01.jpg", flatten=True))
I tried looking at the TensorFlow code for the next_batch() function to see how they did it, but I have no idea how to compare it against my approach.
The problem might be somewhere else too.
Any help to make the accuracy 80+% would be greatly appreciated.

I found my mistake: the image was encoded in reverse, with black pixels at 255 instead of 0.
data = np.vectorize(lambda x: 255 - x)(np.ndarray.flatten(scipy.ndimage.imread("im_01.jpg", flatten=True)))
Fixed it.
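For reference, here is a minimal sketch of that preprocessing, assuming a dark digit on a white background and that Pillow and numpy are available (scipy.ndimage.imread has since been removed from SciPy, so PIL is used instead); the inversion and the 0-1 scaling are the two steps that matter, since the MNIST training data is scaled to [0, 1]:
from PIL import Image
import numpy as np
# load as grayscale and resize to the 28x28 input MNIST expects
img = Image.open("im_01.jpg").convert("L").resize((28, 28))
arr = np.asarray(img, dtype=np.float32)
# invert (MNIST digits are bright on a dark background) and scale to [0, 1]
data = (255.0 - arr.flatten()) / 255.0
result = sess.run(tf.argmax(y, 1), feed_dict={x: [data]})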

Related

TensorFlow MNIST example with my own get_next_minibatch

I just started using TensorFlow and followed the tutorial example on the MNIST dataset. It went well; I got around 90% accuracy.
But after I replaced next_batch with my own version, the results were much worse, usually around 50%.
Instead of using the data TensorFlow downloads and parses, I downloaded the dataset from this website and used numpy/pandas to get what I want:
import numpy as np
import pandas as pd
df = pd.read_csv('mnist_train.csv', header=None)
X = df.drop(0, 1)
Y = df[0]
# one-hot encode the labels
temp = np.zeros((Y.size, Y.max() + 1))
temp[np.arange(Y.size), Y] = 1
np.save('X', X)
np.save('Y', temp)
I do the same thing to the test data, then follow the tutorial; nothing else is changed:
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
X = np.load('X.npy')
Y = np.load('Y.npy')
X_test = np.load('X_test.npy')
Y_test = np.load('Y_test.npy')
BATCHES = 1000
W = tf.Variable(tf.truncated_normal([784,10], stddev=0.1))
# W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
Right here is my own get_mini_batch: I shuffle the original data's indices, then take one batch out of it each time, which seems to be exactly what the example code does. The only difference is that I throw away some of the data at the tail.
pos = 0
idx = np.arange(X.shape[0])
np.random.shuffle(idx)
for _ in range(1000):
    batch_xs, batch_ys = X[idx[range(pos, pos+BATCHES)], :], Y[idx[range(pos, pos+BATCHES)], ]
    if pos + BATCHES >= X.shape[0]:
        pos = 0
        idx = np.arange(X.shape[0])
        np.random.shuffle(idx)
    pos += BATCHES
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
print(sess.run(accuracy, feed_dict={x: X_test, y_: Y_test}))
It confuses me why my version is way worse than the tutorial one.
As lejilot said, we should normalize the data before pushing it into the neural network.
See this post
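A minimal sketch of that normalization for the CSV-loaded data (assuming the raw pixel values are 0-255, as in the file above; the filenames follow the earlier code in the question):
X = np.load('X.npy').astype(np.float32) / 255.0  # scale pixels to [0, 1], matching the tutorial's data
X_test = np.load('X_test.npy').astype(np.float32) / 255.0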

Tensorflow: Print value when Session running

First of all, I'm very new to both Python and TensorFlow.
I'm trying the demo from this link: https://www.tensorflow.org/get_started/mnist/beginners
and it runs well.
However, I would like to debug (or log) the values of some placeholders and variables that change when I run Session.run().
Could you please show me a way to "debug" or log them while the Session is running in the loops?
Here is my code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnist/", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y1 = tf.add(tf.matmul(x,W),b)
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
cross_entropy1 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y1, labels=y_))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy1)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.argmax(y, 1), feed_dict={x: mnist.test.images, y_: mnist.test.labels})
In this script, I would like to log the value of y and tf.argmax(y, 1) for each test image processed.
mrry answered it best in this Stack Overflow answer: https://stackoverflow.com/a/33633839/6487788
The part of his answer that covers exactly what you are asking (printing during sess.run) is this:
To print the value of a tensor without returning it to your Python program, you can use the tf.Print() op, as And suggests in another answer. Note that you still need to run part of the graph to see the output of this op, which is printed to standard output. If you're running distributed TensorFlow, the tf.Print() op will print its output to the standard output of the task where that op runs.
For the argmax, this would be the following code (tf.Print takes the tensor to pass through plus a list of tensors to print):
argmaxy = tf.Print(tf.argmax(y, 1), [tf.argmax(y, 1)], message="argmax(y) = ")
correct_prediction = tf.equal(argmaxy, tf.argmax(y_, 1))
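As a minimal end-to-end sketch based on the question's code (the message strings and summarize=10, which limits how many elements are printed, are just illustrative choices):
# wrap the tensors you want to log; their values go to standard output whenever the ops run
y_logged = tf.Print(y, [y], message="y = ", summarize=10)
argmaxy = tf.Print(tf.argmax(y_logged, 1), [tf.argmax(y_logged, 1)], message="argmax(y) = ", summarize=10)
correct_prediction = tf.equal(argmaxy, tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# evaluating accuracy forces the tf.Print ops to execute, so the values are printed for this run
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))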
Good luck!
While @rmeerten's answer is correct, you can also consider using TensorBoard, which can be a useful tool for debugging your models and seeing what's happening. For background, you can also check out the TensorBoard session from the TensorFlow Dev Summit.
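For instance, a minimal sketch of logging values to TensorBoard (the log directory ./logs is a hypothetical choice, and the accuracy op is assumed to be defined before the training loop):
tf.summary.histogram("y", y)             # track the distribution of the predictions
tf.summary.scalar("accuracy", accuracy)  # track a scalar over training
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs", sess.graph)
# inside the training loop, with i as the step counter:
summary, _ = sess.run([merged, train_step], feed_dict={x: batch_xs, y_: batch_ys})
writer.add_summary(summary, global_step=i)
# then inspect with: tensorboard --logdir ./logs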

How does one use the official Batch Normalization layer in TensorFlow?

I was trying to use batch normalization to train my neural networks using TensorFlow, but it was unclear to me how to use the official layer implementation of Batch Normalization (note this is different from the one from the API).
After some painful digging through their GitHub issues, it seems that one needs a tf.cond to use it properly, and also a 'reuse=True' flag so that the BN shift and scale variables are properly reused. After figuring that out, I provided a small description of what I believe is the right way to use it here.
Now I have written a short script to test it (only a single layer and a ReLU, hard to make it smaller than this). However, I am not 100% sure how to test it. Right now my code runs with no error messages but returns NaNs unexpectedly, which lowers my confidence that the code I gave in the other post is right. Or maybe the network I have is weird. Either way, does someone know what's wrong? Here is the code:
import tensorflow as tf
# download and install the MNIST data automatically
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.layers.python.layers import batch_norm as batch_norm
def batch_norm_layer(x, train_phase, scope_bn):
    bn_train = batch_norm(x, decay=0.999, center=True, scale=True,
                          is_training=True,
                          reuse=None,  # is this right?
                          trainable=True,
                          scope=scope_bn)
    bn_inference = batch_norm(x, decay=0.999, center=True, scale=True,
                              is_training=False,
                              reuse=True,  # is this right?
                              trainable=True,
                              scope=scope_bn)
    z = tf.cond(train_phase, lambda: bn_train, lambda: bn_inference)
    return z
def get_NN_layer(x, input_dim, output_dim, scope, train_phase):
    with tf.name_scope(scope+'vars'):
        W = tf.Variable(tf.truncated_normal(shape=[input_dim, output_dim], mean=0.0, stddev=0.1))
        b = tf.Variable(tf.constant(0.1, shape=[output_dim]))
    with tf.name_scope(scope+'Z'):
        z = tf.matmul(x, W) + b
    with tf.name_scope(scope+'BN'):
        if train_phase is not None:
            z = batch_norm_layer(z, train_phase, scope+'BN_unit')
    with tf.name_scope(scope+'A'):
        a = tf.nn.relu(z)  # (M x D1) = (M x D) * (D x D1)
    return a
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# placeholder for data
x = tf.placeholder(tf.float32, [None, 784])
# placeholder that turns BN during training or off during inference
train_phase = tf.placeholder(tf.bool, name='phase_train')
# variables for parameters
hiden_units = 25
layer1 = get_NN_layer(x, input_dim=784, output_dim=hiden_units, scope='layer1', train_phase=train_phase)
# create model
W_final = tf.Variable(tf.truncated_normal(shape=[hiden_units, 10], mean=0.0, stddev=0.1))
b_final = tf.Variable(tf.constant(0.1, shape=[10]))
y = tf.nn.softmax(tf.matmul(layer1, W_final) + b_final)
### training
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean( -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]) )
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    steps = 3000
    for iter_step in xrange(steps):
        #feed_dict_batch = get_batch_feed(X_train, Y_train, M, phase_train)
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # Collect model statistics
        if iter_step % 1000 == 0:
            batch_xstrain, batch_ystrain = batch_xs, batch_ys  # simulates train data
            batch_xcv, batch_ycv = mnist.test.next_batch(5000)  # simulates CV data
            batch_xtest, batch_ytest = mnist.test.next_batch(5000)  # simulates test data
            # do inference
            train_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xs, y_: batch_ys, train_phase: False})
            cv_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xcv, y_: batch_ycv, train_phase: False})
            test_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xtest, y_: batch_ytest, train_phase: False})
            def do_stuff_with_errors(*args):
                print args
            do_stuff_with_errors(train_error, cv_error, test_error)
        # Run Train Step
        sess.run(fetches=train_step, feed_dict={x: batch_xs, y_: batch_ys, train_phase: True})
    # list of booleans indicating correct predictions
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    # accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, train_phase: False}))
When I run it, I get:
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
(2.3474066, 2.3498712, 2.3461707)
(0.49414295, 0.88536006, 0.91152304)
(0.51632041, 0.393666, nan)
0.9296
It used to be that all of the last values were NaN, and now only a few of them are. Is everything fine, or am I being paranoid?
I am not sure if this will solve your problem; the documentation for BatchNorm is not very easy to use or informative, so here is a short recap of how to use simple BatchNorm:
First of all, you define your BatchNorm layer. If you want to use it after an affine/fully-connected layer, you do this (just an example; the order can be different, as you desire):
...
inputs = tf.matmul(inputs, W) + b
inputs = tf.layers.batch_normalization(inputs, training=is_training)
inputs = tf.nn.relu(inputs)
...
The function tf.layers.batch_normalization creates update ops for its internal moving-average variables. These ops are collected in tf.GraphKeys.UPDATE_OPS and have to run alongside the training step, so you must define your optimizer as follows (after all layers have been defined!):
...
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    trainer = tf.train.AdamOptimizer()
    updateModel = trainer.minimize(loss, global_step=global_step)
...
You can read more about it here. I know it's a little late to answer your question, but it might help other people coming across BatchNorm problems in tensorflow! :)
training = tf.placeholder(tf.bool, name='training')
lr_holder = tf.placeholder(tf.float32, [], name='learning_rate')
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = tf.train.AdamOptimizer(learning_rate=lr_holder).minimize(cost)
When defining the layers, you need to use the placeholder 'training':
batchNormal_layer = tf.layers.batch_normalization(pre_batchNormal_layer, training=training)
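Putting it together, a minimal sketch of how the 'training' placeholder would be fed (the names follow the snippet above; a session sess, mnist, x, y_, and an accuracy op are assumed to be defined as in the question, and 0.001 is just an example learning rate):
# training: run the optimizer (and, via the control dependency, the BN update ops) with training=True
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(optimizer, feed_dict={x: batch_xs, y_: batch_ys, training: True, lr_holder: 0.001})
# inference/evaluation: training=False, so the stored moving statistics are used instead of batch statistics
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, training: False}))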

TensorFlow: parameters do not update when training

I'm implementing a classification model using TensorFlow.
The problem that I'm facing is that my weights and error are not being updated when I run the training step. As a result, my network keeps returning the same results.
I've developed my model based on the MNIST example from the TensorFlow website.
import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
#load dataset
dataset = np.loadtxt('char8k.txt', dtype='float', comments='#', delimiter=",")
Y = np.asmatrix( dataset[:,0] )
X = np.asmatrix( dataset[:,1:1201] )
m = 11527
labels = 26
# y is update to 11527x26
Yt = np.zeros((m,labels))
for i in range(0, m):
    index = Y[0, i] - 1
    Yt[i, index] = 1
Y = Yt
Y = np.asmatrix(Y)
#------------------------------------------------------------------------------
#graph settings
x = tf.placeholder(tf.float32, shape=[None, 1200])
y_ = tf.placeholder(tf.float32, shape=[None, 26])
Wtest = tf.Variable(tf.truncated_normal([1200,26], stddev=0.001))
W = tf.Variable(tf.truncated_normal([1200,26], stddev=0.001))
b = tf.Variable(tf.zeros([26]))
sess.run(tf.initialize_all_variables())
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
Wtest = W
for i in range(10):
    print("iteracao:")
    print(i)
    Xbatch = X[np.random.randint(X.shape[0], size=100), :]
    Ybatch = Y[np.random.randint(Y.shape[0], size=100), :]
    train_step.run(feed_dict={x: Xbatch, y_: Ybatch})
    print("atualizacao de pesos")
    print(Wtest == W)  # monitors the weight updates
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("precisao:Y")
    print(accuracy.eval(feed_dict={x: X, y_: Y}))
    print(" ")
    print(" ")
The issue probably arises from how you initialize the weight matrix, W. If it is initialized to all zeroes, all of the neurons will follow the same gradient in each step, which leads to the network not training. Replacing the line
W = tf.Variable(tf.zeros([1200,26]))
...with something like
W = tf.Variable(tf.truncated_normal([1200,26], stddev=0.001))
...should cause it to start training.
This question on the CrossValidated site has a good explanation of why you should not initialize all of your weights to zero.

How to test tensorflow cifar10 cnn tutorial model

I am relatively new to machine learning and currently have almost no experience developing it.
So my question is: after training and evaluating the CIFAR-10 dataset from the TensorFlow tutorial, how could one test it with sample images?
I was able to train and evaluate the ImageNet tutorial from the Caffe machine-learning framework, and it was relatively easy to use the trained model on custom applications through the Python API.
Any help would be very appreciated!
This isn't 100% the answer to the question, but it's a similar way of solving it, based on a MNIST NN training example suggested in the comments to the question.
Based on the TensorFlow beginner MNIST tutorial, and thanks to this tutorial, this is a way of training and using your neural network with custom data.
Please note that something similar should be done for tutorials such as CIFAR-10, as @Yaroslav Bulatov mentioned in the comments.
import input_data
import datetime
import numpy as np
import tensorflow as tf
import cv2
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
from random import randint
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x,W) + b)
y_ = tf.placeholder("float", [None,10])
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#Train our model
iter = 1000
for i in range(iter):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
#Evaluating our model:
correct_prediction=tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy=tf.reduce_mean(tf.cast(correct_prediction,"float"))
print "Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
#1: Using our model to classify a random MNIST image from the original test set:
num = randint(0, mnist.test.images.shape[0])
img = mnist.test.images[num]
classification = sess.run(tf.argmax(y, 1), feed_dict={x: [img]})
'''
#Uncomment this part if you want to plot the classified image.
plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
plt.show()
'''
print 'Neural Network predicted', classification[0]
print 'Real label is:', np.argmax(mnist.test.labels[num])
#2: Using our model to classify MNIST digit from a custom image:
# create an array where we can store 1 picture
images = np.zeros((1,784))
# and the correct values
correct_vals = np.zeros((1,10))
# read the image
gray = cv2.imread("my_digit.png", 0 ) #0=cv2.CV_LOAD_IMAGE_GRAYSCALE #must be .png!
# rescale it
gray = cv2.resize(255-gray, (28, 28))
# save the processed images
cv2.imwrite("my_grayscale_digit.png", gray)
"""
all images in the training set have a range from 0-1
and not from 0-255, so we divide our flattened image
(a one-dimensional vector with our 784 pixels)
to use the same 0-1 based range
"""
flatten = gray.flatten() / 255.0
"""
we need to store the flattened image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
images[0] = flatten
my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
"""
we want to run the prediction and the accuracy function
using our generated arrays (images and correct_vals)
"""
print 'Neural Network predicted', my_classification[0], "for your digit"
For further image conditioning (digits should be completely dark on a white background) and better NN training (accuracy > 91%), please check the Advanced MNIST tutorial from TensorFlow or the 2nd tutorial I've mentioned.
The example below is not for the MNIST tutorial, but a simple XOR example. Note the train() and test() methods. All that we declare and keep globally are the weights, biases, and session. In the test method we redefine the shape of the input and reuse the same weights and biases (and session) that we refined in training.
import tensorflow as tf
#parameters for the net
w1 = tf.Variable(tf.random_uniform(shape=[2,2], minval=-1, maxval=1, name='weights1'))
w2 = tf.Variable(tf.random_uniform(shape=[2,1], minval=-1, maxval=1, name='weights2'))
#biases
b1 = tf.Variable(tf.zeros([2]), name='bias1')
b2 = tf.Variable(tf.zeros([1]), name='bias2')
#tensorflow session
sess = tf.Session()
def train():
    #placeholders for the training inputs (4 inputs with 2 features each) and outputs (4 outputs which have a value of 0 or 1)
    x = tf.placeholder(tf.float32, [4, 2], name='x-inputs')
    y = tf.placeholder(tf.float32, [4, 1], name='y-inputs')
    #set up the model calculations
    temp = tf.sigmoid(tf.matmul(x, w1) + b1)
    output = tf.sigmoid(tf.matmul(temp, w2) + b2)
    #cost function is avg error over training samples
    cost = tf.reduce_mean(((y * tf.log(output)) + ((1 - y) * tf.log(1.0 - output))) * -1)
    #training step is gradient descent
    train_step = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
    #declare training data
    training_x = [[0,1], [0,0], [1,0], [1,1]]
    training_y = [[1], [0], [1], [0]]
    #init session
    init = tf.initialize_all_variables()
    sess.run(init)
    #training
    for i in range(100000):
        sess.run(train_step, feed_dict={x: training_x, y: training_y})
        if i % 1000 == 0:
            print(i, sess.run(cost, feed_dict={x: training_x, y: training_y}))
    print('\ntraining done\n')
def test(inputs):
    #redefine the shape of the input to a single unit with 2 features
    xtest = tf.placeholder(tf.float32, [1, 2], name='x-inputs')
    #redefine the model in terms of that new input shape
    temp = tf.sigmoid(tf.matmul(xtest, w1) + b1)
    output = tf.sigmoid(tf.matmul(temp, w2) + b2)
    print(inputs, sess.run(output, feed_dict={xtest: [inputs]})[0, 0] >= 0.5)
train()
test([0,1])
test([0,0])
test([1,1])
test([1,0])
I recommend taking a look at the basic MNIST tutorial on the TensorFlow website. It looks like you define some function that generates the type of output that you want, and then run your session, passing it this evaluation function (correct_prediction below), and a dictionary containing whatever arguments you require (x and y_ below).
If you have defined and trained some network that takes an input x, generates a response y based on your inputs, and you know your expected responses for your testing set y_, you may be able to print out every response to your testing set with something like:
correct_prediction = tf.equal(y, y_)  # Check whether your prediction is correct
print(sess.run(correct_prediction, feed_dict={x: test_images, y_: test_labels}))
This is just a modification of what is done in the tutorial, where instead of trying to print each response, they determine the percentage of correct responses. Also note that the tutorial uses one-hot vectors for the prediction y and the actual value y_, so in order to return the associated numeral, they have to find which index of these vectors is equal to one with tf.argmax(y, 1).
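As a minimal sketch of that idea for MNIST (assuming the tutorial's x, y, sess, and mnist are already defined and numpy is imported as np), you could fetch and print the predicted class for each test image directly:
# predicted digit for every image in the test set (one integer per image)
predictions = sess.run(tf.argmax(y, 1), feed_dict={x: mnist.test.images})
for i, p in enumerate(predictions[:10]):  # print the first 10 as an example
    print(i, p, np.argmax(mnist.test.labels[i]))  # index, prediction, true label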
Edit
In general, if you define something in your graph, you can output it later when you run your graph. Say you define something that determines the result of the softmax function on your output logits as:
graph = tf.Graph()
with graph.as_default():
    ...
    prediction = tf.nn.softmax(logits)
    ...
then you can output this at run time with:
with tf.Session(graph=graph) as sess:
    ...
    feed_dict = { ... }  # define your feed dictionary
    pred = sess.run([prediction], feed_dict=feed_dict)
    # do stuff with your prediction vector
