I'm learning to use TensorFlow and I wrote this Python script that learns from the MNIST database, saves the model, and makes a prediction on an image:
import numpy
import tensorflow as tf
from PIL import Image

X = tf.placeholder(tf.float32, [None, 28, 28, 1])
W = tf.Variable(tf.zeros([784, 10]), name="W")
b = tf.Variable(tf.zeros([10]), name="b")
Y = tf.nn.softmax(tf.matmul(tf.reshape(X, [-1, 784]), W) + b)
# ...
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init)
    # ... learning loop
    saver.save(sess, "/tmp/my-model")
    # Make a prediction with an image
    im = numpy.asarray(Image.open("digit.png")) / 255
    im = im[numpy.newaxis, :, :, numpy.newaxis]
    feed_dict = {X: im}
    print("Prediction: ", numpy.array(sess.run(Y, feed_dict)).argmax())
The prediction is correct, but I can't restore the saved model for reuse.
I wrote this other script that tries to restore the model and make the same prediction:
import numpy
import tensorflow as tf
from PIL import Image

X = tf.placeholder(tf.float32, [None, 28, 28, 1])
W = tf.Variable(tf.zeros([784, 10]), name="W")
b = tf.Variable(tf.ones([10]) / 10, name="b")
Y = tf.nn.softmax(tf.matmul(tf.reshape(X, [-1, 784]), W) + b)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    saver = tf.train.import_meta_graph('/tmp/my-model.meta')
    saver.restore(sess, tf.train.latest_checkpoint('/tmp/'))
    # Make a prediction with an image
    im = numpy.asarray(Image.open("digit.png")) / 255
    im = im[numpy.newaxis, :, :, numpy.newaxis]
    feed_dict = {X: im}
    print("Prediction: ", numpy.array(sess.run(Y, feed_dict)).argmax())
but the prediction is wrong.
How can I restore my variables and make a prediction?
Thanks
When testing, commenting out this line
# saver = tf.train.import_meta_graph('/tmp/my-model.meta')
will solve your problem.
import_meta_graph creates a new graph from the model saved in the '.meta' file, and that new graph co-exists with the graph you created manually. The returned saver is bound to the imported graph, so saver.restore restores the trained weights into the imported graph, while sess.run evaluates the graph you created manually, whose variables were only just initialized.
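For completeness, here is a sketch of what the restore script could look like once that line is removed (rebuilding the graph by hand so a plain tf.train.Saver can map the checkpoint variables back onto it by name; paths are the ones from the question):

import numpy
import tensorflow as tf
from PIL import Image

# Rebuild the same graph as in the training script.
X = tf.placeholder(tf.float32, [None, 28, 28, 1])
W = tf.Variable(tf.zeros([784, 10]), name="W")
b = tf.Variable(tf.zeros([10]), name="b")
Y = tf.nn.softmax(tf.matmul(tf.reshape(X, [-1, 784]), W) + b)

saver = tf.train.Saver()  # bound to the manually built graph
with tf.Session() as sess:
    # restore() overwrites the variables, so no initializer is needed
    saver.restore(sess, tf.train.latest_checkpoint('/tmp/'))
    im = numpy.asarray(Image.open("digit.png")) / 255
    im = im[numpy.newaxis, :, :, numpy.newaxis]
    print("Prediction: ", sess.run(Y, {X: im}).argmax())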
I have the same question for tf.contrib.rnn.LSTMBlockCell and tf.contrib.cudnn_rnn.CudnnCompatibleLSTMCell: how do I initialize the LSTM weights from numpy arrays correctly? The following code snippet executes, but does not seem to do what I am looking for:
import numpy as np
import tensorflow as tf
from tensorflow.contrib.rnn import LSTMStateTuple

train_data = np.load('mnist_train_data.npy').reshape(-1, 28, 28)
train_label = np.load('mnist_train_label.npy')
params = [np.random.randn(28 + 128, 4 * 128), np.zeros(4 * 128)]
X = tf.placeholder(tf.float32, shape=[54999, 28, 28])
y = tf.placeholder(tf.int64, None)
state = LSTMStateTuple(*(tf.zeros((54999, 128), dtype=tf.float32) for _ in range(2)))
cell = tf.contrib.rnn.LSTMBlockCell(128)
cell.build(tf.TensorShape((None, 28)))
cell.set_weights(params)
initial_weights = cell.get_weights()
print(np.array_equal(params[0], initial_weights[0]))
w1 = tf.Variable(np.random.randn(128, 10), dtype=tf.float32)
b1 = tf.Variable(np.zeros(10), dtype=tf.float32)
full_seq, current_state = tf.nn.dynamic_rnn(cell, X, initial_state=state, dtype=tf.float32)
output = tf.matmul(current_state[1], w1)
output += b1
loss = tf.losses.softmax_cross_entropy(y, output)
train_step = tf.train.AdamOptimizer(0.01).minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1):
        feed_dict = {X: train_data, y: train_label}
        sess.run(train_step, feed_dict=feed_dict)
    final_weights = cell.get_weights()
    print(np.array_equal(initial_weights[0], final_weights[0]))
This prints out False for the first print statement, so the numpy arrays do not actually seem to be used as weights.
Moreover, after the training session, this prints out True, implying that these weights are not actually updated during training.
Thanks in advance for any help on the subject.
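One pattern that should avoid the initializer clobbering manually set weights in TF 1.x graph mode (a sketch under that assumption, not a verified fix for LSTMBlockCell specifically) is to assign the numpy arrays explicitly after tf.global_variables_initializer() has run:

# Sketch: assign the numpy values after the initializer, so they cannot be
# overwritten by it. cell.weights is the layer's list of tf.Variable objects
# (kernel first, then bias, matching the order of `params` above).
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run([tf.assign(var, val) for var, val in zip(cell.weights, params)])
    print(np.array_equal(params[0], sess.run(cell.weights[0])))  # expect True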
I have a simple MNIST model which I've successfully saved, the code being the following:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
tf_save_file = './mnist-to-save-saved'
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
y = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y_, logits = y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
saver.save(sess, tf_save_file)
for _ in range(1000):
    batch = mnist.train.next_batch(100)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver.save(sess, tf_save_file, global_step=1000)
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Then, the following files are generated:
checkpoint
mnist-to-save-saved-1000.data-00000-of-00001
mnist-to-save-saved-1000.index
mnist-to-save-saved-1000.meta
mnist-to-save-saved.data-00000-of-00001
mnist-to-save-saved.index
mnist-to-save-saved.meta
Now, in order to use it in production (and so, for example, pass it a digit image), I want to be able to execute the trained model by passing it any digit image and getting a prediction. I mean not deploying a server yet, but making this prediction "locally", with that "fixed" digit image in the same directory, so that using the model is like running an executable.
But, considering the (mid-low?) API level of my code, I'm confused about what the easiest correct next step would be (restoring, using an Estimator, etc.) and how to do it.
Although I've read the official documentation, there seem to be many ways to do it, and some are a bit complex and "noisy" for a simple model like this.
Edit:
I've edited and re-run the MNIST file; the code is the same as above except for these lines:
...
x = tf.placeholder(tf.float32, shape=[None, 784], name='input')
...
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1), name='result')
...
Then, I try to run this other .py script (in the same directory as the above code) in order to pass it a local handwritten digit image ("mnist-input-image.png") located in the same directory:
import tensorflow as tf
from PIL import Image
import numpy as np
image_test = Image.open("mnist-input-image.png")
image = np.array(image_test)
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/Users/username/.meta')
    new = saver.restore(sess, tf.train.latest_checkpoint('/Users/username/'))
    graph = tf.get_default_graph()
    input_x = graph.get_tensor_by_name("input:0")
    result = graph.get_tensor_by_name("result:0")
    feed_dict = {input_x: image}
    predictions = result.eval(feed_dict=feed_dict)
    print(predictions)
Now, if I understand correctly, I have to pass the image as a numpy array. Then, my questions are:
1) What are the exact file references for these lines (since I have no .meta file in my user folder)?
saver = tf.train.import_meta_graph('/Users/username/.meta')
new = saver.restore(sess, tf.train.latest_checkpoint('/Users/username/'))
I mean, which exact files (from my list of generated files above) do these lines refer to?
2) Translated to my case, is this line correct for passing my numpy array into the feed dict?
feed_dict = {input_x: image}
A simple solution is to use your session object. When you have generated the checkpoint files, you can restore them with a Saver object.
By the way, do you know why most tutorials have their graph creation inside of a function? One good reason is that you can then deserialize the graph quickly with your own inputs.
The correct way to start a session is the following:
# Use your placeholders, variables, etc to create the entire graph.
# Usually you return the input placeholder,
# prediction and the loss/accuracy here.
# You don't need the accuracy.
x, y, _ = make_your_graph(test_X, test_y)
# This object is the interface for serialization in tf
saver = tf.train.Saver()
with tf.Session() as sess:
    # Takes your current model's checkpoint; "./checkpoint" is the
    # directory containing your checkpoint files.
    saver.restore(sess, tf.train.latest_checkpoint("./checkpoint"))
    prediction = sess.run(y)
Want to run more than one data point in your already-booted-up session?
Then replace the last line with a feed dict (feeding the input placeholder x, not the labels):
while waiting_for_new_x():
    another_x = get_new_x()
    feed_dict = {x: [another_x]}
    another_prediction = sess.run(y, feed_dict)
First of all, give a value to the name parameter of each op you want to use later, so that you can refer to it by name:
change this:
x = tf.placeholder(tf.float32, shape=[None, 784])
to
x = tf.placeholder(tf.float32, shape=[None, 784],name='input')
and
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
to
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1),name='result')
Now run this small script to restore the model and get predictions:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)  # needed because this script feeds mnist.test.images below

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/Users/dummy/.meta')
    new = saver.restore(sess, tf.train.latest_checkpoint('/Users/dummy/'))
    graph = tf.get_default_graph()
    input_x = graph.get_tensor_by_name("input:0")
    result = graph.get_tensor_by_name("result:0")
    feed_dict = {input_x: mnist.test.images}  # here you feed your new data; for example, I am feeding mnist
    predictions = result.eval(feed_dict=feed_dict)
    print(predictions)
And you will get output.
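To connect this with the files generated above: assuming the checkpoint files live in the script's directory, the two paths would presumably map to the step-1000 checkpoint, e.g.:

# Hypothetical concrete paths, based on the file list generated above:
saver = tf.train.import_meta_graph('./mnist-to-save-saved-1000.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))  # consults the 'checkpoint' file in that directory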
I'm trying to get started with TensorFlow in Python, building a simple CNN with batch normalization. But when I create a new graph to run, an exception happens in the BN code.
My key code is as follows:
# exception here
def batch_norm(x, beta, gamma, phase_train, scope='bn', decay=0.9, eps=1e-5):
    with tf.variable_scope(scope):
        batch_mean, batch_var = tf.nn.moments(x, [0], name='moments')
        ema = tf.train.ExponentialMovingAverage(decay=decay)

        def mean_var_with_update():
            ema_apply_op = ema.apply([batch_mean, batch_var])
            with tf.control_dependencies([ema_apply_op]):
                return tf.identity(batch_mean), tf.identity(batch_var)

        mean, var = tf.cond(phase_train, mean_var_with_update,
                            lambda: (ema.average(batch_mean), ema.average(batch_var)))
        normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, eps)
    return normed
training code:
# start training
output = conv2d_net()
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.002).minimize(loss)
predict = tf.reshape(output, [-1, MAX_CAPTCHA, CHAR_SET_LEN])
max_idx_p = tf.argmax(predict, 2)
max_idx_l = tf.argmax(tf.reshape(Y, [-1, MAX_CAPTCHA, CHAR_SET_LEN]), 2)
correct_pred = tf.equal(max_idx_p, max_idx_l)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 0
    while True:
        batch_x, batch_y = get_next_batch(64)
        _, loss_ = sess.run([optimizer, loss],
                            feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.75, train_phase: True})
        print(step, loss_)
        if step % 10 == 0 and step != 0:
            batch_x_test, batch_y_test = get_next_batch(100)
            acc = sess.run(accuracy,
                           feed_dict={X: batch_x_test, Y: batch_y_test, keep_prob: 1., train_phase: False})
            print("step %s, accuracy: %s" % (step, acc))
            if acc > 0.05:
                # stop training and save parameters in layer
                result_weights['wc1'] = weights['wc1'].eval(sess)
                ...
                break
        step += 1
Create new graph for exporting:
EXPORT_DIR = './model'
if os.path.exists(EXPORT_DIR):
    shutil.rmtree(EXPORT_DIR)
g = tf.Graph()
with g.as_default():
    x_2 = tf.placeholder(tf.float32, shape=[None, IMAGE_HEIGHT * IMAGE_WIDTH], name="input")
    x_image = tf.reshape(x_2, shape=[-1, IMAGE_HEIGHT, IMAGE_WIDTH, 1])
    # fill trained parameters and create new cnn layers
    WC1 = tf.constant(result_weights['wc1'], name="WC1")
    ...
    # crash here!!!
    CONV1 = conv2d(WC1, BC1, x_image, tf.constant(0.0, shape=[32]),
                   tf.random_normal(shape=[32], mean=1.0, stddev=0.02), scope='BN_1')
    OUTPUT = tf.add(tf.matmul(FULL1, W_OUT), B_OUT)
    OUTPUT = tf.nn.sigmoid(OUTPUT, name="output")
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    graph_def = g.as_graph_def()
    tf.train.write_graph(graph_def, EXPORT_DIR, 'phone_model_graph.pb', as_text=True)
I create a new graph at the end. The exception means it uses an incorrect parameter from the old training graph. How can this be explained?
Thank you very much!
The log is: [stack trace omitted]
I call batch_norm in the function conv2d. It seems no tensor is passed to the new graph.
def conv2d(w, b, x, tf_constant, tf_random_normal, scope, keep_p=1., phase=tf.constant(False)):
    out = tf.nn.bias_add(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'), b)
    out = batch_norm(out, tf_constant, tf_random_normal, phase, scope=scope)
    out = tf.nn.relu(out)
    out = tf.nn.max_pool(out, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    out = tf.nn.dropout(out, keep_p)
    return out
"I create a new graph at the end."
That's the key statement here: upon creation of a new graph, one can't use any tensor from the old graph. See a detailed explanation in this question. According to the stack trace, at least one of the tensors that is passed to batch_norm is defined before g.as_default(), which is why tensorflow crashes. From your code snippets it's unclear how exactly batch_norm is called, so I can't say which one.
You can check this hypothesis by printing x.graph and g and checking whether these values are different. In order to avoid this problem you can either do all the work inside one graph (which is the recommended way) or define the two graphs in different python scopes, thus making it impossible to accidentally reuse the same python variable in two graphs.
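As a sketch of that check (using names from the snippets above; one plausible culprit is the default argument phase=tf.constant(False) in conv2d, since Python evaluates default arguments once, at definition time, in whatever graph was then the default):

# Sketch: verify which graph each tensor belongs to before calling conv2d.
with g.as_default():
    phase = tf.constant(False)   # recreate the flag inside the new graph
    print(x_image.graph is g)    # True: x_image was created inside g
    print(phase.graph is g)      # True: safe to pass into batch_norm
# A tensor baked in as conv2d's default argument would print False here,
# because it belongs to the old training graph.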
I'm trying to build an LSTM RNN based on this guide:
http://monik.in/a-noobs-guide-to-implementing-rnn-lstm-using-tensorflow/
My input is an ndarray of size 89102x39 (89102 rows, 39 features). There are 3 labels for the data: 0, 1, 2.
It seems like I'm having a problem with the placeholder definitions, but I'm not sure what it is.
My code is:
data = tf.placeholder(tf.float32, [None, 1000, 39])
target = tf.placeholder(tf.float32, [None, 3])
cell = tf.nn.rnn_cell.LSTMCell(self.num_hidden)
val, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)
val = tf.transpose(val, [1, 0, 2])
last = tf.gather(val, int(val.get_shape()[0]) - 1)
weight = tf.Variable(tf.truncated_normal([self.num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)
mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
batch_size = 1000
no_of_batches = int(len(train_input) / batch_size)
epoch = 5000
for i in range(epoch):
    ptr = 0
    for j in range(no_of_batches):
        inp, out = train_input[ptr:ptr + batch_size], train_output[ptr:ptr + batch_size]
        ptr += batch_size
        sess.run(minimize, {data: inp, target: out})
    print("Epoch - ", str(i))
And I'm getting the following error:
File , line 133, in execute_graph
    sess.run(minimize, {data: inp, target: out})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1000, 39) for Tensor 'Placeholder:0', which has shape '(1000, 89102, 39)'
Any idea what might be causing the problem?
As indicated here, the dynamic_rnn function takes batch inputs of shape
[batch_size, truncated_backprop_length, input_size]
In the link that you provided, the shape of the placeholder was
data = tf.placeholder(tf.float32, [None, 20,1])
This means that they chose truncated_backprop_length=20 and input_size=1.
Their data was the following 3D array:
[
array([[0],[0],[1],[0],[0],[1],[0],[1],[1],[0],[0],[0],[1],[1],[1],[1],[1],[1],[0],[0]]),
array([[1],[1],[0],[0],[0],[0],[1],[1],[1],[1],[1],[0],[0],[1],[0],[0],[0],[1],[0],[1]]),
.....
]
Based on your code, it seems that train_input is a 2D array and not a 3D array. Hence, you need to transform it into a 3D array. In order to do that, you need to decide which parameters you want to use for truncated_backprop_length and input_size. Afterwards, you need to define data appropriately.
For example, if you want truncated_backprop_length and input_size to be 39 and 1 respectively, you can do
import numpy as np
train_input = np.reshape(train_input, (len(train_input), 39, 1))
data = tf.placeholder(tf.float32, [None, 39, 1])
I changed your code according to the above discussion and ran it on some random data that I produced. It runs without throwing an error. See the code below:
import tensorflow as tf
import numpy as np
num_hidden = 5
train_input = np.random.rand(89102, 39)
train_input = np.reshape(train_input, (len(train_input), 39, 1))
train_output = np.random.rand(89102, 3)
data = tf.placeholder(tf.float32, [None, 39, 1])
target = tf.placeholder(tf.float32, [None, 3])
cell = tf.nn.rnn_cell.LSTMCell(num_hidden)
val, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)
val = tf.transpose(val, [1, 0, 2])
last = tf.gather(val, int(val.get_shape()[0]) - 1)
weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)
mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
batch_size = 1000
no_of_batches = int(len(train_input) / batch_size)
epoch = 5000
for i in range(epoch):
    ptr = 0
    for j in range(no_of_batches):
        inp, out = train_input[ptr:ptr + batch_size], train_output[ptr:ptr + batch_size]
        ptr += batch_size
        sess.run(minimize, {data: inp, target: out})
    print("Epoch - ", str(i))
I'm just learning TensorFlow, so sorry if this is obvious. I've checked the documentation and experimented quite a bit and I just can't seem to get this to work.
def train_network():
    OUT_DIMS = 1
    FIN_SIZE = 500
    x = tf.placeholder(tf.float32, [OUT_DIMS, FIN_SIZE], name="x")
    w = tf.Variable(tf.zeros([FIN_SIZE, OUT_DIMS]), name="w")
    b = tf.Variable(tf.zeros([OUT_DIMS]), name="b")
    y = tf.tanh(tf.matmul(x, w) + b)
    yhat = tf.placeholder(tf.float32, [None, OUT_DIMS])
    cross_entropy = -tf.reduce_sum(yhat * tf.log(y))
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

    # Launch the model
    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)
    for this_x, this_y in yield_financials():
        sess.run(train_step, feed_dict={x: this_x,
                                        yhat: this_y})
        print(end=".")
        sys.stdout.flush()
yield_financials() outputs a numpy array of 500 numbers and the number that I want it to guess. I've tried shuffling OUT_DIMS and FIN_SIZE around, I tried accumulating them into batches to more closely match what the tutorial looked like, I tried setting OUT_DIMS to 0, removing it entirely, and I tried replacing None with other numbers, but I have not made any progress.
Try
this_x = np.reshape(this_x, (1, FIN_SIZE))
sess.run(train_step, feed_dict={x: this_x,
                                yhat: this_y})
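Depending on what yield_financials() yields for the label, this_y may need the same treatment, since yhat expects shape [None, OUT_DIMS] (this is my reading of the question, not something stated in it):

# Likely also needed: reshape the scalar label to match yhat's [None, OUT_DIMS] shape.
this_y = np.reshape(this_y, (1, OUT_DIMS))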
I had the same problem and I solved it. I hope it's helpful for you.
First, I transformed the data loading into:
train_data = np.genfromtxt(train_data1, delimiter=',')
train_label = np.genfromtxt(train_label1, delimiter=',')  # the original had np.transpose here, which takes no delimiter argument; genfromtxt is presumably what was meant
test_data = np.genfromtxt(test_data1, delimiter=',')
test_label = np.genfromtxt(test_label1, delimiter=',')
Then, I transformed the trX, trY, teX, teY data into:
# convert the data
trX, trY, teX, teY = train_data, train_label, test_data, test_label
temp = trY.shape
trY = trY.reshape(temp[0], 1)
trY = np.concatenate((1-trY, trY), axis=1)
temp = teY.shape
teY = teY.reshape(temp[0], 1)
teY = np.concatenate((1-teY, teY), axis=1)
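The concatenate step turns the 0/1 label column into two-class one-hot rows; a tiny illustration (toy values, not from the original data):

import numpy as np

trY = np.array([[0.], [1.]])
print(np.concatenate((1 - trY, trY), axis=1))
# [[1. 0.]
#  [0. 1.]]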
Finally, I transformed launching the graph in a session into:
with tf.Session() as sess:
    # you need to initialize all variables
    tf.initialize_all_variables().run()
    for i in range(100):
        sess.run(train_op, feed_dict={X: trX, Y: trY})
        print(i, np.mean(np.argmax(teY, axis=1) == sess.run(predict_op, feed_dict={X: teX})))
That's all.