I'm trying to produce adversarial examples for a semantic segmentation classifier, which involves optimising an image using the gradients of the loss with respect to the input image variable (where the loss is between the current and goal network outputs).
However, no matter what I've tried, I can't seem to create the graph in a way that allows these gradients to be calculated. I need to ensure that the calculated network output for each iteration of the image is not disconnected from the loss.
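The general pattern I'm aiming for is the usual iterative gradient-descent attack, roughly like this sketch (made-up names such as build_network, h, w, lr and y_goal, just to illustrate the intent, not my actual code):

img = tf.Variable(tf.zeros((1, h, w, 3)), name='img')   # the image being optimised
logits = build_network(img)                             # the network has to consume the variable
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_goal, logits=logits))
optim_step = tf.train.GradientDescentOptimizer(lr).minimize(loss, var_list=[img])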
Here is the code. I've not included absolutely everything as it would be nightmarishly long. The model builder is a method from the code suite I'm trying to adapt. I'm sure that this must be some kind of trivial misunderstanding on my part.
#From elsewhere - x is the processed input image and yg is calculated using argmin on the output
#of a previous run through the network.
x = self.xclean
self.get_ygoal()
yg = self.ygoal
yg = tf.convert_to_tensor(yg)
tf.reset_default_graph()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
    #sess.run(tf.global_variables_initializer())
    net_input = tf.placeholder(tf.float32, shape=[None, None, None, 3])
    net_output = tf.placeholder(tf.float32, shape=[None, None, None, self.num_classes])

    network, _ = model_builder.build_model(self.model, net_input=net_input,
                                           num_classes=self.num_classes,
                                           crop_width=self.dims[0],
                                           crop_height=self.dims[1],
                                           is_training=True)

    print('Loading model checkpoint weights')
    checkpoint_path = 'checkpoints/latest_model_'+self.model+'_'+self.dataset+'.ckpt'
    saver = tf.train.Saver(max_to_keep=1000)
    saver.restore(sess, checkpoint_path)

    img = tf.Variable(tf.zeros(shape=(1, self.dims[0], self.dims[1], 3)), name='img')
    assign = tf.assign(img, net_input)

    learning_rate = tf.constant(lr, dtype=tf.float32)
    loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=network, labels=net_output)
    optim_step = tf.compat.v1.train.GradientDescentOptimizer(learning_rate).minimize(loss, var_list=[img])

    epsilon_ph = tf.placeholder(tf.float32, ())
    below = net_input - epsilon_ph
    above = net_input + epsilon_ph
    projected = tf.clip_by_value(tf.clip_by_value(img, below, above), 0, 1)
    with tf.control_dependencies([projected]):
        project_step = tf.assign(img, projected)

    sess.run(assign, feed_dict={net_input: x})

    for i in range(steps):
        print('Starting...')
        # gradient descent step
        _, loss_value = sess.run([optim_step], feed_dict={net_input: x, net_output: yg})
        # project step
        sess.run(project_step, feed_dict={net_input: x, epsilon_ph: epsilon})
        if (i+1) % 10 == 0:
            print('step %d, loss=%g' % (i+1, loss_value))

    adv = img.eval()  # retrieve the adversarial example
Here's the error message I get:
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'img:0' shape=(1, 512, 512, 3) dtype=float32_ref>"] and loss Tensor("softmax_cross_entropy_with_logits/Reshape_2:0", shape=(?, ?, ?), dtype=float32).
I should mention that this is using Tensorflow 1.14 - as the code suite is built around it.
Thanks in advance.
I'm stuck working on a TensorFlow convolutional neural network for a university project and I hope somebody can help me.
It's supposed to output a picture for a picture input. Left is input, right is output. Both are in .jpeg format.
[image: input and output]
The weights look like this. The left image shows the weights before learning, the right after a few epochs, and they do not change at all with further training.
The net does not seem to learn anything useful and I have a feeling I forgot something basic.
The accuracy peaks around 5% when learning.
[image: weights]
Here is what it looks like when I save the input image x.
I don't know if I make a mistake loading or saving the image.
And this is what the output y of the net looks like.
I based the code on the TensorFlow MNIST tutorial.
Here is my code, which I have shortened to make it more readable:
import tensorflow as tf
from PIL import Image
import numpy as np
def weight_variable(dim, stddev=0.35):
    init = tf.random_normal(dim, stddev=stddev)
    return tf.Variable(init)

def bias_variable(dim, val=0.1):
    init = tf.constant(val, shape=dim)
    return tf.Variable(init)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')

def max_pool2x2(x):
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')

def output_pics(pic): # for weights
    #1 color (dimension) array cast to uint8 and output as jpeg to file

def output_pics_color(pic):
    #3 colors (dimensions) array cast to uint8 and output as jpeg to file

def show_pic(pic):
    #3 colors (dimensions) array cast to uint8 and shown in window
filesX = [...] # filenames of inputs for training
filesY = [...] # filenames of outputs for training
test_filesX = [...] # filenames of inputs for testing
test_filesY = [...] # filenames of outputs for testing
px_size = 128 # size of images 128x128 (resized)
filename_queueX = tf.train.string_input_producer(filesX)
filename_queueY = tf.train.string_input_producer(filesY)
filename_testX = tf.train.string_input_producer(test_filesY)
filename_testY = tf.train.string_input_producer(test_filesY)
image_reader = tf.WholeFileReader()
img_name, img_dataX = image_reader.read(filename_queueX)
imageX = tf.image.decode_jpeg(img_dataX)
imageX = tf.image.resize_images(imageX, [px_size,px_size])
imageX.set_shape((px_size,px_size,3))
imageX=tf.cast(imageX, tf.float32)
...
same for imageY, test_imageX, test_imageY
trainX = []
trainY = []
testX = []
testY = []
j=1
with tf.name_scope('model'):
    x = tf.placeholder(tf.float32, [None, px_size, px_size, 3])
    prob = tf.placeholder(tf.float32)

    init_op = tf.global_variables_initializer()

    # load images into lists
    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        for i in range(1, 65):
            trainX.append(imageX.eval())
            trainY.append(imageY.eval())
        for i in range(1, 10):
            testX.append(test_imageX.eval())
            testY.append(test_imageY.eval())
        coord.request_stop()
        coord.join(threads)

    # layer 1
    x_img = tf.reshape(x, [-1, px_size, px_size, 3])
    W1 = weight_variable([20,20,3,3])
    b1 = bias_variable([3])
    y1 = tf.nn.softmax(conv2d(x_img, W1) + b1)
    # layer 2
    W2 = weight_variable([30,30,3,3])
    b2 = bias_variable([3])
    y2 = tf.nn.softmax(conv2d(y1, W2) + b2)
    # layer 3
    W3 = weight_variable([40,40,3,3])
    b3 = bias_variable([3])
    y3 = tf.nn.softmax(conv2d(y2, W3) + b3)
    y = y3
with tf.name_scope('train'):
    y_ = tf.placeholder(tf.float32, [None, px_size, px_size, 3])
    cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=y))
    opt = tf.train.MomentumOptimizer(learning_rate=0.5, momentum=0.1).minimize(cross_entropy)

with tf.name_scope('eval'):
    correct = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
nEpochs = 1000
batchSize = 10
res = 0
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    trAccs = []
    for i in range(nEpochs):
        if i % 100 == 0:
            train_accuracy = sess.run(accuracy, feed_dict={x: trainX, y_: trainY, prob: 1.0})
            print(train_accuracy)
            output_pics(W1)       # output weights of layer 1 to file
            output_pics_color(x)  # save input image
            output_pics_color(y)  # save net output
        sess.run(opt, feed_dict={x: trainX, y_: trainY, prob: 0.5})
This is an image generation problem.
The model you selected is a poor fit for image generation tasks.
Plain CNNs like this are typically used for image recognition and object detection tasks.
The MNIST tutorial is an image classification problem, not an image generation problem.
It is very important to select an appropriate model type for the problem at hand.
With this model there is little chance of achieving the output you have described.
I also don't understand how you are calculating accuracy, since this is not a classification problem.
You have used softmax after every layer, which is a really bad idea; the TensorFlow MNIST tutorial does not even have this code.
Softmax is only used in the last layer.
In the hidden layers, leaky ReLU or plain ReLU should be used.
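For illustration only, here is a rough sketch of how the three conv layers above could be rewired: ReLU in the hidden layers and raw logits fed into the loss (sigmoid_cross_entropy_with_logits applies its own non-linearity internally):

y1 = tf.nn.relu(conv2d(x_img, W1) + b1)   # hidden layer: ReLU instead of softmax
y2 = tf.nn.relu(conv2d(y1, W2) + b2)      # hidden layer: ReLU instead of softmax
logits = conv2d(y2, W3) + b3              # last layer: raw logits, no softmax here
cross_entropy = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=logits))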
I would suggest looking for a more appropriate deep learning model for this task, specifically a Variational Autoencoder, a Generative Adversarial Network, or a VAE-GAN combination.
I have written a program with TensorFlow that identifies a number of figures in an image. The model is trained with one function and then used with another function to label the figures. The training has been done on my computer and the resulting model uploaded to AWS along with the solve function.
On my computer it works well, but when I create a Lambda in AWS it behaves strangely and starts giving different answers for the same test data.
The model in the solve function is this:
# Recreate neural network from model file generated during training
# input
x = tf.placeholder(tf.float32, [None, size_of_image])
# weights
W = tf.Variable(tf.zeros([size_of_image, num_chars]))
# biases
b = tf.Variable(tf.zeros([num_chars]))
The solve function code to label the figures is this:
for testi in range(captcha_letters_num):
    # load model from file
    saver = tf.train.import_meta_graph(model_path + '.meta',
                                       clear_devices=True)
    saver.restore(sess, model_path)
    # Data to label
    test_x = np.asarray(char_imgs[testi], dtype=np.float32)
    predict_op = model(test_x, W, b)
    op = sess.run(predict_op, feed_dict={x: test_x})
    # find max probability from the probability distribution returned by softmax
    max_probability = op[0][0]
    max_probability_index = -1
    for i in range(num_chars):
        if op[0][i] > max_probability:
            max_probability = op[0][i]
            max_probability_index = i
    # append it to final output
    final_text += char_map_list[max_probability_index]
    # Reset the model so it can be used again
    tf.reset_default_graph()
With the same test data it gives different answers, and I don't know why.
Solved!
What I finally did was keep the Session outside the loop and initialize the variables there. After the loop ends, I reset the graph.
saver = tf.train.Saver()
sess = tf.Session()
# Initialize variables
sess.run(tf.global_variables_initializer())
.
.
.
# passing each of the 5 characters through the NNet
for testi in range(captcha_letters_num):
    # Data to label
    test_x = np.asarray(char_imgs[testi], dtype=np.float32)
    predict_op = model(test_x, W, b)
    op = sess.run(predict_op, feed_dict={x: test_x})
    # find max probability from the probability distribution returned by softmax
    max_probability = op[0][0]
    max_probability_index = -1
    for i in range(num_chars):
        if op[0][i] > max_probability:
            max_probability = op[0][i]
            max_probability_index = i
    # append it to final output
    final_text += char_map_list[max_probability_index]
# Reset the model so it can be used again
tf.reset_default_graph()
sess.close()
I've been digging around on this for a while. I have found a ton of articles, but none really show plain TensorFlow inference as just that: plain inference. It's always "use the serving engine" or a graph that is pre-coded/defined.
Here is the problem: I have a device which occasionally checks for updated models. It then needs to load that model and run input predictions through the model.
In Keras this was simple: build a model, train the model, and then call model.predict(). Same thing in scikit-learn.
I am able to grab a new model and load it; I can print out all of the weights; but how in the world do I run inference against it?
Code to load model and print weights:
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta', clear_devices=True)
    new_saver.restore(sess, MODEL_PATH)
    for var in tf.trainable_variables():
        print(sess.run(var))
I printed out all of my collections and I have:
['queue_runners', 'variables', 'losses', 'summaries', 'train_op', 'cond_context', 'trainable_variables']
I tried using sess.run(train_op); however, that just kicked off a full training session, which is not what I want to do. I just want to run inference against a different set of inputs that I provide, which are not TFRecords.
Just a little more detail:
The device can use C++ or Python, as long as I can produce a .exe. I can set up a feed_dict if I want to feed the system. I trained with TFRecords, but in production I'm not going to use TFRecords; it's a real/near-real-time system.
Thanks for any input. I am posting sample code to this repo: https://github.com/drcrook1/CIFAR10/TensorFlow which does all the training and sample inference.
Any hints are greatly appreciated!
------------EDITS-----------------
I rebuilt the model to be as below:
def inference(images):
    '''
    Portion of the compute graph that takes an input and converts it into a Y output
    '''
    with tf.variable_scope('Conv1') as scope:
        C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1')
        C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2')
        P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope)
    with tf.variable_scope('Dense1') as scope:
        P_1 = tf.reshape(C_1_2, (CONSTANTS.BATCH_SIZE, -1))
        dim = P_1.get_shape()[1].value
        D_1 = ld.mlp_layer(P_1, dim, NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu)
    with tf.variable_scope('Dense2') as scope:
        D_2 = ld.mlp_layer(D_1, NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope)
    H = tf.nn.softmax(D_2, name='prediction')
    return H
Notice that I add the name 'prediction' to the TF operation so I can retrieve it later.
When training I used the input pipeline for TFRecords and input queues.
GRAPH = tf.Graph()
with GRAPH.as_default():
    examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths,
                                          batch_size=CONSTANTS.BATCH_SIZE,
                                          img_shape=CONSTANTS.IMAGE_SHAPE,
                                          num_threads=CONSTANTS.INPUT_PIPELINE_THREADS)
    examples = tf.reshape(examples, [CONSTANTS.BATCH_SIZE, CONSTANTS.IMAGE_SHAPE[0],
                                     CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]])
    logits = Vgg3CIFAR10.inference(examples)
    loss = Vgg3CIFAR10.loss(logits, labels)
    OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE)
I am attempting to use feed_dict on the loaded operation in the graph; however, now it just hangs...
MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'
images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))
def run_inference():
    '''Runs inference against a loaded model'''
    with tf.Session() as sess:
        #sess.run(tf.global_variables_initializer())
        new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta', clear_devices=True)
        new_saver.restore(sess, MODEL_PATH)
        pred = tf.get_default_graph().get_operation_by_name('prediction')
        rand = np.random.rand(1, 32, 32, 3)
        print(rand)
        print(pred)
        print(sess.run(pred, feed_dict={images: rand}))
        print('done')

run_inference()
I believe this is not working because the original network was trained using TFRecords. In the sample CIFAR dataset the data is small; our real dataset is huge, and it is my understanding that TFRecords are the default best practice for training a network. The feed_dict makes perfect sense from a productionizing perspective; we can spin up some threads and populate it from our input systems.
So I guess I have a network that is trained and I can get the predict operation; but how do I tell it to stop using the input queues and start using the feed_dict? Remember that from the production perspective I do not have access to whatever the scientists did to make it. They do their thing, and we stick it in production using whatever agreed-upon standard.
-------INPUT OPS--------
tf.Operation 'input/input_producer/Const' type=Const, tf.Operation 'input/input_producer/Size' type=Const, tf.Operation 'input/input_producer/Greater/y' type=Const, tf.Operation 'input/input_producer/Greater' type=Greater, tf.Operation 'input/input_producer/Assert/Const' type=Const, tf.Operation 'input/input_producer/Assert/Assert/data_0' type=Const, tf.Operation 'input/input_producer/Assert/Assert' type=Assert, tf.Operation 'input/input_producer/Identity' type=Identity, tf.Operation 'input/input_producer/RandomShuffle' type=RandomShuffle, tf.Operation 'input/input_producer' type=FIFOQueueV2, tf.Operation 'input/input_producer/input_producer_EnqueueMany' type=QueueEnqueueManyV2, tf.Operation 'input/input_producer/input_producer_Close' type=QueueCloseV2, tf.Operation 'input/input_producer/input_producer_Close_1' type=QueueCloseV2, tf.Operation 'input/input_producer/input_producer_Size' type=QueueSizeV2, tf.Operation 'input/input_producer/Cast' type=Cast, tf.Operation 'input/input_producer/mul/y' type=Const, tf.Operation 'input/input_producer/mul' type=Mul, tf.Operation 'input/input_producer/fraction_of_32_full/tags' type=Const, tf.Operation 'input/input_producer/fraction_of_32_full' type=ScalarSummary, tf.Operation 'input/TFRecordReaderV2' type=TFRecordReaderV2, tf.Operation 'input/ReaderReadV2' type=ReaderReadV2,
------END INPUT OPS-----
----UPDATE 3----
I believe what I need to do is kill the input section of the graph that was trained with TFRecords and rewire the input of the first layer to a new input. It's kind of like performing surgery; but this is the only way I can find to do inference if I trained using TFRecords, as crazy as it sounds...
Full Graph: [image]
Section to kill: [image]
So I think the question becomes: How does one kill the input section of the graph and replace it with a feed_dict?
A follow up to this would be: is this really the right way to do it? This seems bonkers.
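The one mechanism I've come across that looks relevant here (untested on my end, so treat it as a sketch) is the input_map argument to tf.train.import_meta_graph, which is supposed to remap a tensor in the saved graph to a new tensor at import time:

# Sketch only: 'examples_tensor_name' is a stand-in for whatever tensor actually
# feeds the first conv layer in the saved graph; I'd have to read it off the graph dump above.
images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta',
                                           clear_devices=True,
                                           input_map={examples_tensor_name: images})
    new_saver.restore(sess, MODEL_PATH)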
----END UPDATE 3----
---link to checkpoint files---
https://drcdata.blob.core.windows.net/checkpoints/CIFAR_10_VGG3_50neuron_1pool_1e-3lr_adam.model.zip?st=2017-05-01T21%3A56%3A00Z&se=2020-05-02T21%3A56%3A00Z&sp=rl&sv=2015-12-11&sr=b&sig=oBCGxlOusB4NOEKnSnD%2FTlRYa5NKNIwAX1IyuZXAr9o%3D
--end link to checkpoint files---
-----UPDATE 4 -----
I gave in and took a shot at the "normal" way of performing inference, assuming I could have the scientists simply pickle their models so we could grab the model pickle, unpack it, and then run inference on it. So to test, I tried the normal way assuming we had already unpacked it... It doesn't work worth a bean either...
import tensorflow as tf
import CONSTANTS
import Vgg3CIFAR10
import numpy as np
from scipy import misc
import time
MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'
imgs_bsdir = 'C:/data/cifar_10/train/'
images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))
logits = Vgg3CIFAR10.inference(images)
def run_inference():
    '''Runs inference against a loaded model'''
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta')#, import_scope='1', input_map={'input:0': images})
        new_saver.restore(sess, MODEL_PATH)
        pred = tf.get_default_graph().get_operation_by_name('prediction')
        enq = sess.graph.get_operation_by_name(enqueue_op)
        #tf.train.start_queue_runners(sess)
        print(rand)
        print(pred)
        print(enq)
        for i in range(1, 25):
            img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) / 255.0
            img = img.reshape(1, 32, 32, 3)
            print(sess.run(logits, feed_dict={images: img}))
            time.sleep(3)
        print('done')

run_inference()
TensorFlow ends up building a new graph with the inference function; then it appends all the other stuff from the imported graph to the end of it. So when I populate a feed_dict expecting to get inferences back, I just get a bunch of random garbage, as if it were the first pass through the network...
Again, this seems nuts; do I really need to write my own framework for serializing and deserializing arbitrary networks? This has to have been done before...
-----END UPDATE 4-----
Again, thanks!
Alright, this took way too much time to figure out; so here is the answer for the rest of the world.
Quick reminder: I needed to persist a model that can be dynamically loaded and inferred against, without knowledge of the underpinnings or insides of how it works.
Step 1: Create your model as a class, and ideally use an interface definition.
class Vgg3Model:

    NUM_DENSE_NEURONS = 50
    DENSE_RESHAPE = 32 * (CONSTANTS.IMAGE_SHAPE[0] // 2) * (CONSTANTS.IMAGE_SHAPE[1] // 2)

    def inference(self, images):
        '''
        Portion of the compute graph that takes an input and converts it into a Y output
        '''
        with tf.variable_scope('Conv1') as scope:
            C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1')
            C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2')
            P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope)
        with tf.variable_scope('Dense1') as scope:
            P_1 = tf.reshape(P_1, (-1, self.DENSE_RESHAPE))
            dim = P_1.get_shape()[1].value
            D_1 = ld.mlp_layer(P_1, dim, self.NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu)
        with tf.variable_scope('Dense2') as scope:
            D_2 = ld.mlp_layer(D_1, self.NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope)
        H = tf.nn.softmax(D_2, name='prediction')
        return H

    def loss(self, logits, labels):
        '''
        Adds Loss to all variables
        '''
        cross_entr = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
        cross_entr = tf.reduce_mean(cross_entr)
        tf.summary.scalar('cost', cross_entr)
        tf.add_to_collection('losses', cross_entr)
        return tf.add_n(tf.get_collection('losses'), name='total_loss')
Step 2: Train your network with whatever inputs you want; in my case I used queue runners and TFRecords. Note that this step is done by a different team, which iterates, builds, designs, and optimizes models. This can also change over time. The output they produce must be able to be pulled from a remote location so we can dynamically load the updated models onto devices (reflashing hardware is a pain, especially if it is geographically distributed). In this instance, the team drops the three files associated with a graph saver, plus a pickle of the model class used for that training session.
model = vgg3.Vgg3Model()
def create_sess_ops():
    '''
    Creates and returns operations needed for running
    a tensorflow training session
    '''
    GRAPH = tf.Graph()
    with GRAPH.as_default():
        examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths,
                                              batch_size=CONSTANTS.BATCH_SIZE,
                                              img_shape=CONSTANTS.IMAGE_SHAPE,
                                              num_threads=CONSTANTS.INPUT_PIPELINE_THREADS)
        examples = tf.reshape(examples, [-1, CONSTANTS.IMAGE_SHAPE[0],
                                         CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]], name='infer/input')
        logits = model.inference(examples)
        loss = model.loss(logits, labels)
        OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE)
        gradients = OPTIMIZER.compute_gradients(loss)
        apply_gradient_op = OPTIMIZER.apply_gradients(gradients)
        gradients_summary(gradients)
        summaries_op = tf.summary.merge_all()
    return [apply_gradient_op, summaries_op, loss, logits], GRAPH

def main():
    '''
    Run and Train CIFAR 10
    '''
    print('starting...')
    ops, GRAPH = create_sess_ops()
    total_duration = 0.0
    with tf.Session(graph=GRAPH) as SESSION:
        COORDINATOR = tf.train.Coordinator()
        THREADS = tf.train.start_queue_runners(SESSION, COORDINATOR)
        SESSION.run(tf.global_variables_initializer())
        SUMMARY_WRITER = tf.summary.FileWriter('Tensorboard/' + CONSTANTS.MODEL_NAME, graph=GRAPH)
        GRAPH_SAVER = tf.train.Saver()
        for EPOCH in range(CONSTANTS.EPOCHS):
            duration = 0
            error = 0.0
            start_time = time.time()
            for batch in range(CONSTANTS.MINI_BATCHES):
                _, summaries, cost_val, prediction = SESSION.run(ops)
                error += cost_val
            duration += time.time() - start_time
            total_duration += duration
            SUMMARY_WRITER.add_summary(summaries, EPOCH)
            print('Epoch %d: loss = %.2f (%.3f sec)' % (EPOCH, error, duration))
            if EPOCH == CONSTANTS.EPOCHS - 1 or error < 0.005:
                print(
                    'Done training for %d epochs. (%.3f sec)' % (EPOCH, total_duration)
                )
                break
        GRAPH_SAVER.save(SESSION, 'models/' + CONSTANTS.MODEL_NAME + '.model')
        with open('models/' + CONSTANTS.MODEL_NAME + '.pkl', 'wb') as output:
            pickle.dump(model, output)
        COORDINATOR.request_stop()
        COORDINATOR.join(THREADS)
Step 3: Run some inference. Load your pickled model, create a new graph by piping the new placeholder into the logits, and then call session restore. DO NOT RESTORE THE WHOLE GRAPH; JUST THE VARIABLES.
MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'
imgs_bsdir = 'C:/data/cifar_10/train/'
images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))
with open('models/vgg3.pkl', 'rb') as model_in:
    model = pickle.load(model_in)

logits = model.inference(images)

def run_inference():
    '''Runs inference against a loaded model'''
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        new_saver = tf.train.Saver()
        new_saver.restore(sess, MODEL_PATH)
        print("Starting...")
        for i in range(20, 30):
            print(str(i) + '.png')
            img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) / 255.0
            img = img.reshape(1, 32, 32, 3)
            pred = sess.run(logits, feed_dict={images: img})
            max_node = np.argmax(pred)
            print('predicted label: ' + str(max_node))
        print('done')

run_inference()
There are definitely ways to improve on this, using interfaces and maybe packaging everything up better, but this is working and sets the stage for how we will be moving forward.
FINAL NOTE: When we finally pushed this to production, we ended up having to ship the stupid mymodel_model.py file down with everything needed to build up the graph. So we now enforce a naming convention for all models, and there is also a coding standard for production model runs so we can do this properly.
Good Luck!
While it's not as cut and dry as model.predict(), it's still really trivial.
In your model you should have a tensor that computes the final output you're interested in; let's name that tensor output. You may currently just have a loss function. If so, create another tensor (a variable in the model) that actually computes the output you want.
For example, if your loss function is:
tf.nn.sigmoid_cross_entropy_with_logits(last_layer_activation, labels)
And you expect your outputs to be in the range [0,1] per class, create another variable:
output = tf.sigmoid(last_layer_activation)
Now, when you call sess.run(...), just request the output tensor. Don't request the optimization op you normally would to train it. When you request this tensor, TensorFlow will do the minimum work necessary to produce the value (e.g. it won't bother with backprop, loss functions, and all that, because a simple feed-forward pass is all that's necessary to compute output).
So if you're creating a service to return inferences of the model you'll want to keep the model loaded in memory/gpu, and repeat:
sess.run(output, feed_dict={X: input_data})
You won't need to feed it the labels because tensorflow won't bother to compute ops that aren't needed to produce the output you are requesting. You don't have to change your model or anything.
While this approach might not be as obvious as model.predict(...) I'd argue that it's vastly more flexible. If you start playing with more complex models you'll probably learn to love this approach. model.predict() is like "thinking inside the box."
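To make that concrete, here is a minimal self-contained sketch of the pattern (TF 1.x, with made-up layer sizes; X, labels, and output are just illustrative names):

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 4])
labels = tf.placeholder(tf.float32, [None, 3])

W = tf.Variable(tf.zeros([4, 3]))
b = tf.Variable(tf.zeros([3]))
last_layer_activation = tf.matmul(X, W) + b              # logits

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=last_layer_activation))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
output = tf.sigmoid(last_layer_activation)               # the inference tensor

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would repeatedly run train_step with X and labels fed in ...
    input_data = np.random.rand(2, 4).astype(np.float32)
    print(sess.run(output, feed_dict={X: input_data}))   # no labels needed for inference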
I've trained a network in TensorFlow (MNIST) and I've saved the weights in a .ckpt.
Now I want to test my neural network with these weights, using the same images translated a few pixels to the right and to the bottom.
Loading the weights works well, but when I print an eval, TensorFlow always displays the same result (0.9630 for the test), whether the translation is 1 px or 14 px.
Here is my code for the function which prints the eval:
def eval_translation(sess, eval_correct, images_pl, labels_pl, dataset):
    print('Test Data Eval:')
    for i in range(28):
        true_count = 0  # Counts the number of correct predictions.
        steps_per_epoch = dataset.num_examples // FLAGS.batch_size
        nb_exemples = steps_per_epoch * FLAGS.batch_size
        for step in xrange(steps_per_epoch):
            images_feed, labels_feed = dataset.next_batch(FLAGS.batch_size)
            feed_dict = {images_pl: translate_right(images_feed, i), labels_pl: labels_feed}
            true_count += sess.run(eval_correct, feed_dict=feed_dict)
        precision = true_count / nb_exemples
        print('Translation: %d Num examples: %d Num correct: %d Precision # 1: %0.04f' % (i, nb_exemples, true_count, precision))
This is the function with which I load the data and print the test results.
Here is my translation function:
def translate_right(images, dev):
    for i in range(len(images)):
        for j in range(len(images[i])):
            images[i][j] = np.roll(images[i][j], dev)
    return images
I call this function in place of the training, just after initialising all the variables:
with tf.Graph().as_default():
    # Generate placeholders for the images and labels.
    images_placeholder, labels_placeholder = placeholder_inputs(FLAGS.batch_size)
    # Build a Graph that computes predictions from the inference model.
    weights, logits = mnist.inference(images_placeholder, neurons)
    # Add to the Graph the Ops for loss calculation.
    loss = mnist.loss(logits, labels_placeholder)
    # Add to the Graph the Ops that calculate and apply gradients.
    train_op = mnist.training(loss, learning_rate)
    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = mnist.evaluation(logits, labels_placeholder)
    # Build the summary operation based on the TF collection of Summaries.
    summary_op = tf.merge_all_summaries()
    # Create a saver for writing training checkpoints.
    save = {}
    for i in range(len(weights)):
        save['weights' + str(i)] = weights[i]
    saver = tf.train.Saver(save)
    # Create a session for running Ops on the Graph.
    sess = tf.Session()
    init = tf.initialize_all_variables()
    sess.run(init)
    # load weights
    saver.restore(sess, restore_path)
    # Instantiate a SummaryWriter to output summaries and the Graph.
    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)
    temps_total = time.time()
    eval_translation(sess, eval_correct, images_placeholder, labels_placeholder, dataset.test)
I don't know what's wrong with my code, and why Tensorflow seems to ignore my images.
Can someone help me please?
Thanks!
Your function translate_right doesn't work, because images[i][j] is just one pixel (containing 1 value if you have greyscale images), so rolling it changes nothing.
You should use the axis argument of np.roll:
def translate_right(images, dev):
    return np.roll(images, dev, axis=1)
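A quick way to sanity-check the difference (a toy example, assuming the MNIST images are flattened to shape (batch, 784) as in the tutorial):

import numpy as np

images = np.arange(12, dtype=np.float32).reshape(2, 6)   # 2 tiny "flattened images"

# rolling a single scalar value returns it unchanged
print(np.roll(images[0][2], 1))        # 2.0 -> 2.0, nothing moves

# rolling along axis=1 shifts every pixel of each image to the right (with wrap-around)
print(np.roll(images, 1, axis=1))
# [[ 5.  0.  1.  2.  3.  4.]
#  [11.  6.  7.  8.  9. 10.]]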
I am relatively new to machine learning and currently have almost no experience developing with it.
So my question is: after training and evaluating on the CIFAR-10 dataset from the TensorFlow tutorial, how could one test the trained model with sample images?
I could train and evaluate the ImageNet tutorial from the Caffe machine learning framework, and it was relatively easy to use the trained model in custom applications through the Python API.
Any help would be very appreciated!
This isn't 100% the answer to the question, but it's a similar way of solving it, based on a MNIST NN training example suggested in the comments to the question.
Based on the TensorFlow beginner MNIST tutorial, and thanks to this tutorial, this is a way of training and using your neural network with custom data.
Please note that something similar should be done for tutorials such as CIFAR-10, as @Yaroslav Bulatov mentioned in the comments.
import input_data
import datetime
import numpy as np
import tensorflow as tf
import cv2
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
from random import randint
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x,W) + b)
y_ = tf.placeholder("float", [None,10])
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
#Train our model
iter = 1000
for i in range(iter):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
#Evaluationg our model:
correct_prediction=tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy=tf.reduce_mean(tf.cast(correct_prediction,"float"))
print "Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
#1: Using our model to classify a random MNIST image from the original test set:
num = randint(0, mnist.test.images.shape[0])
img = mnist.test.images[num]
classification = sess.run(tf.argmax(y, 1), feed_dict={x: [img]})
'''
#Uncomment this part if you want to plot the classified image.
plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
plt.show()
'''
print 'Neural Network predicted', classification[0]
print 'Real label is:', np.argmax(mnist.test.labels[num])
#2: Using our model to classify MNIST digit from a custom image:
# create an an array where we can store 1 picture
images = np.zeros((1,784))
# and the correct values
correct_vals = np.zeros((1,10))
# read the image
gray = cv2.imread("my_digit.png", 0 ) #0=cv2.CV_LOAD_IMAGE_GRAYSCALE #must be .png!
# rescale it
gray = cv2.resize(255-gray, (28, 28))
# save the processed images
cv2.imwrite("my_grayscale_digit.png", gray)
"""
all images in the training set have an range from 0-1
and not from 0-255 so we divide our flatten images
(a one dimensional vector with our 784 pixels)
to use the same 0-1 based range
"""
flatten = gray.flatten() / 255.0
"""
we need to store the flatten image and generate
the correct_vals array
correct_val for a digit (9) would be
[0,0,0,0,0,0,0,0,0,1]
"""
images[0] = flatten
my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
"""
we want to run the prediction and the accuracy function
using our generated arrays (images and correct_vals)
"""
print 'Neural Network predicted', my_classification[0], "for your digit"
For further image conditioning (digits should be completely dark on a white background) and better NN training (accuracy > 91%), please check the Advanced MNIST tutorial from TensorFlow or the 2nd tutorial I've mentioned.
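For example, one simple (hypothetical) way to push a scanned digit toward a clean dark-digit-on-white image before the 255-gray inversion shown above is Otsu thresholding with OpenCV:

# Rough sketch: binarize the grayscale digit so it is pure black on white
# before resizing and inverting it as in the snippet above.
gray = cv2.imread("my_digit.png", 0)
(thresh, gray) = cv2.threshold(gray, 128, 255,
                               cv2.THRESH_BINARY | cv2.THRESH_OTSU)
gray = cv2.resize(255 - gray, (28, 28))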
The example below is not for the MNIST tutorial, but a simple XOR example. Note the train() and test() methods. All that we declare and keep globally are the weights, biases, and session. In the test method we redefine the shape of the input and reuse the same weights and biases (and session) that we refined in training.
import tensorflow as tf
#parameters for the net
w1 = tf.Variable(tf.random_uniform(shape=[2,2], minval=-1, maxval=1, name='weights1'))
w2 = tf.Variable(tf.random_uniform(shape=[2,1], minval=-1, maxval=1, name='weights2'))
#biases
b1 = tf.Variable(tf.zeros([2]), name='bias1')
b2 = tf.Variable(tf.zeros([1]), name='bias2')
#tensorflow session
sess = tf.Session()
def train():
    #placeholders for the training inputs (4 inputs with 2 features each) and outputs (4 outputs which have a value of 0 or 1)
    x = tf.placeholder(tf.float32, [4, 2], name='x-inputs')
    y = tf.placeholder(tf.float32, [4, 1], name='y-inputs')

    #set up the model calculations
    temp = tf.sigmoid(tf.matmul(x, w1) + b1)
    output = tf.sigmoid(tf.matmul(temp, w2) + b2)

    #cost function is avg error over training samples
    cost = tf.reduce_mean(((y * tf.log(output)) + ((1 - y) * tf.log(1.0 - output))) * -1)

    #training step is gradient descent
    train_step = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

    #declare training data
    training_x = [[0,1], [0,0], [1,0], [1,1]]
    training_y = [[1], [0], [1], [0]]

    #init session
    init = tf.initialize_all_variables()
    sess.run(init)

    #training
    for i in range(100000):
        sess.run(train_step, feed_dict={x: training_x, y: training_y})
        if i % 1000 == 0:
            print (i, sess.run(cost, feed_dict={x: training_x, y: training_y}))

    print '\ntraining done\n'

def test(inputs):
    #redefine the shape of the input to a single unit with 2 features
    xtest = tf.placeholder(tf.float32, [1, 2], name='x-inputs')

    #redefine the model in terms of that new input shape
    temp = tf.sigmoid(tf.matmul(xtest, w1) + b1)
    output = tf.sigmoid(tf.matmul(temp, w2) + b2)

    print (inputs, sess.run(output, feed_dict={xtest: [inputs]})[0, 0] >= 0.5)
train()
test([0,1])
test([0,0])
test([1,1])
test([1,0])
I recommend taking a look at the basic MNIST tutorial on the TensorFlow website. It looks like you define some function that generates the type of output that you want, and then run your session, passing it this evaluation function (correct_prediction below), and a dictionary containing whatever arguments you require (x and y_ below).
If you have defined and trained some network that takes an input x, generates a response y based on your inputs, and you know your expected responses for your testing set y_, you may be able to print out every response to your testing set with something like:
correct_prediction = tf.equal(y, y_)  # Check whether your prediction is correct
print(sess.run(correct_prediction, feed_dict={x: test_images, y_: test_labels}))
This is just a modification of what is done in the tutorial, where instead of trying to print each response, they determine the percent of correct responses. Also note that the tutorial uses one-hot vectors for the prediction y and the actual value y_, so in order to return the associated numeral, they have to find which index of these vectors is equal to one, using tf.argmax(y, 1).
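For instance, a small sketch of recovering the predicted digits instead of a correctness mask (names as in the tutorial; sess, x, y, y_, test_images, and test_labels are assumed to already exist):

predicted_digit = tf.argmax(y, 1)   # index of the largest softmax output per example
actual_digit = tf.argmax(y_, 1)     # index of the 1 in each one-hot label
preds, actuals = sess.run([predicted_digit, actual_digit],
                          feed_dict={x: test_images, y_: test_labels})
print(preds[:10], actuals[:10])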
Edit
In general, if you define something in your graph, you can output it later when you run your graph. Say you define something that determines the result of the softmax function on your output logits as:
graph = tf.Graph()
with graph.as_default():
    ...
    prediction = tf.nn.softmax(logits)
    ...
then you can output this at run time with:
with tf.Session(graph=graph) as sess:
    ...
    feed_dict = { ... }  # define your feed dictionary
    pred = sess.run([prediction], feed_dict=feed_dict)
    # do stuff with your prediction vector