Tensorflow Inception_Resnet_V2 Classify Image - python

I trained an inception_resnet_v2 model on the flowers images following the README at https://github.com/tensorflow/models/tree/master/slim
Training produced a graph.pbtxt file, which I converted to a graph.pb file with the following code:
import tensorflow as tf
from google.protobuf import text_format

def convert_pbtxt_to_graphdef(filename):
    """Returns a `tf.GraphDef` proto representing the data in the given pbtxt file.

    Args:
      filename: The name of a file containing a GraphDef pbtxt (text-formatted
        `tf.GraphDef` protocol buffer data).

    Returns:
      A `tf.GraphDef` protocol buffer.
    """
    with tf.gfile.FastGFile(filename, 'r') as f:
        graph_def = tf.GraphDef()
        file_content = f.read()
        # Merges the human-readable string in `file_content` into `graph_def`.
        text_format.Merge(file_content, graph_def)
        return graph_def

with tf.gfile.FastGFile('/foo/bar/workspace/results/graph.pb', 'wb') as f:
    # GraphDef is a protocol buffer, so serialize it to a binary string before writing.
    f.write(convert_pbtxt_to_graphdef('/foo/bar/workspace/results/graph.pbtxt').SerializeToString())
After getting this file I tried feeding a random image to the trained model using TensorFlow's classify_image.py, found here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py
Using my .pb, .pbtxt, and labels file, however, I get the following error:
Traceback (most recent call last):
  File "classify_image.py", line 212, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv[:1] + flags_passthrough))
  File "classify_image.py", line 208, in main
    run_inference_on_image(image)
  File "classify_image.py", line 170, in run_inference_on_image
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2615, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2466, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2508, in _as_graph_element_locked
    "graph." % (repr(name), repr(op_name)))
KeyError: "The name 'softmax:0' refers to a Tensor which does not exist. The operation, 'softmax', does not exist in the graph."

The problem with slim, and in fact with tensorflow/models in general, is that the framework, and therefore the produced models, are not really designed for the prediction use case:
TF-slim is a new lightweight high-level API of TensorFlow
(tensorflow.contrib.slim) for defining, training and evaluating
complex models.
(Source https://github.com/tensorflow/models/tree/master/slim)
The main problem at prediction time is that it only works well with the checkpoint files created by the Saver class. When using checkpoint files, the assign_from_checkpoint_fn() method can be used to initialize all the variables with the trained parameters contained in the checkpoint. If, on the other hand, you only have the GraphDef file *.pb, you are somewhat lost. There is a nice trick, though.
The key idea is to inject a tf.placeholder variable for the input image(s) into the computation graph after you saved your trained model as a checkpoint. The following script (convert_checkpoint_to_pb.py) reads a checkpoint, inserts a placeholder, converts the graph variables to constants and dumps it to a *.pb file.
import tensorflow as tf
from tensorflow.contrib import slim
from nets import inception
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.tools.optimize_for_inference_lib import optimize_for_inference
from preprocessing import inception_preprocessing

checkpoints_dir = '/path/to/your/checkpoint_dir/'
OUTPUT_PB_FILENAME = 'minimal_graph.proto'
NUM_CLASSES = 2

# We need the default image size of this particular network.
# The network was trained on images of that size -- so we
# resize the input image later in the code.
image_size = inception.inception_resnet_v2.default_image_size

with tf.Graph().as_default():
    # Inject a placeholder for the input image(s) into the graph
    input_image_t = tf.placeholder(tf.string, name='input_image')
    image = tf.image.decode_jpeg(input_image_t, channels=3)

    # Resize the input image, preserving the aspect ratio,
    # and make a central crop of the resulting image.
    # The crop will be of the size of the default image size of
    # the network.
    # I use the "preprocess_for_eval()" method instead of "inception_preprocessing()"
    # because the latter crops all images to the center by 85% at
    # prediction time (training=False).
    processed_image = inception_preprocessing.preprocess_for_eval(image,
                                                                  image_size,
                                                                  image_size,
                                                                  central_fraction=None)

    # Networks accept images in batches.
    # The first dimension usually represents the batch size.
    # In our case the batch size is one.
    processed_images = tf.expand_dims(processed_image, 0)

    # Load the Inception network structure
    with slim.arg_scope(inception.inception_resnet_v2_arg_scope()):
        logits, _ = inception.inception_resnet_v2(processed_images,
                                                  num_classes=NUM_CLASSES,
                                                  is_training=False)
    # Apply the softmax function to the logits (output of the last layer of the network)
    probabilities = tf.nn.softmax(logits)

    model_path = tf.train.latest_checkpoint(checkpoints_dir)

    # Get the function that initializes the network structure (its variables) with
    # the trained values contained in the checkpoint
    init_fn = slim.assign_from_checkpoint_fn(
        model_path,
        slim.get_model_variables())

    with tf.Session() as sess:
        # Now call the initialization function within the session
        init_fn(sess)

        # Convert variables to constants and make sure the placeholder input_image is included
        # in the graph as well as the other necessary tensors.
        constant_graph = convert_variables_to_constants(sess, sess.graph_def,
                                                        ["input_image", "DecodeJpeg",
                                                         "InceptionResnetV2/Logits/Predictions"])

        # Define the input and output layer properly
        optimized_constant_graph = optimize_for_inference(constant_graph, ["eval_image"],
                                                          ["InceptionResnetV2/Logits/Predictions"],
                                                          tf.string.as_datatype_enum)

        # Write the production-ready graph to file.
        tf.train.write_graph(optimized_constant_graph, '.', OUTPUT_PB_FILENAME, as_text=False)
(The models/slim code must be on your Python path to execute this code.)
To predict new images with the converted model (now present as a *.pb file), use the code in minimal_predict.py:
import tensorflow as tf
import urllib2


def create_graph(model_file):
    """Creates a graph from a saved GraphDef file."""
    # Creates the graph from the saved graph_def .pb file.
    with tf.gfile.FastGFile(model_file, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')


model_file = "/your/path/to/minimal_graph.proto"
url = ("http://pictureparadise.net/funny-babies/funny-babies02/funny-babies-053.jpg")

# Open the specified url and load the image as a string
image_string = urllib2.urlopen(url).read()

with tf.Graph().as_default():
    with tf.Session() as new_sess:
        create_graph(model_file)

        softmax = new_sess.graph.get_tensor_by_name("InceptionResnetV2/Logits/Predictions:0")

        # Load the injected placeholder
        input_placeholder = new_sess.graph.get_tensor_by_name("input_image:0")

        probabilities = new_sess.run(softmax, {input_placeholder: image_string})
        print probabilities
To use these scripts, simply run
python convert_checkpoint_to_pb.py
python minimal_predict.py
while having tensorflow and tensorflow/models/slim in your PYTHONPATH.
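If you would rather set up the path from inside the scripts than via the PYTHONPATH environment variable, a minimal sketch is the following (the clone location is a placeholder; adjust it to your checkout):

import sys

# Hypothetical location of your tensorflow/models clone; adjust to your setup.
# Appending models/slim makes the `nets` and `preprocessing` packages importable.
sys.path.append('/path/to/tensorflow-models/slim')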

In convert_checkpoint_to_pb.py provided by @Maximilian, ["eval_image"] in
optimized_constant_graph = optimize_for_inference(constant_graph, ["eval_image"], ["InceptionResnetV2/Logits/Predictions"], tf.string.as_datatype_enum)
should be replaced by ["input_image"], otherwise you will get an error stating "The following input nodes were not found: {'eval_image'}\n".
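If you are unsure which node names your frozen graph actually contains, a small sketch like the following (assuming the minimal_graph.proto file written by the script above) prints them all so you can pick the correct input and output names:

import tensorflow as tf

# Load the frozen GraphDef and list its node names and op types.
# 'minimal_graph.proto' is the file written by convert_checkpoint_to_pb.py.
graph_def = tf.GraphDef()
with tf.gfile.FastGFile('minimal_graph.proto', 'rb') as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)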

Related

Something goes wrong when changing a Keras .h5 model to a TensorFlow .pb model

I want to convert a Keras .h5 file to a TensorFlow .pb file. It seems there is something wrong with the .pb file. My code is shown as follows:
network_eval = model.vggvox_resnet2d_icassp(input_dim=params['dim'],
                                            num_class=params['n_classes'],
                                            mode='eval', args=args)

path = 'XXX'
name = 'XXX.pb'

network_eval.load_weights(os.path.join(args.resume), by_name=True)  # load model

# I use parts of the keras_to_tensorflow util, see https://github.com/amir-abdi/keras_to_tensorflow
orig_output_node_names = [node.op.name for node in network_eval.outputs]
# I do not change the output_nodes_prefix, so
converted_output_node_names = orig_output_node_names

sess = K.get_session()
constant_graph = graph_util.convert_variables_to_constants(sess,
                                                           sess.graph.as_graph_def(),
                                                           converted_output_node_names)
graph_io.write_graph(constant_graph, path, name, as_text=False)
The .pb file was generated successfully, but the predictions on the test file obtained with the .pb model differ from those obtained with the original .h5 Keras model. The test code is shown below:
# using the .h5 model
for spec in specs:  # specs is the sliced magnitude spectrum of a .wav file used for prediction
    spec = np.expand_dims(np.expand_dims(spec, 0), -1)
    v_1 = network_eval.predict(spec)
    print(v_1)

# using the .pb model
sess = tf.Session()
with gfile.FastGFile('XXX.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')

sess.run(tf.global_variables_initializer())

# 'lambda_1/l2_normalize' is the name of the last layer of the network
# I got this name by printing network_eval.output.name
op = sess.graph.get_tensor_by_name('lambda_1/l2_normalize:0')
x = sess.graph.get_tensor_by_name('input:0')

for spec in specs:
    spec = np.expand_dims(np.expand_dims(spec, 0), -1)
    v_2 = sess.run(op, feed_dict={x: spec, K.learning_phase(): 0})
    print(v_2)
As I said above, the printed results v_1 and v_2 are quite different even though they have the same shape, which confuses me; I don't know which step went wrong. Can anyone help me? I would be very grateful.
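One small diagnostic sketch, reusing the objects defined in the code above (network_eval, sess, op, x, specs and K), is to quantify how far apart the two prediction paths are instead of comparing the printed arrays by eye:

import numpy as np

# Sketch: measure the largest absolute difference between the Keras (.h5)
# and frozen-graph (.pb) outputs for each spectrum slice.
max_diffs = []
for spec in specs:
    spec = np.expand_dims(np.expand_dims(spec, 0), -1)
    v_1 = network_eval.predict(spec)
    v_2 = sess.run(op, feed_dict={x: spec, K.learning_phase(): 0})
    max_diffs.append(float(np.max(np.abs(v_1 - v_2))))
print(max_diffs)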

Error while allocating tensors in tflite Interpreter

I am making a linear regression model (3 input parameters of type float) that can run on-device in an Android app and make predictions based on user input.
For this, I have used the TensorFlow estimator tf.estimator.LinearRegressor. I also made a SavedModel out of this using this code:
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(tf.feature_column.make_parse_example_spec([crim, indus, tax]))
export_path = model_est.export_saved_model("saved_model", serving_input_fn)
where the feature columns have been defined before in the code as:
tax = tf.feature_column.numeric_column('tax')
indus = tf.feature_column.numeric_column('indus')
crim = tf.feature_column.numeric_column('crim')
The whole model building code is as follows:
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()
TRAIN_CSV_PATH = './data/BostonHousing_subset.csv'
TEST_CSV_PATH = './data/boston_test_subset.csv'
PREDICT_CSV_PATH = './data/boston_predict_subset.csv'
# target variable to predict:
LABEL_PR = "medv"
def get_batch(file_path, batch_size, num_epochs=None, **args):
    with open(file_path) as file:
        num_rows = len(file.readlines())
    dataset = tf.data.experimental.make_csv_dataset(
        file_path, batch_size, label_name=LABEL_PR, num_epochs=num_epochs, header=True, **args)
    # repeat, shuffle and batch separately instead of in the previous line,
    # for clarity purposes
    # dataset = dataset.repeat(num_epochs)
    # dataset = dataset.batch(batch_size)
    iterator = dataset.make_one_shot_iterator()
    elem = iterator.get_next()
    return elem

# Now to define the feature columns
tax = tf.feature_column.numeric_column('tax')
indus = tf.feature_column.numeric_column('indus')
crim = tf.feature_column.numeric_column('crim')

# Building the model
model_est = tf.estimator.LinearRegressor(feature_columns=[crim, indus, tax], model_dir='model_dir')

# Train it now
model_est.train(steps=2300, input_fn=lambda: get_batch(TRAIN_CSV_PATH, batch_size=256))

results = model_est.evaluate(steps=1000, input_fn=lambda: get_batch(TEST_CSV_PATH, batch_size=128))
for key in results:
    print(" {}, was: {}".format(key, results[key]))

to_pred = {
    'crim': [0.03359, 5.09017, 0.12650, 0.05515, 8.15174, 0.24522],
    'indus': [2.95, 18.10, 5.13, 2.18, 18.10, 9.90],
    'tax': [252, 666, 284, 222, 666, 304],
}

def test_get_inp():
    dataset = tf.data.Dataset.from_tensors(to_pred)
    return dataset

# Predict
for pred_results in model_est.predict(input_fn=test_get_inp):
    print(pred_results['predictions'][0])

# Now to export as a SavedModel
print(tf.feature_column.make_parse_example_spec([crim, indus, tax]))

serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec([crim, indus, tax]))

export_path = model_est.export_saved_model("saved_model", serving_input_fn)
The code I am using to convert this SavedModel to tflite format is:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/1576168761')
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
which outputs a .tflite file.
However, when I try to load this tflite file using this code:
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
I get this error:
Traceback (most recent call last):
  File "D:/Documents/Projects/boston_house_pricing/get_model_details.py", line 5, in <module>
    interpreter.allocate_tensors()
  File "D:\Anaconda3\envs\boston_housing\lib\site-packages\tensorflow_core\lite\python\interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "D:\Anaconda3\envs\boston_housing\lib\site-packages\tensorflow_core\lite\python\interpreter_wrapper\tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.Node number 0 (FlexParseExample) failed to prepare.
I am unable to understand how to resolve this error. An error with the same message is also thrown when I try to initialize an interpreter with this file in Java (Android) using tflite.
Any help regarding this would be greatly appreciated.
The error explains the issue pretty well: when converting to tflite you specified the tf.lite.OpsSet.SELECT_TF_OPS flag, which causes the converter to include operations that are not natively supported by tflite, and it expects you to use the flex module in tflite to compile and include those operations in the interpreter.
For more info regarding flex: https://www.tensorflow.org/lite/guide/ops_select
In any case you have two main options: either use flex and compile the needed ops, or use only operations that are natively supported by tflite and omit the tf.lite.OpsSet.SELECT_TF_OPS flag.
For natively supported tensorflow ops refer here: https://www.tensorflow.org/lite/guide/ops_compatibility
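A minimal sketch of the second option, assuming the SavedModel path from the question. Note that with a parsing serving input receiver the graph still contains a ParseExample op, so this conversion may still fail until the model is exported with a raw-tensor serving input function (for example via tf.estimator.export.build_raw_serving_input_receiver_fn):

import tensorflow as tf

# Restrict the converter to ops TFLite supports natively; no Flex delegate needed at runtime.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/1576168761')
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()  # raises if any op in the graph has no TFLite builtin
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)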

Add new layers to Tensorflow freeze_graph?

These discussions (1, 2) talked about adding new layers to a TensorFlow graph and retraining the model.
And the following code shows how to add a new layer to a restored trainable model.
import tensorflow as tf

sess = tf.Session()

# First let's load the meta graph and restore the weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))

# Now, let's access and create placeholder variables and
# create a feed-dict to feed new data
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict = {w1: 13.0, w2: 17.0}

# Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")

# Add more to the current graph
add_on_op = tf.multiply(op_to_restore, 2)

print sess.run(add_on_op, feed_dict)
# This will print 120.
But I would like to add layers to a restored frozen graph.
I only have the frozen model for an application. I would like to add layers to the model and freeze it again.
Those layers are mostly for post-processing and do not need to be trained, so they are not in the trained model.
The reason is that I am converting the frozen graph to TensorRT and I would like to include those layers in the Int8 engine.
I hope the below will help you. I have a custom op that needed to be added to my existing graph, which I loaded from a .pb file (frozen model file).
With this I was able to append new nodes to my existing graph.
Source code below:
import tensorflow as tf
from tensorflow.python.framework import load_library
from tensorflow.python.platform import resource_loader
from tensorflow.core.protobuf import saved_model_pb2
from tensorflow.python.util import compat


# Utility functions for loading and freezing graphs
def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
    return graph


def freeze_graph(sess, output_graph):
    output_node_names = [
        "custom_op_zero", "custom_op_zero_1"]
    output_node_names = ",".join(output_node_names)
    output_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        tf.get_default_graph().as_graph_def(),
        output_node_names.split(",")
    )
    with tf.gfile.GFile(output_graph, "wb") as f:
        f.write(output_graph_def.SerializeToString())
    print("{} ops written to {}.".format(len(output_graph_def.node), output_graph))


## load the custom op's shared object file
zero_out_ops = load_library.load_op_library(
    resource_loader.get_path_to_datafile('my-op/tensorflow_zero_out/python/ops/_zero_out_ops.so'))
zero_out = zero_out_ops.zero_out

frozen_graph = load_graph("frozen_model.pb")
all_tensors = [tensor for op in frozen_graph.get_operations() for tensor in op.values()]
#print (all_tensors[29])

# Input to the new node is the output of the last node
zero_out_custom = zero_out(all_tensors[-1], name="custom_op_zero")
zero_out_custom1 = zero_out(all_tensors[-1], name="custom_op_zero_1")
#print (new_op)

# save the new frozen model file
with tf.Session(graph=frozen_graph) as persisted_sess:
    for op in persisted_sess.graph.get_operations():
        print(op)
    freeze_graph(persisted_sess, "new_model.pb")
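As a quick sanity check, a sketch reusing the load_graph() helper above can reload new_model.pb and confirm that the appended nodes are present (any process that later runs this graph still has to call load_op_library for the custom op's .so first):

# Reload the newly written frozen graph and list the appended custom nodes.
new_graph = load_graph("new_model.pb")
custom_nodes = [op.name for op in new_graph.get_operations()
                if op.name.startswith("custom_op_zero")]
print(custom_nodes)  # expected: ['custom_op_zero', 'custom_op_zero_1']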

Invalid character found in base64 while using a deployed model on cloudml

For better context: I have uploaded a pre-trained model to Cloud ML. It's an InceptionV3 model converted from Keras to a format acceptable to TensorFlow.
import tensorflow as tf
from keras import backend as K  # imports assumed by the snippet below (K is used for the session/learning phase)

from keras.applications.inception_v3 import InceptionV3
model = InceptionV3(weights='imagenet')
from keras.models import Model
intermediate_layer_model = Model(inputs=model.input, outputs=model.layers[311].output)

with tf.Graph().as_default() as g_input:
    input_b64 = tf.placeholder(shape=(1,),
                               dtype=tf.string,
                               name='input')
    input_bytes = tf.decode_base64(input_b64[0])
    image = tf.image.decode_image(input_bytes)
    image_f = tf.image.convert_image_dtype(image, dtype=tf.float32)
    input_image = tf.expand_dims(image_f, 0)
    output = tf.identity(input_image, name='input_image')
    g_input_def = g_input.as_graph_def()

K.set_learning_phase(0)
sess = K.get_session()

from tensorflow.python.framework import graph_util
g_trans = sess.graph
g_trans_def = graph_util.convert_variables_to_constants(sess,
                                                        g_trans.as_graph_def(),
                                                        [intermediate_layer_model.output.name.replace(':0', '')])

with tf.Graph().as_default() as g_combined:
    x = tf.placeholder(tf.string, name="input_b64")

    im, = tf.import_graph_def(g_input_def,
                              input_map={'input:0': x},
                              return_elements=["input_image:0"])

    pred, = tf.import_graph_def(g_trans_def,
                                input_map={intermediate_layer_model.input.name: im,
                                           'batch_normalization_1/keras_learning_phase:0': False},
                                return_elements=[intermediate_layer_model.output.name])

    with tf.Session() as sess2:
        inputs = {"inputs": tf.saved_model.utils.build_tensor_info(x)}
        outputs = {"outputs": tf.saved_model.utils.build_tensor_info(pred)}
        signature = tf.saved_model.signature_def_utils.build_signature_def(
            inputs=inputs,
            outputs=outputs,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
        )

        # save as a SavedModel
        b = tf.saved_model.builder.SavedModelBuilder('inceptionv4/')
        b.add_meta_graph_and_variables(sess2,
                                       [tf.saved_model.tag_constants.SERVING],
                                       signature_def_map={'serving_default': signature})
        b.save()
The generated .pb file works fine when I use it locally, but when I deploy it on Cloud ML I get the following error:
RuntimeError: Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="Invalid character found in base64.
[[Node: import/DecodeBase64 = DecodeBase64[_output_shapes=[<unknown>], _device="/job:localhost/replica:0/task:0/device:CPU:0"](import/strided_slice)]]")
Following is the code I use for getting local predictions.
import base64
import json

with open('MEL_BE_0.jpg', 'rb') as image_file:
    encoded_string = str(base64.urlsafe_b64encode(image_file.read()), 'ascii')

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    MetaGraphDef = tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        'inceptionv4')
    input_tensor = tf.get_default_graph().get_tensor_by_name('input_b64:0')
    print(input_tensor)
    avg_tensor = tf.get_default_graph().get_tensor_by_name('import_1/avg_pool/Mean:0')
    print(avg_tensor)
    predictions = sess.run(avg_tensor, {input_tensor: [encoded_string]})
And finally, the following is the code snippet I use for wrapping the encoded string in the request that is sent to the Cloud ML engine.
request_body= json.dumps({"key":"0", "image_bytes": {"b64": [encoded_string]}})
It looks like you are trying to do the base64 decoding in TensorFlow and use the {"b64": ...} JSON format. You need to do one or the other; we typically recommend the latter.
As a side note, your input placeholder must have an outer dimension of None. That can make some things tricky, e.g., you'll either have to reshape the dimensions to be size 1 (which will prevent you from using the batch prediction service in its current state) or you'll have to use tf.map_fn to apply the same set of transformations to each element of the input "batch". You can find an example of that technique in this example.
Finally, I recommend the use of tf.saved_model.simple_save.
Putting it all together, here is some modified code. Note that I'm inlining your input function (as opposed to serializing it to a graph def and reimporting):
HEIGHT = 299
WIDTH = 299

# Get the Keras model
from keras.applications.inception_v3 import InceptionV3
model = InceptionV3(weights='imagenet')
from keras.models import Model
intermediate_layer_model = Model(inputs=model.input, outputs=model.layers[311].output)

K.set_learning_phase(0)
sess = K.get_session()

from tensorflow.python.framework import graph_util
g_trans = sess.graph
g_trans_def = graph_util.convert_variables_to_constants(sess,
                                                        g_trans.as_graph_def(),
                                                        [intermediate_layer_model.output.name.replace(':0', '')])

# Create inputs to the model and export
with tf.Graph().as_default() as g_combined:

    def decode_and_resize(image_bytes):
        image = tf.image.decode_image(image_bytes)
        # Note resize expects a batch_size, but tf.map_fn suppresses that index,
        # thus we have to expand then squeeze. Resize returns float32 in the
        # range [0, uint8_max]
        image = tf.expand_dims(image, 0)
        image = tf.image.resize_bilinear(
            image, [HEIGHT, WIDTH], align_corners=False)
        image = tf.squeeze(image, squeeze_dims=[0])
        image = tf.cast(image, dtype=tf.uint8)
        return image

    input_bytes = tf.placeholder(shape=(None,),
                                 dtype=tf.string,
                                 name='input')

    images = tf.map_fn(
        decode_and_resize, input_bytes, back_prop=False, dtype=tf.uint8)
    images = tf.image.convert_image_dtype(images, dtype=tf.float32)

    pred, = tf.import_graph_def(g_trans_def,
                                input_map={intermediate_layer_model.input.name: images,
                                           'batch_normalization_1/keras_learning_phase:0': False},
                                return_elements=[intermediate_layer_model.output.name])

    with tf.Session() as sess2:
        tf.saved_model.simple_save(
            sess2,
            export_dir='inceptionv4/',
            inputs={"inputs": input_bytes},
            outputs={"outputs": pred})
Note: I'm not 100% certain that the shapes of intermediate_layer_model and images are compatible. The shape of images will be [None, height, width, num_channels].
Also note that your local prediction code will change a bit. You don't base64-encode the images, and you need to send a "batch"/list of images rather than a single image. Something like:
with open('MEL_BE_0.jpg', 'rb') as image_file:
    encoded_string = image_file.read()

input_tensor = tf.get_default_graph().get_tensor_by_name('input:0')
print(input_tensor)
avg_tensor = tf.get_default_graph().get_tensor_by_name('import_1/avg_pool/Mean:0')
print(avg_tensor)
predictions = sess.run(avg_tensor, {input_tensor: [encoded_string]})
You didn't specify whether you're doing batch prediction or online prediction, which have similar but slightly different "formats" for the inputs. In either case, your model is not exporting a "key" field (did you mean to? It's probably helpful for batch prediction, but not for online).
For batch prediction, the file format is JSON lines; each line contains one example. Each line can be generated like so from Python:
example = json.dumps({"image_bytes": {"b64": ENCODED_STRING}})
(Note the omission of "key" for now). Since you only have one input, there is a shorthand:
example = json.dumps({"b64": ENCODED_STRING})
If you want to do online prediction, you'll note that if you are using gcloud to send requests, you actually use the same file format as for batch prediction.
In fact, we highly recommend using gcloud ml-engine local predict --json-instances=FILE --model-dir=... before deploying to the cloud to help debug.
If you intend to use some other client besides gcloud, e.g., in a web app, mobile app, frontend server, etc., then you won't be sending a file and you need to construct the full request yourself. It's very similar to the file format above. Basically, take each line of the JSON lines file and put it in an array called "instances", i.e.,
request_body= json.dumps({"instances": [{"image_bytes": {"b64": [encoded_string]}}]})
You can use the same syntactic sugar if you'd like:
request_body= json.dumps({"instances": [{"b64": [encoded_string]}]})
I hope this helps!
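For the non-gcloud case, here is a minimal sketch of sending such a request from Python with the Google API client; the project and model names are placeholders, and it assumes the googleapiclient package is installed and authenticated:

import base64
import json
from googleapiclient import discovery

with open('MEL_BE_0.jpg', 'rb') as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode('ascii')

# 'my-project' and 'my-model' are hypothetical; substitute your own IDs.
service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format('my-project', 'my-model')
body = {"instances": [{"image_bytes": {"b64": encoded_string}}]}

response = service.projects().predict(name=name, body=body).execute()
print(json.dumps(response))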

Single Image Inference in Tensorflow [Python]

I have already converted a pre-trained .ckpt file to a .pb file, freezing the model and saving the weights as well. What I am trying to do now is to make a simple inference using that .pb file and to extract and save the output image. The model is a Fully Convolutional Network for Semantic Segmentation downloaded from here: https://github.com/MarvinTeichmann/KittiSeg. So far I have managed to load the image, set the default tf graph, import the graph defined by the model into it, read the input and the output tensors, and run the session (error here).
import tensorflow as tf
import os
import numpy as np
from tensorflow.python.platform import gfile
from PIL import Image

# Read the image & get statistics
img = Image.open('/path-to-image/demoImage.png')
img.show()
width, height = img.size
print(width)
print(height)

#Plot the image
#image.show()

with tf.Graph().as_default() as graph:
    with tf.Session() as sess:
        # Load the graph in graph_def
        print("load graph")

        # We load the protobuf file from the disk and parse it to retrieve the unserialized graph_def
        with gfile.FastGFile("/path-to-FCN-model/FCN8.pb", 'rb') as f:
            #Set default graph as current graph
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            #sess.graph.as_default() #new line

            # Import a graph_def into the current default Graph
            tf.import_graph_def(graph_def, name='')

            # Print the name of operations in the session
            #for op in sess.graph.get_operations():
            #    print "Operation Name :", op.name         # Operation name
            #    print "Tensor Stats :", str(op.values())  # Tensor name

            # INFERENCE Here
            l_input = graph.get_tensor_by_name('Placeholder:0')
            l_output = graph.get_tensor_by_name('save/Assign_38:0')

            print "l_input", l_input
            print "l_output", l_output
            print
            print

            # Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles.
            result = sess.run(l_output, feed_dict={l_input: img})
            print(result)
            print("Inference done")

# Info
# First Tensor name : Placeholder:0
# Last tensor name  : save/Assign_38:0
Can the error come from the format of the image (e.g., should I convert the .png to another format)? Or is it another, more fundamental error?
I managed to fix the error. Below is the working script to run inference on a single image with a Fully Convolutional Network (for whoever is interested in an alternative segmentation algorithm to SegNet). This model uses bilinear interpolation for scaling rather than an un-pooling layer. Anyway, because the model is available for download in .ckpt format, you must first freeze the model and save it as a .pb file. Later on, you must pass the network through the TF optimizer to set the Dropout probabilities to 1. Afterwards, set the correct input and output tensor names in this script and the inference works correctly, extracting the segmented image.
import tensorflow as tf  # Default graph is initialized when the library is imported
import os
from tensorflow.python.platform import gfile
from PIL import Image
import numpy as np
import scipy
from scipy import misc
import matplotlib.pyplot as plt
import cv2

with tf.Graph().as_default() as graph:  # Set default graph as graph
    with tf.Session() as sess:
        # Load the graph in graph_def
        print("load graph")

        # We load the protobuf file from the disk and parse it to retrieve the unserialized graph_def
        with gfile.FastGFile("/path-to-protobuf/FCN8_Freezed.pb", 'rb') as f:

            print("Load Image...")
            # Read the image & get statistics
            image = scipy.misc.imread('/Path-To-Image/uu_000010.png')
            image = image.astype(float)
            Input_image_shape = image.shape
            height, width, channels = Input_image_shape

            print("Plot image...")
            #scipy.misc.imshow(image)

            # Set the FCN graph to the default graph
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            sess.graph.as_default()

            # Import a graph_def into the current default Graph (In this case, the weights are (typically) embedded in the graph)
            tf.import_graph_def(
                graph_def,
                input_map=None,
                return_elements=None,
                name="",
                op_dict=None,
                producer_op_list=None
            )

            # Print the name of operations in the session
            for op in graph.get_operations():
                print "Operation Name :", op.name         # Operation name
                print "Tensor Stats :", str(op.values())  # Tensor name

            # INFERENCE Here
            l_input = graph.get_tensor_by_name('Inputs/fifo_queue_Dequeue:0')    # Input Tensor
            l_output = graph.get_tensor_by_name('upscore32/conv2d_transpose:0')  # Output Tensor

            print "Shape of input : ", tf.shape(l_input)
            #initialize_all_variables
            tf.global_variables_initializer()

            # Run the Kitti model on a single image
            Session_out = sess.run(l_output, feed_dict={l_input: image})
Have you already looked at demo.py? Line 141 shows how they modify the input of the graph:
# Create a placeholder for the input
image_pl = tf.placeholder(tf.float32)
image = tf.expand_dims(image_pl, 0)

# build the TensorFlow graph using the model from logdir
prediction = core.build_inference_graph(hypes, modules,
                                        image=image)
And at line 164 how the image is opened:
image = scp.misc.imread(input_image)
This is then fed directly to image_pl. The only catch is that core.build_inference_graph is a TensorVision call.
Note, it would be helpful to provide the exact error message as input as well.
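Applied to the frozen graph from the question, a rough sketch of the same idea follows; the tensor names come from the question, and the added batch dimension is an assumption about what the input placeholder expects:

import numpy as np
import scipy.misc
import tensorflow as tf

# Load the image as float32 and add a leading batch dimension,
# mirroring the tf.expand_dims(image_pl, 0) step from demo.py.
image = scipy.misc.imread('/Path-To-Image/uu_000010.png').astype(np.float32)
batched_image = np.expand_dims(image, 0)  # shape: (1, height, width, channels)

# Feed `batched_image` instead of the raw (height, width, channels) array
# if the input tensor of the frozen graph is 4-D:
# result = sess.run(l_output, feed_dict={l_input: batched_image})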
