Error while allocating tensors in tflite Interpreter - python

I am building a linear regression model (3 float input features) that should run on-device in an Android app and make predictions based on user input.
For this I used the TensorFlow estimator tf.estimator.LinearRegressor, and I exported a SavedModel from it with this code:
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(tf.feature_column.make_parse_example_spec([crim, indus, tax]))
export_path = model_est.export_saved_model("saved_model", serving_input_fn)
where the feature columns have been defined before in the code as:
tax = tf.feature_column.numeric_column('tax')
indus = tf.feature_column.numeric_column('indus')
crim = tf.feature_column.numeric_column('crim')
The whole model building code is as follows:
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()
TRAIN_CSV_PATH = './data/BostonHousing_subset.csv'
TEST_CSV_PATH = './data/boston_test_subset.csv'
PREDICT_CSV_PATH = './data/boston_predict_subset.csv'
# target variable to predict:
LABEL_PR = "medv"
def get_batch(file_path, batch_size, num_epochs=None, **args):
    with open(file_path) as file:
        num_rows = len(file.readlines())
    dataset = tf.data.experimental.make_csv_dataset(
        file_path, batch_size, label_name=LABEL_PR, num_epochs=num_epochs, header=True, **args)
    # repeat and shuffle and batch separately instead of the previous line
    # for clarity purposes
    # dataset = dataset.repeat(num_epochs)
    # dataset = dataset.batch(batch_size)
    iterator = dataset.make_one_shot_iterator()
    elem = iterator.get_next()
    return elem
# Now to define the feature columns
tax = tf.feature_column.numeric_column('tax')
indus = tf.feature_column.numeric_column('indus')
crim = tf.feature_column.numeric_column('crim')
# Building the model
model_est = tf.estimator.LinearRegressor(feature_columns=[crim, indus, tax], model_dir='model_dir')
# Train it now
model_est.train(steps=2300, input_fn=lambda: get_batch(TRAIN_CSV_PATH, batch_size=256))
results = model_est.evaluate(steps=1000, input_fn=lambda: get_batch(TEST_CSV_PATH, batch_size=128))
for key in results:
    print(" {}, was: {}".format(key, results[key]))
to_pred = {
    'crim': [0.03359, 5.09017, 0.12650, 0.05515, 8.15174, 0.24522],
    'indus': [2.95, 18.10, 5.13, 2.18, 18.10, 9.90],
    'tax': [252, 666, 284, 222, 666, 304],
}
def test_get_inp():
    dataset = tf.data.Dataset.from_tensors(to_pred)
    return dataset
# Predict
for pred_results in model_est.predict(input_fn=test_get_inp):
    print(pred_results['predictions'][0])
# Now to export as SavedModel
print(tf.feature_column.make_parse_example_spec([crim, indus, tax]))
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(tf.feature_column.make_parse_example_spec([crim, indus, tax]))
export_path = model_est.export_saved_model("saved_model", serving_input_fn)
The code I am using to convert this SavedModel to tflite format is:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/1576168761')
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
which outputs a .tflite file.
However, when I try to load this tflite file using this code:
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
I get this error:
Traceback (most recent call last):
File "D:/Documents/Projects/boston_house_pricing/get_model_details.py", line 5, in <module>
interpreter.allocate_tensors()
File "D:\Anaconda3\envs\boston_housing\lib\site-packages\tensorflow_core\lite\python\interpreter.py", line 244, in allocate_tensors
return self._interpreter.AllocateTensors()
File "D:\Anaconda3\envs\boston_housing\lib\site-packages\tensorflow_core\lite\python\interpreter_wrapper\tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.Node number 0 (FlexParseExample) failed to prepare.
I am unable to understand how to resolve this error. An error with the same message is also thrown when I try to initialize an interpreter with this file in Java on Android using tflite.
Any help regarding this would be greatly appreciated.

It seems like the error explains the issue pretty well: when converting to tflite you specified the tf.lite.OpsSet.SELECT_TF_OPS flag, which causes the converter to include operations that are not natively supported by tflite, and it expects you to use the Flex delegate in tflite to compile and include those operations in the interpreter.
For more info regarding Flex: https://www.tensorflow.org/lite/guide/ops_select
In any case you have two main options: either use Flex and compile the needed ops into the interpreter, or use only operations that are natively supported by tflite and omit the tf.lite.OpsSet.SELECT_TF_OPS flag.
For natively supported TensorFlow ops refer here: https://www.tensorflow.org/lite/guide/ops_compatibility
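For the second option, a minimal sketch of the conversion restricted to built-in ops is shown below (the SavedModel path is taken from the question). Note that if the graph still contains a TF-only op, such as the ParseExample op introduced by the parsing serving input receiver, this convert() call is expected to fail with a message naming the unsupported op, which would point you back to the Flex route (or to exporting with a different serving input function).
import tensorflow as tf
# Conversion restricted to natively supported ops only (no SELECT_TF_OPS).
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/1576168761')
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()
with open("converted_model_builtins_only.tflite", "wb") as f:
    f.write(tflite_model)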

Related

reshape.cc:55 stretch_dim != -1. Node number X (RESHAPE) failed to prepare

I'm new to all this, and I need some help with running inference using a custom tflite yolov3 tiny model.
The error I am getting is:
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py", line 524, in invoke
self._interpreter.Invoke()
RuntimeError: tensorflow/lite/kernels/reshape.cc:55 stretch_dim != -1 (0 != -1)Node number 35 (RESHAPE) failed to prepare.
What have I done to get here:
trained a custom yolov3 tiny model for object detection to detect just 1 class using this project
https://github.com/pythonlessons/TensorFlow-2.x-YOLOv3.git
used default hyperparameters:
https://github.com/pythonlessons/TensorFlow-2.x-YOLOv3/blob/master/yolov3/configs.py
used tf-nightly
the model is here:
https://github.com/vladimirhorvat/y/blob/master/app/src/main/assets/converted_model.tflite
After the model was trained I tested the SavedModel by running inference, and it worked.
I then converted the SavedModel to tflite, ran inference on it using the following code, and received the error from the title:
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
(this code is from here, btw
https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python )
Data for node 35:
type: Reshape
location: 35
inputs
data: name: functional_1/tf_op_layer_Tile_3/Tile_3;StatefulPartitionedCall/functional_1/tf_op_layer_Tile_3/Tile_3
shape: name: functional_1/tf_op_layer_strided_slice_6/strided_slice_6;StatefulPartitionedCall/functional_1/tf_op_layer_strided_slice_6/strided_slice_6
outputs
reshaped: name: functional_1/tf_op_layer_strided_slice_16/strided_slice_16;StatefulPartitionedCall/functional_1/tf_op_layer_strided_slice_16/strided_slice_16
Please help. I am out of ideas.
I was able to solve the exact same problem by following this similar post:
https://stackoverflow.com/a/62552677/11334316
In essence you would have to do the following when converting your model:
import tensorflow as tf

batch_size = 1
# Load the trained model and read its (dynamic-batch) input shape
model = tf.keras.models.load_model('./yolo_model')
input_shape = model.inputs[0].shape.as_list()
# Pin the batch dimension to a fixed size before conversion
input_shape[0] = batch_size
# Build a concrete function with the fully defined input shape
func = tf.function(model).get_concrete_function(tf.TensorSpec(input_shape, model.inputs[0].dtype))
model_converter = tf.lite.TFLiteConverter.from_concrete_functions([func])
model_lite = model_converter.convert()
f = open("./yolo_model.tflite", "wb")
f.write(model_lite)
f.close()
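With the batch dimension fixed like this, the converted model should get past the RESHAPE preparation step. A quick sanity check (a sketch, reusing the file name above and the same inference pattern as in the question) might look like:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./yolo_model.tflite")
interpreter.allocate_tensors()  # previously failed at node 35 (RESHAPE)
input_details = interpreter.get_input_details()
# Feed a dummy input with the now-static shape to confirm invoke() runs
dummy = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()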

TensorFlow saved model export conversion to tflite

TLDR:
I get a ValueError when running
tf.contrib.lite.TocoConverter.from_saved_model()
Aims: I am trying to convert a TensorFlow SavedModel to tflite for deployment on mobile devices via Firebase. I can train the model and output a SavedModel, but I am having trouble converting it to .tflite with the Python TOCO interface. Any help would be greatly appreciated. Also, can anyone comment on whether the tflite conversion will capture the hub.text_embedding_column() input processing that I am relying on? Will the mobile deployment handle raw input text, or do I need to deploy that part of it separately?
Question: here is the code I am running:
INPUTS:
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    train_df, train_df["target_var"], num_epochs=None, shuffle=True
)
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
    train_df, train_df["target_var"], shuffle=False
)
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(
    test_df, test_df["target_var"], shuffle=False)
embedded_text_feature_column = hub.text_embedding_column(
    key="text",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1"
)
TRAIN AND EVALUATE:
estimator = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.003),
    model_dir="my-model"
)
estimator.train(input_fn=train_input_fn, steps=1000)
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
SAVE MODEL:
feature_spec = tf.feature_column.make_parse_example_spec([embedded_text_feature_column])
serve_input_fun = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec,
    default_batch_size=None
)
estimator.export_savedmodel(
    export_dir_base="my-model",
    serving_input_receiver_fn=serve_input_fun,
    as_text=False,
    checkpoint_path="my-model/model.ckpt-1000",
)
CONVERT MODEL:
converter = tf.contrib.lite.TocoConverter.from_saved_model("my-model/1529320265/")
tflite_model = converter.convert()
Error
When running the last line I get the following error:
ValueError: Tensors input_example_tensor:0 not known type tf.string
And the full trace is:
ValueError Traceback (most recent call last)
in ()
1 converter = tf.contrib.lite.TocoConverter.from_saved_model("my-model/1529320265/")
----> 2 tflite_model = converter.convert()
/media/rmn/data/projects/anaconda3/envs/monily_tf19/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py in convert(self)
307 reorder_across_fake_quant=self.reorder_across_fake_quant,
308 change_concat_input_ranges=self.change_concat_input_ranges,
--> 309 allow_custom_ops=self.allow_custom_ops)
310 return result
311
/media/rmn/data/projects/anaconda3/envs/monily_tf19/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py in toco_convert(input_data, input_tensors, output_tensors, inference_type, inference_input_type, input_format, output_format, quantized_input_stats, default_ranges_stats, drop_control_dependency, reorder_across_fake_quant, allow_custom_ops, change_concat_input_ranges)
204 else:
205 raise ValueError("Tensors %s not known type %r" % (input_tensor.name,
--> 206 input_tensor.dtype))
207
208 input_array = model.input_arrays.add()
ValueError: Tensors input_example_tensor:0 not known type tf.string
Details
train_df and test_df are pandas dataframes consisting of a single input text column and a binary target variable. I am using Python 3.6.5 and TensorFlow r1.9.
This issue is fixed on TensorFlow's master branch (in commit d3931c8). Reference the following documentation on TensorFlow's website to build a pip installation from GitHub: https://www.tensorflow.org/install/install_sources.
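Once you are on a build that contains that fix (an assumption: any pip package built from master at or after commit d3931c8), the conversion call from the question should work unchanged, for example:
import tensorflow as tf
# Same call as in the question; only the TensorFlow build changes.
converter = tf.contrib.lite.TocoConverter.from_saved_model("my-model/1529320265/")
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)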

Tensorflow import_graph_def after quantization results in error

I am trying to generate an eight-bit quantized graph for a custom LSTM model using TransformGraph. The graph import works fine if I only apply quantize_weights. Once quantize_nodes is applied, importing fails with the error given below
ValueError: Specified colocation to an op that does not exist during import: lstm1/lstm1/BasicLSTMCellZeroState/zeros in lstm1/lstm1/cond/Switch_2
The code snippet I am using for quantizing is listed below
from tensorflow.tools.graph_transforms import TransformGraph
import tensorflow as tf
input_names = ["inp/X"]
output_names = ["out/Softmax"]
#transforms = ["quantize_weights", "quantize_nodes"]
#transforms = ["quantize_weights"]
transforms = ["add_default_attributes",
"strip_unused_nodes",
"remove_nodes(op=Identity, op=CheckNumerics)",
#"fold_constants(ignore_errors=true)",
"fold_batch_norms",
"fold_old_batch_norms",
"quantize_weights",
"quantize_nodes",
"sort_by_execution_order"]
#output_graph_path="/tmp/fixed.pb"
output_graph_path="/tmp/output_graph.pb"
with tf.Graph().as_default():
    output_graph_def = tf.GraphDef()
    with tf.Session() as sess:
        with open(output_graph_path, "rb") as f:
            output_graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(output_graph_def, name="")
        transformed_graph_def = TransformGraph(output_graph_def, input_names,
                                               output_names, transforms)
        tf.train.write_graph(transformed_graph_def, '/tmp', 'quantized.pb', as_text=False)
I also tried using quantize_graph.py, which always resulted in a KeyError as in https://github.com/tensorflow/tensorflow/issues/8025. I believe this code is no longer maintained. Can anyone please point out how to debug this issue?

TypeError when training Tensorflow Random Forest using TensorForestEstimator

I get a TypeError when attempting to train a TensorFlow random forest using TensorForestEstimator.
TypeError: Input 'input_data' of 'CountExtremelyRandomStats' Op has type float64 that does not match expected type of float32.
I've tried using Python 2.7 and Python 3, and I've tried using tf.cast() to put everything in float32 but it doesn't help. I have checked the data type on execution and it's float32. The problem doesn't seem to be the data I provide (csv of all floats), so I'm not sure where to go from here.
Any suggestions of things I can try would be much appreciated.
Code:
# Build an estimator.
def build_estimator(model_dir):
    params = tensor_forest.ForestHParams(
        num_classes=2, num_features=760,
        num_trees=FLAGS.num_trees, max_nodes=FLAGS.max_nodes)
    graph_builder_class = tensor_forest.RandomForestGraphs
    if FLAGS.use_training_loss:
        graph_builder_class = tensor_forest.TrainingLossForest
    # Use the SKCompat wrapper, which gives us a convenient way to split in-memory data into batches.
    return estimator.SKCompat(random_forest.TensorForestEstimator(params, graph_builder_class=graph_builder_class, model_dir=model_dir))
# Train and evaluate the model.
def train_and_eval():
    # load datasets
    training_set = pd.read_csv('/Users/carl/Dropbox/Docs/Python/randomforest_balanced_train.csv', dtype=np.float32, header=None)
    test_set = pd.read_csv('/Users/carl/Dropbox/Docs/Python/randomforest_balanced_test.csv', dtype=np.float32, header=None)
    print('###########')
    print(training_set.loc[:,1].dtype) # this prints float32
    # load labels
    training_labels = pd.read_csv('/Users/carl/Dropbox/Docs/Python/randomforest_balanced_train_class.csv', dtype=np.int32, names=LABEL, header=None)
    test_labels = pd.read_csv('/Users/carl/Dropbox/Docs/Python/randomforest_balanced_test_class.csv', dtype=np.int32, names=LABEL, header=None)
    # define the path where the model will be stored - default is current directory
    model_dir = tempfile.mkdtemp() if not FLAGS.model_dir else FLAGS.model_dir
    print('model directory = %s' % model_dir)
    # build the random forest estimator
    est = build_estimator(model_dir)
    tf.cast(training_set, tf.float32) #error occurs with/without casts
    tf.cast(test_set, tf.float32)
    # train the forest to fit the training data
    est.fit(x=training_set, y=training_labels) #this line throws the error
You are using tf.cast in an incorrect manner:
tf.cast(training_set, tf.float32) #error occurs with/without casts
should be
training_set = tf.cast(training_set, tf.float32)
tf.cast is not an in-place method; it is a TensorFlow op like any other operation, so its result needs to be assigned and run.
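As a minimal illustration of that point (a toy tensor, not the question's data), the cast only takes effect through the returned tensor:
import tensorflow as tf
x = tf.constant([1.0, 2.0, 3.0], dtype=tf.float64)
y = tf.cast(x, tf.float32)   # returns a new float32 tensor; x itself is unchanged
with tf.Session() as sess:
    print(sess.run(y).dtype)  # float32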

Tensorflow Inception_Resnet_V2 Classify Image

I trained an inception_resnet_v2 model on the flowers images following the README at https://github.com/tensorflow/models/tree/master/slim
After training this produced a graph.pbtxt file, which I converted to a graph.pb file with the following code:
import tensorflow as tf
from google.protobuf import text_format
def convert_pbtxt_to_graphdef(filename):
    """Returns a `tf.GraphDef` proto representing the data in the given pbtxt file.

    Args:
      filename: The name of a file containing a GraphDef pbtxt (text-formatted
        `tf.GraphDef` protocol buffer data).

    Returns:
      A `tf.GraphDef` protocol buffer.
    """
    with tf.gfile.FastGFile(filename, 'r') as f:
        graph_def = tf.GraphDef()
        file_content = f.read()
        # Merges the human-readable string in `file_content` into `graph_def`.
        text_format.Merge(file_content, graph_def)
        return graph_def

with tf.gfile.FastGFile('/foo/bar/workspace/results/graph.pb', 'wb') as f:
    f.write(convert_pbtxt_to_graphdef('/foo/bar/workspace/results/graph.pbtxt'))
After getting this file I tried feeding the trained model a random image using tensorflow's classify_image.py found here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py
using my .pb, .pbtxt, and labels files; however, I get the following error:
Traceback (most recent call last):
File "classify_image.py", line 212, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "classify_image.py", line 208, in main
run_inference_on_image(image)
File "classify_image.py", line 170, in run_inference_on_image
softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2615, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2466, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2508, in _as_graph_element_locked
"graph." % (repr(name), repr(op_name)))
KeyError: "The name 'softmax:0' refers to a Tensor which does not exist. The operation, 'softmax', does not exist in the graph."
The problem with slim, and in fact with tensorflow/models in general, is that the framework, and therefore the produced models, do not really fit the prediction use case:
TF-slim is a new lightweight high-level API of TensorFlow
(tensorflow.contrib.slim) for defining, training and evaluating
complex models.
(Source https://github.com/tensorflow/models/tree/master/slim)
The main problem at prediction time is that it only works well with the checkpoint files created by the Saver class. When using checkpoint files, the assign_from_checkpoint_fn() method can be used to initialize all the variables with the trained parameters contained in the checkpoint. If, on the other hand, you only have the GraphDef file *.pb, you are somewhat stuck. There is a nice trick, though.
The key idea is to inject a tf.placeholder variable for the input image(s) into the computation graph after you have saved your trained model as a checkpoint. The following script (convert_checkpoint_to_pb.py) reads a checkpoint, inserts a placeholder, converts the graph variables to constants and dumps it to a *.pb file.
import tensorflow as tf
from tensorflow.contrib import slim
from nets import inception
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.tools.optimize_for_inference_lib import optimize_for_inference
from preprocessing import inception_preprocessing
checkpoints_dir = '/path/to/your/checkpoint_dir/'
OUTPUT_PB_FILENAME = 'minimal_graph.proto'
NUM_CLASSES = 2
# We need default size of image for a particular network.
# The network was trained on images of that size -- so we
# resize input image later in the code.
image_size = inception.inception_resnet_v2.default_image_size
with tf.Graph().as_default():
    # Inject placeholder into the graph
    input_image_t = tf.placeholder(tf.string, name='input_image')
    image = tf.image.decode_jpeg(input_image_t, channels=3)

    # Resize the input image, preserving the aspect ratio
    # and make a central crop of the resulted image.
    # The crop will be of the size of the default image size of
    # the network.
    # I use the "preprocess_for_eval()" method instead of "inception_preprocessing()"
    # because the latter crops all images to the center by 85% at
    # prediction time (training=False).
    processed_image = inception_preprocessing.preprocess_for_eval(image,
                                                                  image_size,
                                                                  image_size, central_fraction=None)

    # Networks accept images in batches.
    # The first dimension usually represents the batch size.
    # In our case the batch size is one.
    processed_images = tf.expand_dims(processed_image, 0)

    # Load the inception network structure
    with slim.arg_scope(inception.inception_resnet_v2_arg_scope()):
        logits, _ = inception.inception_resnet_v2(processed_images,
                                                  num_classes=NUM_CLASSES,
                                                  is_training=False)
    # Apply softmax function to the logits (output of the last layer of the network)
    probabilities = tf.nn.softmax(logits)

    model_path = tf.train.latest_checkpoint(checkpoints_dir)

    # Get the function that initializes the network structure (its variables) with
    # the trained values contained in the checkpoint
    init_fn = slim.assign_from_checkpoint_fn(
        model_path,
        slim.get_model_variables())

    with tf.Session() as sess:
        # Now call the initialization function within the session
        init_fn(sess)

        # Convert variables to constants and make sure the placeholder input_image is included
        # in the graph as well as the other necessary tensors.
        constant_graph = convert_variables_to_constants(sess, sess.graph_def, ["input_image", "DecodeJpeg",
                                                                               "InceptionResnetV2/Logits/Predictions"])

        # Define the input and output layer properly
        optimized_constant_graph = optimize_for_inference(constant_graph, ["eval_image"],
                                                          ["InceptionResnetV2/Logits/Predictions"],
                                                          tf.string.as_datatype_enum)
        # Write the production ready graph to file.
        tf.train.write_graph(optimized_constant_graph, '.', OUTPUT_PB_FILENAME, as_text=False)
(The models/slim code must be in your python path to execute this code)
To predict new images with the converted model (now present as a *.pb file) use the code from file minimal_predict.py:
import tensorflow as tf
import urllib2
def create_graph(model_file):
    """Creates a graph from saved GraphDef file and returns a saver."""
    # Creates graph from saved graph_def.pb.
    with tf.gfile.FastGFile(model_file, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')
model_file = "/your/path/to/minimal_graph.proto"
url = ("http://pictureparadise.net/funny-babies/funny-babies02/funny-babies-053.jpg")
# Open specified url and load image as a string
image_string = urllib2.urlopen(url).read()
with tf.Graph().as_default():
    with tf.Session() as new_sess:
        create_graph(model_file)
        softmax = new_sess.graph.get_tensor_by_name("InceptionResnetV2/Logits/Predictions:0")
        # Loading the injected placeholder
        input_placeholder = new_sess.graph.get_tensor_by_name("input_image:0")
        probabilities = new_sess.run(softmax, {input_placeholder: image_string})
        print probabilities
To use these scripts, simply run
python convert_checkpoint_to_pb.py
python minimal_predict.py
while having tensorflow and tensorflow/models/slim in your PYTHONPATH.
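If you prefer not to modify the environment variable, a sketch of the same thing from inside the scripts (the checkout location is an assumption; adjust it to wherever you cloned tensorflow/models):
import sys
# Hypothetical checkout location of https://github.com/tensorflow/models
sys.path.append("/path/to/tensorflow/models/slim")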
In the convert_checkpoint_to_pb.py provided by #Maximilian, ["eval_image"] in
optimized_constant_graph = optimize_for_inference(constant_graph, ["eval_image"], ["InceptionResnetV2/Logits/Predictions"], tf.string.as_datatype_enum)
should be replaced by ["input_image"], else you will get an error stating "The following input nodes were not found: {'eval_image'}\n".
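So, under that correction (everything else unchanged from the script above), the call would read:
# Corrected input node name: the injected placeholder is called "input_image"
optimized_constant_graph = optimize_for_inference(constant_graph, ["input_image"],
                                                  ["InceptionResnetV2/Logits/Predictions"],
                                                  tf.string.as_datatype_enum)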
