TensorFlow saved model export conversion to tflite - python

TLDR:
I get a ValueError when running
tf.contrib.lite.TocoConverter.from_saved_model() followed by convert().
Aims: I am trying to convert a TensorFlow SavedModel to tflite for deployment on mobile devices via Firebase. I can train the model and output a SavedModel, but I am having trouble converting it to .tflite with the Python TOCO interface. Any help would be greatly appreciated. Also, can anyone comment on whether the tflite conversion will capture the hub.text_embedding_column() input processing that I am relying on? Will the mobile deployment execute this with raw input text, or do I need to deploy that part separately?
Question: here is the code I am running:
INPUTS:
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    train_df, train_df["target_var"], num_epochs=None, shuffle=True
)
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
    train_df, train_df["target_var"], shuffle=False
)
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(
    test_df, test_df["target_var"], shuffle=False
)
embedded_text_feature_column = hub.text_embedding_column(
    key="text",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1"
)
TRAIN AND EVALUATE:
estimator = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.003),
    model_dir="my-model"
)

estimator.train(input_fn=train_input_fn, steps=1000)
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
SAVE MODEL:
feature_spec = tf.feature_column.make_parse_example_spec([embedded_text_feature_column])
serve_input_fun = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec,
    default_batch_size=None
)
estimator.export_savedmodel(
    export_dir_base="my-model",
    serving_input_receiver_fn=serve_input_fun,
    as_text=False,
    checkpoint_path="my-model/model.ckpt-1000",
)
CONVERT MODEL:
converter = tf.contrib.lite.TocoConverter.from_saved_model("my-model/1529320265/")
tflite_model = converter.convert()
Error
When running the last line I get the following error:
ValueError: Tensors input_example_tensor:0 not known type tf.string
And the full trace is:
ValueError Traceback (most recent call last)
in ()
1 converter = tf.contrib.lite.TocoConverter.from_saved_model("my-model/1529320265/")
----> 2 tflite_model = converter.convert()
/media/rmn/data/projects/anaconda3/envs/monily_tf19/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py in convert(self)
307 reorder_across_fake_quant=self.reorder_across_fake_quant,
308 change_concat_input_ranges=self.change_concat_input_ranges,
--> 309 allow_custom_ops=self.allow_custom_ops)
310 return result
311
/media/rmn/data/projects/anaconda3/envs/monily_tf19/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py in toco_convert(input_data, input_tensors, output_tensors, inference_type, inference_input_type, input_format, output_format, quantized_input_stats, default_ranges_stats, drop_control_dependency, reorder_across_fake_quant, allow_custom_ops, change_concat_input_ranges)
204 else:
205 raise ValueError("Tensors %s not known type %r" % (input_tensor.name,
--> 206 input_tensor.dtype))
207
208 input_array = model.input_arrays.add()
ValueError: Tensors input_example_tensor:0 not known type tf.string
Details
train_df and test_df are pandas dataframes consisting of a single input text column and a binary target variable. I am using Python 3.6.5 and TensorFlow r1.9.

This issue is fixed on TensorFlow's master branch (commit d3931c8). See the documentation on TensorFlow's website for building a pip package from the GitHub sources: https://www.tensorflow.org/install/install_sources.
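Once you are on a build that includes the fix, the conversion call itself should not need to change; a minimal sketch (assuming the same export directory as above, with a hypothetical output filename so the flatbuffer can be bundled with the app):

import tensorflow as tf

# Same SavedModel directory produced by estimator.export_savedmodel() above.
converter = tf.contrib.lite.TocoConverter.from_saved_model("my-model/1529320265/")
tflite_model = converter.convert()

# Write the flatbuffer to disk so it can be shipped to the device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)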

Related

TypeError: an integer is required (got type NoneType)

Goal: amend this notebook to work with the distilbert-base-uncased model.
The error occurs in Section 1.3.
Kernel: conda_pytorch_p36. I did Restart & Run All and refreshed the file view in the working directory.
Section 1.3:
# define the tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    configs.output_dir, do_lower_case=configs.do_lower_case)
Traceback:
Evaluating PyTorch full precision accuracy and performance:
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/data/processors/glue.py:67: FutureWarning: This function will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py
warnings.warn(DEPRECATION_WARNING.format("function"), FutureWarning)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-31-1f864e3046eb> in <module>
144 # Evaluate the original FP32 BERT model
145 print('Evaluating PyTorch full precision accuracy and performance:')
--> 146 time_model_evaluation(model, configs, tokenizer)
147
148 # Evaluate the INT8 BERT model after the dynamic quantization
<ipython-input-31-1f864e3046eb> in time_model_evaluation(model, configs, tokenizer)
132 def time_model_evaluation(model, configs, tokenizer):
133 eval_start_time = time.time()
--> 134 result = evaluate(configs, model, tokenizer, prefix="")
135 eval_end_time = time.time()
136 eval_duration_time = eval_end_time - eval_start_time
<ipython-input-31-1f864e3046eb> in evaluate(args, model, tokenizer, prefix)
22 results = {}
23 for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs):
---> 24 eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True)
25
26 if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]:
<ipython-input-31-1f864e3046eb> in load_and_cache_examples(args, task, tokenizer, evaluate)
121 all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
122 all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
--> 123 all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
124 if output_mode == "classification":
125 all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
Please let me know if there's anything else I can add to this post.
A developer explains this predicament in this GitHub issue.
The notebook experiments with BERT, which uses token_type_ids.
DistilBERT does not use token_type_ids during training.
So this requires reworking the notebook: removing or conditioning every use of token_type_ids for this model specifically.
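For illustration, one way to condition the tensor construction in load_and_cache_examples is sketched below; this is not from the original notebook, and the feature/label attribute names are taken from the traceback above:

import torch
from torch.utils.data import TensorDataset

all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)

# DistilBERT features carry no token_type_ids, so only build that tensor when it is present.
if features and features[0].token_type_ids is not None:
    all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
    dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
else:
    dataset = TensorDataset(all_input_ids, all_attention_mask, all_labels)

Any downstream code that unpacks the batch (for example, when building the model's input dict) would need the same guard.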

'Invalid input shapes: expected 1 items got 16 items' when trying to quantize model for tflite

I was trying to quantize a TF model into a TFLite model to deploy it on my ESP32. I load the dataset with tf.keras.preprocessing.image_dataset_from_directory() and iterate over image_batch and labels_batch inside the representative_dataset() function. At first I was getting the error 'EndVector() takes 1 positional argument but 2 were given', but after I restarted the kernel it went away, and now I am getting 'Invalid input shapes: expected 1 items got 16 items'. I have tried absolutely everything but it just doesn't seem to work. Can someone please help me out?
Code sample of the quantizer with error:
def representative_dataset():
    for image_batch, labels_batch in val_ds:
        yield (image_batch)

# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer only quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()

# Save the model to disk
open('modelwithquant.tflite', "wb").write(model_tflite)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-25-31cc89ef97b4> in <module>
13 # Provide a representative dataset to ensure we quantize correctly.
14 converter.representative_dataset = representative_dataset
---> 15 model_tflite = converter.convert()
16
17 # Save the model to disk
/usr/lib/python3.9/site-packages/tensorflow/lite/python/lite.py in convert(self)
919
920 if calibrate_and_quantize:
--> 921 result = self._calibrate_quantize_model(result, **flags)
922
923 flags_modify_model_io_type = quant_mode.flags_modify_model_io_type(
/usr/lib/python3.9/site-packages/tensorflow/lite/python/lite.py in _calibrate_quantize_model(self, result, inference_input_type, inference_output_type, activations_type, allow_float)
519 custom_op_registerers_by_func)
520 if self._experimental_calibrate_only or self.experimental_new_quantizer:
--> 521 calibrated = calibrate_quantize.calibrate(
522 self.representative_dataset.input_gen)
523
/usr/lib/python3.9/site-packages/tensorflow/lite/python/optimize/calibrator.py in calibrate(self, dataset_gen)
170 if not initialized:
171 initialized = True
--> 172 self._calibrator.Prepare([list(s.shape) for s in sample])
173 self._calibrator.FeedTensor(sample)
174 return self._calibrator.Calibrate()
ValueError: Invalid input shapes: expected 1 items got 16 items.
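For context (not part of the original question): the converter expects the representative_dataset generator to yield a list with one array per model input, typically a single example at a time, rather than a whole batch. A minimal sketch under that assumption, with val_ds as returned by image_dataset_from_directory:

def representative_dataset():
    # Calibrate on a limited number of batches.
    for image_batch, _ in val_ds.take(100):
        for image in image_batch:
            # One example per yield, wrapped in a list (one entry per model input),
            # with an explicit batch dimension of 1.
            yield [tf.expand_dims(tf.cast(image, tf.float32), axis=0)]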

reshape.cc:55 stretch_dim != -1. Node number X (RESHAPE) failed to prepare

I'm new to all this, and I need some help with running inference using a custom tflite yolov3 tiny model.
The error I am getting is:
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py", line 524, in invoke
self._interpreter.Invoke()
RuntimeError: tensorflow/lite/kernels/reshape.cc:55 stretch_dim != -1 (0 != -1)Node number 35 (RESHAPE) failed to prepare.
What have I done to get here:
trained a custom yolov3 tiny model for object detection to detect just 1 class using this project
https://github.com/pythonlessons/TensorFlow-2.x-YOLOv3.git
used default hyperparameters:
https://github.com/pythonlessons/TensorFlow-2.x-YOLOv3/blob/master/yolov3/configs.py
used tf-nightly
the model is here:
https://github.com/vladimirhorvat/y/blob/master/app/src/main/assets/converted_model.tflite
After training, I tested the SavedModel by running inference and it worked.
I then converted the SavedModel to tflite, ran inference on it using the following code, and received the error from the title:
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
(this code is from here, btw
https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python )
Data for node 35:
type: Reshape
location: 35
inputs
data: name: functional_1/tf_op_layer_Tile_3/Tile_3;StatefulPartitionedCall/functional_1/tf_op_layer_Tile_3/Tile_3
shape: name: functional_1/tf_op_layer_strided_slice_6/strided_slice_6;StatefulPartitionedCall/functional_1/tf_op_layer_strided_slice_6/strided_slice_6
outputs
reshaped: name: functional_1/tf_op_layer_strided_slice_16/strided_slice_16;StatefulPartitionedCall/functional_1/tf_op_layer_strided_slice_16/strided_slice_16
Please help. I am out of ideas.
I was able to solve the exact same problem by following this similar post:
https://stackoverflow.com/a/62552677/11334316
In essence you would have to do the following when converting your model:
batch_size = 1
model = tf.keras.models.load_model('./yolo_model')
input_shape = model.inputs[0].shape.as_list()
input_shape[0] = batch_size
func = tf.function(model).get_concrete_function(tf.TensorSpec(input_shape, model.inputs[0].dtype))
model_converter = tf.lite.TFLiteConverter.from_concrete_functions([func])
model_lite = model_converter.convert()
f = open("./yolo_model.tflite", "wb")
f.write(model_lite)
f.close()
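If I understand the fix correctly, the SavedModel keeps a dynamic batch dimension (None), and after conversion the RESHAPE node cannot resolve that unknown dimension when tensors are allocated, which is what trips the stretch_dim != -1 check; pinning the batch dimension to 1 through the concrete function's TensorSpec makes every shape static.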

Converting tensorflow 2 estimator to tf.lite

I am trying to convert an estimator LinearClassifier to tflite.
However, the code throws an error and I am not able to see where I am going wrong.
Here is my code:
import pandas as pd
import tensorflow as tf

dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')

# Create feature columns. For testing I am using only numeric ones.
NUMERIC_COLUMNS = ['age', 'fare']
feature_columns = []
for feature_name in NUMERIC_COLUMNS:
    feature_columns.append(tf.feature_column.numeric_column(feature_name,
                                                            dtype=tf.float32))

# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)

def make_input_fn(X, y, n_epochs=None, shuffle=True):
    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
        if shuffle:
            dataset = dataset.shuffle(NUM_EXAMPLES)
        # For training, cycle through the dataset as many times as needed (n_epochs=None).
        dataset = dataset.repeat(n_epochs)
        # In-memory training doesn't use batching.
        dataset = dataset.batch(NUM_EXAMPLES)
        return dataset
    return input_fn

# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain[NUMERIC_COLUMNS], y_train)
eval_input_fn = make_input_fn(dfeval[NUMERIC_COLUMNS], y_eval, shuffle=False, n_epochs=1)

linear_est = tf.estimator.LinearClassifier(feature_columns)
# Train model.
linear_est.train(train_input_fn, max_steps=100)
# Evaluation.
result = linear_est.evaluate(eval_input_fn)
So the model is working fine.
print(pd.Series(result))
accuracy 0.659091
accuracy_baseline 0.625000
auc 0.667095
auc_precision_recall 0.589936
average_loss 0.619227
label/mean 0.375000
loss 0.619227
precision 0.764706
prediction/mean 0.336755
recall 0.131313
global_step 100.000000
dtype: float64
Now saving part:
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec(feature_columns))
model_dir = 'model_data'
path = linear_est.export_saved_model(model_dir, serving_input_fn)
When I am using:
converter = tf.lite.TFLiteConverter.from_saved_model(path)
tflite_model = converter.convert()
it throws this error:
ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.
I have also tried:
saved_model_obj = tf.saved_model.load(export_dir=path)
concrete_func = saved_model_obj.signatures['serving_default']
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
And the error is:
ConverterError: See console for info.
2020-02-18 16:23:15.446583: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2020-02-18 16:23:15.446687: F tensorflow/lite/toco/import_tensorflow.cc:2706] Check failed: status.ok() Input_content string_val doesn't have the right dimensions for this string tensor
(while processing node 'head/AsString')
Fatal Python error: Aborted
Please help.
Please refer to the following info:
"Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ADD_N, ARG_MAX, EXPAND_DIMS, FULLY_CONNECTED, RESHAPE, SOFTMAX, TILE. Here is a list of operators for which you will need custom implementations: AsString."
For me, this problem was solved after adding option "--allow_custom_ops":
tflite_convert --enable_v1_converter --allow_custom_ops --output_file=xxx --saved_model_dir=xxx --saved_model_signature_key='predict'
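The same flags can be set through the Python API instead of the tflite_convert CLI; a sketch only, using the 'predict' signature from the command above and a hypothetical output filename (note that a model converted with allow_custom_ops still needs a runtime implementation of AsString, while SELECT_TF_OPS requires the Flex delegate on the device):

saved_model_obj = tf.saved_model.load(export_dir=path)
concrete_func = saved_model_obj.signatures['predict']
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
# Mirror of --allow_custom_ops: emit the unsupported AsString op as a custom op.
converter.allow_custom_ops = True
# Alternative: fall back to the TF op set via the Flex delegate.
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
#                                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open('linear_classifier.tflite', 'wb').write(tflite_model)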

Error while allocating tensors in tflite Interpreter

I am making a Linear Regression model (3 input parameters of type float) that can be made to run on-device in an Android app that makes predictions based on user input.
For this, I have used the TensorFlow estimator tf.estimator.LinearRegressor. I also made a SavedModel out of this using this code:
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(tf.feature_column.make_parse_example_spec([crim, indus, tax]))
export_path = model_est.export_saved_model("saved_model", serving_input_fn)
where the feature columns have been defined before in the code as:
tax = tf.feature_column.numeric_column('tax')
indus = tf.feature_column.numeric_column('indus')
crim = tf.feature_column.numeric_column('crim')
The whole model building code is as follows:
import tensorflow as tf
tf.compat.v1.disable_v2_behavior()

TRAIN_CSV_PATH = './data/BostonHousing_subset.csv'
TEST_CSV_PATH = './data/boston_test_subset.csv'
PREDICT_CSV_PATH = './data/boston_predict_subset.csv'

# target variable to predict:
LABEL_PR = "medv"

def get_batch(file_path, batch_size, num_epochs=None, **args):
    with open(file_path) as file:
        num_rows = len(file.readlines())
    dataset = tf.data.experimental.make_csv_dataset(
        file_path, batch_size, label_name=LABEL_PR, num_epochs=num_epochs, header=True, **args)
    # repeat and shuffle and batch separately instead of the previous line
    # for clarity purposes
    # dataset = dataset.repeat(num_epochs)
    # dataset = dataset.batch(batch_size)
    iterator = dataset.make_one_shot_iterator()
    elem = iterator.get_next()
    return elem

# Now to define the feature columns
tax = tf.feature_column.numeric_column('tax')
indus = tf.feature_column.numeric_column('indus')
crim = tf.feature_column.numeric_column('crim')

# Building the model
model_est = tf.estimator.LinearRegressor(feature_columns=[crim, indus, tax], model_dir='model_dir')

# Train it now
model_est.train(steps=2300, input_fn=lambda: get_batch(TRAIN_CSV_PATH, batch_size=256))
results = model_est.evaluate(steps=1000, input_fn=lambda: get_batch(TEST_CSV_PATH, batch_size=128))
for key in results:
    print("  {}, was: {}".format(key, results[key]))

to_pred = {
    'crim': [0.03359, 5.09017, 0.12650, 0.05515, 8.15174, 0.24522],
    'indus': [2.95, 18.10, 5.13, 2.18, 18.10, 9.90],
    'tax': [252, 666, 284, 222, 666, 304],
}

def test_get_inp():
    dataset = tf.data.Dataset.from_tensors(to_pred)
    return dataset

# Predict
for pred_results in model_est.predict(input_fn=test_get_inp):
    print(pred_results['predictions'][0])

# Now to export as SavedModel
print(tf.feature_column.make_parse_example_spec([crim, indus, tax]))
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec([crim, indus, tax]))
export_path = model_est.export_saved_model("saved_model", serving_input_fn)
The code I am using to convert this SavedModel to tflite format is:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/1576168761')
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
which outputs a .tflite file.
However, when I try to load this tflite file using this code:
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
I get this error:
Traceback (most recent call last):
File "D:/Documents/Projects/boston_house_pricing/get_model_details.py", line 5, in <module>
interpreter.allocate_tensors()
File "D:\Anaconda3\envs\boston_housing\lib\site-packages\tensorflow_core\lite\python\interpreter.py", line 244, in allocate_tensors
return self._interpreter.AllocateTensors()
File "D:\Anaconda3\envs\boston_housing\lib\site-packages\tensorflow_core\lite\python\interpreter_wrapper\tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.Node number 0 (FlexParseExample) failed to prepare.
I am unable to understand how to resolve this error. Also, an error with the same message is thrown when I try to initialize an interpreter with this file (Android) in Java using tflite.
Help would be greatly appreciated regarding the same.
It seems like the error explains the issue pretty well: when converting to tflite you specified the tf.lite.OpsSet.SELECT_TF_OPS flag, which causes the converter to include operations that are not supported natively by tflite, and it expects that you will use the Flex module in tflite to compile and include those operations in the interpreter.
For more info regarding Flex: https://www.tensorflow.org/lite/guide/ops_select
In any case you have two main options: either use Flex and compile the needed ops, or use only operations that are supported natively by tflite and omit the tf.lite.OpsSet.SELECT_TF_OPS flag.
For natively supported tensorflow ops refer here: https://www.tensorflow.org/lite/guide/ops_compatibility
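As an illustration of the second option (a sketch, not from the original answer): the FlexParseExample node comes from the parsing serving input receiver, so exporting with raw float inputs avoids the unsupported ParseExample op entirely. The feature names follow the question's code; the shapes and export directory are assumptions:

# Export with raw tensor inputs instead of serialized tf.Example protos,
# so the SavedModel contains no ParseExample op.
raw_features = {
    'crim': tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name='crim'),
    'indus': tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name='indus'),
    'tax': tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name='tax'),
}
raw_serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(raw_features)
raw_export_path = model_est.export_saved_model("saved_model_raw", raw_serving_input_fn)

# export_saved_model returns the path as bytes; decode before handing it to the converter.
converter = tf.lite.TFLiteConverter.from_saved_model(raw_export_path.decode("utf-8"))
tflite_model = converter.convert()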
