Can't convert Keras model to tflite - python

I have a Keras model saved with the following line:
tf.keras.models.save_model(model, "path/to/model.h5")
Later, I try to convert it to a tflite file as follows:
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file('path/to/model.h5')
tflite_model = converter.convert()
open("path/to/model.tflite", "wb").write(tflite_model)
But I get a weird error:
You are trying to load a weight file containing 35 layers into a model with 0 layers.
I know that my model is working fine. I am able to load it and draw inferences. This error only shows up when trying to save it as a tflite model.
TensorFlow version: tensorflow-gpu 1.12.0
I'm using tf.keras.

It turns out the issue was caused by explicitly defining an InputLayer with an input_shape.
My model was of the form:
InputLayer(input_shape=(...))
BatchNormalization()
.... Remaining layers
I changed it to:
BatchNormalization(input_shape=(...))
.... Remaining layers
and transferred the weights from the previous model into it. Now it works perfectly.
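A minimal sketch of the fix, assuming an arbitrary input shape (the single BatchNormalization layer stands in for the real architecture):

import tensorflow as tf

# Old model: explicit InputLayer triggers the "0 layers" error on conversion
old_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),  # assumed shape
    tf.keras.layers.BatchNormalization(),
    # ... remaining layers
])

# New model: same layers, with input_shape moved into the first real layer
new_model = tf.keras.Sequential([
    tf.keras.layers.BatchNormalization(input_shape=(224, 224, 3)),
    # ... remaining layers
])

# Carry the trained weights over, then save and convert the new model instead
new_model.set_weights(old_model.get_weights())
tf.keras.models.save_model(new_model, "path/to/model_fixed.h5")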

Related

How to convert Object Detection model to CoreML

I have an SSD MobileNetV2 model trained using the Object Detection API (TensorFlow version 2.8.2). How can I convert it to CoreML? I tried to use coremltools (version 5.2), but failed.
I did the following:
Converted checkpoints to saved_model.pb using exporter_main_v2.py
Loaded the model using tf.saved_model.load(model_path)
Ran coremltools.convert(model, source="tensorflow")
On step 3 I got the following error:
NotImplementedError: Expected model format: [SavedModel | [concrete_function] | tf.keras.Model | .h5], got <tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7fab1a41cd90>
coremltools expects one of the formats listed in the error (a SavedModel path, a concrete function, a tf.keras.Model, or an .h5 file), not the object returned by tf.saved_model.load().
You will need to instantiate the same model class the checkpoint came from, load the checkpoint into it, then provide that model to coremltools.
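A hedged sketch of that approach (build_ssd_mobilenet_v2 is a hypothetical helper standing in for whatever code defined the original architecture):

import coremltools as ct
import tensorflow as tf

# Rebuild the exact architecture the checkpoint was trained with
model = build_ssd_mobilenet_v2()  # hypothetical helper, not a real API
ckpt = tf.train.Checkpoint(model=model)
ckpt.restore('path/to/checkpoint').expect_partial()

# Pass the concrete tf.keras.Model, not the object from tf.saved_model.load()
mlmodel = ct.convert(model, source="tensorflow")
mlmodel.save('model.mlmodel')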

Converting ONNX model to TensorFlow Lite

I've got some models from the ONNX Model Zoo. I'd like to use models from here in a TensorFlow Lite (Android) application, and I'm running into problems figuring out how to get the models converted.
From what I've read, the process I need to follow is to convert the ONNX model to a TensorFlow model, then convert that TensorFlow model to a TensorFlow Lite model.
import onnx
from onnx_tf.backend import prepare
import tensorflow as tf
onnx_model = onnx.load('./some-model.onnx')
tf_rep = prepare(onnx_model)
tf_rep.export_graph("some-model.pb")
After the above executes, I have the file some-model.pb, which I believe contains a TensorFlow frozen graph. From here I am not sure where to go. When I search, I find a lot of answers that are for TensorFlow 1.x (which I only realize after the samples fail to execute). I'm trying to use TensorFlow 2.x.
If it matters, the specific model I'm starting off with is here.
Per the ReadMe.md, the shape of the input is (1x3x416x416) and the output shape is (1x125x13x13).
I got my answer. I was able to use the code below to complete the conversion.
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'model.pb',                # TensorFlow frozen graph
    input_arrays=['input.1'],  # name of the input
    output_arrays=['218']      # name of the output
)
# Allow TFLite builtins, plus select TF ops for anything TFLite can't express
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
# Tell the converter which type of optimization techniques to use
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tf_lite_model = converter.convert()
open('model.tflite', 'wb').write(tf_lite_model)
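A possible TF2-native alternative (my assumption, not part of the original answer): recent onnx-tf releases write a SavedModel directory rather than a single .pb when you call tf_rep.export_graph(), in which case the TF2 converter applies directly:

import tensorflow as tf

# 'some-model-savedmodel' is assumed to be the directory passed to export_graph()
converter = tf.lite.TFLiteConverter.from_saved_model('some-model-savedmodel')
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)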

Save caffe model after deleting layers from it

I have a Caffe model that contains crop layers, so converting it to TensorFlow is posing a problem.
I have successfully loaded the model and dropped the crop layers, and now I would like to save the corresponding model.prototxt and model.caffemodel files.
I found the following questions on StackOverflow but they are about replacing the layers and not permanently deleting them:
Layer drop and update caffe model
How to modify the Imagenet Caffe Model?
When I save the model using caffe.Net.save(), only the model.caffemodel file gets saved, not the corresponding .prototxt. What should I do?
Model files : https://github.com/Charrin/RetinaFace-Cpp/tree/master/convert_models/mnet
Code used so far:
import caffe
net = caffe.Net('mnet.prototxt', 'mnet.caffemodel' , caffe.TEST)
del net.layer_dict['crop1']
del net.layer_dict['crop0']
net.save('new_model.caffemodel')
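A hedged sketch of one possible approach (an assumption, not from the original post): caffe.Net.save() only serializes weights, so the network definition itself has to be edited with Caffe's protobuf API and written back out as a new .prototxt:

from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Parse the existing network definition
net_param = caffe_pb2.NetParameter()
with open('mnet.prototxt') as f:
    text_format.Merge(f.read(), net_param)

# Build a copy of the net that keeps every layer except the crop layers
# (the bottom/top connections of neighbouring layers may need rewiring by hand)
new_param = caffe_pb2.NetParameter()
new_param.CopyFrom(net_param)
del new_param.layer[:]
for layer in net_param.layer:
    if layer.name not in ('crop0', 'crop1'):
        new_param.layer.add().CopyFrom(layer)

# Write the edited definition back out as the new .prototxt
with open('new_model.prototxt', 'w') as f:
    f.write(text_format.MessageToString(new_param))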

Get embeddings by reloading a TensorFlow model

I saved a TensorFlow model (a .pb file) trained using transfer learning, taking this as a reference, with the following code added at the end:
tf.train.write_graph(graph_name, saving_dir_path, 'output.pb', as_text=False)
It saved successfully. But now, after training, I want to get the embedding output. The following is the last layer defined in the graph, under the name scope final_training_ops:
with tf.name_scope('Wx_plus_b'):
    logits = tf.add(tf.matmul(bottleneck_input, layer_weights), layer_biases, name='logits')
After reloading the saved model, I use tf.get_default_graph().get_tensor_by_name('Wx_plus_b/logits') to access the layer so I can pass an image through and get the embeddings, but I get an "invalid operation name" error.
I spent more time on it and found that the correct syntax is of the form <op_name>:<output_index>:
tf.get_default_graph().get_tensor_by_name('Wx_plus_b/logits:0')
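For completeness, a minimal TF 1.x sketch of reloading the serialized graph and fetching that tensor (the file name output.pb is carried over from the question):

import tensorflow as tf

# Read the serialized GraphDef back in
with tf.gfile.GFile('output.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import into the default graph, then address the tensor as <op_name>:<output_index>
tf.import_graph_def(graph_def, name='')
logits = tf.get_default_graph().get_tensor_by_name('Wx_plus_b/logits:0')
# logits can now be evaluated in a tf.Session with the appropriate feed_dict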

I use TFLiteConverter post_training_quantize=True but my model is still too big to be hosted on Firebase ML Kit's custom servers

I have written a TensorFlow / Keras Super-Resolution GAN. I've converted the resulting trained .h5 model to a .tflite model, using the below code, executed in Google Colab:
import tensorflow as tf
model = tf.keras.models.load_model('/content/drive/My Drive/srgan/output/srgan.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.post_training_quantize=True
tflite_model = converter.convert()
open("/content/drive/My Drive/srgan/output/converted_model_quantized.tflite", "wb").write(tflite_model)
As you can see, I use converter.post_training_quantize=True, which was supposed to output a lighter .tflite model than my original .h5 model (159 MB). The resulting .tflite model is still 159 MB, however.
It's so big that I can't upload it to Google Firebase Machine Learning Kit's servers in the Google Firebase Console.
How could I either:
decrease the size of the current .tflite model which is 159MB (for example using a tool),
or after having deleted the current .tflite model which is 159MB, convert the .h5 model to a lighter .tflite model (for example using a tool)?
Related questions
How to decrease size of .tflite which I converted from keras: no answer, but a comment suggesting converter.post_training_quantize=True. However, as explained above, this solution doesn't seem to work in my case.
In general, quantization means shifting from dtype float32 to uint8, so in theory the model should shrink to about a quarter of its size; the difference is most visible in larger files.
Check whether your model has actually been quantized using Netron (https://lutzroeder.github.io/netron/): load the model and inspect a few layers that have weights. In a quantized graph the weight values are stored in uint8 format; in an unquantized graph they are in float32 format.
Only setting converter.post_training_quantize=True is not enough to quantize your model. The other settings include:
converter.inference_type = tf.uint8
converter.default_ranges_stats = [min_value, max_value]
converter.quantized_input_stats = {"name_of_the_input_layer_for_your_model": [mean, std]}
Assuming you are dealing with images: min_value=0, max_value=255, mean=128 (subjective) and std=128 (subjective).
name_of_the_input_layer_for_your_model is the first node shown when you load your model in Netron (linked above), or you can get it in code: model.input will print something like tf.Tensor 'input_1:0' shape=(?, 224, 224, 3) dtype=float32, where input_1 is the name of the input layer. (Note: the model must include both the graph configuration and the weights.)
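Putting these settings together, a hedged sketch using the TF 1.x converter API (the input name input_1 and the statistics are assumptions for an image model; the TF2 converter used in the question does not expose these attributes):

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file('srgan.h5')
converter.post_training_quantize = True
converter.inference_type = tf.uint8
converter.default_ranges_stats = (0, 255)                  # fallback (min, max) ranges
converter.quantized_input_stats = {'input_1': (128, 128)}  # {input_name: (mean, std)}
tflite_model = converter.convert()
open('converted_model_quantized.tflite', 'wb').write(tflite_model)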
