Converting ONNX model to TensorFlow Lite - python

I've got some models from the ONNX Model Zoo. I'd like to use models from here in a TensorFlow Lite (Android) application, and I'm running into problems figuring out how to get the models converted.
From what I've read, the process I need to follow is to convert the ONNX model to a TensorFlow model, then convert that TensorFlow model to a TensorFlow Lite model.
import onnx
from onnx_tf.backend import prepare
import tensorflow as tf
onnx_model = onnx.load('./some-model.onnx')
tf_rep = prepare(onnx_model)
tf_rep.export_graph("some-model.pb")
After the above executes, I have the file some-model.pb, which I believe contains a TensorFlow frozen graph. From here I am not sure where to go. When I search, I find a lot of answers that are for TensorFlow 1.x (which I only realize after the samples I find fail to execute). I'm trying to use TensorFlow 2.x.
If it matters, the specific model I'm starting off with is here.
Per the ReadMe.md, the shape of the input is (1x3x416x416) and the output shape is (1x125x13x13).

I got my answer. I was able to use the code below to complete the conversion.
import tensorflow as tf
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'model.pb',                # TensorFlow frozen graph
    input_arrays=['input.1'],  # name of the input tensor
    output_arrays=['218']      # name of the output tensor
)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
# tell converter which type of optimization techniques to use
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tf_lite_model = converter.convert()
open('model.tflite', 'wb').write(tf_lite_model)
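The names 'input.1' and '218' are specific to this model. For a different ONNX model, a minimal sketch like the one below (my addition, assuming only the onnx package) prints the graph's input and output tensor names to pass to the converter:
import onnx

model = onnx.load('./some-model.onnx')
# Graph inputs and outputs carry the tensor names the converter expects
print([i.name for i in model.graph.input])
print([o.name for o in model.graph.output])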

Related

Loading the saved models from tf.keras in different versions (From tf 2.3.0 to tf 1.12)

Question:
I have created and trained a Keras model in tf 2.3.0, and I need to load this model in tf 1.12.0 in order to use it with a library that requires an older version of tf. Is there any way to convert models from the format of the new version of tf to an older one, so I can load the model with tf 1.12.0?
What I have tried so far:
A similar discussion showed how to convert models from tf 1.15 - 2.1 to tf 1.10, but when I tried this solution I got the error "Unknown layer: Functional". Link: Loading the saved models from tf.keras in different versions
I tried to fix this by using the following line suggested by another question:
new_model = tf.keras.models.model_from_json(json_config, custom_objects={'Functional': tf.keras.models.Model})
Link: ValueError: Unknown layer: Functional
However, if I use this, I get the error ('Unrecognized keyword arguments:', dict_keys(['ragged'])), which is the same error discussed in the first discussion I linked above.
Another method I tried was using the ONNX libraries to convert the Keras model to an ONNX model and then back to a Keras model of a different version. However, I soon realized that the keras2onnx library requires tf 2.x.
Links: https://github.com/onnx/tensorflow-onnx and https://github.com/gmalivenko/onnx2keras
Any suggestions on how to get around this without having to retrain my models in an older version of TensorFlow would be greatly appreciated! Thanks.
Here is the simple code that I tried to implement to load my model:
Save in tf 2.3.0
import tensorflow as tf
CNN_model=tf.keras.models.load_model('Real_Image_XAI_Models/Test_10_DC_R_Image.h5')
CNN_model.save_weights("Real_Image_XAI_Models/weights_only.h5")
json_config = CNN_model.to_json()
with open('Real_Image_XAI_Models/model_config.json', 'w') as json_file:
    json_file.write(json_config)
Load in tf 1.12.0
with open('Real_Image_XAI_Models/model_config.json') as json_file:
    json_config = json_file.read()
new_model = tf.keras.models.model_from_json(json_config)
# or use the line below to account for the Functional class
#new_model = tf.keras.models.model_from_json(json_config, custom_objects={'Functional':tf.keras.models.Model})
new_model.load_weights('Real_Image_XAI_Models/weights_only.h5')
There are breaking changes in the model config from tf-1.12.0 to tf-2.3.0, including, but not limited to, the following:
The root class Model is now Functional
The support for Ragged tensors was added in tf-1.15
You can try to edit the model config json file once saved from tf-2.3.0 to reverse the effects of these changes as follows:
Replace the root class definition "class_name": "Functional" with "class_name": "Model". This reverses the effect of change #1 above.
Delete all occurrences of "ragged": false, (and of "ragged": true, if present). This reverses the effect of change #2 above. Note that the trailing comma and space are part of the "ragged" fields and must be deleted along with them.
You may try to make these changes programmatically in the json dictionary or at model load time, but I find it easier to make these one-time changes to the json file itself; a sketch of the programmatic route follows.
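For completeness, a minimal sketch of that programmatic route (my addition), assuming the config was written to model_config.json as in the question; the two replace calls mirror the manual edits above:
with open('Real_Image_XAI_Models/model_config.json') as f:
    config = f.read()

# Reverse change #1: the root class was renamed Model -> Functional in tf 2.x
config = config.replace('"class_name": "Functional"', '"class_name": "Model"')
# Reverse change #2: drop the "ragged" fields, which tf 1.12 does not recognize
config = config.replace('"ragged": false, ', '').replace('"ragged": true, ', '')

with open('Real_Image_XAI_Models/model_config_tf112.json', 'w') as f:
    f.write(config)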

Error while importing VGG16 h5 file ValueError: No model found in config file

I tried to import VGG16, which I downloaded from Google Storage:
import keras
import cv2
from keras.models import Sequential, load_model
But I got this error:
ValueError: No model found in config file.
I was able to recreate the issue using your code and the weights file you mentioned. I am not sure about the reason for the issue, but I can offer an alternative way for you to use the pretrained VGG16 model from Keras.
You need to use the model from keras.applications.
Here is the link for your reference: https://keras.io/api/applications/
There are three ways to instantiate this model, via the weights argument, which takes any of the following three values: None, 'imagenet', or a filepath to a weights file.
Since you have already downloaded the weights, I suggest you use the filepath option as in the code below (option 1), but for first-time usage I suggest using 'imagenet' (option 3). It will download the weights file, which can be saved and reused later.
You need to add the following lines of code.
Option 1:
from keras.applications.vgg16 import VGG16
model = VGG16(weights='vgg16_weights_tf_dim_ordering_tf_kernels.h5')
Option 2:
from keras.applications.vgg16 import VGG16
model = VGG16(weights=None)
model.load_weights('vgg16_weights_tf_dim_ordering_tf_kernels.h5')
Option 3: for using pretrained imagenet weights
from keras.applications.vgg16 import VGG16
model = VGG16(weights='imagenet')
The constructor also takes other arguments, such as include_top, which can be added as required; an example follows.
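As a hedged illustration (my addition, not part of the original answer), include_top=False drops the fully connected classifier head, which is the usual way to reuse the convolutional base, e.g. for transfer learning:
from keras.applications.vgg16 import VGG16

# Convolutional base only; input_shape must then be given explicitly
model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))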
The problem here is that the file you are trying to load is not a full model but probably just the weights, so the problem is not in loading the model but in how it was saved.
When you are saving the model, try the following (a sketch is given below):
If you are using a ModelCheckpoint callback, set save_weights_only=False
Else use the function tf.keras.models.save_model(model, filepath)
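A minimal sketch of both save paths (my addition), assuming a compiled Keras model named model and placeholder filenames:
import tensorflow as tf

# Path 1: saving via a callback; keep the full model, not just the weights
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath='model.h5',
    save_weights_only=False,  # False => architecture + weights are saved
)
# model.fit(x_train, y_train, callbacks=[checkpoint])

# Path 2: saving directly after training
tf.keras.models.save_model(model, 'model.h5')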
A complete model has two parts: the model architecture and the weights.
So if we only have weights, we must first load the architecture (perhaps from a Python file or a Keras config file), and then load the weights onto that architecture.
For example:
model = tf.keras.models.load_model("facenet_keras.h5")
model.load_weights("facenet_keras_weights.h5")

Getting the correct labels for object detection using Tensorflow Lite [Python/Flutter Integration]

I am trying to implement object detection using the MobileNetV2 model in Flutter. Since most of the examples and implementations available online for Flutter apps do not use MobileNetV2, I took a long route to get to that phase.
The way I achieved this is as follows:
1) Created a Python script that uses the Keras MobileNetV2 model (pre-trained on ImageNet for 1000 classes, TensorFlow backend) and tested it with images to confirm that it returns the correct labels after detecting objects correctly. [Python script provided below for reference]
2) Converted the same MobileNetV2 Keras model (MobileNetV2.h5) to a TensorFlow Lite model (MobileNetV2.tflite)
3) Followed the existing example of creating a Flutter app that uses TensorFlow Lite (https://itnext.io/working-with-tensorflow-lite-in-flutter-f00d733a09c3). Replaced the TFLite model shown in the example with the MobileNetV2.tflite model and used the ImageNet classes/labels in https://gist.github.com/aaronpolhamus/964a4411c0906315deb9f4a3723aac57 as the labels.txt.
[GitHub project of the Flutter example is provided here: https://github.com/umair13adil/tensorflow_lite_flutter]
When I now run the Flutter app, it runs without any error, but during classification the predicted labels are not correct. For example, it classifies an orange (object id: n07747607) as poncho (object id: n03980874), and a pomegranate (object id: n07768694) as banded_gecko (object id: n01675722).
However, if I use the same pictures and test them with my Python script, it returns the correct labels. So I was wondering if the issue is actually with the labels.txt used in the Flutter app, where the order of the labels does not match the order of the model's outputs.
Can anyone suggest how I can resolve this issue so the correct objects are classified? How can I get the ImageNet labels used by the Keras MobileNetV2, so that I can use them in the Flutter app?
My Flutter App to detect object using MobileNetv2 can be downloaded from here: https://github.com/somdipdey/Tensorflow_Lite_Object_Detection_Flutter_App
My python script to convert the MobileNetV2 model (keras) to TFLite while testing it on image for classification as follows:
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing import image
from keras.applications.mobilenet_v2 import preprocess_input, decode_predictions
import numpy as np
import PIL
from PIL import Image
import requests
from io import BytesIO
# load the model
model = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=True)
#model = tf.keras.models.load_model('MobileNetV2.h5')
# To save model
model.save('MobileNetV2.h5')
# choose the URL of the image that you want
URL = "https://images.unsplash.com/photo-1557800636-894a64c1696f?ixlib=rb-1.2.1&w=1000&q=80"
# get the image
response = requests.get(URL)
img = Image.open(BytesIO(response.content))
# resize the image according to each model (see documentation of each model)
img = img.resize((224, 224))
##############################################
# if you want to read the image from your PC
#############################################
# img_path = 'myimage.jpg'
# img = image.load_img(img_path, target_size=(299, 299))
#############################################
# convert to numpy array
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = model.predict(x)
# return the top 10 detected objects
num_top = 10
labels = decode_predictions(features, top=num_top)
print(labels)
#load keras model
new_model= tf.keras.models.load_model(filepath="MobileNetV2.h5")
# Create a converter # I could also directly use keras model instead of loading it again
converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
# Convert the model
tflite_model = converter.convert()
# Create the tflite model file
tflite_model_name = "MobileNetV2.tflite"
open(tflite_model_name, "wb").write(tflite_model)
Let me start by sharing the ImageNet labels in two formats, JSON and txt. Given that MobileNetV2 is trained on ImageNet, it should be returning results based on these labels.
My initial thought is that there must be an error in step 2 of your pipeline. I assume you are trying to convert the trained Keras-based weights to TensorFlow Lite weights (is it the same format as pure TensorFlow?). A good option is to try to find already-saved weights in the TensorFlow Lite format (but I guess they might not be available, which is why you are doing the conversion). I had similar problems converting TF weights to Keras, so you must be sure the conversion was done successfully before even going to step 3, creating the Flutter app that uses TensorFlow Lite. A good way to check this is to print all the available classes of your classifier and compare them with the original ImageNet labels given above.
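To make that comparison concrete, here is a minimal sketch (my addition, assuming tf.keras and network access) that writes the 1000 ImageNet labels in exactly the index order decode_predictions uses, which is the order a labels.txt for the converted model should follow:
import json
from tensorflow.keras.utils import get_file

# The same class-index file Keras' decode_predictions downloads internally
path = get_file(
    'imagenet_class_index.json',
    'https://storage.googleapis.com/download.tensorflow.org/data/imagenet_class_index.json')
with open(path) as f:
    class_index = json.load(f)

# class_index maps "0".."999" -> [wordnet_id, human_label]; write in index order
with open('labels.txt', 'w') as f:
    for i in range(1000):
        f.write(class_index[str(i)][1] + '\n')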

Can't convert Keras model to tflite

I have a Keras model saved with the following line:
tf.keras.models.save_model(model, "path/to/model.h5")
Later, I try to convert it to a tflite file as follows:
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file('path/to/model.h5')
tflite_model = converter.convert()
open("path/to/model.tflite", "wb").write(tflite_model)
But I get a weird error:
You are trying to load a weight file containing 35 layers into a model with 0 layers.
I know that my model is working fine. I am able to load it and draw inferences. This error only shows up when trying to save it as a tflite model.
TensorFlow version: tensorflow-gpu 1.12.0
I'm using tf.keras.
It turns out the issue was due to explicitly defining an InputLayer with some input_shape.
My model was of the form:
InputLayer(input_shape=(...))
BatchNormalization()
.... Remaining layers
I changed it to:
BatchNormalization(input_shape=(...))
.... Remaining layers
and transferred the weights from the previous model. Now it works perfectly; a runnable sketch of the change follows.
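A minimal runnable sketch of the change (my addition; the layer stack and input shape are hypothetical, the real model has more layers):
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import InputLayer, BatchNormalization, Dense

# Before: explicit InputLayer (triggered the error during tflite conversion)
model_before = Sequential([
    InputLayer(input_shape=(32,)),
    BatchNormalization(),
    Dense(10),
])

# After: give input_shape to the first real layer instead
model_after = Sequential([
    BatchNormalization(input_shape=(32,)),
    Dense(10),
])
model_after.set_weights(model_before.get_weights())  # transfer the weights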

use cntk trained model with python

I have trained a model using CNTK; let's call it simple.dnn.
Now, for the testing phase, I do not want to install CNTK on Windows, but I want to use the trained model with Python. How can I use the trained model (weights, ...) for testing from Python?
You can use the load_model function; see https://www.cntk.ai/pythondocs/cntk.html?highlight=load_model#cntk.persist.load_model. The basic flow should look like this:
from cntk import load_model
loaded_model = load_model("yourModel.model", 'float')
output = loaded_model.eval(arguments)
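For a complete call, Function.eval in the CNTK Python API accepts a dict mapping the model's input variables to NumPy arrays. A hedged sketch (my addition; the filename, input shape, and dummy data are hypothetical):
import numpy as np
from cntk import load_model

loaded_model = load_model("simple.dnn")
input_var = loaded_model.arguments[0]  # the model's input variable
sample = np.random.rand(1, *input_var.shape).astype(np.float32)  # one dummy sample
output = loaded_model.eval({input_var: sample})
print(output)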
