Error when converting a TF model to a TFLite model - python

I am currently building a model to deploy on my Nano 33 BLE Sense board to predict the weather by measuring humidity, pressure and temperature. I have 5 classes.
I used a Kaggle dataset to train it.
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers.experimental import preprocessing
from sklearn.model_selection import train_test_split

df_labels = to_categorical(df.pop('Summary'))
df_features = np.array(df)

X_train, X_test, y_train, y_test = train_test_split(df_features, df_labels, test_size=0.15)

normalize = preprocessing.Normalization()
normalize.adapt(X_train)

activ_func = 'gelu'
model = tf.keras.Sequential([
    normalize,
    tf.keras.layers.Dense(units=6, input_shape=(3,)),
    tf.keras.layers.Dense(units=100, activation=activ_func),
    tf.keras.layers.Dense(units=100, activation=activ_func),
    tf.keras.layers.Dense(units=100, activation=activ_func),
    tf.keras.layers.Dense(units=100, activation=activ_func),
    tf.keras.layers.Dense(units=5, activation='softmax')
])

model.compile(optimizer='adam',  # tf.keras.optimizers.Adagrad(lr=0.001),
              loss='categorical_crossentropy', metrics=['acc'])
model.summary()
model.fit(x=X_train, y=y_train, verbose=1, epochs=15, batch_size=32, use_multiprocessing=True)
Once the model is trained, I want to convert it into a TFLite model. When I run the conversion I get the following message:
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model to disk
open("gesture_model.tflite", "wb").write(tflite_model)
import os
basic_model_size = os.path.getsize("gesture_model.tflite")
print("Model is %d bytes" % basic_model_size)
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Erf {device = ""}
For your information, I use Google Colab to design the model.
If anyone has any idea or solution to this issue, I would be glad to hear it!

This often happens when you have not set the converter's supported operations.
Here is an example:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
The list of supported operations is constantly changing, so in case the error still appears you can also try to set the experimental converter features as follows:
converter.experimental_new_converter = True

I solved the problem! It was the activation function 'gelu', which is not yet supported by TFLite. I changed it to 'relu' and the problem was gone.
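For anyone hitting the same error, a minimal sketch of the fix against the model definition above; only the activation string changes, then you retrain and reconvert:
activ_func = 'relu'  # 'gelu' lowers to tf.Erf, which has no TFLite builtin kernel

# ... rebuild and retrain the model exactly as above, then:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # now succeeds with builtin ops only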

Related

How to convert my TensorFlow 2 (TFLearn) model to a graph.pb file

I am trying to convert my model to CoreML, so I save my model using this code:
model2.save("modelcnn2.tfl")
which produces the following model files:
checkpoint
modelcnn2.tfl.data-00000-of-00001
modelcnn2.tfl.index
modelcnn2.tfl.meta
So how can I convert these into one graph.pb and then to CoreML?
I use this code:
import tensorflow as tf

meta_path = '/content/drive/MyDrive/check/modelcnn2.tfl.meta'  # Your .meta file
output_node_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10',
                     '11', '12', '13', '14', '15', '16', '17', '18', '19',
                     '20', '21', '22', '23', '24', '25', '26', '27', '28']  # Output nodes

with tf.compat.v1.Session() as sess:
    # Restore the graph
    saver = tf.compat.v1.train.import_meta_graph(meta_path)

    # Load weights
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/.meta/file'))

    # Freeze the graph
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
But this error appeared:
KeyError: "The name 'Adam' refers to an Operation not in the graph."
Any suggestion that helps me is appreciated.
The error is telling you exactly what the issue is.
Adam is not a supported operation. You have two options:
(1) Create a new model without an Adam layer.
(2) Implement a custom operator.
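If the re-freeze keeps failing, it can help to first list the operations actually present in the restored graph, so that output_node_names only references names that exist (and excludes training ops such as Adam). A minimal sketch, reusing meta_path from above:
import tensorflow as tf

with tf.compat.v1.Session() as sess:
    tf.compat.v1.train.import_meta_graph(meta_path)
    # Print every operation name in the graph; pick the real inference outputs
    for op in sess.graph.get_operations():
        print(op.name)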

Are we able to prune a pre-trained model? Example: MobileNetV2

I'm trying to prune a pre-trained model (MobileNetV2) and I got this error. I tried searching online and couldn't understand it. I'm running on Google Colab.
These are my imports.
import tensorflow as tf
import tensorflow_model_optimization as tfmot
import tensorflow_datasets as tfds
from tensorflow import keras
import os
import numpy as np
import matplotlib.pyplot as plt
import tempfile
import zipfile
This is my code.
model_1 = keras.Sequential([
    basemodel,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1)
])

model_1.compile(optimizer='adam',
                loss=keras.losses.BinaryCrossentropy(from_logits=True),
                metrics=['accuracy'])

model_1.fit(train_batches,
            epochs=5,
            validation_data=valid_batches)
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
                                                             final_sparsity=0.80,
                                                             begin_step=0,
                                                             end_step=end_step)
}

model_2 = prune_low_magnitude(model_1, **pruning_params)

model_2.compile(optimizer='adam',
                loss=keras.losses.BinaryCrossentropy(from_logits=True),
                metrics=['accuracy'])
This is the error I get.
---> 12 model_2 = prune_low_magnitude(model, **pruning_params)
ValueError: Please initialize `Prune` with a supported layer. Layers should either be a `PrunableLayer` instance, or should be supported by the PruneRegistry. You passed: <class 'tensorflow.python.keras.engine.training.Model'>
I believe you are following the Pruning in Keras example and jumped into the "Fine-tune pre-trained model with pruning" section without setting your prunable layers. You have to reinstantiate the model and set the layers you wish to make prunable, as in the sketch after the link. Follow this guide for further information on how to set prunable layers.
https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide.md
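A minimal sketch of that reinstantiation, assuming model_1 from the question; the function name and the choice to wrap only the Dense head are illustrative:
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def apply_pruning_to_dense(layer):
    # Wrap only layer types the PruneRegistry supports, e.g. the Dense head
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.sparsity.keras.prune_low_magnitude(layer)
    return layer

model_for_pruning = tf.keras.models.clone_model(
    model_1,
    clone_function=apply_pruning_to_dense,
)
This way the prune wrapper is applied per layer instead of to the whole Model object, which is what the ValueError complains about.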
I faced the same issue with tensorflow version 2.2.0.
Just updating TensorFlow to 2.3.0 solved the issue; I think TensorFlow added support for this feature in 2.3.0.
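For reference, a minimal sketch of the upgrade in a Colab cell (the exact version pin is illustrative; restart the runtime afterwards):
!pip install -U tensorflow==2.3.0 tensorflow-model-optimization

import tensorflow as tf
print(tf.__version__)  # expect 2.3.0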
One thing I found is that the experimental preprocessing I added to my model was throwing this error. I had added it at the beginning of my model to generate more training samples, but the Keras pruning code doesn't like subclassed models like this. Similarly, it doesn't like the experimental preprocessing layers, such as the centering of the image I had. Removing the preprocessing from the model solved the issue for me.
def classificationModel(trainImgs, testImgs):
    L2_lambda = 0.01
    data_augmentation = tf.keras.Sequential([
        layers.experimental.preprocessing.RandomFlip("horizontal", input_shape=IM_DIMS),
        layers.experimental.preprocessing.RandomRotation(0.1),
        layers.experimental.preprocessing.RandomZoom(0.1),
    ])

    model = tf.keras.Sequential()
    model.add(data_augmentation)
    model.add(layers.experimental.preprocessing.Rescaling(1./255, input_shape=IM_DIMS))
    ...
Saving the model as below and reloading worked for me.
_, keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
print('Saved baseline model to:', keras_file)
Had the same problem today; it's the same error.
If you don't want a layer to be pruned or don't care about it, you can use this code to prune only the prunable layers in a model:
from tensorflow_model_optimization.python.core.sparsity.keras import prunable_layer
from tensorflow_model_optimization.python.core.sparsity.keras import prune_registry

def apply_pruning_to_prunable_layers(layer):
    if isinstance(layer, prunable_layer.PrunableLayer) or hasattr(layer, 'get_prunable_weights') or prune_registry.PruneRegistry.supports(layer):
        return tfmot.sparsity.keras.prune_low_magnitude(layer)
    print("Not Prunable: ", layer)
    return layer

model_for_pruning = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_pruning_to_prunable_layers
)
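As a follow-up, once the pruned clone has been fine-tuned, the pruning wrappers can be stripped before export; a minimal sketch using the standard tfmot helper:
# Remove the pruning wrappers, leaving plain Keras layers with sparse weights
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)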

Converting saved_model to TFLite model using TF 2.0

Currently I am working on converting a custom object detection model (trained using SSD and an Inception network) to a quantized TFLite model. I am able to convert the custom object detection model from a frozen graph to a quantized TFLite model using the following code snippet (using TensorFlow 1.4):
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    args["model"],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess', 'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2', 'TFLite_Detection_PostProcess:3'])
converter.allow_custom_ops = True
converter.post_training_quantize = True
tflite_model = converter.convert()
open(args["output"], "wb").write(tflite_model)
However, the tf.lite.TFLiteConverter.from_frozen_graph class method is not available in TensorFlow 2.0 (refer to this link). So I tried to convert the model using the tf.lite.TFLiteConverter.from_saved_model class method. The code snippet is shown below:
converter = tf.lite.TFLiteConverter.from_saved_model("/content/") # Path to saved_model directory
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
The above code snippet throws the following error:
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'.
I tried to pass input_shapes as an argument:
converter = tf.lite.TFLiteConverter.from_saved_model("/content/",input_shapes={"image_tensor" : [1,300,300,3]})
but it throws the following error:
TypeError: from_saved_model() got an unexpected keyword argument 'input_shapes'
Am I missing something? Please feel free to correct me!
I got the solution using tf.compat.v1.lite.TFLiteConverter.from_frozen_graph. The compat.v1 module brings the functionality of TF 1.x into TF 2.x.
Following is the full code:
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "/content/tflite_graph.pb",
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess', 'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2', 'TFLite_Detection_PostProcess:3'])
converter.allow_custom_ops = True

# Convert the model to a quantized TFLite model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the model to disk.
open("/content/uno_mobilenetV2.tflite", "wb").write(tflite_model)

Launch TensorBoard with a Keras graph (to visualize accuracy, loss and prediction results)

I don't understand how to use TensorBoard to visualize the training steps of my Keras network.
I already launched TensorBoard with the command line: tensorboard --logdir=/run1
But it raises this error:
No dashboards are active for the current data set. Probable causes:You
haven’t written any data to your event files. TensorBoard can’t find
your event files.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
import numpy as np

# Create the arrays of data
train_data = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
train_data_np = np.asarray(train_data)

train_label = [[1, 2, 3], [4, 5, 6]]
train_label_np = np.asarray(train_label)

### Build the model
model = keras.Sequential([
    keras.layers.Dense(3, input_shape=(3, 2)),
    keras.layers.Dense(3, activation=tf.nn.sigmoid)
])

model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
tensorboard = TensorBoard(log_dir="run1")
model.fit(train_data_np, train_label_np, epochs=10, callbacks=[tensorboard])

# Test the model
restest = model.evaluate(test_data_np, test_label_np)
Adding a formal answer here: it looks like there is a typo in the TensorBoard logdir parameter. You need to remove the slash at the beginning of the directory:
tensorboard --logdir=run1
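If you are working in a notebook such as Colab, a minimal sketch of launching TensorBoard inline against the same relative logdir, using the standard notebook extension:
%load_ext tensorboard
%tensorboard --logdir run1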

How to save final model using keras?

I use KerasClassifier to train the classifier.
The code is below:
import numpy
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load dataset
dataframe = read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:, 0:4].astype(float)
Y = dataset[:, 4]

# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)

# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(encoded_Y)

# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
    model.add(Dense(3, init='normal', activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

estimator = KerasClassifier(build_fn=baseline_model, nb_epoch=200, batch_size=5, verbose=0)
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, dummy_y, cv=kfold)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
But how do I save the final model for future predictions?
I usually use the code below to save a model:
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)

# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
But I don't know how to fit the model-saving code into the KerasClassifier workflow.
Thank you.
The model has a save method, which saves all the details necessary to reconstitute the model. An example from the keras documentation:
from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
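To connect this to the KerasClassifier in the question: after fitting, the scikit-learn wrapper exposes the underlying Keras model as estimator.model. A minimal sketch (note that cross_val_score fits on clones, so you fit the estimator once more on the full data before saving):
estimator.fit(X, dummy_y)               # fit on the full data
estimator.model.save('final_model.h5')  # save the underlying Keras model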
You can save the model in JSON and the weights in HDF5 format.
# keras library imports for saving and loading model and weights
from keras.models import model_from_json
from keras.models import load_model

# serialize model to JSON
# the trained keras model is named 'model' in this example
model_json = model.to_json()
with open("model_num.json", "w") as json_file:
    json_file.write(model_json)

# serialize weights to HDF5
model.save_weights("model_num.h5")
files "model_num.h5" and "model_num.json" are created which contain our model and weights
To use the same trained model for further testing you can simply load the hdf5 file and use it for the prediction of different data.
here's how to load the model from saved files.
# load json and create model
json_file = open('model_num.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model_num.h5")
print("Loaded model from disk")
loaded_model.save('model_num.hdf5')
loaded_model=load_model('model_num.hdf5')
To predict on different data you can use this:
loaded_model.predict_classes(your_test_data)  # your_test_data: a NumPy array of samples
You can use model.save(filepath) to save a Keras model into a single HDF5 file which will contain:
the architecture of the model, allowing to re-create the model.
the weights of the model.
the training configuration (loss, optimizer)
the state of the optimizer, allowing to resume training exactly where you left off.
In your Python code, the last line should probably be:
model.save("m.hdf5")
This allows you to save the entirety of the state of a model in a single file.
Saved models can be reinstantiated via keras.models.load_model().
The model returned by load_model() is a compiled model ready to be used (unless the saved model was never compiled in the first place).
model.save() arguments:
filepath: String, path to the file to save the weights to.
overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
include_optimizer: If True, save optimizer's state together.
You can save and load the model in this way:
from keras.models import Sequential, load_model
from keras_contrib.layers import CRF
from keras_contrib.losses import crf_loss
from keras_contrib.metrics import crf_viterbi_accuracy

# To save the model
model.save('my_model_01.hdf5')

# To load a persisted model that uses the CRF layer
custom_objects = {'CRF': CRF, 'crf_loss': crf_loss, 'crf_viterbi_accuracy': crf_viterbi_accuracy}
model1 = load_model("/home/abc/my_model_01.hdf5", custom_objects=custom_objects)
Generally, we save the model and weights in the same file by calling the save() function.
For saving:
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=["accuracy"])

model.fit(X_train, Y_train,
          batch_size=32,
          epochs=10,
          verbose=2,
          validation_data=(X_test, Y_test))

# here the filename is "my_model"; you can choose whatever you want
model.save("my_model.h5")  # using the h5 extension
print("model saved!!!")
For loading the model:
from keras.models import load_model
model = load_model('my_model.h5')
model.summary()
In this case, we can simply save and load the model without re-compiling it.
Note: this is the preferred way to save and load your Keras model.
Saving a Keras model:
model = ... # Get model (Sequential, Functional Model, or Model subclass)
model.save('path/to/location')
Loading the model back:
from tensorflow import keras
model = keras.models.load_model('path/to/location')
For more information, read the documentation.
You can save the best model using keras.callbacks.ModelCheckpoint()
Example:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model_checkpoint_callback = keras.callbacks.ModelCheckpoint("best_Model.h5", save_best_only=True)

history = model.fit(x_train, y_train,
                    epochs=10,
                    validation_data=(x_valid, y_valid),
                    callbacks=[model_checkpoint_callback])
This will save the best model in your working directory.
Since the Keras syntax for saving a model has changed over the years, I will post a fresh answer. In principle the earliest answer by bogatron, posted Mar 13 '17 at 12:10, is still good if you want to save your model, including the weights, in one file.
model.save("my_model.h5")
This will save the model in the older Keras H5 format.
However, there is a new format, the TensorFlow SavedModel format, which will be used if you do not specify the extension .h5, .hdf5 or .keras after the filename.
The syntax in this case is
model.save("path/to/folder")
If the given folder name does not yet exist, it will be created. Two files and two folders will be created within this folder:
keras_metadata.pb, saved_model.pb, assets, variables
So far you can still decide whether you want to store your model into one single file or into a folder containing files and folders. (See keras documentation at www.tensorflow.org.)
