I used the TensorFlow 2 Object Detection API. I received a saved_model.pb, which is a TensorFlow graph and not a tf.keras model, so it can be loaded with tf.saved_model.load() but not with tf.keras.models.load_model(). The model is saved via tf.saved_model.save() in export_lib_v2.py of the Object Detection API, at line 271.
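For reference, loading and running such an exported graph works through its serving signature; a minimal sketch, where the export directory path is an assumption and the input name input_tensor matches the TF2 object detection export (adjust if your signature differs):

import tensorflow as tf
import numpy as np

# load the exported graph (not a tf.keras model)
loaded = tf.saved_model.load('./exported_model/saved_model')  # hypothetical path
infer = loaded.signatures['serving_default']

# the object detection export expects a uint8 image batch
image = np.zeros((1, 320, 320, 3), dtype=np.uint8)
detections = infer(input_tensor=tf.constant(image))
print(list(detections.keys()))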
I tried to build the model from the config file and load the checkpoints, to then save it as a tf.keras model:
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder
import os

def save_in_tfkeras(save_filepath, label_map_path, config_file_path, checkpoint_path):
    # Build the model from the pipeline config
    configs = config_util.get_configs_from_pipeline_file(config_file_path)
    model_config = configs['model']
    detection_model = model_builder.build(model_config=model_config, is_training=False)

    # Restore checkpoint
    ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
    ckpt.restore(checkpoint_path).expect_partial()

    detection_model.build(input_shape=(320, 320))
    tf.keras.models.save_model(detection_model, save_filepath)
    print('Model saved as tf.keras in ' + save_filepath)

if __name__ == "__main__":
    PATH_TO_LABELMAP = './models/face_model/face_label.pbtxt'
    PATH_TO_CONFIG = './models/face_model/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config'
    PATH_TO_CHECKPOINT = './models/face_model/v2_model_50k/ckpt-51'
    save_filepath = './Kmodels/mobileNet_V2'
    if not os.path.exists(save_filepath):
        os.makedirs(save_filepath)
    save_in_tfkeras(save_filepath, PATH_TO_LABELMAP, PATH_TO_CONFIG, PATH_TO_CHECKPOINT)
However, this does not seem to work. The errors originate, in my opinion, from mixing the tf and tf.keras model types. The last error message:
ValueError: Weights for model ssd_mobile_net_v2fpn_keras_feature_extractor_1 have not yet been created. Weights are created when the Model is first called on inputs or build() is called with an input_shape.
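The message itself hints at the mechanism: the detection model's weights only exist after the model has been called once. A sketch of forcing weight creation with a dummy input before saving; the preprocess/predict interface is the object detection API's, and the 320x320 shape is an assumption taken from the config name:

# force variable creation by running one dummy batch through the model
dummy_image = tf.zeros([1, 320, 320, 3], dtype=tf.float32)
images, shapes = detection_model.preprocess(dummy_image)
prediction_dict = detection_model.predict(images, shapes)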
The model was saved with TensorFlow loaded as tensorflow.compat.v2.
Question: Is there a way to build the model, load the checkpoint weights, and then save it as a tf.keras model?
Using the basic test code from hiddenlayer, I am getting the error in the title:
import torch
import torchvision.models
import hiddenlayer as hl
# VGG16 (without BatchNorm)
model = torchvision.models.vgg16()
# Build HiddenLayer graph
# Jupyter Notebook renders it automatically
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
Versions:
hiddenlayer-0.3
pytorch=1.13.0+cu117
python=3.10.6
I followed the error's recommendation and changed _optimize_trace to _optimize_graph in pytorch_builder.py, line 71. After that it worked correctly.
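For clarity, the fix described above is just the identifier swap inside hiddenlayer's pytorch_builder.py; shown here as a sketch, since the exact arguments on that line depend on your hiddenlayer/PyTorch versions:

# hiddenlayer/pytorch_builder.py, line 71 -- before (fails on PyTorch 1.13,
# where torch.onnx._optimize_trace no longer exists):
#   torch_graph = torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
# after the change described above:
torch_graph = torch.onnx._optimize_graph(trace, torch.onnx.OperatorExportTypes.ONNX)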
I was testing a TensorFlow model in Postman that uses https://tfhub.dev/google/universal-sentence-encoder-multilingual/3 from tensorflow-hub. It worked perfectly in a Jupyter notebook without any error, but in Postman I encountered the error below after sending a POST request that calls the predict method.
Error:
"error": "{{function_node __inference_signature_wrapper_133703}} {{function_node __inference_signature_wrapper_133703}} {{function_node __inference__wrapped_model_95698}} {{function_node __inference__wrapped_model_95698}} {{function_node __inference_restored_function_body_51031}} {{function_node __inference_restored_function_body_51031}} [_Derived_]{{function_node __inference___call___6286}} {{function_node __inference___call___6286}} Op type not registered \'SentencepieceOp\' in binary running on 329ddc874964. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.\n\t [[{{node StatefulPartitionedCall}}]]\n\t [[StatefulPartitionedCall]]\n\t [[sequential/keras_layer/StatefulPartitionedCall]]\n\t [[StatefulPartitionedCall]]\n\t [[StatefulPartitionedCall]]"
with a status of 404 Not Found.
And this is my model:
import tensorflow as tf
import numpy as np
import tensorflow_text
import pandas as pd
import random
import tensorflow_hub as hub
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import losses
from tensorflow.keras import preprocessing
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.layers import SpatialDropout1D
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
# Import the Universal Sentence Encoder's TF Hub module
hub_layer = hub.KerasLayer(module_url, input_shape=[], dtype=tf.string, trainable=True)
#some data preprocessing
opt = keras.optimizers.Adam(learning_rate= 0.001)
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(32, activation='relu'))
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1,activation='sigmoid'))
model.layers[0].trainable = False
model.compile(loss='binary_crossentropy',optimizer=opt, metrics=['accuracy'])
model.summary()
history = model.fit(np.array(tweet), np.array(sentiment),
                    validation_split=0.2, epochs=5, batch_size=32)
This is the request in Postman, using localhost:.../arabtextclasstfhubtest:predict:
{
  "signature_name": "serving_default",
  "inputs": {
    "keras_layer_input": ["كلامك جميل ورائع"]
  }
}
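The same call can be reproduced outside Postman; a sketch using Python's requests library, where the host and port are assumptions for a standard TensorFlow Serving REST endpoint (8501 is its default REST port; substitute your actual URL):

import requests

# hypothetical endpoint; replace with your serving URL
url = "http://localhost:8501/v1/models/arabtextclasstfhubtest:predict"
payload = {
    "signature_name": "serving_default",
    "inputs": {"keras_layer_input": ["كلامك جميل ورائع"]},
}
response = requests.post(url, json=payload)
print(response.status_code)
print(response.json())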
I would like to know whether this is a bug in tensorflow-hub or how to fix this problem.
Thank you!
It appears you are exporting a SavedModel to a server binary that answers the Postman requests. That server binary needs to link in the 'SentencepieceOp' from tensorflow_text, because your SavedModel uses it (as it should).
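If the serving process is a Python program rather than a stock TensorFlow Serving binary, importing tensorflow_text before loading the model is what registers the op; a minimal sketch, with a hypothetical model path (the input name keras_layer_input is taken from the request above):

import tensorflow as tf
import tensorflow_text  # noqa: F401 -- importing registers SentencepieceOp with the TF runtime

# hypothetical path to the exported model
model = tf.saved_model.load("export/arabtextclasstfhubtest/1")
infer = model.signatures["serving_default"]
print(infer(keras_layer_input=tf.constant(["كلامك جميل ورائع"])))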
I have created a machine learning model using TensorFlow and Keras with the IAM dataset. How do I load this model in an API to predict an image? When I try to integrate it, it shows this error:
    return self.function(inputs, **arguments)
  File "test2.py", line 136, in resize_image
    return tf.image.resize_images(image, [56, 56])
NameError: name 'tf' is not defined
I load the model using from keras.models import load_model and try to predict handwriting in an image. low_loss.hdf5 is the model I am trying to integrate.
def testmodel(image_path):
    global model
    # load the pre-trained Keras model
    model = load_model('low_loss.hdf5')
    model.summary()
    # open the image in grayscale and resize it to the model's input size
    # (the original code resized the path string via np.resize, not the image)
    img = Image.open(image_path).convert("L")
    img = img.resize((28, 28))
    im2arr = np.array(img)
    im2arr = im2arr.reshape(1, 28, 28, 1)
    y_pred = model.predict_classes(im2arr)
    return y_pred
I wish to predict handwritten image data.
Your error is about tf, which has not been imported. Try:
import tensorflow as tf
You were getting the error because you had not imported TensorFlow in your code, or you imported it without giving it the tf alias:
import tensorflow as tf
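Note that with a Lambda-style layer like resize_image, the import must exist in the script that loads the model, since the layer's function runs in that process. A sketch of the loading side; the resize call follows the question's traceback, and passing the function via custom_objects is an assumption about how the Lambda was saved (the key point is that tf is imported in the loading module):

import tensorflow as tf  # must be imported where the model is loaded
from keras.models import load_model

def resize_image(image):
    # the Lambda layer inside low_loss.hdf5 calls this; tf must be in scope here
    return tf.image.resize_images(image, [56, 56])

model = load_model('low_loss.hdf5', custom_objects={'resize_image': resize_image})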
I've fine-tuned a model (using TF 1.9) from the Object Detection Model Zoo, and now I am trying to freeze the graph for TensorFlowSharp, using TF 1.9.
import tensorflow as tf
import os
from tensorflow.python.tools import freeze_graph
from tensorflow.core.protobuf import saver_pb2

#print("current tensorflow version: ", tf.__version__)

sess = tf.Session()
model_path = 'latest_cp/'
saver = tf.train.import_meta_graph('model.ckpt.meta')
saver.restore(sess, tf.train.latest_checkpoint('.'))  # current dir of the checkpoint file
tf.train.write_graph(sess.graph_def, '.', 'test.pbtxt')  # output in pbtxt format

freeze_graph.freeze_graph(input_graph='test.pbtxt',
                          input_binary=False,
                          input_checkpoint=model_path + 'model.ckpt',
                          output_node_names="num_detections,detection_boxes,detection_scores,detection_classes",
                          output_graph='test.bytes',
                          clear_devices=True, initializer_nodes="", input_saver="",
                          restore_op_name="save/restore_all", filename_tensor_name="save/Const:0")
It worked, but after I imported it into Unity it returned the following error:
TFException: Op type not registered 'NonMaxSuppressionV3' in binary running on AK38713. Make sure the Op and Kernel are registered in the binary running in this process.
I found out that TensorFlowSharp works with TensorFlow 1.4, but when I tried to freeze the graph with 1.4 it returned the same NonMaxSuppressionV3 error.
Do you know of any way to solve this issue? Thank you so much for the support.
I have a problem with the code below, at the following line:
new_model = load_model('124446.model', custom_objects=None, compile=True)
Here is the code:
import tensorflow as tf
from tensorflow.keras.models import load_model
mnist = tf.keras.datasets.mnist
(x_train,y_train), (x_test,y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train,axis=1)
x_test = tf.keras.utils.normalize(x_test,axis=1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10,activation=tf.nn.softmax))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train,y_train,epochs=3)
tf.keras.models.save_model(model,'124446.model')
val_loss, val_acc = model.evaluate(x_test,y_test)
print(val_loss, val_acc)
new_model = load_model('124446.model', custom_objects=None, compile=True)
prediction = new_model.predict([x_test])
print(prediction)
Errors are:
Traceback (most recent call last):
  File "C:/Users/TanveerIslam/PycharmProjects/DeepLearningPractice/1.py", line 32, in <module>
    new_model = load_model('124446.model', custom_objects=None, compile=True)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\keras\engine\saving.py", line 262, in load_model
    sample_weight_mode=sample_weight_mode)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 426, in _method_wrapper
    method(self, *args, **kwargs)
  File "C:\Users\TanveerIslam\PycharmProjects\DeepLearningPractice\venv\lib\site-packages\tensorflow\python\keras\engine\training.py", line 525, in compile
    metrics, self.output_names)
AttributeError: 'Sequential' object has no attribute 'output_names'
So can anyone give me a solution?
Note: I use PyCharm as my IDE.
As @Shinva said, set the compile argument of the load_model function to False.
Then after loading the model, compile it separately.
from tensorflow.keras.models import save_model, load_model
save_model(model,'124446.model')
Then for loading the model again do:
saved_model = load_model('124446.model', compile=False)
saved_model.compile(optimizer='adam',
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])
saved_model.predict([x_test])
Update: For some unknown reason, I started to get the same errors as the question states. After trying different solutions, it seems that using the "keras" library directly instead of "tensorflow.keras" works properly.
My setup is Windows 10 with python 3.6.7, tensorflow 1.11.0, and keras 2.2.4.
To my knowledge, there are three different ways to save and restore your model, provided you used keras directly to build it.
Option 1:
import json
from keras.models import model_from_json, load_model
# Save Weights + Architecture
model.save_weights('model_weights.h5')
with open('model_architecture.json', 'w') as f:
f.write(model.to_json())
# Load Weights + Architecture
with open('model_architecture.json', 'r') as f:
new_model = model_from_json(f.read())
new_model.load_weights('model_weights.h5')
Option 2:
from keras.models import save_model, load_model
# Creates a HDF5 file 'my_model.h5'
save_model(model, 'my_model.h5') # model, [path + "/"] name of model
# Deletes the existing model
del model
# Returns a compiled model identical to the previous one
new_model = load_model('my_model.h5')
Option 3:
# using model's methods
model.save("my_model.h5")
# deletes the existing model
del model
# load the saved model back
new_model = load_model('my_model.h5')
Option 1 requires the new_model to be compiled before use.
Options 2 and 3 are almost identical in syntax.
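For example, after loading with Option 1, compiling might look like this (the compile settings mirror the ones used earlier in this thread):

new_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])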
Code used from:
1. Saving & Loading Keras Models
2. https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model
I was able to load the model by setting compile=False in load_model()
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt  # needed for the plotting calls below

tf.keras.models.save_model(
    model,
    "epic_num_reader.model",
    overwrite=True,
    include_optimizer=True
)

new_model = tf.keras.models.load_model('epic_num_reader.model', custom_objects=None, compile=False)

predictions = new_model.predict(x_test)
print(predictions)
print(np.argmax(predictions[0]))

plt.imshow(x_test[0], cmap=plt.cm.binary)
plt.show()
If this is run on Windows, the issue is that toco is currently not supported on Windows: https://github.com/tensorflow/tensorflow/issues/20975