I have an SSD MobileNetV2 model trained with the Object Detection API (TensorFlow 2.8.2). How can I convert it to Core ML? I tried coremltools (version 5.2), but it failed.
I did the following:
1. Converted the checkpoints to saved_model.pb using exporter_main_v2.py
2. Loaded the model with tf.saved_model.load(model_path)
3. Ran coremltools.convert(model, source="tensorflow")
On step 3 I got the following error:
NotImplementedError: Expected model format: [SavedModel | [concrete_function] | tf.keras.Model | .h5], got <tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7fab1a41cd90>
Coremltools is expecting a Keras .h5 model (or one of the other formats listed in the error), but what you are passing it is the generic _UserObject returned by tf.saved_model.load, not a Keras model.
You will need to instantiate the same model class the checkpoint came from, load the checkpoint into it, and then pass that model to coremltools.
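Alternatively, since the error lists SavedModel among the accepted inputs, it may be enough to pass coremltools the path to the exported SavedModel directory rather than the loaded object. A minimal sketch (the path is a placeholder, and whether the Object Detection API graph actually converts still depends on op support):
import coremltools as ct

# Pass the directory produced by exporter_main_v2.py (the one containing
# saved_model.pb) directly to the converter instead of a loaded _UserObject.
mlmodel = ct.convert("path/to/saved_model", source="tensorflow")
mlmodel.save("SSDMobileNetV2.mlmodel")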
Related
While exporting an object detection model inference graph with the TensorFlow Object Detection API (TFODAPI), I am getting the following warning:
WARNING:tensorflow:Skipping full serialization of Keras layer <object_detection.meta_architectures.ssd_meta_arch.SSDMetaArch object at 0x7f7bf0096d00>, because it is not built.
W1211 12:05:10.070806 140172767647616 save_impl.py:66]
The same warning also appears while exporting the TFLite graph. It results in an error when converting the .pb model to TFLite with metadata; the conversion then fails with:
TypeError: EndVector() missing 1 required positional argument: 'vectorNumElems'
While inference from the .pb model works perfectly, I am not able to get inference from the TFLite model.
My inference-graph export script is:
%cd /content/models/research/object_detection
##Export inference graph
!python exporter_main_v2.py --trained_checkpoint_dir=/content/gdrive/MyDrive/Road_potholes/new_try/training --pipeline_config_path=/content/gdrive/MyDrive/Road_potholes/new_try/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --output_directory /content/gdrive/MyDrive/Road_potholes/new_try/inference_graph
and the TFLite graph export command is:
%cd /content/models/research/object_detection
!python export_tflite_graph_tf2.py --pipeline_config_path /content/gdrive/MyDrive/Road_potholes/new_try/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --trained_checkpoint_dir /content/gdrive/MyDrive/Road_potholes/new_try/training --output_directory /content/gdrive/MyDrive/Road_potholes/new_try/tflite
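For completeness, a minimal sketch of the follow-up conversion step, assuming export_tflite_graph_tf2.py writes its TFLite-friendly SavedModel under the --output_directory given above (the path below is therefore a placeholder):
import tensorflow as tf

# Convert the TFLite-friendly SavedModel produced by export_tflite_graph_tf2.py.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "/content/gdrive/MyDrive/Road_potholes/new_try/tflite/saved_model")
tflite_model = converter.convert()
with open("detect.tflite", "wb") as f:
    f.write(tflite_model)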
I have followed the code as shown here: https://www.youtube.com/watch?v=eA5G-uL_OmQ&t=1591s
I'm having a problem loading a TensorFlow model I downloaded. Based on the error, it sounds like an incompatibility issue with a model from an older TensorFlow version? I tried loading it on TensorFlow 2.4 and 2.3 with no success; each throws a completely different error.
Inside the model folder, I see checkpoint, saved_model, and pipeline.config.
Inside the model/saved_model folder, I see variables and saved_model.pb.
If I try to load it using tf.keras.models.load_model("model/saved_model"), I get this:
WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), *NOT* tf.saved_model.save(). To confirm, there should be a file named "keras_metadata.pb" in the SavedModel directory.
ValueError: Unable to create a Keras model from SavedModel at design-system-detector/rico/models/mobilenetv2-50k/saved-model/saved_model. This SavedModel was exported with `tf.saved_model.save`, and lacks the Keras metadata file. Please save your Keras model by calling `model.save` or `tf.keras.models.save_model`.
Note that you can still load this SavedModel with `tf.saved_model.load`.
If I load it with tf.saved_model.load, it just returns a _UserObject, and I can't run model.predict:
AttributeError: '_UserObject' object has no attribute 'predict'
What's the best way to use this model to predict?
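For what it's worth, a minimal sketch of running inference through the object returned by tf.saved_model.load, using its serving signature instead of .predict (the input name, dtype, and shape below are placeholders; inspect structured_input_signature for the real ones):
import tensorflow as tf

# Objects returned by tf.saved_model.load expose signatures rather than
# a Keras .predict() method.
loaded = tf.saved_model.load("model/saved_model")
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)  # expected input name/shape/dtype

# Placeholder input; "input_tensor" and the uint8 batch are assumptions based
# on typical Object Detection API exports. Use the name printed above.
dummy = tf.zeros([1, 300, 300, 3], dtype=tf.uint8)
outputs = infer(input_tensor=dummy)
print(list(outputs.keys()))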
I want to retrain an existing object detection model with a new image dataset and quantize it for Intel Movidius. Is there any working procedure to do this?
I have successfully retrained the model but am failing to quantize it. I followed this tutorial: Retrain SSD MobileNet.
The Movidius devices only support FP16 models; to convert a Caffe version of SSD MobileNet, you supply "--data_type FP16" to the Model Optimizer (mo.py).
The OpenVINO Model Zoo has a mobilenet-ssd model, also using Caffe, and the associated YAML file has the following parameters:
model_optimizer_args:
--input_shape=[1,3,300,300]
--input=data
--mean_values=data[127.5,127.5,127.5]
--scale_values=data[127.5]
--output=detection_out
--input_model=$dl_dir/mobilenet-ssd.caffemodel
--input_proto=$dl_dir/mobilenet-ssd.prototxt
Note that your input shape and the mean and scale values will likely be different, so change those to match your retrained model.
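Putting those arguments together for a retrained model, the Model Optimizer call would look roughly like this (the file names are placeholders, and the input shape, mean, and scale values must match your model):
python mo.py \
    --input_model=mobilenet-ssd.caffemodel \
    --input_proto=mobilenet-ssd.prototxt \
    --input_shape=[1,3,300,300] \
    --input=data \
    --mean_values=data[127.5,127.5,127.5] \
    --scale_values=data[127.5] \
    --output=detection_out \
    --data_type FP16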
There's also a demo file shipped with OpenVINO that can be used with your converted model. See the associated models.lst file for all the supported architectures. https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/object_detection_demo/python
I have a Keras model saved with the following line:
tf.keras.models.save_model(model, "path/to/model.h5")
Later, I try to convert it to a tflite file as follows:
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file('path/to/model.h5')
tflite_model = converter.convert()
open("path/to/model.tflite", "wb").write(tflite_model)
But I get a weird error:
You are trying to load a weight file containing 35 layers into a model with 0 layers.
I know that my model works fine; I am able to load it and run inference. This error only shows up when trying to convert it to a TFLite model.
TensorFlow version: tensorflow-gpu 1.12.0
I'm using tf.keras.
Turns out, the issue is due to explicitly defining an InputLayer with some input_shape.
My model was of the form:
InputLayer(input_shape=(...))
BatchNormalization()
.... Remaining layers
I changed it to:
BatchNormalization(input_shape=(...))
.... Remaining layers
and transferred the weights from the previous model here. Now it works perfectly.
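A self-contained toy illustration of that change and the weight transfer (the layers and input shape below are placeholders, not the actual 35-layer model):
import tensorflow as tf

# Original form: explicit InputLayer followed by BatchNormalization.
old_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Reworked form: no InputLayer; input_shape moves onto the first real layer.
new_model = tf.keras.Sequential([
    tf.keras.layers.BatchNormalization(input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# The InputLayer holds no weights, so the weight lists line up one-to-one.
new_model.set_weights(old_model.get_weights())
new_model.save("model_fixed.h5")
With the weights transferred, the TFLite conversion from the question can then be pointed at the new .h5 file.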
I retrained ssd_mobilenet_v2_coco_2018_03_29 from the TensorFlow detection model zoo on my own dataset, following the procedures defined in the TensorFlow Object Detection API. When I try to export the model with
python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=.../pipeline.config --trained_checkpoint_prefix=.../model.ckpt-200000 --output_directory=.../export-model-200000 --input_shape=1,300,300,3
it says "20 ops no flops stats due to incomplete shapes." although I overwrite input_shape with 1,300,300,3.
How can I overcome this problem?
My final goal is to convert this model into the IR representation. During that conversion I run into shape-inference problems, whereas converting the original model I used for transfer learning into IR works without issues.
Thanks