While exporting an object detection model inference graph with the TensorFlow Object Detection API (TFODAPI), I get the following warning:
WARNING:tensorflow:Skipping full serialization of Keras layer <object_detection.meta_architectures.ssd_meta_arch.SSDMetaArch object at 0x7f7bf0096d00>, because it is not built.
W1211 12:05:10.070806 140172767647616 save_impl.py:66]
This warning also occurs while exporting the TFLite graph. As a result, converting the .pb model to TFLite with metadata fails with the following error:
TypeError: EndVector() missing 1 required positional argument: 'vectorNumElems'
While inference from the .pb model works perfectly, I am not able to get inference from the TFLite model.
My graph export script is:
%cd /content/models/research/object_detection
##Export inference graph
!python exporter_main_v2.py --trained_checkpoint_dir=/content/gdrive/MyDrive/Road_potholes/new_try/training --pipeline_config_path=/content/gdrive/MyDrive/Road_potholes/new_try/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --output_directory /content/gdrive/MyDrive/Road_potholes/new_try/inference_graph
and the TFLite graph export code is:
%cd /content/models/research/object_detection
!python export_tflite_graph_tf2.py --pipeline_config_path /content/gdrive/MyDrive/Road_potholes/new_try/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config --trained_checkpoint_dir /content/gdrive/MyDrive/Road_potholes/new_try/training --output_directory /content/gdrive/MyDrive/Road_potholes/new_try/tflite
I have followed the code as shown here: https://www.youtube.com/watch?v=eA5G-uL_OmQ&t=1591s
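For context, the step after export_tflite_graph_tf2.py that triggers the error can be sketched as follows (a minimal sketch; the paths are placeholders for the directories used in the commands above):

```python
def convert_exported_model(saved_model_dir, tflite_path):
    """Convert the SavedModel produced by export_tflite_graph_tf2.py to a
    .tflite file. A minimal sketch: paths are placeholders for the
    directories used in the export commands above."""
    import tensorflow as tf  # imported lazily so the sketch stays importable

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()
    with open(tflite_path, "wb") as f:
        f.write(tflite_model)
    return tflite_path
```

For what it's worth, the EndVector() TypeError is usually reported as a version mismatch between the installed flatbuffers package and the code the metadata tooling was generated against, rather than a problem with the model itself; aligning the flatbuffers version with the one tflite-support expects is the commonly suggested workaround.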
Related
I have an SSD MobileNetV2 model trained using the Object Detection API (TensorFlow version 2.8.2). How can I convert it to Core ML? I tried to use coremltools (version 5.2), but failed.
I did the following:
1. Converted the checkpoints to saved_model.pb using exporter_main_v2.py
2. Loaded the model using tf.saved_model.load(model_path)
3. Ran coremltools.convert(model, source="tensorflow")
On step 3 I got the following error:
NotImplementedError: Expected model format: [SavedModel | [concrete_function] | tf.keras.Model | .h5], got <tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7fab1a41cd90>
Coremltools is expecting a Keras .h5 model, but you are providing a checkpoint.
You will need to instantiate the same model class the checkpoint came from, load the checkpoint into it, then provide the model to Core ML.
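The suggestion above can be sketched roughly like this (a sketch only: model_class stands for the actual class the checkpoint was trained with, and it assumes that class is a tf.keras-style model):

```python
def convert_checkpoint_to_coreml(model_class, checkpoint_path):
    """Sketch of the suggestion above: instantiate the same model class the
    checkpoint came from, restore the checkpoint into it, then hand the
    live Keras model to coremltools. model_class and checkpoint_path are
    placeholders; assumes a tf.keras-style model class."""
    import tensorflow as tf
    import coremltools as ct

    model = model_class()  # the class the checkpoint was trained with
    ckpt = tf.train.Checkpoint(model=model)
    ckpt.restore(checkpoint_path).expect_partial()
    return ct.convert(model, source="tensorflow")
```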
I'm trying to use my Keras model alongside my TF object detection model. But I am getting this error:
ValueError: Calling `Model.predict` in graph mode is not supported when the `Model` instance was constructed with eager mode enabled. Please construct your `Model` instance in graph mode or call `Model.predict` with eager mode enabled.
It errors on print(np.around(model1.predict(datatest)))
I believe it is because with TF object detection I have to run it within
with detection_graph.as_default():
with tf.compat.v1.Session(graph=detection_graph) as sess:
I'm not trying to run my Keras model in graph mode. Any way around this?
I had initially loaded the model outside the with statements, but loading it within the with statements seems to fix the issue.
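The working arrangement described above can be sketched like this (names such as keras_model_path and datatest are placeholders, mirroring the snippet in the question):

```python
def predict_inside_detection_graph(detection_graph, keras_model_path, datatest):
    """Sketch of the fix described above: load (and call) the Keras model
    inside the same graph/session context as the detection graph instead
    of outside it. keras_model_path and datatest are placeholders."""
    import numpy as np
    import tensorflow as tf

    with detection_graph.as_default():
        with tf.compat.v1.Session(graph=detection_graph) as sess:
            # Loading and calling the model inside the context keeps both
            # models in the same graph-mode world.
            model1 = tf.keras.models.load_model(keras_model_path)
            return np.around(model1.predict(datatest))
```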
I have trained my own YOLO model with Darkflow and got the .pb file and the .meta file for license plate recognition. What I am trying now is to implement an Android app that can use this model. To do so I have decided to convert it to TFLite, but I always get an error about "tags=". I've checked the tags of my model with 'saved_model_cli show --dir=./' and got no tags. What should I do to fix this error?
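One detail that may matter here: Darkflow's .pb is a frozen GraphDef rather than a SavedModel, which would explain why saved_model_cli reports no tags. Under that assumption, a conversion sketch that bypasses the tag lookup entirely (tensor names and shapes are placeholders for the model's actual inputs and outputs):

```python
def convert_frozen_graph_to_tflite(pb_path, input_arrays, output_arrays, input_shapes):
    """Convert a frozen GraphDef (e.g. Darkflow's .pb) to TFLite without
    SavedModel tags. input_arrays, output_arrays, and input_shapes are
    placeholders for the model's real tensor names and shapes."""
    import tensorflow as tf

    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        pb_path, input_arrays, output_arrays, input_shapes)
    return converter.convert()
```

A call might look like convert_frozen_graph_to_tflite('tflite_graph.pb', ['input'], ['output'], {'input': [1, 416, 416, 3]}), where the names and shape are hypothetical and need to be read off the actual graph.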
I am creating a custom object detection sample for Android. I used the ssd_mobilenet_v1_coco pretrained model for transfer learning and got decent accuracy. I also successfully managed to export model.ckpt-XXXX to a .pb TFLite graph using this command in the terminal (run from the object_detection folder after cloning the TensorFlow Object Detection API from GitHub):
python export_tflite_ssd_graph.py --pipeline_config_path=training/ssd_mobilenet_v1_coco.config --trained_checkpoint_prefix=training/model.ckpt-40500 --output_directory=tflite --add_postprocessing_op=true
The above created a folder tflite containing 2 files:
tflite_graph.pb
tflite_graph.pbtxt
However, when I try to convert the tflite_graph.pb to detect.tflite, I get the following error and the program ends abruptly:
"TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed.
...
Check failed: input_array_dims[i] == input_array_proto.shape().dims(i) (300 vs. 128)
Fatal Python error: Aborted
...
This is the command I used to convert the .pb to .tflite:
tflite_convert --graph_def_file=tflite/tflite_graph.pb --output_file=tflite/detect.tflite --input_shapes=1,128,128,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --allow_custom_ops
The images I used were 128x128, which is why I assumed that would be the input_shapes. I do have TOCO installed as well.
Any help or advice will be highly appreciated.
Upon doing additional research, I found that it was because the model's config file expected images of size 300x300. So I changed the image dimensions in the config file to 128 and it worked.
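Concretely, that change corresponds to the image_resizer block in the pipeline config (a sketch of the relevant fragment only; 128 matches the training image size described above):

```
model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 128
        width: 128
      }
    }
  }
}
```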
I retrained ssd_mobilenet_v2_coco_2018_03_29 from the TensorFlow detection model zoo on my own dataset, following the procedures defined in the TensorFlow Object Detection API. When I try to export the model by
python object_detection/export_inference_graph.py --input_type=image_tensor --pipeline_config_path=.../pipeline.config --trained_checkpoint_prefix=.../model.ckpt-200000 --output_directory=.../export-model-200000 --input_shape=1,300,300,3
it says "20 ops no flops stats due to incomplete shapes." even though I override input_shape with 1,300,300,3.
How can I overcome this problem?
My final goal is to convert this model into an IR representation, but I run into shape-inference problems during that conversion, whereas I have no problem converting the original model (the one I used for transfer learning) into IR.
Thanks