I have created my saved_model.pb through Docker on my macOS device.
After trying tflite_convert --output_file=./myModels/mymodel.tflite --saved_model_dir=./myModels/ it outputs the following error asking for a tags argument:
"ValueError: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 0 MetaGraphs in the SavedModel with tag sets []. Pass a 'tags=' argument to load this SavedModel."
I have also tried another command where you have to provide the input and output arrays, but I do not know which arrays to put in those fields. Has anybody run into this problem and solved it before? Thanks.
You can find the tags in your saved model using the saved_model_cli:
https://www.tensorflow.org/guide/saved_model#saved_model_cli
$ saved_model_cli show --dir ./myModels/ --all
Pass the required tags to tflite_convert with --saved_model_tag_set.
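If you prefer doing the conversion from Python, here is a minimal sketch assuming TensorFlow 1.x (where the converter accepts a tag_set argument) and that saved_model_cli actually reports a MetaGraph with the usual serve tag:

import tensorflow as tf

# Sketch, assuming TF 1.x and that saved_model_cli reported the "serve" tag set.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "./myModels/", tag_set={"serve"})
tflite_model = converter.convert()

with open("./myModels/mymodel.tflite", "wb") as f:
    f.write(tflite_model)

Note that your error message says there are 0 MetaGraphs in the SavedModel, so if saved_model_cli shows an empty tag set list, the export itself is the problem rather than the conversion flags.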
I am creating a custom object detection sample for Android. I used the ssd_mobilenet_v1_coco pretrained model for transfer learning and got decent accuracy. I also successfully managed to export model.ckpt-XXXX to a .pb TFLite graph using this command in the terminal (run from the object_detection folder after cloning the TensorFlow Object Detection API from GitHub):
python export_tflite_ssd_graph.py --pipeline_config_path=training/ssd_mobilenet_v1_coco.config --trained_checkpoint_prefix=training/model.ckpt-40500 --output_directory=tflite --add_postprocessing_op=true
The above created a folder named tflite containing 2 files:
tflite_graph.pb
tflite_graph.pbtxt
However, when I try to convert the tflite_graph.pb to detect.tflite, I get the following error and the program ends abruptly:
"TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed.
...
Check failed: input_array_dims[i] == input_array_proto.shape().dims(i) (300 vs. 128)
Fatal Python error: Aborted
...
This is the command I used to convert the .pb to .tflite:
tflite_convert --graph_def_file=tflite/tflite_graph.pb --output_file=tflite/detect.tflite --input_shapes=1,128,128,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --allow_custom_ops
The images I used had a size of 128x128, hence why I assumed that would be the input shape. I do have TOCO installed as well.
Any help or advice will be highly appreciated.
Upon doing additional research, I found out that it was because the model's config file expects images of size 300 x 300. So I changed the image dimensions in the config file to 128 and it worked.
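For anyone hitting the same mismatch, you can confirm which input size the exported graph actually expects by reading the shape of its input placeholder. A small sketch, assuming the default input tensor name produced by export_tflite_ssd_graph.py:

import tensorflow as tf

# Sketch: print the input shape baked into the exported graph, assuming the
# default placeholder name used by export_tflite_ssd_graph.py.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("tflite/tflite_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.name == "normalized_input_image_tensor":
        print(node.attr["shape"].shape)  # dims of 1, 300, 300, 3 with the stock config

Whatever this prints is what --input_shapes has to match, unless you re-export after editing the config as described above.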
I am trying an example of car evaluation classification from
http://archive.ics.uci.edu/ml/datasets/Car+Evaluation
I have successfully trained the model and am printing predictions successfully using the following code.
I am following this page for converting the .pb model to .tflite.
I have successfully built the freeze_graph tool:
bazel build tensorflow/python/tools:freeze_graph
Now I am facing problems running the following command:
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=/CarEvaluation/mobilenet_v1_224.pb \
--input_checkpoint=/CarEvaluation/checkpoints/mobilenet-10202.ckpt \
--input_binary=true --output_graph=/CarEvaluation/frozen_mobilenet_v1_224.pb \
--output_node_names=CarEvaluation/Predictions/Reshape_1
The problem is that in the model directory I have a .pbtxt file instead of a .pb,
and I also couldn't find a .ckpt file in the model directory; I just have a plain checkpoint file and several .ckpt .meta and .index files with a number as a suffix.
I have tried running the above command with the .pbtxt file and I am getting this exception:
input_graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
Use the .pbtxt and the highest numbered .ckpt
i.e. something like:
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=/CarEvaluation/mobilenet_v1_224.pbtxt \
--input_checkpoint=/CarEvaluation/checkpoints/mobilenet-10202.ckpt-2000 \
--input_binary=true --output_graph=/CarEvaluation/frozen_mobilenet_v1_224.pb \
--output_node_names=CarEvaluation/Predictions/Reshape_1
As far as I have understood from the freeze_graph code, when you want to use it with a pbtxt file, you need to omit the --input_binary=true option, as the input file is no longer a binary one.
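If the bazel-built binary keeps giving you trouble, the same tool can also be called from Python. A sketch under the assumption of TF 1.x, using the asker's paths and the standard Saver node names:

from tensorflow.python.tools import freeze_graph

# Sketch, assuming TF 1.x: the input graph is a text .pbtxt, so input_binary=False.
freeze_graph.freeze_graph(
    input_graph="/CarEvaluation/mobilenet_v1_224.pbtxt",
    input_saver="",
    input_binary=False,
    input_checkpoint="/CarEvaluation/checkpoints/mobilenet-10202.ckpt-2000",
    output_node_names="CarEvaluation/Predictions/Reshape_1",
    restore_op_name="save/restore_all",
    filename_tensor_name="save/Const:0",
    output_graph="/CarEvaluation/frozen_mobilenet_v1_224.pb",
    clear_devices=True,
    initializer_nodes="")

The input_checkpoint value should be whatever prefix the files in your checkpoints directory actually share (the .meta and .index names minus their extensions), not the literal name above.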
I have trained a new model on top of ssd_mobilenet_v1_coco for a custom data set. This model works fine in TensorFlow, but now I want to use it in OpenCV.
net = cv2.dnn.readNetFromTensorflow("model/frozen_inference_graph.pb", "model/protobuf.pbtxt")
detections = net.forward()
So for the config file I converted the frozen graph to .pbtxt and used that. But then I got the following error:
[libprotobuf ERROR /home/chamath/Projects/opencv/opencv/3rdparty/protobuf/src/google/protobuf/text_format.cc:298] Error parsing text-format tensorflow.GraphDef: 731:5: Unknown enumeration value of "DT_RESOURCE" for field "type".
As suggested here, I tried the config file mentioned in the thread, but when I use it, object detection does not work properly: an incorrect number of boxes is detected and they are misplaced.
Is there any method to create a .pbtxt config file that works with OpenCV? Or any suggestions on how to make my model work in OpenCV?
It is likely that you haven't generated the proper graph after training.
You have to convert the graph like this:
python ../opencv/samples/dnn/tf_text_graph_ssd.py \
    --input trained-inference-graphs/inference_graph_v5.pb/frozen_inference_graph.pb \
    --output trained-inference-graphs/inference_graph_v5.pb/graph.pbtxt
Then pass the .pb and the graph.pbtxt to cv2.dnn.readNetFromTensorflow; that should work for you :)
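Once graph.pbtxt is generated, a minimal usage sketch (assuming an SSD-style detector with a 300x300 input and a hypothetical test image; adjust the paths and input size to your model):

import cv2

# Sketch: run the frozen graph plus the generated text graph through OpenCV's DNN module.
net = cv2.dnn.readNetFromTensorflow(
    "trained-inference-graphs/inference_graph_v5.pb/frozen_inference_graph.pb",
    "trained-inference-graphs/inference_graph_v5.pb/graph.pbtxt")

image = cv2.imread("test.jpg")  # hypothetical test image
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()  # shape (1, 1, N, 7): image id, class id, score, then a normalized box

h, w = image.shape[:2]
for det in detections[0, 0]:
    if float(det[2]) > 0.5:
        x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
        cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)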
This is probably a very basic question...
But how do I convert checkpoint files into a single .pb file?
My goal is to serve the model, probably using C++.
These are the files that I'm trying to convert.
As a side note, I'm using tflearn with TensorFlow.
Edit 1:
I found an article that explains how to do this: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
The problem is that I'm stuck with the following error
KeyError: "The name 'Adam' refers to an Operation not in the graph."
How do I fix this?
Edit 2:
Maybe this will shed some light on the problem.
The error that I get comes from the regression layer. If I use sgd instead,
I'll get:
KeyError: "The name 'SGD' refers to an Operation not in the graph."
The tutorial on https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc works just fine.
The problem was that I was loading the model using tensorflow instead of using tflearn.
So... instead of:
tf.train.import_meta_graph(...)
We do:
model.load(...)
TFLearn knows how to parse the graph properly.
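For reference, a rough sketch of the tflearn route, assuming TF 1.x; build_network() stands in for your own tflearn network definition, and the checkpoint path and output node name are assumptions you would replace with the ones from your own graph:

import tensorflow as tf
import tflearn

# Sketch: rebuild the same tflearn network used for training, let tflearn restore
# the checkpoint, then freeze the variables into a single .pb.
net = build_network()  # placeholder for your own tflearn model definition
model = tflearn.DNN(net)
model.load("checkpoints/my_model.tflearn")  # hypothetical checkpoint prefix

sess = model.session
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["FullyConnected/Softmax"])  # output node name is an assumption

with tf.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())

If you are unsure of the output node name, print [n.name for n in sess.graph.as_graph_def().node] after loading and pick the final prediction op.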
I'm having an issue I can't manage to solve.
I'm just getting started with image super-resolution in Python and I found this on GitHub: https://github.com/titu1994/Image-Super-Resolution
I think this is exactly what I need for my project.
So I installed everything I need to run it, and I run it with this:
python main.py (path)t1.bmp
t1.bmp is an image stored in the "input-images" directory, so my command is this:
python main.py C:\Users\cecilia....\t1.bmp
The error I get is this:
http://imgur.com/X3ssj08
http://imgur.com/rRSdyUb
Can you please help me solve this? (The code I'm using is the one from the GitHub repo I linked.)
Thanks in advance
The very first line of the README in the GitHub link that you gave says that the code is designed for Theano only. Yet your traceback shows that you are using TensorFlow as the backend...
The error you are having is typical of using the wrong image format for the backend in use. You have to know that for convolutional networks, Theano and TensorFlow have different conventions. Theano expects the dimension order (batch, channels, nb_rows, nb_cols) and TensorFlow expects (batch, nb_rows, nb_cols, channels). The first is known as "channels_first" and the other as "channels_last". So what happens is that the code you are trying to run (which is explicitly said to be designed for Theano) organises the data to match the channels_first format, which causes TensorFlow to crash because the dimensions don't match what it expects.
Bottom line: use Theano, or change the code appropriately to make it work with TensorFlow.
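If you want to try the Theano route without touching the repo's code, one option is to force the Keras backend before anything imports Keras. A minimal sketch, assuming Theano is installed:

import os

# Sketch: force the Theano backend (and CPU mode) before Keras is imported anywhere.
os.environ["KERAS_BACKEND"] = "theano"
os.environ.setdefault("THEANO_FLAGS", "device=cpu,floatX=float32")

import keras  # should report "Using Theano backend."

The same switch can also be made persistent by setting "backend": "theano" in ~/.keras/keras.json.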