I am working through a car evaluation classification example from
http://archive.ics.uci.edu/ml/datasets/Car+Evaluation
I have successfully trained the model and can print predictions using the following code.
I am following this page for converting the .pb model to .tflite.
I have successfully built the freeze_graph tool:
bazel build tensorflow/python/tools:freeze_graph
Now I am facing problems running the following command:
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=/CarEvaluation/mobilenet_v1_224.pb \
--input_checkpoint=/CarEvaluation/checkpoints/mobilenet-10202.ckpt \
--input_binary=true --output_graph=/CarEvaluation/frozen_mobilenet_v1_224.pb \
--output_node_names=CarEvaluation/Predictions/Reshape_1
The problem is that in the model directory I have a .pbtxt file instead of a .pb,
and I also couldn't find a .ckpt file there; I only have a plain checkpoint file plus several .ckpt .meta and .index files with a number as a suffix.
I have tried running the above command with the .pbtxt file, and I get this exception:
input_graph_def.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
Use the .pbtxt and the highest-numbered .ckpt, i.e. something like:
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=/CarEvaluation/mobilenet_v1_224.pbtxt \
--input_checkpoint=/CarEvaluation/checkpoints/mobilenet-10202.ckpt-2000 \
--input_binary=false --output_graph=/CarEvaluation/frozen_mobilenet_v1_224.pb \
--output_node_names=CarEvaluation/Predictions/Reshape_1
As far as I understand from the freeze_graph code, when you use it with a .pbtxt file you need to omit the --input_binary=true option (or set it to false), since the input file is no longer binary.
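If you are unsure which checkpoint prefix to pass, here is a minimal sketch (assuming a TF 1.x checkpoint directory laid out as in the question) that lets TensorFlow resolve the latest prefix for you:
import tensorflow as tf  # TF 1.x

# Hypothetical directory from the question; the returned value is the prefix
# shared by the .index/.data/.meta files, which is what --input_checkpoint expects.
ckpt_prefix = tf.train.latest_checkpoint("/CarEvaluation/checkpoints")
print(ckpt_prefix)  # e.g. /CarEvaluation/checkpoints/mobilenet-10202.ckpt-2000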
I am using the TensorFlow 2.x Object Detection API. I have trained a deep learning model from the model zoo on my dataset, using Google Colab. After training, I now want to evaluate my model using the COCO detection metrics. I used the following script to evaluate my model:
!python3 model_main_tf2.py \
--model_dir=path/to/model_directory \
--pipeline_config_path=path/to/pipeline_config_file \
--checkpoint_dir=path/to/checkpoint_directory
After running the above code, I get the mean average precision (mAP) and average recall (AR) for the latest checkpoint on my test set. But for academic purposes, I want these metrics for all the checkpoints, to get a graph of how my model improved over time. Is there a way to do that? Or is it possible to train and evaluate at the same time in the TensorFlow 2 Object Detection API? I am a beginner in this field, so kindly help me out with this issue. Thank you.
I am facing the same problem, so I had an idea: we can run the model_main_tf2.py script you mentioned to evaluate the model, but change the current checkpoint (the first line) in the checkpoint file:
model_checkpoint_path: "ckpt-1"
then
model_checkpoint_path: "ckpt-2"
then
model_checkpoint_path: "ckpt-3"
...
For each checkpoint you will get a .tfevents file, so you can then open TensorBoard pointing at the directory that contains all the .tfevents files and see how the model improves over time.
I only saved the last 3 checkpoints on my computer, so I can't see the progress from the beginning (my fault), but if you have all the checkpoints, try what I suggest.
See my graph evaluating the last 3 checkpoints.
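To avoid editing the checkpoint file by hand, here is a minimal sketch of the same idea (assuming checkpoints ckpt-1 through ckpt-10 exist and using the placeholder paths from the question; --eval_timeout is a model_main_tf2.py flag that stops the evaluator from waiting indefinitely for a newer checkpoint):
import os
import subprocess

CKPT_DIR = "path/to/checkpoint_directory"  # placeholder, as in the question

for i in range(1, 11):  # assumes checkpoints ckpt-1 .. ckpt-10 exist
    # Point the 'checkpoint' file at the checkpoint to evaluate next.
    with open(os.path.join(CKPT_DIR, "checkpoint"), "w") as f:
        f.write('model_checkpoint_path: "ckpt-%d"\n' % i)
    subprocess.run([
        "python3", "model_main_tf2.py",
        "--model_dir=path/to/model_directory",
        "--pipeline_config_path=path/to/pipeline_config_file",
        "--checkpoint_dir=" + CKPT_DIR,
        "--eval_timeout=60",  # give up waiting for a newer checkpoint after 60 s
    ], check=True)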
You should have an eval directory including an events.out.tfevents file under your model directory. You can run !tensorboard --logdir=path/to/eval/directory to access the graphs.
You can run training with the same snippet you have, just without the --checkpoint_dir flag, and open another terminal to run evaluation like you're currently doing.
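Concretely, a minimal sketch with the placeholder paths from the question (evaluation watches the same directory that training writes its checkpoints to):
# Terminal 1 - training (no --checkpoint_dir):
python3 model_main_tf2.py --model_dir=path/to/model_directory --pipeline_config_path=path/to/pipeline_config_file
# Terminal 2 - continuous evaluation of checkpoints as they appear:
python3 model_main_tf2.py --model_dir=path/to/model_directory --pipeline_config_path=path/to/pipeline_config_file --checkpoint_dir=path/to/model_directory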
After using YOLOv5 to train, the model weights are saved as a .pt file.
How can I convert the weights file (model.pt) to an HDF5 file (model.h5)?
Running python train.py --batch 16 --epochs 3 --data mydata.yaml --weights yolov5s.pt, the result is a best.pt file in a YOLOv5 subfolder. How can I convert it to an .h5 file?
Do this installation and get started here
This could be a lengthy procedure ... as you are aware, a .pt file only contains weights and not the model architecture, so your model class must also be present in your conversion code.
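As an illustration of why the model class matters, here is a minimal, hypothetical sketch (not YOLOv5-specific) of moving one conv layer's weights from PyTorch to Keras; a real conversion repeats this for every layer of the rebuilt architecture:
import torch
import tensorflow as tf

# In practice the tensors would come from torch.load("best.pt", map_location="cpu");
# a stand-in kernel is used here so the sketch runs on its own.
pt_kernel = torch.randn(16, 3, 3, 3)  # PyTorch layout: (out_ch, in_ch, kH, kW)
pt_bias = torch.zeros(16)

# Rebuild the equivalent layer in Keras and copy the weights over,
# transposing the kernel to the Keras layout: (kH, kW, in_ch, out_ch).
layer = tf.keras.layers.Conv2D(16, 3, input_shape=(64, 64, 3))
model = tf.keras.Sequential([layer])
layer.set_weights([pt_kernel.permute(2, 3, 1, 0).numpy(), pt_bias.numpy()])
model.save("model.h5")  # HDF5 output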
I am creating a custom object detection sample for Android. I used the ssd_mobilenet_v1_coco pretrained model for transfer learning and got decent accuracy. I also successfully managed to export model.ckpt-XXXX to a .pb TFLite graph using this line in the terminal (run from the object_detection folder after cloning the TensorFlow Object Detection API from GitHub):
python export_tflite_ssd_graph.py --pipeline_config_path=training/ssd_mobilenet_v1_coco.config --trained_checkpoint_prefix=training/model.ckpt-40500 --output_directory=tflite --add_postprocessing_op=true
The above created a tflite folder containing two files:
tflite_graph.pb
tflite_graph.pbtxt
However, when I try to convert tflite_graph.pb to detect.tflite, I get the following error and the program ends abruptly:
"TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed.
...
Check failed: input_array_dims[i] == input_array_proto.shape().dims(i) (300 vs. 128)
Fatal Python error: Aborted
...
This is the command I used to convert the .pb to .tflite:
tflite_convert --graph_def_file=tflite/tflite_graph.pb --output_file=tflite/detect.tflite --input_shapes=1,128,128,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --allow_custom_ops
The images I used had a size of 128x128, which is why I assumed that would be the input shape. I do have TOCO installed as well.
Any help or advice will be highly appreciated.
Upon doing additional research, I found out that it was because the model's config file expected images of size 300 x 300. So I changed the image dimensions in the config file to 128 and it worked.
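For reference, the relevant fragment of the SSD pipeline config (a sketch of the standard ssd_mobilenet_v1 layout) is the fixed_shape_resizer, changed here from 300 to 128:
image_resizer {
  fixed_shape_resizer {
    height: 128
    width: 128
  }
}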
I have created my saved_model.pb through Docker on my macOS device.
After trying tflite_convert --output_file=./myModels/mymodel.tflite --saved_model_dir=./myModels/ it outputs the following error, requiring a tags argument:
"ValueError: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 0 MetaGraphs in the SavedModel with tag sets []. Pass a 'tags=' argument to load this SavedModel."
I have also tried another command where you have to provide the input and output arrays, but I do not know which arrays to put in those fields. Has anybody hit this problem and solved it before? Thanks.
You can find the tags in your saved model using the saved_model_cli:
https://www.tensorflow.org/guide/saved_model#saved_model_cli
$ saved_model_cli show --dir ./myModels/ --all
Pass the required tags to tflite_convert with --saved_model_tag_set.
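For example, if saved_model_cli reports the usual tag set serve (an assumption; use whatever tags it actually prints for your model):
tflite_convert --output_file=./myModels/mymodel.tflite --saved_model_dir=./myModels/ --saved_model_tag_set=serve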
I have trained a new model on top of ssd_mobilenet_v1_coco for a custom data set. This model works fine in TensorFlow, but now I want to use it in OpenCV:
import cv2
net = cv2.dnn.readNetFromTensorflow("model/frozen_inference_graph.pb", "model/protobuf.pbtxt")
net.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True))  # img: a BGR image
detections = net.forward()
For the config file, I converted the frozen graph to .pbtxt and passed it in, but then I got the following error:
[libprotobuf ERROR /home/chamath/Projects/opencv/opencv/3rdparty/protobuf/src/google/protobuf/text_format.cc:298] Error parsing text-format tensorflow.GraphDef: 731:5: Unknown enumeration value of "DT_RESOURCE" for field "type".
As suggested here, I tried the config file mentioned in the thread, but with it object detection does not work properly: the wrong number of boxes is detected, and they are misplaced.
Is there any method to create a .pbtxt config file that works with OpenCV? Or any suggestions on how to make my model work in OpenCV?
It is likely that you haven't generated the proper graph after training.
You have to convert the graph like this:
python ../opencv/samples/dnn/tf_text_graph_ssd.py \
    --input trained-inference-graphs/inference_graph_v5.pb/frozen_inference_graph.pb \
    --output trained-inference-graphs/inference_graph_v5.pb/graph.pbtxt
Then pass the .pb and the graph.pbtxt to cv2.dnn.readNetFromTensorflow; that should work for you :)
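A minimal sketch of that final step (assuming the paths from the command above and a hypothetical test.jpg; 300x300 is the usual SSD input size):
import cv2

net = cv2.dnn.readNetFromTensorflow(
    "trained-inference-graphs/inference_graph_v5.pb/frozen_inference_graph.pb",
    "trained-inference-graphs/inference_graph_v5.pb/graph.pbtxt")
img = cv2.imread("test.jpg")  # hypothetical test image
net.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True))
detections = net.forward()  # shape (1, 1, N, 7): [batch_id, class_id, score, x1, y1, x2, y2]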