Couldn't convert PyTorch model to ONNX - python

I used this repo: https://github.com/Turoad/lanedet
to convert a PyTorch model that uses MobileNetV2 as its backbone to ONNX, but I didn't succeed.
I got a RuntimeError that says:
RuntimeError: Exporting the operator eye to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.
It's really disappointing, given the good results this model produces and how fast it runs.
Is there any way I can fix this error? I need to convert the model to ONNX and then to a TF Lite model to use it in an Android app.
I will provide the pretrained model that I used and the steps I followed for the conversion.
Thank you so much for helping!
My Colab notebook:
https://colab.research.google.com/drive/18udIh8tNJvti7jKmR4jRaRO-oYDgRmvA?usp=sharing
The pretrained model that I used:
https://drive.google.com/file/d/1o3-BgLIQesurIyDCKGliqbo2inUA5cPw/view?usp=sharing

Use torch>=1.7.0 to convert the model; export support for the eye operation was added in that release.
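For reference, a minimal export sketch once torch>=1.7.0 is installed. The model-loading helper, input size, and file names below are assumptions, not the repo's actual API; adapt them to your lanedet config:

    import torch

    # Load the pretrained lanedet model using the repo's own config and
    # checkpoint utilities; load_lanedet_model() is a hypothetical helper.
    model = load_lanedet_model("mobilenetv2_config.py", "model.pth")
    model.eval()

    # Example input size only; match it to the training config.
    dummy_input = torch.randn(1, 3, 288, 800)

    torch.onnx.export(
        model,
        dummy_input,
        "lanedet_mobilenetv2.onnx",
        opset_version=12,
        input_names=["input"],
        output_names=["output"],
    )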

Related

Tensorflow: Convert saved_model.pb to a frozen_graph.pb with Tensorflow 2.5.0

I recently trained my own custom object detector using MobileNet-SSD as the pretrained model. I have been following this tutorial to train my model, and used the exporter_main_v2.py program in TensorFlow/models/research/object_detection/ to generate my saved_model.pb file. I would like to use the OpenCV DNN module, which requires a frozen model and the pbtxt file. In order to generate the pbtxt file, I would need to convert it from the frozen_graph.pb, so I would like to convert my saved_model.pb to a frozen_graph.pb file, but it looks like freezing graphs has been deprecated in TensorFlow 2.x. I tried this solution but I get this error:
AttributeError: '_UserObject' object has no attribute 'inputs'
Does anyone have recommendations or solutions for converting my saved_model.pb to a frozen_graph.pb? I would really appreciate the help!
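For what it's worth, the commonly suggested TF2 approach is to freeze a concrete function obtained from the SavedModel's serving signature rather than the loaded object itself (the '_UserObject' error typically comes from reading a Keras-style .inputs attribute that the loaded object doesn't have). A sketch, with placeholder paths:

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import (
        convert_variables_to_constants_v2,
    )

    # Paths are placeholders; point these at your exported model.
    loaded = tf.saved_model.load("exported_model/saved_model")
    infer = loaded.signatures["serving_default"]

    # Freeze the concrete function; this avoids touching the loaded
    # '_UserObject', which has no Keras-style 'inputs' attribute.
    frozen_func = convert_variables_to_constants_v2(infer)

    tf.io.write_graph(
        graph_or_graph_def=frozen_func.graph,
        logdir=".",
        name="frozen_graph.pb",
        as_text=False,
    )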

convert .pb model into quantized tflite model

I'm totally new to TensorFlow.
I have created an object detection model (.pb and .pbtxt) using the 'faster_rcnn_inception_v2_coco_2018_01_28' model from the TensorFlow model zoo. It works fine on Windows, but I want to use this model on a Google Coral Edge TPU. How can I convert my frozen model into a quantized edgetpu.tflite model?
There are 2 more steps to this pipeline:
1) Convert the .pb -> tflite:
I won't go through the details, since there is documentation on this on the official TensorFlow page and it changes very often, but I'll still try to answer your question specifically. There are 2 ways of doing this:
Quantization Aware Training: this happens during training of the model. I don't think this applies to you, since your question seems to indicate that you were not aware of this process. But please correct me if I'm wrong.
Post Training Quantization: basically, loading your model where all tensors are of type float and converting it to a tflite form with int8 tensors. Again, I won't go into too much detail, but I'll give you 2 actual examples of doing so :) a) with code (see the sketch after this list)
b) with the tflite_convert tool
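A hedged sketch of post-training full-integer quantization for a frozen .pb (the tensor names, shapes, and file paths are placeholders; inspect your graph, e.g. with Netron, to find the real ones):

    import numpy as np
    import tensorflow as tf

    # Load the frozen TF1-style graph (placeholder names below).
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file="frozen_inference_graph.pb",
        input_arrays=["image_tensor"],
        output_arrays=["detection_boxes", "detection_scores"],
        input_shapes={"image_tensor": [1, 300, 300, 3]},
    )

    def representative_dataset():
        # Yield a few hundred real, preprocessed images in practice;
        # random data is used here only to keep the sketch self-contained.
        for _ in range(100):
            yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

    # Full-integer quantization, as required by the Edge TPU.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open("model_quant.tflite", "wb") as f:
        f.write(converter.convert())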
2) Compile the model from tflite -> edgetpu.tflite:
Once you have produced a fully quantized tflite model, congratulations: your model is now much more efficient on ARM platforms and much smaller. However, it will still run on the CPU unless you compile it for the Edge TPU. You can review this doc for installation and usage, but compiling it is as easy as:
$ edgetpu_compiler -s your_quantized_model.tflite
Hope this helps!

Error in converting SageMaker XGBoost model to ONNX model

I'm trying to convert a SageMaker XGBoost model to ONNX, in order to use the ONNX model in a .NET application via ML.NET. I've tried to convert the model using winmltools and onnxmltools, but both tools returned a similar error.
There is a good resource on using machine learning in a business context: I followed Using Machine Learning to Improve Sales in SageMaker to create the model and then convert it to an ONNX model. The example works well in SageMaker.
After running the example, I got a model whose type is sagemaker.estimator.Estimator. I tried to convert it with winmltools and onnxmltools, but both returned the same error:
ValueError: No proper operator name found for '<class 'sagemaker.estimator.Estimator'>'
I followed Convert ML models to ONNX with WinMLTools and ONNXMLTools enables conversion of models to ONNX to convert the SageMaker model to an ONNX model.
After that, I used the xgb.create_model() command to create a SageMaker model and then used the tools to convert that to ONNX, but no luck: I got the same error, just with a different model class.
ValueError: No proper operator name found for '<class 'sagemaker.model.Model'>'
Then I loaded the model using pickle and tried to convert it. Again the same error, just with a different model class:
ValueError: No proper operator name found for '<class 'xgboost.core.Booster'>'
At this point, I have no idea what's causing these errors or how to fix them. I've attached the Improve Sales Classification to ONNX notebook file for reference.
Could you please take a look and let me know a way to solve this?
Thanks in advance!
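(No verified fix here, but a sketch of the conversion path the tools expect: onnxmltools converts XGBoost's sklearn-style wrappers, so a common workaround is to load the booster file extracted from the SageMaker model.tar.gz artifact into an XGBClassifier first. The file name and feature count below are assumptions.)

    import onnxmltools
    import xgboost as xgb
    from onnxmltools.convert.common.data_types import FloatTensorType

    # "xgboost-model" is the booster file from the SageMaker artifact
    # (an assumed name; check the contents of your model.tar.gz).
    clf = xgb.XGBClassifier()
    clf.load_model("xgboost-model")

    n_features = 10  # set to your actual feature count
    onnx_model = onnxmltools.convert_xgboost(
        clf,
        initial_types=[("input", FloatTensorType([None, n_features]))],
    )
    onnxmltools.utils.save_model(onnx_model, "model.onnx")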

Load tensorflow checkpoint as keras model

I have an old model defined and trained using TensorFlow, and now I would like to work on it again, but these days I'm using Keras for everything.
So the question is: is it possible to load a TF checkpoint (with *.index, *.meta, etc.) into a Keras model?
I am aware of old questions like: How can I convert a trained Tensorflow model to Keras?.
I am hoping that after 2 years, and with Keras now included in TF, there is an easier way to do it.
Unfortunately I don't have the original model definition in TF; I may be able to find it, but it would be nicer if it weren't necessary.
Thanks!
The link below is the official TensorFlow tutorial: the trained model is saved with a .ckpt extension, then loaded and used as a Keras model. I think it might help you.
https://www.tensorflow.org/tutorials/keras/save_and_restore_models
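The tutorial's pattern, roughly (a sketch: it assumes you can rebuild a matching architecture in Keras; the layers and checkpoint path below are placeholders):

    import tensorflow as tf

    # Rebuild the same architecture in Keras; this toy model must be
    # replaced with one matching the checkpoint's variables.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])

    # Restore the weights from the checkpoint (placeholder path).
    model.load_weights("training_1/cp.ckpt")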

Can I convert all the tensorflow slim models to tflite?

I'm training TensorFlow Slim based models for image classification on a custom dataset. Before I invest a lot of time training on such a huge dataset, I wanted to know whether or not I can convert all the models available in the Slim model zoo to the tflite format.
Also, I know that I can convert my custom Slim model to a frozen graph. It is the step after this that I'm worried about, i.e., conversion from my custom trained .pb model to .tflite.
Is this supported? Or is there anyone facing conversion problems that have not yet been resolved?
Thanks.
Many Slim models can be converted to TFLite, but there is no guarantee, since some models might use ops that TFLite does not support.
What you could do is try to convert your model to TensorFlow Lite using TFLiteConverter in Python before training. If the conversion succeeds, then you can train your TF model and convert it once again; a quick sketch follows.
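A quick feasibility-check sketch along those lines (the frozen-graph file and tensor names are placeholders; inspect your own graph for the real ones):

    import tensorflow as tf

    # Try the conversion on the untrained frozen graph before investing
    # time in training.
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file="slim_model_frozen.pb",
        input_arrays=["input"],
        output_arrays=["MobilenetV2/Predictions/Reshape_1"],
    )
    try:
        tflite_model = converter.convert()
        print("Conversion succeeded: %d bytes" % len(tflite_model))
    except Exception as e:
        print("Conversion failed:", e)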
