I'm training TensorFlow-Slim based models for image classification on a custom dataset. Before I invest a lot of time training on such a huge dataset, I wanted to know whether or not I can convert all the models available in the Slim model zoo to TFLite format.
Also, I know that I can convert my custom Slim model to a frozen graph. It is the step after this that I'm worried about, i.e., the conversion from my custom trained .pb model to .tflite.
Is this supported? Or is anyone facing conversion problems that have not yet been resolved?
Thanks.
Many Slim models can be converted to TFLite, but it isn't guaranteed, since some models might contain ops that TFLite does not support.
What you could do is try to convert your model to TensorFlow Lite using TFLiteConverter in Python before training. If the conversion succeeds, you can train your TF model and convert it once again.
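A minimal sketch of such a dry run, assuming a TF1-style frozen graph; the graph path and the input/output tensor names below are placeholders for your model's actual nodes:

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="frozen_graph.pb",        # your exported frozen graph
    input_arrays=["input"],                  # placeholder input tensor name
    output_arrays=["MobilenetV2/Predictions/Reshape_1"],  # placeholder output name
    input_shapes={"input": [1, 224, 224, 3]},
)
try:
    tflite_model = converter.convert()
    open("model.tflite", "wb").write(tflite_model)
    print("conversion succeeded")
except Exception as e:
    # the ConverterError usually lists any unsupported ops
    print("conversion failed:", e)

If the conversion fails here, it will fail the same way after training, so this check costs you nothing.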
I used this repo: https://github.com/Turoad/lanedet
to convert a PyTorch model that uses MobileNetV2 as a backbone to ONNX, but I didn't succeed.
I got a runtime error that says:
RuntimeError: Exporting the operator eye to ONNX opset version 12 is
not supported. Please open a bug to request ONNX export support for
the missing operator.
It's really disappointing, given the good results this model gives and how fast it runs.
Is there any way I can fix this? I need to convert the model to ONNX and then to a TFLite model to use it in an Android app.
I will provide the pretrained model that I used and the steps I followed for the conversion.
Thank you so much for helping!
my colab notebook:
https://colab.research.google.com/drive/18udIh8tNJvti7jKmR4jRaRO-oYDgRmvA?usp=sharing
the pretrained model that I use:
https://drive.google.com/file/d/1o3-BgLIQesurIyDCKGliqbo2inUA5cPw/view?usp=sharing
Use torch>=1.7.0 to convert the model, because ONNX export support for the eye operator was added in that version.
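For reference, a minimal sketch of the export call under torch>=1.7.0; the tiny stand-in model and the input shape are placeholders for the actual lanedet network:

import torch
import torch.nn as nn

print(torch.__version__)  # should be >= 1.7.0

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()  # placeholder model
dummy = torch.randn(1, 3, 288, 800)  # placeholder input shape

torch.onnx.export(
    model, dummy, "model.onnx",
    opset_version=12,
    input_names=["input"], output_names=["output"],
)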
I created a darknet53.weights model for image classification using my own data in Darknet.
(This isn't a YOLOv3 model.)
Is there a way to convert a darknet53.weights file to a PyTorch .pt model?
I tried various code from GitHub and elsewhere, but all of it can only convert YOLOv3 weights files to PyTorch .pt models.
I want to compare the accuracy of the darknet53 model trained in Darknet with other image classification models built with PyTorch.
Initially, I tried to build a darknet53 model in PyTorch, but that didn't work. Therefore, I created the darknet53 model with Darknet.
If anyone knows a good way to do this, please let me know.
Thanks.
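One possible starting point, not a drop-in solution: the Darknet .weights format is a short header followed by the raw float32 parameters in layer order (for a conv+BN block: BN beta, BN gamma, BN running mean, BN running variance, then the conv kernels). Assuming you have a PyTorch darknet53 definition whose layer order matches the .cfg file, a loader could look roughly like this (iter_conv_bn_pairs is a hypothetical helper you would write for your model):

import numpy as np
import torch

def load_darknet_weights(model, path):
    with open(path, "rb") as f:
        np.fromfile(f, dtype=np.int32, count=3)    # header: major, minor, revision
        np.fromfile(f, dtype=np.int64, count=1)    # images seen (int64 for format version >= 0.2)
        weights = np.fromfile(f, dtype=np.float32) # all remaining parameters

    ptr = 0
    for conv, bn in iter_conv_bn_pairs(model):     # hypothetical helper, yields layers in .cfg order
        for param in (bn.bias, bn.weight, bn.running_mean, bn.running_var):
            n = param.numel()
            param.data.copy_(torch.from_numpy(weights[ptr:ptr + n]).view_as(param))
            ptr += n
        n = conv.weight.numel()
        conv.weight.data.copy_(torch.from_numpy(weights[ptr:ptr + n]).view_as(conv.weight))
        ptr += n
    assert ptr == len(weights), "layer order does not match the .weights file"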
I have a Keras (not tf.keras) model which I quantized (post-training) to run it on an embedded device.
To convert the model to a quantized tflite model, I tried different approaches and ended up with around five versions of quantized models. They all have slightly different sizes, but they all seem to work on my x86 machine, and they all show different inference timings.
Now I would like to check how the models are actually quantized (fully, weights only, ...), as the embedded solution only accepts a fully quantized model. I also want to see more details, e.g., the differences in the weights (which might explain the different model sizes). The model summary does not give any insights.
Can you give me a tip on how to go about this?
Does anyone know whether tflite conversion with TF1.x always produces fully quantized models?
Thanks
More explanation:
The models should be fully quantized, as I used
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
during conversion. However, I had to use the TF1.x converter, or respectively tf.compat.v1.lite.TFLiteConverter.from_keras_model_file with TF2.x, so I am not sure whether the output differs between the "classic" TF1.x converter and the tf.compat.v1 version.
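For reference, a full-integer post-training conversion with the compat.v1 converter looks roughly like this (a sketch, not my exact script; the model path and the calibration data are placeholders):

import numpy as np
import tensorflow as tf

def representative_data_gen():
    for _ in range(100):
        # replace with real calibration samples matching your input shape
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

open("model_quant.tflite", "wb").write(converter.convert())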
The way the different models were created:
- converting the h5 model with TF1.3
- converting the h5 model with TF1.5.3
- converting the h5 model with TF2.2
- converting the h5 model to pb with TF1.3
- converting the h5 model to pb with TF1.5
- converting the h5 model to pb with TF2.2
- converting the converted pb models with TF1.5.3
- converting the converted pb models with TF2.2
Netron is a handy tool for visualizing networks. You can choose individual layers and see the types and values of weights, biases, inputs and outputs.
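If you want to check the quantization programmatically rather than visually, the TFLite interpreter exposes per-tensor dtypes and quantization parameters; a short sketch, with "model.tflite" as a placeholder path:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# each entry reports the tensor's dtype and its (scale, zero_point) pair;
# float32 entries indicate parts of the model that were not quantized
for detail in interpreter.get_tensor_details():
    scale, zero_point = detail["quantization"]
    print(detail["name"], detail["dtype"].__name__, scale, zero_point)

# uint8/int8 dtypes here mean the model is quantized end-to-end at its interface
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])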
I'm working with a deep learning model that has a ResNet-50 backbone pretrained on ImageNet. The dataset I'm using is CUB-200, a set of 200 bird species. For this reason, I think it could be good to have a model pretrained on a dataset from a similar domain, and I found that the iNaturalist one could be what I'm looking for.
The problem is that I didn't find any pretrained model for PyTorch, but only a TensorFlow one here.
I tried to convert it using the MMdnn library, but it also needs the '.ckpt.meta' file, and I have only the '.ckpt'.
This is an example of how to use the MMdnn library to convert a TF model to PyTorch:
mmconvert -sf tensorflow -in imagenet_resnet_v2_152.ckpt.meta -iw imagenet_resnet_v2_152.ckpt --dstNode MMdnn_Output -df pytorch -om tf_to_pytorch_resnet_152.pth
Could anyone help me with it?
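One possible workaround, sketched below: if only the checkpoint data is available, you can rebuild the matching graph with TF-Slim, restore the weights, and re-save the checkpoint so that a .ckpt.meta file is written for mmconvert. This is a sketch under the assumption that the checkpoint follows the standard Slim resnet_v2_152 definition; the import paths, file names, input size, and num_classes are placeholders to adjust:

import tensorflow.compat.v1 as tf
import tf_slim as slim
from tf_slim.nets import resnet_v2

tf.disable_eager_execution()

images = tf.placeholder(tf.float32, [None, 299, 299, 3], name="input")
with slim.arg_scope(resnet_v2.resnet_arg_scope()):
    net, _ = resnet_v2.resnet_v2_152(images, num_classes=5089,  # placeholder class count
                                     is_training=False)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "inat_resnet_v2_152.ckpt")       # placeholder path
    saver.save(sess, "inat_resnet_v2_152_resaved.ckpt")  # also writes .ckpt.meta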
I wanted to know: what are the differences between this and this?
Is it just the way the inputs vary?
The main differences between LayersModel and GraphModel are:
LayersModel can only be imported from tf.keras or Keras HDF5 format model types. GraphModel can be imported from either of the aforementioned model types or from a TensorFlow SavedModel.
LayersModel supports further training in JavaScript (through its fit() method). GraphModel supports only inference.
GraphModel usually gives you higher inference speed (10-20%) than LayersModel, due to its graph optimization, which is possible thanks to the inference-only support.
Hope this helps.
Both are doing the same task, i.e., converting a NN model to tfjs format. It's just that in the first link a model stored in h5 format (the format in which Keras models are typically saved) is used, while in the other it's a TF SavedModel.
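Concretely, the two paths correspond to two tensorflowjs_converter invocations along these lines (directory and file names are placeholders):

# Keras HDF5 -> tfjs layers model
tensorflowjs_converter --input_format=keras model.h5 tfjs_model/

# TensorFlow SavedModel -> tfjs graph model
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model saved_model_dir/ tfjs_model/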