Can I quantize a tflite file? - python

I'm new to TensorFlow and started training my model in Google Colaboratory. After spending a few hours training it, I was finally able to download the tflite file, and it's working great! The only issue I have with it is its speed. I've looked into post-training quantization, but it seems I still need the original Keras model for that, and all I have left is the tflite file itself, as the notebook has since been closed and all its data lost. Is there any way I can quantize the file itself?
Thank you in advance for any replies.
I tried using tf.lite.Interpreter to load the model into a Keras optimizer, but that didn't work.

Related

convert .pb model into quantized tflite model

I'm totally new to TensorFlow.
I have created an object detection model (.pb and .pbtxt) using the 'faster_rcnn_inception_v2_coco_2018_01_28' model I found in the TensorFlow model zoo. It works fine on Windows, but I want to use this model on a Google Coral Edge TPU. How can I convert my frozen model into a quantized edgetpu.tflite model?
There are 2 more steps to this pipeline:
1) Convert the .pb -> tflite:
I won't go through the details, since there is documentation on this on the official TensorFlow page and it changes very often, but I'll still try to answer your question specifically. There are 2 ways of doing this:
Quantization Aware Training: this happens during training of the model. I don't think this applies to you, since your question seems to indicate that you were not aware of this process. But please correct me if I'm wrong.
Post Training Quantization: basically loading your model, in which all tensors are of type float, and converting it to a tflite form with int8 tensors. Again, I won't go into too much detail, but I'll give you 2 actual ways of doing so :) a) with code (see the sketch after this list)
b) with the tflite_convert tool
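For option a), a minimal sketch of full-integer post-training quantization with the Python API is given below. The graph file name, input/output tensor names, input shape, and the random calibration data are all placeholders; substitute your model's real values and feed real samples through representative_data_gen:

import numpy as np
import tensorflow as tf

# Placeholder graph file and tensor names -- replace with your model's.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_inference_graph.pb',
    input_arrays=['image_tensor'],
    output_arrays=['detection_boxes'],
    input_shapes={'image_tensor': [1, 300, 300, 3]})

# The representative dataset lets the converter calibrate int8 ranges.
def representative_data_gen():
    for _ in range(100):
        # Replace the random data with real samples from your dataset.
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full int8 quantization, which the Edge TPU compiler requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('your_quantized_model.tflite', 'wb') as f:
    f.write(tflite_model)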
2) Compile the model from tflite -> edgetpu.tflite:
Once you have produced a fully quantized tflite model, congrats: your model is now much more efficient for ARM platforms and much smaller. However, it will still be run on the CPU unless you compile it for the Edge TPU. You can review this doc for installation and usage, but compiling it is as easy as:
$ edgetpu_compiler -s your_quantized_model.tflite
Hope this helps!

keras model prediction is nan after saving and loading

I trained a neural network on Google Colab.
I saved the neural network using joblib.dump()
I then loaded the model on my PC using joblib.load()
I made a prediction on the exact same sample, using the same model, on both colab and my PC. On colab, it has an output of [[0.51]]. On my pc, it has an output of [[nan]].
The model summary reports that the architecture of the model is the same.
I checked the weights of the model I loaded on my PC, and the model on colab, and the weights are the exact same.
Any ideas as to what I can do? Thank you.
Quick update: even if I change all of my inputs to zero, the prediction is still nan.
As far as I know, Keras has its own function to save the model, such as model.save('file.h5'), while the joblib library is used to save sklearn models.
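A minimal sketch of the round trip with the matching Keras APIs (the tiny model here is just a stand-in for the trained network in the question):

import numpy as np
from tensorflow import keras

# Stand-in model; in the question this is the network trained on Colab.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
sample = np.zeros((1, 4), dtype=np.float32)

# Save with the Keras API (architecture + weights + optimizer state) ...
model.save('my_model.h5')

# ... and load with the matching Keras loader instead of joblib.
restored = keras.models.load_model('my_model.h5')
print(restored.predict(sample))  # same output on Colab and on your PC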

Load tensorflow checkpoint as keras model

I have an old model defined and trained using TensorFlow, and now I would like to work on it, but I'm currently using Keras for everything.
So the question is: is it possible to load a TF checkpoint (with *.index, *.meta, etc.) into a Keras model?
I am aware of old questions like: How can I convert a trained Tensorflow model to Keras?.
I am hoping that after 2 years, and with Keras now included in TF, there is an easier way to do it.
Unfortunately I don't have the original model definition in TF; I may be able to find it, but it would be nicer if that weren't necessary.
Thanks!
The link below is the official TensorFlow tutorial, in which the trained model is saved with a .ckpt extension and then loaded back and used with a Keras model.
I think it might help you.
https://www.tensorflow.org/tutorials/keras/save_and_restore_models
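If the checkpoint is readable by tf.keras (as in that tutorial), a minimal sketch looks like the following; build_model() is a hypothetical stand-in for rebuilding the original architecture, since a checkpoint stores only weights, and the checkpoint paths are placeholders. For an old raw TF checkpoint you can at least inspect the stored variables and copy them into Keras layers by hand:

import tensorflow as tf

def build_model():
    # Hypothetical stand-in: rebuild the same architecture the
    # checkpoint was trained with, layer for layer.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

model = build_model()
# Point load_weights at the checkpoint prefix (the path without the
# .index / .data-00000-of-00001 suffixes).
model.load_weights('training_1/cp.ckpt')

# For an old raw TF checkpoint, list its variables and pull tensors
# out by name so they can be assigned to Keras layers manually:
reader = tf.train.load_checkpoint('old_model/model.ckpt')
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)
# weights = reader.get_tensor('some/variable/name')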

Use Machine Learning Model in Pretrained Manner Keras, Tensorflow

I built a CNN model for image classification using the Keras library, but training takes many hours. Once I have trained my model, how can I use it without training it once more? I mean, after I train my model, I want to use it many times.
This is because I will use my model in Android Studio.
Any help is appreciated
Thank YOU...
EDIT
When I wrote this question, I did not know about model.save and load_model; in the answers you can see the appropriate usage of them.
You can easily save your model after the training process by using:
model.save('my_model.h5')
You can later load that model by using:
from keras.models import load_model
model = load_model('my_model.h5')
For more details, have a look at the documentation: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model

Sessions with tensorflow

I'm a TensorFlow beginner, so excuse my question if it is stupid.
I checked a GitHub implementation of a CNN using MNIST data and TensorFlow; see the link below:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py
However, I need to save the model generated by this code but don't know how to do it, as the code does not involve the use of sessions. How do I incorporate a session into it?
Would appreciate your response.
The linked code is using tf.estimator.Estimator to train the model. Its documentation includes how to save the model using export_savedmodel. A saved model can be imported by specifying its location through the model_dir argument of the tf.estimator.Estimator initialiser.
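A minimal sketch of that flow in the TF 1.x style the example uses is below. The model_fn here is a trivial stand-in for the example's network, and the 'images' feature key and 784 shape are assumptions based on that repository; note that export_savedmodel needs a trained checkpoint to already exist in model_dir:

import tensorflow as tf

# Trivial stand-in for the linked example's model_fn.
def model_fn(features, labels, mode):
    logits = tf.layers.dense(features['images'], 10)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode, predictions=tf.argmax(logits, axis=1))
    loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# model_dir is where checkpoints are written during training and
# restored from when the Estimator is re-created later.
model = tf.estimator.Estimator(model_fn, model_dir='checkpoint_dir/')

# Describe the tensors a serving request will feed to the model;
# the feature key and shape are assumed from the MNIST example.
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {'images': tf.placeholder(tf.float32, shape=[None, 784])})

model.export_savedmodel('exported_model/', serving_input_fn)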
