Testing a tflite model on Raspberry Pi 4 - python

I have a pretrained TensorFlow model that I converted to tflite to run inference on a Raspberry Pi 4. Now I want to test the model on the same original test data, which is saved in a .csv file. How can I do that using tflite.Interpreter?
I have tried testing the tflite model to check that it gives results close to those of the original model.
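One way to do this, sketched below, is to load the CSV with NumPy and feed each row through the TFLite Interpreter. The file names, the assumption that each row holds the features followed by the label, and any preprocessing are placeholders to adapt to your own data:
import numpy as np
from tflite_runtime.interpreter import Interpreter  # on a full TF install: tf.lite.Interpreter

# Load the converted model (file name is a placeholder)
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Assumption: one sample per row, features first, label in the last column;
# skiprows=1 assumes the CSV has a header row
data = np.loadtxt("test_data.csv", delimiter=",", skiprows=1)
features, labels = data[:, :-1], data[:, -1]

predictions = []
for row in features:
    # Shape and dtype must match the converted model's input (often float32, batch of 1)
    x = row.reshape(input_details[0]["shape"]).astype(input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], x)
    interpreter.invoke()
    predictions.append(interpreter.get_tensor(output_details[0]["index"]))
You can then compare predictions against labels with the same metric you used for the original TensorFlow model.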

Related

How to train Faster R-CNN in TensorFlow using a custom dataset in TFRecord format?

I have a dataset in TFRecord format, annotated on Roboflow. Now I want to train a Faster R-CNN and an SSD from scratch in TensorFlow. How do I load the dataset in the program and feed it to the model? Typical classification models have an input layer with a defined image shape, but here I have to deal with both the images and the annotations.
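No answer was posted here, but a rough sketch of reading a detection-style TFRecord with tf.data is shown below. The feature keys follow the TF Object Detection API convention that Roboflow exports typically use; verify them against your own records before relying on this:
import tensorflow as tf

# Assumed keys: TF Object Detection API / Roboflow export convention
feature_spec = {
    "image/encoded": tf.io.FixedLenFeature([], tf.string),
    "image/object/bbox/xmin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymin": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/xmax": tf.io.VarLenFeature(tf.float32),
    "image/object/bbox/ymax": tf.io.VarLenFeature(tf.float32),
    "image/object/class/label": tf.io.VarLenFeature(tf.int64),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(parsed["image/encoded"], channels=3)
    # Stack the per-object box coordinates into an [N, 4] tensor
    boxes = tf.stack([
        tf.sparse.to_dense(parsed["image/object/bbox/ymin"]),
        tf.sparse.to_dense(parsed["image/object/bbox/xmin"]),
        tf.sparse.to_dense(parsed["image/object/bbox/ymax"]),
        tf.sparse.to_dense(parsed["image/object/bbox/xmax"]),
    ], axis=-1)
    labels = tf.sparse.to_dense(parsed["image/object/class/label"])
    return image, {"boxes": boxes, "labels": labels}

dataset = tf.data.TFRecordDataset("train.tfrecord").map(parse_example)
In practice, detection frameworks such as the TensorFlow Object Detection API consume TFRecords in this layout directly through a pipeline configuration, which avoids writing the detection input pipeline by hand.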

Check quantization status of model

I have a Keras (not tf.keras) model which I quantized (post-training) to run it on an embedded device.
To convert the model to a quantized tflite model, I tried different approaches and ended up with around five versions of quantized models. They all have slightly different sizes, but they all seem to work on my x86 machine, and they all show different inference timings.
Now I would like to check how the models are actually quantized (fully, only weights, ...), as the embedded solution only accepts a fully quantized model. I also want to see more details, e.g., the differences in the weights (which might explain the different model sizes). The model summary does not give any insight.
Can you give me a tip on how to go about it?
Does anyone know if the tflite conversion with the TF1.x version is always fully quantized?
Thanks
More explanation:
The models should be fully quantized, as I used
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
during conversion. However, I had to convert with the TF1.x converter, or respectively tf.compat.v1.lite.TFLiteConverter.from_keras_model_file with TF2.x, so I am not sure whether the output model differs between the "classic" TF1.x converter and the tf.compat.v1 version.
The ways the different models were created:
using TF1.3 to convert an h5 model
using TF1.5.3 to convert an h5 model
using TF2.2 to convert an h5 model
converting the h5 model to pb with TF1.3
converting the h5 model to pb with TF1.5
converting the h5 model to pb with TF2.2
using TF1.5.3 to convert the converted pb models
using TF2.2 to convert the converted pb models
Netron is a handy tool for visualizing networks. You can choose individual layers and see the types and values of weights, biases, inputs and outputs.
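If you prefer to check this programmatically rather than visually, the TFLite Interpreter reports the dtype and quantization parameters of every tensor. A short sketch (the file name is a placeholder):
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
interpreter.allocate_tensors()

# Input/output dtypes show whether the model is quantized end to end (uint8) or not (float32)
print([d["dtype"] for d in interpreter.get_input_details()])
print([d["dtype"] for d in interpreter.get_output_details()])

# Per-tensor dtypes plus (scale, zero_point) distinguish weight-only from full quantization
for t in interpreter.get_tensor_details():
    print(t["name"], t["dtype"], t["quantization"])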

Feeding input to resnet18 onnx model (Resnet18 deployment for object detection in video file)

I have built and saved a trained resnet18 model using the code on GitHub in this link.
The code can be run by specifying the training directory and the type of network model.
The model resnet18.onnx was chosen and trained to classify 4 types of cells.
I am using an Nvidia Jetson (Ubuntu) for this project.
Now I need to use the trained model generated by the code above (resnet18.onnx) to classify objects in a video, using the following code snippet, where a small box and the prediction value are displayed on the detected cell in the video.
The error message I get when I run the above code with resnet18.onnx is:
confidences, boxes = ort_session.run(None, {input_name: img})
ort_session.run expected 2 get 1
What is the second input expected by the ONNX model? (I know the model only needs the image for classification, so what is the second input that is required?)
The error means the model expects 2 inputs and you only provided one.
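To see exactly what the session expects, you can ask onnxruntime to list the model's declared inputs; a quick check along these lines prints the name, shape and type of each one, including the second input you are not feeding:
import onnxruntime as ort

ort_session = ort.InferenceSession("resnet18.onnx")

# List every input the graph declares; the second entry is the one missing from the feed
for inp in ort_session.get_inputs():
    print(inp.name, inp.shape, inp.type)
Whatever that second input turns out to be, it needs to be added to the dictionary passed to ort_session.run alongside the image.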

How to convert segmentation model model to openvino int8 model?

I am trying to convert my TensorFlow segmentation model to OpenVINO with quantization. I converted my .pb model to an intermediate representation with the OpenVINO Model Optimizer, but how do I quantize the model? The official documentation says to do it with the DL Workbench, but in the Workbench I only have detection and classification dataset types.
Can I convert my model to INT8 without a dataset, or can I create a dataset for segmentation?
The overall flow for converting a model from FP32 to INT8 is:
Select an FP32 model
Select an appropriate dataset
Run a baseline inference
Configure INT8 calibration settings
Configure inference settings for a calibrated model
View INT8 calibration
View inference results
Compare the calibrated model with the original FP32 model
Only some convolution models in the FP32 format can be quantized to INT8. If your model is incompatible, you will receive an error message.
The second stage of creating a configuration is adding a sample dataset. You can import a dataset, automatically generate a test dataset consisting of Gaussian distributed noise, or select a previously uploaded dataset.
You can find more details in the below link:
http://docs.openvinotoolkit.org/latest/_docs_Workbench_DG_Select_Datasets.html
You can find additional information about low precision inference in OpenVINO here:
General approach: https://docs.openvino.ai/latest/openvino_docs_IE_DG_Int8Inference.html
Post-Training Optimisation Tool (POT) with default algorithm: https://docs.openvino.ai/latest/pot_docs_LowPrecisionOptimizationGuide.html#doxid-pot-docs-low-precision-optimization-guide
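For completeness, the POT flow can also be scripted outside DL Workbench. The following is only a rough sketch of the DefaultQuantization pipeline under assumed paths and a hypothetical data loader; the exact DataLoader return contract and module layout differ between OpenVINO releases, so check the POT documentation for your version:
import numpy as np
from openvino.tools.pot import DataLoader, IEEngine, load_model, save_model, create_pipeline

# Hypothetical loader over preprocessed numpy images; DefaultQuantization ignores annotations
class SegDataLoader(DataLoader):
    def __init__(self, images):
        self._images = images
    def __len__(self):
        return len(self._images)
    def __getitem__(self, index):
        return self._images[index], None  # (data, annotation) - verify the order for your POT version

# Replace the dummy array with a few hundred real, preprocessed calibration samples
calibration_images = [np.zeros((3, 512, 512), dtype=np.float32)]

model = load_model({"model_name": "seg_model",      # placeholder names and paths
                    "model": "seg_model.xml",
                    "weights": "seg_model.bin"})
engine = IEEngine(config={"device": "CPU"}, data_loader=SegDataLoader(calibration_images))
pipeline = create_pipeline([{"name": "DefaultQuantization",
                             "params": {"target_device": "CPU", "stat_subset_size": 300}}],
                           engine)
quantized_model = pipeline.run(model)
save_model(quantized_model, save_path="int8_model")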
Let me know if you still have questions.

I use TFLiteConverter post_training_quantize=True but my model is still too big to be hosted on Firebase ML Kit's Custom servers

I have written a TensorFlow / Keras Super-Resolution GAN. I've converted the resulting trained .h5 model to a .tflite model, using the below code, executed in Google Colab:
import tensorflow as tf
model = tf.keras.models.load_model('/content/drive/My Drive/srgan/output/srgan.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.post_training_quantize=True
tflite_model = converter.convert()
open("/content/drive/My Drive/srgan/output/converted_model_quantized.tflite", "wb").write(tflite_model)
As you can see, I use converter.post_training_quantize=True, which was supposed to produce a lighter .tflite model than my original .h5 model, which is 159MB. The resulting .tflite model is still 159MB, however.
It's so big that I can't upload it to Google Firebase Machine Learning Kit's servers in the Google Firebase Console.
How could I either:
decrease the size of the current .tflite model which is 159MB (for example using a tool),
or after having deleted the current .tflite model which is 159MB, convert the .h5 model to a lighter .tflite model (for example using a tool)?
Related questions
How to decrease size of .tflite which I converted from keras: no answer, but a comment suggesting the use of converter.post_training_quantize=True. However, as explained above, this solution doesn't seem to work in my case.
In general, quantization means shifting from dtype float32 to uint8, so in theory the model size should shrink by a factor of about 4. This is most noticeable with larger files.
Check whether your model has actually been quantized with the tool https://lutzroeder.github.io/netron/: load the model and inspect any layer that has weights. In a quantized graph the weight values are stored in uint8 format; in an unquantized graph they are stored in float32 format.
Only setting "converter.post_training_quantize=True" is not enough to quantize your model. The other settings include:
converter.inference_type=tf.uint8
converter.default_ranges_stats=[min_value,max_value]
converter.quantized_input_stats={"name_of_the_input_layer_for_your_model":[mean,std]}
Assuming you are dealing with images:
min_value=0, max_value=255, mean=128 (subjective) and std=128 (subjective). name_of_the_input_layer_for_your_model is the first name shown when you load the model in the tool mentioned above, or you can get the input layer's name in code: model.input prints something like tf.Tensor 'input_1:0' shape=(?, 224, 224, 3) dtype=float32, where input_1 is the name of the input layer. (Note: the model must include both the graph configuration and the weights.)
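For reference, inference_type, default_ranges_stats and quantized_input_stats are attributes of the TF1-style converter. With the TF2 converter used in the question (from_keras_model), the usual route to a smaller model is the Optimize flag, optionally combined with a representative dataset for full-integer quantization. A rough sketch, where the generator's input shape is a placeholder for real low-resolution samples:
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('/content/drive/My Drive/srgan/output/srgan.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Dynamic-range quantization: weights are stored as int8, roughly a 4x size reduction
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Optional full-integer quantization: feed a few representative input batches
def representative_dataset():
    for _ in range(100):
        # Placeholder shape; use real low-resolution samples from your training data
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]
# converter.representative_dataset = representative_dataset

tflite_model = converter.convert()
open("converted_model_quantized.tflite", "wb").write(tflite_model)
Dynamic-range quantization (the Optimize flag on its own) is usually what brings the file down to roughly a quarter of its float32 size.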
