Printing labels for TF Lite image classification model - Python

I am working on an Image Classification TF Lite model to detect mask or no mask on human faces using this link. I followed the link, trained a multi-class image classification model in Vertex AI, and downloaded the TF Lite model. The labels of the model are "mask" and "no_mask". In order to test the model, I wrote the following code:
import cv2
import numpy as np
import tensorflow as tf
from pprint import pprint

interpret = tf.lite.Interpreter(model_path="<FILE_PATH>")
input = interpret.get_input_details()
output = interpret.get_output_details()
interpret.allocate_tensors()
pprint(input)
pprint(output)
data = cv2.imread("file.jpeg")
new_image = cv2.resize(data, (224, 224))
interpret.resize_tensor_input(input[0]["index"], [1, 224, 224, 3])
interpret.allocate_tensors()
# Add a batch dimension and match the model's expected input dtype.
interpret.set_tensor(input[0]["index"], np.expand_dims(new_image, axis=0).astype(input[0]['dtype']))
interpret.invoke()
result = interpret.get_tensor(output[0]['index'])
print("Prediction is - {}".format(result))
Running this code on one of my images gives the following result:
[[30 246]]
Now I want to print the label in the result as well. For example:
mask: 30
no_mask: 246
Is there any way I can implement this?
Please help, as I am new to TF Lite.

I solved it myself. The .tflite model downloaded from Vertex AI contains a label file called 'dict.txt' that lists all the labels. Check the GCP doc here. To get this label file, we first need to unzip the .tflite file, which will give us dict.txt. For more information, check out the TFLite documentation and how to read associated files from models.
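As a minimal sketch of that unzip step (the model file name is a placeholder): a .tflite file with bundled associated files is a valid ZIP archive, so the label file can be pulled out with Python's zipfile module.
import zipfile

with zipfile.ZipFile("model.tflite", "r") as archive:
    print(archive.namelist())     # 'dict.txt' should appear among the entries
    archive.extract("dict.txt")   # writes dict.txt to the current directory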
After that, I executed the following code, taking reference from label.py in the GitHub link:
import cv2
import numpy as np
import tensorflow as tf
from pprint import pprint

def load_labels(filename):
    # Read one label per line from the extracted dict.txt
    with open(filename, 'r') as f:
        return [line.strip() for line in f.readlines()]

interpret = tf.lite.Interpreter(model_path="<FILE_PATH>")
input = interpret.get_input_details()
output = interpret.get_output_details()
interpret.allocate_tensors()
pprint(input)
pprint(output)
data = cv2.imread("file.jpeg")
new_image = cv2.resize(data, (224, 224))
interpret.resize_tensor_input(input[0]["index"], [1, 224, 224, 3])
interpret.allocate_tensors()
interpret.set_tensor(input[0]["index"], np.expand_dims(new_image, axis=0).astype(input[0]['dtype']))
interpret.invoke()
floating_model = input[0]['dtype'] == np.float32
op_data = interpret.get_tensor(output[0]['index'])
result = np.squeeze(op_data)
top_k = result.argsort()[-5:][::-1]  # indices of the top 5 scores, highest first
labels = load_labels("dict.txt")
for i in top_k:
    if floating_model:
        print('{:08.6f}: {}'.format(float(result[i]), labels[i]))
    else:
        print('{:08.6f}: {}'.format(float(result[i] / 255.0), labels[i]))
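For the example output [[30 246]] above, this loop prints something like 0.964706: no_mask followed by 0.117647: mask (the uint8 scores divided by 255), which pairs each score with its label as asked in the question.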

Related

What preprocessing is done by TensorFlow Lite Image Searcher ImageDataLoader

I'm trying to use the TensorFlow Lite Image Searcher with mobilenet_v3 to query a database for a similar image, and I'm getting surprisingly bad results.
Thus, I suspect there is a mistake in one of my steps.
Here is my code:
from tflite_model_maker import searcher
from tflite_support.task import vision
# Load pretrained model:
model_name = "lite-model_imagenet_mobilenet_v3_large_100_224_feature_vector_5_metadata_1.tflite"
data_loader = searcher.ImageDataLoader.create(model_name)
# Load db images, calc feature vectors
path_db = 'dir_with_db_jpg_images'
data_loader.load_from_folder(path_db)
# Set model:
scann_options = searcher.ScaNNOptions(
    distance_measure="dot_product",
    tree=searcher.Tree(num_leaves=10, num_leaves_to_search=2),
    score_ah=searcher.ScoreAH(2, anisotropic_quantization_threshold=0.2))
model = searcher.Searcher.create_from_data(data_loader, scann_options)
# Export as tflite model:
model.export(
    export_filename="searcher.tflite",
    userinfo="",
    export_format=searcher.ExportFormat.TFLITE)
# Load model:
image_searcher = vision.ImageSearcher.create_from_file("searcher.tflite")
# Predict NN for query image:
image = vision.TensorImage.create_from_file('path_query_img.jpg')
result = image_searcher.search(image)
result.nearest_neighbors
Do the bad results stem from missing preprocessing steps for the input images (DB or query)?
What happens to the images once load_from_folder(path) is called, before the feature vector is created? And is this different from image_searcher.search(vision.TensorImage.create_from_file('path_query_img.jpg')), which might explain the bad results?
I couldn't figure out a way to first load the DB images and then feed them to the model for it to extract the feature vector and append it to the searcher model. Do such methods exist? That would allow me to experiment more with preprocessing steps (e.g. resizing the images to the 224x224 input size expected by the network - which I tried; see the sketch below).
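One experiment along those lines would be to pre-resize every DB image into a staging folder before loading it (a hypothetical sketch; the folder names and the 224x224 target are my assumptions, not confirmed API behavior):
import os
from PIL import Image

src, dst = 'dir_with_db_jpg_images', 'dir_with_resized_images'
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    if name.lower().endswith('.jpg'):
        img = Image.open(os.path.join(src, name)).convert('RGB')
        img.resize((224, 224)).save(os.path.join(dst, name))

data_loader.load_from_folder(dst)  # then build the searcher as before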
Maybe there is another problem with my code?
Thank you!

How can I add the decode_batch_predictions() method into the Keras Handwriting Recognition OCR model?

I need to add the decode_batch_predictions() method to the output of the Keras Handwriting Recognition OCR model. The reason is that I want to convert the model to TF Lite, and I want the output to be decoded, since I didn't find any way to decode the output in TF Lite on Android. I already saw a similar post for a similar Keras model, but it wouldn't work for this model.
I don't have much knowledge of Python, so it's difficult for me to adapt the answers in that post for this model. I would really appreciate any help, thanks!
I tried using the code from that post but it wouldn't work.
In the notebook for the model given in your link, make the following changes after prediction_model:
prediction_model = keras.models.Model(
    model.get_layer(name="image").input, model.get_layer(name="dense2").output
)  # This line is present in the handwriting_recognition notebook.

def CTCDecoder():
    def decode_batch_predictions(pred):
        input_len = np.ones(pred.shape[0]) * pred.shape[1]
        # Use greedy search. For complex tasks, you can use beam search.
        results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][:, :max_length]
        # Iterate over the results and get back the text.
        output_text = []
        for res in results:
            res = tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8")
            output_text.append(res)
        return output_text

    return tf.keras.layers.Lambda(decode_batch_predictions, name='decode')

decoded_pred_model = keras.models.Model(prediction_model.input, outputs=CTCDecoder()(prediction_model.output))
Convert the decoded_pred_model to a .tflite and use it in Android.
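A minimal conversion sketch, under the assumption that ctc_decode and the string ops inside the Lambda are not TFLite builtins and therefore need the select-TF-ops fallback (the output file name is a placeholder):
converter = tf.lite.TFLiteConverter.from_keras_model(decoded_pred_model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops for ctc_decode / tf.strings
]
tflite_model = converter.convert()
with open('handwriting_ocr.tflite', 'wb') as f:
    f.write(tflite_model)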

Convert TensorFlow Keras python model to Android .tflite model

I am working on an image recognition project using TensorFlow and Keras that I would like to integrate into my Android project. And I am new to TensorFlow...
I would like to find the closest match between an image and a folder with 2000+ images. Images are similar in background and size, like so:
For now, I have the following Python code, which works okay.
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
import numpy as np
from PIL import Image
base_model = VGG16(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('fc1').output)
def extract(img):
    img = img.resize((224, 224))   # Resize the image
    img = img.convert('RGB')       # Convert the image color space
    x = image.img_to_array(img)    # Reformat the image
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    feature = model.predict(x)[0]  # Extract features
    return feature / np.linalg.norm(feature)
# Iterate through images and extract Features
images = ["img1.png","img2.png","img3.png","img4.png","img5.png"...+2000 more]
all_features = np.zeros(shape=(len(images),4096))
for i in range(len(images)):
    feature = extract(img=Image.open(images[i]))
    all_features[i] = np.array(feature)
# Match image
query = extract(img=Image.open("image_to_match.png")) # Extract its features
dists = np.linalg.norm(all_features - query, axis=1) # Calculate the similarity (distance) between images
ids = np.argsort(dists)[:5] # Extract 5 images that have lowest distance
Now I am a bit lost as to where to go from here. To my understanding, I need to create a .h5 file with all the extracted image features and a .tflite file containing the model.
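One possible way to persist the precomputed features so they can be shipped to the app later (my own sketch, not from the answers below; the file names are placeholders):
np.save('all_features.npy', all_features)     # 2000+ x 4096 feature matrix
np.save('image_names.npy', np.array(images))  # keep the matching file names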
UPDATE after answer
I can convert the model with:
import tensorflow as tf

# Convert the model.
base_model = VGG16(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('fc1').output)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
But how can I get the extracted features into my Android project? Also, the file size of the model is over 400 MB, so Android doesn't allow importing it.
Hope you can help me, thanks.
From TensorFlow's own site:
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
Please find the below diagram for a better understanding of the conversion process. The TensorFlow Lite converter takes a TensorFlow/Keras model and generates a TensorFlow Lite (.tflite) model. Even though there is a command-line way of converting the model (https://www.tensorflow.org/lite/convert/index#cmdline), you are recommended to use the Python API instead, because it allows you to add metadata (if required) and apply optimizations to the model. The following steps let you convert your Keras model to TFLite.
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
Many times when you are dealing with bigger models, your model size will be much bigger than the allowed size, and it might not perform as well as you want. So you have to apply optimizations to make the model work. This is done using tf.lite.Optimize. It allows you to optimize your model for speed, storage, etc. TensorFlow previously allowed a lot of manual control, where you were able to specify what to optimize for using tf.lite.Optimize.OPTIMIZE_FOR_LATENCY or tf.lite.Optimize.OPTIMIZE_FOR_SIZE; nowadays the default comes with both of these optimizations. Now the conversion code becomes like this:
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT] #optimization
tflite_quant_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_quant_model)
What this does is dynamic range quantization. Check the size of your model after this step.
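A quick way to check that size from Python (a sketch; the file name is a placeholder):
import os
size_mb = os.path.getsize('model.tflite') / (1024 * 1024)
print('model size: {:.1f} MB'.format(size_mb))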
If you want to quantize the model further, for example converting all float32 values to float16 (which will reduce the model size to approximately half the original), you can do it by specifying a target spec. Then your code will look like this (understand that this will affect the model's accuracy):
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT] #optimization
converter.target_spec.supported_types = [tf.float16] #target spec
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir / "model_quant_f16.tflite"  # tflite_models_dir is a pathlib.Path to the output directory
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
There are other types of post-training quantization as well, which you can find on this page:
https://www.tensorflow.org/lite/performance/post_training_quantization
All of this is post-training quantization; you may also quantize the model during training. Refer to this link for the same; you can also find a lot of tutorials via search: https://www.tensorflow.org/api_docs/python/tf/quantization/
After this you will have to run these TFLite models to test them using Python; a sketch follows the link below. The steps are as below:
Load the model onto an interpreter.
Test the model with sample image(s).
Evaluate the model.
You can find a detailed example on this page: https://www.tensorflow.org/lite/performance/post_training_float16_quant
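A minimal sketch of those three steps, assuming a converted model.tflite and using a zero-filled stand-in input (a real evaluation would loop over labeled test images instead):
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Stand-in input with the model's expected shape and dtype.
sample = np.zeros(inp['shape'], dtype=inp['dtype'])
interpreter.set_tensor(inp['index'], sample)
interpreter.invoke()
print(interpreter.get_tensor(out['index']).shape)  # inspect the output tensor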
There are many other types of quantization as well, based on the precision required and the type of edge device you are going to use. Please refer to the link below for details on how to apply them to your model:
https://www.tensorflow.org/lite/performance/model_optimization
After quantization, check your model size; this should reduce the model size to the required value. If not, repeat the operation with lower precisions.
You have 2 options:
Post-training quantization
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
Selective builds
TensorFlow Lite enables you to reduce model binary sizes by using selective builds (Mobilenet). It says:
Selective builds skip unused operations in your model set and produce a compact library with just the runtime and the op kernels required for the model to run on your mobile device.

ValueError: Image size is zero in Google Colab

I'm learning ML model training by following this tutorial from TensorFlow. I have uploaded my own dataset from my computer to a folder named "sample_arrow" in Google Colab and specified the path to it:
image_path = 'sample_arrow'
The folder contains images, and their size is not 0. But I get an error when executing these lines of code:
data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
ValueError: Image size is zero
What is wrong here? Maybe the folder path is not specified correctly? I'm completely new to the topic, unfamiliar with Python (I have Java skills), and would appreciate a detailed answer.
At last, I've found the solution.
Importing os and the correct path definition were missing:
import os
root_path = "/content/"
image_path = os.path.join(os.path.dirname(root_path), 'sample_arrow')
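A quick sanity check along the same lines (a sketch; the path assumes the folder was uploaded under /content):
import os
print(os.listdir('/content/sample_arrow'))  # the uploaded images should be listed here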
Use the following code in Google Colab:
import tensorflow as tf
data_path = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader
# Load input data specific to an on-device ML app.
data = DataLoader.from_folder(data_path)
train_data, test_data = data.split(0.9)
# Customize the TensorFlow model.
model = image_classifier.create(train_data)
# Evaluate the model.
loss, accuracy = model.evaluate(test_data)
# Export to Tensorflow Lite model and label file in `export_dir`.
model.export(export_dir='/tmp/')
So you need to download the data! It took me a while to find the solution!

TensorFlow, what does tensorflow_datasets.load() return, exactly?

I am following a TensorFlow ML tutorial and I am new to Python. I come from a background of languages like Java. Here is the link to the tutorial.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
# Download the Flowers Dataset using TensorFlow Datasets
(training_set, validation_set), dataset_info = tfds.load(
    'tf_flowers',
    split=['train[:70%]', 'train[70%:]'],
    with_info=True,
    as_supervised=True,
)
num_training_examples = 0
for example in training_set:
    num_training_examples += 1
# Reformat Images and Create Batches
IMAGE_RES = 224
def format_image(image, label):
    image = tf.image.resize(image, (IMAGE_RES, IMAGE_RES)) / 255.0
    return image, label
BATCH_SIZE = 32
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
validation_batches = validation_set.map(format_image).batch(BATCH_SIZE).prefetch(1)
I don't understand how this code operates: (training_set, validation_set), dataset_info = tfds.load. The function tfds.load downloads images of flowers. How come training_set is iterable like some sort of array, when it should perhaps be a folder?
for example in training_set:
    num_training_examples += 1
Also, how come each element in it is used in the following line as two arguments to the function format_image(image, label):
train_batches = training_set.shuffle(num_training_examples//4).map(format_image).batch(BATCH_SIZE).prefetch(1)
What is training_set exactly? Why is it not a folder that contains the following structure:
flowers_a
    file1, file2, file3 ... etc
flowers_b
    file1, file2, file3 ... etc
flowers_c
    file1, file2, file3 ... etc
etc ...
Instead, it's some sort of array, with each element containing an image and its label. It is not clear from the documentation what is happening, for a beginner in Python such as me.
As the name suggests, TensorFlow exists to "make the tensors flow". It's an entire ecosystem with data loading, preprocessing, and machine learning capabilities, so it's not built as an intuitive library that deals with numpy arrays. TensorFlow doesn't keep everything in memory, so what TFDS returns is literally a "TensorFlow Dataset". You need to manipulate it as such. This means that you can't get basic information, like the count, intuitively: you need to iterate through the whole thing. For instance, take this line you gave:
for example in training_set:
    num_training_examples += 1
It passes over all the samples and counts them. As for this part:
(training_set, validation_set), dataset_info = tfds.load...
It loads the "TensorFlow Dataset" as supervised, meaning that each element is a 2-tuple of (data, label). If you remove as_supervised=True, it will be a dictionary instead, and you can access the fields with dataset['image'] and dataset['label'].
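A short sketch of the two access patterns just described (my illustration, using the same tf_flowers dataset):
import tensorflow_datasets as tfds

ds_supervised = tfds.load('tf_flowers', split='train', as_supervised=True)
for image, label in ds_supervised.take(1):    # each element is an (image, label) tuple
    print(image.shape, label.numpy())

ds_dict = tfds.load('tf_flowers', split='train')  # as_supervised defaults to False
for example in ds_dict.take(1):               # each element is a feature dictionary
    print(example['image'].shape, example['label'].numpy())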
Let me know if you want me to explain anything else.
