Understanding what fc1000_softmax output from tensorflow CNN classification means - python

I am using Python 3.6.10 and TensorFlow 1.5 on a CPU. I have trained a CNN and saved it as a .onnx file. I am now trying to run binary classification on my images using the code below:
import onnx
import warnings
from onnx_tf.backend import prepare
import numpy as np
from numpy import array
from IPython.display import display
from PIL import Image
warnings.filterwarnings('ignore')
onnx_model = onnx.load("trainednet.onnx") # load onnx model
tf_rep = prepare(onnx_model) # Import the ONNX model to Tensorflow
img = Image.open('Im025.jpg').resize((224, 224))
img = array(img).reshape(1,3, 224,224)
classification = tf_rep.run(img)
print(classification)
The print(classification) gives me an output like this:
Outputs(fc1000_softmax=array([[9.9967182e-01, 3.2823894e-04]], dtype=float32))
What does this output mean and how can I use it to understand what tensorflow classified my image as?

Well, what you have is the output array of your model's softmax layer. This output can be interpreted as a probability assigned to each class (two, in your case). You can also read it as how confident the model is in each class.
So to get the final classification, take the index of the maximum value in this array and map it through your label map {0 -> class_one, 1 -> class_two}.
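For example, a minimal sketch (the label names here are placeholders; substitute your own classes):
import numpy as np
probs = classification[0]  # the fc1000_softmax array, shape (1, 2)
label_map = {0: "class_one", 1: "class_two"}  # hypothetical labels
pred_idx = int(np.argmax(probs[0]))
print(label_map[pred_idx], float(probs[0][pred_idx]))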

Yes, exactly. After all, your model is not perfect (i.e. it doesn't have 100% accuracy). If you test on more images of class two, you will see the second number get higher.
You should run it on a test set or refer to the metrics of the trained model.

Related

Loading TFlite model for Inference (Python)

I'm using Tensorflow Lite to train an image classifier. I now have a bunch of *.tflite models stored, and I'm trying to write some code that allows me to pick a tflite model file, pick a dataset, and test that model on that dataset (inference).
When I train a model using:
model = image_classifier.create(trainData, validation_data=valData, shuffle=True, use_augmentation=False)
I am able to easily test this model on a test dataset right away because the model is actually stored in the variable 'model', by using:
model.evaluate_tflite('model.tflite', test_data)
or
loss, accuracy = model.evaluate(test_data)
However, if I simply want to load an already existing *.tflite model, without having trained it in the same run, I can't figure out a simple way to do that.
Following these instructions, it seems like a lot of steps for what I'm trying to do. In other machine learning libraries (like PyTorch), you can define the model, quickly load the saved weights, and get straight to testing, like:
model = models.densenet201(progress=True, pretrained=pretrained)
model.load_state_dict(torch.load("models/model.pt"))
Is there a simple way for me to initialise the model into the 'model' variable, load the saved weights from a *.tflite file, and then run inference?
Thank you for your help
A simple example of image classification:
import tensorflow as tf
import numpy as np
import cv2
class TFLiteModel:
    def __init__(self, model_path: str):
        # Load the TFLite model and allocate its input/output tensors
        self.interpreter = tf.lite.Interpreter(model_path)
        self.interpreter.allocate_tensors()
        self.input_details = self.interpreter.get_input_details()
        self.output_details = self.interpreter.get_output_details()

    def predict(self, *data_args):
        # Expect one argument per model input tensor
        assert len(data_args) == len(self.input_details)
        for data, details in zip(data_args, self.input_details):
            self.interpreter.set_tensor(details["index"], data)
        self.interpreter.invoke()
        return self.interpreter.get_tensor(self.output_details[0]["index"])
model = TFLiteModel("mobilenet_v2_1.0_224_1_default_1.tflite")
image = cv2.imread("hand_blower.png")
image = cv2.resize(image, (224, 224))
image = image.astype(np.float32)[np.newaxis]
image = (image - 127.5) / 127.5
label = model.predict(image)[0].argmax()
print(label)
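Note that (image - 127.5) / 127.5 scales the pixel values to [-1, 1], which is what this float MobileNet v2 model expects. Also, cv2.imread returns images in BGR channel order; for a model trained on RGB inputs you may want to add cv2.cvtColor(image, cv2.COLOR_BGR2RGB) before resizing.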
Please refer to the official documentation for detailed information:
https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter
The model was loaded from:
https://tfhub.dev/tensorflow/lite-model/mobilenet_v2_1.0_224/1/default/1

Is it possible to make bounding box inference from a detectron2 model in ONNX format?

After successfully converting my detectron2 model to ONNX format, I can't make predictions.
I am getting the following error:
failed: Fatal error: AliasWithName is not a registered function/op
My code:
import onnx
import onnxruntime as ort
import numpy as np
import glob
import cv2
onnx_model = onnx.load("test.onnx")
onnx.checker.check_model(onnx_model)
im = cv2.imread('img.png')
print(im.shape)
ort_sess = ort.InferenceSession('test.onnx', providers=['CPUExecutionProvider'])
outputs = ort_sess.run(None, {'input': im})
print(outputs)
Am I doing something wrong?
In the documentation (https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer.export_onnx) they say:
"Export the model to ONNX format. Note that the exported model contains custom ops only available in caffe2, therefore it cannot be directly executed by another runtime (such as onnxruntime or TensorRT). Post-processing or transformation passes may be applied on the model to accommodate different runtimes, but we currently do not provide support for them."
What is that "Post-processing or transformation" that I should do?

How to Load Fastai model and predict it on single image

I have trained a fastai model using a Kaggle notebook. It saved the model, but the problem is how to load it. I have tried different methods, like the one given below. Even when it does load the model, there is no predict function; the only thing I can see is model.eval().
The second problem is that when the model was trained on Google Colab, it wouldn't even accept a single image. I tried converting the image to a NumPy array and another approach, but neither worked.
I am attaching the Kaggle link to the model training, the saved model, and the test images after this code.
#Code for Loading model
from fastai import *
from fastai.vision import *
import torch
loc = torch.load('/content/gdrive/MyDrive/Data Exports/35k data/stage-1.pth')
body = create_body(models.resnet18, True, None)
data_classes = 4
nf = callbacks.hooks.num_features_model(body) * 2
head = create_head(nf, data_classes, None, ps=0.5, bn_final=False)
model = nn.Sequential(body, head)
Kaggle Model
Test Images From Kaggle Dataset
Saved Model
How to load PyTorch models:
loc = torch.load('/content/gdrive/MyDrive/Data Exports/35k data/stage-1.pth')
model = ... # build your model
model.load_state_dict(loc)
model.eval()
Now you should be able to simply use the forward pass to generate your predictions:
input = ... # your input image
pred = model(input) # your class predictions
Don't forget to convert your inputs to torch tensors first; you might want to use a DataLoader for ease of use.
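For a single image, a minimal sketch could look like this (the image size and normalization stats are assumptions; match whatever preprocessing your training used):
import torch
from PIL import Image
from torchvision import transforms
# Hypothetical preprocessing; adjust to your training setup
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
])
img = Image.open("test.jpg").convert("RGB")
input = preprocess(img).unsqueeze(0)  # add a batch dimension -> (1, 3, 224, 224)
with torch.no_grad():
    pred = model(input)
print(pred.argmax(dim=1).item())  # index of the predicted class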

Logging long tensor values in tensorflow estimator

I have built a classification model using the TensorFlow Estimator API. I am trying to get tensor outputs from hidden layers printed in the logs during prediction, using the code below.
model = tf.estimator.DNNLinearCombinedClassifier(
    model_dir=model_dir,
    linear_feature_columns=wide_columns,
    dnn_feature_columns=deep_columns,
    dnn_hidden_units=hidden_units,
    config=run_config)
tensors_to_log = {"DenseOut": "dnn/logits/BiasAdd"}
logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, every_n_iter=1)
predictions = model.predict(train_input_fn, hooks=[logging_hook])
When I run the code, the tensors are logged in the output, but since the value is very long it is truncated and I can only see a few numbers at the beginning and the end.
INFO:tensorflow:DenseOut = [[ 0.61572325 -0.44044942 -0.19232166 ... 0.04 0.605 0.15]]
How can I tell TensorFlow to log the complete output?
Interestingly, I found that the solution was to set np.set_printoptions:
import numpy as np
np.set_printoptions(threshold=np.nan)
It seems that tensorflow and numpy are closely integrated.
Try this:
import sys
import numpy as np
np.set_printoptions(threshold=sys.maxsize)
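On newer NumPy versions, threshold=np.nan raises a ValueError (the error message itself suggests sys.maxsize), so this second variant is the safer choice.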

Caffe execution

I am starting to use Caffe for deep learning. I have the .caffemodel file with my trained weights and a particular neural network. I am using the Python interface.
I've seen that I can load my network and my weights by doing this:
solver=caffe.get_solver('prototxtfile.prototxt')
solver.net.copy_from('weights.caffemodel')
But I do not want to fine-tune my model; I just want to use those weights. I want to run the network and, for each image from the ImageNet data set, obtain the classification result (not the accuracy of an entire batch). How can I do that?
Thank you very much.
Try to understand the attached Python code and adjust it to your needs. It's not my code, but I wrote a similar piece to test my models.
The source is:
https://www.cc.gatech.edu/~zk15/deep_learning/classify_test.py
If you don't want to fine-tune a pre-trained model, it's obvious that you don't need a solver. The solver is what optimizes the model. If you want to predict the class probability for an image, you actually just have to do a forward pass. Keep in mind that your deploy.prototxt must have a proper last layer which uses either a softmax or sigmoid function (depending on the architecture). You can't use the loss function from the train_val.prototxt for this.
import numpy as np
import matplotlib.pyplot as plt
# Make sure that caffe is on the python path:
caffe_root = '../'  # this file is expected to be in {caffe_root}/examples
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
# Set the right path to your model definition file, pretrained model weights,
# and the image you would like to classify.
MODEL_FILE = '../models/bvlc_reference_caffenet/deploy.prototxt'
PRETRAINED = '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'
IMAGE_FILE = 'images/cat.jpg'
caffe.set_mode_cpu()
net = caffe.Classifier(MODEL_FILE, PRETRAINED,
                       mean=np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1),
                       channel_swap=(2, 1, 0),
                       raw_scale=255,
                       image_dims=(256, 256))
input_image = caffe.io.load_image(IMAGE_FILE)
plt.imshow(input_image)
# predict takes any number of images, and formats them for the Caffe net automatically
prediction = net.predict([input_image])
print('prediction shape:', prediction[0].shape)
plt.plot(prediction[0])
print('predicted class:', prediction[0].argmax())
plt.show()
This is the code I use when I need to forward an image through my network:
import caffe
caffe.set_mode_cpu()    # if you are using CPU
# caffe.set_mode_gpu()  # or if you are using GPU
model_def = "path/to/deploy.prototxt"         # architecture
model_weights = "path/to/weights.caffemodel"  # weights
net = caffe.Net(model_def,     # defines the structure of the model
                model_weights,
                caffe.TEST)    # use test mode (e.g., don't perform dropout)
# Let's forward a single image (let's say inputImg);
# 'data' is the name of my input blob
net.blobs["data"].data[0] = inputImg
out = net.forward()
# To get the final softmax probability:
# in my case, 'prob' is the name of our last blob,
# a softmax layer that will output the score/probability for our problem
outputScore = net.blobs["prob"].data[0]  # [0] here because we forwarded a single image
In this example, the dimensions of inputImg must match the dimensions of the images used during training, and the same preprocessing must be applied.
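If you need to reproduce the standard Caffe preprocessing yourself, a sketch using caffe.io.Transformer could look like this (the mean-file path is an assumption; use the one from your own training setup):
import numpy as np
import caffe
# Build a transformer matched to the net's input blob shape
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HWC -> CHW
transformer.set_mean('data', np.load('path/to/mean.npy').mean(1).mean(1))  # assumed mean file
transformer.set_raw_scale('data', 255)           # [0, 1] -> [0, 255]
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
img = caffe.io.load_image('path/to/image.jpg')   # loads as RGB floats in [0, 1]
net.blobs['data'].data[0] = transformer.preprocess('data', img)
out = net.forward()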
