OpenVINO-optimized model ignores some labels during inference - Python

System information
OpenVINO => 2022.1
Operating System / Platform => Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz / Windows 10 64-bit
I trained the following YoloV5 model:
Model Size: Large
Labels: ['mango', 'apple', 'milk', 'orange', 'grapes'].
batch-size: 4
Img-Size: 512
When I run inference with the trained YOLOv5 model, the detections are decent and it is able to detect all 5 labels. The detection confidence is also good, averaging around 90%.
I then optimized the model using OpenVINO:
Quantization: FP16, FP32
But the converted model only detects mango, apple, and grapes, and completely ignores the remaining labels.
Things I have tried:
Retraining the YOLOv5 model with different batch sizes.
Trying different quantization settings when converting to OpenVINO.
Trying different (older) versions of OpenVINO, such as 2020.4.
I have previously faced similar issues when training other models but could never figure out the solution, or even the cause. Has anyone else faced similar issues?
It would be ideal if someone could point me in a direction that helps solve it. Answers that explain potential causes of the issue are also welcome!

Converting the model to a lower precision has its pros and cons.
Inference is faster, but the trade-off is accuracy.
If your use case involves something like clinical results that must be accurate, lower precision is not recommended, since you would have to accept reduced accuracy. Meanwhile, if your use case needs to be fast rather than precise, a lower precision (FP16/INT8) is suitable.
You should choose the precision carefully based on your use case and your hardware.
This might help you to further understand.
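To narrow down where the labels disappear, it can also help to run the same dummy input through both precisions and compare the raw outputs before any postprocessing (confidence thresholds, NMS). Below is a minimal sketch, assuming IRs converted at FP32 and FP16 (the file names are hypothetical) and the OpenVINO 2022.x Python API:

import numpy as np
from openvino.runtime import Core  # OpenVINO 2022.x Python API

core = Core()
dummy = np.random.rand(1, 3, 512, 512).astype(np.float32)  # NCHW, img-size 512

# Hypothetical IR file names for the two conversions
for ir_path in ["yolov5l_fp32.xml", "yolov5l_fp16.xml"]:
    model = core.read_model(ir_path)
    compiled = core.compile_model(model, "CPU")
    results = compiled([dummy])  # returns a dict keyed by output node
    for out in results.values():
        # Compare raw outputs across precisions before NMS or confidence
        # filtering to see where the missing classes drop out
        print(ir_path, out.shape, float(np.abs(out).mean()))

If the raw class scores for the missing labels already collapse at FP16 but not at FP32, the precision conversion itself is the cause; if the raw outputs match, look at the decoding/threshold step instead.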

Related

Methods for increasing the accuracy of a CNN for image classification

I'm currently working on an image classification task involving a large dataset of grayscale cartoon images that my CNN needs to classify. At the moment my model has a test accuracy of about 88%, but I know a higher accuracy is possible.
I've tried:
improving / changing the actual model / architecture
using different meta parameters
different loss functions from the pytorch libraries
a bunch of different transforms
different optimizers from torch.optim
I've also tried a bunch of the standard models included in torchvision.models and am still getting sub 90% accuracy on the test set.
Do I just need to keep trying the above things to squeeze out better accuracy, or are there other avenues I can try? I would really appreciate any suggestions. The only other thing I can think of is writing a custom loss function specific to the dataset, but I'm not sure how much that would help.
From what you've described, it sounds like it might be worth spending some time on data preparation. Here is a good article on how to do that for images. Some ideas you could try are:
Resizing all your images to a fixed size
Subtracting mean pixel values, i.e. normalizing the dataset
I don't really know the context of what you're doing, but I would also consider adding additional features that may be relevant and seeing if that helps.
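A minimal torchvision sketch of the resize and normalization ideas above (the target size and statistics are placeholders; compute the real mean/std from your training set):

import torchvision.transforms as T

transform = T.Compose([
    T.Grayscale(num_output_channels=1),  # the dataset is grayscale cartoons
    T.Resize((224, 224)),                # fixed size; 224 is a placeholder choice
    T.ToTensor(),                        # scales pixel values to [0, 1]
    T.Normalize(mean=[0.5], std=[0.5]),  # replace with stats computed from your data
])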

Measuring Flops for TFLite Model in Tensorflow 2.X

I am trying to measure the FLOPs of a TFLite model in TF2.
I know that TensorFlow 1.x had tf.profiler, which was awesome for measuring FLOPs, but it doesn't seem to work well with tf.keras.
Could anybody please describe how to measure FLOPs for a TFLite model in TF2? I can't seem to find an answer online.
Thank you all so much for your time.
Edit: The link commented below does not help with tflite.
I encountered the same problem and wrote a simple Python package to roughly calculate FLOPs:
https://github.com/lisosia/tflite-flops
Only Conv and DepthwiseConv layers are considered, but that was sufficient for my use case.
Unfortunately, there's no direct way to calculate the FLOPs for a TFLite model. However, you can estimate the value indirectly, by following these 3 steps:
Use the official TFLite benchmark tool to measure how long your model takes (in ms) to perform a single inference.
Use a benchmark app (such as xOPS) to estimate how many floating-point operations per second (FLOPS) your target device can sustain.
Combine the results from steps 1 and 2 to estimate the number of floating-point operations (FLOPs) your model performs during a single inference.
The final result will probably be a rough approximation, but it can still bring some value to your performance analysis.
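For step 3, the arithmetic is just the device's sustained rate multiplied by the measured latency; a quick sketch with made-up numbers:

# Hypothetical measurements
latency_ms = 15.0      # single-inference time from the TFLite benchmark tool (step 1)
device_gflops = 8.0    # sustained GFLOPS reported by a benchmark app (step 2)

# FLOPs per inference ~= (operations per second) * (seconds per inference)
flops_per_inference = device_gflops * 1e9 * (latency_ms / 1000.0)
print(f"~{flops_per_inference / 1e9:.2f} GFLOPs per inference (rough estimate)")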

Detecting small objects with Tensorflow 2 Object Detection API

I'm having trouble finding the best network and configuration to detect small-scale objects. So far I have gotten very low mAPs on small objects (I am trying to detect traffic signs using the Mapillary dataset).
I have tried Faster R-CNN ResNet101 (resizing the input to 1024) and SSD ResNet101 with FPN (resizing the input to 1024).
I did not find a pre-trained Faster R-CNN model with FPN, so I could not try that.
What do you think would be the best network and configuration to detect small objects?
Thank you.
The models you mentioned are built for speed. With small object detection, you often care more about the accuracy of the model, so you should probably use bigger models that sacrifice speed for accuracy (mAP). If you want to use TensorFlow 2, here is an overview of the available models. Also, for small object detection you should keep the resolution high, as you said. You could also tile images into multiple crops and run detection on each portion of the image.
So I disagree with @Akash Desai about SSD, but I also think that Detectron2 is more up to date with state-of-the-art models and gives better performance. So if you don't care about the framework, maybe switch to Detectron2.
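A minimal NumPy sketch of the tiling idea mentioned above (tile size and overlap are hypothetical; detections from each crop must be mapped back to full-image coordinates using the stored offsets):

import numpy as np

def tile_image(img, tile=512, overlap=64):
    # Split an HxWxC array into overlapping square crops, returning
    # (x_offset, y_offset, crop) tuples so detections can be mapped back
    # to full-image coordinates. A real pipeline should also pad or clamp
    # so the right/bottom edges are always covered.
    step = tile - overlap
    h, w = img.shape[:2]
    crops = []
    for y in range(0, max(h - tile, 0) + 1, step):
        for x in range(0, max(w - tile, 0) + 1, step):
            crops.append((x, y, img[y:y + tile, x:x + tile]))
    return crops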
SSD is good at detecting small as well as large targets, because it makes predictions on each and every feature map.
You resized the images to 1024? In that case the model will take more time to train on the dataset, so keep the image size small, e.g. 460x460.
You can also try Detectron2; it's faster and simpler than TensorFlow.
https://colab.research.google.com/github/Tony607/detectron2_instance_segmentation_demo/blob/master/Detectron2_custom_coco_data_segmentation.ipynb

How should I approach a 300 classes classification machine learning problem?

I am trying to build a multi-class classification application, but my dataset has 300 classes. Is it possible to train my model with all these classes on a normal PC?
Sure it is. You can even train ImageNet with 1000 categories or more, if you have enough time! ;)
You just have to think about which loss function you want (categorical crossentropy, sparse categorical crossentropy, or binary if you want to penalize each output node independently); apart from that, there's not really much difference between 10, 100, or 1000 classes.
Of course you have to increase your model size to account for more classes, so RAM may be an issue, but you can always decrease the batch size. If you are using images and convnets and your model is still too large, try downsampling the images, using pooling layers, or using larger strides.
If your computer is too old and slow, you can also try Google Colab which offers free GPU and even TPU online!
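As a concrete illustration of the loss choice above, here is a minimal Keras sketch (the architecture and input size are hypothetical); sparse categorical crossentropy lets the labels stay as plain integers 0..299:

import tensorflow as tf

num_classes = 300

# A deliberately small convnet; only the final Dense layer grows with the
# number of classes, so 300 classes is not a problem for a normal PC.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels, no one-hot needed
              metrics=["accuracy"])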
It is difficult to answer this question, since the training time of your model depends on a number of factors. It might be best to train your model for a certain number of hours and evaluate the performance. You could also fit a learning curve, which can estimate how many data points you require to reach a certain performance; after that you could link the required amount of data to computation time.
Here is an article that provides an algorithm for fitting a learning curve: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3307431/.
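The linked article fits learning curves to observed (dataset size, accuracy) points; a minimal sketch of the same idea with a common power-law form (the data points below are made up for illustration):

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measured points: (training set size, validation accuracy)
sizes = np.array([500, 1000, 2000, 4000, 8000], dtype=float)
acc = np.array([0.55, 0.63, 0.70, 0.75, 0.78])

def power_law(n, a, b, c):
    # Common learning-curve form: accuracy saturates towards a as n grows
    return a - b * n ** (-c)

params, _ = curve_fit(power_law, sizes, acc, p0=[0.9, 1.0, 0.5], maxfev=10000)
print("Predicted accuracy at 32k samples:", power_law(32000.0, *params))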

TensorFlow: lack of repeatability in tf.estimator.DNNClassifier

I wish to use TensorFlow to perform classification on medical datasets.
To do this, I am using exactly the same code as proposed in https://www.tensorflow.org/get_started/estimator, the only difference being that I work with medical datasets rather than the Iris dataset. So it is not custom code.
I need help with the following problem: if I run the same code on the same data with the same parameters for the network configuration (number of layers, number of neurons in each layer, and so on), the results are different. I mean, I run the same code ten times and obtain ten different values for the accuracy. These values even differ widely, ranging from 73% up to 83%.
This means that the subjects considered to be suffering from a given disease vary from one run to another. Put differently, once a network structure is set, several subjects are considered either healthy or sick depending on the run alone.
As you can imagine, this lack of repeatability makes the code useless from both scientific and medical viewpoints: another user running the same configuration over the same dataset would find a different model and different results, and so would treat different subjects.
I have noticed that the problem does not seem to occur with the Iris dataset, where the accuracy is always 0.9666. This is probably because that problem is very easy (linearly separable for all but one item, and a very small dataset).
I have searched the internet and found several other people who have noted the same problem. I have read several possible solutions as well and implemented them all, with no result.
Here is a short list of some suggested remedies that failed in my case:
import os
import random as rn
import numpy as np
import tensorflow as tf

# Fix the seeds of every source of randomness (TensorFlow 1.x API)
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(0)
tf.set_random_seed(0)
rn.seed(0)

tf.reset_default_graph()
# Force single-threaded execution, since multi-threaded reductions
# can be a source of nondeterminism
session_conf = tf.ConfigProto(
    intra_op_parallelism_threads=1,
    inter_op_parallelism_threads=1
)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
Is there any chance of fixing this problem? It is a pity that such an excellent tool as TensorFlow cannot guarantee repeatability.
I am using the following:
custom code: no, it is the one in https://www.tensorflow.org/get_started/estimator
system: Apple
OS: macOS 10.13
TensorFlow version: 1.3.0
Python version: 3.6.3
GPU model: AMD FirePro D700 (actually, two such GPUs)
Thank you very much!
Best regards
Ivanoe De Falco
