Tensorflow Object Detection API validation vs test set - python

I recently started looking into the Tensorflow Object Detection API and have a question on the validation set:
Is the validation set used at all for the model training?
For instance, are the weights of the model selected based on the accuracy on the validation set?
I am trying to figure out whether I need an independent test set (different from the evaluation set) to get unbiased results on the model performance, or whether I can use the validation set for that.
Thank you!

The validation dataset (the test.record) is not used in the training.
It is always better to have a validation dataset, for example to guard against overfitting.
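If you want a truly unbiased final number, one option is to carve out a third split up front and only build the train and eval record files from the first two. Below is a minimal sketch (not part of the Object Detection API itself), assuming examples is a list of (image_path, annotation) pairs you have collected; the names and ratios are illustrative only.

# Sketch: make train / validation / test splits before writing the .record files.
from sklearn.model_selection import train_test_split

def make_splits(examples, seed=42):
    # Hold out 15% as an untouched test set for the final, unbiased evaluation.
    train_val, test = train_test_split(examples, test_size=0.15, random_state=seed)
    # Split the remainder into training and validation (eval) data.
    train, val = train_test_split(train_val, test_size=0.15, random_state=seed)
    return train, val, test

# train -> train.record, val -> the eval record used during training,
# test  -> kept aside until the very end for the unbiased report.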

Related

Train and validation data structure

What will happen if I use the same training data and validation data for my machine learning classifier?
If the train data and the validation data are the same, the trained classifier will have a high accuracy, because it has already seen the data. That is why we use train-test splits. We take 60-70% of the data to train the classifier, and then run the classifier against the remaining 30-40%, the validation data, which the classifier has not seen yet. This helps measure the accuracy of the classifier and its behavior, such as overfitting or underfitting, before it faces a real test set with no labels.
We create multiple models and then use the validation set to see which model performed best. We also use the validation data to reduce the complexity of our model to the right level. If you use the train data as your validation data, you will achieve incredibly high levels of success (your misclassification rate or mean squared error will be tiny), but when you apply the model to real data that isn't from your train data, your model will do very poorly. This is called OVERFITTING to the train data.
Basically nothing happens. You are just validating your model's performance on the same data it was trained on, which practically doesn't yield anything different or useful. It is like teaching someone to recognize an apple and then asking them to recognize that very same apple to see how well they learned.
Why is a validation set used then? In short, the train and validation sets are assumed to be generated from the same distribution, so a model trained on the training set should perform almost equally well on examples from the validation set that it has not seen before.
Generally, we split the data into training and validation sets to prevent overfitting. To illustrate, imagine a model that classifies whether an image shows a human, and a dataset of 1000 human images. If you train your model on all the images in that dataset and then validate it on the same dataset, your accuracy will be around 99%. However, when you give the model an image from a different dataset, the accuracy will be much lower. Generalizing the model, in this example, means training it to look for something like a stick figure that defines whether the image shows a human, instead of looking for one specific handsome blond man. That is why we split the dataset into training and validation sets: to generalize the model and prevent overfitting.
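To make this concrete, here is a small scikit-learn sketch (not from the original answers; the dataset and model are chosen purely for illustration) comparing accuracy measured on the training data with accuracy on a held-out split:

# Sketch: training accuracy vs. accuracy on a held-out split (illustrative only).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on training data:", clf.score(X_train, y_train))  # close to 1.0, misleading
print("accuracy on held-out data:", clf.score(X_val, y_val))      # noticeably lower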
TLDR;
If you use the same dataset for training and validation then:
training_accuracy = testing_accuracy
Your testing_accuracy will be the same as training_accuracy if you use the training dataset as the validation dataset. Therefore you will NOT be able to tell if your model has overfit or not.
Let's talk about datasets and evaluation metrics. Here is some terminology (reference) -
Datasets:
Training dataset: The data used to fit the model.
Validation dataset: The data used to validate the generalization ability of the model or for early stopping during the training process. In most cases, this is the same as the test dataset.
Evaluations:
Training accuracy: The accuracy you achieve when comparing predictions and actuals from the training data itself.
Testing accuracy: The accuracy you achieve when comparing predictions and actuals from the testing/validation data.
With the training_accuracy, you can get a sense of how well a model fits your data, and the testing_accuracy tells you how well that model generalizes. If training_accuracy is low, then your model has underfitted and you may need a better model (better features, a different architecture, etc.) for the given problem. If training_accuracy is high but testing_accuracy is low, your model fits the data well but does not generalize to unseen data. This is overfitting.
Note: In practice, it is better to have an overfit model and regularize it heavily than to work with an underfit model.
Another important thing to understand is that training a model (fit) and running inference with it (predict / score) are two separate tasks. Therefore, when you use the same dataset for training and validation, you are still training the model on that dataset, and at inference time you are also scoring it on that same dataset, which gives you the same accuracy as the training_accuracy.
You will therefore not be able to tell whether you have overfit, BUT that doesn't mean you will get 99% accuracy as the other answer suggests! You may still underfit and get an extremely low model accuracy.
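As a rough illustration of reading these two numbers off a Keras run (a sketch with a tiny placeholder model and random data; the thresholds are arbitrary, not taken from any answer above):

# Sketch: diagnosing under-/overfitting from training vs. validation accuracy.
import numpy as np
from tensorflow import keras

x_train, y_train = np.random.rand(800, 20), np.random.randint(0, 2, 800)
x_val, y_val = np.random.rand(200, 20), np.random.randint(0, 2, 200)

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                    epochs=20, verbose=0)

train_acc = history.history["accuracy"][-1]   # key is "acc" on older Keras versions
val_acc = history.history["val_accuracy"][-1]

if train_acc < 0.7:
    print("low training accuracy -> likely underfitting")
elif train_acc - val_acc > 0.1:
    print("training accuracy much higher than validation -> likely overfitting")
else:
    print("model fits and generalizes reasonably well")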

Shall I update my training data in real-time?

I tried image classification using a trained model and it works well, but some images are not recognized correctly, and in those cases I have to get the image and its label from users. So my question is: is it possible to add new data to an already trained model?
No. At inference time you use the weights of the trained model for predictions, which basically means that once your model is deployed, the capabilities of your image classifier are fixed by those weights. If you wish to improve your model, you would have to retrain it with the new data. There is, however, another learning paradigm called "Online Learning", in which the model keeps learning and modifying its weights: the weights are not fixed, and the model continuously updates them with each training input. As far as I know, this is not usually recommended for CNNs, because the backward pass of gradients is computationally intensive and your inference will be slow because of it.
No model predicts with 100% accuracy; if it did, it would be an ideal model. If you want to add more data to your trained model, you have to retrain the model with the new data. Having more data is always a good idea: it lets the data speak for itself instead of relying on assumptions and weak correlations, and more data generally results in better, more accurate models. So if you want better accuracy, you have to train your model with more data. Without retraining, you can't add data to your trained model.
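In practice, "adding data to a trained model" usually just means continuing training on the enlarged dataset. A hedged sketch of that idea (the file names, input size and class count are placeholders, not an official API):

# Sketch: continue training a saved classifier on newly labelled images.
import numpy as np
from tensorflow import keras

model = keras.models.load_model("my_classifier.h5")   # placeholder path to the trained model

# x_new / y_new stand for the freshly collected images and the labels the users supplied;
# in practice you would mix them with (a sample of) the original training data so the
# model does not forget what it already learned.
x_new = np.random.rand(32, 224, 224, 3).astype("float32")
y_new = np.random.randint(0, 10, size=32)

model.fit(x_new, y_new, epochs=3, batch_size=8)
model.save("my_classifier_updated.h5")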

Using validation set in training model after adjust hyperparameters

I'm doing my best to create a model for imbalanced data using a NN. I have a separate test set, but I have a question about the validation data. Can I add the validation data to the train set after adjusting the hyperparameters, or is it better to leave it out and train the final model only on the train data set? What do you think, and what's your experience with this kind of data?
The validation dataset is only used by Keras to calculate a score for you after each epoch. Your model's weights are not affected by this dataset, but you get better statistics.
That means you can still set a validation dataset after you have adjusted the hyperparameters, and if you don't want to, you don't have to set validation data at all.
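For reference, the pattern the question describes often looks roughly like the sketch below; build_model, best_params and the arrays are placeholders, and refitting on train + validation is a judgment call, since you lose the per-epoch validation signal once you do it:

# Sketch: after tuning, refit on train + validation and score once on the test set.
import numpy as np

def refit_on_train_plus_val(build_model, best_params,
                            x_train, y_train, x_val, y_val,
                            x_test, y_test):
    # Merge the training and validation data now that tuning is finished.
    x_full = np.concatenate([x_train, x_val])
    y_full = np.concatenate([y_train, y_val])

    # Rebuild the model with the hyperparameters chosen on the validation set.
    final_model = build_model(**best_params)
    final_model.fit(x_full, y_full, epochs=10)

    # The separate test set is used exactly once, for the final unbiased score.
    return final_model.evaluate(x_test, y_test)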

Validation and Testing in Tensorflow Estimator vs. Keras

I've read answers here and am trying to understand how training, validation and testing map to the Tensorflow Estimator API and the Keras API.
A: Tensorflow
The tf.estimator.train_and_evaluate function takes a train_spec and an eval_spec.
Here, does evaluate mean validation or testing in above terminology?
If it's testing, where do I specify a validation set?
B: Keras
In Keras this seems clearer: model.fit takes a validation_data argument, which is for the validation set. There is a separate function, model.evaluate, to which we provide the test set. Is this correct?
In practice the terms "test set" and "validation set" are used interchangeably (flipped from how they are described above). As a result, it has become common for the set that is used during training to be referred to as either the test or the validation set. To disambiguate, the set that gets set aside for hyperparameter tuning (here described as the validation set) is generally referred to as the holdout set. (source)
Based on this definition you can do one simple thing. For example, suppose the first dataset is the "train" set, the second is the "validation" set (as in Keras), used for real-time evaluation of the model at each step, and the final dataset is the "test" set.
You can simply check the model once it has finished training by running model.predict on the test dataset, to see how your model works on unseen data.
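A small sketch of how the three sets typically map onto the Keras calls mentioned in the question (tiny placeholder model and random arrays, for illustration only):

# Sketch: validation set inside model.fit, test set via model.evaluate / model.predict.
import numpy as np
from tensorflow import keras

x_train, y_train = np.random.rand(600, 10), np.random.randint(0, 2, 600)
x_val, y_val = np.random.rand(200, 10), np.random.randint(0, 2, 200)
x_test, y_test = np.random.rand(200, 10), np.random.randint(0, 2, 200)

model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid", input_shape=(10,))])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Validation set: monitored at the end of every epoch, used for tuning / early stopping.
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5, verbose=0)

# Test set: touched only once training is finished.
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
predictions = model.predict(x_test)   # per-example outputs on the unseen data
print("test accuracy:", test_acc)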

Different validation accuracy when using the keras function fit_generator() and doing prediction on every individual picture?

Recently I used keras to train a network to classify pictures, using the keras function model.fit_generator() to fit my model. fit_generator() automatically runs the model on the validation data and returns a validation accuracy when an epoch finishes.
But an odd thing happened: when I used the model to predict the validation data and compared the results with the correct classes, the validation accuracy was lower than what I get from fit_generator().
I have two assumptions:
1. I use a generator to get data from a dictionary, so I assume that within a single epoch the generator may repeatedly fetch data that the model already fits well, so that the accuracy may be higher.
2. Keras may use some tricks or preprocess the data when doing validation, thus enhancing the accuracy.
I tried to look through the Keras source code and documentation, but nothing helped. I would be very thankful if anyone could give me some advice about the problem.
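One way to narrow this down is to reproduce the reported validation accuracy by hand, using exactly the same preprocessing as during training and a generator with shuffling disabled. A hedged sketch, assuming a flow_from_directory setup; the path, image size and model file are placeholders:

# Sketch: compare Keras' reported validation accuracy with a manual computation.
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator

val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/validation",        # placeholder path
    target_size=(224, 224),
    batch_size=32,
    shuffle=False,            # keeps predictions aligned with val_gen.classes
)

model = keras.models.load_model("my_model.h5")   # placeholder trained model

loss, acc = model.evaluate(val_gen, verbose=0)   # accuracy as Keras computes it

probs = model.predict(val_gen, verbose=0)        # manual per-image predictions
manual_acc = np.mean(np.argmax(probs, axis=1) == val_gen.classes)

print(acc, manual_acc)   # with identical preprocessing these should now agree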
