Validation loss is neither increasing nor decreasing - python

Usually when a model overfits, validation loss goes up while training loss goes down from the point of overfitting. In my case, however, training loss still goes down but validation loss just stays at the same level. Hence validation accuracy also stays at the same level while training accuracy goes up. I am trying to reconstruct a 2D image from a 3D volume using a UNet. The behavior is the same when I try to reconstruct a 3D volume from a 2D image, only with higher loss and lower accuracy. Can someone explain why the validation loss stays flat instead of rising after the point of overfitting?

The trends show that your model is overfitting. Ways to overcome overfitting include (a minimal Keras sketch of these follows the list):
Use data augmentation
Use more data
Use Dropout
Use regularization
Try lowering your learning rate
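
For concreteness, here is a minimal sketch of those mitigations in Keras. This is not the asker's UNet; every layer size, rate, and the classification head are illustrative placeholders.

```python
# A minimal sketch of the mitigations above in Keras, not the asker's UNet.
# All layer sizes, rates, and the classification head are illustrative guesses.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    # Data augmentation: random flips/rotations, active during training only.
    layers.RandomFlip("horizontal", input_shape=(64, 64, 1)),
    layers.RandomRotation(0.1),
    # Weight regularization: penalizes large kernel weights.
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    # Dropout: randomly zeroes activations to reduce co-adaptation.
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])

# A lower learning rate can also flatten out a diverging validation curve.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```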

Related

What does noise mean in accuracy-loss graphs (LSTM)?

I am trying to train an LSTM model and I am also plotting the graphs of train-test accuracy and train-test loss, as you can see from the images I attached.
What concerns me is that the plots are noisy. From my understanding (please correct me if I am wrong), noise means that I am overfitting my model and it doesn't learn. Am I right?
Thank you.
"Noise" doesn't mean overfit. When your validation loss is much higher than your training loss or when your validation accuracy is much lower than your training accuracy, we call that overfitting.
But for your situation, your training & validation accuracy is similar, your training & validation loss are similar too. Therefore, Your model is not overfitting.
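
If you want to judge the trend through the noise, one common trick is to overlay a moving average on the raw curve. A minimal sketch, with synthetic data standing in for the metric history:

```python
# A minimal sketch: smoothing a noisy metric curve with a moving average so
# the underlying trend is easier to judge. The data here is synthetic and
# stands in for something like history.history["val_loss"] from model.fit().
import numpy as np
import matplotlib.pyplot as plt

def moving_average(values, window=5):
    # Rolling mean; the output is shorter than the input by window - 1 points.
    return np.convolve(values, np.ones(window) / window, mode="valid")

epochs = np.arange(100)
val_loss = 1.0 / (1 + 0.05 * epochs) + np.random.normal(0, 0.03, size=100)

plt.plot(epochs, val_loss, alpha=0.4, label="raw")
plt.plot(epochs[4:], moving_average(val_loss), label="5-epoch moving average")
plt.xlabel("epoch")
plt.ylabel("validation loss")
plt.legend()
plt.show()
```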

How to know when overfitting is taking place?

I have training data with 3961 rows and 32 columns that I want to fit to a Random Forest and a Gradient Boosting model. While training, I need to fine-tune the hyper-parameters of the models to get the best AUC possible. To do so, I minimize the quantity 1-AUC(Y_real, Y_pred) using the Basin-Hopping algorithm implemented in SciPy; so my training and internal validation subsamples are the same.
When the optimization is finished, I get an AUC=0.994 for the Random Forest, while for the Gradient Boosting I get AUC=1. Am I overfitting these models? How can I tell when overfitting is taking place during training?
To know if you are overfitting you have to compute:
Training set accuracy (or 1-AUC in your case)
Test set accuracy (or 1-AUC in your case; you can use a validation data set if you have one)
Once you have calculated these scores, compare them. If the training set score is much better than your test set score, then you are overfitting. This means that your model is "memorizing" your data instead of learning from it to make future predictions.
To know if you are overfitting, you always need to do this comparison. However, if your training accuracy or score is suspiciously perfect (e.g. an accuracy of 100%), you can already suspect overfitting from that alone.
So, if you don't have separate training and test data, you have to create them using sklearn.model_selection.train_test_split, as in the sketch below. Then you will be able to compare both accuracies. Otherwise, you won't be able to know, with confidence, whether you are overfitting or not.
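
A minimal sketch of that comparison with scikit-learn. The generated dataset is a stand-in; the asker's real 3961x32 features and labels would replace make_classification:

```python
# A minimal sketch of the train/test AUC comparison described above.
# make_classification produces stand-in data with the question's shape.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3961, n_features=32, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_auc = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"train AUC = {train_auc:.3f}, test AUC = {test_auc:.3f}")
# A train AUC far above the test AUC is the overfitting signal.
```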

What does a sudden increase in accuracy during epoch training show about my model?

I am learning about Convolutional Neural Networks and practicing on the Kaggle digit recognizer (MNIST) dataset.
While training, I noticed that in spite of the initially gradual growth in accuracy, at one point there was a huge jump, from 0.8984 to 0.9814.
As a beginner, I want to investigate what this jump really shows about my model. Here is the image of the epochs:
[Image: per-epoch training log showing the accuracy jump]
I have circled the jump in yellow. Thanks in advance!
As the loss gradually decreases, the model's fit to the training data improves. Minimizing the cost function drives the loss down, which directly improves that fit, and a better fit means higher accuracy (which you can see as the accuracy rising while the loss falls). There is a difference of almost 0.08 between your consecutive loss values, which leaves enough room for the model to improve substantially on its current state in a single step.
As the model progresses, we then try it on the test dataset, because real-world data is nothing like the data it was trained on.
However, very high training accuracy is not always good: the model may be overfitting, meaning it fits the training data so closely that it cannot handle even small changes in the input. Therefore, a correct balance between learning rate and number of epochs is required to predict the classes correctly (one way to automate this is sketched below). It also depends on the architecture, the optimizer (which should keep the oscillations low), and numerous other things.
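
One hedged way to automate that learning-rate/epoch balance in Keras, not something the original answer spelled out; the patience values and factor are illustrative guesses:

```python
# A hedged sketch, not part of the original answer: Keras callbacks that
# balance learning rate and epoch count by reacting to validation loss.
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

callbacks = [
    # Stop once val_loss has not improved for 5 epochs; keep the best weights.
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    # Halve the learning rate after 2 stagnant epochs.
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
]
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=50, callbacks=callbacks)  # model and data assumed defined
```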

Python Keras - Accuracy drops to zero

I have a problem when training a U-Net, which has many similarities with a CNN, in Keras with TensorFlow. When training starts, the accuracy increases and the loss steadily goes down. At around epoch 40, in my example, the validation loss jumps to its maximum and the validation accuracy drops to zero. What can I do to prevent that from happening? I am using a similar approach to this one for my code in Keras.
[Example image of the loss curve]
Edit:
I already tried changing the learning rate, adding dropout, and changing optimizers; none of these change the curve for the better. As I have a big training set, it is very unlikely that I am encountering overfitting.
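
No answer is shown for this one. As a hedged aside, sudden loss blow-ups like this are sometimes tamed by gradient clipping, which none of the fixes tried above cover; a sketch, with the clip value being a guess:

```python
# A hedged sketch of gradient clipping in Keras, an option beyond the
# learning-rate/dropout/optimizer changes already tried above.
# clipnorm=1.0 is an illustrative guess, not a recommended value.
from tensorflow.keras.optimizers import Adam

opt = Adam(learning_rate=1e-4, clipnorm=1.0)  # rescale gradients with norm > 1
# model.compile(optimizer=opt, loss="binary_crossentropy",
#               metrics=["accuracy"])  # model assumed defined elsewhere
```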

Validation accuracy increasing but validation loss is also increasing

I am using a CNN to classify images into 5 classes. The size of my dataset is around 370K images. I am using the Adam optimizer with learning rate 0.0001 and a batch size of 32. Surprisingly, validation accuracy improves over the epochs, but validation loss grows constantly.
I am assuming that the model is becoming less and less sure about the validation set, but accuracy still improves because the softmax output for the predicted class stays above the threshold value (a numeric sketch of this effect follows).
What can be the reason behind this? Any help in this regard would be highly appreciated.
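
A small numeric sketch of that hypothesis: accuracy only checks whether the true class clears the decision threshold, while cross-entropy also penalizes confidence, so both can rise together. All the numbers below are invented for illustration:

```python
# Invented numbers showing accuracy and cross-entropy loss rising together:
# accuracy checks the threshold; the loss penalizes confident mistakes.
import numpy as np

def cross_entropy(p_true):
    # Mean negative log-probability assigned to the true class.
    return -np.mean(np.log(p_true))

def accuracy(p_true, threshold=0.5):
    return np.mean(p_true > threshold)

# Probability the model assigns to the TRUE class for 4 validation samples.
earlier_epoch = np.array([0.90, 0.90, 0.40, 0.40])
later_epoch = np.array([0.51, 0.51, 0.51, 0.02])  # one very confident miss

for name, p in [("earlier", earlier_epoch), ("later", later_epoch)]:
    print(f"{name}: accuracy={accuracy(p):.2f}, loss={cross_entropy(p):.3f}")
# later epoch: accuracy is higher (0.75 vs 0.50), yet the loss is higher too.
```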
I think this is a case of overfitting, as previous comments pointed out. Overfitting can be the result of high variance in the dataset. As you trained the CNN, the training error kept decreasing, producing an increasingly complex model. More complex models tend to overfit, and that shows up as a validation error that starts to increase.
The Adam optimizer takes care of the learning rate, its exponential decay, and the optimization of the model in general, but it won't take any action against overfitting. If you want to reduce overfitting, you will need to add a regularization technique that penalizes large values of the weights in the model, as in the sketch below.
You can read more details about this in the Deep Learning book: http://www.deeplearningbook.org/contents/regularization.html
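
A minimal sketch of the weight penalty the answer recommends, using Keras L2 regularization; the coefficient and layer width are illustrative guesses, not tuned values:

```python
# A minimal sketch of L2 weight regularization in Keras, the kind of
# penalty on large weights the answer recommends. 1e-4 is a guess.
from tensorflow.keras import layers, regularizers

dense = layers.Dense(
    128,                                        # width is a placeholder
    activation="relu",
    kernel_regularizer=regularizers.l2(1e-4),   # adds 1e-4 * sum(w**2) to the loss
)
```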
