What does noise mean in accuracy-loss graphs (LSTM)? - python

I am training an LSTM model and plotting the train-test accuracy and train-test loss curves, as you can see in the attached images.
What concerns me is that the plots are noisy. From my understanding (please correct me if I am wrong), noise means that my model overfits and doesn't learn. Am I right?
Thank you.

"Noise" doesn't mean overfit. When your validation loss is much higher than your training loss or when your validation accuracy is much lower than your training accuracy, we call that overfitting.
But for your situation, your training & validation accuracy is similar, your training & validation loss are similar too. Therefore, Your model is not overfitting.
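
That said, some jitter in per-epoch curves is normal: it usually reflects batch-to-batch variance, a small validation set, or a relatively high learning rate rather than overfitting. If the jitter makes the trend hard to read, a simple moving average can help; here is a minimal, self-contained sketch in which a synthetic curve stands in for whatever your training history gives you:

```python
import numpy as np
import matplotlib.pyplot as plt

def smooth(values, window=5):
    """Moving average: trades a few points at the edges for a clearer trend."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

# Synthetic noisy loss curve, standing in for e.g. history.history["loss"].
rng = np.random.default_rng(0)
epochs = np.arange(100)
loss = np.exp(-epochs / 30) + 0.05 * rng.standard_normal(epochs.size)

plt.plot(epochs, loss, alpha=0.4, label="raw loss")
plt.plot(epochs[4:], smooth(loss), label="smoothed loss")  # window-1 points trimmed
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```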

Related

Validation loss of CNN lower than training loss but still stuck around 80%

I am doing research in NLP and deep learning with mental-health textual data. While training my CNN model, my validation loss is lower than the training loss, but very close to it. Neither the validation nor the training loss goes much lower; both are stuck around 75-80%, while the accuracy achieved is also 76%. What should I do? What is the exact interpretation of this?

Validation loss is neither increasing nor decreasing

Usually when a model overfits, the validation loss goes up while the training loss goes down from the point of overfitting. In my case, however, the training loss still goes down, but the validation loss stays at the same level. Hence the validation accuracy also stays at the same level while the training accuracy goes up. I am trying to reconstruct a 2D image from a 3D volume using a UNet. The behavior is the same when I try to reconstruct a 3D volume from a 2D image, only with higher loss and lower accuracy. Can someone explain this curve, i.e. why the validation loss stays flat from the point of overfitting?
The trends show that your model is overfitting. Ways to overcome overfitting include the following (a minimal sketch of two of them appears after the list):
Use data augmentation
Use more data
Use Dropout
Use regularization
Try lowering your learning rate
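
For illustration, here is a minimal Keras sketch combining two of the remedies above, Dropout and L2 weight regularization; the layer sizes and coefficients are placeholders, not tuned values:

```python
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Input(shape=(128,)),  # placeholder input size
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # penalize large weights
    layers.Dropout(0.5),  # randomly silence half the units during training
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```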

Validation loss and training loss curve: is that acceptable?

I am applying a CNN to epilepsy seizure prediction. This is a plot of validation loss and training loss.
I don't know whether this curve is acceptable or not.
Any help would be appreciated.
Yes, it is acceptable, as long as increasing the number of epochs keeps lowering the validation loss and there is no overfitting.
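
One common way to act on that advice automatically is Keras' EarlyStopping callback, which halts training once the validation loss stops improving. A sketch, assuming a compiled `model` and training arrays `x_train`/`y_train` already exist:

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",         # watch the validation loss
    patience=5,                 # tolerate 5 stagnant epochs before stopping
    restore_best_weights=True,  # roll back to the best epoch's weights
)

# `model`, `x_train`, and `y_train` are assumed to exist already.
history = model.fit(
    x_train, y_train,
    validation_split=0.2,
    epochs=100,
    callbacks=[early_stop],
)
```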

Validation accuracy increasing but validation loss is also increasing

I am using a CNN to classify images into 5 classes. The size of my dataset is around 370K images. I am using the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. Surprisingly, validation accuracy improves over the epochs, but validation loss is constantly growing.
I am assuming that the model is becoming less and less sure about the validation set, but accuracy stays high because the softmax output for the predicted class is still above the threshold value.
What can be the reason behind this? Any help in this regard would be highly appreciated.
I think this is a case of overfitting, as previous comments pointed out. Overfitting can be the result of high variance in the dataset. As you trained the CNN, the training error kept decreasing, which produced an increasingly complex model. More complex models tend to overfit, and this shows up as a validation error that tends to increase.
The Adam optimizer takes care of the learning rate and its exponential decay, and of the optimization of the model in general, but it takes no action against overfitting. If you want to reduce overfitting, you will need to add a regularization technique that penalizes large weight values in the model.
You can read more details about this in the deep learning book: http://www.deeplearningbook.org/contents/regularization.html
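
As a tiny numeric illustration of why accuracy and cross-entropy loss can move in opposite directions: accuracy only checks the argmax, while the loss also weighs confidence, so a single very confident mistake can raise the loss even while accuracy improves. The numbers below are made up purely for illustration:

```python
import numpy as np

def accuracy(probs, labels):
    """Fraction of examples whose argmax matches the true class."""
    return np.mean(np.argmax(probs, axis=1) == labels)

def cross_entropy(probs, labels):
    """Mean negative log-probability assigned to the true class."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

labels = np.zeros(4, dtype=int)  # the true class is 0 for all four examples

# "Earlier epoch": only 2 of 4 correct, but no confident mistakes.
earlier = np.array([[0.60, 0.40],
                    [0.60, 0.40],
                    [0.45, 0.55],
                    [0.45, 0.55]])

# "Later epoch": 3 of 4 correct, but the remaining mistake is made
# with near-total confidence, which cross-entropy punishes heavily.
later = np.array([[0.55, 0.45],
                  [0.55, 0.45],
                  [0.55, 0.45],
                  [0.001, 0.999]])

print(accuracy(earlier, labels), cross_entropy(earlier, labels))  # 0.50, ~0.65
print(accuracy(later, labels), cross_entropy(later, labels))      # 0.75, ~2.18
```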

TensorFlow model accuracy

I have trained my model on a set of 29K images across 36 classes and validated it on 7K images. The model has a training accuracy of 94.59% and a validation accuracy of 95.72%.
It was created for OCR on digits and characters. I know the number of training images for 36 classes might not be sufficient. I'm not certain what to infer from these results.
Question: Is this a good result? Should the testing accuracy always be greater than the training accuracy? Is my model overfitting?
Question: How would I know if my model were overfitting? I'm assuming a very high training accuracy combined with a very low testing accuracy would indicate that?
95% is rather good for 36 classes. If your validation accuracy is higher than your training accuracy, you are underfitting. You can run some more epochs until your training accuracy is a bit higher than your validation accuracy.
Exactly; if training accuracy is much higher, you are overfitting.
The training accuracy should normally be higher than the testing/validation accuracy, because the model has to fit the data it was given before it can predict unknown data well. However, the opposite does sometimes happen, and the reason could be:
a. The test set wasn't randomly selected, or it was randomly selected but happened to be a favourable one (a coincidence).
b. Your model generalizes very well, possibly in combination with the first point.
Check the learning curve first; your case, in which the training accuracy is lower, is rare. The solution may be more data, a more complex model, or more epochs (the usual remedies for underfitting). A rough helper for reading the final accuracies is sketched below.
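
Here is a hypothetical helper (the name `fit_diagnosis` and the 0.05 gap threshold are arbitrary choices, not an established API) that compares the final training and validation accuracy from a Keras History object:

```python
def fit_diagnosis(history, gap=0.05):
    """Crude heuristic: compare final train/val accuracy from a Keras History."""
    train_acc = history.history["accuracy"][-1]
    val_acc = history.history["val_accuracy"][-1]
    if train_acc - val_acc > gap:
        return "training accuracy much higher: likely overfitting"
    if val_acc > train_acc:
        return "validation accuracy higher: underfitting or a lucky split"
    return "train and validation accuracy are close: looks healthy"

# Usage, assuming `history` came from model.fit(..., validation_split=0.2):
# print(fit_diagnosis(history))
```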
