I am applying a CNN to epilepsy seizure prediction. This is a plot of the validation loss and training loss.
I don't know whether this curve is acceptable or not.
Any help would be appreciated
Yes, it's acceptable, as long as increasing the number of epochs keeps lowering the validation loss and there is no overfitting.
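One common way to act on that criterion is to let a callback watch the validation loss and stop training once it stops improving. A minimal sketch, assuming a Keras `model` and training arrays `x_train`/`y_train` (placeholder names, not the poster's code):

```python
# Hypothetical sketch: stop training when the validation loss stops improving.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",           # watch the validation loss curve
    patience=10,                  # tolerate 10 epochs without improvement
    restore_best_weights=True,    # roll back to the best epoch seen
)

history = model.fit(
    x_train, y_train,
    validation_split=0.2,
    epochs=200,
    callbacks=[early_stop],
)
```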
I am doing research in NLP and deep learning with mental health textual data. While training my CNN model, my validation loss is lower than my training loss, but only slightly. Neither the validation nor the training loss gets very low; both are stuck at around 75-80%, while the accuracy achieved is 76%. What should I do? What is the exact interpretation of this?
I am trying to train an LSTM model, and I am also plotting the graphs of train/test accuracy and train/test loss, as you can see from the images I attached.
What concerns me is that the plots are noisy. From my understanding (and please correct me if I am wrong), noise means that I am overfitting my model and it isn't learning. Am I right?
Thank you.
"Noise" doesn't mean overfit. When your validation loss is much higher than your training loss or when your validation accuracy is much lower than your training accuracy, we call that overfitting.
But for your situation, your training & validation accuracy is similar, your training & validation loss are similar too. Therefore, Your model is not overfitting.
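A quick way to see this is to plot both curves from the Keras `History` object and look at the gap between them rather than at the jitter. A minimal sketch (the `history` variable is assumed to come from `model.fit`):

```python
# Hypothetical sketch: compare training vs. validation curves; a large and
# growing gap between the two loss curves suggests overfitting, while
# noise alone does not.
import matplotlib.pyplot as plt

def plot_loss(history):
    epochs = range(1, len(history.history["loss"]) + 1)
    plt.plot(epochs, history.history["loss"], label="training loss")
    plt.plot(epochs, history.history["val_loss"], label="validation loss")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()
```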
Usually when a model overfits, the validation loss goes up while the training loss keeps going down from the point of overfitting. In my case, however, the training loss still goes down, but the validation loss stays at the same level. Hence the validation accuracy also stays at the same level, while the training accuracy goes up. I am trying to reconstruct a 2D image from a 3D volume using a UNet. The behavior is the same when I try to reconstruct a 3D volume from a 2D image, but with higher loss and lower accuracy. Can someone explain this curve, i.e., why the validation loss is not going down beyond the point of overfitting?
The trends show that your model is overfitting. Ways to overcome overfitting include (see the sketch after this list):
Use data augmentation
Use more data
Use Dropout
Use regularization
Try lowering your learning rate!
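A rough sketch of how several of these remedies look in Keras, with placeholder input shape, layer sizes, and hyperparameters rather than values tuned for any particular dataset:

```python
# Hypothetical sketch combining data augmentation, L2 regularization,
# Dropout, and a lower learning rate in a small Keras CNN.
from tensorflow.keras import layers, models, optimizers, regularizers

model = models.Sequential([
    # Data augmentation: random flips/rotations, applied only during training.
    layers.RandomFlip("horizontal", input_shape=(128, 128, 3)),
    layers.RandomRotation(0.1),
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),  # L2 regularization
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),                                       # Dropout
    layers.Dense(5, activation="softmax"),                     # placeholder class count
])

# A lower learning rate ("slowing down" the optimizer).
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```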
I am using a CNN to classify images into 5 classes. The size of my dataset is around 370K images. I am using the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. Surprisingly, I am getting an improvement in validation accuracy over the epochs, but the validation loss is constantly growing.
I am assuming that the model is becoming less and less sure about the validation set, but the accuracy is higher because the softmax output for the predicted class is still above the threshold value.
What can be the reason behind this? Any help in this regard would be highly appreciated.
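A small numerical sketch of how that can happen (made-up probabilities, not outputs from the model in question): accuracy only looks at which class gets the highest softmax output, while cross-entropy loss looks at how much probability lands on the true class, so the two can rise together.

```python
# Hypothetical illustration: accuracy can increase while the cross-entropy
# loss also increases, because accuracy depends only on the argmax.
import numpy as np

def batch_metrics(probs, labels):
    """probs: (n, n_classes) softmax outputs, labels: (n,) true class indices."""
    acc = np.mean(np.argmax(probs, axis=1) == labels)
    loss = np.mean(-np.log(probs[np.arange(len(labels)), labels]))
    return acc, loss

labels = np.array([0, 0, 0])

# Earlier epoch: one sample is misclassified, the correct ones are confident.
early = np.array([[0.90, 0.10], [0.90, 0.10], [0.40, 0.60]])
# Later epoch: every sample is correct by argmax, but only barely.
late = np.array([[0.51, 0.49], [0.51, 0.49], [0.51, 0.49]])

print(batch_metrics(early, labels))  # accuracy ~0.67, loss ~0.38
print(batch_metrics(late, labels))   # accuracy  1.00, loss ~0.67
```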
I think this is a case of overfitting, as the previous comments pointed out. Overfitting can be the result of high variance in the dataset. As you trained the CNN, the training error kept decreasing, which produces an increasingly complex model. More complex models tend to overfit, and this shows up when the validation error starts to increase.
The Adam optimizer takes care of the learning rate, the exponential decay, and the optimization of the model in general, but it won't take any action against overfitting. If you want to reduce the overfitting, you will need to add a regularization technique that penalizes large values of the weights in the model.
You can read more details about this in the deep learning book: http://www.deeplearningbook.org/contents/regularization.html
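As a concrete illustration of "penalizing large weights", here is a minimal sketch of an L2 (weight-decay) penalty added to a data loss; the weight matrices and the loss value are made-up placeholders:

```python
# Hypothetical sketch: total_loss = data_loss + lambda * sum(w^2).
# Large weights inflate the penalty, so the optimizer is pushed toward
# smaller, simpler weight configurations.
import numpy as np

def l2_penalty(weights, lam=1e-4):
    """Sum of squared weights across all layers, scaled by the strength lam."""
    return lam * sum(np.sum(w ** 2) for w in weights)

# Toy weight matrices standing in for a model's parameters.
weights = [np.array([[0.5, -1.2], [2.0, 0.1]]), np.array([0.3, -0.7])]

data_loss = 0.42  # placeholder cross-entropy value
total_loss = data_loss + l2_penalty(weights)
print(total_loss)
```

In Keras the same effect can be obtained per layer, e.g. with `kernel_regularizer=regularizers.l2(1e-4)` on a `Dense` or `Conv2D` layer.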
I am detecting objects using a CNN and Keras.
When I train/test the model, it outputs acc and loss.
I am using an MSE loss function, so I understand what the loss means, but what is accuracy and how is it calculated? I have a loss of 4000 and an accuracy of 80%, which makes no sense; the model does not detect objects correctly 80% of the time. What does the accuracy mean, and how is it calculated?
Thanks for the help.
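For context on what Keras-style "accuracy" usually measures: for classification outputs it is the fraction of predictions whose argmax matches the label, which has nothing to do with the magnitude of an MSE loss on regression-style targets (such as bounding-box coordinates). A minimal sketch with made-up arrays:

```python
# Hypothetical sketch of categorical accuracy: the fraction of samples whose
# predicted class (argmax) matches the true class. For regression targets
# such as box coordinates, this metric says little about detection quality.
import numpy as np

def categorical_accuracy(y_true, y_pred):
    """y_true, y_pred: (n, n_classes) arrays (one-hot labels, model outputs)."""
    return np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1))

y_true = np.array([[1, 0], [0, 1], [1, 0]])
y_pred = np.array([[0.7, 0.3], [0.2, 0.8], [0.4, 0.6]])  # last prediction wrong

print(categorical_accuracy(y_true, y_pred))  # 0.666...
```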