I am detecting objects using a CNN in Keras.
When I test/train the model it outputs acc and loss.
I am using the MSE loss function, so I understand what the loss means, but what is accuracy and how is it calculated? I get a loss of 4000 and an accuracy of 80%, which makes no sense to me: the model does not detect objects correctly 80% of the time. So what does that number mean, and how is it calculated?
Thanks for the help.
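For what it's worth, a minimal sketch (made-up layer sizes and random data, not the asker's actual detector) of the point behind the question: what Keras prints next to the loss each epoch is simply whatever metrics were passed to compile(), so for an MSE regression of box coordinates a distance-style metric such as MAE is usually more meaningful than "accuracy".

```python
# Sketch only: hypothetical toy model, random data.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(128,)),
    keras.layers.Dense(4),  # e.g. 4 bounding-box coordinates
])

# The values printed each epoch are the loss plus the metrics listed here;
# for a coordinate regression, MAE is easier to interpret than "accuracy".
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.rand(256, 128).astype("float32")
y = np.random.rand(256, 4).astype("float32")
model.fit(x, y, epochs=2, verbose=1)

# evaluate() returns [loss, mae] in the order defined at compile time.
print(model.evaluate(x, y, verbose=0))
```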
Related
I am doing research in NLP and deep learning with mental-health text data. While training my CNN model, the validation loss is lower than the training loss, but only slightly. Neither the validation nor the training loss drops much; both are stuck at around 75-80%, while the accuracy reached is also about 76%. What should I do? What is the correct interpretation of this?
I am trying to train an LSTM model, and I am also plotting the graphs of train/test accuracy and train/test loss, as you can see from the images I attached.
What concerns me is that the plots are noisy. From my understanding (please correct me if I am wrong), noise means that I am overfitting my model and it isn't learning. Am I right?
Thank you.
"Noise" doesn't mean overfit. When your validation loss is much higher than your training loss or when your validation accuracy is much lower than your training accuracy, we call that overfitting.
But for your situation, your training & validation accuracy is similar, your training & validation loss are similar too. Therefore, Your model is not overfitting.
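If it helps, here is a rough sketch of that comparison (it assumes a Keras History object called `history` returned by model.fit(..., validation_data=...) and the TF2-style keys "accuracy"/"val_accuracy"; older Keras versions use "acc"/"val_acc"). Overfitting shows up as a widening gap between the train and validation curves, not as epoch-to-epoch wiggle.

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot train vs. validation loss and accuracy side by side."""
    epochs = range(1, len(history.history["loss"]) + 1)

    plt.figure(figsize=(10, 4))

    # A growing gap between these two curves would indicate overfitting.
    plt.subplot(1, 2, 1)
    plt.plot(epochs, history.history["loss"], label="train loss")
    plt.plot(epochs, history.history["val_loss"], label="val loss")
    plt.xlabel("epoch")
    plt.legend()

    plt.subplot(1, 2, 2)
    plt.plot(epochs, history.history["accuracy"], label="train acc")
    plt.plot(epochs, history.history["val_accuracy"], label="val acc")
    plt.xlabel("epoch")
    plt.legend()

    plt.show()

# Usage: plot_history(history)
```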
I am training a multi-label image classifier using transfer learning in Keras. During training I asked the model to report the loss and acc in each epoch. In the very last epoch, it says the training acc is ~86% and the val acc is pretty much the same. However, when I take the trained model and test it on the training data using scikit-learn's metrics, they say the accuracy is 97%.
I am not sure if I am doing something wrong or whether accuracy is calculated differently in Keras and scikit-learn. Please help.
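One likely source of the mismatch, sketched below with hypothetical names (`model`, `x_train`, `y_train`, where `y_train` is a 0/1 indicator matrix) and assuming a sigmoid multi-label head compiled with binary_crossentropy: Keras' binary accuracy averages correctness over every individual label, while scikit-learn's accuracy_score on an indicator matrix is exact-match (subset) accuracy, where every label of a sample must be right. The two numbers are computed differently and will usually not coincide even on the same data.

```python
import numpy as np
from sklearn.metrics import accuracy_score

probs = model.predict(x_train)          # per-label probabilities
preds = (probs >= 0.5).astype(int)      # 0.5 is Keras' default binary-accuracy threshold

# Roughly what Keras' binary accuracy measures: fraction of correct labels.
per_label_acc = (preds == y_train).mean()

# What sklearn computes on an indicator matrix: fraction of samples with
# every label correct (subset accuracy).
exact_match_acc = accuracy_score(y_train, preds)

print(per_label_acc, exact_match_acc)
```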
I have developed parameters for a Keras model and looked at the loss and accuracy over epochs, as shown below. What does it mean when the loss and accuracy curves for test and training data overlap? What else is apparent from these graphs? This is a supervised classification model.
I have built a Keras model, and while training, the categorical accuracy metric reaches 0.78.
However, after training, when I predict on the same training data with the following code:
predicted_labels = model.predict(input_data)
acc = sklearn.metrics.accuracy_score(true_labels, predicted_labels)
the accuracy is 0.39.
To summarize, I don't get the same accuracy result from Keras and scikit-learn.
There are many ways of measuring accuracy, and sklearn might not be using the same one as Keras.
You can take your compiled model and call lossAndMetrics = model.evaluate(input_data, true_labels) to see the loss and metrics, which are guaranteed to be the same ones you used for training.
PS: it's not rare to get a bad result on test/validation data if your model is overfitting.
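A minimal sketch of the usual fix, assuming a softmax multi-class head and one-hot `true_labels` (the variable names are taken from the question): model.predict() returns class probabilities, so both sides need to be converted to class indices before calling accuracy_score; otherwise the two "accuracies" are not comparing the same thing.

```python
import numpy as np
from sklearn.metrics import accuracy_score

probs = model.predict(input_data)              # shape (n_samples, n_classes)
pred_classes = np.argmax(probs, axis=1)        # predicted class index per sample
true_classes = np.argmax(true_labels, axis=1)  # one-hot -> class index

# This mirrors what Keras' categorical accuracy computes and should be
# close to the 0.78 reported during training.
print(accuracy_score(true_classes, pred_classes))

# Sanity check with exactly the loss/metrics used at compile time.
loss_and_metrics = model.evaluate(input_data, true_labels, verbose=0)
print(loss_and_metrics)
```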