Why are val_loss and val_acc not displaying? - python

When training starts, only loss and acc are displayed in the run window; val_loss and val_acc are missing. These values are only shown at the end of the epoch.
model.add(Flatten())
model.add(Dense(512, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(10, activation="softmax"))
model.compile(
    loss='categorical_crossentropy',
    optimizer="adam",
    metrics=['accuracy']
)
model.fit(
    x_train,
    y_train,
    batch_size=32,
    epochs=1,
    validation_data=(x_test, y_test),
    shuffle=True
)
this is how the training starts:
Train on 50000 samples, validate on 10000 samples
Epoch 1/1
32/50000 [..............................] - ETA: 34:53 - loss: 2.3528 - acc: 0.0938
64/50000 [..............................] - ETA: 18:56 - loss: 2.3131 - acc: 0.0938
96/50000 [..............................] - ETA: 13:45 - loss: 2.3398 - acc: 0.1146
and this is when it finishes
49984/50000 [============================>.] - ETA: 0s - loss: 1.5317 - acc: 0.4377
50000/50000 [==============================] - 231s 5ms/step - loss: 1.5317 - acc: 0.4378 - val_loss: 1.1503 - val_acc: 0.5951
I want to see val_acc and val_loss on each line.

Validation loss and accuracy are computed on epoch end, not on batch end. If you want to compute those values after each batch, you would have to implement your own callback with an on_batch_end() method and call self.model.evaluate() on the validation set. See https://keras.io/callbacks/.
But computing the validation loss and accuracy after each batch is going to slow down your training a lot and doesn't bring much in terms of evaluating the network's performance.
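A minimal sketch of such a callback, assuming the standalone keras package used in the question and simply reusing x_test/y_test as the validation set (expect a large slowdown):
from keras.callbacks import Callback

class ValidationPerBatch(Callback):
    def on_batch_end(self, batch, logs=None):
        # evaluate the whole validation set after every training batch (slow!)
        val_loss, val_acc = self.model.evaluate(x_test, y_test, verbose=0)
        print(' - val_loss: {:.4f} - val_acc: {:.4f}'.format(val_loss, val_acc))

model.fit(x_train, y_train, batch_size=32, epochs=1,
          validation_data=(x_test, y_test), shuffle=True,
          callbacks=[ValidationPerBatch()])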

It doesn't make much sense to compute the validation metrics at each iteration, because it would make your training process much slower and your model doesn't change that much from iteration to iteration. On the other hand, it makes much more sense to compute these metrics at the end of each epoch.
In your case you have 50000 samples in the training set, 10000 samples in the validation set, and a batch size of 32. If you were to compute val_loss and val_acc after each iteration, then for every batch of 32 training samples that updates your weights you would also run 313 (i.e. ceil(10000/32)) validation batches of 32 samples. Since each epoch consists of 1563 iterations (i.e. ceil(50000/32)), you'd have to perform 489219 (i.e. 313*1563) extra batch predictions just to evaluate the model. This would make your training orders of magnitude slower!
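Spelled out with the numbers from the question:
train_batches = 50000 // 32 + 1     # 1563 weight updates per epoch
val_batches = 10000 // 32 + 1       # 313 validation batches per evaluation
print(train_batches * val_batches)  # 489219 extra batch predictions per epoch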
If you still want to compute the validation metrics at the end of each iteration (not recommended for the reasons stated above), you could simply shorten your "epoch" so that your model sees just 1 batch per epoch:
batch_size = 32
model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=len(x_train) // batch_size + 1,  # 1563 in your case
    steps_per_epoch=1,
    validation_data=(x_test, y_test),
    shuffle=True
)
This isn't exactly equivalent, because the samples will be drawn at random, with replacement, from the data, but it is the easiest workaround you can get...

You should apply one-hot encoding (the to_categorical method) to your y_train and y_test variables.
I faced this problem during the model-fitting part of a text generation NLP problem.
When I applied the to_categorical method to both my y_test and y_validation variables, the validation accuracy was displayed.
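A minimal sketch of that suggestion, assuming integer class labels and the 10-class softmax from the question above:
from keras.utils import to_categorical

y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)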

If the batch size is larger than the number of samples in your validation set, this can happen. Simply lowering the batch size solved my issue.

Related

Keras Xception trained from scratch gives ~100% accuracy in the history but only predicts 1 when evaluating, giving 50% accuracy

I'm training an Xception model in Keras without pre-trained weights for a binary classification problem, and I get very strange behavior. The history plot shows that the training accuracy increases until it touches 100%, while the validation accuracy always stays around 50%, so it looks like overfitting. But that's not the case, since I checked and the model always predicts (close to) 1 even on the training set.
What could be the reason for this behaviour?
Here is the code I'm using to train. x_train_xception has already been preprocessed by the keras.applications.xception.preprocess_input function.
I'm using the same code, except for the model creation, to train a pretrained Xception model, and that works fine.
inLayerX = Input((512, 512, 4))
xceptionModel = keras.applications.Xception(include_top = True, weights=None, input_tensor=inLayerX, classes=1, classifier_activation= 'sigmoid')
xceptionModel.compile(loss= 'binary_crossentropy', metrics = ['accuracy'])
history = xceptionModel.fit(x_train_xception, y_train, batch_size= batch_size, epochs= epochs, validation_data=(x_val_xception, y_val))
_, accTest = xceptionModel.evaluate(x_test_xception, y_test)
_, accVal = xceptionModel.evaluate(x_val_xception, y_val)
_, accTrain = xceptionModel.evaluate(x_train_xception, y_train)
print("Training accuracy {:.2%}".format(accTrain))
print("Validation accuracy {:.2%}".format(accVal))
print("Test accuracy {:.2%}".format(accTest))
which output:
2/2 [==============================] - 6s 2s/step - loss: 1.2063 - accuracy: 0.5000
1/1 [==============================] - 4s 4s/step - loss: 1.2960 - accuracy: 0.4667
4/4 [==============================] - 5s 1s/step - loss: 1.2025 - accuracy: 0.5083
Training accuracy 50.83%
Validation accuracy 46.67%
Test accuracy 50.00%
The validation and test accuracy are as expected, but what's really bugging me is the training accuracy, which I would expect to be close to 100% based on the history.
[Plots: Model Accuracy and Model Loss]

How can I increase the accuracy of my LSTM model (regression)? [duplicate]

I am doing a time series analysis using Tensorflow/ Keras in Python.
The overall LSTM model looks like,
model = keras.models.Sequential()
model.add(keras.layers.LSTM(25, input_shape = (1,1), activation = 'relu', dropout = 0.2, return_sequences = False))
model.add(keras.layers.Dense(1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['acc'])
tensorboard = keras.callbacks.TensorBoard(log_dir="logs/{}".format(time()))
es = keras.callbacks.EarlyStopping(monitor='val_acc', mode='max', verbose=1, patience=50)
mc = keras.callbacks.ModelCheckpoint('/home/sukriti/best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
history = model.fit(trainX_3d, trainY_1d, epochs=50, batch_size=10, verbose=2, validation_data = (testX_3d, testY_1d), callbacks=[mc, es, tensorboard])
I am having the following outcome,
Train on 14015 samples, validate on 3503 samples
Epoch 1/50
- 3s - loss: 0.0222 - acc: 7.1352e-05 - val_loss: 0.0064 - val_acc: 0.0000e+00
Epoch 2/50
- 2s - loss: 0.0120 - acc: 7.1352e-05 - val_loss: 0.0054 - val_acc: 0.0000e+00
Epoch 3/50
- 2s - loss: 0.0108 - acc: 7.1352e-05 - val_loss: 0.0047 - val_acc: 0.0000e+00
Now the val_acc remains unchanged. Is this normal?
What does it signify?
As signified by loss = 'mean_squared_error', you are in a regression setting, where accuracy is meaningless (it is meaningful only in classification problems).
Unfortunately, Keras will not "protect" you in such a case, insisting on computing and reporting back an "accuracy", despite the fact that it is meaningless and inappropriate for your problem - see my answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)?
You should simply remove metrics=['acc'] from your model compilation, and don't bother - in regression settings, MSE itself can (and usually does) serve also as the performance metric.
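A minimal sketch of that fix, applied to the compile and callback lines from the question (switching EarlyStopping to monitor val_loss is my own addition, since val_acc no longer exists once the metric is removed):
model.compile(optimizer='adam', loss='mean_squared_error')  # no accuracy metric
es = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50)
mc = keras.callbacks.ModelCheckpoint('/home/sukriti/best_model.h5', monitor='val_loss', mode='min', save_best_only=True)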
In my case I had validation accuracy of 0.0000e+00 throughout training (using Keras and CNTK-GPU backend) when my batch size was 64 but there were only 120 samples in my validation set (divided into three classes). After I changed the batch size to 60, I got normal accuracy values.
It will not improve by changing the batch size or the metrics. I had the same problem, but when I shuffled my training and validation data sets the 0.0000e+00 went away.
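A minimal sketch of that idea, assuming scikit-learn is available and the shuffle happens before the data is split:
from sklearn.utils import shuffle

trainX_3d, trainY_1d = shuffle(trainX_3d, trainY_1d, random_state=0)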

Big difference between val-acc and prediction accuracy in Keras Neural Network

I have a dataset that I used to build an NN model in Keras. I took 2000 rows from that dataset to serve as validation data; those 2000 rows are what I feed to the .predict function.
I wrote the code for the Keras NN and for now it works well, but I noticed something that seems very strange to me. It gives me very good accuracy of more than 83% and a loss around 0.12, but when I make predictions with unseen data (those 2000 rows), the predictions are only correct for about 65% of them.
When I add a Dropout layer, it only decreases accuracy.
Then I added EarlyStopping, and it gave me accuracy around 86% and loss around 0.10, but still, when I make predictions with the unseen data, I get a final prediction accuracy of 67%.
Does this mean that the model makes correct predictions in 87% of situations? My logic is: if I feed 100 samples to my .predict function, the program should make good predictions for about 87 of them, or somewhere in that range (let's say more than 80). I have tried feeding 100, 500, 1000, 1500 and 2000 samples to my .predict function, and it always makes correct predictions for 65-68% of the samples.
Why is that? Am I doing something wrong?
I have tried playing with the number of layers, the number of nodes, different activation functions and different optimizers, but it only changes the results by 1-2%.
My dataset looks like this:
DataFrame shape (59249, 33)
x_train shape (47399, 32)
y_train shape (47399,)
x_test shape (11850, 32)
y_test shape (11850,)
testing_features shape (1000, 32)
This is my NN model:
model = Sequential()
model.add(Dense(64, input_dim = x_train.shape[1], activation = 'relu')) # input layer requires input_dim param
model.add(Dropout(0.2))
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation='sigmoid')) # sigmoid instead of relu for final probability between 0 and 1
# compile the model, adam gradient descent (optimized)
model.compile(loss="binary_crossentropy", optimizer= "adam", metrics=['accuracy'])
# call the function to fit the data (training the network)
es = EarlyStopping(monitor='val_loss', min_delta=0.0, patience=1, verbose=0, mode='auto')
model.fit(x_train, y_train, epochs = 15, shuffle = True, batch_size=32, validation_data=(x_test, y_test), verbose=2, callbacks=[es])
scores = model.evaluate(x_test, y_test)
print(model.metrics_names[0], round(scores[0]*100,2), model.metrics_names[1], round(scores[1]*100,2))
These are the results:
Train on 47399 samples, validate on 11850 samples
Epoch 1/15
- 25s - loss: 0.3648 - acc: 0.8451 - val_loss: 0.2825 - val_acc: 0.8756
Epoch 2/15
- 9s - loss: 0.2949 - acc: 0.8689 - val_loss: 0.2566 - val_acc: 0.8797
Epoch 3/15
- 9s - loss: 0.2741 - acc: 0.8773 - val_loss: 0.2468 - val_acc: 0.8849
Epoch 4/15
- 9s - loss: 0.2626 - acc: 0.8816 - val_loss: 0.2416 - val_acc: 0.8845
Epoch 5/15
- 10s - loss: 0.2566 - acc: 0.8827 - val_loss: 0.2401 - val_acc: 0.8867
Epoch 6/15
- 8s - loss: 0.2503 - acc: 0.8858 - val_loss: 0.2364 - val_acc: 0.8893
Epoch 7/15
- 9s - loss: 0.2480 - acc: 0.8873 - val_loss: 0.2321 - val_acc: 0.8895
Epoch 8/15
- 9s - loss: 0.2450 - acc: 0.8886 - val_loss: 0.2357 - val_acc: 0.8888
11850/11850 [==============================] - 2s 173us/step
loss 23.57 acc 88.88
And this is for prediction:
# testing_features are 2000 rows that I extracted from the dataset (these samples were not used in training; this is a separate dataset that is imported)
prediction = model.predict(testing_features, batch_size=32)
res = []
for p in prediction:
    res.append(p[0].round(0))
# Accuracy with sklearn - also much lower
acc_score = accuracy_score(testing_results, res)
print("Sklearn acc", acc_score)
result_df = pd.DataFrame({"label": testing_results,
                          "prediction": res})
result_df["prediction"] = result_df["prediction"].astype(int)
s = 0
for x, y in zip(result_df["label"], result_df["prediction"]):
    if x == y:
        s += 1
print(s, "/", len(result_df))
acc = s * 100 / len(result_df)
print('TOTAL ACC:', round(acc, 2))
The problem is... now I get 52% accuracy with sklearn and 52% with my own count.
Why do I get such low accuracy here, when the validation accuracy says it is much higher?
The training data you posted gives high validation accuracy, so I'm a bit confused as to where you get that 65% from, but in general, when your model performs much better on training data than on unseen data, that means you're overfitting. This is a big and recurring problem in machine learning, and there is no method guaranteed to prevent it, but there are a couple of things you can try:
regularizing the weights of your network, e.g. using l2 regularization (a short sketch follows this list)
using stochastic regularization techniques such as drop-out during training
early stopping
reducing model complexity (but you say you've already tried this)
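A minimal sketch of the first two suggestions applied to a model like the one in the question (the 0.01 regularization factor and the dropout rates are assumed values, not tuned ones):
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2

model = Sequential()
model.add(Dense(64, input_dim=x_train.shape[1], activation='relu',
                kernel_regularizer=l2(0.01)))  # l2 weight regularization
model.add(Dropout(0.2))                        # stochastic regularization
model.add(Dense(32, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dropout(0.2))
model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])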
I will list the problems/recommendations that I see with your model.
What are you trying to predict? You are using a sigmoid activation function in the last layer, which suggests binary classification, but in your loss function you used mse, which seems strange. You can try binary_crossentropy instead of the mse loss function for your model.
Your model seems to suffer from overfitting, so you can increase the Dropout probability, add new Dropout layers between the other hidden layers, or remove one of the hidden layers, because it seems your model is too complex.
You can make the layers narrower, e.g. 64 -> 32 -> 16 -> 1, or try different NN architectures.
Try the adam optimizer instead of sgd.
If you have 57849 samples, you can use 47000 of them for training+validation and the rest will be your test set.
Don't use the same set for your evaluation and validation. First split your data into train and test sets. Then, when you are fitting your model, pass a validation_split ratio and it will automatically carve a validation set out of your training set.
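A minimal sketch of that last point, reusing the fit call from the question (the 0.2 split ratio is an assumed value):
model.fit(x_train, y_train, epochs=15, shuffle=True, batch_size=32,
          validation_split=0.2, verbose=2, callbacks=[es])
scores = model.evaluate(x_test, y_test)  # x_test/y_test stay unseen until this point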

tf.keras loss becomes NaN

I'm programming a neural network in tf.keras, with 3 layers. My dataset is the MNIST dataset. I decreased the number of examples in the dataset, so the runtime is lower. This is my code:
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import pandas as pd
!git clone https://github.com/DanorRon/data
%cd data
!ls
batch_size = 32
epochs = 10
alpha = 0.0001
lambda_ = 0
h1 = 50
train = pd.read_csv('/content/first-repository/mnist_train.csv.zip')
test = pd.read_csv('/content/first-repository/mnist_test.csv.zip')
train = train.loc['1':'5000', :]
test = test.loc['1':'2000', :]
train = train.sample(frac=1).reset_index(drop=True)
test = test.sample(frac=1).reset_index(drop=True)
x_train = train.loc[:, '1x1':'28x28']
y_train = train.loc[:, 'label']
x_test = test.loc[:, '1x1':'28x28']
y_test = test.loc[:, 'label']
x_train = x_train.values
y_train = y_train.values
x_test = x_test.values
y_test = y_test.values
nb_classes = 10
targets = y_train.reshape(-1)
y_train_onehot = np.eye(nb_classes)[targets]
nb_classes = 10
targets = y_test.reshape(-1)
y_test_onehot = np.eye(nb_classes)[targets]
model = tf.keras.Sequential()
model.add(layers.Dense(784, input_shape=(784,)))
model.add(layers.Dense(h1, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(lambda_)))
model.add(layers.Dense(10, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(lambda_)))
model.compile(optimizer=tf.train.GradientDescentOptimizer(alpha),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train_onehot, epochs=epochs, batch_size=batch_size)
Whenever I run it, one of 3 things happens:
The loss decreases and the accuracy increases for a few epochs, until the loss becomes NaN for no apparent reason and the accuracy plummets.
The loss and accuracy stay the same for each epoch. Usually the loss is 2.3025 and the accuracy is 0.0986.
The loss starts at NaN(and stays that way), while the accuracy stays low.
Most of the time, the model does one of these things, but sometimes it does something random. It seems like the type of erratic behavior that occurs is completely random. I have no idea what the problem is. How do I fix this problem?
Edit: Sometimes, the loss decreases, but the accuracy stays the same. Also, sometimes the loss decreases and the accuracy increases, then after a while the accuracy decreases while the loss still decreases. Or, the loss decreases and the accuracy increases, then it switches and the loss goes up fast while the accuracy plummets, eventually ending with loss: 2.3025 acc: 0.0986.
Edit 2: This is an example of something that sometimes happens:
Epoch 1/100
49999/49999 [==============================] - 5s 92us/sample - loss: 1.8548 - acc: 0.2390
Epoch 2/100
49999/49999 [==============================] - 5s 104us/sample - loss: 0.6894 - acc: 0.8050
Epoch 3/100
49999/49999 [==============================] - 4s 90us/sample - loss: 0.4317 - acc: 0.8821
Epoch 4/100
49999/49999 [==============================] - 5s 104us/sample - loss: 2.2178 - acc: 0.1345
Epoch 5/100
49999/49999 [==============================] - 5s 90us/sample - loss: 2.3025 - acc: 0.0986
Epoch 6/100
49999/49999 [==============================] - 4s 90us/sample - loss: 2.3025 - acc: 0.0986
Epoch 7/100
49999/49999 [==============================] - 4s 89us/sample - loss: 2.3025 - acc: 0.0986
Edit 3: I changed the loss to mean squared error and the network works well now. Is there a way to keep it in cross entropy without it converging to a local minimum?
I changed the loss to mean squared error and the network works well now
MSE is not the appropriate loss function for such classification problems; you should certainly stick to loss = 'categorical_crossentropy'.
Most probably, the issue is due to your MNIST data not being normalized; you should normalize your final variables as
x_train = x_train.values/255
x_test = x_test.values/255
Not normalizing input data is a known cause of exploding gradient problems, which is probably what is happening here.
Other advice: set activation='relu' for your first dense layer, and get rid of both the regularizer & initializer arguments from all layers (the default glorot_uniform is actually a better initializer, while regularization here may actually be harmful for the performance).
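A minimal sketch of that advice applied to the question's model (layer sizes and the sigmoid output are kept from the question; only the relu on the first Dense layer and the dropped regularizer arguments are new):
model = tf.keras.Sequential()
model.add(layers.Dense(784, input_shape=(784,), activation='relu'))
model.add(layers.Dense(h1, activation='relu'))
model.add(layers.Dense(10, activation='sigmoid'))
model.compile(optimizer=tf.train.GradientDescentOptimizer(alpha),
              loss='categorical_crossentropy',
              metrics=['accuracy'])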
As a general advice, try not to reinvent the wheel - start with a Keras example using the built-in MNIST data...
The frustration you're feeling towards the seemingly random output of your code is understandable and correctly identified. Every time the model begins training, it randomly initializes the weights. Depending on this initialization, you see one of your three output scenarios.
The issue is most likely due to vanishing gradients. This is a phenomenon that occurs when backpropagation multiplies very small weights by small numbers, producing vanishingly small values. The solution is to add a small jitter (1e-10) to each of your gradients (from within the cost function) so that they never reach zero.
There are tons of more detailed blogs about vanishing gradients online, and for an implementation example check out line 217 of this TensorFlow Network.

Tensorflow Keras LSTM not training - affected by number_of_epochs, optimizer adam

I have 2 code snippets. One of them trains a model while the other one doesn't. I don't want to raise an issue on GitHub without getting to the bottom of this, and it already wasted a day of mine waiting for the incorrect model to train.
This is the model, which is correct. Running tensorflow 1.10.1.
model = Sequential()
# I truncate the string at 20 characters; alphabet_listset is a sorted list of the set [A-Za-z0-9-_], which has len = 64
model.add(LSTM(512, return_sequences=True, input_shape=(20, len(alphabet_listset)), dropout=0.2, stateful=False))
model.add(LSTM(512, return_sequences=False, dropout=0.2, stateful=False))
model.add(Dense(2, activation="softmax"))
model.compile(optimizer=adam, loss='categorical_crossentropy',
              metrics=['accuracy'])  # adam here is at learning rate 1e-3
model.summary()
To create X_train and Y_train I use train_test_split.
The way I convert the strings to one-hot vectors (even though there is a function for one-hot vectors for LSTMs now; if you add that, it would really help) is:
def string_vectorizer(strng, alphabet, max_str_len=20):
    vector = [[0 if char != letter else 1 for char in alphabet]
              for letter in strng[0:max_str_len]]
    while len(vector) != max_str_len:
        vector = [*vector, [0 for char in alphabet]]
    return np.array(vector)
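For context, a hypothetical usage of that helper, showing how the (20, 64) input shape is produced (train_strings is a made-up variable name, not from the question):
import string

alphabet_listset = sorted(set(string.ascii_letters + string.digits + '-_'))  # len = 64
X_train = np.array([string_vectorizer(s, alphabet_listset) for s in train_strings])  # shape (n, 20, 64)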
The parts I describe as correct are indeed correct, as this is not the first time I have trained this model and validated it. I need to update my models every month, and while testing my architecture by running multiple models I came across this anomaly.
Here is the incorrect code
model.fit(X_train, to_categorical(Y_train, 2), epochs=1000,
          validation_data=(X_test, to_categorical(Y_test, 2)),
          verbose=2, shuffle=True)
loss, accuracy = model.evaluate(X_test, to_categorical(Y_test, 2))
The output of this incorrect snippet is the same as the correct snippet's log, except that the accuracy remains at 0.5454 for 12 epochs and the loss does not decrease. My sample data is split 50k correct to 60k incorrect labels. So if the model just predicts 1 for all the 60k incorrect labels, the accuracy would be 60k / (60k + 50k) => 0.54.
Here is the correct code; the only difference is the value of epochs.
expected_acc_eth, expected_loss_eth = 0.83, 0.40
while True:
    model.fit(X_train, to_categorical(Y_train, 2), epochs=1,
              validation_data=(X_test, to_categorical(Y_test, 2)),
              verbose=2, shuffle=True)
    loss, accuracy = model.evaluate(X_test, to_categorical(Y_test, 2))
    if (accuracy > expected_acc_eth) & (loss < expected_loss_eth):
        break
Output of this correct code
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1414s - loss: 0.6847 - acc: 0.5578 - val_loss: 0.6698 - val_acc: 0.5961
11000/11000 [==============================] - 36s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1450s - loss: 0.6777 - acc: 0.5764 - val_loss: 0.6707 - val_acc: 0.5886
11000/11000 [==============================] - 36s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1425s - loss: 0.6729 - acc: 0.5862 - val_loss: 0.6643 - val_acc: 0.6030
11000/11000 [==============================] - 37s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
- 1403s - loss: 0.6681 - acc: 0.5948 - val_loss: 0.6633 - val_acc: 0.6092
11000/11000 [==============================] - 35s 3ms/step
Train on 99000 samples, validate on 11000 samples
Epoch 1/1
I have seen this Stack Overflow post which states that early stopping affects the way models learn, but they go off topic with the steps-per-epoch theory. I tried setting batch_size, but that doesn't help, or I couldn't do it correctly since it is inversely related to the learning rate of adam and my scaling must have been off. I understand deep nets and machine learning to some extent, but this is too big a difference between the outputs.
I hope this saves others who face similar bugs from wasting as much time as I did!
Can someone please elaborate on this? Any help is much appreciated!
From our discussion in the comments, it sounds like the issue arises in the implementation of the Adam optimizer failing to update anything when model.fit() is called with epochs > 1.
I would be interested in seeing why this is, but a (slower) working solution for now is to use optimizer=rmsprop instead of optimizer=adam in your call to model.compile().
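A minimal sketch of that workaround (the string identifier is an assumption on my part; an explicit keras.optimizers.RMSprop instance with a chosen learning rate works as well):
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])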
