I am using Keras for the first time on a regression problem. I have set up an early stopping callback, monitoring val_loss (which is mean squared error) with patience=3. However, training stops even though val_loss has been decreasing for the last few epochs. Either there is a bug in my code, or I fail to understand the true meaning of my callback. Can anyone see what is going on? I provide the training progress and the model-building code below.
As you see below, the training stopped at epoch 8, but val_loss has been decreasing since epoch 6 and I think it should have continued running. There was only one time when val_loss increased (from epoch 5 to 6), and patience is 3.
Epoch 1/100
35849/35849 - 73s - loss: 11317667.0000 - val_loss: 7676812.0000
Epoch 2/100
35849/35849 - 71s - loss: 11095449.0000 - val_loss: 7635795.0000
Epoch 3/100
35849/35849 - 71s - loss: 11039211.0000 - val_loss: 7627178.5000
Epoch 4/100
35849/35849 - 71s - loss: 10997918.0000 - val_loss: 7602583.5000
Epoch 5/100
35849/35849 - 65s - loss: 10955304.0000 - val_loss: 7599179.0000
Epoch 6/100
35849/35849 - 59s - loss: 10914252.0000 - val_loss: 7615204.0000
Epoch 7/100
35849/35849 - 59s - loss: 10871920.0000 - val_loss: 7612452.0000
Epoch 8/100
35849/35849 - 59s - loss: 10827388.0000 - val_loss: 7603128.5000
The model is built as follows:
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping
from keras import initializers

# create model
model = Sequential()
model.add(Dense(len(predictors), input_dim=len(predictors), activation='relu', name='input',
                kernel_initializer=initializers.he_uniform(seed=seed_value)))
model.add(Dense(155, activation='relu', name='hidden1',
                kernel_initializer=initializers.he_uniform(seed=seed_value)))
model.add(Dense(1, activation='linear', name='output',
                kernel_initializer=initializers.he_uniform(seed=seed_value)))
callback = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

# Compile model
model.compile(loss='mean_squared_error', optimizer='adam')

# Fit the model
history = model.fit(X, y, validation_split=0.2, epochs=100,
                    batch_size=50, verbose=2, callbacks=[callback])
After experimenting with some of the hyperparameters, such as the activation functions, I keep having the same problem. It doesn't always stop at epoch 8, though. I also tried changing patience.
Details:
Ubuntu 18.04
Tensorflow 2.6.0
Python 3.8.5
You are misunderstanding how Keras defines an improvement. You are correct that val_loss decreased in epochs 7 and 8 and only increased in epoch 6. What you are missing, though, is that the improvements in epochs 7 and 8 did not beat the current best value, which came from epoch 5 (7599179.0000). The callback waited 3 epochs to see whether anything could beat that best value, NOT whether there would be an improvement relative to the immediately preceding epoch. When the loss in epoch 8 still did not dip below the epoch-5 value, the callback terminated the training.
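For intuition, the bookkeeping is roughly the following (a simplified sketch of the best-value/patience logic, not the actual Keras source, run against your logged val_loss values):

best = float("inf")   # best val_loss seen so far
wait = 0              # epochs since the last new best
patience = 3
val_losses = [7676812.0, 7635795.0, 7627178.5, 7602583.5,
              7599179.0, 7615204.0, 7612452.0, 7603128.5]
for epoch, current in enumerate(val_losses, start=1):
    if current < best:      # an improvement means beating the best so far
        best = current
        wait = 0
    else:                   # epochs 6, 7 and 8 all fail this test
        wait += 1
        if wait >= patience:
            print("Stopping at epoch {}; best val_loss was {}".format(epoch, best))
            break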
Related
I have run a model for 4 epochs using early stopping.
early_stopping = EarlyStopping(monitor='val_loss', mode='min', patience=2, restore_best_weights=True)
history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=4, callbacks=[early_stopping])
Epoch 1/4
812/812 [==============================] - 68s 13ms/sample - loss: 0.6072 - acc: 0.717 - val_loss: 0.554 - val_acc: 0.7826
Epoch 2/4
812/812 [==============================] - 88s 11ms/sample - loss: 0.5650 - acc: 0.807 - val_loss: 0.527 - val_acc: 0.8157
Epoch 3/4
812/812 [==============================] - 88s 11ms/sample - loss: 0.5456 - acc: 0.830 - val_loss: 0.507 - val_acc: 0.8244
Epoch 4/4
812/812 [==============================] - 51s 9ms/sample - loss: 0.658 - acc: 0.833 - val_loss: 0.449 - val_acc: 0.8110
The highest val_acc corresponds to the third epoch and is 0.8244. However, the accuracy_score function evaluates the final model, so it returns the value corresponding to the last epoch's val_acc, which is 0.8110.
yhat = model.predict_classes(testX)
accuracy = accuracy_score(testY, yhat)
Is it possible to specify the epoch when calling predict_classes in order to get the highest accuracy (in this case, the one corresponding to the third epoch)?
It looks like early stopping isn't being triggered because you're only training for 4 epochs and you've set early stopping to trigger when val_loss doesn't decrease over two epochs. If you look at your val_loss for each epoch, you can see it's still decreasing even in the fourth epoch.
So, simply put, your model just runs the full four epochs without early stopping kicking in, which is why it ends up using the weights learned in epoch 4 rather than the best ones in terms of val_acc.
To fix this, set monitor='val_acc' and run for a few more epochs. val_acc only starts to decrease after epoch 3, so early stopping won't trigger until epoch 5 at the earliest.
Alternatively you could set patience=1 so it only checks a single epoch ahead.
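If what you ultimately want is the model from the best epoch, a sketch along the lines of the suggestion above (keeping your variable names; note the monitor key may be 'val_accuracy' rather than 'val_acc' depending on your Keras version) would be:

# Track validation accuracy and restore the best epoch's weights when stopping.
early_stopping = EarlyStopping(monitor='val_acc', mode='max',
                               patience=2, restore_best_weights=True)
history = model.fit(trainX, trainY, validation_data=(testX, testY),
                    epochs=20, callbacks=[early_stopping])
# In some Keras versions the best weights are only restored if early stopping
# actually triggers, so give it enough epochs to fire.
yhat = model.predict_classes(testX)      # now uses the restored best-epoch weights
accuracy = accuracy_score(testY, yhat)   # accuracy_score from sklearn.metrics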
I am running code using Python 3.7.5 with TensorFlow 2.0 for MNIST classification.
I am using EarlyStopping from TensorFlow 2.0 and the callback I have for it is:
callbacks = [
tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience = 3,
min_delta=0.001
)
]
According to EarlyStopping - TensorFlow 2.0 page, the definition of min_delta parameter is as follows:
min_delta: Minimum change in the monitored quantity to qualify as an
improvement, i.e. an absolute change of less than min_delta, will
count as no improvement.
Train on 60000 samples, validate on 10000 samples
Epoch 1/15 60000/60000 [==============================] - 10s
173us/sample - loss: 0.2040 - accuracy: 0.9391 - val_loss: 0.1117 -
val_accuracy: 0.9648
Epoch 2/15 60000/60000 [==============================] - 9s
150us/sample - loss: 0.0845 - accuracy: 0.9736 - val_loss: 0.0801 -
val_accuracy: 0.9748
Epoch 3/15 60000/60000 [==============================] - 9s
151us/sample - loss: 0.0574 - accuracy: 0.9817 - val_loss: 0.0709 -
val_accuracy: 0.9795
Epoch 4/15 60000/60000 [==============================] - 9s
149us/sample - loss: 0.0434 - accuracy: 0.9858 - val_loss: 0.0787 -
val_accuracy: 0.9761
Epoch 5/15 60000/60000 [==============================] - 9s
151us/sample - loss: 0.0331 - accuracy: 0.9893 - val_loss: 0.0644 -
val_accuracy: 0.9808
Epoch 6/15 60000/60000 [==============================] - 9s
150us/sample - loss: 0.0275 - accuracy: 0.9910 - val_loss: 0.0873 -
val_accuracy: 0.9779
Epoch 7/15 60000/60000 [==============================] - 9s
151us/sample - loss: 0.0232 - accuracy: 0.9921 - val_loss: 0.0746 -
val_accuracy: 0.9805
Epoch 8/15 60000/60000 [==============================] - 9s
151us/sample - loss: 0.0188 - accuracy: 0.9936 - val_loss: 0.1088 -
val_accuracy: 0.9748
Now if I look at the last three epochs, viz. epochs 6, 7, and 8, and look at the validation loss ('val_loss'), their values are:
0.0688, 0.0843 and 0.0847.
The differences between the 3 consecutive values are 0.0155 and 0.0004. Isn't the first difference greater than the 'min_delta' defined in the callback?
The code I came up with for EarlyStopping is as follows:
import numpy as np

# list holding the last 'patience = 3' values of 'val_loss'-
pv = [0.0688, 0.0843, 0.0847]

# numpy array of differences between consecutive elements in 'pv'-
differences = np.diff(pv, n=1)
differences
# array([0.0155, 0.0004])

# minimum change required to count as an improvement of the monitored metric-
min_delta = 0.001

# check whether each consecutive difference is greater than the 'min_delta' parameter-
check = differences > min_delta
check
# array([ True, False])

# condition: if none of the differences exceeds 'min_delta',
# training should stop since EarlyStopping is called-
if np.all(check == False):
    print("Stop Training - EarlyStopping is called")
    # stop training
But according to 'val_loss', NOT ALL of the differences over the last 3 epochs are smaller than the 'min_delta' of 0.001. For example, the first difference is greater than 0.001 (0.0843 - 0.0688), while the second difference is less than 0.001 (0.0847 - 0.0843).
Also, according to patience parameter definition of "EarlyStopping":
patience: Number of epochs with no improvement after which training will be stopped.
So, EarlyStopping should be triggered when there is no improvement in 'val_loss' for 3 consecutive epochs, where an absolute change of less than 'min_delta' does not count as an improvement.
Then why is EarlyStopping called?
The code for the model definition and fit() is:
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.sparsity import keras as sparsity
import matplotlib.pyplot as plt
from tensorflow.keras.layers import AveragePooling2D, Conv2D
from tensorflow.keras import models, layers, datasets
from tensorflow.keras.layers import Dense, Flatten, Reshape, Input, InputLayer
from tensorflow.keras.models import Sequential, Model

# Specify the parameters to be used for layer-wise pruning; NO PRUNING is done here:
pruning_params_unpruned = {
    'pruning_schedule': sparsity.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.0,
        begin_step=0, end_step=0, frequency=100)
}

def pruned_nn(pruning_params):
    """
    Define a neural network following the 300-100 architecture for the
    MNIST dataset, wrapped with the provided pruning parameters.

    Input: 'pruning_params' - Python 3 dictionary containing the parameters used for pruning
    Output: the designed and compiled neural network model
    """
    pruned_model = Sequential()
    pruned_model.add(InputLayer(input_shape=(784,)))
    pruned_model.add(Flatten())
    pruned_model.add(sparsity.prune_low_magnitude(
        Dense(units=300, activation='relu', kernel_initializer=tf.initializers.GlorotUniform()),
        **pruning_params))
    # pruned_model.add(Dropout(0.2))
    pruned_model.add(sparsity.prune_low_magnitude(
        Dense(units=100, activation='relu', kernel_initializer=tf.initializers.GlorotUniform()),
        **pruning_params))
    # pruned_model.add(Dropout(0.1))
    pruned_model.add(sparsity.prune_low_magnitude(
        Dense(units=num_classes, activation='softmax'),   # num_classes defined elsewhere (10 for MNIST)
        **pruning_params))
    # Compile the pruned model-
    pruned_model.compile(
        loss=tf.keras.losses.categorical_crossentropy,
        # optimizer='adam',
        optimizer=tf.keras.optimizers.Adam(lr=0.001),
        metrics=['accuracy'])
    return pruned_model
batch_size = 32
epochs = 50
# Instantiate NN-
orig_model = pruned_nn(pruning_params_unpruned)
# Train unpruned Neural Network-
history_orig = orig_model.fit(
x = X_train, y = y_train,
batch_size = batch_size,
epochs = epochs,
verbose = 1,
callbacks = callbacks,
validation_data = (X_test, y_test),
shuffle = True )
The behaviour of the Early Stopping callback is related to:
Metric or Loss to be monitored
min_delta, which is the minimum change between the monitored quantity in the current epoch and the best value seen so far that counts as an improvement.
patience, which is the number of epochs without an improvement (where an improvement must be a change greater than min_delta) before training is stopped.
In your case, the best val_loss is 0.0644, and a later epoch would need a value lower than 0.0634 (best minus min_delta) to be registered as an improvement:
Epoch 6/15 val_loss: 0.0873 | Difference is: + 0.0229
Epoch 7/15 val_loss: 0.0746 | Difference is: + 0.0102
Epoch 8/15 val_loss: 0.1088 | Difference is: + 0.0444
Be aware that the quantities printed in the training log are rounded, so you shouldn't base your reasoning on them; instead, take the exact values from the callbacks or from the training history.
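In code, the comparison is roughly the following (a simplified sketch of the callback's bookkeeping, not the actual TensorFlow source, run against the logged val_loss values):

val_losses = [0.1117, 0.0801, 0.0709, 0.0787, 0.0644, 0.0873, 0.0746, 0.1088]
min_delta = 0.001
patience = 3
best = float("inf")
wait = 0
for epoch, current in enumerate(val_losses, start=1):
    if current < best - min_delta:   # must beat the best so far by at least min_delta
        best = current
        wait = 0
    else:
        wait += 1
        if wait >= patience:
            print("Stop at epoch {}; best val_loss = {}".format(epoch, best))  # epoch 8, best 0.0644
            break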
I have a rather complex sequence to sequence encoder decoder model. I run into an issue where my loss and accuracy drop to zero and I can't reproduce this error. It has nothing to do with the training data as it happens with different sets.
It seems to be learning as the loss slowly drops. Below is what it is like just before:
Epoch 1/2
5000/5000 [==============================] - 235s 47ms/step - loss: 0.9825 - acc: 0.7077
Epoch 2/2
5000/5000 [==============================] - 235s 47ms/step - loss: 0.9443 - acc: 0.7177
And here is what it looks like during the next model.fit() iteration:
Epoch 1/2
2882/2882 [==============================] - 136s 47ms/step - loss: 0.7033 - acc: 0.4399
Epoch 2/2
2882/2882 [==============================] - 136s 47ms/step - loss: 1.1921e-07 - acc: 0.0000e+00
After this, the loss and accuracy remain the same:
Epoch 1/2
5000/5000 [==============================] - 278s 56ms/step - loss: 1.1921e-07 - acc: 0.0000e+00
Epoch 2/2
5000/5000 [==============================] - 279s 56ms/step - loss: 1.1921e-07 - acc: 0.0000e+00
The reason I have to train in this manner is that I have variable input and output sizes, so I have to batch my training data by fixed input size before I train.
sgd = optimizers.SGD(lr= 0.015, decay=0.002)
out2 = model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
I need to use curriculum learning to reach sentence level predictions, so I am doing the following:
I initially train my model to output the "1 word + end" token. Training on this works fine. When I start to train on "2 words + end", this problem starts to arise.
After training on 1 word, I save the model. Then I define a new model with output size for 2 words, and use the following:
new_model = createModel(...,num_output_words)
new_model.set_weights(old_model.get_weights())
I have to do this as I can't define a model with variable output length.
I can provide more information if needed. I can't find any information online.
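For reference, set_weights requires the new model's weight list to match the old one in length and in per-array shapes; a quick, hypothetical sanity check for the transfer step above (assuming old_model and new_model come from createModel as described) would be:

old_weights = old_model.get_weights()
new_weights = new_model.get_weights()
print(len(old_weights), len(new_weights))   # counts must match for set_weights() to succeed
for i, (w_old, w_new) in enumerate(zip(old_weights, new_weights)):
    if w_old.shape != w_new.shape:
        print("weight {}: shape mismatch {} vs {}".format(i, w_old.shape, w_new.shape))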
I am quite new to deep learning and I was studying this RNN example.
After completing the tutorial, I decided to see the effect of various hyperparameters such as the number of nodes in each layer and dropout factor etc.
What I do is, for each value in my lists, create a new model using a set of parameters and test the performance in my dataset. Below is the basic code:
def build_model(MODELNAME, l1, l2, l3, l4, d):
    tf.global_variables_initializer()
    tf.reset_default_graph()
    model = Sequential(name=MODELNAME)
    model.reset_states()
    model.add(CuDNNLSTM(l1, input_shape=(x_train.shape[1:]), return_sequences=True))
    model.add(Dropout(d))
    model.add(BatchNormalization())
    model.add(CuDNNLSTM(l2, input_shape=(x_train.shape[1:]), return_sequences=True))
    # Definition of other layers of the model ...
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=opt,
                  metrics=['accuracy'])
    history = model.fit(x_train, y_train,
                        epochs=EPOCHS,
                        batch_size=BATCH_SIZE,
                        validation_data=(x_validation, y_validation))
    return model

layer1 = [64, 128, 256]
# layer2, layer3, layer4 defined similarly ...
drop = [0.2, 0.3, 0.4]

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4

for l1 in layer1:
    # for l2, l3, l4 in layer2, layer3, layer4 ...
    for d in drop:
        sess = tf.Session(config=config)
        set_session(sess)
        MODELNAME = 'RNN-l1={}-l2={}-l3={}-l4={}-drop={}'.format(l1, l2, l3, l4, d)
        print(MODELNAME)
        model = build_model(MODELNAME, l1, l2, l3, l4, d)
        sess.close()
        print('-----> training & validation loss & accuracies')
The problem is that when a new model is built with the new parameters, it behaves as if it were continuing from the previous model's last epoch rather than starting from epoch 1 of the new one. Below are some of the results.
RNN-l1=64-l2=64-l3=64-l4=32-drop=0.2
Train on 90116 samples, validate on 4458 samples
Epoch 1/6
90116/90116 [==============================] - 139s 2ms/step - loss: 0.5558 - acc: 0.7116 - val_loss: 0.8857 - val_acc: 0.5213
... # results for other epochs
Epoch 6/6
RNN-l1=64-l2=64-l3=64-l4=32-drop=0.3
90116/90116 [==============================] - 140s 2ms/step - loss: 0.5233 - acc: 0.7369 - val_loss: 0.9760 - val_acc: 0.5336
Epoch 1/6
90116/90116 [==============================] - 142s 2ms/step - loss: 0.5170 - acc: 0.7403 - val_loss: 0.9671 - val_acc: 0.5310
... # results for other epochs
90116/90116 [==============================] - 142s 2ms/step - loss: 0.4953 - acc: 0.7577 - val_loss: 0.9587 - val_acc: 0.5354
Epoch 6/6
90116/90116 [==============================] - 143s 2ms/step - loss: 0.4908 - acc: 0.7614 - val_loss: 1.0319 - val_acc: 0.5397
# -------------------AFTER 31ST SET OF PARAMETERS
RNN-l1=64-l2=256-l3=128-l4=32-drop=0.2
Epoch 1/6
90116/90116 [==============================] - 144s 2ms/step - loss: 0.1080 - acc: 0.9596 - val_loss: 1.8910 - val_acc: 0.5372
As seen, the first epoch of the 31st set of parameters behaves as if it were the 181st epoch. Similarly, if I stop the run at some point and re-run it, the accuracy and loss look like those of the next epoch, as shown below.
Epoch 1/6
90116/90116 [==============================] - 144s 2ms/step - loss: 0.1053 - acc: 0.9621 - val_loss: 1.9120 - val_acc: 0.5375
I tried a bunch of things (as you can see in the code), such as setting model = None, re-initializing the variables, resetting the states of the model, and closing the session in each iteration, but none of them helped. I searched for similar questions with no luck.
I am trying to understand what I am doing wrong.
Any help is appreciated.
It looks like you are using Keras with a TensorFlow backend, which means you need to import the Keras backend and clear its session before you build each new model. It would look something like this:
from keras import backend as K
K.clear_session()
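Applied to the loop in the question, it would look roughly like this (a sketch that keeps your config, set_session and build_model; the essential part is clearing the session before each new model):

from keras import backend as K

for l1 in layer1:
    for d in drop:    # plus the l2, l3, l4 loops as in your full code
        K.clear_session()                  # discard the previous graph and weights
        sess = tf.Session(config=config)   # start from a fresh session
        set_session(sess)
        MODELNAME = 'RNN-l1={}-l2={}-l3={}-l4={}-drop={}'.format(l1, l2, l3, l4, d)
        model = build_model(MODELNAME, l1, l2, l3, l4, d)
        sess.close()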
I am building my first artificial multilayer perceptron neural network using Keras.
This is the code I used to build my initial model, which basically follows the Keras example code:
model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
Output:
Epoch 1/20
1213/1213 [==============================] - 0s - loss: 0.1760
Epoch 2/20
1213/1213 [==============================] - 0s - loss: 0.1840
Epoch 3/20
1213/1213 [==============================] - 0s - loss: 0.1816
Epoch 4/20
1213/1213 [==============================] - 0s - loss: 0.1915
Epoch 5/20
1213/1213 [==============================] - 0s - loss: 0.1928
Epoch 6/20
1213/1213 [==============================] - 0s - loss: 0.1964
Epoch 7/20
1213/1213 [==============================] - 0s - loss: 0.1948
Epoch 8/20
1213/1213 [==============================] - 0s - loss: 0.1971
Epoch 9/20
1213/1213 [==============================] - 0s - loss: 0.1899
Epoch 10/20
1213/1213 [==============================] - 0s - loss: 0.1957
Epoch 11/20
1213/1213 [==============================] - 0s - loss: 0.1923
Epoch 12/20
1213/1213 [==============================] - 0s - loss: 0.1910
Epoch 13/20
1213/1213 [==============================] - 0s - loss: 0.2104
Epoch 14/20
1213/1213 [==============================] - 0s - loss: 0.1976
Epoch 15/20
1213/1213 [==============================] - 0s - loss: 0.1979
Epoch 16/20
1213/1213 [==============================] - 0s - loss: 0.2036
Epoch 17/20
1213/1213 [==============================] - 0s - loss: 0.2019
Epoch 18/20
1213/1213 [==============================] - 0s - loss: 0.1978
Epoch 19/20
1213/1213 [==============================] - 0s - loss: 0.1954
Epoch 20/20
1213/1213 [==============================] - 0s - loss: 0.1949
How do I train and tune this model and get my code to output my best predictive model? I am new to neural networks and am wholly confused about what the next step is after building the model. I know I want to optimize it, but I'm not sure which settings to tweak, whether I am supposed to do it manually, or how to write code to do so.
Some things that you could do are listed below (a short sketch combining several of them follows the list):
Change your loss function from mean_squared_error to binary_crossentropy. mean_squared_error is intended for regression, but you want to classify your data.
Add show_accuracy=True to your fit() function, which outputs the accuracy of your model at every epoch. That information is probably more useful to you than just the loss value.
Add validation_split=0.2 to your fit() function. Currently you are only training on a training set and validating on nothing. That's a no-go in machine learning as you can't be sure that your model hasn't simply memorized the correct answers for your dataset (without really understanding why these answers are correct).
Change from Obama/Romney to Democrat/Republican and add data from previous elections. ~1200 examples is a pretty small dataset for neural networks. Also add columns with valuable information, like unemployment rate or population density. Note that quite a few of the values (like population) are probably close to a proxy for the name of the state, so e.g. your net will likely just learn that Texas means Republican.
If you haven't done that already, normalize all your values to the range of 0 to 1 (by subtracting from each value the minimum of the column and then dividing by the (max - min) of the column). Neural networks can handle normalized data better than unnormalized data.
Try Adam and Adagrad instead of SGD. Sometimes they perform better. (See documentation about optimizers.)
Try Activation('relu'), LeakyReLU, PReLU and ELU instead of Activation('tanh'). Tanh is rarely the best choice. (See advanced activation functions.)
Try increasing/decreasing your dense layers sizes (e.g. from 64 to 128). Also try adding/removing layers.
Try adding BatchNormalization layers (before the Activation layers). (See documentation.)
Try changing the dropout rates (e.g. from 0.5 to 0.25).
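As referenced above, here is a short, hypothetical sketch combining several of these suggestions (min-max normalization, binary cross-entropy, the Adam optimizer, and a validation split with per-epoch accuracy); it assumes X_train is a numeric NumPy array and keeps the old-style Keras arguments used in the question:

import numpy as np
from keras.optimizers import Adam

# Min-max normalize each feature column to the range [0, 1].
# (Guard against constant columns in real code to avoid division by zero.)
col_min = X_train.min(axis=0)
col_max = X_train.max(axis=0)
X_train_norm = (X_train - col_min) / (col_max - col_min)

# Classification loss and Adam optimizer instead of MSE and SGD.
model.compile(loss='binary_crossentropy', optimizer=Adam())

# Hold out 20% of the data for validation and report accuracy every epoch.
# (show_accuracy is the old-Keras argument suggested above; newer versions use
#  metrics=['accuracy'] in compile() and epochs= instead of nb_epoch=.)
model.fit(X_train_norm, y_train, nb_epoch=20, batch_size=16,
          validation_split=0.2, show_accuracy=True)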