Early stopping with multiple conditions - python

I am doing multi-class classification for a recommender system (item recommendations), and I'm currently training my network with the sparse_categorical_crossentropy loss. It is therefore reasonable to perform early stopping by monitoring my validation loss, val_loss, like so:
tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
which works as expected. However, the performance of the network (recommender system) is measured by Average-Precision-at-10 and is tracked as a metric during training, as average_precision_at_k10. Because of this, I could also perform early stopping on this metric, like so:
tf.keras.callbacks.EarlyStopping(monitor='average_precision_at_k10', patience=10)
which also works as expected.
My problem:
Sometimes the validation loss increases while the Average-Precision-at-10 is improving, and vice versa. Because of this, I would need to monitor both and perform early stopping if and only if both are deteriorating. What I would like to do:
tf.keras.callbacks.EarlyStopping(monitor=['val_loss', 'average_precision_at_k10'], patience=10)
which obviously does not work. Any ideas on how this could be done?

With guidance from Gerry P I managed to create my own custom EarlyStopping callback, and thought I'd post it here in case anyone else is looking to implement something similar.
If both the validation loss and the mean average precision at 10 do not improve for patience number of epochs, early stopping is performed.
import numpy as np
from tensorflow import keras

class CustomEarlyStopping(keras.callbacks.Callback):
    def __init__(self, patience=0):
        super(CustomEarlyStopping, self).__init__()
        self.patience = patience
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epochs it has waited while the loss is no longer the minimum.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the best values: +inf for the loss, 0 for the metric.
        self.best_v_loss = np.inf
        self.best_map10 = 0

    def on_epoch_end(self, epoch, logs=None):
        v_loss = logs.get('val_loss')
        map10 = logs.get('val_average_precision_at_k10')

        # If BOTH the validation loss AND map10 do not improve for 'patience' epochs, stop training early.
        if np.less(v_loss, self.best_v_loss) and np.greater(map10, self.best_map10):
            self.best_v_loss = v_loss
            self.best_map10 = map10
            self.wait = 0
            # Record the best weights when the current results are better.
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
It is then used as:
model.fit(
    x_train,
    y_train,
    batch_size=64,
    steps_per_epoch=5,
    epochs=30,
    verbose=0,
    callbacks=[CustomEarlyStopping(patience=10)],
)

You can achieve this by creating a custom callback. Information on how to do that is located here. Below is some code that illustrates what you can do in a custom callback; the documentation I referenced shows many other options.
class LRA(keras.callbacks.Callback):  # subclass the Callback class
    # Create class variables as below. These can be accessed in your code
    # outside the class definition as LRA.my_class_variable, LRA.best_weights.
    my_class_variable = None  # a class variable
    best_weights = None       # another class variable, set once training starts

    # Define an initialization method with the parameters you want to feed to the callback.
    def __init__(self, param1, param2):
        super(LRA, self).__init__()
        self.param1 = param1  # store each parameter on the instance
        self.param2 = param2
        # ...repeat for all parameters, then write any other initialization code you need here

    def on_epoch_end(self, epoch, logs=None):  # this method runs at the end of each epoch
        v_loss = logs.get('val_loss')  # example of reading log data: this epoch's validation loss
        acc = logs.get('accuracy')     # another example of reading log data
        LRA.best_weights = self.model.get_weights()  # example of setting a class variable's value
        print(f'Hello, epoch {epoch} has just ended')  # print a message at the end of every epoch
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))  # get the current learning rate
        if v_loss > self.param1:
            new_lr = lr * self.param2
            tf.keras.backend.set_value(self.model.optimizer.lr, new_lr)  # set the learning rate in the optimizer
        # write whatever other code you need
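A callback written this way is passed to fit like any built-in callback; a minimal usage sketch (the values fed to param1 and param2 here are arbitrary illustrations, not recommendations):
model.fit(
    x_train,
    y_train,
    validation_data=(x_val, y_val),
    epochs=30,
    callbacks=[LRA(param1=0.5, param2=0.5)],
)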

I recommend creating your own callback.
In the following I added a solution that monitors both the accuracy and the loss. You can replace acc with your own metric:
class CustomCallback(keras.callbacks.Callback):
    def __init__(self, patience=None):
        super(CustomCallback, self).__init__()
        self.patience = patience
        self.acc = {}   # per-epoch accuracy, kept as instance state
        self.loss = {}  # per-epoch loss
        self.best_weights = None

    def on_epoch_end(self, epoch, logs=None):
        epoch += 1
        self.loss[epoch] = logs['loss']
        self.acc[epoch] = logs['accuracy']
        if self.patience and epoch > self.patience:
            # Best weights if the current loss is less than the loss 'patience' epochs ago.
            # Similarly for acc, but when it is larger.
            if self.loss[epoch] < self.loss[epoch - self.patience] and self.acc[epoch] > self.acc[epoch - self.patience]:
                self.best_weights = self.model.get_weights()
            else:
                # Stop the training
                self.model.stop_training = True
                # Load the best weights
                self.model.set_weights(self.best_weights)
        else:
            # The best weights are the current weights
            self.best_weights = self.model.get_weights()
Please bear in mind that if you want to control the minimum change in the monitored quantity (i.e. min_delta), you have to integrate it into the code yourself; a sketch follows below.
Here is the documentation for how to build your own custom callback: custom_callback
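For instance, a minimal sketch of how a min_delta could be folded into the comparison in on_epoch_end above (min_delta here is a hypothetical constructor argument, not part of the class as written, and using one tolerance for both quantities is a simplification):
# Assuming __init__ also stored self.min_delta = min_delta:
improved = (self.loss[epoch] < self.loss[epoch - self.patience] - self.min_delta
            and self.acc[epoch] > self.acc[epoch - self.patience] + self.min_delta)
if improved:
    self.best_weights = self.model.get_weights()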

At this point it would be simpler to write a custom training loop and just use if-statements. E.g.:
def main(epochs=50):
    for epoch in range(epochs):
        fit(epoch)
        if test_acc.result() > .8 and topk_acc.result() > .9:
            print(f'\nEarly stopping. Test acc is above 80% and TopK acc is above 90%.')
            break

if __name__ == '__main__':
    main(epochs=100)
Here's a simple custom training loop using this method:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import tensorflow_datasets as tfds
import tensorflow as tf

data, info = tfds.load('iris', split='train',
                       as_supervised=True,
                       shuffle_files=True,
                       with_info=True)

def preprocessing(inputs, targets):
    scaled = tf.divide(inputs, tf.reduce_max(inputs, axis=0))
    return scaled, targets

dataset = data.filter(lambda x, y: tf.less_equal(y, 2)).\
    map(preprocessing).\
    shuffle(info.splits['train'].num_examples)

train_dataset = dataset.take(120).batch(4)
test_dataset = dataset.skip(120).take(30).batch(4)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(info.features['label'].num_classes, activation='softmax')
])

# The model already ends in a softmax, so the loss consumes probabilities, not logits.
loss_object = tf.losses.SparseCategoricalCrossentropy(from_logits=False)

train_loss = tf.metrics.Mean()
test_loss = tf.metrics.Mean()
train_acc = tf.metrics.SparseCategoricalAccuracy()
test_acc = tf.metrics.SparseCategoricalAccuracy()
topk_acc = tf.metrics.SparseTopKCategoricalAccuracy(k=2)
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        logits = model(inputs)
        loss = loss_object(labels, logits)
    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, logits)

@tf.function
def test_step(inputs, labels):
    logits = model(inputs)
    loss = loss_object(labels, logits)
    test_loss.update_state(loss)
    test_acc.update_state(labels, logits)
    topk_acc.update_state(labels, logits)

def fit(epoch):
    template = 'Epoch {:>2} Train Loss {:.3f} Test Loss {:.3f} ' \
               'Train Acc {:.2f} Test Acc {:.2f} Test TopK Acc {:.2f} '
    train_loss.reset_states()
    test_loss.reset_states()
    train_acc.reset_states()
    test_acc.reset_states()
    topk_acc.reset_states()
    for X_train, y_train in train_dataset:
        train_step(X_train, y_train)
    for X_test, y_test in test_dataset:
        test_step(X_test, y_test)
    print(template.format(
        epoch + 1,
        train_loss.result(),
        test_loss.result(),
        train_acc.result(),
        test_acc.result(),
        topk_acc.result()
    ))

def main(epochs=50):
    for epoch in range(epochs):
        fit(epoch)
        if test_acc.result() > .8 and topk_acc.result() > .9:
            print(f'\nEarly stopping. Test acc is above 80% and TopK acc is above 90%.')
            break

if __name__ == '__main__':
    main(epochs=100)

Related

Has anyone implemented an Optuna hyperparameter optimization for a PyTorch LSTM?

I am trying to implement an Optuna hyperparameter optimization for a PyTorch LSTM, but I do not know how to define my model correctly.
When I just use nn.Linear everything works fine, but when I use nn.LSTMCell I get the following error:
AttributeError: 'tuple' object has no attribute 'dim'
The error is raised because the LSTM returns a tuple, not a tensor, but I do not know how to fix it and cannot find an example of a PyTorch LSTM with Optuna optimization online.
Here is the model definition:
def build_model_custom(trial):
    # Suggest the number of layers of the neural network model
    n_layers = trial.suggest_int("n_layers", 1, 3)
    layers = []
    in_features = 20
    for i in range(n_layers):
        # Suggest the number of units in each layer
        out_features = trial.suggest_int("n_units_l{}".format(i), 4, 18)
        layers.append(nn.LSTMCell(in_features, out_features))
        in_features = out_features
    layers.append(nn.Linear(in_features, 2))
    return nn.Sequential(*layers)
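(Note: the error itself comes from nn.Sequential feeding the LSTMCell's (h, c) output tuple straight into the next layer. One possible workaround, sketched here as a hypothetical helper rather than a confirmed fix, is a small wrapper module that keeps only the hidden state:)
import torch.nn as nn

class LSTMCellOutput(nn.Module):
    """Hypothetical wrapper: run an LSTMCell and pass on only the hidden state."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.cell = nn.LSTMCell(in_features, out_features)

    def forward(self, x):
        h, c = self.cell(x)  # LSTMCell returns a (hidden, cell-state) tuple
        return h             # the next layer only needs a plain tensor

# Inside build_model_custom, this would replace the bare nn.LSTMCell:
# layers.append(LSTMCellOutput(in_features, out_features))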
I have implemented an example of Optuna optimizing an LSTM before; I hope it helps you:
import numpy as np
import optuna
import torch
import torch.nn as nn
from optuna.trial import TrialState

def get_best_parameters(args, Dtr, Val):
    def objective(trial):
        model = TransformerModel(args).to(args.device)
        loss_function = nn.MSELoss().to(args.device)
        optimizer = trial.suggest_categorical('optimizer',
                                              [torch.optim.SGD,
                                               torch.optim.RMSprop,
                                               torch.optim.Adam])(
            model.parameters(), lr=trial.suggest_loguniform('lr', 5e-4, 1e-2))
        print('training...')
        epochs = 10
        val_loss = 0
        for epoch in range(epochs):
            train_loss = []
            for batch_idx, (seq, target) in enumerate(Dtr, 0):
                seq, target = seq.to(args.device), target.to(args.device)
                optimizer.zero_grad()
                y_pred = model(seq)
                loss = loss_function(y_pred, target)
                train_loss.append(loss.item())
                loss.backward()
                optimizer.step()
            # validation
            val_loss = get_val_loss(args, model, Val)
            print('epoch {:03d} train_loss {:.8f} val_loss {:.8f}'.format(epoch, np.mean(train_loss), val_loss))
            model.train()
        return val_loss

    sampler = optuna.samplers.TPESampler()
    study = optuna.create_study(sampler=sampler, direction='minimize')
    study.optimize(func=objective, n_trials=5)
    pruned_trials = study.get_trials(deepcopy=False,
                                     states=tuple([TrialState.PRUNED]))
    complete_trials = study.get_trials(deepcopy=False,
                                       states=tuple([TrialState.COMPLETE]))
    best_trial = study.best_trial
    print('val_loss = ', best_trial.value)
    for key, value in best_trial.params.items():
        print("{}: {}".format(key, value))
I implemented a solution myself. I am not sure it is the most Pythonic, but it works.
Suggestions for improvement are welcome.
def train_and_evaluate(param, model, trial):
    # Load data
    train_dataloader = torch.utils.data.DataLoader(Train_Dataset, batch_size=batch_size)
    test_dataloader = torch.utils.data.DataLoader(Test_Dataset, batch_size=batch_size)
    criterion = nn.MSELoss()
    optimizer = getattr(optim, param['optimizer'])(model.parameters(), lr=param['learning_rate'])
    acc = nn.L1Loss()

    # Training loop
    for epoch_num in range(EPOCHS):
        # Training
        total_loss_train = 0
        for train_input, train_target in train_dataloader:
            output = model.forward(train_input.float())
            batch_loss = criterion(output, train_target.float())
            total_loss_train += batch_loss.item()
            model.zero_grad()
            batch_loss.backward()
            optimizer.step()

        # Evaluation
        total_loss_val = 0
        total_mae = 0
        with torch.no_grad():
            for test_input, test_target in test_dataloader:
                output = model(test_input.float())
                batch_loss = criterion(output, test_target)
                total_loss_val += batch_loss.item()
                batch_mae = acc(output, test_target)
                total_mae += batch_mae.item()

        accuracy = total_mae / len(Test_Dataset)

        # Add a prune mechanism
        trial.report(accuracy, epoch_num)
        if trial.should_prune():
            raise optuna.exceptions.TrialPruned()

    return accuracy
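A function like this would then be driven from an Optuna objective; a minimal hedged sketch of the wiring (the parameter names and search ranges here are illustrative, not from the original code):
def objective(trial):
    param = {
        'optimizer': trial.suggest_categorical('optimizer', ['Adam', 'SGD']),
        'learning_rate': trial.suggest_float('learning_rate', 1e-4, 1e-2, log=True),
    }
    model = build_model_custom(trial)
    return train_and_evaluate(param, model, trial)

study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=20)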

Predicted value in simple LSTM keeps accumulating

After each epoch, y_pred simply keeps increasing.
The input at each batch is a 64x10 tensor, and I am trying to predict the max of each row's vector.
I thought the gradient might not be going to 0 between batches, but that wasn't the case.
I tried changing the LR, the number of epochs, the LSTM layers (LSTM to RNN), the hidden size, etc.; nothing helped.
BTW, using a simple sequential network of dense and ReLU layers instead of the LSTM worked perfectly.
Here is the code:
LR = 0.0001

class LSTM(nn.Module):
    def __init__(self, input_size=1, hidden_layer_size=100, output_size=1):
        super().__init__()
        self.hidden_layer_size = hidden_layer_size
        self.lstm = nn.LSTM(input_size, hidden_layer_size)
        self.linear = nn.Linear(hidden_layer_size, output_size)
        # self.hidden_cell = (torch.zeros(1, max_array_len, self.hidden_layer_size),
        #                     torch.zeros(1, max_array_len, self.hidden_layer_size))

    def forward(self, input_seq):
        # lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), max_array_len, 1), self.hidden_cell)
        lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), max_array_len, 1))
        predictions = self.linear(lstm_out[:, -1, :])
        return predictions

model = LSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=0.8)  # optimize all cnn parameters
loss_func = nn.MSELoss()  # the target label is not one-hotted
print(model)

EPOCHS = 2000
for i in range(EPOCHS):
    # model.train()
    for step, (seq, labels) in enumerate(train_data):
        model.zero_grad()
        labels = labels.view(labels.shape[0], 1)
        y_pred = model(seq)
        loss = loss_func(y_pred.float(), labels.float())
        loss.backward(retain_graph=True)
        optimizer.step()
    if i % 10 == 0:
        # print(y_pred.shape, labels.shape)
        print(y_pred)
        print(f'epoch: {i:3} train_loss: {loss.item():10.8f}')

print('Finished Training')
The y_pred I am getting is:
tensor([[0.2661],
[0.7536],
[1.4659],
[2.4905],
[3.8662],
[5.4478],
[6.8958],
[7.9347],
[8.5493],
[8.8773],
[9.0486],
[9.1409],
[9.1931],
[9.2244],
[9.2441],
[9.2570],
[9.2657],
[9.2718],
[9.2761],
[9.2792],
[9.2815],
[9.2831],
[9.2843],
[9.2853],
[9.2860],
[9.2865],
[9.2869],
[9.2872],
[9.2874],
[9.2876],
[9.2877],
        [9.2878]], grad_fn=<AddmmBackward>)

Why is a simple Binary classification failing in a feedforward neural network?

I am new to PyTorch. I was trying to model a binary classifier on the Kepler dataset. The following is my dataset class:
class KeplerDataset(Dataset):
    def __init__(self, test=False):
        self.dataframe_orig = pd.read_csv(koi_cumm_path)
        if test == False:
            self.data = df_numeric[(df_numeric.koi_disposition == 1) | (df_numeric.koi_disposition == 0)].values
        else:
            self.data = df_numeric[~((df_numeric.koi_disposition == 1) | (df_numeric.koi_disposition == 0))].values
        self.X_data = torch.FloatTensor(self.data[:, 1:])
        self.y_data = torch.FloatTensor(self.data[:, 0])

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        return self.X_data[index], self.y_data[index]
Here, I created a custom classifier class with hidden layers and a single output unit that produces the sigmoidal probability of being in class 1 (planet).
class KOIClassifier(nn.Module):
    def __init__(self, input_dim, out_dim):
        super(KOIClassifier, self).__init__()
        self.linear1 = nn.Linear(input_dim, 32)
        self.linear2 = nn.Linear(32, 32)
        self.linear3 = nn.Linear(32, out_dim)

    def forward(self, xb):
        out = self.linear1(xb)
        out = F.relu(out)
        out = self.linear2(out)
        out = F.relu(out)
        out = self.linear3(out)
        out = torch.sigmoid(out)
        return out
I then created a train_model function to optimize the loss using SGD.
def train_model(X, y):
    criterion = nn.BCELoss()
    optim = torch.optim.SGD(model.parameters(), lr=0.001)
    n_epochs = 100
    losses = []
    for epoch in range(n_epochs):
        y_pred = model.forward(X)
        loss = criterion(y_pred, y)
        losses.append(loss.item())
        optim.zero_grad()
        loss.backward()
        optim.step()

losses = []
for X, y in train_loader:
    losses.append(train_model(X, y))
But after performing the optimization over the train_loader, when I try predicting on the train_loader itself, the prediction values are much worse.
for features, y in train_loader:
    y_pred = model.predict(features)
    break

y_pred

> tensor([[4.5436e-02],
          [1.5024e-02],
          [2.2579e-01],
          [4.2279e-01],
          [6.0811e-02],
          .....
Why is my model not working properly? Is it a problem with the dataset, or am I doing something wrong in implementing the neural net? I will link my Kaggle notebook because more context might be helpful. Please help.
You are optimizing many times (100 steps) on the first batch (the first samples), then moving to the next samples. This means that your model will overfit on a few samples before moving on to the next batch, so your training will be very non-smooth, will diverge, and will move away from your global optimum.
Usually, in a training loop you should:
1. go over all samples (this is one epoch)
2. shuffle your dataset in order to visit the samples in a different order (set your PyTorch training loader accordingly)
3. go back to 1. until you reach the max number of epochs
Also, you should not redefine your optimizer (nor your criterion) each time.
Your training loop should look like this:
criterion = nn.BCELoss()
optim = torch.optim.SGD(model.parameters(), lr=0.001)
n_epochs = 100

def train_model():
    for X, y in train_loader:
        optim.zero_grad()
        y_pred = model.forward(X)
        loss = criterion(y_pred, y)
        loss.backward()
        optim.step()

for epoch in range(n_epochs):
    train_model()

Applying callbacks in a custom training loop in Tensorflow 2.0

I'm writing a custom training loop using the code provided in the TensorFlow DCGAN implementation guide. I wanted to add callbacks in the training loop. In Keras, I know we pass them as an argument to the fit method, but I can't find resources on how to use these callbacks in a custom training loop. I'm adding the code for the custom training loop from the TensorFlow documentation:
# Notice the use of `tf.function`.
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

def train(dataset, epochs):
    for epoch in range(epochs):
        start = time.time()
        for image_batch in dataset:
            train_step(image_batch)
        # Produce images for the GIF as we go
        display.clear_output(wait=True)
        generate_and_save_images(generator,
                                 epoch + 1,
                                 seed)
        # Save the model every 15 epochs
        if (epoch + 1) % 15 == 0:
            checkpoint.save(file_prefix=checkpoint_prefix)
        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time() - start))
    # Generate after the final epoch
    display.clear_output(wait=True)
    generate_and_save_images(generator,
                             epochs,
                             seed)
I've had this problem myself: (1) I want to use a custom training loop; (2) I don't want to lose the bells and whistles Keras gives me in terms of callbacks; (3) I don't want to re-implement them all myself. TensorFlow has a design philosophy of allowing a developer to gradually opt in to its more low-level APIs. As @HyeonPhilYoun notes in his comment below, the official documentation for tf.keras.callbacks.Callback gives an example of what we're looking for.
The following has worked for me, but can be improved by reverse engineering tf.keras.Model.
The trick is to use tf.keras.callbacks.CallbackList and then manually trigger its lifecycle events from within your custom training loop. This example uses tqdm to give attractive progress bars, but CallbackList has a progress_bar initialization argument that can let you use the defaults. training_model is a typical instance of tf.keras.Model.
from tqdm.notebook import tqdm, trange

# Populate with typical keras callbacks
_callbacks = []
callbacks = tf.keras.callbacks.CallbackList(
    _callbacks, add_history=True, model=training_model)
logs = {}
callbacks.on_train_begin(logs=logs)

# Presentation
epochs = trange(
    max_epochs,
    desc="Epoch",
    unit="Epoch",
    postfix="loss = {loss:.4f}, accuracy = {accuracy:.4f}")
epochs.set_postfix(loss=0, accuracy=0)

# Get a stable test set so epoch results are comparable
test_batches = batches(test_x, test_Y)

for epoch in epochs:
    callbacks.on_epoch_begin(epoch, logs=logs)

    # I like to formulate new batches each epoch
    # if there are data augmentation methods in play
    training_batches = batches(x, Y)

    # Presentation
    enumerated_batches = tqdm(
        enumerate(training_batches),
        desc="Batch",
        unit="batch",
        postfix="loss = {loss:.4f}, accuracy = {accuracy:.4f}",
        position=1,
        leave=False)

    for (batch, (x, y)) in enumerated_batches:
        training_model.reset_states()
        callbacks.on_batch_begin(batch, logs=logs)
        callbacks.on_train_batch_begin(batch, logs=logs)
        logs = training_model.train_on_batch(x=x, y=y, return_dict=True)
        callbacks.on_train_batch_end(batch, logs=logs)
        callbacks.on_batch_end(batch, logs=logs)
        # Presentation
        enumerated_batches.set_postfix(
            loss=float(logs["loss"]),
            accuracy=float(logs["accuracy"]))

    for (batch, (x, y)) in enumerate(test_batches):
        training_model.reset_states()
        callbacks.on_batch_begin(batch, logs=logs)
        callbacks.on_test_batch_begin(batch, logs=logs)
        logs = training_model.test_on_batch(x=x, y=y, return_dict=True)
        callbacks.on_test_batch_end(batch, logs=logs)
        callbacks.on_batch_end(batch, logs=logs)

    # Presentation
    epochs.set_postfix(
        loss=float(logs["loss"]),
        accuracy=float(logs["accuracy"]))

    callbacks.on_epoch_end(epoch, logs=logs)

    # NOTE: This is a decent place to check on your early stopping
    # callback: training_model.stop_training is set by such callbacks.
    if getattr(training_model, "stop_training", False):
        break

callbacks.on_train_end(logs=logs)

# Fetch the history object we normally get from keras.fit
history_object = None
for cb in callbacks:
    if isinstance(cb, tf.keras.callbacks.History):
        history_object = cb
assert history_object is not None
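For instance, populating _callbacks with the stock EarlyStopping callback makes the stop_training check above meaningful; a small sketch, assuming the loop above (the monitor key and patience value are illustrative):
_callbacks = [
    tf.keras.callbacks.EarlyStopping(
        monitor="loss", patience=5, restore_best_weights=True),
]
callbacks = tf.keras.callbacks.CallbackList(
    _callbacks, add_history=True, model=training_model)
# Once 'loss' stops improving, callbacks.on_epoch_end(...) lets the
# EarlyStopping callback set training_model.stop_training = True.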
The simplest way would be to check whether the loss has changed over your expected period, and break or manipulate the training process if it hasn't.
Here is one way you could implement a custom early-stopping callback:
import numpy as np

def Callback_EarlyStopping(LossList, min_delta=0.1, patience=20):
    # No early stopping for 2*patience epochs
    if len(LossList) // patience < 2:
        return False
    # Mean loss for the last patience epochs and the second-last patience epochs
    mean_previous = np.mean(LossList[::-1][patience:2 * patience])  # second-last
    mean_recent = np.mean(LossList[::-1][:patience])  # last
    # You can use relative or absolute change
    delta_abs = np.abs(mean_recent - mean_previous)  # absolute change
    delta_abs = np.abs(delta_abs / mean_previous)  # relative change
    if delta_abs < min_delta:
        print("*CB_ES* Loss didn't change much from last %d epochs" % (patience))
        print("*CB_ES* Percent change in loss value:", delta_abs * 1e2)
        return True
    else:
        return False
This Callback_EarlyStopping checks your metrics/loss every epoch and returns True if the relative change is less than what you expected, by computing a moving average of the losses over every patience number of epochs. You can then capture this True signal and break the training loop. To completely answer your question, within your sample training loop you can use this as:
gen_loss_seq = []
for epoch in range(epochs):
    # In your example, make sure your train_step returns gen_loss
    gen_loss = train_step(dataset)
    # Ideally, you can have a validation_step and get gen_valid_loss
    gen_loss_seq.append(gen_loss)
    # Check every 20 epochs and stop if gen_valid_loss doesn't change by 10%
    stopEarly = Callback_EarlyStopping(gen_loss_seq, min_delta=0.1, patience=20)
    if stopEarly:
        print("Callback_EarlyStopping signal received at epoch= %d/%d" % (epoch, epochs))
        print("Terminating training ")
        break
Of course, you can increase the complexity in numerous ways: for example, which loss or metrics you would like to track, your interest in the loss at a particular epoch versus a moving average of the loss, your interest in relative versus absolute change in value, etc. You can refer to the TensorFlow 2.x implementation of tf.keras.callbacks.EarlyStopping here, which is generally used with the popular tf.keras.Model.fit method.
I think you would need to implement the functionality of the callback manually. It should not be too difficult. You could, for instance, have the train_step function return the losses and then implement the functionality of callbacks such as early stopping in your train function. For callbacks such as a learning-rate schedule, the function tf.keras.backend.set_value(generator_optimizer.lr, new_lr) would come in handy, so the functionality of the callback would be implemented in your train function.
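A minimal sketch of that idea, assuming train_step returns the loss as suggested above (the patience of 5 and decay factor of 0.5 are arbitrary illustration values):
best_loss, wait = float('inf'), 0
for epoch in range(epochs):
    epoch_loss = train_step(dataset)  # assuming train_step returns the loss
    if epoch_loss < best_loss:
        best_loss, wait = epoch_loss, 0
    else:
        wait += 1
    if wait >= 5:  # crude learning-rate schedule: decay after 5 flat epochs
        lr = float(tf.keras.backend.get_value(generator_optimizer.lr))
        tf.keras.backend.set_value(generator_optimizer.lr, lr * 0.5)
        wait = 0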
A custom training loop is just a normal Python loop, so you can use if statements to break the loop whenever some condition is met. For instance:
if len(loss_history) > patience:
    if loss_history.popleft() * delta < min(loss_history):
        print(f'\nEarly stopping. No improvement of more than {delta:.5%} in '
              f'validation loss in the last {patience} epochs.')
        break
If there is no improvement of delta% in the loss in the past patience epochs, the loop is broken. Here, I'm using a collections.deque, which can easily be used as a rolling list that keeps in memory information from only the last patience epochs.
Here's a full implementation, based on the example from the TensorFlow documentation:
from collections import deque

patience = 3
delta = 0.001

loss_history = deque(maxlen=patience + 1)

for epoch in range(1, 25 + 1):
    train_loss = tf.metrics.Mean()
    train_acc = tf.metrics.CategoricalAccuracy()
    test_loss = tf.metrics.Mean()
    test_acc = tf.metrics.CategoricalAccuracy()

    for x, y in train:
        loss_value, grads = get_grad(model, x, y)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_loss.update_state(loss_value)
        train_acc.update_state(y, model(x, training=True))

    for x, y in test:
        loss_value, _ = get_grad(model, x, y)
        test_loss.update_state(loss_value)
        test_acc.update_state(y, model(x, training=False))

    print(verbose.format(epoch,
                         train_loss.result(),
                         test_loss.result(),
                         train_acc.result(),
                         test_acc.result()))

    loss_history.append(test_loss.result())

    if len(loss_history) > patience:
        if loss_history.popleft() * delta < min(loss_history):
            print(f'\nEarly stopping. No improvement of more than {delta:.5%} in '
                  f'validation loss in the last {patience} epochs.')
            break
Epoch 1 Loss: 0.191 TLoss: 0.282 Acc: 68.920% TAcc: 89.200%
Epoch 2 Loss: 0.157 TLoss: 0.297 Acc: 70.880% TAcc: 90.000%
Epoch 3 Loss: 0.133 TLoss: 0.318 Acc: 71.560% TAcc: 90.800%
Epoch 4 Loss: 0.117 TLoss: 0.299 Acc: 71.960% TAcc: 90.800%
Early stopping. No improvement of more than 0.10000% in validation loss in the last 3 epochs.
aapa3e8's answer is correct, but below I provide an implementation of Callback_EarlyStopping that is more similar to tf.keras.callbacks.EarlyStopping:
def Callback_EarlyStopping(MetricList, min_delta=0.1, patience=20, mode='min'):
    # No early stopping for the first patience epochs
    if len(MetricList) <= patience:
        return False

    min_delta = abs(min_delta)
    if mode == 'min':
        min_delta *= -1
    else:
        min_delta *= 1

    # The last patience epochs, shifted by min_delta
    last_patience_epochs = [x + min_delta for x in MetricList[::-1][1:patience + 1]]
    current_metric = MetricList[::-1][0]

    if mode == 'min':
        if current_metric >= max(last_patience_epochs):
            print(f'Metric did not decrease for the last {patience} epochs.')
            return True
        else:
            return False
    else:
        if current_metric <= min(last_patience_epochs):
            print(f'Metric did not increase for the last {patience} epochs.')
            return True
        else:
            return False
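It is used the same way as aapa3e8's version, but the metric direction is now explicit. A small usage sketch (val_loss_seq and validation_step are hypothetical stand-ins for your own per-epoch validation bookkeeping):
val_loss_seq = []
for epoch in range(epochs):
    val_loss_seq.append(validation_step())  # hypothetical helper returning a float
    if Callback_EarlyStopping(val_loss_seq, min_delta=0.01, patience=10, mode='min'):
        print("Early stopping at epoch %d" % epoch)
        break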
I tested @Rob Hall's method with TensorBoard callbacks and it did indeed work. In my case it looked like this:
tensorboard_callback = keras.callbacks.TensorBoard(
    log_dir='./callbacks/tensorboard',
    histogram_freq=1)

_callbacks = [tensorboard_callback]
callbacks = keras.callbacks.CallbackList(
    _callbacks, add_history=True, model=encoder)

logs_ae = {}
callbacks.on_train_begin(logs=logs_ae)
...
...

Early stop when validation loss satisfies certain criteria

I am training a neural net model in Keras. I want to monitor the validation loss and stop the training when a certain condition is met.
I know I can use EarlyStopping to stop the training when there is no improvement for a given number of patience rounds.
I want something different: I want to stop the training when val_loss goes above a value, say x, after n rounds.
To make things clear, let's say x is 0.5 and n is 50. I want to stop the model's training only if the epoch number is greater than 50 and val_loss is above 0.5.
How can I do this in Keras?
You can define your own callback by inheriting from the Keras EarlyStopping callback and overriding it with your own logic:
import warnings
from keras.callbacks import EarlyStopping  # use as base class

class MyCallBack(EarlyStopping):
    def __init__(self, threshold, min_epochs, **kwargs):
        super(MyCallBack, self).__init__(**kwargs)
        self.threshold = threshold    # threshold for validation loss
        self.min_epochs = min_epochs  # min number of epochs to run

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn(
                'Early stopping conditioned on metric `%s` '
                'which is not available. Available metrics are: %s' %
                (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
            )
            return
        # Implement your own logic here
        if (epoch >= self.min_epochs) & (current >= self.threshold):
            self.stopped_epoch = epoch
            self.model.stop_training = True
Small example to illustrate that it should work:
from keras.layers import Input, Dense
from keras.models import Model
import numpy as np

# Generate some random data
features = np.random.rand(100, 5)
labels = np.random.rand(100, 1)
validation_feat = np.random.rand(100, 5)
validation_labels = np.random.rand(100, 1)

# Define a simple model
input_layer = Input((5, ))
dense_layer = Dense(10)(input_layer)
output_layer = Dense(1)(dense_layer)
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(loss='mse', optimizer='sgd')

# Fit with the custom callback
callbacks = [MyCallBack(threshold=0.001, min_epochs=10, verbose=1)]
model.fit(features, labels, validation_data=(validation_feat, validation_labels), callbacks=callbacks, epochs=100)
