Keras earlystopping: print selected epoch - python

Simple question. I am using Keras EarlyStopping in the following form:
Earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto')
How can I get Keras to print the selected epoch once the model has been fit? I think you have to use logs but don't quite know how.
Thanks.
Edit:
The full code is very long! Let me add a bit more than I gave. Hopefully it will help.
# Define model
def design_flexiNN(m_type, neurons, shape_timestep, shape_feature, activation, kernel_ini):
    model = Sequential()
    model.add(Dense(neurons, input_dim=shape_feature, activation=activation, use_bias=True, kernel_initializer=kernel_ini))
    model.add(Dense(1, use_bias=True))
    model.compile(loss='mae', optimizer='Adam')
    return model
# fit model
def fit_flexiNN(m_type, train_X, train_y, epochs, batch_size, test_X, test_y):
    history = model.fit(train_X, train_y, epochs=epochs, batch_size=batch_size, callbacks=callbacks_list, validation_data=(test_X, test_y), verbose=0, shuffle=False)
    return history
Earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto')
callbacks_list = [Earlystop]
model = design_flexiNN(m_type, neurons, neurons_step, train_X_feature_shape, activation, kernel_ini)
history = fit_flexiNN(m_type, train_X, train_y, ini_epochs, batch_size, test_X, test_y)
I've been able to infer the selected epoch by doing len(history.history['val_loss']) minus 1, but that doesn't work if you have a patience above zero.

Been trying to solve this myself and realised that the len(history.history['val_loss']) method is almost correct. All you need to add is:
len(history.history['val_loss']) - patience
which should give you the epoch number for the selected model (assuming the model didn't run for the full number of epochs).
A slightly more thorough method would be:
model_loss = history.history["val_loss"]
epoch_chosen = model_loss.index(min(model_loss)) +1
print(epoch_chosen)
Hope this helps!
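If you also want the chosen epoch printed automatically once fitting finishes, a minimal sketch along these lines should work (assuming a recent Keras where EarlyStopping supports restore_best_weights and exposes a stopped_epoch attribute; variable names follow the question):
Earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5,
                          verbose=1, mode='auto', restore_best_weights=True)
history = model.fit(train_X, train_y, epochs=ini_epochs, batch_size=batch_size,
                    callbacks=[Earlystop], validation_data=(test_X, test_y),
                    verbose=0, shuffle=False)
val_loss = history.history['val_loss']
print('Selected epoch:', val_loss.index(min(val_loss)) + 1)  # 1-based epoch with the lowest val_loss
print('Stopped at epoch index:', Earlystop.stopped_epoch)    # 0 if training ran for all epochs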

Related

callback causes ValueError

The code was working fine for the past few months, but it broke after something I did, and I cannot restore it.
def bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                  batch_size=68, units=128, learning_rate=0.005,
                  epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if (logs.get('acc') > 0.90):
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=loss,
                  optimizer=adamopt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat

def duo_bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                      batch_size=68, units=128, learning_rate=0.005,
                      epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if (logs.get('acc') > 0.90):
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(
        LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout, return_sequences=True)))
    model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=loss,
                  optimizer=adamopt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat
Basically, I have defined two models and whenever the second one runs, the error comes up.
BTW, I use tf.keras.backend.clear_session() between the models.
ValueError: Tensor("Adam/bidirectional/forward_lstm/kernel/m:0", shape=(), dtype=resource) must be from the same graph as Tensor("bidirectional/forward_lstm/kernel:0", shape=(), dtype=resource).
The only modification I ever made to the code was that I tried to move the callback class out of the two models and define it once before them, to reduce code redundancy.
The problem is not the callback function. The error shows up because you pass the same optimizer to two different models, which is not possible since they are two different computational graphs.
Try defining the optimizer inside the function where you build the model, just before the model.compile() call.
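For example, a minimal sketch of that fix (assuming tf.keras's Adam; on older versions the argument is lr rather than learning_rate):
# Inside bi_LSTM_model() and duo_bi_LSTM_model(), just before model.compile():
adamopt = tf.keras.optimizers.Adam(learning_rate=learning_rate)  # fresh optimizer per model, same graph as its variables
model.compile(loss=loss,
              optimizer=adamopt,
              metrics=['accuracy'])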

How can I restart model.fit after x epochs if loss remains high?

I sometimes have trouble getting a fit on my data, and when I restart the fit (with shuffle=True) I sometimes get a good fit.
See my previous question:
https://datascience.stackexchange.com/questions/62516/why-does-my-model-sometimes-not-learn-well-from-same-data
As a work around, I want to automatically restart the fitting process, if loss is high after x epochs. How can I achieve this?
I assume I would need to use a custom version of EarlyStopping callback? How could I differentiate between ES because of finding a low loss ( < 0.5) so training is finished, or because loss > 0.5 after x epochs so need to restart training?
Here is a simplified structure:
def train_till_good():
    while not_finished:
        train()

def train():
    load_data()
    model = VerySimpleNet2()

    checkpoint = keras.callbacks.ModelCheckpoint(filepath=images_root + dataset_name + '\\CheckPoint.hdf5')

    myOpt = keras.optimizers.Adam(lr=0.001, decay=0.01)
    model.compile(optimizer=myOpt, loss='categorical_crossentropy', metrics=['accuracy'])

    LRS = CyclicLR(base_lr=0.000005, max_lr=0.0003, step_size=200.)

    tensorboard = keras.callbacks.TensorBoard(log_dir='C:\\Tensorflow', histogram_freq=0, write_graph=True, write_images=False)

    ES = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)

    model.fit(train_images, train_labels, shuffle=True, epochs=num_epochs,
              callbacks=[checkpoint,
                         tensorboard,
                         ES,
                         LRS],
              validation_data=(test_images, test_labels)
              )

def VerySimpleNet2():
    model = keras.Sequential([
        keras.layers.Dense(112, activation=tf.nn.relu, input_shape=(224, 224, 3)),
        keras.layers.Dropout(0.4),
        keras.layers.Flatten(),
        keras.layers.Dense(3, activation=tf.nn.softmax)
    ])
    return model
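One way to approach this (a sketch only, using the 0.5 loss threshold from the question; RestartCheck, check_epoch and the return-value convention below are hypothetical, not a standard Keras API): a custom callback stops training and raises a flag when the loss is still high after x epochs, and the outer loop restarts whenever that flag is set.
class RestartCheck(keras.callbacks.Callback):
    """Stop training and flag a restart if the loss is still above a threshold after x epochs."""
    def __init__(self, check_epoch=10, loss_threshold=0.5):
        super().__init__()
        self.check_epoch = check_epoch
        self.loss_threshold = loss_threshold
        self.needs_restart = False

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if epoch + 1 >= self.check_epoch and logs.get('loss', 0.0) > self.loss_threshold:
            self.needs_restart = True        # loss still high: ask the outer loop to retry
            self.model.stop_training = True  # abort this fit early

def train():
    ...
    restart_check = RestartCheck(check_epoch=10, loss_threshold=0.5)
    model.fit(train_images, train_labels, shuffle=True, epochs=num_epochs,
              callbacks=[checkpoint, tensorboard, ES, LRS, restart_check],
              validation_data=(test_images, test_labels))
    return not restart_check.needs_restart  # True means training finished normally

def train_till_good():
    while not train():
        pass  # loss stayed high: rebuild and refit from scratch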

Add SVM to last layer

What I did:
I implemented the following model using Keras:
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.2, random_state=np.random.seed(7), shuffle=True)
train_X = np.reshape(train_X, (train_X.shape[0], 1, train_X.shape[1]))
test_X = np.reshape(test_X, (test_X.shape[0], 1, test_X.shape[1]))
inp = Input((train_X.shape[1], train_X.shape[2]))
lstm = LSTM(1, return_sequences=False)(inp)
output = Dense(train_Y.shape[1], activation='softmax')(lstm)
model = Model(inputs=inp, outputs=output)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(train_X, train_Y, validation_split=.20, epochs=2, batch_size=50)
What I want:
I want to add an SVM as the last layer of my model, but I don't know how. Any idea?
This should work for adding an SVM as the last layer.
inp = Input((train_X.shape[1], train_X.shape[2]))
lstm = LSTM(1, return_sequences=False)(inp)
output = Dense(train_Y.shape[1], activation='softmax', W_regularizer=l2(0.01))(lstm)
model = Model(inputs=inp, outputs=output)
model.compile(loss='hinge', optimizer='adam', metrics=['accuracy'])
model.fit(train_X, train_Y, validation_split=.20, epochs=2, batch_size=50)
Here I have used hinge as the loss, assuming a binary target. If there are more than two classes, consider using categorical_hinge instead.
Change softmax to linear and use kernel_regularizer=l2(1e-4) instead of W_regularizer=l2(0.01) with Keras 2.2.4, and use loss='categorical_hinge'.
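Putting that comment's suggestions together, a sketch of the updated model might look like this (assuming Keras 2.2.4 and l2 imported from keras.regularizers):
from keras.regularizers import l2

inp = Input((train_X.shape[1], train_X.shape[2]))
lstm = LSTM(1, return_sequences=False)(inp)
# linear activation + L2 weight penalty + hinge-style loss approximates a linear SVM on top of the LSTM
output = Dense(train_Y.shape[1], activation='linear', kernel_regularizer=l2(1e-4))(lstm)
model = Model(inputs=inp, outputs=output)
model.compile(loss='categorical_hinge', optimizer='adam', metrics=['accuracy'])
model.fit(train_X, train_Y, validation_split=.20, epochs=2, batch_size=50)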

Keras, get output of a layer at each epoch

What I have done:
I implemented a Keras model as follows:
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.2, random_state=np.random.seed(7), shuffle=True)
train_X = np.reshape(train_X, (train_X.shape[0], 1, train_X.shape[1]))
test_X = np.reshape(test_X, (test_X.shape[0], 1, test_X.shape[1]))
model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(train_Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(train_X, train_Y, validation_split=.20,
epochs=1000, batch_size=50)
What I want:
I want to feed the output of the penultimate (LSTM) layer to a support vector machine (SVM) at every epoch (all 1000 of them), so that the SVM is trained as well.
But I do not know how to do this.
Any idea?
UPDATED:
I used ModelCheckpoint as follows:
model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(train_Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# checkpoint
filepath="weights-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.fit(train_X, train_Y, validation_split=.20,
epochs=1000, batch_size=50, callbacks=callbacks_list, verbose=0)
OUTPUT:
Epoch 00991: val_acc did not improve
Epoch 00992: val_acc improved from 0.93465 to 0.93900, saving model to weights-992-0.94.hdf5
Epoch 00993: val_acc did not improve
Epoch 00994: val_acc did not improve
Epoch 00995: val_acc did not improve
Epoch 00996: val_acc did not improve
Epoch 00997: val_acc did not improve
Epoch 00998: val_acc improved from 0.93900 to 0.94543, saving model to weights-998-0.94.hdf5
Epoch 00999: val_acc did not improve
PROBLEM:
How can I load all these models to obtain the output of the LSTM layer at each epoch, as @IonicSolutions said?
What works best in your situation depends on how exactly you set up and train your SVM, but there are at least two options using callbacks:
You could use the ModelCheckpoint callback to save a copy of the model you are training at each epoch and then later load all these models to obtain the output of the LSTM layer.
You can also create your own callback by implementing the Callback base class. Within the callback, the model can be accessed and you can use on_epoch_end to extract the LSTM output at the end of each epoch.
Edit: To get convenient access to the penultimate layer, you can do the following:
# Create the model with the functional API
inp = Input((train_X.shape[1], train_X.shape[2],))
lstm = LSTM(100, return_sequences=False)(inp)
dense = Dense(train_Y.shape[1], activation='softmax')(lstm)
# Create the full model
model = Model(inputs=inp, outputs=dense)
# Create the model for access to the LSTM layer
access = Model(inputs=inp, outputs=lstm)
Then, you can pass access to your callback when you instantiate it. The key thing to note here is that model and access share the very same LSTM layer, whose weights will change when training model.
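As a rough sketch of the second option (LSTMOutputCallback is a hypothetical name; it reuses the access model defined above):
from keras.callbacks import Callback

class LSTMOutputCallback(Callback):
    """Collect the LSTM layer's output at the end of every epoch (sketch)."""
    def __init__(self, access_model, data):
        super().__init__()
        self.access_model = access_model  # shares its LSTM layer with the model being trained
        self.data = data
        self.outputs = []

    def on_epoch_end(self, epoch, logs=None):
        self.outputs.append(self.access_model.predict(self.data))

lstm_cb = LSTMOutputCallback(access, train_X)
model.fit(train_X, train_Y, validation_split=.20, epochs=1000, batch_size=50,
          callbacks=[lstm_cb], verbose=0)
# lstm_cb.outputs[i] now holds the LSTM output after epoch i, e.g. for training an SVM on it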
In order to get the prediction output at each epoch, here is what we can do:
import tensorflow as tf
import keras

# define your custom callback for prediction
class PredictionCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        y_pred = self.model.predict(self.validation_data[0])
        print('prediction: {} at epoch: {}'.format(y_pred, epoch))

# ...
# register the callback before training starts
model.fit(X_train, y_train, batch_size=32, epochs=25,
          validation_data=(X_valid, y_valid),
          callbacks=[PredictionCallback()])

Keras - Save output of intermediate layer

What I have done:
I'm using this code to implement my Keras model:
X, tx, Y, ty = train_test_split(X, Y, test_size=0.2, random_state=np.random.seed(7), shuffle=True)
X = np.reshape(X, (X.shape[0], 1, X.shape[1]))
tx = np.reshape(tx, (tx.shape[0], 1, tx.shape[1]))
model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
filepath="weights-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.fit(X, Y, validation_split=.20,
epochs=1000, batch_size=50, callbacks=callbacks_list, verbose=0)
OUTPUT:
Below is a part of the program's output:
Epoch 00993: val_acc did not improve
Epoch 00994: val_acc did not improve
Epoch 00995: val_acc did not improve
Epoch 00996: val_acc did not improve
Epoch 00997: val_acc did not improve
Epoch 00998: val_acc improved from 0.93900 to 0.94543, saving model to weights-998-0.94.hdf5
Epoch 00999: val_acc did not improve
PROBLEM:
I need to save the output of the LSTM layer at each epoch, but I do not know how.
Any idea?
You can use the Keras functional API. You will have to rewrite your model creation, but it's not much work. Then, when you write something like this:
lstm_output = LSTM(128, ...)(x)
the output of the LSTM layer will be in the lstm_output variable and you can save it in every iteration of every epoch.
I hope this answers your question.
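A minimal sketch of how that could look in practice (assuming the functional API as suggested, a hypothetical SaveLSTMOutput callback, and .npy files as the storage format; saving once per epoch rather than per iteration):
import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, Dense
from keras.callbacks import Callback

inp = Input((X.shape[1], X.shape[2]))
lstm_output = LSTM(100, return_sequences=False)(inp)
dense = Dense(Y.shape[1], activation='softmax')(lstm_output)
model = Model(inputs=inp, outputs=dense)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

# Auxiliary model that shares the LSTM layer and exposes its output
lstm_model = Model(inputs=inp, outputs=lstm_output)

class SaveLSTMOutput(Callback):
    """Save the LSTM layer's output to a .npy file at the end of every epoch (sketch)."""
    def __init__(self, lstm_model, data):
        super().__init__()
        self.lstm_model = lstm_model
        self.data = data

    def on_epoch_end(self, epoch, logs=None):
        np.save('lstm_output_epoch_{:04d}.npy'.format(epoch), self.lstm_model.predict(self.data))

model.fit(X, Y, validation_split=.20, epochs=1000, batch_size=50,
          callbacks=[SaveLSTMOutput(lstm_model, X)], verbose=0)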
