I am running a Keras sequential model as a regressor with the TensorFlow backend. I am using Optuna to optimize its hyperparameters, minimizing the RMSE in the Optuna objective.
However, when I re-create the Keras model with the best parameters from Optuna and refit and predict on the same dataset used in the Optuna objective function, I get wildly inconsistent results.
I'm aware that neural nets are stochastic in nature, with an element of randomness. To make training deterministic I tried setting the seeds for both NumPy and TensorFlow at the beginning of my script in the following manner, but it doesn't work:
from numpy.random import seed
seed(1)
import tensorflow
tensorflow.random.set_seed(2)
Following is my code and the output-
def create_model(trial):
    n_layers = trial.suggest_int("layers_number", 4, 8)  # 4
    model = keras.Sequential()
    for i in range(n_layers):
        num_hidden = trial.suggest_int("n_units_l_{}".format(i), 10, 16)
        activation = trial.suggest_categorical('activation_l_{}'.format(i), ['linear'])  # , 'relu', 'sigmoid', 'tanh', 'elu'
        model.add(layers.Dense(num_hidden, activation=activation, kernel_initializer='uniform'))
        dropout = trial.suggest_uniform("dropout_l_{}".format(i), 0.1, 0.4)
        model.add(layers.Dropout(dropout))
    model.add(layers.Dense(1, activation='linear'))
    lr = trial.suggest_loguniform("lr", 1e-5, 1e-1)
    model.compile(
        loss='mean_squared_error',
        optimizer=keras.optimizers.Adam(lr=lr),
        metrics=['mse']
    )
    return model
def objective(trial):
    keras.backend.clear_session()
    model = create_model(trial)
    epochs = trial.suggest_int("epochs", 3, 4)  # 50
    batch = trial.suggest_int("batch", 1, 2)
    model.fit(
        X_train.values,
        y_train.values,
        batch_size=batch,
        epochs=epochs,
        verbose=0,
        shuffle=False
    )
    y_pred_test = model.predict(X_test)
    test_copy['pred_scaled'] = y_pred_test
    rmse = inverse_transform(test_copy, y_pred_test, df_copy)  # inverse transforms the transformed target and calculates RMSE
    return rmse
study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=2)
Output (best-trial screenshot omitted):
RMSE of best trial is 110.90926282554379
Refitting and predicting using best params.
def KerasRegressor(parameters):
    print(parameters)
    model = keras.Sequential()
    layers_number = int(parameters['layers_number'])
    for i in range(layers_number):
        model.add(layers.Dense(int(parameters['n_units_l_' + str(i)]), activation=parameters['activation_l_' + str(i)], kernel_initializer='uniform'))
        model.add(layers.Dropout(int(parameters['dropout_l_' + str(i)])))
    model.add(layers.Dense(1, activation='linear'))
    model.compile(
        loss='mean_squared_error',
        optimizer=keras.optimizers.Adam(lr=float(parameters['lr'])),
        metrics=['mse'])
    return model
params = study.best_trial.params
epochs = params['epochs']
batch = params['batch']
del params['epochs']
del params['batch']
seed(1)
tensorflow.random.set_seed(2)
model = KerasRegressor(params)
model.fit(X_train.values, y_train.values, epochs=epochs, batch_size=batch, shuffle=False)
y_pred_test = model.predict(X_test)
test_copy['pred_scaled'] = y_pred_test
rmse = inverse_transform(test_copy, y_pred_test, df_copy)#inverse transforms the transformed target and calculates rmse
print(rmse)
New RMSE on the same dataset as used in the Optuna objective function, with the best hyperparameters:
New rmse - 227892.23560327655
Small differences in RMSE are acceptable, but not a difference this large.
I have a different approach: save the model to a file every time Optuna finds a new best metric, then load that file at prediction time instead of refitting.
If you really want to debug, pin down every source of randomness in your system: fix the seed (you did that), fit on the same data in the same order, use the same layers and the same hyperparameters, then test. Run the whole procedure again with the same seed and the same fit, and test again. Are the two test results the same? Run multiple tests and check whether they stay the same.
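A minimal sketch of the save-and-reload approach from the first paragraph, reusing the objective and helpers from the question (the file names are only illustrative): each trial saves its fitted model to disk, and the file from the best trial is loaded back for prediction instead of refitting.
def objective(trial):
    keras.backend.clear_session()
    model = create_model(trial)
    model.fit(X_train.values, y_train.values,
              batch_size=trial.suggest_int("batch", 1, 2),
              epochs=trial.suggest_int("epochs", 3, 4),
              verbose=0, shuffle=False)
    y_pred_test = model.predict(X_test)
    test_copy['pred_scaled'] = y_pred_test
    rmse = inverse_transform(test_copy, y_pred_test, df_copy)
    # Persist this trial's fitted model so it can be reloaded later without refitting.
    model.save("model_trial_{}.h5".format(trial.number))
    return rmse

study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=2)

# Load the exact model that produced the best RMSE instead of rebuilding it.
best_model = keras.models.load_model("model_trial_{}.h5".format(study.best_trial.number))
y_pred_test = best_model.predict(X_test)
Because the prediction now comes from the very model that was fitted inside the trial, the reported RMSE and the later prediction cannot drift apart due to re-initialization.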
I am training a model for speech emotion recognition.
I wish to add an attention layer to the model, but the documentation page is hard to understand.
def bi_duo_LSTM_model(X_train, y_train, X_test, y_test, num_classes, batch_size=68, units=128, learning_rate=0.005, epochs=20, dropout=0.2, recurrent_dropout=0.2):
    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if (logs.get('acc') > 0.95):
                print("\nReached 99% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(tf.keras.layers.Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout, return_sequences=True)))
    model.add(tf.keras.layers.Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    # model.add(tf.keras.layers.Bidirectional(LSTM(32)))
    model.add(Dense(num_classes, activation='softmax'))

    adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
    RMSopt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6)
    SGDopt = tf.keras.optimizers.SGD(lr=learning_rate, momentum=0.9, decay=0.1, nesterov=False)

    model.compile(loss='binary_crossentropy',
                  optimizer=adamopt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat
How can I apply it to my model?
And are use_scale, causal and dropout the only arguments?
If there is dropout in the attention layer, how do we deal with it, given that we already have dropout in the LSTM layers?
Attention can be interpreted as a soft form of vector retrieval.
You have some query vectors. For each query, you want to retrieve some
values, such that you compute a weighted sum of them,
where the weights are obtained by comparing the query with keys (the number of keys must be the same as the number of values, and often they are the same vectors).
In sequence-to-sequence models, the query is the decoder state, and the keys and values are the encoder states.
In a classification task, you do not have such an explicit query. The easiest way to get around this is to train a "universal" query that is used to collect relevant information from the hidden states (similar to what was originally described in this paper).
If you approach the problem as sequence labeling, assigning a label not to an entire sequence, but to individual time steps, you might want to use a self-attentive layer instead.
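As a rough illustration of the "universal query" idea applied to the model from the question (a sketch under the question's variable names, not the only way to do it): a small custom layer holds one trainable query vector, scores every Bi-LSTM state against it, and returns the weighted sum of the states.
import tensorflow as tf

class AttentionPooling(tf.keras.layers.Layer):
    """Pools a sequence of states with a single trained query vector."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.supports_masking = True

    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.query = self.add_weight(name="query", shape=(dim,),
                                     initializer="glorot_uniform", trainable=True)

    def call(self, states, mask=None):
        scores = tf.tensordot(states, self.query, axes=1)          # (batch, time)
        if mask is not None:
            scores += (1.0 - tf.cast(mask, scores.dtype)) * -1e9   # ignore padded steps
        weights = tf.nn.softmax(scores, axis=-1)                   # attention weights
        return tf.reduce_sum(weights[..., None] * states, axis=1)  # weighted sum of states

    def compute_mask(self, inputs, mask=None):
        return None  # the time dimension is pooled away, so drop the mask

# Plugged into the question's architecture (functional API; the second LSTM keeps the sequence):
inputs = tf.keras.Input(shape=(X_train.shape[1], X_train.shape[2]))
x = tf.keras.layers.Masking(mask_value=0.0)(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(x)
x = AttentionPooling()(x)
outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)
Regarding the dropout question: the dropout argument of the built-in attention layers is applied to the attention scores only, so it is independent of the LSTM's dropout and recurrent_dropout and can be tuned separately.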
I built a Bi-LSTM model which tries to predict a category for a given word. For example, the word "smile" should be mapped to the category "friendly".
However, after training the model with 100 samples per category across 10 categories (1,000 in total), the accuracy and loss curves are continuously slightly shaky when plotted. Why does this occur? Increasing the number of samples causes underfitting.
Model
def build_model(vocab_size, embedding_dim=64, input_length=30):
    print('\nbuilding the model...\n')
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=(vocab_size + 1), output_dim=embedding_dim, input_length=input_length),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(rnn_units, return_sequences=True, dropout=0.2)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(rnn_units, return_sequences=True, dropout=0.2)),
        tf.keras.layers.GlobalMaxPool1D(),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.Dense(64, activation='tanh', kernel_regularizer=tf.keras.regularizers.L2(l2=0.01)),
        # softmax output layer
        tf.keras.layers.Dense(10, activation='softmax')
    ])

    # optimizer & loss
    opt = 'RMSprop'  # tf.optimizers.Adam(learning_rate=1e-4)
    loss = 'categorical_crossentropy'

    # Metrics
    metrics = ['accuracy', 'AUC', 'Precision', 'Recall']

    # compile model
    model.compile(optimizer=opt,
                  loss=loss,
                  metrics=metrics)
    model.summary()
    return model
training
def train(model, x_train, y_train, x_validation, y_validation,
          epochs, batch_size=32, patience=5,
          verbose=2, monitor_es='accuracy', mode_es='auto', restore=True,
          monitor_mc='val_accuracy', mode_mc='max'):
    # callbacks
    early_stopping = tf.keras.callbacks.EarlyStopping(monitor=monitor_es,
                                                      verbose=1, mode=mode_es, restore_best_weights=restore,
                                                      min_delta=1e-3, patience=patience)
    model_checkpoint = tf.keras.callbacks.ModelCheckpoint('tfjsmode.h5', monitor=monitor_mc, mode=mode_mc,
                                                          verbose=1, save_best_only=True)
    keras_callbacks = [early_stopping, model_checkpoint]

    # train model
    history = model.fit(x_train, y_train,
                        batch_size=batch_size, epochs=epochs, verbose=verbose,
                        validation_data=(x_validation, y_validation),
                        callbacks=keras_callbacks)
    return history
Accuracy & loss (plots omitted)
Batch size
Currently the batch size is set to 16; if I increase the batch size to 64 with 2,500 samples per category, the final plots will show underfitting.
As pointed out in the comments, the smaller the batch size, the higher the variance of the per-batch means, which shows up as more fluctuation in the loss. I typically use a batch size of 80 since I have a fairly large memory capacity. You are using the ModelCheckpoint callback and saving the model with the best validation accuracy; it is better to save the model with the lowest validation loss. You say increasing the number of samples leads to underfitting. That seems rather strange: usually more samples results in better accuracy.
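For instance, the checkpoint in the question's train() function could monitor validation loss instead (a minimal sketch, keeping the file name from the question):
model_checkpoint = tf.keras.callbacks.ModelCheckpoint('tfjsmode.h5',
                                                      monitor='val_loss',  # keep the model with the lowest validation loss
                                                      mode='min',
                                                      verbose=1,
                                                      save_best_only=True)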
I am building a CNN in Keras with a TensorFlow backend for speaker identification, and currently I am attempting to train the model and then save it as an .hdf5 file. The program trains the model for 100 epochs with early stopping and checkpoints, saving only the best model to a file, as illustrated in the code below:
class BuildModel:
    # Create First Model in Ensemble
    def createModel(self, model_input, n_outputs, first_session=True):
        if first_session != True:
            model = load_model('SI_ideal_model_fixed.hdf5')
            return model

        # Define Input Layer
        inputs = model_input

        # Define Densely Connected Layers
        conv = Dense(16, activation='relu')(inputs)
        conv = Dense(64, activation='relu')(conv)
        conv = Dense(16, activation='relu')(conv)
        conv = Reshape((conv.shape[1]*conv.shape[2]*conv.shape[3],))(conv)
        outputs = Dense(n_outputs, activation='softmax')(conv)

        # Create Model
        model = Model(inputs, outputs)
        model.summary()
        return model

    # Train the Model
    def evaluateModel(self, x_train, x_val, y_train, y_val, num_classes, first_session=True):
        # Model Parameters
        verbose, epochs, batch_size, patience = 1, 100, 64, 10

        # Determine Input and Output Dimensions
        x = x_train[0].shape[0]  # Number of MFCC rows
        y = x_train[0].shape[1]  # Number of MFCC columns
        c = 1                    # Number of channels

        # Create Model
        inputs = Input(shape=(x, y, c), name='input')
        model = self.createModel(model_input=inputs,
                                 n_outputs=num_classes,
                                 first_session=first_session)

        # Compile Model
        model.compile(loss='categorical_crossentropy',
                      optimizer='adam',
                      metrics=['accuracy'])

        # Callbacks
        es = EarlyStopping(monitor='val_loss',
                           mode='min',
                           verbose=verbose,
                           patience=patience,
                           min_delta=0.0001)  # Stop training at right time
        mc = ModelCheckpoint('SI_ideal_model_fixed.hdf5',
                             monitor='val_accuracy',
                             verbose=verbose,
                             save_best_only=True,
                             mode='max')  # Save best model after each epoch
        reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                                      factor=0.2,
                                      patience=patience//2,
                                      min_lr=1e-3)  # Reduce learning rate once learning stagnates

        # Evaluate Model
        model.fit(x_train, y=y_train, epochs=epochs,
                  callbacks=[es, mc, reduce_lr], batch_size=batch_size,
                  validation_data=(x_val, y_val))
        accuracy = model.evaluate(x=x_train, y=y_train,
                                  batch_size=batch_size,
                                  verbose=verbose)

        # Load Best Model
        model = load_model('SI_ideal_model_fixed.hdf5')

        return (accuracy[1], model)
However, it appears that the load_model function is not working properly, since the model achieved a validation accuracy of 0.56193 after the first training session but then only started with a validation accuracy of 0.2508 at the beginning of the second training session. (From what I have seen, the first epoch of the second training session should have a validation accuracy much closer to that of the best model.)
Moreover, I then attempted to test the trained model on a set of unseen samples with model.predict, and it failed on all six, often with high probabilities, which leads me to believe that it was using minimally trained (or untrained) weights.
So, my question is could this be an issue from loading and saving the models using the load_model and ModelCheckpoint functions? If so, what is the best alternative method? If not, what are some good troubleshooting tips for improving the model's prediction functionality?
I am not sure what you mean by training session. What I would do is first train for a few epochs and note the validation accuracy. Then load the model and use evaluate() to check that you get the same accuracy. If it differs, then yes, something is wrong with your loading. Here is what I would do:
def createModel(self, model_input, n_outputs):
    # Define Input Layer
    inputs = model_input

    # Define Densely Connected Layers
    conv = Dense(16, activation='relu')(inputs)
    conv2 = Dense(64, activation='relu')(conv)
    conv3 = Dense(16, activation='relu')(conv2)
    conv4 = Reshape((conv.shape[1]*conv.shape[2]*conv.shape[3],))(conv3)
    outputs = Dense(n_outputs, activation='softmax')(conv4)

    # Create Model
    model = Model(inputs, outputs)
    return model

# Train the Model
def evaluateModel(self, x_train, x_val, y_train, y_val, num_classes, first_session=True):
    # Model Parameters
    verbose, epochs, batch_size, patience = 1, 100, 64, 10

    # Determine Input and Output Dimensions
    x = x_train[0].shape[0]  # Number of MFCC rows
    y = x_train[0].shape[1]  # Number of MFCC columns
    c = 1                    # Number of channels

    # Create Model
    inputs = Input(shape=(x, y, c), name='input')
    model = self.createModel(model_input=inputs,
                             n_outputs=num_classes)

    # Compile Model
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])

    # Callbacks
    es = EarlyStopping(monitor='val_loss',
                       mode='min',
                       verbose=verbose,
                       patience=patience,
                       min_delta=0.0001)  # Stop training at right time
    mc = ModelCheckpoint('SI_ideal_model_fixed.h5',
                         monitor='val_accuracy',
                         verbose=verbose,
                         save_best_only=True,
                         save_weights_only=False)  # Save best model after each epoch
    reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                                  factor=0.2,
                                  patience=patience//2,
                                  min_lr=1e-3)  # Reduce learning rate once learning stagnates

    # Evaluate Model
    model.fit(x_train, y=y_train, epochs=5,
              callbacks=[es, mc, reduce_lr], batch_size=batch_size,
              validation_data=(x_val, y_val))
    accuracy = model.evaluate(x=x_val, y=y_val,
                              batch_size=batch_size,
                              verbose=verbose)

    # Load Best Model and evaluate it on the same data
    model2 = load_model('SI_ideal_model_fixed.h5')
    model2.evaluate(x=x_val, y=y_val,
                    batch_size=batch_size,
                    verbose=verbose)

    return (accuracy[1], model)
The two evaluations should print the same thing really.
P.S. TF might change the order of your computations, so I used different names in the model (e.g. conv2, conv3, ...) to prevent that.
I am trying to calculate the recall for each class after each epoch, in both binary and multi-class (one-hot encoded) classification scenarios, in a model that uses TensorFlow 2's Keras API. E.g. for binary classification I'd like to be able to do something like
import tensorflow as tf
model = tf.keras.Sequential()
model.add(...)
model.add(tf.keras.layers.Dense(1))
model.compile(metrics=[binary_recall(label=0), binary_recall(label=1)], ...)
history = model.fit(...)
plt.plot(history.history['binary_recall_0'])
plt.plot(history.history['binary_recall_1'])
plt.show()
or in a multi class scenario I'd like to do something like
model = tf.keras.Sequential()
model.add(...)
model.add(tf.keras.layers.Dense(3))
model.compile(metrics=[recall(label=0), recall(label=1), recall(label=2)], ...)
history = model.fit(...)
plt.plot(history.history['recall_0'])
plt.plot(history.history['recall_1'])
plt.plot(history.history['recall_2'])
plt.show()
I'm working on a classifier for an unbalanced dataset and want to be able to see at what point the recall for my minority class(es) starts to degrade.
I found an implementation of precision for a specific class in a multi-class classifier here https://stackoverflow.com/a/41717938/373655. I am trying to adapt this into what I need, but keras.backend is still pretty foreign to me, so any help would be greatly appreciated.
I am also not clear on whether I can use Keras metrics (as they are calculated at the end of each batch and then averaged) or whether I need to use Keras callbacks (which can run at the end of each epoch). It seems to me like it shouldn't make a difference for recall (e.g. 8/10 == (3/5 + 5/5) / 2), but this is why recall was removed in Keras 2, so maybe I'm missing something (https://github.com/keras-team/keras/issues/5794).
Edit - partial solution (multi-class classification)
@mujjiga's solution works for both binary classification and multi-class classification, but as @P-Gn pointed out, TensorFlow 2's Recall metric supports this out of the box for multi-class classification. E.g.
from tensorflow.keras.metrics import Recall
model = ...
model.compile(loss='categorical_crossentropy', metrics=[
    Recall(class_id=0, name='recall_0'),
    Recall(class_id=1, name='recall_1'),
    Recall(class_id=2, name='recall_2')
])
history = model.fit(...)
plt.plot(history.history['recall_2'])
plt.plot(history.history['val_recall_2'])
plt.show()
We can use sklearn's classification_report and a Keras Callback to achieve this.
Working code sample (with comments)
import tensorflow as tf
import keras
from tensorflow.python.keras.layers import Dense, Input
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.callbacks import Callback
from sklearn.metrics import recall_score, classification_report
from sklearn.datasets import make_classification
import numpy as np
import matplotlib.pyplot as plt
# Model -- Binary classifier
binary_model = Sequential()
binary_model.add(Dense(16, input_shape=(2,), activation='relu'))
binary_model.add(Dense(8, activation='relu'))
binary_model.add(Dense(1, activation='sigmoid'))
binary_model.compile('adam', loss='binary_crossentropy')
# Model -- Multiclass classifier
multiclass_model = Sequential()
multiclass_model.add(Dense(16, input_shape=(2,), activation='relu'))
multiclass_model.add(Dense(8, activation='relu'))
multiclass_model.add(Dense(3, activation='softmax'))
multiclass_model.compile('adam', loss='categorical_crossentropy')
# callback to find metrics at epoch end
class Metrics(Callback):
    def __init__(self, x, y):
        self.x = x
        self.y = y if (y.ndim == 1 or y.shape[1] == 1) else np.argmax(y, axis=1)
        self.reports = []

    def on_epoch_end(self, epoch, logs={}):
        y_hat = np.asarray(self.model.predict(self.x))
        y_hat = np.where(y_hat > 0.5, 1, 0) if (y_hat.ndim == 1 or y_hat.shape[1] == 1) else np.argmax(y_hat, axis=1)
        report = classification_report(self.y, y_hat, output_dict=True)
        self.reports.append(report)
        return

    # Utility method
    def get(self, metrics, of_class):
        return [report[str(of_class)][metrics] for report in self.reports]
# Generate some train data (2 class) and train
x, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
metrics_binary = Metrics(x,y)
binary_model.fit(x, y, epochs=30, callbacks=[metrics_binary])
# Generate some train data (3 class) and train
x, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1, n_classes=3)
y = keras.utils.to_categorical(y,3)
metrics_multiclass = Metrics(x,y)
multiclass_model.fit(x, y, epochs=30, callbacks=[metrics_multiclass])
# Plotting
plt.close('all')
plt.plot(metrics_binary.get('recall',0), label='Class 0 recall')
plt.plot(metrics_binary.get('recall',1), label='Class 1 recall')
plt.plot(metrics_binary.get('precision',0), label='Class 0 precision')
plt.plot(metrics_binary.get('precision',1), label='Class 1 precision')
plt.plot(metrics_binary.get('f1-score',0), label='Class 0 f1-score')
plt.plot(metrics_binary.get('f1-score',1), label='Class 1 f1-score')
plt.legend(loc='lower right')
plt.show()
plt.close('all')
for m in ['recall', 'precision', 'f1-score']:
    for c in [0, 1, 2]:
        plt.plot(metrics_multiclass.get(m, c), label='Class {0} {1}'.format(c, m))
plt.legend(loc='lower right')
plt.show()
Output
Advantages:
classification_report provides lots of metrics
Can calculate metrics on validation data or training data by passing the corresponding data to the Metrics constructor.
In TF2, tf.keras.metrics.Recall gained a class_id argument that enables exactly this. Example using FashionMNIST:
import tensorflow as tf
(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None].astype('float32') / 255
y_train = tf.keras.utils.to_categorical(y_train)
input_shape = x_train.shape[1:]
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPool2D(pool_size=2),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
tf.keras.layers.MaxPool2D(pool_size=2),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=256, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(units=10, activation='softmax')])
model.compile(loss='categorical_crossentropy', optimizer='Adam',
metrics=[tf.keras.metrics.Recall(class_id=i) for i in range(10)])
model.fit(x_train, y_train, batch_size=128, epochs=50)
In TF 1.13, tf.keras.metrics.Recall does not have this class_id argument, but it can be added by subclassing (something that, somewhat surprisingly, seems impossible in the alpha release of TF2).
class Recall(tf.keras.metrics.Recall):
    def __init__(self, *, class_id, **kwargs):
        super().__init__(**kwargs)
        self.class_id = class_id

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = y_true[:, self.class_id]
        y_pred = tf.cast(tf.equal(
            tf.math.argmax(y_pred, axis=-1), self.class_id), dtype=tf.float32)
        return super().update_state(y_true, y_pred, sample_weight)
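Usage then mirrors the TF2 example above, e.g. (the metric names here are only illustrative):
model.compile(loss='categorical_crossentropy', optimizer='Adam',
              metrics=[Recall(class_id=i, name='recall_{}'.format(i)) for i in range(10)])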
There are multiple ways to do this but using a callback seems the best and most kerasy way of doing it. One side-note before I show you how:
I am also not clear on if I can use Keras metrics (as they are
calculated at the end of each batch and then averaged) or if I need to
use Keras callbacks (which can run at the end of each epoch).
This is not true. Keras' callbacks can use the following methods:
on_epoch_begin: called at the beginning of every epoch.
on_epoch_end: called at the end of every epoch.
on_batch_begin: called at the beginning of every batch.
on_batch_end: called at the end of every batch.
on_train_begin: called at the beginning of model training.
on_train_end: called at the end of model training.
This is true regardless of whether you are using keras or tf.keras.
Below you can find my implementation of a custom callback.
class RecallHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.recall = {}

    def on_epoch_end(self, epoch, logs={}):
        # Compute and store recall for each class here.
        self.recall[...] = 42

history = RecallHistory()
model.fit(..., callbacks=[history])
print(history.recall)
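One way the placeholder could be filled in, assuming held-out data (x_val, y_val) with integer class labels is handed to the callback and using sklearn's recall_score (a sketch, not the only option):
import numpy as np
from sklearn.metrics import recall_score

class RecallHistory(keras.callbacks.Callback):
    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val          # integer class labels
        self.recall = {}

    def on_epoch_end(self, epoch, logs={}):
        y_pred = np.argmax(self.model.predict(self.x_val), axis=-1)
        # per-class recall for this epoch, appended class by class
        for label, value in enumerate(recall_score(self.y_val, y_pred, average=None)):
            self.recall.setdefault(label, []).append(value)

history = RecallHistory(x_val, y_val)
model.fit(..., callbacks=[history])
print(history.recall)   # {0: [...], 1: [...], ...}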