Tensorboard loss plots go back in time (Keras) - python

I have a similar problem to the one described here, only I am using Keras with a TensorFlow backend rather than TensorFlow directly.
I have trained 18 MLP models for time series forecasting with different meta parameters, all created with the same basic architecture. The 3 meta parameters I am scanning are the lookahead that the model uses for prediction, the depth of the model, and whether I am using L2 regularization.
model = Sequential()
# input_shape should be a 3D tensor with shape (batch_size, timesteps, input_dim)
model.add(Flatten())
# hidden layer sizes should drop gradually from 256 to 2*lookahead
hidden_layer_sizes = [int(256 - i * (256 - 2 * lookahead) / depth) for i in range(depth)]
for hidden_layer_size in hidden_layer_sizes:
    if regularization:
        model.add(Dense(hidden_layer_size, kernel_initializer="he_normal",
                        kernel_regularizer=regularizers.l2(0.01), activation=activations))
    else:
        model.add(Dense(hidden_layer_size, kernel_initializer="he_normal",
                        activation=activations))
model.add(Dense(2 * lookahead))
loss = losses.mean_squared_error
model.compile(loss=loss, optimizer=self.kwargs["optimizer"], metrics=['mae'])
Each model's TensorBoard data is saved in a separate folder through the relevant Keras callback:
callback_tensorboard = TensorBoard(log_dir=log_dir,
                                   histogram_freq=5,
                                   write_graph=False,
                                   write_grads=True,
                                   write_images=False)
but for some reason, 3 out of the 18 models have two TensorBoard files saved instead of one, and the resulting graphs show this weird phenomenon of progressing backwards through time.
Why does this happen? And other than deleting the second TensorBoard file, what can I do to prevent it?

It happens because every time you call model.fit() Keras reinitializes the internal epoch counter. You can, however, keep track of it yourself. For example, compile the model and initialize a counter:
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=tf.losses.mean_squared_error)
init_epoch = 0
Then, when you call fit, pass the additional initial_epoch argument, which tells Keras which epoch the current call starts from:
epoch_count = 200
init_epoch += epoch_count
history = model.fit(x_train, y_train,
                    batch_size=256,
                    epochs=init_epoch,
                    validation_data=(x_test, y_test),
                    verbose=1,
                    initial_epoch=init_epoch - epoch_count,
                    )
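For completeness, a minimal sketch of the same pattern applied over several successive fit() calls, with a TensorBoard callback attached so the logged step keeps increasing; x_train, y_train, x_test, y_test and log_dir are placeholders standing in for the question's setup:
from tensorflow.keras.callbacks import TensorBoard

tb = TensorBoard(log_dir=log_dir)
epoch_count = 200   # epochs per training phase
init_epoch = 0      # global epoch counter, kept across fit() calls

for _ in range(3):  # e.g. three successive training phases
    init_epoch += epoch_count
    model.fit(x_train, y_train,
              batch_size=256,
              epochs=init_epoch,                        # absolute end epoch for this phase
              initial_epoch=init_epoch - epoch_count,   # absolute start epoch for this phase
              validation_data=(x_test, y_test),
              callbacks=[tb],
              verbose=1)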

Related

TensorFlow Extended | Trainer Not Warm Starting With GenericExecutor & Keras Model

I'm presently trying to get a Trainer component of a TFX pipeline to warm-start from a previous run of the same pipeline. The use case is:
Run the pipeline once, produce a model.
As new data comes in, train the existing model with the new data.
I am aware the ResolverNode component is designed for this purpose, so you can see how I utilize it below:
# detect the previously trained model
latest_model_resolver = ResolverNode(
    instance_name='latest_model_resolver',
    resolver_class=latest_artifacts_resolver.LatestArtifactsResolver,
    latest_model=Channel(type=Model))
context.run(latest_model_resolver)

# set prior model as base_model
train_file = 'tfx_modules/recommender_train.py'
trainer = Trainer(
    module_file=os.path.abspath(train_file),
    custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
    transformed_examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000),
    base_model=latest_model_resolver.outputs['latest_model'])
The components above run successfully, and the ResolverNode is able to detect the latest model from prior pipeline runs. No error is thrown; however, when running context.run(trainer), the model loss basically begins where it started the first time. After the model's first run, it finishes with a training loss of ~0.1; upon the second run (with the supposed warm start), it restarts at ~18.2.
This leads me to believe all weights were re-initialized, which I don't believe should have occurred. Below are the relevant model construction functions:
def build_keras_model():
    """build keras model"""
    embedding_max_values = load(open(os.path.abspath('tfx-example/user_artifacts/embedding_max_dict.pkl'), 'rb'))
    embedding_dimensions = dict([(key, 20) for key in embedding_max_values.keys()])
    embedding_pairs = [recommender.EmbeddingPair(embedding_name=feature,
                                                 embedding_dimension=embedding_dimensions[feature],
                                                 embedding_max_val=embedding_max_values[feature])
                       for feature in recommender_constants.univalent_features]
    numeric_inputs = []
    for num_feature in recommender_constants.numeric_features:
        numeric_inputs.append(keras.Input(shape=(1,), name=num_feature))
    input_layers = numeric_inputs + [elem for pair in embedding_pairs for elem in pair.input_layers]
    pre_concat_layers = numeric_inputs + [elem for pair in embedding_pairs for elem in pair.embedding_layers]
    concat = keras.layers.Concatenate()(pre_concat_layers) if len(pre_concat_layers) > 1 else pre_concat_layers[0]
    layer_1 = keras.layers.Dense(64, activation='relu', name='layer1')(concat)
    output = keras.layers.Dense(1, kernel_initializer='lecun_uniform', name='out')(layer_1)
    model = keras.models.Model(input_layers, outputs=output)
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model

def run_fn(fn_args: TrainerFnArgs):
    """function for the Trainer component"""
    tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
    train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
                              tf_transform_output, 40)
    eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
                             tf_transform_output, 40)
    model = build_keras_model()
    tensorboard_callback = tf.keras.callbacks.TensorBoard(
        log_dir=fn_args.model_run_dir, update_freq='epoch', histogram_freq=1,
        write_images=True)
    model.fit(train_dataset, steps_per_epoch=fn_args.train_steps, validation_data=eval_dataset,
              validation_steps=fn_args.eval_steps, callbacks=[tensorboard_callback],
              epochs=5)
    signatures = {
        'serving_default':
            _get_serve_tf_examples_fn(model, tf_transform_output).get_concrete_function(tf.TensorSpec(
                shape=[None],
                dtype=tf.string,
                name='examples')
            )
    }
    model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
To research the problem, I have perused:
Warm Start Example From TFX
https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_warmstart.py
However, this guide uses the Estimator component instead of the Keras components. That component has a warm_start_from initialization parameter which I couldn't find for the Keras equivalent.
I suspect:
Either the warm-start functionality is only available for Estimator components and won't take effect even if base_model is set for Keras components.
Or I am somehow telling the model to re-initialize weights even after successfully loading the prior model; in that case I would love a pointer as to where that's happening.
Any assistance would be great! Much thanks.
With Keras models you have to load the model first using the base model path, then you can continue training from there instead of building a new model.
Your Trainer component looks correct, but in run_fn do the following instead:
def run_fn(fn_args: FnArgs):
    model = tf.keras.models.load_model(fn_args.base_model)
    model.fit(train_dataset, steps_per_epoch=fn_args.train_steps, validation_data=eval_dataset,
              validation_steps=fn_args.eval_steps, callbacks=[tensorboard_callback],
              epochs=5)
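A slightly fuller sketch of the same idea, assuming fn_args.base_model carries the resolved model path (and is None on the first pipeline run), and reusing the question's own helpers build_keras_model and _input_fn:
def run_fn(fn_args: FnArgs):
    tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
    train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
                              tf_transform_output, 40)
    eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
                             tf_transform_output, 40)

    if fn_args.base_model is not None:
        # Warm start: continue training the previously exported SavedModel.
        model = tf.keras.models.load_model(fn_args.base_model)
    else:
        # First run: no prior model to warm-start from, build a fresh one.
        model = build_keras_model()

    model.fit(train_dataset,
              steps_per_epoch=fn_args.train_steps,
              validation_data=eval_dataset,
              validation_steps=fn_args.eval_steps,
              epochs=5)

    # Keep the serving signature from the original run_fn when saving.
    model.save(fn_args.serving_model_dir, save_format='tf')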

Computing the loss (MSE) for every iteration and time Tensorflow

I want to use Tensorboard to plot the mean squared error (y-axis) for every iteration over a given time frame (x-axis), say 5 minutes.
However, I can only plot the MSE for every epoch and set a callback at 5 minutes. This does not, however, solve my problem.
I have searched the internet for solutions on how to set a maximum number of iterations rather than epochs when calling model.fit, but without luck. I know iterations are the number of batches needed to complete one epoch, but since I want to tune the batch_size, I prefer to work in terms of iterations.
My code currently looks like the following:
input_size = len(train_dataset.keys())
output_size = 10
hidden_layer_size = 250
n_epochs = 3
weights_initializer = keras.initializers.GlorotUniform()

# A function that trains and validates the model and returns the MSE
def train_val_model(run_dir, hparams):
    model = keras.models.Sequential([
        # Layer to be used as an entry point into a Network
        keras.layers.InputLayer(input_shape=[len(train_dataset.keys())]),
        # Dense layer 1
        keras.layers.Dense(hidden_layer_size, activation='relu',
                           kernel_initializer=weights_initializer,
                           name='Layer_1'),
        # Dense layer 2
        keras.layers.Dense(hidden_layer_size, activation='relu',
                           kernel_initializer=weights_initializer,
                           name='Layer_2'),
        # activation function is linear since we are doing regression
        keras.layers.Dense(output_size, activation='linear', name='Output_layer')
    ])
    # Use the stochastic gradient descent optimizer but change batch_size to get BSG, SGD or MiniSGD
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.0,
                                        nesterov=False)
    # Compiling the model
    model.compile(optimizer=optimizer,
                  loss='mean_squared_error',  # Computes the mean of squares of errors between labels and predictions
                  metrics=['mean_squared_error'])  # Computes the mean squared error between y_true and y_pred
    # initialize TimeStopping callback
    time_stopping_callback = tfa.callbacks.TimeStopping(seconds=5*60, verbose=1)
    # Training the network
    history = model.fit(normed_train_data, train_labels,
                        epochs=n_epochs,
                        batch_size=hparams['batch_size'],
                        verbose=1,
                        #validation_split=0.2,
                        callbacks=[tf.keras.callbacks.TensorBoard(run_dir + "/Keras"), time_stopping_callback])
    return history

#train_val_model("logs/sample", {'batch_size': len(normed_train_data)})
train_val_model("logs/sample1", {'batch_size': 1})
%tensorboard --logdir_spec=BSG:logs/sample,SGD:logs/sample1
resulting in:
The desired output should look something like this:
The reason you can't do it for every iteration is that the loss is only computed and logged at the end of each epoch. If you want to tune the batch size, run for a set number of epochs and evaluate. Start from 16, jump in powers of 2, and see how far you can push your network. Usually a bigger batch size is said to improve performance, but it is not substantial enough to focus on it alone; focus on other parts of the network first.
The answer was actually quite simple.
tf.keras.callbacks.TensorBoard has an update_freq argument that lets you control when losses and metrics are written to TensorBoard. The default is 'epoch', but you can change it to 'batch', or to an integer if you want to write to TensorBoard every n batches. See the documentation for more information: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard
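For instance, a minimal sketch reusing the question's own variables (run_dir, normed_train_data, train_labels, n_epochs, hparams, time_stopping_callback):
# Write losses/metrics every 50 batches instead of once per epoch.
# (In recent TF versions an integer update_freq is counted in batches;
# in some older versions it was counted in samples.)
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=run_dir + "/Keras",
    update_freq=50)   # or update_freq='batch' to write after every batch

history = model.fit(normed_train_data, train_labels,
                    epochs=n_epochs,
                    batch_size=hparams['batch_size'],
                    callbacks=[tensorboard_callback, time_stopping_callback])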

Different results when training a model with same initial weights and same data

I'm trying to do some transfer learning to adapt ResNet50 to my data set.
The problem is that when I run the training again with the same parameters, I get a different result (loss and accuracy for the train and val sets, so I guess also different weights and, as a result, a different error rate on the test set).
Here is my model:
The weights parameter is 'imagenet'; the other parameter values aren't really important, the important thing is that they are the same for each run...
def ImageNet_model(train_data, train_labels, param_dict, num_classes):
    X_datagen = get_train_augmented()
    validatin_cut_point = math.ceil(len(train_data) * (1 - param_dict["validation_split"]))
    base_model = applications.resnet50.ResNet50(weights=param_dict["weights"], include_top=False, pooling=param_dict["pooling"],
                                                input_shape=(param_dict["image_size"], param_dict["image_size"], 3))
    # Define the layers in the new classification prediction
    x = base_model.output
    x = Dense(num_classes, activation='relu')(x)  # new FC layer, random init
    predictions = Dense(num_classes, activation='softmax')(x)  # new softmax layer
    model = Model(inputs=base_model.input, outputs=predictions)
    # Freeze layers
    layers_to_freeze = param_dict["freeze"]
    for layer in model.layers[:layers_to_freeze]:
        layer.trainable = False
    for layer in model.layers[layers_to_freeze:]:
        layer.trainable = True
    sgd = optimizers.SGD(lr=param_dict["lr"], momentum=param_dict["momentum"], decay=param_dict["decay"])
    model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
    lables_ints = [y.argmax() for y in np.array(train_labels)]
    class_weights = class_weight.compute_class_weight('balanced',
                                                      np.unique(lables_ints),
                                                      np.array(lables_ints))
    train_generator = X_datagen.flow(np.array(train_data)[0:validatin_cut_point], np.array(train_labels)[0:validatin_cut_point], batch_size=param_dict['batch_size'])
    validation_generator = X_datagen.flow(np.array(train_data)[validatin_cut_point:len(train_data)],
                                          np.array(train_labels)[validatin_cut_point:len(train_data)],
                                          batch_size=param_dict['batch_size'])
    history = model.fit_generator(
        train_generator,
        epochs=param_dict['epochs'],
        steps_per_epoch=validatin_cut_point // param_dict['batch_size'],
        validation_data=validation_generator,
        validation_steps=(len(train_data) - validatin_cut_point) // param_dict['batch_size'],
        shuffle=False,
        class_weight=class_weights)
    graph_of_loss_and_acc(history)
    model.save(param_dict['model_file_name'])
    return model
What can make the output of each run different? Since the initial weights are the same, that can't explain the difference (I also tried freezing some layers, which didn't help). Any ideas?
Thanks!
When you initialize the weights randomly in a Dense layer, the weights are initialized differently across runs and also converge to different local minima.
x = Dense(num_classes, activation='relu')(x) # new FC layer, random init
If you want the output to be the same, you need to initialize the weights with the same values across runs. You can read the details on how to obtain reproducible results with Keras here. These are the steps you need to follow (a combined sketch comes after the list):
Set the PYTHONHASHSEED environment variable to 0
Set random seed for numpy generated random numbers np.random.seed(SEED)
Set random seed for Python generated random numbers random.seed(SEED)
Set random state for tensorflow backend tf.set_random_seed(SEED)
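Put together, a minimal sketch (SEED is an arbitrary constant; tf.set_random_seed is the TF 1.x call, in TF 2.x it is tf.random.set_seed):
import os
import random
import numpy as np
import tensorflow as tf

SEED = 0
os.environ['PYTHONHASHSEED'] = str(SEED)  # ideally set before the interpreter starts
random.seed(SEED)         # Python's built-in RNG
np.random.seed(SEED)      # NumPy RNG
tf.set_random_seed(SEED)  # TensorFlow RNG (tf.random.set_seed(SEED) in TF 2.x)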

Tensorflow model performing significantly worse than Keras model

I was having an issue with my TensorFlow model and decided to try Keras. It appears to me, at least, that I am creating the same model with the same parameters, but the TensorFlow model just outputs the mean value of train_y while the Keras model actually varies according to the input. Am I missing something in my tf.Session? I usually use TensorFlow and have never had a problem like this.
Tensorflow Code:
score_inputs = tf.placeholder(np.float32, shape=(None, 100))
targets = tf.placeholder(np.float32, shape=(None), name="targets")

l2 = tf.contrib.layers.l2_regularizer(0.01)
first_layer = tf.layers.dense(score_inputs, 100, activation=tf.nn.relu, kernel_regularizer=l2)
outputs = tf.layers.dense(first_layer, 1, activation=None, kernel_regularizer=l2)

optimizer = tf.train.AdamOptimizer(0.001)
l2_loss = tf.losses.get_regularization_loss()
loss = tf.reduce_mean(tf.square(tf.subtract(targets, outputs)))
loss += l2_loss
rmse = tf.sqrt(tf.reduce_mean(tf.square(outputs - targets)))
mae = tf.reduce_mean(tf.sqrt(tf.square(outputs - targets)))
training_op = optimizer.minimize(loss)

batch_size = 32
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        avg_train_error = []
        for i in range(len(train_x) // batch_size):
            batch_x = train_x[i*batch_size: (i+1)*batch_size]
            batch_y = train_y[i*batch_size: (i+1)*batch_size]
            _, train_loss = sess.run([training_op, loss], {score_inputs: batch_x, targets: batch_y})

    feed = {score_inputs: test_x, targets: test_y}
    test_loss, test_mae, test_rmse, test_ouputs = sess.run([loss, mae, rmse, outputs], feed)
This has a mean absolute error of 0.682 and root mean squared error of 0.891.
The Keras Code:
inputs = Input(shape=(100,))
hidden = Dense(100, activation="relu", kernel_regularizer = regularizers.l2(0.01))(inputs)
outputs = Dense(1, activation=None, kernel_regularizer = regularizers.l2(0.01))(hidden)
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=keras.optimizers.Adam(lr=0.001), loss='mse', metrics=['mae'])
model.fit(train_x, train_y, batch_size=32, epochs=10, shuffle=False)
keras_pred = model.predict(test_x)
This has a mean absolute error of 0.601 and root mean square error of 0.753.
It appears to me that I am defining the same network in both instances, yet as I said the Tensorflow model only outputs the mean value of train_y, while the Keras model performs a lot better. Any suggestions?
I'm going to try to point out the differences between the two codes.
The Keras documentation here shows that the weights are initialized with 'glorot_uniform', whereas your TensorFlow weights use the default initializer, most probably random, since the documentation doesn't clearly specify what the TensorFlow initialization is. So the initialization is most probably different, and it definitely matters.
The second difference is most probably the data type of the inputs: one is numpy.float32 and the other is Keras' default input type, which again isn't clearly specified by the documentation.
@Priyank Pathak and @lehiester have given some valid points. Taking their suggestions into account, I suggest you change the following things and check again (a sketch of the first point follows the list):
Use same kernel_initializer and data_type
Use more epochs for better generalisation
Seed your random, numpy and tensorflow functions
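As a minimal sketch of the first point, the TF 1.x layers can be given Keras' default Dense initializer (glorot_uniform) explicitly, and the inputs cast to float32, so both models start from the same kind of initialization; the variable names come from the question's TensorFlow code, and the explicit cast assumes train_x/test_x are array-like:
init = tf.glorot_uniform_initializer()  # same default Keras uses for Dense layers

first_layer = tf.layers.dense(score_inputs, 100, activation=tf.nn.relu,
                              kernel_initializer=init, kernel_regularizer=l2)
outputs = tf.layers.dense(first_layer, 1, activation=None,
                          kernel_initializer=init, kernel_regularizer=l2)

# Feed the same dtype to both models.
train_x = np.asarray(train_x, dtype=np.float32)
test_x = np.asarray(test_x, dtype=np.float32)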
There isn't any obvious difference in the models, but the different results could possibly be explained by random variation in training. Especially since you're only training for 10 epochs, the results could be fairly sensitive to the randomly chosen initial weights.
Try running with more epochs (e.g. 1000) and running each model several times (e.g. 5); the average results should be fairly close.

Extracting weights from best Neural Network in Tensorflow/Keras - multiple epochs

I am working on a 1-hidden-layer neural network with 2000 neurons and 8 + constant input neurons for a regression problem.
In particular, as the optimizer I am using RMSprop with learning rate = 0.001, ReLU activation from input to hidden layer, and linear activation from hidden to output. I am also using mini-batch gradient descent (32 observations) and running the model for 2000 epochs, that is, epochs = 2000.
My goal is, after training, to extract the weights from the best neural network out of the 2000 epochs (where, after many trials, the best one is never the last, and by best I mean the one that leads to the smallest MSE).
Using save_weights('my_model_2.h5', save_format='h5') actually works, but to my understanding it extracts the weights from the last epoch, while I want those from the epoch in which the NN performed best. Please find the code I have written:
def build_first_NN():
    model = keras.Sequential([
        layers.Dense(2000, activation=tf.nn.relu, input_shape=[len(X_34.keys())]),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error']
                  )
    return model

first_NN = build_first_NN()
history_firstNN_all_nocv = first_NN.fit(X_34,
                                        y_34,
                                        epochs=2000)
first_NN.save_weights('my_model_2.h5', save_format='h5')

trained_weights_path = 'C:/Users/Myname/Desktop/otherfolder/Data/my_model_2.h5'
trained_weights = h5py.File(trained_weights_path, 'r')
weights_0 = pd.DataFrame(trained_weights['dense/dense/kernel:0'][:])
weights_1 = pd.DataFrame(trained_weights['dense_1/dense_1/kernel:0'][:])
The weights extracted this way should be those from the last of the 2000 epochs: how can I instead get those from the epoch in which the MSE was smallest?
Looking forward to any comments.
EDIT: SOLVED
Building on the suggestions received, and for general interest, here is how I updated my code to meet my goal:
# build_first_NN() as defined before
first_NN = build_first_NN()

trained_weights_path = 'C:/Users/Myname/Desktop/otherfolder/Data/my_model_2.h5'
checkpoint = ModelCheckpoint(trained_weights_path,
                             monitor='mean_squared_error',
                             verbose=1,
                             save_best_only=True,
                             mode='min')
history_firstNN_all_nocv = first_NN.fit(X_34,
                                        y_34,
                                        epochs=2000,
                                        callbacks=[checkpoint])

trained_weights = h5py.File(trained_weights_path, 'r')
weights_0 = pd.DataFrame(trained_weights['model_weights/dense/dense/kernel:0'][:])
weights_1 = pd.DataFrame(trained_weights['model_weights/dense_1/dense_1/kernel:0'][:])
Use the ModelCheckpoint callback from Keras.
from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(filepath, monitor='val_mean_squared_error', verbose=1, save_best_only=True, mode='min')
Use this as a callback in your model.fit(). This will always save the model with the best monitored value (here, the lowest MSE on the validation set) at the location specified by filepath.
You can find the documentation here.
Of course, you need validation data during training for this. Otherwise, you can probably check for the lowest training MSE by writing a callback function yourself, as sketched below.
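A minimal sketch of such a custom callback, monitoring the training loss (which is the MSE here, since loss='mean_squared_error'); the class name is made up for illustration:
class BestTrainingWeights(tf.keras.callbacks.Callback):
    # Keep the weights from the epoch with the lowest training loss.

    def on_train_begin(self, logs=None):
        self.best_loss = float('inf')
        self.best_weights = None

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get('loss')  # training MSE for this epoch
        if current is not None and current < self.best_loss:
            self.best_loss = current
            self.best_weights = self.model.get_weights()

    def on_train_end(self, logs=None):
        if self.best_weights is not None:
            self.model.set_weights(self.best_weights)  # restore the best epoch's weights

# usage: first_NN.fit(X_34, y_34, epochs=2000, callbacks=[BestTrainingWeights()])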
