I am using Keras with the TensorFlow backend on 2 GPUs. I am using a generator (keras.utils.Sequence) to load my data in batches (BS = 64), so I am calling the fit_generator method with my training and validation data and step counts.
I noticed strange behaviour starting from the 2nd epoch onwards. Basically, the first 3 steps of each epoch complete in just 8-9 seconds each, and then the steps start taking longer and longer (as the growing ETA in the log shows). The logs are the following:
Epoch 00001: val_acc improved from -inf to 0.46875, saving model to data/subs_best_model.h5
Epoch 2/32
1/29 [>.............................] - ETA: 8s - loss: 1.0664 - acc: 0.5000
2/29 [=>............................] - ETA: 8s - loss: 1.1384 - acc: 0.4531
3/29 [==>...........................] - ETA: 9s - loss: 1.0915 - acc: 0.5052
4/29 [===>..........................] - ETA: 42:03 - loss: 1.1064 - acc: 0.5117
5/29 [====>.........................] - ETA: 56:02 - loss: 1.1173 - acc: 0.4969
6/29 [=====>........................] - ETA: 1:03:13 - loss: 1.0964 - acc: 0.4974
7/29 [======>.......................] - ETA: 1:06:45 - loss: 1.0740 - acc: 0.5067
8/29 [=======>......................] - ETA: 1:08:35 - loss: 1.0592 - acc: 0.5195
9/29 [========>.....................] - ETA: 1:08:53 - loss: 1.0580 - acc: 0.5191
Do you know what could cause this anomaly/strange behaviour?
EDIT:
My DataGenerator is inspired by this implementation
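That implementation is not reproduced here, but it is essentially a keras.utils.Sequence subclass that builds one batch per __getitem__ call. A minimal sketch of that pattern follows; the constructor arguments and the loading logic are illustrative placeholders, not my actual code:

import numpy as np
import keras

class DataGenerator(keras.utils.Sequence):
    # Minimal Sequence sketch: one batch of samples is built per __getitem__ call.
    def __init__(self, list_IDs, batch_size=64, dim=(1, 16000), n_classes=10, shuffle=True):
        self.list_IDs = list_IDs          # identifiers of the samples on disk
        self.batch_size = batch_size
        self.dim = dim
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        # number of batches per epoch
        return len(self.list_IDs) // self.batch_size

    def __getitem__(self, index):
        # select the IDs belonging to this batch and load them
        idx = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]
        return self.__data_generation([self.list_IDs[k] for k in idx])

    def on_epoch_end(self):
        # reshuffle the sample order after every epoch
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, batch_ids):
        # placeholder loading logic; the real generator reads each sample from disk here
        X = np.empty((self.batch_size, *self.dim))
        y = np.empty((self.batch_size,), dtype=int)
        for i, ID in enumerate(batch_ids):
            X[i] = np.zeros(self.dim)     # e.g. np.load('data/' + ID + '.npy')
            y[i] = 0
        return X, keras.utils.to_categorical(y, num_classes=self.n_classes)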
The code I use for the fit_generator is as follows:
params = {'batch_size': TrainConfig.BATCH_SIZE,
          'dim': (TrainConfig.BATCH_SIZE, 1, TrainConfig.SAMPLES),
          'labels_dim': (TrainConfig.BATCH_SIZE,),
          'n_classes': TrainConfig.OUTPUT_DIM}

training_generator = DataGenerator(train_set, **params)
validation_generator = DataGenerator(val_set, **params)

training_steps_per_epoch = int(1. * len(train_set) / batch_size)
validation_steps_per_epoch = int(1. * len(val_set) / batch_size)

history = model.fit_generator(generator=training_generator,
                              verbose=1,
                              use_multiprocessing=False,
                              workers=1,
                              steps_per_epoch=training_steps_per_epoch,
                              epochs=epochs,
                              validation_data=validation_generator,
                              validation_steps=validation_steps_per_epoch,
                              callbacks=callbacks)
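To check whether the generator itself is the bottleneck (one common explanation is that the first few batches come straight out of fit_generator's already-filled prefetch queue, max_queue_size defaults to 10, after which each step has to wait for the generator), here is a small diagnostic sketch, separate from the training script above, that times __getitem__ in isolation:

import time

gen = DataGenerator(train_set, **params)

# Time a handful of batches outside fit_generator to see how long
# one batch really takes to produce.
for i in range(5):
    t0 = time.perf_counter()
    X, y = gen[i]
    print('batch %d took %.3f s' % (i, time.perf_counter() - t0))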
Related
I'm having trouble getting my model to converge. It is based on a paper I found that uses an SVM as the top of a ResNet, but it's just not working. I read that the RandomFourierFeatures layer can be used as a quasi-substitute for an SVM in Keras.
# Instantiate ResNet50 architecture
with strategy.scope():
    t = tf.keras.Input(shape=(256, 256, 3))
    basemodel = ResNet50(
        include_top=False,
        input_tensor=t,
        weights='imagenet'
    )

# Create ResNet50 (RGB channel)
# Pretrained on ImageNet
# Input: RGB Image ==> Output: 2048 element vector
with strategy.scope():
    rgb_model = basemodel.output
    rgb_model = AveragePooling2D(pool_size=(7, 7))(rgb_model)
    rgb_model = Flatten()(rgb_model)
    rgb_model = Dense(1000)(rgb_model)
    rgb_model = RandomFourierFeatures(output_dim=2048, scale=5.0,
                                      kernel_initializer="gaussian",
                                      trainable=True)(rgb_model)
    rgb_model = Dense(len(classes), activation="linear")(rgb_model)

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    model = tf.keras.Model(inputs=basemodel.input, outputs=rgb_model)
    model.compile(optimizer=optimizer,
                  loss='hinge',
                  metrics=[tf.keras.metrics.CategoricalAccuracy(name="acc")])

history = model.fit(train_dataset,
                    epochs=epochs,
                    steps_per_epoch=steps_per_epoch,
                    validation_data=val_dataset,
                    validation_steps=validation_steps)
This is the output I receive
Epoch 1/50
2/234 [..............................] - ETA: 44:41 - loss: 1.4583 - acc: 0.0781 WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0051s vs `on_train_batch_end` time: 0.0790s). Check your callbacks.
WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0051s vs `on_train_batch_end` time: 0.0790s). Check your callbacks.
234/234 [==============================] - ETA: 0s - loss: 1.3060 - acc: 0.0452WARNING:tensorflow:Callbacks method `on_test_batch_end` is slow compared to the batch time (batch time: 0.0045s vs `on_test_batch_end` time: 0.0343s). Check your callbacks.
WARNING:tensorflow:Callbacks method `on_test_batch_end` is slow compared to the batch time (batch time: 0.0045s vs `on_test_batch_end` time: 0.0343s). Check your callbacks.
234/234 [==============================] - 75s 320ms/step - loss: 1.3060 - acc: 0.0452 - val_loss: 1.1811 - val_acc: 0.0365
Epoch 2/50
234/234 [==============================] - 21s 91ms/step - loss: 1.1190 - acc: 0.0527 - val_loss: 1.0879 - val_acc: 0.0469
Epoch 3/50
234/234 [==============================] - 21s 92ms/step - loss: 1.0570 - acc: 0.0513 - val_loss: 1.0394 - val_acc: 0.0521
Epoch 4/50
234/234 [==============================] - 21s 91ms/step - loss: 1.0192 - acc: 0.0536 - val_loss: 1.0011 - val_acc: 0.0938
Epoch 5/50
234/234 [==============================] - 21s 91ms/step - loss: 1.0005 - acc: 0.0612 - val_loss: 1.0003 - val_acc: 0.0729
Epoch 6/50
234/234 [==============================] - 21s 92ms/step - loss: 1.0003 - acc: 0.0612 - val_loss: 1.0002 - val_acc: 0.0521
Epoch 7/50
234/234 [==============================] - 22s 92ms/step - loss: 1.0002 - acc: 0.0646 - val_loss: 1.0001 - val_acc: 0.0573
Below is my TensorFlow/Python code, which should end training when accuracy reaches 99% via the callback function. But the callback is never invoked. Where is the problem?
def train_mnist():

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoc_end(self, epoch, logs={}):
            if (logs.get('accuracy') > 0.99):
                print("Reached 99% accuracy so cancelling training!")
                self.model.stop_training = True

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data(path=path)
    x_train = x_train / 255.0
    x_test = x_test / 255.0

    callbacks = myCallback()

    model = tf.keras.models.Sequential([
        # YOUR CODE SHOULD START HERE
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(256, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # model fitting
    history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
    return history.epoch, history.history['acc'][-1]
You're misspelling epoch (on_epoc_end should be on_epoch_end), and you should also return 'accuracy', not 'acc':
from tensorflow.keras.layers import Input, Dense, Add, Activation, Flatten
from tensorflow.keras.models import Model, Sequential
import tensorflow as tf
import numpy as np
import random
from tensorflow.python.keras.layers import Input, GaussianNoise, BatchNormalization


def train_mnist():

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            print(logs.get('accuracy'))
            if (logs.get('accuracy') > 0.9):
                print("Reached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train / 255.0
    x_test = x_test / 255.0

    callbacks = myCallback()

    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(256, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # model fitting
    history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
    return history.epoch, history.history['accuracy'][-1]


train_mnist()
Epoch 1/10
1859/1875 [============================>.] - ETA: 0s - loss: 0.2273 - accuracy: 0.93580.93586665391922
Reached 90% accuracy so cancelling training!
1875/1875 [==============================] - 3s 2ms/step - loss: 0.2265 - accuracy: 0.9359
([0], 0.93586665391922)
Unfortunately I don't have enough reputation to comment on one of the answers above, but I wanted to point out that on_epoch_end is called directly by TensorFlow when an epoch ends. Here we are just implementing it inside a custom Python class, and the underlying framework invokes it automatically. (I'm sourcing this from the TensorFlow in Practice deeplearning.ai course, week 2, on Coursera.) It seems the issues with the callback above stem from something very similar.
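For reference, a minimal sketch of such a callback, assuming it simply prints a marker like the "Inside callback" text visible in the log below (my exact callback is not reproduced here):

import tensorflow as tf

class EpochEndMarker(tf.keras.callbacks.Callback):
    # Keras calls this automatically at the end of every epoch; nothing invokes it by hand.
    def on_epoch_end(self, epoch, logs=None):
        print('Inside callback', end='')
        # a stopping condition would go here, e.g. checking logs.get('acc')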
Here's some proof from my most recent run:
Epoch 1/20
59968/60000 [============================>.] - ETA: 0s - loss: 1.0648 - acc: 0.9491Inside callback
60000/60000 [==============================] - 34s 575us/sample - loss: 1.0645 - acc: 0.9491
Epoch 2/20
59968/60000 [============================>.] - ETA: 0s - loss: 0.0560 - acc: 0.9825Inside callback
60000/60000 [==============================] - 35s 583us/sample - loss: 0.0560 - acc: 0.9825
Epoch 3/20
59840/60000 [============================>.] - ETA: 0s - loss: 0.0457 - acc: 0.9861Inside callback
60000/60000 [==============================] - 31s 512us/sample - loss: 0.0457 - acc: 0.9861
Epoch 4/20
59840/60000 [============================>.] - ETA: 0s - loss: 0.0428 - acc: 0.9873Inside callback
60000/60000 [==============================] - 32s 528us/sample - loss: 0.0428 - acc: 0.9873
Epoch 5/20
59808/60000 [============================>.] - ETA: 0s - loss: 0.0314 - acc: 0.9909Inside callback
60000/60000 [==============================] - 30s 507us/sample - loss: 0.0315 - acc: 0.9909
Epoch 6/20
59840/60000 [============================>.] - ETA: 0s - loss: 0.0271 - acc: 0.9924Inside callback
60000/60000 [==============================] - 32s 532us/sample - loss: 0.0270 - acc: 0.9924
Epoch 7/20
59968/60000 [============================>.] - ETA: 0s - loss: 0.0238 - acc: 0.9938Inside callback
60000/60000 [==============================] - 33s 555us/sample - loss: 0.0238 - acc: 0.9938
Epoch 8/20
59936/60000 [============================>.] - ETA: 0s - loss: 0.0255 - acc: 0.9934Inside callback
60000/60000 [==============================] - 33s 550us/sample - loss: 0.0255 - acc: 0.9934
Epoch 9/20
59872/60000 [============================>.] - ETA: 0s - loss: 0.0195 - acc: 0.9953Inside callback
60000/60000 [==============================] - 33s 557us/sample - loss: 0.0194 - acc: 0.9953
Epoch 10/20
59744/60000 [============================>.] - ETA: 0s - loss: 0.0186 - acc: 0.9959Inside callback
60000/60000 [==============================] - 33s 551us/sample - loss: 0.0185 - acc: 0.9959
Epoch 11/20
59968/60000 [============================>.] - ETA: 0s - loss: 0.0219 - acc: 0.9954Inside callback
60000/60000 [==============================] - 32s 530us/sample - loss: 0.0219 - acc: 0.9954
Epoch 12/20
59936/60000 [============================>.] - ETA: 0s - loss: 0.0208 - acc: 0.9960Inside callback
60000/60000 [==============================] - 33s 558us/sample - loss: 0.0208 - acc: 0.9960
Epoch 13/20
59872/60000 [============================>.] - ETA: 0s - loss: 0.0185 - acc: 0.9968Inside callback
60000/60000 [==============================] - 31s 520us/sample - loss: 0.0184 - acc: 0.9968
Epoch 14/20
59872/60000 [============================>.] - ETA: 0s - loss: 0.0181 - acc: 0.9970Inside callback
60000/60000 [==============================] - 35s 587us/sample - loss: 0.0181 - acc: 0.9970
Epoch 15/20
59936/60000 [============================>.] - ETA: 0s - loss: 0.0193 - acc: 0.9971Inside callback
60000/60000 [==============================] - 33s 555us/sample - loss: 0.0192 - acc: 0.9972
Epoch 16/20
59968/60000 [============================>.] - ETA: 0s - loss: 0.0176 - acc: 0.9972Inside callback
60000/60000 [==============================] - 33s 558us/sample - loss: 0.0176 - acc: 0.9972
Epoch 17/20
59968/60000 [============================>.] - ETA: 0s - loss: 0.0183 - acc: 0.9974Inside callback
60000/60000 [==============================] - 33s 555us/sample - loss: 0.0182 - acc: 0.9974
Epoch 18/20
59872/60000 [============================>.] - ETA: 0s - loss: 0.0225 - acc: 0.9970Inside callback
60000/60000 [==============================] - 34s 570us/sample - loss: 0.0224 - acc: 0.9970
Epoch 19/20
59808/60000 [============================>.] - ETA: 0s - loss: 0.0185 - acc: 0.9975Inside callback
60000/60000 [==============================] - 33s 548us/sample - loss: 0.0185 - acc: 0.9975
Epoch 20/20
59776/60000 [============================>.] - ETA: 0s - loss: 0.0150 - acc: 0.9979Inside callback
60000/60000 [==============================] - 34s 565us/sample - loss: 0.0149 - acc: 0.9979
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-25-1ff3c304aec3> in <module>
----> 1 _, _ = train_mnist_conv()
<ipython-input-24-b469df35dac0> in train_mnist_conv()
38 )
39 # model fitting
---> 40 return history.epoch, history.history['accuracy'][-1]
41
KeyError: 'accuracy'
The KeyError happens because the History object has no 'accuracy' key (this Keras version logs the metric as 'acc'), so I wanted to address that as a source of concern before continuing on.
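A quick way to sidestep that is to inspect which metric names this Keras version actually recorded (older versions log 'acc', newer ones 'accuracy') and index with whichever key is present; a small sketch:

# See exactly which keys were recorded for this run.
print(history.history.keys())   # e.g. dict_keys(['loss', 'acc']) on older Keras versions

# Use whichever accuracy key exists instead of hard-coding one of them.
acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'
final_acc = history.history[acc_key][-1]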
Currently I'm training my model using the following fit_generator call:
history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=1,
steps_per_epoch=num_train_images // batch_size,
validation_data=(x_val, y_val_))
I'm also using the Docker image tensorflow/tensorflow:1.15.0-gpu-py3-jupyter from Docker Hub.
Here is the current output:
Epoch 38/40
61/62 [============================>.] - ETA: 0s - loss: 0.4109 - acc: 0.9536Epoch 1/40
420/62 [===========================================================================================================================================================================================================] - 2s 4ms/sample - loss: 0.6136 - acc: 0.7190
However in Colaboratory, the output is this:
Epoch 38/40
62/62 [==============================] - 13s 212ms/step - loss: 0.4069 - acc: 0.8997 - val_loss: 0.7886 - val_acc: 0.752
I am using Keras with TensorFlow backend to train an LSTM network for some time-sequential data sets. The performance seems pretty good when I represent my training data (as well as the validation data) in the Numpy array format:
train_x.shape: (128346, 10, 34)
val_x.shape: (7941, 10, 34)
test_x.shape: (24181, 10, 34)
train_y.shape: (128346, 2)
val_y.shape: (7941, 2)
test_y.shape: (24181, 2)
P.S.: 10 is the number of time steps and 34 is the number of features; the labels are one-hot encoded.
model = tf.keras.Sequential()
model.add(layers.LSTM(_HIDDEN_SIZE, return_sequences=True,
                      input_shape=(_TIME_STEPS, _FEATURE_DIMENTIONS)))
model.add(layers.Dropout(0.4))
model.add(layers.LSTM(_HIDDEN_SIZE, return_sequences=True))
model.add(layers.Dropout(0.3))
model.add(layers.TimeDistributed(layers.Dense(_NUM_CLASSES)))
model.add(layers.Flatten())
model.add(layers.Dense(_NUM_CLASSES, activation='softmax'))

opt = tf.keras.optimizers.Adam(lr=_LR)
model.compile(optimizer=opt, loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_x,
          train_y,
          epochs=_EPOCH,
          batch_size=_BATCH_SIZE,
          verbose=1,
          validation_data=(val_x, val_y))
And the training results are:
Train on 128346 samples, validate on 7941 samples
Epoch 1/10
128346/128346 [==============================] - 50s 390us/step - loss: 0.5883 - acc: 0.6975 - val_loss: 0.5242 - val_acc: 0.7416
Epoch 2/10
128346/128346 [==============================] - 49s 383us/step - loss: 0.4804 - acc: 0.7687 - val_loss: 0.4265 - val_acc: 0.8014
Epoch 3/10
128346/128346 [==============================] - 49s 383us/step - loss: 0.4232 - acc: 0.8076 - val_loss: 0.4095 - val_acc: 0.8096
Epoch 4/10
128346/128346 [==============================] - 49s 383us/step - loss: 0.3894 - acc: 0.8276 - val_loss: 0.3529 - val_acc: 0.8469
Epoch 5/10
128346/128346 [==============================] - 49s 382us/step - loss: 0.3610 - acc: 0.8430 - val_loss: 0.3283 - val_acc: 0.8593
Epoch 6/10
128346/128346 [==============================] - 49s 382us/step - loss: 0.3402 - acc: 0.8525 - val_loss: 0.3334 - val_acc: 0.8558
Epoch 7/10
128346/128346 [==============================] - 49s 383us/step - loss: 0.3233 - acc: 0.8604 - val_loss: 0.2944 - val_acc: 0.8741
Epoch 8/10
128346/128346 [==============================] - 49s 383us/step - loss: 0.3087 - acc: 0.8663 - val_loss: 0.2786 - val_acc: 0.8805
Epoch 9/10
128346/128346 [==============================] - 49s 383us/step - loss: 0.2969 - acc: 0.8709 - val_loss: 0.2785 - val_acc: 0.8777
Epoch 10/10
128346/128346 [==============================] - 49s 383us/step - loss: 0.2867 - acc: 0.8757 - val_loss: 0.2590 - val_acc: 0.8877
This log seems pretty normal, but when I tried to use TensorFlow Dataset API to represent my data sets, the training process performed very strange (it seems that the model turns to overfit/underfit?):
def tfdata_generator(features, labels, is_training=False, batch_size=_BATCH_SIZE, epoch=_EPOCH):
    dataset = tf.data.Dataset.from_tensor_slices((features, tf.cast(labels, dtype=tf.uint8)))
    if is_training:
        dataset = dataset.shuffle(10000)  # depends on sample size
    dataset = dataset.batch(batch_size, drop_remainder=True).repeat(epoch).prefetch(batch_size)
    return dataset


training_set = tfdata_generator(train_x, train_y, is_training=True)
validation_set = tfdata_generator(val_x, val_y, is_training=False)
testing_set = tfdata_generator(test_x, test_y, is_training=False)
Training on the same model and hyperparameters:
model.fit(
    training_set.make_one_shot_iterator(),
    epochs=_EPOCH,
    steps_per_epoch=len(train_x) // _BATCH_SIZE,
    verbose=1,
    validation_data=validation_set.make_one_shot_iterator(),
    validation_steps=len(val_x) // _BATCH_SIZE
)
And the log seems much different from the previous one:
Epoch 1/10
2005/2005 [==============================] - 54s 27ms/step - loss: 0.1451 - acc: 0.9419 - val_loss: 3.2980 - val_acc: 0.4975
Epoch 2/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1675 - acc: 0.9371 - val_loss: 3.0838 - val_acc: 0.4975
Epoch 3/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1821 - acc: 0.9316 - val_loss: 3.1212 - val_acc: 0.4975
Epoch 4/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1902 - acc: 0.9287 - val_loss: 3.0032 - val_acc: 0.4975
Epoch 5/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1905 - acc: 0.9283 - val_loss: 2.9671 - val_acc: 0.4975
Epoch 6/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1867 - acc: 0.9299 - val_loss: 2.8734 - val_acc: 0.4975
Epoch 7/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1802 - acc: 0.9316 - val_loss: 2.8651 - val_acc: 0.4975
Epoch 8/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1740 - acc: 0.9350 - val_loss: 2.8793 - val_acc: 0.4975
Epoch 9/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1660 - acc: 0.9388 - val_loss: 2.7894 - val_acc: 0.4975
Epoch 10/10
2005/2005 [==============================] - 49s 24ms/step - loss: 0.1613 - acc: 0.9405 - val_loss: 2.7997 - val_acc: 0.4975
The validation loss cannot be reduced, and val_acc stays at exactly the same value, whenever I use the TensorFlow Dataset API to represent my data.
My questions are:
Based on the same model and hyperparameters, why does model.fit() produce such different training results when I merely adopt the tf.data.Dataset API?
What is the difference between these two mechanisms?
model.fit(train_x,
          train_y,
          epochs=_EPOCH,
          batch_size=_BATCH_SIZE,
          verbose=1,
          validation_data=(val_x, val_y))
vs
model.fit(
    training_set.make_one_shot_iterator(),
    epochs=_EPOCH,
    steps_per_epoch=len(train_x) // _BATCH_SIZE,
    verbose=1,
    validation_data=validation_set.make_one_shot_iterator(),
    validation_steps=len(val_x) // _BATCH_SIZE
)
How to solve this strange problem if I have to use tf.data.Dataset API?
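For reference, a sketch of the way a tf.data.Dataset is more commonly handed to model.fit: passing the dataset objects directly rather than one-shot iterators, and leaving the epoch count to epochs/steps_per_epoch instead of baking repeat(epoch) into the pipeline. This assumes a reasonably recent tf.keras, reuses the variable names from the code above, and is only an API sketch:

# Build the datasets without baking the epoch count into them; model.fit's
# epochs / steps_per_epoch arguments decide how many batches get drawn.
train_ds = (tf.data.Dataset.from_tensor_slices((train_x, train_y))
            .shuffle(len(train_x))
            .batch(_BATCH_SIZE, drop_remainder=True)
            .repeat()
            .prefetch(1))

val_ds = (tf.data.Dataset.from_tensor_slices((val_x, val_y))
          .batch(_BATCH_SIZE)
          .repeat()
          .prefetch(1))

model.fit(train_ds,
          epochs=_EPOCH,
          steps_per_epoch=len(train_x) // _BATCH_SIZE,
          validation_data=val_ds,
          validation_steps=len(val_x) // _BATCH_SIZE,
          verbose=1)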
When I use Keras to train a model with model.fit(), I see a progress bar that looks like this:
Epoch 1/10
8000/8000 [==========] - 55s 7ms/step - loss: 0.9318 - acc: 0.0783 - val_loss: 0.8631 - val_acc: 0.1180
Epoch 2/10
8000/8000 [==========] - 55s 7ms/step - loss: 0.6587 - acc: 0.1334 - val_loss: 0.7052 - val_acc: 0.1477
Epoch 3/10
8000/8000 [==========] - 54s 7ms/step - loss: 0.5701 - acc: 0.1526 - val_loss: 0.6445 - val_acc: 0.1632
To improve readability, I would like to have the epoch number on the same line as the progress bar, like this:
Epoch 1/10: 8000/8000 [==========] - 55s 7ms/step - loss: 0.9318 - acc: 0.0783 - val_loss: 0.8631 - val_acc: 0.1180
Epoch 2/10: 8000/8000 [==========] - 55s 7ms/step - loss: 0.6587 - acc: 0.1334 - val_loss: 0.7052 - val_acc: 0.1477
Epoch 3/10: 8000/8000 [==========] - 54s 7ms/step - loss: 0.5701 - acc: 0.1526 - val_loss: 0.6445 - val_acc: 0.1632
How can I make that change? I know that Keras has callbacks that can be invoked during training, but I am not familiar with how that works.
If you want to use an alternative, you could use tqdm (version >= 4.41.0):
from tqdm.keras import TqdmCallback
...
model.fit(..., verbose=0, callbacks=[TqdmCallback(verbose=2)])
This turns off Keras' own progress output (verbose=0) and uses tqdm instead. For the callback, verbose=2 means separate progress bars for epochs and batches, 1 means the batch bars are cleared when done, and 0 means only epoch bars are shown (batch bars never appear).
Yes, you can use callbacks (https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback). For example:
import tensorflow as tf


class PrintLogs(tf.keras.callbacks.Callback):
    def __init__(self, epochs):
        self.epochs = epochs

    def set_params(self, params):
        params['epochs'] = 0

    def on_epoch_begin(self, epoch, logs=None):
        print('Epoch %d/%d' % (epoch + 1, self.epochs), end='')


mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

epochs = 5
model.fit(x_train, y_train,
          epochs=epochs,
          validation_split=0.2,
          verbose=2,
          callbacks=[PrintLogs(epochs)])
output:
Train on 48000 samples, validate on 12000 samples
Epoch 1/5 - 10s - loss: 0.0306 - acc: 0.9901 - val_loss: 0.0837 - val_acc: 0.9786
Epoch 2/5 - 9s - loss: 0.0269 - acc: 0.9910 - val_loss: 0.0839 - val_acc: 0.9788
Epoch 3/5 - 9s - loss: 0.0253 - acc: 0.9915 - val_loss: 0.0895 - val_acc: 0.9781
Epoch 4/5 - 9s - loss: 0.0201 - acc: 0.9930 - val_loss: 0.0871 - val_acc: 0.9792
Epoch 5/5 - 9s - loss: 0.0206 - acc: 0.9931 - val_loss: 0.0917 - val_acc: 0.9793