I am using TensorFlow 2.3 and trying to save a model checkpoint after every n epochs. n can be anything, but for now I am trying with 10.
Per this thread, I tried save_freq='epoch' together with period=10, which works, but since the period parameter is deprecated, I wanted to try an alternative approach.
HEIGHT = 256
WIDTH = 256
CHANNELS = 3
EPOCHS = 100
BATCH_SIZE = 1
SAVE_PERIOD = 10
n_monet_samples = 21

checkpoint_filepath = "./model_checkpoints/cyclegan_checkpoints.{epoch:03d}"
model_checkpoint_callback = callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    save_freq=SAVE_PERIOD * (n_monet_samples // BATCH_SIZE)
)
If I use save_freq=SAVE_PERIOD * (n_monet_samples // BATCH_SIZE) in the checkpoint callback definition, I get the error:
ValueError: Unrecognized save_freq: 210
I am not sure why, since per the Keras callback code, save_freq should be accepted as long as it is 'epoch' or an integer.
Please suggest.
The same code does not show any error for me in the same TensorFlow version (2.3):
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

BATCH_SIZE = 1
SAVE_PERIOD = 10
n_monet_samples = 21

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    verbose=1,
    save_freq=SAVE_PERIOD * (n_monet_samples // BATCH_SIZE))

# Train the model with the new callback
model.fit(train_images,
          train_labels,
          epochs=20,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])
Output:
Epoch 1/20
32/32 [==============================] - 0s 14ms/step - loss: 1.1152 - sparse_categorical_accuracy: 0.6890 - val_loss: 0.6934 - val_sparse_categorical_accuracy: 0.7940
Epoch 2/20
32/32 [==============================] - 0s 9ms/step - loss: 0.4154 - sparse_categorical_accuracy: 0.8840 - val_loss: 0.5317 - val_sparse_categorical_accuracy: 0.8330
Epoch 3/20
32/32 [==============================] - 0s 8ms/step - loss: 0.2787 - sparse_categorical_accuracy: 0.9270 - val_loss: 0.4854 - val_sparse_categorical_accuracy: 0.8400
Epoch 4/20
32/32 [==============================] - 0s 8ms/step - loss: 0.2230 - sparse_categorical_accuracy: 0.9420 - val_loss: 0.4525 - val_sparse_categorical_accuracy: 0.8590
Epoch 5/20
32/32 [==============================] - 0s 10ms/step - loss: 0.1549 - sparse_categorical_accuracy: 0.9620 - val_loss: 0.4275 - val_sparse_categorical_accuracy: 0.8650
Epoch 6/20
32/32 [==============================] - 0s 10ms/step - loss: 0.1110 - sparse_categorical_accuracy: 0.9770 - val_loss: 0.4251 - val_sparse_categorical_accuracy: 0.8630
Epoch 7/20
11/32 [=========>....................] - ETA: 0s - loss: 0.0936 - sparse_categorical_accuracy: 0.9886
Epoch 00007: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 14ms/step - loss: 0.0807 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.4248 - val_sparse_categorical_accuracy: 0.8610
Epoch 8/20
32/32 [==============================] - 0s 10ms/step - loss: 0.0612 - sparse_categorical_accuracy: 0.9950 - val_loss: 0.4058 - val_sparse_categorical_accuracy: 0.8650
Epoch 9/20
32/32 [==============================] - 0s 8ms/step - loss: 0.0489 - sparse_categorical_accuracy: 0.9950 - val_loss: 0.4393 - val_sparse_categorical_accuracy: 0.8610
Epoch 10/20
32/32 [==============================] - 0s 6ms/step - loss: 0.0361 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4150 - val_sparse_categorical_accuracy: 0.8620
Epoch 11/20
32/32 [==============================] - 0s 10ms/step - loss: 0.0294 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4090 - val_sparse_categorical_accuracy: 0.8670
Epoch 12/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0272 - sparse_categorical_accuracy: 0.9990 - val_loss: 0.4365 - val_sparse_categorical_accuracy: 0.8600
Epoch 13/20
32/32 [==============================] - 0s 8ms/step - loss: 0.0203 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4231 - val_sparse_categorical_accuracy: 0.8620
Epoch 14/20
1/32 [..............................] - ETA: 0s - loss: 0.0115 - sparse_categorical_accuracy: 1.0000
Epoch 00014: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 9ms/step - loss: 0.0164 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4263 - val_sparse_categorical_accuracy: 0.8650
Epoch 15/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0128 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4260 - val_sparse_categorical_accuracy: 0.8690
Epoch 16/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0120 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4194 - val_sparse_categorical_accuracy: 0.8740
Epoch 17/20
32/32 [==============================] - 0s 9ms/step - loss: 0.0110 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4302 - val_sparse_categorical_accuracy: 0.8710
Epoch 18/20
32/32 [==============================] - 0s 6ms/step - loss: 0.0090 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4331 - val_sparse_categorical_accuracy: 0.8660
Epoch 19/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0084 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4320 - val_sparse_categorical_accuracy: 0.8760
Epoch 20/20
16/32 [==============>...............] - ETA: 0s - loss: 0.0074 - sparse_categorical_accuracy: 1.0000
Epoch 00020: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 13ms/step - loss: 0.0072 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4280 - val_sparse_categorical_accuracy: 0.8750
<tensorflow.python.keras.callbacks.History at 0x7f90f0082cd0>
As you already know, save_freq is either 'epoch' or an integer. With 'epoch', the callback saves the model after each epoch. With an integer, the callback saves the model at the end of that many batches (i.e. after that many training steps).
With the definition of save_freq above, a checkpoint is saved every 210 steps.
Please check this for more details on the ModelCheckpoint arguments.
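To make the epoch-based intent explicit, the step count passed to save_freq can be computed from the dataset size. A minimal sketch, reusing the constants from the question above (math.ceil is used here so a trailing partial batch is still counted as a step):

```python
import math

# Constants from the question above
SAVE_PERIOD = 10       # save every 10 epochs
n_monet_samples = 21   # number of training samples
BATCH_SIZE = 1

# Steps (batches) per epoch; ceil covers a final partial batch
steps_per_epoch = math.ceil(n_monet_samples / BATCH_SIZE)

# save_freq in batches: one checkpoint every SAVE_PERIOD epochs
save_freq = SAVE_PERIOD * steps_per_epoch
print(save_freq)  # 210
```

With BATCH_SIZE = 1 this matches the 210 in the error message, and the "Epoch 00007 / 00014 / 00020: saving model" lines in the output above (210 steps = 6.5625 epochs at 32 steps per epoch) show the checkpoint firing on a batch boundary, not an epoch boundary.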
My code, and the output when I execute it once:
model.fit(X,y,validation_split=0.2, epochs=10, batch_size= 100)
Epoch 1/10
8/8 [==============================] - 1s 31ms/step - loss: 0.6233 - accuracy: 0.6259 - val_loss: 0.6333 - val_accuracy: 0.6461
Epoch 2/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5443 - accuracy: 0.7722 - val_loss: 0.4803 - val_accuracy: 0.7978
Epoch 3/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5385 - accuracy: 0.7904 - val_loss: 0.4465 - val_accuracy: 0.8202
Epoch 4/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5014 - accuracy: 0.7932 - val_loss: 0.5228 - val_accuracy: 0.7753
Epoch 5/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5283 - accuracy: 0.7736 - val_loss: 0.4284 - val_accuracy: 0.8315
Epoch 6/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4936 - accuracy: 0.7989 - val_loss: 0.4309 - val_accuracy: 0.8539
Epoch 7/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4700 - accuracy: 0.8045 - val_loss: 0.4622 - val_accuracy: 0.8146
Epoch 8/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4732 - accuracy: 0.8087 - val_loss: 0.4159 - val_accuracy: 0.8202
Epoch 9/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5623 - accuracy: 0.7764 - val_loss: 0.7438 - val_accuracy: 0.8090
Epoch 10/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5886 - accuracy: 0.7806 - val_loss: 0.5889 - val_accuracy: 0.6798
The output when I execute the same line of code again in JupyterLab:
Epoch 1/10
8/8 [==============================] - 0s 9ms/step - loss: 0.5269 - accuracy: 0.7496 - val_loss: 0.4568 - val_accuracy: 0.8371
Epoch 2/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4688 - accuracy: 0.8087 - val_loss: 0.4885 - val_accuracy: 0.7753
Epoch 3/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4597 - accuracy: 0.8017 - val_loss: 0.4638 - val_accuracy: 0.7865
Epoch 4/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4741 - accuracy: 0.7890 - val_loss: 0.4277 - val_accuracy: 0.8258
Epoch 5/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4840 - accuracy: 0.8003 - val_loss: 0.4712 - val_accuracy: 0.7978
Epoch 6/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4488 - accuracy: 0.8087 - val_loss: 0.4825 - val_accuracy: 0.7809
Epoch 7/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4432 - accuracy: 0.8087 - val_loss: 0.4865 - val_accuracy: 0.8090
Epoch 8/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4299 - accuracy: 0.8059 - val_loss: 0.4458 - val_accuracy: 0.8371
Epoch 9/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4358 - accuracy: 0.8172 - val_loss: 0.5232 - val_accuracy: 0.8034
Epoch 10/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4697 - accuracy: 0.8059 - val_loss: 0.4421 - val_accuracy: 0.8202
It continues the previous fit. My question is: how can I make it start from the beginning again, without having to create a new model, so that the second execution of the line is independent of the first?
This is a little bit tricky without being able to see the code that initialises the model, and I'm not sure why you'd need to reset the weights without re-initialising the model.
If you save the weights of your model before training, you can then reset to those initial weights before you train again.
# Right after building the model, snapshot its freshly initialised weights
modelWeights = model.get_weights()
# Before calling fit again, restore the snapshot
model.set_weights(modelWeights)
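A minimal end-to-end sketch of that idea (the toy data and two-layer model here are invented purely for illustration): capture the weights right after the model is built, then restore them before each fresh call to fit so every run starts from the same initial state.

```python
import numpy as np
import tensorflow as tf

# Toy data, purely for illustration
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")

# Snapshot the freshly initialised weights
initial_weights = model.get_weights()

model.fit(X, y, epochs=2, verbose=0)   # first run
model.set_weights(initial_weights)     # rewind to the initial state
model.fit(X, y, epochs=2, verbose=0)   # second run starts from scratch
```

One caveat: set_weights restores only the layer weights; optimizer state (e.g. accumulated momentum) is not reset by it, so for a fully independent run you may also want to re-compile the model.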
The CNN is:
input: 2 images of 128x128, input1 and input2
output: the CNN returns a transformation matrix:
outLayer = Layers.Dense(3 * 2, activation=activations.linear, kernel_initializer="zeros", bias_initializer=output_bias)(d2)
After that I:
append an STN (https://github.com/kevinzakka/spatial-transformer-network) to translate and rotate input1.
The complete network is the CNN with the STN. It returns an image (128x128) with a batch size of 32.
My loss is the MSE between input2 and the output of the network: the target registration error, i.e. the MSE between the two images (input2 and the output).
The network must find the transformation matrix, because input2 should equal the output.
Can I use the STN only to translate and rotate input1? Because I have a problem with val_loss (using TensorFlow):
Epoch 1/90
30/30 [==============================] - 81s 3s/step - loss: 16.0190 - val_loss: 24.3248
Epoch 2/90
30/30 [==============================] - 79s 3s/step - loss: 13.9868 - val_loss: 21.4465
Epoch 3/90
30/30 [==============================] - 73s 2s/step - loss: 13.2970 - val_loss: 21.3151
Epoch 4/90
30/30 [==============================] - 69s 2s/step - loss: 12.9244 - val_loss: 21.6154
Epoch 5/90
30/30 [==============================] - 67s 2s/step - loss: 12.6868 - val_loss: 20.0113
Epoch 6/90
30/30 [==============================] - 66s 2s/step - loss: 12.4998 - val_loss: 20.8911
Epoch 7/90
30/30 [==============================] - 69s 2s/step - loss: 12.3066 - val_loss: 21.4276
Epoch 8/90
30/30 [==============================] - 67s 2s/step - loss: 12.1034 - val_loss: 21.3593
Epoch 9/90
30/30 [==============================] - 69s 2s/step - loss: 11.7645 - val_loss: 20.8941
Epoch 10/90
30/30 [==============================] - 67s 2s/step - loss: 11.6544 - val_loss: 20.4768
Epoch 11/90
30/30 [==============================] - 70s 2s/step - loss: 11.5483 - val_loss: 21.7420
Epoch 12/90
30/30 [==============================] - 68s 2s/step - loss: 11.5680 - val_loss: 19.4531
I've been working on the California Housing dataset from Chapter 10 (page 300) of "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems" by Aurélien Géron.
I copy-pasted the following code into my Jupyter Notebook from his Colab notebook:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)
X_test = scaler.transform(X_test)

np.random.seed(42)
tf.random.set_seed(42)

model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
    keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
X_new = X_test[:3]
y_pred = model.predict(X_new)
My loss looks like this:
Epoch 1/20
363/363 [==============================] - 1s 2ms/step - loss: 22974151580975104.0000 - val_loss: 14617883443200.0000
Epoch 2/20
363/363 [==============================] - 0s 1ms/step - loss: 7723970199552.0000 - val_loss: 3417093963776.0000
Epoch 3/20
363/363 [==============================] - 0s 1ms/step - loss: 1805568049152.0000 - val_loss: 798785339392.0000
Epoch 4/20
363/363 [==============================] - 0s 1ms/step - loss: 422071828480.0000 - val_loss: 186725318656.0000
Epoch 5/20
363/363 [==============================] - 0s 1ms/step - loss: 98664218624.0000 - val_loss: 43649114112.0000
Epoch 6/20
363/363 [==============================] - 1s 2ms/step - loss: 23063846912.0000 - val_loss: 10203475968.0000
Epoch 7/20
363/363 [==============================] - 1s 2ms/step - loss: 5391437312.0000 - val_loss: 2385182720.0000
Epoch 8/20
363/363 [==============================] - 1s 1ms/step - loss: 1260309632.0000 - val_loss: 557565056.0000
Epoch 9/20
363/363 [==============================] - 0s 1ms/step - loss: 294611968.0000 - val_loss: 130337808.0000
Epoch 10/20
363/363 [==============================] - 1s 1ms/step - loss: 68868872.0000 - val_loss: 30468182.0000
Epoch 11/20
363/363 [==============================] - 1s 2ms/step - loss: 16098886.0000 - val_loss: 7122408.0000
Epoch 12/20
363/363 [==============================] - 1s 2ms/step - loss: 3763294.0000 - val_loss: 1665006.2500
Epoch 13/20
363/363 [==============================] - 0s 1ms/step - loss: 879712.9375 - val_loss: 389246.1250
Epoch 14/20
363/363 [==============================] - 1s 1ms/step - loss: 205644.4062 - val_loss: 91006.7344
Epoch 15/20
363/363 [==============================] - 1s 1ms/step - loss: 48073.1602 - val_loss: 21282.1250
Epoch 16/20
363/363 [==============================] - 1s 1ms/step - loss: 11238.7031 - val_loss: 4979.3115
Epoch 17/20
363/363 [==============================] - 1s 2ms/step - loss: 2628.1484 - val_loss: 1166.6659
Epoch 18/20
363/363 [==============================] - 1s 2ms/step - loss: 615.3716 - val_loss: 274.4792
Epoch 19/20
363/363 [==============================] - 1s 2ms/step - loss: 144.8725 - val_loss: 65.5832
Epoch 20/20
363/363 [==============================] - 1s 2ms/step - loss: 34.9049 - val_loss: 16.5316
162/162 [==============================] - 0s 1ms/step - loss: 16.3210
But his loss looks like this:
Epoch 1/20
363/363 [==============================] - 1s 2ms/step - loss: 1.6419 - val_loss: 0.8560
Epoch 2/20
363/363 [==============================] - 1s 2ms/step - loss: 0.7047 - val_loss: 0.6531
Epoch 3/20
363/363 [==============================] - 1s 2ms/step - loss: 0.6345 - val_loss: 0.6099
Epoch 4/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5977 - val_loss: 0.5658
Epoch 5/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5706 - val_loss: 0.5355
Epoch 6/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5472 - val_loss: 0.5173
Epoch 7/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5288 - val_loss: 0.5081
Epoch 8/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5130 - val_loss: 0.4799
Epoch 9/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4992 - val_loss: 0.4690
Epoch 10/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4875 - val_loss: 0.4656
Epoch 11/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4777 - val_loss: 0.4482
Epoch 12/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4688 - val_loss: 0.4479
Epoch 13/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4615 - val_loss: 0.4296
Epoch 14/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4547 - val_loss: 0.4233
Epoch 15/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4488 - val_loss: 0.4176
Epoch 16/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4435 - val_loss: 0.4123
Epoch 17/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4389 - val_loss: 0.4071
Epoch 18/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4347 - val_loss: 0.4037
Epoch 19/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4306 - val_loss: 0.4000
Epoch 20/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4273 - val_loss: 0.3969
162/162 [==============================] - 0s 1ms/step - loss: 0.4212
Why is my loss so much higher?
Here is my code and the result of the training.
batch_size = 100
epochs = 50

yale_history = yale_classifier.fit(x_train, y_train_oh,
                                   batch_size=batch_size,
                                   epochs=epochs,
                                   validation_data=(x_train, y_train_oh))
Epoch 1/50
20/20 [==============================] - 32s 2s/step - loss: 3.9801 - accuracy: 0.2071 - val_loss: 3.6919 - val_accuracy: 0.0245
Epoch 2/50
20/20 [==============================] - 30s 2s/step - loss: 1.2557 - accuracy: 0.6847 - val_loss: 4.1914 - val_accuracy: 0.0245
Epoch 3/50
20/20 [==============================] - 30s 2s/step - loss: 0.4408 - accuracy: 0.8954 - val_loss: 4.6284 - val_accuracy: 0.0245
Epoch 4/50
20/20 [==============================] - 30s 2s/step - loss: 0.1822 - accuracy: 0.9592 - val_loss: 4.9481 - val_accuracy: 0.0398
Epoch 5/50
20/20 [==============================] - 30s 2s/step - loss: 0.1252 - accuracy: 0.9760 - val_loss: 5.3728 - val_accuracy: 0.0276
Epoch 6/50
20/20 [==============================] - 30s 2s/step - loss: 0.0927 - accuracy: 0.9816 - val_loss: 5.7009 - val_accuracy: 0.0260
Epoch 7/50
20/20 [==============================] - 30s 2s/step - loss: 0.0858 - accuracy: 0.9837 - val_loss: 6.0049 - val_accuracy: 0.0260
Epoch 8/50
20/20 [==============================] - 30s 2s/step - loss: 0.0646 - accuracy: 0.9867 - val_loss: 6.3786 - val_accuracy: 0.0260
Epoch 9/50
20/20 [==============================] - 30s 2s/step - loss: 0.0489 - accuracy: 0.9898 - val_loss: 6.5156 - val_accuracy: 0.0260
You can see that I also used the training data as the validation data. It is weird that the training loss is not the same as the validation loss. Further, when I evaluated it, it seems my model was not trained at all:
yale_classifier.evaluate(x_train, y_train_oh)
62/62 [==============================] - 6s 96ms/step - loss: 7.1123 - accuracy: 0.0260
[7.112329483032227, 0.026020407676696777]
Do you have any recommendations for solving this problem?
I am doing binary classification of the IMDB movie review data into positive or negative sentiment.
I have 25K movie reviews and their corresponding labels.
Preprocessing:
I removed the stop words and split the data 70:30 into training and test sets, i.e. 17.5K training and 7.5K test. The 17.5K training set was further divided into 14K train and 3.5K validation, as used in the Keras model.fit method.
Each processed movie review was converted to a TF-IDF vector using the Keras text preprocessing module.
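That conversion step can be sketched with the Keras Tokenizer, whose texts_to_matrix method supports a 'tfidf' mode (the two example reviews and the num_words value are invented for illustration; the tf.keras import path is used here):

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# Invented example reviews, standing in for the preprocessed IMDB text
reviews = ["great movie loved the acting", "terrible plot awful acting"]

tokenizer = Tokenizer(num_words=1000)  # cap the vocabulary size
tokenizer.fit_on_texts(reviews)

# One TF-IDF row per review; shape: (num_reviews, num_words)
x = tokenizer.texts_to_matrix(reviews, mode="tfidf")
print(x.shape)  # (2, 1000)
```

The resulting matrix is what gets standardized and fed to the dense network below as x_train_std.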
Here is the fully connected architecture I used, built with the Keras Dense class:
def model_param(self):
    """Method to build the deep learning network."""
    from keras.models import Sequential
    from keras.layers import Dense, Dropout
    from keras.optimizers import SGD

    self.model = Sequential()
    # Dense(32) is a fully-connected layer with 32 hidden units.
    # In the first layer you must specify the expected input data shape,
    # here the dimensionality of the TF-IDF vectors.
    self.model.add(Dense(32, activation='relu', input_dim=self.x_train_std.shape[1]))
    self.model.add(Dropout(0.5))
    #self.model.add(Dense(60, activation='relu'))
    #self.model.add(Dropout(0.5))
    self.model.add(Dense(1, activation='sigmoid'))
    sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    self.model.compile(loss='binary_crossentropy',
                       optimizer=sgd,
                       metrics=['accuracy'])

def fit(self):
    """Train the network on the training data."""
    self.model.fit(self.x_train_std, self.y_train,
                   validation_split=0.20,
                   epochs=50,
                   batch_size=128)
As you can see, I first tried without Dropout and, as usual, got a training accuracy of 1.0 while the validation accuracy was poor because of overfitting. So I added Dropout to prevent it.
However, despite trying multiple dropout ratios, adding another layer with a different number of units, and changing the learning rate, I am still overfitting on the validation dataset: validation accuracy gets stuck around 85% while training accuracy keeps climbing to 99% and beyond. I even changed the number of epochs from 10 to 50.
What could be going wrong here?
Train on 14000 samples, validate on 3500 samples
Epoch 1/50
14000/14000 [==============================] - 0s - loss: 0.5684 - acc: 0.7034 - val_loss: 0.3794 - val_acc: 0.8431
Epoch 2/50
14000/14000 [==============================] - 0s - loss: 0.3630 - acc: 0.8388 - val_loss: 0.3304 - val_acc: 0.8549
Epoch 3/50
14000/14000 [==============================] - 0s - loss: 0.2977 - acc: 0.8749 - val_loss: 0.3271 - val_acc: 0.8591
Epoch 4/50
14000/14000 [==============================] - 0s - loss: 0.2490 - acc: 0.8991 - val_loss: 0.3302 - val_acc: 0.8580
Epoch 5/50
14000/14000 [==============================] - 0s - loss: 0.2251 - acc: 0.9086 - val_loss: 0.3388 - val_acc: 0.8546
Epoch 6/50
14000/14000 [==============================] - 0s - loss: 0.2021 - acc: 0.9189 - val_loss: 0.3532 - val_acc: 0.8523
Epoch 7/50
14000/14000 [==============================] - 0s - loss: 0.1797 - acc: 0.9286 - val_loss: 0.3670 - val_acc: 0.8529
Epoch 8/50
14000/14000 [==============================] - 0s - loss: 0.1611 - acc: 0.9350 - val_loss: 0.3860 - val_acc: 0.8543
Epoch 9/50
14000/14000 [==============================] - 0s - loss: 0.1427 - acc: 0.9437 - val_loss: 0.4077 - val_acc: 0.8529
Epoch 10/50
14000/14000 [==============================] - 0s - loss: 0.1344 - acc: 0.9476 - val_loss: 0.4234 - val_acc: 0.8526
Epoch 11/50
14000/14000 [==============================] - 0s - loss: 0.1222 - acc: 0.9534 - val_loss: 0.4473 - val_acc: 0.8506
Epoch 12/50
14000/14000 [==============================] - 0s - loss: 0.1131 - acc: 0.9546 - val_loss: 0.4718 - val_acc: 0.8497
Epoch 13/50
14000/14000 [==============================] - 0s - loss: 0.1079 - acc: 0.9559 - val_loss: 0.4818 - val_acc: 0.8526
Epoch 14/50
14000/14000 [==============================] - 0s - loss: 0.0954 - acc: 0.9630 - val_loss: 0.5057 - val_acc: 0.8494
Epoch 15/50
14000/14000 [==============================] - 0s - loss: 0.0906 - acc: 0.9636 - val_loss: 0.5229 - val_acc: 0.8557
Epoch 16/50
14000/14000 [==============================] - 0s - loss: 0.0896 - acc: 0.9657 - val_loss: 0.5387 - val_acc: 0.8497
Epoch 17/50
14000/14000 [==============================] - 0s - loss: 0.0816 - acc: 0.9666 - val_loss: 0.5579 - val_acc: 0.8463
Epoch 18/50
14000/14000 [==============================] - 0s - loss: 0.0762 - acc: 0.9709 - val_loss: 0.5704 - val_acc: 0.8491
Epoch 19/50
14000/14000 [==============================] - 0s - loss: 0.0718 - acc: 0.9723 - val_loss: 0.5834 - val_acc: 0.8454
Epoch 20/50
14000/14000 [==============================] - 0s - loss: 0.0633 - acc: 0.9752 - val_loss: 0.6032 - val_acc: 0.8494
Epoch 21/50
14000/14000 [==============================] - 0s - loss: 0.0687 - acc: 0.9724 - val_loss: 0.6181 - val_acc: 0.8480
Epoch 22/50
14000/14000 [==============================] - 0s - loss: 0.0614 - acc: 0.9762 - val_loss: 0.6280 - val_acc: 0.8503
Epoch 23/50
14000/14000 [==============================] - 0s - loss: 0.0620 - acc: 0.9756 - val_loss: 0.6407 - val_acc: 0.8500
Epoch 24/50
14000/14000 [==============================] - 0s - loss: 0.0536 - acc: 0.9794 - val_loss: 0.6563 - val_acc: 0.8511
Epoch 25/50
14000/14000 [==============================] - 0s - loss: 0.0538 - acc: 0.9791 - val_loss: 0.6709 - val_acc: 0.8500
Epoch 26/50
14000/14000 [==============================] - 0s - loss: 0.0507 - acc: 0.9807 - val_loss: 0.6869 - val_acc: 0.8491
Epoch 27/50
14000/14000 [==============================] - 0s - loss: 0.0528 - acc: 0.9794 - val_loss: 0.7002 - val_acc: 0.8483
Epoch 28/50
14000/14000 [==============================] - 0s - loss: 0.0465 - acc: 0.9810 - val_loss: 0.7083 - val_acc: 0.8469
Epoch 29/50
14000/14000 [==============================] - 0s - loss: 0.0504 - acc: 0.9796 - val_loss: 0.7153 - val_acc: 0.8497
Epoch 30/50
14000/14000 [==============================] - 0s - loss: 0.0477 - acc: 0.9819 - val_loss: 0.7232 - val_acc: 0.8480
Epoch 31/50
14000/14000 [==============================] - 0s - loss: 0.0475 - acc: 0.9819 - val_loss: 0.7343 - val_acc: 0.8469
Epoch 32/50
14000/14000 [==============================] - 0s - loss: 0.0459 - acc: 0.9819 - val_loss: 0.7352 - val_acc: 0.8500
Epoch 33/50
14000/14000 [==============================] - 0s - loss: 0.0426 - acc: 0.9807 - val_loss: 0.7429 - val_acc: 0.8511
Epoch 34/50
14000/14000 [==============================] - 0s - loss: 0.0396 - acc: 0.9846 - val_loss: 0.7576 - val_acc: 0.8477
Epoch 35/50
14000/14000 [==============================] - 0s - loss: 0.0420 - acc: 0.9836 - val_loss: 0.7603 - val_acc: 0.8506
Epoch 36/50
14000/14000 [==============================] - 0s - loss: 0.0359 - acc: 0.9856 - val_loss: 0.7683 - val_acc: 0.8497
Epoch 37/50
14000/14000 [==============================] - 0s - loss: 0.0377 - acc: 0.9849 - val_loss: 0.7823 - val_acc: 0.8520
Epoch 38/50
14000/14000 [==============================] - 0s - loss: 0.0352 - acc: 0.9861 - val_loss: 0.7912 - val_acc: 0.8500
Epoch 39/50
14000/14000 [==============================] - 0s - loss: 0.0390 - acc: 0.9845 - val_loss: 0.8025 - val_acc: 0.8489
Epoch 40/50
14000/14000 [==============================] - 0s - loss: 0.0371 - acc: 0.9853 - val_loss: 0.8128 - val_acc: 0.8494
Epoch 41/50
14000/14000 [==============================] - 0s - loss: 0.0367 - acc: 0.9848 - val_loss: 0.8184 - val_acc: 0.8503
Epoch 42/50
14000/14000 [==============================] - 0s - loss: 0.0331 - acc: 0.9871 - val_loss: 0.8264 - val_acc: 0.8500
Epoch 43/50
14000/14000 [==============================] - 0s - loss: 0.0338 - acc: 0.9871 - val_loss: 0.8332 - val_acc: 0.8483
Epoch 44/50