Understanding the Reason for Overfitting in a Keras Binary Classification Task - python

I am doing binary classification of IMDB movie review data into positive or negative sentiment.
I have 25K movie reviews and their corresponding labels.
Preprocessing:
I removed the stop words and split the data 70:30 into training and test sets, i.e. 17.5K training and 7.5K test reviews. The 17.5K training reviews were further divided into 14K training and 3.5K validation samples via the validation_split argument of keras.model.fit.
Each processed movie review was converted to a TF-IDF vector using the Keras text preprocessing module.
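For reference, the TF-IDF step looks roughly like this (illustrative sketch; the num_words value is an assumption, not from my actual code):
from keras.preprocessing.text import Tokenizer

# train_reviews / test_reviews are lists of preprocessed review strings.
tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(train_reviews)
x_train = tokenizer.texts_to_matrix(train_reviews, mode='tfidf')
x_test = tokenizer.texts_to_matrix(test_reviews, mode='tfidf')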
Here is the fully connected architecture I built with the Keras Dense layer:
def model_param(self):
    """Method to build the deep learning network."""
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Activation
    from keras.optimizers import SGD
    from keras import regularizers
    self.model = Sequential()
    # Dense(32) is a fully-connected layer with 32 hidden units.
    # In the first layer, you must specify the expected input data shape:
    # here, one dimension per TF-IDF feature.
    self.model.add(Dense(32, activation='relu', input_dim=self.x_train_std.shape[1]))
    self.model.add(Dropout(0.5))
    #self.model.add(Dense(60, activation='relu'))
    #self.model.add(Dropout(0.5))
    self.model.add(Dense(1, activation='sigmoid'))
    sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    self.model.compile(loss='binary_crossentropy',
                       optimizer=sgd,
                       metrics=['accuracy'])

def fit(self):
    """Train the deep learning network on the training data."""
    self.model.fit(self.x_train_std, self.y_train,
                   validation_split=0.20,
                   epochs=50,
                   batch_size=128)
As you can see, I first tried without Dropout and, as expected, got a training accuracy of 1.0 while validation accuracy was poor, i.e. the model was overfitting. So I added Dropout to counter it.
However, despite trying multiple dropout ratios, adding another layer with different numbers of units, and changing the learning rate, I still get overfitting on the validation dataset: validation accuracy gets stuck around 85% while training accuracy keeps climbing to 99% and beyond. I even changed the number of epochs from 10 to 50.
What could be going wrong here? The training log:
Train on 14000 samples, validate on 3500 samples
Epoch 1/50
14000/14000 [==============================] - 0s - loss: 0.5684 - acc: 0.7034 - val_loss: 0.3794 - val_acc: 0.8431
Epoch 2/50
14000/14000 [==============================] - 0s - loss: 0.3630 - acc: 0.8388 - val_loss: 0.3304 - val_acc: 0.8549
Epoch 3/50
14000/14000 [==============================] - 0s - loss: 0.2977 - acc: 0.8749 - val_loss: 0.3271 - val_acc: 0.8591
Epoch 4/50
14000/14000 [==============================] - 0s - loss: 0.2490 - acc: 0.8991 - val_loss: 0.3302 - val_acc: 0.8580
Epoch 5/50
14000/14000 [==============================] - 0s - loss: 0.2251 - acc: 0.9086 - val_loss: 0.3388 - val_acc: 0.8546
Epoch 6/50
14000/14000 [==============================] - 0s - loss: 0.2021 - acc: 0.9189 - val_loss: 0.3532 - val_acc: 0.8523
Epoch 7/50
14000/14000 [==============================] - 0s - loss: 0.1797 - acc: 0.9286 - val_loss: 0.3670 - val_acc: 0.8529
Epoch 8/50
14000/14000 [==============================] - 0s - loss: 0.1611 - acc: 0.9350 - val_loss: 0.3860 - val_acc: 0.8543
Epoch 9/50
14000/14000 [==============================] - 0s - loss: 0.1427 - acc: 0.9437 - val_loss: 0.4077 - val_acc: 0.8529
Epoch 10/50
14000/14000 [==============================] - 0s - loss: 0.1344 - acc: 0.9476 - val_loss: 0.4234 - val_acc: 0.8526
Epoch 11/50
14000/14000 [==============================] - 0s - loss: 0.1222 - acc: 0.9534 - val_loss: 0.4473 - val_acc: 0.8506
Epoch 12/50
14000/14000 [==============================] - 0s - loss: 0.1131 - acc: 0.9546 - val_loss: 0.4718 - val_acc: 0.8497
Epoch 13/50
14000/14000 [==============================] - 0s - loss: 0.1079 - acc: 0.9559 - val_loss: 0.4818 - val_acc: 0.8526
Epoch 14/50
14000/14000 [==============================] - 0s - loss: 0.0954 - acc: 0.9630 - val_loss: 0.5057 - val_acc: 0.8494
Epoch 15/50
14000/14000 [==============================] - 0s - loss: 0.0906 - acc: 0.9636 - val_loss: 0.5229 - val_acc: 0.8557
Epoch 16/50
14000/14000 [==============================] - 0s - loss: 0.0896 - acc: 0.9657 - val_loss: 0.5387 - val_acc: 0.8497
Epoch 17/50
14000/14000 [==============================] - 0s - loss: 0.0816 - acc: 0.9666 - val_loss: 0.5579 - val_acc: 0.8463
Epoch 18/50
14000/14000 [==============================] - 0s - loss: 0.0762 - acc: 0.9709 - val_loss: 0.5704 - val_acc: 0.8491
Epoch 19/50
14000/14000 [==============================] - 0s - loss: 0.0718 - acc: 0.9723 - val_loss: 0.5834 - val_acc: 0.8454
Epoch 20/50
14000/14000 [==============================] - 0s - loss: 0.0633 - acc: 0.9752 - val_loss: 0.6032 - val_acc: 0.8494
Epoch 21/50
14000/14000 [==============================] - 0s - loss: 0.0687 - acc: 0.9724 - val_loss: 0.6181 - val_acc: 0.8480
Epoch 22/50
14000/14000 [==============================] - 0s - loss: 0.0614 - acc: 0.9762 - val_loss: 0.6280 - val_acc: 0.8503
Epoch 23/50
14000/14000 [==============================] - 0s - loss: 0.0620 - acc: 0.9756 - val_loss: 0.6407 - val_acc: 0.8500
Epoch 24/50
14000/14000 [==============================] - 0s - loss: 0.0536 - acc: 0.9794 - val_loss: 0.6563 - val_acc: 0.8511
Epoch 25/50
14000/14000 [==============================] - 0s - loss: 0.0538 - acc: 0.9791 - val_loss: 0.6709 - val_acc: 0.8500
Epoch 26/50
14000/14000 [==============================] - 0s - loss: 0.0507 - acc: 0.9807 - val_loss: 0.6869 - val_acc: 0.8491
Epoch 27/50
14000/14000 [==============================] - 0s - loss: 0.0528 - acc: 0.9794 - val_loss: 0.7002 - val_acc: 0.8483
Epoch 28/50
14000/14000 [==============================] - 0s - loss: 0.0465 - acc: 0.9810 - val_loss: 0.7083 - val_acc: 0.8469
Epoch 29/50
14000/14000 [==============================] - 0s - loss: 0.0504 - acc: 0.9796 - val_loss: 0.7153 - val_acc: 0.8497
Epoch 30/50
14000/14000 [==============================] - 0s - loss: 0.0477 - acc: 0.9819 - val_loss: 0.7232 - val_acc: 0.8480
Epoch 31/50
14000/14000 [==============================] - 0s - loss: 0.0475 - acc: 0.9819 - val_loss: 0.7343 - val_acc: 0.8469
Epoch 32/50
14000/14000 [==============================] - 0s - loss: 0.0459 - acc: 0.9819 - val_loss: 0.7352 - val_acc: 0.8500
Epoch 33/50
14000/14000 [==============================] - 0s - loss: 0.0426 - acc: 0.9807 - val_loss: 0.7429 - val_acc: 0.8511
Epoch 34/50
14000/14000 [==============================] - 0s - loss: 0.0396 - acc: 0.9846 - val_loss: 0.7576 - val_acc: 0.8477
Epoch 35/50
14000/14000 [==============================] - 0s - loss: 0.0420 - acc: 0.9836 - val_loss: 0.7603 - val_acc: 0.8506
Epoch 36/50
14000/14000 [==============================] - 0s - loss: 0.0359 - acc: 0.9856 - val_loss: 0.7683 - val_acc: 0.8497
Epoch 37/50
14000/14000 [==============================] - 0s - loss: 0.0377 - acc: 0.9849 - val_loss: 0.7823 - val_acc: 0.8520
Epoch 38/50
14000/14000 [==============================] - 0s - loss: 0.0352 - acc: 0.9861 - val_loss: 0.7912 - val_acc: 0.8500
Epoch 39/50
14000/14000 [==============================] - 0s - loss: 0.0390 - acc: 0.9845 - val_loss: 0.8025 - val_acc: 0.8489
Epoch 40/50
14000/14000 [==============================] - 0s - loss: 0.0371 - acc: 0.9853 - val_loss: 0.8128 - val_acc: 0.8494
Epoch 41/50
14000/14000 [==============================] - 0s - loss: 0.0367 - acc: 0.9848 - val_loss: 0.8184 - val_acc: 0.8503
Epoch 42/50
14000/14000 [==============================] - 0s - loss: 0.0331 - acc: 0.9871 - val_loss: 0.8264 - val_acc: 0.8500
Epoch 43/50
14000/14000 [==============================] - 0s - loss: 0.0338 - acc: 0.9871 - val_loss: 0.8332 - val_acc: 0.8483
Epoch 44/50
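For what it's worth, two things not tried above are early stopping and actually applying the regularizers import, which the code above never uses. A rough sketch of both (assuming x_train/y_train are the TF-IDF matrix and labels; the l2 factor and patience are illustrative values, not tested):
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import EarlyStopping
from keras import regularizers

# Same architecture as above, but with L2 weight decay on the hidden layer.
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=x_train.shape[1],
                kernel_regularizer=regularizers.l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])

# Stop once val_loss has not improved for 3 epochs, keeping the best weights.
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(x_train, y_train, validation_split=0.20,
          epochs=50, batch_size=128, callbacks=[early_stop])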

Related

Executing model.fit multiple times

The code and output when I execute it once:
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=100)
Epoch 1/10
8/8 [==============================] - 1s 31ms/step - loss: 0.6233 - accuracy: 0.6259 - val_loss: 0.6333 - val_accuracy: 0.6461
Epoch 2/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5443 - accuracy: 0.7722 - val_loss: 0.4803 - val_accuracy: 0.7978
Epoch 3/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5385 - accuracy: 0.7904 - val_loss: 0.4465 - val_accuracy: 0.8202
Epoch 4/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5014 - accuracy: 0.7932 - val_loss: 0.5228 - val_accuracy: 0.7753
Epoch 5/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5283 - accuracy: 0.7736 - val_loss: 0.4284 - val_accuracy: 0.8315
Epoch 6/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4936 - accuracy: 0.7989 - val_loss: 0.4309 - val_accuracy: 0.8539
Epoch 7/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4700 - accuracy: 0.8045 - val_loss: 0.4622 - val_accuracy: 0.8146
Epoch 8/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4732 - accuracy: 0.8087 - val_loss: 0.4159 - val_accuracy: 0.8202
Epoch 9/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5623 - accuracy: 0.7764 - val_loss: 0.7438 - val_accuracy: 0.8090
Epoch 10/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5886 - accuracy: 0.7806 - val_loss: 0.5889 - val_accuracy: 0.6798
Output when I execute the same line of code again in JupyterLab:
Epoch 1/10
8/8 [==============================] - 0s 9ms/step - loss: 0.5269 - accuracy: 0.7496 - val_loss: 0.4568 - val_accuracy: 0.8371
Epoch 2/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4688 - accuracy: 0.8087 - val_loss: 0.4885 - val_accuracy: 0.7753
Epoch 3/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4597 - accuracy: 0.8017 - val_loss: 0.4638 - val_accuracy: 0.7865
Epoch 4/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4741 - accuracy: 0.7890 - val_loss: 0.4277 - val_accuracy: 0.8258
Epoch 5/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4840 - accuracy: 0.8003 - val_loss: 0.4712 - val_accuracy: 0.7978
Epoch 6/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4488 - accuracy: 0.8087 - val_loss: 0.4825 - val_accuracy: 0.7809
Epoch 7/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4432 - accuracy: 0.8087 - val_loss: 0.4865 - val_accuracy: 0.8090
Epoch 8/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4299 - accuracy: 0.8059 - val_loss: 0.4458 - val_accuracy: 0.8371
Epoch 9/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4358 - accuracy: 0.8172 - val_loss: 0.5232 - val_accuracy: 0.8034
Epoch 10/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4697 - accuracy: 0.8059 - val_loss: 0.4421 - val_accuracy: 0.8202
It continues from the previous fit. My question is: how can I make training start from the beginning again, without having to create a new model, so that the second execution of the line is independent of the first?
This is a little tricky without seeing the code that initialises the model, and I'm not sure why you'd need to reset the weights rather than simply re-initialising the model.
If you save the weights of your model before training, you can then reset to those initial weights before you train again.
modelWeights = model.get_weights()   # snapshot before the first fit
model.set_weights(modelWeights)      # restore before fitting again
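Note that set_weights restores the layer weights but not the optimizer state (e.g. Adam's moment estimates). For a fully independent second run you can also recompile; a sketch (the optimizer and loss below are assumptions, since the model definition isn't shown):
# Snapshot the freshly initialised weights once, right after building the model.
initial_weights = model.get_weights()

model.fit(X, y, validation_split=0.2, epochs=10, batch_size=100)

# Restore the snapshot and recompile so the optimizer state also starts fresh.
model.set_weights(initial_weights)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=100)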

Saving Model Checkpoints in TensorFlow

I am using TensorFlow 2.3 and trying to save a model checkpoint after every n epochs. n could be anything, but for now I am trying with 10.
Per this thread, I tried save_freq='epoch' with period=10, which works, but since the period parameter is deprecated I wanted to try an alternative approach.
from tensorflow.keras import callbacks

HEIGHT = 256
WIDTH = 256
CHANNELS = 3
EPOCHS = 100
BATCH_SIZE = 1
SAVE_PERIOD = 10
n_monet_samples = 21

checkpoint_filepath = "./model_checkpoints/cyclegan_checkpoints.{epoch:03d}"
model_checkpoint_callback = callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    save_freq=SAVE_PERIOD * (n_monet_samples // BATCH_SIZE)
)
If I use save_freq=SAVE_PERIOD * (n_monet_samples//BATCH_SIZE) in the checkpoint callback definition, I get the error
ValueError: Unrecognized save_freq: 210
I am not sure why, since per the Keras callback code, save_freq should be accepted as long as it is either 'epoch' or an integer.
Please suggest.
It does not show any error for me when I try the same code on the same TensorFlow version (2.3):
import os
import tensorflow as tf

checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
BATCH_SIZE = 1
SAVE_PERIOD = 10
n_monet_samples = 21

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1,
                                                 save_freq=SAVE_PERIOD * (n_monet_samples // BATCH_SIZE))

# Train the model with the new callback
model.fit(train_images,
          train_labels,
          epochs=20,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])
Output:
Epoch 1/20
32/32 [==============================] - 0s 14ms/step - loss: 1.1152 - sparse_categorical_accuracy: 0.6890 - val_loss: 0.6934 - val_sparse_categorical_accuracy: 0.7940
Epoch 2/20
32/32 [==============================] - 0s 9ms/step - loss: 0.4154 - sparse_categorical_accuracy: 0.8840 - val_loss: 0.5317 - val_sparse_categorical_accuracy: 0.8330
Epoch 3/20
32/32 [==============================] - 0s 8ms/step - loss: 0.2787 - sparse_categorical_accuracy: 0.9270 - val_loss: 0.4854 - val_sparse_categorical_accuracy: 0.8400
Epoch 4/20
32/32 [==============================] - 0s 8ms/step - loss: 0.2230 - sparse_categorical_accuracy: 0.9420 - val_loss: 0.4525 - val_sparse_categorical_accuracy: 0.8590
Epoch 5/20
32/32 [==============================] - 0s 10ms/step - loss: 0.1549 - sparse_categorical_accuracy: 0.9620 - val_loss: 0.4275 - val_sparse_categorical_accuracy: 0.8650
Epoch 6/20
32/32 [==============================] - 0s 10ms/step - loss: 0.1110 - sparse_categorical_accuracy: 0.9770 - val_loss: 0.4251 - val_sparse_categorical_accuracy: 0.8630
Epoch 7/20
11/32 [=========>....................] - ETA: 0s - loss: 0.0936 - sparse_categorical_accuracy: 0.9886
Epoch 00007: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 14ms/step - loss: 0.0807 - sparse_categorical_accuracy: 0.9840 - val_loss: 0.4248 - val_sparse_categorical_accuracy: 0.8610
Epoch 8/20
32/32 [==============================] - 0s 10ms/step - loss: 0.0612 - sparse_categorical_accuracy: 0.9950 - val_loss: 0.4058 - val_sparse_categorical_accuracy: 0.8650
Epoch 9/20
32/32 [==============================] - 0s 8ms/step - loss: 0.0489 - sparse_categorical_accuracy: 0.9950 - val_loss: 0.4393 - val_sparse_categorical_accuracy: 0.8610
Epoch 10/20
32/32 [==============================] - 0s 6ms/step - loss: 0.0361 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4150 - val_sparse_categorical_accuracy: 0.8620
Epoch 11/20
32/32 [==============================] - 0s 10ms/step - loss: 0.0294 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4090 - val_sparse_categorical_accuracy: 0.8670
Epoch 12/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0272 - sparse_categorical_accuracy: 0.9990 - val_loss: 0.4365 - val_sparse_categorical_accuracy: 0.8600
Epoch 13/20
32/32 [==============================] - 0s 8ms/step - loss: 0.0203 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4231 - val_sparse_categorical_accuracy: 0.8620
Epoch 14/20
1/32 [..............................] - ETA: 0s - loss: 0.0115 - sparse_categorical_accuracy: 1.0000
Epoch 00014: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 9ms/step - loss: 0.0164 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4263 - val_sparse_categorical_accuracy: 0.8650
Epoch 15/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0128 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4260 - val_sparse_categorical_accuracy: 0.8690
Epoch 16/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0120 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4194 - val_sparse_categorical_accuracy: 0.8740
Epoch 17/20
32/32 [==============================] - 0s 9ms/step - loss: 0.0110 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4302 - val_sparse_categorical_accuracy: 0.8710
Epoch 18/20
32/32 [==============================] - 0s 6ms/step - loss: 0.0090 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4331 - val_sparse_categorical_accuracy: 0.8660
Epoch 19/20
32/32 [==============================] - 0s 7ms/step - loss: 0.0084 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4320 - val_sparse_categorical_accuracy: 0.8760
Epoch 20/20
16/32 [==============>...............] - ETA: 0s - loss: 0.0074 - sparse_categorical_accuracy: 1.0000
Epoch 00020: saving model to training_1/cp.ckpt
32/32 [==============================] - 0s 13ms/step - loss: 0.0072 - sparse_categorical_accuracy: 1.0000 - val_loss: 0.4280 - val_sparse_categorical_accuracy: 0.8750
<tensorflow.python.keras.callbacks.History at 0x7f90f0082cd0>
As you already know, save_freq is either 'epoch' or an integer. With 'epoch', the callback saves the model after each epoch. With an integer, the callback saves the model at the end of that many batches (training steps), not epochs.
With save_freq defined as above, a checkpoint is saved every 210 training steps, which is why the saves in the log above land mid-epoch (at epochs 7, 14, and 20 with 32 steps per epoch).
Please check this for more details on ModelCheckpoint Arguments.
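If you want saving tied strictly to epochs rather than batch counts, one alternative (a sketch, not from the original answer; the class name is ours) is a small custom callback:
import tensorflow as tf

class EveryNEpochsCheckpoint(tf.keras.callbacks.Callback):
    """Save the model's weights every `period` epochs."""
    def __init__(self, filepath, period=10):
        super().__init__()
        self.filepath = filepath
        self.period = period

    def on_epoch_end(self, epoch, logs=None):
        # `epoch` is 0-based, so epoch 9 is the end of the 10th epoch.
        if (epoch + 1) % self.period == 0:
            self.model.save_weights(self.filepath.format(epoch=epoch + 1))

cp_callback = EveryNEpochsCheckpoint("training_1/cp-{epoch:03d}.ckpt", period=10)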

In TensorFlow, I trained a model and the training loss decreased, but it seems like the model was not trained at all

Here is my code and the result of the training.
batch_size = 100
epochs = 50
yale_history = yale_classifier.fit(x_train, y_train_oh,
                                   batch_size=batch_size,
                                   epochs=epochs,
                                   validation_data=(x_train, y_train_oh))
Epoch 1/50
20/20 [==============================] - 32s 2s/step - loss: 3.9801 - accuracy: 0.2071 - val_loss: 3.6919 - val_accuracy: 0.0245
Epoch 2/50
20/20 [==============================] - 30s 2s/step - loss: 1.2557 - accuracy: 0.6847 - val_loss: 4.1914 - val_accuracy: 0.0245
Epoch 3/50
20/20 [==============================] - 30s 2s/step - loss: 0.4408 - accuracy: 0.8954 - val_loss: 4.6284 - val_accuracy: 0.0245
Epoch 4/50
20/20 [==============================] - 30s 2s/step - loss: 0.1822 - accuracy: 0.9592 - val_loss: 4.9481 - val_accuracy: 0.0398
Epoch 5/50
20/20 [==============================] - 30s 2s/step - loss: 0.1252 - accuracy: 0.9760 - val_loss: 5.3728 - val_accuracy: 0.0276
Epoch 6/50
20/20 [==============================] - 30s 2s/step - loss: 0.0927 - accuracy: 0.9816 - val_loss: 5.7009 - val_accuracy: 0.0260
Epoch 7/50
20/20 [==============================] - 30s 2s/step - loss: 0.0858 - accuracy: 0.9837 - val_loss: 6.0049 - val_accuracy: 0.0260
Epoch 8/50
20/20 [==============================] - 30s 2s/step - loss: 0.0646 - accuracy: 0.9867 - val_loss: 6.3786 - val_accuracy: 0.0260
Epoch 9/50
20/20 [==============================] - 30s 2s/step - loss: 0.0489 - accuracy: 0.9898 - val_loss: 6.5156 - val_accuracy: 0.0260
Note that I used the training data itself as the validation data, so it is strange that the training loss does not match the validation loss. Furthermore, when I evaluated the model on the training data, it seemed not to have been trained at all:
yale_classifier.evaluate(x_train, y_train_oh)
62/62 [==============================] - 6s 96ms/step - loss: 7.1123 - accuracy: 0.0260
[7.112329483032227, 0.026020407676696777]
Do you have any recommendations to solve this problem?

Why is my margin-of-error graph so spiky in TensorFlow?

Whenever I run my TensorFlow model, the margin-of-error (loss / val_loss) graph swings back and forth extremely, and I was wondering how I could stop or reduce this. Here is a picture:
Graph
Here's the code if anyone wants to run it; it should work fine as long as you have the required pip packages installed:
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
import datetime
import tensorboard
from keras.models import Sequential
from keras.layers import Dense

train_df = pd.read_csv('https://www.dropbox.com/s/ednsabkdzs8motw/ROK%20INPUT%20DATA%20-%20Sheet1.csv?dl=1')
eval_df = pd.read_csv('https://www.dropbox.com/s/irnqwc1v67wmbfk/ROK%20EVAL%20DATA%20-%20Sheet1.csv?dl=1')

train_df['Troops'] = train_df['Troops'].astype(float)
train_df['Enemy Troops'] = train_df['Enemy Troops'].astype(float)
train_df['Damage'] = train_df['Damage'].astype(float)
eval_df['Troops'] = eval_df['Troops'].astype(float)
eval_df['Enemy Troops'] = eval_df['Enemy Troops'].astype(float)
eval_df['Damage'] = eval_df['Damage'].astype(float)

damage = train_df.pop('Damage')
dataset = tf.data.Dataset.from_tensor_slices((train_df.values, damage.values))
test_labels = eval_df.pop('Damage')
test_features = eval_df.copy()

model = keras.Sequential(
    [
        tf.keras.layers.InputLayer(input_shape=(8,)),
        tf.keras.layers.Dense(8, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1),
    ]
)
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()
history = model.fit(train_df, damage, validation_split=0.2, epochs=5000)

def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 2000])
    plt.xlabel('Epoch')
    plt.ylabel('Error [MPG]')
    plt.legend()
    plt.grid(True)

plot_loss(history)
plt.show()
This is because the label values in your dataset are imbalanced and contain outliers, which mean squared error punishes quadratically; use mean_absolute_error in place of mean_squared_error as the loss function to reduce the influence of those outliers.
Please check the code below:
model.compile(optimizer='adam', loss=tf.losses.MeanAbsoluteError())
history = model.fit(train_df, damage,
                    validation_data=(test_features, test_labels),
                    epochs=100)
Output:
Epoch 1/100
2/2 [==============================] - 1s 150ms/step - loss: 1015.9664 - val_loss: 129.8347
Epoch 2/100
2/2 [==============================] - 0s 30ms/step - loss: 244.7547 - val_loss: 28.9964
Epoch 3/100
2/2 [==============================] - 0s 32ms/step - loss: 629.1597 - val_loss: 20.9922
Epoch 4/100
2/2 [==============================] - 0s 35ms/step - loss: 612.6526 - val_loss: 45.7117
Epoch 5/100
2/2 [==============================] - 0s 34ms/step - loss: 335.1754 - val_loss: 93.0301
Epoch 6/100
2/2 [==============================] - 0s 30ms/step - loss: 168.1687 - val_loss: 128.6208
Epoch 7/100
2/2 [==============================] - 0s 30ms/step - loss: 406.5712 - val_loss: 129.7909
Epoch 8/100
2/2 [==============================] - 0s 28ms/step - loss: 391.4481 - val_loss: 113.0307
Epoch 9/100
2/2 [==============================] - 0s 27ms/step - loss: 182.2033 - val_loss: 83.6522
Epoch 10/100
2/2 [==============================] - 0s 42ms/step - loss: 176.4511 - val_loss: 68.1947
Epoch 11/100
2/2 [==============================] - 0s 28ms/step - loss: 266.6671 - val_loss: 71.0774
Epoch 12/100
2/2 [==============================] - 0s 40ms/step - loss: 198.2684 - val_loss: 88.3499
Epoch 13/100
2/2 [==============================] - 0s 28ms/step - loss: 119.8650 - val_loss: 100.7030
Epoch 14/100
2/2 [==============================] - 0s 27ms/step - loss: 189.6049 - val_loss: 94.6102
Epoch 15/100
2/2 [==============================] - 0s 28ms/step - loss: 146.5237 - val_loss: 77.1270
Epoch 16/100
2/2 [==============================] - 0s 30ms/step - loss: 106.8908 - val_loss: 60.1246
Epoch 17/100
2/2 [==============================] - 0s 29ms/step - loss: 132.0525 - val_loss: 56.3836
Epoch 18/100
2/2 [==============================] - 0s 29ms/step - loss: 129.6660 - val_loss: 64.7796
Epoch 19/100
2/2 [==============================] - 0s 32ms/step - loss: 118.3896 - val_loss: 68.5954
Epoch 20/100
2/2 [==============================] - 0s 30ms/step - loss: 114.2150 - val_loss: 67.0202
Epoch 21/100
2/2 [==============================] - 0s 32ms/step - loss: 112.6538 - val_loss: 65.2389
Epoch 22/100
2/2 [==============================] - 0s 30ms/step - loss: 107.1644 - val_loss: 59.4646
Epoch 23/100
2/2 [==============================] - 0s 31ms/step - loss: 106.9518 - val_loss: 51.4506
Epoch 24/100
2/2 [==============================] - 0s 28ms/step - loss: 107.4203 - val_loss: 48.4060
Epoch 25/100
2/2 [==============================] - 0s 30ms/step - loss: 108.1180 - val_loss: 48.5364
Epoch 26/100
2/2 [==============================] - 0s 30ms/step - loss: 106.6088 - val_loss: 47.0263
Epoch 27/100
2/2 [==============================] - 0s 29ms/step - loss: 107.6407 - val_loss: 47.3658
Epoch 28/100
2/2 [==============================] - 0s 32ms/step - loss: 105.1175 - val_loss: 45.2668
Epoch 29/100
2/2 [==============================] - 0s 35ms/step - loss: 105.9028 - val_loss: 45.2371
Epoch 30/100
2/2 [==============================] - 0s 32ms/step - loss: 103.5908 - val_loss: 48.8512
Epoch 31/100
2/2 [==============================] - 0s 27ms/step - loss: 102.6504 - val_loss: 53.9927
Epoch 32/100
2/2 [==============================] - 0s 28ms/step - loss: 100.8014 - val_loss: 58.1143
Epoch 33/100
2/2 [==============================] - 0s 30ms/step - loss: 114.6031 - val_loss: 49.8311
Epoch 34/100
2/2 [==============================] - 0s 32ms/step - loss: 104.9576 - val_loss: 45.7614
Epoch 35/100
2/2 [==============================] - 0s 35ms/step - loss: 102.5296 - val_loss: 44.3673
Epoch 36/100
2/2 [==============================] - 0s 32ms/step - loss: 105.3818 - val_loss: 40.8473
Epoch 37/100
2/2 [==============================] - 0s 26ms/step - loss: 102.0235 - val_loss: 38.7967
Epoch 38/100
2/2 [==============================] - 0s 30ms/step - loss: 103.9142 - val_loss: 36.8466
Epoch 39/100
2/2 [==============================] - 0s 32ms/step - loss: 105.1095 - val_loss: 40.7968
Epoch 40/100
2/2 [==============================] - 0s 34ms/step - loss: 102.7449 - val_loss: 46.4677
Epoch 41/100
2/2 [==============================] - 0s 29ms/step - loss: 101.3321 - val_loss: 53.2947
Epoch 42/100
2/2 [==============================] - 0s 29ms/step - loss: 106.1829 - val_loss: 53.4320
Epoch 43/100
2/2 [==============================] - 0s 32ms/step - loss: 97.9348 - val_loss: 47.5536
Epoch 44/100
2/2 [==============================] - 0s 31ms/step - loss: 98.5830 - val_loss: 41.6827
Epoch 45/100
2/2 [==============================] - 0s 32ms/step - loss: 98.8272 - val_loss: 36.0022
Epoch 46/100
2/2 [==============================] - 0s 29ms/step - loss: 109.2409 - val_loss: 32.8524
Epoch 47/100
2/2 [==============================] - 0s 39ms/step - loss: 112.1813 - val_loss: 38.2731
Epoch 48/100
2/2 [==============================] - 0s 34ms/step - loss: 99.5903 - val_loss: 40.8585
Epoch 49/100
2/2 [==============================] - 0s 29ms/step - loss: 106.2939 - val_loss: 47.6244
Epoch 50/100
2/2 [==============================] - 0s 27ms/step - loss: 97.1548 - val_loss: 51.4656
Epoch 51/100
2/2 [==============================] - 0s 29ms/step - loss: 97.9445 - val_loss: 46.3714
Epoch 52/100
2/2 [==============================] - 0s 29ms/step - loss: 96.2311 - val_loss: 39.1717
Epoch 53/100
2/2 [==============================] - 0s 38ms/step - loss: 96.8036 - val_loss: 34.6192
Epoch 54/100
2/2 [==============================] - 0s 33ms/step - loss: 99.1502 - val_loss: 31.0388
Epoch 55/100
2/2 [==============================] - 0s 31ms/step - loss: 105.3854 - val_loss: 30.7220
Epoch 56/100
2/2 [==============================] - 0s 46ms/step - loss: 103.1274 - val_loss: 35.8683
Epoch 57/100
2/2 [==============================] - 0s 26ms/step - loss: 94.2024 - val_loss: 38.4891
Epoch 58/100
2/2 [==============================] - 0s 33ms/step - loss: 95.7762 - val_loss: 41.9727
Epoch 59/100
2/2 [==============================] - 0s 34ms/step - loss: 93.3703 - val_loss: 30.4720
Epoch 60/100
2/2 [==============================] - 0s 36ms/step - loss: 93.3310 - val_loss: 20.7104
Epoch 61/100
2/2 [==============================] - 0s 30ms/step - loss: 98.0708 - val_loss: 12.8391
Epoch 62/100
2/2 [==============================] - 0s 31ms/step - loss: 101.6647 - val_loss: 24.7238
Epoch 63/100
2/2 [==============================] - 0s 33ms/step - loss: 89.2492 - val_loss: 35.5170
Epoch 64/100
2/2 [==============================] - 0s 32ms/step - loss: 114.9297 - val_loss: 19.0492
Epoch 65/100
2/2 [==============================] - 0s 42ms/step - loss: 89.8944 - val_loss: 9.8713
Epoch 66/100
2/2 [==============================] - 0s 32ms/step - loss: 119.7986 - val_loss: 12.5584
Epoch 67/100
2/2 [==============================] - 0s 33ms/step - loss: 85.2151 - val_loss: 23.7810
Epoch 68/100
2/2 [==============================] - 0s 31ms/step - loss: 91.6945 - val_loss: 27.0833
Epoch 69/100
2/2 [==============================] - 0s 31ms/step - loss: 91.0443 - val_loss: 20.8228
Epoch 70/100
2/2 [==============================] - 0s 32ms/step - loss: 88.2557 - val_loss: 17.0245
Epoch 71/100
2/2 [==============================] - 0s 31ms/step - loss: 89.2440 - val_loss: 14.7132
Epoch 72/100
2/2 [==============================] - 0s 32ms/step - loss: 89.3514 - val_loss: 13.7965
Epoch 73/100
2/2 [==============================] - 0s 31ms/step - loss: 87.8547 - val_loss: 12.9283
Epoch 74/100
2/2 [==============================] - 0s 32ms/step - loss: 87.2561 - val_loss: 13.1212
Epoch 75/100
2/2 [==============================] - 0s 29ms/step - loss: 87.3379 - val_loss: 15.1878
Epoch 76/100
2/2 [==============================] - 0s 30ms/step - loss: 85.2761 - val_loss: 16.0503
Epoch 77/100
2/2 [==============================] - 0s 34ms/step - loss: 87.9641 - val_loss: 17.0547
Epoch 78/100
2/2 [==============================] - 0s 37ms/step - loss: 82.7034 - val_loss: 15.5357
Epoch 79/100
2/2 [==============================] - 0s 39ms/step - loss: 82.3891 - val_loss: 14.0231
Epoch 80/100
2/2 [==============================] - 0s 31ms/step - loss: 81.3045 - val_loss: 15.4905
Epoch 81/100
2/2 [==============================] - 0s 32ms/step - loss: 81.0241 - val_loss: 15.6177
Epoch 82/100
2/2 [==============================] - 0s 32ms/step - loss: 80.9134 - val_loss: 15.9989
Epoch 83/100
2/2 [==============================] - 0s 32ms/step - loss: 82.4333 - val_loss: 14.1885
Epoch 84/100
2/2 [==============================] - 0s 28ms/step - loss: 79.1791 - val_loss: 14.6505
Epoch 85/100
2/2 [==============================] - 0s 32ms/step - loss: 79.3381 - val_loss: 12.7476
Epoch 86/100
2/2 [==============================] - 0s 33ms/step - loss: 78.1342 - val_loss: 9.6814
Epoch 87/100
2/2 [==============================] - 0s 29ms/step - loss: 83.7268 - val_loss: 7.7703
Epoch 88/100
2/2 [==============================] - 0s 28ms/step - loss: 78.5488 - val_loss: 11.2915
Epoch 89/100
2/2 [==============================] - 0s 27ms/step - loss: 77.6771 - val_loss: 14.2054
Epoch 90/100
2/2 [==============================] - 0s 33ms/step - loss: 78.5004 - val_loss: 14.1587
Epoch 91/100
2/2 [==============================] - 0s 36ms/step - loss: 81.0928 - val_loss: 8.8034
Epoch 92/100
2/2 [==============================] - 0s 29ms/step - loss: 80.1722 - val_loss: 7.1039
Epoch 93/100
2/2 [==============================] - 0s 31ms/step - loss: 77.2722 - val_loss: 6.9086
Epoch 94/100
2/2 [==============================] - 0s 26ms/step - loss: 77.4540 - val_loss: 11.6563
Epoch 95/100
2/2 [==============================] - 0s 27ms/step - loss: 84.5494 - val_loss: 6.5362
Epoch 96/100
2/2 [==============================] - 0s 35ms/step - loss: 76.0600 - val_loss: 15.5146
Epoch 97/100
2/2 [==============================] - 0s 33ms/step - loss: 91.8825 - val_loss: 5.5035
Epoch 98/100
2/2 [==============================] - 0s 28ms/step - loss: 83.6633 - val_loss: 10.4812
Epoch 99/100
2/2 [==============================] - 0s 29ms/step - loss: 76.4038 - val_loss: 11.0298
Epoch 100/100
2/2 [==============================] - 0s 48ms/step - loss: 77.8150 - val_loss: 16.8254
and the loss graph looks like this:
[loss graph image from the original answer]
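Separately from the loss choice, a moving average makes the plotted curves easier to read. A sketch, reusing the history object from the code above (the window size is an arbitrary choice):
import numpy as np
import matplotlib.pyplot as plt

def smooth(values, window=10):
    # Simple moving average; the output is (window - 1) points shorter.
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode='valid')

plt.plot(smooth(history.history['loss']), label='loss (smoothed)')
plt.plot(smooth(history.history['val_loss']), label='val_loss (smoothed)')
plt.xlabel('Epoch')
plt.ylabel('MAE loss')
plt.legend()
plt.grid(True)
plt.show()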

Neural network validation accuracy sometimes doesn't change

I am using a neural network for a binary classification problem, but I am running into some trouble. Sometimes when running my model the validation accuracy doesn't change at all, and sometimes it works just fine. My dataset has 1200 samples with 28 features, and I have a class imbalance (200 in class a, 1000 in class b). All my features are normalized to between 0 and 1. As I stated, this problem doesn't always happen, but I want to know why it happens and how to fix it.
I have tried changing the optimisation function and the activation function, but that did me no good. I have also noticed that when I increased the number of neurons in my network the problem occurs less often, but it wasn't fixed. I also tried increasing the number of epochs, but the problem still occurs sometimes.
model = Sequential()
model.add(Dense(28, input_dim=28, kernel_initializer='normal', activation='sigmoid'))
model.add(Dense(200, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(300, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(300, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(150, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.4))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, y_train,
                    epochs=34,
                    batch_size=32,
                    validation_data=(X_val, y_val),
                    verbose=1)
This is the result I sometimes get from training my model:
Epoch 1/34
788/788 [==============================] - 1s 2ms/step - loss: 1.5705 - acc: 0.6865 - val_loss: 0.6346 - val_acc: 0.7783
Epoch 2/34
788/788 [==============================] - 0s 211us/step - loss: 1.0262 - acc: 0.6231 - val_loss: 0.5310 - val_acc: 0.7783
Epoch 3/34
788/788 [==============================] - 0s 194us/step - loss: 1.7575 - acc: 0.7221 - val_loss: 0.5431 - val_acc: 0.7783
Epoch 4/34
788/788 [==============================] - 0s 218us/step - loss: 0.9113 - acc: 0.5774 - val_loss: 0.5685 - val_acc: 0.7783
Epoch 5/34
788/788 [==============================] - 0s 199us/step - loss: 1.0987 - acc: 0.6688 - val_loss: 0.6435 - val_acc: 0.7783
Epoch 6/34
788/788 [==============================] - 0s 201us/step - loss: 0.9777 - acc: 0.5343 - val_loss: 0.5643 - val_acc: 0.7783
Epoch 7/34
788/788 [==============================] - 0s 204us/step - loss: 1.0603 - acc: 0.5914 - val_loss: 0.6266 - val_acc: 0.7783
Epoch 8/34
788/788 [==============================] - 0s 197us/step - loss: 0.7580 - acc: 0.5939 - val_loss: 0.6615 - val_acc: 0.7783
Epoch 9/34
788/788 [==============================] - 0s 206us/step - loss: 0.8950 - acc: 0.6650 - val_loss: 0.5291 - val_acc: 0.7783
Epoch 10/34
788/788 [==============================] - 0s 230us/step - loss: 0.8114 - acc: 0.6701 - val_loss: 0.5428 - val_acc: 0.7783
Epoch 11/34
788/788 [==============================] - 0s 281us/step - loss: 0.7235 - acc: 0.6624 - val_loss: 0.5275 - val_acc: 0.7783
Epoch 12/34
788/788 [==============================] - 0s 264us/step - loss: 0.7237 - acc: 0.6485 - val_loss: 0.5473 - val_acc: 0.7783
Epoch 13/34
788/788 [==============================] - 0s 213us/step - loss: 0.6902 - acc: 0.7056 - val_loss: 0.5265 - val_acc: 0.7783
Epoch 14/34
788/788 [==============================] - 0s 217us/step - loss: 0.6726 - acc: 0.7145 - val_loss: 0.5285 - val_acc: 0.7783
Epoch 15/34
788/788 [==============================] - 0s 197us/step - loss: 0.6656 - acc: 0.7132 - val_loss: 0.5354 - val_acc: 0.7783
Epoch 16/34
788/788 [==============================] - 0s 216us/step - loss: 0.6083 - acc: 0.7259 - val_loss: 0.5262 - val_acc: 0.7783
Epoch 17/34
788/788 [==============================] - 0s 218us/step - loss: 0.6188 - acc: 0.7310 - val_loss: 0.5271 - val_acc: 0.7783
Epoch 18/34
788/788 [==============================] - 0s 210us/step - loss: 0.6642 - acc: 0.6142 - val_loss: 0.5676 - val_acc: 0.7783
Epoch 19/34
788/788 [==============================] - 0s 200us/step - loss: 0.6017 - acc: 0.7221 - val_loss: 0.5256 - val_acc: 0.7783
Epoch 20/34
788/788 [==============================] - 0s 209us/step - loss: 0.6188 - acc: 0.7157 - val_loss: 0.8090 - val_acc: 0.2217
Epoch 21/34
788/788 [==============================] - 0s 201us/step - loss: 1.1724 - acc: 0.4061 - val_loss: 0.5448 - val_acc: 0.7783
Epoch 22/34
788/788 [==============================] - 0s 205us/step - loss: 0.5724 - acc: 0.7424 - val_loss: 0.5293 - val_acc: 0.7783
Epoch 23/34
788/788 [==============================] - 0s 234us/step - loss: 0.5829 - acc: 0.7538 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 24/34
788/788 [==============================] - 0s 209us/step - loss: 0.5815 - acc: 0.7525 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 25/34
788/788 [==============================] - 0s 220us/step - loss: 0.5688 - acc: 0.7576 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 26/34
788/788 [==============================] - 0s 210us/step - loss: 0.5715 - acc: 0.7525 - val_loss: 0.5273 - val_acc: 0.7783
Epoch 27/34
788/788 [==============================] - 0s 206us/step - loss: 0.5584 - acc: 0.7576 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 28/34
788/788 [==============================] - 0s 215us/step - loss: 0.5728 - acc: 0.7563 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 29/34
788/788 [==============================] - 0s 281us/step - loss: 0.5735 - acc: 0.7576 - val_loss: 0.5275 - val_acc: 0.7783
Epoch 30/34
788/788 [==============================] - 0s 272us/step - loss: 0.5773 - acc: 0.7614 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 31/34
788/788 [==============================] - 0s 225us/step - loss: 0.5847 - acc: 0.7525 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 32/34
788/788 [==============================] - 0s 239us/step - loss: 0.5739 - acc: 0.7551 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 33/34
788/788 [==============================] - 0s 216us/step - loss: 0.5632 - acc: 0.7525 - val_loss: 0.5269 - val_acc: 0.7783
Epoch 34/34
788/788 [==============================] - 0s 240us/step - loss: 0.5672 - acc: 0.7576 - val_loss: 0.5267 - val_acc: 0.7783
Given your reported class imbalance, your model does not seem to learn anything (the reported accuracy is consistent with simply predicting everything as the majority class). Nevertheless, there are issues with your code; for starters:
Change all activation functions except in the output layer to activation='relu'.
Add a sigmoid activation to your last layer, activation='sigmoid'; as is, yours is a regression network (default linear output in the last layer) and not a classification one.
Remove all kernel_initializer='normal' arguments from your layers, i.e. leave the default kernel_initializer='glorot_uniform', which is known to achieve (much) better performance.
Also, it is not clear why you go for a first dense layer of 28 units; the number of units here has nothing to do with the input dimension. Please see Keras Sequential model input layer.
Dropout should not go into the network by default; try first without it and add it back if necessary.
All in all, here is how your model should look for starters:
model = Sequential()
model.add(Dense(200, input_dim=28, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(300, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(300, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(150, activation='relu'))
# model.add(Dropout(0.4))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
and, as said, uncomment/adjust the dropout layers depending on your experimental results.
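Beyond the architecture, given the stated 200 vs 1000 class imbalance, passing class_weight to fit is also worth trying. A sketch, not part of the original answer (it assumes label 1 is the 200-sample minority class, and uses the standard inverse-frequency heuristic n_samples / (n_classes * n_class_samples)):
# 1200 samples total, 1000 in class 0 and 200 in class 1 (assumed mapping).
class_weight = {0: 1200 / (2 * 1000), 1: 1200 / (2 * 200)}  # {0: 0.6, 1: 3.0}

history = model.fit(X_train, y_train,
                    epochs=34,
                    batch_size=32,
                    validation_data=(X_val, y_val),
                    class_weight=class_weight,
                    verbose=1)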
