Validation Accuracy not improving CNN - python

I am fairly new to deep learning and am currently trying to predict consumer choices based on EEG data. The total dataset consists of 1045 EEG recordings, each with a corresponding label indicating Like or Dislike for a product. The classes are distributed as 44% Likes and 56% Dislikes. I read that Convolutional Neural Networks are suitable for working with raw EEG data, so I tried to implement a network in Keras with the following structure:
import numpy as np
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(full_data, target, test_size=0.20, random_state=42)
y_train = np.asarray(y_train).astype('float32').reshape((-1, 1))
y_test = np.asarray(y_test).astype('float32').reshape((-1, 1))
# X_train.shape = (836, 512, 14)
# y_train.shape = (836, 1)

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Conv1D(16, kernel_size=3, activation="relu", input_shape=(512, 14)))
model.add(MaxPooling1D())
model.add(Conv1D(8, kernel_size=3, activation="relu"))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=64)
When I fit the model, however, the validation accuracy does not change at all, as the following output shows:
Epoch 1/20
14/14 [==============================] - 0s 32ms/step - loss: 292.6353 - accuracy: 0.5383 - val_loss: 0.7884 - val_accuracy: 0.5407
Epoch 2/20
14/14 [==============================] - 0s 7ms/step - loss: 1.3748 - accuracy: 0.5598 - val_loss: 0.8860 - val_accuracy: 0.5502
Epoch 3/20
14/14 [==============================] - 0s 6ms/step - loss: 1.0537 - accuracy: 0.5598 - val_loss: 0.7629 - val_accuracy: 0.5455
Epoch 4/20
14/14 [==============================] - 0s 6ms/step - loss: 0.8827 - accuracy: 0.5598 - val_loss: 0.7010 - val_accuracy: 0.5455
Epoch 5/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7988 - accuracy: 0.5598 - val_loss: 0.8689 - val_accuracy: 0.5407
Epoch 6/20
14/14 [==============================] - 0s 6ms/step - loss: 1.0221 - accuracy: 0.5610 - val_loss: 0.6961 - val_accuracy: 0.5455
Epoch 7/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7415 - accuracy: 0.5598 - val_loss: 0.6945 - val_accuracy: 0.5455
Epoch 8/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7381 - accuracy: 0.5574 - val_loss: 0.7761 - val_accuracy: 0.5455
Epoch 9/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7326 - accuracy: 0.5598 - val_loss: 0.6926 - val_accuracy: 0.5455
Epoch 10/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7338 - accuracy: 0.5598 - val_loss: 0.6917 - val_accuracy: 0.5455
Epoch 11/20
14/14 [==============================] - 0s 7ms/step - loss: 0.7203 - accuracy: 0.5610 - val_loss: 0.6916 - val_accuracy: 0.5455
Epoch 12/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7192 - accuracy: 0.5610 - val_loss: 0.6914 - val_accuracy: 0.5455
Epoch 13/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7174 - accuracy: 0.5610 - val_loss: 0.6912 - val_accuracy: 0.5455
Epoch 14/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7155 - accuracy: 0.5610 - val_loss: 0.6911 - val_accuracy: 0.5455
Epoch 15/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7143 - accuracy: 0.5610 - val_loss: 0.6910 - val_accuracy: 0.5455
Epoch 16/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7129 - accuracy: 0.5610 - val_loss: 0.6909 - val_accuracy: 0.5455
Epoch 17/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7114 - accuracy: 0.5610 - val_loss: 0.6907 - val_accuracy: 0.5455
Epoch 18/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7103 - accuracy: 0.5610 - val_loss: 0.6906 - val_accuracy: 0.5455
Epoch 19/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7088 - accuracy: 0.5610 - val_loss: 0.6906 - val_accuracy: 0.5455
Epoch 20/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7075 - accuracy: 0.5610 - val_loss: 0.6905 - val_accuracy: 0.5455
Thanks in advance for any insights!

The phenomenon you are running into is called underfitting. This happens when the amount or quality of your training data is insufficient, or when your network architecture is too small and not capable of learning the problem.
Try normalizing your input data and experimenting with different network architectures, learning rates and activation functions.
As @Muhammad Shahzad stated in his comment, adding some Dense layers after flattening would be a concrete architecture adaptation you should try.
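For example, a rough sketch combining both suggestions (per-channel standardization computed on the training set, plus a couple of Dense layers after Flatten). The layer sizes are only illustrative, and it assumes X_train/X_test are NumPy arrays as in the question:
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense
from keras.optimizers import Adam

# standardize each of the 14 channels using training-set statistics only
mean = X_train.mean(axis=(0, 1), keepdims=True)
std = X_train.std(axis=(0, 1), keepdims=True) + 1e-8
X_train_norm = (X_train - mean) / std
X_test_norm = (X_test - mean) / std

model = Sequential()
model.add(Conv1D(32, kernel_size=3, activation="relu", input_shape=(512, 14)))
model.add(MaxPooling1D())
model.add(Conv1D(16, kernel_size=3, activation="relu"))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(64, activation="relu"))   # Dense layers after flattening, as suggested
model.add(Dense(32, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])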

You can also increase the number of epochs and, ideally, increase the size of the dataset. You can also use
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    vertical_flip=True,
    channel_shift_range=0.2,
    fill_mode='nearest'
)
to feed the model more data, and I hope you can increase the validation accuracy.

Related

How can I prevent my model from being overfitted?

I'm a newbie in deep learning. I'm trying to create a model, and I don't really understand the model.add(layers) calls. I'm sure about the input shape (it's for image recognition). I think the problem is in the Dropout, but I don't understand its value.
Can someone explain the following to me?
from keras import models, layers, optimizers

model = models.Sequential()
model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=(128,128,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation='relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(6, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=1e-4), metrics=['acc'])
-------------------------------------------------------
history = model.fit(
    train_data,
    train_labels,
    epochs=30,
    validation_data=(test_data, test_labels),
)
and here is the result:
Epoch 15/30
5/5 [==============================] - 0s 34ms/step - loss: 0.3987 - acc: 0.8536 - val_loss: 0.7021 - val_acc: 0.7143
Epoch 16/30
5/5 [==============================] - 0s 31ms/step - loss: 0.3223 - acc: 0.8891 - val_loss: 0.6393 - val_acc: 0.7778
Epoch 17/30
5/5 [==============================] - 0s 32ms/step - loss: 0.3321 - acc: 0.9082 - val_loss: 0.6229 - val_acc: 0.7460
Epoch 18/30
5/5 [==============================] - 0s 31ms/step - loss: 0.2615 - acc: 0.9409 - val_loss: 0.6591 - val_acc: 0.8095
Epoch 19/30
5/5 [==============================] - 0s 32ms/step - loss: 0.2161 - acc: 0.9857 - val_loss: 0.6368 - val_acc: 0.7143
Epoch 20/30
5/5 [==============================] - 0s 33ms/step - loss: 0.1773 - acc: 0.9857 - val_loss: 0.5644 - val_acc: 0.7778
Epoch 21/30
5/5 [==============================] - 0s 32ms/step - loss: 0.1650 - acc: 0.9782 - val_loss: 0.5459 - val_acc: 0.8413
Epoch 22/30
5/5 [==============================] - 0s 31ms/step - loss: 0.1534 - acc: 0.9789 - val_loss: 0.5738 - val_acc: 0.7460
Epoch 23/30
5/5 [==============================] - 0s 32ms/step - loss: 0.1205 - acc: 0.9921 - val_loss: 0.5351 - val_acc: 0.8095
Epoch 24/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0967 - acc: 1.0000 - val_loss: 0.5256 - val_acc: 0.8413
Epoch 25/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0736 - acc: 1.0000 - val_loss: 0.5493 - val_acc: 0.7937
Epoch 26/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0826 - acc: 1.0000 - val_loss: 0.5342 - val_acc: 0.8254
Epoch 27/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0687 - acc: 1.0000 - val_loss: 0.5452 - val_acc: 0.8254
Epoch 28/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0571 - acc: 1.0000 - val_loss: 0.5176 - val_acc: 0.7937
Epoch 29/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0549 - acc: 1.0000 - val_loss: 0.5142 - val_acc: 0.8095
Epoch 30/30
5/5 [==============================] - 0s 32ms/step - loss: 0.0479 - acc: 1.0000 - val_loss: 0.5243 - val_acc: 0.8095
I never got past 70% on average before, but here I reach 80%; however, I think I'm overfitting. I've obviously searched through different docs, but I'm lost.
Have you tried the following in your training:
Data Augmentation
Pre-trained Model
Looking at the execution time per epoch, it looks like your data set is pretty small. Also, it's not clear whether there is any class imbalance in your dataset. You should probably try stratified CV training and analyse the per-fold results. It won't prevent overfitting, but it will eventually give you more insight into your model, which generally helps to reduce overfitting. However, preventing overfitting is a broad topic; search online for resources. You can also try this
model.compile(loss='categorical_crossentropy',
              optimizer='adam', metrics=['acc'])
-------------------------------------------------------
import tensorflow as tf
from tensorflow.keras.callbacks import ReduceLROnPlateau

# src: https://keras.io/api/callbacks/reduce_lr_on_plateau/
# reduce the learning rate by a factor of 0.2 if val_loss
# doesn't improve within 5 epochs
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=5, min_lr=0.00001)

# src: https://keras.io/api/callbacks/early_stopping/
# stop training if val_loss doesn't improve within 15 epochs
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)

history = model.fit(
    train_data,
    train_labels,
    epochs=30,
    validation_data=(test_data, test_labels),
    callbacks=[reduce_lr, early_stop]
)
You may also find ModelCheckpoint or LearningRateScheduler useful. None of this guarantees that there will be no overfitting, but these are some approaches worth adopting.
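For instance, a minimal ModelCheckpoint sketch that could be added to the callbacks list above (the file name is only an example):
# src: https://keras.io/api/callbacks/model_checkpoint/
# keep only the weights that achieved the best val_loss so far
checkpoint = tf.keras.callbacks.ModelCheckpoint('best_model.h5',
                                                monitor='val_loss',
                                                save_best_only=True)

history = model.fit(
    train_data,
    train_labels,
    epochs=30,
    validation_data=(test_data, test_labels),
    callbacks=[reduce_lr, early_stop, checkpoint]
)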

Loss value varying mid LSTM training

I'm currently working on a neural network project, and I need some help understanding the relationship between the parameters and the values my neural network is outputting.
My goal is to train an LSTM neural network to detect stress in speech. I'm using a dataset divided into audios of neutral voices and audios of voices under stress. In order to classify which audios contain stress, I'm extracting relevant features from the voices frame by frame, and then feeding this information into the LSTM neural network.
Since I'm extracting features per frame, the extraction outputs from audio files with different lengths also have different lengths, proportional to the audio duration. To normalize the neural network's inputs, I'm using a padding technique, which consists of adding zeros to the end of each extracted feature set to match the biggest set size.
So, for example, if I have 3 audio files with durations of 4, 5 and 6 seconds, the extracted feature sets from the first two audios would be padded with zeros to match the length of the third audio's feature set.
A padded features set looks like this:
[
[9.323346e+00, 9.222625e+00, 8.910659e+00],
[8.751126e+00, 8.432300e+00, 8.046866e+00],
...
[7.439109e+00, 7.380966e+00, 6.092496e+00],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]
]
The whole dataset dimensions are as follows: (number of audio files) x (number of frames in biggest audio file) x (number of features)
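For reference, this padding step can be done with Keras's pad_sequences utility; a sketch (not my exact code), where feature_sets is a hypothetical Python list holding one (n_frames, n_features) array per audio file:
from keras.preprocessing.sequence import pad_sequences

# zero-pad every feature set at the end ('post') up to the longest file
data = pad_sequences(feature_sets, padding='post', dtype='float32', value=0.0)
# data.shape == (number of audio files, number of frames in biggest audio file, number of features)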
I divided my dataset into a training set, a validation set and a test set. Currently, I have audio files from two public databases, one set with 576 audio files (288 non-stressed, 288 stressed) and the other with 240 files (120 non-stressed, 120 stressed).
The following code shows my LSTM implementation using Keras:
from tensorflow import keras
from sklearn.model_selection import train_test_split

N_HIDDEN_CELLS = 100
LEARNING_RATE = 0.00005
BATCH_SIZE = 32
EPOCHS_N = 30
ACTIVATION_FUNCTION = 'softmax'
LOSS_FUNCTION = 'binary_crossentropy'

def create_model(input_shape):
    model = keras.Sequential()
    model.add(keras.layers.LSTM(N_HIDDEN_CELLS, input_shape=input_shape, return_sequences=True))
    model.add(keras.layers.LSTM(N_HIDDEN_CELLS, return_sequences=True))
    model.add(keras.layers.LSTM(N_HIDDEN_CELLS, return_sequences=True))
    model.add(keras.layers.Dropout(0.3))
    model.add(keras.layers.LSTM(2, activation=ACTIVATION_FUNCTION))
    return model

def prepare_datasets(data, labels, test_size, validation_size):
    X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=test_size)
    X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=validation_size)
    return X_train, X_validation, X_test, y_train, y_validation, y_test

X_train, X_validation, X_test, y_train, y_validation, y_test = prepare_datasets(data, labels, 0.25, 0.2)
input_shape = (X_train.shape[1], X_train.shape[2])
model = create_model(input_shape)
optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(optimizer=optimizer, loss=LOSS_FUNCTION, metrics=['accuracy'])
model.summary()
history = model.fit(X_train, y_train, validation_data=(X_validation, y_validation), batch_size=BATCH_SIZE, epochs=EPOCHS_N)
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
After various tests and executions, I'm not so confident about my network's performance. At first, the validation loss values were all over the place, varying a lot and not converging at all. With some adjustments to the parameters, I ended up with the values in the code above. Still, I'm not that confident, mainly because the validation loss starts to vary after epoch 15 (more or less). In the first epochs, both training and validation losses fall according to expectations, but after some epochs the training loss keeps falling while the validation loss starts to vary and rise.
Below are two executions of the same network (with the same parameters as the code provided) and same dataset (the one with 576 audio files):
Epoch 1/30
11/11 [==============================] - 5s 194ms/step - loss: 0.8493 - accuracy: 0.4934 - val_loss: 0.8436 - val_accuracy: 0.4943
Epoch 2/30
11/11 [==============================] - 1s 123ms/step - loss: 0.8398 - accuracy: 0.5271 - val_loss: 0.8364 - val_accuracy: 0.4943
Epoch 3/30
11/11 [==============================] - 1s 124ms/step - loss: 0.8291 - accuracy: 0.6015 - val_loss: 0.8277 - val_accuracy: 0.4828
Epoch 4/30
11/11 [==============================] - 1s 128ms/step - loss: 0.8187 - accuracy: 0.6022 - val_loss: 0.8159 - val_accuracy: 0.5402
Epoch 5/30
11/11 [==============================] - 1s 124ms/step - loss: 0.8017 - accuracy: 0.6691 - val_loss: 0.8002 - val_accuracy: 0.5862
Epoch 6/30
11/11 [==============================] - 1s 123ms/step - loss: 0.7754 - accuracy: 0.7081 - val_loss: 0.7750 - val_accuracy: 0.6322
Epoch 7/30
11/11 [==============================] - 1s 124ms/step - loss: 0.7455 - accuracy: 0.7168 - val_loss: 0.7391 - val_accuracy: 0.6092
Epoch 8/30
11/11 [==============================] - 1s 130ms/step - loss: 0.7017 - accuracy: 0.7287 - val_loss: 0.6896 - val_accuracy: 0.6437
Epoch 9/30
11/11 [==============================] - 1s 125ms/step - loss: 0.6519 - accuracy: 0.7210 - val_loss: 0.6311 - val_accuracy: 0.6897
Epoch 10/30
11/11 [==============================] - 1s 129ms/step - loss: 0.5613 - accuracy: 0.7817 - val_loss: 0.5935 - val_accuracy: 0.7356
Epoch 11/30
11/11 [==============================] - 1s 123ms/step - loss: 0.5050 - accuracy: 0.7789 - val_loss: 0.5645 - val_accuracy: 0.7471
Epoch 12/30
11/11 [==============================] - 1s 123ms/step - loss: 0.4612 - accuracy: 0.8098 - val_loss: 0.5127 - val_accuracy: 0.7356
Epoch 13/30
11/11 [==============================] - 1s 127ms/step - loss: 0.4117 - accuracy: 0.8301 - val_loss: 0.4848 - val_accuracy: 0.7931
Epoch 14/30
11/11 [==============================] - 1s 128ms/step - loss: 0.3857 - accuracy: 0.8479 - val_loss: 0.4609 - val_accuracy: 0.7816
Epoch 15/30
11/11 [==============================] - 1s 122ms/step - loss: 0.3392 - accuracy: 0.8724 - val_loss: 0.4467 - val_accuracy: 0.8276
Epoch 16/30
11/11 [==============================] - 1s 118ms/step - loss: 0.3140 - accuracy: 0.8901 - val_loss: 0.4462 - val_accuracy: 0.8161
Epoch 17/30
11/11 [==============================] - 1s 125ms/step - loss: 0.2775 - accuracy: 0.9092 - val_loss: 0.4619 - val_accuracy: 0.8046
Epoch 18/30
11/11 [==============================] - 1s 128ms/step - loss: 0.2963 - accuracy: 0.8873 - val_loss: 0.3995 - val_accuracy: 0.8621
Epoch 19/30
11/11 [==============================] - 1s 122ms/step - loss: 0.2663 - accuracy: 0.9141 - val_loss: 0.4364 - val_accuracy: 0.8276
Epoch 20/30
11/11 [==============================] - 1s 120ms/step - loss: 0.2415 - accuracy: 0.9368 - val_loss: 0.4758 - val_accuracy: 0.8276
Epoch 21/30
11/11 [==============================] - 1s 121ms/step - loss: 0.2209 - accuracy: 0.9297 - val_loss: 0.3855 - val_accuracy: 0.8276
Epoch 22/30
11/11 [==============================] - 1s 121ms/step - loss: 0.1605 - accuracy: 0.9676 - val_loss: 0.3658 - val_accuracy: 0.8621
Epoch 23/30
11/11 [==============================] - 1s 126ms/step - loss: 0.1618 - accuracy: 0.9641 - val_loss: 0.3638 - val_accuracy: 0.8506
Epoch 24/30
11/11 [==============================] - 1s 129ms/step - loss: 0.1309 - accuracy: 0.9728 - val_loss: 0.4450 - val_accuracy: 0.8276
Epoch 25/30
11/11 [==============================] - 1s 125ms/step - loss: 0.2014 - accuracy: 0.9394 - val_loss: 0.3439 - val_accuracy: 0.8621
Epoch 26/30
11/11 [==============================] - 1s 126ms/step - loss: 0.1342 - accuracy: 0.9554 - val_loss: 0.3356 - val_accuracy: 0.8851
Epoch 27/30
11/11 [==============================] - 1s 125ms/step - loss: 0.1555 - accuracy: 0.9618 - val_loss: 0.3486 - val_accuracy: 0.8736
Epoch 28/30
11/11 [==============================] - 1s 124ms/step - loss: 0.1346 - accuracy: 0.9659 - val_loss: 0.3208 - val_accuracy: 0.9080
Epoch 29/30
11/11 [==============================] - 1s 127ms/step - loss: 0.1193 - accuracy: 0.9697 - val_loss: 0.3706 - val_accuracy: 0.8851
Epoch 30/30
11/11 [==============================] - 1s 123ms/step - loss: 0.0836 - accuracy: 0.9777 - val_loss: 0.3623 - val_accuracy: 0.8621
5/5 - 0s - loss: 0.4383 - accuracy: 0.8472
Test accuracy: 0.8472222089767456
Test loss: 0.43826407194137573
1st execution val_loss x train_loss graph
Epoch 1/30
11/11 [==============================] - 5s 190ms/step - loss: 0.8297 - accuracy: 0.5306 - val_loss: 0.8508 - val_accuracy: 0.4138
Epoch 2/30
11/11 [==============================] - 1s 123ms/step - loss: 0.8138 - accuracy: 0.5460 - val_loss: 0.8355 - val_accuracy: 0.4713
Epoch 3/30
11/11 [==============================] - 1s 120ms/step - loss: 0.8082 - accuracy: 0.5384 - val_loss: 0.8145 - val_accuracy: 0.5402
Epoch 4/30
11/11 [==============================] - 1s 118ms/step - loss: 0.7997 - accuracy: 0.5799 - val_loss: 0.7911 - val_accuracy: 0.5517
Epoch 5/30
11/11 [==============================] - 1s 117ms/step - loss: 0.7752 - accuracy: 0.6585 - val_loss: 0.7654 - val_accuracy: 0.5862
Epoch 6/30
11/11 [==============================] - 1s 125ms/step - loss: 0.7527 - accuracy: 0.6609 - val_loss: 0.7289 - val_accuracy: 0.6437
Epoch 7/30
11/11 [==============================] - 1s 121ms/step - loss: 0.7129 - accuracy: 0.7432 - val_loss: 0.6790 - val_accuracy: 0.6782
Epoch 8/30
11/11 [==============================] - 1s 125ms/step - loss: 0.6570 - accuracy: 0.7707 - val_loss: 0.6107 - val_accuracy: 0.7356
Epoch 9/30
11/11 [==============================] - 1s 125ms/step - loss: 0.6112 - accuracy: 0.7513 - val_loss: 0.5529 - val_accuracy: 0.7586
Epoch 10/30
11/11 [==============================] - 1s 129ms/step - loss: 0.5339 - accuracy: 0.8026 - val_loss: 0.4895 - val_accuracy: 0.7816
Epoch 11/30
11/11 [==============================] - 1s 120ms/step - loss: 0.4720 - accuracy: 0.8189 - val_loss: 0.4579 - val_accuracy: 0.8046
Epoch 12/30
11/11 [==============================] - 1s 121ms/step - loss: 0.4332 - accuracy: 0.8527 - val_loss: 0.4169 - val_accuracy: 0.8046
Epoch 13/30
11/11 [==============================] - 1s 122ms/step - loss: 0.3976 - accuracy: 0.8568 - val_loss: 0.3850 - val_accuracy: 0.7931
Epoch 14/30
11/11 [==============================] - 1s 124ms/step - loss: 0.3489 - accuracy: 0.8726 - val_loss: 0.3753 - val_accuracy: 0.8046
Epoch 15/30
11/11 [==============================] - 1s 124ms/step - loss: 0.3088 - accuracy: 0.9020 - val_loss: 0.3562 - val_accuracy: 0.8161
Epoch 16/30
11/11 [==============================] - 1s 124ms/step - loss: 0.3489 - accuracy: 0.8745 - val_loss: 0.3501 - val_accuracy: 0.8391
Epoch 17/30
11/11 [==============================] - 1s 130ms/step - loss: 0.2725 - accuracy: 0.9240 - val_loss: 0.3436 - val_accuracy: 0.8506
Epoch 18/30
11/11 [==============================] - 1s 121ms/step - loss: 0.3494 - accuracy: 0.8764 - val_loss: 0.3516 - val_accuracy: 0.8506
Epoch 19/30
11/11 [==============================] - 1s 119ms/step - loss: 0.2553 - accuracy: 0.9243 - val_loss: 0.3413 - val_accuracy: 0.8391
Epoch 20/30
11/11 [==============================] - 1s 122ms/step - loss: 0.2723 - accuracy: 0.9092 - val_loss: 0.3258 - val_accuracy: 0.8621
Epoch 21/30
11/11 [==============================] - 1s 121ms/step - loss: 0.2600 - accuracy: 0.9306 - val_loss: 0.3257 - val_accuracy: 0.8506
Epoch 22/30
11/11 [==============================] - 1s 126ms/step - loss: 0.2406 - accuracy: 0.9411 - val_loss: 0.3203 - val_accuracy: 0.8966
Epoch 23/30
11/11 [==============================] - 1s 127ms/step - loss: 0.1892 - accuracy: 0.9577 - val_loss: 0.3191 - val_accuracy: 0.8851
Epoch 24/30
11/11 [==============================] - 1s 127ms/step - loss: 0.1869 - accuracy: 0.9594 - val_loss: 0.3246 - val_accuracy: 0.8621
Epoch 25/30
11/11 [==============================] - 1s 122ms/step - loss: 0.1898 - accuracy: 0.9487 - val_loss: 0.3217 - val_accuracy: 0.8851
Epoch 26/30
11/11 [==============================] - 1s 125ms/step - loss: 0.1731 - accuracy: 0.9523 - val_loss: 0.3280 - val_accuracy: 0.8506
Epoch 27/30
11/11 [==============================] - 1s 128ms/step - loss: 0.1445 - accuracy: 0.9687 - val_loss: 0.3213 - val_accuracy: 0.8851
Epoch 28/30
11/11 [==============================] - 1s 117ms/step - loss: 0.1441 - accuracy: 0.9718 - val_loss: 0.3212 - val_accuracy: 0.8621
Epoch 29/30
11/11 [==============================] - 1s 124ms/step - loss: 0.1250 - accuracy: 0.9762 - val_loss: 0.3232 - val_accuracy: 0.8851
Epoch 30/30
11/11 [==============================] - 1s 123ms/step - loss: 0.1460 - accuracy: 0.9687 - val_loss: 0.3218 - val_accuracy: 0.8736
5/5 - 0s - loss: 0.3297 - accuracy: 0.8889
Test accuracy: 0.8888888955116272
Test loss: 0.32971107959747314
2nd execution val_loss x train_loss graph
Some additional information:
My labels are one-hot encoded.
Frame step is 0.05s.
Frame size is 0.125s.
When running this configuration with the smaller dataset, I get slightly different behaviour. The loss value falls more evenly, but rather slowly. I tried increasing the number of epochs, but after the 30th epoch the validation loss started to vary and rise as well.
My questions are:
What could be causing this validation loss problem?
What does it mean when a model has a high loss value but its accuracy remains OK?
I read about binary cross-entropy, but I don't know if I understand what the loss value means in my tests; could someone help me understand these values?
Could this padding strategy be affecting the network's performance?
Are my input data and its dimensions coherent with the LSTM definitions?
Could this be related to my dataset size?
What would be an acceptable validation loss rate?
Your validation loss being much higher than your training loss usually implies overfitting. Note that your val_loss isn't really high; it's just higher than the training loss. The validation accuracy isn't bad either, just much lower than on the training data, which has effectively been memorized by your network.
Basically, you need to reduce the capacity of the model so that it has to generalize rather than memorize, matching it to the complexity of the problem at hand. Use more dropout and fewer parameters/layers.
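A hedged sketch of that direction (one smaller LSTM layer plus dropout; the sizes are starting points to experiment with, not a known-good configuration, and it keeps the two-unit softmax output to match your one-hot labels):
from tensorflow import keras

def create_model(input_shape):
    model = keras.Sequential()
    # a single, smaller LSTM layer instead of the four stacked ones
    model.add(keras.layers.LSTM(32, input_shape=input_shape,
                                dropout=0.3, recurrent_dropout=0.3))
    model.add(keras.layers.Dropout(0.3))
    # two-unit softmax output to match the one-hot encoded labels
    model.add(keras.layers.Dense(2, activation='softmax'))
    return model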

neural network validation accuracy doesn't change sometimes

I am using a neural network for a binary classification problem, but I am running into some trouble. Sometimes when running my model, my validation accuracy doesn't change at all, and sometimes it works just fine. My dataset has 1200 samples with 28 features, and I have a class imbalance (200 in class a, 1000 in class b). All my features are normalized to values between 0 and 1. As I stated before, this problem doesn't always happen, but I want to know why and how to fix it.
I have tried changing the optimization function and the activation function, but that did me no good. I have also noticed that when I increased the number of neurons in my network this problem occurs less often, but it wasn't fixed. I also tried increasing the number of epochs, but the problem keeps occurring sometimes.
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(28, input_dim=28, kernel_initializer='normal', activation='sigmoid'))
model.add(Dense(200, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(300, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(300, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(150, kernel_initializer='normal', activation='sigmoid'))
model.add(Dropout(0.4))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train,
                    epochs=34,
                    batch_size=32,
                    validation_data=(X_val, y_val),
                    verbose=1)
This is the result I get sometimes from training my model
Epoch 1/34
788/788 [==============================] - 1s 2ms/step - loss: 1.5705 - acc: 0.6865 - val_loss: 0.6346 - val_acc: 0.7783
Epoch 2/34
788/788 [==============================] - 0s 211us/step - loss: 1.0262 - acc: 0.6231 - val_loss: 0.5310 - val_acc: 0.7783
Epoch 3/34
788/788 [==============================] - 0s 194us/step - loss: 1.7575 - acc: 0.7221 - val_loss: 0.5431 - val_acc: 0.7783
Epoch 4/34
788/788 [==============================] - 0s 218us/step - loss: 0.9113 - acc: 0.5774 - val_loss: 0.5685 - val_acc: 0.7783
Epoch 5/34
788/788 [==============================] - 0s 199us/step - loss: 1.0987 - acc: 0.6688 - val_loss: 0.6435 - val_acc: 0.7783
Epoch 6/34
788/788 [==============================] - 0s 201us/step - loss: 0.9777 - acc: 0.5343 - val_loss: 0.5643 - val_acc: 0.7783
Epoch 7/34
788/788 [==============================] - 0s 204us/step - loss: 1.0603 - acc: 0.5914 - val_loss: 0.6266 - val_acc: 0.7783
Epoch 8/34
788/788 [==============================] - 0s 197us/step - loss: 0.7580 - acc: 0.5939 - val_loss: 0.6615 - val_acc: 0.7783
Epoch 9/34
788/788 [==============================] - 0s 206us/step - loss: 0.8950 - acc: 0.6650 - val_loss: 0.5291 - val_acc: 0.7783
Epoch 10/34
788/788 [==============================] - 0s 230us/step - loss: 0.8114 - acc: 0.6701 - val_loss: 0.5428 - val_acc: 0.7783
Epoch 11/34
788/788 [==============================] - 0s 281us/step - loss: 0.7235 - acc: 0.6624 - val_loss: 0.5275 - val_acc: 0.7783
Epoch 12/34
788/788 [==============================] - 0s 264us/step - loss: 0.7237 - acc: 0.6485 - val_loss: 0.5473 - val_acc: 0.7783
Epoch 13/34
788/788 [==============================] - 0s 213us/step - loss: 0.6902 - acc: 0.7056 - val_loss: 0.5265 - val_acc: 0.7783
Epoch 14/34
788/788 [==============================] - 0s 217us/step - loss: 0.6726 - acc: 0.7145 - val_loss: 0.5285 - val_acc: 0.7783
Epoch 15/34
788/788 [==============================] - 0s 197us/step - loss: 0.6656 - acc: 0.7132 - val_loss: 0.5354 - val_acc: 0.7783
Epoch 16/34
788/788 [==============================] - 0s 216us/step - loss: 0.6083 - acc: 0.7259 - val_loss: 0.5262 - val_acc: 0.7783
Epoch 17/34
788/788 [==============================] - 0s 218us/step - loss: 0.6188 - acc: 0.7310 - val_loss: 0.5271 - val_acc: 0.7783
Epoch 18/34
788/788 [==============================] - 0s 210us/step - loss: 0.6642 - acc: 0.6142 - val_loss: 0.5676 - val_acc: 0.7783
Epoch 19/34
788/788 [==============================] - 0s 200us/step - loss: 0.6017 - acc: 0.7221 - val_loss: 0.5256 - val_acc: 0.7783
Epoch 20/34
788/788 [==============================] - 0s 209us/step - loss: 0.6188 - acc: 0.7157 - val_loss: 0.8090 - val_acc: 0.2217
Epoch 21/34
788/788 [==============================] - 0s 201us/step - loss: 1.1724 - acc: 0.4061 - val_loss: 0.5448 - val_acc: 0.7783
Epoch 22/34
788/788 [==============================] - 0s 205us/step - loss: 0.5724 - acc: 0.7424 - val_loss: 0.5293 - val_acc: 0.7783
Epoch 23/34
788/788 [==============================] - 0s 234us/step - loss: 0.5829 - acc: 0.7538 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 24/34
788/788 [==============================] - 0s 209us/step - loss: 0.5815 - acc: 0.7525 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 25/34
788/788 [==============================] - 0s 220us/step - loss: 0.5688 - acc: 0.7576 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 26/34
788/788 [==============================] - 0s 210us/step - loss: 0.5715 - acc: 0.7525 - val_loss: 0.5273 - val_acc: 0.7783
Epoch 27/34
788/788 [==============================] - 0s 206us/step - loss: 0.5584 - acc: 0.7576 - val_loss: 0.5274 - val_acc: 0.7783
Epoch 28/34
788/788 [==============================] - 0s 215us/step - loss: 0.5728 - acc: 0.7563 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 29/34
788/788 [==============================] - 0s 281us/step - loss: 0.5735 - acc: 0.7576 - val_loss: 0.5275 - val_acc: 0.7783
Epoch 30/34
788/788 [==============================] - 0s 272us/step - loss: 0.5773 - acc: 0.7614 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 31/34
788/788 [==============================] - 0s 225us/step - loss: 0.5847 - acc: 0.7525 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 32/34
788/788 [==============================] - 0s 239us/step - loss: 0.5739 - acc: 0.7551 - val_loss: 0.5272 - val_acc: 0.7783
Epoch 33/34
788/788 [==============================] - 0s 216us/step - loss: 0.5632 - acc: 0.7525 - val_loss: 0.5269 - val_acc: 0.7783
Epoch 34/34
788/788 [==============================] - 0s 240us/step - loss: 0.5672 - acc: 0.7576 - val_loss: 0.5267 - val_acc: 0.7783
Given your reported class imbalance, your model does not seem to learn anything (the reported accuracy seems consistent with just predicting everything as the majority class). Nevertheless, there are issues with your code; for starters:
Replace all activation functions, except for the output layer, with activation='relu'.
Add a sigmoid activation function to your last layer, activation='sigmoid'; as is, yours is a regression network (default linear output in the last layer) and not a classification one.
Remove all kernel_initializer='normal' arguments from your layers, i.e. leave the default kernel_initializer='glorot_uniform', which is known to achieve (much) better performance.
Also, it is not clear why you go for an input dense layer of 28 units - the number of units here has nothing to do with the input dimension; please see Keras Sequential model input layer.
Dropout should not go into the network by default - try first without it and then add it if necessary.
All in all, here is how your model should look for starters:
model = Sequential()
model.add(Dense(200, input_dim=28, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(300, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(300, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(150, activation='relu'))
# model.add(Dropout(0.4))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
and, as said, uncomment/adjust the dropout layers depending on your experimental results.
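If, after these changes, the model still collapses to the majority class, one additional option (a suggestion beyond the fixes above, with purely illustrative weights for your reported 200 vs 1000 split) is to pass class weights to fit:
# assuming the 200-sample minority class is encoded as 1 and the
# 1000-sample majority class as 0; weight the minority class ~5x
class_weight = {0: 1.0, 1: 5.0}

history = model.fit(X_train, y_train,
                    epochs=34,
                    batch_size=32,
                    validation_data=(X_val, y_val),
                    class_weight=class_weight,
                    verbose=1)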

Keras CNN model with a wrong ROC curve and low accuracy

I am learning to write CNNs in Keras on Kaggle using one of the datasets I found there.
The link to my notebook is
https://www.kaggle.com/vj6978/brain-tumor-vimal?scriptVersionId=16814133
The code, the dataset and the ROC curve are available at the link. The ROC curve itself looks as if the model is simply making guesses rather than learned predictions.
The testing accuracy also seems to peak at only around 60% to 70%, which is quite low. Any help would be appreciated.
Thanks
Vimal James
I believe your last activation should be sigmoid instead of softmax.
UPDATE:
I just forked your kernel on Kaggle, and modifying it as follows gives better results:
from keras.models import Sequential
from keras.layers import Conv2D, Activation, AveragePooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(128, (3,3), input_shape=data_set.shape[1:]))
model.add(Activation("relu"))
model.add(AveragePooling2D(pool_size=(2,2)))
model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
model.add(AveragePooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(1))
model.add(Activation("sigmoid"))  # Last activation should be sigmoid for binary classification
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=['accuracy'])
This gave the following results:
Train on 204 samples, validate on 23 samples
Epoch 1/15
204/204 [==============================] - 2s 11ms/step - loss: 2.8873 - acc: 0.6373 - val_loss: 0.8000 - val_acc: 0.8261
Epoch 2/15
204/204 [==============================] - 1s 3ms/step - loss: 0.7292 - acc: 0.7206 - val_loss: 0.6363 - val_acc: 0.7391
Epoch 3/15
204/204 [==============================] - 1s 3ms/step - loss: 0.4731 - acc: 0.8088 - val_loss: 0.5417 - val_acc: 0.8261
Epoch 4/15
204/204 [==============================] - 1s 3ms/step - loss: 0.3605 - acc: 0.8775 - val_loss: 0.6820 - val_acc: 0.8696
Epoch 5/15
204/204 [==============================] - 1s 3ms/step - loss: 0.2986 - acc: 0.8529 - val_loss: 0.8356 - val_acc: 0.8696
Epoch 6/15
204/204 [==============================] - 1s 3ms/step - loss: 0.2151 - acc: 0.9020 - val_loss: 0.7592 - val_acc: 0.8696
Epoch 7/15
204/204 [==============================] - 1s 3ms/step - loss: 0.1305 - acc: 0.9657 - val_loss: 1.2486 - val_acc: 0.8696
Epoch 8/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0565 - acc: 0.9853 - val_loss: 1.2668 - val_acc: 0.8696
Epoch 9/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0426 - acc: 0.9853 - val_loss: 1.4674 - val_acc: 0.8696
Epoch 10/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0141 - acc: 1.0000 - val_loss: 1.7379 - val_acc: 0.8696
Epoch 11/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0063 - acc: 1.0000 - val_loss: 1.7232 - val_acc: 0.8696
Epoch 12/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0023 - acc: 1.0000 - val_loss: 1.8291 - val_acc: 0.8696
Epoch 13/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0014 - acc: 1.0000 - val_loss: 1.9164 - val_acc: 0.8696
Epoch 14/15
204/204 [==============================] - 1s 3ms/step - loss: 8.6263e-04 - acc: 1.0000 - val_loss: 1.8946 - val_acc: 0.8696
Epoch 15/15
204/204 [==============================] - 1s 3ms/step - loss: 6.8785e-04 - acc: 1.0000 - val_loss: 1.9596 - val_acc: 0.8696
Test loss: 3.079359292984009
Test accuracy: 0.807692289352417
You are using a softmax activation with a single neuron; this will always produce a constant 1.0 output, due to the normalization used in softmax, so it makes no sense. For binary classification you have to use the sigmoid activation with a single output neuron.
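You can check this yourself with a few lines of NumPy: softmax over a single logit always normalizes to exactly 1.0, whatever the value of the input.
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(softmax(np.array([-3.7])))   # [1.]
print(softmax(np.array([42.0])))   # [1.]  -- a single-unit softmax is always 1.0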

Understanding reason for Overfitting in Keras Binary Classification Task

I am doing binary classification of IMDB movie review data into positive or negative sentiment.
I have 25K movie reviews and corresponding labels.
Preprocessing:
Removed the stop words and split the data 70:30 into training and test sets, i.e. 17.5K training and 7.5K test reviews. The 17.5K training reviews were further divided into 14K train and 3.5K validation samples, as used in the keras model.fit method.
Each processed movie review has been converted to a TF-IDF vector using the Keras text-processing module.
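For reference, a minimal sketch of that TF-IDF step with the Keras text-processing utilities (the texts list and the vocabulary size are illustrative, not my exact code):
from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=10000)        # keep the 10,000 most frequent words
tokenizer.fit_on_texts(train_texts)           # build the vocabulary on the training reviews
x_train_tfidf = tokenizer.texts_to_matrix(train_texts, mode='tfidf')  # one TF-IDF vector per review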
Here is the fully connected architecture I used, built with Keras Dense layers:
def model_param(self):
    """ Method to do deep learning
    """
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Activation
    from keras.optimizers import SGD
    from keras import regularizers
    self.model = Sequential()
    # Dense(32) is a fully-connected layer with 32 hidden units.
    # In the first layer, you must specify the expected input data shape:
    # here, the TF-IDF vector dimension.
    self.model.add(Dense(32, activation='relu', input_dim=self.x_train_std.shape[1]))
    self.model.add(Dropout(0.5))
    #self.model.add(Dense(60, activation='relu'))
    #self.model.add(Dropout(0.5))
    self.model.add(Dense(1, activation='sigmoid'))
    sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    self.model.compile(loss='binary_crossentropy',
                       optimizer=sgd,
                       metrics=['accuracy'])

def fit(self):
    """ Training the deep learning network on the training data
    """
    self.model.fit(self.x_train_std, self.y_train, validation_split=0.20,
                   epochs=50,
                   batch_size=128)
As you can see, I first tried without Dropout and, as usual, got a training accuracy of 1.0 while validation was poor, since overfitting was happening. So I added Dropout to prevent overfitting.
However, despite trying multiple dropout ratios, adding another layer with a different number of units, and changing the learning rate, I am still overfitting on the validation dataset: it gets stuck around 85% while training accuracy keeps increasing to 99% and beyond. I even changed the number of epochs from 10 to 50.
What could be going wrong here?
Train on 14000 samples, validate on 3500 samples
Epoch 1/50
14000/14000 [==============================] - 0s - loss: 0.5684 - acc: 0.7034 - val_loss: 0.3794 - val_acc: 0.8431
Epoch 2/50
14000/14000 [==============================] - 0s - loss: 0.3630 - acc: 0.8388 - val_loss: 0.3304 - val_acc: 0.8549
Epoch 3/50
14000/14000 [==============================] - 0s - loss: 0.2977 - acc: 0.8749 - val_loss: 0.3271 - val_acc: 0.8591
Epoch 4/50
14000/14000 [==============================] - 0s - loss: 0.2490 - acc: 0.8991 - val_loss: 0.3302 - val_acc: 0.8580
Epoch 5/50
14000/14000 [==============================] - 0s - loss: 0.2251 - acc: 0.9086 - val_loss: 0.3388 - val_acc: 0.8546
Epoch 6/50
14000/14000 [==============================] - 0s - loss: 0.2021 - acc: 0.9189 - val_loss: 0.3532 - val_acc: 0.8523
Epoch 7/50
14000/14000 [==============================] - 0s - loss: 0.1797 - acc: 0.9286 - val_loss: 0.3670 - val_acc: 0.8529
Epoch 8/50
14000/14000 [==============================] - 0s - loss: 0.1611 - acc: 0.9350 - val_loss: 0.3860 - val_acc: 0.8543
Epoch 9/50
14000/14000 [==============================] - 0s - loss: 0.1427 - acc: 0.9437 - val_loss: 0.4077 - val_acc: 0.8529
Epoch 10/50
14000/14000 [==============================] - 0s - loss: 0.1344 - acc: 0.9476 - val_loss: 0.4234 - val_acc: 0.8526
Epoch 11/50
14000/14000 [==============================] - 0s - loss: 0.1222 - acc: 0.9534 - val_loss: 0.4473 - val_acc: 0.8506
Epoch 12/50
14000/14000 [==============================] - 0s - loss: 0.1131 - acc: 0.9546 - val_loss: 0.4718 - val_acc: 0.8497
Epoch 13/50
14000/14000 [==============================] - 0s - loss: 0.1079 - acc: 0.9559 - val_loss: 0.4818 - val_acc: 0.8526
Epoch 14/50
14000/14000 [==============================] - 0s - loss: 0.0954 - acc: 0.9630 - val_loss: 0.5057 - val_acc: 0.8494
Epoch 15/50
14000/14000 [==============================] - 0s - loss: 0.0906 - acc: 0.9636 - val_loss: 0.5229 - val_acc: 0.8557
Epoch 16/50
14000/14000 [==============================] - 0s - loss: 0.0896 - acc: 0.9657 - val_loss: 0.5387 - val_acc: 0.8497
Epoch 17/50
14000/14000 [==============================] - 0s - loss: 0.0816 - acc: 0.9666 - val_loss: 0.5579 - val_acc: 0.8463
Epoch 18/50
14000/14000 [==============================] - 0s - loss: 0.0762 - acc: 0.9709 - val_loss: 0.5704 - val_acc: 0.8491
Epoch 19/50
14000/14000 [==============================] - 0s - loss: 0.0718 - acc: 0.9723 - val_loss: 0.5834 - val_acc: 0.8454
Epoch 20/50
14000/14000 [==============================] - 0s - loss: 0.0633 - acc: 0.9752 - val_loss: 0.6032 - val_acc: 0.8494
Epoch 21/50
14000/14000 [==============================] - 0s - loss: 0.0687 - acc: 0.9724 - val_loss: 0.6181 - val_acc: 0.8480
Epoch 22/50
14000/14000 [==============================] - 0s - loss: 0.0614 - acc: 0.9762 - val_loss: 0.6280 - val_acc: 0.8503
Epoch 23/50
14000/14000 [==============================] - 0s - loss: 0.0620 - acc: 0.9756 - val_loss: 0.6407 - val_acc: 0.8500
Epoch 24/50
14000/14000 [==============================] - 0s - loss: 0.0536 - acc: 0.9794 - val_loss: 0.6563 - val_acc: 0.8511
Epoch 25/50
14000/14000 [==============================] - 0s - loss: 0.0538 - acc: 0.9791 - val_loss: 0.6709 - val_acc: 0.8500
Epoch 26/50
14000/14000 [==============================] - 0s - loss: 0.0507 - acc: 0.9807 - val_loss: 0.6869 - val_acc: 0.8491
Epoch 27/50
14000/14000 [==============================] - 0s - loss: 0.0528 - acc: 0.9794 - val_loss: 0.7002 - val_acc: 0.8483
Epoch 28/50
14000/14000 [==============================] - 0s - loss: 0.0465 - acc: 0.9810 - val_loss: 0.7083 - val_acc: 0.8469
Epoch 29/50
14000/14000 [==============================] - 0s - loss: 0.0504 - acc: 0.9796 - val_loss: 0.7153 - val_acc: 0.8497
Epoch 30/50
14000/14000 [==============================] - 0s - loss: 0.0477 - acc: 0.9819 - val_loss: 0.7232 - val_acc: 0.8480
Epoch 31/50
14000/14000 [==============================] - 0s - loss: 0.0475 - acc: 0.9819 - val_loss: 0.7343 - val_acc: 0.8469
Epoch 32/50
14000/14000 [==============================] - 0s - loss: 0.0459 - acc: 0.9819 - val_loss: 0.7352 - val_acc: 0.8500
Epoch 33/50
14000/14000 [==============================] - 0s - loss: 0.0426 - acc: 0.9807 - val_loss: 0.7429 - val_acc: 0.8511
Epoch 34/50
14000/14000 [==============================] - 0s - loss: 0.0396 - acc: 0.9846 - val_loss: 0.7576 - val_acc: 0.8477
Epoch 35/50
14000/14000 [==============================] - 0s - loss: 0.0420 - acc: 0.9836 - val_loss: 0.7603 - val_acc: 0.8506
Epoch 36/50
14000/14000 [==============================] - 0s - loss: 0.0359 - acc: 0.9856 - val_loss: 0.7683 - val_acc: 0.8497
Epoch 37/50
14000/14000 [==============================] - 0s - loss: 0.0377 - acc: 0.9849 - val_loss: 0.7823 - val_acc: 0.8520
Epoch 38/50
14000/14000 [==============================] - 0s - loss: 0.0352 - acc: 0.9861 - val_loss: 0.7912 - val_acc: 0.8500
Epoch 39/50
14000/14000 [==============================] - 0s - loss: 0.0390 - acc: 0.9845 - val_loss: 0.8025 - val_acc: 0.8489
Epoch 40/50
14000/14000 [==============================] - 0s - loss: 0.0371 - acc: 0.9853 - val_loss: 0.8128 - val_acc: 0.8494
Epoch 41/50
14000/14000 [==============================] - 0s - loss: 0.0367 - acc: 0.9848 - val_loss: 0.8184 - val_acc: 0.8503
Epoch 42/50
14000/14000 [==============================] - 0s - loss: 0.0331 - acc: 0.9871 - val_loss: 0.8264 - val_acc: 0.8500
Epoch 43/50
14000/14000 [==============================] - 0s - loss: 0.0338 - acc: 0.9871 - val_loss: 0.8332 - val_acc: 0.8483
Epoch 44/50
