from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
data_scale = min_max_scaler.fit_transform(data)
from sklearn.model_selection import train_test_split
X_train, X_val, Y_train, Y_val = train_test_split(data_scale, output, test_size=0.09375)
from keras.models import Sequential
from keras.layers import Dense
model = Sequential([
    Dense(20, activation='tanh', input_shape=(6,)),
    Dense(1),
])
model.compile(optimizer='adam',
              loss='mean_squared_error', metrics=['accuracy'])
hist = model.fit(X_train, Y_train,
                 batch_size=29, epochs=100,
                 validation_data=(X_val, Y_val))
I am trying to create a model that predicts an output from 6 input features. I have data for 64 items; I am using 58 for training and 6 for validation. The model I have been asked to build must have one hidden layer of 20 units with tanh activation.
But I am getting zero accuracy in every epoch.
Epoch 1/100
2/2 [==============================] - 1s 243ms/step - loss: 51811.7448 - accuracy: 0.0000e+00 - val_loss: 50574.0625 - val_accuracy: 0.0000e+00
Epoch 2/100
2/2 [==============================] - 0s 25ms/step - loss: 51643.5742 - accuracy: 0.0000e+00 - val_loss: 50551.1445 - val_accuracy: 0.0000e+00
Epoch 3/100
2/2 [==============================] - 0s 20ms/step - loss: 49723.1016 - accuracy: 0.0000e+00 - val_loss: 50528.1992 - val_accuracy: 0.0000e+00
[... epochs 4-99 omitted; loss hovers around 50,000 and accuracy stays at 0.0000e+00 throughout ...]
Epoch 100/100
2/2 [==============================] - 0s 27ms/step - loss: 47890.7018 - accuracy: 0.0000e+00 - val_loss: 47657.0117 - val_accuracy: 0.0000e+00
Accuracy is for classification problems; you are training a regression model, so you are probably interested in regression model evaluation instead. With a continuous target, Keras's 'accuracy' metric simply checks whether predictions exactly equal the labels, which essentially never happens for real-valued outputs — hence the constant 0. One example solution might be:
import tensorflow as tf

model.compile(
    optimizer='sgd',
    loss='mse',
    metrics=[tf.keras.metrics.MeanSquaredError()])
If you are interested in more metrics, take a look at https://keras.io/api/metrics/.
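For example, a minimal sketch (assuming the standard tf.keras metrics classes) that tracks both MAE and RMSE during training:

import tensorflow as tf

model.compile(
    optimizer='adam',
    loss='mse',
    metrics=[tf.keras.metrics.MeanAbsoluteError(),
             tf.keras.metrics.RootMeanSquaredError()])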
The code and output when I execute it once:
model.fit(X,y,validation_split=0.2, epochs=10, batch_size= 100)
Epoch 1/10
8/8 [==============================] - 1s 31ms/step - loss: 0.6233 - accuracy: 0.6259 - val_loss: 0.6333 - val_accuracy: 0.6461
Epoch 2/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5443 - accuracy: 0.7722 - val_loss: 0.4803 - val_accuracy: 0.7978
Epoch 3/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5385 - accuracy: 0.7904 - val_loss: 0.4465 - val_accuracy: 0.8202
Epoch 4/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5014 - accuracy: 0.7932 - val_loss: 0.5228 - val_accuracy: 0.7753
Epoch 5/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5283 - accuracy: 0.7736 - val_loss: 0.4284 - val_accuracy: 0.8315
Epoch 6/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4936 - accuracy: 0.7989 - val_loss: 0.4309 - val_accuracy: 0.8539
Epoch 7/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4700 - accuracy: 0.8045 - val_loss: 0.4622 - val_accuracy: 0.8146
Epoch 8/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4732 - accuracy: 0.8087 - val_loss: 0.4159 - val_accuracy: 0.8202
Epoch 9/10
8/8 [==============================] - 0s 5ms/step - loss: 0.5623 - accuracy: 0.7764 - val_loss: 0.7438 - val_accuracy: 0.8090
Epoch 10/10
8/8 [==============================] - 0s 4ms/step - loss: 0.5886 - accuracy: 0.7806 - val_loss: 0.5889 - val_accuracy: 0.6798
Output when I execute the same line of code again in Jupyter Lab:
Epoch 1/10
8/8 [==============================] - 0s 9ms/step - loss: 0.5269 - accuracy: 0.7496 - val_loss: 0.4568 - val_accuracy: 0.8371
Epoch 2/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4688 - accuracy: 0.8087 - val_loss: 0.4885 - val_accuracy: 0.7753
Epoch 3/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4597 - accuracy: 0.8017 - val_loss: 0.4638 - val_accuracy: 0.7865
Epoch 4/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4741 - accuracy: 0.7890 - val_loss: 0.4277 - val_accuracy: 0.8258
Epoch 5/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4840 - accuracy: 0.8003 - val_loss: 0.4712 - val_accuracy: 0.7978
Epoch 6/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4488 - accuracy: 0.8087 - val_loss: 0.4825 - val_accuracy: 0.7809
Epoch 7/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4432 - accuracy: 0.8087 - val_loss: 0.4865 - val_accuracy: 0.8090
Epoch 8/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4299 - accuracy: 0.8059 - val_loss: 0.4458 - val_accuracy: 0.8371
Epoch 9/10
8/8 [==============================] - 0s 4ms/step - loss: 0.4358 - accuracy: 0.8172 - val_loss: 0.5232 - val_accuracy: 0.8034
Epoch 10/10
8/8 [==============================] - 0s 5ms/step - loss: 0.4697 - accuracy: 0.8059 - val_loss: 0.4421 - val_accuracy: 0.8202
It continues the previous fit. My question is: how can I make it start from the beginning again, without having to create a new model, so that the second execution of the line is independent of the first?
This is a little bit tricky without being able to see the code that initialises the model, and I'm not sure why you'd need to reset the weights without re-initialising the model.
If you save the weights of your model before training, you can then reset to those initial weights before you train again.
modelWeights = model.get_weights()  # capture the initial weights before training
model.set_weights(modelWeights)     # restore them before training again
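Putting it together, a minimal sketch (assuming the model and fit call from the question):

initial_weights = model.get_weights()   # freshly initialised weights, captured before any training

model.fit(X, y, validation_split=0.2, epochs=10, batch_size=100)   # first run

model.set_weights(initial_weights)      # reset to the initial weights
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=100)   # second run, independent of the first

Note that set_weights restores only the layer weights; if you also want to discard optimizer state (e.g. Adam's moment estimates), re-compile the model before the second fit.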
Here is my code and the result of the training.
batch_size = 100
epochs = 50
yale_history = yale_classifier.fit(x_train, y_train_oh,
                                   batch_size=batch_size,
                                   epochs=epochs,
                                   validation_data=(x_train, y_train_oh))
Epoch 1/50
20/20 [==============================] - 32s 2s/step - loss: 3.9801 - accuracy: 0.2071 - val_loss: 3.6919 - val_accuracy: 0.0245
Epoch 2/50
20/20 [==============================] - 30s 2s/step - loss: 1.2557 - accuracy: 0.6847 - val_loss: 4.1914 - val_accuracy: 0.0245
Epoch 3/50
20/20 [==============================] - 30s 2s/step - loss: 0.4408 - accuracy: 0.8954 - val_loss: 4.6284 - val_accuracy: 0.0245
Epoch 4/50
20/20 [==============================] - 30s 2s/step - loss: 0.1822 - accuracy: 0.9592 - val_loss: 4.9481 - val_accuracy: 0.0398
Epoch 5/50
20/20 [==============================] - 30s 2s/step - loss: 0.1252 - accuracy: 0.9760 - val_loss: 5.3728 - val_accuracy: 0.0276
Epoch 6/50
20/20 [==============================] - 30s 2s/step - loss: 0.0927 - accuracy: 0.9816 - val_loss: 5.7009 - val_accuracy: 0.0260
Epoch 7/50
20/20 [==============================] - 30s 2s/step - loss: 0.0858 - accuracy: 0.9837 - val_loss: 6.0049 - val_accuracy: 0.0260
Epoch 8/50
20/20 [==============================] - 30s 2s/step - loss: 0.0646 - accuracy: 0.9867 - val_loss: 6.3786 - val_accuracy: 0.0260
Epoch 9/50
20/20 [==============================] - 30s 2s/step - loss: 0.0489 - accuracy: 0.9898 - val_loss: 6.5156 - val_accuracy: 0.0260
You can see that I also used the training data as the validation data. It is weird that the training loss is not the same as the validation loss. Further, when I evaluated the model, it seemed not to have been trained at all, as follows.
yale_classifier.evaluate(x_train, y_train_oh)
62/62 [==============================] - 6s 96ms/step - loss: 7.1123 - accuracy: 0.0260
[7.112329483032227, 0.026020407676696777]
Do you have any recommendations for solving this problem?
Whenever I run my TensorFlow model, the loss / val_loss graph oscillates back and forth wildly, and I was wondering how I could stop or reduce this. Here is a picture:
Graph
Here's the code if anyone wants to run it; it should work fine as long as you have the required pip packages.
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
import datetime
import tensorboard
from keras.models import Sequential
from keras.layers import Dense
train_df = pd.read_csv('https://www.dropbox.com/s/ednsabkdzs8motw/ROK%20INPUT%20DATA%20-%20Sheet1.csv?dl=1')
eval_df = pd.read_csv('https://www.dropbox.com/s/irnqwc1v67wmbfk/ROK%20EVAL%20DATA%20-%20Sheet1.csv?dl=1')
train_df['Troops'] = train_df['Troops'].astype(float)
train_df['Enemy Troops'] = train_df['Enemy Troops'].astype(float)
train_df['Damage'] = train_df['Damage'].astype(float)
eval_df['Troops'] = eval_df['Troops'].astype(float)
eval_df['Enemy Troops'] = eval_df['Enemy Troops'].astype(float)
eval_df['Damage'] = eval_df['Damage'].astype(float)
damage = train_df.pop('Damage')
dataset = tf.data.Dataset.from_tensor_slices((train_df.values, damage.values))
test_labels = eval_df.pop('Damage')
test_features = eval_df.copy()
model = keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(8,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()
history = model.fit(train_df, damage, validation_split=0.2, epochs=5000)
def plot_loss(history):
    plt.plot(history.history['loss'], label='loss')
    plt.plot(history.history['val_loss'], label='val_loss')
    plt.ylim([0, 2000])
    plt.xlabel('Epoch')
    plt.ylabel('Error [Damage]')
    plt.legend()
    plt.grid(True)

plot_loss(history)
plt.show()
This is because the labels in your dataset are imbalanced and contain outliers, which means you should use mean_absolute_error in place of mean_squared_error as the loss function; MAE is far less sensitive to outliers.
Please check the code below:
model.compile(optimizer='adam', loss=tf.losses.MeanAbsoluteError())
history = model.fit(train_df, damage,
                    validation_data=(test_features, test_labels), epochs=100)
Output:
Epoch 1/100
2/2 [==============================] - 1s 150ms/step - loss: 1015.9664 - val_loss: 129.8347
Epoch 2/100
2/2 [==============================] - 0s 30ms/step - loss: 244.7547 - val_loss: 28.9964
Epoch 3/100
2/2 [==============================] - 0s 32ms/step - loss: 629.1597 - val_loss: 20.9922
[... epochs 4-99 omitted; training loss drifts down from roughly 600 to 80 while val_loss falls into the 5-50 range, still oscillating ...]
Epoch 100/100
2/2 [==============================] - 0s 48ms/step - loss: 77.8150 - val_loss: 16.8254
and the loss graph looks like this:
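If the curve still looks jagged at this scale, smoothing the recorded history before plotting makes the trend easier to read. A minimal sketch (assuming the history object returned by model.fit above; the smooth helper is hypothetical):

import numpy as np
import matplotlib.pyplot as plt

def smooth(values, window=10):
    """Simple moving average over a list of per-epoch values."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode='valid')

plt.plot(smooth(history.history['loss']), label='loss (smoothed)')
plt.plot(smooth(history.history['val_loss']), label='val_loss (smoothed)')
plt.xlabel('Epoch')
plt.ylabel('MAE loss')
plt.legend()
plt.show()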
I have a very simple model where I try to predict the value of the expression 2x - 2.
It works well, but here is my question.
So far I have trained it on just 20 values (-10 to 10), and it works fine. What I don't understand is why, when I train it on more values, say (-10 to 25), my prediction returns [[nan]]. Even the model weights are [<tf.Variable 'dense/kernel:0' shape=(1, 1) dtype=float32, numpy=array([[nan]], dtype=float32)>, <tf.Variable 'dense/bias:0' shape=(1,) dtype=float32, numpy=array([nan], dtype=float32)>]
Why does adding more training data result in nan?
import tensorflow as tf
import numpy as np
from tensorflow import keras

def gen_vals(x):
    return x*2 - 2

model = tf.keras.Sequential([
    keras.layers.InputLayer(input_shape=(1,)),
    keras.layers.Dense(units=1)
])
model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy'])

xs = []
ys = []
for x in range(-10, 10):
    xs.append(x)
    ys.append(gen_vals(x))

xs = np.array(xs, dtype=float)
ys = np.array(ys, dtype=float)

model.fit(xs, ys, epochs=500)
print(model.predict([20]))
So I checked your code, and the problem is in your loss function. You are using mean_squared_error, and because of this your loss is reaching infinity.
Epoch 1/15
7/7 [==============================] - 0s 1ms/step - loss: 22108.5449 - accuracy: 0.0000e+00
Epoch 2/15
7/7 [==============================] - 0s 1ms/step - loss: 2046332.6250 - accuracy: 0.0286
Epoch 3/15
7/7 [==============================] - 0s 1ms/step - loss: 18862860288.0000 - accuracy: 0.0000e+00
Epoch 4/15
7/7 [==============================] - 0s 1ms/step - loss: 8550264864768.0000 - accuracy: 0.0286
Epoch 5/15
7/7 [==============================] - 0s 1ms/step - loss: 24012283831123968.0000 - accuracy: 0.0000e+00
Epoch 6/15
7/7 [==============================] - 0s 1ms/step - loss: 22680820415763316736.0000 - accuracy: 0.0000e+00
Epoch 7/15
7/7 [==============================] - 0s 1ms/step - loss: 1655609635839244500992.0000 - accuracy: 0.0000e+00
Epoch 8/15
7/7 [==============================] - 0s 1ms/step - loss: 611697420191128514199552.0000 - accuracy: 0.0000e+00
Epoch 9/15
7/7 [==============================] - 0s 1ms/step - loss: 229219278753403035799519232.0000 - accuracy: 0.0286
Epoch 10/15
7/7 [==============================] - 0s 1ms/step - loss: 2146224141449145393293494845440.0000 - accuracy: 0.0000e+00
Epoch 11/15
7/7 [==============================] - 0s 1ms/step - loss: 1169213631609383639522618269237248.0000 - accuracy: 0.0000e+00
Epoch 12/15
7/7 [==============================] - 0s 1ms/step - loss: 1042864695227246165669313090114551808.0000 - accuracy: 0.0000e+00
Epoch 13/15
7/7 [==============================] - 0s 1ms/step - loss: inf - accuracy: 0.0286
Epoch 14/15
7/7 [==============================] - 0s 3ms/step - loss: inf - accuracy: 0.0286
Epoch 15/15
7/7 [==============================] - 0s 1ms/step - loss: inf - accuracy: 0.0286
Because the MSE loss squares the error, and given the toy dataset you have, the loss can blow up to inf, as in your case.
I suggest using MAE (mean absolute error) for your toy example and toy network.
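For a sense of scale: a prediction error of 1,000 contributes 1,000,000 to the MSE but only 1,000 to the MAE, so each SGD update under MSE is vastly larger; a few such steps are enough to push the weights to inf and then nan.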
I checked, and the network provides decent results.
import tensorflow as tf
import numpy as np
from tensorflow import keras

def gen_vals(x):
    return x*2 - 2

model = tf.keras.Sequential([
    keras.layers.InputLayer(input_shape=(1,)),
    keras.layers.Dense(units=1)
])
model.compile(optimizer='sgd', loss='mae', metrics=['accuracy'])

xs = []
ys = []
for x in range(-10, 25):
    xs.append(x)
    ys.append(gen_vals(x))

xs = np.array(xs, dtype=float)
ys = np.array(ys, dtype=float)

model.fit(xs, ys, epochs=15)
print(model.predict([20]))
Epoch 1/15
7/7 [==============================] - 0s 1ms/step - loss: 14.5341 - accuracy: 0.0000e+00
Epoch 2/15
7/7 [==============================] - 0s 2ms/step - loss: 7.5144 - accuracy: 0.0000e+00
Epoch 3/15
7/7 [==============================] - 0s 2ms/step - loss: 2.0986 - accuracy: 0.0000e+00
Epoch 4/15
7/7 [==============================] - 0s 1ms/step - loss: 1.4349 - accuracy: 0.0000e+00
Epoch 5/15
7/7 [==============================] - 0s 1ms/step - loss: 1.3424 - accuracy: 0.0000e+00
Epoch 6/15
7/7 [==============================] - 0s 1ms/step - loss: 1.5290 - accuracy: 0.0000e+00
Epoch 7/15
7/7 [==============================] - 0s 1ms/step - loss: 1.4349 - accuracy: 0.0000e+00
Epoch 8/15
7/7 [==============================] - 0s 1ms/step - loss: 1.2839 - accuracy: 0.0000e+00
Epoch 9/15
7/7 [==============================] - 0s 1ms/step - loss: 1.4003 - accuracy: 0.0000e+00
Epoch 10/15
7/7 [==============================] - 0s 1ms/step - loss: 1.4593 - accuracy: 0.0000e+00
Epoch 11/15
7/7 [==============================] - 0s 1ms/step - loss: 1.4561 - accuracy: 0.0000e+00
Epoch 12/15
7/7 [==============================] - 0s 1ms/step - loss: 1.4761 - accuracy: 0.0000e+00
Epoch 13/15
7/7 [==============================] - 0s 2ms/step - loss: 1.3080 - accuracy: 0.0000e+00
Epoch 14/15
7/7 [==============================] - 0s 1ms/step - loss: 1.1885 - accuracy: 0.0000e+00
Epoch 15/15
7/7 [==============================] - 0s 1ms/step - loss: 1.2665 - accuracy: 0.0000e+00
[[38.037006]]
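That matches the target function: 2*20 - 2 = 38, so the MAE-trained model generalises correctly to an unseen input.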
I am doing a binary classification of IMDB movie review data into positive or negative sentiment.
I have 25K movie reviews and corresponding label.
Preprocessing:
I removed the stop words and split the data 70:30 into training and test sets, giving 17.5K training and 7.5K test reviews. The 17.5K training set was further divided into 14K train and 3.5K validation, as used in the keras model.fit method.
Each processed movie review has been converted to a TF-IDF vector using the Keras text-processing module.
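A minimal sketch of that preprocessing step (train_texts and test_texts are hypothetical lists of cleaned review strings; this assumes the classic Keras Tokenizer API):

from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=10000)   # keep the 10k most frequent words
tokenizer.fit_on_texts(train_texts)      # build the vocabulary on training reviews only
x_train_std = tokenizer.texts_to_matrix(train_texts, mode='tfidf')
x_test_std = tokenizer.texts_to_matrix(test_texts, mode='tfidf')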
Here is my fully connected architecture, built with the Keras Dense class:
def model_param(self):
    """ Method to do deep learning
    """
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Activation
    from keras.optimizers import SGD
    from keras import regularizers
    self.model = Sequential()
    # Dense(32) is a fully-connected layer with 32 hidden units.
    # In the first layer, you must specify the expected input data shape:
    # here, one dimension per TF-IDF vocabulary term.
    self.model.add(Dense(32, activation='relu', input_dim=self.x_train_std.shape[1]))
    self.model.add(Dropout(0.5))
    #self.model.add(Dense(60, activation='relu'))
    #self.model.add(Dropout(0.5))
    self.model.add(Dense(1, activation='sigmoid'))
    sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    self.model.compile(loss='binary_crossentropy',
                       optimizer=sgd,
                       metrics=['accuracy'])

def fit(self):
    """ Training the deep learning network on the training data
    """
    self.model.fit(self.x_train_std, self.y_train, validation_split=0.20,
                   epochs=50,
                   batch_size=128)
As you can see, I first tried without Dropout and, as expected, got a training accuracy of 1.0 while validation accuracy was poor because of overfitting. So I added Dropout to prevent it.
However, in spite of trying multiple dropout ratios, adding another layer with a different number of units, and changing the learning rate, I am still overfitting on the validation dataset: validation accuracy gets stuck around 85% while training accuracy keeps climbing to 99% and beyond. I even increased the epochs from 10 to 50.
What could be going wrong here?
Train on 14000 samples, validate on 3500 samples
Epoch 1/50
14000/14000 [==============================] - 0s - loss: 0.5684 - acc: 0.7034 - val_loss: 0.3794 - val_acc: 0.8431
Epoch 2/50
14000/14000 [==============================] - 0s - loss: 0.3630 - acc: 0.8388 - val_loss: 0.3304 - val_acc: 0.8549
Epoch 3/50
14000/14000 [==============================] - 0s - loss: 0.2977 - acc: 0.8749 - val_loss: 0.3271 - val_acc: 0.8591
Epoch 4/50
14000/14000 [==============================] - 0s - loss: 0.2490 - acc: 0.8991 - val_loss: 0.3302 - val_acc: 0.8580
Epoch 5/50
14000/14000 [==============================] - 0s - loss: 0.2251 - acc: 0.9086 - val_loss: 0.3388 - val_acc: 0.8546
[... remaining epochs omitted; training acc climbs steadily toward 0.99 while val_acc stays near 0.85 and val_loss rises from 0.34 past 0.83 ...]