I'm working on an image classification problem with the sign language digits dataset, which has 10 categories (the digits 0 to 9). My models are highly overfitting for some reason, even though I tried simple ones (like a single conv layer), the classical ResNet50, and even the state-of-the-art NASNetMobile.
Images are colored and 100x100 in size. I tried tuning the learning rate, but it doesn't help much, although decreasing the batch size results in an earlier increase in validation accuracy.
I applied augmentation to the images and it didn't help either: my train accuracy can hit 1.0 while validation accuracy can't get higher than 0.6.
I looked at the data and it seems to load just fine. The distribution of classes in the validation set is fair too. I have 2062 images in total.
When I change my loss to binary_crossentropy it seems to give better results for both train and validation accuracy, but that doesn't seem right.
I don't understand what's wrong, could you please help me find out what I'm missing? Thank you.
Here's a link to my notebook: click
This is going to be a very interesting answer. There are so many things you need to pay attention to when looking at a problem like this. Fortunately, there's a methodology (it might be vague, but it's still a methodology).
TLDR: Start your journey at the data, not the model.
Analysing the data
First, let's look at your data.
You have 10 classes. Each image is (100, 100), and there are only 2062 images. There's your first problem: that's very little data compared to a standard image classification problem. Therefore, you need to make sure your data is easy to learn from without sacrificing its generalizability (i.e. so the model can still do well on the validation/test sets). How do we do that?
Understand your data
Normalize your data
Reduce the number of features
Understanding data is a recurring theme in the other sections. So I won't have a separate section for that.
Normalizing your data
Here's the first problem: you are rescaling your data to be in [0, 1]. But you can do much better by standardizing your data (i.e. (x - mean(x)) / std(x)). Here's how you do that.
import tensorflow as tf

def create_datagen():
    # Per-sample standardization plus light geometric augmentation
    return tf.keras.preprocessing.image.ImageDataGenerator(
        samplewise_center=True,
        samplewise_std_normalization=True,
        horizontal_flip=False,
        rotation_range=30,
        shear_range=0.2,
        validation_split=VALIDATION_SPLIT)
Another thing you might notice is that I've set horizontal_flip=False. This brings me back to the first point: you have to make a judgement call about which augmentation techniques make sense.
Brightness/ Shear - Seems okay
Cropping/resizing - Seems okay
Horizontal/Vertical flip - This is not something I'd try at the beginning. If someone shows you a hand sign in two different horizontal orientations, you might have trouble understanding some signs.
Reducing the number of features
This is very important. You don't have much data, and you want to get the most out of it. The original images are (100, 100). You can do well with a significantly smaller image (I tried (64, 64), but you might be able to go even lower). So reduce the size of the images whenever you can.
Next, it doesn't matter whether you see a sign in RGB or grayscale; you can still recognize it. But grayscale cuts the input size per image down by 66% compared to RGB. So use fewer color channels whenever you can.
Here's how you do both of these:
def create_flow(datagen, subset, directory, hflip=False):
    return datagen.flow_from_directory(
        directory=directory,
        target_size=(64, 64),
        color_mode='grayscale',
        batch_size=BATCH_SIZE,
        class_mode='categorical',
        subset=subset,
        shuffle=True
    )
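Once the generators are defined, it's worth eyeballing a few augmented batches to confirm the choices above (e.g. that leaving horizontal_flip off keeps the signs readable). This is a minimal sketch, assuming matplotlib is available; VALIDATION_SPLIT, BATCH_SIZE and DATA_DIR are placeholder values I'm assuming, not taken from the original notebook.

import matplotlib.pyplot as plt

VALIDATION_SPLIT = 0.2                      # assumed split
BATCH_SIZE = 32                             # assumed batch size
DATA_DIR = 'path/to/sign_language_digits'   # hypothetical directory

datagen = create_datagen()
train_flow = create_flow(datagen, 'training', DATA_DIR)

# Pull one augmented batch and plot the first few grayscale images
images, labels = next(train_flow)
fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for ax, img, label in zip(axes, images, labels):
    ax.imshow(img.squeeze(), cmap='gray')   # (64, 64, 1) -> (64, 64)
    ax.set_title(int(label.argmax()))       # class index from the one-hot label
    ax.axis('off')
plt.show()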
So, to reiterate: you need to spend time understanding the data before you move on to a model. This is a bare-minimum list for this problem. Feel free to try other things as well.
Creating the model
So, here are the changes I made to the model.
Added padding='same' to all the convolutional layers. If you don't, it defaults to padding='valid', which results in an automatic dimensionality reduction. This means the deeper you go, the smaller your output gets. In the model you had, the final convolution output is of size (3, 3), which is probably too small for the dense layer to make sense of. So pay attention to what the dense layer is getting.
Reduced the kernel size - The kernel size is directly related to the number of parameters, so to reduce the chance of overfitting to your small dataset, go with a smaller kernel size whenever possible.
Removed dropout from convolutional layers - This is something I did as a precaution. Personally, I don't know whether dropout works as well with convolutional layers as it does with Dense layers, so I don't want an unknown complexity in my model at the beginning.
Removed the last convolutional layer - Reducing the parameters in the model to reduce the chance of overfitting.
About the optimizer
After you make these changes, you don't need to change Adam's learning rate. Adam works pretty well without any tuning, so that's a worry you can leave for later.
About the batch size
You were using a batch size of 8, which is not even big enough to contain one image per class in a batch. Try setting this to a higher value; I set it to 32. Whenever you can, try to increase the batch size. Maybe not to very large values, but up to around 128 should be fine for this problem (see the training sketch after the model code below).
model = tf.keras.models.Sequential()
# Three small conv blocks: convolution -> max pooling -> batch normalization
model.add(tf.keras.layers.Convolution2D(8, (5, 5), activation='relu', input_shape=(64, 64, 1), padding='same'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Convolution2D(16, (3, 3), activation='relu', padding='same'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Convolution2D(32, (3, 3), activation='relu', padding='same'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization())
# Classifier head: flatten, one dense layer with dropout, softmax over the 10 classes
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.summary()
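To tie the pieces together, here is a rough training sketch (my assumption of how it would be wired up, not the exact notebook code), reusing the placeholder constants and the create_datagen/create_flow helpers from the earlier sketch:

datagen = create_datagen()
train_flow = create_flow(datagen, 'training', DATA_DIR)      # DATA_DIR is a placeholder path
valid_flow = create_flow(datagen, 'validation', DATA_DIR)

# Default Adam (no learning-rate tuning) and a batch size of 32 via BATCH_SIZE
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='categorical_crossentropy',
    metrics=['categorical_accuracy'])

history = model.fit(train_flow, validation_data=valid_flow, epochs=10)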
Final result
By putting some thought in before jumping into building a model, I achieved significantly better results than what you have.
Your result
Epoch 1/10
233/233 [==============================] - 37s 159ms/step - loss: 2.6027 - categorical_accuracy: 0.2218 - val_loss: 2.7203 - val_categorical_accuracy: 0.1000
Epoch 2/10
233/233 [==============================] - 37s 159ms/step - loss: 1.8627 - categorical_accuracy: 0.3711 - val_loss: 2.8415 - val_categorical_accuracy: 0.1450
Epoch 3/10
233/233 [==============================] - 37s 159ms/step - loss: 1.5608 - categorical_accuracy: 0.4689 - val_loss: 2.7879 - val_categorical_accuracy: 0.1750
Epoch 4/10
233/233 [==============================] - 37s 158ms/step - loss: 1.3778 - categorical_accuracy: 0.5145 - val_loss: 2.9411 - val_categorical_accuracy: 0.1450
Epoch 5/10
233/233 [==============================] - 38s 161ms/step - loss: 1.1507 - categorical_accuracy: 0.6090 - val_loss: 2.5648 - val_categorical_accuracy: 0.1650
Epoch 6/10
233/233 [==============================] - 38s 163ms/step - loss: 1.1377 - categorical_accuracy: 0.6042 - val_loss: 2.5416 - val_categorical_accuracy: 0.1850
Epoch 7/10
233/233 [==============================] - 37s 160ms/step - loss: 1.0224 - categorical_accuracy: 0.6472 - val_loss: 2.3338 - val_categorical_accuracy: 0.2450
Epoch 8/10
233/233 [==============================] - 37s 158ms/step - loss: 0.9198 - categorical_accuracy: 0.6788 - val_loss: 2.2660 - val_categorical_accuracy: 0.2450
Epoch 9/10
233/233 [==============================] - 37s 160ms/step - loss: 0.8494 - categorical_accuracy: 0.7111 - val_loss: 2.4924 - val_categorical_accuracy: 0.2150
Epoch 10/10
233/233 [==============================] - 37s 161ms/step - loss: 0.7699 - categorical_accuracy: 0.7417 - val_loss: 1.9339 - val_categorical_accuracy: 0.3450
My result
Epoch 1/10
59/59 [==============================] - 14s 240ms/step - loss: 1.8182 - categorical_accuracy: 0.3625 - val_loss: 2.1800 - val_categorical_accuracy: 0.1600
Epoch 2/10
59/59 [==============================] - 13s 228ms/step - loss: 1.1982 - categorical_accuracy: 0.5843 - val_loss: 2.2777 - val_categorical_accuracy: 0.1350
Epoch 3/10
59/59 [==============================] - 13s 228ms/step - loss: 0.9460 - categorical_accuracy: 0.6676 - val_loss: 2.5666 - val_categorical_accuracy: 0.1400
Epoch 4/10
59/59 [==============================] - 13s 226ms/step - loss: 0.7066 - categorical_accuracy: 0.7465 - val_loss: 2.3700 - val_categorical_accuracy: 0.2500
Epoch 5/10
59/59 [==============================] - 13s 227ms/step - loss: 0.5875 - categorical_accuracy: 0.8008 - val_loss: 2.0166 - val_categorical_accuracy: 0.3150
Epoch 6/10
59/59 [==============================] - 13s 228ms/step - loss: 0.4681 - categorical_accuracy: 0.8416 - val_loss: 1.4043 - val_categorical_accuracy: 0.4400
Epoch 7/10
59/59 [==============================] - 13s 228ms/step - loss: 0.4367 - categorical_accuracy: 0.8518 - val_loss: 1.7028 - val_categorical_accuracy: 0.4300
Epoch 8/10
59/59 [==============================] - 13s 226ms/step - loss: 0.3823 - categorical_accuracy: 0.8711 - val_loss: 1.3747 - val_categorical_accuracy: 0.5600
Epoch 9/10
59/59 [==============================] - 13s 227ms/step - loss: 0.3802 - categorical_accuracy: 0.8663 - val_loss: 1.0967 - val_categorical_accuracy: 0.6000
Epoch 10/10
59/59 [==============================] - 13s 227ms/step - loss: 0.3585 - categorical_accuracy: 0.8818 - val_loss: 1.0768 - val_categorical_accuracy: 0.5950
Note: this is with minimal effort. You can increase your accuracy further by augmenting the data, optimizing the model structure, choosing the right batch size, etc.
Related
First I read in my CSV file, which contained a 1-or-0 matrix.
import pandas as pd

df = pd.read_csv(url)
print(df.head())
print(df.columns)
Next I gathered the pictures and resized them.
import os

image_directory = 'Directory/'
dir_list = os.listdir(image_directory)
print("Files and directories in '", image_directory, "' :")
# print the list
print(dir_list)
They were saved into an X2 variable.
import cv2
import numpy as np
import tensorflow as tf
from tqdm import tqdm

SIZE = 200
X_dataset = []
for i in tqdm(range(df.shape[0])):
    img2 = cv2.imread("Cell{}.png".format(i), cv2.IMREAD_UNCHANGED)  # note: img2 is never used below
    img = tf.keras.preprocessing.image.load_img(image_directory + df['ID'][i], target_size=(SIZE, SIZE, 3))
    # numpy array of each image at size 200, 200, 3 (color)
    img = np.array(img)
    img = img / 255.
    X_dataset.append(img)

X2 = np.array(X_dataset)
print(X2.shape)
I created the y2 data by taking the CSV data, dropping two columns, and getting a shape of (1000, 16).
y2 = np.array(df.drop(['Outcome', 'ID'], axis=1))
print(y2.shape)
I then did the train_test_split
I wonder if my random state or test_size is not optimal
X_train2, X_test2, y_train2, y_test2 = train_test_split(X2, y2, random_state=10, test_size=0.3)
Next, I created a sequential model.
SIZE = 200 was set above when resizing, so the input shape is (SIZE, SIZE, 3) = (200, 200, 3).
model2 = Sequential()
model2.add(Conv2D(filters=16, kernel_size=(10, 10), activation="relu", input_shape=(SIZE,SIZE,3)))
model2.add(BatchNormalization())
model2.add(MaxPooling2D(pool_size=(5, 5)))
model2.add(Dropout(0.2))
model2.add(Conv2D(filters=32, kernel_size=(5, 5), activation='relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(BatchNormalization())
model2.add(Dropout(0.2))
model2.add(Conv2D(filters=64, kernel_size=(5, 5), activation="relu"))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(BatchNormalization())
model2.add(Dropout(0.2))
model2.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(BatchNormalization())
model2.add(Dropout(0.2))
model2.add(Flatten())
model2.add(Dense(512, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(128, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(16, activation='sigmoid'))
#Do not use softmax for multilabel classification
#Softmax is useful for mutually exclusive classes, either cat or dog but not both.
#Also, softmax outputs all add to 1. So good for multi class problems where each
#class is given a probability and all add to 1. Highest one wins.
#Sigmoid outputs probability. Can be used for non-mutually exclusive problems.
#like multi label, in this example.
#But, also good for binary mutually exclusive (cat or not cat).
model2.summary()
#Binary cross entropy of each label. So not really a binary classification problem, but
#calculating binary cross entropy for each label.
opt = tf.keras.optimizers.Adamax(
    learning_rate=0.02,
    beta_1=0.8,
    beta_2=0.9999,
    epsilon=1e-9,
    name='Adamax')
model2.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy', 'mse' ])
The model uses a custom optimizer, and the summary shows 473,632 trainable params.
I then specified the sample weights, which were calculated by taking the highest sampled count and dividing the other counts by it.
sample_weight = { 0:1,
1:0.5197368421,
2:0.4385964912,
3:0.2324561404,
4:0.2302631579,
5:0.399122807,
6:0.08114035088,
7:0.5723684211,
8:0.08552631579,
9:0.2061403509,
10:0.3815789474,
11:0.125,
12:0.08333333333,
13:0.1206140351,
14:0.1403508772,
15:0.4824561404
}
Finally, I ran model.fit:
history = model2.fit(X_train2, y_train2, epochs=25, validation_data=(X_test2, y_test2), batch_size=64, class_weight = sample_weight, shuffle = False)
My issue was that the model was maxing out at around 30 to 40% accuracy. I looked into it, and what I read said tuning the learning rate was important. I also saw that raising the epochs would help to a point, as would lowering the batch size.
Is there any other thing I may have missed? I noticed the worse models only predicted one class frequently (100% normal, 0% anything else) but the better model predicted on a sliding scale where some items were at 10% and some at 70%.
I also wonder if I inverted my sample weights; my item 0 has the most items in it... Should it be inverted, so that one sample of class 1 counts as much as two samples of class 0?
Thanks.
Things I tried:
Changing the batch size down to 16 or 8. (resulted in longer epoch times, slightly better results)
Changing the learning rate to a lower number (resulted in slightly better results, but over more epochs)
Changing it to 100 epochs (results plateaued around 20 epochs usually.)
Attempting to create more params higher filters, larger initial kernel size, larger initial pool size, more and higher value dense layers. (This resulted in it eating the RAM and not getting much better results.)
Changing the optimizer to Adam, RAdam, or Adamax. (Didn't really change much; the other optimizers sucked though.) I messed with beta_1 and epsilon too.
Revising the CSV. (The data is fairly vague; I had help and it was still hard to tell.)
Removing bad data (I didn't want to get rid of too many pictures.)
Edit: Added a sample accuracy log. This run was unusually low, but it starts off well enough (accuracy is initially 25.9%).
Epoch 1/25
14/14 [==============================] - 79s 6s/step - loss: 0.4528 - accuracy: 0.2592 - mse: 0.1594 - val_loss: 261.8521 - val_accuracy: 0.3881 - val_mse: 0.1416
Epoch 2/25
14/14 [==============================] - 85s 6s/step - loss: 0.2817 - accuracy: 0.3188 - mse: 0.1310 - val_loss: 22.7037 - val_accuracy: 0.3881 - val_mse: 0.1416
Epoch 3/25
14/14 [==============================] - 79s 6s/step - loss: 0.2611 - accuracy: 0.3555 - mse: 0.1243 - val_loss: 11.9977 - val_accuracy: 0.3881 - val_mse: 0.1416
Epoch 4/25
14/14 [==============================] - 80s 6s/step - loss: 0.2420 - accuracy: 0.3521 - mse: 0.1172 - val_loss: 6.6056 - val_accuracy: 0.3881 - val_mse: 0.1416
Epoch 5/25
14/14 [==============================] - 80s 6s/step - loss: 0.2317 - accuracy: 0.3899 - mse: 0.1151 - val_loss: 4.9567 - val_accuracy: 0.3881 - val_mse: 0.1415
Epoch 6/25
14/14 [==============================] - 80s 6s/step - loss: 0.2341 - accuracy: 0.3899 - mse: 0.1141 - val_loss: 2.7395 - val_accuracy: 0.3881 - val_mse: 0.1389
Epoch 7/25
14/14 [==============================] - 76s 5s/step - loss: 0.2277 - accuracy: 0.4128 - mse: 0.1107 - val_loss: 2.3758 - val_accuracy: 0.3881 - val_mse: 0.1375
Epoch 8/25
14/14 [==============================] - 85s 6s/step - loss: 0.2199 - accuracy: 0.4106 - mse: 0.1094 - val_loss: 1.4526 - val_accuracy: 0.3881 - val_mse: 0.1319
Epoch 9/25
14/14 [==============================] - 76s 5s/step - loss: 0.2196 - accuracy: 0.4151 - mse: 0.1086 - val_loss: 0.7962 - val_accuracy: 0.3881 - val_mse: 0.1212
Epoch 10/25
14/14 [==============================] - 80s 6s/step - loss: 0.2187 - accuracy: 0.4140 - mse: 0.1087 - val_loss: 0.6308 - val_accuracy: 0.3744 - val_mse: 0.1211
Epoch 11/25
14/14 [==============================] - 81s 6s/step - loss: 0.2175 - accuracy: 0.4071 - mse: 0.1086 - val_loss: 0.5986 - val_accuracy: 0.3242 - val_mse: 0.1170
Epoch 12/25
14/14 [==============================] - 80s 6s/step - loss: 0.2087 - accuracy: 0.3968 - mse: 0.1034 - val_loss: 0.4003 - val_accuracy: 0.3333 - val_mse: 0.1092
Epoch 13/25
12/14 [========================>.....] - ETA: 10s - loss: 0.2092 - accuracy: 0.3945 - mse: 0.1044
Here are some notes that might help:
When using Batch Normalization, avoid too small batch sizes. For more details see the Group Normalization paper by Yuxin Wu and Kaiming He.
It may be worth looking at metrics like AUC and F1 as well, since you have an imbalanced multilabel case. You can add tf.keras.metrics.AUC(curve='PR') to your metrics list.
The training loss seems to have stalled at the end of epoch 13. If the training loss does not decrease anymore, you may want to 1. use a smaller learning rate, and/or 2. reduce your dropout parameters. In particular, the relatively large dropout right before your last layer seems suspicious to me. First, try to obtain a model that fits your training dataset well (with low to no dropout). That is an important step. If your model cannot fit your training dataset well without any regularization, it may need more trainable parameters. After achieving a minimal model that fits the training dataset, you can then add regularization mechanisms to mitigate the overfitting issue.
Unless you have a good reason to do otherwise set shuffle = True (which is also the default setting) to shuffle the training data before each epoch.
Although it is probably not the root cause of your problem, there is a debate on whether the normalization should come before the activation or after that. Some prefer to use it before the activation.
The following was not clear to me:
I then specify the sample weight which was calculated by taking the
highest sampled number and divided the other numbers by that.
Your class weights may have already been calculated correctly. I'd like to emphasize though that an under-represented class should be assigned a larger weight. Refer to this tutorial from TensorFlow as needed.
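To make the weighting and metric points concrete, here is a minimal sketch (not the original code) that derives inverse-frequency class weights from the binary label matrix y_train2 and adds the PR-AUC metric at compile time. It assumes y_train2 has shape (n_samples, 16) as in the question, and mirrors the fit call used there.

import numpy as np
import tensorflow as tf

# Positive count per label; rarer labels should end up with larger weights
counts = y_train2.sum(axis=0)                       # shape (16,)
n_labels = len(counts)
class_weight = {i: counts.sum() / (n_labels * c)    # inverse-frequency weighting
                for i, c in enumerate(counts)}

model2.compile(optimizer=opt,
               loss='binary_crossentropy',
               metrics=['accuracy', tf.keras.metrics.AUC(curve='PR')])

history = model2.fit(X_train2, y_train2,
                     epochs=25,
                     batch_size=64,
                     validation_data=(X_test2, y_test2),
                     class_weight=class_weight,
                     shuffle=True)                  # shuffle the training data each epoch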
I have a corpus with np_final_x of shape (71520, 2, 50) and np_final_y of shape (71520, 1, 50).
https://www.dropbox.com/s/k15dtcak78jaf34/np_final_x_len_2.npy?dl=0
https://www.dropbox.com/s/555lhbdnkl6gmrq/np_final_y_len_2.npy?dl=0
This means I predict the next word from the previous two words, e.g.:
I use -> this
you give -> me
Each word is encoded as a 50-dimensional vector.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM,Dropout
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.optimizers import Adam
import numpy as np
final_x = np.load('np_final_x_len_2.npy')
final_y = np.load('np_final_y_len_2.npy')
in_out_neurons = 50
n_hidden = 512 # not so much change
#n_hidden = 1 # not so much change
model = Sequential()
model.add(LSTM(n_hidden, batch_input_shape=(None, 2, in_out_neurons), return_sequences=True))
model.add(Dense(in_out_neurons,activation="tanh")) #not so much change
#model.add(Dense(in_out_neurons,activation="sigmoid")) #not so much change
#model.add(Dense(in_out_neurons, activation="relu")) #not so much change
optimizer = Adam(learning_rate=0.0001)
model.compile(loss="mean_squared_error", optimizer=optimizer)
model.summary()
model.fit(
    final_x, final_y,
    batch_size=400,
    epochs=10,
    validation_split=0.1
)
However, the loss I got was around 0.017~0.019 with not much progress, no matter which parameters I changed.
Moreover, even when n_hidden = 1, the result doesn't change.
So I guess something is fundamentally wrong.
I'd appreciate any help and hints. Thank you.
Epoch 1/10
161/161 [==============================] - 9s 46ms/step - loss: 0.0195 - val_loss: 0.0192
Epoch 2/10
161/161 [==============================] - 8s 49ms/step - loss: 0.0191 - val_loss: 0.0188
Epoch 3/10
161/161 [==============================] - 8s 52ms/step - loss: 0.0187 - val_loss: 0.0186
Epoch 4/10
161/161 [==============================] - 11s 68ms/step - loss: 0.0185 - val_loss: 0.0184
Epoch 5/10
161/161 [==============================] - 12s 77ms/step - loss: 0.0184 - val_loss: 0.0183
Epoch 6/10
161/161 [==============================] - 13s 83ms/step - loss: 0.0183 - val_loss: 0.0183
Epoch 7/10
161/161 [==============================] - 14s 85ms/step - loss: 0.0183 - val_loss: 0.0182
You want to predict the next word from two word embeddings, so it doesn't make sense to set return_sequences to True in the LSTM. I played a bit with your data and model, and my best validation MSE is 0.015. The limitation comes from the data and from the simplistic task of predicting the next word from two consecutive words (would a human be able to do that?). Also, how did you get the 50-dimensional word embeddings? It would be interesting to be able to go back to the words and see what kind of predictions the model produces.
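For illustration, here is a minimal variant (my sketch, not a verified solution) where the LSTM returns only its final state, so the output shape matches one 50-dimensional target per sample; it assumes final_y is reshaped from (71520, 1, 50) to (71520, 50).

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

final_x = np.load('np_final_x_len_2.npy')                    # (71520, 2, 50)
final_y = np.load('np_final_y_len_2.npy').reshape(-1, 50)    # (71520, 50)

model = Sequential()
model.add(LSTM(512, input_shape=(2, 50)))     # return_sequences defaults to False
model.add(Dense(50, activation='tanh'))       # one 50-dimensional embedding as output

model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.0001))
model.fit(final_x, final_y, batch_size=400, epochs=10, validation_split=0.1)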
I have a rather complex sequence to sequence encoder decoder model. I run into an issue where my loss and accuracy drop to zero and I can't reproduce this error. It has nothing to do with the training data as it happens with different sets.
It seems to be learning as the loss slowly drops. Below is what it is like just before:
Epoch 1/2
5000/5000 [==============================] - 235s 47ms/step - loss: 0.9825 - acc: 0.7077
Epoch 2/2
5000/5000 [==============================] - 235s 47ms/step - loss: 0.9443 - acc: 0.7177
And here is what it is like during the next model.fit() iteration:
Epoch 1/2
2882/2882 [==============================] - 136s 47ms/step - loss: 0.7033 - acc: 0.4399
Epoch 2/2
2882/2882 [==============================] - 136s 47ms/step - loss: 1.1921e-07 - acc: 0.0000e+00
After this, the loss and accuracy remain the same:
Epoch 1/2
5000/5000 [==============================] - 278s 56ms/step - loss: 1.1921e-07 - acc: 0.0000e+00
Epoch 2/2
5000/5000 [==============================] - 279s 56ms/step - loss: 1.1921e-07 - acc: 0.0000e+00
The reason I have to train in such a manner is because I have variable input sizes and output sizes. So I have to make batches of my training data with fixed input size before I train.
from keras import optimizers

sgd = optimizers.SGD(lr=0.015, decay=0.002)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
I need to use curriculum learning to reach sentence level predictions, so I am doing the following:
I initially train my model to output a "1 word + end" token. Training on this works fine. When I start to train on "2 words + end", this problem starts to arise.
After training on 1 word, I save the model. Then I define a new model with output size for 2 words, and use the following:
new_model = createModel(...,num_output_words)
new_model.set_weights(old_model.get_weights())
I have to do this as I can't define a model with variable output length.
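For what it's worth, a common pattern in this situation (a hedged sketch, not necessarily what createModel does) is to copy weights layer by layer and skip any layer whose shapes no longer match, so only the resized output layer starts from scratch:

# Assumes old_model and new_model are built as described in the question
for old_layer, new_layer in zip(old_model.layers, new_model.layers):
    old_w = old_layer.get_weights()
    new_w = new_layer.get_weights()
    shapes_match = (len(old_w) == len(new_w) and
                    all(o.shape == n.shape for o, n in zip(old_w, new_w)))
    if shapes_match:
        new_layer.set_weights(old_w)
    else:
        # Typically the final layer, whose size changed with num_output_words
        print("Skipping weight copy for layer:", new_layer.name)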
I can provide more information if needed. I can't find any information online.
I have just built my first model using Keras and this is the output. It looks like the standard output you get after building any Keras artificial neural network. Even after looking in the documentation, I do not fully understand what the epoch is and what the loss is which is printed in the output.
What is epoch and loss in Keras?
(I know it's probably an extremely basic question, but I couldn't seem to locate the answer online, and if the answer is really that hard to glean from the documentation I thought others would have the same question and thus decided to post it here.)
Epoch 1/20
1213/1213 [==============================] - 0s - loss: 0.1760
Epoch 2/20
1213/1213 [==============================] - 0s - loss: 0.1840
Epoch 3/20
1213/1213 [==============================] - 0s - loss: 0.1816
Epoch 4/20
1213/1213 [==============================] - 0s - loss: 0.1915
Epoch 5/20
1213/1213 [==============================] - 0s - loss: 0.1928
Epoch 6/20
1213/1213 [==============================] - 0s - loss: 0.1964
Epoch 7/20
1213/1213 [==============================] - 0s - loss: 0.1948
Epoch 8/20
1213/1213 [==============================] - 0s - loss: 0.1971
Epoch 9/20
1213/1213 [==============================] - 0s - loss: 0.1899
Epoch 10/20
1213/1213 [==============================] - 0s - loss: 0.1957
Epoch 11/20
1213/1213 [==============================] - 0s - loss: 0.1923
Epoch 12/20
1213/1213 [==============================] - 0s - loss: 0.1910
Epoch 13/20
1213/1213 [==============================] - 0s - loss: 0.2104
Epoch 14/20
1213/1213 [==============================] - 0s - loss: 0.1976
Epoch 15/20
1213/1213 [==============================] - 0s - loss: 0.1979
Epoch 16/20
1213/1213 [==============================] - 0s - loss: 0.2036
Epoch 17/20
1213/1213 [==============================] - 0s - loss: 0.2019
Epoch 18/20
1213/1213 [==============================] - 0s - loss: 0.1978
Epoch 19/20
1213/1213 [==============================] - 0s - loss: 0.1954
Epoch 20/20
1213/1213 [==============================] - 0s - loss: 0.1949
Just to answer the questions more specifically, here's a definition of epoch and loss:
Epoch: A full pass over all of your training data.
For example, in your view above, you have 1213 observations. So an epoch concludes when it has finished a training pass over all 1213 of your observations.
Loss: A scalar value that we attempt to minimize during our training of the model. The lower the loss, the closer our predictions are to the true labels.
This is usually Mean Squared Error (MSE) as David Maust said above, or often in Keras, Categorical Cross Entropy
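As a tiny concrete example of what a loss value measures, here is mean squared error computed by hand for three predictions (the numbers are made up purely for illustration):

import numpy as np

y_true = np.array([1.0, 0.0, 1.0])   # true labels
y_pred = np.array([0.8, 0.1, 0.6])   # model predictions

# MSE: average of the squared differences
mse = np.mean((y_true - y_pred) ** 2)
print(mse)   # (0.04 + 0.01 + 0.16) / 3 = 0.07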
What you'd expect to see from running fit on your Keras model, is a decrease in loss over n number of epochs. Your training run is rather abnormal, as your loss is actually increasing. This could be due to a learning rate that is too large, which is causing you to overshoot optima.
As jaycode mentioned, you will want to look at your model's performance on unseen data, as this is the general use case of Machine Learning.
As such, you should include a list of metrics in your compile method, which could look like:
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
As well as run your model on validation during the fit method, such as:
model.fit(data, labels, validation_split=0.2)
There's a lot more to explain, but hopefully this gets you started.
One epoch ends when your model has run the data through all the nodes in your network and is ready to update the weights to reach the optimal loss value. That is, smaller is better. In your case, as the loss scores are higher at higher epochs, it "seems" the model is better at the first epoch.
I said "seems" since we can't actually tell for sure yet, as the model has not been tested using a proper cross-validation method, i.e. it has been evaluated only against its training data.
Ways to improve your model:
Use cross validation in your Keras model in order to find out how the model actually perform, does it generalize well when predicting new data it has never seen before?
Adjust your learning rate, structure of neural network model, number of hidden units / layers, init, optimizer, and activator parameters used in your model among myriad other things.
Combining sklearn's GridSearchCV with Keras can automate this process.
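As an illustration of that last point, here is a rough sketch of wiring a Keras model into GridSearchCV via the scikit-learn wrapper bundled with Keras (keras.wrappers.scikit_learn; removed in recent versions, where the separate scikeras package is its successor). The grid values, X_train and y_train are placeholders, not taken from the question.

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(hidden_units=64):
    model = Sequential()
    model.add(Dense(hidden_units, input_dim=14, activation='relu'))
    model.add(Dense(2, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=build_model, verbose=0)
param_grid = {'hidden_units': [32, 64, 128],    # placeholder grid values
              'epochs': [20, 50],
              'batch_size': [16, 32]}
grid = GridSearchCV(clf, param_grid, cv=3)
grid_result = grid.fit(X_train, y_train)
print(grid_result.best_score_, grid_result.best_params_)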
I am building my first artificial multilayer perceptron neural network using Keras.
This is my input data:
This is my code which I used to build my initial model which basically follows the Keras example code:
model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
Output:
Epoch 1/20
1213/1213 [==============================] - 0s - loss: 0.1760
Epoch 2/20
1213/1213 [==============================] - 0s - loss: 0.1840
Epoch 3/20
1213/1213 [==============================] - 0s - loss: 0.1816
Epoch 4/20
1213/1213 [==============================] - 0s - loss: 0.1915
Epoch 5/20
1213/1213 [==============================] - 0s - loss: 0.1928
Epoch 6/20
1213/1213 [==============================] - 0s - loss: 0.1964
Epoch 7/20
1213/1213 [==============================] - 0s - loss: 0.1948
Epoch 8/20
1213/1213 [==============================] - 0s - loss: 0.1971
Epoch 9/20
1213/1213 [==============================] - 0s - loss: 0.1899
Epoch 10/20
1213/1213 [==============================] - 0s - loss: 0.1957
Epoch 11/20
1213/1213 [==============================] - 0s - loss: 0.1923
Epoch 12/20
1213/1213 [==============================] - 0s - loss: 0.1910
Epoch 13/20
1213/1213 [==============================] - 0s - loss: 0.2104
Epoch 14/20
1213/1213 [==============================] - 0s - loss: 0.1976
Epoch 15/20
1213/1213 [==============================] - 0s - loss: 0.1979
Epoch 16/20
1213/1213 [==============================] - 0s - loss: 0.2036
Epoch 17/20
1213/1213 [==============================] - 0s - loss: 0.2019
Epoch 18/20
1213/1213 [==============================] - 0s - loss: 0.1978
Epoch 19/20
1213/1213 [==============================] - 0s - loss: 0.1954
Epoch 20/20
1213/1213 [==============================] - 0s - loss: 0.1949
How do I train and tune this model and get my code to output my best predictive model? I am new to neural networks and am just wholly confused as to what is the next step after building the model. I know I want to optimize it, but I'm not sure which features to tweak or if I am supposed to do it manually or how to write code to do so.
Some things that you could do are listed below (a short sketch after the list puts several of them together):
Change your loss function from mean_squared_error to binary_crossentropy. mean_squared_error is intended for regression, but you want to classify your data.
Add show_accuracy=True to your fit() function, which outputs the accuracy of your model at every epoch. That information is probably more useful to you than just the loss value.
Add validation_split=0.2 to your fit() function. Currently you are only training on a training set and validating on nothing. That's a no-go in machine learning as you can't be sure that your model hasn't simply memorized the correct answers for your dataset (without really understanding why these answers are correct).
Change from Obama/Romney to Democrat/Republican and add data from previous elections. ~1200 examples is a pretty small dataset for neural networks. Also add columns with valuable information, like unemployment rate or population density. Note that quite some of the values (like population number) are probably similar to providing the name of the state, so e.g. your net will likely learn that Texas means Republican.
If you haven't done that already, normalize all your values to the range of 0 to 1 (by subtracting from each value the minimum of the column and then dividing by the (max - min) of the column). Neural networks can handle normalized data better than unnormalized data.
Try Adam and Adagrad instead of SGD. Sometimes they perform better. (See documentation about optimizers.)
Try Activation('relu'), LeakyReLU, PReLU and ELU instead of Activation('tanh'). Tanh is rarely the best choice. (See advanced activation functions.)
Try increasing/decreasing your dense layers sizes (e.g. from 64 to 128). Also try adding/removing layers.
Try adding BatchNormalization layers (before the Activation layers). (See documentation.)
Try changing the dropout rates (e.g. from 0.5 to 0.25).
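Putting several of these suggestions together (ReLU instead of tanh, batch normalization before the activations, lower dropout, Adam, an accuracy metric and a validation split), a revised version could look roughly like the sketch below. It keeps the question's Keras 1-style arguments (init=, nb_epoch); metrics=['accuracy'] is the newer equivalent of fit(show_accuracy=True), and with the existing two-unit softmax output, categorical_crossentropy is the direct counterpart of the binary_crossentropy suggestion, so that's what is used here. Treat it as a starting point, not a tuned solution.

from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, BatchNormalization
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(BatchNormalization())          # normalization before the activation
model.add(Activation('relu'))            # relu instead of tanh
model.add(Dropout(0.25))                 # lower dropout rate
model.add(Dense(64, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['accuracy'])       # report accuracy each epoch

model.fit(X_train, y_train,
          nb_epoch=20, batch_size=16,
          validation_split=0.2)           # hold out 20% for validation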