Metrics not displaying when running model.fit - python

I am working my way through an ML example in Google Colab. The documentation says that when I run model.fit, the loss and accuracy metrics are displayed, but I am not seeing any loss or accuracy metric.
I have added accuracy as a metric in model.compile
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
Here is a screenshot of what I am seeing.
How do I get the loss and accuracy metrics to be displayed when I am fitting the model?

You can use the verbose argument of model.fit: set it to 2 to display one line per epoch, or to 1 for a progress bar.

import keras
import numpy as np

model = keras.Sequential()
model.add(keras.layers.Dense(10, input_shape=(5, 6)))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy')

x_data = np.random.random((32, 5, 6))
y_data = np.random.randint(0, 9, size=(32, 5, 1))

model.fit(x=x_data, y=y_data, batch_size=16, epochs=3, verbose=1)  # verbose=1: progress bar
Epoch 1/3
32/32 [==============================] - 1s 20ms/step - loss: 9.9664
Epoch 2/3
32/32 [==============================] - 0s 293us/step - loss: 9.9537
Epoch 3/3
32/32 [==============================] - 0s 164us/step - loss: 9.9425
I hope it solves your problem.
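If the goal is also to see accuracy next to the loss, here is a minimal variation of the snippet above (purely illustrative, same toy data) that adds the metric at compile time and sets verbose explicitly:
import keras
import numpy as np

model = keras.Sequential()
model.add(keras.layers.Dense(10, input_shape=(5, 6)))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])   # accuracy is now reported alongside the loss

x_data = np.random.random((32, 5, 6))
y_data = np.random.randint(0, 9, size=(32, 5, 1))

# verbose=2 prints one summary line per epoch; verbose=1 shows a progress bar
model.fit(x=x_data, y=y_data, batch_size=16, epochs=3, verbose=2)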

Related

How can I increase the accuracy of my LSTM model (regression)? [duplicate]

I am doing a time-series analysis using TensorFlow/Keras in Python.
The overall LSTM model looks like this:
import keras
from time import time  # used for the TensorBoard log directory name

model = keras.models.Sequential()
model.add(keras.layers.LSTM(25, input_shape=(1, 1), activation='relu', dropout=0.2, return_sequences=False))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['acc'])

tensorboard = keras.callbacks.TensorBoard(log_dir="logs/{}".format(time()))
es = keras.callbacks.EarlyStopping(monitor='val_acc', mode='max', verbose=1, patience=50)
mc = keras.callbacks.ModelCheckpoint('/home/sukriti/best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
history = model.fit(trainX_3d, trainY_1d, epochs=50, batch_size=10, verbose=2,
                    validation_data=(testX_3d, testY_1d), callbacks=[mc, es, tensorboard])
I am getting the following output:
Train on 14015 samples, validate on 3503 samples
Epoch 1/50
- 3s - loss: 0.0222 - acc: 7.1352e-05 - val_loss: 0.0064 - val_acc: 0.0000e+00
Epoch 2/50
- 2s - loss: 0.0120 - acc: 7.1352e-05 - val_loss: 0.0054 - val_acc: 0.0000e+00
Epoch 3/50
- 2s - loss: 0.0108 - acc: 7.1352e-05 - val_loss: 0.0047 - val_acc: 0.0000e+00
Now the val_acc remains unchanged. Is this normal?
What does it signify?
As signified by loss = 'mean_squared_error', you are in a regression setting, where accuracy is meaningless (it is meaningful only in classification problems).
Unfortunately, Keras will not "protect" you in such a case, insisting on computing and reporting back an "accuracy" despite the fact that it is meaningless and inappropriate for your problem - see my answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)?
You should simply remove metrics=['acc'] from your model compilation, and don't bother - in regression settings, MSE itself can (and usually does) serve also as the performance metric.
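If you still want an extra number to monitor, a minimal sketch (mirroring the question's setup, variable names assumed) would report MAE instead of accuracy:
import keras

# Same single-feature LSTM as in the question; keep MSE as the loss and,
# if an extra metric is wanted, report mean absolute error instead of accuracy.
model = keras.models.Sequential()
model.add(keras.layers.LSTM(25, input_shape=(1, 1), activation='relu',
                            dropout=0.2, return_sequences=False))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['mae'])   # MAE is a meaningful regression metric; 'acc' is not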
In my case I had validation accuracy of 0.0000e+00 throughout training (using Keras and CNTK-GPU backend) when my batch size was 64 but there were only 120 samples in my validation set (divided into three classes). After I changed the batch size to 60, I got normal accuracy values.
It will not improve by changing the batch size or the metrics. I had the same problem, but after I shuffled my training and validation data sets the 0.0000e+00 went away.
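For reference, a minimal sketch of that kind of shuffling (using sklearn, with dummy arrays standing in for the question's data) could look like this:
import numpy as np
from sklearn.utils import shuffle

# Dummy arrays standing in for trainX_3d / trainY_1d from the question
trainX_3d = np.random.random((14015, 1, 1))
trainY_1d = np.random.random((14015,))

# Shuffle features and labels together so they stay aligned
trainX_3d, trainY_1d = shuffle(trainX_3d, trainY_1d, random_state=0)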

How is the Keras accuracy shown in the progress bar calculated? From which inputs is it calculated? How can I replicate it?

I am trying to understand what the accuracy "acc" shown in the Keras progress bar at the end of an epoch actually is:
13/13 [==============================] - 0s 76us/step - loss: 0.7100 - acc: 0.4615
At the end of an epoch it should be the accuracy of the model's predictions on all training samples. However, when the model is evaluated on those same training samples, the actual accuracy can be very different.
Below is an adapted example of the MLP for binary classification from the Keras webpage. A simple sequential neural net does binary classification of randomly generated numbers. The batch size is the same as the number of training examples (13), so that every epoch contains only one step. Since the loss is set to binary_crossentropy, binary_accuracy (defined in metrics.py) is used for the accuracy calculation. The MyEval class defines a callback, which is called at the end of each epoch. It uses two ways of calculating the accuracy on the training data: a) model.evaluate, and b) model.predict to get predictions, followed by almost the same code as the Keras binary_accuracy function. These two accuracies are consistent with each other, but most of the time differ from the one in the progress bar. Why are they different? Is it possible to calculate the same accuracy as shown in the progress bar? Or have I made a mistake in my assumptions?
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import callbacks

np.random.seed(1)  # fix random seed for reproducibility

# Generate dummy data
x_train = np.random.random((13, 20))
y_train = np.random.randint(2, size=(13, 1))

model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

class MyEval(callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        my_accuracy_1 = self.model.evaluate(x_train, y_train, verbose=0)[1]
        y_pred = self.model.predict(x_train)
        my_accuracy_2 = np.mean(np.equal(y_train, np.round(y_pred)))
        print("my accuracy 1: {}".format(my_accuracy_1))
        print("my accuracy 2: {}".format(my_accuracy_2))

my_eval = MyEval()

model.fit(x_train, y_train,
          epochs=5,
          batch_size=13,
          callbacks=[my_eval],
          shuffle=False)
The output of the above code:
13/13 [==============================] - 0s 25ms/step - loss: 0.7303 - acc: 0.5385
my accuracy 1: 0.5384615659713745
my accuracy 2: 0.5384615384615384
Epoch 2/5
13/13 [==============================] - 0s 95us/step - loss: 0.7412 - acc: 0.4615
my accuracy 1: 0.9230769276618958
my accuracy 2: 0.9230769230769231
Epoch 3/5
13/13 [==============================] - 0s 77us/step - loss: 0.7324 - acc: 0.3846
my accuracy 1: 0.9230769276618958
my accuracy 2: 0.9230769230769231
Epoch 4/5
13/13 [==============================] - 0s 72us/step - loss: 0.6543 - acc: 0.5385
my accuracy 1: 0.9230769276618958
my accuracy 2: 0.9230769230769231
Epoch 5/5
13/13 [==============================] - 0s 76us/step - loss: 0.6459 - acc: 0.6923
my accuracy 1: 0.8461538553237915
my accuracy 2: 0.8461538461538461
Using: Python 3.5.2, tensorflow-gpu==1.14.0, Keras==2.2.4, numpy==1.15.2
I think it has to do with the usage of Dropout. Dropout is only enabled during training, but not during evaluation or prediction. Hence the discrepancy of the accuracies during training and evaluation/prediction.
Moreover, the training accuracy displayed in the progress bar is the running average over the per-batch accuracies computed during the epoch. Keep in mind that the model parameters are updated after each batch, so the accuracy shown in the bar at the end does not exactly match the accuracy of an evaluation run after the epoch is finished (the training accuracy is calculated with different model parameters for each batch, while the post-epoch evaluation uses the same parameters for all batches).
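To make the averaging point concrete, here is a small numpy sketch (illustrative numbers only, not taken from the run above) of how the running epoch average over per-batch accuracies differs from the accuracy of the final weights:
import numpy as np

# Hypothetical per-batch training accuracies, recorded while the weights keep changing
batch_acc = np.array([0.38, 0.46, 0.54, 0.62, 0.69])

# What the progress bar reports at the end of the epoch: the mean over the batches
progress_bar_acc = batch_acc.mean()   # 0.538

# What evaluate()/predict() report afterwards: the accuracy of the final weights,
# which can easily differ from the running average
final_model_acc = 0.69

print(progress_bar_acc, final_model_acc)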
This is your example, with more data (therefore more than one epoch), and without dropout:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import callbacks

np.random.seed(1)  # fix random seed for reproducibility

# Generate dummy data
x_train = np.random.random((200, 20))
y_train = np.random.randint(2, size=(200, 1))

model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
# model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

class MyEval(callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        my_accuracy_1 = self.model.evaluate(x_train, y_train, verbose=0)[1]
        y_pred = self.model.predict(x_train)
        my_accuracy_2 = np.mean(np.equal(y_train, np.round(y_pred)))
        print("my accuracy 1 after epoch {}: {}".format(epoch + 1, my_accuracy_1))
        print("my accuracy 2 after epoch {}: {}".format(epoch + 1, my_accuracy_2))

my_eval = MyEval()

model.fit(x_train, y_train,
          epochs=5,
          batch_size=13,
          callbacks=[my_eval],
          shuffle=False)
The output reads:
Train on 200 samples
Epoch 1/5
my accuracy 1 after epoch 1: 0.5450000166893005
my accuracy 2 after epoch 1: 0.545
200/200 [==============================] - 0s 2ms/sample - loss: 0.6978 - accuracy: 0.5350
Epoch 2/5
my accuracy 1 after epoch 2: 0.5600000023841858
my accuracy 2 after epoch 2: 0.56
200/200 [==============================] - 0s 383us/sample - loss: 0.6892 - accuracy: 0.5550
Epoch 3/5
my accuracy 1 after epoch 3: 0.5799999833106995
my accuracy 2 after epoch 3: 0.58
200/200 [==============================] - 0s 496us/sample - loss: 0.6844 - accuracy: 0.5800
Epoch 4/5
my accuracy 1 after epoch 4: 0.6000000238418579
my accuracy 2 after epoch 4: 0.6
200/200 [==============================] - 0s 364us/sample - loss: 0.6801 - accuracy: 0.6150
Epoch 5/5
my accuracy 1 after epoch 5: 0.6050000190734863
my accuracy 2 after epoch 5: 0.605
200/200 [==============================] - 0s 393us/sample - loss: 0.6756 - accuracy: 0.6200
The accuracy evaluated after each epoch now closely matches the averaged training accuracy shown at the end of that epoch.
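The Dropout point can also be verified directly. Assuming a recent tf.keras version with eager execution (the question used Keras 2.2.4, so this is an adaptation), the training flag can be passed when calling the model:
import numpy as np
import tensorflow as tf

# Tiny model with aggressive dropout to make the effect visible
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

x = np.random.random((4, 20)).astype('float32')

# Dropout active (as during fit): repeated calls give different outputs
print(model(x, training=True).numpy().ravel())
print(model(x, training=True).numpy().ravel())

# Dropout disabled (as during evaluate/predict): deterministic output
print(model(x, training=False).numpy().ravel())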

Keras backend function seems to be working incorrectly

I am trying to implement a custom loss function in Keras.
To start off, I wanted to be sure the existing loss function could be called from my custom function, and this is where the weird stuff begins:
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
works as expected.
Now the implementation of "sparse_categorical_crossentropy" in keras.losses is as follows:
def sparse_categorical_crossentropy(y_true, y_pred):
    return K.sparse_categorical_crossentropy(y_true, y_pred)
I concluded that passing K.sparse_categorical_crossentropy directly should also work. However, it throws expected activation_6 to have shape (4,) but got array with shape (1,).
Also, defining a custom loss function like this:
def custom_loss(y_true, y_pred):
    return keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
does not work either. During training it reduces the loss (which seems correct), but the accuracy does not improve (whereas it does when using the non-custom loss function).
I am not sure what is happening, nor do I know how to debug it properly. Any help would be highly appreciated.
I tested what you are saying on my code and yes, you are right. I was initially getting the same error as you were getting, but once I changed the metrics parameter from accuracy to sparse_categorical_accuracy, I started getting higher accuracy.
Here, one important thing to note is that when we tell Keras to use accuracy as the metric, Keras uses its default accuracy, which is categorical_accuracy. So, if we want to use our own custom loss function, we have to set the metrics parameter accordingly.
Read about the available metrics functions in Keras here.
Case 1:
def sparse_categorical_crossentropy(y_true, y_pred):
    return K.sparse_categorical_crossentropy(y_true, y_pred)

model.compile(optimizer='adam',
              loss=sparse_categorical_crossentropy,
              metrics=['accuracy'])
output:
ValueError: Error when checking target: expected dense_71 to have
shape (10,) but got array with shape (1,)
Case 2:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
output:
Epoch 1/2
60000/60000 [==============================] - 2s 38us/step - loss: 0.4714 - acc: 0.8668
Epoch 2/2
60000/60000 [==============================] - 1s 22us/step - loss: 0.2227 - acc: 0.9362
10000/10000 [==============================] - 1s 94us/step
Case 3:
def custom_sparse_categorical_crossentropy(y_true, y_pred):
    return K.sparse_categorical_crossentropy(y_true, y_pred)

model.compile(optimizer='adam',
              loss=custom_sparse_categorical_crossentropy,
              metrics=['accuracy'])
output:
Epoch 1/2
60000/60000 [==============================] - 2s 41us/step - loss: 0.4558 - acc: 0.1042
Epoch 2/2
60000/60000 [==============================] - 1s 22us/step - loss: 0.2164 - acc: 0.0997
10000/10000 [==============================] - 1s 89us/step
Case 4:
def custom_sparse_categorical_crossentropy(y_true, y_pred):
    return K.sparse_categorical_crossentropy(y_true, y_pred)

model.compile(optimizer='adam',
              loss=custom_sparse_categorical_crossentropy,
              metrics=['sparse_categorical_accuracy'])
output:
Epoch 1/2
60000/60000 [==============================] - 2s 40us/step - loss: 0.4736 - sparse_categorical_accuracy: 0.8673
Epoch 2/2
60000/60000 [==============================] - 1s 23us/step - loss: 0.2222 - sparse_categorical_accuracy: 0.9372
10000/10000 [==============================] - 1s 85us/step
Full Code:
from __future__ import absolute_import, division, print_function
import tensorflow as tf
import keras.backend as K

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(100, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.10),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

def custom_sparse_categorical_crossentropy(y_true, y_pred):
    return K.sparse_categorical_crossentropy(y_true, y_pred)

# def sparse_categorical_accuracy(y_true, y_pred):
#     # reshape in case it's in shape (num_samples, 1) instead of (num_samples,)
#     if K.ndim(y_true) == K.ndim(y_pred):
#         y_true = K.squeeze(y_true, -1)
#     # convert dense predictions to labels
#     y_pred_labels = K.argmax(y_pred, axis=-1)
#     y_pred_labels = K.cast(y_pred_labels, K.floatx())
#     return K.cast(K.equal(y_true, y_pred_labels), K.floatx())

model.compile(optimizer='adam',
              loss=custom_sparse_categorical_crossentropy,
              metrics=['sparse_categorical_accuracy'])

history = model.fit(x_train, y_train, epochs=2, batch_size=200)
model.evaluate(x_test, y_test)
Check out the implementation of sparse_categorical_accuracy from here and sparse_categorical_crossentropy from here.
What happens is that when you use the accuracy metric, Keras actually selects a different accuracy implementation depending on the loss, since how the accuracy should be computed depends on the labels and the predictions of the model:
for categorical_crossentropy it uses categorical_accuracy as accuracy metric.
for binary_crossentropy it uses binary_accuracy as accuracy metric.
for sparse_categorical_crossentropy it uses sparse_categorical_accuracy as accuracy metric.
Keras can only do this if you use the predefined losses, as it cannot guess otherwise. For your custom loss, you can use one of the three accuracy implementations directly, e.g. metrics=['sparse_categorical_accuracy'].
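For intuition, sparse_categorical_accuracy essentially compares the argmax of each prediction with the integer label; here is a rough numpy sketch of the idea (not the exact Keras implementation):
import numpy as np

def sparse_categorical_accuracy_np(y_true, y_pred):
    # Fraction of samples whose predicted class (argmax over the probability
    # vector) matches the integer label
    y_true = np.asarray(y_true).reshape(-1)
    pred_labels = np.argmax(y_pred, axis=-1)
    return np.mean(pred_labels == y_true)

# Example: 3 samples, 4 classes
y_true = [2, 0, 3]
y_pred = np.array([[0.1, 0.2, 0.6, 0.1],    # argmax = 2 (correct)
                   [0.7, 0.1, 0.1, 0.1],    # argmax = 0 (correct)
                   [0.3, 0.4, 0.2, 0.1]])   # argmax = 1 (wrong)
print(sparse_categorical_accuracy_np(y_true, y_pred))   # ~0.667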

tf.keras predictions are bad while evaluation is good

I'm programming a model in tf.keras, and running model.evaluate() on the training set usually yields ~96% accuracy. My evaluation on the test set is usually close, about 93%. However, when I predict manually, the model is usually inaccurate. This is my code:
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import pandas as pd
!git clone https://github.com/DanorRon/data
%cd data
!ls
batch_size = 100
epochs = 15
alpha = 0.001
lambda_ = 0.001
h1 = 50
train = pd.read_csv('/content/data/mnist_train.csv.zip')
test = pd.read_csv('/content/data/mnist_test.csv.zip')
train = train.loc['1':'5000', :]
test = test.loc['1':'2000', :]
train = train.sample(frac=1).reset_index(drop=True)
test = test.sample(frac=1).reset_index(drop=True)
x_train = train.loc[:, '1x1':'28x28']
y_train = train.loc[:, 'label']
x_test = test.loc[:, '1x1':'28x28']
y_test = test.loc[:, 'label']
x_train = x_train.values
y_train = y_train.values
x_test = x_test.values
y_test = y_test.values
nb_classes = 10
targets = y_train.reshape(-1)
y_train_onehot = np.eye(nb_classes)[targets]
nb_classes = 10
targets = y_test.reshape(-1)
y_test_onehot = np.eye(nb_classes)[targets]
model = tf.keras.Sequential()
model.add(layers.Dense(784, input_shape=(784,), kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(layers.Dense(h1, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(lambda_), kernel_initializer='random_uniform', bias_initializer='zeros'))
model.add(layers.Dense(10, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2(lambda_), kernel_initializer='random_uniform', bias_initializer='zeros'))
model.compile(optimizer='SGD',
              loss='mse',
              metrics=['categorical_accuracy'])
model.fit(x_train, y_train_onehot, epochs=epochs, batch_size=batch_size)
model.evaluate(x_test, y_test_onehot, batch_size=batch_size)
prediction = model.predict_classes(x_test)
print(prediction)
print(y_test[1:])
I've heard that a lot of the time when people have this problem, it's just a problem with the data input. But I can't see any problem with that here, since it almost always predicts wrongly (about as often as you would expect if it were random). How do I fix this problem?
Edit: Here are the specific results:
Last training step:
Epoch 15/15
49999/49999 [==============================] - 3s 70us/sample - loss: 0.0309 - categorical_accuracy: 0.9615
Evaluation output:
2000/2000 [==============================] - 0s 54us/sample - loss: 0.0352 - categorical_accuracy: 0.9310
[0.03524150168523192, 0.931]
Output from model.predict_classes:
[9 9 0 ... 5 0 5]
Output from print(y_test):
[9 0 0 7 6 8 5 1 3 2 4 1 4 5 8 4 9 2 4]
First thing is, your loss function is wrong: you are in a multi-class classification setting, and you are using a loss function suitable for regression and not classification (MSE).
Change your model compilation to:
model.compile(loss='categorical_crossentropy',
              optimizer='SGD',
              metrics=['accuracy'])
See the Keras MNIST MLP example for corroboration, and my own answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)? for more details (although here you actually have the inverse problem, i.e. a regression loss in a classification setting).
Moreover, it is not clear if the MNIST variant you are using is already normalized; if not, you should normalize the data yourself:
x_train = x_train.values/255
x_test = x_test.values/255
It is also not clear why you ask for a 784-unit layer, since this is actually the second layer of your NN (the first is implicitly set by the input_shape argument - see Keras Sequential model input layer), and it certainly does not need to contain one unit for each one of your 784 input features.
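To make that point concrete, here is a minimal sketch (keeping the question's variable names such as h1, which are assumptions here) that drops the redundant 784-unit layer:
import tensorflow as tf
from tensorflow.keras import layers

h1 = 50  # hidden layer size, as in the question

# The input dimensionality comes from input_shape; the first Dense layer
# does not need one unit per input feature.
model = tf.keras.Sequential()
model.add(layers.Dense(h1, activation='relu', input_shape=(784,)))
model.add(layers.Dense(10, activation='softmax'))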
UPDATE (after comments):
But why is MSE meaningless for classification?
This is a theoretical issue, not exactly appropriate for SO; roughly speaking, it is for the same reason we don't use linear regression for classification - we use logistic regression, the actual difference between the two approaches being exactly the loss function. Andrew Ng, in his popular Machine Learning course at Coursera, explains this nicely - see his Lecture 6.1 - Logistic Regression | Classification at Youtube (explanation starts at ~ 3:00), as well as section 4.2 Why Not Linear Regression [for classification]? of the (highly recommended and freely available) textbook An Introduction to Statistical Learning by Hastie, Tibshirani and coworkers.
And MSE does give a high accuracy, so why doesn't that matter?
Nowadays, almost anything you throw at MNIST will "work", which of course neither makes it correct nor a good approach for more demanding datasets...
UPDATE 2:
whenever I run with crossentropy, the accuracy just flutters around at ~10%
Sorry, cannot reproduce the behavior... Taking the Keras MNIST MLP example with a simplified version of your model, i.e.:
model = Sequential()
model.add(Dense(784, activation='linear', input_shape=(784,)))
model.add(Dense(50, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer=SGD(),
metrics=['accuracy'])
we easily end up with a ~ 92% validation accuracy after only 5 epochs:
history = model.fit(x_train, y_train,
batch_size=128,
epochs=5,
verbose=1,
validation_data=(x_test, y_test))
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 4s - loss: 0.8974 - acc: 0.7801 - val_loss: 0.4650 - val_acc: 0.8823
Epoch 2/10
60000/60000 [==============================] - 4s - loss: 0.4236 - acc: 0.8868 - val_loss: 0.3582 - val_acc: 0.9034
Epoch 3/10
60000/60000 [==============================] - 4s - loss: 0.3572 - acc: 0.9009 - val_loss: 0.3228 - val_acc: 0.9099
Epoch 4/10
60000/60000 [==============================] - 4s - loss: 0.3263 - acc: 0.9082 - val_loss: 0.3024 - val_acc: 0.9156
Epoch 5/10
60000/60000 [==============================] - 4s - loss: 0.3061 - acc: 0.9132 - val_loss: 0.2845 - val_acc: 0.9196
Notice the activation='linear' of the first Dense layer, which is the equivalent of not specifying anything, like in your case (as I said, practically everything you throw at MNIST will "work")...
Final advice: Try modifying your model as:
model = tf.keras.Sequential()
model.add(layers.Dense(784, activation='relu', input_shape=(784,)))
model.add(layers.Dense(h1, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
in order to use the better (and default) 'glorot_uniform' initializer, and remove the kernel_regularizer arguments (they may be the cause of the issue - always start simple!)...

keras MLP accuracy zero

The following is my MLP model,
layers = [10, 20, 30, 40, 50]

model = keras.models.Sequential()

# Stacking layers
model.add(keras.layers.Dense(layers[0], input_dim=input_dim, activation='relu'))  # defining the shape of the input
for layer in layers[1:]:
    model.add(keras.layers.Dense(layer, activation='relu'))  # layer activation function

# Output layer
model.add(keras.layers.Dense(1, activation='sigmoid'))

# Pre-training
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Training
model.fit(train_set, test_set, validation_split=0.10, epochs=50, batch_size=10, shuffle=True, verbose=2)

# Evaluate the network
loss, accuracy = model.evaluate(train_set, test_set)
print("\nLoss: %.2f, Accuracy: %.2f%%" % (loss, accuracy * 100))

# Predictions
predt = model.predict(final_test)
print(predt)
The problem is that the accuracy is always 0; the training log is shown below:
Epoch 48/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
Epoch 49/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
Epoch 50/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
2422/2422 [==============================] - 0s 17us/step
Loss: 1.00, Accuracy: 0.00%
As suggested, I've changed my learning signal from -1,1 to 0,1, and yet the following is the training log:
Epoch 48/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
Epoch 49/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
Epoch 50/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
2422/2422 [==============================] - 0s 19us/step
Your code is very hard to read; this is not the recommended way to write a Keras model. Try the following and let us know what you get. Assume X is a matrix where the rows are the instances and the columns are the features, and Y contains the labels.
You need to add a channel as the last dimension, as explained for the TensorFlow backend. Furthermore, the labels should be split into 2 output nodes for a better chance of success; a single-neuron mapping is often less successful than a probabilistic output with 2 nodes.
import keras
from sklearn.model_selection import train_test_split

n = 1000          # number of instances
m = 4             # number of features
num_classes = 2   # number of output classes

...  # your code for loading the data

X = X.reshape(n, m)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.33)

# One-hot encode the labels into two output nodes
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
Build your model. The last layer should use either sigmoid or softmax for a classification task. Try the Adadelta optimizer; it has been shown to produce better results by traversing the gradient more efficiently and reducing oscillations. We will also use cross entropy as our loss function, as is standard for classification tasks. Binary cross entropy is fine too.
Try to use a standard model configuration. A steadily increasing number of nodes does not really make much sense. The model should look like a prism: a small set of input features, many hidden nodes, and a small set of output nodes. Aim for the smallest number of hidden layers and make the layers wider, rather than adding layers.
from keras.models import Sequential
from keras.layers import Dense

input_shape = (m,)

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=input_shape))
model.add(Dense(64, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
You can get a summary of your model using
model.summary()
Train your model
epochs = 100
batch_size = 128

# Fit the model weights
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
To view what happened during training
import matplotlib.pyplot as plt

plt.figure(figsize=(8, 10))

# Summarize history for accuracy
plt.subplot(2, 1, 1)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')

# Summarize history for loss
plt.subplot(2, 1, 2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')

plt.show()
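One caveat, added here as an assumption about newer releases rather than something from the original answer: recent tf.keras versions log the metric under 'accuracy'/'val_accuracy' instead of 'acc'/'val_acc', so the plotting code may need to pick whichever key is actually present:
# Pick whichever accuracy key this Keras version actually logged
acc_key = 'acc' if 'acc' in history.history else 'accuracy'

plt.plot(history.history[acc_key])
plt.plot(history.history['val_' + acc_key])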
