Keras MLP accuracy zero - Python

The following is my MLP model:
layers = [10, 20, 30, 40, 50]
model = keras.models.Sequential()

# Stacking layers; the first Dense layer defines the input shape
model.add(keras.layers.Dense(layers[0], input_dim=input_dim, activation='relu'))
for layer in layers[1:]:
    model.add(keras.layers.Dense(layer, activation='relu'))

# Output layer
model.add(keras.layers.Dense(1, activation='sigmoid'))

# Compile
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Training (here train_set holds the features and test_set the labels)
model.fit(train_set, test_set, validation_split=0.10, epochs=50, batch_size=10, shuffle=True, verbose=2)

# Evaluate the network
loss, accuracy = model.evaluate(train_set, test_set)
print("\nLoss: %.2f, Accuracy: %.2f%%" % (loss, accuracy * 100))

# Predictions
predt = model.predict(final_test)
print(predt)
The problem is that the accuracy is always 0. The training log is shown below:
Epoch 48/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
Epoch 49/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
Epoch 50/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
2422/2422 [==============================] - 0s 17us/step
Loss: 1.00, Accuracy: 0.00%
As suggested, I have changed my labels from -1,1 to 0,1, and yet the following is the training log:
Epoch 48/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
Epoch 49/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
Epoch 50/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
2422/2422 [==============================] - 0s 19us/step

Your code is very hard to read; this is not the recommended way to write a Keras model. Try the following and let us know what you get. Assume X is a matrix whose rows are the instances and whose columns are the features, and Y holds the labels.
Make sure the input is reshaped into the (instances, features) shape the TensorFlow backend expects, as shown below. Furthermore, the labels should be split across 2 output nodes for a better chance of success: mapping to a single output neuron is often less successful than using a probabilistic output over 2 nodes.
import keras
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split

n = 1000           # Number of instances
m = 4              # Number of features
num_classes = 2    # Number of output classes

...                # Your code for loading the data

X = X.reshape(n, m)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.33)

# One-hot encode the labels so they match the 2-node softmax output
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
Build your model. The last layer should use either sigmoid or softmax for a classification task. Try the Adadelta optimizer: it has been shown to produce good results by traversing the gradient more efficiently and reducing oscillations. We will also use cross-entropy as the loss function, as is standard for classification tasks; binary cross-entropy is fine too.
Try to use a standard model configuration. A steadily increasing number of nodes per layer does not make much sense. The model should look like a prism: a small set of input features, many hidden nodes, and a small set of output nodes. Aim for the smallest number of hidden layers and make the layers wider rather than adding more of them.
input_shape = (m,)

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=input_shape))
model.add(Dense(64, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
You can get a summary of your model using
model.summary()
Train your model
epochs = 100
batch_size = 128

# Fit the model weights
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
To view what happened during training
import matplotlib.pyplot as plt

plt.figure(figsize=(8, 10))

# Summarize history for accuracy
plt.subplot(2, 1, 1)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')

# Summarize history for loss
plt.subplot(2, 1, 2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')

plt.show()

How can I increase the accuracy of my LSTM model (regression)? [duplicate]

I am doing a time series analysis using TensorFlow/Keras in Python.
The overall LSTM model looks like:
import keras
from time import time

model = keras.models.Sequential()
model.add(keras.layers.LSTM(25, input_shape=(1, 1), activation='relu', dropout=0.2, return_sequences=False))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['acc'])

tensorboard = keras.callbacks.TensorBoard(log_dir="logs/{}".format(time()))
es = keras.callbacks.EarlyStopping(monitor='val_acc', mode='max', verbose=1, patience=50)
mc = keras.callbacks.ModelCheckpoint('/home/sukriti/best_model.h5', monitor='val_loss', mode='min', save_best_only=True)

history = model.fit(trainX_3d, trainY_1d, epochs=50, batch_size=10, verbose=2,
                    validation_data=(testX_3d, testY_1d), callbacks=[mc, es, tensorboard])
I am getting the following outcome:
Train on 14015 samples, validate on 3503 samples
Epoch 1/50
- 3s - loss: 0.0222 - acc: 7.1352e-05 - val_loss: 0.0064 - val_acc: 0.0000e+00
Epoch 2/50
- 2s - loss: 0.0120 - acc: 7.1352e-05 - val_loss: 0.0054 - val_acc: 0.0000e+00
Epoch 3/50
- 2s - loss: 0.0108 - acc: 7.1352e-05 - val_loss: 0.0047 - val_acc: 0.0000e+00
Now val_acc remains unchanged. Is this normal?
What does it signify?
As indicated by loss = 'mean_squared_error', you are in a regression setting, where accuracy is meaningless (it is meaningful only in classification problems).
Unfortunately, Keras will not "protect" you in such a case, insisting on computing and reporting back an "accuracy", despite the fact that it is meaningless and inappropriate for your problem - see my answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)?
You should simply remove metrics=['acc'] from your model compilation and not bother - in regression settings, MSE itself can (and usually does) also serve as the performance metric.
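For reference, a minimal sketch of the corrected compile and fit calls, reusing the variable names from the question (note that the EarlyStopping callback above monitors val_acc, so it would also need to switch to a loss-based monitor once the accuracy metric is gone):
# Regression setup: MSE is both the loss and the de facto performance measure.
model.compile(optimizer='adam', loss='mean_squared_error')

# Monitor val_loss instead of val_acc in the callbacks.
es = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50)
mc = keras.callbacks.ModelCheckpoint('/home/sukriti/best_model.h5', monitor='val_loss', mode='min', save_best_only=True)

history = model.fit(trainX_3d, trainY_1d, epochs=50, batch_size=10, verbose=2,
                    validation_data=(testX_3d, testY_1d), callbacks=[mc, es])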
In my case I had validation accuracy of 0.0000e+00 throughout training (using Keras and CNTK-GPU backend) when my batch size was 64 but there were only 120 samples in my validation set (divided into three classes). After I changed the batch size to 60, I got normal accuracy values.
It will not improve just by changing the batch size or the metrics. I had the same problem, but once I shuffled my training and validation data sets the 0.0000e+00 accuracy was gone.
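For illustration, a minimal sketch of that kind of shuffling, assuming the features and labels live in arrays X and y (the sklearn helpers here are just one way to do it):
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split

# Shuffle features and labels together so the rows stay aligned,
# then split, so neither set ends up with a skewed slice of the data.
X, y = shuffle(X, y, random_state=42)
x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# Note: model.fit(..., shuffle=True) only shuffles the training data each epoch;
# it does not shuffle rows into the validation_split portion, which Keras takes
# from the end of the arrays as-is.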

LSTM model on a 3-class label as a classification problem

My problem is to predict an output that has 3 class labels.
Let's say I have 20000 samples in my dataset, with each sample associated with a label (0, 1, 2), so this is a multiclass classification problem.
Can I give the network only the labels (0, 1, 2) as input and get predictions based on them?
Is the data fed to the network sufficient to learn and predict the output?
Please help me with your inputs.
# Below is the code
X_train, X_test, y_train, y_test = train_test_split(values_train[:, 0],
                                                    values_train[:, 1],
                                                    test_size=0.25,
                                                    random_state=42)
print(" X Training Set size is", X_train.shape)
print(" y Training Set size is", y_train.shape)
print(" X Test Set size is", X_test.shape)
print(" y Test Set size is", y_test.shape)

'X Training Set size is (165081,)'
'y Training Set size is (165081,)'
'X Test Set size is (55028,)'
'y Test Set size is (55028,)'

# convert to LSTM friendly format
X_train = X_train.reshape(len(X_train), 1, 1)
X_test = X_test.reshape(len(X_test), 1, 1)
print(X_train.shape, X_test.shape)

(165081, 1, 1) (55028, 1, 1)
# configure network
n_batch = 1
n_epoch = 100
n_neurons = 10

from keras.optimizers import SGD
opt = SGD(lr=0.01)

# design network
model = Sequential()
model.add(LSTM(n_neurons,
               batch_input_shape=(n_batch, X_train.shape[1], X_train.shape[2]),
               stateful=True))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

# fit network
for i in range(n_epoch):
    model.fit(X_train, y_train, validation_data=(X_test, y_test),
              epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()

df_actual = []
dp_predict = []
for i in range(len(X_test)):
    testX, testy = X_test[i], y_test[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    df_actual.append(testy)
    dp_predict.append(yhat)
    print('>Actual =%.1f, Predicted=%.1f' % (testy, yhat))
I am not able to get correct predictions from this model.
Update:
Please find below the validation accuracy and training accuracy, along with the loss:
Train on 154076 samples, validate on 66033 samples
Epoch 1/5
154076/154076 [==============================] - 289s 2ms/step - loss: 1.0033 - accuracy: 0.3816 - val_loss: 1.0018 - val_accuracy: 0.4286
Epoch 2/5
154076/154076 [==============================] - 291s 2ms/step - loss: 1.0021 - accuracy: 0.3817 - val_loss: 1.0020 - val_accuracy: 0.4286
Epoch 3/5
154076/154076 [==============================] - 293s 2ms/step - loss: 1.0018 - accuracy: 0.3804 - val_loss: 1.0014 - val_accuracy: 0.4286
Epoch 4/5
154076/154076 [==============================] - 290s 2ms/step - loss: 1.0016 - accuracy: 0.3812 - val_loss: 1.0012 - val_accuracy: 0.4286
Epoch 5/5
154076/154076 [==============================] - 290s 2ms/step - loss: 1.0015 - accuracy: 0.3814 - val_loss: 1.0012 - val_accuracy: 0.4286
Can anyone suggest what can be improved?
Note: I have normalized the input data with MinMaxScaler and used the scaled data, but there is no change in the output.
Your class labels are categorical. The network cannot learn directly from raw categorical labels with a softmax/categorical_crossentropy setup; you have to one-hot encode them, e.g. with keras.utils.to_categorical:
x = values_train[:, 0]
y = values_train[:, 1]
y = keras.utils.to_categorical(y)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)

Big difference between val-acc and prediction accuracy in Keras Neural Network

I have a dataset that I used to build an NN model in Keras. I took 2000 rows from that dataset to use as validation data; those 2000 rows are what I pass to the .predict function.
I wrote the code for the Keras NN and for now it works well, but I noticed something that is very strange to me. It gives me very good accuracy of more than 83% and a loss around 0.12, but when I make predictions on unseen data (those 2000 rows), it is only correct for about 65% of them on average.
When I add a Dropout layer, it only decreases accuracy.
Then I added EarlyStopping, and it gave me accuracy around 86% and loss around 0.10, but still, when I make predictions on unseen data, I get a final prediction accuracy of 67%.
Does this mean that the model makes a correct prediction in 87% of cases? My logic is: if I feed 100 samples to my .predict function, the program should make a good prediction for 87/100 samples, or somewhere in that range (let's say more than 80). I have tried feeding 100, 500, 1000, 1500 and 2000 samples to my .predict function, and it always makes correct predictions for 65-68% of the samples.
Why is that? Am I doing something wrong?
I have tried playing with the number of layers, the number of nodes, different activation functions and different optimizers, but it only changes the results by 1-2%.
My dataset looks like this:
DataFrame shape (59249, 33)
x_train shape (47399, 32)
y_train shape (47399,)
x_test shape (11850, 32)
y_test shape (11850,)
testing_features shape (1000, 32)
This is my NN model:
model = Sequential()
model.add(Dense(64, input_dim = x_train.shape[1], activation = 'relu')) # input layer requires input_dim param
model.add(Dropout(0.2))
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation='sigmoid')) # sigmoid instead of relu for final probability between 0 and 1
# compile the model, adam gradient descent (optimized)
model.compile(loss="binary_crossentropy", optimizer= "adam", metrics=['accuracy'])
# call the function to fit to the data training the network)
es = EarlyStopping(monitor='val_loss', min_delta=0.0, patience=1, verbose=0, mode='auto')
model.fit(x_train, y_train, epochs = 15, shuffle = True, batch_size=32, validation_data=(x_test, y_test), verbose=2, callbacks=[es])
scores = model.evaluate(x_test, y_test)
print(model.metrics_names[0], round(scores[0]*100,2), model.metrics_names[1], round(scores[1]*100,2))
These are the results:
Train on 47399 samples, validate on 11850 samples
Epoch 1/15
- 25s - loss: 0.3648 - acc: 0.8451 - val_loss: 0.2825 - val_acc: 0.8756
Epoch 2/15
- 9s - loss: 0.2949 - acc: 0.8689 - val_loss: 0.2566 - val_acc: 0.8797
Epoch 3/15
- 9s - loss: 0.2741 - acc: 0.8773 - val_loss: 0.2468 - val_acc: 0.8849
Epoch 4/15
- 9s - loss: 0.2626 - acc: 0.8816 - val_loss: 0.2416 - val_acc: 0.8845
Epoch 5/15
- 10s - loss: 0.2566 - acc: 0.8827 - val_loss: 0.2401 - val_acc: 0.8867
Epoch 6/15
- 8s - loss: 0.2503 - acc: 0.8858 - val_loss: 0.2364 - val_acc: 0.8893
Epoch 7/15
- 9s - loss: 0.2480 - acc: 0.8873 - val_loss: 0.2321 - val_acc: 0.8895
Epoch 8/15
- 9s - loss: 0.2450 - acc: 0.8886 - val_loss: 0.2357 - val_acc: 0.8888
11850/11850 [==============================] - 2s 173us/step
loss 23.57 acc 88.88
And this is for prediction:
# testing_features are 2000 rows that I extracted from the dataset
# (these samples are not used in training; this is a separate dataset that is imported)
prediction = model.predict(testing_features, batch_size=32)

res = []
for p in prediction:
    res.append(p[0].round(0))

# Accuracy with sklearn - also much lower
acc_score = accuracy_score(testing_results, res)
print("Sklearn acc", acc_score)

result_df = pd.DataFrame({"label": testing_results,
                          "prediction": res})
result_df["prediction"] = result_df["prediction"].astype(int)

s = 0
for x, y in zip(result_df["label"], result_df["prediction"]):
    if x == y:
        s += 1
print(s, "/", len(result_df))
acc = s * 100 / len(result_df)
print('TOTAL ACC:', round(acc, 2))
The problem is that now I get 52% accuracy with sklearn and 52% with my own accuracy check.
Why do I get such low accuracy on this held-out data, when the reported validation accuracy is so much higher?
The training log you posted shows high validation accuracy, so I'm a bit confused as to where that 65% comes from, but in general, when your model performs much better on training data than on unseen data, it means you are overfitting. This is a big and recurring problem in machine learning, and there is no method guaranteed to prevent it, but there are a couple of things you can try (a minimal sketch combining several of them follows the list):
regularizing the weights of your network, e.g. using l2 regularization
using stochastic regularization techniques such as dropout during training
early stopping
reducing model complexity (but you say you've already tried this)
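For illustration, a rough sketch combining weight regularization, dropout, and early stopping on a binary classifier similar in shape to yours (the layer sizes, the l2 factor of 0.01, and the dropout rate of 0.3 are assumptions, not tuned values):
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2
from keras.callbacks import EarlyStopping

# Hypothetical binary classifier with 32 input features, combining
# l2 weight regularization, dropout, and early stopping on val_loss.
model = Sequential()
model.add(Dense(64, input_dim=32, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dropout(0.3))
model.add(Dense(32, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

es = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(x_train, y_train, epochs=50, batch_size=32,
          validation_data=(x_test, y_test), callbacks=[es])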
I will list the problems/recommendations that I see with your model.
What are you trying to predict? You are using a sigmoid activation in the last layer, which suggests binary classification, but as your loss function you used mse, which seems strange. You can try binary_crossentropy instead of mse as the loss function for your model.
Your model seems to suffer from overfitting, so you can increase the dropout probability and also add new Dropout layers between the other hidden layers, or you can remove one of the hidden layers, because it seems your model is too complex.
You can change the number of neurons per layer to something narrower, e.g. 64 -> 32 -> 16 -> 1, or try different NN architectures.
Try the adam optimizer instead of sgd.
If you have 57849 samples, you can use 47000 samples for training+validation and the rest will be your test set.
Don't use the same set for both evaluation and validation. First split your data into train and test sets. Then, when fitting your model, pass a validation_split ratio and Keras will automatically hold out a validation set from your training data (see the sketch below).
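A minimal sketch of that splitting scheme, assuming X and Y hold the full feature matrix and labels (the 0.2 test fraction and 0.1 validation fraction are only example values):
from sklearn.model_selection import train_test_split

# Hold out a test set that is never seen during training or validation.
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)

# Let Keras carve the validation set out of the training data.
model.fit(x_train, y_train, epochs=15, batch_size=32, validation_split=0.1)

# Evaluate only once, on the untouched test set.
loss, acc = model.evaluate(x_test, y_test)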

Metrics not displaying when running model.fit

I am working my way through an ML example in Google Colab. The documentation says that when I run model.fit, the loss and accuracy metrics are displayed, but I am not seeing any loss or accuracy metric.
I have added accuracy as a metric in model.compile
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
Here is a screenshot of what I am seeing.
How do I get the loss and accuracy metrics to be displayed when I am fitting the model?
You can use the verbose flag: set it to 2 to display one line per epoch, or to 1 for a progress bar.
import keras
import numpy as np

model = keras.Sequential()
model.add(keras.layers.Dense(10, input_shape=(5, 6)))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy')

x_data = np.random.random((32, 5, 6))
y_data = np.random.randint(0, 9, size=(32, 5, 1))

model.fit(x=x_data, y=y_data, batch_size=16, epochs=3)
The output looks like this (you may also see a TensorFlow deprecation warning ending in "Use tf.cast instead."):
Epoch 1/3
32/32 [==============================] - 1s 20ms/step - loss: 9.9664
Epoch 2/3
32/32 [==============================] - 0s 293us/step - loss: 9.9537
Epoch 3/3
32/32 [==============================] - 0s 164us/step - loss: 9.9425
I hope it solves your problem.
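Note that the example above is compiled without any metrics, so only the loss is printed. If you keep metrics=['accuracy'] in the compile call, as in your question, the per-epoch line will report accuracy as well; a sketch reusing the toy data above:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# verbose=1 shows a progress bar, verbose=2 one line per epoch; with the metric
# added, each line now reports accuracy alongside the loss.
model.fit(x=x_data, y=y_data, batch_size=16, epochs=3, verbose=1)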

Running Keras Sequential model with different optimizers

I want to check the performance of my model with various optimizers (sgd, rmsprop, adam, adamax, etc.).
So I define a Keras Sequential model and then do this:
epochs = 50

print('--sgd start---')
model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
checkpointer_sgd = ModelCheckpoint(filepath='my_model_sgd.h5',
                                   verbose=1, save_best_only=True)
history_sgd = model.fit(X_train, y_train,
                        validation_split=0.2, epochs=epochs, batch_size=32,
                        callbacks=[checkpointer_sgd], verbose=1)
print('--sgd end---')
print('--------------------------------------------')

print('--rmsprop start---')
model.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy'])
checkpointer_rmsprop = ModelCheckpoint(filepath='my_model_rmsprop.h5',
                                       verbose=1, save_best_only=True)
history_rmsprop = model.fit(X_train, y_train,
                            validation_split=0.2, epochs=epochs, batch_size=32,
                            callbacks=[checkpointer_rmsprop], verbose=1)
print('--rmsprop end---')
I do this for all the optimizers (in the code above I have shown only sgd and rmsprop) and then execute the statements. What happens is that the first optimizer starts from low accuracy and the accuracy increases as more epochs pass, but the next optimizer already starts from a high accuracy.
Is the above code correct, or do I need to reset the model every time before I compile?
See below the first-epoch output for the different optimizers:
--sgd start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 46s 27ms/step - loss: 0.0510 - acc: 0.2985 - val_loss: 0.0442 - val_acc: 0.6986
--rmsprop start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 46s 27ms/step - loss: 0.0341 - acc: 0.5940 - val_loss: 0.0148 - val_acc: 0.6963
--adagrad start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 44s 26ms/step - loss: 0.0068 - acc: 0.6951 - val_loss: 0.0046 - val_acc: 0.6963
--adadelta start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 52s 30ms/step - loss: 8.0430e-04 - acc: 0.8125 - val_loss: 9.4660e-04 - val_acc: 0.7850
--adam start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 47s 27ms/step - loss: 7.7599e-04 - acc: 0.8201 - val_loss: 9.8981e-04 - val_acc: 0.7757
--adamax start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 54s 31ms/step - loss: 6.4941e-04 - acc: 0.8359 - val_loss: 9.2495e-04 - val_acc: 0.7991
Use K.clear_session(), which will clean up everything:
from keras import backend as K

def get_model():
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    return model

model = get_model()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
K.clear_session()  # it will destroy the keras object

model1 = get_model()
model1.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
model1.fit(X, Y, epochs=150, batch_size=10, verbose=0)
K.clear_session()
This solution should solve your problem. Let me know if it works.
Recompiling the model does not change its state. Weights learned before compilation will be the same after compilation. You need to delete the model object to clear the weights and create a new one before compiling again.
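Putting the two points together, one way to compare optimizers fairly is to rebuild a fresh model for each run. A sketch reusing the get_model() helper from the previous answer and the question's X_train/y_train data (the optimizer list and the mse loss mirror the question and are otherwise arbitrary):
from keras import backend as K

histories = {}
for opt in ['sgd', 'rmsprop', 'adagrad', 'adadelta', 'adam', 'adamax']:
    model = get_model()                  # fresh, untrained weights each time
    model.compile(optimizer=opt, loss='mse', metrics=['accuracy'])
    history = model.fit(X_train, y_train, validation_split=0.2,
                        epochs=50, batch_size=32, verbose=1)
    histories[opt] = history.history     # keep the plain dict before clearing
    K.clear_session()                    # drop the old graph before the next run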
