Why is my accuracy always 0.2 in this simple code? - python

I am new to this field and am trying to re-run an example LSTM snippet copied from the internet. The accuracy of the LSTM model is always 0.2, but the predicted output is totally correct, which means the accuracy should be 1. Could anyone tell me why?
from numpy import array
from keras.models import Sequential
from keras.layers import Dense, LSTM
length = 5
seq = array([i/float(length) for i in range(length)])
print(seq)
X = seq.reshape(length, 1, 1)
y = seq.reshape(length, 1)
# define LSTM configuration
n_neurons = length
n_batch = 1000
n_epoch = 1000
# create LSTM
model = Sequential()
model.add(LSTM(n_neurons, input_shape=(1, 1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# train LSTM
model.fit(X, y, epochs=n_epoch, batch_size=n_batch)#, verbose=2)
train_loss, train_acc = model.evaluate(X, y)
print('Training set accuracy:', train_acc)
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result:
    print('%.1f' % value)

You are measuring accuracy, but you are training a regressor. That means your output is a floating-point number, not a fixed categorical value.
If you change the last print to use 3 decimals of precision (print('%.3f' % value)), you will see that the predicted values are really close to the ground truth but not exactly the same, which is why the accuracy is low:
0.039
0.198
0.392
0.597
0.788
For some reason, the accuracy metric being used (sparse_categorical_accuracy) counts the pair 0.0 and 0.039 (or similar) as a hit instead of a miss, which is why you are getting 20% instead of 0%.
If you change the sequence to not contain zero, you will have 0% accuracy, which is less confusing:
seq = array([i/float(length) for i in range(1, length+1)])
Finally, to correct this, you can use, for example, mae instead of accuracy as the metric, and you will then see the error going down:
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae'])
Another option would be to switch to a categorical framework (changing your floats to categorical values).
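If you want to go that route, here is a minimal sketch of what it could look like, where each position in the sequence is treated as its own class (this binning and the epoch count are my own assumptions, not part of the original code):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM

length = 5
seq = np.array([i / float(length) for i in range(length)])
X = seq.reshape(length, 1, 1)
# Treat each position in the sequence as its own class (0 .. length-1)
y_class = np.arange(length)

model = Sequential()
model.add(LSTM(length, input_shape=(1, 1)))
model.add(Dense(length, activation='softmax'))  # one output unit per class
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])  # accuracy is meaningful for classes
model.fit(X, y_class, epochs=500, batch_size=5, verbose=0)
print(model.evaluate(X, y_class, verbose=0))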
Hope this helps! I will edit the answer if I can dig into why the sparse_categorical_accuracy detects the 0 as a hit and not a miss.

Related

Computing the loss (MSE) for every iteration and time Tensorflow

I want to use Tensorboard to plot the mean squared error (y-axis) for every iteration over a given time frame (x-axis), say 5 minutes.
However, I can only plot the MSE for every epoch and set a callback at 5 minutes. This does not, however, solve my problem.
I have tried searching the internet for ways to set a maximum number of iterations rather than epochs when calling model.fit, but without luck. I know iterations are the number of batches needed to complete one epoch, but since I want to tune the batch_size, I prefer to work in iterations.
My code currently looks like the following:
import tensorflow as tf
from tensorflow import keras
import tensorflow_addons as tfa

input_size = len(train_dataset.keys())
output_size = 10
hidden_layer_size = 250
n_epochs = 3
weights_initializer = keras.initializers.GlorotUniform()

#A function that trains and validates the model and returns the MSE
def train_val_model(run_dir, hparams):
    model = keras.models.Sequential([
        #Layer to be used as an entry point into a Network
        keras.layers.InputLayer(input_shape=[len(train_dataset.keys())]),
        #Dense layer 1
        keras.layers.Dense(hidden_layer_size, activation='relu',
                           kernel_initializer=weights_initializer,
                           name='Layer_1'),
        #Dense layer 2
        keras.layers.Dense(hidden_layer_size, activation='relu',
                           kernel_initializer=weights_initializer,
                           name='Layer_2'),
        #activation function is linear since we are doing regression
        keras.layers.Dense(output_size, activation='linear', name='Output_layer')
    ])
    #Use the stochastic gradient descent optimizer but change batch_size to get BSG, SGD or MiniSGD
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.0,
                                        nesterov=False)
    #Compiling the model
    model.compile(optimizer=optimizer,
                  loss='mean_squared_error',  #Computes the mean of squares of errors between labels and predictions
                  metrics=['mean_squared_error'])  #Computes the mean squared error between y_true and y_pred
    # initialize TimeStopping callback
    time_stopping_callback = tfa.callbacks.TimeStopping(seconds=5*60, verbose=1)
    #Training the network
    history = model.fit(normed_train_data, train_labels,
                        epochs=n_epochs,
                        batch_size=hparams['batch_size'],
                        verbose=1,
                        #validation_split=0.2,
                        callbacks=[tf.keras.callbacks.TensorBoard(run_dir + "/Keras"), time_stopping_callback])
    return history

#train_val_model("logs/sample", {'batch_size': len(normed_train_data)})
train_val_model("logs/sample1", {'batch_size': 1})
%tensorboard --logdir_spec=BSG:logs/sample,SGD:logs/sample1
resulting in: [TensorBoard screenshot not included]
The desired output should look something like this: [screenshot not included]
The reason you can't do it every iteration is that, by default, the loss is only reported at the end of each epoch. If you want to tune the batch size, run for a set number of epochs and evaluate. Start from 16 and go up in powers of 2 to see how far you can push your network. A bigger batch size is often said to improve performance, but the gain is usually not substantial enough to focus on it exclusively; tune other parts of the network first.
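A rough sketch of that kind of sweep, reusing the train_val_model function from the question (the batch sizes and log directory names are just examples):
# Sweep batch sizes in powers of 2 for a fixed number of epochs and compare.
# Assumes train_val_model() and the data from the question are already defined.
for batch_size in [16, 32, 64, 128, 256]:
    history = train_val_model("logs/bs_{}".format(batch_size),
                              {'batch_size': batch_size})
    print(batch_size, history.history['mean_squared_error'][-1])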
The answer was actually quite simple.
tf.keras.callbacks.TensorBoard has an update_freq argument that lets you control when losses and metrics are written to TensorBoard. The default is 'epoch', but you can change it to 'batch', or to an integer if you want to write to TensorBoard every n batches. See the documentation for more information: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard
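For instance, a minimal sketch based on the code above (logging every 100 batches is just an example value):
# Log metrics to TensorBoard every 100 training batches instead of every epoch.
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=run_dir + "/Keras",
    update_freq=100  # or 'batch' to write after every batch
)
history = model.fit(normed_train_data, train_labels,
                    epochs=n_epochs,
                    batch_size=hparams['batch_size'],
                    callbacks=[tensorboard_callback, time_stopping_callback])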

Simple neural network refuses to overfit

I wrote this super simple piece of code
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(1, input_dim=d, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X_train, y_train, epochs=10000, batch_size=n)
test_mse = model.evaluate(X_test, y_test)
print('test mse is {}'.format(test_mse))
X_train is an n-by-d numpy matrix and y_train is an n-by-1 numpy matrix.
This is basically the simplest linear neural network you could think of. One layer, input dimension is d, and we output a number.
It simply refuses to overfit. Even after running an insane number of iterations (10k epochs, as you can see), the training loss stays at around 0.17.
I expect the loss to be zero. Why do I expect that? Because in my case, d is much greater than n, so I have a lot more degrees of freedom. And as a further piece of evidence, when I actually solve X_train @ w = y_train using numpy.linalg.lstsq, the max value of X_train @ w - y_train is something like 10 to the -14.
So this system is definitely solvable. I expected to see zero loss or very close to zero loss, but I don't. Why?
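For reference, here is a minimal sketch of that least-squares check, with random placeholder data standing in for X_train and y_train (the actual data isn't shown in the question):
import numpy as np

n, d = 50, 200          # placeholder sizes with d much greater than n
rng = np.random.default_rng(0)
X_train = rng.standard_normal((n, d))
y_train = rng.standard_normal((n, 1))

# With d > n the system X_train @ w = y_train is (generically) exactly solvable.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
print(np.abs(X_train @ w - y_train).max())  # essentially zero, on the order of 1e-14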

Transfer learning with pretrained model by tf.GradientTape can't converge

I would like to perform transfer learning with a pretrained Keras model
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
base_model = keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False, pooling='avg')
x = base_model.outputs[0]
outputs = layers.Dense(10, activation=tf.nn.softmax)(x)
model = keras.Model(inputs=base_model.inputs, outputs=outputs)
Training with the Keras compile/fit functions converges:
model.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
history = model.fit(train_data, epochs=1)
The results are: loss: 0.4402 - accuracy: 0.8548
I want to train with tf.GradientTape, but it doesn't converge:
optimizer = keras.optimizers.Adam()
train_loss = keras.metrics.Mean()
train_acc = keras.metrics.SparseCategoricalAccuracy()

def train_step(data, labels):
    with tf.GradientTape() as gt:
        pred = model(data)
        loss = keras.losses.SparseCategoricalCrossentropy()(labels, pred)
    grads = gt.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, pred)

for xs, ys in train_data:
    train_step(xs, ys)
print('train_loss = {:.3f}, train_acc = {:.3f}'.format(train_loss.result(), train_acc.result()))
But the results are: train_loss = 7.576, train_acc = 0.101
If I only train the last layer by setting
base_model.trainable = False
It converges and the results are: train_loss = 0.525, train_acc = 0.823
What's the problem with the code? How should I modify it? Thanks.
Try ReLU as the activation function. It may be a vanishing gradient issue, which can occur with activation functions other than ReLU.
Following my comment, the reason it didn't converge is that you picked a learning rate that is too big. This causes the weights to change too much and the loss to explode. When setting base_model.trainable to False, most of the weights in the network were frozen and the learning rate was a good fit for your last layers. Here's a picture:
As a general rule, the learning rate should be tuned for each experiment.
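For instance, a minimal sketch (the exact value is an assumption you would need to tune):
from tensorflow import keras

# A much smaller learning rate is typically used when the whole pretrained
# backbone is trainable; 1e-5 here is only an illustrative starting point.
optimizer = keras.optimizers.Adam(learning_rate=1e-5)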
Edit: Following Wilson's comment, I'm not sure this is the reason you get different results, but it could be:
When you specify your loss, it is computed on each element of the batch; to get the loss of the batch you then take either the sum or the mean of the per-element losses, and depending on which one you choose you get a different magnitude. For example, if your batch size is 64, summing the losses yields a 64-times-bigger loss, which yields 64-times-bigger gradients, so choosing sum over mean with a batch size of 64 is like picking a 64-times-bigger learning rate.
So maybe the reason you get different results is that, by default, a keras.losses loss wrapped in model.compile uses a different reduction method. In the same vein, if the loss is reduced by summing, its magnitude depends on the batch size: with twice the batch size you get (on average) twice the loss and twice the gradient, so it's like doubling the learning rate.
My advice is to check the reduction method used by the loss to be sure it's the same in both cases, and if it's sum, to check that the batch size is the same. I would advise using mean reduction in general, since it's not influenced by the batch size.
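A small sketch of that check, assuming the TF 2.x Keras API:
import tensorflow as tf
from tensorflow import keras

# Make the reduction explicit so the loss magnitude does not scale with batch size.
loss_fn = keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)
print(loss_fn.reduction)  # confirm which reduction is actually in effect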

how to improve neural network prediction, classification

I am trying to learn some neural networks for fun. I decided to try to classify legendary Pokemon cards, using a dataset from Kaggle. I read up on documentation and followed Machine Learning Mastery guides, while reading Medium posts to try to understand the process.
My problem/question: I tried predicting and everything is predicted as "0", which I assume means False. Is my 92% a false accuracy? I read something about false accuracy online.
Please help!
Some background information: the dataset has 800 rows and 12 columns. I am predicting the last column (true/false). I am using attributes of the data that are numerical and categorical, and I label-encoded the categorical columns. 92% of these cards are False, 8% are True.
I sampled and ran a neural network on 200 cards and got 91% accuracy... I also reset everything and got 92% accuracy on all 800 cards. Am I overfitting?
Thank you for the help in advance.
from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Dense

dataFrame = dataFrame.fillna(value='NaN')
labelencoder = LabelEncoder()
numpy_dataframe = dataFrame.as_matrix()
numpy_dataframe[:, 0] = labelencoder.fit_transform(numpy_dataframe[:, 0])
numpy_dataframe[:, 1] = labelencoder.fit_transform(numpy_dataframe[:, 1])
numpy_dataframe
X = numpy_dataframe[:,0:10]
Y = numpy_dataframe[:,10]
model = Sequential()
model.add(Dense(12, input_dim=10, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=150, batch_size=10)
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
#this shows that we have 91.88% accuracy with the whole dataframe
dataFrame200False = dataFrame
dataFrame200False['Legendary'] = dataFrame200False['Legendary'].astype(str)
dataFrame200False= dataFrame200False[dataFrame200False['Legendary'].str.contains("False")]
dataFrame65True = dataFrame
dataFrame65True['Legendary'] = dataFrame65True['Legendary'].astype(str)
dataFrame65True= dataFrame65True[dataFrame65True['Legendary'].str.contains("True")]
DataFrameFalseSample = dataFrame200False.sample(200)
DataFrameFalseSample
dataFrameSampledTrueFalse = dataFrame65True.append(DataFrameFalseSample, ignore_index=True)
dataFrameSampledTrueFalse
#label encoding the files
labelencoder = LabelEncoder()
numpy_dataSample = dataFrameSampledTrueFalse.as_matrix()
numpy_dataSample[:, 0] = labelencoder.fit_transform(numpy_dataSample[:, 0])
numpy_dataSample[:, 1] = labelencoder.fit_transform(numpy_dataSample[:, 1])
numpy_dataSample
a = numpy_dataframe[:,0:10]
b = numpy_dataframe[:,10]
model = Sequential()
model.add(Dense(12, input_dim=10, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(a, b, epochs=1000, batch_size=10)
scoresSample = model.evaluate(a, b)
print("\n%s: %.2f%%" % (model.metrics_names[1], scoresSample[1]*100))
dataFramePredictSample = dataFrame.sample(500)
labelencoder = LabelEncoder()
numpy_dataframeSamples = dataFramePredictSample.as_matrix()
numpy_dataframeSamples[:, 0] = labelencoder.fit_transform(numpy_dataframeSamples[:, 0])
numpy_dataframeSamples[:, 1] = labelencoder.fit_transform(numpy_dataframeSamples[:, 1])
Xnew = numpy_dataframeSamples[:,0:10]
Ynew = numpy_dataframeSamples[:,10]
# make a prediction
Y = model.predict_classes(Xnew)
# show the inputs and predicted outputs
for i in range(len(Xnew)):
    print("X=%s, Predicted=%s" % (Xnew[i], Y[i]))
Problem:
The problem is that, as you stated, your dataset is heavily imbalanced. This means that you have a lot more training examples for class 0 than class 1. This causes the network, during training, to develop a heavy bias towards predicting class 0.
Evaluation:
The first thing you should do is not use accuracy as your evaluation measure! My suggestion would be to draw a confusion matrix so that you see exactly what the model is predicting. You could also look into macro-averaging (read this if you're not familiar with the technique).
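For instance, a quick sketch with scikit-learn, reusing the model, X, and Y from your code (the 0.5 cutoff is the usual sigmoid threshold):
from sklearn.metrics import confusion_matrix

# Rows are true classes, columns are predicted classes.
predictions = (model.predict(X) > 0.5).astype(int).ravel()
print(confusion_matrix(Y.astype(int), predictions))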
Dealing with the problem:
There are two ways you can improve the performance of the model:
Resample your data so that it becomes balanced. You have a couple of options here. The most common way is to oversample (e.g. SMOTE) the minority class so that it reaches the population of the majority. Another option is to undersample (e.g. Cluster Centroids) the majority class so that its population drops to that of the minority.
Use class weights during training. This forces the network to pay more attention to samples from the minority class (read this post for more info); a short sketch of this option follows below.
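A rough sketch of class weights with Keras, reusing the model, X, and Y from your code (the weights here are just inverse class frequencies; the exact values are illustrative):
import numpy as np

# Weight each class inversely to its frequency so the minority class counts more.
counts = np.bincount(Y.astype(int))          # e.g. [736, 64] for a 92% / 8% split
class_weight = {0: len(Y) / (2 * counts[0]),
                1: len(Y) / (2 * counts[1])}
model.fit(X, Y.astype(int), epochs=150, batch_size=10, class_weight=class_weight)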

What should I do to get low average loss?

I'm a student in hydraulic engineering, working on a neural network during my internship, so this is something new for me.
I created my neural network, but it gives me a high loss and I don't know what the problem is... you can see the code:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import KFold

def create_model():
    model = Sequential()
    # Adding the input layer
    model.add(Dense(26, activation='relu', input_shape=(n_cols,)))
    # Adding the hidden layer
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    # Adding the output layer
    model.add(Dense(2))
    # Compiling the RNN
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
    return model

kf = KFold(n_splits=5, shuffle=True)
model = create_model()
scores = []
for i in range(5):
    result = next(kf.split(data_input), None)
    input_train = data_input[result[0]]
    input_test = data_input[result[1]]
    output_train = data_output[result[0]]
    output_test = data_output[result[1]]
    # Fitting the RNN to the Training set
    model.fit(input_train, output_train, epochs=5000, batch_size=200, verbose=2)
    predictions = model.predict(input_test)
    scores.append(model.evaluate(input_test, output_test))

print('Scores from each Iteration: ', scores)
print('Average K-Fold Score :', np.mean(scores))
And when I execute my code, the results look like:
Scores from each Iteration: [[93.90406122928908, 0.8907562990148529], [89.5892979597845, 0.8907563030218878], [81.26530176050522, 0.9327731132507324], [56.46526102659081, 0.9495798339362905], [54.314151876112994, 0.9579831877676379]]
Average K-Fold Score : 38.0159922589274
Can anyone help me, please? What can I do to make the loss lower?
There are several issues, both with your questions and with your code...
To start with, in general we cannot say whether a given MSE loss value is low or high. Unlike accuracy in classification problems, which is by definition in [0, 1], the loss is not similarly bounded, so there is no general way of saying that a particular value is low or high, as you imply here (it always depends on the specific problem).
Having clarified this, let's go to your code.
First, judging from your loss='mean_squared_error', it would seem that you are in a regression setting, in which accuracy is meaningless; see What function defines accuracy in Keras when the loss is mean squared error (MSE)?. You have not shared what exact problem you are trying to solve here, but if it is indeed a regression one (i.e. prediction of some numeric value), you should get rid of metrics=['accuracy'] in your model compilation, and possibly change your last layer to a single unit, i.e. model.add(Dense(1)).
Second, as your code currently is, you don't actually fit independent models from scratch in each of your CV folds (which is the very essence of CV); in Keras, model.fit works cumulatively, i.e. it does not "reset" the model each time it is called, but it continues fitting from the previous call. That's exactly why if you see your scores, it is evident that the model is significantly better in the later folds (which already gives a hint for improving: add more epochs). To fit independent models as you should do for a proper CV, you should move create_model() inside the for loop.
Third, your usage of np.mean() here is again meaningless, as you average both the loss and the accuracy (i.e. apples with oranges) together; the fact that from 5 values of loss between 54 and 94 you end up with an "average" of 38 should have already alerted you that you are attempting something wrong. Truth is, if you dismiss the accuracy metric, as argued above, you would not have this problem here.
All in all, here is how it seems that your code should be in principle (but again, I have not the slightest idea of the exact problem you are trying to solve, so some details might be different):
def create_model():
    model = Sequential()
    # Adding the input layer
    model.add(Dense(26, activation='relu', input_shape=(n_cols,)))
    # Adding the hidden layer
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    # Adding the output layer
    model.add(Dense(1))  # change to 1 unit
    # Compiling the RNN
    model.compile(optimizer='adam', loss='mean_squared_error')  # dismiss accuracy
    return model

kf = KFold(n_splits=5, shuffle=True)
scores = []
for i in range(5):
    result = next(kf.split(data_input), None)
    input_train = data_input[result[0]]
    input_test = data_input[result[1]]
    output_train = data_output[result[0]]
    output_test = data_output[result[1]]
    # Fitting the RNN to the Training set
    model = create_model()  # move create_model here
    model.fit(input_train, output_train, epochs=10000, batch_size=200, verbose=2)  # increase the epochs
    predictions = model.predict(input_test)
    scores.append(model.evaluate(input_test, output_test))

print('Loss from each Iteration: ', scores)
print('Average K-Fold Loss :', np.mean(scores))
