I'm a student in hydraulic engineering, working on a neural network during my internship, so this is something new for me.
I created my neural network, but it gives me a high loss and I don't know what the problem is. You can see the code:
def create_model():
    model = Sequential()
    # Adding the input layer
    model.add(Dense(26, activation='relu', input_shape=(n_cols,)))
    # Adding the hidden layers
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    # Adding the output layer
    model.add(Dense(2))
    # Compiling the model
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
    return model
kf = KFold(n_splits=5, shuffle=True)
model = create_model()
scores = []
for i in range(5):
    result = next(kf.split(data_input), None)
    input_train = data_input[result[0]]
    input_test = data_input[result[1]]
    output_train = data_output[result[0]]
    output_test = data_output[result[1]]
    # Fitting the model to the training set
    model.fit(input_train, output_train, epochs=5000, batch_size=200, verbose=2)
    predictions = model.predict(input_test)
    scores.append(model.evaluate(input_test, output_test))
print('Scores from each Iteration: ', scores)
print('Average K-Fold Score :', np.mean(scores))
And when I execute my code, the result looks like this:
Scores from each Iteration: [[93.90406122928908, 0.8907562990148529], [89.5892979597845, 0.8907563030218878], [81.26530176050522, 0.9327731132507324], [56.46526102659081, 0.9495798339362905], [54.314151876112994, 0.9579831877676379]]
Average K-Fold Score : 38.0159922589274
Can anyone help me, please? What could I do to make the loss lower?
There are several issues, both with your questions and with your code...
To start with, in general we cannot say that an MSE loss of X value is low or high. Unlike the accuracy in classification problems which is by definition in [0, 1], the loss is not similarly bounded, so there is no general way of saying that a particular value is low or high, as you imply here (it always depends on the specific problem).
Having clarified this, let's go to your code.
First, judging from your loss='mean_squared_error', it would seem that you are in a regression setting, in which accuracy is meaningless; see What function defines accuracy in Keras when the loss is mean squared error (MSE)?. You have not shared what exact problem you are trying to solve here, but if it is indeed a regression one (i.e. prediction of some numeric value), you should get rid of metrics=['accuracy'] in your model compilation, and possibly change your last layer to a single unit, i.e. model.add(Dense(1)).
Second, as your code currently stands, you don't actually fit independent models from scratch in each of your CV folds (which is the very essence of CV); in Keras, model.fit works cumulatively, i.e. it does not "reset" the model each time it is called, but continues fitting from the previous call. That's exactly why, if you look at your scores, it is evident that the model is significantly better in the later folds (which already gives a hint for improvement: add more epochs). To fit independent models, as you should for a proper CV, you should move create_model() inside the for loop.
Third, your usage of np.mean() here is again meaningless, as you average both the loss and the accuracy (i.e. apples with oranges) together; the fact that from 5 values of loss between 54 and 94 you end up with an "average" of 38 should have already alerted you that you are attempting something wrong. Truth is, if you dismiss the accuracy metric, as argued above, you would not have this problem here.
All in all, here is how your code should look in principle (but again, I have no idea of the exact problem you are trying to solve, so some details might differ):
def create_model():
    model = Sequential()
    # Adding the input layer
    model.add(Dense(26, activation='relu', input_shape=(n_cols,)))
    # Adding the hidden layers
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    # Adding the output layer
    model.add(Dense(1))  # change to 1 unit
    # Compiling the model
    model.compile(optimizer='adam', loss='mean_squared_error')  # dismiss accuracy
    return model

kf = KFold(n_splits=5, shuffle=True)
scores = []
for i in range(5):
    result = next(kf.split(data_input), None)
    input_train = data_input[result[0]]
    input_test = data_input[result[1]]
    output_train = data_output[result[0]]
    output_test = data_output[result[1]]
    # Fitting the model to the training set
    model = create_model()  # move create_model here
    model.fit(input_train, output_train, epochs=10000, batch_size=200, verbose=2)  # increase the epochs
    predictions = model.predict(input_test)
    scores.append(model.evaluate(input_test, output_test))
print('Loss from each Iteration: ', scores)
print('Average K-Fold Loss :', np.mean(scores))
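As a side note, one subtlety in the loop structure above: kf.split(data_input) creates a new generator (and, with shuffle=True and no random_state, a new shuffle) on every call, so taking next(kf.split(...)) inside the loop returns the first fold of a freshly shuffled split each time; you effectively get five random train/test splits rather than five disjoint folds. A minimal sketch of the more conventional pattern, iterating over the generator directly (same variable names as above):

scores = []
for train_idx, test_idx in kf.split(data_input):
    input_train, input_test = data_input[train_idx], data_input[test_idx]
    output_train, output_test = data_output[train_idx], data_output[test_idx]
    model = create_model()  # a fresh, independent model for every fold
    model.fit(input_train, output_train, epochs=10000, batch_size=200, verbose=2)
    scores.append(model.evaluate(input_test, output_test, verbose=0))
print('Loss from each fold: ', scores)
print('Average K-Fold loss: ', np.mean(scores))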
I trained my BERT model and get 99% accuracy on the training set; however, on the validation set I get only 80%. How can I improve my validation accuracy?
Code :
def build_model(self, n_categories):
    input_word_ids = tf.keras.Input(shape=(self.MAX_LEN,), dtype=tf.int32, name='input_word_ids')
    input_mask = tf.keras.Input(shape=(self.MAX_LEN,), dtype=tf.int32, name='input_mask')
    input_type_ids = tf.keras.Input(shape=(self.MAX_LEN,), dtype=tf.int32, name='input_type_ids')

    # Import RoBERTa model from HuggingFace
    #roberta_model = TFRobertaModel.from_pretrained(self.MODEL_NAME, num_labels = n_categories, output_attentions = False, output_hidden_states = False)
    roberta_model = TFBertModel.from_pretrained(self.MODEL_NAME, num_labels = n_categories, output_attentions = True, output_hidden_states = True)

    # for layer in roberta_model.layers[:-15]:
    #     layer.trainable = False

    x = roberta_model(input_word_ids, attention_mask=input_mask, token_type_ids=input_type_ids)

    # Huggingface transformers have multiple outputs, embeddings are the first one,
    # so let's slice out the first position
    x = x[0]
    x = tf.keras.layers.Dropout(0.1)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(256, activation='relu')(x)
    x = tf.keras.layers.Dense(n_categories, activation='softmax')(x)

    model = tf.keras.Model(inputs=[input_word_ids, input_mask, input_type_ids], outputs=x)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr=1e-5), loss='sparse_categorical_crossentropy', metrics=['accuracy'])

    return model
Based on the information you've provided it seems that your model is overfitting. Achieving a 99% accuracy on the training set and a significantly lower accuracy on the validation set indicates that the model is simply memorizing the training data and thus performing poorly on the validation set.
The first two hyper-parameters that I would consider tuning, in this case, are the number of epochs and the learning rate. Your initial goal should be to achieve a similar accuracy on both the training and validation set, even if it is only 80% or so. This generally means that you should lower the number of epochs until you're seeing roughly the same accuracy.
Picture a typical chart of this situation, with the training accuracy in blue, the validation accuracy in red and the number of epochs on the x axis: the training accuracy keeps increasing even as the validation accuracy levels off and starts to decrease. Ideally, you should stop training at the epoch where the validation accuracy peaks.
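One way to automate stopping at that point is an EarlyStopping callback; a minimal sketch (train_ds and val_ds are placeholders for your own datasets, and the patience value is only illustrative):

# Stop once the validation accuracy stops improving and roll back to the best epoch
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy',
                                              patience=3,
                                              restore_best_weights=True)
model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=[early_stop])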
From there you can begin to tune other parameters of the model, such as any available optimizer and regularization params.
Also, it is not clear from your question whether or not you are using a test set. It's advisable to split your data into three partitions (train, validation and test). The test data should not be used during training at all; use it only once, after the model has been trained, for an independent evaluation.
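For the split itself, a minimal sketch with scikit-learn (texts and labels are placeholders for your own data, and the ratios are only an example):

from sklearn.model_selection import train_test_split

# First carve out a held-out test set, then split the remainder into train/validation
X_tmp, X_test, y_tmp, y_test = train_test_split(texts, labels, test_size=0.15, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.15, random_state=42)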
I am currently trying to train a model using tf.GradientTape, as model.fit(...) from keras will not be able to handle my data input in the future. However, while a test run with model.fit(...) and my model works perfectly, tf.GradientTape does not.
During training, the loss using the tf.GradientTape custom workflow will first slightly decrease, but then become stuck and not improve any further, no matter how many epochs I run. The chosen metric will also not change after the first few batches. Additionally, the loss per batch is unstable and jumps between nearly zero to something very large. The running loss is more stable but shows the model not improving.
This is all in contrast to using model.fit(...), where loss and metrics are improving immediately.
My code:
def build_model(kernel_regularizer=l2(0.0001), dropout=0.001, recurrent_dropout=0.):
    x1 = Input(62)
    x2 = Input((62, 3))
    x = Embedding(30, 100, mask_zero=True)(x1)
    x = Concatenate()([x, x2])
    x = Bidirectional(LSTM(500,
                           return_sequences=True,
                           kernel_regularizer=kernel_regularizer,
                           dropout=dropout,
                           recurrent_dropout=recurrent_dropout))(x)
    x = Bidirectional(LSTM(500,
                           return_sequences=False,
                           kernel_regularizer=kernel_regularizer,
                           dropout=dropout,
                           recurrent_dropout=recurrent_dropout))(x)
    x = Activation('softmax')(x)
    x = Dense(1000)(x)
    x = Dense(500)(x)
    x = Dense(250)(x)
    x = Dense(1, bias_initializer='ones')(x)
    x = tf.math.abs(x)
    return Model(inputs=[x1, x2], outputs=x)
optimizer = Adam(learning_rate=0.0001)
model = build_model()
model.compile(optimizer=optimizer, loss='mse', metrics='mse')
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = AutoShardPolicy.DATA
dat_train = tf.data.Dataset.from_generator(
    generator=lambda: <load_function()>,
    output_types=((tf.int32, tf.float32), tf.float32)
)
dat_train = dat_train.with_options(options)
# keras training
model.fit(dat_train, epochs=50)
# custom training
for epoch in range(50):
    for (x1, x2), y in dat_train:
        with tf.GradientTape() as tape:
            y_pred = model((x1, x2), training=True)
            loss = model.loss(y, y_pred)
        grads = tape.gradient(loss, model.trainable_variables)
        model.optimizer.apply_gradients(zip(grads, model.trainable_variables))
I could use relu at the output layer, however, I found the abs to be more robust. Changing it does not change the outcome. The input x1 of the model is a sequence, x2 are some additional features, that are later concatenated to the embedded x1 sequence. For my approach, I'm not using the MSE, but it works either way.
I could provide some data, however, my dataset is quite large, so I would need to extract a bit out of it.
All in all, my problem seems to be similar to:
Keras model doesn't train when using GradientTape
Edit 1
The softmax activation is currently not necessary, but is relevant for my future goal of splitting the model.
Additionally, some things I noticed:
The custom training takes roughly 2x the amount of time compared to model.fit(...).
The gradients in the custom training seem very small and range from ±1e-3 to ±1e-9 inside the model. I don't know if that's normal and don't know how to compare it to the gradients provided by model.fit(...).
Edit 2
I've added a Google Colab notebook to reproduce the issue:
https://colab.research.google.com/drive/1pk66rbiux5vHZcav9VNSBhdWWIhQM-nF?usp=sharing
The loss and MSE for 20 epochs were plotted for both the custom training run and the keras training run (charts not reproduced here). While I only used a portion of my data in the notebook, it will still run for a very long time. For the custom training run, the loss for each batch is simply stored in losses, and it matches the behavior seen in the custom-training chart.
So far, I've noticed two ways of improving the performance of the custom training:
The usage of custom layer initialization
Using MSE as a loss function
Using the MSE, compared to my own loss function actually improves the custom training performance. Still, using MSE and/or different initialization won't come close to the performance of keras fit.
I have found the solution: it was a simple shape mismatch, which was somehow not picked up by any error check and affected both my custom loss function and MSE. Using x = Reshape(())(x) as the final layer did the trick.
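For context, the likely mechanism (an assumption based on the fix above) is that the model output had shape (batch, 1) while the targets had shape (batch,); the subtraction inside the loss then broadcasts to (batch, batch) and the loss is averaged over all pairs, without any error being raised. A minimal sketch of that silent broadcast:

import tensorflow as tf

y_true = tf.constant([1.0, 2.0, 3.0])        # shape (3,)
y_pred = tf.constant([[1.0], [2.0], [3.0]])  # shape (3, 1)

print(tf.square(y_true - y_pred).shape)            # (3, 3): the subtraction broadcasts silently
print(tf.reduce_mean(tf.square(y_true - y_pred)))  # ~1.33 instead of 0, although the values match
print(tf.keras.losses.MSE(y_true, y_pred))         # shape (3,): per-row means over the broadcast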
Hi, I'm trying to build a simple neural network with TensorFlow, where I give the model the training_data, which contains the standard values, and the target_data, which is the result I want it to produce when the predicted value is near one of those numbers.
For example, if I give y_test a value of 3.5, the model should predict a number close to 4, and the condition would then say it was a light smoker. I searched a bit for activation functions and learned I can't use sigmoid for what I want to do. I'm quite new to this subject; what I've done so far has been by trial and error.
import random
import tensorflow as tf
import numpy as np

training_data = []
for i in range(0, 5):
    training_data.append([random.uniform(0, 0.2944)])
for i in range(0, 5):
    training_data.append([random.uniform(0.2944, 1.7394)])
for i in range(0, 5):
    training_data.append([random.uniform(1.7394, 3.2394)])
for i in range(0, 5):
    training_data.append([random.uniform(3.2394, 6)])

target_data = []
for i in range(0, 5):
    target_data.append([1])
for i in range(0, 5):
    target_data.append([2])
for i in range(0, 5):
    target_data.append([3])
for i in range(0, 5):
    target_data.append([4])

y_test = np.array([100])

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(len(target_data), input_dim=1, activation='softmax'))
model.add(tf.keras.layers.Dense(1, activation='relu'))
model.compile(loss='mean_squared_error',
              optimizer='adam',
              metrics=['accuracy'])

training_data = np.asarray(training_data)
target_data = np.asarray(target_data)

model.fit(training_data, target_data, epochs=50, verbose=0)

target_pred = model.predict(y_test)
target_pred = float(target_pred)
print("X=%s, Predicted=%s" % (y_test, target_pred))

if 0 <= target_pred <= 1.5:
    print("\nNon-Smoker")
elif 1.5 <= target_pred < 2.5:
    print("\nPassive Smoker")
elif 2.5 <= target_pred < 3.5:
    print("Light Smoker")
else:
    print("Smoker\n")
Here is a helpful guide to using activation functions in the final layer, as well as the corresponding losses for different types of problems.
In your case, I am assuming you are working on a regression task with arbitrary values (any float value as output, not restricted to [0, 1] or [-1, 1]). So skip the activation function on the last layer and keep mse or mean_squared_error as your loss function.
EDIT:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(3,input_shape=(1,),activation='relu'))
model.add(tf.keras.layers.Dense(1))
You are defining your problem as a regression problem, where the result of model.predict is a linear value. For that kind of setup the last layer in your model is a linear layer without an activation function, and mse is fine as your loss.

You could instead define your problem as a classification problem with three classes: Non-Smoker, Passive-Smoker and Light-Smoker. In that case, your target data in training is not a number in the numerical sense but an integer that indicates which class the training sample belongs to. For example, you could label Non_Smoker as 0, Passive_Smoker as 1 and Light_Smoker as 2. The last layer in your model would then use a softmax activation function, and in model.compile your loss would be sparse_categorical_crossentropy because your labels are integers. If you one-hot encode your labels instead, for example Non_Smoker coded as 100, Passive_Smoker as 010 and Light_Smoker as 001, then your loss function would be categorical_crossentropy.

When you run model.predict on a test sample, it will produce a list containing 3 probabilities: the first is the probability of class 0 (Non_Smoker), the second the probability of class 1 (Passive_Smoker) and the third the probability of class 2 (Light_Smoker). You then use np.argmax to find which index has the highest probability, and that index is the model's prediction.
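If you go with the classification formulation described above, a minimal sketch might look like the following (the hidden-layer size and the commented-out fit call are placeholders, not a tuned model):

import numpy as np
import tensorflow as tf

# Integer labels: 0 = Non_Smoker, 1 = Passive_Smoker, 2 = Light_Smoker
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, input_shape=(1,), activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')  # one output unit per class
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(training_data, integer_labels, epochs=50)

probs = model.predict(np.array([[3.5]]))    # shape (1, 3): one probability per class
predicted_class = np.argmax(probs, axis=1)  # index of the most probable class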
I used linear regression to build an ML model but ran into a problem.
These are my result values:
Model1 Training Mean squared error: 154.96
Model1 Test Mean squared error: 72018955075415565139968.00
training score: 0.48
testing score: -236446352571492139008.00
I don't know why these values are printed. Is it because of overfitting?
I am using TensorFlow 1.13.1 and Python 3.7.
This seems to be a case of overfitting. You can:
Ensure that you are following the Data Pre-Processing steps: 1. Missing Value Imputation, 2. Fixing the Outliers, 3. Scaling or Normalizing the Features (a scaling sketch is shown after this list)
Ensure that you are Performing Feature Engineering (removing undesired Features, adding meaningful Features)
Shuffle the Data, by using shuffle=True in model.fit. Code is shown below:
history = cnn_model.fit(x = X_train_reshaped,
                        y = y_train,
                        batch_size = 512,
                        epochs = epochs, callbacks=[callback],
                        verbose = 1, validation_data = (X_test_reshaped, y_test),
                        validation_steps = 10, steps_per_epoch=steps_per_epoch,
                        shuffle = True)
Use Early Stopping. Code is shown below
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=15)
Use Dropout
Use Regularization. Code for Regularization is shown below (You can try l1 Regularization or l1_l2 Regularization as well):
from tensorflow.keras.regularizers import l2

Regularizer = l2(0.001)

model.add(tf.keras.layers.Dense(units=64, activation='relu',
                                kernel_regularizer=Regularizer,
                                bias_regularizer=Regularizer,
                                activity_regularizer=Regularizer))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Dense(units=10, activation='sigmoid',
                                activity_regularizer=Regularizer,
                                kernel_regularizer=Regularizer))
Last but not least, try removing some layers, as that reduces the number of trainable parameters.
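Regarding the scaling/normalization point in the first item of this list, here is a minimal sketch with scikit-learn (X_train and X_test are placeholders for your feature matrices; fit the scaler on the training data only):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on the training data only
X_test_scaled = scaler.transform(X_test)        # reuse the same statistics for the test data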
If your test accuracy and testing score don't improve despite following the above instructions, please share the complete code so that we can help you.
Hope this helps. Happy Learning!
I wrote a simple neural net/MLP and I'm getting some strange accuracy values and wanted to double check things.
This is my intended setup: a features matrix with 913 samples and 192 features (913, 192). I'm classifying 2 outcomes, so my labels are binary and have shape (913, 1). There is 1 hidden layer with 100 units (for now). All activations use tanh, all layers use L2 weight regularization, and the model is optimized with SGD.
The code is below. It was written in Python with the Keras framework (http://keras.io/), but my question isn't specific to Keras.
input_size = 192
hidden_size = 100
output_size = 1
lambda_reg = 0.01
learning_rate = 0.01
num_epochs = 100
batch_size = 10
model = Sequential()
model.add(Dense(input_size, hidden_size, W_regularizer=l2(lambda_reg), init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(hidden_size, output_size, W_regularizer=l2(lambda_reg), init='uniform'))
model.add(Activation('tanh'))
sgd = SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd, class_mode="binary")
history = History()
model.fit(features_all, labels_all, batch_size=batch_size, nb_epoch=num_epochs, show_accuracy=True, verbose=2, validation_split=0.2, callbacks=[history])
score = model.evaluate(features_all, labels_all, show_accuracy=True, verbose=1)
I have 2 questions:
This is my first time using Keras, so I want to double check that the code I wrote is actually correct for what I want it to do in terms of my choice of parameters and their values etc.
Using the code above, I get training and test set accuracy hovering around 50-60%. Maybe I'm just using bad features, but I wanted to test to see what might be wrong, so I manually set all the labels and features to something that should be predictable:
labels_all[:500] = 1
labels_all[500:] = 0
features_all[:500] = np.ones(192)*500
features_all[500:] = np.ones(192)
So I set the first 500 samples to have a label of 1, everything else is labelled 0. I set all the features manually to 500 for each of the first 500 samples, and all other features (for the rest of the samples) get a 1
When I run this, I get training accuracy of around 65%, and validation accuracy around 0%. I was expecting both accuracies to be extremely high/almost perfect - is this incorrect? My thinking was that the features with extremely high values all have the same label (1), while the features with low values get a 0 label
Mostly I'm just wondering if my code/model is incorrect or whether my logic is wrong
thanks!
I don't know that library, so I can't tell you if this is correctly implemented, but it looks legit.
I think your problem lies with the activation function: tanh(500) ≈ 1 and tanh(1) ≈ 0.76, so the difference between your two groups of inputs is tiny after the activation. Try using -1 instead of 500 for testing purposes, and normalize your real data to roughly [-2, 2]. If you need the full range of real numbers, try a linear activation function; if you only care about the positive half of the real numbers, I propose softplus or ReLU. I've checked, and all those functions are provided with Keras.
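If you want to try the suggested [-2, 2] normalization, here is a minimal sketch (scikit-learn is just one convenient option; features_all is the array from your code):

from sklearn.preprocessing import MinMaxScaler

# Rescale every feature into roughly [-2, 2] so the tanh units don't saturate
scaler = MinMaxScaler(feature_range=(-2, 2))
features_scaled = scaler.fit_transform(features_all)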
You can also try thresholding your output: answers like 0.75 when expecting 1, or 0.25 when expecting 0, are valid but may impact your accuracy.
Also, try tweaking your parameters. I can propose (basing on my own experience) that you'd use:
learning rate = 0.1
lambda in L2 = 0.2
number of epochs = 250 and bigger
batch size around 20-30
momentum = 0.1
learning rate decay about 10e-2 or 10e-3
I'd say that learning rate, number of epochs, momentum and lambda are the most important factors here - in order from most to least important.
PS. I've just spotted that you're initializing your weights uniformly (is that even a word? I'm not a native speaker...). I can't tell you why, but my intuition tells me that this is a bad idea. I'd go with random initial weights.