Keras - Autoencoder accuracy stuck on zero - python

I'm trying to detect fraud using autoencoder and Keras. I've written the following code as a Notebook:
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.preprocessing import StandardScaler
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
data = pd.read_csv('../input/creditcard.csv')
data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
data = data.drop(['Time','Amount'],axis=1)
data = data[data.Class != 1]
X = data.loc[:, data.columns != 'Class']
encodingDim = 7
inputShape = X.shape[1]
inputData = Input(shape=(inputShape,))
X = X.as_matrix()
encoded = Dense(encodingDim, activation='relu')(inputData)
decoded = Dense(inputShape, activation='sigmoid')(encoded)
autoencoder = Model(inputData, decoded)
encoder = Model(inputData, encoded)
encodedInput = Input(shape=(encodingDim,))
decoderLayer = autoencoder.layers[-1]
decoder = Model(encodedInput, decoderLayer(encodedInput))
autoencoder.summary()
autoencoder.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = autoencoder.fit(X, X,
epochs=10,
batch_size=256,
validation_split=0.33)
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
I'm probably missing something: my accuracy is stuck at 0 and my test loss is lower than my train loss.
Any insight would be appreciated.

Accuracy has little meaning for an autoencoder, especially in a fraud-detection setting. What I mean by this is that accuracy is not well defined for regression tasks. For example, is it accurate to say that 0.1 is the same as 0.11? For the Keras accuracy metric it is not. If you want to see how well your model replicates the data, I would suggest looking at the MSE, or at the data itself. Many autoencoders use MSE as their loss function.
The metric you should be monitoring is the training loss on the genuine examples versus the validation loss on the fraudulent examples. There you can easily see whether you fit your genuine examples more closely than the fraudulent ones, and how well your algorithm performs in practice.
Another design choice I would not make is ReLU in an autoencoder. ReLU works well in deeper models because of its simplicity and effectiveness in combating vanishing/exploding gradients. However, both of these concerns are non-factors in a shallow autoencoder, and the information ReLU discards hurts an autoencoder. I would suggest tanh as your intermediate activation function.
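As a rough illustration, here is a sketch under assumptions rather than the poster's exact script: the fraudulent rows are kept aside instead of discarded, the intermediate activation is tanh as suggested, the loss is MSE, and the linear output layer is my own assumption to go with the MSE loss. The point is only to compare mean reconstruction error on genuine vs. fraudulent transactions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from keras.layers import Input, Dense
from keras.models import Model

data = pd.read_csv('../input/creditcard.csv')
data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
data = data.drop(['Time', 'Amount'], axis=1)
X_normal = data[data.Class != 1].drop('Class', axis=1).values   # genuine transactions
X_fraud = data[data.Class == 1].drop('Class', axis=1).values    # kept aside for comparison

inputShape = X_normal.shape[1]
inputData = Input(shape=(inputShape,))
encoded = Dense(7, activation='tanh')(inputData)            # tanh instead of relu
decoded = Dense(inputShape, activation='linear')(encoded)   # linear output is my assumption for MSE
autoencoder = Model(inputData, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X_normal, X_normal, epochs=10, batch_size=256, validation_split=0.33)

# Per-sample reconstruction error: fraud should reconstruct noticeably worse.
mse_normal = np.mean(np.square(X_normal - autoencoder.predict(X_normal)), axis=1)
mse_fraud = np.mean(np.square(X_fraud - autoencoder.predict(X_fraud)), axis=1)
print(mse_normal.mean(), mse_fraud.mean())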

Related

Why do I get lagged results on my LSTM model?

I am new to machine learning and I am performing a multivariate time series forecast using LSTMs in Keras. I have a monthly timeseries dataset with 4 input variables (temperature, precipitation, dew and wind speed) and 1 output variable (pollution). Using this data I framed a forecasting problem where, given the weather conditions and pollution for prior months, I forecast the pollution for the next month. Below is my code:
X = df[['Temperature', 'Precipitation', 'Dew', 'Wind_speed' ,'Pollution (t_1)']].values
y = df['Pollution (t)'].values
y = y.reshape(-1,1)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(X)
#dataset has 359 samples in total
train_X, train_y = X[:278], y[:278]
test_X, test_y = X[278:], y[278:]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
model = Sequential()
model.add(LSTM(100, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dropout(0.2))
# model.add(LSTM(70))
# model.add(Dropout(0.3))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(train_X, train_y, epochs=700, batch_size=70, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
To make predictions I use the following code:
from sklearn.metrics import mean_squared_error,r2_score
yhat = model.predict(test_X)
mse = mean_squared_error(test_y, yhat)
rmse = np.sqrt(mse)
r2 = r2_score(test_y, yhat)
print("test set performance")
print("--------------------")
print("MSE:",mse)
print("RMSE:",rmse)
print("R^2: ",r2)
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(range(len(test_y)), test_y, '-b',label='Actual')
ax.plot(range(len(yhat)), yhat, 'r', label='Predicted')
plt.legend()
plt.show()
Running this code I ran into the following issues:
For some reason I am getting a lagged result for my test set which is not present in my training data, as shown in the image below. I do not understand why I get these lagged results (does it have something to do with including 'Pollution (t_1)' as part of my inputs?).
Graph results:
By adding 'Pollution (t_1)', which is the pollution variable shifted by 1 lag, as part of my inputs, this variable now seems to dominate the prediction: removing the other variables seems to have no influence on my results (R-squared and RMSE), which is strange since all these variables do assist in pollution prediction.
Is there something I am doing wrong in my code which is the reason for these issues? I am new to Python, so any help answering the above 2 questions will be greatly appreciated.
First of all, I think it is not appropriate to use 1 as the timesteps value, because an LSTM is meant to process timeseries or sequence data over multiple timesteps.
I think the following data-preparation script will work well:
def lstm_data(df, timestamps):
    array_data = df.values
    sc = MinMaxScaler()
    array_data_ = sc.fit_transform(array_data)
    array = np.empty((0, array_data_.shape[1]))
    range_ = array_data_.shape[0] - (timestamps - 1)
    for t in range(range_):
        array_data_p = array_data_[t:t + timestamps, :]  # was 'sequenth_length', which is undefined
        array = np.vstack((array, array_data_p))
    array_ = array.reshape(-1, timestamps, array.shape[1])
    return array_
# timestamps depends on your objective, but it should not be 1
x_data=lstm_data(x, timestamps=4)
y_data=lstm_data(y, timestamps=4)
y_data=y_data.reshape(-1,1)
#Divide each data into train and test
#Input the divided data into your LSTM model
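To make that last step concrete, here is a minimal sketch using dummy arrays rather than the poster's data (the 4 timesteps and 5 features are assumptions) of how the resulting 3-D input feeds an LSTM:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# dummy windowed data: (samples, timesteps=4, features=5), one target per window
train_X = np.random.uniform(size=(300, 4, 5))
train_y = np.random.uniform(size=(300, 1))

model = Sequential()
model.add(LSTM(100, input_shape=(train_X.shape[1], train_X.shape[2])))  # timesteps > 1
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(train_X, train_y, epochs=10, batch_size=70, shuffle=False, verbose=0)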

Tensorflow 2.0: How are metrics computed when the output is sequential?

I have been working with binary sequential inputs and outputs using Tensorflow 2.0, and I've been wondering which approach Tensorflow uses to compute metrics such as recall or accuracy during training in those scenarios.
Each sample to my network consists of 60 timesteps, each with 300 features, and thus my expected output is a (60, 1) array of 1s and 0s. Suppose I have 2000 validation samples. When evaluating the validation set for each epoch, does TensorFlow concatenate all of the 2000 samples into a single (2000*60=120000, 1) array and then compare it to the concatenated ground-truth labels, or does it evaluate each of the (60, 1) arrays individually and then return the mean of those values? Is there any way to modify this behavior?
TensorFlow/Keras by default computes the metrics batch-wise for the training data, while it computes the same metrics on ALL the data passed in the validation_data parameter of the fit method.
This means that the metric printed during fitting for the training data is the mean of that score calculated over all the batches. In other words, for the training set Keras evaluates each batch individually and then returns the mean of those values. For the validation data it is different: Keras takes all the validation samples and then compares them with the "concatenated" ground-truth labels.
To demonstrate this behavior with code, I propose a dummy example. I provide a custom callback that computes the accuracy score on ALL the data passed, at the end of each epoch (for the training set and, optionally, the validation set). This is useful for understanding the behavior of TensorFlow during training.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
from tensorflow.keras.callbacks import *
class ACC_custom(tf.keras.callbacks.Callback):
    def __init__(self, train, validation=None):
        super(ACC_custom, self).__init__()
        self.validation = validation
        self.train = train

    def on_epoch_end(self, epoch, logs={}):
        logs['ACC_score_train'] = float('-inf')
        X_train, y_train = self.train[0], self.train[1]
        y_pred = (self.model.predict(X_train).ravel() > 0.5) + 0
        score = accuracy_score(y_train.ravel(), y_pred)
        if (self.validation):
            logs['ACC_score_val'] = float('-inf')
            X_valid, y_valid = self.validation[0], self.validation[1]
            y_val_pred = (self.model.predict(X_valid).ravel() > 0.5) + 0
            val_score = accuracy_score(y_valid.ravel(), y_val_pred)
            logs['ACC_score_train'] = np.round(score, 5)
            logs['ACC_score_val'] = np.round(val_score, 5)
        else:
            logs['ACC_score_train'] = np.round(score, 5)
create dummy data
x_train = np.random.uniform(0,1, (1000,60,10))
y_train = np.random.randint(0,2, (1000,60,1))
x_val = np.random.uniform(0,1, (500,60,10))
y_val = np.random.randint(0,2, (500,60,1))
fit model
inp = Input(shape=((60,10)), dtype='float32')
x = Dense(32, activation='relu')(inp)
out = Dense(1, activation='sigmoid')(x)
model = Model(inp, out)
es = EarlyStopping(patience=10, verbose=1, min_delta=0.001,
monitor='ACC_score_val', mode='max', restore_best_weights=True)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train,y_train, epochs=10, verbose=2,
callbacks=[ACC_custom(train=(x_train,y_train),validation=(x_val,y_val)),es],
validation_data=(x_val,y_val))
In the graphs below I compare the accuracies computed by our callback with the accuracies computed by Keras:
plt.plot(history.history['ACC_score_train'], label='accuracy_callback_train')
plt.plot(history.history['accuracy'], label='accuracy_default_train')
plt.legend(); plt.title('train accuracy')
plt.plot(history.history['ACC_score_val'], label='accuracy_callback_valid')
plt.plot(history.history['val_accuracy'], label='accuracy_default_valid')
plt.legend(); plt.title('validation accuracy')
As we can see, the accuracy on the training data (first plot) differs between the default method and our callback. This means that the accuracy on the training data is calculated batch-wise.
The validation accuracy (second plot) calculated by our callback and by the default method is the same! This means that the score on the validation data is computed in one shot.

Machine Learning Model overfitting

So I built a GRU model and I'm comparing 3 different datasets on the same model. I was just running the first dataset with the number of epochs set to 25, but I have noticed that my validation loss starts increasing just after the 6th epoch. Doesn't that indicate overfitting? Am I doing something wrong?
import pandas as pd
import tensorflow as tf
from keras.layers.core import Dense
from keras.layers.recurrent import GRU
from keras.models import Sequential
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from google.colab import files
from tensorboardcolab import TensorBoardColab, TensorBoardColabCallback
tbc=TensorBoardColab() # Tensorboard
df10=pd.read_csv('/content/drive/My Drive/Isolation Forest/IF 10 PERCENT.csv',index_col=None)
df2_10= pd.read_csv('/content/drive/My Drive/2019 Dataframe/2019 10minutes IF 10 PERCENT.csv',index_col=None)
X10_train= df10[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
X10_train=X10_train.values
y10_train= df10['Power_kW']
y10_train=y10_train.values
X10_test= df2_10[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
X10_test=X10_test.values
y10_test= df2_10['Power_kW']
y10_test=y10_test.values
# scaling values for model
x_scale = MinMaxScaler()
y_scale = MinMaxScaler()
X10_train= x_scale.fit_transform(X10_train)
y10_train= y_scale.fit_transform(y10_train.reshape(-1,1))
X10_test= x_scale.fit_transform(X10_test)
y10_test= y_scale.fit_transform(y10_test.reshape(-1,1))
X10_train = X10_train.reshape((-1,1,12))
X10_test = X10_test.reshape((-1,1,12))
# creating model using Keras
model10 = Sequential()
model10.add(GRU(units=512, return_sequences=True, input_shape=(1,12)))
model10.add(GRU(units=256, return_sequences=True))
model10.add(GRU(units=256))
model10.add(Dense(units=1, activation='sigmoid'))
model10.compile(loss=['mse'], optimizer='adam',metrics=['mse'])
model10.summary()
history10=model10.fit(X10_train, y10_train, batch_size=256, epochs=25,validation_split=0.20, verbose=1, callbacks=[TensorBoardColabCallback(tbc)])
score = model10.evaluate(X10_test, y10_test)
print('Score: {}'.format(score))
y10_predicted = model10.predict(X10_test)
y10_predicted = y_scale.inverse_transform(y10_predicted)
y10_test = y_scale.inverse_transform(y10_test)
plt.plot( y10_predicted, label='Predicted')
plt.plot( y10_test, label='Measurements')
plt.legend()
plt.savefig('/content/drive/My Drive/Figures/Power Prediction 10 Percent.png')
plt.show()
LSTMs (and also GRUs, in spite of their lighter construction) are notorious for overfitting easily.
Reduce the number of units (the output size) in each of the layers, e.g. 32 in layer 1 and 64 in layer 2; you could also eliminate the last GRU layer altogether.
Second of all, you are using the activation 'sigmoid', but your loss function and metric are MSE.
Ensure that your problem is either a regression or a classification one. If it is indeed a regression, then the activation function of the last layer should be 'linear'. If it is a classification one, you should change your loss function to binary_crossentropy and your metric to 'accuracy'.
Therefore, the plot displayed is just misleading for the moment. If you modify things as I suggested and you still get such a train-val loss plot, then we can state for sure that you have a case of overfitting.
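For instance, here is a minimal sketch of those suggestions applied to the question's model, assuming the task is in fact a regression (the 32/64 unit sizes are just the illustrative values above, not a tuned choice):
# Sketch only: smaller GRU stack, linear output for regression, MSE loss.
from keras.models import Sequential
from keras.layers import GRU, Dense

model10 = Sequential()
model10.add(GRU(units=32, return_sequences=True, input_shape=(1, 12)))
model10.add(GRU(units=64))                         # third GRU layer removed entirely
model10.add(Dense(units=1, activation='linear'))   # linear, not sigmoid, for regression
model10.compile(loss='mse', optimizer='adam', metrics=['mse'])
# history10 = model10.fit(X10_train, y10_train, batch_size=256, epochs=25,
#                         validation_split=0.20, verbose=1)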

Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives?

I am training a multi-label classification model for detecting attributes of clothes. I am using transfer learning in Keras, retraining the last few layers of the vgg-19 model.
The total number of attributes is 1000 and about 99% of them are 0s. Metrics like accuracy, precision, recall, etc. all fail, as the model can predict all zeroes and still achieve a very high score. As for loss functions, binary cross-entropy, Hamming loss, etc. haven't worked either.
I am using the deep fashion dataset.
So, which metrics and loss functions can I use to measure my model correctly?
What hassan has suggested is not correct:
Categorical Cross-Entropy loss or Softmax Loss is a Softmax activation plus a Cross-Entropy loss. If we use this loss, we will train a CNN to output a probability over the C classes for each image. It is used for multi-class classification.
What you want is multi-label classification, so you will use Binary Cross-Entropy Loss or Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every CNN output vector component is not affected by other component values. That’s why it is used for multi-label classification, where the insight of an element belonging to a certain class should not influence the decision for another class.
Now, for handling class imbalance, you can use a weighted Sigmoid Cross-Entropy loss, so that you penalize wrong predictions based on the number/ratio of positive examples.
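As a hedged sketch of that setup: the 224x224 input size, average pooling and fully frozen VGG-19 base below are my assumptions (the question retrains the last few layers instead), and the PR-AUC metric is likewise my choice rather than part of this answer.
# Sketch only: independent sigmoid per attribute + binary cross-entropy.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

base = applications.VGG19(include_top=False, pooling='avg',
                          input_shape=(224, 224, 3))
base.trainable = False                                        # train only the new head here
out = layers.Dense(1000, activation='sigmoid')(base.output)   # 1000 attribute labels
model = models.Model(base.input, out)
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.AUC(curve='PR', name='pr_auc')])  # metric choice is my assumption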
Actually, you should use tf.nn.weighted_cross_entropy_with_logits.
It is not only suited to multi-label classification, it also has a pos_weight argument that lets you pay as much attention to the positive classes as you would expect.
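A minimal sketch of how that could be wired into Keras: the pos_weight of 99 is only an assumption derived from the ~99% zeros mentioned in the question, and this loss expects raw logits, so the final Dense layer would need no sigmoid activation.
import tensorflow as tf

POS_WEIGHT = 99.0   # assumption: roughly 99 negatives per positive

def weighted_bce_from_logits(y_true, y_pred_logits):
    # y_pred_logits must come from a final Dense layer with no sigmoid.
    loss = tf.nn.weighted_cross_entropy_with_logits(
        labels=tf.cast(y_true, tf.float32),
        logits=y_pred_logits,
        pos_weight=POS_WEIGHT)
    return tf.reduce_mean(loss)

# model.compile(optimizer='adam', loss=weighted_bce_from_logits,
#               metrics=[tf.keras.metrics.AUC(curve='PR')])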
Multi-class and binary-class classification determine the number of output units, i.e. the number of neurons in the final layer.
Multi-label vs. single-label determines which activation function for the final layer and which loss function you should use.
For single-label, the standard choice is Softmax with categorical cross-entropy; for multi-label, switch to Sigmoid activations with binary cross-entropy.
Categorical Cross-Entropy:
Binary Cross-Entropy:
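(The two formulas were shown as images in the original answer and are missing here; written out in the standard form that matches the symbol definitions below, they are:)
Categorical cross-entropy, loss for one example i: L_i = -\sum_{c=1}^{C} y_{ic} \log(\hat{y}_{ic})
Binary cross-entropy, loss for one example i: L_i = -\sum_{c=1}^{C} \left[ y_{ic} \log(\hat{y}_{ic}) + (1 - y_{ic}) \log(1 - \hat{y}_{ic}) \right]
Cost over the mini-batch, in both cases: J = \frac{1}{m} \sum_{i=1}^{m} L_i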
C is the number of classes, and m is the number of examples in the current mini-batch. L is the loss function and J is the cost function. You can also see here.
In the loss function, you are iterating over different classes. In the cost function, you are iterating over the examples in the current mini-batch.
You can refer to this GitHub repository. It has binary, multi-class and multi-label losses, and also options to force the model to learn values close to 0 and 1, or to simply learn probabilities.
https://github.com/monkeyDemon/AI-Toolbox/blob/master/computer_vision/image_classification_keras/loss_function/focal_loss.py
I have been in a similar situation to yours.
You can use a softmax activation function in the output layer with categorical_crossentropy. To check other metrics such as precision, recall and F1 score, you can use the sklearn library as follows:
from sklearn.metrics import classification_report
y_pred = model.predict(x_test, batch_size=64, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)
print(classification_report(y_test, y_pred_bool))
As for the training stage, as far as I know, there is the accuracy metric, as follows:
model.compile(loss='categorical_crossentropy'
, metrics=['acc'], optimizer='adam')
If it helps, you can plot the training history for the loss and accuracy of your training stage using matplotlib as follows:
hist = model.fit(x_train, y_train, batch_size=24, epochs=1000, verbose=2,
callbacks=[checkpoint],
validation_data=(x_valid, y_valid)
)
# Plot training & validation accuracy values
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

Keras autoencoder : validation loss > training loss - but performing well on testing dataset

IN SHORT:
I have trained an Autoencoder whose validation loss is always higher than its training loss (see attached figure). I would think that this is a signal of overfitting. However, my Autoencoder performs well on the testing dataset. I was wondering if:
1) with reference to the architecture of the network, provided below, anyone could provide insights on how to reduce the validation loss (and how it is possible that the validation loss is much higher than the training one, despite the performance of the Autoencoder being good on the testing dataset);
2) if it is actually a problem that there is this gap between training and validation loss (when the performance on the testing dataset is actually good).
DETAILS:
I coded up my deep Autoencoder in Keras (code below). The architecture is 2001 (input layer) - 1000 - 500 - 200 - 50 - 200 - 500 - 1000 - 2001 (output layer). My samples are 1d functions of time. Each of them has 2001 time components. I have 2000 samples, which I split into 1500 for training and 500 for testing. Out of the 1500 training samples, 20% of them (i.e. 300) are used as the validation set. I normalize the training set by removing the mean and dividing by the standard deviation. I use the mean and standard deviation of the training dataset to normalize the testing dataset as well.
I train the Autoencoder using Adamax optimizer and mean squared error as loss function.
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers
import numpy as np
import matplotlib.pyplot as plt
import copy
# data
data = # read my input samples. They are 1d functions of time and I have 2000 of them.
# Each function has 2001 time components
# shuffling data before training
import random
random.seed(4)
random.shuffle(data)
# split training (1500 samples) and testing (500 samples) dataset
X_train = data[:1500]
X_test = data[1500:]
# normalize training and testing set using mean and std deviation of training set
X_mean = X_train.mean()
X_train -= X_mean
X_std = X_train.std()
X_train /= X_std
X_test -= X_mean
X_test /= X_std
### MODEL ###
# Architecture
# input layer
input_shape = [X_train.shape[1]]
X_input = Input(input_shape)
# hidden layers
x = Dense(1000, activation='tanh', name='enc0')(X_input)
encoded = Dense(500, activation='tanh', name='enc1')(x)
encoded_2 = Dense(200, activation='tanh', name='enc2')(encoded)
encoded_3 = Dense(50, activation='tanh', name='enc3')(encoded_2)
decoded_2 = Dense(200, activation='tanh', name='dec2')(encoded_3)
decoded_1 = Dense(500, activation='tanh', name='dec1')(decoded_2)
x2 = Dense(1000, activation='tanh', name='dec0')(decoded_1)
# output layer
decoded = Dense(input_shape[0], name='out')(x2)
# the Model
model = Model(inputs=X_input, outputs=decoded, name='autoencoder')
# optimizer
opt = optimizers.Adamax()
model.compile(optimizer=opt, loss='mse', metrics=['acc'])
print(model.summary())
###################
### TRAINING ###
epochs = 1000
# train the model
history = model.fit(x = X_train, y = X_train,
epochs=epochs,
batch_size=100,
validation_split=0.2) # using 20% of training samples for validation
# Testing
prediction = model.predict(X_test)
for i in range(len(prediction)):
    prediction[i] = np.multiply(prediction[i], X_std) + X_mean
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(epochs)
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
plt.close()
2) if it is actually a problem that there is this gap between training and validation loss (when the performance on the testing dataset is actually good).
This is just the generalization gap, i.e. the expected gap in the performance between the training and validation sets; quoting from a recent blog post by Google AI:
An important concept for understanding generalization is the generalization gap, i.e., the difference between a model’s performance on training data and its performance on unseen data drawn from the same distribution.
I would think that this is a signal of overfitting. However, my Autoencoder performs well on the testing dataset.
It is not, but the reason is not exactly what you think (let alone the fact that "well" is a highly subjective term).
The telltale signature of overfitting is when your validation loss starts increasing while your training loss continues decreasing, i.e. the two curves diverge (as in the classic plot adapted from the Wikipedia entry on overfitting).
Your graph does not show such behavior; also, notice the gap (pun intended) between the curves in that Wikipedia plot.
how it is possible that the validation loss is much higher than the training one, despite the performance of the Autoencoder being good on the testing dataset
There is absolutely no contradiction here; notice that your training loss is almost zero, which is not necessarily surprising in itself, but it would certainly be surprising if the validation loss were anywhere close to zero. And, again, "good" is a highly subjective term.
In other words, nothing in the info you have provided shows that there is something wrong with your model...
