How to evaluate model performance of keras tuner.search? - python

I'm currently trying to visualise the performance of my prediction model by showing the val_mse in every epoch. The code that used to work for model.fit() doesn't work for tuner.search(). Can anyone provide me with some guidance on this? Thank you.
Previous code:
import pandas as pd
import matplotlib.pyplot as plt

def plot_model(history):
    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch

    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Absolute Error')
    plt.plot(hist['epoch'], hist['mae'], label='Train Error')
    plt.plot(hist['epoch'], hist['val_mae'], label='Val Error')
    plt.legend()
    plt.ylim([0, 20])

    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Square Error')
    plt.plot(hist['epoch'], hist['mse'], label='Train Error')
    plt.plot(hist['epoch'], hist['val_mse'], label='Val Error')
    plt.legend()
    plt.ylim([0, 400])

plot_model(history)
Keras Tuner code:
history = tuner.search(x=normed_train_data,
                       y=y_train,
                       epochs=200,
                       batch_size=64,
                       validation_data=(normed_test_data, y_test),
                       callbacks=[early_stopping])

Before using tuner.search() to search for the best model, you need to install and import keras_tuner:
!pip install keras-tuner --upgrade
import keras_tuner as kt
from tensorflow import keras
Then, define the hyperparameter (hp) in the model definition, for instance as below:
def build_model(hp):
    model = keras.Sequential()
    model.add(keras.layers.Dense(
        hp.Choice('units', [8, 16, 32]),  # define the hyperparameter
        activation='relu'))
    model.add(keras.layers.Dense(1, activation='relu'))
    model.compile(loss='mse')
    return model
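The same pattern extends to other hyperparameters. As a variation (the value ranges below are illustrative assumptions, not part of this answer), you could also tune the learning rate and add the mae/mse metrics that the question's plot_model() expects:
def build_model(hp):
    model = keras.Sequential()
    model.add(keras.layers.Dense(hp.Choice('units', [8, 16, 32]), activation='relu'))
    model.add(keras.layers.Dense(1))
    # hp.Float samples the learning rate on a log scale; the bounds are assumptions
    lr = hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss='mse',
                  metrics=['mae', 'mse'])
    return model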
Initialize the tuner:
tuner = kt.RandomSearch(build_model,objective='val_loss',max_trials=5)
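The search call below reuses the early_stopping callback from the question. It is never defined in these snippets, so here is an assumed, illustrative definition (the monitored metric and patience are placeholders, not taken from the question):
from tensorflow.keras.callbacks import EarlyStopping

# Assumed definition; monitor/patience values are illustrative.
early_stopping = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)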
Now, start the search and get the best model by using tuner.search():
tuner.search(x=normed_train_data,
             y=y_train,
             epochs=200,
             batch_size=64,
             validation_data=(normed_test_data, y_test),
             callbacks=[early_stopping])
best_model = tuner.get_best_models()[0]
You can now train and evaluate best_model on your dataset and should see a significant decrease in loss.
Please check this link as a reference for more detail.
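Regarding the plotting part of the question: tuner.search() itself does not return a History object, which is why history = tuner.search(...) breaks the old plot_model(history) workflow. One way around this (a sketch reusing the names from the question, and assuming build_model compiles with metrics=['mae', 'mse'] so those keys exist) is to rebuild a model from the best hyperparameters and fit it once more:
# Rebuild a fresh model from the best hyperparameters, then train it normally
# so that model.fit() returns a History object that plot_model() can use.
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
model = tuner.hypermodel.build(best_hps)
history = model.fit(x=normed_train_data,
                    y=y_train,
                    epochs=200,
                    batch_size=64,
                    validation_data=(normed_test_data, y_test),
                    callbacks=[early_stopping])
plot_model(history)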

Related

How to incorporate cross-validation into the training process in spektral?

I am testing a sample graph neural network using spektral as follows:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.metrics import categorical_accuracy
from tensorflow.keras.optimizers import Adam
from spektral.data import DisjointLoader
from spektral.datasets import TUDataset
from spektral.models import GeneralGNN

np.random.seed(0)
batch_size = 16
learning_rate = 0.0001
epochs = 100

data = TUDataset("PROTEINS")
np.random.shuffle(data)
split = int(0.8 * len(data))
data_tr, data_te = data[:split], data[split:]
loader_tr = DisjointLoader(data_tr, batch_size=batch_size, epochs=epochs)
loader_te = DisjointLoader(data_te, batch_size=batch_size)

model = GeneralGNN(data.n_labels, activation="softmax")
optimizer = Adam(learning_rate)
loss_fn = CategoricalCrossentropy()
model.compile(loss=loss_fn,
              optimizer=optimizer,
              metrics=categorical_accuracy)

history = model.fit(loader_tr.load(),
                    steps_per_epoch=loader_tr.steps_per_epoch,
                    epochs=epochs)

plt.plot(history.history['loss'])
plt.plot(history.history['categorical_accuracy'])
plt.xlabel('epoch')
plt.legend(["Loss", "Categorical Accuracy"])
plt.show()
How can one incorporate cross validation into the training process above?
If I just do
history = model.fit(loader_tr.load(),
                    steps_per_epoch=loader_tr.steps_per_epoch,
                    epochs=epochs,
                    validation_data=loader_te.load())
the training halts right after the first epoch. I guess this happens because steps_per_epoch is not set for the validation data's loader, but I have no clue how to do that.
Does anyone have any experience in such a situation?
Instead of steps_per_epoch, the keyword for validation data is validation_steps:
history = model.fit(loader_tr.load(),
                    steps_per_epoch=loader_tr.steps_per_epoch,
                    epochs=epochs,
                    validation_data=loader_te.load(),
                    validation_steps=loader_te.steps_per_epoch)
This will work as intended. Otherwise, the validation loop will run until the loader is exhausted (which, by default, is never).
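If you want actual k-fold cross-validation rather than a single hold-out split, here is a minimal sketch (assuming a spektral dataset can be indexed with NumPy index arrays, as in the official examples, and reusing the names defined in the question) that rebuilds the loaders and the model for every fold:
from sklearn.model_selection import KFold

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
val_scores = []
for train_idx, val_idx in kfold.split(np.arange(len(data))):
    loader_tr = DisjointLoader(data[train_idx], batch_size=batch_size, epochs=epochs)
    loader_va = DisjointLoader(data[val_idx], batch_size=batch_size)
    model = GeneralGNN(data.n_labels, activation="softmax")  # fresh model per fold
    model.compile(loss=loss_fn, optimizer=Adam(learning_rate),
                  metrics=[categorical_accuracy])
    history = model.fit(loader_tr.load(),
                        steps_per_epoch=loader_tr.steps_per_epoch,
                        epochs=epochs,
                        validation_data=loader_va.load(),
                        validation_steps=loader_va.steps_per_epoch)
    # the history key name follows the metric's name
    val_scores.append(history.history["val_categorical_accuracy"][-1])
print("mean validation accuracy:", np.mean(val_scores))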
Cheers

Why do I get different predictions using Keras sequential neural network in a loop?

I came across a weird difference between the keras model.fit() and sklearn model.fit() functions. When model.fit() is called inside a loop, I get inconsistent predictions using a Keras sequential model. This is not the case with an sklearn model. See the sample code below to reproduce the phenomenon.
from numpy.random import seed
seed(1337)
import tensorflow as tf
tf.random.set_seed(1337)

from sklearn.linear_model import LogisticRegression
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import InputLayer
from sklearn.datasets import make_blobs
from sklearn.preprocessing import MinMaxScaler
import numpy as np

def get_sequential_dnn(NUM_COLS, NUM_ROWS):
    # code for model
    ...

if __name__ == "__main__":
    input_size = 10
    X, y = make_blobs(n_samples=100, centers=2, n_features=input_size,
                      random_state=1)
    scalar = MinMaxScaler()
    scalar.fit(X)
    X = scalar.transform(X)

    model = get_sequential_dnn(X.shape[1], X.shape[0])
    # print(model.summary())
    # model = LogisticRegression()

    for i in range(2):
        model.fit(X, y, epochs=100, verbose=0, shuffle=False)
        # model.fit(X, y)
        Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=10, random_state=1)
        Xnew = scalar.transform(Xnew)
        # make a prediction
        # ynew = model.predict_proba(Xnew)[:, 1]
        ynew = model.predict_proba(Xnew)
        ynew = np.array(ynew)
        # show the inputs and predicted outputs
        print('--------------')
        for i in range(len(Xnew)):
            print("X=%s \n Predicted=%s" % (Xnew[i], ynew[i]))
The output of this is
--------------
X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653
0.77125788 0.73345369 0.2153754 0.35317172]
Predicted=[0.9931685]
X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879
0.12891829 0.25729677 0.69975833 0.73165292]
Predicted=[0.35249507]
X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289
0.07761879 0.45189967 0.8481064 0.85092378]
Predicted=[0.35249507]
--------------
X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653
0.77125788 0.73345369 0.2153754 0.35317172]
Predicted=[1.]
X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879
0.12891829 0.25729677 0.69975833 0.73165292]
Predicted=[0.17942095]
X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289
0.07761879 0.45189967 0.8481064 0.85092378]
Predicted=[0.17942095]
Whereas if I use Logistic Regression (uncomment the commented lines), the predictions are consistent:
--------------
X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653
0.77125788 0.73345369 0.2153754 0.35317172]
Predicted=0.929209043999009
X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879
0.12891829 0.25729677 0.69975833 0.73165292]
Predicted=0.04643513037543502
X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289
0.07761879 0.45189967 0.8481064 0.85092378]
Predicted=0.038716408758471876
--------------
X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653
0.77125788 0.73345369 0.2153754 0.35317172]
Predicted=0.929209043999009
X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879
0.12891829 0.25729677 0.69975833 0.73165292]
Predicted=0.04643513037543502
X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289
0.07761879 0.45189967 0.8481064 0.85092378]
Predicted=0.038716408758471876
I get that the obvious solution is to fit the model before the loop, and there is probably strong randomness in how Keras models fit the data to the labels, but there are cases where you need a loop to get prediction scores, for example when performing a 10-fold cross-validation to get AUC, sensitivity and specificity values on training data. In these situations this randomness is unacceptable.
What is causing this inconsistency and what is the solution to it?
There are a couple of issues with the way you are trying to make reproducible results with Keras.
You are calling fit (when i==1) on the already fitted model (from i==0), so the optimizer sees different sets of initial weights in the two cases and you end up with two different models. Solution: get a fresh model every time. This is not the case with sklearn, which starts with freshly initialized weights every time fit is called.
model.fit may internally use the current state of the random number generator. You seeded it outside the loop, so the state will be different when fit is called the second time. Solution: seed inside the loop.
Sample code with issue
# Issue 2 here
tf.random.set_seed(1337)

def get_model():
    model = Sequential()
    model.add(Dense(4, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model

X = np.random.randn(10, 8)
y = np.random.randn(10, 1)

# Issue 1 here
model = get_model()
results = []
for i in range(10):
    model.fit(X, y, epochs=5, verbose=0, shuffle=False)
    results.append(np.sum(model.predict(X)))
assert np.all(np.isclose(results, results[0]))
As you can see, the assert fails.
Corrected code
results = []
for i in range(10):
    tf.random.set_seed(1337)
    model = get_model()
    model.fit(X, y, epochs=5, verbose=0, shuffle=False)
    results.append(np.sum(model.predict(X)))
assert np.all(np.isclose(results, results[0]))

Trial ID of the best model

I'm using Autokeras to find the best regression model and I need to plot the learning curves of the best model. However, I cannot find the trial ID of the best model. Here is a part of my code:
#path for saving the logs
import datetime
%load_ext tensorboard
path = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=path)

# Creating Keras Search Type
regressor = ak.StructuredDataRegressor(
    output_dim=2,
    loss="mean_squared_error",
    metrics=["mae", "mean_squared_error"],
    project_name="structured_data_regressor",
    max_trials=3,
    directory=None,
    objective="val_mean_squared_error",
    tuner=None,
    overwrite=False,
    seed=100,
)

#Fitting ANN to Training Set - Validation Data Provide --> Validation Split = 0.1111
history = regressor.fit(x=X_train, y=y_train, epochs=5, callbacks=[tensorboard_callback], validation_split=0.2)

#Learning Curves
%tensorboard --logdir logs/fit
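As a hedged sketch of one way to look this up: recent AutoKeras versions expose the underlying Keras Tuner instance as regressor.tuner after fit() (an assumption here), so the best trial's ID can be read from the tuner's oracle:
# Query the Keras Tuner oracle behind the AutoKeras regressor (attribute access assumed).
best_trial = regressor.tuner.oracle.get_best_trials(num_trials=1)[0]
print(best_trial.trial_id)  # ID of the best trial
print(best_trial.score)     # its objective value (val_mean_squared_error here)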

Why do I get lagged results from my LSTM model?

I am new to machine learning and I am performing a multivariate time series forecast using LSTMs in Keras. I have a monthly time series dataset with 4 input variables (temperature, precipitation, dew and wind speed) and 1 output variable (pollution). Using this data I framed a forecasting problem where, given the weather conditions and pollution for prior months, I forecast the pollution for the next month. Below is my code:
X = df[['Temperature', 'Precipitation', 'Dew', 'Wind_speed' ,'Pollution (t_1)']].values
y = df['Pollution (t)'].values
y = y.reshape(-1,1)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(X)
#dataset has 359 samples in total
train_X, train_y = X[:278], y[:278]
test_X, test_y = X[278:], y[278:]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
model = Sequential()
model.add(LSTM(100, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dropout(0.2))
# model.add(LSTM(70))
# model.add(Dropout(0.3))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(train_X, train_y, epochs=700, batch_size=70, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
To do predictions i use the following code
from sklearn.metrics import mean_squared_error,r2_score
yhat = model.predict(test_X)
mse = mean_squared_error(test_y, yhat)
rmse = np.sqrt(mse)
r2 = r2_score(test_y, yhat)
print("test set performance")
print("--------------------")
print("MSE:",mse)
print("RMSE:",rmse)
print("R^2: ",r2)
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(range(len(test_y)), test_y, '-b',label='Actual')
ax.plot(range(len(yhat)), yhat, 'r', label='Predicted')
plt.legend()
plt.show()
Running this code, I ran into the following issues:
For some reason I am getting a lagged result for my test set, which is not present in my training data, as shown in the image below. I do not understand why I have these lagged results (does it have something to do with including 'Pollution (t_1)' as one of my inputs)?
Graph Results:
By adding "Pollution (t_1)", which is the pollution variable shifted by 1 lag, as one of my inputs, this variable now seems to dominate the prediction: removing the other variables seems to have no influence on my results (r-squared and RMSE), which is strange since all these variables do assist in pollution prediction.
Is there something I am doing wrong in my code that is the reason for these issues? I am new to Python, so any help answering the above 2 questions will be greatly appreciated.
First of all, I think it is not appropriate to use '1' as the timesteps value, because an LSTM model is meant for time series or sequence data.
I think the following data-preparation script will work well:
def lstm_data(df, timestamps):
    array_data = df.values
    sc = MinMaxScaler()
    array_data_ = sc.fit_transform(array_data)
    array = np.empty((0, array_data_.shape[1]))
    range_ = array_data_.shape[0] - (timestamps - 1)
    for t in range(range_):
        array_data_p = array_data_[t:t + timestamps, :]  # window of `timestamps` rows
        array = np.vstack((array, array_data_p))
    array_ = array.reshape(-1, timestamps, array.shape[1])
    return array_

# timestamps depends on your objective, but not '1'
x_data = lstm_data(x, timestamps=4)
y_data = lstm_data(y, timestamps=4)
y_data = y_data.reshape(-1, 1)

# Divide each data set into train and test
# Feed the divided data into your LSTM model
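As a small follow-up sketch (not part of the snippet above; shapes assumed from lstm_data with timestamps=4), the model definition from the question would then take the longer window as its input shape:
# x_data now has shape (samples, 4, n_features), so the LSTM sees a real
# 4-month sequence per sample instead of a single timestep.
model = Sequential()
model.add(LSTM(100, input_shape=(x_data.shape[1], x_data.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')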

Plotting learning curve in keras gives KeyError: 'val_acc'

I was trying to plot the train and test learning curves in Keras; however, the following code produces KeyError: 'val_acc'.
The official documentation <https://keras.io/callbacks/> states that in order to use 'val_acc' I need to enable validation and accuracy monitoring, which I don't understand and don't know how to use in my code.
Any help would be much appreciated.
Thanks.
seed = 7
np.random.seed(seed)

dataframe = pandas.read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:, 0:4].astype(float)
Y = dataset[:, 4]

encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
dummy_y = np_utils.to_categorical(encoded_Y)

kfold = StratifiedKFold(y=Y, n_folds=10, shuffle=True, random_state=seed)
cvscores = []
for i, (train, test) in enumerate(kfold):
    model = Sequential()
    model.add(Dense(12, input_dim=4, init='uniform', activation='relu'))
    model.add(Dense(3, init='uniform', activation='sigmoid'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    history = model.fit(X[train], dummy_y[train], nb_epoch=200, batch_size=5, verbose=0)
    scores = model.evaluate(X[test], dummy_y[test], verbose=0)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))
    cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))

print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
Looks like in Keras + TensorFlow 2.0, val_acc was renamed to val_accuracy:
history_dict = history.history
print(history_dict.keys())
If you print the keys of history_dict, you will get something like dict_keys(['loss', 'acc', 'val_loss', 'val_acc']).
Then edit the code like this:
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
(Screenshot of the history keys and the resulting error.)
You may need to enable a validation split of your training set. Usually, validation uses about a third of the training set. In your code, make the change given below:
history=model.fit(X[train], dummy_y[train],validation_split=0.33,nb_epoch=200, batch_size=5, verbose=0)
It works!
The main point everyone misses is that this KeyError is related to how the metrics are named in model.compile(...). You need to be consistent with the way you name your accuracy metric inside model.compile(..., metrics=['<metric name>']). Your history callback object will receive a dictionary containing key-value pairs as defined in metrics.
So, if your metric is metrics=['acc'], you access it in the history object with history.history['acc'], but if you define the metric as metrics=['accuracy'], you need history.history['accuracy'] to access the value and avoid the KeyError. I hope this helps.
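For example, a minimal sketch of that consistency (TensorFlow 2 key names, reusing the question's variables):
# The keys in history.history mirror the strings passed to metrics=[...]
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X[train], dummy_y[train], validation_split=0.33,
                    epochs=200, batch_size=5, verbose=0)
print(history.history.keys())  # dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])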
N.B. Here's a link to the metrics you can use in Keras.
If you upgrade Keras from an older version (e.g. 2.2.5) to 2.3.0 (or newer), which is compatible with TensorFlow 2.0, you might get such an error (e.g. KeyError: 'acc'). Both acc and val_acc have been renamed to accuracy and val_accuracy respectively. Renaming them in the script will solve the issue.
To get any val_* data (val_acc, val_loss, ...), you first need to set up validation.
First method (validates on the data you give it):
model.fit(validation_data=(X_test, Y_test))
Second method (validates on a part of the training data):
model.fit(validation_split=0.5)
I changed acc to accuracy and my problem was solved (TensorFlow 2+), e.g.
accuracy = history_dict['accuracy']
val_accuracy = history_dict['val_accuracy']
This error also happens when you specify validation_data=(X_test, Y_test) and your X_test and/or Y_test are empty. To check this, print the shape of X_test and Y_test. In that case, the model.fit(validation_data=(X_test, Y_test), ...) call runs, but because the validation set is empty it does not create a val_loss key in the history.history dictionary.
What worked for me was changing objective='val_accuracy' to objective=["val_accuracy"] in
tuner = kt.BayesianOptimization(model_builder,
                                objective=["val_accuracy"],
                                max_trials=80,
                                seed=123)
tuner.search(X_train, y_train, epochs=50, validation_split=0.2)
I have TensorFlow 2+.
