I'm using KerasClassifier (the scikit-learn wrapper for Keras models) to wrap my Keras model so I can perform K-fold cross-validation.
model = KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=1)
kfold = KFold(n_splits=10)
scoring = ['accuracy', 'precision', 'recall', 'f1']
results = cross_validate(estimator=model,
                         X=x_train,
                         y=y_train,
                         cv=kfold,
                         scoring=scoring,
                         return_train_score=True,
                         return_estimator=True)
Then I choose the best model among the 10 estimators returned by the function, according to the metrics:
best_model = results['estimator'][2]  # for example, the third estimator
Now I want to run predict on x_test and get the accuracy and loss metrics. How can I do it? I tried model.evaluate(x_test, y_test), but the model is a KerasClassifier, so I get an error.
The point is that your KerasClassifier instance mimics standard scikit-learn classifiers. In other words, it behaves like a scikit-learn estimator and, as such, does not provide an .evaluate() method.
Therefore, you can simply call best_model.score(X_test, y_test), which returns the accuracy, just as standard sklearn classifiers do. Additionally, you can access the loss values recorded during training via the history_ attribute of your KerasClassifier instance.
Here's an example:
!pip install scikeras
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_validate, KFold
import tensorflow as tf
import tensorflow.keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from scikeras.wrappers import KerasClassifier
X, y = make_classification(n_samples=100, n_features=20, n_informative=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
def build_nn():
    ann = Sequential()
    ann.add(Dense(20, input_dim=X_train.shape[1], activation='relu', name="Hidden_Layer_1"))
    ann.add(Dense(1, activation='sigmoid', name='Output_Layer'))
    ann.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return ann
keras_clf = KerasClassifier(model=build_nn, optimizer="adam", optimizer__learning_rate=0.001, epochs=100, verbose=0)
kfold = KFold(n_splits=10)
scoring = ['accuracy', 'precision', 'recall', 'f1']
results = cross_validate(estimator=keras_clf, X=X_train, y=y_train, scoring=scoring, cv=kfold, return_train_score=True, return_estimator=True)
best_model = results['estimator'][2]
# accuracy
best_model.score(X_test, y_test)
# loss values
best_model.history_['loss']
Finally, note that, when in doubt, you can call dir(object) to get the list of all properties and methods of the specified object (dir(best_model) in your case).
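If you also need the loss on the test set (not just the training losses stored in history_), a minimal sketch, assuming a binary cross-entropy model like the one above, is to compute it on the scikit-learn side from the predicted probabilities:
from sklearn.metrics import accuracy_score, log_loss

# test accuracy via the scikit-learn API
y_pred = best_model.predict(X_test)
test_acc = accuracy_score(y_test, y_pred)

# test loss (binary cross-entropy) computed from the predicted probabilities
y_proba = best_model.predict_proba(X_test)
test_loss = log_loss(y_test, y_proba)

print(f"test accuracy: {test_acc:.3f}, test loss: {test_loss:.3f}")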
Check the following code:
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten
from sklearn.model_selection import train_test_split
# Data
X = np.random.rand(1000, 100, 1)
y = np.random.randint(0, 2, (1000, 1))
# Splitting into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Conv1D
model = Sequential()
model.add(Conv1D(32, kernel_size=3, activation='relu', input_shape=(100, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
# Predict before fitting the model
cnn_features_train = model.predict(X_train)
cnn_features_test = model.predict(X_test)
Why does this run without throwing an error? The weights have not yet been established by the .fit method, so how can it predict anything?
If I try to do the same thing (predict before fitting the model) using sklearn, I get the expected error, for example:
from sklearn.ensemble import RandomForestClassifier
# Data
X = np.random.rand(1000, 100, 1)
y = np.random.randint(0, 2, (1000, 1))
# Splitting into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Random Forest
rf = RandomForestClassifier()
rf.predict(X_test)
The error:
sklearn.exceptions.NotFittedError: This RandomForestClassifier instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
Keras is different from sklearn here. A Keras model is built with randomly initialized weights as soon as its input shape is known, so .predict() without .fit() simply runs a forward pass through those untrained weights instead of raising an error. Being able to call .predict() before training also helps users prepare and debug the correct tensor shapes.
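To see this concretely, here is a small sketch reusing the model and rf objects from the snippets above (treat it as an illustration): the untrained Keras model already has weights, while scikit-learn estimators explicitly check their fitted state.
# the untrained Keras model already has randomly initialized weights,
# so predict() simply runs a forward pass with them
print(len(model.get_weights()))          # kernel/bias arrays already exist
print(model.predict(X_test[:3]).shape)   # output shape for 3 samples

# scikit-learn estimators, by contrast, explicitly check the fitted state
from sklearn.utils.validation import check_is_fitted
from sklearn.exceptions import NotFittedError

try:
    check_is_fitted(rf)  # rf was never fitted
except NotFittedError as e:
    print(e)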
While assessing a model trained for the regression problem below, I have some confusion about plotting the resulting history. In particular, when I don't specify any metrics,
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.fit_transform(X_valid)
X_test = scaler.fit_transform(X_test)
model = tf.keras.Sequential([
tf.keras.layers.Dense(30, tf.keras.activations.relu, input_shape=X_train.shape[1:]),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.SGD())
history = model.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
pd.DataFrame(history.history).plot()
plt.grid(True)
plt.show()
the final plot includes loss and val_loss graphs as expected.
But once I add a metric to my model, say, tf.keras.metrics.MeanSquaredError(), the resulting plot generated by
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.fit_transform(X_valid)
X_test = scaler.fit_transform(X_test)
model = tf.keras.Sequential([
tf.keras.layers.Dense(30, tf.keras.activations.relu, input_shape=X_train.shape[1:]),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.mean_squared_error,
optimizer=tf.keras.optimizers.SGD(),
metrics=[tf.keras.metrics.MeanSquaredError()])
history = model.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
pd.DataFrame(history.history).plot()
plt.grid(True)
plt.show()
lacks the loss and val_loss curves.
What's the problem here?
Edit:
Here is the content of history.history:
{'loss': [0.880902886390686, 0.6208109855651855, 0.5102624297142029, 0.47074252367019653, 0.4556053578853607, 0.4464321732521057, 0.44210636615753174, 0.43378400802612305, 0.42544370889663696, 0.428415447473526], 'mean_squared_error': [0.880902886390686, 0.6208109855651855, 0.5102624297142029, 0.47074252367019653, 0.4556053578853607, 0.4464321732521057, 0.44210636615753174, 0.43378400802612305, 0.42544370889663696, 0.428415447473526], 'val_loss': [0.6332216262817383, 0.514700710773468, 0.4509757459163666, 0.46695834398269653, 0.5228265523910522, 0.6748611330986023, 0.6648175716400146, 0.7329052090644836, 0.8352308869361877, 1.081600546836853], 'val_mean_squared_error': [0.6332216262817383, 0.514700710773468, 0.4509757459163666, 0.46695834398269653, 0.5228265523910522, 0.6748611330986023, 0.6648175716400146, 0.7329052090644836, 0.8352308869361877, 1.081600546836853]}
Your loss is the mean squared error and your metric is also the mean squared error, which is exactly the same quantity. That means the curves overlap when you plot them: loss and val_loss are still there, just hidden under mean_squared_error and val_mean_squared_error.
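You can verify this from the history you posted and restrict the plot to the loss columns; a quick sketch:
import pandas as pd
import matplotlib.pyplot as plt

hist = pd.DataFrame(history.history)

# the metric columns duplicate the loss columns (identical values here)
print((hist['loss'] == hist['mean_squared_error']).all())          # True
print((hist['val_loss'] == hist['val_mean_squared_error']).all())  # True

# plot only the loss columns to get the original picture back
hist[['loss', 'val_loss']].plot()
plt.grid(True)
plt.show()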
I am training a neural network with a simple dataset. I have tried different combinations of parameters, optimizers, learning rates ... but even after 20 epochs the network is still not learning anything.
I wonder where the problem lies in the following code?
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Input, Dense, Flatten
from tensorflow import keras
from livelossplot import PlotLossesKeras
from keras.models import Model
from sklearn.datasets import make_classification
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
seed = 42
X, y = make_classification(n_samples=100000, n_features=2, n_redundant=0,
                           n_informative=2, random_state=seed)
print(f"Number of features: {X.shape[1]}")
print(f"Number of samples: {X.shape[0]}")
df = pd.DataFrame(np.concatenate((X,y.reshape(-1,1)), axis=1))
df.set_axis([*df.columns[:-1], 'Class'], axis=1, inplace=True)
df['Class'] = df['Class'].astype('int')
X = df.drop('Class', axis=1)
y = df['Class']
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
print(f"Train set: {X_train.shape}")
print(f"Validation set: {X_val.shape}")
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
X_val_scaled = scaler.transform(X_val.astype(np.float64))
inputs = Input(shape=X_train_scaled.shape[1:])
h0 = Dense(5, activation='relu')(inputs)
h1 = Dense(5, activation='relu')(h0)
preds = Dense(1, activation = 'sigmoid')(h1)
model = Model(inputs=inputs, outputs=preds)
opt = keras.optimizers.Adam(lr=0.0001)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train_scaled, y_train, batch_size=128, epochs=20, verbose=0,
validation_data=(X_val_scaled, y_val),
callbacks=[PlotLossesKeras()])
score_train = model.evaluate(X_train_scaled, y_train, verbose=0)
score_test = model.evaluate(X_val_scaled, y_val, verbose=0)
print('Train score:', score_train[0])
print('Train accuracy:', score_train[1])
print('Test score:', score_test[0])
print('Test accuracy:', score_test[1])
The code produces the following kind of output
You have used the wrong loss function; change this line
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
to, for example,
model.compile(optimizer=opt, loss='mse', metrics=['accuracy'])
Categorical cross-entropy needs a one-hot encoded y, which means you have to have a 0 or a 1 for every class. MSE is just the mean squared error, so it will work here, but since your model has a single sigmoid output and binary labels, binary_crossentropy is the standard choice. You might try some other losses as well.
your y:
[1,0,1]
one-hot encoded y:
[[0,1], [1,0], [0,1]]
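To make this concrete, here is a sketch of two possible fixes for the compile step, reusing names from the question's code; treat it as an illustration rather than the only correct setup:
# Option 1: keep the single sigmoid output and use binary cross-entropy
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])

# Option 2: one-hot encode y and switch to a 2-unit softmax output
# with categorical cross-entropy (then fit on y_train_oh / y_val_oh)
from tensorflow.keras.utils import to_categorical
y_train_oh = to_categorical(y_train)  # shape (n_samples, 2)
y_val_oh = to_categorical(y_val)

preds = Dense(2, activation='softmax')(h1)
model = Model(inputs=inputs, outputs=preds)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])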
So my main goal is to use data from 2018 and try to predict data for 2019. I'm using a GRU model and I have the following code. I have a few issues: I'm not sure if the code is actually correct or if I am missing something, and also, for model.fit, should I use validation_split=0.1 or validation_data=(X_test, y_test), since I'm using a different dataframe for testing?
Regarding the accuracy, it is very small, doesn't make any sense, and I have no idea why.
import pandas as pd
import tensorflow as tf
from keras.layers.core import Dense
from keras.layers.recurrent import GRU
from keras.models import Sequential
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorboardcolab import TensorBoardColab, TensorBoardColabCallback
df = pd.read_csv('IF 10 PERCENT.csv',index_col=None)
#Loading Second Dataframe
df2 = pd.read_csv('2019 10minutes IF 10 PERCENT.csv',index_col=None)
tbc=TensorBoardColab() # Tensorboard
X_train= df[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
X_train=X_train.values
y_train= df['Power_kW']
y_train=y_train.values
X_test= df2[['WindSpeed_mps','AmbTemp_DegC','RotorSpeed_rpm','RotorSpeedAve','NacelleOrientation_Deg','MeasuredYawError','Pitch_Deg','WindSpeed1','WindSpeed2','WindSpeed3','GeneratorTemperature_DegC','GearBoxTemperature_DegC']]
X_test=X_test.values
y_test= df2['Power_kW']
y_test=y_test.values
# conversion to numpy array
# scaling values for model
x_scale = MinMaxScaler()
y_scale = MinMaxScaler()
X_train= x_scale.fit_transform(X_train)
y_train= y_scale.fit_transform(y_train.reshape(-1,1))
X_test=x_scale.fit_transform(X_test)
y_test=y_scale.fit_transform(y_test.reshape(-1,1))
X_train = X_train.reshape((-1,1,12))
X_test = X_test.reshape((-1,1,12))
# splitting train and test
# creating model using Keras
model = Sequential()
model.add(GRU(units=512, return_sequences=True, input_shape=(1,12)))
model.add(GRU(units=256, return_sequences=True))
model.add(GRU(units=256))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss=['mse'], optimizer='adam',metrics=['accuracy'])
model.summary()
#model.fit(X_train, y_train, batch_size=250, epochs=10, validation_split=0.1, verbose=1, callbacks=[TensorBoardColabCallback(tbc)])
model.fit(X_train, y_train, batch_size=250, epochs=10, validation_data=(X_test,y_test), verbose=1, callbacks=[TensorBoardColabCallback(tbc)])
score, acc = model.evaluate(X_test, y_test)
print('Score: {}'.format(score))
print('Accuracy: {}'.format(acc))
y_predicted = model.predict(X_test)
y_predicted = y_scale.inverse_transform(y_predicted)
y_test = y_scale.inverse_transform(y_test)
plt.plot(y_predicted, label='Predicted')
plt.plot(y_test, label='Measurements')
plt.legend()
plt.show()
Thank you
It sounds to me that you are trying to solve a regression problem here. If so, it does not make sense to measure accuracy as a metric, since accuracy measures exact label matches and is only meaningful for classification. MSE (or MAE) should be a good metric for regression.
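As a sketch, the compile and evaluate steps could be rewritten with regression metrics instead of accuracy (reusing the names from the question's code):
# compile with regression metrics instead of accuracy
model.compile(loss='mse', optimizer='adam', metrics=['mae'])

model.fit(X_train, y_train, batch_size=250, epochs=10,
          validation_data=(X_test, y_test), verbose=1)

# evaluate() now returns [mse_loss, mae]
test_mse, test_mae = model.evaluate(X_test, y_test)
print('Test MSE: {}'.format(test_mse))
print('Test MAE: {}'.format(test_mae))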
I have a regression problem and I am using a Keras fully connected network to model it. I am using cross_val_score, and my question is: how can I extract the model and the history of each train/validation combination that cross_val_score performs?
Assuming this example:
from sklearn import datasets
from sklearn.model_selection import cross_val_score, KFold
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
seed = 1
diabetes = datasets.load_diabetes()
X = diabetes.data[:150]
y = diabetes.target[:150]
def baseline_model():
    model = Sequential()
    model.add(Dense(10, input_dim=10, activation='relu'))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
estimator = KerasRegressor(build_fn=baseline_model, nb_epoch=100, batch_size=100, verbose=False)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(estimator, X, y, cv=kfold)
print("Results: %.2f (%.2f) MSE" % (results.mean(), results.std()))
My understanding is that I only get the overall MSE for each fold, so to speak.
But I want to compare the train and validation MSE over the epochs of the model for each fold, i.e. for 10 folds in this case.
When not using k-fold, but a simple train/validation split, one can do:
hist = model.fit(X_tr, y_tr, validation_data=val_data,
                 epochs=100, batch_size=100,
                 verbose=1)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
This returns a plot showing the evolution of the MSE over the epochs for the training and validation datasets, allowing one to spot over- and underfitting.
How to do this for each fold when using cross validation?
You can go for a "manual" CV procedure, and plot the loss (or any other available metric you might want to use) for each fold, i.e. something like this:
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt

cv_mse = []
for train_index, val_index in kfold.split(X):
    history = estimator.fit(X[train_index], y[train_index])
    pred = estimator.predict(X[val_index])
    err = mean_squared_error(y[val_index], pred)
    cv_mse.append(err)
    plt.plot(history.history['loss'])
In this case, the cv_mse list will contain the final MSE for each fold, and you also get the corresponding plot of the training loss per epoch for each fold.
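If you also want the validation curve per epoch for each fold (as in the simple train/validation split case), one option, assuming the wrapper forwards extra fit keyword arguments to the underlying Keras model as the old keras.wrappers.scikit_learn wrapper does, is to pass validation_data inside the loop:
import matplotlib.pyplot as plt

for fold, (train_index, val_index) in enumerate(kfold.split(X)):
    # forward validation_data to the underlying Keras fit so that
    # history.history also contains 'val_loss'
    history = estimator.fit(X[train_index], y[train_index],
                            validation_data=(X[val_index], y[val_index]))
    plt.figure()
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='validation')
    plt.title('Fold {}'.format(fold))
    plt.legend()
plt.show()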