Implement k-fold cross validation with MLPClassifier in Python

I am learning how to develop a backpropagation neural network using scikit-learn. I am still confused about how to implement k-fold cross validation in my neural network. I hope you can help me out. My code is as follows:
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

# load the seeds dataset: the feature columns come first, the class label is the last column
data = np.loadtxt("seeds_dataset.txt")
X = data[:, :-1]
y = data[:, -1]

kf = KFold(n_splits=10)
# this is the part I am unsure about - train and test are not defined anywhere:
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]

clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)
# output:
# MLPClassifier(activation='relu', alpha=1e-05, batch_size='auto',
#               beta_1=0.9, beta_2=0.999, early_stopping=False,
#               epsilon=1e-08, hidden_layer_sizes=(5, 2), learning_rate='constant',
#               learning_rate_init=0.001, max_iter=200, momentum=0.9,
#               nesterovs_momentum=True, power_t=0.5, random_state=1, shuffle=True,
#               solver='lbfgs', tol=0.0001, validation_fraction=0.1, verbose=False,
#               warm_start=False)

Do not split your data into train and test. This is automatically handled by the KFold cross-validation.
from sklearn.model_selection import KFold
kf = KFold(n_splits=10)
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
for train_indices, test_indices in kf.split(X):
    clf.fit(X[train_indices], y[train_indices])
    print(clf.score(X[test_indices], y[test_indices]))
KFold partitions your dataset into n roughly equal folds. In each round, one fold is held out as the test set and the remaining folds form the training set, so every sample is used for testing exactly once. This gives you a fairly reliable estimate of your model's accuracy, since it is evaluated on several different held-out portions of the data.
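If you want a single summary figure rather than one printed score per fold, you can collect the fold scores and report their mean and standard deviation; a minimal sketch, reusing kf, clf, X and y from above:
fold_scores = []
for train_indices, test_indices in kf.split(X):
    clf.fit(X[train_indices], y[train_indices])                      # fit on the 9 training folds
    fold_scores.append(clf.score(X[test_indices], y[test_indices]))  # accuracy on the held-out fold

print("mean accuracy: %.3f (+/- %.3f)" % (np.mean(fold_scores), np.std(fold_scores)))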

In case you are looking for a built-in method to do this, you can take a look at cross_validate.
from sklearn.model_selection import cross_validate
model = MLPClassifier()
cv_results = cross_validate(model, X, y, cv=10,
                            return_train_score=False,
                            scoring='accuracy')  # scoring must be a string or a scorer, not a bound method
print("Fit scores: {}".format(cv_results['test_score']))
The thing I like about this approach is that it gives you access to the fit_time, score_time, and test_score. It also allows you to supply your choice of scoring metrics and cross-validation generator/iterable (e.g. KFold). Another good resource is Cross Validation.
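For example, here is a minimal sketch (reusing the X and y arrays loaded above) that passes an explicit KFold generator and two scoring metrics, and then reads the timing and score fields from the returned dict:
from sklearn.model_selection import KFold, cross_validate
from sklearn.neural_network import MLPClassifier

kf = KFold(n_splits=10, shuffle=True, random_state=1)
model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)

cv_results = cross_validate(model, X, y, cv=kf,
                            scoring=['accuracy', 'f1_macro'])
print(cv_results['fit_time'])        # seconds spent fitting each fold
print(cv_results['score_time'])      # seconds spent scoring each fold
print(cv_results['test_accuracy'])   # one accuracy value per fold
print(cv_results['test_f1_macro'])   # one macro-F1 value per fold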

Kudos to @COLDSPEED's answer.
If you'd like to have the predictions from n-fold cross-validation, cross_val_predict() is the way to go.
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

# Shuffle the data frame and keep the first 80% as the train + validation set
df = df.sample(frac=1).reset_index(drop=True)
train_fraction = 0.8
df_train = df[:int(len(df) * train_fraction)]

# convert the dataframe to ndarrays, since the CV split works with ndarray indices
feature = df_train.iloc[:, 0:-1].values
target = df_train.iloc[:, -1].values

solver = MLPClassifier(activation='relu', solver='adam', alpha=1e-5,
                       hidden_layer_sizes=(5, 2), random_state=1, verbose=True)
y_pred = cross_val_predict(solver, feature, target, cv=10)
Basically, the cv option indicates how many cross-validation folds you'd like to use during training. y_pred has the same size as target: each entry is the out-of-fold prediction for the corresponding sample.
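Because every sample receives exactly one out-of-fold prediction, you can score y_pred directly against target; a minimal sketch, reusing the target and y_pred arrays from above (assuming target holds class labels):
from sklearn.metrics import accuracy_score, confusion_matrix

# overall out-of-fold accuracy
print("CV accuracy:", accuracy_score(target, y_pred))
# confusion matrix aggregated over all 10 folds
print(confusion_matrix(target, y_pred))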

Related

Performance metric when using XGboost regressor with sklearn learning_curve

I've created an XGBoost regressor model and want to see how training and test performance change as the size of the training set increases.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import learning_curve

xgbm_reg = XGBRegressor()
tr_sizes, tr_scs, test_scs = learning_curve(estimator=xgbm_reg,
                                            X=ori_X, y=y,
                                            train_sizes=np.linspace(0.1, 1, 5),
                                            cv=5)
What performance metric is it using for tr_scs and test_scs?
Sklearn doc tells me that
scoring : str or callable, default=None
A str (see model evaluation documentation) or a scorer callable object / function
with signature scorer(estimator, X, y)
So I've looked at the XGBoost documentation, which says the default objective is reg:squarederror. Does this mean the results in tr_scs and test_scs are in terms of squared error?
I want to check by using cross_val_score
scoring = "neg_mean_squared_error"
cv_results = cross_val_score(xgbm_reg, ori_X, y, cv=5, scoring=scoring)
However, I'm not quite sure how to get the squared error from cross_val_score.
The XGBRegressor's built-in scorer is R-squared, and this is the default scorer used by learning_curve and cross_val_score; see the code below.
from xgboost import XGBRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import learning_curve, cross_val_score, KFold
from sklearn.metrics import r2_score
# generate the data
X, y = make_regression(n_features=10, random_state=100)
# generate 5 CV splits
kf = KFold(n_splits=5, shuffle=False)
# calculate the CV scores using `learning_curve`, use 100% train size for comparison purposes
_, _, lc_scores = learning_curve(estimator=XGBRegressor(), X=X, y=y, train_sizes=[1.0], cv=kf)
print(lc_scores)
# [[0.51444244 0.70020972 0.64521668 0.36608259 0.81670165]]
# calculate the CV scores using `cross_val_score`
cv_scores = cross_val_score(estimator=XGBRegressor(), X=X, y=y, cv=kf)
print(cv_scores)
# [0.51444244 0.70020972 0.64521668 0.36608259 0.81670165]
# calculate the CV scores manually
xgb_scores = []
r2_scores = []
# iterate across the CV splits
for train_index, test_index in kf.split(X):
    # extract the training and test data
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    # fit the model to the training data
    estimator = XGBRegressor()
    estimator.fit(X_train, y_train)
    # score the test data using the XGBRegressor built-in scorer
    xgb_scores.append(estimator.score(X_test, y_test))
    # score the test data using R-squared explicitly
    y_pred = estimator.predict(X_test)
    r2_scores.append(r2_score(y_test, y_pred))
print(xgb_scores)
# [0.5144424362721487, 0.7002097211679331, 0.645216683969211, 0.3660825936288453, 0.8167016490227281]
print(r2_scores)
# [0.5144424362721487, 0.7002097211679331, 0.645216683969211, 0.3660825936288453, 0.8167016490227281]

Python: I want to perform 5 fold cross validation for logistic regression and report scores. Do I use LogisticRegressionCV() or cross_val_score()?

cross_val_score gives different results than LogisticRegressionCV, and I can't figure out why.
Here is my code:
import numpy as np
from sklearn.preprocessing import scale
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import accuracy_score

seed = 42
test_size = .33
X_train, X_test, Y_train, Y_test = train_test_split(scale(X), Y, test_size=test_size, random_state=seed)

# Below is my model that I use throughout the program.
model = LogisticRegressionCV(random_state=42)
print('Logistic Regression results:')

# For cross_val_score below, I just call LogisticRegression (and not LogisticRegressionCV) with the same parameters.
scores = cross_val_score(LogisticRegression(random_state=42), X_train, Y_train, scoring='accuracy', cv=5)
print(np.amax(scores) * 100)
print("%.2f%% average accuracy with a standard deviation of %0.2f" % (scores.mean() * 100, scores.std() * 100))

model.fit(X_train, Y_train)
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
accuracy = accuracy_score(Y_test, predictions)
coef = np.round(model.coef_, 2)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
The output is this.
Logistic Regression results:
79.90483019359885
79.69% average accuracy with a standard deviation of 0.14
Accuracy: 79.81%
Why is the maximum accuracy from cross_val_score higher than the accuracy used by LogisticRegressionCV?
And, I recognize that cross_val_score does not return a model, which is why I want to use LogisticRegressionCV, but I am struggling to understand why it is not performing as well. Likewise, I am not sure how to get the standard deviations of the predictors from LogisticRegressionCV.
There are a few points to take into consideration here:
Cross validation is generally used whenever you need to simulate a validation set (for instance, when the training set is not big enough to be divided into training, validation and test sets), and it only uses training data. In your case you are computing the accuracy of the model on test data, which makes it impossible to compare the results exactly.
According to the docs:
Cross-validation estimators are named EstimatorCV and tend to be roughly equivalent to GridSearchCV(Estimator(), ...). The advantage of using a cross-validation estimator over the canonical estimator class along with grid search is that they can take advantage of warm-starting by reusing precomputed results in the previous steps of the cross-validation process. This generally leads to speed improvements.
If you look at this snippet, you'll see that this is indeed what happens:
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.model_selection import cross_val_score, GridSearchCV, train_test_split

data = load_breast_cancer()
X, y = data['data'], data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

estimator = LogisticRegression(random_state=42, solver='liblinear')
grid = {
    'C': np.power(10.0, np.arange(-10, 10)),
}
gs = GridSearchCV(estimator, param_grid=grid, scoring='accuracy', cv=5)
gs.fit(X_train, y_train)
print(gs.best_score_)  # 0.953846153846154

lrcv = LogisticRegressionCV(Cs=list(np.power(10.0, np.arange(-10, 10))),
                            cv=5, scoring='accuracy', solver='liblinear', random_state=42)
lrcv.fit(X_train, y_train)
print(lrcv.scores_[1].mean(axis=0).max())  # 0.953846153846154
I would suggest having a look here, too, to get the details of lrcv.scores_[1].mean(axis=0).max().
Finally, to get the same results with cross_val_score, you should write:
score = cross_val_score(gs.best_estimator_, X_train, y_train, cv=5, scoring='accuracy')
score.mean() # 0.953846153846154
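As an aside, the question also mentions standard deviations; if what you want is the fold-to-fold spread of the accuracy (the analogue of scores.std() in the question), lrcv.scores_ already stores the per-fold accuracies for every candidate C, so you can summarize them yourself. A minimal sketch, continuing from the lrcv fitted above:
# lrcv.scores_ maps each class label to an array of per-fold scores, shape (n_folds, n_Cs)
fold_scores = lrcv.scores_[1]
mean_per_C = fold_scores.mean(axis=0)   # mean accuracy across folds, one value per C
std_per_C = fold_scores.std(axis=0)     # fold-to-fold standard deviation, one value per C

best_idx = mean_per_C.argmax()
print("best C: %g, accuracy: %.4f +/- %.4f"
      % (lrcv.Cs_[best_idx], mean_per_C[best_idx], std_per_C[best_idx]))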

Should Cross Validation Score be performed on original or split data?

When I want to evaluate my model with cross validation, should I perform cross validation on the original data (data that is not split into train and test), or on the train/test data?
I know that the training data is used for fitting the model, and the testing data for evaluation. If I use cross validation, should I still split the data into train and test, or not?
features = df.iloc[:,4:-1]
results = df.iloc[:,-1]
x_train, x_test, y_train, y_test = train_test_split(features, results, test_size=0.3, random_state=0)
clf = LogisticRegression()
model = clf.fit(x_train, y_train)
accuracy_test = cross_val_score(clf, x_test, y_test, cv = 5)
Or should I do like this:
features = df.iloc[:,4:-1]
results = df.iloc[:,-1]
clf = LogisticRegression()
model = clf.fit(features, results)
accuracy_test = cross_val_score(clf, features, results, cv = 5)
Or maybe something different?
Both your approaches are wrong.
In the first one, you apply cross validation to the test set, which is meaningless.
In the second one, you first fit the model on your whole data and then perform cross validation, which is again meaningless. Moreover, the approach is redundant: your fitted clf is not used by cross_val_score, which does its own fitting.
Since you are not doing any hyperparameter tuning (i.e. you seem to be interested only in performance assessment), there are two ways:
Either with a separate test set
Or with cross validation
First way (test set):
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
x_train, x_test, y_train, y_test = train_test_split(features, results, test_size=0.3, random_state=0)
clf = LogisticRegression()
model = clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
accuracy_test = accuracy_score(y_test, y_pred)
Second way (cross validation):
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import shuffle
clf = LogisticRegression()
# shuffle data first:
features_s, results_s = shuffle(features, results)
accuracy_cv = cross_val_score(clf, features_s, results_s, cv = 5, scoring='accuracy')
# fit the model afterwards with the whole data, if satisfied with the performance:
model = clf.fit(features, results)
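If you want a single figure to report from the cross-validation, you can then summarize the fold scores (continuing from accuracy_cv above):
print("CV accuracy: %.3f (+/- %.3f)" % (accuracy_cv.mean(), accuracy_cv.std()))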
I will try to summarize the "best practice" here:
1) If you want to train your model, fine-tune its parameters, and do a final evaluation, I recommend splitting your data into training | validation | test sets.
You fit your model on the training part, then check different parameter combinations on the validation part. Finally, when you're sure which classifier/parameters obtain the best result on the validation part, you evaluate on the test set to get the final result.
Once you evaluate on the test part, you shouldn't change the parameters any more.
2) On the other hand, some people follow another way: they split their data into training and test sets, fine-tune their model using cross-validation on the training part, and at the end evaluate it on the test part. A sketch of this second workflow is given after this paragraph.
If your data is quite large, I recommend the first way, but if your data is small, use the second.
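As a rough illustration of the second workflow, here is a sketch that tunes a hypothetical parameter grid with cross-validation on the training part only and then evaluates once on the held-out test part (assuming the features and results arrays from the question):
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score

x_train, x_test, y_train, y_test = train_test_split(features, results,
                                                    test_size=0.3, random_state=0)

# tune hyperparameters with cross-validation on the training part only
grid = {'C': [0.01, 0.1, 1, 10]}  # hypothetical grid
gs = GridSearchCV(LogisticRegression(max_iter=1000), grid, cv=5, scoring='accuracy')
gs.fit(x_train, y_train)

# final, one-off evaluation on the untouched test part
y_pred = gs.best_estimator_.predict(x_test)
print("test accuracy:", accuracy_score(y_test, y_pred))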

How to correctly perform cross validation in scikit-learn?

I am trying to cross-validate a k-NN classifier, and I am confused about which of the two methods below conducts cross validation correctly.
from collections import defaultdict
import numpy as np
from sklearn import model_selection
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

training_scores = defaultdict(list)
validation_f1_scores = defaultdict(list)
validation_precision_scores = defaultdict(list)
validation_recall_scores = defaultdict(list)
validation_scores = defaultdict(list)

def model_1(seed, X, Y):
    np.random.seed(seed)
    scoring = ['accuracy', 'f1_macro', 'precision_macro', 'recall_macro']
    model = KNeighborsClassifier(n_neighbors=13)
    kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
    scores = model_selection.cross_validate(model, X, Y, cv=kfold, scoring=scoring, return_train_score=True)
    print(scores['train_accuracy'])
    training_scores['KNeighbour'].append(scores['train_accuracy'])
    print(scores['test_f1_macro'])
    validation_f1_scores['KNeighbour'].append(scores['test_f1_macro'])
    print(scores['test_precision_macro'])
    validation_precision_scores['KNeighbour'].append(scores['test_precision_macro'])
    print(scores['test_recall_macro'])
    validation_recall_scores['KNeighbour'].append(scores['test_recall_macro'])
    print(scores['test_accuracy'])
    validation_scores['KNeighbour'].append(scores['test_accuracy'])
    print(np.mean(training_scores['KNeighbour']))
    print(np.std(training_scores['KNeighbour']))
    # rest of print statements
It seems to me that the for loop in the second method is redundant:
def model_2(seed, X, Y):
    np.random.seed(seed)
    scoring = ['accuracy', 'f1_macro', 'precision_macro', 'recall_macro']
    model = KNeighborsClassifier(n_neighbors=13)
    kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
    for train, test in kfold.split(X, Y):
        scores = model_selection.cross_validate(model, X[train], Y[train], cv=kfold, scoring=scoring, return_train_score=True)
        print(scores['train_accuracy'])
        training_scores['KNeighbour'].append(scores['train_accuracy'])
        print(scores['test_f1_macro'])
        validation_f1_scores['KNeighbour'].append(scores['test_f1_macro'])
        print(scores['test_precision_macro'])
        validation_precision_scores['KNeighbour'].append(scores['test_precision_macro'])
        print(scores['test_recall_macro'])
        validation_recall_scores['KNeighbour'].append(scores['test_recall_macro'])
        print(scores['test_accuracy'])
        validation_scores['KNeighbour'].append(scores['test_accuracy'])
    print(np.mean(training_scores['KNeighbour']))
    print(np.std(training_scores['KNeighbour']))
    # rest of print statements
I am using StratifiedKFold, and I am not sure whether I need the for loop as in the model_2 function, or whether the cross_validate function already uses the splits, since we are passing cv=kfold as an argument.
I am not calling the fit method; is this OK? Does cross_validate call it automatically, or do I need to call fit before calling cross_validate?
Finally, how can I create a confusion matrix? Do I need to create one for each fold, and if so, how can the final/average confusion matrix be calculated?
The documentation is arguably your best friend in such questions; from the simple example there it should be apparent that you should use neither a for loop nor a call to fit. Adapting the example to use KFold as you do:
from sklearn.model_selection import KFold, cross_validate
from sklearn.datasets import load_boston
from sklearn.tree import DecisionTreeRegressor

X, y = load_boston(return_X_y=True)

n_splits = 5
kf = KFold(n_splits=n_splits, shuffle=True)

model = DecisionTreeRegressor()
scoring = ('r2', 'neg_mean_squared_error')

cv_results = cross_validate(model, X, y, cv=kf, scoring=scoring, return_train_score=False)
cv_results
Result:
{'fit_time': array([0.00901461, 0.00563478, 0.00539804, 0.00529385, 0.00638533]),
'score_time': array([0.00132656, 0.00214362, 0.00134897, 0.00134444, 0.00176597]),
'test_neg_mean_squared_error': array([-11.15872549, -30.1549505 , -25.51841584, -16.39346535,
-15.63425743]),
'test_r2': array([0.7765484 , 0.68106786, 0.73327311, 0.83008371, 0.79572363])}
how can I create confusion matrix? Do I need to create it for each fold
No one can tell you if you need to create a confusion matrix for each fold - it is your choice. If you choose to do so, it may be better to skip cross_validate and do the procedure "manually" - see my answer in How to display confusion matrix and report (recall, precision, fmeasure) for each cross validation fold.
if yes, how can the final/average confusion matrix be calculated?
There is no "final/average" confusion matrix; if you want to calculate anything beyond the k individual ones (one per fold) as described in the linked answer, you need to have a separate validation set available...
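For completeness, here is a minimal sketch of the "manual" per-fold approach referred to above, assuming X and Y are NumPy arrays and the KNeighborsClassifier setup from the question:
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
model = KNeighborsClassifier(n_neighbors=13)

fold_matrices = []
for train_idx, test_idx in kfold.split(X, Y):
    model.fit(X[train_idx], Y[train_idx])        # fit on the training fold
    y_pred = model.predict(X[test_idx])          # predict on the held-out fold
    fold_matrices.append(confusion_matrix(Y[test_idx], y_pred))

for i, cm in enumerate(fold_matrices):
    print("fold %d confusion matrix:\n%s" % (i, cm))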
model_1 is correct.
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html
cross_validate(estimator, X, y=None, groups=None, scoring=None, cv='warn', n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', return_train_score='warn', return_estimator=False, error_score='raise-deprecating')
where
estimator is an object implementing 'fit'. It will be called to fit the model on the train folds.
cv is a cross-validation generator that is used to generate train and test splits.
If you follow the example in the sklearn docs
cv_results = cross_validate(lasso, X, y, cv=3, return_train_score=False)
cv_results['test_score']
array([0.33150734, 0.08022311, 0.03531764])
You can see that the lasso model is fitted 3 times, once for each fold, on the train splits, and also validated 3 times on the test splits. You can also see that the test scores on the validation data are reported.
Cross validation of Keras models
Keras provides a wrapper which makes Keras models compatible with scikit-learn's cross-validation utilities. You have to wrap the Keras model using KerasClassifier.
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import KFold, cross_validate
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

def get_model():
    model = Sequential()
    model.add(Dense(2, input_dim=2, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=get_model, epochs=10, batch_size=8, verbose=0)
kf = KFold(n_splits=3, shuffle=True)

X = np.random.rand(10, 2)
y = np.random.randint(0, 2, (10, 1))  # binary class labels for the classifier
cv_results = cross_validate(model, X, y, cv=kf, return_train_score=False)
print(cv_results)

My r-squared score is coming negative but my accuracy score using k-fold cross validation is coming to about 92%

For the code below, my R-squared score comes out negative, but my accuracy score using k-fold cross validation comes out to be about 92%. How is this possible? I'm using the random forest regression algorithm to predict some data. The dataset is available here:
https://www.kaggle.com/ludobenistant/hr-analytics
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
dataset = pd.read_csv("HR_comma_sep.csv")
x = dataset.iloc[:,:-1].values ##Independent variable
y = dataset.iloc[:,9].values ##Dependent variable
##Encoding the categorical variables
le_x1 = LabelEncoder()
x[:,7] = le_x1.fit_transform(x[:,7])
le_x2 = LabelEncoder()
x[:,8] = le_x2.fit_transform(x[:,8])
ohe = OneHotEncoder(categorical_features = [7,8])
x = ohe.fit_transform(x).toarray()
##splitting the dataset in training and testing data
from sklearn.cross_validation import train_test_split
y = pd.factorize(dataset['left'].values)[0].reshape(-1, 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)
sc_y = StandardScaler()
y_train = sc_y.fit_transform(y_train)
from sklearn.ensemble import RandomForestRegressor
regressor = RandomForestRegressor(n_estimators = 10, random_state = 0)
regressor.fit(x_train, y_train)
y_pred = regressor.predict(x_test)
print(y_pred)
from sklearn.metrics import r2_score
r2_score(y_test , y_pred)
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = regressor, X = x_train, y = y_train, cv = 10)
accuracies.mean()
accuracies.std()
There are several issues with your question...
For starters, you are making a very basic mistake: you think you are using accuracy as a metric, but you are in a regression setting, where accuracy is meaningless; what cross_val_score reports by default for a regressor is its built-in R-squared score, while the metric the regressor optimizes underneath is the mean squared error (MSE).
Accuracy is a metric used in classification, and it has to do with the percentage of correctly classified examples - check the Wikipedia entry for more details.
The metric used internally by your chosen regressor (Random Forest) is shown in the verbose output of your regressor.fit(x_train, y_train) command - notice the criterion='mse' argument:
RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
                      max_features='auto', max_leaf_nodes=None,
                      min_impurity_split=1e-07, min_samples_leaf=1,
                      min_samples_split=2, min_weight_fraction_leaf=0.0,
                      n_estimators=10, n_jobs=1, oob_score=False, random_state=0,
                      verbose=0, warm_start=False)
MSE is a positive continuous quantity, and it is not even upper-bounded by 1; in any case, a cross-validation score of 0.92 means... well, 0.92, and not 92% accuracy.
Knowing that, it is good practice to include explicitly the MSE as the scoring function of your cross-validation:
cv_mse = cross_val_score(estimator = regressor, X = x_train, y = y_train, cv = 10, scoring='neg_mean_squared_error')
cv_mse.mean()
# -2.433430574463703e-28
For all practical purposes, this is zero - you fit the training set almost perfectly; for confirmation, here is the (perfect again) R-squared score on your training set:
train_pred = regressor.predict(x_train)
r2_score(y_train , train_pred)
# 1.0
But, as always, the moment of truth comes when you apply your model to the test set; your second mistake here is that, since you trained your regressor on a scaled y_train, you should also scale y_test before evaluating:
y_test = sc_y.fit_transform(y_test)
r2_score(y_test , y_pred)
# 0.9998476914664215
and you get a very nice R-squared in the test set (close to 1).
What about the MSE?
from sklearn.metrics import mean_squared_error
mse_test = mean_squared_error(y_test, y_pred)
mse_test
# 0.00015230853357849051
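As a sanity check on how the two metrics relate, R-squared is simply 1 - MSE / Var(y_test), so you can recover one from the other; a short sketch continuing from the (scaled) y_test and y_pred above:
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

mse_test = mean_squared_error(y_test, y_pred)
r2_manual = 1 - mse_test / np.var(y_test)   # R^2 = 1 - MSE / Var(y)
print(r2_manual)                  # matches r2_score(y_test, y_pred)
print(r2_score(y_test, y_pred))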
