I am building a predictive model in Python with scikit-learn and trying to cross-validate it to get a valid F1 score. However, depending on the CV method, I get very different results. Issues like this are typically attributed to overfitting or bad data, but that shouldn't explain why each method gives consistent results internally, even across different splits, yet always differs from the others. Here x is my dataset, y is the labels, and rf_best is my classifier. For example:
cv_scores = cross_val_score(rf_best, x, y, cv=5, scoring='f1')
avg_cv_score = np.mean(cv_scores)
print cv_scores
avg_cv_score
returns
Out[227]:
[ 0.39825853 0.55969686 0.58727599 0.64060356 0.41976476]
0.52111993918160837
while (changing cv from 5 folds to a ShuffleSplit splitter)
cv = ShuffleSplit(len(y), n_iter=5, test_size=0.25, random_state=1)
cv_scores = cross_val_score(rf_best, x, y, cv=cv, scoring='f1')
avg_cv_score = np.mean(cv_scores)
print cv_scores
avg_cv_score
returns
Out[228]:
[ 0.88029259 0.86664242 0.8898564 0.87900669 0.86130213]
0.87542004725953615
I'm sure the classifier isn't performing this well, and I don't see how I could be overfitting with 5 ShuffleSplit iterations, especially as I rerun it over and over. And this:
scores = []
for train, test in KFold(len(y), n_folds=5): #.25 tt split
    xtrain, xtest, ytrain, ytest = x[train], x[test], y[train], y[test]
    rf_best.fit(xtrain, ytrain)
    scores.append(f1_score(ytest, rf_best.predict(xtest)))
print scores
np.mean(scores)
returns
Out[224]:
[0.3365789517232205, 0.39921963139526107, 0.47179614359341149, 0.56085913206027882, 0.3765470091576244]
0.42900017358595932
How can three methods that are doing close to the same thing return such different results so consistently? Even when I change the random seed or the test-set size, the results stay similar to what I posted above. Thanks for your time.
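One way to check whether the split strategy itself is the culprit is to pass the same shuffled splitter to every approach: with an integer cv and a classifier, cross_val_score builds stratified folds without shuffling, while ShuffleSplit randomizes the rows first, so ordered data gets split very differently. A minimal sketch, assuming x and y are NumPy arrays and rf_best is the classifier above, written against the newer sklearn.model_selection API:

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.metrics import f1_score

# One explicitly shuffled splitter, reused everywhere
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# 1) cross_val_score with the explicit splitter
cv_scores = cross_val_score(rf_best, x, y, cv=skf, scoring='f1')

# 2) a manual loop over the very same splits
manual_scores = []
for train_idx, test_idx in skf.split(x, y):
    rf_best.fit(x[train_idx], y[train_idx])
    manual_scores.append(f1_score(y[test_idx], rf_best.predict(x[test_idx])))

print(np.mean(cv_scores), np.mean(manual_scores))  # these should now be close

If the numbers converge once every method uses the same shuffled splits, the discrepancy came from how the data was ordered, not from the classifier.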
Related
I am trying to use cross_val_score to evaluate my regression model (with PolynomialFeatures(degree=2)). From several blog posts I gathered that I should use cross_val_score with the original X and y values, not X_train and y_train.
r_squareds = cross_val_score(pipe, X, y, cv=10)
r_squareds
>>> array([ 0.74285583, 0.78710331, -1.67690578, 0.68890253, 0.63120873,
0.74753825, 0.13937611, 0.18794756, -0.12916661, 0.29576638])
which suggests my model doesn't perform very well, with a mean R² of only 0.241. Is this the correct interpretation?
However, I came across a Kaggle notebook working on the same data where the author ran cross_val_score on X_train and y_train. I gave this a try and the average R² was better.
r_squareds = cross_val_score(pipe, X_train, y_train, cv=10)
r_squareds.mean()
>>> 0.673
Is this something I should be concerned about?
Here is the code for my model:
X = df[['CHAS', 'RM', 'LSTAT']]
y = df['MEDV']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
pipe = Pipeline(
    steps=[('poly_feature', PolynomialFeatures(degree=2)),
           ('model', LinearRegression())]
)
## fit the model
pipe.fit(X_train, y_train)
Your first interpretation is correct. The first cross_val_score trains 10 models, each with 90% of your data as the training set and 10% as a validation set. We can see from these results that the estimator's R² variance is quite high; sometimes the model even performs worse than a constant prediction (negative R²).
From this result we can safely say that the model is not performing well on this dataset.
The result obtained using only the train set in your cross_val_score may be higher, but that score is most likely not representative of your model's performance, because the dataset might be too small to capture all of its variance. (The training set for the second cross_val_score is only 54% of your dataset: 90% of the 60% training split.)
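To make that spread concrete, a minimal sketch (assuming the pipe, X and y defined in the question) that reports the per-fold R² together with its mean and standard deviation:

import numpy as np
from sklearn.model_selection import cross_val_score

# Default scoring for a regressor pipeline is R² (the estimator's .score method)
r_squareds = cross_val_score(pipe, X, y, cv=10)
print("per-fold R²:", np.round(r_squareds, 3))
print("mean ± std: %.3f ± %.3f" % (r_squareds.mean(), r_squareds.std()))

A large standard deviation relative to the mean is exactly the high-variance behaviour described above.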
So I have a very challenging dataset to work with, but even with that in mind the ROC curves I am getting as a result seem quite bizarre and look wrong.
Below is my code. I have used the scikitplot library (skplt) for plotting ROC curves after passing in my predictions and the ground-truth labels, so I cannot reasonably be getting that wrong. Is there something crazily obvious that I am missing here?
# My dataset - note that m (number of examples) is 115. These are histograms that are already
# summed to 1 so I am doubtful that further preprocessing is necessary.
X, y = load_new_dataset(positives, positive_files, m=115, upper=21, range_size=10, display_plot=False)
# Partition - class balance is 0.87 : 0.13 for negative and positive classes respectively
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, stratify=y)
# Pick a baseline classifier - Naive Bayes
nb = GaussianNB()
# Very large class imbalance, so use stratified K-fold cross-validation.
cross_val = StratifiedKFold(n_splits=10)
# Use RFE for feature selection
est = SVR(kernel="linear")
selector = feature_selection.RFE(est)
# Create pipeline, nothing fancy here
clf = Pipeline(steps=[("feature selection", selector), ("classifier", nb)])
# Score using F1-score due to class imbalance - accuracy unlikely to be meaningful
scores = cross_val_score(clf, X_train, y_train, cv=cross_val,
                         scoring=make_scorer(f1_score, average='micro'))
# Fit and make predictions. Use these to plot ROC curves.
print(scores)
clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)
skplt.metrics.plot_roc_curve(y_test, y_pred)
plt.show()
And below is the starkly binary ROC curve:
I understand that I can't expect outstanding performance with such a challenging dataset, but even so I cannot fathom why I am getting such a binary result, particularly for the ROC curves of the individual classes. No, I cannot get more data, although I sincerely wish I could. If this really is valid code, then I will just have to make do with it and perhaps report the micro-average F1 score, which does not look too bad.
For reference, using the make_classification function from sklearn in the code snippet below, I get the following ROC curve:
# Randomly generate a dataset with similar characteristics (size, class balance,
# num_features)
X, y = make_classification(n_samples=103, n_features=21, random_state=0, n_classes=2,
                           weights=[0.87, 0.13], n_informative=5, n_clusters_per_class=3)
positives = np.where(y == 1)
X_minority, X_majority, y_minority, y_majority = np.take(X, positives, axis=0), \
                                                 np.delete(X, positives, axis=0), \
                                                 np.take(y, positives, axis=0), \
                                                 np.delete(y, positives, axis=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, stratify=y)
# Cross-validation again
cross_val = StratifiedKFold(n_splits=10)
# Use Naive Bayes again for consistency
clf = GaussianNB()
# Likewise for the evaluation metric
scores = cross_val_score(clf, X_train, y_train, cv=cross_val,
                         scoring=make_scorer(f1_score, average='micro'))
print(scores)
# Fit, predict, plot results
clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)
skplt.metrics.plot_roc_curve(y_test, y_pred)
plt.show()
Am I doing something wrong? Or is this what I should expect given these characteristics?
Thanks to Stev's kind suggestion of increasing the test size, the resulting curves I ended up getting were far smoother and exhibited much less variance. Using SMOTE in this case was also very helpful and I would advise it (using imblearn perhaps) for anyone else with a similar issue.
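For anyone else hitting this: with a test set of only about a dozen samples, every threshold moves the ROC curve in large jumps, which is what produces the step-like, "binary" look. A minimal sketch of one alternative, assuming the clf, cross_val, X_train and y_train defined above, that pools out-of-fold probability estimates with cross_val_predict so the curve is built from far more points:

import matplotlib.pyplot as plt
import scikitplot as skplt
from sklearn.model_selection import cross_val_predict

# Out-of-fold probabilities: every training sample is predicted by a model
# that never saw it, so the whole training set contributes to the ROC curve.
y_proba_oof = cross_val_predict(clf, X_train, y_train, cv=cross_val,
                                method="predict_proba")
skplt.metrics.plot_roc_curve(y_train, y_proba_oof)
plt.show()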
I have a dataset with 155 features and 40,143 samples. It is sorted by date (oldest to newest); I then deleted the date column from the dataset.
The label is in the first column.
Cross-validation gives roughly 65% (mean accuracy of scores +/- 0.01) with the code below:
def cross(dataset):
    dropz = ["result"]
    X = dataset.drop(dropz, axis=1)
    X = preprocessing.normalize(X)
    y = dataset["result"]
    clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1)
    scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
Also I get similar accuracy with the code below:
def train(dataset):
    dropz = ["result"]
    X = dataset.drop(dropz, axis=1)
    X = preprocessing.normalize(X)
    y = dataset["result"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000, random_state=42)
    clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1).fit(X_train, y_train)
    clf.score(X_test, y_test)
But if I don't use shuffle in the code below, it gives roughly 49%.
If I use shuffle, it gives roughly 65%.
I should mention that I tried every 1000-sample consecutive split of the whole set, from the end to the beginning, and the result is the same.
dataset = pd.read_csv("./dataset.csv", header=0,sep=";")
dataset = shuffle(dataset) #!!!???
X_train = dataset.iloc[:-1000,1:]
X_train = preprocessing.normalize(X_train)
y_train = dataset.iloc[:-1000,0]
X_test = dataset.iloc[-1000:,1:]
X_test = preprocessing.normalize(X_test)
y_test = dataset.iloc[-1000:,0]
clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1).fit(X_train, y_train)
clf.score(X_test, y_test)
Assuming your question is "Why does it happen?":
In both your first and second code snippets there is underlying shuffling happening (in your cross-validation and your train_test_split call), so they are equivalent, both in score and in procedure, to your last snippet with shuffling turned on.
Since your original dataset is ordered by date, there is most likely some drift in the data over time. Because your classifier never sees data from the last 1000 time points, it is unaware of the change in the underlying distribution and therefore fails to classify it.
Addendum, to address the further data given in the comments:
This suggests that there might be some indicative process that is captured in smaller time frames. Two interesting ways to explore it:
Reduce the size of the test set until you find a window size at which the difference between shuffle/no shuffle is negligible.
This process essentially manifests as some dependence between your features, so you could check whether, within a small time frame, there is a dependence between your features (a time-aware evaluation sketch follows below).
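One time-aware option, a sketch of my own rather than something from this thread, is to evaluate with forward-chaining splits via TimeSeriesSplit, so the model is always tested on data that comes after its training window. It assumes a dataset DataFrame with the label in the "result" column, as in the question:

from sklearn import preprocessing
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Keep the rows in time order: each split trains on an initial segment and
# tests on the segment that immediately follows it.
X = preprocessing.normalize(dataset.drop(["result"], axis=1))
y = dataset["result"]

clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1)
tscv = TimeSeriesSplit(n_splits=10)
scores = cross_val_score(clf, X, y, cv=tscv, scoring='accuracy')
print(scores.mean(), scores.std())

If these forward-chaining scores sit near 49% rather than 65%, that is further evidence that the shuffled estimates are optimistic for this kind of time-ordered data.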
I'm trying to run a simple RandomForestClassifier() with a large-ish dataset. I typically first do the cross-validation using train_test_split, and then start using cross_val_score.
In this case though, I get very different results from these two approaches, and I can't figure out why. My understanding is that these two snippets should do exactly the same thing:
cfc = RandomForestClassifier(n_estimators=50)
scores = cross_val_score(cfc, X, y,
                         cv=ShuffleSplit(len(X), 1, 0.25),
                         scoring='roc_auc')
print(scores)
>>> [ 0.88482262]
and this:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
cfc = RandomForestClassifier(n_estimators=50)
cfc.fit(X_train, y_train)
roc_auc_score(y_test, cfc.predict(X_test))
>>> 0.57733474562203269
And yet the scores are widely different. (These scores are representative; I observed the same behavior across many, many runs.)
Any ideas why this can be? I am tempted to trust the cross_val_score result, but I want to be sure that I am not messing up somewhere.
** Update **
I noticed that when I reverse the order of the arguments to roc_auc_score, I get a similar result:
roc_auc_score(cfc.predict(X_test), y_test)
But the documentation explicitly states that the first argument should be the true values and the second the predicted scores.
I'm not sure what the issue is, but here are two things you could try:
ROC AUC needs predicted probabilities (or decision scores) for proper evaluation, not hard 0/1 predictions. cross_val_score with scoring='roc_auc' already works with probabilities, so change your manual evaluation to use probabilities as well. You can check the first answer on this link for more details.
Compare this with roc_auc_score(y_test, cfc.predict_proba(X_test)[:,1])
As xysmas said, try fixing a random_state for the splits (the ShuffleSplit passed to cross_val_score and the train_test_split call) so the two evaluations are reproducible and comparable.
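Putting the first suggestion into code, a minimal sketch (assuming cfc, X, y and the train/test split from the question, written against the newer sklearn.model_selection API):

from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.metrics import roc_auc_score

# cross_val_score with scoring='roc_auc' already scores on probabilities internally
scores = cross_val_score(cfc, X, y,
                         cv=ShuffleSplit(n_splits=1, test_size=0.25, random_state=0),
                         scoring='roc_auc')

# Manual split, now also scored on the probability of the positive class
cfc.fit(X_train, y_train)
manual_auc = roc_auc_score(y_test, cfc.predict_proba(X_test)[:, 1])
print(scores, manual_auc)

With both numbers computed from probabilities (and with fixed random states), the two approaches should land in the same ballpark.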
I am trying to learn to use scikit-learn for some basic statistical learning tasks. I thought I had successfully created a LinearRegression model fit to my data:
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    X, y, test_size=0.2, random_state=0)
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
print model.score(X_test, y_test)
Which yields:
0.797144744766
Then I wanted to do multiple similar 4:1 splits via automatic cross-validation:
model = linear_model.LinearRegression()
scores = cross_validation.cross_val_score(model, X, y, cv=5)
print scores
And I get output like this:
[ 0.04614495 -0.26160081 -3.11299397 -0.7326256 -1.04164369]
How can the cross-validation scores be so different from the score of the single random split? They are both supposed to be using r2 scoring, and the results are the same if I pass the scoring='r2' parameter to cross_val_score.
I've tried a number of different options for the random_state parameter to cross_validation.train_test_split, and they all give similar scores in the 0.7 to 0.9 range.
I am using sklearn version 0.16.1
It turns out that my data was ordered in blocks of different classes, and by default cross_validation.cross_val_score picks consecutive splits rather than random (shuffled) splits. I was able to solve this by specifying that the cross-validation should use shuffled splits:
model = linear_model.LinearRegression()
shuffle = cross_validation.KFold(len(X), n_folds=5, shuffle=True, random_state=0)
scores = cross_validation.cross_val_score(model, X, y, cv=shuffle)
print scores
Which gives:
[ 0.79714474 0.86636341 0.79665689 0.8036737 0.6874571 ]
This is in line with what I would expect.
train_test_split seems to generate random splits of the dataset, while cross_val_score uses consecutive sets, i.e.
"When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default"
http://scikit-learn.org/stable/modules/cross_validation.html
Depending on the nature of your data set, e.g. data highly correlated over the length of one segment, consecutive sets will give vastly different fits than e.g. random samples from the whole data set.
Folks, thanks for this thread.
The code in the answer above (Schneider) is outdated.
As of scikit-learn==0.19.1, this will work as expected.
from sklearn.model_selection import cross_val_score, KFold
kf = KFold(n_splits=3, shuffle=True, random_state=0)
cv_scores = cross_val_score(regressor, X, y, cv=kf)
Best,
M.