I have the following code where I want to use k-fold cross validation for a Linear Regression model:
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import mean_squared_error
import pandas

kf = KFold(n_splits=100)
predi = cross_val_predict(model, train[columns], train[target], cv=kf)
predi = pandas.Series(predi)
model.fit(train[columns], train[target])
pred_test = model.predict(test[columns])
print(mean_squared_error(test[target], pred_test))
However, I am not sure whether the code does what I would like it to do. Specifically, I am not sure about the model.fit part. Does it even use the cross-validation?
The reason I am not sure is that calculating it like this yields worse results than without cross-validation.
No. Cross-validation is just for checking the performance of the model on data (or rather, on different parts of it).
When you call fit(), it fits the whole data supplied at that time, whereas cross-validation only uses part of the data (leaving one fold out in each iteration). This difference in data may cause the estimator to perform better or worse.
model.fit doesn't have any functionality to divide the data. It just solves the cost-function minimization problem and creates a model (i.e., finds the parameters).
Also, if you think that by writing a loop, dividing the data on every iteration, and calling model.fit again and again you will get a more generalized model, that is not the case: on the second call to fit, a linear regression model object forgets the old data.
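A minimal sketch of the pattern described above: use cross-validation only to estimate performance, then fit once on the full training data (the names model, train, test, columns, and target are carried over from the question):

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import mean_squared_error

model = LinearRegression()

# CV only estimates generalization error; cross_val_score fits clones
# of the estimator, so `model` itself is untouched by this call
kf = KFold(n_splits=10, shuffle=True, random_state=0)
cv_scores = cross_val_score(model, train[columns], train[target],
                            cv=kf, scoring='neg_mean_squared_error')
print('CV MSE estimate:', -cv_scores.mean())

# The final model is fit separately, on all of the training data
model.fit(train[columns], train[target])
print('Test MSE:', mean_squared_error(test[target], model.predict(test[columns])))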
Related
I'm a little bit confused about how GridSearchCV works with Train Test Split.
As far as I know, a paper that built models for the dataset I'm using evaluated them with roc-auc.
I'm trying to replicate what this paper did, at least as well as I can. From reading a few other posts here, I've gathered that running GridSearchCV on the entire dataset is prone to overfitting, so we should split the data into a training partition and a testing partition. Then, we should run GridSearchCV on the training partition with whatever model and parameters, fit it, and then get a score using the test partition we set aside.
Now where I'm confused is with GridSearchCV: as far as I understand, it gives us scores for each of the folds the data is split into during the parameter search, and using best_score_ we can pull the best of these scores. I don't understand what the scores represent, and why you can pass in a scoring parameter to begin with, since the job of GridSearchCV is to always find the best possible parameters anyway. (Perhaps I'm making a poor assumption here, but I'm assuming that there is an objective best set of parameters, regardless of scoring method.) What I figured was that I would find the best parameters with GridSearchCV, then use said parameters to create and fit a model, and finally use that model with the partition I saved for testing, scoring it with roc-auc.
So in the end, does it matter (if at all) what scoring methods I'm passing into GridSearchCV, as it will always look to give the best set of parameters anyways, which I will use to compute my final score with the testing partition?
This document may help.
Here you see that the scoring parameter allows you to use various metrics, such as roc_auc. See here for all of scikit-learn's metrics.
Optimizing over different metrics results in different optimal parameters. Just think about optimizing precision versus recall: optimizing precision leads to fewer false positives, while optimizing recall leads to fewer false negatives.
Also, in GridSearchCV, the CV stands for cross validated. Train/test splitting happens inside this function, it's taken care of. You only have to provide the splitter as an argument to GridSearchCV, for example cv=StratifiedKFold(n_splits=5, shuffle=True).
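A short sketch of the full workflow under those assumptions (clf and param_grid are placeholders for your estimator and parameter grid, X and y are your full feature matrix and labels, and clf is assumed to support predict_proba):

from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.metrics import roc_auc_score

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

grid = GridSearchCV(clf, param_grid,
                    scoring='roc_auc',  # the metric the search optimizes
                    cv=StratifiedKFold(n_splits=5, shuffle=True))
grid.fit(X_train, y_train)  # all cross validation happens inside, on the training part only
print(grid.best_params_, grid.best_score_)

# Final, held-out evaluation with the best estimator (refit on all training data by default)
y_score = grid.best_estimator_.predict_proba(X_test)[:, 1]
print('test roc-auc:', roc_auc_score(y_test, y_score))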
I am trying to evaluate logistic regression using the AUROC curve and and cross-validate my scores. When I don't cross-validate I have no issues, but I really want to use cross validation to help decrease bias in my method.
Anyway, below is the code and error term I get for the beginning part of my code:
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

X = df.drop('Survived', axis=1)
y = df['Survived']
skf = StratifiedKFold(n_splits=5)
logmodel = LogisticRegression()
i = 0
for train, test in skf.split(X, y):
    logmodel.fit(X[train], y[train])  # error occurs here
    predictions = logmodel.predict_proba(X[test])
    # a bunch of code that I haven't included which creates the ROC curve
    i += 1
The error occurs in the fourth to last line, and returns a list of integers followed by 'not in index'
I don't really understand what the problem is.
This is my understanding of the code: First I create an instance of both stratified kfold and logistic regression. The instance of stratified kfold states that five folds are to be made. Next, I say that for each train and test fold in my dataset X, y I fit the logistic model to the data and then create a list of predictions for different probabilities based on the test data. Later (this part is not showed) I will create a ROC curve for each k-fold of data.
Again, I don't really understand what the problem is but maybe somebody can clarify. My work is more or less copied directly from this link in sklearn: https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc_crossval.html#sphx-glr-auto-examples-model-selection-plot-roc-crossval-py
Please add more details so the problem can be properly examined; preferably (and actually required), a piece of code that one can run to reproduce the error.
At first view, you take a pandas DataFrame and feed it into the model, and that is done incorrectly.
The following lines retrieve the data correctly before feeding it to the model:
X = df.drop('Survived', axis=1).values
y = df['Survived'].values
The .values attribute accesses the underlying NumPy array stored in those pandas objects, which is consistent with the rest of the code.
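An alternative fix, not mentioned in the original answer: keep the DataFrames and index them positionally with .iloc, since the indices returned by skf.split are positional (a sketch using the question's own variable names):

for train, test in skf.split(X, y):
    logmodel.fit(X.iloc[train], y.iloc[train])
    predictions = logmodel.predict_proba(X.iloc[test])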
Hopefully that helps you to solve the error.
Good luck!
How are we supposed to use the dictionary output from lightgbm.cv to improve our predictions?
Here's an example - we train our cv model using the code below:
cv_mod = lgb.cv(params,
                d_train,
                500,
                nfold=10,
                early_stopping_rounds=25,
                stratified=True)
How can we use the parameters found in the best iteration of the above code to predict an output? In this case, cv_mod has no "predict" method like lightgbm.train, and the dictionary output from lightgbm.cv throws an error when used in lightgbm.train.predict(..., pred_parameters=cv_mod).
Am I missing an important transformation step?
In general, the purpose of CV is NOT to do hyperparameter optimisation. The purpose is to evaluate the performance of the model-building procedure.
A basic train/test split is conceptually identical to a 1-fold CV (with a custom split size, in contrast to the 1/K test-fold size in k-fold CV). The advantage of doing more splits (i.e., k>1 CV) is that you get more information about the estimate of the generalisation error: you obtain both the error estimate and its statistical uncertainty. There is an excellent discussion on CrossValidated (start with the links added to the question, which cover the same question formulated in a different way). It covers nested cross-validation and is absolutely not straightforward, but if you wrap your head around the concept in general, it will help you in various non-trivial situations. The idea you have to take away is: the purpose of CV is to evaluate the performance of the model-building procedure.
Keeping that idea in mind, how does one approach hyperparameter estimation in general (not only in LightGBM)?
You want to train a model with a set of parameters on some data and evaluate each variation of the model on an independent (validation) set. Then you intend to choose the best parameters by choosing the variant that gives the best evaluation metric of your choice.
This can be done with a simple train/test split. But evaluated performance, and thus the choice of the optimal model parameters, might be just a fluctuation on a particular split.
Thus, you can evaluate each of those models in a more statistically robust way by averaging the evaluation over several train/test splits, i.e., k-fold CV.
Then you can go a step further and say that you had an additional hold-out set that was separated before the hyperparameter optimisation started. This way you can evaluate the chosen best model on that set to measure the final generalisation error. However, you can go even further: instead of having a single test sample you can have an outer CV loop, which brings us to nested cross validation.
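In scikit-learn terms, nested CV can be written compactly like this (estimator, param_grid, X, and y are placeholders):

from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

inner = GridSearchCV(estimator, param_grid, cv=KFold(n_splits=5))  # inner loop: parameter tuning
outer_scores = cross_val_score(inner, X, y, cv=KFold(n_splits=5))  # outer loop: generalisation error
print(outer_scores.mean(), outer_scores.std())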
Technically, lightgbm.cv() only allows you to evaluate performance on a k-fold split with fixed model parameters. For hyperparameter tuning you will need to run it in a loop, providing different parameters and recording the averaged performance, and choose the best parameter set after the loop is complete. This interface is different from sklearn's, which provides you with complete functionality to do hyperparameter optimisation in a CV loop. Personally, I would recommend using the sklearn API of LightGBM. It is just a wrapper around the native lightgbm.train() functionality, so it is not slower, but it lets you use the full sklearn toolkit, which makes your life MUCH easier.
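For example, a sketch of that sklearn-API route (the param_grid values are arbitrary, and X_train/y_train are assumed to exist):

import lightgbm as lgb
from sklearn.model_selection import GridSearchCV

param_grid = {'num_leaves': [31, 63], 'learning_rate': [0.05, 0.1]}
grid = GridSearchCV(lgb.LGBMClassifier(n_estimators=200),
                    param_grid, cv=5, scoring='roc_auc')
grid.fit(X_train, y_train)           # the inner CV loop picks the best parameters
print(grid.best_params_, grid.best_score_)
final_model = grid.best_estimator_   # already refit on the whole training set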
If you're happy with your CV results, you just use those parameters to call the 'lightgbm.train' method. Like #pho said, CV is usually just for param tuning. You don't use the actual CV object for predictions.
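For instance, a sketch of that flow using the question's own params and d_train (the result-dictionary key and the early-stopping argument vary across LightGBM versions, so treat the details as assumptions to adapt):

cv_results = lgb.cv(params, d_train, num_boost_round=500,
                    nfold=10, early_stopping_rounds=25, stratified=True)
best_rounds = len(cv_results['auc-mean'])  # key depends on params['metric'] and version
final_model = lgb.train(params, d_train, num_boost_round=best_rounds)
predictions = final_model.predict(X_test)  # X_test is a placeholder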
You should use CV for parameter optimization.
If your model performs well on all folds use these parameters to train on the whole training set.
Then evaluate that model on the external test set.
Scikit has CalibratedClassifierCV, which allows us to calibrate our models on a particular X, y pair. It also states clearly that data for fitting the classifier and for calibrating it must be disjoint.
If they must be disjoint, is it legitimate to train the classifier with the following?
model = CalibratedClassifierCV(my_classifier)
model.fit(X_train, y_train)
I fear that by using the same training set I'm breaking the disjoint-data rule. An alternative might be to have a validation set:
my_classifier.fit(X_train, y_train)
model = CalibratedClassifierCV(my_classifier, cv='prefit')
model.fit(X_valid, y_valid)
This has the disadvantage of leaving less data for training. Also, if CalibratedClassifierCV should only be fit on models fit on a different training set, why is its default cv=3, which will also fit the base estimator? Does the cross validation handle the disjoint rule on its own?
Question: what is the correct way to use CalibratedClassifierCV?
I already answered the exact same question on CrossValidated. I'm leaving it here anyway, since it is not clear to me whether this question belongs here or on CrossValidated.
--- Original answer ---
There are two things mentioned in the CalibratedClassifierCV docs that hint towards the ways it can be used:
base_estimator: If cv=prefit, the classifier must have been fit already on data.
cv: If “prefit” is passed, it is assumed that base_estimator has been fitted already and all data is used for calibration.
I may obviously be interpreting this wrong, but it appears you can use the CCCV (short for CalibratedClassifierCV) in two ways:
Number one:
You train your model as usual, your_model.fit(X_train, y_train).
Then, you create your CCCV instance, your_cccv = CalibratedClassifierCV(your_model, cv='prefit'). Notice you set cv to flag that your model has already been fit.
Finally, you call your_cccv.fit(X_validation, y_validation). This validation data is used solely for calibration purposes.
Number two:
You have a new, untrained model.
Then you create your_cccv = CalibratedClassifierCV(your_untrained_model, cv=3). Notice cv is now the number of folds.
Finally, you call cccv_instance.fit(X, y). Because your model is untrained, X and y have to be used for both training and calibration. The way to ensure the data is 'disjoint' is cross validation: for any given fold, CCCV will split X and y into your training and calibration data, so they do not overlap.
TLDR: Method one allows you to control what is used for training and for calibration. Method two uses cross validation to try and make the most out of your data for both purposes.
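Putting both methods into one sketch (the names my_classifier, X_train/y_train, and X_valid/y_valid are carried over from the question):

from sklearn.calibration import CalibratedClassifierCV

# Method one: prefit the base estimator, then calibrate on a separate set
my_classifier.fit(X_train, y_train)
calibrated = CalibratedClassifierCV(my_classifier, cv='prefit')
calibrated.fit(X_valid, y_valid)

# Method two: internal cross validation keeps the per-fold training
# and calibration data disjoint automatically
calibrated = CalibratedClassifierCV(my_classifier, cv=3)
calibrated.fit(X_train, y_train)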
I'm currently working on a research study comparing classifier performances. To evaluate those performances, I'm computing the accuracy, the area under the curve, and the squared error for each classifier on all the datasets I have. Besides, I need to tune the parameters of some of the classifiers in order to select the best parameters in terms of accuracy, so a validation set is required (I chose 20% of the dataset).
I was told that, in order to make this comparison even more meaningful, the cross validation should be performed on the same sets for each classifier.
So basically, is there a way to use the cross_val_score method so that it always runs on the same folds for all the classifiers, or should I write some code from scratch that can do this job?
Thank you in advance.
cross_val_score accepts a cv parameter representing the cross-validation object you want to use. You probably want StratifiedKFold, which accepts a shuffle parameter specifying whether to shuffle the data prior to running cross validation on it.
cv can also be an int, in which case a StratifiedKFold or KFold object will be created automatically with K = cv.
As you can tell from the documentation, shuffle is False by default, so by default cross validation will already be performed on the same folds for all of your classifiers.
You can test it by running it twice on the same classifier to make sure (you should get the exact same results).
You can specify it yourself like this:
# In current scikit-learn, StratifiedKFold takes n_splits (not n_folds) and no y;
# with shuffle=True, fix random_state so every classifier sees identical folds
your_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)  # or shuffle=False
cross_val_score(your_estimator, your_X, y=your_y, cv=your_cv)
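For example, passing the same splitter to two different classifiers guarantees they are scored on identical folds (clf_a and clf_b are placeholder estimators):

from sklearn.model_selection import StratifiedKFold, cross_val_score

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores_a = cross_val_score(clf_a, your_X, your_y, cv=cv)
scores_b = cross_val_score(clf_b, your_X, your_y, cv=cv)  # exactly the same folds as clf_a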