Linear Discriminant Analysis - python

I am using sklearn.lda for classification and was a little puzzled about the score function, which reports the mean classification error.
Is it determined by leave-one-out (jackknife)?
How do I interpret the result? It's only a float value without much documentation.
Thanks in advance,
EL

The score method takes samples X and their true labels y and compares its own predictions with y. It returns the mean accuracy, which is always a single figure. For example,
from sklearn.lda import LDA

lda = LDA().fit(X, y)
print(lda.score(X, y))
will print the accuracy of the classifier on its own training set (not a cross-validated estimate).
Every classifier has a score method, which usually (though not necessarily) returns mean accuracy. The method is used by the GridSearchCV model selection algorithm to determine the quality of the classifier if you don't explicitly give it a scoring argument.
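If you do want a leave-one-out estimate rather than the training-set accuracy, you can compute it yourself with the cross-validation utilities. A minimal sketch, assuming the newer sklearn.discriminant_analysis / sklearn.model_selection module layout (X and y are your data):
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

lda = LinearDiscriminantAnalysis()
# one fold per sample; each fold's score is 1 (correct) or 0 (incorrect)
scores = cross_val_score(lda, X, y, cv=LeaveOneOut())
print(np.mean(scores))  # leave-one-out accuracy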

Related

How to penalize False Negatives more than False Positives

From the business perspective, false negatives lead to about tenfold higher costs (real money) than false positives. Given my standard binary classification models (logit, random forest, etc.), how can I incorporate this into my model?
Do I have to change (weight) the loss function in favor of the 'preferred' error (FP)? If so, how do I do that?
There are several options for you:
As suggested in the comments, class_weight biases the loss function towards the preferred class, i.e. it penalizes misclassifications of that class more heavily. This option is supported by various estimators, including sklearn.linear_model.LogisticRegression,
sklearn.svm.SVC, sklearn.ensemble.RandomForestClassifier, and others. Note there's no theoretical limit to the weight ratio, so even if 1 to 100 isn't strong enough for you, you can go on with 1 to 500, etc.
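For example, a minimal sketch of the class-weight approach (the 1:10 ratio mirrors the tenfold cost difference from the question; x_train and y_train are placeholders):
from sklearn.ensemble import RandomForestClassifier

# mistakes on the positive class (label 1) are penalized ten times more heavily
clf = RandomForestClassifier(n_estimators=100,
                             class_weight={0: 1, 1: 10})
clf.fit(x_train, y_train)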
You can also set the decision threshold very low during cross-validation and pick the model that gives the highest recall (though possibly low precision). A recall close to 1.0 effectively means false negatives close to 0, which is what you want. For that, use the sklearn.model_selection.cross_val_predict and sklearn.metrics.precision_recall_curve functions:
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_recall_curve

y_scores = cross_val_predict(classifier, x_train, y_train, cv=3,
                             method="decision_function")
precisions, recalls, thresholds = precision_recall_curve(y_train, y_scores)
If you plot the precisions and recalls against the thresholds, you can see the trade-off between them and choose the threshold that still keeps recall where you need it.
After picking the best threshold, you can use the raw scores from the classifier.decision_function() method for your final classification, as in the sketch below.
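For instance, a small sketch of that last step, using the arrays computed above (the 0.95 recall target and x_test are illustrative placeholders):
import numpy as np

# recalls[:-1] aligns with thresholds; pick the highest threshold
# whose recall still meets the target
target_recall = 0.95
ok = np.where(recalls[:-1] >= target_recall)[0]
threshold = thresholds[ok[-1]]

# final predictions: compare raw decision scores against the chosen threshold
y_pred = (classifier.decision_function(x_test) >= threshold).astype(int)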
Finally, try not to over-optimize your classifier, because you can easily end up with a trivial constant classifier (one that always predicts the positive class never produces a false negative, but is useless).
As @Maxim mentioned, there are two stages at which this kind of tuning can happen: the model-training stage (e.g. custom class weights) and the prediction stage (e.g. lowering the decision threshold).
Another option for the model-training stage is to use a recall scorer: you can use it in your grid-search cross-validation (GridSearchCV) to tune your classifier's hyper-parameters towards high recall.
The GridSearchCV scoring parameter accepts either the 'recall' string or a scorer callable built around recall_score.
Since you're doing binary classification, both options work out of the box and call recall_score with its default values, which suit the binary case:
average: 'binary' (i.e. one simple recall value)
pos_label: 1 (like numpy's True value)
Should you need to customize it, you can wrap an existing metric, or a custom one, with make_scorer and pass it to the scoring parameter.
For example:
from sklearn.metrics import recall_score, make_scorer

recall_custom_scorer = make_scorer(
    lambda y, y_pred, **kwargs: recall_score(y, y_pred, pos_label='yes')
)

GridSearchCV(estimator=est, param_grid=param_grid, scoring=recall_custom_scorer, ...)

Gridsearch technique in sklearn, python

I am working on a supervised machine learning algorithm and it seems to have a curious behavior.
So, let me start:
I have a function where I pass different classifiers, their parameters, training data and their labels:
from sklearn.metrics import make_scorer, f1_score
from sklearn.model_selection import GridSearchCV
# (in older scikit-learn versions: from sklearn.grid_search import GridSearchCV)

def HT(targets, train_new, algorithm, parameters):
    # creating my scorer
    scorer = make_scorer(f1_score)
    # creating the grid search object with the parameters of the function
    grid_search = GridSearchCV(algorithm,
                               param_grid=parameters, scoring=scorer, cv=5)
    # fit the grid_search object to the data
    grid_search.fit(train_new, targets.ravel())
    # print the name of the classifier, the best score and best parameters
    print(algorithm.__class__.__name__)
    print('Best score: {}'.format(grid_search.best_score_))
    print('Best parameters: {}'.format(grid_search.best_params_))
    # assign the best estimator to the pipeline variable
    pipeline = grid_search.best_estimator_
    # predict the results for the training set
    results = pipeline.predict(train_new).astype(int)
    print(results)
    return pipeline
To this function I pass parameters like:
clf_param.append({'C': np.array([0.001, 0.01, 0.1, 1, 10]),
                  'kernel': (['linear', 'rbf']),
                  'decision_function_shape': (['ovr'])})
Ok, so here is where things start to get strange. This function returns an f1_score, but it is different from the score I am computing manually using the formula:
F1 = 2 * (precision * recall) / (precision + recall)
There are pretty big differences (0.68 compared with 0.89).
Am I doing something wrong in the function?
Shouldn't the score computed by grid_search (grid_search.best_score_) be the same as the score on the whole training set (grid_search.best_estimator_.predict(train_new))?
Thanks
The score that you are manually calculating takes into account the global true positives and negatives for all classes. But scikit-learn's f1_score defaults to average='binary', i.e. it is calculated only for the positive class.
So, in order to achieve the same scores, use the f1_score as specified below:
scorer=make_scorer(f1_score, average='micro')
Or simply, in the GridSearchCV, use:
scoring = 'f1_micro'
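To see the difference concretely, here is a small illustrative sketch with made-up binary labels:
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 1, 0, 1]
y_pred = [0, 1, 0, 1, 1, 1]

print(f1_score(y_true, y_pred))                   # average='binary': positive class only -> 0.75
print(f1_score(y_true, y_pred, average='micro'))  # global TP/FP/FN over both classes -> ~0.667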
More information about how the averaging of scores is done is given on:
- http://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values
You may also want to take a look at the following answer, which describes the calculation of scores in scikit-learn in detail:
https://stackoverflow.com/a/31575870/3374996
EDIT:
Changed macro to micro. As written in documentation:
'micro': Calculate metrics globally by counting the total true
positives, false negatives and false positives.

How to evaluate cost function for scikit learn LogisticRegression?

After using sklearn.linear_model.LogisticRegression to fit a training data set, I would like to obtain the value of the cost function for the training data set and a cross validation data set.
Is it possible to have sklearn simply give me the value (at the fit minimum) of the function it minimized?
The function is stated in the documentation at http://scikit-learn.org/stable/modules/linear_model.html#logistic-regression (depending on the regularization one has chosen). But I can't find how to get sklearn to give me the value of this function.
I would have thought this is what LogisticRegression.score does, but that simply returns the accuracy (the fraction of data points its prediction classifies correctly).
I have found sklearn.metrics.log_loss, but of course this is not the actual function being minimized.
Unfortunately there is no "nice" way to do so, but there is a private function
_logistic_loss(w, X, y, alpha, sample_weight=None) in https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/logistic.py, so you can call it by hand:
from sklearn.linear_model.logistic import _logistic_loss
print(_logistic_loss(clf.coef_, X, y, 1. / clf.C))
where clf is your learned LogisticRegression.
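If you prefer to stay on the public API, roughly the same quantity can be assembled from sklearn.metrics.log_loss plus the l2 penalty term added by hand. A rough sketch, assuming the default penalty='l2' and binary labels (unlike the snippet above, it includes the fitted intercept via predict_proba):
import numpy as np
from sklearn.metrics import log_loss

def logreg_objective(clf, X, y):
    # unregularized negative log-likelihood, summed over samples
    nll = log_loss(y, clf.predict_proba(X), normalize=False)
    # l2 penalty with alpha = 1 / C (the intercept is not penalized)
    w = clf.coef_.ravel()
    return nll + 0.5 * np.dot(w, w) / clf.C

print(logreg_objective(clf, X, y))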
I used the code below to calculate a cost value (note that this is the sum of squared errors of the hard predictions, not the regularized log-loss that LogisticRegression actually minimizes).
import numpy as np
cost = np.sum((reg.predict(x) - y) ** 2)
where reg is your learned LogisticRegression
I have the following suggestion: write the loss function of logistic regression yourself as a Python function.
After you get the predicted labels (or probabilities) for your data, you can invoke that function to calculate the cost values.

Leave-one-out cross-validation

I am trying to evaluate a multivariable dataset by leave-one-out cross-validation and then remove those samples not predictive of the original dataset (Benjamini-corrected, FDR > 10%).
Using the docs on cross-validation, I've found the leave-one-out iterator. However, when trying to get the score for the nth fold, an exception is raised saying that more than one sample is needed. Why does .predict() work while .score() doesn't? How can I get the score for a single sample? Do I need to use another approach?
Unsuccessful code:
from sklearn import ensemble, cross_validation, datasets

dataset = datasets.load_linnerud()
x, y = dataset.data, dataset.target
clf = ensemble.RandomForestRegressor(n_estimators=500)

loo = cross_validation.LeaveOneOut(x.shape[0])
for train_i, test_i in loo:
    score = clf.fit(x[train_i], y[train_i]).score(x[test_i], y[test_i])
    print('Sample %d score: %f' % (test_i[0], score))
Resulting exception:
ValueError: r2_score can only be computed given more than one sample.
[EDIT, to clarify]:
I am not asking why this doesn't work, but for a different approach that does. After fitting/training my model, how do I test how good a single sample fits the trained model?
cross_validation.LeaveOneOut(x.shape[0]) is creating as many folds as the number of rows. This results in each validation run getting only one instance.
Now, to draw a "line" you need at least two points, whereas with one instance you only have one point. That's what your error message says: it needs more than one sample to compute the r^2 value (with a single sample the variance of the true targets is zero, so r^2 is undefined).
Generally, in the ML world, people report 10-fold or 5-fold cross-validation results, so I would recommend using 10 or 5 folds instead.
Edit: after a quick discussion with @banana, we realized that the question was not understood correctly initially. Since it is not possible to get an R2 score for a single data point, an alternative is to calculate the distance between the actual and predicted points. This can be done using
numpy.linalg.norm(clf.predict(x[test_i])[0] - y[test_i])
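For instance, a short sketch that collects this per-sample error over all leave-one-out folds (variable names follow the question's code; taking the mean at the end is just one way to summarize):
import numpy as np

errors = []
for train_i, test_i in loo:
    clf.fit(x[train_i], y[train_i])
    # distance between the predicted and true target vector for the single held-out sample
    errors.append(np.linalg.norm(clf.predict(x[test_i])[0] - y[test_i][0]))
print('Mean leave-one-out error: %f' % np.mean(errors))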

Scikit-learn cross validation scoring for regression

How can one use cross_val_score for regression? The default scoring seems to be accuracy, which is not very meaningful for regression. Suppose I would like to use mean squared error; is it possible to specify that in cross_val_score?
I tried the following two, but they don't work:
scores = cross_validation.cross_val_score(svr, diabetes.data, diabetes.target, cv=5, scoring='mean_squared_error')
and
scores = cross_validation.cross_val_score(svr, diabetes.data, diabetes.target, cv=5, scoring=metrics.mean_squared_error)
The first one generates a list of negative numbers while mean squared error should always be non-negative. The second one complains that:
mean_squared_error() takes exactly 2 arguments (3 given)
I don't have the reputation to comment, but I want to provide this link for you and/or any passersby, where the negative output of the MSE in scikit-learn is discussed: https://github.com/scikit-learn/scikit-learn/issues/2439
In addition (to make this a real answer), your first option is correct: not only is MSE the metric you want to use to compare models, but R^2 cannot be calculated depending (I think) on the type of cross-validation you are using.
If you choose MSE as a scorer, it outputs a list of errors, which you can then take the mean of, like so:
# Doing linear regression with leave-one-out cross-validation
from sklearn import cross_validation, linear_model
import numpy as np

# Including this to remind you that it is necessary to use numpy arrays rather
# than lists, otherwise you will get an error
X_digits = np.array(x)
Y_digits = np.array(y)

loo = cross_validation.LeaveOneOut(len(Y_digits))
regr = linear_model.LinearRegression()

scores = cross_validation.cross_val_score(regr, X_digits, Y_digits,
                                          scoring='mean_squared_error', cv=loo)

# This will print the mean of the list of errors that were output and
# provide your metric for evaluation
print(scores.mean())
The first one is correct. It outputs the negative of the MSE because the scoring API always tries to maximize the score, so error metrics are returned with their sign flipped. Please help us by suggesting an improvement to the documentation.
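For what it's worth, in newer scikit-learn versions the same behaviour is spelled out in the scorer name, which avoids the surprise. A sketch, assuming scikit-learn >= 0.18 (svr and diabetes as in the question):
from sklearn.model_selection import cross_val_score

scores = cross_val_score(svr, diabetes.data, diabetes.target, cv=5,
                         scoring='neg_mean_squared_error')
mse = -scores  # flip the sign back to get ordinary, non-negative MSE values
print(mse.mean())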
