I want to plot a ROC curve for evaluating a trained Nearest Centroid classifier.
My code works for Naive Bayes, SVM, kNN and DT but I get an exception whenever I try to plot the curve for Nearest Centroid, because the estimator has no .predict_proba() method:
AttributeError: 'NearestCentroid' object has no attribute 'predict_proba'
The code for plotting the curve is
def plot_roc(self):
    plt.clf()
    for label, estimator in self.roc_estimators.items():
        estimator.fit(self.data_train, self.target_train)
        proba_for_each_class = estimator.predict_proba(self.data_test)
        fpr, tpr, thresholds = roc_curve(self.target_test, proba_for_each_class[:, 1])
        plt.plot(fpr, tpr, label=label)
    plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Luck', alpha=.8)
    plt.ylabel('True Positive Rate')
    plt.xlabel('False Positive Rate')
    plt.legend()
    plt.show()
self.roc_estimators is a dict where I store the trained estimators keyed by the classifier's label, like this:
cl_label = "kNN"
knn_estimator = KNeighborsClassifier(algorithm='ball_tree', p=2, n_neighbors=5)
knn_estimator.fit(self.data_train, self.target_train)
self.roc_estimators[cl_label] = knn_estimator
and for Nearest Centroid, respectively:
cl_label = "Nearest Centroid"
nc_estimator = NearestCentroid(metric='euclidean', shrink_threshold=6)
nc_estimator.fit(self.data_train, self.target_train)
self.roc_estimators[cl_label] = nc_estimator
So it works for all classifiers I tried, but not for Nearest Centroid. Is there a specific reason, related to the nature of the Nearest Centroid classifier, that I am missing which explains why it is not possible to plot the ROC curve (more specifically, why the estimator does not have a .predict_proba() method)? Thank you in advance!
You need a "score" for each prediction to make the ROC curve. This could be the predicted probability of belonging to one class.
See e.g. https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Curves_in_ROC_space
Just looking for the nearest centroid will give you the predicted class, but not a probability.
EDIT: For NearestCentroid it is not possible to compute a score. This is simply a limitation of the model. It assigns a class to each sample, but not a probability of that class. I guess if you need to use Nearest Centroid and you want a probability, you can use some ensemble method: train a bunch of models on subsets of your training data, and average their predictions on your test set. That could give you a score. See scikit-learn.org/stable/modules/ensemble.html#bagging
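A minimal sketch of that bagging idea (an illustration, not tested against your setup; it assumes scikit-learn's BaggingClassifier and the data_train / target_train / data_test / target_test arrays from the question):

from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import roc_curve

# Bag NearestCentroid models; when the base estimator has no predict_proba,
# BaggingClassifier averages the hard votes, which can serve as a score.
bagged_nc = BaggingClassifier(NearestCentroid(metric='euclidean'), n_estimators=50, random_state=0)
bagged_nc.fit(data_train, target_train)
scores = bagged_nc.predict_proba(data_test)[:, 1]  # fraction of models voting for the second class
fpr, tpr, thresholds = roc_curve(target_test, scores)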
To get the class probabilities you can do something like (untested code):
from sklearn.utils.extmath import softmax
from sklearn.metrics.pairwise import pairwise_distances

def predict_proba(self, X):
    # distances to each centroid; negate them so that closer centroids get higher probability
    distances = pairwise_distances(X, self.centroids_, metric=self.metric)
    probs = softmax(-distances)
    return probs

clf = NearestCentroid()
clf.fit(X_train, y_train)
predict_proba(clf, X_test)
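Those scores can then be fed to roc_curve just like for the other classifiers (a sketch continuing the snippet above; it assumes a binary problem with matching y_test labels, and that column 1 corresponds to the positive class — check clf.classes_ to be sure):

from sklearn.metrics import roc_curve

probs = predict_proba(clf, X_test)
fpr, tpr, thresholds = roc_curve(y_test, probs[:, 1])  # column 1 assumed to be the positive class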
I am working with an imbalanced dataset. I have applied the SMOTE algorithm to balance the dataset after splitting it into training and test sets, before applying ML models. I want to apply cross-validation and plot the ROC curves of each fold, showing the AUC of each fold, and also display the mean of the AUCs in the plot. I named the resampled training set variables X_train_res and y_train_res, and the following is the code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

cv = StratifiedKFold(n_splits=10)
classifier = SVC(kernel='sigmoid', probability=True, random_state=0)

tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)

plt.figure(figsize=(10, 10))
i = 0
for train, test in cv.split(X_train_res, y_train_res):
    probas_ = classifier.fit(X_train_res[train], y_train_res[train]).predict_proba(X_train_res[test])
    # Compute ROC curve and area under the curve
    fpr, tpr, thresholds = roc_curve(y_train_res[test], probas_[:, 1])
    tprs.append(np.interp(mean_fpr, fpr, tpr))
    tprs[-1][0] = 0.0
    roc_auc = auc(fpr, tpr)
    aucs.append(roc_auc)
    plt.plot(fpr, tpr, lw=1, alpha=0.3,
             label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc))
    i += 1

plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
         label='Chance', alpha=.8)

mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
plt.plot(mean_fpr, mean_tpr, color='b',
         label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
         lw=2, alpha=.8)

std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
                 label=r'$\pm$ 1 std. dev.')

plt.xlim([-0.01, 1.01])
plt.ylim([-0.01, 1.01])
plt.xlabel('False Positive Rate', fontsize=18)
plt.ylabel('True Positive Rate', fontsize=18)
plt.title('Cross-Validation ROC of SVM', fontsize=18)
plt.legend(loc="lower right", prop={'size': 15})
plt.show()
The following is the output:
Please tell me whether the code is correct for plotting the ROC curve for cross-validation or not.
The problem is that I do not clearly understand cross-validation: in the for loop, I have passed the training sets of the X and y variables. Does cross-validation work like this?
Leaving SMOTE and the imbalance issue aside, which are not included in your code, your procedure looks correct.
In more detail, for each one of your n_splits=10:
you create train and test folds
you fit the model using the train fold:
classifier.fit(X_train_res[train], y_train_res[train])
and then you predict probabilities using the test fold:
predict_proba(X_train_res[test])
This is exactly the idea behind cross-validation.
So, since you have n_splits=10, you get 10 ROC curves and respective AUC values (and their average), exactly as expected.
However:
The need for (SMOTE) upsampling due to the class imbalance changes the correct procedure and renders your overall process incorrect: you should not upsample your initial dataset; instead, you need to incorporate the upsampling procedure into the CV process.
So, the correct procedure here for each one of your n_splits becomes (notice that starting with a stratified CV split, as you have done, becomes essential in class imbalance cases; see the code sketch after this list):
create train and test folds
upsample your train fold with SMOTE
fit the model using the upsampled train fold
predict probabilities using the test fold (not upsampled)
For details regarding the rationale, please see my own answer in the Data Science SE thread Why you shouldn't upsample before cross validation.
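A minimal sketch of that per-fold procedure, assuming the imbalanced-learn (imblearn) package and that X, y are your original, non-upsampled arrays:

from imblearn.over_sampling import SMOTE
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

cv = StratifiedKFold(n_splits=10)
for train, test in cv.split(X, y):
    # upsample the train fold only
    X_res, y_res = SMOTE(random_state=0).fit_resample(X[train], y[train])
    classifier = SVC(kernel='sigmoid', probability=True, random_state=0)
    # predict probabilities on the untouched (not upsampled) test fold
    probas_ = classifier.fit(X_res, y_res).predict_proba(X[test])
    fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])
    print('fold AUC:', auc(fpr, tpr))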
I'm using scikit-learn, and I want to plot precision and recall curves. The classifier I'm using is RandomForestClassifier. All the resources in the scikit-learn documentation use binary classification. Also, can I plot a ROC curve for multiclass?
Also, I only found a multilabel example for SVM, and it uses decision_function, which RandomForest doesn't have.
From scikit-learn documentation:
Precision-Recall:
Precision-recall curves are typically used in binary classification to
study the output of a classifier. In order to extend the
precision-recall curve and average precision to multi-class or
multi-label classification, it is necessary to binarize the output.
One curve can be drawn per label, but one can also draw a
precision-recall curve by considering each element of the label
indicator matrix as a binary prediction (micro-averaging).
Receiver Operating Characteristic (ROC):
ROC curves are typically used in binary classification to study the
output of a classifier. In order to extend ROC curve and ROC area to
multi-class or multi-label classification, it is necessary to binarize
the output. One ROC curve can be drawn per label, but one can also
draw a ROC curve by considering each element of the label indicator
matrix as a binary prediction (micro-averaging).
Therefore, you should binarize the output and consider precision-recall and roc curves for each class. Moreover, you are going to use predict_proba to get class probabilities.
I divide the code into three parts:
general settings, learning and prediction
precision-recall curve
ROC curve
1. general settings, learning and prediction
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import precision_recall_curve, roc_curve
from sklearn.preprocessing import label_binarize
#%matplotlib inline

mnist = fetch_openml("mnist_784")
y = mnist.target
y = y.astype(np.uint8)
n_classes = len(set(y))

# binarize the integer labels (not the original string targets)
Y = label_binarize(y, classes=[*range(n_classes)])

X_train, X_test, y_train, y_test = train_test_split(mnist.data,
                                                    Y,
                                                    random_state=42)

clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=50,
                                                 max_depth=3,
                                                 random_state=0))
clf.fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
2. precision-recall curve
# precision-recall curve
precision = dict()
recall = dict()
for i in range(n_classes):
    precision[i], recall[i], _ = precision_recall_curve(y_test[:, i],
                                                        y_score[:, i])
    plt.plot(recall[i], precision[i], lw=2, label='class {}'.format(i))

plt.xlabel("recall")
plt.ylabel("precision")
plt.legend(loc="best")
plt.title("precision vs. recall curve")
plt.show()
3. ROC curve
# ROC curve
fpr = dict()
tpr = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i],
                                  y_score[:, i])
    plt.plot(fpr[i], tpr[i], lw=2, label='class {}'.format(i))

plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc="best")
plt.title("ROC curve")
plt.show()
Using this code:
from sklearn import metrics
import numpy as np
import matplotlib.pyplot as plt
y_true = [1,0,0]
y_predict = [.6,.1,.1]
fpr, tpr, thresholds = metrics.roc_curve(y_true, y_predict , pos_label=1)
print(fpr)
print(tpr)
print(thresholds)
# Plot ROC curve
plt.plot(fpr,tpr)
plt.show()
y_true = [1,0,0]
y_predict = [.6,.1,.6]
fpr, tpr, thresholds = metrics.roc_curve(y_true, y_predict , pos_label=1)
print(fpr)
print(tpr)
print(thresholds)
# Plot ROC curve
plt.plot(fpr,tpr)
plt.show()
the following ROC curves are plotted:
scikit-learn sets the thresholds, but I would like to set custom thresholds.
For example, for the values:
y_true = [1,0,0]
y_predict = [.6,.1,.6]
the following thresholds are returned:
[1.6 0.6 0.1]
Why does the value 1.6 not appear in the ROC curve? Is the threshold 1.6 redundant in this case, since the probabilities range from 0 to 1? Can custom thresholds be set, e.g. .3, .5, .7, to check how well the classifier performs in those cases?
Update:
From https://sachinkalsi.github.io/blog/category/ml/2018/08/20/top-8-performance-metrics-one-should-know.html#receiver-operating-characteristic-curve-roc I used the same true and predicted values:
from sklearn import metrics
import numpy as np
import matplotlib.pyplot as plt
y_true = [1,1,1,0]
y_predict = [.94,.87,.83,.80]
fpr, tpr, thresholds = metrics.roc_curve(y_true, y_predict , pos_label=1)
print('false positive rate:', fpr)
print('true positive rate:', tpr)
print('thresholds:', thresholds)
# Plot ROC curve
plt.plot(fpr,tpr)
plt.show()
which produces this plot:
The plot is different from the plot referenced in the blog, and the thresholds are different too:
Also, the thresholds returned by scikit-learn's metrics.roc_curve are: thresholds: [0.94 0.83 0.8 ]. Shouldn't scikit-learn return a similar ROC curve, since it is using the same points? Should I implement the ROC curve myself instead of relying on the scikit-learn implementation, since the results are different?
Thresholds won't appear in the ROC curve. The scikit-learn documentation says:
thresholds[0] represents no instances being predicted and is arbitrarily set to max(y_score) + 1
If y_predict contains 0.3, 0.5, 0.7, then those thresholds will be tried by the metrics.roc_curve function.
Typically, these steps are followed when calculating a ROC curve:
1. Sort y_predict in descending order.
2. For each of the probability scores (let's say τ_i) in y_predict, if y_predict >= τ_i, then consider that data point as positive.
P.S.: If we have N data points, then we will have N thresholds (if the combinations of y_true and y_predict are unique).
3. For each of the y_predict (τ_i) values, calculate TPR & FPR.
4. Plot the ROC curve by taking the N (no. of data points) TPR, FPR pairs.
You can refer to this blog for detailed information.
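If you do want to inspect the classifier at hand-picked thresholds, you can compute TPR and FPR manually; a minimal sketch (the thresholds .3, .5, .7 are just the example values from the question):

import numpy as np

y_true = np.array([1, 0, 0])
y_score = np.array([0.6, 0.1, 0.6])

for thr in [0.3, 0.5, 0.7]:
    y_pred = (y_score >= thr).astype(int)  # label as positive if score >= threshold
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    print('threshold=%.1f: TPR=%.2f, FPR=%.2f' % (thr, tpr, fpr))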
I'm trying to draw an average plot in the following way:
Compute precision-recall curve for all folds.
Compute the average precision-recall curve. I don't know how to do this because the dimensions differ between folds.
Draw the curve that was computed in the second step.
P.S. The solution from Plotting Precision-Recall curve when using cross-validation in scikit-learn is not suitable, because if I compute the average of all predictions and then compute the precision-recall curve, I will get AUC = 1.0, which is wrong.
I want to get something like this:
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
scores = []
for train, test in kfold:
    true, pred = clf.predict(test)
    precision, recall, _ = precision_recall_curve(true, pred)
    scores.append((precision, recall))

precision_avg, recall_avg = compute_average(scores)
plt.plot(precision_avg, recall_avg)
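One possible way to fill in compute_average is to interpolate each fold's curve onto a common recall grid before averaging (a sketch under that assumption; compute_average is the hypothetical helper from the snippet above, not a scikit-learn function):

import numpy as np

def compute_average(scores, n_points=100):
    # scores is a list of (precision, recall) tuples, one per fold
    recall_grid = np.linspace(0, 1, n_points)
    interp_precisions = []
    for precision, recall in scores:
        # precision_recall_curve returns recall in decreasing order;
        # flip both arrays so np.interp gets increasing x-values
        interp_precisions.append(np.interp(recall_grid, recall[::-1], precision[::-1]))
    return np.mean(interp_precisions, axis=0), recall_grid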
I'm doing binary classification. I have imbalanced data and I've used SVM class weights to try to mitigate the situation...
As you can see, I've calculated and plotted the ROC curve for each class and I've got the following plot:
It looks like the two curves sum to one, and I'm not sure if I'm doing the right thing, because it's the first time I've drawn my own ROC curve. I'm using scikit-learn to plot. Is it right to plot each class alone, and is the classifier failing to classify the blue class?
This is the code that I've used to get the plot:
y_pred = clf.predict_proba(X_test)[:, 0]   # probability of the first class
y_pred2 = clf.predict_proba(X_test)[:, 1]  # probability of the second class

fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
auc = metrics.auc(fpr, tpr)
print("auc for the first class", auc)

fpr2, tpr2, thresholds2 = metrics.roc_curve(y_test, y_pred2)
auc2 = metrics.auc(fpr2, tpr2)
print("auc for the second class", auc2)

# plotting the ROC curves
plt.plot(fpr, tpr)
plt.plot(fpr2, tpr2)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend(loc="lower right")
plt.show()
I know there is a better way to write this (as a dictionary, for example), but I was just trying to see the curve first.
See the Wikipedia entry for all your ROC curve needs :)
predict_proba returns class probabilities for each class. The first column contains the probability of the first class and the second column contains the probability of the second class. Note that the two curves are rotated versions of each other. That is because the class probabilities add up to 1.
The documentation of roc_curve states that the second parameter must contain
Target scores, can either be probability estimates of the positive class or confidence values.
This means you have to pass the probabilities that correspond to class 1. Most likely this is the second column.
You get the blue curve because you passed the probabilities of the wrong class (first column). Only the green curve is correct.
It does not make sense to compute ROC curves for each class, because the ROC curve describes the ability of the classifier to distinguish two classes. You have only one curve per classifier.
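A small sketch of the fix, reusing clf, X_test, y_test and metrics from the question (clf.classes_ tells you which column belongs to which class):

from sklearn import metrics

positive_col = list(clf.classes_).index(1)            # column index of class 1
y_score = clf.predict_proba(X_test)[:, positive_col]
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_score, pos_label=1)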
The specific problem is a coding mistake.
predict_proba returns class probabilities (1 if it's certainly the class, 0 if it is definitely not the class, usually it's something in between).
metrics.roc_curve(y_test, y_pred) now compares class labels against probabilities, which is like comparing pears against apple juice.
You should use predict instead of predict_proba to predict class labels and not probabilities. These can be compared against the true class labels for computing the ROC curve. Incidentally, this also removes the option to plot a second curve - you only get one curve for the classifier, not one for each class.
You have to rethink the whole approach. The ROC curve indicates the quality of a classifier at different "probability" thresholds, not the quality of individual classes. Usually, the diagonal line (corresponding to an AUC of 0.5) is the benchmark for whether your classifier is able to beat a random guess.
It's because, while building the ROC curve for class 0, roc_curve treats '0' in y_test as boolean False for your target class.
Try changing:
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
to
fpr, tpr, thresholds = metrics.roc_curve(1 - y_test, y_pred)