I have a classifier that outputs a proportion X between 0 and 1. I also have an associated ground truth which is the real proportion.
I want to predict 1 when the output of the classifier is greater than some threshold and 0 otherwise.
From data visualization I know that a good threshold is around 0.5.
How can I estimate the best threshold from the data?
Here is an example of my data
predicted = [0.13675214, 0.31400966, 0.28037383, 0.18337408, 0.10043668, 0.6,
             0.74242424, 0.30853994, 0.30588235, 0.24766355, 0.19806763, 0.20512821,
             0.29752066, 0.23504274, 0.14133333, 0.52733119, 0.46039604, 0.56306306,
             0.29059829, 0.02890173, 0.2962963, 0.47008547, 0.54545455, 0.58119658,
             0.3, 0.66242038, 0.42066421]
ground_truth = [0.11111111, 0.647343, 0.21028037, 0.20293399, 0., 0.93333333,
                1., 0.07162534, 0.61176471, 0.21028037, 0.647343, 0.11111111,
                0.07162534, 0.5, 0.08, 0.88424437, 0.58415842, 0.74774775,
                0.11111111, 0.03468208, 0., 0.5, 0., 0.91168091,
                1., 0.96178344, 0.10701107]
desired_output = [0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0]
Thank you
Precision, recall and F1-score depend on the probability threshold: changing the cut-off used to decide that a sample belongs to the positive class changes precision and recall, and therefore the F1-score. Below is my attempt to plot precision, recall and F1-score against the discrimination threshold. The plot also marks the optimal threshold for the given dataset and model, which by default is the threshold at which the F1-score is highest.
import numpy as np
import pandas as pd
import pathlib
import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)
from sklearn.metrics import confusion_matrix as cm_sklearn
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
def plot_discrimination_threshold(clf, X_test, y_test, argmax='f1',
                                  title='Metrics vs Discriminant Threshold',
                                  fig_size=(10, 8), dpi=100, save_fig_path=None):
    """
    Plot precision, recall and f1-score vs discriminant threshold for the given pipeline model.

    Parameters
    ----------
    clf : estimator instance (either sklearn.Pipeline, imblearn.Pipeline or a classifier)
        PRE-FITTED classifier or a PRE-FITTED Pipeline in which the last estimator is a classifier.
    X_test : pandas.DataFrame of shape (n_samples, n_features)
        Test features.
    y_test : pandas.Series of shape (n_samples,)
        Target values.
    argmax : str, default='f1'
        Annotate the threshold that maximizes the supplied metric. Options: 'f1', 'precision', 'recall'.
    title : str, default='Metrics vs Discriminant Threshold'
        Plot title.
    fig_size : tuple, default=(10, 8)
        Size (inches) of the plot.
    dpi : int, default=100
        Image DPI.
    save_fig_path : str, default=None
        Full path where to save the plot. Will generate the folders if they don't exist already.

    Returns
    -------
    fig : matplotlib.figure.Figure
        Figure from matplotlib.
    ax : matplotlib.axes.Axes
        Axes object from matplotlib.
    disc_threshold : float
        The optimal discrimination threshold, rounded to two decimals.
    """
    thresholds = np.linspace(0, 1, 100)
    precision_ls = []
    recall_ls = []
    f1_ls = []
    fpr_ls = []
    fnr_ls = []

    # obtain probabilities
    probs = clf.predict_proba(X_test)[:, 1]

    for threshold in thresholds:
        # obtain class prediction based on threshold
        y_predictions = np.where(probs >= threshold, 1, 0)
        # obtain confusion matrix
        tn, fp, fn, tp = cm_sklearn(y_test, y_predictions).ravel()
        # obtain FPR and FNR
        FPR = fp / (tn + fp)
        FNR = fn / (tp + fn)
        # obtain precision, recall and f1 scores
        precision = precision_score(y_test, y_predictions, average='binary')
        recall = recall_score(y_test, y_predictions, average='binary')
        f1 = f1_score(y_test, y_predictions, average='binary')
        precision_ls.append(precision)
        recall_ls.append(recall)
        f1_ls.append(f1)
        fpr_ls.append(FPR)
        fnr_ls.append(FNR)

    metrics = pd.concat([
        pd.Series(precision_ls),
        pd.Series(recall_ls),
        pd.Series(f1_ls),
        pd.Series(fpr_ls),
        pd.Series(fnr_ls)], axis=1)
    metrics.columns = ['precision', 'recall', 'f1', 'fpr', 'fnr']
    metrics.index = thresholds

    plt.rcParams["figure.facecolor"] = 'white'
    plt.rcParams["axes.facecolor"] = 'white'
    plt.rcParams["savefig.facecolor"] = 'white'

    fig, ax = plt.subplots(1, 1, figsize=fig_size, dpi=dpi)
    ax.plot(metrics['precision'], label='Precision')
    ax.plot(metrics['recall'], label='Recall')
    ax.plot(metrics['f1'], label='f1')
    ax.plot(metrics['fpr'], label='False Positive Rate (FPR)', linestyle='dotted')
    ax.plot(metrics['fnr'], label='False Negative Rate (FNR)', linestyle='dotted')

    # Draw a threshold line at the maximum of the chosen metric
    disc_threshold = round(metrics[argmax].idxmax(), 2)
    ax.axvline(x=metrics[argmax].idxmax(), color='black', linestyle='dashed',
               label="$t_r$=" + str(disc_threshold))

    ax.xaxis.set_major_locator(MultipleLocator(0.1))
    ax.xaxis.set_major_formatter('{x:.1f}')
    ax.yaxis.set_major_locator(MultipleLocator(0.1))
    ax.yaxis.set_major_formatter('{x:.1f}')
    ax.xaxis.set_minor_locator(MultipleLocator(0.05))
    ax.yaxis.set_minor_locator(MultipleLocator(0.05))
    ax.tick_params(which='both', width=2)
    ax.tick_params(which='major', length=7)
    ax.tick_params(which='minor', length=4, color='black')

    plt.grid(True)
    plt.xlabel('Probability Threshold', fontsize=18)
    plt.ylabel('Scores', fontsize=18)
    plt.title(title, fontsize=18)
    leg = ax.legend(loc='best', frameon=True, framealpha=0.7)
    leg_frame = leg.get_frame()
    leg_frame.set_color('gold')
    plt.show()

    if save_fig_path is not None:
        path = pathlib.Path(save_fig_path)
        path.parent.mkdir(parents=True, exist_ok=True)
        fig.savefig(save_fig_path, dpi=dpi)

    return fig, ax, disc_threshold
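A minimal usage sketch (assuming clf is an already-fitted binary classifier and X_test / y_test are held-out features and labels; these names are placeholders, not part of the question):
fig, ax, best_threshold = plot_discrimination_threshold(clf, X_test, y_test, argmax='f1')
print(f"Threshold maximizing F1: {best_threshold}")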
You seem to have a baseline accuracy of around 90% already:
delt = predicted - ground_truth  # all but 2 of the 20 differences appear to fall within 0.4
More examples of the model's predicted values would perhaps illustrate the ranges better.
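If the goal is simply to pick the threshold that best reproduces desired_output from predicted, one straightforward option is to sweep candidate thresholds and keep the one with the highest F1-score. A minimal sketch, assuming the three lists from the question are in scope:
import numpy as np
from sklearn.metrics import f1_score

pred = np.array(predicted)          # classifier outputs from the question
target = np.array(desired_output)   # desired 0/1 labels from the question
thresholds = np.linspace(0, 1, 101)
scores = [f1_score(target, (pred >= t).astype(int)) for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]
print("best threshold:", best_t, "F1:", max(scores))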
Related
I am playing around with a DBSCAN example in order to see if it will work for me. In my case, I have clusters of a few points (3-5) close together, with a fairly long distance between clusters. I have tried to replicate the situation in the following code. I figured that with a low epsilon and low min_samples this should work, but instead it tells me that it only sees 1 group (and 20 noise points?). Am I using this incorrectly, or is DBSCAN not good for this type of problem? I went with DBSCAN instead of k-means because I don't know beforehand exactly how many clusters there will be (1-5).
from sklearn.datasets import make_blobs
from sklearn.cluster import DBSCAN
import numpy as np
import matplotlib.pyplot as plt
# Configuration options
num_samples_total = 20
cluster_centers = [(3,3), (7,7),(7,3),(3,7),(5,5)]
num_classes = len(cluster_centers)
#epsilon = 1.0
epsilon = 1e-5
#min_samples = 13
min_samples = 2
# Generate data
X, y = make_blobs(n_samples = num_samples_total, centers = cluster_centers, n_features = num_classes, center_box=(0, 1), cluster_std = 0.05)
np.save('./clusters.npy', X)
X = np.load('./clusters.npy')
# Compute DBSCAN
db = DBSCAN(eps=epsilon, min_samples=min_samples).fit(X)
labels = db.labels_
no_clusters = len(np.unique(labels) )
no_noise = np.sum(np.array(labels) == -1, axis=0)
print('Estimated no. of clusters: %d' % no_clusters)
print('Estimated no. of noise points: %d' % no_noise)
# Generate scatter plot for training data
colors = list(map(lambda x: '#3b4cc0' if x == 1 else '#b40426', labels)) #only set for 2 colors
plt.scatter(X[:,0], X[:,1], c=colors, marker="o", picker=True)
plt.title('Two clusters with data')
plt.xlabel('Axis X[0]')
plt.ylabel('Axis X[1]')
plt.show()
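For what it's worth, with cluster_std=0.05 the points inside one blob are typically only a few hundredths apart, so eps=1e-5 leaves every point without neighbours and everything becomes noise. A sketch with a larger radius (the exact value is an assumption to be tuned, not taken from the question):
# eps on the order of the within-cluster spread instead of 1e-5
db = DBSCAN(eps=0.3, min_samples=2).fit(X)
print('clusters:', len(set(db.labels_) - {-1}), 'noise points:', int(np.sum(db.labels_ == -1)))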
I ended up going with k-means and a modified elbow method:
print(__doc__)
# Author: Phil Roth <mr.phil.roth@gmail.com>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
# Configuration options
num_samples_total = 20
cluster_centers = [(3,3), (7,7),(7,3),(3,7),(5,5)]
num_classes = len(cluster_centers)
#epsilon = 1.0
epsilon = 1e-5
#min_samples = 13
min_samples = 2
# Generate data
X, y = make_blobs(n_samples = num_samples_total, centers = cluster_centers, n_features = num_classes, center_box=(0, 1), cluster_std = 0.05)
random_state = 170
#y_pred = KMeans(n_clusters=5, random_state=random_state).fit_predict(X)
#plt.scatter(X[:, 0], X[:, 1], c=y_pred)
#kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
# maybe I don't have to look for an elbow, just go until the value drops below 1.
# also if I do go too far, it just means that the same shape will be shown twice.
clusterIdx = 0
inertia = 100
while inertia > 1:
    clusterIdx = clusterIdx + 1
    kmeans = KMeans(n_clusters=clusterIdx, random_state=0).fit(X)
    inertia = kmeans.inertia_
    print(inertia)

plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_)
print(clusterIdx)
plt.show()
On each cell of the heatmap I would like to have the actual number of predictions.
I don't really care whether it's percentages or counts. I would also like to label them with True Positive, False Negative, and so on.
The Code:
sns.heatmap(pd.crosstab(ytest, classifier.predict(xtest)), cmap='Spectral')
plt.xlabel('predicted')
plt.ylabel('actual')
plt.show()
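If only the raw counts are needed, seaborn can annotate the heatmap directly; a minimal tweak of the snippet above (same ytest, classifier and xtest assumed):
sns.heatmap(pd.crosstab(ytest, classifier.predict(xtest)), cmap='Spectral', annot=True, fmt='d')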
I use the code below to do what you want, though a Google search will also give you the answer.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

def find_best_threshold(threshold, fpr, tpr):
    t = threshold[np.argmax(tpr * (1 - fpr))]
    ### TPR * TNR ---> We are trying to maximize TNR and TPR
    print("the maximum value of tpr*(1-fpr)", max(tpr * (1 - fpr)), "for threshold", np.round(t, 3))
    return t

def predict_with_best_thresh(prob, t):
    pred = [1 if i >= t else 0 for i in prob]
    return pred

### https://medium.com/@dtuk81/confusion-matrix-visualization-fc31e3f30fea
def conf_matrix_plot(cf_matrix, title):
    group_names = ['True Neg', 'False Pos', 'False Neg', 'True Pos']
    group_counts = ["{0:0.0f}".format(value) for value in cf_matrix.flatten()]
    group_percentages = ["{0:.2%}".format(value) for value in cf_matrix.flatten() / np.sum(cf_matrix)]
    labels = [f"{v1}\n{v2}\n{v3}" for v1, v2, v3 in zip(group_names, group_counts, group_percentages)]
    labels = np.asarray(labels).reshape(2, 2)
    #sns.set(font_scale=1.5)
    sns.heatmap(cf_matrix, annot=labels, fmt='', cmap='coolwarm').set_title(title + ' Confusion Matrix for TFIDF')
    # sklearn's confusion_matrix puts actual classes on rows and predicted classes on columns
    plt.xlabel('Predicted')
    plt.ylabel('Actual')

best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
cf_matrix_train = confusion_matrix(y_train, predict_with_best_thresh(y_train_pred[:, 1], best_t))
cf_matrix_test = confusion_matrix(y_test, predict_with_best_thresh(y_test_pred[:, 1], best_t))
conf_matrix_plot(cf_matrix_train, 'Train')
Result:
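Note that the snippet assumes train_fpr, train_tpr and tr_thresholds already exist; they would typically come from sklearn's roc_curve applied to the training scores, roughly like this (variable names mirror the snippet and are otherwise placeholders):
from sklearn.metrics import roc_curve

# y_train: true labels, y_train_pred[:, 1]: positive-class probabilities
train_fpr, train_tpr, tr_thresholds = roc_curve(y_train, y_train_pred[:, 1])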
I am performing semantic segmentation (with materials as classes) on images, and wish to calculate precision-recall curves of my accuracy. Currently, I calculate the true positives, false positives and false negatives for each class by summing the number of pixels for which ground truth and prediction agree with that class, for which only the prediction agrees, and for which only the ground truth agrees, respectively. Then I calculate precision and recall accordingly:
pixel_probs = np.array(pixel_probs) # shape (num_pixels), the classification certainty for each pixel
pixel_labels_pred, pixel_labels_gt = np.array(pixel_labels_pred).astype(bool), np.array(pixel_labels_gt).astype(bool) # shape (num_pixels, num_classes), one hot labels for each pixel
precision_mat, recall_mat = np.array([]).reshape(num_labels, 0), np.array([]).reshape(num_labels, 0) # stores the precision-recall pairs for each certainty threshold
prev_num_pixels = sum(pixel_probs > 0.0)
for threshold in sorted(thresholds):
    pixel_mask = pixel_probs > threshold
    if sum(pixel_mask) == prev_num_pixels: continue
    prev_num_pixels == sum(pixel_mask)
    pixel_labels_pred_msk = pixel_labels_pred[pixel_mask]
    pixel_labels_gt_msk = pixel_labels_gt[pixel_mask]
    tps = np.sum(np.logical_and(pixel_labels_gt_msk, pixel_labels_pred_msk), axis=0)
    fps = np.sum(np.logical_and(np.logical_not(pixel_labels_gt_msk), pixel_labels_pred_msk), axis=0)
    fns = np.sum(np.logical_and(pixel_labels_gt_msk, np.logical_not(pixel_labels_pred_msk)), axis=0)
    precisions = tps / (tps + fps)
    recalls = tps / (tps + fns)
    precision_mat = np.concatenate([precision_mat, np.expand_dims(precisions, axis=-1)], axis=-1)
    recall_mat = np.concatenate([recall_mat, np.expand_dims(recalls, axis=-1)], axis=-1)
fig = plt.figure()
fig.set_size_inches(12, 5)
for label_index in range(precision_mat.shape[0]):
    r = recall_mat[label_index]
    p = precision_mat[label_index]
    sort_order = np.argsort(r)
    r = r[sort_order]
    p = p[sort_order]
    plt.plot(r, p, '-o', markersize=2, label=labels[label_index])
plt.title("Precision-recall curve")
plt.legend(loc='upper left', fontsize=8.5, ncol=1, bbox_to_anchor=(1, 1))
plt.xlabel('recall', fontsize=12)
plt.ylabel('precision', fontsize=12)
plt.savefig(dir + "test/pr_curves.png")
However, this produces some very strange-looking graphs:
It is true that my segmentation model is performing rather poorly, but I would at least expect the curves to follow more or less a downward slope.
Am I calculating my PR-curve correctly? Are there alternative ways of calculating such curves that I should consider? Is there perhaps a bug in my plotting code?
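As a cross-check, sklearn's precision_recall_curve handles the threshold sweep and the sorting for you. A sketch, assuming per-class scores are available for each pixel as pixel_scores of shape (num_pixels, num_classes) (a quantity not present in the snippet above):
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

for c in range(pixel_labels_gt.shape[1]):
    p, r, _ = precision_recall_curve(pixel_labels_gt[:, c], pixel_scores[:, c])
    plt.plot(r, p, label=labels[c])
plt.legend()
plt.show()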
I have the histogram of my input data (in black) given in the following graph:
I'm trying to fit a Gamma distribution, not to the whole data set but just to the first mode of the histogram. The green plot in the previous graph corresponds to fitting the Gamma distribution on all the samples, using the following Python code which makes use of scipy.stats.gamma:
img = IO.read(input_file)
data = img.flatten() + abs(np.min(img)) + 1
# calculate dB positive image
img_db = 10 * np.log10(img)
img_db_pos = img_db + abs(np.min(img_db))
data = img_db_pos.flatten() + 1
# data histogram
n, bins, patches = plt.hist(data, 1000, density=True)
# slice histogram here
# estimation of the parameters of the gamma distribution
fit_alpha, fit_loc, fit_beta = gamma.fit(data, floc=0)
x = np.linspace(0, 100)
y = gamma.pdf(x, fit_alpha, fit_loc, fit_beta)
print('(alpha, beta): (%f, %f)' % (fit_alpha, fit_beta))
# plot estimated model
plt.plot(x, y, linewidth=2, color='g')
plt.show()
How can I restrict the fitting only to the interesting subset of this data?
Update1 (slicing):
I sliced the input data by keeping only values below the max of the previous histogram, but the results were not really convincing:
This was achieved by inserting the following code below the # slice histogram here comment in the previous code:
max_data = bins[np.argmax(n)]
data = data[data < max_data]
Update2 (scipy.optimize.minimize):
The code below shows how scipy.optimize.minimize() is used to minimize an energy function to find (alpha, beta):
import matplotlib.pyplot as plt
import numpy as np
from geotiff.io import IO
from scipy.stats import gamma
from scipy.optimize import minimize
def truncated_gamma(x, max_data, alpha, beta):
    gammapdf = gamma.pdf(x, alpha, loc=0, scale=beta)
    norm = gamma.cdf(max_data, alpha, loc=0, scale=beta)
    return np.where(x < max_data, gammapdf / norm, 0)
# read image
img = IO.read(input_file)
# calculate dB positive image
img_db = 10 * np.log10(img)
img_db_pos = img_db + abs(np.min(img_db))
data = img_db_pos.flatten() + 1
# data histogram
n, bins = np.histogram(data, 100, density=True)
# using minimize on a slice data below max of histogram
max_data = bins[np.argmax(n)]
data = data[data < max_data]
data = np.random.choice(data, 1000)
energy = lambda p: -np.sum(np.log(truncated_gamma(data, max_data, *p)))
initial_guess = [np.mean(data), 2.]
o = minimize(energy, initial_guess, method='SLSQP')
fit_alpha, fit_beta = o.x
# plot data histogram and model
x = np.linspace(0, 100)
y = gamma.pdf(x, fit_alpha, 0, fit_beta)
plt.hist(data, 30, density=True)
plt.plot(x, y, linewidth=2, color='g')
plt.show()
The algorithm above converged for a subset of data, and the output in o was:
x: array([ 16.66912781, 6.88105559])
But as can be seen on the screenshot below, the gamma plot doesn't fit the histogram:
You can use a general optimization tool such as scipy.optimize.minimize to fit a truncated version of the desired function, resulting in a nice fit:
First, the modified function:
def truncated_gamma(x, alpha, beta):
    gammapdf = gamma.pdf(x, alpha, loc=0, scale=beta)
    norm = gamma.cdf(max_data, alpha, loc=0, scale=beta)
    return np.where(x < max_data, gammapdf / norm, 0)
This selects values from the gamma distribution where x < max_data, and zero elsewhere. The np.where part is not actually important here, because the data is exclusively to the left of max_data anyway. The key is normalization, because varying alpha and beta will change the area to the left of the truncation point in the original gamma.
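A quick numerical sanity check of that normalization, with arbitrary parameters (not fitted values):
from scipy.stats import gamma
from scipy.integrate import quad

alpha, beta, max_data = 2.0, 3.0, 4.0   # arbitrary values for the check
area, _ = quad(lambda x: gamma.pdf(x, alpha, loc=0, scale=beta)
               / gamma.cdf(max_data, alpha, loc=0, scale=beta), 0, max_data)
print(area)  # ~1.0, i.e. the truncated pdf integrates to one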
The rest is just optimization technicalities.
It's common practice to work with logarithms, so I used what's sometimes called the "energy", i.e. the negative logarithm of the probability density.
energy = lambda p: -np.sum(np.log(truncated_gamma(data, *p)))
Minimize:
initial_guess = [np.mean(data), 2.]
o = minimize(energy, initial_guess, method='SLSQP')
fit_alpha, fit_beta = o.x
My output is (alpha, beta): (11.595208, 824.712481). Like the original, it is a maximum likelihood estimate.
If you're not happy with the convergence rate, you may want to
Select a sample from your rather big dataset:
data = np.random.choice(data, 10000)
Try different algorithms using the method keyword argument.
Some optimization routines output a representation of the inverse hessian, which is useful for uncertainty estimation. Enforcement of nonnegativity for the parameters may also be a good idea.
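For the nonnegativity suggestion, one option is to hand box constraints to the optimizer; a sketch reusing the energy and initial_guess defined above:
# keep alpha and beta strictly positive via bounds
o = minimize(energy, initial_guess, method='L-BFGS-B',
             bounds=[(1e-6, None), (1e-6, None)])
fit_alpha, fit_beta = o.x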
A log-scaled plot without truncation shows the entire distribution:
Here's another possible approach using a manually created dataset in Excel that more or less matches the plot given.
Raw Data
Outline
Import the data into a Pandas dataframe.
Mask the indices after the max response index.
Create a mirror image of the remaining data.
Append the mirror image while leaving a buffer of empty space.
Fit the desired distribution to the modified data. Below I do a normal fit by the method of moments and adjust the amplitude and width.
Working Script
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Import data to dataframe.
df = pd.read_csv('sample.csv', header=0, index_col=0)
# Mask indices after index at max Y.
mask = df.index.values <= df.Y.argmax()
df = df.loc[mask, :]
scaled_y = 100*df.Y.values
# Create new df with mirror image of Y appended.
sep = 6
app_zeroes = np.append(scaled_y, np.zeros(sep, dtype=float))
mir_y = np.flipud(scaled_y)
new_y = np.append(app_zeroes, mir_y)
# Using Scipy-cookbook to fit a normal by method of moments.
idxs = np.arange(new_y.size) # idxs=[0, 1, 2,...,len(data)]
mid_idxs = idxs.mean() # len(data)/2
# idxs-mid_idxs is [-53.5, -52.5, ..., 52.5, len(data)/2]
scaling_param = np.sqrt(np.abs(np.sum((idxs-mid_idxs)**2*new_y)/np.sum(new_y)))
# adjust amplitude
fmax = new_y.max()*1.2 # adjusted function max to 120% max y.
# adjust width
scaling_param = scaling_param*.7 # adjusted by 70%.
# Fit normal.
fit = lambda t: fmax*np.exp(-(t-mid_idxs)**2/(2*scaling_param**2))
# Plot results.
plt.plot(new_y, '.')
plt.plot(fit(idxs), '--')
plt.show()
Result
See the scipy-cookbook fitting data page for more on fitting a normal using method of moments.
I ran a logistic regression model and made predictions of the logit values. I used this to get the points on the ROC curve:
from sklearn import metrics
fpr, tpr, thresholds = metrics.roc_curve(Y_test,p)
I know metrics.roc_auc_score gives the area under the ROC curve. Can anyone tell me what command will find the optimal cut-off point (threshold value)?
You can do this using the epi package in R; however, I could not find a similar package or example in Python.
The optimal cut-off point would be where the true positive rate is high and the false positive rate is low. Based on this logic, I have pulled together an example below to find the optimal threshold.
Python code:
import pandas as pd
import statsmodels.api as sm
import pylab as pl
import numpy as np
from sklearn.metrics import roc_curve, auc
# read the data in
df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
# rename the 'rank' column because there is also a DataFrame method called 'rank'
df.columns = ["admit", "gre", "gpa", "prestige"]
# dummify rank
dummy_ranks = pd.get_dummies(df['prestige'], prefix='prestige')
# create a clean data frame for the regression
cols_to_keep = ['admit', 'gre', 'gpa']
data = df[cols_to_keep].join(dummy_ranks.loc[:, 'prestige_2':])
# manually add the intercept
data['intercept'] = 1.0
train_cols = data.columns[1:]
# fit the model
result = sm.Logit(data['admit'], data[train_cols]).fit()
print(result.summary())
# Add prediction to dataframe
data['pred'] = result.predict(data[train_cols])
fpr, tpr, thresholds =roc_curve(data['admit'], data['pred'])
roc_auc = auc(fpr, tpr)
print("Area under the ROC curve : %f" % roc_auc)
####################################
# The optimal cut off would be where tpr is high and fpr is low
# tpr - (1-fpr) is zero or near to zero is the optimal cut off point
####################################
i = np.arange(len(tpr)) # index for df
roc = pd.DataFrame({'fpr': pd.Series(fpr, index=i),
                    'tpr': pd.Series(tpr, index=i),
                    '1-fpr': pd.Series(1 - fpr, index=i),
                    'tf': pd.Series(tpr - (1 - fpr), index=i),
                    'thresholds': pd.Series(thresholds, index=i)})
roc.iloc[(roc.tf-0).abs().argsort()[:1]]
# Plot tpr vs 1-fpr
fig, ax = pl.subplots()
pl.plot(roc['tpr'])
pl.plot(roc['1-fpr'], color = 'red')
pl.xlabel('1-False Positive Rate')
pl.ylabel('True Positive Rate')
pl.title('Receiver operating characteristic')
ax.set_xticklabels([])
The optimal cut-off point is 0.317628, so anything above this can be labeled 1, else 0. You can see from the output/chart that where TPR crosses 1-FPR, TPR is 63%, FPR is 36%, and TPR - (1-FPR) is nearest to zero in the current example.
Output:
1-fpr fpr tf thresholds tpr
171 0.637363 0.362637 0.000433 0.317628 0.637795
Hope this is helpful.
Edit
To simplify and bring in re-usability, I have made a function to find the optimal probability cutoff point.
Python Code:
def Find_Optimal_Cutoff(target, predicted):
    """ Find the optimal probability cutoff point for a classification model related to event rate

    Parameters
    ----------
    target : Matrix with dependent or target data, where rows are observations
    predicted : Matrix with predicted data, where rows are observations

    Returns
    -------
    list type, with optimal cutoff value
    """
    fpr, tpr, threshold = roc_curve(target, predicted)
    i = np.arange(len(tpr))
    roc = pd.DataFrame({'tf': pd.Series(tpr - (1 - fpr), index=i),
                        'threshold': pd.Series(threshold, index=i)})
    roc_t = roc.iloc[(roc.tf - 0).abs().argsort()[:1]]
    return list(roc_t['threshold'])
# Add prediction probability to dataframe
data['pred_proba'] = result.predict(data[train_cols])
# Find optimal probability threshold
threshold = Find_Optimal_Cutoff(data['admit'], data['pred_proba'])
print(threshold)
# [0.31762762459360921]
# Find prediction to the dataframe applying threshold
data['pred'] = data['pred_proba'].map(lambda x: 1 if x > threshold[0] else 0)
# Print confusion Matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(data['admit'], data['pred'])
# array([[175, 98],
# [ 46, 81]])
Given tpr, fpr, thresholds from your question, the answer for the optimal threshold is just:
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
Vanilla Python Implementation of Youden's J-Score
def cutoff_youdens_j(fpr, tpr, thresholds):
    j_scores = tpr - fpr
    j_ordered = sorted(zip(j_scores, thresholds))
    return j_ordered[-1][1]
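Usage, given the fpr, tpr and thresholds returned by roc_curve in the question:
best_threshold = cutoff_youdens_j(fpr, tpr, thresholds)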
Another possible solution.
I'll create some random data.
import numpy as np
import pandas as pd
import scipy.stats as sps
from sklearn import linear_model
from sklearn.metrics import roc_curve, RocCurveDisplay, auc
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import seaborn as sns
# define data distributions
N0 = 300
N1 = 250
dist0 = sps.gamma(a=8, scale=1/10)
x0 = np.linspace(dist0.ppf(0), dist0.ppf(1-1e-5), 100)
y0 = dist0.pdf(x0)
dist1 = sps.gamma(a=15, scale=1/10)
x1 = np.linspace(dist1.ppf(0), dist1.ppf(1-1e-5), 100)
y1 = dist1.pdf(x1)
with plt.style.context("bmh"):
    plt.plot(x0, y0, label="NEG")
    plt.plot(x1, y1, label="POS")
    plt.legend()
    plt.title("Gamma distributions")
# create a random dataset
rvs0 = dist0.rvs(N0, random_state=0)
rvs1 = dist1.rvs(N1, random_state=1)
with plt.style.context("bmh"):
    plt.hist(rvs0, alpha=.5, label="NEG")
    plt.hist(rvs1, alpha=.5, label="POS")
    plt.legend()
    plt.title("Random dataset")
Initialize a dataframe with observations (x feature and y target)
df = pd.DataFrame({
"y": np.concatenate(( np.repeat(0, N0) , np.repeat(1, N1) )),
"x": np.concatenate(( rvs0 , rvs1 )),
})
and display it with a box plot
# plot the data
with plt.style.context("bmh"):
    g = sns.catplot(
        kind="box",
        data=df,
        x="y", y="x"
    )
    ax = g.axes.flat[0]
    sns.stripplot(
        data=df,
        x="y", y="x",
        ax=ax, color='k',
        alpha=.25
    )
    plt.show()
Now we can split the dataframe into train and test sets, fit a logistic regression, compute the ROC curve, AUC and Youden's index, find the cut-off, and plot everything, all using pandas.
# split dataset into train-test
X_train, X_test, y_train, y_test = train_test_split(
df[["x"]], df.y.values, test_size=0.5, random_state=1)
# init and fit Logistic Regression on train set
clf = linear_model.LogisticRegression()
clf.fit(X_train, y_train)
# predict probabilities on x test set
y_proba = clf.predict_proba(X_test)
# compute FPR and TPR from y test set and predicted probabilities
fpr, tpr, thresholds = roc_curve(
y_test, y_proba[:,1], drop_intermediate=False)
# compute ROC AUC
roc_auc = auc(fpr, tpr)
# init a dataframe for results
df_test = pd.DataFrame({
"x": X_test.x.values.flatten(),
"y": y_test,
"proba": y_proba[:,1]
})
# sort it by predicted probabilities
# because thresholds[1:] = y_proba[::-1]
df_test.sort_values(by="proba", inplace=True)
# add reversed TPR and FPR
df_test["tpr"] = tpr[1:][::-1]
df_test["fpr"] = fpr[1:][::-1]
# optional: add thresholds to check
#df_test["thresholds"] = thresholds[1:][::-1]
# add Youden's j index
df_test["youden_j"] = df_test.tpr - df_test.fpr
# define the cut_off and display it
cut_off = df_test.sort_values(
by="youden_j", ascending=False, ignore_index=True).iloc[0]
print("CUT-OFF:")
print(cut_off)
# plot everything
with plt.style.context("bmh"):
    fig, ax = plt.subplots(1, 3, figsize=(15, 5))
    RocCurveDisplay(
        fpr=df_test.fpr, tpr=df_test.tpr,
        roc_auc=roc_auc).plot(ax=ax[0])
    ax[0].set_title("ROC curve")
    ax[0].axline(xy1=(0,0), slope=1, color="r", ls=":")
    ax[0].plot(cut_off.fpr, cut_off.tpr, 'ko', ms=10)
    df_test.plot(
        x="youden_j", y="proba", ax=ax[1],
        ylabel="Predicted Probabilities", xlabel="Youden j",
        title="Youden's index", legend=False
    )
    ax[1].axvline(cut_off.youden_j, color="k", ls="--")
    ax[1].axhline(cut_off.proba, color="k", ls="--")
    df_test.plot(
        x="x", y="proba", ax=ax[2],
        ylabel="Predicted Probabilities", xlabel="X Feature",
        title="Cut-Off", legend=False
    )
    ax[2].axvline(cut_off.x, color="k", ls="--")
    ax[2].axhline(cut_off.proba, color="k", ls="--")
    plt.show()
and we get
CUT-OFF:
x 1.065712
y 1.000000
proba 0.378543
tpr 0.852713
fpr 0.143836
youden_j 0.708878
We can finally check
# check results
TP = df_test[(df_test.x>=cut_off.x)&(df_test.y==1)].index.size
FP = df_test[(df_test.x>=cut_off.x)&(df_test.y==0)].index.size
TN = df_test[(df_test.x< cut_off.x)&(df_test.y==0)].index.size
FN = df_test[(df_test.x< cut_off.x)&(df_test.y==1)].index.size
print("True Positive Rate: ", TP / (TP + FN))
print("False Positive Rate:", 1 - TN / (TN + FP))
True Positive Rate: 0.8527131782945736
False Positive Rate: 0.14383561643835618
Although I am late to the party, you can also use the geometric mean to determine the optimal threshold, as stated here: threshold tuning for imbalanced classification.
It can be computed as:
import numpy as np

# calculate the g-mean for each threshold
gmeans = np.sqrt(tpr * (1 - fpr))
# locate the index of the largest g-mean
ix = np.argmax(gmeans)
print('Best Threshold=%f, G-Mean=%.3f' % (thresholds[ix], gmeans[ix]))