How can I save a confusion matrix as a PNG? I've seen this answer:
How to save Confusion Matrix plot so that I can call it for future reference?
from sklearn.metrics import plot_confusion_matrix
y_true = [0,1,1,1,0]
y_pred = [1,1,1,1,0]
# dummy "classifier" whose predict() simply returns its input
IC = type('IdentityClassifier', (), {"predict": lambda i: i, "_estimator_type": "classifier"})
cm = plot_confusion_matrix(IC, y_pred, y_true, normalize='true', values_format='.2%')
cm.figure_.savefig('confusion_matrix.png')
The result that I'm getting is just a black png image.
I think you should update sklearn to the latest version and then use:
from sklearn.metrics import ConfusionMatrixDisplay
y_true = [0,1,1,1,0]
y_pred = [1,1,1,1,0]
IC = type('IdentityClassifier', (), {"predict": lambda i : i, "_estimator_type": "classifier"})
cm = ConfusionMatrixDisplay.from_estimator(IC, y_pred, y_true, normalize='true', values_format='.2%')
cm.figure_.savefig('confusion_matrix.png')
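Alternatively, on scikit-learn 1.0+ you can skip the dummy classifier entirely with ConfusionMatrixDisplay.from_predictions. A minimal sketch:

from sklearn.metrics import ConfusionMatrixDisplay

y_true = [0, 1, 1, 1, 0]
y_pred = [1, 1, 1, 1, 0]

# builds the display straight from the labels, so no estimator is needed
disp = ConfusionMatrixDisplay.from_predictions(y_true, y_pred, normalize='true', values_format='.2%')
disp.figure_.savefig('confusion_matrix.png')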
For testing the accuracy of my trained model I am using the accuracy_score function, but it is not working.
from sklearn.metrics import accuracy_score
import numpy as np
import pandas as pd
from PIL import Image

y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values

data = []
for img in imgs:
    image = Image.open(img)
    image = image.resize((30, 30))
    data.append(np.array(image))
X_test = np.array(data)

pred = model.predict(X_test)
classes_x = np.argmax(X_test, axis=1)

# Accuracy with the test data
print(accuracy_score(labels, pred))
ERROR: (the traceback was posted as a screenshot and is not reproduced here)
It seems like the problem has to do with the format you use to represent the output of the model. I will assume that you are using one-hot encoding, so you do:

pred = model.predict(X_test)
classes_x = np.argmax(X_test, axis=1)

np.argmax should be applied to the model's output, not to X_test, and with axis=-1:

predictions = np.argmax(model.predict(X_test), axis=-1)

Also, at the end, you're sending pred to the accuracy function instead of the class predictions; it should be:

print(accuracy_score(labels, predictions))
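Putting both fixes together, a minimal corrected evaluation (a sketch; it assumes model, X_test, and labels are defined as in the question):

import numpy as np
from sklearn.metrics import accuracy_score

pred = model.predict(X_test)                # per-class probabilities
predictions = np.argmax(pred, axis=-1)      # predicted class indices
print(accuracy_score(labels, predictions))  # compare labels with predicted classes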
I am trying to use the GaussianProcessRegressor in scikit-learn with some graph kernels computed by the grakel software. Below is my code for a 5-fold cross-validation on 100 graphs. For the sake of testing convenience, I have commented out all graph-related lines and use random kernel matrices and y values instead.
from sklearn.model_selection import KFold
from sklearn.utils import check_random_state
from sklearn.gaussian_process import GaussianProcessRegressor as GPR
from sklearn.metrics import mean_squared_error
#from grakel.kernels import WeisfeilerLehman
import numpy as np

def Kfold_CV_GPR(Gs, y, n_iter=4, n_splits=5, random_state=None):
    random_state = check_random_state(random_state)
    kf = KFold(n_splits=n_splits, random_state=random_state, shuffle=True)
    errors = []
    for train_idxs, test_idxs in kf.split(y):
        # gk = WeisfeilerLehman(n_iter=n_iter, normalize=True)
        # K_train = gk.fit_transform(Gs[train_idxs])
        # K_test = gk.transform(Gs[test_idxs])
        K_train = np.random.randn(80, 80)
        K_test = np.random.randn(20, 80)

        gpr = GPR(kernel='precomputed')
        gpr.fit(K_train, y[train_idxs])
        y_pred = gpr.predict(K_test)

        rmse = mean_squared_error(y[test_idxs], y_pred, squared=False)
        errors.append(rmse)
    return -np.mean(errors)

score = Kfold_CV_GPR(Gs=None, y=np.random.randn(100, ), n_iter=4, n_splits=5)
print(score)
However, I am getting the following error:
TypeError: Cannot clone object ''precomputed'' (type <class 'str'>): it does not seem to be a scikit-learn
estimator as it does not implement a 'get_params' method.
When I change sklearn.gaussian_process.GaussianProcessRegressor to sklearn.svm.SVR (support vector regression), my code doesn't throw any error, but it runs forever for some reason. I also tested classifiers like sklearn.svm.SVC, and my code works fine.
Does anyone know how to use a precomputed kernel in scikit-learn's GaussianProcessRegressor?
I'm trying to use plot_confusion_matrix,
from sklearn.metrics import confusion_matrix
y_true = [1, 1, 0, 1]
y_pred = [1, 1, 0, 0]
confusion_matrix(y_true, y_pred)
Output:
array([[1, 0],
       [1, 2]])
Now, when using the following, either with or without 'classes':
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(y_true, y_pred, classes=[0,1], title='Confusion matrix, without normalization')
or
plot_confusion_matrix(y_true, y_pred, title='Confusion matrix, without normalization')
I expect to get a plot similar to the usual confusion-matrix figure (attached as an image in the post), except for the numbers inside.
Plotting a simple diagram should not require the estimator.
Using mlxtend.plotting,
from mlxtend.plotting import plot_confusion_matrix
import matplotlib.pyplot as plt
import numpy as np
binary1 = np.array([[4, 1],
                    [1, 2]])
fig, ax = plot_confusion_matrix(conf_mat=binary1)
plt.show()
It produces the same output.
Based on the documentation, it requires a classifier:
disp = plot_confusion_matrix(classifier, X_test, y_test,
                             display_labels=class_names,
                             cmap=plt.cm.Blues,
                             normalize=normalize)
Can I plot it without a classifier?
plot_confusion_matrix expects a trained classifier. If you look at the source code, what it does is perform the prediction to generate y_pred for you:
y_pred = estimator.predict(X)
cm = confusion_matrix(y_true, y_pred, sample_weight=sample_weight,
                      labels=labels, normalize=normalize)
So in order to plot the confusion matrix without specifying a classifier, you'll have to go with some other tool, or do it yourself.
A simple option is to use seaborn:
import seaborn as sns
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true, y_pred)
f = sns.heatmap(cm, annot=True)
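If you also want to save the plot (as in the question at the top), the heatmap is a regular matplotlib Axes, so something like this should work:

f.figure.savefig('confusion_matrix.png')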
I am a bit late here, but I thought other people might benefit from my answer.
As others have mentioned, using plot_confusion_matrix is not an option without the classifier, but it is still possible to use sklearn to obtain a similar-looking confusion matrix without one. The function below does exactly this.
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def confusion_ma(y_true, y_pred, class_names):
    cm = confusion_matrix(y_true, y_pred, normalize='true')
    disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=class_names)
    disp.plot(cmap=plt.cm.Blues)
    return plt.show()
The confusion_matrix function returns a plain ndarray. Passing it, together with labels for the predictions, to ConfusionMatrixDisplay yields a similar-looking matrix. In the definition I've added class_names to be displayed instead of 0 and 1, chosen to normalize the output, and specified a colormap; change these according to your needs.
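For example, a call might look like this (the class names are just illustrative):

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
confusion_ma(y_true, y_pred, class_names=['negative', 'positive'])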
Since plot_confusion_matrix requires the argument 'estimator' not to be None, the answer is: no, you can't. But you can plot your confusion matrix in other ways; for example, see this answer: How can I plot a confusion matrix?
I tested the following "identity classifier" in a Jupyter notebook running the conda_python3 kernel in Amazon SageMaker. The reason is that SageMaker's transform job is asynchronous, so the classifier cannot be passed as a parameter to plot_confusion_matrix; y_pred has to be calculated before calling the function.
# dummy "classifier" whose predict() simply returns its input
IC = type('IdentityClassifier', (), {"predict": lambda i: i, "_estimator_type": "classifier"})
plot_confusion_matrix(IC, y_pred, y_test, normalize='true', values_format='.2%');
So while plot_confusion_matrix indeed expects an estimator, you'll not necessarily have to use another tool IMO, if this solution fits your use case.
(Simplified POC from the notebook, posted as an image.)
I solved the problem by using a customized classifier: you can wrap your predictions in a custom classifier class and pass an instance to plot_confusion_matrix:
class MyModelPredict(object):
    def __init__(self, model):
        self._estimator_type = 'classifier'
        self._model = model

    def predict(self, X):
        return your_custom_prediction   # your custom prediction logic goes here

model = MyModelPredict(your_model)      # your_model is whatever underlying model you wrap
plot_confusion_matrix(model, X, y_true)
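For instance, a hypothetical wrapper that just replays already-computed predictions (all names here are illustrative):

class PrecomputedPredictions(object):
    def __init__(self, predictions):
        self._estimator_type = 'classifier'
        self._predictions = predictions

    def predict(self, X):
        # ignore X and return the stored predictions
        return self._predictions

model = PrecomputedPredictions(y_pred)
plot_confusion_matrix(model, X, y_true)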
I get this error:
AttributeError: 'NumpyArrayIterator' object has no attribute 'classes'
I am trying to make a confusion matrix to evaluate the neural net I have trained. I am using ImageDataGenerator and the datagen.flow function before calling fit_generator for training.
For predictions I use the predict_generator function on the test set. All is working fine so far. The issue arises in the following:
test_generator.reset()
pred = model.predict_generator(test_generator, steps=len(test_generator), verbose=2)
import numpy as np
import pandas as pd
from sklearn.metrics import classification_report, confusion_matrix, cohen_kappa_score
y_pred = np.argmax(pred, axis=1)
print('Confusion Matrix')
print(pd.DataFrame(confusion_matrix(test_generator.classes, y_pred)))
I should be seeing a confusion matrix, but instead I see an error. I ran the same code with sample data before running it on the actual dataset, and that did show me the results.
First you need to extract the labels from the generator and then pass them to the confusion_matrix function. To extract the labels, use x_gen, y_gen = test_generator.next(); just pay attention that the labels are one-hot encoded. Example:
test_generator.reset()
pred = model.predict_generator(test_generator, steps=len(test_generator), verbose=2)

from sklearn.metrics import classification_report, confusion_matrix, cohen_kappa_score
y_pred = np.argmax(pred, axis=1)

x_gen, y_gen = test_generator.next()   # one batch of (images, one-hot labels)
y_gen = np.argmax(y_gen, axis=1)       # back to class indices

print('Confusion Matrix')
print(pd.DataFrame(confusion_matrix(y_gen, y_pred)))
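Note that next() returns a single batch, so y_gen above only covers the first batch while pred covers the whole test set. If your test set spans multiple batches, a sketch for collecting the labels across all of them (assuming one-hot labels, as above):

test_generator.reset()
y_gen_parts = []
for _ in range(len(test_generator)):    # iterate over every batch once
    _, y_batch = test_generator.next()
    y_gen_parts.append(np.argmax(y_batch, axis=1))
y_gen = np.concatenate(y_gen_parts)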
I know how to draw a confusion matrix when I use the train/test split from sklearn, but I do not know how to create one when I am using leave-one-out cross-validation, as in this example:
# Evaluate using Leave One Out Cross Validation
import pandas
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
num_folds = 10
num_instances = len(X)
loocv = model_selection.LeaveOneOut()
model = LogisticRegression()
results = model_selection.cross_val_score(model, X, Y, cv=loocv)
print("Accuracy: %.3f%% (%.3f%%)" % (results.mean()*100.0, results.std()*100.0))
How should I create the confusion matrix for LOOCV in order to visualize the per-class accuracy?
Borrowing the cm_analysis method from the link you provided, you can work around the problem by creating a custom scorer that receives the metadata during the iterations. That metadata can be used to compute the F1 score, precision, recall, and accuracy, as well as the confusion matrix!
We need one more trick here: GridSearchCV accepts a custom scorer, so here we go!
Here is an example that you can adapt further to your exact requirements:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, accuracy_score, confusion_matrix
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Your method from the link you provided
def cm_analysis(y_true, y_pred, labels, ymap=None, figsize=(10, 10)):
    if ymap is not None:
        y_pred = [ymap[yi] for yi in y_pred]
        y_true = [ymap[yi] for yi in y_true]
        labels = [ymap[yi] for yi in labels]
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    cm_sum = np.sum(cm, axis=1, keepdims=True)
    cm_perc = cm / cm_sum.astype(float) * 100
    annot = np.empty_like(cm).astype(str)
    nrows, ncols = cm.shape
    for i in range(nrows):
        for j in range(ncols):
            c = cm[i, j]
            p = cm_perc[i, j]
            if i == j:
                s = cm_sum[i]
                annot[i, j] = '%.1f%%\n%d/%d' % (p, c, s)
            elif c == 0:
                annot[i, j] = ''
            else:
                annot[i, j] = '%.1f%%\n%d' % (p, c)
    cm = pd.DataFrame(cm, index=labels, columns=labels)
    cm.index.name = 'Actual'
    cm.columns.name = 'Predicted'
    fig, ax = plt.subplots(figsize=figsize)
    sns.heatmap(cm, annot=annot, fmt='', ax=ax)
    #plt.savefig(filename)
    plt.show()

# Custom Scorer
def my_scorer(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    # you can either save y_true, y_pred and accuracy into a file
    # for later use with the info in clf.cv_results_
    # or plot the confusion matrix right here!
    # for labels, you can create a class attribute to make it more dynamic,
    # i.e. it changes automatically with every new dataset!
    cm_analysis(y_true, y_pred, labels=[0, 1], ymap=None, figsize=(10, 10))
    # N.B. as long as you have y_true and y_pred from every round here, you can
    # compute any metric you want, such as F1 score, precision, recall,
    # accuracy, and the confusion matrix!
    return acc

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
df = pd.read_csv(url, names=names)
array = df.values
X = np.array(array[:, 0:8])
Y = np.array(array[:, 8]).astype(int)

# I'll make it two just for submitting the result here!
num_folds = 2
# shuffle=True is required when random_state is set (newer sklearn versions)
skf = StratifiedKFold(n_splits=num_folds, shuffle=True, random_state=0)

# this is just a trick because the list contains
# the default parameter only (i.e. useless)
param_grid = {'C': [1.0]}
model = LogisticRegression()

# create custom scorer
custom_scorer = make_scorer(my_scorer)

# pass it to GridSearchCV
clf = GridSearchCV(model, param_grid, scoring=custom_scorer, cv=skf, return_train_score=True)

# Fit and Go
clf.fit(X, Y)

# cv_results_ is a dict with all CV results during the iterations!
# you may need it to combine its content with the metrics, etc.
print(clf.cv_results_)
Result
{'mean_score_time': array([0.09023476]), 'split0_train_score':
array([0.79166667]), 'mean_train_score': array([0.77864583]),
'params': [{'C': 1.0}], 'std_test_score': array([0.01953125]),
'mean_fit_time': array([0.00235796]),
'param_C': masked_array(data=[1.0], mask=[False], fill_value='?',
dtype=object), 'rank_test_score': array([1], dtype=int32),
'split1_test_score': array([0.7734375]),
'std_fit_time': array([0.00032902]), 'mean_test_score': array([0.75390625]),
'std_score_time': array([0.00237632]), 'split1_train_score': array([0.765625]),
'split0_test_score': array([0.734375]), 'std_train_score': array([0.01302083])}
(The confusion-matrix plots for Split 0 and Split 1, produced by my_scorer, were posted as images.)
EDIT
If you strictly want LOOCV, then you can apply it in the above code: just replace StratifiedKFold with LeaveOneOut, but bear in mind that LeaveOneOut will iterate 768 times (once per sample), so it is computationally very expensive. However, that would give you the confusion matrices in detail during the iterations (i.e. the metadata).
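For example (a sketch reusing model, param_grid, and custom_scorer from above; note that each LOOCV test fold contains a single sample, so this produces a plot per iteration):

from sklearn.model_selection import LeaveOneOut

loocv = LeaveOneOut()
clf = GridSearchCV(model, param_grid, scoring=custom_scorer, cv=loocv, return_train_score=True)
clf.fit(X, Y)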
Nevertheless, if you are seeking the confusion matrix of the overall (i.e. final) process, then you will still need GridSearchCV, but as follows:
......
loocv = LeaveOneOut()
clf = GridSearchCV(model, param_grid, scoring='accuracy', cv=loocv)
clf.fit(X,Y)
y_pred = clf.best_estimator_.predict(X)
cm_analysis(Y, y_pred, labels=[0, 1], ymap=None, figsize=(10,10))
Result: (the final confusion matrix plot, posted as an image)
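As a side note, clf.best_estimator_.predict(X) above predicts on the same data the final model was refit on. If you instead want each sample predicted by a model trained without it, a sketch using cross_val_predict with the same cm_analysis helper:

from sklearn.model_selection import cross_val_predict, LeaveOneOut

# out-of-sample predictions: each row is predicted by a model fit on all the others
y_pred = cross_val_predict(model, X, Y, cv=LeaveOneOut())
cm_analysis(Y, y_pred, labels=[0, 1], ymap=None, figsize=(10, 10))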