Default positive class in multilevel sklearn classification - python

I am working on a churn classification problem with three classes: 0, 1, and 2. I want to optimize recall for classes 0 and 1. Does that mean sklearn needs to treat classes 0 and 1 as the positive classes? How can I explicitly specify which class I want to optimize recall for? If that is not possible, should I consider renaming the classes in ascending order so that 1 and 2 are the default positive classes?
              precision    recall  f1-score   support

           0       0.71      0.18      0.28      2611
           1       0.57      0.54      0.56      5872
           2       0.70      0.88      0.78      8913

    accuracy                           0.66     17396
   macro avg       0.66      0.53      0.54     17396
weighted avg       0.66      0.66      0.63     17396
Here is the code I am using, for reference (although what I really need is an understanding of how to optimize recall for only classes 0 and 1 here):
param_test1 = {'learning_rate': (0.05, 0.1), 'max_depth': (3, 5)}
estimator = GridSearchCV(
    estimator=GradientBoostingClassifier(loss='deviance', subsample=0.8,
                                         random_state=10, n_estimators=200),
    param_grid=param_test1, cv=2, refit='recall_score')
estimator.fit(df[predictors], df[target])
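One way to target the recall of specific classes directly (a sketch, not from the original post, assuming a reasonably recent scikit-learn) is to build a custom scorer with make_scorer, restricting recall_score to the labels you care about via its labels argument. The name recall_0_1 below is mine:

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import GridSearchCV

# Score = recall averaged over classes 0 and 1 only; class 2 is ignored.
recall_0_1 = make_scorer(recall_score, labels=[0, 1], average='macro')

param_test1 = {'learning_rate': (0.05, 0.1), 'max_depth': (3, 5)}
estimator = GridSearchCV(
    estimator=GradientBoostingClassifier(subsample=0.8, random_state=10,
                                         n_estimators=200),
    param_grid=param_test1, cv=2, scoring=recall_0_1)
estimator.fit(df[predictors], df[target])

With a single scorer passed to scoring, the default refit=True refits the best model by that metric, so no refit='recall_score' string is needed.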


how to adjust accuracy only for 1's?

Suppose I have data like this:
  x1    x2    x3    y
0.85  0.95  0.22    1
0.35  0.26  0.42    0
0.89  0.82  0.82    1
0.36  0.14  0.32    0
0.44  0.53  0.82    1
0.75  0.78  0.52    1
I am doing binary classification, but the only thing that matters is the correct prediction of the 1s; if the prediction is 0, it should not affect my accuracy.
I simply used the following code:
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
But this code also counts the zeros in its accuracy.
How can I tell the network that only the prediction of 1s is important?
In other words, during model fitting, a prediction of zero should not count toward the model's accuracy.
It sounds like you care about the precision of the model. Precision means: of all the instances you predict as 1, what proportion is actually correct.
If so, use tf.keras.metrics.Precision() as the metric.
import tensorflow as tf

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Precision()])
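If, on the other hand, "the correct prediction of the 1s" means you must not miss any actual 1s, the counterpart metric is recall; tracking both makes the trade-off visible during training. A minimal sketch, assuming the same model as in the question:

import tensorflow as tf
from tensorflow import keras

# Precision: of the predicted 1s, how many are truly 1.
# Recall: of the true 1s, how many the model finds.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])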

Metrics F1 warning zero division

I want to calculate the F1 score of my models, but I receive a warning and get a 0.0 F1 score, and I don't know what to do.
Here is the source code:
import numpy as np
from sklearn import metrics
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier

def model_evaluation(models):
    # Fit each classifier in a tf-idf pipeline and report its scores.
    for key, value in models.items():
        classifier = Pipeline([('tfidf', TfidfVectorizer()),
                               ('clf', value)])
        classifier.fit(X_train, y_train)
        predictions = classifier.predict(X_test)
        print("Accuracy Score of", key, ":", metrics.accuracy_score(y_test, predictions))
        print(metrics.classification_report(y_test, predictions))
        print(metrics.f1_score(y_test, predictions, average="weighted",
                               labels=np.unique(predictions), zero_division=0))
        print("---------------", "\n")

dlist = {"KNeighborsClassifier": KNeighborsClassifier(3),
         "LinearSVC": LinearSVC(),
         "MultinomialNB": MultinomialNB(),
         "RandomForest": RandomForestClassifier(max_depth=5, n_estimators=100)}
model_evaluation(dlist)
And here is the result:
Accuracy Score of KNeighborsClassifier : 0.75
              precision    recall  f1-score   support

not positive       0.71      0.77      0.74        13
    positive       0.79      0.73      0.76        15

    accuracy                           0.75        28
   macro avg       0.75      0.75      0.75        28
weighted avg       0.75      0.75      0.75        28

0.7503192848020434
---------------

Accuracy Score of LinearSVC : 0.8928571428571429
              precision    recall  f1-score   support

not positive       1.00      0.77      0.87        13
    positive       0.83      1.00      0.91        15

    accuracy                           0.89        28
   macro avg       0.92      0.88      0.89        28
weighted avg       0.91      0.89      0.89        28

0.8907396950875212
---------------

Accuracy Score of MultinomialNB : 0.5357142857142857
              precision    recall  f1-score   support

not positive       0.00      0.00      0.00        13
    positive       0.54      1.00      0.70        15

    accuracy                           0.54        28
   macro avg       0.27      0.50      0.35        28
weighted avg       0.29      0.54      0.37        28

0.6976744186046512
---------------
C:\Users\Cey\anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1272: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))

Accuracy Score of RandomForest : 0.5714285714285714
              precision    recall  f1-score   support

not positive       1.00      0.08      0.14        13
    positive       0.56      1.00      0.71        15

    accuracy                           0.57        28
   macro avg       0.78      0.54      0.43        28
weighted avg       0.76      0.57      0.45        28

0.44897959183673475
---------------
Can someone tell me what to do? I only receive this message when using the MultinomialNB() classifier.
Second: when extending the dictionary with the Gaussian classifier (GaussianNB()), I receive this error message:
TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array.
What should I do here?
Together with UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples (main credits go there) and @yatu's answer, I could at least find a workaround for the warning:
UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
Quote from sklearn.metrics.f1_score in the Notes at the bottom:
When true positive + false positive == 0, precision is undefined. When
true positive + false negative == 0, recall is undefined. In such
cases, by default the metric will be set to 0, as will f-score, and
UndefinedMetricWarning will be raised. This behavior can be modified
with zero_division.
Thus, you cannot avoid the underlying problem if your classifier never predicts one of the labels, because true positive + false positive == 0 then makes precision undefined for it.
That said, you can at least suppress the warning by adding zero_division=0 (or =1) to the functions mentioned in the quote; whichever value you set is simply what is returned for the undefined labels.
from sklearn.metrics import precision_score, recall_score, f1_score

precision = precision_score(y_test, y_pred, zero_division=0)
print('Precision score: {0:0.2f}'.format(precision))

recall = recall_score(y_test, y_pred, zero_division=0)
print('Recall score: {0:0.2f}'.format(recall))

f1 = f1_score(y_test, y_pred, zero_division=0)
print('F1 score: {0:0.2f}'.format(f1))
Can someone tell me what to do? I only receive this message when using the MultinomialNB() classifier.
The first warning indicates that a specific label is never predicted by the MultinomialNB, which results in an ill-defined F-score for that label; the missing values are set to 0.0. This is explained here.
When extending the dictionary with the Gaussian classifier (GaussianNB()), I receive this error message:
TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array.
As per this question, the error is quite explicit: TfidfVectorizer returns a sparse matrix, which cannot be used as input for the GaussianNB. So the way I see it, you either avoid using the GaussianNB, or you add an intermediate transformer that turns the sparse array dense, which I wouldn't advise, since the result of a tf-idf vectorization is typically large and very sparse.
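If you do want to try GaussianNB anyway, here is a minimal sketch of such an intermediate transformer, using scikit-learn's FunctionTransformer to densify the tf-idf output (the step name 'to_dense' is mine):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer

# Convert the sparse tf-idf matrix to a dense array before GaussianNB.
to_dense = FunctionTransformer(lambda X: X.toarray(), accept_sparse=True)

classifier = Pipeline([('tfidf', TfidfVectorizer()),
                       ('to_dense', to_dense),
                       ('clf', GaussianNB())])

Expect high memory use on large vocabularies, which is why avoiding GaussianNB here is usually the better option.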

How can I massively improve the classification report of one class using an ensemble model?

I have a dataset with the class distribution
{0: 6624, 1: 75}
where 0 marks non-observational sentences and 1 observational sentences. (Basically, I annotate my sentences using Named Entity Recognition; if there is a specific entity like DATA, TIME, or LONG (coordinate), I assign label 1.)
Now I want to make a model to classify them. The best model I made (CV=3 for all) is an ensemble of the following two models. The first:
clf = SGDClassifier()
trial_05 = Pipeline([("vect", vec), ("clf", clf)])
which gives:
              precision    recall  f1-score   support

           0       1.00      1.00      1.00      6624
           1       0.73      0.57      0.64        75

   micro avg       0.99      0.99      0.99      6699
   macro avg       0.86      0.79      0.82      6699
weighted avg       0.99      0.99      0.99      6699

[[6611   37]
 [  13   38]]
and the second, which used a resampled SGD for classification:
              precision    recall  f1-score   support

           0       1.00      0.92      0.96      6624
           1       0.13      1.00      0.22        75

   micro avg       0.92      0.92      0.92      6699
   macro avg       0.56      0.96      0.59      6699
weighted avg       0.99      0.92      0.95      6699

[[6104    0]
 [ 520   75]]
As you see, the problem in both cases is class 1, but in the first one we have fairly good precision and F1 score, whereas in the second one we have very good recall.
So I decided to combine both in an ensemble model, in this way:
from sklearn.ensemble import VotingClassifier

# create a dictionary of our models
estimators = [("trial_05", trial_05), ("resampled", SGD_RESAMPLED_Model)]

# create our voting classifier, inputting our models
ensemble = VotingClassifier(estimators, voting='hard')
Now I have this result:
              precision    recall  f1-score   support

           0       0.99      1.00      1.00      6624
           1       0.75      0.48      0.59        75

   micro avg       0.99      0.99      0.99      6699
   macro avg       0.87      0.74      0.79      6699
weighted avg       0.99      0.99      0.99      6699

[[6612   39]
 [  12   36]]
As you can see, the ensemble model has better precision for class 1, but worse recall and F1 score, which leads to a worse confusion matrix for class 1 (36 TP vs. 38 TP).
My aim is to improve the number of TPs for class 1 (the F1 score and recall for class 1).
What do you recommend to improve the TPs for class 1 (F1 score and recall for class 1)?
Generally, do you have any ideas regarding my workflow?
I have tried parameter tuning; it does not improve the SGD model.
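A few hedged observations, not from the original thread: with only two estimators and hard voting, ties appear to be broken in favor of the lower class index, which would systematically cost class 1 whenever the two base models disagree; soft voting or reweighting the minority class during training are common alternatives. A minimal sketch of the latter, where the weight of 20 for class 1 is purely illustrative and should be tuned on a validation set (vec is the vectorizer from above):

from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline

# Penalize mistakes on the rare class 1 more heavily during training.
weighted = Pipeline([("vect", vec),
                     ("clf", SGDClassifier(class_weight={0: 1, 1: 20},
                                           random_state=42))])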

Make a console-friendly string a usable pandas dataframe in python

A quick question, as I'm currently switching from R to pandas for some projects: I get the following printed output from metrics.classification_report in scikit-learn:
             precision    recall  f1-score   support

          0       0.67      0.67      0.67         3
          1       0.50      1.00      0.67         1
          2       1.00      0.80      0.89         5

avg / total       0.83      0.78      0.79         9
I want to use this (and similar ones) as a matrix/dataframe so that I could subset it to extract, say, the precision of class 0.
In R, I'd give the first "column" a name like 'class_outcome' and then subset it:
my_dataframe[my_dataframe$class_outcome == 1, 'precision']
And I can do this in pandas, but the dataframe that I want to use is simply a string (see scikit's doc).
How can I turn the table output here into a usable dataframe in pandas?
Assign it to a variable, s:
s = classification_report(y_true, y_pred, target_names=target_names)
Or directly:
s = '''
             precision    recall  f1-score   support
    class 0       0.50      1.00      0.67         1
    class 1       0.00      0.00      0.00         1
    class 2       1.00      0.67      0.80         3
avg / total       0.70      0.60      0.61         5
'''
Use that as the string input for StringIO:
import io  # For Python 2.x use: import StringIO

df = pd.read_table(io.StringIO(s), sep=r'\s{2,}', engine='python')  # For Python 2.x: StringIO.StringIO(s)
df
Out:
             precision  recall  f1-score  support
class 0            0.5    1.00      0.67        1
class 1            0.0    0.00      0.00        1
class 2            1.0    0.67      0.80        3
avg / total        0.7    0.60      0.61        5
Now you can slice it like an R data.frame:
df.loc['class 2']['f1-score']
Out: 0.80000000000000004
Here, classes are the index of the DataFrame. You can use reset_index() if you want to use it as a regular column:
df = df.reset_index().rename(columns={'index': 'outcome_class'})
df.loc[df['outcome_class']=='class 1', 'support']
Out:
1 1
Name: support, dtype: int64
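A side note for newer scikit-learn (0.20 and later, if I recall correctly): classification_report accepts output_dict=True, which skips the string-parsing step entirely. A minimal sketch, reusing the names from the answer above:

import pandas as pd
from sklearn.metrics import classification_report

# Returns a nested dict keyed by class label instead of a formatted string.
report = classification_report(y_true, y_pred, target_names=target_names,
                               output_dict=True)
df = pd.DataFrame(report).transpose()
df.loc['class 0', 'precision']  # precision of class 0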

Precision of sklearn.metrics classification_report

I would like to know if it is possible to get more digits after the decimal point with classification_report from sklearn (scikit-learn).
At the moment it looks like this:
             precision    recall  f1-score   support

          1       0.61      0.73      0.67     71194
          2       0.64      0.33      0.43     13877
          3       0.56      0.59      0.57     61591
          4       0.64      0.51      0.57     13187
          5       0.66      0.69      0.67     57530
          6       0.54      0.06      0.11      2391
          7       0.54      0.40      0.46     30223

avg / total       0.60      0.60      0.60    249993
I don't think it is possible with that method, but maybe someone has had the same idea (probably).
I know that sklearn.metrics.precision_score exists, but classification_report is such a nice way to display all the results at once.
Not possible according to the source code. See lines 819 and 830: the format strings are hardcoded to %0.2f. If you really want it, just change it in your local file sklearn/metrics/metrics.py. Better yet, add an argument to classification_report that takes a precision value and use that. And submit your patch to the project!
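For what it's worth, newer scikit-learn releases did grow exactly such an argument, digits, so no local patching is needed there:

from sklearn.metrics import classification_report

# digits controls the number of decimals in the formatted output.
print(classification_report(y_true, y_pred, digits=4))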
