How to get precision and recall using LinearSVC of SVM? - python

I have used SVM's LinearSVC for training and testing the data. I'm able to get the accuracy for SVM on my dataset, but in addition to accuracy, I need precision and recall. Can anyone suggest how to calculate precision and recall?
My Code:
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
with open("/Users/abc/Desktop/reviews.txt") as f:
reviews = f.read().split("\n")
with open("/Users/abc/Desktop/labels.txt") as f:
labels = f.read().split("\n")
reviews_tokens = [review.split() for review in reviews]
onehot_enc = MultiLabelBinarizer()
onehot_enc.fit(reviews_tokens)
X_train, X_test, y_train, y_test = train_test_split(reviews_tokens, labels, test_size=0.20, random_state=None)
lsvm = LinearSVC()
lsvm.fit(onehot_enc.transform(X_train), y_train)
score = lsvm.score(onehot_enc.transform(X_test), y_test)
print("Score of SVM:" , score)

You can do it like this:
from sklearn.metrics import confusion_matrix
predicted_y = lsvm.predict(onehot_enc.transform(X_test))  # transform the test data the same way as the training data
tn, fp, fn, tp = confusion_matrix(y_test, predicted_y).ravel()
precision_score = tp / (tp + fp)
recall_score = tp / (tp + fn)
Refer to the confusion_matrix documentation for more info.
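Alternatively, sklearn.metrics already provides precision_score and recall_score, so you don't have to derive them from the confusion matrix by hand. A minimal sketch, reusing the lsvm and onehot_enc objects from the question (pos_label="positive" is an assumption about how your labels.txt encodes the positive class):
from sklearn.metrics import precision_score, recall_score

# predict on the test data transformed the same way as the training data
predicted_y = lsvm.predict(onehot_enc.transform(X_test))

# pos_label must match whichever string marks the positive class in labels.txt
print("Precision:", precision_score(y_test, predicted_y, pos_label="positive"))
print("Recall:", recall_score(y_test, predicted_y, pos_label="positive"))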

Related

Precision, recall, F1 score all have zero value for the minority class in the classification report

I got an error while using the SVM and MLP classifiers from the sklearn package. The error is: C:\Users\cse_s\anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1327: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use zero_division parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
Code for splitting dataset
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)
Code for SVM classifier
from sklearn import svm
from sklearn.metrics import classification_report
SVM_classifier = svm.SVC(kernel="rbf", probability=True, random_state=1)
SVM_classifier.fit(X_train, y_train)
SVM_y_pred = SVM_classifier.predict(X_test)
print(classification_report(y_test, SVM_y_pred))
Code for MLP classifier
from sklearn.neural_network import MLPClassifier
MLP = MLPClassifier(random_state=1, learning_rate = "constant", learning_rate_init=0.3, momentum = 0.2 )
MLP.fit(X_train, y_train)
R_y_pred = MLP.predict(X_test)
target_names = ['No class', 'Yes Class']
print(classification_report(y_test, R_y_pred, target_names=target_names))
The error is the same for both classifiers.
I hope this could help.
classification_report has a zero_division parameter that sets the value to return when the precision or recall formula divides by zero (for example, when a class gets no predicted samples). You can pass 0 or 1:
classification_report(y_test, R_y_pred, target_names=target_names, zero_division=0)
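For intuition, here is a minimal toy sketch (made-up labels, not your data) showing how a class that receives no predicted samples makes precision 0/0, which is exactly what triggers the warning, and how zero_division controls the value reported instead:
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 1]   # class 1 exists in the ground truth
y_pred = [0, 0, 0, 0]   # but is never predicted, so its precision is 0/0

# Without zero_division this emits UndefinedMetricWarning; with it, 0.0 is reported silently
print(classification_report(y_true, y_pred, zero_division=0))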
I don't know what your data looks like, so here's an example using the breast cancer dataset.
Features of cancer dataset
import pandas as pd
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
cancer = load_breast_cancer()
df_feat = pd.DataFrame(cancer['data'],columns=cancer['feature_names'])
df_feat.head()
Target of dataset:
df_target = pd.DataFrame(cancer['target'],columns=['Cancer'])
np.ravel(df_target) # convert it into a 1-d array
Generate classification report:
X_train, X_test, y_train, y_test = train_test_split(df_feat, np.ravel(df_target), test_size=0.3, random_state=101)
SVM_classifier = svm.SVC(kernel="rbf", probability = True, random_state=1)
SVM_classifier.fit(X_train, y_train)
SVM_y_pred = SVM_classifier.predict(X_test)
print(classification_report(y_test, SVM_y_pred))
Generate classification report for MLP Classifier:
MLP = MLPClassifier(random_state=1, learning_rate = "constant", learning_rate_init=0.3, momentum = 0.2 )
MLP.fit(X_train, y_train)
R_y_pred = MLP.predict(X_test)
target_names = ['No class', 'Yes Class']
print(classification_report(y_test, R_y_pred, target_names=target_names, zero_division=0))

Applying cross-validation in Naive Bayes

My dataset is a Filipino spam-and-ham message dataset.
I divided my dataset into 60% training, 20% testing and 20% validation.
Split data into training, testing and validation:
from sklearn.model_selection import train_test_split
data['label'] = data['label'].replace({'ham': 0, 'spam': 1})
X_train, X_test, y_train, y_test = train_test_split(data['message'], data['label'], test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1)  # 0.25 x 0.8 = 0.2
print('Total: {} rows'.format(data.shape[0]))
print('Train: {} rows'.format(X_train.shape[0]))
print(' Test: {} rows'.format(X_test.shape[0]))
print(' Validation: {} rows'.format(X_val.shape[0]))
Train a MultinomialNB from sklearn
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
import numpy as np
naive_bayes = MultinomialNB().fit(train_data, y_train)
predictions = naive_bayes.predict(test_data)
Evaluate the Model
from sklearn.metrics import (accuracy_score, precision_score, recall_score, f1_score)
accuracy_score = accuracy_score(y_test, predictions)
precision_score = precision_score(y_test, predictions)
recall_score = recall_score(y_test, predictions)
f1_score = f1_score(y_test, predictions)
My problem is in the validation step. The warning says:
warnings.warn("Estimator fit failed. The score on this train-test"
This is how I coded my validation; I don't know if I'm doing the right thing:
from sklearn.model_selection import cross_val_score
mnb = MultinomialNB()
scores = cross_val_score(mnb,X_val,y_val, cv = 10, scoring='accuracy')
print('Cross-validation scores:{}'.format(scores))
I did not get any error or warning with the code below. Maybe it will work for you.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score, f1_score)
from sklearn.model_selection import cross_val_score
from sklearn.feature_extraction.text import CountVectorizer
df = pd.read_csv("https://raw.githubusercontent.com/jeffprosise/Machine-Learning/master/Data/ham-spam.csv")
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words='english')
x = vectorizer.fit_transform(df['Text'])
y = df['IsSpam']
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1) # 0.25 x 0.8 = 0.2
print('Total: {} rows'.format(df.shape[0]))
print('Train: {} rows'.format(X_train.shape[0]))
print(' Test: {} rows'.format(X_test.shape[0]))
print(' Validation: {} rows'.format(X_val.shape[0]))
naive_bayes = MultinomialNB().fit(X_train, y_train)
predictions = naive_bayes.predict(X_test)
accuracy_score = accuracy_score(y_test,predictions)
precision_score = precision_score(y_test, predictions)
recall_score = recall_score(y_test, predictions)
f1_score = f1_score(y_test, predictions)
mnb = MultinomialNB()
scores = cross_val_score(mnb,X_val,y_val, cv = 10, scoring='accuracy')
print('Cross-validation scores:{}'.format(scores))
Result:
Total: 1000 rows
Train: 600 rows
Test: 200 rows
Validation: 200 rows
Cross-validation scores:[1. 0.95 0.85 1. 1. 0.9 0.9 0.8 0.9 0.9 ]
First, it is worth noting that just because it is called cross-validation does not mean you have to use a separate validation set, as you have done in your code, to perform it. There are a number of reasons why you would perform cross-validation, which include:
Ensuring that all of your dataset is used in training as well as in evaluating the performance of your model
Performing hyperparameter tuning.
Hence, your case here leans toward the first use case. As such, you don't need to first split into train, validation, and test sets. Instead, you can perform the 10-fold cross-validation on your entire dataset.
If you are doing hyperparameter tuning, you can keep a hold-out set of, say, 30% and use the remaining 70% for cross-validation. Once the best parameters have been determined, you can then use the hold-out set to evaluate the model with those parameters; both patterns are sketched below.
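A rough sketch of both patterns, assuming the vectorized x and y from the answer above (the alpha grid is purely illustrative, not something taken from your code):
from sklearn.model_selection import cross_val_score, train_test_split, GridSearchCV
from sklearn.naive_bayes import MultinomialNB

# Use case 1: 10-fold cross-validation on the entire dataset
scores = cross_val_score(MultinomialNB(), x, y, cv=10, scoring='accuracy')
print('CV accuracy per fold:', scores)
print('Mean CV accuracy:', scores.mean())

# Use case 2: hold out 30% for a final evaluation, tune on the remaining 70%
X_dev, X_holdout, y_dev, y_holdout = train_test_split(x, y, test_size=0.3, random_state=1)
grid = GridSearchCV(MultinomialNB(), param_grid={'alpha': [0.1, 0.5, 1.0]}, cv=10, scoring='accuracy')
grid.fit(X_dev, y_dev)
print('Best params:', grid.best_params_)
print('Hold-out accuracy:', grid.score(X_holdout, y_holdout))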
Some refs:
https://towardsdatascience.com/5-reasons-why-you-should-use-cross-validation-in-your-data-science-project-8163311a1e79
https://www.analyticsvidhya.com/blog/2021/11/top-7-cross-validation-techniques-with-python-code/
https://towardsdatascience.com/train-test-split-and-cross-validation-in-python-80b61beca4b6

How to change from a normal machine learning technique to cross-validation?

from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
X = data['Review']
y = data['Category']
tfidf = TfidfVectorizer(ngram_range=(1,1))
classifier = LinearSVC()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3)
clf = Pipeline([
    ('tfidf', tfidf),
    ('clf', classifier)
])
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
accuracy_score(y_test, y_pred)
This is the code to train a model and make predictions. I need to know my model's performance, so where should I change it to use cross_val_score?
Use this (it is an example from a previous project of mine):
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
kfolds = KFold(n_splits=5, shuffle=True, random_state=42)
def cv_f1(model, X, y):
    score = np.mean(cross_val_score(model, X, y,
                                    scoring="f1",
                                    cv=kfolds))
    return score
model = ....
score_f1 = cv_f1(model, X_train, y_train)
You can use other scoring metrics; just change scoring="f1" (see the sketch below for getting several metrics at once). If you want to see the score for each fold, just remove np.mean.
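If you want precision and recall alongside accuracy in one pass, cross_validate accepts a list of scorers. A minimal sketch, assuming the clf pipeline and the X and y from the question (macro averaging is used so it works regardless of what the label strings in Category are):
from sklearn.model_selection import cross_validate

cv_results = cross_validate(clf, X, y, cv=5,
                            scoring=['accuracy', 'precision_macro', 'recall_macro', 'f1_macro'])

for metric in ['test_accuracy', 'test_precision_macro', 'test_recall_macro', 'test_f1_macro']:
    print(metric, cv_results[metric].mean())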
From the sklearn documentation:
The simplest way to use cross-validation is to call the cross_val_score helper function on the estimator and the dataset.
In your case it will be
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, X_train, y_train, cv=5)
print(scores)

Precision significantly drops when using entire dataset to test a classifier trained on undersampled data

I'm doing the Kaggle Credit Card Fraud Detection.
There is a significant imbalance between Class = 1 (fraudulent transactions) and Class = 0 (non-fraudulent). To compensate, I undersampled the data so that there was a 1:1 ratio between fraudulent and non-fraudulent transactions (492 each). When I trained my Logistic Regression classifier on the undersampled/balanced data, it performed well. However, when I used that same classifier to test on the entire dataset, the recall was still good, but the precision dropped significantly.
I am aware that having a high recall is much more important for this type of problem, but I would still like to understand why the precision tanks, and if this is fine or not.
Code:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
def model_report(y_test, pred):
    print("Accuracy:\t", accuracy_score(y_test, pred))
    print("Precision:\t", precision_score(y_test, pred))
    print("RECALL:\t\t", recall_score(y_test, pred))
    print("F1 Score:\t", f1_score(y_test, pred))
df = pd.read_csv("data/creditcard.csv")
target = 'Class'
X = df.loc[:, df.columns != target]
y = df.loc[:, df.columns == target]
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print("WITHOUT UNDERSAMPLING:")
clf = LogisticRegression().fit(x_train, y_train)
pred = clf.predict(x_test)
model_report(y_test, pred)
# Creates the undersampled DataFrame with 492 fraud and 492 clean
minority_class_len = len(df[df[target] == 1])
minority_class_indices = df[df[target] == 1].index
majority_class_indices = df[df[target] == 0].index
random_majority_indices = np.random.choice(majority_class_indices, minority_class_len, replace=False)
undersample_indices = np.concatenate([minority_class_indices, random_majority_indices])
undersample = df.loc[undersample_indices]
X_undersample = undersample.loc[:, undersample.columns != target]
y_undersample = undersample.loc[:, undersample.columns == target]
x_train, x_test, y_train, y_test = train_test_split(X_undersample, y_undersample, test_size=0.33, random_state=42)
print("\nWITH UNDERSAMPLING:")
clf = LogisticRegression().fit(x_train, y_train)
pred = clf.predict(x_test)
model_report(y_test, pred)
print("\nWITH UNDERSAMPLING & TESTING ON ENIRE DATASET:")
pred = clf.predict(X)
model_report(y, pred)
Output:
WITHOUT UNDERSAMPLING:
Accuracy: 0.9989679423750093
Precision: 0.7241379310344828
RECALL: 0.5637583892617449
F1 Score: 0.6339622641509434
WITH UNDERSAMPLING:
Accuracy: 0.9353846153846154
Precision: 0.9673202614379085
RECALL: 0.9024390243902439
F1 Score: 0.9337539432176657
WITH UNDERSAMPLING & TESTING ON ENTIRE DATASET:
Accuracy: 0.9595936897618387
Precision: 0.03760913364674278
RECALL: 0.9105691056910569
F1 Score: 0.07223476297968398
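One way to see why the precision collapses while recall stays high: precision = TP / (TP + FP), and the number of false positives scales with how many negatives you evaluate on. The balanced test set contains only a few hundred legitimate transactions, but the full dataset contains roughly 284,000, so even a small false-positive rate swamps the ~492 true frauds. A back-of-the-envelope sketch with assumed round numbers (not values computed from the code above):
# Illustration only: how class prevalence drives precision (all numbers are assumptions)
recall = 0.91        # fraction of frauds the model catches
fp_rate = 0.04       # fraction of legitimate transactions wrongly flagged as fraud

n_fraud, n_legit = 492, 284_000

tp = recall * n_fraud
fp_balanced = fp_rate * n_fraud   # balanced test set: about as many negatives as positives
fp_full = fp_rate * n_legit       # full dataset: ~284k negatives

print("Precision on balanced data:", tp / (tp + fp_balanced))  # roughly 0.96
print("Precision on full dataset:", tp / (tp + fp_full))       # roughly 0.04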

How can accuracy differ between one_hot_encode and count_vectorizer for the same dataset?

onehot_enc, BernoulliNB:
Here, I have used two different files for reviews and labels and I've used "train_test_split" to randomly split the data into 80% train data and 20% test data.
reviews.txt:
Colors & clarity is superb
Sadly the picture is not nearly as clear or bright as my 40 inch Samsung
The picture is clear and beautiful
Picture is not clear
labels.txt:
positive
negative
positive
negative
My Code:
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import confusion_matrix
with open("/Users/abc/reviews.txt") as f:
reviews = f.read().split("\n")
with open("/Users/abc/labels.txt") as f:
labels = f.read().split("\n")
reviews_tokens = [review.split() for review in reviews]
onehot_enc = MultiLabelBinarizer()
onehot_enc.fit(reviews_tokens)
X_train, X_test, y_train, y_test = train_test_split(reviews_tokens, labels, test_size=0.20, random_state=1)
bnbc = BernoulliNB(binarize=None)
bnbc.fit(onehot_enc.transform(X_train), y_train)
score = bnbc.score(onehot_enc.transform(X_test), y_test)
print("score of Naive Bayes algo is :" , score) // 90%
predicted_y = bnbc.predict(onehot_enc.transform(X_test))
tn, fp, fn, tp = confusion_matrix(y_test, predicted_y).ravel()
precision_score = tp / (tp + fp)
recall_score = tp / (tp + fn)
print("precision_score :" , precision_score) //92%
print("recall_score :" , recall_score) //97%
CountVectorizer, MultinomialNB:
Here, I've manually split the same data into train (80%) and test (20%), and I'm supplying these two CSV files to the algorithm.
But this is giving lower accuracy compared to the above method. Can anyone help me out with this?
train_data.csv:
review,label
Colors & clarity is superb,positive
Sadly the picture is not nearly as clear or bright as my 40 inch Samsung,negative
test_data.csv:
review,label
The picture is clear and beautiful,positive
Picture is not clear,negative
My Code:
from sklearn.metrics import confusion_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
def load_data(filename):
    reviews = list()
    labels = list()
    with open(filename) as file:
        file.readline()
        for line in file:
            line = line.strip().split(',')
            labels.append(line[1])
            reviews.append(line[0])
    return reviews, labels
X_train, y_train = load_data('/Users/abc/Sep_10/train_data.csv')
X_test, y_test = load_data('/Users/abc/Sep_10/test_data.csv')
vec = CountVectorizer()
X_train_transformed = vec.fit_transform(X_train)
X_test_transformed = vec.transform(X_test)
clf= MultinomialNB()
clf.fit(X_train_transformed, y_train)
score = clf.score(X_test_transformed, y_test)
print("score of Naive Bayes algo is :" , score) // 46%
y_pred = clf.predict(X_test_transformed)
print(confusion_matrix(y_test,y_pred))
print("Precision Score : ",precision_score(y_test, y_pred,average='micro'))//46%
print("Precision Score : ",recall_score(y_test, y_pred,average='micro')) // 46%
The issue here is that you are fitting the MultiLabelBinarizer on all of the data:
onehot_enc.fit(reviews_tokens)
before splitting into train and test, so information about the test data leaks into the model, hence the higher accuracy.
On the other hand, CountVectorizer only sees the training data, and it then ignores words that don't appear in the training data, even though they may be valuable to the model for classification.
So depending on the quantity of your data, this can make a huge difference. Anyway, your second technique (using CountVectorizer) is correct and should be the one used for text data. MultiLabelBinarizer and one-hot encoding in general should be used only for categorical data, not text data.
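If you do want a leakage-free version of the first approach for comparison, a minimal sketch (reusing the variable names from your first snippet) is to split first and fit the binarizer on the training tokens only; tokens unseen in training are then ignored at transform time, much like with CountVectorizer:
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB

X_train, X_test, y_train, y_test = train_test_split(reviews_tokens, labels, test_size=0.20, random_state=1)

onehot_enc = MultiLabelBinarizer()
onehot_enc.fit(X_train)  # fit on the training tokens only, so nothing leaks from the test set

bnbc = BernoulliNB(binarize=None)
bnbc.fit(onehot_enc.transform(X_train), y_train)

# transform() drops tokens it has not seen during fit (sklearn prints a warning about them)
score = bnbc.score(onehot_enc.transform(X_test), y_test)
print("score of Naive Bayes algo is:", score)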
Can you share your complete data?
