Scikit BinaryClassification training data selection - python

I'm training a binary classifier on data that has 100 attributes, where the positive class makes up only 3% of 800k items. During training, do we need to include the positive as well as the negative instances? I'm guessing that we shouldn't, as the outcome is only binary, i.e. if the model is trained on positives, then a weak match would mean the instance is negative.
If I do need to include both, would the pandas DataFrame's sample method be reliable?
Thank you!

If you're asking how to handle an imbalanced dataset, there are many blog posts online on that topic. One possible way to use pandas' sample method would be to set the weights parameter to the frequency of the other class, i.e. 0.97 for positive instances and 0.03 for negative ones, thereby correcting the imbalance by oversampling.
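A minimal sketch of that idea, assuming the training data lives in a DataFrame df with a binary label column (1 = positive, roughly 3% of rows); both names are illustrative:

# Weight each row by the frequency of the *other* class so that positives and
# negatives are drawn with roughly equal probability.
weights = df["label"].map({1: 0.97, 0: 0.03})

# Draw with replacement so the rare positives get oversampled.
balanced = df.sample(n=len(df), replace=True, weights=weights, random_state=0)
print(balanced["label"].value_counts(normalize=True))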
But if you're saying that you could theoretically fit a model to the distribution of the positive instances alone and, during testing, label all outliers as negative instances – that is possible, although not advisable: such an approach would almost certainly perform worse than one that learns from both classes. Furthermore, binary classification algorithms such as scikit-learn's expect training instances from both classes.

If you are training a binary classifier you will need examples of both classes in your training dataset.
At least if you want your classifier to work.
What you have is an unbalanced dataset, here are some ways to address this problem:
https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/

Is it necessary to mitigate class imbalance problem in multiclass text classification?

I am performing multi-class text classification using BERT in python. The dataset that I am using for retraining my model is highly imbalanced. Now, it is clear to me that class imbalance leads to a poor model and that one should balance the training set by undersampling, oversampling, etc. before model training.
However, it is also a fact that the distribution of the training set should be similar to the distribution of the production data.
Now, if I am sure that the data thrown at me in the production environment will also be imbalanced, i.e., the samples to be classified are more likely to belong to some classes than to others, should I balance my training set?
OR
Should I keep the training set as it is, since I know that the distribution of the training set is similar to the distribution of the data I will encounter in production?
Please give me some ideas, or provide some blogs or papers for understanding this problem.
Class imbalance is not a problem by itself; the problem is that too few minority-class samples make it harder to describe that class's statistical distribution, which is especially true for high-dimensional data (and BERT embeddings have 768 dimensions IIRC).
Additionally, the logistic function tends to underestimate the probability of rare events (see e.g. https://gking.harvard.edu/files/gking/files/0s.pdf for the mechanics), which can be offset by selecting a classification threshold as well as by resampling.
There are quite a few discussions on CrossValidated regarding this (like https://stats.stackexchange.com/questions/357466). TL;DR:
while too few samples of a class may degrade prediction quality, resampling is not guaranteed to give an overall improvement; at the very least, there is no universal recipe for a perfect resampling proportion, you'll have to test it out for yourself;
however, real-life tasks often weigh classification errors unequally: resampling may help improve a particular class's metrics at the cost of overall accuracy. The same caveat applies to classification-threshold selection.
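A minimal sketch of threshold selection for a binary case, assuming an already fitted scikit-learn classifier clf and a held-out validation set (X_val, y_val); all names are placeholders:

import numpy as np
from sklearn.metrics import f1_score

# Probability of the positive (minority) class on the validation set.
proba = clf.predict_proba(X_val)[:, 1]

# Scan candidate thresholds instead of the default 0.5 and keep the one that
# maximises F1 (or whatever metric reflects your real misclassification costs).
thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_val, proba >= t) for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]

y_pred = (proba >= best_t).astype(int)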
This depends on the goal of your classification:
Do you want a high probability that a random sample is classified correctly? -> Do not balance your training set.
Do you want a high probability that a random sample from a rare class is classified correctly? -> balance your training set or apply weighting during training increasing the weights for rare classes.
For example in web applications seen by clients, it is important that most samples are classified correctly, disregarding rare classes, whereas in the case of anomaly detection/classification, it is very important that rare classes are classified correctly.
Keep in mind that a model trained on a highly imbalanced dataset tends to always predict the majority class, so increasing the number or the weights of rare-class samples can be a good idea, even without perfectly balancing the training set.
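The weighting option can be illustrated with scikit-learn's class_weight argument; the model below is just a stand-in (in a BERT fine-tuning setup the equivalent would be class weights in the loss function), and the data names are placeholders:

from sklearn.linear_model import LogisticRegression

# class_weight='balanced' re-weights each class inversely to its frequency,
# so errors on rare classes cost more during training.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, y_train)

# Explicit weights are also possible, e.g. if class 1 is about ten times rarer:
# clf = LogisticRegression(class_weight={0: 1.0, 1: 10.0})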
P(label | sample) is not the same as P(label).
P(label | sample) is your training goal.
In the case of gradient-based learning with mini-batches on models with a large parameter space, rare labels have a small footprint on the model training, so your model ends up fitting P(label).
To avoid fitting to P(label), you can balance the batches.
Over all batches of an epoch, the data then looks like an up-sampled minority class. The goal is a loss function whose gradients move the parameters toward a better classification objective.
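A rough sketch of that batch-balancing idea in plain NumPy, assuming X and y are NumPy arrays with integer labels; in practice you would use your framework's sampler, so this is only illustrative:

import numpy as np

def balanced_batches(X, y, batch_size, seed=0):
    """Yield mini-batches in which every class is (roughly) equally represented."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    per_class = batch_size // len(classes)
    idx_by_class = [np.flatnonzero(y == c) for c in classes]
    for _ in range(len(y) // batch_size):
        # Sampling with replacement effectively up-samples the minority class.
        batch = np.concatenate(
            [rng.choice(idx, size=per_class, replace=True) for idx in idx_by_class]
        )
        rng.shuffle(batch)
        yield X[batch], y[batch]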
UPDATE
I don't have any proof to show this here. It is perhaps not an accurate statement. With enough training data (with respect to the complexity of features) and enough training steps you may not need balancing. But most language tasks are quite complex and there is not enough data for training. That was the situation I imagined in the statements above.

How to handle Imbalanced Dataset and outliers in python?

I have 2 doubts:
If we have a classification problem with a dataframe that has a large number of features (> 100 columns), and say 20-30 of them are highly correlated, and the target column (y) is very skewed towards one class:
should we first remove the imbalance using Imblearn, or should we drop the highly correlated columns?
In a classification problem, should we first standardise the data or handle the outliers?
There is no "true" answer to your questions - the approach to take highly depends on your setting, the models you apply and the goals at hand.
The topic of class imbalance has been discussed elsewhere (for example here and here).
A valid reason for oversampling/undersampling your positive or negative class training examples could be the knowledge that the true incidence of positive instances is higher (lower) than your training data suggests. Then you might want to apply sampling techniques to achieve a positive/negative class balance that matches that prior knowledge.
While not really dealing with imbalance in your label distribution, your specific setting may warrant assigning different costs to false positives and false negatives (e.g. the cost of misclassifying a cancer patient as healthy may be higher than vice versa). This you can deal with by e.g. adapting your cost function (e.g. a false negative incurring a higher cost than a false positive) or by performing some kind of threshold optimization after training (to e.g. reach a certain precision/recall in cross-validation).
The problem of highly correlated features occurs with models that assume that there is no correlation between features. For example, if you have an issue with multicollinearity in your feature space, parameter estimates in logistic regressions may be off. Whether or not there is multicollinearity you can check using, for example, the variance inflation factor (VIF). However, not all models carry such an assumption, so you might be safe disregarding the issue depending on your setting.
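A quick VIF check could look roughly like this, assuming X is a DataFrame of numeric features (the cut-off mentioned in the comment is only a common rule of thumb):

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
# Values well above ~5-10 are commonly flagged as collinearity candidates.
print(vif.sort_values(ascending=False).head(10))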
The same goes for standardisation: it may not be necessary (e.g. for tree classifiers), but other methods may require it (e.g. PCA).
Whether or not to handle outliers is a difficult question. First you would have to define what an outlier is – is it, e.g., a result of human error? Do you expect to see similar instances out in the wild? If you can establish that your model performs better when trained with outliers removed (on a holdout validation or test set), then: sure, go for it. But keep potential outliers in for validation if you plan to apply your model to streams of data which may produce similar outliers.

Proper way to handle highly imbalanced data - binary classification

I have a really large dataset with 60 million rows and 11 features.
It is a highly imbalanced dataset, 20:1 (signal:background).
As I saw, there are two ways to tackle this problem:
First: Under-sampling/Oversampling.
I have two problems/questions in this way.
If I undersample before the train-test split, I am losing a lot of data.
But more importantly, if I train a model on a balanced dataset, I lose information about the true frequency of my signal data (say, the frequency of benign tumors versus malignant ones), and because the model is trained and evaluated on balanced data, it will appear to perform well. But if at some point in the future I try my model on new data, it will perform badly, because the real data is imbalanced.
If I undersample after the train-test split, my model will underfit, because it will be trained on balanced data but validated/tested on imbalanced data.
Second - class weight penalty
Can I use a class-weight penalty for XGBoost, Random Forest, and Logistic Regression?
So, everybody, I am looking for an explanation and ideas on how to work on this kind of problem.
Thank you in advance, I will appreciate any of your help.
I suggest this quick paper by Breiman (author of Random Forest):
Using Random Forest to Learn Imbalanced Data
The suggested methods are weighted RF, where you compute the splits using weighted Gini (or Entropy, which in my opinion is better when weighted), and Balanced Random Forest, where you try to balance the classes during the bootstrap.
Both methods can also be implemented for boosted trees!
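In scikit-learn terms, weighted RF roughly corresponds to the class_weight option, and imbalanced-learn ships a BalancedRandomForestClassifier that balances each bootstrap sample; a minimal sketch (hyperparameters and data names are arbitrary):

from sklearn.ensemble import RandomForestClassifier
from imblearn.ensemble import BalancedRandomForestClassifier

# Weighted RF: the class weights enter the impurity (Gini/entropy) computation.
weighted_rf = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", n_jobs=-1, random_state=0
)

# Balanced RF: each tree is grown on a bootstrap sample in which the majority
# class is down-sampled so both classes are equally represented.
balanced_rf = BalancedRandomForestClassifier(n_estimators=200, random_state=0)

weighted_rf.fit(X_train, y_train)
balanced_rf.fit(X_train, y_train)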
One of the suggested methodologies could be the Synthetic Minority Oversampling Technique (SMOTE), which attempts to balance the dataset by creating synthetic instances of the minority class. You can then train any classification algorithm on the balanced dataset.
For comparing multiple models, Area Under the ROC Curve (AUC score) can be used to determine which model is superior.
This guide will be able to give you some ideas on different methodologies you can use and compare to resolve imbalance problem.
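A minimal sketch of the SMOTE-plus-AUC comparison described above, using imbalanced-learn; the model, parameters, and data names are placeholders, and the resampling is applied only to the training split:

from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Resample only the training data; the test set keeps the real imbalance.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))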
The above issue is pretty common when dealing with medical datasets and other types of fault detection where one of the classes (ill-effect) is always under-represented.
The best way to tackle this is to generate folds and apply cross-validation. The folds should be generated so as to balance the classes in each fold. In your case this creates 20 folds, each containing all of the under-represented class and a different fraction of the over-represented class.
Generating balanced folds and using cross-validation also results in a better generalised and more robust model. In your case, 20 folds might seem too harsh, so you could instead create 10 folds, each with a 2:1 class ratio.
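A rough sketch of that scheme, pairing the full minority class with different slices of the majority class (here 10 chunks for a roughly 2:1 ratio, as suggested); the class encoding, the model, and names like X_new are assumptions:

import numpy as np
from sklearn.linear_model import LogisticRegression

minority_idx = np.flatnonzero(y == 0)   # assumed under-represented class
majority_idx = np.flatnonzero(y == 1)   # assumed over-represented class
rng = np.random.default_rng(0)
rng.shuffle(majority_idx)

models = []
for chunk in np.array_split(majority_idx, 10):   # 10 folds -> roughly 2:1 ratio
    idx = np.concatenate([minority_idx, chunk])
    models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

# Average the per-fold probabilities for the final prediction.
proba = np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)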

How to increase true positive in your classification Machine Learning model?

I am new to Machine Learning.
I have a dataset which has highly unbalanced classes (dominated by the negative class) and contains more than 2K numeric features; the target is [0, 1]. I have trained a logistic regression, and although I am getting an accuracy of 89%, the confusion matrix shows that the model's true positives are very low. Below are the scores of my model:
Accuracy Score : 0.8965989500114129
Precision Score : 0.3333333333333333
Recall Score : 0.029545454545454545
F1 Score : 0.05427974947807933
How I can increase my True Positives? Should I be using a different classification model?
I have tried PCA and represented my data in 2 components; it increased the model accuracy to about 90%, however the true positives decreased again.
There are several ways to do this:
You can change your model and test whether it performs better or not
You can fix a different prediction threshold: here I guess you predict 0 if the output of your regression is < 0.5; you could change the 0.5 to 0.25, for example. It would increase your true positive rate, but of course at the price of some more false positives.
You can duplicate every positive example in your training set so that your classifier has the feeling that classes are actually balanced (see the sketch after this list).
You could change the loss of the classifier in order to penalize false negatives more (this is actually pretty close to duplicating your positive examples in the dataset)
I'm sure many other tricks could apply, here is just my favorite short-list.
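A minimal sketch of the duplication trick with pandas, assuming the training data lives in a DataFrame train with a binary target column (both names are illustrative):

import pandas as pd

pos = train[train["target"] == 1]
neg = train[train["target"] == 0]

# Replicate the positives (with replacement) until both classes are the same size.
pos_upsampled = pos.sample(n=len(neg), replace=True, random_state=0)
balanced_train = (
    pd.concat([neg, pos_upsampled])
    .sample(frac=1, random_state=0)   # shuffle the rows
    .reset_index(drop=True)
)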
I'm assuming that your purpose is to obtain a model with good classification accuracy on some test set, regardless of the form of that model.
In that case, if you have access to the computational resources, try Gradient-Boosted Trees. That's an ensemble classifier that builds many decision trees, each one trained to correct the errors of the previous ones, and combines their outputs for the prediction. As far as I know, it can give good results with unbalanced class counts.
scikit-learn has sklearn.ensemble.GradientBoostingClassifier for this. I have not used that particular one, but I use the regression version often and it seems good. I'm pretty sure MATLAB has this as a package too, if you have access.
2k features might be difficult for the SKL algorithm – I don't know, I've never tried.
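A minimal usage sketch; the hyperparameters and data names are arbitrary, and the classification_report at the end is just a reminder to look beyond accuracy:

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = GradientBoostingClassifier(
    n_estimators=300, learning_rate=0.05, max_depth=3, random_state=0
)
clf.fit(X_train, y_train)

# Per-class precision/recall matter more than accuracy under imbalance.
print(classification_report(y_test, clf.predict(X_test)))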
What is the size of your dataset? How many rows are we talking here?
Your dataset is not balanced, so it's kind of normal for a simple classification algorithm to predict the majority class most of the time and give you an accuracy of 90%. Can you collect more data that will have more positive examples in it?
Or just try oversampling/under-sampling and see if that helps.
You can also use a penalized version of the algorithm to impose a penalty whenever the wrong class is predicted. That may help.
You can try many different solutions.
If you have quite a lot of data points, for instance 2k 1s and 20k 0s, you can try dumping the extra 0s and keeping only 2k of them, then train on that. You can also train multiple models using different sets of 2k 0s with the same set of 2k 1s, and make the decision based on those multiple models.
You can also try adding weights at the output layer: for instance, if you have 10 times as many 0s as 1s, try multiplying the prediction value for the 1s by 10.
You could probably also try increasing dropout.
And so on.

What is "The sum of true positives and false positives are equal to zero for some labels." mean?

I'm using scikit-learn to perform cross validation with StratifiedKFold to compute the f1 score, but it says that the sum of true positives and false positives is equal to zero for some of my labels. I thought using StratifiedKFold should prevent this? Why am I getting this problem?
Also, is there a way to get the confusion matrix from the cross_val_score function?
Your classifier is probably classifying all data points as negative, so there are no positives. You can check whether that is the case by looking at the confusion matrix (docs and example here). It's hard to tell what is happening without information about your data and choice of classifier, but common causes include:
bug in your code. Check your training data contains negative data points, and that these data points contain non-zero features.
inappropriate classifier parameters. If using Naive Bayes, check your class biases. If using SVM, try using grid search over parameter values.
The sklearn classification_report function may come in handy (docs).
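Regarding your second question about the confusion matrix: cross_val_score only returns scores, but a common workaround is cross_val_predict; a minimal sketch, where clf, X and y are placeholders for your classifier and data:

from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Out-of-fold predictions for every sample, then one aggregate confusion matrix.
y_pred = cross_val_predict(clf, X, y, cv=cv)
print(confusion_matrix(y, y_pred))
print(classification_report(y, y_pred))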
Re your second question: stratification ensures that each fold contains roughly the same proportion of data points from all classes. This does not mean your classifier will perform sensibly.
Update:
In a classification task (and especially when class imbalance is present) you are trading off precision for recall. Depending on your application, you can set your classifier so it does well most of the time (i.e. high accuracy) or so that it can detect the few points that you care about (i.e. high recall of the smaller classes). For example, if the task is to forward support emails to the right department, you want high accuracy. It is somewhat acceptable to misclassify the kind of email you get once a year, because you only upset one person. If your task is to detect posts by sexual predators on a children's forum, you definitely do not want to miss any of them, even if the price is that a few posts will get incorrectly flagged. Bottom line: you should optimise for your application.
Are you micro or macro averaging recall? In the former case, more weight will be given to the frequent classes (which is similar to optimising for accuracy), and in the latter all classes will have the same weight.
