Difference between Normalizer and MinMaxScaler - python

I'm trying to understand the effects of applying the Normalizer, applying MinMaxScaler, or applying both to my data. I've read the scikit-learn docs and seen some usage examples. I understand why MinMaxScaler matters (scaling the features is important), but what about Normalizer?
The practical effect of applying the Normalizer to my data is still unclear to me.
MinMaxScaler is applied column-wise, Normalizer is applied row-wise. What does that imply? Should I use the Normalizer, just MinMaxScaler, or both?

As you have said,
MinMaxScaler is applied column-wise, Normalizer is applied row-wise.
Do not confuse Normalizer with MinMaxScaler. The Normalizer class from sklearn normalizes samples individually to unit norm. It is a row-based, not a column-based, technique: it operates on each record independently rather than on each feature.
Remember that we usually scale features, not records, because we want features to be on the same scale, so that the trained model does not weight features differently simply because of their ranges. Scaling the records instead gives each record its own scale, which is usually not what we want.
So, if your features are represented by rows, you should use the Normalizer. But in most cases features are represented by columns, so you should use one of the scalers from sklearn, depending on the case:
MinMaxScaler transforms features by scaling each feature to a given range. It scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.
The transformation is given by:
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
where min, max = feature_range.
This transformation is often used as an alternative to zero mean, unit variance scaling.
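As a quick check, here is a minimal sketch (on a made-up toy array) showing that MinMaxScaler reproduces the formula above with the default feature_range=(0, 1):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0, 20.0],
              [2.0, 60.0],
              [4.0, 100.0]])

# Direct application of the formula, with min, max = (0, 1)
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(np.allclose(MinMaxScaler().fit_transform(X), X_std))  # True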
StandardScaler standardizes features by removing the mean and scaling to unit variance. The standard score of a sample x is calculated as:
z = (x - u) / s, where u is the mean and s is the standard deviation of the training samples. Use this if the data distribution is approximately normal.
RobustScaler is robust to outliers. It removes the median and scales the data according to the IQR (interquartile range), i.e. the range between the 25th and the 75th percentiles.
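To make the row-wise versus column-wise distinction concrete, here is a minimal sketch on a made-up toy array:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, Normalizer

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# MinMaxScaler works column-wise: each feature ends up spanning [0, 1]
print(MinMaxScaler().fit_transform(X))

# Normalizer works row-wise: each sample is divided by its own L2 norm,
# so every row has unit length afterwards
X_norm = Normalizer(norm="l2").fit_transform(X)
print(np.linalg.norm(X_norm, axis=1))  # [1. 1. 1.]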

Related

Why is "Z-Score Normalization" another term for StandardScaler if it uses standardization?

I found this online:
"StandardScaler or Z-Score Normalization is one of the feature scaling techniques, here the transformation of features is done by subtracting from the mean and dividing by standard deviation. This is often called Z-score normalization. The resulting data will have the mean as 0 and the standard deviation as 1."
To sum it up: why is it called Z-Score Normalization if it uses a Standardization technique?
What I'm thinking is that if it is called Z-Score Normalization shouldn't it use a Normalization technique rather than a Standardization one?
According to sklearn StandardScaler documentation:
StandardScaler standardizes features by removing the mean and scaling to unit variance.
The standard score of a sample x is calculated as:
z = (x - u) / s (The formula for calculating a z-score)
So StandardScaler and Z-Score Normalization use the same formula and are equivalent: "normalization" in the name is simply being used loosely for what is, strictly speaking, standardization.
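A minimal sketch confirming the equivalence on made-up data:

import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0]])

z_manual = (x - x.mean(axis=0)) / x.std(axis=0)  # z = (x - u) / s
z_scaler = StandardScaler().fit_transform(x)
print(np.allclose(z_manual, z_scaler))           # True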

Data augmentation by adding noise in python regression model

I am building a regression model for a target variable that is heavy-tailed. I want to augment the data so that the model gets enough training samples in the long-tail region. Prediction accuracy for the rare data points is important.
I am currently augmenting the data by adding noise to the training samples. After splitting into train and test, I apply MinMaxScaler to all the features (X), but no scaling to the target variable (y).
Then, I add noise to both X and y with different mean and std since X is scaled to [0,1] but y isn't.
Here is the code for augmenting by adding noise:
import numpy as np

def add_noise(mean, std, df):
    # Draw Gaussian noise with the same shape as the data
    noise = np.random.normal(mean, std, df.shape)
    # Keep values <= 0.001 unchanged; add the absolute noise everywhere else
    df2 = df.where(df <= 0.001, df.add(abs(noise)))
    return df2
I invoke this using something like add_noise(0, 0.005, X_train) and add_noise(0, 1, y_train).
X_train is normalized/scaled, so I can use a small standard deviation. Now I have to decide what standard deviation for y_train will cause only a small perturbation, corresponding to the perturbation applied to X_train.
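One option I'm considering (purely my own assumption, nothing established) is to express both noise levels as the same fraction of each variable's range: std=0.005 is 0.5% of the [0, 1] range of the scaled X, so the matching std for y would be 0.5% of y's range:

import numpy as np

rel_std = 0.005  # 0.5% of the range, the same fraction used for the scaled X
# y_train below is a random stand-in; the real y_train comes from my split
y_train = np.random.default_rng(0).lognormal(size=1000)
std_y = rel_std * (y_train.max() - y_train.min())
y_augmented = y_train + np.random.normal(0.0, std_y, y_train.shape)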
Questions
How do I find the right mean and std for my y variable, keeping in mind that the model should see a similar data distribution between the original dataset and the augmented one?
Any other suggestions on augmenting data for regression?

Different F1 scores for different preprocessing techniques - sklearn

I am building a classification model using sklearn's GradientBoostingClassifier. For the same model, I tried different preprocessing techniques on the same data: StandardScaler, scale, and Normalizer. But I get a different f1_score each time: highest for StandardScaler and lowest for Normalizer. Why is that? Is there another technique that could give an even higher score?
The difference lies in their respective definitions:
StandardScaler: Standardize features by removing the mean and scaling to unit variance
Normalizer: Normalize samples individually to unit norm.
Scale: Standardize a dataset along any axis. Center to the mean and component wise scale to unit variance.
The data used to fit your model changes with each preprocessing technique, and so will the F1 score.
Here is a useful link comparing different scalers: https://scikit-learn.org/stable/auto_examples/preprocessing/plot_all_scaling.html#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py
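For instance, here is a minimal sketch comparing the techniques under cross-validation. It uses the built-in breast_cancer dataset as a stand-in (your data isn't shown), and swaps the scale function for comparable transformer classes so they can go in a pipeline:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler, Normalizer

X, y = load_breast_cancer(return_X_y=True)

for scaler in (StandardScaler(), MinMaxScaler(), RobustScaler(), Normalizer()):
    # Keeping the scaler inside the pipeline re-fits it within each CV fold
    pipe = make_pipeline(scaler, GradientBoostingClassifier(random_state=0))
    scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
    print(f"{type(scaler).__name__:15s} mean F1 = {scores.mean():.3f}")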

Using chi2 test for feature selection with continuous features (Scikit Learn)

I am trying to predict a binary (categorical) target from many continuous features, and would like to narrow my feature space before heading into model fitting. I noticed that the SelectKBest class from sklearn's feature selection package has the following example on the Iris dataset (which also predicts a categorical target from continuous features):
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
iris = load_iris()
X, y = iris.data, iris.target
X.shape
(150, 4)
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
X_new.shape
(150, 2)
The example uses the chi2 test to determine which features should be used in the model. However, my understanding is that the chi2 test is strictly meant for situations where categorical features predict a categorical target. I did not think the chi2 test could be used for scenarios like this. Is my understanding wrong? Can the chi2 test be used to test whether a categorical variable depends on a continuous variable?
The SelectKBest function with the chi2 test only works with categorical data. In fact, the result of the test will only have real meaning if the features contain just 1's and 0's.
If you inspect the implementation of chi2 a little, you will see that the code only applies a sum across each feature, which means the function expects just binary values. Also, the parameters that the chi2 function receives indicate the following:
def chi2(X, y):
...
X : {array-like, sparse matrix}, shape = (n_samples, n_features_in)
Sample vectors.
y : array-like, shape = (n_samples,)
Target vector (class labels).
This means the function expects to receive the feature vectors with all their samples. But later, when the expected values are calculated, you will see:
feature_count = X.sum(axis=0).reshape(1, -1)
class_prob = Y.mean(axis=0).reshape(1, -1)
expected = np.dot(class_prob.T, feature_count)
And these lines of code only make sense if the X and Y vectors contain just 1's and 0's.
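Here is a minimal sketch (on synthetic binary data, made up for illustration) that reproduces those quoted lines and matches the output of sklearn's chi2:

import numpy as np
from sklearn.feature_selection import chi2

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 3)).astype(float)  # binary features
y = rng.integers(0, 2, size=100)

chi2_stats, p_values = chi2(X, y)

# Manual reproduction of the lines quoted above
Y = np.column_stack([(y == c).astype(float) for c in np.unique(y)])
observed = Y.T @ X                                   # class-wise feature counts
feature_count = X.sum(axis=0).reshape(1, -1)
class_prob = Y.mean(axis=0).reshape(1, -1)
expected = np.dot(class_prob.T, feature_count)
manual = ((observed - expected) ** 2 / expected).sum(axis=0)
print(np.allclose(chi2_stats, manual))               # True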
I agree with @lalfab; however, it's not clear to me why sklearn provides an example of using chi2 on the iris dataset, which has all continuous variables. The SelectKBest documentation (https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html) does the same with the digits dataset:
>>> from sklearn.datasets import load_digits
>>> from sklearn.feature_selection import SelectKBest, chi2
>>> X, y = load_digits(return_X_y=True)
>>> X.shape
(1797, 64)
>>> X_new = SelectKBest(chi2, k=20).fit_transform(X, y)
>>> X_new.shape
(1797, 20)
My understanding is that when using chi2 for feature selection, the dependent variable has to be categorical, but the independent variables can be either categorical or continuous, as long as they are non-negative. What the algorithm tries to do is first build a contingency table in matrix format that reveals the multivariate frequency distribution of the variables, and then find the dependence structure underlying the variables using this contingency table. Chi2 is one way to measure that dependency.
From the Wikipedia on contingency table (https://en.wikipedia.org/wiki/Contingency_table, 2020-07-04):
Standard contents of a contingency table
Multiple columns (historically, they were designed to use up all the white space of a printed page). Where each row refers to a specific sub-group in the population (in this case men or women), the columns are sometimes referred to as banner points or cuts (and the rows are sometimes referred to as stubs).
Significance tests. Typically, either column comparisons, which test for differences between columns and display these results using letters, or, cell comparisons, which use color or arrows to identify a cell in a table that stands out in some way.
Nets or netts which are sub-totals.
One or more of: percentages, row percentages, column percentages, indexes or averages.
Unweighted sample sizes (counts).
Based on this, pure binary features can easily be summed up as counts, which is how people usually conduct the chi2 test. But as long as the features are non-negative, one can always accumulate them in the contingency table in a "meaningful" way. In the sklearn implementation, the features are summed up as feature_count = X.sum(axis=0) and then combined with class_prob to form the expected counts.
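A minimal sketch of that constraint: sklearn's chi2 runs on iris's non-negative continuous features, but rejects negative values outright:

from sklearn.datasets import load_iris
from sklearn.feature_selection import chi2

X, y = load_iris(return_X_y=True)
print(chi2(X, y))  # runs: the features are continuous but non-negative

try:
    chi2(X - X.mean(axis=0), y)  # centering introduces negative values
except ValueError as err:
    print(err)  # sklearn refuses: input X must be non-negative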
In my understanding, you cannot use chi-square (chi2) for continuous variables. The chi2 calculation requires building a contingency table, where you count occurrences of each category of the variables of interest. As the cells in that RC table correspond to particular categories, I cannot see how such a table could be built from continuous variables without significant preprocessing.
So the iris example which you quote is, in my view, an example of incorrect usage.
But there are more problems with the existing implementation of chi2 feature reduction in scikit-learn. First, as @lalfab wrote, the implementation requires binary features, but the documentation is not clear about this. This has led to a common perception in the community that SelectKBest can be used for categorical features, while in fact it cannot. Second, the scikit-learn implementation fails to enforce the chi2 condition (80% of the cells in the RC table need an expected count >= 5), which leads to incorrect results when some categorical features have many possible values. All in all, in my view, this method should be used neither for continuous nor for categorical features (except binary ones). I wrote more about this below:
Here is the Scikit-learn bug report #21455:
and here are the article and the alternative implementation:

Scaling test data to 0 and 1 using MinMaxScaler

Using the MinMaxScaler from sklearn, I scale my data as below.
from sklearn import preprocessing

min_max_scaler = preprocessing.MinMaxScaler()
X_train_scaled = min_max_scaler.fit_transform(features_train)
X_test_scaled = min_max_scaler.transform(features_test)
However, when printing X_test_scaled.min(), I get some negative values (the values do not fall between 0 and 1). This is because the lowest value in my test data is lower than the lowest value in the training data, on which the MinMaxScaler was fit.
How much does not having the data exactly scaled to the [0, 1] range affect the SVM classifier? Also, is it bad practice to concatenate the train and test data into a single matrix, perform min-max scaling to ensure values are between 0 and 1, and then separate them again?
If you could scale all your data in one shot, that would be better, because everything would then be handled by the scaler consistently (all values between 0 and 1). But for the SVM algorithm it should make no difference: the scaler only stretches and shifts the scale, and the relative differences between values stay the same even when some of them become negative.
In the documentation we can see examples with negative values, so I don't think this has an impact on the result.
For this scaling it probably doesn't matter much in practice, but in general you should not use your test data to estimate any parameters of the preprocessing. This can severely bias your results for more complex preprocessing steps.
There is really no reason to concatenate the data here; the SVM will deal with it.
If you are using a model that needs positive values and your test data is not guaranteed to be positive, you might consider a strategy other than MinMaxScaler.
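If you do need the transformed test values to stay inside the range, one option (a minimal sketch; note that the clip parameter requires scikit-learn 0.24 or newer) is to let MinMaxScaler clip out-of-range values:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0], [5.0], [10.0]])
X_test = np.array([[0.0], [12.0]])  # values outside the training range

scaler = MinMaxScaler().fit(X_train)
print(scaler.transform(X_test).ravel())   # [-0.111...  1.222...], outside [0, 1]

# clip=True forces transformed values back into feature_range
clipped = MinMaxScaler(clip=True).fit(X_train)
print(clipped.transform(X_test).ravel())  # [0. 1.]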
