I have a dataset with numerical and categorical data. The data includes outliers, which are essential for interpretation later. I've binary encoded the categorical data and used the RobustScaler on the numerical data.
The binary encoded categorical data does not get scaled. Is this combination possible, or is there a logical error?
There's no reason why you couldn't do that, but there's also no point.
The reason why you scale input features to be on roughly the same scale is that lots of inference methods get tripped up by features which are on vastly different scales. See "Why does feature scaling improve the convergence speed for gradient descent?" for more.
A binary feature that ranges from 0 to 1 and a continuous feature whose 25th-75th percentile range spans roughly -1 to 1 are already on approximately the same scale.
Since a binary feature is easier to interpret than a scaled binary feature, I would just leave it and not apply another scaling method.
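If you want to bundle the scaling into one preprocessing step anyway, here is a minimal sketch with scikit-learn's ColumnTransformer (the column names and data are made up) that scales only the numerical columns and passes the binary-encoded ones through untouched:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import RobustScaler

# toy frame: two numerical columns (each with an outlier) and one binary-encoded column
df = pd.DataFrame({
    "age":    [23, 35, 31, 29, 95],
    "income": [40e3, 52e3, 48e3, 45e3, 9e5],
    "is_vip": [0, 1, 0, 0, 1],   # binary-encoded categorical, left as-is
})

# RobustScaler is applied only to the numerical columns;
# remainder="passthrough" keeps the binary column unchanged
pre = ColumnTransformer(
    [("num", RobustScaler(), ["age", "income"])],
    remainder="passthrough",
)
print(pre.fit_transform(df))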
I have a set of alphanumeric categorical features (c_1, c_2, ..., c_n) and one numeric target variable (prediction) as a pandas dataframe. Can you suggest a feature selection algorithm that I can use for this dataset?
I'm assuming you are solving a supervised learning problem like Regression or Classification.
First of all, I suggest transforming the categorical features into numeric ones using one-hot encoding. Pandas provides a useful function that already does this:
dataset = pd.get_dummies(dataset, columns=['feature-1', 'feature-2', ...])
If you have a limited number of features and a model that is not too computationally expensive, you can test every possible combination of features. This is the most thorough approach, but it is seldom a viable option.
A possible alternative is to sort all the features by their correlation with the target, then sequentially add them to the model, measure the model's performance at each step, and select the set of features that performs best.
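A rough sketch of that correlation-ranked forward selection, assuming dataset is the one-hot encoded frame from above, prediction is the numeric target, and a plain linear regression stands in for whatever model you actually use:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X = dataset.drop(columns=["prediction"]).astype(float)
y = dataset["prediction"]

# rank features by absolute correlation with the target
ranking = X.corrwith(y).abs().sort_values(ascending=False).index

# add features one by one in that order and keep the prefix that scores best
best_score, best_features, selected = -np.inf, [], []
for feature in ranking:
    selected.append(feature)
    score = cross_val_score(LinearRegression(), X[selected], y, cv=5).mean()
    if score > best_score:
        best_score, best_features = score, list(selected)

print(best_features, best_score)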
If you have high-dimensional data, you can consider reducing the dimensionality using PCA or another dimensionality reduction technique. This projects the data into a lower-dimensional space, reducing the number of features, though you will lose some information due to the approximation.
These are only some examples of methods to perform feature selection; there are many others.
Final tips:
Remember to split the data into training, validation and test sets.
Data normalization is often recommended to obtain better results.
Some models have embedded mechanisms to perform feature selection (Lasso, decision trees, ...); see the sketch below.
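For the last tip, a minimal sketch of embedded selection with Lasso (the alpha value is arbitrary, and X and y are as in the sketch above):
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Lasso shrinks uninformative coefficients to zero; SelectFromModel drops those features
selector = make_pipeline(StandardScaler(), SelectFromModel(Lasso(alpha=0.01)))
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)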
I have a question regarding random forests. Imagine that I have data on users interacting with items. The number of items is large, around 10 000. My output of the random forest should be the items that the user is likely to interact with (like a recommender system). For any user, I want to use a feature that describes the items that the user has interacted with in the past. However, mapping the categorical product feature as a one-hot encoding seems very memory inefficient, as a user interacts with no more than a couple of hundred of the items at most, and sometimes with as few as 5.
How would you go about constructing a random forest when one of the input features is a categorical variable with ~10 000 possible values and the output is a categorical variable with ~10 000 possible values? Should I use CatBoost with the features as categorical? Or should I use one-hot encoding, and if so, do you think XGBoost or CatBoost does better?
You could also try entity embeddings to reduce hundreds of boolean features into vectors of small dimension.
It is similar to word embeddings for categorical features. In practical terms, you define an embedding of your discrete space of features into a vector space of low dimension. It can enhance your results and save on memory. The downside is that you do need to train a neural network model to define the embedding beforehand.
Check this article for more information.
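As a minimal sketch of the idea (assuming PyTorch is available; the dimensions are arbitrary), each of the ~10 000 item ids gets a small learnable vector, and a user is represented by the average of the vectors of the items they interacted with:
import torch
import torch.nn as nn

NUM_ITEMS, EMB_DIM = 10_000, 16

# one learnable 16-dimensional vector per item (trained as part of a neural network)
item_embedding = nn.Embedding(NUM_ITEMS, EMB_DIM)

# a user who interacted with items 3, 42 and 7331
interacted = torch.tensor([3, 42, 7331])

# 16 numbers instead of 10 000 booleans; reusable as an input feature for the forest
user_vector = item_embedding(interacted).mean(dim=0)
print(user_vector.shape)  # torch.Size([16])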
XGBoost doesn't support categorical features directly; you need to preprocess them before using it. For example, you could do one-hot encoding. One-hot encoding usually works well if your categorical feature has a few frequent values.
CatBoost does have categorical feature support - both one-hot encoding and the calculation of different statistics on categorical features. To use one-hot encoding you need to enable it with the one_hot_max_size parameter; by default, statistics are calculated. Statistics usually work better for categorical features with many values.
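A minimal CatBoost sketch (column names, data and parameter values are made up; check the CatBoost docs for the full options):
import pandas as pd
from catboost import CatBoostClassifier

# toy interaction data: the item id is a high-cardinality categorical feature
train = pd.DataFrame({
    "last_item": ["item_17", "item_942", "item_17", "item_3"],
    "user_age":  [23, 35, 41, 29],
    "next_item": ["item_942", "item_3", "item_3", "item_17"],  # target
})

model = CatBoostClassifier(
    iterations=100,
    one_hot_max_size=10,  # categories with at most 10 distinct values are one-hot encoded;
                          # higher-cardinality ones get target statistics instead
    verbose=False,
)
model.fit(train[["last_item", "user_age"]], train["next_item"],
          cat_features=["last_item"])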
Assuming you have enough domain expertise, you could create a new categorical column from the existing column.
For example, if your column has the values A, B, C, D, E, F, G, H, and you know that A, B, C are similar, D, E, F are similar, and G, H are similar, your new column would be Z, Z, Z, Y, Y, Y, X, X.
In your random forest model you should remove the original column and include only this new one. Note that by transforming your features like this you lose some explainability of your model.
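In pandas that grouping is a simple mapping (column and category names are hypothetical):
import pandas as pd

df = pd.DataFrame({"item_category": ["A", "D", "H", "B", "G"]})

# group the original categories into coarser, domain-driven buckets
group_map = {"A": "Z", "B": "Z", "C": "Z",
             "D": "Y", "E": "Y", "F": "Y",
             "G": "X", "H": "X"}
df["item_group"] = df["item_category"].map(group_map)

# keep only the new column, as noted above
df = df.drop(columns=["item_category"])
print(df)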
Do categorical variables need to be scaled before model building?
I have scaled all my continuous numerical variables using StandardScaler,
so now all the continuous variables are between -1 and 1, whereas the categorical columns are binary.
How will this affect my model?
Can someone please explain how a scaled categorical variable will affect the splitting of nodes in the DecisionTreeClassifier?
When you one-hot encode your categorical variables, the values in the encoded variables become 0 and 1. Therefore, the encoded variables will not negatively affect your model. Encoding these variables and passing them to ML algorithms is good, as you may gain additional insights from the models.
When scaling your dataset, make sure you pay attention to 2 things:
Some ML algorithms require data to be scaled, and some do not. It is a good practice to only scale your data for models that are sensitive to unscaled data, such as kNN.
There are different methods to scale your data. StandardScaler() is one of them, but it is vulnerable to outliers. Therefore, make sure you are using the scaling method that best fits your business needs. You can learn more about different scaling methods here: https://scikit-learn.org/stable/auto_examples/preprocessing/plot_all_scaling.html
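To make the second point concrete, here is a small sketch comparing the two scalers on a column with one extreme outlier (the numbers are made up):
import numpy as np
from sklearn.preprocessing import StandardScaler, RobustScaler

x = np.array([[1.0], [2.0], [3.0], [4.0], [500.0]])  # one extreme outlier

# StandardScaler uses the mean and standard deviation, so the outlier inflates the scale
print(StandardScaler().fit_transform(x).ravel())

# RobustScaler uses the median and IQR, so the bulk of the data keeps a sensible spread
print(RobustScaler().fit_transform(x).ravel())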
Encoded categorical variables contain values of 0 and 1, so there is no need to scale them. However, scaling methods will still be applied to them if you choose to scale your entire dataset before using it with scale-sensitive ML models.
I have a training feature set consisting of 92 features, of which 91 are boolean values of 1 or 0. The remaining feature is numerical and varies from 3 to 2000.
Would it be better if I applied feature scaling to the 92nd feature?
If yes, what are the best possible ways to do it? I am using Python.
It depends largely on which algorithm you want to use for your prediction. Suppose you are using an SVM with a Gaussian kernel and you do not apply feature scaling to your inputs: you might end up with a poor hypothesis, because the large-valued feature will dominate the smaller ones. Generally, feature scaling is a good way to control the variation in the inputs, and it also helps the algorithm converge faster to the optimal minimum.
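As a sketch in Python (the values are made up), you could scale just that one column with scikit-learn and leave the 91 boolean columns alone; MinMaxScaler brings it into [0, 1], the same range as the boolean features, and StandardScaler or RobustScaler are alternatives:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# the 92nd feature: values roughly between 3 and 2000
numeric_col = np.array([[3.0], [150.0], [870.0], [2000.0]])

# squash the column into [0, 1], the same range as the 91 boolean features
scaled = MinMaxScaler().fit_transform(numeric_col)
print(scaled.ravel())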
I wish to fit a logistic regression model with a set of parameters. The parameters that I have include three distinct types of data:
Binary data [0,1]
Categorical data which has been encoded to integers [0,1,2,3,...]
Continuous data
I have two questions regarding pre-processing the parameter data before fitting a regression model:
For the categorical data, I've seen two ways to handle this. The first method is to use a one-hot encoder, thus giving a new parameter for each category. The second method is to just encode the categories with integers within a single parameter variable [0,1,2,3,4,...] (both options are sketched below). I understand that using a one-hot encoder creates more parameters and therefore increases the risk of over-fitting the model; however, other than that, are there any reasons to prefer one method over the other?
I would like to normalize the parameter data to account for the large differences between the continuous and binary data. Is it generally acceptable to normalize the binary and categorical data? Should I normalize the categorical and continuous parameters but not the binary parameters, or can I just normalize all of the parameter data types?
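For concreteness, here is a small sketch of the two encodings I mean (toy data):
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# option 1: one-hot encoding, one new 0/1 parameter per category
one_hot = pd.get_dummies(df, columns=["color"])

# option 2: integer encoding, a single parameter with values 0, 1, 2, ...
integer = OrdinalEncoder().fit_transform(df[["color"]])
print(one_hot)
print(integer.ravel())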
I realize I could fit this data with a random forest model and not have to worry much about pre-processing, but I'm curious how this applies with a regression type model.
Thank you in advance for your time and consideration.