Encoding a large number of categorical features [duplicate] - python

I have a question regarding random forests. Imagine that I have data on users interacting with items. The number of items is large, around 10 000. The output of the random forest should be the items that the user is likely to interact with (like a recommender system). For any user, I want to use a feature that describes the items the user has interacted with in the past. However, mapping the categorical product feature as a one-hot encoding seems very memory-inefficient, as a user interacts with no more than a couple of hundred items at most, and sometimes as few as 5.
How would you go about constructing a random forest when one of the input features is a categorical variable with ~10 000 possible values and the output is a categorical variable with ~10 000 possible values? Should I use CatBoost with the features as categorical? Or should I use one-hot encoding, and if so, do you think XGBoost or CatBoost does better?

You could also try entity embeddings to reduce hundreds of boolean features into vectors of small dimension.
It is similar to word embeddings, but for categorical features. In practical terms, you define an embedding of your discrete feature space into a vector space of low dimension. It can improve your results and save memory. The downside is that you need to train a neural network model beforehand to learn the embedding.
Check this article for more information.
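For a rough idea of what this looks like, here is a minimal sketch with Keras (the embedding size and the auxiliary training task are illustrative assumptions, not a recipe):

from tensorflow import keras

n_items, emb_dim = 10_000, 16  # 16 is an arbitrary embedding size, far smaller than 10 000

model = keras.Sequential([
    keras.layers.Input(shape=(1,)),
    keras.layers.Embedding(input_dim=n_items, output_dim=emb_dim),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation='sigmoid'),  # auxiliary task used only to learn the embedding
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# After training on (item_id, label) pairs, the learned vectors can replace one-hot columns:
# item_vectors = model.layers[0].get_weights()[0]  # shape (10000, 16)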

XGBoost doesn't support categorical features directly; you need to preprocess them before you can use it with categorical data. For example, you could use one-hot encoding. One-hot encoding usually works well when your categorical feature has some frequent values.
CatBoost does have categorical feature support: both one-hot encoding and the calculation of various statistics on categorical features. To use one-hot encoding you need to enable it with the one_hot_max_size parameter; by default, statistics are calculated. Statistics usually work better for categorical features with many values.
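For illustration, a minimal sketch of how this might look (X_train, y_train, and cat_feature_indices are placeholders, not from the original post):

from catboost import CatBoostClassifier

model = CatBoostClassifier(
    one_hot_max_size=10,  # features with <= 10 distinct values get one-hot; the rest get statistics
    iterations=200,
    verbose=False,
)
# cat_features lists the indices (or names) of the categorical columns
model.fit(X_train, y_train, cat_features=cat_feature_indices)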

Assuming you have enough domain expertise, you could create a new categorical column from the existing one.
For example, if your column has the values
A, B, C, D, E, F, G, H
and you know that A, B, C are similar, D, E, F are similar, and G, H are similar, the new column would be
Z, Z, Z, Y, Y, Y, X, X.
In your random forest model you should remove the previous column and include only this new one. Note that by transforming your features like this you lose some of the explainability of your model.
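A minimal pandas sketch of that grouping (the column name 'col' is made up for illustration):

import pandas as pd

df = pd.DataFrame({'col': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']})
grouping = {'A': 'Z', 'B': 'Z', 'C': 'Z',
            'D': 'Y', 'E': 'Y', 'F': 'Y',
            'G': 'X', 'H': 'X'}
df['col_grouped'] = df['col'].map(grouping)
df = df.drop(columns=['col'])  # keep only the new, coarser column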

Related

Sklearn regression with label encoding

I'm attempting to use sklearn's linear regression model to predict fantasy players' points. I have numeric stats for each player and, obviously, their names, which I have encoded with the LabelEncoder function. My problem is that when the encoded values are included in training, the regression doesn't treat them as IDs but as ordinary numeric values.
So is there a better way to encode player names so they are treated as IDs, so that the model recognizes that player 1 averages 25 points compared to player 2's 20? Or is this type of encoding even possible with linear regression? Thanks in advance.
Apart from one-hot encoding (which might create far too many columns in this case), mean target encoding does exactly what you need: it encodes each category with its mean target value. You should be wary of target leakage in the case of rare categories, though. The sklearn-compatible category_encoders library provides several robust implementations, such as LeaveOneOutEncoder().
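A minimal sketch with that library (the column name 'player' and the train/test variables are placeholders):

import category_encoders as ce

encoder = ce.LeaveOneOutEncoder(cols=['player'])
X_train_enc = encoder.fit_transform(X_train, y_train)  # needs the target to compute the means
X_test_enc = encoder.transform(X_test)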

categorical features (object/float) selection for regression problem using python

I have a set of alphanumeric categorical features (c_1,c_2, ..., c_n) and one numeric target variable (prediction) as a pandas dataframe. Can you please suggest to me any feature selection algorithm that I can use for this data set?
I'm assuming you are solving a supervised learning problem like Regression or Classification.
First of all, I suggest transforming the categorical features into numeric ones using one-hot encoding. Pandas provides a useful function that already does this:
dataset = pd.get_dummies(dataset, columns=['feature-1', 'feature-2', ...])
If you have a limited number of features and a model that is not too computationally expensive, you can test every possible combination of features. This is the best approach, but it is seldom viable.
A possible alternative is to rank all the features by their correlation with the target, then sequentially add them to the model, measure the model's performance, and keep the feature set that performs best.
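A rough sketch of that procedure, assuming X is a fully numeric DataFrame and y the target series (the model choice and cv=5 are arbitrary):

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

ranked = X.corrwith(y).abs().sort_values(ascending=False).index

best_score, best_features, selected = -float('inf'), [], []
for feature in ranked:
    selected.append(feature)
    score = cross_val_score(LinearRegression(), X[selected], y, cv=5).mean()
    if score > best_score:
        best_score, best_features = score, list(selected)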
If you have high-dimensional data, you can consider reducing the dimensionality using PCA or another dimensionality reduction technique. It projects the data into a lower-dimensional space, reducing the number of features; obviously you will lose some information due to the PCA approximation.
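For example (n_components=50 is an arbitrary illustrative choice; X must already be fully numeric):

from sklearn.decomposition import PCA

X_reduced = PCA(n_components=50).fit_transform(X)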
These are only some examples of methods to perform feature selection, there are many others.
Final tips:
Remember to split the data into Training, Validation and Test set.
Often data normalization is recommended to obtain better results.
Some models have embedded mechanisms to perform feature selection (Lasso, Decision Trees, ...).

Feature Selection from Mixed dataset

I am a newbie in the data science domain.
I have a data set which has both numerical and string data. The interesting fact is that both types of data are meaningful for the outcome. How do I choose the relevant features from this data set?
Should I use the LabelEncoder to convert the data from string to numerical and continue with the correlation analysis? Am I taking the right path? Is there a better way to approach this?
You can encode categorical variables with label encoding if there is a meaningful ordering of the available values, making sure the ordering is retained in the encoding. See here for an example.
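For instance, a sketch with sklearn's OrdinalEncoder (the column name and category values are made up for illustration):

from sklearn.preprocessing import OrdinalEncoder

encoder = OrdinalEncoder(categories=[['low', 'medium', 'high']])  # explicit ordering
df['size_encoded'] = encoder.fit_transform(df[['size']]).ravel()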
If there's no ordering (or resolving a meaningful one is too much work), you can use one-hot encoding. This, however, will grow the feature set in proportion to the number of distinct values the feature takes in the dataset.
If one-hot results in a very large feature set and the categorical string data are natural language words, you may want to use a pretrained embedding.
Either way, you can then concatenate the encoded categorical column(s) to the continuous feature set and proceed with learning and feature selection.
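A minimal sketch of that concatenation step using sklearn (the column names are placeholders):

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

preprocess = ColumnTransformer([
    ('cat', OneHotEncoder(handle_unknown='ignore'), ['color', 'city']),
], remainder='passthrough')  # continuous columns pass through unchanged

X_encoded = preprocess.fit_transform(df)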
Kind of a cop-out, but you could simply use a random forest and happily mix numerical and categorical data. Encoding with LabelEncoder or OneHotEncoder would allow you to use a wider variety of algorithms.

pandas get_dummies on high cardinality variables using one hot encoding creates too many new features

I have several high-cardinality variables in a dataset and want to convert them into dummies. All of them have more than 500 levels. When I used pandas get_dummies, the matrix got so large that my program crashed.
pd.get_dummies(data, sparse=True, drop_first=True, dummy_na=True)
I don't know of better ways to handle high-cardinality variables besides one-hot encoding, but it increases the size of the data so much that memory can't handle it. Does anyone have better solutions?
Method 1:
For non-linear algorithms like RF, you can also replace a categorical variable with the number of times it appears in the train set. This turns it into a single numeric feature.
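For example, in pandas (the frame and column names are placeholders; note that the counts come from the train set only):

counts = train['cat_col'].value_counts()
train['cat_col_count'] = train['cat_col'].map(counts)
test['cat_col_count'] = test['cat_col'].map(counts).fillna(0)  # unseen categories get 0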
Method 2:
If you can make one-hot encoding fit into memory, you can consider first applying one-hot encoding and then a dimensionality reduction method (like PCA) or an embedding method (word2vec, etc.) to reduce the dimension before fitting any ML algorithm.
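A sketch of that pipeline, using TruncatedSVD because (unlike plain PCA) it works directly on sparse matrices (cat_cols and n_components=50 are placeholders):

from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import TruncatedSVD

X_sparse = OneHotEncoder(handle_unknown='ignore').fit_transform(df[cat_cols])  # sparse by default
X_reduced = TruncatedSVD(n_components=50).fit_transform(X_sparse)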
There is more discussion here:
https://www.kaggle.com/general/16927

Mixed parameter types for machine learning

I wish to fit a logistic regression model with a set of parameters. The parameters that I have include three distinct types of data:
Binary data [0,1]
Categorical data which has been encoded to integers [0,1,2,3,...]
Continuous data
I have two questions regarding pre-processing the parameter data before fitting a regression model:
For the categorical data, I've seen two ways to handle this. The first method is to use a one-hot encoder, giving a new parameter for each category. The second method is to encode the categories as integers within a single parameter variable [0,1,2,3,4,...]. I understand that using a one-hot encoder creates more parameters and therefore increases the risk of over-fitting the model; however, other than that, are there any reasons to prefer one method over the other?
I would like to normalize the parameter data to account for the large differences between the continuous and binary data. Is it generally acceptable to normalize binary and categorical data? Should I normalize the categorical and continuous parameters but not the binary ones, or can I just normalize all the parameter data types?
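For reference, one common pattern is to scale only the continuous columns and leave the rest untouched; a sketch with sklearn, where continuous_cols is a placeholder list:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

scale_continuous = ColumnTransformer([
    ('num', StandardScaler(), continuous_cols),  # scale only the continuous columns
], remainder='passthrough')  # binary and encoded columns pass through untouched

X_scaled = scale_continuous.fit_transform(X)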
I realize I could fit this data with a random forest model and not have to worry much about pre-processing, but I'm curious how this applies with a regression type model.
Thank you in advance for your time and consideration.
