Can anyone please suggest the best way to encode string features when a column has more than 500 unique values? Does this fall under categorical data?
I basically need to normalize data with string features that have a huge number of unique values, where adjacent features are correlated (e.g. col1 and col2 have a particular combination for one class in a classification problem; similarly, col3 and col4 have some fixed pattern for each class).
How do I encode my data in this scenario before making it ready for an ML algorithm?
There are several ways to encode categorical features. The best way really depends on your dataset and which ML algorithm you are going to use, so you could try different encoding schemes and pick the one that gives the best results.
I've worked with categorical features with hundreds of unique values (e.g. product brands) together with tree-based algorithms, and a label encoder worked well.
For example you could use the scikit-learn label encoder:
>>> from sklearn import preprocessing
>>> le = preprocessing.LabelEncoder()
>>> le.fit(["paris", "paris", "tokyo", "amsterdam"])
LabelEncoder()
>>> list(le.classes_)
['amsterdam', 'paris', 'tokyo']
>>> le.transform(["tokyo", "tokyo", "paris"])
array([2, 2, 1])
>>> list(le.inverse_transform([2, 2, 1]))
['tokyo', 'tokyo', 'paris']
You can do that in pandas as well. For example, if you have a column with the string categories you want to encode, you could try this:
df["categorical_feature"] = df["categorical_feature"].astype('category')
df["categorical_feature_enc"] = df["categorical_feature"].cat.codes
Another useful encoding you could try is one-hot encoding. However, since you have a lot of categories to encode, it would add n columns to your dataset per categorical feature (n = number of categories). Check pandas get_dummies to see an example, as in the sketch below.
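A minimal sketch of how get_dummies expands one column into n indicator columns (the column name and values here are made up for illustration):
import pandas as pd

df = pd.DataFrame({'brand': ['acme', 'globex', 'acme', 'initech']})
dummies = pd.get_dummies(df['brand'], prefix='brand')
# dummies has one 0/1 column per unique value:
# brand_acme, brand_globex, brand_initech
df = pd.concat([df, dummies], axis=1)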
Related
I am trying to use sklearn to train a decision tree based on my dataset.
When I was slicing the data into the outcome (Y) and predictor variables (X), it turned out that the outcome (my label) is in True/False form:
#data slicing
X = df.values[:,3:27] #X are the predictor variables, dropping unique_id and student name here
Y = df['OffTask'].values #Y is our predicted value (outcome), the 'OffTask' column
This is how I do it, but I do not know whether this is the right approach:
#convert the label "OffTask" to dummy
df1 = pd.get_dummies(df,columns=["OffTask"])
df1
My trouble is that in the dataset df1 my label OffTask is split into two columns, OffTask_N and OffTask_Y.
Does anyone know how to fix it?
get_dummies is used for converting nominal string values to integers. It returns as many columns as there are unique string values in the column, e.g.:
import pandas as pd

df={'color':['red','green','blue'],'price':[1200,3000,2500]}
my_df=pd.DataFrame(df)
pd.get_dummies(my_df)
In your case you can drop the first indicator column; rows where all the remaining indicators are zero can be taken to belong to that dropped first category.
You could make pd.get_dummies return only one column by setting drop_first=True:
y = pd.get_dummies(df,columns=["OffTask"], drop_first=True)
But this is not the recommended way to convert the label to binary values. I would suggest using LabelBinarizer for this purpose.
Example:
from sklearn import preprocessing

lb = preprocessing.LabelBinarizer()
lb.fit_transform(['yes', 'no', 'no', 'yes'])
# output:
# array([[1],
#        [0],
#        [0],
#        [1]])
I need to encode categorical features in my dataset. I want them to be ordered, so that 'low' maps to 0 and 'vhigh' maps to 3.
I tried using label encoder from preprocessing:
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(['low', 'med', 'high', 'vhigh'])
ar = le.transform(df[df["buying"] == 'low']["buying"])
Unfortunately the features were not ordered: the 4th line returns an array of ones, and I want an array of zeros.
I tried to create another encoder that maps the numbers to the numbers I want, but it seems to have no effect.
other_le = preprocessing.LabelEncoder()
other_le.fit([1, 2, 0, 3])
other_le.transform(ar)
The last line returns ones again.
How do I keep an order on categorical features in the shortest way?
You can use the factorize function from pandas. It encodes values based on the order in which they first appear, i.e. if 'low' appears first it gets encoded as 0, 'medium' gets 1, and so on.
import pandas as pd
myli = ['low','medium','high','very_high']
pd.factorize(myli)[0]
# output
array([0, 1, 2, 3])
LabelEncoder will sort your labels (it encodes the sorted unique values), which in this case orders them alphabetically. It would not be hard to write your own function that labels these in a way that maintains the order you're looking for:
def label(array):
    labels = ['low', 'med', 'high', 'vhigh']
    return [labels.index(x) for x in array]
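A quick usage sketch, reusing the 'buying' column from the question:
ar = label(df[df["buying"] == 'low']["buying"])
# ar is now a list of zeros, since 'low' sits at index 0 in the labels list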
I have a pandas dataframe which has encoding latin-1 and is delimited by ;. The dataframe is very large, almost of size 350000 x 3800. I wanted to use sklearn initially, but my dataframe has missing values (NaN values), so I could not use sklearn's random forests or GBM. So I had to use H2O's distributed random forests for training on the dataset. The main problem is that the dataframe is not efficiently converted when I do h2o.H2OFrame(data). I checked for the possibility of providing encoding options, but there is nothing in the documentation.
Does anyone have an idea about this? Any leads could help me. I also want to know if there are any other libraries like H2O which can handle NaN values very efficiently. I know that we can impute the columns, but I should not do that in my dataset because my columns are values from different sensors; if a value is not there, it implies that the sensor is not present. I can use only Python.
import h2o
import pandas as pd
df = pd.DataFrame({'col1': [1,1,2], 'col2': ['César Chávez Day', 'César Chávez Day', 'César Chávez Day']})
hf = h2o.H2OFrame(df)
Since the problem that you are facing is due to the high number of NaNs in the dataset, this should be handled first. There are two ways to do so (a sketch of both follows below).
Replace NaN with a single, obviously out-of-range value.
Ex. If a feature varies between 0-1, replace all NaN with -1 for that feature.
Use the Imputer class (SimpleImputer in current scikit-learn) to handle NaN values. This will replace NaN with either the mean, median, or mode of that feature.
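A minimal sketch of both options, using scikit-learn's SimpleImputer (the current replacement for the older Imputer class); the column names and values are made up for illustration:
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# hypothetical sensor readings with missing values
df = pd.DataFrame({'sensor_a': [0.2, np.nan, 0.7], 'sensor_b': [1.0, 0.5, np.nan]})

# Option 1: replace NaN with an obviously out-of-range value (features here vary between 0 and 1)
df_flagged = df.fillna(-1)

# Option 2: replace NaN with the mean of each feature
imp = SimpleImputer(strategy='mean')
df_imputed = pd.DataFrame(imp.fit_transform(df), columns=df.columns)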
If there are a large number of missing values in your data and you want to increase the efficiency of the conversion, I would recommend explicitly specifying the column types and NA strings instead of letting H2O interpret them. You can pass a list of strings to be interpreted as NAs and a dictionary specifying column types to the H2OFrame() method.
It will also allow you to create custom labels for the sensors that are not present, instead of having a generic "not available" (impute NaN values with a custom string in pandas).
import h2o
col_dtypes = {'col1_name':col1_type, 'col2_name':col2_type}
na_list = ['NA', 'none', 'nan', 'etc']
hf = h2o.H2OFrame(df, column_types=col_dtypes, na_strings=na_list)
For more information - http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/_modules/h2o/frame.html#H2OFrame
Edit: @ErinLeDell's suggestion to use h2o.import_file() directly, specifying column dtypes and NA strings, will give you the largest speed-up.
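A sketch of that approach, assuming the data lives in a ;-delimited CSV file (the path, column names, and types here are hypothetical):
import h2o
h2o.init()

col_dtypes = {'col1': 'numeric', 'col2': 'enum'}
na_list = ['NA', 'none', 'nan']
hf = h2o.import_file('sensor_data.csv', sep=';',
                     col_types=col_dtypes, na_strings=na_list)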
I'm trying to understand how to use categorical data as features in sklearn.linear_model's LogisticRegression.
I understand of course I need to encode it.
What I don't understand is how to pass the encoded feature to the logistic regression so that it's processed as a categorical feature, rather than having the int value it got during encoding interpreted as a standard quantitative feature.
(Less important) Can somebody explain the difference between using preprocessing.LabelEncoder(), DictVectorizer.vocabulary or just encoding the categorical data yourself with a simple dict? Alex A.'s comment here touches on the subject but not very deeply.
Especially with the first one!
You can create indicator variables for the different categories. For example (in Python/numpy):
import numpy as np

animal_names = np.array(['mouse', 'cat', 'dog'])
indicator_cat = (animal_names == 'cat').astype(int)
indicator_dog = (animal_names == 'dog').astype(int)
Then we have:
indicator_cat -> array([0, 1, 0])
indicator_dog -> array([0, 0, 1])
And you can concatenate these onto your original data matrix:
X_with_indicator_vars = np.column_stack([X, indicator_cat, indicator_dog])
Remember though to leave one category without an indicator if a constant term is included in the data matrix! Otherwise, your data matrix won't be full column rank (or in econometric terms, you have multicollinearity).
[1 1 0 0
 1 0 1 0
 1 0 0 1]
Notice how a constant term, an indicator for mouse, an indicator for cat, and an indicator for dog lead to a matrix with less than full column rank: the first column is the sum of the last three.
The standard approach to convert categorical features into numerical ones is one-hot encoding (OneHotEncoder in scikit-learn); a sketch is shown below.
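A minimal sketch, assuming a scikit-learn version recent enough for OneHotEncoder to accept string categories directly (the values are made up for illustration):
from sklearn.preprocessing import OneHotEncoder

colors = [['red'], ['blue'], ['yellow'], ['blue']]   # one feature, given as a 2D column
enc = OneHotEncoder(handle_unknown='ignore')         # unseen categories at transform time become all-zero rows
encoded = enc.fit_transform(colors).toarray()
# enc.categories_ -> [array(['blue', 'red', 'yellow'], dtype=object)]
# encoded         -> one 0/1 column per category, in that order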
They are completely different classes:
DictVectorizer.vocabulary_
A dictionary mapping feature names to feature indices.
i.e. after fit() DictVectorizer knows all possible feature names, and it knows in which particular column it will place a particular value of a feature. So DictVectorizer.vocabulary_ contains indices of features, but not values.
LabelEncoder, in contrast, maps each possible label (a label can be a string or an integer) to an integer value, and returns a 1D vector of these integer values.
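A small sketch to make the distinction concrete (the records and labels here are made up for illustration):
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import LabelEncoder

records = [{'color': 'red', 'size': 4}, {'color': 'blue', 'size': 2}]
dv = DictVectorizer(sparse=False)
X = dv.fit_transform(records)
# dv.vocabulary_ -> {'color=blue': 0, 'color=red': 1, 'size': 2}  (feature name -> column index)
# X              -> [[0., 1., 4.], [1., 0., 2.]]

le = LabelEncoder()
y = le.fit_transform(['red', 'blue', 'red'])
# y -> array([1, 0, 1])  (each label -> an integer)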
Suppose the type of each categorical variable is "object". First, you can create a pandas Index of the categorical column names:
import pandas as pd
catColumns = df.select_dtypes(['object']).columns
Then, you can create the indicator variables using the for-loop below. For binary categorical variables, use LabelEncoder() to convert them to 0 and 1. For categorical variables with more than two categories, use pd.get_dummies() to obtain the indicator variables and then drop one category (to avoid the multicollinearity issue).
from sklearn import preprocessing

le = preprocessing.LabelEncoder()

for col in catColumns:
    n = len(df[col].unique())
    if (n > 2):
        X = pd.get_dummies(df[col])
        X = X.drop(X.columns[0], axis=1)
        df[X.columns] = X
        df.drop(col, axis=1, inplace=True)  # drop the original categorical variable (optional)
    else:
        le.fit(df[col])
        df[col] = le.transform(df[col])
Binary one-hot (also known as one-of-K) coding consists of making one binary column for each distinct value of a categorical variable. For example, if one has a color column (categorical variable) that takes the values 'red', 'blue', 'yellow', and 'unknown', then a binary one-hot coding replaces the color column with binary columns 'color=red', 'color=blue', and 'color=yellow'. I begin with data in a pandas data-frame and I want to use this data to train a model with scikit-learn. I know two ways to do the binary one-hot coding, neither of which is satisfactory to me.
Pandas and get_dummies on the categorical columns of the data-frame. This method seems excellent as long as the original data-frame contains all the available data. That is, you do the one-hot coding before splitting your data into training, validation, and test sets. However, if the data is already split into different sets, this method doesn't work very well. Why? Because one of the data sets (say, the test set) can contain fewer values for a given variable. For example, it can happen that whereas the training set contains the values red, blue, yellow, and unknown for the variable color, the test set only contains red and blue. So the test set would end up having fewer columns than the training set. (I also don't know how the new columns are sorted, and even with the same columns, their order could differ between sets.)
Sklearn and DictVectorizer. This solves the previous issue, as we can make sure that we apply the very same transformation to the test set. However, the outcome of the transformation is a numpy array instead of a pandas data-frame. If we want to recover the output as a pandas data-frame, we need to (or at least this is the way I do it): 1) pandas.DataFrame(data=outcome of the DictVectorizer transformation, index=index of the original pandas data frame, columns=DictVectorizer().get_feature_names) and 2) join the resulting data-frame along the index with the original one containing the numerical columns, roughly as in the sketch below. This works, but it is somewhat cumbersome.
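For reference, a sketch of that DictVectorizer workflow as I understand it (df_train, df_test, and cat_cols are hypothetical names; get_feature_names_out is the current scikit-learn method, older versions use get_feature_names):
from sklearn.feature_extraction import DictVectorizer
import pandas as pd

dv = DictVectorizer(sparse=False)

# fit on the training categorical columns, then apply the same mapping to the test set
train_encoded = dv.fit_transform(df_train[cat_cols].to_dict(orient='records'))
test_encoded = dv.transform(df_test[cat_cols].to_dict(orient='records'))

# rebuild data-frames with the original indices and the vectorizer's column names
train_cat = pd.DataFrame(train_encoded, index=df_train.index, columns=dv.get_feature_names_out())
test_cat = pd.DataFrame(test_encoded, index=df_test.index, columns=dv.get_feature_names_out())
# ...then join these back with the numerical columns along the index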
Is there a better way to do a binary one-hot encoding within a pandas data-frame if we have our data split in training and test set?
If your columns are in the same order, you can concatenate the dfs, use get_dummies, and then split them back again, e.g.,
encoded = pd.get_dummies(pd.concat([train,test], axis=0))
train_rows = train.shape[0]
train_encoded = encoded.iloc[:train_rows, :]
test_encoded = encoded.iloc[train_rows:, :]
If your columns are not in the same order, then you'll have challenges regardless of what method you try.
You can set your data type to categorical:
In [5]: df_train = pd.DataFrame({"car": pd.Categorical(["seat", "bmw"], categories=["seat", "bmw", "mercedes"]), "color": ["red", "green"]})
In [6]: df_train
Out[6]:
car color
0 seat red
1 bmw green
In [7]: pd.get_dummies(df_train )
Out[7]:
car_seat car_bmw car_mercedes color_green color_red
0 1 0 0 0 1
1 0 1 0 1 0
See this issue of Pandas.