How to check selected features with PySpark's ChiSqSelector? - python

I'm using PySpark's ChiSqSelector to select the most important features. The code runs fine, but I can't tell which features were selected, in terms of index or name.
So my question is: how can I identify what the values in selectedFeatures refer to?
In the sample code below I use only four columns to keep the example easy to follow, but in practice I have to do this for a DataFrame with almost 100 columns.
df = df.select("IsBeta", "AVProductStatesIdentifier", "IsProtected", "Firewall", "HasDetections")
from pyspark.ml.feature import VectorAssembler, ChiSqSelector
vec_assembler = VectorAssembler(inputCols=["IsBeta", "AVProductStatesIdentifier", "IsProtected", "Firewall"], outputCol="features")
vec_df = vec_assembler.transform(df)
selector = ChiSqSelector(featuresCol="features", fpr=0.05, outputCol="selectedFeatures", labelCol="HasDetections")
result = selector.fit(vec_df).transform(vec_df)
result.show()
And yet, when I try to apply the solution I found in this question, I still cannot tell which columns are selected, in terms of name or index. That is, which features are actually being selected.
model = selector.fit(vec_df)
model.selectedFeatures

First: please don't use one-hot encoded features; ChiSqSelector should be applied directly to categorical (non-encoded) columns, as you can see here.
Without the one-hot encoding, the selector usage is straightforward.
Now let's look at how the ChiSqSelector is used and how to find the relevant features by name.
For example usage I'll create a df with only 2 informative columns (AVProductStatesIdentifier and Firewall); the other 2 (IsBeta and IsProtected) will be constant:
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
from pyspark.sql.functions import col, create_map, lit
from itertools import chain
import numpy as np
import pandas as pd

# create the pandas df: IsBeta and IsProtected are constant, Firewall and HasDetections are random
df_p = pd.DataFrame([np.ones(1000, dtype=int),
                     np.ones(1000, dtype=int),
                     np.random.randint(0, 500, 1000, dtype=int),
                     np.random.randint(0, 2, 1000, dtype=int)
                     ], index=['IsBeta', 'IsProtected', 'Firewall', 'HasDetections']).T
df_p['AVProductStatesIdentifier'] = np.random.choice(['a', 'b', 'c'], 1000)

schema = StructType([StructField("IsBeta", IntegerType(), True),
                     StructField("AVProductStatesIdentifier", StringType(), True),
                     StructField("IsProtected", IntegerType(), True),
                     StructField("Firewall", IntegerType(), True),
                     StructField("HasDetections", IntegerType(), True),
                     ])
df = spark.createDataFrame(
    df_p[['IsBeta', 'AVProductStatesIdentifier', 'IsProtected', 'Firewall', 'HasDetections']],
    schema
)
First, let's turn the string column AVProductStatesIdentifier into numeric categories:
mapping = {l.AVProductStatesIdentifier:i for i,l in enumerate(df.select('AVProductStatesIdentifier').distinct().collect())}
mapping_expr = create_map([lit(x) for x in chain(*mapping.items())])
df = df.withColumn("AVProductStatesIdentifier", mapping_expr.getItem(col("AVProductStatesIdentifier")))
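As a side note, Spark ML's StringIndexer does the same string-to-index mapping; a minimal sketch (it writes a new indexed column, so the assembler below would then use that column name instead):
from pyspark.ml.feature import StringIndexer
# Assumption: the output column name is arbitrary; pick whatever fits your pipeline
indexer = StringIndexer(inputCol="AVProductStatesIdentifier", outputCol="AVProductStatesIdentifier_idx")
df_indexed = indexer.fit(df).transform(df)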
Now, let's assemble that and select the 2 most important columns
from pyspark.ml.feature import VectorAssembler, ChiSqSelector
vec_assembler = VectorAssembler(inputCols=["IsBeta", "AVProductStatesIdentifier", "IsProtected", "Firewall"], outputCol="features")
vec_df = vec_assembler.transform(df)
selector = ChiSqSelector(numTopFeatures=2, featuresCol="features", outputCol="selectedFeatures", labelCol="HasDetections")
model = selector.fit(vec_df)
Now map the selected indices back to names. The indices in model.selectedFeatures refer to positions within the assembled features vector, i.e. to the assembler's inputCols:
np.array(vec_assembler.getInputCols())[model.selectedFeatures]
which results in
array(['AVProductStatesIdentifier', 'Firewall'], dtype='<U25')
The two non-constant columns.
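If you prefer not to rely on column order, the assembled features column also carries ML attribute metadata that records each slot's name. A minimal sketch using the vec_df and model from above (assuming the 'ml_attr' metadata is present, which VectorAssembler normally writes):
# Build an index -> name map from the features column metadata
attrs = vec_df.schema["features"].metadata["ml_attr"]["attrs"]
idx_to_name = {a["idx"]: a["name"] for group in attrs.values() for a in group}
selected_names = [idx_to_name[i] for i in model.selectedFeatures]
print(selected_names)  # e.g. ['AVProductStatesIdentifier', 'Firewall']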

Related

Extract smaller table from pivot table pandas

I want to split the following pivot table into training and testing sets (to evaluate a recommendation system), and was thinking of extracting two tables with non-overlapping indices (userID) and column values (ISBN). How can I split it properly? Thank you.
As suggested by #moys, you can use train_test_split from scikit-learn after first splitting your dataframe's columns into non-overlapping sets of column names.
Example:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
Generate data:
df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))
Split the df columns in some way, e.g. in half:
cols = len(df.columns) // 2
df_A = df.iloc[:, 0:cols]
df_B = df.iloc[:, cols:]
Use train_test_split:
train_A, test_A = train_test_split(df_A, test_size=0.33)
train_B, test_B = train_test_split(df_B, test_size=0.33)
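To confirm the column split and each row split are non-overlapping, a quick sanity check (using the variable names from the snippet above):
# The two halves share no columns, and train/test within each half share no rows
assert set(df_A.columns).isdisjoint(df_B.columns)
assert set(train_A.index).isdisjoint(test_A.index)
assert set(train_B.index).isdisjoint(test_B.index)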

How to set pandas dataframe column as index when generating loading matrix for PCA

I am using sklearn in Python to perform principal component analysis (PCA) on gene expression data. My data is loaded as a pandas dataframe, for which I can call df.head() and the df looks good. I am using sklearn to generate a loading matrix, but the matrix only displays a generic index and will not accept a column name as an index. I have 1722 genes, so it is important that I obtain the loading score for each gene computationally.
Here is my code for PCA:
import pandas as pd
from sklearn.decomposition import PCA
from sklearn import preprocessing
# Load the data as pandas dataframe
cols = ['gene', 'FC_TSWV', 'FC_WFT', 'FC_TSWV_WFT']
df = pd.read_csv('./PCA.txt', names = cols, header = None, index_col = 'gene')
# preprocess data:
scaled_df = preprocessing.scale(df.T)
# perform PCA
pca = PCA()
pca.fit(scaled_df)
pca_data = pca.transform(scaled_df)
# Generate loading matrix. HERE IS WHERE THE TROUBLE IS:
loading_scores = pd.Series(pca.components_[0], index = df.gene)
# Print loading matrix
sorted_loading_scores = loading_scores.abs().sort_values(ascending=False)
print(loading_scores)
I have tried:
loading_scores = pd.Series(pca.components_[0], index = df.gene)
loading_scores = pd.Series(pca.components_[0], index = df['gene'])
loading_scores = pd.Series(pca.components_[0], index = df.loc['gene'])
but I get errors such as:
AttributeError: 'DataFrame' object has no attribute 'gene'.
If I do not specify an index at all, the loading scores are designated with the generic 0-based index.
Anyone know how to fix this?
Use df.index instead of df.gene or df['gene']
Once you set a column as the index, you access it through the .index attribute, not through the column's name anymore.
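Applied to the code in the question, that means (a minimal sketch reusing the question's variables):
# 'gene' became the index via read_csv(..., index_col='gene'), so use df.index
loading_scores = pd.Series(pca.components_[0], index=df.index)
sorted_loading_scores = loading_scores.abs().sort_values(ascending=False)
print(sorted_loading_scores)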

Python - How to determine the feature / column names returned by Chi Squared test [duplicate]

I'm trying to conduct a supervised machine-learning experiment using the SelectKBest feature of scikit-learn, but I'm not sure how to create a new dataframe after finding the best features:
Let's assume I would like to conduct the experiment selecting the 5 best features:
from sklearn.feature_selection import SelectKBest, f_classif
select_k_best_classifier = SelectKBest(score_func=f_classif, k=5).fit_transform(features_dataframe, targeted_class)
Now, if I add the line:
import pandas as pd
dataframe = pd.DataFrame(select_k_best_classifier)
I receive a new dataframe without feature names (only an index from 0 to 4), but I want to create a dataframe with the newly selected features, like this:
dataframe = pd.DataFrame(fit_transformed_features, columns=features_names)
My question is how to create the features_names list?
I know that I should use:
select_k_best_classifier.get_support()
which returns a boolean array whose True positions indicate the columns that should be selected from the original dataframe.
How should I combine this boolean array with the array of all feature names that I can get via feature_names = list(features_dataframe.columns.values)?
This doesn't require loops.
# Create and fit selector
selector = SelectKBest(f_classif, k=5)
selector.fit(features_df, target)
# Get columns to keep and create new dataframe with those only
cols_idxs = selector.get_support(indices=True)
features_df_new = features_df.iloc[:,cols_idxs]
For me this code works fine and is more 'pythonic':
mask = select_k_best_classifier.get_support()
new_features = features_dataframe.columns[mask]
You can do the following:
mask = select_k_best_classifier.get_support()  # list of booleans
new_features = []  # The list of your K best features
for bool_val, feature in zip(mask, feature_names):
    if bool_val:
        new_features.append(feature)
Then set the names of your features:
dataframe = pd.DataFrame(fit_transformed_features, columns=new_features)
The following code will help you find the top K features along with their F-scores. Let X be the pandas dataframe whose columns are all the features, and y the list of class labels.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
#Suppose, we select 5 features with top 5 Fisher scores
selector = SelectKBest(f_classif, k = 5)
#New dataframe with the selected features for later use in the classifier. fit() method works too, if you want only the feature names and their corresponding scores
X_new = selector.fit_transform(X, y)
names = X.columns.values[selector.get_support()]
scores = selector.scores_[selector.get_support()]
names_scores = list(zip(names, scores))
ns_df = pd.DataFrame(data = names_scores, columns=['Feat_names', 'F_Scores'])
#Sort the dataframe for better visualization
ns_df_sorted = ns_df.sort_values(['F_Scores', 'Feat_names'], ascending = [False, True])
print(ns_df_sorted)
Select the best 10 features according to chi2:
from sklearn.feature_selection import SelectKBest, chi2
KBest = SelectKBest(chi2, k=10).fit(X, y)
Get the selected feature indices with get_support():
f = KBest.get_support(indices=True)  # indices of the most important features
Create a new df called X_new:
X_new = X[X.columns[f]]  # final features
As of Scikit-learn 1.0, transformers have the get_feature_names_out method, which means you can write
dataframe = pd.DataFrame(fit_transformed_features, columns=transformer.get_feature_names_out())
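Put together for the SelectKBest case, a minimal sketch (the names features_dataframe and targeted_class follow the question; when the selector is fitted on a DataFrame, get_feature_names_out picks up the column names automatically):
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif

selector = SelectKBest(score_func=f_classif, k=5)
fit_transformed_features = selector.fit_transform(features_dataframe, targeted_class)
# Column names of the 5 selected features become the DataFrame's columns
dataframe = pd.DataFrame(fit_transformed_features, columns=selector.get_feature_names_out())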
There is another alternative method which, however, is not as fast as the solutions above.
# Use the selector to retrieve the best features
X_new = select_k_best_classifier.fit_transform(train[feature_cols], train['is_attributed'])
# Get back the kept features as a DataFrame, with dropped columns set to all 0s
selected_features = pd.DataFrame(select_k_best_classifier.inverse_transform(X_new),
                                 index=train.index,
                                 columns=feature_cols)
# Columns whose values are not all zero are the ones that were kept
selected_columns = selected_features.columns[selected_features.var() != 0]
# Fit the SelectKBest instance
select_k_best_classifier = SelectKBest(score_func=f_classif, k=5).fit(features_dataframe, targeted_class)
# Extract the required features
new_features = select_k_best_classifier.get_feature_names_out(features_names)
Suppose that you want to choose the 10 best features:
from sklearn.feature_selection import SelectKBest, chi2
selector = SelectKBest(score_func=chi2, k=10)
selector.fit_transform(X, y)
features_names = selector.get_feature_names_out()
print(features_names)

How can I normalize the data in a range of columns in my pandas dataframe

Suppose I have a pandas data frame surveyData:
I want to normalize the data in each column by performing:
surveyData_norm = (surveyData - surveyData.mean()) / (surveyData.max() - surveyData.min())
This would work fine if my data table contained only the columns I wanted to normalize. However, I have some columns containing string data, like:
Name State Gender Age Income Height
Sam CA M 13 10000 70
Bob AZ M 21 25000 55
Tom FL M 30 100000 45
I only want to normalize the Age, Income, and Height columns, but my method above does not work because of the string data in the Name, State, and Gender columns.
You can perform operations on a subset of rows or columns in pandas in a number of ways. One useful way is indexing:
# Assuming same lines from your example
cols_to_norm = ['Age','Height']
survey_data[cols_to_norm] = survey_data[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
This will apply it to only the columns you desire and assign the result back to those columns. Alternatively you could set them to new, normalized columns and keep the originals if you want.
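For instance, to keep the originals and write the normalized values to new columns, a small sketch (the '_norm' suffix is just an example name):
# Min-max normalize the chosen columns into new '_norm' columns, leaving the originals intact
normed = survey_data[cols_to_norm].apply(lambda x: (x - x.min()) / (x.max() - x.min())).add_suffix('_norm')
survey_data = survey_data.join(normed)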
I think it's better to use sklearn.preprocessing in this case, which gives us many more scaling options.
With StandardScaler, it would look like this for your case:
from sklearn.preprocessing import StandardScaler
cols_to_norm = ['Age','Height']
surveyData[cols_to_norm] = StandardScaler().fit_transform(surveyData[cols_to_norm])
A simple and more efficient way: pre-calculate the mean, max, and min, using dropna() to avoid missing data:
mean_age = survey_data.Age.dropna().mean()
max_age = survey_data.Age.dropna().max()
min_age = survey_data.Age.dropna().min()
survey_data['Age'] = survey_data['Age'].apply(lambda x: (x - mean_age) / (max_age - min_age))
This way will work.
I think it's really nice to use the built-in functions:
# Assuming same lines from your example
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
cols_to_norm = ['Age','Height']
survey_data[cols_to_norm] = scaler.fit_transform(survey_data[cols_to_norm])
MinMax normalize all numeric columns with minmax_scale
import numpy as np
from sklearn.preprocessing import minmax_scale
# cols = ['Age', 'Height']
cols = df.select_dtypes(np.number).columns
df[cols] = minmax_scale(df[cols])
Note: keeps the index, column names, and non-numerical variables unchanged.
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# let dataset here be your data
minmax = MinMaxScaler()
for x in dataset.columns[dataset.dtypes == 'int64']:
    dataset[x] = minmax.fit_transform(np.array(dataset[x]).reshape(-1, 1))
