How to get indices of instances during cross-validation - python

I am doing binary classification. How can I extract the original indices of the misclassified (or correctly classified) instances of the training data frame while doing K-fold cross-validation? I found no answer to this question here.
I got the values in folds as described here:
from sklearn.model_selection import StratifiedKFold, cross_val_score, cross_val_predict

skf = StratifiedKFold(n_splits=10, shuffle=False)  # random_state only takes effect when shuffle=True
cv_results = cross_val_score(model, X_train, y_train, cv=skf, scoring='roc_auc')
pred = cross_val_predict(model, X_train, y_train, cv=skf)  # out-of-fold predictions
fold_pred = [pred[val_idx] for train_idx, val_idx in skf.split(X_train, y_train)]
fold_pred
Is there any method to get the indices of the misclassified (or correctly classified) instances, so that the output is a dataframe containing only the misclassified (or correctly classified) instances, with their original indices, from the cross-validation?
Desired output:
Missclassified instances in the dataframe with real indices.
col1 col2 col3 col4 target
13 0 1 0 0 0
14 0 1 0 0 0
18 0 1 0 0 1
22 0 1 0 0 0
where the input has 100 instances, of which 4 (index numbers 13, 14, 18 and 22) are misclassified during CV.

From cross_val_predict you already have the predictions. It's then a matter of subsetting your data frame to the rows where the prediction differs from the true label, for example:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.datasets import load_breast_cancer
import pandas as pd

data = load_breast_cancer()
df = pd.DataFrame(data.data[:, :5], columns=data.feature_names[:5])
df['label'] = data.target

rfc = RandomForestClassifier()
skf = StratifiedKFold(n_splits=10, random_state=111, shuffle=True)
# out-of-fold prediction for every row, in the original row order
pred = cross_val_predict(rfc, df.iloc[:, :5], df['label'], cv=skf)
# rows where the out-of-fold prediction disagrees with the true label
df[df['label'] != pred]
mean radius mean texture ... mean smoothness label
3 11.42 20.38 ... 0.14250 0
5 12.45 15.70 ... 0.12780 0
9 12.46 24.04 ... 0.11860 0
22 15.34 14.26 ... 0.10730 0
31 11.84 18.70 ... 0.11090 0
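If you only need the original index labels rather than the full rows, the same boolean mask works on the index; a one-line follow-up, assuming the df and pred defined above:
# index labels of the misclassified instances
misclassified_idx = df.index[df['label'] != pred]
misclassified_idx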


Split k-fold where each fold of validation data doesn't include duplicates

Let's say I have a pandas DataFrame df containing 1,000 rows, like below.
print(df)
id class
0 0000799a2b2c42d 0
1 00042890562ff68 0
2 0005364cdcb8e5b 0
3 0007a5a46901c56 0
4 0009283e145448e 0
... ... ...
995 04309a8361c5a9e 0
996 0430bde854b470e 0
997 0431c56b712b9a5 1
998 043580af9803e8c 0
999 043733a88bfde0c 0
It has 950 rows of class 0 and 50 rows of class 1.
Now I want to add one more column as fold, like below.
id class fold
0 0000799a2b2c42d 0 0
1 00042890562ff68 0 0
2 0005364cdcb8e5b 0 0
3 0007a5a46901c56 0 0
4 0009283e145448e 0 0
... ... ... ...
995 04309a8361c5a9e 0 4
996 0430bde854b470e 0 4
997 0431c56b712b9a5 1 4
998 043580af9803e8c 0 4
999 043733a88bfde0c 0 4
where the fold column contains 5 folds (0, 1, 2, 3, 4), and each fold has 200 rows: 190 of class 0 and 10 of class 1 (i.e. preserving the percentage of samples of each class).
I've tried StratifiedShuffleSplit from sklearn.model_selection, like below.
sss = StratifiedShuffleSplit(n_splits=5, random_state=2021, test_size=0.2)
for _, val_index in sss.split(df['id'], df['class']):
    ...
I then treated each val_index array as one specific fold, but the same indices end up appearing in more than one val_index.
Can someone help me?
What you need is a k-fold splitter for cross-validation, not a train/test split: StratifiedShuffleSplit draws independent random splits, so the validation sets can overlap. You can use StratifiedKFold instead. For example, if your dataset is like this:
import pandas as pd
import numpy as np
from sklearn.model_selection import StratifiedKFold

np.random.seed(12345)
df = pd.DataFrame({'id': np.random.randint(1, 1e5, 1000),
                   'class': np.random.binomial(1, 0.1, 1000)})
df['fold'] = np.nan
We use the k-fold splitter, iterate through it as you did, and assign the fold number:
skf = StratifiedKFold(n_splits=5, shuffle=True)
for fold, (train, test) in enumerate(skf.split(df, df['class'])):
    df.loc[test, 'fold'] = fold
End product:
pd.crosstab(df['fold'],df['class'])
class 0 1
fold
0.0 182 18
1.0 182 18
2.0 182 18
3.0 182 18
4.0 181 19
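One caveat: shuffle=True without a random_state assigns the folds differently on every run. If you need a reproducible assignment, pass one; a minimal variant of the loop above:
# reproducible fold assignment (the random_state value is arbitrary)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=2021)
for fold, (train, test) in enumerate(skf.split(df, df['class'])):
    df.loc[test, 'fold'] = fold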

ValueError: Found array with 1 feature(s) while a minimum of 2 is required

I applied Random Forest with RFECV, among other ML models, to a churn dataset.
While Logistic Regression, SVC, Gradient Boosting and Decision Trees all worked well with RFECV,
the Random Forest RFECV decided that only one feature was important and eliminated all the other features.
Code:
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split

# Create feature variable X and target variable y
y = churn_dataset['Churn']
X = churn_dataset.drop(['Churn'], axis=1)

# RFECV
rfecv = RFECV(RandomForestClassifier(), cv=10, scoring='f1')
rfecv = rfecv.fit(X, y)
print('Optimal number of features :', rfecv.n_features_)
print('Best features :', X.columns[rfecv.support_])
print(np.where(rfecv.support_ == False)[0])

# drop the eliminated columns
X.drop(X.columns[np.where(rfecv.support_ == False)[0]], axis=1, inplace=True)
rfecv.estimator_.feature_importances_

# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.20,
                                                    random_state=8)

# fit model
random_forest = rfecv.fit(X_train, y_train)
The following error is returned:
ValueError: Found array with 1 feature(s) (shape=(1622, 1)) while a minimum of 2 is required.
Output of churn_dataset.head()
name gender Churn last_purchase_in_days order_count quantity disc_code AOV sales ... Probability_of_Churn
2 ACKLE 0 1 0.317604 -0.453647 2 -0.368683 1.173058 0.291104 0 ... 0 0 0 0 0 0 1 0 0 1.00
4 ADNAN 1 1 0.250814 -0.453647 2 -0.368683 -0.431351 -0.418023 0 ... 0 0 0 0 0 0 1 0 0 1.00
5 ADY 0 1 -1.143415 -0.453647 2 -0.368683 0.190767 -0.117630 0 ... 0 0 0 0 0 0 1 0 0 1.00
6 ANDY 0 1 0.768432 -0.453647 2 -0.368683 -0.752232 -0.559952 0 ... 0 0 0 0 0 0 1 0 0 1.00
7 AGIE 0 0 -1.669381 3.048875 8 -0.368683 0.520653 4.251851 0 ... 0 0 0 0 0 0 1 0 0 0.16
churn_dataset.columns
Index(['name', 'gender', 'Churn', 'last_purchase_in_days',
'order_count', 'quantity', 'disc_code',
'AOV', 'sales',
'channel_Paid Advertising','channel_Recurring Payment',
'channel_Search Engine',
'channel_Social Media', 'country_Denmark', 'country_France',
'country_Germany', 'country_Italy',
'country_Luxembourg', 'country_Others', 'country_Switzerland',
'country_United Kingdom', 'city_Düsseldorf', 'city_Frankfurt',
'city_Hamburg', 'city_Hannover', 'city_Köln', 'city_Leipzig',
'city_Munich', 'city_Others', 'city_Stuttgart', 'city_Wien',
'Probability_of_Churn'],
dtype='object')
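A likely cause, for what it's worth: RFECV kept a single feature, so after the drop X has exactly one column, and the second call to rfecv.fit fails because RFE-based selectors require at least two input features (hence shape=(1622, 1) in the traceback). A minimal sketch of one way around it, fitting the final classifier directly rather than refitting RFECV, and assuming the X, y and split settings from the question:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X now holds only the feature RFECV selected; fit a plain classifier
# instead of refitting RFECV on a single-column matrix
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=8)
random_forest = RandomForestClassifier().fit(X_train, y_train)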

Text Classification with Custom Vocabulary in Python

I have a list of words (a lexicon) and I want to use it for a BOW classification. In sklearn it is possible to use CountVectorizer and TfidfVectorizer; both approaches build the vocabulary they use from the training data. But in my case I have already built a list of words (a dictionary) that can be used to discriminate between the classes for text classification.
Is there any library or package I can use in python?
Check out the vocabulary parameter of CountVectorizer in the docs.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

twenty_train = fetch_20newsgroups(subset='train',
                                  categories=['alt.atheism', 'soc.religion.christian',
                                              'comp.graphics', 'sci.med'],
                                  shuffle=True, random_state=42)

my_vocabulary = ['aristotle',
                 'arithmetic',
                 'arizona',
                 'arkansas']

count_vect = CountVectorizer(vocabulary=my_vocabulary)
X_train_counts = count_vect.fit_transform(twenty_train.data)
df = pd.DataFrame.sparse.from_spmatrix(X_train_counts)
And you will see in the output that only the words from your vocabulary are used:
Out[16]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
4 0 0 0 0
... .. .. .. ..
2252 0 0 0 0
2253 0 0 0 0
2254 0 0 0 0
2255 0 0 0 0
2256 0 0 0 0
[2257 rows x 4 columns]
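If you would rather see your vocabulary terms than positional integers as column headers, you can pass them as column names; a small sketch, assuming scikit-learn >= 1.0 for get_feature_names_out:
# label the columns with the fixed vocabulary terms
df = pd.DataFrame.sparse.from_spmatrix(X_train_counts,
                                       columns=count_vect.get_feature_names_out())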
Lately there have been many questions around multi-class, multi-label text classification; please check if this article helps.

How to interpret iris data set results?

I'm working on the basics of machine learning with the iris dataset. I think I understand the idea of splitting data and making predictions on new data; however, I'm having trouble understanding the results I get for the code below:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

iris = load_iris()
X = iris.data
y = iris.target
len(X)  # result: 150

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(y_pred)
print(metrics.accuracy_score(y_test, y_pred))
Result: [1 2 2 0 2 1 0 2 0 1 1 2 2 2 0 0 2 2 0 0 1 2 0 2 1 2 1 1 1 2 0 1 1 0 1 0 0
2]
0.95 accuracy (i.e. 95%)
I only get back 38 results. From what I understand, the data is split into 50/50 chunks, meaning I should get back 50 results for the data not in the training set. Why do I get only 38?
I feel like my biggest question regarding Machine Learning is actually using the model.
By default, train_test_split sets test_size to 0.25. With 150 samples that gives 150 × 0.25 = 37.5, which is rounded up to 38 test samples, so 38 values is correct.
sklearn.model_selection.train_test_split
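If you actually want a 50-sample test set, note that test_size also accepts an absolute count; a minimal sketch reusing the X and y from the question:
from sklearn.model_selection import train_test_split

# an integer test_size is interpreted as an absolute number of test samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=50, random_state=5)
len(X_test)  # 50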

GridSearchCV for number of neurons

I am trying to teach myself how to grid-search the number of neurons in a basic multi-layered neural network. I am using GridSearchCV and KerasClassifier in Python, with Keras. The code below works very well on other data sets, but for some reason I could not make it work on the Iris dataset, and I cannot find out why; I am missing something here. The result I get is:
Best: 0.000000 using {'n_neurons': 3}
0.000000 (0.000000) with: {'n_neurons': 3}
0.000000 (0.000000) with: {'n_neurons': 5}
from pandas import read_csv
import numpy
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from keras.wrappers.scikit_learn import KerasClassifier
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
from sklearn.model_selection import GridSearchCV

dataframe = read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:, 0:4].astype(float)
Y = dataset[:, 4]

seed = 7
numpy.random.seed(seed)

# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# one-hot encoding
dummy_y = np_utils.to_categorical(encoded_Y)

# scale the data
scaler = StandardScaler()
X = scaler.fit_transform(X)

def create_model(n_neurons=1):
    # create model
    model = Sequential()
    model.add(Dense(n_neurons, input_dim=X.shape[1], activation='relu'))  # hidden layer
    model.add(Dense(3, activation='softmax'))  # output layer
    # compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=100, batch_size=10, initial_epoch=0, verbose=0)

# define the grid search parameters
neurons = [3, 5]
param_grid = dict(n_neurons=neurons)
# this does 3-fold cross-validation by default; one can change k via cv
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, dummy_y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
For the purpose of illustration and computational efficiency I search over only two values. I sincerely apologize for asking such a simple question; I am new to Python, having switched from R because I realized the deep learning community uses Python.
Haha, this is probably the funniest thing I ever experienced on Stack Overflow :) Check:
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=5)
and you should see different behavior. The reason your model gets a perfect score (in terms of cross-entropy, 0 is equivalent to the best possible model) is that you haven't shuffled your data, and because Iris consists of three balanced classes, each of your folds had a single class as its target:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 (first fold ends here) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 (second fold ends here) 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
Such problems are really easy for any model to solve, which is why you got a perfect match.
Try shuffling your data first; that should give the expected behavior.
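One straightforward way to do the shuffling, as a sketch that assumes the X, dummy_y, seed and grid objects defined in the question:
from sklearn.utils import shuffle

# shuffle features and one-hot targets together so rows stay aligned
X_shuffled, dummy_y_shuffled = shuffle(X, dummy_y, random_state=seed)
grid_result = grid.fit(X_shuffled, dummy_y_shuffled)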
