Preserving the index when selecting a slice of a pandas dataframe - python

So I am creating my training and test sets for use in a Multiple Linear Regression model using sklearn.
My dataset contains 182 features and looks like the following:
id feature1 feature2 .... feature182 Target
D24352 145 8 7 1
G09340 10 24 0 0
E40988 6 42 8 1
H42093 238 234 2 1
F32093 12 72 1 0
I then have the following code:
import pandas as pd
dataset = pd.read_csv('C:\\mylocation\\myfile.csv')
dataset0 = dataset.set_index('t1.id')
dataset2 = pd.get_dummies(dataset0)
y = dataset0.iloc[:, 31:32].values
dataset2.pop('Target')
X = dataset2.iloc[:, :180].values
Once I use dataframe.iloc, however, I lose my indexes (which I have set to be my IDs). I would like to keep these, as I currently have no way of telling which records in my results relate to which records in my original dataset when I do the following step:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)

It looks like your data is stored as object type. You should convert it to float64 (assuming that all your data is numeric; otherwise, only convert the columns that you want to be numeric). Since it turns out your index is of type string, you need to set the dtype of your dataframe after setting the index (and generating the dummies). Again, assuming that the rest of your data is numeric:
import numpy as np
dataset = pd.read_csv('C:\\mylocation\\myfile.csv')
dataset0 = dataset.set_index('t1.id')
dataset2 = pd.get_dummies(dataset0)
dataset0 = dataset0.astype(np.float64)  # add this line to explicitly set the dtype
Now you should be able to just leave out .values when slicing the DataFrame:
y = dataset0.iloc[:, 31:32]
dataset2.pop('Target')
X = dataset2.iloc[:, :180]
With .values you access the underlying numpy arrays of the DataFrame. These do not have an index column. Since sklearn is, in most cases, compatible with pandas, you can simply pass a pandas DataFrame to sklearn.
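For example (a minimal sketch, assuming X and y are kept as DataFrames by leaving out .values as above), the IDs survive the split and can be read straight off the test set:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print(X_test.index[:5])  # the original IDs, so predictions can be mapped back to records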
If this does not work, you can still apply reset_index to your DataFrame. This will add the index as a new column, which you will have to drop when passing the training data to sklearn:
dataset0.reset_index(inplace=True)
dataset2.reset_index(inplace=True)
y = dataset0.iloc[:, 31:32]
dataset2.pop('Target')
X = dataset2.iloc[:, :180]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train.drop('index', axis=1), y_train.drop('index', axis=1))
y_pred = regressor.predict(X_test.drop('index', axis=1))
In this case you'll still have to change the slicing [:, 31:32] and [:, :180] to the correct columns, so that the index is included in the slice.
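For instance (a sketch, assuming reset_index inserted the old index as column 0 and shifted every other column right by one):
y = dataset0.iloc[:, [0, 32]]   # the index column plus the shifted target column
X = dataset2.iloc[:, :181]      # the index column plus the 180 feature columns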

Related

Python: Combine predicted y-variable labels to the dataframe

I have a multi-class label prediction problem to identify, let's say, fruits as an example. I am able to get predictions from the model's fit and predict functions, and I have trained and tested the model. Below is the code. I am trying to merge my "y predictions" from the variable forest_y_pred back into my original dataset, so that I can compare the original target variable to the predicted target variable in a data frame. I have 2 questions:
1) Is y_test the same as forest_y_pred = forest.predict(X_test)? I am getting exactly the same results when I compare. Am I getting this wrong? I am a bit confused here; predict() is supposed to predict X_test, not generate exactly the same results as y_test.
2) I am trying to merge forest_y_pred = forest.predict(X_test) back into df. Here is what I tried, following this: Merging results from model.predict() with original pandas DataFrame?
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import pandas as pd
# Load Data
df = pd.read_excel('../data/file.xlsx', converters={'col1': str})
df = df.set_index('INDEX_ID')  # Setting index id
df
# Doing this way because of setting index. INDEX_ID is a column in the df
X_train, X_test, y_train, y_test = train_test_split(df.loc[:, ~df.columns.isin(['Target'])], df.Target, train_size=0.5)
print(y_test[:5])
type(y_test) #pandas.core.series.Series
ID
12 Apples
124 Oranges
345 Apples
123 Oranges
232 Kiwi
forest = RandomForestClassifier()
# Training
forest_model = forest.fit(X_train, y_train)
print(forest_model)
# Predictions
forest_y_pred = forest.predict(X_test)
print("forest_y_pred:\n",forest_y_pred[:5])
['Apples','Oranges','Apples','Oranges','Kiwi']
y_test['preds'] = forest_y_pred
print(y_test['preds'][:5])
['Apples','Oranges','Apples','Oranges','Kiwi']
df_out = pd.merge(df,y_test[['preds']],how = 'left',left_index = True, right_index = True)
# ValueError: can not merge DataFrame with instance of type <class 'pandas.core.series.Series'>
# How do I fix this? I tried a ton of ways to convert the ndarray, series, dataframe... nothing I tried is working so far. Thanks a bunch!!
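A sketch of one possible fix (not from the original thread): since y_test is a Series, build a DataFrame of predictions indexed like X_test and merge that instead of assigning into the Series:
preds_df = pd.DataFrame({'preds': forest_y_pred}, index=X_test.index)
df_out = pd.merge(df, preds_df, how='left', left_index=True, right_index=True)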

Fitting MultinomialNB on multiple columns of data

Given a table of data containing 100 rows, such as:
Place | Text | Value | Text_Two
europe | some random text | 3.2 | some more random text
america | the usa | 4.1 | the white house
...
I am trying to classify with the following:
df = pd.read_csv('data.csv')
mnb = MultinomialNB()
tf = TfidfVectorizer()
df.loc[df['Place'] == 'europe','Place'] = 0
df.loc[df['Place'] == 'america','Place'] = 1
X = df[['Text', 'Value', 'Text_Two']]
y = df['Place']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
X_train_tf = tf.fit_transform(X_train)
mnb.fit(X_train_tf, y_train)
The above produces the following error:
ValueError: Found input variables with inconsistent numbers of
samples: [3, 100]
So from what I understand it's only seeing the categories that were set with X = df[['Text', 'Value', 'Text_Two']], not the data within those categories.
The code above works if I only specify X for one category, such as:
X = df['Text']
Is it possible to fit the MultinomialNB on multiple categories of data?
This has nothing to do with MultinomialNB. It can handle multiple columns fine. The problem is TfidfVectorizer.
TfidfVectorizer only works on a one-dimensional iterable of strings (a single column of your dataframe) and will not do any kind of check on the shape or type of the input data.
It will only do this:
for doc in raw_documents:
...
...
When you pass a dataframe to it (be it a single column or multiple columns), iterating with for doc in raw_documents: over a dataframe only yields the column names, not the actual data. The X you pass has three columns, so only those three column names are used as documents, hence the error
ValueError: Found input variables with inconsistent numbers of samples: [3, 100]
because your y has length 100, while your X (even though it has 100 rows) is reduced to length 3 by the TfidfVectorizer.
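A quick way to see the pitfall (a toy sketch, not from the original answer):
import pandas as pd
toy = pd.DataFrame({'Text': ['a b', 'c d'], 'Value': [3.2, 4.1], 'Text_Two': ['e f', 'g h']})
print([doc for doc in toy])  # ['Text', 'Value', 'Text_Two'] -- the column names, not the rows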
So to solve this, you have two options:
1) You need to do individual tf-idf vectorization for each text column (Text, Text_Two) and then combine the resultant matrices to form the feature matrix to be used with MultinomialNB.
2) You can combine the two text columns into a single column as #âńōŋŷxmoůŜ has suggested and then do tf-idf on that single column.
Both options will result in different feature vectors, so you need to first understand what each one does and choose what you want.
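A minimal sketch of option 1 (assuming the column names from the question): vectorize each text column separately, then stack the resulting sparse matrices side by side before fitting MultinomialNB:
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
tf_one, tf_two = TfidfVectorizer(), TfidfVectorizer()
X_combined = hstack([tf_one.fit_transform(df['Text']), tf_two.fit_transform(df['Text_Two'])])
# X_combined now has one row per document and can be passed to MultinomialNB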
I would rather combine the columns Text and Text_Two into one column and then construct the classifier from there, since the vectorizer works on only one text column at a time. Below is the code that combines the columns Text and Text_Two into one.
You might be interested in multi-class or multi-label classification, but those refer to the target variable (y) rather than the input variables (X).
http://scikit-learn.org/stable/modules/multiclass.html. Hope it helps.
import pandas as pd
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
df = pd.read_csv('data.csv', header=0, sep='|')
df.columns = [x.strip() for x in df.columns]
mnb = MultinomialNB()
tf = TfidfVectorizer()
#df.loc[df['Place'] == 'europe','Place'] = 0
#df.loc[df['Place'] == 'america','Place'] = 1
#X = df[['Text', 'Value', 'Text_Two']]
X = df.Text + ' ' + df.Text_Two  # join with a space so words don't run together
y = df['Place']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
pipe = make_pipeline(TfidfVectorizer(), MultinomialNB())
pipe.fit(X_train, y_train)
pipe.predict(X_test)

add column to data set in python

I am trying to add predicted data back to my original dataset in Python. I think I'm supposed to use pandas and .assign() or pd.DataFrame, but I have no clue how to write this after reading all the documentation (sorry, I'm new to all this and just started learning coding recently). I've written my code below and just need help with the code for adding my predictions back to the dataset. Thanks for the help!
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
# Feature Scaling X_train and X_test
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#Feature scaling the all independent variables used to build the model
whole_dataset = sc.transform(X)
# Fitting classifier to the Training set
# Create your Naive Bayes here
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict_proba(X_test)
# Predicting the results for the whole dataset
y_pred2 = classifier.predict_proba(whole_dataset)
# Add y_pred2 predictions back to the dataset
???
You can just do dataset['prediction'] = y_pred to add a new column.
Pandas supports a simple syntax for adding new columns, here it will add a new column and probably take a view on the numpy array returned from sklearn so it should be nice and fast.
EDIT
Looking at your code and the data, you're misunderstanding what train_test_split does. It is splitting the data into 3/4 and 1/4 splits of your original dataset, which has 400 rows: your X train data contains 300 rows, and the test data 100 rows. You're then trying to assign back to your original dataset, which is 400 rows. Firstly, the number of rows doesn't match; secondly, what is returned from predict_proba is a matrix of the predicted class probabilities. So what you want to do after training is to predict on the original dataset and assign this back as 2 columns by sub-selecting each column:
y_pred = classifier.predict_proba(X)
Now assign this back:
dataset['predict_class_1'],dataset['predict_class_2'] = y_pred[:,0],y_pred[:,1]
There are several solutions. The answer from EdChurm mentioned one.
As far as I know, pandas has 2 other methods to work with it:
df.insert()
df.assign()
Since you didn't provide the data in use, here's a pretty simple example.
import pandas as pd
import numpy as np
np.random.seed(1)
df = pd.DataFrame(np.random.randn(10), columns=['raw'])
df = df.assign(cube_raw=df['raw']**3)
df.insert(1, 'square_raw', df['raw']**2)
df
raw square_raw cube_raw
0 1.624345 2.638498 4.285832
1 -0.611756 0.374246 -0.228947
2 -0.528172 0.278965 -0.147342
3 -1.072969 1.151262 -1.235268
4 0.865408 0.748930 0.648130
5 -2.301539 5.297080 -12.191435
6 1.744812 3.044368 5.311849
7 -0.761207 0.579436 -0.441071
8 0.319039 0.101786 0.032474
9 -0.249370 0.062186 -0.015507
Just keep in mind that df.assign() doesn't work in place, so you should reassign it to your previous variable.
In my opinion, I prefer df.insert() the most, as it lets you specify the location where you want to insert the new column (with the loc parameter).

scikit-learn error: The least populated class in y has only 1 member

I'm trying to split my dataset into a training and a test set by using the train_test_split function from scikit-learn, but I'm getting this error:
In [1]: y.iloc[:,0].value_counts()
Out[1]:
M2 38
M1 35
M4 29
M5 15
M0 15
M3 15
In [2]: xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=1/3, random_state=85, stratify=y)
Out[2]:
Traceback (most recent call last):
File "run_ok.py", line 48, in <module>
xtrain,xtest,ytrain,ytest = train_test_split(X,y,test_size=1/3,random_state=85,stratify=y)
File "/home/aurora/.pyenv/versions/3.6.0/lib/python3.6/site-packages/sklearn/model_selection/_split.py", line 1700, in train_test_split
train, test = next(cv.split(X=arrays[0], y=stratify))
File "/home/aurora/.pyenv/versions/3.6.0/lib/python3.6/site-packages/sklearn/model_selection/_split.py", line 953, in split
for train, test in self._iter_indices(X, y, groups):
File "/home/aurora/.pyenv/versions/3.6.0/lib/python3.6/site-packages/sklearn/model_selection/_split.py", line 1259, in _iter_indices
raise ValueError("The least populated class in y has only 1"
ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.
However, all classes have at least 15 samples. Why am I getting this error?
X is a pandas DataFrame which represents the data points, y is a pandas DataFrame with one column that contains the target variable.
I cannot post the original data because it's proprietary, but it is fairly reproducible by creating a random pandas DataFrame (X) with 1k rows x 500 columns, and a random pandas DataFrame (y) with the same number of rows (1k) of X, and, for each row the target variable (a categorical label).
The y pandas DataFrame should have different categorical labels (e.g. 'class1', 'class2'...) and each labels should have at least 15 occurrences.
The problem was that train_test_split takes as input 2 arrays, but the y array is a one-column matrix. If I pass only the first column of y, it works:
xtrain, xtest, ytrain, ytest = train_test_split(X, y.iloc[:, 0], test_size=1/3, random_state=85, stratify=y.iloc[:, 0])
The main point is that if you use stratified CV, you will get this warning when the number of splits cannot produce CV splits with the same ratio of all classes in the data. E.g. if you have 2 samples of one class and 5 splits, there will be 2 CV sets with 1 sample of this class and 3 CV sets with 0 samples, so the ratio of samples for this class is not equal across all CV sets. But it is only a problem if there are 0 samples in any of the sets, so if you have at least as many samples as the number of CV splits, i.e. 5 in this case, this warning won't appear.
See https://stackoverflow.com/a/48314533/2340939.
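To see where the boundary lies (a toy sketch, assumed data): with 3 splits and a class of only 2 samples, StratifiedKFold emits the 'least populated class' warning, and with a class of 1 sample train_test_split(..., stratify=y) raises the error from the question:
import numpy as np
from sklearn.model_selection import StratifiedKFold
y = np.array(['a'] * 10 + ['b'] * 2)  # class 'b' has only 2 samples
skf = StratifiedKFold(n_splits=3)
splits = list(skf.split(np.zeros((len(y), 1)), y))  # warns: least populated class has only 2 members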
I had the same problem. Some of my classes had only one or two items (mine is a multi-class problem). You can remove or merge the classes that have too few items. That is how I solved my problem.
Continuing with user2340939's answer: if you really need your train-test splits to be stratified despite some classes having too few rows, you can try the following method. I generally use the same approach, where I make a copy of all the rows of such classes in both the train and the test datasets:
from sklearn.model_selection import train_test_split

def get_min_required_rows(test_size=0.2):
    return 1 / test_size

def make_stratified_splits(df, y_col="label", test_size=0.2):
    """
    for any class with rows less than min_required_rows corresponding to the input test_size,
    all the rows associated with the specific class will have a copy in both the train and test splits.

    example: if test_size is 0.2 (20% otherwise),
    min_required_rows = 5 (which is obtained from 1 / test_size i.e., 1 / 0.2)
    where the resulting splits will have 4 train rows (80%), 1 test row (20%)..
    """
    id_col = "id"
    temp_col = "same-class-rows"

    class_to_counts = df[y_col].value_counts()
    df[temp_col] = df[y_col].apply(lambda y: class_to_counts[y])

    min_required_rows = get_min_required_rows(test_size)
    copy_rows = df[df[temp_col] < min_required_rows].copy(deep=True)
    valid_rows = df[df[temp_col] >= min_required_rows].copy(deep=True)

    X = valid_rows[id_col].tolist()
    y = valid_rows[y_col].tolist()

    # notice, this train_test_split is a stratified split
    X_train, X_test, _, _ = train_test_split(X, y, test_size=test_size, random_state=43, stratify=y)

    X_test = X_test + copy_rows[id_col].tolist()
    X_train = X_train + copy_rows[id_col].tolist()

    df.drop([temp_col], axis=1, inplace=True)

    test_df = df[df[id_col].isin(X_test)].copy(deep=True)
    train_df = df[df[id_col].isin(X_train)].copy(deep=True)

    print(f"number of rows in the original dataset: {len(df)}")
    test_prop = round(len(test_df) / len(df) * 100, 2)
    train_prop = round(len(train_df) / len(df) * 100, 2)
    print(f"number of rows in the splits: {len(train_df)} ({train_prop}%), {len(test_df)} ({test_prop}%)")

    return train_df, test_df
I had this issue because some of my things to be split were lists, and some were arrays. When I converted the arrays to a list, it worked.
from sklearn.model_selection import train_test_split

all_keys = df['Key'].unique().tolist()

t_df = pd.DataFrame()
c_df = pd.DataFrame()

for key in all_keys:
    print(key)
    if df.loc[df['Key'] == key].shape[0] < 2:
        t_df = pd.concat([t_df, df.loc[df['Key'] == key]])
    else:
        df_t, df_c = train_test_split(df.loc[df['Key'] == key], test_size=0.2, stratify=df.loc[df['Key'] == key]['Key'])
        t_df = pd.concat([t_df, df_t])
        c_df = pd.concat([c_df, df_c])
When you use stratify=y, combine the under-populated categories into one category.
For example: filter the labels with fewer than 50 occurrences and relabel them as one single category like "others"; then the least populated class error will be solved.
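A sketch of that relabeling (assuming a 'label' column and the threshold of 50 from the example):
counts = df['label'].value_counts()
rare = counts[counts < 50].index
df['label'] = df['label'].where(~df['label'].isin(rare), 'others')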
Do you like "functional" programming? Like confusing your co-workers, and writing everything in one line of code? Are you the type of person who loves nested ternary operators, instead of 2 'if' statements? Are you an Elixir programmer trapped in a Python programmer's body?
If so, the following solution may work for you. It allows you to discover how many members the least-populated class has, in real-time, then adjust your cross-validation value on the fly:
""" Let's say our dataframe is like this, for example:
dogs weight size
---- ---- ----
Poodle 14 small
Maltese 13 small
Shepherd 45 big
Retriever 41 big
Burmese 43 big
The 'least populated class' would be 'small', as it only has 2 members.
If we tried doing more than 2-fold cross validation on this, the results
would be skewed.
"""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
X = df['weight']
y = df['size']
# Random forest classifier, to classify dogs into big or small
model = RandomForestClassifier()
# Find the number of members in the least-populated class, THIS IS THE LINE WHERE THE MAGIC HAPPENS :)
leastPopulated = [x for d in set(list(y)) for x in list(y) if x == d].count(min([x for d in set(list(y)) for x in list(y) if x == d], key=[x for d in set(list(y)) for x in list(y) if x == d].count))
# I want to know the F1 score at each fold of cross validation.
# This 'fOne' variable will be a list of the F1 score from each fold
fOne = cross_val_score(model, X, y, cv=leastPopulated, scoring='f1_weighted')
# We print the F1 score here
print(f"Average F1 score during cross-validation: {np.mean(fOne)}")
Try it this way; it worked for me, and it is also mentioned here:
x_train, x_test, y_train, y_test = train_test_split(data_x, data_y, test_size=0.33, random_state=42)
remove stratify=y while splitting train and test data
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=1/3, random_state=85)
Remove stratify.
stratify=y
should only be used in case of classification problems, so that various output classes (say 'good', 'bad') can get equally distributed among train and test data. It is a sampling method in statistics. We should avoid using stratify in regression problems. The below code should work
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=1/3, random_state=85)

Merging results from model.predict() with original pandas DataFrame?

I am trying to merge the results of a predict method back with the original data in a pandas.DataFrame object.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
import numpy as np
data = load_iris()
# bear with me for the next few steps... I'm trying to walk you through
# how my data object landscape looks... i.e. how I get from raw data
# to matrices with the actual data I have, not the iris dataset
# put feature matrix into columnar format in dataframe
df = pd.DataFrame(data = data.data)
# add outcome variable
df['class'] = data.target
X = np.matrix(df.loc[:, [0, 1, 2, 3]])
y = np.array(df['class'])
# finally, split into train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.8)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
# I've got my predictions now
y_hats = model.predict(X_test)
To merge these predictions back with the original df, I try this:
df['y_hats'] = y_hats
But that raises:
ValueError: Length of values does not match length of index
I know I could split the df into train_df and test_df and this problem would be solved, but in reality I need to follow the path above to create the matrices X and y (my actual problem is a text classification problem in which I normalize the entire feature matrix before splitting into train and test). How can I align these predicted values with the appropriate rows in my df, since the y_hats array is zero-indexed and seemingly all information about which rows were included in the X_test and y_test is lost? Or will I be relegated to splitting dataframes into train-test first, and then building feature matrices? I'd like to just fill the rows included in train with np.nan values in the dataframe.
Your y_hats length will only be the length of the test data (20%), because you predicted on X_test. Once your model is validated and you're happy with the test predictions (by examining the accuracy of your model on the X_test predictions compared to the X_test true values), you should rerun the predict on the full dataset (X). Add these two lines to the bottom:
y_hats2 = model.predict(X)
df['y_hats'] = y_hats2
EDIT: per your comment, here is an updated result that returns the dataset with the predictions appended where they were in the test dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
import numpy as np
data = load_iris()
# bear with me for the next few steps... I'm trying to walk you through
# how my data object landscape looks... i.e. how I get from raw data
# to matrices with the actual data I have, not the iris dataset
# put feature matrix into columnar format in dataframe
df = pd.DataFrame(data = data.data)
# add outcome variable
df_class = pd.DataFrame(data = data.target)
# finally, split into train-test
X_train, X_test, y_train, y_test = train_test_split(df,df_class, train_size = 0.8)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
# I've got my predictions now
y_hats = model.predict(X_test)
y_test['preds'] = y_hats
df_out = pd.merge(df,y_test[['preds']],how = 'left',left_index = True, right_index = True)
I have the same problem (almost). I fixed it this way:
...
X_train, X_test, y_train, y_test = train_test_split(df,df_class, train_size = 0.8)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
y_hats = model.predict(X_test)
y_hats = pd.DataFrame(y_hats)
df_out = X_test.reset_index()
df_out["Actual"] = y_test.reset_index()["Columns_Name"]
df_out["Prediction"] = y_hats.reset_index()[0]
You can create a y_hat dataframe copying indices from X_test then merge with the original data.
y_hats_df = pd.DataFrame(data = y_hats, columns = ['y_hats'], index = X_test.index.copy())
df_out = pd.merge(df, y_hats_df, how = 'left', left_index = True, right_index = True)
Note: a left join will include the train data rows as well. Omitting the how parameter defaults to an inner join, which results in just the test data.
Try this:
y_hats2 = model.predict(X)
df['y_hats'] = y_hats2
You can probably make a new dataframe and add to it the test data along with the predicted values:
data['y_hats'] = y_hats
data.to_csv('data1.csv')
predicted = m.predict(X_valid)
predicted_df = pd.DataFrame(data=predicted, columns=['y_hat'],
index=X_valid.index.copy())
df_out = pd.merge(X_valid, predicted_df, how ='left', left_index=True,
right_index=True)
This worked well for me. It maintains the indexing positions.
pred_prob = model.predict_proba(X_test)[:, 1]  # predicted probability of the positive class
pred_class = np.where(pred_prob >0.5, "Yes", "No") #for binary(Yes/No) category
predictions = pd.DataFrame(pred_class, columns=['Prediction'])
my_new_df = pd.concat([my_old_df, predictions], axis =1)
Here is a solution that worked for me:
It consists of building, for each of your folds/iterations, one dataframe which includes observed and predicted values for your test set; this way, you make use of the index (ID) contained in y_true, which should correspond to your subjects' IDs (in my code: 'SubjID').
You then concatenate the DataFrames that you generated (through 5 folds of test data in my case) and paste them back into your original dataset.
I hope this helps!
FoldNr = 0

for train_index, test_index in skf.split(X, y):
    FoldNr = FoldNr + 1

    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

    # [...] your model

    # performance is measured on test set
    y_true, y_pred = y_test, clf.predict(X_test)

    # Save predicted values for each test set
    a = pd.DataFrame(y_true).reset_index()
    b = pd.Series(y_pred, name='y_pred')
    globals()['ObsPred_df' + str(FoldNr)] = a.join(b)
    globals()['ObsPred_df' + str(FoldNr)].set_index('SubjID', inplace=True)

# Create dataframe with observed and predicted values for all subjects
ObsPred_Concat = pd.concat([ObsPred_df1, ObsPred_df2, ObsPred_df3, ObsPred_df4, ObsPred_df5])
original_df['y_pred'] = ObsPred_Concat['y_pred']
First you need to convert the y_val or y_test data into a DataFrame:
compare_df = pd.DataFrame(y_val)
Then just create a new column with the predicted data:
compare_df['predicted_res'] = y_pred_val
After that, you can easily filter the data to show which predictions match the original values, based on a simple condition:
test_df = compare_df[compare_df['y_val'] == compare_df['predicted_res']]
you can also use
y_hats = model.predict(X)
df['y_hats'] = y_hats  # predict returns a NumPy array aligned with the rows of X, so it can be assigned directly
