Splitting dataset for training and testing row-wise - python

I want to split my dataset into training and test sets by year: rows with years ranging from 2009-2017 go into the train set, and the 2018 rows go into the test set. Splitting the data was easy for the most part, but my models are throwing a lot of indexing issues.
X = ((df[df['Year'] < 2018]))
X_train = np.array(X.drop(['Usage'], 1))
X_test = np.array(X['Usage'])
y =((df[df['Year'] > 2017]))
y_train = np.array(y.drop(['Usage'], 1))
y_test = np.array(y['Usage'])
This is how I plan on splitting the data. The Usage column is my forecast target and contains continuous values. Applying a simple RandomForestRegressor() gave me this error in return:
ValueError: Number of labels=14495 does not match number of samples=382772
@aditya my regressor model was pretty basic, but I'm attaching the code anyway. The columns being passed in X are as follows: X = ['Cust_Id', 'Usage', 'Plan_Group', 'Contract_Type', 'Cust_Status', 'Premise_Zip', 'Year', 'Month']
model = RandomForestRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# evaluate predictions
print(model.score(X_test, y_test))
# accuracy = accuracy_score(y_test, (y_pred < 0.5).astype(int))

For most of the algorithms in the sklearn stack, there is a standard notation:
X, capital letter, is usually a 2-D array (even if there is only one feature) and represents each data point in vector form.
y, small letter, is usually a 1-D vector that holds the labels, e.g. a class label or the value of a regression target.
You created X and y as two different dataframes filtered on the Year attribute, so they contain different sets of rows: X holds the pre-2018 rows and y holds the 2018 rows. That is exactly why the number of labels (14495) does not match the number of samples (382772). Instead, split the features and the target separately into train and test parts:
X = df.drop(columns=['Usage'])  # features: every column except the target
y = df['Usage']                 # target
X_train = X[df['Year'] < 2018]
X_test = X[df['Year'] > 2017]
y_train = y[df['Year'] < 2018]
y_test = y[df['Year'] > 2017]
And then you train on X_train and y_train:
model = RandomForestRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
This is not the best way to do it though. I will come back to edit the answer, but this should be enough to get you going for now.
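One note on evaluation, since the commented-out accuracy_score line in the question hints at confusion: accuracy is a classification metric and does not apply to a continuous target like Usage. A minimal sketch of regression evaluation instead, assuming the split above and purely numeric features:
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

model = RandomForestRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(model.score(X_test, y_test))          # R^2, not classification accuracy
print(mean_absolute_error(y_test, y_pred))  # MAE, in the units of Usage
print(mean_squared_error(y_test, y_pred))   # MSE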

Related

Separate Dataframe by label (convert dataframe into numpy array)

I have a dataframe and I want to separate it into different arrays according to its label. I'm not sure how to filter it by its index, or whether this is being done properly:
Example of Dataset (df)
Cancer_Type | Variable | Data Split | Target
Cancer1     | 43       | Train      | Good
Cancer5     | 34       | Train      | Bad
Cancer2     | 34       | Test       | Good
Cancer3     | 23       | Test       | Bad
Cancer4     | 25       | Test       | Good
Possibly doing something like this?
#initial split into train/test data
train = df['split'] == 'train'
print("train")
print(train)
test = df['split'] == 'test'
print("valid")
print(test)
X_test = test.values[-1, :-1]
y_test = test.values[-1, -1]
# Get the remaining dataset
X = train.values[:-1, :-1]
y = train.values[:-1, -1]
print("X")
#print(type(X))
#print(X)
print("y")
#print(type(y))
#print(y)
# Split the remaining dataset into train and calibration sets.
X_train, X_cal, y_train, y_cal = train_test_split(X, y)
print(X_train.shape, y_train.shape)
print(X_cal.shape, y_cal.shape)
Hopefully splitting by row.
From my understanding, you wish to split the data into train and test sets according to each observation's Data Split value, and afterwards split the train set again into train and calibration sets. The standard preprocessing methodology is to build the features X and the target y for each split:
# Get dataframes of train and test features
X_train = df[df['Data Split'] == 'Train'].drop(columns = ['Target']).to_numpy()
X_test = df[df['Data Split'] == 'Test'].drop(columns = ['Target']).to_numpy()
# Get arrays of train and test targets
y_train = df[(df['Data Split'] == 'Train')]["Target"].to_numpy()
y_test = df[(df['Data Split'] == 'Test')]["Target"].to_numpy()
# Split the train dataset further into train and validation/calibration sets.
X_train, X_cal, y_train, y_cal = train_test_split(X_train, y_train)
You now have your train, validation/calibration and test sets in array form.
If you wish to preserve the Target variable, simply:
train = df[df['Data Split'] == 'Train'].to_numpy()
test = df[df['Data Split'] == 'Test'].to_numpy()
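A quick sanity check, just rebuilding the small example df from the question and printing the resulting shapes (a sketch, not part of the answer above):
import pandas as pd

df = pd.DataFrame({
    'Cancer_Type': ['Cancer1', 'Cancer5', 'Cancer2', 'Cancer3', 'Cancer4'],
    'Variable':    [43, 34, 34, 23, 25],
    'Data Split':  ['Train', 'Train', 'Test', 'Test', 'Test'],
    'Target':      ['Good', 'Bad', 'Good', 'Bad', 'Good'],
})
X_train = df[df['Data Split'] == 'Train'].drop(columns=['Target']).to_numpy()
y_train = df[df['Data Split'] == 'Train']['Target'].to_numpy()
print(X_train.shape, y_train.shape)  # (2, 3) (2,)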

How to get a specific row for testing and other for training?

I want to test a specific row from my dataset and see the result, but I don't know how to do it. For example, I want to test row number 100 and then see the accuracy.
feature_cols = [0,1,2,3,4,5]
X = df[feature_cols] # Features
y = df[6] # Target variable
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1, random_state=1)
#Create Decision Tree classifer object
clf = DecisionTreeClassifier(max_depth=5)
#Train Decision Tree Classifer
clf = clf.fit(X_train,y_train)
#Predict the response for test dataset
y_pred = clf.predict(X_test)
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))
I recommend excluding the row you want to test from the dataset.
test_row = 100
# Boolean masks: True everywhere except test_row / True only at test_row
train_idx = np.arange(X.shape[0]) != test_row
test_idx = np.arange(X.shape[0]) == test_row
X_train = X[train_idx]
y_train = y[train_idx]
X_test = X[test_idx]
y_test = y[test_idx]
Now X_test will contain a single row. However, the accuracy will be either 0 or 1, since you are only testing one sample.
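If what you are after is a meaningful accuracy estimate rather than a single 0/1 outcome, a natural extension (my suggestion, not part of the answer above) is leave-one-out cross-validation, which repeats this one-row test for every row in turn:
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(max_depth=5)
# One fit per row: each row takes a turn as the single-row test set
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(scores.mean())  # fraction of rows predicted correctly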

How to pass different sets of data to train and test without splitting a dataframe (python)?

I have gone through multiple questions that explain how to divide a dataframe into train and test sets, with scikit-learn, without it, etc.
But my question is: I have 2 different CSVs (2 different dataframes from different years). I want to use one as train and the other as test.
How do I do that for LinearRegression / any model?
Load the datasets individually.
Make sure they are in the same format of rows and columns (features).
Use the train set to fit the model.
Use the test set to predict the output after training.
# Load the data
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
# Split features and value
# when trying to predict column "target"
X_train, y_train = train.drop(columns="target"), train["target"]
X_test, y_test = test.drop(columns="target"), test["target"]
# Fit (train) model
reg = LinearRegression()
reg.fit(X_train, y_train)
# Predict
pred = reg.predict(X_test)
# Score (R^2 for LinearRegression, not classification accuracy)
score = reg.score(X_test, y_test)
@Skillsmuggler what about X_train and X_test, how can I define them? Because when I try to do that it says NameError: name 'X_train' is not defined
I couldn't edit the first answer, which is almost there, but there is some code missing...
# Load the data
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
y_train = train.iloc[:, 0]   # if y is only the first column
X_train = train.iloc[:, 1:]
y_test = test.iloc[:, 0]
X_test = test.iloc[:, 1:]
# Fit (train) model
reg = LinearRegression()
reg.fit(X_train, y_train)
# Predict
pred = reg.predict(X_test)
# Score (R^2)
score = reg.score(X_test, y_test)
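One extra pitfall worth guarding against, not covered by either answer: the two CSVs must present their feature columns in the same order, or the model will silently train and predict on misaligned features. A one-line guard, assuming both files share the same column names:
# Reorder the test frame's columns to match the train frame's columns
test = test[train.columns]
assert list(train.columns) == list(test.columns)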

Randomize the splitting of data for training and testing for this function

I wrote a function to split numpy ndarrays x_data and y_data into training and test data based on a percentage of the total size.
Here is the function:
def split_data_into_training_testing(x_data, y_data, percentage_split):
    number_of_samples = x_data.shape[0]
    p = int(number_of_samples * percentage_split)
    x_train = x_data[0:p]
    y_train = y_data[0:p]
    x_test = x_data[p:]
    y_test = y_data[p:]
    return x_train, y_train, x_test, y_test
In this function, the top portion of the data goes to the training set and the bottom portion goes to the test set, according to percentage_split. How can this split be made more randomized before the data is fed to the machine learning model?
Assuming there's a reason you're implementing this yourself instead of using sklearn.model_selection.train_test_split, you can shuffle an array of indices (this leaves the original data untouched) and index with that:
def split_data_into_training_testing(x_data, y_data, split, shuffle=True):
    # Shuffle index positions rather than the data itself
    idx = np.arange(len(x_data))
    if shuffle:
        np.random.shuffle(idx)
    p = int(len(x_data) * split)
    x_train = x_data[idx[:p]]
    x_test = x_data[idx[p:]]
    y_train = y_data[idx[:p]]
    y_test = y_data[idx[p:]]
    return x_train, x_test, y_train, y_test
You can create a mask with p randomly selected true elements and index the arrays that way. I would create the mask by shuffling an array of the available indices:
ind = np.arange(number_of_samples)
np.random.shuffle(ind)
ind_train = np.sort(ind[:p])
ind_test = np.sort(ind[p:])
x_train = x_data[ind_train]
y_train = y_data[ind_train]
x_test = x_data[ind_test]
y_test = y_data[ind_test]
Sorting the indices is only necessary if your original data is monotonically increasing or decreasing in x and you'd like to keep it that way. Otherwise, ind_train = ind[:p] is just fine.
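A small variation on the same idea, added here as a suggestion rather than part of either answer: NumPy's newer Generator API builds the shuffled index array in one call, and seeding it makes the split reproducible:
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded, so the split is reproducible
idx = rng.permutation(len(x_data))    # a shuffled copy of 0..n-1
p = int(len(x_data) * 0.8)            # e.g. an 80/20 split
x_train, x_test = x_data[idx[:p]], x_data[idx[p:]]
y_train, y_test = y_data[idx[:p]], y_data[idx[p:]]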

KNeighborsClassifier different scores without shuffle

I have a dataset with 155 features and 40143 samples. It is sorted by date (oldest to newest); after sorting I deleted the date column from the dataset.
The label is in the first column.
Cross-validation gives c. 65% (mean accuracy of the scores +/- 0.01) with the code below:
def cross(dataset):
    dropz = ["result"]
    X = dataset.drop(dropz, axis=1)
    X = preprocessing.normalize(X)
    y = dataset["result"]
    clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1)
    scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
I also get similar accuracy with the code below:
def train(dataset):
    dropz = ["result"]
    X = dataset.drop(dropz, axis=1)
    X = preprocessing.normalize(X)
    y = dataset["result"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1000, random_state=42)
    clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1).fit(X_train, y_train)
    clf.score(X_test, y_test)
But if I don't use shuffle in the code below, the result is c. 49%.
If I use shuffle, the result is c. 65%.
I should mention that I tried every 1000-row consecutive split of the whole set, from end to beginning, and the result is the same.
dataset = pd.read_csv("./dataset.csv", header=0,sep=";")
dataset = shuffle(dataset) #!!!???
X_train = dataset.iloc[:-1000,1:]
X_train = preprocessing.normalize(X_train)
y_train = dataset.iloc[:-1000,0]
X_test = dataset.iloc[-1000:,1:]
X_test = preprocessing.normalize(X_test)
y_test = dataset.iloc[-1000:,0]
clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1).fit(X_train, y_train)
clf.score(X_test, y_test)
Assuming your question is "Why does it happen":
In both your first and second code snippets the temporal ordering is broken before evaluation: train_test_split shuffles by default, and the stratified folds used by cross_val_score mix samples from across the whole date range. They are therefore effectively equivalent (both in score and in behaviour) to your last snippet with shuffling turned on.
Since your original dataset is ordered by date, there is likely some drift in the data over time. Because your classifier never sees data from the last 1000 time points, it is unaware of the change in the underlying distribution and therefore fails to classify it.
Addendum, to address the further data in the comments:
This suggests that there might be some indicative process that is captured only in smaller time frames. Two interesting ways to explore it:
1. Reduce the size of the test set until you find a window size at which the difference between shuffle/no shuffle is negligible.
2. This process essentially manifests as a dependence between your features, so you could check whether, within a small time frame, such a dependence between your features exists.
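If you want an honest estimate on temporally ordered data like this, one standard tool (a suggestion on my part, not from the original answer) is sklearn's TimeSeriesSplit, which always trains on the past and tests on the future:
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing

# Assuming `dataset` as loaded in the question, with the label in "result"
X = preprocessing.normalize(dataset.drop(["result"], axis=1))
y = dataset["result"]
clf = KNeighborsClassifier(n_neighbors=1, weights='distance', n_jobs=-1)
# Each fold trains on an initial time segment and tests on the block after it
scores = cross_val_score(clf, X, y, cv=TimeSeriesSplit(n_splits=10), scoring='accuracy')
print(scores)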
