I have the Mall Customers dataset from Kaggle: 200 customers with 5 features (CustomerID, gender, age, annual income, and spending score). Before running regression I first ran k-means with K = 6 on spending score (dependent variable), annual income (independent variable), and age (independent variable). I then ran multiple linear regression for each cluster separately and printed my predicted and actual values, but there are far more predicted values than actual values. I get 34 predicted y values (exactly the number of points in my cluster) but only 9 actual y values. Why don't all of my actual values print?
code:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

df = pd.read_csv(r'D:\Mall_Customers.csv', usecols=['Age', 'Spending Score (1-100)', 'Annual Income (k$)'])
x = StandardScaler().fit_transform(df)
kmeans = KMeans(n_clusters=6, max_iter=100, random_state=0)
y_kmeans = kmeans.fit_predict(x)

# row indices belonging to each cluster
mydict = {i: np.where(kmeans.labels_ == i)[0] for i in range(kmeans.n_clusters)}
dictlist = []
for key, value in mydict.items():
    temp = [key, value]
    dictlist.append(temp)

# regression on cluster 0 only
df0 = df[df.index.isin(mydict[0].tolist())]
Y = df0['Spending Score (1-100)']
X = df0[['Annual Income (k$)', 'Age']]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=None, random_state=0)

regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X)
print('predicted:', y_pred, sep='\n')
print('actual', y_test, sep='\n')
You've called train_test_split with test_size=None, which means the test set defaults to 25% of the data. With 34 items in your cluster that's 8.5, so the 9 actual values you're seeing make sense. The mismatch comes from how you computed the predictions: y_pred = regressor.predict(X) gives you predictions for all 34 rows of the cluster, not just the test set. Predict on X_test instead if you want to compare against y_test.
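For example, a minimal sketch of that comparison, assuming regressor, X_test and y_test come from the code above:
# predict only on the held-out rows so the array lines up with y_test
y_pred_test = regressor.predict(X_test)
comparison = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred_test})
print(comparison)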
Related
I have a large dataframe and I want to predict the last column based on the other columns with XGBoost. My code is written below, but the prediction is wrong: I get a constant value.
The data is not a time series, and my trees also can't be plotted.
Overall, is it possible, given 20 columns, to predict the 20th one using the other 19 columns with this method?
#XGBoost
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

#Separate the target variable (f is the input dataframe)
X, y = f.iloc[:, :-1], f.iloc[:, -1]
data_dmatrix = xgb.DMatrix(data=X, label=y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=123)

#Regressor ('reg:linear' is deprecated in newer XGBoost versions in favor of 'reg:squarederror')
xg_reg = xgb.XGBRegressor(objective='reg:linear', colsample_bytree=0.3, learning_rate=0.1,
                          max_depth=5, alpha=10, n_estimators=10)

#Fit the regressor to the training set and make predictions on the test set
xg_reg.fit(X_train, y_train)
preds = xg_reg.predict(X_test)

#RMSE
rmse = np.sqrt(mean_squared_error(y_test, preds))
print("RMSE: %f" % (rmse))

#k-fold cross-validation
params = {"objective": "reg:squarederror", 'colsample_bytree': 0.3, 'learning_rate': 0.1,
          'max_depth': 10, 'alpha': 10}
cv_results = xgb.cv(dtrain=data_dmatrix, params=params, nfold=3,
                    num_boost_round=50, early_stopping_rounds=10, metrics="rmse", as_pandas=True, seed=123)
print((cv_results["test-rmse-mean"]).tail(1))

#Visualizing
xg_reg = xgb.train(params=params, dtrain=data_dmatrix, num_boost_round=10)

#Plot the trees
import matplotlib.pyplot as plt
xgb.plot_tree(xg_reg, num_trees=5)
plt.rcParams['figure.figsize'] = [50, 10]
plt.show()

#Examine the importance of each feature column in the original dataset within the model
xgb.plot_importance(xg_reg)
plt.rcParams['figure.figsize'] = [5, 5]
plt.show()
First of all, yes, predicting the last column from the other 19 columns with this approach is fine.
If the model only produces constant values, I would change the parameters of the model.
Or train a linear model as a baseline first; a minimal sketch of such a baseline is below.
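A minimal baseline sketch, assuming X_train, X_test, y_train, y_test from the question's split:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# fit a plain linear model on the same split to get a reference RMSE
baseline = LinearRegression()
baseline.fit(X_train, y_train)
baseline_preds = baseline.predict(X_test)
print("baseline RMSE: %f" % np.sqrt(mean_squared_error(y_test, baseline_preds)))
If XGBoost cannot beat this baseline, it is worth looking at the data and target before tuning booster parameters.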
I have a random forest model I built to predict if NFL teams will score more combined points than the line Vegas has set. The features I use are Total - the total number of combined points Vegas thinks both teams will score, over_percentage - the percentage of public bets on the over, and under_percentage - the percentage of public bets on the under. The over means people are betting that both team's combined scores will be greater than the number Vegas sets and under means the combined score will go under the Vegas number. When I run my model I'm getting a confusion_matrix like this
and an accuracy_score of 76%. However, the predictions do not perform well. Right now I have it giving me the probability the classification will be 0. I'm wondering if there are parameters I can tune or solutions to prevent my model from overfitting. I have over 30K games in the training data set so I don't think lack of data is causing the issue.
Here is the code:
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
training_data = pd.read_csv(
'/Users/aus10/NFL/Data/Betting_Data/Training_Data_Betting.csv')
test_data = pd.read_csv(
'/Users/aus10/NFL/Data/Betting_Data/Test_Data_Betting.csv')
df_model = training_data.dropna()
X = df_model.loc[:, ["Total", "Over_Percentage",
"Under_Percentage"]] # independent columns
y = df_model["Over_Under"] # target column
results = []
model = RandomForestClassifier(
random_state=1, n_estimators=500, min_samples_split=2, max_depth=30, min_samples_leaf=1)
n_estimators = [100, 300, 500, 800, 1200]
max_depth = [5, 8, 15, 25, 30]
min_samples_split = [2, 5, 10, 15, 100]
min_samples_leaf = [1, 2, 5, 10]
hyperF = dict(n_estimators=n_estimators, max_depth=max_depth,
min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf)
gridF = GridSearchCV(model, hyperF, cv=3, verbose=1, n_jobs=-1)
model.fit(X, y)
skf = StratifiedKFold(n_splits=2)
skf.get_n_splits(X, y)
StratifiedKFold(n_splits=2, random_state=None, shuffle=False)
for train_index, test_index in skf.split(X, y):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X, X
    y_train, y_test = y, y
bestF = gridF.fit(X_train, y_train)
print(bestF.best_params_)
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(round(accuracy_score(y_test, y_pred), 2))
index = 0
count = 0
while count < len(test_data):
    team = test_data.loc[index].at['Team']
    total = test_data.loc[index].at['Total']
    over_perc = test_data.loc[index].at['Over_Percentage']
    under_perc = test_data.loc[index].at['Under_Percentage']
    Xnew = [[total, over_perc, under_perc]]
    # make a prediction
    ynew = model.predict_proba(Xnew)
    # show the inputs and predicted outputs
    results.append(
        {
            'Team': team,
            'Over': ynew[0][0]
        })
    index += 1
    count += 1
sorted_results = sorted(results, key=lambda k: k['Over'], reverse=True)
df = pd.DataFrame(sorted_results, columns=[
'Team', 'Over'])
writer = pd.ExcelWriter('/Users/aus10/NFL/Data/ML_Results/Over_Probability.xlsx', # pylint: disable=abstract-class-instantiated
engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1', index=False)
df.style.set_properties(**{'text-align': 'center'})
pd.set_option('display.max_colwidth', 100)
pd.set_option('display.width', 1000)
writer.save()
And here are links to the Google Docs with the test and training data.
Test Data
Training Data
There are a couple of things to note when using RandomForests. First of all, you might want to use cross_validate in order to measure the performance of your model.
Furthermore, RandomForests can be regularized by tweaking the following parameters (a sketch with illustrative values follows the list):
Decreasing max_depth: This parameter controls the maximum depth of the trees. The bigger it is, the more parameters the model will have; remember that overfitting happens when there is an excess of parameters being fitted.
Increasing min_samples_leaf: Instead of decreasing max_depth, we can increase the minimum number of samples required to be at a leaf node. This also limits the growth of the trees and prevents leaves with very few samples (overfitting!).
Decreasing max_features: As previously mentioned, overfitting happens when there is an abundance of parameters being fitted. The number of parameters holds a direct relationship with the number of features in the model, so limiting the number of features used by each tree helps control overfitting.
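For instance, a more heavily regularized forest might look like this; the values are illustrative, not tuned, and X_train/y_train are assumed to come from a proper train/test split of the question's data:
from sklearn.ensemble import RandomForestClassifier

# shallower trees, larger leaves, fewer features per split -> less overfitting
regularized_model = RandomForestClassifier(
    n_estimators=500,
    max_depth=8,           # was 30 in the question
    min_samples_leaf=10,   # was 1
    max_features=2,        # only 3 features in total, so use a subset
    random_state=1)
regularized_model.fit(X_train, y_train)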
Finally, you might want to use GridSearchCV to automate the search and try different combinations:
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in newer versions

rf_clf = RandomForestClassifier()
# note: max_features should not exceed the number of columns in X
parameters = {'max_features': np.arange(5, 10), 'n_estimators': [500, 1000, 1500], 'max_depth': [2, 4, 8, 16]}
clf = GridSearchCV(rf_clf, parameters, cv=5)
clf.fit(X, y)
This will return the performance of all the different models (one per combination of hyperparameters), which makes it easier to find the best one.
You are splitting the data using train_test_split with test_size=0.25. The downside is that it splits the data randomly and completely ignores the class distribution when doing so. Your model can suffer from sampling bias, where the correct distribution of the data is not maintained across the train and test datasets.
In your train set the data could be skewed more towards a particular class compared to the test set, and vice versa.
To overcome this you can use stratified k-fold cross-validation (StratifiedKFold), which maintains the class distribution across folds.
Create stratified folds for the dataframe:
import pandas as pd
from sklearn import model_selection

def kfold_(file):
    df = pd.read_csv(file)
    df["kfold"] = -1
    df = df.sample(frac=1).reset_index(drop=True)  # shuffle the rows
    y = df.target.values
    kf = model_selection.StratifiedKFold(n_splits=5)
    for f, (t_, v_) in enumerate(kf.split(X=df, y=y)):
        df.loc[v_, "kfold"] = f
    return df
This function should then be run once for each fold created by the previous function:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

# example grid; tune the ranges to your problem
param_grid = {"n_estimators": [100, 300, 500], "max_depth": [5, 8, 15], "min_samples_leaf": [1, 5, 10]}

def run(fold):
    df = pd.read_csv(file)  # the CSV with the "kfold" column added above
    df_train = df[df.kfold != fold].reset_index(drop=True)
    df_valid = df[df.kfold == fold].reset_index(drop=True)
    x_train = df_train.drop(["target", "kfold"], axis=1).values  # drop the label and the fold indicator
    y_train = df_train.target.values
    x_valid = df_valid.drop(["target", "kfold"], axis=1).values
    y_valid = df_valid.target.values
    # a classifier is used here because confusion_matrix/classification_report expect class labels
    rf = RandomForestClassifier()
    grid_search = GridSearchCV(estimator=rf, param_grid=param_grid,
                               cv=5, n_jobs=-1, verbose=2)
    grid_search.fit(x_train, y_train)
    y_pred = grid_search.predict(x_valid)
    print(f"Fold: {fold}")
    print(confusion_matrix(y_valid, y_pred))
    print(classification_report(y_valid, y_pred))
    print(round(accuracy_score(y_valid, y_pred), 2))
Moreover, you should perform hyperparameter tuning to find the best parameters; the other answer shows you how to do so.
Build a Decision Tree Regressor model from the X_train set and Y_train labels, with default parameters. Name the model dt_reg.
Evaluate the model accuracy on the training data set and print its score.
Evaluate the model accuracy on the testing data set and print its score.
Predict the housing price for the first two samples of the X_test set and print them. (Hint: use the predict() function.)
Fit multiple Decision Tree Regressors on the X_train data and Y_train labels with the max_depth parameter value changing from 2 to 5.
Evaluate each model's accuracy on the testing data set.
Hint: make use of a for loop.
Print the max_depth value of the model with the highest accuracy.
import sklearn.datasets as datasets
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import numpy as np
np.random.seed(100)
boston = datasets.load_boston()  # note: load_boston was removed in scikit-learn 1.2
X_train, X_test, Y_train, Y_test = train_test_split(boston.data, boston.target, random_state=30)
print(X_train.shape)
print(X_test.shape)
dt_reg = DecisionTreeRegressor()
dt_reg = dt_reg.fit(X_train, Y_train)
print(dt_reg.score(X_train,Y_train))
print(dt_reg.score(X_test,Y_test))
y_pred=dt_reg.predict(X_test[:2])
print(y_pred)
I want to print the max_depth value of the model with the highest accuracy, but Fresco Play does not accept my submission. Let me know what the error is.
max_reg = None
max_score = 0
t = ()
for m in range(2, 6):
    rf_reg = DecisionTreeRegressor(max_depth=m)
    rf_reg = rf_reg.fit(X_train, Y_train)
    rf_reg_score = rf_reg.score(X_test, Y_test)
    print(m, rf_reg_score, max_score)
    if rf_reg_score > max_score:
        max_score = rf_reg_score
        max_reg = rf_reg
        t = (m, max_score)
print(t)
If you wish to continue using the loop as you've done, you can create another variable called best_max_depth and update it with rf_reg.max_depth whenever your if-statement condition is met (i.e., the current model is the best so far).
I suggest, however, that you look into GridSearchCV to loop through different parameter values and extract the parameters of the best model; a sketch follows the loop code below.
max_reg = None
max_score = 0
best_max_depth = None
t = ()
for m in range(2, 6):
    rf_reg = DecisionTreeRegressor(max_depth=m)
    rf_reg = rf_reg.fit(X_train, Y_train)
    rf_reg_score = rf_reg.score(X_test, Y_test)
    print(m, rf_reg_score, max_score)
    if rf_reg_score > max_score:
        max_score = rf_reg_score
        max_reg = rf_reg
        best_max_depth = rf_reg.max_depth
        t = (m, max_score)
print(t)
print(best_max_depth)  # the max_depth of the best model
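As for the GridSearchCV suggestion, a minimal sketch, assuming X_train and Y_train from the question:
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# search max_depth 2..5 by cross-validation on the training data
search = GridSearchCV(DecisionTreeRegressor(), {'max_depth': [2, 3, 4, 5]}, cv=5)
search.fit(X_train, Y_train)
print(search.best_params_['max_depth'])
Note that this selects the depth by cross-validated score on the training data rather than by test-set score, so it may pick a different value than the loop above.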
Try this code -
myList = list(range(2, 6))
scores = []
for i in myList:
    dt_reg = DecisionTreeRegressor(max_depth=i)
    dt_reg.fit(X_train, Y_train)
    scores.append(dt_reg.score(X_test, Y_test))
print(myList[scores.index(max(scores))])
I have a mall dataset and I ran k-means with k = 5. After running linear regression I wanted to print my predicted Y values to compare them with the actual Y values. Printing the actual values was easy, but I keep getting an error when I try to print the predicted Y. To print the predicted values I used df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred}), but I get ValueError: array length 35 does not match index length 18.
code:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# 'Age' is included here because it is used as a feature below
df = pd.read_csv(r'D:\Mall_Customers.csv', usecols=['Age', 'Spending Score (1-100)', 'Annual Income (k$)'])
x = StandardScaler().fit_transform(df)
kmeans = KMeans(n_clusters=5, max_iter=100, random_state=0)
y_kmeans = kmeans.fit_predict(x)

# row indices belonging to each cluster
mydict = {i: np.where(kmeans.labels_ == i)[0] for i in range(kmeans.n_clusters)}
df0 = df[df.index.isin(mydict[0].tolist())]
Y = df0['Spending Score (1-100)']
X = df0[['Annual Income (k$)', 'Age']]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.5, random_state=0)

regressor = LinearRegression()
regressor.fit(X_train, y_train)
r_sq = regressor.score(X, Y)
print('coefficient of determination:', r_sq)
print('intercept:', regressor.intercept_)
print('slope:', regressor.coef_)
y_pred = regressor.predict(X)
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})  # raises the ValueError
print(df)
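For what it's worth, the length mismatch arises because y_pred holds predictions for all 35 rows of the cluster while y_test holds only the 18 held-out rows; a minimal sketch of an aligned comparison, assuming the variables above:
# predict only on the test rows so both columns have the same length and index
y_pred_test = regressor.predict(X_test)
comparison = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred_test})
print(comparison)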
I am using the following dataset, original version, obtained from: https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/
I want to apply logistic regression to classify the samples in that dataset; my code is the following:
import numpy as np
from sklearn import linear_model
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

data = np.genfromtxt("breast-cancer-wisconsin.data", delimiter=",")
X = data[:,1:-1]
X[X == '?'] = '-999999'
X = X.astype(int)
y = data[:, -1].astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2)
lg=linear_model.LogisticRegression(n_jobs = 10)
lg.fit(X_train,y_train)
predictions = lg.predict(X_test)
cm=confusion_matrix(y_test,predictions)
print(cm)
score = lg.score(X_test, y_test)
print("Accuracy: %0.2f (+/- %0.2f)" % (score.mean(), score.std() * 2))
I have deleted the first column because it is only the ID, and replaced the ? characters with a big number so that they would be treated as outliers. The problem arises when I compare my results to the ones obtained on this page:
https://anujdutt9.github.io/ML_LogRSklearn.html
Because I am obtaining an accuracy of:
Accuracy: 0.34
and on the link mentioned before the accuracy was approximately 95%.
The results of my confusion matrix are also poor, for example, I obtain:
[[ 1 92]
[ 0 47]]
What is wrong with my model?
Thanks
Try this:
X[X == '?'] = np.nan  # convert ? to NaN
Then impute the mean value (SimpleImputer replaces the older Imputer class in current scikit-learn):
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='mean')
transformed_X = imputer.fit_transform(X)
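A minimal sketch of how the imputation can slot into the question's pipeline, assuming X and y as loaded in the question:
from sklearn.impute import SimpleImputer
from sklearn import linear_model
from sklearn.model_selection import train_test_split

# impute missing values first, then split and fit as in the question
X_imputed = SimpleImputer(strategy='mean').fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_imputed, y, test_size=0.2)
lg = linear_model.LogisticRegression()
lg.fit(X_train, y_train)
print("Accuracy: %0.2f" % lg.score(X_test, y_test))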