sklearn's GradientBoostingRegressor gives the same prediction for different inputs - python

I encountered a weird behavior while trying to train sklearn's GradientBoostingRegressor and make predictions. I will give an example that demonstrates the issue on a reduced dataset, but the issue remains on a larger dataset as well. I have the following two small datasets adapted from a big dataset. As you can see, the target variable is identical in both cases, but the input variables are different, though their values are close to each other. The target variables (Y) are in the last column.
I have the following code:
import pandas as pd
from sklearn import ensemble

d1 = {'0':[101869.2,102119.9,102138.0,101958.3,101903.7,12384900],
'1':[101809.1,102031.3,102061.7,101930.0,101935.2,11930700],
'2':[101978.0,102208.9,102209.8,101970.0,101878.6,12116700],
'3':[101869.2,102119.9,102138.0,101958.3,101903.7,12301200],
'4':[102125.5,102283.4,102194.0,101884.8,101806.0,10706100],
'5':[102215.5,102351.9,102214.0,101769.3,101693.6,10116900]}
data1 = pd.DataFrame(d1).T
X1 = data1.iloc[:,:5]  # .ix is deprecated; take the five feature columns
Y = data1[5]
d2 = {'0':[101876.0,102109.8,102127.6,101937.0,101868.4,12384900],
'1':[101812.9,102021.2,102058.8,101912.9,101896.4,11930700],
'2':[101982.5,102198.0,102195.4,101940.2,101842.5,12116700],
'3':[101876.0,102109.8,102127.6,101937.0,101868.4,12301200],
'4':[102111.3,102254.8,102182.8,101832.7,101719.7,10706100],
'5':[102184.6,102320.2,102188.9,101699.9,101548.1,10116900]}
data2 = pd.DataFrame(d2).T
X2 = data2.iloc[:,:5]
Y = data2[5]  # identical to data1[5], so both fits use the same target
re1 = ensemble.GradientBoostingRegressor(n_estimators=40, max_depth=None, random_state=1)
re1.fit(X1, Y)
pred1 = re1.predict(X1)
re2 = ensemble.GradientBoostingRegressor(n_estimators=40, max_depth=None, random_state=3)
re2.fit(X2, Y)
pred2 = re2.predict(X2)
where
X1 is a pandas DataFrame corresponding to columns 1 through 5 of the 1st dataset
X2 is a pandas DataFrame corresponding to columns 1 through 5 of the 2nd dataset
Y represents the target column.
The issue I am facing is that I cannot explain why pred1 is exactly the same as pred2. As long as X1 and X2 are not the same, pred1 and pred2 must also be different, mustn't they? Please help me find my false assumption.

What you observe is perfectly expected.
You fit a high-complexity estimator to the data (max_depth=None), so it is easy for it to learn all of the data by heart, that is, to overfit completely on the training data.
The predictions are then simply whatever labels you supplied for training.
Have a look at Peter's talk here about how to tune GradientBoosting correctly:
https://www.youtube.com/watch?v=-5l3g91NZfQ
Anyhow, you should at least have a test set.
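As a minimal sketch (reusing X1 and Y from the question; the split size and depth are illustrative, not tuned):

from sklearn.model_selection import train_test_split

# hold out a test set and limit tree depth so the model can no longer
# simply memorize the training targets
X_tr, X_te, y_tr, y_te = train_test_split(X1, Y, test_size=0.33, random_state=1)
re = ensemble.GradientBoostingRegressor(n_estimators=40, max_depth=3, random_state=1)
re.fit(X_tr, y_tr)
print(re.score(X_te, y_te))  # R^2 on unseen rows rather than on memorized ones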

My guess is that since you are fitting X1 and X2 to the same Y, it is reasonable that pred1 and pred2 are similar. When your regressor is very powerful (it can fit anything to anything) or your problem is too easy (it can be fitted exactly by your regressor), then pred1 and pred2 will both be equal to Y.
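A quick check, assuming the question's snippet has been run:

import numpy as np

# both models memorize the same targets along the same residual path, so
# their training-set predictions coincide (and lie very close to Y)
print(np.allclose(pred1, pred2))  # True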

Related

GaussianProcessRegressor fitting perfectly but poor performance on test data?

I am trying to understand GPR, and I am testing it to predict some values. The response is the first component of a PCA, so it is relatively good data without outliers. The predictors also come from a PCA (n=2), and both predictor columns have been standardized with StandardScaler().fit_transform, as I saw in previous posts that this was better. Since the predictors are standardized, I am using an RBF kernel, multiplying it by 1**2 and letting the hyperparameters fit. The thing is that the model fits the predictors perfectly but gives almost constant values for the test data. The set consists of 463 points, and no matter whether I randomize 20, 100 or 200 of them for the training data, or add WhiteKernel() or alpha values, I get the same result. I am almost certain that I am doing something wrong, but I can't find what. Any help? Here's the relevant chunk of code and the responses:
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as cKrnl  # cKrnl presumably aliases ConstantKernel

k1 = cKrnl(1**2, (1e-40, 1e40)) * RBF(2, (1e-40, 1e40))
k2 = cKrnl(1**2, (1e-40, 1e40)) * RBF(2, (1e-40, 1e40))
kernel = k1 + k2
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10, normalize_y=True)
gp.fit(x_train, y_train)
print("GPML kernel: %s" % gp.kernel_)
Output :
GPML kernel: 1**2 * RBF(length_scale=0.000388) + 8.01e-18**2 * RBF(length_scale=2.85e-18)
Training data: [plot omitted]
Test data and prediction: [plot omitted]
Thanks to all!!!

Sklearn SVR gives wrong results when the training data shows an obvious pattern

I have the following training data:
x = [
[0.914728682,5.217,5,0.217,3.150362319,33.36,35,-1.64,4.220113852],
[0.885057471,7.793,8,-0.207,3.380911063,46.84,48,-1.16,4.448243115],
[0.871345029,7.152,7,0.152,3.976205037,44.98,47,-2.02,5.421236592],
[0.821428571,8.04,8,0.04,2.909880565,52.02,54.5,-2.48,2.824104235],
[0.931372549,8.01,8,0.01,4.616714697,48.04,48,0.04,9.650462033],
[0.66367713,5.424,5.5,-0.076,1.37804878,32.6,35.5,-2.9,1.189781022],
[0.78,8.66,9,-0.34,2.272965879,48.47,55,-6.53,2.564550265],
[0.227272727,19.55,21,-1.45,1.860133206,128.23,147,-18.77,1.896893491],
[0.47826087,10.09,8,2.09,1.155519927,74.43,64,10.43,1.169547454],
[0.652694611,6.775,4,2.775,1.05529595,43.1,30,13.1,1.062885327],
[0.798561151,3.986,2,1.986,0.656563993,25.38,13,12.38,0.652442159],
[0.666666667,5.419,3,2.419,1.057985162,34.37,16,18.37,0.981719509],
[0.5625,7.719,2,5.719,0.6421797,46.91,12,34.91,0.665673336]
]
and the following labels (scores):
y = [0.237113402,0.168831169,0.104166667,0.086419753,0.063147368,0.016042781,
0.014814815,0,0,-0.0794,-0.14,-0.1832,-0.2385]
It seems clear that the larger the values in column 5 and column 9 are, the higher the scores.
I wrote the following code that makes use of SVR on the training data provided:
from sklearn.preprocessing import RobustScaler
from sklearn.svm import SVR

rb = RobustScaler()
xScaled = rb.fit_transform(x)
model = SVR(C=1.0, epsilon=0.1)
model.fit(xScaled, y)
But no matter which of the following I use for prediction, the score does not look right.
1) score = model.predict(rb.fit_transform(testData))
2) score = model.predict(testData)
If I do something like the following during training:
from sklearn import preprocessing

xScaled = preprocessing.scale(x)
model = SVR(C=1.0, epsilon=0.1)
model.fit(xScaled, y)
then:
score = model.predict(testData)
I get back something close to the original y.
But if I pick a row in x, put it in a 2d array with one row called testData, and do:
score = model.predict(testData)
I get a wrong score. In fact, no matter which row of x I use for creating the 2d array with one row, I get the same score.
What have I done wrong? I would be extremely grateful if someone can help.
1) score = model.predict(rb.fit_transform(testData))
When you do the above, you are re-fitting the RobustScaler to the new data. That means the test data is scaled with its own statistics, which do not match the scales learned from the training data, so the results will not be good.
2) score = model.predict(testData)
Here you are not scaling the test data at all, so it looks different from what the SVR has learned. Hence the results are bad here as well.
What you need to do:
score = model.predict(rb.transform(testData))
Calling transform() scales the supplied data using the scales learned from the training data, and hence the SVR can predict the output properly.
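A minimal sketch of the whole pattern with hypothetical toy data (the values are only illustrative):

import numpy as np
from sklearn.preprocessing import RobustScaler
from sklearn.svm import SVR

X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
y_train = np.array([0.1, 0.2, 0.3, 0.4])
X_test = np.array([[2.5, 25.0]])

rb = RobustScaler()
model = SVR(C=1.0, epsilon=0.1)
model.fit(rb.fit_transform(X_train), y_train)  # learn the scales from training data only
score = model.predict(rb.transform(X_test))    # reuse those scales on the test data
print(score)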

Catboost understanding - Conversion of Categorical values

I have a silly question about CatBoost.
From the CatBoost documentation, I understood that there is some permutation/shuffling of the rows for the categorical-data transformation (https://tech.yandex.com/catboost/doc/dg/concepts/algorithm-main-stages_cat-to-numberic-docpage/#algorithm-main-stages_cat-to-numberic).
I was trying to predict on a single observation to check whether my model works, but I got an error. With 2 observations, however, it works fine.
My question is: for prediction with a CatBoost classifier, do we have to give at least 2 observations because of the permutation? If yes, does the first observation have an impact on the output?
CatBoost indeed has such a restriction. However, it has nothing to do with permutations, for they are applied only at the fitting stage.
The problem is that the same method, catboost.Pool._check_data_empty, is applied before predict as well as before fit, and for fitting, having more than one observation is indeed crucial.
Currently the checking function requires that sum(x.shape) > 2, which is indeed strange. The following code illustrates the problem:
import catboost
import numpy as np

x_train3 = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
x_train1 = np.array([[1], [2], [3]])
y_train = np.array([1, 2, 3])
x_test3_2 = np.array([[4, 5, 6], [5, 6, 7]])
x_test3_1 = np.array([[4, 5, 6]])
x_test1_2 = np.array([[4], [5]])
x_test1_1 = np.array([[4]])
model3 = catboost.CatBoostRegressor().fit(x_train3, y_train)
model1 = catboost.CatBoostRegressor().fit(x_train1, y_train)
print(model3.predict(x_test3_2))  # OK: shape (2, 3), sum of shape is 5
print(model3.predict(x_test3_1))  # OK: shape (1, 3), sum of shape is 4
print(model1.predict(x_test1_2))  # OK: shape (2, 1), sum of shape is 3
print(model1.predict(x_test1_1))  # Throws an error: shape (1, 1), sum of shape is 2
For now, you can work around it by adding one or two fake rows before calling predict; they will have no effect on the output for the original row.
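A sketch of that workaround, reusing model1 and x_test1_1 from the snippet above:

x_padded = np.vstack([x_test1_1, x_test1_1])  # pad with a duplicate fake row
preds = model1.predict(x_padded)
print(preds[0])  # prediction for the original row; the fake row does not affect it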

LSTM: Understand timesteps, samples and features and especially the use in reshape and input_shape

I'm trying to learn LSTMs. I have taken web courses, read this book (https://machinelearningmastery.com/lstms-with-python/) and a lot of blogs... but I'm completely stuck. My interest is in multivariate LSTMs, and I have read all I can find, but I still can't get it. I don't know if I'm stupid or what it is...
If this exact question with a good answer already exists, then I am sorry for double posting, but I have looked and haven't found it...
As I really want to know the basics, I created a dummy dataset in Excel where every "y" depends on the sum of the inputs x1 and x2, but also over time. As I understand it, this is a many-to-one scenario.
Pseudo code:
x1(t) = sin(A(t))
x2(t) = cos(A(t))
tmp(t) = x1(t) + x2(t) (dummy variable)
y(t) = tmp(t) + tmp(t-1) + tmp(t-2) (i.e. sum over the last three steps)
(Basically I want to predict y(t) given x1 and x2 over three time steps)
This is then exported to a csv file with columns x1, x2, y
I have tried to code it up below but obviously it won't work.
I read the data and split it into an 80/20 train and test set as X_train, y_train, X_test, y_test with dimensions (217,2), (217,1), (54,2), (54,1).
What I really haven't gotten a grip on yet is what exactly timesteps and samples are, and how they are used in reshape and input_shape. In many code examples I have looked at, people simply use literal numbers rather than named variables, which makes it very difficult to understand what is happening, especially if you want to change something. As an example, in one of the courses I took, the reshaping was coded like this...
X_train = np.reshape(X_train, (1257, 1, 1))
This doesn't provide much info...
Anyway, when I run the code below it says
ValueError: cannot reshape array of size 434 into shape (217,3,2)
So I know what causes the error, but not what I need to do to fix it. If I set look_back=1 it works, but that's not what I want.
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
# Load data
data_set = pd.read_csv('../Data/LSTM_test.csv', sep=';')
"""
data loaded have three columns:
col 0, col 1: features (x)
col 2: y
"""
# Train/test and variable split
split = 0.8 # 80% train, 20% test
split_idx = int(data_set.shape[0]*split)
# ...train
X_train = data_set.values[0:split_idx,0:2]
y_train = data_set.values[0:split_idx,2]
# ...test
X_test = data_set.values[split_idx:-1,0:2]
y_test = data_set.values[split_idx:-1,2]
# Model setup
look_back = 3 # as that is how y was generated (i.e. sum last three steps)
num_features = 2 # in this case: 2 features x1, x2
output_dim = 1 # want to predict 1 y value
nb_hidden_neurons = 32 # assume something to start with
nb_epoch = 2 # assume something to start with
# Reshaping
nb_samples = len(X_train) # in this case 217 samples in the training set
X_train_reshaped = np.reshape(X_train,(nb_samples, look_back, num_features))
# Create model
model = Sequential()
model.add(LSTM(nb_hidden_neurons, input_shape=(look_back,num_features)))
model.add(Dense(units=output_dim))
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train_reshaped, y_train, batch_size = 32, epochs = nb_epoch)
print(model.summary())
Can anyone please explain what I have done wrong?
As I said, I have read a lot of blogs, questions, tutorials etc but if someone has a particularly good source of info I'd love to check that one up too.
I also had this question before. At a higher level, in (samples, time_steps, features):
samples is the number of data points, i.e. how many rows there are in your data set;
time_steps is the number of times input is fed into the model, i.e. how many steps the LSTM reads per sample;
features is the number of columns of each sample.
For me, a good example for understanding this comes from NLP: suppose you have a sentence to process. Then the number of samples is 1, meaning one sentence to read; the number of time steps is the number of words in that sentence, since you feed the sentence in word by word until the model has read all the words and has the whole context of the sentence; and features is the dimension of each word, because in word embeddings like word2vec or GloVe each word is represented by a vector with multiple dimensions.
The input_shape parameter in Keras is only (time_steps, num_features);
for more you can refer to this.
And your problem is that when you reshape data, the product of the new dimensions must equal the number of elements in the original data set, and 434 does not equal 217*3*2.
When you implement an LSTM, you should be very clear about what the features are and what element you want the model to read at each time step. There is a very similar case here that can surely help you. For example, if you are trying to predict the value at time t using t-1 and t-2, you can either feed in the two values as one element to predict t, with (time_step, num_features) = (1, 2), or feed each value in over 2 time steps, with (time_step, num_features) = (2, 1), as sketched below.
That's basically how I understand it; I hope this makes it clear for you.
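A minimal sketch of those two layouts on a hypothetical toy series (the names are illustrative):

import numpy as np

series = np.arange(10, dtype=float)                    # hypothetical univariate series
pairs = np.stack([series[:-2], series[1:-1]], axis=1)  # (t-2, t-1) pairs
y = series[2:]                                         # target: the value at time t

X_a = pairs.reshape(-1, 1, 2)  # one time step with two features
X_b = pairs.reshape(-1, 2, 1)  # two time steps with one feature each
print(X_a.shape, X_b.shape, y.shape)  # (8, 1, 2) (8, 2, 1) (8,)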
You seem to have a decent grasp of what an LSTM expects and are just struggling to get your data into the correct format. You start with an X_train of shape (217, 2), and you want to reshape it into the shape (nb_samples, look_back, num_features). You have already defined look_back and num_features, and all the work that's left is generating nb_samples chunks of length look_back from your original X_train. Numpy's reshape isn't really the tool for this; instead you'll have to write some code.
import numpy as np

nb_samples = X_train.shape[0] - look_back
x_train_reshaped = np.zeros((nb_samples, look_back, num_features))
y_train_reshaped = np.zeros((nb_samples,))
for i in range(nb_samples):
    y_position = i + look_back
    # each sample is a sliding window of look_back consecutive rows,
    # labelled by the row that immediately follows the window
    x_train_reshaped[i] = X_train[i:y_position]
    y_train_reshaped[i] = y_train[y_position]
model.fit(x_train_reshaped, y_train_reshaped, ...)
The shapes are now:
x_train_reshaped.shape
# (214, 3, 2)
y_train_reshaped.shape
# (214,)
You'll have to do the same thing with X_test and y_test.
This https://github.com/fchollet/keras/issues/2045 helped me.
But in short, the answer to your question:
you want to reshape an array with 434 elements into the shape (217,3,2), which is impossible; let me show you why.
The new shape would have 217*3*2 = 1302 elements, but there are only 434 elements in the original array. Therefore, the solution is to change the dimensions of the reshape.
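A quick sketch of that element-count rule:

import numpy as np

a = np.arange(434)
print(a.size, 217 * 3 * 2)  # 434 1302 -- the products must match
try:
    a.reshape(217, 3, 2)
except ValueError as e:
    print(e)  # cannot reshape array of size 434 into shape (217,3,2)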

Error in model prediction using hmmlearn

Hi, I have a dataframe test, and I am trying to make predictions using a Gaussian HMM with hmmlearn.
When I do this:
y = model.predict(test)
y
I get the HMM working fine, producing an array of states.
However, if I do this:
for i in range(0, len(test)):
    y = model.predict(test[:i])
all I get is y being set to 1.
Can anyone help?
UPDATE
Here is the code that does work, iterating through. The training set was rows 0-249:
for i in range(251, len(X)):
    test = X[:i]
    y = model.predict(test)
    print(y[len(y)-1])
An HMM models sequences of observations. If you feed a single observation into predict (which does Viterbi decoding by default), you essentially reduce the prediction to
(model.startprob_ * model.predict_proba(test[i:i + 1])).argmax()
which can be dominated by startprob_, e.g. if startprob_ = [10**-8, 1 - 10**-8]. This could explain the all-ones behaviour you're seeing.
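A minimal sketch of that effect with hand-set parameters (the numbers are only illustrative; hmmlearn allows assigning the model attributes directly instead of fitting):

import numpy as np
from hmmlearn import hmm

model = hmm.GaussianHMM(n_components=2, covariance_type="diag")
model.startprob_ = np.array([1e-8, 1 - 1e-8])  # heavily skewed start probabilities
model.transmat_ = np.array([[0.5, 0.5], [0.5, 0.5]])
model.means_ = np.array([[0.0], [5.0]])
model.covars_ = np.array([[1.0], [1.0]])

# a single observation sitting exactly on state 0's mean is still decoded
# as state 1, because startprob_ dominates the one-step Viterbi score
print(model.predict(np.array([[0.0]])))  # [1]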
