My code is as follows:
from sklearn.preprocessing import MinMaxScaler
from pandas import Series
# transform scale
X = dataset  # (100, 18)
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(X)
scaled_X = scaler.transform(X)
scaled_series = Series(scaled_X[:, 17])
print(scaled_series.head())
# invert transform
inverted_X = scaler.inverse_transform(scaled_X)
inverted_series = Series(inverted_X[:, 17])
print(inverted_series.head())
The problem is that scaled_series and inverted_series give the same result. How should I correct the code?
I suspect the problem is specific to your dataset. For instance, when I use an example dataset, scaled_series and inverted_series give two different outputs:
Scaled Series output:
0 0.729412
1 0.741176
2 0.741176
3 0.670588
4 0.870588
dtype: float32
Inverted Series output:
0 0.698347
1 0.706612
2 0.706612
3 0.657025
4 0.797521
dtype: float32
Both scaled_series and inverted_series give different outputs, but the values are close to each other. If you scale your data before using MinMaxScaler:
from sklearn.preprocessing import scale
X = scale(X)
Result:
Scaled Series output:
0 0.729412
1 0.741176
2 0.741176
3 0.670588
4 0.870588
dtype: float32
Inverted Series output:
0 -0.188240
1 -0.123413
2 -0.123413
3 -0.512372
4 0.589678
dtype: float32
Now the outputs are not close to each other; they are completely different.
Code:
from sklearn.datasets import fetch_olivetti_faces
from sklearn.preprocessing import MinMaxScaler, scale
from pandas import Series
X, _ = fetch_olivetti_faces(return_X_y=True)
X = scale(X)
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(X)
scaled_X = scaler.transform(X)
scaled_series = Series(scaled_X[:, 17])
print("\nScaled Series output:")
print(scaled_series.head())
inverted_X = scaler.inverse_transform(scaled_X)
inverted_series = Series(inverted_X[:, 17])
print("\nInverted Series output:")
print(inverted_series.head())
You have to consider the range of your dataset X. The MinMax scaler transforms each feature as
X_scaled = (X - X_min) / (X_max - X_min)
If a feature already spans [0, 1], you are subtracting 0 and dividing by 1, so the transform returns the same values. Min-max scaling only changes values for features that are not already on the 0-1 scale.
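To illustrate, here is a small sketch with made-up numbers (not your dataset): a column that already spans [0, 1] comes back from MinMaxScaler unchanged, while a column on another scale does not.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy data: column 0 already spans [0, 1], column 1 does not
toy = np.array([[0.0, 10.0],
                [0.5, 20.0],
                [1.0, 40.0]])
scaler = MinMaxScaler(feature_range=(0, 1)).fit(toy)
scaled = scaler.transform(toy)
print(scaled[:, 0])  # [0.  0.5 1. ]  -- identical to the input column
print(scaled[:, 1])  # [0.         0.33333333 1.        ]  -- rescaled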
I get the error "Expected 2D array, got 1D array instead: Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample."
I think my problem is in inverse_transform.
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Pos.csv')
X = dataset.iloc[:, 1:-1].values
y = dataset.iloc[:, -1].values
print(X)
print(y)
y = y.reshape(len(y),1)
print(y)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
sc_y = StandardScaler()
X = sc_X.fit_transform(X)
y = sc_y.fit_transform(y)
print(X)
print(y)
# Training the SVR model on the whole dataset
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
regressor.fit(X, y)
# Predicting a new result
sc_y.inverse_transform(regressor.predict(sc_X.transform([[6.5]])))
# Visualising the SVR results
plt.scatter(sc_X.inverse_transform(X), sc_y.inverse_transform(y), color = 'red')
plt.plot(sc_X.inverse_transform(X), sc_y.inverse_transform(regressor.predict(X)), color = 'blue')
plt.title('Truth or Bluff (SVR)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
Okay, so first of all create a sample CSV as per the link.
Then push this into a dataframe.
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
csv_file = 'G:\\MyDrive\\path\\to\\test_output.csv'
dataset = pd.read_csv(csv_file)
# the dataset
print(dataset)
At this point, inspect the data with print(dataset).
The data should look like this:
position level salary
0 Business Analyst 1 45000
1 Junior Consultant 2 50000
2 Senior Consultant 3 60000
3 Manager 4 80000
4 Country Manager 5 110000
5 Region Manager 6 150000
6 Partner 7 200000
7 Senior Partner 8 300000
8 C-level 9 500000
9 CEO 10 1000000
The code in the question then tries to create two lists: X and y.
It is easier and more efficient to use the dataframe directly, so X and y will be this:
my_x = [i+1 for i in range(len(dataset))]
my_y = dataset['salary'].values
...this produces two lists:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[ 45000 50000 60000 80000 110000 150000 200000 300000 500000
1000000]
This is the answer so far.
So the new question is: what do you want to achieve with the reshape line, given there are two lists of the same length?
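If it helps, a note on the error itself: in recent scikit-learn versions, StandardScaler.inverse_transform expects a 2-D array, while regressor.predict returns a 1-D array, which produces exactly the "Expected 2D array, got 1D array instead" message. A minimal sketch of the usual fix, reusing the names from the question's code:
pred = regressor.predict(sc_X.transform([[6.5]]))  # 1-D array of shape (1,)
sc_y.inverse_transform(pred.reshape(-1, 1))        # reshape to (1, 1) before inverting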
I have two numpy arrays
import numpy as np
temp_1 = np.array([['19.78018766'],
                   ['19.72487359'],
                   ['19.70280336'],
                   ['19.69589641'],
                   ['19.69746018']])
temp_2 = np.array([['43.8'],
                   ['43.9'],
                   ['44'],
                   ['44.1'],
                   ['44.2']])
and I am preparing X = np.stack((temp_1,temp_2), axis=-1)
which looks something like this
X = [[['19.78018766' '43.8']]
     [['19.72487359' '43.9']]
     [['19.70280336' '44']]
     [['19.69589641' '44.1']]
     [['19.69746018' '44.2']]]
I have another variable Y which is also a numpy array
Y = np.array([['28.78'],
              ['32.72'],
              ['15.70'],
              ['32.69'],
              ['55.69']])
I am trying to run the RandomForestRegressor model
where
from sklearn.ensemble import RandomForestRegressor
clf = RandomForestRegressor()
clf.fit(X,Y)
However, it is giving me this error
ValueError: Found array with dim 3. Estimator expected <= 2.
This happens because np.stack added an extra axis: X has shape (5, 1, 2), i.e. it is 3-dimensional, while the estimator expects a 2-D array of shape (n_samples, n_features).
Just reshape your X data to the number of samples you have
# In this example 5 samples
X = X.reshape(5, 2)
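Putting it together, here is a minimal sketch of the corrected flow. Note that the arrays in the question hold strings, so I also convert them to float, which is an assumption about what the data should be:
import numpy as np
from sklearn.ensemble import RandomForestRegressor

temp_1 = np.array([['19.78018766'], ['19.72487359'], ['19.70280336'],
                   ['19.69589641'], ['19.69746018']], dtype=float)
temp_2 = np.array([['43.8'], ['43.9'], ['44'], ['44.1'], ['44.2']], dtype=float)

X = np.stack((temp_1, temp_2), axis=-1)  # shape (5, 1, 2): one axis too many
X = X.reshape(len(X), -1)                # shape (5, 2), as the estimator expects
Y = np.array([28.78, 32.72, 15.70, 32.69, 55.69])  # 1-D target avoids a shape warning

clf = RandomForestRegressor()
clf.fit(X, Y)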
I have a data frame named df:
import pandas as pd
df = pd.DataFrame({'p': [15-x for x in range(14)],
                   'x': [x for x in range(14)]})
df['y'] = 1000 * (10 / df['p'])
x is only for plotting purposes.
I'm trying to predict the y value based on the p values. I am using SVR from sklearn:
from sklearn.svm import SVR
nlm = SVR(kernel='poly').fit(df[['p']], df['y'])
df['nml'] = nlm.predict(df[['p']])
I have already tried all of the kernels but it still doesn't fit well enough.
p x y nml
0 15 0 666.666667 524.669572
1 14 1 714.285714 713.042459
2 13 2 769.230769 876.338765
3 12 3 833.333333 1016.349674
Do you know which sklearn model or other library I should use to get a better fit?
You missed a fundamental step: normalizing the data.
Fix
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.svm import SVR

df = pd.DataFrame({'p': [15-x for x in range(14)],
                   'x': [x for x in range(14)]})
df['y'] = 1000 * (10 / df['p'])
# Normalize the data: (x - mean(x)) / std(x)
s_p = np.std(df['p'])
m_p = np.mean(df['p'])
s_y = np.std(df['y'])
m_y = np.mean(df['y'])
df['p_'] = (df['p'] - m_p)/s_p
df['y_'] = (df['y'] - m_y)/s_y
# Fit and make prediction
nlm = SVR(kernel='rbf').fit(df[['p_']], df['y_'])
df['nml'] = nlm.predict(df[['p_']])
# Plot
plt.plot(df['p_'], df['y_'], 'r')
plt.plot(df['p_'], df['nml'], 'g')
plt.show()
# Rescale back and plot
plt.plot(df['p_']*s_p+m_p, df['y_']*s_y+m_y, 'r')
plt.plot(df['p_']*s_p+m_p, df['nml']*s_y+m_y, 'g')
plt.show()
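For reference, the same standardization can be done with sklearn's StandardScaler, which stores the means and standard deviations and provides inverse_transform for rescaling back. A minimal sketch, assuming the same df as above:
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

sc_p = StandardScaler()
sc_y = StandardScaler()
df['p_'] = sc_p.fit_transform(df[['p']]).ravel()
df['y_'] = sc_y.fit_transform(df[['y']]).ravel()

nlm = SVR(kernel='rbf').fit(df[['p_']], df['y_'])
df['nml'] = nlm.predict(df[['p_']])

# Rescale back to the original units for plotting
p_orig = sc_p.inverse_transform(df[['p_']]).ravel()
y_hat = sc_y.inverse_transform(df[['nml']]).ravel()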
As @mujjiga pointed out, scaling is an important part of the process.
I would like to draw your attention to two other key points:
model selection, which determines your ability to solve this class of problem;
the newer scikit-learn API, which helps you standardize solution development.
Let's start with your dataset:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
x = np.arange(14)
df = pd.DataFrame({'x': x, 'p': 15-x})
df['y'] = 1e4/df['p']
Then we import some sklearn API objects of interest:
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, RobustScaler, FunctionTransformer
First we create a scaler for the target values:
ysc = StandardScaler()
Notice that we can use different scalers, or build a custom transformation.
# Scaler robust against outliers:
ysc = RobustScaler()
# Logarithmic Transformation:
ysc = FunctionTransformer(func=np.log, inverse_func=np.exp, check_inverse=True)
We scale target using the scaler of our choice:
ysc.fit(df[['y']])
df['yn'] = ysc.transform(df[['y']])
We also build a pipeline with a feature standardizer and the selected model (parameters adjusted to improve the fit), and fit it to your dataset:
reg = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=1e3, epsilon=1e-3))
reg.fit(df[['p']], df['yn'])
At this point we can predict values and transform them back to the original scale:
df['ynhat'] = reg.predict(df[['p']])
df['yhat'] = ysc.inverse_transform(df[['ynhat']])
We check the fit score:
reg.score(df[['p']], df['yn']) # 0.9999646718755011
We can also compute absolute and relative error for each point:
df['yaerr'] = df['yhat'] - df['y']
df['yrerr'] = df['yaerr']/df['y']
Final result is:
x p y yn ynhat yhat yaerr yrerr
0 0 15 666.666667 -0.834823 -0.833633 668.077018 1.410352 0.002116
1 1 14 714.285714 -0.794636 -0.795247 713.562403 -0.723312 -0.001013
2 2 13 769.230769 -0.748267 -0.749627 767.619013 -1.611756 -0.002095
3 3 12 833.333333 -0.694169 -0.693498 834.128425 0.795091 0.000954
4 4 11 909.090909 -0.630235 -0.629048 910.497550 1.406641 0.001547
5 5 10 1000.000000 -0.553514 -0.555029 998.204445 -1.795555 -0.001796
6 6 9 1111.111111 -0.459744 -0.460002 1110.805275 -0.305836 -0.000275
7 7 8 1250.000000 -0.342532 -0.341099 1251.697707 1.697707 0.001358
8 8 7 1428.571429 -0.191830 -0.193295 1426.835676 -1.735753 -0.001215
9 9 6 1666.666667 0.009105 0.010458 1668.269984 1.603317 0.000962
10 10 5 2000.000000 0.290414 0.291060 2000.764717 0.764717 0.000382
11 11 4 2500.000000 0.712379 0.690511 2474.088446 -25.911554 -0.010365
12 12 3 3333.333333 1.415652 1.416874 3334.780642 1.447309 0.000434
13 13 2 5000.000000 2.822199 2.821420 4999.076799 -0.923201 -0.000185
Graphically it leads to:
fig, axe = plt.subplots()
axe.plot(df['p'], df['y'], label='$y(p)$')
axe.plot(df['p'], df['yhat'], 'o', label=r'$\hat{y}(p)$')
axe.set_title(r"SVR Fit for $y(x) = \frac{k}{x-a}$")
axe.set_xlabel('$p = x-a$')
axe.set_ylabel(r'$y, \hat{y}$')
axe.legend()
axe.grid()
Linearization
In the example above we could not use the poly kernel; we had to use the rbf kernel instead. This is because, to fit a rational function with a polynomial, we are better off transforming the data before fitting, using a substitution such as p = x/(x-b) in the first place. The problem then boils down to a linear regression. The example below shows that this works:
Scaler and transformation can be composed into a pipeline as well. We define a pipeline that linearizes and scales the problem:
# Rational Fraction Substitution with consecutive Standardization
ysc = make_pipeline(
    FunctionTransformer(func=lambda x: x/(x+1),
                        inverse_func=lambda x: x/(1-x),
                        check_inverse=True),
    StandardScaler()
)
Then we re-scale the target with this new pipeline and regress the data using classical OLS:
# re-scale the target with the linearizing pipeline defined above
df['yn'] = ysc.fit_transform(df[['y']])
reg = make_pipeline(StandardScaler(), LinearRegression())
reg.fit(df[['p']], df['yn'])
This gives the correct result:
reg.score(df[['p']], df['yn']) # 0.9999998722172933
This second solution takes advantage of a known linearization and thus removes the need to parametrize the model.
I have a data file with the following columns:
'customer',
'calibrat' - Calibration sample = 1; Validation sample = 0;
'churn',
'churndep',
'revenue',
'mou',
The data file contains some 40000 rows, of which 20000 have calibrat = 1. I want to split this data:
X1 = data.loc[:, data.columns != 'churn']
y1 = data.loc[:, data.columns == 'churn']
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
os = SMOTE(random_state=0)
X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=0.3, random_state=0)
What I want is for X1_train to contain the calibration data (calibrat = 1) and for X1_test to contain all the validation data (calibrat = 0).
sklearn.model_selection has several options other than train_test_split. One of them aims at solving what you're asking for. In this case you could use GroupShuffleSplit, which, as mentioned in the docs, provides randomized train/test indices to split data according to a third-party provided group. This is useful when you're doing cross-validation and you want to split into train/validation sets multiple times, ensuring that the sets are split by the group field. You also have GroupKFold for these cases, which is very useful (a short sketch of it follows at the end of this answer).
So, adapting your example, here's what you could do.
Say you have for instance:
import numpy as np
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

cols = ['customer', 'calibrat', 'churn', 'churndep', 'revenue', 'mou']
X = pd.DataFrame(np.random.rand(10, 6), columns=cols)
X['calibrat'] = np.random.choice([0, 1], size=10)
print(X)
customer calibrat churn churndep revenue mou
0 0.523571 1 0.394896 0.933637 0.232630 0.103486
1 0.456720 1 0.850961 0.183556 0.885724 0.993898
2 0.411568 1 0.003360 0.774391 0.822560 0.840763
3 0.148390 0 0.115748 0.089891 0.842580 0.565432
4 0.505548 0 0.370198 0.566005 0.498009 0.601986
5 0.527433 0 0.550194 0.991227 0.516154 0.283175
6 0.983699 0 0.514049 0.958328 0.005034 0.050860
7 0.923172 0 0.531747 0.026763 0.450077 0.961465
8 0.344771 1 0.332537 0.046829 0.047598 0.324098
9 0.195655 0 0.903370 0.399686 0.170009 0.578925
y = X.pop('churn')
You can now instantiate GroupShuffleSplit and use it as you would train_test_split, with the only difference being that you specify a group column, which will be used to split X and y so that the sets are separated according to the group values:
gs = GroupShuffleSplit(n_splits=2, train_size=.7, random_state=42)
As mentioned, this is handier when you want to split by group multiple times, generally for cross-validation purposes. Here's an example of how you'd get the single train/test split described in the question:
train_ix, test_ix = next(gs.split(X, y, groups=X.calibrat))
X_train = X.loc[train_ix]
y_train = y.loc[train_ix]
X_test = X.loc[test_ix]
y_test = y.loc[test_ix]
Giving:
print(X_train)
customer calibrat churndep revenue mou
3 0.148390 0 0.089891 0.842580 0.565432
4 0.505548 0 0.566005 0.498009 0.601986
5 0.527433 0 0.991227 0.516154 0.283175
6 0.983699 0 0.958328 0.005034 0.050860
7 0.923172 0 0.026763 0.450077 0.961465
9 0.195655 0 0.399686 0.170009 0.578925
print(X_test)
customer calibrat churndep revenue mou
0 0.523571 1 0.933637 0.232630 0.103486
1 0.456720 1 0.183556 0.885724 0.993898
2 0.411568 1 0.774391 0.822560 0.840763
8 0.344771 1 0.046829 0.047598 0.324098
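For completeness, GroupKFold (mentioned above) follows the same pattern when you want group-aware cross-validation rather than a single split. A minimal sketch reusing the same X, y and calibrat column from this example:
from sklearn.model_selection import GroupKFold

gkf = GroupKFold(n_splits=2)
for train_ix, test_ix in gkf.split(X, y, groups=X.calibrat):
    X_train, X_test = X.loc[train_ix], X.loc[test_ix]
    y_train, y_test = y.loc[train_ix], y.loc[test_ix]
    # each fold keeps the two calibrat groups in separate sets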
I'm trying to apply the RandomForest method to a dataset and I get this error:
ValueError: Input contains NaN, infinity or a value too large for dtype ('float32')
Could someone tell me what I should modify in the function to make the code work:
def ranks_RF(x_train, y_train, features_train, RESULT_PATH='Results'):
    """Get ranks from Random Forest"""
    print("\nMétodo_Random_Forest")
    random_forest = RandomForestRegressor(n_estimators=10)
    np.nan_to_num(x_train)
    np.nan_to_num(y_train)
    random_forest.fit(x_train, y_train)
    # Get rank by doing two times a sort.
    imp_array = np.array(random_forest.feature_importances_)
    imp_order = imp_array.argsort()
    ranks = imp_order.argsort()
    # Plot Random Forest
    imp = pd.Series(random_forest.feature_importances_, index=x_train.columns)
    imp = imp.sort_values()
    imp.plot(kind="barh")
    plt.xlabel("Importance")
    plt.ylabel("Features")
    plt.title("Feature importance using Random Forest")
    # plt.show()
    plt.savefig(RESULT_PATH + '/ranks_RF.png', bbox_inches='tight')
    return ranks
You did not overwrite the values when you replaced the NaNs: np.nan_to_num returns a new array rather than modifying its input in place, so the original NaNs are still there, hence the error.
We try an example dataset:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import load_iris
iris = load_iris()
df = pd.DataFrame(data=iris['data'],
                  columns=iris['feature_names'])
df['target'] = iris['target']
# insert some NAs
df = df.mask(np.random.random(df.shape) < .1)
We have a function like yours; I removed the plotting part because that's another question altogether:
def ranks_RF(x_train, y_train):
    var_names = x_train.columns
    random_forest = RandomForestRegressor(n_estimators=10)
    # here you have to reassign back the values
    x_train = np.nan_to_num(x_train)
    y_train = np.nan_to_num(y_train)
    random_forest.fit(x_train, y_train)
    res = pd.DataFrame({
        "features": var_names,
        "importance": random_forest.feature_importances_,
    })
    res = res.sort_values(['importance'], ascending=False)
    res['rank'] = np.arange(len(res)) + 1
    return res
We run it:
ranks_RF(df.iloc[:,0:4],df['target'])
features importance rank
3 petal width (cm) 0.601734 1
2 petal length (cm) 0.191613 2
0 sepal length (cm) 0.132212 3
1 sepal width (cm) 0.074442 4
This worked for me:
np.where(x.values >= np.finfo(np.float32).max)
where x is my pandas DataFrame; it finds the values that are too large for float32. Then convert your DataFrame to float32 if it isn't one already.
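A small sketch of that check and conversion, assuming x is a numeric pandas DataFrame (the names here are only illustrative):
import numpy as np

# locate entries that are too large to be represented as float32
rows, cols = np.where(x.values >= np.finfo(np.float32).max)
print(x.index[rows], x.columns[cols])

# once the offending values are handled, cast the frame down to float32
x = x.astype(np.float32)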