In the logistic regression algorithm we predict 1 when the hypothesis sigmoid(theta' * X) is greater than 0.5. I want to raise the precision, so I want to change the predict function to predict 1 only when the probability is greater than 0.7, or some other value above 0.5.
If I wrote the algorithm myself I could easily do this, but with the sklearn package I have no idea what to do.
Can anyone give me a hand?
To explain the question clearly enough, here is the predict function written in Octave:
p = sigmoid(X*theta);
for i = 1:size(p)(1)
  if p(i) >= 0.6
    p(i) = 1;
  else
    p(i) = 0;
  endif;
endfor
The LogisticRegression estimator from sklearn has a predict_proba method which outputs the probability that an input example belongs to each class. You can use this method together with a threshold of your own choosing to get the functionality you want.
An example:
from sklearn import linear_model
import numpy as np
np.random.seed(1337) # Seed random for reproducibility
X = np.random.random((10, 5)) # Create sample data
Y = np.random.randint(2, size=10)
lr = linear_model.LogisticRegression().fit(X, Y)
prob_example_is_one = lr.predict_proba(X)[:, 1]  # probability of class 1 for each sample
my_threshold = 0.7  # our custom probability cutoff
predictions = prob_example_is_one > my_threshold  # boolean array of thresholded predictions
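Since your goal is to raise precision, you can check the effect of the new cutoff directly with sklearn.metrics.precision_score. A minimal sketch continuing the example above (the toy random data makes the numbers meaningless; it only shows the mechanics):
from sklearn.metrics import precision_score
default_preds = (prob_example_is_one > 0.5).astype(int)  # sklearn's default cutoff
custom_preds = predictions.astype(int)  # our 0.7 cutoff
# zero_division=0 avoids a warning if the high cutoff yields no positive predictions
print(precision_score(Y, default_preds, zero_division=0))
print(precision_score(Y, custom_preds, zero_division=0))
On real data, raising the cutoff typically trades recall away for precision.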
Here's the docstring for predict_proba:
Probability estimates.
The returned estimates for all classes are ordered by the
label of classes.
For a multi_class problem, if multi_class is set to be "multinomial"
the softmax function is used to find the predicted probability of
each class.
Else use a one-vs-rest approach, i.e. calculate the probability
of each class assuming it to be positive using the logistic function,
and normalize these values across all the classes.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
T : array-like, shape = [n_samples, n_classes]
Returns the probability of the sample for each class in the model,
where classes are ordered as they are in ``self.classes_``.
This works for both binary and multi-class classification:
from sklearn.linear_model import LogisticRegression
import numpy as np
#X = some training data
#y = labels for training data
#X_test = some test data
clf = LogisticRegression()
clf.fit(X, y)
threshold = 0.7  # custom probability cutoff
probs = clf.predict_proba(X_test)
predictions = clf.classes_[np.argmax(probs > threshold, axis=1)]
# Caveat: np.argmax returns the first index when no probability clears the
# threshold, so such samples fall back to the first class in clf.classes_.
# Import sklearn and the other libraries we need
import numpy as np
from sklearn.linear_model import LogisticRegression
# Apply sklearn logistic regression on the given data X and labels Y
X_skl = np.vstack((df1, df2))  # 10000 x 2 array
Y_skl = Y                      # 10000 x 1 array
LogR = LogisticRegression()
LogR.fit(X_skl, Y_skl)
Y_skl_hat = LogR.predict(X_skl)
# Calculate the accuracy:
# count the number of points where Y_skl is not equal to Y_skl_hat
N = len(Y_skl)
error_count_skl = 0  # number of misclassified points
for i in range(N):
    if Y_skl[i] != Y_skl_hat[i]:
        error_count_skl += 1
Accuracy = 100 * (N - error_count_skl) / N
print("Accuracy(%):")
print(Accuracy)
Output:
Accuracy(%):
99.48
Hello,
I'm trying to apply a logistic regression model to an array X (of size 10000 x 2) with labels Y (10000 x 1) using the sklearn library in Python. I'm completely lost because I've never used this library before. Can anyone help me with the coding?
Edited:
Sorry for the vague question. The goal is to find the training accuracy using the entire dataset X. Above is what I came up with; can anyone take a look and see if it makes sense?
To calculate accuracy you can simply use this sklearn method:
sklearn.metrics.accuracy_score(y_true, y_pred)
In your case:
sklearn.metrics.accuracy_score(Y_skl, Y_skl_hat)
If you want, take a look at the sklearn documentation for accuracy_score.
Also, you should train your model on some data and test it on other data, to check whether the model generalizes and to avoid overfitting. To split your data into train and test datasets you can use:
sklearn.model_selection.train_test_split
If you want, take a look at the sklearn documentation for train_test_split.
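A minimal sketch of that train/test workflow, reusing the X_skl and Y_skl arrays from your question:
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
# Hold out 25% of the data for testing; random_state makes the split reproducible
X_train, X_test, Y_train, Y_test = train_test_split(X_skl, Y_skl, test_size=0.25, random_state=0)
LogR = LogisticRegression()
LogR.fit(X_train, Y_train)
print(accuracy_score(Y_test, LogR.predict(X_test)))  # accuracy on unseen data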
There seem to be two ways to do OLS fits in Python: the sklearn one and the statsmodels one. I have a preference for the statsmodels one because it gives the error on the coefficients via the summary() function. However, I would like to use the TransformedTargetRegressor from sklearn to log-transform my target. It would seem that I need to choose between getting the error on my fitted coefficients in statsmodels and being able to transform my target in sklearn. Is there a good way to do both of these at the same time in either system?
In statsmodels it would be done like this:
import statsmodels.api as sm
X = sm.add_constant(X)
ols = sm.OLS(y, X)
ols_result = ols.fit()
print(ols_result.summary())
to print the fit with the coefficients and the errors on them.
For sklearn you can use the TransformedTargetRegressor:
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression
import numpy as np
regr = TransformedTargetRegressor(regressor=LinearRegression(), func=np.log1p, inverse_func=np.expm1)
regr.fit(X, y)
print('Coefficients: \n', regr.regressor_.coef_)  # the fitted inner regressor holds the coefficients
But there is no way to get the errors on the coefficients without calculating them yourself. Is there a good way to get the best of both worlds?
EDIT
I found a good example for the special case I care about here
https://web.archive.org/web/20160322085813/http://www.ats.ucla.edu/stat/mult_pkg/faq/general/log_transformed_regression.htm
Just to add a lengthy comment here: I believe that TransformedTargetRegressor does not do what you think it does. As far as I can tell, the inverse transformation function is only applied when the predict method is called; it does not express the coefficients in units of the untransformed outcome.
Example:
import pandas as pd
import statsmodels.api as sm
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression
import numpy as np
from sklearn import datasets
Create some sample data:
df = pd.DataFrame(datasets.load_iris().data)
df.columns = datasets.load_iris().feature_names
X = df.loc[:,['sepal length (cm)', 'sepal width (cm)']]
y = df.loc[:, 'petal width (cm)']
Sklearn first:
regr = TransformedTargetRegressor(regressor=LinearRegression(),func=np.log1p, inverse_func=np.expm1)
regr.fit(X, y)
print(regr.regressor_.intercept_)
for coef in regr.regressor_.coef_:
    print(coef)
#-0.45867804195769357
# 0.3567583897503805
# -0.2962942997303887
Statsmodels on transformed outcome:
X = sm.add_constant(X)
ols_trans = sm.OLS(np.log1p(y), X).fit()
print(ols_trans.params)
#const -0.458678
#sepal length (cm) 0.356758
#sepal width (cm) -0.296294
#dtype: float64
You see that in both cases the coefficients are identical. That is, the regression with TransformedTargetRegressor yields the same coefficients as statsmodels.OLS on the transformed outcome. TransformedTargetRegressor does not back-transform the coefficients into the original untransformed space. Note that the coefficients would be non-linear in the original space unless the transformation itself is linear, in which case this is trivial (adding and multiplying by constants). This discussion points in a similar direction: back-transforming betas is infeasible in most/many cases.
What to do instead?
If interpretation is your goal, I believe the closest you can get to what you wish to achieve is to use predicted values in which you vary the regressors or the coefficients. Let me give you an example: if your goal is to state the effect of a one-standard-error increase in sepal length on the untransformed outcome, you can create the predicted values as fitted, as well as the predicted values for the 1-sigma scenario (either by tampering with the coefficient, or by tampering with the corresponding column in X).
Example:
# Toy example to add one sigma to sepal length coefficient
coeffs = ols_trans.params.copy()
coeffs['sepal length (cm)'] += 0.018 # this is one sigma
# function to predict and translate predictions back:
def get_predicted_backtransformed(coeffs, data, inv_func):
    return inv_func(data.dot(coeffs))
# get standard predicted values, backtransformed:
original = get_predicted_backtransformed(ols_trans.params, X, np.expm1)
# get counterfactual predicted values, backtransformed:
variant1 = get_predicted_backtransformed(coeffs, X, np.expm1)
Then you can speak, for example, about the mean shift in the untransformed outcome:
variant1.mean()-original.mean()
#0.2523083548367202
In short, scikit-learn cannot help you in calculating coefficient standard errors. However, if you opt to use it, you can calculate the errors yourself. In the question Python scikit learn Linear Model Parameter Standard Error, @grisaitis provided a great answer explaining the main concepts behind it.
If you only want a plug-and-play function that works with scikit-learn, you can use this:
import numpy as np

def get_coef_std_errors(reg: 'sklearn.linear_model.LinearRegression',
                        y_true: 'np.ndarray', X: 'np.ndarray'):
    """Calculate the standard errors of the coefficients of a linear regression.

    Parameters
    ----------
    reg : sklearn.linear_model.LinearRegression
        LinearRegression object which has been fitted
    y_true : np.ndarray
        array containing the target variable
    X : np.ndarray
        array containing the features used in the regression

    Returns
    -------
    beta_std
        Standard errors of the regression coefficients
    """
    y_pred = reg.predict(X)  # get predictions
    errors = y_true - y_pred  # calculate residuals
    sigma_sq_hat = np.var(errors)  # estimate the residual variance
    sigma_beta_hat = sigma_sq_hat * np.linalg.inv(X.T @ X)
    return np.sqrt(np.diagonal(sigma_beta_hat))  # diagonal recovers the variances
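A usage sketch, under the assumption that X already contains a column of ones if you want a standard error for the intercept (the formula uses X exactly as passed):
import numpy as np
from sklearn.linear_model import LinearRegression
rng = np.random.RandomState(0)
X = np.column_stack([np.ones(100), rng.randn(100, 2)])  # constant column plus two features
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.randn(100)
reg = LinearRegression(fit_intercept=False).fit(X, y)  # the ones column plays the intercept
print(get_coef_std_errors(reg, y, X))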
I have a dataset for regression: (X_train_scaled, y_train) and (X_val_scaled, y_val) for training and validation respectively. The inputs were scaled using StandardScaler.
I create a linear regression model using sklearn.linear_model.LinearRegression as follows:
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
linear_reg = LinearRegression()
linear_reg.fit(X_train_scaled, y_train)
y_pred_train = linear_reg.predict(X_train_scaled)
y_pred_val = linear_reg.predict(X_val_scaled)
r2_train = r2_score(y_train, y_pred_train)
r2_val = r2_score(y_val, y_pred_val)
print('r2_train', r2_train)
print('r2_val', r2_val)
After that I do the same but use polynomial features with degree = 1 (which are just the same as the original features but with an additional feature of ones, i.e. x^0, which I ignore).
from sklearn.preprocessing import PolynomialFeatures
pf = PolynomialFeatures(1)
X_train_poly = pf.fit_transform(X_train_scaled)[:, 1:] # ignore first col
X_val_poly = pf.transform(X_val_scaled)[:, 1:] # ignore first col
linear_reg = LinearRegression()
linear_reg.fit(X_train_poly, y_train)
y_pred_train = linear_reg.predict(X_train_poly)
y_pred_val = linear_reg.predict(X_val_poly)
r2_train = r2_score(y_train, y_pred_train)
r2_val = r2_score(y_val, y_pred_val)
print('r2_train', r2_train)
print('r2_val', r2_val)
However, I get different results. The first code gives me the following outputs:
r2_train 0.7409525513417043
r2_val 0.7239859358973735
whereas the second code gives this output:
r2_train 0.7410093370149977
r2_val 0.7241725658840452
Why are the outputs different although the dataset and model are the same?
To prove the datasets are the same, I tried the following code:
print(X_train_scaled.shape, X_train_poly.shape)
print(X_val_scaled.shape, X_val_poly.shape)
print((X_train_poly != X_train_scaled).sum())
print((X_val_poly != X_val_scaled).sum())
which has the output:
(802, 9) (802, 9)
(268, 9) (268, 9)
0
0
which indicates that the two datasets are identical.
Also, I use LinearRegression in both cases, which uses the OLS algorithm and has no random operations at all. So it's supposed to do the same calculations on the same data. However, I get different results.
Does anyone have an idea about the reason?
Sklearn LinearRegression uses ordinary least squares optimization to fit the training data to a linear model, while it is not clear what Sklearn PolynomialFeatures uses internally. But based on its transform() function:
Prefer CSR over CSC for sparse input (for speed), but CSC is required
if the degree is 4 or higher. If the degree is less than 4 and the
input format is CSC, it will be converted to CSR, have its polynomial
features generated, then converted back to CSC.
(see: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html)
Assuming PolynomialFeatures uses ordinary least squares optimization, you would still get the same results, but with a slight difference (just like yours), because the Compressed Sparse Row (CSR) format can compromise float values (in other words, truncation/approximation error).
I'm currently using Python's scikit-learn to create a support vector regression model, and I was wondering how one would go about finding the explicit regression equation of the target variable in terms of the predictors. It doesn't have to be simple or pretty, but is there a method Python has to output this (for a polynomial kernel, specifically)? I am fairly new to using SVR, and I am not certain what the regression equation used to predict a test observation should look like after the regression is fit.
I've already fit an SVR model that predicts with a performance I'm happy with, and I've used GridSearchCV to tune hyper-parameters. However, I need an explicit form of my target variable in terms of the predictors for an independent optimization, and don't know how to find this equation.
from sklearn.svm import SVR
svr = SVR(kernel='poly', C=best_params['C'], epsilon=best_params['epsilon'],
          gamma=best_params['gamma'], coef0=0.1, shrinking=True,
          tol=0.001, cache_size=200, verbose=False, max_iter=-1)
svr.fit(x,y)
Where x is my matrix of observations, y is my vector of target values from the observations, and best_params is the output (optimal hyperparameters) found by GridSearchCV.
Does Python have any method for outputting the resulting equation of the SVR model used in predicting future target values from a set of predictors? Or is there a straightforward way of using values found by SVR to create an equation myself if I specify the kernel to be of polynomial type?
Thank you!
If you use a linear kernel, then you can output your coefficients.
For example
from sklearn.svm import SVR
import numpy as np
n_samples, n_features = 1000, 5
rng = np.random.RandomState(0)
coef = [1, 2, 3, 4, 5]  # the true coefficients we hope to recover
X = rng.randn(n_samples, n_features)
y = (coef * X).sum(axis=1) + rng.randn(n_samples)  # linear signal plus noise
clf = SVR(kernel='linear', gamma='scale', C=1.0, epsilon=0.2)
clf.fit(X, y)
clf.coef_
# array([[0.97626634, 2.00013793, 2.96205576, 4.00651352, 4.95923782]])
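For the poly kernel you actually asked about there is no explicit coef_, but you can write out the prediction equation yourself from the fitted model: f(x) = sum_i alpha_i * (gamma * <sv_i, x> + coef0)^degree + b, where the alpha_i are svr.dual_coef_, the sv_i are svr.support_vectors_ and b is svr.intercept_. A minimal sketch (gamma is passed explicitly so its value is known; this mirrors sklearn's documented decision function, not any hidden internals):
import numpy as np
from sklearn.svm import SVR
rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = X[:, 0] ** 2 + X[:, 1] - X[:, 2] + 0.1 * rng.randn(200)
gamma, coef0, degree = 0.5, 0.1, 3  # fixed so we can reuse them in the formula
svr = SVR(kernel='poly', gamma=gamma, coef0=coef0, degree=degree).fit(X, y)
def manual_predict(x_new):
    # Kernel expansion over the support vectors: the "explicit equation" of the model
    K = (gamma * svr.support_vectors_.dot(x_new) + coef0) ** degree
    return svr.dual_coef_.ravel().dot(K) + svr.intercept_[0]
x_test = rng.randn(3)
print(manual_predict(x_test))  # should match ...
print(svr.predict(x_test.reshape(1, -1))[0])  # ... sklearn's own prediction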
Here I asked how to compute AIC in a linear model. If I replace the LinearRegression() method with the linear_model.OLS method in order to get the AIC, then how can I compute the slope and intercept for the OLS linear model?
import statsmodels.formula.api as smf
regr = smf.OLS(y, X, hasconst=True).fit()
In your example, you can use the params attribute of regr, which will display the coefficients and the intercept. The key is that you first need to add a column vector of 1.0s to your X data. Why? The intercept term is technically just the coefficient of a column vector of 1s: it is a coefficient which, when multiplied by an X "term" of 1.0, produces itself. This is added to the summed product of the other coefficients and features to give your n x 1 array of predicted values.
Below is an example.
# Pull some data to use in the regression
from pandas_datareader.data import DataReader
import statsmodels.api as sm
syms = {'TWEXBMTH' : 'usd',
'T10Y2YM' : 'term_spread',
'PCOPPUSDM' : 'copper'
}
data = (DataReader(syms.keys(), 'fred', start='2000-01-01')
.pct_change()
.dropna())
data = data.rename(columns = syms)
# Here's where we assign a column of 1.0s to the X data
# This is required by statsmodels
# You can check that the resulting coefficients are correct by exporting
# to Excel with data.to_clipboard() and running Data Analysis > Regression there
data = data.assign(intercept = 1.)
Now actually running the regression and getting the coefficients takes just one line in addition to what you have now:
y = data.usd
X = data.loc[:, 'term_spread':]
regr = sm.OLS(y, X, hasconst=True).fit()
print(regr.params)
term_spread -0.00065
copper -0.09483
intercept 0.00105
dtype: float64
So regarding your question on AIC, you'll want to make sure the X data has a constant there as well before you call .fit.
Note: when you call .fit, you get back a regression results wrapper and can access any of the attributes listed here.
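For example, continuing the code above, the results wrapper exposes the quantities this thread cares about as plain attributes (standard statsmodels API):
print(regr.params)  # coefficients, including the intercept term
print(regr.bse)  # standard errors of the coefficients
print(regr.aic)  # Akaike information criterion, as in the linked question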
For anyone searching for how to get the slope and intercept of a LinearRegression in scikit-learn: it has coef_ and intercept_ attributes which hold these.
import numpy as np
from sklearn import linear_model
(x, y) = np.random.randn(10, 2).T
lr = linear_model.LinearRegression()
lr.fit(x.reshape(len(x), 1), y)
lr.coef_  # array([ 0.29387004])
lr.intercept_  # -0.17378418547919167