The regularization parameter C in logistic regression
(see http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) is used to keep the fitted function well defined and to avoid overfitting or problems with step functions (see https://datascience.stackexchange.com/questions/10805/does-scikit-learn-use-regularization-by-default/10806).
However, regularization in logistic regression should only apply to the feature weights, not to the intercept (also explained here: http://aimotion.blogspot.com/2011/11/machine-learning-with-python-logistic.html).
But it seems that sklearn.linear_model.LogisticRegression actually regularizes the intercept as well. Here is why:
1) Consider the above link carefully (https://datascience.stackexchange.com/questions/10805/does-scikit-learn-use-regularization-by-default/10806): the sigmoid is pulled slightly to the left, toward an intercept of 0.
2) I tried to fit data points with a logistic curve and a handwritten maximum-likelihood function. Including the intercept in the L2 norm gives results identical to sklearn's.
Two questions please:
1) Did I get this wrong, is this a bug, or is there a well-justified reason for regularizing the intercept?
2) Is there a way to tell sklearn to regularize all parameters except the intercept?
Thanks!
import numpy as np
from sklearn.linear_model import LogisticRegression

C = 1e1
model = LogisticRegression(C=C)

x = np.arange(100, 110)
x = x[:, np.newaxis]
y = np.array([0]*5 + [1]*5)
print(x)
print(y)

model.fit(x, y)
a = model.coef_[0][0]
b = model.intercept_[0]
b_modified = -b/a  # without regularization, b_modified should be 104.5 (as for C=1e10)
print("a, b:", a, -b/a)
# OUTPUT:
# [[100]
#  [101]
#  [102]
#  [103]
#  [104]
#  [105]
#  [106]
#  [107]
#  [108]
#  [109]]
# [0 0 0 0 0 1 1 1 1 1]
# a, b: 0.0116744221756 100.478968664
scikit-learn applies regularization to logistic regression by default.
Changing the intercept_scaling parameter of sklearn.linear_model.LogisticRegression has an effect on the result similar to changing only the C parameter.
Regularization has an impact on the estimate of the bias (intercept) in logistic regression; the higher the value of intercept_scaling, the smaller that impact. Per the official documentation:
The intercept becomes intercept_scaling * synthetic_feature_weight.
Note! the synthetic feature weight is subject to l1/l2 regularization
as all other features. To lessen the effect of regularization on
synthetic feature weight (and therefore on the intercept)
intercept_scaling has to be increased.
Hope it helps!
Thanks @Prem, this is indeed the solution:
C = 1e1
intercept_scaling=1e3 # very high numbers make it unstable in practice
model = LogisticRegression(C=C, intercept_scaling=intercept_scaling)
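For completeness, here is a quick check on the toy data from the question (a sketch; intercept_scaling is only used by the liblinear solver, so it is set explicitly here, and the exact numbers may vary slightly):
import numpy as np
from sklearn.linear_model import LogisticRegression

x = np.arange(100, 110)[:, np.newaxis]
y = np.array([0] * 5 + [1] * 5)

# intercept_scaling only applies to the liblinear solver
model = LogisticRegression(C=1e1, intercept_scaling=1e3, solver='liblinear')
model.fit(x, y)

a = model.coef_[0][0]
b = model.intercept_[0]
print("decision boundary -b/a:", -b / a)  # should now be close to 104.5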
When I perform a logistic regression with the two APIs, they give different coefficients.
Even with this simple example they don't produce the same results in terms of coefficients, although I follow older advice on the same topic, such as setting a large value for the parameter C in sklearn so that the penalization almost vanishes (or setting penalty="none").
import pandas as pd
import numpy as np
import sklearn as sk
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm
n = 200
x = np.random.randint(0, 2, size=n)
y = (x > (0.5 + np.random.normal(0, 0.5, n))).astype(int)
display(pd.crosstab( y, x ))
max_iter = 100
#### Statsmodels
res_sm = sm.Logit(y, x).fit(method="ncg", maxiter=max_iter)
print(res_sm.params)
#### Scikit-Learn
res_sk = LogisticRegression( solver='newton-cg', multi_class='multinomial', max_iter=max_iter, fit_intercept=True, C=1e8 )
res_sk.fit( x.reshape(n, 1), y )
print(res_sk.coef_)
For example, I just ran the above code and got 1.72276655 for statsmodels and 1.86324749 for sklearn. Running it multiple times always gives different coefficients (sometimes closer than others, but different anyway).
Thus, even with this toy example the two APIs give different coefficients (and hence odds ratios), and with real data (not shown here) it almost gets "out of control"...
Am I missing something? How can I produce similar coefficients, matching to at least one or two decimal places?
There are some issues with your code.
To start with, the two models you show here are not equivalent: although you fit your scikit-learn LogisticRegression with fit_intercept=True (which is the default setting), you don't do so with your statsmodels one; from the statsmodels docs:
An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant.
It seems that this is a frequent point of confusion - see for example scikit-learn & statsmodels - which R-squared is correct? (and my own answer there as well).
The other issue is that, although you are in a binary classification setting, you ask for multi_class='multinomial' in your LogisticRegression, which should not be the case.
The third issue is that, as explained in the relevant Cross Validated thread Logistic Regression: Scikit Learn vs Statsmodels:
There is no way to switch off regularization in scikit-learn, but you can make it ineffective by setting the tuning parameter C to a large number.
which makes the two models non-comparable in principle, but you have successfully addressed it here by setting C=1e8. In fact, since then (2016), scikit-learn has indeed added a way to switch regularization off, by setting penalty='none'; according to the docs:
If ‘none’ (not supported by the liblinear solver), no regularization is applied.
which should now be considered the canonical way to switch off the regularization.
So, incorporating these changes in your code, we have:
np.random.seed(42) # for reproducibility
#### Statsmodels
# first artificially add intercept to x, as advised in the docs:
x_ = sm.add_constant(x)
res_sm = sm.Logit(y, x_).fit(method="ncg", maxiter=max_iter) # x_ here
print(res_sm.params)
Which gives the result:
Optimization terminated successfully.
Current function value: 0.403297
Iterations: 5
Function evaluations: 6
Gradient evaluations: 10
Hessian evaluations: 5
[-1.65822763 3.65065752]
with the first element of the array being the intercept and the second the coefficient of x. For scikit-learn we have:
#### Scikit-Learn
res_sk = LogisticRegression(solver='newton-cg', max_iter=max_iter, fit_intercept=True, penalty='none')
res_sk.fit( x.reshape(n, 1), y )
print(res_sk.intercept_, res_sk.coef_)
with the result being:
[-1.65822806] [[3.65065707]]
These results are practically identical, within the machine's numeric precision.
Repeating the procedure for different values of np.random.seed() does not change the essence of the results shown above.
From many documents, I have learned that the recipe for Ridge regression is:
loss_Ridge = loss_function + lambda * (L2 norm of the slope)
and the recipe for Lasso regression is:
loss_Lasso = loss_function + lambda * (L1 norm of the slope)
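For concreteness, here is a small NumPy sketch of the two recipes (my own restatement, not the book's code; note that Ridge conventionally uses the squared L2 norm of the slope weights, and the intercept is left out of the penalty):
import numpy as np

def ridge_loss(y, y_hat, w, lam):
    # mean squared error + lambda * squared L2 norm of the slope weights
    return np.mean((y - y_hat) ** 2) + lam * np.sum(w ** 2)

def lasso_loss(y, y_hat, w, lam):
    # mean squared error + lambda * L1 norm of the slope weights
    return np.mean((y - y_hat) ** 2) + lam * np.sum(np.abs(w))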
When I read the topic "Implementing Lasso and Ridge Regression" in the "TensorFlow Machine Learning Cookbook", its author explained that:
"...we will use a continuous approximation to a step function, called
the continuous heavy step function..."
and its author also provided lines of code here.
I don't understand what "the continuous heavy step function" refers to in this context. Please help me.
From the link that you provided,
if regression_type == 'LASSO':
    # Declare Lasso loss function
    # Lasso Loss = L2_Loss + heavyside_step,
    # Where heavyside_step ~ 0 if A < constant, otherwise ~ 99
    lasso_param = tf.constant(0.9)
    heavyside_step = tf.truediv(1., tf.add(1., tf.exp(tf.multiply(-50., tf.subtract(A, lasso_param)))))
    regularization_param = tf.multiply(heavyside_step, 99.)
    loss = tf.add(tf.reduce_mean(tf.square(y_target - model_output)), regularization_param)
This heavyside_step function is very close to a logistic (sigmoid) function, which in turn can serve as a continuous approximation to a step function.
You use a continuous approximation because the loss function needs to be differentiable with respect to the parameters of your model.
To get an intuition, read the constrained formulation in section 1.6 of https://www.cs.ubc.ca/~schmidtm/Documents/2005_Notes_Lasso.pdf
You can see in your code that if A < 0.9 then regularization_param vanishes, so the optimization will constrain A to that range.
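To see the shape of that approximation numerically, here is a minimal NumPy sketch using the same constants (0.9, -50, 99) as the snippet above:
import numpy as np

def heavyside_step(A, lasso_param=0.9, steepness=50.0):
    # smooth, differentiable stand-in for a hard step at A == lasso_param
    return 1.0 / (1.0 + np.exp(-steepness * (A - lasso_param)))

for A in [0.5, 0.8, 0.9, 1.0, 1.2]:
    # the penalty 99 * heavyside_step(A) is ~0 below 0.9 and ~99 well above it
    print(A, 99.0 * heavyside_step(A))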
If you want to select features using Lasso regression (its L1 penalty drives some coefficients to zero), here is one example:
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# features_vector and target stand for your own feature matrix and labels
estimator = Lasso()
featureSelection = SelectFromModel(estimator)
featureSelection.fit(features_vector, target)
selectedFeatures = featureSelection.transform(features_vector)
print(selectedFeatures)
I have implemented a gradient boosting decision tree to do a multiclass classification. My custom loss functions look like this:
import numpy as np
from sklearn.preprocessing import OneHotEncoder


def softmax(mat):
    res = np.exp(mat)
    res = np.multiply(res, 1 / np.sum(res, axis=1, keepdims=True))
    return res


def custom_asymmetric_objective(y_true, y_pred_encoded):
    pred = y_pred_encoded.reshape((-1, 3), order='F')
    pred = softmax(pred)
    y_true = OneHotEncoder(sparse=False, categories='auto').fit_transform(y_true.reshape(-1, 1))
    grad = (pred - y_true).astype("float")
    hess = 2.0 * pred * (1.0 - pred)
    return grad.flatten('F'), hess.flatten('F')


def custom_asymmetric_valid(y_true, y_pred_encoded):
    y_true = OneHotEncoder(sparse=False, categories='auto').fit_transform(y_true.reshape(-1, 1)).flatten('F')
    margin = (y_true - y_pred_encoded).astype("float")
    loss = margin * 10
    return "custom_asymmetric_eval", np.mean(loss), False
Everything works, but now I want to adjust my loss function in the following way: it should "penalize" an item that is classified incorrectly, and a penalty should be added for a certain constraint (this is calculated beforehand; let's just say the penalty is e.g. 0.05, so just a real number).
Is there any way to consider both the misclassification and the penalty value?
Try L2 regularization: add a penalty term lambda * w^2 to the loss, so that each weight update subtracts the learning rate times (the error times x plus the gradient of the penalty term, 2 * lambda * w).
ADDED: The penalization term (on the right of the equation) increases the generalization power of your model. If you overfit your model on the training set, performance will be poor on the test set. So you penalize those "right" classifications on the training set that generate error on the test set and compromise generalization.
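As a rough sketch of the update described above (my own illustration, not the original answer's code; it assumes a simple squared-error loss on a linear model and a plain gradient step):
import numpy as np

def l2_regularized_step(w, x, y, lr=0.01, lam=0.1):
    # prediction error of a linear model
    error = x @ w - y
    # gradient of the mean squared error plus the gradient of lam * ||w||^2
    grad = x.T @ error / len(y) + 2.0 * lam * w
    return w - lr * grad

# toy usage: the fitted weights end up shrunk toward zero
rng = np.random.RandomState(0)
x = rng.normal(size=(100, 3))
y = x @ np.array([1.0, 2.0, 0.0]) + rng.normal(scale=0.1, size=100)
w = np.zeros(3)
for _ in range(1000):
    w = l2_regularized_step(w, x, y)
print(w)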
I generate a simple linear model in which the X variables (dimension D) come from a multivariate normal with zero covariance between them. Only the first 10 variables have true coefficients of 1; the rest have coefficients of 0. Hence, theoretically, the ridge regression results should be the true coefficients divided by (1+C), where C is the penalty constant.
import numpy as np
from sklearn import linear_model


def generate_data(n):
    d = 100
    w = np.zeros(d)
    for i in range(0, 10):
        w[i] = 1.0
    trainx = np.random.normal(size=(n, d))
    e = np.random.normal(size=(n))
    trainy = np.dot(trainx, w) + e
    return trainx, trainy
Then I use:
n = 200
x,y = generate_data(n)
regr = linear_model.Ridge(alpha=4,normalize=True)
regr.fit(x, y)
print(regr.coef_[0:20])
Under normalize=True, I get the first 10 coefficients to be around 20% (i.e. 1/(1+4)) of the true value of 1. With normalize=False, I get the first 10 coefficients to be around 1, which are the same results as from a simple linear regression model. Moreover, since I generate the data with mean = 0 and std = 1, normalize=True shouldn't do anything because the data are already "normalized". Can someone explain to me what is going on here? Thanks!
It's important to understand that normalizing and standardizing are not the same thing, and the two are not done at the same time. You can either normalize or standardize.
Standardizing usually refers to transforming the data so that it has zero mean and unit (1) variance, e.g. by removing the mean and dividing by the standard deviation. In this case it is done feature (column) wise.
Normalizing commonly refers to transforming the data values to a range between 0 and 1, e.g. by dividing by the length of the vector. That does not mean the mean will be 0 and the variance 1.
After generating trainx and trainy, they are not normalized yet. Maybe print them to see for yourself.
So, when normalize=True, trainx will be normalized by subtracting the mean and dividing by the l2-norm (according to sklearn).
When normalize=False, trainx will remain as is.
If you set normalize=True, every feature column is divided by its L2 norm; in other words, the magnitude of every feature column is diminished, which causes the estimated coefficients to be larger (βX should stay more or less constant: the smaller X, the larger β). When the coefficients are larger, a greater L2 penalty is imposed, so the fit places more weight on the L2 penalty than on the linear part (Xβ). As a result, the coefficient estimates from the linear part are not as accurate as in pure linear regression.
By contrast, if normalize=False, X is bigger and β is smaller. Given the same alpha, the L2 penalty is marginal, more weight is on the linear part, and the result is close to a pure linear regression.
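A quick way to see the "smaller X, larger β, larger penalty" point (a minimal sketch; this is not the exact preprocessing that normalize=True performs, only an illustration of the scaling effect):
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(0)
x = rng.normal(size=(200, 1))
y = x[:, 0] + rng.normal(scale=0.1, size=200)

for scale in [1.0, 0.01]:
    ols = LinearRegression().fit(x * scale, y).coef_[0]
    ridge = Ridge(alpha=4).fit(x * scale, y).coef_[0]
    # shrinking the column makes the unpenalized coefficient grow (~1/scale),
    # so the same alpha penalizes it much more heavily in relative terms;
    # ridge * scale is the effective slope on the original x
    print(scale, ols, ridge, ridge * scale)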
There are standard ways of predicting proportions such as logistic regression (without thresholding) and beta regression. There have already been discussions about this:
http://scikit-learn-general.narkive.com/4dSCktaM/using-logistic-regression-on-a-continuous-target-variable
http://scikit-learn-general.narkive.com/lLVQGzyl/beta-regression
I cannot tell if there exists a work-around within the sklearn framework.
There exists a workaround, but it is not intrinsically within the sklearn framework.
If you have a proportional target variable (value range 0-1) you run into two basic difficulties with scikit-learn:
Classifiers (such as logistic regression) deal with class labels as target variables only. As a workaround you could simply threshold your probabilities to 0/1 and interpret them as class labels, but you would lose a lot of information.
Regression models (such as linear regression) do not restrict the target variable. You can train them on proportional data, but there is no guarantee that the output on unseen data will be restricted to the 0/1 range. However, in this situation, there is a powerful work-around (below).
There are different ways to mathematically formulate logistic regression. One of them is the generalized linear model, which basically defines the logistic regression as a normal linear regression on logit-transformed probabilities. Normally, this approach requires sophisticated mathematical optimization because the probabilities are unknown and need to be estimated along with the regression coefficients.
In your case, however, the probabilities are known. This means you can simply transform them with y = log(p / (1 - p)). Now they cover the full range from -oo to oo and can serve as the target variable for a LinearRegression model [*]. Of course, the model output then needs to be transformed again to result in probabilities p = 1 / (exp(-y) + 1).
import numpy as np
from sklearn.linear_model import LinearRegression


class LogitRegression(LinearRegression):

    def fit(self, x, p):
        p = np.asarray(p)
        y = np.log(p / (1 - p))
        return super().fit(x, y)

    def predict(self, x):
        y = super().predict(x)
        return 1 / (np.exp(-y) + 1)


if __name__ == '__main__':
    # generate example data
    np.random.seed(42)
    n = 100
    x = np.random.randn(n).reshape(-1, 1)
    noise = 0.1 * np.random.randn(n).reshape(-1, 1)
    p = np.tanh(x + noise) / 2 + 0.5

    model = LogitRegression()
    model.fit(x, p)
    print(model.predict([[-10], [0.0], [1]]))
    # [[ 2.06115362e-09]
    #  [ 5.00000000e-01]
    #  [ 8.80797078e-01]]
There are also numerous other alternatives. Some non-linear regression models can work naturally in the 0-1 range. For example, Random Forest Regressors will never exceed the range of the target variables they were trained with: simply put probabilities in and you will get probabilities out. Neural networks with an appropriate output activation function (a sigmoid, for instance) also work well with probabilities, but if you want to use those there are more specialized libraries than sklearn.
[*] You could in fact plug in any linear regression model which can make the method more powerful, but then it no longer is exactly equivalent to logistic regression.
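As a hedged sketch of that footnote, the same logit transform can wrap any sklearn regressor; Ridge is used below purely as an example, and the result is then no longer exactly equivalent to logistic regression:
import numpy as np
from sklearn.linear_model import Ridge


class LogitWrappedRegressor:
    # fit an arbitrary base regressor on logit-transformed proportions (a sketch)
    def __init__(self, base=None):
        self.base = base if base is not None else Ridge(alpha=1.0)

    def fit(self, x, p):
        p = np.asarray(p)  # p must lie strictly between 0 and 1
        return self.base.fit(x, np.log(p / (1 - p)))

    def predict(self, x):
        return 1 / (np.exp(-self.base.predict(x)) + 1)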