Regression in Python

I'm trying to do logistic regression with pandas and statsmodels, but I don't know why I'm getting an error or how to fix it.
import pandas as pd
import statsmodels.api as sm
x = [1, 3, 5, 6, 8]
y = [0, 1, 0, 1, 1]
d = { "x": pd.Series(x), "y": pd.Series(y)}
df = pd.DataFrame(d)
model = "y ~ x"
glm = sm.Logit(model, df=df).fit()
ERROR:
Traceback (most recent call last):
File "regress.py", line 45, in <module>
glm = sm.Logit(model, df=df).fit()
TypeError: __init__() takes exactly 3 arguments (2 given)

You can't pass a formula to Logit. Do:
In [82]: import patsy
In [83]: f = 'y ~ x'
In [84]: y, X = patsy.dmatrices(f, df, return_type='dataframe')
In [85]: sm.Logit(y, X).fit().summary()
Optimization terminated successfully.
Current function value: 0.511631
Iterations 6
Out[85]:
<class 'statsmodels.iolib.summary.Summary'>
"""
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 5
Model: Logit Df Residuals: 3
Method: MLE Df Model: 1
Date: Fri, 30 Aug 2013 Pseudo R-squ.: 0.2398
Time: 16:56:38 Log-Likelihood: -2.5582
converged: True LL-Null: -3.3651
LLR p-value: 0.2040
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept -2.0544 2.452 -0.838 0.402 -6.861 2.752
x 0.5672 0.528 1.073 0.283 -0.468 1.603
==============================================================================
"""
This is pretty much straight from the docs on how to do exactly what you're asking.
EDIT: You can also use the formula API, as suggested by @user333700:
In [22]: print sm.formula.logit(model, data=df).fit().summary()
Optimization terminated successfully.
Current function value: 0.511631
Iterations 6
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 5
Model: Logit Df Residuals: 3
Method: MLE Df Model: 1
Date: Fri, 30 Aug 2013 Pseudo R-squ.: 0.2398
Time: 18:14:26 Log-Likelihood: -2.5582
converged: True LL-Null: -3.3651
LLR p-value: 0.2040
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept -2.0544 2.452 -0.838 0.402 -6.861 2.752
x 0.5672 0.528 1.073 0.283 -0.468 1.603
==============================================================================

You can pass a formula directly to Logit too, via its from_formula classmethod:
sm.Logit.from_formula('y ~ x', data=df).fit()
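Put together with the question's data, a minimal runnable sketch of that approach (same idea as above, just written out in full):

import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({"x": [1, 3, 5, 6, 8], "y": [0, 1, 0, 1, 1]})

# from_formula builds the design matrices from the patsy formula for you
result = sm.Logit.from_formula("y ~ x", data=df).fit()
print(result.summary())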

Related

Using GLM to reproduce built-in regression models in statsmodels

I am currently trying to reproduce a regression model, eq. (3) (edit: fixed link), in Python using statsmodels. As this model is not part of the standard models provided by statsmodels, I clearly have to write it myself using the provided formula API.
Since I have never worked with the formula API (or patsy, for that matter), I wanted to start by verifying my approach and reproducing standard models with the formula API and a generalized linear model. My code and the results for a Poisson regression are given below at the end of my question.
You will see that the parameters beta = (2, -3, 1) are recovered with good accuracy for all three models. However, I have a couple of questions:
How do I explicitly add covariates to the glm model with a
regression coefficient equal to 1?
From what I understand, a Poisson regression in general has the shape counts = exp(intercept + beta * x + log(exposure)), i.e. the log of the exposure is added with a coefficient fixed at 1. I would like to reproduce this behaviour in my glm model, i.e. I want something like counts = exp(intercept + beta * x + k * log(exposure)) where k is a fixed constant, specified through the formula.
Simply using formula = "1 + x1 + x2 + x3 + np.log(exposure)" returns a perfect separation error (why?). I can bypass that by adding some random noise to y, but in that case np.log(exposure) gets a non-unity regression coefficient, i.e. it is treated as a normal regression covariate.
Apparently both built-in models 1 and 2 have no intercept, even though I tried to explicitly add one in model 2. Or is there a hidden intercept that is simply not reported? In either case, how do I fix that? (See the sketch after the results below.)
Any help would be greatly appreciated, so thanks in advance!
import numpy as np
import pandas as pd
np.random.seed(1+8+2022)
# Number of random samples
n = 4000
# Intercept
alpha = 1
# Regression Coefficients
beta = np.asarray([2.0,-3.0,1.0])
# Random Data
data = {
    "x1": np.random.normal(1.00, 0.10, size=n),
    "x2": np.random.normal(1.50, 0.15, size=n),
    "x3": np.random.normal(-2.0, 0.20, size=n),
    "exposure": np.random.poisson(14, size=n),
}
# Calculate the response
x = np.asarray([data["x1"], data["x2"] , data["x3"]]).T
t = np.asarray(data["exposure"])
# Add response to random data
data["y"] = np.exp(alpha + np.dot(x,beta) + np.log(t))
# Convert dict to df
data = pd.DataFrame(data)
print(data)
#-----------------------------------------------------
# My Model
#-----------------------------------------------------
import statsmodels.api as sm
import statsmodels.formula.api as smf
formula = "y ~ x1 + x2 + x3"
model = smf.glm(formula=formula, data=data, family=sm.families.Poisson()).fit()
print(model.summary())
#-----------------------------------------------------
# statsmodels.discrete.discrete_model.Poisson 1
#-----------------------------------------------------
import statsmodels.api as sm
data["offset"] = np.ones(n)
model = sm.Poisson(endog=data["y"],
                   exog=data[["x1", "x2", "x3"]],
                   exposure=data["exposure"],
                   offset=data["offset"]).fit()
print(model.summary())
#-----------------------------------------------------
# statsmodels.discrete.discrete_model.Poisson 2
#-----------------------------------------------------
import statsmodels.api as sm
data["x1"] = sm.add_constant(data["x1"])
model = sm.Poisson(endog=data["y"],
                   exog=data[["x1", "x2", "x3"]],
                   exposure=data["exposure"]).fit()
print(model.summary())
RESULTS:
x1 x2 x3 exposure y
0 1.151771 1.577677 -1.811903 13 0.508422
1 0.897012 1.678311 -2.327583 22 0.228219
2 1.040250 1.471962 -1.705458 13 0.621328
3 0.866195 1.512472 -1.766108 17 0.478107
4 0.925470 1.399320 -1.886349 13 0.512518
... ... ... ... ... ...
3995 1.073945 1.365260 -1.755071 12 0.804081
3996 0.855000 1.251951 -2.173843 11 0.439639
3997 0.892066 1.710856 -2.183085 10 0.107643
3998 0.763777 1.538938 -2.013619 22 0.363551
3999 1.056958 1.413922 -1.722252 19 1.098932
[4000 rows x 5 columns]
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 4000
Model: GLM Df Residuals: 3996
Model Family: Poisson Df Model: 3
Link Function: log Scale: 1.0000
Method: IRLS Log-Likelihood: -2743.7
Date: Sat, 08 Jan 2022 Deviance: 141.11
Time: 09:32:32 Pearson chi2: 140.
No. Iterations: 4
Covariance Type: nonrobust
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 3.6857 0.378 9.755 0.000 2.945 4.426
x1 2.0020 0.227 8.800 0.000 1.556 2.448
x2 -3.0393 0.148 -20.604 0.000 -3.328 -2.750
x3 0.9937 0.114 8.719 0.000 0.770 1.217
==============================================================================
Optimization terminated successfully.
Current function value: 0.668293
Iterations 10
Poisson Regression Results
==============================================================================
Dep. Variable: y No. Observations: 4000
Model: Poisson Df Residuals: 3997
Method: MLE Df Model: 2
Date: Sat, 08 Jan 2022 Pseudo R-squ.: 0.09462
Time: 09:32:32 Log-Likelihood: -2673.2
converged: True LL-Null: -2952.6
Covariance Type: nonrobust LLR p-value: 4.619e-122
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.0000 0.184 10.875 0.000 1.640 2.360
x2 -3.0000 0.124 -24.160 0.000 -3.243 -2.757
x3 1.0000 0.094 10.667 0.000 0.816 1.184
==============================================================================
Optimization terminated successfully.
Current function value: 0.677893
Iterations 5
Poisson Regression Results
==============================================================================
Dep. Variable: y No. Observations: 4000
Model: Poisson Df Residuals: 3997
Method: MLE Df Model: 2
Date: Sat, 08 Jan 2022 Pseudo R-squ.: 0.08162
Time: 09:32:32 Log-Likelihood: -2711.6
converged: True LL-Null: -2952.6
Covariance Type: nonrobust LLR p-value: 2.196e-105
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.9516 0.304 9.711 0.000 2.356 3.547
x2 -2.9801 0.147 -20.275 0.000 -3.268 -2.692
x3 0.9807 0.113 8.655 0.000 0.759 1.203
==============================================================================
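A hedged sketch of the usual way to get the behaviour asked about above (this is not from the original post): statsmodels lets you pass log(exposure) as an offset, or the raw exposure via the exposure argument, so that its coefficient stays pinned at 1; adding a constant column to the exog is what gives the discrete Poisson models an intercept.

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Sketch only, reusing the DataFrame `data` built in the question above.
# Fixing the log(exposure) coefficient at 1: pass it as an offset instead of a covariate.
glm_offset = smf.glm(formula="y ~ x1 + x2 + x3",
                     data=data,
                     family=sm.families.Poisson(),
                     offset=np.log(data["exposure"])).fit()
print(glm_offset.summary())

# The discrete Poisson model only gets an intercept if the exog contains a constant column.
exog = sm.add_constant(data[["x1", "x2", "x3"]])
poisson_const = sm.Poisson(endog=data["y"], exog=exog,
                           exposure=data["exposure"]).fit()
print(poisson_const.summary())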

How to include interaction variables in logit statsmodel python?

I am working on a logistic regression model and I am using statsmodels' logit API. I am unable to figure out how to feed interaction terms to the model.
You can use the formula interface and put a colon, :, between the terms inside the formula, for example:
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
import pandas as pd
np.random.seed(111)
df = pd.DataFrame(np.random.binomial(1,0.5,(50,3)),columns=['x1','x2','y'])
res1 = smf.logit(formula='y ~ x1 + x2 + x1:x2', data=df).fit()
res1.summary()
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 50
Model: Logit Df Residuals: 46
Method: MLE Df Model: 3
Date: Thu, 04 Feb 2021 Pseudo R-squ.: 0.02229
Time: 10:03:59 Log-Likelihood: -32.463
converged: True LL-Null: -33.203
Covariance Type: nonrobust LLR p-value: 0.6869
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
Intercept -0.9808 0.677 -1.449 0.147 -2.308 0.346
x1 0.4700 0.851 0.552 0.581 -1.199 2.139
x2 0.9808 0.863 1.137 0.256 -0.710 2.671
x1:x2 -1.1632 1.229 -0.946 0.344 -3.572 1.246
==============================================================================
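As a side note (not part of the original answer), patsy also accepts x1*x2 as shorthand for the main effects plus their interaction, so the same model can be written more compactly, reusing df and smf from the snippet above:

# 'y ~ x1*x2' expands to 'y ~ x1 + x2 + x1:x2', so this fits the same model as above
res2 = smf.logit(formula='y ~ x1*x2', data=df).fit()
print(res2.summary())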

Dual beta in python - multiple linear regression with dummy variable in statsmodel

I am trying to calculate the dual beta in Python using a statsmodels regression. Unfortunately I am getting an error message.
The regression equation for dual betas is given here
Dual Beta Formula
I am neglecting the risk free rate (rf) for now, but the implementation should be similar once I add it. For now my code looks as follows, where my 'spx.xlsx' file simply has two columns with returns, called 'SPXrets' and 'AAPLrets' (+ one column with dates):
import pandas as pd
from pandas import ExcelWriter
from pandas import ExcelFile
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
df = pd.read_excel('spx.xlsx')
print(df.columns)
mod = smf.ols(formula='AAPLrets ~ SPXrets', data=df)
res = mod.fit()
print(res.summary())
This prompts a patsy error:
PatsyError: intercept term cannot interact with anything else
AAPLrets ~ SPXrets:c(D) + SPXrets:(1-c(D))
Grateful for any help - many thanks!
Edit:
After my initial suggestions, the OP has changed both the title and the provided code snippet. My suggestions have since been edited accordingly.
New suggestion:
I suspect you're experiencing some problems with your dataset.
I suggest that you tell us a little more about the data source, how you've loaded the data, what it looks like (structure) and what types your columns have (string, float etc.).
What I can tell you right now is that your snippet runs fine with some sample data like this:
Sample data:
CONret DAXret:c(D) DAXret:(1-c(D)) AAPLrets SPXrets dummy
2017-01-08 109 107 122 101 100 0
2017-01-09 117 108 124 113 147 0
2017-01-10 142 108 130 107 103 1
2017-01-11 106 121 149 103 104 1
2017-01-12 124 149 143 112 126 0
Output:
OLS Regression Results
==============================================================================
Dep. Variable: AAPLrets R-squared: 0.095
Model: OLS Adj. R-squared: 0.004
Method: Least Squares F-statistic: 1.044
Date: Thu, 14 Feb 2019 Prob (F-statistic): 0.331
Time: 16:00:01 Log-Likelihood: -48.388
No. Observations: 12 AIC: 100.8
Df Residuals: 10 BIC: 101.7
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 84.3198 31.143 2.708 0.022 14.929 153.711
SPXrets 0.2635 0.258 1.022 0.331 -0.311 0.838
==============================================================================
Omnibus: 5.649 Durbin-Watson: 1.882
Prob(Omnibus): 0.059 Jarque-Bera (JB): 2.933
Skew: 1.202 Prob(JB): 0.231
Kurtosis: 3.290 Cond. No. 872.
==============================================================================
Here's the whole thing:
# imports
import statsmodels.formula.api as smf
import pandas as pd
import numpy as np
import statsmodels.api as sm
# sample data
np.random.seed(1)
rows = 12
listVars= ['CONret','DAXret:c(D)', 'DAXret:(1-c(D))', 'AAPLrets', 'SPXrets']
rng = pd.date_range('1/1/2017', periods=rows, freq='D')
df = pd.DataFrame(np.random.randint(100,150,size=(rows, len(listVars))), columns=listVars)
df = df.set_index(rng)
df['dummy'] = np.random.randint(2, size=df.shape[0])
mod = smf.ols(formula='AAPLrets ~ SPXrets', data=df)
res = mod.fit()
res.summary()
Another suggestion:
Personally, I'd feel much more comfortable without patsy.
The snippet below will let you run a linear regression and select whether to return the model summary, or a dataframe with other details like coefficient p-values and r-squared.
# Imports
import pandas as pd
import numpy as np
import statsmodels.api as sm
# sample data
np.random.seed(1)
rows = 12
listVars= ['CONret','DAXret:c(D)', 'DAXret:(1-c(D))', 'AAPLrets', 'SPXrets']
rng = pd.date_range('1/1/2017', periods=rows, freq='D')
df = pd.DataFrame(np.random.randint(100,150,size=(rows, len(listVars))), columns=listVars)
df = df.set_index(rng)
df['dummy'] = np.random.randint(2, size=df.shape[0])
def LinReg(df, y, x, const, results):
    betas = x.copy()
    # Model with or without a constant
    if const == True:
        x = sm.add_constant(df[x])
        model = sm.OLS(df[y], x).fit()
    else:
        model = sm.OLS(df[y], df[x]).fit()
    # Estimates of R2 and p
    res1 = {'Y': [y],
            'R2': [format(model.rsquared, '.4f')],
            'p': [model.pvalues.tolist()],
            'start': [df.index[0]],
            'stop': [df.index[-1]],
            'obs': [df.shape[0]],
            'X': [betas]}
    df_res1 = pd.DataFrame(data=res1)
    # Regression Coefficients
    theParams = model.params[0:]
    coefs = theParams.to_frame()
    df_coefs = pd.DataFrame(coefs.T)
    xNames = list(df_coefs)
    xValues = list(df_coefs.loc[0].values)
    xValues2 = ['%.2f' % elem for elem in xValues]
    res2 = {'Independent': [xNames],
            'beta': [xValues2]}
    df_res2 = pd.DataFrame(data=res2)
    # All results
    df_res = pd.concat([df_res1, df_res2], axis=1)
    df_res = df_res.T
    df_res.columns = ['results']
    # Return either the statsmodels summary or the collected results
    if results == 'summary':
        print(model.summary())
        return model.summary()
    else:
        return df_res
df_regression = LinReg(df = df, y = 'CONret', x = ['DAXret:c(D)', 'DAXret:(1-c(D))', 'dummy'], const = True, results = 'summary')
print(df_regression)
Test run 1:
df_regression = LinReg(df = df, y = 'CONret', x = ['DAXret:c(D)', 'DAXret:(1-c(D))'], const = True, results = '')
Output 1:
results
Y CONret
R2 0.0813
p [0.13194822614949883, 0.45726622261432304, 0.9...
start 2017-01-01 00:00:00
stop 2017-01-12 00:00:00
obs 12
X [DAXret:c(D), DAXret:(1-c(D)), dummy]
Independent [const, DAXret:c(D), DAXret:(1-c(D)), dummy]
beta [88.94, 0.24, -0.01, 2.20]
Test run 2:
df_regression = LinReg(df = df, y = 'CONret', x = ['DAXret:c(D)', 'DAXret:(1-c(D))', 'dummy'], const = True, results = 'summary')
Output 2:
OLS Regression Results
==============================================================================
Dep. Variable: CONret R-squared: 0.081
Model: OLS Adj. R-squared: -0.263
Method: Least Squares F-statistic: 0.2361
Date: Thu, 14 Feb 2019 Prob (F-statistic): 0.869
Time: 16:04:02 Log-Likelihood: -47.138
No. Observations: 12 AIC: 102.3
Df Residuals: 8 BIC: 104.2
Df Model: 3
Covariance Type: nonrobust
===================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------
const 88.9438 53.019 1.678 0.132 -33.318 211.205
DAXret:c(D) 0.2350 0.301 0.781 0.457 -0.459 0.929
DAXret:(1-c(D)) -0.0060 0.391 -0.015 0.988 -0.908 0.896
dummy 2.2005 8.973 0.245 0.812 -18.490 22.891
==============================================================================
Omnibus: 1.025 Durbin-Watson: 2.354
Prob(Omnibus): 0.599 Jarque-Bera (JB): 0.720
Skew: 0.540 Prob(JB): 0.698
Kurtosis: 2.477 Cond. No. 2.15e+03
==============================================================================
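Coming back to the dual-beta equation the question actually asks about: a hedged sketch (not from the original answers) is to keep a plain 0/1 dummy column D in the data frame and let patsy's I() do the arithmetic inside the formula, which avoids the interaction error quoted in the question:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical sample data; in the real case SPXrets/AAPLrets come from the Excel file
np.random.seed(0)
df = pd.DataFrame({'SPXrets': np.random.normal(0, 0.01, 250)})
df['D'] = (df['SPXrets'] > 0).astype(int)  # 1 on up-market days, 0 otherwise
df['AAPLrets'] = 1.2 * df['SPXrets'] + np.random.normal(0, 0.005, 250)

# beta_plus multiplies D * SPXrets, beta_minus multiplies (1 - D) * SPXrets
res = smf.ols('AAPLrets ~ SPXrets:D + SPXrets:I(1 - D)', data=df).fit()
print(res.params)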

Logistic Regression different results with R and Python?

I used a logistic regression approach in both programs, and was wondering why I am getting different results, especially with the coefficients. The outcome, Infection, is (1, 0) and Flushed is a continuous variable.
Python:
import statsmodels.api as sm
logit_model=sm.Logit(data['INFECTION'], data['Flushed'])
result=logit_model.fit()
print(result.summary())
Results:
Logit Regression Results
==============================================================================
Dep. Variable: INFECTION No. Observations: 414
Model: Logit Df Residuals: 413
Method: MLE Df Model: 0
Date: Fri, 24 Aug 2018 Pseudo R-squ.: -1.388
Time: 15:47:42 Log-Likelihood: -184.09
converged: True LL-Null: -77.104
LLR p-value: nan
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
Flushed -0.6467 0.070 -9.271 0.000 -0.783 -0.510
==============================================================================
R:
mylogit <- glm(INFECTION ~ Flushed, data = cvc, family = "binomial")
summary(mylogit)
Results:
Call:
glm(formula = INFECTION ~ Flushed, family = "binomial", data = cvc)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.0598 -0.3107 -0.2487 -0.2224 2.8051
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -3.91441 0.38639 -10.131 < 2e-16 ***
Flushed 0.22696 0.06049 3.752 0.000175 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
You seem to be missing the constant (intercept) term in the Python logistic model.
Written in R's formula syntax, you're fitting two different models:
Python model: INFECTION ~ 0 + Flushed
R model: INFECTION ~ Flushed
To add a constant to the Python model, use sm.add_constant(...).
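A minimal sketch of that fix, assuming the data frame and column names from the question:

import statsmodels.api as sm

# add_constant prepends an intercept column, matching R's default of including an intercept
X = sm.add_constant(data['Flushed'])
result = sm.Logit(data['INFECTION'], X).fit()
print(result.summary())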

How to get R-squared for robust regression (RLM) in Statsmodels?

When it comes to measuring goodness of fit, R-squared seems to be a commonly understood (and accepted) measure for "simple" linear models.
But in the case of statsmodels (as well as other statistical software), RLM does not include R-squared together with the regression results.
Is there a way to get it calculated "manually", perhaps in a way similar to how it is done in Stata?
Or is there another measure that can be used / calculated from the results produced by sm.RLM?
This is what Statsmodels is producing:
import numpy as np
import statsmodels.api as sm
# Sample Data with outliers
nsample = 50
x = np.linspace(0, 20, nsample)
x = sm.add_constant(x)
sig = 0.3
beta = [5, 0.5]
y_true = np.dot(x, beta)
y = y_true + sig * 1. * np.random.normal(size=nsample)
y[[39,41,43,45,48]] -= 5 # add some outliers (10% of nsample)
# Regression with Robust Linear Model
res = sm.RLM(y, x).fit()
print(res.summary())
Which outputs:
Robust linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 50
Model: RLM Df Residuals: 48
Method: IRLS Df Model: 1
Norm: HuberT
Scale Est.: mad
Cov Type: H1
Date: Mo, 27 Jul 2015
Time: 10:00:00
No. Iterations: 17
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 5.0254 0.091 55.017 0.000 4.846 5.204
x1 0.4845 0.008 61.555 0.000 0.469 0.500
==============================================================================
Since an OLS fit returns the R2, I would suggest regressing the actual values against the fitted values using simple linear regression. Regardless of where the fitted values come from, such an approach would give you an indication of the corresponding R2.
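A minimal sketch of that idea, assuming res, x and y from the question's code (not part of the original answer):

import statsmodels.api as sm

# regress the observed y on the RLM fitted values and read off the OLS R-squared
fitted = sm.add_constant(res.fittedvalues)
r2_proxy = sm.OLS(y, fitted).fit().rsquared
print(r2_proxy)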
R2 is not a good measure of goodness of fit for RLM models. The problem is that the outliers have a huge effect on the R2 value, to the point where it is completely determined by outliers. Using weighted regression afterwards is an attractive alternative, but it is better to look at the p-values, standard errors and confidence intervals of the estimated coefficients.
Comparing the OLS summary to RLM (results are slightly different to yours due to different randomization):
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.726
Model: OLS Adj. R-squared: 0.721
Method: Least Squares F-statistic: 127.4
Date: Wed, 03 Nov 2021 Prob (F-statistic): 4.15e-15
Time: 09:33:40 Log-Likelihood: -87.455
No. Observations: 50 AIC: 178.9
Df Residuals: 48 BIC: 182.7
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 5.7071 0.396 14.425 0.000 4.912 6.503
x1 0.3848 0.034 11.288 0.000 0.316 0.453
==============================================================================
Omnibus: 23.499 Durbin-Watson: 2.752
Prob(Omnibus): 0.000 Jarque-Bera (JB): 33.906
Skew: -1.649 Prob(JB): 4.34e-08
Kurtosis: 5.324 Cond. No. 23.0
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Robust linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 50
Model: RLM Df Residuals: 48
Method: IRLS Df Model: 1
Norm: HuberT
Scale Est.: mad
Cov Type: H1
Date: Wed, 03 Nov 2021
Time: 09:34:24
No. Iterations: 17
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 5.1857 0.111 46.590 0.000 4.968 5.404
x1 0.4790 0.010 49.947 0.000 0.460 0.498
==============================================================================
If the model instance has been used for another fit with different fit parameters, then the fit options might not be the correct ones anymore .
You can see that the standard errors and size of the confidence interval decreases in going from OLS to RLM for both the intercept and the slope term. This suggests that the estimates are closer to the real values.
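To see the outlier effect described above in numbers, here is a hedged check (not from the original answer) that refits OLS on the question's simulated data with and without the injected outliers and compares the R-squared values:

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
nsample = 50
x = sm.add_constant(np.linspace(0, 20, nsample))
y_clean = np.dot(x, [5, 0.5]) + 0.3 * np.random.normal(size=nsample)
y_out = y_clean.copy()
y_out[[39, 41, 43, 45, 48]] -= 5  # same outliers as in the question

print(sm.OLS(y_clean, x).fit().rsquared)  # close to 1 on clean data
print(sm.OLS(y_out, x).fit().rsquared)    # noticeably lower once the outliers dominate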
Why not use model.predict to obtain the r2? For example:
r2 = 1. - np.sum(np.abs(model.predict(X) - y) ** 2) / np.sum(np.abs(y - np.mean(y)) ** 2)
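As a concrete sketch (assuming res, x and y from the question's code), that formula could be applied to the RLM fit like this:

import numpy as np

fitted = res.predict(x)  # fitted values of the robust model
pseudo_r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(pseudo_r2)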
