Results from Python linearmodels PanelOLS and Stata areg differ - python

For a fixed-effects model I was planning to switch from Stata's areg to Python's linearmodels.panel.PanelOLS, but the results differ: in Stata I get R-squared = 0.6047 and in Python I get R-squared = 0.1454.
Why do the commands below give such different R-squared values?
Stata command and results:
use ./linearmodels_datasets_wage_panel.dta, clear
areg lwage expersq union married hours, vce(cluster nr) absorb(nr)
Linear regression, absorbing indicators Number of obs = 4,360
Absorbed variable: nr No. of categories = 545
F(4, 544) = 84.67
Prob > F = 0.0000
R-squared = 0.6047
Adj R-squared = 0.5478
Root MSE = 0.3582
(Std. err. adjusted for 545 clusters in nr)
------------------------------------------------------------------------------
| Robust
lwage | Coefficient std. err. t P>|t| [95% conf. interval]
-------------+----------------------------------------------------------------
expersq | .0039509 .0002554 15.47 0.000 .0034492 .0044526
union | .0784442 .0252621 3.11 0.002 .028821 .1280674
married | .1146543 .0234954 4.88 0.000 .0685014 .1608072
hours | -.0000846 .0000238 -3.56 0.000 -.0001313 -.0000379
_cons | 1.565825 .0531868 29.44 0.000 1.461348 1.670302
------------------------------------------------------------------------------
Python command and results:
from linearmodels.datasets import wage_panel
from linearmodels.panel import PanelOLS
data = wage_panel.load()
mod_entity = PanelOLS.from_formula(
    "lwage ~ 1 + expersq + union + married + hours + EntityEffects",
    data=data.set_index(["nr", "year"]),
)
result_entity = mod_entity.fit(
    cov_type='clustered',
    cluster_entity=True,
)
print(result_entity)
PanelOLS Estimation Summary
================================================================================
Dep. Variable: lwage R-squared: 0.1454
Estimator: PanelOLS R-squared (Between): -0.0844
No. Observations: 4360 R-squared (Within): 0.1454
Date: Wed, Feb 02 2022 R-squared (Overall): 0.0219
Time: 12:23:24 Log-likelihood -1416.4
Cov. Estimator: Clustered
F-statistic: 162.14
Entities: 545 P-value 0.0000
Avg Obs: 8.0000 Distribution: F(4,3811)
Min Obs: 8.0000
Max Obs: 8.0000 F-statistic (robust): 96.915
P-value 0.0000
Time periods: 8 Distribution: F(4,3811)
Avg Obs: 545.00
Min Obs: 545.00
Max Obs: 545.00
Parameter Estimates
==============================================================================
Parameter Std. Err. T-stat P-value Lower CI Upper CI
------------------------------------------------------------------------------
Intercept 1.5658 0.0497 31.497 0.0000 1.4684 1.6633
expersq 0.0040 0.0002 16.550 0.0000 0.0035 0.0044
hours -8.46e-05 2.22e-05 -3.8101 0.0001 -0.0001 -4.107e-05
married 0.1147 0.0220 5.2207 0.0000 0.0716 0.1577
union 0.0784 0.0236 3.3221 0.0009 0.0321 0.1247
==============================================================================
F-test for Poolability: 9.4833
P-value: 0.0000
Distribution: F(544,3811)
Included effects: Entity

You are trying to run an absorbing regression (areg), that is, a linear regression absorbing one categorical factor. To do this, you can run the model linearmodels.iv.absorbing.AbsorbingLS(endog, exog, absorb=categorical_variable_to_absorb).
See the example below:
import pandas as pd
import statsmodels.api as sm
from linearmodels.iv import absorbing

dta = pd.read_csv('http://www.math.smith.edu/~bbaumer/mth247/labs/airline.csv')
dta.rename(columns={'I': 'airline',
                    'T': 'year',
                    'Q': 'output',
                    'C': 'cost',
                    'PF': 'fuel',
                    'LF ': 'load'}, inplace=True)
Next, transform the absorbing variable into a categorical variable (in this case, I will use the airline variable):
cats = pd.DataFrame({'airline': pd.Categorical(dta['airline'])})
Then, just run the model:
exog_variables = ['output', 'fuel', 'load']
endog_variable = ['cost']
exog = sm.add_constant(dta[exog_variables])
endog = dta[endog_variable]
model = absorbing.AbsorbingLS(endog, exog, absorb=cats, drop_absorbed=True)
model_res = model.fit(cov_type='unadjusted', debiased=True)
print(model_res.summary)
Below are the results of this same model in both Python and Stata (using the command areg cost output fuel load, absorb(airline)):
Python:
Absorbing LS Estimation Summary
==================================================================================
Dep. Variable: cost R-squared: 0.9974
Estimator: Absorbing LS Adj. R-squared: 0.9972
No. Observations: 90 F-statistic: 3827.4
Date: Thu, Oct 27 2022 P-value (F-stat): 0.0000
Time: 20:58:04 Distribution: F(3,81)
Cov. Estimator: unadjusted R-squared (No Effects): 0.9926
Varaibles Absorbed: 5.0000
Parameter Estimates
==============================================================================
Parameter Std. Err. T-stat P-value Lower CI Upper CI
------------------------------------------------------------------------------
const 9.7135 0.2229 43.585 0.0000 9.2701 10.157
output 0.9193 0.0290 31.691 0.0000 0.8616 0.9770
fuel 0.4175 0.0148 28.303 0.0000 0.3881 0.4468
load -1.0704 0.1957 -5.4685 0.0000 -1.4599 -0.6809
==============================================================================
Stata:
Linear regression, absorbing indicators Number of obs = 90
F( 3, 81) = 3604.80
Prob > F = 0.0000
R-squared = 0.9974
Adj R-squared = 0.9972
Root MSE = .06011
------------------------------------------------------------------------------
cost | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
output | .9192846 .0298901 30.76 0.000 .8598126 .9787565
fuel | .4174918 .0151991 27.47 0.000 .3872503 .4477333
load | -1.070396 .20169 -5.31 0.000 -1.471696 -.6690963
_cons | 9.713528 .229641 42.30 0.000 9.256614 10.17044
-------------+----------------------------------------------------------------
airline | F(5, 81) = 57.732 0.000 (6 categories)

Related

regression for percentages - different results in r, python and matlab

I have percentages and need to calculate a regression. According to basic statistics, logistic regression is better suited than OLS here, since percentages violate the requirement of a continuous and unconstrained value space.
So far, so good.
However, I get different results in R, Python, and Matlab. In fact, Matlab even reports significant values where Python does not.
My models look like:
R:
summary(glm(foo ~ 1 + bar + baz , family = "binomial", data = <<data>>))
Python via statsmodels:
smf.logit('foo ~ 1 + bar + baz', <<data>>).fit().summary()
Matlab:
fitglm(<<data>>,'foo ~ 1 + bar + baz','Link','logit')
where Matlab currently produces the best results.
Could there be different initialization values? Different solvers? Different settings for alphas when computing p-values?
How can I get the same results at least in similar numeric ranges or same features detected as significant? I do not require exact equal numeric output.
Edit: the summary statistics
python:
Dep. Variable: foo No. Observations: 104
Model: Logit Df Residuals: 98
Method: MLE Df Model: 5
Date: Wed, 28 Aug 2019 Pseudo R-squ.: inf
Time: 06:48:12 Log-Likelihood: -0.25057
converged: True LL-Null: 0.0000
LLR p-value: 1.000
coef std err z P>|z| [0.025 0.975]
Intercept -16.9863 154.602 -0.110 0.913 -320.001 286.028
bar -0.0278 0.945 -0.029 0.977 -1.880 1.824
baz 18.5550 280.722 0.066 0.947 -531.650 568.760
a 9.9996 153.668 0.065 0.948 -291.184 311.183
b 0.6757 132.542 0.005 0.996 -259.102 260.454
d 0.0005 0.039 0.011 0.991 -0.076 0.077
R:
glm(formula = myformula, family = "binomial", data = r_x)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.046466 -0.013282 -0.001017 0.006217 0.104467
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.699e+01 1.546e+02 -0.110 0.913
bar -2.777e-02 9.449e-01 -0.029 0.977
baz 1.855e+01 2.807e+02 0.066 0.947
a 1.000e+01 1.537e+02 0.065 0.948
b 6.757e-01 1.325e+02 0.005 0.996
d 4.507e-04 3.921e-02 0.011 0.991
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 0.049633 on 103 degrees of freedom
Residual deviance: 0.035684 on 98 degrees of freedom
AIC: 12.486
Matlab:
Estimated Coefficients:
Estimate SE tStat pValue
_________ __________ ________ __________
(Intercept) -21.044 3.315 -6.3483 6.8027e-09
bar -0.033507 0.022165 -1.5117 0.13383
d 0.0016149 0.00083173 1.9416 0.055053
baz 21.427 6.0132 3.5632 0.00056774
a 14.875 3.7828 3.9322 0.00015712
b -1.2126 2.7535 -0.44038 0.66063
104 observations, 98 error degrees of freedom
Estimated Dispersion: 1.25e-06
F-statistic vs. constant model: 7.4, p-value = 6.37e-06
You are not actually using the binomial distribution in the MATLAB case. You are specifying the link function, but the distribution remains at its default, a normal distribution, which will not give you the expected logistic fit, at least if the sample sizes for the percentages are small. It also gives you lower p-values, because the normal distribution is less constrained in its variance than the binomial distribution is.
You need to set the 'Distribution' argument to 'binomial':
fitglm(<<data>>, 'foo ~ 1 + bar + baz', 'Distribution', 'binomial', 'Link', 'logit')
The R and Python code seem to match rather well.

Logistic Regression different results with R and Python?

I used a logistic regression approach in both programs, and was wondering why I am getting different results, especially with the coefficients. The outcome, Infection, is (1, 0) and Flushed is a continuous variable.
Python:
import statsmodels.api as sm
logit_model=sm.Logit(data['INFECTION'], data['Flushed'])
result=logit_model.fit()
print(result.summary())
Results:
Logit Regression Results
==============================================================================
Dep. Variable: INFECTION No. Observations: 414
Model: Logit Df Residuals: 413
Method: MLE Df Model: 0
Date: Fri, 24 Aug 2018 Pseudo R-squ.: -1.388
Time: 15:47:42 Log-Likelihood: -184.09
converged: True LL-Null: -77.104
LLR p-value: nan
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
Flushed -0.6467 0.070 -9.271 0.000 -0.783 -0.510
==============================================================================
R:
mylogit <- glm(INFECTION ~ Flushed, data = cvc, family = "binomial")
summary(mylogit)
Results:
Call:
glm(formula = INFECTION ~ Flushed, family = "binomial", data = cvc)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.0598 -0.3107 -0.2487 -0.2224 2.8051
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -3.91441 0.38639 -10.131 < 2e-16 ***
Flushed 0.22696 0.06049 3.752 0.000175 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
You seem to be missing the constant (intercept) term in the Python logistic model.
In R's formula syntax, you are fitting two different models:
Python model: INFECTION ~ 0 + Flushed
R model:      INFECTION ~ Flushed
To add a constant to the Python model, use sm.add_constant(...).
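A minimal sketch of that fix, assuming the same `data` DataFrame with INFECTION and Flushed columns as in the question:
import statsmodels.api as sm

# Add an intercept column so the model matches R's INFECTION ~ Flushed
X = sm.add_constant(data['Flushed'])
logit_model = sm.Logit(data['INFECTION'], X)
result = logit_model.fit()
print(result.summary())  # the Flushed coefficient should now be close to R's estimate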

Scikit-Learn: Std.Error, p-Value from LinearRegression

I've been trying to get the standard errors and p-values using LinearRegression from scikit-learn, but with no success.
I ended up finding this article, but the standard errors and p-values do not match those from the statsmodels.api OLS method.
import numpy as np
from sklearn import datasets
from sklearn import linear_model
import regressor
import statsmodels.api as sm
boston = datasets.load_boston()
which_betas = np.ones(13, dtype=bool)
which_betas[3] = False
X = boston.data[:,which_betas]
y = boston.target
#scikit + regressor stats
ols = linear_model.LinearRegression()
ols.fit(X,y)
xlables = boston.feature_names[which_betas]
regressor.summary(ols, X, y, xlables)
# statsmodel
x2 = sm.add_constant(X)
models = sm.OLS(y,x2)
result = models.fit()
print result.summary()
Output as follows:
Residuals:
Min 1Q Median 3Q Max
-26.3743 -1.9207 0.6648 2.8112 13.3794
Coefficients:
Estimate Std. Error t value p value
_intercept 36.925033 4.915647 7.5117 0.000000
CRIM -0.112227 0.031583 -3.5534 0.000416
ZN 0.047025 0.010705 4.3927 0.000014
INDUS 0.040644 0.055844 0.7278 0.467065
NOX -17.396989 3.591927 -4.8434 0.000002
RM 3.845179 0.272990 14.0854 0.000000
AGE 0.002847 0.009629 0.2957 0.767610
DIS -1.485557 0.180530 -8.2289 0.000000
RAD 0.327895 0.061569 5.3257 0.000000
TAX -0.013751 0.001055 -13.0395 0.000000
PTRATIO -0.991733 0.088994 -11.1438 0.000000
B 0.009827 0.001126 8.7256 0.000000
LSTAT -0.534914 0.042128 -12.6973 0.000000
---
R-squared: 0.73547, Adjusted R-squared: 0.72904
F-statistic: 114.23 on 12 features
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.735
Model: OLS Adj. R-squared: 0.729
Method: Least Squares F-statistic: 114.2
Date: Sun, 21 Aug 2016 Prob (F-statistic): 7.59e-134
Time: 21:56:26 Log-Likelihood: -1503.8
No. Observations: 506 AIC: 3034.
Df Residuals: 493 BIC: 3089.
Df Model: 12
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 36.9250 5.148 7.173 0.000 26.811 47.039
x1 -0.1122 0.033 -3.405 0.001 -0.177 -0.047
x2 0.0470 0.014 3.396 0.001 0.020 0.074
x3 0.0406 0.062 0.659 0.510 -0.081 0.162
x4 -17.3970 3.852 -4.516 0.000 -24.966 -9.828
x5 3.8452 0.421 9.123 0.000 3.017 4.673
x6 0.0028 0.013 0.214 0.831 -0.023 0.029
x7 -1.4856 0.201 -7.383 0.000 -1.881 -1.090
x8 0.3279 0.067 4.928 0.000 0.197 0.459
x9 -0.0138 0.004 -3.651 0.000 -0.021 -0.006
x10 -0.9917 0.131 -7.547 0.000 -1.250 -0.734
x11 0.0098 0.003 3.635 0.000 0.005 0.015
x12 -0.5349 0.051 -10.479 0.000 -0.635 -0.435
==============================================================================
Omnibus: 190.837 Durbin-Watson: 1.015
Prob(Omnibus): 0.000 Jarque-Bera (JB): 897.143
Skew: 1.619 Prob(JB): 1.54e-195
Kurtosis: 8.663 Cond. No. 1.51e+04
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.51e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
I've also found the following articles
Find p-value (significance) in scikit-learn LinearRegression
http://connor-johnson.com/2014/02/18/linear-regression-with-python/
Neither of the code samples in the SO link compiles.
Here are the code and data I'm working on, but I have not been able to get the standard errors and p-values:
import pandas as pd
import statsmodels.api as sm
import numpy as np
import scipy
from sklearn.linear_model import LinearRegression
from sklearn import metrics
def readFile(filename, sheetname):
    xlsx = pd.ExcelFile(filename)
    data = xlsx.parse(sheetname, skiprows=1)
    return data

def lr_statsmodel(X, y):
    X = sm.add_constant(X)
    model = sm.OLS(y, X)
    results = model.fit()
    print (results.summary())

def lr_scikit(X, y, featureCols):
    model = LinearRegression()
    results = model.fit(X, y)
    predictions = results.predict(X)
    print 'Coefficients'
    print 'Intercept\t', results.intercept_
    df = pd.DataFrame(zip(featureCols, results.coef_))
    print df.to_string(index=False, header=False)
    # Query:: The numbers matches with Excel OLS but skeptical about relating score as rsquared
    rSquare = results.score(X, y)
    print '\nR-Square::', rSquare
    # This looks like a better option
    # source: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score
    r2 = metrics.r2_score(y, results.predict(X))
    print 'r2', r2
    # Query: No clue at all! http://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics
    print 'Rsquared?!', metrics.explained_variance_score(y, results.predict(X))
    # INFO:: All three of them are providing the same figures!
    # Adj-Rsquare formula # https://www.easycalculation.com/statistics/learn-adjustedr2.php
    # In ML, we don't use all of the data for training, and hence its highly unusual to find AdjRsquared. Thus the need for manual calculation
    N = X.shape[0]
    p = X.shape[1]
    adjRsquare = 1 - ((1 - rSquare) * (N - 1) / (N - p - 1))
    print "Adjusted R-Square::", adjRsquare
    # calculate standard errors
    # mean_absolute_error
    # mean_squared_error
    # median_absolute_error
    # r2_score
    # explained_variance_score
    mse = metrics.mean_squared_error(y, results.predict(X))
    print mse
    print 'Residual Standard Error:', np.sqrt(mse)
    # OLS in Matrix : https://github.com/nsh87/regressors/blob/master/regressors/stats.py
    n = X.shape[0]
    X1 = np.hstack((np.ones((n, 1)), np.matrix(X)))
    se_matrix = scipy.linalg.sqrtm(
        metrics.mean_squared_error(y, results.predict(X)) *
        np.linalg.inv(X1.T * X1)
    )
    print 'se', np.diagonal(se_matrix)
    # https://github.com/nsh87/regressors/blob/master/regressors/stats.py
    # http://regressors.readthedocs.io/en/latest/usage.html
    y_hat = results.predict(X)
    sse = np.sum((y_hat - y) ** 2)
    print 'Standard Square Error of the Model:', sse

if __name__ == '__main__':
    # read file
    fileData = readFile('Linear_regression.xlsx', 'Input Data')
    # list of independent variables
    feature_cols = ['Price per week', 'Population of city', 'Monthly income of riders', 'Average parking rates per month']
    # build dependent & independent data set
    X = fileData[feature_cols]
    y = fileData['Number of weekly riders']
    # Statsmodel - OLS
    # lr_statsmodel(X,y)
    # ScikitLearn - OLS
    lr_scikit(X, y, feature_cols)
My data-set
Y X1 X2 X3 X4
City Number of weekly riders Price per week Population of city Monthly income of riders Average parking rates per month
1 1,92,000 $15 18,00,000 $5,800 $50
2 1,90,400 $15 17,90,000 $6,200 $50
3 1,91,200 $15 17,80,000 $6,400 $60
4 1,77,600 $25 17,78,000 $6,500 $60
5 1,76,800 $25 17,50,000 $6,550 $60
6 1,78,400 $25 17,40,000 $6,580 $70
7 1,80,800 $25 17,25,000 $8,200 $75
8 1,75,200 $30 17,25,000 $8,600 $75
9 1,74,400 $30 17,20,000 $8,800 $75
10 1,73,920 $30 17,05,000 $9,200 $80
11 1,72,800 $30 17,10,000 $9,630 $80
12 1,63,200 $40 17,00,000 $10,570 $80
13 1,61,600 $40 16,95,000 $11,330 $85
14 1,61,600 $40 16,95,000 $11,600 $100
15 1,60,800 $40 16,90,000 $11,800 $105
16 1,59,200 $40 16,30,000 $11,830 $105
17 1,48,800 $65 16,40,000 $12,650 $105
18 1,15,696 $102 16,35,000 $13,000 $110
19 1,47,200 $75 16,30,000 $13,224 $125
20 1,50,400 $75 16,20,000 $13,766 $130
21 1,52,000 $75 16,15,000 $14,010 $150
22 1,36,000 $80 16,05,000 $14,468 $155
23 1,26,240 $86 15,90,000 $15,000 $165
24 1,23,888 $98 15,95,000 $15,200 $175
25 1,26,080 $87 15,90,000 $15,600 $175
26 1,51,680 $77 16,00,000 $16,000 $190
27 1,52,800 $63 16,10,000 $16,200 $200
I've exhausted all my options and whatever I could make sense of. Any guidance on how to compute standard errors and p-values that match statsmodels.api would be appreciated.
EDIT: I'm trying to find the standard errors and p-values for the intercept and all the independent variables.
Here reg is the fitted sklearn linear regression model. To calculate adjusted R-squared:
def adjustedR2(x, y, reg):
    r2 = reg.score(x, y)
    n = x.shape[0]
    p = x.shape[1]
    adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return adjusted_r2
And for p-values:
from sklearn.feature_selection import f_regression
freg = f_regression(x, y)
p = freg[1]
print(p.round(3))
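Note that f_regression returns univariate p-values, so they will generally differ from the multivariate t-test p-values in the statsmodels summary. A rough sketch of reproducing statsmodels-style standard errors and p-values from a fitted LinearRegression follows; the helper name ols_se_pvalues is made up here, and it assumes X is a 2-D array of predictors, y the target, and reg fitted with the default intercept:
import numpy as np
from scipy import stats

def ols_se_pvalues(X, y, reg):
    """Standard errors and two-sided p-values for [intercept, coefficients]."""
    n, k = X.shape
    X1 = np.column_stack([np.ones(n), np.asarray(X)])   # design matrix with intercept
    resid = np.asarray(y) - reg.predict(X)
    dof = n - k - 1                                      # residual degrees of freedom
    sigma2 = np.sum(resid ** 2) / dof                    # unbiased residual variance
    cov = sigma2 * np.linalg.inv(X1.T @ X1)              # coefficient covariance matrix
    se = np.sqrt(np.diag(cov))
    params = np.concatenate([[reg.intercept_], reg.coef_])
    t_stats = params / se
    p_values = 2 * stats.t.sf(np.abs(t_stats), dof)
    return se, p_values
Dividing by the residual degrees of freedom (rather than n, as the mean_squared_error-based attempt in the question does) is what makes these match the statsmodels output.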

Pandas: Implementing Breusch-Pagan with Panel data

I am currently using the following code to estimate PanelOLS:
Y = df['billsum']
X = df[['years_exp', 'leg_totalbills', 'amtsum', 'amtsumlag.1', 'cfcontrol', 'sen',
        'Republican']]
X = add_constant(X)
from pandas.stats.plm import PanelOLS
reg = PanelOLS(Y, X, time_effects=True)
print('MODEL 1: OLS Regression Results',reg)
MODEL 1: OLS Regression Results
-------------------------Summary of Regression Analysis-------------------------
Formula: Y ~ <const> + <years_exp> + <leg_totalbills> + <amtsum> + <amtsumlag.1>
+ <cfcontrol> + <sen> + <Republican>
Number of Observations: 6930
Number of Degrees of Freedom: 17
R-squared: 0.7081
Adj R-squared: 0.7074
Rmse: 0.2423
F-stat (8, 6913): 1048.1396, p-value: 0.0000
Degrees of Freedom: model 16, resid 6913
-----------------------Summary of Estimated Coefficients------------------------
Variable Coef Std Err t-stat p-value CI 2.5% CI 97.5%
--------------------------------------------------------------------------------
const 0.0000 nan nan nan nan nan
years_exp 0.0205 0.0005 43.71 0.0000 0.0196 0.0214
leg_totalbills 0.0148 0.0005 32.94 0.0000 0.0139 0.0157
amtsum -0.0003 0.0001 -3.17 0.0015 -0.0005 -0.0001
amtsumlag.1 0.0005 0.0001 5.03 0.0000 0.0003 0.0007
--------------------------------------------------------------------------------
cfcontrol 0.3629 0.0168 21.61 0.0000 0.3299 0.3958
sen 0.0598 0.0177 3.38 0.0007 0.0251 0.0944
Republican 0.6540 0.0114 57.38 0.0000 0.6317 0.6764
---------------------------------End of Summary---------------------------------
I want to do the Breusch-Pagan test for Heteroskedasticity:
statsmodels.stats.diagnostic.het_breushpagan(resid, exog_het)
I know that I am supposed to pass in the residuals (probably as an array) and exog_het, which in my case would be X. The problem is that I do not know how to get PanelOLS to output the residuals. Actually, I'm not sure whether the residuals are the Std Err values reported in the PanelOLS output. So, the question: where do the residuals show up in the regression output, and how can I get pandas to output them separately so that I can feed them into the Breusch-Pagan test?
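A hedged sketch of one way to do this, assuming the modern linearmodels PanelOLS (pandas.stats.plm has since been removed from pandas), that df carries an (entity, time) MultiIndex, and the current statsmodels spelling het_breuschpagan:
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from linearmodels.panel import PanelOLS

# Y and X as defined in the question; X already includes the constant
res = PanelOLS(Y, X, time_effects=True).fit()

resid = res.resids                                   # residuals as a pandas Series
bp_lm, bp_lm_pvalue, bp_f, bp_f_pvalue = het_breuschpagan(resid, X)
print(bp_lm_pvalue, bp_f_pvalue)
The residuals are not the Std Err column in the summary; they are the per-observation differences between the dependent variable and the fitted values.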

How to get R-squared for robust regression (RLM) in Statsmodels?

When it comes to measuring goodness of fit, R-squared seems to be a commonly understood (and accepted) measure for "simple" linear models.
But in the case of statsmodels (as well as other statistical software), RLM does not include R-squared in its regression results.
Is there a way to calculate it "manually", perhaps similar to how it is done in Stata?
Or is there another measure that can be used or calculated from the results produced by sm.RLM?
This is what Statsmodels is producing:
import numpy as np
import statsmodels.api as sm
# Sample Data with outliers
nsample = 50
x = np.linspace(0, 20, nsample)
x = sm.add_constant(x)
sig = 0.3
beta = [5, 0.5]
y_true = np.dot(x, beta)
y = y_true + sig * 1. * np.random.normal(size=nsample)
y[[39,41,43,45,48]] -= 5 # add some outliers (10% of nsample)
# Regression with Robust Linear Model
res = sm.RLM(y, x).fit()
print(res.summary())
Which outputs:
Robust linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 50
Model: RLM Df Residuals: 48
Method: IRLS Df Model: 1
Norm: HuberT
Scale Est.: mad
Cov Type: H1
Date: Mo, 27 Jul 2015
Time: 10:00:00
No. Iterations: 17
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 5.0254 0.091 55.017 0.000 4.846 5.204
x1 0.4845 0.008 61.555 0.000 0.469 0.500
==============================================================================
Since OLS returns an R2, I would suggest regressing the actual values against the fitted values using simple linear regression. Regardless of where the fitted values come from, such an approach would give you an indication of the corresponding R2.
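A minimal sketch of that suggestion, assuming `res` and `y` are the fitted RLM result and response from the question's code:
import statsmodels.api as sm

# Regress the observed y on the RLM fitted values; the R-squared of this
# auxiliary OLS serves as a rough goodness-of-fit indicator for the RLM fit.
aux = sm.OLS(y, sm.add_constant(res.fittedvalues)).fit()
print(aux.rsquared)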
R2 is not a good measure of goodness of fit for RLM models. The problem is that the outliers have a huge effect on the R2 value, to the point where it is completely determined by outliers. Using weighted regression afterwards is an attractive alternative, but it is better to look at the p-values, standard errors and confidence intervals of the estimated coefficients.
Comparing the OLS summary to RLM (results are slightly different to yours due to different randomization):
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.726
Model: OLS Adj. R-squared: 0.721
Method: Least Squares F-statistic: 127.4
Date: Wed, 03 Nov 2021 Prob (F-statistic): 4.15e-15
Time: 09:33:40 Log-Likelihood: -87.455
No. Observations: 50 AIC: 178.9
Df Residuals: 48 BIC: 182.7
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 5.7071 0.396 14.425 0.000 4.912 6.503
x1 0.3848 0.034 11.288 0.000 0.316 0.453
==============================================================================
Omnibus: 23.499 Durbin-Watson: 2.752
Prob(Omnibus): 0.000 Jarque-Bera (JB): 33.906
Skew: -1.649 Prob(JB): 4.34e-08
Kurtosis: 5.324 Cond. No. 23.0
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Robust linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 50
Model: RLM Df Residuals: 48
Method: IRLS Df Model: 1
Norm: HuberT
Scale Est.: mad
Cov Type: H1
Date: Wed, 03 Nov 2021
Time: 09:34:24
No. Iterations: 17
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 5.1857 0.111 46.590 0.000 4.968 5.404
x1 0.4790 0.010 49.947 0.000 0.460 0.498
==============================================================================
If the model instance has been used for another fit with different fit parameters, then the fit options might not be the correct ones anymore .
You can see that the standard errors and size of the confidence interval decreases in going from OLS to RLM for both the intercept and the slope term. This suggests that the estimates are closer to the real values.
Why not use model.predict to obtain the R2? For example:
r2 = 1. - np.sum(np.abs(model.predict(X) - y) ** 2) / np.sum(np.abs(y - np.mean(y)) ** 2)
