I have a continuous dependent variable y (sedimentation) and an independent categorical variable x named control_grid, which contains two levels: c and g.
Using the Python package statsmodels, I am trying to see whether the independent variable has a significant effect on y, as such:
model = smf.ols('sedimentation ~ control_grid', data=df)
results = model.fit()
table = sm.stats.anova_lm(results, typ=2)
This gives the following output:
OLS Regression Results
==============================================================================
Dep. Variable: sedimentation R-squared: 0.167
Model: OLS Adj. R-squared: 0.165
Method: Least Squares F-statistic: 86.84
Date: Fri, 13 Jul 2018 Prob (F-statistic): 5.99e-19
Time: 16:15:51 Log-Likelihood: -2019.2
No. Observations: 436 AIC: 4042.
Df Residuals: 434 BIC: 4050.
Df Model: 1
Covariance Type: nonrobust
=====================================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------------
Intercept -6.0243 1.734 -3.474 0.001 -9.433 -2.616
control_grid[T.g] 22.2504 2.388 9.319 0.000 17.558 26.943
==============================================================================
Omnibus: 30.623 Durbin-Watson: 1.064
Prob(Omnibus): 0.000 Jarque-Bera (JB): 45.853
Skew: -0.510 Prob(JB): 1.10e-10
Kurtosis: 4.218 Cond. No. 2.69
==============================================================================
In the table where the coefficients are shown, I don't understand the depiction of my independent variable.
it says:
control_grid[T.g]
What is the "T"?
And is it only looking at one of the two variables? Only at the effect of "g" and not at "c"?
If you go here, you see that in the summary the categorical variable Region is shown for all four levels "N", "S", "E" and "W".
P.S. my data looks as such:
index sedimentation control_grid
0 5.0 c
1 10.0 g
2 0.0 c
3 -10.0 c
4 0.0 g
5 -20.0 g
6 30.0 g
7 40.0 g
8 -10.0 c
9 45.0 g
10 45.0 g
11 10.0 c
12 10.0 g
13 10.0 c
14 6.0 g
15 10.0 c
16 29.0 c
17 3.0 g
18 23.0 c
19 34.0 g
I am not an expert, but I'll try to explain it. First, you should know that ANOVA is a regression analysis, so you are building a model Y ~ X, but in ANOVA X is a categorical variable. In your case Y = sedimentation and X = control_grid (which is categorical), so the model is "sedimentation ~ control_grid".
OLS performs a regression analysis, so it estimates the parameters of a linear model Y = B0 + B1*X. But, because your X is categorical, it is dummy coded, which means X can only take the values 0 or 1, consistent with categorical data. Be aware that in ANOVA the number of parameters estimated equals the number of categories minus 1; in your data there are only 2 categories (g and c), so only one parameter is shown in your OLS report. "T.g" means that this parameter corresponds to the "g" category, so your model is Y = B0 + B_g*X.
Now, the parameter for the "c" category is absorbed into the intercept B0, so effectively your model is:
Y = B_c + B_g*X, where X is 0 or 1 depending on whether the observation is "c" or "g".
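You can check this directly on the data: the intercept equals the mean of the "c" group, and the T.g coefficient equals the difference between the "g" and "c" group means. A minimal sketch using the first ten rows from the question:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "sedimentation": [5.0, 10.0, 0.0, -10.0, 0.0, -20.0, 30.0, 40.0, -10.0, 45.0],
    "control_grid":  ["c", "g", "c", "c", "g", "g", "g", "g", "c", "g"],
})

results = smf.ols("sedimentation ~ control_grid", data=df).fit()

# Intercept = mean of the reference level "c";
# control_grid[T.g] = mean of "g" minus mean of "c".
print(results.params)
print(df.groupby("control_grid")["sedimentation"].mean())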
So, you are asking:
1) What is the "T"?
The "T" indicates treatment (dummy) coding, which is patsy's default for categorical variables: one level is taken as the reference, and "T.g" tells you that the estimated parameter shown corresponds to the category "g" (relative to that reference).
2) And is it only looking at one of the two variables?
No, the analysis estimates parameters for both categories (c and g), but the intercept B0 represents the coefficient for the other level of the category, in your data "c".
3) Only at the effect of "g" and not at "c"?
No; in fact the analysis looks at the effect of both "g" and "c". If you look at the values of the T.g coefficient and the Intercept (which plays the role of T.c) and their p-values, you can tell whether they are significant, and hence whether they have an effect on "sedimentation".
Cheers,
Related
I have the following code. I am running a linear model on the dataframe 'x', with gender and highest education level achieved as categorical variables.
The aim is to assess how well age, gender and highest level of education achieved can predict 'weighteddistance'.
resultmodeldistancevariation2sleep = smf.ols(formula='weighteddistance ~ age + C(gender) + C(highest_education_level_acheived)',data=x).fit()
summarymodel = resultmodeldistancevariation2sleep.summary()
print(summarymodel)
This gives me the output:
0 1 2 3 4 5 6
0 coef std err t P>|t| [0.025 0.975]
1 Intercept 6.3693 1.391 4.580 0.000 3.638 9.100
2 C(gender)[T.2.0] 0.2301 0.155 1.489 0.137 -0.073 0.534
3 C(gender)[T.3.0] 0.0302 0.429 0.070 0.944 -0.812 0.872
4 C(highest_education_level_acheived)[T.3] 1.1292 0.501 2.252 0.025 0.145 2.114
5 C(highest_education_level_acheived)[T.4] 1.0876 0.513 2.118 0.035 0.079 2.096
6 C(highest_education_level_acheived)[T.5] 1.0692 0.498 2.145 0.032 0.090 2.048
7 C(highest_education_level_acheived)[T.6] 1.2995 0.525 2.476 0.014 0.269 2.330
8 C(highest_education_level_acheived)[T.7] 1.7391 0.605 2.873 0.004 0.550 2.928
However, I want to calculate the main effect of each categorical variable on distance, which is not shown in the summary above, so I passed the model fit to an ANOVA using 'anova_lm'.
anovaoutput = sm.stats.anova_lm(resultmodeldistancevariation2sleep)
anovaoutput['PR(>F)'] = anovaoutput['PR(>F)'].round(4)
This gives me the output below which, as I wanted, shows the main effect of each categorical variable (gender and highest education level achieved) rather than the individual groups within each variable (i.e. there is no gender[2.0] or gender[3.0] in the output below).
df sum_sq mean_sq F PR(>F)
C(gender) 2.0 4.227966 2.113983 5.681874 0.0036
C(highest_education_level_acheived) 5.0 11.425706 2.285141 6.141906 0.0000
age 1.0 8.274317 8.274317 22.239357 0.0000
Residual 647.0 240.721120 0.372057 NaN NaN
However, this output no longer shows me the confidence intervals or the coefficients for each variable.
In other words, I would like the bottom ANOVA table to have 'coef' and '[0.025 0.975]' columns like the first table.
How can I achieve this?
I would be so grateful for a helping hand!
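One thing worth noting (a sketch, not from the original post): once a factor has more than two levels there is no single coefficient per factor, so the ANOVA table cannot literally carry a 'coef' column; the per-level coefficients and their confidence intervals live in the fitted results and can be assembled separately, for example:

import pandas as pd

# Reuses the fitted model from above; params and conf_int() hold the
# per-level coefficients and their 0.025/0.975 bounds.
coef_table = pd.concat(
    [resultmodeldistancevariation2sleep.params,
     resultmodeldistancevariation2sleep.conf_int()],
    axis=1,
)
coef_table.columns = ["coef", "0.025", "0.975"]
print(coef_table)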
I am currently trying to reproduce a regression model, eq. (3) (edit: fixed link), in Python using statsmodels. As this model is not part of the standard models provided by statsmodels, I clearly have to write it myself using the provided formula API.
Since I have never worked with the formula API (or patsy, for that matter), I wanted to start by verifying my approach: reproducing standard models with the formula API and a generalized linear model. My code and the results for a Poisson regression are given below at the end of my question.
You will see that all three models recover the parameters beta = (2, -3, 1) with good accuracy. However, I have a couple of questions:
How do I explicitly add covariates to the GLM model with a regression coefficient fixed to 1?
From what I understand, a Poisson regression with exposure has the shape counts = exp(intercept + beta * x + log(exposure)), i.e. log(exposure) enters the linear predictor with a coefficient fixed at 1. I would like to reproduce this behaviour in my GLM model, i.e. I want something like counts = exp(intercept + beta * x + k * log(exposure)), where k is a fixed constant, expressed as a formula.
Simply using formula = "1 + x1 + x2 + x3 + np.log(exposure)" returns a perfect separation error (why?). I can bypass that by adding some random noise to y, but in that case np.log(exposure) has a non-unity regression coefficient, i.e. it is treated as a normal regression covariate.
Apparently both built-in models 1 and 2 have no intercept, even though I tried to explicitly add one in model 2. Or is there a hidden intercept that is simply not reported? In either case, how do I fix that?
Any help would be greatly appreciated, so thanks in advance!
import numpy as np
import pandas as pd
np.random.seed(1+8+2022)
# Number of random samples
n = 4000
# Intercept
alpha = 1
# Regression Coefficients
beta = np.asarray([2.0,-3.0,1.0])
# Random Data
data = {
"x1" : np.random.normal(1.00,0.10, size = n),
"x2" : np.random.normal(1.50,0.15, size = n),
"x3" : np.random.normal(-2.0,0.20, size = n),
"exposure": np.random.poisson(14, size = n),
}
# Calculate the response
x = np.asarray([data["x1"], data["x2"] , data["x3"]]).T
t = np.asarray(data["exposure"])
# Add response to random data
data["y"] = np.exp(alpha + np.dot(x,beta) + np.log(t))
# Convert dict to df
data = pd.DataFrame(data)
print(data)
#-----------------------------------------------------
# My Model
#-----------------------------------------------------
import statsmodels.api as sm
import statsmodels.formula.api as smf
formula = "y ~ x1 + x2 + x3"
model = smf.glm(formula=formula, data=data, family=sm.families.Poisson()).fit()
print(model.summary())
#-----------------------------------------------------
# statsmodels.discrete.discrete_model.Poisson 1
#-----------------------------------------------------
import statsmodels.api as sm
data["offset"] = np.ones(n)
model = sm.Poisson( endog = data["y"],
exog = data[["x1", "x2", "x3"]],
exposure = data["exposure"],
offset = data["offset"]).fit()
print(model.summary())
#-----------------------------------------------------
# statsmodels.discrete.discrete_model.Poisson 2
#-----------------------------------------------------
import statsmodels.api as sm
data["x1"] = sm.add_constant(data["x1"])
model = sm.Poisson( endog = data["y"],
exog = data[["x1", "x2", "x3"]],
exposure = data["exposure"]).fit()
print(model.summary())
RESULTS:
x1 x2 x3 exposure y
0 1.151771 1.577677 -1.811903 13 0.508422
1 0.897012 1.678311 -2.327583 22 0.228219
2 1.040250 1.471962 -1.705458 13 0.621328
3 0.866195 1.512472 -1.766108 17 0.478107
4 0.925470 1.399320 -1.886349 13 0.512518
... ... ... ... ... ...
3995 1.073945 1.365260 -1.755071 12 0.804081
3996 0.855000 1.251951 -2.173843 11 0.439639
3997 0.892066 1.710856 -2.183085 10 0.107643
3998 0.763777 1.538938 -2.013619 22 0.363551
3999 1.056958 1.413922 -1.722252 19 1.098932
[4000 rows x 5 columns]
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: y No. Observations: 4000
Model: GLM Df Residuals: 3996
Model Family: Poisson Df Model: 3
Link Function: log Scale: 1.0000
Method: IRLS Log-Likelihood: -2743.7
Date: Sat, 08 Jan 2022 Deviance: 141.11
Time: 09:32:32 Pearson chi2: 140.
No. Iterations: 4
Covariance Type: nonrobust
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 3.6857 0.378 9.755 0.000 2.945 4.426
x1 2.0020 0.227 8.800 0.000 1.556 2.448
x2 -3.0393 0.148 -20.604 0.000 -3.328 -2.750
x3 0.9937 0.114 8.719 0.000 0.770 1.217
==============================================================================
Optimization terminated successfully.
Current function value: 0.668293
Iterations 10
Poisson Regression Results
==============================================================================
Dep. Variable: y No. Observations: 4000
Model: Poisson Df Residuals: 3997
Method: MLE Df Model: 2
Date: Sat, 08 Jan 2022 Pseudo R-squ.: 0.09462
Time: 09:32:32 Log-Likelihood: -2673.2
converged: True LL-Null: -2952.6
Covariance Type: nonrobust LLR p-value: 4.619e-122
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.0000 0.184 10.875 0.000 1.640 2.360
x2 -3.0000 0.124 -24.160 0.000 -3.243 -2.757
x3 1.0000 0.094 10.667 0.000 0.816 1.184
==============================================================================
Optimization terminated successfully.
Current function value: 0.677893
Iterations 5
Poisson Regression Results
==============================================================================
Dep. Variable: y No. Observations: 4000
Model: Poisson Df Residuals: 3997
Method: MLE Df Model: 2
Date: Sat, 08 Jan 2022 Pseudo R-squ.: 0.08162
Time: 09:32:32 Log-Likelihood: -2711.6
converged: True LL-Null: -2952.6
Covariance Type: nonrobust LLR p-value: 2.196e-105
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.9516 0.304 9.711 0.000 2.356 3.547
x2 -2.9801 0.147 -20.275 0.000 -3.268 -2.692
x3 0.9807 0.113 8.655 0.000 0.759 1.203
==============================================================================
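For reference, a sketch (not part of the original post): in statsmodels, a term whose coefficient should stay fixed at 1, such as log(exposure) here, is usually passed as an offset rather than written into the formula as a regressor; the GLM interface also accepts an exposure argument that takes the log for you.

# Sketch: log(exposure) enters the linear predictor with its coefficient fixed at 1.
model_offset = smf.glm(
    formula="y ~ x1 + x2 + x3",           # the intercept is included by default
    data=data,
    family=sm.families.Poisson(),
    offset=np.log(data["exposure"]),      # or equivalently: exposure=data["exposure"]
).fit()
print(model_offset.summary())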
I'm trying to figure out how to incorporate lagged dependent variables into statsmodels or scikit-learn to forecast time series with AR terms, but I cannot seem to find a solution.
The general linear equation looks something like this:
y = B1*y(t-1) + B2*x1(t) + B3*x2(t-3) + e
I know I can use pd.Series.shift(t) to create lagged variables and then add it to be included in the model and generate parameters, but how can I get a prediction when the code does not know which variable is a lagged dependent variable?
In SAS's PROC AUTOREG, you can designate which variable is a lagged dependent variable and it will forecast accordingly, but it seems there are no options like that in Python.
Any help would be greatly appreciated and thank you in advance.
Since you've already mentioned statsmodels in your tags, you may want to take a look at statsmodels' ARIMA, i.e.:
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(endog=t, order=(2, 0, 0))  # t is your y series; p=2, d=0, q=0 gives an AR(2)
fit = model.fit()
fit.summary()
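A usage sketch (assuming the fitted AR(2) model above): out-of-sample forecasts then come straight from the results object, e.g.
print(fit.forecast(steps=5))  # forecast the next 5 periods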
But like you mentioned, you could create new variables manually the way you described (I used some random data):
import numpy as np
import pandas as pd
import statsmodels.api as sm
df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv', parse_dates=['date'])
df['random_variable'] = np.random.randint(0, 10, len(df))
df['y'] = np.random.rand(len(df))
df.index = df['date']
df = df[['y', 'value', 'random_variable']]
df.columns = ['y', 'x1', 'x2']
shifts = 3
for variable in df.columns.values:
for t in range(1, shifts + 1):
df[f'{variable} AR({t})'] = df.shift(t)[variable]
df = df.dropna()
>>> df.head()
y x1 x2 ... x2 AR(1) x2 AR(2) x2 AR(3)
date ...
1991-10-01 0.715115 3.611003 7 ... 5.0 7.0 7.0
1991-11-01 0.202662 3.565869 3 ... 7.0 5.0 7.0
1991-12-01 0.121624 4.306371 7 ... 3.0 7.0 5.0
1992-01-01 0.043412 5.088335 6 ... 7.0 3.0 7.0
1992-02-01 0.853334 2.814520 2 ... 6.0 7.0 3.0
[5 rows x 12 columns]
I'm using the model you describe in your post:
model = sm.OLS(df['y'], df[['y AR(1)', 'x1', 'x2 AR(3)']])
fit = model.fit()
>>> fit.summary()
<class 'statsmodels.iolib.summary.Summary'>
"""
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.696
Model: OLS Adj. R-squared: 0.691
Method: Least Squares F-statistic: 150.8
Date: Tue, 08 Oct 2019 Prob (F-statistic): 6.93e-51
Time: 17:51:20 Log-Likelihood: -53.357
No. Observations: 201 AIC: 112.7
Df Residuals: 198 BIC: 122.6
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
y AR(1) 0.2972 0.072 4.142 0.000 0.156 0.439
x1 0.0211 0.003 6.261 0.000 0.014 0.028
x2 AR(3) 0.0161 0.007 2.264 0.025 0.002 0.030
==============================================================================
Omnibus: 2.115 Durbin-Watson: 2.277
Prob(Omnibus): 0.347 Jarque-Bera (JB): 1.712
Skew: 0.064 Prob(JB): 0.425
Kurtosis: 2.567 Cond. No. 41.5
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
"""
Hope this helps you get started.
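One more note on forecasting: to actually produce a forecast with this setup, you have to supply the lagged value of y yourself, since OLS has no idea that 'y AR(1)' is lagged y. A sketch, assuming a hypothetical value for next period's x1:

import numpy as np

y_lag1 = df['y'].iloc[-1]     # most recent observed y becomes next period's 'y AR(1)'
next_x1 = 4.0                 # hypothetical: x1 must be known or assumed for the forecast period
x2_lag3 = df['x2'].iloc[-3]   # x2 from three periods back, already observed

# Column order must match the exog passed to sm.OLS: ['y AR(1)', 'x1', 'x2 AR(3)']
y_next = fit.predict(np.array([[y_lag1, next_x1, x2_lag3]]))
print(y_next)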
I have percentages and need to calculate a regression. According to basic statistics, logistic regression is better suited than OLS here, because percentages violate the requirement of a continuous and unconstrained value space.
So far, so good.
However, I get different results in R, Python, and Matlab. In fact, Matlab even reports significant values where Python does not.
My models look like:
R:
summary(glm(foo ~ 1 + bar + baz , family = "binomial", data = <<data>>))
Python via statsmodels:
smf.logit('foo ~ 1 + bar + baz', <<data>>).fit().summary()
Matlab:
fitglm(<<data>>,'foo ~ 1 + bar + baz','Link','logit')
where Matlab currently produces the best results.
Could there be different initialization values? Different solvers? Different settings for alphas when computing p-values?
How can I get the same results at least in similar numeric ranges or same features detected as significant? I do not require exact equal numeric output.
Edit: the summary statistics.
Python:
Dep. Variable: foo No. Observations: 104
Model: Logit Df Residuals: 98
Method: MLE Df Model: 5
Date: Wed, 28 Aug 2019 Pseudo R-squ.: inf
Time: 06:48:12 Log-Likelihood: -0.25057
converged: True LL-Null: 0.0000
LLR p-value: 1.000
coef std err z P>|z| [0.025 0.975]
Intercept -16.9863 154.602 -0.110 0.913 -320.001 286.028
bar -0.0278 0.945 -0.029 0.977 -1.880 1.824
baz 18.5550 280.722 0.066 0.947 -531.650 568.760
a 9.9996 153.668 0.065 0.948 -291.184 311.183
b 0.6757 132.542 0.005 0.996 -259.102 260.454
d 0.0005 0.039 0.011 0.991 -0.076 0.077
R:
glm(formula = myformula, family = "binomial", data = r_x)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.046466 -0.013282 -0.001017 0.006217 0.104467
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.699e+01 1.546e+02 -0.110 0.913
bar -2.777e-02 9.449e-01 -0.029 0.977
baz 1.855e+01 2.807e+02 0.066 0.947
a 1.000e+01 1.537e+02 0.065 0.948
b 6.757e-01 1.325e+02 0.005 0.996
d 4.507e-04 3.921e-02 0.011 0.991
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 0.049633 on 103 degrees of freedom
Residual deviance: 0.035684 on 98 degrees of freedom
AIC: 12.486
Matlab:
Estimated Coefficients:
Estimate SE tStat pValue
_________ __________ ________ __________
(Intercept) -21.044 3.315 -6.3483 6.8027e-09
bar -0.033507 0.022165 -1.5117 0.13383
d 0.0016149 0.00083173 1.9416 0.055053
baz 21.427 6.0132 3.5632 0.00056774
a 14.875 3.7828 3.9322 0.00015712
b -1.2126 2.7535 -0.44038 0.66063
104 observations, 98 error degrees of freedom
Estimated Dispersion: 1.25e-06
F-statistic vs. constant model: 7.4, p-value = 6.37e-06
You are not actually using the binomial distribution in the MATLAB case. You are specifying the link function, but the distribution remains at its default, the normal distribution, which will not give you the expected logistic fit, at least if the sample sizes behind the percentages are small. It also gives you lower p-values, because the normal distribution is less constrained in its variance than the binomial distribution.
You need to set the Distribution argument to binomial:
fitglm(<<data>>, 'foo ~ 1 + bar + baz', 'Distribution', 'binomial', 'Link', 'logit')
The R and Python code seem to match rather well.
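As a side note, a sketch (not from the original answer): for proportion-valued outcomes, the closest Python analogue of R's glm(..., family = "binomial") is a binomial GLM rather than the discrete Logit model; statsmodels' Binomial family accepts a response between 0 and 1:

import statsmodels.api as sm
import statsmodels.formula.api as smf

# 'data' is assumed to be the DataFrame with the columns used above.
fit = smf.glm("foo ~ 1 + bar + baz + a + b + d",
              data=data,
              family=sm.families.Binomial()).fit()
print(fit.summary())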
I am still a noob when it comes to statistics.
I am using the Python package statsmodels, with its patsy formula functionality.
My pandas dataframe looks as such:
index sed label c_g lvl1 lvl2
0 5.0 SP_A c b c
1 10.0 SP_B g b c
2 0.0 SP_C c b c
3 -10.0 SP_H c b c
4 0.0 SP_J g b c
5 -20.0 SP_K g b c
6 30.0 SP_W g a a
7 40.0 SP_X g a a
8 -10.0 SP_Y c a a
9 45.0 SP_BB g a a
10 45.0 SP_CC g a a
11 10.0 SP_A c b c
12 10.0 SP_B g b c
13 10.0 SP_C c b c
14 6.0 SP_D g b c
15 10.0 SP_E c b c
16 29.0 SP_F c b c
17 3.0 SP_G g b c
18 23.0 SP_H c b c
19 34.0 SP_J g b c
Dependent variable: Sedimentation (longitudinal data)
Independent variables: Label (categorical), control_grid (categorical), lvl1 (categorical), lvl2 (categorical).
I am interested in two things.
Which independent variables have a significant effect on the dependent variable?
Which independent variables show significant interactions?
After having searched and read multiple documents, I do this as such:
import statsmodels.formula.api as smf
import pandas as pd
df = pd.read_csv('some.csv')
model = smf.ols(formula = 'sedimentation ~ lvl1*lvl2',data=df)
results = model.fit()
results.summary()
With results showing:
OLS Regression Results
==============================================================================
Dep. Variable: sedimentation R-squared: 0.129
Model: OLS Adj. R-squared: 0.124
Method: Least Squares F-statistic: 24.91
Date: Tue, 17 Jul 2018 Prob (F-statistic): 4.80e-15
Time: 11:15:28 Log-Likelihood: -2353.6
No. Observations: 510 AIC: 4715.
Df Residuals: 506 BIC: 4732.
Df Model: 3
Covariance Type: nonrobust
=======================================================================================
coef std err t P>|t| [0.025 0.975]
---------------------------------------------------------------------------------------
Intercept 6.9871 1.611 4.338 0.000 3.823 10.151
lvl1[T.b] -3.7990 1.173 -3.239 0.001 -6.103 -1.495
lvl1[T.d] -3.5124 1.400 -2.509 0.012 -6.263 -0.762
lvl2[T.b] -8.9427 1.155 -7.744 0.000 -11.212 -6.674
lvl2[T.c] 5.1436 0.899 5.722 0.000 3.377 6.910
lvl2[T.f] -3.5124 1.400 -2.509 0.012 -6.263 -0.762
lvl1[T.b]:lvl2[T.b] -8.9427 1.155 -7.744 0.000 -11.212 -6.674
lvl1[T.d]:lvl2[T.b] 0 0 nan nan 0 0
lvl1[T.b]:lvl2[T.c] 5.1436 0.899 5.722 0.000 3.377 6.910
lvl1[T.d]:lvl2[T.c] 0 0 nan nan 0 0
lvl1[T.b]:lvl2[T.f] 0 0 nan nan 0 0
lvl1[T.d]:lvl2[T.f] -3.5124 1.400 -2.509 0.012 -6.263 -0.762
==============================================================================
Omnibus: 13.069 Durbin-Watson: 1.118
Prob(Omnibus): 0.001 Jarque-Bera (JB): 18.495
Skew: -0.224 Prob(JB): 9.63e-05
Kurtosis: 3.818 Cond. No. inf
==============================================================================
Am I using the correct model in Python to get my desired results?
I think I am, but I would like to verify. The way I read the table is that the categorical variables lvl1 and lvl2 have a significant effect on the dependent variable AND show significant interaction (for some of the levels). However, I don't understand why not all of my levels are showing... as you can see in my data, the lvl1 column also contains "a", but this level is not shown in the results summary.
I am not an expert and I fear I can't tell you what the correct test for longitudinal data is, but I think the numbers you got can't really be trusted that much.
First, the easy part of the answer, regarding "why not all of my variables are showing": in lvl1, for example, "a" is not showing because the model has to fix a "base" (reference) level. So you should read every entry as "the effect of having 'b' instead of 'a'", "the effect of having 'd' instead of 'a'", and so on. In more mathematical terms, if a categorical variable takes three values (a, b, d here), then when it is implicitly one-hot encoded you get three 0/1 columns whose sum is always 1. This means the design matrix A in the regression y = A x + b is always degenerate, and one column has to be dropped for it to have any chance of not being so (and thus for the regression coefficients to be interpretable at all).
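A quick way to see that degeneracy, as a small sketch with pandas' one-hot encoding (not part of the original answer):

import pandas as pd

lvl1 = pd.Series(["a", "b", "d", "b", "a"])
print(pd.get_dummies(lvl1))                   # columns a, b, d sum to 1 in every row
print(pd.get_dummies(lvl1, drop_first=True))  # reference level 'a' dropped, matching the summary above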
Concerning why I think the numbers can't be trusted: among the various assumptions of linear regression is independence of the observations (rows). With longitudinal data, this is exactly the assumption that fails. Pushing the example to the limit, if you observe a group of people (e.g. 11, as in your set) every second for a day, you get a huge data frame of nearly 1M rows, and every single person will have virtually the same data repeated over and over again. In this setting, any spurious correlation between the independent and dependent variables will look hugely significant to your model (to it, you have run 86,400 independent tests and they all confirmed the same conclusion!), while of course this is not the case.
Summing up, I can't say for sure that the regression coefficients you get are not the best guess you can hope for, but the t statistics, the p-values and everything else there that looks like a statistic don't make much sense.