I calculated the minimum variance hedge ratio (MVHR) of two securities' returns by:
1. Calculating the optimal h* = Cov(S, F) / Var(F) from sample estimates
2. Running an OLS regression and taking the slope as beta
The two values differ slightly; for example, I got h* = 0.9547 and beta = 0.9537, but they are supposed to be identical. Why is that?
Below is my code:
import numpy as np
import statsmodels.api as sm
indexAvg = np.mean(indexRets)   # sample means used in the covariance loop
secAvg = np.mean(secRets)
var = np.var(secRets, ddof=1)
cov_num = 0.0
cov_denom = len(secRets) - 1
for i in range(len(secRets)):
    cov_num += (indexRets[i] - indexAvg) * (secRets[i] - secAvg)
cov = cov_num / cov_denom
h = cov / var
ols_res = sm.OLS(indexRets, secRets).fit()  # note: no constant added
beta = ols_res.params[0]
print(h, beta)
indexRets and secRets are lists of daily returns of the index and the security (futures), respectively.
This is another case of a missing constant in an OLS regression. The covariance and variance calculations subtract the mean, which corresponds to including a constant in the linear regression. statsmodels does not add a constant by default unless you use the formula interface.
For more details and an example, see OLS of statsmodels does not work with inversely proportional data?
Also, you can replace the Python loop that calculates the covariance with a call to numpy.cov, as in the sketch below.
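A minimal sketch of both fixes, assuming indexRets and secRets as defined in the question (sm.add_constant supplies the intercept, and np.cov replaces the loop):
import numpy as np
import statsmodels.api as sm

# Covariance via numpy; [0, 1] picks Cov(index, sec) out of the 2x2 matrix
cov = np.cov(indexRets, secRets)[0, 1]
h = cov / np.var(secRets, ddof=1)

# OLS with an explicit constant; the slope should now equal h exactly
exog = sm.add_constant(np.asarray(secRets))
beta = sm.OLS(indexRets, exog).fit().params[1]
print(h, beta)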
I'm trying to compute the coefficient errors of a regression (also known as the standard errors of the parameter estimates) using statsmodels, but I need their "unscaled" version. I've only managed to do so with NumPy.
You can see the meaning of "unscaled" in the docs: https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html
cov bool or str, optional
If given and not False, return not just the estimate but also its covariance matrix.
By default, the covariance are scaled by chi2/dof, where dof = M - (deg + 1),
i.e., the weights are presumed to be unreliable except in a relative sense and
everything is scaled such that the reduced chi2 is unity. This scaling is omitted
if cov='unscaled', as is relevant for the case that the weights are w = 1/sigma, with
sigma known to be a reliable estimate of the uncertainty.
I'm using this data to run the rest of the code in this post:
import numpy as np
x = np.array([-0.841, -0.399, 0.599, 0.203, 0.527, 0.129, 0.703, 0.503])
y = np.array([1.01, 1.24, 1.09, 0.95, 1.02, 0.97, 1.01, 0.98])
sigmas = np.array([6872.26, 80.71, 47.97, 699.94, 57.55, 1561.54, 311.98, 501.08])
# The weight conventions differ: statsmodels expects 1/sigma**2, numpy 1/sigma
sm_weights = np.array([1.0/sigma**2 for sigma in sigmas])
np_weights = np.array([1.0/sigma for sigma in sigmas])
With NumPy:
coefficients, cov = np.polyfit(x, y, deg=2, w=np_weights, cov='unscaled')
# The errors I need to get
print(np.sqrt(np.diag(cov))) # [917.57938013 191.2100413 211.29028248]
If I compute the regression using statsmodels:
from sklearn.preprocessing import PolynomialFeatures
import statsmodels.api as smapi
polynomial_features = PolynomialFeatures(degree=2)
polynomial = polynomial_features.fit_transform(x.reshape(-1, 1))
model = smapi.WLS(y, polynomial, weights=sm_weights)
regression = model.fit()
# Get coefficient errors
# Notice the [::-1], statsmodels returns the coefficients in the reverse order NumPy does
print(regression.bse[::-1]) # [0.24532856, 0.05112286, 0.05649161]
So the values I get are different, but related:
np_errors = np.sqrt(np.diag(cov))
sm_errors = regression.bse[::-1]
print(np_errors / sm_errors) # [3740.2061481, 3740.2061481, 3740.2061481]
The NumPy documentation says the covariance are scaled by chi2/dof where dof = M - (deg + 1). So I tried the following:
degree = 2
model_predictions = np.polyval(coefficients, x)
residuals = (model_predictions - y)
chi_squared = np.sum(residuals**2)
degrees_of_freedom = len(x) - (degree + 1)
scale_factor = chi_squared / degrees_of_freedom
sm_cov = regression.cov_params()
unscaled_errors = np.sqrt(np.diag(sm_cov * scale_factor))[::-1] # [0.09848423, 0.02052266, 0.02267789]
unscaled_errors = np.sqrt(np.diag(sm_cov / scale_factor))[::-1] # [0.61112427, 0.12734931, 0.14072311]
What I notice is that the covariance matrix I get from NumPy is much larger than the one I get from statsmodels:
>>> cov
array([[ 841951.9188366 , -154385.61049538, -188456.18957375],
[-154385.61049538, 36561.27989418, 31208.76422516],
[-188456.18957375, 31208.76422516, 44643.58346933]])
>>> regression.cov_params()
array([[ 0.0031913 , 0.00223093, -0.0134716 ],
[ 0.00223093, 0.00261355, -0.0110361 ],
[-0.0134716 , -0.0110361 , 0.0601861 ]])
As long as I can't make them equivalent, I won't be able to get the same errors. Any idea of what the difference in scale could mean and how to make both covariance matrices equal?
The statsmodels documentation is not well organized in some parts.
Here is a notebook with an example of the following:
https://www.statsmodels.org/devel/examples/notebooks/generated/chi2_fitting.html
The regression models in statsmodels, like OLS and WLS, have an option to keep the scale fixed. This is the equivalent of cov="unscaled" in numpy and scipy.
The statsmodels option is more general, because it allows fixing the scale at any user defined value.
https://www.statsmodels.org/devel/generated/statsmodels.regression.linear_model.OLSResults.get_robustcov_results.html
If we have a model as defined in the example, either OLS or WLS, then using
regression = model.fit(cov_type="fixed scale")
will keep the scale at 1 and the resulting covariance matrix is unscaled.
Using
regression = model.fit(cov_type="fixed scale", cov_kwds={"scale": 2})
will keep the scale fixed at the value 2.
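A hedged sketch applying this to the question's WLS fit (reusing model and the data from above); with the scale fixed at 1, the standard errors should line up with NumPy's cov='unscaled' errors, up to the reversed coefficient order:
# Refit with the covariance scale pinned to 1 (the default for "fixed scale")
regression_unscaled = model.fit(cov_type="fixed scale")

# These should now match np.sqrt(np.diag(cov)) from the polyfit call above
print(regression_unscaled.bse[::-1])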
(Some links to related discussion and motivation are in https://github.com/statsmodels/statsmodels/pull/2137.)
Caution
The fixed scale cov_type will be used for inferential statistics that are based on the covariance of the parameter estimates, cov_params.
This affects standard errors, t-tests, Wald tests, and confidence and prediction intervals.
However, some other result statistics might not be adjusted to use the fixed scale instead of the estimated scale, e.g. resid_pearson.
https://github.com/statsmodels/statsmodels/issues/8190
When using the .summary() function in statsmodels, the OLS Regression Results include the following fields:
coef std err t P>|t| [0.025 0.975]
How can I get the standardised coefficients (which exclude the intercept), similarly to what is achievable in SPSS?
You just need to standardize your original DataFrame first, using z-scores, and then fit a linear regression.
Assume your DataFrame is named df, with independent variables x1, x2, and x3 and dependent variable y. Consider the following code:
import pandas as pd
import numpy as np
from scipy import stats
import statsmodels.formula.api as smf
# standardizing dataframe
df_z = df.select_dtypes(include=[np.number]).dropna().apply(stats.zscore)
# fitting regression
formula = 'y ~ x1 + x2 + x3'
result = smf.ols(formula, data=df_z).fit()
# checking results
result.summary()
Now, the coef will show you the standardized (beta) coefficients so that you can compare their influence on your dependent variable.
Notes:
Please keep in mind that you need .dropna(). Otherwise, stats.zscore will return all NaN for a column if it has any missing values.
Instead of using .select_dtypes(), you can select column manually but make sure all the columns you selected are numeric.
If you only care about the standardized (beta) coefficients, you can use result.params to return just those. They will usually be displayed in scientific notation; use something like round(result.params, 5) to round them.
We can just transform the estimated params by the standard deviation of the exog. results.t_test(transformation) computes the parameter table for the linearly transformed variables.
AFAIR, the following should produce the beta coefficients and corresponding inferential statistics.
Compute standard deviation, but set it to 1 for the constant.
std = model.exog.std(0)
std[0] = 1
Then use results.t_test and look at the params_table. np.diag(std) creates a diagonal matrix that transforms the params.
tt = results.t_test(np.diag(std))
print(tt.summary())
tt.summary_frame()
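A self-contained sketch of the steps above on simulated data (all variable names here are illustrative, not from the original answer):
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ [1.5, -2.0, 0.5] + rng.normal(size=200)

model = sm.OLS(y, sm.add_constant(X))
results = model.fit()

# Standard deviations of the regressors; keep the constant's at 1
std = model.exog.std(0)
std[0] = 1

# t_test with a diagonal transform rescales the params and their inference
tt = results.t_test(np.diag(std))
print(tt.summary())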
You can convert unstandardized coefficients to standardized (beta) coefficients using the standard deviations. The standardized coefficient (beta) is what driver analysis requires. Below is the code that works for me.
X holds the independent variables, y is the dependent variable, and coefficients holds the OLS coefficients extracted via model.params.
sd_x = X.std()
sd_y = y.std()
beta_coefficients = []

# Iterate through independent variables and calculate beta coefficients
for i, col in enumerate(X.columns):
    beta = coefficients[i] * (sd_x[col] / sd_y)
    beta_coefficients.append([col, beta])

# Print beta coefficients
for var, beta in beta_coefficients:
    print(f' {var}: {beta}')
I am running a regression as follows (df is a pandas dataframe):
import statsmodels.api as sm
est = sm.OLS(df['p'], df[['e', 'varA', 'meanM', 'varM', 'covAM']]).fit()
est.summary()
Which gave me, among others, an R-squared of 0.942. So then I wanted to plot the original y-values and the fitted values. For this, I sorted the original values:
import numpy as np
import matplotlib.pyplot as plt

orig = df['p'].values
fitted = est.fittedvalues.values
args = np.argsort(orig)

plt.plot(orig[args], 'bo')
plt.plot(fitted[args], 'ro')   # fitted = orig - resid
plt.show()
This, however, gave me a graph where the values were completely off. Nothing that would suggest an R-squared of 0.9. Therefore, I tried to calculate it manually myself:
yBar = df['p'].mean()
SSTot = df['p'].apply(lambda x: (x-yBar)**2).sum()
SSReg = ((est.fittedvalues - yBar)**2).sum()
1 - SSReg/SSTot
Out[79]: 0.2618159806908984
Am I doing something wrong? Or is there a reason why my computation is so far off what statsmodels is getting? SSTot, SSReg have values of 48084, 35495.
If you do not include an intercept (constant explanatory variable) in your model, statsmodels computes R-squared based on the un-centred total sum of squares, i.e.
tss = (ys ** 2).sum() # un-centred total sum of squares
as opposed to
tss = ((ys - ys.mean())**2).sum() # centred total sum of squares
As a result, R-squared will be much higher.
This is mathematically correct, because R-squared should indicate how much of the variation is explained by the full model compared to the reduced model. If you define your model as:
ys = beta1 . xs + beta0 + noise
then the reduced model can be ys = beta0 + noise, where the estimate for beta0 is the sample average; thus we have noise = ys - ys.mean(). That is where de-meaning comes from in a model with an intercept.
But from a model like:
ys = beta . xs + noise
you may only reduce to: ys = noise. Since noise is assumed zero-mean, you may not de-mean ys. Therefore, unexplained variation in the reduced model is the un-centred total sum of squares.
This is documented here under the rsquared item. Set yBar equal to zero, and I would expect you to get the same number.
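A quick sketch of the difference on simulated data (illustrative names, not the question's df):
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2)) + 1.0
y = x @ [2.0, 3.0] + rng.normal(size=100)

res = sm.OLS(y, x).fit()  # no constant included
ss_res = (res.resid**2).sum()
print(res.rsquared)                            # statsmodels, un-centred tss
print(1 - ss_res / (y**2).sum())               # matches res.rsquared
print(1 - ss_res / ((y - y.mean())**2).sum())  # centred version, lower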
If your model is:
a = <yourmodel>.fit()
Then, to compute fitted values:
a.fittedvalues
and to compute R squared:
a.rsquared
The following code fits an oversimplified generalized linear model using statsmodels:
model = smf.glm('Y ~ 1', family=sm.families.NegativeBinomial(), data=df)
results = model.fit()
This gives the coefficient and its standard error:
coef stderr
Intercept 2.9471 0.120
Now I want to graphically compare the real distribution of the variable Y (histogram) with the distribution that comes from the model.
But I need the two parameters r and p to evaluate stats.nbinom(r, p) and plot it.
Is there a way to retrieve the parameters from the results of the fitting?
How can I plot the PMF?
Generalized linear models (GLM) in statsmodels currently do not estimate the extra parameter of the Negative Binomial distribution. The Negative Binomial belongs to the exponential family of distributions only for a fixed shape parameter.
However, statsmodels also has Negative Binomial as a maximum likelihood model in discrete_model, which estimates all parameters.
The parameterization of the Negative Binomial for count regression is in terms of the mean or expected value, which is different from the parameterization in scipy.stats.nbinom. Actually, there are two commonly used parameterizations for Negative Binomial count regression, usually called nb1 and nb2.
Here is a quickly written script that recovers the scipy.stats.nbinom parameters, n=size and p=prob, from the estimated parameters. Once you have the parameters for the scipy.stats distribution, you can use all the available methods: rvs, pmf, and so on.
Something like this should be made available in statsmodels.
In a few example runs, I got results like this
data generating parameters 50 0.25
estimated params 51.7167511571 0.256814610633
estimated params 50.0985814878 0.249989725917
Aside: because of the underlying exponential reparameterization, the scipy optimizers sometimes have problems converging. In those cases, either providing better starting values or using Nelder-Mead as the optimization method usually helps.
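For example, something like this (a hedged illustration, using y, x, and loglike_method from the script below):
# Switch the optimizer to Nelder-Mead via method='nm'
res = sm.NegativeBinomial(y, x, loglike_method=loglike_method).fit(
    start_params=[0.1, 0.1], method='nm')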
import numpy as np
from scipy import stats
import statsmodels.api as sm

# generate some data to check
nobs = 1000
n, p = 50, 0.25
dist0 = stats.nbinom(n, p)
y = dist0.rvs(size=nobs)
x = np.ones(nobs)

loglike_method = 'nb1'  # or use 'nb2'
res = sm.NegativeBinomial(y, x, loglike_method=loglike_method).fit(start_params=[0.1, 0.1])

print(dist0.mean())
print(res.params)

mu = res.predict()           # use this for the mean if exog is not constant
mu = np.exp(res.params[0])   # shortcut, we just regress on a constant
alpha = res.params[1]

if loglike_method == 'nb1':
    Q = 1
elif loglike_method == 'nb2':
    Q = 0

size = 1. / alpha * mu**Q
prob = size / (size + mu)

print('data generating parameters', n, p)
print('estimated params          ', size, prob)

# estimated distribution
dist_est = stats.nbinom(size, prob)
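To answer the plotting part of the question, here is a minimal sketch comparing the histogram of y with the estimated PMF (matplotlib is assumed to be available):
import matplotlib.pyplot as plt

# Integer-centred bins so the histogram lines up with the PMF points
edges = np.arange(y.min(), y.max() + 2) - 0.5
support = np.arange(y.min(), y.max() + 1)
plt.hist(y, bins=edges, density=True, alpha=0.5, label='data')
plt.plot(support, dist_est.pmf(support), 'ro', label='estimated PMF')
plt.legend()
plt.show()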
BTW: I ran into this before but didn't have time to look at it
https://github.com/statsmodels/statsmodels/issues/106
numpy.average() has a weights option, but numpy.std() does not. Does anyone have suggestions for a workaround?
How about the following short "manual calculation"?
import math
import numpy

def weighted_avg_and_std(values, weights):
    """
    Return the weighted average and standard deviation.
    values, weights -- NumPy ndarrays with the same shape.
    """
    average = numpy.average(values, weights=weights)
    # Fast and numerically precise:
    variance = numpy.average((values - average)**2, weights=weights)
    return (average, math.sqrt(variance))
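For example (illustrative numbers):
vals = numpy.array([1.0, 2.0, 3.0])
wts = numpy.array([1.0, 1.0, 4.0])
print(weighted_avg_and_std(vals, wts))  # (2.5, ~0.764)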
There is a class in statsmodels that makes it easy to calculate weighted statistics: statsmodels.stats.weightstats.DescrStatsW.
Assuming this dataset and weights:
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW
array = np.array([1,2,1,2,1,2,1,3])
weights = np.ones_like(array)
weights[3] = 100
You initialize the class (note that you have to pass in the correction factor, the delta degrees of freedom, at this point):
weighted_stats = DescrStatsW(array, weights=weights, ddof=0)
Then you can calculate:
.mean the weighted mean:
>>> weighted_stats.mean
1.97196261682243
.std the weighted standard deviation:
>>> weighted_stats.std
0.21434289609681711
.var the weighted variance:
>>> weighted_stats.var
0.045942877107170932
.std_mean the standard error of weighted mean:
>>> weighted_stats.std_mean
0.020818822467555047
Just in case you're interested in the relation between the standard error and the standard deviation: for ddof == 0, the standard error is calculated as the weighted standard deviation divided by the square root of the sum of the weights minus 1 (corresponding source for statsmodels version 0.9 on GitHub):
standard_error = standard_deviation / sqrt(sum(weights) - 1)
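Checking this against the example above:
import numpy as np
print(weighted_stats.std / np.sqrt(weights.sum() - 1))  # ~0.0208188, equals std_mean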
Here's one more option:
np.sqrt(np.cov(values, aweights=weights))
There doesn't appear to be such a function in numpy/scipy yet, but there is a ticket proposing this added functionality. Included there you will find Statistics.py which implements weighted standard deviations.
There is a very good example proposed by gaborous:
import pandas as pd
import numpy as np

# X is the dataset, as a Pandas DataFrame

# Compute the weighted sample mean (fast, efficient and precise)
mean = np.ma.average(X, axis=0, weights=weights)

# Convert to a Pandas Series (it's just aesthetic and more
# ergonomic; no difference in computed values)
mean = pd.Series(mean, index=list(X.keys()))

xm = X - mean   # xm = X diff to mean

# Fill NaN with 0 (because anyway a variance of 0 is just void,
# but at least it keeps the other covariance values computed correctly)
xm = xm.fillna(0)

# Compute the unbiased weighted sample covariance
sigma2 = 1. / (weights.sum() - 1) * xm.mul(weights, axis=0).T.dot(xm)
Correct equation for weighted unbiased sample covariance, URL (version: 2016-06-28)
A follow-up on "sample" or "unbiased" standard deviation in the "frequency weights" sense, since a "weighted sample standard deviation python" Google search leads to this post:
from math import sqrt

def frequency_sample_std_dev(X, n):
    """
    Sample standard deviation for X and n,
    where X[i] is the quantity each person in group i has,
    and n[i] is the number of people in group i.
    See Equation 6.4 of:
    Montgomery, Douglas, C. and George C. Runger. Applied Statistics
    and Probability for Engineers, Enhanced eText. Available from:
    WileyPLUS, (7th Edition). Wiley Global Education US, 2018.
    """
    n_people = sum(n)
    lhs_numerator = sum([ni * Xi**2 for Xi, ni in zip(X, n)])
    rhs_numerator = sum([Xi * ni for Xi, ni in zip(X, n)])**2 / n_people
    denominator = n_people - 1
    var = (lhs_numerator - rhs_numerator) / denominator
    std = sqrt(var)
    return std
Or, modifying the answer by Eric as follows:
import numpy as np
from math import sqrt

def weighted_sample_avg_std(values, weights):
    """
    Return the weighted average and weighted sample standard deviation.
    values, weights -- NumPy ndarrays with the same shape.
    Assumes that weights contains only integers (e.g. how many samples in each group).
    See also https://en.wikipedia.org/wiki/Weighted_arithmetic_mean#Frequency_weights
    """
    average = np.average(values, weights=weights)
    variance = np.average((values - average)**2, weights=weights)
    variance = variance * sum(weights) / (sum(weights) - 1)
    return (average, sqrt(variance))

print(weighted_sample_avg_std(X, n))
I was just searching for an API equivalent of the numpy np.std function that also allows the axis parameter to be set.
(I just tested it with two dimensions, so feel free to improve it if something is incorrect.)
import numpy as np

def std(values, weights=None, axis=None):
    """
    Return the weighted standard deviation.
    axis -- the axis for the std calculation
    values, weights -- NumPy ndarrays with the same shape on the according axis.
    """
    average = np.average(values, weights=weights, axis=axis)
    if axis is not None:
        # keep the reduced axis so the subtraction below broadcasts correctly
        average = np.expand_dims(average, axis=axis)
    variance = np.average((values - average)**2, weights=weights, axis=axis)
    return np.sqrt(variance)
Thanks to Eric O Lebigot for the original answer.