I have a nonlinear model fit that looks like this:
The dark solid line is the model fit, and the grey part is the raw data.
Short version of the question: how do I get the likelihood of this model fit, so I can perform a log-likelihood ratio test? Assume that the residuals are normally distributed.
I am relatively new to statistics, and my current thoughts are:
Get the residuals from the curve fit, and calculate the variance of the residuals;
Use the Gaussian log-likelihood, ln L = -(n/2) * ln(2*pi*sigma^2) - (1/(2*sigma^2)) * sum_i (x_i - mu)^2,
and plug the residual variance in for sigma^2, the experimental values in for x_i, and the model fit in for mu;
Calculate the log-likelihood ratio (a rough sketch of these steps follows).
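For concreteness, here is a rough sketch of what I mean (y_data would be my measurements and y_model the model predictions at the same points):

import numpy as np
from scipy.stats import norm

# y_data: observed values, y_model: model predictions (placeholders for my actual arrays)
residuals = y_data - y_model
sigma = np.sqrt(np.var(residuals))                              # estimate sigma from the residuals
log_lik = norm.logpdf(y_data, loc=y_model, scale=sigma).sum()   # summed Gaussian log-density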
Could anyone help me with these two full-version questions?
Is my method correct? (I think so, but it would be really great to make sure!)
Are there any ready-made functions in python/scipy/statsmodels to do this for me?
Your likelihood function, which is simply the sum of the logs of the Gaussian probability density function, is the likelihood of fitting a mu and a sigma to your residuals, not the likelihood of your model given your data. In short, your approach is wrong.
Since you are doing nonlinear least squares, and following what @usethedeathstar already mentioned, you should go straight for the F-test. Consider the following example, modified from http://www.walkingrandomly.com/?p=5254; we conduct the F-test using R, and will discuss how to translate it into Python at the end.
# construct the data vectors using c()
> xdata = c(-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9)
> ydata = c(0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001)
# some starting values
> p1 = 1
> p2 = 0.2
> p3 = 0.01
# do the fit
> fit1 = nls(ydata ~ p1*cos(p2*xdata) + p2*sin(p1*xdata), start=list(p1=p1,p2=p2))
> fit2 = nls(ydata ~ p1*cos(p2*xdata) + p2*sin(p1*xdata)+p3*xdata, start=list(p1=p1,p2=p2,p3=p3))
# summarise
> summary(fit1)
Formula: ydata ~ p1 * cos(p2 * xdata) + p2 * sin(p1 * xdata)
Parameters:
Estimate Std. Error t value Pr(>|t|)
p1 1.881851 0.027430 68.61 2.27e-12 ***
p2 0.700230 0.009153 76.51 9.50e-13 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.08202 on 8 degrees of freedom
Number of iterations to convergence: 7
Achieved convergence tolerance: 2.189e-06
> summary(fit2)
Formula: ydata ~ p1 * cos(p2 * xdata) + p2 * sin(p1 * xdata) + p3 * xdata
Parameters:
Estimate Std. Error t value Pr(>|t|)
p1 1.90108 0.03520 54.002 1.96e-10 ***
p2 0.70657 0.01167 60.528 8.82e-11 ***
p3 0.02029 0.02166 0.937 0.38
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.08243 on 7 degrees of freedom
Number of iterations to convergence: 9
Achieved convergence tolerance: 2.476e-06
> anova(fit2, fit1)
Analysis of Variance Table
Model 1: ydata ~ p1 * cos(p2 * xdata) + p2 * sin(p1 * xdata) + p3 * xdata
Model 2: ydata ~ p1 * cos(p2 * xdata) + p2 * sin(p1 * xdata)
Res.Df Res.Sum Sq Df Sum Sq F value Pr(>F)
1 7 0.047565
2 8 0.053813 -1 -0.0062473 0.9194 0.3696
Here we have two models: fit1 has 2 parameters, so its residuals have 8 degrees of freedom; fit2 has one additional parameter, so its residuals have 7 degrees of freedom. Is model 2 significantly better? No: the F value is 0.9194 on (1, 7) degrees of freedom, which is not significant.
To reproduce the ANOVA table: the residual degrees of freedom are straightforward. The residual sums of squares are 0.08202*0.08202*8 = 0.05381 and 0.08243*0.08243*7 = 0.04756 (notice 'Residual standard error: 0.08243 on 7 degrees of freedom', etc.). In Python you can compute them as ((y_observed - y_fitted)**2).sum(), since scipy.optimize.curve_fit() does not return the residual sum of squares.
The F-ratio is 0.0062473 / (0.047565 / 7) = 0.9194, and the p-value is 1 - scipy.stats.f.cdf(0.9194, 1, 7).
Putting it all together, we have the Python equivalent:
In [1]:
import numpy as np
import scipy.optimize as so
import scipy.stats as ss

xdata = np.array([-2,-1.64,-1.33,-0.7,0,0.45,1.2,1.64,2.32,2.9])
ydata = np.array([0.699369,0.700462,0.695354,1.03905,1.97389,2.41143,1.91091,0.919576,-0.730975,-1.42001])

# the two nested models
def model0(x, p1, p2):
    return p1*np.cos(p2*x) + p2*np.sin(p1*x)

def model1(x, p1, p2, p3):
    return p1*np.cos(p2*x) + p2*np.sin(p1*x) + p3*x

# starting values and fits
p1, p2, p3 = 1, 0.2, 0.01
fit0 = so.curve_fit(model0, xdata, ydata, p0=(p1, p2))[0]
fit1 = so.curve_fit(model1, xdata, ydata, p0=(p1, p2, p3))[0]

# residual sums of squares
yfit0 = model0(xdata, fit0[0], fit0[1])
yfit1 = model1(xdata, fit1[0], fit1[1], fit1[2])
ssq0 = ((yfit0 - ydata)**2).sum()
ssq1 = ((yfit1 - ydata)**2).sum()

# F-test: one extra parameter, 7 residual degrees of freedom
df = len(xdata) - 3
f_ratio = (ssq0 - ssq1) / (ssq1 / df)
p = 1 - ss.f.cdf(f_ratio, 1, df)
In [2]:
print(f_ratio, p)
0.919387419515 0.369574503394
As @usethedeathstar pointed out: when the residuals are normally distributed, nonlinear least squares IS maximum likelihood. Therefore the F-test and the likelihood ratio test are equivalent, because the F-ratio is a monotone transformation of the likelihood ratio λ.
Or in a descriptive way, see: http://www.stata.com/support/faqs/statistics/chi-squared-and-f-distributions/
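To make the equivalence concrete, here is a small numerical check, reusing xdata, ssq0, ssq1, df and f_ratio from the Python block above:

import numpy as np

n = len(xdata)                        # 10 observations
lr_stat = n * np.log(ssq0 / ssq1)     # LR statistic for Gaussian errors, with sigma profiled out
print(lr_stat)                        # about 1.23
print(n * np.log(1 + f_ratio / df))   # identical: the F-ratio is a monotone function of the LR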
Your formula looks correct to me. It should give you the same results as scipy.stats.norm.logpdf(x, loc=mu, scale=sigma)
Since you already have your estimates of mu and sigma, I don't think there is a function for the likelihood ratio test where you can plug your results in.
If you have the estimates of two models, where one is nested in the other, then you can easily calculate it yourself.
http://en.wikipedia.org/wiki/Likelihood-ratio_test
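For example, a minimal sketch of such a hand-rolled test, given the two maximized log-likelihoods and the difference in the number of parameters:

from scipy.stats import chi2

def lr_test(llf_restricted, llf_full, df_diff):
    # likelihood ratio statistic and its asymptotic chi-squared p-value
    lr_stat = 2.0 * (llf_full - llf_restricted)
    p_value = chi2.sf(lr_stat, df_diff)
    return lr_stat, p_value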
Here is the part of a method in statsmodels that calculates the LR-test for comparing two nested linear models
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py#L1531
I am trying to learn Gaussian fitting using scipy and least-squares in Python.
I have a further question relating to the answer to this question: How to fit a double Gaussian distribution in Python?
From the code of that answer, how can I estimate the parameters c1, mu1, sigma1, c2, mu2, sigma2 in params = [c1, mu1, sigma1, c2, mu2, sigma2], since I want to use leastsq?
Hope this answer is not too late..
There is actually a simple way to estimate those parameters.
Here is how I do it in my own code for Gaussian fitting on spectra:
# Determination of the initial parameters
# (roiDeb and roiFin are the first and last indices of the region of interest)
imax, m = roiDeb, ydata[roiDeb]          # index and height of the peak
for j in range(roiDeb+1, roiFin+1):
    if ydata[j] > m:
        imax = j
        m = ydata[j]
mu0 = xdata[imax]                        # initial centroid
h0 = m                                   # initial height

# half-maximum crossings on each side of the peak
fwhmd, fwhmf = roiDeb, roiFin
for j in range(roiDeb, imax+1):
    if ydata[j] > h0/2:
        fwhmd = j
        break
for j in range(imax, roiFin+1):
    if ydata[j] < h0/2:
        fwhmf = j
        break
sigma0 = (xdata[fwhmf] - xdata[fwhmd]) / 2.355   # FWHM -> sigma
To determine the centroid you can just use an if condition and look for the highest y value in your region of interest.
Then you can calculate the full width at half maximum (FWHM) on both sides of the peak.
To finish, the formula sigma = FWHM / 2.355 can be derived simply (FWHM = 2*sqrt(2*ln(2))*sigma ≈ 2.355*sigma), or found on the internet.
I let you discover by yourself how to use those values to do a Gaussian fit...
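As a rough illustration (this is not the double-Gaussian code from the linked answer), the estimates above could be fed as starting values to scipy.optimize.curve_fit for a single Gaussian, where gauss is a simple model function:

import numpy as np
from scipy.optimize import curve_fit

# c is the height, mu the centre, sigma the width of the Gaussian
def gauss(x, c, mu, sigma):
    return c * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# h0, mu0 and sigma0 are the initial estimates computed above
popt, pcov = curve_fit(gauss, xdata, ydata, p0=(h0, mu0, sigma0))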
I am trying to fit some data using scipy.optimize.curve_fit. I have read the documentation and also this StackOverflow post, but neither seems to answer my question.
I have some simple 2D data which looks approximately like a trig function, and I want to fit it with a general trig function using scipy.
My approach is as follows:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Load the data
data = np.loadtxt('example_data.txt')
t = data[:,0]
y = data[:,1]

# Define the function to fit
def func_cos(t, A, omega, dphi, C):
    # A is the amplitude, omega the frequency, dphi and C the horizontal/vertical shifts
    return A*np.cos(omega*t + dphi) + C

# Do a scipy fit
popt, pcov = curve_fit(func_cos, t, y)

# Plot fit data and original data
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot2grid((1,1), (0,0))
ax1.plot(t, y)
ax1.plot(t, func_cos(t, *popt))
This outputs:
where blue is the data and orange is the fit. Clearly I am doing something wrong. Any pointers?
If no values are provided for the initial guess of the parameters p0, then a value of 1 is assumed for each of them. From the docs:
p0 : array_like, optional
Initial guess for the parameters (length N). If None, then the initial values will all be 1 (if the number of parameters for the function can be determined using introspection, otherwise a ValueError is raised).
Since your data has very large x-values and very small y-values an initial guess of 1 is far from the actual solution and hence the optimizer does not converge. You can help the optimizer by providing suitable initial parameter values that can be guessed / approximated from the data:
Amplitude: A = (y.max() - y.min()) / 2
Offset: C = (y.max() + y.min()) / 2
Frequency: here we can estimate the number of zero crossings by multiplying consecutive (offset-corrected) y-values and checking which products are smaller than zero. Dividing this count by the total x-range and multiplying by pi gives the angular frequency: y_shifted = y - offset; omega = np.pi * np.sum(y_shifted[:-1] * y_shifted[1:] < 0) / (t.max() - t.min())
Phase shift: can be set to zero, dphi = 0
So in summary, the following initial parameter guess can be used:
offset = (y.max() + y.min()) / 2
y_shifted = y - offset
p0 = (
    (y.max() - y.min()) / 2,
    np.pi * np.sum(y_shifted[:-1] * y_shifted[1:] < 0) / (t.max() - t.min()),
    0,
    offset
)
popt, pcov = curve_fit(func_cos, t, y, p0=p0)
Which gives me the following fit function:
I am having trouble computing a likelihood ratio test in Python 2.7.
I have two models and the corresponding likelihood values. I believe the rule for comparing whether model L2 is better than model L1 (if the models are closely related) is to look at -2 * log(L2/L1).
I then want to find the p-value corresponding to -2 * log(L2/L1) and relate this to the significance with which L2 is preferred over L1. Here is what I have so far:
import numpy as np
import scipy.special
from scipy.stats import chisqprob

L1 = 467400. # log(likelihood) of my 1st fit
L2 = 467414. # log(likelihood) of my 2nd fit

LR = -2. * np.log(L2 / L1) # LR = -5.9905e-05

p = chisqprob(LR, 1) # L2 has 1 DoF more than L1
print 'p: %.30f' % p # p = 1.000000000000000000000000000000

five_sigma = 1 - scipy.special.erf(5 / np.sqrt(2.)) # :-)
print '5 sigma: %.30f' % five_sigma

five_sigma_check = 1 - 0.999999426696856 # :-(
print 'Check : %.30f' % five_sigma_check
However, I run into two issues:
My p-value is coming out to be 1 when I'd have expected it to be close to 0.
When I use the formula on the line marked with the :-) to find five sigma, for example, it differs from the value quoted in the literature - that line is highlighted with a :-(. My value for five_sigma_check is taken from here.
Can anyone offer any advice, please? I'm relatively new to the world of Python and statistics.
Thanks.
To calculate the likelihood ratio given the log-likelihoods, use this formula:
from scipy.stats.distributions import chi2

def likelihood_ratio(llmin, llmax):
    return 2 * (llmax - llmin)

LR = likelihood_ratio(L1, L2)  # L1, L2: the log-likelihoods from the question

p = chi2.sf(LR, 1) # L2 has 1 DoF more than L1

print 'p: %.30f' % p
# p: 0.000000121315450836607258011741
Adding a little bit of theory, so that it will be helpful for someone to understand:
Here we have two models (assumed to be nested) and we want to compare how effectively they explain the data. Let's now implement the LR test with Python 3:
from scipy.stats import chi2
ll_0, ll_1 = 467400, 467414 # given, the log-likelihoods of the nested models m_0, m_1
# log likelihood for m_0 (H_0) must be <= log likelihood of m_1 (H_1)
Λ = -2 * (ll_0 - ll_1)
print(Λ)
# 28.0
df = 1 # given the difference in dof
# compute the p-value
pvalue = 1 - chi2(df).cdf(Λ) # since Λ follows χ2
print(pvalue)
# 1.2131545083660726e-07
Plotting the χ2 distribution, we can clearly see that we can reject the null hypothesis in favor of model m1 at α = 0.05.
import numpy as np
import matplotlib.pyplot as plt

α, df = 0.05, 1
x = np.linspace(0, 30, 1000)
plt.plot(x, chi2(df).pdf(x), label='χ2')
plt.axvline(chi2(df).ppf(1-α), color='red', label='α=0.05')
plt.scatter(Λ, 0, color='green', s=50, label='Λ')
plt.legend()
plt.title('χ2({}) distribution'.format(df), size=20)
I need to know how to generate 1000 random numbers between 500 and 600 that have a mean of 550 and a standard deviation of 30 in Python.
import pylab
import random

xrandn = pylab.zeros(1000, float)
for j in range(500, 601):
    xrandn[j] = pylab.randn()
???????
You are looking for stats.truncnorm:
import scipy.stats as stats
a, b = 500, 600
mu, sigma = 550, 30
dist = stats.truncnorm((a - mu) / sigma, (b - mu) / sigma, loc=mu, scale=sigma)
values = dist.rvs(1000)
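As a quick sanity check, you can confirm the generated sample respects the bounds:

# all draws lie inside the requested interval [500, 600]
print(values.min(), values.max())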
There are other choices for your problem too. Wikipedia has a list of continuous distributions with bounded intervals, depending on the distribution you may be able to get your required characteristics with the right parameters. For example, if you want something like "a bounded Gaussian bell" (not truncated) you can pick the (scaled) beta distribution:
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
def my_distribution(min_val, max_val, mean, std):
    scale = max_val - min_val
    location = min_val
    # Mean and standard deviation of the unscaled beta distribution
    unscaled_mean = (mean - min_val) / scale
    unscaled_var = (std / scale) ** 2
    # Computation of alpha and beta can be derived from mean and variance formulas
    t = unscaled_mean / (1 - unscaled_mean)
    beta = ((t / unscaled_var) - (t * t) - (2 * t) - 1) / ((t * t * t) + (3 * t * t) + (3 * t) + 1)
    alpha = beta * t
    # Not all parameters may produce a valid distribution
    if alpha <= 0 or beta <= 0:
        raise ValueError('Cannot create distribution for the given parameters.')
    # Make scaled beta distribution with computed parameters
    return scipy.stats.beta(alpha, beta, scale=scale, loc=location)
np.random.seed(100)
min_val = 1.5
max_val = 35
mean = 9.87
std = 3.1
my_dist = my_distribution(min_val, max_val, mean, std)
# Plot distribution PDF
x = np.linspace(min_val, max_val, 100)
plt.plot(x, my_dist.pdf(x))
# Stats
print('mean:', my_dist.mean(), 'std:', my_dist.std())
# Get a large sample to check bounds
sample = my_dist.rvs(size=100000)
print('min:', sample.min(), 'max:', sample.max())
Output:
mean: 9.87 std: 3.100000000000001
min: 1.9290674232087306 max: 25.03903889816994
Probability density function plot:
Note that not every possible combination of bounds, mean and standard deviation will produce a valid distribution in this case, though, and depending on the resulting values of alpha and beta the probability density function may look like an "inverted bell" instead (even though mean and standard deviation would still be correct).
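For reference, the alpha/beta computation inverts the standard beta-distribution moment formulas (mean = a/(a+b), variance = a*b/((a+b)**2*(a+b+1))); a small check with arbitrary example shape parameters:

import scipy.stats

a, b = 2.0, 3.0                                        # example shape parameters
d = scipy.stats.beta(a, b)
print(d.mean(), a / (a + b))                           # both 0.4
print(d.var(), a * b / ((a + b) ** 2 * (a + b + 1)))   # both 0.04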
I'm not exactly sure what the OP desired, but if they just wanted an array xrandn like the one in the bottom plot, below I present the steps:
First, create a normally (Gaussian) distributed sample; the easiest way might be to use numpy:
import numpy as np
random_nums = np.random.normal(loc=550, scale=30, size=1000)
And then you keep only the numbers within the desired range with a list comprehension:
random_nums_filtered = [i for i in random_nums if i>500 and i<600]
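Note that the filter drops any draws outside [500, 600], so the list will usually contain fewer than 1000 values; you can check how many remain:

# number of draws that survived the [500, 600] filter
print(len(random_nums_filtered))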
Given two proportions, p1 and p2,
I want to calculate the sample size for p1 that I would need to do a Z-test of equivalence if:
1) alpha = .05
2) power = 0.9
3) n1/n2 = r
The statsmodels program in Python gives something like this, but I think it is wrong because it gets drastically different answers from the Stata sampsi program.
The stata code is:
sampsi .01 .1, alpha(0.05) ratio(2)
which gives
Estimated sample size for two-sample comparison of proportions
Test Ho: p1 = p2, where p1 is the proportion in population 1
                  and p2 is the proportion in population 2
Assumptions:
alpha = 0.0500 (two-sided)
power = 0.9000
p1 = 0.0100
p2 = 0.1000
n2/n1 = 2.00
Estimated required sample sizes:
n1 = 119
n2 = 238
The python code is:
import statsmodels.stats.api as sms
es = sms.proportion_effectsize(0.01, 0.1)
sms.NormalIndPower().solve_power(es, power=0.9, alpha=0.05, ratio=2)
Which gives:
80.25164112946563
As @rawr says in the comments above, bsamsize works (you need to set the frac argument to the fraction of samples in group 1):
library("Hmisc")
bsamsize(.01, .1, power=.9, frac=1/3)
n1 n2
102.8526 205.7051
These are not the same numbers as Stata gives, but they're close. ?bsamsize gives details of the algorithm used.
You may also check the rule-of-thumb sample size for proportion comparisons (though it assumes equal group sizes), given by the following formula:
N = 16 * p * (1-p) / (d**2)
where probability p = 0.1 and
difference d = p1 - p2 = 0.1 - 0.01 = 0.09
N comes to 178 for each group (356 in total) with your values, which is virtually the same as the total size given by Stata (357).
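A quick Python check of that arithmetic:

# rule-of-thumb per-group sample size with p = 0.1 and d = 0.09
p = 0.1
d = 0.1 - 0.01
N = 16 * p * (1 - p) / d ** 2
print(round(N))   # 178 per group, about 356 in total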
One reference for the above formula is: https://www.evanmiller.org/how-not-to-run-an-ab-test.html