I'm trying to fit some data using lmfit. Originally, I used the Model class which worked well based on this example/tutorial:
https://lmfit.github.io/lmfit-py/model.html
But then I wanted to add some parameter constraints to the model, so I looked at this tutorial:
https://lmfit.github.io/lmfit-py/parameters.html
However, I've got some problems in combining the two classes such that they work nicely together.
Either it complains about the fit function missing parameters or receiving invalid parameters (that's the case in the example I'll post below), or I get a model which doesn't actually use the parameters I specified.
I can actually solve the problem using one of these different approaches:
1. Pass the parameters using model.make_params(...), but I would like to define them individually instead
2. I could use the Minimizer instead of Model, but I would like to understand why they are implemented so differently even though I would expect them to be very similar (except that they work on different types of input)
Any help / explanations would be really appreciated. :)
This code is based on the example on the Parameters tutorial page.
What I did here is to modify the example such that the Model class is used instead of the Minimizer class, which in principle should work, but I'm somehow doing it the wrong way.
As comparison, the original example is here (scroll to the bottom):
https://lmfit.github.io/lmfit-py/parameters.html
# <examples/doc_parameters_basic.py>
import numpy as np
from lmfit import Model, Parameters
# create data to be fitted
x = np.linspace(0, 15, 301)
data = (5. * np.sin(2*x - 0.1) * np.exp(-x*x*0.025) +
        np.random.normal(size=len(x), scale=0.2))
# define objective function: returns the array to be minimized
def fcn2min(params, x):
    """Model a decaying sine wave and subtract data."""
    amp = params['amp']
    shift = params['shift']
    omega = params['omega']
    decay = params['decay']
    model = amp * np.sin(x*omega + shift) * np.exp(-x*x*decay)
    return model
# create a set of Parameters
params = Parameters()
params.add('amp', value=10, min=0)
params.add('decay', value=0.1)
params.add('shift', value=0.0, min=-np.pi/2., max=np.pi/2)
params.add('omega', value=3.0)
# do fit, here with leastsq model
fitmodel = Model(fcn2min)
result = fitmodel.fit(data, params, x=x)
# calculate final result
final = result.fit_report()
# try to plot results
try:
    import matplotlib.pyplot as plt
    plt.plot(x, data, 'k+')
    plt.plot(x, final, 'r')
    plt.show()
except ImportError:
    pass
# <end of examples/doc_parameters_basic.py>
But using it like this, I get an error
ValueError: Invalid independent variable name ('params') for function fcn2min
I tried:
Specifying all parameters as function arguments, e.g.
def fcn2min(x, amp, shift, omega, decay):
But in this case I end up with the model / function parameters not being connected and I get a 'fit' which does nothing.
Now I tried stuff like specifying:
fitmodel = Model(fcn2min, independent_vars=['x'], param_names=params)
But in this case I get
Invalid parameter name ('amp') for function fcn2min
Also tried something like this:
params = fitmodel.make_params()
params.add('amp', value=10, min=0)
...
But in this case I also don't get parameters which are connected to the model, which can be seen from the output of
fitmodel.param_names
which returns an empty list.
You are confusing a model function as wrapped by the Model class for curve-fitting with an objective function for general purpose minimization with minimize or leastsq. An objective function will have a signature like:
def objective(params, *args):
where params is a lmfit.Parameters instance that you would have to unpack within the function and *args are optional arguments. This function will return the array -- the objective -- that will be minimized in the least squares sense.
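For example, a minimal sketch of such an objective function for your decaying sine (assuming x and data are passed as the extra arguments) would unpack the Parameters and return model minus data:
def residual(params, x, data):
    amp = params['amp']
    shift = params['shift']
    omega = params['omega']
    decay = params['decay']
    # return the residual array to be minimized in the least-squares sense
    return amp * np.sin(x*omega + shift) * np.exp(-x*x*decay) - data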
In contrast, a model function used to create a curve-fitting Model will have a signature like
def modelfunc(x, par1, par2, par3, ..., **kws):
where x is the independent variable and par1 ... will contain the values for the model parameters. The model function will return an array that models the data.
There are lots of examples using objective functions and using model functions at
https://github.com/lmfit/lmfit-py/tree/master/examples. To model your data, you would do
# define MODEL **not objective** function: returns the array to model the data
def sinedecay(x, amp, shift, omega, decay):
    """Model a decaying sine wave."""
    return amp * np.sin(x*omega + shift) * np.exp(-x*x*decay)
# create model:
smodel = Model(sinedecay)
# create a set of Parameters
params = smodel.make_params(amp=10, decay=0.1, shift=0, omega=3)
params['amp'].set(min=0)
params['shift'].set(min=-np.pi/2., max=np.pi/2)
# do fit
result = smodel.fit(data, params, x=x)
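You can then inspect the fit report and plot the best-fit curve; a minimal sketch using the ModelResult returned by fit would be:
print(result.fit_report())
import matplotlib.pyplot as plt
plt.plot(x, data, 'k+')
plt.plot(x, result.best_fit, 'r')
plt.show()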
My question is:
How can I forecast out-of-sample values with exogenous predictors using the Statsmodels state-space class TVRegression and the custom data provided in the example (see the link below)?
I have spent several hours searching for examples of how to forecast out-of-sample values when the regression model contains exogenous variables. I want to build a simple Dynamic Linear Model class. I found a class in Statsmodels, TVRegression, that should solve my problem (see https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_custom_models.html). The TVRegression class takes two exogenous predictors and a response variable as arguments.
I copied and pasted the code and ran the example in the link above without a problem. However, I am unable to produce a simple out-of-sample forecast, even using the example data given. The TVRegression class is a child class of sm.tsa.statespace.MLEModel and thus should inherit all of the associated methods. One of the methods of sm.tsa.statespace.MLEModel is forecast(), and according to the user guide I should be able to provide a simple steps argument and get an out-of-sample forecast: MLEResults.forecast(steps=1, **kwargs).
The code used to generate the dependent variable y_t and the exogenous predictors x_t and w_t:
def gen_data_for_model1():
    nobs = 1000
    rs = np.random.RandomState(seed=93572)
    d = 5
    var_y = 5
    var_coeff_x = 0.01
    var_coeff_w = 0.5
    x_t = rs.uniform(size=nobs)
    w_t = rs.uniform(size=nobs)
    eps = rs.normal(scale=var_y ** 0.5, size=nobs)
    beta_x = np.cumsum(rs.normal(size=nobs, scale=var_coeff_x ** 0.5))
    beta_w = np.cumsum(rs.normal(size=nobs, scale=var_coeff_w ** 0.5))
    y_t = d + beta_x * x_t + beta_w * w_t + eps
    return y_t, x_t, w_t, beta_x, beta_w
y_t, x_t, w_t, beta_x, beta_w = gen_data_for_model1()
The TVRegression class from the link provided above:
class TVRegression(sm.tsa.statespace.MLEModel):
    def __init__(self, y_t, x_t, w_t):
        exog = np.c_[x_t, w_t]  # shaped nobs x 2
        super(TVRegression, self).__init__(
            endog=y_t, exog=exog, k_states=2, initialization="diffuse"
        )
        # Since the design matrix is time-varying, it must be
        # shaped k_endog x k_states x nobs
        # Notice that exog.T is shaped k_states x nobs, so we
        # just need to add a new first axis with shape 1
        self.ssm["design"] = exog.T[np.newaxis, :, :]  # shaped 1 x 2 x nobs
        self.ssm["selection"] = np.eye(self.k_states)
        self.ssm["transition"] = np.eye(self.k_states)
        # Which parameters need to be positive?
        self.positive_parameters = slice(1, 4)

    @property
    def param_names(self):
        return ["intercept", "var.e", "var.x.coeff", "var.w.coeff"]

    @property
    def start_params(self):
        """
        Defines the starting values for the parameters.
        The linear regression gives us reasonable starting values for the
        constant d and the variance of the epsilon error.
        """
        exog = sm.add_constant(self.exog)
        res = sm.OLS(self.endog, exog).fit()
        params = np.r_[res.params[0], res.scale, 0.001, 0.001]
        return params

    def transform_params(self, unconstrained):
        """
        We constrain the last three parameters
        ('var.e', 'var.x.coeff', 'var.w.coeff') to be positive,
        because they are variances.
        """
        constrained = unconstrained.copy()
        constrained[self.positive_parameters] = (
            constrained[self.positive_parameters] ** 2
        )
        return constrained

    def untransform_params(self, constrained):
        """
        Need to untransform all the parameters you transformed
        in the `transform_params` function.
        """
        unconstrained = constrained.copy()
        unconstrained[self.positive_parameters] = (
            unconstrained[self.positive_parameters] ** 0.5
        )
        return unconstrained

    def update(self, params, **kwargs):
        params = super(TVRegression, self).update(params, **kwargs)
        self["obs_intercept", 0, 0] = params[0]
        self["obs_cov", 0, 0] = params[1]
        self["state_cov"] = np.diag(params[2:4])
Fitting the model on the generated data works without problems:
mod = TVRegression(y_t, x_t, w_t)
res = mod.fit()
print(res.summary())
What I want is to at least accomplish the following without error:
res.forecast(steps = 5)
Ideally I could get help on how to construct the argument exog to accept new values of x_t and w_t as exog predictors for this class.
What I have tried so far:
In response to the first error, I added self.k_exog in the __init__ section of the class.
In my second attempt I received the following value error:
ValueError: Out-of-sample operations in a model with a regression component require additional exogenous values via the exog argument.
I attempted to add the exogenous variables by concatenating new values, so that the number of steps equals the length of the data slice, e.g.
res.forecast(steps=5, exog=np.c_[w_t[:5], x_t[:5]])
Solution: The easiest way to do this is:
First, redefine your constructor as:
def __init__(self, y_t, exog):
    super(TVRegression, self).__init__(
        endog=y_t, exog=exog, k_states=2, initialization="diffuse"
    )
    self.k_exog = self.exog.shape[1]
    # Since the design matrix is time-varying, it must be
    # shaped k_endog x k_states x nobs
    # Notice that exog.T is shaped k_states x nobs, so we
    # just need to add a new first axis with shape 1
    self.ssm["design"] = exog.T[np.newaxis, :, :]  # shaped 1 x 2 x nobs
    self.ssm["selection"] = np.eye(self.k_states)
    self.ssm["transition"] = np.eye(self.k_states)
    # Which parameters need to be positive?
    self.positive_parameters = slice(1, 4)
Then add a new method:
def clone(self, endog, exog=None, **kwargs):
    return self._clone_from_init_kwds(endog, exog=exog, **kwargs)
Now you can do e.g.:
exog_t = np.c_[x_t, w_t]
mod = TVRegression(y_t[:-5], exog=exog_t[:-5])
res = mod.fit()
print(res.summary())
res.forecast(steps=5, exog=exog_t[-5:])
Discussion:
The issue here is that when your model depends on these exog variables, then you need updated values for exog for the out-of-sample part of the forecast (since the exog you provided when constructing the model only contains in-sample values).
To perform the forecasting, Statsmodels has to be able to essentially create a new model representation using the new out-of-sample exog. This is what the clone method is for - it describes how to create a new copy of the model, but with a different dataset. Since this model is simple, there is an existing generic cloning method (_clone_from_init_kwds) that we can hook into.
Finally, we need to pass the out-of-sample exog values to the forecast method, using the exog argument.
I am trying to estimate a model in tensorflow using NUTS by providing it a likelihood function. I have checked the likelihood function is returning reasonable values. I am following the setup here for setting up NUTS:
https://rlhick.people.wm.edu/posts/custom-likes-tensorflow.html
and some of the examples here for setting up priors, etc.:
https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Multilevel_Modeling_Primer.ipynb
My code is in a colab notebook here:
https://drive.google.com/file/d/1L9JQPLO57g3OhxaRCB29do2m808ZUeex/view?usp=sharing
I get the error: OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function. This is my first time using TensorFlow and I am quite lost interpreting this error. It would also be ideal if I could pass the starting parameter values as a single input (the example I am working off doesn't do it, but I assume it is possible).
Update
It looks like I had to change the position of the @tf.function decorator. The sampler now runs, but it gives me the same value for all samples for each of the parameters. Is it a requirement that I pass a joint distribution through the log_prob() function? I am clearly missing something. I can run the likelihood through BFGS optimization and get reasonable results (I've estimated the model via maximum likelihood with fixed parameters in other software). It looks like I need to define the function to return a joint distribution and call log_prob(). I can do this if I set it up as a logistic regression (a logit choice model is logistically distributed in differences); however, I lose the standard closed form.
My function is as follows:
@tf.function
def mmnl_log_prob(init_mu_b_time, init_sigma_b_time, init_a_car,
                  init_a_train, init_b_cost, init_scale):
    # Create priors for hyperparameters
    mu_b_time = tfd.Sample(tfd.Normal(loc=init_mu_b_time, scale=init_scale),
                           sample_shape=1).sample()
    # HalfCauchy distributions are too wide for logit discrete choice
    sigma_b_time = tfd.Sample(tfd.Normal(loc=init_sigma_b_time, scale=init_scale),
                              sample_shape=1).sample()
    # Create priors for parameters
    a_car = tfd.Sample(tfd.Normal(loc=init_a_car, scale=init_scale),
                       sample_shape=1).sample()
    a_train = tfd.Sample(tfd.Normal(loc=init_a_train, scale=init_scale),
                         sample_shape=1).sample()
    # a_sm = tfd.Sample(tfd.Normal(loc=init_a_sm, scale=init_scale),
    #                   sample_shape=1).sample()
    b_cost = tfd.Sample(tfd.Normal(loc=init_b_cost, scale=init_scale),
                        sample_shape=1).sample()
    # Define a heterogeneous random parameter model with MultivariateNormalDiag()
    # Use MultivariateNormalDiagPlusLowRank() to define nests, etc.
    b_time = tfd.Sample(tfd.MultivariateNormalDiag(
        loc=mu_b_time,
        scale_diag=sigma_b_time), sample_shape=num_idx).sample()

    # Definition of the utility functions
    V1 = a_train + tfm.multiply(b_time, TRAIN_TT_SCALED) + b_cost * TRAIN_COST_SCALED
    V2 = tfm.multiply(b_time, SM_TT_SCALED) + b_cost * SM_COST_SCALED
    V3 = a_car + tfm.multiply(b_time, CAR_TT_SCALED) + b_cost * CAR_CO_SCALED
    print("Vs", V1, V2, V3)

    # Definition of the loglikelihood
    eV1 = tfm.multiply(tfm.exp(V1), TRAIN_AV_SP)
    eV2 = tfm.multiply(tfm.exp(V2), SM_AV_SP)
    eV3 = tfm.multiply(tfm.exp(V3), CAR_AV_SP)
    eVD = eV1 + eV2 + eV3
    print("eVs", eV1, eV2, eV3, eVD)
    l1 = tfm.multiply(tfm.truediv(eV1, eVD), tf.cast(tfm.equal(CHOICE, 1), tf.float32))
    l2 = tfm.multiply(tfm.truediv(eV2, eVD), tf.cast(tfm.equal(CHOICE, 2), tf.float32))
    l3 = tfm.multiply(tfm.truediv(eV3, eVD), tf.cast(tfm.equal(CHOICE, 3), tf.float32))
    ll = tfm.reduce_sum(tfm.log(l1 + l2 + l3))
    print("ll", ll)
    return ll
The function is called as follows:
nuts_samples = 1000
nuts_burnin = 500
chains = 4
## Initial step size
init_step_size=.3
init = [0.,0.,0.,0.,0.,.5]
##
## NUTS (using inner step size averaging step)
##
@tf.function
def nuts_sampler(init):
    nuts_kernel = tfp.mcmc.NoUTurnSampler(
        target_log_prob_fn=mmnl_log_prob,
        step_size=init_step_size,
        )
    adapt_nuts_kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
        inner_kernel=nuts_kernel,
        num_adaptation_steps=nuts_burnin,
        step_size_getter_fn=lambda pkr: pkr.step_size,
        log_accept_prob_getter_fn=lambda pkr: pkr.log_accept_ratio,
        step_size_setter_fn=lambda pkr, new_step_size: pkr._replace(step_size=new_step_size)
        )
    samples_nuts_, stats_nuts_ = tfp.mcmc.sample_chain(
        num_results=nuts_samples,
        current_state=init,
        kernel=adapt_nuts_kernel,
        num_burnin_steps=100,
        parallel_iterations=5)
    return samples_nuts_, stats_nuts_
samples_nuts, stats_nuts = nuts_sampler(init)
I have an answer to my question! It is simply a matter of different nomenclature. I need to define my model as a softmax function, which I knew was what I would call a "logit model", but it just wasn't clicking for me. The following blog post gave me the epiphany:
http://khakieconomics.github.io/2019/03/17/Putting-it-all-together.html
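For anyone landing here, a rough sketch of that reformulation (not the blog post's code, and simplified to fixed taste coefficients; the *_SCALED tensors and a CHOICE0 tensor with choices recoded to 0/1/2 are assumed from my notebook):
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def make_target_log_prob_fn(TRAIN_TT_SCALED, SM_TT_SCALED, CAR_TT_SCALED, CHOICE0):
    prior = tfd.Normal(loc=0., scale=1.)

    def target_log_prob(a_train, a_car, b_time):
        # The sampler's state variables are the *arguments* here; we evaluate
        # prior log-densities at those values instead of drawing new samples.
        V1 = a_train + b_time * TRAIN_TT_SCALED
        V2 = b_time * SM_TT_SCALED
        V3 = a_car + b_time * CAR_TT_SCALED
        logits = tf.stack([V1, V2, V3], axis=-1)
        # Categorical(logits=...) is the softmax likelihood over the alternatives
        like = tfd.Categorical(logits=logits)
        return (prior.log_prob(a_train) + prior.log_prob(a_car) +
                prior.log_prob(b_time) + tf.reduce_sum(like.log_prob(CHOICE0)))

    return target_log_prob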
I have a function Imaginary which describes a physics process and I want to fit this to a dataset x_interpolate, y_interpolate. The function is a form of a Lorentzian peak function and I have some initial values that are user given, except for f_peak (the peak location) which I find using a peak finding algorithm. All of the fit parameters, except for the offset, are expected to be positive and thus I have set bounds_I accordingly.
def Imaginary(freq, alpha, res, Ms, off):
    numerator = (2*alpha*freq*res**2)
    denominator = (4*(alpha*res*freq)**2) + (res**2 - freq**2)**2
    Im = Ms*(numerator/denominator) + off
    return Im
pI = np.array([alpha_init, f_peak, Ms_init, 0])
bounds_I = ([0, 0, 0, -np.inf], [np.inf, np.inf, np.inf, np.inf])
poptI, pcovI = curve_fit(Imaginary, x_interpolate, y_interpolate, pI, bounds=bounds_I)
In some situations I want to keep the parameter f_peak fixed during the fitting process. I tried an easy solution by changing bounds_I to:
bounds_I = ([0, f_peak - 0.001, 0, -np.inf], [np.inf, f_peak + 0.001, np.inf, np.inf])
This is, for many reasons, not an optimal way of doing it, so I was wondering if there is a more Pythonic approach? Thank you for your help.
If a parameter is fixed, it is not really a parameter, so it should be removed from the list of parameters. Define a model that has that parameter replaced by a fixed value, and fit that. Example below, simplified for brevity and to be self-contained:
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(10)
y = np.sqrt(x)
def parabola(x, a, b, c):
    return a*x**2 + b*x + c
fit1 = curve_fit(parabola, x, y) # [-0.02989396, 0.56204598, 0.25337086]
b_fixed = 0.5
fit2 = curve_fit(lambda x, a, c: parabola(x, a, b_fixed, c), x, y)
The second call to fit returns [-0.02350478, 0.35048631], which are the optimal values of a and c. The value of b was fixed at 0.5.
Of course, the parameter should be removed from the initial vector pI and the bounds as well.
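Applied to the Imaginary function from the question, a minimal sketch (assuming alpha_init, Ms_init, f_peak, x_interpolate and y_interpolate are defined as in your post) would be:
import numpy as np
from scipy.optimize import curve_fit

# fix res at f_peak by removing it from the fitted parameters; only alpha, Ms
# and off are optimized, with initial values and bounds trimmed to match
pI_fixed = np.array([alpha_init, Ms_init, 0])
bounds_fixed = ([0, 0, -np.inf], [np.inf, np.inf, np.inf])
poptI, pcovI = curve_fit(
    lambda freq, alpha, Ms, off: Imaginary(freq, alpha, f_peak, Ms, off),
    x_interpolate, y_interpolate, pI_fixed, bounds=bounds_fixed)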
You might find lmfit (https://lmfit.github.io/lmfit-py/) helpful. This library adds a higher-level interface to the scipy optimization routines, aiming for a more Pythonic approach to optimization and curve fitting. For example, it uses Parameter objects to allow setting bounds and fixing parameters without having to modify the objective or model function. For curve-fitting, it defines high level Model functions that can be used.
For you example, you could use your Imaginary function as you've written it with
from lmfit import Model
lmodel = Model(Imaginary)
and then create Parameters (lmfit will name the Parameter objects according to your function signature), providing initial values:
params = lmodel.make_params(alpha=alpha_init, res=f_peak, Ms=Ms_init, off=0)
By default all Parameters are unbounded and will vary in the fit, but you can modify these attributes (without rewriting the model function):
params['alpha'].min = 0
params['res'].min = 0
params['Ms'].min = 0
You can set one (or more) of the parameters to not vary in the fit with:
params['res'].vary = False
To be clear: this does not require altering the model function, making it much easier to change what is fixed, what bounds might be imposed, and so forth.
You would then perform the fit with the model and these parameters:
result = lmodel.fit(y_interpolate, params, freq=x_interpolate)
You can get a report of fit statistics, best-fit values, and uncertainties for the parameters with
print(result.fit_report())
The best fit Parameters will be held in result.params.
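For example, a quick sketch of reading those values back out:
for name, par in result.params.items():
    # par.stderr may be None for parameters with vary=False
    print(name, par.value, par.stderr, par.vary)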
FWIW, lmfit also has builtin Models for many common forms, including Lorentzian and a Constant offset. So, you could construct this model as
from lmfit.models import LorentzianModel, ConstantModel
mymodel = LorentzianModel(prefix='l_') + ConstantModel()
params = mymodel.make_params()
which will have Parameters named l_amplitude, l_center, l_sigma, and c (where c is the constant) and the model will use the name x for the independent variable (your freq). This approach can become very convenient when you may want to change the functional form of the peaks or background, or when fitting multiple peaks to a spectrum.
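A minimal usage sketch for that composite model (the initial values below are rough guesses, assuming your f_peak, x_interpolate and y_interpolate) would be:
params = mymodel.make_params(l_amplitude=5, l_center=f_peak, l_sigma=1, c=0)
result = mymodel.fit(y_interpolate, params, x=x_interpolate)
print(result.fit_report())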
I was able to solve this issue for an arbitrary number of parameters and arbitrary positions of the fixed parameters:
from numpy import asarray, append
from scipy.optimize import curve_fit

def d_fit(x, y, param, boundMi, boundMx, listparam):
    # Split the parameters into those to be fitted (S...) and those kept fixed (N...)
    Sparam, SboundMi, SboundMx = asarray([]), asarray([]), asarray([])
    Nparam, NboundMi, NboundMx = asarray([]), asarray([]), asarray([])
    for i in range(len(param)):
        if listparam[i] == 1:
            Sparam = append(Sparam, asarray(param[i]))
            SboundMi = append(SboundMi, asarray(boundMi[i]))
            SboundMx = append(SboundMx, asarray(boundMx[i]))
        else:
            Nparam = append(Nparam, asarray(param[i]))

    def funF(x, Sparam):
        # Rebuild the full parameter array from the fitted and fixed values
        j = 0
        for i in range(len(param)):
            if listparam[i] == 1:
                param[i] = Sparam[i - j]
            else:
                param[i] = Nparam[j]
                j = j + 1
        return fun(x, param)

    return curve_fit(lambda x, *Sparam: funF(x, Sparam), x, y,
                     p0=Sparam, bounds=(SboundMi, SboundMx))
In this case:
param = [a,b,c,...] # parameters array (any size)
boundMi = [min_a, min_b, min_c,...] # minimum allowable value of each parameter
boundMx = [max_a, max_b, max_c,...] # maximum allowable value of each parameter
listparam = [0,1,1,0,...] # 1 = fit and 0 = fix the corresponding parameter in the fit routine
and the root function is defined as:
def fun(x, param):
    a, b, c, d, ... = param
    return a*b/c ...  # any function of the params a, b, c, d, ...
This way, you can change the root function and the number of parameters without changing the fit routine.
And, at any time, you can fix any parameter or let it be fitted by changing "listparam".
Use like this:
popt, pcov = d_fit(x, y, param, boundMi, boundMx, listparam)
"popt" and "pcov" are 1D arrays of the size of the number of "1" in "listparam" bringing the results of the fitted parameters (best value and err matrix)
"param" will ramain an 1D array of the same size of the original (input) "param", HOWEVER IT WILL BE UPDATED AUTOMATICALLY TO THE FITTED VALUES (same as "popt") for the fitted values, keeping the fixed values according to "listparam"
Hope this can be useful!
Obs1: x is a 1D array of independent values and y is a 1D array of dependent values.
Obs2: This is my first post. Please let me know if I can improve it!
I want to fit a curve to my data:
x=[24,25,28,37,58,104,200,235,235]
y=[340,350,370,400,430,460,490,520,550]
xerr=[1.1,1,0.8,1.4,1.4,2.6,3.8,2,2]
def fit_fc(x, a, b, c):
    return a*x**b+c
popt, pcov=curve_fit(fit_fc,x,y,maxfev=5000)
plt.plot(x,fit_fc(x,popt[0],popt[1],popt[2]))
plt.errorbar(x,y,xerr=xerr,fmt='-o')
But I want to put some constraints on a, b and c. For example, I want them to be in some range, let's say between 0 and 20. How can I achieve that? I'm new to Python, so any help would be appreciated.
You could use lmfit to constrain your parameters. For the following plot, I constrained your parameters a and b to the range [0, 20] (which you mentioned in your post) and c to the range [0, 400]. The parameters you get are:
a: 19.9999991
b: 0.46769173
c: 274.074071
and the corresponding plot looks as follows:
As you can see, the model reproduces the data reasonably well and the parameters are in the given ranges.
Here is the code that reproduces the results with additional comments:
from lmfit import minimize, Parameters, Parameter, report_fit
import numpy as np

x = [24, 25, 28, 37, 58, 104, 200, 235, 235]
y = [340, 350, 370, 400, 430, 460, 490, 520, 550]

def fit_fc(params, x, data):
    a = params['a'].value
    b = params['b'].value
    c = params['c'].value
    model = np.power(x, b)*a + c
    return model - data  # that's what you want to minimize

# create a set of Parameters
# 'value' is the initial condition
# 'min' and 'max' define your boundaries
params = Parameters()
params.add('a', value=2, min=0, max=20)
params.add('b', value=0.5, min=0, max=20)
params.add('c', value=300.0, min=0, max=400)

# do fit, here with leastsq model
result = minimize(fit_fc, params, args=(x, y))

# calculate final result
final = y + result.residual

# write error report
report_fit(params)

# plot results
try:
    import pylab
    pylab.plot(x, y, 'k+')
    pylab.plot(x, final, 'r')
    pylab.show()
except:
    pass
If you constrain all of your parameters to the range [0,20], the plot looks rather bad:
It depends on what you want to have happen if the variables are out of range. You can use a simple if statement (in this case the program exit()s):
x = 21
if (x not in range(0, 20)):
print("var x is out of range")
exit()
Another way is to assert that the variable must be in the range. In this case, it's wrapped in a try/except block that handles the problem gracefully, and also exit()s like above:
try:
    assert(x in range(0, 20))
except AssertionError:
    print("variable x is out of range")
    exit()
Scipy uses unconstrained least squares in order to fit curve parameters, so it won't be that straightforward: https://github.com/scipy/scipy/blob/v0.16.0/scipy/optimize/minpack.py#L454
What you'd probably like to do is solve a constrained (non-linear, given what you're trying to fit) least-squares problem. For instance, take a look at these discussions:
Constrained least-squares estimation in Python ( leastsq_bounds: https://gist.github.com/denis-bz/65da931bdbf92c49e4d0 )
scipy.optimize.leastsq with bound constraints
This is perhaps a silly question.
I'm trying to fit data to a very strange PDF using MCMC evaluation in PyMC. For this example I just want to figure out how to fit to a normal distribution where I manually input the normal PDF. My code is:
import random
import numpy as np
import pymc as mc

data = []
for count in range(1000):
    data.append(random.gauss(-200, 15))

mean = mc.Uniform('mean', lower=min(data), upper=max(data))
std_dev = mc.Uniform('std_dev', lower=0, upper=50)

# @mc.potential
# def density(x=data, mu=mean, sigma=std_dev):
#     return (1./(sigma*np.sqrt(2*np.pi))*np.exp(-((x-mu)**2/(2*sigma**2))))

mc.Normal('process', mu=mean, tau=1./std_dev**2, value=data, observed=True)

model = mc.MCMC([mean, std_dev])
model.sample(iter=5000)

print("!")
print(model.stats()['mean']['mean'])
print(model.stats()['std_dev']['mean'])
The examples I've found all use something like mc.Normal, or mc.Poisson or whatnot, but I want to fit to the commented out density function.
Any help would be appreciated.
An easy way is to use the stochastic decorator:
import pymc as mc
import numpy as np

data = np.random.normal(-200, 15, size=1000)

mean = mc.Uniform('mean', lower=min(data), upper=max(data))
std_dev = mc.Uniform('std_dev', lower=0, upper=50)

@mc.stochastic(observed=True)
def custom_stochastic(value=data, mean=mean, std_dev=std_dev):
    return np.sum(-np.log(std_dev) - 0.5*np.log(2) -
                  0.5*np.log(np.pi) -
                  (value-mean)**2 / (2*(std_dev**2)))

model = mc.MCMC([mean, std_dev, custom_stochastic])
model.sample(iter=5000)

print("!")
print(model.stats()['mean']['mean'])
print(model.stats()['std_dev']['mean'])
Note that my custom_stochastic function returns the log likelihood, not the likelihood, and that it is the log likelihood for the entire sample.
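As a quick sanity check (a sketch, outside PyMC), that expression matches the summed normal log-density from scipy.stats at fixed numeric values:
import numpy as np
from scipy.stats import norm

mean_val, sd_val = -200.0, 15.0
manual = np.sum(-np.log(sd_val) - 0.5*np.log(2) - 0.5*np.log(np.pi) -
                (data - mean_val)**2 / (2*(sd_val**2)))
reference = np.sum(norm.logpdf(data, loc=mean_val, scale=sd_val))
print(np.isclose(manual, reference))   # expected: True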
There are a few other ways to create custom stochastic nodes. This doc gives more details, and this gist contains an example using pymc.Stochastic to create a node with a kernel density estimator.