Python curve fitting with constraints

I have been looking for a way to do Python curve fitting with constraints. One option is to use the lmfit module, and another is to use penalization to enforce the constraints. I have the following code, in which I am trying to enforce a + b = 3.6 as the constraint. In other words, y = 3.6 when x = 1, and x is always >= 1 in my case.
import numpy as np
import scipy.optimize as sio

def func(x, a, b, c):
    return a + b*x**c

x = [1, 2, 4, 8, 16]
y = [3.6, 3.96, 4.31, 5.217, 6.842]
lb = np.ones(3, dtype=float)        # intended lower bounds (not actually passed to curve_fit)
ub = np.ones(3, dtype=float)*10.    # intended upper bounds (not actually passed to curve_fit)
popt, pcov = sio.curve_fit(func, x, y)
print(popt)
Ideally, I would like to use the lmfit approach. I have spent a good amount of time trying to understand the examples, but could not succeed. Can someone help with an example?

If I understand your question correctly, you want to model some data with
def func(x, a, b, c):
    return a + b*x**c
and for a particular set of data you want to impose the constraint that a + b = 3.6. You could just "hardwire" that in, changing the function to be
def func2(x, b, c):
    a = 3.6 - b
    return a + b*x**c
and now you have a model function with only two variables: b and c.
That would not be very flexible, but it would work.
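For instance, a minimal sketch of fitting this hardwired two-parameter model with curve_fit (reusing x, y, and func2 from above, and giving explicit starting values):

import scipy.optimize as sio

# fit the hardwired two-parameter model; p0 supplies starting values for b and c
popt, pcov = sio.curve_fit(func2, x, y, p0=[1.6, 0.5])
b_fit, c_fit = popt
print(b_fit, c_fit, 3.6 - b_fit)   # fitted b and c, and the implied a = 3.6 - b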
Using lmfit gives back some of that flexibility. To do a completely unconstrained fit, you would say
from lmfit import Model

# note: x and y should be numpy arrays here (the model does x**c), not plain lists
mymodel = Model(func)
params = mymodel.make_params(a=2, b=1.6, c=0.5)
result = mymodel.fit(y, params, x=x)
(Just as an aside: scipy.optimize.curve_fit permits you not to specify initial values for the parameters and silently sets them all to 1 without telling you. This is a terrible misfeature; always give initial values.)
If you do want to impose the constraint a+b=3.6, you could then do
params['a'].expr = '3.6-b'
result2 = mymodel.fit(y, params, x=x)
print(result2.fit_report())
When I do that with the data you provided, this prints (note that it reports 2 variables, not 3):
[[Model]]
    Model(func)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 34
    # data points      = 5
    # variables        = 2
    chi-square         = 0.01066525
    reduced chi-square = 0.00355508
    Akaike info crit   = -26.7510142
    Bayesian info crit = -27.5321384
[[Variables]]
    a:  3.28044833 +/- 0.04900625 (1.49%) == '3.6-b'
    b:  0.31955167 +/- 0.04900626 (15.34%) (init = 1.6)
    c:  0.86901253 +/- 0.05281279 (6.08%) (init = 0.5)
[[Correlations]] (unreported correlations are < 0.100)
    C(b, c) = -0.994
Your code hinted at using (but did not actually use) upper and lower bounds for the parameter values. Those are also possible with lmfit, as with
params['b'].min = 1
params['b'].max = 10
and so forth. I'm not sure you need them here, and would caution against trying to set bounds too tightly.

Related

Fitting two peaks with gauss in python

curve_fit is not fitting properly. I'm trying to fit experimental data with curve_fit. The data is imported from a .txt file into an array:
d = np.loadtxt("data.txt")
data_x = np.array(d[:, 0])
data_y = np.array(d[:, 2])
data_y_err = np.array(d[:, 3])
Since I know there must be two peaks, my model is a sum of two Gaussian curves:
def model_dGauss(x, xc, A, y0, w, dx):
    P = A/(w*np.sqrt(2*np.pi))
    mu1 = (x - (xc - dx/3))/(2*w**2)
    mu2 = (x - (xc + 2*dx/3))/(2*w**2)
    return y0 + P * (np.exp(-mu1**2) + 0.5 * np.exp(-mu2**2))
The fit is very sensitive to my guess values. What is the point of fitting data if only a nearly perfect guess for the parameters will produce a result? Or am I doing something completely wrong?
t = np.linspace(8.4, 10, 300)
guess_dG = [32, 1, 10, 0.1, 0.2]
popt, pcov = curve_fit(model_dGauss, data_x, data_y, p0=guess_dG, sigma=data_y_err, absolute_sigma=True)
A, xc, y0, w, dx = popt
Plotting the data
plt.scatter(data_x, data_y)
plt.plot(t, model_dGauss(t, *popt))
plt.errorbar(data_x, data_y, yerr=data_y_err)
yields the attached plot ("Plot result").
The result is just a straight line at the bottom of my graph while the evaluated parameters are not that bad. How can that be?
A complete example of code is always appreciated (and, ahem, usually expected here on SO). To remove much of the confusion about using curve_fit here, allow me to suggest that you will have an easier time using lmfit (https://lmfit.github.io/lmfit-py) and especially its builtin model functions and its use of named parameters. With lmfit, your code for two Gaussians plus a constant offset might look like this:
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel, ConstantModel
# start with 1 Gaussian + Constant offset:
model = GaussianModel(prefix='p1_') + ConstantModel()
# this model will have parameters named:
# p1_amplitude, p1_center, p1_sigma, and c.
# here we give initial values to these parameters
params = model.make_params(p1_amplitude=10, p1_center=32, p1_sigma=0.5, c=10)
# optionally place bounds on parameters (probably not needed here):
params['p1_amplitude'].min = 0.
## params['p1_center'].vary = False # fix a parameter from varying in fit
# now do the fit (including weighting residual by 1/y_err):
result = model.fit(data_y, params, x=data_x, weights=1.0/data_y_err)
# print out param values, uncertainties, and fit statistics, or get best-fit
# parameters from `result.params`
print(result.fit_report())
# plot results
plt.errorbar(data_x, data_y, yerr=data_y_err, label='data')
plt.plot(data_x, result.best_fit, label='best fit')
plt.legend()
plt.show()
To add a second Gaussian, you could just do
model = GaussianModel(prefix='p1_') + GaussianModel(prefix='p2_') + ConstantModel()
# and then:
params = model.make_params(p1_amplitude=10, p1_center=32, p1_sigma=0.5, c=10,
                           p2_amplitude=2, p2_center=31.75, p2_sigma=0.5)
and so on.
Your model has the two Gaussians sharing, or at least having "linked", values: the sigma values should be the same for the two peaks, and the amplitude of the 2nd should be half that of the 1st. As defined so far, the 2-Gaussian model has all the parameters being independent. But lmfit has a mechanism for setting constraints on any parameter by giving an algebraic expression in terms of other parameters. So, for example, you could say
params['p2_sigma'].expr = 'p1_sigma'
params['p2_amplitude'].expr = 'p1_amplitude / 2.0'
Now, p2_amplitude and p2_sigma will not be independently varied in the fit but will be constrained to have those values.
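Putting those pieces together, a minimal sketch of the constrained two-peak fit (reusing data_x, data_y, and data_y_err from the question) might look like:

from lmfit.models import GaussianModel, ConstantModel

model = (GaussianModel(prefix='p1_') + GaussianModel(prefix='p2_')
         + ConstantModel())

params = model.make_params(p1_amplitude=10, p1_center=32, p1_sigma=0.5,
                           p2_amplitude=5, p2_center=31.75, p2_sigma=0.5,
                           c=10)

# tie the second peak to the first: same width, half the amplitude
params['p2_sigma'].expr = 'p1_sigma'
params['p2_amplitude'].expr = 'p1_amplitude / 2.0'

result = model.fit(data_y, params, x=data_x, weights=1.0/data_y_err)
print(result.fit_report())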

Illogical parameters returned by scipy.curve_fit

I'm modelling a ball falling through a fluid in Python and fitting the model function to a set of data points, using the damping coefficients (a and b) and the density of the fluid as fit parameters. However, the fitted value for the fluid density comes back negative, and I have no idea what is wrong in the code. My code is below:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.optimize import curve_fit
##%%Parameters and constants
m = 0.1 #mass of object in kg
g = 9.81 #acceleration due to gravity in m/s**2
rho = 700 #density of object in kg/m**3
v0 = 0 #velocity at t=0
y0 = 0 #position at t=0
V = m / rho #volume in cubic meters
r = ((3/4)*(V/np.pi))**(1/3) #radius of the sphere
asample = 0.0001 #sample value for a
bsample = 0.0001 #sample value for b
#%%Defining integrating function
## function => y'' = g*(1-(rhof/rho)) - ((a/m)y' + (b/m)y'**2)
## y' = v
## v' = g*(1-rhof/rho) - ((a/m)v + (b/m)v**2)
def sinkingball(Y, time, a, b, rhof):
    return [Y[1], (1/m)*(V*g*(rho-rhof) - a*Y[1] - b*(Y[1]**2))]

def balldepth(time, a, b, rhof):
    solutions = odeint(sinkingball, [y0, v0], time, args=(a, b, rhof))
    return solutions[:, 0]
time = np.linspace(0,15,151)
# imported some experimental values and named the array data
a, b, rhof = curve_fit(balldepth, time, data, p0=(asample, bsample, 100))[0]
print(a,b,rhof)
Providing the output you actually get would be helpful, and the comment about time not being used by sinkingball() is worth following.
You might find lmfit (https://lmfit.github.io/lmfit-py) useful. This provides a higher-level interface to curve-fitting that allows, among other things, placing bounds on parameters so that they can remain physically sensible. I think your problem would translate from curve_fit to lmfit as:
from lmfit import Model

def balldepth(time, a, b, rhof):
    solutions = odeint(sinkingball, [y0, v0], time, args=(a, b, rhof))
    return solutions[:, 0]

# create a model based on the model function "balldepth"
ballmodel = Model(balldepth)

# create parameters, which will be named using the names of the
# function arguments, and provide initial values
params = ballmodel.make_params(a=0.001, b=0.001, rhof=100)

# if you wanted rhof to **not** vary in the fit:
params['rhof'].vary = False

# set min/max values on `a` and `b`:
params['a'].min = 0
params['b'].min = 0

# run the fit
result = ballmodel.fit(data, params, time=time)

# print out full report of results
print(result.fit_report())

# get / print out best-fit parameters:
for parname, param in result.params.items():
    print("%s = %f +/- %f" % (parname, param.value, param.stderr))

Lmfit gives -1 correlation and large uncertainty (python)

I am trying to fit a model function to a curve using the lmfit module.
The curve that I am fitting is set up as follows:
e(x) = exp(-(x-X)/x0) for x greater than or equal to X, and 0 otherwise.
G(x) = (1/(sqrt(2*pi)*sigma)) * exp(-x^2/(2*sigma^2))
The model fit is M(x) = E * conv(e, G)(x) + B,
where e is the truncated exponential, G is a Gaussian, E and B are constants, and conv denotes convolution.
When I try to fit this function to my data I get a good fit. However, the fit is very sensitive to the initial value I provide for X. This is also reflected in the uncertainties of the parameters:
[[Model]]
    ((Model(totemiss) * (Model(exptruncated) <function convolve at 0x7f139e2dcde8> Model(gaussian))) + Model(background))
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 67
    # data points      = 54
    # variables        = 5
    chi-square         = 120558969110355112544642583094864038386991104.00000
    reduced chi-square = 2460387124701124853181382654239391973638144.00000
    Akaike info crit   = 5275.63336
    Bayesian info crit = 5285.57828
[[Variables]]
    E:         9.7316e+28 +/- 2.41e+33 (2475007.74%) (init= 1.2e+29)
    x0:        5.9420e+06 +/- 9.52e+04 (1.60%) (init= 5000000)
    X:         4.9049e+05 +/- 1.47e+11 (29978575.17%) (init= 100000)
    sigma:     2.6258e+06 +/- 5.74e+04 (2.19%) (init= 2000000)
    center:    0 (fixed)
    amplitude: 1 (fixed)
    B:         3.9017e+22 +/- 3.75e+20 (0.96%) (init= 4.5e+22)
[[Correlations]] (unreported correlations are < 0.100)
    C(E, X)      = -1.000
    C(sigma, B)  = -0.429
    C(x0, sigma) = -0.283
    C(x0, B)     = -0.266
    C(E, x0)     = -0.105
    C(x0, X)     =  0.105
I suspect this has something to do with the correlation between E and X being -1.00, which does not make any sense. I am trying to find out why I get this error, and I believe it might be in the definition of the model:
def exptruncated(x, x0, X):
    return np.exp(-(x-X)/x0) * (x > X)

# Define convolution operator
def convolve(arr, kernel):
    npts = min(len(arr), len(kernel))
    pad = np.ones(npts)
    tmp = np.concatenate((pad*arr[0], arr, pad*arr[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts)/2)
    return out[noff:noff+npts]

# Constant value for total emissions
def totemiss(x, E):
    return E

# Constant value for background
def background(x, B):
    return B

# create Composite Model using the custom convolution operator
# M(x) = E * conv(exp, gauss) + B
mod = Model(totemiss) * CompositeModel(Model(exptruncated), Model(gaussian), convolve) + Model(background)
mod.set_param_hint('x0', value=50*1e5, min=0, max=60*1e5)
mod.set_param_hint('amplitude', value=1.0)
mod.set_param_hint('center', value=0.0)
mod.set_param_hint('sigma', value=20*1e5, min=0, max=100*1e5)
mod.set_param_hint('X', value=1.0*1e5, min=0, max=5.0*1e5)
mod.set_param_hint('B', value=0.45*1e23, min=0.3*1e23, max=1.0*1e23)
mod.set_param_hint('E', value=1.2*1e29, min=1.2*1e26, max=1.0*1e32)
pars = mod.make_params()
pars['amplitude'].vary = False
pars['center'].vary = False
result = mod.fit(y, params=pars, x=x)
comps = result.eval_components(x=x)
Although I believe the model is the reason, I am not able to find where the error comes from. Perhaps somebody can help me out!
Why not just remove E from the model -- the X parameter is already playing that role. For x > X, exp(-(x-X)/x0) = exp(X/x0) * exp(-x/x0), so shifting by X scales the amplitude exactly as multiplying by E does, which is why the two parameters are completely degenerate and their correlation is -1.
I'd also advise having parameters that are more reasonably scaled, closer to order unity (roughly 1e-6 to 1e6, say). You can put scale factors of 1e10 and so on into the model calculation as needed, but it generally helps the calculation of the covariance (used to determine how to update values in the fit) to have the parameters more uniformly scaled.
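As an illustration, a minimal sketch of what that rescaling might look like for the two constant components (the scale factors E_SCALE and B_SCALE here are assumptions chosen from the initial guesses, not values from the original post):

# assumed scale factors so that the fitted parameters are of order unity
E_SCALE = 1.0e29
B_SCALE = 1.0e22

def totemiss(x, E):
    # E is now of order 1; the physical value is E * E_SCALE
    return E * E_SCALE

def background(x, B):
    # B is now of order 1; the physical value is B * B_SCALE
    return B * B_SCALE

# the parameter hints would then also use order-unity values, e.g.
# mod.set_param_hint('E', value=1.2, min=0.001, max=1000)
# mod.set_param_hint('B', value=0.45, min=0.3, max=1.0)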

is there an equivalent of R's nls in statsmodels?

Does statsmodels support nonlinear regression to an arbitrary equation? (I know that there are some forms that are already built in, e.g. for logistic regression, but I am after something more flexible)
In the solution https://stats.stackexchange.com/a/44249 to a question about non-linear regression,
the code is in R and uses the function nls. There, the equation's parameters are defined with start = list(a1=0, ...). These are of course just some initial guesses and not the final fitted values. But what is different here compared to lm is that the parameters don't need to correspond to columns of the input data.
I've been able to use statsmodels.formula.api.ols as an equivalent of R's lm, but when I try to use it with an equation that has parameters (rather than weights for the inputs / combinations of inputs), statsmodels complains about the parameters not being defined. It does not seem to have an argument equivalent to start=, so it isn't obvious how to introduce them.
Is there a different class or function in statsmodels that accepts definition of these initial parameter values?
EDIT:
My current attempt, and also a workaround following the suggestion to use lmfit:
from statsmodels.formula.api import ols
import numpy as np
import pandas as pd
def eqn_poly(x, a, b):
    ''' simple polynomial '''
    return a*x**2.0 + b*x

def eqn_nl(x, a, b):
    ''' fractional equation '''
    return 1.0 / ((a+x)*b)
x = np.arange(0, 3, 0.1)
y1 = eqn_poly(x, 0.1, 0.5)
y2 = eqn_nl(x, 0.1, 0.5)
sigma =0.05
y1_noise = y1 + sigma * np.random.randn(*y1.shape)
y2_noise = y2 + sigma * np.random.randn(*y2.shape)
df1 = pd.DataFrame(np.vstack([x, y1_noise]).T, columns= ['x', 'y'])
df2 = pd.DataFrame(np.vstack([x, y2_noise]).T, columns= ['x', 'y'])
res1 = ols("y ~ 1 + x + I(x ** 2.0)", df1).fit()
print(res1.summary())
res3 = ols("y ~ 1 + x + I(x ** 2.0)", df2).fit()
#res2 = ols("y ~ eqn_nl(x, a, b)", df2).fit()
# ^^^ this fails if a, b are not initialised ^^^
# so initialise a, b
a,b = 1.0, 1.0
res2 = ols("y ~ eqn_nl(x, a, b)", df2).fit()
print(res2.summary())
# ===> and now the fitting is bad, it has an intercept -4.79, and a weight
# on the equation 15.7.
Giving lmfit the model function, it is able to find the parameters.
import lmfit
mod = lmfit.Model(eqn_nl)
lm_result = mod.fit(y2_noise, x=x, a=1.0, b=1.0)
print(lm_result.fit_report())
# ===> this one works fine, a=0.101, b=0.4977
But trying to put y1, x into ols doesn't seem to work ("PatsyError: model is missing required outcome variables"). I didn't really follow that suggestion.
Consider scipy.optimize.curve_fit as the desired nls-like function.
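A minimal sketch of that approach, reusing eqn_nl, x, and y2_noise from the question above:

from scipy.optimize import curve_fit

# p0 plays the role of R's start = list(a=1, b=1)
popt, pcov = curve_fit(eqn_nl, x, y2_noise, p0=[1.0, 1.0])
a_fit, b_fit = popt
perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties of the fitted parameters
print(a_fit, b_fit, perr)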

Add constraints to scipy.optimize.curve_fit?

I have the option to add bounds to sio.curve_fit. Is there a way to expand upon this bounds feature that involves a function of the parameters? In other words, say I have an arbitrary function with two or more unknown constants. And then let's also say that I know the sum of all of these constants is less than 10. Is there a way I can implement this last constraint?
import numpy as np
import scipy.optimize as sio
def f(x, a, b, c):
    return a*x**2 + b*x + c
x = np.linspace(0, 100, 101)
y = 2*x**2 + 3*x + 4
popt, pcov = sio.curve_fit(f, x, y, \
bounds = [(0, 0, 0), (10 - b - c, 10 - a - c, 10 - a - b)]) # a + b + c < 10
Now, this would obviously error, but I think it helps to get the point across. Is there a way I can incorporate a constraint function involving the parameters to a curve fit?
Thanks!
With lmfit, you would define 4 parameters (a, b, c, and delta). a and b can vary freely. delta is allowed to vary, but has a maximum value of 10 to represent the inequality. c would be constrained to be delta - a - b (so there are still 3 variables: c will vary, but not independently of the others). If desired, you could also put bounds on the values for a, b, and c. Without testing, your code would be approximately:
import numpy as np
from lmfit import Model, Parameters

def f(x, a, b, c):
    return a*x**2 + b*x + c

x = np.linspace(0, 100.0, 101)
y = 2*x**2 + 3*x + 4.0

fmodel = Model(f)
params = Parameters()
params.add('a', value=1, vary=True)
params.add('b', value=4, vary=True)
params.add('delta', value=5, vary=True, max=10)
params.add('c', expr='delta - a - b')

result = fmodel.fit(y, params, x=x)
print(result.fit_report())
Note that if you actually get to a situation where the constraint expression or bounds dictate the values for the parameters, uncertainties may not be estimated.
curve_fit and least_squares only accept box constraints. In scipy.optimize, SLSQP can deal with more complicated constraints.
For curve fitting specifically, you can have a look at lmfit package.
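For completeness, a minimal sketch of the SLSQP route mentioned above, minimizing the sum of squared residuals subject to a + b + c <= 10 (the helper names here are illustrative, not from the original post):

import numpy as np
from scipy.optimize import minimize

def f(x, a, b, c):
    return a*x**2 + b*x + c

x = np.linspace(0, 100, 101)
y = 2*x**2 + 3*x + 4

def sum_sq(p):
    # sum of squared residuals for parameter vector p = (a, b, c)
    return np.sum((f(x, *p) - y)**2)

# SLSQP 'ineq' constraints require fun(p) >= 0, so a + b + c <= 10 becomes:
cons = ({'type': 'ineq', 'fun': lambda p: 10.0 - np.sum(p)},)
bounds = [(0, None)] * 3            # a, b, c >= 0

res = minimize(sum_sq, x0=[1.0, 1.0, 1.0], method='SLSQP',
               bounds=bounds, constraints=cons)
print(res.x)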
