Piecewise Function lmfit - python

I am trying to define a piecewise function to be fitted with the lmfit library in Python. The issue I am having is that a parameter I have defined for the function will not evaluate against the array of data I am submitting.
I found one example of a case somewhat similar to mine here. However, the vectorize function that answer describes wasn't producing the values I wanted, and when I read the documentation, it didn't seem to be the answer to my problem. I also tried scipy.optimize.leastsq, but I got the same issue described below for lmfit.
I have my residual function defined as
from lmfit import minimize, Parameters, Model

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    if(param2 < x):
        p = 1
    else:
        p = param1*x + param2
    return p - y

params = Parameters()
params.add('one', value=1)
params.add('two', value=2)
out = minimize(residual, params, args=(y, x))
I also tried defining the function such that
def f(param1, param2, x):
    if(param2 < x):
        p = 1
    else:
        p = param1*x + param2
    return p

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    return f(param1, param2, x) - y
I have also tried defining it inline using a lambda function.
I am getting the error 'The truth value of an array with more than one element is ambiguous.' When I got the error, it made sense why it happened, because (param2 < x) produces a logical array. However, I can't seem to find a way to define the function in a piecewise fashion for this case so that it can be fitted with the lmfit.minimize() function. I have seen this done in Matlab, whose nlinfit function seems to evaluate the data element-wise without issue (I tried searching for a Python equivalent of explicit element-wise operators such as .* or .+, but that doesn't seem to exist as such).
lmfit also seems to operate a bit differently compared to nlinfit, because we always have to have our residual return (model - y), while nlinfit outputs the result once the function is given; I am not sure whether that could be another issue.
So to reiterate, my main question is whether there is a way of defining the piecewise function such that it can compare the parameter to the data set.
Any help or explanation would be appreciated, thank you!

In place of (param2 < x) (where param2 is a float and x is a numpy array), you want to use numpy.where. You might try:
import numpy as np

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    p = param1 * x + param2
    p[np.where(param2 < x)] = 1.0
    return p - y
I should also warn you about a potential problem with this approach of having a variable act as the boundary of a piecewise function.
In non-linear fits, variables are always floating point (continuous, non-discrete) values. As the fit proceeds, it will make small adjustments in the values and see how that small change alters the result. In your approach, the parameter 'two' is used as both the transition between pieces and the offset for the line -- that is good.
If a parameter is used only as the transition, it may not work. Consider, say, x=np.array([0, 1., 2., 3., 4., ..., 20.0]). Having two=10.5 and two=10.4 would then give the same result. In that case, the fit would not be able to alter the value of two: it would try a very small change, see no change in the result, and give up.
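To see that stuck-parameter effect concretely (a quick sketch, assuming integer-spaced x as above):
import numpy as np

x = np.arange(0, 21.0)
# two=10.4 and two=10.5 select exactly the same points, so the
# piecewise model (and the fit's objective) is identical for both:
print(np.array_equal(x > 10.4, x > 10.5))   # prints True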
So, either make sure that two is also used elsewhere in your real model (assuming your real model is more complicated than the example given), or consider using a gentler transition rather than a hard change between pieces. I find that an error function with a width of about the spacing between x points often works. Depending on the nature of your problem, you might try something like this:
from scipy.special import erf, erfc

def residual(params, y, x):
    param1 = params['one']
    param2 = params['two']
    dx = (max(x) - min(x))/(len(x)-1)
    xhi = (erf((x-param2)/dx) + 1)/2.0
    xlo = erfc((x-param2)/dx)/2.0   # so that xlo + xhi = 1 everywhere
    p = xlo*1.0 + xhi*(param1*x + param2)
    # note: did you really want?
    # p = xlo*param2 + xhi*(param1*x + param2)
    # or
    # p = param2 + xhi*param1*x
    return p - y
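Running the fit is then the same as before (a minimal sketch, reusing the Parameters setup from the question and assuming x and y hold your data):
from lmfit import minimize, Parameters, fit_report

params = Parameters()
params.add('one', value=1)
params.add('two', value=2)
out = minimize(residual, params, args=(y, x))
print(fit_report(out))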
Hope that helps.

Related

Least square optimalization with huge numbers

I have the following function, which I need to minimize using the least-squares method (I am using lmfit).
y = a * exp(-x/b) + c
I have for example following data:
profitlist = [-10000, 100.00, 1000.00, 100000.00, 1000000.00]
utilitylist = [0, 0.2, 0.4, 0.6, 1]
The app returns the following error:
ValueError: NaN values detected in your input data or the output of your objective/model function - fitting algorithms cannot handle this! Please read https://lmfit.github.io/lmfit-py/faq.html#i-get-errors-from-nan-in-my-fit-what-can-i-do for more information.
The problem seems to be that exp(-x/b) returns inf if profitList contains a larger negative number (-1000 worked, -100000 did not), so it probably overflows.
The values in profitList can be very large floats and are not always the same. So how can I optimize with these huge numbers? It seems that lmfit does not support Python's decimal numbers, which might otherwise fix the issue... What can I do to make it work?
import numpy as np
from lmfit import Parameters, minimize, fit_report

class LeastSquares:
    def __init__(self, profitList, utilityList):
        self.profitList = np.asarray(profitList)
        self.utilityList = np.asanyarray(utilityList)

    def function(self, params, x):
        a = params["a"]
        b = params["b"]
        c = params["c"]
        return a * np.exp(-x/b) + c

    def residual(self, params, x, y):
        return (y - self.function(params, x))**2

    def setParameters(self, a_start, b_start, c_start):
        parameters = Parameters()
        parameters.add(name="a", value=a_start, min=None, max=0, vary=True)
        parameters.add(name="b", value=b_start, vary=True, min=0.1, max=None)
        parameters.add(name="c", value=c_start, vary=True)
        return parameters

    def startOptimalization(self):
        parameters = self.setParameters(-1, 1, 1)
        result = minimize(self.residual, parameters,
                          args=(self.profitList, self.utilityList), method="leastsq")
        result.params.pretty_print()
        print(fit_report(result))
        print("SSE")
        print(np.sum(result.residual))
As you see, numpy.exp(arg) gives Infinity for any argument greater than ~709, and you will need to avoid such extreme values; the underlying solvers simply cannot handle them. Since your argument arg is -x/b, you need to make sure that b is not so small as to blow up the argument to numpy.exp().
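You can check that cutoff directly (a quick sketch; the threshold comes from the float64 maximum of about 1.8e308):
import numpy as np

print(np.exp(709.0))   # about 8.2e307, still representable
print(np.exp(710.0))   # inf, with an overflow RuntimeWarning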
In fact, your code shows that you do set a lower bound on b of 0.1.
But with values of profitlist extending to 1e7, that lower bound is too small to prevent Infinity: your lower limit on b would have to be around 1e7/709, i.e. roughly 14,000.
If your values for profitlist are changing for each optimization run, you may need to do something like this (in your startOptimalization):
parameters = self.setParameters(-1, 1, 1)
parameters['b'].min = max(abs(self.profitList))/700.0
result = minimize(self.residual, parameters, args=(self.profitList, self.utilityList), method="leastsq")
result.params.pretty_print()
Also, when fitting exponential changes, it is often helpful to compute your exponential model function and then take the residual as the difference between the logarithm of your data and the logarithm of your model, effectively doing the fit in log space, which is how you would likely plot the data.
And, finally, don't take the square or the sum of squares of the difference yourself; just return the residual array with its sign intact. That is, you will probably be better off using something like:
def residual(self, params, x, y):
    return np.log(y) - np.log(self.function(params, x))
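One caveat I should add (my note, not part of the original answer): the utilityList in the question contains 0, and np.log(0) gives -inf, which would trigger the same NaN/inf error message. A sketch of one way to guard against that:
def residual(self, params, x, y):
    # fit in log space, but only where y is strictly positive,
    # since np.log(0) would reintroduce -inf into the residual
    good = y > 0
    return np.log(y[good]) - np.log(self.function(params, x[good]))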

Problems using numpy.piecewise

1. The core problem and question
I will provide an executable example below, but let me first walk you through the problem.
I am using solve_ivp from scipy.integrate to solve an initial value problem (see documentation). In fact I have to call the solver twice, to once integrate forward and once backward in time. (I would have to go unnecessarily deep into my concrete problem to explain why this is necessary, but please trust me here--it is!)
sol0 = solve_ivp(rhs,[0,-1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
sol1 = solve_ivp(rhs,[0, 1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
Here rhs is the right-hand-side function of the initial value problem y'(t) = rhs(t,y). In my case, y has six components y[0] to y[5]. y0 = y(0) is the initial condition. [0, ±1e8] are the respective integration ranges, one backward and the other forward in time. rtol and atol are tolerances.
Importantly, you see that I set the flag dense_output=True, which means that the solver returns not only the solutions on the numerical grids, but also interpolation functions sol0.sol(t) and sol1.sol(t).
My main goal now is to define a piecewise function, say sol(t) which takes the value sol0.sol(t) for t<0 and the value sol1.sol(t) for t>=0. So the main question is: How do I do that?
I thought that numpy.piecewise should be the tool of choice to do this for me. But I am having trouble using it, as you will see below, where I show you what I have tried so far.
2. Example code
The code in the box below solves the initial value problem of my example. Most of the code is the definition of the rhs function, the details of which are not important to the question.
import numpy as np
from scipy.integrate import solve_ivp

# aux definitions and constants
sin=np.sin; cos=np.cos; tan=np.tan; sqrt=np.sqrt; pi=np.pi;
c = 299792458
Gm = 5.655090674872875e26

# define right hand side function of initial value problem, y'(t) = rhs(t,y)
def rhs(t,y):
    p,e,i,Om,om,f = y
    sinf=np.sin(f); cosf=np.cos(f); Q=sqrt(p/Gm); opecf=1+e*cosf;
    R = Gm**2/(c**2*p**3)*opecf**2*(3*(e**2 + 1) + 2*e*cosf - 4*e**2*cosf**2)
    S = Gm**2/(c**2*p**3)*4*opecf**3*e*sinf
    rhs = np.zeros(6)
    rhs[0] = 2*sqrt(p**3/Gm)/opecf*S
    rhs[1] = Q*(sinf*R + (2*cosf + e*(1 + cosf**2))/opecf*S)
    rhs[2] = 0
    rhs[3] = 0
    rhs[4] = Q/e*(-cosf*R + (2 + e*cosf)/opecf*sinf*S)
    rhs[5] = sqrt(Gm/p**3)*opecf**2 + Q/e*(cosf*R - (2 + e*cosf)/opecf*sinf*S)
    return rhs

# define initial values, y0
y0 = [3.3578528933149297e13, 0.8846, 2.34921, 3.98284, 1.15715, 0]

# integrate twice from t = 0, once backward in time (sol0) and once forward in time (sol1)
sol0 = solve_ivp(rhs,[0,-1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
sol1 = solve_ivp(rhs,[0, 1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
The solution functions can be accessed from here as sol0.sol and sol1.sol, respectively. As an example, let's plot the 4th component:
from matplotlib import pyplot as plt
t0 = np.linspace(-1,0,500)*1e8
t1 = np.linspace( 0,1,500)*1e8
plt.plot(t0,sol0.sol(t0)[4])
plt.plot(t1,sol1.sol(t1)[4])
plt.title('plot 1')
plt.show()
3. Failing attempts to build piecewise function
3.1 Build vector valued piecewise function directly out of sol0.sol and sol1.sol
def sol(t): return np.piecewise(t,[t<0,t>=0],[sol0.sol,sol1.sol])
t = np.linspace(-1,1,1000)*1e8
print(sol(t))
This leads to the following error in piecewise in line 628 of .../numpy/lib/function_base.py:
TypeError: NumPy boolean array indexing assignment requires a 0 or 1-dimensional input, input has 2 dimensions
I am not sure, but I do think this is because of the following: In the documentation of piecewise it says about the third argument:
funclist : list of callables, f(x, *args, **kw), or scalars
[...] It should take a 1d array as input and give an 1d array or a scalar value as output. [...]
I suppose the problem is that the solution in my case has six components, so evaluated on a time grid the output is a 2d array. Can someone confirm that this is indeed the problem? I think this really limits the usefulness of piecewise by a lot.
3.2 Try the same, but just for one component (e.g. for the 4th)
def sol4(t): return np.piecewise(t,[t<0,t>=0],[sol0.sol(t)[4],sol1.sol(t)[4]])
t = np.linspace(-1,1,1000)*1e8
print(sol4(t))
This results in this error in line 624 of the same file as above:
ValueError: NumPy boolean array indexing assignment cannot assign 1000 input values to the 500 output values where the mask is true
Unlike with the previous error, here I unfortunately have no idea so far why it is not working.
3.3 Similar attempt, however first defining functions for the 4th components
def sol40(t): return sol0.sol(t)[4]
def sol41(t): return sol1.sol(t)[4]
def sol4(t): return np.piecewise(t,[t<0,t>=0],[sol40,sol41])
t = np.linspace(-1,1,1000)
plt.plot(t,sol4(t))
plt.title('plot 2')
plt.show()
Now this does not result in an error, and I can produce a plot; however, the plot doesn't look like it should (it should look like plot 1 above). Here, too, I so far have no clue what is going on.
I am thankful for any help!
You can take a look at the numpy.piecewise source code. There is nothing special in this function, so I suggest doing everything manually.
def sol(t):
    ans = np.empty((6, len(t)))
    ans[:, t<0] = sol0.sol(t[t<0])
    ans[:, t>=0] = sol1.sol(t[t>=0])
    return ans
Regarding your failed attempts: yes, piecewise expects functions to return a 1d array. Your second attempt failed because the documentation says the funclist argument should be a list of functions or scalars, but you passed a list of arrays. Contrary to the documentation, it does work with arrays, but you should use arrays of the same size as t < 0 and t >= 0, like:
def sol4(t): return np.piecewise(t,[t<0,t>=0],[sol0.sol(t[t<0])[4],sol1.sol(t[t>=0])[4]])
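As a quick check (a sketch reusing the plotting code from the question), the manual sol above reproduces plot 1 over the full range:
t = np.linspace(-1, 1, 1000)*1e8
plt.plot(t, sol(t)[4])
plt.title('piecewise solution, 4th component')
plt.show()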

Fixing fit parameters in curve_fit

I have a function Imaginary which describes a physics process, and I want to fit it to a dataset x_interpolate, y_interpolate. The function is a form of Lorentzian peak function, and I have some user-given initial values, except for f_peak (the peak location), which I find using a peak-finding algorithm. All of the fit parameters, except for the offset, are expected to be positive, and thus I have set bounds_I accordingly.
def Imaginary(freq, alpha, res, Ms, off):
    numerator = (2*alpha*freq*res**2)
    denominator = (4*(alpha*res*freq)**2) + (res**2 - freq**2)**2
    Im = Ms*(numerator/denominator) + off
    return Im

pI = np.array([alpha_init, f_peak, Ms_init, 0])
bounds_I = ([0, 0, 0, -np.inf], [np.inf, np.inf, np.inf, np.inf])
poptI, pcovI = curve_fit(Imaginary, x_interpolate, y_interpolate, pI, bounds=bounds_I)
In some situations I want to keep the parameter f_peak fixed during the fitting process. I tried an easy solution by changing bounds_I to:
bounds_I = ([0, f_peak-0.001, 0, -np.inf], [np.inf, f_peak+0.001, np.inf, np.inf])
This is, for many reasons, not an optimal way of doing this, so I was wondering if there is a more Pythonic way. Thank you for your help!
If a parameter is fixed, it is not really a parameter, so it should be removed from the list of parameters. Define a model that has that parameter replaced by a fixed value, and fit that. Example below, simplified for brevity and to be self-contained:
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(10)
y = np.sqrt(x)

def parabola(x, a, b, c):
    return a*x**2 + b*x + c

fit1 = curve_fit(parabola, x, y)   # popt: [-0.02989396, 0.56204598, 0.25337086]

b_fixed = 0.5
fit2 = curve_fit(lambda x, a, c: parabola(x, a, b_fixed, c), x, y)
The second call to fit returns [-0.02350478, 0.35048631], which are the optimal values of a and c. The value of b was fixed at 0.5.
Of course, the parameter should be removed from the initial vector pI and the bounds as well.
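If you prefer a named function over a lambda, this is a purely stylistic alternative with the same behavior (my sketch, reusing parabola from above):
def parabola_b_fixed(x, a, c):
    # same model with b pinned at 0.5; only a and c are fitted
    return parabola(x, a, 0.5, c)

fit3 = curve_fit(parabola_b_fixed, x, y)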
You might find lmfit (https://lmfit.github.io/lmfit-py/) helpful. This library adds a higher-level interface to the scipy optimization routines, aiming for a more Pythonic approach to optimization and curve fitting. For example, it uses Parameter objects to allow setting bounds and fixing parameters without having to modify the objective or model function. For curve fitting, it defines high-level Model classes that can be used.
For your example, you could use your Imaginary function as you've written it with
from lmfit import Model
lmodel = Model(Imaginary)
and then create Parameters (lmfit will name the Parameter objects according to your function signature), providing initial values:
params = lmodel.make_params(alpha=alpha_init, res=f_peak, Ms=Ms_init, off=0)
By default all Parameters are unbound and will vary in the fit, but you can modify these attributes (without rewriting the model function):
params['alpha'].min = 0
params['res'].min = 0
params['Ms'].min = 0
You can set one (or more) of the parameters to not vary in the fit with:
params['res'].vary = False
To be clear: this does not require altering the model function, making it much easier to change what is fixed, what bounds might be imposed, and so forth.
You would then perform the fit with the model and these parameters:
result = lmodel.fit(y_interpolate, params, freq=x_interpolate)
You can get a report of fit statistics, best-fit values, and uncertainties for the parameters with
print(result.fit_report())
The best fit Parameters will be held in result.params.
FWIW, lmfit also has builtin Models for many common forms, including Lorentzian and a Constant offset. So, you could construct this model as
from lmfit.models import LorentzianModel, ConstantModel
mymodel = LorentzianModel(prefix='l_') + ConstantModel()
params = mymodel.make_params()
which will have Parameters named l_amplitude, l_center, l_sigma, and c (where c is the constant) and the model will use the name x for the independent variable (your freq). This approach can become very convenient when you may want to change the functional form of the peaks or background, or when fitting multiple peaks to a spectrum.
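Putting that together, a minimal sketch (the initial values below are placeholders of my own, which you would replace with your own estimates, e.g. f_peak for the center):
from lmfit.models import LorentzianModel, ConstantModel

mymodel = LorentzianModel(prefix='l_') + ConstantModel()
# hypothetical starting values -- substitute your own guesses
params = mymodel.make_params(l_amplitude=1, l_center=f_peak, l_sigma=0.1, c=0)
params['l_center'].vary = False   # fix the peak location, as in the question

result = mymodel.fit(y_interpolate, params, x=x_interpolate)
print(result.fit_report())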
I was able to solve this issue for an arbitrary number of parameters and arbitrary positioning of the fixed parameters:
from numpy import asarray, append
from scipy.optimize import curve_fit

def d_fit(x, y, param, boundMi, boundMx, listparam):
    Sparam, SboundMi, SboundMx = asarray([]), asarray([]), asarray([])
    Nparam, NboundMi, NboundMx = asarray([]), asarray([]), asarray([])
    # split the parameters into fitted (S...) and fixed (N...) groups
    for i in range(len(param)):
        if(listparam[i] == 1):
            Sparam = append(Sparam, asarray(param[i]))
            SboundMi = append(SboundMi, asarray(boundMi[i]))
            SboundMx = append(SboundMx, asarray(boundMx[i]))
        else:
            Nparam = append(Nparam, asarray(param[i]))

    def funF(x, Sparam):
        # rebuild the full parameter array from fitted and fixed values
        j = 0
        for i in range(len(param)):
            if(listparam[i] == 1):
                param[i] = Sparam[i-j]
            else:
                param[i] = Nparam[j]
                j = j + 1
        return fun(x, param)

    return curve_fit(lambda x, *Sparam: funF(x, Sparam), x, y,
                     p0=Sparam, bounds=(SboundMi, SboundMx))
In this case:
param = [a,b,c,...] # parameters array (any size)
boundMi = [min_a, min_b, min_c,...] # minimum allowable value of each parameter
boundMx = [max_a, max_b, max_c,...] # maximum allowable value of each parameter
listparam = [0,1,1,0,...] # 1 = fit and 0 = fix the corresponding parameter in the fit routine
and the root function is defined as
def fun(x, param):
    a,b,c,d.... = param
    return a*b/c... # any function of the params a,b,c,d...
This way, you can change the root function and the number of parameters without changing the fit routine.
And, at any time, you can fix or free any parameter for the fit by changing "listparam".
Use like this:
popt, pcov = d_fit(x, y, param, boundMi, boundMx, listparam)
"popt" and "pcov" are 1D arrays of the size of the number of "1" in "listparam" bringing the results of the fitted parameters (best value and err matrix)
"param" will ramain an 1D array of the same size of the original (input) "param", HOWEVER IT WILL BE UPDATED AUTOMATICALLY TO THE FITTED VALUES (same as "popt") for the fitted values, keeping the fixed values according to "listparam"
Hope can be usefull!
Obs1: x = 1D-array independent values and y = 1D-array dependent values
Obs2: This is my first post. Please let me know if I can improove it!

How to call function for scipy.optimize.fmin_cg(func) in Python

I will explain the problem briefly. The setup is very similar to the example shown in the scipy documentation. The problem is the error that occurs: float argument required, not numpy.ndarray.
What I have:
Function: y = s*z^t
Variable lengths/dimensions:
t - 1...m,
s - 1...m and 1...n (so m is the number of rows, n the number of columns),
z - 1...n,
y - this can be y[1], y[2], y[3], ..., y[m],
T - the s[m,n] matrix
Like this:
y[1] = s[1][1]*z[1]^t[1] + s[1][2]*z[2]^t[1] + ... + s[1][n]*z[n]^t[1]
y[2] = s[2][1]*z[1]^t[2] + s[2][2]*z[2]^t[2] + ... + s[2][n]*z[n]^t[2]
...
y[m] = s[m][1]*z[1]^t[m] + s[m][2]*z[2]^t[m] + ... + s[m][n]*z[n]^t[m]
Problem: Error occurred.
Optimization terminated successfully.
Traceback (most recent call last):
solution = optimize.fmin_cg(func, z, fprime=gradf, args=args)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 952, in fmin_cg
res = _minimize_cg(f, x0, args, fprime, callback=callback, **opts)
File "C:\Python27\lib\site-packages\scipy\optimize\optimize.py", line 1072, in _minimize_cg
print " Current function value: %f" % fval
TypeError: float argument required, not numpy.ndarray
Here is the code
import numpy as np
import scipy as sp
import scipy.optimize as optimize

def func(z, *args):
    y,T,t = args[0]
    return y - counter(T,z,t)

def counter(T, z, t):
    rows,cols = np.shape(T)
    res = np.zeros(rows)
    for i,row_val in enumerate(T):
        res[i] = np.dot(row_val, z**t[i])
    return res

def gradf(z, *args):
    y,T,t = args[0]
    return np.dot(t,counter(T,z,t-1))

def main():
    # Inputs
    N = 30
    M = 20
    z0 = np.zeros(N) # initial guess
    y = 30*np.random.random(M)
    T = 10*np.random.random((M,N))
    t = 5*np.random.random(M)
    args = [y, T, t]
    solution = optimize.fmin_cg(func, z0, fprime=gradf, args=args)
    print 'solution: ', solution

if __name__ == '__main__':
    main()
I also tried to find similar examples, but could not find anything very close. Here is the code for your consideration. Thanks in advance.
The root of your problem is that fmin_cg expects the function to return a single scalar value for the misfit instead of an array.
Basically, you want something vaguely similar to:
def func(z, y, T, t):
    return np.linalg.norm(y - counter(T,z,t))
I'm using np.linalg.norm here because there's no builtin function in numpy for the root-mean-square. The actual RMS would be norm(x) / sqrt(x.size), but for minimization the constant multiplier doesn't make any difference.
There are also other minor problems in your code (e.g. args[0] is going to be a single item. You want y, T, t = args or better yet, just func(z, y, T, t)). Your gradient function doesn't make any sense to me, but it's optional regardless. Also, there's no way the solution can produce reasonable values at the moment, as you're testing it against pure noise. I assume those are just meant to be placeholder values, though.
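Putting those fixes together, a sketch (assuming counter and the arrays from main() above; the gradient is omitted, since fmin_cg will estimate it numerically if fprime is not given):
def func(z, y, T, t):
    # scalar misfit: L2 norm of the residual vector
    return np.linalg.norm(y - counter(T, z, t))

solution = optimize.fmin_cg(func, z0, args=(y, T, t))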
However, you have a larger problem. You're trying to minimize in 30-dimensional space. Most non-linear solvers aren't going to work well with that high of a dimensionality. It may work fine, but you're very likely to run into problems.
All that having been said, you may find it more intuitive to use the scipy.optimize.curve_fit interface rather than using the others, if you're okay with LM instead of CG (they're fairly similar methods).
One final thing: You're trying to solve for 30 model parameters with only 20 observations. This is an underdetermined problem, and it doesn't have a unique solution. You're going to need to apply some a priori knowledge to get a reasonable answer.

NumPy odeint output extra variables

What is the easiest way to save intermediate variables during a simulation with odeint (from scipy.integrate)?
For example:
import numpy as np
from scipy.integrate import odeint

def dy(y, t):
    x = np.random.rand(3, 1)
    return y + x.sum()

sim = odeint(dy, 0, np.arange(0, 1, 0.1))
What would be the easiest way to save the data stored in x during simulation? Ideally at the points specified in the t argument passed to odeint.
A handy way to hack odeint, with some caveats, is to wrap your call to odeint in a method of a class, with dy as another method, and pass self as an argument to your dy function. For example,
import numpy as np
from scipy.integrate import odeint

class WrapODE(object):
    def __init__(self):
        self.y_0 = 0.
        self.L_x = []
        self.timestep = 0
        self.times = np.arange(0., 1., 0.1)

    def run(self):
        self.L_y = odeint(
            self.dy,
            self.y_0, self.times,
            args=(self,))

    @staticmethod
    def dy(y, t, self):
        """
        Discretized application of dudt

        Watch out! Because this is a staticmethod, as required by odeint, self
        is the third argument
        """
        x = np.random.rand(3, 1)
        if t >= self.times[self.timestep]:
            self.timestep += 1
            self.L_x.append(x)
        else:
            self.L_x[-1] = x
        return y + x.sum()
To be clear, this is a hack that is prone to pitfalls. For example, unless odeint is doing Euler stepping, dy is going to get called more times than the number of timesteps you specify. To make sure you get one x for each y, the monkey business in the if t >= self.times[self.timestep]: block picks a spot in an array for storing data for each time value from the times vector. Your particular application might lead to other crazy problems. Be sure to thoroughly validate this method for your application.
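A minimal usage sketch (my addition; the names follow the class above):
w = WrapODE()
w.run()
print(w.L_y.shape)   # one row of y per entry in w.times
print(len(w.L_x))    # intended: one stored x per time step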
