I'm troubled by warnings from odeint and curve_fit. Here is what I'm running into:
My first problem is that curve_fit and odeint repeatedly emit the warnings below for the three data sets (six warnings in total), yet curve_fit still returns results that look correct.
828: OptimizeWarning: Covariance of the parameters could not be estimated
247: ODEintWarning: Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information.
My second problem is with the integrated curves: running the exact same code multiple times gives different results. There even seems to be a period of repetition: after a few executions the curves come out correct, but on the next execution they become incorrect again, and so on. Maybe a problem of instability?
import math
import numpy as np
import pathlib as pl
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.integrate import odeint
def loadData(path):
    with open(path, "r") as fid:
        res = np.loadtxt(fid, comments="#")
    return res

def modeleeq(a, N, p, we, wp):
    res = (we*a/p[0])**p[1] + (wp*a/p[2])**p[3]
    return res
#get path
chemin=pl.Path(input("Paste the path to the directory containing data files\n"))
#get we and wp in array 3X2
wewp=loadData(chemin/'wewp.dat').transpose()
#plotting
plt.style.use('seaborn')
fig, (ax1,ax2) = plt.subplots(1,2)
colors = ['#79ccff', '#f78db4', '#a07ffb']
files = pl.Path(chemin).glob("a_dadN*")
para=[]
for i, f in enumerate(files):
    res = loadData(f)
    a = res[:, 0]
    dadN = res[:, 1]
    we = wewp[i, 0]
    wp = wewp[i, 1]
    # exp data
    ax1.scatter(a, dadN, c=colors[i], marker="x", label=f"test {i+1}")
    # evaluation of parameters
    modele = lambda a, *p: (we*a/p[0])**p[1] + (wp*a/p[2])**p[3]  # p = [ge, me, gp, mp]
    p, pcov = curve_fit(modele, a, dadN, p0=[2e5, 0, 1e5, 0])
    para.append(p)
    ax1.plot(a, modele(a, *p), c=colors[i], label=f"test {i+1} identification")
    # integration
    a0 = a.min()  # initial condition
    N = np.linspace(0, 4000)
    aitgr = odeint(modeleeq, a0, N, args=(p, we, wp))
    ax2.plot(N, aitgr, c=colors[i], label=f"test {i+1} integration")
# the code that follows just adds titles and prints the identified parameters,
# so I won't include it here
Big thanks to you all!
I think there are a few issues here. First, it is clear that the two addends are very similar, so it can, and actually does, happen that the data cannot distinguish between them.

Scaling is a second issue: having fit parameters spanning orders of magnitude is usually not a good idea. In fact, the gammas just rescale the Ws, so one can easily rewrite each addend as ( f1 * a )**e1; one fits f1 and recovers gamma from f1 = W / gamma.

Another issue is the possibility of a negative number being raised to a non-integer power, so one should use either abs( f1 ) or f1**2.

With this in mind I modified the code and got the result below. From the fit results in the second and third case, one can see that either f1 = 0 or the two exponents are almost equal. In such a case it is normal that the covariance matrix cannot be determined.
Finally, when it comes to integrating / plotting a(N): the differential equation is essentially of the type da/dN = a**k, so da / a**k = dN. Integrating gives a**(1-k) proportional to (N0 - N), i.e. a(N) ~ 1 / (N0 - N)**(1/(k-1)). As the initial slope is positive, a grows and diverges as N approaches N0. Numerical integration beyond this value does not make sense, which is where the "excess work" warning comes from.
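To see the divergence concretely, here is a minimal sketch with made-up values for f, e and a0 (not the OP's data), comparing odeint against the closed-form solution of the pure power-law model and stopping safely before the blow-up:
import numpy as np
from scipy.integrate import odeint

f, e = 2.0, 3.0    # assumed example values, with e > 1
a0 = 1e-3          # example initial crack length

# closed form of da/dN = (f*a)**e:  a(N) = (a0**(1-e) - (e-1)*f**e*N)**(-1/(e-1))
N_blowup = a0**(1 - e) / ((e - 1) * f**e)   # where the bracket hits zero

N = np.linspace(0, 0.9 * N_blowup, 200)     # stay below the singularity
a_num = odeint(lambda a, N: (f * a)**e, a0, N)[:, 0]
a_ana = (a0**(1 - e) - (e - 1) * f**e * N)**(-1 / (e - 1))
print(abs(a_num - a_ana).max())             # agreement is good below the blow-up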
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.integrate import odeint
def loadData( path ):
    with open( path, "r" ) as fid:
        res = np.loadtxt( fid, comments="#" )
    return res

def simplified_model( a, N, f1, e1, f2, e2 ):
    dadN = ( np.fabs( f1 ) * a )**e1 + ( np.fabs( f2 ) * a )**e2
    return dadN

plt.style.use('seaborn')
fig = plt.figure()
ax1 = fig.add_subplot( 1, 2, 1 )
ax2 = fig.add_subplot( 1, 2, 2 )
colors = ['#79ccff', '#f78db4', '#a07ffb']
para = list()

for i in range( 3 ):
    res = loadData( "a_dadN_{}.dat".format( i + 1 ) )
    a = res[ ::, 0 ]
    dadN = res[ ::, 1 ]
    # ~we = wewp[ i, 0 ]
    # ~wp = wewp[ i, 1 ]

    def model_wrapper( a, ge, me, gp, mp ):
        return simplified_model( a, 0, ge, me, gp, mp )

    # exp data
    ax1.scatter( a, dadN, c=colors[i], marker="x", label=f"test {i+1}" )
    print( i )
    al = np.linspace( min(a), max(a), 50 )
    guess = [ .2 + i, 2.0, 2, 2.2 ]
    dal = np.fromiter( ( model_wrapper( av, *guess ) for av in al ), float )
    ax1.plot( al, dal, color=colors[i], ls=':' )
    p, pcov = curve_fit( model_wrapper, a, dadN, p0=guess, maxfev=100000 )
    print( p )
    dalf = np.fromiter( ( model_wrapper( av, *p ) for av in al ), float )
    ax1.plot( al, dalf, color=colors[i] )
    # integration
    a0 = a.min()  # initial condition
    N = np.linspace( 0, 4000, 100000 )
    aitgr = odeint( simplified_model, a0, N, args=tuple( p ) )
    ax2.plot( N, aitgr, c=colors[i], label=f"test {i+1} integration" )

ax2.set_ylim( [1e-4, 1e-2] )
ax2.set_yscale( "log" )
plt.show()
providing the fit parameters printed to the console and the plot of the identified and integrated curves.
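Once f1 and f2 are fitted, the original gamma parameters can be recovered, since f = W / gamma. A small sketch (it assumes we and wp are loaded from wewp.dat as in the OP's original code):
# p = [f1, e1, f2, e2] from curve_fit above
ge = we / abs(p[0])   # gamma_e
gp = wp / abs(p[2])   # gamma_p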
I recently got a script running to fit a Gaussian to my absorption profile with the help of SO. My hope was that things would work fine if I simply replaced the Gauss function with a Voigt one, but this seems not to be the case; I think mainly because it is a shifted Voigt.
Edit: The profiles are absorption lines that vary in optical thickness. In practice they will be a mix between optically thick and thin features. Like the bottom part in this diagram. The current data will be more like the top image, but maybe the bottom is already flattened a bit. (And we only see the left side of the profile, a bit beyond the center)
For a Gaussian the fit looks like this, and as predicted the bottom seems to be less deep than the fit wants it to be, but still quite close. The profile itself should still be a Voigt, though. But now I realize that the central points might throw off the fit, so maybe a weight should be added based on wing position?
I'm mostly wondering if the shifted function could be mis-defined or if it's my starting values.
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import numpy as np
from scipy.special import wofz
x = np.arange(13)
xx = np.linspace(0, 13, 100)
y = np.array([19699.959 , 21679.445 , 21143.195 , 20602.875 , 16246.769 ,
11635.25 , 8602.465 , 7035.493 , 6697.0337, 6510.092 ,
7717.772 , 12270.446 , 16807.81 ])
# weighted arithmetic mean (corrected - check the section below)
#mean = 2.4
sigma = 2.4
gamma = 2.4
def Gauss(x, y0, a, x0, sigma):
    return y0 + a * np.exp(-(x - x0)**2 / (2 * sigma**2))

def Voigt(x, x0, y0, a, sigma, gamma):
    #sigma = alpha / np.sqrt(2 * np.log(2))
    return y0 + a * np.real(wofz((x - x0 + 1j*gamma)/sigma/np.sqrt(2))) / sigma / np.sqrt(2*np.pi)
popt, pcov = curve_fit(Voigt, x, y, p0=[8, np.max(y), -(np.max(y)-np.min(y)), sigma, gamma])
#p0=[8, np.max(y), -(np.max(y)-np.min(y)), mean, sigma])
plt.plot(x, y, 'b+:', label='data')
plt.plot(xx, Voigt(xx, *popt), 'r-', label='fit')
plt.legend()
plt.show()
I may be misunderstanding the model you're using, but I think you would need to include some sort of constant or linear background.
To do that with lmfit (which has Voigt, Gaussian, and many other models built in, and tries very hard to make these interchangeable), I would suggest starting with something like this:
import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel, VoigtModel, LinearModel, ConstantModel
x = np.arange(13)
xx = np.linspace(0, 13, 100)
y = np.array([19699.959 , 21679.445 , 21143.195 , 20602.875 , 16246.769 ,
11635.25 , 8602.465 , 7035.493 , 6697.0337, 6510.092 ,
7717.772 , 12270.446 , 16807.81 ])
# build model as Voigt + Constant
## model = GaussianModel() + ConstantModel()
model = VoigtModel() + ConstantModel()
# create parameters with initial values
params = model.make_params(amplitude=-1e5, center=8,
sigma=2, gamma=2, c=25000)
# maybe place bounds on some parameters
params['center'].min = 2
params['center'].max = 12
params['amplitude'].max = 0.
# do the fit, print out report with results
result = model.fit(y, params, x=x)
print(result.fit_report())
# plot data, best fit, fit interpolated to `xx`
plt.plot(x, y, 'b+:', label='data')
plt.plot(x, result.best_fit, 'ko', label='fitted points')
plt.plot(xx, result.eval(x=xx), 'r-', label='interpolated fit')
plt.legend()
plt.show()
And, yes, you can simply replace VoigtModel() with GaussianModel() or LorentzianModel() and redo the fit and compare the fit statistics to see which model is better.
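For example, a quick sketch of such a comparison (using the x and y arrays defined above; a lower AIC means the extra parameters are better justified):
from lmfit.models import GaussianModel, LorentzianModel, VoigtModel, ConstantModel

for peak in (GaussianModel(), LorentzianModel(), VoigtModel()):
    comp = peak + ConstantModel()
    pars = comp.make_params(amplitude=-1e5, center=8, sigma=2, c=25000)
    out = comp.fit(y, pars, x=x)
    print(f"{peak.__class__.__name__:16s}  chi-square={out.chisqr:12.1f}  aic={out.aic:8.2f}")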
For the Voigt model fit, the printed report would be
[[Model]]
(Model(voigt) + Model(constant))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 41
# data points = 13
# variables = 4
chi-square = 17548672.8
reduced chi-square = 1949852.54
Akaike info crit = 191.502014
Bayesian info crit = 193.761811
[[Variables]]
amplitude: -173004.338 +/- 30031.4068 (17.36%) (init = -100000)
center: 8.06574198 +/- 0.16209266 (2.01%) (init = 8)
sigma: 1.96247322 +/- 0.23522096 (11.99%) (init = 2)
c: 23800.6655 +/- 1474.58991 (6.20%) (init = 25000)
gamma: 1.96247322 +/- 0.23522096 (11.99%) == 'sigma'
fwhm: 7.06743644 +/- 0.51511574 (7.29%) == '1.0692*gamma+sqrt(0.8664*gamma**2+5.545083*sigma**2)'
height: -18399.0337 +/- 2273.61672 (12.36%) == '(amplitude/(max(2.220446049250313e-16, sigma*sqrt(2*pi))))*wofz((1j*gamma)/(max(2.220446049250313e-16, sigma*sqrt(2)))).real'
[[Correlations]] (unreported correlations are < 0.100)
C(amplitude, c) = -0.957
C(amplitude, sigma) = -0.916
C(sigma, c) = 0.831
C(center, c) = -0.151
Note that by default gamma is constrained to be the same value as sigma. This constraint can be lifted and gamma made to vary independently with params['gamma'].set(expr=None, vary=True, min=1.e-9). I think that you may not have enough data points in this data set to robustly and independently determine gamma.
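For instance (a sketch, reusing the model and data above; whether 13 points can support the extra degree of freedom is doubtful):
params = model.make_params(amplitude=-1e5, center=8, sigma=2, c=25000)
params['gamma'].set(value=2, expr=None, vary=True, min=1.e-9)   # decouple gamma from sigma
result_free_gamma = model.fit(y, params, x=x)
print(result_free_gamma.fit_report())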
The plot for the Voigt + constant fit reported above would look like this:
I managed to get something, but it's not very satisfying. If you remove the offset as a parameter and add 20000 directly in the Voigt function, with starting values [8, 126000, 0.71, 2] (the particular values don't matter much), you'll get something like
Now the fit produces a negative value for gamma, which I cannot really justify. I would expect gamma to always be positive, but maybe I'm wrong and it's completely fine.
One thing you could try is to mirror your data so that it's a "positive" peak (and, while at it, remove the background) and/or normalize the values. That might help with convergence.
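A rough sketch of that preprocessing (assuming x and y are the arrays from the question; the background level is only crudely estimated here):
import numpy as np

background = np.max(y)                      # crude background estimate
y_flipped = background - np.asarray(y)      # the dip becomes a positive peak
y_norm = y_flipped / y_flipped.max()        # amplitudes of order one

# y_norm can now be fitted with a plain positive-amplitude Gauss/Voigt and the
# fitted amplitude and offset mapped back to the original scale afterwards.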
I have no idea why the solver has problems finding an optimum when the offset is used as a parameter. Maybe you need a different optimizer routine.
Maybe it's a better option to use the lmfit package, which is a wrapper over scipy for fitting nonlinear functions and comes with many prebuilt lineshapes. There is even an example of fitting a Voigt profile.
I'm trying to fit some data to a power law using python. The problem is that some of my points are upper limits, which I don't know how to include in the fitting routine.
In the data, I have put the upper limits as errors in y equal to 1, while the rest are much smaller. You can set these errors to 0 and change the uplims list generator, but then the fit is terrible.
The code is the following:
import numpy as np
import matplotlib.pyplot as plt
from scipy.odr import *
# Initiate some data
x = [1.73e-04, 5.21e-04, 1.57e-03, 4.71e-03, 1.41e-02, 4.25e-02, 1.28e-01, 3.84e-01, 1.15e+00]
x_err = [1e-04, 1e-04, 1e-03, 1e-03, 1e-02, 1e-02, 1e-01, 1e-01, 1e-01]
y = [1.26e-05, 8.48e-07, 2.09e-08, 4.11e-09, 8.22e-10, 2.61e-10, 4.46e-11, 1.02e-11, 3.98e-12]
y_err = [1, 1, 2.06e-08, 2.5e-09, 5.21e-10, 1.38e-10, 3.21e-11, 1, 1]
# Define upper limits (True where the y error was set to 1)
uplims = np.array(y_err) >= 1
# Define a function (power law in our case) to fit the data with.
def function(p, x):
    m, c = p
    return m*x**(-c)
# Create a model for fitting.
model = Model(function)
# Create a RealData object using our initiated data from above.
data = RealData(x, y, sx=x_err, sy=y_err)
# Set up ODR with the model and data.
odr = ODR(data, model, beta0=[1e-09, 2])
odr.set_job(fit_type=0) # 0 is full ODR and 2 is least squares; AFAIK, it doesn't change within errors
# more details in https://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.ODR.set_job.html
# Run the regression.
out = odr.run()
# Use the in-built pprint method to give us results.
#out.pprint() #this prints much information, but normally we don't need it, just the parameters and errors; the residual variation is the reduced chi square estimator
print('amplitude = %5.2e +/- %5.2e \nindex = %5.2f +/- %5.2f \nchi square = %12.8f'% (out.beta[0], out.sd_beta[0], out.beta[1], out.sd_beta[1], out.res_var))
# Generate fitted data.
x_fit = np.linspace(x[0], x[-1], 1000) #to do the fit only within the x interval; we can always extrapolate it, of course
y_fit = function(out.beta, x_fit)
# Generate a plot to show the data, errors, and fit.
fig, ax = plt.subplots()
ax.errorbar(x, y, xerr=x_err, yerr=y_err, uplims=uplims, linestyle='None', marker='x')
ax.loglog(x_fit, y_fit)
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$f(x) = m \cdot x^{-c}$')
ax.set_title('Power Law fit')
plt.show()
The result of the fit is:
amplitude = 3.42e-12 +/- 5.32e-13
index = 1.33 +/- 0.04
chi square = 0.01484021
As you can see in the plot, the first two and last two points are upper limits, and the fit does not take them into account. Moreover, the fit goes above the penultimate point, even though that should be strictly forbidden.
I need the fit to treat these limits as strict: it should not try to fit the points themselves but only respect them as limits. How could I do this with the odr routine (or any other code that does the fit and gives me a chi-square-like estimator)?
Please take into account that I need to be able to change the function to other generalizations easily, so things like the powerlaw module are not desirable.
Thanks!
This answer is related to this post, where I discuss fitting with x and y errors. It therefore does not require the ODR module but can be done manually, so one can use leastsq or minimize.

Concerning the constraints, I made clear in other posts that I try to avoid them if possible, and that can be done here as well, although the details of the programming and the maths are a little cumbersome, especially if it is supposed to be stable and foolproof. I will just give a rough idea. Say we want y0 > m * x0**(-c). In log form we can write this as eta0 > mu - c * xeta0, i.e. there is an alpha such that eta0 = mu - c * xeta0 + alpha**2. The same holds for the other inequalities. For the second upper limit you get a beta**2, but you can decide which one is the smaller, so you automatically fulfil the other condition. The same works for the lower limits with a gamma**2 and a delta**2. Say we can work with alpha and gamma. We can combine the inequality conditions to relate those two as well. At the end we can fit a sigma, with alpha = sqrt(s - t) * sigma / sqrt(sigma**2 + 1), where s and t are derived from the inequalities. The sigma / sqrt(sigma**2 + 1) function is just one option to let alpha vary in a certain range, i.e. alpha**2 < s - t; the fact that the radicand may become negative shows that there are cases without a solution. With alpha known, mu, and therefore m, is calculated. So the fit parameters are c and sigma, which takes the inequalities into account and makes m dependent. I tried it and it works, but the version at hand is not the most stable one. I'd post it upon request.
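A minimal sketch of that reparametrization (illustrative only; s and t stand for the bounds derived from the inequalities):
import numpy as np

def bounded_alpha(sigma, s, t):
    # sigma / sqrt(sigma**2 + 1) runs over (-1, 1), so alpha**2 stays below s - t
    assert s - t > 0, "negative radicand: no solution exists"
    return np.sqrt(s - t) * sigma / np.sqrt(sigma**2 + 1)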
As we already have a handmade residual function, there is a second option, though: we just introduce our own chi**2 function and use minimize, which allows constraints. Since minimize and its constraints keyword are very flexible, and the residual function is easily modified for functions other than m * x**(-c), the overall construction is quite flexible. It looks as follows:
import matplotlib.pyplot as plt
import numpy as np
from random import random, seed
from scipy.optimize import minimize,leastsq
seed(7563)
fig1 = plt.figure(1)
###for gaussion distributed errors
def boxmuller(x0,sigma):
u1=random()
u2=random()
ll=np.sqrt(-2*np.log(u1))
z0=ll*np.cos(2*np.pi*u2)
z1=ll*np.cos(2*np.pi*u2)
return sigma*z0+x0, sigma*z1+x0
###for plotting ellipses
def ell_data(a,b,x0=0,y0=0):
tList=np.linspace(0,2*np.pi,150)
k=float(a)/float(b)
rList=[a/np.sqrt((np.cos(t))**2+(k*np.sin(t))**2) for t in tList]
xyList=np.array([[x0+r*np.cos(t),y0+r*np.sin(t)] for t,r in zip(tList,rList)])
return xyList
###function to fit
def f(x,m,c):
y = abs(m) * abs(x)**(-abs(c))
#~ print y,x,m,c
return y
###how to rescale the ellipse to make fitfunction a tangent
def elliptic_rescale(x, m, c, x0, y0, sa, sb):
#~ print "e,r",x,m,c
y=f( x, m, c )
#~ print "e,r",y
r=np.sqrt( ( x - x0 )**2 + ( y - y0 )**2 )
kappa=float( sa ) / float( sb )
tau=np.arctan2( y - y0, x - x0 )
new_a=r*np.sqrt( np.cos( tau )**2 + ( kappa * np.sin( tau ) )**2 )
return new_a
###residual function to calculate chi-square
def residuals(parameters,dataPoint):#data point is (x,y,sx,sy)
m, c = parameters
#~ print "m c", m, c
theData = np.array(dataPoint)
best_t_List=[]
for i in range(len(dataPoint)):
x, y, sx, sy = dataPoint[i][0], dataPoint[i][1], dataPoint[i][2], dataPoint[i][3]
#~ print "x, y, sx, sy",x, y, sx, sy
###getthe point on the graph where it is tangent to an error-ellipse
ed_fit = minimize( elliptic_rescale, x , args = ( m, c, x, y, sx, sy ) )
best_t = ed_fit['x'][0]
best_t_List += [best_t]
#~ exit(0)
best_y_List=[ f( t, m, c ) for t in best_t_List ]
##weighted distance not squared yet, as this is done by scipy.optimize.leastsq
wighted_dx_List = [ ( x_b - x_f ) / sx for x_b, x_f, sx in zip( best_t_List,theData[:,0], theData[:,2] ) ]
wighted_dy_List = [ ( x_b - x_f ) / sx for x_b, x_f, sx in zip( best_y_List,theData[:,1], theData[:,3] ) ]
return wighted_dx_List + wighted_dy_List
def chi2(params, pnts):
r = np.array( residuals( params, pnts ) )
s = sum( [ x**2 for x in r] )
#~ print params,s,r
return s
def myUpperIneq(params,pnt):
m, c = params
x,y=pnt
return y - f( x, m, c )
def myLowerIneq(params,pnt):
m, c = params
x,y=pnt
return f( x, m, c ) - y
###to create some test data
def test_data(m,c, xList,const_sx,rel_sx,const_sy,rel_sy):
yList=[f(x,m,c) for x in xList]
xErrList=[ boxmuller(x,const_sx+x*rel_sx)[0] for x in xList]
yErrList=[ boxmuller(y,const_sy+y*rel_sy)[0] for y in yList]
return xErrList,yErrList
###some start values
mm_0=2.3511
expo_0=.3588
csx,rsx=.01,.07
csy,rsy=.04,.09,
limitingPoints=dict()
limitingPoints[0]=np.array([[.2,5.4],[.5,5.0],[5.1,.9],[5.7,.9]])
limitingPoints[1]=np.array([[.2,5.4],[.5,5.0],[5.1,1.5],[5.7,1.2]])
limitingPoints[2]=np.array([[.2,3.4],[.5,5.0],[5.1,1.1],[5.7,1.2]])
limitingPoints[3]=np.array([[.2,3.4],[.5,5.0],[5.1,1.7],[5.7,1.2]])
####some data
xThData=np.linspace(.2,5,15)
yThData=[ f(x, mm_0, expo_0) for x in xThData]
#~ ###some noisy data
xNoiseData,yNoiseData=test_data(mm_0, expo_0, xThData, csx,rsx, csy,rsy)
xGuessdError=[csx+rsx*x for x in xNoiseData]
yGuessdError=[csy+rsy*y for y in yNoiseData]
for testing in range(4):
###Now fitting with limits
zipData = list(zip(xNoiseData, yNoiseData, xGuessdError, yGuessdError))  # list() so it can be indexed and reused in Python 3
estimate = [ 2.4, .3 ]
con0={'type': 'ineq', 'fun': myUpperIneq, 'args': (limitingPoints[testing][0],)}
con1={'type': 'ineq', 'fun': myUpperIneq, 'args': (limitingPoints[testing][1],)}
con2={'type': 'ineq', 'fun': myLowerIneq, 'args': (limitingPoints[testing][2],)}
con3={'type': 'ineq', 'fun': myLowerIneq, 'args': (limitingPoints[testing][3],)}
myResult = minimize( chi2 , estimate , args=( zipData, ), constraints=[ con0, con1, con2, con3 ] )
print "############"
print myResult
###plot that
ax=fig1.add_subplot(4,2,2*testing+1)
ax.plot(xThData,yThData)
ax.errorbar(xNoiseData,yNoiseData, xerr=xGuessdError, yerr=yGuessdError, fmt='none',ecolor='r')
testX = np.linspace(.2,6,25)
testY = np.fromiter( ( f( x, myResult.x[0], myResult.x[1] ) for x in testX ), float)
bx=fig1.add_subplot(4,2,2*testing+2)
bx.plot(xThData,yThData)
bx.errorbar(xNoiseData,yNoiseData, xerr=xGuessdError, yerr=yGuessdError, fmt='none',ecolor='r')
ax.plot(limitingPoints[testing][:,0],limitingPoints[testing][:,1],marker='x', linestyle='')
bx.plot(limitingPoints[testing][:,0],limitingPoints[testing][:,1],marker='x', linestyle='')
ax.plot(testX, testY, linestyle='--')
bx.plot(testX, testY, linestyle='--')
bx.set_xscale('log')
bx.set_yscale('log')
plt.show()
Providing results
############
status: 0
success: True
njev: 8
nfev: 36
fun: 13.782127248002116
x: array([ 2.15043226, 0.35646436])
message: 'Optimization terminated successfully.'
jac: array([-0.00377715, 0.00350225, 0. ])
nit: 8
############
status: 0
success: True
njev: 7
nfev: 32
fun: 41.372277637885716
x: array([ 2.19005695, 0.23229378])
message: 'Optimization terminated successfully.'
jac: array([ 123.95069313, -442.27114677, 0. ])
nit: 7
############
status: 0
success: True
njev: 5
nfev: 23
fun: 15.946621924326545
x: array([ 2.06146362, 0.31089065])
message: 'Optimization terminated successfully.'
jac: array([-14.39131606, -65.44189298, 0. ])
nit: 5
############
status: 0
success: True
njev: 7
nfev: 34
fun: 88.306027468763432
x: array([ 2.16834392, 0.14935514])
message: 'Optimization terminated successfully.'
jac: array([ 224.11848736, -791.75553417, 0. ])
nit: 7
I checked four different limiting points (rows). The results are displayed on a linear scale and on a logarithmic scale (columns). With some additional work you could get parameter errors as well.
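One way to get rough parameter errors (my own sketch, not part of the code above) is a numerical Hessian of the chi-square at the minimum; for a chi-square built from properly weighted residuals, the covariance is approximately twice the inverse Hessian:
import numpy as np

def numerical_hessian(func, x0, args=(), eps=1e-5):
    # simple central-difference Hessian of a scalar function
    n = len(x0)
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = np.array(x0, dtype=float); xpp[i] += eps; xpp[j] += eps
            xpm = np.array(x0, dtype=float); xpm[i] += eps; xpm[j] -= eps
            xmp = np.array(x0, dtype=float); xmp[i] -= eps; xmp[j] += eps
            xmm = np.array(x0, dtype=float); xmm[i] -= eps; xmm[j] -= eps
            hess[i, j] = (func(xpp, *args) - func(xpm, *args)
                          - func(xmp, *args) + func(xmm, *args)) / (4 * eps**2)
    return hess

# usage sketch:
# hess = numerical_hessian(chi2, myResult.x, args=(zipData,))
# cov = 2.0 * np.linalg.inv(hess)
# errors = np.sqrt(np.diag(cov))
Note that this is only trustworthy if none of the inequality constraints is active at the minimum.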
Update on asymmetric errors
To be honest, at the moment I do not know how to handle this property. Naively, I'd define my own asymmetric loss function similar to this post.
With x and y errors I do it by quadrant instead of just checking positive or negative side. My error ellipse, hence, changes to four connected pieces.
Nevertheless, it is somewhat reasonable. For testing and to show how it works, I made an example with a linear function. I guess the OP can combine the two pieces of code according to his requirements.
In case of a linear fit it looks like this:
import matplotlib.pyplot as plt
import numpy as np
from random import random, seed
from scipy.optimize import minimize,leastsq
#~ seed(7563)
fig1 = plt.figure(1)
ax=fig1.add_subplot(2,1,1)
bx=fig1.add_subplot(2,1,2)
###function to fit, here only linear for testing.
def f(x,m,y0):
y = m * x +y0
return y
###for gaussion distributed errors
def boxmuller(x0,sigma):
u1=random()
u2=random()
ll=np.sqrt(-2*np.log(u1))
z0=ll*np.cos(2*np.pi*u2)
z1=ll*np.cos(2*np.pi*u2)
return sigma*z0+x0, sigma*z1+x0
###for plotting ellipse quadrants
def ell_data(aN,aP,bN,bP,x0=0,y0=0):
tPPList=np.linspace(0, 0.5 * np.pi, 50)
kPP=float(aP)/float(bP)
rPPList=[aP/np.sqrt((np.cos(t))**2+(kPP*np.sin(t))**2) for t in tPPList]
tNPList=np.linspace( 0.5 * np.pi, 1.0 * np.pi, 50)
kNP=float(aN)/float(bP)
rNPList=[aN/np.sqrt((np.cos(t))**2+(kNP*np.sin(t))**2) for t in tNPList]
tNNList=np.linspace( 1.0 * np.pi, 1.5 * np.pi, 50)
kNN=float(aN)/float(bN)
rNNList=[aN/np.sqrt((np.cos(t))**2+(kNN*np.sin(t))**2) for t in tNNList]
tPNList = np.linspace( 1.5 * np.pi, 2.0 * np.pi, 50)
kPN = float(aP)/float(bN)
rPNList = [aP/np.sqrt((np.cos(t))**2+(kPN*np.sin(t))**2) for t in tPNList]
tList = np.concatenate( [ tPPList, tNPList, tNNList, tPNList] )
rList = rPPList + rNPList+ rNNList + rPNList
xyList=np.array([[x0+r*np.cos(t),y0+r*np.sin(t)] for t,r in zip(tList,rList)])
return xyList
###how to rescale the ellipse to touch fitfunction at point (x,y)
def elliptic_rescale_asymmetric(x, m, c, x0, y0, saN, saP, sbN, sbP , getQuadrant=False):
y=f( x, m, c )
###distance to function
r=np.sqrt( ( x - x0 )**2 + ( y - y0 )**2 )
###angle to function
tau=np.arctan2( y - y0, x - x0 )
quadrant=0
if tau >0:
if tau < 0.5 * np.pi: ## PP
kappa=float( saP ) / float( sbP )
quadrant=1
else:
kappa=float( saN ) / float( sbP )
quadrant=2
else:
if tau < -0.5 * np.pi: ## PP
kappa=float( saN ) / float( sbN)
quadrant=3
else:
kappa=float( saP ) / float( sbN )
quadrant=4
new_a=r*np.sqrt( np.cos( tau )**2 + ( kappa * np.sin( tau ) )**2 )
if quadrant == 1 or quadrant == 4:
rel_a=new_a/saP
else:
rel_a=new_a/saN
if getQuadrant:
return rel_a, quadrant, tau
else:
return rel_a
### residual function to calculate chi-square
def residuals(parameters,dataPoint):#data point is (x,y,sxN,sxP,syN,syP)
m, c = parameters
theData = np.array(dataPoint)
bestTList=[]
qqList=[]
weightedDistanceList = []
for i in range(len(dataPoint)):
x, y, sxN, sxP, syN, syP = dataPoint[i][0], dataPoint[i][1], dataPoint[i][2], dataPoint[i][3], dataPoint[i][4], dataPoint[i][5]
### get the point on the graph where it is tangent to an error-ellipse
### i.e. smallest ellipse touching the graph
edFit = minimize( elliptic_rescale_asymmetric, x , args = ( m, c, x, y, sxN, sxP, syN, syP ) )
bestT = edFit['x'][0]
bestTList += [ bestT ]
bestA,qq , tau= elliptic_rescale_asymmetric( bestT, m, c , x, y, aN, aP, bN, bP , True)
qqList += [ qq ]
bestYList=[ f( t, m, c ) for t in bestTList ]
### weighted distance not squared yet, as this is done by scipy.optimize.leastsq or manual chi2 function
for counter in range(len(dataPoint)):
xb=bestTList[counter]
xf=dataPoint[counter][0]
yb=bestYList[counter]
yf=dataPoint[counter][1]
quadrant=qqList[counter]
if quadrant == 1:
sx, sy = sxP, syP
elif quadrant == 2:
sx, sy = sxN, syP
elif quadrant == 3:
sx, sy = sxN, syN
elif quadrant == 4:
sx, sy = sxP, syN
else:
assert 0
weightedDistanceList += [ ( xb - xf ) / sx, ( yb - yf ) / sy ]
return weightedDistanceList
def chi2(params, pnts):
r = np.array( residuals( params, pnts ) )
s = np.fromiter( ( x**2 for x in r), float ).sum()
return s
####...to make data with asymmetric error (fixed); for testing only
def noisy_data(xList,m0,y0, sxN,sxP,syN,syP):
yList=[ f(x, m0, y0) for x in xList]
gNList=[boxmuller(0,1)[0] for dummy in range(len(xList))]
xerrList=[]
for x,err in zip(xList,gNList):
if err < 0:
xerrList += [ sxP * err + x ]
else:
xerrList += [ sxN * err + x ]
gNList=[boxmuller(0,1)[0] for dummy in range(len(xList))]
yerrList=[]
for y,err in zip(yList,gNList):
if err < 0:
yerrList += [ syP * err + y ]
else:
yerrList += [ syN * err + y ]
return xerrList, yerrList
###some start values
m0=1.3511
y0=-2.2
aN, aP, bN, bP=.2,.5, 0.9, 1.6
#### some data
xThData=np.linspace(.2,5,15)
yThData=[ f(x, m0, y0) for x in xThData]
xThData0=np.linspace(-1.2,7,3)
yThData0=[ f(x, m0, y0) for x in xThData0]
### some noisy data
xErrList,yErrList = noisy_data(xThData, m0, y0, aN, aP, bN, bP)
###...and the fit
dataToFit = list(zip(xErrList, yErrList, len(xThData)*[aN], len(xThData)*[aP], len(xThData)*[bN], len(xThData)*[bP]))  # list() needed in Python 3
fitResult = minimize(chi2, (m0,y0) , args=(dataToFit,) )
fittedM, fittedY=fitResult.x
yThDataF=[ f(x, fittedM, fittedY) for x in xThData0]
### plot that
for cx in [ax,bx]:
cx.plot([-2,7], [f(x, m0, y0 ) for x in [-2,7]])
ax.errorbar(xErrList,yErrList, xerr=[ len(xThData)*[aN],len(xThData)*[aP] ], yerr=[ len(xThData)*[bN],len(xThData)*[bP] ], fmt='ro')
for x, y in zip(xErrList, yErrList):
xEllList,yEllList = zip( *ell_data(aN,aP,bN,bP,x,y) )
ax.plot(xEllList,yEllList ,c='#808080')
### rescaled
### ...as well as a scaled version that touches the original graph. This gives the error shortest distance to that graph
ed_fit = minimize( elliptic_rescale_asymmetric, 0 ,args=(m0, y0, x, y, aN, aP, bN, bP ) )
best_t = ed_fit['x'][0]
best_a,qq , tau= elliptic_rescale_asymmetric( best_t, m0, y0 , x, y, aN, aP, bN, bP , True)
xEllList,yEllList = zip( *ell_data( aN * best_a, aP * best_a, bN * best_a, bP * best_a, x, y) )
ax.plot( xEllList, yEllList, c='#4040a0' )
###plot the fit
bx.plot(xThData0,yThDataF)
bx.errorbar(xErrList,yErrList, xerr=[ len(xThData)*[aN],len(xThData)*[aP] ], yerr=[ len(xThData)*[bN],len(xThData)*[bP] ], fmt='ro')
for x, y in zip(xErrList, yErrList):
xEllList,yEllList = zip( *ell_data(aN,aP,bN,bP,x,y) )
bx.plot(xEllList,yEllList ,c='#808080')
####rescaled
####...as well as a scaled version that touches the original graph. This gives the error shortest distance to that graph
ed_fit = minimize( elliptic_rescale_asymmetric, 0 ,args=(fittedM, fittedY, x, y, aN, aP, bN, bP ) )
best_t = ed_fit['x'][0]
#~ print best_t
best_a,qq , tau= elliptic_rescale_asymmetric( best_t, fittedM, fittedY , x, y, aN, aP, bN, bP , True)
xEllList,yEllList = zip( *ell_data( aN * best_a, aP * best_a, bN * best_a, bP * best_a, x, y) )
bx.plot( xEllList, yEllList, c='#4040a0' )
plt.show()
which plots
The upper graph shows the original linear function and some data generated from it using asymmetric Gaussian errors. The error bars are plotted, as well as the piecewise error ellipses (grey) and their rescaled versions that touch the linear function (blue). The lower graph additionally shows the fitted function as well as the rescaled piecewise ellipses touching the fitted function.
I'm looking for a way to plot a curve through some experimental data. The data shows a small linear regime with a shallow gradient, followed by a steep linear regime after a threshold value.
My data is here: http://pastebin.com/H4NSbxqr
I could fit the data with two lines relatively easily, but I'd like to fit with a continuous line ideally - which should look like two lines with a smooth curve joining them around the threshold (~5000 in the data, shown above).
I attempted this using scipy.optimize's curve_fit, trying a function that is the sum of a straight line and an exponential:
y = a*x + b + c*np.exp((x-d)/e)
but despite numerous attempts, it didn't find a solution.
If anyone has any suggestions please, either on the choice of fitting distribution / method or the curve_fit implementation, they would be greatly appreciated.
If you don't have a particular reason to believe that linear + exponential is the true underlying model for your data, then I think a fit to two lines makes the most sense. You can do this by making your fitting function the maximum of two lines, for example:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def two_lines(x, a, b, c, d):
    one = a*x + b
    two = c*x + d
    return np.maximum(one, two)
Then,
x, y = np.genfromtxt('tmp.txt', unpack=True, delimiter=',')
pw0 = (.02, 30, .2, -2000) # a guess for slope, intercept, slope, intercept
pw, cov = curve_fit(two_lines, x, y, pw0)
crossover = (pw[3] - pw[1]) / (pw[0] - pw[2])
plt.plot(x, y, 'o', x, two_lines(x, *pw), '-')
If you really want a continuous and differentiable solution, it occurred to me that a hyperbola has a sharp bend to it, but it has to be rotated. It was a bit difficult to implement (maybe there's an easier way), but here's a go:
def hyperbola(x, a, b, c, d, e):
    """ hyperbola(x) with parameters
        a/b = asymptotic slope
          c = curvature at vertex
          d = offset to vertex
          e = vertical offset
    """
    return a*np.sqrt((b*c)**2 + (x-d)**2)/b + e

def rot_hyperbola(x, a, b, c, d, e, th):
    pars = a, b, c, 0, 0  # do the shifting after rotation
    xd = x - d
    hsin = hyperbola(xd, *pars)*np.sin(th)
    xcos = xd*np.cos(th)
    return e + hyperbola(xcos - hsin, *pars)*np.cos(th) + xcos - hsin
Run it as
h0 = 1.1, 1, 0, 5000, 100, .5
h, hcov = curve_fit(rot_hyperbola, x, y, h0)
plt.plot(x, y, 'o', x, two_lines(x, *pw), '-', x, rot_hyperbola(x, *h), '-')
plt.legend(['data', 'piecewise linear', 'rotated hyperbola'], loc='upper left')
plt.show()
I was also able to get the line + exponential to converge, but it looks terrible. This is because it's not a good descriptor of your data, which is linear, and an exponential is very far from linear!
def line_exp(x, a, b, c, d, e):
    return a*x + b + c*np.exp((x-d)/e)
e0 = .1, 20., .01, 1000., 2000.
e, ecov = curve_fit(line_exp, x, y, e0)
If you want to keep it simple, there's always a polynomial or spline (piecewise polynomials)
from scipy.interpolate import UnivariateSpline
s = UnivariateSpline(x, y, s=x.size) #larger s-value has fewer "knots"
plt.plot(x, s(x))
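For the polynomial option (not in the original answer, just the obvious counterpart to the spline, using the same x, y and imports as above):
coeffs = np.polyfit(x, y, deg=3)       # the degree is an arbitrary choice here
plt.plot(x, np.polyval(coeffs, x))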
I researched this a little; Applied Linear Regression by Sanford and the Correlation and Regression lecture by Steiger had some good info on it. However, they all lack the right model; the piecewise function should be:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import lmfit
dfseg = pd.read_csv('segreg.csv')
def err(w):
    th0 = w['th0'].value
    th1 = w['th1'].value
    th2 = w['th2'].value
    gamma = w['gamma'].value
    fit = th0 + th1*dfseg.Temp + th2*np.maximum(0, dfseg.Temp-gamma)
    return fit - dfseg.C
p = lmfit.Parameters()
p.add_many(('th0', 0.), ('th1', 0.0),('th2', 0.0),('gamma', 40.))
mi = lmfit.minimize(err, p)
lmfit.printfuncs.report_fit(mi.params)
b0 = mi.params['th0']; b1=mi.params['th1'];b2=mi.params['th2']
gamma = int(mi.params['gamma'].value)
import statsmodels.formula.api as smf
reslin = smf.ols('C ~ 1 + Temp + I((Temp-%d)*(Temp>%d))' % (gamma,gamma), data=dfseg).fit()
print(reslin.summary())
x0 = np.array(range(0,gamma,1))
x1 = np.array(range(0,80-gamma,1))
y0 = b0 + b1*x0
y1 = (b0 + b1 * float(gamma) + (b1 + b2)* x1)
plt.scatter(dfseg.Temp, dfseg.C)
# plt.hold(True) is no longer needed (it was removed in matplotlib 3.0)
plt.plot(x0,y0)
plt.plot(x1+gamma,y1)
plt.show()
Result
[[Variables]]
th0: 78.6554456 +/- 3.966238 (5.04%) (init= 0)
th1: -0.15728297 +/- 0.148250 (94.26%) (init= 0)
th2: 0.72471237 +/- 0.179052 (24.71%) (init= 0)
gamma: 38.3110177 +/- 4.845767 (12.65%) (init= 40)
The data
"","Temp","C"
"1",8.5536,86.2143
"2",10.6613,72.3871
"3",12.4516,74.0968
"4",16.9032,68.2258
"5",20.5161,72.3548
"6",21.1613,76.4839
"7",24.3929,83.6429
"8",26.4839,74.1935
"9",26.5645,71.2581
"10",27.9828,78.2069
"11",32.6833,79.0667
"12",33.0806,71.0968
"13",33.7097,76.6452
"14",34.2903,74.4516
"15",36,56.9677
"16",37.4167,79.8333
"17",43.9516,79.7097
"18",45.2667,76.9667
"19",47,76
"20",47.1129,78.0323
"21",47.3833,79.8333
"22",48.0968,73.9032
"23",49.05,78.1667
"24",57.5,81.7097
"25",59.2,80.3
"26",61.3226,75
"27",61.9194,87.0323
"28",62.3833,89.8
"29",64.3667,96.4
"30",65.371,88.9677
"31",68.35,91.3333
"32",70.7581,91.8387
"33",71.129,90.9355
"34",72.2419,93.4516
"35",72.85,97.8333
"36",73.9194,92.4839
"37",74.4167,96.1333
"38",76.3871,89.8387
"39",78.0484,89.4516
Graph
I used #user423805 's answer (found via google groups thread: https://groups.google.com/forum/#!topic/lmfit-py/7I2zv2WwFLU ) but noticed it had some limitations when trying to use three or more segments.
Instead of applying np.maximum in the minimizer error function or adding (b1 + b2) in #user423805 's answer, I used the same linear spline calculation for both the minimizer and end-usage:
# least_splines_calc works like this for an example with three segments
# (four threshold params, three gamma params):
#
# for 0 < x < gamma0 : y = th0 + (th1 * x)
# for gamma0 < x < gamma1 : y = th0 + (th1 * x) + (th2 * (x - gamma0))
# for gamma1 < x : y = th0 + (th1 * x) + (th2 * (x - gamma0)) + (th3 * (x - gamma1))
#
def least_splines_calc(x, thresholds, gammas):
if(len(thresholds) < 2):
print("Error: expected at least two thresholds")
return None
applicable_gammas = list(filter(lambda gamma: x > gamma, gammas))  # list() so len() and indexing work in Python 3
#base result
y = thresholds[0] + (thresholds[1] * x)
#additional factors calculated depending on x value
for i in range(0, len(applicable_gammas)):
y = y + ( thresholds[i + 2] * ( x - applicable_gammas[i] ) )
return y
def least_splines_calc_array(x_array, thresholds, gammas):
y_array = list(map(lambda x: least_splines_calc(x, thresholds, gammas), x_array))  # list() so it can be turned into an array
return y_array
def err(params, x, data):
th0 = params['th0'].value
th1 = params['th1'].value
th2 = params['th2'].value
th3 = params['th3'].value
gamma1 = params['gamma1'].value
gamma2 = params['gamma2'].value
thresholds = np.array([th0, th1, th2, th3])
gammas = np.array([gamma1, gamma2])
fit = least_splines_calc_array(x, thresholds, gammas)
return np.array(fit)-np.array(data)
p = lmfit.Parameters()
p.add_many(('th0', 0.), ('th1', 0.0),('th2', 0.0),('th3', 0.0),('gamma1', 9.),('gamma2', 9.3)) #NOTE: the 9. / 9.3 were guesses specific to my data, you will need to change these
mi = lmfit.minimize(err, p, args=(np.array(dfseg.Temp), np.array(dfseg.C)))
After minimization, convert the params found by the minimizer into arrays of thresholds and gammas, and re-use least_splines_calc (via least_splines_calc_array) to plot the linear splines regression, as sketched below.
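A short sketch of that post-fit step (dfseg and least_splines_calc_array come from the code in this and the previous answer; the plotting details are my own assumption):
import numpy as np
import matplotlib.pyplot as plt

fitted = mi.params
thresholds = np.array([fitted['th0'].value, fitted['th1'].value,
                       fitted['th2'].value, fitted['th3'].value])
gammas = np.array([fitted['gamma1'].value, fitted['gamma2'].value])

x_plot = np.linspace(min(dfseg.Temp), max(dfseg.Temp), 200)
plt.scatter(dfseg.Temp, dfseg.C)
plt.plot(x_plot, least_splines_calc_array(x_plot, thresholds, gammas))
plt.show()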
Reference: while there are various places that explain least splines (I think #user423805 used http://www.statpower.net/Content/313/Lecture%20Notes/Splines.pdf, which has the (b1 + b2) addition I disagree with in its sample code despite similar equations), the one that made the most sense to me was this one, by Rob Schapire / Zia Khan at Princeton: https://www.cs.princeton.edu/courses/archive/spring07/cos424/scribe_notes/0403.pdf - section 2.2 goes into linear splines.
If you're looking to join what appears to be two straight lines with a hyperbola having a variable radius at/near the intersection of the two lines (which are its asymptotes), I urge you to look hard at Using an Hyperbola as a Transition Model to Fit Two-Regime Straight-Line Data, by Donald G. Watts and David W. Bacon, Technometrics, Vol. 16, No. 3 (Aug., 1974), pp. 369-373.
The formula is drop dead simple, nicely adjustable, and works like a charm. From their paper (in case you can't access it):
As a more useful alternative form we consider an hyperbola for which:
(i) the dependent variable y is a single valued function of the independent variable x,
(ii) the left asymptote has slope theta_1,
(iii) the right asymptote has slope theta_2,
(iv) the asymptotes intersect at the point (x_o, beta_o),
(v) the radius of curvature at x = x_o is proportional to a quantity delta. Such an hyperbola can be written y = beta_o + beta_1*(x - x_o) + beta_2* SQRT[(x - x_o)^2 + delta^2/4], where beta_1 = (theta_1 + theta_2)/2 and beta_2 = (theta_2 - theta_1)/2.
delta is the adjustable parameter that allows you to either closely follow the lines right to the intersection point or smoothly merge from one line to the other.
Just solve for the intersection point (x_o, beta_o), and plug into the formula above.
BTW, in general, if line 1 is y_1 = b_1 + m_1 *x and line 2 is y_2 = b_2 + m_2 * x, then they intersect at x* = (b_2 - b_1) / (m_1 - m_2) and y* = b_1 + m_1 * x*. So, to connect with the formalism above, x_o = x*, beta_o = y* and the two m_*'s are the two thetas.
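As an illustration, here is my own sketch of that formula with scipy's curve_fit (x and y would be the data arrays; the starting values are placeholders, not fitted results):
import numpy as np
from scipy.optimize import curve_fit

def transition_hyperbola(x, x0, beta0, theta1, theta2, delta):
    # Watts & Bacon two-regime hyperbola as quoted above
    beta1 = (theta1 + theta2) / 2.0
    beta2 = (theta2 - theta1) / 2.0
    return beta0 + beta1 * (x - x0) + beta2 * np.sqrt((x - x0)**2 + delta**2 / 4.0)

# p0 = [x0, beta0, theta1, theta2, delta], e.g. taken from the intersection of the two lines
# popt, pcov = curve_fit(transition_hyperbola, x, y, p0=[5000, 200, 0.02, 0.2, 100])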
There is a straightforward method (not iterative, no initial guess required) described on pp. 12-13 of https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf.
The data come from scanning the figure published by IanRoberts in his question. Scanning for the pixel coordinates is not accurate, so don't be surprised by some additional deviation.
Note that the abscissa and ordinate scales have been divided by 1000.
The equations of the two segments, together with the approximate values of the five parameters, are given in the figure above.
My knowledge of maths is limited, which is why I am probably stuck. I have a spectrum to which I am trying to fit two Gaussian peaks. I can fit the largest peak, but I cannot fit the smallest one. I understand that I need to sum the Gaussian functions for the two peaks, but I do not know where I have gone wrong. An image of my current output is shown:
The blue line is my data and the green line is my current fit. There is a shoulder to the left of the main peak in my data which I am currently trying to fit, using the following code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import leastsq

time = []
counts = []
for i in open('/some/folder/to/file.txt', 'r'):
    segs = i.split()
    time.append(float(segs[0]))
    counts.append(float(segs[1]))

time_array = np.array(time, dtype=float)
counts_array = np.array(counts, dtype=float)

def model(time_array0, coeffs0):
    a = coeffs0[0] + coeffs0[1] * np.exp( - ((time_array0-coeffs0[2])/coeffs0[3])**2 )
    b = coeffs0[4] + coeffs0[5] * np.exp( - ((time_array0-coeffs0[6])/coeffs0[7])**2 )
    c = a + b
    return c

def residuals(coeffs, counts_array, time_array):
    return counts_array - model(time_array, coeffs)

# 0 = baseline, 1 = amplitude, 2 = centre, 3 = width
peak1 = np.array([0, 6337, 16.2, 4.47, 0, 2300, 13.5, 2], dtype=float)
#peak2 = np.array([0, 2300, 13.5, 2], dtype=float)

x, flag = leastsq(residuals, peak1, args=(counts_array, time_array))
#z, flag = leastsq(residuals, peak2, args=(counts_array, time_array))

plt.plot(time_array, counts_array)
plt.plot(time_array, model(time_array, x), color='g')
#plt.plot(time_array, model(time_array, z), color='r')
plt.show()
This code worked for me, provided that you are only fitting a function that is a combination of two Gaussian distributions.
I just made a residuals function that adds two Gaussian functions and then subtracts them from the real data.
The parameters (p) that I passed to SciPy's least-squares function include: the mean of the first Gaussian function (m), the difference in the mean between the first and second Gaussian functions (dm, i.e. the horizontal shift), the standard deviation of the first (sd1), and the standard deviation of the second (sd2).
import numpy as np
from scipy.optimize import leastsq
import matplotlib.pyplot as plt
######################################
# Setting up test data
def norm(x, mean, sd):
    norm = []
    for i in range(x.size):
        norm += [1.0/(sd*np.sqrt(2*np.pi))*np.exp(-(x[i] - mean)**2/(2*sd**2))]
    return np.array(norm)
mean1, mean2 = 0, -2
std1, std2 = 0.5, 1
x = np.linspace(-20, 20, 500)
y_real = norm(x, mean1, std1) + norm(x, mean2, std2)
######################################
# Solving
m, dm, sd1, sd2 = [5, 10, 1, 1]
p = [m, dm, sd1, sd2] # Initial guesses for leastsq
y_init = norm(x, m, sd1) + norm(x, m + dm, sd2) # For final comparison plot
def res(p, y, x):
    m, dm, sd1, sd2 = p
    m1 = m
    m2 = m1 + dm
    y_fit = norm(x, m1, sd1) + norm(x, m2, sd2)
    err = y - y_fit
    return err
plsq = leastsq(res, p, args = (y_real, x))
y_est = norm(x, plsq[0][0], plsq[0][2]) + norm(x, plsq[0][0] + plsq[0][1], plsq[0][3])
plt.plot(x, y_real, label='Real Data')
plt.plot(x, y_init, 'r.', label='Starting Guess')
plt.plot(x, y_est, 'g.', label='Fitted')
plt.legend()
plt.show()
You can use Gaussian mixture models from scikit-learn:
from sklearn.mixture import GaussianMixture  # GaussianMixture replaces the old mixture.GMM
from scipy.stats import norm                 # replaces the removed matplotlib.mlab.normpdf
import matplotlib.pyplot as plt
import numpy as np

clf = GaussianMixture(n_components=2, covariance_type='full')
clf.fit(yourdata.reshape(-1, 1))             # scikit-learn expects a 2-D array of samples
m1, m2 = clf.means_
w1, w2 = clf.weights_
c1, c2 = clf.covariances_
histdist = plt.hist(yourdata, 100, density=True)
plotgauss1 = lambda x: plt.plot(x, w1*norm.pdf(x, m1[0], np.sqrt(c1[0][0])), linewidth=3)
plotgauss2 = lambda x: plt.plot(x, w2*norm.pdf(x, m2[0], np.sqrt(c2[0][0])), linewidth=3)
plotgauss1(histdist[1])
plotgauss2(histdist[1])
You can also use the function below to fit as many Gaussians as you want via the ncomp parameter:
from sklearn.mixture import GaussianMixture
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np

def fit_mixture(data, ncomp=2, doplot=False):
    clf = GaussianMixture(n_components=ncomp, covariance_type='full')
    clf.fit(np.asarray(data).reshape(-1, 1))
    ml = clf.means_
    wl = clf.weights_
    cl = clf.covariances_
    ms = [m[0] for m in ml]
    cs = [np.sqrt(c[0][0]) for c in cl]   # standard deviations
    ws = [w for w in wl]
    if doplot:
        histo = plt.hist(data, 200, density=True)
        for w, m, c in zip(ws, ms, cs):
            plt.plot(histo[1], w*norm.pdf(histo[1], m, c), linewidth=3)
    return ms, cs, ws
coeffs 0 and 4 are degenerate - there is absolutely nothing in the data that can decide between them. you should use a single zero level parameter instead of two (ie remove one of them from your code). this is probably what is stopping your fit (ignore the comments here saying this is not possible - there are clearly at least two peaks in that data and you should certainly be able to fit to that).
(it may not be clear why i am suggesting this, but what is happening is that coeffs 0 and 4 can cancel each other out. they can both be zero, or one could be 100 and the other -100 - either way, the fit is just as good. this "confuses" the fitting routine, which spends its time trying to work out what they should be, when there is no single right answer, because whatever value one is, the other can just be the negative of that, and the fit will be the same).
in fact, from the plot, it looks like there may be no need for a zero level at all. i would try dropping both of those and seeing how the fit looks.
also, there is no need to fit coeffs 1 and 5 (or the zero point) in the least squares. instead, because the model is linear in those you could calculate their values each loop. this will make things faster, but is not critical. i just noticed you say your maths is not so good, so probably ignore this one.
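for concreteness, a sketch of the reduced model suggested above (a single shared baseline, i.e. seven coefficients instead of eight; this is my reading of the advice, not the answerer's own code):
import numpy as np

# 0 = shared baseline, 1/2/3 = amplitude/centre/width of peak 1,
# 4/5/6 = amplitude/centre/width of peak 2
def model_single_baseline(t, c):
    peak1 = c[1] * np.exp(-((t - c[2]) / c[3])**2)
    peak2 = c[4] * np.exp(-((t - c[5]) / c[6])**2)
    return c[0] + peak1 + peak2

# initial guess, e.g. np.array([0, 6337, 16.2, 4.47, 2300, 13.5, 2], dtype=float)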