Lorentzian scipy.optimize.leastsq fit to data fails - python

Since taking a lecture course on Python I have wanted to use it to fit my data. Although I have been trying for a while now, I still have no idea why this is not working.
What I would like to do
Take one data file after another from a subfolder (here called 'Test'), transform the data a little, and fit it with a Lorentzian function.
Problem description
When I run the code posted below, it does not fit anything and just returns my initial parameters after 4 function calls. I tried scaling the data and playing around with ftol and maxfev after checking the Python documentation over and over again, but nothing improved. I also tried changing the lists to numpy arrays explicitly, as well as the solution given to the question scipy.optimize.leastsq returns best guess parameters not new best fit, x = x.astype(np.float64). No improvement. Strangely enough, this same code worked at some point for a few selected data files, but for the majority it never did. The data can definitely be fitted, since a Levenberg-Marquardt fitting routine gives reasonably good results in Origin.
Can someone tell me what is going wrong or point out alternatives...?
import numpy,math,scipy,pylab
from scipy.optimize import leastsq
import glob,os
for files in glob.glob("*.txt"):
    x = []
    y = []
    z = []
    f = open(files, 'r')
    raw = f.readlines()
    f.close()
    del raw[0:8]  # delete header
    for columns in (raw2.strip().split() for raw2 in raw):  # data columns
        x.append(float(columns[0]))
        y.append(float(columns[1]))
        z.append(10**(float(columns[1])*0.1))  # transform data for the fit
    def lorentz(p, x):
        return (1/(1 + (x/p[0] - 1)**4 * p[1]**2)) * p[2]
    def errorfunc(p, x, z):
        return lorentz(p, x) - z
    p0 = [3., 10000., 0.001]
    Params, cov_x, infodict, mesg, ier = leastsq(errorfunc, p0, args=(x, z), full_output=True)
    print(Params)
    print(ier)

Without seeing your data it is hard to tell what is going wrong. I generated some random noise and used your code to perform a fit to it. Everything works okay. This algorithm does not allow for parameter boundaries so you may run into problems if your p0 is close to zero. I did the following:
import numpy as np
from scipy.optimize import leastsq
import matplotlib.pyplot as plt
def lorentz(p, x):
    return p[2] / (1.0 + (x / p[0] - 1.0)**4 * p[1]**2)

def errorfunc(p, x, z):
    return lorentz(p, x) - z
p = np.array([0.5, 0.25, 1.0], dtype=np.double)
x = np.linspace(-1.5, 2.5, num=30, endpoint=True)
noise = np.random.randn(30) * 0.05
z = lorentz(p,x)
noisyz = z + noise
p0 = np.array([-2.0, -4.0, 6.8], dtype=np.double) #Initial guess
solp, ier = leastsq(errorfunc,
                    p0,
                    args=(x, noisyz),
                    Dfun=None,
                    full_output=False,
                    ftol=1e-9,
                    xtol=1e-9,
                    maxfev=100000,
                    epsfcn=1e-10,
                    factor=0.1)
plt.plot(x, z, 'k-', linewidth=1.5, alpha=0.6, label='Theoretical')
plt.scatter(x, noisyz, c='r', marker='+', label='Measured Data')
plt.plot(x, lorentz(solp,x), 'g--', linewidth=2, label='leastsq fit')
plt.xlim((-1.5, 2.5))
plt.ylim((0.0, 1.2))
plt.grid(which='major')
plt.legend(loc=8)
plt.show()
This yielded a solution of:
solp = array([ 0.51779002, 0.26727697, 1.02946179])
Which is close to the theoretical value:
np.array([0.5, 0.25, 1.0])
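If the underlying issue is simply that leastsq is being handed plain Python lists, as in the loop in the question, a minimal fix is to convert the columns to float64 arrays before fitting; and since leastsq has no parameter bounds, scipy.optimize.least_squares can be used instead when bounds are needed. A rough sketch along those lines (the synthetic x_list/z_list below only stand in for the columns read from one data file):

import numpy as np
from scipy.optimize import least_squares

def lorentz(p, x):
    return p[2] / (1.0 + (x / p[0] - 1.0)**4 * p[1]**2)

def residuals(p, x, z):
    return lorentz(p, x) - z

# Stand-in for the columns read from one data file (plain Python lists,
# as in the question); replace with the real x and z lists.
x_demo = np.linspace(2.0, 4.0, 50)
x_list = list(x_demo)
z_list = list(lorentz([3.0, 50.0, 0.002], x_demo))

# Convert to float64 arrays so the vectorised arithmetic in lorentz() works.
x_arr = np.asarray(x_list, dtype=np.float64)
z_arr = np.asarray(z_list, dtype=np.float64)

p0 = [3.0, 10000.0, 0.001]
# least_squares supports simple box bounds, which plain leastsq does not.
res = least_squares(residuals, p0, args=(x_arr, z_arr),
                    bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, np.inf]))
print(res.x, res.status)

Here res.x holds the fitted parameters and res.status indicates why the solver stopped.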

Related

Scipy curve fit doesn't perform a fit and raises "Covariance of the parameters could not be estimated" error

I am trying to do a simple linear curve fit with scipy; normally this method works fine for me. This time, however, for a reason unknown to me, it doesn't work.
(I suspect that maybe the numbers are so big that it reaches the limit of what can be stored under a given data type.)
Regardless of the reason, the idea is to make a plot that looks like this:
As you can see on the axes there, the numbers are of a modest order of magnitude. This time, however, I tried to fit much bigger data points, on the order of 1E10. For this I used the following code (here I present only the code for making a scatter plot and fitting a single data set).
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
ucrt_T = 2/np.sqrt(3)
ucrt_U = 0.1/np.sqrt(3)
T = [314.1, 325.1, 335.1, 345.1, 355.1, 365.1, 374.1, 384.1, 393.1]
T_to_4th = [9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31]
ucrt_T_lst = [143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17]
UBlack = [1.9,3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4]
def lin_function(x, a, b):
    return a*x + b

def line_fit_2():
    # Add the remaining points to the plot
    plt.scatter(UBlack, T_to_4th, color='blue')
    plt.errorbar(UBlack, T_to_4th, yerr=ucrt_T, fmt='o')
    # BLACK series
    VltBlack = np.array(UBlack)
    Tt4 = np.array(T_to_4th)
    popt, pcov = curve_fit(lin_function, VltBlack, Tt4, absolute_sigma=False)
    perr = np.sqrt(np.diag(pcov))
    y = lin_function(VltBlack, *popt)
    # Plot styling and appearance
    #plt.plot(Pressure1, y, '--', color = 'g', label="fit with: $a={:.3f}\pm{:.3f}$, $b={:.3f}\pm{:.3f}$" .format(popt[0], perr[0], popt[1], perr[1]))
    plt.plot(VltBlack, y, '--', color='green')
    plt.ylabel(r'$T^4$ [$K^4$]')
    plt.xlabel(r'Thermometer voltage U [mV]')
    plt.legend(['Fit', 'Data points'])
    plt.grid()
    plt.show()
line_fit_2()
If you run it, you will find that the scatter plot is created, but the fit isn't performed properly: only a horizontal line is added. In addition, the warning OptimizeWarning: Covariance of the parameters could not be estimated is raised.
I would be very happy to know what I am doing wrong or how to resolve this problem. All help is appreciated!
You've pretty much already answered your own question, so I'll just confirm your suspicion: the OptimizeWarning is raised because the underlying optimization algorithm doesn't work properly (it diverges) when the numbers involved are this large.
The solution is very simple, just scale your input parameters before using the fitting tool. Just keep the scaling in mind when you add labels to your x/y axis:
T_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31])/10e6
ucrt_T_lst = np.array([143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17])/10e6
What I did is just divide the lists of big numbers by a constant scale factor, here 10e6 (i.e. 1e7). Think of it as a change of units: dividing values given in kPa by 1e6, for example, would express them in GPa.
To divide the entire list by the same value, first convert it to a numpy array.
Hope this helps :)
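As a worked sketch of this scaling idea (the factor of 1e10 and the variable names are illustrative choices, not part of the original answer): fit the rescaled data, then undo the scaling on the fitted slope and intercept so the parameters are reported in the original units.

import numpy as np
from scipy.optimize import curve_fit

def lin_function(x, a, b):
    return a*x + b

UBlack = np.array([1.9, 3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4])
T_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84,
                     14183383217.88, 15900203737.92, 17768359469.96,
                     19586229219.65, 21765930026.49, 23878782252.31])

scale = 1e10                      # bring the y-values down to order 1
popt, pcov = curve_fit(lin_function, UBlack, T_to_4th / scale)
perr = np.sqrt(np.diag(pcov))

# Undo the scaling: y/scale = a*x + b  =>  y = (a*scale)*x + (b*scale)
a, b = popt * scale
a_err, b_err = perr * scale
print(f"a = {a:.4g} +/- {a_err:.2g}, b = {b:.4g} +/- {b_err:.2g}")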

Displaying chi squared as the uncertainty of fit parameters in scipy.optimize

I am doing some curve fitting in Python with the aid of scipy.optimize's curve_fit. Normally I am satisfied with scipy's default results.
This time, however, I would like to display the fit with chi_squared as the uncertainty of my fit parameters, and I don't know how to do this.
I have tried using the parameter absolute_sigma=True instead of the default absolute_sigma=False. According to my separate calculations, for absolute_sigma=False the uncertainty of the parameters equals reduced_chi_squared, but for absolute_sigma=True it is not chi_squared.
In the documentation itself (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) chi_squared is mentioned a few times, but it is not stated explicitly how to display it in the plot or how to use it as the uncertainty of the fit parameters.
My code is as follows:
# Necessary libraries
# Numpy will be used for the actual function (in this example it's not necessary)
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from uncertainties import *
#Some shortened data
Wave_lengths_500_est = [380, 447.1, 486, 492.2, 656, 700, 706.5]
Avraged_deflection_angles_500 = [11.965, 12.8, 13.93, 14.325, 19.37, 18.26, 18.335]
#The function to which the data are to be fitted
def lin_function(x, a, b):
    return a*x + b

def line_fit_lamp_1_500():
    # Angular uncertainty of 2 arc minutes, expressed in degrees
    angle_error = 0.03333
    # Add the data points to the plot
    plt.scatter(Wave_lengths_500_est, Avraged_deflection_angles_500, color='b')
    plt.errorbar(Wave_lengths_500_est, Avraged_deflection_angles_500, yerr=angle_error, fmt='o')
    # Fit lin_function to the data points
    Wave_length = np.array(Wave_lengths_500_est)
    Defletion_angle = np.array(Avraged_deflection_angles_500)
    popt, pcov = curve_fit(lin_function, Wave_length, Defletion_angle, absolute_sigma=True)
    perr = np.sqrt(np.diag(pcov))
    y = lin_function(Wave_length, *popt)
    # Graph looks
    plt.plot(Wave_length, y, '--', color='g', label="fit with: $a={:.3f}\pm{:.5f}$, $b={:.3f}\pm{:.5f}$".format(popt[0], perr[0], popt[1], perr[1]))
    plt.legend()
    plt.xlabel('Wavelength [nm]')
    plt.ylabel('Deflection angle [degrees]')
    plt.show()
# Function call
line_fit_lamp_1_500()
Toggling absolute_sigma between True and False, the uncertainties change from a\pm0.00308 and b\pm1.74571 (absolute_sigma=True) to a\pm0.022 and b\pm1.41679 (absolute_sigma=False). Compare this with the expected values of a\pm0.0001027 and b\pm0.058132 for chi_squared and a\pm0.002503 and b\pm1.4167 for reduced_chi_squared.
Additionally, could you please explain what the expression .format(popt[0], perr[0], popt[1], perr[1]) does?
All help is appreciated, thanks in advance.
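For what it's worth, chi-squared and reduced chi-squared can be computed directly from the residuals, and the scipy documentation states that with absolute_sigma=False the returned covariance equals the absolute_sigma=True covariance multiplied by reduced chi-squared. A minimal sketch using the shortened data from the question (not an official recipe, just one way to compute the quantities):

import numpy as np
from scipy.optimize import curve_fit

def lin_function(x, a, b):
    return a*x + b

x = np.array([380, 447.1, 486, 492.2, 656, 700, 706.5])
y = np.array([11.965, 12.8, 13.93, 14.325, 19.37, 18.26, 18.335])
sigma_y = 0.03333          # angular uncertainty from the question, in degrees

popt, pcov = curve_fit(lin_function, x, y,
                       sigma=np.full_like(y, sigma_y), absolute_sigma=True)

residuals = y - lin_function(x, *popt)
chi2 = np.sum((residuals / sigma_y)**2)          # chi-squared
red_chi2 = chi2 / (len(x) - len(popt))           # reduced chi-squared

# With absolute_sigma=False, curve_fit returns pcov multiplied by red_chi2,
# i.e. pcov_false == pcov_true * red_chi2 (per the scipy documentation).
perr_true = np.sqrt(np.diag(pcov))
perr_false = perr_true * np.sqrt(red_chi2)
print(chi2, red_chi2, perr_true, perr_false)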

Scaling Lognormal Fit

I have two arrays with x- and y- data.
This data shows lognormal behavior. I need a graph of the fit, as well as the mu and the sigma to do some statistics.
I did a fit, in order to calculate the mu, the sigma, and further on some statistical values of it. (See code below)
I obtain the scaling factor, by which I have to multiply the distribution, from an integral over the data points.
The code below does work. My question is whether there is a better way to do this (I am sure there is). It feels like a workaround that will only sometimes work, and I want a better approach because I have to plot hundreds of these.
My code (sorry that it is this long; I wanted to include everything except the import of the raw data):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# produce plot True/False
ploton = True
x0=np.array([3.58381e+01, 3.27125e+01, 2.98680e+01, 2.72888e+01, 2.49364e+01,
2.27933e+01, 2.08366e+01, 1.90563e+01, 1.74380e+01, 1.59550e+01,
1.45904e+01, 1.33460e+01, 1.22096e+01, 1.11733e+01, 1.02262e+01,
9.35893e+00, 8.56556e+00, 7.86688e+00, 7.20265e+00, 6.59782e+00,
6.01571e+00, 5.53207e+00, 5.03979e+00, 4.64415e+00, 4.19920e+00,
3.83595e+00, 3.50393e+00, 3.28070e+00, 3.00930e+00, 2.75634e+00,
2.52050e+00, 2.31349e+00, 2.12280e+00, 1.92642e+00, 1.77820e+00,
1.61692e+00, 1.49094e+00, 1.36233e+00, 1.22935e+00, 1.14177e+00,
1.03078e+00, 9.39603e-01, 8.78425e-01, 1.01490e+00, 1.07461e-01,
4.81523e-02, 4.81523e-02, 1.00000e-02, 1.00000e-02])
y0=np.array([3.94604811e+04, 2.78223936e+04, 1.95979179e+04, 2.14447807e+04,
1.68677487e+04, 1.79429516e+04, 1.73589776e+04, 2.16101026e+04,
3.79705638e+04, 6.83622301e+04, 1.73687772e+05, 5.74854475e+05,
1.69497465e+06, 3.79135941e+06, 7.76757753e+06, 1.33429094e+07,
1.96096415e+07, 2.50403065e+07, 2.72818618e+07, 2.53120387e+07,
1.93102362e+07, 1.22219224e+07, 4.96725699e+06, 1.61174658e+06,
3.19352386e+05, 1.80305856e+05, 1.41728002e+05, 1.66191809e+05,
1.33223816e+05, 1.31384905e+05, 2.49100945e+05, 2.28300583e+05,
3.01063903e+05, 1.84271914e+05, 1.26412781e+05, 8.57488083e+04,
1.35536571e+05, 4.50076293e+04, 1.98080100e+05, 2.27630303e+05,
1.89484527e+05, 0.00000000e+00, 1.36543525e+05, 2.20677520e+05,
3.60100586e+05, 1.62676486e+05, 1.90105093e+04, 9.27461467e+05,
1.58373542e+05])
Dnm = x0
dndlndp = y0
#lognormal PDF:
def f(x, mu, sigma):
    return 1/(np.sqrt(2*np.pi)*sigma*x)*np.exp(-((np.log(x)-mu)**2)/(2*sigma**2))
#normalizing y-values to obtain lognormal distributed data:
y0_normalized = y0/np.trapz(x0.ravel(), y0.ravel())
#calculating mu/sigma of this distribution:
params, extras = curve_fit(f, x0.ravel(), y0_normalized.ravel())
median = np.exp(params[0])
mu = params[0]
sigma = params[1]
#output of mu / sigma / calculated median:
print "mu=%g, sigma=%g" % (params[0], params[1])
print "median=%g" % median
#new variable z for smooth fit-curve:
z = np.linspace(0.1, 100, 10000)
#######################
Dnm = np.ravel(Dnm)
dndlndp = np.ravel(dndlndp)
Dnm_rev = list(reversed(Dnm))
dndlndp_rev = list(reversed(dndlndp))
scalingfactor = np.trapz(dndlndp_rev, Dnm_rev, dx = np.log(Dnm_rev))
#####################
#plotting
if ploton:
    plt.plot(z, f(z, mu, sigma)*scalingfactor, label="fit", color="red")
    plt.scatter(x0, y0, label="data")
    plt.xlim(3, 20)
    plt.xscale("log")
    plt.legend()
EDIT1: Maybe I should add that I have no idea why the scaling factor calculated with
scalingfactor = np.trapz(dndlndp_rev, Dnm_rev, dx = np.log(Dnm_rev))
is right. It was simply trial and error. I really want to know why this does the trick, since the "area" of all bins combined is:
N = np.trapz(dndlndp_rev, np.log(Dnm_rev), dx = np.log(Dnm_rev))
because the width of the bins is log(Dnm).
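One detail that may explain part of the confusion (a general numpy fact, not specific to this data set): np.trapz ignores the dx argument whenever an x array is passed, so the two expressions above integrate against Dnm and against log(Dnm) respectively, regardless of the dx=np.log(...) term. A tiny sketch:

import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])
y = np.array([1.0, 1.0, 1.0, 1.0])

# When x is given, dx is ignored: both calls integrate y against x.
print(np.trapz(y, x))                      # 7.0
print(np.trapz(y, x, dx=np.log(x)))        # also 7.0
# Integrating against log(x) is done by passing log(x) as the x argument.
print(np.trapz(y, np.log(x)))              # log(8) ~ 2.079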
EDIT2: Thank you for all the answers. I copied the arrays into the code, which is now runnable. I want to simplify the question, since I think that, due to my poor English, I was not able to say what I really want:
I have a lognormal set of data. The code above allows me to calculate the mu and the sigma. To do so, I need to normalize the data, so that the area under the function becomes 1.
In order to plot a lognormal function with the calculated mu and sigma, I need to multiply the function by an (unknown) factor, because the area under the real function is something like 1e8, certainly not one. I did a workaround by calculating this "scalingfactor" via the trapz integral of the discrete raw data.
There has to be a better way to plot the fitted function when mu and sigma are already known.
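A more direct route, sketched below under the assumption that fitting an explicit amplitude is acceptable: add a free scale parameter A to the model and fit the raw (unnormalized) y-values, so no separate normalization or trapz-based scaling factor is needed; mu, sigma and the overall scale then come out of a single curve_fit call. The starting guess p0 is just a rough, illustrative choice.

import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

# Lognormal shape times a free amplitude A (A absorbs the overall scale).
def f_scaled(x, A, mu, sigma):
    return A / (np.sqrt(2*np.pi)*sigma*x) * np.exp(-(np.log(x)-mu)**2 / (2*sigma**2))

# x0, y0 are the data arrays from the question.
p0 = [y0.max(), np.log(x0[np.argmax(y0)]), 0.5]   # rough starting guess
params, cov = curve_fit(f_scaled, x0, y0, p0=p0, maxfev=10000)
A, mu, sigma = params
print("A=%g, mu=%g, sigma=%g, median=%g" % (A, mu, sigma, np.exp(mu)))

z = np.linspace(0.1, 100, 10000)
plt.plot(z, f_scaled(z, *params), color="red", label="fit")
plt.scatter(x0, y0, label="data")
plt.xscale("log")
plt.legend()
plt.show()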

Path 'contains_points' yields incorrect results with Bezier curve

I am trying to select a region of data based on a matplotlib Path object, but when the path contains a Bezier curve (not just straight lines), the selected region doesn't completely fill in the curve. It looks like it's trying, but the far side of the curve gets chopped off.
For example, the following code defines a fairly simple closed path with one straight line and one cubic curve. When I look at the True/False result from the contains_points method, it does not seem to match either the curve itself or the raw vertices.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch
# Make the Path
verts = [(1.0, 1.5), (-2.0, 0.25), (-1.0, 0.0), (1.0, 0.5), (1.0, 1.5)]
codes = [Path.MOVETO, Path.CURVE4, Path.CURVE4, Path.CURVE4, Path.CLOSEPOLY]
path1 = Path(verts, codes)
# Make a field with points to select
nx, ny = 101, 51
x = np.linspace(-2, 2, nx)
y = np.linspace(0, 2, ny)
yy, xx = np.meshgrid(y, x)
pts = np.column_stack((xx.ravel(), yy.ravel()))
# Construct a True/False array of contained points
tf = path1.contains_points(pts).reshape(nx, ny)
# Make a PathPatch for display
patch1 = PathPatch(path1, facecolor='c', edgecolor='b', lw=2, alpha=0.5)
# Plot the true/false array, the patch, and the vertices
fig, ax = plt.subplots()
ax.imshow(tf.T, origin='lower', extent=(x[0], x[-1], y[0], y[-1]))
ax.add_patch(patch1)
ax.plot(*zip(*verts), 'ro-')
plt.show()
This gives me this plot:
It looks like there is some sort of approximation going on - is this just a fundamental limitation of the calculation in matplotlib, or am I doing something wrong?
I can calculate the points inside the curve myself, but I was hoping to not reinvent this wheel if I don't have to.
It's worth noting that a simpler construction using quadratic curves does appear to work properly:
I am using matplotlib 2.0.0.
This has to do with the space in which the paths are evaluated, as explained in GitHub issue #6076. From a comment by mdboom there:
Path intersection is done by converting the curves to line segments and then converting the intersection based on the line segments. This conversion happens by "sampling" the curve at increments of 1.0. This is generally the right thing to do when the paths are already scaled in display space, because sampling the curve at a resolution finer than a single pixel doesn't really help. However, when calculating the intersection in data space as you've done here, we obviously need to sample at a finer resolution.
This is discussing intersections, but contains_points is also affected. This enhancement is still open so we'll have to see if it is addressed in the next milestone. In the meantime, there are a couple options:
1) If you are going to be displaying a patch anyway, you can use the display transformation. In the example above, adding the following demonstrates the correct behavior (based on a comment by tacaswell on duplicate issue #8734, now closed):
# Work in transformed (pixel) coordinates
hit_patch = path1.transformed(ax.transData)
tf1 = hit_patch.contains_points(ax.transData.transform(pts)).reshape(nx, ny)
ax.imshow(tf1.T, origin='lower', extent=(x[0], x[-1], y[0], y[-1]))
2) If you aren't using a display and just want to calculate using a path, the best bet is to simply form the Bezier curve yourself and make a path out of line segments. Replacing the formation of path1 with the following calculation of path2 will produce the desired result.
from scipy.special import binom
def bernstein(n, i, x):
    # Bernstein basis polynomial of degree n
    coeff = binom(n, i)
    return coeff * (1-x)**(n-i) * x**i

def bezier(ctrlpts, nseg):
    # Evaluate the Bezier curve defined by ctrlpts at nseg parameter values
    x = np.linspace(0, 1, nseg)
    outpts = np.zeros((nseg, 2))
    n = len(ctrlpts) - 1
    for i, point in enumerate(ctrlpts):
        outpts[:, 0] += bernstein(n, i, x) * point[0]
        outpts[:, 1] += bernstein(n, i, x) * point[1]
    return outpts
verts1 = [(1.0, 1.5), (-2.0, 0.25), (-1.0, 0.0), (1.0, 0.5), (1.0, 1.5)]
nsegments = 31
verts2 = np.concatenate([bezier(verts1[:4], nsegments), np.array([verts1[4]])])
codes2 = [Path.MOVETO] + [Path.LINETO]*(nsegments-1) + [Path.CLOSEPOLY]
path2 = Path(verts2, codes2)
Either method yields something that looks like the following:
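To actually extract the selected data points with either approach, the boolean result can be used directly as a mask (a small usage sketch continuing from the variables defined above; tf is whichever contains_points result you computed):

# tf has shape (nx, ny) and was built from pts in the same ordering,
# so flattening it gives a boolean mask aligned with the rows of pts.
inside_pts = pts[tf.ravel()]   # (N, 2) array of the points inside the path
print(inside_pts.shape)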

Python interp1d vs. UnivariateSpline

I'm trying to port some MatLab code over to Scipy, and I've tried two different functions from scipy.interpolate, interp1d and UnivariateSpline. The interp1d results match the interp1d MatLab function, but the UnivariateSpline numbers come out different - and in some cases very different.
f = interp1d(row1,row2,kind='cubic',bounds_error=False,fill_value=numpy.max(row2))
return f(interp)
f = UnivariateSpline(row1,row2,k=3,s=0)
return f(interp)
Could anyone offer any insight? My x vals aren't equally spaced, although I'm not sure why that would matter.
I just ran into the same issue.
Short answer
Use InterpolatedUnivariateSpline instead:
f = InterpolatedUnivariateSpline(row1, row2)
return f(interp)
Long answer
UnivariateSpline is a 'one-dimensional smoothing spline fit to a given set of data points' whereas InterpolatedUnivariateSpline is a 'one-dimensional interpolating spline for a given set of data points'. The former smoothes the data whereas the latter is a more conventional interpolation method and reproduces the results expected from interp1d. The figure below illustrates the difference.
The code to reproduce the figure is shown below.
import scipy.interpolate as ip
from numpy import linspace, pi, sin
from matplotlib.pyplot import (plot, subplot, ylim, ylabel, xlabel,
                               legend, tight_layout)
#Define independent variable
sparse = linspace(0, 2 * pi, num = 20)
dense = linspace(0, 2 * pi, num = 200)
#Define function and calculate dependent variable
f = lambda x: sin(x) + 2
fsparse = f(sparse)
fdense = f(dense)
ax = subplot(2, 1, 1)
#Plot the sparse samples and the true function
plot(sparse, fsparse, label = 'Sparse samples', linestyle = 'None', marker = 'o')
plot(dense, fdense, label = 'True function')
#Plot the different interpolation results
interpolate = ip.InterpolatedUnivariateSpline(sparse, fsparse)
plot(dense, interpolate(dense), label = 'InterpolatedUnivariateSpline', linewidth = 2)
smoothing = ip.UnivariateSpline(sparse, fsparse)
plot(dense, smoothing(dense), label = 'UnivariateSpline', color = 'k', linewidth = 2)
ip1d = ip.interp1d(sparse, fsparse, kind = 'cubic')
plot(dense, ip1d(dense), label = 'interp1d')
ylim(.9, 3.3)
legend(loc = 'upper right', frameon = False)
ylabel('f(x)')
#Plot the fractional error
subplot(2, 1, 2, sharex = ax)
plot(dense, smoothing(dense) / fdense - 1, label = 'UnivariateSpline')
plot(dense, interpolate(dense) / fdense - 1, label = 'InterpolatedUnivariateSpline')
plot(dense, ip1d(dense) / fdense - 1, label = 'interp1d')
ylabel('Fractional error')
xlabel('x')
ylim(-.1,.15)
legend(loc = 'upper left', frameon = False)
tight_layout()
The reason why the results are different (but both likely correct) is that the interpolation routines used by UnivariateSpline and interp1d are different.
interp1d constructs a smooth B-spline using the x-points you gave to it as knots
UnivariateSpline is based on FITPACK, which also constructs a smooth B-spline. However, FITPACK tries to choose new knots for the spline, to fit the data better (probably to minimize chi^2 plus some penalty for curvature, or something similar). You can find out what knot points it used via g.get_knots().
So the reason why you get different results is that the interpolation algorithm is different. If you want B-splines with knots at data points, use interp1d or splmake. If you want what FITPACK does, use UnivariateSpline. In the limit of dense data, both methods give same results, but when data is sparse, you may get different results.
(How do I know all this: I read the code :-)
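A quick way to see the difference is to compare the knots each object actually uses, and to note that setting s=0 forces UnivariateSpline to interpolate (a short sketch; the sine data here are just for illustration):

import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x) + 2

smoothing = UnivariateSpline(x, y)           # default: smoothing spline
interpolating = UnivariateSpline(x, y, s=0)  # s=0 forces interpolation

# FITPACK picks its own (fewer) knots for the smoothing spline,
# while s=0 needs many more knots, placed at the data points.
print(len(smoothing.get_knots()), len(interpolating.get_knots()))
# With s=0 the spline passes through the data exactly.
print(np.allclose(interpolating(x), y))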
Works for me,
from numpy import allclose, linspace
from scipy.interpolate import interp1d, UnivariateSpline
from numpy.random import normal
from pylab import plot, show
n = 2**5
x = linspace(0,3,n)
y = (2*x**2 + 3*x + 1) + normal(0.0,2.0,n)
i = interp1d(x,y,kind=3)
u = UnivariateSpline(x,y,k=3,s=0)
m = 2**4
t = linspace(1,2,m)
plot(x,y,'r,')
plot(t,i(t),'b')
plot(t,u(t),'g')
print(allclose(i(t), u(t)))  # evaluates to True
show()
This gives me matching curves, and the allclose check evaluates to True. The documentation describes UnivariateSpline as "a more recent wrapper of the FITPACK routines", which might explain the slightly different values in some cases. (I also found that UnivariateSpline is much faster than interp1d.)
