Gaussian data fit varying depending on position of x data - python

I am having a hard time trying to understand why my Gaussian fit to a set of data (ydata) does not work well if I shift the interval of x-values corresponding to that data (xdata1 to xdata2). The Gaussian is written as:
g(x) = A * exp(-(x - mean)**2 / (2*sigma**2)) / (sigma * sqrt(2*pi))
where A is just an amplitude factor. By changing some of the values of the data, it is easy to make it work for both cases, but one can also easily find cases in which it does not work well for xdata1, and also cases in which the covariance of the parameters is not estimated.
I am using scipy.optimize.curve_fit in Spyder with Python 3.7.1 on Windows 7.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
xdata1 = np.linspace(-9,4,20, endpoint=True) # works fine
xdata2 = xdata1+2
ydata = np.array([8,9,15,12,14,20,24,40,54,94,160,290,400,420,300,130,40,10,8,4])
def gaussian(x, amp, mean, sigma):
    return amp * np.exp(-((x - mean)**2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
popt1, pcov1 = curve_fit(gaussian, xdata1, ydata)
popt2, pcov2 = curve_fit(gaussian, xdata2, ydata)
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(9, 4))
ax1.plot(xdata1, ydata, 'b+:', label='xdata1')
ax1.plot(xdata1, gaussian(xdata1, *popt1), 'r-', label='fit')
ax1.legend()
ax2.plot(xdata2, ydata, 'b+:', label='xdata2')
ax2.plot(xdata2, gaussian(xdata2, *popt2), 'r-', label='fit')
ax2.legend()

The problem is that your second attempt at fitting a Gaussian is getting stuck in a local minimum while searching parameter space: curve_fit is a wrapper around least_squares, which performs a gradient-based local minimization of the cost function and is therefore liable to get stuck in local minima.
You should try providing reasonable starting parameters (by using the p0 argument of curve_fit) to avoid this:
#... your code
y_max = np.max(ydata)
max_pos = xdata2[np.argmax(ydata)]  # x position of the peak
initial_guess = [y_max, max_pos, 1] # amplitude, mean, std
popt2, pcov2 = curve_fit(gaussian, xdata2, ydata, p0=initial_guess)
Which as you can see provides a reasonable fit:
You should write a function which can provide reasonable estimates of the starting parameters. Here I just found the maximum y value and its position and used these to determine the initial parameters. I've found this works well for fitting normal distributions, but you could consider other methods.
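A minimal sketch of such an estimator (the helper name and the moment-based width estimate are my own additions, not from the original answer):
def estimate_initial_params(x, y):
    # peak height as the amplitude guess, peak position as the mean guess,
    # and a y-weighted standard deviation as a rough sigma guess
    y = np.asarray(y, dtype=float)
    amp = y.max()
    mean = x[np.argmax(y)]
    sigma = np.sqrt(np.sum(y * (x - mean)**2) / np.sum(y))
    return [amp, mean, sigma]

popt2, pcov2 = curve_fit(gaussian, xdata2, ydata,
                         p0=estimate_initial_params(xdata2, ydata))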
Edit:
You can also solve the problem by scaling the amplitude: the amplitude is so large that the parameter space is distorted, and the gradient-based search simply follows the direction of greatest change in the amplitude, effectively ignoring sigma. Consider the following plot of parameter space (colour is the sum of the squared residuals of the fit for the given parameters, and the white cross shows the optimal solution):
Note the different scales on the x and y axes. Starting from the point (x, y) = (0, 0), one needs a large number of 'unit'-sized steps in y (amplitude) to reach the minimum, whereas less than one 'unit'-sized step is enough to reach the minimum in x (sigma). The algorithm therefore keeps stepping in amplitude, as this is the steepest gradient. Once it reaches the amplitude that minimises the cost function it stops, since it appears to have converged, and it makes little or no change to the sigma parameter.
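The referenced figure is not reproduced here, but a rough sketch of how such a residual surface could be generated (holding the mean fixed at the peak position, which is an assumption of mine, and with grid ranges chosen by eye) looks like this:
# Hypothetical sketch: map the sum of squared residuals over (sigma, amplitude)
# with the mean held fixed, to visualise the stretched valley described above.
sigmas = np.linspace(0.1, 5.0, 200)
amps = np.linspace(1.0, 3000.0, 200)
S, A = np.meshgrid(sigmas, amps)
mean_fixed = xdata2[np.argmax(ydata)]          # hold the mean at the peak position
cost = np.zeros_like(S)
for i in range(S.shape[0]):
    for j in range(S.shape[1]):
        resid = ydata - gaussian(xdata2, A[i, j], mean_fixed, S[i, j])
        cost[i, j] = np.sum(resid**2)
plt.figure()
plt.contourf(S, A, cost, levels=50)
plt.xlabel('sigma')
plt.ylabel('amplitude')
plt.colorbar(label='sum of squared residuals')
plt.show()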
One way to fix this is to scale your ydata to un-distort the parameter space: divide your ydata by 100 and you will see your fit works without providing any starting parameters!
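As a minimal sketch of that rescaling idea (the factor of 100 is the one suggested above; the fitted amplitude then has to be multiplied back up, while mean and sigma are unaffected):
# Sketch: fit the scaled-down data, then undo the scaling on the amplitude only.
scale = 100.0
popt_s, pcov_s = curve_fit(gaussian, xdata2, ydata / scale)
amp_fit, mean_fit, sigma_fit = popt_s
amp_fit *= scale   # amplitude scales linearly with the data; mean and sigma do not
print(amp_fit, mean_fit, sigma_fit)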


Python Spinmob curve_fit works but fitter does not

I'm trying to fit data with a Gaussian.
The raw data itself displays a very obvious peak.
When I attempt fitting using curve_fit, the fit identifies the peak but it does not have a curved top.
I am trying to fit the data now with spinmob's fitter as well. However, this fitting just gives a straight line.
I've tried changing several parameters of the fitter, the Gaussian function definition, and the initial parameters for the fit but nothing seems to work.
Here is the code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import spinmob as s

x = x30      # measured x data (defined elsewhere)
y = ydata    # measured y data (defined elsewhere)

def gaussian(x, A, mu, sig):  # see http://mathworld.wolfram.com/GaussianFunction.html
    return A/(sig * np.sqrt(2*np.pi)) * np.exp(-np.power(x-mu, 2) / (2 * np.power(sig, 2)))
popt,pcov = curve_fit(gaussian,x,y,p0=[1,7.688,0.005])
FWHM = 2*np.sqrt(2*np.log(2))*popt[2]
print("FWHM: {}".format(FWHM))
plt.plot(x,y,'bo',label='data')
plt.plot(x,gaussian(x,*popt),'r+-',label='fit')
plt.legend()
fitter = s.data.fitter()
fitter.set(subtract_bg=True, plot_guess_zoom=True)
fitter.set_functions(f=gaussian, p='A=1,mu=8.688,sig=0.001')
fitter.set_data(x, y, eydata = 0.03)
fitter.fit()
The curve_fit returns this plot:
Curve_fit plot
The spinmob fitter plot gives this:
Spinmob Fitter Plot
Assuming that spinmob actually uses scipy.curve_fit under the hood, I would guess (sorry) that the problem is that the initial values you give to it are so far off that it cannot possibly find a solution.
For sure, A=1 is not a very good guess for either scipy.curve_fit() or spinmob.fitter(). The peak is definitely negative, and you should be guessing a value more like -0.1 than +1. In fact you could probably assert that A must be < 0.
The initial value of 7.688 for mu that you give to curve_fit() is pretty good, and will allow a solution. I do not know whether it is a typo or not, but the initial value of 8.688 for mu that you give to spinmob.fitter() is very far off (that is, way outside the data range), and the fit will never be able to refine its way to the correct solution from there.
Initial values matter for curve-fitting and poor initial values can lead to bad results.
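If you want to enforce A < 0 as suggested above, curve_fit accepts a bounds argument; a rough sketch (the numeric bounds here are guesses based on the description of the data, not values from the question):
# Sketch: constrain the fit so the amplitude must be negative.
# Parameter order matches gaussian(x, A, mu, sig).
popt, pcov = curve_fit(
    gaussian, x, y,
    p0=[-0.1, 7.688, 0.005],
    bounds=([-np.inf, 7.6, 1e-6], [0.0, 7.8, 0.1]),
)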
It might be viewed by some as a shameless plug, but allow me to encourage you to try lmfit (https://lmfit.github.io/lmfit-py/) (I am a lead author) for this kind of problem. Lmfit replaces the array of parameter values with named Parameter objects for better organization of fits. It also has a built-in Gaussian model (which also calculates FWHM, including an uncertainty). That is, with Lmfit, your script might look like:
import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel
from lmfit.lineshapes import gaussian
# create fake data that looks like yours
xdata = 7.670 + np.arange(41)*0.0010
ydata = gaussian(xdata, amplitude=-0.196, center=7.6881, sigma=0.001)
ydata += np.random.normal(size=41, scale=10.0)
# create gaussian model
gmodel = GaussianModel()
# fit data, giving initial values for amplitude, center, and sigma
result = gmodel.fit(ydata, x=xdata, amplitude=-0.1, center=7.688, sigma=0.005)
# show results
print(result.fit_report())
plt.plot(xdata, ydata, 'bo', label='data')
plt.plot(xdata, result.best_fit, 'r+-', label='fit')
plt.legend()
plt.show()
This will print out a report like
[[Model]]
Model(gaussian)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 21
# data points = 41
# variables = 3
chi-square = 5114.87632
reduced chi-square = 134.602009
Akaike info crit = 203.879794
Bayesian info crit = 209.020510
[[Variables]]
sigma: 9.7713e-04 +/- 1.5456e-04 (15.82%) (init = 0.005)
center: 7.68822727 +/- 1.5484e-04 (0.00%) (init = 7.688)
amplitude: -0.19273945 +/- 0.02643400 (13.71%) (init = -0.1)
fwhm: 0.00230096 +/- 3.6396e-04 (15.82%) == '2.3548200*sigma'
height: -78.6917624 +/- 10.7894236 (13.71%) == '0.3989423*amplitude/max(1.e-15, sigma)'
[[Correlations]] (unreported correlations are < 0.100)
C(sigma, amplitude) = -0.577
and produce a plot of data and best fit that should be close to what you are trying to do.
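If you want the FWHM and its uncertainty programmatically rather than from the printed report, they can be pulled from the result's parameters (a small sketch; 'fwhm' is the name GaussianModel gives to that derived parameter):
fwhm = result.params['fwhm']
print("FWHM: {:.6f} +/- {:.6f}".format(fwhm.value, fwhm.stderr))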

Plotting only one side of gaussian in Python using matplotlib and scipy

I have a set of points in the first quadrant that look like a gaussian, and I am trying to fit it using a gaussian in python and my code is as follows:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

x = np.array([37,69,157,238,274,319,391,495,533,626,1366,1855,2821,3615,4130,4374,6453,6863,7021,
              7951,8646,9656,10464,11400])
y = np.array([1.77,1.67,1.65,1.17,1.34,1.46,0.75,1,0.8,1.02,0.65,0.69,0.44,0.44,0.55,0.43,0.75,0.27,0.26,
              0.44,0.04,0.44,0.26,0.04])

n = 24                                 # the number of data points
mean = sum(x*y)/n                      # note this correction
sigma = np.sqrt(sum(y*(x-mean)**2)/n)  # note this correction

def gaus(x, a, x0, sigma):
    return a*np.exp(-(x-x0)**2/(2*sigma**2))

popt, pcov = curve_fit(gaus, x, y, p0=None, sigma=None)  # '''p0=[1,mean,sigma]'''
plt.plot(x,y,'b+:',label='data')
plt.plot(x,gaus(x,*popt),'ro:',label='fit')
plt.legend()
plt.title('Fig. 3 - Fit for Time Constant')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (V)')
plt.show()
And the output is this figure:
http://s2.postimg.org/wevggkc95/Workspace_1_022.png
Why do all the red points come out below the data? Also note that I am interested in a half-Gaussian, as my data are like that: the y values are large at first and then decrease like one side of the Gaussian bell. Can anyone tell me how to fit this curve in Python (in case it cannot be fit to a Gaussian)? In other words, I want code to fit the half (left side) Gaussian of my points (in the first quadrant only). Note that my points cannot be fit as an exponentially decreasing curve; I tried that earlier, and it does not fit well at lower x values.
Apparently your data do not fit well or easily to a Gaussian function. You use the default initial guess p0 = [1, 1, 1], which is so far away from any kind of optimal choice that curve_fit gives up before it gets started (check the values: popt = [1, 1, 1] and pcov is full of inf). You could try better guesses (e.g. p0 = [2, 0, 2000]), but on my system it won't converge: Optimal parameters not found: Number of calls to function has reached maxfev = 800.
To fit a "half-Gaussian", don't float the centre position x0 (just leave it equal to 0):
def gaus(x, a, sigma):
    return a*np.exp(-x**2/(2*sigma**2))

p0 = [1.2, 4000]
popt, pcov = curve_fit(gaus, x, y, p0=p0)
Unless you have a particular reason for wanting to fit a Gaussian, why not do a more robust linear least squares fit to a polynomial, e.g.:
pfit = np.polyfit(x, y, 3)
poly = np.poly1d(pfit)
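For completeness, a short sketch of evaluating and plotting that cubic alongside the data (assuming numpy is imported as np and matplotlib.pyplot as plt, as in the question's code):
xs = np.linspace(x.min(), x.max(), 500)   # dense grid for a smooth curve
plt.plot(x, y, 'b+', label='data')
plt.plot(xs, poly(xs), 'g-', label='cubic polyfit')
plt.legend()
plt.show()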

numpy.polyfit versus scipy.odr

I have a data set which in theory is described by a polynomial of the second degree. I would like to fit this data and I have used numpy.polyfit to do this. However, the down side is that the error on the returned coefficients is not available. Therefore I decided to also fit the data using scipy.odr. The weird thing was that the coefficients for the polynomial deviated from each other.
I do not understand this and therefore decided to test both fitting routines on a set of data that I produce myself:
import numpy
import scipy.odr
import matplotlib.pyplot as plt
x = numpy.arange(-20, 20, 0.1)
y = 1.8 * x**2 -2.1 * x + 0.6 + numpy.random.normal(scale = 100, size = len(x))
#Define function for scipy.odr
def fit_func(p, t):
    return p[0] * t**2 + p[1] * t + p[2]
#Fit the data using numpy.polyfit
fit_np = numpy.polyfit(x, y, 2)
#Fit the data using scipy.odr
Model = scipy.odr.Model(fit_func)
Data = scipy.odr.RealData(x, y)
Odr = scipy.odr.ODR(Data, Model, [1.5, -2, 1], maxit = 10000)
output = Odr.run()
#output.pprint()
beta = output.beta
betastd = output.sd_beta
print "poly", fit_np
print "ODR", beta
plt.plot(x, y, "bo")
plt.plot(x, numpy.polyval(fit_np, x), "r--", lw = 2)
plt.plot(x, fit_func(beta, x), "g--", lw = 2)
plt.tight_layout()
plt.show()
An example of an outcome is as follows:
poly [ 1.77992643 -2.42753714 3.86331152]
ODR [ 3.8161735 -23.08952492 -146.76214989]
In the included image, the solution of numpy.polyfit (red dashed line) corresponds pretty well. The solution of scipy.odr (green dashed line) is basically completely off. I do have to note that the difference between numpy.polyfit and scipy.odr was less in the actual data set I wanted to fit. However, I do not understand where the difference between the two comes from, why in my own testing example the difference is extremely big, and which fitting routine is better?
I hope you can provide answers that might help me give a better understanding between the two fitting routines and in the process provide answers to the questions I have.
In the way you are using ODR it does a full orthogonal distance regression. To have it do a normal nonlinear least squares fit add
Odr.set_job(fit_type=2)
before starting the optimization and you will get what you expected.
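In the context of the question's script, that is simply (a sketch):
Odr = scipy.odr.ODR(Data, Model, [1.5, -2, 1], maxit=10000)
Odr.set_job(fit_type=2)   # fit_type=2 selects ordinary least squares instead of full ODR
output = Odr.run()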
The reason the full ODR fails so badly is that no weights/standard deviations were specified. It obviously finds it hard to interpret that point cloud and assumes equal weights for x and y. If you provide estimated standard deviations, odr will yield a good (though of course different) result, too.
Data = scipy.odr.RealData(x, y, sx=0.1, sy=10)
The actual problem is that the odr output has the beta coefficients in the opposite order from the one numpy.polyfit uses, so the green curve is not calculated correctly. To plot it, use instead
plt.plot(x, fit_func(beta[::-1], x), "g--", lw = 2)

Scipy Fmin Gaussian model to real data

I've been trying to solve this for a bit and really just haven't seen an example or anything that my brain is able to use to move forward.
The goal is to find a model Gaussian curve by minimizing the total chi-squared between the real data and the model resulting from unknown parameters that require sensible estimations (the Gaussian is of unknown position, amplitude and width). scipy.optimize.fmin has come up but I've never used this before and I'm still very new to python...
Ultimately, I'd like to plot the original data along with the model - I have used pyplot before; it's just generating the model and using fmin that has me completely bewildered. Essentially, all I have is this:
def gaussian(a, b, c, x):
    return a*np.exp(-(x-b)**2/(2*c**2))
I've seen multiple ways to generate a model and this has rendered me confused and thus I have no code! I have imported my data file through np.loadtxt.
Thanks for anyone that can suggest a framework or help at all.
There are basically four (or five) main steps involved in model fitting problems like this:
1. Define your forward model, yhat = F(P, x), that takes a set of parameters P and your independent variable x and estimates your response variable y.
2. Define your loss function, loss = L(P, x, y), that you'd like to minimize over your parameters.
3. Optional: define a function that returns the Jacobian matrix, i.e. the partial derivatives of your loss function w.r.t. your model parameters.*
4. Make an initial guess at your model parameters.
5. Plug all of these into one of the optimizers and get the fitted parameters for your model.
Here's a worked example to get you started:
import numpy as np
from scipy.optimize import minimize
from matplotlib import pyplot as pp
# function that defines the model we're fitting
def gaussian(P, x):
    a, b, c = P
    return a*np.exp(-(x-b)**2 / (2*c**2))
# objective function to minimize
def loss(P, x, y):
    yhat = gaussian(P, x)
    return ((y - yhat)**2).sum()
# generate a gaussian distribution with known parameters
amp = 1.3543
pos = 64.546
var = 12.234
P_real = np.array([amp, pos, var])
# we use the vector of real parameters to generate our fake data
x = np.arange(100)
y = gaussian(P_real, x)
# add some gaussian noise to make things harder
y_noisy = y + np.random.randn(y.size)*0.5
# minimize needs an initial guess at the model parameters
P_guess = np.array([1, 50, 25])
# minimize provides a unified interface to all of scipy's solvers. you
# can also access them individually in scipy.optimize, but the
# standalone versions have annoying differences in their syntax. for now
# we'll use the Nelder-Mead solver, which doesn't use the Jacobian. we
# also need to hand it x and y_noisy as additional args to loss()
res = minimize(loss, P_guess, method='Nelder-Mead', args=(x, y_noisy))
# res is an OptimizeResult (a dict subclass) containing the results of the
# optimization. in particular we want the optimized model parameters:
P_fit = res['x']
# we can pass these to gaussian() to evaluate our fitted model
y_fit = gaussian(P_fit, x)
# now let's plot the results:
fig, ax = pp.subplots(1, 1)  # (ax.hold is no longer needed or available in current matplotlib)
ax.plot(x, y, '-r', lw=2, label='Real')
ax.plot(x, y_noisy, '-k', alpha=0.5, label='Noisy')
ax.plot(x, y_fit, '--b', lw=5, label='Fit')
ax.legend(loc=0, fancybox=True)
*Some solvers, e.g. conjugate gradient methods, take the Jacobian as an additional argument, and by and large these solvers are faster and more robust. If you're feeling lazy and performance isn't all that critical, though, you can usually get away without providing the Jacobian, in which case the solver will estimate the gradients by finite differences.
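For illustration, here is a hedged sketch of what an analytic gradient of the loss above could look like, passed to a gradient-based solver via the jac argument (the derivative expressions are worked out by hand from the gaussian model defined earlier, so check them before relying on them):
def loss_grad(P, x, y):
    # gradient of sum((y - f)^2) w.r.t. a, b, c for f = a*exp(-(x-b)^2/(2*c^2))
    a, b, c = P
    e = np.exp(-(x - b)**2 / (2 * c**2))
    f = a * e
    r = y - f
    df_da = e
    df_db = f * (x - b) / c**2
    df_dc = f * (x - b)**2 / c**3
    return np.array([-2 * np.sum(r * df_da),
                     -2 * np.sum(r * df_db),
                     -2 * np.sum(r * df_dc)])

res = minimize(loss, P_guess, method='CG', jac=loss_grad, args=(x, y_noisy))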
You can read more about the different solvers in the scipy.optimize documentation.

Optimization block in python (scipy) - a histogram with a histogram

I need to fit an experimental histogram by a simulated one (to determine several parameters of the simulated one with which it fits best). I've tried curve_fit from scipy.optimize, but it does not work in this case: an error "... is not a python function" is returned. Is it possible to do this automatically in scipy or some other python module? If not, could you, please, give some links to probable algorithms to adjust them myself?
From what you have said, I think the following should help; it seems you're trying to use curve_fit in the wrong way.
You need to define the distribution you are trying to fit. For example, if I have some data that looks normally distributed and I want to know how well it fits, and which parameters give the best fit, I would do the following:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Create fake data and run it through `histogram` to get the experimental distribution
experimental = np.random.normal(10.0, 0.4, size=10000)
n, bins = np.histogram(experimental, bins=100, density=True)
# This just gives the mid points of the bins, many different (and better) ways exist to
# do this I'm sure
bins_mid_points = (0.5*(bins + np.roll(bins, 1)))[1:]
# Define the normal distribution as a function
def normal(x, sigma, mu):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2*np.pi))
# Fit the experimental data,
popt, pcov = curve_fit(normal, xdata=bins_mid_points, ydata=n)
# Plot both
plt.bar(bins[:-1], n, width=np.diff(bins))
plt.plot(bins_mid_points, normal(bins_mid_points, *popt), color="r", lw=3)
The red line shows our simulated fit; if required you could also plot this as a histogram. The output popt is an array of [sigma, mu] values which best fit the data, while pcov can be used to determine how good the fit is.
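A small sketch of that last point: one-sigma uncertainties on the fitted parameters can be read off the diagonal of pcov (assuming the fit is well behaved):
perr = np.sqrt(np.diag(pcov))
print("sigma = {:.4f} +/- {:.4f}".format(popt[0], perr[0]))
print("mu    = {:.4f} +/- {:.4f}".format(popt[1], perr[1]))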
Note that I normalised the data in the histogram (density=True); this is because the function I defined is the unit-area normal distribution.
You need to think carefully about what distribution you expect and what statistic you're looking to get from it.
