Fit poisson distribution to data and find lambda [duplicate] - python

I am trying to fit a curve over the histogram of a Poisson distribution that looks like this
I have modified the fit function so that it resembles a Poisson distribution, with the parameter t as a variable. But the result of curve_fit cannot be plotted and I am not sure why.
def histo(bsize):
    N = bsize
    # bin width
    bw = (dt.max() - dt.min())/(N - 1.)
    bin1 = dt.min() + bw*np.arange(N)
    # define the array to hold the occurrence count
    bincount = np.array([])
    for bin in bin1:
        count = np.where((dt >= bin) & (dt < bin + bw))[0].size
        bincount = np.append(bincount, count)
    # bin center
    binc = bin1 + 0.5*bw
    plt.figure()
    plt.plot(binc, bincount, drawstyle='steps-mid')
    plt.xlabel("Interval[ticks]")
    plt.ylabel("Frequency")

histo(30)
plt.xlim(0, .5e8)
plt.ylim(0, 25000)
import numpy as np
from scipy.optimize import curve_fit

delta_t = 1.42e7

def func(x, t):
    return t*np.exp(-delta_t/t)

popt, pcov = curve_fit(func, np.arange(0, .5e8), histo(30))
plt.plot(popt)

The problem with your code is that you do not know what the return values of curve_fit are. They are the optimal parameters for the fit function and their covariance matrix - not something you can plot directly.
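For reference, a minimal sketch of what one could do with popt instead (here xdata and ydata stand in for the bin centers and bin counts, which your histo function would need to return):
popt, pcov = curve_fit(func, xdata, ydata)
plt.plot(xdata, func(xdata, *popt))  # evaluate the fit function with the fitted parameters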
Binned Least-Squares Fit
In general you can get everything much, much more easily:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.special import factorial
from scipy.stats import poisson

# get poisson deviated random numbers
data = np.random.poisson(2, 1000)

# the bins should be of integer width, because poisson is an integer distribution
bins = np.arange(11) - 0.5
entries, bin_edges, patches = plt.hist(data, bins=bins, density=True, label='Data')

# calculate bin centers
bin_centers = 0.5*(bin_edges[1:] + bin_edges[:-1])

def fit_function(k, lamb):
    '''poisson function, parameter lamb is the fit parameter'''
    return poisson.pmf(k, lamb)

# fit with curve_fit
parameters, cov_matrix = curve_fit(fit_function, bin_centers, entries)

# plot poisson-deviation with fitted parameter
x_plot = np.arange(0, 15)

plt.plot(
    x_plot,
    fit_function(x_plot, *parameters),
    marker='o', linestyle='',
    label='Fit result',
)
plt.legend()
plt.show()
This is the result:
Unbinned Maximum-Likelihood fit
An even better possibility would be to not use a histogram at all
and instead to carry out a maximum-likelihood fit.
On closer examination even this is unnecessary, because the
maximum-likelihood estimator for the parameter of the Poisson distribution is simply the arithmetic mean of the data.
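Concretely, a one-line sketch (assuming data holds the observed counts):
lamb_mle = np.mean(data)  # maximum-likelihood estimate of the Poisson parameter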
However, if you have other, more complicated PDFs, you can use this as example:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from scipy.special import factorial
from scipy import stats

def poisson(k, lamb):
    """poisson pdf, parameter lamb is the fit parameter"""
    return (lamb**k/factorial(k)) * np.exp(-lamb)

def negative_log_likelihood(params, data):
    """
    The negative log-Likelihood-Function
    """
    lnl = -np.sum(np.log(poisson(data, params[0])))
    return lnl

def negative_log_likelihood(params, data):
    ''' better alternative using scipy '''
    return -stats.poisson.logpmf(data, params[0]).sum()

# get poisson deviated random numbers
data = np.random.poisson(2, 1000)

# minimize the negative log-Likelihood
result = minimize(negative_log_likelihood,  # function to minimize
                  x0=np.ones(1),            # start value
                  args=(data,),             # additional arguments for function
                  method='Powell',          # minimization method, see docs
                  )

# result is a scipy optimize result object, the fit parameters
# are stored in result.x
print(result)

# plot poisson-distribution with fitted parameter
x_plot = np.arange(0, 15)

plt.plot(
    x_plot,
    stats.poisson.pmf(x_plot, result.x),
    marker='o', linestyle='',
    label='Fit result',
)
plt.legend()
plt.show()

Thank you for the wonderful discussion!
You might want to consider the following:
1) Instead of computing "poisson", compute "log poisson", for better numerical behavior.
2) Instead of using "lamb", use its logarithm (let me call it "log_mu"), to keep the fit from "wandering" into negative values of "mu".
So
def log_poisson(k, log_mu):
    return k*log_mu - loggamma(k+1) - math.exp(log_mu)
Where "loggamma" is the scipy.special.loggamma function.
Actually, in the above fit, the "loggamma" term only adds a constant offset to the function being minimized, so one can just do:
def log_poisson_(k, log_mu):
    return k*log_mu - math.exp(log_mu)
NOTE: log_poisson_() is not the same as log_poisson(), but when used for minimization in the manner above, it will give the same fitted minimum (the same value of mu, up to numerical issues). The value of the function being minimized will have been offset, but one doesn't usually care about that anyway.
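A minimal sketch of how this plugs into the minimize pattern above (the function name and the log_mu parameterization are illustrative, not from the original answer):
import numpy as np
from scipy.optimize import minimize
from scipy.special import loggamma

def negative_log_likelihood_logmu(params, data):
    """negative log-likelihood of a Poisson sample, parameterized by log_mu = log(mu)"""
    log_mu = params[0]
    # sum over the data of -log P(k | mu); the loggamma term is a constant offset in log_mu
    return -np.sum(data*log_mu - loggamma(data + 1) - np.exp(log_mu))

data = np.random.poisson(2, 1000)
result = minimize(negative_log_likelihood_logmu, x0=np.zeros(1), args=(data,), method='Powell')
mu_fit = np.exp(result.x).item()  # transform back from log_mu to mu
print(mu_fit)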

Related

Least Square fit for Gaussian in Python

I have tried to implement a Gaussian fit in Python with the given data. However, I am unable to obtain the desired fit. Any suggestions would help.
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
from scipy import asarray as ar, exp
xData=ar([-7.66E-06,-7.60E-06,-7.53E-06,-7.46E-06,-7.40E-06,-7.33E-06,-7.26E-06,-7.19E-06,-7.13E-06,-7.06E-06,-6.99E-06,
-6.93E-06,-6.86E-06,-6.79E-06,-6.73E-06,-6.66E-06,-6.59E-06,-6.52E-06,-6.46E-06,-6.39E-06,-6.32E-06,-6.26E-06,-6.19E-06,
-6.12E-06,-6.06E-06,-5.99E-06,-5.92E-06,-5.85E-06,-5.79E-06,-5.72E-06])
yData=ar([17763,2853,3694,4203,4614,4984,5080,7038,6905,8729,11687,13339,14667,16175,15953,15342,14340,15707,13001,10982,8867,6827,5262,4760,3869,3232,2835,2746,2552,2576])
#plot the data points
plt.plot(xData,yData,'bo',label='experimental_data')
plt.show()
#define the function we want to fit the plot into
# Define the Gaussian function
n = len(xData)
mean = sum(xData*yData)/n
sigma = np.sqrt(sum(yData*(xData-mean)**2)/n)
def Gauss(x, I0, x0, sigma, Background):
    return I0*exp(-(x-x0)**2/(2*sigma**2)) + Background
popt,pcov = curve_fit(Gauss,xData,yData,p0=[1,mean,sigma, 0.0])
print(popt)
plt.plot(xData,yData,'b+:',label='data')
plt.plot(xData,Gauss(xData,*popt),'ro:',label='fit')
plt.legend()
plt.title('Gaussian_Fit')
plt.xlabel('x-axis')
plt.ylabel('PL Intensity')
plt.show()
When computing mean and sigma, divide by sum(yData), not n.
mean = sum(xData*yData)/sum(yData)
sigma = np.sqrt(sum(yData*(xData-mean)**2)/sum(yData))
The reason is that, say for mean, you need to compute the average of xData weighted by yData. For this, you need to normalize yData to have sum 1, i.e., you need to multiply xData with yData / sum(yData) and take the sum.
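Equivalently, as a quick sketch, numpy's built-in weighted average gives the same result:
mean = np.average(xData, weights=yData)
sigma = np.sqrt(np.average((xData - mean)**2, weights=yData))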
With the correction by j1-lee and removing the first point which clearly doesn't agree with the Gaussian model, the fit looks like this:
Removing the bin that clearly doesn't belong in the fit reduces the fitted width by some 20% and the (fitted) noise to background ratio by some 30%. The mean is only marginally affected.
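For illustration, dropping that first point is just a matter of slicing before the fit (a sketch along the lines of the question's code):
# exclude the first data point, which is inconsistent with the Gaussian model
popt, pcov = curve_fit(Gauss, xData[1:], yData[1:], p0=[1, mean, sigma, 0.0])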

I want to fit my histogram with a curve but don't know what to do

I want a normal curve to fit the histogram I already have.
navf2 is a list of normalized random numbers and the histogram is based on those, and I want a curve to show the general trend of the histogram.
while len(navf2) < 252:
    number = np.random.normal(0, 1, None)
    navf2.append(number)
bin_edges=np.arange(70,130,1)
plt.style.use(["dark_background",'ggplot'])
plt.hist(navf2, bins=bin_edges, alpha=1)
plt.ylabel("Frequency of final NAV")
plt.xlabel("Ranges")
ymin=0
ymax=100
plt.ylim([ymin,ymax])
plt.show()
Here You go:
=^..^=
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
# create raw data
data = np.random.uniform(size=252)
# distribution fitting
mu, sigma = norm.fit(data)
# fitting distribution
x = np.linspace(-0.5,1.5,100)
y = norm.pdf(x, loc=mu, scale=sigma)
# plot data
plt.plot(x, y,'r-')
plt.hist(data, density=1, alpha=1)
plt.show()
Output:
Here is another solution using your code from the question. We can achieve the expected result without the scipy library. We will have to do three things: compute the mean of the data set, compute its standard deviation, and create a function that generates the normal (Gaussian) curve.
To compute the mean we can use the function in the numpy library, i.e. mu = np.mean(your_data_set_here).
The standard deviation of the set is the square root of the average of the squared differences between the values and the mean (https://en.wikipedia.org/wiki/Standard_deviation). We can express it in code as follows, using the numpy library again:
data_set = [] # some data set
sigma = np.sqrt(1/(len(data_set))*sum((data_set-mu)**2))
Finally we have to build the function for the normal curve, or Gaussian (https://en.wikipedia.org/wiki/Gaussian_function). It relies on both the mean (mu) and the standard deviation (sigma), so we will use those as parameters in our function:
def Gaussian(x, sigma, mu):  # sigma is the standard deviation and mu is the mean
    return (1/(np.sqrt(2*np.pi)*sigma)) * np.exp(-(x-mu)**2/(2*sigma**2))
Putting it all together looks like this:
import numpy as np
import matplotlib.pyplot as plt

navf2 = []
while len(navf2) < 252:
    number = np.random.normal(0, 1, None)  # since all values will be between 0,1 the bin size doesnt work
    navf2.append(number)

navf2 = np.asarray(navf2)  # convert to array for better results
mu = np.mean(navf2)  # the avg of all values in navf2
sigma = np.sqrt(1/(len(navf2))*sum((navf2-mu)**2))  # standard deviation of navf2
x_vals = np.arange(min(navf2), max(navf2), 0.001)  # create a flat range based off data
# to build the curve
gauss = []  # store values for normal curve here

def Gaussian(x, sigma, mu):  # defining the normal curve
    return (1/(np.sqrt(2*np.pi)*sigma)) * np.exp(-(x-mu)**2/(2*sigma**2))

for val in x_vals:
    gauss.append(Gaussian(val, sigma, mu))

plt.style.use(["dark_background", 'ggplot'])
plt.hist(navf2, density=1, alpha=1)  # add density = 1 to fix the scaling issues
plt.ylabel("Frequency of final NAV")
plt.xlabel("Ranges")
plt.plot(x_vals, gauss)
plt.show()
Here is a picture of an output:
Hope this helps, I tried to keep it as close to your original code as possible!

Python Spinmob curve_fit works but fitter does not

I'm trying to fit data with a Gaussian.
The raw data itself displays a very obvious peak.
When I attempt fitting using curve_fit, the fit identifies the peak but it does not have a curved top.
I am trying to fit the data now with spinmob's fitter as well. However, this fitting just gives a straight line.
I've tried changing several parameters of the fitter, the Gaussian function definition, and the initial parameters for the fit but nothing seems to work.
Here is the code:
from scipy.optimize import curve_fit
from scipy import asarray as ar,exp
import spinmob as s
x = x30
y = ydata
def gaussian(x, A, mu, sig):  # See http://mathworld.wolfram.com/GaussianFunction.html
    return A/(sig * np.sqrt(2*np.pi)) * np.exp(-np.power(x-mu, 2) / (2 * np.power(sig, 2)))
popt,pcov = curve_fit(gaussian,x,y,p0=[1,7.688,0.005])
FWHM = 2*np.sqrt(2*np.log(2))*popt[2]
print("FWHM: {}".format(FWHM))
plt.plot(x,y,'bo',label='data')
plt.plot(x,gaussian(x,*popt),'r+-',label='fit')
plt.legend()
fitter = s.data.fitter()
fitter.set(subtract_bg=True, plot_guess_zoom=True)
fitter.set_functions(f=gaussian, p='A=1,mu=8.688,sig=0.001')
fitter.set_data(x, y, eydata = 0.03)
fitter.fit()
The curve_fit returns this plot:
Curve_fit plot
The spinmob fitter plot gives this:
Spinmob Fitter Plot
Assuming that spinmob actually uses scipy.curve_fit under the hood, I would guess (sorry) that the problem is that the initial values you give to it are so far off that it cannot possibly find a solution.
For sure, A=1 is not a very good guess for either scipy.curve_fit() or spinmob.fitter(). The peak is definitely negative, and you should be guessing a value more like -0.1 than +1. In fact you could probably assert that A must be < 0.
The initial value of 7.688 for mu that you give to curve_fit() is pretty good, and will allow a solution. I do not know whether it is a typo or not, but the initial value of 8.688 for mu that you give to spinmob.fitter() is very far off (that is, way outside the data range), and the fit will never be able to refine its way to the correct solution from there.
Initial values matter for curve-fitting and poor initial values can lead to bad results.
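For instance, a minimal adjustment along those lines (the exact guess values are illustrative, not fitted results) would be:
# negative amplitude for the dip, mu inside the data range, a plausible width
popt, pcov = curve_fit(gaussian, x, y, p0=[-0.1, 7.688, 0.005])
fitter.set_functions(f=gaussian, p='A=-0.1,mu=7.688,sig=0.005')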
It might be viewed by some as a shameless plug, but allow me to encourage you to try lmfit (https://lmfit.github.io/lmfit-py/) (I am a lead author) for this kind of problem. Lmfit replaces the array of parameter values with named Parameter objects for better organization of fits. It also has a built-in Gaussian model (which also calculates FWHM, including an uncertainty). That is, with Lmfit, your script might look like:
import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel
from lmfit.lineshapes import gaussian
# create fake data that looks like yours
xdata = 7.670 + np.arange(41)*0.0010
ydata = gaussian(xdata, amplitude=-0.196, center=7.6881, sigma=0.001)
ydata += np.random.normal(size=41, scale=10.0)
# create gaussian model
gmodel = GaussianModel()
# fit data, giving initial values for amplitude, center, and sigma
result = gmodel.fit(ydata, x=xdata, amplitude=-0.1, center=7.688, sigma=0.005)
# show results
print(result.fit_report())
plt.plot(xdata, ydata, 'bo', label='data')
plt.plot(xdata, result.best_fit, 'r+-', label='fit')
plt.legend()
plt.show()
This will print out a report like
[[Model]]
    Model(gaussian)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 21
    # data points      = 41
    # variables        = 3
    chi-square         = 5114.87632
    reduced chi-square = 134.602009
    Akaike info crit   = 203.879794
    Bayesian info crit = 209.020510
[[Variables]]
    sigma:      9.7713e-04 +/- 1.5456e-04 (15.82%) (init = 0.005)
    center:     7.68822727 +/- 1.5484e-04 (0.00%) (init = 7.688)
    amplitude: -0.19273945 +/- 0.02643400 (13.71%) (init = -0.1)
    fwhm:       0.00230096 +/- 3.6396e-04 (15.82%) == '2.3548200*sigma'
    height:    -78.6917624 +/- 10.7894236 (13.71%) == '0.3989423*amplitude/max(1.e-15, sigma)'
[[Correlations]] (unreported correlations are < 0.100)
    C(sigma, amplitude) = -0.577
and produce a plot of data and best fit like
which should be close to what you are trying to do.

How to compute standard deviation errors with scipy.optimize.least_squares

I compare fitting with optimize.curve_fit and optimize.least_squares. With curve_fit I get the covariance matrix pcov as an output, and from it I can calculate the standard deviation errors for my fitted variables:
perr = np.sqrt(np.diag(pcov))
If I do the fitting with least_squares, I do not get any covariance matrix output and I am not able to calculate the standard deviation errors for my variables.
Here's my example:
#import modules
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import least_squares
noise = 0.5
N = 100
t = np.linspace(0, 4*np.pi, N)
# generate data
def generate_data(t, freq, amplitude, phase, offset, noise=0, n_outliers=0, random_state=0):
    # formula for data generation with noise and outliers
    y = np.sin(t * freq + phase) * amplitude + offset
    rnd = np.random.RandomState(random_state)
    error = noise * rnd.randn(t.size)
    outliers = rnd.randint(0, t.size, n_outliers)
    error[outliers] *= 10
    return y + error
#generate data
data = generate_data(t, 1, 3, 0.001, 0.5, noise, n_outliers=10)
#initial guesses
p0=np.ones(4)
x0=np.ones(4)
# create the function we want to fit
def my_sin(x, freq, amplitude, phase, offset):
    return np.sin(x * freq + phase) * amplitude + offset

# create the function we want to fit for least_squares
def my_sin_lsq(x, t, y):
    # x[0] = freq, x[1] = amplitude, x[2] = phase, x[3] = offset
    return (np.sin(t*x[0] + x[2])*x[1] + x[3]) - y
# now do the fit for curve_fit
fit = curve_fit(my_sin, t, data, p0=p0)
print('Curve fit output: ' + str(fit[0]))
#now do the fit for least_square
res_lsq = least_squares(my_sin_lsq, x0, args=(t, data))
print('Least_squares output: ' + str(res_lsq.x))
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = my_sin(t, *p0)
#data_first_guess_lsq = x0[2]*np.sin(t*x0[0]+x0[1])+x0[3]
data_first_guess_lsq = my_sin(t, *x0)
# recreate the fitted curve using the optimized parameters
data_fit = my_sin(t, *fit[0])
data_fit_lsq = my_sin(t, *res_lsq.x)
#calculation of residuals
residuals = data - data_fit
residuals_lsq = data - data_fit_lsq
ss_res = np.sum(residuals**2)
ss_tot = np.sum((data-np.mean(data))**2)
ss_res_lsq = np.sum(residuals_lsq**2)
ss_tot_lsq = np.sum((data-np.mean(data))**2)
#R squared
r_squared = 1 - (ss_res/ss_tot)
r_squared_lsq = 1 - (ss_res_lsq/ss_tot_lsq)
print('R squared curve_fit is: ' + str(r_squared))
print('R squared least_squares is: ' + str(r_squared_lsq))
plt.figure()
plt.plot(t, data)
plt.title('curve_fit')
plt.plot(t, data_first_guess)
plt.plot(t, data_fit)
plt.plot(t, residuals)
plt.figure()
plt.plot(t, data)
plt.title('lsq')
plt.plot(t, data_first_guess_lsq)
plt.plot(t, data_fit_lsq)
plt.plot(t, residuals_lsq)
#error
perr = np.sqrt(np.diag(fit[1]))
print('The standard deviation errors for curve_fit are: ' + str(perr))
I would be very thankful for any help, best wishes
ps: I got a lot of input from this source and used part of the code Robust regression
The result of optimize.least_squares has an attribute called jac. From the documentation:
jac : ndarray, sparse matrix or LinearOperator, shape (m, n)
Modified Jacobian matrix at the solution, in the sense that J^T J is a Gauss-Newton approximation of the Hessian of the cost function. The type is the same as the one used by the algorithm.
This can be used to estimate the Covariance Matrix of the parameters using the following formula: Sigma = (J'J)^-1.
J = res_lsq.jac
cov = np.linalg.inv(J.T.dot(J))
To find the standard deviation errors of the parameters one can then use:
perr = np.sqrt(np.diagonal(cov))
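Applied to res_lsq from the question above, a minimal sketch would be the following; note that to make the result comparable with curve_fit's default pcov (which rescales by the residual variance when absolute_sigma=False), the covariance is multiplied by the reduced chi-square:
J = res_lsq.jac
cov = np.linalg.inv(J.T.dot(J))
chi2dof = np.sum(res_lsq.fun**2) / (res_lsq.fun.size - res_lsq.x.size)
cov *= chi2dof  # rescale, as curve_fit does when absolute_sigma=False
perr_lsq = np.sqrt(np.diagonal(cov))  # standard deviation errors of the fitted parameters
print('The standard deviation errors for least_squares are: ' + str(perr_lsq))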
The SciPy routine optimize.least_squares requires the user to provide as input a function fun(...) which returns a vector of residuals. This is typically defined as
residuals = (data - model)/sigma
where data and model are vectors with the data to fit and the corresponding model predictions for each data point, while sigma is the 1σ uncertainty in each data value.
In this situation, and assuming one can trust the input sigma uncertainties, one can use the output Jacobian matrix jac returned by least_squares to estimate the covariance matrix. Moreover, assuming the covariance matrix is diagonal, or simply ignoring non-diagonal terms, one can also obtain the 1σ uncertainty perr in the model parameters (often called "formal errors") as follows (see Section 15.4.2 of Numerical Recipes 3rd ed.)
import numpy as np
from scipy import linalg, optimize
res = optimize.least_squares(...)
U, s, Vh = linalg.svd(res.jac, full_matrices=False)
tol = np.finfo(float).eps*s[0]*max(res.jac.shape)
w = s > tol
cov = (Vh[w].T/s[w]**2) @ Vh[w]  # robust covariance matrix
perr = np.sqrt(np.diag(cov)) # 1sigma uncertainty on fitted parameters
The above code to obtain the covariance matrix is formally the same as the following simpler one (as suggested by Alex), but the above has the major advantage that it works even when the Jacobian is close to degenerate, which is a common occurrence in real-world least-squares fits
cov = linalg.inv(res.jac.T @ res.jac)  # covariance matrix when jac not degenerate
If one does not trust the input uncertainties sigma, one can still assume that the fit is good, to estimate the data uncertainties from the fit itself. This corresponds to assuming chi**2/DOF=1, where DOF is the number of degrees of freedom. In this case, one can use the following lines to rescale the covariance matrix before computing the uncertainties
chi2dof = np.sum(res.fun**2)/(res.fun.size - res.x.size)
cov *= chi2dof
perr = np.sqrt(np.diag(cov)) # 1sigma uncertainty on fitted parameters

How to calculate error for polynomial fitting (in slope and intercept)

Hi, I want to calculate the errors in the slope and intercept that are computed by the scipy.polyfit function. I have a (+/-) uncertainty for ydata, so how can I include it when calculating the uncertainty of the slope and intercept? My code is:
from scipy import polyfit
import pylab as plt
from numpy import *
data = loadtxt("data.txt")
xdata,ydata = data[:,0],data[:,1]
x_d,y_d = log10(xdata),log10(ydata)
polycoef = polyfit(x_d, y_d, 1)
yfit = 10**( polycoef[0]*x_d+polycoef[1] )
plt.subplot(111)
plt.loglog(xdata,ydata,'.k',xdata,yfit,'-r')
plt.show()
Thanks a lot
You could use scipy.optimize.curve_fit instead of polyfit. It has a parameter sigma for errors of ydata. If you have your error for every y value in a sequence yerror (so that yerror has the same length as your y_d sequence) you can do:
polycoef, _ = scipy.optimize.curve_fit(lambda x, a, b: a*x+b, x_d, y_d, sigma=yerror)
For an alternative see the paragraph Fitting a power-law to data with errors in the Scipy Cookbook.
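A minimal sketch of the full workflow, assuming yerror holds the 1-sigma uncertainties of y_d (absolute_sigma=True tells curve_fit to treat them as absolute errors rather than relative weights):
import numpy as np
from scipy.optimize import curve_fit

def line(x, a, b):
    return a*x + b

polycoef, pcov = curve_fit(line, x_d, y_d, sigma=yerror, absolute_sigma=True)
slope_err, intercept_err = np.sqrt(np.diag(pcov))  # 1-sigma errors on slope and intercept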
