Fitting negative binomial in python

In scipy there is no support for fitting a negative binomial distribution to data
(maybe due to the fact that the negative binomial in scipy is only discrete).
For a normal distribution I would just do:
from scipy.stats import norm
param = norm.fit(samp)
Is there a similar 'ready to use' function in any other library?

Statsmodels has discrete.discrete_model.NegativeBinomial.fit(), see here:
https://www.statsmodels.org/dev/generated/statsmodels.discrete.discrete_model.NegativeBinomial.fit.html#statsmodels.discrete.discrete_model.NegativeBinomial.fit
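For a quick illustration, here is a minimal sketch (not from the original answer) of fitting an intercept-only negative binomial model with statsmodels and reading off the estimated mean and dispersion; the parameter ordering assumed below may vary across statsmodels versions:
import numpy as np
import statsmodels.api as sm
from scipy.stats import nbinom

# synthetic counts with known parameters
y = nbinom.rvs(n=5, p=0.5, size=2000)
X = np.ones((len(y), 1))                 # intercept-only design matrix

res = sm.NegativeBinomial(y, X).fit(disp=0)
mu_hat = np.exp(res.params[0])           # fitted mean (exp of the intercept)
alpha_hat = res.params[1]                # fitted NB2 dispersion parameter
print(mu_hat, alpha_hat)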

Not only because it is discrete, but also because a maximum likelihood fit to the negative binomial can be quite involved, especially with an additional location parameter. That is probably why a .fit() method is not provided for it (and for other discrete distributions in scipy). Here is an example:
In [163]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as ss
import scipy.optimize as so
In [164]:
#define a likelihood function
def likelihood_f(P, x, neg=1):
    n = np.round(P[0])      # by definition, n should be an integer
    p = P[1]
    loc = np.round(P[2])
    return neg * (np.log(ss.nbinom.pmf(x, n, p, loc))).sum()
In [165]:
#generate a random variable
X=ss.nbinom.rvs(n=100, p=0.4, loc=0, size=1000)
In [166]:
#The likelihood
likelihood_f([100,0.4,0], X)
Out[166]:
-4400.3696690513316
In [167]:
#A simple fit, the fit is not good and the parameter estimate is way off
result=so.fmin(likelihood_f, [50, 1, 1], args=(X,-1), full_output=True, disp=False)
P1=result[0]
(result[1], result[0])
Out[167]:
(4418.599495886474, array([ 59.61196161, 0.28650831, 1.15141838]))
In [168]:
#Try a different set of start parameters; the fit is still not good and the parameter estimate is still way off
result=so.fmin(likelihood_f, [50, 0.5, 0], args=(X,-1), full_output=True, disp=False)
P1=result[0]
(result[1], result[0])
Out[168]:
(4417.1495981801972,
array([ 6.24809397e+01, 2.91877405e-01, 6.63343536e-04]))
In [169]:
#In this case we need a loop to get it right
result=[]
for i in range(40, 120):    # in fact (80, 120) should probably be enough
    _ = so.fmin(likelihood_f, [i, 0.5, 0], args=(X, -1), full_output=True, disp=False)
    result.append((_[1], _[0]))
In [170]:
#get the MLE
P2=sorted(result, key=lambda x: x[0])[0][1]
sorted(result, key=lambda x: x[0])[0]
Out[170]:
(4399.780263084549,
array([ 9.37289361e+01, 3.84587087e-01, 3.36856705e-04]))
In [171]:
#Which one is visually better?
plt.hist(X, bins=20, density=True)    # 'density' replaces the removed 'normed' argument
plt.plot(range(260), ss.nbinom.pmf(range(260), np.round(P1[0]), P1[1], np.round(P1[2])), 'g-')
plt.plot(range(260), ss.nbinom.pmf(range(260), np.round(P2[0]), P2[1], np.round(P2[2])), 'r-')
Out[171]:
[<matplotlib.lines.Line2D at 0x109776c10>]
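As a side note for readers on recent SciPy (1.9 or later): the generic scipy.stats.fit function can do a maximum likelihood fit of discrete distributions such as nbinom directly, given bounds for the parameters, so the manual grid over n above can often be avoided. A minimal sketch, with loc held at 0:
import scipy.stats as ss

X = ss.nbinom.rvs(n=100, p=0.4, size=1000)
res = ss.fit(ss.nbinom, X, bounds={'n': (1, 200), 'p': (0, 1), 'loc': (0, 0)})
print(res.params)    # maximum likelihood estimates of (n, p, loc)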

I know this thread is quite old, but current readers may want to look at this repo which is made for this purpose: https://github.com/gokceneraslan/fit_nbinom
There's also an implementation here, though part of a larger package: https://github.com/ernstlab/ChromTime/blob/master/optimize.py

I stumbled across this thread, and found an answer for anyone else wondering.
If you simply need the n, p parameterisation used by scipy.stats.nbinom you can convert the mean and variance estimates:
mu = np.mean(sample)
sigma_sqr = np.var(sample)
n = mu**2 / (sigma_sqr - mu)
p = mu / sigma_sqr
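A self-contained sketch of this moment-based conversion (the sample here is synthetic, just to check that the recovered values are close to the true n and p; note it only works when the sample is overdispersed, i.e. variance > mean):
import numpy as np
from scipy.stats import nbinom

sample = nbinom.rvs(n=10, p=0.3, size=100_000)   # synthetic data with known parameters

mu = np.mean(sample)
sigma_sqr = np.var(sample)
n_est = mu**2 / (sigma_sqr - mu)    # requires sigma_sqr > mu (overdispersion)
p_est = mu / sigma_sqr
print(n_est, p_est)                 # should be close to 10 and 0.3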
If you want the dispersion parameter, you can use a negative binomial regression model from statsmodels with just an intercept term. This will find the dispersion parameter alpha using MLE.
# Data processing
import pandas as pd
import numpy as np
# Analysis models
import statsmodels.formula.api as smf
from scipy.stats import nbinom
def convert_params(mu, alpha):
    """
    Convert the mean/dispersion parameterization of a negative binomial
    to the one scipy supports.

    Parameters
    ----------
    mu : float
        Mean of NB distribution.
    alpha : float
        Overdispersion parameter used for variance calculation.

    See https://en.wikipedia.org/wiki/Negative_binomial_distribution#Alternative_formulations
    """
    var = mu + alpha * mu ** 2
    p = mu / var
    r = mu ** 2 / (var - mu)
    return r, p
# Generate sample data
n = 2
p = 0.9
sample = nbinom.rvs(n=n, p=p, size=10000)

# Estimate parameters
## The sample mean estimates the expectation parameter of the negative binomial distribution
mu = np.mean(sample)
## Dispersion parameter from an NB model with only an intercept term
nbfit = smf.negativebinomial("nbdata ~ 1", data=pd.DataFrame({"nbdata": sample})).fit()
alpha = nbfit.params[1]  # Dispersion parameter

# Convert parameters to n, p parameterization
n_est, p_est = convert_params(mu, alpha)

# Check that estimates are close to the true values:
print("""
                 {:<3} {:<3}
True parameters: {:<3} {:<3}
Estimates      : {:<3} {:<3}""".format('n', 'p', n, p,
                                       np.round(n_est, 2), np.round(p_est, 2)))


Scipy method SLSQP with vector returns in constraints and objective function

[Edited]
Recently I have been working with the Nelson Siegel Svensson (NSS) yield curve model, but I ran into a problem: finding the best-fit parameters of the model.
I use a simple dataset of periods (the const vector) and yield values (the y_real vector) to calibrate the parameters b0, b1, b2, b3, t1 and t2 of the original function (x[0], x[1], x[2], x[3], x[4] and x[5] respectively in the model objective function), minimizing the difference between the observed yield values and the yield values estimated by the NSS model, so that I can finally interpolate and extrapolate yield values for specific periods.
Note: the constraint function is defined with the premise fun(x) == 0 (type 'eq') to drive the difference between y_real and the model output (in vector form) towards zero. For example:
If y_real = [1.1, 1.4, 1.3] and the model function returns [0.2, 1.7, 3.3], then the difference is [0.9, -0.3, -2.0], and it is necessary to keep iterating until the difference vector is approximately zero, e.g. [1.0e-22, 1.0e-21, 1.0e-25].
I developed a trial solution with scipy's least_squares (Calibrate parameters of Yield Curve Nelson Siegel Svensson), but it is a very simple approach and I need more accuracy; for this, some people recommended the SLSQP method of scipy.optimize.minimize.
This is my code for searching for the calibrated parameters, which are then used in an NSS yield curve:
from numpy import array, append
from scipy.optimize import minimize
from math import exp as EXP
const = [30,90,180,270,365,730,1095,1460,1825,2190,2555,2920,3285,3650,4015,4380,4745,5110,5475,5840,6205,6570,6935,7300]
y_real = [3.11826,3.71463,3.74677,3.83900,4.00049,4.40666,4.52346,4.64026,4.75706,4.87386,4.99066,5.10746,5.22426,
5.34106,5.44522,5.54669,5.64816,5.74963,5.85110,5.88607,5.91162,5.93717,5.96272,5.98827]
def model(x, const, y_real):
    arr = array([])
    for val in const:
        arr = append(arr, (x[0]) + (x[1]*((1-EXP(-val/x[4]))/(val/x[4]))) + (x[2]*((((1-EXP(-val/x[4]))/(val/x[4])))-(EXP(-val/x[4])))) + x[3]*((((1-EXP(-val/x[5]))/(val/x[5])))-(EXP(-val/x[5]))))
    return array(y_real) - arr

def fun(x, const, y_real):
    eval = model(x, const, y_real)
    leval = array([0 if val < 1.0e-20 else 1 for val in eval])
    return leval
con = {'type': 'eq', 'fun': fun, 'args' : (const, y_real)}
x0 = array([0.001, 0.001, 0.001, 0.001, 1.0e-10, 1.0e-10])
#bounds_ = [(0.001,8),(0.001,8),(0.001,8),(0.001,8),(1.0e-15,3),(1.0e-15,3)] bounds=bounds_
res = minimize(model, x0, method='SLSQP', constraints=[con] , args=(const, y_real))
print(res)
but I get the following error:
in _minimize_slsqp
w = zeros(len_w)
ValueError: negative dimensions are not allowed
How can I reach a solution with the SLSQP method (or another, better option) in a comprehensible way, without this error?
Thanks in advance.
First of all, your problem is probably that your model doesn't return a scalar value.
It's highly recommended to make yourself familiar with numpy. For instance, given const and y_real are numpy arrays, you don't need loops and append to implement your model function. It can be written as follows (note that it returns the sum of the squared differences):
import numpy as np

const = np.asarray(const)      # your values from the question
y_real = np.asarray(y_real)    # your values from the question

def model(x, const, y_real):
    arr = (
        x[0] + (x[1] * ((1 - np.exp(-const / x[4])) / (const / x[4])))
        + (x[2] * ((((1 - np.exp(-const / x[4])) / (const / x[4])))
                   - (np.exp(-const / x[4]))))
        + x[3] * ((((1 - np.exp(-const / x[5])) / (const / x[5])))
                  - (np.exp(-const / x[5])))
    )
    return np.sum((y_real - arr)**2)
In addition, there are a few more things that do not make sense and it's still not 100% clear to me what you are trying to achieve:
Subtracting a vector of zeros from the evaluated model inside fun
Subtracting a vector of zeros from leval inside fun
Adding the constraint fun(x) >= 0. What's the purpose of this constraint? The function returns a vector consisting of zeros or ones, so this constraint is always fulfilled.
By ignoring the constraint fun (which, by the way, is not differentiable and contradicts the mathematical assumptions of the SLSQP algorithm), you can write:
from scipy.optimize import minimize
x0 = np.array([0.001, 0.001, 0.001, 0.001, 1.0e-10, 1.0e-10])
res = minimize(lambda x: model(x, const, y_real), x0, method="SLSQP")
Team
For this case I worked out part of a functional solution, using the perspective and some good tips from #joni's previous answer (very kind for that). It is only partial because it works approximately for the first third of the dataset; for the rest of the dataset you can use the code from the previous case (Calibrate parameters of Yield Curve Nelson Siegel Svensson).
I share it so you can reuse it and get an idea of one possible two-part solution; please comment if you have a more sophisticated one.
from scipy.optimize import minimize
import numpy as np
from math import trunc
from nelson_siegel_svensson import NelsonSiegelSvenssonCurve
import pandas as pd
import matplotlib.pyplot as plt
# days
const = np.array([30,90,270,548,913,1278,1643,2008,2373,2738,3103,3468,3833,4198,4563,4928,5293,5658,6023,6388,6935,7300,7665,8030])/365
# empty values represented with 0
y_real = np.array([3.33156,3.44928,3.62778,3.74313,3.96015,4.384,4.4705,4.55701,4.63817,4.69949,4.76081,4.82213,4.87285,4.8681,4.86336,4.85861,4.85387,4.84912,4.87039,4.89833,4.94286,4.98739,5.03192,5.07645])
def model(x, const, y_real):
    arr = np.array([])
    for val in const:
        arr = np.append(arr, (x[0]) + (x[1]*((1-np.exp(-val/x[4]))/(val/x[4]))) + (x[2]*((((1-np.exp(-val/x[4]))/(val/x[4])))-(np.exp(-val/x[4])))) + x[3]*((((1-np.exp(-val/x[5]))/(val/x[5])))-(np.exp(-val/x[5]))))
    return np.sum((y_real - arr)**2)
# initial coefficients of Nelson Siegel Svensson
x0 = np.array([0.001, 0.001, 0.001, 0.001, 1.0e-10, 1.0e-10])
res = minimize(model, x0, method='SLSQP', args=(const, y_real),
options={'maxiter': 10000, 'ftol': 1e-190})
print(res.x)
X_fix = np.linspace(start=const[0], stop=const[-1], num=(const.size*20))
NSS = NelsonSiegelSvenssonCurve(beta0=res.x[0], beta1=res.x[1], beta2=res.x[2], beta3=res.x[3], tau1=res.x[4], tau2=res.x[5])
pd_interpolation = pd.DataFrame(columns=['Period','Value'])
font = {'family': 'serif',
'color': '#1F618D',
'weight': 'bold',
'size': 14,
}
font_x_y = {'family': 'serif',
'color': '#C70039',
'weight': 'bold',
'size': 13,
'style': 'oblique'
}
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))
minx = -const[1]*3
config_manager = plt.get_current_fig_manager()
config_manager.set_window_title("Visualización " )
screen_x, screen_y = config_manager.window.wm_maxsize()
anchura = str(trunc(screen_x/12))
altura = str(trunc(screen_y/8))
middle_window = "+" + anchura + "+" + altura
config_manager.window.wm_geometry(middle_window)
plt.title('Interpolación ', fontdict=font, loc='center')
plt.xlabel('Periodo', fontdict=font_x_y)
plt.ylabel('Aproximación', fontdict=font_x_y, rotation=90, labelpad=10)
ax.set_ylim(ymin=0, ymax=(np.amax(y_real)*1.1))
ax.set_xlim(xmin=minx, xmax=(np.amax(const)*1.03))
ax.plot(const, y_real, 'ro', label='Dato real')
ax.plot(X_fix, NSS(X_fix),'--', label='Dato interpolado')
ax.legend(loc='lower right', frameon=False)
plt.show()
And the resulting graph looks like this:

Fit the gamma distribution only to a subset of the samples

I have the histogram of my input data (in black) given in the following graph:
I'm trying to fit the Gamma distribution, but not to the whole data, just to the first peak of the histogram (the first mode). The green plot in the previous graph corresponds to fitting the Gamma distribution to all of the samples, using the following Python code which makes use of scipy.stats.gamma:
import matplotlib.pyplot as plt
import numpy as np
from geotiff.io import IO
from scipy.stats import gamma

img = IO.read(input_file)
data = img.flatten() + abs(np.min(img)) + 1
# calculate dB positive image
img_db = 10 * np.log10(img)
img_db_pos = img_db + abs(np.min(img_db))
data = img_db_pos.flatten() + 1
# data histogram
n, bins, patches = plt.hist(data, 1000, density=True)  # 'density' replaces the removed 'normed' argument
# slice histogram here
# estimation of the parameters of the gamma distribution
fit_alpha, fit_loc, fit_beta = gamma.fit(data, floc=0)
x = np.linspace(0, 100)
y = gamma.pdf(x, fit_alpha, fit_loc, fit_beta)
print('(alpha, beta): (%f, %f)' % (fit_alpha, fit_beta))
# plot estimated model
plt.plot(x, y, linewidth=2, color='g')
plt.show()
How can I restrict the fitting only to the interesting subset of this data?
Update1 (slicing):
I sliced the input data by keeping only values below the max of the previous histogram, but the results were not really convincing:
This was achieved by inserting the following code below the # slice histogram here comment in the previous code:
max_data = bins[np.argmax(n)]
data = data[data < max_data]
Update2 (scipy.optimize.minimize):
The code below shows how scipy.optimize.minimize() is used to minimize an energy function to find (alpha, beta):
import matplotlib.pyplot as plt
import numpy as np
from geotiff.io import IO
from scipy.stats import gamma
from scipy.optimize import minimize
def truncated_gamma(x, max_data, alpha, beta):
    gammapdf = gamma.pdf(x, alpha, loc=0, scale=beta)
    norm = gamma.cdf(max_data, alpha, loc=0, scale=beta)
    return np.where(x < max_data, gammapdf / norm, 0)
# read image
img = IO.read(input_file)
# calculate dB positive image
img_db = 10 * np.log10(img)
img_db_pos = img_db + abs(np.min(img_db))
data = img_db_pos.flatten() + 1
# data histogram
n, bins = np.histogram(data, 100, density=True)  # 'density' replaces the removed 'normed' argument
# using minimize on a slice data below max of histogram
max_data = bins[np.argmax(n)]
data = data[data < max_data]
data = np.random.choice(data, 1000)
energy = lambda p: -np.sum(np.log(truncated_gamma(data, max_data, *p)))
initial_guess = [np.mean(data), 2.]
o = minimize(energy, initial_guess, method='SLSQP')
fit_alpha, fit_beta = o.x
# plot data histogram and model
x = np.linspace(0, 100)
y = gamma.pdf(x, fit_alpha, 0, fit_beta)
plt.hist(data, 30, density=True)
plt.plot(x, y, linewidth=2, color='g')
plt.show()
The algorithm above converged for a subset of data, and the output in o was:
x: array([ 16.66912781, 6.88105559])
But as can be seen on the screenshot below, the gamma plot doesn't fit the histogram:
You can use a general optimization tool such as scipy.optimize.minimize to fit a truncated version of the desired function, resulting in a nice fit:
First, the modified function:
def truncated_gamma(x, alpha, beta):
    gammapdf = gamma.pdf(x, alpha, loc=0, scale=beta)
    norm = gamma.cdf(max_data, alpha, loc=0, scale=beta)
    return np.where(x < max_data, gammapdf / norm, 0)
This selects values from the gamma distribution where x < max_data, and zero elsewhere. The np.where part is not actually important here, because the data is exclusively to the left of max_data anyway. The key is normalization, because varying alpha and beta will change the area to the left of the truncation point in the original gamma.
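As a sanity check, here is a small sketch (with arbitrary test parameters, not the fitted ones) showing that the truncated density integrates to one over [0, max_data]:
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

max_data = 30.0            # hypothetical truncation point
alpha, beta = 12.0, 2.0    # arbitrary test parameters

def truncated_gamma(x, alpha, beta):
    gammapdf = gamma.pdf(x, alpha, loc=0, scale=beta)
    norm = gamma.cdf(max_data, alpha, loc=0, scale=beta)
    return np.where(x < max_data, gammapdf / norm, 0)

area, _ = quad(truncated_gamma, 0, max_data, args=(alpha, beta))
print(area)                # ~1.0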
The rest is just optimization technicalities.
It's common practise to work with logarithms, so I used what's sometimes called "energy", or the logarithm of the inverse of the probability density.
energy = lambda p: -np.sum(np.log(truncated_gamma(data, *p)))
Minimize:
initial_guess = [np.mean(data), 2.]
o = minimize(energy, initial_guess, method='SLSQP')
fit_alpha, fit_beta = o.x
My output is (alpha, beta): (11.595208, 824.712481). Like the original, it is a maximum likelihood estimate.
If you're not happy with the convergence rate, you may want to
Select a sample from your rather big dataset:
data = np.random.choice(data, 10000)
Try different algorithms using the method keyword argument.
Some optimization routines output a representation of the inverse hessian, which is useful for uncertainty estimation. Enforcement of nonnegativity for the parameters may also be a good idea.
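For instance, a sketch of reading rough standard errors off the BFGS inverse-Hessian approximation (it reuses energy and initial_guess from above; treat it as a quick diagnostic rather than a rigorous error analysis):
import numpy as np
from scipy.optimize import minimize

o = minimize(energy, initial_guess, method='BFGS')   # BFGS returns hess_inv as a dense array
std_errs = np.sqrt(np.diag(o.hess_inv))              # rough standard errors of (alpha, beta)
print(o.x, std_errs)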
A log-scaled plot without truncation shows the entire distribution:
Here's another possible approach using a manually created dataset in excel that more or less matched the plot given.
Raw Data
Outline
Imported data into a Pandas dataframe.
Mask the indices after the max response index.
Create a mirror image of the remaining data.
Append the mirror image while leaving a buffer of empty space.
Fit the desired distribution to the modified data. Below I do a normal fit by the method of moments and adjust the amplitude and width.
Working Script
# Import data to dataframe.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('sample.csv', header=0, index_col=0)
# Mask indices after index at max Y.
mask = df.index.values <= df.Y.argmax()
df = df.loc[mask, :]
scaled_y = 100*df.Y.values
# Create new df with mirror image of Y appended.
sep = 6
app_zeroes = np.append(scaled_y, np.zeros(sep, dtype=float))  # np.float was removed in recent NumPy
mir_y = np.flipud(scaled_y)
new_y = np.append(app_zeroes, mir_y)
# Using Scipy-cookbook to fit a normal by method of moments.
idxs = np.arange(new_y.size) # idxs=[0, 1, 2,...,len(data)]
mid_idxs = idxs.mean() # len(data)/2
# idxs-mid_idxs is [-53.5, -52.5, ..., 52.5, len(data)/2]
scaling_param = np.sqrt(np.abs(np.sum((idxs-mid_idxs)**2*new_y)/np.sum(new_y)))
# adjust amplitude
fmax = new_y.max()*1.2 # adjusted function max to 120% max y.
# adjust width
scaling_param = scaling_param*.7 # adjusted by 70%.
# Fit normal.
fit = lambda t: fmax*np.exp(-(t-mid_idxs)**2/(2*scaling_param**2))
# Plot results.
plt.plot(new_y, '.')
plt.plot(fit(idxs), '--')
plt.show()
Result
See the scipy-cookbook fitting data page for more on fitting a normal using method of moments.

How to run non-linear regression in python

I have the following information (dataframe) in Python:
product baskets scaling_factor
12345 475 95.5
12345 108 57.7
12345 2 1.4
12345 38 21.9
12345 320 88.8
and I want to run the following non-linear regression and estimate the parameters a, b and c.
Equation that i want to fit:
scaling_factor = a - (b*np.exp(c*baskets))
In SAS we usually run the following model (it uses the Gauss-Newton method):
proc nlin data=scaling_factors;
parms a=100 b=100 c=-0.09;
model scaling_factor = a - (b * (exp(c*baskets)));
output out=scaling_equation_parms
parms=a b c;
Is there a similar way to estimate the parameters in Python using non-linear regression? And how can I see the plot in Python?
For problems like these I always use scipy.optimize.minimize with my own least squares function. The optimization algorithms don't handle large differences between the various inputs well, so it is a good idea to scale the parameters in your function so that the parameters exposed to scipy are all on the order of 1 as I've done below.
import numpy as np
import scipy.optimize

baskets = np.array([475, 108, 2, 38, 320])
scaling_factor = np.array([95.5, 57.7, 1.4, 21.9, 88.8])

def lsq(arg):
    a = arg[0] * 100
    b = arg[1] * 100
    c = arg[2] * 0.1
    now = a - (b * np.exp(c * baskets)) - scaling_factor
    return np.sum(now**2)

guesses = [1, 1, -0.9]
res = scipy.optimize.minimize(lsq, guesses)
print(res.message)
# 'Optimization terminated successfully.'
print(res.x)
# [ 0.97336709  0.98685365 -0.07998282]
print([lsq(guesses), lsq(res.x)])
# [7761.0093358076601, 13.055053196410928]
Of course, as with all minimization problems it is important to use good initial guesses since all of the algorithms can get trapped in a local minimum. The optimization method can be changed by using the method keyword; some of the possibilities are
‘Nelder-Mead’
‘Powell’
‘CG’
‘BFGS’
‘Newton-CG’
According to the documentation, the default is BFGS (for unconstrained problems without bounds).
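For instance, a small sketch of switching solvers, reusing lsq and guesses from the snippet above:
import scipy.optimize

res_nm = scipy.optimize.minimize(lsq, guesses, method='Nelder-Mead')
print(res_nm.x)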
Agreeing with Chris Mueller, I'd also use scipy but scipy.optimize.curve_fit.
The code looks like:
###the top two lines are required on my linux machine
import matplotlib
matplotlib.use('Qt4Agg')
import matplotlib.pyplot as plt
from matplotlib.pyplot import cm
import numpy as np
from scipy.optimize import curve_fit #we could import more, but this is what we need
###defining your fitfunction
def func(x, a, b, c):
    return a - b * np.exp(c * x)
###OP's data
baskets = np.array([475, 108, 2, 38, 320])
scaling_factor = np.array([95.5, 57.7, 1.4, 21.9, 88.8])
###let us guess some start values
initialGuess=[100, 100,-.01]
guessedFactors=[func(x,*initialGuess ) for x in baskets]
###making the actual fit
popt,pcov = curve_fit(func, baskets, scaling_factor,initialGuess)
#one may want to
print(popt)
print(pcov)
###preparing data for showing the fit
basketCont=np.linspace(min(baskets),max(baskets),50)
fittedData=[func(x, *popt) for x in basketCont]
###preparing the figure
fig1 = plt.figure(1)
ax=fig1.add_subplot(1,1,1)
###the three sets of data to plot
ax.plot(baskets,scaling_factor,linestyle='',marker='o', color='r',label="data")
ax.plot(baskets,guessedFactors,linestyle='',marker='^', color='b',label="initial guess")
ax.plot(basketCont,fittedData,linestyle='-', color='#900000',label="fit with ({0:0.2g},{1:0.2g},{2:0.2g})".format(*popt))
###beautification
ax.legend(loc=0, title="graphs", fontsize=12)
ax.set_ylabel("factor")
ax.set_xlabel("baskets")
ax.grid()
ax.set_title(r"$\mathrm{curve}_\mathrm{fit}$")  # raw string avoids escape-sequence warnings
###putting the covariance matrix nicely
tab= [['{:.2g}'.format(j) for j in i] for i in pcov]
the_table = plt.table(cellText=tab,
colWidths = [0.2]*3,
loc='upper right', bbox=[0.483, 0.35, 0.5, 0.25] )
plt.text(250,65,'covariance:',size=12)
###putting the plot
plt.show()
###done
Eventually, giving you:

scipy curve fitting negative value

I would like to fit a curve with curve_fit and prevent it from becoming negative. Unfortunately, the code below does not work. Any hints? Thanks a lot!
# Imports
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
xData = [0.0009824379203203417, 0.0011014182912933933, 0.0012433979929054324, 0.0014147106052612918, 0.0016240300315499524, 0.0018834904507916608, 0.002210485320720769, 0.002630660216394964, 0.0031830988618379067, 0.003929751681281367, 0.0049735919716217296, 0.0064961201261998095, 0.008841941282883075, 0.012732395447351627, 0.019894367886486918, 0.0353677651315323, 0.07957747154594767, 0.3183098861837907]
yData = [99.61973156923796, 91.79478510744039, 92.79302188621314, 84.32927272723863, 77.75060981602016, 75.62801782349504, 70.48026800610839, 72.21240551953743, 68.14019252499526, 55.23015406920851, 57.212682880377464, 50.777016257727176, 44.871140881319626, 40.544138806850846, 32.489105158795525, 25.65367127756607, 19.894206907130403, 13.057996247388862]
def func(x, m, c, d):
    '''
    Fitting Function
    I put d as an absolute number to prevent negative values for d?
    '''
    return x**m * c + abs(d)
p0 = [-1, 1, 1]
coeff, _ = curve_fit(func, xData, yData, p0) # Fit curve
m, c, d = coeff[0], coeff[1], coeff[2]
print("d: " + str(d)) # Why is it negative!!
Your model actually works fine as the following plot shows. I used your code and plotted the original data and the data you obtain with the fitted parameters:
As you can see, the data can nicely be reproduced, but you indeed obtain a negative value for d (which need not be a bad thing, depending on the context of the model). If you want to avoid that, I recommend using lmfit, where you can constrain your parameters to certain ranges. The next plot shows the outcome.
As you can see, it also reproduces the data well and you obtain a positive value for d as desired.
namely:
m: -0.35199747
c: 8.48813181
d: 0.05775745
Here is the entire code that reproduces the figures:
# Imports
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
#additional import
from lmfit import minimize, Parameters, Parameter, report_fit
xData = [0.0009824379203203417, 0.0011014182912933933, 0.0012433979929054324, 0.0014147106052612918, 0.0016240300315499524, 0.0018834904507916608, 0.002210485320720769, 0.002630660216394964, 0.0031830988618379067, 0.003929751681281367, 0.0049735919716217296, 0.0064961201261998095, 0.008841941282883075, 0.012732395447351627, 0.019894367886486918, 0.0353677651315323, 0.07957747154594767, 0.3183098861837907]
yData = [99.61973156923796, 91.79478510744039, 92.79302188621314, 84.32927272723863, 77.75060981602016, 75.62801782349504, 70.48026800610839, 72.21240551953743, 68.14019252499526, 55.23015406920851, 57.212682880377464, 50.777016257727176, 44.871140881319626, 40.544138806850846, 32.489105158795525, 25.65367127756607, 19.894206907130403, 13.057996247388862]
def func(x, m, c, d):
    '''
    Fitting Function
    I put d as an absolute number to prevent negative values for d?
    '''
    print(m, c, d)
    return np.power(x, m)*c + d
p0 = [-1, 1, 1]
coeff, _ = curve_fit(func, xData, yData, p0) # Fit curve
m, c, d = coeff[0], coeff[1], coeff[2]
print("d: " + str(d)) # Why is it negative!!
plt.scatter(xData, yData, s=30, marker = "v",label='P')
plt.scatter(xData, func(xData, *coeff), s=30, marker = "v",color="red",label='curvefit')
plt.show()
#####the new approach starts here
def func2(params, x, data):
    m = params['m'].value
    c = params['c'].value
    d = params['d'].value
    model = np.power(x, m)*c + d
    return model - data  # that's what you want to minimize
# create a set of Parameters
params = Parameters()
params.add('m', value= -2) #value is the initial condition
params.add('c', value= 8.)
params.add('d', value= 10.0, min=0) #min=0 prevents that d becomes negative
# do fit, here with leastsq model
result = minimize(func2, params, args=(xData, yData))
# calculate final result
final = yData + result.residual
# write error report
report_fit(params)
try:
    import pylab
    pylab.plot(xData, yData, 'k+')
    pylab.plot(xData, final, 'r')
    pylab.show()
except:
    pass
You could use the scipy.optimize.curve_fit method's bounds option to specify the maximum bound and the minimum bound.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
bounds is a two-element tuple (lower bounds, upper bounds). In your case, you just need to specify the lower bound for d. You could use
bounds=([-np.inf, -np.inf, 0], np.inf)
Note: if you provide a scalar for either element (e.g. as the second element above), it automatically applies as the bound for all three coefficients.
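Put together, a minimal sketch using the question's func, xData and yData, bounding only d from below (the initial guess must lie inside the bounds):
import numpy as np
from scipy.optimize import curve_fit

# func, xData and yData as defined in the question
p0 = [-1, 1, 1]
coeff, _ = curve_fit(func, xData, yData, p0=p0,
                     bounds=([-np.inf, -np.inf, 0], np.inf))
m, c, d = coeff
print("d:", d)    # constrained to be non-negative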
You just need to add one little argument to constrain your parameters. That is:
curve_fit(func, xData, yData, p0, bounds=([m1,c1,d1],[m2,c2,d2]))
where m1,c1,d1 are the lower bounds of the parameters (in your case they should be 0) and
m2,c2,d2 are the upper bounds.
If you want all of m, c and d to be positive, the code should go like the following:
curve_fit(func, xData, yData, p0, bounds=(0, numpy.inf))
where all the parameters have a lower bound of 0 and an upper bound of infinity (no bound).

Fitting lognormal distribution using Scipy vs Matlab

I am trying to fit a lognormal distribution using Scipy. I've already done it using Matlab before but because of the need to extend the application beyond statistical analysis, I am in the process of trying to reproduce the fitted values in Scipy.
Below is the Matlab code I used to fit my data:
% Read input data (one value per line)
x = [];
fid = fopen(file_path, 'r'); % reading is default action for fopen
disp('Reading network degree data...');
if fid == -1
    disp('[ERROR] Unable to open data file.')
else
    while ~feof(fid)
        [x] = [x fscanf(fid, '%f', [1])];
    end
    c = fclose(fid);
    if c == 0
        disp('File closed successfully.');
    else
        disp('[ERROR] There was a problem with closing the file.');
    end
end
[f,xx] = ecdf(x);
y = 1-f;
parmhat = lognfit(x); % MLE estimate
mu = parmhat(1);
sigma = parmhat(2);
And here's the fitted plot:
Now here's my Python code with the aim of achieving the same:
import math
from scipy import stats
from statsmodels.distributions.empirical_distribution import ECDF
# The same input is read as a list in Python
ecdf_func = ECDF(degrees)
x = ecdf_func.x
ccdf = 1-ecdf_func.y
# Fit data
shape, loc, scale = stats.lognorm.fit(degrees, floc=0)
# Parameters
sigma = shape # standard deviation
mu = math.log(scale) # meanlog of the distribution
fit_ccdf = stats.lognorm.sf(x, sigma, loc=0, scale=scale)  # loc fixed at 0, matching the fit above
Here's the fit using the Python code.
As you can see, both sets of code are capable of producing good fits, at least visually speaking.
Problem is that there is a huge difference in the estimated parameters mu and sigma.
From Matlab: mu = 1.62 sigma = 1.29.
From Python: mu = 2.78 sigma = 1.74.
Why is there such a difference?
Note: I have double checked that both sets of data fitted are exactly the same. Same number of points, same distribution.
Your help is much appreciated! Thanks in advance.
Other info:
import scipy
import numpy
import statsmodels
scipy.__version__
'0.9.0'
numpy.__version__
'1.6.1'
statsmodels.__version__
'0.5.0.dev-1bbd4ca'
Version of Matlab is R2011b.
Edit:
As demonstrated in the answer below, the fault lies with Scipy 0.9. I am able to reproduce the mu and sigma results from Matlab using Scipy 0.11.0.
An easy way to update your Scipy is:
pip install --upgrade Scipy
If you don't have pip (you should!):
sudo apt-get install pip
There is a bug in the fit method in scipy 0.9.0 that has been fixed in later versions of scipy.
The output of the script below should be:
Explicit formula: mu = 4.99203450, sig = 0.81691086
Fit log(x) to norm: mu = 4.99203450, sig = 0.81691086
Fit x to lognorm: mu = 4.99203468, sig = 0.81691081
but with scipy 0.9.0, it is
Explicit formula: mu = 4.99203450, sig = 0.81691086
Fit log(x) to norm: mu = 4.99203450, sig = 0.81691086
Fit x to lognorm: mu = 4.23197270, sig = 1.11581240
The following test script shows three ways to get the same results:
import numpy as np
from scipy import stats
def lognfit(x, ddof=0):
    x = np.asarray(x)
    logx = np.log(x)
    mu = logx.mean()
    sig = logx.std(ddof=ddof)
    return mu, sig
# A simple data set for easy reproducibility
x = np.array([50., 50, 100, 200, 200, 300, 500])
# Explicit formula
my_mu, my_sig = lognfit(x)
# Fit a normal distribution to log(x)
norm_mu, norm_sig = stats.norm.fit(np.log(x))
# Fit the lognormal distribution
lognorm_sig, _, lognorm_expmu = stats.lognorm.fit(x, floc=0)
print "Explicit formula: mu = %10.8f, sig = %10.8f" % (my_mu, my_sig)
print "Fit log(x) to norm: mu = %10.8f, sig = %10.8f" % (norm_mu, norm_sig)
print "Fit x to lognorm: mu = %10.8f, sig = %10.8f" % (np.log(lognorm_expmu), lognorm_sig)
With the option ddof=1 in the std. dev. calculation to use the unbiased variance estimation:
In [104]: x
Out[104]: array([ 50., 50., 100., 200., 200., 300., 500.])
In [105]: lognfit(x, ddof=1)
Out[105]: (4.9920345004312647, 0.88236457185021866)
There is a note in matlab's lognfit documentation that says when censoring is not used, lognfit computes sigma using the square root of the unbiased estimator of the variance. This corresponds to using ddof=1 in the above code.
