Gaussian fit returning negative sigma - python
One of my algorithms performs automatic peak detection based on a Gaussian function, and then determines the edges based either on a multiplier (user setting) of the sigma or on the full width at half maximum. If a user specifies that the peak should be limited at 2 sigma, the algorithm takes +/- 2*sigma from the peak center (mu). However, I noticed that the sigma returned by curve_fit can be negative, which has been noticed before, as can be seen here. Because I determine the borders by adding and subtracting, a negative sigma flips them and can make the algorithm 'fail', as can be seen in the following code.
MVCE
#! /usr/bin/env python
from scipy.optimize import curve_fit
import bisect
import numpy as np
X = [16.4697402328,16.4701402404,16.4705402481,16.4709402557,16.4713402633,16.4717402709,16.4721402785,16.4725402862,16.4729402938,16.4733403014,16.473740309,16.4741403166,16.4745403243,16.4749403319,16.4753403395,16.4757403471,16.4761403547,16.4765403623,16.47694037,16.4773403776,16.4777403852,16.4781403928,16.4785404004,16.4789404081,16.4793404157,16.4797404233,16.4801404309,16.4805404385,16.4809404462,16.4813404538,16.4817404614,16.482140469,16.4825404766,16.4829404843,16.4833404919,16.4837404995,16.4841405071,16.4845405147,16.4849405224,16.48534053,16.4857405376,16.4861405452,16.4865405528,16.4869405604,16.4873405681,16.4877405757,16.4881405833,16.4885405909,16.4889405985,16.4893406062,16.4897406138,16.4901406214,16.490540629,16.4909406366,16.4913406443,16.4917406519,16.4921406595,16.4925406671,16.4929406747,16.4933406824,16.49374069,16.4941406976,16.4945407052,16.4949407128,16.4953407205,16.4957407281,16.4961407357,16.4965407433,16.4969407509,16.4973407585,16.4977407662,16.4981407738,16.4985407814,16.498940789,16.4993407966,16.4997408043,16.5001408119,16.5005408195,16.5009408271,16.5013408347,16.5017408424,16.50214085,16.5025408576,16.5029408652,16.5033408728,16.5037408805,16.5041408881,16.5045408957,16.5049409033,16.5053409109,16.5057409186,16.5061409262,16.5065409338,16.5069409414,16.507340949,16.5077409566,16.5081409643,16.5085409719,16.5089409795,16.5093409871,16.5097409947,16.5101410024,16.51054101,16.5109410176,16.5113410252,16.5117410328,16.5121410405,16.5125410481,16.5129410557,16.5133410633,16.5137410709,16.5141410786,16.5145410862,16.5149410938,16.5153411014,16.515741109,16.5161411166,16.5165411243,16.5169411319,16.5173411395,16.5177411471,16.5181411547,16.5185411624,16.51894117,16.5193411776,16.5197411852,16.5201411928,16.5205412005,16.5209412081,16.5213412157,16.5217412233,16.5221412309,16.5225412386,16.5229412462,16.5233412538,16.5237412614,16.524141269,16.5245412767,16.5249412843,16.5253412919,16.5257412995,16.5261413071,16.5265413147,16.5269413224,16.52734133,16.5277413376,16.5281413452,16.5285413528,16.5289413605,16.5293413681,16.5297413757,16.5301413833,16.5305413909,16.5309413986,16.5313414062,16.5317414138,16.5321414214,16.532541429,16.5329414367,16.5333414443,16.5337414519,16.5341414595,16.5345414671,16.5349414748,16.5353414824,16.53574149,16.5361414976,16.5365415052,16.5369415128,16.5373415205,16.5377415281,16.5381415357,16.5385415433,16.5389415509,16.5393415586,16.5397415662,16.5401415738,16.5405415814,16.540941589,16.5413415967,16.5417416043,16.5421416119,16.5425416195,16.5429416271,16.5433416348,16.5437416424,16.54414165,16.5445416576,16.5449416652,16.5453416729,16.5457416805,16.5461416881,16.5465416957,16.5469417033,16.5473417109,16.5477417186,16.5481417262,16.5485417338,16.5489417414,16.549341749,16.5497417567,16.5501417643,16.5505417719,16.5509417795,16.5513417871,16.5517417948,16.5521418024,16.55254181,16.5529418176,16.5533418252,16.5537418329,16.5541418405,16.5545418481,16.5549418557,16.5553418633,16.5557418709,16.5561418786,16.5565418862,16.5569418938,16.5573419014,16.557741909,16.5581419167,16.5585419243,16.5589419319,16.5593419395,16.5597419471,16.5601419548,16.5605419624,16.56094197,16.5613419776,16.5617419852,16.5621419929,16.5625420005,16.5629420081,16.5633420157,16.5637420233,16.564142031]
Y = [11579127.8554,11671781.7263,11764419.0191,11857026.0444,11949589.1124,12042094.5338,12134528.6188,12226877.6781,12319128.0219,12411265.9609,12503277.8053,12595149.8657,12686868.4525,12778419.8762,12869790.334,12960965.209,13051929.5278,13142668.3154,13233166.5969,13323409.3973,13413381.7417,13503068.6552,13592455.1627,13681526.2894,13770267.0602,13858662.5004,13946697.6348,14034357.4886,14121627.0868,14208491.4544,14294935.6166,14380944.5984,14466503.4248,14551597.1208,14636210.7116,14720329.3102,14803938.4081,14887023.5981,14969570.4732,15051564.6263,15132991.6503,15213837.1383,15294086.683,15373725.8775,15452740.3147,15531115.5875,15608837.2888,15685891.0116,15762262.3488,15837936.8934,15912900.2382,15987137.9762,16060635.7004,16133379.0036,16205353.4789,16276544.72,16346938.7731,16416522.8674,16485284.4226,16553210.8587,16620289.5956,16686508.0531,16751853.6511,16816313.8096,16879875.9485,16942527.4876,17004255.8468,17065048.446,17124892.7052,17183776.0442,17241685.8829,17298609.6412,17354534.739,17409448.5962,17463338.6327,17516192.2683,17567996.9463,17618741.7702,17668418.588,17717019.5043,17764536.6238,17810962.0514,17856287.8916,17900506.2493,17943609.2292,17985588.936,18026437.4744,18066146.9493,18104709.4653,18142117.1271,18178362.0396,18213436.3074,18247332.0352,18280041.3279,18311556.2901,18341869.0265,18370971.642,18398856.332,18425517.6188,18450952.493,18475158.064,18498131.4412,18519869.7341,18540370.0523,18559629.505,18577645.202,18594414.2525,18609933.7661,18624200.8523,18637212.6205,18648966.1802,18659458.6408,18668687.1119,18676648.7029,18683340.5233,18688759.6825,18692903.29,18695768.4553,18697352.5327,18697655.9558,18696681.2608,18694431.0245,18690907.8241,18686114.2363,18680052.838,18672726.2063,18664136.918,18654287.5501,18643180.6795,18630818.883,18617204.7377,18602340.8204,18586229.7081,18568873.9777,18550276.2061,18530438.9703,18509364.8471,18487056.4135,18463516.2464,18438747.4526,18412756.9228,18385553.1936,18357144.808,18327540.3094,18296748.2409,18264777.1456,18231635.5669,18197332.0479,18161875.1318,18125273.3619,18087535.2812,18048669.4331,18008684.3606,17967588.6071,17925390.7158,17882099.2297,17837722.6922,17792269.6464,17745748.6355,17698168.2027,17649537.512,17599868.3744,17549173.3069,17497464.8262,17444755.4492,17391057.6927,17336384.0736,17280747.1087,17224159.3148,17166633.2088,17108181.3075,17048816.1277,16988550.1864,16927396.0002,16865366.0862,16802472.961,16738729.1416,16674147.1447,16608739.4873,16542518.6861,16475497.2591,16407688.2541,16339106.0951,16269765.4262,16199680.8916,16128867.1358,16057338.8029,15985110.5372,15912196.9829,15838612.7844,15764372.5859,15689491.0316,15613982.7659,15537862.4329,15461144.6771,15383844.1425,15305975.4735,15227553.3143,15148592.3093,15069107.1026,14989112.3386,14908622.6595,14827652.5673,14746216.3337,14664328.209,14582002.4435,14499253.2874,14416094.9911,14332541.8049,14248607.9791,14164307.764,14079655.4098,13994665.1668,13909351.2855,13823728.016,13737809.6086,13651610.3137,13565144.3816,13478426.0625,13391469.6068,13304289.2646,13216899.2865,13129313.8865,13041546.3657,12953609.0623,12865514.2686,12777274.277,12688901.3798,12600407.8693,12511806.0378,12423108.1777,12334326.5812,12245473.5407,12156561.3486,12067602.297,11978608.6785,11889592.7852]
def gaussFunction(x, *p):
    """Define and return a Gaussian function.

    This function returns the value of a Gaussian function, using the
    A, mu and sigma value that is provided as *p.

    Keyword arguments:
    x -- number
    p -- A, mu and sigma numbers
    """
    A, mu, sigma = p
    return A*np.exp(-(x-mu)**2/(2.*sigma**2))
newGaussX = np.linspace(10, 25, int(2500*(X[-1]-X[0])))
p0 = [np.max(Y), X[np.argmax(Y)],0.1]
coeff, var_matrix = curve_fit(gaussFunction, X, Y, p0)
newGaussY = gaussFunction(newGaussX, *coeff)
print "Sigma is "+str(coeff[2])
# Original
low = bisect.bisect_left(newGaussX,coeff[1]-2*coeff[2])
high = bisect.bisect_right(newGaussX,coeff[1]+2*coeff[2])
print(newGaussX[low], newGaussX[high])
# Absolute
low = bisect.bisect_left(newGaussX,coeff[1]-2*abs(coeff[2]))
high = bisect.bisect_right(newGaussX,coeff[1]+2*abs(coeff[2]))
print(newGaussX[low], newGaussX[high])
Bottom line: is taking the abs() of the sigma 'correct', or should this problem be solved in a different way?
You are fitting a function gaussFunction that does not care whether sigma is positive or negative. So whether you get a positive or negative result is mostly a matter of luck, and taking the absolute value of the returned sigma is fine. Also consider other possibilities:
(Suggested by Thomas Kühn): modify the model function so that it cares about the sign of sigma. Bringing it closer to the normalized Gaussian form would be enough: the formula A/np.sqrt(sigma)*np.exp(-(x-mu)**2/(2.*sigma**2)) would ensure that you get positive sigma only. A possible, mild downside is that the function takes a bit longer to compute.
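A minimal sketch of that first suggestion, reusing the question's X, Y and imports (the function name and the rescaled amplitude guess are only illustrative, not from the original post):

def gaussNormed(x, *p):
    # with a negative sigma, np.sqrt(sigma) is NaN, which keeps the fit away from negative values
    A, mu, sigma = p
    return A/np.sqrt(sigma)*np.exp(-(x-mu)**2/(2.*sigma**2))

# peak height is now A/sqrt(sigma), so rescale the amplitude guess by sqrt(sigma guess)
p0 = [np.max(Y)*np.sqrt(0.1), X[np.argmax(Y)], 0.1]
coeff, var_matrix = curve_fit(gaussNormed, X, Y, p0)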
Use the variance, sigma_squared, as a parameter:
A, mu, sigma_squared = p
return A*np.exp(-(x-mu)**2/(2.*sigma_squared))
This is probably easiest in terms of keeping the model equation simple. You will need to square your initial guess for that parameter, and take square root when you need sigma itself.
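A minimal sketch of that variance-based workflow with the question's data (the name gaussVar is just illustrative):

def gaussVar(x, *p):
    A, mu, sigma_squared = p
    return A*np.exp(-(x-mu)**2/(2.*sigma_squared))

p0 = [np.max(Y), X[np.argmax(Y)], 0.1**2]       # square the original sigma guess
coeff, var_matrix = curve_fit(gaussVar, X, Y, p0)
sigma = np.sqrt(coeff[2])                       # sigma itself, always positive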
Aside: you hardcoded 0.1 as a guess for standard deviation. This probably should be based on data, like this:
peak = X[Y > np.exp(-0.5)*Y.max()]
guess_sigma = 0.5*(peak.max() - peak.min())
The idea is that within one standard deviation of the mean, the values of the Gaussian are greater than np.exp(-0.5) times the maximum value. So the first line locates this "peak" and the second takes half of its width as the guess for sigma.
For the above to work, X and Y should already be NumPy arrays, e.g., X = np.array([16.4697402328,16.4701402404,..... This is a good idea in general: otherwise every NumPy function that receives X or Y has to repeat that conversion.
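Putting the conversion and the data-driven guess together (a sketch; np.asarray converts lists once and leaves existing arrays untouched):

X = np.asarray(X)
Y = np.asarray(Y)
peak = X[Y > np.exp(-0.5)*Y.max()]
p0 = [Y.max(), X[np.argmax(Y)], 0.5*(peak.max() - peak.min())]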
You might find lmfit (http://lmfit.github.io/lmfit-py/) useful for this. It includes a GaussianModel for curve fitting that does normalize the Gaussian and also restricts sigma to be positive, using a parameter transformation that is gentler than abs(sigma). Your example would look like this:
import numpy as np
from lmfit.models import GaussianModel
xdat = np.array(X)
ydat = np.array(Y)
model = GaussianModel()
params = model.guess(ydat, x=xdat)
result = model.fit(ydat, params, x=xdat)
print(result.fit_report())
which will print a report with best-fit values and estimated uncertainties for all the parameters, and include FWHM.
[[Model]]
    Model(gaussian)
[[Fit Statistics]]
    # function evals   = 31
    # data points      = 237
    # variables        = 3
    chi-square         = 95927408861.607
    reduced chi-square = 409946191.716
    Akaike info crit   = 4703.055
    Bayesian info crit = 4713.459
[[Variables]]
    sigma:       0.04880178 +/- 1.57e-05 (0.03%) (init= 0.0314006)
    center:      16.5174203 +/- 8.01e-06 (0.00%) (init= 16.51754)
    amplitude:   2.2859e+06 +/- 586.4103 (0.03%) (init= 670578.1)
    fwhm:        0.11491942 +/- 3.51e-05 (0.03%)  == '2.3548200*sigma'
    height:      1.8687e+07 +/- 910.0152 (0.00%)  == '0.3989423*amplitude/max(1.e-15, sigma)'
[[Correlations]] (unreported correlations are < 0.100)
    C(sigma, amplitude)          =  0.949
The values for center +/- 2*sigma would be found with
xlo = result.params['center'].value - 2 * result.params['sigma'].value
xhi = result.params['center'].value + 2 * result.params['sigma'].value
You can use the result to evaluate the model with fitted parameters and different X values:
newGaussX = np.linspace(10, 25, int(2500*(X[-1]-X[0])))
newGaussY = result.eval(x=newGaussX)
I would also recommend using numpy.where to find the location of center+/-2*sigma instead of bisect:
low = np.where(newGaussX > xlo)[0][0] # replace bisect_left
high = np.where(newGaussX <= xhi)[0][-1] + 1 # replace bisect_right
I ran into the same problem and came up with a trivial but effective solution: use the variance in the Gaussian function definition instead of the standard deviation, since the variance is always positive. You then get the std_dev by taking the square root of the fitted variance, so the std_dev will always be positive. Problem solved easily ;)
I mean, create the function this way:
def gaussian(x, Heigh, Mean, Variance):
    return Heigh * np.exp(- (x-Mean)**2 / (2 * Variance))
Instead of:
def gaussian(x, Heigh, Mean, Std_dev):
    return Heigh * np.exp(- (x-Mean)**2 / (2 * Std_dev**2))
And then do the fit as usual.
Related
Super Gaussian fit
I have to study the laser beam profile. To this aim, I need to fit a Super Gaussian curve to my data. The Super Gaussian equation is I * exp(-2 * ((x - x0) / sigma)^P), where P takes into account the flat-top characteristics of the laser beam.

I started with a simple Gaussian fit of my curve in Python (using curve_fit). The fit returns a Gaussian curve where the values of I, x0 and sigma are optimized. The Gaussian curve equation is I * exp(-(x - x0)^2 / (2 * sigma^2)).

Now I would like to go a step further and do the Super Gaussian curve fit, because I need to consider the flat-top characteristics of the beam; that means the fit also has to optimize the P parameter. Does someone know how to do a Super Gaussian curve fit with Python? I know that a Super Gaussian fit can be done with Wolfram Mathematica, which is not open source and which I do not have. So I would also like to know about open-source software that can do a Super Gaussian curve fit, or that can run Wolfram Mathematica. Thank you
Well, you would need to write a function that calculates a parameterized super-Gaussian and use that to model data, say with scipy.optimize.curve_fit. As a lead author of LMFIT (https://lmfit.github.io/lmfit-py/), which provides a high-level interface to fitting and curve-fitting, I would recommend trying that library. With that approach, your model function for a super-Gaussian and using it to fit data might look like this:

import numpy as np
from lmfit import Model

def super_gaussian(x, amplitude=1.0, center=0.0, sigma=1.0, expon=2.0):
    """super-Gaussian distribution
    super_gaussian(x, amplitude, center, sigma, expon) =
        (amplitude/(sqrt(2*pi)*sigma)) * exp(-abs(x-center)**expon / (2*sigma**expon))
    """
    sigma = max(1.e-15, sigma)
    return ((amplitude/(np.sqrt(2*np.pi)*sigma)) * np.exp(-abs(x-center)**expon / 2*sigma**expon))

# generate some test data
x = np.linspace(0, 10, 101)
y = super_gaussian(x, amplitude=7.1, center=4.5, sigma=2.5, expon=1.5)
y += np.random.normal(size=len(x), scale=0.015)

# make Model from the super_gaussian function
model = Model(super_gaussian)

# build a set of Parameters to be adjusted in fit, named from the arguments
# of the model function (super_gaussian), and providing initial values
params = model.make_params(amplitude=1, center=5, sigma=2., expon=2)

# you can place min/max bounds on parameters
params['amplitude'].min = 0
params['sigma'].min = 0
params['expon'].min = 0
params['expon'].max = 100

# note: if you wanted to make this strictly Gaussian, you could set
# expon=2 and prevent it from varying in the fit:
### params['expon'].value = 2.0
### params['expon'].vary = False

# now do the fit
result = model.fit(y, params, x=x)

# print out the fit statistics, best-fit parameter values and uncertainties
print(result.fit_report())

# plot results
import matplotlib.pyplot as plt
plt.plot(x, y, label='data')
plt.plot(x, result.best_fit, label='fit')
plt.legend()
plt.show()

This will print a report like

[[Model]]
    Model(super_gaussian)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 53
    # data points      = 101
    # variables        = 4
    chi-square         = 0.02110713
    reduced chi-square = 2.1760e-04
    Akaike info crit   = -847.799755
    Bayesian info crit = -837.339273
[[Variables]]
    amplitude:  6.96892162 +/- 0.09939812 (1.43%) (init = 1)
    center:     4.50181661 +/- 0.00217719 (0.05%) (init = 5)
    sigma:      2.48339218 +/- 0.02134446 (0.86%) (init = 2)
    expon:      3.25148164 +/- 0.08379431 (2.58%) (init = 2)
[[Correlations]] (unreported correlations are < 0.100)
    C(amplitude, sigma) =  0.939
    C(sigma, expon)     = -0.774
    C(amplitude, expon) = -0.745

and generate a plot of the data and best fit.
This is the function for the super-Gaussian:

def super_gaussian(x, amp, x0, sigma):
    rank = 2
    return amp * ((np.exp(-(2 ** (2 * rank - 1)) * np.log(2) * (((x - x0) ** 2) / ((sigma) ** 2)) ** (rank))) ** 2)

And then you need to call it with scipy.optimize.curve_fit like this:

from scipy import optimize
opt, _ = optimize.curve_fit(super_gaussian, x, y)
vals = super_gaussian(x, *opt)

'vals' is what you need to plot; that is the fitted super-Gaussian function. (The original post showed plots of the result for rank=1, rank=2 and rank=3.)
The answer of M Newville works perfectly for me, but be careful! Parentheses have been forgotten in the quotient of the exponential in the definition of the super_gaussian function:

def super_gaussian(x, amplitude=1.0, center=0.0, sigma=1.0, expon=2.0):
    ...
    return ((amplitude/(np.sqrt(2*np.pi)*sigma)) * np.exp(-abs(x-center)**expon / 2*sigma**expon))

should be replaced by

def super_gaussian(x, amplitude=1.0, center=0.0, sigma=1.0, expon=2.0):
    ...
    return ((amplitude/(np.sqrt(2*np.pi)*sigma)) * np.exp(-abs(x-center)**expon / (2*sigma**expon)))

Then the FWHM of the super-Gaussian function, which is

FWHM = 2.*sigma*(2.*np.log(2.))**(1/expon)

is calculated correctly and is in excellent agreement with the plot. I am sorry to write this as an answer, but my reputation score is too low to add a comment to M Newville's post...
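As a small illustration (not from the original answers), the FWHM could be evaluated directly from the lmfit result of the earlier super-Gaussian fit, assuming the corrected super_gaussian was used:

sigma_fit = result.params['sigma'].value
expon_fit = result.params['expon'].value
fwhm = 2.*sigma_fit*(2.*np.log(2.))**(1./expon_fit)   # FWHM of the fitted super-Gaussian
print("FWHM =", fwhm)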
Fitting of y(x) = a*exp(-b*(x-c)**p) to data for the parameters a, b, c, p. The numerical example below shows a non-iterative method which doesn't require an initial guess of the parameters. It is an application of the general principle explained in the paper https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales. In the present version of the paper the case of the super-Gaussian isn't explicitly treated, but it is not necessary to read the paper since the screen copy below shows the calculation in full detail. Note that the numerical results a, b, c, p can be used as initial values for classical iterative methods of regression.

Note: in the linear equation considered (shown in the screen copy), A, B, C, D are the parameters to be computed by linear regression. The numerical values S(k) of the integral are computed directly by numerical integration from the given data (as shown in the above example).
Why is there noise for higher bandwidth intensity distributions?
I have been trying to computationally evaluate the following E-field: it is basically the sum of waves of a given wavelength, weighted by the square root of a Gaussian distribution over the wavelengths. I compute it in Python by performing a Gauss quadrature integral for each value of $x$ with scipy.integrate.quad. The code is stated below:

# Imports
import numpy as np
import scipy as sp
from scipy import integrate

# Parameters
mu = 0.635     # mean wavelength
sigma = 0.01   # std dev of wavelength distribution
# wl is wavelength
x_ara = np.arange(0, 1.4, 0.01)

# Limits of integration
lower, upper = mu - 4*sigma, mu + 4*sigma
if lower < 0:
    print('lower limit met')
    lower = 1e-15  # cannot evaluate sigma = 0 due to singularity of the Gaussian function

# Functions
def Iprofile_func(wl, mu, sigma):
    profile = np.exp(-( ((wl-mu) / (np.sqrt(2)*sigma))**2))
    return profile

def E_func(x_ara, wl, mu, sigma):
    return np.sqrt(Iprofile_func(wl, mu, sigma)) * np.cos(2*np.pi/wl * (x_ara))

# Computation
field_ara = np.array([])
for x in x_ara:
    def E(wl):
        return E_func(x, wl, mu, sigma)
    field = sp.integrate.quad(E, lower, upper)[0]
    field_ara = np.append(field_ara, field)

I fixed the value of $\mu$ = 0.635 and performed the same computation for two values of $\sigma$, $\sigma$ = 0.01 and $\sigma$ = 0.2. I plotted the resulting arrays: the upper plot is the wavelength distribution, while the lower plot is the computed field array. Why does noise appear in the computed field when the value of sigma increases?
For large x, even small changes in lambda already lead to quickly fluctuating integrands. At some point the numerical integration routine will either take very, very long to converge or will not use sufficiently many integration points, so that the contributions from the individual points do not cancel out completely and show exactly the noise that you see. When I run the code I actually get a warning from scipy about reaching a limit ("IntegrationWarning: The maximum number of subdivisions (50) has been achieved.").

The good thing: you know that for sufficiently large x the integral must go to zero, so there is no need to compute it outside a reasonable range.

Example: x = 10, mu = 0.635, sigma = 0.01. The integration bounds are mu +/- 4*sigma = [0.595, 0.675], and 2*pi/0.595*10 = 105.6 while 2*pi/0.675*10 = 93.08. That means roughly two oscillations of the integrand over the wavelength range at x = 10. With x = 100 and everything else the same, that becomes 20 oscillations over the wavelength range.

Example: x = 10, mu = 0.635, sigma = 0.1. The integration bounds are mu +/- 4*sigma = [0.235, 1.035], and 2*pi/0.235*10 = 267.37 while 2*pi/1.035*10 = 60.71. That means 33 oscillations of the integrand over the wavelength range already at x = 10, and 329 oscillations with x = 100.

More and more integration points would be needed as x or sigma gets large. Therefore there is no alternative to increasing the limit (the maximum number of subdivisions) in scipy.integrate for larger x.
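A minimal sketch of that last point (the limit value here is only illustrative): scipy.integrate.quad accepts a limit argument that raises the maximum number of subdivisions, at the cost of longer run times:

# allow more subdivisions for the strongly oscillating integrand (default is limit=50)
field = sp.integrate.quad(E, lower, upper, limit=500)[0]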
How to obtain perfect fit from np.random.power function
I have generated random data using:

bkg = 240 - 140*np.random.power(3.5, 50000)

I plotted the points into a histogram by using

h_all = plt.hist(all, bins=binedges, histtype='step')

My question is: provided that I know the pdf (in this case called "bkg"), can I generate a curve using scipy.optimize that fits the generated points perfectly, and what is the equation for that curve?
First of all, note that your bkg is NOT a probability density function (pdf). Rather, it is a list of observations from a pdf. By calling matplotlib.pyplot.hist on this list of observations, you get to see a curve that approximates the (offset and scaled version of the) probability density function. If you are given this curve, it is possible to get a good estimate of the parameters needed to model it, provided you've been given the parameterized model a priori. For example:

import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit

offset, scale, a, nsamples = 240, -140, 3.5, 500000
bkg = offset + scale*np.random.power(a, nsamples)  # values range between (offset, offset+scale), which map to 0 and 1
nbins = 100
count, bins, ignored = plt.hist(bkg, bins=nbins, histtype='stepfilled', edgecolor='none')

If you are now given the centers of these bins and the counts,

xdata = .5*(bins[1:] + bins[:-1])
ydata = count

and you are asked to find the parameters of the power distribution function that fits this data (someone told you this; you trust that source), then you could go about it in the following manner.

First, observe that the power distribution function P(x, a) is a monotonically increasing function (i.e. P(x1, a) < P(x2, a) when 0 <= x1 < x2 <= 1). That means that the dataset given above has been flipped left-to-right, or that it represents factor*P(x, a) with factor < 0.

Next, notice that the given data are not given over the interval [0, 1], which is typical for a probability density function. That means that you should rescale the given xdata to the [0, 1] interval before attempting to fit the power distribution function to it. Just by observing the graph, you figure out that the values that 0 and 1 map to are 100 and 240. However, this is just luck here, because matplotlib chose a sensible range for plotting. When you do not actually know the limits to which 0 and 1 have been mapped, you could choose the less optimal (but still very good) choice of xdata[0] - binwidth/2 and xdata[-1] + binwidth/2, or (a slightly worse choice) xdata[0] and xdata[-1].

From the previous paragraph, you know that 1 maps to xdata[0] - binwidth/2 :=: a and 0 maps to xdata[-1] + binwidth/2 :=: b. The linear map that does this is lambda x: (a - b)*x + b (simple algebra). If you pass this [0, 1]-mapped version of the xdata to curve_fit, it will give you a good guess for the exponent.
def get_model(nobservations, binwidth, scale, offset):
    def model(bin_centers, exponent):
        x = (bin_centers - offset)/scale
        y = exponent*x**(exponent - 1)
        normed_y = nobservations * binwidth * y / np.abs(scale)
        return normed_y
    return model

binwidth = np.diff(xdata)[0]
p0, _ = curve_fit(get_model(nsamples, binwidth, scale=-xdata.ptp() - binwidth,
                            offset=xdata[-1] + binwidth/2), xdata, ydata)
print(p0)  # prints e.g.: 3.37117679
plt.plot(xdata, get_model(nsamples, binwidth, scale=-xdata.ptp() - binwidth,
                          offset=xdata[-1] + binwidth/2)(xdata, *p0))

At this moment, you have found a rather accurate description of the distribution that was used to generate the observations in bkg:

f(x) = offset + scale*(exponent * x**(exponent - 1))
     = (xdata[-1] + binwidth/2) + (-xdata.ptp() - binwidth)*(p0[0] * x**(p0[0] - 1))
     ~ 234.85 - 134.85*(3.37 * x**(3.37 - 1))

By the way, I'd like to point out that replicating bkg (the observations from the distribution) as a perfect copy is something you can only do if you know the exact parameters of the distribution (240, -140 and 3.5) AND set the seed for the random number generator to the seed that was in effect before the initial call to np.random.power.

If you'd like to fit a curve to the histogram using splines, you should retrieve the knots and coefficients from the generated spline and pass those into the function bspleval, as shown here. The topic of writing out those equations is a long one, however, and there are numerous resources on the internet that you can check to understand how it's done. Needless to say, that function bspleval is what you'll need if you want to go that route. If it were me, I'd go the route of curve fitting shown above.
SciPy + Numpy: Finding the slope of a sigmoid curve
I have some data that follow a sigmoid distribution, as you can see in the following image. After normalizing and scaling my data, I have adjusted the curve at the bottom using scipy.optimize.curve_fit and some initial parameters:

popt, pcov = curve_fit(sigmoid_function, xdata, ydata, p0 = [0.05, 0.05, 0.05])
>>> print popt
[ 2.82019932e+02 -1.90996563e-01 5.00000000e-02]

So popt, according to the documentation, returns "Optimal values for the parameters so that the sum of the squared error of f(xdata, *popt) - ydata is minimized". I understand from this that the slope is not calculated with curve_fit, because I do not think the slope of this gentle curve is 282, nor is it negative.

Then I tried scipy.optimize.leastsq, because the documentation says it returns "The solution (or the result of the last iteration for an unsuccessful call).", so I thought the slope would be returned. Like this:

p, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args = (nxdata, nydata), full_output=True)
>>> print p
Param(x0=281.73193626250207, y0=-0.012731420027056234, c=1.0069006606656596, k=0.18836680131910222)

But again, I did not get what I expected. curve_fit and leastsq returned almost the same values, which is not surprising I guess, as curve_fit uses an implementation of the least squares method internally to find the curve. But no slope back... unless I overlooked something.

So, how do I calculate the slope at a point, say, where X = 285 and Y = 0.5? I am trying to avoid manual methods, like calculating the derivative between, say, (285.5, 0.55) and (284.5, 0.45) by subtracting and dividing the results and so on. I would like to know if there is a more automatic method for this. Thank you all!

EDIT #1

This is my "sigmoid_function", used by the curve_fit and leastsq methods:

def sigmoid_function(xdata, x0, k, p0): # p0 not used anymore, only its components (x0, k)
    # This function is called by two different methods: curve_fit and leastsq,
    # this last one through function "residuals". I don't know if it makes sense
    # to use a single function for two (somewhat similar) methods, but there
    # it goes.
    # p0:
    #   + Is the initial parameter for scipy.optimize.curve_fit.
    #   + For residuals calculation is left empty
    #   + It is initialized to [0.05, 0.05, 0.05]
    # x0:
    #   + Is the convergence parameter in X-axis and also the shift
    #   + It starts with 0.05 and ends up being around ~282 (days in a year)
    # k:
    #   + Set up either by curve_fit or leastsq
    #   + In least squares it is initially fixed at 0.5 and in curve_fit
    #   + to 0.05. Why? Just did this approach in two different ways and
    #   + it seems it is working.
    #   + But honestly, I have no clue on what it represents
    # xdata:
    #   + Positions in X-axis. In this case from 240 to 365
    # Finally I changed those parameters as suggested in the answer.
    # Sigmoid curve has 2 degrees of freedom, therefore, the initial
    # guess only needs to be this size. In this case, p0 = [282, 0.5]
    y = np.exp(-k*(xdata-x0)) / (1 + np.exp(-k*(xdata-x0)))
    return y

def residuals(p_guess, xdata, ydata):
    # For the residuals calculation, there is no need of setting up the initial parameters
    # After fixing the initial guess and sigmoid_function header, remove []
    # return ydata - sigmoid_function(xdata, p_guess[0], p_guess[1], [])
    return ydata - sigmoid_function(xdata, p_guess[0], p_guess[1], [])

I am sorry if I made mistakes while describing the parameters or confused technical terms. I am very new to numpy and I have not studied maths for years, so I am catching up again.
So, again, what is your advice for calculating the slope at X = 285, Y = 0.5 (more or less the midpoint) for this dataset? Thanks!!

EDIT #2

Thanks to Oliver W., I updated my code as he suggested and understand the problem a bit better. There is a final detail I do not fully get. Apparently, curve_fit returns a popt array (x0, k) with the optimum parameters for the fit: x0 seems to be how shifted the curve is, by indicating the central point of the curve, and the k parameter is the slope when y = 0.5, also at the center of the curve (I think!). Why, if the sigmoid function is a growing one, is the derivative/slope in popt negative? Does that make sense? I used sigmoid_derivative to calculate the slope and, yes, I obtained the same results as popt but with a positive sign.

# Year 2003, 2005, 2007. Slope at midpoint.
k = [-0.1910, -0.2545, -0.2259]       # Values coming from popt
slope = [0.1910, 0.2545, 0.2259]      # Values coming from sigmoid_derivative function

I know this is being a bit picky because I could use either. The relevant data is in there, just with a negative sign, but I was wondering why this is happening. So, calculating the derivative function as you suggested is only required if I need to know the slope at points other than y = 0.5; for the midpoint alone, I can use popt. Thanks for your help, it saved me a lot of time. :-)
You're never using the parameter p0 that you're passing to your sigmoid function. Hence, curve fitting will not have any good measure to find convergence, because it can take any value for this parameter. You should first rewrite your sigmoid function like this:

def sigmoid_function(xdata, x0, k):
    y = np.exp(-k*(xdata-x0)) / (1 + np.exp(-k*(xdata-x0)))
    return y

This means your model (the sigmoid) has only two degrees of freedom. This will be returned in popt:

initial_guess = [282, 1]  # (x0, k): at x0, the sigmoid reaches 50%, k is slope related
popt, pcov = curve_fit(sigmoid_function, xdata, ydata, p0=initial_guess)

Now popt will be a tuple (or array of 2 values), being the best possible x0 and k.

To get the slope of this function at any point, to be honest, I would just calculate the derivative symbolically, as the sigmoid is not such a hard function. You will end up with:

def sigmoid_derivative(x, x0, k):
    f = np.exp(-k*(x-x0))
    return -k / f

If you have the results from your curve fitting stored in popt, you can easily pass them to this function:

print(sigmoid_derivative(285, *popt))

which will return for you the derivative at x=285. But because you ask specifically for the midpoint, i.e. where x==x0 and y==.5, you'll see (from sigmoid_derivative) that the derivative there is just -k, which can be observed immediately from the curve_fit output you've already obtained. In the output you've shown, that's about 0.19.
How do I get a lognormal distribution in Python with Mu and Sigma?
I have been trying to get the result of a lognormal distribution using SciPy. I already have the mu and sigma, so I don't need to do any other prep work. If I need to be more specific (and I am trying to be, with my limited knowledge of stats), I would say that I am looking for the cumulative function (cdf under SciPy). The problem is that I can't figure out how to do this with just the mean and standard deviation on a scale of 0-1 (i.e. the answer returned should be something from 0-1). I'm also not sure which method of dist I should be using to get the answer. I've tried reading the documentation and looking through SO, but the relevant questions (like this and this) didn't seem to provide the answers I was looking for. Here is a code sample of what I am working with. Thanks.

from scipy.stats import lognorm
stddev = 0.859455801705594
mean = 0.418749176686875
total = 37
dist = lognorm.cdf(total, mean, stddev)

UPDATE:

So after a bit of work and a little research, I got a little further. But I still am getting the wrong answer. The new code is below. According to R and Excel, the result should be .7434, but that's clearly not what is happening. Is there a logic flaw I am missing?

dist = lognorm([1.744], loc=2.0785)
dist.cdf(25)  # yields=0.96374596, expected=0.7434

UPDATE 2: Working lognorm implementation which yields the correct 0.7434 result.

def lognorm(self, x, mu=0, sigma=1):
    a = (math.log(x) - mu)/math.sqrt(2*sigma**2)
    p = 0.5 + 0.5*math.erf(a)
    return p

lognorm(25, 1.744, 2.0785)
> 0.7434
I know this is a bit late (almost one year!) but I've been doing some research on the lognorm function in scipy.stats. A lot of folks seem confused about the input parameters, so I hope to help these people out. The example above is almost correct, but I found it strange to set the mean to the location ("loc") parameter - this signals that the cdf or pdf doesn't 'take off' until the value is greater than the mean. Also, the mean and standard deviation arguments should be in the form exp(Ln(mean)) and Ln(StdDev), respectively.

Simply put, the arguments are (x, shape, loc, scale), with the parameter definitions below:

loc - no equivalent; this gets subtracted from your data so that 0 becomes the infimum of the range of the data.
scale - exp(μ), where μ is the mean of the log of the variate. (When fitting, typically you'd use the sample mean of the log of the data.)
shape - the standard deviation of the log of the variate.

I went through the same frustration as most people with this function, so I'm sharing my solution. Just be careful, because the explanations aren't very clear without a compendium of resources. For more information, I found these sources helpful:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm
https://stats.stackexchange.com/questions/33036/fitting-log-normal-distribution-in-r-vs-scipy

And here is an example, taken from serv-inc's answer posted on this page:

import math
from scipy import stats

# standard deviation of normal distribution
sigma = 0.859455801705594
# mean of normal distribution
mu = 0.418749176686875
# hopefully, total is the value where you need the cdf
total = 37

frozen_lognorm = stats.lognorm(s=sigma, scale=math.exp(mu))
frozen_lognorm.cdf(total)  # use whatever function and value you need here
It sounds like you want to instantiate a "frozen" distribution from known parameters. In your example, you could do something like:

from scipy.stats import lognorm
stddev = 0.859455801705594
mean = 0.418749176686875
dist = lognorm([stddev], loc=mean)

which will give you a lognorm distribution object with the mean and standard deviation you specify. You can then get the pdf or cdf like this:

import numpy as np
import pylab as pl
x = np.linspace(0, 6, 200)
pl.plot(x, dist.pdf(x))
pl.plot(x, dist.cdf(x))

Is this what you had in mind?
from math import exp
from scipy import stats

def lognorm_cdf(x, mu, sigma):
    shape = sigma
    loc = 0
    scale = exp(mu)
    return stats.lognorm.cdf(x, shape, loc, scale)

x = 25
mu = 2.0785
sigma = 1.744
p = lognorm_cdf(x, mu, sigma)  # yields the expected 0.74341

Similar to Excel and R, the lognorm_cdf function above parameterizes the CDF of the log-normal distribution using mu and sigma.

Although SciPy uses shape, loc and scale parameters to characterize its probability distributions, for the log-normal distribution I find it slightly easier to think of these parameters at the variable level rather than at the distribution level. Here's what I mean...

A log-normal variable X is related to a normal variable Z as follows:

X = exp(mu + sigma * Z)            # Equation 1

which is the same as:

X = exp(mu) * exp(Z)**sigma        # Equation 2

This can be sneakily re-written as follows:

X = exp(mu) * exp(Z-Z0)**sigma     # Equation 3

where Z0 = 0. This equation is of the form:

f(x) = a * ( (x-x0) ** b )         # Equation 4

If you can visualize equations in your head, it should be clear that the scale, shape and location parameters in Equation 4 are a, b and x0, respectively. This means that in Equation 3 the scale, shape and location parameters are exp(mu), sigma and zero, respectively.

If you can't visualize that very clearly, let's rewrite Equation 2 as a function:

f(Z) = exp(mu) * exp(Z)**sigma     # (same as Equation 2)

and then look at the effects of mu and sigma on f(Z). The figure below holds sigma constant and varies mu. You should see that mu vertically scales f(Z). However, it does so in a nonlinear manner; the effect of changing mu from 0 to 1 is smaller than the effect of changing mu from 1 to 2. From Equation 2 we see that exp(mu) is actually the linear scaling factor. Hence SciPy's "scale" is exp(mu).

The next figure holds mu constant and varies sigma. You should see that the shape of f(Z) changes. That is, f(Z) has a constant value when Z=0, and sigma affects how quickly f(Z) curves away from the horizontal axis. Hence SciPy's "shape" is sigma.
Even later, but in case it's helpful to anyone else: I found that Excel's

LOGNORM.DIST(x, Ln(mean), standard_dev, TRUE)

provides the same results as Python's

from scipy.stats import lognorm
lognorm.cdf(x, sigma, 0, mean)

Likewise, Excel's

LOGNORM.DIST(x, Ln(mean), standard_dev, FALSE)

seems equivalent to Python's

from scipy.stats import lognorm
lognorm.pdf(x, sigma, 0, mean)
lucas' answer has the usage down pat. As a code example, you could use:

import math
from scipy import stats

# standard deviation of normal distribution
sigma = 0.859455801705594
# mean of normal distribution
mu = 0.418749176686875
# hopefully, total is the value where you need the cdf
total = 37

frozen_lognorm = stats.lognorm(s=sigma, scale=math.exp(mu))
frozen_lognorm.cdf(total)  # use whatever function and value you need here
Known mean and stddev of the lognormal distribution

In case someone is looking for it, here is a solution for getting the scipy.stats.lognorm distribution if the mean mu and standard deviation sigma of the lognormal distribution are known. In this case we have to calculate the stats.lognorm parameters from the known mu and sigma like so:

import numpy as np
from scipy import stats

mu = 10
sigma = 3

a = 1 + (sigma / mu) ** 2
s = np.sqrt(np.log(a))
scale = mu / np.sqrt(a)

This was obtained by looking into the implementation of the variance and mean calculations in the stats.lognorm.stats method and essentially reversing it (solving for the input). Then we can initialize the frozen distribution instance:

distr = stats.lognorm(s, 0, scale)

# generate some randomvals
randomvals = distr.rvs(1_000_000)

# calculate mean and variance using the dedicated method
mu_stats, var_stats = distr.stats("mv")

Compare means and stddevs from input, randomvals and analytical solution from distr.stats:

print(f"""
                   Mean     Std
    ----------------------------
    Input:         {mu:6.2f}  {sigma:6.2f}
    Randomvals:    {randomvals.mean():6.2f}  {randomvals.std():6.2f}
    lognorm.stats: {mu_stats:6.2f}  {np.sqrt(var_stats):6.2f}
    """)

                   Mean     Std
    ----------------------------
    Input:          10.00    3.00
    Randomvals:     10.00    3.00
    lognorm.stats:  10.00    3.00

Plot the PDF from stats.lognorm and a histogram of the random values:

import holoviews as hv
hv.extension('bokeh')

x = np.linspace(0, 30, 301)
counts, _ = np.histogram(randomvals, bins=x)
counts = counts / counts.sum() / (x[1] - x[0])

(hv.Histogram((counts, x)) * hv.Curve((x, distr.pdf(x))).opts(color="r").opts(width=900))
If you read this and just want a function with behaviour similar to lnorm in R, then relieve yourself from violent anger and use numpy's numpy.random.lognormal.
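For example (the parameter values here are only illustrative), numpy.random.lognormal takes the mean and sigma of the underlying normal distribution, much like R's rlnorm:

import numpy as np

# draw 1000 samples; mean and sigma refer to the underlying normal distribution
samples = np.random.lognormal(mean=0.42, sigma=0.86, size=1000)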