Fitting a Lognormal Distribution in Python using curve_fit

I have a hypothetical y as a function of x and am trying to find/fit the lognormal distribution curve that best shapes over the data. I am using the curve_fit function and was able to fit a normal distribution, but the curve does not look optimized.
Below are the given y and x data points, where y = f(x).
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
The y-axis values are probabilities of an event occurring in the x-axis time bins:
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
I was able to get a better fit on my data using Excel and a lognormal approach. When I attempt to use lognormal in Python, the fit does not work, so I must be doing something wrong.
Below is the code I have for fitting a normal distribution, which seems to be the only one I can fit in Python (hard to believe):
#fitting distribution on top of savitzky-golay
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import scipy
import scipy.stats
import numpy as np
from scipy.stats import norm, gamma, lognorm, halflogistic, foldcauchy
from scipy.optimize import curve_fit
matplotlib.rcParams['figure.figsize'] = (16.0, 12.0)
matplotlib.style.use('ggplot')
# results from savgol
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
## y_axis values must be normalised
sum_ys = sum(y_axis)
# normalize to 1
y_axis = [_/sum_ys for _ in y_axis]

# def gamma_f(x, a, loc, scale):
#     return gamma.pdf(x, a, loc, scale)

def norm_f(x, loc, scale):
    # print('loc: ', loc, 'scale: ', scale, "\n")
    return norm.pdf(x, loc, scale)

fitting = norm_f

# param_bounds = ([-np.inf, 0, -np.inf], [np.inf, 2, np.inf])
result = curve_fit(fitting, x_axis, y_axis)
result_mod = result
# mod scale
# results_adj = [result_mod[0][0]*.75, result_mod[0][1]*.85]

plt.plot(x_axis, y_axis, 'ro')
plt.bar(x_axis, y_axis, 1, alpha=0.75)
plt.plot(x_axis, [fitting(_, *result[0]) for _ in x_axis], 'b-')
plt.axis([0, 35, 0, .1])

# convert back into probability
y_norm_fit = [fitting(_, *result[0]) for _ in x_axis]
y_fit = [_*sum_ys for _ in y_norm_fit]
print(list(y_fit))
plt.show()
I am trying to get answers to two questions:
Is this the best fit I will get from a normal distribution curve? How can I improve the fit?
Normal distribution result:
How can I fit a lognormal distribution to this data, or is there a better distribution I can use?
I was playing around with a lognormal distribution curve, adjusting mu and sigma, and it looks like a better fit is possible. I don't understand what I am doing wrong that I can't get similar results in Python.
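For the lognormal question itself, the usual culprit is that curve_fit starts every parameter at 1.0 by default, which is far from anything sensible for lognorm.pdf. A minimal sketch (my own illustration on synthetic data, not the poster's code) showing that a reasonable p0 lets the fit converge:

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import curve_fit

# synthetic "probability per bin" data from a known lognormal shape
x = np.arange(1.0, 35.0)
true_s, true_loc, true_scale = 0.4, 0.0, 12.0
y = lognorm.pdf(x, true_s, true_loc, true_scale)

def lognorm_f(x, s, loc, scale):
    return lognorm.pdf(x, s, loc, scale)

# rough starting values: modest shape, zero shift, scale near the peak
p0 = [0.5, 0.0, x[np.argmax(y)]]
popt, _ = curve_fit(lognorm_f, x, y, p0=p0)
```

With the default p0 (all ones) the same call typically stalls in a flat region; the data-driven guess is what makes the difference.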

Actually, a Gamma distribution might be a good fit, as @Glen_b proposed. I'm using the second parameterisation, with \alpha and \beta.
NB: a trick I use for a quick fit is to compute the mean and variance; for a typical two-parameter distribution that is enough to recover the parameters and get a quick idea of whether it is a good fit or not.
Code
import math
import matplotlib.pyplot as plt
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
## y_axis values must be normalised
sum_ys = sum(y_axis)
# normalize to 1
y_axis = [_/sum_ys for _ in y_axis]

# method of moments: mean and variance
m = 0.0
for k in range(0, len(x_axis)):
    m += y_axis[k] * x_axis[k]
v = 0.0
for k in range(0, len(x_axis)):
    t = (x_axis[k] - m)
    v += y_axis[k] * t * t
print(m, v)

# recover gamma parameters (shape a, rate b)
b = m/v
a = m * b
print(a, b)

z = []
for k in range(0, len(x_axis)):
    q = b**a * x_axis[k]**(a-1.0) * math.exp(-b*x_axis[k]) / math.gamma(a)
    z.append(q)

plt.plot(x_axis, y_axis, 'ro')
plt.plot(x_axis, z, 'b*')
plt.axis([0, 35, 0, .1])
plt.show()

A discrete distribution might look better -- your x values are all integers, after all. You have a distribution with variance about 3 times the mean, and it is asymmetric -- so most likely something like a Negative Binomial would work quite well. Here is a quick fit.
r comes out a bit above 6, so you might want to move to a distribution with real-valued r -- the Pólya distribution.
Code
from scipy.special import comb
import matplotlib.pyplot as plt
y_axis = [0.00032425299473065838, 0.00063714106162861229, 0.00027009331177605913, 0.00096672396877715144, 0.002388766809835889, 0.0042233337680543182, 0.0053072824980722137, 0.0061291327849408699, 0.0064555344006149871, 0.0065601228278316746, 0.0052574034010282218, 0.0057924488798939255, 0.0048154093097913355, 0.0048619350036057446, 0.0048154093097913355, 0.0045114840997070331, 0.0034906838696562147, 0.0040069911024866456, 0.0027766995669134334, 0.0016595801819374015, 0.0012182145074882836, 0.00098231827111984341, 0.00098231827111984363, 0.0012863691645616997, 0.0012395921040321833, 0.00093554121059032721, 0.0012629806342969417, 0.0010057068013846018, 0.0006081017868837127, 0.00032743942370661445, 4.6777060529516312e-05, 7.0165590794274467e-05, 7.0165590794274467e-05, 4.6777060529516745e-05]
x_axis = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0, 29.0, 30.0, 31.0, 32.0, 33.0, 34.0]
## y_axis values must be normalised
sum_ys = sum(y_axis)
# normalize to 1
y_axis = [_/sum_ys for _ in y_axis]

s = 1.0 # shift by 1 to have the values start at 0
m = 0.0
for k in range(0, len(x_axis)):
    m += y_axis[k] * (x_axis[k] - s)
v = 0.0
for k in range(0, len(x_axis)):
    t = (x_axis[k] - s - m)
    v += y_axis[k] * t * t
print(m, v)

# method-of-moments estimates for the negative binomial
p = 1.0 - m/v
r = int(m*(1.0 - p) / p)
print(p, r)

z = []
for k in range(0, len(x_axis)):
    q = comb(k + r - 1, k) * (1.0 - p)**r * p**k
    z.append(q)

plt.plot(x_axis, y_axis, 'ro')
plt.plot(x_axis, z, 'b*')
plt.axis([0, 35, 0, .1])
plt.show()
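For a real-valued r (the Pólya case mentioned above), the binomial coefficient can be rewritten with gamma functions, C(k+r-1, k) = Γ(k+r) / (Γ(r)·k!). A sketch of such a pmf (my own illustration, not part of the original answer):

```python
import math
from scipy.special import comb

def polya_pmf(k, r, p):
    """Negative binomial pmf generalised to real-valued r."""
    coef = math.gamma(k + r) / (math.gamma(r) * math.factorial(k))
    return coef * (1.0 - p)**r * p**k

# for integer r it reduces to the usual negative binomial pmf above
assert abs(polya_pmf(3, 6, 0.7) - comb(3 + 6 - 1, 3) * (1 - 0.7)**6 * 0.7**3) < 1e-12
```

This drops straight into the loop above in place of the comb(...) expression, with r left as a float instead of being truncated by int().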

Note that if a lognormal curve is correct and you take logs of both variables, you should see a quadratic relationship. Even if that's not a suitable scale for a final model (because of variance effects -- if your variance is near-constant on the original scale it will overweight the small values), it should at least give a good starting point for a nonlinear fit.
Indeed, aside from the first two points this looks fairly good -- a quadratic fit to the solid points would describe the data quite well and should give suitable starting values if you then want to do a nonlinear fit.
(If error in x is at all possible, the lack of fit at the lowest x may be as much an issue of error in x as of error in y.)
Incidentally, that plot seems to hint that a gamma curve may fit a little better overall than a lognormal one (in particular if you don't want to reduce the impact of those first two points relative to points 4-6). A good initial fit for that can be had by regressing log(y) on x and log(x):
The scaled gamma density is g = c * x^(a-1) * exp(-b*x). Taking logs, log(g) = log(c) + (a-1)*log(x) - b*x = b0 + b1*log(x) + b2*x, so supplying log(x) and x to a linear regression routine will fit it. The same caveats about variance effects apply (so it might be best used as a starting point for a nonlinear least-squares fit if your relative error in y isn't nearly constant).
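The log-regression recipe in the last paragraph can be sketched in a few lines (my own illustration on synthetic, noiseless data):

```python
import numpy as np

# synthetic data from a scaled gamma shape g = c * x^(a-1) * exp(-b*x)
a_true, b_true, c_true = 3.0, 0.5, 2.0
x = np.arange(1.0, 35.0)
y = c_true * x**(a_true - 1) * np.exp(-b_true * x)

# regress log(y) on [1, log(x), x]: log(g) = log(c) + (a-1)*log(x) - b*x
X = np.column_stack([np.ones_like(x), np.log(x), x])
b0, b1, b2 = np.linalg.lstsq(X, np.log(y), rcond=None)[0]

a_est = b1 + 1.0   # shape: a = b1 + 1
b_est = -b2        # rate:  b = -b2
```

On real (noisy) data the recovered a_est and b_est would serve as the starting values for the nonlinear fit, as the text suggests.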

In Python, I explained here a trick for fitting a LogNormal very simply using the OpenTURNS library:
import numpy as np
import openturns as ot
# x_axis and y_axis as defined in the question
N = 100000  # resampling size; not defined in the original snippet -- any large value works
n_times = [int(y_axis[i] * N) for i in range(len(y_axis))]
S = np.repeat(x_axis, n_times)
sample = ot.Sample([[p] for p in S])
fitdist = ot.LogNormalFactory().buildAsLogNormal(sample)
That's it!
print(fitdist) will show you: LogNormal(muLog = 2.92142, sigmaLog = 0.305, gamma = -6.24996)
and the fitting seems good:
import matplotlib.pyplot as plt
plt.hist(S, density=True, color='grey', bins=34, alpha=0.5)
plt.scatter(x_axis, y_axis, color='red')
plt.plot(x_axis, fitdist.computePDF(ot.Sample([[p] for p in x_axis])), color='black')
plt.show()


Problem Fitting a Residence Time Distribution Data

I am trying to fit Residence Time Distribution (RTD) data. The RTD is typically a skewed distribution. I have built a simple code that takes this non-equally-spaced time data set from the RTD.
Data Set
timeArray = [0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0]
concArray = [0.0, 0.6, 1.4, 5.0, 8.0, 10.0, 8.0, 6.0, 4.0, 3.0, 2.2, 1.5, 0.6, 0.0]
To fit the data I have been using Python's curve_fit function
parameters, covariance = curve_fit(nCSTR, time, conc, p0=guess)
and different models (e.g. CSTR, Sine, Gauss) to fit the data. However, no success so far.
The RTD data that I have correspond to a CSTR, and there is an equation that models this type of behavior very accurately.
#Generalize nCSTR model
y = (( (np.power(x/tau,n-1)) * np.power(n,n) ) / (tau * math.gamma(n)) ) * np.exp(-n*x/tau)
As a separate note: in the generalized nCSTR model I am using gamma instead of the (n-1)! factorial term, because of the complexity of handling non-integer values in factorials.
This CSTR model should fit the data without problems, but for some reason it is not able to do so. The outcome after executing my code:
timeArray = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0]
concArray = [0.0, 0.6, 1.4, 2.6, 5.0, 6.5, 8.0, 9.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.5, 3.0, 2.5, 2.2, 1.8, 1.5, 1.2, 1.0, 0.8, 0.6, 0.5, 0.3, 0.1, 0.0]
#Recast time and conc into numpy arrays
time = np.asarray(timeArray)
conc = np.asarray(concArray)
plt.plot(time, conc, 'o')

def nCSTR(x, tau, n):
    y = (((np.power(x/tau, n-1)) * np.power(n, n)) / (tau * math.gamma(n))) * np.exp(-n*x/tau)
    return y

guess = [1, 12]
parameters, covariance = curve_fit(nCSTR, time, conc, p0=guess)
tau = parameters[0]
n = parameters[1]

y = np.arange(0.0, len(time), 1.0)
for i in range(len(timeArray)):
    y[i] = (((np.power(time[i]/tau, n-1)) * np.power(n, n)) / (tau * math.gamma(n))) * np.exp(-n*time[i]/tau)
plt.plot(time, y)
The result is this plot: Fitting Output
I know I am missing something and any help will be appreciated. The model has been well known for decades, so the problem should not be the equation itself. I made some dummy data to confirm that the equation is written correctly, and the output was the same type of profile that I am looking for. So, in the end, the equation is fine.
import numpy as np
import math
t = np.arange(0.0, 10.5, 0.5)
tau = 2
n = 5
y = np.arange(0.0, len(t), 1.0)
for i in range(len(t)):
    y[i] = (((np.power(t[i]/tau, n-1)) * np.power(n, n)) / (tau * math.gamma(n))) * np.exp(-n*t[i]/tau)
print(y)
plt.plot(t, y)
CSTR profile with Dummy Data (image)
If anyone is interested in the theory behind it, I recommend any reading related to Tanks in Series (specifically CSTR); Fogler has a great book about this topic.
I think that the main problem is that your model does not allow for an overall scale factor, or that your data may not be normalized as you expect.
If you'll permit me to convert your curve-fitting program to use lmfit (I am a lead author), you might do:
import numpy as np
from scipy.special import gamma
import matplotlib.pyplot as plt
from lmfit import Model
timeArray = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0]
concArray = [0.0, 0.6, 1.4, 2.6, 5.0, 6.5, 8.0, 9.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.5, 3.0, 2.5, 2.2, 1.8, 1.5, 1.2, 1.0, 0.8, 0.6, 0.5, 0.3, 0.1, 0.0]
#Recast time and conc into numpy arrays
time = np.asarray(timeArray)
conc = np.asarray(concArray)
plt.plot(time, conc, 'o', label='data')
def nCSTR(x, scale, tau, n):
    """scaled CSTR model"""
    z = n*x/tau
    return scale * np.exp(-z) * z**(n-1) * (n/(tau*gamma(n)))
# create a Model for your model function
cmodel = Model(nCSTR)
# now create a set of Parameters for your model (note that parameters
# are named using your function arguments), and give initial values
params = cmodel.make_params(tau=3, scale=10, n=10)
# since you have `xxx**(n-1)`, setting a lower bound of 1 on `n`
# is wise, otherwise you would have to handle complex values
params['n'].min = 1
# now fit the model to your `conc` data with those parameters
# (and also passing in independent variables using `x`: the argument
# name from the signature of the model function)
result = cmodel.fit(conc, params, x=time)
# print out a report of the results
print(result.fit_report())
# you do not need to construct the best fit yourself, it is in `result`:
plt.plot(time, result.best_fit, label='fit')
plt.legend()
plt.show()
This will print out a report that includes statistics and uncertainties:
[[Model]]
    Model(nCSTR)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 29
    # data points      = 29
    # variables        = 3
    chi-square         = 2.84348862
    reduced chi-square = 0.10936495
    Akaike info crit   = -61.3456602
    Bayesian info crit = -57.2437727
    R-squared          = 0.98989860
[[Variables]]
    scale:  49.7615649 +/- 0.81616118 (1.64%) (init = 10)
    tau:    5.06327482 +/- 0.05267918 (1.04%) (init = 3)
    n:      4.33771512 +/- 0.14012112 (3.23%) (init = 10)
[[Correlations]] (unreported correlations are < 0.100)
    C(scale, n)   = -0.521
    C(scale, tau) = 0.477
    C(tau, n)     = -0.406
and generate a plot of the data and the best fit:

Non-linear complex function fitting

I'm trying to fit the curve of a graph based on a model. The problem is that the function has to fit both the real and the imaginary parts of the solution.
I have tried curve_fit from scipy, but the results are not a proper fit to the curve.
This is the code
(the data to fit is invented, but it should work as an example):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import math
def long_function(fre, e_inf, e_s, alfa, beta, tau):
    return ((e_s - e_inf) * ((1 + 1j*2*np.pi*fre*tau)**(1 - alfa))**(-beta)) + e_inf

def funcBoth(x, e_inf, e_s, alfa, beta, tau):
    N = len(x)
    x_real = x[:N//2]
    x_imag = x[N//2:]
    y_real = np.real(long_function(x_real, e_inf, e_s, alfa, beta, tau))
    y_imag = np.imag(long_function(x_imag, e_inf, e_s, alfa, beta, tau))
    return np.hstack([y_real, y_imag])
def plot_graph(poptBoth, fre, yReal, yImag):
    # Compute the best-fit solution
    yFit = long_function(fre, *poptBoth)
    print("alfa: {0:.2f}".format(poptBoth[2]))
    print("beta: {0:.2f}".format(poptBoth[3]))
    print("epsilon_infinita: {0:.2f}".format(poptBoth[0]))
    print("epsilon_s: {0:.2f}".format(poptBoth[1]))
    print("tau: ", poptBoth[4])
    # Plot the results
    plt.figure(figsize=(9, 4))
    plt.subplot(121)
    plt.plot(fre, np.real(yFit), label="Best fit")
    plt.plot(fre, yReal, "k.", label="Noisy y")
    plt.ylabel("Real part of y")
    plt.xlabel("x")
    plt.legend()
    plt.subplot(122)
    plt.plot(fre, np.imag(yFit), label="Best fit")
    plt.plot(fre, yImag, "k.", label="Noisy y")
    plt.ylabel("Imaginary part of y")
    plt.xlabel("x")
    plt.tight_layout()
    plt.legend(loc='best')
    plt.show()
def curve_fitter(fre, yReal, yImag):
    yBoth = np.hstack([yReal, yImag])
    poptBoth, pcovBoth = curve_fit(funcBoth, np.hstack([fre, fre]), yBoth, maxfev=500000)  # method='lm', p0=guess
    plot_graph(poptBoth, fre, yReal, yImag)
yReal = [70.0, 68.0, 60.0, 50.0, 42.0, 38.0, 36.0, 35.4, 34.0, 33.0, 32.0, 30.0, 29.1, 28.8, 28.6, 28.4, 28.3, 28.2, 28.2, 28.1, 28.0]
yImag = [17.0, 21.0, 22.5, 23.0, 22.5, 21.0, 19.0, 18.0, 17.3, 16.9, 16.4, 16.3, 16.2, 16.0, 15.7, 15.2, 14.8, 14.7, 14.7, 14.6, 14.5]
fre = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
yReal = np.array(yReal)
yImag = np.array(yImag)
fre = np.array(fre)
curve_fitter(fre, yReal, yImag)
And the result that I get is the following:
As you can see, it is not fitting correctly.
I have also tried the minimize() function, but I am not getting results.
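For what it's worth, the real/imaginary stacking idea used in funcBoth is sound. A minimal self-contained sketch (using a made-up, simpler complex model rather than the dielectric function above) shows curve_fit recovering known parameters this way, which suggests the five-parameter fit mainly needs a sensible initial guess p0 rather than a different method:

```python
import numpy as np
from scipy.optimize import curve_fit

def cmodel(x, amp, phi, tau):
    # hypothetical complex-valued model, for illustration only
    return amp * np.exp(1j * phi) * np.exp(-x / tau)

def stacked(x, amp, phi, tau):
    # stack real and imaginary parts so curve_fit sees one real-valued vector
    y = cmodel(x, amp, phi, tau)
    return np.hstack([y.real, y.imag])

x = np.linspace(0.1, 10.0, 50)
y = cmodel(x, 2.0, 0.5, 3.0)          # noiseless synthetic data
data = np.hstack([y.real, y.imag])
popt, _ = curve_fit(stacked, x, data, p0=[1.0, 0.0, 1.0])
```

Note that here the model function receives the original x once and does the stacking itself, so there is no need to double the x array as in the question's funcBoth.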

Calculating the length of a polyline using a loop

I need to calculate the length of this polyline with the following coordinates, formatted in this exact way:
coords = [[1.0, 1.0], [1.5, 2.0], [2.2, 2.4], [3.0, 3.2], [4.0, 3.6], [4.5, 3.5], [4.8, 3.2], [5.2, 2.8], [5.6, 2.0],
[6.5, 1.2]]
using this distance formula: d = √((x2 − x1)² + (y2 − y1)²).
Our lab wants us to use a loop and be able to use the same code on other sets of coordinates. I am lost on where to start.
Something like this could work:
Define a function for the Euclidean distance:
import math

def distance(x1, x2, y1, y2):
    return math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
Then, another function which receives coords as input and returns the sum of the point-to-point Euclidean distances:
def compute_poly_line_length(coords):
    distances = []
    for i in range(len(coords) - 1):
        current_point = coords[i]
        next_point = coords[i + 1]
        distances.append(distance(current_point[0], next_point[0], current_point[1], next_point[1]))
    return sum(distances)
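Called on the coordinates from the question, the total comes out to about 7.73. A quick independent cross-check (using math.dist, available since Python 3.8, instead of the loop above):

```python
import math

coords = [[1.0, 1.0], [1.5, 2.0], [2.2, 2.4], [3.0, 3.2], [4.0, 3.6],
          [4.5, 3.5], [4.8, 3.2], [5.2, 2.8], [5.6, 2.0], [6.5, 1.2]]
# pair each point with its successor and sum the segment lengths
total = sum(math.dist(p, q) for p, q in zip(coords, coords[1:]))
print(round(total, 3))  # -> 7.731
```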

Python: fit data to given cosine function

I am simply trying to find the best fit for Malus's law:
I_measured = I_0 * (cos(theta))^2
When I scatter-plot the data it obviously works, but with the def form() function I get the error given below.
I googled the problem, and it seems that this is not the correct way to curve-fit a cosine function.
The given data is:
x_data=x1 in the code below
[ 0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0,
60.0, 65.0, 70.0, 75.0, 80.0, 85.0, 90.0, 95.0, 100.0, 105.0, 110.0, 115.0,
120.0, 125.0, 130.0, 135.0, 140.0, 145.0, 150.0, 155.0, 160.0, 165.0,
170.0, 175.0, 180.0, 185.0, 190.0, 195.0, 200.0, 205.0, 210.0, 215.0,
220.0, 225.0, 230.0, 235.0, 240.0, 245.0, 250.0, 255.0, 260.0, 265.0,
270.0, 275.0, 280.0, 285.0, 290.0, 295.0, 300.0, 305.0, 310.0, 315.0,
320.0, 325.0, 330.0, 335.0, 340.0, 345.0, 350.0, 355.0, 360.0]
y_data = x2 in the code below
[ 1.69000000e-05 2.80000000e-05 4.14000000e-05 5.89000000e-05
7.97000000e-05 9.79000000e-05 1.23000000e-04 1.47500000e-04
1.69800000e-04 1.94000000e-04 2.17400000e-04 2.40200000e-04
2.55400000e-04 2.70500000e-04 2.81900000e-04 2.87600000e-04
2.91500000e-04 2.90300000e-04 2.83500000e-04 2.76200000e-04
2.62100000e-04 2.41800000e-04 2.24200000e-04 1.99500000e-04
1.74100000e-04 1.49300000e-04 1.35600000e-04 1.11500000e-04
9.00000000e-05 6.87000000e-05 4.98000000e-05 3.19000000e-05
2.07000000e-05 1.31000000e-05 9.90000000e-06 1.03000000e-05
1.49000000e-05 2.34000000e-05 3.65000000e-05 5.58000000e-05
7.56000000e-05 9.65000000e-05 1.19400000e-04 1.46900000e-04
1.73000000e-04 1.99200000e-04 2.24600000e-04 2.38700000e-04
2.60700000e-04 2.74800000e-04 2.84000000e-04 2.91200000e-04
2.93400000e-04 2.90300000e-04 2.86400000e-04 2.77900000e-04
2.63600000e-04 2.45900000e-04 2.25500000e-04 2.03900000e-04
1.79100000e-04 1.51800000e-04 1.32400000e-04 1.07000000e-04
8.39000000e-05 6.20000000e-05 4.41000000e-05 3.01000000e-05
1.93000000e-05 1.24000000e-05 1.00000000e-05 1.13000000e-05
1.77000000e-05]
The code:
I_0=291,5*10**-6/(pi*0.35**2) # print(I_0) gives (291, 1.2992240252399621e-05)??
def form(theta, I_0):
    return (I_0*(np.abs(np.cos(theta)))**2) # theta is x_data
param=I_0
parame,covariance= optimize.curve_fit(form,x1,x2,I_0)
test=parame*I_0
#print(parame)
#plt.scatter(x1,x2,label='data')
plt.ylim(10**-5,3*10**-4)
plt.plot(x1,form(x1,*parame),'b--',label='fitcurve')
The error I get is:
TypeError: form() takes 2 positional arguments but 3 were given
I started again with another code, shown below.
x1=np.radians(np.array(x1))
x2=np.array(x2)*10**-6
print(x1,x2)
def form(theta, I_0, theta0, offset):
    return I_0 * np.cos(np.radians(theta - theta0)) ** 2 + offset
param, covariance = optimize.curve_fit(form, x1, x2)
plt.scatter(x1, x2, label='data')
plt.ylim(0, 3e-4)
plt.xlim(0, 360)
plt.plot(x1, form(x1, *param), 'b-')
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.axes().xaxis.set_major_locator(ticker.MultipleLocator(45))
plt.show()
In the new code I multiplied the input array by a number; basically it is still the y_data from the first code. When I plot this, with the added line x1 = np.radians(np.array(x1)), I see that the function does not fit at all.
Comma
I guess your I_0=291,5*10**-6/(pi*0.35**2) is supposed to be the initial guess for the fit. I don't know why it is expressed in such a complicated way. Using , as the decimal separator is invalid syntax in Python; use . instead. Also, instead of something like 123.4 * 10 ** -5 you can write 123.4e-5 (scientific notation).
Anyway, it turns out you don't even need to specify the initial guess if you do the fit correctly.
Model function
In your model function, I_measured = I_0 * cos(theta)**2, theta is in radians (0 to 2π), but your x values are in degrees (0 to 360).
Your model function doesn't account for any offset in the x or y values. You should include such parameters in the function.
An improved model function would look like this:
def form(theta, I_0, theta0, offset):
    return I_0 * np.cos(np.radians(theta - theta0)) ** 2 + offset
(Credits to Martin Evans for pointing out the np.radians function.)
Result
Now the curve_fit function is able to derive values for I_0, theta0, and offset that best fit the model function to your measured data:
>>> param, covariance = optimize.curve_fit(form, x, y)
>>> print('I_0: {0:e} / theta_0: {1} degrees / offset: {2:e}'.format(*param))
I_0: -2.827996e-04 / theta_0: -9.17118424279 degrees / offset: 2.926534e-04
The plot looks decent, too:
import matplotlib.ticker as ticker
plt.scatter(x, y, label='data')
plt.ylim(0, 3e-4)
plt.xlim(0, 360)
plt.plot(x, form(x, *param), 'b-')
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.axes().xaxis.set_major_locator(ticker.MultipleLocator(45))
plt.show()
(Your x values are from 0 to 360, I don't know why you've set the plot limits to 370. Also, I spaced the ticks in 45 degrees interval.)
Update: The fit results in a negative amplitude I_0 and an offset of about 3e-4, close to the maximum y values. You can guide the fit to a positive amplitude and offset close to zero ("flip it around") by providing a 90 degree initial phase offset:
>>> param, covariance = optimize.curve_fit(form, x, y, [3e-4, 90, 0])
>>> print('I_0: {0:e} / theta_0: {1} degrees / offset: {2:e}'.format(*param))
I_0: 2.827996e-04 / theta_0: 80.8288157578 degrees / offset: 9.853833e-06
Here's the complete code.
The comma in your formula creates a two-element tuple; it does not indicate thousands. Removing it gives you:
I_O = 0.00757447606715
The aim here is to provide a function that can be adapted to fit your data. Your original function only provided one parameter, which was not enough for curve_fit() to get a good fit.
To get a better fit, you need to give your func() more variables, to allow the curve fitter more flexibility. In this case, for the cos wave, it provides I_O for the amplitude, theta0 for the phase, and offset for the y-offset.
So the code would be:
import matplotlib.pyplot as plt
from math import pi
from scipy import optimize
import numpy as np
x1 = [ 0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0,
60.0, 65.0, 70.0, 75.0, 80.0, 85.0, 90.0, 95.0, 100.0, 105.0, 110.0, 115.0,
120.0, 125.0, 130.0, 135.0, 140.0, 145.0, 150.0, 155.0, 160.0, 165.0,
170.0, 175.0, 180.0, 185.0, 190.0, 195.0, 200.0, 205.0, 210.0, 215.0,
220.0, 225.0, 230.0, 235.0, 240.0, 245.0, 250.0, 255.0, 260.0, 265.0,
270.0, 275.0, 280.0, 285.0, 290.0, 295.0, 300.0, 305.0, 310.0, 315.0,
320.0, 325.0, 330.0, 335.0, 340.0, 345.0, 350.0, 355.0, 360.0]
x2 = [ 1.69000000e-05, 2.80000000e-05, 4.14000000e-05, 5.89000000e-05,
7.97000000e-05, 9.79000000e-05, 1.23000000e-04, 1.47500000e-04,
1.69800000e-04, 1.94000000e-04, 2.17400000e-04, 2.40200000e-04,
2.55400000e-04, 2.70500000e-04, 2.81900000e-04, 2.87600000e-04,
2.91500000e-04, 2.90300000e-04, 2.83500000e-04, 2.76200000e-04,
2.62100000e-04, 2.41800000e-04, 2.24200000e-04, 1.99500000e-04,
1.74100000e-04, 1.49300000e-04, 1.35600000e-04, 1.11500000e-04,
9.00000000e-05, 6.87000000e-05, 4.98000000e-05, 3.19000000e-05,
2.07000000e-05, 1.31000000e-05, 9.90000000e-06, 1.03000000e-05,
1.49000000e-05, 2.34000000e-05, 3.65000000e-05, 5.58000000e-05,
7.56000000e-05, 9.65000000e-05, 1.19400000e-04, 1.46900000e-04,
1.73000000e-04, 1.99200000e-04, 2.24600000e-04, 2.38700000e-04,
2.60700000e-04, 2.74800000e-04, 2.84000000e-04, 2.91200000e-04,
2.93400000e-04, 2.90300000e-04, 2.86400000e-04, 2.77900000e-04,
2.63600000e-04, 2.45900000e-04, 2.25500000e-04, 2.03900000e-04,
1.79100000e-04, 1.51800000e-04, 1.32400000e-04, 1.07000000e-04,
8.39000000e-05, 6.20000000e-05, 4.41000000e-05, 3.01000000e-05,
1.93000000e-05, 1.24000000e-05, 1.00000000e-05, 1.13000000e-05,
1.77000000e-05]
x1 = np.radians(np.array(x1))
x2 = np.array(x2)
def form(theta, I_0, theta0, offset):
    return I_0 * np.cos(theta - theta0) ** 2 + offset
param, covariance = optimize.curve_fit(form, x1, x2)
plt.scatter(x1, x2, label='data')
plt.ylim(x2.min(), x2.max())
plt.plot(x1, form(x1, *param), 'b-')
plt.show()
Giving you an output of:
The maths libraries work in radians, so your data needs to be converted to radians at some point (where 2π radians == 360 degrees). You can either convert your data beforehand or carry out the conversion within your function.
Thanks also to mkrieger1 for the extra parameters.

Python's implementation of Permutation Test with permutation number as input

R has a well-known library for permutation tests, i.e. perm.
The example I'm interested in is this:
x <- c(12.6, 11.4, 13.2, 11.2, 9.4, 12.0)
y <- c(16.4, 14.1, 13.4, 15.4, 14.0, 11.3)
permTS(x,y, alternative="two.sided", method="exact.mc", control=permControl(nmc=30000))$p.value
This prints the result with p-value: 0.01999933.
Note that the function permTS allows us to input the number of permutations, 30000 here.
Is there a similar implementation in Python?
I was looking at Python's perm_stat, but it's not what I'm looking for and it seems to be buggy.
This is a possible implementation of a permutation test using the Monte Carlo method:
import numpy as np

def exact_mc_perm_test(xs, ys, nmc):
    n, k = len(xs), 0
    diff = np.abs(np.mean(xs) - np.mean(ys))
    zs = np.concatenate([xs, ys])
    for j in range(nmc):
        np.random.shuffle(zs)
        k += diff < np.abs(np.mean(zs[:n]) - np.mean(zs[n:]))
    return k / nmc
Note that, given the Monte Carlo nature of the algorithm, you will not get exactly the same number on each run:
>>> xs = np.array([12.6, 11.4, 13.2, 11.2, 9.4, 12.0])
>>> ys = np.array([16.4, 14.1, 13.4, 15.4, 14.0, 11.3])
>>> exact_mc_perm_test(xs, ys, 30000)
0.019466666666666667
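Worth noting: newer SciPy releases (1.7 and later) ship scipy.stats.permutation_test, which takes the number of resamples directly, much like permTS's nmc (and switches to an exact enumeration when the number of distinct permutations is small enough, as with two samples of 6):

```python
import numpy as np
from scipy.stats import permutation_test

x = np.array([12.6, 11.4, 13.2, 11.2, 9.4, 12.0])
y = np.array([16.4, 14.1, 13.4, 15.4, 14.0, 11.3])

def statistic(x, y):
    # difference of sample means, the same statistic as above
    return np.mean(x) - np.mean(y)

res = permutation_test((x, y), statistic, permutation_type='independent',
                       n_resamples=30000, alternative='two-sided')
print(res.pvalue)  # in the same ballpark as permTS's 0.02
```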
