I'm running the code below:
import numpy as np
from lmfit import Model

def exp_model(x, ampl1=1.0, tau1=0.1):
    exponential = ampl1*np.exp(-x/tau1)
    return exponential

x = np.array([2.496,2.528,2.56,2.592,2.624])
y = np.array([8774.52,8361.68,7923.42,7502.43,7144.11])

dec_model = Model(exp_model, nan_policy='propagate')
results = dec_model.fit(y, x=x, ampl1=y[0])
results.plot()
The result I get is this:

[plot of the failed fit]

This means the fit is just failing for some reason, and I can't figure out why. It had worked for similar data before. Any help would be greatly appreciated.
It wasn't converging because the initial value of the tau1 parameter was too far from the true value. The code below works well.
import numpy as np
from lmfit import Model

def exp_model(x, ampl1=1.0, tau1=1.0):  # initial value of tau1 changed from 0.1 to 1.0
    exponential = ampl1*np.exp(-x/tau1)
    return exponential

x = np.array([2.496,2.528,2.56,2.592,2.624])
y = np.array([8774.52,8361.68,7923.42,7502.43,7144.11])

dec_model = Model(exp_model, nan_policy='propagate')
results = dec_model.fit(y, x=x, ampl1=y[0])
results.plot()
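As an aside, you don't have to edit the model function to change an initial guess: Model.fit accepts initial parameter values as keyword arguments, which is already how ampl1=y[0] is being passed. A minimal sketch of the same fix with the original defaults left alone:

import numpy as np
from lmfit import Model

def exp_model(x, ampl1=1.0, tau1=0.1):  # defaults as in the original question
    return ampl1*np.exp(-x/tau1)

x = np.array([2.496,2.528,2.56,2.592,2.624])
y = np.array([8774.52,8361.68,7923.42,7502.43,7144.11])

dec_model = Model(exp_model, nan_policy='propagate')
# Initial values passed at fit time override the function defaults:
results = dec_model.fit(y, x=x, ampl1=y[0], tau1=1.0)
print(results.fit_report())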
I'm obviously doing something wrong here... Please have a look at the following program. It runs fine but gives me a lambda parameter for an exponential distribution that is far away from the parameter I used to generate the random observations:
import numpy as np
import arviz as az
import pymc as pm

lambda_param = 0.25
random_size = 1000
x = np.random.exponential(lambda_param, random_size)

basic_model = pm.Model()
with basic_model:
    _lam_ = pm.HalfNormal("lambda", sigma=1)
    Y_obs = pm.Exponential("Y_obs", lam=_lam_, observed=x)
    start = pm.find_MAP(model=basic_model)
    idata = pm.sample(1000, start=start)

summary = az.summary(idata, round_to=6)
summary
On the last run of the program, summary reports a mean for lambda greater than 4, even though I generated the data with lambda = 0.25. Pointing the finger at my programming errors would be highly appreciated.
I found the problem: the uncertainty on _lam_ was too large, and since the exponential distribution is not symmetric, the large uncertainty skewed the result. The fix is simply to use a smaller standard deviation; I also used Normal rather than HalfNormal for simplicity:
import numpy as np
import pymc3 as pm
import arviz as az

lambda_param = 0.25
random_size = 1000
x = np.random.exponential(lambda_param, random_size)

with pm.Model() as basic_model:
    lam = pm.Normal("lam", mu=lambda_param, sigma=0.0001)
    Y_obs = pm.Exponential("Y_obs", lam=lam, observed=x)
    trace = pm.sample(1000, tune=1000)

summary = az.summary(trace, round_to=6)
summary
This gives a mean of 0.25 for lambda, within a small margin of error.
I'm trying to fit my exponential data, but I am unable to get a decent answer. I'm using scipy and the following code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.optimize

def exponential(x, a, k, b):
    return a*np.exp(-x/k) + b

def main():
    filename = 'tek0071ALL.csv'
    df = pd.read_csv(filename, skiprows=14)
    t = df['TIME']
    ch3 = df['CH3']
    # Restrict the fit to the time window [-0.32, -0.18]
    idx1 = df.index[df['TIME'] == -0.32]
    idx2 = df.index[df['TIME'] == -0.18]
    t = t[idx1.values[0]:idx2.values[0]]
    data = ch3[idx1.values[0]:idx2.values[0]]
    popt_exponential, pcov_exponential = scipy.optimize.curve_fit(
        exponential, t, data, p0=[1, .1, 0])
    # print(popt_exponential, pcov_exponential)
    print(popt_exponential[0])
    print(popt_exponential[1])
    print(popt_exponential[2])
    plt.plot(t, data, '.')
    plt.plot(t, exponential(t, *popt_exponential))
    plt.legend(['Data', 'Fit'])  # legend must be set before plt.show()
    plt.show()

main()
This is what the fit looks like:

[plot: measured data points with the fitted exponential curve overlaid]
and I think this means that it's actually a good fit. I think my time constant is correct, and that's what I'm trying to extract. However, the amplitude is really giving me trouble: I expected the amplitude to be around 0.5 by inspection, but instead I get the following values for the equation A*exp(-t/K) + C:

A: 1.2424893552249658e-07
K: 0.0207112474466181
C: 0.010623336832120528
I'm left wondering whether this is correct, and whether my amplitude really ought to be so tiny to account for the exponential's behavior.
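One way to sanity-check this: since the fit window starts at t = -0.32 rather than t = 0, the fitted A is the amplitude extrapolated back to t = 0, so it can legitimately be tiny; the model value at the left edge of the window is what reads as "the amplitude" on the plot. A minimal check with the fitted numbers quoted above:

import numpy as np

# Fitted values quoted above
A = 1.2424893552249658e-07
K = 0.0207112474466181
C = 0.010623336832120528

# Model value A*exp(-t/K) + C at the start of the fit window, t = -0.32:
print(A*np.exp(0.32/K) + C)  # ~0.65, in the same ballpark as the ~0.5 expected by inspection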
I'm having trouble obtaining the dispersion parameter of simulated data using statsmodels' GLM function.
import statsmodels.api as sm
import matplotlib.pyplot as plt
import scipy.stats as stats
import numpy as np
np.random.seed(1)
# Generate data
x = np.random.uniform(0, 100, 50000)
x2 = sm.add_constant(x)
a = 0.5
b = 0.2
y_true = 1/(a+(b*x))
# Add error
scale = 2 # the scale parameter I'm trying to obtain
shape = y_true/scale # given that, for Gamma, mu = scale*shape
y = np.random.gamma(shape=shape, scale=scale)
# Run model
model = sm.GLM(y, x2, family=sm.families.Gamma()).fit()
model.summary()
Here's the summary from above:

[GLM regression results table from model.summary()]
Note that the coefficient estimates are correct (0.5 and 0.2), but the scale (21.995) is way off the scale I set (2).
Can someone point out what it is I'm misunderstanding/doing wrong? Thanks!
As Josef noted in the comments, statsmodels uses a different parameterization. The scale that GLM reports is the dispersion parameter, estimated by default from the Pearson chi-squared statistic; for the Gamma family the dispersion equals 1/shape (so that variance = scale * mu**2), and it is not the same quantity as the scale argument of np.random.gamma.
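To make the simulation match, the dispersion has to be held constant across observations: keep the Gamma shape fixed at 1/dispersion and let NumPy's scale argument carry the mean. In the original code the NumPy scale was held fixed instead, so the implied dispersion 2/mu varies with x and averages to roughly 21 over x ~ Uniform(0, 100), which is essentially the 21.995 that GLM reported. A sketch of the corrected simulation (same data-generating setup as the question, only the Gamma parameterization changed):

import numpy as np
import statsmodels.api as sm

np.random.seed(1)
x = np.random.uniform(0, 100, 50000)
x2 = sm.add_constant(x)
a, b = 0.5, 0.2
mu = 1/(a + b*x)  # true mean under the (default) inverse link

dispersion = 2          # the dispersion GLM should recover
shape = 1/dispersion    # constant shape, since dispersion = 1/shape for Gamma
y = np.random.gamma(shape=shape, scale=mu/shape)  # mean = shape*scale = mu

model = sm.GLM(y, x2, family=sm.families.Gamma()).fit()
print(model.scale)  # now close to 2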
I am trying to fit a lognorm distribution, but the resulting parameters seem a bit odd. Could you please show me my mistake, or explain whether I am misinterpreting the parameters?
import numpy as np
import scipy.stats as st
data = np.array([1050000, 1100000, 1230000, 1300000, 1450000, 1459785, 1654000, 1888000])
s, loc, scale = st.lognorm.fit(data)
#calculating the mean
lognorm_mean = st.lognorm.mean(s = s, loc = loc, scale = scale)
The resulting mean is: 945853602904015.8.
But this doesn't make any sense.
The mean should be:
data_ln = np.log(data)
ln_mean = np.mean(data_ln)
ln_std = np.std(data_ln)
mean = np.exp(ln_mean + np.power(ln_std, 2)/2)
Here the resulting mean is 1391226.31. This should be correct.
Can you please help me with this topic?
Best regards
Norbi
I think you can tune the parameters of the minimizer to get an acceptable result:
import numpy as np
import scipy.stats as st
from scipy.optimize import minimize

data = np.array([1050000, 1100000, 1230000, 1300000,
                 1450000, 1459785, 1654000, 1888000])

def opti_wrap(fun, x0, args, disp=0, **kwargs):
    return minimize(fun, x0, args=args, method='SLSQP',
                    tol=1e-12, options={'maxiter': 1000}).x

s, loc, scale = st.lognorm.fit(data, optimizer=opti_wrap)
lognorm_mean = st.lognorm.mean(s=s, loc=loc, scale=scale)
print(lognorm_mean)  # should give 1392684.4350
The reason you are seeing a strange result is that the default minimizer fails to converge on the maximum-likelihood result, possibly because the cost function misbehaves with so few data points (you are trying to fit 3 parameters to only 8 data points...). Note: I'm using scipy version 1.1.0.
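If tuning the optimizer feels fragile, another option (assuming a location of zero is acceptable for this data) is to pin loc with scipy's floc argument, which reduces the problem to the two-parameter log-normal whose maximum-likelihood estimates match the log-moment computation in the question:

import numpy as np
import scipy.stats as st

data = np.array([1050000, 1100000, 1230000, 1300000,
                 1450000, 1459785, 1654000, 1888000])

# Fix loc=0: only s (std of log(data)) and scale (exp(mean of log(data))) are estimated
s, loc, scale = st.lognorm.fit(data, floc=0)
print(st.lognorm.mean(s=s, loc=loc, scale=scale))  # ~1391226, matching the hand calculation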
I'm trying to plot the exact solution to a differential equation (a radioactive leak model) in Python 2.7 with matplotlib. When plotting the graph with the Euler method or SciPy I get the expected results, but with the exact solution the output is a straight line (it should be a logarithmic curve).
Here is my code:
import math
import numpy as np
import matplotlib.pyplot as plt

# define parameters
r = 1
beta = 0.0864
x0 = 0
maxt = 100.0
tstep = 1.0

# Make arrays for time and radioactivity
t = np.zeros(1)

# Implementing model with exact solution, where Xact = exact solution
Xact = np.zeros(1)
e = math.exp(-(beta/t))
while (t[-1] < maxt):
    t = np.append(t, t[-1]+tstep)
    Xact = np.append(Xact, Xact[-1] + ((r/beta)+(x0-r/beta)*e))

# plot results
plt.plot(t, Xact, color="green")
I realise that my problem may be due to an incorrect equation, but I'd be very grateful if someone could point out an error in my code. Cheers.
You probably want e to depend on t, as in
def e(t): return np.exp(-t/beta)
and then use
Xact = np.append(Xact, (r/beta) + (x0 - r/beta)*e(t[-1]))
But you can write all of that more concisely as

t = np.arange(0, maxt+tstep/2, tstep)
plt.plot(t, (r/beta)+(x0-r/beta)*np.exp(-t/beta), color="green")
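Putting it together as a minimal self-contained script, with the parameter values from the question (plt.show() added so the figure actually appears):

import numpy as np
import matplotlib.pyplot as plt

# Parameters from the question
r = 1
beta = 0.0864
x0 = 0
maxt = 100.0
tstep = 1.0

# Exact solution of the leak model evaluated on a time grid
t = np.arange(0, maxt + tstep/2, tstep)
Xact = (r/beta) + (x0 - r/beta)*np.exp(-t/beta)

plt.plot(t, Xact, color="green")
plt.xlabel("t")
plt.ylabel("X(t)")
plt.show()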