I have an experimental dataset of intensity as a function of energy. These are arrays of 1800 datapoints.
I have been trying to fit a model to this data, given by the equation below:
Imodel = I0 * ((math.cos(phi) + beta*f1)**2 + (math.sin(phi) + beta*f2)**2) + Ioff
I have 2 other datasets, f1 vs. energy and f2 vs. energy. These are arrays of 700 datapoints, albeit over the same energy range as the first dataset.
I want to use this model function together with the f1 and f2 data to find optimal values of the other 4 parameters (I0, phi, beta, Ioff) so that the model fits the experimental dataset as closely as possible.
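Before fitting, the 700-point f1 and f2 arrays would need to be interpolated onto the 1800-point experimental energy grid. A minimal sketch of that step (the array names energy_data, energy_f, f1_raw and f2_raw are assumed placeholders for the loaded columns) might be:
import numpy as np

# Sketch only: energy_data is the 1800-point energy array of the experiment,
# energy_f is the 700-point energy array that f1_raw/f2_raw are tabulated on.
f1 = np.interp(energy_data, energy_f, f1_raw)
f2 = np.interp(energy_data, energy_f, f2_raw)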
I have been looking into curve_fit and least_squares from the scipy.optimize package, as well as fitting packages such as lmfit and scikit-learn, but to no avail.
Can anyone help? Thanks.
Presently I have no representative data from Ayrtonb1 with which to test the method proposed below. The method seems sound on a theoretical basis, but one cannot be sure that it will be satisfactory with the OP's data.
Nevertheless, a preliminary test was carried out with "toy" data (shown below).
I suppose that the screencopy below is sufficient to understand the method and to reproduce the calculation with real data.
The result of this preliminary test is rather good:
LRMSE < 2 for a range up to 600 (Least Root Mean Square Error).
LRMSRE < 2% (Least Root Mean Square Relative Error).
Your data and formula look suspiciously like resonant (or anomalous) X-ray diffraction data, with measurements of scattered intensity going across the Zn K-edge. Although you do not say this, the discussion here will assume that. You say you have 1800 measurements, presumably as a function of X-ray energy in eV. The resonant scattering factors (f1, f2) you show seem to be idealized and possibly "typical", and perhaps not specifically for the Zn K-edge -- at the very least the energy scale shown is not the same as your data.
You will want to treat the data and model the intensity as a function of X-ray energy. And you will want realistic values for f1 and f2 for the element of interest, and at the actual energy points for your data. I recommend using xraydb (full disclosure: I am the lead author) [pip install xraydb], so that you can do
import numpy as np
import xraydb
#edata, idata = function_to_extract_your_data()
# or perhaps testing with
edata = np.linspace(9500, 10500, 501)
f1 = xraydb.f1_chantler('Zn', edata)
f2 = xraydb.f2_chantler('Zn', edata)
As written, your intensity function does not directly depend on energy, though it might at a later date, say to make that offset be linear in energy, not just a constant. You might write a function like:
def intensity(en, phi, beta, scale=1, slope=0, offset=0, f1=-1, f2=1):
    costerm = np.cos(phi) + beta*f1
    sinterm = np.sin(phi) + beta*f2
    return scale * (costerm**2 + sinterm**2) + slope*en + offset
With that you can start just trying out some values to get a feel for the function and how it compares to your data:
import matplotlib.pyplot as plt
beta = 0.025 # Wild Guess!
for phi in np.pi*np.arange(20)/10:
    plt.plot(edata, intensity(edata, phi, beta, f1=f1, f2=f2), label='%.1f' % phi)
plt.legend()
plt.show()
It kind of looks like your value for phi would be around 5.5 to 6 (or -0.8 to -0.3). You could also try different values of beta and plot that with your actual data.
With that model for intensity and a feel for what the range of parameters is, you could then try to fit your data. To do that, I would recommend using lmfit (full disclosure: I am the lead author) [pip install lmfit], where you can create a model from your intensity model function - this will use the names of the function arguments to name the fitting parameters.
from lmfit import Model
imodel = Model(intensity, independent_vars=['en', 'f1', 'f2'])
params = imodel.make_params(scale=1, offset=0, slope=0, beta=0.1, phi=5.5)
That independent_vars will tell Model to not make fitting Parameters for the function arguments f1 and f2 and to expect them to be passed into any evaluation or fit. The other function arguments (scale, phi, etc) will be expected to become fitting variables. You do have to create a "Parameters" object and must give initial values for all parameters. You can put bounds on these or fix them so they are not altered in the fit, as with
params['scale'].min = 0 # force scale to be positive
params['slope'].vary = False # slope will be fixed at 0.
You can then evaluate the model with
init_value = imodel.eval(params, en=edata, f1=f1, f2=f2)
And then eventually do a fit with
result = imodel.fit(idata, params, en=edata, f1=f1, f2=f2)
print(result.fit_report())
plt.plot(edata, idata, label='data')
plt.plot(edata, init_value, label='initial fit')
plt.plot(edata, result.best_fit, label='best fit')
plt.legend()
plt.show()
Finally, for analysis of X-ray resonant scattering be sure to consider including absorption corrections in that intensity calculation. As you go across the Zn K edge, the absorption depth of the sample may change dramatically if the Zn concentration is high.
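A rough, heavily hedged sketch of what such a correction term could look like (the exact form depends on your scattering geometry; the density and path length below are assumed values, and pure Zn is assumed for the attenuation):
# Sketch: energy-dependent attenuation factor across the Zn K edge,
# reusing edata, f1, f2 and intensity() from the code above.
mu = xraydb.mu_elam('Zn', edata)    # mass attenuation coefficient, cm^2/g
rho = 7.14                          # g/cm^3, assumed (pure Zn)
path = 5e-4                         # cm, assumed effective path length
atten = np.exp(-mu * rho * path)
plt.plot(edata, atten * intensity(edata, 5.5, 0.025, f1=f1, f2=f2), label='with attenuation')
plt.legend()
plt.show()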
I'm trying to simulate the 2D Schrödinger equation using the explicit algorithm proposed by Askar and Cakmak (1977). I define a 100x100 grid with a complex function u+iv, null at the boundaries. The problem is, after just a few iterations the absolute value of the complex function explodes near the boundaries.
I post here the code so, if interested, you can check it:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
#Initialization+meshgrid
Ntsteps=30
dx=0.1
dt=0.005
alpha=dt/(2*dx**2)
x=np.arange(0,10,dx)
y=np.arange(0,10,dx)
X,Y=np.meshgrid(x,y)
#Initial Gaussian wavepacket centered in (5,5)
vargaussx=1.
vargaussy=1.
kx=10
ky=10
upre=np.zeros((100,100))
ucopy=np.zeros((100,100))
u=(np.exp(-(X-5)**2/(2*vargaussx**2)-(Y-5)**2/(2*vargaussy**2))/(2*np.pi*(vargaussx*vargaussy)**2))*np.cos(kx*X+ky*Y)
vpre=np.zeros((100,100))
vcopy=np.zeros((100,100))
v=(np.exp(-(X-5)**2/(2*vargaussx**2)-(Y-5)**2/(2*vargaussy**2))/(2*np.pi*(vargaussx*vargaussy)**2))*np.sin(kx*X+ky*Y)
#For the simple scenario, null potential
V=np.zeros((100,100))
#Boundary conditions
u[0,:]=0
u[:,0]=0
u[99,:]=0
u[:,99]=0
v[0,:]=0
v[:,0]=0
v[99,:]=0
v[:,99]=0
#Evolution with Askar-Cakmak algorithm
for n in range(1,Ntsteps):
    upre=np.copy(ucopy)
    vpre=np.copy(vcopy)
    ucopy=np.copy(u)
    vcopy=np.copy(v)
    #For the first iteration, simple Euler method: without this I cannot have the two steps backwards wavefunction at the second iteration
    #I use ucopy to make sure that for example u[i,j] is calculated not using the already modified version of u[i-1,j] and u[i,j-1]
    if(n==1):
        upre=np.copy(ucopy)
        vpre=np.copy(vcopy)
    for i in range(1,len(x)-1):
        for j in range(1,len(y)-1):
            u[i,j]=upre[i,j]+2*((4*alpha+V[i,j]*dt)*vcopy[i,j]-alpha*(vcopy[i+1,j]+vcopy[i-1,j]+vcopy[i,j+1]+vcopy[i,j-1]))
            v[i,j]=vpre[i,j]-2*((4*alpha+V[i,j]*dt)*ucopy[i,j]-alpha*(ucopy[i+1,j]+ucopy[i-1,j]+ucopy[i,j+1]+ucopy[i,j-1]))
#Calculate absolute value and plot
abspsi=np.sqrt(np.square(u)+np.square(v))
fig=plt.figure()
ax=fig.add_subplot(projection='3d')
surf=ax.plot_surface(X,Y,abspsi)
plt.show()
As you can see, the code is extremely simple, and I cannot see where this error is coming from (I don't think it is a stability problem, because alpha < 1/2). Have you ever encountered anything similar in your past simulations?
I'd try setting your dt to a smaller value (e.g. 0.001) and increasing the number of integration steps (e.g. fivefold).
The wavefunction keeps its shape at Ntsteps=150 and well beyond when trying out your code with dt=0.001.
Checking integrals of the motion (e.g. kinetic energy here?) should also confirm that things are going OK (or not) for different choices of dt.
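For example, a cheap check (a sketch only, reusing u, v, dx and the loop index n from the code above) is to track the discrete norm of the wavefunction, which should stay essentially constant if the scheme is behaving:
# Sketch: place inside the time loop; the norm of psi = u + i*v should be conserved.
norm = np.sum(u**2 + v**2) * dx**2
print(n, norm)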
I am illustrating hyperopt's TPE algorithm for my master's project and can't seem to get the algorithm to converge. From what I understand from the original paper and a YouTube lecture, the TPE algorithm works in the following steps:
(in the following, x=hyperparameters and y=loss)
Start by creating a search history of [x,y], say 10 points.
Sort the hyperparameters according to their loss and divide them into two sets using some quantile γ (γ = 0.5 means the sets will be equally sized)
Make a kernel density estimation for both the poor hyperparameter group (g(x)) and good hyperparameter group (l(x))
Good estimations will have low probability in g(x) and high probability in l(x), so we propose to evaluate the function at argmin(g(x)/l(x))
Evaluate (x,y) pair at the proposed point and repeat steps 2-5.
I have implemented this in python on the objective function f(x) = x^2, but the algorithm fails to converge to the minimum.
import numpy as np
import scipy as sp
from matplotlib import pyplot as plt
from scipy.stats import gaussian_kde
def objective_func(x):
    return x**2

def measure(x):
    noise = np.random.randn(len(x))*0
    return x**2+noise

def split_measures(x_obs,y_obs,gamma=1/2):
    #split x and y observations into two sets and return a separation threshold (y_star)
    size = int(len(x_obs)//(1/gamma))
    l = {'x':x_obs[:size],'y':y_obs[:size]}
    g = {'x':x_obs[size:],'y':y_obs[size:]}
    y_star = (l['y'][-1]+g['y'][0])/2
    return l,g,y_star
#sample objective function values for illustration
x_obj = np.linspace(-5,5,10000)
y_obj = objective_func(x_obj)
#start by sampling a parameter search history
x_obs = np.linspace(-5,5,10)
y_obs = measure(x_obs)
nr_iterations = 100
for i in range(nr_iterations):
    #sort observations according to loss
    sort_idx = y_obs.argsort()
    x_obs,y_obs = x_obs[sort_idx],y_obs[sort_idx]
    #split sorted observations in two groups (l and g)
    l,g,y_star = split_measures(x_obs,y_obs)
    #approximate distributions for both groups using kernel density estimation
    kde_l = gaussian_kde(l['x']).evaluate(x_obj)
    kde_g = gaussian_kde(g['x']).evaluate(x_obj)
    #define our evaluation measure for sampling a new point
    eval_measure = kde_g/kde_l
    if i%10==0:
        plt.figure()
        plt.subplot(2,2,1)
        plt.plot(x_obj,y_obj,label='Objective')
        plt.plot(x_obs,y_obs,'*',label='Observations')
        plt.plot([-5,5],[y_star,y_star],'k')
        plt.subplot(2,2,2)
        plt.plot(x_obj,kde_l)
        plt.subplot(2,2,3)
        plt.plot(x_obj,kde_g)
        plt.subplot(2,2,4)
        plt.semilogy(x_obj,eval_measure)
        plt.draw()
    #find point to evaluate and add the new observation
    best_search = x_obj[np.argmin(eval_measure)]
    x_obs = np.append(x_obs,[best_search])
    y_obs = np.append(y_obs,[measure(np.asarray([best_search]))])
plt.show()
I suspect this happens because we keep sampling where we are most certain, thus making l(x) more and more narrow around this point, which doesn't change where we sample at all. So where is my understanding lacking?
So, I am still learning about TPE as well, but here are the two problems in this code:
This code will only ever evaluate a few unique points, because the next location is chosen purely from the kernel density estimates and there is no mechanism for exploring the search space (which is, for example, what acquisition functions provide).
Because the code simply appends every new observation to x_obs and y_obs, it accumulates a lot of duplicates. The duplicates skew the set of observations, which leads to a very strange split, and you can easily see that in the later plots. The eval_measure starts out similar to the objective function but diverges later on.
If you remove the duplicates in x_obs and y_obs, you fix problem no. 2. The first problem can only be removed by adding some way of exploring the search space, for example as sketched below.
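A minimal sketch of both fixes (illustrative only: deduplicate the observations and add a simple epsilon-greedy style of exploration, with an assumed exploration rate of 0.2), replacing the last few lines inside the loop:
# Sketch: drop duplicate observations, then occasionally sample at random
# instead of always taking argmin(g/l).
x_obs, keep_idx = np.unique(x_obs, return_index=True)
y_obs = y_obs[keep_idx]
if np.random.rand() < 0.2:                  # explore 20% of the time (assumed rate)
    best_search = np.random.uniform(-5, 5)
else:
    best_search = x_obj[np.argmin(eval_measure)]
x_obs = np.append(x_obs, [best_search])
y_obs = np.append(y_obs, [measure(np.asarray([best_search]))])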
I've got the following bit of Python (v2.7.14) code, which uses curve_fit from SciPy (v1.0.1) to find parameters for an exponential decay function. Most of the time, I get reasonable results. Occasionally though, I'll get some results which are completely out of my expected range, even though the found parameters will look fine when plotted against the original graph.
First, my understanding of the exponential decay formula comes from https://en.wikipedia.org/wiki/Exponential_decay which I've translated to Python as:
y = a * numpy.exp(-b * x) + c
Where by:
a is the initial value of the data
b is the decay rate, i.e. the inverse of the time it takes the signal to fall to 1/e of its initial value
c is an offset, as I am dealing with non-negative values in my data which never reach zero
x is the current time
The script takes into account that non-negative data is being fitted and offsets the initial guess appropriately. But even without the initial guess, without the offset, using max/min (instead of the first/last values), and with the other things I've tried, I cannot seem to get curve_fit to produce sensible values on the troublesome datasets.
My hypothesis is that the troublesome datasets don't have enough of a curve that can be fit without going way outside the realm of the data. I've looked at the bounds argument for curve_fit, and thought that might be a reasonable option. I'm unsure as to what would make good lower and upper bounds for the calculation, or if it is actually the option I am looking for.
Here is the code. Commented out code are things I've tried.
#!/usr/local/bin/python
import numpy as numpy
from scipy.optimize import curve_fit
import matplotlib.pyplot as pyplot
def exponential_decay(x, a, b, c):
    return a * numpy.exp(-b * x) + c

def fit_exponential(decay_data, time_data, decay_time):
    # The start of the curve is offset by the last point, so subtract
    guess_a = decay_data[0] - decay_data[-1]
    #guess_a = max(decay_data) - min(decay_data)
    # The time that it takes for the signal to reach 1/e becomes guess_b
    guess_b = 1/decay_time
    # Since this is non-negative data, above 0, we use the last data point as the baseline (c)
    guess_c = decay_data[-1]
    #guess_c = min(decay_data)
    guess = [guess_a, guess_b, guess_c]
    print "guess: {0}".format(guess)
    #popt, pcov = curve_fit(exponential_decay, time_data, decay_data, maxfev=20000)
    popt, pcov = curve_fit(exponential_decay, time_data, decay_data, p0=guess, maxfev=20000)
    #bound_lower = [0.05, 0.05, 0.05]
    #bound_upper = [decay_data[0]*2, guess_b * 10, decay_data[-1]]
    #print "bound_lower: {0}".format(bound_lower)
    #print "bound_upper: {0}".format(bound_upper)
    #popt, pcov = curve_fit(exponential_decay, time_data, decay_data, p0=guess, bounds=[bound_lower, bound_upper], maxfev=20000)
    a, b, c = popt
    print "a: {0}".format(a)
    print "b: {0}".format(b)
    print "c: {0}".format(c)
    plot_fit = exponential_decay(time_data, a, b, c)
    pyplot.plot(time_data, decay_data, 'g', label='Data')
    pyplot.plot(time_data, plot_fit, 'r', label='Fit')
    pyplot.legend()
    pyplot.show()
print "Gives reasonable results"
time_data = numpy.array([0.0,0.040000000000000036,0.08100000000000018,0.12200000000000011,0.16200000000000014,0.20300000000000007,0.2430000000000001,0.28400000000000003,0.32400000000000007,0.365,0.405,0.44599999999999995,0.486,0.5269999999999999,0.567,0.6079999999999999,0.6490000000000002,0.6889999999999998,0.7300000000000002,0.7700000000000002,0.8110000000000002,0.8510000000000002,0.8920000000000001,0.9320000000000002,0.9730000000000001])
decay_data = numpy.array([1.342146870531986,1.405586070225509,1.3439802492549762,1.3567811728250267,1.2666276377825874,1.1686375326985337,1.216119360088685,1.2022841507836042,1.1926979408026064,1.1544395213303447,1.1904416926531907,1.1054720201415882,1.112100683833435,1.0811434035632939,1.1221671794680403,1.0673295063196415,1.0036146509494743,0.9984005680821595,1.0134498134883763,0.9996920772051201,0.929782730581616,0.9646581154122312,0.9290690593684447,0.8907360533169936,0.9121560047238627])
fit_exponential(decay_data, time_data, 0.567)
print
print "Gives results that are way outside my expectations"
time_data = numpy.array([0.0,0.040000000000000036,0.08099999999999996,0.121,0.16199999999999992,0.20199999999999996,0.24300000000000033,0.28300000000000036,0.32399999999999984,0.3650000000000002,0.40500000000000025,0.44599999999999973,0.48599999999999977,0.5270000000000001,0.5670000000000002,0.6079999999999997,0.6479999999999997,0.6890000000000001,0.7290000000000001,0.7700000000000005,0.8100000000000005,0.851,0.8920000000000003,0.9320000000000004,0.9729999999999999,1.013,1.0540000000000003])
decay_data = numpy.array([1.4401611921948776,1.3720688158534153,1.3793465463227048,1.2939909686762128,1.3376345321949346,1.3352710161631154,1.3413634841956348,1.248705138603995,1.2914294791901497,1.2581763134585313,1.246975264018646,1.2006447776495062,1.188232179689515,1.1032789127515186,1.163294324147017,1.1686263160765304,1.1434009568472243,1.0511578409946472,1.0814520440570896,1.1035953824496334,1.0626893599266163,1.0645580326776076,0.994855722989818,0.9959891485338087,0.9394584009825916,0.949504060086646,0.9278639431146273])
fit_exponential(decay_data, time_data, 0.6890000000000001)
And here is the text output:
Gives reasonable results
guess: [0.4299908658081232, 1.7636684303350971, 0.9121560047238627]
a: 1.10498934435
b: 0.583046565885
c: 0.274503681044
Gives results that are way outside my expectations
guess: [0.5122972490802503, 1.4513788098693758, 0.9278639431146273]
a: 742.824622191
b: 0.000606308344957
c: -741.41398516
Most notably, with the second set of results, the value for a is very large, the value for c is a correspondingly large negative number, and b is a very small decimal number.
Here is the graph of the first dataset, which gives reasonable results.
Here is the graph of the second dataset, which does not give good results.
Note that the graph itself plots correctly, though the line does not really have a good curve to it.
My questions:
Is my implementation of the exponential decay algorithm with curve_fit correct?
Are my initial guess parameters good enough?
Is the bounds parameter the correct solution for this problem? If so, what is a good way to determine lower and upper bounds?
Have I missed something here?
Again, thank you!
When you say that the second fit gives results that are "way outside" your expectations, and that although the second graph "plots correctly" the line does not really have a good curve to it, you are on the right track to understanding what is going on. I think you are just missing a piece of the puzzle.
The second graph is fit pretty well by a curve that does look linear. That probably means that you don't really have enough change in your data (well, perhaps below the noise level) to detect that it is an exponential decay.
I would bet that if you printed out not only the best-fit values but also the uncertainties and correlations for the variables that you would see that the uncertainties are huge and some of the correlations are very close to 1. That may mean that taking into account the uncertainties (and measurements always have uncertainties) the results might actually fit with your expectation. And that may also tell you that the data you have does not support an exponential decay very well.
You might also try other models for this data ("linear" comes to mind ;)) and compare goodness-of-fit statistics such as chi-square and Akaike information criterion.
scipy.curve_fit does return the covariance matrix -- the pcov that you did not use in your example. Unfortunately, scipy.curve_fit does not convert these values into uncertainties and correlation values, and it does not attempt to return any goodness-of-fit statistics at all.
To fully explain any fit to data, you need not only the best-fit values but also an estimate of the uncertainties for the variable parameters. And you need the goodness-of-fit statistics in order to determine if a fit is good, or at least whether one fit is better than another.
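As a sketch (reusing the popt and pcov that curve_fit returns inside fit_exponential above), the uncertainties and correlations can be estimated from the covariance matrix like this:
perr = numpy.sqrt(numpy.diag(pcov))     # 1-sigma uncertainties for (a, b, c)
corr = pcov / numpy.outer(perr, perr)   # correlation matrix
print("uncertainties: {0}".format(perr))
print("correlations:")
print(corr)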
I have a dataset that I am trying to fit with parameters (a,b,c,d) that are within +/- 5% of the true fitting parameters. However, when I do this with scipy.optimize.curve_fit I get the error "ValueError: 'x0' is infeasible." inside the least squares package.
If I relax my bounds, then optimize.curve_fit seems to work as desired. I have also noticed that my parameters that are larger seem to be more flexible in applying bounds (i.e. I can get these to work with tighter constraints, but never below 20%). The following code is an MWE and has two sets of bounds (variable B), one that works and one that returns an error.
# %% import modules
import IPython as IP
IP.get_ipython().magic('reset -sf')
import matplotlib.pyplot as plt
import os as os
import numpy as np
import scipy as sp
import scipy.optimize  # needed so that sp.optimize is available below
import scipy.io as sio
plt.close('all')
#%% Load the data
capacity = np.array([1.0,9.896265560165975472e-01,9.854771784232364551e-01,9.823651452282157193e-01,9.797717842323651061e-01,9.776970954356846155e-01,9.751037344398340023e-01,9.735477178423235234e-01,9.714730290456431439e-01,9.699170124481327759e-01,9.683609958506222970e-01,9.668049792531120401e-01,9.652489626556015612e-01,9.636929460580913043e-01,9.621369294605808253e-01,9.610995850622406911e-01,9.595435684647302121e-01,9.585062240663900779e-01,9.574688796680497216e-01,9.559128630705394647e-01,9.548755186721991084e-01,9.538381742738588631e-01,9.528008298755185068e-01,9.517634854771783726e-01,9.507261410788381273e-01,9.496887966804978820e-01,9.486514522821576367e-01,9.476141078838172804e-01,9.460580912863070235e-01,9.450207468879666672e-01,9.439834024896265330e-01,9.429460580912862877e-01,9.419087136929459314e-01,9.408713692946057972e-01,9.393153526970953182e-01,9.382780082987551840e-01,9.372406639004148277e-01,9.356846473029045708e-01,9.346473029045642145e-01,9.330912863070539576e-01,9.320539419087136013e-01,9.304979253112033444e-01,9.289419087136928654e-01,9.273858921161826085e-01,9.258298755186721296e-01,9.242738589211617617e-01,9.227178423236513938e-01,9.211618257261410259e-01,9.196058091286306579e-01,9.180497925311202900e-01,9.159751037344397995e-01,9.144190871369294316e-01,9.123443983402489410e-01,9.107883817427384621e-01,9.087136929460579715e-01,9.071576763485477146e-01,9.050829875518671130e-01,9.030082987551866225e-01,9.009336099585061319e-01,8.988589211618257524e-01,8.967842323651451508e-01,8.947095435684646603e-01,8.926348547717841697e-01,8.905601659751035681e-01,8.884854771784231886e-01,8.864107883817426980e-01,8.843360995850622075e-01,8.817427385892115943e-01,8.796680497925309927e-01,8.775933609958505022e-01,8.749999999999998890e-01,8.729253112033195094e-01,8.708506224066390189e-01,8.682572614107884057e-01,8.661825726141078041e-01,8.635892116182571909e-01,8.615145228215767004e-01,8.589211618257260872e-01,8.563278008298754740e-01,8.542531120331948724e-01,8.516597510373442592e-01,8.490663900414936460e-01,8.469917012448132665e-01,8.443983402489626533e-01,8.418049792531120401e-01,8.397302904564315496e-01,8.371369294605809364e-01,8.345435684647303232e-01,8.324688796680497216e-01,8.298755186721991084e-01,8.272821576763484952e-01,8.246887966804978820e-01,8.226141078838173915e-01,8.200207468879667783e-01,8.174273858921160540e-01,8.153526970954355635e-01,8.127593360995849503e-01,8.101659751037343371e-01,8.075726141078837239e-01,8.054979253112033444e-01,8.029045643153527312e-01,8.003112033195021180e-01,7.977178423236515048e-01,7.956431535269707922e-01,7.930497925311201790e-01,7.904564315352695658e-01,7.883817427385891863e-01,7.857883817427385731e-01,7.831950207468879599e-01,7.811203319502073583e-01,7.785269709543567451e-01,7.759336099585061319e-01,7.738589211618256414e-01,7.712655601659750282e-01,7.686721991701244150e-01,7.665975103734440355e-01,7.640041493775934223e-01,7.619294605809127097e-01,7.593360995850620965e-01,7.567427385892114833e-01,7.546680497925311037e-01,7.520746887966804906e-01,7.499999999999998890e-01,7.474066390041492758e-01,7.453319502074687852e-01,7.427385892116181720e-01,7.406639004149377925e-01,7.380705394190871793e-01,7.359958506224064667e-01,7.339211618257260872e-01,7.313278008298754740e-01,7.292531120331949834e-01,7.266597510373443702e-01,7.245850622406637687e-01,7.225103734439833891e-01,7.199170124481327759e-01,7.178423236514521744e-01,7.157676348547717948e-01,7.136929460580911933e-01,7.110995850622405801e-01,7.090248962655600895e-01,7.069502074688797100e-01,7.048755186721989974e-01,7.022821576763483842e-01,7.002074688796680046e-01,6.981327800829875141e-01,6.960580912863069125e-01,6.939834024896265330e-01,6.919087136929459314e-01,6.898340248962655519e-01,6.877593360995849503e-01])
cycles = np.arange(0,151)
#%% fit the capacity data
# define the empirical model to be fitted
def He_model(k,a,b,c,d):
    return a*np.exp(b*k)+c*np.exp(d*k)
# Fit the entire data set with the function
P0 = [-40,0.005,40,-0.005]
fit, tmp = sp.optimize.curve_fit(He_model, cycles,capacity, p0=P0,maxfev=10000000)
capacity_fit = He_model(cycles, fit[0], fit[1],fit[2], fit[3])
# track all four data points with a 5% bound from the best fit
b_lim = np.zeros((4,2))
for i in range(4):
    b_lim[i,0] = fit[i]-np.abs(0.2*fit[i]) # these should be 0.05
    b_lim[i,1] = fit[i]+np.abs(0.2*fit[i])
# bounds that work
B = ([b_lim[0,0],-np.inf,b_lim[2,0],-np.inf],[b_lim[0,1], np.inf, b_lim[2,1], np.inf])
# bounds that do not work, but are closer to what I want.
#B = ([b_lim[0,0],b_lim[1,0],b_lim[2,0],b_lim[3,0]],[b_lim[0,1], b_lim[1,1], b_lim[2,1], b_lim[3,1]])
fit_4_5per, tmp = sp.optimize.curve_fit(He_model, cycles,capacity , p0=P0,bounds=B)
capacity_4_5per = He_model(cycles, fit_4_5per[0], fit_4_5per[1], fit_4_5per[2], fit_4_5per[3])
#%% plot the results
plt.figure()
plt.plot(cycles,capacity,'o',fillstyle='none',label='data',markersize=4)
plt.plot(cycles,capacity_fit,'--',label='best fit')
plt.plot(cycles,capacity_4_5per,'-.',label='5 percent bounds')
plt.ylabel('capacity')
plt.xlabel('charge cycles')
plt.legend()
plt.ylim([0.70,1])
plt.tight_layout()
I understand that optimize.curve_fit may need some room to explore the data set to find the optimum, but I feel that 5% should be enough for it. Maybe I am wrong? Is there a better way to apply bounds to a parameter?
The ValueError of "x0 is infeasible" is raised because your initial values violate the bounds. Printing out the parameter values and the bounds will show this.
Basically, you're setting the bounds too cleverly, based on the first set of refined values. But those refined values are different enough from your starting values that P0 falls outside the bounds you pass to the second call to curve_fit.
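A sketch of the diagnostic and one easy fix (illustrative only, reusing P0, fit, B, cycles and capacity from the code above):
# Print each starting value against its bounds to see which one is infeasible.
for name, p0val, lo, hi in zip('abcd', P0, B[0], B[1]):
    print(name, p0val, lo, hi, lo <= p0val <= hi)

# One possible fix: start the bounded fit from the refined values that the
# bounds were built around, which by construction lie inside fit +/- 20%.
fit_4_5per, tmp = sp.optimize.curve_fit(He_model, cycles, capacity, p0=fit, bounds=B)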
More importantly, what leads you to "feel that 5% should be enough"? Primarily, you should apply bounds to make sure the model makes sense, and secondarily to help the fit avoid false solutions. You're calculating the bounds based on an initial fit, so I doubt there's a strong physical justification for those bounds. Why not let the fit do its job?