The only example/docs I can find are on the Scipy docs page.
To test, I'm looking at a time-independent Schrödinger equation in a 1D infinite potential well. This has a neat analytic solution, found by solving the DE and inserting the boundary conditions ψ(0) = 0, ψ(L) = 0, and the condition that the wavefunction normalizes to 1; but this question applies to solving any DE where the BCs we know aren't for the initial value.
You can solve it numerically with Scipy's solve_ivp by starting with ψ(0) = 0, and cheating to place ψ'(0) appropriately using the analytic solution. You can use a shooting method to find an appropriate E value, e.g. from the normalization condition above.
These are two sets of BCs: ψ(0) = 0 for both and normalization for both, plus a second value of ψ for the analytic approach, or an initial value of ψ' for the ivp approach. Scipy's solve_bvp seems to offer a solution using the first set of BCs numerically (without cheating by inserting ψ'), but I can't get it working. This pseudocode describes the problem, and is how I expect the API to behave:
bcs = {0: (0, None), L: (0, None)} # Two BCs on ψ; no BCs on derivative
x_span = (0, L)
sol = solve_bvp(rhs, bcs, x_span)
In reality, the code looks something like this, and I can't get it to work:
def bc(ψ_a, ψ_b):
    return np.array([ψ_a[0], ψ_b[0]])
x_span = (0, L)
x_eval = np.linspace(x_span[0], x_span[1], int(1e5))
x_guess = np.array([0, L])
ψ_guess = np.array([[0, 1], [0, -1]])
res = solve_bvp(rhs_1d, bc, x_guess, ψ_guess)
I've no idea how to build the bc function, and don't know why the guesses are set up the way they are. I'm also unsure how I can guess a value of ψ without also inserting a guess for ψ' (the docs imply you can). Also of note, the docs show an example implying you can use solve_bvp for a normalization BC as well, but I'm not sure how to approach that (the example is too sparse).
The equivalent and working ivp code, for reference (compare to my solve_bvp pseudocode):
Python code:
ψ_0 = (0, np.sqrt(2/L) * n*np.pi/L)
x_span = (0, L)
sol = solve_ivp(rhs_1d, x_span, ψ_0)
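For completeness, the shooting idea mentioned above would look roughly like this. This is only a sketch, not my working code: the well width L, the simplified units in rhs, and the bracket passed to brentq are placeholders.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L = 1.0  # placeholder well width

def rhs(x, y, E):
    # y = [ψ, ψ']; units chosen so that ψ'' = -2*E*ψ inside the well
    return [y[1], -2*E*y[0]]

def psi_at_L(E):
    # integrate from x = 0 with ψ(0) = 0 and an arbitrary slope, return ψ(L)
    sol = solve_ivp(rhs, (0, L), [0.0, 1.0], args=(E,), max_step=L/1000)
    return sol.y[0, -1]

# adjust E until ψ(L) = 0; this bracket contains the ground state for L = 1
E0 = brentq(psi_at_L, 1.0, 10.0)

The normalization would then be imposed afterwards by rescaling the resulting ψ.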
For the eigenvalue problem
-u''+V(x)u = c*u
with boundary conditions
u(0)=0=u(L)
and normalization
int(u(x)^2, x=0 to L)=1
set up the integral as a third component. With the eigenvalue as a parameter these are 4 dimensions, allowing for 4 boundary conditions; the additional 2 are that the integral at 0 is zero and that the integral at L has value 1.
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt

# some length
L = 10
# some potential function
def V(x): return 1+(2*x-L)**2
# the ODE function
def odesys(x,y,p):
    u,v,S = y; c = p[0]
    return [v, (V(x)-c)*u, u**2]
# the boundary conditions
def boundary(y0, yL, c):
    return [ y0[0], yL[0], y0[2], yL[2]-1 ]
With the initial guess you select approximately what eigenfunction/eigenvalue you will get, more or less.
n = 11
w = (np.pi*n)/L
x_init = np.linspace(0, L, 4*n+1)
u_init = np.sin(w*x_init)
v_init = np.cos(w*x_init)*w
y_init = [ u_init, v_init, x_init/L ]
There is no need to put too many points into the guess, just enough that the structure of the first component is faithfully represented.
Then call the solver with the prepared data. Note that the default tolerance is 1e-3; if you want better, you have to allow for a finer subdivision. If everything runs fine, plot the solution.
res = solve_bvp(odesys, boundary, x_init, y_init, p=[w**2], max_nodes=10000, tol=1e-6)
print(res.message)
if res.success:
    x_disp = np.linspace(0, L, 3001)
    y_disp = res.sol(x_disp)
    plt.plot(x_disp, y_disp[0])
    plt.title("eigenfunction to eigenvalue $\\lambda=%.6f$" % res.p[0])
    plt.grid(); plt.show()
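As a quick sanity check (my addition, not part of the recipe above), the third solution component is the running integral of u^2, so it should go from 0 at x=0 to 1 at x=L:

print(res.p[0])                   # converged eigenvalue
print(res.y[2, 0], res.y[2, -1])  # running integral of u^2: ~0 and ~1 by the BCs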
In the following code I am trying to implement the following
- write a function naturalSpline that implements cubic spline interpolation with natural boundary conditions
- use a tridiagonal solver to solve the arising tridiagonal system for the first derivatives
- the prototype of the function should read yy=naturalSpline(x,y,xx) where (x,y) are the input points and data, and xx are the points where the data should be interpolated
I figured I would start with the second bullet point, creating the tridiagonal solver, which is just the Thomas algorithm. I spent some time creating this part of the code and have formatted it below. But now I am trying to finish the first and third bullet points, and I am not sure how to use what I have already done to finish those. Looking for some help with this! Thanks in advance.
import numpy as np

def TDMA(a, b, c, d):
    n = len(d)
    w = np.zeros(n-1, float)
    g = np.zeros(n, float)
    p = np.zeros(n, float)

    w[0] = c[0]/b[0]
    g[0] = d[0]/b[0]

    for i in range(1, n-1):
        w[i] = c[i]/(b[i] - a[i-1]*w[i-1])
    for i in range(1, n):
        g[i] = (d[i] - a[i-1]*g[i-1])/(b[i] - a[i-1]*w[i-1])
    p[n-1] = g[n-1]
    for i in range(n-1, 0, -1):
        p[i-1] = g[i-1] - w[i-1]*p[i]
    return p
A = np.array([[10,2,0,0],[3,10,4,0],[0,1,7,5], [0,0,3,4]],dtype=float)
a = np.array([3.,1,3])
b = np.array([10.,10.,7.,4.])
c = np.array([2.,4.,5.])
d = np.array([3,4,5,6.])
print (TDMA(a, b, c, d))
This gives the correct output; I even tested it against np.linalg.solve(A, d) to make sure it was correct:
[ 0.14877589 0.75612053 -1.00188324 2.25141243]
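A one-line check of that comparison (my addition):

print(np.allclose(TDMA(a, b, c, d), np.linalg.solve(A, d)))  # True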
For each interval [x_k, x_(k+1)], you can solve the four equations
p_k(x_k) = f(x_k) = y_k
p_k'(x_k) = f'(x_k) = d_k
p_k(x_(k+1)) = f(x_(k+1)) = y_(k+1)
p_k'(x_(k+1)) = f'(x_(k+1)) = d_(k+1)
(without checking your code, I assume that this is what you did).
From this, you can construct a dict
{'polynomials': [ [a_0, ..., d_0], ..., [a_24, ..., d_24] ],
'knots': [x_0, ..., x_24]}
For each x of your 250 points, you check for which k the point x is in the interval [x_k, x_(k+1)] and evaluate p_k(x).
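To make this concrete, here is one way the pieces could fit together. This is only a sketch: it keeps the cubics in Hermite form and evaluates them directly instead of storing the coefficient dict described above, and it reuses the TDMA function from the question; the first and last rows of the system encode the natural conditions p''(x_0) = p''(x_(n-1)) = 0.

def naturalSpline(x, y, xx):
    x, y, xx = np.asarray(x, float), np.asarray(y, float), np.asarray(xx, float)
    n = len(x)
    h = np.diff(x)                               # interval lengths h_k = x_(k+1) - x_k
    # tridiagonal system for the first derivatives d_k at the knots
    sub = np.zeros(n-1); diag = np.zeros(n); sup = np.zeros(n-1); rhs = np.zeros(n)
    diag[0], sup[0], rhs[0] = 2.0, 1.0, 3.0*(y[1]-y[0])/h[0]          # natural BC at x_0
    diag[-1], sub[-1], rhs[-1] = 2.0, 1.0, 3.0*(y[-1]-y[-2])/h[-1]    # natural BC at x_(n-1)
    for k in range(1, n-1):                      # continuity of p'' at interior knots
        sub[k-1] = 1.0/h[k-1]
        diag[k]  = 2.0*(1.0/h[k-1] + 1.0/h[k])
        sup[k]   = 1.0/h[k]
        rhs[k]   = 3.0*((y[k]-y[k-1])/h[k-1]**2 + (y[k+1]-y[k])/h[k]**2)
    d = TDMA(sub, diag, sup, rhs)
    # evaluate the Hermite cubic of the interval containing each query point
    k = np.clip(np.searchsorted(x, xx) - 1, 0, n-2)
    t = (xx - x[k])/h[k]
    return (y[k]*(2*t**3 - 3*t**2 + 1) + h[k]*d[k]*(t**3 - 2*t**2 + t)
            + y[k+1]*(-2*t**3 + 3*t**2) + h[k]*d[k+1]*(t**3 - t**2))

Called as yy = naturalSpline(x, y, xx), this matches the required prototype.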
All of this is straightforward mathematics and Python coding. If something is not clear, you are better off learning more about both fields, instead of getting specialized advice on this website.
I want to solve this differential equation:
y′′+2y′+2y=cos(2x) with initial conditions:
y(1)=2,y′(2)=0.5
y′(1)=1,y′(2)=0.8
y(1)=0,y(2)=1
and its code is:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def dU_dx(U, x):
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]
U0 = [1,0]
xs = np.linspace(0, 10, 200)
Us = odeint(dU_dx, U0, xs)
ys = Us[:,0]
plt.xlabel("x")
plt.ylabel("y")
plt.title("Damped harmonic oscillator")
plt.plot(xs,ys);
How can I make the solution satisfy these conditions?
Your initial conditions are not initial conditions, as they give values at two different points. These are all boundary conditions.
def bc1(u1,u2): return [u1[0]-2.0,u2[1]-0.5]
def bc2(u1,u2): return [u1[1]-1.0,u2[1]-0.8]
def bc3(u1,u2): return [u1[0]-0.0,u2[0]-1.0]
You need a BVP solver to solve these boundary value problems.
You can either make your own solver using the shooting method; in case 1:

from scipy.optimize import fsolve

def shoot(b): return odeint(dU_dx, [2.0, float(b)], [1, 2])[-1, 1] - 0.5

b = fsolve(shoot, 0.0)[0]
N = 51                        # number of output points (any value will do)
T = np.linspace(1, 2, N)
U = odeint(dU_dx, [2, b], T)
or use the secant method instead of scipy.optimize.fsolve; as the problem is linear, this should converge in 1, at most 2 steps.
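For instance, a hand-rolled secant step (a sketch of what is meant, reusing shoot from above):

# two trial slopes; since shoot() is affine in b for this linear ODE,
# a single secant update already lands on the root
b0, b1 = 0.0, 1.0
f0, f1 = shoot(b0), shoot(b1)
b = b1 - f1*(b1 - b0)/(f1 - f0)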
Or you can use the scipy.integrate.solve_bvp solver (which is perhaps newer than the question). Your task is similar to the documented examples. Note that the argument order of the ODE function is switched relative to odeint in all the other solvers; even in odeint you can pass the option tfirst=True to get the same (x, u) order.
def dudx(x,u): return [u[1], np.cos(2*x)-2*(u[1]+u[0])]
Solutions generated with solve_bvp, the nodes are the automatically generated subdivision of the integration interval, their density tells how "non-flat" the ODE is in that region.
xplot = np.linspace(1, 2, 161)
for k, bc in enumerate([bc1, bc2, bc3]):
    res = solve_bvp(dudx, bc, [1.0, 2.0], [[0, 0], [0, 0]], tol=1e-5)
    print(res.message)
    l, = plt.plot(res.x, res.y[0], 'x')
    c = l.get_color()
    plt.plot(xplot, res.sol(xplot)[0], c=c, label="%d." % (k+1))
Solutions generated with the shooting method, using the values at x=0 as unknown parameters and then computing the solution trajectories on the interval [0,3].
x = np.linspace(0, 3, 301)
for k, bc in enumerate([bc1, bc2, bc3]):
    def shoot(u0):
        u = odeint(dudx, u0, [0, 1, 2], tfirst=True)
        return bc(u[1], u[2])
    u0 = fsolve(shoot, [0, 0])
    u = odeint(dudx, u0, x, tfirst=True)
    l, = plt.plot(x, u[:, 0], label="%d." % (k+1))
    c = l.get_color()
    plt.plot(x[::100], u[::100, 0], 'x', c=c)
You can also use the scipy.integrate.ode class. It is similar to scipy.integrate.odeint but accepts a jac parameter, the Jacobian of the right-hand side with respect to the state vector (here the derivative of dU_dx with respect to U).
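A minimal sketch of that interface (my addition; the initial state at x = 1 and the step size are arbitrary placeholders, since a plain integrator still needs a full initial value):

import numpy as np
from scipy.integrate import ode

def f(x, u):
    return [u[1], np.cos(2*x) - 2*(u[1] + u[0])]

def jac(x, u):
    # Jacobian of f with respect to u
    return [[0.0, 1.0], [-2.0, -2.0]]

solver = ode(f, jac).set_integrator('vode', method='bdf', with_jacobian=True)
solver.set_initial_value([2.0, 1.0], 1.0)
xs, us = [solver.t], [solver.y[0]]
while solver.successful() and solver.t < 2.0:
    solver.integrate(solver.t + 0.01)
    xs.append(solver.t)
    us.append(solver.y[0])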
I have a function Imaginary which describes a physics process and I want to fit this to a dataset x_interpolate, y_interpolate. The function is a form of a Lorentzian peak function and I have some initial values that are user given, except for f_peak (the peak location) which I find using a peak finding algorithm. All of the fit parameters, except for the offset, are expected to be positive and thus I have set bounds_I accordingly.
def Imaginary(freq, alpha, res, Ms, off):
    numerator = (2*alpha*freq*res**2)
    denominator = (4*(alpha*res*freq)**2) + (res**2 - freq**2)**2
    Im = Ms*(numerator/denominator) + off
    return Im

pI = np.array([alpha_init, f_peak, Ms_init, 0])
bounds_I = ([0, 0, 0, -np.inf], [np.inf, np.inf, np.inf, np.inf])
poptI, pcovI = curve_fit(Imaginary, x_interpolate, y_interpolate, pI, bounds=bounds_I)
In some situations I want to keep the parameter f_peak fixed during the fitting process. I tried an easy solution by changing bounds_I to:
bounds_I = ([0, f_peak-0.001, 0, -np.inf], [np.inf, f_peak+0.001, np.inf, np.inf])
This is, for many reasons, not an optimal way of doing this, so I was wondering if there is a more Pythonic way? Thank you for your help.
If a parameter is fixed, it is not really a parameter, so it should be removed from the list of parameters. Define a model that has that parameter replaced by a fixed value, and fit that. Example below, simplified for brevity and to be self-contained:
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(10)
y = np.sqrt(x)

def parabola(x, a, b, c):
    return a*x**2 + b*x + c

fit1 = curve_fit(parabola, x, y)  # fit1[0] is [-0.02989396, 0.56204598, 0.25337086]
b_fixed = 0.5
fit2 = curve_fit(lambda x, a, c: parabola(x, a, b_fixed, c), x, y)
The second call to fit returns [-0.02350478, 0.35048631], which are the optimal values of a and c. The value of b was fixed at 0.5.
Of course, the parameter should be removed from the initial vector pI and the bounds as well.
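Applied to the question's setup, that would look something like the following sketch (using the variable names from the question; the lambda closes over the fixed f_peak):

# res is fixed at f_peak, so it is dropped from the initial guess and the bounds
pI_fixed = np.array([alpha_init, Ms_init, 0])
bounds_fixed = ([0, 0, -np.inf], [np.inf, np.inf, np.inf])
poptI, pcovI = curve_fit(lambda freq, alpha, Ms, off: Imaginary(freq, alpha, f_peak, Ms, off),
                         x_interpolate, y_interpolate, pI_fixed, bounds=bounds_fixed)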
You might find lmfit (https://lmfit.github.io/lmfit-py/) helpful. This library adds a higher-level interface to the scipy optimization routines, aiming for a more Pythonic approach to optimization and curve fitting. For example, it uses Parameter objects to allow setting bounds and fixing parameters without having to modify the objective or model function. For curve-fitting, it defines high level Model functions that can be used.
For your example, you could use your Imaginary function as you've written it with
from lmfit import Model
lmodel = Model(Imaginary)
and then create Parameters (lmfit will name the Parameter objects according to your function signature), providing initial values:
params = lmodel.make_params(alpha=alpha_init, res=f_peak, Ms=Ms_init, off=0)
By default all Parameters are unbound and will vary in the fit, but you can modify these attributes (without rewriting the model function):
params['alpha'].min = 0
params['res'].min = 0
params['Ms'].min = 0
You can set one (or more) of the parameters to not vary in the fit with:
params['res'].vary = False
To be clear: this does not require altering the model function, making it much easier to change what is fixed, what bounds might be imposed, and so forth.
You would then perform the fit with the model and these parameters:
result = lmodel.fit(y_interpolate, params, freq=x_interpolate)
You can get a report of fit statistics, best-fit values, and uncertainties for the parameters with
print(result.fit_report())
The best fit Parameters will be held in result.params.
FWIW, lmfit also has builtin Models for many common forms, including Lorentzian and a Constant offset. So, you could construct this model as
from lmfit.models import LorentzianModel, ConstantModel
mymodel = LorentzianModel(prefix='l_') + ConstantModel()
params = mymodel.make_params()
which will have Parameters named l_amplitude, l_center, l_sigma, and c (where c is the constant) and the model will use the name x for the independent variable (your freq). This approach can become very convenient when you may want to change the functional form of the peaks or background, or when fitting multiple peaks to a spectrum.
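A possible fit call with that composite model might look like this (a sketch; the starting values are placeholders taken from the question's variables and are not tuned):

params = mymodel.make_params(l_amplitude=Ms_init, l_center=f_peak, l_sigma=1.0, c=0.0)
result = mymodel.fit(y_interpolate, params, x=x_interpolate)
print(result.fit_report())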
I was able to solve this issue for an arbitrary number of parameters and arbitrary positioning of the fixed parameters:
import numpy as np
from scipy.optimize import curve_fit

def d_fit(x, y, param, boundMi, boundMx, listparam):
    # split the parameters into fitted (S...) and fixed (N...) groups
    Sparam, SboundMi, SboundMx = np.asarray([]), np.asarray([]), np.asarray([])
    Nparam = np.asarray([])
    for i in range(len(param)):
        if listparam[i] == 1:
            Sparam = np.append(Sparam, param[i])
            SboundMi = np.append(SboundMi, boundMi[i])
            SboundMx = np.append(SboundMx, boundMx[i])
        else:
            Nparam = np.append(Nparam, param[i])

    def funF(x, Sparam):
        # rebuild the full parameter vector from the fitted and fixed values
        j = 0
        for i in range(len(param)):
            if listparam[i] == 1:
                param[i] = Sparam[i-j]
            else:
                param[i] = Nparam[j]
                j = j + 1
        return fun(x, param)

    return curve_fit(lambda x, *Sparam: funF(x, Sparam), x, y,
                     p0=Sparam, bounds=(SboundMi, SboundMx))
In this case:
param = [a,b,c,...] # parameters array (any size)
boundMi = [min_a, min_b, min_c,...] # minimum allowable value of each parameter
boundMx = [max_a, max_b, max_c,...] # maximum allowable value of each parameter
listparam = [0,1,1,0,...] # 1 = fit and 0 = fix the corresponding parameter in the fit routine
and the root function is defined as
def fun(x, param):
    a, b, c, d = param   # unpack as many parameters as the model uses
    return a*b/c         # any function of the params a, b, c, d...
This way, you can change the root function and the number of parameters without changing the fit routine.
And, at any time, you can fix or let fit any parameter by changing "listparam".
Use like this:
popt, pcov = d_fit(x, y, param, boundMi, boundMx, listparam)
"popt" and "pcov" are 1D arrays of the size of the number of "1" in "listparam" bringing the results of the fitted parameters (best value and err matrix)
"param" will ramain an 1D array of the same size of the original (input) "param", HOWEVER IT WILL BE UPDATED AUTOMATICALLY TO THE FITTED VALUES (same as "popt") for the fitted values, keeping the fixed values according to "listparam"
Hope this can be useful!
Obs1: x = 1D-array independent values and y = 1D-array dependent values
Obs2: This is my first post. Please let me know if I can improve it!
I'm a bit new to Python and PyMC, and making rapid progress. But I'm just confused about the use of setting deterministic values of a 2D matrix. I have a model below, that I cannot get to parse correctly. The problem relates to setting the value theta in the model.
import numpy as np
import pymc
Define known variables:
N = 2
T = 10
tau = 1
Define the model... which I cannot get to parse correctly. It's the allocation of theta that I'm having trouble with. The aim is to get samples of D and x. Theta is just an intermediate variable, but I need to keep it as it's used in more complex variations of the model.
def NAFCgenerator():

    D = np.empty(T, dtype=object)
    theta = np.empty([N,T], dtype=object)
    x = np.empty([N,T], dtype=object)

    # true location of signal
    for t in range(T):
        D[t] = pymc.DiscreteUniform('D_%i' % t, lower=0, upper=N-1)

    for t in range(T):
        for n in range(N):

            @pymc.deterministic(plot=False)
            def temp_theta(dt=D[t], n=n):
                return dt == n
            theta[n,t] = temp_theta

            x[n,t] = pymc.Normal('x_%i,%i' % (n,t),
                                 mu=theta[n,t], tau=tau)

    return locals()
** EDIT **
Explicit indexing is useful for me as I'm learning both PyMC and Python. But it seems that extracting MCMC samples is a bit clunky, e.g.
D0values = pymc_generator.trace('D_0')[:]
But I am probably missing something. I did manage to get a vectorised version working:
# Approach 1b - actually quite promising
def NAFCgenerator():

    # NOTE TO SELF. It's important to declare these as objects
    D = np.empty(T, dtype=object)
    theta = np.empty([N,T], dtype=object)
    x = np.empty([N,T], dtype=object)

    # true location of signal
    D = pymc.Categorical('D', spatial_prior, size=T)

    # displayed stimuli
    @pymc.deterministic(plot=False)
    def theta(D=D):
        theta = np.zeros([N,T])
        theta[0, D==0] = 1
        theta[1, D==1] = 1
        return theta

    #for n in range(N):
    x = pymc.Normal('x', mu=theta, tau=tau)

    return locals()
This seems to make it easier to get at the MCMC samples, for example:
Dvalues = pymc_generator.trace('D')[:]
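For completeness, pymc_generator above is just the MCMC sampler built from the model function; a sketch of how I create it (the iteration counts are arbitrary):

pymc_generator = pymc.MCMC(NAFCgenerator())
pymc_generator.sample(iter=5000, burn=1000)   # arbitrary sample sizes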
In PyMC2, when creating deterministic nodes with decorators, the default is to take the node name from the function name, so in your loop every temp_theta node ends up with the same name. The solution is simple: specify the node name as a parameter of the decorator.
@pymc.deterministic(name='temp_theta_%d_%d' % (t, n), plot=False)
def temp_theta(dt=D[t], n=n):
    return dt == n

theta[n,t] = temp_theta
Here is a notebook that puts this in context.