Checking the result of solve_ivp with solve_bvp

I am hoping to use scipy.integrate.solve_bvp to solve a 2nd order differential equation. I am checking my process against a previously solved equation, so that I can be confident when moving on to more complex equations.
We begin with the differential equation
f''(x) + f(x) - f(x)^3 = 0
subject to the boundary conditions
f(x=0) = 0,    f(x -> infty) = gammaA
where gammaA is some constant between 0 and 1. I am finding numerical solutions for this and comparing them to a known analytic form (at least for gammaA = 1, a tanh function). For any given gammaA, we can integrate this equation once and utilise the BC at infinity.
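For completeness, this first integral is obtained by multiplying f'' + f - f^3 = 0 by f' and integrating once:

(1/2) f'^2 + (1/2) f^2 - (1/4) f^4 = C

The boundary conditions f -> gammaA, f' -> 0 as x -> infty fix C = (1/2) gammaA^2 - (1/4) gammaA^4, so

f' = sqrt( gammaA^2 (1 - (1/2) gammaA^2) - f^2 (1 - (1/2) f^2) )

which is exactly the right-hand side coded as dpsidx3 below.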
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp

gammaA = 0.9
xstart = 0.0
xend = 10
steps = 0.1
x = np.arange(xstart, xend, steps)

def dpsidx3(x, psi, gammaA):
    # first integral of f'' + f - f^3 = 0, with f -> gammaA, f' -> 0 at infinity
    eq = (gammaA**2 * (1 - (1/2)*gammaA**2) - psi**2 * (1 - (1/2)*psi**2))**0.5
    return eq

psi0 = 0
x0 = xstart
x1 = xend

sol = solve_ivp(dpsidx3, [x0, x1], y0=[psi0], args=(gammaA,), dense_output=True, rtol=1e-9)
plotsol = sol.sol(x)

plt.plot(x, plotsol.T, marker="", linestyle="--", label=r"Numerical solution - $solve\_ivp$")
plt.xlabel('x')
plt.ylabel('psi')
plt.legend()
plt.show()
If gammaA is not 1, then there are some runtime warnings, but the shape is exactly as expected.
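The runtime warnings are most likely invalid-value warnings from the square root: near the asymptote psi is close to gammaA, and rounding can push the radicand slightly below zero. A minimal sketch of a guard (my own workaround, not in the original code):

import numpy as np

def dpsidx3_clipped(x, psi, gammaA):
    # same first integral as above, but clip tiny negative radicands to zero
    rad = gammaA**2 * (1 - 0.5*gammaA**2) - psi**2 * (1 - 0.5*psi**2)
    return np.sqrt(np.maximum(rad, 0.0))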
However, the ODE in the solve_ivp code has been manipulated into a 1st order ODE; for further work (with more complex and variable coefficients in the ODE), this will not be possible. Hence, I am trying to solve the boundary value problem using solve_bvp.
I am now trying to solve the same ODE, but I am not getting the same result as from the solve_ivp solution; the documentation is unclear to me on how to use solve_bvp effectively! Here is my attempt so far:
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_bvp

gammaA = 0.9
xstart = 0.0
xend = 10
steps = 0.1

def fun(x, u):
    du1 = u[1]            # u[0] = u1 = psi, u[1] = u2 = psi'
    du2 = u[0]**3 - u[0]  # psi'' = psi^3 - psi
    return np.vstack((du1, du2))

def bc(ua, ub):
    return np.array([ua[0], ub[0] - gammaA])

x = np.linspace(xstart, xend, 10)
print(x.size)

y_a = np.zeros((2, x.size))
y_a[0] = np.linspace(0, gammaA, 10)
y_a[0] = gammaA  # note: this overwrites the linspace guess on the previous line

res_a = solve_bvp(fun, bc, x, y_a, max_nodes=100000, tol=1e-9)
print(res_a)

x_plot = np.linspace(0, xend, 100)
y_plot_a = res_a.sol(x_plot)[0]

fig2, ax2 = plt.subplots()
ax2.plot(x_plot, y_plot_a, label=r'BVP solve')
ax2.legend()
ax2.set_xlabel("x")
ax2.set_ylabel("psi")
I have written the 2nd order ODE as a system of 1st order ODEs and set the boundary conditions at the end of the domain (rather than at infinity). I expected a similar tanh-like function (where I could say that beyond the end of the domain my solution is simply gammaA, as expected from the asymptote), but it is clear that I am not getting this for any value of gammaA. Any advice gratefully appreciated; how can I reproduce the result of solve_ivp in solve_bvp?
EDIT: extra thoughts.
Can I add an additional constraint to my problem to ensure that the solution has a stationary point at the edge / is a monotonically increasing solution? The plots look okay for gammaA = 1, but do not show the correct behaviour for any other value, unlike solve_ivp.
EDIT2: comparative figures show poor agreement for, e.g., gammaA = 0.8, but good agreement for gammaA = 1.

You are making an unfounded assumption about the mathematical nature of this equation. There is an energy functional
E = u'^2 + u^2 - 0.5*u^4 - 0.5 = u'^2 - 0.5*(u^2-1)^2
The solution that you first computed is at energy level 0.
For any smaller, negative energy level (roughly, starting inside the unit circle) you get periodic, oscillating solutions; these have no limit at infinity.
For larger, positive energy levels the solutions are unbounded and grow rapidly, possibly diverging in finite time. Here too the limit at infinity either does not exist, as there is no solution connecting the initial point to large times, or the limit is infinity itself.
Forcing boundary conditions against this nature might work, but will not give a stable solution.
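To see this concretely, one can evaluate the energy level implied by the boundary conditions for a few values of gammaA (a minimal sketch of my own; the formula is the E defined above, evaluated at the asymptote u = gammaA, u' = 0):

import numpy as np

def energy(u, up):
    # E = u'^2 - 0.5*(u^2 - 1)^2, the functional defined above
    return up**2 - 0.5 * (u**2 - 1)**2

for gammaA in [1.0, 0.9, 0.8]:
    # at the asymptote the boundary conditions demand u = gammaA, u' = 0
    print(gammaA, energy(gammaA, 0.0))
# gammaA = 1.0 sits exactly at energy level 0 (the tanh heteroclinic);
# any gammaA < 1 gives E < 0, i.e. a periodic orbit with no limit at infinity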

Related

Numerical stability of Euler's Method

I am trying to determine when the forward Euler method is numerically stable for the following system of equations:
dx/dt = -x + y
dy/dt = -100*y
Solving for h in the inequality
|1 + h*λ| <= 1, where λ = -1, -100
(the eigenvalues of the system), I get that the Euler solution should be numerically stable for 0 <= h <= 1/50. However, when I plot the numerical and analytical solutions for timestep h = 1/50, they don't seem to agree (while for h = 1e-3 the results look good). Am I doing something wrong?
This is the code I am using:
import numpy as np
import matplotlib.pyplot as plt

size = 20

# Define general model: one forward Euler step
def Euler(dt, y, x):
    dx = dt * (-x + y)
    dy = -100 * y * dt
    return x + dx, y + dy

# Solve the ODE system with forward Euler
tstop = 10.0
dt = 1/50
n = int(tstop/dt)
y = np.zeros(n)
x = np.zeros(n)
t = np.zeros(n)
t[0] = 0
x[0] = 0.1
y[0] = 0.1
for i in range(0, n - 1):
    dx = dt * (-x[i] + y[i])
    dy = -100 * y[i] * dt
    y[i+1] = y[i] + dy
    x[i+1] = x[i] + dx
    t[i+1] = t[i] + dt

# Analytical solution (exact, derived by hand):
# y = y0*exp(-100 t); x from variation of constants
t1 = np.linspace(0, tstop, 1000)
x0, y0 = 0.1, 0.1
s = np.empty((t1.size, 2))
s[:, 1] = y0 * np.exp(-100 * t1)
s[:, 0] = (x0 + y0/99) * np.exp(-t1) - (y0/99) * np.exp(-100 * t1)

ax = plt.subplot(1, 2, 1)
plt.plot(t1, s[:, 0], 'g-', linewidth=3.0, label='X, Analytical solution')
plt.plot(t, x, "r--", label='X, Euler Method')
plt.plot(t1, s[:, 1], 'black', linewidth=3.0, label='Y, Analytical solution')
plt.plot(t, y, "b--", label='Y, Euler Method')
plt.text(0.2, 0.1, r'$ {\Delta}t \ = \ $' + str(round(dt, 4)) + 's', horizontalalignment='center',
         verticalalignment='center', transform=ax.transAxes, fontsize=size)
ax.tick_params(axis='x', labelsize=17)
ax.tick_params(axis='y', labelsize=17)
plt.ylabel('X', fontsize=size)
plt.xlabel('t [sec]', fontsize=size)
plt.xscale('log')
plt.ylim([0, 0.12])
plt.legend(frameon=False, loc=1, fontsize=18)
plt.rcParams["figure.figsize"] = [8, 8]
With λ = -100 and h = 1/50, the propagation factor of the Euler method for the y component is 1 + h*λ = -1. The numerical solution oscillates but stays bounded; that is the borderline case of absolute stability, not yet instability. To get convergence to zero, you need h < 0.02. To get non-oscillating behavior, you need h <= 0.01. To see a somewhat curved behavior, you need h <= 0.005.
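A minimal sketch illustrating this (my own check, not from the original post): iterate the scalar Euler recurrence y_{n+1} = (1 + h*λ) y_n for λ = -100 and a few step sizes, and watch the qualitative behavior change:

import numpy as np

lam = -100.0
for h in [0.02, 0.01, 0.005, 0.001]:
    factor = 1 + h * lam                # Euler propagation factor 1 + h*lambda
    y = 0.1 * factor ** np.arange(8)    # first Euler iterates of y' = lam*y, y(0) = 0.1
    print(f"h={h}: factor={factor:+.2f}, iterates={np.round(y, 4)}")
# h=0.02  -> factor -1.0: bounded oscillation, no decay
# h=0.01  -> factor  0.0: collapses to zero after one step
# h=0.005 -> factor +0.5: monotone decay, but still far from the smooth exponential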

Trouble Solving and Plotting Solution to Simple Differential Equation

I'm trying to solve the differential equation R'(t)^2 = 1/R with initial condition R(0) = 0 in Python. I should get the solution R(t) = (3/2 * t)^(2/3), as I get this from Mathematica. [Figure: plot of the solution to R'(t)^2 = 1/R with initial condition R(0) = 0.]
I used the following code in python:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

sqrt = np.sqrt

# function that returns dy/dt
def model(y, t):
    #k = 1
    dydt = sqrt(1/y)
    return dydt

# initial condition
y0 = [0.0]

# time points
t = np.linspace(0, 5)

# solve ODE
y = odeint(model, y0, t)

# plot results
plt.plot(t, y)
plt.ylabel('$R/R_0$')
plt.xticks([])
plt.yticks([])
plt.show()
However, I get only 0, as I'm apparently dividing by zero at some point, which is not correct. Could someone point out what I could do to get the solution and plot I am expecting?
Thank you
Your model needs to be changed. I gather from your equation that the solution would be something like this: https://www.wolframalpha.com/input/?i=R%27%28t%29%5E2+%3D+1%2FR
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dy/dt
def model(y, t):
    #k = 1
    dydt = y**(-1/2)  # R' = 1/sqrt(R); note the parentheses: y**-1/2 would mean (y**-1)/2
    return dydt

# initial condition
y0 = 0.1

# time points
t = np.linspace(0, 20)

# solve ODE
y = odeint(model, y0, t)

# plot results
plt.plot(t, y)
plt.ylabel('$R$')
plt.xlabel('$t$')
plt.xticks([])
plt.yticks([])
plt.show()
Your starting value is 0, which results in the derivative being zero (y * -1, or more simply -y), which means 0 gets added to your current y-value, so it remains at zero for the whole integration. Your code is correct, but your formulation is not.
I see a 1/R in your link, so use that, e.g. dydt = 1/y; this will still fail, because it results in division by zero, so don't start at zero: your derivative is not defined there. There also appears to be a square root that you imported but never use.
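A minimal sketch combining both suggestions (my own illustration, assuming we start just above zero to avoid the singularity), compared against the known analytic solution R(t) = (3/2 t)^(2/3):

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(y, t):
    return y**(-0.5)           # R' = 1/sqrt(R)

t = np.linspace(1e-6, 5, 200)  # start just off zero, where R' is defined
y0 = (1.5 * t[0])**(2/3)       # place the initial value on the analytic curve
y = odeint(model, y0, t)

plt.plot(t, y, label='odeint')
plt.plot(t, (1.5 * t)**(2/3), '--', label=r'$(3t/2)^{2/3}$')
plt.legend()
plt.show()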

Is there a way to easily integrate a set of differential equations over a full grid of points?

The problem is that I would like to integrate the differential equations starting from each point of the grid at once, instead of having to loop over the scipy integrator for each coordinate. (I'm sure there's an easy way.)
As background: I'm trying to solve for the trajectories of a Couette flow that alternates the direction of the velocity each certain period, which is a well-known dynamical system that produces chaos. I don't think the rest of the code really matters, as much as the integration with scipy and my usage of numpy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp

start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10

T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)

condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)

i = 0
for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time/T)):
        t = steps*T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1

np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is:
ValueError: y0 must be 1-dimensional.
It comes from scipy's solve_ivp function, because it doesn't accept the 2D arrays produced by numpy's meshgrid. I know I could run some loops and get over it, but I'm assuming there must be a 'good' way to do it using numpy and scipy. I accept advice for the rest of the code too.
Yes, you can do that, in several variants. The question remains whether it is advisable.
To implement a generally usable ODE integrator, it needs to be abstracted from the models. Most implementations do that by making the state space a flat-array vector space; some allow a vector space engine to be passed as a parameter, so that structured vector spaces can be used. The scipy integrators are not of this type.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X,Y): return np.concatenate([X.flatten(),Y.flatten()])
def decode(U): return U.reshape([2,grid_points,grid_points])
Then you can implement the ODE function as
def eq(t, U):
    X, Y = decode(U)
    Vec = V * np.sin(2 * np.pi * Y / L)  # Y plays the role of x[1] (the second state component) in the question's code
    if int(t/T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
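A minimal sketch of the resulting call (my own illustration; V, L, T, grid_points and the meshgrid arrays X, Y are assumed to be defined as in the question):

U0 = encode(X, Y)
sol = solve_ivp(eq, (0, total_run_time), U0, dense_output=True)

# recover the structured state at any time t
Xt, Yt = decode(sol.sol(10.0))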
Why this might not be such a good idea: think of each grid point and its trajectory separately; each trajectory has its own sequence of adapted time steps for the given error level. When integrating all of them simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus, while the individual trajectories might have only short intervals with very small step sizes amid long intervals with sparse time steps, these can overlap in the ensemble to produce very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation; odeint is Fortran code with wrappers, so half a solution. JiTCODE translates to C code and links with the compiled solver behind odeint. Leaving Python, you get sundials, the DifferentialEquations.jl package of julia-lang, or boost::odeint.
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide an MWE to reproduce your problem. You said "I don't think the rest of the code really matters", but leaving it in makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood:
by defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e., the "start" and "stop" times as a 2-tuple. Here your time_steps is 30 elements long (for the first loop), so I would replace it with (t, t+T). Look for t_span in the doc.
from what I understand, it seems like you want to control each iteration of the numerical resolution: that's not how solve_ivp works. Moreover, I think you want to switch the function eq at each iteration. Since you have to pass "the right hand side" of the equation, you would need to wrap this behavior inside a function. It would not work (see right after), but in terms of concept it would be something like this:
def RHS(t, x):
    # unwrap your variables; condition is like an additional variable of your problem,
    # with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # compute new result for condition
    condition_out = not(condition)
    return [x0_out, x1_out, condition_out]
This would not work, because the evolution of condition doesn't satisfy the continuity/differentiability properties the solver relies on. So condition is really a boolean switch that parametrizes the model; we can use global to control the state of this boolean:
condition = True

def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which is actually the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out], that would be 2).
An MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# initial conditions for x0, x1, and condition
initial = [0.5, 0.5]
condition = True

# time span
t_span = (0, 100)

# constants
V = 1
L = 1

# define the "model", i.e. the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,  # right-hand side of the equation(s)
                     t_span,  # time span, a 2-tuple
                     initial, # initial conditions
                     )
fig, ax = plt.subplots()
ax.plot(solution.t,
solution.y[0],
label="x0")
ax.plot(solution.t,
solution.y[1],
label="x1")
ax.legend()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand, we can't. Again, quoting the doc:
y0 : array_like, shape (n,) : Initial state. The solver's initial condition only allows one starting point vector.
So to answer the initial question : I don't think you can "integrate the differential equations starting for each point of the grid at once".

Solving an ordinary differential equation on a fixed grid (preferably in python)

I have a differential equation of the form
dy(x)/dx = f(y,x)
that I would like to solve for y.
I have an array xs containing all of the values of x for which I need ys.
For only those values of x, I can evaluate f(y,x) for any y.
How can I solve for ys, preferably in python?
MWE
import numpy as np

# these are the only x values that are legal
xs = np.array([0.15, 0.383, 0.99, 1.0001])

# some made up function --- I don't actually have an analytic form like this
def f(y, x):
    if not np.any(np.isclose(x, xs)):
        return np.nan
    return np.sin(y + x**2)

# now I want to know which array of ys satisfies dy(x)/dx = f(y,x)
Assuming you can use something simple like Forward Euler...
Numerical solutions will rely on approximate solutions at previous times. So if you want a solution at t = 1 it is likely you will need the approximate solution at t<1.
My advice is to figure out what step size will allow you to hit the times you need, and then find the approximate solution on an interval containing those times.
import numpy as np

# from your example, the smallest step size required to hit all x values would be 0.0001
a = 0       # start point
b = 1.5     # possible end point
h = 0.0001  # step size
n = int((b - a) / h) + 1
y = np.zeros(n)
t = np.linspace(a, b, n)
y[0] = 0.1  # initial condition here
for i in range(1, n):
    y[i] = y[i-1] + h * f(y[i-1], t[i-1])  # forward Euler step; f(y, x) as defined above
Alternatively, you could use an adaptive step method (which I am not prepared to explain right now) to take larger steps between the times you need.
Or, you could find an approximate solution over an interval using a coarser mesh and interpolate the solution.
Any of these should work.
I think you should first solve the ODE on a regular grid, and then interpolate the solution onto your fixed grid. Here is approximate code for your problem:
import numpy as np
from scipy.integrate import odeint
from scipy import interpolate

xs = np.array([0.15, 0.383, 0.99, 1.0001])

# dy/dx = f(x,y)
def dy_dx(y, x):
    return np.sin(y + x ** 2)

y0 = 0.0  # initial condition
x = np.linspace(0, 10, 200)  # here you can control the accuracy
sol = odeint(dy_dx, y0, x)
f = interpolate.interp1d(x, np.ravel(sol))
ys = f(xs)
But dy_dx(y, x) should always return something reasonable (not np.nan).
Here is the plot for this case: [figure omitted]

Odd SciPy ODE Integration error

I'm implementing a very simple Susceptible-Infected-Recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for particular values of a parameter, and I'm stumped as to why. The code I'm using is as follows:
import numpy as np
from pylab import *
import scipy.integrate as spi

#Parameter Values
S0 = 99.
I0 = 1.
R0 = 0.
PopIn = (S0, I0, R0)
beta = 0.50
gamma = 1/10.
mu = 1/25550.
t_end = 15000.
t_start = 1.
t_step = 1.
t_interval = np.arange(t_start, t_end, t_step)

#Solving the differential equation. Solves over t for initial conditions PopIn
def eq_system(PopIn, t):
    '''Defining SIR System of Equations'''
    #Creating an array of equations
    Eqs = np.zeros((3))
    Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
    Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
    Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
    return Eqs

SIR = spi.odeint(eq_system, PopIn, t_interval)
This produces the following error:
lsoda-- at current t (=r1), mxstep (=i1) steps
taken on this call before reaching tout
In above message, I1 = 500
In above message, R1 = 0.7818108252072E+04
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
Normally when I encounter a problem like this, there's something terminally wrong with the equation system I set up, but I can't see anything wrong with it. Weirdly, it also works if you change mu to something like 1/15550. In case it was something wrong with the system, I also implemented the model in R as follows:
require(deSolve)

sir.model <- function (t, x, params) {
  S <- x[1]
  I <- x[2]
  R <- x[3]
  with (
    as.list(params),
    {
      dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
      dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
      dR <- gamma*I - mu*R
      res <- c(dS, dI, dR)
      list(res)
    }
  )
}

times <- seq(0, 15000, by=1)
params <- c(
  beta <- 0.50,
  gamma <- 1/10,
  mu <- 1/25550
)
xstart <- c(S = 99, I = 1, R = 0)
out <- as.data.frame(lsoda(xstart, times, sir.model, params))
This also uses lsoda, but seems to be going off without a hitch. Can anyone see what's going wrong in the Python code?
I think that for the parameters you've chosen you're running into problems with stiffness - due to numerical instability the solver's step size is getting pushed into becoming very small in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to stiff methods.
Very crudely you can just massively increase the maximum allowed steps and the solver will get there in the end:
SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
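As the lsoda message itself suggests, you can also pass full_output=1 to odeint to get per-step diagnostics showing where the solver struggles (a small sketch using the documented odeint interface; 'hu' is the vector of step sizes actually used):

SIR, info = spi.odeint(eq_system, PopIn, t_interval, full_output=1)
# 'hu' holds the step size used for each output point; it collapses
# where the solver starts fighting the stiffness
print(info['hu'].min(), info['hu'].max())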
A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:
import numpy as np
from pylab import *
import scipy.integrate as spi

def run():
    #Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn = (S0, I0, R0)
    beta = 0.50
    gamma = 1/10.
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    #Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t, PopIn):
        '''Defining SIR System of Equations'''
        #Creating an array of equations
        Eqs = np.zeros((3))
        Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    ode = spi.ode(eq_system)
    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode', nsteps=500, method='bdf')
    ode.set_initial_value(PopIn, t_start)

    ts = []
    ys = []
    while ode.successful() and ode.t < t_end:
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)

    t = np.vstack(ts)
    s, i, r = np.vstack(ys).T

    # (ax.hold was removed from matplotlib; repeated plot calls overlay by default)
    fig, ax = subplots(1, 1)
    ax.plot(t, s, label='Susceptible')
    ax.plot(t, i, label='Infected')
    ax.plot(t, r, label='Recovered')
    ax.set_xlim(t_start, t_end)
    ax.set_ylim(0, 100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0, fancybox=True)
    return t, s, i, r, fig, ax
Output: [figure: the Susceptible, Infected, and Recovered curves over time]
The infected population PopIn[1] decays to zero. Apparently, (normal) numerical imprecision leads to PopIn[1] becoming negative (approx. -3.549e-12) near t=322.9. Then eventually the solution blows up near t=7818.093, with PopIn[0] going toward +infinity and PopIn[1] going toward -infinity.
Edit: I removed my earlier suggestion for a "quick fix". It was a questionable hack.
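For reference, in current SciPy the same idea is more conveniently expressed with solve_ivp, which lets you pick an implicit method directly (a minimal sketch, not from the original answers; eq_system, PopIn, t_start, t_end and t_interval as in the (t, PopIn)-signature version above):

from scipy.integrate import solve_ivp

sol = solve_ivp(eq_system, (t_start, t_end), PopIn,
                method='BDF',  # or 'Radau'/'LSODA'; implicit methods handle the stiffness
                t_eval=t_interval)
S, I, R = sol.y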
