Stiff ODE solver in Python

I need an ODE solver for a stiff problem, similar to MATLAB's ode15s.
For my problem I need to check how many steps (function evaluations) are needed for different initial values and compare this to my own ODE solver.
I tried using
solver = scipy.integrate.ode(f)
solver.set_integrator('vode', method='bdf', order=15, nsteps=3000)
solver.set_initial_value(u0, t0)
And then integrating with:
i = 0
while solver.successful() and solver.t < tf:
    solver.integrate(tf, step=True)
    i += 1
print(i)
Where tf is the end of my time interval.
The function used is defined as:
def func(self, t, u):
    u1 = u[1]
    u2 = mu * (1 - numpy.dot(u[0], u[0])) * u[1] - u[0]
    return numpy.array([u1, u2])
With the initial value u0 = [2, 0], this is a stiff problem.
This means that the number of steps should not depend on my constant mu.
But it does.
I think odeint can solve this as a stiff problem, but then I have to pass in the whole t-vector, which fixes the number of steps in advance and defeats the point of my assignment.
Is there any way to use odeint with an adaptive step size between t0 and tf?
Or can you see anything I'm missing in my use of the vode integrator?

I'm seeing something similar; with the 'vode' solver, switching between the 'adams' and 'bdf' methods doesn't change the number of steps by much. (By the way, there is no point in using order=15: the maximum order of the 'bdf' method of the 'vode' solver is 5, and the maximum order of the 'adams' method is 12. If you leave the argument out, the solver should use the maximum by default.)
odeint is a wrapper of LSODA. ode also provides a wrapper of LSODA: change 'vode' to 'lsoda'. Unfortunately the 'lsoda' solver ignores the step=True argument of the integrate method.
The 'lsoda' solver does much better than 'vode' with method='bdf'. You can get an upper bound on the number of steps that were used by initializing tvals = [] and, in func, doing tvals.append(t). When the solver completes, set tvals = np.unique(tvals). The length of tvals tells you the number of time values at which your function was evaluated. This is not exactly what you want, but it does show a huge difference between using the 'lsoda' solver and the 'vode' solver with method='bdf'. The number of steps used by the 'lsoda' solver is on the same order as the number you quoted for MATLAB in your comment. (I used mu=10000, tf=10.)
Update: It turns out that, at least for a stiff problem, it makes a huge difference for the 'vode' solver if you provide a function to compute the Jacobian matrix.
The script below runs the 'vode' solver with both methods, and it runs the 'lsoda' solver. In each case, it runs the solver with and without the Jacobian function. Here's the output it generates:
vode adams jac=None len(tvals) = 517992
vode adams jac=jac len(tvals) = 195
vode bdf jac=None len(tvals) = 516284
vode bdf jac=jac len(tvals) = 55
lsoda jac=None len(tvals) = 49
lsoda jac=jac len(tvals) = 49
The script:
from __future__ import print_function

import numpy as np
from scipy.integrate import ode


def func(t, u, mu):
    tvals.append(t)
    u1 = u[1]
    u2 = mu*(1 - u[0]*u[0])*u[1] - u[0]
    return np.array([u1, u2])


def jac(t, u, mu):
    j = np.empty((2, 2))
    j[0, 0] = 0.0
    j[0, 1] = 1.0
    j[1, 0] = -mu*2*u[0]*u[1] - 1
    j[1, 1] = mu*(1 - u[0]*u[0])
    return j


mu = 10000.0
u0 = [2, 0]
t0 = 0.0
tf = 10

for name, kwargs in [('vode', dict(method='adams')),
                     ('vode', dict(method='bdf')),
                     ('lsoda', {})]:
    for j in [None, jac]:
        solver = ode(func, jac=j)
        solver.set_integrator(name, atol=1e-8, rtol=1e-6, **kwargs)
        solver.set_f_params(mu)
        solver.set_jac_params(mu)
        solver.set_initial_value(u0, t0)
        tvals = []
        i = 0
        while solver.successful() and solver.t < tf:
            solver.integrate(tf, step=True)
            i += 1
        print("%-6s %-8s jac=%-5s " %
              (name, kwargs.get('method', ''), j.__name__ if j else None),
              end='')
        tvals = np.unique(tvals)
        print("len(tvals) =", len(tvals))


GEKKO returned non-optimal solution

I want to use GEKKO to solve the following optimization problem:
Minimize x'Qx + 1e-10 * sum_{i=1}^n x_i^0.1
subject to 1' x = 1 and x >= 0
However, the following code returns sol = [0., 0., 0., 0., 1.] and Objective: 1.99419 as the solution, which is far from optimal; I'll explain why below.
import numpy as np
from gekko import GEKKO

n = 5
m = GEKKO(remote=False)
m.options.SOLVER = 1
m.options.IMODE = 3
x = [m.Var(lb=0, ub=1) for _ in range(n)]
m.Equation(m.sum(x) == 1)

np.random.seed(0)
Q = np.random.uniform(-1, 1, size=(n, n))
Q = np.dot(Q.T, Q)

## Add h_i^p
c, p = 1e-10, 0.1
for i in range(n):
    m.Obj(c * x[i] ** p)
    for j in range(n):
        m.Obj(x[i] * Q[i, j] * x[j])

m.solve(disp=True)
sol = np.array(x).flatten()
This is clearly wrong: if we optimize only the quadratic part (x'Qx) using the code below and plug the solution into the original objective, we get a much smaller objective value (Objective: 0.02489503). The 1e-10 * sum_{i=1}^n x_i^p term is essentially ignored since it is very small.
m1 = GEKKO(remote=False)
m1.options.SOLVER = 1
m1.options.OTOL = 1e-10
x1 = [m1.Var(lb=0, ub=1) for _ in range(n)]
m1.Equation(m1.sum(x1) == 1)
m1.qobj(b=np.zeros(n), A=2 * Q, x=x1, otype='min')
m1.solve(disp=True)
sol = np.array(x1).flatten()
Is there any way to resolve this? Thank you!
Gekko solves nonlinear programming optimization problems with gradient-based methods: interior point and active set SQP. It looks like there is a problem with the objective function. Use matrix operations in Numpy to simplify the objective definition.
## Create Objective
c, p = 1e-10, 0.1
obj = np.dot(np.dot(x, Q), x) + c*m.sum([xi**p for xi in x])
m.Minimize(obj)
Here is the modified script that solves with Gekko. Increase MAX_ITER if the default limit of 250 is reached.
import numpy as np
from gekko import GEKKO

n = 5
m = GEKKO(remote=False)
m.options.SOLVER = 3
m.options.IMODE = 3
x = m.Array(m.Var, n, value=0.1, lb=1e-6, ub=1)
m.Equation(m.sum(x) == 1)

np.random.seed(0)
Q = np.random.uniform(-1, 1, size=(n, n))
Q = np.dot(Q.T, Q)
print(Q)

## Create Objective
c, p = 1e-10, 0.1
obj = np.dot(np.dot(x, Q), x) + c*m.sum([xi**p for xi in x])
m.Minimize(obj)

# adjust solver tolerances and iteration limit
m.options.RTOL = 1e-10
m.options.OTOL = 1e-10
m.options.MAX_ITER = 1000

m.solve(disp=True)
sol = np.array(x).flatten()
print('x: ', sol)
print('obj: ', m.options.OBJFCNVAL)
This gives an optimal solution that is also a global optimum, because this is a convex Quadratic Programming (QP) problem. Even with a general nonlinear programming solver rather than a dedicated QP solver, IPOPT returns:
x: [[0.36315827507] [0.081993130341] [1e-06] [0.086231281612] [0.46861632269]]
obj: 0.024895918696
As far as I could see, Gekko looks like it's built for machine learning, which focuses on local optimization as opposed to global optimization, and typically most libraries will not be able to guarantee you globally optimal solutions.
If you really want guaranteed optimal solutions, then for this case I would suggest looking into interval arithmetic. There are packages such as mpmath which can offer this, though I have yet to see optimizers using it in my brief time searching.
The TL;DR on how interval arithmetic works is you feed in a range of inputs and get back a range of outputs. For example, you can test if 1 is in the range of possible outputs for x1 + x2 + x3 + x4, and you can see the minimum/maximum potential values for your objective function. In this way, you can progressively split your intervals in half, keeping only intervals for which your constraints are potentially satisfied and for which your objective function's maximum potential is at least the largest minimum potential. This allows you to achieve guaranteed convergence to global optimums at the cost of a lot more computation.
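As a toy illustration of the arithmetic (not an optimizer), here is a sketch using mpmath's iv interval context; the enclosures in the comments follow from the interval rules above:

from mpmath import iv

# Evaluate f(x) = x*x - x with x ranging over [0, 1]
x = iv.mpf([0, 1])
print(x*x - x)           # enclosure [-1.0, 1.0]: guaranteed to contain every output

# Splitting the interval tightens the enclosure; branch-and-bound repeats this,
# discarding sub-intervals that cannot contain the optimum
left = iv.mpf([0, 0.5])
print(left*left - left)  # tighter enclosure [-0.5, 0.25] on the half-interval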

How can I stop my Runge-Kutta2 (Heun) method from exploding?

I am currently trying to write some python code to solve an arbitrary system of first order ODEs, using a general explicit Runge-Kutta method defined by the values alpha, gamma (both vectors of dimension m) and beta (lower triangular matrix of dimension m x m) of the Butcher table which are passed in by the user. My code appears to work for single ODEs, having tested it on a few different examples, but I'm struggling to generalise my code to vector valued ODEs (i.e. systems).
In particular, I try to solve a Van der Pol oscillator ODE (reduced to a first order system) using Heun's method defined by the Butcher Tableau values given in my code, but I receive the errors
"RuntimeWarning: overflow encountered in double_scalars f = lambda t,u: np.array(... etc)" and
"RuntimeWarning: invalid value encountered in add kvec[i] = f(t+alpha[i]*h,y+h*sum)"
followed by my solution vector that is clearly blowing up. Note that the commented out code below is one of the examples of single ODEs that I tried and is solved correctly. Could anyone please help? Here is my code:
import numpy as np


def rk(t, y, h, f, alpha, beta, gamma):
    '''Runge-Kutta iteration'''
    return y + h*phi(t, y, h, f, alpha, beta, gamma)


def phi(t, y, h, f, alpha, beta, gamma):
    '''Phi function for the Runge-Kutta iteration'''
    m = len(alpha)
    count = np.zeros(len(f(t, y)))
    kvec = k(t, y, h, f, alpha, beta, gamma)
    for i in range(1, m+1):
        count = count + gamma[i-1]*kvec[i-1]
    return count


def k(t, y, h, f, alpha, beta, gamma):
    '''returns a vector containing each stage k_i of the m-stage Runge-Kutta method'''
    m = len(alpha)
    kvec = np.zeros((m, len(f(t, y))))
    kvec[0] = f(t, y)
    for i in range(1, m):
        sum = np.zeros(len(f(t, y)))
        for l in range(1, i+1):
            sum = sum + beta[i][l-1]*kvec[l-1]
        kvec[i] = f(t+alpha[i]*h, y+h*sum)
    return kvec


def timeLoop(y0, N, f, alpha, beta, gamma, h, rk):
    '''function that loops through time using the RK method'''
    t = np.zeros([N+1])
    y = np.zeros([N+1, len(y0)])
    y[0] = y0
    t[0] = 0
    for i in range(1, N+1):
        y[i] = rk(t[i-1], y[i-1], h, f, alpha, beta, gamma)
        t[i] = t[i-1]+h
    return t, y

#################################################################
'''f = lambda t,y: (c-y)**2
Y = lambda t: np.array([(1+t*c*(c-1))/(1+t*(c-1))])
h0 = 1
c = 1.5
T = 10
alpha = np.array([0,1])
gamma = np.array([0.5,0.5])
beta = np.array([[0,0],[1,0]])
eff_rk = compute(h0,Y(0),T,f,alpha,beta,gamma,rk, Y,11)'''

# constants
mu = 100
T = 1000
h = 0.01
N = int(T/h)

# initial conditions
y0 = 0.02
d0 = 0
init = np.array([y0, d0])

# Butcher tableau for Heun's method
alpha = np.array([0, 1])
gamma = np.array([0.5, 0.5])
beta = np.array([[0, 0], [1, 0]])

# rhs of the ODE system
f = lambda t, u: np.array([u[1], mu*(1-u[0]**2)*u[1]-u[0]])

# solving the system
time, sol = timeLoop(init, N, f, alpha, beta, gamma, h, rk)
print(sol)
Your step size is not small enough. The Van der Pol oscillator with mu=100 is a fast-slow system with very sharp turns at the switching of the modes, so it is rather stiff. With explicit methods this requires small step sizes; the smallest sensible step size is 1e-5 to 1e-6. You get a solution on the limit cycle already for h=0.001, with resulting velocities up to 150.
You can reduce some of that stiffness by using a different velocity/impulse variable. In the equation
x'' - mu*(1-x^2)*x' + x = 0
you can combine the first two terms into a derivative,
mu*v = x' - mu*(1-x^2/3)*x
so that
x' = mu*(v+(1-x^2/3)*x)
v' = -x/mu
The second equation is now uniformly slow close to the limit cycle, while the first has long relatively straight jumps when v leaves the cubic v=x^3/3-x.
This integrates nicely with the original h=0.01, keeping the solution inside the box [-3,3]x[-2,2], even if it shows some strange oscillations that are not present for smaller step sizes or in the exact solution.
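A minimal sketch of that substitution, meant to be plugged into the timeLoop and Heun tableau from the question (f_lienard and the transformed initial state are illustrative names, not from the original code):

import numpy as np

mu = 100

# Van der Pol in Lienard coordinates: x' = mu*(v + (1 - x**2/3)*x), v' = -x/mu
f_lienard = lambda t, u: np.array([mu*(u[1] + (1 - u[0]**2/3)*u[0]),
                                   -u[0]/mu])

# Transform the original initial state (x, x') = (0.02, 0) to (x, v), using
# v = (x' - mu*(1 - x**2/3)*x)/mu from the definition above
x0, dx0 = 0.02, 0.0
init = np.array([x0, (dx0 - mu*(1 - x0**2/3)*x0)/mu])

# With the question's timeLoop and Heun tableau, h = 0.01 now stays bounded:
# time, sol = timeLoop(init, int(1000/0.01), f_lienard, alpha, beta, gamma, 0.01, rk)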

Second order ODE with RK4

I need to plot a graph showing two variables for a second order ODE solved with RK4. So far I've done this:
from numpy import arange
import pylab

Qger = 400
K = 20
T1 = 150
T2 = 60
N = 1000
h = (T2-T1)/N  # note: this is negative, so arange(6.0, 8.0, h) below is empty
rpoints = arange(6.0, 8.0, h)
xpoints = []
x = 423

def df(s, t):
    dTdt = -Qger*t/(2*K) + 172.8/t
    return dTdt

for r in rpoints:
    xpoints.append(x)
    k1 = h*df(x, r)
    k2 = h*df(x+0.5*k1, r+0.5*h)
    k3 = h*df(x+0.5*k2, r+0.5*h)
    k4 = h*df(x+k3, r+h)
    x += (k1+2*k2+2*k3+k4)/6

pylab.plot(rpoints, xpoints)
pylab.xlabel("Raio")
pylab.ylabel("Temperatura")
pylab.show()
But that's RK4 for a first order ODE; I didn't know how to handle the second order equation, so I had integrated part of it by hand, but I can't do that, and I can't use scipy either. Can anyone explain to me how to integrate this function, or how to use RK4 with a second order ODE? The function is below.
This is the function (given as an image in the original post, not reproduced here); only T and r are variables, the rest is 0.
You should be able to put the above in a "semi-discrete" form, that's to say dT/dt in terms of only partial derivatives with respect to r. If you can then find a numerical or other approximation to the terms equivalent to dT/dt, i.e. the RHS of dT/dt = df(r, ...), then explicit RK4 is applicable.
In this approach, your time stepping method (RK4) is applied only to the first order derivative of temperature with respect to time.
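Since the equation itself isn't reproduced here, the following is only a generic sketch of the standard reduction: a second order ODE T'' = g(r, T, T') becomes a first order system in u = [T, T'], which the RK4 loop from the question can step through unchanged. The right-hand side g below is a placeholder, not the asker's actual equation:

import numpy as np

def g(r, T, dT):
    # placeholder right-hand side for T'' = g(r, T, T'); substitute the real equation
    return -dT/r - T

def df(u, r):
    # first order system: u = [T, T'], so u' = [T', g(r, T, T')]
    return np.array([u[1], g(r, u[0], u[1])])

def rk4_step(u, r, h):
    # one RK4 step on the system, same pattern as the question's scalar loop
    k1 = h*df(u, r)
    k2 = h*df(u + 0.5*k1, r + 0.5*h)
    k3 = h*df(u + 0.5*k2, r + 0.5*h)
    k4 = h*df(u + k3, r + h)
    return u + (k1 + 2*k2 + 2*k3 + k4)/6

u = np.array([423.0, 0.0])  # [T(6), T'(6)], illustrative initial values
h = 0.002
for r in np.arange(6.0, 8.0, h):
    u = rk4_step(u, r, h)
print(u[0])  # temperature at the end of the interval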

algebraic constraint to terminate ODE integration with scipy

I'm using SciPy 0.14.0 to solve a system of ordinary differential equations describing the dynamics of a gas bubble rising vertically (in the z direction) in a still fluid because of buoyancy forces. In particular, I have an equation expressing the rising velocity U as a function of the bubble radius R, i.e. U = dz/dt = f(R), and one expressing the radius variation as a function of R and U, i.e. dR/dt = f(R, U). Everything else appearing in the code below is a material property.
I'd like to implement something to account for the physical constraint on z which, obviously, is limited by the liquid height H. I consequently implemented a sort of z <= H constraint in order to stop integration early if needed, using set_solout to do so. The code runs and gives good results, but set_solout is not working at all (it seems z_constraint is never actually called...). Do you know why?
Does anybody have a cleverer idea, maybe one that interrupts exactly when z = H (i.e. a final value problem)? Is this the right way/tool, or should I reformulate the problem?
thanks in advance
Emi
from scipy.integrate import ode

Db0 = 0.001  # initial bubble radius
y0, t0 = [Db0/2, 0.], 0.  # initial conditions
H = 1

def y_(t, y, g, p0, rho_g, mi_g, sig_g, H):
    R = y[0]
    z = y[1]
    z_ = (R**2 * g * rho_g) / (3*mi_g)  # velocity
    R_ = (R/3 * g * rho_g * z_) / (p0 + rho_g*g*(H-z) + 4/3*sig_g/R)  # R dynamics
    return [R_, z_]

def z_constraint(t, y):
    H = 1  # should rather be a variable..
    z = y[1]
    if z >= H:
        flag = -1
    else:
        flag = 0
    return flag

r = ode(y_)
r.set_integrator('dopri5')
r.set_initial_value(y0, t0)
r.set_f_params(g, 5*1e5, 2000, 40, 0.31, H)
r.set_solout(z_constraint)

t1 = 6
dt = 0.1
while r.successful() and r.t < t1:
    r.integrate(r.t+dt)
You're running into this issue. For set_solout to work correctly, it must be called right after set_integrator, before set_initial_value. If you introduce this modification into your code (and set a value for g), integration will terminate when z >= H, as you want.
To find the exact time when the bubble reached the surface, you can make a change of variables after the integration is terminated by solout and integrate back with respect to z (rather than t) to z = H. A paper that describes the technique is M. Henon, Physica 5D, 412 (1982); you may also find this discussion helpful. Here's a very simple example in which the time t such that y(t) = 0.5 is found, given dy/dt = -y:
import numpy as np
from scipy.integrate import ode

def f(t, y):
    """Exponential decay: dy/dt = -y."""
    return -y

def solout(t, y):
    if y[0] < 0.5:
        return -1
    else:
        return 0

y_initial = 1
t_initial = 0

r = ode(f).set_integrator('dopri5')
r.set_solout(solout)
r.set_initial_value(y_initial, t_initial)

# Integrate until the solout constraint is violated
r.integrate(2)

# Change of variables: y becomes the independent variable and t the
# dependent one; see Henon's paper for details.
def g(y, t):
    return -1.0/y

r2 = ode(g).set_integrator('dopri5')
r2.set_initial_value(r.t, r.y[0])
r2.integrate(0.5)
y_final = r2.t
t_final = r2.y

# Error: difference between found and analytical solution
print(t_final - np.log(2))
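On newer SciPy versions there is a more direct tool: scipy.integrate.solve_ivp can locate the crossing with a terminal event, so no change of variables is needed. A minimal sketch for the same dy/dt = -y example:

import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # Exponential decay: dy/dt = -y
    return -y

def hit_half(t, y):
    # A root of this function marks the event y = 0.5
    return y[0] - 0.5

hit_half.terminal = True   # stop the integration at the event
hit_half.direction = -1    # trigger only on downward crossings

sol = solve_ivp(f, (0, 2), [1.0], events=hit_half, rtol=1e-10, atol=1e-12)
t_final = sol.t_events[0][0]
print(t_final - np.log(2))  # error vs. the analytical answer ln(2)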

Odd SciPy ODE Integration error

I'm implementing a very simple Susceptible-Infected-Recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for particular values of a parameter, and I'm stumped as to why. The code I'm using is as follows:
import numpy as np
from pylab import *
import scipy.integrate as spi

# Parameter Values
S0 = 99.
I0 = 1.
R0 = 0.
PopIn = (S0, I0, R0)
beta = 0.50
gamma = 1/10.
mu = 1/25550.
t_end = 15000.
t_start = 1.
t_step = 1.
t_interval = np.arange(t_start, t_end, t_step)

# Solving the differential equation. Solves over t for initial conditions PopIn
def eq_system(PopIn, t):
    '''Defining SIR System of Equations'''
    # Creating an array of equations
    Eqs = np.zeros((3))
    Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
    Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
    Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
    return Eqs

SIR = spi.odeint(eq_system, PopIn, t_interval)
This produces the following error:
lsoda-- at current t (=r1), mxstep (=i1) steps
taken on this call before reaching tout
In above message, I1 = 500
In above message, R1 = 0.7818108252072E+04
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
Normally when I encounter a problem like that, there's something terminally wrong with the equation system I set up, but I can't see anything wrong with it. Weirdly, it also works if you change mu to something like 1/15550. In case it was something wrong with the system, I also implemented the model in R as follows:
require(deSolve)

sir.model <- function (t, x, params) {
  S <- x[1]
  I <- x[2]
  R <- x[3]
  with(
    as.list(params),
    {
      dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
      dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
      dR <- gamma*I - mu*R
      res <- c(dS, dI, dR)
      list(res)
    }
  )
}

times <- seq(0, 15000, by=1)
params <- c(
  beta <- 0.50,
  gamma <- 1/10,
  mu <- 1/25550
)
xstart <- c(S = 99, I = 1, R = 0)
out <- as.data.frame(lsoda(xstart, times, sir.model, params))
This also uses lsoda, but seems to be going off without a hitch. Can anyone see what's going wrong in the Python code?
I think that for the parameters you've chosen you're running into problems with stiffness: due to numerical instability, the solver's step size is being forced to become very small in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to the stiff methods.
Very crudely you can just massively increase the maximum allowed steps and the solver will get there in the end:
SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:
import numpy as np
from pylab import *
import scipy.integrate as spi

def run():
    # Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn = (S0, I0, R0)
    beta = 0.50
    gamma = 1/10.
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    # Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t, PopIn):
        '''Defining SIR System of Equations'''
        # Creating an array of equations
        Eqs = np.zeros((3))
        Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    ode = spi.ode(eq_system)
    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode', nsteps=500, method='bdf')
    ode.set_initial_value(PopIn, t_start)

    ts = []
    ys = []
    while ode.successful() and ode.t < t_end:
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)

    t = np.vstack(ts)
    s, i, r = np.vstack(ys).T

    fig, ax = subplots(1, 1)
    # (an ax.hold(True) call was here; it is unnecessary and has been removed
    # from modern matplotlib)
    ax.plot(t, s, label='Susceptible')
    ax.plot(t, i, label='Infected')
    ax.plot(t, r, label='Recovered')
    ax.set_xlim(t_start, t_end)
    ax.set_ylim(0, 100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0, fancybox=True)
    return t, s, i, r, fig, ax
Output: a plot of the Susceptible, Infected and Recovered populations versus time (image not reproduced here).
The infected population PopIn[1] decays to zero. Apparently, (normal) numerical imprecision leads to PopIn[1] becoming negative (approximately -3.549e-12) near t = 322.9. Then eventually the solution blows up near t = 7818.093, with PopIn[0] going toward +infinity and PopIn[1] going toward -infinity.
Edit: I removed my earlier suggestion for a "quick fix". It was a questionable hack.
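As a side note for readers on current SciPy: the same stiff-capable approach is available through scipy.integrate.solve_ivp, whose 'BDF' method plays the role of the 'vode'/bdf integrator above. A minimal sketch for the same system:

import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, mu = 0.50, 1/10., 1/25550.

def eq_system(t, P):
    # SIR with births and deaths; P = [S, I, R], N = S + I + R
    N = P[0] + P[1] + P[2]
    return [-beta*P[0]*P[1]/N - mu*P[0] + mu*N,
            beta*P[0]*P[1]/N - gamma*P[1] - mu*P[1],
            gamma*P[1] - mu*P[2]]

# BDF is an implicit method suited to stiff systems, like 'vode' with method='bdf'
sol = solve_ivp(eq_system, (1., 15000.), [99., 1., 0.], method='BDF',
                t_eval=np.arange(1., 15000., 1.))
print(sol.success, sol.y[:, -1])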
