Differences between R's deSolve and Python's odeint - python

I'm currently exploring the Lorenz system with Python and R and have noticed subtle differences in the ODE packages. Python's odeint and R's ode both say they use lsoda under the hood. However, using lsoda for both seems to give quite different results. I have tried ode45 for R's ode function to get something closer to Python, but I am wondering why I can't get exactly the same results:
import numpy as np
from scipy.integrate import odeint

def lorenz(x, t):
    return [
        10 * (x[1] - x[0]),
        x[0] * (28 - x[2]) - x[1],
        x[0] * x[1] - 8 / 3 * x[2],
    ]
dt = 0.001
t_train = np.arange(0, 0.1, dt)
x0_train = [-8, 7, 27]
x_train = odeint(lorenz, x0_train, t_train)
x_train[0:5, :]
array([[-8. , 7. , 27. ],
[-7.85082366, 6.98457874, 26.87275343],
[-7.70328919, 6.96834721, 26.74700467],
[-7.55738803, 6.95135316, 26.62273959],
[-7.41311133, 6.93364263, 26.49994363]])
library(deSolve)
n <- round(100, 0)
# Lorenz Parameters: sigma, rho, beta
parameters <- c(s = 10, r = 28, b = 8 / 3)
state <- c(X = -8, Y = 7, Z = 27) # Initial State
# Lorenz Function used to generate Lorenz Derivatives
lorenz <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dx <- parameters[1] * (state[2] - state[1])
    dy <- state[1] * (parameters[2] - state[3]) - state[2]
    dz <- state[1] * state[2] - parameters[3] * state[3]
    list(c(dx, dy, dz))
  })
}
times <- seq(0, ((n) - 1) * 0.001, by = 0.001)
# ODE45 used to determine Lorenz Matrix
out <- ode(y = state, times = times,
func = lorenz, parms = parameters, method = "ode45")[, -1]
out[1:nrow(out), , drop = FALSE]
X Y Z
[1,] -8.00000000 7.000000 27.00000
[2,] -7.85082366 6.984579 26.87275
[3,] -7.70328918 6.968347 26.74700
[4,] -7.55738803 6.951353 26.62274
[5,] -7.41311133 6.933643 26.49994
I had to call out[1:nrow(out), , drop = FALSE] to get all of the available decimal places; it appears that head rounds the displayed values. I understand it's incredibly subtle, but I was hoping to get exactly the same results. Does anyone know if this is something more than a rounding issue between R and Python?
Thanks in advance.

All numerical methods that solve ODEs are approximations that work up to a given precision. The precision of the deSolve solvers is set to atol=1e-6, rtol=1e-6 by default, where atol is absolute and rtol is relative tolerance. Furthermore, ode45 has some additional parameters to fine-tune the automatic step size algorithm, and it can make use of interpolation.
To tighten the tolerances (i.e. increase the precision), set for example:
out <- ode(y = state, times = times, func = lorenz,
parms = parameters, method = "ode45", atol = 1e-10, rtol = 1e-10)
Finally, I would recommend using an ODEPACK solver like lsoda or vode instead of the classical ode45. More details can be found in the ode and lsoda help pages, and for ode45 in the ?rkMethod help page.
Similar parameters may also exist for odeint.
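For completeness, SciPy's odeint exposes the same pair of tolerances, so the Python side can be tightened in the same way (a sketch, reusing lorenz, x0_train and t_train from the question above):
from scipy.integrate import odeint
x_train = odeint(lorenz, x0_train, t_train, rtol=1e-10, atol=1e-10)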
A final note: as the Lorenz system is chaotic, local errors will lead to diverging trajectories due to error magnification. This is an essential feature of chaotic systems, which are in theory unpredictable in the long run. So whatever you do, and however much precision you set, simulated trajectories are not "the real ones"; they just show a similar pattern.

Related

Symbolic Calculus and Integration in Python

I am trying to numerically compute a double integral.
The issue is that (I think) I need a mix of symbolic integration and numerical integration.
The integral looks something like this:
Integral[x = -inf .. inf] ( Integral[w = -inf .. inf] exp(-w^2/(2*sigmaw) - alpha*(x-w)^2/(2*sigma)) dw )^(1/alpha) dx
I cannot use a standard double-integration routine because it is not just a double integral: the inner integral is raised to the power (1/alpha).
I cannot get a number for the innermost integral (to then raise to the power) because it ends up being a function that depends on x which I would then need to integrate.
I tried with symbolic calculus, using a nested sym.integrate like this:
import sympy as sym
w, x, sigma, sigmaw, alpha = sym.symbols('w x sigma sigmaw alpha')
sym.integrate((sym.integrate(sym.exp(-(w**2)/(2*sigmaw)-alpha*((x-w)**2)/(2*sigma)),(w,-sym.oo, sym.oo)))**(1/alpha),(x,-sym.oo, sym.oo))
however, it just spits back the expression itself and no number.
I think I would need to get a symbolic expression for the inner integral to use as a function for numerical integration.
Is it even possible?
If not in python, with another language like R?
Any experience with things of this sort?
I worked with Maxima (https://maxima.sourceforge.io) since OP seems to be saying the exact system used isn't too important.
The integrand is just a product of Gaussian bumps, so its integral over the real line is not too hard. Maxima doesn't have the strongest integrator in the world, but anyway it seems to handle this problem okay.
Start by assuming all the parameters are positive; if not specified, Maxima will ask for the sign during the calculation.
(%i2) assume (alpha > 0, sigmaw > 0, sigma > 0);
(%o2) [alpha > 0, sigmaw > 0, sigma > 0]
Define the inner integrand.
(%i3) I: exp(-(w**2)/(2*sigmaw)-alpha*((x-w)**2)/(2*sigma));
(%o3) %e^(-(alpha*(x-w)^2)/(2*sigma) - w^2/(2*sigmaw))
Compute the inner integral.
(%i4) I1: integrate (I, w, minf, inf);
(%o4) (sqrt(2)*sqrt(%pi)*sqrt(sigma)*sqrt(sigmaw)*%e^(-(alpha*x^2)/(2*alpha*sigmaw+2*sigma)))/sqrt(alpha*sigmaw+sigma)
The pretty-printer (ASCII art) display is hard to read here, maybe this 1-d representation makes more sense. grind produces the 1-d display.
(%i5) grind(%);
(sqrt(2)*sqrt(%pi)*sqrt(sigma)*sqrt(sigmaw)
*%e^-((alpha*x^2)/(2*alpha*sigmaw+2*sigma)))
/sqrt(alpha*sigmaw+sigma)$
(%o5) done
Define the outer integrand.
(%i7) I2: I1^(1/alpha);
(%o7) (2^(1/(2*alpha))*%pi^(1/(2*alpha))*sigma^(1/(2*alpha))*sigmaw^(1/(2*alpha))*%e^(-x^2/(2*alpha*sigmaw+2*sigma)))/(alpha*sigmaw+sigma)^(1/(2*alpha))
Compute the outer integral. The final result is named foo here.
(%i9) foo: integrate (I2, x, minf, inf);
(%o9) (%pi^(1/(2*alpha)+1/2)*2^(1/(2*alpha))*sigma^(1/(2*alpha))*sigmaw^(1/(2*alpha))*sqrt(2*alpha*sigmaw+2*sigma))/(alpha*sigmaw+sigma)^(1/(2*alpha))
Evaluate the outer integral for specific values of the parameters.
(%i10) ev (foo, alpha = 3, sigma = 3/7, sigmaw = 7/4);
(%o10) (2^(1/6)*3^(1/6)*7^(1/6)*159^(1/3)*%pi^(2/3))/sqrt(14)
(%i11) float(%);
(%o11) 5.790416728790489
Compute a numerical approximation. Note quad_qagi is suitable for infinite intervals.
(%i12) ev (quad_qagi (lambda([x], quad_qagi (I, w, minf, inf)[1]^(1/alpha)), x, minf, inf),
alpha = 3, sigma = 3/7, sigmaw = 7/4);
(%o12) [5.790416728790598, 7.216782674725913E-9, 270, 0]
Looks like that supports the symbolic result.
(%i13) first(%) - %o11;
(%o13) 1.092459456231154E-13
The outer integral again, in 1-d display which might be useful for copying into another program:
(%i14) grind(foo);
(%pi^(1/(2*alpha)+1/2)*2^(1/(2*alpha))*sigma^(1/(2*alpha))
*sigmaw^(1/(2*alpha))
*sqrt(2*alpha*sigmaw+2*sigma))
/(alpha*sigmaw+sigma)^(1/(2*alpha))$
(%o14) done
I recommend pretty strongly trying to get to a symbolic result if possible; numerical integration is often tricky. In the example given, if it turned out that you could only do the inner integral but not the outer one, that would still be a pretty big win. You could plug the symbolic solution for the inner integral into a numerical approximation for the outer one.
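That last suggestion is easy to try in Python (a sketch under my own assumptions: SciPy is available and the closed form found by Maxima above is copied in by hand; it is not part of the original answer):
import numpy as np
from scipy.integrate import quad

# test values used earlier: alpha = 3, sigma = 3/7, sigmaw = 7/4
alpha, sigma, sigmaw = 3.0, 3.0 / 7.0, 7.0 / 4.0

def inner(x):
    # closed form of the inner w-integral (%o4 above)
    return (np.sqrt(2 * np.pi * sigma * sigmaw)
            * np.exp(-alpha * x**2 / (2 * alpha * sigmaw + 2 * sigma))
            / np.sqrt(alpha * sigmaw + sigma))

value, est_err = quad(lambda x: inner(x) ** (1 / alpha), -np.inf, np.inf)
print(value)  # should agree with the symbolic value (about 5.7904) evaluated above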
This doesn't answer your question directly, but it will surely help you, as others have already pointed out other useful tools.
For the integration at hand, you don't really need symbolic integration.
Numerical integration is simply summing on a defined finite grid: integrating over w is simply summing over the w axis, and the same goes for x.
The main problem is how to choose the integration grid, since it cannot be infinite. For Gaussians I'd cover at least 10 times their sigma to keep the error low, and I'd make the grid spacing as small as you can afford to wait for.
So for the integration above, the code below is equivalent; just make sure you don't increase the grid steps before you have a picture of how much memory it will need, or else your PC will hang.
import numpy as np
# define constants
sigmaw = 0.1
sigma = 0.1
alpha = 0.2
# define grid
max_w = 2
min_w = -max_w
min_x = -3
max_x = -min_x
steps_w = 2000 # don't increase this too much or you'll run out of memory
steps_x = 1000 # don't increase this too much or you'll run out of memory
dw = (max_w - min_w) / (steps_w - 1)  # np.linspace spacing uses steps_w - 1 intervals
dx = (max_x - min_x) / (steps_x - 1)
x_vec = np.linspace(min_x, max_x, steps_x)
w_vec = np.linspace(min_w, max_w, steps_w)
x, w = np.meshgrid(x_vec, w_vec, sparse=True)
# do integration
inner_term = np.exp(-(w ** 2) / (2 * sigmaw) - alpha * ((x - w) ** 2) / (2 * sigma))
inner_integral = np.sum(inner_term, axis=0) * dw
del inner_term # to free some memory
inner_integral_powered = inner_integral ** (1 / alpha)
del inner_integral # to free some memory
outer_integral = np.sum(inner_integral_powered) * dx
print(outer_integral)
Numerical integration works by sampling the integrand at some values of the argument. In particular, the Newton-Cotes formulas sample uniformly, while different flavors of Gaussian integration sample irregularly.
So in your case, the outer integrator will request evaluations of the inner integral at various values of x in order to integrate over x, and each such evaluation implies a numerical integration over w with x held fixed.
Note that as your domain is unbounded, you will have to use a change of variable to make it finite.
If the inner integral has an analytical expression, you can of course use it and integrate numerically over x only.
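A sketch of that nested scheme with SciPy (my own illustration; quad maps the infinite limits to a finite interval internally, which takes care of the change of variable, and the parameter values are just the test values used above):
import numpy as np
from scipy.integrate import quad

alpha, sigma, sigmaw = 3.0, 3.0 / 7.0, 7.0 / 4.0

def inner(x):
    # numerical integration over w with x held fixed
    val, _ = quad(lambda w: np.exp(-w**2 / (2 * sigmaw)
                                   - alpha * (x - w)**2 / (2 * sigma)),
                  -np.inf, np.inf)
    return val

outer, est_err = quad(lambda x: inner(x) ** (1 / alpha), -np.inf, np.inf)
print(outer, est_err)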

How to create the equivalent of Excel Solver valueof function?

I have the following equation: x/0.2 * (0.2+1) + y/0.1 * (0.1+1) = 26.34
The initial values of X and Y are set as 4.085 and 0.17 respectively.
I need to find the values of X and Y which satisfy the equation and have the lowest common deviation from initially set values. In other words, sum of |4.085 - x| and |0.17 - y| is minimized.
With the Excel Solver value-of function this is easy to find:
we insert x and y as the variables to be changed so that the formula result reaches 26
Here is my python code (I am trying to use sympy for that)
from sympy import symbols, Eq, solve
x,y = symbols('x y')
eqn = solve([Eq(x/0.2*(0.2+1)+y/0.1*(0.1+1),26)],x,y)
print(eqn)
However, I am getting the strange result {x: 4.33333333333333 - 1.83333333333333*y}
Can anyone help me solve this equation?
The answer you are obtaining is not strange; it is just the answer to what you asked. You have one equation in two variables x and y, so the solution to this problem is in general not unique (sometimes there are infinitely many). Now, you can either add an extra condition (an inequality, for example) or change the numeric domain in which solutions are sought (as in Diophantine equations). You can do either in Sympy; in the following example I find the solution for x in the real domain, using solveset:
from sympy import symbols, Eq, solveset, Reals
x,y = symbols('x y')
eqn = solveset(Eq(1.2 * x / 0.2 + 1.1 * y / 0.1, 26), x, Reals)
print(eqn)
Output:
Intersection(FiniteSet(4.33333333333333 - 1.83333333333333*y), Reals)
As you can see the solution on x is a finite set, that is the intersection between a straight line on y and the Reals. Any particular solution can be found by direct evaluation of y.
This is equivalent to saying x = 4.33333333333333 - 1.83333333333333*y; if you evaluate this equation at the guess value y = 0.17, you obtain x = 4.0216 (close to your guess value x = 4.085).
Edit:
After analyzing the new information added to your question, I think I have finally understood it: your problem is a constrained optimization. Now, I don't use Excel frequently, but it would be my bet that under the hood this optimization is carried out there using Lagrange multipliers. In your particular case, the target function represents the deviation of the solution (x, y) from the point (4.085, 0.17). For convenience, I have chosen this function to be the Euclidean distance between them (absolute values as you suggested can be problematic due to discontinuity of the derivatives). The constraint function is simply the equation you provided. To solve this problem with Sympy, one could use something like this:
import sympy as sp
# Define symbols and functions
x, y, lamb = sp.symbols('x, y, lamb', real=True)
func = sp.sqrt((x - 4.085) ** 2 + (y - 0.17) ** 2) # Target function
const = 1.2 * x / 0.2 + 1.1 * y / 0.1 - 26 # Constraint function
# Define Lagrangian
lagrang = func - lamb * const
# Compute gradient of Lagrangian
grad_lagrang = [sp.diff(lagrang, var) for var in [x, y, lamb]]
# Solve the resulting system of equations
spoints = sp.solve(grad_lagrang, [x, y, lamb], dict=True)
# Print stationary points
print(spoints)
Output:
[{x: 4.07047770700637, lamb: -0.0798086884467563, y: 0.143375796178345}]
Since in our case only one stationary point was found, this is the optimal solution (although this is only a necessary condition). The value of the lamb multiplier can be ditched, so x, y = 4.070, 0.1434. Hope this helps.
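As a cross-check (my own sketch, not part of the answer; it assumes SciPy is available), the same constrained problem can be handed to scipy.optimize.minimize directly:
import numpy as np
from scipy.optimize import minimize

target = np.array([4.085, 0.17])

def deviation(v):
    # Euclidean distance from the initial values, as in the Lagrangian above
    return np.linalg.norm(v - target)

constraint = {"type": "eq",
              "fun": lambda v: 1.2 * v[0] / 0.2 + 1.1 * v[1] / 0.1 - 26}

res = minimize(deviation, x0=target, constraints=[constraint])
print(res.x)  # expect roughly x = 4.070, y = 0.143, matching the stationary point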

How precise is numpy's sin(x) ? How do I find out? [need it to numerically solve x=a*sin(x)]

I'm trying to numerically solve the equation x=a*sin(x), where a is some constant, in python. I already tried first solving the equation symbolically, but it seems this particular shape of expression isn't implemented in sympy. I also tried using sympy.nsolve(), but it only gives me the first solution it encounters.
My plan looks something like this:
x=0
a=1
rje=[]
while(x<a):
    if (x-numpy.sin(x))<=error_sin:
        rje.append(x)
    x+=increment
print(rje)
I don't want to waste time or risk missing solutions, so I want to know how to find out how precise numpy's sinus is on my device (that would become error_sin).
edit: I tried making both error_sin and increment equal to the machine epsilon of my device, but it (a) takes too much time, and (b) sin(x) is less precise than x, so I get a lot of non-solutions (or rather repeated solutions, because sin(x) grows much slower than x). Hence the question.
edit2: Could you please just help me answer the question about precision of numpy.sin(x)? I provided information about the purpose purely for context.
The answer
np.sin will in general be as precise as possible, given the precision of the double (ie 64-bit float) variables in which the input, output, and intermediate values are stored. You can get a reasonable measure of the precision of np.sin by comparing it to the arbitrary precision version of sin from mpmath:
import numpy as np
import matplotlib.pyplot as plt
import mpmath
from mpmath import mp
# set mpmath to an extremely high precision
mp.dps = 100
x = np.linspace(-np.pi, np.pi, num=int(1e3))
# numpy sine values
y = np.sin(x)
# extremely high precision sine values
realy = np.array([mpmath.sin(a) for a in x])
# the end results are arrays of arbitrary precision mpf values (ie abserr.dtype=='O')
diff = realy - y
abserr = np.abs(diff)
relerr = np.abs(diff/realy)
plt.plot(x, abserr, lw=.5, label='Absolute error')
plt.plot(x, relerr, lw=.5, label='Relative error')
plt.axhline(2e-16, c='k', ls='--', lw=.5, label=r'$2 \cdot 10^{-16}$')
plt.yscale('log')
plt.xlim(-np.pi, np.pi)
plt.ylim(1e-20, 1e-15)
plt.xlabel('x')
plt.ylabel('Error in np.sin(x)')
plt.legend()
Output:
Thus, it is reasonable to say that both the relative and absolute errors of np.sin have an upper bound of 2e-16.
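For reference, that bound is essentially the machine epsilon of 64-bit floats (a quick check, assuming the default float64 dtype):
import numpy as np
print(np.finfo(np.float64).eps)  # 2.220446049250313e-16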
A better answer
There's an excellent chance that if you make increment small enough for your approach to be accurate, your algorithm will be too slow for practical use. The standard equation solving approaches won't work for you, since you don't have a standard function. Instead, you have an implicit, multi-valued function. Here's a stab at a general purpose approach for getting all solutions to this kind of equation:
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as spo
eps = 1e-4
def func(x, a):
    return a*np.sin(x) - x

def uniqueflt(arr):
    b = arr.copy()
    b.sort()
    d = np.append(True, np.diff(b))
    return b[d>eps]
initial_guess = np.arange(-9, 9) + eps
# uniqueflt removes any repeated roots
roots = uniqueflt(spo.fsolve(func, initial_guess, args=(10,)))
# roots is an array with the 7 unique roots of 10*np.sin(x) - x == 0:
# array([-8.42320394e+00, -7.06817437e+00, -2.85234190e+00, -8.13413225e-09,
# 2.85234189e+00, 7.06817436e+00, 8.42320394e+00])
x = np.linspace(-20, 20, num=int(1e3))
plt.plot(x, x, label=r'$y = x$')
plt.plot(x, 10*np.sin(x), label=r'$y = 10 \cdot sin(x)$')
plt.plot(roots, 10*np.sin(roots), '.', c='k', ms=7, label='Solutions')
plt.ylim(-10.5, 20)
plt.gca().set_aspect('equal', adjustable='box')
plt.legend()
Output:
You'll have to tweak the initial_guess depending on your value of a; it needs to contain at least as many points as the actual number of solutions.
The accuracy of the sine function is not so relevant here, you'd better perform the study of the equation.
If you write it in the form sin x / x = sinc x = 1 / a, you immediately see that the number of solutions is the number of intersections of the cardinal sine with a horizontal line. This number depends on the ordinates of the extrema of the cardinal sine.
The extrema are found where x cos x - sin x = 0, i.e. x = tan x, and the corresponding ordinates are cos x. This is again a transcendental equation, but it is parameterless and you can solve it once and for all. Also note that for increasing values of x, the solutions get closer and closer to (k+1/2)π.
Now for a given value of 1 / a, you can find all the extrema below and above and this will give you starting intervals where to look for the roots. The secant method will be handy.
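A small sketch of that parameterless sub-problem (my own illustration, using SciPy's brentq rather than the secant method as the root finder):
import numpy as np
from scipy.optimize import brentq

def extremum(k):
    # root of x*cos(x) - sin(x) = 0 (i.e. x = tan(x)) in (k*pi, (k+1)*pi)
    return brentq(lambda x: x * np.cos(x) - np.sin(x), k * np.pi, (k + 1) * np.pi)

for k in range(1, 5):
    xk = extremum(k)
    # the ordinate of the cardinal sine at the extremum is cos(xk);
    # the abscissas approach (k + 1/2)*pi, as noted above
    print(k, xk, np.cos(xk), (k + 0.5) * np.pi)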
A simple way to estimate the accuracy of sin() AND cos() for a given argument x would be:
eps_trig = np.abs(1 - (np.sin(x)**2 + np.cos(x)**2)) / 2
You may want to drop the last 2 just to be on the "safe side" (well, there are values of x for which this approximation does not hold very well, in particular for x close to -90 deg). I would suggest testing at around x = pi/4.
Explanation:
The basic idea behind this approach is as follows... Let's say our sin(x) and cos(x) deviate from the exact values by a single "error value" eps. That is, exact_sin(x) = sin(x) + eps (same for cos(x)). Also, let's call delta the measured deviation from the Pythagorean trigonometric identity:
delta = 1 - sin(x)**2 - cos(x)**2
For exact functions, delta should be zero:
1 - exact_sin(x)**2 - exact_cos(x)**2 == 0
or, going to inexact functions:
1 - (sin(x) + eps)**2 - (cos(x) + eps)**2 == 0 =>
1 - sin(x)**2 - cos(x)**2 = delta = 2*eps*(sin(x) + cos(x)) + 2*eps**2
Neglecting last term 2*eps**2 (assume small errors):
2*eps*(sin(x)+cos(x)) = 1 - sin(x)**2 - cos(x)**2
If we choose x such that sin(x)+cos(x) hovers around 1 (or, somewhere in the range 0.5-2), we can roughly estimate that eps = |1 - sin(x)**2 - cos(x)**2|/2.
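For concreteness, evaluating the suggested check at x = pi/4 looks like this (a trivial sketch):
import numpy as np

x = np.pi / 4
eps_trig = np.abs(1 - (np.sin(x)**2 + np.cos(x)**2)) / 2
print(eps_trig)  # on the order of 1e-16 or smaller for 64-bit floats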
On the precision, you already got good answers. On the task itself, you can be faster by investing some calculus.
First, from the bounds of the sine you know that any solution must be in the interval [-abs(a), abs(a)]. If abs(a) <= 1, then the only root in [-1, 1] is x = 0.
Apart from the interval containing zero, you also know that there is exactly one root in any of the intervals between the roots of cos(x)=1/a which are the extrema of a*sin(x)-x. Set phi=arccos(1/a) in [0,pi], then these roots are -phi+2*k*pi and phi+2*k*pi.
The interval for k=0 might contain 3 roots if 1<a<0.5*pi. For the positive root one knows x/a=sin(x)>x-x^3/6 so that x^2>6-6/a.
And lastly, the problem is symmetric, if x is a root, so is -x so all you have to do is find the positive roots.
So to compute the roots,
Start the root list with the root 0.
in the case abs(a)<=1, there are no further roots, return. One could also use -pi/2<=a<=1.
in the case 1<a<pi/2, apply the chosen bracketing method to the interval [sqrt(6-6/a), pi/2], add the root to the list, and return.
In the remaining cases where abs(a)>=0.5*pi:
Compute phi=arccos(1/a).
Then for each positive integer k, apply the bracketing method to the intervals [2*(k-1)*pi+phi, 2*k*pi-phi] and [2*k*pi-phi, 2*k*pi+phi], as long as the lower interval boundary is smaller than abs(a) and the function has a sign change over the interval.
Add the root found to the list. Return with the list after the loop ends.
let a=10;
function f(x) { return x - a * Math.sin(x); }
findRoots();
//-------------------------------------------------
function findRoots() {
    log.innerHTML = `<p>roots for parameter a=${a}`;
    rootList.innerHTML = "<tr><th>root <i>x</i></th><th><i>x-a*sin(x)</i></th><th>numSteps</th></tr>";
    rootList.innerHTML += "<tr><td>0.0<td>0.0<td>0</tr>";
    if( Math.abs(a)<=1) return;
    if( (1.0<a) && (a < 0.5*Math.PI) ) {
        illinois(Math.sqrt(6-6/a), 0.5*Math.PI);
        return;
    }
    const phi = Math.acos(1.0/a);
    log.innerHTML += `phi=${phi}<br>`;
    // start the bracket sequence at phi so that the first interval
    // [phi, 2*pi-phi], which contains the first positive root, is covered,
    // matching the interval list described above
    let right = phi;
    for (let k=1; right<Math.abs(a); k++) {
        let left = right;
        // alternate between ...*pi - phi and ...*pi + phi endpoints
        right = (k%2==1) ? (k+1)*Math.PI - phi : k*Math.PI + phi;
        illinois(left, right);
    }
}
function illinois(a, b) {
    log.innerHTML += `<p>regula falsi variant illinois called for interval [a,b]=[${a}, ${b}]`;
    let fa = f(a);
    let fb = f(b);
    let numSteps=2;
    log.innerHTML += ` values f(a)=${fa}, f(b)=${fb}</p>`;
    if (fa*fb > 0) return;
    if (Math.abs(fa) < Math.abs(fb)) { var h=a; a=b; b=h; h=fa; fa=fb; fb=h; }
    while(Math.abs(b-a) > 1e-15*Math.abs(b)) {
        let c = b - fb*(b-a)/(fb-fa);
        let fc = f(c); numSteps++;
        log.innerHTML += `step ${numSteps}: secant point c=${c}, f(c)=${fc}<br>`;
        if ( fa*fc < 0 ) {
            fa *= 0.5;
        } else {
            a = b; fa = fb;
        }
        b = c; fb = fc;
    }
    rootList.innerHTML += `<tr><td>${b}<td>${fb}<td>${numSteps}</tr>`;
}
aInput.addEventListener('change', () => {
    let a_new = Number.parseFloat(aInput.value);
    if( isNaN(a_new) ) {
        alert('Not a number '+aInput.value);
    } else if(a!=a_new) {
        a = a_new;
        findRoots();
    }
});
<p>Factor <i>a</i>: <input id="aInput" value="10" /></p>
<h3>Root list</h3>
<table id="rootList" border = 1>
</table>
<h3>Computation log</h3>
<div id="log"/>
The solution should be precise up to machine epsilon
>>> from numpy import sin as sin_np
>>> from math import sin as sin_math
>>> x = 0.0
>>> sin_np(x) - x
0.0
>>> sin_math(x) - x
0.0
>>>
You could consider using scipy.optimize for this problem:
>>> from scipy.optimize import minimize
>>> from math import sin
>>> a = 1.0
Then define your objective as so:
>>> def obj(x):
... return abs(x - a*sin(x))
...
And you can go ahead and solve this problem numerically by:
>>> sol = minimize(obj, 0.0)
>>> sol
fun: array([ 0.])
hess_inv: array([[1]])
jac: array([ 0.])
message: 'Optimization terminated successfully.'
nfev: 3
nit: 0
njev: 1
status: 0
success: True
x: array([ 0.])
Now lets try with a new value of a
>>> a = .5
>>> sol = minimize(obj, 0.0)
>>> sol
fun: array([ 0.])
hess_inv: array([[1]])
jac: array([ 0.5])
message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 315
nit: 0
njev: 101
status: 2
success: False
x: array([ 0.])
>>>
In case you want to find a non-trivial solution to this problem, you need to vary x0 iteratively over values greater than and less than zero. Also, by setting bounds on x via the bounds argument of scipy.optimize.minimize, you can manage the search region instead of walking from -infinity to +infinity (or over very large numbers).
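A rough sketch of that idea (my own illustration; the starting points, the bounds, and the squared objective are illustrative choices, not part of the answer above):
import numpy as np
from scipy.optimize import minimize

a = 10.0

def obj(x):
    # squared residual; squaring avoids the non-smooth kink of abs() at a root
    return (x[0] - a * np.sin(x[0])) ** 2

for x0 in (-8.0, -3.0, 0.5, 3.0, 8.0):
    sol = minimize(obj, [x0], bounds=[(-abs(a), abs(a))])
    print(x0, sol.x[0], sol.fun)  # minima with fun close to 0 are roots of x = a*sin(x)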

algebraic constraint to terminate ODE integration with scipy

I'm using Scipy 14.0 to solve a system of ordinary differential equations describing the dynamics of a gas bubble rising vertically (in the z direction) in a still fluid because of buoyancy forces. In particular, I have an equation expressing the rising velocity U as a function of bubble radius R, i.e. U = dz/dt = f(R), and one expressing the radius variation as a function of R and U, i.e. dR/dt = f(R, U). Everything else appearing in the code below is a material property.
I'd like to implement something to account for the physical constraint on z which, obviously, is limited by the liquid height H. I consequently implemented a sort of z <= H constraint in order to stop integration early if needed, using set_solout to do so. The situation is that the code runs and gives good results, but set_solout is not working at all (it seems z_constraint is actually never called...). Do you know why?
Does somebody have a cleverer idea, maybe one that interrupts exactly when z = H (i.e. a final value problem)? Is this the right way/tool, or should I reformulate the problem?
thanks in advance
Emi
from scipy.integrate import ode
Db0 = 0.001 # init bubble radius
y0, t0 = [ Db0/2 , 0. ], 0. #init conditions
H = 1
def y_(t,y,g,p0,rho_g,mi_g,sig_g,H):
    R = y[0]
    z = y[1]
    z_ = ( R**2 * g * rho_g ) / ( 3*mi_g ) #velocity
    R_ = ( R/3 * g * rho_g * z_ ) / ( p0 + rho_g*g*(H-z) + 4/3*sig_g/R ) #R dynamics
    return [R_, z_]

def z_constraint(t,y):
    H = 1 #should rather be a variable..
    z = y[1]
    if z >= H:
        flag = -1
    else:
        flag = 0
    return flag
r = ode( y_ )
r.set_integrator('dopri5')
r.set_initial_value(y0, t0)
r.set_f_params(g, 5*1e5, 2000, 40, 0.31, H)
r.set_solout(z_constraint)
t1 = 6
dt = 0.1
while r.successful() and r.t < t1:
    r.integrate(r.t+dt)
You're running into this issue. For set_solout to work correctly, it must be called right after set_integrator, before set_initial_value. If you introduce this modification into your code (and set a value for g), integration will terminate when z >= H, as you want.
To find the exact time when the bubble reached the surface, you can make a change of variables after the integration is terminated by solout and integrate back with respect to z (rather than t) to z = H. A paper that describes the technique is M. Henon, Physica 5D, 412 (1982); you may also find this discussion helpful. Here's a very simple example in which the time t such that y(t) = 0.5 is found, given dy/dt = -y:
import numpy as np
from scipy.integrate import ode
def f(t, y):
    """Exponential decay: dy/dt = -y."""
    return -y

def solout(t, y):
    if y[0] < 0.5:
        return -1
    else:
        return 0
y_initial = 1
t_initial = 0
r = ode(f).set_integrator('dopri5')
r.set_solout(solout)
r.set_initial_value(y_initial, t_initial)
# Integrate until solout constraint violated
r.integrate(2)
# New system with t as independent variable: see Henon's paper for details.
def g(y, t):
    return -1.0/y
r2 = ode(g).set_integrator('dopri5')
r2.set_initial_value(r.t, r.y)
r2.integrate(0.5)
y_final = r2.t
t_final = r2.y
# Error: difference between found and analytical solution
print(t_final - np.log(2))
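For reference (not part of the original answer, and assuming a SciPy version that ships solve_ivp), the same stop condition can also be expressed as a terminal event, which reports the crossing time directly:
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -y

def hit_half(t, y):
    return y[0] - 0.5

hit_half.terminal = True    # stop the integration at the event
hit_half.direction = -1     # trigger only when crossing 0.5 from above

sol = solve_ivp(f, (0, 2), [1.0], events=hit_half, rtol=1e-10, atol=1e-12)
print(sol.t_events[0][0] - np.log(2))   # crossing time vs. the analytical ln(2)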

Odd SciPy ODE Integration error

I'm implementing a very simple Susceptible-Infected-Recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for particular values of a parameter, and I'm stumped as to why. The code I'm using is as follows:
import numpy as np
from pylab import *
import scipy.integrate as spi
#Parameter Values
S0 = 99.
I0 = 1.
R0 = 0.
PopIn= (S0, I0, R0)
beta= 0.50
gamma=1/10.
mu = 1/25550.
t_end = 15000.
t_start = 1.
t_step = 1.
t_interval = np.arange(t_start, t_end, t_step)
#Solving the differential equation. Solves over t for initial conditions PopIn
def eq_system(PopIn,t):
    '''Defining SIR System of Equations'''
    #Creating an array of equations
    Eqs= np.zeros((3))
    Eqs[0]= -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
    Eqs[1]= (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
    Eqs[2]= gamma*PopIn[1] - mu*PopIn[2]
    return Eqs
SIR = spi.odeint(eq_system, PopIn, t_interval)
This produces the following error:
lsoda-- at current t (=r1), mxstep (=i1) steps
taken on this call before reaching tout
In above message, I1 = 500
In above message, R1 = 0.7818108252072E+04
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
Normally when I encounter a problem like this, there's something terminally wrong with the equation system I set up, but I can't see anything wrong with it here. Weirdly, it also works if you change mu to something like 1/15550. In case there was something wrong with the system, I also implemented the model in R as follows:
require(deSolve)
sir.model <- function (t, x, params) {
  S <- x[1]
  I <- x[2]
  R <- x[3]
  with (
    as.list(params),
    {
      dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
      dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
      dR <- gamma*I - mu*R
      res <- c(dS,dI,dR)
      list(res)
    }
  )
}
times <- seq(0,15000,by=1)
params <- c(
beta <- 0.50,
gamma <- 1/10,
mu <- 1/25550
)
xstart <- c(S = 99, I = 1, R= 0)
out <- as.data.frame(lsoda(xstart,times,sir.model,params))
This also uses lsoda, but seems to be going off without a hitch. Can anyone see what's going wrong in the Python code?
I think that for the parameters you've chosen you're running into problems with stiffness - due to numerical instability the solver's step size is getting pushed into becoming very small in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to stiff methods.
Very crudely you can just massively increase the maximum allowed steps and the solver will get there in the end:
SIR = spi.odeint(eq_system, PopIn, t_interval,mxstep=5000000)
A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:
import numpy as np
from pylab import *
import scipy.integrate as spi
def run():
    #Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn= (S0, I0, R0)
    beta= 0.50
    gamma=1/10.
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)
    #Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t,PopIn):
        '''Defining SIR System of Equations'''
        #Creating an array of equations
        Eqs= np.zeros((3))
        Eqs[0]= -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1]= (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2]= gamma*PopIn[1] - mu*PopIn[2]
        return Eqs
    ode = spi.ode(eq_system)
    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode',nsteps=500,method='bdf')
    ode.set_initial_value(PopIn,t_start)
    ts = []
    ys = []
    while ode.successful() and ode.t < t_end:
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)
    t = np.vstack(ts)
    s,i,r = np.vstack(ys).T
    fig,ax = subplots(1,1)
    ax.hold(True)
    ax.plot(t,s,label='Susceptible')
    ax.plot(t,i,label='Infected')
    ax.plot(t,r,label='Recovered')
    ax.set_xlim(t_start,t_end)
    ax.set_ylim(0,100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0,fancybox=True)
    return t,s,i,r,fig,ax
Output:
The infected population PopIn[1] decays to zero. Apparently, (normal) numerical imprecision leads to PopIn[1] becoming negative (approx. -3.549e-12) near t=322.9. Then eventually the solution blows up near t=7818.093, with PopIn[0] going toward +infinity and PopIn[1] going toward -infinity.
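A rough way to see the same behaviour yourself (my own sketch, reusing eq_system, PopIn and t_interval from the question, with numpy imported as np and scipy.integrate as spi):
SIR, info = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000, full_output=True)
neg = np.where(SIR[:, 1] < 0)[0]
if neg.size:
    print(t_interval[neg[0]])   # first time the infected compartment goes negative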
Edit: I removed my earlier suggestion for a "quick fix". It was a questionable hack.
