Background
I've been working for some time on attempting to solve the (notoriously painful) Time Difference of Arrival (TDoA) multi-lateration problem, in 3-dimensions and using 4 nodes. If you're unfamiliar with the problem, it is to determine the coordinates of some signal source (X,Y,Z), given the coordinates of n nodes, the time of arrival of the signal at each node, and the velocity of the signal v.
My solution is as follows:
For each node, we write (X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2 = (v * (t_i - T))**2
Where (x_i, y_i, z_i) are the coordinates of the ith node, and T is the time of emission.
We now have 4 equations in 4 unknowns (X, Y, Z, and the emission time T); fewer nodes would obviously be insufficient. We could try to solve this system directly, however that seems next to impossible given the highly nonlinear nature of the problem (and, indeed, I've tried many direct techniques... and failed). Instead, we simplify this to a linear problem by considering all i/j pairs and subtracting equation i from equation j. We obtain n(n-1)/2 = 6 equations of the form:
2*(x_j - x_i)*X + 2*(y_j - y_i)*Y + 2*(z_j - z_i)*Z + 2 * v**2 * (t_i - t_j) * T = v**2 * (t_i**2 - t_j**2) + (x_j**2 + y_j**2 + z_j**2) - (x_i**2 + y_i**2 + z_i**2)
which look like X*v_1 + Y*v_2 + Z*v_3 + T*v_4 = b. We now try to apply standard linear least squares, where the solution is the vector x satisfying A^T A x = A^T b. Unfortunately, if you were to try feeding this into any standard linear least squares algorithm, it'll choke up. So, what do we do now?
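For concreteness, here is a minimal sketch (my illustration, not part of the final code) of building that six-row system, T column included, and handing it to NumPy's least squares:

import numpy as np

def linearized_solve(receivers, times, v):
    # receivers: four [x, y, z] lists; times: four arrival times; v: signal speed
    P = np.array(receivers, dtype=float)          # shape (4, 3)
    t = np.array(times, dtype=float)              # shape (4,)
    i, j = np.triu_indices(len(t), 1)             # all i < j pairs -> 6 rows
    # Row for pair (i, j): 2*(p_j - p_i) in the X, Y, Z columns,
    # and 2*v**2*(t_i - t_j) in the T column
    A = np.hstack([2 * (P[j] - P[i]), (2 * v**2 * (t[i] - t[j]))[:, None]])
    b = v**2 * (t[i]**2 - t[j]**2) + (P[j]**2).sum(1) - (P[i]**2).sum(1)
    return np.linalg.lstsq(A, b, rcond=None)[0]   # [X, Y, Z, T]

This full four-column system is exactly what chokes in practice.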
...
The time of arrival of the signal at node i is given (of course) by:
sqrt( (X-x_i)**2 + (Y-y_i)**2 + (Z-z_i)**2 ) / v
This formula assumes that the time of emission, T, is 0. If indeed T = 0, we can drop the T column from the matrix A and the problem is greatly simplified. Indeed, NumPy's linalg.lstsq() then gives a surprisingly accurate & precise result.
...
So, what I do is normalize the input times by subtracting from each equation the earliest time. All I have to do then is determine the dt that I can add to each time such that the residual of summed squared error for the point found by linear least squares is minimized.
I define the error for some dt to be the squared difference between the arrival time predicted for the point found by feeding the input times + dt to the least squares algorithm and the (normalized) input time, summed over all 4 nodes:
for (x_i, y_i, z_i), time in zip(nodes, times):
    error += ((math.sqrt((X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2) / v) - time)**2
My problem:
I was able to do this somewhat satisfactorily by brute force. I started at dt = 0 and moved in small steps, up to some maximum number of iterations OR until some minimum RSS error was reached, and that was the dt I added to the normalized times to obtain a solution. The resulting solutions were very accurate and precise, but quite slow.
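In code, the brute-force scan looked roughly like this (a sketch, assuming the Vertexer class from the edit below with its error() method; the step size and tolerance here are illustrative, not the exact values I used):

def brute_force_dt(vertexer, times, step=1e-10, max_iter=50000, tol=1e-18):
    # Scan dt upward from 0, keeping the dt with the smallest residual,
    # and stop early once the error drops below the tolerance
    times = [t - min(times) for t in times]   # normalize to earliest arrival
    best_dt, best_err = 0.0, float('inf')
    for i in range(max_iter):
        dt = i * step
        err = vertexer.error(dt, times)
        if err < best_err:
            best_dt, best_err = dt, err
        if best_err < tol:
            break
    return best_dt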
In practice, I'd like to be able to solve this in real time, and therefore a far faster solution will be needed. I began with the assumption that the error function (that is, dt vs error as defined above) would be highly nonlinear-- offhand, this made sense to me.
Since I don't have an actual mathematical expression for the error function, I can automatically rule out methods that require differentiation (e.g. Newton-Raphson). The error function is always positive, so I can rule out bisection and similar root-bracketing methods. Instead, I tried a simple approximation search. Unfortunately, that failed miserably. I then tried Tabu search, followed by a genetic algorithm, and several others. They all failed horribly.
So, I decided to do some investigating. As it turns out the plot of the error function vs dt looks a bit like a square root, only shifted right depending upon the distance from the nodes that the signal source is:
[Plot: dt on the horizontal axis, error on the vertical axis]
And, in hindsight, of course it does! I defined the error function to involve square roots so, at least to me, this seems reasonable.
What to do?
So, my issue now is, how do I determine the dt corresponding to the minimum of the error function?
My first (very crude) attempt was to get some points on the error graph (as above), fit them using numpy.polyfit, then feed the result to numpy.roots. That root corresponds to the dt. Unfortunately, this failed, too. I tried fitting with various degrees, and also with various numbers of points, up to a ridiculous number such that I may as well have just used brute force.
How can I determine the dt corresponding to the minimum of this error function?
Since we're dealing with high velocities (radio signals), it's important that the results be precise and accurate, as minor variances in dt can throw off the resulting point.
I'm sure that there's some infinitely simpler approach buried in what I'm doing here however, ignoring everything else, how do I find dt?
My requirements:
Speed is of utmost importance
I have access only to pure Python and NumPy in the environment where this will be run
EDIT:
Here's my code. Admittedly, a bit messy. Here, I'm using the polyfit technique. It will "simulate" a source for you, and compare results:
from numpy import poly1d, linspace, set_printoptions, array, linalg, triu_indices, roots, polyfit
from dataclasses import dataclass
from random import randrange
import math

@dataclass
class Vertexer:
    receivers: list

    # Defaults
    c = 299792  # signal speed, km/s

    # Receivers:
    # [x_1, y_1, z_1]
    # [x_2, y_2, z_2]
    # [x_3, y_3, z_3]
    # Solved:
    # [x, y, z]

    def error(self, dt, times):
        # Sum of squared differences between the arrival times predicted
        # for the solved point and the (normalized) input times
        solved = self.linear([time + dt for time in times])
        error = 0
        for time, receiver in zip(times, self.receivers):
            error += ((math.sqrt((solved[0] - receiver[0])**2 +
                                 (solved[1] - receiver[1])**2 +
                                 (solved[2] - receiver[2])**2) / self.c) - time)**2
        return error

    def linear(self, times):
        # Pairwise-difference linear system, solved by least squares
        # (the T column is dropped, i.e. emission time assumed to be 0)
        X = array(self.receivers)
        t = array(times)
        x, y, z = X.T
        i, j = triu_indices(len(x), 1)
        A = 2 * (X[i] - X[j])
        b = self.c**2 * (t[j]**2 - t[i]**2) + (X[i]**2).sum(1) - (X[j]**2).sum(1)
        solved, residuals, rank, s = linalg.lstsq(A, b, rcond=None)
        return solved

    def find(self, times):
        # Normalize times
        times = [time - min(times) for time in times]

        # Sample the error function
        y = []
        x = []
        dt = 1E-10
        for i in range(50000):
            x.append(self.error(dt * i, times))
            y.append(dt * i)

        # Fit dt as a polynomial in the error, then take its roots
        # (the polyfit technique described above)
        p = polyfit(array(x), array(y), 2)
        r = roots(p)

        return self.linear([time + r for time in times])
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
myVertexer = Vertexer([[x_1, y_1, z_1],[x_2, y_2, z_2],[x_3, y_3, z_3],[x_4, y_4, z_4]])
solution = myVertexer.find([t_1, t_2, t_3, t_4])
print(solution)
It seems like the Bancroft method applies to this problem? Here's a pure NumPy implementation.
# Implementation of the Bancroft method, following
# https://gssc.esa.int/navipedia/index.php/Bancroft_Method
import numpy as np

M = np.diag([1, 1, 1, -1])

def lorentz_inner(v, w):
    return np.sum(v * (w @ M), axis=-1)

B = np.array(
    [
        [x_1, y_1, z_1, c * t_1],
        [x_2, y_2, z_2, c * t_2],
        [x_3, y_3, z_3, c * t_3],
        [x_4, y_4, z_4, c * t_4],
    ]
)
one = np.ones(4)
a = 0.5 * lorentz_inner(B, B)
B_inv_one = np.linalg.solve(B, one)
B_inv_a = np.linalg.solve(B, a)

for Lambda in np.roots(
    [
        lorentz_inner(B_inv_one, B_inv_one),
        2 * (lorentz_inner(B_inv_one, B_inv_a) - 1),
        lorentz_inner(B_inv_a, B_inv_a),
    ]
):
    x, y, z, c_t = M @ np.linalg.solve(B, Lambda * one + a)
    print("Candidate:", x, y, z, c_t / c)
My answer might have glaring mistakes, as I had not heard the term TDoA before this afternoon. Please double-check that the method is right.
I could not find a solution to your original problem of finding the dt corresponding to the minimum error. My answer also deviates from the requirement that no third-party library other than NumPy be used (I used SymPy, and largely adapted the code from here). However, I am still posting this, thinking that somebody someday might find it useful, if all one is interested in is finding the X, Y, Z of the source emitter. This method also does not account for real-life complications such as white noise or measurement errors, nor for the curvature of the Earth.
Your initial test conditions are as below.
from random import randrange
import math
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
My solution is as below.
import sympy as sym
X,Y,Z = sym.symbols('X,Y,Z', real=True)
f = sym.Eq((x_1 - X)**2 +(y_1 - Y)**2 + (z_1 - Z)**2 , (c*t_1)**2)
g = sym.Eq((x_2 - X)**2 +(y_2 - Y)**2 + (z_2 - Z)**2 , (c*t_2)**2)
h = sym.Eq((x_3 - X)**2 +(y_3 - Y)**2 + (z_3 - Z)**2 , (c*t_3)**2)
i = sym.Eq((x_4 - X)**2 +(y_4 - Y)**2 + (z_4 - Z)**2 , (c*t_4)**2)
print("Solved coordinates are ", sym.solve([f,g,h,i],X,Y,Z))
The print statement from your initial conditions gave:
Actual: 111 553 110
and the solution that almost instantly came out was
Solved coordinates are [(111.000000000000, 553.000000000000, 110.000000000000)]
Sorry again if something is totally amiss.
I am currently trying to write some python code to solve an arbitrary system of first order ODEs, using a general explicit Runge-Kutta method defined by the values alpha, gamma (both vectors of dimension m) and beta (lower triangular matrix of dimension m x m) of the Butcher table which are passed in by the user. My code appears to work for single ODEs, having tested it on a few different examples, but I'm struggling to generalise my code to vector valued ODEs (i.e. systems).
In particular, I try to solve a Van der Pol oscillator ODE (reduced to a first order system) using Heun's method defined by the Butcher Tableau values given in my code, but I receive the errors
"RuntimeWarning: overflow encountered in double_scalars f = lambda t,u: np.array(... etc)" and
"RuntimeWarning: invalid value encountered in add kvec[i] = f(t+alpha[i]*h,y+h*sum)"
followed by my solution vector that is clearly blowing up. Note that the commented out code below is one of the examples of single ODEs that I tried and is solved correctly. Could anyone please help? Here is my code:
import numpy as np

def rk(t, y, h, f, alpha, beta, gamma):
    '''Runge-Kutta iteration'''
    return y + h * phi(t, y, h, f, alpha, beta, gamma)

def phi(t, y, h, f, alpha, beta, gamma):
    '''Phi function for the Runge-Kutta iteration'''
    m = len(alpha)
    count = np.zeros(len(f(t, y)))
    kvec = k(t, y, h, f, alpha, beta, gamma)
    for i in range(1, m + 1):
        count = count + gamma[i - 1] * kvec[i - 1]
    return count

def k(t, y, h, f, alpha, beta, gamma):
    '''returning a vector containing each step k_{i} in the m step Runge-Kutta method'''
    m = len(alpha)
    kvec = np.zeros((m, len(f(t, y))))
    kvec[0] = f(t, y)
    for i in range(1, m):
        sum = np.zeros(len(f(t, y)))
        for l in range(1, i + 1):
            sum = sum + beta[i][l - 1] * kvec[l - 1]
        kvec[i] = f(t+alpha[i]*h,y+h*sum)
    return kvec

def timeLoop(y0, N, f, alpha, beta, gamma, h, rk):
    '''function that loops through time using the RK method'''
    t = np.zeros([N + 1])
    y = np.zeros([N + 1, len(y0)])
    y[0] = y0
    t[0] = 0
    for i in range(1, N + 1):
        y[i] = rk(t[i - 1], y[i - 1], h, f, alpha, beta, gamma)
        t[i] = t[i - 1] + h
    return t, y

#################################################################

'''f = lambda t,y: (c-y)**2
Y = lambda t: np.array([(1+t*c*(c-1))/(1+t*(c-1))])
h0 = 1
c = 1.5
T = 10
alpha = np.array([0,1])
gamma = np.array([0.5,0.5])
beta = np.array([[0,0],[1,0]])
eff_rk = compute(h0,Y(0),T,f,alpha,beta,gamma,rk, Y,11)'''

# constants
mu = 100
T = 1000
h = 0.01
N = int(T / h)

# initial conditions
y0 = 0.02
d0 = 0
init = np.array([y0, d0])

# Butcher tableau for Heun's method
alpha = np.array([0, 1])
gamma = np.array([0.5, 0.5])
beta = np.array([[0, 0], [1, 0]])

# rhs of the ode system
f = lambda t, u: np.array([u[1], mu * (1 - u[0]**2) * u[1] - u[0]])

# solving the system
time, sol = timeLoop(init, N, f, alpha, beta, gamma, h, rk)
print(sol)
Your step size is not small enough. The Van der Pol oscillator with mu=100 is a fast-slow system with very sharp turns at the switching of the modes, so rather stiff. With explicit methods this requires small step sizes, the smallest sensible step size is 1e-5 to 1e-6. You get a solution on the limit cycle already for h=0.001, with resulting velocities up to 150.
You can reduce some of that stiffness by using a different velocity/impulse variable. In the equation
x'' - mu*(1-x^2)*x' + x = 0
you can combine the first two terms into a derivative,
mu*v = x' - mu*(1-x^2/3)*x
so that
x' = mu*(v+(1-x^2/3)*x)
v' = -x/mu
The second equation is now uniformly slow close to the limit cycle, while the first has long relatively straight jumps when v leaves the cubic v=x^3/3-x.
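For illustration, the transformed system can be dropped straight into the question's timeLoop with the same Heun tableau (a sketch; the conversion of the initial x'(0) to the new variable follows from mu*v = x' - mu*(x - x^3/3)):

import numpy as np

mu = 100

# Transformed Van der Pol system in (x, v) variables, as derived above
f = lambda t, u: np.array([mu * (u[1] + (1 - u[0]**2 / 3) * u[0]),
                           -u[0] / mu])

# Same Heun tableau and step size as in the question
alpha = np.array([0, 1])
gamma = np.array([0.5, 0.5])
beta = np.array([[0, 0], [1, 0]])
h, T = 0.01, 1000

# Convert the original initial conditions x(0) = 0.02, x'(0) = 0
y0, d0 = 0.02, 0.0
v0 = d0 / mu - (y0 - y0**3 / 3)
init = np.array([y0, v0])

time, sol = timeLoop(init, int(T / h), f, alpha, beta, gamma, h, rk)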
This integrates nicely with the original h=0.01, keeping the solution inside the box [-3,3]x[-2,2], even if it shows some strange oscillations that are not present for smaller step sizes and the exact solution.
Is there a way to access what the current time step is in scipy.integrate.odeint?
I am trying to solve a system of ODEs where the form of the ode depends on whether or not a population will be depleted. Basically I take from population x provided x doesn't go below a threshold. If the amount I need to take this timestep is greater than that threshold I will take all of x to that point and the rest from z.
I am trying to do this by checking how much I will take this time step, and then allocating between populations x and z in the DEs.
To do this I need to be able to access the step size within the ODE solver to calculate what will be taken this time step. I am using scipy.integrate.odeint - is there a way to access the time step within the function defining the odes?
Alternatively, can you access what the last time was in the solver? I know it won't necessarily be the next time step, but it's likely a good enough approximation for me if that is the best I can do. Or is there another option I've not thought of to do this?
The below MWE is not my system of equations, but it's what I could come up with to illustrate what I'm doing. The problem is that on the first time step, if the time step were 1, the population would go too low; but since the time step will actually be small, you can initially take everything from x.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
plt.interactive(False)
tend = 5
tspan = np.linspace(0.0, tend, 1000)
A = 3
B = 4.09
C = 1.96
D = 2.29
def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt
init = ([0.75,0.95,100])
sol = odeint(odefunc, init, tspan, hmax = 0.01)
x = sol[:, 0]
y = sol[:, 1]
z = sol[:, 2]
plt.figure(1)
plt.plot(tspan,x)
plt.plot(tspan,y)
plt.plot(tspan,z)
Of course you can hack something together that might work. You could log t, but you have to be aware that the values might not be monotonically increasing. This depends on the ODE algorithm and how it works (forward, backward, or central finite differences). But it will give you an idea of whereabouts you are.
logger = [] # visible in odefunc

def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    print(t)
    logger.append(t)
    if logger:                 # if the list is not empty
        if logger[-1] > 2.5:   # then read the last value
            print('hua!')
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt
print(logger)
As pointed out in the other answer, time may not be strictly increasing at each call to the ODE function in odeint, especially for stiff problems.
The most robust way to handle this kind of discontinuity in the ODE function is to use an event to find the location of the zero of (A * x - B * x * y) - 0.6 in your example. For a discontinuous solution, use a terminal event to stop the computation precisely at the zero, and then change the ODE function. In solve_ivp you can do this with the events parameter. See the solve_ivp documentation and specifically the examples related to the cannonball trajectories. odeint does not support events, but solve_ivp has an LSODA method available that calls the same Fortran library as odeint.
Here is a short example, but you may want to additionally check that sol1 reached the terminal event before solving for sol2.
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

tend = 10

def discontinuity_zero(t, y):
    return y[0] - 10

discontinuity_zero.terminal = True

def ode_func1(t, y):
    return y

def ode_func2(t, y):
    return -y**2

sol1 = solve_ivp(ode_func1, t_span=[0, tend], y0=[1],
                 events=discontinuity_zero, rtol=1e-8)
t1 = sol1.t[-1]
y1 = [sol1.y[0, -1]]
print(f'time={t1} y={y1} discontinuity_zero={discontinuity_zero(t1, y1)}')

sol2 = solve_ivp(ode_func2, t_span=[t1, tend], y0=y1, rtol=1e-8)

plt.plot(sol1.t, sol1.y[0, :])
plt.plot(sol2.t, sol2.y[0, :])
plt.show()
This prints the following, where the time of the discontinuity is accurate to 7 digits.
time=2.302584885712467 y=[10.000000000000002] discontinuity_zero=1.7763568394002505e-15
The problem is that I would like to be able to integrate the differential equations starting for each point of the grid at once instead of having to loop over the scipy integrator for each coordinate. (I'm sure there's an easy way)
As background: I'm trying to solve for the trajectories of a Couette flow that alternates the direction of its velocity with a certain period, a well-known dynamical system that produces chaos. I don't think the rest of the code really matters as much as the integration with scipy and my usage of numpy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp
start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10
T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)
condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)
i = 0
for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time / T)):
        t = steps * T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1
np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is :
ValueError: y0 must be 1-dimensional.
And it comes from the 'solve_ivp' function of scipy because it doesn't accept the format of the numpy function meshgrid. I know I could run some loops and get over it but I'm assuming there must be a 'good' way to do it using numpy and scipy. I accept advice for the rest of the code too.
Yes, you can do that, in several variants. The question remains whether it is advisable.
To implement a generally usable ODE integrator, it needs to be abstracted from the models. Most implementations do that by making the state space a flat-array vector space; some allow a vector-space engine to be passed as a parameter, so that structured state spaces can be used. The scipy integrators are not of this type.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X,Y): return np.concatenate([X.flatten(),Y.flatten()])
def decode(U): return U.reshape([2,grid_points,grid_points])
Then you can implement the ODE function as
def eq(t, U):
    X, Y = decode(U)
    Vec = V * np.sin(2 * np.pi * Y / L)
    if int(t / T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
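For example (a sketch; the evaluation grid is illustrative):

sol = solve_ivp(eq, (0, total_run_time), U0,
                t_eval=np.arange(0, total_run_time + 1))
# each column of sol.y is one flattened ensemble state
X_end, Y_end = decode(sol.y[:, -1])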
Why this might not be such a good idea: thinking of each grid point and its trajectory separately, each trajectory has its own sequence of adapted time steps for the given error level. When integrating all of them simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus, while the individual trajectories might have only short intervals with very small step sizes amid long intervals with sparse time steps, these can overlap in the ensemble to result in very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation; odeint is a Fortran code with wrappers, so half a solution. JiTCODE translates to C code and links with the compiled solver behind odeint. Leaving Python, there are sundials, the diffeq module of julia-lang, or boost::odeint.
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide a MWE to reproduce your problem. You said "I don't think the rest of the code really matters", but the extra code makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood:
by defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e., the "start" and "stop" times as a 2-tuple. Here your time_steps is 30-long (for the first loop), so I would probably replace it by (t, t + T). Look for t_span in the doc.
from what I understand, it seems like you want to control each iteration of the numerical resolution: that's not how solve_ivp works. Moreover, I think you want to switch the function eq at each iteration. Since you have to pass "the right hand side" of the equation, you would need to wrap this behavior inside a function. It would not work (see right after), but in terms of concept, something like this:
def RHS(t, x):
    # unwrap your variables; condition is like an additional variable of your
    # problem, with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # compute new result for condition
    condition_out = not(condition)
    return [x0_out, x1_out, condition_out]
This would not work because the evolution of condition doesn't satisfy the continuity/differentiability properties that the solver assumes. So condition is rather a boolean switch that parametrizes the model, and we can use global to control the state of this boolean:
condition = True
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which is actually the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out], that would be 2).
A MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# initial conditions for x0, x1, and condition
initial = [0.5, 0.5]
condition = True

# time span
t_span = (0, 100)

# constants
V = 1
L = 1

# define the "model", i.e. the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,   # Right Hand Side of the equation(s)
                     t_span,   # time span, a 2-tuple
                     initial,  # initial conditions
                     )

fig, ax = plt.subplots()
ax.plot(solution.t, solution.y[0], label="x0")
ax.plot(solution.t, solution.y[1], label="x1")
ax.legend()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand, we can't. Again, quoting the doc:
y0 : array_like, shape (n,) : Initial state. The solver's initial condition only allows one starting point vector.
So to answer the initial question : I don't think you can "integrate the differential equations starting for each point of the grid at once".
I'm implementing a very simple Susceptible-Infected-Recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for particular values of a parameter, and I'm stumped as to why. The code I'm using is as follows:
import numpy as np
from pylab import *
import scipy.integrate as spi
#Parameter Values
S0 = 99.
I0 = 1.
R0 = 0.
PopIn= (S0, I0, R0)
beta= 0.50
gamma=1/10.
mu = 1/25550.
t_end = 15000.
t_start = 1.
t_step = 1.
t_interval = np.arange(t_start, t_end, t_step)
#Solving the differential equation. Solves over t for initial conditions PopIn
def eq_system(PopIn, t):
    '''Defining SIR System of Equations'''
    # Creating an array of equations
    Eqs = np.zeros((3))
    Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
    Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
    Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
    return Eqs
SIR = spi.odeint(eq_system, PopIn, t_interval)
This produces the following error:
lsoda-- at current t (=r1), mxstep (=i1) steps
taken on this call before reaching tout
In above message, I1 = 500
In above message, R1 = 0.7818108252072E+04
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
Normally when I encounter a problem like this, there's something terminally wrong with the equation system I've set up, but I can't see anything wrong with it. Weirdly, it also works if you change mu to something like 1/15550. In case it was something wrong with the system, I also implemented the model in R as follows:
require(deSolve)
sir.model <- function (t, x, params) {
  S <- x[1]
  I <- x[2]
  R <- x[3]
  with (
    as.list(params),
    {
      dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
      dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
      dR <- gamma*I - mu*R
      res <- c(dS, dI, dR)
      list(res)
    }
  )
}

times <- seq(0, 15000, by=1)
params <- c(
  beta <- 0.50,
  gamma <- 1/10,
  mu <- 1/25550
)
xstart <- c(S = 99, I = 1, R = 0)
out <- as.data.frame(lsoda(xstart, times, sir.model, params))
This also uses lsoda, but seems to be going off without a hitch. Can anyone see what's going wrong in the Python code?
I think that for the parameters you've chosen you're running into problems with stiffness - due to numerical instability the solver's step size is getting pushed into becoming very small in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to stiff methods.
Very crudely you can just massively increase the maximum allowed steps and the solver will get there in the end:
SIR = spi.odeint(eq_system, PopIn, t_interval,mxstep=5000000)
A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:
import numpy as np
from pylab import *
import scipy.integrate as spi
def run():
    # Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn = (S0, I0, R0)
    beta = 0.50
    gamma = 1/10.
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    # Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t, PopIn):
        '''Defining SIR System of Equations'''
        # Creating an array of equations
        Eqs = np.zeros((3))
        Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    ode = spi.ode(eq_system)

    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode', nsteps=500, method='bdf')
    ode.set_initial_value(PopIn, t_start)

    ts = []
    ys = []
    while ode.successful() and ode.t < t_end:
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)

    t = np.vstack(ts)
    s, i, r = np.vstack(ys).T

    fig, ax = subplots(1, 1)
    ax.plot(t, s, label='Susceptible')
    ax.plot(t, i, label='Infected')
    ax.plot(t, r, label='Recovered')
    ax.set_xlim(t_start, t_end)
    ax.set_ylim(0, 100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0, fancybox=True)
    return t, s, i, r, fig, ax
Output: [plot of the Susceptible, Infected, and Recovered populations over time]
The infected population PopIn[1] decays to zero. Apparently, (normal) numerical imprecision leads to PopIn[1] becoming negative (approx. -3.549e-12) near t=322.9. Then eventually the solution blows up near t=7818.093, with PopIn[0] going toward +infinity and PopIn[1] going toward -infinity.
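A quick way to observe this (a sketch reusing the question's eq_system, PopIn, and t_interval, with mxstep raised so the run completes):

SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
neg = np.flatnonzero(SIR[:, 1] < 0)   # indices where the infected count is negative
if neg.size:
    print('I first goes negative at t =', t_interval[neg[0]], 'I =', SIR[neg[0], 1])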
Edit: I removed my earlier suggestion for a "quick fix". It was a questionable hack.