An error in odeint program - python

I wrote a program that uses odeint to solve a differential equation, but it has a problem. When I set Cosmopara to np.array([70.0,0.3,0,-1.0,0]), it gives the warnings "invalid value encountered in sqrt" and "invalid value encountered in double_scalars" on the line h = np.sqrt(y1**2 + Omega_M*t**(-3) + Omega_DE*y2). I checked that line and didn't find any error. With Cosmopara = np.array([70.0,0.3,0.0,-1.0,0.0]), Y shouldn't change, but it does. On the other hand, with Cosmopara = np.array([70.0,0.3,0.1,-1.0,0.1]) the program gives the right result.
What am I doing wrong?
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
global CosmoPara
Cosmopara = np.array([70.0,0.3,0.0,-1.0,0.0])
def derivfun(Y, t):
    Omega_M = Cosmopara[1]
    Sigma_0 = Cosmopara[2]
    omega = Cosmopara[3]
    delta = Cosmopara[4]
    Omega_DE = 1 - Omega_M - Sigma_0**2
    y1 = Y[0]
    y2 = Y[1]
    h = np.sqrt(y1**2 + Omega_M*t**(-3) + Omega_DE*y2)
    dy1dt = -3.0*y1/t + (delta*Omega_DE*y2)/(t*h)
    dy2dt = -(3.0*(1+omega) + 2.0*delta*y1/h)*y2/t
    return np.array([dy1dt, dy2dt])
z = np.linspace(1,2.5,15001)
time = 1.0/z
Omega_M = Cosmopara[1]
Sigma_0 = Cosmopara[2]
omega = Cosmopara[3]
delta = Cosmopara[4]
Omega_DE = 1-Omega_M-Sigma_0**2
y1init = Sigma_0
y2init = 1.0
Yinit = np.array([y1init,y2init])
Y = odeint(derivfun,Yinit,time)
y1 = Y[:,0]
y2 = Y[:,1]
h = np.sqrt(y1**2 + Omega_M*time**(-3) + Omega_DE*y2)
plt.figure()
plt.plot(z,h)
plt.show()

The error is caused by the argument of sqrt() becoming negative, and the argument becomes negative because the time value t passed to the function suddenly drops to about -0.0000999.
Reason
I have tested several cases to find the reason. The time spacing of the values t passed to derivfun differs from the spacing of the global array time, even when odeint works fine. I guess this is a mechanism for speeding odeint up: when the derivatives dy1dt and dy2dt do not change quickly, the result can be treated as locally linear with a larger time spacing, so the solver increases the next step's size and finishes in fewer steps. In the case you provided, the derivatives dy1dt and dy2dt are always exactly zero and never change, and this situation produces the faulty time value.
Based on this result, we can conclude that odeint may evaluate the derivative function at time values outside the range of the time array the user supplies when the derivatives do not change. If you want to avoid this situation, you can use the global time array instead of the t argument passed to the function, but it will cost more time when you use odeint to solve ordinary differential equations.
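For example, a minimal sketch of a workaround that caps the internal step size so odeint cannot step outside the supplied array (hmax is a real odeint keyword; using the output-grid spacing as the cap is just one choice):

# Cap odeint's internal step size; here the cap is the output-grid spacing.
dt = abs(time[1] - time[0])
Y = odeint(derivfun, Yinit, time, hmax=dt)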
Code
The following code is the code you provided, with some lines added for testing. You can change the parameters and check the result.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
Cosmopara = np.array([70.0,0.3,0.1,-1.0,0.1])
i = 0
def derivfun(Y, t):
    global i
    Omega_M = Cosmopara[1]
    Sigma_0 = Cosmopara[2]
    omega = Cosmopara[3]
    delta = Cosmopara[4]
    Omega_DE = 1 - Omega_M - Sigma_0**2
    y1 = Y[0]
    y2 = Y[1]
    h = np.sqrt(y1**2 + Omega_M*t**(-3) + Omega_DE*y2)
    dy1dt = -3.0*y1/t + (delta*Omega_DE*y2)/(t*h)
    dy2dt = -(3.0*(1+omega) + 2.0*delta*y1/h)*y2/t
    print("Time: %14.12f / Global time: %14.12f / dy1dt: %8.5f / dy2dt: %8.5f" % (t, time[i], dy1dt, dy2dt))
    i += 1
    return np.array([dy1dt, dy2dt])
z = np.linspace(1,2.5,15001)
time = 1.0/z
Omega_M = Cosmopara[1]
Sigma_0 = Cosmopara[2]
omega = Cosmopara[3]
delta = Cosmopara[4]
Omega_DE = 1-Omega_M-Sigma_0**2
y1init = Sigma_0
y2init = 1.0
Yinit = np.array([y1init,y2init])
Y = odeint(derivfun,Yinit,time)
y1 = Y[:,0]
y2 = Y[:,1]
h = np.sqrt(y1**2 + Omega_M*time**(-3) + Omega_DE*y2)
plt.figure()
plt.plot(z,h)
plt.show()
Testing results
The following picture shows that the time values passed to the function and the global time values have different spacing even when the function works fine (using the parameters you provided):
The following picture shows that, when the derivatives dy1dt and dy2dt do not change (again using the parameters you provided), the time value passed to the function suddenly decreases to a value less than 0:

I had a similar problem. As previously stated, it seems to arise because odeint takes a shortcut by increasing the step size. I fixed this for myself by capping the step size, which you can do very easily:
y = odeint(model, x, t, hmax=max_step_size)  # max_step_size: the cap you choose
This does make the computation quite a bit slower, so for big computations over long time ranges you want to avoid it if possible. I found it also worked to decrease the time range, but this is not always possible. (In my case, I wanted a time range of 2000-20000 seconds, but I couldn't go higher than 200 before the values were all over the place.)

Related

Checking the result of solve_ivp with solve_bvp - solve_bvp problems

I am hoping to use scipy.integrate.solve_bvp to solve a 2nd order differential equation. I am checking my process against a previously solved equation, so that I can be confident in moving on to more complex equations.
We begin with the differential equation system:
f''(x) + f(x) - f(x)^3 = 0
subject to the boundary conditions
f(x=0) = 0
f(x -> infty) = gammaA
where gammaA is some constant between 0 and 1. I am finding numerical solutions for this and comparing them to a known analytic form (at least for gammaA = 1, a tanh function). For any given gammaA, we can integrate this equation once and utilise the BC at infinity.
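For reference, a sketch of that first integration: multiplying f'' + f - f^3 = 0 by f' and integrating once gives

(1/2)*f'^2 + (1/2)*f^2 - (1/4)*f^4 = C

and imposing f -> gammaA, f' -> 0 as x -> infty fixes C = (1/2)*gammaA^2 - (1/4)*gammaA^4, so

f' = ( gammaA^2*(1 - (1/2)*gammaA^2) - f^2*(1 - (1/2)*f^2) )**0.5

which is exactly the right-hand side dpsidx3 in the code below.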
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
gammaA = 0.9
xstart = 0.0
xend = 10
steps = 0.1
x = np.arange(xstart,xend,steps)
def dpsidx3(x, psi, gammaA):
    eq = (gammaA**2*(1 - (1/2)*gammaA**2) - psi**2*(1 - (1/2)*psi**2))**0.5
    return eq
psi0 = 0
x0 = xstart
x1 = xend
sol = solve_ivp(dpsidx3, [x0, x1], y0 = [psi0], args = (gammaA,), dense_output=True, rtol = 1e-9)
plotsol = sol.sol(x)
plt.plot(x, plotsol.T,marker = "", linestyle="--",label = r"Numerical solution - $solve\_ivp$")
plt.xlabel('x')
plt.ylabel('psi')
plt.legend()
plt.show()
If gammaA is not 1, then there are some runtime warnings but the shape is exactly as expected.
However, the ODE in the solve_ivp code has been manipulated into a form which is a 1st order ODE; for further work (with more complex and variable coefficients in the ODE), this will not be possible. Hence, I am trying to solve the boundary value problem using solve_bvp.
I am now trying to solve the same ODE, but I am not getting the same result as with the solution above; the documentation on how to use solve_bvp effectively is unclear to me! Here is my attempt so far:
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_bvp
gammaA = 0.9
xstart = 0.0
xend = 10
steps = 0.1
def fun(x, u):
    du1 = u[1]  # u[1] = u2, u[0] = u1
    du2 = u[0]**3 - u[0]
    return np.vstack((du1, du2))

def bc(ua, ub):
    return np.array([ua[0], ub[0] - gammaA])
x = np.linspace(xstart, xend, 10)
print(x.size)
y_a = np.zeros((2, x.size))
y_a[0] = np.linspace(0, gammaA, 10)
y_a[0] = gammaA
res_a = solve_bvp(fun, bc, x, y_a, max_nodes=100000, tol=1e-9)
print(res_a)
x_plot = np.linspace(0, xend, 100)
y_plot_a = res_a.sol(x_plot)[0]
fig2,ax2= plt.subplots()
ax2.plot(x_plot, y_plot_a, label=r'BVP solve')
ax2.legend()
ax2.set_xlabel("x")
ax2.set_ylabel("psi")
I have tried to write the 2nd order ODE as a system of 1st order ODEs and to set the correct boundary conditions at the end of the system (rather than at infinity). I expected a similar tanh function (where I could say that beyond the end of the system my solution is simply gammaA, as expected from the asymptote), but it is clear that I am not getting this for any value of gammaA. Any advice gratefully appreciated: how can I reproduce the result of solve_ivp with solve_bvp?
EDIT: extra thoughts.
Can I add an additional constraint to my problem to ensure that the solution has a stationary point at the edge, or is a monotonically increasing solution? The plots look okay for gammaA = 1, but do not show the correct behaviour for any other value, unlike solve_ivp.
EDIT2: comparative figures, showing poor agreement for e.g. gammaA = 0.8 but good agreement for gammaA = 1.
You are making unfounded assumptions about the mathematical nature of this equation. There is an energy functional
E = u'^2 + u^2 - 0.5*u^4 - 0.5 = u'^2 - 0.5*(u^2-1)^2
The solution that you first computed is at energy level 0.
For any smaller, negative energy level (roughly inside the unit circle) you get periodic oscillating solutions; these have no limit at infinity.
For larger, positive energy levels the solutions are unbounded, will rapidly grow larger, possibly diverge in finite time. Also here the limit at infinity either does not exist, as there is no solution connecting the initial point to large times, or the limit is infinity itself.
Forcing boundary conditions against this nature might work, but will not give a stable solution.
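As a quick numerical check (a hedged sketch, assuming res_a is the solve_bvp result from the code above): E is conserved along exact solutions, so evaluating it on the numerical solution shows both how well the solver did and which family it converged to (E = 0 for the tanh-like connecting orbit, E < 0 for the periodic oscillations):

# E = u'^2 - 0.5*(u^2 - 1)^2 should be (approximately) constant along
# the solution; its level identifies the solution family.
xs = np.linspace(xstart, xend, 200)
u, du = res_a.sol(xs)              # u and u' from the solve_bvp interpolant
E = du**2 - 0.5*(u**2 - 1)**2
print(E.min(), E.max())            # 0 for the tanh-like orbit, < 0 for oscillations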

How can I optimize this code in python? For solving stochastic differential equations

I am developing code that uses the Platen method to solve stochastic differential equations. I then must solve that stochastic differential equation many times (on the order of 10,000 times) and average all the results. My code is:
import numpy as np
import random
import numba
@numba.jit(nopython=True)
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26      # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    for i in range(len(t)):
        dW = c*random.gauss(0, 1)
        A = np.array([y[i, -1]/m, -gamma*y[i, -1]])  # this is the Platen method that is applied at
        B_dW = np.array([0, b*dW])                   # each time step
        z = y[i] + A*h + B_dW
        Az = np.array([z[-1]/m, -gamma*z[-1]])
        y[i+1] = y[i] + 1/2*(Az + A)*h + B_dW
    return y
def media(args):  # args is a tuple with the parameters
    y = args[0]
    t = args[1]
    k = args[2]
    x = 0
    p = 0
    for n in range(k):  # k = number of trajectories
        y = integrador2(y, t, h)
        x = (1./(n+1))*(n*x + y[:, 0])  # I do the average like this so as not to have to save all the
        p = (1./(n+1))*(n*p + y[:, 1])  # solutions in memory
    return x, p
The variables y, t and h are:
y0 = np.array([initial position, initial moment]) #initial conditions
t = np.linspace(initial time, final time, number of time intervals) #time array
y = np.zeros((len(t)+1,len(y0))) #array of positions and moments
y[0,:]=np.array(y0) #I keep the initial condition
h = (final time-initial time)/(number of time intervals) #time increment
I need to be able to run the program for 10**7 time intervals and solve it 10**4 times (k = 10**4).
I feel I have already reached a dead end: I already accelerate the function that calculates the result with Numba, and (although I do not show it here) I parallelise the media function to work on the four cores my computer has. Even with all this, my program takes an hour and a half to execute for 10**6 time intervals and k = 10**4, and I have not had the courage to run it for 10**7 time intervals because my intuition tells me it would take more than 10 hours.
I would really appreciate if someone could advise me to make some parts of the code faster.
Finally, I apologize if I have not expressed myself completely correctly in any part of the question, I am a physicist, not a computer scientist and my English is far from perfect.
I can save about 75% of compute time by simplifying the math in the loop:
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26      # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    h = h * 1.
    coeff0 = h/m - gamma*h**2/(2.*m)
    coeff1 = (1. - gamma*h + gamma**2*h**2/2.)
    coeffd = c*b*(1. - gamma*h/2.)
    for i in range(len(t)):
        dW = np.random.normal()
        # Method 2
        y[i+1] = np.array([y[i][0] + y[i][1]*coeff0, y[i][1]*coeff1 + dW*coeffd])
    return y
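Where these coefficients come from (a sketch of the algebra, with the state written as (x, p)): substituting the predictor z = y + A*h + B_dW into the corrector for the linear drift A = (p/m, -gamma*p) collapses the two-stage update into

p[i+1] = p[i]*(1 - gamma*h + gamma**2*h**2/2) + b*dW*(1 - gamma*h/2)
x[i+1] = x[i] + p[i]*(h/m)*(1 - gamma*h/2) + (h/(2*m))*b*dW

which matches coeff1, coeffd and coeff0 above (dW = sqrt(h)*N(0,1), so sqrt(h) is folded into coeffd as c). Note that the last noise term in the position update, of order h**1.5, is dropped in this simplified loop.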
Here's a method using filters with scipy, which I don't think is compatible with Numba, but is slightly faster than the solution above:
from scipy import signal

# @numba.jit(nopython=True)  # not Numba-compatible (uses scipy.signal)
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26      # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    h = h * 1.
    coeff0a = 1.
    coeff0b = h/m - gamma*h**2/(2.*m)
    coeff1 = (1. - gamma*h + gamma**2*h**2/2.)
    coeffd = c*b*(1. - gamma*h/2.)
    noise = np.zeros(y.shape[0])
    noise[1:] = np.random.normal(0., coeffd*1., y.shape[0]-1)
    noise[0] = y[0, 1]
    a = [1, -coeff1]
    b = [1]
    y[1:, 1] = signal.lfilter(b, a, noise)[1:]
    a = [1, -coeff0a]
    b = [coeff0b]
    y[1:, 0] = signal.lfilter(b, a, y[:, 1])[1:]
    return y
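For reference (my reading of the code above): with b = [1] and a = [1, -coeff1], signal.lfilter evaluates exactly the first-order recurrence p[n] = coeff1*p[n-1] + noise[n] in compiled code, which is why it beats the explicit Python loop; the second lfilter call, with a = [1, -1] and b = [coeff0b], then accumulates coeff0b*p into the positions the same way.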

How can I stop my Runge-Kutta2 (Heun) method from exploding?

I am currently trying to write some python code to solve an arbitrary system of first order ODEs, using a general explicit Runge-Kutta method defined by the values alpha, gamma (both vectors of dimension m) and beta (a lower triangular m x m matrix) of the Butcher tableau, which are passed in by the user. My code appears to work for single ODEs, having tested it on a few different examples, but I'm struggling to generalise it to vector-valued ODEs (i.e. systems).
In particular, I try to solve the Van der Pol oscillator ODE (reduced to a first order system) using Heun's method, defined by the Butcher tableau values given in my code, but I receive the errors
"RuntimeWarning: overflow encountered in double_scalars f = lambda t,u: np.array(... etc)" and
"RuntimeWarning: invalid value encountered in add kvec[i] = f(t+alpha[i]*h,y+h*sum)"
followed by my solution vector, which is clearly blowing up. Note that the commented-out code below is one of the single-ODE examples I tried, and it is solved correctly. Could anyone please help? Here is my code:
import numpy as np
def rk(t, y, h, f, alpha, beta, gamma):
    '''Runge-Kutta iteration'''
    return y + h*phi(t, y, h, f, alpha, beta, gamma)

def phi(t, y, h, f, alpha, beta, gamma):
    '''phi function for the Runge-Kutta iteration'''
    m = len(alpha)
    count = np.zeros(len(f(t, y)))
    kvec = k(t, y, h, f, alpha, beta, gamma)
    for i in range(1, m+1):
        count = count + gamma[i-1]*kvec[i-1]
    return count

def k(t, y, h, f, alpha, beta, gamma):
    '''returns a vector containing each stage k_{i} of the m-stage Runge-Kutta method'''
    m = len(alpha)
    kvec = np.zeros((m, len(f(t, y))))
    kvec[0] = f(t, y)
    for i in range(1, m):
        sum = np.zeros(len(f(t, y)))
        for l in range(1, i+1):
            sum = sum + beta[i][l-1]*kvec[l-1]
        kvec[i] = f(t+alpha[i]*h, y+h*sum)
    return kvec

def timeLoop(y0, N, f, alpha, beta, gamma, h, rk):
    '''function that loops through time using the RK method'''
    t = np.zeros([N+1])
    y = np.zeros([N+1, len(y0)])
    y[0] = y0
    t[0] = 0
    for i in range(1, N+1):
        y[i] = rk(t[i-1], y[i-1], h, f, alpha, beta, gamma)
        t[i] = t[i-1] + h
    return t, y
#################################################################
'''f = lambda t,y: (c-y)**2
Y = lambda t: np.array([(1+t*c*(c-1))/(1+t*(c-1))])
h0 = 1
c = 1.5
T = 10
alpha = np.array([0,1])
gamma = np.array([0.5,0.5])
beta = np.array([[0,0],[1,0]])
eff_rk = compute(h0,Y(0),T,f,alpha,beta,gamma,rk, Y,11)'''
#constants
mu = 100
T = 1000
h = 0.01
N = int(T/h)
#initial conditions
y0 = 0.02
d0 = 0
init = np.array([y0,d0])
#Butcher Tableau for Heun's method
alpha = np.array([0,1])
gamma = np.array([0.5,0.5])
beta = np.array([[0,0],[1,0]])
#rhs of the ode system
f = lambda t,u: np.array([u[1],mu*(1-u[0]**2)*u[1]-u[0]])
#solving the system
time, sol = timeLoop(init,N,f,alpha,beta,gamma,h,rk)
print(sol)
Your step size is not small enough. The Van der Pol oscillator with mu=100 is a fast-slow system with very sharp turns at the switching of the modes, so rather stiff. With explicit methods this requires small step sizes, the smallest sensible step size is 1e-5 to 1e-6. You get a solution on the limit cycle already for h=0.001, with resulting velocities up to 150.
You can reduce some of that stiffness by using a different velocity/impulse variable. In the equation
x'' - mu*(1-x^2)*x' + x = 0
you can combine the first two terms into a derivative,
mu*v = x' - mu*(1-x^2/3)*x
so that
x' = mu*(v+(1-x^2/3)*x)
v' = -x/mu
The second equation is now uniformly slow close to the limit cycle, while the first has long relatively straight jumps when v leaves the cubic v=x^3/3-x.
This integrates nicely with the original h=0.01, keeping the solution inside the box [-3,3]x[-2,2], even if it shows some strange oscillations that are not present for smaller step sizes and the exact solution.
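A minimal sketch of that substitution using the code from the question (f2 and v0 are my names; it reuses timeLoop, rk and the Heun tableau defined there):

# Lienard-transformed Van der Pol system: the v equation is uniformly
# slow near the limit cycle, so explicit Heun copes with h = 0.01.
f2 = lambda t, u: np.array([mu*(u[1] + (1 - u[0]**2/3)*u[0]),
                            -u[0]/mu])
v0 = d0/mu - (y0 - y0**3/3)   # transformed initial condition (x' = d0)
time2, sol2 = timeLoop(np.array([y0, v0]), N, f2, alpha, beta, gamma, h, rk)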

Running code between timesteps using scipy's solve_ivp

I'm transitioning my code from scipy's old ode class to scipy's solve_ivp. With the old interface I used a while loop as follows:
while solver.successful():
    solver.integrate(t_final, step=True)
    # do other operations
This method allowed me to store values that depended on the solutions after each timestep.
I'm now switching to solve_ivp, but I'm not sure how to accomplish this functionality with the solve_ivp solver. Has anyone accomplished this with solve_ivp?
Thanks!
I think I know what you're trying to ask. I had a program that used solve_ivp to integrate between each time step individually, then used the values to calculate the values for the next iteration (i.e. heat transfer coefficients, transport coefficients, etc.). I used two nested for loops. The inner for loop calculates or completes the operations you need at each step; save each value in a list or array, and then the inner loop terminates. The outer loop should only be used to feed in time values and possibly to reload necessary constants.
For example:
for i in range(start_value, end_value, time_step):
    start_time = i
    end_time = i + time_step
    # load initial values and use the most recent values
    for j in range(0, 1, 1):
        answer = solve_ivp(function, (start_time, end_time), [initial_values])
        # save new values at the end of a list storing all calculated values
Say you have a system such as
d(Y1)/dt = a1*Y2 + Y1
d(Y2)/dt = a2*Y1 + Y2
and you want to solve it from t = 0 to 10 with a 0.1 time step, where a1 and a2 are values calculated or determined elsewhere. This code would work:
from scipy.integrate import solve_ivp
import sympy as sp
import numpy as np
import math
import matplotlib.pyplot as plt
def a1(n):
    return 1E-10*math.exp(n)

def a2(n):
    return 2E-10*math.exp(n)

def rhs(t, y, *args):
    a1, a2 = args
    return [a1*y[1] + y[0], a2*y[0] + y[1]]

Y1 = [0.02]
Y2 = [0.01]
A1 = []
A2 = []
endtime = 10
time_step = 0.1
times = np.linspace(0, endtime, int(endtime/time_step)+1)
tsymb = sp.symbols('t')
ysymb = sp.symbols('y')

for i in range(0, endtime, 1):
    for j in range(0, int(1/time_step), 1):
        tstart = i + j*time_step
        tend = i + j*time_step + time_step
        A1.append(a1(tstart/100))
        A2.append(a2(tstart/100))
        Y0 = [Y1[-1], Y2[-1]]
        args = [A1[-1], A2[-1]]
        answer = solve_ivp(lambda tsymb, ysymb: rhs(tsymb, ysymb, *args), (tstart, tend), Y0)
        Y1.append(answer.y[0][-1])
        Y2.append(answer.y[1][-1])
fig = plt.figure()
plt1 = plt.plot(times,Y1, label = "Y1")
plt2 = plt.plot(times,Y2, label = "Y2")
plt.xlabel('Time')
plt.ylabel('Y Values')
plt.legend()
plt.grid()
plt.show()
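As an aside, if you specifically want the old step-by-step pattern rather than nested loops, scipy also exposes the stepper classes behind solve_ivp (e.g. scipy.integrate.RK45); a hedged sketch with a placeholder right-hand side:

from scipy.integrate import RK45
import numpy as np

# Drive the stepper behind solve_ivp manually, mirroring the old
# `while solver.successful()` loop; the ODE here is only a placeholder.
solver = RK45(lambda t, y: -y, t0=0.0, y0=np.array([1.0]), t_bound=10.0)
while solver.status == 'running':
    solver.step()   # advance one internal step
    # do other operations with solver.t and solver.y here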

Is there a way to easily integrate a set of differential equations over a full grid of points?

The problem is that I would like to integrate the differential equations starting from every point of the grid at once, instead of having to loop over the scipy integrator for each coordinate (I'm sure there's an easy way).
As background, I'm trying to solve for the trajectories of a Couette flow that alternates the direction of the velocity each certain period; it is a well-known dynamical system that produces chaos. I don't think the rest of the code matters as much as the integration with scipy and my usage of numpy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp
start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10
T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)
condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)
i = 0
for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time/T)):
        t = steps*T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1
np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is:
ValueError: y0 must be 1-dimensional.
It comes from scipy's solve_ivp function, because it doesn't accept the format produced by numpy's meshgrid. I know I could get around this with some loops, but I'm assuming there must be a 'good' way to do it using numpy and scipy. I also welcome advice on the rest of the code.
Yes, you can do that, in several variants. The question remains whether it is advisable.
To implement a generally usable ODE integrator, it needs to be abstracted from the models. Most implementations do that by making the state space a flat-array vector space; some allow a vector-space engine to be passed as a parameter, so that structured vector spaces can be used. The scipy integrators are not of this type.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X,Y): return np.concatenate([X.flatten(),Y.flatten()])
def decode(U): return U.reshape([2,grid_points,grid_points])
Then you can implement the ODE function as
def eq(t, U):
    X, Y = decode(U)
    Vec = V * np.sin(2 * np.pi * Y / L)   # x[1] in the original eq corresponds to Y here
    if int(t/T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
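For example (a sketch; integrating over (0, total_run_time) and only decoding the final state are my choices, not the only option):

# Integrate the whole grid of initial conditions as one flat system.
sol = solve_ivp(eq, (0, total_run_time), U0, dense_output=True)
Xt, Yt = decode(sol.y[:, -1])   # decoded grid positions at the final time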
Why this might be not such a good idea: Thinking of each grid point and its trajectory separately, each trajectory has its own sequence of adapted time steps for the given error level. In integrating all simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus while the individual trajectories might have only short intervals with very small step sizes amid long intervals with sparse time steps, these can overlap in the ensemble to result in very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation; odeint is Fortran code with wrappers, so half a solution. JiTCODE translates to C code and links with the compiled solver behind odeint. Leaving python, you get sundials, the diffeq module of julia-lang, or boost::odeint.
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide a MWE to reproduce your problem. You said "I don't think the rest of the code really matters", but including it makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood :
by defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e. the "start" and "stop" times as a 2-tuple. Here your time_steps is 30 long (for the first loop), so I would replace it by (t, t + T). Look for t_span in the doc.
from what I understand, it seems like you want to control each iteration of the numerical resolution: that's not how solve_ivp works. Moreover, I think you want to switch the function eq at each iteration. Since you have to pass the right-hand side of the equation, you would need to wrap this behavior inside a function. It would not work (see right after), but in terms of concept it would be something like this:
def RHS(t, x):
    # unwrap your variables; condition is like an additional variable of your
    # problem, with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # compute new result for condition
    condition_out = not(condition)
    return [x0_out, x1_out, condition_out]
This would not work because the evolution of condition doesn't satisfy the mathematical properties of differentiability/continuity. So condition is like a boolean switch that parametrizes the model; we can use global to control the state of this boolean:
condition = True

def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which actually is the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out] that would be 2).
A MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
# initial conditions for x0, x1, and condition
initial = [0.5, 0.5]
condition = True
# time span
t_span = (0, 100)
# constants
V = 1
L = 1
# define the "model", ie the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,   # right-hand side of the equation(s)
                     t_span,   # time span, a 2-tuple
                     initial)  # initial conditions

fig, ax = plt.subplots()
ax.plot(solution.t, solution.y[0], label="x0")
ax.plot(solution.t, solution.y[1], label="x1")
ax.legend()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand we can't: again, quoting the doc:
y0 : array_like, shape (n,) : Initial state. The solver's initial condition only allows one starting-point vector.
So to answer the initial question : I don't think you can "integrate the differential equations starting for each point of the grid at once".
