Differential equations in Python: single time step

I would like to be able to do something extra at each time step while solving an ODE (using scipy.integrate). Is there a way to do so? Can I somehow write my own time loop and call a single e.g. Runge-Kutta step myself? Is there such a routine in Python, or would I have to come up with my own? I think there must be one, since odeint etc. have to use such a function internally. So the question is: how do I access it?
It should look something along these lines:
from scipy.integrate import *
from pylab import *

def deriv(y, t):
    a = -2.0
    b = -0.1
    return array([y[1], a*y[0] + b*y[1]])

time = linspace(0.0, 10.0, 1000)
dt = 10.0/(1000 - 1)
yinit = array([0.0005, 0.2])

for t in time:
    # do something here, write into a file or whatever
    y[t] = yinit                                      # pseudocode: y is a placeholder
    yinit = RungeKutta(deriv, yinit, t, dt, varargs)  # pseudocode: no such routine

I have now come up with this:
from pylab import *
from scipy.integrate import *

def RHS(t, x):
    return -x

min_t = 0.0
max_t = 10.0
num_t = 100  # an integer, since linspace expects an integer sample count
grid_t = linspace(min_t, max_t, num_t)
grid_dt = (max_t - min_t)/(num_t - 1)

y = zeros(num_t, dtype=complex)
y[0] = complex(1.0, 0.0)

solver = complex_ode(RHS)
solver.set_initial_value(y[0], grid_t[0]).set_integrator('dopri5')

for idx in range(1, num_t):
    solver.integrate(solver.t + grid_dt)
    y[idx] = solver.y[0]
Here I can do whatever I want during the integration.
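Since then, SciPy has also exposed its stepper classes directly (e.g. scipy.integrate.RK45), so you can write the time loop yourself and advance exactly one step at a time. A minimal sketch, assuming a reasonably recent SciPy; note the step sizes are chosen adaptively, so they will not be equally spaced:

import numpy as np
from scipy.integrate import RK45

def deriv(t, y):
    # same system as above, but with the (t, y) argument order RK45 expects
    a, b = -2.0, -0.1
    return np.array([y[1], a*y[0] + b*y[1]])

solver = RK45(deriv, t0=0.0, y0=np.array([0.0005, 0.2]), t_bound=10.0)
while solver.status == 'running':
    solver.step()                       # advance exactly one adaptive step
    # do something with solver.t and solver.y here, e.g. write to a file
    print(solver.t, solver.y)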

Related

How to compute the derivative graph of a Python plot

I am working on the following code, which solves a system of coupled differential equations. I have been able to solve them, and I have plotted one of the solutions. Now I would like to compute and plot the derivative of this graph numerically (I know the derivative is given by the first function, but suppose I didn't have that). I was thinking that I could use a for-loop, but is there a faster way?
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import math

def hiv(x, t):
    kr1 = 1e5
    kr2 = 0.1
    kr3 = 2e-7
    kr4 = 0.5
    kr5 = 5
    kr6 = 100
    h = x[0]  # healthy cells -- function of time
    i = x[1]  # infected cells -- function of time
    v = x[2]  # virus -- function of time
    p = kr3 * h * v
    dhdt = kr1 - kr2*h - p
    didt = p - kr4*i
    dvdt = -p - kr5*v + kr6*i
    return [dhdt, didt, dvdt]

print(hiv([1e6, 0, 100], 0))

x0 = [1e6, 0, 100]            # initial conditions
t = np.linspace(0, 15, 1000)  # time in years
x = odeint(hiv, x0, t)        # columns are the functions H(t), I(t), V(t)
h = x[:, 0]
i = x[:, 1]
v = x[:, 2]

plt.semilogy(t, h)
plt.show()
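A common loop-free approach (not from the original post) is numpy's np.gradient, which differentiates sampled data using central differences. A minimal sketch, assuming the arrays t and h from the code above:

dhdt = np.gradient(h, t)   # numerical derivative of H(t), central differences

plt.plot(t, dhdt)
plt.xlabel('time in years')
plt.ylabel('dH/dt')
plt.show()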

Access current time step in scipy.integrate.odeint within the function

Is there a way to access what the current time step is in scipy.integrate.odeint?
I am trying to solve a system of ODEs where the form of the ode depends on whether or not a population will be depleted. Basically I take from population x provided x doesn't go below a threshold. If the amount I need to take this timestep is greater than that threshold I will take all of x to that point and the rest from z.
I am trying to do this by checking how much I will take this time step, and then allocating between populations x and z in the DEs.
To do this I need to be able to access the step size within the ODE solver to calculate what will be taken this time step. I am using scipy.integrate.odeint - is there a way to access the time step within the function defining the odes?
Alternatively, can you access what the last time was in the solver? I know it won't necessarily be the next time step, but it's likely a good enough approximation for me if that is the best I can do. Or is there another option I've not thought of to do this?
The MWE below is not my system of equations, but it's what I could come up with to illustrate what I'm doing. The problem is that if the time step were 1, the population would go too low on the first step; but since the timestep will actually be small, you can initially take everything from x.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

plt.interactive(False)

tend = 5
tspan = np.linspace(0.0, tend, 1000)
A = 3
B = 4.09
C = 1.96
D = 2.29

def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt

init = [0.75, 0.95, 100]
sol = odeint(odefunc, init, tspan, hmax=0.01)
x = sol[:, 0]
y = sol[:, 1]
z = sol[:, 2]

plt.figure(1)
plt.plot(tspan, x)
plt.plot(tspan, y)
plt.plot(tspan, z)
Of course you can hack something together that might work.
You could log t, but you have to be aware that the values are not guaranteed to be monotonically increasing: depending on the algorithm, the solver may evaluate the right-hand side at trial points and can reject and retry steps.
Still, it will give you an idea of whereabouts you are.
logger = []  # module-level list, visible inside odefunc

def odefunc(P, t):
    x = P[0]
    y = P[1]
    z = P[2]
    print(t)
    logger.append(t)
    if logger:                   # if the list is not empty
        if logger[-1] > 2.5:     # then read the last value
            print('hua!')
    if A * x - B * x * y < 0.6:
        dxdt = A/5 * x
        dydt = -C * y + D * x * y
        dzdt = -B * z * y
    else:
        dxdt = A * x - B * x * y
        dydt = -C * y + D * x * y
        dzdt = 0
    dPdt = np.ravel([dxdt, dydt, dzdt])
    return dPdt

print(logger)
As pointed out in the other answer, time may not be strictly increasing at each call to the ODE function in odeint, especially for stiff problems.
The most robust way to handle this kind of discontinuity in the ODE function is to use an event to find the location of the zero of (A * x - B * x * y) - 0.6 in your example. For a discontinuous solution, use a terminal event to stop the computation precisely at the zero, and then change the ODE function. In solve_ivp you can do this with the events parameter. See the solve_ivp documentation, specifically the examples related to the cannonball trajectories. odeint does not support events, but solve_ivp has an LSODA method available that calls the same Fortran library as odeint.
Here is a short example, but you may want to additionally check that sol1 reached the terminal event before solving for sol2.
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

tend = 10

def discontinuity_zero(t, y):
    return y[0] - 10
discontinuity_zero.terminal = True

def ode_func1(t, y):
    return y

def ode_func2(t, y):
    return -y**2

sol1 = solve_ivp(ode_func1, t_span=[0, tend], y0=[1],
                 events=discontinuity_zero, rtol=1e-8)
t1 = sol1.t[-1]
y1 = [sol1.y[0, -1]]
print(f'time={t1} y={y1} discontinuity_zero={discontinuity_zero(t1, y1)}')

sol2 = solve_ivp(ode_func2, t_span=[t1, tend], y0=y1, rtol=1e-8)

plt.plot(sol1.t, sol1.y[0, :])
plt.plot(sol2.t, sol2.y[0, :])
plt.show()
This prints the following, where the time of the discontinuity is accurate to 7 digits.
time=2.302584885712467 y=[10.000000000000002] discontinuity_zero=1.7763568394002505e-15
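For the check mentioned above: solve_ivp sets status to 1 when a termination event ended the integration, and records the event times in t_events, so the guard could be a sketch like this:

if sol1.status == 1:                 # a terminal event stopped the solver
    t1 = sol1.t_events[0][0]         # time of the discontinuity
    y1 = [sol1.y[0, -1]]
    sol2 = solve_ivp(ode_func2, t_span=[t1, tend], y0=y1, rtol=1e-8)
else:
    print('integration reached tend without hitting the discontinuity')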

Is there a way to easily integrate a set of differential equations over a full grid of points?

The problem is that I would like to be able to integrate the differential equations starting from each point of the grid at once, instead of having to loop over the scipy integrator for each coordinate. (I'm sure there's an easy way.)
As background: I'm trying to solve for the trajectories of a Couette flow that alternates the direction of the velocity every certain period, which is a well-known dynamical system that produces chaos. I don't think the rest of the code matters as much as the integration with scipy and my usage of numpy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp

start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10

T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)

condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)
i = 0
for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time/T)):
        t = steps*T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1

np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is:
ValueError: y0 must be 1-dimensional.
It comes from scipy's solve_ivp function, because it doesn't accept the format produced by numpy's meshgrid. I know I could run some loops and get around it, but I'm assuming there must be a 'good' way to do it using numpy and scipy. I'd also welcome advice on the rest of the code.
Yes, you can do that, in several variants. The question remains whether it is advisable.
To be generally usable, an ODE integrator needs to be abstracted from the models. Most implementations do that by treating the state space as a flat-array vector space; some allow a vector-space engine to be passed as a parameter, so that structured state spaces can be used. The scipy integrators are not of that kind.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X, Y):
    return np.concatenate([X.flatten(), Y.flatten()])

def decode(U):
    return U.reshape([2, grid_points, grid_points])
Then you can implement the ODE function as
def eq(t, U):
    X, Y = decode(U)
    Vec = V * np.sin(2 * np.pi * Y / L)  # the question's x[1] is the y-coordinate
    if int(t/T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
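For illustration, a minimal sketch of that call (my addition, not part of the original answer: the switching period T used inside eq must be defined, and dense_output lets you sample the trajectory afterwards):

from scipy.integrate import solve_ivp

T = 10                                    # assumed switching period
sol = solve_ivp(eq, (0, total_run_time), U0, dense_output=True)

ts = np.linspace(0, total_run_time, 301)
# decode back to the structured (2, grid_points, grid_points) states
states = np.array([decode(sol.sol(t)) for t in ts])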
Why this might not be such a good idea: Thinking of each grid point and its trajectory separately, each trajectory has its own sequence of adapted time steps for the given error level. In integrating all of them simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus while an individual trajectory might have only short intervals with very small step sizes amid long intervals with sparse time steps, these intervals can overlap in the ensemble and result in very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation. odeint is Fortran code with wrappers, so half a solution; JiTCODE translates the ODE function to C code and links it against the compiled solver behind odeint. Leaving Python, you get SUNDIALS, the DifferentialEquations.jl package of julia-lang, or boost::odeint.
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide an MWE that reproduces your problem. You said "I don't think the rest of the code really matters", but including it all makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood:
by defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e. the "start" and "stop" times as a 2-tuple. Here your time_steps is 30 elements long (for the first loop), so I would probably replace it with (t, t + T). Look for t_span in the docs.
from what I understand, it seems you want to control each iteration of the numerical solution: that's not how solve_ivp works. Moreover, I think you want to switch the function eq at each iteration. Since you have to pass the right-hand side of the equation, you would need to wrap this behavior inside a function. It would not work (see right after), but conceptually it would be something like this:
def RHS(t, x):
    # unwrap your variables; condition is like an additional variable
    # of your problem, with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # compute new result for condition
    condition_out = not condition
    return [x0_out, x1_out, condition_out]
This would not work, because the evolution of condition doesn't satisfy the mathematical properties of differentiability/continuity. So condition is really a boolean switch that parametrizes the model; we can use global to control the state of this boolean:
condition = True

def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which is actually the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out], that would be 2).
A MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# initial conditions for x0, x1, and condition
initial = [0.5, 0.5]
condition = True

# time span
t_span = (0, 100)

# constants
V = 1
L = 1

# define the "model", i.e. the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition == 1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,   # right-hand side of the equation(s)
                     t_span,   # time span, a 2-tuple
                     initial)  # initial conditions

fig, ax = plt.subplots()
ax.plot(solution.t, solution.y[0], label="x0")
ax.plot(solution.t, solution.y[1], label="x1")
ax.legend()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand, we can't: again quoting the doc,
y0 : array_like, shape (n,) — Initial state. The solver's initial condition only allows one starting-point vector.
So, to answer the initial question: I don't think you can "integrate the differential equations starting for each point of the grid at once".

Using finite difference in python

I am trying to use Python with NumPy to solve a basic equation using the finite difference method. The code gives me the correct first value for a, i.e. it gives me a[1]; however, every value after that is just zero.
I don't know what I'm doing wrong because it obviously works for the first value so how do I fix this?
Any ideas would be very helpful.
from numpy import *
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy.integrate import odeint

def solver(omega_m, dt):
    # t_0, H_0, a_0, dt, n, T are always the same; the omegas change
    t_0 = 0.0004
    a_0 = 0.001
    H_0 = 1./13.7
    T = 13.7
    dt = float(dt)
    n = int(round((T - t_0)/dt))
    x = zeros(n+1)
    t = linspace(t_0, T, n+1)
    x[0] = a_0
    for i in range(0, n):
        x[i+1] = x[i] + H_0 * omega_m**(1./2.) * x[i]**(-1./2.) * dt
        return x, t

a, t = solver(omega_m=1, dt=0.001)
print(a, t)
Your function returns after the first iteration because your return statement is inside the for loop. You should dedent the return statement so your loop does not terminate prematurely:
    for i in range(0, n):
        x[i+1] = x[i] + H_0 * omega_m**(1./2.) * x[i]**(-1./2.) * dt
    return x, t
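Since the question already imports odeint, the corrected loop is easy to cross-check against a library solver. A sketch (not part of the original answer), assuming the fixed solver above:

import numpy as np
from scipy.integrate import odeint

def rhs(a, t, omega_m=1.0, H_0=1./13.7):
    # same right-hand side: da/dt = H_0 * sqrt(omega_m) * a**(-1/2)
    return H_0 * omega_m**0.5 * a**(-0.5)

a_fe, t = solver(omega_m=1, dt=0.001)     # forward Euler from above
a_ref = odeint(rhs, 0.001, t).ravel()     # reference solution with a_0 = 0.001
print(abs(a_fe - a_ref).max())            # the discrepancy shrinks with dt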

Intersection between Gaussians

I'm just trying to plot two Gaussians and to find their intersection point. I have the following code, but it's not plotting the exact intersection, and I really cannot figure out why. It's just barely slightly off. I worked through the derived solution (taking the log of the subtracted Gaussians), and it seems like it should be correct. Can anyone help? Thank you so much!
import numpy as np
import matplotlib.pyplot as plt

def plot_normal(x, mean=0, sigma=1):
    return 1.0/(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))

# found online
def solve_gasussians(m1, s1, m2, s2):
    a = 1.0/(2.0*s1**2) - 1.0/(2.0*s2**2)
    b = m2/(s2**2) - m1/(s1**2)
    c = m1**2/(2*s1**2) - m2**2/(2.0*s2**2) - np.log(s2/s1)
    return np.roots([a, b, c])

s1 = np.linspace(0, 10, 300)
s2 = np.linspace(0, 14, 300)

solved_val = solve_gasussians(5.0, 0.5, 7.0, 1.0)
print(solved_val)
solved_val = solved_val[0]

plt.figure('Baseline Distributions')
plt.title('Baseline Distributions')
plt.xlabel('Response Rate')
plt.ylabel('Probability')
plt.plot(s1, plot_normal(s1, 5.0, 0.5), 'r', label='s1')
plt.plot(s2, plot_normal(s2, 7.0, 1.0), 'b', label='s2')
plt.plot(solved_val, plot_normal(solved_val, 7.0, 1.0), 'mo')
plt.legend()
plt.show()
You have a small bug in the plot_normal function: you are missing the square root in the denominator. The proper version:
def plot_normal(x, mean=0, sigma=1):
    return 1.0/np.sqrt(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))
gives the expected result (plot omitted).
And two remarks:
Remember that in general the equation can have 2 roots (two intersection points), and this is the case with the parameters you provided.
As far as I know, np.roots gives you an approximate result, but you can get the exact result easily by rewriting the solve_gasussians function as:
def solve_gasussians(m1, s1, m2, s2):
    # coefficients of the quadratic equation ax^2 + bx + c = 0
    a = s1**2.0 - s2**2.0
    b = 2 * (m1 * s2**2.0 - m2 * s1**2.0)
    c = m2**2.0 * s1**2.0 - m1**2.0 * s2**2.0 - 2 * s1**2.0 * s2**2.0 * np.log(s1/s2)
    x1 = (-b + np.sqrt(b**2.0 - 4.0 * a * c)) / (2.0 * a)
    x2 = (-b - np.sqrt(b**2.0 - 4.0 * a * c)) / (2.0 * a)
    return x1, x2
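As a quick sanity check (not in the original answer), the closed-form roots should agree with the np.roots result from the question's coefficients, since the two quadratics differ only by a constant factor:

m1, s1, m2, s2 = 5.0, 0.5, 7.0, 1.0
exact = solve_gasussians(m1, s1, m2, s2)
approx = np.roots([1.0/(2*s1**2) - 1.0/(2*s2**2),
                   m2/s2**2 - m1/s1**2,
                   m1**2/(2*s1**2) - m2**2/(2*s2**2) - np.log(s2/s1)])
print(sorted(exact), sorted(approx))   # both pairs should match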
I don't know where the mistake lies in your code. But I think I found the code you borrowed from, and made part of the adjustment you need.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def solve(m1, m2, std1, std2):
    a = 1/(2*std1**2) - 1/(2*std2**2)
    b = m2/(std2**2) - m1/(std1**2)
    c = m1**2/(2*std1**2) - m2**2/(2*std2**2) - np.log(std2/std1)
    return np.roots([a, b, c])

m1 = 5
std1 = 0.5
m2 = 7
std2 = 1

result = solve(m1, m2, std1, std2)
x = np.linspace(-5, 9, 10000)

plot1 = plt.plot(x, [norm.pdf(_, m1, std1) for _ in x])
plot2 = plt.plot(x, [norm.pdf(_, m2, std2) for _ in x])
plot3 = plt.plot(result[0], norm.pdf(result[0], m1, std1), 'o')
plt.show()
I will offer two pieces of unsolicited advice that might make life easier for you (in the way they do for me):
When you adapt code try to make small, incremental changes and check that the code still works at each step.
Look for existing free libraries. In this case norm from scipy is a good replacement for what was used in the original code.
The mistake is here. This line:
def plot_normal(x, mean=0, sigma=1):
    return 1.0/(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))
Should be this:
def plot_normal(x, mean=0, sigma=1):
    return 1.0/np.sqrt(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))
You forgot the sqrt.
It would be wiser to use a pre-existing normal pdf if that's available, such as:
import scipy.stats

def plot_normal(x, mean=0, sigma=1):
    return scipy.stats.norm.pdf(x, loc=mean, scale=sigma)
It's also possible to solve for the intersections exactly. This answer provides a quadratic equation for the roots of the Gaussians' intersections. Using Maxima to solve it for x gives the expression below, which, while complicated, does not rely on iterative methods and can be generated automatically from simpler expressions.
def solve_gaussians(m1, s1, m2, s2):
    x1 = (s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2) + 2*s1**2*np.log(s1/s2) + m2**2 - 2*m1*m2 + m1**2) + m1*s2**2 - m2*s1**2)/(s2**2 - s1**2)
    x2 = -(s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2) + 2*s1**2*np.log(s1/s2) + m2**2 - 2*m1*m2 + m1**2) - m1*s2**2 + m2*s1**2)/(s2**2 - s1**2)
    return x1, x2
Putting it all together gives:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats

def plot_normal(x, mean=0, sigma=1):
    return scipy.stats.norm.pdf(x, loc=mean, scale=sigma)

# Use the equation from https://stats.stackexchange.com/a/12213/12116, solved for x
def solve_gaussians(m1, s1, m2, s2):
    x1 = (s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2) + 2*s1**2*np.log(s1/s2) + m2**2 - 2*m1*m2 + m1**2) + m1*s2**2 - m2*s1**2)/(s2**2 - s1**2)
    x2 = -(s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2) + 2*s1**2*np.log(s1/s2) + m2**2 - 2*m1*m2 + m1**2) - m1*s2**2 + m2*s1**2)/(s2**2 - s1**2)
    return x1, x2

s = np.linspace(0, 14, 300)
x = solve_gaussians(5.0, 0.5, 7.0, 1.0)

plt.figure('Baseline Distributions')
plt.title('Baseline Distributions')
plt.xlabel('Response Rate')
plt.ylabel('Probability')
plt.plot(s, plot_normal(s, 5.0, 0.5), 'r', label='s1')
plt.plot(s, plot_normal(s, 7.0, 1.0), 'b', label='s2')
plt.plot(x[0], plot_normal(x[0], 5., 0.5), 'mo')
plt.plot(x[1], plot_normal(x[1], 5., 0.5), 'mo')
plt.legend()
plt.show()
Giving: (plot of both Gaussians with the two intersection points marked)
