I am working on the following code, which solves a system of coupled differential equations. I have been able to solve them, and I plotted one of them. I am curious how to compute and plot the derivative of this graph numerically (I know the derivative is given in the first function, but suppose I didn't have that). I was thinking that I could use a for-loop, but is there a faster way?
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import math
def hiv(x, t):
    kr1 = 1e5
    kr2 = 0.1
    kr3 = 2e-7
    kr4 = 0.5
    kr5 = 5
    kr6 = 100
    h = x[0]  # Healthy cells -- function of time
    i = x[1]  # Infected cells -- function of time
    v = x[2]  # Virus -- function of time
    p = kr3 * h * v
    dhdt = kr1 - kr2*h - p
    didt = p - kr4*i
    dvdt = -p - kr5*v + kr6*i
    return [dhdt, didt, dvdt]
print(hiv([1e6, 0, 100], 0))
x0 = [1e6, 0, 100] #initial conditions
t = np.linspace(0,15,1000) #time in years
x = odeint(hiv, x0, t) #vector of the functions H(t), I(t), V(t)
h = x[:,0]
i = x[:,1]
v = x[:,2]
plt.semilogy(t,h)
plt.show()
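One vectorized way to do this, with no explicit for-loop, is np.gradient, which computes a central-difference derivative of the sampled solution in a single call. This is just a sketch; the name dhdt_num is mine, not part of the original code.
# Numerical derivative of H(t) from the solution array itself -- central differences, no loop.
dhdt_num = np.gradient(h, t)
plt.plot(t, dhdt_num)
plt.xlabel('time (years)')
plt.ylabel('dH/dt (numerical)')
plt.show()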
I have converted an equation to a first-order ODE, and now I would like to solve the equations of motion for multiple periods with the given conditions.
The equations should be solved with the following initial values:
x(0), y(0), vx(0), vy(0) = 3, 1, 2, 1.3 × π
This is the first-order ODE, derived from the equation of motion given by Newton's law of gravitation for celestial bodies:
How can I solve this in Python? Could I use a Runge-Kutta or Kepler's method? I feel like I'm doing something wrong.
import math
import matplotlib.pyplot as plt
import numpy as np
g = 6.674 * 10**(-11)
M = 4*(math.pi**2)/g
x0 = 3 #x-position of the center or h
y0 = 1 #y-position of the center or k
vx0 = 2
vy0 = 1.3* math.pi
#Trying second order
def RK2(y0, f, tlist):
    t = [tlist[0]]
    tf = tlist[1]
    dt = tlist[2]
    y = [y0]
    while t[-1] < tf:
        k1 = dt * f(y[-1], t[-1])
        k2 = dt * f(y[-1] + 0.5*k1, t[-1] + 0.5*dt)  # take a half step
        y.append(y[-1] + k2)
        t.append(t[-1] + dt)
    return np.array(y), np.array(t)
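The ODE itself is not shown in the post, so purely as a sketch: assuming the usual two-body form d²r/dt² = -GM·r/|r|³ (consistent with the M = 4π²/g definition above), the state can be packed into a single vector [x, y, vx, vy] and passed to the RK2 routine. The function name gravity and the time step below are my own choices, not from the post.
def gravity(s, t):
    # s = [x, y, vx, vy]; assumed two-body law: acceleration = -g*M*r_vec/|r|**3
    x, y, vx, vy = s
    r3 = (x**2 + y**2)**1.5
    return np.array([vx, vy, -g*M*x/r3, -g*M*y/r3])

s0 = np.array([x0, y0, vx0, vy0])
sol, tvals = RK2(s0, gravity, [0.0, 30.0, 0.001])  # integrate 30 time units with dt = 0.001 (adjust as needed)

plt.plot(sol[:, 0], sol[:, 1])  # orbit in the x-y plane
plt.axis('equal')
plt.show()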
The goal is to plot two identical dynamical systems that are coupled.
We have:
X = [x0,x1,x2]
U = [u0,u1,u2]
And
Xdot = f(X) + alpha*(U-X)
Udot = f(U) + alpha*(X-U)
So I wish to plot the solution to this grand system on one set of axes (i.e in xyz for example) and eventually change the coupling strength to investigate the behaviour.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mpl_toolkits.mplot3d import Axes3D
def couple(s, t, a=0.2, beta=0.2, gamma=5.7, alpha=0.03):
    [x, u] = s
    [u0, u1, u2] = u
    [x0, x1, x2] = x
    xdot = np.zeros(3)
    xdot[0] = -x1 - x2
    xdot[1] = x0 + a*x1
    xdot[2] = beta + x2*(x0 - gamma)
    udot = np.zeros(3)
    udot[0] = -u1 - u2
    udot[1] = u0 + a*u1
    udot[2] = beta + u2*(u0 - gamma)
    sdot = np.zeros(2)
    sdot[0] = xdot + alpha*(u - x)
    sdot[1] = udot + alpha*(x - u)
    return sdot
s_init = [0.1,0.1]
t_init=0; t_final = 300; t_step = 0.01
tpoints = np.arange(t_init,t_final,t_step)
a=0.2; beta=0.2; gamma=5.7; alpha=0.03
y = odeint(couple, s_init, tpoints,args=(a,beta,gamma,alpha), hmax = 0.01)
I imagine that something is wrong with s_init, since it should be TWO initial-condition vectors, but when I try that I get "odeint: y0 should be one-dimensional." On the other hand, when I make s_init a 6-vector I get "too many values to unpack (expected two)." With the current setup, I am getting the error
File "C:/Users/Python Scripts/dynsys2019work.py", line 88, in couple
[u0,u1,u2] = u
TypeError: cannot unpack non-iterable numpy.float64 object
Cheers
*Edit: Please note this is basically my first time attempting this kind of thing, and I will be happy to receive further documentation and references.
In scipy's odeint, the ODE function takes in and returns a 1D vector. I think some of your confusion is that you actually have one system of ODEs with 6 variables; you have just mentally apportioned it into two separate, coupled ODEs.
You can do it like this:
import matplotlib.pyplot as plt
from scipy.integrate import odeint
import numpy as np
def couple(s, t, a=0.2, beta=0.2, gamma=5.7, alpha=0.03):
    x0, x1, x2, u0, u1, u2 = s
    x = np.array([x0, x1, x2])
    u = np.array([u0, u1, u2])
    xdot = np.zeros(3)
    xdot[0] = -x1 - x2
    xdot[1] = x0 + a*x1
    xdot[2] = beta + x2*(x0 - gamma)
    udot = np.zeros(3)
    udot[0] = -u1 - u2
    udot[1] = u0 + a*u1
    udot[2] = beta + u2*(u0 - gamma)
    # include the coupling terms alpha*(U - X) and alpha*(X - U) from the problem statement
    return np.ravel([xdot + alpha*(u - x), udot + alpha*(x - u)])
s_init = [0.1,0.1, 0.1, 0.1, 0.1, 0.1]
t_init=0; t_final = 300; t_step = 0.01
tpoints = np.arange(t_init,t_final,t_step)
a=0.2; beta=0.2; gamma=5.7; alpha=0.03
y = odeint(couple, s_init, tpoints,args=(a,beta,gamma,alpha), hmax = 0.01)
plt.plot(tpoints,y[:,0])
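Since the stated goal was to view the solution in xyz, a 3D plot of both subsystems on one set of axes can be added after the call to odeint. This is a sketch that reuses the y array computed above:
from mpl_toolkits.mplot3d import Axes3D  # needed for the 3D projection on older matplotlib versions

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(y[:, 0], y[:, 1], y[:, 2], label='X system')
ax.plot(y[:, 3], y[:, 4], y[:, 5], label='U system')
ax.legend()
plt.show()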
I've been trying to implement the skewed generalized t distribution in Python to model some financial returns. I based my code on formulas found on Wikipedia, and I used the beta function from scipy.special.
from scipy.special import beta
import numpy as np
from math import sqrt
def sgt(x, params):
    # This function accepts an array of 5 parameters [mu, sigma, lambda, p, q]
    mu, sigma, lam, p, q = params
    v = (q**(-1/p)) / (sqrt((3*lam*lam + 1)*beta(3/p, q - 2/p)/beta(1/p, q) - 4*lam*lam*(beta(2/p, q - 1/p)/(beta(1/p, q)))**2))
    m = 2*v*sigma*lam*q**(1/p)*beta(2/p, q - 1/p) / beta(1/p, q)
    fx = p / (2*v*sigma*(q**(1/p))*beta(1/p, q)*((abs(x-mu+m)**p/(q*(v*sigma)**p*(lam*np.sign(x-mu+m)+1)**p + 1)+1)**(1/p + q)))
    return fx
Now, the function seems to work perfectly fine for some sets of parameters, but terribly for other sets of parameters.
For example:
dx = 0.001
x_axis = np.arange(-10, 10, dx)
ok_parameters = [0, 2, 0, 3, 8]
bad_parameters = [0, 2, 0, 1.05, 2.1]
ok_distribution = sgt(x_axis, ok_parameters)
bad_distribution = sgt(x_axis, bad_parameters)
If I try to compute the integrals of those two distributions:
a = np.sum(ok_distribution*dx)
b = np.sum(bad_distribution*dx)
I obtain the results a = 1.0013233154393804 and b = 2.2799746093533346.
Now, in theory both of these should be 1, but since I approximated the integral I assume the value won't always be exactly 1. In the second case, however, I don't understand why the value is so high.
Does anyone know what the issue is?
These are the graphs of the ok distribution (blue) and bad distribution (orange)
I believe there was just a typo (though I couldn't pin down exactly where) in your definition of sgt. Here is an implementation that works.
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.special import beta
import numpy as np
from math import sqrt
from typing import Union
from scipy import integrate
# Generalised Student's t probability distribution
def generalized_student_t(x: Union[float, np.ndarray], mu: float, sigma: float,
                          lam: float, p: float, q: float) -> Union[float, np.ndarray]:
    v = q**(-1/p) * ((3*lam**2 + 1)*(beta(3/p, q - 2/p)/beta(1/p, q)) - 4*lam**2*(beta(2/p, q - 1/p)/beta(1/p, q))**2)**(-1/2)
    m = 2*v*sigma*lam*q**(1/p)*beta(2/p, q - 1/p)/beta(1/p, q)
    fx = p / (2*v*sigma*q**(1/p)*beta(1/p, q)*(abs(x-mu+m)**p/(q*(v*sigma)**p)*(lam*np.sign(x-mu+m)+1)**p + 1)**(1/p + q))
    return fx
def plot_cdf_pdf(x_axis: np.ndarray, pmf: np.ndarray) -> None:
    """
    Plot the PDF and CDF of the array returned from the function.
    """
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
    ax1.plot(x_axis, pmf)
    ax1.set_title('PDF')
    ax2.plot(x_axis, integrate.cumtrapz(x=x_axis, y=pmf, initial=0))
    ax2.set_title('CDF')
dx = 0.0001
x_axis = np.arange(-10, 10, dx)
# Create the two distributions
distribution1 = generalized_student_t(x=x_axis, mu=0, sigma=1, lam=0, p=2, q=100)
distribution2 = generalized_student_t(x=x_axis, mu=0, sigma=2, lam=0, p=1.05, q=2.1)
plot_cdf_pdf(x_axis=x_axis, pmf=distribution1)
plot_cdf_pdf(x_axis=x_axis, pmf=distribution2)
We can also check that the integrals of the PDFs are approximately 1:
integrate.simps(x=x_axis, y = distribution1)
integrate.simps(x=x_axis, y = distribution2)
We can see the results of the integrals are 0.99999999999999978 and 0.99752026308335162. The reason they are not exactly 1 is that the PDF integrates to 1 over (-infinity, infinity), while here we only integrated over the finite range [-10, 10).
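As a quick sanity check (my addition, not part of the original answer), integrating the heavy-tailed second distribution over a wider range recovers more of its tail mass and the result moves closer to 1:
# Integrate over a wider range to pick up more of the heavy tails of distribution2.
wide_x = np.arange(-100, 100, dx)
wide_distribution2 = generalized_student_t(x=wide_x, mu=0, sigma=2, lam=0, p=1.05, q=2.1)
print(integrate.simps(x=wide_x, y=wide_distribution2))  # closer to 1 than the integral over [-10, 10)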
I'm going through Strogatz's Nonlinear Dynamics and Chaos and I've hit a snag in chapter 2 Exercise 2.8.1. (Educator flag: I've graduated so this isn't for a class, I'm just trying to get back into the numerical solving of differential equations) It's a pretty simple differential equation and I can plot individual solution curves given different initial conditions but I'm trying to use quiver or streamplot to superimpose individual solutions on top of the vector field.
My problem is in understanding how to translate the vector field plots for similar problems in the dy/dx form found here over to the dx/dt form that's primarily tackled in Strogatz's book.
Given that the x vector that's defined in the logistic function is only one dimensional I'm having a hard time reasoning out how express the u and v flows in quiver or streamplot since the problem only seems to have a u flow. It's probably super easy and is being over-thought but any guidance or assistance would be much appreciated!
So far I have the following:
# 2.8.1
# Plot the vector field and some trajectories for xdot = x(1-x) given
# some different initial conditions for the logistic equation with carrying
# capacity K = 1
# dx/dt = x(1-x)
# Imports:
from __future__ import division
from scipy import *
import numpy as np
import pylab
import matplotlib as mp
from matplotlib import pyplot as plt
import sys
import math as mt
def logistic(x, t):
    return np.array([x[0]*(1-x[0])])
def RK4(t0 = 0, x0 = np.array([1]), t1 = 5, dt = 0.01, ng = None):
    tsp = np.arange(t0, t1, dt)
    Nsize = np.size(tsp)
    X = np.empty((Nsize, np.size(x0)))
    X[0] = x0
    for i in range(1, Nsize):
        k1 = ng(X[i-1], tsp[i-1])
        k2 = ng(X[i-1] + dt/2*k1, tsp[i-1] + dt/2)
        k3 = ng(X[i-1] + dt/2*k2, tsp[i-1] + dt/2)
        k4 = ng(X[i-1] + dt*k3, tsp[i-1] + dt)
        X[i] = X[i-1] + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return X
def tplot():
    t0 = 0
    t1 = 10
    dt = 0.02
    tsp = np.arange(t0, t1, dt)
    X = RK4(x0 = np.array([2]), t1 = 10, dt = 0.02, ng = logistic)
    Y = RK4(x0 = np.array([0.01]), t1 = 10, dt = 0.02, ng = logistic)
    Z = RK4(x0 = np.array([0.5]), t1 = 10, dt = 0.02, ng = logistic)
    P = RK4(x0 = np.array([3]), t1 = 10, dt = 0.02, ng = logistic)
    Q = RK4(x0 = np.array([0.1]), t1 = 10, dt = 0.02, ng = logistic)
    R = RK4(x0 = np.array([1.5]), t1 = 10, dt = 0.02, ng = logistic)
    O = RK4(x0 = np.array([1]), t1 = 10, dt = 0.02, ng = logistic)
    pylab.figure()
    pylab.plot(tsp, X)
    pylab.plot(tsp, Y)
    pylab.plot(tsp, Z)
    pylab.plot(tsp, P)
    pylab.plot(tsp, Q)
    pylab.plot(tsp, R)
    pylab.plot(tsp, O)
    pylab.title('Logistic Equation - K=1')
    pylab.xlabel('Time')
    pylab.ylabel('Xdot')
    pylab.show()
tplot()
[image of the resulting trajectory plot]
To graph a slope from a derivative (like, dx/dt), you can first find dx/dt, and then use a fixed dt to calculate dx. Then, at each (t, x) of interest, plot the little line segment from (t,x) to (t+dt, x+dx).
Here's an example for your equation dx/dt = x(1-x). (The Strogatz picture doesn't have arrowheads so I removed them too.)
import numpy as np
import matplotlib.pyplot as plt
times = np.linspace(0, 10, 20)
x = np.linspace(0 ,2, 20)
T, X = np.meshgrid(times, x) # make a grid that roughly matches the Strogatz grid
dxdt = X*(1-X) # the equation of interest
dt = .5*np.ones(X.shape) # a constant dt (0.5 keeps the segments from running into each other, given the spacing of the times array)
dx = dxdt * dt # given dt, now calc dx for the line segment
plt.quiver(T, X, dt, dx, headwidth=0., angles='xy', scale=15.)
plt.show()
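To superimpose solution curves on the vector field, which was the original goal, you can redraw the same field and add trajectories from odeint on the same axes. This is a sketch reusing the T, X, dt, dx arrays from above; the initial conditions are just examples:
from scipy.integrate import odeint

def logistic(x, t):
    return x*(1 - x)

t_traj = np.linspace(0, 10, 200)
for x_start in [0.01, 0.1, 0.5, 1.0, 1.5, 2.0]:  # example initial conditions
    traj = odeint(logistic, x_start, t_traj)
    plt.plot(t_traj, traj[:, 0])

plt.quiver(T, X, dt, dx, headwidth=0., angles='xy', scale=15.)
plt.ylim(0, 2)
plt.show()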
@wonkybadonk: The difference in slope between the plotted trajectories and the plotted vector field seems to be due to your vector field arrows not being steep enough. Make sure that
dx = dxdt*dt (point-by-point multiplication, not a dot product)
and that you pass angles='xy' as a quiver argument (see tom10's post above).
How do I numerically solve an ODE in Python?
Consider
\ddot{u}(\phi) = -u + \sqrt{u}
with the following conditions
u(0) = 1.49907
and
\dot{u}(0) = 0
with the constraint
0 <= \phi <= 7\pi.
Then finally, I want to produce a parametric plot where the x and y coordinates are generated as a function of u.
The problem is, I need to run odeint twice since this is a second order differential equation.
I tried having it run again after the first time but it comes back with a Jacobian error. There must be a way to run it twice all at once.
Here is the error:
odepack.error: The function and its Jacobian must be callable functions
which the code below generates. The line in question is the sol = odeint.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from numpy import linspace
def f(u, t):
    return -u + np.sqrt(u)
times = linspace(0.0001, 7 * np.pi, 1000)
y0 = 1.49907
yprime0 = 0
yvals = odeint(f, yprime0, times)
sol = odeint(yvals, y0, times)
x = 1 / sol * np.cos(times)
y = 1 / sol * np.sin(times)
plot(x,y)
plt.show()
Edit
I am trying to construct the plot on page 9 of Taylor's Classical Mechanics.
Here is the Mathematica code that produces the plot:
In[27]:= sol =
NDSolve[{y''[t] == -y[t] + Sqrt[y[t]], y[0] == 1/.66707928,
y'[0] == 0}, y, {t, 0, 10*\[Pi]}];
In[28]:= ysol = y[t] /. sol[[1]];
In[30]:= ParametricPlot[{1/ysol*Cos[t], 1/ysol*Sin[t]}, {t, 0,
7 \[Pi]}, PlotRange -> {{-2, 2}, {-2.5, 2.5}}]
import scipy.integrate as integrate
import matplotlib.pyplot as plt
import numpy as np
pi = np.pi
sqrt = np.sqrt
cos = np.cos
sin = np.sin
def deriv_z(z, phi):
    u, udot = z
    return [udot, -u + sqrt(u)]
phi = np.linspace(0, 7.0*pi, 2000)
zinit = [1.49907, 0]
z = integrate.odeint(deriv_z, zinit, phi)
u, udot = z.T
# plt.plot(phi, u)
fig, ax = plt.subplots()
ax.plot(1/u*cos(phi), 1/u*sin(phi))
ax.set_aspect('equal')
plt.grid(True)
plt.show()
The code from your other question is really close to what you want. Two changes are needed:
You were solving a different ODE (because you changed two signs inside function deriv)
The y component of your desired plot comes from the solution values, not from the values of the first derivative of the solution, so you need to use u[:, 0] (the function values) instead of u[:, 1] (the derivatives).
This is the end result:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def deriv(u, t):
    return np.array([u[1], -u[0] + np.sqrt(u[0])])
time = np.arange(0.01, 7 * np.pi, 0.0001)
uinit = np.array([1.49907, 0])
u = odeint(deriv, uinit, time)
x = 1 / u[:, 0] * np.cos(time)
y = 1 / u[:, 0] * np.sin(time)
plt.plot(x, y)
plt.show()
However, I suggest that you use the code from unutbu's answer because it's self-documenting (u, udot = z) and uses np.linspace instead of np.arange. Then run this to get your desired figure:
x = 1 / u * np.cos(phi)
y = 1 / u * np.sin(phi)
plt.plot(x, y)
plt.show()
You can use scipy.integrate.ode. To solve dy/dt = f(t,y), with initial condition y(t0)=y0, at time=t1 with 4th order Runge-Kutta you could do something like this:
from scipy.integrate import ode

solver = ode(f).set_integrator('dopri5')
solver.set_initial_value(y0, t0)
dt = 0.1
t = t0
while t < t1:
    y = solver.integrate(t + dt)
    t += dt
Edit: You have to reduce your equation to first order to use numerical integration. You can achieve this by setting, e.g., z1 = u and z2 = du/dt, after which you have dz1/dt = z2 and dz2/dt = d^2u/dt^2. Substitute these into your original equation, and then simply integrate the vector dZ/dt, which is first order.
Edit 2: Here's an example code for the whole thing:
import numpy as np
import matplotlib.pyplot as plt
from numpy import sqrt, pi, sin, cos
from scipy.integrate import ode
# use z = [z1, z2] = [u, u']
# and then f = z' = [u', u''] = [z2, -z1+sqrt(z1)]
def f(phi, z):
    return [z[1], -z[0] + sqrt(z[0])]
# initialize the 4th order Runge-Kutta solver
solver = ode(f).set_integrator('dopri5')
# initial value
z0 = [1.49907, 0.]
solver.set_initial_value(z0)
values = 1000
phi = np.linspace(0.0001, 7.*pi, values)
u = np.zeros(values)
for ii in range(values):
    u[ii] = solver.integrate(phi[ii])[0]  # z[0] = u
x = 1. / u * cos(phi)
y = 1. / u * sin(phi)
plt.figure()
plt.plot(x,y)
plt.grid()
plt.show()
The scipy.integrate module does ODE integration. Is that what you are looking for?