Solving an Involved Set of Coupled Differential Equations - python

I am trying to solve a set of complicated differential equations in Python. These equations contain five functions (defined in the function 'ODEs' in the code below) of a variable n (the Greek letter eta; I use n and eta interchangeably as variable names). The coupled differential equations also contain a function a, which is a function of a parameter t. Since n (eta) is itself a function of t, my first goal was to express, again numerically, a as a function of n (eta). To do that I solved a less involved pair of differential equations, defined in the function 'coupledODE'. I obtained plots of a(t) and n(t) and used interpolation to build a model relating a to n; this is a numerical estimate of a(n). I called the interpolation model 'f_quad' --- f for function, quad for quadratic interpolation.
The original five differential equations actually contain a'(n)/a(n), where ' denotes the derivative with respect to n. I numerically built an interpolator for a'(n) as well and called it 'deriv_quad'.
In 'ODEs' I model the five differential equations. As you can see in the code, the function body uses the interpolators 'f_quad' and 'deriv_quad', each called with argument n, to represent a(n) and a'(n) respectively. I then use SciPy's odeint to solve these differential equations numerically, but I get the error message:
'A value in x_new is above the interpolation range.' The traceback points to the line 'f = odeint(ODEs, initials, n_new, atol=1.0e-8, rtol=1.0e-6)'.
I am not sure why this happens; I used the same eta values that I used when building the interpolators 'f_quad' and 'deriv_quad'. Can anyone help me get rid of this error?
Here is the code:
import numpy as np
from scipy.misc import derivative
from scipy.integrate import odeint
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import math
def coupledODE(x, t):
    # define important constants
    m = 0.315
    r = 0.0000926
    d = 0.685
    H_0 = 67.4
    a = x[0]
    eta = x[1]
    dndt = a**(-1)
    dadt = H_0 * (m*a**(-1) + r*a**(-2) + d*a**2)**(1/2)
    return [dadt, dndt]
# initial conditions
x0 = [1e-7, 1e-8]
t = np.linspace(0,0.01778301154,7500)
x = odeint(coupledODE, x0, t, atol=1.0e-8, rtol=1.0e-6) #vector of the functions a(t), n(t)
a = x[:,0]
n = x[:,1] #Eta; n is the greek letter eta
plt.semilogx(t,a)
plt.xlabel('time')
plt.ylabel('a(t)')
plt.show()
plt.semilogx(t,n)
plt.xlabel('time')
plt.ylabel('Eta')
plt.show()
plt.plot(n,a)
plt.xlabel('Eta')
plt.ylabel('a(t)')
plt.show()
##############################################################################
# Calculate the Derivative a' (i.e. da/d(eta)) Numerically
##############################################################################
derivative_values = []
for i in range(7499):
    numerator = x[i+1,0] - x[i,0]
    denominator = x[i+1,1] - x[i,1]
    ratio = numerator / denominator
    derivative_values.append(ratio)
x_axis = n
x_axis = np.delete(x_axis, -1)
plt.plot(x_axis,derivative_values)
plt.xlabel('Eta')
plt.ylabel('Derivative of a')
plt.show()
##############################################################################
#Interpolation
##############################################################################
#Using quadratic interpolation
f_quad = interp1d(n, a, kind = 'quadratic')
deriv_quad = interp1d(x_axis, derivative_values, kind = 'quadratic')
n_new = np.linspace(1.0e-8, 0.0504473, num = 20000, endpoint = True)
plt.plot(n_new, f_quad(n_new))
plt.xlabel('Eta')
plt.ylabel('a')
plt.show()
plt.plot(n_new, deriv_quad(n_new))
plt.xlabel('Eta')
plt.ylabel('Derivative of a')
plt.show()
#print(x[0,1])
#print(eta_new[0])
#print(deriv_quad(1e-8))
##############################################################################
# The Main Coupled Equations
##############################################################################
def ODEs(x, n):
    fourPiG_3 = 1
    CDM_0 = 1
    Rad_0 = 1
    k = 0.005
    Phi = x[0]
    Theta1 = x[1]
    iSpeed = x[2]
    DeltaC = x[3]
    Theta0 = x[4]
    dPhi_dn = (fourPiG_3 * ((CDM_0 * DeltaC)/deriv_quad(n) + (4*Rad_0)/(f_quad(n)*deriv_quad(n)))
               - (k**2 * Phi * f_quad(n))/deriv_quad(n) - (deriv_quad(n) * Phi)/f_quad(n))
    dTheta1_dn = (k/3)*(Theta0 - Phi)
    diSpeed_dn = -k*Phi - (deriv_quad(n)/f_quad(n))*iSpeed
    dDeltaC_dn = -k*iSpeed - 3*dPhi_dn
    dTheta0_dn = -k*Theta1 - dPhi_dn
    return [dPhi_dn, dTheta1_dn, diSpeed_dn, dDeltaC_dn, dTheta0_dn]
hub_0 = deriv_quad(1e-8)/f_quad(1e-8)
#Now the initial conditions
Phi_k_0 = 1.0e-5
Theta1_k_0 = -1*Phi_k_0/hub_0 #Ask about the k
iSpeed_k_0 = 3*Theta1_k_0
DeltaC_k_0 = 1.5 * Phi_k_0
Theta0_k_0 = 0.5 * Phi_k_0
initials = [Phi_k_0, Theta1_k_0, iSpeed_k_0, DeltaC_k_0, Theta0_k_0]
####Error Happens Here ####
f = odeint(ODEs, initials, n_new, atol=1.0e-8, rtol=1.0e-6)
Phi = f[:,0]
Theta1 = f[:,1]
iSpeed = f[:,2]
DeltaC = f[:,3]
Theta0 = f[:,4]
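For reference, this kind of range error usually means odeint is evaluating f_quad or deriv_quad at an eta value slightly outside the interval covered by the data: the solver's internal trial steps can overshoot the last requested point, and the endpoint 0.0504473 of n_new may already lie beyond the largest eta produced by the first integration (deriv_quad in particular only covers eta up to the second-to-last sample). A minimal sketch of one way to make the interpolators tolerant of such points, assuming the arrays n, a, x_axis and derivative_values defined above:

# Sketch: allow spline extrapolation just outside the sampled eta range,
# and keep the output grid inside the range actually covered by the data.
f_quad = interp1d(n, a, kind='quadratic',
                  bounds_error=False, fill_value='extrapolate')
deriv_quad = interp1d(x_axis, derivative_values, kind='quadratic',
                      bounds_error=False, fill_value='extrapolate')
n_new = np.linspace(n.min(), x_axis.max(), num=20000, endpoint=True)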

Related

Explicit Euler method is unable to complete the solution to a system of differential equations

I am trying to solve the following nonlinear system of differential equations with the explicit Euler method:
x' = f1(x,y),
y' = f2(x,y)
I know that the curve corresponding to the solution must connect (x_initial, y_initial) to (0, 1) in the x-y plane, but the curve I obtain stops prematurely at around (0.17, 0.98). I have tried varying the parameters, but I cannot push that value any further towards (0, 1). At first I thought my equations become stiff towards the end point, but that does not seem to be the case from what I have read about stiff ODEs. What might be the problem?
The code I wrote in python is:
import math
import numpy as np
import matplotlib.pyplot as plt
q=1
#my f1 and f2 functions:
def l(lna, x, y, m, n, xi, yi):
    return n * m**(-1) * (np.divide((yi**2)*(np.float64(1) - np.power(x, 2) - np.power(y, 2))*np.exp(3*(lna - lnai)),
                                    (y**2)*(1 - xi**2 - yi**2)))**(-1/n)

def f1(x, y, l):
    return -3*x + l*np.sqrt(3/2)*y**2 + 3/2*x*(2*(x**2) + q*(1 - x**2 - y**2))

def f2(x, y, l):
    return -l*np.sqrt(3/2)*y*x + 3/2*y*(2*x**2 + q*(1 - x**2 - y**2))

#my code for the explicit Euler:
def e_E(xa, xb, dlna, m, n, xi, yi):
    N = int(round((lnaf - lnai)/dlna))
    lna = np.linspace(0, N*dlna, N+1)
    x = np.empty(N+1)
    y = np.empty(N+1)
    x[0], y[0] = xi, yi
    for i in range(N):
        sd = l(lna[i], x[i], y[i], m, n, xi, yi)
        x[i+1] = x[i] + dlna * f1(x[i], y[i], sd)
        y[i+1] = y[i] + dlna * f2(x[i], y[i], sd)
    return x, y, lna
#range for the independent variable (in my case it is lna)
lnai = np.float64(0)
lnaf = np.float64(15)
#step size
dlna = np.float64(1e-3)
#initial conditions
yi = np.float64(1e-5)
xi = 0
x1,y1,lna1 = e_E(lnai, lnaf, dlna, np.float64(0.1), np.float64(2), xi, yi)
plt.plot(x1,y1,'b',label = ('x1'))
plt.legend()
plt.grid()
plt.ylabel('y')
plt.xlabel('x')
plt.show()
My solution in the x-y plane and the full/correct solution were shown as plots (omitted here).
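As a cross-check on the hand-written Euler loop (a sketch on my part, not a claim about where the actual fault is), the same system can be integrated with scipy.integrate.solve_ivp, assuming the functions l, f1, f2 and the parameters defined above:

from scipy.integrate import solve_ivp

def rhs(lna, s, m, n, xi, yi):
    x, y = s
    sd = l(lna, x, y, m, n, xi, yi)
    return [f1(x, y, sd), f2(x, y, sd)]

sol = solve_ivp(rhs, (float(lnai), float(lnaf)), [xi, yi],
                args=(np.float64(0.1), np.float64(2), xi, yi),
                method='Radau', rtol=1e-8, atol=1e-10)
plt.plot(sol.y[0], sol.y[1], 'r--', label='solve_ivp (Radau)')
plt.legend(); plt.grid(); plt.xlabel('x'); plt.ylabel('y')
plt.show()

If the implicit Radau integration reaches (0, 1) while the explicit Euler loop stalls near (0.17, 0.98), step size or stiffness is the likely culprit; if it stalls in the same place, the equations themselves deserve a second look.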

How can I solve this matrix ODE using python?

I would like to numerically compute this ODE from time 0 -> T :
The ODE is a matrix Riccati equation, P'(t) = Q + transpose(Y)*P(t) + P(t)*Y + P(t)*U*P(t), with terminal condition P(T) = P_T, where all of the sub-matrices are numerically given in a paper. Here are all of the variables:
import numpy as np
from scipy import linalg

T = 1
eta = np.diag([2e-7, 2e-7])
R = [[0.33, 3.95],
     [-2.52, 10.23]]
R = np.array(R)
gamma = 2e-5
GAMMA = 100
S_bar = [54.23, 27.45]
cov = [[0.47, 0.2],
       [0.2, 0.14]]
cov = np.array(cov)
shape = cov.shape
Q = 0.5*np.block([[gamma*cov, R],
                  [np.transpose(R), np.zeros(shape)]])
Y = np.block([[np.zeros(shape), np.zeros(shape)],
              [gamma*cov, R]])
U = np.block([[-linalg.inv(eta), np.zeros(shape)],
              [np.zeros(shape), 2*gamma*cov]])
P_T = np.block([[-GAMMA*np.ones(shape), np.zeros(shape)],
                [np.zeros(shape), np.zeros(shape)]])
Now I define the function f so that P' = f(t, P):
n = len(P_T)
def f(t, X):
    X = X.reshape([n, n])
    return (Q + np.transpose(Y) @ X + X @ Y + X @ U @ X).reshape(-1)
Now my goal is to solve this ODE numerically. I am trying to find the right solve function such that, if I integrate the ODE from T to 0 and then, starting from the final value I get, integrate back from 0 to T, the two matrices I obtain are actually (nearly) the same. Here is my solve function:
from scipy import integrate
def solve(interval, initial_value):
    return integrate.solve_ivp(f, interval, initial_value, method="LSODA", max_step=1e-4)
Now I can test whether the computation is right:
solv = solve([T, 0], P_T.reshape(-1))
y = np.array(solv.y)
solv2 = solve([0, T], y[:, -1])
y2 = np.array(solv2.y)
# print(solv.status)
# print(solv2.status)
# these lines show the difference between the initial matrix at T and the final matrix computed at T
# the smaller the value, the better the computation
print(sum(sum(abs((P_T - y2[:, -1].reshape([n, n]))))))
My issue is: no matter what "solve" configuration I use (different methods, different step sizes, testing all the parameters...), I always get either errors or very bad convergence (the difference between the two matrices is too high).
Knowing that, according to the paper this ODE comes from (equation (23) in https://arxiv.org/pdf/2103.13773v4.pdf), a solution exists, how can I compute it numerically?
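One variation worth trying (a sketch under the definitions above, not a guaranteed fix for this particular Riccati equation) is to let the adaptive integrator control the step through tight tolerances with a high-order method, instead of capping max_step:

from scipy import integrate

def solve(interval, initial_value):
    return integrate.solve_ivp(f, interval, initial_value,
                               method="DOP853", rtol=1e-10, atol=1e-12)

solv = solve([T, 0], P_T.reshape(-1))
solv2 = solve([0, T], np.array(solv.y)[:, -1])
# round-trip error between P(T) and the matrix recovered at T
print(np.abs(P_T - np.array(solv2.y)[:, -1].reshape([n, n])).sum())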

Improve speed of gradient descent

I am trying to minimize a target function f(x) with scipy.optimize.minimize. But it usually takes 4-5 hours to run the code because f(x) involves a lot of computation with complex matrices. To improve its speed I want to use a GPU, and I have already tried the tensorflow package. Since I use numpy to define f(x), I have to convert it into tensorflow's format; however, it doesn't support the complex-matrix computation I need. What other package or approach could I use? Any suggestions?
To be specific about my problem, I will show the calculation scheme below:
1. Calculate the expectation value, where H = x*H_0 and x is the parameter.
2. Let phi evolve through the dynamics of the Schrödinger equation. A different H corresponds to a different phi_end, so the parameter x determines the expectation.
3. Change x and calculate the corresponding expectation.
4. Find the specific x that minimizes the expectation.
Here is a simple example of part of my code:
import numpy as np
import cmath
from scipy.linalg import expm
import scipy.optimize as opt
# create initial complex matrixes
N = 2 # Dimension of matrix
H = np.array([[1.0 + 1.0j] * N] * N) # a complex matrix with shape(N, N)
A = np.array([[0.0j] * N] * N)
A[0][0] = 1.0 + 1j
# calculate the expectation
def value(phi):
    exp_H = expm(H)  # put the matrix in the exp function
    new_phi = np.matmul(exp_H, phi)
    # calculate the expectation of the matrix
    x = np.matmul(H, new_phi)
    expectation = np.inner(np.conj(phi), x)
    return expectation
# Contants
tmax = 1
dt = 0.1
nstep = int(tmax/dt)
phi_init = [1.0 + 1.0j] * N
# 1st derivative of Schrödinger equation
def dXdt(t, phi, H):  # 1st derivative of the function
    return -1j * np.matmul(H, phi)

def f(X):
    phi = [[0j] * N] * nstep  # store every time step's phi
    phi[0] = phi_init
    # phi goes through the dynamics of the Schrödinger equation
    for i in range(nstep - 1):
        phi[i + 1] = phi[i] - dXdt(i * dt, X[i] * H, phi[i]) * dt
    # calculate the corresponding value
    f_result = value(phi[-1])
    return f_result
# Initialize the parameter
X0 = np.array(np.ones(nstep))
results = opt.minimize(f, X0) # minimize the target function
opt_x = results.x
PS:
Python Version: 3.7
Operating System: Win 10
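One possible direction (an assumption on my part, not verified against the full f(x)): JAX runs NumPy-style code on a GPU and does support complex dtypes, including a complex matrix exponential. A minimal sketch of the expectation step above, assuming jax and jaxlib are installed:

import jax
import jax.numpy as jnp
from jax.scipy.linalg import expm as jexpm

jax.config.update("jax_enable_x64", True)  # complex128 instead of complex64

N = 2
H = jnp.array([[1.0 + 1.0j] * N] * N)
phi = jnp.array([1.0 + 1.0j] * N)

@jax.jit
def value(phi, H):
    new_phi = jexpm(H) @ phi                      # complex matrix exponential
    return jnp.inner(jnp.conj(phi), H @ new_phi)  # expectation, as in the numpy code

print(value(phi, H))

PyTorch similarly supports complex tensors, so either could be a starting point for moving the heavy matrix work to a GPU.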

Scipy Minimize Not Working

I'm running the minimization below:
from scipy.optimize import minimize
import numpy as np
import math
import matplotlib.pyplot as plt
### objective function ###
def Rlzd_Vol1(w1, S):
    L = len(S) - 1
    m = len(S[0])
    # Compute log returns, size (L, m)
    LR = np.array([np.diff(np.log(S[:, j])) for j in range(m)]).T
    # Compute weighted returns
    w = np.array([w1, 1.0 - w1])
    R = np.array([np.sum(w*LR[i, :]) for i in range(L)])  # size L
    # Compute realized vol.
    vol = np.std(R) * math.sqrt(260)
    return vol
# stock prices
S = np.exp(np.random.normal(size=(50,2)))
### optimization ###
obj_fun = lambda w1: Rlzd_Vol1(w1, S)
w1_0 = 0.1
res = minimize(obj_fun, w1_0)
print(res)
### Plot objective function ###
fig_obj = plt.figure()
ax_obj = fig_obj.add_subplot(111)
n = 100
w1 = np.linspace(0.0, 1.0, n)
y_obj = np.zeros(n)
for i in range(n):
    y_obj[i] = obj_fun(w1[i])
ax_obj.plot(w1, y_obj)
plt.show()
The objective function shows an obvious minimum (it's quadratic):
But the minimization output tells me the minimum is at 0.1, the initial point:
I cannot figure out what's going wrong. Any thoughts?
w1 is passed in as a (single-entry) vector and not as a scalar from the minimize routine. Try what happens if you define w1 = np.array([0.2]) and then calculate w = np.array([w1, 1.0 - w1]). You'll see you get a 2x1 matrix instead of a 2-entry vector.
To make your objective function able to handle w1 being an array you can simply put in an explicit conversion to float w1 = float(w1) as the first line of Rlzd_Vol1. Doing so I obtain the correct minimum.
Note that you might want to use scipy.optimize.minimize_scalar instead, especially if you can bracket where your minimum will be.
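For reference, a minimal sketch of that minimize_scalar suggestion, assuming the weight can be bracketed in [0, 1] and obj_fun is defined as above:

from scipy.optimize import minimize_scalar

res = minimize_scalar(obj_fun, bounds=(0.0, 1.0), method='bounded')
print(res.x, res.fun)  # optimal weight w1 and the realized vol at that weight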

Solving ODE numerically with Python

I am solving an ODE for a harmonic oscillator numerically with Python. When I add a driving force it makes no difference, so I'm guessing something is wrong with the code. Can anyone see the problem? The (dT/m)*F0*np.cos(Wd*i) term is the driving force.
import numpy as np
import matplotlib.pyplot as plt
# This code solves the ODE mx'' + bx' + kx = F0*cos(Wd*t)
# m is the mass of the object in kg, b is the damping constant in Ns/m
# k is the spring constant in N/m, F0 is the driving force in N,
# Wd is the frequency of the driving force and x is the position
# Setting up
timeFinal= 16.0 # This is how far the graph will go in seconds
steps = 10000 # Number of steps
dT = timeFinal/steps # Step length
time = np.linspace(0, timeFinal, steps+1)
# Creates an array with steps+1 values from 0 to timeFinal
# Allocating arrays for velocity and position
vel = np.zeros(steps+1)
pos = np.zeros(steps+1)
# Setting constants and initial values for vel. and pos.
k = 0.1
m = 0.01
vel0 = 0.05
pos0 = 0.01
freqNatural = 10.0**0.5
b = 0.0
F0 = 0.01
Wd = 7.0
vel[0] = vel0 #Sets the initial velocity
pos[0] = pos0 #Sets the initial position
# Numerical solution using Euler's
# Splitting the ODE into two first order ones
# v'(t) = -(k/m)*x(t) - (b/m)*v(t) + (F0/m)*cos(Wd*t)
# x'(t) = v(t)
# Using the definition of the derivative we get
# (v(t+dT) - v(t))/dT on the left side of the first equation
# (x(t+dT) - x(t))/dT on the left side of the second
# In the for loop t and dT will be replaced by i and 1
for i in range(0, steps):
    vel[i+1] = (-k/m)*dT*pos[i] + vel[i]*(1 - dT*b/m) + (dT/m)*F0*np.cos(Wd*i)
    pos[i+1] = dT*vel[i] + pos[i]
# Plotting
#----------------
# With no damping
plt.plot(time, pos, 'g-', label='Undampened')
# Damping set to 10% of critical damping
b = (freqNatural/50)*0.1
# Using Euler's again to compute new values for new damping
for i in range(0, steps):
    vel[i+1] = (-k/m)*dT*pos[i] + vel[i]*(1 - (dT*(b/m))) + (F0*dT/m)*np.cos(Wd*i)
    pos[i+1] = dT*vel[i] + pos[i]
plt.plot(time, pos, 'b-', label = '10% of crit. damping')
plt.plot(time, 0*time, 'k-') # This plots the x-axis
plt.legend(loc = 'upper right')
#---------------
plt.show()
The problem here is with the term np.cos(Wd*i). It should be np.cos(Wd*i*dT); note that dT has been added so that the argument is the actual time, since t = i*dT.
If this correction is made, the simulation looks reasonable. Here's a version with F0=0.001. Note that the driving force is clear in the continued oscillations in the damped condition.
The problem with the original equation is that np.cos(Wd*i) just jumps around the circle rather than moving around it smoothly, so it contributes no net effect in the end. This is best seen by plotting it directly, but the easiest check is to run the original form with F0 very large. Below is F0 = 10 (i.e., 10000x the value used in the correct equation), still using the incorrect form of the equation, and it's clear that the driving force here just adds noise as it jumps randomly around the circle.
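A minimal sketch of the corrected update, i.e. the same Euler loop as in the question with the time argument fixed as described:

for i in range(0, steps):
    t_i = i * dT  # actual time at step i
    vel[i+1] = (-k/m)*dT*pos[i] + vel[i]*(1 - dT*b/m) + (dT/m)*F0*np.cos(Wd*t_i)
    pos[i+1] = dT*vel[i] + pos[i]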
Note that your ODE is well behaved and has an analytical solution. So you could utilize sympy for an alternate approach:
import sympy as sy
sy.init_printing() # Pretty printer for IPython
t,k,m,b,F0,Wd = sy.symbols('t,k,m,b,F0,Wd', real=True) # constants
consts = {k: 0.1,   # values
          m: 0.01,
          b: 0.0,
          F0: 0.01,
          Wd: 7.0}
x = sy.Function('x')(t) # declare variables
dx = sy.Derivative(x, t)
d2x = sy.Derivative(x, t, 2)
# the ODE:
ode1 = sy.Eq(m*d2x + b*dx + k*x, F0*sy.cos(Wd*t))
sl1 = sy.dsolve(ode1, x) # solve ODE
xs1 = sy.simplify(sl1.subs(consts)).rhs # substitute constants
# Examining the solution, we note C3 and C4 are superfluous
xs2 = xs1.subs({'C3':0, 'C4':0})
dxs2 = xs2.diff(t)
print("Solution x(t) = ")
print(xs2)
print("Solution x'(t) = ")
print(dxs2)
gives
Solution x(t) =
C1*sin(3.16227766016838*t) + C2*cos(3.16227766016838*t) - 0.0256410256410256*cos(7.0*t)
Solution x'(t) =
3.16227766016838*C1*cos(3.16227766016838*t) - 3.16227766016838*C2*sin(3.16227766016838*t) + 0.179487179487179*sin(7.0*t)
The constants C1, C2 can be determined by evaluating x(0), x'(0) for the initial conditions.
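For example, a short sketch of that last step, assuming the initial conditions pos0 = 0.01 and vel0 = 0.05 from the Euler code in the question:

C1, C2 = sy.symbols('C1 C2')
ics = sy.solve([sy.Eq(xs2.subs(t, 0), 0.01),
                sy.Eq(dxs2.subs(t, 0), 0.05)], [C1, C2])
print(xs2.subs(ics))  # particular solution satisfying x(0)=0.01, x'(0)=0.05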
