Solving set of Boundary Value Problems - python

I am trying to solve a set of boundary value problems given by 4 differential equations. I am using solve_bvp from SciPy, and I am getting errors which state 'invalid value encountered in division'. I am assuming this means I am dividing by NaN or 0 at some point, but I am unsure where.
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
%matplotlib inline

alpha = 1
zeta = 1
C_k = 1
sigma = 1
Q = 30
U_0 = 0.1
gamma = 5/3
theta = 3
m = 1.5

def fun(x, y):
    U, dU, B, dB, T, dT, M, dM = y;
    d2U = -2*U_0*Q**2*(1/np.cosh(Q*x))**2*np.tanh(Q*x)-((alpha)/(C_k*sigma))*dB;
    d2B = -(1/(C_k*zeta))*dU;
    d2T = (1/gamma - 1)*(sigma*dU**2 + zeta*alpha*dB**2);
    d2M = -(dM/T)*dT + (dM/T)*theta*(m+1) - (alpha/T)*B*dB
    return dU, d2U, dB, d2B, dT, d2T, dM, d2M

def bc(ya, yb):
    return ya[0]+U_0*np.tanh(Q*0.5), yb[0]-U_0*np.tanh(Q*0.5), ya[2]-0, yb[2]-0, ya[4] - 1, yb[4] - 4, ya[6], yb[6] - 1

x = np.linspace(-0.5, 0.5, 500)
y = np.zeros((8, x.size))
sol = solve_bvp(fun, bc, x, y)
If I remove the last two equations for M and dM, then the solution works fine. I have had trouble in the past understanding the return arrays of solve_bvp, but I am confident I understand that now. Still, I continue to get errors each time I add more equations. Any help is greatly appreciated.

Of course this will fail in the first step. You initialize everything to zero, and then in the derivatives function, you divide by T, which is zero from the initialization.
Find a more realistic initialization of T, for instance
x = np.linspace(-0.5, 0.5, 15)
y = np.zeros((8, x.size))
y[4] = 2.5+3*x
y[5] = 3+0*x
or
desingularize the division, which is usually done with something like
d2M = (-dM*dT + dM*theta*(m+1) - alpha*B*dB) * T/(1e-12+T*T)
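Putting both remedies together (keeping bc and the constants from the question unchanged), a minimal sketch of the repaired setup could look like this:

def fun(x, y):
    U, dU, B, dB, T, dT, M, dM = y
    d2U = -2*U_0*Q**2*(1/np.cosh(Q*x))**2*np.tanh(Q*x) - (alpha/(C_k*sigma))*dB
    d2B = -(1/(C_k*zeta))*dU
    d2T = (1/gamma - 1)*(sigma*dU**2 + zeta*alpha*dB**2)
    # regularized division: behaves like 1/T away from T = 0, but stays finite at T = 0
    d2M = (-dM*dT + dM*theta*(m+1) - alpha*B*dB) * T/(1e-12 + T*T)
    return dU, d2U, dB, d2B, dT, d2T, dM, d2M

x = np.linspace(-0.5, 0.5, 15)
y = np.zeros((8, x.size))
y[4] = 2.5 + 3*x   # linear profile matching T(-0.5) = 1, T(0.5) = 4
y[5] = 3 + 0*x     # its slope
sol = solve_bvp(fun, bc, x, y)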
It always makes sense to print the returned message with print(sol.message) after sol = solve_bvp(...). Now that there are more than a few components, I changed the construction of the output tableau to the more systematic
%matplotlib inline
plt.figure(figsize=(10, 2.5*4))
for k in range(4):
    v, c = ['U','B','T','M'][k], ['-+b','-*r','-xg','-om'][k];
    plt.subplot(4,2,2*k+1); plt.plot(sol.x, sol.y[2*k  ], c, ms=2); plt.grid(); plt.legend(["$%c$"%v]);
    plt.subplot(4,2,2*k+2); plt.plot(sol.x, sol.y[2*k+1], c, ms=2); plt.grid(); plt.legend(["$%c'$"%v]);
plt.tight_layout(); plt.savefig("/tmp/bvp3.png"); plt.show(); plt.close()

Related

Singular Jacobian in Boundary Value Problem

I am solving a set of coupled ODEs using solve_bvp in Python. I have solved the equations for the boundary conditions that U = 0 and B = 0 on the boundaries; however, I am now trying to solve them such that U' = 0 and B = 0 on the boundary. My problem is that I keep encountering a singular Jacobian in the return message - my guess is that the initial guess is diverging, however I have tried a range of initial guesses and still get no solution. Is there a more systematic way of figuring this out? My code is below:
import numpy as np
from scipy import integrate
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
%matplotlib inline

U_0 = 0.1
Q = 30

x = np.linspace(-0.5, 0.5, 1000)
y = np.ones((4, x.size))
y[0] = U_0*np.tanh(Q*x)

## Then we calculate the total energy
def FullSolver(x, y, Ha, Rm):
    def fun(x, y):
        U, dU, B, dB = y;
        d2U = -2*U_0*Q**2*(1/np.cosh(Q*x))**2*np.tanh(Q*x)-((Ha**2)/(Rm))*dB;
        d2B = -Rm*dU;
        return dU, d2U, dB, d2B
    def bc(ya, yb):
        return ya[1], yb[1], ya[2]-0, yb[2]+0
    # sol will give us the solutions which are accessible if we need them
    sol = solve_bvp(fun, bc, x, y, tol=1e-12)
    return(sol.x, sol.yp[0], sol.y[2], sol.message)

FullSolver(x, y, 0.000005, 0.00009)
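A systematic first check, echoing the advice given for the first question above, is to inspect the solver's diagnostics right after the solve_bvp call inside FullSolver, for example:

    sol = solve_bvp(fun, bc, x, y, tol=1e-12)
    # status 0 means converged, status 2 means a singular Jacobian was encountered
    print(sol.status, sol.message)
    print(sol.niter, sol.rms_residuals.max())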

Double Direct Integration

I am trying to solve the set of coupled boundary value problems such that:
U'' + a*B' + b*(cosh(lambda*z))^{-2}*tanh(lambda*z) = 0,
B'' + c*U' = 0,
T'' = (gamma^{-1} - 1)*(d*(U')^2 + e*(B')^2)
subject to the boundary conditions U(+/- 1/2) = +/-U_0*tanh(lambda/2), B(+/- 1/2) = 0 and T(-1/2) = 1, T(1/2) = 4. I have decomposed this set of equations into a set of first-order differential equations, using the state array [U, U', B, B', T, T']. But solve_bvp is returning the error that I have a singular Jacobian. When I remove the last two equations, I get a solution for U and B and that works fine. However, I am unsure why adding the other two equations results in this issue.
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
%matplotlib inline

alpha = 1E-7
zeta = 8E-3
C_k = 0.01
sigma = 0.005
Q = 30
U_0 = 0.1
gamma = 5/3
theta = 3

def fun(x, y):
    return y[1], -2*U_0*Q**2*(1/np.cosh(Q*x))**2*np.tanh(Q*x)-((alpha)/(C_k*sigma))*y[3], y[3],\
           -(1/(C_k*zeta))*y[1], y[4], (1/gamma - 1)*(sigma*(y[1])**2 + zeta*alpha*(y[3])**2)

def bc(ya, yb):
    return ya[0]+U_0*np.tanh(Q*0.5), yb[0]-U_0*np.tanh(Q*0.5), ya[2]-0, yb[2]-0, ya[4] - 1, yb[4] - 4

x = np.linspace(-0.5, 0.5, 500)
y = np.zeros((6, x.size))
sol = solve_bvp(fun, bc, x, y)
print(sol)
However, the error that I am getting is 'setting an array element with a sequence'. The first function and boundary conditions solve two coupled equations, and then I use these results to evaluate the equation I have given. I have tried writing all of my equations in one function; however, this seems to return trivial solutions, i.e. an array full of zeros.
Any help would be appreciated.
When the expressions become larger, it is often more helpful to keep the computations human-readable instead of compact.
def fun(x, y):
    U, dU, B, dB, T, dT = y;
    d2U = -2*U_0*Q**2*(1/np.cosh(Q*x))**2*np.tanh(Q*x)-((alpha)/(C_k*sigma))*dB;
    d2B = -(1/(C_k*zeta))*dU;
    d2T = (1/gamma - 1)*(sigma*dU**2 + zeta*alpha*dB**2);
    return dU, d2U, dB, d2B, dT, d2T
This avoids hard-to-spot index errors, since there are no indices in this computation; everything has names close to the original formulas. (In the compact version above, for instance, the fifth returned component is y[4] where the derivative of T, y[5], was presumably intended.)
Then the solution components (using an initialization with only 5 points, which the solver refines to 65 points) plot as shown in the figure.
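A minimal driver around this fun (boundary conditions copied from the question, constants and imports as above) could look like:

def bc(ya, yb):
    return (ya[0] + U_0*np.tanh(Q*0.5), yb[0] - U_0*np.tanh(Q*0.5),
            ya[2], yb[2], ya[4] - 1, yb[4] - 4)

x = np.linspace(-0.5, 0.5, 5)   # coarse initial mesh; solve_bvp refines it as needed
y = np.zeros((6, x.size))
sol = solve_bvp(fun, bc, x, y)
print(sol.message, sol.x.size)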

Solve equation in python over a given time interval with initial condition given

I want to solve the equation in Python over the time interval I = [0, 10] with initial condition (x_0, y_0) = (1, 0) and the parameter values μ ∈ {-2, -1, 0, 1, 2}, using the function
scipy.integrate.odeint
Then I want to plot the solutions (x(t;x_0,y_0), y(t;x_0,y_0)) in the xy-plane.
The originally given linear system is
dx/dt = y, x(0) = x_0
dy/dt = - x - μy, y(0) = y_0
Please see my code below:
import numpy as np
from scipy.integrate import odeint
sol = odeint(myode, y0, t , args=(mu,1)) #mu and 1 are the coefficients when set equation to 0
y0 = 0
myode(y, t, mu) = -x-mu*y
def t = np.linspace(0,10, 101) #time interval
dydt = [y[1], -y[0] - mu*y[1]]
return dydt
Could anyone check if I defined the callable function myode correctly? This function evaluates the right hand side of the ODE.
Also, a syntax error message showed up for this line of code
def t = np.linspace(0,10, 101) #time interval
saying there is invalid syntax. Should I somehow use
for * in **
to get rid of the error message? If yes, how exactly?
I am very new to Python and ODE. Could anyone help me with this question? Thank you very much!
myode should be a function definition, thus
def myode(u, t, mu): x,y = u; return [ y, -x-mu*y]
The time array is a simple variable assignment; there should be no def there. As the system is two-dimensional, the initial value also needs to have dimension two:
sol = odeint(myode, [x0,y0], t, args=(mu,) )
Thus a minimal modification of your script is
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def myode(u, t, mu): x,y = u; return [ y, -x-mu*y]

t = np.linspace(0,10, 101)   # time interval
x0,y0 = 1,0                  # initial conditions
for mu in [-2,-1,0,1,2]:
    sol = odeint(myode, [x0,y0], t, args=(mu,) )
    x,y = sol.T
    plt.plot(x,y)

a=5; plt.xlim(-a,a); plt.ylim(-a,a)
plt.grid(); plt.show()
giving the plot
Try using the solve_ivp method.
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
import numpy as np

for mu in [-2, -1, 0, 1, 2]:
    # right-hand side using the current damping parameter mu
    def rhs2(t, y):
        return [y[1], -1*y[0] - mu*y[1]]
    res2 = solve_ivp(rhs2, [0, 10], [1, 0], t_eval=[0,1,2,3,4,5,6,7,8,9,10], method='RK45')
    t = np.array(res2.t[1:-1])
    x = np.array(res2.y[0][1:-1])
    y = np.array(res2.y[1][1:-1])
    fig = plt.figure()
    plt.plot(t, x, 'b-', label='X(t)')
    plt.plot(t, y, 'g-', label='Y(t)')
    plt.title("mu = {}".format(mu))
    plt.legend(loc='lower right')
    plt.show()
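Note that solve_ivp only reports the returned solution at the points listed in t_eval, so a denser grid gives smoother curves, for example
res2 = solve_ivp(rhs2, [0, 10], [1, 0], t_eval=np.linspace(0, 10, 101), method='RK45')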
Here is the solve_ivp method Documentation
Here is a very similar problem with a better explanation.

Error in solving a nonlinear optimization system with dynamic constraint using openopt and scipy

I'm trying to solve a nonlinear optimal control problem subject to a dynamic ( h(x, x', u) = 0 ) constraint.
given:
f(x) = (u(t) - u(0)(t))^2 # u0(t) is the initial input provided to the system
h(x) = y'(t) - integral(sqrt(u(t))*y(t) + y(t)) = 0 # a nonlinear differential equation
-2 < y(t) < 10 # system state is bounded to this range
-2 < u(t) < 10 # system state is bounded to this range
u0(t) # will be defined as an arbitrary piecewise-linear function
I've tried to translate the problem into python code using openopt and scipy:
import numpy as np
from scipy.integrate import *
from openopt import NLP
import matplotlib.pyplot as plt
from operator import and_

N = 15*4
y0 = 10
t0 = 0
tf = 10
lb, ub = np.ones(2)*-2, np.ones(2)*10
t = np.linspace(t0, tf, N)
u0 = np.piecewise(t, [t < 3, and_(3 <= t, t < 6), 6 <= t], [2, lambda t: t - 3, lambda t: -t + 9])
p = np.empty(N, dtype=np.object)
r = np.empty(N, dtype=np.object)
y = np.empty(N, dtype=np.object)
u = np.empty(N, dtype=np.object)
ff = np.empty(N, dtype=np.object)

for i in range(N):
    t = np.linspace(t0, tf, N)
    b, a = t[i], t[i - 1]
    integrand = lambda t, u1, y1 : np.sqrt(u1)*y1 + y1
    integral = lambda u1, y1 : fixed_quad(integrand, a, b, args=(u1, y1))[0]
    f = lambda x1: ((x1[1] - u0[i])**2).sum()
    h = lambda x1: x1[0] - y0 - integral(x1[0], x1[1])
    p[i] = NLP(f, (y0, u0[i]), h=h, lb=lb, ub=ub)
    r[i] = p[i].solve('scipy_slsqp')
    y0 = r[i].xf[0]
    y[i] = r[i].xf[0]
    u[i] = r[i].xf[1]
    ff[i] = r[i].ff

figure1 = plt.figure()
axis1 = figure1.add_subplot(311)
plt.plot(u0)
axis2 = figure1.add_subplot(312)
plt.plot(u)
axis2 = figure1.add_subplot(313)
plt.plot(y)
plt.show()
Now the problem is: running the code with a positive initial y0 like y0 = 10 produces satisfying results.
But giving y0 = 0 or a negative one like y0 = -1, the NLP problem becomes infeasible, saying:
"NO FEASIBLE SOLUTION has been obtained (1 constraint is equal to NaN, MaxResidual = 0, objFunc = nan)"
Also, considering the piecewise-linear initial u0, if you put any number other than 0 in the first range of the function (t < 3), meaning:
u0 = np.piecewise(t, [t < 3, and_(3 <= t, t < 6), 6 <= t], [2, lambda t: t - 3, lambda t: -t + 9])
instead of:
u0 = np.piecewise(t, [t < 3, and_(3 <= t, t < 6), 6 <= t], [0, lambda t: t - 3, lambda t: -t + 9])
This will result in the same error again.
Any ideas ?
Thanks in advance.
My first thought is that you seem to be solving a 2-dimensional optimal control problem as if it were a 1-dimensional problem.
The constraint dynamics $h(x, x', t)$ are really a second-order ODE,
y''(t) - (sqrt(u(t))*y(t) + y(t)) = 0
Starting from this I would reword my system as a 2-dimensional, 1st order system in the standard way.
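Spelled out in the standard way, with z1 = y and z2 = y', this becomes the first-order system
z1'(t) = z2(t)
z2'(t) = sqrt(u(t))*z1(t) + z1(t)
so the state handed to the optimizer (and to the dynamic constraint) is the pair (z1, z2), not y alone.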
My second thought is that you seem to be optimizing independently, for $u(t)$, at each time step, whereas the problem is to optimize globally for $u(.)$, the entire function. So if anything, the call to NLP should be outside the for loop...
There are dedicated Optimal Control Open Source toolboxes:
Pythonically, there is JModelica: http://www.jmodelica.org/.
Alternatively, I have also successfully used: ACADO, http://sourceforge.net/p/acado/wiki/Home/ (in C++)
There are some very capable modeling tools now such as CasADi, Pyomo, and Gekko that didn't exist when the question was asked. Here is a solution with Gekko. One issue is that sqrt(u) needs to have a positive u value to avoid imaginary numbers.
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
u0=1; N=61
m.time = np.linspace(0,10,N)
# lb of -2 leads to imaginary number with sqrt(-)
u = m.MV(u0,lb=1e-2,ub=10); u.STATUS=1
m.options.MV_STEP_HOR = 18 # allow move at 3, 6 time units
m.options.MV_TYPE = 1 # piecewise linear
y = m.Var(10,lb=-2,ub=10)
m.Minimize((u-u0)**2)
m.Minimize(y**2) # otherwise solution is u=u0
m.Equation(y.dt()==m.integral(m.sqrt(u)*y-y))
m.options.IMODE=6
m.options.SOLVER=1
m.solve()
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(m.time,u,label='u')
plt.plot(m.time,y,label='y')
plt.legend(); plt.grid()
plt.savefig('solution.png',dpi=300)
plt.show()

Numerical ODE solving in Python

How do I numerically solve an ODE in Python?
Consider
\ddot{u}(\phi) = -u + \sqrt{u}
with the following conditions
u(0) = 1.49907
and
\dot{u}(0) = 0
with the constraint
0 <= \phi <= 7\pi.
Then finally, I want to produce a parametric plot where the x and y coordinates are generated as a function of u.
The problem is, I need to run odeint twice since this is a second order differential equation.
I tried having it run again after the first time but it comes back with a Jacobian error. There must be a way to run it twice all at once.
Here is the error:
odepack.error: The function and its Jacobian must be callable functions
which the code below generates. The line in question is the sol = odeint.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from numpy import linspace

def f(u, t):
    return -u + np.sqrt(u)

times = linspace(0.0001, 7 * np.pi, 1000)
y0 = 1.49907
yprime0 = 0
yvals = odeint(f, yprime0, times)
sol = odeint(yvals, y0, times)
x = 1 / sol * np.cos(times)
y = 1 / sol * np.sin(times)
plot(x,y)
plt.show()
Edit
I am trying to construct the plot on page 9 of Taylor's Classical Mechanics.
Here is how the plot is generated with Mathematica:
In[27]:= sol = NDSolve[{y''[t] == -y[t] + Sqrt[y[t]], y[0] == 1/.66707928,
             y'[0] == 0}, y, {t, 0, 10*\[Pi]}];
In[28]:= ysol = y[t] /. sol[[1]];
In[30]:= ParametricPlot[{1/ysol*Cos[t], 1/ysol*Sin[t]}, {t, 0, 7 \[Pi]},
             PlotRange -> {{-2, 2}, {-2.5, 2.5}}]
import scipy.integrate as integrate
import matplotlib.pyplot as plt
import numpy as np

pi = np.pi
sqrt = np.sqrt
cos = np.cos
sin = np.sin

def deriv_z(z, phi):
    u, udot = z
    return [udot, -u + sqrt(u)]

phi = np.linspace(0, 7.0*pi, 2000)
zinit = [1.49907, 0]
z = integrate.odeint(deriv_z, zinit, phi)
u, udot = z.T

# plt.plot(phi, u)
fig, ax = plt.subplots()
ax.plot(1/u*cos(phi), 1/u*sin(phi))
ax.set_aspect('equal')
plt.grid(True)
plt.show()
The code from your other question is really close to what you want. Two changes are needed:
You were solving a different ODE (because you changed two signs inside function deriv)
The y component of your desired plot comes from the solution values, not from the values of the first derivative of the solution, so you need to use u[:, 0] (function values) instead of u[:, 1] (derivatives).
This is the end result:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def deriv(u, t):
    return np.array([u[1], -u[0] + np.sqrt(u[0])])

time = np.arange(0.01, 7 * np.pi, 0.0001)
uinit = np.array([1.49907, 0])
u = odeint(deriv, uinit, time)

x = 1 / u[:, 0] * np.cos(time)
y = 1 / u[:, 0] * np.sin(time)
plt.plot(x, y)
plt.show()
However, I suggest that you use the code from unutbu's answer because it's self-documenting (u, udot = z) and uses np.linspace instead of np.arange. Then, run this to get your desired figure:
x = 1 / u * np.cos(phi)
y = 1 / u * np.sin(phi)
plt.plot(x, y)
plt.show()
You can use scipy.integrate.ode. To solve dy/dt = f(t,y) with initial condition y(t0) = y0 up to time t1 with the Dormand-Prince Runge-Kutta method (dopri5), you could do something like this:
from scipy.integrate import ode

solver = ode(f).set_integrator('dopri5')
solver.set_initial_value(y0, t0)
dt = 0.1
t = t0
while t < t1:
    y = solver.integrate(t+dt)
    t += dt
Edit: You have to reduce your equation to first order to use numerical integration. You can achieve this by setting e.g. z1 = u and z2 = du/dt, after which you have dz1/dt = z2 and dz2/dt = d^2u/dt^2. Substitute these into your original equation, and then simply integrate the first-order system for the vector Z = [z1, z2].
Edit 2: Here's an example code for the whole thing:
import numpy as np
import matplotlib.pyplot as plt
from numpy import sqrt, pi, sin, cos
from scipy.integrate import ode

# use z = [z1, z2] = [u, u']
# and then f = z' = [u', u''] = [z2, -z1+sqrt(z1)]
def f(phi, z):
    return [z[1], -z[0]+sqrt(z[0])]

# initialize the 4th order Runge-Kutta solver
solver = ode(f).set_integrator('dopri5')

# initial value
z0 = [1.49907, 0.]
solver.set_initial_value(z0)

values = 1000
phi = np.linspace(0.0001, 7.*pi, values)
u = np.zeros(values)
for ii in range(values):
    u[ii] = solver.integrate(phi[ii])[0]   # z[0] = u

x = 1. / u * cos(phi)
y = 1. / u * sin(phi)

plt.figure()
plt.plot(x,y)
plt.grid()
plt.show()
The scipy.integrate module does ODE integration. Is that what you are looking for?
