Convert ode45 code from MATLAB to Python

How can I solve this MATLAB ODE problem using Python?
This is the ODE with its boundary conditions:
F''' + F*F'' - (F')^2 + 1 = 0
F(0) = F'(0) = 0, F'(∞) = 1
The current MATLAB code generates the following plot, which matches the textbook solution.
The problem is that I need to recode the same problem in Python.
% stagnation flow
clc; close all; clear all; clf;
tol = 1e-3;
x = 1;              % initial guess for f''(0)
dx = 0.1;
Xf = 3;
tspan = (0:dx:Xf);
Nt = Xf/dx + 1;
for i = 1:10000
    iter = i;
    x = x + 0.0001;
    F = @(t,y) [-y(1)*y(3)-1+y(2)^2; y(1); y(2)];
    yo = [x; 0; 0];
    [t,y] = ode45(F, tspan, yo);
    y2 = y(Nt,2);
    % x = x/(y2^(3/2))   % f''(0) = f''(0)/[f'(inf)^(3/2)]
    if abs(y(Nt,2)-1.0) < tol, break, end
end
y2 = y(Nt,2);
% x = x/(y2^(3/2))   % f''(0) = f''(0)/[f'(inf)^(3/2)]
figure(1)
plot(t, y(:,1), t, y(:,2), t, y(:,3))
legend('fpp','fp','f');
xlabel('η = y(B/v)^{1/2}');
ylabel('f, fp, fpp');
title('Numerical solutions for stagnation flow')
fprintf('η \t\t fp \n')
fprintf('%.2f \t\t%.6f \n', [t, y(:,2)]')
I have tried to code the same problem in Python but couldn't find any tutorial on this.

If the task were just to solve the mathematical problem, one could say you already "did it wrong" in MATLAB (in the sense of using too expensive a method). Since you want to solve a boundary value problem, you should use the dedicated BVP solver bvp4c; see the MATLAB documentation on how to use it.
Even if the task is to implement a single-shooting approach, the root-finding procedure should be upgraded to a method with an actual convergence order; even bisection would be faster than the linear search. The secant method, with the unknown second derivative started at 1 and 1.1, also seems to work well, as sketched below.
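For illustration, a minimal sketch of that secant-based shooting iteration in Python (solve_ivp plus a hand-rolled secant loop; the ODE and the target F'(∞) = 1 are from the question, while the helper name shoot, the tolerances, and the iteration limit are my own choices):
import numpy as np
from scipy.integrate import solve_ivp

# ODE in first-order form, state y = [f'', f', f]
F = lambda t, y: [-y[0]*y[2] - 1 + y[1]**2, y[0], y[1]]

def shoot(a, xf=3.0):
    """Return f'(xf) - 1 for the initial-slope guess a = f''(0)."""
    sol = solve_ivp(F, [0, xf], [a, 0, 0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# secant iteration on the shooting residual, starting from 1 and 1.1
a0, a1 = 1.0, 1.1
r0, r1 = shoot(a0), shoot(a1)
for _ in range(20):
    a0, a1 = a1, a1 - r1*(a1 - a0)/(r1 - r0)
    r0, r1 = r1, shoot(a1)
    if abs(r1) < 1e-8:
        break
print(f"f''(0) ≈ {a1}")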
In Python there is also a BVP solver, scipy.integrate.solve_bvp, which works well when it converges.
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
x0,xf = 0,3
F = lambda t,y: [-y[0]*y[2]-1+y[1]**2,y[0],y[1]]
bc = lambda y0,yf: [ y0[1], y0[2], yf[1]-1]
res = solve_bvp(F,bc,[x0,xf], [[0,0],[1,1],[0,xf-1]])
print(f"y''(0)={res.y[0,0]}")
x = np.linspace(x0,xf,150)
plt.plot(x,res.sol(x).T)
plt.plot(res.x,res.y.T,'x', ms=2)
plt.legend(["y''", "y'", "y"])
plt.grid(); plt.show()
resulting in the plot
This initial guess also still works with xf=20, but fails for xf=30; that case probably needs a more detailed initial guess with a short initial curve and a long linear part.
For larger xf the following initialization seems to work well:
# geometrically graded mesh from 0 out to (at least) xf
x = [0., 1.]
while x[-1] < xf: x.append(x[-1]*1.5)
xf = x[-1]
x = np.asarray(x); x[0] = 0
y0 = x - x0 - 1; y0[0] = 0   # guess for f: linear far field f ~ x - 1
y1 = 0*x + 1;    y1[0] = 0   # guess for f'
y2 = 0*x;        y2[0] = 1   # guess for f''
res = solve_bvp(F, bc, x, [y2, y1, y0], tol=1e-8)

Try this to make your MATLAB code run faster while keeping the same linear-search logic with ode45. I agree it's too expensive.
clc; close all; clear all; clf;
flow = @(t, F) [-F(1)*F(3)-1+F(2)^2; F(1); F(2)];
tol = 1e-3;
x = 1;
y2 = 0;
while abs(y2-1) >= tol
    [t, y] = ode45(flow, [0,3], [x;0;0]);
    x = x + 0.0001;
    y2 = y(end, 2);
end
plot(t, y)
Here is an implementation in Python:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def flow(t, F):
    return [-F[0]*F[2]-1+F[1]**2, F[0], F[1]]

tol = 1e-3
x = 1
y2 = 0
while np.abs(y2-1) >= tol:
    sol = solve_ivp(flow, [0,3], [x,0,0])
    x += 0.0001
    y2 = sol.y[1, -1]

plt.plot(sol.t, sol.y.T)
plt.legend([r"$F^{\prime \prime} \propto \tau $", r"$F^\prime \propto u$", r"$F \propto \Psi$"])
plt.axis([0, 3, 0, 2])
plt.xlabel(r'$\eta = y \sqrt{\frac{B}{v}}$')
plt.show()
And here is an implementation where the response is proportional to the error, which runs faster.
clc; close all; clear all; clf;
flow = @(t, F) [-F(1)*F(3)-1+F(2)^2; F(1); F(2)];
tol = 1e-3;
x = 1;
error = 1;
while abs(error) >= tol
    [t, y] = ode45(flow, [0,3], [x;0;0]);
    y2 = y(end, 2);
    error = y2 - 1;
    x = x - 0.1*error;
end
plot(t, y)
And the same in Python:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def flow(t, F):
    return [-F[0]*F[2]-1+F[1]**2, F[0], F[1]]

tol = 1e-3
x = 1
error = 1
while np.abs(error) >= tol:
    sol = solve_ivp(flow, [0,3], [x,0,0])
    y2 = sol.y[1, -1]
    error = y2 - 1
    x -= 0.1 * error

plt.plot(sol.t, sol.y.T)
plt.legend([r"$F^{\prime \prime} \propto \tau $", r"$F^\prime \propto u$", r"$F \propto \Psi$"])
plt.axis([0, 3, 0, 2])
plt.xlabel(r'$\eta = y \sqrt{\frac{B}{v}}$')
plt.show()

Related

Why does Python RK23 Solver blow up and give unrealistic results?

I am experimenting with the RK45/RK23 solvers from Python's scipy module, using them to solve simple ODEs, and they are not giving me the correct results. When I manually code a 4th-order Runge-Kutta scheme it works perfectly, as does the odeint solver in the module, but RK23/RK45 does not. If someone could help me figure out the issue it would be helpful. I have so far implemented only a simple ODE:
dydt = -k*(y^2)
Code:
import numpy as np
from scipy.integrate import solve_ivp,RK45,odeint
import matplotlib.pyplot as plt
# function model fun(y,t)
def model(y,t):
    k = 0.3
    dydt = -k*(y**2)
    return dydt
# intial condition
y0 = np.array([0.5])
y = np.zeros(100)
t = np.linspace(0,20)
t_span = [0,20]
#RK45 implementation
yr = solve_ivp(fun=model,t_span=t_span,y0=y,t_eval=t,method=RK45)
##odeint solver
yy = odeint(func=model,y0=y0,t=t)
##manual implementation
t1 = 0
h = 0.05
y = np.zeros(21)
y[0]=y0;
i=0
k=0.3
##Runge Kutta 4th order implementation
while (t1 < 1.):
    m1 = -k*(y[i]**2)
    y1 = y[i] + m1*h/2
    m2 = -k*(y1**2)
    y2 = y1 + m2*h/2
    m3 = -k*(y2**2)
    y3 = y2 + m3*h/2
    m4 = -k*(y3**2)
    i = i+1
    y[i] = y[i-1] + (m1 + 2*m2 + 2*m3 + m4)/6
    t1 = t1 + h
#plotting
t2 = np.linspace(0,20,num=21)
plt.plot(t2,y,'r-',label='RK4')
plt.plot(t,yy,'b--',label='odeint')
#plt.plot(t2,yr.y[0],'g:',label='RK45')
plt.xlabel('time')
plt.ylabel('y(t)')
plt.legend()
plt.show()
Output (without showing the RK45 result): [plot image]
Output (with just the RK45 plot displayed): [plot image]
I can't find where I am making a mistake.
Okay, I have figured out the solution. RK45 requires the function signature to be fun(t, y), while odeint requires func(y, t), so passing the same function to both gives different results.
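For what it's worth, a minimal sketch of that fix: only the signature adaptation is from the answer; the helper name rhs and the analytic comparison curve y = y0/(1 + k*y0*t) (the exact solution of dy/dt = -k*y^2) are my additions for checking.
import numpy as np
from scipy.integrate import solve_ivp, odeint
import matplotlib.pyplot as plt

k, y0 = 0.3, 0.5
def rhs(y):                        # the model itself, independent of argument order
    return -k * y**2

t = np.linspace(0, 20, 50)
sol_ivp = solve_ivp(lambda t, y: rhs(y), [0, 20], [y0], t_eval=t)   # fun(t, y)
sol_ode = odeint(lambda y, t: rhs(y), y0, t)                        # func(y, t)
exact = y0 / (1 + k*y0*t)          # analytic solution of dy/dt = -k*y^2

plt.plot(t, sol_ivp.y[0], 'g:', label='solve_ivp (RK45)')
plt.plot(t, sol_ode[:, 0], 'b--', label='odeint')
plt.plot(t, exact, 'r-', label='exact')
plt.legend(); plt.show()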

How to simulate bouncing ball? odeint not applicable?

Here is a code simulating the simplest falling ball:
%pylab
from scipy.integrate import odeint
ts = linspace(0, 1)
def f(X, t):
    dx0 = X[1]
    dx1 = -9.8
    return [dx0, dx1]
X = odeint(f, [2, 0], ts)
plot(ts, X[:, 0])
But how about a ball bouncing at y=0?
I know that, in general, collision is a difficult part of physics simulations. However, I wonder if it's really not possible to simulate this simple system, hopefully with odeint.
The simplest way is just to add a big force that kicks the particle out from the forbidden region. This requires an ODE solver that is able to handle "stiff" problems, but luckily odeint uses lsoda which automatically switches to a mode suitable for such problems.
For example,
F(x) = { 0            if x > 0
       { big_number   if x < 0
The numerical solver will have a slightly easier time if the step function is smoothed out a bit, for example F(x) = big_number / (1 + exp(x/width)):
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
big_number = 10000.0
width = 0.0001
ts = np.linspace(0, 10, 2000)
def f(X, t):
    dx0 = X[1]
    dx1 = -9.8
    dx1 += big_number / (1 + np.exp(X[0]/width))
    return [dx0, dx1]

with np.errstate(over='ignore'):
    # don't print overflow warning messages from exp(),
    # and limit the step size so that the solver doesn't step too deep
    # into the forbidden region
    X = odeint(f, [2, 0], ts, hmax=0.01)
plt.plot(ts, X[:, 0])
plt.show()
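As an aside (a different technique from the penalty force above, not part of the original answer): scipy's solve_ivp can detect the impact with an event function, and the integration can be restarted with the velocity reversed. A rough sketch, assuming a perfectly elastic bounce:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def f(t, X):                 # note the (t, X) argument order for solve_ivp
    return [X[1], -9.8]

def hit_ground(t, X):        # event: height crosses zero while falling
    return X[0]
hit_ground.terminal = True
hit_ground.direction = -1

t0, tf, X0 = 0.0, 10.0, [2.0, 0.0]
ts, ys = [], []
while t0 < tf:
    sol = solve_ivp(f, [t0, tf], X0, events=hit_ground, max_step=0.01)
    ts.append(sol.t); ys.append(sol.y[0])
    if sol.status != 1:      # no bounce event before tf
        break
    t0 = sol.t[-1]
    X0 = [0.0, -sol.y[1, -1]]  # restart at the ground with reversed velocity

plt.plot(np.concatenate(ts), np.concatenate(ys))
plt.show()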

Python solving 2nd order ODE with quad function

I am studying the dynamics of a damped, driven pendulum with a second-order ODE defined like so, and specifically I am programming:
d^2y/dt^2 + c * dy/dt + sin(y) = a * cos(wt)
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
def pendeq(t,y,a,c,w):
    y0 = -c*y[1] - np.sin(y[0]) + a*np.cos(w*t)
    y1 = y[1]
    z = np.array([y0, y1])
    return z
a = 2.1
c = 1e-4
w = 0.666667 # driving angular frequency
t = np.array([0, 5000]) # interval of integration
y = np.array([0, 0]) # initial conditions
yp = integrate.quad(pendeq, t[0], t[1], args=(y,a,c,w))
This problem does look quite similar to Need help solving a second order non-linear ODE in python, but I am getting the error
Supplied function does not return a valid float.
What am I doing wrong??
integrate.quad requires that the function supplied (pendeq, in your case) returns only a float. Your function is returning an array.
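For reference, a minimal sketch (my own state ordering, not the question's code) of integrating the same pendulum with solve_ivp, which is an ODE solver rather than a quadrature routine; the parameter values are taken from the question, and the time span is shortened from 5000 to 200 just for a quick check:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def pendeq(t, z, a, c, w):
    theta, omega = z             # angle and angular velocity
    return [omega, -c*omega - np.sin(theta) + a*np.cos(w*t)]

a, c, w = 2.1, 1e-4, 0.666667    # values from the question
sol = solve_ivp(pendeq, [0, 200], [0, 0], args=(a, c, w), max_step=0.05)

plt.plot(sol.t, sol.y[0])
plt.xlabel('t'); plt.ylabel('y(t)')
plt.show()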

Speeding up the code using numpy

I'm new to Python, and I have this code for calculating the potential inside a 1x1 box using a Fourier series, but part of it is way too slow (marked in the code below).
Could someone help me with this? I suspect it could be done with the numpy library, but I'm not that familiar with it.
import numpy as np                   # (missing in the original post, but needed below)
from scipy.integrate import quad     # (missing in the original post, but needed below)
import matplotlib.pyplot as plt
import pylab
import sys
from matplotlib import rc
rc('text', usetex=False)
rc('font', family='serif')

# One of the boundary conditions for the potential.
def func1(x, n):
    V_c = 1
    V_0 = V_c * np.sin(n*np.pi*x)
    return V_0*np.sin(n*np.pi*x)

# To calculate the potential inside a box:
def v(x, y):
    n = 1
    sum = 0
    nmax = 20
    while n < nmax:
        [C_n, err] = quad(func1, 0, 1, args=(n,))
        sum = sum + 2*(C_n/np.sinh(np.pi*n)*np.sin(n*np.pi*x)*np.sinh(n*np.pi*y))
        n = n + 1
    return sum

def main(argv):
    x_axis = np.linspace(0,1,100)
    y_axis = np.linspace(0,1,100)
    V_0 = np.zeros(100)
    V_1 = np.zeros(100)
    n = 4
    # Plotter for V_0 = V_c * sin(n*pi*x)
    # (V_0_1 and V_0_2 are presumably defined elsewhere in the full script)
    for i in range(100):
        V_0[i] = V_0_1(i/100, n)
    plt.plot(x_axis, V_0)
    plt.xlabel('x/L')
    plt.ylabel('V_0')
    plt.title('V_0(x) = sin(m*pi*x/L), n = 4')
    plt.show()
    # Plot for V_0 = V_c(1-(x-1/2)^4)
    for i in range(100):
        V_1[i] = V_0_2(i/100)
    plt.figure()
    plt.plot(x_axis, V_1)
    plt.xlabel('x/L')
    plt.ylabel('V_0')
    plt.title('V_0(x) = 1- (x/L - 1/2)^4)')
    #plt.legend()
    plt.show()
    # Plot V(x/L,y/L) on the boundary:
    V_0_Y = np.zeros(100)
    V_1_Y = np.zeros(100)
    V_X_0 = np.zeros(100)
    V_X_1 = np.zeros(100)
    for i in range(100):
        V_0_Y[i] = v(0, i/100)
        V_1_Y[i] = v(1, i/100)
        V_X_0[i] = v(i/100, 0)
        V_X_1[i] = v(i/100, 1)
    # V(x/L = 0, y/L):
    plt.figure()
    plt.plot(x_axis, V_0_Y)
    plt.title('V(x/L = 0, y/L)')
    plt.show()
    # V(x/L = 1, y/L):
    plt.figure()
    plt.plot(x_axis, V_1_Y)
    plt.title('V(x/L = 1, y/L)')
    plt.show()
    # V(x/L, y/L = 0):
    plt.figure()
    plt.plot(x_axis, V_X_0)
    plt.title('V(x/L, y/L = 0)')
    plt.show()
    # V(x/L, y/L = 1):
    plt.figure()
    plt.plot(x_axis, V_X_1)
    plt.title('V(x/L, y/L = 1)')
    plt.show()
    # Plot V(x,y)
    #######
    # This is where the code is way too slow: it takes about 10 minutes when nmax in v(x,y) is 20.
    #######
    V = np.zeros(10000).reshape((100,100))
    for i in range(100):
        for j in range(100):
            V[i,j] = v(j/100, i/100)
    plt.figure()
    plt.contour(x_axis, y_axis, V, 50)
    plt.savefig('V_1')
    plt.show()

if __name__ == "__main__":
    main(sys.argv[1:])
You can find how to use FFT/DFT in this document:
Discretized continuous Fourier transform with numpy
Also, regarding your V matrix, there are many ways to improve the execution speed. One is to make sure you use Python 3, or xrange() instead of range() if you are still on Python 2. I usually put these lines in my Python code so that it runs the same whether I use Python 3.x or 2.x:
# Don't want to generate huge lists in memory... use standard range for Python 3.x
range = xrange if isinstance(range(2), list) else range
Then, instead of re-computing j/100 and i/100, you can precompute these values and put them in an array, knowing that a division is much more costly than a multiplication. Something like:
ratios = np.arange(100) / 100
V = np.zeros(10000).reshape((100,100))
j = 0
while j < 100:
    i = 0
    while i < 100:
        V[i,j] = v(ratios[j], ratios[i])
        i += 1
    j += 1
Well, anyway, this is rather cosmetic and will not save your life; and you still need to call the function v()...
Then, you can use weave:
http://docs.scipy.org/doc/scipy-0.14.0/reference/tutorial/weave.html
Or write all your pure computation/loop code in C, compile it and generate a module which you can call from Python.
You should look into numpy's broadcasting tricks and vectorization (several references; one of the first good links that pops up is from MATLAB, but it is just as applicable to numpy. Can anyone recommend a good numpy link in the comments that I might point other users to in the future?).
What I saw in your code (once you remove all the unnecessary bits like plots and unused functions) is that you are essentially doing this:
from __future__ import division
from scipy.integrate import quad
import numpy as np
import matplotlib.pyplot as plt

def func1(x, n):
    return 1*np.sin(n*np.pi*x)**2

def v(x, y):
    n = 1
    sum = 0
    nmax = 20
    while n < nmax:
        [C_n, err] = quad(func1, 0, 1, args=(n,))
        sum = sum + 2*(C_n/np.sinh(np.pi*n)*np.sin(n*np.pi*x)*np.sinh(n*np.pi*y))
        n = n + 1
    return sum

def main():
    x_axis = np.linspace(0,1,100)
    y_axis = np.linspace(0,1,100)
    #######
    # This is where the code is way too slow: it takes about 10 minutes when nmax in v(x,y) is 20.
    #######
    V = np.zeros(10000).reshape((100,100))
    for i in range(100):
        for j in range(100):
            V[i,j] = v(j/100, i/100)
    plt.figure()
    plt.contour(x_axis, y_axis, V, 50)
    plt.show()

if __name__ == "__main__":
    main()
If you look carefully (you could use a profiler too), you'll see that you're integrating your function func1 (which I'll rename to integrand) about 20 times for each element in the 100x100 array V. However, the integrand doesn't change! So you can already bring it out of your loop. If you do that and use broadcasting tricks, you could end up with something like this:
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

def integrand(x, n):
    return 1*np.sin(n*np.pi*x)**2

sine_order = np.arange(1,20).reshape(-1,1,1)  # make an array along the third dimension
integration_results = np.empty_like(sine_order, dtype=float)
for enu, order in enumerate(sine_order):
    integration_results[enu] = quad(integrand, 0, 1, args=(order,))[0]

y, x = np.ogrid[0:1:.01, 0:1:.01]
term = integration_results / np.sinh(np.pi * sine_order) * np.sin(sine_order * np.pi * x) * np.sinh(sine_order * np.pi * y)
# This is the key: you have a 3D array here and with this summation,
# you're basically squashing the entire 3D structure into a flat, 2D
# representation. This 'squashing' is done by means of a sum.
V = 2*np.sum(term, axis=0)

x_axis = np.linspace(0,1,100)
y_axis = np.linspace(0,1,100)
plt.figure()
plt.contour(x_axis, y_axis, V, 50)
plt.show()
which runs in less than a second on my system.
Broadcasting becomes much more understandable if you take pen and paper and draw out the arrays that you are "broadcasting", as if you were constructing a building from basic Tetris blocks.
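As a tiny illustration of the same pattern used above (this snippet is mine, not part of the original answer): a column of orders is broadcast against a row of grid points and then summed over the order axis:
import numpy as np

n = np.arange(1, 4).reshape(-1, 1)   # shape (3, 1): one "order" per row
x = np.linspace(0, 1, 5)             # shape (5,):  grid points

terms = np.sin(n * np.pi * x)        # broadcasts to shape (3, 5)
series = terms.sum(axis=0)           # sum over the order axis -> shape (5,)
print(terms.shape, series.shape)     # prints (3, 5) (5,)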
These two versions are functionally the same, but one is completely vectorized while the other uses Python for-loops. As a new user of Python and numpy, I definitely recommend reading through the broadcasting basics. Good luck!

Error in solving a nonlinear optimization system with dynamic constraint using openopt and scipy

I'm trying to solve a nonlinear optimal control problem subject to a dynamic constraint ( h(x, x', u) = 0 ).
given:
f(x) = (u(t) - u0(t))^2 # u0(t) is the initial input provided to the system
h(x) = y'(t) - integral(sqrt(u(t))*y(t) + y(t)) = 0 # a nonlinear differential equation
-2 < y(t) < 10 # the system state is bounded to this range
-2 < u(t) < 10 # the system input is bounded to this range
u0(t) # will be defined as an arbitrary piecewise-linear function
I've tried to translate the problem into Python code using openopt and scipy:
import numpy as np
from scipy.integrate import *
from openopt import NLP
import matplotlib.pyplot as plt
from operator import and_

N = 15*4
y0 = 10
t0 = 0
tf = 10
lb, ub = np.ones(2)*-2, np.ones(2)*10
t = np.linspace(t0, tf, N)
u0 = np.piecewise(t, [t < 3, and_(3 <= t, t < 6), 6 <= t], [2, lambda t: t - 3, lambda t: -t + 9])
p = np.empty(N, dtype=np.object)
r = np.empty(N, dtype=np.object)
y = np.empty(N, dtype=np.object)
u = np.empty(N, dtype=np.object)
ff = np.empty(N, dtype=np.object)
for i in range(N):
    t = np.linspace(t0, tf, N)
    b, a = t[i], t[i - 1]
    integrand = lambda t, u1, y1: np.sqrt(u1)*y1 + y1
    integral = lambda u1, y1: fixed_quad(integrand, a, b, args=(u1, y1))[0]
    f = lambda x1: ((x1[1] - u0[i])**2).sum()
    h = lambda x1: x1[0] - y0 - integral(x1[0], x1[1])
    p[i] = NLP(f, (y0, u0[i]), h=h, lb=lb, ub=ub)
    r[i] = p[i].solve('scipy_slsqp')
    y0 = r[i].xf[0]
    y[i] = r[i].xf[0]
    u[i] = r[i].xf[1]
    ff[i] = r[i].ff

figure1 = plt.figure()
axis1 = figure1.add_subplot(311)
plt.plot(u0)
axis2 = figure1.add_subplot(312)
plt.plot(u)
axis2 = figure1.add_subplot(313)
plt.plot(y)
plt.show()
Now the problem is: running the code with a positive initial y0 such as y0 = 10 gives satisfying results.
But with y0 = 0 or a negative value such as y0 = -1, the NLP problem fails, saying:
"NO FEASIBLE SOLUTION has been obtained (1 constraint is equal to NaN, MaxResidual = 0, objFunc = nan)"
Also, for the piecewise-linear initial u0, putting any number other than 0 in the first piece of the function (t < 3), i.e.
u0 = np.piecewise(t, [t < 3, and_(3 <= t, t < 6), 6 <= t], [2, lambda t: t - 3, lambda t: -t + 9])
instead of:
u0 = np.piecewise(t, [t < 3, and_(3 <= t, t < 6), 6 <= t], [0, lambda t: t - 3, lambda t: -t + 9])
results in the same error again.
Any ideas ?
Thanks in advance.
My first thought is that you seem to be solving a 2-dimensional Optimal Control problem as if it were a 1-Dimensional problem.
The constraint dynamics $h(x, x', t)$ are really a second order ODE.
y''(t) - (sqrt(u(t))*y(t) + y(t)) = 0
Starting from this I would rewrite the system as a 2-dimensional, first-order system in the standard way, e.g. as in the sketch below.
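A minimal sketch of that rewrite (my notation, not the answerer's code): with state z = [y, y'], the constraint dynamics become a first-order system that an ODE integrator or a collocation-based optimizer can handle directly:
import numpy as np

# state z = [y, y']; u is the control value at time t
def dynamics(t, z, u):
    y, yp = z
    return [yp, np.sqrt(u)*y + y]   # from y'' = sqrt(u)*y + y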
My second thought is that you seem to be optimizing independently, for $u(t)$, at each time step, whereas the problem is to optimize globally for $u(.)$, the entire function. So if anything, the call to NLP should be outside the for loop...
There are dedicated Optimal Control Open Source toolboxes:
Pythonically, there is JModelica: http://www.jmodelica.org/.
Alternatively, I have also successfully used: ACADO, http://sourceforge.net/p/acado/wiki/Home/ (in C++)
There are some very capable modeling tools now such as CasADi, Pyomo, and Gekko that didn't exist when the question was asked. Here is a solution with Gekko. One issue is that sqrt(u) needs to have a positive u value to avoid imaginary numbers.
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
u0=1; N=61
m.time = np.linspace(0,10,N)
# lb of -2 leads to imaginary number with sqrt(-)
u = m.MV(u0,lb=1e-2,ub=10); u.STATUS=1
m.options.MV_STEP_HOR = 18 # allow move at 3, 6 time units
m.options.MV_TYPE = 1 # piecewise linear
y = m.Var(10,lb=-2,ub=10)
m.Minimize((u-u0)**2)
m.Minimize(y**2) # otherwise solution is u=u0
m.Equation(y.dt()==m.integral(m.sqrt(u)*y-y))
m.options.IMODE=6
m.options.SOLVER=1
m.solve()
import matplotlib.pyplot as plt
plt.figure(figsize=(7,4))
plt.plot(m.time,u,label='u')
plt.plot(m.time,y,label='y')
plt.legend(); plt.grid()
plt.savefig('solution.png',dpi=300)
plt.show()
