Lorenz attractor with Runge-Kutta in Python

Hello, I have to program a Python function to solve the Lorenz differential equations using second-order Runge-Kutta, with
sigma=10, r=28 and b=8/3,
and initial conditions (x,y,z)=(0,1,0).
This is the code I wrote, but it throws an error saying "overflow encountered in double_scalars",
and I don't see what is wrong with the program:
from pylab import *

def runge_4(r0, a, b, n, f1, f2, f3):
    def f(r, t):
        x = r[0]
        y = r[1]
        z = r[2]
        fx = f1(x, y, z, t)
        fy = f2(x, y, z, t)
        fz = f3(x, y, z, t)
        return array([fx, fy, fz], float)
    h = (b - a) / n
    lista_t = arange(a, b, h)
    print(lista_t)
    X, Y, Z = [], [], []
    for t in lista_t:
        k1 = h * f(r0, t)
        print("k1=", k1)
        k2 = h * f(r0 + 0.5 * k1, t + 0.5 * h)
        print("k2=", k2)
        k3 = h * f(r0 + 0.5 * k2, t + 0.5 * h)
        print("k3=", k3)
        k4 = h * f(r0 + k3, t + h)
        print("k4=", k4)
        r0 += (k1 + 2 * k2 + 2 * k3 + k4) / float(6)
        print(r0)
        X.append(r0[0])
        Y.append(r0[1])
        Z.append(r0[2])
    return array([X, Y, Z])

def f1(x, y, z, t):
    return 10 * (y - x)

def f2(x, y, z, t):
    return 28 * x - y - x * z

def f3(x, y, z, t):
    return x * y - (8.0 / 3.0) * z

# and I run it
r0 = [1, 1, 1]
runge_4(r0, 1, 50, 20, f1, f2, f3)

Solving differential equations numerically can be challenging. If you choose too large a step size, the solution accumulates high errors and can even become unstable, as in your case.
Either drastically reduce the step size (h), or just use the adaptive Runge-Kutta method provided by scipy:
from numpy import array, linspace
from scipy.integrate import solve_ivp
import pylab
from mpl_toolkits import mplot3d

def func(t, r):
    x, y, z = r
    fx = 10 * (y - x)
    fy = 28 * x - y - x * z
    fz = x * y - (8.0 / 3.0) * z
    return array([fx, fy, fz], float)

# and I run it
r0 = [0, 1, 0]
sol = solve_ivp(func, [0, 50], r0, t_eval=linspace(0, 50, 5000))

# and plot it
fig = pylab.figure()
ax = pylab.axes(projection="3d")
ax.plot3D(sol.y[0, :], sol.y[1, :], sol.y[2, :], 'blue')
pylab.show()
This solver uses a combination of 4th and 5th order Runge-Kutta methods and controls the deviation between them by adapting the step size. See the usage documentation for more: https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html
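If you need tighter accuracy than the defaults, solve_ivp also accepts rtol and atol tolerance keyword arguments. A minimal sketch, with illustrative tolerance values:

# rtol/atol are documented solve_ivp options; the values here are just an example
sol = solve_ivp(func, [0, 50], r0, rtol=1e-8, atol=1e-8,
                t_eval=linspace(0, 50, 5000))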

You use a step size of h=(50-1)/20≈2.5.
For RK4 the useful step sizes, given a Lipschitz constant L, lie in the range L*h=1e-3 to 0.1; one might get somewhat right-looking results up to L*h=2.5. Above that the method turns chaotic, and any resemblance to the underlying ODE is lost.
The Lorenz system has a Lipschitz constant of about L=50 (see Chaos and continuous dependency of ODE solution), so h<0.05 is absolutely required, h=0.002 is better, and h=2e-5 gives the numerically best results for this numerical method.
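To see this with the runge_4 from the question, keep everything else the same and raise n so that h=(b-a)/n lands in the stable range. A minimal sketch (the n here is illustrative; remove the per-step print calls first, or this will be very slow, and pass r0 as a float array so the += update modifies it in place rather than extending a list):

from numpy import array

r0 = array([0.0, 1.0, 0.0])   # float array, matching the stated initial conditions
# h = (50 - 0) / 25000 = 0.002, well below the h < 0.05 bound above
X, Y, Z = runge_4(r0, 0, 50, 25000, f1, f2, f3)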

The overflow can be related to a division by zero or to exceeding the limits of the floating-point type.
To figure out where and when it happens, you can set numpy.seterr('raise') and it will raise an exception, so you can debug and see what is happening. It seems your algorithm is diverging.
Here you can see how to use numpy.seterr.
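A minimal sketch of that debugging approach, assuming the runge_4 and f1..f3 from the question (numpy.seterr and FloatingPointError are standard numpy behaviour):

import numpy as np

np.seterr(all='raise')   # promote silent overflow warnings to FloatingPointError
try:
    runge_4(np.array([0.0, 1.0, 0.0]), 1, 50, 20, f1, f2, f3)
except FloatingPointError as err:
    print("diverged:", err)   # the per-step printouts above show where it blew up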

Related

solve_ivp from scipy does not integrate the whole range of tspan

I'm trying to use solve_ivp from scipy in Python to solve an IVP. I specified the tspan argument of solve_ivp to be (0,10), as shown below. However, for some reason, the solutions I get always stop around t=2.5.
from scipy.integrate import solve_ivp
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as optim

def dudt(t, u):
    return u*(1 - u/12) - 4*np.heaviside(-(t - 5), 1)

ic = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
sol = solve_ivp(dudt, (0, 10), ic, t_eval=np.linspace(0, 10, 10000))

for solution in sol.y:
    y = [y for y in solution if y >= 0]
    t = sol.t[:len(y)]
    plt.plot(t, y)
What is going wrong?
You should always look at what the solver returns. In this case it gives
message: 'Required step size is less than spacing between numbers.'
Think of the process of solving your initial value problem with scipy.integrate.solve_ivp as repeatedly estimating a direction and then taking a small step in that direction. The above error means that the solutions to your equation change so fast that even the smallest possible step is too far. But your equation is simple enough that, at least for t <= 5, where 4*np.heaviside(-(t-5), 1) always gives 4, it can be solved exactly/symbolically. I will say more about t > 5 later.
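A short sketch of how to inspect that; success, status, message and t are all attributes of the object solve_ivp returns:

sol = solve_ivp(dudt, (0, 10), ic, t_eval=np.linspace(0, 10, 10000))
print(sol.success)   # False: the integration stopped early
print(sol.status)    # -1 signals an integration failure
print(sol.message)   # 'Required step size is less than spacing between numbers.'
print(sol.t[-1])     # how far the solver actually got, roughly t = 2.47 here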
Symbolic Solution
Sympy can solve your differential equation. While you can provide it an initial value, it would have taken much longer to solve the equation once for each of your initial values. So instead I told it to give me all solutions and then calculated the parameter C1 for each of your initial values separately.
import numpy as np
import matplotlib.pyplot as plt
from sympy import *

ics = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

f = symbols("f", cls=Function)
t = symbols("t")
eq = Eq(f(t).diff(t), f(t)*(1 - f(t)/12) - 4)
base_sol = dsolve(eq)
c1s = [solve(base_sol.args[1].subs({t: 0}) - ic) for ic in ics]

# Apparently sympy is unhappy that numpy does not supply a cotangent.
# So I do that manually.
sols = [lambdify(t, base_sol.args[1].subs({symbols('C1'): C1[0]}),
                 modules=['numpy', {'cot': lambda x: 1/np.tan(x)}]) for C1 in c1s]

t = np.linspace(0, 5, 10000)
for sol in sols:
    y = sol(t)
    mask = (y > -5) & (y < 20)
    plt.plot(t[mask], y[mask])
At first glance the picture looks odd, especially the blue and orange straight-line parts. This is just due to the values lying outside the masked range, so matplotlib connects them directly. What is actually happening is a sudden jump. That jump is what tripped up the numeric ODE solver earlier. You can see it even more clearly when you make sympy print the first solution.
The tangent is known to have a pole at pi/2, and if you solve for where the argument of the tangent above reaches it, you get 2.47241377386575, which is probably where your plotting stopped.
Now what about t > 5?
Unfortunately your equation is not continuous at t=5. One approach would be to solve the equation for t>5 separately, with initial values obtained by following the solutions of the first equation up to t=5. But that is another question for another day.
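For completeness, a rough sketch of that piecewise idea; it is purely illustrative, simply restarting solve_ivp at t=5 from the endpoint of the first segment, and it will still fail for initial values whose solutions blow up before t=5:

# First segment, where the heaviside term is constant (= 4)
sol1 = solve_ivp(dudt, (0, 5), ic, t_eval=np.linspace(0, 5, 5000))
if sol1.success:
    ic2 = sol1.y[:, -1]   # state at t = 5 becomes the new initial condition
    # Second segment, where the heaviside term is 0
    sol2 = solve_ivp(dudt, (5, 10), ic2, t_eval=np.linspace(5, 10, 5000))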

How can I solve ODEs for a set number of time steps using Python (SciPy)?

I am trying to solve a set of ODEs using SciPy. The task I am given asks me to solve the differential equation for 500 time steps. How can I achieve this using SciPy?
So far, I have tried using scipy.integrate.solve_ivp, which gives me a correct solution, but I cannot control the number of time steps that it runs for. The t_span argument lets me configure what the initial and final values of t are, but I'm not actually interested in that -- I am only interested in how many times I integrate. (For example, when I run my equations with t_span = (0, 500) the solver integrates 907 times.)
Below is a simplified example of my code:
from scipy.integrate import solve_ivp

def simple_diff(t, z):
    x, y = z
    return [1 - 2*x*y, 2*x - y]

t_range = (0, 500)
xy_init = [0, 0]

sol = solve_ivp(simple_diff, t_range, xy_init)
I am also fine with using something other than SciPy, but solutions with SciPy are preferable.
You can use the t_eval argument to solve_ivp to evaluate at particular time points:
import numpy as np
t_eval = np.arange(501)
sol = solve_ivp(simple_diff, t_range, xy_init, t_eval=t_eval)
However, note that this does not limit the number of internal integration steps the solver takes - that is determined by its error control.
If you absolutely must evaluate the function exactly 500 times to obtain 500 integration steps, you are describing fixed-step Euler integration, which will be less accurate than the algorithm that solve_ivp uses.
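For reference, a minimal sketch of what taking a fixed number of explicit Euler steps would look like; this is hand-written here, not a solve_ivp feature:

import numpy as np

def euler(f, t_range, z0, n_steps):
    t = np.linspace(t_range[0], t_range[1], n_steps + 1)
    z = np.empty((n_steps + 1, len(z0)))
    z[0] = z0
    h = t[1] - t[0]
    for i in range(n_steps):
        z[i + 1] = z[i] + h * np.asarray(f(t[i], z[i]))   # one explicit Euler step
    return t, z

t, z = euler(simple_diff, (0, 5), [0, 0], 500)   # exactly 500 integration steps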
Looking at the solutions to your equation, it feels like you probably want to integrate only up to t=5.
Here's what the result looks like when integrating with the above settings:
And here's the result for
t_eval = np.linspace(0, 5)
t_range = (0, 5)
sol = solve_ivp(simple_diff, t_range, xy_init, t_eval=t_eval)

Adjusting scipy.integrate.ode to error tolerances

I have just read Using adaptive time step for scipy.integrate.ode when solving ODE systems.
My code below works fine, but the results it produces when solving more complicated equations than the one in the example below seem inaccurate. Is there a way to change this code so that it automatically adapts the time step according to specified absolute and relative error tolerances, e.g. 1e-8?
from scipy.integrate import ode

initials = [0.5, 0.2]
integration_range = (0, 30)

def f(t, X):
    x, y = X[0], X[1]
    dxdt = x**2 + y
    dydt = y**2 + x
    return [dxdt, dydt]

X_solutions = []
t_solutions = []

def solution_getter(t, X):
    t_solutions.append(t)
    X_solutions.append(X.copy())

backend = "dopri5"
ode_solver = ode(f).set_integrator(backend)
ode_solver.set_solout(solution_getter)
ode_solver.set_initial_value(y=initials, t=0)
ode_solver.integrate(integration_range[1])
You can set the values of rtol and atol in the set_integrator call; see https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html.
The default values provide a medium accuracy that is good enough for graphics, but may not be enough for other purposes.
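Applied to the code above, that is a one-line change; rtol and atol are documented options of the dopri5 integrator, and 1e-8 mirrors the tolerance asked about:

ode_solver = ode(f).set_integrator(backend, rtol=1e-8, atol=1e-8)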

Error in ODE Solver in Python

I'm working with a spark-ignition combustion engine model, and for various reasons I'm using Python to model the combustion. I'm trying to use an ODE solver, but the output is completely unrealistic. I discovered that the integration of the cylinder volume is wrong. I have already tried the "odeint" and "ode" solvers, but the result is the same.
The code takes the derivative of volume with respect to theta and integrates it to find the volume. I included the analytical equation for comparison.
OBS: I had a similar problem using Matlab, when I tried to use degrees in trigonometric functions. When I changed to radians, the problem was solved.
The code follows:
from scipy.integrate import odeint
from scipy.integrate import ode
from scipy import integrate
import math
import sympy
from sympy import sqrt, sin, cos, tan, atan
from pylab import *
from RatesComp import *

V_real = np.zeros((100))

def Volume(V, theta):
    V_sol = V[0]
    dVdtheta = Vtdc*(r-1)/2 * (sin(theta) + eps/2*sin(2*theta)/sqrt(1-(eps**2)*sin(theta)**2))
    return [dVdtheta]

# Geometry
eps = 0.25          # half stroke to rod ratio, s/2l
r = 10              # compression ratio
Vtdc = 6.9813e-05   # volume at TDC

# Initial Conditions
theta0 = -pi
V_init = 0.0006283

theta = linspace(-pi, pi, 100)
solve = odeint(Volume, V_init, theta)

# Analytical Result
Size = len(theta)
for i in range(0, Size, 1):
    V_real[i] = Vtdc*(1+(r-1)/2*(1-cos(theta[i]) + 1/eps*(1-(1-(eps**2)*sin(theta[i])**2)**0.5)))

figure(1)
plot(theta, solve[:, 0], label="Comput")
plot(theta, V_real[0:Size], label="Real")
ylabel('Volume [m^3]')
xlabel('CA [Rad]')
legend()
grid(True)
show()
The figure (not shown here) compares the computed cylinder volume with the real one.
Can someone help with information about why this problem happens?
Apparently you use Python 2. There the declaration r=10 gives r the integer type, which leads to an unwanted integer division in (r-1)/2 in the 'real' solution. In the derivative function the first factor of the product is the float value Vtdc, after which the whole product is evaluated in float.
Thus change to r=10.0, or use (r-1.0)/2 or 0.5*(r-1).
You should also set V_init = r*Vtdc, as that is the value of V_real(-pi).
If you use Python 2, add from __future__ import division as the first line to get Python 3 division semantics, as documented here: https://mail.python.org/pipermail/tutor/2008-March/060886.html
In Python 2, dividing two integer values gives an integer result, not a float. This may solve your problem without large changes to the code.
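A quick illustration of the difference; the comments show plain Python 2 behaviour:

from __future__ import division   # must come before other statements in the file

r = 10
print((r - 1) / 2)    # 4.5 with true division; plain Python 2 prints 4
print((r - 1) // 2)   # 4 in both versions, if floor division is what you want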

Binomial expansion with fractional powers in Python

Is there a quick method of expanding and solving binomials raised to fractional powers in SciPy/NumPy?
For example, I wish to solve the following equation
y * (1 + x)^4.8 = x^4.5
where y is known (e.g. 1.03).
This requires the binomial expansion of (1 + x)^4.8.
I wish to do this for millions of y values and so I'm after a nice and quick method to solve this.
I've tried sympy's expand (and simplify), but it seems not to like the fractional exponent. I'm also struggling with the scipy fsolve module.
Any pointers in the right direction would be appreciated.
EDIT:
By far the simplest solution I found was to generate a lookup table for assumed values of x (and known y). This allows fast interpolation of the 'true' x values.
import numpy as np

y_true = np.linspace(7, 12, 10**6)
x = np.linspace(10, 15, 10**6)

a = 4.5
b = 4.8
y = x**(a+b) / (1 + x)**b

x_true = np.interp(y_true, y, x)
EDIT: Upon comparing output with that of Wolfram Alpha for y=1.03, it looks like fsolve will not return complex roots. https://stackoverflow.com/a/15213699/3456127 is a similar question that may be of more help.
Rearrange your equation: y = x^4.5 / (1+x)^4.8.
scipy.optimize.fsolve() requires a function as its first argument.
Either:
from scipy.optimize import fsolve
import math

def theFunction(x):
    return math.pow(x, 4.5) / math.pow((1 + x), 4.8)

for y in millions_of_values:
    fsolve(theFunction, y)
Or using lambda (anonymous function construct):
from scipy.optimize import fsolve
import math

for y in millions_of_values:
    fsolve((lambda x: (math.pow(x, 4.5) / math.pow((1 + x), 4.8))), y)
I don't think you need the binomial expansion. Horner's method for evaluating polynomials suggests that a factored form of a polynomial is better to work with than an expanded one.
In general, nonlinear equation solving can benefit from symbolic differentiation, which is not too difficult to do by hand for your equation. Providing an analytical expression for the derivative saves the solver from having to estimate it numerically. You can write two functions: one that returns the value of the function and another that returns its derivative (i.e. the Jacobian, which for this 1-D function is just the scalar derivative), as described in the docs for scipy.optimize.fsolve(). Some code that takes this approach:
import numpy as np
from scipy.optimize import fsolve

def the_function(x, y):
    return y * (1 + x)**(4.8) / x**(4.5) - 1

def the_derivative(x, y):
    l_dh = x**(4.5) * (4.8 * y * (1 + x)**(3.8))
    h_dl = y * (1 + x)**(4.8) * 4.5 * x**3.5
    square_of_whats_below = x**9
    return (l_dh - h_dl) / square_of_whats_below

print(fsolve(the_function, x0=1, args=(0.1,)))
print('\n\n')
print(fsolve(the_function, x0=1, args=(0.1,), fprime=the_derivative))

# IPython magics, for timing in an interactive session:
%timeit fsolve(the_function, x0=1, args=(0.1,))
%timeit fsolve(the_function, x0=1, args=(0.1,), fprime=the_derivative)
...gives me this output:
[ 1.79308495]
[ 1.79308495]
10000 loops, best of 3: 105 µs per loop
10000 loops, best of 3: 136 µs per loop
which shows that analytical differentiation did not result in any speedup in this particular case. My guess is that the numerical approximation of the derivative involves easier-to-compute operations like multiplication, squaring, and addition, instead of functions like fractional exponentiation.
You can get additional simplification by taking the log of your equation and plotting it. With a little algebra, you should be able to obtain an explicit function for ln_y, the natural log of y. If I've done the algebra correctly:
def ln_y(x):
    return 4.5 * np.log(x/(1. + x)) - 0.3 * np.log(1. + x)
You can plot this function, which I have done for both lin-lin and log-log plots:
%matplotlib inline
import matplotlib.pyplot as plt

x_axis = np.linspace(1, 100, num=2000)
f, ax = plt.subplots(1, 2, figsize=(8, 4))

ln_y_axis = ln_y(x_axis)

ax[0].plot(x_axis, np.exp(ln_y_axis))   # plotting y vs. x
ax[1].plot(np.log(x_axis), ln_y_axis)   # plotting ln(y) vs. ln(x)
This shows that there are two values of x for every y, as long as y is below a critical value. The maximal, singular value of y, where the two branches meet, occurs at x=15 and is:
np.exp(ln_y(15))
0.32556278053267873
So your example y value of 1.03 yields no (real) solution for x.
This behavior, which we discerned from the plots, is recapitulated by the scipy.optimize.fsolve() call we made before:
print(fsolve(the_function, x0=1, args=(0.32556278053267873,), fprime=the_derivative))
[ 14.99999914]
That shows that an initial guess of x=1, when y is 0.32556278053267873, gives x=15 as the solution. Trying larger y values:
print(fsolve(the_function, x0=15, args=(0.35,), fprime=the_derivative))
results in an error:
/Users/curt/anaconda/lib/python2.7/site-packages/IPython/kernel/__main__.py:5: RuntimeWarning: invalid value encountered in power
The reason for the error is that the power function in Python 2 (and in numpy) does not accept negative bases with fractional exponents by default. You can fix that by supplying the power as a complex number, i.e. writing x**(4.5+0j) instead of x**4.5, but are you really interested in complex x values that would solve your equation?
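A small demonstration of that with numpy; the base value is just an example:

import numpy as np

print(np.power(-2.0, 4.5))        # nan, plus 'invalid value encountered in power'
print(np.power(-2.0, 4.5 + 0j))   # approximately 22.63j, a complex result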
