Lyapunov Spectrum for known ODEs - Python 3 [closed]

I want to numerically compute the Lyapunov spectrum of the Lorenz system by using the standard method described in this paper, p. 81.
One basically needs to integrate the Lorenz system together with the tangential vectors (I used the Runge-Kutta method for this). The evolution equations of the tangential vectors are given by the Jacobian matrix of the Lorenz system. After each iteration one applies the Gram-Schmidt scheme to the vectors and stores the logarithms of their lengths. The three Lyapunov exponents are then given by the time averages of the stored values.
I implemented the scheme explained above in Python (version 3.7.4), but I don't get the correct results.
I think the bug lies in the RK4 method for the vectors, but I cannot find any error... The RK4 method for the trajectories x, y, z works correctly (indicated by the plot), and the Gram-Schmidt scheme is also implemented correctly.
I hope that someone could look through my short code and maybe find my error.
Edit: Updated Code
from numpy import array, arange, zeros, dot, log
import matplotlib.pyplot as plt
from numpy.linalg import norm
# Evolution equation of trajectories and tangential vectors
def f(r):
    x = r[0]
    y = r[1]
    z = r[2]
    fx = sigma * (y - x)
    fy = x * (rho - z) - y
    fz = x * y - beta * z
    return array([fx, fy, fz], float)
def jacobian(r):
    M = zeros([3,3])
    M[0,:] = [- sigma, sigma, 0]
    M[1,:] = [rho - r[2], -1, - r[0]]
    M[2,:] = [r[1], r[0], -beta]
    return M
def g(d, r):
    dx = d[0]
    dy = d[1]
    dz = d[2]
    M = jacobian(r)
    dfx = dot(M, dx)
    dfy = dot(M, dy)
    dfz = dot(M, dz)
    return array([dfx, dfy, dfz], float)
# Initial conditions
d = array([[1,0,0], [0,1,0], [0,0,1]], float)
r = array([19.0, 20.0, 50.0], float)
sigma, rho, beta = 10, 45.92, 4.0
T = 10**5 # time steps
dt = 0.01 # time increment
Teq = 10**4 # Transient time
l1, l2, l3 = 0, 0, 0 # Lengths
xpoints, ypoints, zpoints = [], [], []
# Transient
for t in range(Teq):
    # RK4 - Method
    k1 = dt * f(r)
    k11 = dt * g(d, r)
    k2 = dt * f(r + 0.5 * k1)
    k22 = dt * g(d + 0.5 * k11, r + 0.5 * k1)
    k3 = dt * f(r + 0.5 * k2)
    k33 = dt * g(d + 0.5 * k22, r + 0.5 * k2)
    k4 = dt * f(r + k3)
    k44 = dt * g(d + k33, r + k3)
    r += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    d += (k11 + 2 * k22 + 2 * k33 + k44) / 6

    # Gram-Schmidt-Scheme
    orth_1 = d[0]
    d[0] = orth_1 / norm(orth_1)
    orth_2 = d[1] - dot(d[1], d[0]) * d[0]
    d[1] = orth_2 / norm(orth_2)
    orth_3 = d[2] - (dot(d[2], d[1]) * d[1]) - (dot(d[2], d[0]) * d[0])
    d[2] = orth_3 / norm(orth_3)
for t in range(T):
    k1 = dt * f(r)
    k11 = dt * g(d, r)
    k2 = dt * f(r + 0.5 * k1)
    k22 = dt * g(d + 0.5 * k11, r + 0.5 * k1)
    k3 = dt * f(r + 0.5 * k2)
    k33 = dt * g(d + 0.5 * k22, r + 0.5 * k2)
    k4 = dt * f(r + k3)
    k44 = dt * g(d + k33, r + k3)
    r += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    d += (k11 + 2 * k22 + 2 * k33 + k44) / 6

    orth_1 = d[0]  # Gram-Schmidt-Scheme
    l1 += log(norm(orth_1))
    d[0] = orth_1 / norm(orth_1)
    orth_2 = d[1] - dot(d[1], d[0]) * d[0]
    l2 += log(norm(orth_2))
    d[1] = orth_2 / norm(orth_2)
    orth_3 = d[2] - (dot(d[2], d[1]) * d[1]) - (dot(d[2], d[0]) * d[0])
    l3 += log(norm(orth_3))
    d[2] = orth_3 / norm(orth_3)
# Correct Solution (2.16, 0.0, -32.4)
lya1 = l1 / (dt * T)
lya2 = l2 / (dt * T) - lya1
lya3 = l3 / (dt * T) - lya1 - lya2
print(lya1, lya2, lya3)
# my solution T = 10^5 : (1.3540301507934012, -0.0021967491623752448, -16.351653561383387)
The above code is updated according to Lutz's suggestions.
The results look much better, but they are still not 100% accurate.
Correct solution: (2.16, 0.0, -32.4)
My solution: (1.3540301507934012, -0.0021967491623752448, -16.351653561383387)
The correct solutions are from Wolf's paper, p. 289. On pages 290-291 he describes his method, which looks exactly the same as in the paper that I mentioned at the beginning of this post (Paper, p. 81).
So there must be another error in my code...

You need to solve the system of point and Jacobian as the (forward) coupled system that it is. In the original source exactly that is done: everything is updated in one RK4 call for the combined system.
So, for instance, in the second stage you would mix the operations to get a combined second stage:
k2 = dt * f(r + 0.5 * k1)
M = jacobian(r + 0.5 * k1)
k22 = dt * g(d + 0.5 * k11, r + 0.5 * k1)
You could also delegate the computation of M into the g function, as that is the only place where it is needed, and it increases the locality in the scope of variables.
Note that I changed the update of d from k1 to k11, which should be the main source of the error in the numerical result.
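Put together, one combined RK4 step might look like this (my paraphrase, reusing the question's f, g and dt; not the exact code of the original source):
def rk4_step(r, d, dt):
    # advance trajectory r and tangent vectors d with the same stage points
    k1,  k11 = dt * f(r),            dt * g(d, r)
    k2,  k22 = dt * f(r + 0.5 * k1), dt * g(d + 0.5 * k11, r + 0.5 * k1)
    k3,  k33 = dt * f(r + 0.5 * k2), dt * g(d + 0.5 * k22, r + 0.5 * k2)
    k4,  k44 = dt * f(r + k3),       dt * g(d + k33, r + k3)
    r_new = r + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    d_new = d + (k11 + 2 * k22 + 2 * k33 + k44) / 6
    return r_new, d_new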
Additional remarks on the last code version (2/28/2021):
As said in the comments, the code looks like it will do what the mathematics of the algorithm prescribes. There are two misreadings that prevent the code from returning a result close to the reference:
The parameter in the paper is sigma=16.
The paper uses not the natural logarithm but the binary one; that is, the magnitude evolution is given as 2^(L_i·t). So you have to divide the computed exponents by log(2).
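Concretely, a minimal sketch of the corresponding change to the question's last lines (with sigma = 16 set at the top of the script):
lya1 = l1 / (dt * T) / log(2)
lya2 = l2 / (dt * T) / log(2) - lya1
lya3 = l3 / (dt * T) / log(2) - lya1 - lya2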
Using the method derived in https://scicomp.stackexchange.com/questions/36013/numerical-computation-of-lyapunov-exponent I get the exponents
[2.1531855610566595, -0.00847304754613621, -32.441308372177566]
which is sufficiently close to the reference (2.16, 0.0, -32.4).


How to resolve double_scalar error in numpy?

I was using the 4th-order Runge-Kutta method to solve the differential equations of the Duffing oscillator with NumPy arrays, but I received an error:
RuntimeWarning: overflow encountered in double_scalars
Does anyone know possible sources of this overflow error and ways in which I could try to resolve it?
t = np.linspace(0,1,steps)
# start at t=0
h = t[1] - t[0]
xn = np.zeros(steps)
xn[0] = 1
vn = np.zeros(steps)
vn[0] = 1
for n in range(1,steps):
    k1 = vn[n-1]
    l1 = g * np.cos(w * t[n-1]) - d * vn[n-1] - a * xn[n-1] - b * xn[n-1] **3
    k2 = vn[n-1] + h*k1/2
    l2 = g * np.cos(w * (t[n-1] + h/2)) - d * (vn[n-1] + h*k1/2) - a * (xn[n-1] + h*l1/2) - b * (xn[n-1] + h*l1/2)**3
    k3 = vn[n-1] + h*k2/2
    l3 = g * np.cos(w * (t[n-1] + h/2)) - d * (vn[n-1] + h*k2/2) - a * (xn[n-1] + h*l2/2) - b * (xn[n-1] + h*l2/2)**3
    k4 = vn[n-1]*k3
    l4 = g * np.cos(w * (t[n-1] + h)) - d * (vn[n-1] + h*k3) - a * (xn[n-1] + h*l3) - b * (xn[n-1] + h*l3)**3
    vn[n] = vn[n-1] + h/6 * (k1 + 2* k2 + 2 * k3+ k4)
    xn[n] = xn[n-1] + h/6 * (l1 + 2* l2 + 2* l3 + l4)
Here is my code for reference, should anyone need it. vn represents v_numerical.
A link to Wikipedia for reference of the formula: https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods
I don't think it is very important to understand the formula; some possible solutions that I could try to resolve the problem would be helpful.
The k are the updates for x and the l the updates for v, since v = dx/dt. In the final update, however, you add the k sum to vn and the l sum to xn, so the roles are swapped and you are effectively integrating a different, only half-coupled system of two first-order equations.
Also note the typo or deletion error in k4 (following the pattern of k2 and k3, it was presumably meant to read k4 = vn[n-1] + h*k3).
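For comparison, a minimal sketch of one consistently coupled RK4 step for the system x' = v, v' = g*cos(w*t) - d*v - a*x - b*x**3, written as my own function (the names dx, dv and duffing_rk4_step are mine; the parameters g, w, d, a, b and step size h are the question's):
import numpy as np

def duffing_rk4_step(t, x, v, h, g, w, d, a, b):
    # slope functions: dx/dt = v, dv/dt = forcing - damping - stiffness
    def dx(t, x, v): return v
    def dv(t, x, v): return g*np.cos(w*t) - d*v - a*x - b*x**3

    k1, l1 = dx(t, x, v),                         dv(t, x, v)
    k2, l2 = dx(t + h/2, x + h*k1/2, v + h*l1/2), dv(t + h/2, x + h*k1/2, v + h*l1/2)
    k3, l3 = dx(t + h/2, x + h*k2/2, v + h*l2/2), dv(t + h/2, x + h*k2/2, v + h*l2/2)
    k4, l4 = dx(t + h,   x + h*k3,   v + h*l3),   dv(t + h,   x + h*k3,   v + h*l3)

    x_new = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)   # x gets the k (position) slopes
    v_new = v + h/6 * (l1 + 2*l2 + 2*l3 + l4)   # v gets the l (velocity) slopes
    return x_new, v_new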

Using the Runge-Kutta method to solve a differential equation of motion

In our physics class we have to model a damped torsional pendulum.
Illustration of the torsional pendulum:
We came up with this equation of motion:
θ'' = -A·θ - B·(θ')²·sgn(θ') - C·θ' - D·sgn(θ')
where θ is the angle, A is the torsion parameter, B is Newton's parameter, C is Stokes' parameter, and D is the friction parameter. We also use the sign function sgn that determines the direction of the force acting upon the pendulum, depending on the current angle from the reference point.
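For reference, the same equation written as a first-order system in Python (a minimal sketch using the parameter names A, B, C, D from above; the function name pendulum_rhs is mine, and this only restates the formula):
import numpy as np

def pendulum_rhs(t, state, A, B, C, D):
    # state = [theta, dtheta]; returns [theta', theta''] for the damped torsional pendulum
    theta, dtheta = state
    epsilon = -A*theta - B*dtheta**2*np.sign(dtheta) - C*dtheta - D*np.sign(dtheta)
    return np.array([dtheta, epsilon])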
The problem is that I'm unable to solve it using the Runge-Kutta method in Python.
I got a working solution in MATLAB by using Euler's method, which has some flaws, but it is something.
MATLAB Code:
function [theta, dtheta, epsilon] = drt(t, theta0, dtheta0, A, B, C, D)
    epsilon = zeros(1, length(t));
    theta = epsilon;
    dtheta = epsilon;
    theta(1) = theta0;
    dtheta(1) = dtheta0;
    epsilon(1) = A * alpha0 - B * dtheta^2 - C * dtheta - D;
    dt = t(2) - t(1);
    for i = 1 : (length(t) - 1)
        epsilon(i + 1) = - A * theta(i) - B * dtheta(i)^2 * sign(dtheta(i)) - C * dtheta(i) - D * sign(dtheta(i));
        dtheta(i + 1) = dtheta(i) + dt * epsilon(i);
        theta(i + 1) = theta(i) + dt * dtheta(i);
    end
end
We call this MATLAB function like this for example:
t = linspace(0, 10, 100);
theta0 = 90;
dtheta0 = 0;
A = 1;
B = 0.1;
C = 0.1;
D = 0.1;
[theta, dtheta, epsilon] = drt(t, theta0, dtheta0, A, B, C, D);
We can then plot the theta and other values in a graph, which shows us, how the torsional pendulum is being damped by the external forces acting on it.
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Damping torsional pendulum
def drp(drp_alpha, drp_d_alpha, drp_params):
    a = drp_params["tors"]
    b = drp_params["newt"]
    c = drp_params["stok"]
    d = drp_params["fric"]
    result = a * drp_alpha - b * np.power(drp_d_alpha, 2) * np.sign(drp_d_alpha) - c * drp_d_alpha - d * np.sign(drp_d_alpha)
    return result
# Runge-Kutta 4th Order
# f - function DamRotPen
# x0 - initial condition
# t0 - initial time
# tmax - maximum time
# dt - sample time
def RG4(rg4_f, rg4_x0, rg4_t0, rg4_tmax, rg4_dt):
    # Time vector
    rg4_t = np.arange(rg4_t0, rg4_tmax, rg4_dt)
    # Time vector size
    rg4_t_sz = rg4_t.size
    # Initialize the array
    rg4_alpha = np.zeros(rg4_t_sz)
    # Initial value of the system
    rg4_alpha[0] = rg4_x0
    for k in range(rg4_t_sz - 1):
        k1 = dt * f(rg4_t[k], rg4_alpha[k])
        k2 = dt * rg4_f(rg4_t[k] + dt / 2, rg4_alpha[k] + k1 / 2)
        k3 = dt * rg4_f(rg4_t[k] + dt / 2, rg4_alpha[k] + k2 / 2)
        k4 = dt * rg4_f(rg4_t[k] + dt, rg4_alpha[k] + k3)
        rg4_d_alpha = (k1 + 2 * k2 + 2 * k3 + k4) / 6
        rg4_alpha[k + 1] = rg4_alpha[k] + rg4_d_alpha
    return rg4_alpha, rg4_t
# Parameters of the forces acting on the system
# tors - torsion parameter
# newt - Newton's parameter
# stok - Stokes' parameter
# fric - friction parameter
params = {"tors": 1, "newt": 0.1, "stok": 0.1, "fric": 0.1}
# Start parameters
alpha = 90
d_alpha = 0
# Initial time
t0 = 0
# Maximum time
tmax = 120
# Sample time
dt = 0.01
# Define DamRotPen function as 'f' using lambda
f = lambda t, alpha : drp(alpha, d_alpha, params)
# Try to solve the system
alpha, t = RG4(f, alpha, t0, tmax, dt)
# Plot the result
plt.plot(t, alpha, "r", "r", label="Position I guess")
plt.xlabel("Time t / s")
plt.grid()
plt.show()
After plotting the values, we can see on the graph that θ skyrockets after a certain amount of time. I don't know what I'm doing wrong; I tried practically everything, so that's why I'm asking you for help. (Though I think part of the problem might be my misunderstanding of how to implement the Runge-Kutta method, maybe I got the math wrong, etc...)

Octave's fzero() and Scipy's root() functions not producing the same result

I have to find the zero of the following equation (written out with the names used in the code below):
x*(1.0 + beta*x + delta*x**4 + eta*x**2*(1.0 + phi*x**2)*exp(-phi*x**2)) - pr/Tr = 0
This is an equation of state, and it doesn't matter a whole lot if you don't know exactly what an EoS is. With the root of the above equation I compute (among other things) the compressibility factors of a gaseous substance, Z, for different pressures and temperatures. With those solutions I can plot families of curves having pressures as abscissas, Zs as ordinates and temperatures as parameters. Beta, delta, eta and phi are constants, as well as pr and Tr.
After banging my head unsuccessfully against the Newton-Raphson method (which works fine with several other EoSs) I decided to try Scipy's root() function. To my discontent, I obtained this chart:
As one can easily perceive, this saw-toothed chart is totally flawed. I should've gotten smooth curves instead. Also, Z typically ranges between 0.25 and 2.0. Thus, Zs equal to, say, 3 or above are completely off the mark. Yet the curves with Z < 2 look OK, although highly compressed because of the scale.
Then I tried Octave's fzero() solver, and got this:
Which is exactly what I should've gotten, as those are curves with the correct/expected shape!
Here comes my question. Apparently Scipy's root() and Octave's fzero() are based on the same hybrid algorithm from MINPACK. Still, the results clearly aren't the same. Do any of you know why?
I plotted a curve of the Zs obtained by Octave (abscissas) against the ones obtained with Scipy and got this:
The points at the bottom hinting at a straight line represent y = x, i.e., the points for which Octave and Scipy agreed in the solutions they presented. The other points are in total disagreement and, unfortunately, they're too many to be simply ignored.
I might always use Octave from now on since it works, but I want to keep using Python.
What's your take on this? Any suggestion?
PS: Here's the original Python code. It produces the first chart shown here.
import numpy
from scipy.optimize import root
import matplotlib.pyplot as plt
def fx(x, beta, delta, eta, phi, pr_, Tr_):
    tmp = phi*x**2
    etmp = numpy.exp(-tmp)
    f = x*(1.0 + beta*x + delta*x**4 + eta*x**2*(1.0 + tmp)*etmp) - pr_/Tr_
    return f
def zsbwr(pr_, Tr_, pc_, Tc_, zc_, w_, MW_, phase=0):
    d1 = 0.4912 + 0.6478*w_
    d2 = 0.3000 + 0.3619*w_
    e1 = 0.0841 + 0.1318*w_ + 0.0018*w_**2
    e2 = 0.075 + 0.2408*w_ - 0.014*w_**2
    e3 = -0.0065 + 0.1798*w_ - 0.0078*w_**2
    f = 0.770
    ee = (2.0 - 5.0*zc_)*numpy.exp(f)/(1.0 + f + 3.0*f**2 - 2*f**3)
    d = (1.0 - 2.0*zc_ - ee*(1.0 + f - 2.0*f**2)*numpy.exp(-f))/3.0
    b = zc_ - 1.0 - d - ee*(1.0 + f)*numpy.exp(-f)
    bc = b*zc_
    dc = d*zc_**4
    ec = ee*zc_**2
    phi = f*zc_**2
    beta = bc + 0.422*(1.0 - 1.0/Tr_**1.6) + 0.234*w_*(1.0 - 1.0/Tr_**3)
    delta = dc*(1.0 + d1*(1.0/Tr_ - 1.0) + d2*(1.0/Tr_ - 1.0)**2)
    eta = ec + e1*(1.0/Tr_ - 1.0) + e2*(1.0/Tr_ - 1.0)**2 \
        + e3*(1.0/Tr_ - 1.0)**3
    if Tr_ > 1:
        y0 = pr_/Tr_/(1.0 + beta*pr_/Tr_)
    else:
        if phase == 0:
            y0 = pr_/Tr_/(1.0 + beta*pr_/Tr_)
        else:
            y0 = 1.0/zc_**(1.0 + (1.0 - Tr_)**(2.0/7.0))
    raiz = root(fx, y0, args=(beta, delta, eta, phi, pr_, Tr_), method='hybr', tol=1.0e-06)
    return pr_/raiz.x[0]/Tr_
if __name__ == "__main__":
    Tc = 304.13
    pc = 73.773
    omega = 0.22394
    zc = 0.2746
    MW = 44.01
    Tr = numpy.array([0.8, 0.93793103])
    pr = numpy.linspace(0.5, 14.5, 25)
    zfactor = numpy.zeros((2, 25))
    for redT in Tr:
        j = numpy.where(Tr == redT)[0][0]
        for redp in pr:
            indp = numpy.where(pr == redp)[0][0]
            zfactor[j][indp] = zsbwr(redp, redT, pc, Tc, zc, omega, MW, 0)
    for key, value in enumerate(zfactor):
        plt.plot(pr, value, '.-', linewidth=1, color='#ef082a')
    plt.figure(1, figsize=(7, 6))
    plt.xlabel('$p_R$', fontsize=16)
    plt.ylabel('$Z$', fontsize=16)
    plt.grid(color='#aaaaaa', linestyle='--', linewidth=1)
    plt.show()
And now the Octave script:
function SoaveBenedictWebbRubin
    format long;
    nTr = 11;
    npr = 43;
    ic = 1;
    nome = {"CO2"; "N2"; "H2O"; "CH4"; "C2H6"; "C3H8"};
    comp = [304.13, 73.773, 0.22394, 0.2746, 44.0100; ...
            126.19, 33.958, 0.03700, 0.2894, 28.0134; ...
            647.14, 220.640, 0.34430, 0.2294, 18.0153; ...
            190.56, 45.992, 0.01100, 0.2863, 16.0430; ...
            305.33, 48.718, 0.09930, 0.2776, 30.0700; ...
            369.83, 42.477, 0.15240, 0.2769, 44.0970];
    Tc = comp(ic,1);
    pc = comp(ic,2);
    w = comp(ic,3);
    zc = comp(ic,4);
    MW = comp(ic,5);
    Tr = linspace(0.8, 2.8, nTr);
    pr = linspace(0.2, 7.2, npr);
    figure(1, 'position',[300,150,600,500])
    for i=1:size(Tr, 2)
        icont = 1;
        zval = zeros(1, npr);
        for j=1:size(pr, 2)
            [Z, phi, density] = SBWR(Tr(i), pr(j), Tc, pc, zc, w, MW, 0);
            zval(icont) = Z;
            icont = icont + 1;
        endfor
        plot(pr,zval,'o','markerfacecolor','white','linestyle','-','markersize',3);
        hold on;
    endfor
    str = strcat("Soave-Benedict-Webb-Rubin para","\t",nome(ic));
    xlabel("p_r",'fontsize',15);
    ylabel("Z",'fontsize',15);
    title(str,'fontsize',12);
end
function [Z,phi,density] = SBWR(Tr, pr, Tc, pc, Zc, w, MW, phase)
    R = 8.3144E-5; % universal gas constant (bar·m3/(mol·K))
    % Definition of parameters
    d1 = 0.4912 + 0.6478*w;
    d2 = 0.3 + 0.3619*w;
    e1 = 0.0841 + 0.1318*w + 0.0018*w**2;
    e2 = 0.075 + 0.2408*w - 0.014*w**2;
    e3 = -0.0065 + 0.1798*w - 0.0078*w**2;
    f = 0.77;
    ee = (2.0 - 5.0*Zc)*exp(f)/(1.0 + f + 3.0*f**2 - 2.0*f**3);
    d = (1.0 - 2.0*Zc - ee*(1.0 + f - 2.0*f**2)*exp(-f))/3.0;
    b = Zc - 1.0 - d - ee*(1.0 + f)*exp(-f);
    bc = b*Zc;
    dc = d*Zc**4;
    ec = ee*Zc**2;
    ff = f*Zc**2;
    beta = bc + 0.422*(1.0 - 1.0/Tr**1.6) + 0.234*w*(1.0 - 1.0/Tr**3);
    delta = dc*(1.0 + d1*(1.0/Tr - 1.0) + d2*(1.0/Tr - 1.0)**2);
    eta = ec + e1*(1.0/Tr - 1.0) + e2*(1.0/Tr - 1.0)**2 + e3*(1.0/Tr - 1.0)**3;
    if Tr > 1
        y0 = pr/Tr/(1.0 + beta*pr/Tr);
    else
        if phase == 0
            y0 = pr/Tr/(1.0 + beta*pr/Tr);
        else
            y0 = 1.0/Zc**(1.0 + (1.0 - Tr)**(2.0/7.0));
        end
    end
    fun = @(y) y*(1.0 + beta*y + delta*y**4 + eta*y**2*(1.0 + ff*y**2)*exp(-ff*y**2)) - pr/Tr;
    options = optimset('TolX',1.0e-06);
    yi = fzero(fun,y0,options);
    Z = pr/yi/Tr;
    density = yi*pc*MW/(1000.0*R*Tc);
    phi = exp(Z - 1.0 - log(Z) + beta*yi + 0.25*delta*yi**4 - eta/ff*(exp(-ff*yi**2)*(1.0 + 0.5*ff*yi**2) - 1.0));
end
First things first: your two files weren't equivalent, so a direct comparison of the underlying algorithms was difficult. I attach here an Octave and a Python version that are directly comparable line-for-line, so they can be inspected side by side.
%%% File: SoaveBenedictWebbRubin.m:
% No package imports necessary
function SoaveBenedictWebbRubin()
    nome = {"CO2"; "N2"; "H2O"; "CH4"; "C2H6"; "C3H8"};
    comp = [ 304.13,  73.773, 0.22394, 0.2746, 44.0100;
             126.19,  33.958, 0.03700, 0.2894, 28.0134;
             647.14, 220.640, 0.34430, 0.2294, 18.0153;
             190.56,  45.992, 0.01100, 0.2863, 16.0430;
             305.33,  48.718, 0.09930, 0.2776, 30.0700;
             369.83,  42.477, 0.15240, 0.2769, 44.0970 ];
    nTr = 11; Tr = linspace( 0.8, 2.8, nTr );
    npr = 43; pr = linspace( 0.2, 7.2, npr );
    ic = 1;
    Tc = comp(ic, 1);
    pc = comp(ic, 2);
    w  = comp(ic, 3);
    zc = comp(ic, 4);
    MW = comp(ic, 5);
    figure(1, 'position',[300,150,600,500])
    zvalues = zeros( nTr, npr );
    for i = 1 : nTr
        for j = 1 : npr
            zvalues(i,j) = zSBWR( Tr(i), pr(j), Tc, pc, zc, w, MW, 0 );
        endfor
    endfor
    hold on
    for i = 1 : nTr
        plot( pr, zvalues(i,:), 'o-', 'markerfacecolor', 'white', 'markersize', 3);
    endfor
    hold off
    xlabel( "p_r", 'fontsize', 15 );
    ylabel( "Z" , 'fontsize', 15 );
    title( ["Soave-Benedict-Webb-Rubin para\t", nome(ic)], 'fontsize', 12 );
endfunction % main
function Z = zSBWR( Tr, pr, Tc, pc, Zc, w, MW, phase )
    % Definition of parameters
    d1 = 0.4912 + 0.6478 * w;
    d2 = 0.3 + 0.3619 * w;
    e1 = 0.0841 + 0.1318 * w + 0.0018 * w ** 2;
    e2 = 0.075 + 0.2408 * w - 0.014 * w ** 2;
    e3 = -0.0065 + 0.1798 * w - 0.0078 * w ** 2;
    f = 0.77;
    ee = (2.0 - 5.0 * Zc) * exp( f ) / (1.0 + f + 3.0 * f ** 2 - 2.0 * f ** 3 );
    d = (1.0 - 2.0 * Zc - ee * (1.0 + f - 2.0 * f ** 2) * exp( -f ) ) / 3.0;
    b = Zc - 1.0 - d - ee * (1.0 + f) * exp( -f );
    bc = b * Zc;
    dc = d * Zc ** 4;
    ec = ee * Zc ** 2;
    phi = f * Zc ** 2;
    beta = bc + 0.422 * (1.0 - 1.0 / Tr ** 1.6) + 0.234 * w * (1.0 - 1.0 / Tr ** 3);
    delta = dc * (1.0 + d1 * (1.0 / Tr - 1.0) + d2 * (1.0 / Tr - 1.0) ** 2);
    eta = ec + e1 * (1.0 / Tr - 1.0) + e2 * (1.0 / Tr - 1.0) ** 2 + e3 * (1.0 / Tr - 1.0) ** 3;
    if Tr > 1
        y0 = pr / Tr / (1.0 + beta * pr / Tr);
    else
        if phase == 0
            y0 = pr / Tr / (1.0 + beta * pr / Tr);
        else
            y0 = 1.0 / Zc ** (1.0 + (1.0 - Tr) ** (2.0 / 7.0) );
        endif
    endif
    yi = fzero( @(y) fx(y, beta, delta, eta, phi, pr, Tr), y0, optimset( 'TolX', 1.0e-06 ) );
    Z = pr / yi / Tr;
endfunction % zSBWR

function Out = fx( y, beta, delta, eta, phi, pr, Tr )
    Out = y * (1.0 + beta * y + delta * y ** 4 + eta * y ** 2 * (1.0 + phi * y ** 2) * exp( -phi * y ** 2 ) ) - pr / Tr;
endfunction
### File: SoaveBenedictWebbRubin.py
import numpy; from scipy.optimize import root; import matplotlib.pyplot as plt
def SoaveBenedictWebbRubin():
    nome = ["CO2", "N2", "H2O", "CH4", "C2H6", "C3H8"]
    comp = numpy.array( [ [ 304.13,  73.773, 0.22394, 0.2746, 44.0100 ],
                          [ 126.19,  33.958, 0.03700, 0.2894, 28.0134 ],
                          [ 647.14, 220.640, 0.34430, 0.2294, 18.0153 ],
                          [ 190.56,  45.992, 0.01100, 0.2863, 16.0430 ],
                          [ 305.33,  48.718, 0.09930, 0.2776, 30.0700 ],
                          [ 369.83,  42.477, 0.15240, 0.2769, 44.0970 ] ] )
    nTr = 11; Tr = numpy.linspace( 0.8, 2.8, nTr )
    npr = 43; pr = numpy.linspace( 0.2, 7.2, npr )
    ic = 0
    Tc = comp[ic, 0]
    pc = comp[ic, 1]
    w  = comp[ic, 2]
    zc = comp[ic, 3]
    MW = comp[ic, 4]
    plt.figure(1, figsize=(7, 6))
    zvalues = numpy.zeros( (nTr, npr) )
    for i in range( nTr ):
        for j in range( npr ):
            zvalues[i,j] = zsbwr( Tr[i], pr[j], pc, Tc, zc, w, MW, 0)
        # endfor
    # endfor
    for i in range(nTr):
        plt.plot(pr, zvalues[i, :], 'o-', markerfacecolor='white', markersize=3 )
    plt.xlabel( '$p_r$', fontsize = 15 )
    plt.ylabel( '$Z$' , fontsize = 15 )
    plt.title( "Soave-Benedict-Webb-Rubin para\t" + nome[ic], fontsize = 12 )
    plt.show()
# end function main
def zsbwr( Tr, pr, pc, Tc, zc, w, MW, phase=0):
    # Definition of parameters
    d1 = 0.4912 + 0.6478 * w
    d2 = 0.3000 + 0.3619 * w
    e1 = 0.0841 + 0.1318 * w + 0.0018 * w ** 2
    e2 = 0.075 + 0.2408 * w - 0.014 * w ** 2
    e3 = -0.0065 + 0.1798 * w - 0.0078 * w ** 2
    f = 0.770
    ee = (2.0 - 5.0 * zc) * numpy.exp( f ) / (1.0 + f + 3.0 * f ** 2 - 2 * f ** 3)
    d = (1.0 - 2.0 * zc - ee * (1.0 + f - 2.0 * f ** 2) * numpy.exp( -f )) / 3.0
    b = zc - 1.0 - d - ee * (1.0 + f) * numpy.exp( -f )
    bc = b * zc
    dc = d * zc ** 4
    ec = ee * zc ** 2
    phi = f * zc ** 2
    beta = bc + 0.422 * (1.0 - 1.0 / Tr ** 1.6) + 0.234 * w * (1.0 - 1.0 / Tr ** 3)
    delta = dc * (1.0 + d1 * (1.0 / Tr - 1.0) + d2 * (1.0 / Tr - 1.0) ** 2)
    eta = ec + e1 * (1.0 / Tr - 1.0) + e2 * (1.0 / Tr - 1.0) ** 2 + e3 * (1.0 / Tr - 1.0) ** 3
    if Tr > 1:
        y0 = pr / Tr / (1.0 + beta * pr / Tr)
    else:
        if phase == 0:
            y0 = pr / Tr / (1.0 + beta * pr / Tr)
        else:
            y0 = 1.0 / zc ** (1.0 + (1.0 - Tr) ** (2.0 / 7.0))
        # endif
    # endif
    yi = root( fx, y0, args = (beta, delta, eta, phi, pr, Tr), method = 'hybr', tol = 1.0e-06 ).x
    return pr / yi / Tr
# endfunction zsbwr

def fx(y, beta, delta, eta, phi, pr, Tr):
    return y*(1.0 + beta*y + delta*y**4 + eta*y**2*(1.0 + phi*y**2)*numpy.exp(-phi*y**2)) - pr/Tr
# endfunction fx
if __name__ == "__main__": SoaveBenedictWebbRubin()
This confirms that the outputs from the two systems do in fact differ partly because of the underlying algorithms used, rather than because the programs weren't effectively the same. However, the comparison is not as bad now:
As for "the algorithms are the same", they are not. Octave typically hides more technical implementation details in the source code, so this is always worth checking. In particular, in file fzero.m, right after the docstring, it mentions the following:
This is essentially the ACM "Algorithm 748: Enclosing Zeros of Continuous Functions" due to Alefeld, Potra and Shi, ACM Transactions on Mathematical Software, Vol. 21, No. 3, September 1995.
Although the workflow should be the same, the structure of the algorithm has been transformed non-trivially; instead of the authors' approach of sequentially calling building blocks subprograms we implement here a FSM version using one interior point determination and one bracketing per iteration, thus reducing the number of temporary variables and simplifying the algorithm structure. Further, this approach reduces the need for external functions and error handling. The algorithm has also been slightly modified.
Whereas according to help(root):
Notes
This section describes the available solvers that can be selected by the 'method' parameter. The default method is hybr.
Method hybr uses a modification of the Powell hybrid method as implemented in MINPACK [1].
References
[1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom. 1980. User Guide for MINPACK-1.
I tried a couple of the alternatives mentioned in help(root). The df-sane one seems to be optimised for 'scalar' values (i.e. like 'fzero'). Indeed, while not as good as octave's implementation, this does give a slightly 'saner' (pun intended) result:
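For reference, trying this alternative is a one-argument change in the root() call inside the question's zsbwr() (a minimal sketch, keeping the rest of that function unchanged):
raiz = root(fx, y0, args=(beta, delta, eta, phi, pr_, Tr_), method='df-sane', tol=1.0e-06)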
Having said all that, the hybrid method doesn't dump any warnings, but if you use some of the other alternatives, many of them will inform you that you have a lot of effective divisions by zero, NaNs, and infs in places where you shouldn't, which is presumably why you get such weird results. So, perhaps it's not that Octave's algorithm is "better" per se, but that it handles the "division by zero" instances in this problem slightly more gracefully.
I don't know the exact nature of your problem, but it may be that the algorithms on Python's side simply expect you to feed them well-conditioned problems instead. Perhaps some of your computations in zsbwr() result in divisions by zero or unrealistic zeros etc., which you could detect and treat as special cases?
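For example (a hypothetical guard, not from the original answer), one could check the residual at the starting guess before calling the solver:
res0 = fx(y0, beta, delta, eta, phi, pr_, Tr_)
if not numpy.isfinite(res0) or y0 <= 0.0:
    y0 = pr_ / Tr_   # hypothetical fallback: ideal-gas-like starting guess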
(Please trim the code to a minimal example which only shows the root-finding part and the parameters for which it finds an unwanted root.)
Then the procedure is to manually inspect the equation to find a localization interval for the root you want, and to use it. I typically use brentq.
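A minimal sketch of that approach, assuming a bracket [a, b] has already been found by inspecting the equation (the values below are placeholders, not recommendations):
from scipy.optimize import brentq

a, b = 1.0e-6, 5.0   # placeholder bracket on which fx changes sign
y_root = brentq(fx, a, b, args=(beta, delta, eta, phi, pr_, Tr_), xtol=1.0e-6)
Z = pr_ / y_root / Tr_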

Simultaneous equation solver, 4 equations 4 unknowns

Trying to solve a system of 4 equations and 4 unknowns. I keep getting the error "result from function call is not a proper array of floats". I am new to Python, so I think the issue is in my definition of the equations.
I have tried fsolve, sympy.solve, and with and without a definition.
import sympy as sp
from scipy.optimize import fsolve

L0_fcc = 5200
L0_bct = 12000
L0_l = 4700
R = 8.3144

def equations(p):
    t, XSnl, XSnfcc, XSnbct = p
    GPb_fcc_bct = 489 + 3.52 * t
    GPb_fcc_l = 4810 - 8.017 * t
    GSn_bct_fcc = 5510 - 8.46 * t
    GSn_bct_l = 7179 - 14.216 * t
    GSn_fcc_l = 1661 - 5.756 * t
    E1 = sp.Eq(GPb_fcc_l + R * t * sp.log((1-XSnl)/(1-XSnfcc)) + L0_l * (XSnl**2) - L0_fcc * (XSnfcc**2))
    E2 = sp.Eq(GPb_fcc_bct + R * t * sp.log((1-XSnbct)/(1-XSnfcc)) + L0_bct * (XSnbct**2) - L0_fcc * (XSnfcc**2))
    E3 = sp.Eq(GSn_fcc_l + R * t * sp.log(XSnl/XSnfcc) + L0_l * ((1-XSnl)**2) - L0_fcc * ((1-XSnfcc)**2))
    E4 = sp.Eq(GSn_bct_l + R * t * sp.log(XSnl/XSnbct) + L0_l * ((1-XSnl)**2) - L0_bct * ((1-XSnbct)**2))
    return (E1, E2, E3, E4)

x0 = [300, 0, 0, 0]
t, XSnl, XSnfcc, XSnbct = fsolve(equations, x0)
print(t, XSnl, XSnfcc, XSnbct)
It should come out with 4 values, 3 of which should be between 0 and 1, but I am getting the "Result from function call is not a proper array of floats" error.
I'm not sure why you would expect sympy objects to interact with scipy solvers; they are completely different libraries. The former works with symbolic objects, the latter does numerical analysis.
The solution is simply to change the following lines to:
E1 = (GPb_fcc_l + R * t * np.log((1-XSnl)/(1-XSnfcc)) + L0_l * (XSnl**2) - L0_fcc * (XSnfcc**2))
E2 = (GPb_fcc_bct + R * t * np.log((1-XSnbct)/(1-XSnfcc)) + L0_bct * (XSnbct**2) - L0_fcc * (XSnfcc**2))
E3 = (GSn_fcc_l + R * t * np.log(XSnl/XSnfcc) + L0_l * ((1-XSnl)**2) - L0_fcc * ((1-XSnfcc)**2))
E4 = (GSn_bct_l + R * t * np.log(XSnl/XSnbct) + L0_l * ((1-XSnl)**2) - L0_bct * ((1-XSnbct)**2))
return (E1, E2, E3, E4)
Now this results in a RuntimeWarning: invalid value encountered in long_scalars, a RuntimeWarning: divide by zero encountered in double_scalars, and finally a RuntimeWarning: The iteration is not making good progress, as measured by the improvement from the last ten iterations. But that's an algorithmic error which you will have to figure out on your own.
(It's almost certainly bad starting conditions, since there is an XSnl/XSnfcc == 0/0 term on the first iteration.)
Finding numerical solutions to equations is often more an art than a science.
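For instance (a hypothetical tweak, building on the numpy version of the equations above and not part of the original answer), starting the composition variables away from zero removes the XSnl/XSnfcc == 0/0 term on the first iteration:
x0 = [300, 0.1, 0.2, 0.3]   # hypothetical guess: mole fractions strictly inside (0, 1)
t, XSnl, XSnfcc, XSnbct = fsolve(equations, x0)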

What are the coefficients of the end value for the Runge-Kutta 3 (RK3) method in python

Sorry if I have worded this question badly; maybe someone can suggest better phrasing and I will change it accordingly.
So, in terms of RK4, time-stepping with x_old as the initial x value at time t, we have:
x_new = x_old + (1.0/6) * dt * (k1 + 2*k2 + 2*k3 + k4)
What would the corresponding equation for RK3 look like? I.e., what would the values of the coefficients of k1, k2 and k3 be in this case?
I can't find any examples of RK3 online, so sorry I have to ask this question...
Thanks!
The full step equation for the third-order method is (in pseudo-code):
k1 = h * f(x[i], y[i])
k2 = h * f(x[i] + h / 2, y[i] + k1 / 2)
k3 = h * f(x[i] + h, y[i] - k1 + 2 * k2)
y[i+1] = y[i] + 1.0/6.0 * ( k1 + 4.0*k2 + k3 )
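As a minimal, runnable sketch of the same scheme in Python, applied here to the test equation y' = -y (my own example, not from the answer above):
import numpy as np

def rk3_step(f, x, y, h):
    # one step of the third-order Runge-Kutta method quoted above
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h, y - k1 + 2 * k2)
    return y + (k1 + 4.0 * k2 + k3) / 6.0

# usage: integrate y' = -y from y(0) = 1 with step h = 0.1
f = lambda x, y: -y
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk3_step(f, x, y, h)
    x += h
print(x, y, np.exp(-x))  # numerical vs. exact value at x = 1.0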
