Matlab's "OutputFcn" implementation in Python for ODE solving

I am trying to solve an ODE using Python's solve_ivp. However, I want to change the right hand side of my ODE dynamically based on a comparison between the current solution and previous solution. The idea behind this is that my right hand side is a vector field, and I want to ensure directionality of the vector field by reversing the right hand side based on the direction of the previous solution.
The implementation idea is as follows: inside the right-hand-side function, I check the dot product between the previous solution and the vector field; if the dot product is negative, the right-hand side is multiplied by -1.
I therefore need to access the previous state of the ODE solver and use it in comparison with the current iteration. In MATLAB there is the possibility of using "OutputFcn" while solving an ODE. This function is called after every iteration of the integrator. In the function it is therefore possible to simply extract the state as a variable and use it in the next iteration. I have not been able to find something similar for Python.
import numpy as np
from scipy.integrate import solve_ivp

def RHS(timesnotused, x):
    # t_span must be the two endpoints; intermediate output times go in t_eval
    out = solve_ivp(doubleGyreVar, [0, T], [x[0], x[1], 1, 0, 0, 1],
                    t_eval=[0, T/2, T], rtol=1e-10, atol=1e-10)
    output = out.y
    J = output[2:, -1].reshape(2, 2)
    CG = np.matmul(J.T, J)
    lambdas, xis = np.linalg.eig(CG)
    xi_1 = xis[:, np.argmin(lambdas)]  # eigenvectors are the *columns* of xis
    xi_2 = xis[:, np.argmax(lambdas)]
    lambda_1 = np.min(lambdas)
    lambda_2 = np.max(lambdas)
    alpha = ((lambda_2 - lambda_1) / (lambda_2 + lambda_1))**2
    sign = 1
    if np.dot(xlast, xi_1) < 0:  # xlast: the previous solution -- the open question
        sign = -1
    return sign * alpha * xi_1
As can be seen, I want "xlast" to be the previous solution and compare it with xi_1 of the current iteration. Somehow xlast needs to be updated on every iteration.

If the differential equation you want to solve is
dx/dt = sign(g(x)) * F(x)
for some sufficiently smooth functions g, F, then you have a discontinuous right-hand side, where all advanced numerical solvers will produce nonsense as soon as they approach this jump singularity.
The cleanest method to solve such a multi-phase system is to present only continuous right-hand sides to the numerical solver and to handle the phase change via the event mechanism that is also present in scipy.integrate.solve_ivp.
I explored a mechanism to do that with the tools of scipy in a similar problem, where a sign function produced the discontinuity: see "odeint returns wrong results for an ODE including discrete function".
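A rough illustration of that pattern (F and g below are placeholders, not the vector field from the question): integrate with a frozen sign, let a terminal event stop the solver at the zero crossing of the switching function, flip the sign, and restart from the event point.
import numpy as np
from scipy.integrate import solve_ivp

def F(t, x):          # smooth vector field (placeholder)
    return np.array([x[1], -x[0]])

def g(x):             # switching function (placeholder)
    return x[0] - 0.5

def solve_with_switching(x0, t0, tf, sign=1.0):
    pieces = []
    while t0 < tf:
        rhs = lambda t, x, s=sign: s * F(t, x)   # continuous RHS within one phase
        event = lambda t, x: g(x)
        event.terminal = True
        event.direction = -1.0   # fire only when g decreases through zero, so the
                                 # restart (where g grows away from 0) is not retriggered
        sol = solve_ivp(rhs, (t0, tf), x0, events=event, rtol=1e-10, atol=1e-10)
        pieces.append(sol)
        if sol.status != 1:      # no event fired: we reached tf
            break
        t0, x0 = sol.t[-1], sol.y[:, -1]   # restart exactly at the switching point
        sign = -sign                        # flip the right-hand side for the next phase
    return pieces

pieces = solve_with_switching(np.array([1.0, 0.0]), 0.0, 10.0)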

Related

Imposing monotonicity with scipy.optimize.minimize

I am trying to minimize a function of a vector of length 20, but I want to constrain the solution to be monotonic, i.e.
x[1] <= x[2] <= ... <= x[20]
I have tried to implement this in the following way using "constraints" for this routine:
cons = tuple([{'type': 'ineq', 'fun': lambda x: x[i] - x[i-1]} for i in range(1, len(node_vals))])
res = sp.optimize.minimize(localisation, b, args=(d,), constraints=cons)  # optimize
However, the results I get are not monotonic, even when the initial guess b is; the optimizer seems to be completely ignoring the constraints. What could be going wrong? I have also tried changing the constraint to x[i]**3 - x[i+1]**3 to make it "smoother", but it didn't help at all. My objective function, localisation, is the integral of the solution to an eigenvalue problem whose parameters are defined beforehand:
def localisation(node_vals, domain):  # calculate localisation for solutions with piecewise linear grading
    f = piecewise(node_vals, domain)  # create piecewise linear function using given values at nodes
    # plt.plot(domain, f(domain))
    M = diff_matrix(f(domain))  # differentiation matrix created from piecewise linear function
    m = np.concatenate(([0], get_solutions(M)[1][:, 0], [0]))
    integral = num_int(domain, m)
    return integral
You didn't post a minimal reproducible example that we can run. However, did you try to specify which optimization algorithm to use in SciPy? Something like this:
res = sp.optimize.minimize(localisation, b, args=(d,), constraints=cons, method='SLSQP')
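Independently of the solver, there is also a Python pitfall in the cons construction above: all the lambdas created in the comprehension close over the same variable i, so once the comprehension finishes they all evaluate the constraint for the final value of i. Binding i as a default argument gives each constraint its own index:
cons = tuple({'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i-1]}
             for i in range(1, len(node_vals)))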
I'm having a very similar problem but with additional upper and lower bounds on the monotonicity property. I'm tackling the problem like this (maybe it helps you):
Use the trust-region constrained algorithm provided by SciPy (method='trust-constr'). It offers a way of dealing with linear constraints in matrix form:
lb <= A.dot(x) <= ub
where lb and ub are the lower and upper bounds of the constraint problem and A is the matrix representing the linear constraints.
Every row of the matrix A is a linear term which defines one constraint.
If, for example, x[0] <= x[1], then this can be transformed into x[0] - x[1] <= 0, which in terms of the linear constraint matrix A looks like the row [1, -1, ...], provided that the upper-bound vector has a 0 entry at that position (the other way round is also possible; either way, having at least one of the two bounds makes this easy).
Setting up enough of these inequalities, and merging a couple of them into a single inequality where possible, creates a sufficient matrix to solve this; see the sketch below.
Hope this helps a bit; it did the job for my problem.
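A minimal sketch of that setup for the monotonicity constraint on a vector of length 20 (the objective here is a stand-in for the actual one, e.g. localisation):
import numpy as np
from scipy.optimize import minimize, LinearConstraint

n = 20
# Row i encodes x[i] - x[i+1] <= 0, i.e. x[i] <= x[i+1].
A = np.zeros((n - 1, n))
for i in range(n - 1):
    A[i, i] = 1.0
    A[i, i + 1] = -1.0
monotone = LinearConstraint(A, lb=-np.inf, ub=0.0)

def objective(x):  # stand-in objective
    target = np.linspace(1.0, 0.0, n)  # a decreasing target, so the constraint must bind
    return np.sum((x - target) ** 2)

res = minimize(objective, np.zeros(n), method='trust-constr', constraints=[monotone])
print(np.all(np.diff(res.x) >= -1e-8))  # True: non-decreasing up to solver tolerance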

Performance issue with Scipy's solve_bvp and coupled differential equations

I'm facing a problem while trying to implement the coupled differential equation below (also known as the single-mode coupling equation) in Python 3.8.3. As the solver, I am using SciPy's scipy.integrate.solve_bvp. I want to solve the equations in the complex domain, for different values of the propagation axis (z) and different values of beta (beta_analysis).
The problem is that it is extremely slow (not manageable) compared with an equivalent implementation in Matlab using the functions bvp4c, bvpinit and bvpset. Evaluating the first few iterations of both executions shows that they return the same results, except for the resulting mesh, which is much larger in the case of SciPy; the mesh sometimes even saturates to the maximum value.
The coupling equation is implemented below, along with the boundary-conditions function.
import h5py
import numpy as np
from scipy import integrate

def coupling_equation(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty_like(a).astype(complex)
    for idx, z_i in enumerate(z_mesh):
        beta_zf_i = np.interp(z_i, z_a, beta_zf)  # Get beta at the desired point of the mesh
        ka_z_i = np.interp(z_i, z_a, ka_z)        # Get ka at the desired point of the mesh
        coupling_matrix = np.empty((2, 2), complex)
        coupling_matrix[0] = [-1j * beta_zf_i, ka_z_i]
        coupling_matrix[1] = [ka_z_i, 1j * beta_zf_i]
        a_p[:, idx] = np.matmul(coupling_matrix, a[:, idx])  # Apply the coupling matrix
    return a_p

def boundary_conditions(a_a, a_b):
    return np.hstack(((a_a[0] - 1), a_b[1]))
Moreover, I couldn't find a way to pass k, z and beta_zf as arguments of the function coupling_equation, given that the fun argument of the solve_bvp function must be a callable with the parameters (x, y). My approach is to define some global variables, but I would appreciate any help on this too if there is a better solution.
The analysis function which I am trying to code is:
def analysis(k, z, beta_analysis, max_mesh):
    s11_analysis = np.empty_like(beta_analysis, dtype=complex)
    s21_analysis = np.empty_like(beta_analysis, dtype=complex)
    initial_mesh = np.linspace(z[0], z[-1], 10)  # Initial mesh of 10 samples along L
    mesh = initial_mesh
    # a_init must be complex in order to solve the problem in a complex domain
    a_init = np.vstack((np.ones(np.size(initial_mesh)).astype(complex),
                        np.zeros(np.size(initial_mesh)).astype(complex)))
    for idx, beta in enumerate(beta_analysis):
        print(f"Iteration {idx}: beta_analysis = {beta}")
        global beta_zf
        beta_zf = beta * np.ones(len(z))  # Global variable so as to use it in coupling_equation(x, y)
        a = integrate.solve_bvp(fun=coupling_equation,
                                bc=boundary_conditions,
                                x=mesh,
                                y=a_init,
                                max_nodes=max_mesh,
                                verbose=1)
        # mesh = a.x    # Mesh for the next iteration
        # a_init = a.y  # Initial guess for the next iteration, corresponding to the current solution
        s11_analysis[idx] = a.y[1][0]
        s21_analysis[idx] = a.y[0][-1]
    return s11_analysis, s21_analysis
I suspect that the problem has something to do with the initial guess that is being passed to the different iterations (see commented lines inside the loop in the analysis function). I try to set the solution of an iteration as the initial guess for the following (which must reduce the time needed for the solver), but it is even slower, which I don't understand. Maybe I missed something, because it is my first time trying to solve differential equations.
The parameters used for the execution are the following:
f2 = h5py.File(r'path/to/file', 'r')
k = np.array(f2['k']).squeeze()
z = np.array(f2['z']).squeeze()
f2.close()

analysis_points = 501
max_mesh = 1e6
beta_0 = 3e2
beta_low = 0       # Lower value of the frequency for the analysis
beta_up = beta_0   # Upper value of the frequency for the analysis
beta_analysis = np.linspace(beta_low, beta_up, analysis_points)
s11_analysis, s21_analysis = analysis(k, z, beta_analysis, max_mesh)
Any ideas on how to improve the performance of these functions? Thank you all in advance, and sorry if the question is not well formulated; I accept any suggestions about this.
Edit: Added some information about performance and sizing of the problem.
In practice, I can't find a relation that determines the number of times coupling_equation is called; it must be a matter of the internal operation of the solver. I counted the calls in one iteration by printing a line, and it was called 133 times (this was one of the fastest iterations). This must be multiplied by the number of iterations over beta. For the iteration analyzed, the solver returned this:
Solved in 11 iterations, number of nodes 529.
Maximum relative residual: 9.99e-04
Maximum boundary residual: 0.00e+00
The shapes of a and z_mesh are correlated, since z_mesh is a vector whose length corresponds to the size of the mesh, which is recalculated by the solver each time it calls coupling_equation. Given that a contains the amplitudes of the progressive and regressive waves at each point of z_mesh, the shape of a is (2, len(z_mesh)).
In terms of computation times, I only managed to complete 19 iterations in about 2 hours with Python. In this case, the initial iterations were faster, but they start to take more time as the mesh grows, up to the point where the mesh saturates at the maximum allowed value. I think this is because of the value of the input coupling coefficients at that point, because it also happens when no loop over beta_analysis is executed (just the solve_bvp call for the intermediate value of beta). Matlab, by contrast, managed to return a solution for the entire problem in approximately 6 minutes. If I pass the result of the last iteration as the initial guess (commented lines in the analysis function), the mesh overflows even faster and it is impossible to get more than a couple of iterations.
Based on semi-random inputs, we can see that max_mesh is sometimes reached. This means that coupling_equation can be called with a quite big z_mesh and a arrays. The problem is that coupling_equation contains a slow pure-Python loop iterating on each column of the arrays. You can speed the computation up a lot using Numpy vectorization. Here is an implementation:
def coupling_equation_fast(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z_a, beta_zf)  # Get beta at the desired points of the mesh
    ka_z_i = np.interp(z_mesh, z_a, ka_z)        # Get ka at the desired points of the mesh
    # Fast manual matrix multiplication
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p
This code provides a similar output with semi-random inputs compared to the original implementation but is roughly 20 times faster on my machine.
Furthermore, I do not know whether max_mesh is also big with your inputs, or whether this is normal/intended. It may make sense to decrease the value of max_mesh in order to reduce the execution time even more.
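Regarding the side question about globals: solve_bvp's fun must be a callable of (x, y) only, but the extra parameters can be bound with functools.partial (or a closure) instead of globals. A minimal sketch, assuming the definitions from the question (k, z, beta_zf, mesh, a_init, max_mesh, boundary_conditions) are in scope:
from functools import partial

def coupling_equation_params(z_mesh, a, k, z, beta_zf):
    # Same computation as coupling_equation_fast, but with explicit parameters
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z, beta_zf)
    ka_z_i = np.interp(z_mesh, z, k)
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p

fun = partial(coupling_equation_params, k=k, z=z, beta_zf=beta_zf)
a = integrate.solve_bvp(fun=fun, bc=boundary_conditions, x=mesh, y=a_init,
                        max_nodes=max_mesh, verbose=1)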

Runge-Kutta fourth order method. Integrating backwards

I am using a Runge-Kutta fourth order method to solve numerically the usual equation of motion of a background scalar field in curved spacetime with a quartic potential:
$\phi^{''}=-3\left(1+\frac{H^{'}}{3H}\right)\phi^{'}-\lambda\phi^3/H^2$,
$'$ denoting the derivative w.r.t. the e-folds number $\textrm{d}N=H\textrm{d}t$ and, from the Friedmann equation:
$H^2=\frac{\lambda \phi^4}{4}\frac{1}{3M_{Pl}^2-(1/2)\phi^{'2}}$;
$H^{'}=-\frac{1}{2M_{Pl}^2}H\phi^{'2}$.
The problem comes when integrating backwards, using as initial conditions the final values obtained from the forward integration. The outcome blows up without matching the values obtained before, during the forward integration. I simply do not understand where the problem is, as both the equation and the code are well known. First I integrated from 0 to 64 e-folds; then I simply reversed the integration direction.
I attach the code too:
import copy
import numpy as np

def rk4trial(f, v0, t0, tf, n, V):
    t = np.linspace(t0, tf, n)
    h = t[1] - t[0]
    v = np.array((n + 1) * [v0])
    for j in range(n):
        V.append(v[j])
        V[j] = copy.deepcopy(V[j])
        k1 = f(v[j], t[j]) * h
        k2 = f(v[j] + (1/2)*k1, t[j] + (1/2)*h) * h
        k3 = f(v[j] + (1/2)*k2, t[j] + (1/2)*h) * h
        k4 = f(v[j] + k3, t[j] + h) * h
        v[j+1] = v[j] + (k1 + 2*k2 + 2*k3 + k4) / 6
    return V, t, h
def Fdet(v, t):
    phi, sigma = v
    H = (((lamb/4) * phi**4) / (3*mpl**2 - (1/2)*sigma**2))**(1/2)
    HH = -((1/2) * sigma**2) * (1/mpl**2)
    return np.array([sigma, -3*(1 + HH/3)*sigma - lamb*phi**3/(H**2)])
PS: This question has been posted here too: https://scicomp.stackexchange.com/questions/33583/runge-kutta-fourth-order-method-integrating-backwards, where the equations are shown in detail.
EDIT: Unnecessary parts in the code removed.
EDIT: In response to @LutzL, I attach both the plots of $\phi/M_{Pl}$ and $\phi^{'}$ after integrating forward (solid lines) and backwards (dashed lines), obtained by doing what they said. As can be seen, there is a sudden deviation from the results of the forward integration that I cannot explain.
I would reduce the RK4 method to the minimal necessary. It is not necessary to have a v array partially duplicating the contents of the V array, so:
def rk4trial(f, v0, t0, tf, n, V):
    t = np.linspace(t0, tf, n)
    h = t[1] - t[0]
    v = v0
    for j in range(n):
        V.append(v)
        k1 = f(v, t[j]) * h
        k2 = f(v + 0.5*k1, t[j] + 0.5*h) * h
        k3 = f(v + 0.5*k2, t[j] + 0.5*h) * h
        k4 = f(v + k3, t[j] + h) * h
        v = v + (k1 + 2*k2 + 2*k3 + k4) / 6
    return V, t, h
There are no copy issues here, as v is constructed anew in every step, so that the objects appended to the return array are all separate.
The backward integration should be as simple as the forward integration,
V1, t1, h1 = rk4trial(Fdet,v0,t0,tf,n,[])
V2, t2, h2 = rk4trial(Fdet,V1[-1],tf,t0,n,[])
and V2[k] should be the same as V1[-k-1] within the bounds of the method error. Large differences are only to be expected for stiff ODEs.
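A quick way to quantify that agreement, assuming V1 and V2 were filled as above:
# Maximum pointwise deviation between the forward trajectory and the
# reversed backward trajectory; it should be of the order of the method error.
err = max(np.max(np.abs(np.asarray(V1[-k-1]) - np.asarray(V2[k])))
          for k in range(len(V2)))
print(f"max forward/backward deviation: {err:.3e}")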

scipy.sparse.linalg.eigsh does not work for generalised eigenvalues

I'm working on a machine learning project which involves doing a Principal Component Analysis on some labeled data and using those labels to extract more valuable information from the data.
To do that, I'm calculating a scatter matrix for each class, and for each pair of classes I need to solve a generalised eigenvalue problem for their scatter matrices, as follows:
S_i * v = w * (S_j + b.I) * v
where b is a multiplier and I is the identity matrix. Now, this is the code in python:
from numpy import mean, eye, shape
from scipy.sparse.linalg import eigsh

jeigenvalues = eigsh(scatter_j, k=10, return_eigenvectors=False, maxiter=100)
print('eigenvalues made')
beta = betaMult * mean(jeigenvalues)
print(beta)
print(scatter_j + beta * eye(shape(x_data)[1]))
w, v = eigsh(scatter_i, M=scatter_j + beta * eye(shape(x_data)[1]),
             k=int(numberOfEVs/45), maxiter=100)
print(i, j, 'done')
numberOfEVs is 90 in my current code (so that it's divisible by 45).
But the problem is that at the line where I use eigsh for the aforementioned formula, it never gives me an answer. It keeps eating more and more memory without even completing a single iteration (I set its maxiter input to 1, and it still didn't give an answer). When I don't give the eigsh function the M argument (the matrix on the right side of the generalised EV problem, which is assumed to be the identity when not specified), it works correctly. But when M is provided, it becomes unresponsive.
Any ideas?
EDIT: The scatter matrices have rather small entries, mostly around 10^-5. I've also tried multiplying the left-hand side by the inverse of the RHS matrix, and again it has the same issue (it runs for a long time without an answer). Is the smallness of these entries the issue? How can I solve it, then?
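For what it's worth, if the scatter matrices are dense and of moderate size, the generalised symmetric problem can also be handed to the dense solver scipy.linalg.eigh, which sidesteps the iterative machinery entirely. A minimal sketch, with random symmetric stand-ins for the scatter matrices:
import numpy as np
from scipy.linalg import eigh

n = 200
rng = np.random.default_rng(0)
S_i = rng.standard_normal((n, n)); S_i = S_i @ S_i.T  # symmetric stand-in for scatter_i
S_j = rng.standard_normal((n, n)); S_j = S_j @ S_j.T  # symmetric stand-in for scatter_j
beta = 1e-3

# Solves S_i v = w (S_j + beta I) v; the right-hand matrix must be positive definite,
# which is exactly what the beta * I shift is there to guarantee.
w, v = eigh(S_i, S_j + beta * np.eye(n))
print(w[-10:])  # the ten largest generalised eigenvalues (eigh returns ascending order)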

scipy.integrate.ode with two coupled ODEs?

I'm currently trying to use SciPy's integrate.ode package to solve a pair of coupled first-order ODEs: say, the Lotka-Volterra predator-prey equations. However, this means that during the integration loop I have to update the parameters I'm sending to the methods on every iteration, and simply keeping track of the previous value and calling set_f_params() on each iteration doesn't seem to be doing the trick.
hprev = Ho
pprev = Po
yh = np.zeros(0)
yp = np.zeros(0)
while dh.successful() and dp.successful() and dp.t < endtime and dh.t < endtime:
    hparams = [alpha, beta, pprev]
    pparams = [delta, gamma, hprev]
    dh.set_f_params(hparams)
    dp.set_f_params(pparams)
    dh.integrate(dh.t + stepsize)
    dp.integrate(dp.t + stepsize)
    yh = np.append(yh, dh.y)
    yp = np.append(yp, dp.y)
    hprev = dh.y
    pprev = dp.y
The values I'm setting at each iteration through set_f_params don't seem to be propagated to the callback methods. This wasn't terribly surprising, given that none of the examples on the web seem to involve "live" variable passing to the callbacks, but it was the only way I could think of to get these values into the callback methods.
Does anyone have any advice on how to use SciPy to numerically integrate these ODEs?
I could be wrong, but this example seems very close to your problem. :) It uses odeint to solve the system of ODEs.
I had a similar issue. It turns out the integrator doesn't re-evaluate the differential equation function on every call of integrate(), but does so at its own internal times. I changed the max_step option of the integrator to be the same as stepsize, and that worked for me.
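For reference, the usual way around the hand-rolled coupling loop is to integrate both populations as a single state vector, so the solver sees the coupling directly and no per-step parameter passing is needed. A minimal sketch with odeint (the parameter values are illustrative):
import numpy as np
from scipy.integrate import odeint

alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5  # illustrative values

def lotka_volterra(y, t):
    h, p = y  # prey, predator
    return [alpha * h - beta * h * p,
            delta * h * p - gamma * p]

t = np.linspace(0.0, 30.0, 3000)
sol = odeint(lotka_volterra, [10.0, 5.0], t)  # columns: prey, predator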
