Regularizing viscosity with scipy's ode solvers

Consider for the sake of simplicity the following equation (the viscous Burgers equation):
d/dt u = -u*dx(u) + nu*d2x(u)
Let's solve it using scipy (in my case scipy.integrate.ode.set_integrator("zvode", ..).integrate(T)) with a variable time-step solver.
The issue is the following: if we use the naïve explicit implementation in Fourier space,
uhat[t+dt] = uhat[t] + dt * ( -i*k*conv(uhat[t],uhat[t])(k) - nu*k^2*uhat[t] )
then the viscosity term nu * d2x(u[t]) (i.e. -nu*k^2*uhat[t]) can cause an overshoot if the time step is too big. This can lead to a fair amount of noise in the solutions, or even to (fake) diverging solutions (even with stiff solvers, on slightly more complex versions of this equation).
One way to regularize this is to evaluate the viscosity term at step t+dt, so that the update step becomes
uhat[t+dt] = ( uhat[t] + dt * ( -i*k*conv(uhat[t],uhat[t])(k) ) ) / ( 1 + nu*k^2*dt )
This solution works well when programmed explicitly. How can I use scipy's variable-step ode solver to implement it? To my surprise, I haven't found any documentation on this fairly elementary but thorny issue...
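For reference, a minimal fixed-step sketch of the semi-implicit update above; the grid size, viscosity, time step, and initial condition are illustrative choices, not from the original problem:

import numpy as np

n, L, nu, dt, steps = 256, 2*np.pi, 0.01, 1e-3, 1000
x = np.linspace(0, L, n, endpoint=False)
k = 2*np.pi * np.fft.fftfreq(n, d=L/n)       # angular wavenumbers
uhat = np.fft.fft(np.sin(x))                 # initial condition in Fourier space

for _ in range(steps):
    u = np.fft.ifft(uhat).real
    ux = np.fft.ifft(1j * k * uhat).real
    nonlin_hat = np.fft.fft(-u * ux)         # -u*dx(u), evaluated pseudo-spectrally
    # explicit nonlinearity, implicit viscosity: the factor 1/(1 + nu*k^2*dt)
    # damps high wavenumbers instead of amplifying them
    uhat = (uhat + dt * nonlin_hat) / (1.0 + nu * k**2 * dt)

u_final = np.fft.ifft(uhat).real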

You actually can't; or, at the other extreme, odeint and ode->zvode already do that for any given problem.
To the first: you would need to give the two parts of the equation separately, and that is simply not part of the solver interface. Look at DDE and SDE solvers, where such a partition of the equation is actually required.
To the second: odeint and ode->zvode use implicit multi-step methods, which means that the value of u(t+dt) and the right-hand side evaluated there already enter the computation and the underlying local approximation.
You could still try to hack your original approach into the solver by providing a Jacobian function that only contains the second derivative term, but quite probably you will not achieve an improvement.
You could operator-partition the ODE and solve the linear part separately by introducing
vhat(k,t) = exp(nu*k^2*t)*uhat(k,t)
so that
d/dt vhat(k,t) = -i*k*exp(nu*k^2*t)*conv(uhat(.,t),uhat(.,t))(k)
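A sketch of that substitution with scipy.integrate.solve_ivp (which accepts complex state vectors); grid, viscosity, final time, and initial condition are again illustrative:

import numpy as np
from scipy.integrate import solve_ivp

n, L, nu = 256, 2*np.pi, 0.01
x = np.linspace(0, L, n, endpoint=False)
k = 2*np.pi * np.fft.fftfreq(n, d=L/n)

def rhs_vhat(t, vhat):
    E = np.exp(nu * k**2 * t)            # integrating factor
    uhat = vhat / E                      # recover uhat from vhat
    u = np.fft.ifft(uhat).real
    ux = np.fft.ifft(1j * k * uhat).real
    nonlin_hat = np.fft.fft(-u * ux)     # nonlinear term in Fourier space
    return E * nonlin_hat                # the stiff linear term has been absorbed

vhat0 = np.fft.fft(np.sin(x)).astype(complex)   # E(0) = 1, so vhat(0) = uhat(0)
sol = solve_ivp(rhs_vhat, (0.0, 1.0), vhat0, rtol=1e-8, atol=1e-10)
uhat_T = sol.y[:, -1] * np.exp(-nu * k**2 * sol.t[-1])
u_T = np.fft.ifft(uhat_T).real

Note that exp(nu*k^2*t) grows quickly for large k, so for long integrations the factor is usually reset at each step (as in exponential/Lawson integrators); the sketch keeps it global for brevity.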

Related

Matrix Trace Minimization Problem using optimization method/solver

Is there any optimization method/solver in mystic or the scipy.optimize library that solves the problem in the matrix domain? In other words, is there any optimization method/solver that accepts a matrix as an argument and minimizes its trace?
The trace of a matrix is:
tr(A) = sum(i, a[i,i])
So, depending on the rest of the model, (almost) any solver will allow you to use an objective like this. It is linear and is as easy and well-behaved as it gets. It is about the best objective you will ever see.
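As a concrete illustration with scipy.optimize.minimize, working on the flattened matrix (the inequality constraint here is hypothetical, just to keep the problem bounded):

import numpy as np
from scipy.optimize import minimize

n = 3                                        # illustrative matrix size

def trace_objective(x):
    A = x.reshape(n, n)                      # solvers work on flat vectors
    return np.trace(A)

# hypothetical constraint: diagonal entries must stay >= 1
cons = {'type': 'ineq', 'fun': lambda x: x.reshape(n, n).diagonal() - 1.0}

res = minimize(trace_objective, np.eye(n).flatten(), constraints=cons)
print(res.x.reshape(n, n))                   # trace driven down to the bound, 3.0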

SciPy rootfinding algorithm 'gives up' too fast

Is there any way to force the 'hybr' method of scipy.optimize's root to keep working even after it decides that convergence is too slow? In my problem, the solver nearly reaches the desired precision, but right before that, the algorithm terminates because of slow convergence... Is it possible to make 'hybr' more 'self-confident'?
I use the root-finding algorithm root from scipy.optimize module to solve a system of two algebraic, non-linear equations. Since the equations have to be solved many times for various parameter values it is important to find a numerical method that would be most stable for this problem.
I have compared the performance of all the methods provided by scipy.optimize module. To visualize their performance I have used the following procedure:
The algebraic equations were rearranged so that they have zero on the R.H.S.
Then, at each step made by the algorithm, the sum of the L.H.S. squared of all the equations was computed and printed.
In my case, the most efficient method is the default "hybr". Other built-in methods either do not converge at all or are significantly slower. Unfortunately, in some cases the desired method gives up too fast. Lowering the precision and/or providing additional options to the functions did not help.
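A minimal sketch of the residual-tracking procedure described above (the two equations are hypothetical placeholders, already rearranged to F(x) = 0):

import numpy as np
from scipy.optimize import root

def equations(x):
    # hypothetical system with zero on the R.H.S.
    return [x[0]**2 + x[1] - 3.0,
            x[0] - x[1]**3 + 1.0]

def tracked(x):
    f = np.asarray(equations(x))
    print(np.sum(f**2))                  # sum of squared residuals per evaluation
    return f

sol = root(tracked, [1.0, 1.0], method='hybr')
print(sol.status, sol.message)

If 'hybr' stops with a "not making good progress" status, restarting it from sol.x sometimes recovers the last bit of precision, since the internal trust region is reset.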

Checkgradient without solving optimization problem in MATLAB

I have a relatively complicated function and I have calculated the analytical form of the Jacobian of this function. However, sometimes, I mess up this Jacobian.
MATLAB has a nice way to check the accuracy of the Jacobian when using some optimization technique, as described here.
The problem though is that it looks like MATLAB solves the optimization problem and then returns if the Jacobian was correct or not. This is extremely time consuming, especially considering that some of my optimization problems take hours or even days to compute.
Python has a somewhat similar function in scipy, as described here, which just compares the analytical gradient with a finite-difference approximation of the gradient for some user-provided input.
Is there anything I can do to check the accuracy of the Jacobian in MATLAB without having to solve the entire optimization problem?
A laborious but useful method I've used for this sort of thing is to check that the (numerical) integral of the purported derivative equals the difference of the function values at the end points. I have found this more convenient than comparing fractions like (f(x+h)-f(x))/h with f'(x), because of the difficulty of choosing h: on the one hand, h must not be so small that the fraction is dominated by rounding error; on the other, h must be small enough that the fraction is close to f'(x).
In the case of a function F of a single variable, the assumption is that you have code f to evaluate F and code fd, say, to evaluate F'. The test is then, for various intervals [a,b], to look at the differences, which the fundamental theorem of calculus says should be 0:
Integral{ a<=x<=b | fd(x) } - (f(b) - f(a))
with the integral being computed numerically. There is no need for the intervals to be small.
Part of the error will, of course, be due to the error in the numerical approximation of the integral. For this reason I tend to use, for example, an order-40 Gauss-Legendre integrator.
For functions of several variables, you can test one variable at a time. For several functions, these can be tested one at a time.
I've found that these tests, which are of course by no means exhaustive, show up the kinds of mistakes that occur in computing derivatives quite readily.
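In Python terms, the test looks roughly like this (f and fd are a hypothetical function/derivative pair; scipy's fixed_quad supplies the Gauss-Legendre quadrature):

import numpy as np
from scipy.integrate import fixed_quad

f = lambda x: np.sin(x) * np.exp(-x)                   # hypothetical function
fd = lambda x: np.exp(-x) * (np.cos(x) - np.sin(x))    # its purported derivative

def derivative_defect(a, b):
    # fundamental theorem of calculus: this should be ~0 if fd really is f'
    integral, _ = fixed_quad(fd, a, b, n=40)           # order-40 Gauss-Legendre
    return integral - (f(b) - f(a))

for a, b in [(0.0, 1.0), (-2.0, 3.0), (1.0, 10.0)]:
    print(a, b, derivative_defect(a, b))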
Have you considered using complex-step differentiation to check your gradient? See this description.
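A minimal sketch of the complex-step check (the test function is hypothetical, and the method only works if the function is implemented with complex-safe operations):

import numpy as np

def complex_step_grad(f, x, h=1e-20):
    # Im(f(x + i*h*e_j))/h approximates df/dx_j with no subtractive cancellation,
    # so h can be made tiny without the step-size dilemma of finite differences
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    for j in range(x.size):
        xc = x.astype(complex)
        xc[j] += 1j * h
        g[j] = f(xc).imag / h
    return g

f = lambda x: np.sin(x[0]) * x[1]**2
print(complex_step_grad(f, [0.7, 1.3]))   # compare with [cos(0.7)*1.3**2, 2*sin(0.7)*1.3]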

Deterministic and stochastic part of an equation

I'm on the lookout for a numerical method that can solve both a deterministic and a stochastic equation. In the deterministic case, I know that a fourth-order RK method is a valuable and very effective one. Unfortunately, it has not been applied to stochastic equations successfully (at least as far as I know).
Now what I want to know is whether a numerical method exists that can solve both kinds of equations (roughly, I mean, in comparison to the analytic solutions) and, if so, what it would be. A stochastic equation that is analytically solvable would be the Black-Scholes one, for instance.
There are methods for solving these kinds of equations in DifferentialEquations.jl. Stochastic differential equations are a form of mixed deterministic and stochastic equation, and solving them is shown in the SDE tutorial. Mixing discrete stochasticity with deterministic equations is shown in the jump equation tutorial. While written natively in Julia, it is accessible in Python via the package diffeqpy. Notice that it has some example stochastic differential equations in the README.
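The answer above points to diffeqpy; as a library-free illustration of the same idea, here is a minimal Euler-Maruyama sketch for the Black-Scholes dynamics dS = mu*S dt + sigma*S dW, compared against the exact solution on the same noise path (with sigma = 0 it reduces to the plain deterministic Euler method; all parameter values are illustrative):

import numpy as np

mu, sigma, S0, T, n = 0.05, 0.2, 100.0, 1.0, 10000
dt = T / n
rng = np.random.default_rng(0)

S, W = S0, 0.0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))    # Brownian increment
    S += mu * S * dt + sigma * S * dW    # Euler-Maruyama step
    W += dW                              # accumulate the path for comparison

exact = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W)
print(S, exact)                          # close for small dt (strong order 1/2)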

Solve differential equation in Python when I don't know the derivative analytically

I'm trying to solve a first-order ODE in Python:
d/dt Gamma(t) = u(t) * Gamma(t)
where Gamma and u are square matrices.
I don't explicitly know u(t) at all times, but I do know it at discrete timesteps from doing an earlier calculation.
Every example of Python's solvers that I found online (e.g. this one for scipy.integrate.odeint and scipy.integrate.ode) knows the expression for the derivative analytically as a function of time.
Is there a way to call these (or other differential equation solvers) without knowing an analytic expression for the derivative?
For now, I've written my own Runge-Kutta solver and jitted it with numba.
You can use any of the SciPy interpolation methods, such as interp1d, to create a callable function based on your discrete data, and pass it to odeint. Cubic spline interpolation,
f = interp1d(x, y, kind='cubic')
should be good enough.
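Putting that together for the matrix-valued case (the sample data here is a hypothetical stand-in for the earlier calculation; interp1d interpolates along the time axis):

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import odeint

n = 2
ts = np.linspace(0.0, 1.0, 50)
# hypothetical discrete samples of u(t), shape (len(ts), n, n)
u_samples = np.array([[[np.cos(t), -np.sin(t)],
                       [np.sin(t),  np.cos(t)]] for t in ts])

# cubic interpolation along time gives a callable u(t); extrapolation lets the
# solver's internal steps peek slightly outside the data range
u_of_t = interp1d(ts, u_samples, axis=0, kind='cubic', fill_value="extrapolate")

def rhs(flat_Gamma, t):
    Gamma = flat_Gamma.reshape(n, n)
    return u_of_t(t).dot(Gamma).flatten()

sol = odeint(rhs, np.eye(n).flatten(), ts)
Gamma_T = sol[-1].reshape(n, n)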
Is there a way to call these (or other differential equation solvers) without knowing an analytic expression for the derivative?
Yes, none of the solvers you mentioned (nor most other solvers) require an analytic expression for the derivative. Instead they call a function you supply that has to evaluate the derivative for a given time and state. So, your code would roughly look something like:
def my_derivative(time, flat_Gamma):
    Gamma = flat_Gamma.reshape(dim_1, dim_2)   # restore the matrix shape
    u = get_u_from_time(time)                  # look up / interpolate your data
    dGamma_dt = u.dot(Gamma)
    return dGamma_dt.flatten()

from scipy.integrate import ode
my_integrator = ode(my_derivative)
…
The difficulty in your situation is rather that you have to ensure that get_u_from_time provides an appropriate result for every time with which it is called. Probably the most robust and easy solution is to use interpolation (see the other answer).
You can also try to match your integration steps to the data you have, but at least for scipy.integrate.odeint and scipy.integrate.ode this will be very tedious, as all the integrators use internal steps that are inconvenient for this purpose. For example, the fifth-order Dormand–Prince method (DoPri5) uses internal steps of 1/5, 3/10, 4/5, 8/9, and 1. This means that if you have temporally equidistant data for u, you would need 90 data points for each integration step (as 1/90 is the greatest common divisor of the internal steps). The only integrator that could make this remotely feasible is the Bogacki–Shampine integrator (RK23) from scipy.integrate.solve_ivp with internal steps of 1/2, 3/4, and 1.
