If I have a system of nonlinear ordinary differential equations, M(t,y) y' = F(t,y), what is the best method of solution when my mass matrix M is sometimes singular?
I'm working with the following system of equations:
If t=0, this reduces to a differential algebraic equation. However, even if we restrict to t>0, the system becomes a differential algebraic equation whenever y4=0, which I cannot avoid with a domain restriction (it is an integral part of the system I am trying to model). My only previous exposure to DAEs is the case where an entire row of the mass matrix is 0 -- but here my mass matrix is only sometimes singular.
What is the best way to implement this numerically?
So far, I've tried Python, where I add a small number (0.0001) to the main diagonal of M and invert it, solving y' = M^{-1}(t,y) F(t,y). However, this seems prone to instabilities, and I'm unsure whether this is a universally appropriate means of regularization.
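Roughly, the regularization looks like the sketch below (the 2x2 system here is a stand-in for my actual equations, and EPS is the fudge factor):

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS = 1e-4  # small number added to the diagonal of M

def M(t, y):
    # stand-in mass matrix: singular whenever y[1] == 0
    return np.array([[1.0, 0.0],
                     [0.0, y[1]]])

def F(t, y):
    # stand-in right-hand side
    return np.array([y[1], -y[0]])

def rhs(t, y):
    # regularize M, then solve M_reg @ y' = F rather than forming an inverse
    M_reg = M(t, y) + EPS * np.eye(len(y))
    return np.linalg.solve(M_reg, F(t, y))

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5], method="Radau")
```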
Python doesn't have any built-in functions to deal with mass matrices, so I've also tried coding this in Julia. However, DifferentialEquations.jl states explicitly that "Non-constant mass matrices are not directly supported: users are advised to transform their problem through substitution to a DAE with constant mass matrices."
I'm at a loss on how to accomplish this. Any insights on how to do this substitution or a better way to solve this type of problem would be greatly appreciated.
The following transformation leads to a constant mass matrix:
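One standard substitution of this kind (a sketch of the general technique; it may differ in detail from the formula originally given) introduces v = y' as an extra unknown, so that with the enlarged state (y, v) the system reads

\begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} y' \\ v' \end{pmatrix} = \begin{pmatrix} v \\ F(t,y) - M(t,y)\,v \end{pmatrix}

The mass matrix on the left is now constant (though singular), which is the constant-mass-matrix DAE form that DifferentialEquations.jl supports; the singularity of M now appears only in the algebraic equation 0 = F(t,y) - M(t,y) v.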
You need to handle the case of y_4 = 0 separately.
I'm attempting to write a simple implementation of the Newton-Raphson method in Python. I've already done so using the SymPy library; however, what I'm working on now will (ultimately) end up running in an environment where only NumPy is available.
For those unfamiliar with the algorithm, it works (in my case) as follows:
I have a system of symbolic equations, which I "stack" to form a matrix F. The unknowns are X, Y, Z, T (which I wish to determine). Some additional values are initially unknown, until they are passed to my solver, which substitutes these now-known values for the corresponding variables in the symbolic expressions.
Now, the Jacobian matrix (J) of F is computed. This, too, is a matrix of symbolic expressions.
Now I iterate for some number of steps (max_iter). In each iteration, I form a matrix A by substituting the current estimates for the unknowns X, Y, Z, T (starting with some initial values) into the Jacobian J, and similarly form a vector b by substituting the current estimates into F.
I then obtain new estimates by solving the matrix equation Ax = b for x. This vector x holds dT, dX, dY, dZ; I add these to the current estimates for T, X, Y, Z, and iterate again.
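In code, a single update of this scheme might look like the following (newton_step, F, and J are hypothetical names; F and J are callables returning the evaluated residual vector and Jacobian matrix):

```python
import numpy as np

def newton_step(F, J, est):
    """One Newton-Raphson update: solve J(est) @ delta = -F(est)."""
    A = J(est)                     # Jacobian evaluated at the current estimates
    b = -F(est)                    # negative residual (standard Newton convention)
    delta = np.linalg.solve(A, b)  # holds dT, dX, dY, dZ
    return est + delta
```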
Thus far, I've found my largest issue to be computing the Jacobian matrix. I only need to do this once; however, it will differ depending on the coefficients fed to the solver (these are not unknowns, but they are only known once passed to the solver, so I can't simply hard-code the Jacobian).
While I'm not terribly familiar with NumPy, I know that it offers numpy.gradient. I'm not sure, however, that this is the same as SymPy's .jacobian -- numpy.gradient differentiates arrays of sampled values rather than symbolic expressions.
How can the Jacobian matrix be found, either in "pure" Python, or with Numpy?
EDIT:
Should it be useful to you, more information on the problem can be found [here]. It can be formulated a few different ways; however, (as of now) I'm writing it as 4 equations of the form:
\sqrt{(X-x_i)^2+(Y-y_i)^2+(Z-z_i)^2 }= c * (t_i-T)
Where X,Y,Z and T are unknown.
This describes the solution to a localization problem, where we know (a) the location of n >= 4 observers in a 3-dimensional space, (b) the time at which each observer "saw" some signal, and (c) the velocity of the signal. The goal is to determine the coordinates of the signal source X,Y,Z (and, as a side effect, the time of emission, T).
Notice that I've tried (many) other approaches to solving this problem, and all leads point toward a combination of Newton-Raphson with regression.
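For illustration, a forward-difference Jacobian in plain NumPy can stand in for SymPy's .jacobian; the observer positions, arrival times, and signal speed below are made up (roughly consistent with a source near (0.3, 0.2, 0.1) emitting at T = 0):

```python
import numpy as np

c = 1.0                                   # signal speed (placeholder units)
obs_pos = np.array([[0., 0., 0.],         # made-up observer positions (n >= 4)
                    [1., 0., 0.],
                    [0., 1., 0.],
                    [0., 0., 1.]])
obs_t = np.array([0.374, 0.735, 0.860, 0.970])   # made-up arrival times

def F(u):
    """Residuals of sqrt((X-x_i)^2 + (Y-y_i)^2 + (Z-z_i)^2) - c*(t_i - T)."""
    X, Y, Z, T = u
    dists = np.linalg.norm(obs_pos - np.array([X, Y, Z]), axis=1)
    return dists - c * (obs_t - T)

def jacobian(F, u, h=1e-6):
    """Forward-difference approximation of the Jacobian of F at u."""
    f0 = F(u)
    J = np.empty((f0.size, u.size))
    for j in range(u.size):
        du = np.zeros_like(u)
        du[j] = h
        J[:, j] = (F(u + du) - f0) / h
    return J

u = np.array([0.1, 0.1, 0.1, 0.0])        # initial estimates for X, Y, Z, T
for _ in range(50):
    step = np.linalg.solve(jacobian(F, u), -F(u))
    u += step
    if np.linalg.norm(step) < 1e-12:
        break
```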
How can I solve a system of k differential equations where derivatives appear in every equation? I am trying to use SciPy's solve_ivp.
All the equations are of the following form:
C_i z_i' + \sum_{j \neq i} B_{ij} z_j' = A_i, with constant coefficients A_i, B_{ij}, C_i
How can this system of equations be numerically solved using any solver? Using solve_ivp, it seems you need to be able to write each equation's derivative independently of the others, which does not appear possible here once we have more than 2 equations.
If you set C[i]=B[i,i] then you can transform the equations to the linear system B*z'=A. This can be solved as
zdot = numpy.linalg.solve(B,A)
so that the derivative is the constant solution of a constant linear system, and the resulting solution for z is linear: z(t) = z(0) + zdot*t.
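A minimal sketch of that computation (B, A, and z0 below are made-up constants):

```python
import numpy as np

B = np.array([[2.0, 1.0],       # made-up constant coefficient matrix
              [1.0, 3.0]])
A = np.array([1.0, -1.0])       # made-up constant right-hand side
z0 = np.array([0.0, 0.0])       # initial condition z(0)

zdot = np.linalg.solve(B, A)    # constant derivative vector

def z(t):
    # the solution is linear in t because zdot is constant
    return z0 + zdot * t
```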
I'm trying to set up a system for solving these 5 coupled PDEs in FiPy to study the dynamics of electrons and holes in semiconductors.
The system of coupled PDEs
I'm struggling with defining the terms highlighted in blue, as they are products of one variable with the gradient of another. For example, I'm able to define the third equation like this without error messages:
eq3 = ImplicitSourceTerm(coeff=1, var=J_n) == ImplicitSourceTerm(coeff=e*mu_n*PowerLawConvectionTerm(var=phi), var=n) + PowerLawConvectionTerm(coeff=mu_n*k*T, var=n)
But I'm not sure if this is a good way. Is there a better way to define this non-linear term, please?
Also, if I wanted to define a term that would be product of two variables (say p and n), would it be just:
ImplicitSourceTerm(p, var=n)
Or is there a different way?
I am amazed that you don't get an error from passing a PowerLawConvectionTerm as a coefficient of an ImplicitSourceTerm. It's certainly not intended to work. I suspect you would get an error if you attempted to solve().
You should substitute your flux equations into your continuity equations so that you end up with three second-order PDEs for electron drift-diffusion, hole drift-diffusion, and Poisson's equation. It will hopefully then be a bit clearer how to use FiPy Terms to represent the different elements of those equations.
That said, these equations are challenging. Please see this issue and this notebook for some pointers on how to set up and solve these equations, but realize that we provide no examples in our documentation because we haven't been able to come up with anything robust enough. Solving for pseudo-Fermi levels has worked a bit better for me than solving for electron and hole concentrations.
ImplicitSourceTerm(p, var=n) is a reasonable way to represent the n*p recombination term.
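For concreteness, a rough sketch of what the substitution suggested above might produce for the electron continuity equation (the mesh, constants, and signs here are schematic placeholders, not taken from the original answer):

```python
from fipy import (CellVariable, Grid1D, TransientTerm, DiffusionTerm,
                  PowerLawConvectionTerm, ImplicitSourceTerm)

mesh = Grid1D(nx=100, dx=1e-9)   # hypothetical 1D mesh

n   = CellVariable(name="n",   mesh=mesh, hasOld=True)   # electron concentration
p   = CellVariable(name="p",   mesh=mesh, hasOld=True)   # hole concentration
phi = CellVariable(name="phi", mesh=mesh, hasOld=True)   # electrostatic potential

mu_n, D_n = 0.14, 3.6e-3   # placeholder mobility and diffusivity

# Electron continuity with the flux J_n substituted in, schematically:
#   dn/dt = div(mu_n * n * grad(phi)) + div(D_n * grad(n)) - (n*p recombination)
eq_n = (TransientTerm(var=n)
        == PowerLawConvectionTerm(coeff=mu_n * phi.faceGrad, var=n)
        + DiffusionTerm(coeff=D_n, var=n)
        - ImplicitSourceTerm(coeff=p, var=n))
```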
So, I have to solve numerically the following ODE: y'' + f(y)*(y')^2 = 0, originally between [y_i, y_f], where y = y(t) and y(0) = y_i. The main problem is that the function f(y) has a (regular) singular point (couldn't post the image).
Also, f(y) is obtained from long numerical calculations, so I have no analytical expression for it. The issue is that the singular point lies between y_i and y_f. So far, I have not found any method that helps me solve this kind of problem. It doesn't matter whether it is solved as a BVP or an IVP, but I need to cross the singularity.
What I have tried:
I approximated f(y) as -2/(y - y*), where y* is the singularity, and tried to solve the problem as an IVP using Runge-Kutta, odeint, and the Runge-Kutta-Fehlberg method. But I cannot cross the singularity.
I tried to cheat using RKF: as y tends to a constant, I manually increased it, forcing it across the singular point, but then the solution blows up to infinity, so this is not a valid solution.
The same as before, but using the numerically defined function f(y), not its approximation.
The same, but as a BVP using the shooting method and solve_bvp from SciPy.
You can immediately integrate after dividing by y': this gives ln(y') + F(y) = c, or equivalently y'*exp(F(y)) = C, where F' = f. So in principle you can compute t(y) by simple quadrature, since dt/dy = exp(F(y))/C, and then get the solution y(t) by inverting the table of function values.
Your example has F(y) = -2 ln|y - y*|, so it integrates to y' = C*(y - y*)^2, and (taking y* = 0) y(t) = y(0)/(1 - C*y(0)*t), which can itself produce a singularity in the solution formula because of the denominator. This means that even if a solution exists, you will have to start quite close to it; otherwise the solution process may have to cross a surface of singular Jacobians or other impossibilities, and will fail to continue at that point.
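A minimal sketch of the quadrature-and-inversion route (the grid and f(y) below are placeholders with the singularity outside the integration range; crossing the singularity itself runs into exactly the issue described above):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import interp1d

# Placeholder tabulated f(y) on [y_i, y_f]; the singularity (here at y = 3)
# lies outside the grid for this illustration.
y = np.linspace(0.1, 2.0, 2000)
f = -2.0 / (y - 3.0)

F = cumulative_trapezoid(f, y, initial=0.0)   # F(y) = integral of f, with F(y_i) = 0

# From y' * exp(F(y)) = C it follows that dt/dy = exp(F(y)) / C.
C = 1.0                                       # fixed by the initial slope y'(0), since F(y_i) = 0
t = cumulative_trapezoid(np.exp(F) / C, y, initial=0.0)

# Invert the value table t(y) -> y(t); t is monotone as long as C has one sign.
y_of_t = interp1d(t, y)
```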
I'm on the lookout for a numerical method that can solve both deterministic and stochastic equations. In the deterministic case, I know that a fourth-order Runge-Kutta method is a valuable and very effective one. Unfortunately, as far as I know, it has not been applied to stochastic equations successfully.
Now, what I want to know is whether a numerical method exists that can solve both kinds of equations (roughly, I mean, in comparison to the analytic solutions) and, if so, what it would be. An analytically solvable stochastic equation would be the Black-Scholes one, for instance.
There are methods for solving these kinds of equations in DifferentialEquations.jl. Stochastic differential equations are a form of mixed deterministic and stochastic equation, and solving them is shown in the SDE tutorial. Mixing discrete stochasticity with deterministic equations is shown in the jump equation tutorial. While written natively in Julia, it is accessible in Python via the diffeqpy package. Note that diffeqpy has some example stochastic differential equations in its README.
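For instance, a geometric Brownian motion (the SDE underlying Black-Scholes) can be set up through diffeqpy roughly as follows (mu, sigma, the initial value, and the time span are placeholder choices):

```python
from diffeqpy import de

mu, sigma = 0.05, 0.2   # placeholder drift and volatility

def f(u, p, t):         # deterministic (drift) part
    return mu * u

def g(u, p, t):         # stochastic (diffusion) part
    return sigma * u

prob = de.SDEProblem(f, g, 0.5, (0.0, 1.0))   # u0 = 0.5, t in [0, 1]
sol = de.solve(prob)
```

Setting sigma to zero recovers the purely deterministic case, so the same setup covers both.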