Deterministic and stochastic parts of an equation - Python

I'm on the lookout for a numerical method that can solve both deterministic and stochastic equations. In the deterministic case, I know that a fourth-order Runge-Kutta method is a valuable, very effective one. Unfortunately, it has not been applied to stochastic equations successfully (at least as far as I know).
Now, what I want to know is whether there exists a numerical method that can solve both kinds of equations (approximately, I mean, in comparison with the analytic solutions) and, if so, what it would be. An analytically solvable stochastic equation would be the Black-Scholes one, for instance.

There are methods for solving these kinds of equations in DifferentialEquations.jl. Stochastic differential equations are a form of mixed deterministic and stochastic equation, and solving them is shown in the SDE tutorial. Mixing discrete stochasticity with deterministic equations is shown in the jump equation tutorial. While written natively in Julia, the library is accessible in Python via the package diffeqpy. Notice that it has some example stochastic differential equations in its README.
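For example, a scalar SDE (geometric Brownian motion, the process underlying the Black-Scholes model, so it has a known analytic solution to compare against) can be solved from Python along the lines of the example in the diffeqpy README; the drift and diffusion constants below are just that example's values:

from diffeqpy import de  # requires a working Julia installation

# dX = 1.01*X dt + 0.87*X dW  (geometric Brownian motion)
def f(u, p, t):   # deterministic (drift) part
    return 1.01 * u

def g(u, p, t):   # stochastic (diffusion) part
    return 0.87 * u

prob = de.SDEProblem(f, g, 0.5, (0.0, 1.0))
sol = de.solve(prob)  # solver picked by the default algorithm selection

Comparing sol against the closed-form solution of geometric Brownian motion gives exactly the kind of accuracy check asked about above.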

Related

Evaluating uncertainty in SciPy root-finding results using Levenberg-Marquardt

I've written a Python script to solve the Time Difference of Arrival (TDoA) angular reconstruction problem in 3 dimensions. To do so, I'm using SciPy's scipy.optimize.root root-finding algorithm to solve a system of nonlinear equations. I find that the Levenberg-Marquardt method is the only supported method capable of reliably producing accurate results (most of the others simply fail).
I'd like to assess the uncertainty in the resulting solution. For most methods (including the default hybr method), SciPy returns the inverse Hessian of the objective function (i.e. the covariance matrix), from which one may begin to calculate the uncertainties in the found roots. Unfortunately, this is not the case for the Levenberg-Marquardt method (which I'm admittedly much less familiar with on a mathematical level than the other methods... it just seems to work).
How (in general) can I estimate the uncertainties in the solution returned by scipy.optimize.root when using the lm method?
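One common workaround, sketched here rather than taken from this thread, is to solve the same system with scipy.optimize.least_squares(method="lm"), which does return the Jacobian J at the solution; the linearized covariance is then cov ≈ sigma^2 * (J^T J)^{-1}. The two-equation system below is a hypothetical stand-in for the TDoA residuals, and sigma is an assumed measurement uncertainty:

import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-in for the TDoA system F(x) = 0, written as residuals.
def residuals(x):
    return np.array([x[0]**2 + x[1] - 1.0,
                     x[0] - x[1]**3 - 0.5])

res = least_squares(residuals, x0=[1.0, 1.0], method="lm")

# Linearized covariance: cov ~ sigma^2 * (J^T J)^{-1}. For a square system
# the residuals vanish at the root, so sigma must come from the known
# measurement uncertainty rather than from the fit itself.
sigma = 0.05                      # assumed 1-sigma measurement uncertainty
J = res.jac
cov = sigma**2 * np.linalg.inv(J.T @ J)
uncertainties = np.sqrt(np.diag(cov))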

Solution to nonlinear differential equation with non-constant mass matrix

If I have a system of nonlinear ordinary differential equations, M(t,y) y' = F(t,y), what is the best method of solution when my mass matrix M is sometimes singular?
I'm working with the following system of equations:
If t = 0, this reduces to a differential-algebraic equation. However, even if we restrict t > 0, it becomes a differential-algebraic equation whenever y_4 = 0, which I cannot avoid with a domain restriction (and which is an integral part of the system I am trying to model). My only previous exposure to DAEs is the case where an entire row of the mass matrix is 0 -- but here the mass matrix is only sometimes singular.
What is the best way to implement this numerically?
So far, I've tried using Python, adding a small number (0.0001) to the main diagonal of M and inverting it, solving the equations y' = M^{-1}(t,y) F(t,y). However, this seems prone to instabilities, and I'm unsure whether it is a universally appropriate means of regularization.
Python doesn't have any built-in functions to deal with mass matrices, so I've also tried coding this in Julia. However, DifferentialEquations.jl states explicitly that "Non-constant mass matrices are not directly supported: users are advised to transform their problem through substitution to a DAE with constant mass matrices."
I'm at a loss on how to accomplish this. Any insights on how to do this substitution or a better way to solve this type of problem would be greatly appreciated.
There is a transformation that leads to a constant mass matrix; you then need to handle the case of y_4 = 0 separately.
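For illustration, here is the generic version of that trick (a sketch assuming the standard substitution, not necessarily the exact transformation meant above): introduce w = y' as an extra unknown, so that M(t,y) y' = F(t,y) becomes

y' = w
0 = M(t,y) w - F(t,y)

whose mass matrix is the constant (singular) block matrix diag(I, 0).

import numpy as np
from scipy.linalg import block_diag

n = 4  # size of the original system (placeholder)

def M(t, y):
    # placeholder state-dependent mass matrix, singular when y[3] = 0
    return np.diag([1.0, 1.0, 1.0, y[3]])

def F(t, y):
    # placeholder right-hand side
    return -y

# Enlarged state u = [y, w] with w = y'. The extended system reads
#   mass_matrix @ u' = extended_rhs(t, u)
# with the constant mass matrix diag(I, 0), which constant-mass-matrix
# DAE solvers (e.g. the Rodas methods in DifferentialEquations.jl,
# callable from Python through diffeqpy) accept.
mass_matrix = block_diag(np.eye(n), np.zeros((n, n)))

def extended_rhs(t, u):
    y, w = u[:n], u[n:]
    return np.concatenate([w, M(t, y) @ w - F(t, y)])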

Coupled non-linear equations in FiPy

I'm trying to set up a system for solving these 5 coupled PDEs in FiPy to study the dynamics of electrons and holes in semiconductors:
The system of coupled PDEs
I'm struggling with defining the terms highlighted in blue, as they're products of one variable with the gradient of another. For example, I'm able to define the third equation like this without error messages:
eq3 = ImplicitSourceTerm(coeff=1, var=J_n) == ImplicitSourceTerm(coeff=e*mu_n*PowerLawConvectionTerm(var=phi), var=n) + PowerLawConvectionTerm(coeff=mu_n*k*T, var=n)
But I'm not sure if this is a good way. Is there a better way to define this non-linear term, please?
Also, if I wanted to define a term that is the product of two variables (say p and n), would it be just:
ImplicitSourceTerm(p, var=n)
Or is there a different way?
I am amazed that you don't get an error from passing a PowerLawConvectionTerm as a coefficient of an ImplicitSourceTerm. It's certainly not intended to work. I suspect you would get an error if you attempted to solve().
You should substitute your flux equations into your continuity equations so that you end up with three second-order PDEs for electron drift-diffusion, hole drift-diffusion, and Poisson's equation. It will hopefully then be a bit clearer how to use FiPy Terms to represent the different elements of those equations.
That said, these equations are challenging. Please see this issue and this notebook for some pointers on how to set up and solve these equations, but realize that we provide no examples in our documentation because we haven't been able to come up with anything robust enough. Solving for pseudo-Fermi levels has worked a bit better for me than solving for electron and hole concentrations.
ImplicitSourceTerm(p, var=n) is a reasonable way to represent the n*p recombination term.
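For what it's worth, here is a minimal sketch of a single continuity equation with such an n*p recombination sink; the mesh, coefficients, and bare diffusion form are placeholders, not the full drift-diffusion system from the question:

from fipy import (CellVariable, Grid1D, TransientTerm, DiffusionTerm,
                  ImplicitSourceTerm)

mesh = Grid1D(nx=100, dx=0.01)
n = CellVariable(name="electrons", mesh=mesh, value=1.0, hasOld=True)
p = CellVariable(name="holes", mesh=mesh, value=1.0, hasOld=True)

D_n = 1.0  # placeholder electron diffusivity

# continuity equation for n with an n*p recombination sink; p enters
# as the (lagged) coefficient of the implicit source term
eq_n = (TransientTerm(var=n)
        == DiffusionTerm(coeff=D_n, var=n)
        - ImplicitSourceTerm(coeff=p, var=n))

# sweep the nonlinear coupling, updating lagged values between steps
for _ in range(5):
    n.updateOld()
    p.updateOld()
    eq_n.sweep(dt=1e-3)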

Regularizing viscosity with scipy's ode solvers

Consider for the sake of simplicity the following equation (Burgers equation):
u_t + u*u_x = nu*u_xx
Let's solve it using scipy (in my case scipy.integrate.ode.set_integrator("zvode", ..).integrate(T)) with a variable time-step solver.
The issue is the following: if we use the naïve explicit update in Fourier space,
u[t+dt] = u[t] + dt*( -u[t]*dx(u[t]) + nu*d2x(u[t]) ),
then the viscosity term nu * d2x(u[t]) can cause an overshoot if the time step is too big. This can lead to a fair amount of noise in the solutions, or even to (fake) diverging solutions (even with stiff solvers, on slightly more complex versions of this equation).
One way to regularize this is to evaluate the viscosity term at step t+dt, so that the update step becomes
u[t+dt] = u[t] + dt*( -u[t]*dx(u[t]) + nu*d2x(u[t+dt]) ),
which in Fourier space can be solved for each mode explicitly, dividing by (1 + nu*k^2*dt).
This solution works well when programmed explicitly. How can I use scipy's variable-step ODE solvers to implement it? To my surprise, I haven't found any documentation on this fairly elementary but thorny issue...
You actually can't; or, at the other extreme, odeint and ode->zvode already do that for any given problem.
As to the first: you would need to pass the two parts of the equation separately, and that is obviously not part of the solver interface. Compare DDE and SDE solvers, where such a partition of the equation is actually required.
As to the second: odeint and ode->zvode use implicit multi-step methods, which means that the value of u(t+dt) and the right-hand side evaluated there already enter the computation and the underlying local approximation.
You could still try to hack your original approach into the solver by providing a Jacobian function that contains only the second-derivative term, but quite probably you will not achieve an improvement.
You could operator-partition the ODE and solve the linear part exactly with an integrating factor, introducing
vhat(k,t) = exp(nu*k^2*t)*uhat(k,t)
so that
d/dt vhat(k,t) = -i*k*exp(nu*k^2*t)*conv(uhat(.,t),uhat(.,t))(k)
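A minimal sketch of that integrating-factor substitution, using the same zvode integrator as in the question (grid size, viscosity, and initial condition are placeholder choices, and the convolution is evaluated pseudo-spectrally as fft(u*u_x)):

import numpy as np
from scipy.integrate import ode

N, L_dom, nu = 128, 2*np.pi, 0.05
x = np.linspace(0.0, L_dom, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L_dom/N)

def rhs(t, vhat):
    damp = np.exp(-nu * k**2 * t)   # inverse of the integrating factor
    uhat = damp * vhat              # recover uhat from vhat
    u = np.fft.ifft(uhat)
    ux = np.fft.ifft(1j * k * uhat)
    # only the non-stiff nonlinear term is left for the solver to see
    return -np.fft.fft(u * ux) / damp

T = 1.0
solver = ode(rhs).set_integrator("zvode")
solver.set_initial_value(np.fft.fft(np.sin(x)), 0.0)
vhat_T = solver.integrate(T)
u_T = np.real(np.fft.ifft(np.exp(-nu * k**2 * T) * vhat_T))

Since the exp(nu*k^2*t) factors grow rapidly, for long integrations one would restart the substitution (reset t = 0 with the current u as initial data) every so often to avoid round-off amplification.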

Finding best fit parameters for a set of equations with known uncertainties

As a follow-up to another question:
solve linear equations given variables and uncertainties: scipy-optimize?
It appears to me that I have a very similar problem. I am relatively new to Python and have used it mostly to sort and reduce data with pandas.
I have a set of linear equations for which I want to find the best-fit parameters. However, the data have known uncertainties, given in parentheses below, that need to be taken into account.
x1*99(1)+x2*45(1)=52(0.2)
x1*1(0.5)+x2*16(1)=15(0.1)
Moreover, there are the following constraints:
x1>=0
x2>=0
x1+x2=1
My approach would be to treat the equations as constraints and minimize the sum of the residuals, as shown in the (simplified) example above.
Solving this without uncertainties is not the issue. I'm asking for a hint on how to account for the uncertainties while finding the best-fit parameters.
A quick-and-dirty approach would be to generate synthetic datasets for the coefficients (a number with an uncertainty corresponds to a normal distribution with the given mean and variance). For each realization, you simply solve the 2-by-2 system and collect the distributions of x1 and x2.
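A minimal sketch of that Monte Carlo approach, using the coefficients and uncertainties given above:

import numpy as np

rng = np.random.default_rng(0)

# means and 1-sigma uncertainties read off the two equations above
A_mean = np.array([[99.0, 45.0],
                   [ 1.0, 16.0]])
A_sig  = np.array([[1.0, 1.0],
                   [0.5, 1.0]])
b_mean = np.array([52.0, 15.0])
b_sig  = np.array([0.2, 0.1])

draws = 10_000
solutions = np.empty((draws, 2))
for i in range(draws):
    A = rng.normal(A_mean, A_sig)   # one realization of the coefficients
    b = rng.normal(b_mean, b_sig)
    solutions[i] = np.linalg.solve(A, b)

x_mean = solutions.mean(axis=0)     # best-fit x1, x2
x_std = solutions.std(axis=0)       # their uncertainties

The constraints (x1, x2 >= 0 and x1 + x2 = 1) could then be enforced by discarding realizations that violate them, or by solving each realization as a small constrained least-squares problem instead.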
