Coupled non-linear equations in FiPy - Python

I'm trying to set up a system for solving these 5 coupled PDEs in FiPy to study the dynamics of electrons and holes in semiconductors.
[Image: the system of coupled PDEs]
I'm struggling with defining the terms highlighted in blue, as they're products of one variable with the gradient of another. For example, I'm able to define the third equation like this without error messages:
eq3 = ImplicitSourceTerm(coeff=1, var=J_n) == ImplicitSourceTerm(coeff=e*mu_n*PowerLawConvectionTerm(var=phi), var=n) + PowerLawConvectionTerm(coeff=mu_n*k*T, var=n)
But I'm not sure if this is a good way. Is there a better way to define this non-linear term?
Also, if I wanted to define a term that is the product of two variables (say p and n), would it be just:
ImplicitSourceTerm(p, var=n)
Or is there a different way?

I am amazed that you don't get an error from passing a PowerLawConvectionTerm as a coefficient of an ImplicitSourceTerm. It's certainly not intended to work. I suspect you would get an error if you attempted to solve().
You should substitute your flux equations into your continuity equations so that you end up with three second-order PDEs for electron drift-diffusion, hole drift-diffusion, and Poisson's equation. It will hopefully then be a bit clearer how to use FiPy Terms to represent the different elements of those equations.
That said, these equations are challenging. Please see this issue and this notebook for some pointers on how to set up and solve these equations, but realize that we provide no examples in our documentation because we haven't been able to come up with anything robust enough. Solving for pseudo-Fermi levels has worked a bit better for me than solving for electron and hole concentrations.
ImplicitSourceTerm(p, var=n) is a reasonable way to represent the n*p recombination term.
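To make the substitution advice concrete, here is a rough sketch of how the substituted electron continuity equation might be written, following the sign convention of the flux expression in the question; the mesh, constants, and values are illustrative placeholders, not a working drift-diffusion solver:
from fipy import (CellVariable, Grid1D, TransientTerm, DiffusionTerm,
                  PowerLawConvectionTerm, ImplicitSourceTerm)

# placeholder mesh and constants
mesh = Grid1D(nx=100, dx=1e-9)
e, k, T, mu_n = 1.602e-19, 1.381e-23, 300.0, 0.1

n = CellVariable(name="electrons", mesh=mesh, hasOld=True)
p = CellVariable(name="holes", mesh=mesh, hasOld=True)
phi = CellVariable(name="potential", mesh=mesh, hasOld=True)

D_n = mu_n * k * T / e  # Einstein relation

# Substituting J_n = e*mu_n*n*grad(phi) + mu_n*k*T*grad(n) into the electron
# continuity equation gives (up to sign conventions)
#   dn/dt = div(mu_n*n*grad(phi)) + div(D_n*grad(n)) - recombination
# The drift term becomes a convection term whose face coefficient is
# mu_n*grad(phi), and the n*p recombination enters as an implicit source.
eq_n = (TransientTerm(var=n)
        == PowerLawConvectionTerm(coeff=mu_n * phi.faceGrad, var=n)
        + DiffusionTerm(coeff=D_n, var=n)
        - ImplicitSourceTerm(coeff=p, var=n))
The hole equation would be built the same way, Poisson's equation with a DiffusionTerm in phi, and the three equations can then be coupled with & and iterated with sweep().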

Related

Scipy BVP: four coupled differential equations

I'm trying to use solve_bvp from scipy.integrate to solve a system of four coupled differential equations. I'm having trouble understanding the documentation regarding how to set up the boundary conditions. I'm unable to post my actual equations, as this is for a school assignment, but I was wondering if someone could help me understand how I can express these boundary conditions in a way that I can feed to the package:
at x=0, y_2->0
at x=1, y_1,y_3,y_4 all approach zero.
I don't even really understand why the function should only take two arguments when I have four boundary conditions.
This is my attempt (not based on much logic):
def bc(va, vb):
    return np.array([vb[0], va[0], vb[0], vb[0]])
And a small side-question: when I'm setting up the initial guess, what qualities of the solution should I prioritize getting right? Being within an order of magnitude of the actual values? The general shape? The values at the ends of the range?
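For what it's worth, here is a minimal sketch of how boundary conditions of this shape are typically encoded for solve_bvp, assuming 0-based indexing (y[0]..y[3] standing for y_1..y_4) and a made-up right-hand side just so the snippet runs. The bc function only takes two arguments because ya and yb each carry all four components evaluated at the two endpoints, and the four conditions become the four entries of the returned residual vector:
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y):
    # placeholder system; y has shape (4, m), one row per unknown
    return np.vstack([y[1], -y[0], y[3], -y[2]])

def bc(ya, yb):
    # ya = y at the left end (x = 0), yb = y at the right end (x = 1)
    return np.array([
        ya[1],  # y_2(0) = 0
        yb[0],  # y_1(1) = 0
        yb[2],  # y_3(1) = 0
        yb[3],  # y_4(1) = 0
    ])

x = np.linspace(0, 1, 50)
y_guess = np.zeros((4, x.size))  # crude initial guess
sol = solve_bvp(fun, bc, x, y_guess)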

Solution to nonlinear differential equation with non-constant mass matrix

If I have a system of nonlinear ordinary differential equations, M(t,y) y' = F(t,y), what is the best method of solution when my mass matrix M is sometimes singular?
I'm working with the following system of equations:
If t = 0, this reduces to a differential algebraic equation. However, even if we restrict t > 0, this becomes a differential algebraic equation whenever y_4 = 0, which I cannot avoid by restricting the domain (it is an integral part of the system I am trying to model). My only previous exposure to DAEs is the case where an entire row is 0 -- but in this case my mass matrix is not always singular.
What is the best way to implement this numerically?
So far, I've tried using Python, where I add a small number (0.0001) to the main diagonal of M and invert it, solving the equations y' = M^{-1}(t,y) F(t,y). However, this seems prone to instabilities, and I'm unsure whether it is a universally appropriate means of regularization.
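For concreteness, the regularization described above amounts to something like the sketch below; M and F are arbitrary placeholders, and Radau is chosen only because an implicit stiff solver tends to cope a little better when M is nearly singular:
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-4  # small number added to the main diagonal, as described above

def M(t, y):
    # placeholder mass matrix; becomes singular when y[1] = 0
    return np.array([[1.0, 0.0],
                     [0.0, y[1]]])

def F(t, y):
    # placeholder right-hand side
    return np.array([y[1], -y[0]])

def rhs(t, y):
    # regularized explicit form: y' = (M + eps*I)^{-1} F
    M_reg = M(t, y) + eps * np.eye(len(y))
    return np.linalg.solve(M_reg, F(t, y))

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5], method="Radau", rtol=1e-8)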
Python doesn't have any built-in functions to deal with mass matrices, so I've also tried coding this in Julia. However, DifferentialEquations.jl states explicitly that "Non-constant mass matrices are not directly supported: users are advised to transform their problem through substitution to a DAE with constant mass matrices."
I'm at a loss on how to accomplish this. Any insights on how to do this substitution or a better way to solve this type of problem would be greatly appreciated.
The following transformation leads to a constant mass matrix:
[transformation given as an equation image]
You need to handle the case of y_4 = 0 separately.

Deterministic and stochastic part of an equation

I'm on the lookout for a numerical method that can solve both deterministic and stochastic equations. In the deterministic case, I know that a fourth-order RK method is a valuable one, very effective. Unfortunately, it has not been applied to stochastic equations successfully (at least as far as I know).
Now what I want to know is whether a numerical method exists that can solve both kinds of equations (roughly, I mean, in comparison to the analytic solutions) and, if so, what it would be. An analytically solvable stochastic equation would be the Black-Scholes one, for instance.
There are methods for solving these kinds of equations in DifferentialEquations.jl. Stochastic differential equations are a form of mixed deterministic and stochastic equation, and solving them is shown in the SDE tutorial. Mixing discrete stochasticity with deterministic equations is shown in the jump equation tutorial. While written natively in Julia, the library is accessible from Python via the diffeqpy package. Note that diffeqpy has some example stochastic differential equations in its README.
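For illustration, the SDE example from the diffeqpy README looks roughly like the following: a scalar geometric Brownian motion, i.e. the same kind of process that underlies Black-Scholes. It assumes diffeqpy and its Julia backend are installed, and the drift/diffusion coefficients are just example values:
from diffeqpy import de

def f(u, p, t):    # deterministic (drift) part
    return 1.01 * u

def g(u, p, t):    # stochastic (diffusion) part
    return 0.87 * u

u0 = 0.5
tspan = (0.0, 1.0)
prob = de.SDEProblem(f, g, u0, tspan)
sol = de.solve(prob)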

Solve a system of N^2 nonlinear equations

I am trying to solve a system of N*N nonlinear equations, but I am stuck and do not understand what the problem is.
My equations are :
h_{j,i} = (T/2) \sum_{k=1}^{N} f(h_{k,j}) - (T/2) f(h_{i,j})
for (i, j) in [1..N]^2, where the h are the unknowns, f is a known function, and T is a parameter.
In all the examples I have found, there are maybe two or three equations/unknowns, so one can write out the equations directly. However, I have too many equations here, and I do not understand how to implement this in code without explicitly writing out all the equations and then using fsolve (in Python).
Thanks for your help
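One common pattern, sketched below, is to let fsolve work on a flattened vector of all N*N unknowns and reshape it inside the residual function; f here is just a placeholder (tanh), and the equations are written as residuals h - rhs(h) = 0:
import numpy as np
from scipy.optimize import fsolve

N = 10
T = 1.0

def f(h):
    # placeholder for the known function f
    return np.tanh(h)

def residuals(h_flat):
    h = h_flat.reshape(N, N)      # h[j, i]
    fh = f(h)
    col_sums = fh.sum(axis=0)     # sum over k of f(h[k, j]), one value per j
    rhs = (T / 2) * col_sums[:, None] - (T / 2) * fh.T
    return (h - rhs).ravel()

h0 = np.zeros(N * N)              # initial guess
solution = fsolve(residuals, h0).reshape(N, N)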

Python: solving a differential equation with complex variables

Folks,
Is it possible to solve an ODE with complex variables in Python? The equation I have has the following form:
dx/dt = -a x - i y(t)
where y(t) is a known function, a is a known number, and i is the square root of -1.
I tried to use odeint() but it gives many error messages.
I am guessing odeint() does not work with complex variables. So one way out would be to separate the real and imaginary parts of x and treat the original ODE as two coupled ODEs.
But I am also wondering if there is a more convenient way to do this. Solving ODEs/PDEs with complex variables is a general problem, and it would be quite a hassle to do this complex-to-real conversion by hand every time.
Thanks very much.
I'd suggest using scipy.integrate.complex_ode instead of scipy.integrate.odeint; it performs the conversion automatically.
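For example, a minimal sketch with complex_ode; the driving function y(t) and the constant a are placeholders:
import numpy as np
from scipy.integrate import complex_ode

a = 1.0                      # placeholder constant

def y_known(t):
    # placeholder for the known driving function y(t)
    return np.exp(1j * t)

def rhs(t, x):
    return -a * x - 1j * y_known(t)

solver = complex_ode(rhs)
solver.set_initial_value(1.0 + 0.0j, 0.0)

ts, xs = [0.0], [solver.y[0]]
dt = 0.01
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + dt)
    ts.append(solver.t)
    xs.append(solver.y[0])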
