I am trying to solve a PDE using the FiPy package in Python. The code I have written is below.
!pip install fipy
from fipy import *
mesh= Grid2D(nx=0.001,dx=100,ny=0.0005,dy=100)
phil=CellVariable(name='Sol variable',mesh=mesh)
phil.constrain(0,mesh.facesBottom)
phil.constrain(1,mesh.facesTop)
n=1.7*10**(-6)*((0.026*numerix.exp(phil/0.026)+phil-0.026)+2.25*10**(-10)*(0.026*numerix.exp(phil/0.026)-phil-0.026))**(0.5)
eq=(PowerLawConvectionTerm(coeff=(0.,1.))+ImplicitSourceTerm(coeff=n))
eq.solve(var=phil)
When I try to run the code, I get an error in the last line: k exceeds matrix dimensions.
Any help regarding this would be appreciated.
The obvious issue with the above code is that dx and nx are mixed up: dx is the mesh spacing, a float, while nx is the number of mesh cells, an int. So the line constructing the mesh should be
mesh= Grid2D(dx=0.001,nx=100,dy=0.0005,ny=100)
That at least makes the problem run without an error. However, the solution isn't very interesting: the source term and the initial condition appear to be zero everywhere, so the result is zero everywhere.
It's also worth considering the following:
FiPy isn't really designed for hyperbolic problems, which require high-order discretizations to solve accurately. It will solve, but may not be as accurate (it might be OK at equilibrium).
The source term is non-linear, so it will require many iterations (sweeps) to reach a solution.
It's a good idea to include a transient term in this type of equation and iterate towards equilibrium; a sketch of that is below.
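Putting that together, a minimal sketch of what iterating towards equilibrium might look like; the non-zero initial value, the time step, and the number of sweeps are illustrative guesses, not values from the question:

from fipy import (CellVariable, Grid2D, TransientTerm,
                  PowerLawConvectionTerm, ImplicitSourceTerm, numerix)

mesh = Grid2D(dx=0.001, nx=100, dy=0.0005, ny=100)

# a non-zero initial value, so the non-linear source term has something to act on
phil = CellVariable(name='Sol variable', mesh=mesh, value=0.5)
phil.constrain(0, mesh.facesBottom)
phil.constrain(1, mesh.facesTop)

n = 1.7e-6 * ((0.026 * numerix.exp(phil / 0.026) + phil - 0.026)
              + 2.25e-10 * (0.026 * numerix.exp(phil / 0.026) - phil - 0.026))**0.5

# the same terms as before, plus a transient term to relax towards equilibrium
eq = TransientTerm() + PowerLawConvectionTerm(coeff=(0., 1.)) + ImplicitSourceTerm(coeff=n)

for _ in range(100):             # pseudo-time stepping
    eq.sweep(var=phil, dt=1e-3)  # repeated sweeps update the non-linear coefficient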
I have a linear system of 52 equations with float coefficients (usually not rationals). The list of equations is stored in the variable concrete_focs, and the list of unknowns (Lagrangians) in lagran.
Using solve, the solution would be:
result = solve(
    concrete_focs, lagran,
    simplify=False,
    exclude=[]
)
Using linsolve, it would be:
linresult = list(linsolve(concrete_focs, lagran))
I checked the solution from solve by substituting it back into the equations, and it is correct. The solutions from linsolve are not only different, they are wrong: they do not satisfy the system when I substitute them back.
linsolve does not throw any errors. I transformed the system of equations into matrix form using linear_eq_to_matrix, and the system has full rank. Since the unknowns are indexed variables, I also tried solving with linsolve using plain symbols instead, but this didn't fix the issue.
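For reference, the checks look roughly like this on a toy stand-in for concrete_focs and lagran (the real system has 52 equations, stored as Eq objects):

from sympy import symbols, Eq, linsolve, linear_eq_to_matrix

# toy stand-in with float coefficients; the real concrete_focs has 52 equations
x, y = symbols('x y')
concrete_focs = [Eq(0.3*x + 1.7*y, 2.0), Eq(1.1*x - 0.4*y, 0.5)]
lagran = [x, y]

# candidate solution from linsolve, mapped back onto the unknowns
candidate = dict(zip(lagran, list(linsolve(concrete_focs, lagran))[0]))

# residuals should be numerically zero (compare against a tolerance, not exact zero)
residuals = [(eq.lhs - eq.rhs).subs(candidate).evalf() for eq in concrete_focs]
print(max(abs(r) for r in residuals))

# matrix form and rank check
A, b = linear_eq_to_matrix(concrete_focs, lagran)
print(A.rank(), A.shape)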
I would like to solve this linear system with linsolve, since it is much faster than solve, even for this simple system.
I tried to reproduce the issue in simpler problems, using indexed variables, but I got the correct result.
Any hints on why this may be happening? (I suspect it is because of the handling of the float irrational coefficients, but I'm not sure.)
I'm trying to set up a system for solving these 5 coupled PDEs in FiPy to study the dynamics of electrons and holes in semiconductors.
The system of coupled PDEs
I'm struggling with defining the terms highlighted in blue, as they are products of one variable with the gradient of another. For example, I'm able to define the third equation like this without error messages:
eq3 = ImplicitSourceTerm(coeff=1, var=J_n) == ImplicitSourceTerm(coeff=e*mu_n*PowerLawConvectionTerm(var=phi), var=n) + PowerLawConvectionTerm(coeff=mu_n*k*T, var=n)
But I'm not sure this is a good way. Is there a better way to define this non-linear term?
Also, if I wanted to define a term that is the product of two variables (say p and n), would it be just:
ImplicitSourceTerm(p, var=n)
Or is there a different way?
I am amazed that you don't get an error from passing a PowerLawConvectionTerm as a coefficient of an ImplicitSourceTerm. It's certainly not intended to work. I suspect you would get an error if you attempted to solve().
You should substitute your flux equations into your continuity equations so that you end up with three second-order PDEs for electron drift-diffusion, hole drift-diffusion, and Poisson's equation. It will hopefully then be a bit clearer how to use FiPy Terms to represent the different elements of those equations.
That said, these equations are challenging. Please see this issue and this notebook for some pointers on how to set up and solve these equations, but realize that we provide no examples in our documentation because we haven't been able to come up with anything robust enough. Solving for pseudo-Fermi levels has worked a bit better for me than solving for electron and hole concentrations.
ImplicitSourceTerm(p, var=n) is a reasonable way to represent the n*p recombination term.
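For what it's worth, here is a rough, untested sketch of how the electron continuity equation might be written after that substitution. The 1D mesh, the constants, and the signs are placeholders that depend on your units and conventions, and, as said above, there is no guarantee this will be robust:

from fipy import (CellVariable, Grid1D, TransientTerm, DiffusionTerm,
                  PowerLawConvectionTerm, ImplicitSourceTerm)

mesh = Grid1D(nx=100, dx=0.01)   # placeholder mesh
mu_n = D_n = 1.0                 # placeholder mobility and diffusivity

n = CellVariable(name='electrons', mesh=mesh, value=1.0, hasOld=True)
p = CellVariable(name='holes', mesh=mesh, value=1.0, hasOld=True)
phi = CellVariable(name='potential', mesh=mesh, value=0.0, hasOld=True)

# dn/dt = div(D_n grad n) + div(mu_n n grad phi) - R, with R ~ n*p:
# the drift term becomes a convection term whose "velocity" is mu_n * grad(phi)
# evaluated on the faces, and the n*p recombination is the ImplicitSourceTerm
eq_n = (TransientTerm(var=n)
        == DiffusionTerm(coeff=D_n, var=n)
        + PowerLawConvectionTerm(coeff=mu_n * phi.faceGrad, var=n)
        - ImplicitSourceTerm(coeff=p, var=n))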
So, I have to numerically solve the following ODE: y'' + f(y)*(y')^2 = 0,
originally on the interval [y_i, y_f], where y = y(t) and y(0) = y_i. The main problem is that the function f(y) has a (regular) singular point (I couldn't post the image).
Also, f(y) is obtained from long numerical calculations, so I have no analytical expression for it. The issue is that the singular point lies between y_i and y_f. So far, I have not found any method that handles this kind of problem. It doesn't matter whether it is solved as a BVP or an IVP, but I need to cross the singularity.
What I have tried:
I approximated f(y) as -2/(y-y*), where y* is the singularity, and tried to solve the problem as an IVP using Runge-Kutta, odeint, and the Runge-Kutta-Fehlberg method, but I cannot cross the singularity.
I tried to cheat with RKF: as y tends to a constant, I manually increased it to force it across the singular point, but then the solution blows up to infinity, so it is not valid.
The same as before, but using the numerically defined f(y) rather than its approximation.
The same, but as a BVP, using the shooting method and SciPy's solve_bvp.
You can integrate immediately after dividing by y': this gives ln(y') + F(y) = c, or equivalently y'*exp(F(y)) = C, where F' = f. In principle you can therefore compute t(y) by simple quadrature and then obtain y(t) by inverting the resulting table of function values.
Your example integrates to y' = C*(y-y*)^2 and (taking y* = 0 for simplicity) y(t) = y(0)/(1 - C*y(0)*t), which can also produce a singularity in the solution formula because of the denominator. This means that even if a solution exists, you will have to start quite close to it; otherwise the solution process may have to cross a surface of singular Jacobians or other impossibilities and will fail to continue at that point.
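A minimal sketch of the quadrature-plus-inversion idea for the example f(y) = -2/(y - y*); the values of y*, C and y(0) below are arbitrary choices for illustration:

import numpy as np
from scipy.integrate import cumulative_trapezoid

ystar, C, y0 = 1.0, 1.0, 0.0             # singularity, integration constant, initial value

# with f(y) = -2/(y - ystar): F(y) = -2*log|y - ystar|, so y' = C*exp(-F(y)) = C*(y - ystar)**2
# and dt/dy = 1/y' = 1/(C*(y - ystar)**2), which is integrated by simple quadrature
y = np.linspace(y0, ystar - 1e-3, 5000)  # stop short of the singular point
t = cumulative_trapezoid(1.0 / (C * (y - ystar)**2), y, initial=0.0)

# invert the monotone table t(y) to recover y(t)
t_query = np.linspace(t[0], t[-1], 200)
y_of_t = np.interp(t_query, t, y)

For this choice of constants (starting below y*), t(y) grows without bound as y approaches y*, which is another way of seeing that the solution cannot reach the singular point in finite time.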
My math skills are really poor.
I'm trying to handle (tri-axis) accelerometer data with Python.
I need to compute the norm.
That is easy; I made something like this:
math.sqrt(x_value**2 + y_value**2 + z_value**2)
but now I have to compute the integral of that:
And for that I have no clue.
Can somebody help me with that?
Edit: Adding more info due to the negative votes (??)
I know there are tools for integration in Python, but this integral has no bounds (there are no limits in the formula), so I don't understand how to make it work.
Integrating the modulus of the acceleration will lead you nowhere. You must integrate all three components separately to get the components of the velocity (and integrate them a second time to get the position vector).
To perform the numerical integration, apply Simpson's rule incrementally. Because each Simpson step consumes a pair of intervals (three samples), you will only get every other value of the speed; a sketch of this is below.
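A small sketch of that component-wise, incremental Simpson integration; the sampling interval and the acceleration arrays are made-up placeholders:

import numpy as np

def simpson_cumulative(samples, dt):
    # incremental Simpson: each step covers a pair of intervals (three samples),
    # so a value is produced only at every other sample
    out = [0.0]
    for i in range(2, len(samples), 2):
        out.append(out[-1] + dt / 3.0 * (samples[i - 2] + 4.0 * samples[i - 1] + samples[i]))
    return np.array(out)

# hypothetical uniformly sampled accelerometer data
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ax, ay, az = np.sin(t), np.cos(t), np.zeros_like(t)

# integrate each component separately: acceleration -> velocity components
vx, vy, vz = (simpson_cumulative(a, dt) for a in (ax, ay, az))
speed = np.sqrt(vx**2 + vy**2 + vz**2)   # the norm of the velocity, if that is what is needed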
I'm using SymPy to solve a simple linear system of equations.
It comes from a coupled ODE: there are time derivatives of the variables, and I need to solve the system of equations for the highest derivatives.
Since SymPy doesn't let me solve for expressions like phi_1.diff(t), I've replaced all derivatives with placeholder symbols.
For example:
phi(t).diff(t).diff(t) + phi(t) = 0
becomes
ddphi + phi(t) = 0
This works fine. The solutions are correct and I can simulate the system - it's a pendulum: https://youtu.be/Gc_V2FussNk
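Concretely, the substitution looks roughly like this for a single equation (the real system has one placeholder per second derivative):

import sympy as sp

t = sp.symbols('t')
phi = sp.Function('phi')
ddphi = sp.symbols('ddphi')                   # placeholder for phi''(t)

ode = phi(t).diff(t, 2) + phi(t)              # phi'' + phi = 0
ode_sub = ode.subs(phi(t).diff(t, 2), ddphi)  # becomes ddphi + phi(t) = 0

print(sp.solve(ode_sub, ddphi))               # [-phi(t)]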
The problem is that solving the system of equations (with linsolve) takes very long.
For just 2 equations, it takes 2 seconds.
For 3 equations, it's still calculating (after over 10 minutes).
EDIT: @asmeurer advised me to try solve instead.
For n=3, linsolve took about 34 minutes - I only made one measurement.
solve takes 31 seconds (averages over 3 runs).
Still, I believe that a linear 3x3 system should be solved in fractions of a second.
And for n=4, solve becomes unbearably slow, too (still calculating).
I've formatted the code and created an iPython notebook: http://nbviewer.jupyter.org/gist/lhk/bec52b222d1d8d28e0d1baf77d545ec5
If you scroll down a little, you can see the formatted output of the system of equations and, directly below that, the call to linsolve.
The equations are rather long but strictly linear in the second derivatives.
I'm sure that this system can be solved.
All I need to do is solve a 3x3 system of linear equations where the coefficients might be symbols.
Is there a more performant way to do this?
solve (not linsolve) has some flags that you can set which can make it faster:
simplify=False: disables simplification of the result.
rational=False: disables the automatic conversion of floats to rationals.
There is a warning in the solve docstring that rational=False can lead to some equations not being solvable due to issues in the polys, so be aware that that's a potential issue.
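A small illustration on made-up float-coefficient equations:

import sympy as sp

x, y, z = sp.symbols('x y z')
eqs = [0.3*x + 1.7*y - 1.0*z - 2.0,
       1.1*x - 0.4*y + 0.2*z - 0.5,
       0.7*x + 1.0*y + 1.0*z - 1.0]

# simplify=False skips simplification of the result;
# rational=False keeps the floats as floats instead of converting them to Rationals
sol = sp.solve(eqs, [x, y, z], simplify=False, rational=False)
print(sol)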
I have found that solve can be very slow in a Jupyter notebook if you have run sp.init_printing() before defining your equations. I have a module "equations" where I write my equations and solve them.
This is faster:
import sympy as sp
import equations   # the equations are built and solved here, before init_printing is called
sp.init_printing()
Than this:
import sympy as sp
sp.init_printing()
import equations   # the equations are built and solved here, after init_printing has been called