I'm using sympy to solve a simple linear system of equations.
It's a coupled ODE: there are time derivatives of the variables, and I need to solve the system of equations for the highest derivatives.
Since sympy doesn't let me solve for expressions like phi_1.diff(t), I've replaced all derivatives with placeholder symbols.
For example:
phi(t).diff(t).diff(t) + phi(t) = 0
becomes
ddphi + phi(t) = 0
This works fine. The solutions are correct and I can simulate the system - it's a pendulum: https://youtu.be/Gc_V2FussNk
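For reference, a minimal sketch of the substitution approach described above (the names t, phi and ddphi are just illustrative):

import sympy as sp

t = sp.symbols('t')
phi = sp.Function('phi')
ddphi = sp.Symbol('ddphi')                      # placeholder for the highest derivative

eq = phi(t).diff(t, 2) + phi(t)                 # phi'' + phi = 0
eq_sub = eq.subs(phi(t).diff(t, 2), ddphi)      # ddphi + phi(t)

sp.solve(sp.Eq(eq_sub, 0), ddphi)               # [-phi(t)]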
The problem is that solving the system of equations (with linsolve) takes very long.
For just 2 equations, it takes 2 seconds.
For 3 equations, it's still calculating (after over 10 minutes).
EDIT: @asmeurer advised me to try out solve instead.
For n=3, linsolve took about 34 minutes - I only made one measurement.
solve takes 31 seconds (averaged over 3 runs).
Still, I believe that a linear 3x3 system should be solved in fractions of a second.
And for n=4, solve becomes unbearably slow, too (still calculating).
I've formatted the code and created an IPython notebook: http://nbviewer.jupyter.org/gist/lhk/bec52b222d1d8d28e0d1baf77d545ec5
If you scroll down a little, you can see the formatted output of the system of equations and directly below that the call to linsolve
The equations are rather long but strictly linear in the second derivatives.
I'm sure that this system can be solved.
All I need to do is solve a 3x3 system of linear equations where the coefficients might be symbols.
Is there a more performant way to do this?
solve (not linsolve) has some flags that you can set which can make it faster:
simplify=False: disables simplification of the result.
rational=False: disables the automatic conversion of floats to rationals.
There is a warning in the solve docstring that rational=False can lead to some equations not being solvable due to issues in the polys, so be aware that it's a potential problem.
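For example, a minimal sketch of passing those flags (the toy equations below are only placeholders for the real system from the question):

from sympy import symbols, solve

a, x, y = symbols('a x y')
eqs = [a*x + 3*y - 5, x - y - 1]        # stand-in linear system with a symbolic coefficient
sol = solve(eqs, [x, y], simplify=False, rational=False)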
I have found that solve can be very slow in a Jupyter notebook if you have run sp.init_printing() before your equations. I have a module "equations" where I write my equations and solve them.
This is faster:
import sympy as sp
import equations
sp.init_printing()
Than this:
import sympy as sp
sp.init_printing()
import equations
I have a system of 52 equations with float coefficients (usually not rationals). This long list of equations is stored in the variable concrete_focs, and the list of unknowns (Lagrangians) in lagran. The system is linear.
Using solve, the solution would be:
result = solve(concrete_focs, lagran, simplify=False, exclude=[])
Using linsolve, it would be:
linresult = list(linsolve(concrete_focs, lagran))
I checked the solution using solve by substituting back into the equations and it gives the correct result. Using linsolve, not only are the solutions different, but they are wrong: they do not solve the system when I substitute them back.
linsolve does not throw any errors. I transformed the system into matrix form using linear_eq_to_matrix, and it has full rank. Since the unknowns are indexed variables, I also tried solving the system with linsolve using plain symbols instead, but this didn't fix the issue.
I would like to solve this linear system with linsolve, since it is much faster than solve, even for this simple system.
I tried to reproduce the issue in simpler problems, using indexed variables, but I got the correct result.
Any hints on why this may be happening? (I suspect it is the handling of the non-rational float coefficients, but I'm not sure.)
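One way to narrow this down is the matrix-form cross-check mentioned above. A minimal sketch, with a toy float-coefficient system standing in for concrete_focs/lagran; rationalising the floats before calling linsolve is only an assumption worth testing, not a guaranteed fix:

from sympy import linear_eq_to_matrix, linsolve, nsimplify, symbols

x1, x2 = symbols('x1 x2')
eqs = [0.3*x1 + 1.7*x2 - 2.0, 1.1*x1 - 0.4*x2 - 0.7]    # stand-in for concrete_focs
unknowns = [x1, x2]                                      # stand-in for lagran

A, b = linear_eq_to_matrix(eqs, unknowns)
print(A.rank())          # confirm the system really has full rank

print(A.LUsolve(b))      # direct matrix solve, to compare against solve()/linsolve()

# rationalise the float coefficients before calling linsolve
A_r = A.applyfunc(lambda e: nsimplify(e, rational=True))
b_r = b.applyfunc(lambda e: nsimplify(e, rational=True))
print(linsolve((A_r, b_r), unknowns))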
I am trying to solve a PDE using the fipy package in Python. The code that I have written is below.
!pip install fipy
from fipy import *
mesh = Grid2D(nx=0.001, dx=100, ny=0.0005, dy=100)
phil = CellVariable(name='Sol variable', mesh=mesh)
phil.constrain(0, mesh.facesBottom)
phil.constrain(1, mesh.facesTop)
n = 1.7*10**(-6)*((0.026*numerix.exp(phil/0.026) + phil - 0.026) + 2.25*10**(-10)*(0.026*numerix.exp(phil/0.026) - phil - 0.026))**0.5
eq = PowerLawConvectionTerm(coeff=(0., 1.)) + ImplicitSourceTerm(coeff=n)
eq.solve(var=phil)
When I try to run the code, I get an error in the last line: k exceeds matrix dimensions.
Any help regarding this would be appreciated.
The obvious issue with the above code is that dx and nx are mixed up. dx is the mesh spacing, so it is a float, while nx is the number of mesh cells, which is an int. So, the third line should be,
mesh = Grid2D(dx=0.001, nx=100, dy=0.0005, ny=100)
That at least makes the problem run without an error. However, the solution isn't very interesting: the source term and the initial condition appear to be zero everywhere, so the result is zero everywhere.
It's also worth considering the following:
FiPy isn't really designed for hyperbolic problems, which require high-order discretizations to solve accurately. It will solve, but may not be very accurate (it might be OK at equilibrium).
The source term is non-linear, so it will require many iterations to reach a solution.
It's a good idea to have a transient term in this type of equation and iterate towards equilibrium, as in the sketch below.
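A hedged sketch of what that transient, iterative approach might look like; the initial value, time step and sweep counts are arbitrary illustrations, not values from the original problem:

from fipy import (Grid2D, CellVariable, TransientTerm,
                  PowerLawConvectionTerm, ImplicitSourceTerm, numerix)

mesh = Grid2D(dx=0.001, nx=100, dy=0.0005, ny=100)
phil = CellVariable(name='Sol variable', mesh=mesh, value=0.5)   # non-zero initial guess
phil.constrain(0, mesh.facesBottom)
phil.constrain(1, mesh.facesTop)

# n is an expression in phil, so it is re-evaluated as phil changes between sweeps
n = 1.7e-6*((0.026*numerix.exp(phil/0.026) + phil - 0.026)
            + 2.25e-10*(0.026*numerix.exp(phil/0.026) - phil - 0.026))**0.5

eq = TransientTerm() + PowerLawConvectionTerm(coeff=(0., 1.)) + ImplicitSourceTerm(coeff=n)

dt = 1e-3
for step in range(100):
    for sweep in range(5):           # a few sweeps per step for the non-linear source
        eq.sweep(var=phil, dt=dt)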
So, I'm trying to solve this nonlinear system of equations and I am getting the error "cannot create mpf". Here's the sympy link. Anyone know how to solve this?
mpmath is made for doing numerical calculations and l is a symbol, thus the error (you used an mpmath tanh in your second attempt, but SymPy's tanh in the first example). You already have the solution for k from your first attempt. Solving for l requires solving a 12th order polynomial in exp(1.401*l) which you won't be able to do symbolically in general.
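To illustrate the distinction, a tiny example (the 1.401*l factor comes from the question; the rest is illustrative):

import sympy as sp
import mpmath

l = sp.Symbol('l')

# mpmath works on numbers, so feeding it a Symbol fails:
# mpmath.tanh(l)            # -> TypeError: cannot create mpf from l

# SymPy's own tanh keeps the expression symbolic:
expr = sp.tanh(1.401*l)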
I am trying to solve a system of N*N nonlinear equations, but I get stuck and do not understand what the problem is.
My equations are :
h_{j,i} = (T/2) \sum_{k=1}^{N} f(h_{k,j}) - (T/2) f(h_{i,j})
for i and j in [1..N]^2, where the h are the unknowns, f is a known function, and T is a parameter.
In all the examples I have found, there are maybe two or three equations/unknowns, so one can write them out directly. Here, however, I have too many equations, and I do not understand how to write the code without explicitly spelling out all the equations, and then use fsolve (in Python).
Thanks for your help
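A minimal sketch of how such a system can be set up for scipy.optimize.fsolve without writing the equations out by hand; N, T, f and the initial guess below are illustrative placeholders:

import numpy as np
from scipy.optimize import fsolve

N = 4            # illustrative size
T = 1.0          # illustrative parameter value
f = np.tanh      # stand-in for the known function f

def residuals(h_flat):
    # fsolve works on flat vectors, so reshape to the N x N array of unknowns
    h = h_flat.reshape(N, N)
    res = np.empty_like(h)
    for i in range(N):
        for j in range(N):
            res[j, i] = h[j, i] - (T/2)*f(h[:, j]).sum() + (T/2)*f(h[i, j])
    return res.ravel()

h0 = np.zeros(N*N)                              # initial guess
solution = fsolve(residuals, h0).reshape(N, N)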
I would like to use sympy to solve a definite integral that I know that Mathematica can solve. In Mathematica the following line
Integrate[z^2 (BesselI[0, z^2] - BesselI[1, z^2]) Exp[-z^2], {z, 0, x}]
yields
1/3 x^3 HypergeometricPFQ[{1/2,3/2},{1,5/2},-2 x^2]-1/10 x^5 HypergeometricPFQ[{3/2,5/2},{3,7/2},-2 x^2]
I would prefer to use Python with sympy for this, and I try to with the following code
import sympy
x, z = sympy.var('x z')
sympy.integrate(z**2*(sympy.besseli(0, z**2) - sympy.besseli(1, z**2))*sympy.exp(-z**2), (z, 0, x))
Unfortunately, the computation just hangs. I give up after about 30-40 minutes of waiting. In Mathematica, it takes less than a second. If I change the integrand, then I can get sympy to solve it. For example,
sympy.integrate(z**2*sympy.besseli(0, z**2), (z, 0, x))
yields
x**3*gamma(3/4)*hyper((3/4,), (1, 7/4), -x**4/4)/(4*gamma(7/4))
I am a long-time Mathematica user and have a pretty good sense of how to get it to solve tricky integrals. As a new sympy user, I lack that experience.
Are there any flags I could add?
Are there other ways to solve this problem in sympy?
If this integral is not possible with sympy, is there a way to understand sympy's limitations? Such as how sympy does integration versus Mathematica.
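Not an answer to the symbolic question, but a quick numerical cross-check is possible with mpmath (which ships with SymPy); the choice x = 1 below is arbitrary:

import mpmath

def integrand(z):
    return z**2*(mpmath.besseli(0, z**2) - mpmath.besseli(1, z**2))*mpmath.exp(-z**2)

x = 1.0                                      # arbitrary upper limit for the check
numeric = mpmath.quad(integrand, [0, x])

# Mathematica's closed form evaluated at the same point; if it is correct,
# this should agree with the quadrature result.
closed = (mpmath.mpf(1)/3*x**3*mpmath.hyper([0.5, 1.5], [1, 2.5], -2*x**2)
          - mpmath.mpf(1)/10*x**5*mpmath.hyper([1.5, 2.5], [3, 3.5], -2*x**2))
print(numeric, closed)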