Prevent evaluation of exponentials in SymPy - python

I'm using SymPy as a code generator for functions that contain many exponentials. Hence, it is important for numerical stability that the arguments of the exponentials are not evaluated. I want to prevent this:
>>> import sympy as sp
>>> x, y = sp.symbols('x y')
>>> expr = sp.exp(5.*x - 10.)
>>> print(expr)
4.53999297624849e-5*exp(5.0*x)
This evaluation can lead to numerically inaccurate results.
I can prevent the evaluation of the exponentials as follows:
>>> expr = sp.exp(5.*x - 10., evaluate=False)
>>> print(expr)
exp(5.0*x - 10.0)
However, when I perform operations like a substitution or differentiation on the expression, the exponential is evaluated again:
>>> expr = sp.exp(5.*x - 10., evaluate=False)
>>> expr.subs(x, y)
4.53999297624849e-5*exp(5.0*y)
>>> expr.diff(x, 1)
5.0*(4.53999297624849e-5*exp(5.0*x))
What is the correct way in SymPy to prevent the evaluation of the exponential under such operations?

The most obvious point is that you are using floats for integer values, e.g.:
In [8]: exp(5*x - 10)
Out[8]: exp(5*x - 10)
In [9]: exp(5.*x - 10.)
Out[9]: 4.53999297624849e-5*exp(5.0*x)
Maybe in your real problem you want to work with non-integers. Again rationals should be used for exact calculations:
In [10]: exp(Rational(1, 3)*x - S(3)/2)
Out[10]: exp(x/3 - 3/2)
Perhaps your input numbers are not really rational and you have them as Python floats but you want to keep them from evaluating. You can use symbols and then only substitute for them when evaluating:
In [12]: a, b = symbols('a b')
In [13]: exp(a*x + b).evalf(subs={a:5.0, b:10.})
Out[13]: exp(a*x + b)
In [14]: exp(a*x + b).evalf(subs={x:1, a:5.0, b:10.})
Out[14]: 3269017.37247211
In [15]: exp(a*x + b).subs({a:5.0, b:10.})
Out[15]: 22026.4657948067*exp(5.0*x)
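Note the difference between the last two results: evalf(subs=...) keeps a and b symbolic and only inserts the floats during the numerical evaluation, whereas subs(...) puts the floats into the expression tree, at which point exp pulls the constant factor back out, which is exactly the evaluation you are trying to avoid.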
If all of this seems awkward and you really do just want to stuff floats in and prevent evaluation, then you can use UnevaluatedExpr:
In [21]: e = exp(UnevaluatedExpr(5.0)*x - UnevaluatedExpr(10.))
In [22]: e
Out[22]: exp(x*5.0 - 10.0)
In [23]: e.doit()  # doit triggers evaluation
Out[23]: 4.53999297624849e-5*exp(5.0*x)
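For completeness, here is a minimal sketch of the placeholder approach applied to the operations from the question (subs and diff); the symbols a and b and the final float values are only illustrative:
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

# keep the coefficients symbolic while manipulating the expression
expr = sp.exp(a*x + b)        # stays as exp(a*x + b)
dexpr = expr.diff(x)          # a*exp(a*x + b), argument still intact
dexpr_y = dexpr.subs(x, y)    # a*exp(a*y + b)

# insert the floats only at the very end, for numerical evaluation
print(dexpr_y.evalf(subs={a: 5.0, b: -10.0, y: 2.0}))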

Related

Symbolic simplification of algebraic expressions composed of complex numbers

I have a question concerning the symbolic simplification of algebraic expressions composed of complex numbers. I have executed the following Python script:
from sympy import *
expr1 = 3*(2 - 11*I)**Rational(1, 3)*(2 + 11*I)**Rational(2, 3)
expr2 = 3*((2 - 11*I)*(2 + 11*I))**Rational(1, 3)*(2 + 11*I)**Rational(1, 3)
print("expr1 = {0}".format(expr1))
print("expr2 = {0}\n".format(expr2))
print("simplify(expr1) = {0}".format(simplify(expr1)))
print("simplify(expr2) = {0}\n".format(simplify(expr2)))
print("expand(expr1) = {0}".format(expand(expr1)))
print("expand(expr2) = {0}\n".format(expand(expr2)))
print("expr1.equals(expr2) = {0}".format(expr1.equals(expr2)))
The output is:
expr1 = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
expr2 = 3*((2 - 11*I)*(2 + 11*I))**(1/3)*(2 + 11*I)**(1/3)
simplify(expr1) = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
simplify(expr2) = 15*(2 + 11*I)**(1/3)
expand(expr1) = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
expand(expr2) = 15*(2 + 11*I)**(1/3)
expr1.equals(expr2) = True
My question is why the simplification does not work for expr1 but works for expr2, though the expressions are algebraically equal.
What has to be done to get the same result from simplify for expr1 as for expr2?
Thanks in advance for your replies.
Kind regards
Klaus
You can use the minimal polynomial to place algebraic numbers into a canonical representation:
In [30]: x = symbols('x')
In [31]: p1 = minpoly(expr1, x, polys=True)
In [32]: p2 = minpoly(expr2, x, polys=True)
In [33]: p1
Out[33]: Poly(x**2 - 60*x + 1125, x, domain='QQ')
In [34]: p2
Out[34]: Poly(x**2 - 60*x + 1125, x, domain='QQ')
In [35]: [r for r in p1.all_roots() if p1.same_root(r, expr1)]
Out[35]: [30 + 15⋅ⅈ]
In [36]: [r for r in p2.all_roots() if p2.same_root(r, expr2)]
Out[36]: [30 + 15⋅ⅈ]
This method should work for any two expressions that represent algebraic numbers built up from algebraic operations: either they give exactly the same result or they are distinct numbers.
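If you need to do this repeatedly, the recipe above can be wrapped in a small helper (a sketch only; canonical_algebraic is a name made up here, not a SymPy function):
from sympy import I, Rational, minpoly, symbols

x = symbols('x')

def canonical_algebraic(e):
    # represent e as the explicit root of its minimal polynomial that equals e
    p = minpoly(e, x, polys=True)
    return next(r for r in p.all_roots() if p.same_root(r, e))

expr1 = 3*(2 - 11*I)**Rational(1, 3)*(2 + 11*I)**Rational(2, 3)
expr2 = 3*((2 - 11*I)*(2 + 11*I))**Rational(1, 3)*(2 + 11*I)**Rational(1, 3)

print(canonical_algebraic(expr1))                                # 30 + 15*I
print(canonical_algebraic(expr1) == canonical_algebraic(expr2))  # True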
It works (but nominally) for expr2 because when the product in the radical is expanded you get the cube root of 125, which is reported as 5. But SymPy tries to be careful about putting radicals together under a common exponent, an operation that is not generally valid (e.g. root(-1, 3)*root(-1, 3) != root(1, 3) because the principal values are used for the roots). But if you want the bases to combine under a common exponent, you can force it to happen with powsimp:
>>> from sympy.abc import x, y
>>> from sympy import powsimp, root, solve, numer, together
>>> powsimp(root(x,3)*root(y,3), force=True)
(x*y)**(1/3)
But that only works if the exponents are the same:
>>> powsimp(root(x,3)*root(y,3)**2, force=True)
x**(1/3)*y**(2/3)
As you saw, equals was able to show that the two expressions were the same. One way this could be done is to solve expr1 - expr2 for root(2 + 11*I, 3) and see whether any of the resulting candidates equals that root:
>>> solve(expr1 - expr2, root(2 + 11*I,3))
[0, 5/(2 - 11*I)**(1/3)]
We can check the non-zero candidate:
>>> numer(together(_[1]-root(2+11*I,3)))
-(2 - 11*I)**(1/3)*(2 + 11*I)**(1/3) + 5
>>> powsimp(_, force=True)
5 - ((2 - 11*I)*(2 + 11*I))**(1/3)
>>> expand(_)
0
So we have shown (with force) that the expression was the same as that for which we solved. (And, as Oscar showed while I was writing this, minpoly is a nice candidate when it works: e.g. minpoly(expr1-expr2) -> x which means expr1 == expr2.)
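In that spirit, a quick check along the lines Oscar mentioned (reusing x from above; the minimal polynomial of a quantity that really is zero is just x):
>>> from sympy import minpoly, symbols
>>> x = symbols('x')
>>> minpoly(expr1 - expr2, x)
x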

Basic tensor calculus with index substitution using sympy

I would like to switch from Mathematica to SymPy to perform some basic index substitutions in tensor products. I have large expressions like $A_{ab}\times B_{bcd}\times C_{cd}$, for instance. These products can be simplified because they involve projectors built from the Kronecker symbol. In Mathematica, I defined this Kronecker symbol with substitution rules:
SetAttributes[\[Delta], Orderless];
\[Delta] /: \[Delta][k_, k_] = 3;
\[Delta] /: \[Delta][k_, l_]^2 = 3;
\[Delta] /: \[Delta][k_, l_]*(f_)[m1___, k_, m2___]*x___ := x*f[m1, l, m2];
\[Delta] /: \[Delta][l_, k_]*(f_)[m1___, k_, m2___]*x___ := x*f[m1, l, m2];
That allows me to perform a simple index substitution like $v_{ai}\times\delta_{ij} = v_{aj}$. I can then simplify the expression and obtain its final form. This is the first step toward further calculations.
Is it possible to define something like this in Python using SymPy? I found several Ricci-calculus packages for tensor calculus, but they seem far too heavy for what I want to do. I also saw some rules for substituting indices with values, but I was not able to define what I want.
I'm not sure I fully understand what you are trying to do but sympy comes with some support for tensor expressions which might do what you want more directly:
https://docs.sympy.org/latest/modules/tensor/array_expressions.html
There is also the KroneckerDelta symbol which can be used in summations (although this might be a bit limited for what you want):
In [8]: k = symbols('k')
In [9]: s = Sum(KroneckerDelta(2, k), (k, 1, 3))
In [10]: s
Out[10]: Sum(KroneckerDelta(2, k), (k, 1, 3))
In [11]: s.doit()
Out[11]: 1
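KroneckerDelta can also contract an index against another factor inside the Sum; here is a small sketch where v is a hypothetical IndexedBase standing in for a tensor component:
from sympy import IndexedBase, KroneckerDelta, Sum, symbols

k = symbols('k', integer=True)
v = IndexedBase('v')

s = Sum(KroneckerDelta(2, k)*v[k], (k, 1, 3))
print(s.doit())   # should give v[2]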
I don't know Mathematica very well, but from what I understand of the code you've shown, a more direct translation would look something like this:
from sympy import Function, Wild, symbols

Delta = Function('Delta')
a, b = symbols('a, b', cls=Wild)

# pattern -> replacement rules
rules = [
    (Delta(a, a), 3),
    (Delta(a, b)**2, 3),
]

def replace_all(e):
    for r, v in rules:
        e = e.replace(r, v)
    return e

x, y = symbols('x, y')
expr = Delta(x, x) + Delta(x, y)**2
print(replace_all(expr))
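If the Wild patterns match as intended, both terms reduce to 3 and the script should print 6; it is worth verifying this on your SymPy version.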
This kind of pattern matching doesn't support sequence variables. Instead, the usual way to do this in sympy is to use arbitrary Python functions with expr.replace(f, g), where you define f and g as Python functions, e.g.:
In [24]: is_match = lambda e: e.func == Delta and e.args[0] == e.args[1]
In [25]: replacement = lambda e: 3
In [26]: expr.replace(is_match, replacement)
Out[26]: Delta(x, y)**2 + 3
Here the functions is_match and replacement could be arbitrarily complicated Python functions created with def rather than just lambda.
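For example, a rough sketch of the index-contraction rule from the question ($v_{ai}\times\delta_{ij} = v_{aj}$) using such functions; here delta and v are hypothetical undefined functions standing in for the Kronecker symbol and a tensor, and the code makes no attempt to cover all the corner cases the Mathematica rules handle:
from sympy import Function, Mul, symbols

delta = Function('delta')
v = Function('v')
a, i, j = symbols('a i j')

def is_contraction(e):
    # a product that contains a delta sharing an index with another factor
    if not isinstance(e, Mul):
        return False
    deltas = [f for f in e.args if f.func == delta]
    others = [f for f in e.args if f.func != delta]
    return any(set(d.args) & set(o.args) for d in deltas for o in others)

def contract(e):
    factors = list(e.args)
    for d in [f for f in factors if f.func == delta]:
        k, l = d.args
        for n, o in enumerate(factors):
            if o is d or o.func == delta or not o.args:
                continue
            if k in o.args:
                # delta(k, l)*f(..., k, ...) -> f(..., l, ...)
                factors[n] = o.func(*[l if s == k else s for s in o.args])
                factors.remove(d)
                break
            if l in o.args:
                factors[n] = o.func(*[k if s == l else s for s in o.args])
                factors.remove(d)
                break
    return Mul(*factors)

expr = v(a, i)*delta(i, j)
print(expr.replace(is_contraction, contract))   # expected: v(a, j)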

solving equation in python with SymPy takes forever

I tried to solve this equation but it is still running.
I declared the symbol, and the equation is Eq((1 - (1 + x)**(-60))/x + 32*(1 + x)**(-60), 41.81).
The way solve and solveset usually work is to split an expression into numerator and denominator and return the solutions of the numerator that are not also solutions of the denominator.
Let's define a helper to put the real roots (from real_roots) into a FiniteSet and another to give the final solutions:
>>> from sympy import FiniteSet, real_roots, nsimplify, Add, Eq
>>> from sympy.abc import x
>>> rr = lambda x: FiniteSet(*[i[0] for i in real_roots(x, multiple=False)])
>>> sol = lambda n, d: list(rr(n) - rr(d))
>>> go = lambda eq: sol(*eq.rewrite(Add).as_numer_denom())
Now we try this out on your original expression:
>>> eq = Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, 41.81)
>>> fsol = go(eq) # very slow
>>> [i.n(3) for i in fsol]
[-3.33, -2.56, -1.44, -0.568, -0.228, 0.0220]
If you check those by substituting them into the original equation (rewritten as an expression), you will find that only the last one is valid:
>>> expr = eq.rewrite(Add)
>>> [expr.subs(x, i).n(3) for i in fsol]
[-42.1, -42.2, 4.72e+22, 2.64e+23, 1.97e+8, 1.31e-15]
Now let's replace that Float with a Rational and get solutions:
>>> req = nsimplify(eq, rational=True); req
Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, 4181/100)
>>> rsol = go(_) # pretty fast
>>> [i.n(3) for i in rsol]
[-2.00, 0.0220]
We know the 2nd solution is right; let's check the first:
>>> req.subs(x, rsol[0]).rewrite(Add).n(3)
-0.e-114
So both solutions appear to be valid and you don't get any spurious solutions, which (by the way) I wasn't expecting.
An exact analytic solution to this is unlikely but you can get numeric solutions e.g.:
In [18]: nsolve(eq, x, -2)
Out[18]: -1.99561339048822
Since this can be transformed into a polynomial you can find all real solutions like:
In [20]: p = Poly(nsimplify(eq).rewrite(Add).as_numer_denom()[0])
In [21]: [r[0].n() for r in p.real_roots(multiple=False)]
Out[21]: [-1.99561339048822, -1.0, 0, 0.0219988833527669]
Using as_numer_denom like this can potentially introduce spurious solutions, though, so you should check them (e.g. by substituting them back or by plotting the function around each root). For example, 0 is not actually a root.
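One way to carry out that check, reusing the numerator/denominator split (a sketch; it rationalizes the equation as above and simply discards candidates that also make the denominator vanish):
from sympy import Add, Eq, nsimplify, real_roots
from sympy.abc import x

eq = Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, 41.81)
num, den = nsimplify(eq, rational=True).rewrite(Add).as_numer_denom()
num_roots = {r[0] for r in real_roots(num, multiple=False)}
den_roots = {r[0] for r in real_roots(den, multiple=False)}
print([r.n(3) for r in sorted(num_roots - den_roots)])   # only the genuine real solutions remain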

Check if an equation is linear for a specific set of variables

I have a script that automatically generates equations.
The equations are constructed using sympy symbols.
I would like to know whether there is a way to check if the equations are linear in terms of certain variables.
e.g.
a, b, c, d = sympy.symbols('a, b, c, d')
eq1 = c*b*a + b*a + a + c*d
check for the following: is eq1 linear in terms of a, d?
True
A function is (jointly) linear in a given set of variables if all second-order derivatives are identically zero (including mixed ones). This can be checked as follows:
def is_linear(expr, vars):
    for x in vars:
        for y in vars:
            try:
                if not sympy.Eq(sympy.diff(expr, x, y), 0):
                    return False
            except TypeError:
                return False
    return True
In the loop, every second derivative is taken and checked for equality to zero. If sympy cannot decide whether it is zero (the truth test raises a TypeError), it is treated as not identically zero and the function returns False.
Output:
>>> is_linear(eq1, [a,d])
True
>>> is_linear(eq1, [a,c])
False
To check for separate linearity (e.g., separately in a and separately in b), drop mixed partial derivatives:
def is_separately_linear(expr, vars):
    for x in vars:
        try:
            if not sympy.Eq(sympy.diff(expr, x, x), 0):
                return False
        except TypeError:
            return False
    return True
Output:
>>> is_separately_linear(eq1, [a,d])
True
>>> is_separately_linear(eq1, [a,c])
True
A simpler way would be to check the degree of the expression as a polynomial in each variable.
In [17]: eq1 = c*b*a + b*a + a + c*d
In [18]: degree(eq1, a)
Out[18]: 1
In [19]: degree(eq1, d)
Out[19]: 1
and the expression is linear if the polynomial degree in each variable is <= 1.
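A small wrapper around this degree check might look as follows (a sketch; note that it tests the degree in each variable separately, so it agrees with is_separately_linear above rather than with the joint check):
import sympy

a, b, c, d = sympy.symbols('a, b, c, d')
eq1 = c*b*a + b*a + a + c*d

def is_linear_in(expr, syms):
    # assumes expr is polynomial in the chosen variables
    return all(sympy.degree(expr, s) <= 1 for s in syms)

print(is_linear_in(eq1, [a, d]))   # True
print(is_linear_in(eq1, [a, c]))   # True, like is_separately_linear(eq1, [a, c])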
If you know the expression is a polynomial in your variables, you can also just check for powers that contain the variable.
In [21]: [i for i in eq1.atoms(Pow) if i.base == a]
Out[21]: []
In [22]: eq2 = b*a**2 + d + c
In [23]: [i for i in eq2.atoms(Pow) if i.base == a]
Out[23]: [a**2]
To expand on the answer from 404: if $f_{xy}=0$ then $f_{yx}=0$, so the computation time of the mixed-derivative solution can be cut in half.
from itertools import combinations_with_replacement

def is_linear(expr, variables):
    combs = combinations_with_replacement(variables, 2)
    try:
        return all(sympy.Eq(sympy.diff(expr, *t), 0) for t in combs)
    except TypeError:
        return False
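Usage matches the earlier helpers; if any second derivative is nonzero or cannot be decided, False is returned:
>>> is_linear(eq1, [a, d])
True
>>> is_linear(eq1, [a, c])
False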

Sympify string expression depending on a real valued argument

I expected the following code to evaluate the derivative of sin(t):
import sympy as sy
t = sy.Symbol('t', real=True)
expr = sy.sympify('sin(t)')
dexpr = sy.diff(expr, t)
print(dexpr)
But it actually prints 0. If I change the first line to t = sy.Symbol('t'), it works. It looks like sympy thinks there are two different t's.
The question: how do I tell sympy that my string expression depends on the real-valued argument t, and how do I sympify this string correctly?
If you declare t as a real variable, it will be considered a different variable from Symbol('t') (which has no assumptions).
Try this way:
In [1]: treal = Symbol('t', real=True)
In [2]: t = Symbol('t')
In [3]: expr = sympify('sin(t)')
In [4]: expr.diff(treal)
Out[4]: 0
In [5]: expr.diff(t)
Out[5]: cos(t)
In [6]: treal == t
Out[6]: False
In [7]: expr_real = expr.subs(t, treal)
In [8]: expr_real.diff(treal)
Out[8]: cos(t)
In input [6] you can see that the two variables are considered different, even though both print as t. If you differentiate your sympified expression by the real variable (input [4]), the expression is treated as a constant, because the two t's are not the same variable.
In input [7] I replaced t with treal, so that in input [8] I was able to differentiate the expression correctly.
EDIT
A quicker approach is to specify in sympify a mapping between the names in the string and your local variables:
In [1]: t = Symbol('t', real=True)
In [2]: expr = sympify('sin(t)', locals={'t': t})
In [3]: expr
Out[3]: sin(t)
In [4]: expr.diff(t)
Out[4]: cos(t)
In this way, t will be set to the variable defined in input [1].
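A quick way to confirm that the parsed expression really uses your real-valued symbol (a small sketch in the same style as the question's code):
import sympy as sy

t = sy.Symbol('t', real=True)
expr = sy.sympify('sin(t)', locals={'t': t})
print(expr.free_symbols == {t})   # True: the parsed t is the real symbol
print(sy.diff(expr, t))           # cos(t)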
