Basic tensor calculus with index substitution using sympy - python

I would like to switch from Mathematica to SymPy to perform some basic index substitution in tensor products. I have large expressions like $A_{ab}\times B_{bcd}\times C_{cd}$, for instance. Such a product can be simplified because it involves some projectors built from the Kronecker symbol. In Mathematica, I defined this Kronecker symbol with substitution rules:
SetAttributes[\[Delta], Orderless];
\[Delta] /: \[Delta][k_, k_] = 3;
\[Delta] /: \[Delta][k_, l_]^2 = 3;
\[Delta] /: \[Delta][k_, l_]*(f_)[m1___, k_, m2___]*x___ := x*f[m1, l, m2];
\[Delta] /: \[Delta][l_, k_]*(f_)[m1___, k_, m2___]*x___ := x*f[m1, l, m2];
That allows me to perform a simple index substitution like $v_{ai}\times\delta_{ij} = v_{aj}$. I can then simplify the expression and obtain the complete result; it is the first step towards further calculations.
Is it possible to define something like this in Python using SymPy? I found several Ricci packages for tensor calculus, but they seem far too heavy for what I want to do. I also saw some rules for substituting indices with values, but I was not able to define what I want.

I'm not sure I fully understand what you are trying to do but sympy comes with some support for tensor expressions which might do what you want more directly:
https://docs.sympy.org/latest/modules/tensor/array_expressions.html
There is also the KroneckerDelta symbol which can be used in summations (although this might be a bit limited for what you want):
In [8]: k = symbols('k')
In [9]: s = Sum(KroneckerDelta(2, k), (k, 1, 3))
In [10]: s
Out[10]:
  3
 ___
 ╲
  ╲   δ
  ╱    2,k
 ╱
 ‾‾‾
k = 1
In [11]: s.doit()
Out[11]: 1
I don't know Mathematica so well, but from what I understand of the code you've shown, a more direct translation would look something like this:
from sympy import Function, Wild, symbols

Delta = Function('Delta')
a, b = symbols('a, b', cls=Wild)

rules = [
    (Delta(a, a), 3),
    (Delta(a, b)**2, 3),
]

def replace_all(e):
    for r, v in rules:
        e = e.replace(r, v)
    return e

x, y = symbols('x, y')
expr = Delta(x, x) + Delta(x, y)**2
print(replace_all(expr))
This kind of pattern matching doesn't support sequence variables, though. Instead, the usual way to do this in sympy is with expr.replace(f, g), where f and g are arbitrary Python functions, e.g.:
In [24]: is_match = lambda e: e.func == Delta and e.args[0] == e.args[1]
In [25]: replacement = lambda e: 3
In [26]: expr.replace(is_match, replacement)
Out[26]:
 2
Δ (x, y) + 3
Here the functions is_match and replacement could be arbitrarily complicated Python functions created with def rather than just lambda.
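For example, the index-renaming rule from the question, $v_{ai}\times\delta_{ij} = v_{aj}$, could be written that way. Here is a rough, self-contained sketch (contract_deltas and the v/Delta names are made up for illustration, not sympy API; it only handles the renaming rule, not the δ_kk → 3 contractions):
from sympy import Function, Mul, symbols

Delta = Function('Delta')
v = Function('v')

def contract_deltas(expr):
    # Rewrite Delta(i, j)*f(..., i, ...) as f(..., j, ...) inside products,
    # repeating until no more contractions are possible.
    def is_product(e):
        return e.is_Mul

    def rewrite(e):
        args = list(e.args)
        for d in [f for f in args if f.func == Delta]:
            i, j = d.args
            for k, f in enumerate(args):
                if f is d or not f.args:
                    continue
                if i in f.args:      # factor carries index i: rename i -> j
                    args[k] = f.func(*[j if s == i else s for s in f.args])
                    args.remove(d)
                    return Mul(*args)
                if j in f.args:      # Delta is symmetric: rename j -> i
                    args[k] = f.func(*[i if s == j else s for s in f.args])
                    args.remove(d)
                    return Mul(*args)
        return e

    new = expr.replace(is_product, rewrite)
    while new != expr:
        expr, new = new, new.replace(is_product, rewrite)
    return new

a, i, j = symbols('a i j')
print(contract_deltas(v(a, i) * Delta(i, j)))   # v(a, j)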

Related

solving equation in python with SymPy takes forever

I tried to solve this equation but it is still running.
I gave the symbol and the equation is "Eq((1 - (1 + x)**(-60))/x + 32*(1 + x)**(-60), 41.81)".
The way solve and solveset usually work is to split an expression into numerator and denominator, and return the solutions of the numerator that are not also roots of the denominator.
Let's define a helper function to put the real roots of an expression into a FiniteSet and one to give the final solution:
>>> from sympy import FiniteSet, real_roots, nsimplify, nsolve, Add, Eq
>>> from sympy.abc import x
>>> rr = lambda x: FiniteSet(*[i[0] for i in real_roots(x, multiple=False)])
>>> sol = lambda n, d: list(rr(n) - rr(d))
>>> go = lambda eq: sol(*eq.rewrite(Add).as_numer_denom())
Now we try this out on your original expression:
>>> eq = Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, 41.81)
>>> fsol = go(eq) # very slow
>>> [i.n(3) for i in fsol]
[-3.33, -2.56, -1.44, -0.568, -0.228, 0.0220]
If you check those out by substituting into the original expression (written as an expression), you will find that only the last one is valid:
>>> expr = eq.rewrite(Add)
>>> [expr.subs(x, i).n(3) for i in fsol]
[-42.1, -42.2, 4.72e+22, 2.64e+23, 1.97e+8, 1.31e-15]
Now let's replace that Float with a Rational and get solutions:
>>> req = nsimplify(eq, rational=True); req
Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, 4181/100)
>>> rsol = go(_) # pretty fast
>>> [i.n(3) for i in rsol]
[-2.00, 0.0220]
We know the 2nd solution is right; let's check the first:
>>> req.subs(x, rsol[0]).rewrite(Add).n(3)
-0.e-114
So both solutions appear to be valid and you don't get any spurious solutions which (by the way) I wasn't expecting from nsolve.
An exact analytic solution to this is unlikely but you can get numeric solutions e.g.:
In [18]: nsolve(eq, x, -2)
Out[18]: -1.99561339048822
Since this can be transformed into a polynomial you can find all real solutions like:
In [20]: p = Poly(nsimplify(eq).rewrite(Add).as_numer_denom()[0])
In [21]: [r[0].n() for r in p.real_roots(multiple=False)]
Out[21]: [-1.99561339048822, -1.0, 0, 0.0219988833527669]
Using as_numer_denom like this can potentially introduce spurious solutions though so you should check them (e.g. by plotting the function around each root). For example 0 is not actually a root.
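For example, one quick way to screen the candidates (a small sketch using the same tools as above) is to substitute each real root of the numerator back into the full expression and keep only the ones with a numerically zero residual:
from sympy import Add, Eq, Poly, nsimplify, symbols

x = symbols('x')
eq = Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, nsimplify(41.81, rational=True))
expr = eq.rewrite(Add)
num = expr.as_numer_denom()[0]
for r, m in Poly(num, x).real_roots(multiple=False):
    # genuine roots give a (numerically) zero residual; spurious ones
    # like 0 and -1 give nan/zoo or something far from zero
    print(r.n(5), expr.subs(x, r).n(5))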

SymPy: Evaluate given expression with given variables

I have a sympy expression involving two variables a, b. I would now like to evaluate this expression for specific values of a and b. Using a lambda like
import sympy
def get_expression(a, b):
    # Complex function with a simple result. I have no control here.
    return a*b + 2

a = sympy.Symbol('a')
b = sympy.Symbol('b')
z = get_expression(a, b)
f = lambda a, b: z
print(f(1, 1))
only gives
a*b + 2
though.
Any hints?
Turns out that lambdify is what I need:
f = sympy.lambdify([a, b], z)
print(f(1, 1))
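Not from the original answer, but worth noting: for a one-off evaluation, plain substitution also does the job, while lambdify pays off when the expression has to be evaluated many times (e.g. on numpy arrays):
print(z.subs({a: 1, b: 1}))              # 3

import numpy as np
f = sympy.lambdify([a, b], z, 'numpy')   # numpy-backed, vectorized
print(f(np.array([1.0, 2.0]), 3.0))      # [5. 8.]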

Corresponding Coefficients in Python SymPy Pattern Matching

I have a function f = 0.5/(z-3). I would like to know what the coefficients p and q would be if f were written in the following form: q/(1-p*z), but unfortunately sympy's match function returns None. Am I doing something wrong, or what is the right way of doing something like this?
Here is the code:
from sympy import symbols, solve, Wild

z = symbols('z')
p, q = Wild('p'), Wild('q')
print((0.5/(z - 3)).match(q/(1 - p*z)))
EDIT:
My expected answer is: q=-1/6 and p = 1/3
One way of course is
p, q = symbols('p q')
f = 0.5/(z - 3)
print(solve(f - q/(1 - p*z), p, q, rational=True))
But I don't know how to do that in pattern matching, or if it's capable of doing something like this.
Thanks in Advance =)
If you start by converting to linear form,
1 / (2*z - 6) == q / (1 - p*z)
# multiply both sides
# by (2*z - 6) * (1 - p*z)
1 - p*z == q * (2*z - 6)
then
from sympy import Eq, solve, symbols, Wild
z = symbols("z")
p,q = symbols("p q", cls=Wild)
solve(Eq(1 - p*z, q*(2*z - 6)), (p,q))
gives
{p_: 1/3, q_: -1/6}
as expected.
Edit: I found a slightly different approach:
solve(Eq(f, g)) is equivalent to solve(f - g) (implicitly ==0)
We can try to reduce f - g with simplify(f - g), but by default it doesn't do anything here, because the simplified result is more than 1.7 times longer than the original (1.7 being the default value of the ratio argument).
If we specify a higher ratio, like simplify(f - g, ratio=5), we get
>>> simplify(1/(2*z-6) - q/(1-p*z), ratio=5)
(z*p_ + 2*q_*(z - 3) - 1)/(2*(z - 3)*(z*p_ - 1))
This is now in a form the solver will deal with:
>>> solve(_, (p,q))
{p_: 1/3, q_: -1/6}
SymPy's pattern matcher only does minimal algebraic manipulation to match things. It doesn't match in this case because there is no 1 in the denominator. It would be better to match against a/(b + c*z) and manipulate a, b, and c into the p and q. solve can show you the exact formula:
In [7]: solve(Eq(a/(b + c*z), q/(1 - p*z)), (q, p))
Out[7]:
⎧   -c      a⎫
⎨p: ───, q: ─⎬
⎩    b      b⎭
Finally, it's always a good idea to use exclude when constructing Wild objects, like Wild('a', exclude=[z]). Otherwise you can get unexpected behavior like
In [11]: a, b = Wild('a'), Wild('b')
In [12]: S(2).match(a + b*z)
Out[12]:
⎧         2⎫
⎨a: 0, b: ─⎬
⎩         z⎭
which is technically correct, but probably not what you want.
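Putting those two points together, a rough sketch of the suggested a/(b + c*z) approach might look like this (my own sketch, not from the answer; using Rational instead of 0.5 keeps the coefficients exact):
from sympy import Rational, Wild, symbols

z = symbols('z')
a, b, c = [Wild(n, exclude=[z]) for n in 'abc']

f = Rational(1, 2)/(z - 3)
m = f.match(a/(b + c*z))     # {a_: 1/2, b_: -3, c_: 1}
p = -m[c]/m[b]               # from the solve() formula above: p = -c/b
q = m[a]/m[b]                #                                 q = a/b
print(p, q)                  # 1/3 -1/6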

Sympy: working with equalities manually

I'm currently doing a maths course where my aim is to understand the concepts and process rather than crunch through problem sets as fast as possible. When solving equations, I'd like to be able to poke at them myself rather than have them solved for me.
Let's say we have the very simple equation z + 1 = 4. If I were to solve this myself, I would obviously subtract 1 from both sides, but I can't figure out whether sympy provides a simple way to do this. At the moment the best solution I can come up with is:
from sympy import *
z = symbols('z')
eq1 = Eq(z + 1, 4)
Eq(eq1.lhs - 1, eq1.rhs - 1)
# Output:
# z == 3
Where the more obvious expression eq1 - 1 only subtracts from the left-hand side. How can I use sympy to work through equalities step by step like this (i.e. without having the solve() method just give me the answer)? Any pointers to the manipulations that are actually possible with sympy equalities would be appreciated.
There is a "do" method and discussion at https://github.com/sympy/sympy/issues/5031#issuecomment-36996878 that would allow you to "do" operations to both sides of an Equality. It's not been accepted as an addition to SymPy but it is a simple add-on that you can use. It is pasted here for convenience:
from sympy import S, Dummy, Lambda, FunctionClass

def do(self, e, i=None, doit=False):
    """Return a new Eq using the function given or a model
    expression in which a variable represents each
    side of the expression.

    Examples
    ========

    >>> from sympy import Eq
    >>> from sympy.abc import i, x, y, z
    >>> eq = Eq(x, y)

    When the argument passed is an expression with one
    free symbol, that symbol is used to indicate a "side"
    in the Eq and an Eq will be returned with the sides
    from self replaced in that expression. For example, to
    add 2 to both sides:

    >>> eq.do(i + 2)
    Eq(x + 2, y + 2)

    To add x to both sides:

    >>> eq.do(i + x)
    Eq(2*x, x + y)

    In the preceding it was actually ambiguous whether x or i
    was to be added, but the rule is that any symbols that are
    already in the expression are not to be interpreted as the
    dummy variable. If we try to add z to each side, however, an
    error is raised because now it is unclear whether i or z is being
    added:

    >>> eq.do(i + z)
    Traceback (most recent call last):
    ...
    ValueError: not sure what symbol is being used to represent a side

    The ambiguity must be resolved by indicating with another parameter
    which is the dummy variable representing a side:

    >>> eq.do(i + z, i)
    Eq(x + z, y + z)

    Alternatively, if only one Dummy symbol appears in the expression then
    it will be automatically used to represent a side of the Eq.

    >>> eq.do(2*Dummy() + z)
    Eq(2*x + z, 2*y + z)

    Operations like differentiation must be passed as a
    lambda:

    >>> Eq(x, y).do(lambda i: i.diff(x))
    Eq(1, 0)

    Because doit=False by default, the result is not evaluated. To
    evaluate it, either use the doit method or pass doit=True.

    >>> _.doit() == Eq(x, y).do(lambda i: i.diff(x), doit=True)
    True
    """
    if not isinstance(e, (FunctionClass, Lambda, type(lambda: 1))):
        e = S(e)
        imaybe = e.free_symbols - self.free_symbols
        if not imaybe:
            raise ValueError('expecting a symbol')
        if imaybe and i and i not in imaybe:
            raise ValueError('indicated i not in given expression')
        if len(imaybe) != 1 and not i:
            d = [i for i in imaybe if isinstance(i, Dummy)]
            if len(d) != 1:
                raise ValueError(
                    'not sure what symbol is being used to represent a side')
            i = set(d)
        else:
            i = imaybe
        i = i.pop()
        f = lambda side: e.subs(i, side)
    else:
        f = e
    return self.func(*[f(side) for side in self.args], evaluate=doit)
from sympy.core.relational import Equality
Equality.do = do
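With that in place, the z + 1 = 4 example from the question can be worked through step by step; a minimal usage sketch:
from sympy import Eq, symbols

z, s = symbols('z s')
eq1 = Eq(z + 1, 4)
step1 = eq1.do(s - 1)    # s stands for "a side": subtract 1 from both sides
print(step1)             # Eq(z, 3)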

Dynamically build a lambda function in python

Suppose that I want to generate a function to be later incorporated in a set of equations to be solved with scipy's fsolve function. I want to create a function like this:
x_1 + x_2 + ... + x_n = 1
in which the number of variables will be dependent on the number of components. For example, if I have 2 components:
f = lambda x: x[0] + x[1] - 1
for 3:
f = lambda x: x[0] + x[1] + x[2] - 1
I specify the components as an array within the arguments of the function to be called:
def my_func(components):
    for component in components:
        .....
        .....
    return f
I just can't find a way of doing this. I have to be able to make it this way, as this function and other functions need to be solved together with fsolve:
x0 = scipy.optimize.fsolve(f, [0, 0, 0, 0 ....])
Any help would be appreciated
Thanks!
Since I'm not sure which is the best way of doing this I will fully explain what I'm trying to do:
I'm trying to generate these two functions, to be solved later.
So I want to create a function teste([list of components]) that can return these two equations (Psat(T) is a function I can call depending on the component, and P is a constant (value = 760)).
Example:
teste(['Benzene','Toluene'])
would return:
xBenzene + xToluene = 1
xBenzene*Psat('Benzene') + xToluene*Psat('Toluene') = 760
in the case of calling:
teste(['Benzene','Toluene','Cumene'])
it would return:
xBenzene + xToluene + xCumene = 1
xBenzenePsat('Benzene') + xToluenePsat('Toluene') + xCumene*Psat('Cumene') = 760
All these x values are not something I can calculate and turn into a list I can sum. They are variables that are created as a function of the number of components I have in the system...
Hope this helps to find the best way of doing this
A direct translation would be:
f = lambda *x: sum(x) - 1
But not sure if that's really what you want.
You can dynamically build a lambda as a string and then turn it into a function with eval, like this:
a = [1, 2, 3]
s = "lambda x: "
s += " + ".join("x[" + str(i) + "]" for i in range(0, 3))  # specify any range
s += " - 1"
print(s)
f = eval(s)
print(f(a))
I would take advantage of numpy and do something like:
import numpy as np

def teste(molecules):
    P = np.array([Psat(molecule) for molecule in molecules])
    f1 = lambda x: np.sum(x) - 1
    f2 = lambda x: np.dot(x, P) - 760
    return f1, f2
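Not part of the answer, but to connect this with the fsolve call from the question: the two functions returned by teste() can be stacked into a single residual vector, roughly like this (the Psat values below are made up, since Psat isn't shown):
from scipy.optimize import fsolve

def Psat(molecule):
    # placeholder saturation pressures; the real Psat comes from the question
    return {'Benzene': 1180.0, 'Toluene': 485.0}[molecule]

f1, f2 = teste(['Benzene', 'Toluene'])
residuals = lambda x: [f1(x), f2(x)]   # both equations as one vector
print(fsolve(residuals, [0.5, 0.5]))   # mole fractions summing to 1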
Actually what you are trying to solve is a possibly underdetermined system of linear equations, of the form A.x = b. You can construct A and b as follows:
A = np.vstack((np.ones((len(molecules),)),
               [Psat(molecule) for molecule in molecules]))
b = np.array([1, 760])
And you could then create a single lambda function returning a 2 element vector as:
return lambda x: np.dot(A, x) - b
But I really don't think that is the best approach to solving your equations: either you have a single solution you can get with np.linalg.solve(A, b), or you have a linear system with infinitely many solutions, in which case what you want to find is a base of the solution space, not a single point in that space, which is what you will get from a numerical solver that takes a function as input.
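For the two-component case, for example, A is square, so (reusing the made-up Psat values from the sketch above) the system can be solved directly:
molecules = ['Benzene', 'Toluene']
A = np.vstack((np.ones(len(molecules)),
               [Psat(m) for m in molecules]))
b = np.array([1.0, 760.0])
print(np.linalg.solve(A, b))   # matches the fsolve result above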
If you really want to define a function by building it up iteratively, you can. I can't think of any situation where this would be the best answer, or even a reasonable one, but it's what you asked for, so:
def my_func(components):
    f = lambda x: -1
    for component in components:
        def wrap(f):
            return lambda x: component * x[0] + f(x[1:])
        f = wrap(f)
    return f
Now:
>>> f = my_func([1, 2, 3])
>>> f([4,5,6])
44
Of course this will be no fun to debug. For example, look at the traceback from calling f([4,5]).
def make_constraint_function(components):
    def constraint(vector):
        return sum(vector[component] for component in components) - 1
    return constraint
You could do it with a lambda, but a named function may be more readable. Functions defined with def can do anything lambdas can, and more. Make sure to give the function a good docstring, and use variable and function names appropriate for your program.
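Usage would look something like this (treating the components simply as indices into the solution vector, purely for illustration):
constraint = make_constraint_function([0, 1, 2])
print(constraint([0.25, 0.25, 0.5]))   # 0.0: the three fractions sum to 1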
