SymPy: How to implement functionals and functional derivatives - python

Dear StackOverflow community,
Although the Python package SymPy provides functionality for various QM (e.g. wave functions and operators) and QFT (e.g. gamma matrices) operations, there is no support for functional derivatives.
I would like to implement the following analytically known functional derivative
D f(t) / D f(t') = delta(t - t') (delta distribution)
to compute more interesting results, e.g.,
D F[f] / D f(t)
where ordinary differentiation rules apply until D f(t) / D f(t') has to be computed. I have included an example below. I am aware that SymPy already supports taking derivatives with respect to functions, but that is not a functional derivative.
With best regards, XeLasar
Example:
F[f] := exp(integral integral f(x) G(x, y) f(y) dx dy)
D F[f] / D f(t) = ( integral f(x) * G(x, y) * delta(t - y) dx dy
                  + integral delta(t - x) * G(x, y) * f(y) dx dy ) * F[f]
                = ( integral f(x) * G(x, t) dx
                  + integral G(t, y) * f(y) dy ) * F[f]
                = 2 * ( integral G(t, t') * f(t') dt' ) * F[f]
Note: The x and y integrals collapse due to the deltas that arise from D f(t) / D f(t')! G(t, t') is an arbitrary symmetric function, so the two remaining integrals can be combined by renaming the integration variables.

I found a solution to the problem and I want to share it so others don't have to waste as much time as I did.
Rewriting the Derivative() function in SymPy is unfortunately not an option, as differentiation rules are defined within the expression classes. Although the SymPy source feels like spaghetti, it is well tested and should not be touched.
It is best explained with an example, but roughly speaking: I found a convenient substitution that lets the functional derivative be performed with SymPy's ordinary Derivative() machinery.
Example:
Z[f] = Integral(f(x) G(x,y) f(y), x, y) , let f(x) -> A(z), f(y) -> B(z)
     = Integral(A(z) G(x,y) B(z), x, y)
dZ/dz = Integral(dA/dz G(x,y) B(z) + A(z) G(x,y) dB/dz, x, y)
Re-substitute: A(z) -> f(x), B(z) -> f(y)
Insert: dA/dz -> delta(t - x), dB/dz -> delta(t - y)
=> D Z[f] / D f(t) = Integral(delta(t-x) G(x,y) f(y) + f(x) G(x,y) delta(t-y), x, y)
                   = Integral(G(t,y) f(y), y) + Integral(f(x) G(x,t), x)
Although this is not a general proof, the substitution works in my use cases. Since it relies on SymPy's ordinary Derivative() routine, everything works flawlessly. A minimal sketch of the idea in code follows.
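Something like this sketch (the auxiliary functions A and B, the fresh variable z, and the infinite integration limits are my choices; how far doit() collapses the deltas may depend on the SymPy version):

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
f, G, A, B = sp.Function('f'), sp.Function('G'), sp.Function('A'), sp.Function('B')

# Z[f] = Integral(f(x) * G(x, y) * f(y), x, y)
Z = sp.Integral(f(x) * G(x, y) * f(y), (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo))

# Substitute each occurrence of f by an auxiliary function of a fresh variable z
Z_sub = Z.subs({f(x): A(z), f(y): B(z)})

# Ordinary derivative with respect to z (differentiates under the integral sign)
dZ = sp.diff(Z_sub, z)

# Insert dA/dz -> delta(t - x), dB/dz -> delta(t - y), then re-substitute A, B
dZ = dZ.subs({sp.Derivative(A(z), z): sp.DiracDelta(t - x),
              sp.Derivative(B(z), z): sp.DiracDelta(t - y)})
dZ = dZ.subs({A(z): f(x), B(z): f(y)})

# Collapse the deltas: D Z[f] / D f(t)
print(dZ.doit())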
With best regards, XeLasar

Related

Best way to take a function (lambda) as input, and also have the variable within the computation itself?

I have a function that should compute an integral, taking some function as input. I'd like the code to compute a definite integral of <some function in terms of x, e.g. 3*x or 3*x*(1-x), etc.> * np.sin(np.pi * x). I'm using scipy for this:
import numpy as np
import scipy.integrate as integrate

def calculate(a):
    test = integrate.quad(a*np.sin(np.pi * x), 0, 1)
    return test

a = lambda x: 3*x
calculate(a)
Now this implementation will fail because of the discrepancy between a and x. I tried defining x as x = lambda x: x, but that won't work because I get an error about multiplying a float by a function.
Any suggestions?
Since you are trying to combine two symbolic expressions before computing the definite integral numerically, I think this might be a good application for SymPy's symbolic manipulation tools.
from sympy import symbols, Integral, sin, pi

x = symbols('x')

def calculate(a_exp):
    test = Integral(a_exp * sin(pi * x), (x, 0, 1)).evalf()
    return test

a_exp = 3*x
print(calculate(a_exp))
# 0.954929658551372
Note: I changed the name of a to a_exp to make it clear that this is an expression rather than a function.
If you decide to use SymPy, note that you might also be able to compute the integral symbolically.
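For instance, for the example above (a quick sketch; the closed form is 3/pi):

from sympy import symbols, integrate, sin, pi

x = symbols('x')
print(integrate(3*x * sin(pi*x), (x, 0, 1)))  # 3/pi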
Update: Importing SymPy might be overkill for this
If computation speed is more important than precision, you can easily calculate the integral approximately using some simple discretized method.
For example, the functions below calculate the integral approximately with increasingly sophisticated methods. The accuracy of the first two will improve as n is increased and also depends on the nature of a_func etc.
import numpy as np
from scipy.integrate import trapz, quad

def calculate2(a_func, n=100):
    # left Riemann sum on a uniform grid
    dx = 1/n
    x = np.linspace(0, 1-dx, n)
    y = a_func(x) * np.sin(np.pi*x)
    return np.sum(y) * dx

def calculate3(a_func, n=100):
    # trapezoidal rule
    x = np.linspace(0, 1, n+1)
    y = a_func(x) * np.sin(np.pi*x)
    return trapz(y, x)

def calculate4(a_func):
    # adaptive quadrature
    f = lambda x: a_func(x) * np.sin(np.pi*x)
    return quad(f, 0, 1)
a_func = lambda x: 3*x
print(calculate2(a_func))
# 0.9548511174430737
print(calculate3(a_func))
# 0.9548511174430737
print(calculate4(a_func)[0])
# 0.954929658551372
I'm not an expert on numerical integration so there may be better ways to do this than these.

How to create the equivalent of Excel Solver valueof function?

I have the following equation: x/0.2 * (0.2+1) + y/0.1 * (0.1+1) = 26.34
The initial values of X and Y are set as 4.085 and 0.17 respectively.
I need to find the values of X and Y which satisfy the equation and have the lowest common deviation from initially set values. In other words, sum of |4.085 - x| and |0.17 - y| is minimized.
With the Excel Solver Valueof function this is easy to find: we insert x and y as the variables to be changed so that the formula result reaches 26.
Here is my Python code (I am trying to use SymPy for this):
from sympy import symbols, Eq, solve

x, y = symbols('x y')
eqn = solve([Eq(x/0.2*(0.2+1) + y/0.1*(0.1+1), 26)], x, y)
print(eqn)
However, I am getting the strange result {x: 4.33333333333333 - 1.83333333333333*y}
Can anyone help me solve this equation?
The answer you are obtaining is not strange; it is just the answer to what you asked. You have one equation in two variables x and y, so the solution is in general not unique (sometimes infinite). Now, you can either add an extra condition (an inequality, for example) or change the numeric domain in which solutions are sought (as in Diophantine equations). You can do either of them in SymPy; in the following example I find the solution for x in the real domain, using solveset:
from sympy import symbols, Eq, solveset, Reals
x,y = symbols('x y')
eqn = solveset(Eq(1.2 * x / 0.2 + 1.1 * y / 0.1, 26), x, Reals)
print(eqn)
Output:
Intersection(FiniteSet(4.33333333333333 - 1.83333333333333*y), Reals)
As you can see the solution on x is a finite set, that is the intersection between a straight line on y and the Reals. Any particular solution can be found by direct evaluation of y.
This is equivalent to saying x = 4.33333333333333 - 1.83333333333333*y; if you evaluate this equation at the guess value y = 0.17, you obtain x = 4.0217 (close to your guess value x = 4.085).
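For example, a quick check of that evaluation (the numbers are copied from the output above):

from sympy import symbols

y = symbols('y')
x_expr = 4.33333333333333 - 1.83333333333333*y
print(x_expr.subs(y, 0.17))  # 4.02166666666667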
Edit:
After analyzing the new information added to your question, I think I have finally understood it: your problem is a constrained optimization. Now, I don't use Excel frequently, but my bet is that under the hood this optimization is carried out there using Lagrange multipliers. In your particular case, the target function represents the deviation of the solution (x, y) from the point (4.085, 0.17). For convenience, I have chosen this function to be the Euclidean distance between them (the absolute values you suggested can be problematic due to the discontinuity of their derivatives). The constraint function is simply the equation you provided. To solve this problem with SymPy, one could use something like this:
import sympy as sp
# Define symbols and functions
x, y, lamb = sp.symbols('x, y, lamb', real=True)
func = sp.sqrt((x - 4.085) ** 2 + (y - 0.17) ** 2) # Target function
const = 1.2 * x / 0.2 + 1.1 * y / 0.1 - 26 # Constraint function
# Define Lagrangian
lagrang = func - lamb * const
# Compute gradient of Lagrangian
grad_lagrang = [sp.diff(lagrang, var) for var in [x, y, lamb]]
# Solve the resulting system of equations
spoints = sp.solve(grad_lagrang, [x, y, lamb], dict=True)
# Print stationary points
print(spoints)
Output:
[{x: 4.07047770700637, lamb: -0.0798086884467563, y: 0.143375796178345}]
Since in our case only one stationary point was found, this is the optimal solution (although strictly speaking this is only a necessary condition). The value of the lamb multiplier can be ditched, so x, y = 4.070, 0.1434. Hope this helps.
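Continuing the snippet above, a quick sanity check is to substitute the stationary point back into the constraint, which should be approximately zero:

# Substitute the stationary point back into the constraint function
print(const.subs({x: 4.07047770700637, y: 0.143375796178345}))
# ~0 up to floating-point rounding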

How to solve Poisson 2D equation with sympy?

I have a Poisson-type equation in 2D space: d²f/dx² + d²f/dy² = f(x, y).
Here is my attempt to solve it:
import sympy as sp
x, y = sp.symbols('x, y')
f = sp.Function('f')
u = f(x, y)
eq = sp.Eq(u.diff(x, 2) + u.diff(y, 2), u)
print(sp.pdsolve(eq))
It gives an error:
psolve: Cannot solve -f(x, y) + Derivative(f(x, y), x, x) + Derivative(f(x, y), y, y)
Is it possible to use sympy for such equations? Please help me with an example if possible.
At the bottom of the PDE solver page you will find
Currently implemented solver methods
1st order linear homogeneous partial differential equations with constant coefficients.
1st order linear general partial differential equations with constant coefficients.
1st order linear partial differential equations with variable coefficients.
Nothing of second order. This is not surprising, because such PDEs do not admit explicit symbolic solutions, with a few (mostly uninteresting) exceptions. (If the equation is really Eq(u.diff(x, 2) + u.diff(y, 2), u) with a zero Neumann condition, then the solution is identically zero.) It's not only that SymPy does not know how to find a symbolic solution; there is no such solution to find.
I think you can use this idea, where nt is the number of iterations of your Poisson solver and b is the source term. dx and dy are the steps in x and y, and at the end we apply the boundary conditions.
for it in range(nt):
    pd = pDF.copy()
    pDF[1:-1, 1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
                        (pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
                        b[1:-1, 1:-1] * dx**2 * dy**2) /
                       (2 * (dx**2 + dy**2)))
    pDF[0, :] = 0
    pDF[nx-1, :] = 0
    pDF[:, 0] = 0
    pDF[:, ny-1] = 0
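The snippet assumes nt, pDF, b, dx, dy, nx and ny already exist; a minimal setup might look like this (the sizes and the source term below are made up for illustration):

import numpy as np

nx, ny = 50, 50                      # grid points in x and y
nt = 500                             # number of iterations
dx, dy = 2 / (nx - 1), 2 / (ny - 1)  # grid spacing on a [-1, 1] square
pDF = np.zeros((nx, ny))             # initial guess for the solution
b = np.zeros((nx, ny))               # source term: two point sources
b[nx // 4, ny // 4] = 100.0
b[3 * nx // 4, 3 * ny // 4] = -100.0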

Sympy fails to find Fourier Transform of Complex Hyperbolic Secant (Sech) Function

I'm trying to use Sympy to find the Fourier transform of a hyperbolic secant function ("Sech") with a complex argument.
import sympy as sy
C, t, T0, f, w = sy.symbols('C, T, T_0, f, omega', real=True)
Ut = sy.sech(t/T0) * sy.exp(-sy.sqrt(-1) * C / 2 * t * t / (T0 * T0))
Uf = sy.fourier_transform(Ut, t, f)
Unfortunately, Sympy seems to simply hang when I request this.
Is this a bug or is there a better way I could present the request to sympy?
Thanks
It's a bug, unfortunately. SymPy cannot handle the integral, and it gets stuck in the integration algorithm.
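If a symbolic result is not essential, one workaround is to evaluate the transform numerically for concrete parameter values, e.g. with mpmath. This sketch mimics SymPy's fourier_transform kernel exp(-2*pi*i*f*t); the parameter values are assumptions for illustration:

import mpmath as mp

# Illustrative parameter values (not from the question)
C_val, T0_val, f_val = 1.0, 1.0, 0.5

def integrand(t):
    # sech(t/T0) * exp(-i*C/2 * t^2/T0^2) * exp(-2*pi*i*f*t)
    return (mp.sech(t / T0_val)
            * mp.exp(-1j * C_val / 2 * t**2 / T0_val**2)
            * mp.exp(-2j * mp.pi * f_val * t))

print(mp.quad(integrand, [-mp.inf, mp.inf]))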

How to solve first-order linear differential equations analytically and numerically with sympy?

How can simple linear differential equations like this one be solved in sympy?
y' + p(t)y = q(t)
I'm looking to solve it in two ways: symbolically (analytically) if possible, if SymPy can derive the integrating factor, etc., and also numerically, so that the two can be compared. How can this be done in SymPy? Is sympy.mpmath.odefun the right place to look?
As for your problem, you can write your equation like:
y' + p(t)y - q(t) = 0
and then use dsolve().
import sympy
t = sympy.Symbol('t')
y = sympy.Function('y')(t)
p = sympy.Function('p')(t)
q = sympy.Function('q')(t)
y_ = sympy.Derivative(y, t)
# y' + p(t)y - q(t)
sol = sympy.dsolve(y_ + p*y - q, y)
print(sol)
Solution as function
(Note: This is a quick solution I came up with by reading the documentation. I am not experienced with SymPy; there might be much better ways to do the following.)
Suppose you want to solve y' = y.
from sympy import *
t = symbols('t')
y = Function('y')(t)
y_ = Derivative(y, t)
sol = dsolve(y_ - y, y)
We did the same as previously. Now, to extract the right-hand side of sol, we use .args[1]. Then we create a function f(t_) that substitutes the value t_ for t using subs():
def f(t_):
    return sol.args[1].subs([(t, t_)])
print(sol)
print(f(0))
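That covers the symbolic half; for the numerical half, here is a minimal sketch with SciPy's solve_ivp (mpmath.odefun would work similarly). The choices of p, q and the initial value are illustrative:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative coefficient and right-hand side; replace with your own
p = lambda t: 1.0
q = lambda t: np.sin(t)

# y' = q(t) - p(t)*y, with y(0) = 1
sol_num = solve_ivp(lambda t, y: q(t) - p(t) * y, (0, 5), [1.0],
                    dense_output=True)
print(sol_num.sol(5.0))  # numerical y(5), to compare with the dsolve result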
