I have a Poisson equation in 2D space like this (reconstructed from the code below):

u_xx + u_yy = u
Here is my attempt to solve it:
import sympy as sp
x, y = sp.symbols('x, y')
f = sp.Function('f')
u = f(x, y)
eq = sp.Eq(u.diff(x, 2) + u.diff(y, 2), u)
print(sp.pdsolve(eq))
It gives an error:
psolve: Cannot solve -f(x, y) + Derivative(f(x, y), x, x) + Derivative(f(x, y), y, y)
Is it possible to use sympy for such equations? Please help me with an example if possible.
At the bottom of the PDE solver documentation page you will find:
Currently implemented solver methods
1st order linear homogeneous partial differential equations with constant coefficients.
1st order linear general partial differential equations with constant coefficients.
1st order linear partial differential equations with variable coefficients.
Nothing of second order, which is not surprising, because such PDEs do not admit explicit symbolic solutions, with a few (mostly uninteresting) exceptions. (If the equation is really Eq(u.diff(x, 2) + u.diff(y, 2), u) with a zero Neumann condition, then the solution is identically zero.) It's not only that SymPy does not know how to find a symbolic solution; there is no such solution to find.
I think you can use this idea (a Jacobi iteration), where nt is the number of iterations of your Poisson solver and b is the source term. dx and dy are the grid steps in x and y, and at the end of each iteration we re-apply the boundary conditions.
import numpy as np

# Example grid, source term and iteration count (adjust to your problem)
nx, ny = 50, 50
dx = dy = 2.0 / (nx - 1)
b = np.zeros((nx, ny))     # source term
pDF = np.zeros((nx, ny))   # solution array
nt = 500

for it in range(nt):
    pd = pDF.copy()
    # Jacobi update of the interior points from the previous iterate
    pDF[1:-1, 1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
                        (pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
                        b[1:-1, 1:-1] * dx**2 * dy**2) /
                       (2 * (dx**2 + dy**2)))
    # Dirichlet boundary conditions: p = 0 on all four edges
    pDF[0, :] = 0
    pDF[nx - 1, :] = 0
    pDF[:, 0] = 0
    pDF[:, ny - 1] = 0
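To see whether nt iterations were enough, you can monitor the residual of the 5-point discretization against the source term. A small check, continuing with the arrays defined above:

# Residual of the discrete Laplacian; should approach zero as the
# iteration converges
lap = ((pDF[1:-1, 2:] - 2 * pDF[1:-1, 1:-1] + pDF[1:-1, :-2]) / dx**2 +
       (pDF[2:, 1:-1] - 2 * pDF[1:-1, 1:-1] + pDF[:-2, 1:-1]) / dy**2)
print(np.abs(lap - b[1:-1, 1:-1]).max())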
Dear StackOverflow community,
Although the Python package SymPy provides functionality for various QM operations (e.g. wave functions and operators) and QFT operations (e.g. gamma matrices), it has no support for functional derivatives.
I would like to implement the following analytically known functional derivative
D f(t) / D f(t') = delta(t - t') (delta distribution)
to compute more interesting results, e.g.,
D F[f] / D f(t)
where the ordinary differentiation rules apply until D f(t) / D f(t') has to be computed. I have included an example below. I am aware that SymPy already supports taking derivatives with respect to functions, but that is not a functional derivative.
With best regards, XeLasar
Example:
F[f] := exp(integral integral f(x) G(x, y) f(y) dx dy)
D F[f] / D f(t) = (integral f(x) * G(x, y) * delta(t - y) dx dy
                   + integral delta(t - x) * G(x, y) * f(y) dx dy) * F[f]
                = (integral f(x) * G(x, t) dx
                   + integral G(t, y) * f(y) dy) * F[f]
                = 2 * (integral G(t, t') * f(t') dt') * F[f]
Note: in each term one integral collapses due to the delta that arises from D f(t) / D f(t'). G(t, t') is an arbitrary symmetric function, and since integration variables can be renamed, the two remaining terms combine into one.
I found a solution to the problem and I want to share it so others don't have to waste as much time as I did.
Rewriting the Derivative() function in SymPy is unfortunately not an option, as the differentiation rules are defined within the expression classes. Although the SymPy source feels like spaghetti, it is well tested and should not be touched.
It is best to explain what I came up with using one or two examples, but roughly speaking: I found a convenient substitution that performs the functional derivative using the ordinary Derivative() function of SymPy.
Example:
Z[f] = Integral(f(x) G(x,y) f(y), x, y) , let f(x) -> A(z), f(y) -> B(z)
     = Integral(A(z) G(x,y) B(z), x, y)

dZ/dz = Integral(dA/dz G(x,y) B(z) + A(z) G(x,y) dB/dz, x, y)

Re-substitute: A(z) -> f(x), B(z) -> f(y)
Insert: dA/dz -> delta(t-x), dB/dz -> delta(t-y)

=> DZ[f]/Df(t) = Integral(delta(t-x) G(x,y) f(y) + f(x) G(x,y) delta(t-y), x, y)
              = Integral(G(t,y) f(y), y) + Integral(f(x) G(x,t), x)
Although this is not a general proof, this substitution works in my use cases. Since I am using the normal SymPy Derivative() routine, everything works flawlessly.
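For concreteness, here is a minimal sketch of the trick in SymPy (the symbol and function names are my own choices; the deltas must be substituted before the functions, or the Derivative terms would be clobbered):

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
f, G = sp.Function('f'), sp.Function('G')
A, B = sp.Function('A'), sp.Function('B')

# Z[f] = Integral(f(x) G(x, y) f(y), x, y), with f(x) -> A(z), f(y) -> B(z)
Z = sp.Integral(A(z) * G(x, y) * B(z), x, y)

# The ordinary derivative w.r.t. z produces one term per occurrence of f
dZ = Z.diff(z)

# Insert the deltas first, then re-substitute the original functions
dZ = dZ.subs(sp.Derivative(A(z), z), sp.DiracDelta(t - x))
dZ = dZ.subs(sp.Derivative(B(z), z), sp.DiracDelta(t - y))
dZ = dZ.subs(A(z), f(x)).subs(B(z), f(y))

print(dZ)
# -> the integral of DiracDelta(t - x)*G(x, y)*f(y) + f(x)*G(x, y)*DiracDelta(t - y)
#    over x and y, as in the worked example above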
With best regards, XeLasar
I have the following system of non-linear equations whose roots I want to find:
x - exp(a x + b y) = 0
y - exp(c x + d y) = 0
The code I'm using to find its roots is:
import numpy as np
from scipy.optimize import fsolve

def equations(x, kernel):
    return np.array([
        x[0] - np.exp(kernel[0] * x[0] + kernel[2] * x[1]),
        x[1] - np.exp(kernel[1] * x[1] + kernel[3] * x[0]),
    ])

kernels = np.array([kernel0, kernel1, kernel2, kernel3])  # model coefficients
x_init = np.array([x_init0, x_init1])
x_sol = fsolve(equations, x_init, args=(kernels,))
From the equations I know that in some situations this system has two roots: (x_sol1, y_sol1) and (x_sol2, y_sol2).
Is there a clean way to pass multiple initial guesses to this fsolve function to get both roots for each variable? (instead of using a for loop)
I know how to do it for a system of one equation only but I couldn't use that method for this case.
You can reduce the system to a single univariate equation by eliminating y. Taking the logarithm of the first equation gives

y = (ln(x) - a x) / b

so that the second equation becomes

(ln(x) - a x) / b - exp(c x + d (ln(x) - a x) / b) = 0
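To pick up both roots, you can scan this univariate function for sign changes and bracket each root. A minimal sketch; the coefficient values a, b, c, d and the scan range are made up for illustration, and how many brackets appear depends on them:

import numpy as np
from scipy.optimize import brentq

a, b, c, d = -1.0, -0.5, -0.5, -1.0   # assumed example coefficients

def g(x):
    y = (np.log(x) - a * x) / b       # y eliminated via the first equation
    return y - np.exp(c * x + d * y)  # residual of the second equation

xs = np.linspace(0.01, 5.0, 500)
vals = g(xs)
roots = []
for x0, x1, v0, v1 in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
    if v0 * v1 < 0:                   # sign change brackets a root
        x_r = brentq(g, x0, x1)
        roots.append((x_r, (np.log(x_r) - a * x_r) / b))
print(roots)

Each bracketed sign change yields one (x, y) root, so when the system has two solutions both are recovered in a single pass.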
I have the following equation: x/0.2 * (0.2+1) + y/0.1 * (0.1+1) = 26.34
The initial values of x and y are set to 4.085 and 0.17 respectively.
I need to find the values of x and y which satisfy the equation and have the lowest combined deviation from the initially set values; in other words, the sum of |4.085 - x| and |0.17 - y| is minimized.
With the Excel Solver "Value Of" option this is easy to find: we set x and y as the variables to be changed so that the formula result reaches 26.
Here is my Python code (I am trying to use sympy for this):
from sympy import symbols, Eq, solve

x, y = symbols('x y')
eqn = solve([Eq(x/0.2*(0.2+1) + y/0.1*(0.1+1), 26)], x, y)
print(eqn)
However, I am getting a strange result: {x: 4.33333333333333 - 1.83333333333333*y}
Can anyone help me solve this equation?
The answer you are obtaining is not strange; it is just the answer to what you asked. You have one equation in two variables x and y, so the solution is in general not unique (often there are infinitely many). You can either add an extra condition (an inequality, for example) or change the numeric domain in which solutions are sought (as in Diophantine equations). You can do either in SymPy. In the following example I find the solution for x in the real domain, using solveset:
from sympy import symbols, Eq, solveset, Reals
x,y = symbols('x y')
eqn = solveset(Eq(1.2 * x / 0.2 + 1.1 * y / 0.1, 26), x, Reals)
print(eqn)
Output:
Intersection(FiniteSet(4.33333333333333 - 1.83333333333333*y), Reals)
As you can see, the solution for x is a finite set: the intersection between a straight line in y and the Reals. Any particular solution can be found by direct evaluation at a value of y.
This is equivalent to saying x = 4.33333333333333 - 1.83333333333333*y; if you evaluate this at the guess value y = 0.17, you obtain x = 4.0216 (close to your guess value x = 4.085).
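For instance, a quick check of that evaluation (the expression below is just re-entered from the solveset output above):

from sympy import symbols

y = symbols('y')
x_expr = 4.33333333333333 - 1.83333333333333 * y   # from the solveset output
print(x_expr.subs(y, 0.17))                        # approximately 4.02167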
Edit:
After analyzing the new information added to your question, I think I have finally understood it: your problem is a constrained optimization. Now, I don't use Excel frequently, but my bet is that under the hood this optimization is carried out using Lagrange multipliers. In your particular case, the target function represents the deviation of the solution (x, y) from the point (4.085, 0.17). For convenience, I have chosen this function to be the Euclidean distance between them (the absolute values you suggested can be problematic due to the discontinuity of their derivatives). The constraint function is simply the equation you provided. To solve this problem with SymPy, one could use something like this:
import sympy as sp
# Define symbols and functions
x, y, lamb = sp.symbols('x, y, lamb', real=True)
func = sp.sqrt((x - 4.085) ** 2 + (y - 0.17) ** 2) # Target function
const = 1.2 * x / 0.2 + 1.1 * y / 0.1 - 26 # Constraint function
# Define Lagrangian
lagrang = func - lamb * const
# Compute gradient of Lagrangian
grad_lagrang = [sp.diff(lagrang, var) for var in [x, y, lamb]]
# Solve the resulting system of equations
spoints = sp.solve(grad_lagrang, [x, y, lamb], dict=True)
# Print stationary points
print(spoints)
Output:
[{x: 4.07047770700637, lamb: -0.0798086884467563, y: 0.143375796178345}]
Since in our case only one stationary point was found, this is the optimal solution (although stationarity is only a necessary condition). The value of the lamb multiplier can be discarded, so x, y = 4.070, 0.1434. (As a quick sanity check, 6*4.07048 + 11*0.14338 is approximately 26, so the constraint is satisfied.) Hope this helps.
Definition of the problem
I am trying to calculate the points of intersection of geometrical objects, such as two planes and a sphere, in python.
Let's consider for example these three objects (two planes and the unit sphere):

x = 0
z = 0
x**2 + y**2 + z**2 = 1

This system gives two solutions: (x, y, z) = (0, 1, 0) and (0, -1, 0).
I would like to know if there is a Python library that can help develop a solver to calculate these intersections. I am looking for something that works like Wolfram Alpha, where we can input three equations and it returns all the possible solutions; for simplicity, assume there is a finite number of solutions.
What I tried
I tried with SymPy, but it returns []:
from sympy.solvers import solve
from sympy import Symbol
x = Symbol('x')
y = Symbol('y')
z = Symbol('z')
solve(z, x, x**2 + y**2 + z**2 -1)
I then tried with scipy:
import numpy as np
from scipy.optimize import fsolve

def f(x):
    y = np.zeros(3)
    y[2] = x[2]
    y[0] = x[0]
    y[1] = x[0] ** 2 + x[1] ** 2 + x[2] ** 2 - 1
    return y

x0 = np.array([10, 10, 10])
solution = fsolve(f, x0)
print(solution[0], solution[1], solution[2])
but it only returns one of the two solutions:
6.79746218330325e-28 1.0000000000000002 -2.3528179942097343e-35
I also tried with gekko, and still it only returns one possible solution (which depends on the initial guess):
from gekko import GEKKO
m = GEKKO()
x = m.Var(value = 1)
y = m.Var(value = 1)
z = m.Var(value = 1)
m.Equation(x == 0)
m.Equation(z == 0)
m.Equation(x**2 + y**2+z**2 ==1)
m.solve()
fsolve from scipy, like all other functions I personally know of that accept an arbitrary input function, returns a single solution.
One workaround, if you have an idea where the other solution is, would be to make a second call to fsolve with an x0 value that is closer to that second solution (see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html); a small sketch of this follows below.
If you instead know the range in which you want to find solutions, the easiest way is to evaluate the function on an array over that range and check where the values change sign (this would be doing it from scratch).
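For example, a minimal sketch of the multiple-starting-point workaround on the system above (the two guesses are mine, chosen on either side of the sphere):

import numpy as np
from scipy.optimize import fsolve

def f(x):
    # x = 0, z = 0, unit sphere
    return [x[0], x[2], x[0]**2 + x[1]**2 + x[2]**2 - 1]

roots = []
for x0 in ([0.5, 0.8, 0.1], [0.5, -0.8, 0.1]):   # one guess per expected root
    sol = fsolve(f, np.array(x0))
    if not any(np.allclose(sol, r) for r in roots):
        roots.append(sol)
print(roots)   # should recover both (0, 1, 0) and (0, -1, 0)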
I found the solution with sympy. Apparently it is one of the only (if not the only) libraries that allows finding analytical solutions, and it returns more than just one solution. Also, we don't need to pass initial guesses. In my question, there was an error in the example I posted with sympy. This is how I solved the system:
import sympy as sp

x = sp.Symbol('x')
y = sp.Symbol('y')
z = sp.Symbol('z')
sp.solve([z, x, (x**2 + y**2 + z**2) - 1], x, y, z)
Result: [(0, -1, 0), (0, 1, 0)]
How can simple linear differential equations like this one be solved in sympy?
y' + p(t)y = q(t)
I'm looking to solve it in two ways: symbolically (analytically) if possible, if sympy can derive the integrating factor, etc., and also numerically, so that the two can be compared. How can this be done in sympy? Is sympy.mpmath.odefun the right place to look?
The SymPy documentation for dsolve contains some examples.
As for your problem, you can write your equation like:
y' + p(t)y - q(t) = 0
and then use dsolve().
import sympy
t = sympy.Symbol('t')
y = sympy.Function('y')(t)
p = sympy.Function('p')(t)
q = sympy.Function('q')(t)
y_ = sympy.Derivative(y, t)
# y' + p(t)y - q(t)
sol = sympy.dsolve(y_ + p*y - q, y)
print(sol)
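For the numerical side (so the two can be compared), here is a minimal sketch using scipy's solve_ivp rather than sympy.mpmath.odefun; the concrete choices p(t) = 1, q(t) = t and the initial condition y(0) = 1 are mine, for illustration:

import numpy as np
import sympy
from scipy.integrate import solve_ivp

t = sympy.Symbol('t')
y = sympy.Function('y')

# Symbolic solution of y' + y = t with y(0) = 1
sol = sympy.dsolve(y(t).diff(t) + y(t) - t, y(t), ics={y(0): 1})
y_exact = sympy.lambdify(t, sol.rhs, 'numpy')

# Numerical solution of the same initial value problem
num = solve_ivp(lambda t_, y_: t_ - y_, (0, 5), [1.0], dense_output=True)

ts = np.linspace(0, 5, 11)
print(np.max(np.abs(y_exact(ts) - num.sol(ts)[0])))  # should be small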
Solution as a function
(Note: this is a quick solution I came up with by reading the documentation. I am not experienced with sympy. There might be much better ways to do the following.)
Suppose you want to solve y' = y.
from sympy import *
t = symbols('t')
y = Function('y')(t)
y_ = Derivative(y, t)
sol = dsolve(y_ - y, y)
We did the same as previously. Now, to use the right-hand side of sol, we access it with .args[1]. Then we create a function f(t_) that substitutes the value t_ for t using subs():
def f(t_):
    return sol.args[1].subs([(t, t_)])

print(sol)
print(f(0))