I have a function that gets updated at each step -- I'm optimizing it with Newton's method and steepest descent.
I'm new to sympy, and hoping to get some quick clarification.
x1, x2 = sym.symbols('x1 x2')
y = 2*x1 + 3*x2**2 + sym.exp(2*x1**2 + x2**2)
gy1 = sym.diff(y, x1)                              # partial derivative w.r.t. x1
gy2 = sym.diff(y, x2)                              # partial derivative w.r.t. x2
grad1 = sym.lambdify([x1, x2], gy1)(x[0], x[1])    # gradient evaluated at the current point x
grad2 = sym.lambdify([x1, x2], gy2)(x[0], x[1])
d = np.array([-1*grad1, -1*grad2])                 # steepest-descent direction
l = sym.symbols('l')
theta = 2*(x[0]+l*d[0]) + 3*(x[1]+l*d[1])**2 + sym.exp(2*(x[0]+l*d[0])**2 + (x[1]+l*d[1])**2)
theta_p = sym.diff(theta, l)
My function y is updated as follows: f(x_n) --> f(x_n + lambda*d_n); call this theta(lambda).
I've implemented that update above (the 'theta' expression), and when printed to screen it gives a numpy array:
array([-63.1124487914452*l + 2 + exp([1991.5905962264*(0.0316894691665188 - l)**2])],
dtype=object)
That is the expression I need, but now I want to differentiate it with respect to l (my lambda), and sympy doesn't work on an array like this.
When I run
sym.diff(theta,l)
I get this output:
AttributeError: 'ImmutableDenseNDimArray' object has no attribute 'as_coeff_Mul'
Any ideas?
Try sym.diff(theta[0], l) (and likewise for any other elements if theta has more than one).
For some reason you end up with an ndarray containing objects that are sympy expressions. Print the type of each element to confirm.
Oh, there are nested ndarray expressions too. You need to review what you are passing into theta.
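A quick way to inspect what actually ended up inside theta (a small sketch reusing the theta array from the question):

import numpy as np

print(type(theta))            # numpy.ndarray with dtype=object
for el in np.ravel(theta):
    print(type(el), el)       # each element should be a sympy expression, not another ndarray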
You should end up with:
-63.1124487914452*l + 2 + exp(1991.5905962264*(0.0316894691665188 - l)**2)
instead of
array([-63.1124487914452*l + 2 + exp([1991.5905962264*(0.0316894691665188 - l)**2])],
dtype=object)
Replace x[0] and x[1] with symbols when constructing theta, then diff, lambdify, and only evaluate with x[0] and x[1] at the end.
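For example, a minimal sketch of that fully symbolic approach (the symbol names and the starting point x0 = [1, 1] are my own choices, not taken from the question):

import numpy as np
import sympy as sym

x1, x2, l = sym.symbols('x1 x2 l')
y = 2*x1 + 3*x2**2 + sym.exp(2*x1**2 + x2**2)

# symbolic gradient and steepest-descent direction
gy1, gy2 = sym.diff(y, x1), sym.diff(y, x2)
d1, d2 = -gy1, -gy2

# theta(l) = f(x + l*d), still fully symbolic; simultaneous=True keeps the two
# substitution rules from feeding into each other
theta = y.subs({x1: x1 + l*d1, x2: x2 + l*d2}, simultaneous=True)
theta_p = sym.diff(theta, l)

# only now plug in the numeric point and evaluate the line-search derivative
x0 = np.array([1.0, 1.0])
theta_p_num = sym.lambdify(l, theta_p.subs({x1: x0[0], x2: x0[1]}))
print(theta_p_num(0.01))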
If xx (your unspecified x) is a (2,1) array:
In [153]: xx = np.array([[1],[1]])
In [154]: grad1 = sym.lambdify([x1,x2],gy1)(xx[0],xx[1])
...: grad2 = sym.lambdify([x1,x2],gy2)(xx[0],xx[1])
...: d = np.array([-1*grad1,-1*grad2])
In [155]: d
Out[155]:
array([[-82.34214769],
[-46.17107385]])
In [156]: theta = 2*(xx[0]+l*d[0]) + 3*(xx[1]+l*d[1])**2 + sym.exp(2*(xx[0]+l*d[0])**2 + (xx[1]+l*d[1])
...: **2)
In [157]: theta
Out[157]:
array([-164.684295385501*l + 6395.30418038233*(0.0216585822397654 - l)**2 + 2 + exp([13560.4585733095*(0.0121444488396316 - l)**2 + 2131.76806012744*(0.0216585822397654 - l)**2])],
dtype=object)
If instead it is (2,) or a simple list:
In [158]: xx = np.array([1,1]) # [1,1]
...
In [160]: d
Out[160]: array([-82.34214769, -46.17107385])
and theta is then a simple sympy expression, not an object array containing an expression. Then theta_p evaluates fine.
We can evaluate gy1 at specific x1,x2 with evalf instead of lambdify:
In [174]: xsub = {x1:1, x2:1}
In [175]: d = [-1*gy1.evalf(subs=xsub), -1*gy2.evalf(subs=xsub)]
In [176]: d
Out[176]: [-82.3421476927507, -46.1710738463753]
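Building on that, a sketch of constructing theta as a plain sympy expression from those float direction values (the point [1, 1] is again just an example):

l = sym.symbols('l')
x0 = [1, 1]

# theta(l) = f(x0 + l*d) as a scalar sympy expression -- no numpy object arrays involved
theta = y.subs({x1: x0[0] + l*d[0], x2: x0[1] + l*d[1]})
theta_p = sym.diff(theta, l)            # differentiates without error now
theta_p_num = sym.lambdify(l, theta_p)
print(theta_p_num(0.01))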
I want to solve the ODE f''(x) + k*f(x) = 0,
which is a trivial ODE to solve (https://www.wolframalpha.com/input?i=f%60%60%28x%29+%2B+kf%28x%29%3D0)
my code is
from sympy import *

x, t, k, L, C1, C2 = symbols("x,t,k,L,C1,C2")
f = symbols('f', cls=Function)
g = symbols('g', cls=Function)
Fx = f(x).diff(x)
Fxx = f(x).diff(x, x)
Gtt = g(t).diff(t, t)
Gt = g(t).diff(t)
BC1 = 0
BC2 = L
Eq1 = Eq(Fxx + k*f(x), 0)   # the ODE f''(x) + k*f(x) = 0
Eq1_k_positive = dsolve(Eq1.subs(k, -k))
display(Eq1_k_positive)
I'm not really sure why I don't get the solution I expect. And no, it's not the same when I use BCs: with boundary conditions that should give a result, I just get 0, since I never get the sin/cos form. Any tips on what's not correct?
This is your differential equation:
In [18]: k, x = symbols('k, x')
In [19]: f = Function('f')
In [20]: eq = Eq(f(x).diff(x, 2) + k*f(x), 0)
In [21]: eq
Out[21]: Eq(k*f(x) + Derivative(f(x), (x, 2)), 0)
This is the solution returned by SymPy:
In [22]: dsolve(eq)
Out[22]: Eq(f(x), C1*exp(-x*sqrt(-k)) + C2*exp(x*sqrt(-k)))
That solution is correct for any nonzero complex number k.
There can be many equivalent forms to represent the general solution of an ODE. SymPy will choose a different form here if you specify something about the symbol k such as that it is positive:
In [24]: k = symbols('k', positive=True)
In [25]: eq = Eq(f(x).diff(x, 2) + k*f(x), 0)
In [26]: eq
Out[26]: Eq(k*f(x) + Derivative(f(x), (x, 2)), 0)
In [27]: dsolve(eq)
Out[27]: f(x) = C₁⋅sin(√k⋅x) + C₂⋅cos(√k⋅x)
This solution is also correct for any nonzero complex number k but will only be returned if k is declared positive because it is only for positive k that there is any reason to prefer the sin/cos form to the exp form.
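If you want to convince yourself that both forms really satisfy the ODE, checkodesol can verify a solution against the equation; a small sketch reusing eq from above:

from sympy import checkodesol

sol = dsolve(eq)
print(checkodesol(eq, sol))   # should give (True, 0), i.e. the solution satisfies the ODE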
I want to solve a system of equations symbolically, such as A = a*x + b*y and B = c*x + d*y, for x and y explicitly in sympy.
I tried sympy's solve function as
solve([A, B], [x, y]), but it isn't working. It's returning an empty list, [].
How can I solve it using sympy?
These are the actual equations I'm trying to solve:
from sympy import*
i,j,phi, p, e_phi, e_rho = symbols(r'\hat{i} \hat{j} \phi \rho e_\phi e_\rho')
e_rho = cos(phi)*i + sin(phi)*j
e_phi = -p*sin(phi)*i + p*cos(phi)*j
solve([e_rho,e_phi], [i,j])
I don't know what version of SymPy you're using but I just tried with the latest version and I get an answer:
In [4]: from sympy import*
...: i,j,phi, p, e_phi, e_rho = symbols(r'i j phi rho e_phi e_rho')
...: e_rho = cos(phi)*i + sin(phi)*j
...: e_phi = -p*sin(phi)*i + p*cos(phi)*j
...: solve([e_rho,e_phi], [i,j])
Out[4]: {i: 0, j: 0}
That's the correct answer to your equations (provided rho is nonzero):
In [5]: e_rho
Out[5]: i⋅cos(φ) + j⋅sin(φ)
In [6]: e_phi
Out[6]: -i⋅ρ⋅sin(φ) + j⋅ρ⋅cos(φ)
If you meant to solve for e_rho and e_phi to be equal to something other than zero then you should include a right hand side either by subtracting it from the expressions or by using Eq:
In [2]: A, B = symbols('A, B')
In [3]: solve([Eq(e_rho, A), Eq(e_phi, B)], [i, j])
Out[3]:
{i: A*rho*cos(phi)/(rho*sin(phi)**2 + rho*cos(phi)**2) - B*sin(phi)/(rho*sin(phi)**2 + rho*cos(phi)**2),
 j: A*rho*sin(phi)/(rho*sin(phi)**2 + rho*cos(phi)**2) + B*cos(phi)/(rho*sin(phi)**2 + rho*cos(phi)**2)}
In [4]: solve([Eq(e_rho, A), Eq(e_phi, B)], [i, j], simplify=True)
Out[4]: {i: A*cos(phi) - B*sin(phi)/rho, j: A*sin(phi) + B*cos(phi)/rho}
Again that's the correct answer (assuming rho != 0).
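Since the system is linear in i and j, linsolve is another option; a sketch under the same definitions, with a simplification step only so the result can be compared with the solve output above:

from sympy import linsolve, simplify

(sol,) = linsolve([Eq(e_rho, A), Eq(e_phi, B)], [i, j])
print([simplify(s) for s in sol])   # should reduce to A*cos(phi) - B*sin(phi)/rho and A*sin(phi) + B*cos(phi)/rho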
I have the following equation:
y = 3x^2 + x
Then I want to differentiate both sides w.r.t. the variable t with sympy. I tried to implement it with the following code in a Jupyter notebook:
>>> import sympy as sp
>>> x, y, t = sp.symbols('x y t', real=True)
>>> eq = sp.Eq(y, 3 * x **2 + x)
>>>
>>> expr1 = eq.lhs
>>> expr1
y
>>> expr1.diff(t)
0
>>>
>>> expr2 = eq.rhs
>>> expr2
3*x**2 + x
>>> expr2.diff(t)
0
As a result, sympy treats the symbols x and y as constants. However, the result I want should match what I derive manually:
y = 3x^2 + x
d/dt (y) = d/dt (3x^2 + x)
dy/dt = 6x * dx/dt + 1 * dx/dt
dy/dt = (6x + 1) * dx/dt
How can I take the derivative of an expression with respect to a specific symbol that is not a free symbol of the expression?
You should declare x and y as functions rather than symbols e.g.:
In [8]: x, y = symbols('x, y', cls=Function)
In [9]: t = symbols('t')
In [10]: eq = Eq(y(t), 3*x(t)**2 + x(t))
In [11]: eq
Out[11]: Eq(y(t), 3*x(t)**2 + x(t))
In [12]: Eq(eq.lhs.diff(t), eq.rhs.diff(t))
Out[12]: Eq(Derivative(y(t), t), 6*x(t)*Derivative(x(t), t) + Derivative(x(t), t))
https://docs.sympy.org/latest/modules/core.html#sympy.core.function.Function
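If you want dy/dt isolated rather than left implicit, you can solve the differentiated equation for it; a sketch continuing from eq above (dydt is just a local name):

dydt = y(t).diff(t)
deriv_eq = Eq(eq.lhs.diff(t), eq.rhs.diff(t))
print(solve(deriv_eq, dydt))   # something equivalent to [(6*x(t) + 1)*Derivative(x(t), t)]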
Alternatively, the idiff function was made for this purpose, but it works with expressions like f(x, y) and can return the value of dy/dx. So first turn your Eq into an expression, then calculate the desired derivative:
>>> from sympy import idiff
>>> e = eq.rewrite(Add)
>>> dydx = idiff(e, y, x); dydx
6*x + 1
Note, too, that you do not need to isolate y(t) when the equation is written explicitly in terms of functions of t -- you can differentiate and then solve for the derivative:
>>> from sympy.abc import t
>>> x,y=map(Function,'xy')
>>> eq = x(t)*(y(t)**2 - y(t) + 1)
>>> yp=y(t).diff(t); Eq(yp, solve(eq.diff(t),yp)[0])
Eq(Derivative(y(t), t), (-y(t)**2 + y(t) - 1)*Derivative(x(t), t)/((2*y(t) - 1)*x(t)))
I'd like to take the inverse of the matrix A.
import sympy as sp
import numpy as np
b = np.array([[0.1], [0.1], [-0.1]])
x, y, z = sp.symbols("x y z")
eq1 = 3*x - sp.cos(y*z) - 1/2
eq2 = x**2 -81*(y+0.1)**2 + sp.sin(z) + 1.06
eq3 = sp.exp(-x*y) + 20*z + (10*np.pi - 3)/3
A = np.array([[0,0,x],[0,0,0],[0,0,0]])
f = np.array([[eq1],[eq2],[eq3]])
A[0,0] = sp.diff(eq1,x)
A[1,0] = sp.diff(eq1,y)
A[2,0] = sp.diff(eq1,z)
A[0,1] = sp.diff(eq2,x)
A[1,1] = sp.diff(eq2,y)
A[2,1] = sp.diff(eq2,z)
A[0,2] = sp.diff(eq3,x)
A[1,2] = sp.diff(eq3,y)
A[2,2] = sp.diff(eq3,z)
print(A)
J = np.linalg.inv(A)
print(J)
However, the built-in function doesn't work. So how can I take the inverse of it?
You should use a SymPy Matrix instead:
J = sp.Matrix(A).inv()
If you initialize the matrix A using sympy rather than a numpy array, it will work.
import sympy as sp
import numpy as np
from sympy import Matrix
b = np.array([[0.1], [0.1], [-0.1]])
x, y, z = sp.symbols("x y z")
eq1 = 3*x - sp.cos(y*z) - 1/2
eq2 = x**2 -81*(y+0.1)**2 + sp.sin(z) + 1.06
eq3 = sp.exp(-x*y) + 20*z + (10*np.pi - 3)/3
f = np.array([[eq1],[eq2],[eq3]])
a1 = sp.diff(eq1,x)
b1 = sp.diff(eq1,y)
c1 = sp.diff(eq1,z)
a2 = sp.diff(eq2,x)
b2 = sp.diff(eq2,y)
c2 = sp.diff(eq2,z)
a3 = sp.diff(eq3,x)
b3 = sp.diff(eq3,y)
c3 = sp.diff(eq3,z)
A = Matrix([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]])  # initialize using sympy Matrix
print(A)
A_inverse = A.inv()
print(A_inverse)
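As a side note, SymPy can build this matrix of partial derivatives in one step with Matrix.jacobian; a sketch assuming the eq1, eq2, eq3 and symbols defined above (jacobian puts the gradient of eq1 in the first row, i.e. the transpose of the layout filled in by hand above):

F = sp.Matrix([eq1, eq2, eq3])    # the system as a column vector
J = F.jacobian([x, y, z])         # row i = gradient of eq_i
J_inverse = J.inv()               # symbolic inverse (large expressions)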
I don’t think you can do numpy operations like inv on SymPy expressions. However, I don’t use SymPy.
Also, not all matrices are invertible.
A is a numpy array, object dtype, containing sympy objects:
In [5]: A
Out[5]:
array([[3, 2*x, -y*exp(-x*y)],
[z*sin(y*z), -162*y - 16.2, -x*exp(-x*y)],
[y*sin(y*z), cos(z), 20]], dtype=object)
In [6]: np.linalg.inv(A)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-ae645f97e1f8> in <module>
----> 1 np.linalg.inv(A)
<__array_function__ internals> in inv(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/numpy/linalg/linalg.py in inv(a)
545 signature = 'D->D' if isComplexType(t) else 'd->d'
546 extobj = get_linalg_error_extobj(_raise_linalgerror_singular)
--> 547 ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
548 return wrap(ainv.astype(result_t, copy=False))
549
TypeError: No loop matching the specified signature and casting was found for ufunc inv
np.linalg works for numeric arrays, not general Python objects.
Math on object dtype arrays is hit-or-miss. Some things work, but many don't. Mixing sympy and numpy is generally not recommended.
A row sum does work - because the sympy objects can 'add' themselves:
In [7]: A.sum(axis=1)
Out[7]:
array([2*x - y*exp(-x*y) + 3,
-x*exp(-x*y) - 162*y + z*sin(y*z) - 16.2,
y*sin(y*z) + cos(z) + 20
], dtype=object)
That numpy array can be made into a sympy Matrix, as others show:
In [10]: As = Matrix(A.tolist() )
In [11]: As
Out[11]: Matrix([
[3,          2*x,           -y*exp(-x*y)],
[z*sin(y*z), -162*y - 16.2, -x*exp(-x*y)],
[y*sin(y*z), cos(z),        20]])
and the inverse exists. It is still (3,3), but the elements are large expressions:
In [12]: As.inv()
...
In [14]: _12.shape
Out[14]: (3, 3)
In [15]: As.det()
Out[15]: -2*x**2*y*exp(-x*y)*sin(y*z) - 40*x*z*sin(y*z) + 3*x*exp(-x*y)*cos(z) + y**2*(-162*y - 16.2)*exp(-x*y)*sin(y*z) - y*z*exp(-x*y)*sin(y*z)*cos(z) - 9720*y - 972.0
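For an actual Newton step you usually don't need that symbolic inverse at all: substituting numbers first and inverting a purely numeric matrix is much cheaper. A sketch, with an arbitrarily chosen evaluation point:

import numpy as np

point = {x: 0.1, y: 0.1, z: -0.1}                              # arbitrary evaluation point
J_num = np.array(As.subs(point).evalf().tolist(), dtype=float)
print(np.linalg.inv(J_num))                                    # np.linalg.inv is happy with a float array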
I want to calculate the gradient of the following function: h(x) = 0.5 * x.T * A * x + b.T * x.
For now I set A to be just a (2,2) Matrix.
def function(x):
    return 0.5 * np.dot(np.dot(np.transpose(x), A), x) + np.dot(np.transpose(b), x)
where
A = np.zeros((2, 2))
n = A.shape[0]
A[range(n), range(n)] = 1
a (2,2) matrix with ones on the main diagonal, and
b = np.ones(2)
For a given point x = (1,1), numpy.gradient returns an empty list.
x = np.ones(2)
result = np.gradient(function(x))
However, shouldn't I get something like this: grad(f)((1,1)) = (x1 + 1, x2 + 1) = (2, 2)?
Appreciate any help.
It seems like you want to perform symbolic or automatic differentiation, which np.gradient does not do; np.gradient computes a numerical (finite-difference) gradient of an array of values. sympy is a package for symbolic math and autograd is a package for automatic differentiation of numpy code. For example, to do this with autograd:
import autograd.numpy as np
from autograd import grad
def function(x):
    return 0.5 * np.dot(np.dot(np.transpose(x), A), x) + np.dot(np.transpose(b), x)
A = np.zeros((2, 2))
n = A.shape[0]
A[range(n), range(n)] = 1
b = np.ones(2)
x = np.ones(2)
grad(function)(x)
Outputs:
array([2., 2.])
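Since the rest of this thread is about SymPy, the same gradient can also be obtained symbolically; a sketch with the same A and b (the symbol names x1, x2 are my own):

import numpy as np
import sympy as sym

A = np.eye(2)
b = np.ones(2)

x1, x2 = sym.symbols('x1 x2')
xvec = sym.Matrix([x1, x2])
h = (sym.Rational(1, 2) * xvec.T * sym.Matrix(A) * xvec + sym.Matrix(b).T * xvec)[0]

grad = [sym.diff(h, v) for v in (x1, x2)]   # symbolic gradient: [1.0*x1 + 1.0, 1.0*x2 + 1.0]
grad_num = sym.lambdify((x1, x2), grad)
print(grad_num(1, 1))                       # [2.0, 2.0]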