Good afternoon,
I'm posting here because I noticed something unusual in the results of dsolve() in SymPy.
from sympy import *
from sympy.abc import x, y
import sympy as s
import numpy as np
n = symbols('n', complex=True)
s.init_printing()
f = Function('x')
eq = Derivative(f(x), x, x) + n**2*f(x)
a = dsolve(eq, f(x))
eq2 = Derivative(f(x), x, x) + 2**2*f(x)
a2 = dsolve(eq2, f(x))
display(a.subs(n, 2) == a2)
The result is False.
Looking only at the result of 'a', you can already see that the solution takes a different form when the symbolic variable 'n' is used.
Could anyone tell me whether I'm doing this the right way?
The solution sets are equivalent:
In [2]: a
Out[2]:
x(x) = C₁⋅ℯ^(-ⅈ⋅n⋅x) + C₂⋅ℯ^(ⅈ⋅n⋅x)
In [3]: a2
Out[3]: x(x) = C₁⋅sin(2⋅x) + C₂⋅cos(2⋅x)
These are just different ways of writing the general solution. If you had declared n to be real then the sin/cos form would be used.
The two forms are related by Euler's formula:
https://en.wikipedia.org/wiki/Linear_differential_equation#Second-order_case
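To illustrate the point about assumptions, here is a minimal sketch (the exact output form can vary between SymPy versions); declaring n as positive, which in particular makes it real and nonzero, should make dsolve return the sin/cos form directly:
from sympy import symbols, Function, Derivative, dsolve
x = symbols('x')
n = symbols('n', positive=True)  # positive implies real and nonzero
f = Function('x')
a = dsolve(Derivative(f(x), x, x) + n**2*f(x), f(x))
print(a)  # expected: Eq(x(x), C1*sin(n*x) + C2*cos(n*x))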
import sympy as sp
import matplotlib.pyplot as plt
# constants
Cd = 0.25
g = 9.81
pf = 10**(-6)  # perturbation fraction
t = 4
v = 36
xr = [int(input('initial guess : '))]
i = 0
Ea = 1
Es = 0.01
# define the function
def f(m):
    return sp.sqrt(g * m / Cd) * sp.tanh(sp.sqrt(g * Cd / m) * t) - v
# real root
x = sp.Symbol('x')
ans = sp.solve(f(x))  # find the roots with sp.solve()
print(ans)
I want to get the real root of f(x), but this code has a problem at the line marked # real root and I can't figure out what's wrong.
You should not expect SymPy to do miracles. Beyond relatively simple symbolic manipulations, SymPy just gets stuck, sometimes even returning wrong answers. You have to turn to commercial tools such as Maple or Mathematica to crack tough nuts.
Your alternative in most practical cases is to use scipy and get a good numeric solution, which is what you want most of the time rather than a closed form solution.
Sympy is a symbolic math library, trying to find exact symbolic solutions. As such, it doesn't work well with floats, as they are necessarily imprecise.
If your equations are fully numeric, it is usually recommended to employ numeric libraries such as numpy and scipy. If you're already doing symbolic manipulations (e.g. calculating derivatives), sympy provides nsolve, which calls a numeric solver. Like any numeric solver, it needs a seed to start its search. In your case it would look like:
# ....
xr = 1
ans = sp.nsolve(f(x), xr)
Result: 142.737633108449
Sympy also has a way to convert a sympy function to numpy format (in numpy things work much faster, but there are no symbolic expressions). sp.lambdify(x, f(x)) creates such a numpy function. Here is how it would look with your example:
import matplotlib.pyplot as plt
import numpy as np
f_np = sp.lambdify(x, f(x))      # numpy version of the sympy expression
xi = np.linspace(1, 1000, 2000)  # sample points for the plot
plt.plot(xi, f_np(xi))
plt.show()
In an interactive environment, you can add a question mark to display the numpy source of the function:
>>> f_np?
Signature: f_np(x)
Docstring:
Created with lambdify. Signature:
func(x)
Expression:
6.26418390534633*sqrt(x)*tanh(6.26418390534633*sqrt(1/x)) - 36
Source code:
def _lambdifygenerated(x):
    return (6.26418390534633*sqrt(x)*tanh(6.26418390534633*sqrt(x**(-1.0))) - 36)
If you look at your expression for f(x) you will see that it is highly non-linear (as also shown in the plot that JoahnC showed you):
6.26418390534633*sqrt(x)*tanh(6.26418390534633*sqrt(1/x)) - 36
SymPy cannot give an analytical solution for something that has no such solution. It can, however, give numerical approximations for univariate expressions. That's what nsolve is for. It needs an initial guess for the solution (as you anticipated by asking for xr).
>>> sp.nsolve(f(x), 100)
142.737633108449
I have seen how to solve systems of ODEs in Python, but all of the examples I have seen were "standard" equations. What I mean by standard is that the equations do not say "derivative of one function = expression that contains derivative of another function".
Here is a sample system I am trying to solve numerically. Initial conditions are x(0) = 5, y(0) = 3, z(0) = 2, and all initial derivatives are 0:
x'(t) + 4y(t) = -3y'(t)
y'(t) + ty(t) = -2z'(t)
z'(t) = -2y(t) + x'(t)
I am not 100% sure how to code this. Here is what I have tried:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import math
def ODESystem(f, t):
    x = f[0]
    y = f[1]
    z = f[2]
Now, what do I define first: dydt, dxdt or dzdt. Is there a way for me to define one expression that "hangs around" before I use it to define another expression?
You do not need to solve anything manually; you can just as well do
def ODESystem(f, t):
    x, y, z = f
    # rewrite the system as A @ [x', y', z'] = [-4, -t, -2]*y and solve for the derivatives
    return np.linalg.solve([[1, 3, 0], [0, 1, 2], [-1, 0, 1]], [-4, -t, -2]) * y
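For completeness, here is a runnable sketch of how that callback could be passed to odeint, assuming the imports from the question; the initial values are the ones given in the question, while the time grid is an arbitrary choice for illustration:
f0 = [5, 3, 2]                   # x(0), y(0), z(0) from the question
ts = np.linspace(0, 5, 500)      # illustrative time grid
sol = odeint(ODESystem, f0, ts)  # columns of sol are x(t), y(t), z(t)
plt.plot(ts, sol)
plt.legend(['x(t)', 'y(t)', 'z(t)'])
plt.show()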
Nevermind; I am stupid. I can keep on substituting into the third equation until I get an equation for z'(t) that does not include any other derivatives.
Why does log(xy) = log(x) + log(y) not work in SymPy?
I tried this:
from sympy import *
var('x y')
print(simplify(log(x*y)))
print(expand(log(x*y)))
print(collect(log(x*y),x))
print(solve(log(x*y),x))
# log(x*y)
# log(x*y)
# log(x*y)
# [1/y]
log(xy) = log(x) + log(y) does not always hold. More specifically, this may lead to problems if both x and y are negative or if they are complex. The Wolfram Alpha link you gave also states "Alternate form assuming x and y are positive".
To see this relation in SymPy, you have to mark the symbols x and y as positive, e.g. like this:
from sympy import symbols,log
x,y = symbols("x,y",positive=True)
expr = log(x*y)
expr.expand()
Alternatively (as hinted at by user6655984) you can use the force hint to let SymPy assume that everything is maximally benign:
from sympy import log
from sympy.abc import x,y
expr = log(x*y)
expr.expand(force=True)
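Both variants should then expand to log(x) + log(y); force=True simply tells expand to skip the assumption checks, so only use it when you know the arguments make the identity valid.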
I am trying to solve an equation for an unknown variable (y) using the code below. However, it is taking a lot of time. I have read some articles about using scipy.optimize to speed it up, but I am not sure how. Any help will be appreciated:
from sympy import Eq, var, solve
var('y')
eq = Eq(((5/(1+((.0025+y)/2)))**2) + ((5/(1+((.0027+y)/2)))**4) + ((105/(1+((.003+y)/2)))**6),104.90)
solve(eq)
If you are looking for a numeric solution, you can use brentq:
from scipy.optimize import brentq
f = lambda y: ((5/(1+((.0025+y)/2)))**2) + ((5/(1+((.0027+y)/2)))**4) + ((105/(1+((.003+y)/2)))**6)-104.90
res = brentq(f, 0, 1E8)
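Note that brentq is a bracketing method: it requires an interval [a, b] on which f changes sign and raises a ValueError otherwise, so the endpoints 0 and 1E8 may need to be adjusted for a different equation.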
Using python 2.7 with PyCharm Community Edition 2016.2.3 + Anaconda distribution.
I have an input similar to:
from sympy import *
x = symbols('x')
f = cos(x)
print (f.subs(x, 25))
The output is cos(25). Is there a way to evaluate trigonometric functions such as sin/cos at a certain angle? I've tried cos(degrees(x)), but nothing changes. Am I missing some crucial part of the documentation, or is there really no way to do this? Thanks for your help :)
Perform a numerical evaluation using the function N:
>>> from sympy import N, symbols, cos
>>> x = symbols('x')
>>> f = cos(x)
>>> f.subs(x, 25)
cos(25)
>>> N(f.subs(x, 25)) # evaluate after substitution
0.991202811863474
To work in degrees, convert the angle to radians using mpmath.radians, so that the computation is performed on a radian value:
>>> import mpmath
>>> f.subs(x, mpmath.radians(25))
0.906307787036650
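If you prefer to stay within SymPy, you should get the same value by converting degrees to radians with the standard pi/180 factor and calling evalf():
>>> from sympy import pi
>>> f.subs(x, 25*pi/180).evalf()
0.906307787036650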
Importing with * (wildcard imports) isn't a very good idea. Imagine what happens if you also did from math import *: one of the two cos functions would shadow the other.
See the PEP 8 guideline on imports.
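For example, in a hypothetical session with both wildcard imports, the later import wins:
>>> from sympy import *
>>> from math import *       # math.cos now shadows sympy.cos
>>> x = symbols('x')
>>> cos(x)                    # raises TypeError, since math.cos expects a number, not a Symbol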