I'm currently using sympy to check my algebra on some nasty equations involving second order derivatives and complex numbers.
import sympy
from sympy.abc import a, e, i, h, t, x, m, A
# define a wavefunction
Psi = A * sympy.exp(-a * ((m*x**2 /h)+i*t))
# take the first order time derivative
Psi_dt = sympy.diff(Psi, t)
# take the second order space derivative
Psi_d2x = sympy.diff(Psi, x, 2)
# write an expression for the potential energy (rearranging Schrödinger's equation)
V = 1/Psi * (i*h*Psi_dt + (((h**2)/2*m) * Psi_d2x))
# simplify the potential function
sympy.simplify(V)
Which yields this nice thing:
a*(-h*i**2 + m**2*(2*a*m*x**2 - h))
It would be really nice if sympy simplified i^2 to -1.
So how do I tell it that i represents the square root of -1?
On a related note, it would also be really nice to tell sympy that e is Euler's number, so that if I call sympy.diff(e**x, x) I get e**x as output.
You need to use the SymPy built-ins, rather than treating those symbols as free variables. In particular:
from sympy import I, E
I is sqrt(-1); E is Euler's number.
Then use SymPy's complex-number functions (re, im, Abs, conjugate, and so on) to manipulate the complex expressions as needed.
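For instance, here is a minimal sketch of the same calculation using the built-ins; the expression is kept exactly as written in the question, only with i replaced by I and explicit imports added:
from sympy import symbols, exp, diff, simplify, I, E
a, h, t, x, m, A = symbols('a h t x m A')
Psi = A * exp(-a * ((m*x**2 / h) + I*t))
Psi_dt = diff(Psi, t)
Psi_d2x = diff(Psi, x, 2)
V = 1/Psi * (I*h*Psi_dt + (((h**2)/2*m) * Psi_d2x))
simplify(V)      # I**2 is evaluated to -1 automatically
diff(E**x, x)    # returns exp(x), i.e. E**x, since E is Euler's number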
I am quite a newbie to Python and SymPy, so I googled a lot before asking this question.
I am using SymPy to derive the equations of motion for a mechanical system via the Lagrangian approach, which I managed to do.
from math import *
from sympy import *
from sympy.physics.mechanics import *
from sympy.physics.vector import *
# dynamic symbols, for Lagrange equations
x, z, theta = dynamicsymbols('x z theta')
xd, zd, thetad = dynamicsymbols('x z theta',1)
xdd, zdd, thetadd = dynamicsymbols('x z theta',2)
# Lagrangian of the system
L = Symbol('L')
def L(x, z, theta, xd, zd, thetad):
    return 0.5*1*xd**2 + 0.5*100*thetad**2 - 0.5*300*z**2 + 0.5*250*(x + 100*theta)**2
# build Lagrange equations
LM0 = LagrangesMethod(L(x,z,theta,xd,zd,thetad), [x, z, theta])
equations0 = LM0.form_lagranges_equations()
print(equations0)
However, I obtain a second-order ODE system in 3 variables, which I would like to convert into a first-order ODE system in 6 variables.
As you can see, the output is given in terms of the variables x, z, theta and their second derivatives. I cannot manage to get the Lagrange equations in terms of the variables xd, xdd, etc., so I was thinking of replacing the derivatives like this:
t = Symbol('t')  # the time variable used by dynamicsymbols
print(equations0[2].subs(Derivative(x, t, t), Derivative(xd, t)))
However, the substitution does not work and I get back the unmodified expression. I have seen another example where such a substitution seems to work; the difference I see here is the use of dynamicsymbols.
Can anyone suggest how I should treat my equations to describe the system in terms of first-order derivatives only?
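To make the target form concrete, here is a toy sketch of the kind of reduction I mean, with a made-up single equation and an auxiliary velocity function v (the names are just for illustration, not my actual system):
from sympy import symbols, Function, Eq

t = symbols('t')
x = Function('x')
v = Function('v')   # auxiliary velocity variable

# toy second-order equation: x'' = -x - x'
second_order = Eq(x(t).diff(t, 2), -x(t) - x(t).diff(t))

# equivalent first-order system in two variables
first_order = [Eq(x(t).diff(t), v(t)),
               Eq(v(t).diff(t), -x(t) - v(t))]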
Thanks in advance
I want to differentiate the following equation:
from sympy import *
init_printing()
t, r = symbols('t, r')
x, phi = symbols('x, phi', cls=Function)  # x and phi must be undefined functions so they can be applied to t
# this is how I want to do it
eq = Eq(x(t), r*phi(t))
eq.diff(t)
The result is differentiated only on the left side. I would like it to be evaluated on both sides. Is that possible in a simple way?
Currently I do the following:
Eq(eq.lhs.diff(t), eq.rhs.diff(t))
Borrowing some of the logic from "Sympy: working with equalities manually", you can do something like this:
eq.func(*map(lambda x: diff(x, t), eq.args))
A bit ugly, but it works. Alternatively, you could just lift the .do() method from that answer and use it if you're going to do this a bunch of times.
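If this comes up often, the same idea fits in a tiny helper; diff_both_sides below is just my own name, not a SymPy API:
from sympy import Eq, Function, symbols, diff

def diff_both_sides(eq, *args):
    # rebuild the equality with each side differentiated
    return eq.func(*[diff(side, *args) for side in eq.args])

t, r = symbols('t, r')
x, phi = symbols('x, phi', cls=Function)
eq = Eq(x(t), r*phi(t))
diff_both_sides(eq, t)   # Eq(Derivative(x(t), t), r*Derivative(phi(t), t))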
I want to find an elegant way of solving the following differential equation:
from sympy import *
init_printing()
M, m, t, r = symbols('M m t r')
phi = Function('phi')  # phi must be an undefined function of t; m also needs to be declared
eq = Eq(-M * phi(t).diff(t), Rational(3, 2) * m * r**2 * phi(t).diff(t) * phi(t).diff(t, t))
I assume that phi(t).diff(t) is not zero, so it can be cancelled from both the left- and right-hand sides.
This is how I get to the solution:
# I assume d/dt(phi(t)) != 0
theta = symbols('theta')
eq = eq.subs({phi(t).diff(t, 2): theta}) # remove the second derivative
eq = eq.subs({phi(t).diff(t): 1}) # the first derivative cancels out
eq = eq.subs({theta: phi(t).diff(t, 2)}) # get the second derivative back
dsolve(eq, phi(t))
How do I solve this more elegantly?
Ideally dsolve() would be able to solve the equation directly, but it doesn't know how (it needs to learn that it can factor an equation and solve the factors independently). I opened an issue for it.
My only other suggestion is to divide phi' out directly:
eq = Eq(eq.lhs/phi(t).diff(t), eq.rhs/phi(t).diff(t))
You can also use
eq.xreplace({phi(t).diff(t): 1})
to replace the first derivative with 1 without modifying the second derivative (unlike subs, xreplace has no mathematical knowledge of what it is replacing; it just replaces expressions exactly).
And don't forget that phi(t) = C1 is also a solution (for when phi' does equal 0).
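For completeness, here is a sketch of the divide-out route end to end, with the same symbols as in the question (phi declared as a Function and m as a plain symbol); the constant solution still has to be added by hand:
from sympy import symbols, Function, Eq, Rational, dsolve

M, m, t, r = symbols('M m t r')
phi = Function('phi')
eq = Eq(-M * phi(t).diff(t),
        Rational(3, 2) * m * r**2 * phi(t).diff(t) * phi(t).diff(t, t))

# divide out the common factor phi'(t), assuming it is nonzero
reduced = Eq(eq.lhs / phi(t).diff(t), eq.rhs / phi(t).diff(t))
print(dsolve(reduced, phi(t)))   # general solution for the phi'(t) != 0 branch
# the constant solution phi(t) = C1 covers the phi'(t) == 0 branch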
I have an equation a*x + log(x) - b = 0 (a and b are constants), and I want to solve for x. The problem is that I have numerous values of a (and correspondingly numerous values of b). How do I solve this equation using Python?
You could check out something like
http://docs.scipy.org/doc/scipy-0.13.0/reference/optimize.nonlin.html
which has tools specifically designed for these kinds of equations.
Cool, today I learned about SciPy's numerical solvers.
from math import log
from scipy.optimize import brentq

def f(x, a, b):
    return a * x + log(x) - b

for a in range(1, 5):
    for b in range(1, 5):
        result = brentq(lambda x: f(x, a, b), 1e-10, 20)
        print(a, b, result)
brentq estimates where the function crosses the x-axis. You need to give it two points: one where the function is definitely negative and one where it is definitely positive. For the negative point, choose a number smaller than exp(-B), where B is the maximum value of b. For the positive point, choose a number bigger than B.
If you cannot predict the range of b values, you can use a general solver such as fsolve instead. This will probably produce a solution, but that is not guaranteed.
from scipy.optimize import fsolve

for a in range(1, 5):
    for b in range(1, 5):
        result = fsolve(f, 1, args=(a, b))
        print(a, b, result)
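If you go the fsolve route, it is worth checking whether it actually converged; a small sketch using full_output (the values of a and b here are just illustrative, and the equation is rewritten with numpy's log so it accepts the array fsolve passes in):
import numpy as np
from scipy.optimize import fsolve

a, b = 3, 2
g = lambda x: a * x + np.log(x) - b
root, info, ier, msg = fsolve(g, 1.0, full_output=True)
if ier == 1:                               # ier == 1 means fsolve reports convergence
    print(root[0])
else:
    print("fsolve did not converge:", msg)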
I'm solving an integral of the form

    integral from -1 to 1 of dx / (x - a + i*eta)

numerically using Python, where a can take on any value: positive, negative, inside or outside the interval [-1, 1], and eta is an infinitesimal positive quantity. There is also a second, outer integral which changes the value of a.
I'm trying to solve this using the Sokhotski–Plemelj theorem,

    1/(x - a ± i*eta) -> P(1/(x - a)) ∓ i*pi*delta(x - a)   as eta -> 0+

However, this involves determining the principal value, which I can't find any method for in Python. I know it's implemented in Matlab, but does anyone know of either a library or some other way of determining the principal value in Python (if a principal value exists)?
You can use sympy to evaluate the integral directly. Its real part with eta->0 is the principal value:
from sympy import *
x, y, eta = symbols('x y eta', real=True)
re(integrate(1/(x - y + I*eta), (x, -1, 1))).simplify().subs({eta: 0})
# -> log(Abs(-y + 1)/Abs(y + 1))
Matlab's symbolic toolbox int gives you the same result, of course (I'm not aware of other relevant tools in Matlab for this; please specify if you know a specific one).
You asked about numerical computation of a principal value. The answer there is that if you only have a function f(y) whose analytical form or behavior you don't know, it is in general impossible to compute the principal value numerically. You need to know things such as where the poles of the integrand are and what their order is.
If, on the other hand, you know your integral is of the form f(x) / (x - y_0), scipy.integrate.quad can compute the principal value for you, for example:
import numpy as np
from scipy import integrate, special
# P \int_{-1}^1 dx 1/(x - wvar) * (1 + sin(x))
print(integrate.quad(lambda x: 1 + np.sin(x), -1, 1, weight='cauchy', wvar=0))
# -> (1.8921661407343657, 2.426947531830592e-13)
# Check against known result
print(2*special.sici(1)[0])
# -> 1.89216614073
See here for details.
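As a quick sanity check, the weight='cauchy' result for a constant numerator can be compared against the closed-form principal value found above (the pole position y = 0.5 here is just an arbitrary example):
import numpy as np
from scipy import integrate

y0 = 0.5
pv, err = integrate.quad(lambda x: 1.0, -1, 1, weight='cauchy', wvar=y0)
print(pv)                                   # approximately -1.0986
print(np.log(abs(1 - y0) / abs(1 + y0)))    # closed form log(|1 - y|/|1 + y|)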