Indefinite integral with constant - python

I need to use Python to get an indefinite integral of 1/(x^4 * sqrt(x^2 - a^2)), where a > 0.
I know how to integrate in Python, but not when there is a symbolic constant in the expression. Just to be clear, the answer needs to be in terms of both x and a, because the integral is indefinite.

Using SymPy, you can compute the symbolic integral for that expression like this:
import sympy

# 1/(x^4 * sqrt(x^2 - a^2)), a > 0; a is treated as just another symbol
x, a = sympy.symbols('x, a')
expr = 1 / (x ** 4 * sympy.sqrt(x ** 2 - a ** 2))
expr_int = sympy.integrate(expr, x)   # integrate with respect to x only
print(expr_int)
Which results in:
Piecewise((I*sqrt(a**2/x**2 - 1)/(3*a**2*x**2) + 2*I*sqrt(a**2/x**2 - 1)/(3*a**4), Abs(a**2/x**2) > 1), (sqrt(-a**2/x**2 + 1)/(3*a**2*x**2) + 2*sqrt(-a**2/x**2 + 1)/(3*a**4), True))
Or, rendered with LaTeX (for example via sympy.latex(expr_int)), the piecewise result above reads:
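\int \frac{dx}{x^{4}\sqrt{x^{2}-a^{2}}} =
\begin{cases}
\dfrac{i\sqrt{\frac{a^{2}}{x^{2}}-1}}{3a^{2}x^{2}} + \dfrac{2i\sqrt{\frac{a^{2}}{x^{2}}-1}}{3a^{4}} & \left|\frac{a^{2}}{x^{2}}\right| > 1 \\
\dfrac{\sqrt{1-\frac{a^{2}}{x^{2}}}}{3a^{2}x^{2}} + \dfrac{2\sqrt{1-\frac{a^{2}}{x^{2}}}}{3a^{4}} & \text{otherwise}
\end{cases}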

Related

How to write Bessel function using power series method in Python without Sympy?

I am studying Computational Physics with a lecturer who always asks me to write Python and MATLAB code without using "instant" code (a library call that gives the final answer without showing the mathematical expression). So I tried to write the Bessel function of the first kind using its power series, because I thought that would be easier than other methods (I am not sure). I don't know why my result is still so different, far from the answer that sympy/mpmath provides below.
Here is my code for x = 5 and n = 3
import math

def bessel_function(n, x, num_terms):
    # Initialize the power series expansion with the first term
    series_sum = (x / 2) ** n
    # Calculate the remaining terms of the power series expansion
    for k in range(0, num_terms):
        term = ((-1) ** k) * ((x / 2) ** (2 * k)) / (math.factorial(k)**2)*(2**2*k)
        series_sum = series_sum + term
    return series_sum

# Test the function with n = 3, x = 5, and num_terms = 30 and 15
print(bessel_function(3, 5, 30))
print(bessel_function(3, 5, 15))
And here is the reference code using the mpmath and sympy libraries:
from mpmath import *
mp.dps = 15; mp.pretty = True
print(besselj(3, 5))

import sympy

def bessel_function(n, x):
    # Use the besselj function from sympy to calculate the Bessel function
    return sympy.besselj(n, x)

# Calculate the numerical value of the Bessel function using evalf
numerical_value = bessel_function(3, 5).evalf()
print(numerical_value)
It is wasteful to compute each term from scratch with a power and a factorial, as you do. It is much more efficient to compute each term from the previous one.
For J_N(X), the ratio of successive terms is
T_k / T_(k-1) = -(X/2)^2 / (k*(k+N)),
with the first term T_0 = (X/2)^N / N!.
import math

N = 3
X = 5

# First term
X *= 0.5
Term = pow(X, N) / math.factorial(N)
Sum = Term
print(Sum)

# Next terms
X *= -X
for k in range(1, 21):
    Term *= X / (k * (k + N))
    Sum += Term
    print(Sum)
The successive sums are
2.6041666666666665
-1.4648437499999996
1.0782877604166665
0.19525598596643523
0.39236129276336185
0.3615635885763421
0.365128137672062
0.3648098743599441
0.3648324782883616
0.36483117019065225
0.3648312330799652
0.36483123052763916
0.3648312306162616
0.3648312306135987
0.3648312306136686
0.364831230613667
0.36483123061366707
0.36483123061366707
0.36483123061366707
0.36483123061366707
0.36483123061366707
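For reference, one possible way to package that recurrence into a reusable function and compare it with mpmath (this wrapper and its defaults are my own; the name bessel_j_series is hypothetical):

import math
from mpmath import besselj

def bessel_j_series(n, x, num_terms=30):
    term = (x / 2.0) ** n / math.factorial(n)    # T0 = (X/2)^N / N!
    total = term
    factor = -(x / 2.0) ** 2                     # constant part of the ratio Tk / Tk-1
    for k in range(1, num_terms):
        term *= factor / (k * (k + n))           # Tk = Tk-1 * (-(X/2)^2) / (k*(k+N))
        total += term
    return total

print(bessel_j_series(3, 5))   # should converge to roughly 0.364831230613667
print(besselj(3, 5))           # mpmath's reference value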

sympy not ignoring unimportant decimals in exponential expression

I have code that evaluates some mathematical equations, and when I look at the simplified results, SymPy cannot equate 2.0 with 2 inside a power, which is logical since one is a float and the other is an integer. But it was SymPy's decision to put those two values there, not mine.
Here is the expression in my results that sympy is not simplifying
from sympy import *
x = symbols('x')
y = -exp(2.0*x) + exp(2*x)
print(simplify(y)) # output is -exp(2.0*x) + exp(2*x)
y = -exp(2*x) + exp(2*x)
print(simplify(y)) # output is 0
y = -2.0*x + 2*x
print(simplify(y)) # output is 0
y = -x**2.0 + x**2
print(simplify(y)) # output is -x**2.0 + x**2
Is there any way to work around this problem? I am looking for a way to make SymPy assume that everything other than the symbols is a float, and to prevent it from deciding on its own which value is a float and which is an integer.
This problem has been asked before by Gerardo Suarez, but without a satisfactory answer.
There is another sympy function you can use called nsimplify. When I run your examples they all return zero:
from sympy import *
x = symbols("x")
y = -exp(2.0 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -exp(2 * x) + exp(2 * x)
print(nsimplify(y)) # output is 0
y = -2.0 * x + 2 * x
print(nsimplify(y)) # output is 0
y = -(x ** 2.0) + x ** 2
print(nsimplify(y)) # output is 0
Update
As @Shoaib Mirzaei mentioned, you can also use the rational argument of simplify() like this:
simplify(y, rational=True)
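For example, re-using the first expression from above, the following should also print 0:

from sympy import *
x = symbols("x")
y = -exp(2.0 * x) + exp(2 * x)
print(simplify(y, rational=True))  # should print 0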

Sympy strange integral output

I'm trying to solve an integral with sympy. But it gives me a wrong solution. Why?
import sympy
from sympy import Integral, exp, oo

x, y = sympy.symbols("x y", real=True)
b, u, l, t = sympy.symbols("b u l t", real=True, positive=True)
Fortet = Integral(exp(-l * t) * (sympy.sqrt(2 * sympy.pi * t)) ** (-1)
                  * exp(-((b - u * t - y) ** 2) / (2 * t)),
                  (t, 0, oo))
Fortet.doit()
Result (wrong):
Piecewise((-(-b/2 + y)*sqrt(2*l +
u**2)*(-sqrt(pi)*sinh(sqrt(2)*sqrt(b)*sqrt(l +
u**2/2)*sqrt(polar_lift(1 + y**2/(b*polar_lift(b -
2*y))))*sqrt(polar_lift(b - 2*y))) +
sqrt(pi)*cosh(sqrt(2)*sqrt(b)*sqrt(l + u**2/2)*sqrt(polar_lift(1 +
y**2/(b*polar_lift(b - 2*y))))*sqrt(polar_lift(b - 2*y))))*exp(b*u -
u*y)/(sqrt(pi)*(b - 2*y)*(l + u**2/2)), Abs(arg(1 +
y**2/(b*polar_lift(b - 2*y))) + arg(b - 2*y)) <= pi/2),
(Integral(sqrt(2)*exp(-l*t)*exp(-(b - t*u -
y)**2/(2*t))/(2*sqrt(pi)*sqrt(t)), (t, 0, oo)), True))
Expected (correct) solution:
Solution = (exp((-u)*(b - y)) * exp(sympy.sqrt(u**2 + 2*l)*(b-y)))/(sympy.sqrt(2*l + u**2)) #RIGHT solution
Both results are in fact the same; the first one is probably slightly more correct. You tend to see these polar_lift functions whenever SymPy tries to do something like take a square root of a quantity whose sign it does not know (after integrating).
A polar_lift does not appear below, but this basic Gaussian example shows that SymPy tries to be as general as possible:
from sympy import *
x = Symbol("x", real=True)
y = Symbol("y", real=True)
s = Symbol("s", real=True) # , positive=True
gaussian = exp(-((x-y)**2)/(2*(s**2)))
nfactor = simplify(integrate(gaussian, (x,-oo,oo)))
print(nfactor)
You need s to be declared as positive: s = Symbol("s", real=True, positive=True). A similar thing happens with these kinds of polar_lift(b - 2*y) functions in your example. It also happens with the question I reference below.
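For instance, here is a minimal re-run of the Gaussian example with the positive assumption (my own variation; the expected output in the comment is hedged):

from sympy import *
x = Symbol("x", real=True)
y = Symbol("y", real=True)
s = Symbol("s", real=True, positive=True)
gaussian = exp(-((x - y)**2) / (2 * (s**2)))
print(simplify(integrate(gaussian, (x, -oo, oo))))  # should now simply be sqrt(2)*sqrt(pi)*s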
I have no idea why, but N(polar_lift(x)) for any float or int x just gives x back, yet SymPy does not simplify it nicely when x is symbolic. It turns out that if you keep on simplifying, you get nicer and nicer looking answers. I couldn't find polar_lift in the pure-math literature, so I don't know what it actually does.
Remember how the simple example above gave a Piecewise? The same thing happens here, so we just take the first piece, since the second piece is an un-evaluated integral.
In the code below, I pass conds='none' to integrate() (a trick from another Stack Overflow question) to drop the piecewise condition, then I simplify twice, and finally I remove the polar_lift calls manually.
import sympy as sp

x, y = sp.symbols("x y", real=True)
b, u, l, t = sp.symbols("b u l t", real=True, positive=True)
Fortet = sp.integrate(sp.exp(-l * t) * (sp.sqrt(2 * sp.pi * t)) ** (-1)
                      * sp.exp(-((b - u * t - y) ** 2) / (2 * t)),
                      (t, 0, sp.oo), conds='none')
incorrect = Fortet.simplify().simplify()
# strip the polar_lift calls by re-parsing the printed expression (keeping the symbol assumptions)
correct = sp.sympify(str(incorrect).replace("polar_lift", ""),
                     locals={'x': x, 'y': y, 'b': b, 'u': u, 'l': l, 't': t})
correct = correct.factor()
print(correct)
The result is:
exp(b*u)*exp(-u*y)*exp(-sqrt(2*l + u**2)*sqrt(b**2 - 2*b*y + y**2))/sqrt(2*l + u**2)
That is close enough to your expression. I couldn't make SymPy simplify the sqrt(b**2 - 2*b*y + y**2) to Abs(b-y) no matter how hard I tried.
Note that either SymPy is still wrong or your expected solution is wrong, since the signs of the exponents in the numerator are opposite. I therefore compared the two expressions numerically (I used Desmos for that).
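If you want to reproduce that comparison in Python instead, a quick numeric spot check with mpmath is possible; the sample parameter values below are my own choice, and only the printed numbers tell you which closed form matches the integral:

from mpmath import mp, quad, exp, sqrt, pi, inf
mp.dps = 15
# arbitrary sample values for the positive parameters b, u, l and for y
bv, uv, lv, yv = 2.0, 1.0, 1.0, 0.5
integrand = lambda t: exp(-lv * t) / sqrt(2 * pi * t) * exp(-(bv - uv * t - yv)**2 / (2 * t))
print(quad(integrand, [0, inf]))  # numeric value of the original integral
# SymPy's closed form from above
print(exp(uv * (bv - yv)) * exp(-sqrt(2 * lv + uv**2) * abs(bv - yv)) / sqrt(2 * lv + uv**2))
# the "expected" closed form from the question
print(exp(-uv * (bv - yv)) * exp(sqrt(2 * lv + uv**2) * (bv - yv)) / sqrt(2 * lv + uv**2))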

How do I simplify the sum of sine and cosine in SymPy?

How do I simplify a*sin(wt) + b*cos(wt) into c*sin(wt+theta) using SymPy? For example:
f = sin(t) + 2*cos(t) = 2.236*sin(t + 1.107)
I tried the following:
from sympy import *
t = symbols('t')
f=sin(t)+2*cos(t)
trigsimp(f) #Returns sin(t)+2*cos(t)
simplify(f) #Returns sin(t)+2*cos(t)
f.rewrite(sin) #Returns sin(t)+2*sin(t+Pi/2)
P.S.: I don't have direct access to a, b and w, only to f.
Any suggestions?
The general answer can be achieved by noting that you want to have
a * sin(t) + b * cos(t) = A * (cos(c)*sin(t) + sin(c)*cos(t))
This leads to the simultaneous equations a = A * cos(c) and b = A * sin(c).
Dividing the second equation by the first, we can solve for c. Substituting that solution into the first equation, you can then solve for A.
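For the f = sin(t) + 2*cos(t) example from the question (a = 1, b = 2), this gives c = atan(2/1) ≈ 1.107 and A = 1/cos(1.107) ≈ 2.236, which matches the 2.236*sin(t + 1.107) quoted above.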
I followed the same pattern but just to get it in terms of cos. If you want to get it in terms of sin, you can use Rodrigo's formula.
The following code should be able to handle any linear combination of terms of the form x * sin(t - w) or y * cos(t - z); there can be multiple sin and cos terms.
from sympy import *
t = symbols('t', real=True)
expr = sin(t)+2*cos(t) # unknown
d = collect(expr.expand(trig=True), [sin(t), cos(t)], evaluate=False)
a = d[sin(t)]
b = d[cos(t)]
cos_phase = atan(a/b)
amplitude = a / sin(cos_phase)
print(amplitude.evalf() * cos(t - cos_phase.evalf()))
Which gives
2.23606797749979*cos(t - 0.463647609000806)
This seems to be a satisfactory match after plotting both graphs.
You could even have something like
expr = 2*sin(t - 3) + cos(t) - 3*cos(t - 2)
and it should work fine.
a * sin(wt) + b * cos(wt) = sqrt(a**2 + b**2) * sin(wt + acos(a / sqrt(a**2 + b**2)))
While the amplitude is the radical sqrt(a**2 + b**2), the phase is given by the arccosine of the ratio a / sqrt(a**2 + b**2), which may not be expressible in terms of arithmetic operations and radicals. Hence, you may be asking SymPy to do the impossible. Better use floating-point values, but you do not need SymPy for that.
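As a quick floating-point sanity check of that identity for the question's example (a = 1, b = 2; note the acos form assumes b >= 0), something like this works without SymPy:

import math

a, b = 1.0, 2.0                      # from f = sin(t) + 2*cos(t)
amplitude = math.hypot(a, b)         # sqrt(a**2 + b**2)
phase = math.acos(a / amplitude)     # valid as written because b > 0
print(amplitude, phase)              # approximately 2.236 and 1.107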

Euler method (explicit and implicit)

I'd like to implement Euler's method (the explicit and the implicit one)
(https://en.wikipedia.org/wiki/Euler_method) for the following model:
x'(t) = q * (x_M - x(t)) * x(t)
x(0) = x_0
where q, x_M and x_0 are real numbers.
I already know the (theoretical) derivation of the method, but I couldn't figure out where to insert / change the model.
Could anybody help?
EDIT: You were right, I hadn't understood the method correctly. Now, after a few hours, I think I really got it! I'm fairly sure about the explicit method (nevertheless, could anybody please have a look at my code?).
With the implicit implementation I'm not so sure. Could anyone please look at the implicit method and give me feedback on what is correct and what is not?
def explizit_euler():
    '''x'(t) = q*(xM - x(t))*x(t),  x(0) = x0'''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = x + dt * (q * (xM - x) * x)
        print('%6.3f %6.3f' % (t, x))

def implizit_euler():
    '''x'(t) = q*(xM - x(t))*x(t),  x(0) = x0'''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = (1.0 / (1.0 - q *(xM + x) * x))
        print('%6.3f %6.3f' % (t, x))
Pre-emptive note: although the general idea should be correct, I did all the algebra right in the editor box, so there might be mistakes. Please check it yourself before using this for anything really important.
I'm not sure how you arrived at the "implicit" formula
x = (1.0 / (1.0 - q *(xM + x) * x))
but it is wrong, and you can check that by comparing your "explicit" and "implicit" results: they should differ only slightly, but with this formula they diverge drastically.
To understand the implicit Euler method, you should first get the idea behind the explicit one. And the idea is really simple and is explained at the Derivation section in the wiki: since derivative y'(x) is a limit of (y(x+h) - y(x))/h, you can approximate y(x+h) as y(x) + h*y'(x) for small h, assuming our original differential equation is
y'(x) = F(x, y(x))
Note that the reason this is only an approximation rather than exact value is that even over small range [x, x+h] the derivative y'(x) changes slightly. It means that if you want to get a better approximation of y(x+h), you need a better approximation of "average" derivative y'(x) over the range [x, x+h]. Let's call that approximation just y'. One idea of such improvement is to find both y' and y(x+h) at the same time by saying that we want to find such y' and y(x+h) that y' would be actually y'(x+h) (i.e. the derivative at the end). This results in the following system of equations:
y'(x+h) = F(x+h, y(x+h))
y(x+h) = y(x) + h*y'(x+h)
which is equivalent to a single "implicit" equation:
y(x+h) - y(x) = h * F(x+h, y(x+h))
It is called "implicit" because here the target y(x+h) is also a part of F. And note that quite similar equation is mentioned in the Modifications and extensions section of the wiki article.
So now going to your case that equation becomes
x(t+dt) - x(t) = dt*q*(xM -x(t+dt))*x(t+dt)
or equivalently
dt*q*x(t+dt)^2 + (1 - dt*q*xM)*x(t+dt) - x(t) = 0
This is a quadratic equation with two solutions:
x(t+dt) = [(dt*q*xM - 1) ± sqrt((dt*q*xM - 1)^2 + 4*dt*q*x(t))]/(2*dt*q)
Obviously we want the solution that is "close" to the x(t) which is the + solution. So the code should be something like:
b = (q * xM * dt - 1)
x(t+h) = (b + (b ** 2 + 4 * q * x(t) * dt) ** 0.5) / 2 / q / dt
(Editor's note:) Expanding with the conjugate (the "binomial complement"), this formula has a numerically more stable form for small dt, where b < 0:
x(t+h) = (2 * x(t)) / ((b ** 2 + 4 * q * x(t) * dt) ** 0.5 - b)
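Putting the pieces together, a minimal sketch of the corrected implicit loop could look like the following (the parameter values are copied from the question; the function name and its defaults are my own):

import math

def implizit_euler_fixed(q=2.0, xM=2.0, x0=0.5, T=5.0, dt=0.01):
    '''Implicit Euler for x'(t) = q*(xM - x(t))*x(t), using the stable "+" root derived above.'''
    x, t = x0, 0.0
    b = q * xM * dt - 1.0
    for _ in range(int(T / dt)):
        t += dt
        # positive root of dt*q*x_new**2 + (1 - dt*q*xM)*x_new - x = 0, in conjugate form
        x = 2.0 * x / (math.sqrt(b * b + 4.0 * q * x * dt) - b)
        print('%6.3f %6.3f' % (t, x))

implizit_euler_fixed()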
