I cannot figure out how to assume a positive real part of a complex number in SymPy.
Example of Mathematica code:
a = InverseFourierTransform[ R/(I omega - lambda) + Conjugate[R]/(I omega - Conjugate[lambda]), omega, t, FourierParameters -> {1, -1}]
Simplify[a, {Re[lambda] < 0, t > 0}]
Similar SymPy code:
import sympy as sym
sym.init_printing()
ω = sym.symbols('omega', real=True, positive=True)
R, λ = sym.symbols('R, lambda', complex=True)
t = sym.symbols('t', real=True, positive=True)
α = R/(sym.I*ω-λ)+sym.conjugate(R)/(sym.I*ω-sym.conjugate(λ))
sym.inverse_fourier_transform(α, ω, t)
How could I assume the real part of lambda to be positive? If I give lambda positive=True, then SymPy also assumes imaginary=False.
Any ideas?
Create two real symbols x, y, assume x positive, and let λ be x + I*y.
import sympy as sym
ω, x, t = sym.symbols('omega x t', positive=True)
y = sym.symbols('y', real=True)
R = sym.symbols('R')
λ = x + sym.I*y
α = R/(sym.I*ω-λ)+sym.conjugate(R)/(sym.I*ω-sym.conjugate(λ))
res = sym.inverse_fourier_transform(α, ω, t)
The result is
2*pi*R*exp(2*pi*t*(x + I*y)) + 2*pi*exp(2*pi*t*(x - I*y))*conjugate(R)
You can then return to single-symbol λ with substitution:
λ = sym.symbols('lambda')
res.subs(x + sym.I*y, λ).conjugate().subs(x + sym.I*y, λ).conjugate()
obtaining
2*pi*R*exp(2*pi*lambda*t) + 2*pi*exp(2*pi*t*conjugate(lambda))*conjugate(R)
(The trick with two conjugations is needed because subs isn't going to replace x - I*y with conjugate(lambda) otherwise.)
Remarks on assumptions
complex=True is redundant: real numbers are included in the complex numbers (7 is a complex number), so this assumption adds nothing
real=True is redundant when positive=True is given
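For instance, a quick check of how these assumptions interact (a small illustration):
import sympy as sym
lam = sym.symbols('lambda', positive=True)
print(lam.is_real, lam.is_imaginary)   # True False: positive=True forces the symbol to be real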
Related
I am using the SymPy package (I wouldn't mind using any other package if it works) to solve a system of equations. The system contains the imaginary unit I and some coefficients p, q, r, c, w, which are constants, with n and N as the unknowns. I am interested in solving the system with these constant coefficients for further evaluation. The code that I wrote is:
import sympy as sym
from sympy import re, im, I, E, symbols
I = complex(0,1)
p = sym.symbols('p', real=True)
q = sym.symbols('q', real=True)
r = sym.symbols('r', real=True)
c = sym.symbols('c', real=True)
w = sym.symbols('w', real=True)
n, N = sym.symbols('n, N')
Eq1 = sym.Eq(p*n-q*(N*r-n)-c + w*n*I, 0)
Eq2 = sym.Eq(q*(N*r-n) + w*N*I, 0)
Sol = sym.solve([Eq1, Eq2], (n, N))
n = sym.simplify(Sol[n])
N = sym.simplify(Sol[N])
n_real = sym.simplify(sym.re(n))
n_imag = sym.simplify(sym.im(n))
N_real = sym.simplify(sym.re(N))
N_imag = sym.simplify(sym.im(N))
p=1
q=1
r=1
c=1
print('Re(n) =', n_real)
print('Im(n) =', n_imag)
which gives as output:
Re(n) = c*(q*r*(p*q*r - w**2) + w**2*(p + q*r + q))/(w**2*(p + q*r + q)**2 + (p*q*r - w**2)**2)
Im(n) = c*w*(p*q*r - q*r*(p + q*r + q) - w**2)/(w**2*(p + q*r + q)**2 + (p*q*r - w**2)**2)
This is in agreement with the solution that I get by solving it by hand.
However, I would like to know how to replace the coefficients p, q, r, c by numerical values, e.g. 1 as in the code. Unfortunately, I always get output with symbolic coefficients, never numeric values.
Does anyone know what I am missing with this package? Is there any other package that can deal with this task/problem?
Thanks in advance.
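One possible approach (a sketch, assuming the plain reassignments p = 1 etc. are dropped so that p, q, r, c still name the SymPy symbols) is to substitute the numbers into the solved expressions with subs rather than rebinding the Python names:
values = {p: 1, q: 1, r: 1, c: 1}
print('Re(n) =', sym.simplify(n_real.subs(values)))
print('Im(n) =', sym.simplify(n_imag.subs(values)))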
I'm trying to solve an integral with SymPy, but it gives me a wrong solution. Why?
import sympy
from sympy import Integral, exp, oo
x, y = sympy.symbols("x y", real=True)
b, u, l, t = sympy.symbols("b u l t ", real=True, positive=True)
Fortet = Integral(exp(-l * t) * (sympy.sqrt(2 * sympy.pi * t)) ** (-1) * exp(-((b - u * t - y) ** 2) / (2 * t)),
(t, 0, oo))
Fortet.doit()
Result (wrong):
Piecewise((-(-b/2 + y)*sqrt(2*l + u**2)
    *(-sqrt(pi)*sinh(sqrt(2)*sqrt(b)*sqrt(l + u**2/2)*sqrt(polar_lift(1 + y**2/(b*polar_lift(b - 2*y))))*sqrt(polar_lift(b - 2*y)))
      + sqrt(pi)*cosh(sqrt(2)*sqrt(b)*sqrt(l + u**2/2)*sqrt(polar_lift(1 + y**2/(b*polar_lift(b - 2*y))))*sqrt(polar_lift(b - 2*y))))
    *exp(b*u - u*y)/(sqrt(pi)*(b - 2*y)*(l + u**2/2)),
    Abs(arg(1 + y**2/(b*polar_lift(b - 2*y))) + arg(b - 2*y)) <= pi/2),
  (Integral(sqrt(2)*exp(-l*t)*exp(-(b - t*u - y)**2/(2*t))/(2*sqrt(pi)*sqrt(t)), (t, 0, oo)), True))
Expected (correct) solution:
Solution = (exp((-u)*(b - y)) * exp(sympy.sqrt(u**2 + 2*l)*(b-y)))/(sympy.sqrt(2*l + u**2)) #RIGHT solution
Both results are in fact the same. The first one is probably slightly more correct. You tend to see these polar_lift functions whenever SymPy takes a square root of something whose sign it does not know (after integrating).
No polar_lift appears below, but this basic Gaussian example shows that SymPy tries to be as general as possible:
from sympy import *
x = Symbol("x", real=True)
y = Symbol("y", real=True)
s = Symbol("s", real=True) # , positive=True
gaussian = exp(-((x-y)**2)/(2*(s**2)))
nfactor = simplify(integrate(gaussian, (x,-oo,oo)))
print(nfactor)
You need s to be declared as positive: s = Symbol("s", real=True, positive=True). A similar thing happens with these kinds of polar_lift(b - 2*y) functions in your example. It also happens with the question I reference below.
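For comparison, here is the same integral with the positive assumption switched on (a quick sketch of the claim above):
from sympy import Symbol, exp, integrate, simplify, oo
x = Symbol("x", real=True)
y = Symbol("y", real=True)
s = Symbol("s", real=True, positive=True)   # positive=True added
gaussian = exp(-((x - y)**2)/(2*(s**2)))
print(simplify(integrate(gaussian, (x, -oo, oo))))   # sqrt(2)*sqrt(pi)*s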
I have no idea why, but N(polar_lift(x)) for any float or int x gives x back; yet SymPy does not simplify nicely with a symbolic x. It turns out that if you keep on simplifying, you get nicer and nicer looking answers. I couldn't find anything about polar_lift in pure mathematics, so I don't know what it actually does.
Remember how the simple example above gave a Piecewise? The same thing happens here, so we just take the first piece, since the second piece is an unevaluated integral.
In the code below, I use conds='none' (following this question) to remove the Piecewise, then I simplify twice, and finally I manually remove the polar_lift.
import sympy as sp
x, y = sp.symbols("x y", real=True)
b, u, l, t = sp.symbols("b u l t ", real=True, positive=True)
Fortet = sp.integrate(sp.exp(-l * t) * (sp.sqrt(2 * sp.pi * t)) ** (-1) *
sp.exp(-((b - u * t - y) ** 2) / (2 * t)),
(t, 0, sp.oo), conds='none')
incorrect = Fortet.simplify().simplify()
correct = eval(str(incorrect).replace("polar_lift", ""))
correct = correct.factor()
print(correct)
The result is:
exp(b*u)*exp(-u*y)*exp(-sqrt(2*l + u**2)*sqrt(b**2 - 2*b*y + y**2))/sqrt(2*l + u**2)
That is close enough to your expression. I couldn't make SymPy simplify the sqrt(b**2 - 2*b*y + y**2) to Abs(b-y) no matter how hard I tried.
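As an aside, the round trip through str and eval can probably be avoided with replace, which swaps every polar_lift(arg) node for its argument (a sketch, reusing incorrect from the code above):
correct = incorrect.replace(sp.polar_lift, lambda arg: arg).factor()
print(correct)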
Note that either SymPy is still wrong or you are wrong, since the exponents in the numerator have opposite signs. So I checked on Desmos for a numerical answer (the top one is yours).
The idea is to compute the line integral of the vector field F = (y*exp(x), x**2 + exp(x), z**2*exp(z)) along the curve C(t) = (cos(t) + 1, sin(t) + 1, 1 - cos(t) - sin(t)), with t from 0 to 2*pi.
This is the code I have tried:
import numpy as np
from sympy import *
from sympy import Curve, line_integrate
from sympy.abc import x, y, z, t
C = Curve([cos(t) + 1, sin(t) + 1, 1 - cos(t) - sin(t)], (t, 0, 2*np.pi))
line_integrate(y * exp(x) + x**2 + exp(x) + z**2 * exp(z), C, [x, y, z])
But this raises ValueError: Function argument should be (x(t), y(t)) but got [cos(t) + 1, sin(t) + 1, -sin(t) - cos(t) + 1].
How can I compute this line integral then?
I think this line integral may contain integrals that don't have an exact solution. It is also fine if you provide a numerical approximation method.
Thanks
In this case you can compute the integral using line_integrate because we can reduce the 3D integral to a 2D one. I'm sorry to say I don't know Python well enough to write the code, but here's the drill:
If we write
C(t) = (x(t), y(t), z(t))
then the thing to notice is that
z(t) = 3 - x(t) - y(t)
and so
dz = -dx - dy
So, we can write
F.dr = Fx*dx + Fy*dy + Fz*dz
= (Fx-Fz)*dx + (Fy-Fz)*dy
So we have reduced the problem to a 2d problem: we integrate
G = (Fx-Fz)*i + (Fy-Fz)*j
round
t -> x(t), y(t)
Note that in G we need to get rid of z by substituting
z = 3 - x - y
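A sketch of that drill (my own illustration, parametrising directly with sympy.integrate and evaluating numerically, since a closed form may not exist):
import sympy as sp

t = sp.symbols('t')
x = sp.cos(t) + 1
y = sp.sin(t) + 1
z = 3 - x - y                          # the constraint z(t) = 3 - x(t) - y(t)

Fx = y * sp.exp(x)                     # vector field components from the question
Fy = x**2 + sp.exp(x)
Fz = z**2 * sp.exp(z)

# F.dr = (Fx - Fz)*dx + (Fy - Fz)*dy along the parametrisation
integrand = (Fx - Fz)*sp.diff(x, t) + (Fy - Fz)*sp.diff(y, t)
print(sp.Integral(integrand, (t, 0, 2*sp.pi)).evalf())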
The ValueError you receive does not come from your call to the line_integrate function; it is raised because, according to the source code for the Curve class, only functions in 2D Euclidean space are supported. This integral can still be computed without SymPy, following a research blog that I found by simply searching Google for a workable method.
The code you need looks like this:
import autograd.numpy as np
from autograd import jacobian
from scipy.integrate import quad
def F(X):
    # vector field F(x, y, z) from the question
    x, y, z = X
    return [y * np.exp(x), x**2 + np.exp(x), z**2 * np.exp(z)]

def C(t):
    # parametrisation of the curve
    return np.array([np.cos(t) + 1, np.sin(t) + 1, 1 - np.cos(t) - np.sin(t)])

dCdt = jacobian(C, 0)

def integrand(t):
    return np.dot(F(C(t)), dCdt(t))   # F(C(t)) . C'(t)

I, e = quad(integrand, 0, 2 * np.pi)
The variable I then stores the numerical solution to your question.
You can define a function:
import sympy as sp
from sympy import *
from sympy.abc import x, y, z, t

def linea3(f, C):
    # f = [P, Q, R] is the vector field; C = [x(t), y(t), z(t), t_start, t_end]
    P = f[0].subs([(x, C[0]), (y, C[1]), (z, C[2])])
    Q = f[1].subs([(x, C[0]), (y, C[1]), (z, C[2])])
    R = f[2].subs([(x, C[0]), (y, C[1]), (z, C[2])])
    dx = diff(C[0], t)
    dy = diff(C[1], t)
    dz = diff(C[2], t)
    m = integrate(P*dx + Q*dy + R*dz, (t, C[3], C[4]))
    return m
Then use the example:
f = [x**2*z**2, y**2*z**2, x*y*z]
C = [2*cos(t), 2*sin(t), 4, 0, 2*sp.pi]
print(linea3(f, C))
How do I simplify a*sin(wt) + b*cos(wt) into c*sin(wt+theta) using SymPy? For example:
f = sin(t) + 2*cos(t) = 2.236*sin(t + 1.107)
I tried the following:
from sympy import *
t = symbols('t')
f=sin(t)+2*cos(t)
trigsimp(f) #Returns sin(t)+2*cos(t)
simplify(f) #Returns sin(t)+2*cos(t)
f.rewrite(sin) #Returns sin(t)+2*sin(t+Pi/2)
P.S.: I don't have direct access to a, b and w, only to f.
Any suggestion?
The general answer can be achieved by noting that you want to have
a * sin(t) + b * cos(t) = A * (cos(c)*sin(t) + sin(c)*cos(t))
This leads to the simultaneous equations a = A * cos(c) and b = A * sin(c).
Dividing the second equation by the first, we can solve for c. Substituting its solution into the first equation, you can solve for A.
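A small SymPy sketch of those two steps (my own illustration; a and b are taken positive to keep the square roots simple):
import sympy as sp
a, b = sp.symbols('a b', positive=True)
c = sp.atan(b / a)                 # from dividing b = A*sin(c) by a = A*cos(c)
A = sp.simplify(a / sp.cos(c))     # back-substituting into a = A*cos(c)
print(A)                           # should reduce to sqrt(a**2 + b**2)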
I followed the same pattern but just to get it in terms of cos. If you want to get it in terms of sin, you can use Rodrigo's formula.
The following code should be able to take any linear combination of the form x * sin(t - w) or y * cos(t - z); there can be multiple sines and cosines.
from sympy import *
t = symbols('t', real=True)
expr = sin(t)+2*cos(t) # unknown
d = collect(expr.expand(trig=True), [sin(t), cos(t)], evaluate=False)
a = d[sin(t)]
b = d[cos(t)]
cos_phase = atan(a/b)
amplitude = a / sin(cos_phase)
print(amplitude.evalf() * cos(t - cos_phase.evalf()))
Which gives
2.23606797749979*cos(t - 0.463647609000806)
This seems to be a satisfactory match after plotting both graphs.
You could even have something like
expr = 2*sin(t - 3) + cos(t) - 3*cos(t - 2)
and it should work fine.
a * sin(wt) + b * cos(wt) = sqrt(a**2 + b**2) * sin(wt + acos(a / sqrt(a**2 + b**2)))
While the amplitude is the radical sqrt(a**2 + b**2), the phase is given by the arccosine of the ratio a / sqrt(a**2 + b**2), which may not be expressible in terms of arithmetic operations and radicals. Hence, you may be asking SymPy to do the impossible. Better use floating-point values, but you do not need SymPy for that.
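For the example from the question (a = 1, b = 2), a plain floating-point computation along those lines might look like this (a sketch; the acos form assumes b >= 0):
from math import sqrt, acos

a, b = 1.0, 2.0                    # coefficients of sin(t) and cos(t)
amplitude = sqrt(a**2 + b**2)      # 2.236...
phase = acos(a / amplitude)        # 1.107..., i.e. f = 2.236*sin(t + 1.107)
print(amplitude, phase)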
Suppose I have the code below. I want to get the right-hand side of the equation (C1 + x...). How do I do that?
My problem is that I have some boundary conditions for the derivatives of f(x) at specific points, so I want to evaluate those and find the constants. I also have different values for w(x), so the final code will begin by defining a variable called wx instead of using the function w(x).
from __future__ import division
from sympy import *
x, y = symbols('x y')
w, f = symbols('w f', cls=Function)
init_printing(use_unicode=True)
diffeq = f(x).diff(x,x,x,x)-w(x)
expr = dsolve(diffeq, f(x))
print(diffeq)
print(expr)
results:
-w(x) + Derivative(f(x), x, x, x, x)
f(x) == C1 + x**3*(C4 + Integral(w(x)/6, x)) + x**2*(C3 - Integral(x*w(x)/2, x)) + x*(C2 + Integral(x**2*w(x)/2, x)) - Integral(x**3*w(x)/6, x)
expr.lhs and expr.rhs will give you the left- and right-hand sides of the equation.
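For example (a short sketch continuing the code above), pulling out the right-hand side and differentiating it is then enough to start imposing the boundary conditions:
rhs = expr.rhs               # C1 + x**3*(C4 + Integral(w(x)/6, x)) + ...
fpp = rhs.diff(x, 2)         # f''(x); C1 and C2 drop out, C3, C4 and the w(x) integrals remain
print(fpp)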