sympy simplifying fractional powers of imaginary number - python

Why doesn't -(-1)**(1/3) + (-1)**(2/3) reduce to -1?
Wolfram Alpha knows it's -1, but SymPy Gamma only gives a float approximation.
re(_) + I*im(_) produces a NegativeOne object, but none of the other simplification functions I tried did anything to it.

I'm assuming you really mean -(-1)**Rational(1, 3) + (-1)**Rational(2, 3), as literally -(-1)**(1/3) + (-1)**(2/3) is all Python (no SymPy), and evaluates numerically.
Most SymPy objects do not do any kind of nontrivial simplification automatically. The reason is that sometimes you might want to represent -(-1)**(1/3) + (-1)**(2/3) without it simplifying. Also, simplification in general is an expensive operation, and doing it at expression creation time would be very inefficient, since you often create intermediate expressions that don't need to be simplified at the intermediate stage.
re(expr) + I*im(expr) is fine. A more automated way to do that is to use expand_complex():
In [19]: expand_complex(-(-1)**Rational(1, 3) + (-1)**Rational(2, 3))
Out[19]: -1
Ideally simplify() would call expand_complex(), and there is an open issue for this (https://github.com/sympy/sympy/issues/7569).
And a note that SymPy Gamma provides a lot of automation on top of SymPy itself. For instance, it converts -(-1)**(1/3) + (-1)**(2/3) to SymPy types and applies various operations to the expression, like numerical evaluation, simplification, differentiation, etc.
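Putting both approaches side by side (a minimal sketch; whether plain simplify() also reduces this depends on your SymPy version):
>>> from sympy import Rational, I, re, im, expand_complex
>>> expr = -(-1)**Rational(1, 3) + (-1)**Rational(2, 3)
>>> re(expr) + I*im(expr)
-1
>>> expand_complex(expr)
-1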

Related

Is there a convenient way to add complex numbers in polar form in sympy (python)?

I had some trouble adding complex numbers in polar form in sympy.
The following code
from sympy import I, exp, pi, re, im
a = exp(2*pi/3*I)
b = exp(-2*pi/3*I)
c = a+b
print(c)
print(c.simplify())
print(c.as_real_imag())
print(re(c)+im(c)*I)
print(int(c))
print(complex(c))
gives
exp(-2*I*pi/3) + exp(2*I*pi/3)
-(-1)**(1/3) + (-1)**(2/3)
(-1, 0)
-1
-1
(-1+6.776263578034403e-21j)
What I want, is to get the simplest answer to a+b, which is -1. I can obtain this, by manually rebuilding c=a+b with re(c)+im(c)*I. Why is this necessary? And is there a better way to do this?
Simply printing c retains the polar forms, obfuscating the answer; c.simplify() leaves the polar form and is not really helpful; and c.as_real_imag() returns a tuple. int(c) does the job, but requires knowing that c is real (otherwise it throws an error) and an integer (otherwise this is not the answer I want). complex(c) kind of works, but I don't want to leave symbolic calculation. Note that float(c) does not work, since complex(c) has a non-zero imaginary part.
Oscar Benjamin (https://stackoverflow.com/users/9450991/oscar-benjamin) has given you the solution. If you are in polar coordinates, your expression may have exponential functions. If you don't want these, you have to rewrite into trigonometric functions, for which special values are known at many arguments. For example, consider a's angle of 2*pi/3:
>>> cos(2*pi/3)
-1/2
>>> sin(2*pi/3)
sqrt(3)/2
When you rewrite a in terms of cos (or sin) it becomes the sum of those two values (with I on the sin value):
>>> a.rewrite(cos)
-1/2 + sqrt(3)*I/2
When you rewrite a more complex expression, you will get the whole expression rewritten in that way and any terms that cancel/combine will do so (or might need some simplification):
>>> c.rewrite(cos)
-1
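Tying this back to the first question above, expand_complex() collapses c in one call as well (a sketch; the exact output can vary by SymPy version):
>>> from sympy import expand_complex
>>> expand_complex(c)
-1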

Why is i**2 slower than i*i? [duplicate]

I'm curious as to why it's so much faster to multiply than to take powers in Python (though from what I've read this may well be true in many other languages too). For example, it's much faster to do
x*x
than
x**2
I suppose the ** operator is more general and can also deal with fractional powers. But if that's why it's so much slower, why doesn't it perform a check for an int exponent and then just do the multiplication?
Edit: Here's some example code I tried...
def pow1(r, n):
    for i in range(r):
        p = i**n

def pow2(r, n):
    for i in range(r):
        p = 1
        for j in range(n):
            p *= i
Now, pow2 is just a quick example and is clearly not optimised!
But even so I find that using n = 2 and r = 1,000,000, then pow1 takes ~ 2500ms and pow2 takes ~ 1700ms.
I admit that for large values of n, then pow1 does get much quicker than pow2. But that's not too surprising.
Basically, naive multiplication is O(n) with a very low constant factor. Taking the power is O(log n) with a higher constant factor (there are special cases that need to be tested: fractional exponents, negative exponents, etc.). Edit: just to be clear, that's O(n) where n is the exponent.
Of course the naive approach will be faster for small n; you're only really implementing a small subset of exponential math, so your constant factor is negligible.
Adding a check is an expense, too. Do you always want that check there? A compiled language could make the check for a constant exponent to see if it's a relatively small integer because there's no run-time cost, just a compile-time cost. An interpreted language might not make that check.
It's up to the particular implementation unless that kind of detail is specified by the language.
Python doesn't know what distribution of exponents you're going to feed it. If it's going to be 99% non-integer values, do you want the code to check for an integer every time, making runtime even slower?
Performing this check inside the exponentiation code would slightly slow down every case where the exponent isn't a simple squaring, so it isn't necessarily a win. However, in cases where the exponent is known in advance (e.g. a literal 2 is used), the bytecode generated could be optimised with a simple peephole optimisation. Presumably this simply hasn't been considered worth doing (it's a fairly specific case).
Here's a quick proof of concept that does such an optimisation (usable as a decorator). Note: this is Python 2 code, and you'll need the byteplay module to run it.
import byteplay, timeit

def optimise(func):
    # Peephole: rewrite "LOAD_CONST 2; BINARY_POWER" as "DUP_TOP; BINARY_MULTIPLY"
    c = byteplay.Code.from_code(func.func_code)
    for i, (op, arg) in enumerate(c.code):
        if op == byteplay.BINARY_POWER:
            if c.code[i-1] == (byteplay.LOAD_CONST, 2):
                c.code[i-1] = (byteplay.DUP_TOP, None)
                c.code[i] = (byteplay.BINARY_MULTIPLY, None)
    func.func_code = c.to_code()
    return func

def square(x):
    return x**2

print "Unoptimised :", timeit.Timer('square(10)', 'from __main__ import square').timeit(10000000)
square = optimise(square)
print "Optimised :", timeit.Timer('square(10)', 'from __main__ import square').timeit(10000000)
Which gives the timings:
Unoptimised : 6.42024898529
Optimised : 4.52667593956
[Edit]
Actually, thinking about it a bit more, there's a very good reason why this optimisation isn't done. There's no guarantee that someone won't create a user-defined class that overrides the __mul__ and __pow__ methods and does something different for each. The only way to do it safely is if you can guarantee that the object on the top of the stack is one for which "x**2" and "x*x" give the same result, but working that out is much harder. E.g. in my example it's impossible, as any object could be passed to the square function.
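For instance (my illustration, not part of the original answer), here is a class for which x**2 and x*x give different results, so the rewrite would silently change behaviour:
class Weird(object):
    def __mul__(self, other):
        return "multiplied"
    def __pow__(self, exponent):
        return "powered"

w = Weird()
print w * w    # multiplied
print w ** 2   # powered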
An implementation of b^p with binary exponentiation
def power(b, p):
    """
    Calculates b**p by binary exponentiation.
    Complexity O(log p)
    b -> double
    p -> non-negative integer
    res -> double
    """
    res = 1
    while p:
        if p & 0x1:    # multiply res by b whenever the lowest bit of p is set
            res *= b
        b *= b         # square the base
        p >>= 1        # move on to the next bit of the exponent
    return res
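A quick sanity check of the function:
>>> power(2.0, 10)
1024.0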
I'd suspect that nobody was expecting this to be all that important. Typically, if you want to do serious calculations, you do them in Fortran or C or C++ or something like that (and perhaps call them from Python).
Treating everything as exp(n * log(x)) works well in cases where n isn't integral or is pretty large, but is relatively inefficient for small integers. Checking to see if n is a small enough integer does take time, and adds complication.
Whether the check is worth it depends on the expected exponents, how important it is to get best performance here, and the cost of the extra complexity. Apparently, Guido and the rest of the Python gang decided the check wasn't worth doing.
If you like, you could write your own repeated-multiplication function.
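A minimal sketch of such a function (assuming a non-negative integer exponent; no error checking):
def int_pow(x, n):
    # naive repeated multiplication: n multiplies, no generality checks
    result = 1
    for _ in range(n):
        result *= x
    return result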
How about x*x*x*x*x?
Is it still faster than x**5?
As integer exponents get larger, taking powers might become faster than repeated multiplication,
but the point where the actual crossover occurs depends on various conditions. In my opinion, that's why the optimization was not done (or couldn't be done) at the language/library level. But users can still optimize for some special cases :)
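If you want to see where the crossover lands on your own machine, a quick timing sketch (numbers will vary with Python version and hardware):
import timeit
print(timeit.timeit('x*x', setup='x = 12345'))
print(timeit.timeit('x**2', setup='x = 12345'))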

MATLAB matrix power algorithm

I'm looking to port an algorithm from MATLAB to Python. One step in said algorithm involves taking A^(-1/2) where A is a 9x9 square complex matrix. As I understand it, the square roots of matrices (and by extension their inverses) are not unique.
I've been experimenting with scipy.linalg.fractional_matrix_power and an approximation using A^(-1/2) = exp((-1/2)*log(A)) with SciPy's built-in expm and logm functions. The former is exceptionally poor and only provides 3 decimal places of precision, whereas the latter is decently correct for elements in the top left corner but gets progressively worse as you move down and to the right. This may or may not be a perfectly valid mathematical solution to the expression; however, it doesn't suffice for this application.
As a result, I'm looking to directly implement MATLAB's matrix power algorithm in Python so that I can 100% confirm the same result each time. Does anyone have any insight or documentation on how this would work? The more parallelizable this algorithm is, the better, as eventually the goal would be to rewrite it in OpenCL for GPU acceleration.
EDIT: An MCVE as requested:
[[(0.591557294607941+4.33680868994202e-19j), (-0.219707725574605-0.35810724986609j), (-0.121305654177909+0.244558388829046j), (0.155552026648172-0.0180264818714123j), (-0.0537690384136066-0.0630740244116577j), (-0.0107526931263697+0.0397896274845627j), (0.0182892503609312-0.00653264433724856j), (-0.00710188853532244-0.0050445035279044j), (-2.20414002823034e-05+0.00373184532662288j)], [(-0.219707725574605+0.35810724986609j), (0.312038814492119+2.16840434497101e-19j), (-0.109433401402399-0.174379997015402j), (-0.0503362231078033+0.108510948023091j), (0.0631826956936223-0.00992931123813742j), (-0.0219902325360141-0.0233215237172002j), (-0.00314837555001163+0.0148621558916679j), (0.00630295247506065-0.00266790359447072j), (-0.00249343102520442-0.00156160619280611j)], [(-0.121305654177909-0.244558388829046j), (-0.109433401402399+0.174379997015402j), (0.136649392858215-1.76182853028894e-19j), (-0.0434623984527311-0.0669251299161109j), (-0.0168737559719828+0.0393768358149159j), (0.0211288536117387-0.00417146769324491j), (-0.00734306979471257-0.00712443264825166j), (-0.000742681625102133+0.00455752452374196j), (0.00179068247786595-0.000862706240042082j)], [(0.155552026648172+0.0180264818714123j), (-0.0503362231078033-0.108510948023091j), (-0.0434623984527311+0.0669251299161109j), (0.0467980890488569+5.14996031930615e-19j), (-0.0140208255975664-0.0209483313237692j), (-0.00472995448413803+0.0117916398375124j), (0.00589653974090387-0.00134198920550751j), (-0.00202109265416585-0.00184021636458858j), (-0.000150793859056431+0.00116822322464066j)], [(-0.0537690384136066+0.0630740244116577j), (0.0631826956936223+0.00992931123813742j), (-0.0168737559719828-0.0393768358149159j), (-0.0140208255975664+0.0209483313237692j), (0.0136137125669776-2.03287907341032e-20j), (-0.00387854073283377-0.0056769786724813j), (-0.0011741038702424+0.00306007798625676j), (0.00144000687517355-0.000355251914809693j), (-0.000481433965262789-0.00042129815655098j)], [(-0.0107526931263697-0.0397896274845627j), (-0.0219902325360141+0.0233215237172002j), (0.0211288536117387+0.00417146769324491j), (-0.00472995448413803-0.0117916398375124j), (-0.00387854073283377+0.0056769786724813j), (0.00347771689075251+8.21621958836671e-20j), (-0.000944046302699304-0.00136521328407881j), (-0.00026318475762475+0.000704212317211994j), (0.00031422288569727-8.10033316327328e-05j)], [(0.0182892503609312+0.00653264433724856j), (-0.00314837555001163-0.0148621558916679j), (-0.00734306979471257+0.00712443264825166j), (0.00589653974090387+0.00134198920550751j), (-0.0011741038702424-0.00306007798625676j), (-0.000944046302699304+0.00136521328407881j), (0.000792908166233942-7.41153828847513e-21j), (-0.00020531962049495-0.000294952695922854j), (-5.36226164765808e-05+0.000145645628243286j)], [(-0.00710188853532244+0.00504450352790439j), (0.00630295247506065+0.00266790359447072j), (-0.000742681625102133-0.00455752452374196j), (-0.00202109265416585+0.00184021636458858j), (0.00144000687517355+0.000355251914809693j), (-0.00026318475762475-0.000704212317211994j), (-0.00020531962049495+0.000294952695922854j), (0.000162971629601464-5.39321759384574e-22j), (-4.03304806590714e-05-5.77159110863666e-05j)], [(-2.20414002823034e-05-0.00373184532662288j), (-0.00249343102520442+0.00156160619280611j), (0.00179068247786595+0.000862706240042082j), (-0.000150793859056431-0.00116822322464066j), (-0.000481433965262789+0.00042129815655098j), (0.00031422288569727+8.10033316327328e-05j), (-5.36226164765808e-05-0.000145645628243286j), (-4.03304806590714e-05+5.77159110863666e-05j), 
(3.04302590501313e-05-4.10281583826302e-22j)]]
I can think of two explanations; in both cases I accuse user error. In chronological order:
Theory #1 (the subtle one)
My suspicion is that you're copying the printed values of the input matrix from one code as input into the other. I.e. you're throwing away double precision when you switch codes, which gets amplified during the inverse-square-root calculation.
As proof, I compared MATLAB's inverse square root with the very function you're using in python. I will show a 3x3 example due to size considerations, but—spoiler warning—I did the same with a 9x9 random matrix and got two results with condition number 11.245754109790719 (MATLAB) and 11.245754109790818 (numpy). That should tell you something about the similarity of the results without having to save and load the actual matrices between the two codes. I suggest you do this though: keywords are scipy.io.loadmat and savemat.
What I did was generate the random data in python (because that's what I prefer):
>>> import numpy as np
>>> print((np.random.rand(3,3) + 1j*np.random.rand(3,3)).tolist())
[[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]
By copying the same truncated output into both codes, I guarantee the correspondence of the inputs.
Example in MATLAB:
>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)]; [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)]; [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]];
>> A = M^(-0.5);
>> format long
>> disp(A)
0.922112307438377 + 0.919346397931976i 0.108620882045523 - 0.649850434897895i -0.778737740194425 - 0.320654127149988i
-0.423384022626231 - 0.842737730824859i 0.592015668030645 + 0.661682656423866i 0.529361991464903 - 0.388343838121371i
-0.550789874427422 + 0.021129515921025i 0.472026152514446 - 0.502143106675176i 0.942976466768961 + 0.141839849623673i
>> cond(A)
ans =
3.429368520364765
Example in python:
>>> import numpy as np
>>> from scipy.linalg import fractional_matrix_power
>>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]
>>> A = fractional_matrix_power(M, -0.5)
>>> print(A)
[[ 0.92211231+0.9193464j 0.10862088-0.64985043j -0.77873774-0.32065413j]
[-0.42338402-0.84273773j 0.59201567+0.66168266j 0.52936199-0.38834384j]
[-0.55078987+0.02112952j 0.47202615-0.50214311j 0.94297647+0.14183985j]]
>>> np.linalg.cond(A)
3.4293685203647408
My suspicion is that if you scipy.io.loadmat the matrix into python, do the calculation, scipy.io.savemat the result and load it back in with MATLAB, you'll see less than 1e-12 absolute error (hopefully even less) between the results.
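A minimal round-trip sketch of that suggestion (my illustration; the file names are arbitrary):
import numpy as np
from scipy.io import savemat, loadmat
savemat('A_python.mat', {'A': A})      # save the python result at full double precision
data = loadmat('A_from_matlab.mat')    # exported in MATLAB with: save('A_from_matlab.mat', 'A')
print(np.max(np.abs(A - data['A'])))   # expect on the order of 1e-12 or smaller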
Theory #2 (the facepalm one)
My suspicion is that you're using python 2, where -1/2 is computed with integer division, so your intended -1/2 power is actually a plain -1 (a simple inverse):
>>> # python 3 below
>>> # python 3's // is python 2's /, i.e. integer division
>>> 1/2
0.5
>>> 1//2
0
>>> -1/2
-0.5
>>> -1//2
-1
So if you're using python 2, then calling
fractional_matrix_power(M,-1/2)
actually computes the inverse of M. The obvious solution is to switch to python 3. The less obvious solution is to keep using python 2 (which you shouldn't, as the above exemplifies), but to put
from __future__ import division
at the top of every one of your source files. This will override the behaviour of the plain / division operator so that it matches the python 3 version, and you will have one less headache.
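Alternatively (my note, consistent with the -0.5 used in the examples above), pass the exponent as a float so both Python versions agree:
fractional_matrix_power(M, -0.5)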

Getting a better answer from sympy inverse laplace transform

When computing the following lines, I get a really complicated result.
from sympy import *
s = symbols("s")
t = symbols("t")
h = 1/(s**3 + s**2/5 + s)
inverse_laplace_transform(h,s,t)
The result is the following:
(-(I*exp(-t/10)*sin(3*sqrt(11)*t/10) - exp(-t/10)*cos(3*sqrt(11)*t/10))*gamma(-3*sqrt(11)*I/5)*gamma(-1/10 - 3*sqrt(11)*I/10)/(gamma(9/10 - 3*sqrt(11)*I/10)*gamma(1 - 3*sqrt(11)*I/5)) + (I*exp(-t/10)*sin(3*sqrt(11)*t/10) + exp(-t/10)*cos(3*sqrt(11)*t/10))*gamma(3*sqrt(11)*I/5)*gamma(-1/10 + 3*sqrt(11)*I/10)/(gamma(9/10 + 3*sqrt(11)*I/10)*gamma(1 + 3*sqrt(11)*I/5)) + gamma(1/10 - 3*sqrt(11)*I/10)*gamma(1/10 + 3*sqrt(11)*I/10)/(gamma(11/10 - 3*sqrt(11)*I/10)*gamma(11/10 + 3*sqrt(11)*I/10)))*Heaviside(t)
However, the answer should be simpler; Wolfram Alpha confirms it.
Is there any way to simplify this result?
I played with this one a bit, and the simplest solution I could find uses something like:
from sympy import *
s = symbols("s")
t = symbols("t", positive=True)
h = 1/(s**3 + s**2/5 + s)
inverse_laplace_transform(h,s,t).evalf().simplify()
Notice that I define t as a positive variable; otherwise the sympy function returns a large term followed by the Heaviside function. The result still contains many gamma functions that I could not reduce to the expression returned by Wolfram. Using evalf(), some of those are converted to their numeric value, and after simplification you get an expression similar to the one in Wolfram, but with floating-point numbers.
Unfortunately this part of SymPy is not quite mature. I also tried with Maxima and the result is quite close to the one in Wolfram, so it seems that Wolfram is not doing anything really special there.
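Another route worth trying (my sketch, not from the answers above): do a partial-fraction decomposition with apart() first and let the transform act term by term; depending on the SymPy version this can come out directly in terms of exp, sin and cos:
from sympy import symbols, apart, inverse_laplace_transform
s = symbols('s')
t = symbols('t', positive=True)
h = 1/(s**3 + s**2/5 + s)
print(inverse_laplace_transform(apart(h, s), s, t))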

Sympy expression as ratio of polynomials

What is the best way to get sympy to rewrite an expression as a ratio of polynomials?
I'm working out the transfer function for a circuit, and would like to determine its poles and zeros which will require factoring the numerator and denominator of the transfer function. As I calculate, I'd like to keep the intermediate results expressed as a ratio of polynomials rather than the 1/1/1/1/1/x form that is naturally the result of parallel impedances.
I could write a function that keeps taking as_numer_denom() at each step and returns the ratio, but that seems cumbersome.
Is there a natural way to do this?
Perhaps you can use normal() at each step?
>>> (1/1/1/1/x + 2/(1+1/x)).normal()
(2*x**2 + x + 1)/(x*(x + 1))
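cancel() produces a similar rational form with expanded numerator and denominator (a sketch; the exact output format may differ between versions):
>>> from sympy import symbols, cancel
>>> x = symbols('x')
>>> cancel(1/x + 2/(1 + 1/x))
(2*x**2 + x + 1)/(x**2 + x)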
