What is the best way to get sympy to rewrite an expression as a ratio of polynomials?
I'm working out the transfer function for a circuit and would like to determine its poles and zeros, which will require factoring the numerator and denominator of the transfer function. As I calculate, I'd like to keep the intermediate results expressed as a ratio of polynomials rather than the nested 1/(1/… + 1/…) form that naturally results from combining parallel impedances.
I could write a function that keeps taking as_numer_denom() at each step and returns the ratio, but that seems cumbersome.
Is there a natural way to do this?
Perhaps you can use normal at each step?
>>> (1/1/1/1/x + 2/(1+1/x)).normal()
(2*x**2 + x + 1)/(x*(x + 1))
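If you want the result as a single expanded polynomial over another, SymPy's cancel (or together) should also work; a small sketch with the same expression (written as 1/x for brevity):
>>> from sympy import symbols, cancel
>>> x = symbols('x')
>>> cancel(1/x + 2/(1 + 1/x))
(2*x**2 + x + 1)/(x**2 + x)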
I had some trouble adding complex numbers in polar form in sympy.
The following code
from sympy import I, exp, pi, re, im
a = exp(2*pi/3*I)
b = exp(-2*pi/3*I)
c = a+b
print(c)
print(c.simplify())
print(c.as_real_imag())
print(re(c)+im(c)*I)
print(int(c))
print(complex(c))
gives
exp(-2*I*pi/3) + exp(2*I*pi/3)
-(-1)**(1/3) + (-1)**(2/3)
(-1, 0)
-1
-1
(-1+6.776263578034403e-21j)
What I want is to get the simplest answer to a+b, which is -1. I can obtain this by manually rebuilding c = a+b with re(c)+im(c)*I. Why is this necessary? And is there a better way to do this?
Simply printing c retains the polar forms, obfuscating the answer. c.simplify() keeps the polar form and is not really helpful, and c.as_real_imag() returns a tuple. int(c) does the job, but requires knowing that c is real (otherwise it throws an error) and an integer (otherwise it is not the answer I want). complex(c) kind of works, but I don't want to leave symbolic calculation. Note that float(c) does not work, since complex(c) has a non-zero imaginary part.
Oscar Benjamin (https://stackoverflow.com/users/9450991/oscar-benjamin) has given you the solution. If you are in polar coordinates, your expression may contain exponential functions. If you don't want these, you have to rewrite into trigonometric functions, for which special values are known at many arguments. For example, consider a's angle of 2*pi/3:
>>> cos(2*pi/3)
-1/2
>>> sin(2*pi/3)
sqrt(3)/2
When you rewrite a in terms of cos (or sin) it becomes the sum of those two values (with I on the sin value):
>>> a.rewrite(cos)
-1/2 + sqrt(3)*I/2
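For completeness, the same rewrite on b should give the conjugate, so the imaginary parts cancel when the two are added:
>>> b.rewrite(cos)
-1/2 - sqrt(3)*I/2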
When you rewrite a more complex expression, you will get the whole expression rewritten in that way and any terms that cancel/combine will do so (or might need some simplification):
>>> c.rewrite(cos)
-1
I'm trying to solve a 4th order polynomial with complex coefficients i.e.
-0.678916793992528*w^4 + 9207096.65180878*i*w^3
+ 1.47445911372677e+15*w^2 - 1.54212540689566e+21*i*w
+ 2.70530138119032e+26
The end goal of this code is to solve this polynomial at least 100,000 times, each time with different coefficients, so I'd like the code to be quick and efficient. I've been using sympy.nroots() to get the roots, but according to %timeit it takes about 9.6 ms per loop, which is quite slow compared to numpy.roots() at 60 µs per loop. However, I can't use numpy.roots(), since it doesn't handle complex coefficients well and has consistently solved the roots of this polynomial incorrectly. Using sympy.solve() is even slower, at 122 ms per loop.
One thing I have thought of to speed up this process is that I really only need the imaginary components of the roots, specifically the most negative imaginary component, but I'm not sure whether that can be leveraged into a faster run time for this code.
My questions are: is there another function I can use for root finding that might be faster? Is there a different root-finding method I could write that would be faster? Finally, is there a way to solve only for the complex-valued roots, and would that be faster?
You cannot get a much better result than that of np.roots in double-precision floating-point numbers. Evaluating a polynomial close to a root involves a lot of catastrophic cancellation.
Trying your example with NumPy's routines gives the roots as
import numpy as np

def print_Carr(z):
    for zz in z:
        print(">>> % 22.17e %+.17ej" % (zz.real, zz.imag))

p = np.array([-0.678916793992528, 9207096.65180878j, 1.47445911372677e+15, -1.54212540689566e+21j, 2.70530138119032e+26])
z = np.roots(p); print_Carr(z)
>>> 4.60399640251209885e+07 +6.25409784852022864e+06j
>>> -4.60399640251209214e+07 +6.25409784852025378e+06j
>>> 6.97016694994478896e-13 +1.20627308238215139e+06j
>>> 5.23825344503222243e-11 -1.53018048966713541e+05j
These are rather large values for polynomial evaluation. The evaluated values at these roots are
print_Carr(np.polyval(p,z))
>>> -3.48222204464332800e+15 +2.82412997568102400e+15j
>>> 5.73769835033395200e+15 -1.64254152287846400e+15j
>>> -4.12316860416000000e+11 +1.37984933104284096e+09j
>>> 6.87194767360000000e+10 -1.04451799855962357e+11j
These residuals look rather bad; however, changes in the last bits of the mantissa of the roots introduce a large absolute change in the values. Remember that the exact roots (for the given coefficients) are somewhere in between the floating-point numbers. The influence of these changes on the polynomial value can be estimated by replacing coefficients and roots with their absolute values, as mu*|p|(|z|) is an estimate of the error of floating-point evaluation (with mu = 2^-52, the machine precision of double floats).
print_Carr(np.polyval(abs(p),abs(z)) *2**-52)
>>> 1.63036010254646300e+15 +0.00000000000000000e+00j
>>> 1.63036010254645625e+15 +0.00000000000000000e+00j
>>> 9.53421868314746094e+11 +0.00000000000000000e+00j
>>> 1.20139515277909210e+11 +0.00000000000000000e+00j
The residuals are almost in the range of these bounds.
Changing the last mantissa bits of the root approximations or the polynomial coefficients has an influence that can be estimated via the derivatives at the root locations
print_Carr(abs(np.polyval(np.polyder(p),z))*(2**-52*abs(z)))
>>> 1.38853576300226150e+15 +0.00000000000000000e+00j
>>> 1.38853576300225050e+15 +0.00000000000000000e+00j
>>> 5.30242273857438416e+11 +0.00000000000000000e+00j
>>> 6.77504690635207825e+10 +0.00000000000000000e+00j
again demonstrating that any change in more than the last two mantissa bits will drastically increase the residual.
To remove the possible imprecision of the "eigenvalues of the companion matrix" in the implementation of np.roots, apply "root polishing" by one step of the Newton method and recalculate the residuals,
z = z - np.polyval(p,z)/np.polyval(np.polyder(p),z); print_Carr(z)
>>> 4.60399640251209661e+07 +6.25409784852025565e+06j
>>> -4.60399640251209661e+07 +6.25409784852025472e+06j
>>> 1.00974195868289511e-28 +1.20627308238215116e+06j
>>> 0.00000000000000000e+00 -1.53018048966713570e+05j
print_Carr(np.polyval(p,z))
>>> 6.74825261547520000e+13 -7.41139556597760000e+13j
>>> 1.55993212190720000e+13 -1.15513145425920000e+14j
>>> 2.74877906944000000e+11 +1.99893600285358499e-07j
>>> 0.00000000000000000e+00 +0.00000000000000000e+00j
There actually is a reduction in the residual by one or two decimal places, indicating that this is almost the best achievable with this floating point data type.
Thus the new proposal for your task is to use numpy.roots with one Newton step for root polishing.
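Put together, a minimal sketch of that proposal (the helper name polished_roots is just illustrative, not a NumPy function):

import numpy as np

def polished_roots(p):
    # eigenvalue-based roots from the companion matrix ...
    z = np.roots(p)
    # ... followed by one Newton step to polish them
    return z - np.polyval(p, z) / np.polyval(np.polyder(p), z)

In the 100,000-coefficient loop this should stay close to the 60 µs cost of np.roots, since the Newton step only adds two polynomial evaluations.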
Finally, compare with a multi-precision result from mpmath:
from mpmath import mp
mp.dps = 20; mp.pretty = True;
mp.polyroots(p, maxsteps=20, extraprec=30) # prec=bits, dps=digits, 10bits=3digits
>>> [(0.0 - 153018.04896671356797j),
>>> (0.0 + 1206273.0823821511478j),
>>> (-46039964.025120967306 + 6254097.8485202553318j),
>>> ( 46039964.025120967306 + 6254097.8485202553318j)]
The roots+Newton result is correct in the 15 leading digits, counting positions the same way for the real and imaginary parts.
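A rough way to spot-check that digit count (reusing p, z, and mp from above; this is a sketch, not part of the original runs) is to compare each polished root with the nearest mpmath root:

zm = np.array([complex(r) for r in mp.polyroots(p, maxsteps=20, extraprec=30)])
for zz in z:
    # relative distance to the closest multi-precision root
    print("%.2e" % (np.min(np.abs(zm - zz)) / abs(zz)))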
I want to find numerical solutions to the following exponential equation where a,b,c,d are constants and I want to solve for r, which is not equal to 1.
a^r + b^r = c^r + d^r (Equation 1)
I define a function in order to use scipy.optimize.fsolve:
from scipy.optimize import fsolve
def func(r, a, b, c, d):
    if r == 1:
        return 10**5
    else:
        return (a**(1-r) + b**(1-r)) - (c**(1-r) + d**(1-r))

fsolve(func, 0.1, args=(5, 5, 4, 7))
However, fsolve always returns 1 as the solution, which is not what I want. Can someone help me with this issue? Or, in general, tell me how to solve (Equation 1)? I used an online numerical solver a long time ago, but I cannot find it anymore. That's why I am trying to figure it out using Python.
You need to apply some mathematical reasoning when choosing the initial guess. Consider your problem f(r) = (5^(1-r) + 5^(1-r)) - (4^(1-r) + 7^(1-r)).
When r ≤ 1, f(r) is always negative and decreasing (since 7^(1-r) grows much faster than the other terms). Therefore, all root-finding algorithms will be pushed to the right towards 1 until reaching this local solution.
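You can check this numerically; a quick sketch (using the same f as in your code, with a = b = 5, c = 4, d = 7) shows f staying negative below 1 and only creeping up towards 0 as r approaches 1:
>>> f = lambda r: 5**(1-r) + 5**(1-r) - 4**(1-r) - 7**(1-r)
>>> [round(f(r), 3) for r in (0.1, 0.5, 0.9)]
[-0.731, -0.174, -0.014]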
You need to pick a point far away from 1 on the right to find the nontrivial solution:
>>> scipy.optimize.fsolve(lambda r: 5**(1-r)+5**(1-r)-4**(1-r)-7**(1-r), 2.0)
array([ 2.48866034])
Simply setting f(1) = 10^5 is not going to have any effect, as the root-finding algorithm won't check f(1) until the very last step (see the note below).
If you wish to apply a penalty, the penalty must be applied to a range of value around 1. One way to do so, without affecting the position of other roots, is to divide the whole function by (r − 1):
>>> scipy.optimize.fsolve(lambda r: (5**(1-r)+5**(1-r)-4**(1-r)-7**(1-r)) / (r-1), 0.1)
array([ 2.48866034])
(note): the iterates may climb like f(0.1) → f(0.4) → f(0.7) → f(0.86) → f(0.96) → f(0.997) → … and stop as soon as |f(x)| < 10^-5, so your f(1) is never evaluated.
First of all, your code seems to use a different equation than your question: the exponent is 1-r instead of just r.
Valid answers to the equation are 1 and approximately 2.4886, as can be seen here. With the second argument of fsolve you specify a starting estimate. I think that because 0.1 is close to 1, you get that result. Using 2.1 as the starting estimate, I get the other answer, 2.4886.
from scipy.optimize import fsolve

def func(r, a, b, c, d):
    if r == 1:
        return 10**5
    else:
        return (a**(1-r) + b**(1-r)) - (c**(1-r) + d**(1-r))

print(fsolve(func, 2.1, args=(5, 5, 4, 7)))
Choosing a starting estimate is tricky, as many give the following error: ValueError: Integers to negative integer powers are not allowed.
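A likely cause (this is an assumption, not verified against your exact setup): when the starting estimate is a plain Python int, fsolve passes an integer array into func, so a**(1-r) becomes an integer raised to a negative integer power, which NumPy refuses. Using a float starting estimate avoids it:

print(fsolve(func, 2.0, args=(5, 5, 4, 7)))   # float guess: works
# print(fsolve(func, 2, args=(5, 5, 4, 7)))   # int guess: may raise the ValueError above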
Why doesn't -(-1)**(1/3) + (-1)**(2/3) reduce to -1?
Wolfram Alpha knows it's -1, but SymPy Gamma only does a float approximation.
re(_) + I*im(_) produces a NegativeOne object, but none of the other simplification functions I tried did anything to it.
I'm assuming you really mean -(-1)**Rational(1, 3) + (-1)**Rational(2, 3), as literally -(-1)**(1/3) + (-1)**(2/3) is all Python (no SymPy), and evaluates numerically.
Most SymPy objects do not do any kind of nontrivial simplification automatically. The reason is that sometimes you might want to represent -(-1)**(1/3) + (-1)**(2/3) without it simplifying. Also, simplification in general is an expensive operation, and doing so at operation creation time would be very inefficient, as often you create intermediate expressions that don't need to be simplified at the intermediate stage.
re(expr) + I*im(expr) is fine. A more automated way to do that is to use expand_complex():
In [19]: expand_complex(-(-1)**Rational(1, 3) + (-1)**Rational(2, 3))
Out[19]: -1
Ideally simplify() would call expand_complex(), and there is an open issue for this (https://github.com/sympy/sympy/issues/7569).
And a note that SymPy Gamma provides a lot of automation on top of SymPy directly. For instance, it converts -(-1)**(1/3) + (-1)**(2/3) to SymPy types and applies various operations to the expression, like numerical evaluation, simplification, differentiation, etc.
Is it possible to calculate the n complex roots of a given number using Python? I've briefly checked, and it looks like Python gives me wrong/incomplete answers:
(-27.0j)**(1.0/3.0) produces (2.598076211353316-1.4999999999999998j)
but the proper result should be 3 complex numbers, because every non-zero number has n distinct complex nth roots. Is this possible in Python?
I don't think standard Python will do this unless you write a function for it, but you can do it with NumPy:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.roots.html
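For example, the three cube roots of -27j are exactly the zeros of z**3 + 27j, so a small sketch (assuming NumPy is installed) is:

import numpy as np

# coefficients of z**3 + 0*z**2 + 0*z + 27j, highest power first
print(np.roots([1, 0, 0, 27j]))
# approximately [2.598-1.5j, -2.598-1.5j, 3j] (order may vary)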
There are many multi-valued complex functions - functions that can have more than one value corresponding to any point in their domain. For example: roots, logarithms, inverse trigonometric functions...
The reason these functions can have multiple values is usually that they are the inverse of a function for which multiple values in the domain map to the same value.
When doing calculations with such functions, it would be impractical to always return all possible values. For the inverse trigonometric functions, there are infinitely many possible values.
Usually the different function values can be expressed as a function of an integer parameter k. For example, the values of log z with z = r*(cos t + i*sin t) are log r + i*(t + k*2*pi), with k any integer. For the nth root, they are r**(1/n)*exp(i*(t + k*2*pi)/n), with k = 0..n-1 inclusive.
Because returning all possible values is impractical, mathematical functions in Python and almost all other common programming languages return what's called the 'principal value' of the function (reference). The principal value is usually the function value with k=0. Whatever choice is made, it should be stated clearly in the documentation.
So to get all the complex roots of a complex number, you just evaluate the function for all relevant values of k:
from cmath import exp, phase, pi

def roots(z, n):
    nthRootOfr = abs(z)**(1.0/n)   # magnitude of the principal nth root
    t = phase(z)                   # argument (angle) of z
    # rotate the principal root by k*2*pi/n for k = 0..n-1
    return [nthRootOfr * exp((t + 2*k*pi) * 1j/n) for k in range(n)]

(The exp, phase, and pi used here come from the cmath module.) This gives:
>>> roots(-27j,3)
[(2.59808-1.5j), (1.83691e-16+3j), (-2.59808-1.5j)]
If you want to get all roots in plain Python, you can write a simple function to do this:
import math

def root(num, r):
    # assumes num is a positive real, so num**(1.0/r) is its real rth root
    base = num ** (1.0/r)
    roots = [base]
    # the remaining r-1 roots are the real root rotated by i*2*pi/r in the complex plane
    for i in range(1, r):
        roots.append(complex(base * math.cos(2*math.pi * i / r),
                             base * math.sin(2*math.pi * i / r)))
    return roots
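As a quick check (values rounded; this assumes the positive real input the function expects):

print(root(27, 3))
# approximately [3.0, (-1.5+2.598j), (-1.5-2.598j)]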