Since I am working on a project involving square roots, I need square roots to be simplified as much as possible. However, some square root expressions do not produce the desired result. Please consider this example:
>>> from sympy import * # just an example don't tell me that import * is obsolete
>>> x1 = simplify(factor(sqrt(3 + 2*sqrt(2))))
>>> x1 # notice that factoring doesn't work
sqrt(2*sqrt(2) + 3)
>>> x2 = sqrt(2) + 1
>>> x2
sqrt(2) + 1
>>> x1 == x2
False
>>> N(x1)
2.41421356237309
>>> N(x2)
2.41421356237309
>>> N(x1) == N(x2)
True
As you can see, the numbers are actually equal, but SymPy can't recognize that because it doesn't factor and simplify x1. So how do I get the simplified form of x1 so that the equality holds, without having to convert both sides to floats?
Thanks in advance.
When you are working with nested sqrt expressions, sqrtdenest is a good option to try, and nsimplify is a good fallback that can be more useful in some situations. Since nsimplify can give an answer that is not exactly equal to the input, I like to use this "safe" function to do the simplification:
def safe_nsimplify(x):
    from sympy import nsimplify
    if x.is_number:
        ns = nsimplify(x)
        if ns != x and x.equals(ns):
            return ns
    return x
>>> from sympy import sqrt, sqrtdenest, simplify
>>> eq = (-sqrt(2) + sqrt(10))/(2*sqrt(sqrt(5) + 5))
>>> simplify(eq)
(-sqrt(2) + sqrt(10))/(2*sqrt(sqrt(5) + 5)) <-- no change
>>> sqrtdenest(eq)
-sqrt(2)/(2*sqrt(sqrt(5) + 5)) + sqrt(10)/(2*sqrt(sqrt(5) + 5)) <-- worse
>>> safe_nsimplify(eq)
sqrt(1 - 2*sqrt(5)/5) <-- better
On your expression
>>> safe_nsimplify(sqrt(2 * sqrt(2) + 3))
1 + sqrt(2)
And if you want to seek out such expressions wherever they occur in a larger expression you can use
>>> from sympy import bottom_up, tan
>>> bottom_up(tan(eq), safe_nsimplify)
tan(sqrt(1 - 2*sqrt(5)/5))
It might be advantageous to accept the result of sqrtdenest instead of using nsimplify, as in:
def safe_nsimplify(x):
    from sympy import nsimplify, sqrtdenest, Pow, S
    if x.is_number:
        if isinstance(x, Pow) and x.exp is S.Half:
            ns = sqrtdenest(x)
            if ns != x:
                return ns
        ns = nsimplify(x)
        if ns != x and x.equals(ns):
            return ns
    return x
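With this version, the nested radical from the question is handled by sqrtdenest directly (the result matches the plain nsimplify call above):
>>> safe_nsimplify(sqrt(2*sqrt(2) + 3))
1 + sqrt(2)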
Thanks to Oscar Benjamin, the function I was looking for was sqrtdenest:
>>> from sympy import *
>>> sqrtdenest(sqrt(2 * sqrt(2) + 3))
1 + sqrt(2)
I hope this answer helps other people.
I have a question concerning the symbolic simplification of algebraic expressions composed of complex numbers. I have executed the following Python script:
from sympy import *
expr1 = 3*(2 - 11*I)**Rational(1, 3)*(2 + 11*I)**Rational(2, 3)
expr2 = 3*((2 - 11*I)*(2 + 11*I))**Rational(1, 3)*(2 + 11*I)**Rational(1, 3)
print("expr1 = {0}".format(expr1))
print("expr2 = {0}\n".format(expr2))
print("simplify(expr1) = {0}".format(simplify(expr1)))
print("simplify(expr2) = {0}\n".format(simplify(expr2)))
print("expand(expr1) = {0}".format(expand(expr1)))
print("expand(expr2) = {0}\n".format(expand(expr2)))
print("expr1.equals(expr2) = {0}".format(expr1.equals(expr2)))
The output is:
expr1 = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
expr2 = 3*((2 - 11*I)*(2 + 11*I))**(1/3)*(2 + 11*I)**(1/3)
simplify(expr1) = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
simplify(expr2) = 15*(2 + 11*I)**(1/3)
expand(expr1) = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
expand(expr2) = 15*(2 + 11*I)**(1/3)
expr1.equals(expr2) = True
My question is why the simplification does not work for expr1 but works for expr2, though the expressions are algebraically equal.
What has to be done to get the same result from simplify for expr1 as for expr2?
Thanks in advance for your replies.
Kind regards
Klaus
You can use the minimal polynomial to place algebraic numbers into a canonical representation:
In [30]: x = symbols('x')
In [31]: p1 = minpoly(expr1, x, polys=True)
In [32]: p2 = minpoly(expr2, x, polys=True)
In [33]: p1
Out[33]: Poly(x**2 - 60*x + 1125, x, domain='QQ')
In [34]: p2
Out[34]: Poly(x**2 - 60*x + 1125, x, domain='QQ')
In [35]: [r for r in p1.all_roots() if p1.same_root(r, expr1)]
Out[35]: [30 + 15⋅ⅈ]
In [36]: [r for r in p2.all_roots() if p2.same_root(r, expr2)]
Out[36]: [30 + 15⋅ⅈ]
This method should work for any two expressions built from algebraic operations on algebraic numbers: either they give exactly the same canonical result or the numbers are distinct.
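For instance (continuing the session above, and as noted in the next answer), the minimal polynomial of the difference of the two expressions comes out as just x, which can only happen if the difference is zero:
In [37]: minpoly(expr1 - expr2, x)
Out[37]: x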
It works (though only nominally) for expr2 because when the product under the radical is expanded you get the cube root of 125, which is reported as 5. But SymPy tries to be careful about putting radicals together under a common exponent, an operation that is not generally valid (e.g. root(-1, 3)*root(-1, 3) != root(1, 3) because the principal values are used for the roots). If you want the bases to combine under a common exponent, you can force it to happen with powsimp:
>>> from sympy.abc import x, y
>>> from sympy import powsimp, root, solve, numer, together
>>> powsimp(root(x,3)*root(y,3), force=True)
(x*y)**(1/3)
But that only works if the exponents are the same:
>>> powsimp(root(x,3)*root(y,3)**2, force=True)
x**(1/3)*y**(2/3)
As you saw, equals was able to show that the two expressions were the same. One way this could be done is to solve for root(2 + 11*I, 3) and see if any of the resulting expressions are the same:
>>> solve(expr1 - expr2, root(2 + 11*I,3))
[0, 5/(2 - 11*I)**(1/3)]
We can check the non-zero candidate:
>>> numer(together(_[1]-root(2+11*I,3)))
-(2 - 11*I)**(1/3)*(2 + 11*I)**(1/3) + 5
>>> powsimp(_, force=True)
5 - ((2 - 11*I)*(2 + 11*I))**(1/3)
>>> expand(_)
0
So we have shown (with force) that the expression was the same as that for which we solved. (And, as Oscar showed while I was writing this, minpoly is a nice candidate when it works: e.g. minpoly(expr1-expr2) -> x which means expr1 == expr2.)
I have to find the equilibrium points where the nullclines intersect. My code is below.
>>> from sympy import symbols, Eq, solve
>>> A,M = symbols('A M')
>>> dMdt = Eq(1.05 - (1/(1 + pow(A,5))) - M)
>>> dAdt = Eq(M*1 - 0.5*A - M*A/(2 + A))
>>> solve((dMdt,dAdt), (M,A))
[]
Why is it not giving a solution?
You will see why as I work through the solution.
I'm going to write the equations as e1 and e2 -- using Eq without a second argument no longer works (or works only with a deprecation warning, depending on your SymPy version):
>>> from sympy import solve, nsimplify, factor, real_roots
>>> from sympy.abc import A, M
>>> e1 = (1.05 - (1/(1 + pow(A,5))) - M)
>>> e2 = (M*1 - 0.5*A - M*A/(2 + A))
Solve for M using e1
>>> eM = solve(e1, M)[0]
Substitute into e2
>>> e22 = e2.subs(M, eM); e22
-0.5*A - 0.05*A*(21.0*A**5 + 1.0)/((A + 2)*(A**5 + 1.0)) + 0.05*(21.0*A**5 + 1.0)/(A**5 + 1.0)
Get the numerator and denominator
>>> n,d=e22.as_numer_denom()
Find the real roots for this expression (which depends only on A)
>>> rA = real_roots(n)
Find the corresponding values of M by substituting each into eM:
>>> [(a.n(2), eM.subs(A, a).n(2)) for a in rA]
[(-3.3, 1.1), (-1.0, zoo), (-0.74, -0.23), (0.095, 0.050)]
The root A = -1 is spurious -- if you look at the denominator of e1 you will see that this value causes division by zero, so that root can be ignored. The others can be verified graphically.
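One way to do that graphical check is to overlay the two nullclines with SymPy's plot_implicit and look at where they cross; this is only a sketch, with plot ranges chosen by hand to cover the roots found above:
from sympy import plot_implicit, Eq
from sympy.abc import A, M

e1 = 1.05 - 1/(1 + A**5) - M
e2 = M - 0.5*A - M*A/(2 + A)

# overlay the two nullclines; their intersections are the equilibrium points
p = plot_implicit(Eq(e1, 0), (A, -4, 1), (M, -1, 2), show=False)
p.extend(plot_implicit(Eq(e2, 0), (A, -4, 1), (M, -1, 2), show=False))
p.show()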
Why didn't solve give the solution? It couldn't give a closed-form solution for this high-order polynomial. Even if you factor the numerator described above (after converting the floats to Rationals with nsimplify), you are left with a factor of degree 7:
>>> factor(nsimplify(n))
-(A + 1)*(A**4 - A**3 + A**2 - A + 1)*(5*A**7 + 10*A**6 - 21*A**5 + 5*A**2 + 10*A - 1)/10
I'm using SymPy to calculate derivatives and some other things. I tried to calculate the derivative of "e**x + x + 1", and it returns e**x*log(e) + 1 as the result, but as far as I know the correct result should be e**x + 1. What's going on here?
Full code:
from sympy import *
from sympy.parsing.sympy_parser import parse_expr
x = symbols("x")
_fOfX = "e**x + x + 1"
sympyFunction = parse_expr(_fOfX)
dSeconda = diff(sympyFunction,x,1)
print(dSeconda)
The answer correctly includes log(e) because you never specified what "e" is. It's just a letter like "a" or "b".
The Euler number 2.71828... is represented as E in SymPy. But usually, writing exp(x) is preferable because the notation is unambiguous, and also because SymPy is going to return exp(x) anyway. Examples:
>>> fx = E**x + x + 1
>>> diff(fx, x, 1)
exp(x) + 1
or with exp notation:
>>> fx = exp(x) + x + 1
>>> diff(fx, x, 1)
exp(x) + 1
Avoid creating expressions by parsing strings, unless you really need to and know why you need it.
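If you really do need to parse a string, one option (a sketch) is to tell the parser what "e" should mean via local_dict, so it maps to SymPy's E instead of becoming a plain symbol:
from sympy import E, symbols, diff
from sympy.parsing.sympy_parser import parse_expr

x = symbols("x")
expr = parse_expr("e**x + x + 1", local_dict={"e": E})  # "e" now means Euler's number
print(diff(expr, x, 1))  # exp(x) + 1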
I have to write a function s(x) = x * sin(3/x) in Python that is capable of taking single values or vectors/arrays, but I'm having a little trouble handling the cases where x is zero (or has an element that is zero). This is what I have so far:
def s(x):
    result = zeros(size(x))
    for a in range(0, size(x)):
        if x[a] == 0:
            result[a] = 0
        else:
            result[a] = float(x[a] * sin(3.0/x[a]))
    return result
Which...doesn't work for x = 0. And it's kinda messy. Even worse, I'm unable to use sympy's integrate function on it, or use it in my own simpson/trapezoidal rule code. Any ideas?
When I use integrate() on this function, I get the following error message: "Symbol" object does not support indexing.
This takes about 30 seconds per integrate call:
import sympy as sp
x = sp.Symbol('x')
int2 = sp.integrate(x*sp.sin(3./x), (x, 0.000001, 2)).evalf(8)
print(int2)
int1 = sp.integrate(x*sp.sin(3./x), (x, 0, 2)).evalf(8)
print(int1)
The results are:
1.0996940
-4.5*Si(zoo) + 8.1682775
Clearly you want to start the integration from a small positive number to avoid the problem at x = 0.
You can also assign x*sin(3./x) to a variable, e.g.:
s = x*sin(3./x)
int1 = sp.integrate(s, (x, 0.00001, 2))
My original answer using scipy to compute the integral:
import scipy.integrate
import math

def s(x):
    if abs(x) < 0.00001:
        return 0
    else:
        return x*math.sin(3.0/x)

s_exact = scipy.integrate.quad(s, 0, 2)
print(s_exact)
See the scipy docs for more integration options.
If you want to use SymPy's integrate, you need a symbolic function. A wrong value at a point doesn't really matter for integration (at least mathematically), so you shouldn't worry about it.
It seems there is a bug in SymPy that gives an answer in terms of zoo at 0, because it isn't using limit correctly. You'll need to compute the limits manually. For example, the integral from 0 to 1:
In [14]: res = integrate(x*sin(3/x), x)
In [15]: ans = limit(res, x, 1) - limit(res, x, 0)
In [16]: ans
Out[16]:
-9⋅π/4 + 3⋅cos(3)/2 + sin(3)/2 + 9⋅Si(3)/2
In [17]: ans.evalf()
Out[17]: -0.164075835450162
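The same trick should also handle the 0 to 2 interval from the question (a sketch reusing res and x from the session above; the numeric value should essentially match the 1.0996940 obtained earlier by nudging the lower limit away from zero):
ans2 = limit(res, x, 2) - limit(res, x, 0)
print(ans2.evalf())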
Can someone help me find a way to calculate the cube root of a negative number using Python?
>>> math.pow(-3, float(1)/3)
nan
It does not work. The cube root of a negative number is a negative number. Any solutions?
A simple application of De Moivre's formula is sufficient to show that the cube root of a value, regardless of sign, is a multi-valued function. That means, for any input value, there will be three solutions. Most of the solutions presented so far only return the principal root. A solution that returns all valid roots, and explicitly tests for non-complex special cases, is shown below.
import numpy
import math

def cuberoot(z):
    z = complex(z)
    x = z.real
    y = z.imag
    mag = abs(z)
    arg = math.atan2(y, x)
    return [mag**(1./3) * numpy.exp(1j*(arg + 2*n*math.pi)/3) for n in range(1, 4)]
Edit: As requested, for cases where a dependency on numpy is inappropriate, the following code does the same thing.
def cuberoot(z):
    z = complex(z)
    x = z.real
    y = z.imag
    mag = abs(z)
    arg = math.atan2(y, x)
    resMag = mag**(1./3)
    resArg = [(arg + 2*math.pi*n)/3. for n in range(1, 4)]
    return [resMag*(math.cos(a) + math.sin(a)*1j) for a in resArg]
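A quick usage sketch that works with either version (values rounded for readability; the real root carries a tiny spurious imaginary part from floating-point error):
for r in cuberoot(-3):
    print(round(r.real, 6), round(r.imag, 6))
# roughly: (-1.44225, 0.0), (0.721125, -1.249025), (0.721125, 1.249025)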
You could use:
-math.pow(3, float(1)/3)
Or more generally (wrapped in a function for clarity; the name is just for illustration):
def cube_root(x):
    if x > 0:
        return math.pow(x, float(1)/3)
    elif x < 0:
        return -math.pow(abs(x), float(1)/3)
    else:
        return 0
math.pow(abs(x),float(1)/3) * (1,-1)[x<0]
You can get the complete (all n roots) and more general (any sign, any power) solution using:
import cmath
x, t = -3., 3 # x**(1/t)
a = cmath.exp((1./t)*cmath.log(x))
p = cmath.exp(1j*2*cmath.pi*(1./t))
r = [a*(p**i) for i in range(t)]
Explanation:
a uses the identity x**u = exp(u*log(x)). This solution is then one of the roots, and to get the others, rotate it in the complex plane by (a full rotation)/t.
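A quick sanity check on the list (a sketch using the variables above): cubing each candidate root should recover x up to floating-point error.
print([abs(ri**t - x) < 1e-9 for ri in r])  # [True, True, True]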
Taking the earlier answers and making it into a one-liner:
import math

def cubic_root(x):
    return math.copysign(math.pow(abs(x), 1.0/3.0), x)
The cubic root of a negative number is just the negative of the cubic root of the absolute value of that number.
i.e. x^(1/3) for x < 0 is the same as (-1)*(|x|)^(1/3)
Just make your number positive, take the cube root, and then negate the result.
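A minimal sketch of that idea (the helper name is just for illustration):
def cube_root_real(x):
    # cube-root the absolute value, then restore the original sign
    root = abs(x) ** (1.0 / 3.0)
    return -root if x < 0 else root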
You can also wrap the libm library that offers a cbrt (cube root) function:
from ctypes import *
libm = cdll.LoadLibrary('libm.so.6')
libm.cbrt.restype = c_double
libm.cbrt.argtypes = [c_double]
libm.cbrt(-8.0)
gives the expected
-2.0
numpy has an inbuilt cube root function cbrt that handles negative numbers fine:
>>> import numpy as np
>>> np.cbrt(-8)
-2.0
This was added in version 1.10.0 (released 2015-10-06).
Also works for numpy array / list inputs:
>>> np.cbrt([-8, 27])
array([-2., 3.])
You can use cbrt from scipy.special:
>>> from scipy.special import cbrt
>>> cbrt(-3)
-1.4422495703074083
This also works for arrays.
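For example (a quick sketch using the same import):
>>> cbrt([-27, 8])
array([-3.,  2.])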
This works with numpy arrays as well:
cbrt = lambda n: n/abs(n)*abs(n)**(1./3)
Primitive solution:
import math

def cubic_root(nr):
    if nr < 0:
        return -math.pow(-nr, float(1)/3)
    else:
        return math.pow(nr, float(1)/3)
Probably massively non-pythonic, but it should work.
I just had a very similar problem and found the NumPy solution from this forum post.
In a nutshell, we can use the NumPy sign and absolute functions to help us out. Here is an example that has worked for me:
import numpy as np
x = np.array([-81, 25])
print(x)
#>>> [-81 25]
xRoot5 = np.sign(x) * np.absolute(x)**(1.0/5.0)
print(xRoot5)
#>>> [-2.40822469 1.90365394]
print(xRoot5**5)
#>>> [-81. 25.]
So going back to the original cube root problem:
import numpy as np
y = -3.
np.sign(y) * np.absolute(y)**(1./3.)
#>>> -1.4422495703074083
I hope this helps.
For an arithmetic, calculator-like answer in Python 3:
>>> -3.0**(1/3)
-1.4422495703074083
or -3.0**(1./3) in Python 2.
For the algebraic solution of x**3 + (0*x**2 + 0*x) + 3 = 0 use numpy:
>>> p = [1,0,0,3]
>>> numpy.roots(p)
[-1.44224957+0.j  0.72112479+1.24902477j  0.72112479-1.24902477j]
New in Python 3.11
There is now math.cbrt which handles negative roots seamlessly:
>>> import math
>>> math.cbrt(-3)
-1.4422495703074083