Cube root of a negative number in Python - python

Can someone help me find a way to calculate the cube root of a negative number in Python?
>>> math.pow(-3, float(1)/3)
nan
it does not work. The cube root of a negative number is a negative number. Any solutions?

A simple application of De Moivre's formula is sufficient to show that the cube root of a value, regardless of sign, is a multi-valued function. That means that for any input value there will be three solutions. Most of the solutions presented so far only return the principal root. A solution that returns all valid roots, and explicitly tests for non-complex special cases, is shown below.
import numpy
import math
def cuberoot(z):
    z = complex(z)
    x = z.real
    y = z.imag
    mag = abs(z)
    arg = math.atan2(y, x)
    return [mag**(1./3) * numpy.exp(1j*(arg + 2*n*math.pi)/3) for n in range(1, 4)]
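For example, calling this on -8 should give the three cube roots of -8 (values shown are approximate; the real root carries a negligible imaginary part due to floating point):
for root in cuberoot(-8):
    print(root)
# approximately: (-2+0j), (1-1.7320508j), (1+1.7320508j)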
Edit: As requested, in cases where it is inappropriate to have a dependency on numpy, the following code does the same thing.
def cuberoot(z):
    z = complex(z)
    x = z.real
    y = z.imag
    mag = abs(z)
    arg = math.atan2(y, x)
    resMag = mag**(1./3)
    resArg = [(arg + 2*math.pi*n)/3. for n in range(1, 4)]
    return [resMag*(math.cos(a) + math.sin(a)*1j) for a in resArg]

You could use:
-math.pow(3, float(1)/3)
Or more generally:
if x > 0:
    return math.pow(x, float(1)/3)
elif x < 0:
    return -math.pow(abs(x), float(1)/3)
else:
    return 0
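Wrapped up as a complete function (the name signed_cbrt here is just illustrative), that might look like:
import math

def signed_cbrt(x):
    # Real-valued cube root that keeps the sign of x
    if x > 0:
        return math.pow(x, float(1)/3)
    elif x < 0:
        return -math.pow(abs(x), float(1)/3)
    else:
        return 0

print(signed_cbrt(-27.0))   # approximately -3.0
print(signed_cbrt(8.0))     # approximately 2.0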

math.pow(abs(x),float(1)/3) * (1,-1)[x<0]

You can get the complete (all n roots) and more general (any sign, any power) solution using:
import cmath
x, t = -3., 3 # x**(1/t)
a = cmath.exp((1./t)*cmath.log(x))
p = cmath.exp(1j*2*cmath.pi*(1./t))
r = [a*(p**i) for i in range(t)]
Explanation:
a uses the identity x**u = exp(u*log(x)). This will be one of the roots; to get the others, rotate it in the complex plane by (a full rotation)/t, i.e. by 2*pi/t radians.
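For example, with x = -3 and t = 3 the list r contains the three cube roots of -3, and the real root can be picked out by its (numerically) negligible imaginary part; a small self-contained sketch, where the tolerance is an arbitrary choice:
import cmath

x, t = -3., 3
a = cmath.exp((1./t) * cmath.log(x))
p = cmath.exp(1j * 2 * cmath.pi * (1./t))
r = [a * (p**i) for i in range(t)]
# r is approximately [0.721+1.249j, -1.442+0j, 0.721-1.249j]

real_roots = [v.real for v in r if abs(v.imag) < 1e-9]
print(real_roots)   # approximately [-1.4422495703074083]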

Taking the earlier answers and making it into a one-liner:
import math
def cubic_root(x):
    return math.copysign(math.pow(abs(x), 1.0/3.0), x)
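For example (floating-point output, so the last digits may vary slightly):
>>> cubic_root(-3)
-1.4422495703074083
>>> cubic_root(0)
0.0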

The cube root of a negative number is just the negative of the cube root of the absolute value of that number.
i.e. x^(1/3) for x < 0 is the same as (-1)*(|x|)^(1/3)
Just make your number positive, then take the cube root.

You can also wrap the libm library that offers a cbrt (cube root) function:
from ctypes import *
libm = cdll.LoadLibrary('libm.so.6')
libm.cbrt.restype = c_double
libm.cbrt.argtypes = [c_double]
libm.cbrt(-8.0)
gives the expected
-2.0
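The soname 'libm.so.6' is Linux-specific. A slightly more portable sketch uses ctypes.util.find_library to locate the math library first (this assumes a usable libm is found; on some platforms, e.g. Windows, find_library('m') may return None):
from ctypes import CDLL, c_double
from ctypes.util import find_library

libm = CDLL(find_library('m'))     # assumes find_library returns a usable library name
libm.cbrt.restype = c_double
libm.cbrt.argtypes = [c_double]
print(libm.cbrt(-8.0))             # expected: -2.0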

numpy has an inbuilt cube root function cbrt that handles negative numbers fine:
>>> import numpy as np
>>> np.cbrt(-8)
-2.0
This was added in version 1.10.0 (released 2015-10-06).
Also works for numpy array / list inputs:
>>> np.cbrt([-8, 27])
array([-2., 3.])

You can use cbrt from scipy.special:
>>> from scipy.special import cbrt
>>> cbrt(-3)
-1.4422495703074083
This also works for arrays.

This works with a numpy array as well:
cbrt = lambda n: n/abs(n)*abs(n)**(1./3)
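One caveat: the n/abs(n) term raises a ZeroDivisionError for a plain Python 0 and produces nan (with a warning) for array elements equal to 0. A variant using np.sign avoids that (a sketch, numpy assumed):
import numpy as np

# Same sign trick, but np.sign(0) is 0, so the cube root of 0 is simply 0.0
cbrt = lambda n: np.sign(n) * np.abs(n) ** (1. / 3)

print(cbrt(np.array([-8.0, 0.0, 27.0])))   # approximately [-2.  0.  3.]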

Primitive solution:
import math

def cubic_root(nr):
    if nr < 0:
        return -math.pow(-nr, float(1)/3)
    else:
        return math.pow(nr, float(1)/3)
Probably massively non-pythonic, but it should work.

I just had a very similar problem and found the NumPy solution from this forum post.
In a nutshell, we can use the NumPy sign and absolute functions to help us out. Here is an example that has worked for me:
import numpy as np
x = np.array([-81,25])
print x
#>>> [-81 25]
xRoot5 = np.sign(x) * np.absolute(x)**(1.0/5.0)
print xRoot5
#>>> [-2.40822469 1.90365394]
print xRoot5**5
#>>> [-81. 25.]
So going back to the original cube root problem:
import numpy as np
y = -3.
np.sign(y) * np.absolute(y)**(1./3.)
#>>> -1.4422495703074083
I hope this helps.

For an arithmetic, calculator-like answer in Python 3:
>>> -3.0**(1/3)
-1.4422495703074083
or -3.0**(1./3) in Python 2.
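Note that this relies on operator precedence: exponentiation binds tighter than unary minus, so -3.0**(1/3) is parsed as -(3.0**(1/3)). Parenthesising the base gives something different in Python 3 (the complex value shown is approximate):
print(-3.0**(1/3))     # parsed as -(3.0**(1/3))  ->  -1.4422495703074083
print((-3.0)**(1/3))   # principal complex root, approximately (0.721+1.249j)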
For the algebraic solution of x**3 + (0*x**2 + 0*x) + 3 = 0 use numpy:
>>> p = [1,0,0,3]
>>> numpy.roots(p)
array([-1.44224957+0.j,  0.72112479+1.24902477j,  0.72112479-1.24902477j])

New in Python 3.11
There is now math.cbrt which handles negative roots seamlessly:
>>> import math
>>> math.cbrt(-3)
-1.4422495703074083

Related

Sympy `factor` and `simplify` not working properly?

Since I am working on a project involving square roots, I need square roots to be simplified as much as possible. However, some square root expressions do not produce the desired result. Please consider this example:
>>> from sympy import * # just an example don't tell me that import * is obsolete
>>> x1 = simplify(factor(sqrt(3 + 2*sqrt(2))))
>>> x1 # notice that factoring doesn't work
sqrt(2*sqrt(2) + 3)
>>> x2 = sqrt(2) + 1
>>> x2
sqrt(2) + 1
>>> x1 == x2
False
>>> N(x1)
2.41421356237309
>>> N(x2)
2.41421356237309
>>> N(x1) == N(x2)
True
As you can see, the numbers are actually equal, but SymPy can't recognize that because it can't factor and simplify x1. So how do I get the simplified form of x1 so that the equality holds without having to convert them to floats?
Thanks in advance.
When you are working with nested sqrt expressions, sqrtdenest is a good option to try. But a great fallback to use is nsimplify which can be more useful in some situations. Since this can give an answer that is not exactly the same as the input, I like to use this "safe" function to do the simplification:
def safe_nsimplify(x):
    from sympy import nsimplify
    if x.is_number:
        ns = nsimplify(x)
        if ns != x and x.equals(ns):
            return ns
    return x
>>> from sympy import sqrt, sqrtdenest
>>> eq = (-sqrt(2) + sqrt(10))/(2*sqrt(sqrt(5) + 5))
>>> simplify(eq)
(-sqrt(2) + sqrt(10))/(2*sqrt(sqrt(5) + 5)) <-- no change
>>> sqrtdenest(eq)
-sqrt(2)/(2*sqrt(sqrt(5) + 5)) + sqrt(10)/(2*sqrt(sqrt(5) + 5)) <-- worse
>>> safe_nsimplify(eq)
sqrt(1 - 2*sqrt(5)/5) <-- better
On your expression
>>> safe_nsimplify(sqrt(2 * sqrt(2) + 3))
1 + sqrt(2)
And if you want to seek out such expressions wherever they occur in a larger expression you can use
>>> from sympy import bottom_up, tan
>>> bottom_up(tan(eq), safe_nsimplify)
tan(sqrt(1 - 2*sqrt(5)/5))
It might be advantageous to accept the result of sqrtdenest instead of using nsimplify as in
def safe_nsimplify(x):
    from sympy import nsimplify, sqrtdenest, Pow, S
    if x.is_number:
        if isinstance(x, Pow) and x.exp is S.Half:
            ns = sqrtdenest(x)
            if ns != x:
                return ns
        ns = nsimplify(x)
        if ns != x and x.equals(ns):
            return ns
    return x
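A quick check that this variant still handles the expression from the question (it goes through sqrtdenest first, so the expected result is the same):
>>> safe_nsimplify(sqrt(2*sqrt(2) + 3))
1 + sqrt(2)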
Thanks to Oscar Benjamin, the function I was looking for was sqrtdenest:
>>> from sympy import *
>>> sqrtdenest(sqrt(2 * sqrt(2) + 3))
1 + sqrt(2)
I hope this answer helps other people.

evalf and subs in sympy on single variable expression returns expression instead of expected float value

I'm new to sympy and I'm trying to use it to get the values of higher order Greeks of options (basically higher order derivatives). My goal is to do a Taylor series expansion. The function in question is the first derivative.
f(x) = N(d1)
N(d1) is the P(X <= d1) of a standard normal distribution. d1 in turn is another function of x (x in this case is the price of the stock to anybody who's interested).
d1 = (np.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
As you can see, d1 is a function of only x. This is what I have tried so far.
import sympy as sp
from math import pi
from sympy.stats import Normal,P
x = sp.symbols('x')
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*np.sqrt(0.5))
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # This should be 0.5155
f1 = sp.simplify(sp.diff(f,x))
f1.evalf(subs={x:100}) # This should also return a float value
The last line of code, however, returns an expression, not a float value as I expected (like in the case of f). I feel like I'm making a very simple mistake but I can't figure out why. I'd appreciate any help.
Thanks.
If you define x with positive=True (which is implied by the log in the definition of u, assuming u is real, which in turn is implied by the definition of f), it looks like you get almost the expected result. (Using f1.subs({x:100}) in the version without the positive-x assumption also shows that the trouble is with unevaluated polar_lift(0) terms.)
import sympy as sp
from sympy.stats import Normal, P
x = sp.symbols('x', positive=True)
u = (sp.log(x/100) + (0.01 + 0.5*0.11**2)*0.5)/(0.11*sp.sqrt(0.5)) # changed np to sp
N = Normal('N',0,1)
f = sp.simplify(P(N <= u))
print(f.evalf(subs={x:100})) # 0.541087287864516
f1 = sp.simplify(sp.diff(f,x))
print(f1.evalf(subs={x:100})) # 0.0510177033783834

Find roots of a system of equations to an arbitrary decimal precision

Given an initial guess for an array of values x, I am trying to find the root of a system that is closest to x. If you are familiar with root finding, you will know that a root of a system of equations f satisfies:
0 = f_1(x)
0 = f_2(x)
....
0 = f_n(x)
Where f_i is one particular function within f
There is a package within scipy that will do this exactly: scipy.optimize.newton_krylov. For example:
import scipy.optimize as sp
from decimal import Decimal as Dc

def f(x):
    f0 = (x[0]**2) + (3*(x[1]**3)) - 2
    f1 = x[0] * (x[1]**2)
    return [f0, f1]

# Nearest root is [sqrt(2), 0]
print sp.newton_krylov(f, [2, .01], iter=100, f_tol=Dc('1e-15'))
>>> [ 1.41421356e+00 3.49544535e-10] # Close enough!
However, I am using the decimal package within python because I am doing extremely precise work. decimal offers more than normal decimal precision. scipy.optimize.newton_krylov returns float-precision values. Is there a way to get my answer at an arbitrarily precise decimal precision?
You could try copying the code in and referenced by scipy.optimize.newton_krylov then modifying it to use decimal values rather than floating point values. This may be difficult and time-consuming, of course.
I have done the equivalent for other situations, but never quite this.
I have found the mpmath module, which contains mpmath.findroot. mpmath uses arbitrary decimal-point precision for all of its numbers. mpmath.findroot will find the nearest root within tolerance. Here is an example of using mpmath for the same problem, to a higher precision:
import scipy.optimize as sp
import mpmath
from mpmath import mpf
from decimal import Decimal as Dc

mpmath.mp.dps = 15

def mp_f(x1, x2):
    f1 = (x1**2) + (3*(x2**3)) - 2
    f2 = x1 * (x2**2)
    return f1, f2

def f(x):
    f0 = (x[0]**2) + (3*(x[1]**3)) - 2
    f1 = x[0] * (x[1]**2)
    return [f0, f1]
tmp_solution = sp.newton_krylov(f, [2, .01], f_tol=Dc('1e-10'))
print tmp_solution
>>> [ 1.41421356e+00 4.87315249e-06]
for _ in range(8):
    tmp_solution = mpmath.findroot(mp_f, (tmp_solution[0], tmp_solution[1]))
    print tmp_solution
    mpmath.mp.dps += 10  # Increase precision
>>> [ 1.4142135623731]
[4.76620313173184e-9]
>>> [ 1.414213562373095048801689]
[4.654573673348783724565804e-12]
>>> [ 1.4142135623730950488016887242096981]
[4.5454827012374811707063801808968925e-15]
>>> [ 1.41421356237309504880168872420969807856967188]
[4.43894795688326535096068850443292395286770757e-18]
>>> [ 1.414213562373095048801688724209698078569671875376948073]
[4.334910114213471839327827177504976152074382061299675453e-21]
>>> [ 1.414213562373095048801688724209698078569671875376948073176679738]
[4.2333106584123451747941381835420647823192649980317402073699554127e-24]
>>> [ 1.41421356237309504880168872420969807856967187537694807317667973799073247846]
[4.1340924398558139440207202654766836515453497962889870471467483995909717197e-27]
>>> [ 1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885]
[4.037199648296693366576484784520203892002447351324378380584214947262318103197216393589e-30]
The precision can be raised arbitrarily.

Writing a function for x * sin(3/x) in python

I have to write a function, s(x) = x * sin(3/x) in python that is capable of taking single values or vectors/arrays, but I'm having a little trouble handling the cases when x is zero (or has an element that's zero). This is what I have so far:
from numpy import zeros, size, sin

def s(x):
    result = zeros(size(x))
    for a in range(0, size(x)):
        if x[a] == 0:
            result[a] = 0
        else:
            result[a] = float(x[a] * sin(3.0/x[a]))
    return result
Which...doesn't work for x = 0. And it's kinda messy. Even worse, I'm unable to use sympy's integrate function on it, or use it in my own simpson/trapezoidal rule code. Any ideas?
When I use integrate() on this function, I get the following error message: "Symbol" object does not support indexing.
This takes about 30 seconds per integrate call:
import sympy as sp
x = sp.Symbol('x')
int2 = sp.integrate(x*sp.sin(3./x),(x,0.000001,2)).evalf(8)
print int2
int1 = sp.integrate(x*sp.sin(3./x),(x,0,2)).evalf(8)
print int1
The results are:
1.0996940
-4.5*Si(zoo) + 8.1682775
Clearly you want to start the integration from a small positive number to avoid the problem at x = 0.
You can also assign x*sin(3./x) to a variable, e.g.:
s = x*sin(3./x)
int1 = sp.integrate(s, (x, 0.00001, 2))
My original answer using scipy to compute the integral:
import scipy.integrate
import math
def s(x):
    if abs(x) < 0.00001:
        return 0
    else:
        return x*math.sin(3.0/x)
s_exact = scipy.integrate.quad(s, 0, 2)
print s_exact
See the scipy docs for more integration options.
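As an aside, if the function s itself also needs to accept arrays (as the original question asks), one vectorized sketch using numpy (the masking trick and the convention s(0) = 0 are assumptions) is:
import numpy as np

def s(x):
    # Vectorized: works for scalars and arrays; defines s(0) = 0 by convention.
    x = np.asarray(x, dtype=float)
    safe_x = np.where(x == 0, 1.0, x)              # placeholder avoids divide-by-zero warnings
    return np.where(x == 0, 0.0, x * np.sin(3.0 / safe_x))

print(s(0.0))               # 0.0
print(s([0.0, 0.5, 2.0]))   # approximately [ 0.  -0.13970775  1.99498997]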
If you want to use SymPy's integrate, you need a symbolic function. A wrong value at a point doesn't really matter for integration (at least mathematically), so you shouldn't worry about it.
It seems there is a bug in SymPy that gives an answer in terms of zoo at 0, because it isn't using limit correctly. You'll need to compute the limits manually. For example, the integral from 0 to 1:
In [14]: res = integrate(x*sin(3/x), x)
In [15]: ans = limit(res, x, 1) - limit(res, x, 0)
In [16]: ans
Out[16]: -9*pi/4 + 3*cos(3)/2 + sin(3)/2 + 9*Si(3)/2
In [17]: ans.evalf()
Out[17]: -0.164075835450162

Find roots of a function a x^n + bx - c = 0 where n isn't an integer with Numpy?

I'm writing a program in python and in it I need to find the roots of a function that is:
a*x^n + b*x -c = 0
where a and b are constants that are calculated earlier in the program but there are several thousand of them.
I need to solve this equation twice for all values of a and b, once with n = 77/27 and once with n = 3.
How can I do this in Python?
I checked numpy.roots(p) and that would work for n = 3, I think. But how would I do it for n = 77/27?
I think your best choice is scipy.optimize.brentq():
import scipy.optimize

def f(x, n, a, b, c):
    return a * x**n + b * x - c

print scipy.optimize.brentq(
    f, 0.0, 100.0, args=(77.0/27.0, 1.0, 1.0, 10.0))
prints
2.0672035922580592
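One caveat: brentq is a bracketing method, so it needs endpoints at which f has opposite signs; the [0.0, 100.0] interval above is an assumption about where the root lies. A quick sanity check of the bracket might look like this (a sketch):
import scipy.optimize

def f(x, n, a, b, c):
    return a * x**n + b * x - c

args = (77.0/27.0, 1.0, 1.0, 10.0)
lo, hi = 0.0, 100.0
# f(lo) = -10 and f(hi) is large and positive, so the bracket straddles a root
assert f(lo, *args) * f(hi, *args) < 0, "bracket does not straddle a root"
print(scipy.optimize.brentq(f, lo, hi, args=args))   # approximately 2.067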
I would use fsolve from scipy,
from scipy.optimize import fsolve
def func(x, a, b, c, n):
    return a*x**n + b*x - c

a, b, c = 11., 23., 31.
n = 77./27.
guess = [4.0,]
print fsolve(func, guess, args=(a,b,c,n))  # 0.94312258329
This of course gives you a root, not necessarily all roots.
Edit: Use brentq, it's much faster
from timeit import timeit
sp = """
from scipy.optimize import fsolve
from scipy.optimize import brentq
from numpy.random import uniform
from numpy import zeros
m = 10**3
z = zeros((m,4))
z[:,:3] = uniform(1,50,size=(m,3))
z[:,3] = uniform(1,10,m)
def func(x,a,b,c,n):
    return a*x**n + b*x - c
"""
s = "[fsolve(func,1.0,args=tuple(i)) for i in z]"
t = "[brentq(func,0.,10.,args=tuple(i)) for i in z]"
runs = 10**2
print 'fsolve\t', timeit(s,sp,number=runs)
print 'brentq\t', timeit(t,sp,number=runs)
gives me,
fsolve 15.5562820435
brentq 3.84963393211
You need a root-finding algorithm like Newton's method. All root-finding algorithms will work with non-integer powers; the exponents need not even be rational numbers.
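For illustration, a minimal hand-rolled Newton iteration for f(x) = a*x**n + b*x - c (the starting point, tolerance, and parameter values below are arbitrary choices; scipy's routines above are the more robust option):
def newton_root(a, b, c, n, x0=1.0, tol=1e-12, max_iter=100):
    # Newton's method for a*x**n + b*x - c = 0; assumes the iterates stay positive
    # so that x**n is well defined for non-integer n.
    x = x0
    for _ in range(max_iter):
        fx = a * x**n + b * x - c
        dfx = a * n * x**(n - 1) + b        # derivative also works for non-integer n
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(newton_root(1.0, 1.0, 10.0, 77.0/27.0))   # approximately 2.067, matching brentq above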
