Root of polynomial using bisection - python

I'm new to Python and I'm having a hard time trying to find the root of a polynomial using the bisection method. So far I have 2 methods. One evaluates the polynomial at a value x:
def eval(x, poly):
    """
    Evaluate the polynomial at the value x.
    poly is a list of coefficients from lowest to highest.
    :param x: Argument at which to evaluate
    :param poly: The polynomial coefficients, lowest order to highest
    :return: The result of evaluating the polynomial at x
    """
    result = poly[0]
    for i in range(1, len(poly)):
        result = result + poly[i] * x**i
    return result
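A quick check of this helper (note that the name eval shadows Python's built-in eval, so renaming it to something like eval_poly avoids confusion): evaluating 1 + 0*x + 3*x**2 at x = 2 should give 13.

print(eval(2, [1, 0, 3]))   # 1 + 0*2 + 3*4 = 13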
The next method is supposed to use bisection to find the root of the polynomials given
def bisection(a, b, poly, tolerance):
    poly(a) <= 0
    poly(b) >= 0
    try:
        if
    """
    Assume that poly(a) <= 0 and poly(b) >= 0.
    :param a: poly(a) <= 0 Raises an exception if not true
    :param b: poly(b) >= 0 Raises an exception if not true
    :param poly: polynomial coefficients, low order first
    :param tolerance: greater than 0
    :return: a value between a and b that is within tolerance of a root of the polynomial
    """
How would I find the root using bisection? I have been provided a test script to test these out.
EDIT: I followed the pseudocode and ended up with this:
def bisection(a, b, poly, tolerance):
    #poly(a) <= 0
    #poly(b) >= 0
    difference = abs(a-b)
    xmid = (a-b)/2
    n = 1
    nmax = 60
    while n <= nmax:
        mid = (a-b) / 2
        if poly(mid) == 0 or (b - a)/2 < tolerance:
            print(mid)
        n = n + 1
        if sign(poly(mid)) == sign(poly(a)):
            a = mid
        else:
            b = mid
    return xmid
Is this correct? I haven't been able to test it because of indentation errors with the return xmid statement.

Your code is mostly fine, apart from the mix-up between xmid and mid: use mid = (a + b) / 2 instead of mid = (a - b) / 2, and you don't need the difference variable.
Cleaned it up a bit:
def sign(x):
    return -1 if x < 0 else (1 if x > 0 else 0)

def bisection(a, b, poly, tolerance):
    mid = a  # will be overwritten
    for i in range(60):
        mid = (a+b) / 2
        if poly(mid) == 0 or (b - a)/2 < tolerance:
            return mid
        if sign(poly(mid)) == sign(poly(a)):
            a = mid
        else:
            b = mid
    return mid

print(bisection(-10**10, 10**10, lambda x: x**5 - x**4 - x**3 - x**2 - x + 9271, 0.00001))
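If poly is meant to be the coefficient list from the question rather than a callable, one way to wire it up (a sketch that assumes the eval helper defined in the question) is to wrap it in a lambda:

coeffs = [-6, 11, -6, 1]   # x**3 - 6*x**2 + 11*x - 6, with roots at 1, 2 and 3
root = bisection(0, 1.5, lambda x: eval(x, coeffs), 1e-6)
print(root)                # should be close to 1.0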

How do I go about coding a custom function (without using any library) which returns the log of a number in python [duplicate]

I need to generate the result of the log.
I know that:
if y = log_base(x), then base**y = x
Then I made my code:

def log(x, base):
    log_b = 2
    while x != int(round(base ** log_b)):
        log_b += 0.01
        print(log_b)
    return int(round(log_b))

But it works very slowly. Can I use another method?
One other thing you might want to consider is using the Taylor series of the natural logarithm:
ln(1 + y) = y - y**2/2 + y**3/3 - y**4/4 + ...   (for |y| < 1)
Once you've approximated the natural log using a number of terms from this series, it is easy to change base:
log_base(x) = ln(x) / ln(base)
EDIT: Here's another useful identity:
ln(x) = lim (n -> infinity) of n * (x**(1/n) - 1)
Using this, we could write something along the lines of
def ln(x):
    n = 1000.0
    return n * ((x ** (1/n)) - 1)
Testing it out, we have:
print ln(math.e), math.log(math.e)
print ln(0.5), math.log(0.5)
print ln(100.0), math.log(100.0)
Output:
1.00050016671 1.0
-0.692907009547 -0.69314718056
4.6157902784 4.60517018599
This shows our value compared to the math.log value (separated by a space) and, as you can see, we're pretty accurate. You'll probably start to lose some accuracy as you get very large (e.g. ln(10000) will be about 0.4 greater than it should), but you can always increase n if you need to.
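For reference, a minimal sketch of the series idea itself (the names ln_series and log_series are just illustrative): it uses the equivalent artanh form ln(x) = 2*(z + z**3/3 + z**5/5 + ...) with z = (x - 1)/(x + 1), which converges for every x > 0, and then changes base.

def ln_series(x, terms=50):
    # artanh form of the log series; z is in (-1, 1) for any x > 0
    z = (x - 1.0) / (x + 1.0)
    total = 0.0
    for k in range(terms):
        total += z ** (2 * k + 1) / (2 * k + 1)
    return 2 * total

def log_series(x, base, terms=50):
    # change of base: log_base(x) = ln(x) / ln(base)
    return ln_series(x, terms) / ln_series(base, terms)

print(log_series(16, 2))   # should be close to 4.0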
I used recursion:
def myLog(x, b):
    if x < b:
        return 0
    return 1 + myLog(x/b, b)
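This returns the integer part of the logarithm, e.g.:

print(myLog(100, 10))   # 2
print(myLog(37, 3))     # 3, since 3**3 = 27 <= 37 < 81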
You can use binary search for that.
You can get more information on binary search on Wikipedia:
Binary search;
Doubling search.
# search for log_base(x) in range [mn, mx] using binary search
def log_in_range(x, base, mn, mx):
    if (mn <= mx):
        med = (mn + mx) / 2.0
        y = base ** med
        if abs(y - x) < 0.00001:  # or if math.isclose(x, y): https://docs.python.org/3/library/math.html#math.isclose
            return med
        elif x > y:
            return log_in_range(x, base, med, mx)
        elif x < y:
            return log_in_range(x, base, mn, med)
    return 0

# determine range using doubling search, then call log_in_range
def log(x, base):
    if base <= 0 or base == 1 or x <= 0:
        raise ValueError('math domain error')
    elif 0 < base < 1:
        return -log(x, 1/base)
    elif 1 <= x and 1 < base:
        mx = 1
        y = base
        while y < x:
            y *= y
            mx *= 2
        return log_in_range(x, base, 0, mx)
    elif 0 <= x < 1 and 1 < base:
        mn = -1
        y = 1/base
        while y > x:
            y *= y
            mn *= 2
        return log_in_range(x, base, mn, 0)
import math

try:
    number_and_base = input()  ## input the values for number and base
    ## assigning those values to the variables
    number = int(number_and_base.split()[0])
    base = int(number_and_base.split()[1])
## exception handling
except ValueError:
    print("Invalid input...!")
## program
else:
    n1 = 1  ## taking an initial value to iterate
    while (number >= int(round(base**(n1), 0))):  ## finding the power whose result is closest to the given number, varying the value of the exponent
        n1 += 0.000001  ## increasing the initial value bit by bit
    n2 = n1 - 0.0001
    if abs(number - base**(n2)) < abs(base**(n1) - number):
        n = n2
    else:
        n = n1
    print(math.floor(n))  ## output value
Comparison:-
This is how your log works:-
def your_log(x, base):
    log_b = 2
    while x != int(round(base ** log_b)):
        log_b += 0.01
        #print log_b
    return int(round(log_b))

print your_log(16, 2)
# %timeit %run your_log.py
# 1000 loops, best of 3: 579 us per loop
This is my proposed improvement:-
def my_log(x, base):
    count = -1
    while x > 0:
        x /= base
        count += 1
    if x == 0:
        return count

print my_log(16, 2)
# %timeit %run my_log.py
# 1000 loops, best of 3: 321 us per loop
This version is faster, as shown by using the %timeit magic function in IPython to time the execution for comparison.
Looping like that will be a long process. Instead, you can combine the change-of-base formula with the limit identity:

def ln(x):
    return 99999999 * (x ** (1.0 / 99999999) - 1)

def log(x, base):
    return ln(x) / ln(base)

print(log(8, 3))
Values are nearly equal but not exact.
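A quick way to see how close it gets is to compare against math.log:

import math
print(log(8, 3), math.log(8, 3))        # the two values agree to several decimal places
print(log(100, 10), math.log(100, 10))  # very close to 2.0, but not exact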

Solve for x : n**x + x = 0 using binary search

I want to solve for x (up to 6 decimal places) in the equation: n**x + x = 0. I want to do this using Binary Search.
I used the below code to get the square root of a positive integer, 'n'. Need to apply the same logic to solve the above problem somehow.
n = int(input())

# find square root of n here
def sqroot(n):
    l = 0
    r = n
    while abs(l-r) > 10**(-5):
        mid = (l+r)/2
        if mid**2 > n:
            r = mid
        else:
            l = mid
    return round(mid, 4)

print('%.4f' % sqroot(n))
The function seems to be monotonically increasing, so binary search can be applied in the same way as for the square root.
In your function n was the target; now 0 is the target and n is a parameter.
Note that this function seems to have its zero at a negative x, so do not forget to extend the search space to negative values.
n = int(input())

def binarySearch(n):
    # n is the parameter
    l = -10
    r = 10
    while abs(l - r) > 10**(-5):
        mid = (l + r) / 2
        # Compute the value of the function and compare against 0.0
        if (n**mid + mid) > 0.0:
            r = mid
        else:
            l = mid
    return round(mid, 4)

print('%.4f' % binarySearch(n))
For six decimal places, tighten the tolerance and the rounding:

n = int(input())

def binarySearch(n):
    # n is the parameter
    l = -10
    r = 10
    while abs(l - r) > 10**(-6):
        mid = (l + r) / 2
        # Compute the value of the function and compare against 0.0
        if (n**mid + mid) > 0.0:
            r = mid
        else:
            l = mid
    return round(mid, 6)

print('%.6f' % binarySearch(n))
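As a sanity check (a small sketch that bypasses input() and hard-codes n = 2): the root x of 2**x + x = 0 satisfies 2**x = -x, so the value should be roughly -0.64.

def f(x, n=2):
    return n**x + x

root = binarySearch(2)   # reuse the function above with n = 2
print(root, f(root))     # f(root) should be very close to 0.0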

Division using recursion

I am pretty sure that this must be some glaringly stupid mistake by me, but can anyone explain what is wrong with this division code using recursion? I know there are a lot of alternatives, but I need to know what is wrong with this.
def division(a, b):
    x = 0
    if a < b:
        return x
    else:
        x += 1
        return division(a - b, b)
    return x
When I do division(10, 2), it gives me 0 as output
You always set your local variable x to 0.
Then if the dividend is smaller than the divisor you return that x which is of course 0.
On the other hand, when the dividend is greater than or equal to the divisor, you increment x by 1 and make a recursive call with a reduced dividend; that call eventually reaches the first case and still returns an x that holds the value 0.
Note: your final return is not reachable anyway, since both the if and the else branch contain a return.
So please consider this solution:

def division(a, b):
    if a < b:
        return 0
    else:
        return 1 + division(a-b, b)
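A quick check:

print(division(10, 2))   # 5
print(division(7, 3))    # 2
print(division(1, 5))    # 0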
Update:
A possible solution for working with negative integers following python's round towards negative infinity division:
def division_with_negatives(a, b):
    a_abs, b_abs = abs(a), abs(b)
    if (a >= 0 and b >= 0) or (a < 0 and b < 0):
        # branch for positive results
        if a_abs < b_abs:
            return 0
        else:
            return 1 + division_with_negatives(a_abs - b_abs, b_abs)
    else:
        # branch for negative results
        if b_abs > a_abs or a_abs == b_abs:
            return -1
        else:
            return -1 + division_with_negatives(a_abs - b_abs, -b_abs)

assert division_with_negatives(-4, 1) == -4 // 1
assert division_with_negatives(10, 2) == 10 // 2
assert division_with_negatives(1, -5) == 1 // -5
assert division_with_negatives(-3, -2) == -3 // -2
This might save you from RecursionError:
def div(x, y):
    if y == 0:
        return 0
    elif x == y:
        return 1
    elif x < y:
        if y * -1 == x:
            return -1
        elif x % y == 0:
            return 1 + div(x - y, y)
        else:
            return 0
    else:
        if y < 0:
            return 1 - div(x - y, -y)
        else:
            return 1 + div(x - y, y)

# Application
A = div(1, 2)
B = div(-9, 9)
C = div(3, 2)
D = div(-9, -3)
E = div(100, -2)

print(A)  # 0
print(B)  # -1
print(C)  # 1
print(D)  # 3
print(E)  # -50
Your escape condition is if a < b; for the function to terminate, this must eventually be fulfilled to leave the recursion. However, because x is declared at the top of the function, only redefined inside the body of the else statement, and never returned from the recursive branch, the function always terminates with a value of x = 0.
You should either set x = 1 + division(a - b, b) or return division(a - b, b) + 1, and remove the unreachable return at the end.
def div(a, b, x):
    if a < b:
        return x
    else:
        x += 1
        return div(a - b, b, x)

print(div(130, 10, 0))
# 13
This might work a little better for you:
def division(a, b, x=0):
    if a < b:
        return x
    else:
        x += 1
        return division(a-b, b, x)

Every time your function ran through a new recursion, you were setting x to 0. This way, it defaults to 0 if x is not specified, and should work like I think you want.
Also note that this will not work for negative numbers, but you probably knew that :)
With global you get:
def division(a, b):
    global x
    if a < b: return x
    else:
        x += 1
        return division(a-b, b)

x = 0
print(division(10, 2))

But you have to reset x to zero every time before calling division.
I think this is the best way; this way you can also get a floating-point answer:

def division(a, b):
    if a == 0 or a == 1 or a == 2:
        return b
    else:
        return a / division(a % b, b)

print(division(9, 2))

Simpson's rule in Python

For a numerical methods class, I need to write a program to evaluate a definite integral with Simpson's composite rule. I already got this far (see below), but my answer is not correct. I am testing the program with f(x)=x, integrated over 0 to 1, for which the outcome should be 0.5. I get 0.78746... etc.
I know there is a Simpson's rule available in Scipy, but I really need to write it myself.
I suspect there is something wrong with the two loops. I tried "for i in range(1, n, 2)" and "for i in range(2, n-1, 2)" before, and this gave me a result of 0.41668333... etc.
I also tried "x += h" and I tried "x += i*h". The first gave me 0.3954, and the second option 7.9218.
# Write a program to evaluate a definite integral using Simpson's rule with
# n subdivisions
from math import *
from pylab import *

def simpson(f, a, b, n):
    h = (b-a)/n
    k = 0.0
    x = a
    for i in range(1, n/2):
        x += 2*h
        k += 4*f(x)
    for i in range(2, (n/2)-1):
        x += 2*h
        k += 2*f(x)
    return (h/3)*(f(a)+f(b)+k)

def function(x): return x

print simpson(function, 0.0, 1.0, 100)
You probably forgot to initialize x before the second loop; also, the starting conditions and the number of iterations are off. Here is the correct way:
def simpson(f, a, b, n):
    h = (b-a)/n
    k = 0.0
    x = a + h
    for i in range(1, n/2 + 1):
        k += 4*f(x)
        x += 2*h
    x = a + 2*h
    for i in range(1, n/2):
        k += 2*f(x)
        x += 2*h
    return (h/3)*(f(a)+f(b)+k)
Your mistakes are connected with the notion of a loop invariant. Without getting into too much detail, it is generally easier to understand and debug loops that advance at the end of an iteration rather than at the beginning; here I moved the x += 2*h line to the end of the loop body, which makes it easy to verify where the summation starts. In your implementation you would have to assign a weird x = a - h before the first loop only to add 2*h to it as the first line in the loop.
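One caveat: this version relies on Python 2 integer division in range(1, n/2 + 1). A sketch of the same function for Python 3 (assuming n is even) just swaps in floor division:

def simpson_py3(f, a, b, n):
    # composite Simpson's rule for even n, Python 3 (// instead of /)
    h = (b - a) / n
    k = 0.0
    x = a + h
    for i in range(1, n // 2 + 1):
        k += 4 * f(x)
        x += 2 * h
    x = a + 2 * h
    for i in range(1, n // 2):
        k += 2 * f(x)
        x += 2 * h
    return (h / 3) * (f(a) + f(b) + k)

print(simpson_py3(lambda x: x, 0.0, 1.0, 100))   # should print 0.5, give or take float error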
All you need to do to make this code work is add values for a and b in the function bounds() and fill in f(x) so that it returns the equation being evaluated at x. You could also implement the function and bounds directly in the simpsonsRule function if desired. Also, these are functions to be implemented into a program, not a program by themselves.
def simpsonsRule(n):
    """
    simpsonsRule: (int) -> float
    Parameters:
        n: integer representing the number of segments being used to
           approximate the integral
    Pre conditions:
        Function bounds() declared that returns lower and upper bounds of integral.
        Function f(x) declared that returns the evaluated equation at point x.
        Parameters passed.
    Post conditions:
        Returns float equal to the approximate integral of f(x) from a to b
        using Simpson's rule.
    Description:
        Returns the approximation of an integral. Works as of python 3.3.2
        REQUIRES NO MODULES to be imported, especially not non standard ones.
        -Code by TechnicalFox
    """
    a, b = bounds()
    sum = float()
    sum += f(a)  # evaluating first point
    sum += f(b)  # evaluating last point
    width = (b-a)/(2*n)  # width of segments
    oddSum = float()
    evenSum = float()
    for i in range(1, n):  # evaluating all odd values of n (not first and last)
        oddSum += f(2*width*i+a)
    sum += oddSum * 2
    for i in range(1, n+1):  # evaluating all even values of n (not first and last)
        evenSum += f(width*(-1+2*i)+a)
    sum += evenSum * 4
    return sum * width/3

def bounds():
    """
    Description:
        Function that returns both the upper and lower bounds of an integral.
    """
    a = #>>>INTEGER REPRESENTING LOWER BOUND OF INTEGRAL<<<
    b = #>>>INTEGER REPRESENTING UPPER BOUND OF INTEGRAL<<<
    return a, b

def f(x):
    """
    Description:
        Function that takes an x value and returns the equation being evaluated,
        with said x value.
    """
    return #>>>EQUATION USING VARIABLE X<<<
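For instance, filling in the placeholders with hypothetical values (the integral of x**2 from 0 to 1, whose exact value is 1/3):

def bounds():
    return 0, 1           # example bounds: integrate from 0 to 1

def f(x):
    return x ** 2         # example integrand

print(simpsonsRule(100))  # should be very close to 0.3333...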
You can use this program for calculating definite integrals by using Simpson's 1/3 rule. You can increase your accuracy by increasing the value of the variable panels.
import numpy as np

def integration(integrand, lower, upper, *args):
    panels = 100000
    limits = [lower, upper]
    h = (limits[1] - limits[0]) / (2 * panels)
    n = (2 * panels) + 1
    x = np.linspace(limits[0], limits[1], n)
    y = integrand(x, *args)
    # Simpson 1/3
    I = 0
    start = -2
    for looper in range(0, panels):
        start += 2
        counter = 0
        for looper in range(start, start+3):
            counter += 1
            if (counter == 1 or counter == 3):
                I += ((h/3) * y[looper])
            else:
                I += ((h/3) * 4 * y[looper])
    return I
For example:
def f(x, a, b):
    return a * np.log(x/b)

I = integration(f, 3, 4, 2, 5)
print(I)

This will integrate 2*ln(x/5) over the interval [3, 4].
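Since the samples already live in a NumPy array, the double loop can also be replaced by slicing, which is much faster for large panel counts (a sketch under the same setup; the name integration_vectorized is just illustrative):

import numpy as np

def integration_vectorized(integrand, lower, upper, *args):
    panels = 100000
    n = 2 * panels + 1
    h = (upper - lower) / (2 * panels)
    x = np.linspace(lower, upper, n)
    y = integrand(x, *args)
    # composite Simpson 1/3: endpoints weight 1, odd indices weight 4, even interior indices weight 2
    return (h / 3) * (y[0] + y[-1] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]))

print(integration_vectorized(lambda x, a, b: a * np.log(x / b), 3, 4, 2, 5))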
Here is my code (I think it is the easiest method). I did this in a Jupyter notebook.
The easiest and most accurate approach is Simpson's 1/3 rule.
Explanation
For the standard method (a=0, h=4, b=12) and f = 100 - (x^2)/2
we get:
n = 3.0, y0 = 100.0, y1 = 92.0, y2 = 68.0, y3 = 28.0,
so the plain Simpson formula h/3*(y0 + 4*y1 + 2*y2 + y3) = 842.7, which is not correct.
Using the 1/3 rule we get:
h = h/2 = 4/2 = 2, and then:
n = 3.0, y0 = 100.0, y1 = 98.0, y2 = 92.0, y3 = 82.0, y4 = 68.0, y5 = 50.0, y6 = 28.0,
Now we calculate the integral for each step (n = 3, so 3 steps):
Integral of the first step: h/3*(y0 + 4*y1 + y2) = 389.3333333333333
Integral of the second step: h/3*(y2 + 4*y3 + y4) = 325.3333333333333
Integral of the third step: h/3*(y4 + 4*y5 + y6) = 197.33333333333331
Summing all of these gives 912.0, which is correct.
x = 0
b = 12
h = 4
x = float(x)
h = float(h)
b = float(b)
a = float(x)

def fun(x):
    return 100 - (x**2)/2

h = h/2
l = 0  # just a counter for the labels
print('n=', (b-x)/(h*2))
n = int((b-a)/h + 1)
y = []   # list of all "y" values
yf = []
for i in range(n):
    f = fun(x)
    print('y%s' % (l), '=', f)
    y.append(f)
    l += 1
    x += h
print(y, '\n')
n1 = int(((b-a)/h)/2)
l = 1
for i in range(n1):
    nf = (h/3*(y[+0] + 4*y[+1] + y[+2]))
    y = y[2:]  # with every step we delete the first two "y" values and move two places to the right, i.e. y2 + 4*y3 + y4
    print('Integral for a step', l, '=', nf)
    yf.append(nf)
    l += 1
print('\nResult of the integral =', sum(yf))
At the beginning I added simple data entry:
d = None
while True:
    print('Enter your own data or enter the word "test" for ready-made data.\n')
    x = input('Enter the beginning of the interval (a): ')
    if x == 'test':
        x = 0
        h = 4    # step (Δx)
        b = 12   # limit, e.g. 0 -> 12
        # w = (20*x)-(x**2) or (1+x**3)**(1/2)
        break
    h = input('Enter the size of the integration step (h): ')
    if h == 'test':
        x = 0
        h = 4
        b = 12
        break
    b = input('Enter the end of the range (b): ')
    if b == 'test':
        x = 0
        h = 4
        b = 12
        break
    d = input('Give the function pattern: ')
    if d == 'test':
        x = 0
        h = 4
        b = 12
        break
    elif d != -9999.9:
        break

x = float(x)
h = float(h)
b = float(b)
a = float(x)

if d == None or d == 'test':
    def fun(x):
        return 100 - (x**2)/2  # (20*x)-(x**2)
else:
    def fun(x):
        w = eval(d)
        return w

h = h/2
l = 0  # just a counter for the labels
print('n=', (b-x)/(h*2))
n = int((b-a)/h + 1)
y = []   # list of all "y" values
yf = []
for i in range(n):
    f = fun(x)
    print('y%s' % (l), '=', f)
    y.append(f)
    l += 1
    x += h
print(y, '\n')
n1 = int(((b-a)/h)/2)
l = 1
for i in range(n1):
    nf = (h/3*(y[+0] + 4*y[+1] + y[+2]))
    y = y[2:]
    print('Integral for a step', l, '=', nf)
    yf.append(nf)
    l += 1
print('\nResult of the integral =', sum(yf))
def simps(f, a, b, N):  # N must be an odd integer
    """ define simpson method, a and b are the lower and upper limits of
    the interval, N is number of points, dx is the slice
    """
    integ = 0
    dx = float((b - a) / N)
    for i in range(1, N-1, 2):
        integ += f((a+(i-1)*dx)) + 4*f((a+i*dx)) + f((a+(i+1)*dx))
    integral = dx/3.0 * integ
    # if the number of points is even, raise an error
    if (N % 2) == 0:
        raise ValueError("N must be an odd integer.")
    return integral

def f(x):
    return x**2

integrate = simps(f, 0, 1, 99)
print(integrate)
Example of implementing Simpson's rule for the integral of sin(x) with a = 0 and b = pi/4, using 10 panels for the integration:

import numpy as np

def simpson(f, a, b, n):
    x = np.linspace(a, b, n+1)
    w = 2*np.ones(n+1); w[0] = 1.0; w[-1] = 1;
    for i in range(len(w)):
        if i % 2 == 1:
            w[i] = 4
    width = x[1] - x[0]
    area = 0.333 * width * np.sum(w*f(x))
    return area

f = lambda x: np.sin(x)
a = 0.0; b = np.pi/4

areaSim = simpson(f, a, b, 10)
print(areaSim)

How to compute the nth root of a very big integer

I need a way to compute the nth root of a long integer in Python.
I tried pow(m, 1.0/n), but it doesn't work:
OverflowError: long int too large to convert to float
Any ideas?
By long integer I mean REALLY long integers like:
11968003966030964356885611480383408833172346450467339251
196093144141045683463085291115677488411620264826942334897996389
485046262847265769280883237649461122479734279424416861834396522
819159219215308460065265520143082728303864638821979329804885526
557893649662037092457130509980883789368448042961108430809620626
059287437887495827369474189818588006905358793385574832590121472
680866521970802708379837148646191567765584039175249171110593159
305029014037881475265618958103073425958633163441030267478942720
703134493880117805010891574606323700178176718412858948243785754
898788359757528163558061136758276299059029113119763557411729353
915848889261125855717014320045292143759177464380434854573300054
940683350937992500211758727939459249163046465047204851616590276
724564411037216844005877918224201569391107769029955591465502737
961776799311859881060956465198859727495735498887960494256488224
613682478900505821893815926193600121890632
If it's a REALLY big number, you could use a binary search.
def find_invpow(x,n):
    """Finds the integer component of the n'th root of x,
    an integer such that y ** n <= x < (y + 1) ** n.
    """
    high = 1
    while high ** n <= x:
        high *= 2
    low = high/2
    while low < high:
        mid = (low + high) // 2
        if low < mid and mid**n < x:
            low = mid
        elif high > mid and mid**n > x:
            high = mid
        else:
            return mid
    return mid + 1
For example:
>>> x = 237734537465873465
>>> n = 5
>>> y = find_invpow(x,n)
>>> y
2986
>>> y**n <= x <= (y+1)**n
True
>>>
>>> x = 119680039660309643568856114803834088331723464504673392511960931441>
>>> n = 45
>>> y = find_invpow(x,n)
>>> y
227661383982863143360L
>>> y**n <= x < (y+1)**n
True
>>> find_invpow(y**n,n) == y
True
>>>
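One caveat if you run this on Python 3: low = high/2 produces a float there, which breaks exact integer arithmetic for huge numbers. A minimal variant using floor division (otherwise identical):

def find_invpow_py3(x, n):
    """Same idea, but // keeps everything as exact ints on Python 3."""
    high = 1
    while high ** n <= x:
        high *= 2
    low = high // 2
    while low < high:
        mid = (low + high) // 2
        if low < mid and mid**n < x:
            low = mid
        elif high > mid and mid**n > x:
            high = mid
        else:
            return mid
    return mid + 1

print(find_invpow_py3(237734537465873465, 5))   # 2986, matching the session above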
Gmpy is a C-coded Python extension module that wraps the GMP library to provide to Python code fast multiprecision arithmetic (integer, rational, and float), random number generation, advanced number-theoretical functions, and more.
Includes a root function:
x.root(n): returns a 2-element tuple (y,m), such that y is the
(possibly truncated) n-th root of x; m, an ordinary Python int,
is 1 if the root is exact (x==y**n), else 0. n must be an ordinary
Python int, >=0.
For example, 20th root:
>>> import gmpy
>>> i0=11968003966030964356885611480383408833172346450467339251
>>> m0=gmpy.mpz(i0)
>>> m0
mpz(11968003966030964356885611480383408833172346450467339251L)
>>> m0.root(20)
(mpz(567), 0)
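If you are installing this today, the maintained successor is gmpy2, where (as far as I can tell) the same functionality is exposed as iroot:

import gmpy2

root, exact = gmpy2.iroot(gmpy2.mpz(11968003966030964356885611480383408833172346450467339251), 20)
print(root, exact)   # should give 567 and False, matching the gmpy result above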
You can make it run slightly faster by avoiding the while loops in favor of setting low to 10 ** (len(str(x)) / n) and high to low * 10. Probably better is to replace the len(str(x)) with the bitwise length and using a bit shift. Based on my tests, I estimate a 5% speedup from the first and a 25% speedup from the second. If the ints are big enough, this might matter (and the speedups may vary). Don't trust my code without testing it carefully. I did some basic testing but may have missed an edge case. Also, these speedups vary with the number chosen.
If the actual data you're using is much bigger than what you posted here, this change may be worthwhile.
from timeit import Timer

def find_invpow(x,n):
    """Finds the integer component of the n'th root of x,
    an integer such that y ** n <= x < (y + 1) ** n.
    """
    high = 1
    while high ** n < x:
        high *= 2
    low = high/2
    while low < high:
        mid = (low + high) // 2
        if low < mid and mid**n < x:
            low = mid
        elif high > mid and mid**n > x:
            high = mid
        else:
            return mid
    return mid + 1

def find_invpowAlt(x,n):
    """Finds the integer component of the n'th root of x,
    an integer such that y ** n <= x < (y + 1) ** n.
    """
    low = 10 ** (len(str(x)) / n)
    high = low * 10
    while low < high:
        mid = (low + high) // 2
        if low < mid and mid**n < x:
            low = mid
        elif high > mid and mid**n > x:
            high = mid
        else:
            return mid
    return mid + 1

x = 237734537465873465
n = 5
tests = 10000

print "Norm", Timer('find_invpow(x,n)', 'from __main__ import find_invpow, x,n').timeit(number=tests)
print "Alt", Timer('find_invpowAlt(x,n)', 'from __main__ import find_invpowAlt, x,n').timeit(number=tests)

Norm 0.626754999161
Alt 0.566340923309
If you are looking for something standard, fast to write, and with high precision, I would use decimal and adjust the precision (getcontext().prec) to at least the length of x.
Code (Python 3.0)
from decimal import *
x = '11968003966030964356885611480383408833172346450467339251\
196093144141045683463085291115677488411620264826942334897996389\
485046262847265769280883237649461122479734279424416861834396522\
819159219215308460065265520143082728303864638821979329804885526\
557893649662037092457130509980883789368448042961108430809620626\
059287437887495827369474189818588006905358793385574832590121472\
680866521970802708379837148646191567765584039175249171110593159\
305029014037881475265618958103073425958633163441030267478942720\
703134493880117805010891574606323700178176718412858948243785754\
898788359757528163558061136758276299059029113119763557411729353\
915848889261125855717014320045292143759177464380434854573300054\
940683350937992500211758727939459249163046465047204851616590276\
724564411037216844005877918224201569391107769029955591465502737\
961776799311859881060956465198859727495735498887960494256488224\
613682478900505821893815926193600121890632'
minprec = 27
if len(x) > minprec: getcontext().prec = len(x)
else: getcontext().prec = minprec
x = Decimal(x)
power = Decimal(1)/Decimal(3)
answer = x**power
ranswer = answer.quantize(Decimal('1.'), rounding=ROUND_UP)
diff = x - ranswer**Decimal(3)
if diff == Decimal(0):
    print("x is the cubic number of", ranswer)
else:
    print("x has a cubic root of ", answer)
Answer
x is the cubic number of 22873918786185635329056863961725521583023133411
451452349318109627653540670761962215971994403670045614485973722724603798
107719978813658857014190047742680490088532895666963698551709978502745901
704433723567548799463129652706705873694274209728785041817619032774248488
2965377218610139128882473918261696612098418
Oh, for numbers that big, you would use the decimal module.
ns: your number as a string
ns = "11968003966030964356885611480383408833172346450467339251196093144141045683463085291115677488411620264826942334897996389485046262847265769280883237649461122479734279424416861834396522819159219215308460065265520143082728303864638821979329804885526557893649662037092457130509980883789368448042961108430809620626059287437887495827369474189818588006905358793385574832590121472680866521970802708379837148646191567765584039175249171110593159305029014037881475265618958103073425958633163441030267478942720703134493880117805010891574606323700178176718412858948243785754898788359757528163558061136758276299059029113119763557411729353915848889261125855717014320045292143759177464380434854573300054940683350937992500211758727939459249163046465047204851616590276724564411037216844005877918224201569391107769029955591465502737961776799311859881060956465198859727495735498887960494256488224613682478900505821893815926193600121890632"
from decimal import Decimal
d = Decimal(ns)
one_third = Decimal("0.3333333333333333")
print d ** one_third
and the answer is: 2.287391878618402702753613056E+305
TZ pointed out that this isn't accurate... and he's right. Here's my test.
from decimal import Decimal

def nth_root(num_decimal, n_integer):
    exponent = Decimal("1.0") / Decimal(n_integer)
    return num_decimal ** exponent

ns = "11968003966030964356885611480383408833172346450467339251196093144141045683463085291115677488411620264826942334897996389485046262847265769280883237649461122479734279424416861834396522819159219215308460065265520143082728303864638821979329804885526557893649662037092457130509980883789368448042961108430809620626059287437887495827369474189818588006905358793385574832590121472680866521970802708379837148646191567765584039175249171110593159305029014037881475265618958103073425958633163441030267478942720703134493880117805010891574606323700178176718412858948243785754898788359757528163558061136758276299059029113119763557411729353915848889261125855717014320045292143759177464380434854573300054940683350937992500211758727939459249163046465047204851616590276724564411037216844005877918224201569391107769029955591465502737961776799311859881060956465198859727495735498887960494256488224613682478900505821893815926193600121890632"
nd = Decimal(ns)
cube_root = nth_root(nd, 3)
print (cube_root ** Decimal("3.0")) - nd
Possibly for your curiosity:
http://en.wikipedia.org/wiki/Hensel_Lifting
This could be the technique that Maple would use to actually find the nth root of large numbers.
Pose the fact that x^n - 11968003.... = 0 mod p, and go from there...
I may suggest four methods for solving your task. The first is based on Binary Search. The second is based on Newton's Method. The third is based on the Shifting n-th Root Algorithm. The fourth is what I call the Chord-Tangent method, described in the picture here.
Binary Search was already implemented in many answers above. I just introduce here my own vision of it and its implementation.
As an alternative I also implement an Optimized Binary Search method (marked Opt). This method just starts from the range [hi / 2, hi), where hi is equal to 2^(num_bit_length / k) if we're computing the k-th root.
Newton's Method is new here, as I see it wasn't implemented in other answers. It is usually considered to be faster than Binary Search, although my own timings in the code below don't show any speedup. Hence this method is here just for reference/interest.
The Shifting Method is 30-50% faster than the optimized binary search method, and should be even faster if implemented in C++, because C++ has fast 64-bit arithmetic, which is partially used in this method.
Chord-Tangent Method:
I invented the Chord-Tangent Method on a piece of paper (see the image above); it is inspired by, and is an improvement of, Newton's method. Basically I draw a chord and a tangent line and find their intersections with the horizontal line y = n; these two intersections form lower and upper bound approximations of the location of the root solution (x0, n), where n = x0 ^ k. This method turned out to be the fastest of all: while all the other methods do more than 2000 iterations, this method does just 8 iterations for the case of 8192-bit numbers. So this method is 200-300x faster than the previous fastest, the Shifting Method.
As an example I generate a really huge random integer, 8192 bits in size, and measure the timings of finding the cubic root with all of these methods.
In the test() function you can see that I passed k = 3 as the root's power (cubic root); you can pass any power instead of 3.
Try it online!
def binary_search(begin, end, f, *, niter = [0]):
    while begin < end:
        niter[0] += 1
        mid = (begin + end) >> 1
        if f(mid):
            begin = mid + 1
        else:
            end = mid
    return begin

def binary_search_kth_root(n, k, *, verbose = False):
    # https://en.wikipedia.org/wiki/Binary_search_algorithm
    niter = [0]
    res = binary_search(0, n + 1, lambda root: root ** k < n, niter = niter)
    if verbose:
        print('Binary Search iterations:', niter[0])
    return res

def binary_search_opt_kth_root(n, k, *, verbose = False):
    # https://en.wikipedia.org/wiki/Binary_search_algorithm
    niter = [0]
    hi = 1 << (n.bit_length() // k - 1)
    while hi ** k <= n:
        niter[0] += 1
        hi <<= 1
    res = binary_search(hi >> 1, hi, lambda root: root ** k < n, niter = niter)
    if verbose:
        print('Binary Search Opt iterations:', niter[0])
    return res

def newton_kth_root(n, k, *, verbose = False):
    # https://en.wikipedia.org/wiki/Newton%27s_method
    f = lambda x: x ** k - n
    df = lambda x: k * x ** (k - 1)
    x, px, niter = n, 2 * n, [0]
    while abs(px - x) > 1:
        niter[0] += 1
        px = x
        x -= f(x) // df(x)
    if verbose:
        print('Newton Method iterations:', niter[0])
    mini, minv = None, None
    for i in range(-2, 3):
        v = abs(f(x + i))
        if minv is None or v < minv:
            mini, minv = i, v
    return x + mini

def shifting_kth_root(n, k, *, verbose = False):
    # https://en.wikipedia.org/wiki/Shifting_nth_root_algorithm
    B_bits = 64
    r, y = 0, 0
    B = 1 << B_bits
    Bk_bits = B_bits * k
    Bk_mask = (1 << Bk_bits) - 1
    niter = [0]
    for i in range((n.bit_length() + Bk_bits - 1) // Bk_bits - 1, -1, -1):
        alpha = (n >> (i * Bk_bits)) & Bk_mask
        B_y = y << B_bits
        Bk_yk = (y ** k) << Bk_bits
        Bk_r_alpha = (r << Bk_bits) + alpha
        Bk_yk_Bk_r_alpha = Bk_yk + Bk_r_alpha
        beta = binary_search(1, B, lambda beta: (B_y + beta) ** k <= Bk_yk_Bk_r_alpha, niter = niter) - 1
        y, r = B_y + beta, Bk_r_alpha - ((B_y + beta) ** k - Bk_yk)
    if verbose:
        print('Shifting Method iterations:', niter[0])
    return y

def chord_tangent_kth_root(n, k, *, verbose = False):
    niter = [0]
    hi = 1 << (n.bit_length() // k - 1)
    while hi ** k <= n:
        niter[0] += 1
        hi <<= 1
    f = lambda x: x ** k
    df = lambda x: k * x ** (k - 1)
    # https://i.stack.imgur.com/et9O0.jpg
    x_begin, x_end = hi >> 1, hi
    y_begin, y_end = f(x_begin), f(x_end)
    for icycle in range(1 << 30):
        if x_end - x_begin <= 1:
            break
        niter[0] += 1
        if 0:  # Do Binary Search step if needed
            x_mid = (x_begin + x_end) >> 1
            y_mid = f(x_mid)
            if y_mid > n:
                x_end, y_end = x_mid, y_mid
            else:
                x_begin, y_begin = x_mid, y_mid
        # (y_end - y_begin) / (x_end - x_begin) = (n - y_begin) / (x_n - x_begin) ->
        x_n = x_begin + (n - y_begin) * (x_end - x_begin) // (y_end - y_begin)
        y_n = f(x_n)
        tangent_x = x_n + (n - y_n) // df(x_n) + 1
        chord_x = x_n + (n - y_n) * (x_end - x_n) // (y_end - y_n)
        assert chord_x <= tangent_x, (chord_x, tangent_x)
        x_begin, x_end = chord_x, tangent_x
        y_begin, y_end = f(x_begin), f(x_end)
        assert y_begin <= n, (chord_x, y_begin, n, n - y_begin)
        assert y_end > n, (icycle, tangent_x - binary_search_kth_root(n, k), y_end, n, y_end - n)
    if verbose:
        print('Chord Tangent Method iterations:', niter[0])
    return x_begin

def test():
    import random, timeit
    nruns = 3
    bits = 8192
    n = random.randrange(1 << (bits - 1), 1 << bits)
    a = binary_search_kth_root(n, 3, verbose = True)
    b = binary_search_opt_kth_root(n, 3, verbose = True)
    c = newton_kth_root(n, 3, verbose = True)
    d = shifting_kth_root(n, 3, verbose = True)
    e = chord_tangent_kth_root(n, 3, verbose = True)
    assert abs(a - b) <= 0 and abs(a - c) <= 1 and abs(a - d) <= 1 and abs(a - e) <= 1, (a - b, a - c, a - d, a - e)
    print()
    print('Binary Search timing:', round(timeit.timeit(lambda: binary_search_kth_root(n, 3), number = nruns) / nruns, 3), 'sec')
    print('Binary Search Opt timing:', round(timeit.timeit(lambda: binary_search_opt_kth_root(n, 3), number = nruns) / nruns, 3), 'sec')
    print('Newton Method timing:', round(timeit.timeit(lambda: newton_kth_root(n, 3), number = nruns) / nruns, 3), 'sec')
    print('Shifting Method timing:', round(timeit.timeit(lambda: shifting_kth_root(n, 3), number = nruns) / nruns, 3), 'sec')
    print('Chord Tangent Method timing:', round(timeit.timeit(lambda: chord_tangent_kth_root(n, 3), number = nruns) / nruns, 3), 'sec')

if __name__ == '__main__':
    test()
Output:
Binary Search iterations: 8192
Binary Search Opt iterations: 2732
Newton Method iterations: 9348
Shifting Method iterations: 2752
Chord Tangent Method iterations: 8
Binary Search timing: 0.506 sec
Binary Search Opt timing: 0.05 sec
Newton Method timing: 2.09 sec
Shifting Method timing: 0.03 sec
Chord Tangent Method timing: 0.001 sec
I came up with my own answer, which takes #Mahmoud Kassem's idea, simplifies the code, and makes it more reusable:
import decimal

def cube_root(x):
    return decimal.Decimal(x) ** (decimal.Decimal(1) / decimal.Decimal(3))
I tested it in Python 3.5.1 and Python 2.7.8, and it seemed to work fine.
The result will have as many digits as specified by the decimal context the function is run in, which by default is 28 decimal places. According to the documentation for the power function in the decimal module, the result is well-defined but only "almost always correctly rounded". If you need a more accurate result, it can be done as follows:
with decimal.localcontext() as context:
    context.prec = 50
    print(cube_root(42))
In older versions of Python, 1/3 is equal to 0. In Python 3.0, 1/3 is equal to 0.33333333333 (and 1//3 is equal to 0).
So, either change your code to use 1/3.0 or switch to Python 3.0 .
Try converting the exponent to a floating number, as the default behaviour of / in Python is integer division
n**(1/float(3))
Well, if you're not particularly worried about precision, you could convert it to a string, chop off some digits, use the exponent function, and then multiply the result by the root of however much you chopped off.
E.g. 32123 is about equal to 32 * 1000, so its cube root is about equal to the cube root of 32 times the cube root of 1000. The latter can be calculated by dividing the number of zeros by 3.
This avoids the need for extension modules.
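A rough sketch of that idea (the name approx_nth_root is just illustrative; this only gives a float-precision estimate of the true root):

def approx_nth_root(x, n, keep_digits=15):
    # keep the leading digits and track how many powers of 10**n were chopped off
    s = str(x)
    chopped = max(0, len(s) - keep_digits)
    chopped -= chopped % n                   # chop a multiple of n digits
    head = int(s[:len(s) - chopped])
    return head ** (1.0 / n) * 10 ** (chopped // n)

print(approx_nth_root(11968003966030964356885611480383408833172346450467339251, 20))
# roughly 567, in line with the exact gmpy answer above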
