Restructure Newton's method - python

Restructure Newton's method (Case Study: Approximating Square Roots) by decomposing it into three cooperating functions: newton, limitReached, and improveEstimate.
The newton function can use either the recursive strategy of Project 2 or the iterative strategy of the Approximating Square Roots Case Study. The task of testing for the limit is assigned to a function named limitReached, whereas the task of computing a new approximation is assigned to a function named improveEstimate. Each function expects the relevant arguments and returns an appropriate value.
# Modify the code below
"""
Program: newton.py
Author: Ken
Compute the square root of a number.
1. The input is a number.
2. The outputs are the program's estimate of the square root
using Newton's method of successive approximations, and
Python's own estimate using math.sqrt.
"""
import math
# Receive the input number from the user
x = float(input("Enter a positive number: "))
# Initialize the tolerance and estimate
tolerance = 0.000001
estimate = 1.0
# Perform the successive approximations
while True:
    estimate = (estimate + x / estimate) / 2
    difference = abs(x - estimate ** 2)
    if difference <= tolerance:
        break
# Output the result
print("The program's estimate is", estimate)
print("Python's estimate is ", math.sqrt(x))
This is what I have so far, and it kind of works. I'm new to Python and stuck on improveEstimate; I'm not sure what to do.
import math
tolerance = 0.000001
def newton(x):
    estimate = 1.0
    while True:
        estimate = (estimate + x / estimate) / 2
        difference = abs(x - estimate ** 2)
        if difference <= tolerance:
            break
    return estimate

def limitReached(x, estimate):
    difference = x - estimate ** 2
    return abs(difference) <= 0.000001

def improveEstimate(x, estimate):
    while True:
        estimate = (estimate + x / estimate) / 2
        difference = abs(x - estimate ** 2)
        if difference <= tolerance:
            break
    return estimate

def main():
    while True:
        x = input("Enter a positive number or enter/return to quit: ")
        if x == "":
            break
        x = float(x)
        print("The program's estimate is", newton(x))
        print("Python's estimate is ", math.sqrt(x))

if __name__ == "__main__":
    main()
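One way to finish the decomposition (a sketch, not the only valid structure): make improveEstimate perform a single Newton step, make limitReached do only the tolerance test, and let newton own the loop that calls both. Your main can stay exactly as it is.

import math

TOLERANCE = 0.000001

def limitReached(x, estimate):
    """Return True when estimate ** 2 is within TOLERANCE of x."""
    return abs(x - estimate ** 2) <= TOLERANCE

def improveEstimate(x, estimate):
    """Return one improved approximation (a single Newton step)."""
    return (estimate + x / estimate) / 2

def newton(x):
    """Improve the estimate until limitReached says to stop."""
    estimate = 1.0
    while not limitReached(x, estimate):
        estimate = improveEstimate(x, estimate)
    return estimate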

Related

Solving for rate to make NPV zero Python

My solution so far (which does not work and gets stuck), using the NPV formula for monthly cash flows and attempting to find the discount rate that makes it zero:
import numpy  # note: numpy.npv was removed in NumPy 1.20, so this needs an older NumPy

def goal_seek(target, cashflows, _threshold):
    threshold = _threshold
    lower = -10000
    upper = 10000
    solve = (lower + upper) / 2
    while abs(threshold) >= _threshold:
        print(f'Threshold is: {threshold}')
        print(f'range is: {lower} ---- {solve} ---- {upper}')
        if threshold < 0:
            upper = solve
            solve = (lower + upper) / 2
        elif threshold > 0:
            lower = solve
            solve = (lower + upper) / 2
        threshold = target - numpy.npv(((numpy.sign(solve) * (numpy.abs(solve) + 1) ** (1 / 12)) - 1), cashflows)
    print(f'Final result: Threshold: {threshold}....Solved input: {solve}')
    return solve

goal_seek(0, [-1, 0, 0, 0, .5], .0001)
Running the above example results in this binary search algorithm getting stuck on the output below:
Threshold is: 0.9767928293785059
range is: 10000.0 ---- 10000.0 ---- 10000
Is there an easy scipy module to solve for a single variable non-linear equation such as NPV?
I'm not sure if this is what you want:

np.npv()
https://numpy.org/doc/stable/reference/generated/numpy.npv.html

np.irr()
https://numpy.org/doc/stable/reference/generated/numpy.irr.html

(Note that both were deprecated in NumPy 1.18 and removed in 1.20; they now live in the separate numpy-financial package.)
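For the scipy part of the question: scipy.optimize.brentq finds a root of a single-variable function given a bracketing interval where the function changes sign. A sketch with a hand-rolled NPV helper (the helper and the bracket endpoints are choices made here, using the cash flows from the question):

from scipy.optimize import brentq

def npv(rate, cashflows):
    # Present value of the cash flows at the given periodic rate.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Find the rate that makes the NPV zero, searching a sign-changing bracket.
rate = brentq(lambda r: npv(r, [-1, 0, 0, 0, .5]), -0.99, 10)
print(rate)  # about -0.159 for these cash flows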

Newton's Method with Looping Recursion

I have the code here all figured out. The directions say to "Convert Newton’s method for approximating square roots in Project 1 to a recursive function named newton. (Hint: The estimate of the square root should be passed as a second argument to the function.)"
How would I work out these directions according to my code here?
# Initialize the tolerance
TOLERANCE = 0.000001

def newton(x):
    """Returns the square root of x."""
    # Perform the successive approximations
    estimate = 1.0
    while True:
        estimate = (estimate + x / estimate) / 2
        difference = abs(x - estimate ** 2)
        if difference <= TOLERANCE:
            break
    return estimate

def main():
    """Allows the user to obtain square roots."""
    while True:
        # Receive the input number from the user
        x = input("Enter a positive number or enter/return to quit: ")
        if x == "":
            break
        x = float(x)
        # Output the result
        print("The program's estimate is", newton(x))
        print("Python's estimate is ", math.sqrt(x))

if __name__ == "__main__":
    main()
Essentially, you need to convert the while True: part of your code into a recursive function,
something like this:
def newton(x, estimate):
    estimate = (estimate + x / estimate) / 2
    difference = abs(x - estimate ** 2)
    if difference > TOLERANCE:
        estimate = newton(x, estimate)
    return estimate
Notice how the condition is inverted: you check whether you need to continue, and once you don't, the final value is carried back out of the recursion and returned.
This is the full code from the answer and the comments discussion of jimakr. I used this default argument in the newton function (credits to NP890):

def newton(x, estimate=1.0):

I also import the math module (at the top of the file) to use math.sqrt.
THE FULL CODE:
import math

# Initialize the tolerance
TOLERANCE = 0.000001

def newton(x, estimate=1.0):
    """Returns the square root of x."""
    # Perform the successive approximations
    estimate = (estimate + x / estimate) / 2
    difference = abs(x - estimate ** 2)
    if difference > TOLERANCE:
        estimate = newton(x, estimate)
    return estimate

def main():
    """Allows the user to obtain square roots."""
    while True:
        # Receive the input number from the user
        x = input("Enter a positive number or enter/return to quit: ")
        if x == "":
            break
        x = float(x)
        # Output the result
        print("The program's estimate is", newton(x))
        print("Python's estimate is ", math.sqrt(x))

if __name__ == "__main__":
    main()

Calculating inverse trigonometric functions with formulas

I have been trying to create a custom calculator for trigonometric functions. Aside from Chebyshev polynomials and/or the CORDIC algorithm, I have used Taylor series, which have been accurate to a few decimal places.
This is what I have created to calculate simple trigonometric functions without any modules:
from __future__ import division

def sqrt(n):
    ans = n ** 0.5
    return ans

def factorial(n):
    k = 1
    for i in range(1, n + 1):
        k = i * k
    return k

def sin(d):
    pi = 3.14159265359
    x = d * pi / 180  # converting degrees to radians (180 degrees = pi radians);
                      # the direct formula avoids the int() truncation and the
                      # division by zero at d = 0 in the original 180 / int(d)
    ans = x - (x ** 3 / factorial(3)) + (x ** 5 / factorial(5)) - (x ** 7 / factorial(7)) + (x ** 9 / factorial(9))
    return ans

def cos(d):
    pi = 3.14159265359
    x = d * pi / 180
    ans = 1 - (x ** 2 / factorial(2)) + (x ** 4 / factorial(4)) - (x ** 6 / factorial(6)) + (x ** 8 / factorial(8))
    return ans

def tan(d):
    ans = sin(d) / sqrt(1 - sin(d) ** 2)
    return ans
Unfortunately, I could not find any sources that would help me interpret inverse trigonometric function formulas for Python. I have also tried raising sin(x) to the power of -1 (sin(x) ** -1), which didn't work as expected.
What could be the best solution to do this in Python (by best, I mean the simplest, with accuracy similar to the Taylor series)? Is this possible with power series, or do I need to use the CORDIC algorithm?
The question is broad in scope, but here are some simple ideas (and code!) that might serve as a starting point for computing arctan. First, the good old Taylor series. For simplicity, we use a fixed number of terms; in practice, you might want to decide the number of terms to use dynamically based on the size of x, or introduce some kind of convergence criterion. With a fixed number of terms, we can evaluate efficiently using something akin to Horner's scheme.
def arctan_taylor(x, terms=9):
    """
    Compute arctan for small x via Taylor polynomials.

    Uses a fixed number of terms. The default of 9 should give good results
    for abs(x) < 0.1. Results will become poorer as abs(x) increases,
    becoming unusable as abs(x) approaches 1.0 (the radius of convergence
    of the series).
    """
    # Uses Horner's method for evaluation.
    t = 0.0
    for n in range(2 * terms - 1, 0, -2):
        t = 1.0 / n - x * x * t
    return x * t
The above code gives good results for small x (say smaller than 0.1 in absolute value), but the accuracy drops off as x becomes larger, and for abs(x) > 1.0, the series never converges, no matter how many terms (or how much extra precision) we throw at it. So we need a better way to compute for larger x. One solution is to use argument reduction, via the identity arctan(x) = 2 * arctan(x / (1 + sqrt(1 + x^2))). This gives the following code, which builds on arctan_taylor to give reasonable results for a wide range of x (but beware possible overflow and underflow when computing x*x).
import math

def arctan_taylor_with_reduction(x, terms=9, threshold=0.1):
    """
    Compute arctan via argument reduction and Taylor series.

    Applies reduction steps until x is below `threshold`,
    then uses Taylor series.
    """
    reductions = 0
    while abs(x) > threshold:
        x = x / (1 + math.sqrt(1 + x * x))
        reductions += 1
    return arctan_taylor(x, terms=terms) * 2 ** reductions
Alternatively, given an existing implementation for tan, you could simply find a solution y to the equation tan(y) = x using traditional root-finding methods. Since arctan is already naturally bounded to lie in the interval (-pi/2, pi/2), bisection search works well:
def arctan_from_tan(x, tolerance=1e-15):
    """
    Compute arctan as the inverse of tan, via bisection search. This assumes
    that you already have a high quality tan function.
    """
    low, high = -0.5 * math.pi, 0.5 * math.pi
    while high - low > tolerance:
        mid = 0.5 * (low + high)
        if math.tan(mid) < x:
            low = mid
        else:
            high = mid
    return 0.5 * (low + high)
Finally, just for fun, here's a CORDIC-like implementation, which is really more appropriate for a low-level implementation than for Python. The idea here is that you precompute, once and for all, a table of arctan values for 1, 1/2, 1/4, etc., and then use those to compute general arctan values, essentially by computing successive approximations to the true angle. The remarkable part is that, after the precomputation step, the arctan computation involves only additions, subtractions, and multiplications by powers of 2. (Of course, those multiplications aren't any more efficient than any other multiplication at the level of Python, but closer to the hardware, this could potentially make a big difference.)
cordic_table_size = 60
cordic_table = [(2 ** -i, math.atan(2 ** -i))
                for i in range(cordic_table_size)]

def arctan_cordic(y, x=1.0):
    """
    Compute arctan(y/x), assuming x positive, via a CORDIC-like method.
    """
    r = 0.0
    for t, a in cordic_table:
        if y < 0:
            r, x, y = r - a, x - t * y, y + t * x
        else:
            r, x, y = r + a, x + t * y, y - t * x
    return r
Each of the above methods has its strengths and weaknesses, and all of the above code can be improved in a myriad of ways. I encourage you to experiment and explore.
To wrap it all up, here are the results of calling the above functions on a small number of not-very-carefully-chosen test values, comparing with the output of the standard library math.atan function:
test_values = [2.314, 0.0123, -0.56, 168.9]
for value in test_values:
    print("{:20.15g} {:20.15g} {:20.15g} {:20.15g}".format(
        math.atan(value),
        arctan_taylor_with_reduction(value),
        arctan_from_tan(value),
        arctan_cordic(value),
    ))
Output on my machine:
1.16288340166519 1.16288340166519 1.16288340166519 1.16288340166519
0.0122993797673 0.0122993797673 0.0122993797673002 0.0122993797672999
-0.510488321916776 -0.510488321916776 -0.510488321916776 -0.510488321916776
1.56487573286064 1.56487573286064 1.56487573286064 1.56487573286064
The simplest way to do any inverse function is to use binary search.
Definitions

Let's assume a function

x = g(y)

and we want to code its inverse:

y = f(x) = f(g(y))
x = <x0, x1>
y = <y0, y1>

Binary search on floats

You can do this with integer math by accessing the mantissa bits, as in Any Faster RMS Value Calculation in C?, but if you do not know the exponent of the result prior to the computation, then you need to use floats for the binary search too.

The idea behind the binary search is to change the mantissa of y from y1 to y0 bit by bit, from MSB to LSB. Then call the direct function g(y), and if the result crosses x, revert the last bit change.

When using floats, you can use a variable that holds the approximate value of the targeted mantissa bit instead of accessing integer bits. That eliminates the unknown-exponent problem. So at the beginning, set y = y0 and the actual bit to the MSB value, i.e. b = (y1 - y0)/2. After each iteration halve it, and do as many iterations as you have mantissa bits, n. This way you obtain the result in n iterations to within (y1 - y0)/2^n accuracy.

If your inverse function is not monotonic, break it into monotonic intervals and handle each as a separate binary search. Whether the function is increasing or decreasing only determines the direction of the crossing condition (the use of < or >).

C++ acos example

So y = acos(x) is defined on x = <-1, +1>, y = <0, M_PI> and is decreasing, so:
double f64_acos(double x)
{
    const int n = 52; // mantissa bits
    double y, y0, b;
    int i;
    // handle domain error
    if (x < -1.0) return 0;
    if (x > +1.0) return 0;
    // x = <-1,+1> , y = <0,M_PI> , decreasing
    for (y = 0.0, b = 0.5 * M_PI, i = 0; i < n; i++, b *= 0.5) // y is min, b is half of max, halved each iteration
    {
        y0 = y;                 // remember original y
        y += b;                 // try to set the "bit"
        if (cos(y) < x) y = y0; // if the result crosses x, return to the original y (decreasing is <, increasing is >)
    }
    return y;
}
I tested it like this:
double x0,x1,y;
for (x0=0.0;x0<M_PI;x0+=M_PI*0.01) // cycle all angle range <0,M_PI>
{
y=cos(x0); // direct function (from math.h)
x1=f64_acos(y); // my inverse function
if (fabs(x1-x0)>1e-9) // check result and output to log if error
Form1->mm_log->Lines->Add(AnsiString().sprintf("acos(%8.3lf) = %8.3lf != %8.3lf",y,x0,x1));
}
No difference was found, so the implementation works correctly. Of course, a binary search over a 52-bit mantissa is usually slower than a polynomial approximation... on the other hand, the implementation is very simple...
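Since this is a Python-tagged question, here is a direct Python translation of the same idea (a sketch; in real code you would of course just call math.acos):

import math

def f64_acos(x):
    """Approximate acos(x) by bisecting on the mantissa 'bits' of y."""
    n = 52  # mantissa bits in a double
    if x < -1.0 or x > +1.0:
        return 0.0  # domain error, handled as in the C++ version
    y = 0.0            # y starts at the minimum of <0, pi>
    b = 0.5 * math.pi  # half of the maximum, halved each iteration
    for _ in range(n):
        y0 = y  # remember original y
        y += b  # try to set the "bit"
        if math.cos(y) < x:  # decreasing function, so the crossing test uses <
            y = y0           # result crossed x: revert
        b *= 0.5
    return y

print(f64_acos(0.5), math.acos(0.5))  # the two should agree closely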
[Notes]
If you do not want to deal with the monotonic intervals, you can try approximation search.
As you are dealing with goniometric functions, you need to handle singularities to avoid NaN, division by zero, etc.
If you're interested in more binary search examples (mostly on integers), see Power by squaring for negative exponents.

Why does simple gradient descent diverge?

This is my second attempt at implementing gradient descent in one variable and it always diverges. Any ideas?
This is simple linear regression for minimizing the residual sum of squares in one variable.
def gradient_descent_wtf(xvalues, yvalues):
    tolerance = 0.1

    #y=mx+b
    #some line to predict y values from x values
    m = 1.
    b = 1.

    #a predicted y-value has value mx + b
    for i in range(0, 10):
        #calculate y-value predictions for all x-values
        predicted_yvalues = list()
        for x in xvalues:
            predicted_yvalues.append(m * x + b)

        #now calculate the residuals = y-value - predicted y-value for each point
        residuals = list()
        number_of_points = len(yvalues)
        for n in range(0, number_of_points):
            residuals.append(yvalues[n] - predicted_yvalues[n])

        ## calculate the residual sum of squares from the residuals, that is,
        ## square each residual and add them all up. we will try to minimize
        ## the residual sum of squares later.
        residual_sum_of_squares = 0.
        for r in residuals:
            residual_sum_of_squares += r ** 2
        print("RSS = %s" % residual_sum_of_squares)

        #now make a version of the residuals which is multiplied by the x-values
        residuals_times_xvalues = list()
        for n in range(0, number_of_points):
            residuals_times_xvalues.append(residuals[n] * xvalues[n])

        #now create the sums for the residuals and for the residuals times the x-values
        residuals_sum = sum(residuals)
        residuals_times_xvalues_sum = sum(residuals_times_xvalues)

        #now multiply the sums by a positive scalar and add each to m and b.
        residuals_sum *= 0.1
        residuals_times_xvalues_sum *= 0.1
        b += residuals_sum
        m += residuals_times_xvalues_sum

        #and repeat until convergence.
        #convergence occurs when ||sum vector|| < some tolerance, where
        # ||sum vector|| = sqrt( residuals_sum**2 + residuals_times_xvalues_sum**2 )
        #check for convergence
        magnitude_of_sum_vector = (residuals_sum ** 2 + residuals_times_xvalues_sum ** 2) ** 0.5
        if magnitude_of_sum_vector < tolerance:
            break

    return (b, m)
Result:
gradient_descent_wtf([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567])
RSS = 370433.0
RSS = 300170125.7
RSS = 4.86943013045e+11
RSS = 7.90447409339e+14
RSS = 1.28312217794e+18
RSS = 2.08287421094e+21
RSS = 3.38110045417e+24
RSS = 5.48849288217e+27
RSS = 8.90939341376e+30
RSS = 1.44624932026e+34
Out[108]:
(-3.475524066284303e+16, -2.4195981188763203e+17)
The gradients are huge -- hence you are following large vectors for long distances (0.1 times a large number is large). Find unit vectors in the appropriate direction. Something like this (with comprehensions replacing your loops):
def gradient_descent_wtf(xvalues, yvalues):
    tolerance = 0.1

    m = 1.
    b = 1.

    for i in range(0, 10):
        predicted_yvalues = [m * x + b for x in xvalues]
        residuals = [y - y_hat for y, y_hat in zip(yvalues, predicted_yvalues)]

        residual_sum_of_squares = sum(r ** 2 for r in residuals)  # only needed for debugging purposes
        print("RSS = %s" % residual_sum_of_squares)

        residuals_times_xvalues = [r * x for r, x in zip(residuals, xvalues)]

        residuals_sum = sum(residuals)
        residuals_times_xvalues_sum = sum(residuals_times_xvalues)

        # (residuals_sum, residuals_times_xvalues_sum) is a vector which points in the negative
        # gradient direction. *Find a unit vector which points in the same direction*
        magnitude = (residuals_sum ** 2 + residuals_times_xvalues_sum ** 2) ** 0.5
        residuals_sum /= magnitude
        residuals_times_xvalues_sum /= magnitude

        b += residuals_sum * (0.1)
        m += residuals_times_xvalues_sum * (0.1)

        # check for convergence -- this needs work!
        magnitude_of_sum_vector = (residuals_sum ** 2 + residuals_times_xvalues_sum ** 2) ** 0.5
        if magnitude_of_sum_vector < tolerance:
            break

    return (b, m)
For example:
>>> gradient_descent_wtf([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567])
RSS = 370433.0
RSS = 368732.1655050716
RSS = 367039.18363896786
RSS = 365354.0543519137
RSS = 363676.7775934381
RSS = 362007.3533123621
RSS = 360345.7814567845
RSS = 358692.061974069
RSS = 357046.1948108295
RSS = 355408.17991291644
(1.1157111313023558, 1.9932828425473605)
which is certainly much more plausible.
It isn't a trivial matter to make a numerically stable gradient-descent algorithm. You might want to consult a decent textbook in numerical analysis.
First, your code is right.
But you should consider some of the math when you do linear regression.
For example, if the residual is -205.8 and your learning rate is 0.1, you will get a huge descent step of -20.58.
It's such a large step that you can't get back to the correct m and b. You have to make your step small enough.
There are two ways to make the gradient descent step reasonable (a sketch of the first follows below):
initialize a small learning rate, such as 0.001 or 0.0003.
Divide your step by the total number of your input values.
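For illustration, here is a minimal sketch of the first suggestion (the helper name and iteration count are choices made here, not from the answer): the same update as the original code, but with a learning rate of 0.001 and more iterations, since smaller steps converge more slowly.

def gradient_descent_small_lr(xvalues, yvalues, lr=0.001, iterations=10000):
    # Same update rule as the original code, just a much smaller step.
    m, b = 1.0, 1.0
    for _ in range(iterations):
        residuals = [y - (m * x + b) for x, y in zip(xvalues, yvalues)]
        b += lr * sum(residuals)
        m += lr * sum(r * x for r, x in zip(residuals, xvalues))
    return (b, m)

print(gradient_descent_small_lr([1,2,3,4,5,6,7,8,9,10],
                                [6,23,8,56,3,24,234,76,59,567]))
# converges toward the least-squares fit, roughly (b, m) = (-100.9, 37.6)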

Given f, is there an automatic way to calculate fprime for Newton's method?

The following was ported from the pseudo-code from the Wikipedia article on Newton's method:
#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys

x0 = 1
f = lambda x: x ** 2 - 2
fprime = lambda x: 2 * x
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20

for i in range(maxIterations):
    denominator = fprime(x0)
    if abs(denominator) < epsilon:
        print('WARNING: Denominator is too small')
        break
    newtonX = x0 - f(x0) / denominator
    if abs(newtonX - x0) < tolerance:
        print('The root is', newtonX)
        break
    x0 = newtonX
else:
    print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
    print('The last computed approximate root was', newtonX)
Question
Is there an automated way to calculate some form of fprime given some form of f in Python 3.x?
A common way of approximating the derivative of f at x is using a finite difference:
f'(x) = (f(x + h) - f(x)) / h          (forward difference)
f'(x) = (f(x + h) - f(x - h)) / (2h)   (symmetric difference)
The best choice of h depends on x and f: mathematically the difference approaches the derivative as h tends to 0, but the method suffers from loss of accuracy due to catastrophic cancellation if h is too small. Also x+h should be distinct from x. Something like h = x*1e-15 might be appropriate for your application. See also implementing the derivative in C/C++.
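A quick way to see that trade-off (a sketch; the exact digits vary by platform), using f(x) = x ** 2 - 2, whose true derivative at x = 1 is 2:

f = lambda x: x ** 2 - 2

# Truncation error dominates when h is large; cancellation when h is tiny.
for h in (1e-2, 1e-8, 1e-15):
    forward = (f(1 + h) - f(1)) / h
    symmetric = (f(1 + h) - f(1 - h)) / (2 * h)
    print("h = %g: forward = %.15g, symmetric = %.15g" % (h, forward, symmetric))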
You can avoid approximating f' by using the secant method. It doesn't converge as fast as Newton's, but it's computationally cheaper and you avoid the problem of having to calculate the derivative.
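A sketch of the secant method for the same problem (the names here are mine, not from the question's code):

def secant(f, x0, x1, tolerance=1e-10, max_iterations=50):
    """Find a root of f without a derivative, via the secant method."""
    for _ in range(max_iterations):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break  # secant line is flat; cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tolerance:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x ** 2 - 2, 1.0, 2.0))  # close to 1.4142135623730951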
You can approximate fprime any number of ways. One of the simplest would be something like:
fprime = lambda x, dx=0.1: (f(x + dx) - f(x - dx)) / (2 * dx)
The idea here is to sample f around the point x. The sampling region (determined by dx) should be small enough that f is approximately linear over it. The algorithm I've used is known as the midpoint method. You could get more accuracy by using higher-order polynomial fits for most functions, but that would be more expensive to calculate.
Of course, you'll always be more accurate and efficient if you know the analytical derivative.
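Going beyond the answers above: if f is available in symbolic form, the third-party SymPy package (an addition here, not part of the original answers) can compute fprime exactly and turn it back into a plain Python function:

import sympy

x = sympy.symbols('x')
expr = x ** 2 - 2

f = sympy.lambdify(x, expr)                      # plain callable for f
fprime = sympy.lambdify(x, sympy.diff(expr, x))  # derivative, computed symbolically

print(fprime(3))  # 6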
Answer
Define the functions formula and derivative as follows, directly after your import.
def formula(*array):
    calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
    calculate.coefficients = array
    return calculate

def derivative(function):
    return (p * c for p, c in enumerate(function.coefficients[1:], 1))
Redefine f using formula by plugging in the function's coefficients in order of increasing power.
f = formula(-2, 0, 1)
Redefine fprime so that it is automatically created using functions derivative and formula.
fprime = formula(*derivative(f))
That should solve your requirement to automatically calculate fprime from f in Python 3.x.
Summary
This is the final solution that produces the original answer while automatically calculating fprime.
#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys

def formula(*array):
    calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
    calculate.coefficients = array
    return calculate

def derivative(function):
    return (p * c for p, c in enumerate(function.coefficients[1:], 1))

x0 = 1
f = formula(-2, 0, 1)
fprime = formula(*derivative(f))
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20

for i in range(maxIterations):
    denominator = fprime(x0)
    if abs(denominator) < epsilon:
        print('WARNING: Denominator is too small')
        break
    newtonX = x0 - f(x0) / denominator
    if abs(newtonX - x0) < tolerance:
        print('The root is', newtonX)
        break
    x0 = newtonX
else:
    print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
    print('The last computed approximate root was', newtonX)
