Finding the root of an equation with a constraint - python

In python, I would like to find the roots of equations of the form:
-x*log(x) + (1-x)*log(n) - (1-x)*log(1 - x) - k = 0
where n and k are parameters that will be specified.
An additional constraint on the roots is that x >= (1-x)/n. So just for what it's worth, I'll be filtering out roots that don't satisfy that.
My first attempt was to use scipy.optimize.fsolve (note that I'm just setting k and n to be 0 and 1 respectively):
def f(x):
    return -x*log(x) + (1-x)*log(1) - (1-x)*log(1-x)

fsolve(f, 1)
Using math.log, I got ValueErrors because I was supplying bad input to log. Using numpy.log, I got some divide-by-zero and invalid-value-in-multiply warnings.
I adjusted f like so, just to see what it would do:
def f(x):
    if x <= 0:
        return 1000
    if x >= 1:
        return 2000
    return -x*log(x) + (1-x)*log(1) - (1-x)*log(1-x)
Now I get
/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py:221: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
Using python, how can I solve for x for various n and k parameters in the original equation?

fsolve also lets you pass an initial guess for where to start. My suggestion would be to plot the equation and have the user supply an initial guess, either with the mouse or via text, to use as the starting point. You may also want to change the out-of-bounds values:
if x <= 0:
    return 1000 + abs(x)
if x >= 1:
    return 2000 + abs(x)
This way the function has a slope outside of the region of interest that will guide the solver back into the interesting region.
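For what it's worth, here is a minimal sketch of that approach. The parameter values n = 2 and k = 0.5 are just illustrative (not from the question), and the final check mirrors the x >= (1-x)/n filtering mentioned above:

import numpy as np
from scipy.optimize import fsolve

def f(x, n=2.0, k=0.5):
    x = np.asarray(x).item()      # fsolve passes a length-1 array
    if x <= 0:
        return 1000 + abs(x)      # out of bounds: large value sloped back toward (0, 1)
    if x >= 1:
        return 2000 + abs(x)
    return -x*np.log(x) + (1 - x)*np.log(n) - (1 - x)*np.log(1 - x) - k

root, = fsolve(f, 0.5)            # initial guess inside (0, 1)
if root >= (1 - root)/2.0:        # keep only roots satisfying x >= (1-x)/n, here n = 2
    print(root)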

Related

Python for loop: RuntimeWarning: divide by zero

I'm currently trying to implement the Newton-Raphson algorithm for some finance-based calculations.
I tried it in Python with a simple for loop, but I get this RuntimeWarning: divide by zero encountered in double_scalars, and I also get no result from the approximation. I tried to fix it by checking every division myself, but I found no step where Python should be forced to divide by zero.
import numpy as np
import math as m
import scipy.stats as si

def totalvol_zero(M):
    v_0 = m.sqrt(2 * abs(M))
    return v_0

def C_prime(M, v):
    C_prime = si.norm.cdf(M/v + v/2) - m.exp(-M)*si.norm.cdf(M/v - v/2)
    return C_prime

def NR(M, C_prime_obs):
    v_0 = totalvol_zero(M)
    for k in range(0, 7, 1):
        v_0 = v_0 - ((C_prime(M, v_0) - C_prime_obs)/(m.sqrt(1/(m.pi * 2))*m.exp(-0.5*((M/v_0 + v_0/2)**2))))
        k += 1
    return v_0

print(NR(2, 2))
This may be a really easy error/typo for some of you, because I am still a beginner in Python, but at the moment I just don't see anything wrong in this code and can't explain why this warning appears and why I don't get any value as a result.
Edit:
Sorry, I forgot about M and v. They are just explicit formulas, so I didn't think they were the cause of this problem.
def moneyness(S, K, d, r, t):
    F = S * m.exp((r-d)*t)
    M = m.log(F/K)
    return M

def totalvol(sigma, t):
    v = sigma * m.sqrt(t)
    return v
These are the explicit expressions for M and v. M is the moneyness of an option, while v is its total volatility. But because I didn't express M and v in the for loop like that, and just used them as numbers in the Newton-Raphson step, I don't think they will help solve the problem.
C_prime_obs is a converted call price of an option. Its value should always be positive, but since I never divide by C_prime_obs, this doesn't change anything.
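One way to see exactly where the division blows up is to turn the NumPy warning into an exception and print each iterate. This is only a debugging sketch, reusing totalvol_zero and C_prime from above; the name NR_debug is mine:

import numpy as np
np.seterr(divide='raise', invalid='raise')   # fail loudly instead of warning

def NR_debug(M, C_prime_obs):
    v = totalvol_zero(M)
    for k in range(7):
        denom = m.sqrt(1/(m.pi * 2)) * m.exp(-0.5*((M/v + v/2)**2))
        print(k, 'v =', v, 'denominator =', denom)   # if denom underflows to 0.0, the next division raises
        v = v - (C_prime(M, v) - C_prime_obs) / denom
    return v

print(NR_debug(2, 2))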

Precision error in `scipy.stats.binom` method

I am using scipy.stats.binom to work with the binomial distribution. Given n and p, the probability function is P(X = k) = C(n, k) * p^k * (1-p)^(n-k).
A sum over k ranging from 0 to n should (and indeed does) give 1. Fixing a point x_0, we can add the probabilities in both directions, and the two sums ought to add to 1. However, the code below yields two different answers when x_0 is close to n.
from scipy.stats import binom

n = 9
p = 0.006985
b = binom(n=n, p=p)
x_0 = 8

# Method 1
cprob = 0
for k in range(x_0, n+1):
    cprob += b.pmf(k)
print('cumulative prob with method 1:', cprob)

# Method 2
cprob = 1
for k in range(0, x_0):
    cprob -= b.pmf(k)
print('cumulative prob with method 2:', cprob)
I expect the outputs from both methods to agree. For x_0 < 7 they agree, but for x_0 >= 8 as above I get
>> cumulative prob with method 1: 5.0683768775504006e-17
>> cumulative prob with method 2: 1.635963929799698e-16
The precision error in the two methods propagates through my code (later) and gives vastly different answers. Any help is appreciated.
Roundoff errors of the order of the machine epsilon are expected and are inevitable. That these propagate and later blow up means that your problem is very poorly conditioned. You'd need to rethink the algorithm or the implementation, depending on where the bad conditioning comes from.
In your specific example you can get by with either np.sum (which tries to be careful with roundoff) or even math.fsum from the standard library.
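For instance, a small sketch of the math.fsum route for the upper tail; the b.sf line is an extra alternative of mine (the distribution's survival function), not something suggested above:

from math import fsum
from scipy.stats import binom

b = binom(n=9, p=0.006985)
x_0 = 8

tail = fsum(b.pmf(k) for k in range(x_0, 9 + 1))   # compensated summation of P(X >= x_0)
tail_sf = b.sf(x_0 - 1)                            # survival function: P(X > x_0 - 1) = P(X >= x_0)
print(tail, tail_sf)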

Why is this an incorrect implementation of Fermat's Factorization?

I'm currently working through Project Euler, and this was my attempt (in Python) at problem 3. I ran this and let it run for roughly 30 minutes. After that, I looked at the numbers under "sum". I found several issues: some of these numbers were even, and thus not prime, and some of them weren't even proper factors of n. Granted, they were only off by about 0.000001 (usually the division yielded x.99999230984 or whatever). The number I eventually stopped at was 3145819243.0.
Can anyone explain why these errors occur?
EDIT: My interpretation of the theorem was basically that, by rearranging variables, you could solve for x as the square root of n + y^2, brute-forcing y until that square root was an integer. After this, the actual prime factor would be x + y.
Here is my code.
import math

n = int(600851475143)
y = int(1)
while y >= 1:
    if math.sqrt(n + (y**2)).is_integer():
        x = math.sqrt(n + (y**2))
        print "x"
        print x
        print "sum"
        print x + y
        if x + y > (600851475142/2):
            print "dead"
        else:
            print "nvm"
    y = y + 1
Typical issue with big numbers and floating-point precision.
When you get to y = 323734167, you compute math.sqrt(n + y**2), which is math.sqrt(104804411734659032).
This is 3.23735095000000010811308548429078847808587868214170702... × 10^8 according to Wolfram Alpha, i.e. not an integer, but 323735095.0 according to Python.
As you can see, Python does not have the precision to see the .00000001...
Instead of testing is_integer, you can test the square of the result:
>>> 323735095 ** 2
104804411734659025
and see if it matches the input (it doesn't, the input is 104804411734659032, off by 7).
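A sketch of that idea done entirely in integer arithmetic (math.isqrt needs Python 3.8+, whereas the original code is Python 2, so treat this as an assumption on my part):

import math

n = 600851475143
y = 1
while True:
    s = n + y * y
    x = math.isqrt(s)        # integer square root, exact for arbitrarily large ints
    if x * x == s:           # exact check: no floating point, no false positives
        print(x, x + y)      # n = x*x - y*y, so x + y (and x - y) are factors of n
        break
    y += 1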

calculation of pi, getting different value than book

The mathematical constant π (pi) is an irrational number with value approximately 3.1415926... The precise value of π is equal to the following infinite sum: π = 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ... We can get a good approximation of π by computing the sum of the first few terms. Write a function approxPi() that takes as a parameter a floating point value error and approximates the constant π within error by computing the above sum, term by term, until the absolute value of the difference between the current sum and the previous sum (with one fewer term) is no greater than error. Once the function finds that the difference is less than error, it should return the new sum. Please note that this function should not use any functions or constants from the math module. You are supposed to use the described algorithm to approximate π, not use the built-in value in Python.
I have done the below program but for some reason I am getting the different value from the one in book.
def pi(error):
    prev = 1
    current = 4
    i = 1
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign * 4 / d
        i = i + 1
    return current
Output:

In [2]: pi(0.01)
Out[2]: 3.146567747182955
But instead I need to get this value
>>> approxPi(0.01)
3.1611986129870506
>>> approxPi(0.0000001)
3.1415928535897395
The approximation you're using converges very slowly; that is, you have to loop quite a lot of times to get a reasonable value. You can see that the difference will be on the order of 1/d, and that's the accuracy. You'll have to loop about 5000 times to get four digits, 50k times to get the next, 500k for the next, and so on (that is, exponential time complexity in the number of digits).
This could be one of the reasons you see a discrepancy here: you simply get a situation where rounding errors add up. Since you need that many iterations, you will never get near the full precision of the floats you're using. Another source of discrepancy is that your reference is probably using another exit condition; with your condition you should (ideally) get an error less than the one provided, and you've got that (3.146567747182955 - pi < 0.01). It actually looks like your reference is using the condition abs(current - prev) > 4*error instead.
The formula you're using is pi = 4*arctan(1), with the Maclaurin expansion of arctan(x) evaluated at a value of x that is on the limit of converging at all. To get better performance one should use a smaller x in that expansion. For example, pi = 16*arctan(1/5) - 4*arctan(1/239) could be used (this gives linear time complexity for the digits):
def pi(error):
    a = 1.0/5
    b = 1.0/239
    prev = 1
    current = 0.0
    i = 0
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign * (16*a - 4*b)/d
        a = a*1.0/(5*5)
        b = b*1.0/(239*239)
        i = i + 1
    return current
I guess the stopping rules for your function and approxPi are different. In fact, your estimate is better. If you print out all the values of current, you will see that when i equals 50 your function produces your desired output. But it goes beyond that and produces a better approximation.
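For example, a quick way to see this is the question's own function with a print added, nothing more:

def pi_trace(error):
    prev = 1
    current = 4
    i = 1
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign * 4 / d
        print(i, current)    # watch the intermediate sums pass 3.1611986...
        i = i + 1
    return current

pi_trace(0.01)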
So to get the answer you are looking for, the question as posed is wrong in how it describes the exit condition.
Re-organizing, you get Pi = 4*(1/1 - 1/3 + 1/5 - 1/7 + ...). To get 3.1611986129870506 with an error of 0.01, you look at the successive terms and stop when a term < error:
from itertools import count, cycle  # , izip for Py2

def approxPi(error):
    p = 0
    for sign, d in zip(cycle([1, -1]), count(1, 2)):  # izip for Py2
        n = sign / d
        p += n
        if abs(n) < error:
            break
    return 4*p
Then you get the correct answers:
>>> approxPi(0.01)
3.1611986129870506
>>> approxPi(0.0000001)
3.1415928535897395
Using your code (from #PaulBoddington: Find the value of pi)
def pi(error):
    prev = 0
    current = 1
    i = 1
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign / d
        i += 1
    return 4*current
Note: this is not the difference between the current sum and the previous sum, so the question's description is off, but it is equivalent to difference_between_sums < error*4. So to get the correct exit for your original code, just multiply the error by 4, e.g.:
>>> pi(0.04)
3.1611986129870506

Finding the square root using Newton's method (errors!)

I'm working to finish a math problem that approximates the square root of a number using Newton's guess-and-check method in Python. The user is supposed to enter a number, an initial guess for its square root, and how many times they want the program to improve that guess before returning. To make things easier and get to know Python (I've only just started learning the language a couple of months ago), I broke it up into a number of smaller functions; the problem now, though, is that I'm having trouble calling each function and passing the numbers through.
Here is my code, with comments to help (each function is in order of use):
# This program approximates the square root of a number (entered by the user)
# using Newton's method (guess-and-check). I started with one long function,
# but after research, have attempted to apply smaller functions on top of each
# other.

# * NEED TO: call functions properly; implement a counting loop so the
# goodGuess function can only be accessed the certain # of times the user
# specifies. Even if the - .001 range isn't reached, it should return.

# sqrtNewt is basically the main, which initiates user input.
def sqrtNewt():
    # c equals a running count initiated at the beginning of the program, to
    # use variable count.
    print("This will approximate the square root of a number, using a guess-and-check process.")
    x = eval(input("Please type in a positive number to find the square root of: "))
    guess = eval(input("Please type in a guess for the square root of the number you entered: "))
    count = eval(input("Please enter how many times would you like this program to improve your initial guess: "))
    avg = average(guess, x)
    g, avg = improveG(guess, x)
    final = goodGuess(avg, x)
    guess = square_root(guess, x, count)
    compare(guess, x)

# Average function is called; is the first step that gives an initial average,
# which implements through smaller layers of simple functions stacked on each
# other.
def average(guess, x):
    return ((guess + x) / 2)

# An improvement function which builds upon the original average function.
def improveG(guess, x):
    return average(guess, x/guess)

# A function which determines if the difference between guess X guess minus the
# original number results in an absolute value less than 0.001. Not taking
# absolute values (like if guess times guess was greater than x) might result
# in errors
from math import *
def goodGuess(avg, x):
    num = abs(avg * avg - x)
    return (num < 0.001)

# A function that, if not satisfied, continues to "tap" other functions for
# better guess outputs. i.e. as long as the guess is not good enough, keep
# improving the guess.
def square_root(guess, x, count):
    while(not goodGuess(avg, x)):
        c = 0
        c = c + 1
        if (c < count):
            guess = improveG(guess, x)
        elif (c == count):
            return guess
        else:
            pass

# Function is used to check the difference between guess and the sqrt method
# applied to the user input.
import math
def compare(guess, x):
    diff = math.sqrt(x) - guess
    print("The following is the difference between the approximation")
    print("and the Math.sqrt method, not rounded:", diff)

sqrtNewt()
Currently, I get this error:

g, avg = improveG(guess, x)
TypeError: 'float' object is not iterable
The final function uses the final iteration of the guess to subtract from the math square root method, and returns the overall difference.
Am I even doing this right? Working code would be appreciated, with suggestions, if you can provide it. Again, I'm a newbie, so I apologize for misconceptions or blind obvious errors.
Implementation of the Newton method:
It should be fairly easy to add little tweaks to it when needed. Try it, and tell us when you get stuck.
from math import *

def average(a, b):
    return (a + b) / 2.0

def improve(guess, x):
    return average(guess, x/guess)

def good_enough(guess, x):
    d = abs(guess*guess - x)
    return (d < 0.001)

def square_root(guess, x):
    while(not good_enough(guess, x)):
        guess = improve(guess, x)
    return guess

def my_sqrt(x):
    r = square_root(1, x)
    return r
>>> my_sqrt(16)
4.0000006366929393
NOTE: you will find enough examples of how to use raw_input here on SO or by googling, BUT, if you are counting loops, the c = 0 has to be outside the loop, or you will be stuck in an infinite loop.
Quick and dirty, lots of ways to improve:
from math import *

def average(a, b):
    return (a + b) / 2.0

def improve(guess, x):
    return average(guess, x/guess)

def square_root(guess, x, c):
    guesscount = 0
    while guesscount < c:
        guesscount += 1
        guess = improve(guess, x)
    return guess

def my_sqrt(x, c):
    r = square_root(1, x, c)
    return r

number = int(raw_input('Enter a positive number'))
i_guess = int(raw_input('Enter an initial guess'))
times = int(raw_input('How many times would you like this program to improve your initial guess:'))
answer = my_sqrt(number, times)
print 'sqrt is approximately ' + str(answer)
print 'difference between your guess and sqrt is ' + str(abs(i_guess-answer))
The chosen answer is a bit convoluted...no disrespect to the OP.
For anyone who ever Googles this in the future, this is my solution:
def check(x, guess):
    return (abs(guess*guess - x) < 0.001)

def newton(x, guess):
    while not check(x, guess):
        guess = (guess + (x/guess)) / 2.0
    return guess

print newton(16, 1)
Here's a rather different function to compute square roots; it assumes n is non-negative:
def mySqrt(n):
    if (n == 0):
        return 0
    if (n < 1):
        return mySqrt(n * 4) / 2
    if (4 <= n):
        return mySqrt(n / 4) * 2
    x = (n + 1.0) / 2.0
    x = (x + n/x) / 2.0
    x = (x + n/x) / 2.0
    x = (x + n/x) / 2.0
    x = (x + n/x) / 2.0
    x = (x + n/x) / 2.0
    return x
This algorithm is similar to Newton's, but not identical. It was invented by a Greek mathematician named Heron (his name is sometimes spelled Hero) living in Alexandria, Egypt in the first century (about two thousand years ago). Heron's recurrence formula is simpler than Newton's; Heron used x' = (x + n/x) / 2 where Newton used x' = x - (x^2 - n) / (2x).
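(For what it's worth, plugging f(x) = x^2 - n into Newton's update reduces algebraically to Heron's step: x - (x^2 - n)/(2x) = (2x^2 - x^2 + n)/(2x) = (x^2 + n)/(2x) = (x + n/x)/2.)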
The first test is a special case for zero; without it, the (n < 1) test would cause an infinite loop. The next two tests normalize n to the range 1 <= n < 4; making the range smaller means that we can easily compute an initial approximation to the square root of n, which is done in the first computation of x, and then "unroll the loop" and iterate the recurrence equation a fixed number of times, thus eliminating the need for testing and recursing while the difference between two successive iterations is still too large.
By the way, Heron was a pretty interesting fellow. In addition to inventing a method for calculating square roots, he built a working jet engine, a coin-operated vending machine, and lots of other neat stuff!
You can read more about computing square roots at my blog.
It shouldn't have to be that complicated; I wrote this up:
def squareroot(n, x):
    final = (0.5*(x + (n/x)))
    print(final)
    for i in range(0, 10):
        final = (0.5*(final + (n/final)))
        print(final)
Or you could change it to be like this:
n = float(input('What number would you like to squareroot?'))
x = float(input('Estimate your answer'))
final = (0.5*(x + (n/x)))
print(final)
for i in range(0, 10):
    final = (0.5*(final + (n/final)))
    print(final)
All you need to know is the nth term in the sequence. From the Leibniz series, we know this is ((-1)**n)/((2*n)+1). Just sum this series for all i with an initial condition of zero and you're set.
def myPi(n):
    pi = 0
    for i in range(0, n):
        pi = pi + ((-1)**i)/((2*i)+1)
    return 4*pi

print(myPi(10000))
