I'm currently trying to implement the Newton-Raphson algorithm for some finance-based calculations.
I tried it in Python with a simple for loop, but I get the warning RuntimeWarning: divide by zero encountered in double_scalars and no result from the approximation. I tried to track it down by checking every division myself, but I found no step where Python should be forced to divide by zero.
import numpy as np
import math as m
import scipy.stats as si

def totalvol_zero(M):
    v_0 = m.sqrt(2 * abs(M))
    return v_0

def C_prime(M, v):
    C_prime = si.norm.cdf(M/v + v/2) - m.exp(-M) * si.norm.cdf(M/v - v/2)
    return C_prime

def NR(M, C_prime_obs):
    v_0 = totalvol_zero(M)
    for k in range(0, 7, 1):
        v_0 = v_0 - ((C_prime(M, v_0) - C_prime_obs) / (m.sqrt(1/(m.pi * 2)) * m.exp(-0.5 * ((M/v_0 + v_0/2)**2))))
        k += 1
    return v_0

print(NR(2, 2))
This may be a really easy error or typo for some of you, because I am still a beginner in Python, but at the moment I just don't see anything wrong in this code and can't explain why this warning appears or why I don't get any value as a result.
Edit:
Sorry, I forgot about M and v. They are just explicit formulas, so I didn't think they were the cause of this problem.
def moneyness(S, K, d, r, t):
    F = S * m.exp((r - d) * t)
    M = m.log(F / K)
    return M

def totalvol(sigma, t):
    v = sigma * m.sqrt(t)
    return v
These are the explicit expressions for M and v. M is the moneyness of an option, while v is its total volatility. But since I don't even use these expressions inside the for loop, and just pass M and v as plain numbers to the Newton-Raphson routine, I don't think they are the source of the problem.
C_prime_obs is a converted call price of an option. Its value should always be positive, but since I never divide by C_prime_obs, that doesn't change anything.
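For what it's worth, here is a small debugging sketch of my own (reusing the imports and the totalvol_zero and C_prime definitions above) that prints the numerator and denominator of the Newton step at each iteration, which is one way to see exactly where the division blows up:
# Debugging sketch: print each Newton step's numerator and denominator.
# Assumes math as m, scipy.stats as si, totalvol_zero and C_prime as defined above.
def NR_debug(M, C_prime_obs, iterations=7):
    v = totalvol_zero(M)
    for k in range(iterations):
        numerator = C_prime(M, v) - C_prime_obs
        denominator = m.sqrt(1/(m.pi * 2)) * m.exp(-0.5 * (M/v + v/2)**2)
        print(k, "v =", v, "num =", numerator, "den =", denominator)
        v = v - numerator / denominator
    return v

NR_debug(2, 2)  # the same call that triggers the warning
Running this, the denominator appears to underflow to 0.0 once an early step throws v to a very large value, which would explain the warning.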
Related
I am having the very same problem asked in this question, but I can't figure out why the solution is not working.
In that question, there was an issue with the sqrt function that seems to have been solved, so that problem now yields only positive results.
But in my problem, I can't eliminate the negative solution in the following code:
import sympy
v,Vs,Vp = sympy.symbols('v,Vs,Vp',real=True,positive=True)
sympy.solve( v - (Vp**2-2*Vs**2)/(2*(Vp**2-Vs**2)), Vs)
Which gives me the result
[-sqrt(2)*Vp*sqrt((2*v - 1)/(v - 1))/2, sqrt(2)*Vp*sqrt((2*v - 1)/(v - 1))/2]
How can I get only the positive result? What am I missing?
As the comments in the thread already describe, it is not really possible to get what you want in general.
There is, however, a trick to encode the assumption 0 < v < 1/2. Since this involves a few fractions, intuition says that we should probably make a substitution that involves a fraction too.
import sympy
Vs,Vp = sympy.symbols('Vs,Vp', positive=True)
# A hack to assume 0 < v < 1/2
u = sympy.symbols('u', positive=True)
v = 1/(u+2) # Alternatives like atan can be used when there are trig functions
sol = sympy.solve( v - (Vp**2-2*Vs**2)/(2*(Vp**2-Vs**2)), Vs)
print(sol)
# Substitute back by redefining v
v = sympy.symbols('v', positive=True)
new_sol = [subsol.subs(u, 1/v - 2).simplify() for subsol in sol]
print(new_sol)
The next best thing you can do in this case is to assume that all square roots are positive, which is a very brave assumption.
import sympy
v,Vs,Vp = sympy.symbols('v,Vs,Vp', real=True, positive=True)
sol = sympy.solve( v - (Vp**2-2*Vs**2)/(2*(Vp**2-Vs**2)), Vs)
# Assume sqrts are positive and sol is an array
# Both of these are not true in general
# It does not work if we assume the square root can be zero
# Or even complex or negative
s = sympy.symbols('s', positive=True) # Represents any square root
w = sympy.Wild('w') # Represents any argument inside a square root
new_sol = [subsol for subsol in sol if subsol.replace(sympy.sqrt(w), s) > 0]
print(new_sol)
Both code blocks assume sol is an array which is not true in general when it comes to solve.
Does anyone know why the below doesn't equal 0?
import numpy as np
np.sin(np.radians(180))
or:
np.sin(np.pi)
When I enter it into Python, it gives me 1.22e-16.
The number π cannot be represented exactly as a floating-point number. So, np.radians(180) doesn't give you π, it gives you 3.1415926535897931.
And sin(3.1415926535897931) is in fact something like 1.22e-16.
So, how do you deal with this?
You have to work out, or at least guess at, appropriate absolute and/or relative error bounds, and then instead of x == y, you write:
abs(y - x) < abs_bounds and abs(y-x) < rel_bounds * y
(This also means that you have to organize your computation so that the relative error is larger relative to y than to x. In your case, because y is the constant 0, that's trivial—just do it backward.)
Numpy provides a function that does this for you across a whole array, allclose:
np.allclose(x, y, rel_bounds, abs_bounds)
(This actually checks abs(y - x) < abs_bounds + rel_bounds * y, but that's almost always sufficient, and you can easily reorganize your code when it's not.)
In your case:
np.allclose(0, np.sin(np.radians(180)), rel_bounds, abs_bounds)
So, how do you know what the right bounds are? There's no way to teach you enough error analysis in an SO answer. Propagation of uncertainty at Wikipedia gives a high-level overview. If you really have no clue, you can use the defaults, which are 1e-5 relative and 1e-8 absolute.
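To make that concrete, here is a small runnable example (my own illustration; math.isclose and np.isclose are the standard tolerance-based comparisons) that checks sin of "pi" against 0 with an explicit absolute tolerance:
import math
import numpy as np

x = np.sin(np.radians(180))  # about 1.22e-16, not exactly 0
print(x == 0)                              # False
print(math.isclose(x, 0.0, abs_tol=1e-9))  # True: scalar comparison with an absolute bound
print(np.isclose(x, 0.0, atol=1e-8))       # True: NumPy's elementwise version
print(np.allclose(np.sin(np.radians([0, 180, 360])), 0.0))  # True for a whole array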
One solution is to switch to sympy when calculating sines and cosines, then to switch back to numerical values using the sp.N(...) function:
>>> # Numpy: not exactly zero
>>> import numpy as np
>>> np.cos(np.pi/2)
6.123233995736766e-17
>>> # Sympy workaround
>>> import sympy as sp
>>> def scos(x): return sp.N(sp.cos(x))
>>> def ssin(x): return sp.N(sp.sin(x))
>>> scos(sp.pi/2)
0
Just remember to use sp.pi instead of np.pi when using the scos and ssin functions.
I faced the same problem:
import math
import numpy as np
print(np.cos(math.radians(90)))
>> 6.123233995736766e-17
and tried this:
print(np.around(np.cos(math.radians(90)), decimals=5))
>> 0.0
This worked in my case. I set decimals to 5 so as not to lose too much information; as you would expect, the rounding simply discards everything after the fifth decimal place.
Try this... it zeros anything below a given tiny-ness value...
import numpy as np

def zero_tiny(x, threshold):
    if x.dtype == complex:
        x_real = x.real
        x_imag = x.imag
        if np.abs(x_real) < threshold: x_real = 0
        if np.abs(x_imag) < threshold: x_imag = 0
        return x_real + 1j*x_imag
    else:
        return x if np.abs(x) > threshold else 0
value = np.cos(np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
value = np.exp(-1j*np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
Python evaluates its trig functions from series expansions such as the Taylor series, and since such an expansion has infinitely many terms, the result never comes out exact; it is only an approximation.
For example:
sin(x) = x - x³/3! + x⁵/5! - ...
=> sin(π) = π - ..., which is never exactly 0 but only approaches 0.
That is my own reasoning, not a proof.
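As a rough illustration of that idea (my own sketch, not how NumPy actually implements sin), here is a truncated Taylor series evaluated at math.pi; the partial sum settles on a tiny nonzero value rather than exactly 0:
import math

def sin_taylor(x, terms=20):
    # Truncated Taylor series: sin(x) = x - x**3/3! + x**5/5! - ...
    total = 0.0
    for k in range(terms):
        total += (-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
    return total

print(sin_taylor(math.pi))  # typically a tiny nonzero number, not exactly 0.0
print(math.sin(math.pi))    # also tiny and nonzero: 1.2246467991473532e-16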
Simple.
np.sin(np.pi).astype(int)
np.sin(np.pi/2).astype(int)
np.sin(3 * np.pi / 2).astype(int)
np.sin(2 * np.pi).astype(int)
returns
0
1
-1
0
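One caveat worth adding (my own note): astype(int) truncates toward zero rather than rounding, so a value that lands just below an integer gets chopped down. np.rint, which rounds to the nearest integer, is a safer drop-in when that matters:
import numpy as np

x = np.float64(0.9999999)      # hypothetical value just below 1
print(x.astype(int))           # 0   (truncation toward zero)
print(np.rint(x))              # 1.0 (round to nearest)
print(np.rint(np.sin(np.pi)))  # 0.0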
I am very new to programming, but have decided to learn Python. I am writing a program that can check whether a number is prime. Mathematically this is done by checking whether (x-1)^p - (x^p - 1) is divisible by p (capable of being divided with no remainder); if it is, then p is prime.
However, I have run into trouble. This is my code so far:
from sympy import *

x = symbols('x')
p = 11
f = pow(x - 1, p) - (pow(x, p) - 1)  # (x-1)^p - (x^p - 1)
f1 = expand(f)
# -11*x**10 + 55*x**9 - 165*x**8 + 330*x**7 - 462*x**6 + 462*x**5 - 330*x**4 + 165*x**3 - 55*x**2 + 11*x
f2 = f1/p
# -x**10 + 5*x**9 - 15*x**8 + 30*x**7 - 42*x**6 + 42*x**5 - 30*x**4 + 15*x**3 - 5*x**2 + x
To tell whether the number p is prime, I need to check whether the coefficients of the polynomial are divisible by p, so I have to check whether the coefficients of f2 are whole numbers or fractions.
This is what I would like the program to check: https://www.youtube.com/watch?v=HvMSRWTE2mI
I have tried casting to int, but it still shows fractions like 1/2 and 3/7. I would like it to show only whole numbers.
How do I make it do so?
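For reference, here is one way (a small sketch of my own, using sympy's Poly.all_coeffs) to pull the coefficients of f1 out as exact integers and test their divisibility by p directly, instead of looking at f2:
from sympy import symbols, expand, Poly

x = symbols('x')
p = 11
f1 = expand((x - 1)**p - (x**p - 1))

# all_coeffs() returns the coefficients from the highest power down to x**0
coeffs = Poly(f1, x).all_coeffs()
print(coeffs)
print(all(c % p == 0 for c in coeffs))  # True when every coefficient is divisible by p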
What the method effectively does is expand the polynomial (x-1)^p and drop the first (x^p) and last (x^0) coefficients. Then you iterate through the rest and check each one for divisibility by p. Since a polynomial expansion of power p produces p+1 terms (from 0 to p), we want to collect p-1 terms (from 1 to p-1). This is all summed up in the following code.
from sympy.abc import x

def is_prime_sympy(p):
    poly = pow((x - 1), p).expand()
    # check the coefficients of x**1 .. x**(p-1) for divisibility by p
    return not any(poly.coeff(x, i) % p for i in range(1, p))
This works, but the higher the number you input (e.g. 1013), the longer you'll notice it takes. Sympy is slow because internally it stores all expressions as objects of its own classes, and all the multiplications and additions take a long time. We can instead generate the coefficients directly using Pascal's triangle. For the polynomial (x - 1)^p the coefficients alternate in sign, but we don't care about that; we just want the raw numbers. Credit to Copperfield for pointing out that you only need half of the coefficients because of symmetry.
import math

def combination(n, r):
    return math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

def pascals_triangle(row):
    # only generate half of the coefficients because of symmetry
    # (row//2 + 1 also covers the middle coefficient when row is even)
    return (combination(row, term) for term in range(1, row//2 + 1))

def is_prime_math(p):
    return not any(c % p for c in pascals_triangle(p))
We can time both methods now to see which one is faster.
import time

def benchmark(p):
    t0 = time.time()
    is_prime_math(p)
    t1 = time.time()
    is_prime_sympy(p)
    t2 = time.time()
    print('Math: %.3f, Sympy: %.3f' % (t1-t0, t2-t1))
And some tests.
>>> benchmark(512)
Math: 0.001, Sympy: 0.241
>>> benchmark(2003)
Math: 3.852, Sympy: 41.695
We know that 512 is not a prime. The very second term we have to check for divisibility fails the test, so most of the time is actually spent generating the coefficients. Python computes them lazily, while sympy must expand the whole polynomial before we can start collecting them. This shows us that a generator approach is preferable.
2003 is prime, and here we notice sympy is about 10 times slower. In fact, nearly all of the time is spent generating the coefficients, as iterating over 2000 elements for a modulo operation takes no time. So if there are any further optimisations to make, that's where one should focus.
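One such optimisation (a sketch of my own, not part of the benchmark above) is to build each binomial coefficient from the previous one with the identity C(n, k) = C(n, k-1) * (n - k + 1) / k, instead of recomputing three factorials per term:
def pascals_triangle_incremental(row):
    # Yield C(row, 1), C(row, 2), ... up to the middle of the row,
    # updating each coefficient from the previous one.
    c = 1
    for k in range(1, row//2 + 1):
        c = c * (row - k + 1) // k  # exact: C(row, k-1)*(row - k + 1) is divisible by k
        yield c

def is_prime_math_fast(p):
    return not any(c % p for c in pascals_triangle_incremental(p))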
numpy.poly1d()
Numpy has a class that can manipulate polynomial coefficients, and it's exactly what we want. It even works relatively fast for powers up to 50k. However, in its original implementation it's useless to us, because the coefficients are stored as signed int32, which means they very quickly overflow and our modulo operations are thrown off. In fact, it fails for a number as small as 37.
But it's fast, though, right? Maybe we could hack it so it accepts infinite-precision integers... Maybe it's possible, maybe it isn't. But even if it is, we have to consider that the reason it is so fast may be exactly that it uses a fixed-precision type under the hood.
For the sake of curiosity, this is what the implementation would look like if it were of any use.
import numpy as np

def is_prime_numpy(p):
    poly = pow(np.poly1d([1, -1]), p)
    return not any(c % p for c in poly.coeffs[1:-1])
And for the curious ones, the source code is located in ...\numpy\lib\polynomial.py.
I am not sure if I understood what you mean, but to check whether a number is an integer or a float you can use isinstance:
>>> isinstance(1/2.0, float)
True
>>> isinstance(1//2, float)
False
In python, I would like to find the roots of equations of the form:
-x*log(x) + (1-x)*log(n) - (1-x)*log(1 - x) - k = 0
where n and k are parameters that will be specified.
An additional constraint on the roots is that x >= (1-x)/n. So just for what it's worth, I'll be filtering out roots that don't satisfy that.
My first attempt was to use scipy.optimize.fsolve (note that I'm just setting k and n to be 0 and 1 respectively):
from scipy.optimize import fsolve
from numpy import log  # I also tried math.log, see below

def f(x):
    return -x*log(x) + (1-x)*log(1) - (1-x)*log(1-x)

fsolve(f, 1)
Using math.log, I got ValueErrors because I was supplying bad input to log. Using numpy.log, I instead got divide-by-zero and invalid-value-in-multiply warnings.
I adjusted f like so, just to see what it would do:
def f(x):
    if x <= 0:
        return 1000
    if x >= 1:
        return 2000
    return -x*log(x) + (1-x)*log(1) - (1-x)*log(1-x)
Now I get
/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py:221: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
Using python, how can I solve for x for various n and k parameters in the original equation?
fsolve also lets you supply a guess for where to start. My suggestion would be to plot the equation and have the user provide an initial guess, either with the mouse or via text input. You may also want to change the out-of-bounds values:
if x <= 0:
    return 1000 + abs(x)
if x >= 1:
    return 2000 + abs(x)
This way the function has a slope outside of the region of interest that will guide the solver back into the interesting region.
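Putting that together, here is a small runnable sketch (my own illustration; the parameter values n = 2, k = 0.5 and the starting guess 0.9 are only examples) that combines the sloped out-of-bounds penalty, an interior initial guess, and the x >= (1-x)/n filter from the question:
import numpy as np
from scipy.optimize import fsolve

def make_f(n, k):
    def f(x):
        # Penalties with a slope that pushes the solver back towards (0, 1)
        if x <= 0:
            return 1000 + abs(x)
        if x >= 1:
            return 2000 + abs(x)
        return -x*np.log(x) + (1 - x)*np.log(n) - (1 - x)*np.log(1 - x) - k
    return f

n, k = 2, 0.5  # example parameters
root, = fsolve(make_f(n, k), x0=0.9)
print(root)
if root >= (1 - root) / n:  # the additional constraint from the question
    print("root satisfies the constraint")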
I wrote this Python code, which according to Wolfram Alpha is supposed to return the factorial of any positive value, integer or not (I probably messed up somewhere):
from math import *

def double_factorial(n):
    if int(n) == n:
        n = int(n)
        if [0, 1].__contains__(n):
            return 1
        a = (n & 1) + 2
        b = 1
        while a <= n:
            b *= a
            a += 2
        return float(b)
    else:
        return factorials(n/2) * 2**(n/2) * (pi/2)**(.25 * (-1 + cos(n * pi)))

def factorials(n):
    return pi**(.5 * sin(n*pi)**2) * 2**(-n + .25 * (-1 + cos(2*n*pi))) * double_factorial(2*n)
The problem is, say I input pi to 6 decimal places. 2*n will not become a float with 0 as its decimals any time soon, so the evaluation effectively turns into
pi**(.5 * sin(n*pi)**2) * 2**(-n + .25 * (-1 + cos(2*n*pi))) * double_factorial(loop(loop(loop(...))))
How would I stop the recursion and still get the answer?
I've had suggestions to add an index to the definitions or something, but the problem is, if the code stops when it reaches the index, there is still no answer to put back into the previous "nests", or whatever you call them.
You defined f in terms of g and g in terms of f. But you don't just have a circular definition with no base point to start the recursion; you have something worse: the definition of f is actually the definition of g inverted, so f precisely undoes what g does and vice versa. If you're trying to implement gamma yourself (i.e. not using the one that's already in the libraries), then you need to use a formula that expresses gamma in terms of something else that you know how to evaluate. Just using one formula and its inversion like that is a method that will fail for almost any problem you apply it to.
In your code, you define double_factorial like
double_factorial(n) = factorial(n/2) * f(n) ... (1)
and in factorials you define it as
factorial(n) = double_factorial(2*n) / f(2*n) ... (2)
which is just equation (1) rearranged, so you have created a circular reference with no exit point. Even math can't help. You have to define either factorials or double_factorial independently, e.g.
from math import gamma

def factorials(n):
    return gamma(n + 1)
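For instance (my own usage note), math.gamma reproduces the ordinary factorial at integers and extends it to non-integer arguments:
from math import gamma

print(gamma(5))    # 24.0, i.e. 4!
print(gamma(4.5))  # about 11.6317, i.e. 3.5!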