I have a function containing a loop in which I do both division and multiplication. The final answer is easily representable, and so should the running answer be.
def tie(total):
    count = total / 2
    prob = 1.0
    for i in xrange(1, count + 1):
        i_f = float(i)
        prob *= (count + i_f) / i_f / 4
    return prob
tie(4962) == 0.01132634537589437
but
tie(4964) == inf
Is the compiler performing some optimization that does the arithmetic operations in an order other than the one I specified, an order that is supposedly equivalent but causes the overflow?
You're running into trouble because even though the final result of your tie function is mathematically between 0 and 1, the intermediate values in your loop grow very large. For total = 4962, the value of prob halfway through the iteration is around 1.5e308, which is almost but not quite large enough to overflow a Python float. For total = 4964, the mid-way value really does overflow a float, and since inf times anything finite is still inf, that inf propagates all the way down to the final value.
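To see the blow-up directly, here's a small diagnostic sketch (the helper name tie_max_intermediate is mine, not part of the original code):

def tie_max_intermediate(total):
    # Same loop as tie(), but it records the largest running value of prob.
    count = total / 2
    prob = 1.0
    biggest = prob
    for i in xrange(1, count + 1):
        prob *= (count + float(i)) / float(i) / 4
        biggest = max(biggest, prob)
    return biggest

Per the figures above, tie_max_intermediate(4962) should come out near 1.5e308, just under the float maximum, while tie_max_intermediate(4964) should be inf.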
If you're prepared to accept a (fairly small) amount of floating-point error, there's no need to compute this quantity using a loop at all: you can use the lgamma function from the math module to compute the log of the relevant factorials. (You could also use the gamma function directly, but that would likely also lead to overflow issues.)
Here's a version of your function based on this.
from math import lgamma, log, exp

def tie(total):
    count = total / 2
    return exp(lgamma(2*count + 1) - 2*lgamma(count + 1) - count*log(4))
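One would expect this to reproduce the loop-based value 0.01132634537589437 for tie(4962) to roughly ten significant digits: lgamma is typically accurate to within a few ulps, but the log-factorial terms being subtracted are around 8e4 in magnitude, which costs a few digits in the final result.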
Alternatively, you could compute the 2n-choose-n term using pure integer arithmetic (which can't overflow, since Python integers are arbitrary-precision), and only produce a float at the last moment (when dividing by 4**count). This will be less efficient than the above, but will give you (in a sense) perfect accuracy: it yields the closest representable float to the exact answer. Here's what that version looks like:
from __future__ import division

def tie(total):
    count = total // 2
    prod = 1
    for i in xrange(1, count+1):
        prod = prod * (count + i) // i
    return prod / 4**count
Note: the floor division in prod * (count + i) // i may look wrong, but it actually works: a little bit of elementary number theory shows that at this point in the calculation, prod * (count + i) must be divisible by i, so it's safe to do an integer division.
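For example, with count = 2 the loop computes 1 * 3 // 1 = 3 and then 3 * 4 // 2 = 6, which is exactly C(4, 2); in general, after i iterations prod equals the binomial coefficient C(count + i, i), which is always an integer.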
Finally, just for fun, here's a third way to compute your probability that's similar in spirit to your original code, but avoids overflow: the value prob starts at 1.0 and steadily decreases to the final value.
def tie(total):
    count = total // 2
    prob = 1.0
    for i in xrange(1, count+1):
        prob *= (i-0.5) / i
    return prob
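Each factor (i - 0.5)/i equals (2i - 1)/(2i), which is strictly less than 1, and the full product is exactly C(2n, n)/4^n for n = count, so prob can only shrink.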
Besides being immune to overflow, this solution will be more efficient than the integer-based one, and more accurate than the lgamma-based one.
prob grows to be quite large and eventually overflows. Given the name, did you intend prob to always be between 0 and 1?
What do you mean "controlled calculation"? What causes the overflow is prob getting bigger and bigger.
Your prob variable grows very large, and for total equal to 4964 it overflows Python's maximum float value, sys.float_info.max:
>>> import sys
>>> print(sys.float_info.max)
1.7976931348623157e+308
I have the following function:
def pascal_triangle(i: int, j: int):
    j = min(j, i - j + 1)
    if i == 1 or j == 1:
        return 1
    elif j > i or j == 0 or i < 1 or j < 1:
        return 0
    else:
        return pascal_triangle(i-1, j-1) + pascal_triangle(i-1, j)
The input value for j has the following constraint:
1 <= j <= i/2
My computation for the time complexity is as follows:
f(i,j) = f(i-1,j-1) + f(i-1,j) = f(i-n,j-n) + ... + f(i-n,j)
So, to find the max n, we have:
i-n>=1
j-n>=1
i-n>=j
and, since we know that:
j>=1
j<=i/2
The max n is i/2 - 1, so the time complexity is O(2^(i/2-1)), and the space complexity is the maximum recursion depth (n) times the space needed per call (O(1)), O(2^(i/2-1)).
I hope my calculation is correct. Now my concern is that if i is odd, this number is not divisible by 2, but the terms of the function must be integers. Therefore, I want to know whether I should write the time complexity like this:
The time complexity and space complexity of the function are both:
O(2^(i/2-1)) if i is even
O(2^(i/2-0.5)) if i is odd
At a first glance, the time and space analysis looks (roughly) correct. I haven't made a super close inspection, however, since it doesn't appear to be the focus of the question.
Regarding the time complexity for even / odd inputs, the answer is that the time complexity is O(sqrt(2)^i), regardless of whether i is even or odd.
In the even case, we have O(2^(i / 2 - 1)) ==> O(1/2 * sqrt(2)^i) ==> O(sqrt(2)^i).
In the odd case, we have O(2^(i / 2 - 0.5)) ==> O(sqrt(2) / 2 * sqrt(2)^i) ==> O(sqrt(2)^i).
What you've written is technically correct, but significantly more verbose than necessary. (At the very least, it's poor style, and if this question was on a homework assignment or an exam, I personally think one could justify a penalty on this basis.)
I am using scipy.stats.binom to work with the binomial distribution. Given n and p, the probability function is P(X = k) = C(n, k) * p^k * (1 - p)^(n - k).
A sum over k ranging from 0 to n should (and indeed does) give 1. Fixing a point x_0, we can add the probabilities in both directions, and the two sums ought to add to 1. However, the code below yields two different answers when x_0 is close to n.
from scipy.stats import binom

n = 9
p = 0.006985
b = binom(n=n, p=p)
x_0 = 8

# Method 1
cprob = 0
for k in range(x_0, n+1):
    cprob += b.pmf(k)
print('cumulative prob with method 1:', cprob)

# Method 2
cprob = 1
for k in range(0, x_0):
    cprob -= b.pmf(k)
print('cumulative prob with method 2:', cprob)
I expect the outputs from both methods to agree. For x_0 < 7 they do, but for x_0 >= 8, as above, I get
>> cumulative prob with method 1: 5.0683768775504006e-17
>> cumulative prob with method 2: 1.635963929799698e-16
The precision error in the two methods propagates through my code (later) and gives vastly different answers. Any help is appreciated.
Roundoff errors of the order of the machine epsilon are expected and inevitable. That they propagate and later blow up means that your problem is very poorly conditioned. You'd need to rethink the algorithm or the implementation, depending on where the bad conditioning comes from.
In your specific example you can get by using either np.sum (which tries to be careful with roundoff), or even math.fsum from the standard library.
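For instance, here's a minimal sketch of Method 1 using math.fsum, which keeps an exact running sum internally and rounds only once at the end:

from math import fsum
from scipy.stats import binom

n = 9
p = 0.006985
b = binom(n=n, p=p)
x_0 = 8

# fsum tracks the partial sums exactly; the only rounding happens
# when the final result is produced.
cprob = fsum(b.pmf(k) for k in range(x_0, n + 1))
print('cumulative prob with fsum:', cprob)

Note also that scipy's frozen distributions expose a survival function, and b.sf(x_0 - 1) computes the upper tail P(X >= x_0) directly.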
The mathematical constant π (pi) is an irrational number with value approximately 3.1415926... The precise value of π is equal to the following infinite sum: π = 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ... We can get a good approximation of π by computing the sum of the first few terms. Write a function approxPi() that takes as a parameter a floating-point value error and approximates the constant π to within error by computing the above sum, term by term, until the absolute value of the difference between the current sum and the previous sum (with one fewer term) is no greater than error. Once the function finds that the difference is less than error, it should return the new sum. Please note that this function should not use any functions or constants from the math module. You are supposed to use the described algorithm to approximate π, not the built-in value in Python.
I have written the program below, but for some reason I am getting a different value from the one in the book.
def pi(error):
    prev = 1
    current = 4
    i = 1
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign * 4 / d
        i = i + 1
    return current
Output:
In [2]: pi(0.01)
Out[2]: 3.146567747182955
But instead I need to get this value
>>> approxPi(0.01)
3.1611986129870506
>>> approxPi(0.0000001)
3.1415928535897395
The approximation you're using converges very slowly; that is, you have to loop a great many times to get a reasonable value. You can see that the difference between successive sums is 1/d, and that's the accuracy: you'll have to loop 5000 times to get four digits, 50k times to get the next, 500k for the one after, and so on (that is, exponential time complexity in the number of digits).
This could be one of the reasons you see a discrepancy: you simply get a situation where rounding errors add up. Since you need that many iterations, you will never get near the full precision of the floats you're using. Another source of discrepancy is that your reference is probably using a different exit condition; with your condition you should (ideally) get an error less than the one provided, and you do (3.146567747182955 - pi < 0.01). It actually looks like your reference is using the condition abs(current - prev) > 4*error instead.
The formula you're using is pi = 4*arctan(1), together with the Maclaurin expansion of arctan(x), for a value of x that is right on the boundary of converging at all. To get better performance, use a smaller x in that expansion. For example, Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239) could be used (this gives linear time complexity in the number of digits):
def pi(error):
    a = 1.0/5
    b = 1.0/239
    prev = 1
    current = 0.0
    i = 0
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign * (16*a - 4*b)/d
        a = a*1.0/(5*5)
        b = b*1.0/(239*239)
        i = i + 1
    return current
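Note how quickly this converges: a shrinks by a factor of 25 each iteration, so every pass adds roughly 1.4 decimal digits of accuracy, and even error = 1e-15 takes only about a dozen loops.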
I guess the stopping rules for your function and approxPi are different. In fact, your estimate is better: if you print out all the values of current, you will see that when i equals 50 your function produces your desired output, but it then goes beyond that and produces a better approximation.
So, to get the answer you are looking for, the question as posed is wrong in how it describes the exit condition.
Re-organizing, you get Pi = 4*(1/1 - 1/3 + 1/5 - ...). To get 3.1611986129870506 with an error of 0.01, you look at the successive terms and stop when the term < error:
from itertools import count, cycle  # and izip for Py2

def approxPi(error):
    p = 0
    for sign, d in zip(cycle([1, -1]), count(1, 2)):  # izip for Py2
        n = sign / d
        p += n
        if abs(n) < error:
            break
    return 4*p
Then you get the correct answers:
>>> approxPi(0.01)
3.1611986129870506
>>> approxPi(0.0000001)
3.1415928535897395
Using your code (from #PaulBoddington: Find the value of pi)
def pi(error):
    prev = 0
    current = 1
    i = 1
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign / d
        i += 1
    return 4*current
Note: this tests the unscaled term, not the difference between the current sum and the previous sum as the question describes (so the question's description is wrong), but it is equivalent to difference_between_sums < error*4. So, to get the matching exit from your original code, just multiply the error by 4, e.g.:
>>> pi(0.04)
3.1611986129870506
I'm trying to generate 0 or 1 with a 50/50 chance of each, using random.uniform instead of random.getrandbits.
Here's what I have
0 if random.uniform(0, 1e-323) == 0.0 else 1
But if I run this long enough, about 70% of the results are 1, as seen here:
sum(0 if random.uniform(0, 1e-323) == 0.0 else 1
    for _ in xrange(1000)) / 1000.0  # --> 0.737
If I change it to 1e-324 it will always be 0, and if I change it to 1e-322, the average will be ~90%.
I made a dirty program that tries to find the sweet spot between 1e-322 and 1e-324 by dividing and multiplying repeatedly:
v = 1e-323
n_runs = 100000
target = n_runs/2
result = 0
while True:
    result = sum(0 if random.uniform(0, v) == 0.0 else 1 for _ in xrange(n_runs))
    if result > target:
        v /= 1.5
    elif result < target:
        v *= 1.5 / 1.4
    else:
        break
print v
This ends up with 4.94065645841e-324, but it will still be wrong if I run it enough times.
Is there a way to find this number without the dirty script I wrote? I know that Python has an internal minimum float value, shown in sys.float_info.min, which on my PC is 2.22507385851e-308, but I don't see how to use it to solve this problem.
Sorry if this feels more like a puzzle than a proper question, but I'm not able to answer it myself.
I know that Python has an internal minimum float value, shown in sys.float_info.min, which on my PC is 2.22507385851e-308. But I don't see how to use it to solve this problem.
2.22507385851e-308 is not the smallest positive float value; it is the smallest positive normalized float value. The smallest positive float value is 2^-52 times that, which is near 5e-324.
2^-52 is called the “machine epsilon”, and it is usual for the “min” of a floating-point type to be a value that is neither the least of all comparable values (that is, -inf), nor the least of the finite values (that is, -max), nor the least of the positive values: it is the smallest positive normalized value.
Then, the next problem you face is that random.uniform is not uniform down to that level. It probably works fine when you pass it a normalized number, but if you pass it the smallest positive representable float, the computation it does internally may be very approximate and lead it to behave differently from what the documentation says. Although it appears to work surprisingly well, judging by the results of your “dirty script”.
Here's the random.uniform implementation, according to the source:
from os import urandom as _urandom

BPF = 53        # Number of bits in a float
RECIP_BPF = 2**-BPF

def uniform(self, a, b):
    "Get a random number in the range [a, b) or [a, b] depending on rounding."
    return a + (b-a) * self.random()

def random(self):
    """Get the next random number in the range [0.0, 1.0)."""
    return (int.from_bytes(_urandom(7), 'big') >> 3) * RECIP_BPF
So your problem boils down to finding a number b that gives 0 when multiplied by a number less than 0.5 and a different result when multiplied by a number greater than 0.5. I've found that, on my machine, that number is 5e-324.
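Here's a quick check of why 5e-324 is that sweet spot; the reasoning below assumes IEEE 754 round-to-nearest-even. 5e-324 is the smallest subnormal float (2**-1074), so there is no representable value between it and 0.0 for a product to land on:

>>> 5e-324 * 0.49 == 0.0       # below halfway: rounds down to zero
True
>>> 5e-324 * 0.51 == 5e-324    # above halfway: rounds back up
True
>>> 5e-324 * 0.5 == 0.0        # exactly halfway: ties round to even, i.e. 0.0
True

Since random() is equally likely to fall below or above 0.5, the products split essentially 50/50.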
To test it, I've made the following script:
from random import uniform

def test():
    runs = 1000000
    results = [0, 0]
    for i in range(runs):
        if uniform(0, 5e-324) == 0:
            results[0] += 1
        else:
            results[1] += 1
    print(results)
Which returned results consistent with a 50% probability:
>>> test()
[499982, 500018]
>>> test()
[499528, 500472]
>>> test()
[500307, 499693]
I am having an issue getting the Python 2.5 shell to do what I need. I am trying to have the user input a value n for the number of times the loop will be repeated. In reality, I need the user to input an N corresponding to the number of terms from the Gregory–Leibniz series, and to output the approximation of pi.
Gregory–Leibniz series
pi = 4*((1/1) - (1/3) + (1/5) - (1/7) + (1/9) - (1/11) + (1/13) - ...)
So when n is 3, the loop should calculate up to the 1/5 term. Unfortunately, it is always giving me a value of 0 for the variable total.
My code as of right now is wrong, and I know that. Just looking for some help. Code below:
def main():
    n = int(raw_input("What value of N would you like to calculate?"))
    for i in range(1, n, 7):
        total = ((1)/(i+i+1)) - ((1)/(i+i+2)) + ((1)/(i+i+4))
        value = 4*(1 - total)
    print(value)

main()
This uses integer division, so you will get zero:
total = (((1)/(i+i+1))-((1)/(i+i+2))+((1)/(i+i+4)))
Instead, use floats to get float division.
total = ((1.0/(i+i+1))-(1.0/(i+i+2))+(1.0/(i+i+4)))
In Python 2, by default, / on integers performs integer division. In Python 3 this has changed: / always performs float division (// does integer division).
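A quick illustration (outputs shown are from Python 3):

>>> 1 / 3     # true division in Python 3; gives 0 in Python 2
0.3333333333333333
>>> 1 // 3    # floor division in both versions
0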
You need to accumulate terms, e.g.:
total = 0.0
term = 1.0
for i in range(1, n+1):
    denom = 2*i - 1
    total += term/denom
    term = -term
Of course, you can express this more tersely, and it is perhaps more natural to use this instead:
total = 0.0
term = 1.0
for i in range(n):
    denom = 2*i + 1
    total += term/denom
    term = -term
This way you use the most natural form of iterating over n terms of a range. Note the difference in how the denominator is calculated.
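Putting the pieces together, a minimal sketch of a fixed main() in the spirit of the original might look like this (the series sums to pi/4, so the total is scaled by 4 at the end):

def main():
    n = int(raw_input("What value of N would you like to calculate?"))
    total = 0.0
    term = 1.0
    for i in range(n):
        total += term / (2*i + 1)  # denominators 1, 3, 5, ...
        term = -term               # alternate the sign each term
    print(4 * total)

main()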
Q1) Go to https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80 to find the Leibniz formula for π. Let S be the sum of terms used to approximate π. As we can see, the first term in S is +1, the second term is -1/3, the third term is +1/5, and so on. Find the smallest number of terms such that the difference between 4*S and π is at most 0.01. That is, abs(4*S - math.pi) <= 0.01.
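A minimal sketch of one way to answer this (the function name leibniz_terms_needed is my own, not part of the exercise):

import math

def leibniz_terms_needed(tol=0.01):
    s = 0.0      # running sum S of the series
    k = 0        # number of terms added so far
    sign = 1.0
    while True:
        s += sign / (2*k + 1)    # terms +1, -1/3, +1/5, ...
        sign = -sign
        k += 1
        if abs(4*s - math.pi) <= tol:
            return k

print(leibniz_terms_needed())    # smallest number of terms with |4*S - pi| <= 0.01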