I am having an issue getting the Python 2.5 shell to do what I need. I am trying to have the user input a value for "n" representing the number of times the loop will be repeated. In other words, the user inputs N, the number of terms from the Gregory–Leibniz series to use, and the program outputs the approximation of pi.
Gregory–Leibniz series
pi = 4*((1/1) - (1/3) + (1/5) - (1/7) + (1/9) - (1/11) + (1/13) ...)
So when n is 3, I need the loop to calculate up to the 1/5 term. Unfortunately, it always gives me a value of 0 for the variable total.
My code as of right now is wrong, and I know that. Just looking for some help. Code below:
def main():
    n = int(raw_input("What value of N would you like to calculate?"))
    for i in range(1,n,7):
        total = (((1)/(i+i+1))-((1)/(i+i+2))+((1)/(i+i+4)))
        value = 4*(1-total)
    print(value)

main()
This uses integer division, so you will get zero:
total = (((1)/(i+i+1))-((1)/(i+i+2))+((1)/(i+i+4)))
Instead, use floats to get float division.
total = ((1.0/(i+i+1))-(1.0/(i+i+2))+(1.0/(i+i+4)))
In Python 2, by default, / on integers performs integer division.
In Python 3 this has changed: / always performs float division (and // does integer division).
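A quick illustration; the __future__ import opts Python 2 in to the Python 3 behaviour:

from __future__ import division  # Python 3 semantics for / on Python 2

print(1 / 3)    # true division: 0.3333... (plain Python 2 prints 0)
print(1 // 3)   # floor division: 0 in both versions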
You need to accumulate the terms, e.g.:
total = 0.0
term = 1.0
for i in range(1, n+1):
    denom = 2*i - 1
    total += term/denom
    term = -term
Of course, you can express this more tersely (see the one-line sketch below). It is also perhaps more natural to use this instead:
total = 0.0
term = 1.0
for i in range(n):
    denom = 2*i + 1
    total += term/denom
    term = -term
This way you use the most natural form of iterating over n terms with range. Note the difference in how the denominator is calculated.
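And here is the terser version promised above, as a sketch using a generator expression (the 2.0 keeps the division in floats on Python 2):

total = sum((-1)**i / (2.0*i + 1) for i in range(n))
value = 4 * total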
Q1) Go to https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80 to find the Leibniz formula for π. Let S be the sequence of terms that is used to approximate π. As we can see, the first term in S is +1, the second term in S is -1/3, the third term in S is +1/5, and so on. Find the smallest number of terms such that the difference between 4*S and π is less than 0.01. That is, abs(4*S - math.pi) <= 0.01.
Related
The mathematical constant π (pi) is an irrational number with value approximately 3.1415926... The precise value of π is equal to the following infinite sum: π = 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ... We can get a good approximation of π by computing the sum of the first few terms. Write a function approxPi() that takes as a parameter a floating point value error and approximates the constant π within error by computing the above sum, term by term, until the absolute value of the difference between the current sum and the previous sum (with one fewer term) is no greater than error. Once the function finds that the difference is less than error, it should return the new sum. Please note that this function should not use any functions or constants from the math module. You are supposed to use the described algorithm to approximate π, not the built-in value in Python.
I have written the program below, but for some reason I am getting a different value from the one in the book.
def pi(error):
    prev = 1
    current = 4
    i = 1
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign * 4 / d
        i = i + 1
    return current
The output:

In [2]: pi(0.01)
Out[2]: 3.146567747182955
But instead I need to get these values:
>>> approxPi(0.01)
3.1611986129870506
>>> approxPi(0.0000001)
3.1415928535897395
The approximation you're using converges very poorly; you have to loop a great many times to get a reasonable value. The difference between successive sums is 1/d, and that is roughly the accuracy. You'll have to loop about 5,000 times to get four digits, 50,000 times to get the next digit, 500,000 for the one after that, and so on (the iteration count grows exponentially with the number of digits).
This could be one of the reasons you see a discrepancy: you may simply be in a situation where rounding errors add up. Since you need that many iterations, you will never get near the full precision of the floats you're using. Another source of discrepancy is that your reference is probably using a different exit condition; with your condition you should (ideally) get an error less than the one provided, and you have (3.146567747182955 - pi < 0.01). It actually looks like your reference is using the condition abs(current - prev) > 4*error instead.
The formula you're using is pi = 4*arctan(1), with the Maclaurin expansion of arctan(x) evaluated at a value of x that is on the very limit of converging at all. To get better performance, use a smaller x in that expansion. For example, Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239) could be used (this makes the number of iterations linear in the number of digits):
def pi(error):
    a = 1.0/5
    b = 1.0/239
    prev = 1
    current = 0.0
    i = 0
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign * (16*a - 4*b)/d
        a = a*1.0/(5*5)
        b = b*1.0/(239*239)
        i = i + 1
    return current
I guess the stopping rules for your function and approxPi are different. In fact, your estimate is better. If you print out all the values of current, you will see that when i equals 50 your function produces the desired output, but it goes beyond that and produces a better approximation.
So to get the answer you are looking for: the question as posed describes the exit condition incorrectly.
Re-organizing, you get pi = 4*(1/1 - 1/3 + 1/5 - ...). To get 3.1611986129870506 with an error of 0.01, you look at the successive terms and stop when a term's magnitude drops below error:
from itertools import count, cycle  # plus izip for Py2

def approxPi(error):
    p = 0
    for sign, d in zip(cycle([1, -1]), count(1, 2)):  # izip for Py2
        n = sign / d
        p += n
        if abs(n) < error:
            break
    return 4*p
Then you get the correct answers:
>>> approxPi(0.01)
3.1611986129870506
>>> approxPi(0.0000001)
3.1415928535897395
Using your code (from #PaulBoddington: Find the value of pi)
def pi(error):
    prev = 0
    current = 1
    i = 1
    while abs(current - prev) > error:
        d = 2.0*i + 1
        sign = (-1)**i
        prev = current
        current = current + sign / d
        i += 1
    return 4*current
Note: abs(current - prev) is not the difference between the current sum and the previous sum of the π series (so the question's description is wrong); since the factor 4 is only applied at the end, it is equivalent to difference_between_sums < 4*error. So to get the correct exit behaviour from your original code, just multiply the error by 4, e.g.:
>>> pi(0.04)
3.1611986129870506
I'm trying to generate 0 or 1 with a 50/50 chance of each, using random.uniform instead of random.getrandbits.
Here's what I have
0 if random.uniform(0, 1e-323) == 0.0 else 1
But if I run this long enough, it generates 1 about 70% of the time on average. As seen here:
sum(0 if random.uniform(0, 1e-323) == 0.0
    else 1
    for _ in xrange(1000)) / 1000.0  # --> 0.737
If I change it to 1e-324 it will always be 0, and if I change it to 1e-322 the average will be ~90%.
I made a dirty program that tries to find the sweet spot between 1e-324 and 1e-322 by repeatedly dividing and multiplying the bound:
v = 1e-323
n_runs = 100000
target = n_runs/2
result = 0
while True:
    result = sum(0 if random.uniform(0, v) == 0.0 else 1 for _ in xrange(n_runs))
    if result > target:
        v /= 1.5
    elif result < target:
        v *= 1.5 / 1.4
    else:
        break
print v
This ends up with 4.94065645841e-324, but it will still be wrong if I run it enough times.
Is there a way to find this number without the dirty script I wrote? I know that Python has an internal minimum float value, shown in sys.float_info.min, which on my PC is 2.22507385851e-308, but I don't see how to use it to solve this problem.
Sorry if this feels more like a puzzle than a proper question, but I'm not able to answer it myself.
I know that Python has an internal minimum float value, shown in sys.float_info.min, which on my PC is 2.22507385851e-308, but I don't see how to use it to solve this problem.
2.22507385851e-308 is not the smallest positive float value; it is the smallest positive normalized float value. The smallest positive float value is 2^-52 times that, that is, near 5e-324.
2^-52 is called the "machine epsilon", and it is usual for the "min" of a floating-point type to denote a value that is neither the least of all comparable values (that would be -inf), nor the least of the finite values (that would be -max), nor the least of the positive values.
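A quick sanity check of that relationship (a minimal sketch, assuming IEEE-754 doubles):

import sys

print(sys.float_info.min)            # 2.2250738585072014e-308, smallest normalized
print(sys.float_info.min * 2**-52)   # 5e-324, smallest positive (subnormal) double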
Then, the next problem you face is that random.uniform is not uniform down to that level. It probably works fine when you pass it a normalized number, but if you pass it the smallest positive representable float, the computation it does internally may be very approximate and lead it to behave differently than the documentation says. It does, however, appear to work surprisingly well, judging by the results of your "dirty script".
Here's the random.uniform implementation, according to the source:
from os import urandom as _urandom

BPF = 53        # Number of bits in a float
RECIP_BPF = 2**-BPF

def uniform(self, a, b):
    "Get a random number in the range [a, b) or [a, b] depending on rounding."
    return a + (b-a) * self.random()

def random(self):
    """Get the next random number in the range [0.0, 1.0)."""
    return (int.from_bytes(_urandom(7), 'big') >> 3) * RECIP_BPF
So your problem boils down to finding a number b that gives 0 when multiplied by a number less than 0.5 and a different result when multiplied by a number greater than 0.5. I've found that, on my machine, that number is 5e-324.
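A small check of that boundary behaviour (assuming IEEE-754 round-to-nearest doubles):

# 5e-324 is the smallest positive double, so the product can only
# round to one of two values: 0.0 or 5e-324 itself
print(5e-324 * 0.49)   # 0.0
print(5e-324 * 0.51)   # 5e-324
print(5e-324 * 0.5)    # 0.0 (the exact tie rounds to even, i.e. to zero)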
To test it, I've made the following script:
from random import uniform

def test():
    runs = 1000000
    results = [0, 0]
    for i in range(runs):
        if uniform(0, 5e-324) == 0:
            results[0] += 1
        else:
            results[1] += 1
    print(results)
Which returned results consistent with a 50% probability:
>>> test()
[499982, 500018]
>>> test()
[499528, 500472]
>>> test()
[500307, 499693]
I need to compute the ratio of two numbers that are accumulated in a loop.
The problem is that b becomes too big and is equal to numpy.inf at some point.
However, the ratio a/b should exist and not be zero.
for i in range(num_iter):
    a += x[i]
    b += y[i]
return a/b
What are the tricks for computing this type of limit? Please let me know if this is the wrong Stack Exchange site for the question.
Update:
The loop is finite; I have two arrays x and y that can be analysed in advance for large values. I guess dividing x and y by some large number (rescaling) might work? See the sketch below.
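If the arrays really are available up front, here is a minimal sketch of that rescaling idea, assuming x and y are numpy arrays; the common factor s cancels in the ratio:

import numpy as np

def ratio(x, y):
    # divide both arrays by the same constant before summing:
    # sum(x/s) / sum(y/s) == sum(x) / sum(y), but the partial sums
    # now stay comfortably inside floating-point range
    s = max(np.abs(x).max(), np.abs(y).max())
    return (x / s).sum() / (y / s).sum()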
You don't say what you are adding to a and b each time through the loop, but presumably both values get so large that any error introduced by truncating the increments to integers will be negligible in the limit. This way you use arbitrary-precision integers rather than floating-point values, which have both an upper bound on their magnitude and limited precision.
for i in range(num_iter):
    a += int(...)
    b += int(...)
return a/b
Building on Chepner's idea, how about tracking the float part and the int part separately, then folding the int part back in once it is larger than 1? Something like this:
for i in range(num_iter):
    # accumulate only the fractional parts of the increments
    afloat += ... - int(...)
    bfloat += ... - int(...)
    # add the integer parts, plus any whole units the fractions have built up
    a += int(...) + int(afloat)
    b += int(...) + int(bfloat)
    # remove the carried whole units from the fractional accumulators
    afloat -= int(afloat)
    bfloat -= int(bfloat)
return a/b
If a and b have the same length, the ratio of the means equals the ratio of the sums. If they don't, you can use the ratio of the number of items to correct your ratio.
# numpy.append does not modify in place, so reassign each time
for i in xrange(num_iter):
    a = numpy.append(a, ...)
    b = numpy.append(b, ...)
# sum(a)/sum(b) == (mean(a)/mean(b)) * (len(a)/len(b))
return (numpy.mean(a)/numpy.mean(b)) * (float(len(a))/len(b))
It could be slow and it will use more memory, but I think it should work. If you don't want to store everything, you can calculate the mean of every N elements and take a weighted mean of those when you need the final value. A rough sketch follows.
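A rough sketch of that block-wise idea (the helper name and block size are just for illustration):

import numpy as np

def blocked_mean(values, block=10000):
    # mean of each block of elements, combined by a weighted average,
    # so no intermediate value ever approaches the overflow threshold
    means = [np.mean(values[i:i + block]) for i in range(0, len(values), block)]
    counts = [min(block, len(values) - i) for i in range(0, len(values), block)]
    return np.average(means, weights=counts)

def ratio(x, y):
    # sum(x)/sum(y) == (mean(x)/mean(y)) * (len(x)/len(y))
    return (blocked_mean(x) / blocked_mean(y)) * (float(len(x)) / len(y))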
I am new to Python and am trying to create a program for a project. Firstly, I need to generate a point between 0 and 1.0 inclusive of both endpoints ([0, 1.0]). I searched the Python library documentation for functions (https://docs.python.org/2/library/random.html) and found this:
random.random()
This returns the next random floating point number in the range [0.0, 1.0), which is a problem since it does not include 1. Although the chance of actually generating exactly 1 is very slim anyway, it is still important because this is a scientific program that will be used in a larger data collection.
I also found this function:
random.randint
This will return an integer, which is also a problem.
I researched on this website and in previously asked questions, and found that this function:
random.uniform(a, b)
will only return a number that is greater than or equal to a and less than b.
Does anyone know how to create a random function in Python that will include [0, 1.0]?
Please correct me if I was mistaken on any of this information. Thank you.
*The random numbers represent the x value of a three dimensional point on a sphere.
Could you make do with something like this?
random.randint(0, 1000) / 1000.0
Or more formally:
precision = 3
randomNumber = random.randint(0, 10 ** precision) / float(10 ** precision)
Consider the following function built on top of random.uniform. I believe that the re-sampling approach should cause all numbers in the desired interval to appear with equal probability, because the probability of returning a candidate > b is 0, and all other numbers were equally likely to begin with.
import sys
import random

def myRandom(a, b):
    candidate = random.uniform(a, b + sys.float_info.epsilon)
    while candidate > b:
        candidate = random.uniform(a, b + sys.float_info.epsilon)
    return candidate
As gnibbler mentioned below, for the general case it may make more sense to change both calls to the following. Note that this will only work correctly if b > 0.
candidate = random.uniform(a, b*1.000001)
Try this:
import random
random.uniform(0.0, 1.0)
Which will, according to the documentation [Python 3.x]:
Return a random floating point number N such that a <= N <= b for a <= b and b <= N <= a for b < a.
Notice that the above paragraph states that b is in fact included in the range of possible values returned by the function. However, beware of the second part (emphasis mine):
The end-point value b may or may not be included in the range depending on floating-point rounding in the equation a + (b-a) * random().
For floating point numbers you can use numpy's machine-limits class for floats (numpy.finfo) to get the machine epsilon for 64-bit or 32-bit floating point numbers. In theory, you should be able to add this value to b in random.uniform(a, b), making 1 inclusive in your generator:
import numpy
import random

def randomDoublePrecision():
    floatinfo = numpy.finfo(float)
    epsilon = floatinfo.eps
    a = random.uniform(0, 1 + epsilon)
    return a
This assumes that you are using full precision floating point numbers for your number generator. For more info read this Wikipedia article.
Would it be just:

list_rnd = [random.random() for i in range(_number_of_numbers_you_want)]
list_rnd = [item/max(list_rnd) for item in list_rnd]

Generate a list of random numbers and divide each by the list's maximum. The resulting list still follows a uniform distribution.
I've had the same problem; this should help you. In the code below:

a: lower limit,
b: upper limit, and
digit: number of digits after the decimal point
import random

def konv_des(bin, a, b, l, digit):
    des = int(bin, 2)
    return round(a + (des*(b-a)/((2**l)-1)), digit)

def rand_bin(p):
    key1 = ""
    for i in range(p):
        temp = str(random.randint(0, 1))
        key1 += temp
    return key1

def rand_chrom(a, b, digit):
    # find the smallest bit length l that can resolve (b-a)*10**digit steps
    l = 1
    eq = False
    while eq == False:
        l += 1
        eq = 2**(l-1) < (b-a)*(10**digit) and (b-a)*(10**digit) <= (2**l)-1
    return konv_des(rand_bin(l), a, b, l, digit)

# run
rand_chrom(0, 1, 4)
The method I've used to try to solve this works, but I don't think it's very efficient, because as soon as I enter a number that is too large it doesn't work.
def fib_even(n):
    fib_even = []
    a, b = 0, 1
    for i in range(0, n):
        c = a+b
        if c%2 == 0:
            fib_even.append(c)
        a, b = b, a+b
    return fib_even

def sum_fib_even(n):
    fib_evens = fib_even(n)
    s = 0
    for i in fib_evens:
        s = s+i
    return s

n = 4000000
answer = sum_fib_even(n)
print answer
This doesn't work for 4000000, for example, but it will work for 400. Is there a more efficient way of doing this?
It is not necessary to compute all the Fibonacci numbers.
Note: in what follows I use the more standard initial values F[0]=0, F[1]=1 for the Fibonacci sequence. Project Euler #2 starts its sequence with F[2]=1, F[3]=2, F[4]=3, .... For this problem the result is the same for either choice.
Summation of all Fibonacci numbers (as a warm-up)
The recursion equation
F[n+1] = F[n] + F[n-1]
can also be read as
F[n-1] = F[n+1] - F[n]
or
F[n] = F[n+2] - F[n+1]
Summing this up for n from 1 to N (remember F[0]=0, F[1]=1) gives on the left the sum of the Fibonacci numbers, and on the right a telescoping sum in which all of the inner terms cancel:
sum(n=1 to N) F[n] = (F[3]-F[2]) + (F[4]-F[3]) + (F[5]-F[4])
                   + ... + (F[N+2]-F[N+1])
                   = F[N+2] - F[2]
So for the sum using the number N=4,000,000 of the question, one would just have to compute
F[4,000,002] - 1
with one of the super-fast methods for computing a single Fibonacci number: either halving-and-squaring, equivalent to exponentiation of the iteration matrix, or the exponential formula based on the golden ratio (computed to the necessary precision).
Since you gain about 4 additional digits with every 20 Fibonacci numbers, the final result will consist of about 800,000 digits. Better use a data type that can contain all of them.
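As a minimal sketch of the halving-and-squaring idea, the standard fast-doubling identities F[2k] = F[k]*(2*F[k+1] - F[k]) and F[2k+1] = F[k]^2 + F[k+1]^2 compute a single Fibonacci number in O(log n) arithmetic operations on big integers:

def fib_pair(n):
    """Return (F[n], F[n+1]) by fast doubling."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)   # a = F[k], b = F[k+1] with k = n // 2
    c = a * (2*b - a)         # F[2k]
    d = a*a + b*b             # F[2k+1]
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)         # shift up one index for odd n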
Summation of the even Fibonacci numbers
Just inspecting the first 10 or 20 Fibonacci numbers reveals that all even members have an index of the form 3*k. Check this by subtracting two successive recursions to get

F[n+3] = 2*F[n+2] - F[n]

so F[n+3] always has the same parity as F[n]. Investing a little more computation, one finds a recursion for members three indices apart:

F[n+3] = 4*F[n] + F[n-3]
Setting
S = sum(k=1 to K) F[3*k]
and summing the recursion over n=3*k gives
F[3*K+3]+S-F[3] = 4*S + (-F[3*K]+S+F[0])
or, since F[3*K+3] + F[3*K] = F[3*K+2] + (F[3*K+1] + F[3*K]) = 2*F[3*K+2],

4*S = (F[3*K+3]+F[3*K]) - (F[3]+F[0]) = 2*F[3*K+2] - 2*F[2]
So the desired sum has the formula
S = (F[3*K+2]-1)/2
A quick calculation with the golden-ratio formula reveals what N should be so that F[N] is just below the boundary Max, and thus what K = N div 3 should be:
N = Floor( log( sqrt(5)*Max )/log( 0.5*(1+sqrt(5)) ) )
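In Python, that bound might look like the following sketch (max_index is a hypothetical helper name):

import math

def max_index(Max):
    # largest n with F[n] <= Max, per the golden-ratio formula above
    phi = 0.5 * (1 + math.sqrt(5))
    return int(math.log(math.sqrt(5) * Max) / math.log(phi))

print(max_index(4000000))   # 33, so K = 33 // 3 = 11 for the original problem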
Reduction of the Euler problem to a simple formula
In the original problem, one finds that N=33 and thus the sum is
S = (F[35]-1)/2;
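A quick cross-check of that closed form against a direct loop (standard indexing, F[0]=0, F[1]=1):

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# sum the even-valued terms not exceeding 4,000,000 directly
a, b, total = 0, 1, 0
while a <= 4000000:
    if a % 2 == 0:
        total += a
    a, b = b, a + b

print(total == (fib(35) - 1) // 2)   # True; both give 4613732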
Reduction of the problem in the question and consequences
Taking the misrepresented problem in the question at face value, N=4,000,000, so K=1,333,333 and the sum is

(F[4,000,001]-1)/2

which still has about 836,000 digits. And yes, big-integer types can handle such numbers; it just takes time to compute with them.

If printed in a format of 60 lines of 80 digits per page, this number would fill about 174 sheets of paper, just to give an idea of what the output would look like.
It should not be necessary to store all of the intermediate Fibonacci numbers; perhaps that storage is what causes the performance problem.