I would like to compute

M(n) = sum(k=1..n) C(n,k) * (1/n) * (1 - k/n)^(2*n-k) * (k/n)^(k-1)

for values of n up to 1000000 as accurately as possible. Here is some sample code.
from __future__ import division
from scipy.misc import comb

def M(n):
    return sum(comb(n,k,exact=True)*(1/n)*(1-k/n)**(2*n-k)*(k/n)**(k-1) for k in xrange(1,n+1))

for i in xrange(1,1000000,100):
    print i,M(i)
The first problem is that I get OverflowError: long int too large to convert to float when n = 1101. This is because comb(n,k,exact=True) is too large to be converted to a float. The end result, however, is always a number around 0.159.
I asked a related question at How to compute sum with large intermediate values; however, this question is different for three main reasons.
The formula I want to compute is different which causes different problems.
The solution proposed before to use exact=True does not help here as can be seen in the example I gave. Coding up my own implementation of comb is also not going to work as I still need to perform the floating point division.
I need to compute the answer for much bigger values than before which causes new problems. I suspect it can't be done without coding up the sum in some clever way.
A solution that doesn't crash is to use
from fractions import Fraction

def M2(n):
    return sum(comb(n,k,exact=True)*Fraction(1,n)*(1-Fraction(k,n))**(2*n-k)*Fraction(k,n)**(k-1) for k in xrange(1,n+1))

for i in xrange(1,1000000,100):
    print i, M2(i)*1.0
Unfortunately it is now so slow that I don't get an answer for n=1101 in a reasonable amount of time.
So the second problem is how to make it fast enough to complete for large n.
You can compute each summand with a logarithm transformation that replaces multiplication, division, and exponentiation with addition, subtraction, and multiplication, respectively.
from math import log, exp

def summand(n, k):
    # Work in log space: products become sums, powers become multiplications.
    lk = log(k)
    ln = log(n)
    a = (lk - ln) * (k - 1)              # log((k/n)**(k-1))
    b = (log(n - k) - ln) * (2 * n - k)  # log((1-k/n)**(2*n-k))
    c = -ln                              # log(1/n)
    d = sum(log(x) for x in xrange(n - k + 1, n + 1)) - sum(log(x) for x in xrange(1, k + 1))  # log(comb(n,k))
    return exp(a + b + c + d)

def M(n):
    return sum(summand(n, k) for k in xrange(1, n))
Note that when k=n the summand is zero, so I do not compute it; log(n-k) would be undefined there.
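As an aside, the two log-sums used for log(comb(n,k)) make each summand O(n) to evaluate. A standard alternative (my suggestion, not part of the original answer) is math.lgamma, which gives log(comb(n,k)) in constant time:

from math import lgamma, log, exp

def summand_lgamma(n, k):
    # log(comb(n, k)) == lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1)
    ln = log(n)
    a = (log(k) - ln) * (k - 1)
    b = (log(n - k) - ln) * (2 * n - k)
    d = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(a + b - ln + d)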
You can use gmpy2. It has arbitrary precision floating point arithmetic with large exponent bounds.
from __future__ import division
from gmpy2 import comb, mpfr, fsum

def M(n):
    return fsum(comb(n,k)*(mpfr(1)/n)*(mpfr(1)-mpfr(k)/n)**(mpfr(2)*n-k)*(mpfr(k)/n)**(k-1) for k in xrange(1,n+1))

for i in xrange(1,1000000,100):
    print i,M(i)
Here is an excerpt of the output:
2001 0.15857490038127975
2101 0.15857582611615381
2201 0.15857666768820194
2301 0.15857743607577454
2401 0.15857814042739268
2501 0.15857878842787806
2601 0.15857938657957615
Disclaimer: I maintain gmpy2.
A rather brutal method is to compute all the factors and then multiply them in such a way that the result stays around 1.0 (Python 3.x):
def M(n):
    return sum(summand(n, k) for k in range(1, n + 1))

def f1(n, k):
    # Factors >= 1: k**(k-1) and n*(n-1)*...*(n-k+1).
    for i in range(k - 1):
        yield k
    for i in range(k):
        yield n - i

def f2(n, k):
    # Factors <= 1: (1/n)**(k-1), (1-k/n)**(2*n-k), 1/n and 1/k!.
    for i in range(k - 1):
        yield 1 / n
    for i in range(2 * n - k):
        yield 1 - k / n
    yield 1 / n
    for i in range(2, k + 1):
        yield 1 / i

def summand(n, k):
    # Alternate between the two factor streams to keep the running
    # product close to 1.0, avoiding overflow and underflow.
    result = 1.0
    factors1 = f1(n, k)
    factors2 = f2(n, k)
    while True:
        empty1 = False
        for factor in factors1:
            result *= factor
            if result > 1:
                break
        else:
            empty1 = True
        for factor in factors2:
            result *= factor
            if result < 1:
                break
        else:
            if empty1:
                break
    return result
For M(1101) I get 0.15855899364641846, but it takes a few seconds. M(2000) takes about 14 seconds and yields 0.15857489065619598.
(I'm sure it can be optimised.)
I am an amateur Python coder trying to find an efficient solution for the Project Euler Digit Sum problem. My code returns the correct result but it is inefficient for large integers such as 1234567890123456789. I know that the inefficiency lies in my sigma_sum function, where there is a 'for' loop.
I have tried various alternate solutions, such as loading the values into a numpy array, but I ran out of memory with large integers with this approach. I am eager to learn more efficient solutions.
import math

def sumOfDigits(n: int):
    digitSum = 0
    if n < 10:
        return n
    else:
        for i in str(n):
            digitSum += int(i)
    return digitSum

def sigma_sum(start, end, expression):
    return math.fsum(expression(i) for i in range(start, end))

def theArguement(n: int):
    return n / sumOfDigits(n)

def F(N: int) -> float:
    """
    >>> F(10)
    19
    >>> F(123)
    1.187764610390e+03
    >>> F(12345)
    4.855801996238e+06
    """
    s = sigma_sum(1, N + 1, theArguement)
    if s.is_integer():
        print("{:0.0f}".format(s))
    else:
        print("{:.12e}".format(s))

print(F(123))

if __name__ == '__main__':
    import doctest
    doctest.testmod()
Try solving a different problem.
Define G(n) to be a dictionary. Its keys are integers representing digit sums and its values are the sum of all positive integers < n whose digit sum is the key. So
F(n) = sum(v / k for k, v in G(n + 1).items())
[Using < instead of ≤ simplifies the calculations below]
Given the value of G(a) for any value, how would you calculate G(10 * a)?
This gives you a nice easy way to calculate G(x) for any value of x. Calculate G(x // 10) recursively, use that to calculate the value G((x // 10) * 10), and then manually add the few remaining elements in the range (x // 10) * 10 ≤ i < x.
Getting from G(a) to G(10 * a) is mildly tricky, but not overly so. If your code is correct, you can use calculating G(12346) as a test case to see if you get the right answer for F(12345).
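To make this concrete, here is a minimal sketch of the approach (all names are mine, not from the original problem). The mildly tricky part is that G needs a per-key count alongside each sum, so the sketch returns both dictionaries:

def G(x):
    # Returns (sums, counts): for each digit sum s, the sum and the count
    # of all integers 0 <= i < x whose digits add up to s.
    if x == 0:
        return {}, {}
    sums, counts = G(x // 10)
    new_sums, new_counts = {}, {}
    # Lift G(x // 10) to G((x // 10) * 10): each i < x // 10 spawns the
    # ten numbers 10*i + d, and digitsum(10*i + d) = digitsum(i) + d.
    for s, c in counts.items():
        for d in range(10):
            new_sums[s + d] = new_sums.get(s + d, 0) + 10 * sums[s] + d * c
            new_counts[s + d] = new_counts.get(s + d, 0) + c
    # Manually add the few remaining elements (x // 10) * 10 <= i < x.
    for i in range((x // 10) * 10, x):
        s = sum(int(ch) for ch in str(i))
        new_sums[s] = new_sums.get(s, 0) + i
        new_counts[s] = new_counts.get(s, 0) + 1
    return new_sums, new_counts

def F(n):
    sums, _ = G(n + 1)
    return sum(v / k for k, v in sums.items() if k > 0)  # skip the entry for 0

F(10) returns 19.0, matching the doctest above, and the recursion depth is just the number of digits of N, so values like 1234567890123456789 are no longer a problem.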
I have put together a function for finding combinations using recursion without hitting the built-in limit in Python. For example, you could calculate choose(1000, 500).
This is how it looks right now:
def choose(n, k):
    if not k:
        return 1
    elif n < k:
        return 0
    else:
        return ((n + 1 - k) / k) * choose(n, k - 1)
It works exactly how I want it to work. If k is 0 then return 1, and if n is less than k then return 0 (this is according to the mathematical definitions I found on Wikipedia). However, the problem is that I don't quite understand the last line (I found it while browsing the web). Before the line I'm using at the moment, this was the last line I had in the function:
return choose(n-1,k-1) + choose(n-1, k)
I also found this one on Wikipedia (though I don't think I understand it 100% either). But it would always result in an error because of the built-in limitation in Python, whereas the new line I'm using does not. I understand that the new line is much more efficient, because, for example, we don't split the problem up into two subproblems.
So again, what I'm asking is if there are any kind souls out there who could explain (in an understandable manner) how this line of code works in the function:
return ((n + 1 - k) / k) * choose(n, k - 1)
You would first need to know how the combination C(n, k) is defined. The formula for C(n, k) is:

C(n, k) = n! / (k! * (n - k)!)

or, equivalently:

C(n, k) = (n * (n - 1) * ... * (n - k + 1)) / k!

which can be reformed into a recursive expression:

C(n, k) = ((n + 1 - k) / k) * C(n, k - 1)

which is what you implemented.
For the second implementation, this is Pascal's formula. A recursive implementation would be very slow (and could potentially overflow the stack, yes). A more efficient implementation stores each C(n, k) in a two-dimensional array and calculates the values in order, as sketched below.
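A minimal dynamic-programming sketch of that idea (the function name is mine):

def choose_dp(n, k):
    # Pascal's formula: C(i, j) = C(i-1, j-1) + C(i-1, j),
    # filled row by row so every entry is computed exactly once.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1  # C(i, 0) = 1
        for j in range(1, min(i, k) + 1):
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

Since it only ever adds integers, it is exact, unlike the floating-point recursion.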
Spoiler: the bottom line will be that you should use the closed form n! / (k! (n - k)!).
In many other languages the solution would be to make your function tail-recursive, but Python does not support this kind of optimization. Thus implementing a recursive solution is simply not the best option.
You could increase the maximal recursion depth with sys.setrecursionlimit (see the snippet below), but this is not optimal.
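For completeness, a quick sketch of that workaround (the limit value here is arbitrary):

import sys

# Raise the recursion limit (CPython's default is typically 1000) so a
# deep choose(n, k) recursion no longer raises RecursionError.
sys.setrecursionlimit(10000)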
An improvement would be to compute n-choose-k with iteration.
def choose(n, k):
    if n < k:
        return 0
    ans = 1
    while k > 0:
        ans *= (n + 1 - k) / k
        k -= 1
    return ans
The above will still accumulate error due to float arithmetic, though. The very best approach is thus to use the closed form of n-choose-k with integer arithmetic.
from math import factorial

def choose(n, k):
    # Floor division keeps everything in exact integer arithmetic;
    # the closed form always divides evenly.
    return factorial(n) // (factorial(k) * factorial(n - k))
I'm generating prime numbers from Fibonacci numbers as follows (using Python, with mpmath and sympy for arbitrary precision):
from mpmath import *

def GCD(a, b):
    while a:
        a, b = fmod(b, a), a
    return b

def generate(x):
    mp.dps = round(x, int(log10(x))*-1)
    if x == GCD(x, fibonacci(x-1)):
        return True
    if x == GCD(x, fibonacci(x+1)):
        return True
    return False

for x in range(1000, 2000):
    if generate(x):
        print(x)
It's a rather small algorithm but it seemingly generates all primes (except for 5 somehow, but that's another question). I say seemingly because a very small percentage (0.5% under 1000 and 0.16% under 10K, getting smaller and smaller) isn't prime. For instance, under 1000 it also generates 323, 377 and 442. These numbers are not prime.
Is there something off in my script? I try to account for precision by relating the .dps setting to the number being calculated. Can it really be that the Fibonacci numbers and the primes seem so related, but the relationship breaks down once you look at the details? :)
For this type of problem, you may want to look at the gmpy2 library. gmpy2 provides access to the GMP multiple-precision library, which includes gcd() and fib() functions that calculate the greatest common divisor and the n-th Fibonacci number quickly, using only integer arithmetic.
Here is your program re-written to use gmpy2.
import gmpy2

def generate(x):
    if x == gmpy2.gcd(x, gmpy2.fib(x-1)):
        return True
    if x == gmpy2.gcd(x, gmpy2.fib(x+1)):
        return True
    return False

for x in range(7, 2000):
    if generate(x):
        print(x)
You shouldn't be using any floating-point operations. You can calculate the GCD just using the builtin % (modulo) operator.
Update
As others have commented, you are checking for Fibonacci pseudoprimes. The actual test is slightly different from your code. Let's call the number being tested n. If n is divisible by 5, then the test passes if n evenly divides fib(n). If n divided by 5 leaves a remainder of either 1 or 4, then the test passes if n evenly divides fib(n-1). If n divided by 5 leaves a remainder of either 2 or 3, then the test passes if n evenly divides fib(n+1). Your code doesn't properly distinguish between the three cases.
If n evenly divides another number, say x, it leaves a remainder of 0. This is equivalent to x % n being 0. Calculating all the digits of the n-th Fibonacci number is not required. The test just cares about the remainder. Instead of calculating the Fibonacci number to full precision, you can calculate the remainder at each step. The following code calculates just the remainder of the Fibonacci numbers. It is based on the code given by #pts in Python mpmath not arbitrary precision?
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def fib_mod(n, m):
    # Fast-doubling Fibonacci, reduced mod m at every step:
    #   F(2k)   = F(k) * (2*F(k+1) - F(k))
    #   F(2k+1) = F(k)**2 + F(k+1)**2
    if n < 0:
        raise ValueError
    def fib_rec(n):
        if n == 0:
            return 0, 1
        else:
            a, b = fib_rec(n >> 1)  # (F(k), F(k+1)) with k = n // 2
            c = a * ((b << 1) - a)
            d = b * b + a * a
            if n & 1:
                return d % m, (c + d) % m
            else:
                return c % m, d % m
    return fib_rec(n)[0]

def is_fib_prp(n):
    if n % 5 == 0:
        return not fib_mod(n, n)
    elif n % 5 == 1 or n % 5 == 4:
        return not fib_mod(n-1, n)
    else:
        return not fib_mod(n+1, n)
It's written in pure Python and is very quick.
The sequence of numbers commonly known as the Fibonacci numbers is just a special case of a general Lucas sequence L(n) = p*L(n-1) - q*L(n-2). The usual Fibonacci numbers are generated by (p,q) = (1,-1). gmpy2.is_fibonacci_prp() accepts arbitrary values for p,q; gmpy2.is_fibonacci_prp(n, 1, -1) should match the results of the is_fib_prp(n) given above.
Disclaimer: I maintain gmpy2.
This isn't really a Python problem; it's a math/algorithm problem. You may want to ask it on the Math StackExchange instead.
Also, there is no need for any non-integer arithmetic whatsoever: you're computing floor(log10(x)) which can be done easily with purely integer math. Using arbitrary-precision math will greatly slow this algorithm down and may introduce some odd numerical errors too.
Here's a simple floor_log10(x) implementation:
def floor_log10(x):
    # Count how many times x can be floor-divided by 10 before it
    # drops below 10; that count is floor(log10(x)).
    if x < 1:
        raise ValueError
    res = 0
    while x >= 10:
        x //= 10
        res += 1
    return res
For calculating Catalan numbers, I wrote two pieces of code. One (def "catalan") works recursively and returns the right Catalan numbers.
dicatalan = {}

def catalan(n):
    if n == 0:
        return 1
    else:
        res = 0
        if n not in dicatalan:
            for i in range(n):
                res += catalan(i) * catalan(n - i - 1)
            dicatalan[n] = res
        return dicatalan[n]
The other (def "catalanFormula") applies the product formula, but stops calculating accurately from n=30 onward. The problem derives from floating point: for k=9 the program returns "6835971.999999999" instead of "6835972", and from this moment on it accumulates mistakes until the final answer is wrong.
(The print line is for checking.)
def catalanFormula(n):
    result = 1
    for k in range(2, n + 1):
        result *= ((n + k) / k)
        print(result)
    return int(result)
I tried rounding and failed, tried the Decimal module and still got nothing right.
I need catalanFormula to work as perfectly as catalan.
Any Ideas?
Thanks!
Try calculating the numerator and denominator separately and dividing them at the end. If you do this, you should be able to make it a little bit farther with floating point.
I'm sure Python has a package for rational numbers (the standard library's fractions module, for one). Using rationals is an even better idea.
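A minimal sketch of the rational-number route, using fractions (the function name is mine):

from fractions import Fraction

def catalanFormulaExact(n):
    # Same product as catalanFormula, but kept as an exact rational;
    # the final result is always an integer (the n-th Catalan number).
    result = Fraction(1)
    for k in range(2, n + 1):
        result *= Fraction(n + k, k)
    return int(result)

catalanFormulaExact(30) gives 3814986502092304 exactly, with no rounding step needed.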
See the bigfloat package.
from bigfloat import *

setcontext(quadruple_precision)

def catalanFormula(n):
    result = BigFloat(1)
    for k in range(2, n + 1):
        result *= ((BigFloat(n) + BigFloat(k)) / BigFloat(k))
    return result

catalanFormula(30)
Output:
BigFloat.exact('3814986502092304.00000000000000000043', precision=113)
The Lucas-Lehmer primality test takes a prime p and determines whether 2**p - 1 is a Mersenne prime. One of the bottlenecks is the modulus operation in the calculation of (s**2 - 2) % (2**p - 1).
Using bitwise operations can speed things up considerably (see the L-L link), the best I have so far being:
def mod(n, p):
    """Returns the value of (s**2 - 2) % (2**p - 1)."""
    Mp = (1 << p) - 1
    while n.bit_length() > p:  # For Python < 2.7 use len(bin(n)) - 2 > p
        n = (n & Mp) + (n >> p)
    if n == Mp:
        return 0
    else:
        return n
A simple test case is where p has 5-9 digits and s has 10,000+ digits (or more; it is not important what they are). Solutions can be tested by mod((s**2 - 2), p) == (s**2 - 2) % (2**p - 1). Keep in mind that p - 2 iterations of this modulus operation are required in the L-L test, each with exponentially increasing s, hence the need for optimization.
Is there a way to speed this up further, using pure Python (Python 3 included)? Is there a better way?
The best improvement I could find was removing Mp = (1<<p) - 1 from the modulus function altogether and pre-calculating it in the L-L function before starting the iterations of the L-L test, as in the sketch below. Using while n > Mp: instead of while n.bit_length() > p: also saved some time.
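A minimal sketch of the whole L-L loop with both changes applied (assuming p is an odd prime; the function name is mine):

def lucas_lehmer(p):
    # Mp is hoisted out of the reduction loop and computed only once.
    Mp = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        n = s * s - 2
        # Reduce mod 2**p - 1 with shifts and masks, since 2**p == 1 (mod Mp).
        while n > Mp:
            n = (n & Mp) + (n >> p)
        s = 0 if n == Mp else n
    return s == 0  # True iff 2**p - 1 is prime

print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]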
In the case where n is much longer than 2^p, you can avoid some quadratic-time pain by doing something like this:
def mod1(n, p):
    # While n is much longer than p bits, fold it roughly in half:
    # reducing mod 2**(k1*p) - 1 preserves the residue mod 2**p - 1,
    # because 2**p - 1 divides 2**(k1*p) - 1.
    while n.bit_length() > 3*p:
        k = n.bit_length() // p
        k1 = k >> 1
        k1p = k1 * p
        M = (1 << k1p) - 1
        n = (n & M) + (n >> k1p)
    # Finish with the usual p-bit folds.
    Mp = (1 << p) - 1
    while n.bit_length() > p:
        n = (n & Mp) + (n >> p)
    if n == Mp:
        return 0
    return n
[EDITED because I screwed up the formatting before; thanks to Benjamin for pointing this out. Moral: don't copy-and-paste from an Idle window into SO. Sorry!]
(Note: the criterion for halving the length of n rather than taking p off it, and the exact choice of k1, are both a bit wrong, but it doesn't matter so I haven't bothered fixing them.)
If I take p=12345 and n=9**200000 (yes, I know p then isn't prime, but that doesn't matter here) then this is about 13 times faster.
Unfortunately this will not help you, because in the L-L test n is never bigger than about (2^p)^2. Sorry.