def f(n):
    Total_Triangles = 0
    for i in range(1, n + 1):
        term = 3 ** (i - 1)
        Total_Triangles += term
    return Total_Triangles
Q = int(input())
for i in range(Q):
    n = int(input())
    Ans = f(n) * 4 + 1
    print(Ans % 1000000007)
How do I tackle the time limit error in this code?
Karan has a good answer. It will speed up your original approach, but you still end up calculating huge numbers. Fortunately, Python's Long type can do that, but I expect that it isn't as efficient as the native 32-bit or 64-bit integer types.
You are told to give the answer modulo a huge number M, 1,000,000,007. You can improve the algorithm by using modular arithmetic throughout, so that your numbers never get very big. In modular arithmetic, this is true:
(a + b) % M == (a % M + b % M) % M
(a * b) % M == (a % M * b % M) % M
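As a quick sanity check (my own addition, not part of the original answer), both identities can be verified numerically for arbitrary test values:
a, b, M = 123456789123456789, 987654321987654321, 1000000007
assert (a + b) % M == (a % M + b % M) % M
assert (a * b) % M == ((a % M) * (b % M)) % M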
One approach could be to calculate all possible Q values up front using modular arithmetic:
M = 1000000007

def makef(m):
    """Generator to create all sum(3**i) mod M"""
    n = 1
    s = 0
    for i in range(m):
        yield s
        s = (s + n) % M
        n = ((n + n) % M + n) % M

f = list(makef(100000))
Q = int(input())
for i in range(Q):
    n = int(input())
    print((f[n] * 4 + 1) % M)
This will do the calculations in a big loop, but only once and should be fast enough for your requirements.
Python offers you a second way: The expression a ** b is mapped to the in-built function pow(a, b). This function can take a third parameter: a base for modular arithmetic, so that pow(a, b, M) will calculate (a ** b) % M without generating huge intermediate results.
Now you can use Karan's neat formula. But wait, there's a pitfall: you have to divide the result of the power by two. The modular relationships above do not hold for division. For example, with a modulus of 10, (12 // 2) % 10 is 6, but if you apply the modulo operator first, as the pow function does, you get (12 % 10) // 2, which is 1 and not what you want. A solution is to calculate the power modulo 2 * M and then divide by 2:
M = 1000000007

def f(n):
    return pow(3, n, 2 * M) // 2

Q = int(input())
for i in range(Q):
    n = int(input())
    print((f(n) * 4 + 1) % M)
(Note that all powers of 3 are odd, so I have removed the - 1 and let the integer division do the work.)
Side note: The value of M is chosen so that the addition of two numbers that are smaller than M fits in a signed 32-bit integer. That means that users of C, C++ or Java don't have to use bignum libraries. But note that 3 * n can still overflow a signed int, so that you have to take care when multiplying by three: Use ((n + n) % M + n) % M instead.
You want to find 3 ** 0 + 3 ** 1 + ... + 3 ** (n - 1). This is just a geometric series with first term a = 1, common ratio r = 3 and n terms, and using the geometric series summation formula, we can find f(n) much faster when defined as so:
def f(n):
    return (3 ** n - 1) // 2
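As a quick check (my own addition, not part of the answer), the closed form agrees with the direct sum for small n:
def f_loop(n):
    # direct sum 3**0 + 3**1 + ... + 3**(n-1), for comparison only
    return sum(3 ** i for i in range(n))

assert all(f(n) == f_loop(n) for n in range(1, 20))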
I have to write a program to calculate a**b % c where b and c are both very large numbers. If I just use a**b % c, it's really slow. Then I found that the built-in function pow() can do this really fast by calling pow(a, b, c).
I'm curious to know how Python implements this. Or where could I find the source code file that implements this function?
If a, b and c are integers, the implementation can be made more efficient by binary exponentiation and reducing modulo c in each step, including the first one (i.e. reducing a modulo c before you even start). This is indeed what the implementation of long_pow() does. The function has over two hundred lines of code, as it has to deal with reference counting, and it handles negative exponents and a whole bunch of special cases.
At its core, the idea of the algorithm is rather simple, though. Let's say we want to compute a ** b for positive integers a and b, and b has the binary digits b_i. Then we can write b as
b = b_0 + b_1 * 2 + b_2 * 2**2 + ... + b_k * 2**k
and a ** b as
a ** b = a**b_0 * (a**2)**b_1 * (a**2**2)**b_2 * ... * (a**2**k)**b_k
Each factor in this product is of the form (a**2**i)**b_i. If b_i is zero, we can simply omit the factor. If b_i is 1, the factor is equal to a**2**i, and these powers can be computed for all i by repeatedly squaring a. Overall, we need to square and multiply k times, where k is the number of binary digits of b.
As mentioned above, for pow(a, b, c) we can reduce modulo c in each step, both after squaring and after multiplying.
You might consider the following two implementations for computing (x ** y) % z quickly.
In Python:
def pow_mod(x, y, z):
    "Calculate (x ** y) % z efficiently."
    number = 1
    while y:
        if y & 1:
            number = number * x % z
        y >>= 1
        x = x * x % z
    return number
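For example (my own check, not part of the answer), the result matches Python's built-in three-argument pow:
assert pow_mod(63437, 3935969939, 20628) == pow(63437, 3935969939, 20628)
assert pow_mod(3, 10 ** 18, 1000000007) == pow(3, 10 ** 18, 1000000007)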
In C:
#include <stdio.h>

unsigned long pow_mod(unsigned short x, unsigned long y, unsigned short z)
{
    unsigned long number = 1;
    while (y)
    {
        if (y & 1)
            number = number * x % z;
        y >>= 1;
        x = (unsigned long)x * x % z;
    }
    return number;
}

int main()
{
    printf("%lu\n", pow_mod(63437, 3935969939, 20628));
    return 0;
}
I don't know about python, but if you need fast powers, you can use exponentiation by squaring:
http://en.wikipedia.org/wiki/Exponentiation_by_squaring
It's a simple recursive method that relies on the fact that a**(2*k) == (a**k)**2.
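A minimal recursive sketch in Python (my own illustration, not from the linked article):
def power(a, b):
    # exponentiation by squaring: a**b for integers a and b >= 0
    if b == 0:
        return 1
    half = power(a, b // 2)
    if b % 2 == 0:
        return half * half
    return half * half * a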
Line 1426 of this file shows the Python code that implements math.pow, but basically it boils down to calling the standard C library, which probably has a highly optimized version of that function.
Python can be quite slow for intensive number-crunching, but Psyco can give you quite a speed boost, though it won't be as good as C code calling the standard library.
Python uses C math libraries for general cases and its own logic for some of its concepts (such as infinity).
Implement pow(x,n) in Python
def myPow(x, n):
    p = 1
    if n < 0:
        x = 1 / x
        n = abs(n)
    # Exponentiation by squaring
    while n:
        if n % 2:
            p *= x
        x *= x
        n //= 2
    return p
Implement pow(x,n,m) in Python
def myPow(x, n, m):
    p = 1
    if n < 0:
        x = 1 / x
        n = abs(n)
    # Exponentiation by squaring, reducing modulo m after each multiplication
    while n:
        if n % 2:
            p = p * x % m
        x = x * x % m
        n //= 2
    return p
Check out this link for an explanation.
I am writing a program that handles numbers as large as 10 ** 100, everything looks good when dealing with smaller numbers but when values get big I get these kind of problems:
>>> N = 615839386751705599129552248800733595745824820450766179696019084949158872160074326065170966642688
>>> ((N + 63453534345) / sqrt(2)) == (N / sqrt(2))
True
Clearly the above comparison should be False, so why is this happening?
Program code:
from math import *

def rec(n):
    r = sqrt(2)
    s = r + 2
    m = int(floor(n * r))
    j = int(floor(m / s))
    if j <= 1:
        return sum([floor(r * i) for i in range(1, n + 1)])
    assert m >= s * j and j > 1, "Error: something went wrong"
    return m * (m + 1) / 2 - j * (j + 1) - rec(j)

print rec(1e100)
Edit:
I don't think my question is a duplicate of the linked question above because the decimal points in n, m and j are not important to me and I am looking for a solution to avoid this precision issue.
You can’t retain the precision you want while dividing by standard floating point numbers, so you should instead divide by a Fraction. The Fraction class in the fractions module lets you do exact rational arithmetic.
Of course, the square root of 2 is not rational. But if the error is less than one part in 10**100, you’ll get the right result.
So, how to compute an approximation to sqrt(2) as a Fraction? There are several ways to do it, but one simple way is to compute the integer square root of 2 * 10**200, which will be close to sqrt(2) * 10**100, then just make that the numerator and make 10**100 the denominator.
Here’s a little routine in Python 3 for integer square root.
def isqrt(n):
    lg = -1
    g = (1 << n.bit_length() // 2) + 1
    while abs(lg - g) > 1:
        lg = g
        g = (g + n // g) // 2
    while g * g > n:
        g -= 1
    return g
You should be able to take it from there.
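For instance (my own sketch of the approach described above, not from the original answer), the Fraction approximation could be built like this:
from fractions import Fraction

# isqrt(2 * 10**200) is close to sqrt(2) * 10**100, so use it as the
# numerator over a denominator of 10**100
root2 = Fraction(isqrt(2 * 10 ** 200), 10 ** 100)

N = 615839386751705599129552248800733595745824820450766179696019084949158872160074326065170966642688
assert (N + 63453534345) / root2 != N / root2   # the two quotients now differ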
I wrote this code, but I am not sure if it is right. In Simpson's rule there is a condition that there has to be an even number of intervals. I don't know how to build this condition into my code.
import numpy as np

def simpson(data):
    data = np.array(data)
    a = min(range(len(data)))
    b = max(range(len(data)))
    n = len(data)
    h = (b - a) / n
    result = 0
    for i in range(1, n, 2):
        result += 4 * data[i] * h
    for i in range(2, n - 1, 2):
        result += 2 * data[i] * h
    return result * h / 3
Interestingly enough, you can find it in the Wikipedia entry:
from __future__ import division  # Python 2 compatibility

def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the
    composite Simpson's rule, using n subintervals (with n even)"""
    if n % 2:
        raise ValueError("n must be even (received n=%d)" % n)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, n - 1, 2):
        s += 2 * f(a + i * h)
    return s * h / 3
where you use it as:
simpson(lambda x:x**4, 0.0, 10.0, 100000)
Note how it bypasses your parity problem by requiring a function and n.
In case you need it for a list of values, though, then after adapting the code (which should be easy), I suggest you also raise a ValueError in case the number of intervals, len(data) - 1, is not even; a possible adaptation is sketched below.
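A possible adaptation for sampled data (a sketch under the assumption of equally spaced samples; the spacing parameter h and the name simpson_list are my own):
def simpson_list(data, h=1.0):
    # composite Simpson's rule over equally spaced samples;
    # there are len(data) - 1 subintervals, which must be an even number
    n = len(data) - 1
    if n % 2:
        raise ValueError("need an even number of intervals (an odd number of samples)")
    s = data[0] + data[-1]
    for i in range(1, n, 2):
        s += 4 * data[i]
    for i in range(2, n - 1, 2):
        s += 2 * data[i]
    return s * h / 3

For example, simpson_list([x ** 2 for x in (0.0, 0.5, 1.0)], h=0.5) gives 1/3, the exact integral of x**2 over [0, 1].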
Since you already seem to be using numpy you may also consider using scipy which conveniently provides a Simpson's rule integration routine.
from scipy.integrate import simps
result=simps(data)
See http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.simps.html for the full documentation (where they discuss the handling of even/odd intervals)
I have written the following code for evaluating integer partitions using the recurrence formula involving pentagonal numbers:
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while ((n >= (k*(3*k-1)/2)) or (n >= (k*(3*k+1)/2))):
            i = (k * (3*k-1)/2)
            j = (k * (3*k+1)/2)
            if ((n-i) >= 0):
                p -= ((-1)**k) * part(n-i)
            if ((n-j) >= 0):
                p -= ((-1)**k) * part(n-j)
            k += 1
    return p
n = int(raw_input("Enter a number: "))
m = part(n)
print m
The code works fine up until n=29. It gets a bit slow around n=24, but I still get an output within a decent runtime. I know the algorithm is correct because the numbers yielded are in accordance with known values.
For numbers above 35, I don't get an output even after waiting for a long time (about 30 minutes). I was under the impression that python can handle numbers much larger than the numbers used here. Can someone help me improve my runtime and get better results? Also, if there is something wrong with the code, please let me know.
You can use Memoization:
def memo(f):
    mem = {}
    def wrap(x):
        if x not in mem:
            mem[x] = f(x)
        return mem[x]
    return wrap

@memo
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while (n >= (k * (3 * k - 1) // 2)) or (n >= (k * (3 * k + 1) // 2)):
            i = (k * (3 * k - 1) // 2)
            j = (k * (3 * k + 1) // 2)
            if (n - i) >= 0:
                p -= ((-1) ** k) * part(n - i)
            if (n - j) >= 0:
                p -= ((-1) ** k) * part(n - j)
            k += 1
    return p
Demo:
In [9]: part(10)
Out[9]: 42
In [10]: part(20)
Out[10]: 627
In [11]: part(29)
Out[11]: 4565
In [12]: part(100)
Out[12]: 190569292
With memoization we remember previous calculations, so for repeated calls we just do a lookup in the dict.
Well there are a number of things you can do.
Remove duplicate calculations. - Basically you are calculating "3*k+1" many times for every execution of your while loop. You should calculate it once and assign it to a variable, and then use the variable.
Replace (-1)**k with a much faster operation, something like -2*(k % 2) + 1. That way the cost is constant instead of growing with k.
Cache the result of expensive deterministic calculations. "part" is a deterministic function. It gets called many times with the same arguments. You can build a hashmap of the inputs mapped to the results.
Consider refactoring it to use a loop rather than recursion. Python does not optimize tail calls, so it has to maintain a very large call stack when you use deep recursion.
If you cache the calculations, I can guarantee it will operate many times faster; a sketch combining these ideas follows below.
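A minimal bottom-up sketch along those lines (my own illustration, not code from the answer): it caches every value in a list, avoids recursion entirely, computes each pentagonal number once, and replaces (-1)**k with a precomputed sign.
def part_table(n_max):
    # partition numbers p(0)..p(n_max) via the pentagonal-number recurrence
    p = [0] * (n_max + 1)
    p[0] = 1
    for n in range(1, n_max + 1):
        total = 0
        k = 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n and g2 > n:
                break
            sign = 1 if k % 2 else -1   # (-1)**(k+1) without the power
            if g1 <= n:
                total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

For example, part_table(100)[100] gives 190569292, matching the memoized version above.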