I'm having some trouble figuring out the primitive count of operations for the following lines of code
def question1(n):
    n = n                             # 1 ops
    i = 0                             # 1 ops
    a = 0                             # 1 ops
    while i < n:                      # n ops
        j = 0                         # n ops
        while j < n:                  # n * n ops
            k = 0                     # n * n ops
            while k < 60:             # n * n * 60 ops
                a = i * j - i * 2 + k # n * n * 60 * 5 ops
                k += 1                # n * n * 60 * 2 ops
            j += 1                    # n * n ops
        i += 1                        # n ops
# total sum of prim operations = (n * n * 483) + (3 * n) + 3
I'm not sure about this part:
while k < 60:             # n * n * 60 ops
    a = i * j - i * 2 + k # n * n * 60 * 5 ops
    k += 1                # n * n * 60 * 2 ops
Is it really n * n * 60, or should it be n * n * n * 60?
"primitive operations" is an ambiguous concept. For instance, a while statement will at some point evaluate the condition as false (which you didn't count) and then make the execution jump to the statement after the loop. One could say those are two operations (evaluation + jump).
Someone could say that k += 1 should count as 3 operations:
load the value of k into a CPU register,
add one to it
store that register's value back in k.
But if Python were compiled to a machine language whose instruction set has an INC instruction (as you would use when writing x86 assembly with NASM), and we deal with fixed-size integers (like 32-bit), it is only one operation.
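If you want to see what CPython itself actually does for that statement, the dis module will show you; this is just a small illustration I'm adding, and the exact instruction names vary between Python versions:

import dis

def bump(k):
    k += 1
    return k

# For the `k += 1` line this typically lists a handful of load/add/store
# style bytecode instructions, roughly the three "operations" described above.
dis.dis(bump)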
So this concept is fuzzy, and it is quite useless to sum them up. You should not identify "primitive operations", but identify chunks of code that have a constant time complexity.
Analysis in terms of constant time complexity
First we need to decide whether to think of integer arithmetic operations to be constant in time, or whether we should take into consideration that integers (certainly in Python) can have arbitrary size, and therefore these operations do not have a constant time complexity. See also "bit complexity". I will assume here that you want to regard arithmetic operations as having a constant time complexity.
Then we can identify this chunk of code as having a constant time complexity:
k = 0
while k < 60:
    a = i * j - i * 2 + k
    k += 1
j += 1
Note here that executing the inner block (which has constant complexity) 60 times still gives a constant time complexity, since 60 is a constant (independent of the input).
Also, initialising integer variables and incrementing them are constant-time operations.
There are two nested loops that each iterate 𝑛 times when they get executed. So that means the (above) inner part gets executed 𝑛² times.
Hence, the overall time complexity is O(𝑛²).
I would say it is definitely not n * n * n * 60. You might be confused about asymptotic notation, which may be influencing this question in the first place. The third while loop is executed 60 times, meaning that each operation within it is executed 60 times. This while loop runs 60 times for each of the n iterations of the second while loop, which runs n times for each of the n iterations of the first while loop, yielding n * n * 60.
Although the 60 is involved here, it is still a constant and is therefore of little significance for large values of n. The use of a triple nested loop is more of a trick question in this case, designed to show an example of why the polynomial properties of the algorithm are more important than any constants, just because as n gets large, n * n becomes much larger than 60 does.
Your calculation looks correct, though. The only thing you missed is that the following lines are actually 2 operations each:
j += 1 # n * n * 2 ops, equal to j = j + 1, an assignment AND addition
i += 1 # n * 2 ops ( i = i + 1 )
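If you want to convince yourself that the innermost statements really run n * n * 60 times (and not n * n * n * 60), you can instrument the loops with a counter. This is a quick sanity-check sketch I'm adding, not part of the original exercise:

def question1_counted(n):
    inner = 0   # counts how often the innermost assignment runs
    i = 0
    a = 0
    while i < n:
        j = 0
        while j < n:
            k = 0
            while k < 60:
                a = i * j - i * 2 + k
                inner += 1
                k += 1
            j += 1
        i += 1
    return inner

for n in (1, 5, 10):
    print(n, question1_counted(n))   # prints 60 * n * n each time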
I want to calculate the summation of the cos(x) series (keeping x in radians). This is the code I created:
import math
def cosine(x,n):
    sum = 0
    for i in range(0, n+1):
        sum += ((-1) ** i) * (x**(2*i)/math.factorial(2*i))
    return sum
and I checked it using math.cos().
It works just fine when I tried out small numbers:
print("Result: ", cosine(25, 1000))
print(math.cos(25))
the output:
Result:  0.991203540954667
0.9912028118634736
The numbers are still similar. But when I try a bigger number, e.g. 40, it returns a completely different value.
Result:  1.2101433786727471
-0.6669380616522619
Anyone got any idea why this happens?
The error term for a Taylor expansion increases the further you are from the point expanded about (in this case, x_0 = 0). To reduce the error, exploit the periodicity and symmetry by only evaluating within the interval [0, 2 * pi]:
import math

def cosine(x, n):
    x = x % (2 * math.pi)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total
This can be further improved to [0, pi/2]:
import math

def cosine(x, n):
    x = x % (2 * math.pi)
    if x > math.pi:
        x = abs(x - 2 * math.pi)
    if x > math.pi / 2:
        return -cosine(math.pi - x, n)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total
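A quick way to check the range-reduced version against the standard library (my own usage sketch; with the reduction to [0, pi/2] even a modest n makes the two values agree closely):

import math

# compare the range-reduced Taylor sum with math.cos for a few arguments
for x in (25, 40, 123.456):
    print(x, cosine(x, 30), math.cos(x))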
Contrary to the answer you got, this Taylor series converges regardless of how large the argument is. The factorial in the terms' denominators eventually drives the terms to 0.
But before the factorial portion dominates, terms can get larger and larger in absolute value. Native floating point doesn't have enough bits of precision to keep enough information for the low-order bits to survive.
Here's a way that doesn't lose any bits of precision. It's not practical because it's slow. Trust me when I tell you that it typically takes years of experience to learn how to write practical, fast, high-quality math libraries.
def mycos(x, nbits=100):
    from fractions import Fraction
    x2 = - Fraction(x) ** 2
    i = 0
    ntries = 0
    total = term = Fraction(1)
    while True:
        ntries += 1
        term = term * x2 / ((i+1) * (i+2))
        i += 2
        total += term
        if (total // term).bit_length() > nbits:
            break
    print("converged to >=", nbits, "bits in", ntries, "steps")
    return total
and then your examples:
>>> mycos(25)
converged to >= 100 bits in 60 steps
Fraction(177990265631575526901628601372315751766446600474817729598222950654891626294219622069090604398951917221057277891721367319419730580721270980180746700236766890453804854224688235663001, 179569976498504495450560473003158183053487302118823494306831203428122565348395374375382001784940465248260677204774780370309486592538808596156541689164857386103160689754560975077376)
>>> float(_)
0.9912028118634736
>>> mycos(40)
converged to >= 100 bits in 82 steps
Fraction(-41233919211296161511135381283308676089648279169136751860454289528820133116589076773613997242520904406094665861260732939711116309156993591792484104028113938044669594105655847220120785239949370429999292710446188633097549, 61825710035417531603549955214086485841025011572115538227516711699374454340823156388422475359453342009385198763106309156353690402915353642606997057282914587362557451641312842461463803518046090463931513882368034080863251)
>>> float(_)
-0.6669380616522619
Things to note:
The full-precision results require lots of bits.
Rounded back to float, they identically match what you got from math.cos().
It doesn't require anywhere near 1000 steps to converge.
I have the recurrence relation (n-2)*a(n) = 2(4n-9)*a(n-1) - (15n-38)*a(n-2) - 2(2n-5)*a(n-3) with initial conditions a(0) = 0, a(1) = 1 and a(2) = 3. I mainly want to calculate a(n) mod n and a(2n) mod n for all odd composite numbers n from 1 up to, say, 2.5 million.
I have written a program in Python. Using sympy and memoization, I did the computation for a(n) mod n, but it took more than 2 hours. It got worse when I tried it for a(2n) mod n. One main reason for the slowness is that the recurrence has non-constant coefficients. Is there a more efficient way to code this? Or would it help to do it in some other language (preferably one with a built-in function, or a function from some package, that can be used directly for the primality-testing part of the code)?
This is my code.
from functools import lru_cache
import sympy

@lru_cache(maxsize = 1000)
def f(n):
    if n==0:
        return 0
    elif n==1:
        return 1
    elif n==2:
        return 3
    else:
        return ((2*((4*n)-9)*f(n-1)) - (((15*n)-38)*f(n-2)) - (2*((2*n)-5)*f(n-3)))//(n-2)

for n in range(1,2500000,2):
    if sympy.isprime(n)==False:
        print(n,f(n)%n)
    if n%10000==1:
        print(n,'check')
The last 'if' statement is just to check how much progress is being made.
For a somewhat faster approach that avoids any memory issues, you could calculate the a(n) directly in sequence, always retaining only the last three values in a queue:
from collections import deque

a = deque([0, 1, 3])
for n in range(3, 2_500_000):
    a.append(((8 * n - 18) * a[2]
              - (15 * n - 38) * a[1]
              - (4 * n - 10) * a.popleft())
             // (n - 2))
    if n % 2 == 1:
        print(n, a[2] % n)
3 2
5 0
7 6
9 7
11 1
[...]
2499989 1
2499991 921156
2499993 1210390
2499995 1460120
2499997 2499996
2499999 1195814
This took about 50 minutes on my PC. Note I avoided the isprime() call in view of Rodrigo's comment.
What is the Big O notation for these two algorithms:
def foo1(n):
    if n > 1:
        for i in range(int(n)):
            foo1(1)
        foo1(n / 2)

def foo2(lst1, lst2):
    i = 1
    while i < max(len(lst1), len(lst2)):
        j = 1
        while j < min(len(lst1), len(lst2)):
            j *= 2
        i *= 2
I thought that foo1's running time complexity is O(n), because looking at the for loop I can write:
T(n) = O(n) + O(n/2) <= c*O(n) (c is a constant) for all n.
Is that right?
And I can't figure out the running time of foo2; can someone help me work out how to do that?
Thanks...
The number of operations T(n) is equal to T(n/2) + n. Applying the Master theorem we get T(n) = O(n). In simpler terms, there are n + n/2 + n/4 + ... + 1 operations, which is less than 2*n, and is therefore O(n).
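As a sanity check on that sum (my own sketch, using a hypothetical counting variant of foo1), you can count how many foo1(1) calls are made in total and compare against 2 * n:

def foo1_calls(n):
    # counts the foo1(1) calls that foo1(n) would make in total
    if n > 1:
        return int(n) + foo1_calls(n / 2)
    return 0

for n in (8, 64, 1024):
    print(n, foo1_calls(n))   # 14, 126, 2046: always below 2 * n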
The inner loop does not depend on the outer loop, so we can treat them independently. T(n) = O(log(maxlen) * log(minlen)).
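The same kind of empirical check works for foo2 (again a hypothetical counting variant I'm adding): the inner loop runs about log2(min(len)) times for each of the roughly log2(max(len)) outer iterations:

def foo2_iterations(lst1, lst2):
    # counts how many times the innermost statement (j *= 2) executes
    count = 0
    i = 1
    while i < max(len(lst1), len(lst2)):
        j = 1
        while j < min(len(lst1), len(lst2)):
            j *= 2
            count += 1
        i *= 2
    return count

print(foo2_iterations(range(8), range(8)))        # 9  (= 3 * 3)
print(foo2_iterations(range(16), range(1024)))    # 40 (= 10 * 4)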
I have written the following code for evaluating integer partitions using the recurrence formula involving pentagonal numbers:
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while ((n >= (k*(3*k-1)/2)) or (n >= (k*(3*k+1)/2))):
            i = (k * (3*k-1)/2)
            j = (k * (3*k+1)/2)
            if ((n-i) >= 0):
                p -= ((-1)**k) * part(n-i)
            if ((n-j) >= 0):
                p -= ((-1)**k) * part(n-j)
            k += 1
    return p
n = int(raw_input("Enter a number: "))
m = part(n)
print m
The code works fine up until n=29. It gets a bit slow around n=24, but I still get an output within a decent runtime. I know the algorithm is correct because the numbers yielded are in accordance with known values.
For numbers above 35, I don't get an output even after waiting for a long time (about 30 minutes). I was under the impression that python can handle numbers much larger than the numbers used here. Can someone help me improve my runtime and get better results? Also, if there is something wrong with the code, please let me know.
You can use Memoization:
def memo(f):
    mem = {}
    def wrap(x):
        if x not in mem:
            mem[x] = f(x)
        return mem[x]
    return wrap
@memo
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while (n >= (k * (3 * k - 1) // 2)) or (n >= (k * (3 * k + 1) // 2)):
            i = (k * (3 * k - 1) // 2)
            j = (k * (3 * k + 1) // 2)
            if (n - i) >= 0:
                p -= ((-1) ** k) * part(n - i)
            if (n - j) >= 0:
                p -= ((-1) ** k) * part(n - j)
            k += 1
    return p
Demo:
In [9]: part(10)
Out[9]: 42
In [10]: part(20)
Out[10]: 627
In [11]: part(29)
Out[11]: 4565
In [12]: part(100)
Out[12]: 190569292
With memoization we remember previous calculations, so for repeated calls we just do a lookup in the dict.
Well there are a number of things you can do.
Remove duplicate calculations. You are calculating "3*k+1" several times in every iteration of your while loop. Calculate it once, assign it to a variable, and then use the variable.
Replace the (-1)**k with a much faster expression such as -2*(k % 2) + 1. So instead of the calculation being linear with respect to k, it is constant.
Cache the result of expensive deterministic calculations. "part" is a deterministic function. It gets called many times with the same arguments. You can build a hashmap of the inputs mapped to the results.
Consider refactoring it to use a loop rather than recursion (a bottom-up sketch follows below). Python does not do tail-call optimization, from what I understand, so it has to maintain a very large call stack when you use deep recursion.
If you cache the calculations I can guarantee it will operate many times faster.
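To illustrate the loop-based suggestion, here is a minimal bottom-up sketch of the same pentagonal-number recurrence; part_iter is just a name I made up, and it fills a table p[0..n] instead of recursing:

def part_iter(n):
    # p[m] holds the number of partitions of m, built bottom-up
    p = [0] * (n + 1)
    p[0] = 1
    for m in range(1, n + 1):
        total = 0
        k = 1
        while True:
            i = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            j = k * (3 * k + 1) // 2
            if i > m and j > m:
                break
            sign = 1 if k % 2 == 1 else -1
            if i <= m:
                total += sign * p[m - i]
            if j <= m:
                total += sign * p[m - j]
            k += 1
        p[m] = total
    return p[n]

print(part_iter(100))   # 190569292, matching the memoized version above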
Help finding the big-Oh notation for the following code:
i = n
while i > 0:
    k = 2 + 2
    i = i // 2
I think it's n because n is assigned and then looped over. Is this right?
A simple way to think of this (which can be used as a general approach) is the following:
the initial value of i is n
the code will stop looping once i reaches 0 (consequently, the last iteration will execute when i is 1)
having executed an arbitrary number of iterations (call that number c), the value of i is
n / 2 / 2 / ... / 2 = n / (2 ^ c)   (dividing by 2, c times)
So, at the end of the loop, we want n / (2 ^ c) = 1. Solving for c gives us c = log2(n).
So the big-oh complexity is O(log n). (That is, assuming n is an integer and not a float.)
The loop repeats the number of times that you can divide n by two. That is O(log n).
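If you want to confirm the logarithmic count empirically, here is a small counting sketch (my own addition, not part of either answer): the loop body runs floor(log2(n)) + 1 times.

import math

def halving_iterations(n):
    # counts how many times the loop body runs for a positive integer n
    count = 0
    i = n
    while i > 0:
        k = 2 + 2
        i = i // 2
        count += 1
    return count

for n in (1, 10, 1000, 10**6):
    print(n, halving_iterations(n), math.floor(math.log2(n)) + 1)   # the two counts match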