I am exploring how a Dynamic Programming design approach relates to the underlying combinatorial properties of problems.
For this, I am looking at the canonical instance of the coin exchange problem: Let S = [d_1, d_2, ..., d_m] and n > 0 be a requested amount. In how many ways can we add up to n using nothing but the elements in S?
If we follow a Dynamic Programming approach to design a polynomial-time algorithm for this problem, we would start by looking at how the problem relates to smaller and simpler sub-problems. This yields a recursive relation describing an inductive step that expresses the problem in terms of the solutions to its sub-problems. We can then implement this recursive relation efficiently with either memoization (top-down) or tabulation (bottom-up).
A recursive relation could be the following (Python 3.6 syntax and 0-based indexing):
def C(S, m, n):
    if n < 0:
        return 0
    if n == 0:
        return 1
    if m <= 0:
        return 0
    count_wout_high_coin = C(S, m - 1, n)
    count_with_high_coin = C(S, m, n - S[m - 1])
    return count_wout_high_coin + count_with_high_coin
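For reference, the same relation can be memoized directly, e.g. with functools.lru_cache (a sketch; the wrapper name C_memo is just illustrative):

from functools import lru_cache

def C_memo(S, n):
    # Memoized version of C above; caches on (m, n) and closes over S.
    @lru_cache(maxsize=None)
    def go(m, n):
        if n < 0:
            return 0
        if n == 0:
            return 1
        if m <= 0:
            return 0
        return go(m - 1, n) + go(m, n - S[m - 1])
    return go(len(S), n)

# C_memo([1, 2, 6], 6)  # 5 -- the unordered count discussed below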
However, when drawing the sub-problem DAG, one can see that any DP-based algorithm implementing this recursive relation yields the correct number of solutions only when order is disregarded, i.e. it counts combinations rather than ordered sequences.
For example, for S = [1, 2, 6] and n = 6, one can identify the following ways (assuming order matters):
1 + 1 + 1 + 1 + 1 + 1
2 + 1 + 1 + 1 + 1
1 + 2 + 1 + 1 + 1
1 + 1 + 2 + 1 + 1
1 + 1 + 1 + 2 + 1
1 + 1 + 1 + 1 + 2
2 + 2 + 1 + 1
1 + 2 + 2 + 1
1 + 1 + 2 + 2
2 + 1 + 2 + 1
1 + 2 + 1 + 2
2 + 1 + 1 + 2
2 + 2 + 2
6
Assuming order does not matter, we could count the following solutions:
1 + 1 + 1 + 1 + 1 + 1
2 + 1 + 1 + 1 + 1
2 + 2 + 1 + 1
2 + 2 + 2
6
When approaching a problem solution from the Dynamic Programming standpoint, how can I control the order? Specifically, how could I write functions:
count_with_order()
count_wout_order()
?
Could it be that requiring order to matter implies choosing pruned backtracking over a Dynamic Programming approach?
Each problem is idiosyncratic, although some problems can be grouped together. For your particular example, a count where order matters can be implemented (either recursively or tabulated) by observing that the number of solutions for n equals the total number of solutions reachable from each smaller amount, that is, from n - coin for each denomination.
Python code:
def f(n, coins):
    if n < 0:
        return 0
    if n == 0:
        return 1
    return sum([f(n - coin, coins) for coin in coins])
# => f(6, [1, 2, 6]) # 14
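To answer the question directly, the two counters could be written as follows (a memoized sketch; the function names are taken from the question, the rest is illustrative):

from functools import lru_cache

def count_with_order(n, coins):
    # Ordered sequences (compositions) of coins summing to n.
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def go(n):
        if n < 0:
            return 0
        if n == 0:
            return 1
        return sum(go(n - coin) for coin in coins)

    return go(n)

def count_wout_order(n, coins):
    # Unordered selections (combinations) of coins summing to n.
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def go(i, n):
        if n == 0:
            return 1
        if n < 0 or i == len(coins):
            return 0
        # Either never use coins[i] again, or use one more of it.
        return go(i + 1, n) + go(i, n - coins[i])

    return go(0, n)

# count_with_order(6, [1, 2, 6])  # 14
# count_wout_order(6, [1, 2, 6])  # 5

The difference is that count_wout_order fixes the order in which denominations are considered (it never returns to a lower index), so each multiset of coins is generated in exactly one canonical order, while count_with_order may pick any denomination at every step.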
So given an input of 0o1111000000 in base 8 (the '0o' prefix is ignored), I have to generate and count how many possible numbers have the same total when their individual digits are added. For example:
0o1111000000 : 1 + 1 + 1 + 1 + 0 + 0 + 0 + 0 + 0 + 0 = 4
0o0000001111 : 0 + 0 + 0 + 0 + 0 + 0 + 1 + 1 + 1 + 1 = 4
0o0000000201 : 0 + 0 + 0 + 0 + 0 + 0 + 0 + 2 + 0 + 1 = 3
So for an input of 0o0000000001 I should get an answer of 10:
0000000001
0000000010
0000000100
0000001000
0000010000
0000100000
0001000000
0010000000
0100000000
1000000000
My method is a very, very brute-force one in which I check every possible number from 0 to 0o7777777777 (base 8). It uses the decimal representation of the octal numbers, and I use a recursive function to retrieve the octal digit sum of a number.
How can I make this a lot faster? If possible, without Python modules, as the machine running the program may not be able to import a lot of stuff.
def sum_of_octal_digits(n):
    if n == 0:
        return 0
    else:
        return n % 8 + sum_of_octal_digits(n // 8)

octInput = input("Enter octal: ")
int_octInput = int(octInput, 8)
total = sum_of_octal_digits(int_octInput)
allcombinations = list()
for i in range(pow(8, len(octInput[2:]))):
    if sum_of_octal_digits(i) == total:
        allcombinations.append(i)
print(len(allcombinations))
You're counting the number of sequences of {0, 1, 2, 3, 4, 5, 6, 7} of length n that sum to t. Call that value S[n, t].
There's a recurrence relation that S[n, t] satisfies:
S[0, 0] = 1
S[0, t] = 0 (t != 0)
S[n+1, t] = sum(S[n, t-d] for d=0...min(t, 7))
Once you have the recurrence relation, solving the problem using dynamic programming is straightforward (either by using memoization, or using a bottom-up table approach).
The program should run in O(nt) time and use O(nt) space.
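A minimal bottom-up sketch of that table (the wrapper name and its string handling are mine; it assumes the input is given as an octal literal like the examples above):

def count_same_digit_sum(oct_input):
    # Count octal strings of the same length whose digit sum matches oct_input.
    digits = oct_input[2:] if oct_input.startswith("0o") else oct_input
    n = len(digits)
    t = sum(int(d, 8) for d in digits)

    # S[k][s] = number of length-k octal digit sequences with digit sum s
    S = [[0] * (t + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for k in range(n):
        for s in range(t + 1):
            if S[k][s]:
                for d in range(min(7, t - s) + 1):
                    S[k + 1][s + d] += S[k][s]
    return S[n][t]

# count_same_digit_sum("0o0000000001")  # 10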
I am doing a practice problem on recursion.
Implement sum_integers(n) to calculate the sum of all integers from 1 to n using recursion. For example, sum_integers(3) should return 6 (1+2+3).
I solved the problem without really understanding what I actually did...
def sum_integers(n):
    if n == 0:
        return 0
    else:
        return n + sum_integers(n-1)
    pass
The base case, I understand.
Let's say we call sum_integers(3):
sum_integers(3)
    sum_integers(2)
        sum_integers(1)
            sum_integers(0)
                return 0
I don't understand what happens once it returns 0, and how that value goes back up the stack.
In my head this is what's happening
0 + sum_integers(1) = 0 + 1
0 + 1 + sum_integers(2) = 0 + 1 + 2
0 + 1 + 2 + sum_integers(3) = 0 + 1 + 2 + 3
I don't know for sure though. I'm just trying to understand it a bit better.
The 0 + 1 + 2 + sum_integers(3) = 0 + 1 + 2 + 3 is not really correct; an easier way to see it could be:
sum_integers(3) # go down in recursion
3 + sum_integers(2) # go down in recursion
3 + 2 + sum_integers(1) # go down in recursion
3 + 2 + 1 + sum_integers(0) # go down in recursion
3 + 2 + 1 + 0 # stop because 'return 0'
3 + 2 + 1 # go back and apply the plus
3 + 3 # go back and apply the plus
6 # go back and apply the plus
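If it helps to see this live, you can print the calls and returns at each depth (a small sketch; the depth parameter is only there for indentation):

def sum_integers_traced(n, depth=0):
    indent = "  " * depth
    print(indent + "sum_integers(" + str(n) + ")")   # going down
    if n == 0:
        print(indent + "-> 0")                       # base case reached
        return 0
    result = n + sum_integers_traced(n - 1, depth + 1)
    print(indent + "-> " + str(result))              # coming back up
    return result

# sum_integers_traced(3) prints the descent to 0, then 1, 3, 6 on the way back up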
def sum(a):
    if a==1:
        s=1
    else:
        s=1+2*sum(a-1)
    return s
The function calculates the sum of a geometric sequence whose common ratio is 2, first term is 1, and last term is 2^(a-1).
Why does it use s=1+2*sum(a-1) to implement the function?
def sum1(a):
    if a==1:
        s=1
    else:
        s=1+2*sum1(a-1)
    return s
What this function does: let's take a = 4.
(1) s  = 1 + 2*sum1(4-1) = 1 + 2*sum1(3) = 1 + 2*s2
(2) s2 = 1 + 2*sum1(3-1) = 1 + 2*sum1(2) = 1 + 2*s3
(3) s3 = 1 + 2*sum1(2-1) = 1 + 2*sum1(1) = 1 + 2*s4 = 1 + 2 = 3
Going backward: (3 * 2 + 1) * 2 + 1 = (7) * 2 + 1 = 15
Do it for bigger numbers and you will notice that this is the formula for 2^a - 1.
Printing 2^a and 2^a - 1 side by side:
Difference: (4, 3)
Difference: (8, 7)
Difference: (16, 15)
Difference: (32, 31)
Difference: (64, 63)
Difference: (128, 127)
Difference: (256, 255)
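The table above could have been produced with something like this (a sketch; the 'Difference' label simply mirrors the output shown):

def sum1(a):
    if a == 1:
        return 1
    return 1 + 2 * sum1(a - 1)

for a in range(2, 9):
    print("Difference:", (2 ** a, sum1(a)))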
@Zhongyi I'm understanding this question as asking "how does recursion work."
Recursion exists in Python, but I find it a little difficult to explain how it works in Python. Instead I'll show how this works in Racket (a Lisp dialect).
First, I'll rewrite the sum you provided above as yoursum, next to my version, mysum:
def yoursum(a):
    if a==1:
        s=1
    else:
        s=1+2*yoursum(a-1)
    return s

def mysum(a):
    if a == 1:
        return 1
    return 1 + (2 * mysum(a - 1))

for i in range(1, 11):
    print(mysum(i), yoursum(i))
They are functionally the same:
# Output
1 1
3 3
7 7
15 15
31 31
63 63
127 127
255 255
511 511
1023 1023
In Racket, mysum looks like this:
#lang racket
(define (mysum a)
  (cond
    ((eqv? a 1) 1)
    (else (+ 1 (* 2 (mysum (sub1 a)))))))
But we can use language features called quasiquoting and unquoting to show what the recursion is doing:
#lang racket
(define (mysum a)
  (cond
    ((eqv? a 1) 1)
    (else `(+ 1 (* 2 ,(mysum (sub1 a))))))) ; Notice the ` and ,
Here is what this does for several outputs:
> (mysum 1)
1
> (mysum 2)
'(+ 1 (* 2 1))
> (mysum 3)
'(+ 1 (* 2 (+ 1 (* 2 1))))
> (mysum 4)
'(+ 1 (* 2 (+ 1 (* 2 (+ 1 (* 2 1))))))
> (mysum 5)
'(+ 1 (* 2 (+ 1 (* 2 (+ 1 (* 2 (+ 1 (* 2 1))))))))
You can view the recursive step as substituting the expression (1 + 2 * rest-of-the-computation) at each level.
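For readers following along in Python, a rough analogue of the quasiquoting trick is to return the expression as a string instead of evaluating it (a sketch; mysum_expr is my name):

def mysum_expr(a):
    # Build the unevaluated expression that mysum would compute.
    if a == 1:
        return "1"
    return "(1 + 2 * " + mysum_expr(a - 1) + ")"

# mysum_expr(4)  # '(1 + 2 * (1 + 2 * (1 + 2 * 1)))'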
Please comment or ask for clarification if there are parts that still do not make sense.
Formula Explanation
1+2*(sumOf(n-1))
This is not a generic formula for a Geometric Progression.
This formula is only valid for the case where the ratio is 2 and the first term is 1.
So how is it working?
The Geometric Progression with first term = 1 and r = 2 will be
1, 2, 4, 8, 16, 32, 64, ...
FACT 1
Here you can clearly see that the nth term always equals the sum of the first (n-1) terms plus 1.
Let's declare an equation from FACT 1 (writing term(n) for the nth term):
term(n) = sumOf(n-1) + 1 =======> Eq1.
Let's put our equation to the test:
put n = 2 in Eq1
term(2) = sumOf(2-1) + 1
we know that term(2) is 2 and sumOf(1) is 1, so
2 = 2 ==> proved
so if term(n) = sumOf(n-1) + 1, then
FACT 2
The sum of the first n terms is the nth term plus the sum of the first (n-1) terms.
Let's declare an equation from FACT 2:
sumOf(n) = sumOf(n-1) + term(n) ==> eq2
Let us put eq1 into eq2, i.e. term(n) = sumOf(n-1) + 1:
sumOf(n) = sumOf(n-1) + sumOf(n-1) + 1 ==> eq3
Simplifying
sumOf(n) = 2 * sumOf(n-1) + 1
Rearranging
sumOf(n) = 1 + 2 * sumOf(n-1) ==> final equation
Now let's code this equation.
We know the sum of the first term is always 1, so this is our base case:
def sumOf(a):
    if a==1:
        return 1
So the sum of the first n terms will be 1 + 2 * sumOf(n-1) (from the final equation).
Put this equation in the else part:
def sumOf(a):
    if a==1:
        return 1
    else:
        return 1 + 2 * sumOf(a-1)
Teaching myself coding, what is the order of operations for this line of code?
print 3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6
I attempted to do the remainder and division first, so I got 3 + 2 + 1 - 5 + 0 - 0.25 / 4 + 6. Then I completed the addition and subtraction from left to right and got 0.075. Totally wrong, because LPTHW puts it at 7. Please explain the order of operations in detail.
I Googled Python's order of operations, but the results are not very instructive or detailed.
print 3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6
Expected result is 7, but obtained 0.075
It depends on which Python version you are using.
In Python 2, the / operator defaults to integer division, so 1 / 4 == 0.
On the other hand, in Python 3, the / operator defaults to true division, so 1 / 4 == 0.25. You must use // to achieve integer division in Python 3.
Regardless, Python still follows the classical PEMDAS order of operations, so the modulus and division still happen first, followed by addition and subtraction from left to right.
Here's how the problem reduces after you do the modulus and division in both versions:
Python 2
(3 + 2 + 1 - 5 + 0 - 0 + 6) == 7
Python 3
(3 + 2 + 1 - 5 + 0 - 0.25 + 6) == 6.75
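As a quick check (a sketch you can paste into a Python 3 interpreter), making the precedence explicit with parentheses reproduces both results:

# Python 3: / is true division
print(3 + 2 + 1 - 5 + (4 % 2) - (1 / 4) + 6)   # 6.75

# Emulating Python 2's integer division with //
print(3 + 2 + 1 - 5 + (4 % 2) - (1 // 4) + 6)  # 7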
*, /, //, % have higher precedence than + and -. So you should first calculate 4 % 2 = 0 and 1 / 4 = 0 (in Python 2.7), and then do the rest of the calculation from left to right.
In Python 2, / uses integer division if both its arguments are integers. That means that 1/4 == 0, since integer division will round down. Then:
= 3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6
= 3 + 2 + 1 - 5 + 0 - 0 + 6
= 7
To get 6.75 in Python 2 (the expected answer when done on paper), make one of the operands a float:
>> 3 + 2 + 1 - 5 + 4 % 2 - 1.0 / 4 + 6
# ^
>> 6.75
This isn't necessary in Python 3 because / defaults to returning a float.
I'm new to Python, and I found the recursive program below tough to follow. While debugging it I could see that it goes through the recursion and the value of k decreases by 1 every time we recurse. At one point k is -1 and the compiler moves to the else part and returns 0.
Finally, the k value turns out to be 1; how does this happen?
def tri_recursion(k):
    if(k > 0):
        result = k + tri_recursion(k - 1)
        print(result)
    else:
        result = 0
    return result

print("\n\nRecursion Example Results")
tri_recursion(6)
And the output:
Recursion Example Results
1
3
6
10
15
21
Try tracing the function with a pencil and paper. In this case, the print statement inside the function may be a bit misleading.
Consider this part of the program,
if(k>0):
    result = k+tri_recursion(k-1)
    ...
From here,
tri_recursion(6) = 6 + tri_recursion(5)
So to get the result for tri_recursion(6) we must first get the result of tri_recursion(5). Following this logic, the problem reduces to:
tri_recursion(6)
= 6 + tri_recursion(5)
= 6 + 5 + tri_recursion(4)
= 6 + 5 + 4 + tri_recursion(3)
= 6 + 5 + 4 + 3 + tri_recursion(2)
= 6 + 5 + 4 + 3 + 2 + tri_recursion(1)
= 6 + 5 + 4 + 3 + 2 + 1 + tri_recursion(0)
Now notice that 0 is not greater than 0 so the program moves to the body of the else clause:
else:
    result = 0
    ...
Which means tri_recursion(0) = 0. Therefore:
tri_recursion(6)
= 6 + 5 + 4 + 3 + 2 + 1 + tri_recursion(0)
= 6 + 5 + 4 + 3 + 2 + 1 + 0
= 21
Points to note:
In running this program, k is never equal to -1; in fact, that is impossible.
It is misleading to think of control flow in terms of "the compiler moving across a program". The compiler doesn't do anything during execution (JIT is a different matter). It is better to think in terms of control flow / order of execution in procedural languages, equationally in functional programming, and in terms of relations in logic programming.
If you debug the code like this:
def tri_recursion(k):
    if(k > 0):
        print('\t'*k, 'start loop k', k)
        holder = tri_recursion(k - 1)
        result = k + holder
        print('\t'*k, 'i am k(', k, ')+previous result(', holder, ')=', result)
    else:
        result = 0
        print('i reached when k =', k)
    print('\t'*k, 'end loop', k)
    return result

print("\n\nRecursion Example Results")
tri_recursion(6)
you will see output like this:
Recursion Example Results
start loop k 6
start loop k 5
start loop k 4
start loop k 3
start loop k 2
start loop k 1
i reached when k = 0
end loop 0
i am k( 1 )+previous result( 0 )= 1
end loop 1
i am k( 2 )+previous result( 1 )= 3
end loop 2
i am k( 3 )+previous result( 3 )= 6
end loop 3
i am k( 4 )+previous result( 6 )= 10
end loop 4
i am k( 5 )+previous result( 10 )= 15
end loop 5
i am k( 6 )+previous result( 15 )= 21
end loop 6
21