Convert recursive function to for and while loop [Python] - python

I have the function:
h(0) = 0
h(1) = 3
h(n) = h(n-1) + 2 * h(n-2), for n>= 2
I need to convert this into a for loop, a while loop, and a recursive function. I have the recursive function figured out, but my loop version doesn't output the correct answer. My attempt at the for loop is this:
def hForLoop(n):
    sum = 3
    for i in range(2, n):
        sum = sum + ((i - 1) + 2 * (i - 2))
    return sum
I can't seem to figure out why I'm outputting the wrong answer. Some insight would be very useful and I will be very grateful.

Here's the version that stores just the last two values in a series:
def hForLoop(n):
    prev, cur = 0, 3
    for i in range(2, n + 1):
        cur, prev = cur + prev * 2, cur
    return cur

The issue is in your for loop, where you increment sum by ((i - 1) + 2 * (i - 2)).
If you look back at the original definition, you should instead be adding the previously computed values h(i-1) and h(i-2), not expressions in the loop index itself.
Here's my fix to your for-loop function:
def hForLoop(n):
    sum = [0, 3]
    for i in range(2, n + 1):
        sum.append(sum[i - 1] + 2 * sum[i - 2])
    return sum[n]

You need to store the intermediate values for h(n-1) and h(n-2) for the next iteration of the loop, where you use the values to calculate the next h(n).
def hLoop(n):
    # initially [h(2), h(1), h(0)]
    h = [3, 3, 0]
    # and in the following: [h(i), h(i-1), h(i-2)]
    for i in range(2, n + 1):
        # calculate h(i)
        h[0] = h[1] + 2 * h[2]
        # move the window forward: h(i-1) becomes h(i-2), h(i) becomes h(i-1)
        h[2] = h[1]
        h[1] = h[0]
    return h[0]

With the original function:
h(0) = 0
h(1) = 3
h(n) = h(n-1) + 2 * h(n-2), for n >= 2
your actual code computes:
h(0) = 0
h(1) = 3
h(n) = 3 + Σ ((i - 1) + 2 * (i - 2)), summed over i from 2 to n-1
See the difference? The loop sums expressions in the loop index i instead of the previously computed values of h.
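Since the assignment also asks for a while-loop version, here is a sketch following the same two-variable idea as the answer above (hWhileLoop is my name, not from the original post):

```python
def hWhileLoop(n):
    # h(0) = 0, h(1) = 3; keep only the last two values
    if n == 0:
        return 0
    prev, cur = 0, 3
    i = 2
    while i <= n:
        # h(i) = h(i-1) + 2 * h(i-2)
        prev, cur = cur, cur + 2 * prev
        i += 1
    return cur
```

It is the same computation as the for-loop version, with the loop counter managed by hand.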


Infinite loop in binary search algorithm

I'm a newbie in algorithms. I have recently started studying binary search and tried to implement it on my own. The task is simple: we have an array of integers a and an integer x. If a contains x, the result should be its index; otherwise the function should return -1.
Here is the code I have written:
def binary_search(a, x):
    l = 0
    r = len(a)
    while r - l > 0:
        m = (l + r) // 2
        if a[m] < x:
            l = m
        else:
            r = m
    if a[l] == x:
        return l
    return -1
But this code gets stuck in an infinite loop on a = [1, 2] and x = 2. I suppose that I have an incorrect loop condition (probably it should be r - l >= 0), but that change does not help. Where am I wrong?
Let me do some desk checking. I'll assume a = [1, 2] and that we are searching for x = 2.
So we start with
l = 0
r = 2
Since r - l = 2 > 0, we enter the while loop.
m = (l + r) // 2 = (0 + 2) // 2 = 1
a[m] = a[1] = 2 == x (hence not less than x)
r = m = 1 (and l remains the same)
Now r - l = 1 - 0 = 1 > 0, so we continue.
m = (l + r) // 2 = (0 + 1) // 2 = 0
a[m] = a[0] = 1 < x
l = m = 0 (and r remains the same)
After this iteration both r and l have the same values as before, which produces an endless loop.
Ashok's answer is a great fix, but I think it is educational to desk-check the fixed code and see what improves it.
Basically the problematic situation arises, when l + 1 = r.
Then m will always evaluate to l, a[l] < x and l is set to m again, which doesn't change the situation.
In a larger piece of code it'll make sense to make a table that contains a column for each variable to watch and a column to write down the code line that was evaluated. A column for remarks won't harm either.
As Mani mentioned, you are not considering the case where a[m] == x. Include that case (at that point you've found x, so just return m), and once you have it, you can set l = m + 1 when a[m] is still below x. Like this:
def binary_search(a, x):
    l = 0
    r = len(a)
    while r - l > 0:
        m = (l + r) // 2
        if a[m] < x:
            l = m + 1
        elif a[m] == x:
            return m
        else:
            r = m
    if l < len(a) and a[l] == x:
        return l
    return -1
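For comparison, the standard library's bisect module implements the same half-open search discipline, so this membership test can also be written in terms of bisect_left (a sketch; binary_search_bisect is my name, and a must be sorted, as binary search requires):

```python
from bisect import bisect_left

def binary_search_bisect(a, x):
    # bisect_left returns the leftmost index where x could be inserted
    # while keeping a sorted; if x is present, that index points at it
    i = bisect_left(a, x)
    if i < len(a) and a[i] == x:
        return i
    return -1
```

Using a well-tested library routine also sidesteps the off-by-one pitfalls discussed above.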

What is Python List Comprehension of this Nested Loop?

I have a case like this:
#!/usr/bin/python2.7
y = [[0 for i in xrange(size_of_array)] for j in xrange(size_of_array)]
offset_flag = 0
for i in xrange(size_of_array):
    for j in xrange(size_of_array):
        y[i][j] = starting_no + j + offset_flag
    offset_flag += j + 1
I want a list comprehension for this nested loop, but it also needs to handle the running offset:
offset_flag += j + 1
How can I achieve this kind of list comprehension?
Just use multiplication instead to calculate your offset:
y = [[starting_no + j + (i * size_of_array) for j in xrange(size_of_array)]
     for i in xrange(size_of_array)]
which can be written a little more concisely using shorter variable names:
start, size = starting_no, size_of_array
y = [[start + j + (i * size) for j in xrange(size)] for i in xrange(size)]
Your offset value is nothing more than i * size_of_array here; on each iteration of the outer loop you add j + 1, but j is always size_of_array - 1 by then. Substitute size_of_array - 1 for j and you get offset += size_of_array. On the first iteration it is 0, then 1 * size_of_array, all the way up to (size_of_array - 1) * size_of_array, tracking the i variable.
Your entire code could be replaced with:
# Renaming variables for a cleaner expression:
#   size_of_array -> s_a
#   starting_no -> start
y = [[start + j + (i * s_a) for j in xrange(s_a)] for i in xrange(s_a)]
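To convince yourself that the closed-form offset i * size_of_array matches the running offset_flag, a small side-by-side check works (Python 3 spelling with range instead of xrange; the example values are my own):

```python
size_of_array, starting_no = 4, 10

# loop version with the running offset
y_loop = [[0] * size_of_array for _ in range(size_of_array)]
offset_flag = 0
for i in range(size_of_array):
    for j in range(size_of_array):
        y_loop[i][j] = starting_no + j + offset_flag
    offset_flag += j + 1  # j is size_of_array - 1 here

# comprehension with the closed-form offset
y_comp = [[starting_no + j + i * size_of_array for j in range(size_of_array)]
          for i in range(size_of_array)]
```

Both build the same table of consecutive integers, row by row.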

Efficiently generating Stern's Diatomic Sequence

Stern's Diatomic Sequence can be read about in more detail over here; however, for my purposes I will define it now.
Definition of Stern's Diatomic Sequence
Let n be the number to compute the fusc function of, denoted fusc(n).
If n is 0 then the returned value is 0.
If n is 1 then the returned value is 1.
If n is even then the returned value is fusc(n / 2).
If n is odd then the returned value is fusc((n - 1) / 2) + fusc((n + 1) / 2).
Currently, my Python code brute-forces its way through most of the generation, apart from the divide-by-two part, since halving never changes the value.
def fusc(n):
    if n <= 1:
        return n
    while n > 2 and n % 2 == 0:
        n /= 2
    return fusc((n - 1) / 2) + fusc((n + 1) / 2)
However, my code must be able to handle numbers thousands to millions of bits long, and recursively running through the function millions of times does not seem very efficient or practical.
Is there any way I could algorithmically improve my code such that massive numbers can be passed through without having to recursively call the function so many times?
With memoization, for a million bits the recursion stack would be extremely large. Let's first look at a number large enough to be interesting yet small enough to work through by hand, fusc(71) in this case:
fusc(71) = fusc(35) + fusc(36)
fusc(35) = fusc(17) + fusc(18)
fusc(36) = fusc(18)
fusc(71) = 1 * fusc(17) + 2 * fusc(18)
fusc(17) = fusc(8) + fusc(9)
fusc(18) = fusc(9)
fusc(71) = 1 * fusc(8) + 3 * fusc(9)
fusc(8) = fusc(4)
fusc(9) = fusc(4) + fusc(5)
fusc(71) = 4 * fusc(4) + 3 * fusc(5)
fusc(4) = fusc(2)
fusc(5) = fusc(2) + fusc(3)
fusc(71) = 7 * fusc(2) + 3 * fusc(3)
fusc(2) = fusc(1)
fusc(3) = fusc(1) + fusc(2)
fusc(71) = 10 * fusc(1) + 3 * fusc(2)
fusc(2) = fusc(1)
fusc(71) = 13 * fusc(1) = 13
We realize that we can avoid recursion completely in this case as we can always express fusc(n) in the form a * fusc(m) + b * fusc(m+1) while reducing the value of m to 0. From the example above, you may find the following pattern:
if m is odd:
    a * fusc(m) + b * fusc(m+1) = a * fusc((m-1)/2) + (a+b) * fusc((m+1)/2)
if m is even:
    a * fusc(m) + b * fusc(m+1) = (a+b) * fusc(m/2) + b * fusc((m/2)+1)
Therefore, you may use a simple loop to solve the problem in O(log n) arithmetic operations:
def fusc(n):
    if n == 0:
        return 0
    a = 1
    b = 0
    while n > 0:
        if n % 2:
            b = b + a
            n = (n - 1) // 2
        else:
            a = a + b
            n = n // 2
    return b
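The loop can be checked against a direct transcription of the recurrence on small inputs (a sketch; fusc_naive and fusc_fast are my names, and // is used so this runs on Python 3):

```python
def fusc_naive(n):
    # direct transcription of the definition; only for small n
    if n <= 1:
        return n
    if n % 2 == 0:
        return fusc_naive(n // 2)
    return fusc_naive((n - 1) // 2) + fusc_naive((n + 1) // 2)

def fusc_fast(n):
    # iterative version: maintains fusc(n0) == a * fusc(n) + b * fusc(n + 1),
    # so when n reaches 0 the answer is b (since fusc(0) = 0, fusc(1) = 1)
    a, b = 1, 0
    while n > 0:
        if n % 2:
            b += a
            n = (n - 1) // 2
        else:
            a += b
            n //= 2
    return b
```

The invariant in the comment is exactly the pattern derived from the fusc(71) example above.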
lru_cache works wonders in your case. make sure maxsize is a power of 2; you may need to fiddle a bit with that size for your application. cache_info() will help with that.
also use // instead of / for integer division.
from functools import lru_cache

@lru_cache(maxsize=512, typed=False)
def fusc(n):
    if n <= 1:
        return n
    while n > 2 and n % 2 == 0:
        n //= 2
    return fusc((n - 1) // 2) + fusc((n + 1) // 2)

print(fusc(1000000000078093254329870980000043298))
print(fusc.cache_info())
and yes, this is just memoization as proposed by Filip Malczak.
you might gain an additional tiny speedup using bit operations in the while loop:
while not n & 1:  # as long as the lowest bit is not 1
    n >>= 1       # shift n right by one
UPDATE:
here is a simple way of doing memoization 'by hand':
def fusc(n, _mem={}):  # _mem will be the cache of the values
                       # that have been calculated before
    if n in _mem:      # if we know that one: just return the value
        return _mem[n]
    if n <= 1:
        return n
    while not n & 1:
        n >>= 1
    if n == 1:
        return 1
    ret = fusc((n - 1) // 2) + fusc((n + 1) // 2)
    _mem[n] = ret      # store the value for next time
    return ret
UPDATE:
after reading a short article by Dijkstra himself, a minor update.
the article states that fusc(n) = fusc(m) if the first and last bits of m are the same as those of n and the bits in between are inverted. the idea is to get n as small as possible.
that is what the bitmask (1 << n.bit_length() - 1) - 2 is for (first and last bits are 0; those in the middle 1; xoring n with that gives the m described above).
i was only able to do small benchmarks; i'm interested whether this is any help at all for the magnitude of your input... it will reduce the memory for the cache and hopefully bring some speedup.
def fusc_ed(n, _mem={}):
    if n <= 1:
        return n
    while not n & 1:
        n >>= 1
    if n == 1:
        return 1
    # https://www.cs.utexas.edu/users/EWD/transcriptions/EWD05xx/EWD578.html
    # bit-invert the middle bits and check if the result is smaller than n
    m = n ^ (1 << n.bit_length() - 1) - 2
    n = m if m < n else n
    if n in _mem:
        return _mem[n]
    ret = fusc(n >> 1) + fusc((n >> 1) + 1)
    _mem[n] = ret
    return ret
i had to increase the recursion limit:
import sys
sys.setrecursionlimit(10000) # default limit was 1000
benchmarking gave strange results; using the code below and making sure that i always started a fresh interpreter (having an empty _mem) i sometimes got significantly better runtimes; on other occasions the new code was slower...
benchmarking code:
from timeit import timeit

print(n.bit_length())
ti = timeit('fusc(n)', setup='from __main__ import fusc, n', number=1)
print(ti)
ti = timeit('fusc_ed(n)', setup='from __main__ import fusc_ed, n', number=1)
print(ti)
and these are three random results i got:
6959
24.117448464001427
0.013900151001507766
6989
23.92404893300045
0.013844672999766772
7038
24.33894686200074
24.685758719999285
that is where i stopped...
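The EWD578 identity itself is easy to spot-check: for an odd n, inverting every bit except the first and last leaves fusc unchanged. A sketch (fusc_ref and mirror are my names; fusc_ref is only a naive checker for small n):

```python
def fusc_ref(n):
    # direct transcription of the definition; only for small n
    if n <= 1:
        return n
    if n % 2 == 0:
        return fusc_ref(n // 2)
    return fusc_ref((n - 1) // 2) + fusc_ref((n + 1) // 2)

def mirror(n):
    # invert every bit except the highest and lowest ones
    return n ^ ((1 << n.bit_length() - 1) - 2)

# e.g. mirror(9) == 15 (0b1001 -> 0b1111), and fusc(9) == fusc(15) == 4
```

The memoized code only applies the mask after stripping trailing zeros, which matches the odd-n restriction here; for even n, fusc(n) = fusc(n/2) makes the case irrelevant.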

Sum of subsequences recursion in Python

Over the weekend I was working on the Ad Infinitum challenge on HackerRank.
One problem was to calculate the sum of all subsequences of a finite sequence, if each subsequence is thought of as an integer.
For example, sequence 4,5,6 would give answer 4 + 5 + 6 + 45 + 46 + 56 + 456 = 618.
I found a recursion and wrote the Python code below. It solved 5 of the 13 test cases; the remaining 8 had runtime errors.
I was hoping someone could spot where in the code the inefficiencies lie and how they can be fixed, or help me decide that my recursion is simply not the best strategy.
# Input is a list representing the given sequence, e.g. L = [4, 5, 6]
def T(L):
    limit = 10**9 + 7  # answer is returned modulo 10**9 + 7
    N = len(L)
    if N == 1:
        return L[0]
    else:
        last = L[-1]
        K = L[:N-1]
        ans = T(K) % limit + 10 * T(K) % limit + (last % limit) * pow(2, N-1, limit)
        return ans % limit
This is my submission for the same problem (Manasa and Sub-sequences).
https://www.hackerrank.com/contests/infinitum-may14/challenges/manasa-and-sub-sequences
I hope this will help you to think of a better way.
ans = 0
count = 0
for item in raw_input():
    temp = (ans * 10 + (count + 1) * int(item)) % 1000000007
    ans = (ans + temp) % 1000000007
    count = (count * 2 + 1) % 1000000007
print ans
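The one-pass recurrence above can be sanity-checked against a brute-force sum over all subsequences (a Python 3 sketch of the same idea; the function names are mine):

```python
from itertools import combinations

MOD = 10**9 + 7

def subseq_sum(digits):
    # ans   = sum of all subsequences seen so far (as integers)
    # count = number of subsequences seen so far
    ans = count = 0
    for d in digits:
        # every existing subsequence gets d appended (shift left, add d),
        # plus the new single-digit subsequence [d]
        new = (ans * 10 + (count + 1) * d) % MOD
        ans = (ans + new) % MOD
        count = (count * 2 + 1) % MOD
    return ans

def subseq_sum_brute(digits):
    # enumerate every non-empty subsequence explicitly; small inputs only
    total = 0
    for r in range(1, len(digits) + 1):
        for combo in combinations(digits, r):
            total += int(''.join(map(str, combo)))
    return total % MOD
```

This runs in linear time in the number of digits, which is why it passes where the double-recursive T(K) version times out.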
Well, you want the combinations:
from itertools import combinations

def all_combinations(digits):
    for r in range(len(digits)):
        yield from combinations(digits, r + 1)
And you want to convert them to integers:
def digits_to_int(digits):
    return sum(10**i * digit for i, digit in enumerate(reversed(digits)))
And you want to sum them:
sum(map(digits_to_int, all_combinations([4, 5, 6])))
Then focus on speed.
Assuming you mean contiguous subsequences:
test = [4, 5, 6]

def coms(ilist):
    olist = []
    ilist_len = len(ilist)
    for win_size in range(ilist_len, 0, -1):
        for offset in range((ilist_len - win_size) + 1):
            subslice = ilist[offset: offset + win_size]
            sublist = [value * (10 ** power)
                       for (power, value) in enumerate(reversed(subslice))]
            olist.extend(sublist)
    return olist

print sum(coms(test))

Unable to implement a dynamic programming table algorithm in python

I am having problems creating a table in Python. Basically, I want to build a table that, for every number, tells me whether I can use it to break down another (it's the table algorithm from the accepted answer in Can brute force algorithms scale?). Here's the pseudocode:
for i = 1 to k:
    for z = 0 to sum:
        for c = 1 to z / x_i:
            if T[z - c * x_i][i - 1] is true:
                set T[z][i] to true
Here's the Python implementation I have:
from collections import defaultdict

data = [1, 2, 4]
target_sum = 10

# T[x, i] is True if 'x' can be solved
# by a linear combination of data[:i+1]
T = defaultdict(bool)  # all values are False by default
T[0, 0] = True         # base case

for i, x in enumerate(data):  # i is the index, x is data[i]
    for s in range(target_sum + 1):  # go one higher than sum to include sum itself
        for c in range(s / x + 1):
            if T[s - c * x, i]:
                T[s, i+1] = True

# query area
target_result = 1
for node in T:
    if node[0] == target_result:
        print node, ':', T[node]
So what I expect is that if target_result is set to 8, it shows how each item in the list data can be used to break that number down. For 8, all of 1, 2, 4 work, so I expect them all to be True. But this program is making everything True. For example, 1 should only be expressible using 1 (and not 2 or 4), but when I run it with target_result = 1, I get:
(1, 2) : True
(1, 0) : False
(1, 3) : True
(1, 1) : True
Can anyone help me understand what's wrong with the code? Or perhaps I am not understanding the algorithm posted in the answer I am referring to.
(Note: I could be completely wrong, but I learned that defaultdict creates entries even if they are not there, and if an entry exists the algorithm turns it to True. Maybe that's the problem, I'm not sure; it was the line of thought I tried, but it didn't lead anywhere because it seems to break the overall implementation.)
Thanks!
The code works if you print the solution using RecursivelyListAllThatWork():
coeff = [0] * len(data)

def RecursivelyListAllThatWork(k, sum):  # Using last k variables, make sum
    # Base case: if we've assigned all the variables correctly,
    # list this solution.
    if k == 0:
        # print what we have so far
        print(' + '.join("%2s*%s" % t for t in zip(coeff, data)))
        return
    x_k = data[k-1]
    # Recursive step: try all coefficients, but only if they work.
    for c in range(sum // x_k + 1):
        if T[sum - c * x_k, k - 1]:
            # mark the coefficient of x_k to be c
            coeff[k-1] = c
            RecursivelyListAllThatWork(k - 1, sum - c * x_k)
            # unmark the coefficient of x_k
            coeff[k-1] = 0

RecursivelyListAllThatWork(len(data), target_sum)
Output
10*1 + 0*2 + 0*4
8*1 + 1*2 + 0*4
6*1 + 2*2 + 0*4
4*1 + 3*2 + 0*4
2*1 + 4*2 + 0*4
0*1 + 5*2 + 0*4
6*1 + 0*2 + 1*4
4*1 + 1*2 + 1*4
2*1 + 2*2 + 1*4
0*1 + 3*2 + 1*4
2*1 + 0*2 + 2*4
0*1 + 1*2 + 2*4
As a side note, you don't really need a defaultdict for what you're doing; you can use a normal dict + .get():
data = [1, 2, 4]
target_sum = 10

T = {}
T[0, 0] = True
for i, x in enumerate(data):
    for s in range(target_sum + 1):  # xrange on python-2.x
        for c in range(s // x + 1):
            if T.get((s - c * x, i)):
                T[s, i+1] = True
If you're using J.S.'s solution, don't forget to change:
    if T[sum - c * x_k, k - 1]:
to:
    if T.get((sum - c * x_k, k - 1)):
Your code is right.
1 = 1 * 1 + 0 * 2, so T[1, 2] is True.
1 = 1 * 1 + 0 * 2 + 0 * 4, so T[1, 3] is True.
As requested in the comments, a short explanation of the algorithm:
It calculates all numbers from 0 to target_sum that can be represented as a sum of (non-negative) multiples of some of the numbers in data.
If T[s, i] is True, then s can be represented in this way using only the first i elements of data.
At the start, 0 can be represented as the empty sum, thus T[0, 0] is True. (This step may seem a little technical.)
Let x be the (i+1)-th element of data. Then the algorithm tries, for each number s, whether it can be represented as the sum of some multiple of x and a number that is representable using only the first i elements of data (the existence of such a number means T[s - c * x, i] is True for some c). If so, s can be represented using only the first i+1 elements of data.
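That description can be checked directly by rebuilding the table for data = [1, 2, 4] and querying a few entries (a sketch; it repeats the dict + .get() variant from the answer so it stands alone, and the assertions are mine):

```python
data = [1, 2, 4]
target_sum = 10

T = {(0, 0): True}  # base case: 0 is the empty sum
for i, x in enumerate(data):
    for s in range(target_sum + 1):
        # c = 0 carries T[s, i] forward to T[s, i+1] unchanged
        for c in range(s // x + 1):
            if T.get((s - c * x, i)):
                T[s, i + 1] = True
```

Entries like T[1, 2] come out True because 1 = 1*1 + 0*2, exactly as the answer explains.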
