Python Non-recursive Permutation

Does anyone understand the following iterative algorithm for producing all permutations of a list of numbers?
I do not understand the logic within the while len(stack) loop. Can someone please explain how it works?
# Non-Recursion
# param nums: a list of integers.
# return: a list of permutations.
def permute(self, nums):
    if nums is None:
        return []
    nums = sorted(nums)
    permutation = []
    stack = [-1]
    permutations = []
    while len(stack):
        index = stack.pop()
        index += 1
        # advance index to the next element not yet used, if any
        while index < len(nums):
            if nums[index] not in permutation:
                break
            index += 1
        else:
            # no unused element left at this depth: backtrack
            if len(permutation):
                permutation.pop()
            continue
        stack.append(index)
        stack.append(-1)
        permutation.append(nums[index])
        if len(permutation) == len(nums):
            permutations.append(list(permutation))
    return permutations
I'm just trying to understand the code above.

As mentioned in the comments section to your question, debugging may provide a helpful way to understand what the code does. However, let me provide a high-level perspective of what your code does.
First of all, although there are no recursive calls to the function permute, the code you provided is effectively recursive, as all it does is keep its own stack instead of using the one provided by the memory manager of your OS. Specifically, the variable stack keeps the recursive state, so to speak, that is passed from one recursive call to another. You could, and perhaps should, consider each iteration of the outer while loop in the permute function as a recursive call. If you do so, you will see that the outer while loop helps 'recursively' traverse the permutations of nums in a depth-first manner.
Noticing this, it's fairly easy to figure out what each 'recursive call' does. Basically, the variable permutation keeps the current permutation of nums that is being formed as the while loop progresses, and the variable permutations stores all the permutations of nums found so far. As you may observe, permutations is updated only when len(permutation) equals len(nums), which can be considered the base case of the recurrence relation being implemented with a custom stack. Finally, the inner while loop picks which element of nums to add to the current permutation (i.e. the one stored in the variable permutation) being formed.
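To make the correspondence concrete, here is a minimal recursive sketch of my own (not from your code) that produces the same result. Note that, like your version, it relies on the not in permutation test and therefore assumes the elements of nums are distinct:

def permute_recursive(nums, permutation=None, permutations=None):
    # each call plays the role of one iteration of the outer while loop
    if permutation is None:
        permutation, permutations = [], []
    if len(permutation) == len(nums):          # base case
        permutations.append(list(permutation))
        return permutations
    for x in nums:
        if x not in permutation:               # pick an unused element
            permutation.append(x)
            permute_recursive(nums, permutation, permutations)
            permutation.pop()                  # backtrack
    return permutations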
So that is about it, really. You can figure out exactly what is being done on the lines maintaining stack using a debugger, as suggested. As a final note, let me repeat that I, personally, would not consider this implementation non-recursive. It just so happens that, instead of using the abstraction provided by the OS, this recursive solution keeps its own stack. To give a better sense of what a properly non-recursive solution looks like, observe the difference between the recursive and iterative solutions to the problem of finding the nth Fibonacci number provided below. As you can see, the non-recursive solution keeps no stack, and instead of dividing the problem into smaller instances of itself (recursion), it builds up the solution from solutions to smaller instances (dynamic programming).
def recursive_fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return recursive_fib(n-1) + recursive_fib(n-2)

def iterative_fib(n):
    if n == 0:
        return 0
    f_0 = 0
    f_1 = 1
    # build up from fib(2) to fib(n)
    for i in range(2, n + 1):
        f_2 = f_1 + f_0
        f_0 = f_1
        f_1 = f_2
    return f_1
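As a quick sanity check, both versions agree on the first few Fibonacci numbers:

>>> [recursive_fib(n) for n in range(7)]
[0, 1, 1, 2, 3, 5, 8]
>>> [iterative_fib(n) for n in range(7)]
[0, 1, 1, 2, 3, 5, 8]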

The answer from @ilim is correct and should be the accepted answer, but I just wanted to add another point that wouldn't fit as a comment. Whilst I imagine you are studying this algorithm as an exercise, it should be pointed out that a better way to proceed, depending on the size of the list, may be to use itertools's permutations() function:
print [x for x in itertools.permutations([1, 2, 3])]
Testing on my machine with a list of 11 items (39M permutations) took 1.7 secs with itertools.permutations(x) but 76 secs using the custom solution above. Note however that with 12 items (479M permutations) the itertools solution blows up with a memory error. If you need to generate permutations of that size efficiently, you may be better off dropping to native code.
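It is worth noting that the memory error comes from materializing every permutation in a list at once; itertools.permutations itself is a lazy iterator, so a sketch like the following can stream over the permutations of 12 items without ever holding them all in memory:

import itertools

count = 0
for p in itertools.permutations(range(12)):
    count += 1  # process each permutation here instead of storing it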

Related

Python Coin Change Dynamic Programming

I am currently trying to implement dynamic programming in Python, but I don't know how to setup the backtracking portion so that it does not repeat permutations.
For example, an input would be (6, [1,5]) and the expected output should be 2, because there are 2 possible ways to arrange 1 and 5 so that their sum is equivalent to 6. Those combinations are {1,1,1,1,1,1} and {1,5}, but the way my program currently works, it accounts for the combinations displayed above plus the combination {5,1}. This causes the output to be 3, which is not what I wanted. So my question is: how do I prevent repeating permutations? My current code is shown below.
import collections as c

class DynamicProgram(object):
    def __init__(self):
        self.fib_memo = {}
        # nested dictionary; collections.defaultdict works better than a
        # regular nested dictionary
        self.coin_change_memo = c.defaultdict(dict)
        self.__dict__.update({x: k for x, k in locals().items() if x != 'self'})

    def coin_change(self, n, coin_array):
        # check cache
        if n in self.coin_change_memo:
            if len(coin_array) in self.coin_change_memo[n]:
                return self.coin_change_memo[n][len(coin_array)]
        # base cases
        if n < 0: return 0
        elif n == 1 or n == 0: return 1
        result = 0
        i = 0
        # backtracking (the backbone of how this function works)
        while i <= n and i < len(coin_array):
            result += self.coin_change(n - coin_array[i], coin_array)
            i += 1
        # append to cache
        self.coin_change_memo[n][len(coin_array)] = result
        # return result
        return result
One way of avoiding permutations is to use the numbers in "non-decreasing" order. By doing so you will never add an answer for [5, 1], because it is not in non-decreasing order, while [1, 5] will be added since it is.
So the change to your code: once you commit to using the ith number (with the coins sorted), you never again use a number that comes strictly before it.
The code change will be as described in Suparshva's answer, with the initial list of numbers sorted.
Quick fix would be:
result += self.coin_change(n-coin_array[i], coin_array[i:]) # notice coin_array[i:] instead of coin_array
But you want to avoid this as each time you will be creating a new list.
Better fix would be:
Simply add a parameter lastUsedCoinIndex in the function. Then always use coins with index >= lastUsedCoinIndex from coin array. This will ensure that the solutions are distinct.
Also you will have to make changes to your memo state. You are presently storing the sum n and the size of the array together as a memo state (the size of the array never changes in your provided implementation, unlike in the quick fix above, so it is of no use there!). Now n and lastUsedCoinIndex together determine a memo state.
EDIT:
Your function would look like:
def coin_change(self,coin_array,n,lastUsedCoinIndex):
Here, the only variables changing will be n and lastUsedCoinIndex. So you can also modify your constructor to take coin_array as input; then you will access the coin_array initialized by the constructor through self.coin_array, and the function becomes simply:
def coin_change(self,n,lastUsedCoinIndex):
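For completeness, here is a minimal sketch of how the whole class might look after this change. It is my illustration of the suggested fix, not code from the question; note it also tightens the base case to n == 0 only, since returning 1 for n == 1 silently assumes a coin of value 1 exists:

import collections as c

class DynamicProgram(object):
    def __init__(self, coin_array):
        # sort once so combinations are generated in non-decreasing order
        self.coin_array = sorted(coin_array)
        # memo state is the pair (n, lastUsedCoinIndex)
        self.coin_change_memo = c.defaultdict(dict)

    def coin_change(self, n, lastUsedCoinIndex=0):
        if n < 0:
            return 0
        if n == 0:
            return 1
        if lastUsedCoinIndex in self.coin_change_memo[n]:
            return self.coin_change_memo[n][lastUsedCoinIndex]
        result = 0
        # only coins at index >= lastUsedCoinIndex may be used, so each
        # combination is counted exactly once
        for i in range(lastUsedCoinIndex, len(self.coin_array)):
            result += self.coin_change(n - self.coin_array[i], i)
        self.coin_change_memo[n][lastUsedCoinIndex] = result
        return result

With this, DynamicProgram([1, 5]).coin_change(6) returns 2, counting {1,1,1,1,1,1} and {1,5} but not {5,1}.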

Reasoning behind Quadratic Complexity in this particular code

I am asking in reference to this code:
array = [-37,-36,-19,-99,29,20,3,-7,-64,84,36,62,26,-76,55,-24,84,49,-65,41]
print sum(i for i in array if array.index(i) % 2 == 0)*array[-1] if array != [] else 0
You could see it here: Python code for sum with condition
Is this quadratic because of a for loop followed by an if statement inside the brackets?
Is the code proposed by another person on the same page, array[-1] * sum(array[::2]), free of quadratic behaviour?
I think it's again quadratic, as it has to do a traversal, and an alternating one at that.
Thanks in advance.
Yes, it's the array.index that makes it quadratic.
Let's first cut all irrelevant stuff away. The conditionals are irrelevant for the complexity reasoning (we will have array != [], and that check takes O(1) time). The same goes for the multiplication by array[-1]. So you're left with:
sum(i for i in array if array.index(i) % 2 == 0)
Now the inner part is a generator; it expands to an anonymous function looping through array and yielding a bunch of values, at most one per iteration. The sum function receives these values and adds them up.
The confusing thing may be how a generator actually works: it runs intermixed with code from the consumer. The resulting complexity is therefore the sum of the complexity of the generator and that of the consumer (i.e. sum). Now sum has linear complexity (it should have, and it would if I wrote it).
So for the generator, it loops through the array, but for each element in the array it calls array.index which is of O(N) complexity.
To fix this you might use enumerate to avoid having to call array.index(i). This may or may not be what you want, since array.index(i) returns the first index at which the element equals i, which might not be the index where you actually found i:
sum(i for idx, i in enumerate(array) if idx % 2 == 0)
To see the difference, consider the list array = [0, 1, 2, 2]. The first solution sums this to 4, since array.index(2) == 2, so it also adds the second 2. The latter solution, however, adds up to 2, since enumerate yields the pairs (0,0), (1,1), (2,2), (3,2) - where the first component is the actual index and the second is the actual element. Here the second 2 is omitted because it actually comes from index 3.
The first solution is indeed quadratic: each call to array.index iterates over array again, so it behaves like a nested loop.
The second solution traverses the list only one time, skipping the odd indexes.
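A quick empirical check of the difference, a sketch using the standard timeit module on an illustrative 1000-element list:

import timeit

array = list(range(1000))

quadratic = lambda: sum(i for i in array if array.index(i) % 2 == 0)
linear = lambda: sum(array[::2])

print(timeit.timeit(quadratic, number=10))  # grows with len(array)**2
print(timeit.timeit(linear, number=10))     # grows with len(array)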

Why doesn't this sort function work in Python

Please tell me why this sort function for Python isn't working :)
def sort(list):
    if len(list) == 0:
        return list
    elif len(list) == 1:
        return list
    else:
        for b in range(1, len(list)):
            if list[b-1] > list[b]:
                print (list[b-1])
                hold = list[b-1]
                list[b-1] = list[b]
                list[b] = hold

a = [1,2,13,131,1,3,4]
print (sort(a))
It looks like you're attempting to implement a neighbor-sort algorithm. You need to repeat the loop N times. Since you only loop through the array once, you end up with the largest element being in its place (i.e., in the last index), but the rest is left unsorted.
You could debug your algorithm on your own, using pdb.
Or, you could use python's built-in sorting.
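For illustration, here is a minimal sketch of the repeated-pass fix (my own bubble-sort variant, not code from the tutorial). It repeats the neighbour-swap pass until a full pass makes no swaps:

def bubble_sort(items):
    swapped = True
    while swapped:
        swapped = False
        # one neighbour-swap pass, exactly like the loop in the question
        for b in range(1, len(items)):
            if items[b-1] > items[b]:
                items[b-1], items[b] = items[b], items[b-1]
                swapped = True
    return items

For example, bubble_sort([1, 2, 13, 131, 1, 3, 4]) returns [1, 1, 2, 3, 4, 13, 131].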
Let's take a look at your code. sorted is a built-in Python function and sort is a built-in method on lists (this holds for both 2.7 and 3.x), so when naming your own functions, try to stay away from built-in names unless you are going to override them (which is a whole different topic). The same idea applies to the parameter you used: list is a built-in type in Python, and shadowing it means you cannot use the real list inside your function. Now for some work on your code, after you change all the variables, etc.
When going through your function, you only swap two elements when they happen to be next to each other. This will not work for all lists. You have to be able to check that the element at the current i is in the correct place; if the last element is the lowest in the list, it has to be swapped all the way to the front. There are many ways of sorting (e.g. quicksort, merge sort, bubble sort) and this isn't the best way... :) Here is some help:
def sortThis(L):
    if len(L) == 0 or len(L) == 1:
        return L
    else:
        # insertion sort: shift larger elements right, then drop value in
        for i in range(len(L)):
            value = L[i]
            j = i - 1
            while (j >= 0) and (L[j] > value):
                L[j+1] = L[j]
                j -= 1
            L[j+1] = value

a = [1,2,13,131,1,3,4]
sortThis(a)
print a
Take a look at this for more sorting Fun: QuickSort MergeSort
If it worked, it would be the best sorting algorithm in the world (O(n)). Your algorithm only puts the greatest element at the end of the list; you have to apply your function recursively to list[:-1].
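A sketch of that idea (my own illustration): each pass bubbles the largest element to the end, then the function recurses on everything before it:

def sort_rec(lst):
    # one pass moves the largest element to the last position
    for b in range(1, len(lst)):
        if lst[b-1] > lst[b]:
            lst[b-1], lst[b] = lst[b], lst[b-1]
    # then sort the remaining prefix recursively
    if len(lst) > 1:
        lst[:-1] = sort_rec(lst[:-1])
    return lst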
You should not shadow Python built-in names like list.

Python lists, dictionary optimization

I have been attending a couple of hackathons. I am beginning to understand that writing code is not enough. The code has to be optimized. That brings me to my question. Here are two questions that I faced.
def pairsum(numbers, k):
    """Write a function that returns two values in numbers whose sum is K"""
    for i, j in numbers:
        if i != j:
            if i + j == k:
                return i, j
I wrote this function. And I was kind of stuck with optimization.
Next problem.
string = "ksjdkajsdkajksjdalsdjaksda"
def dedup(string):
    """Write a function to remove duplicates in the variable string"""
    output = []
    for i in string:
        if i not in output:
            output.append(i)
These are two very simple programs that I wrote, but I am stuck on optimizing them. More to the point: when we optimize code, how does the complexity go down? Any pointers will help. Thanks in advance.
Knowing the most efficient Python idioms and also designing code that can reduce iterations and bail out early with an answer is a major part of optimization. Here are a few examples:
List comprehensions and generators are usually fastest:
With a straightforward nested approach, a generator is faster than a for loop:
def pairsum(numbers, k):
    """Returns two unique values in numbers whose sum is k"""
    return next((i, j) for i in numbers for j in numbers if i+j == k and i != j)
This is probably faster on average since it only goes through one iteration at most, and does not check whether a possible result is in numbers unless k-i != i:
def pairsum(numbers, k):
    """Returns two unique values in numbers whose sum is k"""
    return next((k-i, i) for i in numbers if k-i != i and k-i in numbers)
Output:
>>> pairsum([1,2,3,4,5,6], 8)
(6, 2)
Note: I assumed numbers was a flat list, since the docstring did not mention tuples, and that makes the problem more difficult, which is what I would expect in a competition.
For the second problem, if you are to create your own function as opposed to just using ''.join(set(s)) you were close:
def dedup(s):
    """Returns a string with duplicate characters removed from string s"""
    output = ''
    for c in s:
        if c not in output:
            output += c
    return output
Tip: Do not use string as a name
You can also do:
def dedup(s):
    for c in s:
        s = c + s.replace(c, '')
    return s
or a much faster recursive version:
def dedup(s, out=''):
    s0, s = s[0], s.replace(s[0], '')
    return dedup(s, out + s0) if s else out + s0
but not as fast as set for strings without lots of duplicates:
def dedup(s):
    return ''.join(set(s))
Note: set() will not preserve the order of the remaining characters while the other approaches will preserve the order based on first occurrence.
Your first program is a little vague. I assume numbers is a list of tuples or something? Like [(1,2), (3,4), (5,6)]? If so, your program is pretty good, from a complexity standpoint - it's O(n). Perhaps you want a little more Pythonic solution? The neatest way to clean this up would be to join your conditions:
if i != j and i + j == k:
But this simply increases readability. I think it may also add an additional boolean operation, so it might not be an optimization.
I am not sure if you intended for your program to return the first pair of numbers which sum to k, but if you wanted all pairs which meet this requirement, you could write a comprehension:
def pairsum(numbers, k):
    return list((i, j) for i, j in numbers if i != j and i + j == k)
In that example, I used a generator comprehension instead of a list comprehension so as to conserve resources - generators are functions which act like iterators, meaning that they can save memory by only giving you data when you need it. This is called lazy iteration.
You can also use a filter, which is a function which returns only the elements from a set for which a predicate returns True. (That is, the elements which meet a certain requirement.)
import itertools

def pairsum(numbers, k):
    return list(itertools.ifilter(lambda t: t[0] != t[1] and t[0] + t[1] == k, ((i, j) for i, j in numbers)))
But this is less readable in my opinion.
Your second program can be optimized using a set. If you recall from any discrete mathematics you may have learned in grade school or university, a set is a collection of unique elements - in other words, a set has no duplicate elements.
def dedup(mystring):
    return set(mystring)
The algorithm to find the unique elements of a collection is generally O(n^2) in time if you restrict yourself to O(1) extra space. If you allow yourself more memory, a balanced binary search tree brings the time down to O(n log n); Python's sets go further still, as they are implemented as hash tables with O(1) average look-up, making the whole operation roughly O(n) on average.
Your solution took O(n^2) time and also O(n) space, because you created a new list which could, if the input was already a string with only unique elements, take up the same amount of space - and, for every character in the string, you iterated over the output. That's essentially O(n^2) (although I think it's actually O(n*m), but whatever). I hope you see why this is. Read up on binary search trees to see how they would improve your code; I don't want to re-implement one again... freshman year was so grueling!
The key to optimization is basically to figure out a way to make the code do less work, in terms of the total number of primitive steps that need to be performed. Code that employs control structures like nested loops quickly inflates the number of primitive steps needed. Optimization is therefore often about replacing loops that iterate over a full list with something more clever.
I had to change the unoptimized pairsum() method slightly to make it usable:
def pairsum(numbers, k):
    """
    Write a function that returns two values in numbers whose sum is K
    """
    for i in numbers:
        for j in numbers:
            if i != j:
                if i+j == k:
                    return i, j
Here we see two loops, one nested inside the other. When describing the time complexity of a method like this, we often say that it is O(n²): when the length of the numbers array passed in grows proportionally to n, the number of primitive steps grows proportionally to n². Specifically, the i != j conditional is evaluated up to len(numbers)**2 times.
The clever thing we can do here is to presort the array at a cost of O(n log(n)), which allows us to home in on the right answer by evaluating each element of the sorted array at most once.
def fast_pairsum(numbers, k):
    sortedints = sorted(numbers)
    low = 0
    high = len(numbers) - 1
    i = sortedints[0]
    j = sortedints[-1]
    while low < high:
        diff = i + j - k
        if diff > 0:
            # Too high, let's lower
            high -= 1
            j = sortedints[high]
        elif diff < 0:
            # Too low, let's increase.
            low += 1
            i = sortedints[low]
        else:
            # Just right
            return i, j
    raise Exception('No solution')
These kinds of optimization only begin to really matter when the size of the problem becomes large. On my machine the break-even point between pairsum() and fast_pairsum() is with a numbers array containing 13 integers. For smaller arrays pairsum() is faster, and for larger arrays fast_pairsum() is faster. As the size grows fast_pairsum() becomes drastically faster than the unoptimized pairsum().
The clever thing to do for dedup() is to avoid having to linearly scan through the output list to find out whether you've already seen a character. This can be done by storing the characters you've seen in a set, which has O(1) average look-up cost, rather than the O(n) look-up cost of a regular list.
With the outer loop, the total cost becomes O(n) on average, rather than O(n²).
def fast_dedup(string):
    # if we didn't care about the order of the characters in the
    # returned string we could simply do
    # return set(string)
    seen = set()
    output = []
    seen_add = seen.add
    output_append = output.append
    for i in string:
        if i not in seen:
            seen_add(i)
            output_append(i)
    return output
On my machine the break-even point between dedup() and fast_dedup() is with a string of length 30.
The fast_dedup() method also shows another simple optimization trick: moving as much of the code out of the loop bodies as possible. Since looking up the add and append members on the seen and output objects takes time, it is cheaper to do it once outside the loop bodies and store references to those members in variables that are used repeatedly inside the loop bodies.
To properly optimize Python, one needs to find a good algorithm for the problem and a Python idiom close to that algorithm. Your pairsum example is a good case. First, your implementation appears wrong — numbers is most likely a sequence of numbers, not a sequence of pairs of numbers. Thus a naive implementation would look like this:
def pairsum(numbers, k):
    """Write a function that returns two values in numbers whose sum is K"""
    for i in numbers:
        for j in numbers:
            if i != j and i + j == k:
                return i, j
This will perform n^2 iterations, n being the length of numbers. For small ns this is not a problem, but once n gets into hundreds, the nested loops will become visibly slow, and once n gets into thousands, they will become unusable.
An optimization would be to recognize the difference between the inner and the outer loops: the outer loop traverses over numbers exactly once, and is unavoidable. The inner loop, however, is only used to verify that the other number (which has to be k - i) is actually present. This is a mere lookup, which can be made extremely fast by using a dict, or even better, a set:
def pairsum(numbers, k):
    """Write a function that returns two values in numbers whose sum is K"""
    numset = set(numbers)
    for i in numbers:
        if k - i in numset:
            return i, k - i
This is not just faster by a constant factor because we're using a built-in operation (set lookup) instead of a Python-coded loop; it actually does less work, because set has a smarter algorithm for the lookup and performs it in constant time on average.
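For example, with illustrative values:

>>> pairsum([1, 2, 3, 4, 5, 6], 8)
(2, 6)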
Optimizing dedup in the analogous fashion is left as an exercise for the reader.
As for your string one: an order-preserving version is most easily, and fairly efficiently, written as:
from collections import OrderedDict
new_string = ''.join(OrderedDict.fromkeys(old_string))
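Applied to the example string from the question, this keeps the first occurrence of each character:

>>> from collections import OrderedDict
>>> ''.join(OrderedDict.fromkeys("ksjdkajsdkajksjdalsdjaksda"))
'ksjdal'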

Question on a solution from Google python class day

Hey,
I'm trying to learn a bit about Python so I decided to follow Google's tutorial. Anyway I had a question regarding one of their solutions to an exercise.
I did it this way:
# E. Given two lists sorted in increasing order, create and return a merged
# list of all the elements in sorted order. You may modify the passed in lists.
# Ideally, the solution should work in "linear" time, making a single
# pass of both lists.
def linear_merge(list1, list2):
    # +++your code here+++
    return sorted(list1 + list2)
However they did it in a more complicated way. So is Google's solution quicker? I noticed in the comment lines that the solution should work in "linear" time, which mine probably doesn't.
This is their solution
def linear_merge(list1, list2):
    # +++your code here+++
    # LAB(begin solution)
    result = []
    # Look at the two lists so long as both are non-empty.
    # Take whichever element [0] is smaller.
    while len(list1) and len(list2):
        if list1[0] < list2[0]:
            result.append(list1.pop(0))
        else:
            result.append(list2.pop(0))
    # Now tack on what's left
    result.extend(list1)
    result.extend(list2)
    return result
Could this be another solution?
def linear_merge(list1, list2):
    tmp = []
    while len(list1) and len(list2):
        #print list1[-1], list2[-1]
        if list1[-1] > list2[-1]:
            tmp.append(list1.pop())
        else:
            tmp.append(list2.pop())
        #print "tmp =", tmp
        #print list1, list2
    # tmp holds the largest elements in decreasing order; whatever is
    # left in list1 or list2 is sorted and smaller than anything in tmp
    tmp.reverse()
    return list1 + list2 + tmp
Yours is not linear, but that doesn't mean it's slower. Algorithmic complexity ("big-oh notation") is often only a rough guide and always only tells one part of the story.
However, theirs isn't linear either, though it may appear to be at first blush. Popping from a list requires moving all later items, so popping from the front requires moving all remaining elements.
It is a good exercise to think about how to make this O(n). The below is in the same spirit as the given solution, but avoids its pitfalls while generalizing to more than 2 lists for the sake of exercise. For exactly 2 lists, you could remove the heap handling and simply test which next item is smaller.
import heapq

def iter_linear_merge(*args):
    """Yield non-decreasing items from sorted a and b."""
    # Technically, [1, 1, 2, 2] isn't an "increasing" sequence,
    # but it is non-decreasing.
    nexts = []
    for x in args:
        x = iter(x)
        for n in x:
            heapq.heappush(nexts, (n, x))
            break
    while len(nexts) >= 2:
        n, x = heapq.heappop(nexts)
        yield n
        for n in x:
            heapq.heappush(nexts, (n, x))
            break
    if nexts:  # Degenerate case of the heap, not strictly required.
        n, x = nexts[0]
        yield n
        for n in x:
            yield n
Instead of the last if-for, the while loop condition could be changed to just "nexts", but it is probably worthwhile to specially handle the last remaining iterator.
If you want to strictly return a list instead of an iterator:
def linear_merge(*args):
    return list(iter_linear_merge(*args))
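For exactly two lists, as mentioned above, the heap is unnecessary; here is a sketch of an index-based merge that stays O(n) by never calling pop(0):

def linear_merge_two(list1, list2):
    result = []
    i = j = 0
    # walk both lists with indices; each step is O(1)
    while i < len(list1) and j < len(list2):
        if list1[i] < list2[j]:
            result.append(list1[i])
            i += 1
        else:
            result.append(list2[j])
            j += 1
    # tack on whichever tail remains
    result.extend(list1[i:])
    result.extend(list2[j:])
    return result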
With mostly-sorted data, timsort approaches linear. Also, your code doesn't have to screw around with the lists themselves. Therefore, your code is possibly just a bit faster.
But that's what timing is for, innit?
I think the issue here is that the tutorial is illustrating how to implement a well-known algorithm called 'merge' in Python. The tutorial is not expecting you to actually use a library sorting function in the solution.
sorted() is probably O(n lg n); then your solution cannot be linear in the worst case.
It is important to understand how merge() works because it is useful in many other algorithms. It exploits the fact that the input lists are individually sorted, moving through each list sequentially and selecting the smallest option; the remaining items are then appended at the end.
The question isn't which is 'quicker' for a given input case but about which algorithm is more complex.
There are hybrid variations of merge-sort which fall back on another sorting algorithm once the input list size drops below a certain threshold.
