How can I turn this code into a generator function? Or can I do it another way that avoids reading all the data into memory?
The problem right now is that my memory gets full: the process gets Killed after running for a long time.
Code:
data = [3,4,3,1,2]

def convert(data):
    for index in range(len(data)):
        if data[index] == 0:
            data[index] = 6
            data.append(8)
        elif data[index] == 1:
            data[index] = 0
        elif data[index] == 2:
            data[index] = 1
        elif data[index] == 3:
            data[index] = 2
        elif data[index] == 4:
            data[index] = 3
        elif data[index] == 5:
            data[index] = 4
        elif data[index] == 6:
            data[index] = 5
        elif data[index] == 7:
            data[index] = 6
        elif data[index] == 8:
            data[index] = 7
    return data

for i in range(256):
    output = convert(data)
    print(len(output))
Output:
266396864
290566743
316430103
346477329
376199930
412595447
447983143
490587171
534155549
582826967
637044072
692630033
759072776
824183073
903182618
982138692
1073414138
1171199621
1275457000
1396116848
1516813106
Killed
To answer the question: to turn a function into a generator function, all you have to do is yield something. You might do it like this:
def convert(data):
    while True:  # yield the data after every full conversion pass
        for index in range(len(data)):
            ...
        yield data
Then, you can iterate over the output like this:
iter_converted_datas = convert(data)
for _, converted in zip(range(256), iter_converted_datas):
    print(len(converted))
I also would suggest some improvements to this code. The first thing that jumps out at me is getting rid of all those elif statements.
One helpful thing for this might be to supply a dictionary argument to your generator function that tells it how to convert the data values (the first one is a special case since it also appends).
Here is what that dict might look like:
replacement_dict = {
    0: 6,
    1: 0,
    2: 1,
    3: 2,
    4: 3,
    5: 4,
    6: 5,
    7: 6,
    8: 7,
}
By the way: replacing a series of elif statements with a dictionary is a pretty typical thing to do in Python. It isn't always appropriate, but it often works well.
Now you can write your generator like this:
def convert(data, replacement_dict):
    while True:
        for index in range(len(data)):
            if data[index] == 0:  # the special case: a 0 also appends an 8
                data.append(8)
            data[index] = replacement_dict[data[index]]
        yield data
And use it like this:
iter_converted_datas = convert(data, replacement_dict)
for _, converted in zip(range(256), iter_converted_datas):
    print(len(converted))
But we haven't yet addressed the underlying memory problem.
For that, we need to step back a second: the reason your memory is filling up is that you have created a list that grows very large very fast. And if you were to keep going beyond 256 iterations, the list would get longer without end.
If you want to compute the Xth output for some member of the list without storing the entire list into memory, you have to change things around quite a bit.
My suggestion on how you might get started: create a function to get the Xth iteration for any starting input value.
Here is a generator that just produces outputs based on the replacement dict. Depending on the contents of the replacement dict, this could be infinite, or it might have an end (in which case it would raise a KeyError). In your case, it is infinite.
def process_replacements(value, replacement_dict):
    while True:
        yield (value := replacement_dict[value])
Next we can write our function to process the Xth iteration for a starting value:
def process_xth(value, xth, replacement_dict):
    # emit the xth value from the original value
    for _, value in zip(range(xth), process_replacements(value, replacement_dict)):
        pass
    return value
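For instance, with the replacement_dict shown above, a quick sanity check (my example, not from the original post) looks like this:
# starting at 3: 3 -> 2 -> 1, so the 2nd iteration is 1
assert process_xth(3, 2, replacement_dict) == 1
assert process_xth(0, 1, replacement_dict) == 6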
Now you can process the Xth iteration for any value in your starting data list:
index = 0
xth = 256
process_xth(data[index], xth, replacement_dict)
However, we have not been appending an 8 to the data list each time we encounter a 0 value. We could do this, but as you have discovered, eventually the list of 8s would get too big. Instead, what we need to do is keep COUNT of how many 8s we have added to the end.
So I suggest adding a zero_tracker function to increment the count:
def zero_tracker():
    global eights_count
    eights_count += 1
Now you can call that function in the generator every time a zero is encountered, resetting the global eights_count to zero at the start of the iteration:
def process_replacements(value, replacement_dict):
    global eights_count
    eights_count = 0
    while True:
        if value == 0:
            zero_tracker()
        yield (value := replacement_dict[value])
Now, for any Xth iteration you perform at some point in the list, you can know how many 8s were appended at the end, and when they were added.
But unfortunately, simply counting the 8s isn't enough to get the final sequence; you also have to keep track of WHEN (i.e., in which iteration) they were added to the sequence, so you know how deeply to iterate them. You could store this in memory pretty efficiently by keeping track of each iteration in a dictionary; that dictionary would look like this:
eights_dict = {
    # iteration: count of 8s
}
And of course you can also calculate what each of these 8s will become at any arbitrary depth:
depth = 1
process_xth(8, depth, replacement_dict)
Once you know how many 8s there are added for every iteration given some finite number of Xth iterations, you can construct the final sequence by just yielding the correct value the right number of times over and over again, in a generator, without storing anything. I leave it to you to figure out how to construct your eights_dict and do this final part. :)
Here are a few things you can do to optimize it:
Instead of range(len(data)) you can use enumerate(data). This gives you access to both the element AND its index. Example:
EDIT: According to this post, range is faster than enumerate. If you care about speed, you could ignore this change.
for index, element in enumerate(data):
    if element == 0:
        data[index] = 6
Secondly, most of the if statements have a predictable pattern. So you can rewrite them like this:
def convert(data):
    for idx, elem in enumerate(data):
        if elem == 0:
            data[idx] = 6
            data.append(8)
        elif elem <= 8:
            data[idx] = elem - 1
Since lists are mutable, you don't need to return data; the function modifies the list in place.
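To illustrate the in-place behaviour (a minimal example of my own):
data = [3, 4, 3, 1, 2]
convert(data)   # no need to capture a return value
print(data)     # the same list object has been updated in place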
I see that you ask about generator functions, but that alone won't solve your memory issues. You run out of memory because, well, you keep everything in memory...
The memory complexity of your solution is O((8/7)^n), where n is the number of calls to convert(). This is because every time you call convert(), the data structure grows by about 1/7 of its elements (on average), since every number in your structure has (roughly) a 1/7 probability of being zero.
So the memory complexity is O((8/7)^n), hence exponential. But can we do better?
Yes, we can (assuming that the conversion function remains this "nice and predictable"). We can keep in memory just the number of zeros that were present in the structure each time we called convert(). That way, we get linear memory complexity, O(n). Does that come with a cost?
Yes. Element access no longer has constant complexity O(1); it has linear complexity O(n), where n is the number of calls to convert() (at least that's what I came up with).
But it resolves the out-of-memory issue.
I also assumed that there would be a need to iterate over the computed list. If you are only interested in the length, it is sufficient to keep a count of how many times each value occurs and work over those counts. That way you would use just a few integers of memory.
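Here is a minimal sketch of that counting idea (my own illustration, separate from the class below): instead of storing every element, track how many elements currently hold each value 0..8.
from collections import Counter

def convert_counts(counts):
    new = Counter()
    for value, n in counts.items():
        if value == 0:
            new[6] += n  # each 0 becomes a 6...
            new[8] += n  # ...and spawns an appended 8
        else:
            new[value - 1] += n
    return new

counts = Counter([3, 4, 3, 1, 2])
for _ in range(256):
    counts = convert_counts(counts)
print(sum(counts.values()))  # 26984457539 - the length, using a handful of integers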
Here is the code for the full indexable solution:
from copy import deepcopy  # to keep original list untouched ;)

class Data:
    def __init__(self, seed):
        self.seed = deepcopy(seed)
        self.iteration = 0
        self.zero_counts = list()
        self.len = len(seed)

    def __len__(self):
        return self.len

    def __iter__(self):
        return DataIterator(self)

    def __repr__(self):
        """not necessary for a solution, but helps with debugging"""
        return "[" + (", ".join(f"{n}" for n in self)) + "]"

    def __getitem__(self, index: int):
        if index >= self.len:
            raise IndexError
        if index < len(self.seed):
            ret = self.seed[index] - self.iteration
        else:
            inner_it_idx = index - len(self.seed)
            for i, cnt in enumerate(self.zero_counts):
                if inner_it_idx < cnt:
                    ret = 9 + i - self.iteration
                    break
                else:
                    inner_it_idx -= cnt
        ret = ret if ret > 6 else ret % 7
        return ret

    def convert(self):
        zero_count = sum((self[i] == 0) for i, _ in enumerate(self.seed))
        for i, count in enumerate(self.zero_counts):
            i = 9 + i - self.iteration
            i = i if i > 6 else i % 7
            if i == 0:
                zero_count += count
        self.zero_counts.append(zero_count)
        self.len += self.zero_counts[self.iteration]
        self.iteration += 1

class DataIterator:
    """Iterator class for the Data class"""
    def __init__(self, seed_data):
        self.seed_data = seed_data
        self.index = 0

    def __next__(self):
        if self.index >= self.seed_data.len:
            raise StopIteration
        ret = self.seed_data[self.index]
        self.index += 1
        return ret
Here is code that tests logical equality against your original convert() and prints the required output:
original_data = [3,4,3,1,2]

data = deepcopy(original_data)
d = Data(data)
for _ in range(30):
    output = convert(data)
    d.convert()
    print("---------------------------------------")
    print(len(output))
    assert len(output) == len(d)
    for i, e in enumerate(output):
        assert e == d[i]

data = deepcopy(original_data)
d = Data(data)
for _ in range(256):
    d.convert()
print(len(d))
The results past the point where your program crashed are:
1516813106
1662255394 <<< Killed here
1806321765
1976596756
2153338313
2348871138
2567316469
2792270106
3058372242
3323134871
3638852150
3959660078
4325467894
4720654782
5141141244
5625688711
6115404977
6697224392
7282794949
7964320044
8680314860
9466609138
10346343493
11256546221
12322913103
13398199926
14661544436
15963109809
17430929182
19026658353
20723155359
22669256596
24654746147
26984457539
I have the following code, which I use to loop through the row groups in a Parquet metadata file to find the maximum values for columns i, j, k across the whole file. As far as I know, I have to find the max value in each row group.
I am looking for:
how to write it with at least two fewer levels of nesting
in fewer lines in general
I tried to use a dictionary/lambda combo as a switch statement in place of some of the if statements, and to eliminate at least two levels of nesting, but I couldn't figure out how to do the greater-than evaluation without nesting further.
import pyarrow.parquet as pq

def main():
    metafile = r'D:\my_parquet_meta_file.metadata'
    meta = pq.read_metadata(metafile)
    max_i = 0
    max_j = 0
    max_k = 0
    for grp in range(0, meta.num_row_groups):
        for col in range(0, meta.num_columns):
            # locate columns i,j,k
            if meta.row_group(grp).column(col).path_in_schema in ['i', 'j', 'k']:
                if meta.row_group(grp).column(col).path_in_schema == 'i':
                    if meta.row_group(grp).column(col).statistics.max > max_i:
                        max_i = meta.row_group(grp).column(col).statistics.max
                if meta.row_group(grp).column(col).path_in_schema == 'j':
                    if meta.row_group(grp).column(col).statistics.max > max_j:
                        max_j = meta.row_group(grp).column(col).statistics.max
                if meta.row_group(grp).column(col).path_in_schema == 'k':
                    if meta.row_group(grp).column(col).statistics.max > max_k:
                        max_k = meta.row_group(grp).column(col).statistics.max
    print('max i: ' + str(max_i), 'max j: ' + str(max_j), 'max k: ' + str(max_k))

if __name__ == '__main__':
    main()
I've had someone give me 2 solutions:
The first involves using a list to hold the max values for each of my nominated columns, and then uses the Python max function to evaluate the higher value before assigning it back. I must say I'm not a huge fan of using an unnamed positional max value variable, but it does the job in this instance and I can't fault it.
Solution 1:
import pyarrow.parquet as pq

def main():
    metafile = r'D:\my_parquet_meta_file.metadata'
    meta = pq.read_metadata(metafile)
    max_value = [0, 0, 0]
    for grp in range(0, meta.num_row_groups):
        for col in range(0, meta.num_columns):
            column = meta.row_group(grp).column(col)
            for i, name in enumerate(['i', 'j', 'k']):
                if column.path_in_schema == name:
                    max_value[i] = max(max_value[i], column.statistics.max)
    print(dict(zip(['max i', 'max j', 'max k'], max_value)))

if __name__ == '__main__':
    main()
The second uses similar methods, but additionally uses a list comprehension to gather all of the column objects before iterating through them to find each column's max values. This removes one more level of nesting, and more importantly it separates the gathering of column objects into its own step before interrogating them, making the process a little clearer. The downside is that it may require more memory, since everything in each column object is retained rather than just the reported max value.
Solution 2:
import pyarrow.parquet as pq

def main():
    metafile = r'D:\my_parquet_meta_file.metadata'
    meta = pq.read_metadata(metafile)
    max_value = [0, 0, 0]
    columns = [meta.row_group(grp).column(col)
               for grp in range(0, meta.num_row_groups)
               for col in range(0, meta.num_columns)]  # comprehension clauses nest left to right: grp is the outer loop
    for column in columns:
        for i, name in enumerate(['i', 'j', 'k']):
            if column.path_in_schema == name:
                max_value[i] = max(max_value[i], column.statistics.max)
    print(dict(zip(['max i', 'max j', 'max k'], max_value)))

if __name__ == '__main__':
    main()
*Update: I've found out it can actually use less memory - if the comprehension is rewritten as a generator expression (parentheses instead of square brackets), it becomes lazy and won't retrieve each column until it is requested in the second loop where I iterate through "columns". The downside of using a generator is that you can only iterate through it once (it's not reusable) unless you redefine it.
The upside is that if I happen to want to break from the loop once I've found a desired value, I could, and there would be no remaining list taking up memory; it also would not need to have fetched every column object, making it faster. In my case it doesn't really matter because I do go through the whole list anyway, but with a lower memory footprint.
*Note: a bracketed comprehension builds a fully populated list in both Python 2 and Python 3; the lazy behaviour requires the generator-expression syntax:
# a generator expression: columns are fetched lazily, one at a time
columns = (meta.row_group(grp).column(col)
           for grp in range(0, meta.num_row_groups)
           for col in range(0, meta.num_columns))
To get a populated list, wrap the generator expression in the list() function, e.g.:
columns = list(<generator expression ...>)
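A tiny demonstration of the one-shot nature of generators (my example):
gen = (x * x for x in range(3))
print(list(gen))  # [0, 1, 4]
print(list(gen))  # [] - the generator is already exhausted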
You can simulate a switch statement with the following function:
def switch(v):
    yield lambda *c: v in c
It simulates a switch statement using a single-pass for loop with if/elif/else conditions that don't repeat the switching value. For example:
for case in switch(x):
    if case(3):
        ...  # do something
    elif case(4, 5, 6):
        ...  # do something else
    else:
        ...  # do some other thing
It can also be used in a more C-like style:
for case in switch(x):
    if case(3):
        ...  # do something
        break
    if case(4, 5, 6):
        ...  # do something else
        break
else:
    ...  # do some other thing (the "default": runs only if no break occurred)
Here's how to use it with your code:
...
for case in switch(meta.row_group(grp).column(col).path_in_schema):
    if not case('i', 'j', 'k'): break
    statMax = meta.row_group(grp).column(col).statistics.max
    if case('i') and statMax > max_i: max_i = statMax
    elif case('j') and statMax > max_j: max_j = statMax
    elif case('k') and statMax > max_k: max_k = statMax
...
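For what it's worth, here is a small self-contained check of the switch() helper with made-up values, just to show the dispatch behaviour:
def switch(v):
    yield lambda *c: v in c

for x in (3, 5, 9):
    for case in switch(x):
        if case(3):
            print(x, '-> matched 3')
        elif case(4, 5, 6):
            print(x, '-> matched 4, 5 or 6')
        else:
            print(x, '-> default')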
So I have two files/dictionaries I want to compare, using a binary search implementation (yes, this is very obviously homework).
One file is
american-english
Amazon
Americana
Americanization
Civilization
And the other file is
british-english
Amazon
Americana
Americanisation
Civilisation
The code below should be pretty straightforward: import the files, compare them, return the differences. However, somewhere near the bottom, where it says entry == found_difference, I feel as if the debugger skips right over it; even though I can see in memory that the two variables are different, I only get the final element returned in the end. Where am I going wrong?
# File importer
def wordfile_to_list(filename):
    """Converts a list of words to a Python list"""
    wordlist = []
    with open(filename) as f:
        for line in f:
            wordlist.append(line.rstrip("\n"))
    return wordlist

# Binary search algorithm
def binary_search(sorted_list, element):
    """Search for element in list using binary search. Assumes sorted list"""
    matches = []
    index_start = 0
    index_end = len(sorted_list)
    while (index_end - index_start) > 0:
        index_current = (index_end - index_start) // 2 + index_start
        if element == sorted_list[index_current]:
            return True
        elif element < sorted_list[index_current]:
            index_end = index_current
        elif element > sorted_list[index_current]:
            index_start = index_current + 1
        return element

# Check file differences using the binary search algorithm
def wordfile_differences_binarysearch(file_1, file_2):
    """Finds the differences between two plaintext lists,
    using binary search algorithm, and returns them in a new list"""
    wordlist_1 = wordfile_to_list(file_1)
    wordlist_2 = wordfile_to_list(file_2)
    matches = []
    for entry in wordlist_1:
        found_difference = binary_search(sorted_list=wordlist_2, element=entry)
        if entry == found_difference:
            pass
    else:
        matches.append(found_difference)
    return matches

# Check if it works
differences = wordfile_differences_binarysearch(file_1="british-english", file_2="american-english")
print(differences)
You don't have an else suite for your if statement. Your if statement does nothing (it uses pass when the test is true, skipped otherwise).
You do have an else suite for the for loop:
for entry in wordlist_1:
    # ...
else:
    matches.append(found_difference)
A for loop can have an else suite as well; it is executed when a loop completes without a break statement. So when your for loop completes, the current value for found_difference is appended; so whatever was assigned last to that name.
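A tiny standalone illustration of that for/else behaviour (my example):
for n in [1, 3, 5]:
    if n % 2 == 0:
        print('found an even number')
        break
else:
    print('no break happened')  # this runs, since the loop completed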
Fix your indentation if the else suite was meant to be part of the if test:
for entry in wordlist_1:
    found_difference = binary_search(sorted_list=wordlist_2, element=entry)
    if entry == found_difference:
        pass
    else:
        matches.append(found_difference)
However, you shouldn't use a pass statement there, just invert the test:
matches = []
for entry in wordlist_1:
    found_difference = binary_search(sorted_list=wordlist_2, element=entry)
    if entry != found_difference:
        matches.append(found_difference)
Note that the variable name matches feels off here; you are appending words that are missing in the other list, not words that match. Perhaps missing is a better variable name here.
Note that your binary_search() function always returns element, the word you searched on. That'll always be equal to the element you passed in, so you can't use that to detect if a word differed! You need to unindent that last return line and return False instead:
def binary_search(sorted_list, element):
    """Search for element in list using binary search. Assumes sorted list"""
    index_start = 0
    index_end = len(sorted_list)
    while (index_end - index_start) > 0:
        index_current = (index_end - index_start) // 2 + index_start
        if element == sorted_list[index_current]:
            return True
        elif element < sorted_list[index_current]:
            index_end = index_current
        elif element > sorted_list[index_current]:
            index_start = index_current + 1
    return False
Now you can use a list comprehension in your wordfile_differences_binarysearch() loop:
[entry for entry in wordlist_1 if not binary_search(wordlist_2, entry)]
Last but not least, you don't have to re-invent the binary search wheel, just use the bisect module:
from bisect import bisect_left

def binary_search(sorted_list, element):
    index = bisect_left(sorted_list, element)
    return index < len(sorted_list) and sorted_list[index] == element
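For example, with a small sorted word list (my example), the corrected function above behaves like this:
words = ['Amazon', 'Americana', 'Americanisation', 'Civilisation']
print(binary_search(words, 'Americana'))         # True
print(binary_search(words, 'Americanization'))   # False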
With sets
Binary search is used to improve efficiency of an algorithm, and decrease complexity from O(n) to O(log n).
Since the naive approach would be to check every word in wordlist1 against every word in wordlist2, the complexity would be O(n**2).
Using binary search would help to get O(n * log n), which is already much better.
Using sets, you could get O(n):
american = """Amazon
Americana
Americanization
Civilization"""
british = """Amazon
Americana
Americanisation
Civilisation"""
american = {line.strip() for line in american.split("\n")}
british = {line.strip() for line in british.split("\n")}
You could get the american words not present in the british dictionary:
print(american - british)
# {'Civilization', 'Americanization'}
You could get the british words not present in the american dictionary:
print(british - american)
# {'Civilisation', 'Americanisation'}
You could get the union of the two last sets. I.e. words that are present in exactly one dictionary:
print(american ^ british)
# {'Americanisation', 'Civilisation', 'Americanization', 'Civilization'}
This approach is faster and more concise than any binary search implementation. But if you really want to use one, as usual, you cannot go wrong with @MartijnPieters' answer.
With two iterators
Since you know the two lists are sorted, you could simply iterate in parallel over the two sorted lists and look for any difference:
american = """Amazon
Americana
Americanism
Americanization
Civilization"""
british = """Amazon
Americana
Americanisation
Americanism
Civilisation"""
american = [line.strip() for line in american.split("\n")]
british = [line.strip() for line in british.split("\n")]
n1, n2 = len(american), len(british)
i, j = 0, 0
while True:
    try:
        w1 = american[i]
        w2 = british[j]
        if w1 == w2:
            i += 1
            j += 1
        elif w1 < w2:
            print('%s is in american dict only' % w1)
            i += 1
        else:
            print('%s is in british dict only' % w2)
            j += 1
    except IndexError:
        break

for w1 in american[i:]:
    print('%s is in american dict only' % w1)
for w2 in british[j:]:
    print('%s is in british dict only' % w2)
It outputs:
Americanisation is in british dict only
Americanization is in american dict only
Civilisation is in british dict only
Civilization is in american dict only
It's O(n) as well.
Here is my code:
def Max(lst):
    if len(lst) == 1:
        return lst[0]
    else:
        m = Max(lst[1:])
        if m > lst[0]:
            return m
        else:
            return lst[0]

def Min(lst):
    if len(lst) == 1:
        return lst[0]
    else:
        m = Min(lst[1:])
        if m < lst[0]:
            return m
        else:
            return lst[0]

print("Max number:", Max([5,4,100,0,2]))
print("Min number:", Min([5,4,100,0,2]))
Basically I need a single function that returns both the largest and smallest number, and it needs to be recursive. How would I change this code?
Some types of recursive algorithms/implementations operating on a list input are quite easy to come up with, if you know the "trick". That trick being:
Just assume you already have a function that can do what you want.
Wait, no, that doesn't really make sense, does it? Then we'd already be done.
Let's try that again:
Just assume you already have a function that can do what you want (but only for inputs 1 element smaller than you need).
There, much better. While a bit silly, that's an assumption we can work with.
So what do we want? In your example, it's returning the minimum and maximum elements of a list. Let's assume we want them returned as a 2-tuple (a.k.a. a "pair"):
lst = [5, 4, 100, 0, 2]
# Well, actually, we can only do this for a smaller list,
# as per our assumption above.
lst = lst[1:]
lst_min, lst_max = magic_min_max(lst) # I want a pony!
assert lst_min == 0 # Wishful thinking
assert lst_max == 100 # Wishful thinking
If we have such a magic function, can we use it to solve the problem for the actual input size? Let's try:
def real_min_max(lst):
    candidate = lst[0]
    rest_of_the_list = lst[1:]
    min_of_rest, max_of_rest = magic_min_max(rest_of_the_list)  # Allowed because
                                                                # smaller than lst
    min_of_lst = candidate if candidate < min_of_rest else min_of_rest
    max_of_lst = candidate if candidate > max_of_rest else max_of_rest
    return min_of_lst, max_of_lst
Not exactly easy, but pretty straightforward, isn't it? But let's assume our magic function magic_min_max has an additional restriction: it cannot handle empty lists. (After all, an empty list has neither a minimum nor a maximum element. Not even magic can change that.)
So if lst has size 1, we must not call the magic function. No problem for us, though. That case is easy to detect and easy to circumvent. The single element is both minimum and maximum of its list, so we just return it twice:
def real_min_max(lst):
    candidate = lst[0]
    if len(lst) == 1:
        return candidate, candidate  # single element is both min & max
    rest_of_the_list = lst[1:]
    min_of_rest, max_of_rest = magic_min_max(rest_of_the_list)  # Allowed because
                                                                # smaller than lst,
                                                                # but (if we get
                                                                # here) not empty
    min_of_lst = candidate if candidate < min_of_rest else min_of_rest
    max_of_lst = candidate if candidate > max_of_rest else max_of_rest
    return min_of_lst, max_of_lst
So that's that.
But wait ... there is no magic. If we want to call a function, it has to actually exist. So we need to implement a function that returns the minimum and maximum of a list, so that we can call it in real_min_max instead of magic_min_max. As this is about recursion, you know the solution: real_min_max is that function (once it's fixed by calling a function that does exist), so we can have it call itself:
def real_min_max(lst):
    candidate = lst[0]
    if len(lst) == 1:
        return candidate, candidate  # single element is both min & max
    rest_of_the_list = lst[1:]
    min_of_rest, max_of_rest = real_min_max(rest_of_the_list)  # No magic needed,
                                                               # just recursion!
    min_of_lst = candidate if candidate < min_of_rest else min_of_rest
    max_of_lst = candidate if candidate > max_of_rest else max_of_rest
    return min_of_lst, max_of_lst
Let's try it:
lst = [5, 4, 100, 0, 2]
real_min_max(lst) # returns (0, 100)
It works!
import sys

class MaxMin:
    max = -sys.maxsize - 1  # smallest starting value (sys.maxint existed only in Python 2)
    min = sys.maxsize

    def getMaxMin(self, lst, obj):
        if len(lst) == 1:
            obj.max = lst[0]
            obj.min = lst[0]
        else:
            self.getMaxMin(lst[1:], obj)
            if obj.max < lst[0]:
                obj.max = lst[0]
            if obj.min > lst[0]:
                obj.min = lst[0]

obj = MaxMin()
obj.getMaxMin([5,4,100,0,2], obj)
print("Max number:", obj.max)
print("Min number:", obj.min)
That is the exact idea of higher-order functions. You can add a compare parameter to your function, and pass lambda a, b: a > b for Max and lambda a, b: a < b for Min. Then, instead of m > lst[0], use compare(m, lst[0]).
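A minimal sketch of that higher-order idea (my illustration; the name extreme is made up):
def extreme(lst, compare):
    if len(lst) == 1:
        return lst[0]
    m = extreme(lst[1:], compare)
    return m if compare(m, lst[0]) else lst[0]

print('Max number:', extreme([5, 4, 100, 0, 2], lambda a, b: a > b))
print('Min number:', extreme([5, 4, 100, 0, 2], lambda a, b: a < b))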
Working on the below problem.
Problem:
Given an m * n grid, where one is allowed to move up or right, find the different paths between two grid points.
I wrote a recursive version and a dynamic programming version, but they return different results. Any thoughts on what is wrong?
Source code:
from collections import defaultdict

def move_up_right(remaining_right, remaining_up, prefix, result):
    if remaining_up == 0 and remaining_right == 0:
        result.append(''.join(prefix[:]))
        return
    if remaining_right > 0:
        prefix.append('r')
        move_up_right(remaining_right-1, remaining_up, prefix, result)
        prefix.pop(-1)
    if remaining_up > 0:
        prefix.append('u')
        move_up_right(remaining_right, remaining_up-1, prefix, result)
        prefix.pop(-1)

def move_up_right_v2(remaining_right, remaining_up):
    # key is a tuple (given remaining_right, given remaining_up),
    # value is solutions in terms of list
    dp = defaultdict(list)
    dp[(0,1)].append('u')
    dp[(1,0)].append('r')
    for right in range(1, remaining_right+1):
        for up in range(1, remaining_up+1):
            for s in dp[(right-1,up)]:
                dp[(right,up)].append(s+'r')
            for s in dp[(right,up-1)]:
                dp[(right,up)].append(s+'u')
    return dp[(right, up)]

if __name__ == "__main__":
    result = []
    move_up_right(2,3,[],result)
    print(result)
    print('============')
    print(move_up_right_v2(2,3))
In version 2 you should be starting your for loops at 0 not at 1. By starting at 1 you are missing possible permutations where you traverse the bottom row or leftmost column first.
Change version 2 to:
def move_up_right_v2(remaining_right, remaining_up):
    # key is a tuple (given remaining_right, given remaining_up),
    # value is solutions in terms of list
    dp = defaultdict(list)
    dp[(0,1)].append('u')
    dp[(1,0)].append('r')
    for right in range(0, remaining_right+1):
        for up in range(0, remaining_up+1):
            for s in dp[(right-1,up)]:
                dp[(right,up)].append(s+'r')
            for s in dp[(right,up-1)]:
                dp[(right,up)].append(s+'u')
    return dp[(right, up)]
And then:
result = []
move_up_right(2,3,[],result)
set(move_up_right_v2(2,3)) == set(result)
# True
And just for fun... another way to do it:
from itertools import permutations
list(map(''.join, set(permutations('r'*2+'u'*3, 5))))
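As a quick sanity check (my addition): with 2 rights and 3 ups, every path is a distinct arrangement of 'rruuu', so there should be C(5,2) = 10 paths in total:
from math import comb  # Python 3.8+

result = []
move_up_right(2, 3, [], result)
assert len(result) == comb(5, 2) == 10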
The problem with the dynamic programming version is that it doesn't take into account the paths that start with more than one move up ('uu...') or more than one move right ('rr...').
Before executing the main loop you need to fill dp[(x,0)] for every x from 1 to remaining_right+1 and dp[(0,y)] for every y from 1 to remaining_up+1.
In other words, replace this:
dp[(0,1)].append('u')
dp[(1,0)].append('r')
with this:
for right in range(1, remaining_right+1):
    dp[(right,0)].append('r'*right)
for up in range(1, remaining_up+1):
    dp[(0,up)].append('u'*up)
Someone suggested replacing my:
for m in hazardflr:
    safetiles.append((m, step))
i = 0
with a more reasonable approach such as:
for i, m in enumerate(hazardflr):
    safetiles.append((m, step))
I see now how this saves code lines while saying the same thing; I didn't know about the enumerate() function. My question now is: if there is a way to make this code even more efficient and line-saving, what other modifications can I make?
def missingDoor(trapdoor, roomwidth, roomheight, step):
    safezone = []
    hazardflr = givenSteps(roomwidth, step, True)
    safetiles = []
    for i, m in enumerate(hazardflr):
        safetiles.append((m, step))
    while i < len(safetiles):
        nextSafe = safetiles[i]
        if knownSafe(roomwidth, roomheight, nextSafe[0], nextSafe[1]):
            if trapdoor[nextSafe[0] // roomwidth][nextSafe[0] % roomwidth] == "0":
                if nextSafe[0] not in safezone:
                    safezone.append(nextSafe[0])
                for e in givenSteps(roomwidth, nextSafe[0], True):
                    if knownSafe(roomwidth, roomheight, e, nextSafe[0]):
                        if trapdoor[e // roomwidth][e % roomwidth] == "0" and (e, nextSafe[0]) not in safetiles:
                            safetiles.append((e, nextSafe[0]))
        i += 1
    return sorted(safezone)
assign nextSafe[0] to a local variable
Your code uses the expression nextSafe[0] nine times (if I count correctly). Accessing an item in a list is more expensive than reading the value from a local variable.
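A rough way to see the effect yourself (an illustrative micro-benchmark of my own, not from the original answer):
import timeit

setup = "nextSafe = (42, 7)"
# repeated subscripting vs caching the value in a local name first
print(timeit.timeit("nextSafe[0]; nextSafe[0]; nextSafe[0]", setup=setup))
print(timeit.timeit("ns0 = nextSafe[0]; ns0; ns0; ns0", setup=setup))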
Modification as follows:
for i, m in enumerate(hazardflr):
    safetiles.append((m, step))
while i < len(safetiles):
    nextSafe = safetiles[i]
    ns0 = nextSafe[0]
    if knownSafe(roomwidth, roomheight, ns0, nextSafe[1]):
        if trapdoor[ns0 // roomwidth][ns0 % roomwidth] == "0":
            if ns0 not in safezone:
                safezone.append(ns0)
            for e in givenSteps(roomwidth, ns0, True):
                if knownSafe(roomwidth, roomheight, e, ns0):
                    if trapdoor[e // roomwidth][e % roomwidth] == "0" and (e, ns0) not in safetiles:
                        safetiles.append((e, ns0))
    i += 1
This could speed it up a bit.
Turn safezone into a set
A test item in list_var scans the whole list when list_var is a list. If you turn the test into item in set_var, the result is known almost immediately regardless of the size of set_var, because a set keeps a hash table that works like a "database index" for lookups.
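You can verify the difference with a quick timing experiment (my example):
import timeit

print(timeit.timeit("99999 in data", setup="data = list(range(100000))", number=1000))
print(timeit.timeit("99999 in data", setup="data = set(range(100000))", number=1000))
# the set version is orders of magnitude faster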
In your code, change safezone = [] into safezone = set().
In fact, you can completely skip the membership test in your case:
if ns0 not in safezone:
    safezone.append(ns0)
can be turned into:
safezone.add(ns0)
as the set will take care of keeping only unique items.