Why does Python code run faster in a function?
I am trying to understand this phenomenon in Python: when I wrap my code in a function, it speeds up by more than 30%! So far I have failed to find any reasonable explanation.
For example:
import sys
sys.stdin = open("dd.txt")
import cProfile
import time
t0 = time.time()
def main():
    from collections import defaultdict
    n, T = map(int, raw_input().split())
    tree = defaultdict(lambda: set())
    root = None
    for _ in xrange(n - 1):
        a, b = map(int, raw_input().split())
        tree[a].add(b)
        tree[b].add(a)
        if not root:
            root = a
    count = 0
    stack = [root]
    links = dict()
    links[root] = 0
    mem = dict()
    while stack:
        node = stack.pop()
        path = list()
        path.append(node)
        lnk = links[node]
        while lnk:
            if lnk not in mem:
                if abs(lnk - node) <= T:
                    count += 1
                path.append(lnk)
                lnk = links[lnk]
            else:
                path.extend(mem[lnk])
                for el in mem[lnk]:
                    if abs(el - node) <= T:
                        count += 1
                break
        #print node, path
        plen = len(path)
        mem[node] = path
        for next_node in tree[node]:
            if plen <= 1 or next_node != path[1]:
                links[next_node] = node
                stack.append(next_node)
    print count
main()
print time.time() - t0
This prints running time as 2.5 seconds, but this:
import sys
sys.stdin = open("dd.txt")
import cProfile
import time
t0 = time.time()
#def main():
from collections import defaultdict
n, T = map(int, raw_input().split())
tree = defaultdict(lambda: set())
root = None
for _ in xrange(n - 1):
    a, b = map(int, raw_input().split())
    tree[a].add(b)
    tree[b].add(a)
    if not root:
        root = a
count = 0
stack = [root]
links = dict()
links[root] = 0
mem = dict()
while stack:
    node = stack.pop()
    path = list()
    path.append(node)
    lnk = links[node]
    while lnk:
        if lnk not in mem:
            if abs(lnk - node) <= T:
                count += 1
            path.append(lnk)
            lnk = links[lnk]
        else:
            path.extend(mem[lnk])
            for el in mem[lnk]:
                if abs(el - node) <= T:
                    count += 1
            break
    #print node, path
    plen = len(path)
    mem[node] = path
    for next_node in tree[node]:
        if plen <= 1 or next_node != path[1]:
            links[next_node] = node
            stack.append(next_node)
print count
#main()
print time.time() - t0
Simply moving the code out of the main() function makes it run in 3.5 seconds instead of 2.5.
What could the reason for this be?
The difference is that Python uses different bytecode operations for accessing the local variables of a function and the global variables of a module. The LOAD_FAST opcode used for accessing local variables takes a numeric index and performs a quick array lookup to retrieve the value. The LOAD_NAME and LOAD_GLOBAL opcodes used for accessing global variables take a name and perform a hash table lookup (possibly in multiple hash tables) to retrieve the value.
By wrapping your code in a function, you're effectively converting all of your variables from globals into locals, which enables much faster access to them.
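You can see the difference yourself with the dis module; a minimal sketch (the exact bytecode varies between Python versions, and the names here are purely illustrative):

import dis

x = 1

def f():
    y = 1
    return y   # local read: compiled to LOAD_FAST (array index lookup)

def g():
    return x   # global read: compiled to LOAD_GLOBAL (name lookup in dicts)

dis.dis(f)   # listing contains LOAD_FAST for `y`
dis.dis(g)   # listing contains LOAD_GLOBAL for `x`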
The Question
Is there a straightforward algorithm for figuring out if a variable is "used" within a given scope?
In a Python AST, I want to remove all assignments to variables that are not otherwise used anywhere, within a given scope.
Details
Motivating example
In the following code, it is obvious to me (a human), that _hy_anon_var_1 is unused, and therefore the _hy_anon_var_1 = None statements can be removed without changing the result:
# Before
def hailstone_sequence(n: int) -> Iterable[int]:
    while n != 1:
        if 0 == n % 2:
            n //= 2
            _hy_anon_var_1 = None
        else:
            n = 3 * n + 1
            _hy_anon_var_1 = None
        yield n
# After
def hailstone_sequence(n: int) -> Iterable[int]:
    while n != 1:
        if 0 == n % 2:
            n //= 2
        else:
            n = 3 * n + 1
        yield n
Bonus version
Extend this to []-lookups with string literals as keys.
In this example, I would expect _hyx_letXUffffX25['x'] to be eliminated as unused, because _hyx_letXUffffX25 is local to h, so _hyx_letXUffffX25['x'] is essentially the same thing as a local variable. I would then expect _hyx_letXUffffX25 itself to be eliminated once there are no more references to it.
# Before
def h():
    _hyx_letXUffffX25 = {}
    _hyx_letXUffffX25['x'] = 5
    return 3
# After
def h():
    return 3
From what I can tell, this is somewhat of an edge case, and I think the basic algorithmic problem is the same.
Definition of "used"
Assume that no dynamic name lookups are used in the code.
A name is used if any of these are true in a given scope:
It is referenced anywhere in an expression. Examples include: an expression in a return statement, an expression on the right-hand side of an assignment statement, a default argument in a function definition, being referenced inside a local function definition, etc.
It is referenced on the left-hand side of an "augmented assignment" statement, i.e. it is an augtarget therein. This might represent "useless work" in a lot of programs, but for the purpose of this task that's OK and distinct from being an entirely unused name.
It is nonlocal or global. These might be useless nonlocals or globals, but because they reach beyond the given scope, it is OK for my purposes to assume that they are "used".
Please let me know in the comments if this seems incorrect, or if you think I am missing something.
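To make the second rule concrete, here is a small hypothetical case (the names are mine): total is only ever written, but the augmented assignment makes it an augtarget, so it still counts as "used":

def f():
    total = 0
    total += 1   # augtarget: counts as "used" under rule 2
    return 5     # `total` is never read, but it should not be removed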
Examples of "used" and "unused"
Example 1: unused
Variable i in f is unused:
def f():
    i = 0
    return 5
Example 2: unused
Variable x in f is unused:
def f():
    def g(x):
        return x/5
    x = 10
    return g(100)
The name x does appear in g, but the variable x in g is local to g. It shadows the variable x created in f, but the two x names are not the same variable.
Variation
If g has no parameter x, then x is in fact used:
def f():
    x = 10
    def g():
        return x/5
    return g(100)
Example 3: used
Variable i in f is used:
def f():
    i = 0
    return i
Example 4: used
Variable accum in silly_map and silly_any is used in both examples:
def silly_map(func, data):
    data = iter(data)
    accum = []
    def _impl():
        try:
            value = next(data)
        except StopIteration:
            return accum
        else:
            accum.append(value)
            return _impl()
    return _impl()

def silly_any(func, data):
    data = iter(data)
    accum = False
    def _impl():
        nonlocal accum, data
        try:
            value = next(data)
        except StopIteration:
            return accum
        else:
            if value:
                data = []
                accum = True
            else:
                return _impl()
    return _impl()
The solution below works in two parts. First, the syntax tree of the source is traversed and all unused target assignment statements are discovered. Second, the tree is traversed again via a custom ast.NodeTransformer class, which removes these offending assignment statements. The process is repeated until all unused assignment statements are removed. Once this is finished, the final source is written out.
The ast traverser class:
import ast, itertools, collections as cl
class AssgnCheck:
    def __init__(self, scopes = None):
        self.scopes = scopes or cl.defaultdict(list)
    @classmethod
    def eq_ast(cls, a1, a2):
        #check that two `ast`s are the same
        if type(a1) != type(a2):
            return False
        if isinstance(a1, list):
            return all(cls.eq_ast(*i) for i in itertools.zip_longest(a1, a2))
        if not isinstance(a1, ast.AST):
            return a1 == a2
        return all(cls.eq_ast(getattr(a1, i, None), getattr(a2, i, None))
                   for i in set(a1._fields)|set(a2._fields) if i != 'ctx')
    def check_exist(self, t_ast, s_path):
        #traverse the scope stack and remove scope assignments that are discovered in the `ast`
        s_scopes = []
        for _ast in t_ast:
            for sid in s_path[::-1]:
                s_scopes.extend(found:=[b for _, b in self.scopes[sid] if AssgnCheck.eq_ast(_ast, b) and \
                                        all(not AssgnCheck.eq_ast(j, b) for j in s_scopes)])
                self.scopes[sid] = [(a, b) for a, b in self.scopes[sid] if b not in found]
    def traverse(self, _ast, s_path = [1]):
        #walk the ast object itself
        _t_ast = None
        if isinstance(_ast, ast.Assign): #if assignment statement, add ast object to current scope
            self.traverse(_ast.targets[0], s_path)
            self.scopes[s_path[-1]].append((True, _ast.targets[0]))
            _ast = _ast.value
        if isinstance(_ast, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            s_path = [*s_path, (nid:=(1 if not self.scopes else max(self.scopes)+1))]
            if isinstance(_ast, (ast.FunctionDef, ast.AsyncFunctionDef)):
                self.scopes[nid].extend([(False, ast.Name(i.arg)) for i in _ast.args.args])
                _t_ast = [*_ast.args.defaults, *_ast.body]
        self.check_exist(_t_ast if _t_ast is not None else [_ast], s_path) #determine if any assignment statement targets have previously defined names
        if _t_ast is None:
            for _b in _ast._fields:
                if isinstance((b:=getattr(_ast, _b)), list):
                    for i in b:
                        self.traverse(i, s_path)
                elif isinstance(b, ast.AST):
                    self.traverse(b, s_path)
        else:
            for _ast in _t_ast:
                self.traverse(_ast, s_path)
Putting it all together:
class Visit(ast.NodeTransformer):
    def __init__(self, asgn):
        super().__init__()
        self.asgn = asgn
    def visit_Assign(self, node):
        #remove assignment nodes marked as unused
        if any(node.targets[0] == i for i in self.asgn):
            return None
        return node

def remove_assgn(f_name):
    tree = ast.parse(open(f_name).read())
    while True:
        r = AssgnCheck()
        r.traverse(tree)
        if not (k:=[j for b in r.scopes.values() for k, j in b if k]):
            break
        v = Visit(k)
        tree = v.visit(tree)
    return ast.unparse(tree)
print(remove_assgn('test_name_assign.py'))
Output Samples
Contents of test_name_assign.py:
def hailstone_sequence(n: int) -> Iterable[int]:
    while n != 1:
        if 0 == n % 2:
            n //= 2
            _hy_anon_var_1 = None
        else:
            n = 3 * n + 1
            _hy_anon_var_1 = None
        yield n
Output:
def hailstone_sequence(n: int) -> Iterable[int]:
    while n != 1:
        if 0 == n % 2:
            n //= 2
        else:
            n = 3 * n + 1
        yield n
Contents of test_name_assign.py:
def h():
    _hyx_letXUffffX25 = {}
    _hyx_letXUffffX25['x'] = 5
    return 3
Output:
def h():
    return 3
Contents of test_name_assign.py:
def f():
    i = 0
    return 5
Output:
def f():
    return 5
Contents of test_name_assign.py:
def f():
    x = 10
    def g():
        return x/5
    return g(100)
Output:
def f():
    x = 10
    def g():
        return x / 5
    return g(100)
I'm trying to optimize this solution for a function that accepts 2 arguments: fullstring and substring. The function will return True if the substring exists in the fullstring, and False if it does not. There is one special wildcard that could be entered in the substring that denotes 0 or 1 of the previous symbol, and there can be more than one wildcard in the substring.
For example, "a*" means "" or "a"
The solution I have works fine but I'm trying to reduce the number of for loops (3) and optimize for time complexity. Using regex is not permitted. Is there a more pythonic way to do this?
Current Solution:
def complex_search(fullstring, substring):
    patterns = []
    if "*" in substring:
        index = substring.index("*")
        patterns.append(substring[:index-1] + substring[index+1:])
        patterns.append(substring[:index] + substring[index+1:])
    else:
        patterns.append(substring)
    def check(s1, s2):
        for a, b in zip(s1, s2):
            if a != b:
                return False
        return True
    for pattern in patterns:
        for i in range(len(fullstring) - len(pattern) + 1):
            if check(fullstring[i:i+len(pattern)], pattern):
                return True
    return False
>> print(complex_search("dogandcats", "dogs*andcats"))
>> True
Approach
Create all alternatives for the substring based upon the '*' in substring (there can be zero or more '*' in substring).
See function combs(...) below.
Use Aho-Corasick to check if one of the substring patterns is in the string. Aho-Corasick is a very efficient algorithm for checking whether one or more substrings appear in a string, and it formed the basis of the original Unix command fgrep.
For illustrative purposes a Python version of Aho-Corasick is used below, but a C implementation (with Python wrapper) is available at pyahocorasick for higher performance.
See class Aho_Corasick below.
Code
# Note: This is a modification of code explained in https://carshen.github.io/data-structures/algorithms/2014/04/07/aho-corasick-implementation-in-python.html
from collections import deque
class Aho_Corasick():
    def __init__(self, keywords):
        self.adj_list = []
        # creates a trie of keywords, then sets fail transitions
        self.create_empty_trie()
        self.add_keywords(keywords)
        self.set_fail_transitions()

    def create_empty_trie(self):
        """ initalize the root of the trie """
        self.adj_list.append({'value':'', 'next_states':[],'fail_state':0,'output':[]})

    def add_keywords(self, keywords):
        """ add all keywords in list of keywords """
        for keyword in keywords:
            self.add_keyword(keyword)

    def find_next_state(self, current_state, value):
        for node in self.adj_list[current_state]["next_states"]:
            if self.adj_list[node]["value"] == value:
                return node
        return None

    def add_keyword(self, keyword):
        """ add a keyword to the trie and mark output at the last node """
        current_state = 0
        j = 0
        keyword = keyword.lower()
        child = self.find_next_state(current_state, keyword[j])
        while child != None:
            current_state = child
            j = j + 1
            if j < len(keyword):
                child = self.find_next_state(current_state, keyword[j])
            else:
                break
        for i in range(j, len(keyword)):
            node = {'value':keyword[i],'next_states':[],'fail_state':0,'output':[]}
            self.adj_list.append(node)
            self.adj_list[current_state]["next_states"].append(len(self.adj_list) - 1)
            current_state = len(self.adj_list) - 1
        self.adj_list[current_state]["output"].append(keyword)

    def set_fail_transitions(self):
        q = deque()
        child = 0
        for node in self.adj_list[0]["next_states"]:
            q.append(node)
            self.adj_list[node]["fail_state"] = 0
        while q:
            r = q.popleft()
            for child in self.adj_list[r]["next_states"]:
                q.append(child)
                state = self.adj_list[r]["fail_state"]
                while (self.find_next_state(state, self.adj_list[child]["value"]) == None
                       and state != 0):
                    state = self.adj_list[state]["fail_state"]
                self.adj_list[child]["fail_state"] = self.find_next_state(state, self.adj_list[child]["value"])
                if self.adj_list[child]["fail_state"] is None:
                    self.adj_list[child]["fail_state"] = 0
                self.adj_list[child]["output"] = self.adj_list[child]["output"] + self.adj_list[self.adj_list[child]["fail_state"]]["output"]

    def get_keywords_found(self, line):
        """ returns keywords in trie from line """
        line = line.lower()
        current_state = 0
        keywords_found = []
        for i, c in enumerate(line):
            while self.find_next_state(current_state, c) is None and current_state != 0:
                current_state = self.adj_list[current_state]["fail_state"]
            current_state = self.find_next_state(current_state, c)
            if current_state is None:
                current_state = 0
            else:
                for j in self.adj_list[current_state]["output"]:
                    yield {"index":i-len(j) + 1,"word":j}

    def pattern_found(self, line):
        ''' Returns true when the pattern is found '''
        return next(self.get_keywords_found(line), None) is not None
def combs(word, n = 0, path = ""):
    ''' Generate all combinations of words with star
        e.g. list(combs("he*lp*")) = ['hl', 'hlp', 'hel', 'help']
    '''
    if n == len(word):
        yield path
    elif word[n] == '*':
        # Next letter
        yield from combs(word, n+1, path)  # don't add * to path
    else:
        if n < len(word) - 1 and word[n+1] == '*':
            yield from combs(word, n+1, path)  # Not including letter at n
        yield from combs(word, n+1, path + word[n])  # including letter at n
Test
patterns = combs("dogs*andcats") # ['dogandcats', 'dogsandcats']
aho = Aho_Corasick(patterns) # Aho-Corasick structure to recognize patterns
print(aho.pattern_found("dogandcats")) # Output: True
print(aho.pattern_found("dogsandcats")) # Output: True
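For larger inputs, the C-backed pyahocorasick package mentioned above can replace the illustrative Python class; a minimal sketch under that assumption (the helper name contains_any is mine, and combs is the generator defined above):

import ahocorasick   # pip install pyahocorasick

def contains_any(fullstring, patterns):
    """Return True if any of the patterns occurs as a substring of fullstring."""
    automaton = ahocorasick.Automaton()
    for idx, pattern in enumerate(patterns):
        automaton.add_word(pattern, (idx, pattern))
    automaton.make_automaton()
    # iter() yields (end_index, value) for every match; one hit is enough
    return next(automaton.iter(fullstring), None) is not None

print(contains_any("dogandcats", list(combs("dogs*andcats"))))   # True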
I am trying to develop a 15-puzzle program in Python, and it is supposed to sort everything into numerical order using the A* search algorithm, with the 0 at the end.
Here is the A* algorithm I've developed so far:
"""Search the nodes with the lowest f scores first.
You specify the function f(node) that you want to minimize; for example,
if f is a heuristic estimate to the goal, then we have greedy best
first search; if f is node.depth then we have breadth-first search.
There is a subtlety: the line "f = memoize(f, 'f')" means that the f
values will be cached on the nodes as they are computed. So after doing
a best first search you can examine the f values of the path returned."""
def best_first_graph_search_manhattan(root_node):
    start_time = time.time()
    f = manhattan(root_node)
    node = root_node
    frontier = []
    # how do we create this association?
    heapq.heappush(frontier, node)
    explored = set()
    z = 0
    while len(frontier) > 0:
        node = heapq.heappop(frontier)
        print(node.state.tiles)
        explored.add(node)
        if (goal_test(node.state.tiles)):
            #print('In if statement')
            path = find_path(node)
            end_time = time.time()
            z = z + f
            return path, len(explored), z, (end_time - start_time)
        for child in get_children(node):
            # calcuate total cost
            f_0 = manhattan(child)
            z = z + f_0
            print(z)
            if child not in explored and child not in frontier:
                #print('Pushing frontier and child')
                heapq.heappush(frontier, child)
        print('end of for loop')
    return None
"""
Return the heuristic value for a given state using manhattan function
"""
def manhattan(node):
    # Manhattan Heuristic Function
    # x1, y1 = node.state.get_location()
    # x2, y2 = self.goal
    zero_location = node.state.tiles.index('0')
    x1 = math.floor(zero_location / 4)
    y1 = zero_location % 4
    x2 = 3
    y2 = 3
    return abs(x2 - x1) + abs(y2 - y1)
"""
astar_search() is a best-first graph searching algortithim using equation f(n) = g(n) + h(n)
h is specified as...
"""
def astar_search_manhattan(root_node):
"""A* search is best-first graph search with f(n) = g(n)+h(n).
You need to specify the h function when you call astar_search, or
else in your Problem subclass."""
return best_first_graph_search_manhattan(root_node)
Here is the rest of my program. Assume that everything is working correctly in the following:
import random
import math
import time
import psutil
import heapq
#import utils.py
import os
import sys
from collections import deque
# This class defines the state of the problem in terms of board configuration
class Board:
    def __init__(self,tiles):
        self.size = int(math.sqrt(len(tiles))) # defining length/width of the board
        self.tiles = tiles
    #This function returns the resulting state from taking particular action from current state
    def execute_action(self,action):
        new_tiles = self.tiles[:]
        empty_index = new_tiles.index('0')
        if action=='l':
            if empty_index%self.size>0:
                new_tiles[empty_index-1],new_tiles[empty_index] = new_tiles[empty_index],new_tiles[empty_index-1]
        if action=='r':
            if empty_index%self.size<(self.size-1):
                new_tiles[empty_index+1],new_tiles[empty_index] = new_tiles[empty_index],new_tiles[empty_index+1]
        if action=='u':
            if empty_index-self.size>=0:
                new_tiles[empty_index-self.size],new_tiles[empty_index] = new_tiles[empty_index],new_tiles[empty_index-self.size]
        if action=='d':
            if empty_index+self.size < self.size*self.size:
                new_tiles[empty_index+self.size],new_tiles[empty_index] = new_tiles[empty_index],new_tiles[empty_index+self.size]
        return Board(new_tiles)
# This class defines the node on the search tree, consisting of state, parent and previous action
class Node:
    def __init__(self,state,parent,action):
        self.state = state
        self.parent = parent
        self.action = action
        #self.initial = initial
    #Returns string representation of the state
    def __repr__(self):
        return str(self.state.tiles)
    #Comparing current node with other node. They are equal if states are equal
    def __eq__(self,other):
        return self.state.tiles == other.state.tiles
    def __hash__(self):
        return hash(self.state)
    def __lt__(self, other):
        return manhattan(self) < manhattan(other)
# Utility function to randomly generate 15-puzzle
def generate_puzzle(size):
    numbers = list(range(size*size))
    random.shuffle(numbers)
    return Node(Board(numbers),None,None)
# This function returns the list of children obtained after simulating the actions on current node
def get_children(parent_node):
    children = []
    actions = ['l','r','u','d'] # left,right, up , down ; actions define direction of movement of empty tile
    for action in actions:
        child_state = parent_node.state.execute_action(action)
        child_node = Node(child_state,parent_node,action)
        children.append(child_node)
    return children
# This function backtracks from current node to reach initial configuration. The list of actions would constitute a solution path
def find_path(node):
    path = []
    while(node.parent is not None):
        path.append(node.action)
        node = node.parent
    path.reverse()
    return path
# Main function accepting input from console , running iterative_deepening_search and showing output
def main():
    global nodes_expanded
    global path
    global start_time
    global cur_time
    global end_time
    nodes_expanded = 0
    process = psutil.Process(os.getpid())
    initial_memory = process.memory_info().rss / 1024.0
    initial = str(input("initial configuration: "))
    initial_list = initial.split(" ")
    root = Node(Board(initial_list),None,None)
    print(astar_search_manhattan(root))
    final_memory = process.memory_info().rss / 1024.0
    print('Directions: ', path)
    print('Total Time: ', (end_time-start_time), ' seconds')
    print('Total Memory: ',str(final_memory-initial_memory)+" KB")
    print('Total Nodes Expanded: ', nodes_expanded)
# Utility function checking if current state is goal state or not
def goal_test(cur_tiles):
    return cur_tiles == ['1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','0']
if __name__=="__main__":main()
I've managed to narrow it down to the for loop in my best_first_graph_search_manhattan function, and it appears that the infinite loop is caused by the if statement that checks whether child is not in explored and child is not in frontier. I'm unsure whether it's the way I'm generating children or the way I'm pushing the frontier and child into my priority queue. I have imported heapq into my program, and from my research that module lets you use a priority queue in your program. Please don't mind the other variables that are not used in my A* search.
Here is a test case: 1 0 3 4 5 2 6 8 9 10 7 11 13 14 15 12 | DRDRD
Thank you all very much for your help!
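As an aside on the membership test described above (a sketch of a common pattern, not necessarily the fix for this particular program): child not in frontier scans the heap list element by element, and child not in explored depends on Node.__hash__; one frequently used alternative is to key visited/queued states by an immutable tuple of tiles kept in a set, so both checks become O(1). The helper names below (seen_states, push_if_new) are mine:

import heapq

def push_if_new(frontier, seen_states, child):
    """Push child onto the heap only if its board state has not been seen before."""
    key = tuple(child.state.tiles)        # immutable, hashable key for the state
    if key not in seen_states:
        seen_states.add(key)
        heapq.heappush(frontier, child)
        return True
    return False

# usage inside the search loop (sketch):
#   seen_states = {tuple(root_node.state.tiles)}
#   for child in get_children(node):
#       push_if_new(frontier, seen_states, child)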
I am practicing on the problem Binary Tree Level Order Traversal on LeetCode:
Given a binary tree, return the level order traversal of its nodes' values. (ie, from left to right, level by level).
For example:
Given binary tree [3,9,20,null,null,15,7],
    3
   / \
  9  20
    /  \
   15   7
return its level order traversal as:
[
  [3],
  [9,20],
  [15,7]
]
The BFS solution
class TreeNode(object):
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class Solution:
    def levelOrder(self, root: TreeNode) -> List[List[int]]:
        from collections import deque
        if root == None: return []
        queue = deque([root])
        res = []
        step = -1
        while queue:
            step += 1
            size = len(queue)
            if len(res) < step + 1:
                res.append([])
            for _ in range(size):
                cur = queue.popleft()
                res[step].append(cur.val)
                #stretch
                if cur.left:
                    queue.append(cur.left)
                if cur.right:
                    queue.append(cur.right)
        return res
The performance
Runtime: 44 ms, faster than 75.27% of Python3 online submissions for Binary Tree Level Order Traversal.
Memory Usage: 13.4 MB, less than 5.64% of Python3 online submissions for Binary Tree Level Order Traversal.
Please notice the cumbersome checking:
if len(res) < step + 1:
    res.append([])
After removing the condition check, as in:
class Solution:
    def levelOrder(self, root: TreeNode) -> List[List[int]]:
        from collections import deque
        if root == None: return []
        queue = deque([root])
        res = []
        step = -1
        while queue:
            step += 1
            size = len(queue)
            # if len(res) < step + 1: #remove the condition checking
            res.append([])
            for _ in range(size):
                cur = queue.popleft()
                res[step].append(cur.val)
                #stretch
                if cur.left:
                    queue.append(cur.left)
                if cur.right:
                    queue.append(cur.right)
        return res
The performance changed to
Runtime: 56 ms, faster than 19.04% of Python3 online submissions for Binary Tree Level Order Traversal.
Memory Usage: 13.5 MB, less than 4.52% of Python3 online submissions for Binary Tree Level Order Traversal.
Logically the two versions should perform the same, so why is such a large difference reported?
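One way to tell whether this gap is real or just submission-to-submission noise is to time both versions locally; a minimal sketch, assuming the TreeNode class above is available, the two variants are saved as SolutionWithCheck and SolutionWithoutCheck (my names), and List is imported from typing before the classes are defined so their annotations resolve:

import timeit
from typing import List   # needed locally for the List[List[int]] annotation

def build_tree(depth):
    """Build a perfect binary tree of the given depth out of TreeNode objects."""
    root = TreeNode(0)
    level = [root]
    for d in range(1, depth):
        nxt = []
        for node in level:
            node.left, node.right = TreeNode(d), TreeNode(d)
            nxt.extend([node.left, node.right])
        level = nxt
    return root

root = build_tree(15)   # roughly 32k nodes
for cls in (SolutionWithCheck, SolutionWithoutCheck):
    seconds = timeit.timeit(lambda: cls().levelOrder(root), number=20)
    print(cls.__name__, round(seconds, 3), "seconds")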
I am a beginner and want to integrate DFS code with Fibonacci-series-generating code. The Fibonacci code also runs as a DFS, with calls made from left to right.
The integration is still incomplete.
I have two issues:
(i) I am unable to update 'path' correctly in fib(), as the output shows.
(ii) Stated in the fib() function below, as a comment.
P.S.
I have one more issue concerning the program's behaviour:
(iii) On modifying line #16 to stack = root = stack[1:], I get the same output as before.
import sys
count = 0
root_counter = 0
#path=1
inf = -1
node_counter = 0
root =0
def get_depth_first_nodes(root):
    nodes = []
    stack = [root]
    while stack:
        cur_node = stack[0]
        stack = stack[1:]
        nodes.append(cur_node)
        for child in cur_node.get_rev_children():
            stack.insert(0, child)
    return nodes

def node_counter_inc():
    global node_counter
    node_counter = node_counter + 1

class Node(object):
    def __init__(self, id_,path):
        self.id = node_counter_inc()
        self.children = []
        self.val = inf #On instantiation, val = -1, filled bottom up;
                       #except for leaf nodes
        self.path = path
    def __repr__(self):
        return "Node: [%s]" % self.id
    def add_child(self, node):
        self.children.append(node)
    def get_children(self):
        return self.children
    def get_rev_children(self):
        children = self.children[:]
        children.reverse()
        return children

def fib(n, level, val, path):
    global count, root_counter, root
    print('count :', count, 'n:', n, 'dfs-path:', path)
    count += 1
    if n == 0 or n == 1:
        path = path+1
        root.add_child(Node(n, path))
        return n
    if root_counter == 0:
        root = Node(n, path)
        root_counter = 1
    else:
        #cur_node.add_child(Node(n, path)) -- discarded for next(new) line
        root.add_child(Node(n, path))
    tmp = fib(n-1, level + 1,inf, path) + fib(n-2, level + 1,inf,path+1)
    #Issue 2: Need update node's val field with tmp.
    #So, need suitable functions in Node() class for that.
    print('tmp:', tmp, 'level', level)
    return tmp

def test_depth_first_nodes():
    fib(n,0,-1,1)
    node_list = get_depth_first_nodes(root)
    for node in node_list:
        print(str(node))

if __name__ == "__main__":
    n = int(input("Enter value of 'n': "))
    test_depth_first_nodes()
I want to add that I took the idea for the code from here.
Answer to the first question:
path in this particular question is an int. It is a numbering of the paths from the root to a leaf, in a greedy DFS manner.
This can be achieved by letting path be a global variable rather than an argument of the fib function. We increment the path count whenever we reach a leaf.
I have also modified the fib function to return a node rather than a number.
import sys
count = 0
root_counter = 0
path=1
inf = -1
node_counter = 0
root = None
def node_counter_inc():
    global node_counter
    node_counter = node_counter + 1
    print("node_counter:", node_counter)
    return node_counter

class Node(object):
    def __init__(self, id__,path):
        print("calling node_counter_inc() for node:", n )
        try:
            self.id = int(node_counter_inc())
        except TypeError:
            self.id = 0 # or whatever you want to do
        #self.id = int(node_counter_inc())
        self.val = inf #On instantiation, val = -1, filled bottom up;
                       #except for leaf nodes
        self.path = path
        self.left = None
        self.right = None
    def __repr__(self):
        return "Node" + str(self.id) + ":"+ str(self.val)

def fib(n, level, val):
    # make fib returns a node rather than a value
    global count, root_counter, root, path
    print('count :', count, 'n:', n, 'dfs-path:', path)
    count += 1
    if n == 0 or n == 1:
        path = path+1
        new_Node = Node(n, path)
        new_Node.val = n
        return new_Node
        #root.add_child(new_Node)
        # return new_node
    #if root_counter == 0:
    # root = Node(n, path)
    # root_counter = 1
    #else:
    #cur_node.add_child(Node(n, path)) -- discarded for next(new) line
    # root.add_child(Node(n, path))
    #tmp = fib(n-1, level + 1,inf) + fib(n-2, level + 1,inf)
    #Issue 2: Need update node's val field with tmp.
    #So, need suitable functions in Node() class for that.
    #print('tmp:', tmp, 'level', level)
    #return tmp
    ans = Node(n, path)
    ans.left = fib(n-1, level + 1, inf)
    ans.right = fib(n-2, level + 1, inf)
    ans.val = ans.left.val + ans.right.val
    print("the node is", ans.id, "with left child", ans.left.id, "and right child", ans.right.id)
    print("the corresponding values are", ans.val, ans.left.val, ans.right.val)
    return ans

def test_depth_first_nodes():
    ans = fib(n,0,-1)
    print("The answer is", ans.val)
    #node_list = get_depth_first_nodes(root)
    #for node in node_list:
    #    print(str(node))

if __name__ == "__main__":
    n = int(input("Enter value of 'n': "))
    test_depth_first_nodes()
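Note that get_depth_first_nodes from the question walks .children, while the tree returned here uses .left/.right; if the per-node listing is still wanted, a small hypothetical helper (my naming) could walk the returned tree instead:

def get_depth_first_nodes_lr(root):
    """Iterative pre-order walk over the .left/.right tree returned by fib()."""
    nodes, stack = [], [root]
    while stack:
        cur = stack.pop()
        nodes.append(cur)
        # push right first so the left child is visited first (left-to-right DFS)
        for child in (cur.right, cur.left):
            if child is not None:
                stack.append(child)
    return nodes

# usage sketch (after ans = fib(n, 0, -1)):
#   for node in get_depth_first_nodes_lr(ans):
#       print(node)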