Can anyone explain this implementation of depth first search? - python

So I am learning about search algorithms at the moment, and would appreciate it if someone could provide an explanation of how this implementation of depth-first search works. I do understand how depth-first search works as an algorithm, but I am struggling to grasp how it has been implemented here.
Thanks for your patience and understanding. Below is the code:
map = {(0, 0): [(1, 0), (0, 1)],
       (0, 1): [(1, 1), (0, 2)],
       (0, 2): [(1, 2), (0, 3)],
       (0, 3): [(1, 3), (0, 4)],
       (0, 4): [(1, 4), (0, 5)],
       (0, 5): [(1, 5)],
       (1, 0): [(2, 0), (1, 1)],
       (1, 1): [(2, 1), (1, 2)],
       (1, 2): [(2, 2), (1, 3)],
       (1, 3): [(2, 3), (1, 4)],
       (1, 4): [(2, 4), (1, 5)],
       (1, 5): [(2, 5)],
       (2, 0): [(3, 0), (2, 1)],
       (2, 1): [(3, 1), (2, 2)],
       (2, 2): [(3, 2), (2, 3)],
       (2, 3): [(3, 3), (2, 4)],
       (2, 4): [(3, 4), (2, 5)],
       (2, 5): [(3, 5)],
       (3, 0): [(4, 0), (3, 1)],
       (3, 1): [(4, 1), (3, 2)],
       (3, 2): [(4, 2), (3, 3)],
       (3, 3): [(4, 3), (3, 4)],
       (3, 4): [(4, 4), (3, 5)],
       (3, 5): [(4, 5)],
       (4, 0): [(5, 0), (4, 1)],
       (4, 1): [(5, 1), (4, 2)],
       (4, 2): [(5, 2), (4, 3)],
       (4, 3): [(5, 3), (4, 4)],
       (4, 4): [(5, 4), (4, 5)],
       (4, 5): [(5, 5)],
       (5, 0): [(5, 1)],
       (5, 1): [(5, 2)],
       (5, 2): [(5, 3)],
       (5, 3): [(5, 4)],
       (5, 4): [(5, 5)],
       (5, 5): []}

visited = []
path = []
routes = []

def goal_test(node):
    if node == (5, 5):
        return True
    else:
        return False

found = False

def dfs(visited, graph, node):
    global routes
    visited = visited + [node]
    if goal_test(node):
        routes = routes + [visited]
    else:
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

dfs(visited, map, (0, 0))
print(len(routes))
for route in routes:
    print(route)

This implementation employs several bad practices:
map is the name of a built-in Python function, so it is a bad idea to shadow it with a variable of that name.
visited should not need to be initialised in the global scope: the caller has no interest in it, as it only plays a role in the DFS algorithm itself.
routes should not have to be initialised to an empty list either, and it is bad that dfs mutates this global variable. Instead dfs should return that information to the caller. This makes one dfs call self-contained, as it returns the possible routes from the current node to the target. It is up to the caller to extend the routes in this returned collection with an additional node.
The body of goal_test should be written as return node == (5, 5). The if ... else is just translating a boolean value to the same boolean value.
The function goal_test seems overkill when you can just pass an argument to the dfs function that represents the target node. This also makes it more generic, as you don't need to hard-code the target location inside a function.
path and found are initialised but never used.
dfs would run into a stack overflow if the graph had cycles. It does not happen with the given graph, because that graph is acyclic, but it would be better if the function could also be relied on when given cyclic graphs.
dfs will visit the same cell multiple times, as a cell can be reached via different paths (for instance (2, 2)), and from there it will repeat the same DFS search it already performed before. This could be made slightly more efficient by storing the result obtained from a previous visit to that cell, i.e. we could use memoization. The gain is small here, as most time is spent on creating and copying paths. The gain from memoization would be significant if the function only counted the number of paths instead of building them, as sketched below.
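For illustration, such a count-only variant might look something like this. It is a minimal sketch, not part of the original answer: count_paths and the programmatically rebuilt grid are names introduced here, and the sketch assumes an acyclic graph like the grid in the question.

def count_paths(graph, source, target):
    memo = {}  # node -> number of routes from that node to the target

    def dfs(node):
        if node == target:
            return 1
        if node not in memo:
            memo[node] = sum(dfs(neighbour) for neighbour in graph[node])
        return memo[node]

    return dfs(source)

# Rebuild the 6x6 grid from the question programmatically:
grid = {(r, c): [n for n in ((r + 1, c), (r, c + 1)) if n[0] < 6 and n[1] < 6]
        for r in range(6) for c in range(6)}

print(count_paths(grid, (0, 0), (5, 5)))  # 252 routes, counted without building them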
Here is an implementation that deals with the above-mentioned points. It uses a wrapper function to hide the use of memoization from the caller, and to reduce the number of arguments that need to be passed to dfs:
def search(graph, source, target):
    # Use memoization to avoid repetitive DFS from the same node.
    # Also used to mark a node as visited, to avoid running in cycles.
    memo = dict()  # has routes that were already collected

    def dfs(node):
        if node not in memo:  # not been here before
            if node == target:
                memo[node] = [[target]]
            else:
                # Mark with None that this node is on the current path,
                # ...avoiding infinite recursion on a cycle
                memo[node] = None  # temporary value while not yet back from recursion
                memo[node] = [
                    [node] + route
                    for neighbour in graph[node]
                    for route in dfs(neighbour) or []  # "or []" skips the None cycle marker
                    if route
                ]
        return memo[node]

    return dfs(source)
graph = {(0, 0): [(1, 0), (0, 1)],
         # ...etc ...
        }

routes = search(graph, (0, 0), (5, 5))
print(len(routes))
for route in routes:
    print(route)

Related

How to handle return from recursion

I'm trying to make a polyomino generator of level N. I successfully made a function that connects a tile with a root in every possible way and returns all combinations. Now I need to extend this to level N. I've done my best, but still can't handle recursion the right way.
Here's my function:
def connect_n(tiles, root, n=1):
    if not isinstance(tiles[0], list): tiles = [tiles]
    result = []
    if n == 1:
        for tile in tiles:
            result += connect(tile, root)
        return result
    else:
        return connect_n(tiles, root, n-1)
This function successfully creates N nested calls and executes the base case at n == 1. But then, with the obtained result, it just returns back up the call chain and exits with that result, without performing any further iterations. I'm sure I'm missing something. I tried to move conditions and loops around without success.
I have following input:
root = (0,0)
N = 3 #for example, can be any number > 0
Function connect(root,root) returns:
[[(0, 0), (1, 0)]]
Then function connect([[(0, 0), (1, 0)]], root) returns
[[(0, 0), (1, 0), (2, 0), (3, 0)],
[(0, 0), (0, 1), (1, 0), (2, 0)],
[(0, 0), (0, 1), (1, 0), (1, 1)],
[(0, 0), (1, 0), (1, 1), (2, 1)]]
And so on.
Function connect_n output should be
[[(0, 0), (1, 0)]] for N=1
[[(0, 0), (1, 0), (2, 0), (3, 0)],
[(0, 0), (0, 1), (1, 0), (2, 0)],
[(0, 0), (0, 1), (1, 0), (1, 1)],
[(0, 0), (1, 0), (1, 1), (2, 1)]] for N=2
And so on.
I'm not sure I understand the algorithm, but I think this is what you want:
def connect_n(tiles, root, n=1):
    if n == 1:
        return connect(tiles, root)
    else:
        return connect_n(connect_n(tiles, root, n-1), root)
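To see why this works, here is a small runnable sketch (not the asker's real code): connect below is a hypothetical stand-in that simply appends the root to every combination, so each recursion level of connect_n visibly applies connect exactly once more.

# Hypothetical stand-in for the asker's connect(), only to show the call structure.
def connect(tiles, root):
    if not isinstance(tiles[0], list):
        tiles = [tiles]
    return [combo + [root] for combo in tiles]

def connect_n(tiles, root, n=1):
    if n == 1:
        return connect(tiles, root)
    else:
        return connect_n(connect_n(tiles, root, n - 1), root)

root = (0, 0)
print(connect_n([root], root, 1))  # [[(0, 0), (0, 0)]] -> connect applied once
print(connect_n([root], root, 3))  # [[(0, 0), (0, 0), (0, 0), (0, 0)]] -> connect applied three times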

Loop through list of tuples and unpack elements of each tuple

I have this list of two-value tuples
stake_odds = [(0, 1), (0, 2), (0, 5), (0, 10), (2, 1), (2, 2), (2, 5), (2, 10), (5, 1), (5, 2), (5, 5), (5, 10), (10, 1), (10, 2), (10, 5), (10, 10)]
I have the following function, where I want to put the tuple into an object method that calculates the product (or minus the product, depending on the instance) of the two numbers in the tuple. If the product is positive, I want to append the tuple used to another list, pos_cases.
def function():
    pos_cases = []
    x, y = stake_odds[9]
    b1 = bet1.payout(x, y)
    if b1 > 0:
        return b1, "b is greater than zero!"
        pos_cases.append(stake_odds[9])
    print(pos_cases)
print(function())
As you can see below I have to unpack the tuple into two variables before computing. I can do it by specifying the element of the list (stake_odds[9]), however I am looking for a way to generalize and loop through the list (stake_odds[i]) rather than going one by one.
The list in this example would be shortened to the following:
pos_cases =[(2, 1), (2, 2), (2, 5), (2, 10), (5, 1), (5, 2), (5, 5), (5, 10), (10, 1), (10, 2), (10, 5), (10, 10)]
How could I do this? The only thing I can think of is some nested for loop like:
for i in stake_odds:
    for x, y in i:
        return (x, y)
But this results in the error TypeError: cannot unpack non-iterable int object.
Doesn't this work?:
def function():
    pos_cases = []
    for x, y in stake_odds:
        b1 = bet1.payout(x, y)
        if b1 > 0:
            print(b1, "b is greater than zero!")
            pos_cases.append((x, y))
    return pos_cases
print(function())
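If bet1.payout has no side effects, the same filtering can also be written as a list comprehension. A minimal self-contained sketch, where Bet.payout is a hypothetical stand-in that just multiplies the two numbers (the question says the real method returns the product or its negative):

class Bet:
    # Stand-in for the asker's bet1 object: payout is simply the product here.
    def payout(self, x, y):
        return x * y

bet1 = Bet()
stake_odds = [(0, 1), (0, 2), (0, 5), (0, 10), (2, 1), (2, 2), (2, 5), (2, 10),
              (5, 1), (5, 2), (5, 5), (5, 10), (10, 1), (10, 2), (10, 5), (10, 10)]

pos_cases = [(x, y) for x, y in stake_odds if bet1.payout(x, y) > 0]
print(pos_cases)
# [(2, 1), (2, 2), (2, 5), (2, 10), (5, 1), (5, 2), (5, 5), (5, 10), (10, 1), (10, 2), (10, 5), (10, 10)]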

Generate Complete DFS Paths using networkx

I am trying to generate the complete path list instead of the optimized one. This is better explained using the example below.
import networkx as nx
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3)])
G.add_edges_from([(0, 1), (1, 2), (2, 4)])
G.add_edges_from([(0, 5), (5, 6)])
The above code creates a graph with the edges 0=>1=>2=>3, 0=>1=>2=>4 and 0=>5=>6.
All I want is to extract all paths from 0.
I tried:
>>> list(nx.dfs_edges(G, 0))
[(0, 1), (1, 2), (2, 3), (2, 4), (0, 5), (5, 6)]
All I want is:
[(0, 1, 2, 3), (0, 1, 2, 4), (0, 5, 6)]
Is there any pre-existing method from networkx which can be used? If not, any way to write an optimal method that can do the job?
Note: My problem is limited to the given example. No more corner cases possible.
Note 2: For simplicity, the data here is generated; in my case the edge list comes from a data set. The assumption is: given a graph and a node (say 0), can we generate all paths?
Give this a try:
import networkx as nx
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3)])
G.add_edges_from([(0, 1), (1, 2), (2, 4)])
G.add_edges_from([(0, 5), (5, 6)])
pathes = []
path = [0]
for edge in nx.dfs_edges(G, 0):
    if edge[0] == path[-1]:
        # node of path
        path.append(edge[1])
    else:
        # new path
        pathes.append(path)
        search_index = 2
        while search_index <= len(path):
            if edge[0] == path[-search_index]:
                path = path[:-search_index + 1] + [edge[1]]
                break
            search_index += 1
        else:
            raise Exception("Wrong path structure?", path, edge)
# append last path
pathes.append(path)
print(pathes)
# [[0, 1, 2, 3], [0, 1, 2, 4], [0, 5, 6]]
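If an explicit set of end points is acceptable, another option (not used in the answer above) is networkx's all_simple_paths, collecting the paths from 0 to every leaf node. A sketch under that assumption:

import networkx as nx

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3)])
G.add_edges_from([(0, 1), (1, 2), (2, 4)])
G.add_edges_from([(0, 5), (5, 6)])

# Leaves are the non-source nodes with exactly one neighbour.
leaves = [n for n in G.nodes if n != 0 and G.degree(n) == 1]
paths = [tuple(p) for leaf in leaves for p in nx.all_simple_paths(G, 0, leaf)]
print(paths)  # [(0, 1, 2, 3), (0, 1, 2, 4), (0, 5, 6)]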

Cycle through graph back to starting node

I'm having an issue working out how to cycle back to the start node in my graph. Currently, from the graph I create, I can traverse from a start node and follow the edges until there are no unvisited connected nodes. However, I can't work out how to make it cycle through and finish on the start node, if that is possible.
This is an example of the graph with its connections.
Node & connection(s): [(0, 4), (1, 5), (1, 8), (3, 1), (4, 0), (4, 3), (5, 0),
(5, 3), (5, 7), (6, 0), (6, 4), (7, 0), (8, 5), (8, 6), (8, 7)]
This is my code to cycle through the graph and follow its edges.
def pathSearch(graph, start, path=[]):
    path = path + [start]
    for node in graph[start]:
        if not node in path:
            path = pathSearch(graph, node, path)
    return path

print('Path ', pathSearch(g, 0))
This is what I get as an output starting from node 0:
pathSearch [0, 4, 3, 1, 5, 7, 8, 6]
This is right, but why isn't it doing a full cycle back to the start node?
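For what it's worth, the check "if not node in path" is exactly what prevents the start node from ever being appended a second time, since it is already in path. One possible way to close the cycle, assuming the connection list is turned into a directed adjacency dictionary g, is to check after the search whether the last node has an edge back to the start. A minimal sketch reusing the pathSearch function above unchanged:

from collections import defaultdict

connections = [(0, 4), (1, 5), (1, 8), (3, 1), (4, 0), (4, 3), (5, 0),
               (5, 3), (5, 7), (6, 0), (6, 4), (7, 0), (8, 5), (8, 6), (8, 7)]

g = defaultdict(list)
for a, b in connections:
    g[a].append(b)

def pathSearch(graph, start, path=[]):
    path = path + [start]
    for node in graph[start]:
        if not node in path:
            path = pathSearch(graph, node, path)
    return path

path = pathSearch(g, 0)        # [0, 4, 3, 1, 5, 7, 8, 6]
if path[0] in g[path[-1]]:     # the last node connects back to the start
    path = path + [path[0]]
print('Path ', path)           # [0, 4, 3, 1, 5, 7, 8, 6, 0]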

Shorten a list of tuples by cutting out loops?

I have a function that generates a list of tuples like:
[(0, 0), (1, 1), (1, 2), (1,3), (2, 4), (3, 5), (4, 5)]
which are used to represent a path of tiles (row, column) in a game I'm making.
The function that I use to generate these paths isn't perfect, since it often produces "loops", as shown below:
[(2, 0), (2, 1), (1, 2), (0, 3), (0, 4), (1, 5), (2, 5), (3, 4), (3, 3),
(3, 2), (4, 1)]
The path above should instead look like:
[(2, 0), (2, 1), (3, 2), (4, 1)]
These paths can contain any number of loops, which can be of any size and shape.
So my question is: how do I write a function in Python that cuts out the loops and returns a new, shorter list that does not have them?
My attempt below:
def Cut_Out_Loops(Path):
    NewList = list(Path)
    Cutting = True
    a = 0
    for Cords in Path:
        a += 1
        try:
            for i in range(a + 2, len(Path)):
                if Path[i][0] == Cords[0] and abs(Path[i][1] - Cords[1]) == 1:
                    NewList = NewList[0:a] + NewList[i:]
                    Path = list(NewList)
                elif Path[i][1] == Cords[1] and abs(Path[i][0] - Cords[0]) == 1:
                    NewList = NewList[0:a] + NewList[i:]
                    Path = list(NewList)
                elif abs(Path[i][0] - Cords[0]) == 1 and abs(Path[i][1] - Cords[1]) == 1:
                    NewList = NewList[0:a] + NewList[i:]
                    Path = list(NewList)
                elif abs(Path[i][1] - Cords[1]) == 1 and abs(Path[i][0] - Cords[0]) == 1:
                    NewList = NewList[0:a] + NewList[i:]
                    Path = list(NewList)
                    Cutting = False
        except IndexError:
            Cutting = True
Although your definition of a "loop" isn't too clear, try this:
def clean(path):
    path1 = []
    for (x1, y1) in path:
        for (i, (x2, y2)) in enumerate(path1[:-1]):
            if abs(x1-x2) <= 1 and abs(y1-y2) <= 1:
                path1 = path1[:i+1]
                break
        path1.append((x1, y1))
    return path1
It definitely works for your example:
>>> path = [(2, 0), (2, 1), (1, 2), (0, 3), (0, 4), (1, 5), (2, 5), (3, 4), (3, 3), (3, 2), (4, 1)]
>>> clean(path)
[(2, 0), (2, 1), (3, 2), (4, 1)]
That said, it is just the most straightforward of brute force solutions. The complexity is quadratic.
How long are your paths? If they're all under 1000 elements, even a naive brute-force algorithm would work:
path = [
(2, 0),
(2, 1),
(1, 2),
(0, 3),
(0, 4),
(1, 5),
(2, 5),
(3, 4),
(3, 3),
(3, 2),
(4, 1)
]
def adjacent(elem, next_elem):
    return (abs(elem[0] - next_elem[0]) <= 1 and
            abs(elem[1] - next_elem[1]) <= 1)

new_path = []
i = 0
while True:
    elem = path[i]
    new_path.append(elem)
    if i + 1 == len(path):
        break
    j = len(path) - 1
    while True:
        future_elem = path[j]
        if adjacent(elem, future_elem):
            break
        j -= 1
    i = j

print(new_path)
