I wrote code for a DFS after reading about what it is, but without actually seeing any code for it. I did this to challenge myself (I have always believed that to learn something new you must first challenge yourself). The thing is, after I wrote my code, I compared my implementation to the one in the book I read about it in (Introduction to the Design and Analysis of Algorithms by A. Levitin), and it is completely different. So now I am wondering: it works as intended, but is it still a DFS?
I made the implementation to solve a maze. I will give a rundown of my code and also post the code here (some people hate reading other people's code, while others don't mind it).
Algorithm (What I understood and did):
Convert maze into a graph/map
Set the start position as the current node and run a loop in which...
I choose one of the adjacent nodes as the next current node and keep doing this until I stumble upon a dead end. I am also adding each node I pass through to a list that acts as my stack.
Once I am at a dead end, I keep popping items from the stack, and each time I pop, I check whether the popped node has adjacent nodes that have not been visited.
Once I have found an unvisited adjacent node, I continue the entire process from step 3.
I do this until the current node is the end position.
Then I just retrace my way back through the stack.
Here is my code:
# Depth First Search implementation for maze...
# from random import choice
from copy import deepcopy
import maze_builderV2 as mb
order = 10
space = ['X']+['_' for x in range(order)]+['X']
maze = [deepcopy(space) for x in range(order)]
maze.append(['X' for x in range(order+2)])
maze.insert(0, ['X' for x in range(order+2)])
finalpos = (order, order)
pos = (1, 1)
maze[pos[0]][pos[1]] = 'S' # Initializing a start position
maze[finalpos[0]][finalpos[1]] = 'O' # Initializing an end position
mb.mazebuilder(maze=maze)
def spit():
    for x in maze:
        print(x)
spit()
print()
mazemap = {}
def scan():  # Converts the raw maze into a suitable data structure (an adjacency map).
    for x in range(1, order+1):
        for y in range(1, order+1):
            mazemap[(x, y)] = []
            t = [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
            for z in t:
                if maze[z[0]][z[1]] == 'X':
                    pass
                else:
                    mazemap[(x, y)].append(z)
scan()
path = [pos] # stack
impossible = False
while path[-1] != finalpos:
    curpos = path[-1]
    i = 0
    while i < len(mazemap[curpos]):
        if mazemap[curpos][i] in path:
            del mazemap[curpos][i]
        else:
            i += 1
    nextpos = None
    if mazemap[curpos] == []:
        while nextpos == None:
            try:
                wrongpos = path.pop(-1)
                if mazemap[wrongpos] == []:
                    pass
                else:
                    path.append(wrongpos)
                    # nextpos = choice(mazemap[wrongpos])
                    nextpos = mazemap[wrongpos][-1]
                    mazemap[wrongpos].remove(nextpos)
            except IndexError:
                impossible = True
                break
    else:
        # nextpos = choice(mazemap[curpos])
        nextpos = mazemap[curpos][-1]
    if impossible:
        break
    path.append(nextpos)
if not impossible:
    for x in path:
        if x == pos or x == finalpos:
            pass
        else:
            maze[x[0]][x[1]] = 'W'
else:
    print("This maze not solvable, Blyat!")
print()
spit()
As always, I greatly appreciate your suggestions!
Your algorithm looks like DFS to me. DFS means exploring a path as deep as possible and backtracking to the previous node only when there is no way forward, and your algorithm works the same way by popping nodes from the stack. You just mimic the recursion stack with your own stack, so it looks quite different from the standard solution.
Essentially, any recursive algorithm can be simulated with a stack and a loop. But most of the time doing so makes the algorithm much less readable. To tackle a difficult problem, I think the usual way is to first come up with the recursive solution. After making sure the recursive solution is bug-free, start implementing the iterative version with an explicit stack if you care a lot about efficiency.
Other Suggestion:
if mazemap[curpos][i] in path: is an O(n) operation, since path is a plain list. Consider keeping a separate hash set of visited nodes and using that set for the repetition check instead, making it O(1).
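For illustration, here is a minimal sketch of a DFS that keeps such a visited set alongside the stack. It assumes mazemap is the adjacency dict built by scan(); it is not your exact code, just the shape of the idea:

def dfs_path(mazemap, start, goal):
    path = [start]        # the stack: the route walked so far
    visited = {start}     # hash set: O(1) average-time membership checks
    while path:
        cur = path[-1]
        if cur == goal:
            return path
        unvisited = [n for n in mazemap[cur] if n not in visited]
        if unvisited:
            nxt = unvisited[-1]
            visited.add(nxt)
            path.append(nxt)
        else:
            path.pop()    # dead end: backtrack
    return None           # no route from start to goal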
Related
I'm so confused by backtracking: when the recursive call returns, won't you destroy the solution you found by setting the grid cell back to zero? Even if you find the solution, wouldn't it be erased, because after calling the solve function you undo what you did by setting the value back to zero? I get the fact that you are backtracking, but on the final recursive call that contains all the correct values, aren't you just resetting everything to 0?
# grid = ..... # defined as a global value,
# a list of 9 lists, 9-long each
def solve():
    global grid
    for y in range(0, 9):
        for x in range(0, 9):
            if grid[y][x] == 0:
                for n in range(1, 10):
                    if possible(y, x, n):
                        grid[y][x] = n
                        solve()
                        grid[y][x] = 0
                return
    # edit: missed this final line:
    print(np.matrix(grid))
This is the code from the Computerphile video by Prof. Thorsten Altenkirch.
This is weird code, but should work with some adaptations:
def solve():
    global grid
    for y in range(0, 9):
        for x in range(0, 9):
            if grid[y][x] == 0:
                for n in range(1, 10):
                    if possible(y, x, n):
                        grid[y][x] = n
                        if solve():
                            return True   # return without reset
                        grid[y][x] = 0
                return False              # exhausted all options
    return True  # this is the deepest and last call with no more zeroes
Here is part of my code:
vlist = PossibleValueAtPosition(row, col)  # find the possible values at location (row, col)
for v in vlist:                            # try each possible value
    puzzle[row][col] = v
    if SolvePuzzle(n+1) == True:           # n == 81 means all cells are filled, so end the loop
        return True                        # if we get a solution, return True
    puzzle[row][col] = 0                   # if the above returned True, this line never runs
return False                               # return False after every failed attempt
The main program should look like this:
if SolvePuzzle(0) == True:
    print(puzzle)
else:
    print('No solution!')
It's not the final recursive call that contains all the correct values, but (each of) the deepest. Yes, this code enumerates all the solutions to the puzzle with the given board grid, not just the first solution.
For each (y,x) place, if it's empty, we try to place there each of the numbers from 1 through 9 in turn. If the placement was possible on the board as it is so far, we recurse with the changed grid board.
At the deepest level of recursion there were no empty (y,x) places on the board. Therefore we slide through to the print statement. (It could also be replaced by yield True for example, to turn it into a generator. On each next value we'd get from that generator, we'd have a complete solution -- in the changed grid. And when the generator would get exhausted, the grid would be again in its original state.)
When all the numbers from 1 through 9 have been tried, the current invocation has run its course. But the one above it in the recursion chain is waiting to continue its work trying to fill its (y,x) position. We must let it work on the same board it had before it invoked this invocation of solve(). And the only change on the board this invocation did was to change its (y,x) position's value from 0 to 1 through 9. So we must change it back to 0.
This means that the code could be restructured a little bit too, as
def solve():
    global grid
    for y in range(0, 9):
        for x in range(0, 9):              # for the first
            if grid[y][x] == 0:            #   empty slot found:
                for n in range(1, 10):     #     try 1..9
                    if possible(y, x, n):
                        grid[y][x] = n
                        yield from solve() #       and recurse
                                           #       (move it here)
                grid[y][x] = 0             #     restore
                return                     #   and return
    # no empty slots were found:
    # we're at the deepest level of recursion and
    # there are no more slots to fill:
    yield True                             # was: print (np.matrix(grid))
Each invocation works only on one (y,x) location, the first empty position that it found by searching anew from the start on the changed board. This search is done by the first two nested loops on y and on x. That is a bit redundant; we know all the positions before this (y,x) are already filled. The code would be better restructured to pass the starting position (y,x) as a parameter to solve.
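A rough sketch of what that could look like, keeping the generator style from the snippet above; solve_from is a made-up name and this is not the video's code:

def solve_from(y=0, x=0):
    global grid
    while y < 9 and grid[y][x] != 0:     # resume the scan at (y, x) instead of (0, 0)
        x += 1
        if x == 9:
            y, x = y + 1, 0
    if y == 9:
        yield True                       # no empty cell left: the grid holds a full solution
        return
    for n in range(1, 10):
        if possible(y, x, n):
            grid[y][x] = n
            yield from solve_from(y, x)  # continue from the cell we just filled
            grid[y][x] = 0               # restore before trying the next digit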
The paradigm of recursive backtracking is beautiful. Prolog is full of mystique, Haskell will dazzle you with cryptic monads talk (monads are actually just interpretable nestable data), but all it takes here are some nested loops, recursively created!
The paradigm is beautiful, but this code, not so much. A code's visual structure should reflect its true computational structure, but this code gives you an impression that the y-x- loops are the nested loops at work to create the backtracking structure, and they are not (they just implement a one-off linear search for the next empty space in the top-down left-to-right order).
That role is fulfilled by the n in range(1,10) loops. The y-x- loops should be stopped and exited explicitly when the empty space is found, to truly reflect in the code structure what is going on computationally, and to make it apparent that the n in range(1,10) loop is not nested inside the y-x- loops, but comes into play after they finish their job.
Another problem is that it just assumes the validity of the numbers given to us in the grid before the very first call to solve(). That validity is never actually checked; only the validity of the numbers which we are placing in the empty cells is checked.
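For completeness, a small sketch of such a check; it assumes possible(y, x, n) behaves as in the video, i.e. it reports whether n may legally be placed at (y, x) given the current grid:

def initial_grid_is_valid():
    for y in range(9):
        for x in range(9):
            n = grid[y][x]
            if n != 0:
                grid[y][x] = 0            # temporarily clear the cell...
                ok = possible(y, x, n)    # ...and ask whether its value could go there
                grid[y][x] = n
                if not ok:
                    return False
    return True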
(note: previous versions of this answer were based on an erroneous reading of the code. there were some valid parts in them too. you can find them on the revisions list here).
I am attempting to solve this problem. The problem requires me to iterate over a list of directions (NORTH, SOUTH, EAST, WEST) and discard any adjacent opposite directions (NORTH and SOUTH, EAST and WEST) to return a reduced array containing only non-redundant directions. When I iterate over a list that does not contain consecutive duplicates, such as ["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"], my code works fine, but it breaks when I iterate over a list with consecutive duplicates, like ['EAST', 'EAST', 'WEST']. Why is my code doing this, and how can I fix it to handle consecutive duplicates?
def dirReduc(arr):
    for direction in arr[:-1]:
        try:
            gotdata = arr[arr.index(direction)+1]
        except IndexError:
            gotdata = 'null'
        except ValueError:
            gotdata = 'null'
        if gotdata is 'null':
            if arr == dirReduc(arr):
                return arr
            else:
                return dirReduc(arr)
        elif cancel_pair(direction, arr[arr.index(direction)+1]):
            del arr[arr.index(direction):arr.index(direction)+2]
    return arr

def cancel_pair(dir1, dir2):
    if dir1 in ('NORTH') and dir2 in ('SOUTH') or dir1 in ('SOUTH') and dir2 in ('NORTH'):
        return True
    elif dir1 in ('WEST') and dir2 in ('EAST') or dir1 in ('EAST') and dir2 in ('WEST'):
        return True
    return False
A for loop is not a good match for this problem. If you delete a pair of items, you may need to backtrack to see if a new pair was created. A while loop is much more natural:
opposite_directions = {
    "NORTH": "SOUTH",
    "SOUTH": "NORTH",
    "EAST": "WEST",
    "WEST": "EAST"
}

def dirReduc(arr):
    i = 0
    while i < len(arr) - 1:
        if arr[i] == opposite_directions[arr[i+1]]:
            del arr[i:i+2]
            if i > 0:
                i -= 1  # back up a step if we just deleted some moves from the middle
        else:
            i += 1
    return arr
I also replaced your cancel_pair function with a simple dictionary lookup. Python's dictionaries are great, and they're often a better choice than a complicated if/else block.
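For example, with the two lists from the question, the version above gives:

>>> dirReduc(["NORTH", "SOUTH", "SOUTH", "EAST", "WEST", "NORTH", "WEST"])
['WEST']
>>> dirReduc(['EAST', 'EAST', 'WEST'])
['EAST']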
EDIT
I'm leaving this here because I think the explanation and the recursive version are helpful, but #Blckknght's answer is superior in that it does less work (by backing up only one element rather than restarting with each iteration).
Here's some working code. I realize it's a bit of a departure from your code. (You can ignore the cancel_pair implementation; I wrote my own before I saw yours. Yours looks fine.)
def cancel_pair(dir1, dir2):
    return tuple(sorted((dir1, dir2))) in (("NORTH", "SOUTH"), ("EAST", "WEST"))

def dirReduc(directions):
    did_something = True
    while did_something:
        did_something = False
        for i in range(len(directions) - 1):
            if cancel_pair(directions[i], directions[i + 1]):
                did_something = True
                directions = directions[:i] + directions[i + 2:]
                break
    return directions
I think there are a couple issues with your code:
You're modifying the list of directions while you're iterating over it. This is generally a bad idea, and specifically here, I think it may cause you to skip over elements (because the length of the list is changing).
You're only taking one pass through the array. In the problem you linked to on Codewars, you need to repeat the process after removing a canceling pair.
You're using index to figure out the current index, but it will always tell you the index of the first occurrence of the value you're passing in. You need to instead keep track of the index yourself. I chose to just iterate over the indices, since I need to use the index + 1 anyway.
UPDATE
A recursive approach might be easier to understand, depending on your programming background:
def dirReduc(directions):
    for i in range(len(directions) - 1):
        if cancel_pair(directions[i], directions[i + 1]):
            return dirReduc(directions[:i] + directions[i + 2:])
    return directions
opp = {}
opp['NORTH'] = 'SOUTH'
opp['EAST'] = 'WEST'
opp['SOUTH'] = 'NORTH'
opp['WEST'] = 'EAST'

def red(lst):
    i = 0
    j = 1
    while j < len(lst):
        if opp[lst[i]] == lst[j]:
            lst[i] = 0
            lst[j] = 0
            lst[:] = [x for x in lst if x != 0]  # drop the cancelled pair in place
            i = 0
            j = 1
        else:
            i += 1
            j += 1
    return lst
This is a cleaner way to do it: I store the opposite directions in a dict and then iterate over the list, filtering out the zeroed entries every time I 'pop' a pair of opposite elements.
Keep in mind that the input list will be modified by this code, so you might want to pass in a copy of the list if needed.
There are a lot of mistakes in this function. The most important two local mistakes are:
arr.index(direction) will always find the first instance of direction in arr, even after you've gone on to the next one
you have del arr[x:y] in the middle of a loop over arr, which will have unpredictable results.1
But also, I don't think your algorithm does what it's supposed to do. I would write this like this:
import re
_rcd_removals = re.compile(
    r"\b(?:SOUTH NORTH|NORTH SOUTH|WEST EAST|EAST WEST)\b")

def remove_cancelling_directions(dirs):
    """DIRS is a list of uppercase cardinal directions as strings
    (NORTH, SOUTH, EAST, WEST). Remove pairs of elements that
    cancel each others' motion, e.g. NORTH immediately followed
    by SOUTH. Modifies DIRS in place and returns nothing."""
    dirs[:] = _rcd_removals.sub("", " ".join(dirs)).split()
Regexes are great at sliding-window edits to sequences of words. Working on this as an array is going to be a lot finickier.
If you need to collapse pairs that are only visible after other pairs have been collapsed (e.g. WEST SOUTH NORTH EAST should become the empty list), then you should iterate to a fixed point:
import re
_rcd_removals = re.compile(
    r"\b(?:SOUTH NORTH|NORTH SOUTH|WEST EAST|EAST WEST)\b")
_rcd_fixwhite = re.compile(" {2,}")

def remove_cancelling_directions_repeatedly(dirs):
    ds = " ".join(dirs)
    prev_ds = None
    while prev_ds != ds:
        prev_ds = ds
        ds = _rcd_removals.sub("", ds)
        ds = _rcd_fixwhite.sub(" ", ds)
    dirs[:] = ds.split()
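For instance, the WEST SOUTH NORTH EAST case mentioned above collapses completely; the function modifies the list in place and returns nothing:

>>> dirs = ["WEST", "SOUTH", "NORTH", "EAST"]
>>> remove_cancelling_directions_repeatedly(dirs)
>>> dirs
[]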
1 Can anyone point me at official Python documentation stating that the effect of mutating a list while iterating over it is unpredictable? I know that it is, but I can't find an official rule. (The spec for list goes out of its way to point out that mutating a list while sorting it has undefined behavior.)
I'm trying to build a program that plays draughts/checkers. At the moment I'm trying to make the function that allows the computer to make and evaluate moves. My idea is to have the computer look at all its own possible moves and, for each of those moves, look at the possible opponent's moves, and then for each of those moves, again look at its own possible moves.
With each ply it will evaluate whether the move is good or bad for the player and assign points; at the end it picks the move with the highest points.
So far I have managed to get a version of this working, but involves a lot of nested for loops. The code is a mess and not very readable at the moment, but this is a simple model of the same concept. Instead of evaluating and producing more lists, it just multiplies by two for the new list.
counter = 0
for x in list:
    counter += 1
    list_2 = [x * 2 for x in list]
    print 'list_2', list_2, counter
    for x in list_2:
        counter += 1
        list_3 = [x * 2 for x in list_2]
        print 'list_3', list_3, counter
        for x in list_3:
            counter += 1
            list_4 = [x * 2 for x in list_3]
            print 'list_4', list_4, counter
If I run this code, I get what I want, except that I can't easily control the depth of the search without copying in more for loops. I thought recursion might be a way of doing this, but I can’t figure out how to stop the recursion after x levels of search depth.
Is there a better way of getting the same output from the code above, while getting rid of all the for loops? If I can get that to work, I think I can do the rest myself.
Here's an equivalent function that uses recursion. It controls the recursion with two parameters that track the current depth and the maximum depth. If the current depth exceeds the maximum depth, it returns immediately, thus stopping the recursion:
def evaluate(l, max_depth, cur_depth=0, counter=0):
    if cur_depth > max_depth:
        return counter
    for x in l:
        counter += 1
        l2 = [x * 2 for x in l]
        print cur_depth, l2, counter
        counter = evaluate(l2, max_depth, cur_depth + 1, counter)
    return counter
If called with max_depth=2 it will produce the same output, except that the current depth is printed instead of the variable name.
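For example, a call that mirrors the three-level snippet from the question (the input list is just a stand-in) would be:

total = evaluate([1, 2], max_depth=2)   # prints levels 0, 1 and 2, like list_2..list_4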
I thought recursion might be a way of doing this, but I can’t figure out how to stop the recursion after x levels of search depth.
Your intuition is correct, and a simple way of doing this is to pass an incrementing number to each level. When that number reaches the maximum value, the recursion stops. A trivial example is below to demonstrate.
MAX_COUNT = 5  # stop recursing once i reaches this value

def countup(i=0):
    print(i)
    if i == MAX_COUNT: return
    countup(i+1)
For your algorithm, you need a value that represents the board evaluation, for instance in the range [-1, 1]. Player A could be said to be winning if the evaluation is -1 and Player B if it is 1, for example. A recursive algorithm could be as follows.
def evaluate(board, player, depth=0):
    if depth == MAX_DEPTH:
        return heuristicEvaluation(board), None  # leaf: no move to report
    bestMove = None
    if player == PLAYER_A:
        val = 999   # some large value
        for move in get_moves():
            newboard = board.makeMove(move)
            eval, _ = evaluate(newboard, PLAYER_B, depth+1)
            if eval < val:
                bestMove = move
                val = eval
    elif player == PLAYER_B:
        val = -999  # some large negative value
        for move in get_moves():
            newboard = board.makeMove(move)
            eval, _ = evaluate(newboard, PLAYER_A, depth+1)
            if eval > val:
                bestMove = move
                val = eval
    return val, bestMove
This is abstract, but the idea is there. Adjust it depending on how you are representing the board and the players. The heuristicEvaluation function could be something as simple as counting the pieces on the board for each player and how close they are to the other side. Remember that this function needs to return a number in the range [-1, 1].
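As a rough illustration only (board.pieces() here is a hypothetical accessor, not part of any real API):

def heuristicEvaluation(board):
    # toy sketch: piece-count difference scaled into [-1, 1];
    # board.pieces(player) is an assumed, made-up method
    a = len(board.pieces(PLAYER_A))
    b = len(board.pieces(PLAYER_B))
    if a + b == 0:
        return 0.0
    return (b - a) / float(a + b)  # negative favours Player A, positive favours Player B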
Edge cases to consider, which I didn't take into account:
If all moves are winning and/or losing
If there are NO moves in the position, for example if your pieces are all blocked by your opponent's pieces
Many improvements exist to a simple search like this. Read if you're interested :)
For checkers, memoization would perhaps speed things up a lot. I'm not sure, but I'd think it would help, especially in the beginning. See Python's way of doing this.
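Python's built-in hook for this is functools.lru_cache. A tiny, self-contained illustration (the board here is a made-up stand-in, and caching only works on hashable arguments such as tuples):

from functools import lru_cache

@lru_cache(maxsize=None)          # remembers results for arguments it has seen before
def count_my_pieces(board):
    # stand-in for an expensive evaluation of a position
    return sum(row.count('x') for row in board)

board = (('x', '.'), ('.', 'x'))  # tuples, so the argument is hashable
count_my_pieces(board)            # computed
count_my_pieces(board)            # served from the cache on the repeat call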
Pruning
Alpha-beta pruning
Branch and bound
Could someone please let me know the best way to prove a linked list contains a loop?
I am using an algorithm with two pointers: one moving slowly, one step at a time, and one moving faster, two steps at a time.
class Node(object):
    def __init__(self, value, next=None):
        self.next = next
        self.value = value

def create_list():
    last = Node(8)
    head = Node(7, last)
    head = Node(6, head)
    head = Node(5, head)
    head = Node(4, head)
    head = Node(3, head)
    head = Node(2, head)
    head = Node(1, head)
    last.next = head
    return head

def is_circular(head):
    slow = head
    fast = head
    while True:
        slow = slow.next
        fast = fast.next.next
        print slow.value, fast.value
        if slow.value == fast.value:
            return True
        elif slow is fast:
            return False

if __name__ == "__main__":
    node = create_list()
    print is_circular(node)
A good algorithm is as follows; it may very well be the best. You do not need to copy the list or anything like that, and it can be done in constant space.
Take two pointers and set them to the beginning of the list.
Let one increment one node at a time and the other two nodes at a time.
If there is a loop at any point in the list, they will have to be pointing to the same node at some point (not including the starting point). Obviously if you reach the end of the list, there is no loop.
EDIT:
Your code, but slightly edited:
def is_circular(head):
    slow = head
    fast = head
    while fast != None:
        slow = slow.next
        if fast.next != None:
            fast = fast.next.next
        else:
            return False
        if slow is fast:
            return True
    return False
Don't know about the best, but the simplest I can think of is:
>>> import json
>>> l = []
>>> l.append(l)
>>> json.dumps(l)
Traceback (most recent call last):
...
ValueError: Circular reference detected
I would test it just like in any other language:
Start traversing the list from the start, adding all visited elements into a data structure (e.g. a set) with fast insertion and lookup. If you hit the end of the list without seeing any element twice, the list is not circular. If you see an element twice, the list is circular.
If neither is true, the list is infinite. :-)
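A minimal sketch of that idea; it only assumes each node has a next attribute, like the Node class in the question, and it uses node identity for the set membership:

def has_cycle(head):
    seen = set()
    node = head
    while node is not None:
        if id(node) in seen:   # this exact node was visited before: there is a loop
            return True
        seen.add(id(node))
        node = node.next
    return False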
Here is a way to do it:
Start from a node and store its pointer in a variable.
Keep visiting the next elements until you reach the end or revisit the start node.
If the end is reached, the list is not circular; otherwise it is circular.
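A literal sketch of those steps; note that, written this way, it only detects loops that come back around to the starting node:

def loops_back_to_start(head):
    node = head.next if head is not None else None
    while node is not None and node is not head:
        node = node.next
    return head is not None and node is head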
I am reviewing this Stack Overflow post:
Python - Speed up an A Star Pathfinding Algorithm
I am trying to determine what the line for tile in graph[current]: represents, namely what graph[] represents. I feel like graph should represent the entire grid, but I second-guess this because we are passing current as an argument to the [] operator on graph, so it has to be returning something, but I'm not sure what. Maybe the tiles that we can travel to that are directly adjacent to current?
Also, what does this syntax mean: current = heapq.heappop(openHeap)[1]?
import heapq
def aStar(self, graph, current, end):
    openSet = set()
    openHeap = []
    closedSet = set()

    def retracePath(c):
        path = [c]
        while c.parent is not None:
            c = c.parent
            path.append(c)
        path.reverse()
        return path

    openSet.add(current)
    openHeap.append((0,current))
    while openSet:
        current = heapq.heappop(openHeap)[1]
        if current == end:
            return retracePath(current)
        openSet.remove(current)
        closedSet.add(current)
        for tile in graph[current]:
            if tile not in closedSet:
                tile.H = (abs(end.x-tile.x)+abs(end.y-tile.y))*10
                if tile not in openSet:
                    openSet.add(tile)
                    heapq.heappush(openHeap, (tile.H,tile))
                tile.parent = current
    return []
I believe the graph variable is a dict of some sort where the key is the current tile, and the value is a list of all the valid neighboring tiles. That way, every node in the graph is easily accessible via simple dict lookup.
The pseudocode on Wikipedia the author linked to in the original post supports this hypothesis -- the functionally equivalent line is listed as for each neighbor in neighbor_nodes(current)
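In other words, something shaped roughly like this (a made-up 2x2 example with tuple keys; in the linked code the keys and values are actual tile objects):

graph = {
    (0, 0): [(0, 1), (1, 0)],   # each key maps to the walkable neighbouring tiles
    (0, 1): [(0, 0), (1, 1)],
    (1, 0): [(0, 0), (1, 1)],
    (1, 1): [(0, 1), (1, 0)],
}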
What the line current = heapq.heappop(openHeap)[1] is doing is returning the literal tile object. If you look at the lines openHeap.append((0,current)) and heapq.heappush(openHeap, (tile.H,tile)), you can see that the author is adding a tuple of two elements to openHeap, where the first element is the heuristic and the second element is the literal tile object.
Therefore, the line current = heapq.heappop(openHeap)[1] is identical to writing:
temp = heapq.heappop(openHeap)
current = temp[1]
...or to writing:
h, current = heapq.heappop(openHeap)
What the heapq.heappop() function itself is doing is returning the smallest element in the heap. Since the tuples are compared by their first element, it returns the open tile with the smallest heuristic; popping from a binary heap is a cheap O(log n) operation.
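A tiny standalone demonstration of that ordering:

import heapq

heap = []
heapq.heappush(heap, (30, "far tile"))
heapq.heappush(heap, (10, "near tile"))
print(heapq.heappop(heap))   # (10, 'near tile'): the entry with the smallest first element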