Skyscraper puzzle algorithm - python

I'm writing an algorithm to solve skyscraper puzzles:
Skyscraper puzzles combine the row and column constraints of Sudoku with external clue values that re-imagine each row or column of numbers as a road full of skyscrapers of varying height. Higher numbers represent higher buildings.
To solve a Skyscraper puzzle you must place 1 to 5, or 1 to whatever the size of the puzzle is, once each into every row and column, while also solving each of the given skyscraper clues.
To understand Skyscraper puzzles, you must imagine that each value you place into the grid represents a skyscraper of that number of floors. So a 1 is a 1-floor skyscraper, while a 4 is a 4-floor skyscraper. Now imagine that you go and stand outside the grid where one of the clue numbers is and look back into the grid. That clue number tells you how many skyscrapers you can see from that point, looking only along the row or column where the clue is, and from the point of view of the clue. Taller buildings always obscure lower buildings, so in other words higher numbers always conceal lower numbers.
All the basic techniques are implemented and working, but I've realized that with bigger puzzles (larger than 5x5) I need some sort of recursive algorithm. I found a decent working Python script, but I'm not really following what it actually does beyond solving basic clues.
Does anyone know the proper way of solving these puzzles, or can anyone explain the essentials of the code I found?

Misha showed you the brute-force way. A much faster recursive algorithm can be built on constraint propagation. Peter Norvig (head of Google Research) wrote an excellent article about how to use this technique to solve Sudoku with Python. Read it and try to understand every detail; you will learn a lot, guaranteed. Since the skyscraper puzzle has a lot in common with Sudoku (no 3x3 blocks, but some extra constraints given by the numbers on the edge), you could probably steal a lot of his code.
You start, as with Sudoku, with each field holding a list of all the possible numbers from 1..N. After that, you look at one horizontal/vertical line or edge clue at a time and remove illegal options. E.g. in a 5x5 case, an edge clue of 3 excludes 5 from the first two squares and 4 from the first square. The constraint propagation should do the rest. Keep looping over edge constraints until they are fulfilled or you get stuck after cycling through all constraints. As shown by Norvig, you then start guessing, discarding a guess when it leads to a contradiction.
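To illustrate that elimination rule with a sketch (my own code, not Norvig's): with a clue c on a line of n cells, the cell at distance i from the clue can hold at most n - c + 1 + i, otherwise fewer than c skyscrapers would be visible:

def prune_line(candidates, clue, n):
    """Remove values too tall for an edge clue.

    candidates: list of sets of possible heights, one per cell,
    ordered from the clue side inward. Cell i can hold at most
    n - clue + 1 + i, or fewer than `clue` towers would be visible.
    """
    for i, cell in enumerate(candidates):
        max_allowed = n - clue + 1 + i
        cell.intersection_update(range(1, max_allowed + 1))
    return candidates

# A clue of 3 on a 5x5 line: 4 and 5 leave the first cell, 5 leaves the second.
line = [set(range(1, 6)) for _ in range(5)]
print(prune_line(line, clue=3, n=5))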
In the case of Sudoku, a given clue has to be processed only once, because once you assign a single number to a square (i.e. you remove all the other possibilities), all the information of the clue has been used. With the skyscrapers, however, you might have to apply a given clue several times until it is totally satisfied (e.g. until the complete line is solved).
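Checking whether a clue is satisfied on a completed line reduces to a simple visibility count (again a sketch, not a reference implementation):

def visible_count(line):
    """Count skyscrapers visible from the start of a completed line."""
    count, tallest = 0, 0
    for height in line:
        if height > tallest:   # taller than everything before it: visible
            count += 1
            tallest = height
    return count

print(visible_count([2, 4, 3, 5, 1]))  # -> 3 (the 2, the 4 and the 5)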

If you're desperate, you can brute-force the puzzle. I usually do this as a first step to become familiar with the puzzle. Basically, you need to populate NxN squares with integers from 1 to N inclusive, subject to the following constraints:
Each integer appears in every row exactly once
Each integer appears in every column exactly once
The row "clues" are satisfied
The column "clues" are satisfied
The brute force solution would work like this. First, represent the board as a 2D array of integers. Then write a function is_valid_solution that returns True if the board satisfies the above constraints, and False otherwise. This part is relatively easy to do in O(N^2).
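Here's a sketch of what is_valid_solution could look like, assuming clues come as (left, right) pairs per row and (top, bottom) pairs per column, with 0 meaning "no clue" (these conventions are mine, not the poster's):

def is_valid_solution(board, row_clues, col_clues, n):
    """row_clues[i] = (left, right) clues for row i;
    col_clues[j] = (top, bottom) clues for column j; 0 = no clue."""
    def visible(line):  # skyscrapers visible from the start of the line
        count, tallest = 0, 0
        for h in line:
            if h > tallest:
                count, tallest = count + 1, h
        return count

    full = set(range(1, n + 1))
    for i in range(n):
        row = board[i]
        col = [board[r][i] for r in range(n)]
        if set(row) != full or set(col) != full:   # Latin-square constraints
            return False
        for clue, line in ((row_clues[i][0], row), (row_clues[i][1], row[::-1]),
                           (col_clues[i][0], col), (col_clues[i][1], col[::-1])):
            if clue and visible(line) != clue:     # clue constraints
                return False
    return True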
Finally, iterate over the possible board permutations, and call is_valid_solution for each permutation. When that returns True, you've found a solution. There are a total of N^(NxN) possible arrangements, so your complete solution will be O(N^(NxN)). You can do better by using the above constraints for reducing the search space.
The above method will take a relatively long while to run (O(N^(NxN)) is pretty horrible for an algorithm), but you'll (eventually) get a solution. When you've got that working, try to think of a better way to do it; if you get stuck, then come back here.
EDIT
A slightly better alternative to the above would be to perform a search (e.g. depth-first) starting with an empty board. At each iteration of the search, you'd populate one cell of the table with a number (while not violating any of the constraints). Once you happen to fill up the board, you're done.
Here's pseudo-code for a recursive brute-force depth-first search. The search will be NxN nodes deep, and the branching factor at each node is at most N. This means you will need to examine at most 1 + N + N^2 + ... + N^(NxN), i.e. (N^(NxN+1)-1)/(N-1) nodes. For each of these nodes, you need to call is_valid_board, which is O(N^2) in the worst case (when the board is full).
import copy

def fill_square(board, row, col):
    for value in range(1, N + 1):
        next_board = copy.deepcopy(board)
        next_board[row][col] = value
        if is_valid_board(next_board):
            if row == col == N - 1:  # the board is full, we're done
                print(next_board)
                return
            next_row, next_col = calculate_next_position(row, col)
            fill_square(next_board, next_row, next_col)

board = initialize_board()
fill_square(board, 0, 0)
The function calculate_next_position selects the next square to fill. The easiest way to do this is just a scanline traversal of the board. A smarter way would be to fill rows and columns alternately.
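For the scanline version, calculate_next_position could be as simple as (a sketch):

def calculate_next_position(row, col):
    """Scanline order: left to right within a row, then down to the next row."""
    return (row, col + 1) if col < N - 1 else (row + 1, 0)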

Related

How do I obtain the shortest random sequence such that it encodes all possible transitions from one element to another?

I am designing an experiment where participants will be prompted with a random sequence of actions, and I will be recording data throughout the experiment. My intention is to capture every possible transition from one action to another using the shortest sequence possible. Say that there are N possible actions; I am searching for an algorithm that can generate a set of random sequences with the following properties:
Sliding through each sequence, every two consecutive elements represent a transition from one action to another. Therefore, except at the start and end of the sequence, every element serves as the end of one transition and the start of the next. From what I observe using small examples, this approach appears to produce the shortest sequence while covering all transitions.
Code implementing the algorithm must return all such valid shortest sequences.
Cannot have two consecutive elements be the same (i.e. self transitions are not allowed).
Must use basic functions available in Python and MATLAB, so I cannot use modules/libraries that may be available in Python but not in MATLAB (or vice versa).
As an example, say I have 3 actions: {A, B, C}. One of the expected sequences this algorithm should produce is ABCBACA. Sliding through this sequence, taking 2 elements at a time, I get {AB, BC, CB, BA, AC, CA}. As expected, this covers all 6 possible transitions using a sequence of length 7, and no two consecutive elements are the same. Another valid sequence that this algorithm might produce is ACABCBA. Sliding through this sequence, taking 2 elements at a time, I get {AC, CA, AB, BC, CB, BA}, thus covering all transitions with no two consecutive elements being the same.
I worked out both examples using pen and paper, but I am having trouble seeing a pattern, particularly for N > 3. How do I proceed from here?
It appears that a sequence of length N*(N-1) + 1 would be the shortest sequence in my case, which I think makes sense. I also observed that the start and end of such sequences are the same (i.e. if we start at A, we end at A). It almost appears as if this is a circular list instead of a linear list. Is this generally true?
If I'm understanding what you're asking correctly, here's basically what you need to do:
Create a directed graph with a node per possible transition (so one for AB, one for AC, etc), and add connections from each node to every node that starts with your "end" (so for AB, you'd connect it to BA and BC -- remember, these are unidirectional)
Find an arbitrary Hamiltonian cycle of the graph above.
You're done. Problem is, finding a Hamiltonian cycle is an NP-complete problem in the general case. As such, finding an efficient way of doing it for large N might prove challenging, to put it lightly. If you only need it for N of fairly small size, then you can just pick any algorithm that finds Hamiltonian cycles and stick it in.
Hell, you can probably just concatenate random transitions that 1. haven't been used yet and 2. start with whatever the previous transition ended with (in other words, traverse the graph described above at random, without ever returning to a node you've already visited), and if you run out of options before using up all transitions, just start over. It would surely find solutions for small N (say, <= 6) reasonably quickly, and clearly it has equal probability of finding any valid solution.
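A quick sketch of that randomized restart idea (the function and variable names are mine):

import random

def random_covering_sequence(actions):
    """Traverse the transition graph at random; restart on a dead end."""
    all_transitions = {(a, b) for a in actions for b in actions if a != b}
    while True:
        seq = [random.choice(actions)]
        unused = set(all_transitions)
        while unused:
            options = [t for t in unused if t[0] == seq[-1]]
            if not options:
                break                      # dead end before covering everything
            nxt = random.choice(options)
            unused.remove(nxt)
            seq.append(nxt[1])
        if not unused:
            return ''.join(seq)

print(random_covering_sequence(list('ABC')))   # e.g. ABCBACA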
As for your question on whether the solution will always be circular: yes, that is correct. It's pretty clear to see if you consider that in an optimal solution you see every single transition exactly once, and that any "outgoing" transition must be paired with an "incoming" transition of the same action: e.g. if you start with AB, the pool still contains N-1 transitions of the form xA but only N-2 of the form Ax, and as such you will end up being left with a single dangling transition of the form xA that therefore must come last.
It's possible there is some kind of alternative solution that leverages the structure of this specific problem to produce a more efficient solution, but if there is, I'm not seeing it. This problem is basically a slightly smaller scale version of finding the shortest superpermutation, though, which isn't currently known to have a more efficient solution.
For anyone looking at this in the future: I came across the De Bruijn sequence, which is almost exactly the solution I want to my problem. The Python code referenced in the article works fairly well for my problem. The only modification I needed to make was to ensure that in the output string, all substrings corresponding to self-transitions (e.g. AA, BB, CC, etc.) were collapsed into single symbols (i.e. A, B, C, etc.).
Also, as the Wikipedia page states:
... Note that these sequences are understood to "wrap around" in a cycle ...
So this confirms my observation that the sequences always start and end at the same point. Multiple sequences can be obtained by supplying permuted strings as input (i.e. the inputs ABC, ACB, BAC, etc.), and we get the outputs we are interested in. The output produced by the Python code appears to always be ordered.
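For reference, here's the standard de Bruijn construction (essentially the Python code from the Wikipedia article) together with the collapse step described above; consider it a sketch:

def de_bruijn(alphabet, n):
    """Standard de Bruijn sequence B(k, n) over the given alphabet."""
    k = len(alphabet)
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return ''.join(alphabet[i] for i in sequence)

seq = de_bruijn('ABC', 2)
wrapped = seq + seq[0]   # the sequence "wraps around" in a cycle
# Collapse self-transitions (AA -> A, BB -> B, ...):
result = ''.join(c for i, c in enumerate(wrapped) if i == 0 or c != wrapped[i - 1])
print(result)            # 7 symbols covering all 6 transitions of {A, B, C}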

5x5 Sliding Puzzle Fast & Low-Move Solution

I am trying to find a way to programmatically solve a 24-piece sliding puzzle in a reasonable amount of time and with a reasonable number of moves. [Image of the solved 5x5 state omitted.]
I have already found that the IDA* algorithm works fairly well to accomplish this for a 15-puzzle (4x4 grid). The IDA* algorithm is able to find the lowest number of moves for any 4x4 sliding puzzle in a very reasonable amount of time. I ran an adaptation of this code to test 4x4 sliding puzzles and was able to significantly reduce runtime further by using PyPy. Unfortunately, when this code is adapted for 5x5 sliding puzzles it runs horribly slowly. I ran it for over an hour and eventually just gave up on seeing it finish, whereas it ran for only a few seconds on 4x4 grids. I understand this is because the number of nodes that need to be searched goes up exponentially as the grid grows. However, I am not looking to find the optimal solution to a 5x5 sliding puzzle, only a solution that is close to optimal. For example, if the optimal solution for a given puzzle was 120 moves, then I would be satisfied with any solution that is under 150 moves and can be found in a few minutes.
Are there any specific algorithms that might accomplish this?
It has been proven that finding the fewest number of moves for the n-Puzzle is NP-complete; see Daniel Ratner and Manfred Warmuth, The (n²-1)-Puzzle and Related Relocation Problems, Journal of Symbolic Computation (1990) 10, 111-137.
Interesting facts reviewed in Graham Kendall, A Survey of NP-Complete Puzzles, 2008:
The 8-puzzle can be solved with the A* algorithm;
The 15-puzzle cannot be solved with the A* algorithm, but the IDA* algorithm can;
Optimal solutions to the 24-puzzle cannot be generated in reasonable times using IDA* algorithm.
Therefore stopping the computation to change the methodology was the correct thing to do.
It seems there is an available algorithm that finds sub-optimal solutions in polynomial time; see Ian Parberry, Solving the (n²−1)-Puzzle with 8/3 n³ Expected Moves, Algorithms 2015, 8(3), 459-465. It may be what you are looking for.
IDA* works great up to a 4x4 puzzle, because that's 'just' 16! (20,922,789,888,000) possible states. A 5x5 puzzle has 25! (15,511,210,043,330,985,984,000,000) possible states, a factor of about 740 billion larger.
You need to switch strategies. The 'easiest' method solves the puzzle along the top row and then left column first, repeatedly, until you have a 3x3 puzzle, which can easily be solved using existing techniques.
Solving the puzzle involves 3 different phases you alternate between:
Solve the top row (move the pieces for columns 1 to N-2 into place, then move the piece for column N-1 to column N, and the piece for column N to column N but one row below, then finish the row)
Solve the left column (move the pieces for rows 2 to N-2 into place, then move the piece for row N-1 to row N, and the piece for row N to row N but one column to the right, then finish the column)
(2 rows of 3 columns remaining): use A* to solve the remainder.
So phases 1 and 2 alternate until you can run phase 3; after solving the top 5 tiles (phase 1) you solve the left-most 4 tiles on the other rows (phase 2), then the top row of the remainder of the puzzle (4 tiles, phase 1), then the left column (3 tiles, phase 2), then solve phase 3. Phases 1 and 2 are basically identical, only the orientation differs, and for phase 2 the first tile is already in place.
Phases 1 and 2 are easily solved using lookup tables, no search required; you are moving specific tiles and don't care about anything else:
Locate the desired tile
Get the gap next to the tile (it depends on the direction of movement what side is best)
Move the tile into position; there are standard moves that move a tile in any direction (5 for vertical or horizontal moves, 6 for diagonal).
This doesn't give you the shortest path to a solution, but with no state search the problem is strictly bound and the worst case scenario known. Solving the first row and column of a 5x5 puzzle takes at most 427 moves this way, and 256 moves for the next row and column.
This algorithm was first described by Ian Parberry in a 1995 paper titled A real-time algorithm for the (n² − 1)-puzzle. I think that DSolving: a novel and efficient intelligent algorithm for large-scale sliding puzzles by GuiPing Wang and Ren Li describes a still more efficient lookup-table method, but as the paper isn't yet available for free I haven't studied it yet.
A two-character change that might do the trick is to multiply the heuristic by 2 (or some other constant). It's no longer admissible, but the solution found will be within a factor of 2 of optimal. This trick is called Weighted A* / static weighting.
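For example, with a Manhattan-distance heuristic the weighting looks like this (a sketch; goal_pos and the function name are illustrative, not from the original answer):

WEIGHT = 2  # w > 1: faster search, solution at most w times longer than optimal

def weighted_manhattan(board, goal_pos, n):
    """Weighted A* heuristic for an n x n sliding puzzle (0 = blank)."""
    dist = 0
    for r in range(n):
        for c in range(n):
            tile = board[r][c]
            if tile:
                gr, gc = goal_pos[tile]          # goal coordinates of this tile
                dist += abs(r - gr) + abs(c - gc)
    return WEIGHT * dist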

How to generate all legal state-action pairs of connect four?

Consider a standard 7*6 board. Suppose I want to apply the Q-learning algorithm. For applying it, I need a set of all possible states and actions. There can be up to 3^(7*6) = 3^42 ≈ 1.09e20 board configurations. Since it's not feasible to store these, I am only considering legal states.
How can I generate Q(s,a) for all the legal states and actions?
This is not my homework. I am trying to learn about reinforcement learning algorithms. I have been searching about this for two days. The closest I have come is to consider only the legal states.
There are three processes you need to set up: one that generates the next move, one that applies that move to the board, and lastly one that evaluates a 4x4 block through a series of checks to see if there is a winner. Numpy and scipy will help with this.
Set up a Numpy array of zeros. Change an entry to 1 for player 1's moves and -1 for player 2's moves. The 4x4 check sums over the x axis, then the y axis, then the two diagonals; if abs(sum) == 4 for any of these, yield the board early.
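Here's a sketch of that 4x4 check with NumPy (my own code following the description above):

import numpy as np

def has_winner(board):
    """Scan every 4x4 block of a 6x7 array (1 / -1 / 0 for player 1 / player 2 / empty).
    Any four-in-a-row shows up as a row, column or diagonal sum of +-4."""
    for r in range(board.shape[0] - 3):
        for c in range(board.shape[1] - 3):
            block = board[r:r + 4, c:c + 4]
            sums = np.concatenate((block.sum(axis=0),              # columns
                                   block.sum(axis=1),              # rows
                                   [np.trace(block),               # main diagonal
                                    np.trace(np.fliplr(block))]))  # anti-diagonal
            if np.any(np.abs(sums) == 4):
                return True
    return False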
This may create duplicates depending on the implementation so put all of these in a set at the end.
Edit (due to comments and the question modification):
You need to use generators and do a depth-first search. There are at most 7 possible branches for any state, with a maximum of 42 moves. You are only looking for winning or losing states to store (don't save stalemates, as they take the most memory). The states will be two sets of locations, one for each player.
When you step forward and find a winning/losing state, store the state with its value, then step backward to the previous move and update the value there, storing this as well.
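A sketch of the generator-based depth-first search, reusing has_winner (and NumPy) from the snippet above; bear in mind a full enumeration is still enormous, so in practice you'd prune or memoize:

def terminal_states(board, player=1):
    """Yield (board copy, winner) for every winning/losing state reachable
    from `board` by depth-first search. board: 6x7 list of lists."""
    for col in range(7):                               # at most 7 branches
        row = next((r for r in range(5, -1, -1) if board[r][col] == 0), None)
        if row is None:
            continue                                   # column is full
        board[row][col] = player
        if has_winner(np.array(board)):
            yield [list(r) for r in board], player
        else:
            yield from terminal_states(board, -player)
        board[row][col] = 0                            # backtrack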
There are 69 possible winning lines in Connect Four, and I don't know how many states are associated with each, so I'm not sure how many steps away from winning you want to store.

Iterative Divide and Conquer algorithms

I am trying to create an algorithm using the divide-and-conquer approach but using an iterative algorithm (that is, no recursion).
I am confused as to how to approach the loops.
I need to break up my problem into smaller subproblems until I hit a base case. I assume this is still true, but I am not sure how I can (without recursion) use the smaller subproblems to solve the much bigger problem.
For example, I am trying to come up with an algorithm that will find the closest pair of points (in one-dimensional space - though I intend to generalize this on my own to higher dimensions). If I had a function closest_pair(L) where L is a list of integer co-ordinates in ℝ, how could I come up with a divide and conquer ITERATIVE algorithm that can solve this problem?
(Without loss of generality I am using Python)
The cheap way to turn any recursive algorithm into an iterative algorithm is to take the recursive function, put it in a loop, and use your own stack. This eliminates the function-call overhead and avoids saving unneeded data on the stack. However, this is not usually the "best" approach ("best" depends on the problem and context).
The way you've worded your problem, it sounds like the idea is to break the list into sublists, find the closest pair in each, and then take the closest pair out of those two results. To do this iteratively, I think a better approach than the generic method above is to start the other way around: look at lists of size 3 (there are three pairs to look at) and work your way up from there. Note that lists of size 2 are trivial.
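Here's a sketch of that bottom-up idea for the 1-D case: sort once, then merge adjacent blocks of doubling size, where the combine step only needs the single gap across the block boundary (my own illustration, not a canonical algorithm):

def closest_pair_1d(points):
    """Iterative divide and conquer: merge adjacent blocks bottom-up."""
    pts = sorted(points)
    n = len(pts)
    if n < 2:
        raise ValueError("need at least two points")
    best = [float('inf')] * n        # best[i]: smallest gap inside block at i
    size = 1
    while size < n:
        for i in range(0, n - size, 2 * size):
            mid = i + size                        # mid < n is guaranteed here
            boundary = pts[mid] - pts[mid - 1]    # gap across the block boundary
            best[i] = min(best[i], best[mid], boundary)
        size *= 2
    return best[0]

print(closest_pair_1d([8, 1, 4, 3]))   # -> 1 (the pair 3, 4)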
Lastly, if your coordinates are integers, they are in ℤ (a much smaller subset of ℝ).

Bubble Breaker Game Solver better than greedy?

For a mental exercise I decided to try and solve the bubble breaker game found on many cell phones, an example of which is here: Bubble Break Game
The random (N,M,C) board consists of N rows x M columns with C colors
The goal is to get the highest score by picking the sequence of bubble groups that ultimately leads to the highest score
A bubble group is 2 or more bubbles of the same color that are adjacent to each other in either x or y direction. Diagonals do not count
When a group is picked, the bubbles disappear; any holes are filled with bubbles from above first (i.e. shift down), then any remaining holes are filled by shifting right
A bubble group score = n * (n - 1) where n is the number of bubbles in the bubble group
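To make those rules concrete, here's a sketch of finding the pickable groups by flood fill (my own code; None marks an empty cell):

def find_groups(board):
    """Return all groups of 2+ same-colored, orthogonally adjacent bubbles.
    board: list of rows; a cell holds a color or None if empty."""
    rows, cols = len(board), len(board[0])
    seen, groups = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or board[r][c] is None:
                continue
            color, stack, group = board[r][c], [(r, c)], []
            seen.add((r, c))
            while stack:                       # iterative flood fill
                y, x = stack.pop()
                group.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen and board[ny][nx] == color):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            if len(group) >= 2:                # singletons are not pickable
                groups.append(group)
    return groups  # score of a group g: len(g) * (len(g) - 1)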
The first algorithm is a simple exhaustive recursive algorithm which goes through the board row by row and column by column, picking bubble groups. Once a bubble group is picked, we create a new board and try to solve that board, recursively descending
Some of the ideas I am using include normalized memoization. Once a board is solved we store the board and the best score in a memoization table.
I created a prototype in Python which shows that a (2,15,5) board takes 8,859 boards to solve in about 3 seconds. A (3,15,5) board takes 12,384,726 boards in 50 minutes on a server. The solver rate is ~3k-4k boards/sec and gradually decreases as the memoization search takes longer. The memoization table grows to 5,692,482 boards and is hit 6,713,566 times.
What other approaches could yield high scores besides the exhaustive search?
I don't see any obvious way to divide and conquer, but trending towards larger and larger bubble groups seems to be one approach
Thanks to David Locke for posting the paper link, which talks about a window solver that uses a constant-depth lookahead heuristic.
According to this paper, determining if you can empty the board (which is related to the problem you want to solve) is NP-Complete. That doesn't mean that you won't be able to find a good algorithm, it just means that you likely won't find an efficient one.
I'm thinking you could try a branch and bound search with the following idea:
Given a state of the game S, you branch on S by breaking it up into m states Si, where each Si is the state after taking one of the m legal moves available in S
You need two functions U(S) and L(S) that compute an upper and a lower bound, respectively, for a given state S.
For the U(S) function, calculate the score you would get if you were able to freely shuffle K bubbles on the board each move and arrange the blocks in whatever way would give the highest score, where K is a value you choose yourself. Calculating U(S) for a given S should go quicker if you choose a higher K (the conditions are more relaxed), so choosing the value of K is a trade-off between the speed of computing U(S) and its quality (how tight an upper bound U(S) is).
For the L(S) function, calculate the score you would get if you simply kept clicking at random until you reached a state that could not be played any further. You can do this several times, taking the highest lower bound you get.
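A sketch of such a random playout, reusing find_groups from the earlier snippet; pick_group here is a hypothetical helper that removes a group and applies the shift-down/shift-right rules:

import random

def lower_bound(board, playouts=10):
    """L(S): best score over several random playouts from this state."""
    best = 0
    for _ in range(playouts):
        b, score = [row[:] for row in board], 0
        while True:
            groups = find_groups(b)
            if not groups:
                break                      # no legal move left
            g = random.choice(groups)
            score += len(g) * (len(g) - 1)
            b = pick_group(b, g)           # hypothetical: remove group, shift
        best = max(best, score)
    return best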
Once you have these two functions you can apply a standard branch and bound search. Note that the speed of your search is going to depend greatly on how tight your upper bound and lower bound are.
To get a faster solution than exhaustive search, I think what you want is probably dynamic programming. In dynamic programming, you find some sort of "step" that takes you possibly closer to your solution, and keep track of the results of each step in a big matrix. Then, once you have filled in the matrix, you can find the best result, and then work backward to get a path through the matrix that leads to the best result. The matrix is effectively a form of memoization.
Dynamic programming is discussed in The Algorithm Design Manual but there is also plenty of discussion of it on the web. Here's a good intro: http://20bits.com/articles/introduction-to-dynamic-programming/
I'm not sure exactly what the "step" is for this problem. Perhaps you could make a scoring metric for a board that simply sums the points for each of the bubble groups, and then record this score as you try popping bubbles? Good steps would tend to cause bubble groups to coalesce, improving the score; bad steps would break up bubble groups, making the score worse.
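Given find_groups from the earlier sketch, that metric is a one-liner:

def board_metric(board):
    """Sum of the immediate scores of all current groups (heuristic only)."""
    return sum(len(g) * (len(g) - 1) for g in find_groups(board))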
You can translate this problem into the problem of searching for a shortest path on a graph: http://en.wikipedia.org/wiki/Shortest_path_problem
I would try A*, and the heuristic would include the number of islands.
In my chess program I use some ideas which could probably be adapted to this problem.
Move Ordering. First find all possible moves, store them in a list, and sort them according to some heuristic, the "better" ones first, the "bad" ones last. For example, this could be a function of the size of the group (prefer medium-sized groups), or the number of adjacent colors, groups, etc.
Iterative Deepening. Instead of running a pure depth-first search, cut off the search after a certain depth and use some heuristic to assess the result. Then re-search the tree with the "better" moves first.
Pruning. Don't search moves which seem "obviously" bad according to some, again, heuristic. This involves the risk that you won't find the optimal solution anymore, but depending on your heuristics you will very likely find it much earlier.
Hash Tables. No need to store every board you come across; just remember a certain number and overwrite older ones.
I'm almost finished writing my version of the "solver" in Java. It does both exhaustive search, which takes fricking ages for larger board sizes, and a directed search based on a "pool" of possible paths, which is pruned after every generation by a fitness function. I'm just trying to tune the fitness function now...
Update - this is now available at http://bubblesolver.sourceforge.net/
This isn't my area of expertise, but I would like to recommend a book to you. Get a copy of The Algorithm Design Manual by Steven Skiena. This has a whole list of different algorithms, and once you read through it you can use it as a reference. If nothing else it will help you consider your options.
