I wrote a Python script to solve the N queens puzzle. I made a function which, given n, returns the first solution it finds for n queens using backtracking. With a small modification it is possible to make the function find and return all solutions by exhausting the search space. It works well for n between 1 and 23; after 23 it starts to take some time to find a single solution.
I was wondering if it is possible to find a solution with a further constraint by extending the "attack range" of the queen. A queen in chess can attack horizontally, vertically, and on the diagonals. For my modification the queen can also attack the adjacent squares to the left and the right of the diagonal ones. As a consequence of this behavior, each queen must be 4 squares away from the next queen, instead of 3 squares for the normal puzzle.
In the following image, the blue squares are the normal queen's range, and the green squares represent the new attack range: [image: New queen attack range].
I made a new function which takes this new constraint into account. However, after running my code, I haven't been able to find any solutions for any n up to 23, and from 24 on it takes a very long time.
So my question is: does anyone know if there is a solution at all for this problem? What is the smallest number for which a solution exists?
If anyone has done this before, I'm sure their code will be better and faster than mine, but I can provide the code if needed.
Thanks in advance!
With these super queens, you will no longer be able to fit N queens on an NxN board, other than on the trivial 1x1 board. One way to see this is that there are 2N-1 diagonals (let's use lower-left to upper-right) on an NxN board. Each queen attacks 3 of these diagonals, except a queen in a corner, which attacks only 2.
Let's say we have one queen in the corner, occupying 2 diagonals. Then, e.g., on an 8x8 board we have 13 diagonals left, which can be used by floor(13/3) = 4 queens. So at most we could have 5 queens on an 8x8 board. I don't know if this is a tight upper bound.
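To make the bound concrete, here is a small brute-force sketch (my own code, using my reading of the question's extended attack range) that computes the maximum number of mutually non-attacking super queens on small boards:

def attacks(r1, c1, r2, c2):
    """Super-queen conflict test: same row, column, or diagonal, plus the
    squares adjacent to the diagonal ones (|dr| and |dc| differ by 1)."""
    dr, dc = abs(r1 - r2), abs(c1 - c2)
    return dr == 0 or dc == 0 or dr == dc or abs(dr - dc) == 1

def max_super_queens(n):
    best = 0
    def place(row, queens):
        nonlocal best
        best = max(best, len(queens))
        if row == n:
            return
        for col in range(n):
            if all(not attacks(row, col, r, c) for r, c in queens):
                place(row + 1, queens + [(row, col)])
        place(row + 1, queens)      # also allow leaving this row empty
    place(0, [])
    return best

for n in range(1, 8):
    print(n, max_super_queens(n))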
I need to write a brute-force algorithm to count the number of unit triangles in a complex shape. The shape is created each iteration by adding triangles to surround all outer edges.
The shape at iteration n would look as in the image above, and the outputs for the first three iterations would be 1, 4, and 10 respectively.
Unfortunately I do not really know where to begin. My first thought was to create two classes: a triangle class and a grid class consisting of multiple triangles. However, adding the outer triangles would prove difficult past n = 3, as some edge pairs will only need one shared unit triangle.
Any thoughts?
Never mind, the solution was simpler than I ever imagined: the number of triangles added increases by 3 each iteration, so a simple for loop accumulating from n = 1 worked easily enough.
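For reference, a minimal sketch of that loop (my reconstruction; the original code wasn't posted): the first iteration contributes 1 triangle, and each subsequent iteration adds 3 more triangles than the previous one did.

def count_triangles(n):
    """Unit triangles after n iterations: 1, 4, 10, 19, ..."""
    total, added = 1, 0
    for _ in range(1, n):
        added += 3          # each iteration adds 3 more than the last
        total += added
    return total

print([count_triangles(n) for n in (1, 2, 3)])  # [1, 4, 10]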
I was doing a coding problem for which I somehow passed all test cases, but I did not understand exactly what was going on. The problem was a small twist on the classic Nim game:
There are two players A and B. There are N piles of various stones. Each player can take any amount of stones if the pile is less than K, otherwise they must take a multiple of K stones. The last person to take stones wins.
# solution -> will A win the game of (piles, k)?
def solution(piles, k):
    gn = 0  # Grundy number
    for pile in piles:
        if pile % 2 != 0:
            gn ^= pile + 1
        else:
            gn ^= pile - 1
    return gn != 0
I'm not sure whether there were enough test cases, but k was not even used here. To be honest, I am having a difficult time even understanding what gn (the Grundy number) really means. I realize there is a proof that you win the classic Nim game if the XOR of all pile sizes is nonzero, but I don't really understand why this variation requires checking the parity of each pile.
First, the given solution is incorrect. You noticed that it does not use k, and indeed this is a big red flag. You can also look at the result it gives for a single-pile game, where it seems to say that player A only wins if the size of the pile is one, which you should fairly quickly be able to show is incorrect.
The structure of the answer is sort of correct, though. A lot of the power of the Grundy number is that the Grundy number of a combined game state is the nim sum (XOR in the case of finite ordinals) of the Grundy numbers of the individual game states. (This only works for a very specific way of combining game states, but this turns out to be the natural way of considering Nim piles together.) So, this problem can indeed be solved by finding the Grundy number for each pile (considering k) and XOR-ing them together to get the Grundy number for the full game state. (In Nim where you can take any number of stones from a pile and win by taking the last stone, the Grundy number of a pile is just the size of a pile. That's why the solution to that version of Nim just XOR-s the sizes of the piles.)
So, taking the theory for granted, you can solve the problem by finding the correct Grundy values for a single pile given k. You only need to consider one pile games to do this. This is actually a pretty classic problem, and IMO significantly simpler to correctly analyze than multi-pile Nim. You should give it a go.
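For instance, here is a brute-force sketch (my own code, not a known-correct reference solution) that computes single-pile Grundy values directly from the mex definition under the stated move rule; it is only practical for small piles, but that is enough to discover the pattern and then combine piles by XOR:

from functools import lru_cache

def grundy(pile, k):
    """Grundy number of a single pile under the stated move rule.
    Fine for small piles; the recursion gets deep for large ones."""
    @lru_cache(maxsize=None)
    def g(n):
        if n == 0:
            return 0                      # no moves: Grundy number 0
        if n < k:
            moves = range(1, n + 1)       # small pile: take any amount
        else:
            moves = range(k, n + 1, k)    # otherwise: a positive multiple of k
        reachable = {g(n - m) for m in moves}
        m = 0                             # mex: minimum excluded value
        while m in reachable:
            m += 1
        return m
    return g(pile)

def first_player_wins(piles, k):
    gn = 0
    for pile in piles:
        gn ^= grundy(pile, k)
    return gn != 0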
As for how to think of Grundy numbers, there are plenty of places to read about it, but here's my approach. The thing to understand is why the combination of two game states allows the previous player (B) to win exactly when the Grundy numbers are equal.
To do this, we need only consider what effect moves have on the Grundy numbers of the two states.
By definition as the minimum excluded value of the successor states, there is always a move that changes the Grundy number of a state to any lower value (i.e. n could become any number from 0 to n - 1). There is never a move that leaves the Grundy number the same. There may or may not be moves that increase the Grundy number.
Then, in the case of the combination of two states with the same Grundy number, player B can win by employing the "copycat strategy". If player A makes a move that decreases the Grundy number of one state, player B can "copy" it by reducing the Grundy number of the other state to the same value. If player A makes a move that increases the Grundy number of one state, player B can "undo" it by making a move on the same state to reduce it to the value it had before. (Our game is finite, so we don't have to worry about an infinite loop of doing and undoing.) These are the only things A can do. (Remember, importantly, there is no move that leaves a Grundy number unchanged.)
If the states don't have the same Grundy number, then the way for the first player to win is clear: they just reduce the Grundy number of the state with the higher value to match the state with the lower value. This reduces things to the previous scenario.
Here we should note that the minimum excluded value definition allows us to construct the Grundy number for any states recursively in terms of their successors (at least for a finite game). There are no choices, so these numbers are in fact well-defined.
The next question to address is why we can calculate the Grundy number of a combined state. I prefer not to think about XOR at all here. We can define this nim sum operation purely from the minimum excluded value property. We abstractly consider the successors of nim_sum(x, y) to be {nim_sum(k, y) for k in 0..x-1} and {nim_sum(x, k) for k in 0..y-1}; in other words, making a move on one sub-state or the other. (We can ignore successors that increase the Grundy number of a sub-state: such a state would have all the successors of the original state, plus nim_sum(x, y) itself as another successor, so it must have a strictly larger Grundy number. Yes, that's a little bit hand-wavy.) This turns out to be the same as XOR. I don't have a particularly nice explanation for that, but I feel it isn't really necessary for a basic understanding. The important thing is that it is a well-defined operation.
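If you want to convince yourself numerically, here is a small sketch (mine, not the answerer's) that computes this nim sum straight from the minimum-excluded-value definition above and checks that it matches XOR:

from functools import lru_cache

@lru_cache(maxsize=None)
def nim_sum(x, y):
    # successors: lower either component, i.e. make a move on one sub-state
    successors = {nim_sum(k, y) for k in range(x)} | {nim_sum(x, k) for k in range(y)}
    m = 0                       # mex: minimum excluded value
    while m in successors:
        m += 1
    return m

assert all(nim_sum(x, y) == x ^ y for x in range(32) for y in range(32))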
I'm currently writing a program that solves the 8-puzzle game with an A* search in Python. However, when I time my code, I find that get_manhattan_distance takes a really long time.
I ran my code with cProfile for Python, and the results are below what is printed out by the program. Here is a gist for my issue.
I've already made my program more efficient by using NumPy arrays instead of Python lists for copying. I don't quite know how to make this step more efficient. My current code for get_manhattan_distance is:
def get_manhattan(self):
    """Returns the Manhattan heuristic for this board.

    Will attempt to use the cached Manhattan value for speed, but if it hasn't
    already been calculated, then it will need to calculate it (which is
    extremely costly!).
    """
    if self.cached_manhattan != -1:
        return self.cached_manhattan
    # Set the value to zero, so we can add elements based off them being out
    # of place.
    self.cached_manhattan = 0
    for r in range(self.get_dimension()):
        for c in range(self.get_dimension()):
            if self.board[r][c] != 0:
                num = self.board[r][c]
                # Solves for what row and column this number should be in.
                correct_row, correct_col = np.divmod(num - 1, self.get_dimension())
                # Adds the Manhattan distance from its current position to
                # its correct position.
                manhattan_dist = abs(correct_col - c) + abs(correct_row - r)
                self.cached_manhattan += manhattan_dist
    return self.cached_manhattan
The idea behind this is that the goal puzzle for a 3x3 grid is the following:
1 2 3
4 5 6
7 8
Where there is a blank tile (the blank tile is represented by a 0 in the int array). So, if we have the puzzle:
3 2 1
4 6 5
7 8
It should have a Manhattan distance of 6: 3 is two places away from where it should be, 1 is two places away, 5 is one place away, and 6 is one place away. Hence 2 + 2 + 1 + 1 = 6.
Unfortunately, this calculation takes a very long time because there are hundreds of thousands of different boards. Is there any way to speed this calculation up?
It looks to me like you should only need to calculate the full Manhattan distance sum for an entire board once - for the first board. After that, you're creating new Board entities from existing ones by swapping two adjacent numbers. The total Manhattan distance on the new board will differ only by the sum of changes in Manhattan distance for these two numbers.
If one of the numbers is the blank (0), then the total distance changes by minus one or one depending on whether the non-blank number moved closer to its proper place or farther from it. If both of the numbers are non-blank, as when you're making "twins", the total distance changes by minus two, zero, or two.
Here's what I would do: add a manhattan_distance = None argument to Board.__init__. If this is not given, calculate the board's total Manhattan distance; otherwise simply store the given distance. Create your first board without this argument. When you create a new board from an existing one, calculate the change in the total distance and pass the result in to the new board. (The cached_manhattan becomes irrelevant.)
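As a sketch of the incremental update (my code; the Board class and the 0-for-blank convention come from the question, and the helper name is made up): when a tile slides between two cells, the total changes only by that tile's own contribution.

def manhattan_delta(num, r_old, c_old, r_new, c_new, dim):
    """Change in total Manhattan distance when non-blank tile `num`
    moves from (r_old, c_old) to (r_new, c_new) on a dim x dim board."""
    goal_r, goal_c = divmod(num - 1, dim)
    before = abs(goal_r - r_old) + abs(goal_c - c_old)
    after = abs(goal_r - r_new) + abs(goal_c - c_new)
    return after - before

# e.g. when sliding tile `num` into the blank's cell:
# new_total = old_total + manhattan_delta(num, r1, c1, r2, c2, dim)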
This should reduce the total number of calculations involved with distance by quite a bit - I'd expect it to speed things up by several times, more the larger your board size.
It's a question on CheckiO - Break Rings - but I can only come up with a bad O(n * 2^n) approach: testing every possible set of rings to break and taking the minimum one.
The problem:
A blacksmith gave his apprentice a task, ordering them to make a selection of rings. The apprentice is not yet skilled in the craft, and as a result some (to be honest, most) of the rings came out connected together. Now he's asking for your help separating the rings: decide which rings to break so as to get the maximum number of free rings possible.
All of the rings are numbered, and you are told which of the rings are connected. This information is given as a sequence of sets. Each set describes the connected rings. For example, {1, 2} means that the 1st and 2nd rings are connected. You should count how many rings need to be broken to get the maximum number of separate rings. The rings are numbered from 1 to N, where N is the total quantity of rings.
[image: example-rings - https://static.checkio.org/media/task/media/0d98b24304034e2e9017ba00fc51f6e3/example-rings.svg]
(Sorry, I don't know how to convert the SVG to a photo on a Mac.)
In the above image you can see the connections: ({1,2},{2,3},{3,4},{4,5},{4,6},{6,5}). The optimal solution here would be to break 3 rings, making 3 full and separate rings. So the result is 3.
Input: Information about the connected rings as a tuple of sets with integers.
Output: The number of rings to break as an integer.
My code works only when the test case is small, so it is not practical (I guess it can't even pass the tests):
from functools import reduce
import copy

def break_rings(rings):
    max_ring = max(reduce(set.union, rings))
    rings = list(rings)
    possible_set = [list(bin(i)[2:].rjust(max_ring, '0')) for i in range(2**max_ring)]
    possible_set = [list(map(int, j)) for j in possible_set]
    min_result = max_ring
    for test_case in possible_set:
        tmp = copy.copy(rings)
        tmp2 = copy.copy(rings)
        for index, value in enumerate(test_case):
            if value:
                for set_connect in tmp:
                    if index + 1 in set_connect and set_connect in tmp2:
                        tmp2.remove(set_connect)
        if not tmp2:
            min_result = min(sum(test_case), min_result)
    return min_result
So I think I need a graph algorithm here, but I just don't know what kind of problem I am facing.
Can you help me improve the algorithm?
Thank you for looking at this problem!
You can think of this as a type of graph problem called vertex cover.
Draw a graph with a vertex for each ring, and an edge for each connection, i.e. each pair of joined rings.
Your task is to disconnect the rings with the minimum number of breakages. A connection is broken if the ring at either end of it is broken. In other words, you need to choose a set of rings (vertices) such that every connection (edge) is incident to one of the chosen rings.
This is exactly the vertex cover problem.
Unfortunately, vertex cover is NP-complete, so no polynomial-time algorithm is currently known.
I would recommend improving the speed of your algorithm by rejecting bad cases earlier. For example, use a backtracking algorithm that decides, ring by ring, whether to break it or not. If you choose not to break a ring, you can immediately conclude that a lot of other rings must be broken; a sketch of this idea follows.
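Here is one way to implement that pruning (my own sketch, not the answerer's code): pick any remaining connection; at least one of its two rings must be broken, so it suffices to branch on those two choices rather than enumerating all 2^n subsets.

def min_breaks(rings):
    """Minimum number of rings to break (exact, via vertex cover branching)."""
    edges = [tuple(s) for s in rings]

    def solve(edges, broken):
        if not edges:                      # nothing left connected
            return broken
        u, v = edges[0]                    # one of u, v must be broken
        best_u = solve([e for e in edges if u not in e], broken + 1)
        best_v = solve([e for e in edges if v not in e], broken + 1)
        return min(best_u, best_v)

    return solve(edges, 0)

print(min_breaks(({1, 2}, {2, 3}, {3, 4}, {4, 5}, {4, 6}, {6, 5})))  # 3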
A while back I wrote a simple Python program to brute-force the single solution of the Drive Ya Nuts puzzle.
[image of the puzzle (source: tabbykat.com)]
The puzzle consists of 7 hexagons with the numbers 1-6 on them, and all pieces must be aligned so that each number is adjacent to the same number on the next piece.
The puzzle has ~1.4G non-unique possibilities: there are 7! ways to order the pieces (for example, center = 0, top = 1, continuing in clockwise order...). After ordering the pieces, you can rotate each piece in 6 ways (each piece is a hexagon), so you get 6**7 possible rotations for a given permutation of the 7 pieces. In total: 7! * (6**7) = ~1.4G possibilities. The following Python code generates these possible solutions:
from itertools import product

def rotations(p):
    for i in range(len(p)):
        yield p[i:] + p[:i]

def permutations(l):
    if len(l) <= 1:
        yield l
    else:
        for perm in permutations(l[1:]):
            for i in range(len(perm) + 1):
                yield perm[:i] + l[0:1] + perm[i:]

def constructs(l):
    for p in permutations(l):
        for c in product(*(rotations(x) for x in p)):
            yield c
However, note that the puzzle has only ~0.2G unique possible solutions: you must divide the total number of possibilities by 6, since each possible solution is equivalent to 5 others (simply rotate the entire puzzle by 1/6 of a turn).
Is there a better way to generate only the unique possibilities for this puzzle?
To get only unique valid solutions, you can fix the orientation of the piece in the centre. For example, you can assume that the "1" on the piece in the centre is always pointing "up".
If you're not already doing so, you can make your program much more efficient by checking for a valid solution after placing each piece. Once you've placed two pieces in an invalid way, you don't need to enumerate all of the other invalid combinations.
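As a sketch of the first suggestion, applied to the generators from the question (my adaptation, not tested against the real puzzle): keep the centre piece in a fixed orientation and rotate only the six outer pieces, which cuts the count from 7! * 6**7 to 7! * 6**6.

# Reuses rotations() and permutations() from the question.
from itertools import product

def constructs_fixed_centre(l):
    for p in permutations(l):
        # p[0] goes in the centre: leave its orientation fixed,
        # and rotate only the six outer pieces.
        for outer in product(*(rotations(x) for x in p[1:])):
            yield (p[0],) + outer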
If there were no piece in the centre, this would be easy. Simply consider only the situations where piece 0 is at the top.
But we can extend that idea to the actual situation. You can consider only the situations where piece i is in the centre, and piece (i+1) % 7 is at the top.
I think the search space is quite small, though the programming might be awkward.
We have seven choices for the centre piece. Then we have 6 choices for the piece above that, but its orientation is fixed, as its bottom edge must match the top edge of the centre piece; similarly, whenever we choose a piece to go in a slot, the orientation is fixed.

There are fewer choices for the remaining pieces. Suppose for example we had chosen the centre piece and top piece as in the picture; then the top-right piece must have (clockwise) consecutive edges (5,3) to match the pieces in place, and only three of the pieces have such a pair of edges (and in fact we've already chosen one of them as the centre piece).

One could first build a table with a list of pieces for each edge pair, and then, for each of the 42 choices of centre and top, proceed clockwise, choosing only among the pieces that have the required pair of edges (to match the centre piece and the previously placed piece) and backtracking if there are no such pieces.

I reckon the most common pair of edges is (1,6), which occurs on 4 pieces; two other edge pairs ((6,5) and (5,3)) occur on 3 pieces; there are 9 edge pairs that occur on 2 pieces, 14 that occur on 1 piece, and 4 that don't occur at all. So a very pessimistic estimate of the number of choices we must make is 7*6*4*3*3*2, or 3024.
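Here is a sketch of that edge-pair table (my code, with stand-in piece data, since the actual pieces would be read off the physical puzzle): each piece is a tuple of its six numbers in clockwise order.

from collections import defaultdict
import random

# Hypothetical stand-in data: seven random hexagon pieces. The real pieces
# would be transcribed from the puzzle itself.
random.seed(0)
pieces = [tuple(random.sample(range(1, 7), 6)) for _ in range(7)]

def edge_pair_table(pieces):
    """Map each clockwise-consecutive edge pair (a, b) to the indices
    of the pieces that have that pair somewhere around their rim."""
    table = defaultdict(list)
    for idx, piece in enumerate(pieces):
        for i in range(6):
            pair = (piece[i], piece[(i + 1) % 6])
            table[pair].append(idx)
    return table

table = edge_pair_table(pieces)
print(table[(5, 3)])   # indices of pieces with clockwise edges 5 then 3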