Unable to understand the logic behind the solution [FrogRiverOne] - Python

I am unable to understand the logic behind the solution for Codility's FrogRiverOne here: https://codility.com/demo/take-sample-test/frog_river_one
Task description
A small frog wants to get to the other side of a river. The frog is initially located on one bank of the river (position 0) and wants to get to the opposite bank (position X+1). Leaves fall from a tree onto the surface of the river.
You are given an array A consisting of N integers representing the falling leaves. A[K] represents the position where one leaf falls at time K, measured in seconds.
The goal is to find the earliest time when the frog can jump to the other side of the river. The frog can cross only when leaves appear at every position across the river from 1 to X (that is, we want to find the earliest moment when all the positions from 1 to X are covered by leaves). You may assume that the speed of the current in the river is negligibly small, i.e. the leaves do not change their positions once they fall in the river.
For example, you are given integer X = 5 and array A such that:
A[0] = 1
A[1] = 3
A[2] = 1
A[3] = 4
A[4] = 2
A[5] = 3
A[6] = 5
A[7] = 4
In second 6, a leaf falls into position 5. This is the earliest time when leaves appear in every position across the river.
Write a function:
def solution(X, A)
that, given a non-empty array A consisting of N integers and integer X, returns the earliest time when the frog can jump to the other side of the river.
If the frog is never able to jump to the other side of the river, the function should return −1.
For example, given X=5 and array A such that:
A[0] = 1
A[1] = 3
A[2] = 1
A[3] = 4
A[4] = 2
A[5] = 3
A[6] = 5
A[7] = 4
the function should return 6, as explained above.
Assume that:
N and X are integers within the range [1..100,000];
each element of array A is an integer within the range [1..X].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(X) (not counting the storage required for input arguments).
Solution:
Input arguments to the function: (2, [2, 2, 2, 2, 2]) and (5, [1, 3, 1, 4, 2, 3, 5, 4])
def solution(X, A):
    covered = 0
    covered_a = [-1] * X
    for index, element in enumerate(A):
        if covered_a[element - 1] == -1:
            covered_a[element - 1] = element
            covered += 1
            if covered == X:
                return index
    return -1
I want to understand the logic behind creating a boolean array and subtracting 1 element-wise from the input array A.

It's because you want to "flag" all the numbers you've seen so far. So it begins with all of them on 'False' because you haven't seen any of them yet.
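To make that concrete, here is a minimal sketch of the same bookkeeping with an actual boolean array; the element - 1 merely converts the 1-based leaf positions 1..X into 0-based list indices:

def solution(X, A):
    seen = [False] * X               # seen[p] is True once a leaf covers position p+1
    remaining = X                    # positions still uncovered
    for time, position in enumerate(A):
        if not seen[position - 1]:   # -1 maps positions 1..X to indices 0..X-1
            seen[position - 1] = True
            remaining -= 1
            if remaining == 0:       # every position 1..X is covered
                return time
    return -1

print(solution(5, [1, 3, 1, 4, 2, 3, 5, 4]))  # 6
print(solution(2, [2, 2, 2, 2, 2]))           # -1, position 1 is never covered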

I'm going to give you my Java solution to this question which scored 100%. The main strategy is to use java.util.Set to store all required integers for a full jump and a second java.util.Set to keep storing current leaves and to keep checking if the first set fully exists in the second set.
package com.codility.lesson04.countingelements;

import java.util.HashSet;
import java.util.Set;

public class FrogRiverOne {
    public int solution(int X, int[] A) {
        Set<Integer> requiredLeavesSet = new HashSet<Integer>();
        for (int i = 1; i <= X; i++) {
            requiredLeavesSet.add(i);
        }
        Set<Integer> currentLeavesSet = new HashSet<Integer>();
        for (int p = 0; p < A.length; p++) {
            currentLeavesSet.add(A[p]);
            // keep adding to current leaves set until it is at least the same size as required leaves set
            if (currentLeavesSet.size() < requiredLeavesSet.size()) continue;
            if (currentLeavesSet.containsAll(requiredLeavesSet)) {
                return p;
            }
        }
        return -1;
    }
}
You can find the code and unit tests for this problem here and an entire list of Codility solutions with explanations of the strategies here.

This is the best solution that I came up with, and it is very easy to understand. It gives O(N) time complexity.
def solution(X, A):
    positions = set()
    seconds = 0
    for i in range(0, len(A)):
        if A[i] not in positions and A[i] <= X:
            positions.add(A[i])
            seconds = i
            if len(positions) == X:
                return seconds
    return -1

Related

Google foo.bar challenge "Hey I already did that", not passing all the test cases

This is the description of the problem I am trying to solve.
Hey, I Already Did That!
Commander Lambda uses an automated algorithm to assign minions randomly to tasks, in order to keep minions on their toes. But you've noticed a flaw in the algorithm -- it eventually loops back on itself, so that instead of assigning new minions as it iterates, it gets stuck in a cycle of values so that the same minions end up doing the same tasks over and over again. You think proving this to Commander Lambda will help you make a case for your next promotion.
You have worked out that the algorithm has the following process:
Start with a random minion ID n, which is a nonnegative integer of length k in base b
Define x and y as integers of length k. x has the digits of n in descending order, and y has the digits of n in ascending order
Define z = x - y. Add leading zeros to z to maintain length k if necessary
Assign n = z to get the next minion ID, and go back to step 2
For example, given minion ID n = 1211, k = 4, b = 10, then x = 2111, y = 1112 and z = 2111 - 1112 = 0999. Then the next minion ID will be n = 0999 and the algorithm iterates again: x = 9990, y = 0999 and z = 9990 - 0999 = 8991, and so on.
Depending on the values of n, k (derived from n), and b, at some point the algorithm reaches a cycle, such as by reaching a constant value. For example, starting with n = 210022, k = 6, b = 3, the algorithm will reach the cycle of values [210111, 122221, 102212] and it will stay in this cycle no matter how many times it continues iterating. Starting with n = 1211, the routine will reach the integer 6174, and since 7641 - 1467 is 6174, it will stay as that value no matter how many times it iterates.
Given a minion ID as a string n representing a nonnegative integer of length k in base b, where 2 <= k <= 9 and 2 <= b <= 10, write a function solution(n, b) which returns the length of the ending cycle of the algorithm above starting with n. For instance, in the example above, solution(210022, 3) would return 3, since iterating on 102212 would return to 210111 when done in base 3. If the algorithm reaches a constant, such as 0, then the length is 1.
My solution isn't passing 5 of the 10 test cases for the challenge. I can't tell whether there's a bug in my code, since it does exactly what the problem describes, or whether it's simply too inefficient.
Here's my code for the problem. I have commented it for easier understanding.
def convert_to_any_base(num, b):  # returns the id, converted back to the original base, as a string
    digits = []
    while num / b != 0:
        digits.append(str(num % b))
        num //= b
    result = ''.join(digits[::-1])
    return result

def solution(n, b):
    minion_id_list = []  # list storing all occurrences of the minion ids
    k = len(n)
    while n not in minion_id_list:  # until the minion id repeats
        minion_id_list.append(n)  # adds the id to the list
        x = ''.join(sorted(n, reverse=True))  # gives x in descending order
        y = x[::-1]  # gives y in ascending order
        if b == 10:  # if the number is already decimal
            n = str(int(x) - int(y))  # just calculate the difference
        else:
            n = int(x, b) - int(y, b)  # else convert to decimal and calculate the difference
            n = convert_to_any_base(n, b)  # then convert it back to the given base
        n = (k - len(n)) * '0' + n  # add zeroes in front to maintain the id length
        if int(n) == 0:  # for the case that it reaches a constant, return 1
            return 1
    return len(minion_id_list[minion_id_list.index(n):])  # length of the cycle, from the first
                                                          # occurrence of the repeated id to the end
I have been trying this problem for quite a while and still don't understand what's wrong with it. Any help will be appreciated.

Recycled array solution to Find minimum in Rotated sorted Array

I am working on a hard but stupid bisect search problem and have been debugging it for hours.
Find Minimum in Rotated Sorted Array II
Hard
Suppose an array sorted in ascending order is rotated at some pivot unknown to you beforehand.
(i.e., [0,1,2,4,5,6,7] might become [4,5,6,7,0,1,2]).
Find the minimum element.
The array may contain duplicates.
Example 1:
Input: [1,3,5]
Output: 1
Example 2:
Input: [2,2,2,0,1]
Output: 0
Note:
This is a follow up problem to Find Minimum in Rotated Sorted Array.
Would allowing duplicates affect the run-time complexity? How and why?
The widely accepted answer takes O(n) time,
class SolutionK:
    def findMin(self, nums):
        lo, hi = 0, len(nums) - 1
        while lo < hi:
            mid = (hi + lo) // 2
            if nums[mid] > nums[hi]:
                lo = mid + 1
            elif nums[mid] < nums[hi]:
                hi = mid
            else:
                hi -= 1
        return nums[lo]

# why not min(nums) or brute force
I think the problem might be solved with a recycled (wrap-around) array.
Since there are duplicates, we can find the rightmost maximum; the position just after it (max + 1) then holds the minimum.
# the mid
lo = 0
hi = len(nums)
mid = (lo + hi) // 2
mid = mid % len(nums)
and the terminating condition
if nums[mid-1] <= nums[mid] > nums[mid+1]: return mid  # as the peak
Unfortunately I cannot work out the conditions for shrinking the interval.
Could you please give some hints?
You can indeed use bisection. In case the array consists of only unique numbers and has been rotated, either the leftmost or the rightmost value will be out of order with respect to the middle point. That is, array[0] <= array[len(array) // 2] <= array[-1] will be False. On the other hand, this condition may hold if:
the array is not rotated at all,
or there are duplicates, such as [1, 1, 1, 1, 2] => (rotate left 1) [1, 1, 1, 2, 1].
So we can separately check the left and right parts of the condition (array[0] and array[-1] respectively) and, in case one of them is invalidated, check the corresponding sub-array (the left and right sub-array respectively). In case neither condition is invalidated, we need to check both sides and compare.
The following is an example implementation (it only uses min where fewer than three elements are involved, i.e. a simple comparison could be made as well):
def minimum(array):
    if len(array) <= 2:
        return min(array)
    midpoint = len(array) // 2
    if array[0] > array[midpoint]:
        return minimum(array[:midpoint + 1])
    elif array[midpoint] > array[-1]:
        return minimum(array[midpoint + 1:])
    else:  # Possibly dealing with duplicates.
        return min(minimum(array[:midpoint]),
                   minimum(array[midpoint:]))
from collections import deque
from random import randint, choices

for test in range(1000):
    l = randint(10, 100)
    array = deque(sorted(choices(list(range(l // 2)), k=l)))
    array.rotate(randint(-len(array), len(array)))
    array = list(array)
    assert min(array) == minimum(array)

Shuffling a list with maximum distance travelled [duplicate]

I have tried to ask this question before, but have never been able to word it correctly. I hope I have it right this time:
I have a list of unique elements. I want to shuffle this list to produce a new list. However, I would like to constrain the shuffle, such that each element's new position is at most d away from its original position in the list.
So for example:
L = [1,2,3,4]
d = 2
answer = magicFunction(L, d)
Now, one possible outcome could be:
>>> print(answer)
[3,1,2,4]
Notice that 3 has moved two indices, 1 and 2 have moved one index, and 4 has not moved at all. Thus, this is a valid shuffle, per my previous definition. The following snippet of code can be used to validate this:
old = {e:i for i,e in enumerate(L)}
new = {e:i for i,e in enumerate(answer)}
valid = all(abs(i-new[e])<=d for e,i in old.items())
Now, I could easily just generate all possible permutations of L, filter for the valid ones, and pick one at random. But that doesn't seem very elegant. Does anyone have any other ideas about how to accomplish this?
This is going to be long and dry.
I have a solution that produces a uniform distribution. It requires O(len(L) * d**d) time and space for precomputation, then performs shuffles in O(len(L)*d) time¹. If a uniform distribution is not required, the precomputation is unnecessary, and the shuffle time can be reduced to O(len(L)) due to faster random choices; I have not implemented the non-uniform distribution. Both steps of this algorithm are substantially faster than brute force, but they're still not as good as I'd like them to be. Also, while the concept should work, I have not tested my implementation as thoroughly as I'd like.
Suppose we iterate over L from the front, choosing a position for each element as we come to it. Define the lag as the distance between the next element to place and the first unfilled position. Every time we place an element, the lag grows by at most one, since the index of the next element is now one higher, but the index of the first unfilled position cannot become lower.
Whenever the lag is d, we are forced to place the next element in the first unfilled position, even though there may be other empty spots within a distance of d. If we do so, the lag cannot grow beyond d, we will always have a spot to put each element, and we will generate a valid shuffle of the list. Thus, we have a general idea of how to generate shuffles; however, if we make our choices uniformly at random, the overall distribution will not be uniform. For example, with len(L) == 3 and d == 1, there are 3 possible shuffles (one for each position of the middle element), but if we choose the position of the first element uniformly, one shuffle becomes twice as likely as either of the others.
If we want a uniform distribution over valid shuffles, we need to make a weighted random choice for the position of each element, where the weight of a position is based on the number of possible shuffles if we choose that position. Done naively, this would require us to generate all possible shuffles to count them, which would take O(d**len(L)) time. However, the number of possible shuffles remaining after any step of the algorithm depends only on which spots we've filled, not what order they were filled in. For any pattern of filled or unfilled spots, the number of possible shuffles is the sum of the number of possible shuffles for each possible placement of the next element. At any step, there are at most d possible positions to place the next element, and there are O(d**d) possible patterns of unfilled spots (since any spot further than d behind the current element must be full, and any spot d or further ahead must be empty). We can use this to generate a Markov chain of size O(len(L) * d**d), taking O(len(L) * d**d) time to do so, and then use this Markov chain to perform shuffles in O(len(L)*d) time.
Example code (currently not quite O(len(L)*d) due to inefficient Markov chain representation):
import random

# states are (k, filled_spots) tuples, where k is the index of the next
# element to place, and filled_spots is a tuple of booleans
# of length 2*d, representing whether each index from k-d to
# k+d-1 has an element in it. We pretend indices outside the array are
# full, for ease of representation.

def _successors(n, d, state):
    '''Yield all legal next filled_spots and the move that takes you there.

    Doesn't handle k=n.'''
    k, filled_spots = state
    next_k = k + 1
    # If k+d is a valid index, this represents the empty spot there.
    possible_next_spot = (False,) if k + d < n else (True,)
    if not filled_spots[0]:
        # Must use that position.
        yield k - d, filled_spots[1:] + possible_next_spot
    else:
        # Can fill any empty spot within a distance d.
        shifted_filled_spots = list(filled_spots[1:] + possible_next_spot)
        for i, filled in enumerate(shifted_filled_spots):
            if not filled:
                successor_state = shifted_filled_spots[:]
                successor_state[i] = True
                yield next_k - d + i, tuple(successor_state)
                # next_k instead of k in that index computation, because
                # i is indexing relative to shifted_filled_spots instead
                # of filled_spots

def _markov_chain(n, d):
    '''Precompute a table of weights for generating shuffles.

    _markov_chain(n, d) produces a table that can be fed to
    _distance_limited_shuffle to permute lists of length n in such a way that
    no list element moves a distance of more than d from its initial spot,
    and all permutations satisfying this condition are equally likely.

    This is expensive.
    '''
    if d >= n - 1:
        # We don't need the table, and generating a table for d >= n
        # complicates the indexing a bit. It's too complicated already.
        return None
    table = {}
    termination_state = (n, (d * 2 * (True,)))
    table[termination_state] = 1

    def possible_shuffles(state):
        try:
            return table[state]
        except KeyError:
            k, _ = state
            count = table[state] = sum(
                possible_shuffles((k + 1, next_filled_spots))
                for (_, next_filled_spots) in _successors(n, d, state)
            )
            return count

    initial_state = (0, (d * (True,) + d * (False,)))
    possible_shuffles(initial_state)
    return table

def _distance_limited_shuffle(l, d, table):
    # Generate an index into the set of all permutations, then use the
    # markov chain to efficiently find which permutation we picked.
    n = len(l)
    if d >= n - 1:
        random.shuffle(l)
        return
    permutation = [None] * n
    state = (0, (d * (True,) + d * (False,)))
    permutations_to_skip = random.randrange(table[state])
    for i, item in enumerate(l):
        for placement_index, new_filled_spots in _successors(n, d, state):
            new_state = (i + 1, new_filled_spots)
            if table[new_state] <= permutations_to_skip:
                permutations_to_skip -= table[new_state]
            else:
                state = new_state
                permutation[placement_index] = item
                break
    return permutation

class Shuffler(object):
    def __init__(self, n, d):
        self.n = n
        self.d = d
        self.table = _markov_chain(n, d)

    def shuffled(self, l):
        if len(l) != self.n:
            raise ValueError('Wrong input size')
        return _distance_limited_shuffle(l, self.d, self.table)

    __call__ = shuffled
¹ We could use a tree-based weighted random choice algorithm to improve the shuffle time to O(len(L)*log(d)), but since the table becomes so huge for even moderately large d, this doesn't seem worthwhile. Also, the factors of d**d in the bounds are overestimates, but the actual factors are still at least exponential in d.
In short, the list that should be shuffled gets ordered by the sum of index and a random number.
import random
xs = range(20) # list that should be shuffled
d = 5 # distance
[x for i,x in sorted(enumerate(xs), key= lambda (i,x): i+(d+1)*random.random())]
Out:
[1, 4, 3, 0, 2, 6, 7, 5, 8, 9, 10, 11, 12, 14, 13, 15, 19, 16, 18, 17]
That's basically it. But it may look a little overwhelming, therefore...
The algorithm in more detail
To understand this better, consider this alternative implementation of an ordinary, random shuffle:
import random
sorted(range(10), key = lambda x: random.random())
Out:
[2, 6, 5, 0, 9, 1, 3, 8, 7, 4]
In order to constrain the distance, we have to implement an alternative sort key function that depends on the index of an element. The function sort_criterion is responsible for that.
import random

def exclusive_uniform(a, b):
    "returns a random value in the interval [a, b)"
    return a + (b - a) * random.random()

def distance_constrained_shuffle(sequence, distance,
                                 randmoveforward=exclusive_uniform):
    def sort_criterion(enumerate_tuple):
        """
        returns the index plus a random offset,
        such that the result can overtake at most 'distance' elements
        """
        indx, value = enumerate_tuple
        return indx + randmoveforward(0, distance + 1)

    # get enumerated, shuffled list
    enumerated_result = sorted(enumerate(sequence), key=sort_criterion)
    # remove enumeration
    result = [x for i, x in enumerated_result]
    return result
With the argument randmoveforward you can pass a random number generator with a different probability density function (pdf) to modify the distance distribution.
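For example (my own illustration, not part of the original answer), a density biased toward zero keeps most elements near their original spots while still respecting the distance bound:

import random

def biased_uniform(a, b):
    "random value in [a, b), biased toward a, so long moves become rare"
    return a + (b - a) * random.random() ** 2

print(distance_constrained_shuffle(list(range(20)), 5,
                                   randmoveforward=biased_uniform))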
The remainder is testing and evaluation of the distance distribution.
Test function
Here is an implementation of the test function. The validate function is actually taken from the OP, but I removed the creation of one of the dictionaries for performance reasons.
def test(num_cases=10, distance=3, sequence=range(1000)):
    def validate(d, lst, answer):
        #old = {e:i for i,e in enumerate(lst)}
        new = {e: i for i, e in enumerate(answer)}
        return all(abs(i - new[e]) <= d for i, e in enumerate(lst))
        #return all(abs(i-new[e])<=d for e,i in old.iteritems())

    for _ in range(num_cases):
        result = distance_constrained_shuffle(sequence, distance)
        if not validate(distance, sequence, result):
            print "Constraint violated. ", result
            break
    else:
        print "No constraint violations"

test()
Out:
No constraint violations
Distance distribution
I am not sure whether there is a way to make the distances uniformly distributed, but here is a function to validate the distribution.
def distance_distribution(maxdistance=3, sequence=range(3000)):
    from collections import Counter

    def count_distances(lst, answer):
        new = {e: i for i, e in enumerate(answer)}
        return Counter(i - new[e] for i, e in enumerate(lst))

    answer = distance_constrained_shuffle(sequence, maxdistance)
    counter = count_distances(sequence, answer)
    sequence_length = float(len(sequence))
    distances = range(-maxdistance, maxdistance + 1)
    return distances, [counter[d] / sequence_length for d in distances]

distance_distribution()
Out:
([-3, -2, -1, 0, 1, 2, 3],
[0.01,
0.076,
0.22166666666666668,
0.379,
0.22933333333333333,
0.07766666666666666,
0.006333333333333333])
Or for a case with greater maximum distance:
distance_distribution(maxdistance=9, sequence=range(100*1000))
This is a very difficult problem, but it turns out there is a solution in the academic literature, in an influential paper by Mark Jerrum, Alistair Sinclair, and Eric Vigoda, A Polynomial-Time Approximation Algorithm for the Permanent of a Matrix with Nonnegative Entries, Journal of the ACM, Vol. 51, No. 4, July 2004, pp. 671–697. http://www.cc.gatech.edu/~vigoda/Permanent.pdf.
Here is the general idea: first write down two copies of the numbers in the array that you want to permute. Say
1 1
2 2
3 3
4 4
Now connect a node on the left to a node on the right if mapping from the number on the left to the position on the right is allowed by the restrictions in place. So if d=1 then 1 on the left connects to 1 and 2 on the right, 2 on the left connects to 1, 2, 3 on the right, 3 on the left connects to 2, 3, 4 on the right, and 4 on the left connects to 3, 4 on the right.
1 - 1
X
2 - 2
X
3 - 3
X
4 - 4
The resulting graph is bipartite. A valid permutation corresponds to a perfect matching in the bipartite graph. A perfect matching, if it exists, can be found in O(VE) time (or somewhat better, for more advanced algorithms).
Now the problem becomes one of generating a uniformly distributed random perfect matching. I believe that can be done, approximately anyway. Uniformity of the distribution is the really hard part.
What does this have to do with permanents? Consider a matrix representation of our bipartite graph, where a 1 means an edge and a 0 means no edge:
1 1 0 0
1 1 1 0
0 1 1 1
0 0 1 1
The permanent of the matrix is like the determinant, except there are no negative signs in the definition. So we take exactly one element from each row and column, multiply them together, and add up over all choices of row and column. The terms of the permanent correspond to permutations; the term is 0 if any factor is 0, in other words if the permutation is not valid according to the matrix/bipartite graph representation; the term is 1 if all factors are 1, in other words if the permutation is valid according to the restrictions. In summary, the permanent of the matrix counts all permutations satisfying the restriction represented by the matrix/bipartite graph.
It turns out that unlike calculating determinants, which can be accomplished in O(n^3) time, calculating permanents is #P-complete, so finding an exact answer is not feasible in general. However, if we can estimate the number of valid permutations, we can estimate the permanent. Jerrum et al. approached the problem of counting valid permutations by generating valid permutations uniformly (within a certain error, which can be controlled); an estimate of the value of the permanent can be obtained by a fairly elaborate procedure (section 5 of the paper referenced), but we don't need that to answer the question at hand.
The running time of Jerrum's algorithm to calculate the permanent is O(n^11) (ignoring logarithmic factors). I can't immediately tell from the paper the running time of the part of the algorithm that uniformly generates bipartite matchings, but it appears to be over O(n^9). However, another paper reduces the running time for the permanent to O(n^7): http://www.cc.gatech.edu/fac/vigoda/FasterPermanent_SODA.pdf; in that paper they claim that it is now possible to get a good estimate of a permanent of a 100x100 0-1 matrix. So it should be possible to (almost) uniformly generate restricted permutations for lists of 100 elements.
There may be further improvements, but I got tired of looking.
If you want an implementation, I would start with the O(n^11) version in Jerrum's paper, and then take a look at the improvements if the original algorithm is not fast enough.
There is pseudo-code in Jerrum's paper, but I haven't tried it so I can't say how far the pseudo-code is from an actual implementation. My feeling is it isn't too far. Maybe I'll give it a try if there's interest.
I am not sure how good it is, but maybe something like:
create a list of the same length as the initial list L; each element of this list should be a list of the initial indices allowed to be moved here; for instance [[0,1,2],[0,1,2,3],[0,1,2,3],[1,2,3]] if I understand your example correctly;
take the smallest sublist (or any of the smallest sublists if several share the same length);
pick a random element in it with random.choice; this element is the index of the element in the initial list to be mapped to the current location (use another list for building your new list);
remove the randomly chosen element from all sublists
For instance:
L = [ "A", "B", "C", "D" ]
i = [[0,1,2],[0,1,2,3],[0,1,2,3],[1,2,3]]
# I take [0,1,2] and pick randomly 1 inside
# I remove the value '1' from all sublists and since
# the first sublist has already been handled I set it to None
# (and my result will look as [ "B", None, None, None ]
i = [None,[0,2,3],[0,2,3],[2,3]]
# I take the last sublist and pick randomly 3 inside
# result will be ["B", None, None, "D" ]
i = [None,[0,2], [0,2], None]
etc.
I haven't tried it however. Regards.
My idea is to generate permutations that move elements at most d steps by generating d random permutations which each move elements at most 1 step and chaining them together.
We can generate permutations which move at most 1 step quickly by the following recursive procedure: consider a permutation of {1,2,3,...,n}. The last item, n, can move either 0 or 1 place. If it moves 0 places, n is fixed, and we have reduced the problem to generating a permutation of {1,2,...,n-1} in which every item moves at most one place.
On the other hand, if n moves 1 place, it must occupy position n-1. Then n-1 must occupy position n (if any smaller number occupies position n, it will have moved by more than 1 place). In other words, we must have a swap of n and n-1, and after swapping we have reduced the problem to finding such a permutation of the remainder of the array {1,...,n-2}.
Such permutations can be constructed in O(n) time, clearly.
Those two choices should be selected with weighted probabilities. Since I don't know the weights (though I have a theory, see below) maybe the choice should be 50-50 ... but see below.
A more accurate estimate of the weights might be as follows: note that the number of such permutations follows a recursion that is the same as the Fibonacci sequence: f(n) = f(n-1) + f(n-2). We have f(1) = 1 and f(2) = 2 ({1,2} goes to {1,2} or {2,1}), so the numbers really are the Fibonacci numbers. So my guess for the probability of choosing n fixed vs. swapping n and n-1 would be f(n-1)/f(n) vs. f(n-2)/f(n). Since the ratio of consecutive Fibonacci numbers quickly approaches the Golden Ratio, a reasonable approximation to the probabilities is to leave n fixed 61% of the time and swap n and n-1 39% of the time.
To construct permutations where items move at most d places, we just repeat the process d times. The running time is O(nd).
Here is an outline of an algorithm.
arr = {1,2,...,n};
for (i = 0; i < d; i++) {
    j = n-1;
    while (j > 0) {
        u = random uniform in interval (0,1)
        if (u < 0.61) { // related to golden ratio phi; more decimals may help
            j -= 1;
        } else {
            swap items at positions j and j-1 of arr // 0-based indexing
            j -= 2;
        }
    }
}
Since each pass moves items at most 1 place from their start, d passes will move items at most d places. The only question is the uniform distribution of the permutations. It would probably be a long proof, if it's even true, so I suggest assembling empirical evidence for various n's and d's. Probably to prove the statement, we would have to switch from using the golden ratio approximation to f(n-1)/f(n-2) in place of 0.61.
There might even be some weird reason why some permutations might be missed by this procedure, but I'm pretty sure that doesn't happen. Just in case, though, it would be helpful to have a complete inventory of such permutations for some values of n and d to check the correctness of my proposed algorithm.
Update
I found an off-by-one error in my "pseudocode", and I corrected it. Then I implemented it in Java to get a sense of the distribution. Code is below. The distribution is far from uniform, I think because there are many ways of getting restricted permutations with short max distances (move forward, move back vs. move back, move forward, for example) but few ways of getting long distances (move forward, move forward). I can't think of a way to fix the uniformity issue with this method.
import java.util.Random;
import java.util.Map;
import java.util.TreeMap;

class RestrictedPermutations {
    private static Random rng = new Random();

    public static void rPermute(Integer[] a, int d) {
        for (int i = 0; i < d; i++) {
            int j = a.length - 1;
            while (j > 0) {
                double u = rng.nextDouble();
                if (u < 0.61) { // related to golden ratio phi; more decimals may help
                    j -= 1;
                } else {
                    int t = a[j];
                    a[j] = a[j-1];
                    a[j-1] = t;
                    j -= 2;
                }
            }
        }
    }

    public static void main(String[] args) {
        int numTests = Integer.parseInt(args[0]);
        int d = 2;
        Map<String,Integer> count = new TreeMap<String,Integer>();
        for (int t = 0; t < numTests; t++) {
            Integer[] a = {1,2,3,4,5};
            rPermute(a, d);
            // convert a to String for storage in Map
            String s = "(";
            for (int i = 0; i < a.length-1; i++) {
                s += a[i] + ",";
            }
            s += a[a.length-1] + ")";
            int c = count.containsKey(s) ? count.get(s) : 0;
            count.put(s, c+1);
        }
        for (String k : count.keySet()) {
            System.out.println(k + ": " + count.get(k));
        }
    }
}
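Regarding the "complete inventory" mentioned above: at these sizes brute force is enough. Here is a minimal sketch (my own) that enumerates every valid permutation, against which the counts printed by the Java test can be compared (modulo the 0- vs 1-based labels):

from itertools import permutations

def inventory(n, d):
    """All permutations of range(n) where no element moves more than d places."""
    return [p for p in permutations(range(n))
            if all(abs(p[i] - i) <= d for i in range(n))]

print(len(inventory(5, 2)))  # number of distinct keys the Java test should eventually print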
Here are two sketches in Python; one swap-based, the other non-swap-based. In the first, the idea is to keep track of where the indexes have moved and test if the next swap would be valid. An additional variable is added for the number of swaps to make.
from random import randint

def swap(a, b, L):
    L[a], L[b] = L[b], L[a]

def magicFunction(L, d, numSwaps):
    n = len(L)
    new = list(range(0, n))
    for i in xrange(0, numSwaps):
        x = randint(0, n - 1)
        y = randint(max(0, x - d), min(n - 1, x + d))
        while abs(new[x] - y) > d or abs(new[y] - x) > d:
            y = randint(max(0, x - d), min(n - 1, x + d))
        swap(x, y, new)
        swap(x, y, L)
    return L

print(magicFunction([1,2,3,4], 2, 3))            # [2, 1, 4, 3]
print(magicFunction([1,2,3,4,5,6,7,8,9], 2, 4))  # [2, 3, 1, 5, 4, 6, 8, 7, 9]
Using print(collections.Counter(tuple(magicFunction([0, 1, 2], 1, 1)) for i in xrange(1000))) we find that the identity permutation comes up heavy with this code (the reason why is left as an exercise for the reader).
Alternatively, we can think about it as looking for a permutation matrix with interval restrictions, where abs(i - j) <= d wherever M(i,j) equals 1. We can construct a one-off random path by picking a random j for each row from those still available. x's in the following example represent matrix cells that would invalidate the solution (a northwest-to-southeast diagonal would represent the identity permutation), and restrictions represent how many i's are still available for each j. (Adapted from my previous version to choose both the next i and the next j randomly, inspired by user2357112's answer):
n = 5, d = 2
Start:
0 0 0 x x
0 0 0 0 x
0 0 0 0 0
x 0 0 0 0
x x 0 0 0
restrictions = [3,4,5,4,3] # how many i's are still available for each j
1.
0 0 1 x x # random choice
0 0 0 0 x
0 0 0 0 0
x 0 0 0 0
x x 0 0 0
restrictions = [2,3,0,4,3] # update restrictions in the neighborhood of (i ± d)
2.
0 0 1 x x
0 0 0 0 x
0 0 0 0 0
x 0 0 0 0
x x 0 1 0 # random choice
restrictions = [2,3,0,0,2] # update restrictions in the neighborhood of (i ± d)
3.
0 0 1 x x
0 0 0 0 x
0 1 0 0 0 # random choice
x 0 0 0 0
x x 0 1 0
restrictions = [1,0,0,0,2] # update restrictions in the neighborhood of (i ± d)
only one choice for j = 0 so it must be chosen
4.
0 0 1 x x
1 0 0 0 x # dictated choice
0 1 0 0 0
x 0 0 0 0
x x 0 1 0
restrictions = [0,0,0,0,2] # update restrictions in the neighborhood of (i ± d)
Solution:
0 0 1 x x
1 0 0 0 x
0 1 0 0 0
x 0 0 0 1 # dictated choice
x x 0 1 0
[2,0,1,4,3]
Python code (adapted from my previous version to choose both the next i and the next j randomly, inspired by user2357112's answer):
from random import randint, choice
import collections

def magicFunction(L, d):
    n = len(L)
    restrictions = [None] * n
    restrict = -1
    solution = [None] * n
    for i in xrange(0, n):
        restrictions[i] = abs(max(0, i - d) - min(n - 1, i + d)) + 1
    while True:
        availableIs = filter(lambda x: solution[x] == None, [i for i in xrange(n)]) if restrict == -1 \
            else filter(lambda x: solution[x] == None, [j for j in xrange(max(0, restrict - d), min(n, restrict + d + 1))])
        if not availableIs:
            L = [L[i] for i in solution]
            return L
        i = choice(availableIs)
        availableJs = filter(lambda x: restrictions[x] != 0, [j for j in xrange(max(0, i - d), min(n, i + d + 1))])
        nextJ = restrict if restrict != -1 else choice(availableJs)
        restrict = -1
        solution[i] = nextJ
        restrictions[nextJ] = 0
        for j in xrange(max(0, i - d), min(n, i + d + 1)):
            if j == nextJ or restrictions[j] == 0:
                continue
            restrictions[j] = restrictions[j] - 1
            if restrictions[j] == 1:
                restrict = j

print(collections.Counter(tuple(magicFunction([0, 1, 2], 1)) for i in xrange(1000)))
Using print(collections.Counter(tuple(magicFunction([0, 1, 2], 1)) for i in xrange(1000))) we find that the identity permutation comes up light with this code (why is left as an exercise for the reader).
Here's an adaptation of @גלעד ברקן's code that takes only one pass through the list (in random order) and swaps only once (using a random choice of possible positions):
from random import choice, shuffle

def magicFunction(L, d):
    n = len(L)
    swapped = [0] * n               # 0: position not swapped, 1: position was swapped
    positions = list(xrange(0, n))  # list of positions: 0..n-1
    shuffle(positions)              # randomize positions
    for x in positions:
        if swapped[x]:              # only swap an item once
            continue
        # find all possible positions to swap
        possible = [i for i in xrange(max(0, x - d), min(n, x + d)) if not swapped[i]]
        if not possible:
            continue
        y = choice(possible)        # choose another possible position at random
        if x != y:
            L[y], L[x] = L[x], L[y]      # swap with that position
            swapped[x] = swapped[y] = 1  # mark both positions as swapped
    return L
Here is a refinement of the above code that simply finds all possible adjacent positions and chooses one:
from random import choice

def magicFunction(L, d):
    n = len(L)
    positions = list(xrange(0, n))  # list of positions: 0..n-1
    for x in xrange(0, n):
        # find all possible positions to swap
        possible = [i for i in xrange(max(0, x - d), min(n, x + d)) if abs(positions[i] - x) <= d]
        if not possible:
            continue
        y = choice(possible)        # choose another possible position at random
        if x != y:
            L[y], L[x] = L[x], L[y]  # swap with that position
            positions[x] = y
            positions[y] = x
    return L

hackerrank new year chaos code optimization

I'm trying to optimize my solution for Hackerranks's 'New Year Chaos' problem. The gist of the problem goes like this
There's a queue of n people, labeled 1 through n, and each person can bribe the person directly in front of them to swap places and get closer to the front of the queue (in this case, index 0 of the list/array). Each person can only bribe a maximum of two times (and they cannot bribe someone who has already bribed them)
You are given the order of the people after all of the bribes have taken place and your job is to determine how many bribes took place to get to that point. For example, if you were given [3, 2, 1] then the answer would be 3 bribes (person 3 bribed person 1, 2 and person 2 bribed person 1).
My solution was, for each person I, count the number of people to the left of I that have a label greater than I (they would've had to bribe person I to get to the left of them). To complicate things (slightly), some of the test cases given would only be possible if someone bribed more than 2 times (i.e. [4, 1, 2, 3] - person 4 bribed person 3, then 2, then 1 to get to the front). If this is the case, simply output "Too chaotic"
Anyway here's the code:
# n is the number of people in the list
# q is the order of the people after the bribery has taken place, e.g. [1, 3, 2, 5, 4]
for I in range(1, n + 1):  # for each person I in the list
    index = q.index(I)
    if I - index > 3:  # more than two bribes
        bribes = "Too chaotic"
        break
    for j in range(index):  # for each number to the left of I, if greater than I, count it as a bribe
        if q[j] > I:
            bribes = bribes + 1
print bribes
My problem is that the code times out with some of the larger test cases (you're only given so much time for each test case to run). How can I optimize the algorithm so that it doesn't time out? Should I be trying this problem in another language?
An optimization to your solution would be to start the nested loop from q[i] - 2 instead of 0. Since no one can move ahead of their original position by more than 2, any value greater than q[i] can appear no earlier than index q[i] - 2.
Something like:
for (int j = Math.max(0, q[i] - 2); j < i; j++) {
    if (q[j] > q[i]) {
        bribe++;
    }
}
def minimumBribes(q):
    bribes = 0
    for i in range(len(q) - 1, -1, -1):
        if q[i] - (i + 1) > 2:
            print('Too chaotic')
            return
        for j in range(max(0, q[i] - 2), i):
            if q[j] > q[i]:
                bribes += 1
    print(bribes)
The final optimization is in the inner loop, to exclude all those people who were never in a position to offer the current person a bribe.
You already halt this loop when you reach the current person's final position, because obviously no one behind their final position gave them a bribe...
But what about all those people in front? This person could, at best, get to two positions in front of where they started, by issuing two bribes. But no-one in front of that was ever in a position to offer them a bribe, so we can exclude everyone further forward.
Thus the inner loop ranges from two spots in front of my start position to my final position. Which chops out a lot of iterations when the list gets lengthy.
def minimumBribes(q):
    bribes = 0
    for final_pos, start_pos in enumerate(q):
        # Abort if anyone is more than two bribes ahead of where they started
        if final_pos + 1 < start_pos - 2:
            print('Too chaotic')
            return
        # Count the number of people who started behind me, who are ahead of my
        # final position. Conduct the search between two spots forward of where
        # I started, thru to the person in front of me in the end; as these are
        # the only people to have potentially bribed me.
        potential_bribers = range(max(start_pos - 2, 0), final_pos)
        bribes += [q[briber] > start_pos for briber in potential_bribers].count(True)
    print(bribes)
I may be able to work these puzzles out, but never ever can I do it within their timeframes. Which is why I didn't even bother an attempt the last time a potential employer put a HackerRank test in front of me. They can have the brainiacs; the rest of us mere mortals have Stack Overflow.
Just came across this problem, took me some time but here's a nice clean solution that is optimized for larger test cases (and passes 10/10 on HackerRank). I realise it takes a slightly different approach to yours but thought I'd share as it is working nicely and might still help.
A key point that helped me is that for any given person X in the queue, the furthest you'll need to look for a person that overtook them is 1 spot in front of where person X started. People can be overtaken more than 2 times, so for example, person 1 can end up at the back of the queue even if the queue is 1000 people long. However, person Y can't end up more than two places ahead of where they began (otherwise they must have overtaken more than twice). This is the reason you can be sure you don't need to look further than 1 place in front of where person X began (2 places in front of the nearest possible overtaker) for the people that overtook them.
def minimumBribes(q):
    count = 0
    tooChaotic = False
    for i in range(len(q)):
        # to translate between q[i] and the position in a zero-indexed python
        # list, you have to add 1, so here it's i+3 rather than i+2
        if q[i] > i + 3:
            print("Too chaotic")
            tooChaotic = True
            break
        else:
            # q[i]-2 rather than q[i]-1 since python is zero-indexed but the
            # people in the queue start at 1
            start = max(0, q[i] - 2)
            for j in range(start, i):
                if q[j] > q[i]:
                    count += 1
    if tooChaotic == False:
        print(count)
There are two problems in your code.
First, you should iterate the given array from the end to the beginning, not the other way around. The reason is that if you iterate from the beginning, you need to scan everything to the left in the inner loop. But if you iterate from the last element, you only need to check the two numbers to the left of each number's original position to count the inversions. I think this is why your code times out on the large test cases.
Second, when you print out bribes at the last line, you should check whether you broke out of the outer loop or it ran to completion with i == n. Otherwise, it might print both "Too chaotic" and the bribes you already counted.
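A sketch of both fixes together; it is essentially the backward iteration already shown above, with Python's for/else handling the break detection (the else clause runs only when the outer loop was never broken out of):

def minimumBribes(q):
    bribes = 0
    for i in range(len(q) - 1, -1, -1):       # iterate from the end
        if q[i] - (i + 1) > 2:                # moved up more than two places
            print('Too chaotic')
            break
        for j in range(max(0, q[i] - 2), i):  # only positions that could have bribed q[i]
            if q[j] > q[i]:
                bribes += 1
    else:                                     # reached only if we never broke out
        print(bribes)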
My solution is better because it uses fewer loops:
count = 0
flag = True
i = len(q) - 1
for x in range(len(q)):
    temp = max(q) - x
    index = q.index(temp)
    if index < i - x:
        if i - x - index > 2:
            print("Too chaotic")
            flag = False
            break
        q.remove(temp)
        q.insert(i - x, temp)
        count += i - x - index
if flag:
    print(count)
def minimumBribes(q):
    moves = 0
    for pos, val in enumerate(q):
        if (val - 1) - pos > 2:
            return "Too chaotic"
        for j in xrange(max(0, val - 2), pos):
            if q[j] > val:
                moves += 1
    return moves
Here is a complete C++ solution, based on @lazywiz's comment:
void minimumBribes(vector<int> q) {
    int numBribes{0};
    bool chaotic = false;
    vector<bool> visited(100000, false);
    for (int i = 0; i < q.size(); i++) {
        int pos = i + 1;
        if (q[i] - pos > 2) {
            chaotic = true;
            break;
        } else if (q[i] <= pos) {
            for (int j = q[i] + 1; j <= pos + 3; j++) {
                if (visited[j-1]) { numBribes++; }
            }
        }
        visited[q[i] - 1] = true;
    }
    if (chaotic) {
        cout << "Too chaotic\n";
    } else {
        cout << numBribes << "\n";
    }
}
def minimumBribes(q):
    total_bribe = 0
    chaos = ""
    for i in range(0, len(q)):
        if chaos == "Too chaotic":
            break
        bribe_count = 0
        for j in range(i, len(q)):
            if q[i] > q[j]:
                if bribe_count < 2:
                    bribe_count += 1
                else:
                    chaos = "Too chaotic"
                    break
        total_bribe += bribe_count
    if chaos == "":
        print(total_bribe)
    else:
        print(chaos)
How can we optimize this code further?

How to check whether two lists are circularly identical in Python

For instance, I have lists:
a[0] = [1, 1, 1, 0, 0]
a[1] = [1, 1, 0, 0, 1]
a[2] = [0, 1, 1, 1, 0]
# and so on
They seem to be different, but if we suppose that the start and the end are connected, then they are circularly identical.
The problem is, each list I have has a length of 55 and contains only three ones and 52 zeros. Without the circular condition, there are 26,235 (55 choose 3) distinct lists. With the circular condition, however, a huge number of them are circularly identical to one another.
Currently I check circular identity as follows:
import numpy

def is_dup(a, b):
    for i in range(len(a)):
        if a == list(numpy.roll(b, i)):  # shift b circularly by i
            return True
    return False
This function requires 55 cyclic shift operations in the worst case, and there are 26,235 lists to be compared with each other. In short, I need 55 * 26,235 * (26,235 - 1) / 2 = 18,926,847,225 computations. That's nearly 20 giga!
Is there any good way to do it with fewer computations? Or any data type that supports circularity?
First off, this can be done in O(n) in terms of the length of the list.
Notice that if you duplicate your list ([1, 2, 3] becomes [1, 2, 3, 1, 2, 3]), the new list will definitely contain all possible cyclic rotations of the original.
So all you need is to check whether the list you are searching for is inside the doubled starting list. In Python you can achieve this in the following way (assuming that the lengths are the same).
list1 = [1, 1, 1, 0, 0]
list2 = [1, 1, 0, 0, 1]
print ' '.join(map(str, list2)) in ' '.join(map(str, list1 * 2))
Some explanation of my one-liner: list * 2 concatenates a list with itself, map(str, [1, 2]) converts all numbers to strings, and ' '.join() converts the array ['1', '2', '111'] into the string '1 2 111'.
As pointed out by some people in the comments, the one-liner can potentially give some false positives, so to cover all the possible edge cases:
def isCircular(arr1, arr2):
    if len(arr1) != len(arr2):
        return False

    str1 = ' '.join(map(str, arr1))
    str2 = ' '.join(map(str, arr2))
    if len(str1) != len(str2):
        return False

    return str1 in str2 + ' ' + str2
P.S. 1: when speaking about time complexity, it is worth noticing that O(n) is achieved only if the substring search takes O(n) time. That is not always so and depends on the implementation in your language (although it can potentially be done in linear time, with KMP for example).
P.S. 2: for people who are wary of string operations and therefore think this answer is not good: what matters is complexity and speed. This algorithm potentially runs in O(n) time and O(n) space, which makes it much better than anything in the O(n^2) domain. To see this for yourself, you can run a small benchmark (it creates a random list, pops the first element and appends it to the end, thus creating a cyclic list; you are free to do your own manipulations):
from random import random
bigList = [int(1000 * random()) for i in xrange(10**6)]
bigList2 = bigList[:]
bigList2.append(bigList2.pop(0))

# then test how much time it takes to come up with an answer
from datetime import datetime
startTime = datetime.now()
print isCircular(bigList, bigList2)
print datetime.now() - startTime  # feel free to use timeit, but it will give similar results
0.3 seconds on my machine. Not really long. Now try to compare this with O(n^2) solutions. While it is comparing it, you can travel from US to Australia (most probably by a cruise ship)
Not knowledgeable enough in Python to answer this in your requested language, but in C/C++, given the parameters of your question, I'd convert the zeros and ones to bits and push them onto the least significant bits of a uint64_t. This will allow you to compare all 55 bits in one fell swoop - 1 clock.
Wickedly fast, and the whole thing will fit in on-chip caches (209,880 bytes). Hardware support for shifting all 55 list members right simultaneously is available only in a CPU's registers. The same goes for comparing all 55 members simultaneously. This allows for a 1-for-1 mapping of the problem to a software solution. (and using the SIMD/SSE 256 bit registers, up to 256 members if needed) As a result the code is immediately obvious to the reader.
You might be able to implement this in Python, I just don't know it well enough to know if that's possible or what the performance might be.
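For what it's worth, the idea does port to Python directly, since Python ints support arbitrary-width bit operations; a minimal sketch (my own, not the answerer's):

def pack(bits):
    """Pack a list of 0/1 values into an int, first element as the high bit."""
    n = 0
    for b in bits:
        n = (n << 1) | b
    return n

def circularly_identical(a, b):  # assumes len(a) == len(b)
    width = len(a)
    x, y = pack(a), pack(b)
    for _ in range(width):
        if x == y:
            return True
        y = (y >> 1) | ((y & 1) << (width - 1))  # rotate y right by one bit
    return False

print(circularly_identical([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))  # True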
After sleeping on it a few things became obvious, and all for the better.
1.) It's so easy to spin the circularly linked list using bits that Dali's very clever trick isn't necessary. Inside a 64-bit register standard bit shifting will accomplish the rotation very simply, and in an attempt to make this all more Python friendly, by using arithmetic instead of bit ops.
2.) Bit shifting can be accomplished easily using divide by 2.
3.) Checking the end of the list for 0 or 1 can be easily done by modulo 2.
4.) "Moving" a 0 to the head of the list from the tail can be done by dividing by 2. This because if the zero were actually moved it would make the 55th bit false, which it already is by doing absolutely nothing.
5.) "Moving" a 1 to the head of the list from the tail can be done by dividing by 2 and adding 18,014,398,509,481,984 - which is the value created by marking the 55th bit true and all the rest false.
6.) If a comparison of the anchor and composed uint64_t is TRUE after any given rotation, break and return TRUE.
I would convert the entire array of lists into an array of uint64_ts right up front to avoid having to do the conversion repeatedly.
After spending a few hours trying to optimize the code, studying the assembly language I was able to shave 20% off the runtime. I should add that the O/S and MSVC compiler got updated mid-day yesterday as well. For whatever reason/s, the quality of the code the C compiler produced improved dramatically after the update (11/15/2014). Run-time is now ~ 70 clocks, 17 nanoseconds to compose and compare an anchor ring with all 55 turns of a test ring and NxN of all rings against all others is done in 12.5 seconds.
This code is so tight all but 4 registers are sitting around doing nothing 99% of the time. The assembly language matches the C code almost line for line. Very easy to read and understand. A great assembly project if someone were teaching themselves that.
Hardware is Haswell i7, MSVC 64-bit, full optimizations.
#include "stdafx.h"
#include <string>
#include <memory>
#include <stdio.h>
#include <time.h>

const uint8_t LIST_LENGTH = 55;    // uint8_t supports full width of SIMD and AVX2
// max left shifts is 32, so must use right shifts to create head_bit
const uint64_t head_bit = (0x8000000000000000 >> (64 - LIST_LENGTH));
const uint64_t CPU_FREQ = 3840000000;   // turbo-mode clock freq of my i7 chip
const uint64_t LOOP_KNT = 688275225;    // 26235^2 // 1000000000;

// ----------------------------------------------------------------------------
__inline uint8_t is_circular_identical(const uint64_t anchor_ring, uint64_t test_ring)
{
    // By trial and error, try to synch 2 circular lists by holding one constant
    // and turning the other 0 to LIST_LENGTH positions. Return compare count.
    // Return the number of tries which aligned the circularly identical rings,
    // where any non-zero value is treated as a bool TRUE. Return a zero/FALSE,
    // if all tries failed to find a sequence match.
    // If anchor_ring and test_ring are equal to start with, return one.
    for (uint8_t i = LIST_LENGTH; i; i--)
    {
        // This function could be made bool, returning TRUE or FALSE, but
        // as a debugging tool, knowing the try_knt that got a match is nice.
        if (anchor_ring == test_ring) {  // test all 55 list members simultaneously
            return (LIST_LENGTH + 1) - i;
        }
        if (test_ring % 2) {    // ring's tail is 1 ?
            test_ring /= 2;     // right-shift 1 bit
            // if the ring tail was 1, set head to 1 to simulate wrapping
            test_ring += head_bit;
        } else {                // ring's tail must be 0
            test_ring /= 2;     // right-shift 1 bit
            // if the ring tail was 0, doing nothing leaves head a 0
        }
    }
    // if we got here, they can't be circularly identical
    return 0;
}
// ----------------------------------------------------------------------------
int main(void) {
    time_t start = clock();
    uint64_t anchor, test_ring, i, milliseconds;
    uint8_t try_knt;

    anchor = 31525197391593472;  // bits 55,54,53 set true, all others false
    // Anchor right-shifted LIST_LENGTH/2 represents the average search turns
    test_ring = anchor >> (1 + (LIST_LENGTH / 2));  // 117440512;

    printf("\n\nRunning benchmarks for %llu loops.", LOOP_KNT);
    start = clock();
    for (i = LOOP_KNT; i; i--) {
        try_knt = is_circular_identical(anchor, test_ring);
        // The shifting of test_ring below is a test fixture to prevent the
        // optimizer from optimizing the loop away and returning instantly
        if (i % 2) {
            test_ring /= 2;
        } else {
            test_ring *= 2;
        }
    }
    milliseconds = (uint64_t)(clock() - start);
    printf("\nET for is_circular_identical was %f milliseconds."
           "\n\tLast try_knt was %u for test_ring list %llu",
           (double)milliseconds, try_knt, test_ring);
    printf("\nConsuming %7.1f clocks per list.\n",
           (double)((milliseconds * (CPU_FREQ / 1000)) / (uint64_t)LOOP_KNT));
    getchar();
    return 0;
}
Reading between the lines, it sounds as though you're trying to enumerate one representative of each circular equivalence class of strings with 3 ones and 52 zeros. Let's switch from a dense representation to a sparse one (set of three numbers in range(55)). In this representation, the circular shift of s by k is given by the comprehension set((i + k) % 55 for i in s). The lexicographic minimum representative in a class always contains the position 0. Given a set of the form {0, i, j} with 0 < i < j, the other candidates for minimum in the class are {0, j - i, 55 - i} and {0, 55 - j, 55 + i - j}. Hence, we need (i, j) <= min((j - i, 55 - i), (55 - j, 55 + i - j)) for the original to be minimum. Here's some enumeration code.
def makereps():
    reps = []
    for i in range(1, 55 - 1):
        for j in range(i + 1, 55):
            if (i, j) <= min((j - i, 55 - i), (55 - j, 55 + i - j)):
                reps.append('1' + '0' * (i - 1) + '1' + '0' * (j - i - 1) + '1' + '0' * (55 - j - 1))
    return reps
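As a sanity check (my own observation, not from the answer): since gcd(3, 55) = 1, no arrangement of three ones has any rotational symmetry, so every equivalence class contains exactly 55 distinct rotations and the enumeration should yield 26,235 / 55 = 477 representatives:

print(len(makereps()))  # 477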
Repeat the first array, then use the Z algorithm (O(n) time) to find the second array inside the first.
(Note: you don't have to physically copy the first array. You can just wrap around during matching.)
The nice thing about the Z algorithm is that it's very simple compared to KMP, BM, etc.
However, if you're feeling ambitious, you could do string matching in linear time and constant space -- strstr, for example, does this. Implementing it would be more painful, though.
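Since the answer doesn't include code, here is a minimal Z-algorithm sketch along those lines (my own; it uses a None separator, which can never match a list element):

def z_array(s):
    """z[i] = length of the longest prefix of s that also starts at i."""
    n = len(s)
    z = [0] * n
    z[0] = n
    l = r = 0
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

def is_rotation(a, b):
    if len(a) != len(b):
        return False
    s = b + [None] + a + a   # search for b inside a doubled
    z = z_array(s)
    return any(z[i] >= len(b) for i in range(len(b) + 1, len(s)))

print(is_rotation([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))  # True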
Following up on Salvador Dali's very smart solution, the best way to handle it is to make sure all elements are of the same length, as well as both LISTS are of the same length.
def is_circular_equal(lst1, lst2):
    if len(lst1) != len(lst2):
        return False
    lst1, lst2 = map(str, lst1), map(str, lst2)
    len_longest_element = max(map(len, lst1))
    template = "{{:{}}}".format(len_longest_element)
    circ_lst = " ".join([template.format(el) for el in lst1]) * 2
    return " ".join([template.format(el) for el in lst2]) in circ_lst
No clue if this is faster or slower than AshwiniChaudhary's recommended regex solution in Salvador Dali's answer, which reads:
import re

def is_circular_equal(lst1, lst2):
    if len(lst1) != len(lst2):
        return False
    return bool(re.search(r"\b{}\b".format(' '.join(map(str, lst2))),
                          ' '.join(map(str, lst1)) * 2))
Given that you need to do so many comparisons might it be worth your while taking an initial pass through your lists to convert them into some sort of canonical form that can be easily compared?
Are you trying to get a set of circularly-unique lists? If so you can throw them into a set after converting to tuples.
def normalise(lst):
    # Pick the 'maximum' out of all cyclic options
    return max([lst[i:] + lst[:i] for i in range(len(lst))])

a_normalised = map(normalise, a)
a_tuples = map(tuple, a_normalised)
a_unique = set(a_tuples)
Apologies to David Eisenstat for not spotting his very similar answer.
You can roll one list like this:
list1, list2 = [0,1,1,1,0,0,1,0], [1,0,0,1,0,0,1,1]
str_list1 = "".join(map(str, list1))
str_list2 = "".join(map(str, list2))

def rotate(string_to_rotate, result=None):
    if result is None:  # avoid sharing one list across calls
        result = []
    result.append(string_to_rotate)
    for i in xrange(1, len(string_to_rotate)):
        result.append(result[-1][1:] + result[-1][0])
    return result

for x in rotate(str_list1):
    if cmp(x, str_list2) == 0:
        print "lists are rotationally identical"
        break
First convert each of your lists (in a copy if necessary) to the rotated version that is lexically greatest.
Then sort the resulting list of lists (retaining an index into the original list position) and unify the sorted list, marking all the duplicates in the original list as needed.
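A sketch of that canonicalise-then-unify idea (my own; it uses a dict for the unification step, which has the same effect as sorting and scanning):

def greatest_rotation(lst):
    # the lexically greatest rotation serves as the class representative
    return max(tuple(lst[i:] + lst[:i]) for i in range(len(lst)))

def mark_duplicates(lists):
    """For each list, return the index of the first list in its circular class."""
    first_seen = {}
    return [first_seen.setdefault(greatest_rotation(lst), idx)
            for idx, lst in enumerate(lists)]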
Piggybacking on @SalvadorDali's observation on looking for matches of a in any a-length-sized slice of b+b, here is a solution using just list operations.
def rollmatch(a, b):
    bb = b * 2
    return any(not any(ax ^ bbx for ax, bbx in zip(a, bb[i:])) for i in range(len(a)))

l1 = [1,0,0,1]
l2 = [1,1,0,0]
l3 = [1,0,1,0]

rollmatch(l1, l2)  # True
rollmatch(l1, l3)  # False
2nd approach: [deleted]
Not a complete, free-standing answer, but on the topic of optimizing by reducing comparisons, I too was thinking of normalized representations.
Namely, if your input alphabet is {0, 1}, you could reduce the number of allowed permutations significantly. Rotate the first list to a (pseudo-) normalized form (given the distribution in your question, I would pick one where one of the 1 bits is on the extreme left, and one of the 0 bits is on the extreme right). Now before each comparison, successively rotate the other list through the possible positions with the same alignment pattern.
For example, if you have a total of four 1 bits, there can be at most 4 permutations with this alignment, and if you have clusters of adjacent 1 bits, each additional bit in such a cluster reduces the amount of positions.
List 1    1 1 1 0 1 0

List 2    1 0 1 1 1 0    1st permutation
          1 1 1 0 1 0    2nd permutation, final permutation, match, done
This generalizes to larger alphabets and different alignment patterns; the main challenge is to find a good normalization with only a few possible representations. Ideally, it would be a proper normalization, with a single unique representation, but given the problem, I don't think that's possible.
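As a rough Python sketch of this idea (my illustration; it assumes the alphabet {0, 1} and normalises to rotations that start with a 1 and end, circularly, with a 0):

def aligned_rotations(lst):
    # yield only rotations that start with a 1 preceded (circularly) by a 0
    for i in range(len(lst)):
        if lst[i] == 1 and lst[i - 1] == 0:
            yield lst[i:] + lst[:i]

def circular_equal(a, b):
    if len(a) != len(b):
        return False
    # normalise a to one aligned rotation, then test only aligned rotations of b;
    # the final `or a == b` covers all-0 or all-1 lists, which have no aligned rotation
    a_norm = next(aligned_rotations(a), a)
    return any(rot == a_norm for rot in aligned_rotations(b)) or a == b

# the worked example above: the 2nd aligned rotation of List 2 matches List 1
assert circular_equal([1, 1, 1, 0, 1, 0], [1, 0, 1, 1, 1, 0])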
Building further on RocketRoy's answer:
Convert all your lists up front to unsigned 64-bit numbers.
For each list, rotate those 55 bits around to find the smallest numerical value.
You are now left with a single unsigned 64-bit value for each list that you can compare directly against the values of the other lists. The function is_circular_identical() is no longer required.
(In essence, you create an identity value for your lists that is not affected by the rotation of the list's elements.)
This even works if you have an arbitrary number of ones in your lists.
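Here is a rough Python sketch of that idea (my illustration, not RocketRoy's code; the 55-bit width from the original question is kept as a parameter):

def pack(lst):
    # pack a 0/1 list into an integer, most-significant bit first
    n = 0
    for bit in lst:
        n = (n << 1) | bit
    return n

def canonical_rotation(bits, width=55):
    # return the smallest value obtainable by rotating `bits` within `width` bits
    mask = (1 << width) - 1
    best = v = bits & mask
    for _ in range(width - 1):
        v = ((v >> 1) | ((v & 1) << (width - 1))) & mask
        best = min(best, v)
    return best

# two circularly identical lists map to the same identity value
a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
assert canonical_rotation(pack(a), len(a)) == canonical_rotation(pack(b), len(b))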
This is the same idea as Salvador Dali's, but it doesn't need the string conversion. Behind it is the same KMP failure-function idea, used to avoid inspecting impossible shifts. Then just call KMPModified(list1, list2 + list2).
using System;
using System.Collections.Generic;
using System.Linq;
using Xunit;

public class KmpModified
{
    // Build the KMP failure function (phi) for the pattern.
    public int[] CalculatePhi(int[] pattern)
    {
        var phi = new int[pattern.Length + 1];
        phi[0] = -1;
        phi[1] = 0;

        int pos = 1, cnd = 0;
        while (pos < pattern.Length)
            if (pattern[pos] == pattern[cnd])
            {
                cnd++;
                phi[pos + 1] = cnd;
                pos++;
            }
            else if (cnd > 0)
                cnd = phi[cnd];
            else
            {
                phi[pos + 1] = 0;
                pos++;
            }

        return phi;
    }

    // Yield the start index of every occurrence of `pattern` in `list`.
    public IEnumerable<int> Search(int[] pattern, int[] list)
    {
        var phi = CalculatePhi(pattern);

        int m = 0, i = 0;
        while (m < list.Length)
            if (pattern[i] == list[m])
            {
                i++;
                if (i == pattern.Length)
                {
                    yield return m - i + 1;
                    i = phi[i];
                }
                m++;
            }
            else if (i > 0)
            {
                i = phi[i];
            }
            else
            {
                i = 0;
                m++;
            }
    }

    [Fact]
    public void BasicTest()
    {
        var pattern = new[] { 1, 1, 10 };
        var list = new[] { 2, 4, 1, 1, 1, 10, 1, 5, 1, 1, 10, 9 };
        var matches = Search(pattern, list).ToList();
        Assert.Equal(new[] { 3, 8 }, matches);
    }

    [Fact]
    public void SolveProblem()
    {
        var random = new Random();
        var list = new int[10];
        for (var k = 0; k < list.Length; k++)
            list[k] = random.Next();

        // Build a rotation of `list` shifted left by one.
        var rotation = new int[list.Length];
        for (var k = 1; k < list.Length; k++)
            rotation[k - 1] = list[k];
        rotation[rotation.Length - 1] = list[0];

        Assert.True(Search(list, rotation.Concat(rotation).ToArray()).Any());
    }
}
Hope this helps!
Simplifying The Problem
The problem consists of a list of ordered items
The domain of values is binary (0, 1)
We can reduce the problem by mapping runs of consecutive 1s into a positive count
and runs of consecutive 0s into a negative count
Example
A = [ 1, 1, 1, 0, 0, 1, 1, 0 ]
B = [ 1, 1, 0, 1, 1, 1, 0, 0 ]
~
A = [ +3, -2, +2, -1 ]
B = [ +2, -1, +3, -2 ]
This encoding requires that the first item and the last item of a list be different
It will reduce the overall number of comparisons
Checking Process
If we assume the two lists are duplicates, then we know what we are looking for
Basically, the first item of the first list must exist somewhere in the other list
followed by whatever follows it in the first list, in the same manner
The items preceding it should then be the last items of the first list
Since it's circular, the order is the same
The Catch
The question here is where to start, technically known as lookup and look-ahead
We just check where the first element of the first list exists in the second list
A frequent element becomes less probable once we have mapped the lists into run-length counts (histograms)
Pseudo-Code
FUNCTION IS_DUPLICATE (LIST L1, LIST L2) : BOOLEAN
    LIST A = MAP_LIST(L1)
    LIST B = MAP_LIST(L2)
    LIST ALPHA = LOOKUP_INDEX(B, A[0])

    IF A.SIZE != B.SIZE
       OR COUNT_CHAR(A, 0) != COUNT_CHAR(B, ALPHA[0]) THEN
        RETURN FALSE
    END IF

    FOR EACH INDEX IN ALPHA
        IF ALPHA_NGRAM(A, B, INDEX, 1) THEN
            IF IS_DUPLICATE(A, B, INDEX) THEN
                RETURN TRUE
            END IF
        END IF
    END FOR

    RETURN FALSE
END FUNCTION

FUNCTION IS_DUPLICATE (LIST L1, LIST L2, INTEGER INDEX) : BOOLEAN
    INTEGER I = 0

    WHILE I < L1.SIZE DO
        IF L1[I] != L2[(INDEX + I) % L2.SIZE] THEN
            RETURN FALSE
        END IF
        I = I + 1
    END WHILE

    RETURN TRUE
END FUNCTION
Functions
MAP_LIST(LIST A) : LIST -- map runs of consecutive elements into counts in a new list
LOOKUP_INDEX(LIST A, INTEGER E) : LIST -- return the list of indices where element E exists in list A
COUNT_CHAR(LIST A, INTEGER E) : INTEGER -- count how many times element E occurs in list A
ALPHA_NGRAM(LIST A, LIST B, INTEGER I, INTEGER N) : BOOLEAN -- check whether B[I] is equivalent to A[0] as an N-gram in both directions
Finally
If the list size is going to be very large, or if the element we start checking the cycle from occurs very frequently, then we can do the following:
Look for the least-frequent item in the first list to start from
Increase the n-gram parameter N to lower the probability of going through the full linear check
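Here is a small Python sketch of this run-length idea (my adaptation of the pseudo-code above, not the author's code): runs of 1s become positive counts, runs of 0s become negative counts, the first and last runs are merged when they wrap around, and the encodings are then compared circularly, starting from each occurrence of A[0] in B.

def rle(lst):
    # encode runs of 1s as positive counts and runs of 0s as negative counts
    runs = []
    for v in lst:
        sign = 1 if v else -1
        if runs and (runs[-1] > 0) == (v == 1):
            runs[-1] += sign
        else:
            runs.append(sign)
    # merge the first and last runs when they wrap around with the same value
    if len(runs) > 1 and (runs[0] > 0) == (runs[-1] > 0):
        runs[0] += runs.pop()
    return runs

def is_duplicate(l1, l2):
    a, b = rle(l1), rle(l2)
    if len(a) != len(b):
        return False
    if not a:
        return True
    # try each position in b where a[0] occurs as the starting alignment
    return any(
        all(a[i] == b[(k + i) % len(b)] for i in range(len(a)))
        for k in range(len(b)) if b[k] == a[0]
    )

assert is_duplicate([1, 1, 1, 0, 0, 1, 1, 0], [1, 1, 0, 1, 1, 1, 0, 0])    # the example above
assert not is_duplicate([1, 0, 1, 0], [1, 1, 0, 0])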
An efficient, quick-to-compute "canonical form" for the lists in question can be derived as:
Count the number of zeroes between the ones (ignoring wrap-around), to get three numbers.
Rotate the three numbers so that the biggest number is first.
The first number (a) must be between 18 and 52 (inclusive). Re-encode it as between 0 and 34.
The second number (b) must be between 0 and 26, but it doesn't matter much.
Drop the third number, since it's just 52 - (a + b) and adds no information.
The canonical form is the integer b * 35 + a, which is between 0 and 936 (inclusive), which is fairly compact (there are 477 circularly-unique lists in total).
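A small Python sketch of this canonical form (my illustration; it assumes, as the numbers in this answer imply, 55-element lists containing exactly three 1s):

def canonical_form(lst):
    # number of zeroes between the ones, circularly, giving three gap counts
    ones = [i for i, v in enumerate(lst) if v == 1]
    n = len(lst)
    gaps = [(ones[(k + 1) % 3] - ones[k] - 1) % n for k in range(3)]
    # rotate the triple so the biggest gap comes first
    # (take the lexicographically greatest rotation to break ties)
    a, b, _ = max(gaps[k:] + gaps[:k] for k in range(3))
    return b * 35 + (a - 18)    # re-encode a from [18, 52] down to [0, 34]

# two rotations of the same pattern get the same canonical integer
x = [0] * 55
for i in (0, 20, 40):
    x[i] = 1
y = x[7:] + x[:7]
assert canonical_form(x) == canonical_form(y)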
I wrote a straightforward solution which compares both lists and just increases (and wraps around) the index of the compared value for each iteration.
I don't know Python well, so I wrote it in Java, but it's really simple, so it should be easy to adapt to any other language.
This way you can also compare lists of other types.
public class Main {

    public static void main(String[] args) {
        int[] a = {0, 1, 1, 1, 0};
        int[] b = {1, 1, 0, 0, 1};
        System.out.println(isCircularIdentical(a, b));
    }

    public static boolean isCircularIdentical(int[] a, int[] b) {
        if (a.length != b.length) {
            return false;
        }

        // The outer loop shifts the starting index into the second list
        outer:
        for (int i = 0; i < a.length; i++) {
            // Loop through the list and compare each value to the corresponding value of the second list
            for (int k = 0; k < a.length; k++) {
                // Use modulo length to wrap around the index
                if (a[k] != b[(k + i) % a.length]) {
                    // If the values do not match, continue and shift the index one further
                    continue outer;
                }
            }
            return true;
        }
        return false;
    }
}
As others have mentioned, once you find the normalized rotation of a list, you can compare them.
Here's some working code that does this.
The basic method is to find a normalized rotation for each list and compare:
Calculate a normalized rotation index on each list.
Loop over both lists with their offsets, comparing each item and returning early on a mismatch.
Note that this method doesn't depend on numbers: you can pass in lists of strings (or any values which can be compared).
Instead of doing a list-in-list search, we know we want the list to start with the minimum value - so we can loop over the minimum values, searching until we find which one has the lowest successive values, storing this for further comparisons until we have the best.
There are many opportunities to exit early when calculating the index; details on some optimizations:
Skip searching for the best minimum value when there's only one.
Skip searching minimum values when the previous is also a minimum value (it will never be a better match).
Skip searching when all values are the same.
Fail early when lists have different minimum values.
Use regular comparison when offsets match.
Adjust offsets to avoid wrapping the index values on one of the lists during comparison.
Note that in Python a list-in-list search may well be faster; however, I was interested to find an efficient algorithm that could be used in other languages too. Also, there is some advantage to avoiding the creation of new lists.
def normalize_rotation_index(ls, v_min_other=None):
    """ Return the index or -1 (when the minimum differs from `v_min_other`) """

    if len(ls) <= 1:
        return 0

    def compare_rotations(i_a, i_b):
        """ Return True when i_a is smaller.
            Note: unless there are large duplicate sections of identical values,
            this loop will exit early on.
        """
        for offset in range(1, len(ls)):
            v_a = ls[(i_a + offset) % len(ls)]
            v_b = ls[(i_b + offset) % len(ls)]
            if v_a < v_b:
                return True
            elif v_a > v_b:
                return False
        return False

    v_min = ls[0]
    i_best_first = 0
    i_best_last = 0
    i_best_total = 1
    for i in range(1, len(ls)):
        v = ls[i]
        if v_min > v:
            v_min = v
            i_best_first = i
            i_best_last = i
            i_best_total = 1
        elif v_min == v:
            i_best_last = i
            i_best_total += 1

    # all values match
    if i_best_total == len(ls):
        return 0

    # exit early if we're not matching another list's minimum
    if v_min_other is not None:
        if v_min != v_min_other:
            return -1

    # simple case, only one minimum
    if i_best_first == i_best_last:
        return i_best_first

    # otherwise find the minimum with the lowest values compared to all others.
    # start looking after the first we've found
    i_best = i_best_first
    for i in range(i_best_first + 1, i_best_last + 1):
        if (ls[i] == v_min) and (ls[i - 1] != v_min):
            if compare_rotations(i, i_best):
                i_best = i

    return i_best


def compare_circular_lists(ls_a, ls_b):
    # sanity checks
    if len(ls_a) != len(ls_b):
        return False
    if len(ls_a) <= 1:
        return (ls_a == ls_b)

    index_a = normalize_rotation_index(ls_a)
    index_b = normalize_rotation_index(ls_b, ls_a[index_a])

    if index_b == -1:
        return False

    if index_a == index_b:
        return (ls_a == ls_b)

    # cancel out 'index_a'
    index_b = (index_b - index_a)
    if index_b < 0:
        index_b += len(ls_a)
    index_a = 0  # ignore it

    # compare rotated lists
    for i in range(len(ls_a)):
        if ls_a[i] != ls_b[(index_b + i) % len(ls_b)]:
            return False
    return True


assert(compare_circular_lists([0, 9, -1, 2, -1], [-1, 2, -1, 0, 9]) == True)
assert(compare_circular_lists([2, 9, -1, 0, -1], [-1, 2, -1, 0, 9]) == False)
assert(compare_circular_lists(["Hello", "Circular", "World"], ["World", "Hello", "Circular"]) == True)
assert(compare_circular_lists(["Hello", "Circular", "World"], ["Circular", "Hello", "World"]) == False)
See: this snippet for some more tests/examples.
You can check to see if a list A is equal to a cyclic shift of list B in expected O(N) time pretty easily.
I would use a polynomial hash function to compute the hash of list A, and every cyclic shift of list B. Where a shift of list B has the same hash as list A, I'd compare the actual elements to see if they are equal.
The reason this is fast is that with polynomial hash functions (which are extremely common!), you can calculate the hash of each cyclic shift from the previous one in constant time, so you can calculate hashes for all of the cyclic shifts in O(N) time.
It works like this:
Let's say B has N elements; then the hash of B using prime P is:

Hb = 0;
for (i = 0; i < N; i++)
{
    Hb = Hb * P + B[i];
}
This is an optimized way to evaluate a polynomial in P, and is equivalent to:
Hb = 0;
for (i = 0; i < N; i++)
{
    Hb += B[i] * P^(N-1-i);  // ^ is exponentiation, not XOR
}
Notice how every B[i] is multiplied by P^(N-1-i). If we shift B to the left by 1, then every B[i] will be multiplied by an extra P, except the first one. Since multiplication distributes over addition, we can multiply all the components at once just by multiplying the whole hash, and then fix up the factor for the first element.
The hash of the left shift of B is just
Hb1 = Hb*P + B[0]*(1-(P^N))
The second left shift:
Hb2 = Hb1*P + B[1]*(1-(P^N))
and so on...
NOTE: all math above is performed modulo some machine word size, and you only have to calculate P^N once.
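Here is a small Python sketch of this rolling-hash comparison (my illustration; the values of P and the word-size modulus M are arbitrary choices):

M = 1 << 64      # "machine word size" modulus
P = 1000003      # the prime multiplier

def poly_hash(lst):
    h = 0
    for v in lst:
        h = (h * P + v) % M
    return h

def is_circular_equal(a, b):
    if len(a) != len(b):
        return False
    n = len(b)
    if n == 0:
        return True
    ha, hb = poly_hash(a), poly_hash(b)
    pn = pow(P, n, M)    # P^N, computed once
    for k in range(n):
        # hb is the hash of b shifted left by k
        if hb == ha and a == b[k:] + b[:k]:    # verify actual elements on a hash match
            return True
        hb = (hb * P + b[k] * (1 - pn)) % M    # O(1) update to the next shift
    return False

assert is_circular_equal([1, 1, 1, 0, 0], [0, 1, 1, 1, 0])
assert not is_circular_equal([1, 0, 1, 0], [1, 1, 0, 0])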
To stick to the most Pythonic way to do it, use sets! (The built-in set replaces the old sets.Set module.)

a = set([1, 1, 1, 0, 0])
b = set([0, 1, 1, 1, 0])
c = set([1, 0, 0, 1, 1])

a == b        # True
a == b == c   # True
