Shift the items in the given array by some number of positions, as shown in the examples below:
array = [1, 2, 3, 4, 5, 6]
k1 = 2
k2 = -3
k3 = 20
test1:
cirShift(array, k1)
Result: [5, 6, 1, 2, 3, 4]
test2:
cirShift(array, k2)
Result: [4, 5, 6, 1, 2, 3]
test3:
cirShift(array, k3)
Result: [5, 6, 1, 2, 3, 4]
I have used the code below to right-rotate a list by k positions:
def rightRotateByOne(A):
    Fin = A[-1]
    for i in reversed(range(len(A) - 1)):
        A[i + 1] = A[i]
    A[0] = Fin

def rightRotate(A, k):
    if k < 0 or k >= len(A):
        return
    for i in range(k):
        rightRotateByOne(A)

if __name__ == '__main__':
    A = [1, 2, 3, 4, 5, 6, 7]
    k = 3
    rightRotate(A, k)
    print(A)
As of now, I am able to obtain the result for test1, but I would also like to achieve test2 and test3.
Even easier: split the array at the boundary, swap the two parts, and glue them back together:
def cirShift(a, shift):
    if not shift or not a:
        return a
    return a[-shift % len(a):] + a[:-shift % len(a)]
Courtesy of #KellyBundy, a short-circuit one-liner:
def cirShift(a, shift):
    return a and a[-shift % len(a):] + a[:-shift % len(a)]
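Both versions handle the three test cases from the question, including the negative and oversized shifts:
>>> array = [1, 2, 3, 4, 5, 6]
>>> cirShift(array, 2)
[5, 6, 1, 2, 3, 4]
>>> cirShift(array, -3)
[4, 5, 6, 1, 2, 3]
>>> cirShift(array, 20)
[5, 6, 1, 2, 3, 4]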
I think this question may be an exercise in self-learning ('how to do X using just Python'), so my answer is auxiliary, but you can always use np.roll():
#test 1
import numpy as np
a = [1, 2, 3, 4, 5, 6]
np.roll(a, 2)
gives output
[5, 6, 1, 2, 3, 4]
and
#test 2
np.roll(a, -2)
gives output
[3, 4, 5, 6, 1, 2]
Even if you give a number that is larger than the array size, it handles the overflow:
#test 3
np.roll(a, 10)
gives output
[3, 4, 5, 6, 1, 2]
Rolling also works on multi-dimensional arrays and across specified axes, which is pretty neat.
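For example, here is a small illustration of rolling a 2-D array along a chosen axis (the 3x4 array is just an arbitrary example, not taken from the question):
import numpy as np
m = np.arange(12).reshape(3, 4)
np.roll(m, 1, axis=0)   # rolls the rows down by one
np.roll(m, -2, axis=1)  # rolls the columns two positions to the left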
def shift(l, shift_t):
    r = [*l]
    for n in range(abs(shift_t)):
        if shift_t < 0:
            r = r[1:] + [r[0]]
        else:
            r = [r[-1]] + r[:-1]
    return r
The key is to take one item of the list and place it on the opposite side, which is essentially all that shifting is doing. If you shift negative, you put the first one at the end, and if you shift positive, you put the last one at the beginning.
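A quick check with the question's inputs (the oversized shift simply wraps around, because each pass moves a single item):
>>> array = [1, 2, 3, 4, 5, 6]
>>> shift(array, 2)
[5, 6, 1, 2, 3, 4]
>>> shift(array, -3)
[4, 5, 6, 1, 2, 3]
>>> shift(array, 20)
[5, 6, 1, 2, 3, 4]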
I need to concatenate arrays but also merge the end of A with the start of B if they are overlapping.
[1, 2, 4] + [2, 4, 5] -> [1, 2, 4, 5]
[1, 2, 4] + [2, 5, 4] -> [1, 2, 4, 2, 5, 4]
[1, 2, 4] + [1, 2, 4, 5] -> [1, 2, 4, 5]
Note: Order of elements must be preserved, [4, 5] is not the same as [5, 4].
Note 2: The question can be understood like this too: We need the shortest possible extension of A such that the output ends with B.
Of course I can iterate over the second array and compare element by element, but I am looking for a nice Numpy solution.
I originally misunderstood the problem. The problem, from my understanding, is:
A two-item suffix of A matches a two-item prefix of B:
[1, 2, 4] +
[2, 4, 5] =>
[1, 2, 4, 5]
No suffix of A matches a prefix of B:
[1, 2, 4] +
[2, 5, 4] ->
[1, 2, 4, 2, 5, 4]
Then we can use this terribly inefficient function:
def merge(A, B):
    i = 0
    m = 0
    # Find the largest suffix of A that matches a prefix of B of the same length
    while i <= len(A):
        if A[-i:] == B[:i] and i > m:
            m = i
        i += 1
    return A + B[m:]
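A quick check against the examples from the question:
>>> merge([1, 2, 4], [2, 4, 5])
[1, 2, 4, 5]
>>> merge([1, 2, 4], [2, 5, 4])
[1, 2, 4, 2, 5, 4]
>>> merge([1, 2, 4], [1, 2, 4, 5])
[1, 2, 4, 5]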
Below is a solution using NumPy. It's not ideal, since it requires a (possibly unneeded) sort, and an iteration. Both the sorting and iteration should be over a relatively small array (or even a single element).
import numpy as np

def merge(left, right):
    """Concatenate two arrays, merging the overlapping end and start of
    the left and right array."""
    # We can limit the search to the maximum possible overlap between
    # the arrays, which is the minimum of the two lengths.
    l = min(len(left), len(right))
    # Find all indices in `right` where the element matches the last element
    # of `left`. Sorting is needed, since the `nonzero` documentation doesn't
    # explicitly state whether the returned indices follow the order in
    # `right`. As long as there are few matches, sorting will not be a
    # showstopper. The sorted array is reversed, to start from the back of
    # the right array and work towards the front, until there is a proper match.
    for i in np.sort(np.nonzero(right[:l] == left[-1])[0])[::-1]:
        # Check if the subarrays are equal
        if np.all(left[-i - 1:] == right[:i + 1]):
            return np.concatenate([left, right[i + 1:]])
    # No match
    return np.concatenate([left, right])
a = np.array([1, 2, 4])
b = np.array([2, 4, 5])
c = np.array([2, 5, 4])
d = np.array([1, 2, 4, 5])
e = np.array([1, 2, 4, 2])
f = np.array([2, 4, 2, 5])
print(merge(a, b))
print(merge(a, c))
print(merge(a, d))
print(merge(e, b))
print(merge(e, f))
which yields
[1 2 4 5]
[1 2 4 2 5 4]
[1 2 4 5]
[1 2 4 2 4 5]
[1 2 4 2 5]
I have an O(n) solution, albeit without Numpy:
def merge(a, b):
    n_a = len(a)
    n = min(n_a, len(b))
    m = 0
    for i in range(1, n + 1):
        if b[n - i] == a[n_a - 1 - m]:
            m += 1
        else:
            m = 0
    return a + b[m:]
You could do it like this.
def concatenate(a, b):
    ret = a.copy()
    for element in b:
        if element not in ret:
            ret.append(element)
    return ret
This preserves the order of the elements, in a + b formation.
I am given a 1D array of numbers.
I need to go through the array adding each consecutive element to form a sum. Once this sum reaches a certain value, it forms the first element of a new array. The sum is then reset and the process repeats, thus iterating over the whole array.
For example if given:
[1, 3, 4, 5, 2, 5, 3]
and requiring the minimum sum to be 5,
the new array would be:
[8, 5, 7]
Explicitly: [1 + 3 + 4, 5, 2 + 5]
I then also need to keep a record of the way the elements were combined for that particular array: I need to be to take a different array of the same length and combine the elements in the same way as above.
e.g. give the array
[1, 2, 1, 1, 3, 2, 1]
I require the output
[4, 1, 5]
Explicitly: [1 + 2 + 1, 1, 3 + 2]
I have accomplished this with nested while loops and increment counters, but it is very ugly. The array named "record" contains the number of old elements summed to make each element of the new array, i.e. [3, 1, 2].
import numpy as np

def bin(array, min_sum):
    num_points = len(array)
    # Create empty output.
    output = list()
    record = list()
    i = 0
    while i < num_points:
        sum = 0
        j = 0
        while sum < min_sum:
            # Break out if the end of the data is reached whilst in the loop.
            if i + j == num_points:
                break
            sum += array[i + j]
            j += 1
        output.append(sum)
        record.append(j)
        i += j
    # The final data point does not reach the min sum.
    del output[-1]
    return output

if __name__ == "__main__":
    array = [1, 3, 4, 5, 2, 5, 3]
    print bin(array, 5)
I would advise you to simply walk through the list. Add each element to an accumulator like the_sum (do not use sum as a name, since it is a builtin), and once the_sum reaches min_sum (or more), append it to the result and reset the_sum to zero. Like:
def bin(array, min_sum):
    result = []
    the_sum = 0
    for elem in array:
        the_sum += elem
        if the_sum >= min_sum:
            result.append(the_sum)
            the_sum = 0
    return result
I leave combining the other array the same way as an exercise, but as a hint: use an additional accumulator and zip to iterate over both arrays concurrently.
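A minimal sketch of that hint (the name bin_pair is mine, not part of the answer above): accumulate both lists in lockstep and reset both sums whenever the first one reaches min_sum.
def bin_pair(array, other, min_sum):
    result, other_result = [], []
    the_sum, other_sum = 0, 0
    for elem, other_elem in zip(array, other):
        the_sum += elem
        other_sum += other_elem
        # the first accumulator decides where a group ends; the second one mirrors the grouping
        if the_sum >= min_sum:
            result.append(the_sum)
            other_result.append(other_sum)
            the_sum, other_sum = 0, 0
    return result, other_result
With the two arrays from the question this returns ([8, 5, 7], [4, 1, 5]).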
Here is a straightforward solution: which computes a list of boolean values, where a value is true when the accumulated sum equals or exceeds the target value, and calc computes an accumulation using this list.
def which(l, s):
    w, a = [], 0
    for e in l:
        a += e
        c = (a >= s)
        w.append(c)
        if c:
            a = 0
    return w

def calc(l, w):
    a = 0
    for (e, c) in zip(l, w):
        a += e
        if c:
            yield a
            a = 0
Here is an interactive demonstration:
>>> l1 = [1, 3, 4, 5, 2, 5, 3]
>>> w = which(l1, 5)
>>> w
[False, False, True, True, False, True, False]
>>> list(calc(l1, w))
[8, 5, 7]
>>> l2 = [1, 2, 1, 1, 3, 2, 1]
>>> list(calc(l2, w))
[4, 1, 5]
You can use these short solutions, which I found out after a long struggle with flattening arrays.
For getting bounded sums use:
f = lambda a,x,j,l: 0 if j>=l else [a[i] for i in range(j,l) if sum(a[j:i])<x]
This outputs:
>>> f = lambda a,x,j,l: 0 if j>=l else [a[i] for i in range(j,l) if sum(a[j:i])< x]
>>> a= [1, 3, 4, 5, 2, 5, 3]
>>> f(a,5,0,7)
[1, 3, 4]
>>> sum(f(a,5,0,7))
8
>>> sum(f(a,5,3,7))
5
>>> sum(f(a,5,4,7))
7
>>>
To get your records use the function:
>>> import numpy as np
>>> y = lambda a,x,f,j,l: [] if j>=l else list(np.append(j,np.array(y(a,x,f,j+len(f(a,x,j,l)),l))))
From here, you can get both the array of records and the sums:
>>> listt=y(a,5,f,0,len(a))
>>> listt
[0.0, 3.0, 4.0, 6.0]
>>> [sum(f(a,5,int(listt[u]),len(a))) for u in range(0,len(listt)-1)]
[8, 5, 7]
>>>
Now, for a bit of magic, you can even use it as an index-conditional boundary for the second vector:
>>> b=[1, 2, 1, 1, 3, 2, 1]
>>> [sum(f(b,5,int(listt[u]),int(listt[u+1]))) for u in range(0,len(listt)-1)]
[4, 1, 5]
>>>
I have a list similar to:
[1 2 1 4 5 2 3 2 4 5 3 1 4 2]
I want to create a list of x random elements from this list where none of the chosen elements are the same. The difficult part is that I would like to do this by using list comprehension...
So possible results if x = 3 would be:
[1 2 3]
[2 4 5]
[3 1 4]
[4 5 1]
etc...
Thanks!
I should have specified that I cannot convert the list to a set. Sorry!
I need the randomly selected numbers to be weighted. So if 1 appears 4 times in the list and 3 appears 2 times in the list, then 1 is twice as likely to be selected...
Disclaimer: the "use a list comprehension" requirement is absurd.
Moreover, if you want to use the weights, there are many excellent approaches listed at Eli Bendersky's page on weighted random sampling.
The following is inefficient, doesn't scale, etc., etc.
That said, it has not one but two (TWO!) list comprehensions, returns a list, never duplicates elements, and respects the weights in a sense:
>>> s = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
>>> [x for x in random.choice([p for c in itertools.combinations(s, 3) for p in itertools.permutations(c) if len(set(c)) == 3])]
[3, 1, 2]
>>> [x for x in random.choice([p for c in itertools.combinations(s, 3) for p in itertools.permutations(c) if len(set(c)) == 3])]
[5, 3, 4]
>>> [x for x in random.choice([p for c in itertools.combinations(s, 3) for p in itertools.permutations(c) if len(set(c)) == 3])]
[1, 5, 2]
.. or, as simplified by FMc:
>>> [x for x in random.choice([p for p in itertools.permutations(s, 3) if len(set(p)) == 3])]
[3, 5, 2]
(I'll leave the x for x in there, even though it hurts not to simply write list(random.choice(..)) or just leave it as a tuple..)
Generally, you don't want to do this sort of thing in a list comprehension -- it'll lead to much harder-to-read code. However, if you really must, we can write a completely horrible one-liner:
>>> values = [random.randint(0,10) for _ in xrange(12)]
>>> values
[1, 10, 6, 6, 3, 9, 0, 1, 8, 9, 1, 2]
>>> # This is the 1 liner -- The other line was just getting us a list to work with.
>>> [(lambda x=random.sample(values,3):any(values.remove(z) for z in x) or x)() for _ in xrange(4)]
[[6, 1, 8], [1, 6, 10], [1, 0, 2], [9, 3, 9]]
Please never use this code -- I only post it for fun/academic reasons.
Here's how it works:
I create a function inside the list comprehension with a default argument of 3 randomly selected elements from the input list. Inside the function, I remove the elements from values so that they aren't available to be picked again. Since list.remove returns None, I can use any(lst.remove(x) for x in ...) to remove the values and return False. Since any returns False, we hit the or clause, which just returns x (the default value with 3 randomly selected items) when the function is called. All that is left then is to call the function and let the magic happen.
The one catch here is that you need to make sure that the number of groups you request (here I chose 4) multiplied by the number of items per group (here I chose 3) is less than or equal to the number of values in your input list. It may seem obvious, but it's probably worth mentioning anyway...
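For instance, a simple guard for that constraint could look like this (groups and per_group are illustrative names, not part of the code above):
groups, per_group = 4, 3
assert groups * per_group <= len(values), "not enough values to form the requested groups"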
Here's another version where I pull shuffle into the list comprehension:
>>> lst = [random.randint(0,10) for _ in xrange(12)]
>>> lst
[3, 5, 10, 9, 10, 1, 6, 10, 4, 3, 6, 5]
>>> [lst[i*3:i*3+3] for i in xrange(shuffle(lst) or 4)]
[[6, 10, 6], [3, 4, 10], [1, 3, 5], [9, 10, 5]]
This is significantly better than my first attempt; however, most people would still need to stop and scratch their heads a bit before they figured out what this code was doing. I still assert that it would be much better to do this in multiple lines.
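For comparison, a multi-line version of the same shuffle-and-slice idea might look like this (the name random_groups is mine; like the one-liner above, it does not guarantee unique values within a group):
import random
def random_groups(lst, group_size, n_groups):
    # shuffle a copy so the original list is untouched, then cut it into consecutive slices
    pool = lst[:]
    random.shuffle(pool)
    return [pool[i * group_size:(i + 1) * group_size] for i in range(n_groups)]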
If I'm understanding your question properly, this should work:
import random

def weighted_sample(L, x):
    # might consider raising some kind of exception if len(set(L)) < x
    while True:
        ans = random.sample(L, x)
        if len(set(ans)) == x:
            return ans
Then if you want many such samples you can just do something like:
[weighted_sample(L, x) for _ in range(num_samples)]
I have a hard time conceiving of a comprehension for the sampling logic that isn't just obfuscated. The logic is a bit too complicated. It sounds like something randomly tacked on to a homework assignment to me.
If you don't like infinite looping, I haven't tried it but I think this will work:
import collections
import random

def weighted_sample(L, x):
    ans = []
    c = collections.Counter(L)
    while len(ans) < x:
        r = random.randint(0, sum(c.values()))
        for k in c:
            if r < c[k]:
                ans.append(k)
                del c[k]
                break
            else:
                r -= c[k]
        else:
            # maybe throw an exception, since this should never happen on valid input
            pass
    return ans
First of all, I assume your list looks like this:
[1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
So if you want to print the permutations of size 3 from the given list, you can do the following.
import itertools

l = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
for permutation in itertools.permutations(list(set(l)), 3):
    print permutation,
Output:
(1, 2, 3) (1, 2, 4) (1, 2, 5) (1, 3, 2) (1, 3, 4) (1, 3, 5) (1, 4, 2) (1, 4, 3) (1, 4, 5) (1, 5, 2) (1, 5, 3) (1, 5, 4) (2, 1, 3) (2, 1, 4) (2, 1, 5) (2, 3, 1) (2, 3, 4) (2, 3, 5) (2, 4, 1) (2, 4, 3) (2, 4, 5) (2, 5, 1) (2, 5, 3) (2, 5, 4) (3, 1, 2) (3, 1, 4) (3, 1, 5) (3, 2, 1) (3, 2, 4) (3, 2, 5) (3, 4, 1) (3, 4, 2) (3, 4, 5) (3, 5, 1) (3, 5, 2) (3, 5, 4) (4, 1, 2) (4, 1, 3) (4, 1, 5) (4, 2, 1) (4, 2, 3) (4, 2, 5) (4, 3, 1) (4, 3, 2) (4, 3, 5) (4, 5, 1) (4, 5, 2) (4, 5, 3) (5, 1, 2) (5, 1, 3) (5, 1, 4) (5, 2, 1) (5, 2, 3) (5, 2, 4) (5, 3, 1) (5, 3, 2) (5, 3, 4) (5, 4, 1) (5, 4, 2) (5, 4, 3)
Hope this helps. :)
>>> from random import shuffle
>>> L = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
>>> x=3
>>> shuffle(L)
>>> zip(*[L[i::x] for i in range(x)])
[(1, 3, 2), (2, 2, 1), (4, 5, 3), (1, 4, 4)]
You could also use a generator expression instead of the list comprehension
>>> zip(*(L[i::x] for i in range(x)))
[(1, 3, 2), (2, 2, 1), (4, 5, 3), (1, 4, 4)]
Starting with a way to do it without list comprehensions:
import random
import itertools
alphabet = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
def alphas():
    while True:
        yield random.choice(alphabet)

def filter_unique(iter):
    found = set()
    for a in iter:
        if a not in found:
            found.add(a)
            yield a

def dice(x):
    while True:
        yield itertools.islice(
            filter_unique(alphas()),
            x
        )

for i, output in enumerate(dice(3)):
    print list(output)
    if i > 10:
        break
The part where list comprehensions have trouble is filter_unique(), since a list comprehension has no 'memory' of what it has already output. A possible solution would be to generate many outputs until one of good quality is found, as #DSM suggested.
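A minimal sketch of that retry idea (an illustration, not code from either answer): keep drawing candidate samples until one happens to contain x distinct values.
import random
def retry_sample(l, x):
    # draw x weighted picks; if any of them collide, throw the whole candidate away and retry
    candidate = [random.choice(l) for _ in range(x)]
    while len(set(candidate)) < x:
        candidate = [random.choice(l) for _ in range(x)]
    return candidate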
The slow, naive approach is:
import random
def pick_n_unique(l, n):
    res = set()
    while len(res) < n:
        res.add(random.choice(l))
    return list(res)
This will pick elements and only quit when it has n unique ones:
>>> pick_n_unique([1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2], 3)
[2, 3, 4]
>>> pick_n_unique([1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2], 3)
[3, 4, 5]
However, it can get slow if, for example, you have a list with thirty 1s and one 2, since once it has a 1 it'll keep spinning until it finally hits a 2. A better approach is to count the number of occurrences of each unique element, choose a random one weighted by its occurrence count, remove that element from the counts, and repeat until you have the desired number of elements:
import collections

def weighted_choice(item__counts):
    total_counts = sum(count for item, count in item__counts.items())
    which_count = random.random() * total_counts
    for item, count in item__counts.items():
        which_count -= count
        if which_count < 0:
            return item
    raise ValueError("Should never get here")

def pick_n_unique(items, n):
    item__counts = collections.Counter(items)
    if len(item__counts) < n:
        raise ValueError(
            "Can't pick %d values with only %d unique values" % (
                n, len(item__counts)))
    res = []
    for i in xrange(n):
        choice = weighted_choice(item__counts)
        res.append(choice)
        del item__counts[choice]
    return tuple(res)
Either way, this is a problem not well-suited to list comprehensions.
def sample(self, population, k):
    n = len(population)
    if not 0 <= k <= n:
        raise ValueError("sample larger than population")
    result = [None] * k
    try:
        selected = set()
        selected_add = selected.add
        for i in xrange(k):
            j = int(random.random() * n)
            while j in selected:
                j = int(random.random() * n)
            selected_add(j)
            result[i] = population[j]
    except (TypeError, KeyError):  # handle (at least) sets
        if isinstance(population, list):
            raise
        return self.sample(tuple(population), k)
    return result
Above is a simplified version of the sample function from Lib/random.py. I only removed some optimization code for small data sets. The code shows us directly how to implement a customized sample function:
get a random number
if the number has appeared before, discard it and get a new one
repeat the above steps until you get all the sample numbers you want.
Then the real problem turns out to be how to get a random value from a list by weight. This could be done with the original random.sample(population, 1) from the Python standard library (a little overkill here, but simple).
Below is an implementation. Because duplicates represent weight in your given list, we can use int(random.random() * array_length) to get a random index into your array.
import random

arr = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]

def sample_by_weight(population, k):
    n = len(population)
    if not 0 <= k <= len(set(population)):
        raise ValueError("sample larger than population")
    result = [None] * k
    try:
        selected = set()
        selected_add = selected.add
        for i in xrange(k):
            j = population[int(random.random() * n)]
            while j in selected:
                j = population[int(random.random() * n)]
            selected_add(j)
            result[i] = j
    except (TypeError, KeyError):  # handle (at least) sets
        if isinstance(population, list):
            raise
        return sample_by_weight(tuple(population), k)
    return result
[sample_by_weight(arr,3) for i in range(10)]
With the setup:
from random import shuffle
from collections import deque
l = [1, 2, 1, 4, 5, 2, 3, 2, 4, 5, 3, 1, 4, 2]
This code:
def getSubLists(l, n):
    shuffle(l)            # shuffle l so the elements are in 'random' order
    l = deque(l, len(l))  # create a structure with O(1) insert/pop at both ends
    while l:              # while there are still elements to choose
        sample = set()    # use a set (O(1) lookup) to check for duplicates
        while len(sample) < n and l:  # until the sample is n long or l is exhausted
            top = l.pop()             # get the top value in l
            if top in sample:
                l.appendleft(top)     # add it to the back of l for a later sample
            else:
                sample.add(top)       # it isn't in the sample already, so use it
        yield sample      # yield the sample
You end up with:
for s in getSubLists(l,3):
print s
>>>
set([1, 2, 5])
set([1, 2, 3])
set([2, 4, 5])
set([2, 3, 4])
set([1, 4])