I am working through the Codility training exercises, specifically CyclicRotation, where you rotate an array by a given number of steps. I have found several solutions, and the one that looks best does not really make sense to me.
So the solution is:
def solution(A, K):
    if not A:
        return A
    if K == 0:
        return A
    K = -K % len(A)
    print(K)
    print(A)
    print(A[K:])
    print(A[:K])
    return A[K:] + A[:K]
So I get the splitting of the array by K, as this tells you how many elements should move. But I don't get how K is computed. Why do you take -K and then take it modulo the length of the array? That part does not make sense to me.
Imagine repeating your array infinitely in both directions. So, for example, if you have
A = [0,1,2]
imagine
[...,0,1,2,0,1,2,0,1,2,...]
Say we want to circularly shift A to the right by 1. Our array A should go from [0,1,2] to [2,0,1]: the last element should become the 0th, and every other element should advance by 1 position. In terms of the solution above, that is solution(A, 1).
One way to think of this is that instead of starting our array at index 0 (the middle 0 value) in our infinitely-repeated array and taking the next 3 values, we can start at index -1 and take the next 3 values. Then, instead of having [0,1,2], we'd have [2,0,1]. That's what we want.
Likewise, if we want to circular shift A to the left by 1 (which would be solution(A,-1)), we can start at index 1 and count 3 values from there instead. Note that if we want to shift right, we move our index left, and vice versa.
This process is exactly what K = -K % len(A) does. The K = -K part of it deals with the fact that if you're shifting right, you can do that by moving your index to the left along the infinite sequence. The % len(A) part of it essentially treats A as an infinite sequence.
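As a quick concrete check of that arithmetic (just an illustrative snippet using the same slicing, not part of the original solution):

A = [0, 1, 2]

K = 1                   # shift right by 1
k = -K % len(A)         # -1 % 3 == 2 in Python
print(A[k:] + A[:k])    # [2, 0, 1]

K = -1                  # shift left by 1
k = -K % len(A)         # 1 % 3 == 1
print(A[k:] + A[:k])    # [1, 2, 0]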
I am sorry if the title is a misnomer and/or doesn't properly describe what this is all about; you are welcome to edit the title to make it clear once you understand what this is about.
The thing is very simple, but I find it hard to describe. It is sort of like a number system, except it is about lists of integers.
We start with a list of integers containing only zero. On each iteration we add one to it, until a certain limit is reached; then we insert 1 at the start of the list and set the second element to 0, and iterate over the second element until the limit is reached again; then we add 1 to the first element and set the second element to 0; and when the first element reaches the limit, we insert another element with the value 1 at the start of the list and zero the two elements after it, et cetera.
And just like this: when a place reaches the limit, zero that place and the places after it and increase the place before it by one, and when all available places have reached the limit, add a 1 on the left. For example:
0
1
2
1, 0
1, 1
1, 2
2, 0
2, 1
2, 2
1, 0, 0
The limit doesn't have to be three.
This is what I currently have that does something similar to this:
array = []
for c in range(26):
    for b in range(26):
        for a in range(26):
            array.append((c, b, a))
I don't want leading zeroes, but I can remove them; what I can't figure out is how to do this with a variable number of elements.
What I want is a function that takes two arguments, limit (or base) and number of tuples to be returned, and returns the first n such tuples in order.
This must be very simple, but I just can't figure it out, and Google returns completely irrelevant results, so I am asking for help here.
How can this be done? Any help will truly be appreciated!
Hmm, I was thinking about something like this, but unfortunately I can't make it work. Please help me figure out why it doesn't work and how to fix it:
array = []
numbers = [0]
for i in range(1000):
    numbers[-1] += 1
    while 26 in numbers:
        index = numbers.index(26)
        numbers[index:] = [0] * (len(numbers) - index)
        if index != 0:
            numbers[index - 1] += 1
        else:
            numbers.insert(0, 1)
    array.append(numbers)
I don't quite understand it; my testing shows that everything inside the loop works perfectly fine outside the loop, and the results are correct, but it simply will not work in a loop. I don't know the reason for this; it is very strange.
I discovered that if I change the last line to print(numbers), everything prints correctly, but if I use append, it looks as though only the last value gets added. How so?
from math import log

def number_to_base(n, base):
    number = []
    for digit in range(int(log(n + 0.500001, base)), -1, -1):
        number.append(n // base**digit % base)
    return number

def first_numbers_in_base(n, base):
    numbers = []
    for i in range(n):
        numbers.append(tuple(number_to_base(i, base)))
    return numbers

# tests:
print(first_numbers_in_base(10, 3))
print(number_to_base(1048, 10))
print(number_to_base(int("10201122110212", 3), 3))
print(first_numbers_in_base(25, 10))
I finally did it!
The logic is very simple; the hard part was figuring out why it wouldn't work in a loop. It turns out I need to use .copy(), because .append() does not store a snapshot of the list's current contents, it stores a reference to the list object itself, so every later in-place modification is visible through every stored reference, and the array ends up containing many references to one and the same list.
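A tiny illustration of that reference behaviour (my own example, not taken from the code in question):

inner = [0]
outer = []
outer.append(inner)   # stores a reference to inner, not a snapshot of its contents
inner[-1] += 1
print(outer)          # [[1]] -- the stored entry changed along with inner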
So here is the code:
def steps(base, num):
    array = []
    numbers = [0]
    for i in range(num):
        copy = numbers.copy()
        copy[-1] += 1
        while base in copy:
            index = copy.index(base)
            copy[index:] = [0] * (len(copy) - index)
            if index != 0:
                copy[index - 1] += 1
            else:
                copy.insert(0, 1)
        array.append(copy)
        numbers = copy
    return array
Use it like this:
steps(26, 1000)
For the first 1000 lists in base 26.
Here is a function that satisfies the original requirements (it returns a list of tuples, and the first tuple represents 0) and is faster than the other functions that have been posted in this thread:
def first_numbers_in_base(n, base):
    if n < 2:
        if n:
            return [(0,)]
        return []
    numbers = [(0,), (1,)]
    base -= 1
    l = -1
    num = [1]
    for i in range(n - 2):
        if num[-1] == base:
            num[-1] = 0
            for j in range(l, -1, -1):
                if num[j] == base:
                    num[j] = 0
                else:
                    num[j] += 1
                    break
            else:
                num = [1] + num
                l += 1
        else:
            num[-1] += 1
        numbers.append(tuple(num))  # replace tuple(num) with num.copy() if you want the result to contain lists instead of tuples
    return numbers
The task is the following: sum the list elements with even indexes and multiply the result by the last element of the list.
I have this one-liner solution in Python.
array = [-37,-36,-19,-99,29,20,3,-7,-64,84,36,62,26,-76,55,-24,84,49,-65,41]
print sum(i for i in array if array.index(i) % 2 == 0)*array[-1] if array != [] else 0
My result is -1476 (the calculation being 41 * (-37 - 19 + 29 + 3 - 64 + 36 + 26 + 55 - 65)).
The right result is 1968.
I can't figure out why this code is not working correctly in this particular case.
This is what you are looking for:
array[-1] * sum(array[::2])
array[::2] traverses the array from first index to last index in steps of two, i.e., every alternate number. sum(array[::2]) gets the sum of alternate numbers from the original list.
Using index will work as expected only when you are sure the list does not have duplicates, which is why your code fails to give the correct result.
There is a repeated element, 84, in the list, so array.index does not behave as you expected. Also, your code has quadratic complexity, which is not required.
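A quick check with the array from the question shows this: both occurrences of 84 resolve to the first one, at the odd index 9, so the second 84 gets dropped from the sum.
>>> array.index(84)
9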
To fix your code with a minimal amount of editing, it would look something like this:
array = [-37,-36,-19,-99,29,20,3,-7,-64,84,36,62,26,-76,55,-24,84,49,-65,41]
print sum(array[i] for i in range(len(array)) if i % 2 == 0)*array[-1] if array != [] else 0
>>> sum(x for i, x in enumerate(array) if not i % 2)*array[-1]
1968
Use the built-in enumerate function, since there are duplicate elements in your list and list.index(x) returns the index of the first element equal to x (as stated in the documentation). Also take a look at the documentation on enumerate.
I have already solved the problem of counting inversions using mergesort; now I am wondering whether it is possible to compute the count using quicksort. I have also coded quicksort, but I don't know how to do the counting there. Here is my mergesort code:
def Merge_and_Count(AL, AR):
    count = 0
    i = 0
    j = 0
    A = []
    for index in range(0, len(AL) + len(AR)):
        if i < len(AL) and j < len(AR):
            if AL[i] > AR[j]:
                A.append(AR[j])
                j = j + 1
                count = count + len(AL) - i
            else:
                A.append(AL[i])
                i = i + 1
        elif i < len(AL):
            A.append(AL[i])
            i = i + 1
        elif j < len(AR):
            A.append(AR[j])
            j = j + 1
    return (count, A)

def Sort_and_Count(Arrays):
    if len(Arrays) == 1:
        return (0, Arrays)
    list1 = Arrays[:len(Arrays) // 2]
    list2 = Arrays[len(Arrays) // 2:]
    (LN, list1) = Sort_and_Count(list1)
    (RN, list2) = Sort_and_Count(list2)
    (M, Arrays) = Merge_and_Count(list1, list2)
    return (LN + RN + M, Arrays)
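For example, on a small list this should return the inversion count together with the sorted list (a quick sanity check, assuming the code above):
>>> Sort_and_Count([3, 6, 2, 5, 4, 1])
(10, [1, 2, 3, 4, 5, 6])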
Generally no, because during the partitioning, when you move a value to its correct side of the pivot, you don't know how many of the values you're moving it past are smaller than it and how many are larger. So, as soon as you do that you've lost information about the number of inversions in the original input.
I have come across this problem a few times. On the whole, I think it should still be possible to use quicksort to compute the inversion count, as long as we make some modifications to the original quicksort algorithm. (But I have not verified it yet, sorry for that.)
Consider the array 3, 6, 2, 5, 4, 1. Suppose we use 3 as the pivot. The most-voted answer is right that the exchanges might mess up the order of the other numbers. However, we can do it differently by introducing a temporary array:
Iterate over the array a first time. During this pass, move all the numbers less than 3 into the temporary array. For each such number, also record how many numbers larger than 3 appear before it. In this case, the number 2 has one larger number (6) before it, and the number 1 has three larger numbers (6, 5, 4) before it. This can be done with a simple counter.
Then we copy 3 into the temporary array.
Then we iterate over the array again and move the numbers larger than 3 into the temporary array. In the end we get 2, 1, 3, 6, 5, 4.
The question is: how many inversion pairs are accounted for (and lost) during this process? The number is the sum of all the counts from the first step, plus the count of numbers less than the pivot from the second step. With that we have counted all the inversion pairs in which one element is >= the pivot and the other is < the pivot. Then we can recursively deal with the left part and the right part.
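To make that idea concrete, here is a minimal sketch of the counting-during-partition scheme described above (my own illustration rather than a verified implementation; count_inversions_qs is a made-up name):

def count_inversions_qs(a):
    # Quicksort-style inversion counting: stably partition into "smaller" and
    # "larger" temporary lists, counting cross-pivot inversions during the pass.
    if len(a) <= 1:
        return 0, list(a)
    pivot = a[0]
    smaller, larger = [], []
    cross = 0      # inversions with one element < pivot and the other >= pivot
    seen_ge = 0    # elements >= pivot seen so far in this pass
    for x in a[1:]:
        if x < pivot:
            cross += seen_ge + 1   # +1 for the pivot itself, which precedes x
            smaller.append(x)
        else:
            seen_ge += 1
            larger.append(x)
    left, smaller = count_inversions_qs(smaller)
    right, larger = count_inversions_qs(larger)
    return cross + left + right, smaller + [pivot] + larger

print(count_inversions_qs([3, 6, 2, 5, 4, 1]))   # (10, [1, 2, 3, 4, 5, 6])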
I'm trying to create a sorting technique for a list of numbers. What it does is compare two numbers: the first is the first number in the list, and the other is the number at index 2^k - 1.
2^k - 1 = [1,3,7, 15, 31, 63...]
For example, if I had a list [1, 4, 3, 6, 2, 10, 8, 19]
The length of this list is 8, so the program should find the largest number in the 2^k - 1 list that is less than 8, which in this case is 7.
So now it will compare the first number in the list (1) with the number 7 positions later (19). If the first is greater than the second, they swap positions.
After this step, it will continue on to 4 and the 7th number after that, but that doesn't exist, so now it should compare 4 with the 3rd number after it, because 3 is the next number down in the 2^k - 1 list.
So it should compare 4 with 2 and swap them if they are out of order. This goes on and on until the gap reaches 1 in the 2^k - 1 list, at which point the list will finally be sorted.
I need help getting started on this code.
So far, I've written a small piece of code that builds the 2^k - 1 list, but that's as far as I've gotten.
a = []
for i in range(10):
    a.append(2**(i+1) - 1)
print(a)
EXAMPLE:
Consider sorting the sequence V = 17, 4, 8, 2, 11, 5, 14, 9, 18, 12, 7, 1. The skipping sequence 1, 3, 7, 15, … yields r = 7 as the biggest value that fits, so looking at V, the first sparse subsequence is 17, 9; as we pass along V we produce 9, 4, 8, 2, 11, 5, 14, 17, 18, 12, 7, 1 after the first swap, and 9, 4, 8, 2, 1, 5, 14, 17, 18, 12, 7, 11 after using r = 7 completely. Using a = 3 (the next smaller term in the skipping sequence), the first sparse subsequence is 9, 2, 14, 12, which when applied to V gives 2, 4, 8, 9, 1, 5, 12, 17, 18, 14, 7, 11; the remaining a = 3 passes give 2, 1, 8, 9, 4, 5, 12, 7, 18, 14, 17, 11, and then 2, 1, 5, 9, 4, 8, 12, 7, 11, 14, 17, 18. Finally, with a = 1, we get 1, 2, 4, 5, 7, 8, 9, 11, 12, 14, 17, 18.
You might wonder, given that at the end we do a sort with no skips, why this might be any faster than simply doing that final step as the only step at the beginning. Think of it as a comb going through the sequence: in the earlier steps we're using coarse combs to get distant things into roughly the right order, then progressively finer combs, until at the end our fine-tuning is dealing with a nearly-sorted sequence needing little adjustment.
p = 0
x = len(V)      # the length of V, used to pick the starting gap from a
for j in a:     # for every element in a (1, 3, 7, ...)
    if x >= j:  # if the length is greater than or equal to the current gap candidate
        p = j   # remember j as the current best gap p
So that finds the gap at which the first number in the list should be compared, but now I need to write something that keeps doing that until the gap is out of range, so that it switches from 3 to 1 and then keeps checking the smaller gaps until the list is sorted.
The sorting algorithm you're describing actually is called Combsort. In fact, the simpler bubblesort is a special case of combsort where the gap is always 1 and doesn't change.
Since you're stuck on how to start this, here's what I recommend:
Implement the bubblesort algorithm first. The logic is simpler and makes it much easier to reason about as you write it.
Once you've done that, you have the important algorithmic structure in place, and from there it's just a matter of adding the gap-length calculation into the mix. That means computing the gap length with your particular formula, then modifying the loop control index and the inner comparison index to use the calculated gap length.
After each iteration of the loop you decrease the gap length (in effect making the comb finer) by some scaling amount.
The last step would be to experiment with different gap lengths and formulas to see how it affects algorithm efficiency.
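To tie those steps together, here is a rough sketch of what that could look like (my own illustration using the question's 2^k - 1 gap sequence rather than the usual shrink factor, not a definitive implementation; comb_sort_pow2_gaps is an invented name):

def comb_sort_pow2_gaps(values):
    values = list(values)   # work on a copy
    n = len(values)
    # Gap sequence 1, 3, 7, 15, ... keeping only gaps smaller than the list length.
    gaps = [2**k - 1 for k in range(1, n.bit_length() + 1) if 2**k - 1 < n]
    for gap in reversed(gaps):              # largest gap first, ending with gap 1
        swapped = True
        while swapped:                      # bubblesort-style passes at this gap
            swapped = False
            for i in range(n - gap):
                if values[i] > values[i + gap]:
                    values[i], values[i + gap] = values[i + gap], values[i]
                    swapped = True
    return values

print(comb_sort_pow2_gaps([17, 4, 8, 2, 11, 5, 14, 9, 18, 12, 7, 1]))

Note that with gap 1 the inner loop degenerates to plain bubblesort, so the result is guaranteed to be fully sorted; the larger gaps just do most of the long-distance work first, as the comb analogy in the example above describes.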
Definition: array A = (a1, a2, ..., an) is >= B = (b1, b2, ..., bn) if they have equal size and a_i >= b_i for every i from 1 to n.
For example:
[1,2,3] >= [1,2,0]
[1,2,0] not comparable with [1,0,2]
[1,0,2] >= [1,0,0]
I have a list which consists of a big number of such arrays (approx. 10000, but can be bigger). Arrays' elements are positive integers. I need to remove all arrays from this list that are bigger than at least one of other arrays. In other words: if there exists such B that A >= B then remove A.
Here is my current O(n^2) approach, which is extremely slow: I simply compare every array with all other arrays and remove it if it's bigger. Are there any ways to speed it up?
import numpy as np
import time
import random

def filter_minimal(lst):
    n = len(lst)
    to_delete = set()
    for i in xrange(n - 1):
        if i in to_delete:
            continue
        for j in xrange(i + 1, n):
            if j in to_delete:
                continue
            if all(lst[i] >= lst[j]):
                to_delete.add(i)
                break
            elif all(lst[i] <= lst[j]):
                to_delete.add(j)
    return [lst[i] for i in xrange(len(lst)) if i not in to_delete]

def test(number_of_arrays, size):
    x = map(np.array, [[random.randrange(0, 10) for _ in xrange(size)] for i in xrange(number_of_arrays)])
    return filter_minimal(x)

a = time.time()
result = test(400, 10)
print time.time() - a
print len(result)
P.S. I've noticed that using numpy.all instead of the built-in Python all slows the program down dramatically. What could be the reason for that?
Might not be exactly what you are asking for, but this should get you started.
import numpy as np
import time
import random

def compare(x, y):
    # Reshape x to a higher-dimensional array
    compare_array = x.reshape(-1, 1, x.shape[-1])
    # You can now compare every x with every y element-wise simultaneously
    mask = (y >= compare_array)
    # Build a mask that first checks that every element of a y row is greater than
    # or equal to the x row, and then checks that this happens at least once.
    mask = np.any(np.all(mask, axis=-1), axis=-1)
    # Apply this mask to x
    return x[mask]

def test(number_of_arrays, size, maxval):
    # Create arrays of shape (number_of_arrays, size) with maximum value maxval.
    x = np.random.randint(maxval, size=(number_of_arrays, size))
    y = np.random.randint(maxval, size=(number_of_arrays, size))
    return compare(x, y)

print test(50, 10, 20)
First of all, we need to carefully check the objective. Is it true that we delete any array that is > ANY of the other arrays, even the deleted ones? For example, if A > B and C > A and B = C, do we need to delete only A, or both A and C? If we only need to delete INCOMPATIBLE arrays, then it is a much harder problem, because different partitions of the set of arrays may be compatible, and you then have the problem of finding the largest valid partition.
Assuming the easy problem, a better way to define the problem is that you want to KEEP all arrays which have at least one element < the corresponding element in ALL the other arrays. (In the hard problem, it is the corresponding element in the other KEPT arrays. We will not consider this.)
Stage 1
To solve this problem what you do is arrange the arrays in columns and then sort each row while maintaining the key to the array and the mapping of each array-row to position (POSITION lists). For example, you might end up with a result in stage 1 like this:
row 1: B C D A E
row 2: C A E B D
row 3: E D B C A
Meaning that for the first element (row 1) array B has a value >= C, C >= D, etc.
Now, sort and iterate over the last column of this matrix ({E, D, A} in the example). For each item, check whether its element is less than the previous element in its row. For example, in row 1 you would check whether E < A. If this is true you can stop immediately and keep that array; for example, if E_row1 < A_row1 then you can keep array E. Only if the values in the row are equal do you need to do a Stage 2 test (see below).
In the example shown you would keep E, D, A (as long as they passed the test above).
Stage 2
This leaves B and C. Sort the POSITION list for each. For example, this will tell you that the row with B's minimum position is row 2. Now do a direct comparison between B and every array below it in that minimum row, here row 2. Here there is only one such array, D. A direct comparison between B and D shows that B < D in row 3, therefore B is compatible with D. If the item is compatible with every array below its minimum position, keep it. We keep B.
Now we do the same thing for C. In C's case we need only do one direct comparison, with A. C dominates A so we do not keep C.
Note that in addition to testing items that did not appear in the last column we need to test items that had equality in Stage 1. For example, imagine D=A=E in row 1. In this case we would have to do direct comparisons for every equality involving the array in the last column. So, in this case we direct compare E to A and E to D. This shows that E dominates D, so E is not kept.
The final result is we keep A, B, and D. C and E are discarded.
The overall performance of this algorithm is n^2 * log n for Stage 1, plus somewhere between n (lower bound) and n * log n (upper bound) for Stage 2. So the maximum running time is n^2 * log n + n * log n and the minimum running time is n^2 * log n + n. Note that the running time of your algorithm is cubic, n^3, since you compare every pair of arrays (n * n comparisons) and each comparison takes n element comparisons, giving n * n * n.
In general, this will be much faster than the brute force approach. Most of the time will be spent sorting the original matrix, a more or less unavoidable task. Note that you could potentially improve my algorithm by using priority queues instead of sorting, but the resulting algorithm would be much more complicated.