I want to count how many inversions there are in a list while sorting the list using mergesort. This is my code so far, where 'x' counts the amount of inversions while the rest sorts it:
import sys

x = 0

def merge_sort(A):
    merge_sort2(A, 0, len(A) - 1)

def merge_sort2(A, first, last):
    if first < last:
        middle = (first + last) // 2
        merge_sort2(A, first, middle)
        merge_sort2(A, middle + 1, last)
        merge(A, first, middle, last)

def merge(A, first, middle, last):
    global x
    L = A[first:middle + 1]
    R = A[middle + 1:last + 1]
    L.append(sys.maxsize)
    R.append(sys.maxsize)
    i = j = 0
    for k in range(first, last + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
            x += 1
            x += len(L[first + 1:])
When I call merge sort on a list, the variable x is supposed to give the number of inversions in the list. So if the list was [4,3,2,1], x would be 6. If the list was [1,2,3], x would be 0. I change the value of x whenever the right element is greater than the left in the merge function; however, the number always gets way too big. What am I doing wrong?
Check my work, but I think instead of:
x += 1
x += len(L[first + 1:])
you want:
x += middle + 1 + j - k
basically, you want to add the difference between where item k is actually coming from, and where you'd expect it to come from if everything was already sorted.
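For a concrete example (my own illustration, not from your code): suppose first = 0, middle = 1, last = 3 and A = [3, 4, 1, 2], so L = [3, 4] and R = [1, 2]. At k = 0 the merge takes R[0] = 1, which originally sat at index middle + 1 + j = 2. It moves forward to position k = 0, jumping over middle + 1 + j - k = 2 still-unplaced elements of L, which is exactly the 2 inversions (3, 1) and (4, 1).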
Your merge step is a little hard for me to understand — I'm not sure why you are doing this (maybe just another way to merge?):
L.append(sys.maxsize)
R.append(sys.maxsize)
but I couldn't get everything to work out with the extra elements added to the partitions, and I think you end up counting the extra element in L as an inversion with each merge move from R.
I think that's causing some of the problems. But you also have two other issues:
Your last line isn't quite the right logic:
x += len(L[first + 1:])
the number of inversions will be the number of elements in L that you jump over. You're counting almost every element of L each time. Something like this works better:
x += len(L[i:])
and then at the end, you may have elements left over whose inversions you haven't counted yet. Maybe that's not an issue with your extra elements but in a more traditional merge it is. Here's the way I would count the inversions:
def merge(A, first, middle, last):
    global x
    L = A[first:middle+1]
    R = A[middle+1:last+1]
    i = j = 0
    k = first
    print(L, R)
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
            # count how many are left in L
            x += len(L[i:])
        k += 1
    # take care of any leftovers in L or R
    while i < len(L):
        A[k] = L[i]
        i += 1
        k += 1
    while j < len(R):
        A[k] = R[j]
        j += 1
        k += 1
    x += len(L[i:])
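A quick sanity check (assuming the merge_sort / merge_sort2 wrappers from the question are kept unchanged, and remembering to reset the global x first):

x = 0
data = [4, 3, 2, 1]
merge_sort(data)   # uses the corrected merge() above
print(data, x)     # [1, 2, 3, 4] 6  (plus the debug output from print(L, R))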
I have an algorithm that looks for the good pairs in a list of numbers. A good pair is a pair of indices i < j with arr[i] < arr[j]. It currently has a complexity of O(n^2), but I want to make it O(n log n) using divide and conquer. How can I go about doing that?
Here's the algorithm:
def goodPairs(nums):
    count = 0
    for i in range(0,len(nums)):
        for j in range(i+1,len(nums)):
            if i < j and nums[i] < nums[j]:
                count += 1
            j += 1
        j += 1
    return count
Here's my attempt at making it but it just returns 0:
def goodPairs(arr):
    count = 0
    if len(arr) > 1:
        # Finding the mid of the array
        mid = len(arr)//2
        # Dividing the array elements
        left_side = arr[:mid]
        # into 2 halves
        right_side = arr[mid:]
        # Sorting the first half
        goodPairs(left_side)
        # Sorting the second half
        goodPairs(right_side)
        for i in left_side:
            for j in right_side:
                if i < j:
                    count += 1
    return count
The previously accepted answer (by Fire Assassin) doesn't really answer the question, which asks for better complexity. It's still quadratic, and about as fast as a much simpler quadratic solution. Benchmark with 2000 shuffled ints:
387.5 ms original
108.3 ms pythonic
104.6 ms divide_and_conquer_quadratic
4.1 ms divide_and_conquer_nlogn
4.6 ms divide_and_conquer_nlogn_2
Code (Try it online!):
def original(nums):
    count = 0
    for i in range(0,len(nums)):
        for j in range(i+1,len(nums)):
            if i < j and nums[i] < nums[j]:
                count += 1
            j += 1
        j += 1
    return count

def pythonic(nums):
    count = 0
    for i, a in enumerate(nums, 1):
        for b in nums[i:]:
            if a < b:
                count += 1
    return count

def divide_and_conquer_quadratic(arr):
    count = 0
    left_count = 0
    right_count = 0
    if len(arr) > 1:
        mid = len(arr) // 2
        left_side = arr[:mid]
        right_side = arr[mid:]
        left_count = divide_and_conquer_quadratic(left_side)
        right_count = divide_and_conquer_quadratic(right_side)
        for i in left_side:
            for j in right_side:
                if i < j:
                    count += 1
    return count + left_count + right_count

def divide_and_conquer_nlogn(arr):
    mid = len(arr) // 2
    if not mid:
        return 0
    left = arr[:mid]
    right = arr[mid:]
    count = divide_and_conquer_nlogn(left)
    count += divide_and_conquer_nlogn(right)
    i = 0
    for r in right:
        while i < mid and left[i] < r:
            i += 1
        count += i
    arr[:] = left + right
    arr.sort()  # linear, as Timsort takes advantage of the two sorted runs
    return count

def divide_and_conquer_nlogn_2(arr):
    mid = len(arr) // 2
    if not mid:
        return 0
    left = arr[:mid]
    right = arr[mid:]
    count = divide_and_conquer_nlogn_2(left)
    count += divide_and_conquer_nlogn_2(right)
    i = 0
    arr.clear()
    append = arr.append
    for r in right:
        while i < mid and left[i] < r:
            append(left[i])
            i += 1
        append(r)
        count += i
    arr += left[i:]
    return count

from timeit import timeit
from random import shuffle

arr = list(range(2000))
shuffle(arr)

funcs = [
    original,
    pythonic,
    divide_and_conquer_quadratic,
    divide_and_conquer_nlogn,
    divide_and_conquer_nlogn_2,
]

for func in funcs:
    print(func(arr[:]))

for _ in range(3):
    print()
    for func in funcs:
        arr2 = arr[:]
        t = timeit(lambda: func(arr2), number=1)
        print('%5.1f ms ' % (t * 1e3), func.__name__)
One of the most well-known divide-and-conquer algorithms is merge sort. And merge sort is actually a really good foundation for this algorithm.
The idea is that when comparing two numbers from two different 'partitions', you already have a lot of information about the remaining part of these partitions, as they're sorted in every iteration.
Let's take an example!
Consider the following partitions, which have already been sorted individually and whose "good pairs" have already been counted.
Partition x: [1, 3, 6, 9].
Partition y: [4, 5, 7, 8].
It is important to note that the numbers from partition x are located further to the left in the original list than those from partition y. In particular, for every element in x, its index i is smaller than the index j of every element in y.
We will start off by comparing 1 and 4. Obviously 1 is smaller than 4. But since 4 is the smallest element in partition y, 1 must also be smaller than the rest of the elements in y. Consequently, we can conclude that there are 4 additional good pairs, since the index of 1 is also smaller than the indices of the remaining elements of y.
The exact same thing happens with 3, and we can add 4 new good pairs to the sum.
For 6 we conclude that there are two new good pairs. The comparisons between 6 and 4, and between 6 and 5, did not yield good pairs.
You might now notice how these additional good pairs are counted: whenever the element from x is less than the element from y, add the number of elements remaining in y (including the current one) to the sum. Rinse and repeat.
Since merge sort is an O(n log n) algorithm, and the additional counting work per merge step is constant, this algorithm is also O(n log n).
I will leave the actual programming as an exercise for you.
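For what it's worth, here is one possible sketch of that idea. This is my own illustrative code (the name count_good_pairs is made up), not the poster's:

def count_good_pairs(nums):
    # returns (sorted copy, number of pairs i < j with nums[i] < nums[j])
    if len(nums) <= 1:
        return nums[:], 0
    mid = len(nums) // 2
    left, count_left = count_good_pairs(nums[:mid])
    right, count_right = count_good_pairs(nums[mid:])
    merged = []
    count = count_left + count_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            # left[i] is smaller than right[j] and, since right is sorted,
            # smaller than every element still remaining in right
            count += len(right) - j
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged += left[i:] + right[j:]
    return merged, count

print(count_good_pairs([5, 1, 3, 2, 7])[1])  # 6, same as the brute-force count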
@niklasaa has added an explanation for the merge sort analogy, but your implementation still has an issue.
You are partitioning the array and calculating the result for either half, but:
1. You haven't actually sorted either half. So when you're comparing their elements, your two pointer approach isn't correct.
2. You haven't used their results in the final computation. That's why you're getting an incorrect answer.
For point #1, you should look at merge sort, especially the merge() function. That logic is what will give you the correct pair count without having O(N^2) iteration.
For point #2, store the result for either half first:
# Sorting the first half
leftCount = goodPairs(left_side)
# Sorting the second half
rightCount = goodPairs(right_side)
While returning the final count, add these two results as well.
return count + leftCount + rightCount
As @Abhinav Mathur stated, you have most of the code down; your problem is with these lines:
# Sorting the first half
goodPairs(left_side)
# Sorting the second half
goodPairs(right_side)
You want to store these in variables that should be declared before the if statement. Here's an updated version of your code:
def goodPairs(arr):
    count = 0
    left_count = 0
    right_count = 0
    if len(arr) > 1:
        mid = len(arr) // 2
        left_side = arr[:mid]
        right_side = arr[mid:]
        left_count = goodPairs(left_side)
        right_count = goodPairs(right_side)
        for i in left_side:
            for j in right_side:
                if i < j:
                    count += 1
    return count + left_count + right_count
Recursion can be difficult at times; look into merge sort and quick sort to get a better idea of how divide-and-conquer algorithms work.
I'm teaching a coding class and need an intuitive and obvious way to explain the time complexity of merge sort. I tried including a print statement at the start of my merge_sort() function, anticipating that the print statement would execute O(log n) times. However, as best as I can tell, it executes 2*n-1 times instead (Python code below):
merge_sort() function:
def merge_sort(my_list):
    print("hi")  # prints 2*n-1 times??
    if(len(my_list) <= 1):
        return
    mid = len(my_list)//2
    l = my_list[:mid]
    r = my_list[mid:]
    merge_sort(l)
    merge_sort(r)
    i = 0
    j = 0
    k = 0
    while(i < len(l) or j < len(r)):
        #print("hey") #prints nlogn times as expected
        if(i >= len(l)):
            my_list[k] = r[j]
            j += 1
        elif(j >= len(r)):
            my_list[k] = l[i]
            i += 1
        elif(l[i] < r[j]):
            my_list[k] = l[i]
            i += 1
        elif(l[i] > r[j]):
            my_list[k] = r[j]
            j += 1
        k += 1
Driver code:
#print("Enter a list")
my_list = list(map(int, input().split()))
#print("Sorted list:")
#merge_sort(my_list)
print(my_list)
Input:
1 2 3 4 5 6 7 8
Expected output:
hi
hi
hi
or some variation thereof which varies proportional to log n.
Actual output:
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi #15 times, i.e. 2*n-1
A few more iterations of this with different input sizes have given me the impression that this is 2*n-1, which makes no sense to me. Does anyone have an explanation for this?
It is not true that there are only O(log n) recursive calls. The thing that is O(log n) is the depth of the recursion tree, not the number of nodes in the recursion tree. The number of calls satisfies T(1) = 1 and T(n) = 1 + T(ceil(n/2)) + T(floor(n/2)), which works out to exactly 2n - 1, the count you observed.
When we look at one level of the recursion tree, we can note that each call in that level deals with a distinct partition of the array. Together, the "nodes" in that recursion level deal with all elements of the array, which gives that level an O(n) time complexity. This is true for each level.
As there are O(log n) levels, the total complexity comes down to O(n log n).
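If you want to convince yourself of the 2*n-1 call count, here is a small snippet of my own that just mirrors the recursion structure of the question's merge_sort() without doing any sorting work:

def call_count(n):
    # number of merge_sort() invocations for a list of length n
    if n <= 1:
        return 1
    mid = n // 2
    return 1 + call_count(mid) + call_count(n - mid)

for n in (1, 2, 5, 8, 16):
    print(n, call_count(n), 2 * n - 1)  # the last two columns always match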
Here is a suggestion on how to illustrate this:
statistics = []

def merge_sort(my_list, depth=0):
    if len(my_list) <= 1:
        return
    # manage statistics
    if depth >= len(statistics):
        statistics.append(0)  # for each depth we count operations
    mid = len(my_list)//2
    l = my_list[:mid]
    r = my_list[mid:]
    merge_sort(l, depth+1)
    merge_sort(r, depth+1)
    i = 0
    j = 0
    k = 0
    while i < len(l) or j < len(r):
        statistics[depth] += 1  # count this as a O(1) unit of work
        if i >= len(l):
            my_list[k] = r[j]
            j += 1
        elif j >= len(r):
            my_list[k] = l[i]
            i += 1
        elif l[i] < r[j]:
            my_list[k] = l[i]
            i += 1
        elif l[i] > r[j]:
            my_list[k] = r[j]
            j += 1
        k += 1

import random

my_list = list(range(32))
random.shuffle(my_list)
merge_sort(my_list)
print(my_list)
print(statistics)
The statistics will output the number of units of work done at each level. In the example of an input size of 32, you'll get a list with 5 such numbers.
NB: In Python, if conditions don't need parentheses
I need to count the minimum number of moves required to sort a list, where a single move takes an element and pushes it to the end of the list. Here is an explanation of what I'm trying to say:
Input:- 5 1 3 2 7
Output:- 3
Explanation:
In the first move, we move 3 to the end. Our list becomes 5,1,2,7,3.
In the second move, we move 5 to the end. Our list becomes 1,2,7,3,5.
In the third move, we move 7 to the end. Our final list is 1,2,3,5,7.
So, the total number of moves is 3.
Here is what I tried to do, but failed.
a = [int(i) for i in input().split()]
count = 0
n = 0
while (n < len(a) - 1):
    for i in range(0,n+1):
        while (a[i] > a[i + 1]):
            temp = a[i]
            a.pop(i)
            a.append(temp)
            count += 1
    n += 1
print(count, end='')
I'd like to request your assistance in solving this question.
jdehesa's answer is basically right, but it is not optimal for cases where there are multiple elements with the same value. Maybe a more complex solution?
def min_moves(a):
    c = 0
    while(1):
        tmp = None
        for i in range(0, len(a)):
            if a[i] != min(a[i:]) and (tmp is None or a[i] < a[tmp]):
                tmp = i
        if tmp is None:
            return c
        else:
            a.append(a.pop(tmp))
            c += 1
Edit:
Or, if you don't need the ordered list, there's a much easier solution: just count the items that are out of order, for the reason given in jdehesa's solution :-D
def min_moves(a):
    c = 0
    for i in range(0, len(a)):
        if a[i] != min(a[i:]):
            c += 1
    return c
Edit 2:
Or, if you like jdehesa's answer more, a small fix is to reduce lst to a set, so that equal values get the smallest index:
sorted_index = {elem: i for i, elem in enumerate(sorted(set(lst)))}
I cannot comment yet.
I don't know if it can be done better, but I think the following algorithm gives the right answer:
def num_move_end_sort(lst):
    # dict that maps each list element to its index in the sorted list
    sorted_index = {elem: i for i, elem in enumerate(sorted(lst))}
    moves = 0
    for idx, elem in enumerate(lst):
        if idx != sorted_index[elem] + moves:
            moves += 1
    return moves

print(num_move_end_sort([5, 1, 3, 2, 7]))
# 3
The idea is as follows. Each element of the list would have to be moved to the end at most once (it should be easy to see that a solution that moves the same element to the end more than once can be simplified). So each element in the list may or may not need to be moved once to the end. If an element does not need to be moved, it is because it ended up in the right position after all the moves. So, if an element is currently at position i and should end up in position j, then the element will not need to be moved if the number of previous elements that need to be moved, n, satisfies i == j + n (because, after those n elements in front of it are moved away, the element will indeed be at position j).
So in order to compute that, I sorted the list and took the indices of each element in the sorted list. Then you just count the number of elements that are not in the right position.
Note this algorithm does not tell you the actual sequence of steps you would need to take (the order in which the elements would have to be moved), only the count. The complexity is O(n·log(n)) (due to the sorting).
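As an illustrative trace of num_move_end_sort([5, 1, 3, 2, 7]): sorted(lst) is [1, 2, 3, 5, 7], so sorted_index is {1: 0, 2: 1, 3: 2, 5: 3, 7: 4}. Then:
idx 0, elem 5: sorted_index[5] + moves = 3 + 0 != 0, so moves becomes 1
idx 1, elem 1: 0 + 1 == 1, so it stays
idx 2, elem 3: 2 + 1 != 2, so moves becomes 2
idx 3, elem 2: 1 + 2 == 3, so it stays
idx 4, elem 7: 4 + 2 != 4, so moves becomes 3
The function returns 3, matching the expected answer.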
I think you can simplify your problem:
Counting the elements that need to be pushed to the end is equivalent to counting the number of elements that are not in sorted order.
l = [5, 1, 3, 2, 7]
sorted_l = sorted(l)
current_element = sorted_l[0]
current_index = 0
ans = 0
for element in l:
    if current_element == element:
        current_index += 1
        if current_index < len(l):
            current_element = sorted_l[current_index]
    else:
        ans += 1
print(ans)
Here the answer is 3
Folks, I have the following code:
for i in range(len(self.corpses)):
    for j in range(len(self.corpses))[i+1:]:
        if self.corpses[i].collides(self.corpses[j]):
            self.corpses[i] = self.corpses[i] + self.corpses[j]
            self.corpses.remove(self.corpses[j])
but this keeps going out of index. I would like to know what I should do to remove the corpses under those conditions.
Thanks in advance.
You changed the length of the list. You could use a while loop instead:
i, l = 0, len(self.corpses)
while i < l:
    j = i + 1
    while j < l:
        if self.corpses[i].collides(self.corpses[j]):
            self.corpses[i] = self.corpses[i] + self.corpses[j]
            del self.corpses[j]
            l -= 1
        else:
            j += 1
    i += 1
This way you keep full control over all changes.
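A self-contained way to see the pattern in action; the Corpse stand-in below is purely hypothetical (collides() treats overlapping intervals as collisions and + merges them), since the real class isn't shown in the question:

class Corpse:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def collides(self, other):
        return self.lo <= other.hi and other.lo <= self.hi
    def __add__(self, other):
        return Corpse(min(self.lo, other.lo), max(self.hi, other.hi))
    def __repr__(self):
        return f"Corpse({self.lo}, {self.hi})"

corpses = [Corpse(0, 2), Corpse(1, 3), Corpse(5, 6)]
i, l = 0, len(corpses)
while i < l:
    j = i + 1
    while j < l:
        if corpses[i].collides(corpses[j]):
            corpses[i] = corpses[i] + corpses[j]
            del corpses[j]
            l -= 1
        else:
            j += 1
    i += 1
print(corpses)  # [Corpse(0, 3), Corpse(5, 6)]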
I am trying to write a Hoare partitioning function that takes an array as input and partitions it with the first element as pivot (I know it's not a good idea, I should be using randomized pivots, like the median-of-medians approach). The problem is that this function falls into an infinite loop when the first element is the highest, as with the array [14,6,8,1,4,9,2,1,7,10,5]. I can see the error: after the first iteration of the outer while, both i and j equal 10, and hence the loop continues forever. Which portion should I mend to get the desired effect? Here's the code:
def hoare(arr):
    pivot = arr[0]
    i,j = 1,len(arr)-1
    while i <= j:
        while i < j and arr[i] < pivot:
            i += 1
        while j >= i and arr[j] >= pivot:
            j -= 1
        if i < j:
            arr[i],arr[j] = arr[j],arr[i]
    if j != 0:
        arr[0],arr[j] = arr[j],arr[0]
    return j
I believe the problem is that you've converted a do-while (or repeat-until, in Hoare's terms) loop into a while loop, so it never does the first j -= 1.
The simplest transformation in Python should be to change the two inner while loops like this:
while True:
    i += 1
    if not (i < j and arr[i] < pivot): break

while True:
    j -= 1
    if not (j >= i and arr[j] >= pivot): break
(I'm assuming here that the if i < j: is supposed to be outside the second while loop, and all of the other initial indentation is correct.)
I haven't reasoned this through completely, or run a variety of tests, but there's probably more than just this one error in your translation. You may need to also convert the outer loop into a do-while (Hoare actually makes it an explicit while TRUE with a check at the end), but I'm not sure. Anyway, for your sample input, the modified version returns 9, and arr is [10, 6, 8, 1, 4, 9, 2, 1, 7, 14, 5], which is incorrect, but it solves your infinite loop problem.
The next problem is an off-by-one error. If you're going to do the += 1 and -= 1 first in the inner loops, you have to start at -1, len(arr) rather than 0, len(arr)-1 (or, as you did, 1, len(arr)-1).
There may still be other problems. But I don't want to dig through your code finding all possible mistakes and explaining them. If you need that, tell us what your source was, and explain each transformation you made from that source, and it'll be much easier to explain where you went wrong. If not, it's much simpler to just translate Hoare's algorithm to Python directly, and then hopefully you can figure it out.
Here's a copy of the Hoare pseudocode that I found online (just replacing all tabs with two spaces):
Hoare-Partition (A, p, r)
  x ← A[p]
  i ← p − 1
  j ← r + 1
  while TRUE
    repeat j ← j − 1
    until A[j] ≤ x
    repeat i ← i + 1
    until A[i] ≥ x
    if i < j
      exchange A[i] ↔ A[j]
    else
      return j
Here's a trivial translation into Python; the only changes are minor syntax (including the way "exchange" is spelled) and turning each repeat/until into a while True/break.
def hoare(a, p, r):
    x = a[p]
    i, j = p-1, r+1
    while True:
        while True:
            j -= 1
            if a[j] <= x:
                break
        while True:
            i += 1
            if a[i] >= x:
                break
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            return j
For a function with the same signature as yours:
def hoare0(arr):
    return hoare(arr, 0, len(arr)-1)
There is an error in this line:
while i < j and arr[i] < pivot:
It should be:
while i <= j and arr[i] < pivot:
The whole code for partition looks like:
def partition(a, l, r):
    pivot = a[r]
    i = l - 1
    j = r
    while i <= j:
        if i <= j and a[i] < pivot:
            i += 1
        if i <= j and a[j] >= pivot:
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]
    return j
Why was there an infinite loop?
The pivot chosen here is 14.
So, after this code is executed:
while i < j and arr[i] < pivot:
    i += 1
i is 10 and j is 10.
Now, when this block is executed:
while i <= j and arr[j] >= pivot:
    j -= 1
As a[10] < 14, nothing happens. Since i equals j, no swap happens. Now, since the outermost loop has the condition i <= j, the loop keeps repeating.
What happens with the correction?
So, after this code is executed:
while i <= j and arr[i] < pivot:
    i += 1
i is 11 (because the condition is still true when i equals j) and j is 10.
Now, when this block is executed:
while i <= j and arr[j] >= pivot:
    j -= 1
As a[10] < 14, nothing happens.
Now, i is 11 and j is 10, so no swap happens. But, the outermost loop is broken and a[j] swaps with pivot.
Your array becomes:
[5, 6, 8, 1, 4, 9, 2, 1, 7, 10, 14]
You can play here. It contains code with debug prints for both right and wrong partition schemes.
This also works:
key = arr[0]
i = 0
j = n-1
while i >= j:
    while arr[i] < key:
        i += 1
    while arr[j] > key:
        j -= 1
arr[j], arr[0] = arr[0], arr[j]
The partition algorithm has many variants (short/long step), but we should be very careful with invariants, preconditions and non-structured programming statements (break, return) when dealing with this classic algorithm.
Otherwise, we may run into big trouble, even when avoiding them goes against the 'pythonic' philosophy of coding.
The next annotated solution (for didactic purposes) yields (10, [5, 6, 8, 1, 4, 9, 2, 1, 7, 10, 14]) for the original list [14,6,8,1,4,9,2,1,7,10,5], as expected. The comments can be stripped off.
def hoare(arr):
    # P: len(arr) > 0
    assert len(arr)>0
    i,j = 1,len(arr)
    # INV : \forall n : 1<=n<i : arr[n]<arr[0]
    #       \forall n : j<=n<len(arr) : arr[n]>=arr[0]
    #       Quote(j-i)>=0
    while i < j:
        aa,bb = i,j
        while aa < j and arr[aa] < arr[0]:
            aa += 1
        while bb > aa and arr[bb-1] >= arr[0]:
            bb -= 1
        # let
        #   aa = min n : i<=n<=j: n<j -> arr[n]>=arr[0]
        #   bb = max n : aa<=n<=j: n>aa -> arr[n-1]<arr[0]
        # in
        if (bb-aa)==(j-i):
            # restore
            arr[i],arr[j-1] = arr[j-1],arr[i]
            # step
            i, j = i+1, j-1
        else:
            # restore
            pass
            # step
            i,j = aa,bb
    arr[0],arr[j-1] = arr[j-1],arr[0]
    return j-1,arr
    # Q : \forall n : 0<=n<j-1 : arr[n]<arr[j]
    #     \forall n : j-1<=n<len(arr) : arr[n]>=arr[j]
EDIT:
I'm not against goto, breaks, and continues... Donald Knuth stresses that "structured programming" is an idea rather than a language... Once you understand the laws, you can break them. Is this more pythonic? The invariant still holds and the quote (j-i) decreases on every loop.
def hoare_non_str(arr):
    assert len(arr)>0
    i,j = 1,len(arr)
    while i < j:
        while i < j and arr[i] < arr[0]:
            i += 1
        if i==j:
            break
        while j > i and arr[j-1] >= arr[0]:
            j -= 1
        if i==j:
            break
        # It is safe to swap here.
        arr[i],arr[j-1] = arr[j-1],arr[i]
        i = i + 1
    # swap pivot
    arr[0],arr[j-1] = arr[j-1],arr[0]
    return j-1,arr
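For example, running it on the question's list reproduces the result quoted above:

print(hoare_non_str([14, 6, 8, 1, 4, 9, 2, 1, 7, 10, 5]))
# (10, [5, 6, 8, 1, 4, 9, 2, 1, 7, 10, 14])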