ZigZag Quadruples - python

I've seen this interesting question, and wonder if there are more ways to approach it:
Given a permutation of the numbers from 1 to n, count the number of
quadruples of indices (i, j, k, l) such that i < j < k < l and A[i] < A[k] < A[j] < A[l]
e.g.
Input : [1,3,2,6,5,4]
Output : 1 (1,3,2,6)
The desired algorithm is O(n^2).
Approach:
I've tried to solve it using a stack, in a similar manner to the LeetCode 132 Pattern problem - but it seems to fail.
def get_smaller_before(A):
    smaller_before = [0] * len(A)
    for i in range(len(A)):
        for j in range(i):
            if A[j] < A[i]:
                smaller_before[i] += 1
    return smaller_before

def get_larger_after(A):
    larger_after = [0] * len(A)
    for i in range(len(A)):
        for j in range(i+1, len(A)):
            if A[i] < A[j]:
                larger_after[i] += 1
    return larger_after
def countQuadruples(nums):
    if not nums:
        return 0
    smaller_before = get_smaller_before(nums)
    larger_after = get_larger_after(nums)
    counter = 0
    stack = []
    for j in reversed(range(1, len(nums))):
        # i < j < k < l
        # smaller_before < nums[k] < nums[j] < larger_after
        while stack and nums[stack[-1]] < nums[j]:
            counter += smaller_before[j] * larger_after[stack[-1]]
            stack.pop()
        stack.append(j)
    return counter
Does anyone have a better idea?

What you need is some sort of 2-dimensional tree that allows you to quickly answer the questions "How many points after k have a value bigger than A[j]?" and "How many points before j have a value less than A[k]?" These will usually take O(n log(n)) time to build, and those queries should run in something like O(log(n)^2) time.
A number of such data structures exist. One option is a variant on a quadtree: turn each array element into a point whose x-coordinate is its position in the array and whose y-coordinate is its value. Your queries are then just counting how many points lie in a box.
Now you can do a double loop over all pairs (j, k) and count how many zigzag quadruples have those as the inner pair.
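To make that double loop concrete: since the input is a permutation of 1..n, the 2D tree can be replaced by two plain prefix-count tables. This is my simplification, not the answer's exact method (it trades O(n^2) memory for simplicity, and the function name is mine); a sketch:

def count_zigzag_quadruples(A):
    """Count quadruples i < j < k < l with A[i] < A[k] < A[j] < A[l].
    O(n^2) time and O(n^2) memory; assumes A is a permutation of 1..n."""
    n = len(A)
    # less_before[p][v]: number of indices i < p with A[i] < v
    less_before = [[0] * (n + 2) for _ in range(n + 1)]
    for p in range(1, n + 1):
        for v in range(n + 2):
            less_before[p][v] = less_before[p - 1][v] + (A[p - 1] < v)
    # greater_after[p][v]: number of indices l >= p with A[l] > v
    greater_after = [[0] * (n + 2) for _ in range(n + 2)]
    for p in range(n - 1, -1, -1):
        for v in range(n + 2):
            greater_after[p][v] = greater_after[p + 1][v] + (A[p] > v)
    total = 0
    for j in range(n):
        for k in range(j + 1, n):
            if A[k] < A[j]:  # the inner pair must satisfy A[k] < A[j]
                total += less_before[j][A[k]] * greater_after[k + 1][A[j]]
    return total

Using the 2D tree described above in place of these tables brings the memory back to O(n), at the cost of the polylogarithmic query factor.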

Related

Making the complexity smaller (better)

I have an algorithm that looks for the good pairs in a list of numbers. A good pair is a pair of indices i < j with arr[i] < arr[j]. It currently has a complexity of O(n^2), but I want to make it O(n log n) using divide and conquer. How can I go about doing that?
Here's the algorithm:
def goodPairs(nums):
    count = 0
    for i in range(0, len(nums)):
        for j in range(i+1, len(nums)):
            if i < j and nums[i] < nums[j]:
                count += 1
            j += 1
        j += 1
    return count
Here's my attempt at making it but it just returns 0:
def goodPairs(arr):
    count = 0
    if len(arr) > 1:
        # Finding the mid of the array
        mid = len(arr)//2
        # Dividing the array elements
        left_side = arr[:mid]
        # into 2 halves
        right_side = arr[mid:]
        # Sorting the first half
        goodPairs(left_side)
        # Sorting the second half
        goodPairs(right_side)
        for i in left_side:
            for j in right_side:
                if i < j:
                    count += 1
    return count
The previously accepted answer by Fire Assassin doesn't really answer the question, which asks for better complexity. It's still quadratic, and about as fast as a much simpler quadratic solution. Benchmark with 2000 shuffled ints:
387.5 ms  original
108.3 ms  pythonic
104.6 ms  divide_and_conquer_quadratic
  4.1 ms  divide_and_conquer_nlogn
  4.6 ms  divide_and_conquer_nlogn_2
Code (Try it online!):
def original(nums):
    count = 0
    for i in range(0, len(nums)):
        for j in range(i+1, len(nums)):
            if i < j and nums[i] < nums[j]:
                count += 1
            j += 1
        j += 1
    return count
def pythonic(nums):
    count = 0
    for i, a in enumerate(nums, 1):
        for b in nums[i:]:
            if a < b:
                count += 1
    return count
def divide_and_conquer_quadratic(arr):
    count = 0
    left_count = 0
    right_count = 0
    if len(arr) > 1:
        mid = len(arr) // 2
        left_side = arr[:mid]
        right_side = arr[mid:]
        left_count = divide_and_conquer_quadratic(left_side)
        right_count = divide_and_conquer_quadratic(right_side)
        for i in left_side:
            for j in right_side:
                if i < j:
                    count += 1
    return count + left_count + right_count
def divide_and_conquer_nlogn(arr):
    mid = len(arr) // 2
    if not mid:
        return 0
    left = arr[:mid]
    right = arr[mid:]
    count = divide_and_conquer_nlogn(left)
    count += divide_and_conquer_nlogn(right)
    i = 0
    for r in right:
        while i < mid and left[i] < r:
            i += 1
        count += i
    arr[:] = left + right
    arr.sort()  # linear, as Timsort takes advantage of the two sorted runs
    return count
def divide_and_conquer_nlogn_2(arr):
    mid = len(arr) // 2
    if not mid:
        return 0
    left = arr[:mid]
    right = arr[mid:]
    count = divide_and_conquer_nlogn_2(left)
    count += divide_and_conquer_nlogn_2(right)
    i = 0
    arr.clear()
    append = arr.append
    for r in right:
        while i < mid and left[i] < r:
            append(left[i])
            i += 1
        append(r)
        count += i
    arr += left[i:]
    return count
from timeit import timeit
from random import shuffle

arr = list(range(2000))
shuffle(arr)

funcs = [
    original,
    pythonic,
    divide_and_conquer_quadratic,
    divide_and_conquer_nlogn,
    divide_and_conquer_nlogn_2,
]

for func in funcs:
    print(func(arr[:]))

for _ in range(3):
    print()
    for func in funcs:
        arr2 = arr[:]
        t = timeit(lambda: func(arr2), number=1)
        print('%5.1f ms ' % (t * 1e3), func.__name__)
One of the most well-known divide-and-conquer algorithms is merge sort. And merge sort is actually a really good foundation for this algorithm.
The idea is that when comparing two numbers from two different 'partitions', you already have a lot of information about the remaining part of these partitions, as they're sorted in every iteration.
Let's take an example!
Consider the following partitions, which have already been sorted individually and whose "good pairs" have been counted:
Partition x: [1, 3, 6, 9]
Partition y: [4, 5, 7, 8]
It is important to note that the numbers of partition x are located further to the left in the original list than those of partition y. In particular, for every element of x, its index i is smaller than the index j of every element of y.
We start off by comparing 1 and 4. Obviously 1 is smaller than 4. But since 4 is the smallest element of partition y, 1 must also be smaller than the rest of the elements in y. Consequently, we can conclude that there are 4 additional good pairs, since the index of 1 is also smaller than the index of each remaining element of y.
The exact same thing happens with 3, and we can add 4 new good pairs to the sum.
For 6 we conclude that there are two new good pairs: the comparisons of 6 with 4 and with 5 did not yield good pairs, but 6 is smaller than both 7 and 8.
Do you see how these additional good pairs are counted? Whenever the element from x is less than the element from y, add the number of elements remaining in y to the sum. Rinse and repeat.
Since merge sort is an O(n log n) algorithm, and the extra counting work is O(1) per merged element, this algorithm is also O(n log n).
I will leave the actual programming as an exercise for you.
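For what it's worth, here is a minimal sketch of just the merge-step counting described above. merge_count is a hypothetical helper of my own naming, not part of this answer; it assumes both halves are already sorted, with all of x to the left of all of y in the original list:

def merge_count(x, y):
    """Count pairs (a, b) with a taken from x, b taken from y, and a < b."""
    count = 0
    j = 0
    for a in x:
        # advance j past the elements of y that are <= a
        while j < len(y) and y[j] <= a:
            j += 1
        # a is smaller than every remaining element of y
        count += len(y) - j
    return count

print(merge_count([1, 3, 6, 9], [4, 5, 7, 8]))  # 4 + 4 + 2 + 0 = 10

This reproduces exactly the walk-through above: 1 and 3 each contribute 4 pairs, 6 contributes 2, and 9 contributes none.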
@niklasaa has added an explanation of the merge sort analogy, but your implementation still has two issues:
1. You haven't actually sorted either half, so when you compare their elements, your two-pointer approach isn't correct.
2. You haven't used the results of the recursive calls in the final computation. That's why you're getting an incorrect answer.
For point #1, look at merge sort, especially the merge() function. That logic is what will give you the correct pair count without O(N^2) iteration.
For point #2, store the result of either half first:
# Sorting the first half
leftCount = goodPairs(left_side)
# Sorting the second half
rightCount = goodPairs(right_side)
While returning the final count, add these two results as well.
return count + leftCount + rightCount
Like @Abhinav Mathur stated, you have most of the code down; your problem is with these lines:
# Sorting the first half
goodPairs(left_side)
# Sorting the second half
goodPairs(right_side)
You want to store these in variables that should be declared before the if statement. Here's an updated version of your code:
def goodPairs(arr):
    count = 0
    left_count = 0
    right_count = 0
    if len(arr) > 1:
        mid = len(arr) // 2
        left_side = arr[:mid]
        right_side = arr[mid:]
        left_count = goodPairs(left_side)
        right_count = goodPairs(right_side)
        for i in left_side:
            for j in right_side:
                if i < j:
                    count += 1
    return count + left_count + right_count
Recursion can be difficult at times. Look into merge sort and quicksort to get a better idea of how divide-and-conquer algorithms work.

Time complexity of merge sort: function appears to be called 2*n-1 times rather than O(log n) times

I'm teaching a coding class and need an intuitive and obvious way to explain the time complexity of merge sort. I tried including a print statement at the start of my merge_sort() function, anticipating that the print statement would execute O(log n) times. However, as best as I can tell, it executes 2*n-1 times instead (Python code below):
merge_sort() function:
def merge_sort(my_list):
    print("hi")  # prints 2*n-1 times??
    if(len(my_list) <= 1):
        return
    mid = len(my_list)//2
    l = my_list[:mid]
    r = my_list[mid:]
    merge_sort(l)
    merge_sort(r)
    i = 0
    j = 0
    k = 0
    while(i < len(l) or j < len(r)):
        #print("hey") #prints nlogn times as expected
        if(i >= len(l)):
            my_list[k] = r[j]
            j += 1
        elif(j >= len(r)):
            my_list[k] = l[i]
            i += 1
        elif(l[i] < r[j]):
            my_list[k] = l[i]
            i += 1
        elif(l[i] > r[j]):
            my_list[k] = r[j]
            j += 1
        k += 1
Driver code:
#print("Enter a list")
my_list = list(map(int, input().split()))
#print("Sorted list:")
merge_sort(my_list)
print(my_list)
Input:
1 2 3 4 5 6 7 8
Expected output:
hi
hi
hi
or some variation thereof that grows in proportion to log n.
Actual output:
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi #15 times, i.e. 2*n-1
A few more runs with different input sizes have given me the impression that the count is 2*n-1, which makes no sense to me. Does anyone have an explanation for this?
It is not true that there are only O(log n) recursive calls. The thing that is O(log n) is the depth of the recursion tree, not the number of nodes in the recursion tree. For a list of length n, the recursion tree is a full binary tree with n leaves (one per single-element sublist), and such a tree has 2*n-1 nodes in total, which is exactly the call count you observed.
When we look at one level of the recursion tree, we can note that each call in that level deals with a distinct partition of the array. Together, the "nodes" in that level deal with all elements of the array, which gives that level a O(n) time complexity. This is true for each level.
As there are O(log n) levels, the total complexity comes down to O(n log n).
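To make the 2*n-1 count concrete, here is a tiny sketch (my own illustration, not part of this answer's code) that counts calls exactly the way your merge_sort recurses, including the length-1 calls that still print before hitting the base case:

def count_calls(n):
    # one call for a list of length n, plus the calls for its two halves
    if n <= 1:
        return 1
    mid = n // 2
    return 1 + count_calls(mid) + count_calls(n - mid)

for n in [1, 2, 4, 8, 16]:
    print(n, count_calls(n))  # prints 2*n - 1 calls for each n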
Here is a suggestion on how to illustrate this:
statistics = []

def merge_sort(my_list, depth=0):
    if len(my_list) <= 1:
        return
    # manage statistics
    if depth >= len(statistics):
        statistics.append(0)  # for each depth we count operations
    mid = len(my_list)//2
    l = my_list[:mid]
    r = my_list[mid:]
    merge_sort(l, depth+1)
    merge_sort(r, depth+1)
    i = 0
    j = 0
    k = 0
    while i < len(l) or j < len(r):
        statistics[depth] += 1  # count this as a O(1) unit of work
        if i >= len(l):
            my_list[k] = r[j]
            j += 1
        elif j >= len(r):
            my_list[k] = l[i]
            i += 1
        elif l[i] < r[j]:
            my_list[k] = l[i]
            i += 1
        elif l[i] > r[j]:
            my_list[k] = r[j]
            j += 1
        k += 1

import random
my_list = list(range(32))
random.shuffle(my_list)
merge_sort(my_list)
print(my_list)
print(statistics)
The statistics will output the number of units of work done at each level. In the example of an input size of 32, you'll get a list with 5 such numbers.
NB: In Python, if conditions don't need parentheses

Minimum count to sort an array in Python by sending the element to the end

Here is an explanation of what I'm trying to do:
Input:- 5 1 3 2 7
Output:- 3
Explanation:
In first move, we move 3 to the end. Our list becomes 5,1,2,7,3
In second move, we move 5 to the end. Our list becomes 1,2,7,3,5
In third move, we move 7 to the end. Our final list = 1,2,3,5,7
So, total moves are:- 3.
Here is what I tried to do, but it failed:
a = [int(i) for i in input().split()]
count = 0
n = 0
while n < len(a) - 1:
    for i in range(0, n+1):
        while a[i] > a[i + 1]:
            temp = a[i]
            a.pop(i)
            a.append(temp)
            count += 1
    n += 1
print(count, end='')
I'd like to request your assistance in solving this question.
jdehesa's answer is basically right, but it doesn't handle cases where several elements share the same value. Maybe a more involved solution?
def min_moves(a):
    c = 0
    while True:
        tmp = None
        for i in range(0, len(a)):
            if a[i] != min(a[i:]) and (tmp is None or a[i] < a[tmp]):
                tmp = i
        if tmp is None:
            return c
        else:
            a.append(a.pop(tmp))
            c += 1
Edit:
Or if you just need the count, there's a simpler way to get it, following the reasoning from jdehesa's answer: the elements that can stay put are the longest run of the smallest sorted values whose positions are increasing in the original list; every other element has to be moved exactly once. (Note that merely counting the elements that differ from the minimum of their suffix undercounts: for [5, 1, 3, 2, 7] it gives 2 instead of 3, because it misses that 7 ends up in front of the moved 3 and 5.)
def min_moves(a):
    pos = {v: i for i, v in enumerate(a)}  # assumes distinct values
    last = -1
    kept = 0
    for v in sorted(a):
        if pos[v] > last:  # v can stay: it appears after everything kept so far
            last = pos[v]
            kept += 1
        else:
            break  # v and every larger value must be moved
    return len(a) - kept
Edit 2:
Or if you like jdehesa's answer more, a small fix is to build the index over a set, so that duplicate values all map to one (the smallest) index:
sorted_index = {elem: i for i, elem in enumerate(sorted(set(lst)))}
I cannot comment yet.
I don't know if it can be done better, but I think the following algorithm gives the right answer:
def num_move_end_sort(lst):
    # dict that maps each list element to its index in the sorted list
    sorted_index = {elem: i for i, elem in enumerate(sorted(lst))}
    moves = 0
    for idx, elem in enumerate(lst):
        if idx != sorted_index[elem] + moves:
            moves += 1
    return moves

print(num_move_end_sort([5, 1, 3, 2, 7]))
# 3
The idea is as follows. Each element of the list has to be moved to the end at most once (it should be easy to see that a solution that moves the same element to the end more than once can be simplified). So each element in the list may or may not need to be moved once to the end.
If an element does not need to be moved, it is because it ended up in the right position after all the moves. So, if an element is currently at position i and should end up at position j, the element will not need to be moved if the number of preceding elements that need to be moved, n, satisfies j == i + n (because, after those n moves, the element will indeed be at position j).
So in order to compute that, I sorted the list and took the indices of each element in the sorted list. Then you just count the number of elements that are not in the right position.
Note this algorithm does not tell you the actual sequence of steps you would need to take (the order in which the elements would have to be moved), only the count. The complexity is O(n·log(n)) (due to the sorting).
I think you can simplify your problem:
counting the elements that need to be pushed to the end is equivalent to counting the elements that are out of sorted order.
l = [5, 1, 3, 2, 7]
sorted_l = sorted(l)
current_element = sorted_l[0]
current_index = 0
ans = 0
for element in l:
    if current_element == element:
        current_index += 1
        if current_index < len(l):
            current_element = sorted_l[current_index]
    else:
        ans += 1
print(ans)
Here the answer is 3

Partition Array

Given an array nums of integers and an int k, partition the array (i.e. move the elements in nums) such that all elements < k are moved to the left and all elements >= k are moved to the right.
Return the partitioning index, i.e. the first index i such that nums[i] >= k.
class Solution:
    def partitionArray(self, nums, k):
        # write your code here
        if nums == []:
            return 0
        left = 0
        i = 0
        while i <= len(nums):
            if nums[i] < k:
                i += 1
                left += 1
            else:
                r = nums[i]
                del nums[i]
                nums.append(r)
                i += 1
        return left
My idea is to go through the list one by one. Each nums[i] that is larger than k is removed and appended at the end of nums, while elements smaller than k are kept in place. Once the whole list has been traversed, all the smaller numbers are at the front, and left is the counter to return at that point. But I cannot fix the problem with nums[i]: after each modification to the list, the counter i no longer points at the correct item.
How can I write the code based on this idea?
You're looking for index(k). This seems like a homework assignment, so you may be limited in what built-in functionality you can use. However, a pythonic approach to this is:
def solution(nums, k):
    return sorted(nums).index(k)
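One caveat worth noting (my addition, not from this answer): .index(k) raises a ValueError when k itself does not occur in nums. bisect_left sidesteps that, since the number of elements strictly less than k is exactly the partitioning index:

from bisect import bisect_left

def solution(nums, k):
    # index of the first element >= k in the sorted copy,
    # whether or not k itself is present
    return bisect_left(sorted(nums), k)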
You are doing several things I would recommend avoiding:
1. Concurrent modification: you should not add to or delete from a list while looping over it.
2. You cannot loop up to i == len(nums), because list indexes start at 0 and the last valid index is len(nums) - 1.
3. Since you are really just looking for index(k), you need only count the numbers less than k, without reorganizing the list.
class Solution:
    def partitionArray(self, nums, k):
        # write your code here
        if nums == []:
            return 0
        left = 0
        i = 0
        while i < len(nums):
            if nums[i] < k:
                left += 1
            i += 1
        return left
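And if the exercise really does require moving the elements rather than just counting, here is a minimal in-place sketch (my own, not from either answer) using a classic swap-based partition; it is not stable, but it establishes the required arrangement and returns the same index:

def partition_array(nums, k):
    # invariant: nums[:left] holds exactly the elements < k seen so far
    left = 0
    for i in range(len(nums)):
        if nums[i] < k:
            nums[left], nums[i] = nums[i], nums[left]
            left += 1
    return left

For example, partition_array([3, 1, 2], 2) rearranges the list to [1, 3, 2] and returns 1.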

How to calculate the worst case complexity for the function?

def functionX(L):
    """ L is a non-empty list of length len(L) = n. """
    i = 1
    while i < len(L) - 1:
        j = i - 1
        while j <= i + 1:
            L[j] = L[j] + L[i]
            j = j + 1
        i = i + 1
For the j loop, why do we have 3 iterations, each with 3 steps, instead of i iterations? I'm having a hard time figuring it out.
Is it clearer with for loops than while loops?
def functionY(L):
    N = len(L)
    for i in range(1, N-1):
        for j in range(i-1, i+2):
            L[j] = L[j] + L[i]
How about pseudo-code?
for i in range(N):      # drop the -1s on both ends; O(n-2) = O(n)
    for j in range(3):  # (i-1) to (i+2) covers 3 elements
        do something
This makes it pretty clear that Tony's answer is correct: we're in the class O(n). Specifically, the line L[j] = L[j] + L[i] will be executed 3n-6 times, which is in the complexity class O(3n) = O(n). If you count array accesses as your atomic operation, then we have O(3*(3n-6)) = O(n), still. The complexity class would not change if the line read L[j] += L[i], though the total number of array accesses would go down.
You have n-2 iterations of the outer loop and, in every outer iteration, 3 iterations of the inner loop, because for a given i, the variable j takes the values i-1, i, and i+1. Therefore the complexity equals O(3n) = O(n).
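A quick empirical check of the 3n-6 count (my own snippet, instrumenting the original function):

def functionX_counted(L):
    count = 0
    i = 1
    while i < len(L) - 1:
        j = i - 1
        while j <= i + 1:
            L[j] = L[j] + L[i]
            count += 1  # one execution of the inner line
            j = j + 1
        i = i + 1
    return count

for n in [4, 10, 100]:
    print(n, functionX_counted([0] * n))  # prints 3*n - 6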
