Complexity of comparing elements in two lists - python

I was coding a function in Python to find elements of a sorted list that exist in another sorted list and print out the results:
# assume that both lists are sorted
def compare_sorted_lists(list1, list2):
    res = []
    a = 0
    b = 0
    while a < len(list1) and b < len(list2):
        if list1[a] == list2[b]:
            res.append(list1[a])
            a += 1
        elif list1[a] < list2[b]:
            a += 1
        else:
            b += 1
    return res
I want to figure out the time complexity of comparing elements with this method.
Assuming that:
list1 has length A and the maximum number of digits/letters in a list1 element is X
list2 has length B and the maximum number of digits/letters in a list2 element is Y
For these lists I have O(A+B) time complexity when traversing them with pointers, but how would comparing elements affect the time complexity for this function (specifically, worst-case time complexity)?
Edit: 12 March 2021 16:30 - rephrased question

The comparison between two elements is constant time, so this does not affect the complexity of your whole algorithm, which you correctly identified as O(A+B).

As user1717828 pointed out, the loop runs at most A+B times; however, comparing two elements is not a constant-time operation. If the numbers were fixed-precision (machine-width) integers, it would be; but Python integers are unbounded, so the cost of comparing them grows linearly with the number of digits. Therefore the time complexity of the algorithm you gave is
O((A+B) * max{X,Y})
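As a rough illustration of that point (my own addition, not from the original answer; absolute timings will vary by machine), comparing very large Python ints is measurably slower than comparing small ones:
# Informal timing sketch (illustrative only): comparing ints with ~100,000
# digits has to scan their internal digit arrays, unlike small ints.
import timeit

small_a, small_b = 12345, 12346
big_a = 10**100000 + 1
big_b = 10**100000 + 2

print(timeit.timeit("small_a < small_b", globals=globals(), number=10000))
print(timeit.timeit("big_a < big_b", globals=globals(), number=10000))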
You can actually do better than that under specific circumstances. E.g. if A << B, then the following code has O(A*log(B)*max{X,Y}) time complexity.
for a in A:
    split B from the middle and keep searching for a in one of the two halves;
    continue until you either find a, or run out of elements
because the inner loop keeps dividing the list B in two, which can last for at most log_2(B)+1 steps.
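Here is a minimal sketch of that idea using the standard bisect module (the function name and structure are my own illustration, assuming list2 is sorted):
from bisect import bisect_left

def compare_with_binary_search(list1, list2):
    # For each element of the (small) list1, binary-search the (large) list2.
    res = []
    for x in list1:
        i = bisect_left(list2, x)
        if i < len(list2) and list2[i] == x:
            res.append(x)
    return res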

Related

Find longest quasi-constant sub-sequence of a sequence

I had this test earlier today, and I tried to be too clever and hit a road block. Unfortunately I got stuck in this mental rut and wasted too much time, failing this portion of the test. I solved it afterward, but maybe y'all can help me get out of the initial rut I was in.
Problem definition:
An unordered and non-unique sequence A consisting of N integers (all positive) is given. A subsequence of A is any sequence obtained by removing none, some or all elements from A. The amplitude of a sequence is the difference between the largest and the smallest element in this sequence. The amplitude of the empty subsequence is assumed to be 0.
For example, consider the sequence A consisting of six elements such that:
A[0] = 1
A[1] = 7
A[2] = 6
A[3] = 2
A[4] = 6
A[5] = 4
A subsequence of array A is called quasi-constant if its amplitude does not exceed 1. In the example above, the subsequences [1,2], [6,6], and [6,6,7] are quasi-constant. Subsequence [6, 6, 7] is the longest possible quasi-constant subsequence of A.
Now, find a solution that, given a non-empty zero-indexed array A consisting of N integers, returns the length of the longest quasi-constant subsequence of array A. For example, given sequence A outlined above, the function should return 3, as explained.
Now, I solved this in python 3.6 after the fact using a sort-based method with no classes (my code is below), but I didn't initially want to do that as sorting on large lists can be very slow. It seemed this should have a relatively simple formulation as a breadth-first tree-based class, but I couldn't get it right. Any thoughts on this?
My class-less sort-based solution:
def amp(sub_list):
    if len(sub_list) < 2:
        return 0
    else:
        return max(sub_list) - min(sub_list)

def solution(A):
    A.sort()
    longest = 0
    idxStart = 0
    idxEnd = idxStart + 1
    while idxEnd <= len(A):
        tmp = A[idxStart:idxEnd]
        if amp(tmp) < 2:
            idxEnd += 1
            if len(tmp) > longest:
                longest = len(tmp)
        else:
            idxStart = idxEnd
            idxEnd = idxStart + 1
    return longest
As Andrey Tyukin pointed out, you can solve this problem in O(n) time, which is better than the O(n log n) time you'd likely get from either sorting or any kind of tree based solution. The trick is to use dictionaries to count the number of occurrences of each number in the input, and use the count to figure out the longest subsequence.
I had a similar idea to him, but I had thought of a slightly different implementation. After a little testing, it looks like my approach is quite a bit faster, so I'm posting it as my own answer. It's quite short!
from collections import Counter

def solution(seq):
    if not seq:  # special case for empty input sequence
        return 0
    counts = Counter(seq)
    return max(counts[x] + counts[x+1] for x in counts)
I suspect this is faster than Andrey's solution because the running time of both of our solutions is really O(n) + O(k), where k is the number of distinct values in the input (and n is the total number of values in the input). My code handles the O(n) part very efficiently by handing off the sequence to the Counter constructor, which is implemented in C. It is likely to be a bit slower (on a per-item basis) at the O(k) part, since it needs a generator expression. Andrey's code does the reverse (it runs slower Python code for the O(n) part, and uses faster builtin C functions for the O(k) part). Since k is always less than or equal to n (and perhaps a lot less if the sequence has many repeated values), my code is faster overall. Both solutions are still O(n) though, and both should be much better than sorting for large inputs.
I don't know how BFS is supposed to help here.
Why not simply run once through the sequence and count how many elements every possible quasi-constant subsequence would have?
from collections import defaultdict

def longestQuasiConstantSubseqLength(seq):
    d = defaultdict(int)
    for s in seq:
        d[s] += 1
        d[s+1] += 1
    return max(d.values() or [0])

s = [1, 7, 6, 2, 6, 4]
print(longestQuasiConstantSubseqLength(s))
prints:
3
as expected.
Explanation: Every non-constant quasi-constant subsequence is uniquely identified by the greatest number that it contains (there can be only two, take the greater one). Now, if you have a number s, it can either contribute to the quasi-constant subsequence that has s or s + 1 as the greatest number. So, just add +1 to the subsequences identified by s and s + 1. Then output the maximum of all counts.
You can't get it faster than O(n), because you have to look at every entry of the input sequence at least once.

Better Python logic that prevents timeouts when comparing arrays in nested loops

I was attempting to solve a programming challenge, and the program I wrote solved the small test data correctly for this question. But when they ran it against the larger datasets, my program timed out on some occasions. I am mostly a self-taught programmer; if there is a better algorithm/implementation than my logic, can you tell me? Thanks.
Question
Given an array of integers, a, return the maximum difference of any
pair of numbers such that the larger integer in the pair occurs at a
higher index (in the array) than the smaller integer. Return -1 if you
cannot find a pair that satisfies this condition.
My Python Function
def maxDifference(a):
    diff = 0
    find = 0
    leng = len(a)
    for x in range(0, leng-1):
        for y in range(x+1, leng):
            if a[y] - a[x] >= diff:
                diff = a[y] - a[x]
                find = 1
    if find == 1:
        return diff
    else:
        return -1
Constraints:
1 <= N <= 1,000,000
-1,000,000 <= a[i] <= 1,000,000 i belongs to [1,N]
Sample Input:
Array { 2,3,10,2,4,8,1}
Sample Output:
8
Well... since you don't care about anything other than finding the highest number following the lowest number, provided that difference is the highest so far, there's no reason to do several passes or to use max() over a slice of the array:
def f1(a):
    smallest = a[0]
    result = 0
    for b in a:
        if b < smallest:
            smallest = b
        if b - smallest > result:
            result = b - smallest
    return result if result > 0 else -1
Thanks @Matthew for the testing code :)
This is very fast even on large sets:
The maximum difference is 99613 99613 99613
Time taken by Sojan's method: 0.0480000972748
Time taken by @Matthew's method: 0.0130000114441
Time taken by @GCord's method: 0.000999927520752
The reason your program takes too long is that your nested loop inherently means quadratic time.
The outer loop goes through N-1 indices. The inner loop goes through a different number of indices each time, but the average is obviously (N-1)/2 rounded up. So, the total number of times through the inner loop is (N-1) * (N-1)/2, which is O(N^2). For the maximum N=1000000, that means 499999000001 iterations. That's going to take a long time.
The trick is to find a way to do this in linear time.
Here's one solution (as a vague description, rather than actual code, so someone can't just copy and paste it when they face the same test as you):
Make a list of the smallest value before each index. Each one is just min(smallest_values[-1], arr[i]), and obviously you can do this in N steps.
Make a list of the largest value after each index. The simplest way to do this is to reverse the list, do the exact same loop as above (but with max instead of min), then reverse again. (Reversing a list takes N steps, of course.)
Now, for each element in the list, instead of comparing to every other element, you just have to compare to smallest_values[i] and largest_values[i]. Since you're only doing 2 comparisons for each of the N values, this takes 2N time.
So, even being lazy and naive, that's a total of N + 3N + 2N steps, which is O(N). If N=1000000, that means 6000000 steps, which is a whole lot faster than 499999000001.
You can obviously see how to remove the two reverses, and how to skip the first and last comparisons. If you're smart, you can see how to take the whole largest_values out of the equation entirely. Ultimately, I think you can get it down to 2N - 3 steps, or 1999997. But that's all just a small constant improvement; nowhere near as important as fixing the basic algorithmic problem. You'd probably get a bigger improvement than 3x (maybe 20x), for less work, by just running the naive code in PyPy instead of CPython, or by converting to NumPy—but you're not going to get the 83333x improvement in any way other than changing the algorithm.
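For concreteness, here is a minimal sketch of the smallest-before/largest-after idea described above (my own illustration, not the answerer's withheld code; the names are made up):
def max_difference_linear(arr):
    # smallest value seen at or before each index
    smallest_before = []
    running_min = arr[0]
    for x in arr:
        running_min = min(running_min, x)
        smallest_before.append(running_min)

    # largest value seen at or after each index
    largest_after = []
    running_max = arr[-1]
    for x in reversed(arr):
        running_max = max(running_max, x)
        largest_after.append(running_max)
    largest_after.reverse()

    best = max(largest_after[i] - smallest_before[i] for i in range(len(arr)))
    return best if best > 0 else -1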
Here's a linear time solution. It keeps track of the minimum value seen before each index of the list. These minimum values are stored in a list min_lst. Finally, the difference between corresponding elements of the original list and the min list is computed into another list of differences by zipping the two. The maximum value in this differences list is the required answer.
def get_max_diff(lst):
    min_lst = []
    running_min = lst[0]
    for item in lst:
        if item < running_min:
            running_min = item
        min_lst.append(running_min)
    val = max(x - y for (x, y) in zip(lst, min_lst))
    if not val:
        return -1
    return val
>>> get_max_diff([5, 6, 2, 12, 8, 15])
13
>>> get_max_diff([2, 3, 10, 2, 4, 8, 1])
8
>>> get_max_diff([5, 4, 3, 2, 1])
-1
Well, I figure since someone in the same problem can copy your code and run with that, I won't lose any sleep over them copying some more optimized code:
import time
import random

def max_difference1(a):
    # your function

def max_difference2(a):
    diff = 0
    for i in range(0, len(a)-1):
        curr_diff = max(a[i+1:]) - a[i]
        diff = max(curr_diff, diff)
    return diff if diff != 0 else -1

my_randoms = random.sample(range(100000), 1000)

t01 = time.time()
max_dif1 = max_difference1(my_randoms)
dt1 = time.time() - t01

t02 = time.time()
max_dif2 = max_difference2(my_randoms)
dt2 = time.time() - t02

print("The maximum difference is", max_dif1)
print("Time taken by your method:", dt1)
print("Time taken by my method:", dt2)
print("My method is", dt1/dt2, "times faster.")
The maximum difference is 99895
Time taken by your method: 0.5533690452575684
Time taken by my method: 0.08005285263061523
My method is 6.912546237558299 times faster.
Similar to what @abarnert said (who always snipes me on these things, I swear), you don't want to loop over the list twice. You can exploit the fact that you know the larger value has to come after the smaller one. You can also exploit the fact that you don't care about anything except the largest number; that is, in the list [1,3,8,5,9], the maximum difference is 8 (9-1) and you don't care that 3, 8, and 5 are in there. Thus: max(a[i+1:]) - a[i] is the maximum difference for a given index.
Then you compare it with diff, and take the larger of the 2 with max, as calling default built-in python functions is somewhat faster than if curr_diff > diff: diff = curr_diff (or equivalent).
The return line is simply your (fixed) line in 1 line instead of 4
As you can see, in a sample of 1000, this method is ~6x faster (note: used python 3.4, but nothing here would break on python 2.x)
I think the expected answer for
1, 2, 4, 2, 3, 8, 5, 6, 10
will be 8 - 2 = 6, but Saksham Varma's code will instead return 10 - 1 = 9.
It's just max(arr) - min(arr).
Don't we have to reset the min value when there is a dip? I.e., 4 -> 2 would reset current_smallest = 2 and continue the diff calculation with the value 2:
def f2(a):
    current_smallest = a[0]
    large_diff = 0
    for i in range(1, len(a)):
        # Identify the dip
        if a[i] < a[i-1]:
            current_smallest = a[i]
        if a[i] - current_smallest > large_diff:
            large_diff = a[i] - current_smallest
    # return added for completeness, mirroring f1 above
    return large_diff if large_diff > 0 else -1

Optimize finding pairs of arrays which can be compared

Definition: Array A(a1,a2,...,an) is >= B(b1,b2,...,bn) if they are the same size and a_i >= b_i for every i from 1 to n.
For example:
[1,2,3] >= [1,2,0]
[1,2,0] not comparable with [1,0,2]
[1,0,2] >= [1,0,0]
I have a list consisting of a large number of such arrays (approx. 10,000, but it can be bigger). The arrays' elements are positive integers. I need to remove all arrays from this list that are bigger than at least one of the other arrays. In other words: if there exists a B such that A >= B, then remove A.
Here is my current O(n^2) approach, which is extremely slow. I simply compare every array with all the other arrays and remove it if it's bigger. Are there any ways to speed it up?
import numpy as np
import time
import random

def filter_minimal(lst):
    n = len(lst)
    to_delete = set()
    for i in xrange(n-1):
        if i in to_delete:
            continue
        for j in xrange(i+1, n):
            if j in to_delete:
                continue
            if all(lst[i] >= lst[j]):
                to_delete.add(i)
                break
            elif all(lst[i] <= lst[j]):
                to_delete.add(j)
    return [lst[i] for i in xrange(len(lst)) if i not in to_delete]

def test(number_of_arrays, size):
    x = map(np.array, [[random.randrange(0, 10) for _ in xrange(size)] for i in xrange(number_of_arrays)])
    return filter_minimal(x)

a = time.time()
result = test(400, 10)
print time.time() - a
print len(result)
P.S. I've noticed that using numpy.all instead of builtin python all slows the program dramatically. What can be the reason?
Might not be exactly what you are asking for, but this should get you started.
import numpy as np
import time
import random

def compare(x, y):
    # Reshape x to a higher dimensional array
    compare_array = x.reshape(-1, 1, x.shape[-1])
    # You can now compare every x with every y element wise simultaneously
    mask = (y >= compare_array)
    # Create a mask that first ensures that all elements of y are greater than x
    # and then ensures that this is the case at least once.
    mask = np.any(np.all(mask, axis=-1), axis=-1)
    # Place this mask on x
    return x[mask]

def test(number_of_arrays, size, maxval):
    # Create arrays of shape (number_of_arrays, size) with maximum value maxval.
    x = np.random.randint(maxval, size=(number_of_arrays, size))
    y = np.random.randint(maxval, size=(number_of_arrays, size))
    return compare(x, y)

print test(50, 10, 20)
First of all we need to carefully check the objective. Is it true that we delete any array that is > ANY of the other arrays, even the deleted ones? For example, if A > B and C > A and B=C, then do we need to delete only A or both A and C? If we only need to delete INCOMPATIBLE arrays, then it is a much harder problem. This is a very difficult problem because different partitions of the set of arrays may be compatible, so you have the problem of finding the largest valid partition.
Assuming the easy problem, a better way to define the problem is that you want to KEEP all arrays which have at least one element < the corresponding element in ALL the other arrays. (In the hard problem, it is the corresponding element in the other KEPT arrays. We will not consider this.)
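To make that restated criterion concrete, here is a direct (still quadratic) check of it; this is my own illustration, not part of the staged algorithm below, and it ignores tie-breaking between exact duplicates:
def keep(candidate, others):
    # Keep `candidate` iff, against every other array, it is strictly smaller
    # in at least one position (i.e. it does not dominate that array).
    return all(any(c < o for c, o in zip(candidate, other)) for other in others)

# keep([1, 0, 0], [[1, 2, 3], [1, 2, 0], [1, 0, 2]])  -> True  (kept)
# keep([1, 0, 2], [[1, 2, 3], [1, 2, 0], [1, 0, 0]])  -> False (it dominates [1, 0, 0])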
Stage 1
To solve this problem what you do is arrange the arrays in columns and then sort each row while maintaining the key to the array and the mapping of each array-row to position (POSITION lists). For example, you might end up with a result in stage 1 like this:
row 1: B C D A E
row 2: C A E B D
row 3: E D B C A
Meaning that for the first element (row 1) array B has a value >= C, C >= D, etc.
Now, sort and iterate the last column of this matrix ({E D A} in the example). For each item, check if the element is less than the previous element in its row. For example, in row 1, you would check if E < A. If this is true you return immediately and keep the result. For example, if E_row1 < A_row1 then you can keep array E. Only if the values in the row are equal do you need to do a stage 2 test (see below).
In the example shown you would keep E, D, A (as long as they passed the test above).
Stage 2
This leaves B and C. Sort the POSITION list for each. For example, this will tell you that the row with B's minimum position is row 2. Now do a direct comparison between B and every array below it in the minimum row, here row 2. Here there is only one such array, D. Do a direct comparison between B and D. This shows that B < D in row 3, therefore B is compatible with D. If the item is compatible with every array below its minimum position, keep it. We keep B.
Now we do the same thing for C. In C's case we need only do one direct comparison, with A. C dominates A so we do not keep C.
Note that in addition to testing items that did not appear in the last column we need to test items that had equality in Stage 1. For example, imagine D=A=E in row 1. In this case we would have to do direct comparisons for every equality involving the array in the last column. So, in this case we direct compare E to A and E to D. This shows that E dominates D, so E is not kept.
The final result is we keep A, B, and D. C and E are discarded.
The overall performance of this algorithm is n^2 * log n for Stage 1, plus somewhere between n (lower bound) and n * log n (upper bound) for Stage 2. So the maximum running time is n^2 * log n + n * log n and the minimum running time is n^2 * log n + n. Note that the running time of your algorithm is cubic, n^3: you compare every pair of arrays (n * n pairs) and each comparison takes n element comparisons, i.e. n * n * n.
In general, this will be much faster than the brute force approach. Most of the time will be spent sorting the original matrix, a more or less unavoidable task. Note that you could potentially improve my algorithm by using priority queues instead of sorting, but the resulting algorithm would be much more complicated.

Improve performance on Python nested for loop

I have two sets of numbers, each in a list in my Python script. For each number in the first list, I need to see if any of the numbers in the second are larger than it. I only need the number of times that an n2 was larger than an n1. (For example, if numset1 is [7,2] and numset2 is [6,9], I just need 3)
Right now I'm doing this - going through each n1 and checking if each n2 is larger than it:
possibilities = [(n1<n2) for n1 in numset1 for n2 in numset2]
numPossibilities = sum(possibilities)
Currently this is the slowest portion of my script, particularly when dealing with larger datasets (numset1 and numset2 containing thousands of numbers). I'm sure there's some way to make this more efficient, I'm just not sure how.
Sort numset2 and then iterate over numset1 but use a binary search on numset2, for example using the bisect module: http://docs.python.org/2/library/bisect.html
import bisect
# your code here
numset2.sort()
L = len(numset2)
numPossibilities = sum([bisect.bisect_right(numset2,n1) < L for n1 in numset1])
Also note that your original code does not compute what you have asked for in your second sentence - for each element in numset1, it sums how many elements in numset2 are greater than this element, not whether there is an element that matches the criterion.
To match your original code, do:
numPossibilities = sum([L - bisect.bisect_right(numset2,n1) for n1 in numset1])
Your problem is that you have to iterate over each combination of (n1, n2), and there are len(numset1) * len(numset2) combinations, which gets really large when numset1 and numset2 are only fairly large.
Put a different way, the running time of your algorithm is O(n^2) (if len(numset1) is about equal to len(numset2)). Let's make that runtime faster. :-)
This becomes a lot easier to do if we sort the lists. So let's sort numset1 and numset2.
>>> numset1.sort()
>>> numset2.sort()
Now, compare the smallest element of numset1 (call it n1) and the smallest element of numset2 (call it n2). If n1 is smaller, then we know that there are len(numset2) elements in numset2 larger than it. If n2 is smaller, we know that no elements in numset1 are smaller than it.
Now, we don't want to actually delete elements from the beginning of the list, because that's an O(n) operation on a Python list. So instead, let's keep track of where we are in each list and iterate through.
>>> n1_idx, n2_idx, accumulator = 0, 0, 0
>>> while n1_idx < len(numset1) and n2_idx < len(numset2):
...     if numset1[n1_idx] < numset2[n2_idx]:
...         accumulator += len(numset2) - n2_idx
...         n1_idx += 1
...     else:
...         n2_idx += 1
At the end of this operation, we've spent O(nlog(n)) time sorting the lists and O(n) time doing our iteration, so our overall runtime complexity is O(nlog(n)).
That, and accumulator has the number of (n1, n2) pairs where n1 < n2.
Here is what should be a pretty efficient implementation:
def get_possibilities(numset1, numset2):
    sortset1 = sorted(numset1)
    sortset2 = sorted(numset2)
    total = 0
    i2 = 0
    for i1, n1 in enumerate(sortset1, 1):
        while sortset2[i2] <= n1:
            i2 += 1
            if i2 >= len(sortset2):
                # reached end of i2, so just return total now
                return total
        # current from sortset2 is greater than from sortset1 so far
        total += i1
    # all remaining elements of sortset2 greater than all elements of sortset1
    total += (len(sortset2) - i2 - 1) * len(sortset1)
    return total
This iterates over each set only once, which it accomplishes by sorting the sets before running. This allows some improved logic, because if the element in sortset2 at index i2 is greater than an element of sortset1 at index i1, then it is also greater than all elements in sortset1 at earlier indices.

How can I merge two lists and sort them working in 'linear' time?

I have this, and it works:
# E. Given two lists sorted in increasing order, create and return a merged
# list of all the elements in sorted order. You may modify the passed in lists.
# Ideally, the solution should work in "linear" time, making a single
# pass of both lists.
def linear_merge(list1, list2):
    finalList = []
    for item in list1:
        finalList.append(item)
    for item in list2:
        finalList.append(item)
    finalList.sort()
    return finalList
    # +++your code here+++
    return
But, I'd really like to learn this stuff well. :) What does 'linear' time mean?
Linear means O(n) in Big O notation, while your code uses a sort(), which is most likely O(n log n).
The question is asking for the standard merge algorithm. A simple Python implementation would be:
def merge(l, m):
    result = []
    i = j = 0
    total = len(l) + len(m)
    while len(result) != total:
        if len(l) == i:
            result += m[j:]
            break
        elif len(m) == j:
            result += l[i:]
            break
        elif l[i] < m[j]:
            result.append(l[i])
            i += 1
        else:
            result.append(m[j])
            j += 1
    return result
>>> merge([1,2,6,7], [1,3,5,9])
[1, 1, 2, 3, 5, 6, 7, 9]
Linear time means that the time taken is bounded by some undefined constant times (in this context) the number of items in the two lists you want to merge. Your approach doesn't achieve this - it takes O(n log n) time.
When specifying how long an algorithm takes in terms of the problem size, we ignore details like how fast the machine is, which basically means we ignore all the constant terms. We use "asymptotic notation" for that. These basically describe the shape of the curve you would plot in a graph of problem size in x against time taken in y. The logic is that a bad curve (one that gets steeper quickly) will always lead to a slower execution time if the problem is big enough. It may be faster on a very small problem (depending on the constants, which probably depends on the machine) but for small problems the execution time isn't generally a big issue anyway.
The "big O" specifies an upper bound on execution time. There are related notations for average execution time and lower bounds, but "big O" is the one that gets all the attention.
O(1) is constant time - the problem size doesn't matter.
O(log n) is a quite shallow curve - the time increases a bit as the problem gets bigger.
O(n) is linear time - each unit increase means it takes a roughly constant amount of extra time. The graph is (roughly) a straight line.
O(n log n) curves upwards more steeply as the problem gets more complex, but not by very much. This is the best that a general-purpose sorting algorithm can do.
O(n squared) curves upwards a lot more steeply as the problem gets more complex. This is typical for slower sorting algorithms like bubble sort.
The nastiest algorithms are classified as "np-hard" or "np-complete" where the "np" means "non-polynomial" - the curve gets steeper quicker than any polynomial. Exponential time is bad, but some are even worse. These kinds of things are still done, but only for very small problems.
EDIT the last paragraph is wrong, as indicated by the comment. I do have some holes in my algorithm theory, and clearly it's time I checked the things I thought I had figured out. In the mean time, I'm not quite sure how to correct that paragraph, so just be warned.
For your merging problem, consider that your two input lists are already sorted. The smallest item from your output must be the smallest item from one of your inputs. Get the first item from both and compare the two, and put the smallest in your output. Put the largest back where it came from. You have done a constant amount of work and you have handled one item. Repeat until both lists are exhausted.
Some details... First, putting the item back in the list just to pull it back out again is obviously silly, but it makes the explanation easier. Next - one input list will be exhausted before the other, so you need to cope with that (basically just empty out the rest of the other list and add it to the output). Finally - you don't actually have to remove items from the input lists - again, that's just the explanation. You can just step through them.
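A compact sketch of that step-through version (my own illustration; the function and variable names are made up):
def merge_step_through(list1, list2):
    i = j = 0
    merged = []
    while i < len(list1) and j < len(list2):
        if list1[i] <= list2[j]:
            merged.append(list1[i])
            i += 1
        else:
            merged.append(list2[j])
            j += 1
    # one of the lists is now exhausted; empty out the rest of the other one
    merged.extend(list1[i:])
    merged.extend(list2[j:])
    return merged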
Linear time means that the runtime of the program is proportional to the length of the input. In this case the input consists of two lists. If the lists are twice as long, then the program will run approximately twice as long. Technically, we say that the algorithm should be O(n), where n is the size of the input (in this case the length of the two input lists combined).
This appears to be homework, so I will not supply you with an answer. Even if this is not homework, I am of the opinion that you will be best served by taking a pen and a piece of paper, constructing two smallish example lists which are sorted, and figuring out how you would merge those two lists by hand. Once you have figured that out, implementing the algorithm is a piece of cake.
(If all goes well, you will notice that you need to iterate over each list only once, in a single direction. That means that the algorithm is indeed linear. Good luck!)
If you build the result in reverse sorted order, you can use pop() and still be O(N)
pop() from the right end of the list does not require shifting the elements, so is O(1)
Reversing the list before we return it is O(N)
>>> def merge(l, r):
...     result = []
...     while l and r:
...         if l[-1] > r[-1]:
...             result.append(l.pop())
...         else:
...             result.append(r.pop())
...     result += (l+r)[::-1]
...     result.reverse()
...     return result
...
>>> merge([1,2,6,7], [1,3,5,9])
[1, 1, 2, 3, 5, 6, 7, 9]
This thread contains various implementations of a linear-time merge algorithm. Note that for practical purposes, you would use heapq.merge.
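For example (standard library usage; heapq.merge returns an iterator):
from heapq import merge

print(list(merge([1, 2, 6, 7], [1, 3, 5, 9])))
# [1, 1, 2, 3, 5, 6, 7, 9]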
Linear time means O(n) complexity. You can read something about algorithm complexity and big-O notation here: http://en.wikipedia.org/wiki/Big_O_notation .
You should not combine the lists into finalList and sort afterwards; try to merge them gradually instead - add an element, make sure the result is still sorted, then add the next element... this should give you some ideas.
A simpler version, which requires equal-sized lists:
def merge_sort(L1, L2):
    res = []
    for i in range(len(L1)):
        if L1[i] < L2[i]:
            first = L1[i]
            second = L2[i]
        else:
            first = L2[i]
            second = L1[i]
        res.extend([first, second])
    return res
The toolz library (its itertoolz module) provides an efficient implementation for merging two sorted lists:
https://toolz.readthedocs.io/en/latest/_modules/toolz/itertoolz.html#merge_sorted
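Illustrative usage, assuming the toolz package is installed (merge_sorted returns an iterator):
from toolz import merge_sorted

print(list(merge_sorted([1, 2, 6, 7], [1, 3, 5, 9])))
# [1, 1, 2, 3, 5, 6, 7, 9]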
'Linear time' means that time is an O(n) function, where n is the number of input items (here, the items in the lists).
f(n) = O(n) means that there exists a constant c such that f(n) <= c * n for all sufficiently large n.
def linear_merge(list1, list2):
    finalList = []
    i = 0
    j = 0
    while i < len(list1):
        if j < len(list2):
            if list1[i] < list2[j]:
                finalList.append(list1[i])
                i += 1
            else:
                finalList.append(list2[j])
                j += 1
        else:
            finalList.append(list1[i])
            i += 1
    while j < len(list2):
        finalList.append(list2[j])
        j += 1
    return finalList
