Time Complexity of "in" (containment operator) - Python

I was wondering about the time complexity of an algorithm like the one below.
For a Python list, if we have a for loop iterating over it with a containment check nested inside, would the time complexity of that be O(n^2)?
I know both are O(n) (or I think so), so would nesting one inside the other make it O(n^2)?
I think that if this collection is actually a list, then the time complexity of the code below is O(n^2), but if it were a dictionary it would be O(n), because lookup is O(1). Is that correct?
Thanks for any help in advance!
for element in my_list:    # iterating: O(n) passes
    if x in my_list:       # list containment: an O(n) scan per pass -> O(n^2)
        pass

Your analysis is correct.
List containment is O(n), and doing an O(n) operation O(n) times is O(n^2).
Dictionary lookups are O(1), and doing an O(1) operation O(n) times is O(n).
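A minimal sketch contrasting the two cases (data and lookup are hypothetical names):
data = [5, 2, 9, 4]           # plain list: "in" scans, O(n) per check
lookup = set(data)            # set: "in" hashes, average O(1) per check

for element in data:          # O(n) iterations...
    if element in data:       # ...each doing an O(n) scan -> O(n^2) total
        pass

for element in data:          # O(n) iterations...
    if element in lookup:     # ...each an average O(1) lookup -> O(n) total
        pass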

Related

Deque (time complexity)

What is the time complexity for accessing deque[0], deque[somewhere in the middle], and deque[-1]?
From the documentation:
Indexed access is O(1) at both ends but slows to O(n) in the middle. For fast random access, use lists instead.
This suggests a linked-list-style implementation; in CPython a deque is in fact a doubly linked list of fixed-size blocks of elements, which is why the ends are cheap and the middle is not.
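A rough timing sketch that makes this visible (exact numbers will vary by machine):
from collections import deque
from timeit import timeit

d = deque(range(1_000_000))
mid = len(d) // 2

print(timeit(lambda: d[0], number=1000))    # O(1): front
print(timeit(lambda: d[-1], number=1000))   # O(1): back
print(timeit(lambda: d[mid], number=1000))  # O(n): middle, far slower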

Efficient Sorted List Python

I am looking for a Python data structure that functions as a sorted list with the following asymptotics:
O(1) pop from beginning (pop smallest element)
O(1) pop from end (pop largest element)
O(log n) or better insert
Does such a data structure with an efficient implementation exist? If so, is there a library that implements it in Python?
A regular red-black tree or B-tree can do this in an amortized sense. If you store pointers to the smallest and largest elements of the tree, then the cost of deleting those elements is amortized O(1), meaning that any series of d deletions takes time O(d), though an individual deletion may take longer. The cost of an insertion is O(log n), which is as good as possible: otherwise you could use the data structure to sort n items in less than O(n log n) time.
As for libraries that implement this, I'm not sure.
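One practical option worth looking at is the third-party sortedcontainers package; its SortedList keeps elements ordered with roughly O(log n) amortized inserts and cheap pops at both ends (it is list-of-lists based rather than a balanced tree, so these are empirical rather than strict bounds). A sketch assuming that library is installed:
from sortedcontainers import SortedList  # pip install sortedcontainers

sl = SortedList([5, 1, 9, 3])
sl.add(4)               # insert, roughly O(log n) amortized
smallest = sl.pop(0)    # pop smallest element
largest = sl.pop(-1)    # pop largest element
print(smallest, largest, list(sl))  # 1 9 [3, 4, 5]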

Asymptotic complexity of converting list to set in Python

There's already a question regarding this, and the answer says the asymptotic complexity is O(n). But I observed that when an unsorted list is converted into a set, the set can be printed out in sorted order, which would mean that at some point in the middle of these operations the list was sorted. Since any comparison sort has a lower bound of Ω(n log n), the asymptotic complexity of this operation should then also be Ω(n log n). So what exactly is the complexity of this operation?
A set in Python is an unordered collection, so any order you see is by chance. Since both dict and set are implemented as hash tables in CPython, insertion is O(1) on average and O(n) in the worst case.
So list(set(...)) is always O(n), and set(list(...)) is O(n) on average.
You can browse the source code for set here.
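The sorted-looking output the asker saw is a hash artifact, not a sort: in CPython small integers hash to themselves, so they often land in hash-table slots in numeric order. Other element types show no such order:
print(set([3, 1, 2]))        # {1, 2, 3}: looks sorted only because
                             # hash(n) == n for small ints
print(set(['b', 'a', 'c']))  # arbitrary order, e.g. {'c', 'a', 'b'}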

What is the algorithmic complexity of converting a collections.deque to python list?

I'm trying to determine whether the complexity of converting a collections.deque object into a Python list object is O(n). I imagine it would have to take every element and append it to the list, but I cannot seem to find the implementation code behind deque. Has Python built something more efficient in under the hood that could allow an O(1) conversion to a list?
Edit: based on the following, I do not believe it could be any faster than O(n):
"Indexed access is O(1) at both ends but slows to O(n) in the middle. For fast random access, use lists instead."
If it cannot access a middle node in O(1) time, it will not be able to convert without the same complexity.
You have to access every node, so O(1) is impossible for that reason alone.
I would expect Python's deque to follow the same principles as conventional deques: accessing the first element takes constant time. You have to do that for n elements, so the runtime is O(n).
Here is the implementation of deque
However, that is irrelevant for determining the complexity of converting a deque to a list in Python.
Unless Python is somehow reusing the internal data structure, conversion into a list requires a walk through the deque, and that is O(n).
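A rough timing sketch showing the linear scaling: doubling the deque's length roughly doubles the conversion time.
from collections import deque
from timeit import timeit

for n in (100_000, 200_000, 400_000):
    d = deque(range(n))
    print(n, timeit(lambda: list(d), number=10))  # time grows ~linearly in n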

time complexity of sorting a dictionary

I was wondering what the time complexity is of sorting a dictionary by key versus sorting it by value.
For example:
for key in sorted(my_dict, key=my_dict.get):
    <some-code>
In the above line, what is the time complexity of sorted? If quicksort is assumed, is it O(n log n) on average and O(n^2) in the worst case?
And is the time complexity of sorting by value different from sorting by key? Since accessing a value by its key takes only O(1) time, shouldn't both be the same?
Thanks.
sorted doesn't really sort a dictionary; it collects the iterable it receives into a list and sorts that list using the Timsort algorithm. Timsort is decidedly not a variant of quicksort; it is a hybrid algorithm closer to merge sort. According to Wikipedia, its complexity is O(n log n) in the worst case, with optimizations that speed up the partially ordered data sets commonly encountered in practice.
Since collecting the dict's keys or its values is O(n) either way, the complexity of both sorts is the same, dominated by the sort algorithm at O(n log n).
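A concrete comparison of the two sorts, reusing the question's my_dict:
my_dict = {'b': 3, 'a': 1, 'c': 2}

for key in sorted(my_dict):                   # sort by key: O(n log n)
    print(key, my_dict[key])

for key in sorted(my_dict, key=my_dict.get):  # sort by value: also O(n log n);
    print(key, my_dict[key])                  # each my_dict.get(key) is O(1)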
