Prompted by this question, I checked the performance on my laptop.
Surprisingly, I found that pop(0) from a list is faster than popleft() from a deque structure:
python -m timeit 'l = range(10000)' 'l.pop(0)'
gives:
10000 loops, best of 3: 66 usec per loop
While:
python -m timeit 'import collections' 'l = collections.deque(range(10000))' 'l.popleft()'
gives:
10000 loops, best of 3: 123 usec per loop
Moreover, I checked the performance in Jupyter and found the same outcome:
%timeit l = range(10000); l.pop(0)
10000 loops, best of 3: 64.7 µs per loop
from collections import deque
%timeit l = deque(range(10000)); l.popleft()
10000 loops, best of 3: 122 µs per loop
What is the reason?
The problem is that your timeit call also times the deque/list creation, and creating a deque is much slower because it has to allocate and link its internal blocks.
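A quick way to see this (a sketch; exact numbers will vary by machine) is to time only the construction of each container, with the sizes from the question:
import timeit
# time just the container creation, not the popping
print(timeit.timeit('list(range(10000))', number=1000))
print(timeit.timeit('collections.deque(range(10000))',
                    setup='import collections', number=1000))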
In the command line, you can pass the setup to timeit using the -s option like this:
python -m timeit -s"import collections, time; l = collections.deque(range(10000000))" "l.popleft()"
Also, since the setup is only run once, you eventually get a pop-from-empty error; as I haven't changed the default number of iterations, I created a large deque to compensate, and got
10000000 loops, best of 3: 0.0758 usec per loop
On the other hand, with a list it's slower:
python -m timeit -s "l = list(range(10000000))" "l.pop(0)"
100 loops, best of 3: 9.72 msec per loop
I have also coded the benchmark in a script (more convenient), with a separate setup (to avoid clocking it) and 99999 iterations on a 100000-element list:
import timeit
print(timeit.timeit(stmt='l.pop(0)',setup='l = list(range(100000))',number=99999))
print(timeit.timeit(setup='import collections; l = collections.deque(range(100000))', stmt='l.popleft()', number=99999))
No surprise: deque wins:
2.442976927292288 seconds for pop(0) on the list
0.007311641921253109 seconds for popleft() on the deque
Note that l.pop() for the list runs in 0.011536903686244897 seconds, which is very fast: popping the last element requires no shifting, as expected.
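That last number comes from the same pattern as the script above; a sketch of how it could be reproduced (same list size and iteration count):
import timeit
# pop() with no argument removes from the end, so no elements need to shift
print(timeit.timeit(stmt='l.pop()', setup='l = list(range(100000))', number=99999))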
Related
I've recently noticed that the built-in list has a list.clear() method. So far, when I wanted to ensure a list is empty, I just create a new list: l = [].
I was curious if it makes a difference, so I measured it:
$ python --version
Python 3.11.0
$ python -m timeit 'a = [1, 2, 3, 4]; a = []'
5000000 loops, best of 5: 61.5 nsec per loop
$ python -m timeit 'a = [1, 2, 3, 4]; a.clear()'
5000000 loops, best of 5: 57.4 nsec per loop
So creating a new empty list is about 7% slower than using clear() for small lists.
For bigger lists, it seems to be faster to just create a new list:
$ python -m timeit 'a = list(range(10_000)); a = []'
2000 loops, best of 5: 134 usec per loop
$ python -m timeit 'a = list(range(10_000)); a = []'
2000 loops, best of 5: 132 usec per loop
$ python -m timeit 'a = list(range(10_000)); a = []'
2000 loops, best of 5: 134 usec per loop
$ python -m timeit 'a = list(range(10_000)); a.clear()'
2000 loops, best of 5: 143 usec per loop
$ python -m timeit 'a = list(range(10_000)); a.clear()'
2000 loops, best of 5: 139 usec per loop
$ python -m timeit 'a = list(range(10_000)); a.clear()'
2000 loops, best of 5: 139 usec per loop
Why is that the case?
edit: Small clarification: I am aware that the two are (in some scenarios) not semantically the same. If you clear a list, any other references to it will see the now-empty list; if you create a new list object, other references still point to the old one. For this question, I don't care about this potential difference. Just assume there is exactly one reference to that list.
I was curious if it makes a difference, so I measured it:
This is not the right way to do the measurement. The setup code should be separated out, so that it is not included in the timing. In fact, all of these operations are O(1); you see O(N) results because creating the original data is O(N).
So far, when I wanted to ensure a list is empty, I just create a new list: l = [].
list.clear is not equivalent to creating a new list. Consider:
a = [1,2,3]
b = a
Doing a = [] will not cause b to change, because it only rebinds the a name. But doing a.clear() will cause b to change, because both names refer to the same list object, which was mutated.
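A minimal sketch of that difference (a and b are just illustrative names):
a = [1, 2, 3]
b = a            # b refers to the same list object
a = []           # rebinds the name a only
print(b)         # [1, 2, 3]

a = [1, 2, 3]
b = a
a.clear()        # mutates the shared object in place
print(b)         # []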
(Using a = [] is fine when that result is intentional, of course. Similarly, user-defined objects are often best "reset" by re-creating them.)
Instead, a.clear() is equivalent to a[:] = [] (or the same with some other empty sequence). However, this is harder to read. The Zen of Python tells us: "Beautiful is better than ugly"; "Explicit is better than implicit"; "There should be one-- and preferably only one --obvious way to do it".
It's also clearly (sorry) faster:
$ python -m timeit --setup 'a = list(range(10_000))' 'a.clear()'
10000000 loops, best of 5: 34.9 nsec per loop
$ python -m timeit --setup 'a = list(range(10_000))' 'a[:] = []'
5000000 loops, best of 5: 67.1 nsec per loop
$ python -m timeit --setup 'a = list(range(10_000))' 'a[:] = ()'
5000000 loops, best of 5: 51.7 nsec per loop
The "clever" ways require creating temporary objects, both for the iterable and for the slice used to index the list. Either way, it will end up in C code that does relatively simple, O(1) pointer bookkeeping (and a memory deallocation).
I have this simple function that partitions a list and returns an index i into the list such that elements at indices less than i are smaller than list[i] and elements at indices greater than i are bigger.
def partition(arr):
    first_high = 0
    pivot = len(arr) - 1
    for i in range(len(arr)):
        if arr[i] < arr[pivot]:
            arr[first_high], arr[i] = arr[i], arr[first_high]
            first_high = first_high + 1
    arr[first_high], arr[pivot] = arr[pivot], arr[first_high]
    return first_high

if __name__ == "__main__":
    arr = [1, 5, 4, 6, 0, 3]
    pivot = partition(arr)
    print(pivot)
The runtime is substantially bigger with Python 3.4 than with Python 2.7.6
on OS X:
time python3 partition.py
real 0m0.040s
user 0m0.027s
sys 0m0.010s
time python partition.py
real 0m0.031s
user 0m0.018s
sys 0m0.011s
Same thing on Ubuntu 14.04 / VirtualBox:
python3:
real 0m0.049s
user 0m0.034s
sys 0m0.015s
python:
real 0m0.044s
user 0m0.022s
sys 0m0.018s
Is Python 3 inherently slower than Python 2.7, or are there specific optimizations to the code that would make it run as fast as on Python 2.7?
As mentioned in the comments, you should be benchmarking with timeit rather than with OS tools.
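For example, a sketch of how the function could be timed without the interpreter start-up cost (assuming it lives in partition.py as in the question; the iteration count is arbitrary):
import timeit

print(timeit.timeit('partition([1, 5, 4, 6, 0, 3])',
                    setup='from partition import partition',
                    number=100000))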
My guess is that the range function is probably performing a little slower in Python 3. In Python 2 it simply returns a list; in Python 3 it returns a range object, which behaves more or less like a generator. I did some benchmarking and these were the results, which may be a hint at what you're experiencing:
python -mtimeit "range(10)"
1000000 loops, best of 3: 0.474 usec per loop
python3 -mtimeit "range(10)"
1000000 loops, best of 3: 0.59 usec per loop
python -mtimeit "range(100)"
1000000 loops, best of 3: 1.1 usec per loop
python3 -mtimeit "range(100)"
1000000 loops, best of 3: 0.578 usec per loop
python -mtimeit "range(1000)"
100000 loops, best of 3: 11.6 usec per loop
python3 -mtimeit "range(1000)"
1000000 loops, best of 3: 0.66 usec per loop
As you can see, when the input to range is small, it tends to be faster in Python 2. As the input grows, Python 3's range behaves better.
My suggestion: test the code for larger arrays, with a hundred or a thousand elements.
Actually, I went further and tested a complete iteration through the elements. The results were totally in favor of Python 2:
python -mtimeit "for i in range(1000):pass"
10000 loops, best of 3: 31 usec per loop
python3 -mtimeit "for i in range(1000):pass"
10000 loops, best of 3: 45.3 usec per loop
python -mtimeit "for i in range(10000):pass"
1000 loops, best of 3: 330 usec per loop
python3 -mtimeit "for i in range(10000):pass"
1000 loops, best of 3: 480 usec per loop
My conclusion is that it is probably faster to iterate through a list than through a generator, although the latter is definitely more efficient in terms of memory consumption. This is a classic example of the trade-off between speed and memory. That said, the speed difference is not that big in absolute terms (well under a millisecond here), so weigh it against what works better for your program.
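If you want to check this trade-off yourself on Python 3, where range is lazy, a sketch (numbers will vary by machine):
import timeit

# iterate over a lazy range object vs a pre-built list of the same size
print(timeit.timeit('for i in r: pass', setup='r = range(10000)', number=1000))
print(timeit.timeit('for i in l: pass', setup='l = list(range(10000))', number=1000))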
I need to have a 100000 characters long string. What is the most efficient and shortest way of producing such a string in python?
The content of the string is not of importance.
Something like:
'x' * 100000 # or,
''.join('x' for x in xrange(100000)) # or,
from itertools import repeat
''.join(repeat('x', times=100000))
Or for a bit of a mixup of letters:
from string import ascii_letters
from random import choice
''.join(choice(ascii_letters) for _ in xrange(100000))
Or, for some random data:
import os
s = os.urandom(100000)
You can simply do
s = 'a' * 100000
Since efficiency is important, here's a quick benchmark for some of the approaches mentioned so far:
$ python -m timeit "" "'a'*100000"
100000 loops, best of 3: 4.99 usec per loop
$ python -m timeit "from itertools import repeat" "''.join(repeat('x', times=100000))"
1000 loops, best of 3: 2.24 msec per loop
$ python -m timeit "import array" "array.array('c',[' ']*100000).tostring()"
100 loops, best of 3: 3.92 msec per loop
$ python -m timeit "" "''.join('x' for x in xrange(100000))"
100 loops, best of 3: 5.69 msec per loop
$ python -m timeit "import os" "os.urandom(100000)"
100 loops, best of 3: 6.17 msec per loop
Not surprisingly, of the ones posted, using string multiplication is the fastest by far.
Also note that it is more efficient to multiply a single char than a multi-char string (to get the same final string length).
$ python -m timeit "" "'a'*100000"
100000 loops, best of 3: 4.99 usec per loop
$ python -m timeit "" "'ab'*50000"
100000 loops, best of 3: 6.02 usec per loop
$ python -m timeit "" "'abcd'*25000"
100000 loops, best of 3: 6 usec per loop
$ python -m timeit "" "'abcdefghij'*10000"
100000 loops, best of 3: 6.03 usec per loop
Tested on Python 2.7.3
Strings can use the multiplication operator:
"a" * 100000
Try making an array of blank characters.
import array
longCharArray = array.array('c',[' ']*100000)
This will allocate an array of ' ' characters of size 100000
longCharArray.tostring()
Will convert to a string.
Just pick some character and repeat it 100000 times:
"a"*100000
Why you would want this is another question. . .
You can try something like this:
"".join(random.sample(string.lowercase * 385,10000))
As a one liner:
import random
''.join([chr(random.randint(32, 126)) for x in range(30)])
Change the range() value to get a different length of string; change the bounds of randint() to get a different set of characters.
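For the length asked in the question, a self-contained sketch of the same idea:
import random

# 100000 printable ASCII characters (codes 32-126)
s = ''.join(chr(random.randint(32, 126)) for _ in range(100000))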
I recently wrote a quick and dirty BFS implementation, to find diamonds in a directed graph.
The BFS loop looked like this:
while toVisit:
    y = toVisit.pop()
    if y in visited: return "Found diamond"
    visited.add(y)
    toVisit.extend(G[y])
(G is the graph - a dictionary from node names to the lists of their neighbors)
Then comes the interesting part:
I thought that list.pop() is probably too slow, so I ran a profiler to compare the speed of this implementation with deque.pop - and got a bit of an improvement. Then I compared it with y = toVisit[0]; toVisit = toVisit[1:], and to my surprise, the last implementation is actually the fastest one.
Does this make any sense?
Is there any performance reason to ever use list.pop() instead of the apparently much faster two-liner?
Your measurements are wrong. With CPython 2.7 on x64, I get the following results:
$ python -m timeit 'l = list(range(10000))' 'while l: l = l[1:]'
10 loops, best of 3: 365 msec per loop
$ python -m timeit 'l = list(range(10000))' 'while l: l.pop()'
1000 loops, best of 3: 1.82 msec per loop
$ python -m timeit 'import collections' \
'l = collections.deque(list(range(10000)))' 'while l: l.pop()'
1000 loops, best of 3: 1.67 msec per loop
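If the goal is to consume nodes from the front (as toVisit[0]; toVisit = toVisit[1:] does), a deque with popleft() gives O(1) removals; here is a sketch of the loop from the question rewritten that way (has_diamond and start are hypothetical names):
from collections import deque

def has_diamond(G, start):
    # G maps node names to lists of neighbours, as in the question
    visited = set()
    toVisit = deque([start])
    while toVisit:
        y = toVisit.popleft()        # O(1) removal from the front
        if y in visited:
            return "Found diamond"
        visited.add(y)
        toVisit.extend(G.get(y, ()))
    return None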
Use generators for performance
python -m timeit 'import itertools' 'l=iter(xrange(10000))' 'while next(l, None): l,a = itertools.tee(l)'
1000000 loops, best of 3: 0.986 usec per loop
What is the cost of len() function for Python built-ins? (list/tuple/string/dictionary)
It's O(1) (constant time, independent of the actual length of the element, so very fast) on every type you've mentioned, plus set and others such as array.array.
Calling len() on those data types is O(1) in CPython, the official and most common implementation of the Python language. Here's a link to a table that provides the algorithmic complexity of many different functions in CPython:
TimeComplexity Python Wiki Page
All those objects keep track of their own length. The time to extract the length is small (O(1) in big-O notation) and mostly consists of (a rough description in Python terms rather than C terms): looking up len and dispatching to the built-in len function, which looks up the object's __len__ method and calls it; all that method has to do is return the stored length.
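As an illustration of that dispatch, a tiny sketch (the class and its stored counter are made up for the example):
class Bag:
    def __init__(self, items):
        self._items = list(items)
        self._length = len(self._items)   # length is tracked, not recomputed

    def __len__(self):
        return self._length               # len(bag) dispatches to this

bag = Bag([1, 2, 3])
print(len(bag))                           # 3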
The below measurements provide evidence that len() is O(1) for oft-used data structures.
A note regarding timeit: when the -s flag is used, the string passed to -s (the setup) is executed only once and is not timed.
List:
$ python -m timeit -s "l = range(10);" "len(l)"
10000000 loops, best of 3: 0.0677 usec per loop
$ python -m timeit -s "l = range(1000000);" "len(l)"
10000000 loops, best of 3: 0.0688 usec per loop
Tuple:
$ python -m timeit -s "t = (1,)*10;" "len(t)"
10000000 loops, best of 3: 0.0712 usec per loop
$ python -m timeit -s "t = (1,)*1000000;" "len(t)"
10000000 loops, best of 3: 0.0699 usec per loop
String:
$ python -m timeit -s "s = '1'*10;" "len(s)"
10000000 loops, best of 3: 0.0713 usec per loop
$ python -m timeit -s "s = '1'*1000000;" "len(s)"
10000000 loops, best of 3: 0.0686 usec per loop
Dictionary (dictionary-comprehension available in 2.7+):
$ python -mtimeit -s"d = {i:j for i,j in enumerate(range(10))};" "len(d)"
10000000 loops, best of 3: 0.0711 usec per loop
$ python -mtimeit -s"d = {i:j for i,j in enumerate(range(1000000))};" "len(d)"
10000000 loops, best of 3: 0.0727 usec per loop
Array:
$ python -mtimeit -s"import array;a=array.array('i',range(10));" "len(a)"
10000000 loops, best of 3: 0.0682 usec per loop
$ python -mtimeit -s"import array;a=array.array('i',range(1000000));" "len(a)"
10000000 loops, best of 3: 0.0753 usec per loop
Set (set-comprehension available in 2.7+):
$ python -mtimeit -s"s = {i for i in range(10)};" "len(s)"
10000000 loops, best of 3: 0.0754 usec per loop
$ python -mtimeit -s"s = {i for i in range(1000000)};" "len(s)"
10000000 loops, best of 3: 0.0713 usec per loop
Deque:
$ python -mtimeit -s"from collections import deque;d=deque(range(10));" "len(d)"
100000000 loops, best of 3: 0.0163 usec per loop
$ python -mtimeit -s"from collections import deque;d=deque(range(1000000));" "len(d)"
100000000 loops, best of 3: 0.0163 usec per loop
len is O(1) because in RAM a list is stored as a table (a series of contiguous addresses). To know where the table stops, the computer needs two things: the length and the start point. That is why len() is O(1): the value is already stored, so it just needs to be looked up.