Is list comprehension really much slower than str.replace? - python

I'm testing different versions of string sanitizing and ran into the effect below. What I find hard to tell is whether this is really the result of caching, as IPython's %timeit warns, or whether the difference is real. Please advise:
str.replace:
def sanit2(s):
    for c in ["'", '%', '"']:
        s = s.replace(c, '')
    return s
In [44]: %timeit sanit2(r""" ' ' % a % ' """)
The slowest run took 12.43 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 985 ns per loop
List comprehension:
def sanit3(s):
    removed = [x for x in s if x not in ["'", '%', '"']]
    return ''.join(removed)
In [42]: %timeit sanit3(r""" ' ' % a % ' """)
The slowest run took 8.95 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 2.12 µs per loop
This seems to hold for relatively long strings too:
In [46]: reallylong = r""" ' ' % a % ' """ * 1000
In [47]: len(reallylong)
Out[47]: 22000
In [48]: %timeit sanit2(reallylong)
The slowest run took 4.94 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 96.9 µs per loop
In [49]: %timeit sanit3(reallylong)
1000 loops, best of 3: 1.9 ms per loop
UPDATE: I presumed that str.replace also has more or less O(n) complexity, so I expected both sanit2 and sanit3 to have about O(n^2) complexity.
I tested cost of str.replace depending on string length:
In [59]: orig_str = r""" ' ' % a % ' """
In [60]: for i in range(1,11):
....:     longer = orig_str * i * 1000
....:     %timeit longer.replace('%', '')
....:
10000 loops, best of 3: 44.2 µs per loop
10000 loops, best of 3: 87.8 µs per loop
10000 loops, best of 3: 131 µs per loop
10000 loops, best of 3: 177 µs per loop
1000 loops, best of 3: 219 µs per loop
1000 loops, best of 3: 259 µs per loop
1000 loops, best of 3: 311 µs per loop
1000 loops, best of 3: 349 µs per loop
1000 loops, best of 3: 398 µs per loop
1000 loops, best of 3: 435 µs per loop
In [61]: t="""10000 loops, best of 3: 44.2 µs per loop
....: 10000 loops, best of 3: 87.8 µs per loop
....: 10000 loops, best of 3: 131 µs per loop
....: 10000 loops, best of 3: 177 µs per loop
....: 1000 loops, best of 3: 219 µs per loop
....: 1000 loops, best of 3: 259 µs per loop
....: 1000 loops, best of 3: 311 µs per loop
....: 1000 loops, best of 3: 349 µs per loop
....: 1000 loops, best of 3: 398 µs per loop
....: 1000 loops, best of 3: 435 µs per loop"""
Looks linear, but I calculated it to be sure:
In [63]: averages=[]
In [66]: for idx, line in enumerate(t.split('\n')):
....:     repl_time = line.rsplit(':',1)[1].split(' ')[1]
....:     averages.append(float(repl_time)/(idx+1))
....:
In [67]: averages
Out[67]:
[44.2,
43.9,
43.666666666666664,
44.25,
43.8,
43.166666666666664,
44.42857142857143,
43.625,
44.22222222222222,
43.5]
Yes, str.replace is almost perfectly O(n). So, on top of iterating over the list of characters to be removed, sanit2 should have O(n^2) complexity just like sanit3 (x for x in s => iterate over the characters of the string, O(n); x in ["'", '%', '"'] should be O(n) as well given the cost of list.__contains__; altogether O(n^2)).
So, in reply to chepner: yes, sanit2 makes a fixed number of function calls (and few, just 3 in the example), but due to the internal cost of str.replace it seems like sanit2 should have a similar order of complexity to sanit3.
Is the difference all down to the fact that str.replace is implemented in C, or does the per-character function call (list.__contains__) also play an important role?

sanit2 makes a fixed number of calls, independent of the length of s, to a string method implemented in C.
sanit3 makes a variable number of calls (one per element in s) to list.__contains__, which itself uses an O(n), not O(1), algorithm. It also has to construct a list object, then call ''.join on that list.
It's not surprising that sanit2 is faster.
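For completeness, a third variant worth timing (my addition, not part of the original post) is str.translate, which, like str.replace, does the whole scan in C but removes all of the unwanted characters in a single pass; whether it beats sanit2 would of course have to be checked with %timeit as above:

# Sketch: remove several characters in one C-level pass with str.translate.
# str.maketrans maps each unwanted character to None, and translate() then
# drops those characters while copying the rest.
REMOVE_TABLE = str.maketrans({c: None for c in "'%\""})

def sanit4(s):
    return s.translate(REMOVE_TABLE)

sanit4(r""" ' ' % a % ' """)   # same result as sanit2/sanit3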


Difference between df.loc['col name'], df.loc[index]['col name'] and df.loc[index, 'col name'] in pandas?

I have a DataFrame df with a column named 'Store'. If I want to retrieve the column, the following lines work equally well: df['Store'], df[:]['Store'], or df.loc[:,'Store'].
What is the difference between them? And should one be used over the others?
Thank you.
df.loc[index, 'col name'] is more idiomatic and preferred, especially if you want to filter rows.
Demo: for a DataFrame of shape 1,000,000 x 3:
In [26]: df = pd.DataFrame(np.random.rand(10**6,3), columns=list('abc'))
In [27]: %timeit df[df.a < 0.5]['a']
10 loops, best of 3: 45.8 ms per loop
In [28]: %timeit df.loc[df.a < 0.5]['a']
10 loops, best of 3: 45.8 ms per loop
In [29]: %timeit df.loc[df.a < 0.5, 'a']
10 loops, best of 3: 37 ms per loop
For constructs where you need only one column and don't filter rows, like df[:]['Store'], it's better to simply use df['Store']:
In [30]: %timeit df[:]['a']
1000 loops, best of 3: 436 µs per loop
In [31]: %timeit df.loc[:]['a']
10000 loops, best of 3: 25.9 µs per loop
In [36]: %timeit df['a'].loc[:]
10000 loops, best of 3: 26.5 µs per loop
In [32]: %timeit df.loc[:, 'a']
10000 loops, best of 3: 126 µs per loop
In [33]: %timeit df['a']
The slowest run took 5.08 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 8.17 µs per loop
Unconditional access of multiple columns:
In [34]: %timeit df[['a','b']]
10 loops, best of 3: 22 ms per loop
In [35]: %timeit df.loc[:, ['a','b']]
10 loops, best of 3: 22.6 ms per loop
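As a side note that goes beyond the timings above (my addition, not part of the original answer): the single df.loc[index, 'col name'] form is also the one to use when assigning, because chained indexing such as df[mask]['a'] = value may operate on a copy, in which case pandas either warns with SettingWithCopyWarning or silently leaves df unchanged, depending on the version. A minimal sketch:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10, 3), columns=list('abc'))

# Single .loc call: selects rows and the column in one operation and
# assigns directly into the original frame.
df.loc[df.a < 0.5, 'a'] = 0.0

# Chained indexing: df[df.a < 0.5] may return a copy, so the assignment
# below might never reach df (and may trigger SettingWithCopyWarning).
# df[df.a < 0.5]['a'] = 0.0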

What is the preferred way to compose a set from multiple lists in Python

I have a few different lists that I want to turn into a set and use to find the difference from another set. Let's call them A, B, and C.
Is the more optimal way to do this set(A + B + C), or set(A).union(set(B)).union(set(C))?
Would certain properties of A, B, and C, like the number of duplicates or their lengths, affect this decision?
Would having an arbitrary number of lists?
For small lists, set(A + B + C) works fine. For larger lists, the following is more efficient because it does not create a temporary list:
myset = set(A)
myset.update(B)
myset.update(C)
A different approach uses itertools.chain, which is also efficient because it does not create a temporary list:
import itertools
myset = set(itertools.chain(A, B, C))
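If the number of input lists is not fixed (per the follow-up question above), both approaches generalize naturally; a small sketch, with union_of being a hypothetical helper name:

import itertools

def union_of(*lists):
    # Build one set from any number of lists without concatenating them;
    # set.update() accepts any iterable.
    result = set()
    for lst in lists:
        result.update(lst)
    return result

def union_of_chained(*lists):
    # Equivalent one-pass version via itertools.chain.
    return set(itertools.chain.from_iterable(lists))

union_of([1, 2], [2, 3], [3, 4])   # {1, 2, 3, 4}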
Here are some timing experiments:
import numpy as np
import itertools
for r in [10,100,1000,10000]:
    A = list(np.random.randint(r, size=1000000))
    B = list(np.random.randint(r, size=1000000))
    %timeit set(A).update(B)
    %timeit set(A+B)
    %timeit set(itertools.chain(A, B))
    print('---')
Here are the results for size = 1000:
10000 loops, best of 3: 87.2 µs per loop
10000 loops, best of 3: 87.3 µs per loop
10000 loops, best of 3: 90.7 µs per loop
---
10000 loops, best of 3: 88.2 µs per loop
10000 loops, best of 3: 86.8 µs per loop
10000 loops, best of 3: 89.4 µs per loop
---
10000 loops, best of 3: 80.9 µs per loop
10000 loops, best of 3: 84.5 µs per loop
10000 loops, best of 3: 87 µs per loop
---
10000 loops, best of 3: 97.4 µs per loop
10000 loops, best of 3: 102 µs per loop
10000 loops, best of 3: 107 µs per loop
Here are the results for size = 1000000:
10 loops, best of 3: 89 ms per loop
10 loops, best of 3: 106 ms per loop
10 loops, best of 3: 98.4 ms per loop
---
10 loops, best of 3: 89.1 ms per loop
10 loops, best of 3: 110 ms per loop
10 loops, best of 3: 94.2 ms per loop
---
10 loops, best of 3: 94.9 ms per loop
10 loops, best of 3: 109 ms per loop
10 loops, best of 3: 105 ms per loop
---
10 loops, best of 3: 115 ms per loop
10 loops, best of 3: 143 ms per loop
10 loops, best of 3: 138 ms per loop
So, update() seems to be slightly faster than the other two methods. However, I don't think the time difference is significant.
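One more option, since the stated goal is to subtract the combined elements from another set (my addition, not covered in the answer above): set.difference accepts several iterables at once, so the intermediate union never has to be built explicitly:

other = {1, 2, 3, 4, 5}
A, B, C = [1, 2], [2, 3], [9]

# difference() takes any number of iterables, so the lists can be passed
# directly instead of constructing set(A + B + C) first.
other.difference(A, B, C)   # {4, 5}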

L.append(x) vs L[len(L):len(L)] = [x]

In Python, is there a difference (say, in performance) between writing
L.append(x)
and
L[len(L):len(L)] = [x]
where L is a list? If there is, what is it caused by?
Thanks!
Apart from the append method, you can also add elements to a list using insert; I'm guessing that's what you are getting at:
In [115]: l=[1,]
In [116]: l.insert(len(l), 11)
In [117]: l
Out[117]: [1, 11]
l.append(x) vs. l.insert(len(l), x):
In [166]: %timeit -n1000 l=[1]; l.append(11)
1000 loops, best of 3: 936 ns per loop
In [167]: %timeit -n1000 l=[1]; l.insert(len(l), 11)
1000 loops, best of 3: 1.44 µs per loop
It's obvious that the append method is faster.
And then L.append(x) vs. L[len(L):len(L)] = [x] (or L[len(L):] = [x]):
In [145]: %timeit -n1000 l=[1]; l.append(123);
1000 loops, best of 3: 878 ns per loop
In [146]: %timeit -n1000 l=[1]; l[len(l):]=[123]
1000 loops, best of 3: 1.24 µs per loop
In [147]: %timeit -n1000 l=[1]; l[len(l):len(l)]=[123]
1000 loops, best of 3: 1.46 µs per loop
There is no difference on my system...
In [22]: f = (4,)
In [21]: %timeit l = [1,2,3]; l.append(4)
1000000 loops, best of 3: 265 ns per loop
In [23]: %timeit l = [1,2,3]; l.append(f)
1000000 loops, best of 3: 266 ns per loop
In [24]: %timeit l = [1,2,3]; l.extend(f)
1000000 loops, best of 3: 270 ns per loop
In [25]: %timeit l = [1,2,3]; l[4:] = f
1000000 loops, best of 3: 260 ns per loop
This means that in an apples-to-apples comparison they are the same (the differences above are probably smaller than the random error).
However, anything extra (such as having to calculate len in that version) may skew the results for some particular implementation.
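One way to see where the extra work in the slice spelling comes from (my addition, not part of the original answer) is to compare the bytecode of the two forms with the dis module; the exact instruction names vary by CPython version, but the slice version has to call len twice, build a one-element list, build a slice object, and do a subscript store, while append is a single method call:

import dis

def with_append(l, x):
    l.append(x)

def with_slice(l, x):
    l[len(l):len(l)] = [x]

dis.dis(with_append)   # one attribute lookup plus one call
dis.dis(with_slice)    # two len() calls, BUILD_LIST, BUILD_SLICE, STORE_SUBSCR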
As always, performance testing has pitfalls. But in your example:
x need not be an iterable; you are wrapping it in one ([x]). This is obviously an extra step that incurs a performance penalty.
Performing len(L) is not free either; it takes a non-zero amount of time. This also incurs a performance penalty.
Some quick testing bears this out:
def f():
    a = []
    for i in range(10000):
        a.append(0)

def g():
    a = []
    for i in range(10000):
        a[len(a):len(a)] = [0]
%timeit f()
1000 loops, best of 3: 683 µs per loop
%timeit g()
100 loops, best of 3: 2.4 ms per loop
Now, one non-obvious "optimization" you can do to remove the len(L) effect is to use a constant slice that is higher than the length your list will ever reach. Slice assignment never throws an IndexError, even if you're waaaaay off the end of the list. So let's do that.
def h():
    a = []
    for i in range(10000):
        a[11111:11111] = [0]
%timeit h()
1000 loops, best of 3: 1.45 ms per loop
So as suspected, both wrapping your x in an iterable and calling len have small but tangible performance penalties.
And, of course, doing li[len(li):len(li)] is UGLY. That's the biggest performance penalty: the time it takes my brain to figure out what the heck it just looked at. :-)

python pandas: why map is faster?

In the pandas manual, there is this example about indexing:
In [653]: criterion = df2['a'].map(lambda x: x.startswith('t'))
In [654]: df2[criterion]
then Wes wrote:
# equivalent but slower
In [655]: df2[[x.startswith('t') for x in df2['a']]]
Can anyone here explain a bit why the map approach is faster? Is this a Python feature or a pandas feature?
Arguments about why a certain way of doing things in Python "should be" faster can't be taken too seriously, because you're often measuring implementation details which may behave differently in certain situations. As a result, when people guess what should be faster, they're often (usually?) wrong. For example, I find that map can actually be slower. Using this setup code:
import numpy as np, pandas as pd
import random, string
def make_test(num, width):
    s = [''.join(random.sample(string.ascii_lowercase, width)) for i in range(num)]
    df = pd.DataFrame({"a": s})
    return df
Let's compare the time they take to make the indexing object -- whether a Series or a list -- and the resulting time it takes to use that object to index into the DataFrame. It could be, for example, that making a list is fast but before using it as an index it needs to be internally converted to a Series or an ndarray or something and so there's extra time added there.
First, for a small frame:
>>> df = make_test(10, 10)
>>> %timeit df['a'].map(lambda x: x.startswith('t'))
10000 loops, best of 3: 85.8 µs per loop
>>> %timeit [x.startswith('t') for x in df['a']]
100000 loops, best of 3: 15.6 µs per loop
>>> %timeit df['a'].str.startswith("t")
10000 loops, best of 3: 118 µs per loop
>>> %timeit df[df['a'].map(lambda x: x.startswith('t'))]
1000 loops, best of 3: 304 µs per loop
>>> %timeit df[[x.startswith('t') for x in df['a']]]
10000 loops, best of 3: 194 µs per loop
>>> %timeit df[df['a'].str.startswith("t")]
1000 loops, best of 3: 348 µs per loop
and in this case the listcomp is fastest. That doesn't actually surprise me too much, to be honest, because going via a lambda is likely to be slower than using str.startswith directly, but it's really hard to guess. 10 rows is small enough that we're probably still measuring things like setup costs for Series; what happens with a larger frame?
>>> df = make_test(10**5, 10)
>>> %timeit df['a'].map(lambda x: x.startswith('t'))
10 loops, best of 3: 46.6 ms per loop
>>> %timeit [x.startswith('t') for x in df['a']]
10 loops, best of 3: 27.8 ms per loop
>>> %timeit df['a'].str.startswith("t")
10 loops, best of 3: 48.5 ms per loop
>>> %timeit df[df['a'].map(lambda x: x.startswith('t'))]
10 loops, best of 3: 47.1 ms per loop
>>> %timeit df[[x.startswith('t') for x in df['a']]]
10 loops, best of 3: 52.8 ms per loop
>>> %timeit df[df['a'].str.startswith("t")]
10 loops, best of 3: 49.6 ms per loop
And now it seems like the map is winning when used as an index, although the difference is marginal. But not so fast: what if we manually turn the listcomp into an array or a Series?
>>> %timeit df[np.array([x.startswith('t') for x in df['a']])]
10 loops, best of 3: 40.7 ms per loop
>>> %timeit df[pd.Series([x.startswith('t') for x in df['a']])]
10 loops, best of 3: 37.5 ms per loop
and now the listcomp wins again!
Conclusion: who knows? But never believe anything without timeit results, and even then you have to ask whether you're testing what you think you are.
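For readers who want to repeat this outside IPython (my addition), the same comparisons can be scripted with the standard timeit module; a rough sketch that reuses the make_test helper from above:

import timeit

setup = """
import random, string
import pandas as pd

def make_test(num, width):
    s = [''.join(random.sample(string.ascii_lowercase, width)) for i in range(num)]
    return pd.DataFrame({"a": s})

df = make_test(10**5, 10)
"""

candidates = {
    "map + lambda":  "df[df['a'].map(lambda x: x.startswith('t'))]",
    "list comp":     "df[[x.startswith('t') for x in df['a']]]",
    ".str accessor": "df[df['a'].str.startswith('t')]",
}

for name, stmt in candidates.items():
    # repeat() returns total seconds per run; take the best and divide by `number`
    best = min(timeit.repeat(stmt, setup=setup, repeat=3, number=10)) / 10
    print(f"{name:15s} {best * 1e3:6.1f} ms per loop")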

Why is numpy.array() sometimes very slow?

I'm using the numpy.array() function to create numpy.float64 ndarrays from lists.
I noticed that this is very slow when the list contains None or when a list of lists is provided.
Below are some examples with times. There are obvious workarounds but why is this so slow?
Examples for list of None:
### Very slow to call array() with list of None
In [3]: %timeit numpy.array([None]*100000, dtype=numpy.float64)
1 loops, best of 3: 240 ms per loop
### Problem doesn't exist with array of zeroes
In [4]: %timeit numpy.array([0.0]*100000, dtype=numpy.float64)
100 loops, best of 3: 9.94 ms per loop
### Also fast if we use dtype=object and convert to float64
In [5]: %timeit numpy.array([None]*100000, dtype=numpy.object).astype(numpy.float64)
100 loops, best of 3: 4.92 ms per loop
### Also fast if we use fromiter() instead of array()
In [6]: %timeit numpy.fromiter([None]*100000, dtype=numpy.float64)
100 loops, best of 3: 3.29 ms per loop
Examples for list of lists:
### Very slow to create column matrix
In [7]: %timeit numpy.array([[0.0]]*100000, dtype=numpy.float64)
1 loops, best of 3: 353 ms per loop
### No problem to create column vector and reshape
In [8]: %timeit numpy.array([0.0]*100000, dtype=numpy.float64).reshape((-1,1))
100 loops, best of 3: 10 ms per loop
### Can use itertools to flatten input lists
In [9]: %timeit numpy.fromiter(itertools.chain.from_iterable([[0.0]]*100000),dtype=numpy.float64).reshape((-1,1))
100 loops, best of 3: 9.65 ms per loop
I've reported this as a numpy issue. The report and patch files are here:
https://github.com/numpy/numpy/issues/3392
After patching:
# was 240 ms, best alternate version was 3.29
In [5]: %timeit numpy.array([None]*100000)
100 loops, best of 3: 7.49 ms per loop
# was 353 ms, best alternate version was 9.65
In [6]: %timeit numpy.array([[0.0]]*100000)
10 loops, best of 3: 23.7 ms per loop
My guess would be that the code for converting lists just calls float on everything. If the argument defines __float__, we call that; otherwise we treat it like a string (which throws an exception on None; we catch that and put in np.nan). The exception handling should be relatively slow.
Timing seems to verify this hypothesis:
import numpy as np
%timeit [None] * 100000
> 1000 loops, best of 3: 1.04 ms per loop
%timeit np.array([0.0] * 100000)
> 10 loops, best of 3: 21.3 ms per loop
%timeit [i.__float__() for i in [0.0] * 100000]
> 10 loops, best of 3: 32 ms per loop
def flt(d):
    try:
        return float(d)
    except:
        return np.nan
%timeit np.array([None] * 100000, dtype=np.float64)
> 1 loops, best of 3: 477 ms per loop
%timeit [flt(d) for d in [None] * 100000]
> 1 loops, best of 3: 328 ms per loop
Adding another case just to be obvious about where I'm going with this: if there were an explicit check for None, it would not be as slow as the version above:
def flt2(d):
    if d is None:
        return np.nan
    try:
        return float(d)
    except:
        return np.nan
%timeit [flt2(d) for d in [None] * 100000]
> 10 loops, best of 3: 45 ms per loop
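As a practical takeaway (my own wrapping of the workaround already shown in the question, with to_float64 being a hypothetical helper name): routing a list that may contain None through an object array and then casting avoids the slow per-element exception handling:

import numpy as np

def to_float64(values):
    # Object-dtype detour from the question: None becomes nan in the cast,
    # and the conversion avoids the slow path in np.array(..., dtype=np.float64).
    return np.array(values, dtype=object).astype(np.float64)

to_float64([1.5, None, 2.0])   # 1.5, nan, 2.0 as float64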
