numpy.sum on range and range_iterator objects - python

Consider this performance test on Ipython under python 3:
Create a range, a range_iterator and a generator
In [1]: g1 = range(1000000)
In [2]: g2 = iter(range(1000000))
In [3]: g3 = (i for i in range(1000000))
Measure time for summing using python native sum
In [4]: %timeit sum(g1)
10 loops, best of 3: 47.4 ms per loop
In [5]: %timeit sum(g2)
The slowest run took 374430.34 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 123 ns per loop
In [6]: %timeit sum(g3)
The slowest run took 1302907.54 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 128 ns per loop
Not sure if I should worry about the warning. The range version's timing is very long (why?), but the range_iterator and the generator timings are similar.
Now let's use numpy.sum
In [7]: import numpy as np
In [8]: %timeit np.sum(g1)
10 loops, best of 3: 174 ms per loop
In [9]: %timeit np.sum(g2)
The slowest run took 8.47 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 6.51 µs per loop
In [10]: %timeit np.sum(g3)
The slowest run took 9.59 times longer than the fastest. This could mean that an intermediate result is being cached.
1000000 loops, best of 3: 446 ns per loop
g1 and g3 became ~3.5x slower, but the range_iterator g2 is now ~50 times slower than with the native sum. g3 wins.
In [11]: type(g1)
Out[11]: range
In [12]: type(g2)
Out[12]: range_iterator
In [13]: type(g3)
Out[13]: generator
Why such a penalty for range_iterator with numpy.sum? Should such objects be avoided? Does this generalize: do "home-made" generators always beat other objects with numpy?
EDIT 1: I realized that np.sum does not evaluate the range_iterator but returns another range_iterator object. So this comparison is not good. Why doesn't it get evaluated?
EDIT 2: I also realized that numpy.sum converts the range to an array with a fixed-width integer dtype, and accordingly gives the wrong result for my sum due to integer overflow.
In [12]: sum(range(1000000))
Out[12]: 499999500000
In [13]: np.sum(range(1000000))
Out[13]: 1783293664
In [14]: np.sum(range(1000000), dtype=float)
Out[14]: 499999500000.0
Intermediate conclusion - don't use numpy.sum on non-numpy objects...?
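To make the overflow mechanics concrete, here is a small sketch (assuming NumPy is installed) showing that Python's sum uses arbitrary-precision integers and cannot overflow, while np.sum can be made safe by forcing a wide accumulator dtype:

```python
import numpy as np

n = 1_000_000
expected = n * (n - 1) // 2  # closed-form sum of 0..n-1, i.e. 499999500000

# Python's sum works with arbitrary-precision ints, so it never overflows.
assert sum(range(n)) == expected

# Forcing a wide dtype makes np.sum correct regardless of the
# platform's default integer width (the overflow in the question
# comes from a 32-bit default int).
assert int(np.sum(range(n), dtype=np.int64)) == expected
assert np.sum(range(n), dtype=float) == float(expected)
```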

Did you look at the results of repeated sums on the iter?
95:~/mypy$ g2=iter(range(10))
96:~/mypy$ sum(g2)
Out[96]: 45
97:~/mypy$ sum(g2)
Out[97]: 0
98:~/mypy$ sum(g2)
Out[98]: 0
Why the 0s? Because g2 can be used only once. The same goes for the generator expression.
Or look at it with list
100:~/mypy$ g2=iter(range(10))
101:~/mypy$ list(g2)
Out[101]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
102:~/mypy$ list(g2)
Out[102]: []
In Python 3, range is a range object, not a list. It is not an iterator but a lazy sequence: it regenerates its values each time it is iterated, so it can be reused.
As for np.sum, np.sum(range(10)) has to make an array first.
When operating on a list, the Python sum is quite fast, faster than np.sum on the same:
116:~/mypy$ %%timeit x=list(range(10000))
...: sum(x)
1000 loops, best of 3: 202 µs per loop
117:~/mypy$ %%timeit x=list(range(10000))
...: np.sum(x)
1000 loops, best of 3: 1.62 ms per loop
But operating on an array, np.sum does much better
118:~/mypy$ %%timeit x=np.arange(10000)
...: sum(x)
100 loops, best of 3: 5.92 ms per loop
119:~/mypy$ %%timeit x=np.arange(10000)
...: np.sum(x)
<caching warning>
100000 loops, best of 3: 18.6 µs per loop
Another timing - various ways of making an array. fromiter can be faster than np.array; but the builtin arange is much better.
124:~/mypy$ timeit np.array(range(100000))
10 loops, best of 3: 39.2 ms per loop
125:~/mypy$ timeit np.fromiter(range(100000),int)
100 loops, best of 3: 12.9 ms per loop
126:~/mypy$ timeit np.arange(100000)
The slowest run took 6.93 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 106 µs per loop
Use range if you intend to work with lists; but use numpy's own arange if you need to work with arrays. There is an overhead when creating arrays, so they pay off most when working with large ones.
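As a side note on fromiter: it accepts an optional count argument that lets it preallocate the output instead of growing its buffer as it consumes the iterable. A small sketch (exact speedups will vary by machine):

```python
import numpy as np

n = 100_000

# Passing count=n lets fromiter allocate the full array up front
# rather than resizing as it consumes the iterable.
a = np.fromiter(range(n), dtype=np.int64, count=n)
b = np.arange(n)

assert a.shape == (n,)
assert np.array_equal(a, b)
```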
==================
On the question of how np.sum handles an iterator - it doesn't. Look at what np.array does to such an object:
In [12]: np.array(iter(range(10)))
Out[12]: array(<range_iterator object at 0xb5998f98>, dtype=object)
It produces a single element array with dtype object.
fromiter will evaluate this iterable:
In [13]: np.fromiter(iter(range(10)),int)
Out[13]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
np.array follows some complicated rules when converting its input to an array. It's designed to work primarily with a list of numbers or nested equal-length lists.
If you have questions about how a np function handles a non-array object, first check what np.array does to that object.


Can Dask provide a speedup for 1D arrays?

When working with Dask, I see a slight speed-up on 2D arrays, but even on large 1D arrays that advantage disappears.
E.g., in 2D:
In [48]: x = np.random.random((3000, 2000))
In [49]: X = da.from_array(x, chunks=(500,500))
In [50]: %timeit (np.cumsum(x - x**2, axis=0))
10 loops, best of 3: 131 ms per loop
In [51]: %timeit (da.cumsum(X - X**2, axis=0)).compute()
10 loops, best of 3: 89.3 ms per loop
But in 1D:
In [52]: x = np.random.random(10e5)
In [53]: X = da.from_array(x, chunks=(2000,))
In [54]: %timeit (np.cumsum(x - x**2, axis=0))
100 loops, best of 3: 8.28 ms per loop
In [55]: %timeit (da.cumsum(X - X**2, axis=0)).compute()
1 loop, best of 3: 304 ms per loop
Can Dask provide a speedup for 1D arrays and, if so, what would an ideal chunk size be?
Your FLOP/byte ratio is still too low. The CPU isn't the bottleneck; your memory hierarchy is.
Additionally, chunk sizes of (2000,) are just too small for Dask.array to be meaningful. Recall that dask introduces an overhead of a few hundred microseconds per task, so each task should take significantly longer than that. This explains the 300 ms duration you're seeing.
In [11]: 10e5 / 2000 # number of tasks
Out[11]: 500.0
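As a rough sizing sketch (the target_tasks value below is a hypothetical heuristic, not an official Dask recommendation), you can pick a 1-D chunk length by first deciding how many tasks' worth of scheduler overhead you are willing to pay:

```python
def chunk_size(n_elements, target_tasks=100):
    """Pick a 1-D chunk length so the array splits into roughly
    `target_tasks` chunks (a hypothetical sizing heuristic)."""
    return max(1, n_elements // target_tasks)

# With 1e6 elements and chunks of 2000 we get 500 tasks; at a few
# hundred microseconds of scheduler overhead per task, that alone
# is on the order of the 300 ms observed above.
n = 1_000_000
assert n // 2000 == 500

# Targeting ~100 tasks instead gives chunks of 10_000 elements.
assert chunk_size(n) == 10_000
```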
But even if you do go for larger chunksizes you don't get any speedup on this computation:
In [15]: x = np.random.random(1e8)
In [16]: X = da.from_array(x, chunks=1e6)
In [17]: %timeit np.cumsum(x - x**2, axis=0)
1 loop, best of 3: 632 ms per loop
In [18]: %timeit da.cumsum(X - X**2, axis=0).compute()
1 loop, best of 3: 759 ms per loop
However, if you do something that requires more computation per byte, then you enter the regime where parallel processing can actually help. For example, arcsinh is quite costly to compute:
In [20]: %timeit np.arcsinh(x).sum()
1 loop, best of 3: 3.32 s per loop
In [21]: %timeit da.arcsinh(X).sum().compute()
1 loop, best of 3: 724 ms per loop
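To see why arcsinh counts as compute-heavy, note that arcsinh(x) = log(x + sqrt(x^2 + 1)): a log, a sqrt, a multiply and two adds per element, versus a single subtract and multiply for x - x**2. A quick check of the identity:

```python
import math

# arcsinh(x) = log(x + sqrt(x*x + 1)); the extra transcendental work
# per element raises the FLOP/byte ratio, which is what lets the
# parallel Dask version win above.
x = 0.5
assert math.isclose(math.asinh(x), math.log(x + math.sqrt(x * x + 1)))
```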

python: check if a numpy array contains any element of another array

What is the best way to check if a numpy array contains any element of another array?
example:
array1 = [10,5,4,13,10,1,1,22,7,3,15,9]
array2 = [3,4,9,10,13,15,16,18,19,20,21,22,23]
I want to get a True if array1 contains any value of array2, otherwise a False.
Using Pandas, you can use isin:
a1 = np.array([10,5,4,13,10,1,1,22,7,3,15,9])
a2 = np.array([3,4,9,10,13,15,16,18,19,20,21,22,23])
>>> pd.Series(a1).isin(a2).any()
True
And using numpy's in1d function (per the comment from @Norman):
>>> np.any(np.in1d(a1, a2))
True
For small arrays such as those in this example, the solution using set is the clear winner. For larger, dissimilar arrays (i.e. no overlap), np.intersect1d is fastest, with the Pandas and np.in1d solutions close behind.
Small arrays (12-13 elements)
%timeit set(array1) & set(array2)
The slowest run took 4.22 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 1.69 µs per loop
%timeit any(i in a1 for i in a2)
The slowest run took 12.29 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 1.88 µs per loop
%timeit np.intersect1d(a1, a2)
The slowest run took 10.29 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 15.6 µs per loop
%timeit np.any(np.in1d(a1, a2))
10000 loops, best of 3: 27.1 µs per loop
%timeit pd.Series(a1).isin(a2).any()
10000 loops, best of 3: 135 µs per loop
Using an array with 100k elements (no overlap):
a3 = np.random.randint(0, 100000, 100000)
a4 = a3 + 100000
%timeit np.intersect1d(a3, a4)
100 loops, best of 3: 13.8 ms per loop
%timeit pd.Series(a3).isin(a4).any()
100 loops, best of 3: 18.3 ms per loop
%timeit np.any(np.in1d(a3, a4))
100 loops, best of 3: 18.4 ms per loop
%timeit set(a3) & set(a4)
10 loops, best of 3: 23.6 ms per loop
%timeit any(i in a3 for i in a4)
1 loops, best of 3: 34.5 s per loop
You can try this
>>> array1 = [10,5,4,13,10,1,1,22,7,3,15,9]
>>> array2 = [3,4,9,10,13,15,16,18,19,20,21,22,23]
>>> set(array1) & set(array2)
set([3, 4, 9, 10, 13, 15, 22])
If the result is non-empty, there are common elements in both arrays; if it is empty, there are none.
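If you only need the boolean answer, set.isdisjoint is worth knowing: it builds one set instead of two and returns as soon as a common element is found. A minimal sketch:

```python
array1 = [10, 5, 4, 13, 10, 1, 1, 22, 7, 3, 15, 9]
array2 = [3, 4, 9, 10, 13, 15, 16, 18, 19, 20, 21, 22, 23]

# isdisjoint short-circuits on the first shared element and avoids
# materializing the intersection.
assert not set(array1).isdisjoint(array2)   # shared elements exist
assert set(array1).isdisjoint([100, 200])   # none shared
```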
You can use the any built-in function with a generator expression:
>>> array1 = [10,5,4,13,10,1,1,22,7,3,15,9]
>>> array2 = [3,4,9,10,13,15,16,18,19,20,21,22,23]
>>> any(i in array2 for i in array1)
True

Number of loops in Numpy Element wise operation

I am taking element wise power of a numpy array as well as a python list. Why are there 10000 loops for the numpy operation?
In [1]: a = np.arange(1000)
In [2]: %timeit a**5
10000 loops, best of 3: 77.8 µs per loop
In [3]: b = range(1000)
In [4]: %timeit [i**5 for i in b]
1000 loops, best of 3: 1.64 ms per loop
From the documentation (https://docs.python.org/2/library/timeit.html#command-line-interface):
If -n is not given, a suitable number of loops is calculated by trying
successive powers of 10 until the total time is at least 0.2 seconds.
In other words, timeit runs your statement 10000 times because that's roughly how many repetitions fit in 0.2 seconds. It has nothing to do with the number you passed to arange.
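If you want to take the auto-calibration out of the picture, you can pin the repetition count yourself, either with %timeit -n or with the timeit module directly. A minimal sketch:

```python
import timeit

# timeit.timeit with an explicit number= skips the calibration that
# produces the "10000 loops" figure in %timeit; it returns the total
# time for that many repetitions.
t = timeit.timeit("sum(x)", setup="x = list(range(1000))", number=100)
assert t > 0.0
```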

L.append(x) vs L[len(L):len(L)] = [x]

In Python, is there a difference (say, in performance) between writing
L.append(x)
and
L[len(L):len(L)] = [x]
where L is a list? If there is, what is it caused by?
Thanks!
Apart from the append method, you can append elements to a list using insert; I'm guessing that's what you are getting at:
In [115]: l=[1,]
In [116]: l.insert(len(l), 11)
In [117]: l
Out[117]: [1, 11]
l.append(x) vs. l.insert(len(l), x):
In [166]: %timeit -n1000 l=[1]; l.append(11)
1000 loops, best of 3: 936 ns per loop
In [167]: %timeit -n1000 l=[1]; l.insert(len(l), 11)
1000 loops, best of 3: 1.44 us per loop
Clearly the append method is faster.
Next, L.append(x) vs L[len(L):len(L)] = [x] (or the shorter L[len(L):] = [x]):
In [145]: %timeit -n1000 l=[1]; l.append(123);
1000 loops, best of 3: 878 ns per loop
In [146]: %timeit -n1000 l=[1]; l[len(l):]=[123]
1000 loops, best of 3: 1.24 us per loop
In [147]: %timeit -n1000 l=[1]; l[len(l):len(l)]=[123]
1000 loops, best of 3: 1.46 us per loop
There is no difference on my system...
In [22]: f = (4,)
In [21]: %timeit l = [1,2,3]; l.append(4)
1000000 loops, best of 3: 265 ns per loop
In [23]: %timeit l = [1,2,3]; l.append(f)
1000000 loops, best of 3: 266 ns per loop
In [24]: %timeit l = [1,2,3]; l.extend(f)
1000000 loops, best of 3: 270 ns per loop
In [25]: %timeit l = [1,2,3]; l[4:] = f
1000000 loops, best of 3: 260 ns per loop
This means that in an apples-to-apples comparison, they are the same (the differences above are probably within measurement noise).
However, anything extra (such as having to calculate len in that version) may skew the results for some particular implementation.
As always, performance testing has pitfalls. But in your example:
x need not be an iterable, yet the slice form forces you to wrap it in one ([x]). That extra step incurs a performance penalty.
Performing len(L) is not free, it takes a non-zero amount of time. This also incurs a performance penalty.
Some quick testing bears this out:
def f():
    a = []
    for i in range(10000):
        a.append(0)

def g():
    a = []
    for i in range(10000):
        a[len(a):len(a)] = [0]
%timeit f()
1000 loops, best of 3: 683 us per loop
%timeit g()
100 loops, best of 3: 2.4 ms per loop
Now one non-obvious "optimization" you can do to remove the len(L) effect is to use a constant slice index higher than the length your list will ever reach. Out-of-range slices never throw an IndexError, even if you're waaaaay off the end of the list. So let's do that.
def h():
    a = []
    for i in range(10000):
        a[11111:11111] = [0]
%timeit h()
1000 loops, best of 3: 1.45 ms per loop
So as suspected, both wrapping your x in an iterable and calling len have small but tangible performance penalties.
And, of course, doing li[len(li):len(li)] is UGLY. That's the biggest performance penalty: the time it takes my brain to figure out what the heck it just looked at. :-)
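The "constant slice" trick above works because out-of-range slices clamp to the list length rather than raising, unlike plain indexing. A minimal sketch:

```python
a = [1, 2]

# Slicing past the end clamps to len(a), so this assignment is
# equivalent to an append -- no IndexError.
a[999999:999999] = [3]
assert a == [1, 2, 3]

# Plain indexing past the end does raise.
try:
    a[999999] = 4
except IndexError:
    pass
else:
    raise AssertionError("expected IndexError")
```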

numpy np.array versus np.matrix (performance)

Often when working with numpy I find the distinction annoying: when I pull out a vector or a row from a matrix and then perform operations with np.arrays, there are usually problems.
To reduce headaches, I've taken to sometimes just using np.matrix (converting all np.arrays to np.matrix), just for simplicity. However, I suspect there are some performance implications. Could anyone comment on what those might be, and the reasons why?
It seems like if they are both just arrays under the hood, element access is simply an offset calculation to get the value, so I'm not sure, without reading through the entire source, what the difference might be.
More specifically, what performance implications does this have:
v = np.matrix([1, 2, 3, 4])
# versus the below
w = np.array([1, 2, 3, 4])
thanks
I added some more tests, and it appears that an array is considerably faster than a matrix when the arrays/matrices are small, but the difference shrinks for larger data structures:
Small (2x4):
In [11]: a = [[1,2,3,4],[5,6,7,8]]
In [12]: aa = np.array(a)
In [13]: ma = np.matrix(a)
In [14]: %timeit aa.sum()
1000000 loops, best of 3: 1.77 us per loop
In [15]: %timeit ma.sum()
100000 loops, best of 3: 15.1 us per loop
In [16]: %timeit np.dot(aa, aa.T)
1000000 loops, best of 3: 1.72 us per loop
In [17]: %timeit ma * ma.T
100000 loops, best of 3: 7.46 us per loop
Larger (100x100):
In [19]: aa = np.arange(10000).reshape(100,100)
In [20]: ma = np.matrix(aa)
In [21]: %timeit aa.sum()
100000 loops, best of 3: 9.18 us per loop
In [22]: %timeit ma.sum()
10000 loops, best of 3: 22.9 us per loop
In [23]: %timeit np.dot(aa, aa.T)
1000 loops, best of 3: 1.26 ms per loop
In [24]: %timeit ma * ma.T
1000 loops, best of 3: 1.24 ms per loop
Notice that matrices are actually slightly faster for multiplication.
I believe that what I am getting here is consistent with what @Jaime is explaining in the comment.
There is a general discussion on SciPy.org and in this question.
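Beyond raw speed, the two types also behave differently, which is often the real source of the headaches mentioned in the question: np.matrix is always 2-D, and it overloads * as matrix multiplication rather than elementwise multiplication. A small sketch:

```python
import numpy as np

w = np.array([1, 2, 3, 4])
v = np.matrix([1, 2, 3, 4])

# A matrix is always 2-D; an array keeps the shape you gave it.
assert w.shape == (4,)
assert v.shape == (1, 4)

# `*` is elementwise for arrays but matrix multiplication for matrices.
assert np.array_equal(w * w, [1, 4, 9, 16])
assert (v * v.T)[0, 0] == 30  # 1 + 4 + 9 + 16
```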
To compare performance, I did the following in IPython. It turns out that arrays are significantly faster to create.
In [1]: import numpy as np
In [2]: %%timeit
...: v = np.matrix([1, 2, 3, 4])
100000 loops, best of 3: 16.9 us per loop
In [3]: %%timeit
...: w = np.array([1, 2, 3, 4])
100000 loops, best of 3: 7.54 us per loop
Therefore numpy arrays seem to have faster performance than numpy matrices, at least for construction.
Versions used:
Numpy: 1.7.1
IPython: 0.13.2
Python: 2.7
