I've read other posts saying that Python speed/performance should be relatively unaffected by whether the code being run is in main, in a function, or defined as a class attribute, but those don't explain the very large performance differences I see when using class vs. local variables, especially when using the numpy library. To be clearer, I made a script example below.
import numpy as np
import copy

class Test:
    def __init__(self, n, m):
        self.X = np.random.rand(n,n,m)
        self.Y = np.random.rand(n,n,m)
        self.Z = np.random.rand(n,n,m)

    def matmul1(self):
        self.A = np.zeros(self.X.shape)
        for i in range(self.X.shape[2]):
            self.A[:,:,i] = self.X[:,:,i] @ self.Y[:,:,i] @ self.Z[:,:,i]
        return

    def matmul2(self):
        self.A = np.zeros(self.X.shape)
        for i in range(self.X.shape[2]):
            x = copy.deepcopy(self.X[:,:,i])
            y = copy.deepcopy(self.Y[:,:,i])
            z = copy.deepcopy(self.Z[:,:,i])
            self.A[:,:,i] = x @ y @ z
        return
t1 = Test(300,100)
%%timeit
t1.matmul1()
#OUTPUT: 20.9 s ± 1.37 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
t1.matmul2()
#OUTPUT: 516 ms ± 6.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In this script I define a class with attributes X, Y and Z as 3-way arrays. I also have two methods (matmul1 and matmul2) which loop through the 3rd index of the arrays and matrix-multiply each of the 3 slices to populate an array, A. matmul1 just loops through the class variables and matrix-multiplies, whereas matmul2 creates local copies for each matrix multiplication within the loop. matmul1 is ~40X slower than matmul2. Can someone explain why this is happening? Maybe I am thinking about how to use classes incorrectly, but I also wouldn't assume that variables should be deep-copied all the time. Basically, what is it about deep copying that affects my performance so significantly, and is this unavoidable when using class attributes/variables? It seems like it's more than just the overhead of accessing class attributes as discussed here. Any input is appreciated, thanks!
Edit: My real question is why copies of subarrays of class instance variables, rather than views of them, result in much better performance for these kinds of methods.
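For reference, a quick check (not part of the timed script above) showing that plain slicing of a class attribute returns a view, while copying produces an independent buffer:
import numpy as np
X = np.random.rand(4, 4, 3)
slc = X[:, :, 0]           # basic slicing returns a view into X
cpy = X[:, :, 0].copy()    # .copy() (or copy.deepcopy) makes a new, independent array
print(np.shares_memory(X, slc))   # True  -> view
print(np.shares_memory(X, cpy))   # False -> independent buffer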
If you put the m dimension first, you could do this product without iteration:
In [146]: X1,Y1,Z1 = X.transpose(2,0,1), Y.transpose(2,0,1), Z.transpose(2,0,1)
In [147]: A1 = X1@Y1@Z1
In [148]: np.allclose(A, A1.transpose(1,2,0))
Out[148]: True
However, working with very large arrays is sometimes slower, due to memory management complexities.
It might be worth testing
    A1[i] = X1[i] @ Y1[i] @ Z1[i]
where the iteration is on the outermost dimension.
My computer is too small to do good timings on these array sizes.
edit
I added these alternatives to your class, and tested with a smaller case:
In [67]: class Test:
    ...:     def __init__(self, n, m):
    ...:         self.X = np.random.rand(n,n,m)
    ...:         self.Y = np.random.rand(n,n,m)
    ...:         self.Z = np.random.rand(n,n,m)
    ...:     def matmul1(self):
    ...:         A = np.zeros(self.X.shape)
    ...:         for i in range(self.X.shape[2]):
    ...:             A[:,:,i] = self.X[:,:,i] @ self.Y[:,:,i] @ self.Z[:,:,i]
    ...:         return A
    ...:     def matmul2(self):
    ...:         A = np.zeros(self.X.shape)
    ...:         for i in range(self.X.shape[2]):
    ...:             x = self.X[:,:,i].copy()
    ...:             y = self.Y[:,:,i].copy()
    ...:             z = self.Z[:,:,i].copy()
    ...:             A[:,:,i] = x @ y @ z
    ...:         return A
    ...:     def matmul3(self):
    ...:         x = self.X.transpose(2,0,1).copy()
    ...:         y = self.Y.transpose(2,0,1).copy()
    ...:         z = self.Z.transpose(2,0,1).copy()
    ...:         return (x@y@z).transpose(1,2,0)
    ...:     def matmul4(self):
    ...:         x = self.X.transpose(2,0,1).copy()
    ...:         y = self.Y.transpose(2,0,1).copy()
    ...:         z = self.Z.transpose(2,0,1).copy()
    ...:         A = np.zeros(x.shape)
    ...:         for i in range(x.shape[0]):
    ...:             A[i] = x[i]@y[i]@z[i]
    ...:         return A.transpose(1,2,0)
In [68]: t1=Test(100,50)
In [69]: np.max(np.abs(t1.matmul2()-t1.matmul4()))
Out[69]: 0.0
In [70]: np.allclose(t1.matmul3(),t1.matmul2())
Out[70]: True
The view iteration is 10x slower:
In [71]: timeit t1.matmul1()
252 ms ± 424 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [72]: timeit t1.matmul2()
26 ms ± 475 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The added alternatives are about the same:
In [73]: timeit t1.matmul3()
30.8 ms ± 4.33 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [74]: timeit t1.matmul4()
27.3 ms ± 172 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Without the copy(), the transpose produces a view, and times are similar to matmul1 (250ms).
My guess is that with "fresh" copies, matmul is able to pass them to the best BLAS function by reference. With views, as in matmul1, it has to take some sort of slower route.
But if I use dot instead of matmul, I get the faster time, even with the matmul1-style iteration.
In [77]: %%timeit
    ...: A = np.zeros(X.shape)
    ...: for i in range(X.shape[2]):
    ...:     A[:,:,i] = X[:,:,i].dot(Y[:,:,i]).dot(Z[:,:,i])
25.2 ms ± 250 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
It sure looks like matmul with views is taking some suboptimal calculation choice.
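One quick way to probe that guess (illustrative only, not part of the timings above) is to check the memory layout: the [:,:,i] slice of the (n,n,m) array is strided, while a copy, or np.ascontiguousarray, hands BLAS a contiguous buffer.
x_view = X[:, :, 0]                     # strided view into the 3-d array
x_copy = np.ascontiguousarray(x_view)   # contiguous buffer, equivalent to .copy() here
print(x_view.flags['C_CONTIGUOUS'])     # False
print(x_copy.flags['C_CONTIGUOUS'])     # True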
For my class I need to write more optimized math functions using NumPy. The problem is that my NumPy solutions are slower than native Python.
1. A function which cubes all the elements of an array and sums them
Python:
def cube(x):
    result = 0
    for i in range(len(x)):
        result += x[i] ** 3
    return result
Mine, using NumPy (15-30% slower):
def cube(x):
    it = numpy.nditer([x, None])
    for a, b in it:
        b[...] = a*a*a
    return numpy.sum(it.operands[1])
2. Some random calculation function
Python:
def calc(x):
    m = sum(x) / len(x)
    result = 0
    for i in range(len(x)):
        result += (x[i] - m)**4
    return result / len(x)
NumPy (>10x slower):
def calc(x):
    m = numpy.mean(x)
    result = 0
    for i in range(len(x)):
        result += numpy.power((x[i] - m), 4)
    return result / len(x)
I don't know how to approach this; so far I have tried random functions from NumPy.
To elaborate on what has been said in comments:
Numpy's power comes from being able to do all the looping in fast C/Fortran rather than in slow Python loops. For example, if you have an array x and you want to calculate the square of every value in that array, you could do
y = []
for value in x:
    y.append(value**2)
or even (with a list comprehension)
y = [value**2 for value in x]
but it will be much faster if you can do all the looping inside numpy with
y = x**2
(assuming x is already a numpy array).
So for your examples, the proper way to do it in numpy would be
1.
def sum_of_cubes(x):
    result = 0
    for i in range(len(x)):
        result += x[i] ** 3
    return result

def sum_of_cubes_numpy(x):
    return (x**3).sum()
2.
def calc(x):
    m = sum(x) / len(x)
    result = 0
    for i in range(len(x)):
        result += (x[i] - m)**4
    return result / len(x)

def calc_numpy(x):
    m = numpy.mean(x)  # or just x.mean()
    return numpy.sum((x - m)**4) / len(x)
Note that I've assumed that the input x is already a numpy array, not a regular Python list: if you have a list lst, you can create an array from it with arr = numpy.array(lst).
In [337]: def cube(x):
     ...:     result = 0
     ...:     for i in range(len(x)):
     ...:         result += x[i] ** 3
     ...:     return result
     ...:
nditer is not a good numpy iterator, at least not when used in Python-level code. It's really just a stepping stone toward writing compiled code. Its docs need a better disclaimer.
In [338]: def cube1(x):
     ...:     it = numpy.nditer([x, None])
     ...:     for a, b in it:
     ...:         b[...] = a*a*a
     ...:     return numpy.sum(it.operands[1])
     ...:
In [339]: cube(list(range(10)))
Out[339]: 2025
In [340]: cube1(list(range(10)))
Out[340]: 2025
In [341]: cube1(np.arange(10))
Out[341]: 2025
A more direct numpy iteration:
In [342]: def cube2(x):
     ...:     it = [a*a*a for a in x]
     ...:     return numpy.sum(it)
     ...:
The better whole-array code. Just as sum can work on the whole array, so can the power operation:
In [343]: def cube3(x):
     ...:     return numpy.sum(x**3)
     ...:
In [344]: cube2(np.arange(10))
Out[344]: 2025
In [345]: cube3(np.arange(10))
Out[345]: 2025
Doing some timings:
The list reference:
In [346]: timeit cube(list(range(1000)))
438 µs ± 9.87 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The slow nditer:
In [348]: timeit cube1(np.arange(1000))
2.8 ms ± 5.65 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
The partial numpy:
In [349]: timeit cube2(np.arange(1000))
520 µs ± 20 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
I can improve its time by passing a list instead of an array. Iteration on lists is faster.
In [352]: timeit cube2(list(range(1000)))
229 µs ± 9.53 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
But the time for a 'pure' numpy version blows all of those out of the water:
In [350]: timeit cube3(np.arange(1000))
23.6 µs ± 128 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
The general rule is that numpy methods applied to a numpy array are fastest. But if you must loop, it's usually better to use lists.
Sometimes the pure numpy approach creates a very large temporary array, and then memory management complexities can reduce performance. In such cases a modest number of iterations over a more complex task may be best.
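As a small illustration of that rule (a sketch, not part of the timings above): when a Python-level loop really is needed, converting the array to a list first with .tolist() usually speeds up the iteration, since it avoids producing a numpy scalar on every step.
arr = np.arange(1000)
def cube_loop_array(a):
    return sum(v * v * v for v in a)           # iterates over numpy scalars
def cube_loop_list(a):
    return sum(v * v * v for v in a.tolist())  # iterates over plain Python ints
assert cube_loop_array(arr) == cube_loop_list(arr) == (arr ** 3).sum()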
I'm comparing two versions of the Fibonacci routine in Python 3:
import functools

@functools.lru_cache()
def fibonacci_rec(target: int) -> int:
    if target < 2:
        return target
    res = fibonacci_rec(target - 1) + fibonacci_rec(target - 2)
    return res

def fibonacci_it(target: int) -> int:
    if target < 2:
        return target
    n_1 = 2
    n_2 = 1
    for n in range(3, target):
        new = n_2 + n_1
        n_2 = n_1
        n_1 = new
    return n_1
The first version is recursive, with memoization (thanks to lru_cache). The second is simply iterative.
I then benchmarked the two versions and I'm slightly surprised by the results:
In [5]: %timeit fibonacci_rec(1000)
82.7 ns ± 2.94 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [6]: %timeit fibonacci_it(1000)
67.5 µs ± 2.1 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
The iterative version is way slower than the recursive one. Of course the first run of the recursive version will take a lot of time (to cache all the results), and the recursive version takes more memory space (to store all the calls). But I wasn't expecting such a difference in runtime. Don't I get some overhead by calling a function, compared to just iterating over numbers and swapping variables?
As you can see, timeit invokes the function many times, to get a reliable measurement. The LRU cache of the recursive version is not being cleared between invocations, so after the first run, fibonacci_rec(1000) is just returned from the cache immediately without doing any computation.
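You can confirm this with the cache statistics; a minimal check (using the fibonacci_rec defined in the question, with a smaller argument to stay clear of the recursion limit on the first, uncached call):
fibonacci_rec(30)
print(fibonacci_rec.cache_info())   # e.g. CacheInfo(hits=28, misses=31, ...)
fibonacci_rec(30)
print(fibonacci_rec.cache_info())   # hits grow, misses don't: the call is a pure cache lookup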
As explained by @Thomas, the cache isn't cleared between invocations of fibonacci_rec (so the result of fibonacci(1000) will be cached and re-used). Here is a better benchmark:
def wrapper_rec(target: int) -> int:
    res = fibonacci_rec(target)
    fibonacci_rec.cache_clear()
    return res

def wrapper_it(target: int) -> int:
    res = fibonacci_it(target)
    # Just to make sure the comparison will be consistent
    fibonacci_rec.cache_clear()
    return res
And the results:
In [9]: %timeit wrapper_rec(1000)
445 µs ± 12.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [10]: %timeit wrapper_it(1000)
67.5 µs ± 2.46 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I have the following data for a Python program.
import numpy as np
np.random.seed(28)
n = 100000
d = 60
S = np.random.rand(n)
O = np.random.rand(n, d, d)
p = np.random.rand()
mask = np.where(S < 0.5)
And I want to run the following algorithm:
def method1():
    sum_p = np.zeros([d, d])
    sum_m = np.zeros([d, d])
    for k in range(n):
        s = S[k] * O[k]
        sum_p += s
        if S[k] < 0.5:
            sum_m -= s
    return p * sum_p + sum_m
This is a minimal example, but the code in method1() is supposed to be run many times in my project, so I would like to rewrite it in a more pythonic way, to make it as efficient as possible. I have tried with the following method:
def method2():
    sall = S[:, None, None] * O
    return p * sall.sum(axis=0) - sall[mask].sum(axis=0)
But although this method performs better for low values of d, with d=60 it does not give good times:
# To check that both methods provide the same result.
In [1]: np.sum(method1() == method2()) == d*d
Out[1]: True
In [2]: %timeit method1()
Out[2]: 801 ms ± 2.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [3]: %timeit method2()
Out[3]: 1.91 s ± 6.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Do you have any other ideas to optimize this method?
(As additional information, the variable mask is supposed to be used in other parts of my final code, so I don't need to consider it inside the code of method2 for the time computation.)
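For completeness, one variant that might be worth trying (a sketch only, not benchmarked here) uses np.einsum to form both weighted sums without materializing the full (n, d, d) temporary sall; note that O[mask] still copies the masked slices:
def method3():
    sum_all = np.einsum('k,kij->ij', S, O)              # sum over k of S[k] * O[k]
    sum_msk = np.einsum('k,kij->ij', S[mask], O[mask])  # same sum restricted to S[k] < 0.5
    return p * sum_all - sum_msk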
import numpy as np
from datetime import datetime
import math
def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)

def foo(a, b, f):
    l = range(a)
    s = datetime.now()
    for i in range(b):
        f(l)
    e = datetime.now()
    return e - s
foo(10**4, 10**5, norm)
foo(10**4, 10**5, np.linalg.norm)
foo(10**2, 10**7, norm)
foo(10**2, 10**7, np.linalg.norm)
I got the following output:
0:00:43.156278
0:00:23.923239
0:00:44.184835
0:01:00.343875
It seems like when np.linalg.norm is called many times for small-sized data, it runs slower than my norm function.
What is the cause of that?
First of all: datetime.now() isn't appropriate for measuring performance; it measures wall-clock time, and you may just pick a bad moment (for your computer) when a high-priority process runs or Python's GC kicks in, ...
There are dedicated timing functions/modules available in Python: the built-in timeit module or %timeit in IPython/Jupyter and several other external modules (like perf, ...)
Let's see what happens if I use these on your data:
import numpy as np
import math
def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)
r1 = range(10**4)
r2 = range(10**2)
%timeit norm(r1)
3.34 ms ± 150 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.linalg.norm(r1)
1.05 ms ± 3.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit norm(r2)
30.8 µs ± 1.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.linalg.norm(r2)
14.2 µs ± 313 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
It isn't slower for short iterables; it's still faster. However, note that the real advantage of NumPy functions comes if you already have NumPy arrays:
a1 = np.arange(10**4)
a2 = np.arange(10**2)
%timeit np.linalg.norm(a1)
18.7 µs ± 539 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit np.linalg.norm(a2)
4.03 µs ± 157 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Yeah, it's quite a lot faster now. 18.7 µs vs. 1.05 ms - more than 50 times faster for 10000 elements. That means most of the time of np.linalg.norm in your examples was spent converting the range to an np.array.
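A rough cross-check (machine-dependent, so no numbers quoted): timing the conversion by itself shows where that millisecond goes.
%timeit np.array(r1)                    # just converting range(10**4) to an ndarray
%timeit np.linalg.norm(np.array(r1))    # conversion + norm, comparable to the 1.05 ms above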
You are on the right track.
np.linalg.norm has quite a high overhead on small arrays. On large arrays both the jit-compiled function and np.linalg.norm run into a memory bottleneck, which is expected for a function that does simple multiplications most of the time.
If the jitted function is called from another jitted function it might get inlined, which can lead to a much larger advantage over the numpy norm function.
Example
import numba as nb
import numpy as np

@nb.njit(fastmath=True)
def norm(l):
    s = 0.
    for i in range(l.shape[0]):
        s += l[i]**2
    return np.sqrt(s)
Performance
r1 = np.array(np.arange(10**2),dtype=np.int32)
Numba:0.42µs
linalg:4.46µs
r2 = np.array(np.arange(10**4), dtype=np.int32)
Numba:8.9µs
linalg:13.4µs
r1 = np.array(np.arange(10**2),dtype=np.float64)
Numba:0.35µs
linalg:3.71µs
r2 = np.array(np.arange(10**4), dtype=np.float64)
Numba:1.4µs
linalg:5.6µs
Measuring Performance
Call the jit-compiled function once before the measurement (there is a compilation overhead on the first call).
Make sure the measurement is realistic (small arrays stay in the processor cache, so results may be too optimistic compared with what your RAM throughput allows on realistic examples). A minimal sketch of this procedure follows.
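Sketch of that procedure, using the jitted norm defined above (the array size is illustrative):
r = np.arange(10**4, dtype=np.float64)
norm(r)                        # first call triggers compilation; exclude it from the timing
%timeit norm(r)                # in IPython: now times only the compiled code
%timeit np.linalg.norm(r)      # for comparison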
Trying to imitate Excel's SUMPRODUCT function:
SUMPRODUCT(v1, v2, ..., vN) =
    v1[0]*v2[0]*...*vN[0] + v1[1]*v2[1]*...*vN[1] + ... + v1[n-1]*v2[n-1]*...*vN[n-1]
where n is the number of elements in each vector.
This is similar to dot product, but for multiple vectors. I read the very detailed discussion of the regular dot product, but I don't know how to cleanly extend it to multiple vectors. For reference, I'm copying the optimized code proposed there, which I ported (trivially) to Python 3. BTW, for dot product, the last approach still wins in P3K.
from operator import mul
from itertools import starmap

def d0(v1, v2):
    """
    d0 is Nominal approach:
    multiply/add in a loop
    """
    out = 0
    for k in range(len(v1)):
        out += v1[k] * v2[k]
    return out

def d1(v1, v2):
    """
    d1 uses a map
    """
    return sum(map(mul, v1, v2))

def d3(v1, v2):
    """
    d3 uses a starmap (itertools) to apply the mul operator on a zipped (v1, v2)
    """
    return sum(starmap(mul, zip(v1, v2)))
import operator

def sumproduct(*lists):
    return sum(reduce(operator.mul, data) for data in zip(*lists))
For Python 3, where reduce lives in functools:
import operator
import functools

def sumproduct(*lists):
    return sum(functools.reduce(operator.mul, data) for data in zip(*lists))
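A quick check of the definition (1*4*7 + 2*5*8 + 3*6*9 = 270):
print(sumproduct([1, 2, 3], [4, 5, 6], [7, 8, 9]))   # 270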
What about good old list comprehensions? (As mentioned by @Turksarama, this only works for two lists.)
sum([x * y for x, y in zip(*lists)])
Testing in Python 3.6:
In [532]: import random
In [534]: x = [random.randint(0,100) for _ in range(100)]
In [535]: y = [random.randint(0,100) for _ in range(100)]
In [536]: lists = x, y
Using list comprehensions
In [543]: %timeit(sum([x * y for x, y in zip(*lists)]))
8.73 µs ± 24.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Note that "tuple" comprehensions are slower
In [537]: %timeit(sum(x * y for x, y in zip(*lists)))
10.5 µs ± 170 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Using map
In [539]: %timeit(sum(map(lambda xi, yi: xi * yi, x, y)))
12.3 µs ± 144 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Using functools.reduce
In [542]: %timeit(sum(functools.reduce(operator.mul, data) for data in zip(*lists)))
38.6 µs ± 330 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Map the list to create a list of products, and then sum it.
This can be done in one line:
sum(map(lambda Xi, Yi: Xi * Yi, ListX, ListY))
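If more than two vectors are needed, the same map idea extends with math.prod (Python 3.8+); a sketch, not from the answer above:
import math
def sumproduct(*vectors):
    return sum(map(lambda *vals: math.prod(vals), *vectors))
print(sumproduct([1, 2], [3, 4], [5, 6]))   # 1*3*5 + 2*4*6 = 63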