I am currently trying to follow a simple example for parallelizing a loop with cython's prange.
I have installed OpenBLAS 0.2.14 with OpenMP enabled and compiled numpy 1.10.1 and scipy 0.16 from source against OpenBLAS. To test the performance of the libraries I am following this example: http://nealhughes.net/parallelcomp2/.
The functions to be timed are copied from the site:
import numpy as np
from math import exp
from libc.math cimport exp as c_exp
from cython.parallel import prange, parallel

def array_f(X):
    Y = np.zeros(X.shape)
    index = X > 0.5
    Y[index] = np.exp(X[index])
    return Y

def c_array_f(double[:] X):
    cdef int N = X.shape[0]
    cdef double[:] Y = np.zeros(N)
    cdef int i

    for i in range(N):
        if X[i] > 0.5:
            Y[i] = c_exp(X[i])
        else:
            Y[i] = 0
    return Y

def c_array_f_multi(double[:] X):
    cdef int N = X.shape[0]
    cdef double[:] Y = np.zeros(N)
    cdef int i

    with nogil, parallel():
        for i in prange(N):
            if X[i] > 0.5:
                Y[i] = c_exp(X[i])
            else:
                Y[i] = 0
    return Y
The author of the code reports following speed ups for 4 cores:
from thread_demo import *
import numpy as np
X = -1 + 2*np.random.rand(10000000)
%timeit array_f(X)
1 loops, best of 3: 222 ms per loop
%timeit c_array_f(X)
10 loops, best of 3: 87.5 ms per loop
%timeit c_array_f_multi(X)
10 loops, best of 3: 22.4 ms per loop
When I run this example on my machine (a MacBook Pro running OS X 10.10), I get the following timings for export OMP_NUM_THREADS=1:
In [1]: from bla import *
In [2]: import numpy as np
In [3]: X = -1 + 2*np.random.rand(10000000)
In [4]: %timeit c_array_f(X)
10 loops, best of 3: 89.7 ms per loop
In [5]: %timeit c_array_f_multi(X)
1 loops, best of 3: 343 ms per loop
and for OMP_NUM_THREADS=4
In [1]: from bla import *
In [2]: import numpy as np
In [3]: X = -1 + 2*np.random.rand(10000000)
In [4]: %timeit c_array_f(X)
10 loops, best of 3: 89.5 ms per loop
In [5]: %timeit c_array_f_multi(X)
10 loops, best of 3: 119 ms per loop
I see the same behavior on an openSUSE machine, hence my question: how can the author get a 4x speed-up while the same code runs slower with 4 threads on two of my systems?
The setup script for generating the *.c and *.so files is also identical to the one used in the blog:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
from Cython.Distutils import build_ext
import numpy as np

ext_modules = [
    Extension("bla",
              ["bla.pyx"],
              libraries=["m"],
              extra_compile_args=["-O3", "-ffast-math", "-march=native", "-fopenmp"],
              extra_link_args=["-fopenmp"],
              include_dirs=[np.get_include()])
]

setup(
    name="bla",
    cmdclass={"build_ext": build_ext},
    ext_modules=ext_modules
)
It would be great if someone could explain to me why this happens.
1) An important feature of prange (like any other parallel for loop) is that it allows iterations to execute out of order, i.e. in any arbitrary order. Out-of-order execution really pays off when there is no data dependency between iterations.
I do not know the internals of Cython, but I reckon that if bounds checking is not turned off, the loop cannot be executed in an arbitrary order, since whether the next iteration runs depends on whether the array goes out of bounds in the current iteration; the problem thus becomes almost serial, as threads have to wait for the result. This is one of the issues with your code. In fact Cython gives me the following warning:
warning: bla.pyx:42:16: Use boundscheck(False) for faster access
So add the following:

from cython import boundscheck, wraparound

@boundscheck(False)
@wraparound(False)
def c_array_f(double[:] X):
    # Rest of your code

@boundscheck(False)
@wraparound(False)
def c_array_f_multi(double[:] X):
    # Rest of your code
Let's now time them with your data X = -1 + 2*np.random.rand(10000000).
With Bounds Checking:
In [2]:%timeit array_f(X)
10 loops, best of 3: 189 ms per loop
In [4]:%timeit c_array_f(X)
10 loops, best of 3: 93.6 ms per loop
In [5]:%timeit c_array_f_multi(X)
10 loops, best of 3: 103 ms per loop
Without Bounds Checking:
In [9]:%timeit c_array_f(X)
10 loops, best of 3: 84.2 ms per loop
In [10]:%timeit c_array_f_multi(X)
10 loops, best of 3: 42.3 ms per loop
These results are with num_threads=4 (I have 4 logical cores) and the speed-up is around 2x. Before going further we can still shave off a few more ms by declaring our arrays to be contiguous, i.e. declaring X and Y with double[::1].
Contiguous Arrays:
In [14]:%timeit c_array_f(X)
10 loops, best of 3: 81.8 ms per loop
In [15]:%timeit c_array_f_multi(X)
10 loops, best of 3: 39.3 ms per loop
2) Even more important is job scheduling, and this is what your benchmark suffers from. By default chunk sizes are determined at compile time, i.e. schedule='static'. However, it is very likely that the environment variables (for instance OMP_SCHEDULE) and the workloads of the two machines (yours and the one from the blog post) differ, so the jobs get scheduled at runtime: dynamically, guided, and so on. Let's experiment by replacing your prange with:
for i in prange(N, schedule='static'):
    # static scheduling...

for i in prange(N, schedule='dynamic'):
    # dynamic scheduling...
Let's time them now (only the multi-threaded code):
Scheduling Effect:
In [23]:%timeit c_array_f_multi(X) # static
10 loops, best of 3: 39.5 ms per loop
In [28]:%timeit c_array_f_multi(X) # dynamic
1 loops, best of 3: 319 ms per loop
You may be able to replicate this depending on the workload on your own machine. As a side note, since you are just trying to measure the performance of parallel vs. serial code in a micro-benchmark and not in actual code, I suggest you get rid of the if-else condition, i.e. keep only Y[i] = c_exp(X[i]) within the for loop. This is because if-else statements also adversely affect branch prediction and out-of-order execution in parallel code. On my machine I get almost a 2.7x speed-up over the serial code with this change.
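To make the scheduling difference concrete, here is a plain-Python sketch (my own illustration, not from the post) of how a static schedule partitions the iteration space up front:

```python
# Static scheduling: each of p threads receives one contiguous chunk of
# the N iterations, decided before the loop runs. A dynamic schedule
# would instead hand out small chunks to threads on demand at runtime,
# which costs synchronization on every chunk request.

def static_chunks(N, p):
    """(start, stop) iteration range assigned to each of p threads."""
    return [(t * N // p, (t + 1) * N // p) for t in range(p)]

chunks = static_chunks(10000000, 4)
# Chunks are contiguous, non-overlapping, and cover all N iterations.
assert chunks[0][0] == 0 and chunks[-1][1] == 10000000
assert all(chunks[t][1] == chunks[t + 1][0] for t in range(3))
```

For a loop body this uniform, static chunks are ideal; a dynamic schedule only pays off when iteration costs vary wildly.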
Related
I am trying to understand how multiprocessing.Pool works, and I have developed a minimal example to illustrate my question. Briefly, I am using pool.map to parallelize a CPU-bound function operating on an array, following the example Dead simple example of using Multiprocessing Queue, Pool and Locking. When I follow that pattern, I get only a modest speedup with 4 cores, but if I instead manually chunk the array into num_threads pieces and then use pool.map over the chunks, I find speedup factors that vastly exceed 4x, which makes no sense to me. Details follow.
First, the function definitions.
def take_up_time():
    n = 1e3
    while n > 0:
        n -= 1

def count_even_numbers(x):
    take_up_time()
    return np.where(np.mod(x, 2) == 0, 1, 0)
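For reference, count_even_numbers maps each element to 1 if it is even and 0 if it is odd; a quick demonstration (with the take_up_time filler omitted):

```python
import numpy as np

def count_even_numbers(x):
    # 1 where x is even, 0 where odd; works elementwise on arrays
    # and on scalars alike
    return np.where(np.mod(x, 2) == 0, 1, 0)

result = count_even_numbers(np.arange(6))
assert list(result) == [1, 0, 1, 0, 1, 0]
```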
Now define the functions we'll benchmark.
First the function that runs in serial:
def serial(arr):
    return np.sum(map(count_even_numbers, arr))
Now the function that uses Pool.map in the "standard" way:
def parallelization_strategy1(arr):
    num_threads = multiprocessing.cpu_count()
    pool = multiprocessing.Pool(num_threads)
    result = pool.map(count_even_numbers, arr)
    pool.close()
    return np.sum(result)
Finally, the second strategy in which I manually chunk the array and then run Pool.map over the chunks (Splitting solution due to python numpy split array into unequal subarrays)
def split_padded(a, n):
    """Simple helper function for strategy 2"""
    padding = (-len(a)) % n
    if padding == 0:
        return np.split(a, n)
    else:
        sub_arrays = np.split(np.concatenate((a, np.zeros(padding))), n)
        sub_arrays[-1] = sub_arrays[-1][:-padding]
        return sub_arrays
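A quick sanity check of the helper (my own example values, not from the post): for an array of length 10 split 4 ways, the padding is trimmed back off the last piece:

```python
import numpy as np

def split_padded(a, n):
    # Pad a to a multiple of n, split into n equal pieces, then trim
    # the zero padding back off the final piece.
    padding = (-len(a)) % n
    if padding == 0:
        return np.split(a, n)
    sub_arrays = np.split(np.concatenate((a, np.zeros(padding))), n)
    sub_arrays[-1] = sub_arrays[-1][:-padding]
    return sub_arrays

parts = split_padded(np.arange(10), 4)
assert [len(p) for p in parts] == [3, 3, 3, 1]
# Concatenating the pieces recovers the original array.
assert np.array_equal(np.concatenate(parts), np.arange(10))
```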
def parallelization_strategy2(arr):
    num_threads = multiprocessing.cpu_count()
    sub_arrays = split_padded(arr, num_threads)
    pool = multiprocessing.Pool(num_threads)
    result = pool.map(count_even_numbers, sub_arrays)
    pool.close()
    return np.sum(np.array(result))
Here is my array input:
npts = 1e3
arr = np.arange(npts)
Now I use the IPython %timeit function to run my timings, and for 1e3 points I get the following:
serial: 10 loops, best of 3: 98.7 ms per loop
parallelization_strategy1: 10 loops, best of 3: 77.7 ms per loop
parallelization_strategy2: 10 loops, best of 3: 22 ms per loop
Since I have 4 cores, strategy 1 gives a disappointingly modest speedup, while strategy 2 is suspiciously faster than the theoretical maximum 4x speedup.
When I increase npts to 1e4, the results are even more perplexing:
serial: 1 loops, best of 3: 967 ms per loop
parallelization_strategy1: 1 loops, best of 3: 596 ms per loop
parallelization_strategy2: 10 loops, best of 3: 22.9 ms per loop
So the two sources of confusion are:
Strategy 2 is way faster than the naive theoretical limit
For some reason, %timeit with npts=1e4 only triggers 1 loop for serial and strategy 1, but 10 loops for strategy 2.
Turns out your example fits perfectly in the Pythran model. Compiling the following source code count_even.py:
#pythran export count_even(int [:])
import numpy as np

def count_even_numbers(x):
    return np.where(np.mod(x, 2) == 0, 1, 0)

def count_even(arr):
    s = 0
    #omp parallel for reduction(+:s)
    for elem in arr:
        s += count_even_numbers(elem)
    return s
with the command line (-fopenmp activates the handling of the OpenMP annotations):
pythran count_even.py -fopenmp
And running timeit over this already yields massive speedups thanks to the conversion to native code:
Without Pythran
$ python -m timeit -s 'import numpy as np; arr = np.arange(1e7, dtype=int); from count_even import count_even' 'count_even(arr)'
verryyy long, more than several minutes :-/
With Pythran, one core
$ OMP_NUM_THREADS=1 python -m timeit -s 'import numpy as np; arr = np.arange(1e7, dtype=int); from count_even import count_even' 'count_even(arr)'
100 loops, best of 3: 10.3 msec per loop
With Pythran, two cores:
$ OMP_NUM_THREADS=2 python -m timeit -s 'import numpy as np; arr = np.arange(1e7, dtype=int); from count_even import count_even' 'count_even(arr)'
100 loops, best of 3: 5.5 msec per loop
twice as fast, parallelization is working :-)
Note that OpenMP enables multi-threading, not multi-processing.
Your strategies aren't doing the same thing!
In the first strategy, Pool.map iterates over the array, so count_even_numbers is called once for every array item (since the array is one-dimensional, each item is a scalar).
The second strategy maps over a list of arrays, so count_even_numbers is called once for every array in the list.
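The call-count difference is easy to see with a sequential sketch (no Pool involved, just map, which is what Pool.map distributes across workers; the counter variant is my own instrumentation):

```python
import numpy as np

calls = 0

def count_evens_counted(x):
    # Same computation as count_even_numbers, plus a call counter
    # (a hypothetical instrumented variant for illustration).
    global calls
    calls += 1
    return np.where(np.mod(x, 2) == 0, 1, 0)

arr = np.arange(12)

# Strategy 1: mapping over the 1-D array -> one call per element,
# so the per-call Python and pickling overhead is paid 12 times.
calls = 0
list(map(count_evens_counted, arr))
assert calls == arr.size

# Strategy 2: mapping over a list of 4 chunks -> one call per chunk,
# and each call processes a whole sub-array at C speed inside numpy.
calls = 0
list(map(count_evens_counted, np.split(arr, 4)))
assert calls == 4
```

Strategy 2 is "faster than 4x" because it is not just parallelizing: it also replaces per-element Python calls with whole-array vectorized calls.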
I've been attempting to optimize a piece of python code that involves large multi-dimensional array calculations. I am getting counterintuitive results with numba. I am running on an MBP, mid 2015, 2.5 GHz i7 quadcore, OS 10.10.5, python 2.7.11. Consider the following:
import numpy as np
from numba import jit, vectorize, guvectorize
import numexpr as ne
import timeit

def add_two_2ds_naive(A, B, res):
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            res[i, j] = A[i, j] + B[i, j]

@jit
def add_two_2ds_jit(A, B, res):
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            res[i, j] = A[i, j] + B[i, j]

@guvectorize(['float64[:,:],float64[:,:],float64[:,:]'],
             '(n,m),(n,m)->(n,m)', target='cpu')
def add_two_2ds_cpu(A, B, res):
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            res[i, j] = A[i, j] + B[i, j]

@guvectorize(['(float64[:,:],float64[:,:],float64[:,:])'],
             '(n,m),(n,m)->(n,m)', target='parallel')
def add_two_2ds_parallel(A, B, res):
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            res[i, j] = A[i, j] + B[i, j]

def add_two_2ds_numexpr(A, B, res):
    res = ne.evaluate('A+B')

if __name__ == "__main__":
    np.random.seed(69)
    A = np.random.rand(10000, 100)
    B = np.random.rand(10000, 100)
    res = np.zeros((10000, 100))
I can now run timeit on the various functions:
%timeit add_two_2ds_jit(A,B,res)
1000 loops, best of 3: 1.16 ms per loop
%timeit add_two_2ds_cpu(A,B,res)
1000 loops, best of 3: 1.19 ms per loop
%timeit add_two_2ds_parallel(A,B,res)
100 loops, best of 3: 6.9 ms per loop
%timeit add_two_2ds_numexpr(A,B,res)
1000 loops, best of 3: 1.62 ms per loop
It seems that 'parallel' is not even using the majority of a single core: its usage in top shows python hitting ~40% CPU for 'parallel', ~100% for 'cpu', and ~300% for numexpr.
There are two issues with your @guvectorize implementations. The first is that you are doing all the looping inside your @guvectorize kernel, so there is actually nothing for the Numba parallel target to parallelize. Both @vectorize and @guvectorize parallelize over the broadcast dimensions of a ufunc/gufunc. Since the core signature of your gufunc is 2D and your inputs are 2D, there is only a single call to the inner function, which explains the ~100% CPU usage you saw.
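The point can be demonstrated without Numba by counting kernel invocations. Here np.vectorize is used as a slow, sequential stand-in for a ufunc (my own illustration): a scalar core signature gets one call per element, which is exactly the set of iterations a parallel target could split across threads, whereas a 2D core signature matching the full input shape would be called just once.

```python
import numpy as np

inner_calls = 0

def add_scalar(a, b):
    # Scalar kernel: broadcasting applies it to every element pair.
    global inner_calls
    inner_calls += 1
    return a + b

# otypes is given explicitly so np.vectorize makes no extra probe call.
add_elementwise = np.vectorize(add_scalar, otypes=[np.float64])

A = np.ones((3, 4))
B = np.ones((3, 4))
out = add_elementwise(A, B)

# 12 broadcast invocations: one per element. A gufunc whose core
# signature is already '(n,m),(n,m)->(n,m)' would be invoked once,
# leaving nothing for target='parallel' to distribute.
assert inner_calls == A.size
assert np.array_equal(out, A + B)
```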
The best way to write the function you have above is to use a regular ufunc:
@vectorize(['float64(float64, float64)'], target='parallel')
def add_ufunc(a, b):
    return a + b
Then on my system, I see these speeds:
%timeit add_two_2ds_jit(A,B,res)
1000 loops, best of 3: 1.87 ms per loop
%timeit add_two_2ds_cpu(A,B,res)
1000 loops, best of 3: 1.81 ms per loop
%timeit add_two_2ds_parallel(A,B,res)
The slowest run took 11.82 times longer than the fastest. This could mean that an intermediate result is being cached
100 loops, best of 3: 2.43 ms per loop
%timeit add_two_2ds_numexpr(A,B,res)
100 loops, best of 3: 2.79 ms per loop
%timeit add_ufunc(A, B, res)
The slowest run took 9.24 times longer than the fastest. This could mean that an intermediate result is being cached
1000 loops, best of 3: 2.03 ms per loop
(This is a very similar OS X system to yours, but with OS X 10.11.)
Although Numba's parallel ufunc now beats numexpr (and I see add_ufunc using about 280% CPU), it doesn't beat the simple single-threaded CPU case. I suspect that the bottleneck is due to memory (or cache) bandwidth, but I haven't done the measurements to check that.
Generally speaking, you will see much more benefit from the parallel ufunc target if you are doing more math operations per memory element (like, say, a cosine).
I am new to numba's jit. For a personal project, I need to speed up functions similar to those shown below, though slightly different, for the purpose of writing standalone examples.
import numpy as np
from numba import jit, autojit, int32, double, float64, float32, void

def f(n):
    k = 0.
    for i in range(n):
        for j in range(n):
            k += i + j

def f_with_return(n):
    k = 0.
    for i in range(n):
        for j in range(n):
            k += i + j
    return k

def f_with_arange(n):
    k = 0.
    for i in np.arange(n):
        for j in np.arange(n):
            k += i + j

def f_with_arange_and_return(n):
    k = 0.
    for i in np.arange(n):
        for j in np.arange(n):
            k += i + j
    return k

# jit decorators (note int32 must be imported for the signatures below)
jit_f = jit(void(int32))(f)
jit_f_with_return = jit(int32(int32))(f_with_return)
jit_f_with_arange = jit(void(double))(f_with_arange)
jit_f_with_arange_and_return = jit(double(double))(f_with_arange_and_return)
And the benchmarks:
%timeit f(1000)
%timeit jit_f(1000)
10 loops, best of 3: 73.9 ms per loop / 1000000 loops, best of 3: 212 ns per loop
%timeit f_with_return(1000)
%timeit jit_f_with_return(1000)
10 loops, best of 3: 74.9 ms per loop / 1000000 loops, best of 3: 220 ns per loop
I don't understand these two:
%timeit f_with_arange(1000.0)
%timeit jit_f_with_arange(1000.0)
10 loops, best of 3: 175 ms per loop / 1 loops, best of 3: 167 ms per loop
%timeit f_with_arange_and_return(1000.0)
%timeit jit_f_with_arange_and_return(1000.0)
10 loops, best of 3: 174 ms per loop / 1 loops, best of 3: 172 ms per loop
Am I not giving the jit function the correct types for the input and output? Just because the for loops now run over a numpy.arange instead of a simple range, I cannot get jit to make the function faster. What is the issue here?
Simply put, numba doesn't know how to convert np.arange into a low-level native loop, so it falls back to the object layer, which is much slower and usually about the same speed as pure Python.
A nice trick is to pass the nopython=True keyword argument to jit to see if it can compile everything without resorting to the object mode:
import numpy as np
import numba as nb

def f_with_return(n):
    k = 0.
    for i in range(n):
        for j in range(n):
            k += i + j
    return k

jit_f_with_return = nb.jit()(f_with_return)
jit_f_with_return_nopython = nb.jit(nopython=True)(f_with_return)

%timeit f_with_return(1000)
%timeit jit_f_with_return(1000)
%timeit jit_f_with_return_nopython(1000)
The last two are the same speed on my machine and much faster than the un-jitted code. The two examples you had questions about will raise an error with nopython=True, since numba cannot compile np.arange at this point.
See the following for more details:
http://numba.pydata.org/numba-doc/0.17.0/user/troubleshoot.html#the-compiled-code-is-too-slow
and for a list of supported numpy features with indications of what is and is not supported in nopython mode:
http://numba.pydata.org/numba-doc/0.17.0/reference/numpysupported.html
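As a pure-Python sanity check (no numba involved; my own illustration), the range and np.arange variants compute the same value, so rewriting np.arange as range to satisfy nopython mode is behavior-preserving:

```python
import numpy as np

def f_with_return(n):
    # range-based loops compile cleanly in nopython mode
    k = 0.
    for i in range(n):
        for j in range(n):
            k += i + j
    return k

def f_with_arange_and_return(n):
    # np.arange forced older numba into object mode
    k = 0.
    for i in np.arange(n):
        for j in np.arange(n):
            k += i + j
    return k

# Same iteration order, same values -> identical results.
assert f_with_return(100) == f_with_arange_and_return(100)
```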
I have a problem where I have to do the following calculation.
I wanted to avoid the loop version, so I vectorized it.
Why is the loop version actually faster than the vectorized version?
Does anybody have an explanation for this?
Thanks
import numpy as np
from numpy.core.umath_tests import inner1d

num_vertices = 40000
num_pca_dims = 1000
num_vert_coords = 3

a = np.arange(num_vert_coords * num_vertices * num_pca_dims).reshape((num_pca_dims, num_vertices * num_vert_coords)).T

# n-by-3
norms = np.arange(num_vertices * num_vert_coords).reshape(num_vertices, -1)

# Loop version
def slowversion(a, norms):
    res_list = []
    for c_idx in range(a.shape[1]):
        curr_col = a[:, c_idx].reshape(-1, 3)
        res = inner1d(curr_col, norms)
        res_list.append(res)
    res_list_conc = np.column_stack(res_list)
    return res_list_conc

# Fast version
def fastversion(a, norms):
    a_3 = a.reshape(num_vertices, 3, num_pca_dims)
    fast_res = np.sum(a_3 * norms[:, :, None], axis=1)
    return fast_res

res_list_conc = slowversion(a, norms)
fast_res = fastversion(a, norms)
assert np.all(res_list_conc == fast_res)
Your "slow code" is likely doing better because inner1d is a single optimized C++ function that can* make use of your BLAS implementation. Let's look at comparable timings for this operation:
np.allclose(inner1d(a[:, 0].reshape(-1, 3), norms),
            np.sum(a[:, 0].reshape(-1, 3) * norms, axis=1))
True
%timeit inner1d(a[:,0].reshape(-1,3), norms)
10000 loops, best of 3: 200 µs per loop
%timeit np.sum(a[:,0].reshape(-1,3)*norms,axis=1)
1000 loops, best of 3: 625 µs per loop
%timeit np.einsum('ij,ij->i',a[:,0].reshape(-1,3), norms)
1000 loops, best of 3: 325 µs per loop
Using inner1d is quite a bit faster than the pure numpy operations. Note that einsum is almost twice as fast as the pure numpy expression, and for good reason. As your loop is not that large and most of the FLOPs are in the inner computations, the savings on the inner operation outweigh the cost of looping.
%timeit slowversion(a,norms)
1 loops, best of 3: 991 ms per loop
%timeit fastversion(a,norms)
1 loops, best of 3: 1.28 s per loop
#Thanks to DSM for writing this out
%timeit np.einsum('ijk,ij->ik',a.reshape(num_vertices, num_vert_coords, num_pca_dims), norms)
1 loops, best of 3: 488 ms per loop
Putting this back together, we can see that the overall advantage of the "slow version" wins out; however, using an einsum implementation, which is fairly well optimized for this sort of thing, gives us a further speed increase.
*I don't see it right off in the code, but it is clearly threaded.
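As a quick correctness check (with shrunken shapes, my own values rather than the post's, so it runs instantly), the einsum expression matches the broadcast-and-sum fast version:

```python
import numpy as np

# Shrunken hypothetical shapes; the post uses num_vertices=40000,
# num_pca_dims=1000.
num_vertices, num_vert_coords, num_pca_dims = 50, 3, 7

a = np.arange(num_vert_coords * num_vertices * num_pca_dims).reshape(
    (num_pca_dims, num_vertices * num_vert_coords)).T
norms = np.arange(num_vertices * num_vert_coords).reshape(num_vertices, -1)

a_3 = a.reshape(num_vertices, num_vert_coords, num_pca_dims)

# Broadcast-and-sum allocates a full (50, 3, 7) temporary...
broadcast_res = np.sum(a_3 * norms[:, :, None], axis=1)
# ...while einsum contracts the shared coordinate axis j directly.
einsum_res = np.einsum('ijk,ij->ik', a_3, norms)

assert np.array_equal(broadcast_res, einsum_res)
```

Avoiding that intermediate product array is where einsum's advantage over the "fast version" comes from.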
TLDR: in Cython, why (or when?) is iterating over a numpy array faster than iterating over a Python list?
Generally:
I've used Cython before and was able to get tremendous speed-ups over a naive Python implementation; however, figuring out what exactly needs to be done seems non-trivial.
Consider the following 3 implementations of a sum() function.
They reside in a cython file called 'cy' (obviously, there's np.sum(), but that's beside my point...)
Naive python:
def sum_naive(A):
    s = 0
    for a in A:
        s += a
    return s

Cython with a function that expects a python list:

def sum_list(A):
    cdef unsigned long s = 0
    for a in A:
        s += a
    return s

Cython with a function that expects a numpy array:

def sum_np(np.ndarray[np.int64_t, ndim=1] A):
    cdef unsigned long s = 0
    for a in A:
        s += a
    return s
I would expect that in terms of running time, sum_np < sum_list < sum_naive, however, the following script demonstrates to the contrary (for completeness, I added np.sum() )
N = 1000000
v_np = np.array(range(N))
v_list = range(N)
%timeit cy.sum_naive(v_list)
%timeit cy.sum_naive(v_np)
%timeit cy.sum_list(v_list)
%timeit cy.sum_np(v_np)
%timeit v_np.sum()
with results:
In [18]: %timeit cyMatching.sum_naive(v_list)
100 loops, best of 3: 18.7 ms per loop
In [19]: %timeit cyMatching.sum_naive(v_np)
1 loops, best of 3: 389 ms per loop
In [20]: %timeit cyMatching.sum_list(v_list)
10 loops, best of 3: 82.9 ms per loop
In [21]: %timeit cyMatching.sum_np(v_np)
1 loops, best of 3: 1.14 s per loop
In [22]: %timeit v_np.sum()
1000 loops, best of 3: 659 us per loop
What's going on?
Why is cython+numpy slow?
P.S.
I do use
#cython: boundscheck=False
#cython: wraparound=False
There is a better way to implement this in Cython that, at least on my machine, beats np.sum, because it avoids type checking and other things that numpy normally has to do when dealing with an arbitrary array:
# cython: wraparound=False
# cython: boundscheck=False
cimport numpy as np

def sum_np(np.ndarray[np.int64_t, ndim=1] A):
    cdef unsigned long s = 0
    for a in A:
        s += a
    return s

def sum_np2(np.int64_t[::1] A):
    cdef:
        unsigned long s = 0
        size_t k
    for k in range(A.shape[0]):
        s += A[k]
    return s
And then the timings:
N = 1000000
v_np = np.array(range(N))
v_list = range(N)
%timeit sum(v_list)
%timeit sum_naive(v_list)
%timeit np.sum(v_np)
%timeit sum_np(v_np)
%timeit sum_np2(v_np)
10 loops, best of 3: 19.5 ms per loop
10 loops, best of 3: 64.9 ms per loop
1000 loops, best of 3: 1.62 ms per loop
1 loops, best of 3: 1.7 s per loop
1000 loops, best of 3: 1.42 ms per loop
You don't want to iterate over the numpy array in the Python style, but rather access elements using indexing, since that can be translated into pure C rather than relying on the Python API.
a is untyped and thus there will be lots of conversions from Python to C types and back. These can be slow.
JoshAdel correctly pointed out that instead of iterating over the array directly, you should iterate over a range of indices. Cython will convert the indexing to C, which is fast.
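The indexing pattern looks like this in plain Python (in Cython, typing A as a memoryview and k as an integer is what lets this loop compile to pure C):

```python
import numpy as np

def sum_indexed(A):
    # Index-based access: each A[k] is a simple element lookup that
    # Cython can translate to a C array access when A and k are typed,
    # unlike `for a in A`, which goes through the Python iterator
    # protocol and boxes every element.
    s = 0
    for k in range(A.shape[0]):
        s += A[k]
    return s

v = np.arange(1000000)
assert sum_indexed(v) == v.sum()
```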
Using cython -a myfile.pyx will highlight these sorts of things for you; you want all of your loop logic to be white for maximum speed.
PS: Note that np.ndarray[np.int64_t, ndim=1] is outdated and has been deprecated in favour of the faster, more general long[:] memoryview syntax.