I want to apply a "black box" Python function f to a large array arr. Additional assumptions are:
Function f is "pure", i.e. it is deterministic with no side effects.
Array arr has a small number of unique elements.
I can achieve this with a decorator that computes f for each unique element of arr as follows:
import numpy as np
from time import sleep
from functools import wraps

N = 1000
np.random.seed(0)
arr = np.random.randint(0, 10, size=(N, 2))

def vectorize_pure(f):
    @wraps(f)
    def f_vec(arr):
        uniques, ix = np.unique(arr, return_inverse=True)
        f_range = np.array([f(x) for x in uniques])
        return f_range[ix].reshape(arr.shape)
    return f_vec
@np.vectorize
def usual_vectorize(x):
    sleep(0.001)
    return x

@vectorize_pure
def pure_vectorize(x):
    sleep(0.001)
    return x
# In [47]: %timeit usual_vectorize(arr)
# 1.33 s ± 6.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# In [48]: %timeit pure_vectorize(arr)
# 13.6 ms ± 81.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
My concern is that np.unique sorts arr under the hood, which seems inefficient given the assumptions. I am looking for a practical way of implementing a similar decorator that
Takes advantage of fast numpy vectorized operations.
Does not sort the input array.
I suspect that the answer is "yes" using numba, but I would be especially interested in a numpy solution.
Also, it seems that depending on the arr datatype, numpy may use radix sort, so performance of unique may be good in some cases.
I found a workaround below, using pandas.unique; however, it still requires two passes over the original array, and pandas.unique does some extra work. I wonder if a better solution exists with pandas._libs.hashtable and cython, or anything else.
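For reference, here is a sketch of the kind of hash-based shortcut I have in mind (my own, untested in the timings above): pandas.factorize returns both the inverse codes and the unique values from a single hash-table pass and does not sort by default, so the result can be assembled with fancy indexing.

# Hedged sketch (not benchmarked): pd.factorize is hash-based and unsorted by
# default, returning inverse codes and unique values in one call.
import numpy as np
import pandas as pd
from functools import wraps

def vectorize_factorize(f):
    @wraps(f)
    def f_vec(arr):
        codes, uniques = pd.factorize(arr.ravel())
        f_uniques = np.array([f(x) for x in uniques])
        return f_uniques[codes].reshape(arr.shape)
    return f_vec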
You actually can do this in one pass over the array, but it requires that you know the dtype of the result beforehand. Otherwise you need a second pass over the elements to determine it.
Neglecting the performance (and the functools.wraps) for a moment, an implementation could look like this:
def vectorize_cached(output_dtype):
    def vectorize_cached_factory(f):
        def f_vec(arr):
            flattened = arr.ravel()
            if output_dtype is None:
                result = np.empty_like(flattened)
            else:
                result = np.empty(arr.size, output_dtype)

            cache = {}
            for idx, item in enumerate(flattened):
                res = cache.get(item)
                if res is None:
                    res = f(item)
                    cache[item] = res
                result[idx] = res

            return result.reshape(arr.shape)
        return f_vec
    return vectorize_cached_factory
It first creates the result array, then iterates over the input array. The function is called (and the result stored) whenever an element is encountered that is not already in the dictionary; otherwise it simply reuses the value stored in the dictionary.
@vectorize_cached(np.float64)
def t(x):
    print(x)
    return x + 2.5

>>> t(np.array([1,1,1,2,2,2,3,3,1,1,1]))
1
2
3
array([3.5, 3.5, 3.5, 4.5, 4.5, 4.5, 5.5, 5.5, 3.5, 3.5, 3.5])
However this isn't particularly fast because we're doing a Python loop over a NumPy array.
A Cython solution
To make it faster we can actually port this implementation to Cython (currently only supporting float32, float64, int32, int64, uint32, and uint64 but almost trivial to extend because it uses fused-types):
%%cython

cimport numpy as cnp

ctypedef fused input_type:
    cnp.float32_t
    cnp.float64_t
    cnp.uint32_t
    cnp.uint64_t
    cnp.int32_t
    cnp.int64_t

ctypedef fused result_type:
    cnp.float32_t
    cnp.float64_t
    cnp.uint32_t
    cnp.uint64_t
    cnp.int32_t
    cnp.int64_t

cpdef void vectorized_cached_impl(input_type[:] array, result_type[:] result, object func):
    cdef dict cache = {}
    cdef Py_ssize_t idx
    cdef input_type item
    for idx in range(array.size):
        item = array[idx]
        res = cache.get(item)
        if res is None:
            res = func(item)
            cache[item] = res
        result[idx] = res
With a Python decorator (the following code is not compiled with Cython):
def vectorize_cached_cython(output_dtype):
    def vectorize_cached_factory(f):
        def f_vec(arr):
            flattened = arr.ravel()
            if output_dtype is None:
                result = np.empty_like(flattened)
            else:
                result = np.empty(arr.size, output_dtype)

            vectorized_cached_impl(flattened, result, f)

            return result.reshape(arr.shape)
        return f_vec
    return vectorize_cached_factory
Again, this does only one pass and applies the function only once per unique value:
@vectorize_cached_cython(np.float64)
def t(x):
    print(x)
    return x + 2.5

>>> t(np.array([1,1,1,2,2,2,3,3,1,1,1]))
1
2
3
array([3.5, 3.5, 3.5, 4.5, 4.5, 4.5, 5.5, 5.5, 3.5, 3.5, 3.5])
Benchmark: Fast function, lots of duplicates
But the question is: Does it make sense to use Cython here?
I did a quick benchmark (without sleep) to get an idea of how different the performance is (using my library simple_benchmark):
def func_to_vectorize(x):
    return x

usual_vectorize = np.vectorize(func_to_vectorize)
pure_vectorize = vectorize_pure(func_to_vectorize)
pandas_vectorize = vectorize_with_pandas(func_to_vectorize)
cached_vectorize = vectorize_cached(None)(func_to_vectorize)
cython_vectorize = vectorize_cached_cython(None)(func_to_vectorize)

from simple_benchmark import BenchmarkBuilder

b = BenchmarkBuilder()
b.add_function(alias='usual_vectorize')(usual_vectorize)
b.add_function(alias='pure_vectorize')(pure_vectorize)
b.add_function(alias='pandas_vectorize')(pandas_vectorize)
b.add_function(alias='cached_vectorize')(cached_vectorize)
b.add_function(alias='cython_vectorize')(cython_vectorize)

@b.add_arguments('array size')
def argument_provider():
    np.random.seed(0)
    for exponent in range(6, 20):
        size = 2**exponent
        yield size, np.random.randint(0, 10, size=(size, 2))

r = b.run()
r.plot()
According to these times the ranking would be (fastest to slowest):
Cython version
Pandas solution (from another answer)
Pure solution (original post)
NumPy's vectorize
The non-Cython version using Cache
The plain NumPy solution is only a factor of 5-10 slower if the function call is very inexpensive. The pandas solution also has a much bigger constant factor, making it the slowest for very small arrays.
Benchmark: expensive function (time.sleep(0.001)), lots of duplicates
If the function call is actually expensive (as with time.sleep), the np.vectorize solution will be a lot slower; however, there is much less difference between the other solutions:
# This shows only the difference compared to the previous benchmark
def func_to_vectorize(x):
    sleep(0.001)
    return x

@b.add_arguments('array size')
def argument_provider():
    np.random.seed(0)
    for exponent in range(5, 10):
        size = 2**exponent
        yield size, np.random.randint(0, 10, size=(size, 2))
Benchmark: Fast function, few duplicates
However, if you don't have that many duplicates, plain np.vectorize is almost as fast as the pure and pandas solutions and only a bit slower than the Cython version:
# Again only the difference from the original benchmark is shown
@b.add_arguments('array size')
def argument_provider():
    np.random.seed(0)
    for exponent in range(6, 20):
        size = 2**exponent
        # The maximum value now depends on the size to ensure there
        # are fewer duplicates in the array
        yield size, np.random.randint(0, size // 10, size=(size, 2))
This problem is actually quite interesting, as it is a perfect example of a trade-off between computation time and memory consumption.
From an algorithmic perspective, finding the unique elements, and eventually computing the function only on the unique elements, can be achieved in two ways:
two- (or more-) pass approach:
find out all the unique elements
find out where the unique elements are
compute the function on the unique elements
put the computed values into the right places
single-pass approach:
compute elements on the go and cache the results
if an element is in the cache, get it from there
The algorithmic complexity depends on the size of the input N and on the number of unique elements U. The latter can also be formalized using the ratio r = U / N of unique elements.
The more-passes approaches are theoretically slower. However, they are quite competitive for small N and U.
The single-pass approaches are theoretically faster, but this also strongly depends on the caching approach and how it performs as a function of U.
Of course, no matter how important the asymptotic behavior is, the actual timings depend strongly on the constant computation-time factors.
The most relevant one in this problem is the func() computation time.
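As a rough illustration (my own simplification, not from the benchmarks below), writing T_f for the cost of one func() call and lumping the per-element overheads into constants c, the two families scale roughly as:

T_multipass  ≈ c_unique * N log N + U * T_f + c_index * N   (for the sort-based variants)
T_singlepass ≈ c_hash * N + U * T_f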
Approaches
A number of approaches can be compared:
not cached
pure(): the base function, which in this example already works on whole arrays
np.vectorized(): the standard NumPy vectorization decorator (np.vectorize())
more-passes approaches
np_unique(): the unique values are found using np.unique(), and indexing (from the np.unique() output) is used to construct the result (essentially equivalent to vectorize_pure() from here)
pd_unique(): the unique values are found using pd.unique(), and indexing (via np.searchsorted()) is used to construct the result (essentially equivalent to vectorize_with_pandas() from here)
set_unique(): the unique values are found simply using set(), and indexing (via np.searchsorted()) is used to construct the result
set_unique_msk(): the unique values are found simply using set() (like set_unique()), and looping with masking is used to construct the result (instead of indexing)
nb_unique(): the unique values and their indices are found using explicit looping with Numba JIT acceleration
cy_unique(): the unique values and their indices are found using explicit looping with Cython
single-pass approaches
cached_dict(): uses a Python dict for caching (O(1) look-up)
cached_dict_cy(): same as above, but with Cython (essentially equivalent to vectorized_cached_impl() from here)
cached_arr_cy(): uses an array for caching (O(U) look-up)
pure()
def pure(x):
    return 2 * x
np.vectorized()
import numpy as np
vectorized = np.vectorize(pure)
vectorized.__name__ = 'vectorized'
np_unique()
import functools
import numpy as np
def vectorize_np_unique(func):
    @functools.wraps(func)
    def func_vect(arr):
        uniques, ix = np.unique(arr, return_inverse=True)
        result = np.array([func(x) for x in uniques])
        return result[ix].reshape(arr.shape)
    return func_vect

np_unique = vectorize_np_unique(pure)
np_unique.__name__ = 'np_unique'
pd_unique()
import functools
import numpy as np
import pandas as pd
def vectorize_pd_unique(func):
    @functools.wraps(func)
    def func_vect(arr):
        shape = arr.shape
        arr = arr.ravel()
        uniques = np.sort(pd.unique(arr))
        f_range = np.array([func(x) for x in uniques])
        return f_range[np.searchsorted(uniques, arr)].reshape(shape)
    return func_vect

pd_unique = vectorize_pd_unique(pure)
pd_unique.__name__ = 'pd_unique'
set_unique()
import functools

import numpy as np

def vectorize_set_unique(func):
    @functools.wraps(func)
    def func_vect(arr):
        shape = arr.shape
        arr = arr.ravel()
        uniques = sorted(set(arr))
        result = np.array([func(x) for x in uniques])
        return result[np.searchsorted(uniques, arr)].reshape(shape)
    return func_vect

set_unique = vectorize_set_unique(pure)
set_unique.__name__ = 'set_unique'
set_unique_msk()
import functools

import numpy as np

def vectorize_set_unique_msk(func):
    @functools.wraps(func)
    def func_vect(arr):
        result = np.empty_like(arr)
        for x in set(arr.ravel()):
            result[arr == x] = func(x)
        return result
    return func_vect

set_unique_msk = vectorize_set_unique_msk(pure)
set_unique_msk.__name__ = 'set_unique_msk'
nb_unique()
import functools

import numpy as np
import numba as nb
import flyingcircus as fc

@nb.jit(forceobj=False, nopython=True, nogil=True, parallel=True)
def numba_unique(arr, max_uniques):
    ix = np.empty(arr.size, dtype=np.int64)
    uniques = np.empty(max_uniques, dtype=arr.dtype)
    j = 0
    for i in range(arr.size):
        found = False
        for k in nb.prange(j):
            if arr[i] == uniques[k]:
                found = True
                break
        if not found:
            uniques[j] = arr[i]
            j += 1
    uniques = np.sort(uniques[:j])
    # : get indices
    num_uniques = j
    for j in nb.prange(num_uniques):
        x = uniques[j]
        for i in nb.prange(arr.size):
            if arr[i] == x:
                ix[i] = j
    return uniques, ix

@fc.base.parametric
def vectorize_nb_unique(func, max_uniques=-1):
    @functools.wraps(func)
    def func_vect(arr):
        nonlocal max_uniques
        shape = arr.shape
        arr = arr.ravel()
        if max_uniques <= 0:
            m = arr.size
        elif isinstance(max_uniques, int):
            m = min(max_uniques, arr.size)
        elif isinstance(max_uniques, float):
            m = int(arr.size * min(max_uniques, 1.0))
        uniques, ix = numba_unique(arr, m)
        result = np.array([func(x) for x in uniques])
        return result[ix].reshape(shape)
    return func_vect

nb_unique = vectorize_nb_unique()(pure)
nb_unique.__name__ = 'nb_unique'
cy_unique()
%%cython -c-O3 -c-march=native -a
#cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True

import numpy as np
import cython as cy

cimport cython as ccy
cimport numpy as cnp

ctypedef fused arr_t:
    cnp.uint16_t
    cnp.uint32_t
    cnp.uint64_t
    cnp.int16_t
    cnp.int32_t
    cnp.int64_t
    cnp.float32_t
    cnp.float64_t
    cnp.complex64_t
    cnp.complex128_t

def sort_numpy(arr_t[:] a):
    np.asarray(a).sort()

cpdef cnp.int64_t cython_unique(
        arr_t[:] arr,
        arr_t[::1] uniques,
        cnp.int64_t[:] ix):
    cdef size_t size = arr.size
    cdef arr_t x
    cdef cnp.int64_t i, j, k, num_uniques
    j = 0
    for i in range(size):
        found = False
        for k in range(j):
            if arr[i] == uniques[k]:
                found = True
                break
        if not found:
            uniques[j] = arr[i]
            j += 1
    sort_numpy(uniques[:j])
    num_uniques = j
    for j in range(num_uniques):
        x = uniques[j]
        for i in range(size):
            if arr[i] == x:
                ix[i] = j
    return num_uniques
import functools

import numpy as np
import flyingcircus as fc

@fc.base.parametric
def vectorize_cy_unique(func, max_uniques=0):
    @functools.wraps(func)
    def func_vect(arr):
        shape = arr.shape
        arr = arr.ravel()
        if max_uniques <= 0:
            m = arr.size
        elif isinstance(max_uniques, int):
            m = min(max_uniques, arr.size)
        elif isinstance(max_uniques, float):
            m = int(arr.size * min(max_uniques, 1.0))
        ix = np.empty(arr.size, dtype=np.int64)
        uniques = np.empty(m, dtype=arr.dtype)
        num_uniques = cython_unique(arr, uniques, ix)
        uniques = uniques[:num_uniques]
        result = np.array([func(x) for x in uniques])
        return result[ix].reshape(shape)
    return func_vect

cy_unique = vectorize_cy_unique()(pure)
cy_unique.__name__ = 'cy_unique'
cached_dict()
import functools
import numpy as np
def vectorize_cached_dict(func):
    @functools.wraps(func)
    def func_vect(arr):
        result = np.empty_like(arr.ravel())
        cache = {}
        for i, x in enumerate(arr.ravel()):
            if x not in cache:
                cache[x] = func(x)
            result[i] = cache[x]
        return result.reshape(arr.shape)
    return func_vect

cached_dict = vectorize_cached_dict(pure)
cached_dict.__name__ = 'cached_dict'
cached_dict_cy()
%%cython -c-O3 -c-march=native -a
#cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True
import numpy as np
import cython as cy
cimport cython as ccy
cimport numpy as cnp
ctypedef fused arr_t:
    cnp.uint16_t
    cnp.uint32_t
    cnp.uint64_t
    cnp.int16_t
    cnp.int32_t
    cnp.int64_t
    cnp.float32_t
    cnp.float64_t
    cnp.complex64_t
    cnp.complex128_t

ctypedef fused result_t:
    cnp.uint16_t
    cnp.uint32_t
    cnp.uint64_t
    cnp.int16_t
    cnp.int32_t
    cnp.int64_t
    cnp.float32_t
    cnp.float64_t
    cnp.complex64_t
    cnp.complex128_t

cpdef void apply_cached_dict_cy(arr_t[:] arr, result_t[:] result, object func):
    cdef size_t size = arr.size
    cdef size_t i
    cdef dict cache = {}
    cdef arr_t x
    cdef result_t y
    for i in range(size):
        x = arr[i]
        if x not in cache:
            y = func(x)
            cache[x] = y
        else:
            y = cache[x]
        result[i] = y
import functools

import numpy as np
import flyingcircus as fc

@fc.base.parametric
def vectorize_cached_dict_cy(func, dtype=None):
    @functools.wraps(func)
    def func_vect(arr):
        nonlocal dtype
        shape = arr.shape
        arr = arr.ravel()
        result = np.empty_like(arr) if dtype is None else np.empty(arr.shape, dtype=dtype)
        apply_cached_dict_cy(arr, result, func)
        return np.reshape(result, shape)
    return func_vect

cached_dict_cy = vectorize_cached_dict_cy()(pure)
cached_dict_cy.__name__ = 'cached_dict_cy'
cached_arr_cy()
%%cython -c-O3 -c-march=native -a
#cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True
import numpy as np
import cython as cy
cimport cython as ccy
cimport numpy as cnp
ctypedef fused arr_t:
    cnp.uint16_t
    cnp.uint32_t
    cnp.uint64_t
    cnp.int16_t
    cnp.int32_t
    cnp.int64_t
    cnp.float32_t
    cnp.float64_t
    cnp.complex64_t
    cnp.complex128_t

ctypedef fused result_t:
    cnp.uint16_t
    cnp.uint32_t
    cnp.uint64_t
    cnp.int16_t
    cnp.int32_t
    cnp.int64_t
    cnp.float32_t
    cnp.float64_t
    cnp.complex64_t
    cnp.complex128_t

cpdef void apply_cached_arr_cy(
        arr_t[:] arr,
        result_t[:] result,
        object func,
        arr_t[:] uniques,
        result_t[:] func_uniques):
    cdef size_t i
    cdef size_t j
    cdef size_t k
    cdef size_t size = arr.size
    j = 0
    for i in range(size):
        found = False
        for k in range(j):
            if arr[i] == uniques[k]:
                found = True
                break
        if not found:
            uniques[j] = arr[i]
            func_uniques[j] = func(arr[i])
            result[i] = func_uniques[j]
            j += 1
        else:
            result[i] = func_uniques[k]
import functools

import numpy as np
import flyingcircus as fc

@fc.base.parametric
def vectorize_cached_arr_cy(func, dtype=None, max_uniques=None):
    @functools.wraps(func)
    def func_vect(arr):
        nonlocal dtype, max_uniques
        shape = arr.shape
        arr = arr.ravel()
        result = np.empty_like(arr) if dtype is None else np.empty(arr.shape, dtype=dtype)
        if max_uniques is None or max_uniques <= 0:
            max_uniques = arr.size
        elif isinstance(max_uniques, int):
            max_uniques = min(max_uniques, arr.size)
        elif isinstance(max_uniques, float):
            max_uniques = int(arr.size * min(max_uniques, 1.0))
        uniques = np.empty(max_uniques, dtype=arr.dtype)
        func_uniques = np.empty_like(arr) if dtype is None else np.empty(max_uniques, dtype=dtype)
        apply_cached_arr_cy(arr, result, func, uniques, func_uniques)
        return np.reshape(result, shape)
    return func_vect

cached_arr_cy = vectorize_cached_arr_cy()(pure)
cached_arr_cy.__name__ = 'cached_arr_cy'
Notes
The meta-decorator @parametric (inspired by here and available in FlyingCircus as flyingcircus.base.parametric) is defined as below:

def parametric(decorator):
    @functools.wraps(decorator)
    def _decorator(*_args, **_kws):
        def _wrapper(func):
            return decorator(func, *_args, **_kws)
        return _wrapper
    return _decorator
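For illustration, a minimal usage sketch (with a hypothetical scale_result decorator of my own, not from FlyingCircus) of what @parametric enables:

# @parametric turns a decorator taking (func, *options) into a decorator
# factory, so the options are passed at decoration time.
@parametric
def scale_result(func, factor=2):
    def wrapper(x):
        return factor * func(x)
    return wrapper

@scale_result(factor=10)   # same as scale_result(factor=10)(double)
def double(x):
    return 2 * x

print(double(3))  # 60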
Numba would not be able to handle the single-pass methods more efficiently than regular Python code, because passing an arbitrary callable would require Python object support to be enabled, which excludes fast JIT looping.
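To make that concrete, a hedged sketch (my own code, not in the benchmarks) of what a Numba version of the single-pass cache would look like; since func is an arbitrary Python callable, the loop has to run in object mode (forceobj=True), so little speed-up over plain Python should be expected:

import numba as nb
import numpy as np

# Object-mode loop: the dict and the call to `func` force Python semantics.
@nb.jit(forceobj=True)
def cached_loop_nb(arr, result, func):
    cache = {}
    for i in range(arr.size):
        x = arr[i]
        if x not in cache:
            cache[x] = func(x)
        result[i] = cache[x]
    return result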
Cython has the limitation that you would need to specify the expected result data type in advance. You could also tentatively guess it from the input data type, but that is not really ideal.
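A possible workaround (a sketch of my own, as an alternative to guessing from the input dtype): evaluate the function once on the first element and take the dtype of that result, reusing the value so the call is not wasted.

import numpy as np

# Hedged sketch: infer the result dtype from a single trial evaluation.
def infer_result_dtype(func, arr):
    first = func(arr.flat[0])
    return np.asarray(first).dtype, first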
Some implementations requiring temporary storage were, for simplicity, written using a statically sized NumPy array. These could be improved with dynamic arrays (in C++, for example) without much loss in speed but with a much improved memory footprint.
Benchmarks
Slow function with only 10 unique values (less than ~0.05%)
(This is essentially the use-case of the original post).
Fast function with ~0.05% unique values
Fast function with ~10% unique values
Fast function with ~20% unique values
The full benchmark code (based on this template) is available here.
Discussion and Conclusion
The fastest approach will depend on both N and U.
For slow functions, all cached approaches are faster than just vectorized(). This result should be taken with a grain of salt of course, because the slow function tested here is ~4 orders of magnitude slower than the fast function, and such slow analytical functions are not really too common.
If the function can be written in vectorized form right away, that is by far the fastest approach.
In general, cached_dict_cy() is quite memory efficient and faster than vectorized() (even for fast functions) as long as U / N is ~20% or less.
Its major drawback is that it requires Cython, which is a somewhat complex dependency, and it also requires specifying the result data type.
The np_unique() approach is faster than vectorized() (even for fast functions) as long as U / N is ~10% or less.
The pd_unique() approach is competitive only for very small U and slow func.
For very small U, hashing is marginally less beneficial and cached_arr_cy() is the fastest approach.
After poking around a bit, here is one approach that uses pandas.unique (based on hashing) instead of numpy.unique (based on sorting).
import pandas as pd

def vectorize_with_pandas(f):
    @wraps(f)
    def f_vec(arr):
        uniques = np.sort(pd.unique(arr.ravel()))
        f_range = np.array([f(x) for x in uniques])
        return f_range[
            np.searchsorted(uniques, arr.ravel())
        ].reshape(arr.shape)
    return f_vec
Giving the following performance boost:
N = 1_000_000
np.random.seed(0)
arr = np.random.randint(0, 10, size=(N, 2)).astype(float)
@vectorize_with_pandas
def pandas_vectorize(x):
    sleep(0.001)
    return x
In [33]: %timeit pure_vectorize(arr)
152 ms ± 2.34 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [34]: %timeit pandas_vectorize(arr)
76.8 ms ± 582 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Also, based on a suggestion by Warren Weckesser, you could go even faster if arr is an array of small integers, e.g. uint8. For example,
def unique_uint8(arr):
    q = np.zeros(256, dtype=int)
    q[arr.ravel()] = 1
    return np.nonzero(q)[0]

def vectorize_uint8(f):
    @wraps(f)
    def f_vec(arr):
        uniques = unique_uint8(arr)
        f_range = np.array([f(x) for x in uniques])
        return f_range[
            np.searchsorted(uniques, arr.ravel())
        ].reshape(arr.shape)
    return f_vec
The following decorator is:
10x faster than your usual_vectorize
10x slower than your vectorize_pure
not doing any sorting (to the best of my knowledge)
using numpy vectorized operations
Code:
def vectorize_pure2(f):
    @wraps(f)
    def f_vec(arr):
        tups = [tuple(x) for x in arr]
        tups_rows = dict(zip(tups, arr))
        new_arr = np.ndarray(arr.shape)
        for row in tups_rows.values():
            row_ixs = (arr == row).all(axis=1)
            new_arr[row_ixs] = f(row)
        return new_arr
    return f_vec
Performance:
@vectorize_pure2
def pure_vectorize2(x):
    sleep(0.001)
    return x
In [49]: %timeit pure_vectorize2(arr)
135 ms ± 879 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Some credit due this answer: https://stackoverflow.com/a/16992881/4960855
Related
I would like to calculate the p-values of a large 2D NumPy array of t-values. However, this takes a long time and I would like to improve its speed. I tried using GSL.
Although a single gsl_cdf_tdist_P call is much faster than scipy.stats.t.sf, when iterating over the ndarray the process is very slow. I would like help improving this.
See the code below.
GSL_Test.pyx
import cython
cimport cython
import numpy
cimport numpy
from cython_gsl cimport *

DTYPE = numpy.float32
ctypedef numpy.float32_t DTYPE_t

cdef get_gsl_p(double t, double nu):
    return (1 - gsl_cdf_tdist_P(t, nu)) * 2

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cdef get_gsl_p_for_2D_matrix(numpy.ndarray[DTYPE_t, ndim=2] t_matrix, int n):
    cdef unsigned int rows = t_matrix.shape[0]
    cdef numpy.ndarray[DTYPE_t, ndim=2] out = numpy.zeros((rows, rows), dtype='float32')
    cdef unsigned int row, col
    for row in range(rows):
        for col in range(rows):
            out[row, col] = get_gsl_p(t_matrix[row, col], n-2)
    return out

def get_gsl_p_for_2D_matrix_def(numpy.ndarray[DTYPE_t, ndim=2] t_matrix, int n):
    return get_gsl_p_for_2D_matrix(t_matrix, n)
ipython
import GSL_Test
import numpy
import scipy.stats
a = numpy.random.rand(3544, 3544).astype('float32')
%timeit -n 1 GSL_Test.get_gsl_p_for_2D_matrix(a, 25)
1 loop, best of 3: 7.87 s per loop
%timeit -n 1 scipy.stats.t.sf(a, 25)*2
1 loop, best of 3: 4.66 s per loop
UPDATE: By adding cdef declarations I was able to reduce the computation time, but it is still not lower than SciPy's. I modified the code above to include the cdef declarations.
%timeit -n 1 GSL_Test.get_gsl_p_for_2D_matrix_def(a, 25)
1 loop, best of 3: 6.73 s per loop
You can get a small gain in raw performance by using a raw special function instead of stats.t.sf. Looking at the source (https://github.com/scipy/scipy/blob/master/scipy/stats/_continuous_distns.py#L3849), you find:
def _sf(self, x, df):
    return sc.stdtr(df, -x)
So that you can use stdtr directly:
np.random.seed(1234)
x = np.random.random((3740, 374))
t1 = stats.t.sf(x, 25)
t2 = stdtr(25, -x)
1 loop, best of 3: 653 ms per loop
1 loop, best of 3: 562 ms per loop
If you do reach out for cython, the typed memoryview syntax often gives you faster code than the old ndarray syntax:
from scipy.special.cython_special cimport stdtr
from numpy cimport npy_intp
import numpy as np

def tsf(double [:, ::1] x, int df=25):
    cdef double[:, ::1] out = np.empty_like(x)
    cdef npy_intp i, j
    cdef double tmp, xx
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            xx = x[i, j]
            out[i, j] = stdtr(df, -xx)
    return np.asarray(out)
Here I'm also using the cython_special interface, which is only available in the dev version of scipy (http://scipy.github.io/devdocs/special.cython_special.html#module-scipy.special.cython_special), but you can use GSL if you want.
Finally, if you suspect a bottleneck in iterations, don't forget to inspect the output of cython -a to see if there's some python overhead in the hot loops.
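For example (the file name is hypothetical), the annotation report can be generated from the command line:

$ cython -a tsf.pyx    # writes tsf.html; highlighted lines go through the Python C-API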
I've been trying to get a loop in Python to run as fast as possible, so I've dived into NumPy and Cython.
Here's the original Python code:
def calculate_bsf_u_loop(uvel, dy, dz):
    """
    Calculate barotropic stream function from zonal velocity
    uvel (t,z,y,x)
    dy   (y,x)
    dz   (t,z,y,x)
    bsf  (t,y,x)
    """
    nt = uvel.shape[0]
    nz = uvel.shape[1]
    ny = uvel.shape[2]
    nx = uvel.shape[3]

    bsf = np.zeros((nt, ny, nx))

    for jn in range(0, nt):
        for jk in range(0, nz):
            for jj in range(0, ny):
                for ji in range(0, nx):
                    bsf[jn,jj,ji] = bsf[jn,jj,ji] + uvel[jn,jk,jj,ji] * dz[jn,jk,jj,ji] * dy[jj,ji]
    return bsf
It's just a sum over k indices. Array sizes are nt=12, nz=75, ny=559, nx=1442, so ~725 million elements.
That took 68 seconds. Now, I've done it in cython as
import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)  # turn off bounds-checking for entire function
@cython.wraparound(False)   # turn off negative index wrapping for entire function
## Use cpdef instead of def
## Define types for arrays
cpdef calculate_bsf_u_loop(np.ndarray[np.float64_t, ndim=4] uvel, np.ndarray[np.float64_t, ndim=2] dy, np.ndarray[np.float64_t, ndim=4] dz):
    """
    Calculate barotropic stream function from zonal velocity
    uvel (t,z,y,x)
    dy   (y,x)
    dz   (t,z,y,x)
    bsf  (t,y,x)
    """
    ## cdef the constants
    cdef int nt = uvel.shape[0]
    cdef int nz = uvel.shape[1]
    cdef int ny = uvel.shape[2]
    cdef int nx = uvel.shape[3]

    ## cdef loop indices
    cdef ji, jj, jk, jn

    ## cdef. Note that the cdef is followed by the cython type
    ## but the np.zeros function uses the python (numpy) type
    cdef np.ndarray[np.float64_t, ndim=3] bsf = np.zeros([nt, ny, nx], dtype=np.float64)

    for jn in xrange(0, nt):
        for jk in xrange(0, nz):
            for jj in xrange(0, ny):
                for ji in xrange(0, nx):
                    bsf[jn,jj,ji] += uvel[jn,jk,jj,ji] * dz[jn,jk,jj,ji] * dy[jj,ji]
    return bsf
and that took 49 seconds.
However, swapping the loop for
for jn in range(0, nt):
    for jk in range(0, nz):
        bsf[jn,:,:] = bsf[jn,:,:] + uvel[jn,jk,:,:] * dz[jn,jk,:,:] * dy[:,:]
only takes 0.29 seconds! Unfortunately, I can't do this in my full code.
Why is NumPy slicing so much faster than the Cython loop?
I thought NumPy was fast because it is Cython under the hood. So shouldn't they be of similar speed?
As you can see, I've disabled boundary checks in cython, and I've also compiled using "fast math". However, this only gives a tiny speedup.
Is there any way to get a loop to be of similar speed as NumPy slicing, or is looping always slower than slicing?
Any help is greatly appreciated!
/Joakim
That code is screaming for numpy.einsum's intervention, given that you are doing elementwise multiplication and then sum-reduction along the second axis of the 4D product array, which is essentially what numpy.einsum does in a highly efficient manner. To solve your case, you can use numpy.einsum in two ways -
bsf = np.einsum('ijkl,ijkl,kl->ikl',uvel,dz,dy)
bsf = np.einsum('ijkl,ijkl->ikl',uvel,dz)*dy
Runtime tests & Verify outputs -
In [100]: # Take a (1/5)th of original input shapes
...: original_shape = [12,75, 559,1442]
...: m,n,p,q = (np.array(original_shape)/5).astype(int)
...:
...: # Generate random arrays with given shapes
...: uvel = np.random.rand(m,n,p,q)
...: dy = np.random.rand(p,q)
...: dz = np.random.rand(m,n,p,q)
...:
In [101]: bsf = calculate_bsf_u_loop(uvel,dy,dz)
In [102]: print(np.allclose(bsf,np.einsum('ijkl,ijkl,kl->ikl',uvel,dz,dy)))
True
In [103]: print(np.allclose(bsf,np.einsum('ijkl,ijkl->ikl',uvel,dz)*dy))
True
In [104]: %timeit calculate_bsf_u_loop(uvel,dy,dz)
1 loops, best of 3: 2.16 s per loop
In [105]: %timeit np.einsum('ijkl,ijkl,kl->ikl',uvel,dz,dy)
100 loops, best of 3: 3.94 ms per loop
In [106]: %timeit np.einsum('ijkl,ijkl->ikl',uvel,dz)*dy
100 loops, best of 3: 3.96 ms per loop
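As a small, hedged addition (not part of the original timings): on NumPy versions that support it, the optimize flag lets einsum choose a contraction order, which can help for larger operands.

# Assumption: a NumPy version where np.einsum accepts optimize=
bsf = np.einsum('ijkl,ijkl,kl->ikl', uvel, dz, dy, optimize=True)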
I am trying to speed up a finite differences integrator for a partial differential equation using Cython. I am not sure what I need to do in order for Cython to work correctly with the numpy arrays.
The diffusion term function that I use is
import numpy

def laplacian(var, dh2):
    """ (1D array, dx^2) -> laplacian(1D array)
    periodic_laplacian_1D_4th_order
    Implementing the 4th order 1D laplacian with periodic condition
    """
    lap = numpy.zeros_like(var)
    lap[1:] = (4.0/3.0)*var[:-1]
    lap[0] = (4.0/3.0)*var[1]
    lap[:-1] += (4.0/3.0)*var[1:]
    lap[-1] += (4.0/3.0)*var[0]
    lap += (-5.0/2.0)*var
    lap[2:] += (-1.0/12.0)*var[:-2]
    lap[:2] += (-1.0/12.0)*var[-2:]
    lap[:-2] += (-1.0/12.0)*var[2:]
    lap[-2:] += (-1.0/12.0)*var[:2]
    return lap / dh2
And the rhs of the equations of the model are
from derivatives import laplacian

def dbdt(b, w, p, m, d, dx2):
    """ db/dt of Modified Klausmeier """
    return w*b**2 - m*b + laplacian(b, dx2)

def dwdt(b, w, p, m, d, dx2):
    """ dw/dt of Modified Klausmeier """
    return p - w - w*b**2 + d*laplacian(b, dx2)
How can I optimize those functions using Cython?
I have a repository on Github for my working code, that integrates the Gray-Scott model - Gray-Scott model integrator.
To use Cython efficiently, you should make all loops explicit and make sure cython -a shows as few Python calls as possible. A first try would be:
import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
def laplacian(double [::1] var, double dh2):
    """ (1D array, dx^2) -> laplacian(1D array)
    periodic_laplacian_1D_4th_order
    Implementing the 4th order 1D laplacian with periodic condition
    """
    cdef int n = var.shape[0]
    cdef double[::1] lap = np.zeros(n)
    cdef int i
    for i in range(0, n-1):
        lap[1+i] = (4.0/3.0)*var[i]
    lap[0] = (4.0/3.0)*var[1]
    for i in range(0, n-1):
        lap[i] += (4.0/3.0)*var[1+i]
    lap[n-1] += (4.0/3.0)*var[0]
    for i in range(0, n):
        lap[i] += (-5.0/2.0)*var[i]
    for i in range(0, n-2):
        lap[2+i] += (-1.0/12.0)*var[i]
    for i in range(0, 2):
        lap[i] += (-1.0/12.0)*var[n - 2 + i]
    for i in range(0, n-2):
        lap[i] += (-1.0/12.0)*var[i+2]
    for i in range(0, 2):
        lap[n-2+i] += (-1.0/12.0)*var[i]
    for i in range(0, n):
        lap[i] /= dh2
    return lap
Now this gives you:
$ python -m timeit -s 'import numpy as np; from lap import laplacian; var = np.random.rand(1000000); dh2 = .01' 'laplacian(var, dh2)'
100 loops, best of 3: 11.5 msec per loop
while the NumPy code gave:
100 loops, best of 3: 18.5 msec per loop
Note that the Cython could be further optimized by merging loops etc.
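For illustration, a hedged sketch of that loop-merging idea (my own code, assuming a fully periodic wrap; note that the original sets lap[0] from var[1], so the boundary handling is not identical):

import numpy as np
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
def laplacian_merged(double [::1] var, double dh2):
    # One pass per output element, with the stencil indices wrapped by hand.
    cdef int n = var.shape[0]
    cdef double[::1] lap = np.zeros(n)
    cdef int i, im1, im2, ip1, ip2
    for i in range(n):
        im1 = i - 1 if i >= 1 else i - 1 + n
        im2 = i - 2 if i >= 2 else i - 2 + n
        ip1 = i + 1 if i < n - 1 else i + 1 - n
        ip2 = i + 2 if i < n - 2 else i + 2 - n
        lap[i] = ((4.0/3.0) * (var[im1] + var[ip1])
                  + (-1.0/12.0) * (var[im2] + var[ip2])
                  + (-5.0/2.0) * var[i]) / dh2
    return lap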
I also tried with a customized (i.e. not committed in master) version of Pythran and without changing the original Python code, I had the same speedup as the Cython version, without the hassle of converting the code:
#pythran export laplacian(float [], float)
import numpy
def laplacian(var, dh2):
    """ (1D array, dx^2) -> laplacian(1D array)
    periodic_laplacian_1D_4th_order
    Implementing the 4th order 1D laplacian with periodic condition
    """
    lap = numpy.zeros_like(var)
    lap[1:] = (4.0/3.0)*var[:-1]
    lap[0] = (4.0/3.0)*var[1]
    lap[:-1] += (4.0/3.0)*var[1:]
    lap[-1] += (4.0/3.0)*var[0]
    lap += (-5.0/2.0)*var
    lap[2:] += (-1.0/12.0)*var[:-2]
    lap[:2] += (-1.0/12.0)*var[-2:]
    lap[:-2] += (-1.0/12.0)*var[2:]
    lap[-2:] += (-1.0/12.0)*var[:2]
    return lap / dh2
Converted with:
$ pythran lap.py -O3
And I get:
100 loops, best of 3: 11.6 msec per loop
So I guess I've figured it out, though I am not sure this is the most optimized way to do it:
import numpy as np
cimport numpy as np
cdef laplacian(np.ndarray[np.float64_t, ndim=1] var, np.float64_t dh2):
    """ (1D array, dx^2) -> laplacian(1D array)
    periodic_laplacian_1D_4th_order
    Implementing the 4th order 1D laplacian with periodic condition
    """
    lap = np.zeros_like(var)
    lap[1:] = (4.0/3.0)*var[:-1]
    lap[0] = (4.0/3.0)*var[1]
    lap[:-1] += (4.0/3.0)*var[1:]
    lap[-1] += (4.0/3.0)*var[0]
    lap += (-5.0/2.0)*var
    lap[2:] += (-1.0/12.0)*var[:-2]
    lap[:2] += (-1.0/12.0)*var[-2:]
    lap[:-2] += (-1.0/12.0)*var[2:]
    lap[-2:] += (-1.0/12.0)*var[:2]
    return lap / dh2
I have used the following setup.py
from distutils.core import setup
from Cython.Build import cythonize
setup(
    ext_modules = cythonize("derivatives_c.pyx")
)
Any advice on improving it is welcome.
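One common tweak, offered as a suggestion rather than something from the thread: when a .pyx file cimports numpy, the NumPy headers usually need to be added explicitly, e.g.:

# Suggested variant: add NumPy's include directory so `cimport numpy` compiles.
from distutils.core import setup
from Cython.Build import cythonize
import numpy

setup(
    ext_modules=cythonize("derivatives_c.pyx"),
    include_dirs=[numpy.get_include()],
)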
I am trying to implement a NaN-safe shuffling procedure in Cython that can shuffle along several axis of a multidimensional matrix of arbitrary dimension.
In the simple case of a 1D matrix, one can simply shuffle over all indices with non-NaN values using the Fisher–Yates algorithm:
def shuffle1D(np.ndarray[double, ndim=1] x):
    cdef np.ndarray[long, ndim=1] idx = np.where(~np.isnan(x))[0]
    cdef unsigned int i, j, n, m
    randint = np.random.randint
    for i in xrange(len(idx)-1, 0, -1):
        j = randint(i+1)
        n, m = idx[i], idx[j]
        x[n], x[m] = x[m], x[n]
I would like to extend this algorithm to handle large multidimensional arrays without reshape (which triggers a copy for more complicated cases not considered here). To this end, I would need to get rid of the fixed input dimension, which seems neither possible with numpy arrays nor memoryviews in Cython. Is there a workaround?
Many thanks in advance!
Thanks to the comments of @Veedrac, this answer uses more of Cython's capabilities.
A pointer array stores the memory addresses of the values along axis
Your algorithm is used, with a modification that checks for NaN values and prevents them from being shuffled
It won't create a copy for C-ordered arrays. In the case of Fortran-ordered arrays the ravel() command will return a copy. This could be improved by creating another array of double pointers to carry the values of x, probably with some cache penalty...
This code is at least one order of magnitude faster than the other answer based on slices.
from libc.stdlib cimport malloc, free
cimport numpy as np
import numpy as np
from numpy.random import randint

cdef extern from "numpy/npy_math.h":
    bint npy_isnan(double x)

def shuffleND(x, int axis=-1):
    cdef np.ndarray[double, ndim=1] v  # view of x
    cdef np.ndarray[int, ndim=1] strides
    cdef int i, j
    cdef int num_axis, pos, stride
    cdef double tmp
    cdef double **v_axis
    if axis==-1:
        axis = x.ndim-1
    shape = list(x.shape)
    num_axis = shape.pop(axis)
    v_axis = <double **>malloc(num_axis*sizeof(double *))
    for i in range(num_axis):
        v_axis[i] = <double *>malloc(1*sizeof(double))
    try:
        tmp_strides = [s//x.itemsize for s in x.strides]
        stride = tmp_strides.pop(axis)
        strides = np.array(tmp_strides, dtype=np.int32)
        v = x.ravel()
        for indices in np.ndindex(*shape):
            pos = (strides*indices).sum()
            for i in range(num_axis):
                v_axis[i] = &v[pos + i*stride]
            for i in range(num_axis-1, 0, -1):
                j = randint(i+1)
                if npy_isnan(v_axis[i][0]) or npy_isnan(v_axis[j][0]):
                    continue
                tmp = v_axis[i][0]
                v_axis[i][0] = v_axis[j][0]
                v_axis[j][0] = tmp
    finally:
        free(v_axis)
    return x
The following algorithm is based on slices, where no copy is made and it should work for any np.ndarray. The main steps are:
np.ndindex() is used to run through the different multidimensional indices, excluding the one belonging to the axis you want to shuffle
the shuffle already developed by you for the 1-D case is applied.
Code:
def shuffleND(np.ndarray x, axis=-1):
    cdef np.ndarray[long long, ndim=1] idx
    cdef unsigned int i, j, n, m
    if axis==-1:
        axis = x.ndim-1
    all_shape = list(np.shape(x))
    shape = all_shape[:]
    shape.pop(axis)
    for slices in np.ndindex(*shape):
        slices = list(slices)
        axis_slice = slices[:]
        axis_slice.insert(axis, slice(None))
        idx = np.where(~np.isnan(x[tuple(axis_slice)]))[0]
        for i in range(idx.shape[0]-1, 0, -1):
            j = randint(i+1)
            n, m = idx[i], idx[j]
            slice1 = slices[:]
            slice1.insert(axis, n)
            slice2 = slices[:]
            slice2.insert(axis, m)
            slice1 = tuple(slice1)
            slice2 = tuple(slice2)
            x[slice1], x[slice2] = x[slice2], x[slice1]
    return x
I've been working on speeding up a resampling calculation for a particle filter. As Python has many ways to speed it up, I thought I'd try them all.
I tried 4 different versions:
Numba
Python
Numpy
Cython
The code for each is below:
import numpy as np
import scipy as sp
import numba as nb
from cython_resample import cython_resample

@nb.autojit
def numba_resample(qs, xs, rands):
    n = qs.shape[0]
    lookup = np.cumsum(qs)
    results = np.empty(n)

    for j in range(n):
        for i in range(n):
            if rands[j] < lookup[i]:
                results[j] = xs[i]
                break
    return results

def python_resample(qs, xs, rands):
    n = qs.shape[0]
    lookup = np.cumsum(qs)
    results = np.empty(n)

    for j in range(n):
        for i in range(n):
            if rands[j] < lookup[i]:
                results[j] = xs[i]
                break
    return results

def numpy_resample(qs, xs, rands):
    results = np.empty_like(qs)
    lookup = sp.cumsum(qs)
    for j, key in enumerate(rands):
        i = sp.argmax(lookup > key)
        results[j] = xs[i]
    return results

#The following is the code for the cython module. It was compiled in a
#separate file, but is included here to aid in the question.
"""
import numpy as np
cimport numpy as np
cimport cython

DTYPE = np.float64
ctypedef np.float64_t DTYPE_t

@cython.boundscheck(False)
def cython_resample(np.ndarray[DTYPE_t, ndim=1] qs,
                    np.ndarray[DTYPE_t, ndim=1] xs,
                    np.ndarray[DTYPE_t, ndim=1] rands):
    if qs.shape[0] != xs.shape[0] or qs.shape[0] != rands.shape[0]:
        raise ValueError("Arrays must have same shape")
    assert qs.dtype == xs.dtype == rands.dtype == DTYPE

    cdef unsigned int n = qs.shape[0]
    cdef unsigned int i, j
    cdef np.ndarray[DTYPE_t, ndim=1] lookup = np.cumsum(qs)
    cdef np.ndarray[DTYPE_t, ndim=1] results = np.zeros(n, dtype=DTYPE)

    for j in range(n):
        for i in range(n):
            if rands[j] < lookup[i]:
                results[j] = xs[i]
                break
    return results
"""

if __name__ == '__main__':
    n = 100
    xs = np.arange(n, dtype=np.float64)
    qs = np.array([1.0/n,]*n)
    rands = np.random.rand(n)

    print "Timing Numba Function:"
    %timeit numba_resample(qs, xs, rands)
    print "Timing Python Function:"
    %timeit python_resample(qs, xs, rands)
    print "Timing Numpy Function:"
    %timeit numpy_resample(qs, xs, rands)
    print "Timing Cython Function:"
    %timeit cython_resample(qs, xs, rands)
This results in the following output:
Timing Numba Function:
1 loops, best of 3: 8.23 ms per loop
Timing Python Function:
100 loops, best of 3: 2.48 ms per loop
Timing Numpy Function:
1000 loops, best of 3: 793 µs per loop
Timing Cython Function:
10000 loops, best of 3: 25 µs per loop
Any idea why the numba code is so slow? I assumed it would be at least comparable to Numpy.
Note: if anyone has any ideas on how to speed up either the Numpy or Cython code samples, that would be nice too:) My main question is about Numba though.
The problem is that numba can't intuit the type of lookup. If you put a print nb.typeof(lookup) in your method, you'll see that numba is treating it as an object, which is slow. Normally I would just define the type of lookup in a locals dict, but I was getting a strange error. Instead I just created a little wrapper, so that I could explicitly define the input and output types.
@nb.jit(nb.f8[:](nb.f8[:]))
def numba_cumsum(x):
    return np.cumsum(x)

@nb.autojit
def numba_resample2(qs, xs, rands):
    n = qs.shape[0]
    #lookup = np.cumsum(qs)
    lookup = numba_cumsum(qs)
    results = np.empty(n)
    for j in range(n):
        for i in range(n):
            if rands[j] < lookup[i]:
                results[j] = xs[i]
                break
    return results
Then my timings are:
print "Timing Numba Function:"
%timeit numba_resample(qs, xs, rands)
print "Timing Revised Numba Function:"
%timeit numba_resample2(qs, xs, rands)
Timing Numba Function:
100 loops, best of 3: 8.1 ms per loop
Timing Revised Numba Function:
100000 loops, best of 3: 15.3 µs per loop
You can go even a little faster still if you use jit instead of autojit:
@nb.jit(nb.f8[:](nb.f8[:], nb.f8[:], nb.f8[:]))
For me that lowers it from 15.3 microseconds to 12.5 microseconds, but it's still impressive how well autojit does.
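Putting the two together, a hedged sketch of the fully typed variant referred to above (the function names are mine):

import numba as nb
import numpy as np

@nb.jit(nb.f8[:](nb.f8[:]))
def numba_cumsum(x):
    return np.cumsum(x)

# Explicit signature plus the cumsum wrapper, combining both tweaks.
@nb.jit(nb.f8[:](nb.f8[:], nb.f8[:], nb.f8[:]))
def numba_resample_typed(qs, xs, rands):
    n = qs.shape[0]
    lookup = numba_cumsum(qs)
    results = np.empty(n)
    for j in range(n):
        for i in range(n):
            if rands[j] < lookup[i]:
                results[j] = xs[i]
                break
    return results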
Faster numpy version (10x speedup compared to numpy_resample)
def numpy_faster(qs, xs, rands):
    lookup = np.cumsum(qs)
    mm = lookup[None,:] > rands[:,None]
    I = np.argmax(mm, 1)
    return xs[I]
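One further hedged suggestion for the NumPy side (my own sketch, not timed here): since lookup is a cumulative sum of non-negative weights it is sorted, so np.searchsorted replaces the O(n^2) boolean comparison with a binary search.

def numpy_searchsorted(qs, xs, rands):
    lookup = np.cumsum(qs)
    # side='right' finds the first index i with rands[j] < lookup[i]
    idx = np.searchsorted(lookup, rands, side='right')
    return xs[idx]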