What is the fastest way to calculate a function like
import numpy as np

# here x is just a number
def f(x):
    if x >= 0:
        return np.log(x+1)
    else:
        return -np.log(-x+1)
One possible way is:
# here x is an array
def loga(x):
    cond = [x >= 0, x < 0]
    choice = [np.log(x+1), -np.log(-x+1)]
    return np.select(cond, choice)
But it seems numpy goes through the array element by element.
Is there any way to use something conceptually similar to np.exp(x) to achieve better performance?
def f(x):
    return (x/abs(x)) * np.log(1+abs(x))
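One caveat worth noting (my addition, not part of the original answer): x/abs(x) divides by zero at x == 0 and yields NaN. np.sign, which later answers also use, returns 0 there, and np.log1p(abs(x)) is an equivalent, slightly more accurate way to write log(1 + abs(x)) for small arguments. A minimal sketch:

import numpy as np

def f_sign(x):
    # np.sign(0) == 0, so x == 0 maps to 0 instead of NaN
    return np.sign(x) * np.log1p(np.abs(x))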
In cases like these, masking helps -
def mask_vectorized_app(x):
    out = np.empty_like(x)
    mask = x >= 0
    mask_rev = ~mask
    out[mask] = np.log(x[mask]+1)
    out[mask_rev] = -np.log(-x[mask_rev]+1)
    return out
Introducing the numexpr module helps us further.
import numexpr as ne

def mask_vectorized_numexpr_app(x):
    out = np.empty_like(x)
    mask = x >= 0
    mask_rev = ~mask
    x_masked = x[mask]
    x_rev_masked = x[mask_rev]
    out[mask] = ne.evaluate('log(x_masked+1)')
    out[mask_rev] = ne.evaluate('-log(-x_rev_masked+1)')
    return out
Inspired by @user2685079's post and using the logarithmic property log(A**B) = B*log(A), we can push the sign into the log computation, which lets us do more of the work inside numexpr's evaluate expression, like so -
s = np.sign(x)
out = ne.evaluate('log( (abs(x)+1)**s)')
Computing the sign with a comparison gives us s another way -
s = (-2*(x<0)) + 1
Finally, we can push this into the numexpr evaluate expression -
def mask_vectorized_numexpr_app2(x):
    return ne.evaluate('log( (abs(x)+1)**((-2*(x<0))+1))')
Runtime test
Loopy approach for comparison -
def loopy_app(x):
    out = np.empty_like(x)
    for i in range(len(out)):
        out[i] = f(x[i])
    return out
Timings and verification -
In [141]: x = np.random.randn(100000)
...: print np.allclose(loopy_app(x), mask_vectorized_app(x))
...: print np.allclose(loopy_app(x), mask_vectorized_numexpr_app(x))
...: print np.allclose(loopy_app(x), mask_vectorized_numexpr_app2(x))
...:
True
True
True
In [142]: %timeit loopy_app(x)
...: %timeit mask_vectorized_numexpr_app(x)
...: %timeit mask_vectorized_numexpr_app2(x)
...:
10 loops, best of 3: 108 ms per loop
100 loops, best of 3: 3.6 ms per loop
1000 loops, best of 3: 942 µs per loop
Using @user2685079's np.sign-based solution for the first part, then timing it with and without numexpr evaluation -
In [143]: %timeit np.sign(x) * np.log(1+abs(x))
100 loops, best of 3: 3.26 ms per loop
In [144]: %timeit np.sign(x) * ne.evaluate('log(1+abs(x))')
1000 loops, best of 3: 1.66 ms per loop
Using numba
Numba gives you the power to speed up your applications with high performance functions written directly in Python. With a few annotations, array-oriented and math-heavy Python code can be just-in-time compiled to native machine instructions, similar in performance to C, C++ and Fortran, without having to switch languages or Python interpreters.
Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime, or statically (using the included pycc tool). Numba supports compilation of Python to run on either CPU or GPU hardware, and is designed to integrate with the Python scientific software stack.
from numba import njit
import numpy as np

@njit
def pir(x):
    a = np.empty_like(x)
    for i in range(a.size):
        x_ = x[i]
        _x = abs(x_)
        a[i] = np.sign(x_) * np.log(1 + _x)
    return a
Accuracy
np.isclose(pir(x), f(x)).all()
True
Timing
x = np.random.randn(100000)
# My proposal
%timeit pir(x)
1000 loops, best of 3: 881 µs per loop
# OP test
%timeit f(x)
1000 loops, best of 3: 1.26 ms per loop
# Divakar-1
%timeit mask_vectorized_numexpr_app(x)
100 loops, best of 3: 2.97 ms per loop
# Divakar-2
%timeit mask_vectorized_numexpr_app2(x)
1000 loops, best of 3: 621 µs per loop
Function definitions
from numba import njit
import numpy as np

@njit
def pir(x):
    a = np.empty_like(x)
    for i in range(a.size):
        x_ = x[i]
        _x = abs(x_)
        a[i] = np.sign(x_) * np.log(1 + _x)
    return a
import numexpr as ne

def mask_vectorized_numexpr_app(x):
    out = np.empty_like(x)
    mask = x >= 0
    mask_rev = ~mask
    x_masked = x[mask]
    x_rev_masked = x[mask_rev]
    out[mask] = ne.evaluate('log(x_masked+1)')
    out[mask_rev] = ne.evaluate('-log(-x_rev_masked+1)')
    return out
def mask_vectorized_numexpr_app2(x):
    return ne.evaluate('log( (abs(x)+1)**((-2*(x<0))+1))')

def f(x):
    return (x/abs(x)) * np.log(1+abs(x))
You can slightly improve the speed of your second solution by using np.where instead of np.select:
def loga(x):
    cond = [x >= 0, x < 0]
    choice = [np.log(x+1), -np.log(-x+1)]
    return np.select(cond, choice)

def logb(x):
    return np.where(x >= 0, np.log(x+1), -np.log(-x+1))
In [16]: %timeit loga(arange(-1000,1000))
10000 loops, best of 3: 169 µs per loop
In [17]: %timeit logb(arange(-1000,1000))
10000 loops, best of 3: 98.3 µs per loop
In [18]: np.all(loga(arange(-1000,1000)) == logb(arange(-1000,1000)))
Out[18]: True
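One caveat to add here (my note, not from the answer above): np.where evaluates both branches over the full array, so np.log(x+1) still runs for x <= -1 and can emit RuntimeWarnings even though those values are discarded. A sketch that keeps the same result while silencing the warnings:

import numpy as np

def logb_quiet(x):
    # both branches are computed eagerly; errstate only suppresses the
    # warnings from the branch whose values np.where throws away
    with np.errstate(invalid='ignore', divide='ignore'):
        return np.where(x >= 0, np.log(x + 1), -np.log(-x + 1))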
Related
I am trying to define a few well-known kernels, such as RBF, hyperbolic tangent and Fourier, for the svm.SVR method in the sklearn library. I started with RBF (I know svm has a built-in RBF kernel, but I need to define my own so I can customize it later), found some useful links, and chose this one:
import numpy as np
from sklearn.svm import SVR

def my_kernel(X, Y):
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            K[i, j] = np.exp(-1 * np.linalg.norm(x - y)**2)
    return K

clf = SVR(kernel=my_kernel)
I used this one because it works for my training data (shape [3850, 4]) and test data (shape [1200, 4]), which have different shapes. The problem is that it's too slow and I have to wait a long time for the results. I even used static typing and memoryviews in Cython, but its performance is still not as good as the default svm RBF kernel. I also found this link, which is about the same problem, but working with numpy.einsum and numexpr.evaluate is a little confusing for me. This turned out to be the fastest code there:
import numpy as np
import numexpr as ne
from scipy.linalg.blas import sgemm

def app2(X, gamma, var):
    X_norm = -gamma * np.einsum('ij,ij->i', X, X)
    return ne.evaluate('v * exp(A + B + C)', {
        'A': X_norm[:, None],
        'B': X_norm[None, :],
        'C': sgemm(alpha=2.0*gamma, a=X, b=X, trans_b=True),
        'g': gamma,
        'v': var
    })
This code only works for one input (X), and I couldn't find a way to modify it for my case (two inputs with two different sizes - the kernel function gets matrices of shape (m,n) and (l,n) and outputs (m,l), according to the svm docs). I guess I only need to replace the K[i,j] = np.exp(-1*np.linalg.norm(x-y)**2) line from the first code with the approach from the second one to speed it up. Any help would be appreciated.
You can just pipe your original kernel through Pythran.
kernel.py:
import numpy as np

#pythran export my_kernel(float64[:,:], float64[:,:])
def my_kernel(X, Y):
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            K[i, j] = np.exp(-1 * np.linalg.norm(x - y)**2)
    return K
Compilation step:
> pythran kernel.py
There's no rewriting step (you need to put the kernel in a separate file though) and the acceleration is significant: 19 times faster on my laptop, using
> python -m timeit -s 'from numpy.random import random; x = random((100,100)); y = random((100,100)); from kernel import my_kernel as k' 'k(x,y)'
to gather timings.
Three possible variants
Variants 1 and 3 make use of
(a-b)^2 = a^2 + b^2 - 2ab
as described here or here. But for special cases, such as a small second dimension, Variant 2 is also fine.
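To make the identity concrete, here is a small self-contained check (my illustration, not part of the original answer) that the expanded form reproduces directly computed pairwise squared distances:

import numpy as np

X = np.random.rand(50, 4)
Y = np.random.rand(30, 4)

# Direct pairwise squared Euclidean distances, shape (50, 30)
direct = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)

# Expanded form: |x|^2 + |y|^2 - 2*x.y, i.e. two row-norm vectors plus one matrix product
expanded = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * (X @ Y.T)

print(np.allclose(direct, expanded))   # True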
import numpy as np
import numba as nb
import numexpr as ne
from scipy.linalg.blas import sgemm

def vers_1(X, Y, gamma, var):
    X_norm = -gamma * np.einsum('ij,ij->i', X, X)
    Y_norm = -gamma * np.einsum('ij,ij->i', Y, Y)
    return ne.evaluate('v * exp(A + B + C)', {
        'A': X_norm[:, None],
        'B': Y_norm[None, :],
        'C': sgemm(alpha=2.0*gamma, a=X, b=Y, trans_b=True),
        'g': gamma,
        'v': var
    })
# Maybe easier to read, but slow if X.shape[1] gets bigger
@nb.njit(fastmath=True, parallel=True)
def vers_2(X, Y):
    K = np.empty((X.shape[0], Y.shape[0]), dtype=X.dtype)
    for i in nb.prange(X.shape[0]):
        for j in range(Y.shape[0]):
            sum = 0.
            for k in range(X.shape[1]):
                sum += (X[i, k] - Y[j, k])**2
            K[i, j] = np.exp(-1 * sum)
    return K
@nb.njit(fastmath=True, parallel=True)
def vers_3(A, B):
    dist = np.dot(A, B.T)
    TMP_A = np.empty(A.shape[0], dtype=A.dtype)
    for i in nb.prange(A.shape[0]):
        sum = 0.
        for j in range(A.shape[1]):
            sum += A[i, j]**2
        TMP_A[i] = sum
    TMP_B = np.empty(B.shape[0], dtype=A.dtype)
    for i in nb.prange(B.shape[0]):
        sum = 0.
        for j in range(B.shape[1]):
            sum += B[i, j]**2
        TMP_B[i] = sum
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            dist[i, j] = np.exp((-2. * dist[i, j] + TMP_A[i] + TMP_B[j]) * -1)
    return dist
Timings
gamma = 1.
var = 1.
X=np.random.rand(3850,4)
Y=np.random.rand(1200,4)
res_1=vers_1(X,Y, gamma, var)
res_2=vers_2(X,Y)
res_3=vers_3(X,Y)
np.allclose(res_1,res_2)
np.allclose(res_1,res_3)
%timeit res_1=vers_1(X,Y, gamma, var)
19.8 ms ± 615 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit res_2=vers_2(X,Y)
16.1 ms ± 938 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit res_3=vers_3(X,Y)
13.5 ms ± 162 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
I am trying to optimize a nested for loop in Python. Here is the code (note: generating the input data does not need to be optimized):
import numpy

Y = numpy.zeros(44100)
for i in range(len(Y)):
    Y[i] = numpy.sin(i/len(Y))
### /Data^^

Z = numpy.zeros(len(Y))
for i in range(len(Y)):
    for j in range(len(Y)):
        Z[i] += Y[j]*numpy.sinc(i-j)
How to best optimize code written for numpy arrays when nested for loops are involved?
EDIT: For clarity.
I guess this only makes sense if you multiply the argument to sinc by some factor f. But then you can use numpy.convolve:
def orig(Y, f):
    Z = numpy.zeros(len(Y))
    for i in range(len(Y)):
        for j in range(len(Y)):
            Z[i] += Y[j]*numpy.sinc((i-j)*f)
    return Z

def new(Y, f):
    sinc = np.sinc(np.arange(1-len(Y), len(Y)) * f)
    return np.convolve(Y, sinc, 'valid')
In [111]: Y=numpy.zeros(441)
...: for i in range(len(Y)):
...: Y[i]=numpy.sin(i/len(Y))
In [112]: %time Z = orig(Y, 0.9)
Wall time: 2.81 s
In [113]: %timeit Z = new(Y, 0.9)
The slowest run took 5.56 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 109 µs per loop
For really good speed, have a look at scipy.signal.fftconvolve.
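A sketch of what that could look like here (my addition, not part of the answer above; worth verifying against new() with np.allclose):

import numpy as np
from scipy.signal import fftconvolve

def new_fft(Y, f):
    # same sinc kernel as in new(), but convolved via FFT: O(n log n) instead of O(n^2)
    sinc = np.sinc(np.arange(1 - len(Y), len(Y)) * f)
    return fftconvolve(Y, sinc, 'valid')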
Let's say I have a 4-D numpy array (e.g. np.random.rand(x, y, z, t)) of data with dimensions corresponding to X, Y, Z, and time.
For each X and Y point, and at each time step, I want to find the largest index in Z for which the data is larger than some threshold n.
So my end result should be an X-by-Y-by-t array. Instances where there are no values in the Z-column greater than the threshold should be represented by a 0.
I can loop through element-by-element and construct a new array as I go, however I am operating on a very large array and it takes too long.
Unfortunately, following the example of Python builtins, numpy doesn't make it easy to get the last index, although the first is trivial. Still, something like
def slow(arr, axis, threshold):
    return (arr > threshold).cumsum(axis=axis).argmax(axis=axis)

def fast(arr, axis, threshold):
    compare = (arr > threshold)
    reordered = compare.swapaxes(axis, -1)
    flipped = reordered[..., ::-1]
    first_above = flipped.argmax(axis=-1)
    last_above = flipped.shape[-1] - first_above - 1
    are_any_above = compare.any(axis=axis)
    # patch the no-matching-element-found values
    patched = np.where(are_any_above, last_above, 0)
    return patched
gives me
In [14]: arr = np.random.random((100,100,30,50))
In [15]: %timeit a = slow(arr, axis=2, threshold=0.75)
1 loop, best of 3: 248 ms per loop
In [16]: %timeit b = fast(arr, axis=2, threshold=0.75)
10 loops, best of 3: 50.9 ms per loop
In [17]: (slow(arr, axis=2, threshold=0.75) == fast(arr, axis=2, threshold=0.75)).all()
Out[17]: True
(There's probably a slicker way to do the flipping but it's the end of day here and my brain is shutting down. :-)
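For what it's worth, one slicker-flipping variant (my sketch, not from the original answer) uses np.flip instead of swapaxes plus a reversed slice; it should give the same result:

def fast_flip(arr, axis, threshold):
    compare = arr > threshold
    # argmax on the flipped array finds the first True from the end,
    # i.e. the last True in the original orientation
    first_from_end = np.flip(compare, axis=axis).argmax(axis=axis)
    last_above = compare.shape[axis] - 1 - first_from_end
    return np.where(compare.any(axis=axis), last_above, 0)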
Here's a faster approach -
def faster(a, n, invalid_specifier):
    mask = a > n
    idx = a.shape[2] - (mask[:,:,::-1]).argmax(2) - 1
    idx[~mask[:,:,-1] & (idx == a.shape[2]-1)] = invalid_specifier
    return idx
Runtime test -
# Using @DSM's benchmarking setup
In [553]: a = np.random.random((100,100,30,50))
...: n = 0.75
...:
In [554]: out1 = faster(a,n,invalid_specifier=0)
...: out2 = fast(a, axis=2, threshold=n) # @DSM's soln
...:
In [555]: np.allclose(out1,out2)
Out[555]: True
In [556]: %timeit fast(a, axis=2, threshold=n) # @DSM's soln
10 loops, best of 3: 64.6 ms per loop
In [557]: %timeit faster(a,n,invalid_specifier=0)
10 loops, best of 3: 43.7 ms per loop
For a large set of randomly distributed points in a 2D lattice, I want to efficiently extract a subarray containing only the elements that, when approximated as indices, land on non-zero values of a separate 2D binary matrix. Currently, my script is the following:
lat_len = 100  # lattice length
input = np.random.random(size=(1000,2)) * lat_len
binary_matrix = np.random.choice(2, lat_len * lat_len).reshape(lat_len, -1)

def landed(input):
    output = []
    input_as_indices = np.floor(input)
    for i in range(len(input)):
        if binary_matrix[input_as_indices[i,0], input_as_indices[i,1]] == 1:
            output.append(input[i])
    output = np.asarray(output)
    return output
However, I suspect there must be a better way of doing this. The above script can take quite a long time to run for 10000 iterations.
You are correct. The calculation above can be done more efficiently, without a Python for loop, using advanced numpy indexing:
def landed2(input):
    idx = np.floor(input).astype(int)
    mask = binary_matrix[idx[:,0], idx[:,1]] == 1
    return input[mask]

res1 = landed(input)
res2 = landed2(input)
np.testing.assert_allclose(res1, res2)
This results in a ~150x speed-up.
It seems you can squeeze out a noticeable performance boost by working with linearly indexed arrays. Here's a vectorized implementation for our case, similar to @rth's answer, but using linear indexing -
# Get floor-ed indices
idx = np.floor(input).astype(int)

# Calculate linear indices
lin_idx = idx[:,0]*lat_len + idx[:,1]

# Index the raveled/flattened version of binary_matrix with lin_idx
# to extract and form the desired output
out = input[binary_matrix.ravel()[lin_idx] == 1]
Thus, in short we have:
out = input[binary_matrix.ravel()[idx[:,0]*lat_len + idx[:,1]] ==1]
Runtime tests -
This section compares the proposed approach in this solution against the other solution that uses row-column indexing.
Case #1 (original data sizes):
In [62]: lat_len = 100 # lattice length
...: input = np.random.random(size=(1000,2)) * lat_len
...: binary_matrix = np.random.choice(2, lat_len * lat_len).reshape(lat_len, -1)
...:
In [63]: idx = np.floor(input).astype(np.int)
In [64]: %timeit input[binary_matrix[idx[:,0], idx[:,1]] == 1]
10000 loops, best of 3: 121 µs per loop
In [65]: %timeit input[binary_matrix.ravel()[idx[:,0]*lat_len + idx[:,1]] ==1]
10000 loops, best of 3: 103 µs per loop
Case #2 (larger data sizes):
In [75]: lat_len = 1000 # lattice length
...: input = np.random.random(size=(100000,2)) * lat_len
...: binary_matrix = np.random.choice(2, lat_len * lat_len).reshape(lat_len, -1)
...:
In [76]: idx = np.floor(input).astype(np.int)
In [77]: %timeit input[binary_matrix[idx[:,0], idx[:,1]] == 1]
100 loops, best of 3: 18.5 ms per loop
In [78]: %timeit input[binary_matrix.ravel()[idx[:,0]*lat_len + idx[:,1]] ==1]
100 loops, best of 3: 13.1 ms per loop
Thus, the performance boost with this linear indexing seems to be about 20% - 30%.
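As a side note (my addition, not part of the original answer), np.ravel_multi_index computes the same linear indices without spelling out the row-major arithmetic by hand:

# Equivalent to idx[:,0]*lat_len + idx[:,1] for a (lat_len, lat_len) matrix
lin_idx = np.ravel_multi_index((idx[:, 0], idx[:, 1]), binary_matrix.shape)
out = input[binary_matrix.ravel()[lin_idx] == 1]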
I've got a function like below that evaluates a kernel between the instances x and y:
def my_hik(x, y):
    """Histogram-Intersection-Kernel"""
    summe = 0
    for i in xrange(len(x)):
        summe += min(x[i], y[i])
    return summe
    #return np.sum(np.min(np.array([[x],[y]]),0))

metrics.pairwise.pairwise_kernels(instances, metric=my_hik, n_jobs=-1)
I call it with sklearn's pairwise_kernels function. But my data (some 3000 instances with a hundred attributes each) seems to be too large, and computing one matrix takes minutes (the function is called about 9*10^6 times). Is there a way to make the function run faster?
def fast_hik(x, y):
    return np.minimum(x, y).sum()
Timings:
>>> x = np.random.randn(100)
>>> y = np.random.randn(100)
>>> %timeit my_hik(x, y)
10000 loops, best of 3: 50.3 µs per loop
>>> %timeit fast_hik(x, y)
100000 loops, best of 3: 5.55 µs per loop
Greater speedups are obtained for longer vectors:
>>> x = np.random.randn(1000)
>>> y = np.random.randn(1000)
>>> %timeit my_hik(x, y)
1000 loops, best of 3: 498 µs per loop
>>> %timeit fast_hik(x, y)
100000 loops, best of 3: 7.92 µs per loop
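As a closing note (my sketch, not part of the answer above): with ~3000 instances, the per-pair Python call overhead of pairwise_kernels dominates, so computing the whole Gram matrix with broadcasting, in row blocks to keep memory bounded, should be much faster than 9*10^6 separate calls. The block size here is an arbitrary choice:

import numpy as np

def hik_matrix(X, Y=None, block=256):
    # Full histogram-intersection Gram matrix, computed block-wise so the
    # (block, m, d) intermediate stays small
    Y = X if Y is None else Y
    K = np.empty((X.shape[0], Y.shape[0]))
    for start in range(0, X.shape[0], block):
        stop = start + block
        K[start:stop] = np.minimum(X[start:stop, None, :], Y[None, :, :]).sum(axis=-1)
    return K

If memory allows, the resulting matrix can then be passed to an estimator with kernel='precomputed' (the test-versus-train matrix goes to predict).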